In earlier sections, we introduce some basic methods for the experimental measurement of activities and activity coefficients. The Debye-Hückel theory leads to an equation for the activity coefficient of an ion in solution. The theory gives accurate values for the activity of an ion in very dilute solutions. As salt concentrations become greater, the accuracy of the Debye-Hückel model decreases. As a rough rule of thumb, the theory gives useful values for the activity coefficients of dissolved ions in solutions whose total salt concentrations are less than about 0.01 molal.${}^{2}$ The theory is based on an electrostatic model. We describe this model and present the final result. We do not, however, present the argument by which the result is obtained.
We begin by reviewing some necessary ideas from electrostatics. When point charges $q_1$ and $q_2$ are embedded in a continuous medium, the Coulomb’s law force exerted on $q_1$ by $q_2$ is
${\mathop{F}\limits^{\rightharpoonup}}_{21}=\frac{q_1q_2{\hat{r}}_{21}}{4\pi \varepsilon_0Dr^2_{12}} \nonumber$
where $\varepsilon_0$ is a constant called the permittivity of free space, and $D$ is a constant called the dielectric coefficient of the continuous medium. ${\hat{r}}_{21}$ is a unit vector in the direction from the location of $q_2$ to the location of $q_1$. When $q_1$ and $q_2$ have the same sign, the force is positive and acts to increase the separation between the charges. The force exerted on $q_2$ by $q_1$ is ${\mathop{F}\limits^{\rightharpoonup}}_{12}=-{\mathop{F}\limits^{\rightharpoonup}}_{21};$ the net force on the system of charges is
${\mathop{F}\limits^{\rightharpoonup}}_{net}={\mathop{F}\limits^{\rightharpoonup}}_{12}+{\mathop{F}\limits^{\rightharpoonup}}_{21}=0. \nonumber$
When the force is expressed in newtons, the point charges are expressed in coulombs, and distance is expressed in meters, $\varepsilon_0=8.854\times {10}^{-12}\ {\mathrm{C}^2}{\mathrm{N}}^{-1}{\mathrm{m}}^{-2}$. The dielectric coefficient is a dimensionless quantity whose value in a vacuum is unity. In liquid water at 25 ºC, $D=78.4$. We are interested in the interactions between ions whose charges are multiples of the fundamental unit of charge, $e$. We designate the charge on a proton and an electron as $e$ and $-e$, respectively, where $e=1.602\times {10}^{-19\ }\mathrm{C}$. We express the charge on a cation, say $A^{m+}$, as $z_Ae$, and that on an anion, say $B^{n-}$, as $z_Be$, where $z_A=+m>0$ and $z_B=-n<0$.
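To get a feel for the magnitudes involved, the following sketch evaluates Coulomb's law for a singly charged cation–anion pair. The 0.5 nm separation and the variable names are illustrative choices, not values taken from the text.

```python
import math

epsilon_0 = 8.854e-12   # C^2 N^-1 m^-2, permittivity of free space
e = 1.602e-19           # C, fundamental unit of charge
D_water = 78.4          # dielectric coefficient of liquid water at 25 C

def coulomb_force(z1, z2, r, D=1.0):
    """Force in newtons between point charges z1*e and z2*e separated by r meters
    in a medium of dielectric coefficient D; positive means repulsive."""
    return z1 * z2 * e**2 / (4 * math.pi * epsilon_0 * D * r**2)

r = 0.5e-9  # 0.5 nm separation (illustrative)
print(coulomb_force(+1, -1, r))           # in vacuum: about -9.2e-10 N (attractive)
print(coulomb_force(+1, -1, r, D_water))  # in water: the same force, about 78-fold weaker
```

The factor-of-$D$ reduction in a high-dielectric solvent like water is what allows oppositely charged ions to separate so readily in solution.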
The Debye-Hückel theory models the environment around a particular central ion—the ion whose activity coefficient we calculate. We assume that the interactions between the central ion and all other ions result exclusively from Coulomb’s law forces. We assume that the central ion is a hard sphere whose charge, $q_C$, is located at the center of the sphere. We let the radius of this sphere be $a_C$. Focusing on the central ion makes it possible to simplify the mathematics by fixing the origin of the coordinate system at the center of the central ion; as the central ion moves through the solution, the coordinate system moves with it. The theory develops a relationship between the activity coefficient of the central ion and the electrical work that is done when the central ion is brought into the solution from an infinite distance—where its potential energy is taken to be zero.
The theory models the interactions of the central ion with the other ions in the solution by supposing that, for every type of ion, $k$, in the solution, there is a spherically symmetric function, ${\rho }_k\left(r\right)$, which specifies the concentration of $k$-type ions at the location specified by $r$, for $r\ge a_C$. That is, we replace our model of mobile point-charge ions with a model in which charge is distributed continuously. The physical picture corresponding to this assumption is that the central ion remains discrete while all of the other ions are “ground up” into tiny charged bits that are spread smoothly—but not uniformly—throughout the solution that surrounds the central ion. The introduction of ${\rho }_k\left(r\right)$ changes our model from one involving point-charge neighbor ions—whose effects would have to be obtained by summing an impracticably large number of terms and whose locations are not well defined anyway—to one involving a mathematically continuous function. From this perspective, we adopt, for the sake of a quantitative mathematical treatment, a physical model that violates the atomic description of everything except the central ion.
It is useful to have a name for the collection of charged species around the central ion; we call it the ionic atmosphere. The ionic atmosphere occupies a microscopic region around the central ion in which ionic concentrations depart from their macroscopic-solution values. The magnitudes of these departures depend on the sign and magnitude of the charge on the central ion.
The essence of the Debye-Hückel model is that the charge of the central ion gives rise to the ionic atmosphere. To appreciate why this is so, we can imagine introducing an uncharged moiety, otherwise identical to the central ion, into the solution. In such a process, no ionic atmosphere would form. As far as long-range Coulombic forces are concerned, no work would be done.
When we imagine introducing the charged central ion into the solution in this way, Coulombic forces lead to the creation of the ionic atmosphere. Since formation of the ionic atmosphere entails the separation of charge, albeit on a microscopic scale, this process involves electrical work. Alternatively, we can say that electrical work is done when a charged ion is introduced into a salt solution and that this work is expended on the creation of the ionic atmosphere.
In the Debye-Hückel model, this electrical work is the energy change associated with the process of solvating the ion. Since the reversible, non-pressure–volume work done in a constant-temperature, constant-pressure process is also the Gibbs free energy change for that process, the work of forming the ionic atmosphere is the same thing as the Gibbs free energy change for introducing the ion into the solution.
The Debye-Hückel theory makes these ideas quantitative by finding the work done in creating the ionic atmosphere. To do this, it proves to be useful to define a quantity that we call the ionic strength of the solution. By definition, the ionic strength is
$I=\sum^n_{k=1} {z^2_k{\underline{m}}_k/{2}} \nonumber$
where the sum is over all of the ions present in the solution. The factor of $1/2$ is essentially arbitrary. We introduce it in order to make the ionic strength of a 1:1 electrolyte equal to its molality. ($z_k$ is dimensionless.)
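For concreteness, here is a minimal sketch of this sum; the molalities are arbitrary illustrative values.

```python
def ionic_strength(ions):
    """ions: iterable of (charge number z_k, molality m_k) pairs; returns I in molal units."""
    return 0.5 * sum(z**2 * m for z, m in ions)

# 0.01 molal NaCl (a 1:1 electrolyte): I equals the molality
print(ionic_strength([(+1, 0.01), (-1, 0.01)]))   # 0.01

# 0.01 molal CaCl2: I = (4*0.01 + 1*0.02)/2 = 0.03
print(ionic_strength([(+2, 0.01), (-1, 0.02)]))   # 0.03
```

The second case shows how multiply charged ions raise the ionic strength above the total salt molality.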
For the hypothetical one-molal standard state that we consider in §6, the activity coefficient for solute $C$, $\gamma_C$, is related to the chemical potential of the real substance, $\mu_C\left(P,{\underline{m}}_C\right)$, and that of a hypothetical ideal solute $C$ at the same concentration, $\mu_C\left(\mathrm{Hyp\ solute},P,{\underline{m}}_C\right)$, by
${ \ln \gamma_C\ }=\frac{\mu_C\left(P,{\underline{m}}_C\right)-\mu_C\left(\mathrm{Hyp\ solute},P,{\underline{m}}_C\right)}{RT} \nonumber$
The Debye-Hückel model equates this chemical-potential difference to the electrical work that accompanies the introduction of the central ion into a solution whose ionic strength is $I$. The final result is
${ \ln \gamma_C\ }=\frac{-z^2_Ce^2\kappa \overline{N}}{8\pi \varepsilon_0DRT\left(1+\kappa a_C\right)} \nonumber$
While it is not obvious from our discussion, the parameter
$\kappa ={\left(\frac{2e^2\overline{N}d_wI}{\varepsilon_0DkT}\right)}^{1/2} \nonumber$
characterizes the ionic atmosphere around the central ion. The quantity $d_w$ is the density of the pure solvent, which is usually water.
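As a rough numerical sketch, the code below evaluates $\kappa$, and the corresponding length $1/\kappa$, for aqueous solutions at 25 ºC. The density of water and the ionic strengths are assumed values chosen for illustration; using the molal ionic strength together with the solvent density in kg m⁻³ makes $\kappa$ come out in m⁻¹.

```python
import math

e = 1.602e-19          # C
N_bar = 6.022e23       # mol^-1, Avogadro's number
k_B = 1.381e-23        # J K^-1, Boltzmann constant
epsilon_0 = 8.854e-12  # C^2 N^-1 m^-2
D = 78.4               # dielectric coefficient of water at 25 C
d_w = 997.0            # kg m^-3, density of water at 25 C (assumed)

def kappa(I, T=298.15):
    """Debye-Hückel parameter kappa (m^-1) for ionic strength I in molal units."""
    return math.sqrt(2 * e**2 * N_bar * d_w * I / (epsilon_0 * D * k_B * T))

for I in (1e-3, 1e-2, 1e-1):
    print(I, kappa(I), 1.0 / kappa(I))   # 1/kappa is roughly 9.6, 3.0, and 0.96 nm
```

The length $1/\kappa$ is a measure of the thickness of the ionic atmosphere; it shrinks as the ionic strength grows.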
For sufficiently dilute solutions, $1+\kappa a_C\approx 1$. (See problem 14.) Introducing this approximation, substituting for $\kappa$, and dividing by 2.303 to convert to base-ten logarithms, we obtain the Debye-Hückel limiting law in the form in which it is usually presented:
$\log_{10} \gamma_C =-A_\gamma z^2_CI^{1/2} \nonumber$
where
$A_\gamma=\frac{\left(2d_w\right)^{1/2}\overline{N}^2}{2.303\left(8\pi \right)}{\left(\frac{e^2}{\varepsilon_0DRT}\right)}^{3/2} \nonumber$
For aqueous solutions at 25 ºC, $A_\gamma=0.510$.
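The value of $A_\gamma$ can be checked numerically from the expression above, and the limiting law then applied directly. The sketch below does so for water at 25 ºC, using an assumed solvent density and an illustrative ion charge and ionic strength.

```python
import math

e = 1.602e-19          # C
N_bar = 6.022e23       # mol^-1
R = 8.314              # J K^-1 mol^-1
epsilon_0 = 8.854e-12  # C^2 N^-1 m^-2
D = 78.4               # water at 25 C
d_w = 997.0            # kg m^-3, density of water at 25 C (assumed)
T = 298.15             # K

A_gamma = (math.sqrt(2 * d_w) * N_bar**2 / (2.303 * 8 * math.pi)
           * (e**2 / (epsilon_0 * D * R * T))**1.5)
print(A_gamma)                       # approximately 0.51 (kg/mol)^(1/2)

def log10_gamma(z, I):
    """Debye-Hückel limiting law for an ion of charge number z at ionic strength I (molal)."""
    return -A_gamma * z**2 * math.sqrt(I)

print(log10_gamma(2, 1e-3))          # about -0.065, so gamma is about 0.86
```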
The Debye-Hückel model finds the activity of an individual ion. In §18, we note that the activity of an individual ion cannot be determined experimentally. We introduce the mean activity coefficient, $\gamma_{\pm }$, for a strong electrolyte as a way to express the departure of a salt solution from ideal-solution behavior. Adopting the hypothetical one-molal ideal-solution state as the standard state for the salt, $A_pB_q$, we develop conventions that express the Gibbs free energy of a real salt solution and find
$\gamma_{\pm}=\left(\gamma^p_A \gamma^q_B \right)^{1/{\left(p+q\right)}}. \nonumber$
Using the Debye-Hückel limiting law values for the individual-ion activity coefficients, we find
$\log_{10} \gamma_{\pm } =\frac{p \log_{10} \gamma_A+q \log_{10} \gamma_B }{p+q}=\frac{-pA_\gamma z^2_AI^{1/2}-qA_\gamma z^2_BI^{1/2}}{p+q}=-\left(\frac{pz^2_A+qz^2_B}{p+q}\right)A_\gamma I^{1/2}=A_\gamma z_Az_BI^{1/2} \nonumber$
where we use the identity
$\frac{pz^2_A+qz^2_B}{p+q}=-z_Az_B \nonumber$
(See problem 16.12.)
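A quick numerical check of these relations, using the limiting-law value $A_\gamma = 0.510$ and an illustrative 1:2 salt ($p=1$, $z_A=+2$, $q=2$, $z_B=-1$) at an assumed ionic strength:

```python
import math

A_gamma = 0.510       # aqueous solutions at 25 C

def log10_gamma_ion(z, I):
    return -A_gamma * z**2 * math.sqrt(I)

def log10_gamma_pm(p, zA, q, zB, I):
    """Mean activity coefficient of A_p B_q from the individual-ion limiting-law values."""
    return (p * log10_gamma_ion(zA, I) + q * log10_gamma_ion(zB, I)) / (p + q)

I = 0.01  # molal, illustrative
print(log10_gamma_pm(1, +2, 2, -1, I))        # -0.102, combining the ion values
print(A_gamma * (+2) * (-1) * math.sqrt(I))   # -0.102 again, using the identity above
```

Both routes give the same number, as the identity requires.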
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/16%3A_The_Chemical_Activity_of_the_Components_of_a_Solution/16.18%3A_Activities_of_Electrolytes_-_The_Debye-Huckel_Theory.txt
|
In this chapter, we introduce several ways to measure the activities and chemical potentials of solutes. In Sections 16.1–16.6 we consider the determination of the activities and chemical potentials of solutes with measurable vapor pressures. To do so, we use the ideal behavior expressed by Raoult’s Law and Henry’s Law. In Section 16.15 we discuss the determination of solvent activity coefficients from measurements of the decrease in the freezing point of the solvent. In Section 16.7 we discuss the mathematical analysis by which we can obtain solute activity coefficients from measured solvent activity coefficients. Electrical potential measurements on electrochemical cells are an important source of thermodynamic data. In Chapter 17, we consider the use of electrochemical cells to measure the Gibbs free energy difference between two systems that contain the same substances but at different concentrations.
We define the activity of substance $A$ in a particular system such that
${\overline{G}}_A=\mu_A={\widetilde\mu}^o_A+RT{\ln \tilde{a}_A\ }. \nonumber$
In the activity standard state the chemical potential is ${\widetilde\mu}^o_A$ and the activity is unity, $\tilde{a}_A=1$. It is often convenient to choose the standard state of the solute to be the hypothetical one-molal solution, particularly for relatively dilute solutions. In the hypothetical one-molal standard state, the solute molality is unity and the environment of a solute molecule is the same as its environment at infinite dilution. The solute activity is a function of its molality, $\tilde{a}_A\left(\underline{m}_A\right)$. We let the molality of the actual solution of unit activity be $\underline{m}^o_A$. That is, we let $\tilde{a}_A\left(\underline{m}^o_A\right)=1$; consequently, we have $\mu_A\left(\underline{m}^o_A\right)={\widetilde\mu}^o_A$ even though the actual solution whose molality is $\underline{m}^o_A$ is not the standard state. To relate the solute activity and chemical potential in the actual solution to the solute molality, we must find the activity coefficient, $\gamma_A$, as a function of the solute molality,
$\gamma_A=\gamma_A\left(\underline{m}_A\right) \nonumber$
Then
$\tilde{a}_A\left(\underline{m}_A\right)=\underline{m}_A\gamma_A\left(\underline{m}_A\right) \nonumber$ and $\tilde{a}_A\left(\underline{m}^o_A\right)=\underline{m}^o_A\gamma_A\left(\underline{m}^o_A\right)=1 \nonumber$
To introduce some basic approaches to the determination of activity coefficients, let us assume for the moment that we can measure the actual chemical potential, $\mu_A$, in a series of solutions where $\underline{m}_A$ varies. We have
$\mu_A={\widetilde\mu}^o_A+RT{\ln \tilde{a}_A\ }={\widetilde\mu}^o_A+RT{\ln \underline{m}_A\ }+RT{\ln \gamma_A\ } \nonumber$
We know $\underline{m}_A$ from the preparation of the system—or by analysis. If we also know ${\widetilde\mu}^o_A$, we can calculate $\gamma_A\left(\underline{m}_A\right)$ from our experimental values of $\mu_A$. If we don’t know ${\widetilde\mu}^o_A$, we need to find it before we can proceed. To find it, we recall that
${\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} RT\ln \gamma_A }=0 \nonumber$ Then
${\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} \left(\mu_A-RT{\ln \underline{m}_A\ }\right)\ }={\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} \left({\widetilde\mu}^o_A+RT{\ln \gamma_A\ }\right)\ }={\widetilde\mu}^o_A \nonumber$
and a plot of $\left(\mu_A-RT{\ln \underline{m}_A\ }\right)$ versus $\underline{m}_A$ will intersect the line $\underline{m}_A=0$ at ${\widetilde\mu}^o_A$.
Now, in fact, we can measure only Gibbs free energy differences. In the best of circumstances what we can measure is the difference between the chemical potential of $A$ at two different concentrations. If we choose a reference molality, $\underline{m}^{\mathrm{ref}}_A$, the chemical potential difference $\Delta \mu_A\left(\underline{m}_A\right)=\mu_A\left(\underline{m}_A\right)-\mu_A\left(\underline{m}^{\mathrm{ref}}_A\right)$ is a measurable quantity. A series of such results can be displayed as a plot of $\Delta \mu_A\left(\underline{m}_A\right)$ versus $\underline{m}_A$—or any other function of $\underline{m}_A$ that proves to suit our purposes. The reference molality, $\underline{m}^{\mathrm{ref}}_A$, can be chosen for experimental convenience.
If our theoretical structure is valid, the results are represented by the equations
$\Delta \mu_A\left(\underline{m}_A\right)=\mu_A\left(\underline{m}_A\right)-\mu_A\left(\underline{m}^{\mathrm{ref}}_A\right)=RT{\ln \frac{\tilde{a}_A\left(\underline{m}_A\right)}{\tilde{a}_A\left(\underline{m}^{\mathrm{ref}}_A\right)}\ }=RT{\ln \underline{m}_A+RT{\ln \gamma_A\left(\underline{m}_A\right)\ }\ }-RT{\ln \tilde{a}_A\left(\underline{m}^{\mathrm{ref}}_A\right)\ } \nonumber$
When $\underline{m}_A=\underline{m}^o_A$, we have
$\Delta \mu_A\left(\underline{m}^o_A\right)=\mu_A\left(\underline{m}^o_A\right)-\mu_A\left(\underline{m}^{\mathrm{ref}}_A\right)={\widetilde\mu}^0_A-\mu_A\left(\underline{m}^{\mathrm{ref}}_A\right)=RT{\ln \frac{\tilde{a}_A\left(\underline{m}^o_A\right)}{\tilde{a}_A\left(\underline{m}^{\mathrm{ref}}_A\right)}\ }=-RT{\ln \tilde{a}_A\left(\underline{m}^{\mathrm{ref}}_A\right)\ } \nonumber$
and
$\Delta \mu_A\left(\underline{m}_A\right)-\Delta \mu_A\left(\underline{m}^o_A\right)=RT{\ln \underline{m}_A\ }+RT{\ln \gamma_A\left(\underline{m}_A\right)\ } \nonumber$ so that
$RT{\ln \gamma_A\left(\underline{m}_A\right)\ }=-\Delta \mu_A\left(\underline{m}^o_A\right)+\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ } \nonumber$
Since ${\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} \gamma_A\left(\underline{m}_A\right)=1\ }$, we have
$0={\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} RT{\ln \gamma_A\left(\underline{m}_A\right)\ }\ }=-\Delta \mu_A\left(\underline{m}^o_A\right)+{\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} \left[\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ }\right]\ } \nonumber$
Letting $\beta \left(\underline{m}_A\right)=\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ } \nonumber$
and ${\beta }^o={\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} \left[\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ }\right]\ } \nonumber$
we have $\Delta \mu_A\left(\underline{m}^o_A\right)={\beta }^o \nonumber$
Then
$RT{\ln \gamma_A\left(\underline{m}_A\right)\ }=-\Delta \mu_A\left(\underline{m}^o_A\right)+\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ }=-{\beta }^o+\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ } \nonumber$
so that we know both the activity coefficient, $\gamma_A=\gamma_A\left(\underline{m}_A\right)$, and the activity, $\tilde{a}_A\left(\underline{m}_A\right)=\underline{m}_A\gamma_A\left(\underline{m}_A\right)$, of $A$ as a function of its molality. Consequently, we know the value of $\Delta \mu_A\left(\underline{m}_A\right)-\Delta \mu_A\left(\underline{m}^o_A\right)$ as a function of molality. Since this difference vanishes when $\underline{m}_A=\underline{m}^o_A$, we can find $\underline{m}^0_A$ from our experimental data. Finally, the activity equation becomes
$RT{\ln \tilde{a}_A\left(\underline{m}_A\right)\ }=\Delta \mu_A\left(\underline{m}_A\right)-{\beta }^o \nonumber$
This procedure yields the activity of $A$ as a function of the solute molality. We obtain this function from measurements of $\Delta \mu_A\left(\underline{m}_A\right)=\mu_A\left(\underline{m}_A\right)-\mu_A\left(\underline{m}^{\mathrm{ref}}_A\right)$. These measurements do not yield a value for $\mu_A\left(\underline{m}_A\right)$; what we obtain from our analysis is an alternative expression,
$RT{\ln \frac{\tilde{a}_A\left(\underline{m}_A\right)}{\tilde{a}_A\left(\underline{m}^{\mathrm{ref}}_A\right)}\ } \nonumber$
for the chemical potential difference, $\mu_A\left(\underline{m}_A\right)-\mu_A\left(\underline{m}^{ref}_A\right)$ between two states of the same substance. $\mu_A\left(\underline{m}_A\right)$ is the difference between the chemical potential of solute A at $\underline{m}_A$ and the chemical potential of its constituent elements in their standard states at the same temperature. To find this difference is a separate experimental undertaking. If, however, we can find $\mu_A\left(\underline{m}^*_A\right)$ for some $\underline{m}^*_A$, our activity equation yields ${\widetilde\mu}^o_A$ as
${\widetilde\mu}^0_A=\mu_A\left(\underline{m}^*_A\right)-RT{\ln \tilde{a}_A\left(\underline{m}^*_A\right)\ } \nonumber$
This analysis of the $\Delta \mu_A\left(\underline{m}_A\right)$ data assumes that we can find ${\beta }^o={\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} \left[\Delta \mu_A\left(\underline{m}_A\right)-RT{\ln \underline{m}_A\ }\right]\ }$. To find an accurate value for ${\beta }^o$, it is important to collect data for $\Delta \mu_A\left(\underline{m}_A\right)$ at the lowest possible values of $\underline{m}_A$. Inevitably, however, the experimental error in $\Delta \mu_A\left(\underline{m}_A\right)$ increases as $\underline{m}_A$ decreases. Our theory requires that $\beta \left(\underline{m}_A\right)={\beta }^o+f\left(\underline{m}_A\right)$, where ${\mathop{\mathrm{lim}}_{\underline{m}_A\to 0} f\left(\underline{m}_A\right)=0\ }$, so that the graph of $\beta \left(\underline{m}_A\right)$ versus $f\left(\underline{m}_A\right)$ has an intercept at ${\beta }^o$. Accurate extrapolation of the data to the intercept at $\underline{m}_A=0$ is greatly facilitated if we can choose $f\left(\underline{m}_A\right)$ so that the graph is linear. In practice, the increased experimental error in $\beta \left(\underline{m}_A\right)$ at the lowest values of $\underline{m}_A$ causes the uncertainty in the extrapolated value of ${\beta }^o$ for a given choice of $f\left(\underline{m}_A\right)$ to be similar to the range of ${\beta }^o$ values estimated using different functions. For some $p$ in the range $0.5 < p < 2$, letting $f \left( \underline{m}_A \right) = \underline{m}_A^p$ often provides a fit that is as satisfactorily linear as the experimental uncertainty can justify.
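The extrapolation described here is easy to mock up. In the sketch below, synthetic $\Delta \mu_A$ data are generated from an invented activity-coefficient model and an arbitrary offset, $\beta \left(\underline{m}_A\right)$ is formed, and a straight-line fit of $\beta$ against $\underline{m}_A^p$ is extrapolated to $\underline{m}_A=0$ to recover ${\beta }^o$. The data, the model, and the exponent are all assumptions made purely for illustration.

```python
import numpy as np

R, T = 8.314, 298.15

def gamma_model(m):
    """Invented activity-coefficient model used only to generate synthetic data."""
    return np.exp(-1.2 * np.sqrt(m))

m = np.array([0.005, 0.01, 0.02, 0.05, 0.1, 0.2])   # molalities (illustrative)
beta_o_true = -2500.0                                # J/mol, arbitrary offset
delta_mu = beta_o_true + R * T * np.log(m * gamma_model(m))   # synthetic Delta mu_A(m)

# Form beta(m) = Delta mu_A(m) - RT ln m and extrapolate against f(m) = m^p
beta = delta_mu - R * T * np.log(m)
p = 0.5                                              # trial exponent; adjust until the plot is linear
slope, intercept = np.polyfit(m**p, beta, 1)
print(intercept)                                     # estimate of beta_o; close to -2500 here
```

With real data, one would vary $p$ and judge linearity against the experimental scatter, exactly as the paragraph above describes.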
Finally, let us contrast this analysis to the analysis of chemical equilibrium that we discuss briefly in Chapter 15. In the present analysis, we use an extrapolation to infinite dilution to derive activity values from the difference between the chemical potentials of the same substance at different concentrations. In the chemical equilibrium analysis for $aA+bB\rightleftharpoons cC+dD$, we have
$\Delta_r\mu =\Delta_r{\widetilde\mu}^o+RT{\ln \frac{\tilde{a}^c_C\tilde{a}^d_D}{\tilde{a}^a_A\tilde{a}^b_B}\ }=\Delta_r{\widetilde\mu}^o+RT{\ln \frac{\underline{m}^c_C\underline{m}^d_D}{\underline{m}^a_A\underline{m}^b_B}\ }+RT{\ln \frac{\gamma^c_C\gamma^d_D}{\gamma^a_A\gamma^b_B}\ } \nonumber$
When the system is at equilibrium, we have $\Delta_r\mu =0$. Since ${\mathop{\mathrm{lim}}_{\underline{m}_i\to 0} \gamma_i=1\ }$, we have, in the limit that all of the concentrations go to zero in an equilibrium system,
$0={\mathop{\mathrm{lim}}_{\underline{m}_i\to 0} RT{\ln \frac{\gamma^c_C\gamma^d_D}{\gamma^a_A\gamma^b_B}\ }\ }=\Delta_r{\widetilde\mu}^o+{\mathop{\mathrm{lim}}_{\underline{m}_i\to 0} RT{\ln \frac{\underline{m}^c_C\underline{m}^d_D}{\underline{m}^a_A\underline{m}^b_B}\ }\ } \nonumber$
Letting
$K_c=\frac{\underline{m}^c_C\underline{m}^d_D}{\underline{m}^a_A\underline{m}^b_B} \nonumber$
we have
$\Delta_r{\widetilde\mu}^o=-RT{\mathop{\mathrm{lim}}_{\underline{m}_i\to 0} {\ln K_c\ }\ } \nonumber$
Since $\Delta_r\mu =0$ whenever the system is at equilibrium, measurement of $K_c$ for any equilibrium state of the reaction yields the corresponding ratio of activity coefficients: $RT{\ln \frac{\gamma^c_C\gamma^d_D}{\gamma^a_A\gamma^b_B}\ }=-\Delta_r{\widetilde\mu}^o-RT{\ln K_c\ } \nonumber$
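As a small numerical sketch of this last relation: given a value of $\Delta_r{\widetilde\mu}^o$ and a measured concentration quotient $K_c$ at equilibrium, the activity-coefficient ratio follows at once. Both input numbers below are invented for illustration.

```python
import math

R, T = 8.314, 298.15
delta_mu_o = -10.0e3   # J/mol, assumed standard Gibbs free energy change of reaction
K_c = 75.0             # concentration quotient measured at equilibrium (assumed)

# RT ln(gamma_C^c gamma_D^d / (gamma_A^a gamma_B^b)) = -delta_mu_o - RT ln K_c
rt_ln_ratio = -delta_mu_o - R * T * math.log(K_c)
print(rt_ln_ratio)                          # J/mol
print(math.exp(rt_ln_ratio / (R * T)))      # the activity-coefficient ratio itself
```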
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/16%3A_The_Chemical_Activity_of_the_Components_of_a_Solution/16.19%3A_Finding_Solute_Activity_Using_the_Hypothetical_One-mola.txt
|
1. At 100 C, the enthalpy of vaporization of water is $40.657\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$. Calculate the boiling-point elevation constant for water when the solute concentration is expressed in molality units.
2. At 0 C, the enthalpy of fusion of water is $6.009\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$. Calculate the freezing-point depression constant for water when the solute concentration is expressed in molality units.
3. A solution is prepared by dissolving 20.0 g of ethylene glycol (1,2-ethanediol) in 1 kg of water. Estimate the boiling point and the freezing point of this solution.
4. A biopolymer has a molecular weight of 250,000 dalton. At 300 K, estimate the osmotic pressure of a solution that contains 1 g of this substance in 10 mL of water.
5. Cyclohexanol melts at 25.46 C; the enthalpy of fusion is $1.76\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$. Estimate the freezing-point depression constant when the solute concentration is expressed as a mole fraction and when it is expressed in molality units. A solution is prepared by mixing 1 g of ethylene glycol with 50 g of liquid cyclohexanol. How much is the freezing point of this solution depressed relative to the freezing point of pure cyclohexanol?
6. Freezing-point depression data for numerous solutes in aqueous solution${}^{3}$ are reproduced below. Calculate the freezing-point depression, $-{\Delta T_{fp}}/{\underline{m}}$, for each of these solutes. Compare these values to the freezing-point depression constant that you calculated in problem 2. Explain any differences.
| Solute | molality | $-\Delta T_{fp}$, K |
|---|---|---|
| acetone | 0.087 | 0.16 |
| ethanol | 0.109 | 0.20 |
| ethylene glycol | 0.081 | 0.15 |
| ammonia | 0.295 | 0.55 |
| glycerol | 0.110 | 0.18 |
| lithium chloride | 0.119 | 0.42 |
| nitric acid | 0.080 | 0.28 |
| potassium bromide | 0.042 | 0.15 |
| barium chloride | 0.024 | 0.12 |
7. In a binary solution of solute $A$ in solvent $B$, the mole fractions in the pure solvent are $y_A=0$ and $y_B=1$. We let the pure solvent be the solvent standard state; when $y_A=0$, $y_B={\tilde{a}}_B=1$, and ${ \ln {\tilde{a}}_B\ }=0$. What happens to the value of ${ \ln {\tilde{a}}_B\ }$ as $y_A\to 1$? Sketch the graph of ${\left(1-y_A\right)}/{y_A}$ versus ${ \ln {\tilde{a}}_B\ }$. For $0<y^*_A<1$, shade the area on this graph that represents the integral $\int^{y^*_A}_{y_A}{\left(\frac{1-y_A}{y_A}\right)d{ \ln {\gamma }_A\ }} \nonumber$ Is this area greater or less than zero?
8. In a binary solution of solute $A$ in solvent $B$, the activity coefficient of the solvent can be modeled by the equation ${ \ln {\gamma }_B\ }\left(y_A\right)=cy^p_A$, where the constants $c$ and $p$ are found by using least squares to fit experimental data to the equation. Find an equation for ${ \ln {\gamma }_A\ }\left(y_A\right)$. For $c=8.4$ and $p=2.12$, plot ${ \ln {\gamma }_B\ }\left(y_A\right)$ and ${ \ln {\gamma }_A\ }\left(y_A\right)$ versus $y_A$.
9. A series of solutions contains a non-volatile solute, $A$, dissolved in a solvent, $B$. At a fixed temperature, the vapor pressure of solvent $B$ is measured for these solutions and for pure $B$ ($y_A=0$). At low solute concentrations, the vapor pressure varies with the solute mole fraction according to $P=P^{\textrm{⦁}}\left(1-y_A\right)\mathrm{exp}\left(-\alpha y^{\beta }_A\right)$.
(a) If the pure solvent at one bar is taken as the standard state for liquid $B$, and gaseous $B$ behaves as an ideal gas, how does the activity of solvent $B$ vary with $y_A$?
(b) How does ${ \ln {\gamma }_B\ }$ vary with $y_A$?
(c) Find ${ \ln {\gamma }_A\ }\left(y_A\right)$.
10. In Section 14.14, we find for liquid solvent $B$,
${\overline{L}}^o_B=H^o_B-{\overline{H}}^{ref}_B={\mathop{\mathrm{lim}}_{T\to 0} {\left(-\frac{\partial {\Delta }_{\mathrm{mix}}\overline{H}}{\partial n_B}\right)}_{P,T,n_A}\ } \nonumber$
Since ${\overline{H}}^{\textrm{⦁}}_B$ is the molar enthalpy of pure liquid $B$, we have ${\left(\frac{\partial {\overline{H}}^{\textrm{⦁}}_B}{\partial T}\right)}_P=C_P\left(B,\mathrm{liquid},T\right) \nonumber$ In Section 16.15, we set ${\left(\frac{\partial {\overline{H}}^{ref}_B}{\partial T}\right)}_P=C_P\left(B,\mathrm{liquid},T\right) \nonumber$ Show that this is equivalent to the condition ${\left(\frac{\partial {\overline{L}}^o_B}{\partial T}\right)}_P\ll C_P\left(B,\mathrm{liquid},T\right) \nonumber$
11. If $pz_A=-qz_B$, prove that $\frac{pz^2_A+qz^2_B}{p+q}=-z_Az_B \nonumber$
12. At temperatures of 5 ºC, 25 ºC, and 45 ºC, evaluate the Debye-Hückel parameter $\kappa$ for aqueous sodium chloride solutions at concentrations of ${10}^{-3}\ \underline{m}$, ${10}^{-2}\ \underline{m}$, and ${10}^{-1}\ \underline{m}$.
13. Introducing the approximation $1+\kappa a_c\approx 1$ produces the Debye-Hückel limiting law, which is strictly applicable only in the limiting case of an infinitely dilute solution. Introducing the approximation avoids the problem of choosing an appropriate value for $a_c$. If $a_c=0.2\ \mathrm{nm}$, calculate $1+\kappa a_c$ for aqueous solutions in which the ionic strength, $I$, is ${10}^{-3}\ \underline{m}$, ${10}^{-2}\ \underline{m}$, and ${10}^{-1}\ \underline{m}$. What does the result suggest about the ionic-strength range over which the limiting law is a good approximation?
14. The solubility product for barium sulfate, $K_{sp}= \tilde{a}_{Ba^{2+}} \tilde{a}_{SO^{2-}_4}$, is $1.08\times {10}^{-10}$. Estimate the solubility of barium sulfate in pure water and in ${10}^{-2}\ \underline{m}$ potassium perchlorate.
15. The enthalpy of vaporization${}^{3}$ of n-butane at its normal boiling point, 272.65 K, is $22.44\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$. In the temperature range above $273.15\ \mathrm{K}$, the solubility${}^{3}$ of n-butane in water is given by
${ \ln y_A\ }=A+\frac{100\ B}{T}+C\ { \ln \left(\frac{T}{100}\right)\ } \nonumber$
where $A=-102.029$, $B=146.040\ {\mathrm{K}}^{-1}$, and $C=38.7599$. From the result we develop in Section 16.14, calculate ${\Delta }_{\mathrm{vap}}{\overline{H}}_{A,\mathrm{solution}}$ for n-butane at its normal boiling point. (Note that the normal boiling temperature is slightly below the temperature range over which the equation for ${ \ln y_A\ }$ is valid.) Comment.
16. The enthalpy of vaporization${}^{3}$ of molecular oxygen at its normal boiling point, 90.02 K, is $6.82\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$. In the temperature range $273.15\ \mathrm{K}<T<348.15\ \mathrm{K}$, the solubility${}^{3}$ of oxygen in water is given by
${ \ln y_A\ }=A+\frac{100\ B}{T}+C\ { \ln \left(\frac{T}{100}\right)\ } \nonumber$
where $A=-66.7354$, $B=87.4755\ {\mathrm{K}}^{-1}$, and $C=24.4526$. From the result we develop in Section 16.14, calculate ${\Delta }_{\mathrm{vap}}{\overline{H}}_{A,\mathrm{solution}}$ for oxygen at 273.25 K and at its normal boiling point, 90.02 K. Comment.
Notes
${}^{1}$ Raoult’s law and ideal solutions can be defined using fugacities in place of partial pressures. The result is more general but—for those whose intuition has not yet embraced fugacity—less transparent.
${}^{2}$ For a discussion of the concentration range in which the Debye-Hückel model is valid and of various supplemental models that allow for the effects of forces that are specific to the chemical characteristics of the interacting ions, see Lewis and Randall, Pitzer and Brewer, Thermodynamics, 2${}^{nd}$ Edition, McGraw Hill Book Company, New York, 1961, Chapter 23.
${}^{3}$ Data from CRC Handbook of Chemistry and Physics, 79${}^{th}$ Edition, David R. Lide, Ed., CRC Press, 1998-1999.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/16%3A_The_Chemical_Activity_of_the_Components_of_a_Solution/16.20%3A_Problems.txt
|
• 17.1: Oxidation-reduction Reactions
In a large and important class of reactions we find it useful to focus on the transfer of one or more electrons from one chemical moiety to another. Reactions in which electrons are transferred from one chemical moiety to another are called oxidation–reduction reactions, or redox reactions, for short.
• 17.2: Electrochemical Cells
• 17.3: Defining Oxidation States
The definition of oxidation states predates our ability to estimate electron densities through quantum mechanical calculations. As it turns out, however, the ideas that led to the oxidation state formalism are directionally correct; atoms that have high positive oxidation states according to the formalism also have relatively high positive charges by quantum mechanical calculation.
• 17.4: Balancing Oxidation-reduction Reactions
• 17.5: Electrical Potential
Electrical potential is measured in volts. If a system comprising one coulomb of charge passes through a potential difference of one volt, one joule of work is done on the system. Whether this represents an increase or a decrease in the energy of the system depends on the sign of the charge and on the sign of the potential difference. Electrical potential and gravitational potential are analogous.
• 17.6: Electrochemical Cells as Circuit Elements
If the reaction between silver ions and copper metal is to occur, electrons must pass through the external circuit from the copper terminal to the silver terminal. An electron that is free to move in the presence of an electrical potential must move away from a region of more negative electrical potential and toward a region of more positive electrical potential.
• 17.7: The Direction of Electron Flow and its Implications
The difference between electrolytic and galvanic cells lies in the direction of current flow and, correspondingly, the direction in which the cell reaction occurs. In a galvanic cell, a spontaneous chemical reaction occurs and this reaction determines the direction of current flow and the signs of the electrode potentials. In an electrolytic cell, the sign of the electrode potentials is determined by an applied potential source, which determines the direction of current flow.
• 17.8: Electrolysis and the Faraday
Electrolytic cells are very important in the manufacture of products essential to our technology-intensive civilization. Only electrolytic processes can produce many materials, notably metals that are strong reducing agents. Aluminum and the alkali metals are conspicuous examples. Many manufacturing processes that are not themselves electrolytic utilize materials that are produced in electrolytic cells. These processes would not be possible if the electrolytic products were not available.
• 17.9: Electrochemistry and Conductivity
From the considerations we have discussed, it is evident that any electrolytic cell involves a flow of electrons in an external circuit and a flow of ions within the materials comprising the cell. The function of the current collectors is to transfer electrons back and forth between the external circuit and the cell reagents.
• 17.10: The Standard Hydrogen Electrode (S.H.E)
We also need to choose an arbitrary reference half-cell. The choice that has been adopted is the Standard Hydrogen Electrode, often abbreviated the S.H.E. The S.H.E. is defined as a piece of platinum metal, immersed in a unit-activity aqueous solution of a protonic acid, and over whose surface hydrogen gas, at unit fugacity, is passed continuously. These concentration choices make the electrode a standard electrode.
• 17.11: Half-reactions and Half-cells
• 17.12: Standard Electrode Potentials
We adopt a very useful convention to tabulate the potential drops across standard electrochemical cells, in which one half-cell is the S.H.E. Since the potential of the S.H.E. is zero, we define the standard electrode potential of any other standard half-cell (and its associated half-reaction) to be the potential difference when the half-cell operates spontaneously versus the S.H.E. The electrical potential of the standard half-cell determines both the magnitude and sign of the standard half-cell potential.
• 17.13: Predicting the Direction of Spontaneous Change
It is useful to associate the standard electrode potential with the half-reaction written as a reduction, that is, with the electrons written on the left side of the equation. We also establish the convention that reversing the direction of the half-reaction reverses the algebraic sign of its potential. When these conventions are followed, the overall reaction and the full-cell potential can be obtained by adding the corresponding half-cell information.
• 17.14: Cell Potentials and the Gibbs Free Energy
• 17.15: The Nernst Equation
• 17.16: The Nernst Equation for Half-cells
• 17.17: Combining two Half-cell Equations to Obtain a new Half-cell Equation
• 17.18: The Nernst Equation and the Criterion for Equilibrium
• 17.19: Problems
Thumbnail: Schematic of Zn-Cu galvanic cell. (CC BY-SA 3.0; Ohiostandard).
17: Electrochemistry
We find it useful to classify reactions according to the type of change that the reagents undergo. Many classification schemes exist, often overlapping one another. The three most commonly used classifications are acid–base reactions, substitution reactions, and oxidation–reduction reactions.
Acids and bases can be defined in several ways, the most common being the Brønsted-Lowry definition, in which acids are proton donors and bases are proton acceptors. The Brønsted-Lowry definition is particularly useful for reactions that occur in aqueous solutions. A prototypical example is the reaction of acetic acid with hydroxide ion to produce the acetate ion and water.
$CH_3CO_2H+OH^-\to CH_3CO^-_2+H_2O \nonumber$
Here acetic acid is the proton donor, hydroxide ion is the proton acceptor. The products are also an acid and a base, since water is a proton donor and acetate ion is a proton acceptor.
When we talk about substitution reactions, we focus on a particular substituent in a chemical compound. The original compound is often called the substrate. In a substitution reaction, the original substituent is replaced by a different chemical moiety. A prototypical example is the displacement of one substituent on a tetrahedral carbon atom by a more nucleophilic group, as in the reaction of methoxide ion with methyl iodide to give dimethyl ether and iodide ion.
$CH_3I+CH_3O^-\to CH_3OCH_3+I^- \nonumber$
We could view a Brønsted-Lowry acid-base reaction as a substitution reaction in which one group (the acetate ion in the example above) originally bonded to a proton is replaced by another (hydroxide ion). Whether we use one classification scheme or another to describe a particular reaction depends on which is better suited to our immediate purpose.
In acid-base reactions and substitution reactions, we focus on the transfer of a chemical moiety from one chemical environment to another. In a large and important class of reactions we find it useful to focus on the transfer of one or more electrons from one chemical moiety to another. For example, copper metal readily reduces aqueous silver ion. If we place a piece of clean copper wire in an aqueous silver nitrate solution, reaction occurs according to the equation
$2Ag^++Cu^0\to 2Ag^0+Cu^{2+} \nonumber$
We have no trouble viewing this reaction as the transfer of two electrons from the copper atom to the silver ions. In consequence, a cupric ion, formed at the copper surface, is released into the solution. Two atoms of metallic silver are deposited at the copper surface. Reactions in which electrons are transferred from one chemical moiety to another are called oxidation–reduction reactions, or redox reactions, for short.
We define oxidation as the loss of electrons by a chemical moiety. Reduction is the gain of electrons by a chemical moiety. Since a moiety can give up electrons only if they have some place to go, oxidation and reduction are companion processes. Whenever one moiety is oxidized, another is reduced. In the reduction of silver ion by copper metal, it is easy to see that silver ion is gaining electrons and copper metal is losing them. In other reactions, it is not always so easy to see which moieties are gaining and losing electrons, or even that electron transfer is actually involved. As an adjunct to our ideas about oxidation and reduction, we develop a scheme for formally assigning electrons to the atoms in a molecule or ion. This is called the oxidation state formalism and comprises a series of rules for assigning a number, which we call the oxidation state (or oxidation number), to every atom in the molecule. When we adopt this scheme, the redox character of a reaction is determined by which atoms increase their oxidation state and which decrease their oxidation state as a consequence of the reaction. Those whose oxidation state increases lose electrons and are oxidized, while those whose oxidation state decreases gain electrons and are reduced.
A process of electron loss is called an oxidation because reactions with elemental oxygen are viewed as prototypical examples of such processes. Since many observations are correlated by supposing that oxygen atoms in compounds are characteristically more electron-rich than the atoms in elemental oxygen, it is useful to regard a reaction of a substance with oxygen as a reaction in which the atoms of the substance surrender electrons to oxygen atoms. It is then a straightforward generalization to say that a substance is oxidized whenever it loses electrons, whether oxygen atoms or some other chemical moiety takes up those electrons. So, for example, the reaction of sodium metal with oxygen in a dry environment produces sodium oxide, ${Na}_2O$, in which the sodium is usefully viewed as carrying a positive charge. (The oxidation state of sodium is 1+; the oxidation state of oxygen is 2–.)
The conversion of a metal oxide to the corresponding metal is described as reducing the oxide. Since converting a metal oxide to the metal reverses the change that occurs when we oxidize it, generalization of this idea leads us to apply the term reduction to any process in which a chemical moiety gains electrons. It is a fortunate coincidence that a reduction process is one in which the oxidation number of an atom becomes smaller (more negative) and is therefore reduced, in the sense of being decreased.
Another feature of oxidation–reduction reactions, and one that relates to the utility of viewing these reactions in terms of electron gain and loss, emerges when we observe the reaction of aqueous silver ions with copper metal closely. As the reaction proceeds, the aqueous solution becomes blue as cupric ions accumulate. Long needle-like crystals of silver metal grow out from the copper surface. The simplest mechanism that we can imagine for the growth of well-formed silver crystals is that silver ions from the solution plate out on the surface of the growing silver crystal, accepting an electron from the metallic crystal as they do so. The silver metal acquires this electron from the copper metal, with which it is in contact, but at a large (on an atomic scale) distance from the site at which the new atom of silver is deposited. Evidently the processes of electron loss and gain that characterize an overall reaction can occur at different locations, if there is a suitable process for moving the electron from one location to the other.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/17%3A_Electrochemistry/17.01%3A_Oxidation-reduction_Reactions.txt
|
We can extend the idea of carrying out the electron loss and electron gain steps in different physical locations. Suppose that the only aqueous species in contact with the silver metal are silver ions and nitrate ions; the silver metal is also in contact with a length of copper wire, whose other end dips into a separate reservoir containing an aqueous solution of sodium nitrate. This arrangement is sketched in Figure 1. When we create this arrangement, nothing happens. We do not see any visible change in the silver metal, and the water contacting the copper wire never turns blue. On the one hand, this result does not surprise us. We are accustomed to the idea that reactants must be able to contact one another in order for reaction to occur.
On the other hand, the original experiment really does show that silver ions can accept electrons in one location while copper atoms give them up in another, so long as we provide a metal bridge on which the electrons can move between the two locations. Why should this not continue to happen in the new experimental arrangement? In fact, it does. It is just that the reaction occurs to only a very small extent before stopping altogether. The reason is easy to appreciate. After a very small number of silver ions are reduced, the silver nitrate solution contains more nitrate ions than silver ions; the solution as a whole has a negative charge. In the other reservoir, a small number of cupric ions dissolve, but there is no increase in the number of counter ions, so this solution acquires a positive charge. These net charges polarize the metal that connects them; the metal has an excess of positive charge at the copper-solution end and an excess of negative charge at the silver-solution end. This polarization opposes the motion of a negatively charged electron from the copper-solution end toward the silver-solution end. When the polarization becomes sufficiently great, electron flow ceases and no further reaction can occur.
By this analysis, the anions that the cupric solution needs in order to achieve electroneutrality are present in the silver-ion solution. The reaction stops because the anions have no way to get from one solution to the other. Evidently, the way to make the reaction proceed is to modify the two-reservoir experiment so that nitrate ions can move from the silver-solution reservoir to the copper-solution reservoir. Alternatively, we could introduce a modification that allows copper ions to move in the opposite direction or one that allows both kinds of movement. We can achieve the latter by connecting the two solutions with a tube containing sodium nitrate solution, as diagrammed in Figure 2. Now, nitrate ions can move between the reservoirs and maintain electroneutrality in both of them. However, silver ions can also move between the reservoirs. When we do this experiment, we observe that electrons do flow through the wire, indicating that silver-ion reduction and copper-atom oxidation are occurring at the separated sites. However, after a short time, the solutions mix; silver ions migrate through the aqueous medium and react directly with the copper metal. Because the mixing is poorly controllable, the reproducibility of this experiment is poor.
Evidently, we need a way to permit the exchange of ions between the two reservoirs that does not permit the wholesale transfer of reactive species. One device that accomplishes this is called a salt bridge. The requirement we face is that ions should be able to migrate from reservoir to reservoir so as to maintain electroneutrality. However, we do not want ions that participate in electrode reactions to migrate. A salt bridge is simply a salt solution that we use to connect the two reservoirs. To avoid introducing unwanted ions into the reservoir solutions, we prepare the salt-bridge solution using a salt whose ions are not readily oxidized or reduced. Alkali metal salts with nitrate, perchlorate, or halide anions are often used. To avoid mixing the reservoir solutions with the salt bridge solution, we plug each end of the salt bridge with a porous material that permits diffusion of ions but inhibits bulk movement of solution in or out of the bridge. The inhibition of bulk movement can be made much more effective by filling the bridge with a gel, so that the solution is unable to undergo bulk motion in any part of the bridge.
With a salt bridge in place, inert ions can move from one reservoir to the other to maintain electroneutrality. Under these conditions, we see an electrical current through the external circuit and a compensating diffusion of ions through the salt bridge. The salt bridge completes the circuit. Transport of electrons from one electrode to the other carries charge in one direction; motion of ionic species through the salt bridge carries negative charge through the solution in the opposite direction. This compensating ionic motion has anions moving opposite to the electron motion and cations moving in the same direction as the electrons.
We have just described one kind of electrochemical cell. As diagrammed in Figure 3, it has four principal features: two reservoirs within which reactions can occur, a wire through which electrons can pass from one reservoir to the other, and a salt bridge through which ionic species can pass. Many similar electrochemical cells can be constructed. The reservoirs can contain a wide variety of reagents. Because each reservoir must be able to exchange electrons with the connecting wire, each must contain an electrically conducting solid that serves as a terminal and a current collector, and often participates in the chemical change as a reactant or as a catalyst. The combination of reagents and current collector is called a half-cell. The current collector itself is called an electrode, although this term is often applied to the whole half-cell as well. In this case, the wire is the external circuit. In applications of chemical interest, the external circuit typically contains devices to measure the electrochemical cell’s properties as a circuit element.
If we view this electrochemical cell as a device for producing an electrical current, we see that it has a number of practical limitations. Two of the most important relate to the performance of the salt bridge. Whenever electrons move through the external circuit, the salt bridge must accept a charge-compensating number of ions from one reservoir and release the same quantity of ionic charge to the other reservoir. We construct the salt bridge so that ions can pass into and out of it only by diffusion. Consequently, the rate at which ions can diffuse through the salt bridge limits the rate at which electrons can flow through the external circuit. Since diffusion is a slow process over the macroscopic dimensions of the bridge, the cell can pass only a small current. From an electrical perspective, slow diffusion of ions through the salt bridge causes a surplus of positively charged species to develop at one end of the salt bridge and a surplus of negatively charged species to develop at the other. This charge imbalance means that there is a potential gradient across the salt bridge, whose effect is to oppose the flow of further current.
The second limitation of the cell attributable to the properties of the salt bridge is that the amount of current the cell can produce before its performance characteristics change dramatically is limited by the amount of inert salt in the bridge. After a relatively small charge passes through the cell, migration of reactive species from one reservoir to the other becomes significant. Effective electrochemical power sources must use other methods to separate reactants and products while allowing for the transport of ions between half-cells.
Despite these limitations, such electrochemical cells are very effective tools for the study of the thermodynamics of electrochemical reactions. The principal interaction between electrochemistry and thermodynamics revolves around the relationship between the free energy change for the reaction and the properties of the electrochemical cell viewed as a circuit element. In Section 17.14, we see that the Gibbs free energy change for the chemical reaction is proportional to the electrical potential that develops across the terminals of the corresponding electrochemical cell.
In experiments, we find that the electrical potential difference across a cell depends on the amount of current that is being drawn from the cell. Because the movement of ions and other substances within the cell is slow compared to the rate at which a wire can transfer electrons from one terminal to another, potential differences that develop within an operating cell decrease the electrical potential across the terminals. Only when the current being drawn from the cell is zero does the electrical potential correspond precisely to the Gibbs free energy change of the chemical reaction occurring in the cell. This should not surprise us. The experimental measurement of any entropy-dependent thermodynamic function must be made on a system that is undergoing reversible change. A reversible change in an electrochemical cell is a change in which the current flow is zero.
Measuring the electrical potential at zero current is experimentally straightforward, at least in principle. We connect the cell to some reference device that provides a known and variable electrical potential. The connection is made such that the electrical potential from the reference device opposes the potential from the electrochemical cell; that is, we connect the positive terminal of the reference device to the positive terminal of the cell, and the negative terminal of the device to the negative terminal of the cell. (See Section 17.7.) We then vary the potential of the reference device until current flow in the circuit stops. When this occurs the potential drop being supplied by the reference device must be precisely equal to the potential drop across the electrochemical cell, which is the datum we want.
In practice, the reference device is another “standard” electrochemical cell, whose potential drop is defined to have a particular value at specified conditions. Modern electronics make it possible to do the actual measurements with great sophistication. The necessary measurements can also be done with very basic equipment. The principles remain the same. In the basic experiment, a variable resistor is used to adjust the potential drop across the standard cell until it exactly matches that of the cell being studied. When this potential is reached, current flow ceases. Current flow is monitored using a sensitive galvanometer. It is not necessary to actually measure the current. Since we are interested in locating the potential drop at which the current flow is zero, it is sufficient to find the potential drop at which the galvanometer detects no current. The accuracy of the potential measurement depends on the stability of the standard cell potential, the accuracy of the variable resistor, and the sensitivity of the galvanometer.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/17%3A_Electrochemistry/17.02%3A_Electrochemical_Cells.txt
|
We introduce oxidation states to organize our thinking about oxidation–reduction reactions and electrochemical cells. When we define oxidation states, we create a set of rules for allocating the electrons in a molecule or ion to the individual atoms that make it up. The definition of oxidation states is therefore an accounting exercise. The definition of oxidation states predates our ability to estimate electron densities through quantum mechanical calculations. As it turns out, however, the ideas that led to the oxidation state formalism are directionally correct; atoms that have high positive oxidation states according to the formalism also have relatively high positive charges by quantum mechanical calculation. In general, the absolute values of oxidation states are substantially larger than the absolute values of the partial charges found by quantum-mechanical calculation; however, there is no simple quantitative relationship between oxidation states and the actual distribution of electrons in real chemical moieties. It is a serious mistake to think that our accounting system provides a quantitative description of actual electron densities.
The rules for assigning oxidation states grow out of the primitive (and quantitatively incorrect) idea that oxygen atoms usually acquire two electrons and hydrogen atoms usually lose one electron in forming chemical compounds and ionic moieties. The rest of the rules derive from a need to recognize some exceptional cases and from applying the basic ideas to additional elements. The rules of the oxidation state formalism are these:
• For any element in any of its allotropic forms, the oxidation state of its atoms is zero.
• In any of its compounds, the oxidation state of an oxygen atom is 2–, except in compounds that contain an oxygen–oxygen bond, where the oxidation state of oxygen is 1–. The excepted compounds are named peroxides. Examples include sodium peroxide, ${Na}_2O_2$, and hydrogen peroxide, $H_2O_2$.
• In any of its compounds, the oxidation state of a hydrogen atom is 1+, except in compounds that contain a metal–hydrogen bond, where the oxidation state of hydrogen is 1–. The excepted compounds are named hydrides. Examples include sodium hydride, $NaH$, and calcium hydride, $CaH_2$.
• In any of their compounds, the oxidation states of alkali metal atoms (lithium, sodium, potassium, rubidium, cesium, and francium) are 1+. (There are exceptional cases, but we do not consider them.)
• In any of their compounds, the oxidation states of halogen atoms (fluorine, chlorine, bromine, iodine, and astatine) are 1–, except in compounds that contain a halogen–oxygen bond.
• The oxidation states of any other atoms in a compound are chosen so as to make the sum of the oxidation states in the chemical moiety equal to its charge. So, for a neutral molecule, the oxidation states sum to zero. For a monovalent anion, they sum to 1–, etc.
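The last rule amounts to solving a one-unknown charge-balance equation, which is easy to automate for simple cases. The toy function below does that bookkeeping when only one element's oxidation state is unknown; it applies the common-case assignments from the rules above and deliberately ignores the peroxide, hydride, and halogen exceptions, so it is a sketch rather than a general solver.

```python
def oxidation_state(counts, charge, unknown):
    """counts: dict mapping element symbol -> number of atoms in the species.
    charge: net charge of the species.  unknown: element whose oxidation state we want.
    All other elements get the default assignments (no exceptional cases handled)."""
    defaults = {"O": -2, "H": +1, "Li": +1, "Na": +1, "K": +1}
    known_sum = sum(defaults[el] * n for el, n in counts.items() if el != unknown)
    return (charge - known_sum) / counts[unknown]

print(oxidation_state({"Mn": 1, "O": 4}, charge=-1, unknown="Mn"))  # 7.0  (Mn in MnO4-)
print(oxidation_state({"C": 2, "O": 4}, charge=-2, unknown="C"))    # 3.0  (C in C2O4 2-)
print(oxidation_state({"S": 1, "O": 4}, charge=-2, unknown="S"))    # 6.0  (S in SO4 2-)
```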
17.04: Balancing Oxidation-reduction Reactions
Having defined oxidation states, we can now redefine an oxidation–reduction reaction as one in which at least one element undergoes a change of oxidation state. For example, in the reaction between permanganate ion and oxalate ion, the oxidation states of manganese and carbon atoms change. In the reactants, the oxidation state of manganese is 7+; in the products, it is 2+. In the reactants, the oxidation state of carbon is 3+; in the products, it is 4+.
$\begin{array}{c c c c c c c} 7+ & ~ & 3+ & ~ & 2+ & ~ & 4+ \\ MnO^-_4 & + & C_2O^{2-}_4 & \to & {Mn}^{2+} & + & CO_2 \end{array} \nonumber$
These oxidation state changes determine the stoichiometry of the reaction. In terms of the oxidation state formalism, each manganese atom gains five electrons and each carbon atom loses one electron. Thus the reaction must involve five times as many carbon atoms as manganese atoms. Allowing for the presence of two carbon atoms in the oxalate ion, conservation of electrons requires that the stoichiometric coefficients be
$2\ MnO^-_4+5\ C_2O^{2-}_4\to 2\ {Mn}^{2+}+10\ CO_2 \nonumber$
Written this way, two $MnO^-_4$ moieties gain ten electrons, and five $C_2O^{2-}_4$ moieties lose ten electrons. When we fix the coefficients of the redox reactants, we also fix the coefficients of the redox products. However, inspection shows that both charge and the number of oxygen atoms are out of balance in this equation.
The reaction occurs in acidic aqueous solution. This means that enough water molecules must participate in the reaction to achieve oxygen-atom balance. Adding eight water molecules to the product brings oxygen into balance. Now, however, charge and hydrogen atoms
$2\ MnO^-_4+5\ C_2O^{2-}_4\to 2\ {Mn}^{2+}+10\ CO_2+8\ H_2O \nonumber$
do not balance. Since the solution is acidic, we can bring hydrogen into balance by adding sixteen protons to the reactants. When we do so, we find that charge balances also.
$2\ MnO^-_4+5\ C_2O^{2-}_4+16\ H^+\to 2\ {Mn}^{2+}+10\ CO_2+8\ H_2O \nonumber$
Evidently, this procedure achieves charge balance because the oxidation state formalism enables us to find the correct stoichiometric ratio between oxidant and reductant.
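A quick tally confirms this. The Python sketch below is an illustration (not part of the original development) that counts atoms and charge on each side of the final equation.

```python
# Audit of 2 MnO4^- + 5 C2O4^2- + 16 H^+ -> 2 Mn^2+ + 10 CO2 + 8 H2O.
# Each entry is (stoichiometric coefficient, {element: atom count}, charge).

reactants = [(2, {"Mn": 1, "O": 4}, -1),   # MnO4^-
             (5, {"C": 2, "O": 4}, -2),    # C2O4^2-
             (16, {"H": 1}, +1)]           # H^+
products = [(2, {"Mn": 1}, +2),            # Mn^2+
            (10, {"C": 1, "O": 2}, 0),     # CO2
            (8, {"H": 2, "O": 1}, 0)]      # H2O

def totals(side):
    atoms, charge = {}, 0
    for coeff, formula, q in side:
        charge += coeff * q
        for element, count in formula.items():
            atoms[element] = atoms.get(element, 0) + coeff * count
    return atoms, charge

print(totals(reactants))  # ({'Mn': 2, 'O': 28, 'C': 10, 'H': 16}, 4)
print(totals(products))   # ({'Mn': 2, 'C': 10, 'O': 28, 'H': 16}, 4)
```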
We can formalize this thought process in a series of rules for balancing oxidation–reduction reactions. In doing this, we can derive some advantage from splitting the overall chemical change into two parts, which we call half-reactions. It is certainly not necessary to introduce half-reactions just to balance equations; the real advantage is that a half-reaction describes the chemical change in an individual half-cell. The rules for balancing oxidation–reduction reactions using half-cell reactions are these:
1. Find the oxidation state of every atom in every reactant and every product.
2. Write skeletal equations showing:
1. the oxidizing agent $\to$ its reduced product
2. the reducing agent $\to$ its oxidized product
3. Balance the skeletal equations with respect to all elements other than oxygen and hydrogen.
4. Add electrons to each equation to balance those gained or lost by the atoms undergoing oxidation-state changes.
5. For a reaction occurring in acidic aqueous solution:
1. balance oxygen atoms by adding water to each equation.
2. balance hydrogen atoms by adding protons to each equation.
6. For a reaction occurring in basic aqueous solution, balance oxygen and hydrogen atoms by adding water to one side of each equation and hydroxide ion to the other. The net effect of adding one water and one hydroxide is to increase by one the number of hydrogen atoms on the side to which the water is added. Adding two hydroxide ions to one side and a water molecule to the other increases by one the number of oxygen atoms on the side to which hydroxide is added.
7. Multiply each half-reaction by a factor chosen to make each of the resulting half-reactions contain the same number of electrons.
8. Add the half-reactions to get a balanced equation for the overall chemical change. The electrons cancel. Often, some of the water molecules, hydrogen ions, or hydroxide ions cancel also.
When we apply this method to the permanganate–oxalate reaction, we have
$2\ MnO^-_4+16\ H^++10\ e^-\to 2\ {Mn}^{2+}+8\ H_2O \nonumber$
reduction half-reaction
$5\ C_2O^{2-}_4\to 10\ CO_2+10\ e^- \nonumber$
oxidation half-reaction
$2\ MnO^-_4+5\ C_2O^{2-}_4+16\ H^+\to 2\ {Mn}^{2+}+10\ CO_2+8\ H_2O \nonumber$
balanced reaction
The half-reactions sum to the previously obtained result; the electrons cancel. For an example of a reaction in basic solution, consider the disproportionation of chlorine dioxide to chlorite and chlorate ions:
$\begin{array}{c c c c c} 4+ & ~ & 3+ & ~ & 5+ \\ ClO_2 & \to & ClO^-_2 & + & ClO^-_3 \end{array} \nonumber$
skeletal reaction
$ClO_2+e^-\to ClO^-_2 \nonumber$
reduction half-reaction
$ClO_2+2OH^-\to ClO^-_3+H_2O+e^- \nonumber$
oxidation half-reaction
$2\ ClO_2+2OH^-\to ClO^-_2+ClO^-_3+H_2O \nonumber$
balanced equation
17.05: Electrical Potential
Electrical potential is measured in volts. If a system comprising one coulomb of charge passes through a potential difference of one volt, one joule of work is done on the system. The work done on the system is equal to the change in the energy of the system. For $Q$ coulombs passing through a potential difference of $\mathcal{E}$ volts, we have $\Delta E=w_{elec}=Q\mathcal{E}$. Whether this represents an increase or a decrease in the energy of the system depends on the sign of the charge and on the sign of the potential difference.
Electrical potential and gravitational potential are analogous. The energy change associated with moving a mass from one elevation to another in the earth’s gravitational field is
$\Delta E=w_{grav}=mgh_{final}-mgh_{initial}=m{\mathit{\Phi}}_{grav} \nonumber$
where ${\mathit{\Phi}}_{grav}=g\left(h_{final}-h_{initial}\right)$, which is the gravitational potential difference.
The role played by charge in the electrical case is played by mass in the gravitational case. The energies of these systems change because charge or mass moves in response to the application of a force. In the electrical case, the force is the electrical force that arises from the interaction between charges. In the gravitational case, the force is the gravitational force that arises from the interaction between masses. A notable difference is that mass is always a positive quantity, whereas charge can be positive or negative.
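The analogy can be made numerical. The Python sketch below is an illustration with assumed values: one coulomb moved through one volt stores the same energy as roughly one kilogram raised about ten centimeters.

```python
# Energy change for charge moved through an electrical potential difference,
# compared with mass moved through a gravitational potential difference.

def electrical_energy(charge_coulombs, potential_volts):
    return charge_coulombs * potential_volts        # joules

def gravitational_energy(mass_kg, delta_height_m, g=9.81):
    return mass_kg * g * delta_height_m             # joules

print(electrical_energy(1.0, 1.0))        # 1 C through 1 V -> 1.0 J
print(gravitational_energy(1.0, 0.102))   # 1 kg raised 0.102 m -> about 1.0 J
```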
The distinguishing feature of an electrochemical cell is that there is an electrical potential difference between the two terminals. For any given cell, the magnitude of the potential difference depends on the magnitude of the current that is flowing. (Making the general problem even more challenging, we find that it depends also on the detailed history of the conditions under which electrical current has been drawn from the cell.) Fortunately, if we keep the cell’s temperature constant and measure the potential at zero current, the electrical potential is constant. Under these conditions, the cell’s characteristics are fixed, and potential measurements give reproducible results. We want to understand the origin and magnitude of this potential difference. Experimentally, we find:
1. If we measure the zero-current electrical potential of the same cell at different temperatures, we find that this potential depends on temperature.
2. If we prepare two cells with different chemical species, they exhibit different electrical potentials—except possibly for an occasional coincidence.
3. If we prepare two cells with the same chemical species at different concentrations and measure their zero-current electrical potentials at the same temperature, we find that they exhibit different potentials.
4. If we draw current from a given cell over a period of time, we find that there is a change in the relative amounts of the reagents present in the cell. Overall, a chemical reaction occurs; some reagents are consumed, while others are produced.
We can summarize these experimental observations by saying that the central issue in electrochemistry is the interrelation of three characteristics of an electrochemical cell: the electrical-potential difference between the terminals of the cell, the flow of electrons in the external circuit, and the chemical changes inside the cell that accompany this electron flow.
17.06: Electrochemical Cells as Circuit Elements
Suppose we use a wire to connect the terminals of the cell built from the silver–silver ion half-cell and the copper–cupric ion half-cell. This wire then constitutes the external circuit, the path that the electrons follow as chemical change occurs within the cell. When the external circuit is simply a low-resistance wire, the cell is short-circuited. The external circuit can be more complex. For example, when we want to know the direction of electron flow, we incorporate a galvanometer.
If the reaction between silver ions and copper metal is to occur, electrons must pass through the external circuit from the copper terminal to the silver terminal. An electron that is free to move in the presence of an electrical potential must move away from a region of more negative electrical potential and toward a region of more positive electrical potential. Since the electron-flow is away from the copper terminal and toward the silver terminal, the copper terminal must be electrically negative and the silver terminal must be electrically positive. Evidently, if we know the chemical reaction that occurs in an electrical cell, we can immediately deduce the direction of electron flow in the external circuit. Knowing the direction of electron flow in the external circuit immediately tells us which is the negative and which the positive terminal of the cell.
The converse is also true. If we know which cell terminal is positive, we know that electrons in the external circuit flow toward this terminal. Even if we know nothing about the composition of the cell, the fact that electrons are flowing toward a particular terminal tells us that the reaction occurring in that half-cell is one in which a solution species, or the electrode material, takes up electrons. That is to say, some chemical entity is reduced in a half-cell whose potential is positive. It can happen that we know the half-reaction that occurs in a given half-cell, but that we do not know which direction the reaction goes. For example, if we replace the silver–silver ion half cell with a similar cell containing an aqueous zinc nitrate solution and a zinc electrode, we are confident that the half-cell reaction is either
$\ce{Zn^{0} \to Zn^{2+} + 2e^{-}} \nonumber$
or
$\ce{Zn^{2+} + 2 e^{-} \to Zn^{0}} \nonumber$
When we determine experimentally that the copper electrode is electrically positive with respect to the zinc electrode, we know that electrons are leaving the zinc electrode and flowing to the copper electrode. Therefore, the cell reaction must be
$\ce{Zn^{0} + Cu^{2+} \to Zn^{2+} + Cu^{0}} \nonumber$
It is convenient to have names for the terminals of an electrochemical cell. One naming convention is to call one terminal the anode and the other terminal the cathode. The definition is that the cathode is the electrode at which a reacting species is reduced. In the silver–silver ion containing cell, the silver electrode is the cathode. In the zinc–zinc ion containing cell, the copper electrode is the cathode. In these cells, the cathode is the electrically positive electrode. An important feature of these experiments is that the direction of the electrical potential in the external circuit is established by the reactions that occur spontaneously in the cells. The cells are sources of electrical current. Cells that operate to produce current are called galvanic cells.
17.07: The Direction of Electron Flow and its Implications
We can incorporate another potential source into the external circuit of an electrochemical cell. If we do so in such a way that the two electrical potentials augment one another, as diagrammed in Figure 4, the potential drop around the new external circuit is the sum of the potential drops of the two sources taken independently. The direction of electron flow is unchanged. An electron anywhere in the external circuit is propelled in the same direction by either potential source. The effective potential difference in the composite circuit is the sum of the potentials that the sources exhibit when each acts alone.
Alternatively, we can connect the two potential sources so that they oppose one another, as diagrammed in Figure 5. Now an electron in the external circuit is pushed in one direction by one of the potential sources and in the opposite direction by the other potential source. The effective potential difference in the composite circuit is the difference between the potentials that the sources exhibit when each acts alone. In the composite circuit, the direction of electron flow is determined by the potential source whose potential difference is greater.
This has a dramatic effect on the direction of the reaction occurring in the weaker cell. In the composite cell, the direction of electron flow through the weaker cell is opposite to the direction of electron flow when the weaker cell is operating as a galvanic cell. Since the direction of electron flow in the external circuit determines the directions in which the half-reactions occur, the chemical reaction that occurs in the cell must occur in the opposite direction also. When the direction of current flow through a cell is determined by connection to a greater potential difference in this fashion, the cell is called an electrolytic cell. Reduction occurs at the negative terminal of an electrolytic cell. In an electrolytic cell, the cathode is the electrically negative electrode. The direction of current flow in any cell can be reversed by the application of a sufficiently large counter-potential.
When a cell operates as a source of current (that is, as a galvanic cell), the cell reaction is a spontaneous process. Since, as the cell reaction proceeds, electrons move through a potential difference in the external circuit, the reaction releases energy in the cell’s surroundings. If the external circuit is simply a resistor, as when the terminals are short-circuited, the energy is released as heat. Let $q$ be the heat released and let $Q$ be the amount of charge passed through the external circuit in a time interval $\Delta t$. The heat-release rate is given by
$\frac{q}{\Delta t}=\frac{\Delta E}{\Delta t}=\frac{Q\mathcal{E}}{\Delta t} \nonumber$
The electrical current is $I={Q}/{\Delta t}$. If the resistor follows Ohm’s law, $\mathcal{E}=IR$, where $R$ is the magnitude of the resistance, the heat release rate becomes
$\frac{q}{\Delta t}=I^2R \nonumber$
As the reaction proceeds and energy is dissipated in the external circuit, the ability of the cell to supply further energy is continuously diminished. The energy delivered to the surroundings through the external circuit comes at the expense of the cell’s internal energy and corresponds to the depletion of the cell reactants.
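The following Python sketch illustrates this bookkeeping for a short-circuited cell; the current, resistance, and time interval are assumed values, not data from the text.

```python
# Heat released in the external resistor of a short-circuited cell.
# The current, resistance, and time interval are assumed for illustration.

I = 0.50    # amperes (coulombs per second)
R = 2.0     # ohms
dt = 60.0   # seconds

potential = I * R       # Ohm's law: potential drop across the resistor, volts
Q = I * dt              # charge passed, coulombs
heat = Q * potential    # joules dissipated in the resistor

print(heat)             # 30.0 J
print(I**2 * R * dt)    # 30.0 J, the same result from q/dt = I^2 R
```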
When the chemical reaction occurring within a cell is driven by the application of an externally supplied potential difference, the opposite occurs. In the driven (electrolytic) cell, the direction of the cell reaction is opposite the direction of the spontaneous reaction that occurs when the cell operates galvanically. The electrolytic process produces the chemical reagents that are consumed in the spontaneous cell reaction. The external circuit delivers energy to the electrolytic cell, increasing its content of spontaneous-direction reactants and thereby increasing its ability to do work.
In summary, the essential difference between electrolytic and galvanic cells lies in the factor that determines the direction of current flow and, correspondingly, the direction in which the cell reaction occurs. In a galvanic cell, a spontaneous chemical reaction occurs and this reaction determines the direction of current flow and the signs of the electrode potentials. In an electrolytic cell, the sign of the electrode potentials is determined by an applied potential source, which determines the direction of current flow; the cell reaction proceeds in the non-spontaneous direction.
17.08: Electrolysis and the Faraday
Electrolytic cells are very important in the manufacture of products essential to our technology-intensive civilization. Only electrolytic processes can produce many materials, notably metals that are strong reducing agents. Aluminum and the alkali metals are conspicuous examples. Many manufacturing processes that are not themselves electrolytic utilize materials that are produced in electrolytic cells. These processes would not be possible if the electrolytic products were not available. For example, elemental silicon, the essential precursor of most contemporary computer chips, is produced from silicon tetrachloride by reduction with sodium.
$SiCl_4+4\ Na^0\to Si^0+4\ NaCl \nonumber$
(The silicon so produced is intensively refined, formed into large single crystals, and sliced into wafers before the chip-manufacturing process begins.) Elemental sodium is produced by the electrolysis of molten sodium chloride.
Successful electrolytic processes involve artful selection of the current-collector material and the reaction conditions. The design of the cell is often crucial. Since sodium metal reacts violently with water, we recognize immediately that electrolysis of aqueous sodium chloride solutions cannot produce sodium metal. What products are obtained depends on numerous factors, notably the composition of the electrodes, the concentration of the salt solution, and the potential that is applied to the cell.
Electrolysis of concentrated, aqueous, sodium chloride solutions is used on a vast scale in the chlor-alkali process for the co-production of chlorine and sodium hydroxide, both of which are essential for the manufacture of many common products.
$2\ NaCl\left(aq\right)+2\ H_2O\left(\ell \right)\to 2\ NaOH\left(aq\right)+Cl_2\left(g\right)+H_2\left(g\right) \nonumber$
Hydrogen is a by-product. The overall process does not involve sodium ion; rather, the overall reaction is an oxidation of chloride ion and a reduction of water.
$2\ Cl^-\left(aq\right)\to Cl_2\left(g\right)+2\ e^- \nonumber$
oxidation half-reaction
$2\ H_2O\left(\ell \right)+2\ e^-\to 2\ OH^-\left(aq\right)+H_2\left(g\right) \nonumber$
reduction half-reaction
The engineering difficulties associated with the chlor-alkali process are substantial. They occur because hydroxide ion reacts with chlorine gas; a practical cell must be designed to keep these two products separate. Commercially, two different designs have been successful. The diaphragm-cell process uses a porous barrier to separate the anodic and cathodic cell compartments. The mercury-cell process uses elemental mercury as the cathodic current collector; in this case, sodium ion is reduced, but the product is sodium amalgam (sodium–mercury alloy) not elemental sodium. Like metallic sodium, sodium amalgam reduces water, but the amalgam reaction is much slower. The amalgam is removed from the cell and reacted with water to produce sodium hydroxide and regenerate mercury for recycle to the electrolytic cell.
Elemental sodium is manufactured by the electrolysis of molten sodium chloride. This is effected commercially using an iron cathode and a carbon anode. The reaction is
$NaCl\left(\ell \right)\to {Na}^0+\frac{1}{2}\ Cl_2\left(g\right) \nonumber$
Such a cell is diagrammed in Figure 6. A mechanical barrier suffices to keep the products separate and prevent their spontaneous reaction back to the salt. A more significant problem in the design of the cell was to find an anode material that did not react with the chlorine produced. From the cell reaction, we see that one electron passes through the external circuit for every sodium atom that is produced. The charge that passes through the external circuit during the production of one mole of sodium metal is, therefore, the charge on one mole of electrons.
In honor of Michael Faraday, the magnitude of the charge carried by a mole of electrons is called the faraday. The faraday constant is denoted by the symbol “$\mathcal{F}$.” That is,
$1\mathcal{F}=\frac{6.02214\times {10}^{23}\mathrm{\ electrons}}{\mathrm{mol}}\times \frac{1.602177\times {10}^{-19}\ C}{\mathrm{electron}}=96,485\ C\ {\mathrm{mol}}^{-1} \nonumber$
The faraday is a useful unit in electrochemical calculations. The unit of electrical current, the ampere, is defined as the passage of one coulomb per second. Knowing the current in a circuit and the time for which it is passed, we can calculate the number of coulombs that are passed. Remembering the value of one faraday enables us to do stoichiometric calculations without bringing in Avogadro’s number and the electron charge every time.
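For instance, the Python sketch below (the current and electrolysis time are assumed, illustrative values) estimates the sodium produced in the molten sodium chloride cell described above.

```python
# Faraday-based stoichiometry for the molten sodium chloride cell:
# one electron passes through the external circuit per sodium atom produced.

FARADAY = 96485.0       # coulombs per mole of electrons
MOLAR_MASS_NA = 22.99   # grams per mole of sodium

current = 25.0e3        # amperes (assumed)
time = 3600.0           # seconds (one hour, assumed)

charge = current * time              # coulombs passed
moles_electrons = charge / FARADAY   # moles of electrons
moles_sodium = moles_electrons       # Na+ + e- -> Na0

print(round(moles_sodium * MOLAR_MASS_NA / 1000, 1), "kg of sodium")  # about 21.4 kg
```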
Tabulated information about the thermodynamic characteristics of half-reactions enables us to make useful predictions about what can and cannot occur in various cells that we might think of building. This information can be used to predict the potential difference that will be observed in a galvanic cell made by connecting two arbitrarily selected half-cells. In any electrolytic cell, more than one electron-transfer reaction can usually occur. In the chlor-alkali process, for example, water rather than chloride ion might be oxidized at the anode. In such cases, tabulated half-cell data enable us to predict which species can react at a particular applied potential.
17.09: Electrochemistry and Conductivity
From the considerations we have discussed, it is evident that any electrolytic cell involves a flow of electrons in an external circuit and a flow of ions within the materials comprising the cell. The function of the current collectors is to transfer electrons back and forth between the external circuit and the cell reagents.
The measurement of solution conductivity is a useful technique for determining the concentrations and mobilities of ions in solution. Since conductivity measurements involve the passage of electrical current through a liquid medium, the process must involve electrode reactions as well as motion of ions through the liquid. Normally, the electrode reactions are of little concern in conductivity measurements. The applied potential is made large enough to ensure that some electrode reaction occurs. When the liquid medium is water, the electrode reactions are usually the reduction of water at the cathode and its oxidation at the anode. The conductivity attributable to a given ionic species is approximately proportional to its concentration. In the absence of dissolved ions, little current is passed. For aqueous solutions, this just restates the familiar observation that pure water is a poor electrical conductor. When few ions are present, it is not possible to move charge through the cell quickly enough to support a significant current in the external circuit.
17.10: The Standard Hydrogen Electrode (S.H.E.)
In Section 17.4, we introduce the idea of a half-reaction and a half-cell in the context of balancing equations for oxidation–reduction reactions. The real utility of these ideas is that they correspond to distinguishable parts of actual electrochemical cells. Information about the direction of a spontaneous reaction enables us to predict the relative electrical potentials of the half-cells that make up the corresponding electrochemical cell. Conversely, given information about the characteristic electrical potentials of half-cells, we can predict what chemical reactions can occur spontaneously. In short, there is a relationship between the electrical potential of an electrochemical cell at a particular temperature and pressure and the Gibbs free energy change for the corresponding oxidation–reduction reaction.
Since cell potentials vary with the concentrations of the reactive components, we can simplify our record-keeping requirements by defining standard reference conditions that apply to a standard electrode of any type. We adopt the convention that a standard electrochemical cell contains all reactive components at unit activity. The vast majority of electrochemical cells that have been studied contain aqueous solutions. In data tables, the activity standard state for solute species is nearly always the hypothetical one-molal solution. For many purposes, it is an adequate approximation to say that all solutes are present at a concentration of one mole per liter, and all reactive gases at a pressure of one bar. (In Section 17.15, we see that the dependence of cell potential on reagent concentration is logarithmic.) In Sections 17.2 and 17.7, we discuss the silver–silver ion electrode; in this approximation, a standard silver–silver ion electrode is one in which the silver ion is present in the solution at a concentration of one mole per liter. Likewise, a standard copper–cupric ion electrode is one in which cupric ion is present in the solution at one mole per liter.
We also need to choose an arbitrary reference half-cell. The choice that has been adopted is the Standard Hydrogen Electrode, often abbreviated the S.H.E. The S.H.E. is defined as a piece of platinum metal, immersed in a unit-activity aqueous solution of a protonic acid, and over whose surface hydrogen gas, at unit fugacity, is passed continuously. These concentration choices make the electrode a standard electrode. Frequently, it is adequate to approximate the S.H.E. composition by assuming that the hydrogen ion concentration is one molar and the hydrogen gas pressure is one bar. The half-reaction associated with the S.H.E. is
$\ce{ H^{+} + e^{-} \to 1/2 H2} \nonumber$
We define the electrical potential of this half-cell to be zero volts.
17.11: Half-reactions and Half-cells
Let us consider some standard electrochemical cells we could construct using the S.H.E. Two possibilities are electrochemical cells in which the second electrode is the standard silver–silver ion electrode or the standard copper–cupric ion electrode. The diagrams in Figure 7 summarize the half-reactions and the electrical potentials that we find when we construct these cells.
We can also connect these cells so that the two S.H.E. are joined by one wire, while a second wire joins the silver and copper electrodes. This configuration is sketched in Figure 8. Whatever happens at one S.H.E. happens in the exact reverse at the other S.H.E. The net effect is essentially the same as connecting the silver–silver ion half-cell to the copper–cupric ion half-cell by a single salt bridge. If we did not already know what reaction occurs, we could figure it out from the information we have about how each of these two cells performs when it operates against the S.H.E.
17.12: Standard Electrode Potentials
We adopt a very useful convention to tabulate the potential drops across standard electrochemical cells, in which one half-cell is the S.H.E. Since the potential of the S.H.E. is zero, we define the standard electrode potential, ${\mathcal{E}}^o$, of any other standard half-cell (and its associated half-reaction) to be the potential difference when the half-cell operates spontaneously versus the S.H.E. The electrical potential of the standard half-cell determines both the magnitude and sign of the standard half-cell potential.
If the process that occurs in the half-cell reduces a solution species or the electrode material, electrons traverse the external circuit toward the half-cell. Hence, the electrical sign of the half-cell terminal is positive. By the convention, the algebraic sign of the cell potential is positive $\left({\mathcal{E}}^o>0\right)$. If the process that occurs in the half-cell oxidizes a solution species or the electrode, electrons traverse the external circuit away from the half-cell and toward the S.H.E. The electrical sign of the half-cell is negative, and the algebraic sign of the cell potential is negative $\left({\mathcal{E}}^o<0\right)$.
If we know the standard half-cell potential, we know the essential electrical properties of the standard half-cell operating spontaneously versus the S.H.E. at zero current. In particular, the algebraic sign of the standard half-cell potential tells us the direction of current flow and hence the direction of the reaction that occurs spontaneously.
An older convention associates the sign of the standard electrode potential with the direction in which an associated half-reaction is written. This convention is compatible with the definition we have chosen; however, it creates two ways of expressing the same information. The difference is whether we write the direction of the half-reaction with the electrons appearing on the right or on the left side of the equation.
When the half-reaction is written as a reduction process, with the electrons appearing on the left, the associated half-cell potential is called the reduction potential of the half-cell. Thus we would convey the information we have developed about the silver–silver ion and the copper–copper ion half cells by presenting the reactions and their associated potentials as
${Ag}^++e^-\to {Ag}^0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ +0.7992\ \mathrm{volts} \nonumber$
${Cu}^{2+}+2\ e^-\to {Cu}^0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ +0.3394\ \mathrm{volts} \nonumber$
When the half-reaction is written as a reduction process, the sign of the electrode potential is the same as the sign of the electrical potential of the half-cell when the half-cell operates spontaneously versus the S.H.E. Thus, the reduction potential has the same algebraic sign as the electrode potential of our definition.
We can convey the same information by writing the half-reaction in the reverse direction; that is, as an oxidation process in the left-to-right direction so that the electrons appear on the right. The agreed-upon convention is that we reverse the sign of the half-cell potential when we reverse the direction in which we write the equation. When the half-reaction is written as an oxidation process, the associated half-cell potential is called the oxidation potential of the half-cell. Older tabulations of electrochemical data often present half-reactions written as oxidation processes, with the electrons on the right, and present the potential information using the oxidation potential convention.
${Ag}^++e^-\to {Ag}^0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ +0.7992\ \mathrm{volts} \nonumber$
reduction potential
${Ag}^0\to {Ag}^++e^-\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ -0.7992\ \mathrm{volts} \nonumber$
oxidation potential
Note that, in the convention that we have adopted, the term half-cell potential always denotes the potential of the half-cell when it operates spontaneously versus the S.H.E. In this convention, we do not need to write the half-reaction in order to specify the standard potential. It is sufficient to specify the chemical constituents of the half-cell. This is achieved using another representational convention.
Cell Notation
This cell-describing convention lists the active components of a half-cell, using a vertical line to indicate the presence of a phase boundary like that separating silver metal from an aqueous solution containing silver ion. The silver–silver ion cell is denoted ${Ag}^+\mid {Ag}^0$. (Using the superscript zero on the symbol for elemental silver is redundant; however, it does promote clarity.) The copper–cupric ion cell is denoted
${Cu}^{2+}\mid {Cu}^0. \nonumber$
The S.H.E. is denoted $H^+\mid H_2\mid {Pt}^0$, reflecting the presence of three distinct phases in the operating electrode. A complete electrochemical cell can be described using this convention. When the complete cell contains a salt bridge, this is indicated with a pair of vertical lines, $\mathrm{\textrm{⃦}}$. A cell composed of a silver–silver ion half-cell and a S.H.E. is denoted
${Pt}^0\mid H_2\ \mid H^+\ \ \ \textrm{⃦}\ \ {Ag}^+\mid \ {Ag}^0. \nonumber$
A further convention stipulates that the half-cell with the more positive electrode potential is written on the right. Under this convention, spontaneous operation of the standard full cell transfers electrons through the external circuit from the terminal shown on the left to the terminal shown on the right.
We can now present our information about the behavior of the silver–silver ion half-cell versus the S.H.E. by writing that the standard potential of the ${Ag}^+\mid {Ag}^0$ half-cell is +0.7992 volts. The standard potential of the ${Cu}^{2+}\mid {Cu}^0$ half-cell is +0.3394 volts. The standard potential of the $H^+\mid H_2\mid {Pt}^0$ (the S.H.E.) half-cell is 0.0000 volts. Again, our definition of the standard electrode potential makes the sign of the standard electrode potential independent of the direction in which the equation of the corresponding half-reaction is written.
17.13: Predicting the Direction of Spontaneous Change
While our convention does not use the equation that we write for the half-reaction to establish the algebraic sign of the standard electrode potential, it is useful to associate the standard electrode potential with the half-reaction written as a reduction, that is, with the electrons written on the left side of the equation. We also establish the convention that reversing the direction of the half-reaction reverses the algebraic sign of its potential. When these conventions are followed, the overall reaction and the full-cell potential can be obtained by adding the corresponding half-cell information. If the resulting full-cell potential is greater than zero, the spontaneous overall reaction proceeds in the direction it is written, from left to right. If the full-cell potential is negative, the direction of spontaneous reaction is opposite to that written; that is, a negative full cell potential corresponds to the spontaneous reaction occurring from right to left. For example,
$2\ {Ag}^++2\ e^-\to 2\ {Ag}^0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ +0.7992\ \mathrm{volts} \nonumber$
${Cu}^0\to {Cu}^{2+}+2\ e^-\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ -0.3394\ \mathrm{volts} \nonumber$
$2\ {Ag}^++{Cu}^0\to 2\ {Ag}^0+{Cu}^{2+} \nonumber$ ${\mathcal{E}}^0=\ +0.4598\ \mathrm{volts} \nonumber$
yields the equation corresponding to the spontaneous reaction and a positive full-cell potential. Writing
$2\ {Ag}^0\to 2\ {Ag}^+\ +2\ e^-\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=\ -0.7992\ \mathrm{volts} \nonumber$
${Cu}^{2+}+2\ e^-\to {Cu}^0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^0=+0.3394\ \mathrm{volts} \nonumber$
$2\ {Ag}^0+{Cu}^{2+}\to 2\ {Ag}^++{Cu}^0 \nonumber$ ${\mathcal{E}}^0=\ -0.4598\ \mathrm{volts} \nonumber$
yields the equation for the non-spontaneous reaction and, correspondingly, the full-cell potential is less than zero.
Note that when we multiply a chemical equation by some factor, we do not apply the same factor to the corresponding potential. The electrical potential of the corresponding electrochemical cell is independent of the number of moles of reactants and products that we choose to write. The cell potential is an intensive property. It has the same value for a small cell as for a large one, so long as the other intensive properties (temperature, pressure, and concentrations) are the same.
17.14: Cell Potentials and the Gibbs Free Energy
In Section 17.11, we see that the electrical potential drop across the standard cell ${Pt}^0\mid H_2\ \mid H^+\ \ \ \textrm{⃦}\ \ {Ag}^+\mid \ {Ag}^0$ is 0.7992 volts. We measure this potential under conditions in which no current is flowing. That is, we find the counter-potential at which no current flows through the cell in either direction. An arbitrarily small change in the counter-potential away from this value, in either direction, is sufficient to initiate current flow. This means that the standard potential is measured when the cell is operating reversibly. By the definition of a standard cell, all of the reactants are at the standard condition of unit activity. If any finite current is drawn from a cell of finite size, the concentrations of the reagents will no longer be exactly the correct values for a standard cell. Nevertheless, we can calculate the energy that would be dissipated in the surroundings if the cell were to pass one mole of electrons (corresponding to consuming one mole of silver ions and one-half mole of hydrogen gas) through the external circuit while the cell conditions remain exactly those of the standard cell. This energy is
$96,485\ \mathrm{C}\ {\mathrm{mol}}^{-1}\times 0.7992\ \mathrm{V}=77,110\ \mathrm{J}\ {\mathrm{mol}}^{-1} \nonumber$
The form in which this energy appears in the surroundings depends on the details of the external circuit. However, we know that this energy represents the reversible work done on electrons in the external circuit as they traverse the path from the anode to the cathode. We call this the electrical work. Above we describe this as the energy change for a hypothetical reversible process in which the composition of the cell does not change. We can also view it as the energy change per electron for one electron-worth of real process, multiplied by the number of electrons in a mole. Finally, we can also describe it as the reversible work done on electrons during the reaction of one mole of silver ions in an infinitely large standard cell.
The Gibbs free energy change for an incremental reversible process is $dG = VdP - SdT + dw_{NPV}$, where $dw_{NPV}$ is the increment of non-pressure–volume work. In the case of an electrochemical cell, the electrical work is non-pressure–volume work. In the particular case of an electrochemical cell operated at constant temperature and pressure, $dP=dT=0$, and $dG = dw_{NPV} = dw_{\mathrm{elect}}$.
The electrical work is just the charge times the potential drop. Letting $n$ be the number of moles of electrons that pass through the external circuit for one unit of reaction, the total charge is $Q=-n\mathcal{F}$, where $\mathcal{F}$ is one faraday. For a standard cell, the potential drop is ${\mathcal{E}}^0$, so the work done on the electrons is $Q{\mathcal{E}}^0=-n{\mathcal{F}\mathcal{E}}^0$. Since the standard conditions for Gibbs free energies are the same as those for electrical cell potentials, we have
${w^{rev}_{elect}={\Delta }_rG}^o={-n\mathcal{F}\mathcal{E}}^o \nonumber$
If the reaction occurs spontaneously when all of the reagents are in their standard states, we have ${\mathcal{E}}^o>0$. For a spontaneous process, the work done on the system is less than zero, $w^{rev}_{elect}<0$; the work done on the surroundings is ${\hat{w}}^{rev}_{elect}=-w^{rev}_{elect}>0$; and the energy of the surroundings increases as the cell reaction proceeds. The standard potential is an intensive property; it is independent of the size of the cell and of the way we write the equation for the chemical reaction. However, the work and the Gibbs free energy change depend on the number of electrons that pass through the external circuit. We usually specify the number of electrons by specifying the chemical equation to which the Gibbs free energy change applies. That is, if the associated reaction is written as
$Ag^+ + \frac{1}{2} H_2\to Ag^0+H^+ \nonumber$
we understand that one mole of silver ions is reduced and one mole of electrons is transferred; $n=1$ and
$\Delta G^o=-\mathcal{F}{\mathcal{E}}^o$. If the reaction is written
$2\ {Ag}^++H_2\to 2\ {Ag}^0+2\ H^+ \nonumber$
we understand that two moles of silver ions are reduced and two moles of electrons are transferred, so that $n=2$ and $\Delta G^o=-2\mathcal{F}{\mathcal{E}}^o$.
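The Python sketch below, using the standard potential quoted above, illustrates how the computed Gibbs free energy change scales with $n$ while the potential does not.

```python
# Delta_G0 = -n F E0 for the silver-hydrogen cell. Doubling the written
# equation doubles n and Delta_G0, but the cell potential is unchanged.

FARADAY = 96485.0   # coulombs per mole of electrons
E0 = 0.7992         # volts

def delta_G0(n, E0):
    return -n * FARADAY * E0   # joules per unit of reaction as written

print(delta_G0(1, E0))   # about -77,110 J  (Ag+ + 1/2 H2 -> Ag0 + H+)
print(delta_G0(2, E0))   # about -154,220 J (2 Ag+ + H2 -> 2 Ag0 + 2 H+)
```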
The same considerations apply to measurement of the potential of electrochemical cells whose components are not at the standard condition of unit activity. If the cell is not a standard cell, we can still measure its potential. We use the same symbol to denote the potential, but we omit the superscript zero that denotes standard conditions. These are, of course, just the conventions we have been using to distinguish the changes in other thermodynamic functions that occur at standard conditions from those that do not. We have, therefore, for the Gibbs free energy change for the reaction occurring in an electrochemical cell that is not at standard conditions,
$w^{rev}_{elect}={\Delta }_rG=-n\mathcal{F}\mathcal{E} \nonumber$
17.15: The Nernst Equation
In Chapter 14, we find that the Gibbs free energy change is a function of the activities of the reactants and products. For the general reaction $aA+bB\to cC+dD$
we have ${\Delta }_rG={\Delta }_rG^o+RT{ \ln \frac{{\tilde{a}}^c_C{\tilde{a}}^d_D}{{\tilde{a}}^a_A{\tilde{a}}^b_B}\ } \nonumber$
Using the relationship between cell potentials and the Gibbs free energy, we find
$-n\mathcal{F}\mathcal{E}=-n\mathcal{F} \mathcal{E}^o+RT \ln \frac{\tilde{a}^c_C \tilde{a}^d_D}{\tilde{a}^a_A \tilde{a}^b_B} \nonumber$
or
$\mathcal{E}= \mathcal{E}^o-\frac{RT}{n\mathcal{F}} \ln \frac{\tilde{a}^c_C \tilde{a}^d_D}{ \tilde{a}^a_A \tilde{a}^b_B} \nonumber$
This is the Nernst equation. We derive it from our previous results for the activity dependence of the Gibbs free energy, which makes no explicit reference to electrochemical measurements at all. When we make the appropriate experimental measurements, we find that the Nernst equation accurately represents the temperature and concentration dependence of electrochemical-cell potentials.
Reagent activities are often approximated adequately by molalities or molarities, for solute species, and by partial pressures—expressed in bars—for gases. The activities of pure solid and liquid phases can be taken as unity. For example, if we consider the reaction $Ag^+ + \frac{1}{2} H_2\to Ag^0+H^+ \nonumber$
it is often sufficiently accurate to approximate the Nernst equation as
$\mathcal{E}= \mathcal{E}^o-\frac{RT}{n\mathcal{F}} \ln \frac{\left[H^+\right]}{\left[{Ag}^+\right] P^{1/2}_{H_2}} \nonumber$
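A Python sketch of this approximate form follows; the concentrations and hydrogen pressure supplied to it are assumed, illustrative values.

```python
import math

# Approximate Nernst equation for Ag+ + 1/2 H2 -> Ag0 + H+, using molarities
# and partial pressure (in bar) in place of activities.

R = 8.314     # J mol^-1 K^-1
F = 96485.0   # C mol^-1
T = 298.15    # K
E0 = 0.7992   # volts, standard cell potential
n = 1

def cell_potential(conc_H, conc_Ag, P_H2):
    reaction_quotient = conc_H / (conc_Ag * math.sqrt(P_H2))
    return E0 - (R * T / (n * F)) * math.log(reaction_quotient)

print(cell_potential(1.0, 1.0, 1.0))   # standard conditions: 0.7992 V
print(cell_potential(1.0, 0.01, 1.0))  # dilute silver ion: about 0.681 V
```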
17.16: The Nernst Equation for Half-cells
If the S.H.E. is one of the half-cells, the corresponding Nernst equation can be viewed as a description of the other half-cell. Using the cell in which the silver–silver ion electrode opposes the S.H.E., as in the preceding example, the cell potential is the algebraic sum of the potential of the silver terminal and the potential of the platinum terminal. We can represent the potential of the silver–silver ion electrode as ${\mathcal{E}}_{Ag\mid {Ag}^+}$. Since the S.H.E. is always at standard conditions, its potential, which we can represent as ${\mathcal{E}}^o_{Pt\mid H_2\mid H^+}$, is zero by definition. The cell potential is
$\mathcal{E}={\mathcal{E}}_{Ag\mid {Ag}^+}+{\mathcal{E}}^o_{Pt\mid H_2\mid H^+} \nonumber$
The potential of the cell with both half-cells at standard conditions is
${\mathcal{E}^o={\mathcal{E}}^o_{Ag\mid {Ag}^+}+\mathcal{E}}^o_{Pt\mid H_2\mid H^+} \nonumber$
and, again since the S.H.E. is at standard conditions, ${\tilde{a}}_{H^+}=1$ and $P_{H_2}=1$. Substituting into the Nernst equation for the full cell, we have
$\mathcal{E}_{Ag\mid {Ag}^+}+ \mathcal{E}^o_{Pt\mid H_2\mid H^+}= \mathcal{E}^o_{Ag\mid Ag^+}+\mathcal{E}^o_{Pt\mid H_2\mid H^+}-\frac{RT}{\mathcal{F}} \ln \frac{1}{\tilde{a}_{Ag^+}} \nonumber$
or
$\mathcal{E}_{Ag\mid {Ag}^+}= \mathcal{E}^o_{Ag\mid {Ag}^+}-\frac{RT}{\mathcal{F}} \ln \frac{1}{\tilde{a}_{Ag^+}} \nonumber$
where the algebraic signs of $\mathcal{E}_{Ag\mid {Ag}^+}$ and $\mathcal{E}^o_{Ag\mid {Ag}^+}$ correspond to writing the half-reaction in the direction $Ag^++e^-\to Ag^0$. Note that this is precisely the equation that we would obtain by writing out the Nernst equation corresponding to the chemical equation $Ag^++e^-\to Ag^0$.
To see how these various conventions work together, let us consider the oxidation of hydroquinone $\left(H_2Q\right)$ to quinone $\left(Q\right)$ by ferric ion in acidic aqueous solutions:
$2\ {Fe}^{3+}+H_2Q\rightleftharpoons \ 2\ {Fe}^{2+}+Q+2H^+ \nonumber$
The quinone–hydroquinone couple is
$Q+2H^++2e^-\rightleftharpoons H_2Q \nonumber$
and the ferric ion–ferrous ion couple is
$Fe^{3+}+e^-\rightleftharpoons Fe^{2+} \nonumber$
The standard electrode potentials are $\mathcal{E}^o_{Pt\mid Q,H_2Q,H^+}=+0.699\ \mathrm{v}$ and $\mathcal{E}^o_{Pt\mid Fe^{3+},Fe^{2+}}=+0.783\ \mathrm{v}$. In each case, the numerical value is the potential of a full cell in which the other electrode is the S.H.E. The algebraic sign of the half-cell potential is equal to the sign of the half-cell’s electrical potential when it operates versus the S.H.E.
To carry out this reaction in an electrochemical cell, we can use a salt bridge to join a $Pt\mid Fe^{3+},Fe^{2+}$ cell to a $Pt\mid Q,H_2Q,H^+$ cell. To construct a standard $Pt\mid Fe^{3+},Fe^{2+}$ cell, we need only insert a platinum wire into a solution containing ferric and ferrous ions, both at unit activity. To construct a standard $Pt\mid Q,H_2Q,H^+$ cell, we insert a platinum wire into a solution containing quinone, hydroquinone, and hydronium ion, all at unit activity. For standard half-cells, the cathode and anode reactions are
$Fe^{3+}+e^-\rightleftharpoons Fe^{2+} \nonumber$
and
$H_2Q\rightleftharpoons Q+2H^++2e^- \nonumber$
We can immediately write the Nernst equation for each of these half-reactions as
$\mathcal{E}_{Pt\mid Fe^{3+},Fe^{2+}}=\mathcal{E}^o_{Pt\mid Fe^{3+},Fe^{2+}}-\frac{RT}{\mathcal{F}} \ln \frac{\tilde{a}_{Fe^{2+}}}{\tilde{a}_{Fe^{3+}}} \nonumber$
and
$\left(-\mathcal{E}_{Pt\mid Q,H_2Q,H^+}\right)=\left(- \mathcal{E}^o_{Pt\mid Q,H_2Q,H^+}\right)-\frac{RT}{\mathrm{2}\mathcal{F}} \ln \frac{\tilde{a}_Q \tilde{a}^2_{H^+}}{\tilde{a}_{H_2Q}} \nonumber$
If we add the equations for these half-reactions, the result does not correspond to the original full-cell reaction, because the number of electrons does not cancel. This can be overcome by multiplying the ferric ion–ferrous ion half-reaction by two. What do we then do about the corresponding half-cell Nernst equation? Clearly, the values of ${\mathcal{E}}_{Pt\mid {Fe}^{3+},{Fe}^{2+}}$ and ${\mathcal{E}}^o_{Pt\mid {Fe}^{3+},{Fe}^{2+}}$ do not depend on the stoichiometric coefficients in the half-reaction equation. However, the activity terms in the logarithm’s argument do, as does the number of electrons taking part in the half-reaction. We have
$2Fe^{3+}+2e^-\rightleftharpoons 2Fe^{2+} \nonumber$
with
\begin{aligned} \mathcal{E}_{Pt\mid Fe^{3+},Fe^{2+}} & = \mathcal{E}^o_{Pt\mid Fe^{3+},Fe^{2+}}-\frac{RT}{\mathrm{2}\mathcal{F}} \ln \frac{\tilde{a}^2_{Fe^{2+}}}{\tilde{a}^2_{Fe^{3+}}} \\ ~ & =\mathcal{E}^o_{Pt\mid Fe^{3+},Fe^{2+}}-\frac{RT}{\mathcal{F}} \ln \frac{\tilde{a}_{Fe^{2+}}}{\tilde{a}_{Fe^{3+}}} \end{aligned} \nonumber
We see that we can apply any factor we please to the half-reaction. The Nernst equation gives the same dependence of the half-cell potential on reagent concentrations no matter what factor we choose. This is true also of the Nernst equation for any full-cell reaction. In the present example, adding the appropriate half-cell equations and their corresponding Nernst equations gives
$2\ Fe^{3+}+H_2Q\rightleftharpoons \ 2\ Fe^{2+}+Q+2H^+ \nonumber$
and
\begin{aligned} \mathcal{E} & = \mathcal{E}_{Pt\mid Fe^{3+},Fe^{2+}}- \mathcal{E}_{Pt\mid Q,H_2Q,H^+} \\ ~ & = \mathcal{E}^o_{Pt\mid Fe^{3+},Fe^{2+}}- \mathcal{E}^o_{Pt\mid Q,H_2Q,H^+}-\frac{RT}{\mathrm{2}\mathcal{F}} \ln \frac{\tilde{a}^2_{Fe^{2+}}}{\tilde{a}^2_{Fe^{3+}}} -\frac{RT}{\mathrm{2}\mathcal{F}} \ln \frac{\tilde{a}_Q \tilde{a}^2_{H^+}}{\tilde{a}_{H_2Q}} \\ ~ & =\mathcal{E}^0-\frac{RT}{\mathrm{2}\mathcal{F}} \ln \frac{\tilde{a}_Q \tilde{a}^2_{H^+} \tilde{a}^2_{Fe^{2+}}}{\tilde{a}_{H_2Q} \tilde{a}^2_{Fe^{3+}}} \end{aligned} \nonumber
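A Python sketch of this combined expression follows, using the standard potentials quoted above; the activities supplied to it are assumed, illustrative values.

```python
import math

# Full-cell Nernst equation for 2 Fe3+ + H2Q -> 2 Fe2+ + Q + 2 H+ (n = 2),
# built from the two standard half-cell potentials quoted above.

R, F, T = 8.314, 96485.0, 298.15
E0_cell = 0.783 - 0.699   # volts: E0(Pt | Fe3+, Fe2+) - E0(Pt | Q, H2Q, H+)
n = 2

def cell_potential(a_Fe2, a_Fe3, a_Q, a_H2Q, a_H):
    Q_rxn = (a_Q * a_H**2 * a_Fe2**2) / (a_H2Q * a_Fe3**2)
    return E0_cell - (R * T / (n * F)) * math.log(Q_rxn)

print(round(cell_potential(1, 1, 1, 1, 1), 3))     # 0.084 V at unit activities
print(round(cell_potential(1, 1, 1, 1, 1e-3), 3))  # about 0.261 V when the acid is dilute
```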
17.17: Combining two Half-cell Equations to Obtain a new Half-cell Equation
The same chemical species can be a reactant or product in many different half-cells. Frequently, data on two different half-cells can be combined to give information about a third half-cell. Let us consider two half-cells that involve the ferrous ion, ${Fe}^{2+}$. Ferrous ion and elemental iron form a redox couple. The half-cell consists of a piece of pure iron in contact with aqueous ferrous ion at unit activity. Our notation for this half-cell and its potential are $Fe\mid {Fe}^{2+}$ and ${\mathcal{E}}_{Fe\mid {Fe}^{2+}}$. The corresponding half-reaction and its potential are
$Fe^{2+}+2e^-\rightleftharpoons Fe^0 \nonumber$
and
$\mathcal{E}_{Fe\mid Fe^{2+}}= \mathcal{E}^o_{Fe\mid Fe^{2+}}-\frac{RT}{\mathrm{2}\mathcal{F}} \ln \frac{1}{\tilde{a}_{Fe^{2+}}} \nonumber$
Ferrous ion can also give up an electron at an inert electrode, forming ferric ion, $Fe^{3+}$. This process is reversible. Depending on the potential of the half-cell with which it is paired, the inert electrode can either accept an electron from the external circuit and deliver it to a ferric ion, or take an electron from a ferrous ion and deliver it to the external circuit. Thus, ferrous and ferric ions form a redox couple. Platinum metal functions as an inert electrode in this reaction. The half-cell consists of a piece of pure platinum in contact with aqueous ferrous and ferric ions, both present at unit activity. Our notation for this half-cell and potential are $Pt\mid Fe^{2+},Fe^{3+}$ and $\mathcal{E}_{Pt\mid Fe^{2+},Fe^{3+}}$. The corresponding half-reaction and its potential are
$Fe^{3+}+e^-\rightleftharpoons Fe^{2+} \nonumber$
and
$\mathcal{E}_{Pt\mid Fe^{2+},Fe^{3+}}= \mathcal{E}^o_{Pt\mid Fe^{2+},Fe^{3+}}-\frac{RT}{\mathcal{F}} \ln \frac{\tilde{a}_{Fe^{2+}}}{\tilde{a}_{Fe^{3+}}} \nonumber$
We can add these two half-reactions, to obtain
$Fe^{3+}+3e^-\rightleftharpoons Fe^0 \nonumber$
The Nernst equation for this half-reaction is
$\mathcal{E}_{Fe\mid Fe^{3+}}=\mathcal{E}^o_{Fe\mid Fe^{3+}}-\frac{RT}{\mathrm{3}\mathcal{F}} \ln \frac{1}{\tilde{a}_{Fe^{3+}}} \nonumber$
From our past considerations, both of these equations are clearly correct. However, in this case, the Nernst equation of the sum is not the sum of the Nernst equations. Nor should we expect it to be. The half-cell Nernst equations are really shorthand notation for the behavior of the half-cell when it is operated against a S.H.E. Adding half-cell Nernst equations corresponds to creating a new system by connecting the two S.H.E. electrodes of two separate full cells, as we illustrate in Figure 8. In the present instance, we are manipulating two half-reactions to obtain a new half-reaction; this manipulation does not correspond to any possible way of interconnecting the corresponding half-cells.
Nevertheless, if we know the standard potentials for the first two reactions ($\mathcal{E}^o_{Fe\mid Fe^{2+}}$ and $\mathcal{E}^o_{Pt\mid Fe^{2+},Fe^{3+}}$), we can obtain the standard potential for their sum ($\mathcal{E}^o_{Fe\mid Fe^{3+}}$). To do so, we exploit the relationship we found between electrical potential and Gibbs free energy. The first two reactions represent sequential steps that jointly achieve the same net change as the third reaction. Therefore, the sum of the Gibbs free energy changes for the first two reactions must be the same as the Gibbs free energy change for the third reaction. The standard potentials are not additive, but the Gibbs free energy changes are. We have
$\begin{array}{l l} Fe^{3+}+e^-\rightleftharpoons Fe^{2+} & \Delta G^o_{Fe^{3+}\to Fe^{2+}}=-\mathcal{F} \mathcal{E}^o_{Pt\mid Fe^{2+},Fe^{3+}} \\ Fe^{2+}+2e^-\rightleftharpoons Fe^0 & \Delta G^o_{Fe^{2+}\to Fe^0} =-2 \mathcal{F} \mathcal{E}^o_{Fe\mid Fe^{2+}} \\ \hline Fe^{3+}+3e^-\rightleftharpoons Fe^0 & \Delta G^o_{Fe^{3+}\to Fe^0} =-3 \mathcal{F} \mathcal{E}^o_{Fe\mid Fe^{3+}} \end{array} \nonumber$
Since also
$\Delta G^o_{Fe^{3+}\to Fe^{2+}}+ \Delta G^o_{Fe^{2+} \to Fe^0}=\Delta G^o_{Fe^{3+} \to Fe^0} \nonumber$
we have
$\mathcal{F} \mathcal{E}^o_{Pt\mid Fe^{2+},Fe^{3+}}+2\mathcal{F} \mathcal{E}^o_{Fe\mid Fe^{2+}}=3\mathcal{F} \mathcal{E}^o_{Fe\mid Fe^{3+}} \nonumber$
and
$\mathcal{E}^o_{Fe\mid Fe^{3+}}=\frac{\mathcal{E}^o_{Pt\mid Fe^{2+},Fe^{3+}}+2 \mathcal{E}^o_{Fe\mid Fe^{2+}}}{3} \nonumber$
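Numerically, with the potentials used elsewhere in this chapter (+0.783 v for the ferric ion–ferrous ion couple and -0.447 v for the ferrous ion–iron couple; compare problem 3 below), a short Python sketch of this free-energy bookkeeping gives the expected result.

```python
# Half-cell potentials are not additive, but the Gibbs free energies
# Delta_G0 = -n F E0 are. Potentials below are taken from this chapter.

FARADAY = 96485.0     # coulombs per mole of electrons
E0_Fe3_Fe2 = 0.783    # v: Fe3+ + e-  -> Fe2+
E0_Fe2_Fe0 = -0.447   # v: Fe2+ + 2e- -> Fe0

dG_step1 = -1 * FARADAY * E0_Fe3_Fe2   # Fe3+ -> Fe2+
dG_step2 = -2 * FARADAY * E0_Fe2_Fe0   # Fe2+ -> Fe0

E0_Fe3_Fe0 = -(dG_step1 + dG_step2) / (3 * FARADAY)
print(round(E0_Fe3_Fe0, 3))   # -0.037 v for Fe3+ + 3e- -> Fe0
```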
17.18: The Nernst Equation and the Criterion for Equilibrium
In Section 17.15 we find for the general reaction $aA+bB\to cC+dD$ that the Nernst equation is
$\mathcal{E}= \mathcal{E}^o-\frac{RT}{n\mathcal{F}} \ln \frac{\tilde{a}^c_C \tilde{a}^d_D}{\tilde{a}^a_A \tilde{a}^b_B} \nonumber$
We now want to consider the relationship between the potential of an electrochemical cell and the equilibrium position of the cell reaction. If the potential of the cell is not zero, short-circuiting the terminals of the cell will cause electrons to flow in the external circuit and reaction to proceed spontaneously in the cell. Since a spontaneous reaction occurs, the cell is not at equilibrium with respect to the cell reaction.
As we draw current from any electrochemical cell, cell reactants are consumed and cell products are produced. Experimentally, we see that the cell voltage decreases continuously, and inspection of the Nernst equation shows that it predicts a potential decrease. Eventually, the voltage of a short-circuited cell decreases to zero. No further current is passed. The cell reaction stops; it has reached chemical equilibrium. If the cell potential is zero, the cell reaction must be at equilibrium, and vice versa.
We also know that, at equilibrium, the activity ratio that appears as the argument of the logarithmic term is a constant—the equilibrium constant. So when $\mathcal{E}=0$, we have also that
$K_a=\frac{{\tilde{a}}^c_C{\tilde{a}}^d_D}{{\tilde{a}}^a_A{\tilde{a}}^b_B} \nonumber$
Substituting these conditions into the Nernst equation, we obtain $0={\mathcal{E}}^o-\frac{RT}{n\mathcal{F}}{ \ln K_a\ } \nonumber$ or $K_a=\mathrm{exp}\left(\frac{n\mathcal{F}{\mathcal{E}}^o}{RT}\right) \nonumber$
We can obtain this same result if we recall that $\Delta G^o=-RT{ \ln K_a\ }$ and that $\Delta G^o=-n\mathcal{F}{\mathcal{E}}^o$. We can determine equilibrium constants by measuring the potentials of standard cells. Alternatively, we can measure an equilibrium constant and determine the potential of the corresponding cell without actually constructing it. Standard potentials and equilibrium constants are both measures of the Gibbs free energy change when the reaction occurs under standard conditions.
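The Python sketch below evaluates this relation for the silver–copper reaction considered in Section 17.13 ($\mathcal{E}^o=+0.4598$ v, $n=2$) at 298.15 K; it is an illustration of the formula, not a tabulated result.

```python
import math

# Equilibrium constant from a standard cell potential:
# K_a = exp(n F E0 / (R T)), here for 2 Ag+ + Cu0 -> 2 Ag0 + Cu2+.

R = 8.314     # J mol^-1 K^-1
F = 96485.0   # C mol^-1
T = 298.15    # K
E0 = 0.4598   # volts
n = 2

K_a = math.exp(n * F * E0 / (R * T))
print(f"K_a = {K_a:.2e}")   # roughly 3.5e15
```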
Problems
1. Balance the following chemical equations assuming that they occur in aqueous solution.
(a) ${Cu}^0+{Ag}^+\to {Cu}^{2+}+{Ag}^0$
(b) ${Fe}^{2+}+{Cr}_2O^{2-}_7\to {Fe}^{3+}+{Cr}^{3+}$
(c) ${Cr}^{2+}+{Cr}_2O^{2-}_7\to {Cr}^{3+}$
(d) ${Cl}_2+{Br}^-\to {Cl}^-+{Br}_2$
(e) $ClO^-_3\to {Cl}^-+ClO^-_4$
(f) $I^-+{IO}^-_3\to I_2$
(g) $I^-+O_2\to I_2+{OH}^-$ (basic solution)
(h) $H_2C_2O_4+MnO^-_4\to {CO}_2+{Mn}^{2+}$
(i) ${Fe}^{2+}+MnO^-_4\to {Fe}^{3+}+{Mn}^{2+}$
(j) $H_2O_2+MnO^-_4\to {Mn}^{2+}+O_2$
(k) $PbO_2+{Pb}^0+H_2SO_4\to {PbSO}_4$
(l) ${Fe}^{2+}\to {Fe}^0+{Fe}^{3+}$
(m) ${Cu}^{2+}+{Fe}^0\to {Cu}^0+{Fe}^{3+}$
(n) ${Al}^0+{OH}^-\to H_2+Al{\left(OH\right)}^-_4$ (basic solution)
(o) ${Au}^0+{CN}^-+O_2\to Au{\left(CN\right)}^-_4$ (basic solution)
(p) ${Cu}^0+{HNO}_3\to {Cu}^{2+}+NO_2$
(q) ${Al}^0+{Fe}_2O_3\to {Al}_2O_3+{Fe}^0$
(r) $I^-+H_2O_2\to I_2+{OH}^-$ (basic solution)
(s) $HFeO^-_4+{Mn}^{2+}\to {Fe}^{3+}+{MnO}_2$
(t) ${Fe}^{2+}+S_2O^{2-}_8\to {Fe}^{3+}+SO^{2-}_4$
(u) ${Cu}^++O_2\to {Cu}^{2+}+{OH}^-$ (basic solution)
2. Calculate the equilibrium constant, $K_a$, for the reaction ${Cu}^0+2{Ag}^+\to {Cu}^{2+}+2{Ag}^0$. An excess of clean copper wire is placed in a ${10}^{-1}\ \underline{\mathrm{M}}$ silver nitrate solution. Assuming that molarities adequately approximate the activities of the ions, find the equilibrium concentrations of ${Ag}^+$ and ${Cu}^{2+}$.
3. The standard potentials for reduction of ${Fe}^{2+}$ and ${Fe}^{3+}$ to ${Fe}^0$ are
${Fe}^{2+}+2e^-\to {Fe}^0\ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^o=-0.447\ \mathrm{v} \nonumber$ ${Fe}^{3+}+3e^-\to {Fe}^0\ \ \ \ \ \ \ \ \ \ {\mathcal{E}}^o=-0.037\ \mathrm{v} \nonumber$
(a) Find the standard potential for the disproportionation of ${Fe}^{2+}$ to ${Fe}^{3+}$ and ${Fe}^0$: $\ \ \ {Fe}^{2+}\to {{Fe}^0+Fe}^{3+}$.
(b) Find the standard half-cell potential for the reduction of ${Fe}^{3+}$ to ${Fe}^{2+}$: ${Fe}^{3+}+e^-\to {Fe}^{2+}$.
4. The standard potential for reduction of tris-ethylenediamineruthenium (III) to tris-ethylenediamineruthenium (II) is
${\left[Ru{\left(en\right)}_3\right]}^{3+}+e^-\to {\left[Ru{\left(en\right)}_3\right]}^{2+} {\mathcal{E}}^o=+0.210\ \mathrm{v} \nonumber$
Half-cell potential data are given below for several oxidants. Which of them can oxidize ${\left[Ru{\left(en\right)}_3\right]}^{2+}$ to ${\left[Ru{\left(en\right)}_3\right]}^{3+}$ in acidic ($\left[H^+\right]\approx {\tilde{a}}_{H^+}={10}^{-1}$) aqueous solution?
(a) $UO^{2+}_2+e^-\to UO^+_2$ ${\mathcal{E}}^o=+0.062\ \mathrm{v}$
(b) ${\left[Ru{\left(NH_3\right)}_6\right]}^{3+}+e^-\to {\left[Ru{\left(NH_3\right)}_6\right]}^{2+}$ ${\mathcal{E}}^o=+0.10\ \mathrm{v}$
(c) ${Cu}^{2+}+e^-\to {Cu}^+$ ${\mathcal{E}}^o=+0.153\ \mathrm{v}$
(d) $AgCl+e^-\to {Ag}^0+{Cl}^-$ ${\mathcal{E}}^o=+0.222\ \mathrm{v}$
(e) ${Hg}_2{Cl}_2+2e^-\to 2{Hg}^0+2{Cl}^-$ ${\mathcal{E}}^o=+0.268\ \mathrm{v}$
(f) $AgCN+e^-\to {Ag}^0+{CN}^-$ ${\mathcal{E}}^o=-0.017\ \mathrm{v}$
(g) $SnO_2+4H^++4e^-\to {Sn}^0+2H_2O$ ${\mathcal{E}}^o=-0.117\ \mathrm{v}$
5. An electrochemical cell is constructed in which one half cell is a standard hydrogen electrode and the other is a hydrogen electrode immersed in a solution of $pH=7$ $\left(\left[H^+\right]\approx {\tilde{a}}_{H^+}={10}^{-7}\right)$. What is the potential difference between the terminals of the cell? What chemical change occurs in this cell?
6. The standard half-cell potential for the reduction of oxygen gas at an inert electrode (like platinum metal) is
$O_2+4H^++4e^-\to 2H_2O {\mathcal{E}}^o=+1.229\ \mathrm{v} \nonumber$
An electrochemical cell is constructed in which one half cell is a standard hydrogen electrode and the other half cell is a piece of platinum metal, immersed in a $1\ \underline{M}$ solution of $HClO_4$, which is continuously in contact with bubbles of oxygen gas at a pressure of $1$ bar.
(a) What is the potential difference between the terminals of the cell? What chemical change occurs in this cell?
(b) The $1\ \underline{M}$ $HClO_4$ solution in part (a) is replaced with pure water $\left(\left[H^+\right]\approx {\tilde{a}}_{H^+}={10}^{-7}\right)$. What is the potential difference between the terminals of this cell?
(c) The $1\ \underline{M}$ $HClO_4$ solution in part (a) is replaced with $1\ \underline{M}$ $NaOH$ $\left(\left[H^+\right]\approx {\tilde{a}}_{H^+}={10}^{-14}\right)$. What is the potential difference between the terminals of this cell?
7. A variable electrical potential source is introduced into the external circuit of the cell in part (a) of problem 6. The negative terminal of the potential source is connected to the oxygen electrode and the positive terminal of the potential source is connected to the standard hydrogen electrode. If the applied electrical potential is 1.3 v, what chemical change occurs? What is the minimum electrical potential that must be applied to electrolyze water if the oxygen electrode contains a $1\ \underline{M}$ $HClO_4$ solution? A neutral ($pH=7$) solution? A $1\ \underline{M}$ $NaOH$ solution?
8. Two platinum electrodes are immersed in $1\ \underline{M}$ $HClO_4$. What potential difference must be applied between these electrodes in order to electrolyze water? (Assume that $P_{O_2}=1\ \mathrm{bar}$ and $P_{H_2}=1\ \mathrm{bar}$ at their respective electrodes, as will be the case as soon as a few bubbles of gas have accumulated at each electrode.) What potential difference is required if the electrodes are immersed in pure water? In $1\ \underline{M}$ $NaOH$?
• 18.1: Energy Distributions and Energy Levels
The probability that the energy of a particular molecule is in a particular interval is intimately related to the energies that it is possible for a molecule to have. Before we can make further progress in describing molecular energy distributions, we must discuss atomic and molecular energies. For our development of the Boltzmann equation, we need to introduce the idea of quantized energy states.
• 18.2: Quantized Energy - De Broglie's Hypothesis and the Schroedinger Equation
Subsequent to Planck’s proposal that energy is quantized, the introduction of two further concepts led to the theory of quantum mechanics. The first was Einstein’s relativity theory, and his deduction from it of the equivalence of matter and energy. The second was de Broglie’s hypothesis that any particle of mass m moving at velocity v , behaves like a wave. De Broglie’s hypothesis is an independent postulate about the structure of nature.
• 18.3: The Schrödinger Equation for A Particle in A Box
The particle in a box provides a convenient illustration of the principles involved in setting up and solving the Schrödinger equation. Besides being a good illustration, the problem also proves to be a useful approximation to many physical systems. The statement of the problem is simple. We have a particle of mass m that is constrained to move only in one dimension. For locations in the box, the particle has zero potential energy. Outside the box, the particle has infinite potential energy.
• 18.4: The Schrödinger Equation for a Molecule
• 18.5: Solutions to Schroedinger Equations for Harmonic Oscillators and Rigid Rotors
• 18.6: Wave Functions, Quantum States, Energy Levels, and Degeneracies
We approximate the wave function for a molecule by using a product of approximate wave functions, each of which models some subset of the motions that the molecule undergoes. In general, the wave functions that satisfy the molecule’s Schrödinger equation are degenerate; that is, two or more of these wave functions have the same energy.
• 18.7: Particle Spins and Statistics- Bose-Einstein and Fermi-Dirac Statistics
The spin of a particle is an important quantum mechanical property. It turns out that quantum mechanical solutions depend on the spin of the particle being described. Particles with integral spins behave differently from particles with half-integral spins. When we treat the statistical distribution of these particles, we need to treat particles with integral spins differently from particles with half-integral spins.
18: Quantum Mechanics and Molecular Energy Levels
Beginning in Chapter 20, we turn our attention to the distribution of energy among the molecules in a closed system that is immersed in a constant-temperature bath, that is at equilibrium, and that contains a large number of molecules. We want to find the probability that the energy of a molecule in such a system is in a particular interval of energy values. This probability is also the fraction of the molecules whose energies are in the specified interval, since we assume that these statements mean the same thing for a system at equilibrium.
The probability that the energy of a particular molecule is in a particular interval is intimately related to the energies that it is possible for a molecule to have. Before we can make further progress in describing molecular energy distributions, we must discuss atomic and molecular energies. For our development of the Boltzmann equation, we need to introduce the idea of quantized energy states. This requires a short digression on the basic ideas of quantum mechanics and the quantized energy levels of atoms and molecules.
We have derived two expressions that relate the energy of a molecule to the probability that the molecule will have that energy. One follows from the barometric formula
\begin{align*} \eta \left(h\right) &=\eta \left(0\right)\mathrm{exp}\left(\frac{-mgh}{kT}\right) \[4pt] &=\eta \left(0\right)\mathrm{exp}\left(\frac{-{\epsilon }_{potential}}{kT}\right) \end{align*}
in which the number density of molecules depends exponentially on their gravitational potential energy, $mgh$, and the reciprocal of the temperature. From the barometric formula, we can find the probability density function
$\frac{df}{dh}=\frac{mg}{kT}\mathrm{exp}\left(\frac{-mgh}{kT}\right) \nonumber$
(See problem 3.22) The other is the Maxwell-Boltzmann distribution function
$\frac{df}{dv}=4\pi {\left(\frac{m}{2\pi kT}\right)}^{3/2}v^2\mathrm{exp}\left(\frac{-mv^2}{2kT}\right)=4\pi {\left(\frac{m}{2\pi kT}\right)}^{3/2}v^2\mathrm{exp}\left(\frac{-{\epsilon }_{kinetic}}{kT}\right) \nonumber$
in which the probability density of molecular velocities depends exponentially on their kinetic energies, ${{mv}^2}/{2}$, and the reciprocal of the temperature. We will see that this dependence is very general. Any time the molecules in a system can have a range of energies, the probability that a molecule has energy $\epsilon$ is proportional to $\mathrm{exp}\left({-\epsilon }/{kT}\right)$. The exponential term, $\mathrm{exp}\left({-\epsilon }/{kT}\right)$, is often called the Boltzmann factor.
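Because the Boltzmann factor recurs throughout the chapters that follow, it may help to see how quickly it decays with energy. The sketch below assumes only the SI value of the Boltzmann constant and evaluates $\mathrm{exp}\left({-\epsilon }/{kT}\right)$ for the gravitational potential energy of a nitrogen molecule at a hypothetical altitude of 10 km; the mass, altitude, and temperature are illustration values, not data from the text.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J K^-1

def boltzmann_factor(energy, T):
    """Return exp(-energy / (k T)) for an energy in joules at temperature T in kelvin."""
    return math.exp(-energy / (k * T))

# Hypothetical illustration: an N2 molecule (m ~ 4.65e-26 kg) at h = 10 km, T = 298 K
m, g, h, T = 4.65e-26, 9.81, 1.0e4, 298.0
print(boltzmann_factor(m * g * h, T))   # ~ 0.33; the number density falls to about a third
```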
We might try to develop a more general version of the Maxwell-Boltzmann distribution function by an argument that somehow parallels our derivation of the Maxwell-Boltzmann equation. It turns out that any such attempt is doomed to failure, because it is based on a fundamentally incorrect view of nature. In developing the barometric formula and the Maxwell-Boltzmann distribution, we assume that the possible energies are continuous; a molecule can be at any height above the surface of the earth, and its translational velocity can have any value. When we turn to the distribution of other ways in which molecules can have energy, we find that this assumption produces erroneous predictions about the behavior of macroscopic collections of molecules.
The failure of such attempts led Max Planck to the first formulation of the idea that energy is quantized. The spectrum of light emitted from glowing-hot objects (so-called “black bodies”) depends on the temperature of the emitting object. Much of the experimentally observable behavior of light can be explained by the hypothesis that light behaves like a wave. Mechanical (matter-displacement) waves carry energy; the greater the amplitude of the wave, the more energy it carries. Now, light is a form of energy, and a spectrum is an energy distribution. It was a challenge to late nineteenth century physics to use the wave model for the behavior of light to predict experimentally observed emission spectra. This challenge went unmet until Planck introduced the postulate that such “black-body radiators” absorb or emit electromagnetic radiation only in discrete quantities, called quanta. Planck proposed that the energy of one such quantum is related to the frequency, $\nu$, of the radiation by the equation $E=h\nu$, where the proportionality constant, $h$, is now called Planck’s constant. In Planck’s model, the energy of an electromagnetic wave depends on its frequency, not its amplitude.
In the years following Planck’s hypothesis, it became clear that many properties of atoms and molecules are incompatible with the idea that an atom or molecule can have any arbitrary energy. We obtain agreement between experimental observations and theoretical models only if we assume that atoms and molecules can have only very particular energies. This is observed most conspicuously in the interactions of atoms and molecules with electromagnetic radiation. One such interaction gives rise to a series of experimental observations known as the photoelectric effect. In order to explain the photoelectric effect, Albert Einstein showed that it is necessary to extend Planck’s concept to assume that light itself is a stream of discrete energy quanta, called photons. In our present understanding, it is necessary to describe some of the properties of light as wave-like and some as particle-like.
In many absorption and emission spectra, we find that a given atom or molecule can emit or absorb electromagnetic radiation only at very particular frequencies. For example, the light emitted by atoms excited by an electrical discharge contains a series of discrete emission lines. When it is exposed to a continuous spectrum of frequencies, an atom is observed to absorb light at precisely the discrete frequencies that are observed in emission. Niels Bohr explained these observations by postulating that the electrons in atoms can have only particular energies. The absorption of visible light by atoms and molecules occurs when an electron takes up electromagnetic energy and moves from one discrete energy level to a second, higher, one. (Absorption of a continuous range of frequencies begins to occur only when the light absorbed provides sufficient energy to separate an electron from the original chemical species, producing a free electron and a positively charged ion. At the onset frequency, neither of the product species has any kinetic energy. Above the onset frequency, spectra are no longer discrete, and the species produced have increasingly greater kinetic energies.) Similar discrete absorption lines are observed for the absorption of infrared light and microwave radiation by diatomic or polyatomic gas molecules. Infrared absorptions are associated with vibrational motions, and microwave absorptions are associated with rotational motions of the molecule about its center of mass. These phenomena are explained by the quantum theory.
The quantized energy levels of atoms and molecules can be found by solving the Schrödinger equation for the system at hand. To see the basic ideas that are involved, we discuss the Schrödinger equation and some of the most basic approximations that are made in applying it to the description of atomic and molecular systems. But first, we should consider one more preliminary question: If the quantum hypothesis is so important to obtaining valid equations for the distribution of energies, why are the derivations of the Maxwell-Boltzmann equation and the barometric formula successful? Maxwell’s derivation is successful because the quantum mechanical description of a molecule’s translational kinetic energy is very well approximated by the assumption that the molecule’s kinetic energy can have any value. In the language of quantum mechanics, the number of translational energy levels available to a molecule is very large, and successive energy levels are very close together—so close together that it is a good approximation to say that they are continuous. Similarly, the gravitational potential energies available to a molecule in the earth’s atmosphere are well approximated by the assumption that they belong to a continuous distribution.
Subsequent to Planck’s proposal that energy is quantized, the introduction of two further concepts led to the theory of quantum mechanics. The first was Einstein’s relativity theory, and his deduction from it of the equivalence of matter and energy. The relativistic energy of a particle is given by
$E^2=p^2c^2+m^2_0c^4 \nonumber$
where $p$ is the momentum and $m_0$ is the mass of the particle when it is at rest. The second was de Broglie’s hypothesis that any particle of mass $m$ moving at velocity $v$, behaves like a wave. De Broglie’s hypothesis is an independent postulate about the structure of nature. In this respect, its status is the same as that of Newton’s laws or the laws of thermodynamics. Nonetheless, we can construct a line of thought that is probably similar to de Broglie’s, recognizing that these are heuristic arguments and not logical deductions.
We can suppose that de Broglie’s thinking went something as follows: Planck and Einstein have proposed that electromagnetic radiation—a wave-like phenomenon—has the particle-like property that it comes in discrete lumps (photons). This means that things we think of as waves can behave like particles. Conversely, the lump-like photons behave like waves. Is it possible that other lump-like things can behave like waves? In particular is it possible that material particles might have wave-like properties? If a material particle behaves like a wave, what wave-like properties should it exhibit?
Well, if we are going to call something a wave, it must have a wavelength, $\lambda$, a frequency, $\nu$, and a propagation velocity, $v$, and these must be related by the equation $v=\lambda \nu$. The velocity of propagation of light is conventionally given the symbol $c$, so $c=\lambda \nu$. The Planck-Einstein hypothesis says that the energy of a particle (photon) is $E=h\nu ={hc}/{\lambda }$. Einstein proposes that the energy of a particle is given by $E^2=p^2c^2+m^2_0c^4$. A photon travels at the speed of light. This is compatible with other relativistic equations only if the rest mass of a photon is zero. Therefore, for a photon, we must have $E=pc$. Equating these energy equations, we find that the momentum of a photon is
$p={h}/{\lambda } \nonumber$
Now in a further exercise of imagination, we can suppose that this equation applies also to any mass moving with any velocity. Then we can replace $p$ with $mv$, and write $mv={h}/{\lambda } \nonumber$
We interpret this to mean that any mass, $m$, moving with velocity, $v$, has a wavelength, $\lambda$, given by
$\lambda ={h}/{mv} \nonumber$
This is de Broglie’s hypothesis. We have imagined that de Broglie found it by a series of imaginative—and not entirely logical—guesses and suppositions. The illogical parts are the reason we call the result a hypothesis rather than a derivation, and the originality of the guesses and suppositions is the reason de Broglie’s hypothesis was new. It is important physics, because it turns out to be experimentally valid. Very small particles do exhibit wave-like properties, and de Broglie’s hypothesis correctly predicts their wavelengths.
In a similar vein, we can imagine that Schrödinger followed a line of thought something like this: de Broglie proposes that any moving particle behaves like a wave whose wavelength depends on its mass and velocity. If a particle behaves as a wave, it should have another wave property; it should have an amplitude. In general, the amplitude of a wave depends on location and time, but we are thinking about a rather particular kind of wave, a wave that—so to speak—stays where we put it. That is, our wave is supposed to describe a particle, and particles do not dissipate themselves in all directions like the waves we get when we throw a rock in a pond. We call a wave that stays put a standing wave; it is distinguished by the fact that its amplitude depends on location but not on time.
Mathematically, the amplitude of any wave can be described as a sum of (possibly many) sine and cosine terms. A single sine term describes a simple wave. If it is a standing wave, its amplitude depends only on distance, and its amplitude is the same for any two points separated by one wavelength. Letting the amplitude be $\psi$, this standing wave is described by $\psi \left(x\right)=A{\mathrm{sin} \left(ax\right)\ }$, where $x$ is the location, expressed as a distance from the origin at $x=0$. In this wave equation, $A$ and $a$ are parameters that fix the maximum amplitude and the wavelength, respectively. Requiring the wavelength to be $\lambda$ means that $a\lambda =2\pi$. (Since $\psi$ is a sine function, it repeats every time its argument increases by $2\pi$ radians. We require that $\psi$ repeat every time its argument increases by $a\lambda$ radians, which requires that $a\lambda =2\pi$.) Therefore, we have $a={2\pi }/{\lambda } \nonumber$
and the wave equation must be
$\psi \left(x\right)=A{\mathrm{sin} \left({2\pi x}/{\lambda }\right)\ } \nonumber$
Equations, $\psi$, that describe standing waves satisfy the differential equation
$\frac{d^2\psi }{dx^2}=-C\psi \nonumber$
where $C$ is a constant. In the present instance, we see that
$\frac{d^2\psi }{dx^2}=-{\left(\frac{2\pi }{\lambda }\right)}^2A{\mathrm{sin} \left(\frac{2\pi x}{\lambda }\right)\ }=-{\left(\frac{2\pi }{\lambda }\right)}^2\psi \nonumber$
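If symbolic differentiation is more convincing than inspection, the small sympy sketch below (my own check, not part of the original development) verifies that $\psi \left(x\right)=A{\mathrm{sin} \left({2\pi x}/{\lambda }\right)\ }$ satisfies $d^2\psi/dx^2=-{\left({2\pi }/{\lambda }\right)}^2\psi$.

```python
import sympy as sp

x, A, lam = sp.symbols('x A lambda', positive=True)

# The standing wave psi(x) = A sin(2*pi*x/lambda)
psi = A * sp.sin(2 * sp.pi * x / lam)

# d^2(psi)/dx^2 + (2*pi/lambda)^2 * psi should simplify to zero
print(sp.simplify(sp.diff(psi, x, 2) + (2 * sp.pi / lam)**2 * psi))   # 0
```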
From de Broglie’s hypothesis, we have $\lambda ={h}/{mv}$, so that the constant $C$ can be written as
$C={\left(\frac{2\pi }{\lambda }\right)}^2={\left(\frac{2\pi mv}{h}\right)}^2={\left(\frac{2\pi }{h}\right)}^2\left(2m\right)\left(\frac{{mv}^2}{2}\right)=\left(\frac{8{\pi }^2m}{h^2}\right)\left(\frac{{mv}^2}{2}\right) \nonumber$
Let $T$ be the kinetic energy, ${{mv}^2}/{2}$, and let $V$ be the potential energy of our wave-like particle. Then its energy is $E=T+V$, and we have ${{mv}^2}/{2}=T=E-V$.
The constant $C$ becomes
$C=\left(\frac{8{\pi }^2m}{h^2}\right)T=\left(\frac{8{\pi }^2m}{h^2}\right)\left(E-V\right) \nonumber$
Making this substitution for $C$, we find a differential equation that describes a standing wave, whose wavelength satisfies the de Broglie equation. This is the time-independent Schrödinger equation in one dimension:
$\frac{d^2\psi }{dx^2}=-\left(\frac{8{\pi }^2m}{h^2}\right)\left(E-V\right)\psi \nonumber$ or $-\left(\frac{h^2}{8{\pi }^2m}\right)\frac{d^2\psi }{dx^2}+V\psi =E\psi \nonumber$
Often the latter equation is written as
$\left[-\left(\frac{h^2}{8{\pi }^2m}\right)\frac{d^2}{dx^2}+V\right]\psi =E\psi \nonumber$
where the expression in square brackets is called the Hamiltonian operator and abbreviated to $H$, so that the Schrödinger equation becomes simply, if cryptically,
$H\psi =E\psi \nonumber$
If we know how the potential energy of a particle, $V$, depends on its location, we can write down the Hamiltonian operator and the Schrödinger equation that describe the wave properties of the particle. Then we need to find the wave equations that satisfy this differential equation. This can be difficult even when the Schrödinger equation involves only one particle. When we write the Schrödinger equation for a system containing multiple particles that interact with one another, as for example an atom containing two or more electrons, analytical solutions become unattainable; only approximate solutions are possible. Fortunately, a great deal can be done with approximate solutions.
The Schrödinger equation identifies the value of the wavefunction, $\psi \left(x\right)$, with the amplitude of the particle wave at the location $x$. Unfortunately, there is no physical interpretation for $\psi \left(x\right)$; that is, no measurable quantity corresponds to the value of $\psi \left(x\right)$. There is, however, a physical interpretation for the product $\psi \left(x\right)\psi \left(x\right)$ or ${\psi }^2\left(x\right)$. [More accurately, the product $\psi \left(x\right){\psi }^*\left(x\right)$, where ${\psi }^*\left(x\right)$ is the complex conjugate of $\psi \left(x\right)$. In general, $\psi \left(x\right)$ is a complex quantity.] ${\psi }^2\left(x\right)$ is the probability density function for the particle whose wavefunction is $\psi \left(x\right)$. That is, the product ${\psi }^2\left(x\right)dx$ is the probability of finding the particle within a small distance, $dx$, of the location $x$. Since the particle must be somewhere, we also have
$1=\int^{+\infty }_{-\infty }{{\psi }^2\left(x\right)}dx \nonumber$
A problem usually called the particle in a box provides a convenient illustration of the principles involved in setting up and solving the Schrödinger equation. Besides being a good illustration, the problem also proves to be a useful approximation to many physical systems. The statement of the problem is simple. We have a particle of mass $m$ that is constrained to move only in one dimension. For locations in the interval $0\le x\le \ell$, the particle has zero potential energy. For locations outside this range, the particle has infinite potential energy. Since the particle cannot have infinite energy, this means that it can never find its way into locations outside of the interval $0\le x\le \ell$. We can think of this particle as a bead moving on a wire, with stops located on the wire at $x=0$ and at $x=\ell$. We can also think of it as being confined to a one-dimensional box of length $\ell$, which is the viewpoint represented by the name. The particle in a box model is diagrammed in Figure 1.
The potential energy constraints mean that the amplitude of the particle’s wavefunction must be zero, $\psi \left(x\right)=0$, when the value of $x$ lies in the interval
$-\infty < x < 0 \nonumber$
or
$\ell < x < + \infty \nonumber$
We assume that the probability of finding the particle cannot change abruptly when its location changes by an arbitrarily small amount. This means that the wavefunction must be continuous, and it follows that $\psi \left(0\right)=0$ and $\psi \left(\ell \right)=0$. Inside the box, the particle’s Schrödinger equation is
$-\left(\frac{h^2}{8{\pi }^2m}\right)\frac{d^2\psi }{dx^2}=E\psi \nonumber$
and we seek those functions $\psi \left(x\right)$ that satisfy both this differential equation and the constraint equations $\psi \left(0\right)=0$ and $\psi \left(\ell \right)=0$. It turns out that there are infinitely many such solutions, ${\psi }_n$, each of which corresponds to a unique energy level, $E_n$.
To find these solutions, we first guess—guided by our considerations in §2—that solutions will be of the form $\psi \left(x\right)=A{\mathrm{sin} \left(ax\right)\ }+B{\mathrm{cos} \left(bx\right)\ } \nonumber$
A solution must satisfy
$\psi \left(0\right)=A{\mathrm{sin} \left(0\right)\ }+B{\mathrm{cos} \left(0\right)\ }=B{\mathrm{cos} \left(0\right)=0\ } \nonumber$
so that $B=0$. At the other end of the box, we must have
$\psi \left(\ell \right)=A{\mathrm{sin} \left(a\ell \right)\ }=0 \nonumber$
which means that $a\ell =n\pi$, where $n$ is any positive integer: $n\ =1,\ 2,\ \dots .$ Hence, we have
$a={n\pi }/{\ell } \nonumber$
and the only equations of the proposed form that satisfy the conditions at the ends of the box are
${\psi }_n\left(x\right)=A{\mathrm{sin} \left({n\pi x}/{\ell }\right)\ } \nonumber$
To test whether these equations satisfy the Schrödinger equation, we check
$-\left(\frac{h^2}{8{\pi }^2m}\right)\frac{d^2}{dx^2}\left[A{\mathrm{sin} \left(\frac{n\pi x}{\ell }\right)\ }\right]=E_n{\psi }_n \nonumber$
and find
$\left(\frac{h^2}{8{\pi }^2m}\right){\left(\frac{n\pi }{\ell }\right)}^2\left[A{\mathrm{sin} \left(\frac{n\pi x}{\ell }\right)\ }\right]=\left(\frac{n^2h^2}{8{m\ell }^2}\right){\psi }_n=E_n{\psi }_n \nonumber$
so that the wavefunctions ${\psi }_n\left(x\right)=A{\mathrm{sin} \left({n\pi x}/{\ell }\right)\ }$ are indeed solutions and the energy, $E_n$, associated with the wavefunction ${\psi }_n\left(x\right)$ is
$E_n=\frac{n^2h^2}{8{m\ell }^2} \nonumber$
We see that the energy values are quantized; although there are infinitely many energy levels, $E_n$, only very particular real numbers—those given by the equation above—correspond to energies that the particle can have. If we sketch the first few wavefunctions, ${\psi }_n\left(x\right)$, we see that there are always $n-1$ locations inside the box at which ${\psi }_n\left(x\right)$ is zero. These locations are called nodes. Once we know $n$, we know the number of nodes, and we can sketch the general shape of the corresponding wavefunction. The first three wavefunctions and their squares are sketched in Figure 2.
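To get a feeling for the magnitudes involved, we can evaluate $E_n={n^2h^2}/{8m{\ell }^2}$ numerically. The sketch below assumes an electron confined to a hypothetical box of length 1 nm; the numbers are illustrative only.

```python
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron mass, kg
ell = 1.0e-9       # box length, m (a hypothetical 1 nm box)

def box_energy(n, m=m_e, L=ell):
    """Return E_n = n^2 h^2 / (8 m L^2), in joules, for a particle in a one-dimensional box."""
    return n**2 * h**2 / (8.0 * m * L**2)

for n in range(1, 4):
    print(n, box_energy(n))   # ~ 6.0e-20, 2.4e-19, 5.4e-19 J
```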
At this point, we have found a complete set of infinitely many solutions, except for the parameter $A$. To determine $A$, we interpret ${\psi }^2\left(x\right)$ as a probability density function, and we require that the probability of finding the particle in the box be equal to unity. This means that
$1=\int^{\ell }_0{A^2{{\mathrm{sin}}^2 \left(\frac{n\pi x}{\ell }\right)\ }dx}=A^2\int^{\ell }_0{\left[\frac{1}{2}-\frac{1}{2}{\mathrm{cos} \left(\frac{2n\pi x}{\ell }\right)\ }\right]dx}=A^2\left(\frac{\ell }{2}\right) \nonumber$
so that $A=\sqrt{2/\ell }$, and the final wavefunctions are
${\psi }_n\left(x\right)=\sqrt{\frac{2}{\ell }}{\mathrm{sin} \left(\frac{n\pi x}{\ell }\right)\ } \nonumber$
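We can also confirm numerically that these wavefunctions are normalized. The sketch below integrates ${\psi }^2_n\left(x\right)$ over the box with a simple trapezoid sum; the box length and quantum number are arbitrary choices made for illustration.

```python
import numpy as np

ell = 1.0    # box length (arbitrary units)
n = 3        # quantum number

x = np.linspace(0.0, ell, 10001)
psi = np.sqrt(2.0 / ell) * np.sin(n * np.pi * x / ell)

# Trapezoid-rule estimate of the integral of psi^2 over the box; it should be unity
integral = np.sum(0.5 * (psi[:-1]**2 + psi[1:]**2) * np.diff(x))
print(integral)   # ~ 1.0
```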
Molecules are composed of atoms, and atoms are composed of nuclei and electrons. When we consider the internal motions of molecules, we have to consider the motions of a large number of charged particles with respect to one another. In principle, we can write down the potential function (the $V$ in the Schrödinger equation) that describes the Coulomb’s law based potential energy of the system of charged particles. In principle, we can then solve the Schrödinger equation and obtain a series of wavefunctions, ${\psi }_n\left(x\right)$, and their corresponding energies, $E_n$, that completely characterize the motions of the molecule’s constituent particles. Each of the $E_n$ is an energy value that the molecule can have. Often we say that it is an energy level that the molecule can occupy.
Since every distance between two charged particles is a variable in the Schrödinger equation, the number of variables increases dramatically as the size of the molecule increases. The two-particle hydrogen-atom problem has been solved analytically. For any chemical species larger than the hydrogen atom, only approximate solutions are possible. Nevertheless, approximate results can be obtained to very high accuracy. Greater accuracy comes at the expense of more extensive calculations.
Let us look briefly at the more fundamental approximations that are made. One is called the Born-Oppenheimer approximation; it states that the motions of the nuclei in a molecule are too slow to affect the motions of the electrons. This occurs because nuclei are much more massive than electrons. The Born-Oppenheimer approximation assumes that the electronic motions can be calculated as if the nuclei are fixed at their equilibrium positions without introducing significant error into the result. That is, there is an approximate wavefunction describing the motions of the electrons that is independent of a second wavefunction that describes the motions of the nuclei.
The mathematical description of the nuclear motions can be further simplified using additional approximations; we can separate the nuclear motions into translational, rotational, and vibrational modes. Translational motion is the three-dimensional displacement of an entire molecule. It can be described by specifying the motion of the molecule’s center of mass. The motions of the constituent nuclei with respect to one another can be further subdivided: rotational motions change the orientation of the whole molecule in space; vibrational motions change distances between constituent nuclei.
The result is that the wavefunction for the molecule as a whole can be approximated as a product of a wavefunction (${\psi }_{electronic}$ or ${\psi }_e$) for the electronic motions, a wavefunction (${\psi }_{vibration}$ or ${\psi }_v$) for the vibrational motions, a wavefunction (${\psi }_{rotation}$ or ${\psi }_r$) for the rotational motions, and a wavefunction (${\psi }_{translation}$ or ${\psi }_t$) for the translational motion of the center of mass. We can write
${\psi }_{molecule}={\psi }_e{\psi }_v{\psi }_r{\psi }_t \nonumber$
(None of this is supposed to be obvious. We are merely describing the essential results of a considerably more extensive development.)
When we write the Hamiltonian for a molecule under the approximation that the electronic, vibrational, rotational, and translational motions are independent of each other, we find that the Hamiltonian is a sum of terms. In some of these terms, the only independent variables are those that specify the locations of the electrons. We call these variables electronic coordinates. Some of the remaining terms involve only vibrational coordinates, some involve only rotational coordinates, and some involve only translational coordinates. That is, we find that the Hamiltonian for the molecule can be expressed as a sum of terms, each of which is the Hamiltonian for one of the kinds of motion:
$H_{molecule}=H_e+H_v+H_r{+H}_t \nonumber$
where we have again abbreviated the subscripts denoting the various categories of motion.
Consequently, when we write the Schrödinger equation for the molecule in this approximation, we have
\begin{align*} H_{molecule}{\psi }_{molecule} &=\left(H_e+H_v+H_r{+H}_t\right){\psi }_e{\psi }_v{\psi }_r{\psi }_t \[4pt] &={\psi }_v{\psi }_r{\psi }_tH_e{\psi }_e+{\psi }_e{\psi }_r{\psi }_tH_v{\psi }_v+{\psi }_e{\psi }_v{\psi }_tH_r{\psi }_r+{\psi }_e{\psi }_v{\psi }_rH_t{\psi }_t \[4pt] &={\psi }_v{\psi }_r{\psi }_tE_e{\psi }_e+{\psi }_e{\psi }_r{\psi }_tE_v{\psi }_v+{\psi }_e{\psi }_v{\psi }_tE_r{\psi }_r+{\psi }_e{\psi }_v{\psi }_rE_t{\psi }_t \[4pt] &=\left(E_e+E_v+E_r{+E}_t\right){\psi }_e{\psi }_v{\psi }_r{\psi }_t \end{align*}
We find that the energy of the molecule as a whole is simply the sum of the energies associated with the several kinds of motion
$E_{molecule}=E_e+E_v+E_r{+E}_t \nonumber$
${\psi }_t$, ${\psi }_v$, ${\psi }_r$, and ${\psi }_e$ can be further approximated as products of wavefunctions involving still smaller numbers of coordinates. We can have a component wavefunction for every distinguishable coordinate that describes a possible motion of a portion of the molecule. The three translational modes are independent of one another. It is a good approximation to assume that they are also independent of the rotational and vibrational modes. Frequently, it is a good approximation to assume that the vibrational and rotational modes are independent of one another. We can deduce the number of one-dimensional wavefunctions that are required to give an approximate wavefunction that describes all of the molecular motions, because this will be the same as the number of coordinates required to describe the nuclear motions. If we have a collection of $N$ atoms that are not bonded to one another, each atom is free to move in three dimensions. The number of coordinates required to describe their motion is $3N$. When the same atoms are bonded to one another in a molecule, the total number of motions remains the same, but it becomes convenient to reorganize the way we describe them.
First, we recognize that the atomic nuclei in a molecule occupy positions that are approximately fixed relative to one another. Therefore, to a good approximation, the motion of the center of mass is independent of the way that the atoms move relative to one another or relative to the center of mass. It takes three coordinates to describe the motion of the center of mass, so there are $3N-3$ coordinates left over after this is done.
The number of rotational motions available to a molecule depends upon the number of independent axes about which it can rotate. We can imagine a rotation of a molecule about any axis we choose. In general, in three dimensions, we can choose any three non-parallel axes and imagine that the molecule rotates about each of them independently of its rotation about the others. If we consider a set of more than three non-parallel axes, we find that any of the axes can be expressed as a combination of any three of the others. This means that the maximum number of independent rotational motions for the molecule as a whole is three.
If the molecule is linear, we can take the molecular axis as one of the axes of rotation. Most conveniently, we can then choose the other two axes to be perpendicular to the molecular axis and perpendicular to each other. However, rotation about the molecular axis does not change anything about the molecule’s orientation in space. If the molecule is linear, rotation about the molecular axis is not a rotation at all! So, if the molecule is linear, only two coordinates are required to describe all of the rotational motions, and there are $3N-5$ coordinates left over after we allocate those needed to describe the translational and rotational motions.
The coordinates left over after we describe the translational and rotational motions must be used to describe the motion of the atoms with respect to one another. These motions are called vibrations, and hence the number of coordinates needed to describe the vibrations of a non-linear molecule is $3N-6$. For a linear molecule, $3N-5$ coordinates are needed to describe the vibrations.
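The bookkeeping in the last few paragraphs reduces to a simple counting rule, sketched below. The function name and the example molecules are my own choices; the rule itself is just the $3N$ coordinates split into 3 translations, 2 or 3 rotations, and the remainder as vibrations.

```python
def mode_counts(n_atoms, linear=False):
    """Partition the 3N nuclear coordinates of an N-atom molecule into
    translational, rotational, and vibrational modes."""
    translations = 3
    rotations = 2 if linear else 3
    vibrations = 3 * n_atoms - translations - rotations
    return translations, rotations, vibrations

print(mode_counts(2, linear=True))    # a diatomic molecule: (3, 2, 1)
print(mode_counts(3, linear=True))    # a linear triatomic:  (3, 2, 4)
print(mode_counts(3, linear=False))   # a bent triatomic:    (3, 3, 3)
```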
We can approximate the wavefunction for a molecule by partitioning it into wavefunctions for individual translational, rotational, vibrational, and electronic modes. The wavefunctions for each of these modes can be approximated by solutions to a Schrödinger equation that approximates that mode. Our objective in this chapter is to introduce the quantized energy levels that are found.
Translational modes are approximated by the particle in a box model that we discuss above.
Vibrational modes are approximated by the solutions of the Schrödinger equation for coupled harmonic oscillators. The vibrational motion of a diatomic molecule is approximated by the solutions of the Schrödinger equation for the vibration of two masses linked by a spring. Let the distance between the masses be $r$ and the equilibrium distance be $r_0$. Let the reduced mass of the molecule be $\mu$, and let the force constant for the spring be $\lambda$. From classical mechanics, the potential energy of the system is
$V\left(r\right)=\frac{\lambda {\left(r-r_0\right)}^2}{2} \nonumber$
and the vibrational frequency of the classical oscillator is $\nu =\frac{1}{2\pi }\sqrt{\frac{\lambda }{\mu }} \nonumber$
The Schrödinger equation is
$-\left(\frac{h^2}{8{\pi }^2\mu }\right)\frac{d^2\psi }{dr^2}+\frac{\lambda {\left(r-r_0\right)}^2}{2}\psi =E\psi \nonumber$
The solutions to this equation are wavefunctions and energy levels that constitute the quantum mechanical description of the classical harmonic oscillator. The energy levels are given by
$E_n=h\nu \left(n+\frac{1}{2}\right) \nonumber$
where the quantum numbers, $n$, can have any of the values $n=0,\ 1,\ 2,\ 3,\ \dots .$ The lowest energy level, that for which $n=0$, has a non-zero energy; that is,
$E_0={h\nu }/{2} \nonumber$
The quantum mechanical oscillator can have infinitely many energies, each of which is a half-integral multiple of $h\nu$, where $\nu$ is the classical frequency. Each quantum mechanical energy corresponds to a quantum mechanical frequency:
${\nu }_n=\nu \left(n+\frac{1}{2}\right) \nonumber$
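The sketch below turns these relations into numbers: it computes the classical frequency from the force constant (the text's $\lambda$) and the reduced mass ($\mu$), and then lists the first few quantized energies $E_n=h\nu \left(n+{1}/{2}\right)$. The force constant and reduced mass are hypothetical, roughly diatomic-sized values, not data from the text.

```python
import math

h = 6.626e-34   # Planck's constant, J s

def vibrational_levels(force_constant, reduced_mass, n_max=3):
    """Return E_n = h*nu*(n + 1/2) for n = 0..n_max, where
    nu = (1/(2 pi)) * sqrt(force_constant / reduced_mass)."""
    nu = math.sqrt(force_constant / reduced_mass) / (2.0 * math.pi)
    return [h * nu * (n + 0.5) for n in range(n_max + 1)]

# Hypothetical diatomic: force constant 500 N/m, reduced mass 1.14e-26 kg
for n, E in enumerate(vibrational_levels(500.0, 1.14e-26)):
    print(n, E)   # evenly spaced levels, separated by h*nu
```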
A classical rigid rotor consists of two masses that are connected by a weightless rigid rod. The rigid rotor is a dumbbell. The masses rotate about their center of mass. Each two-dimensional rotational motion of a diatomic molecule is approximated by the solutions of the Schrödinger equation for the motion of a rigid rotor in a plane. The simplest model assumes that the potential term is zero for all angles of rotation. Letting $I$ be the molecule’s moment of inertia and $\varphi$ be the rotation angle, the Schrödinger equation is
$-\left( \frac{h^2}{8\pi ^2I}\right) \frac{d^2\psi }{d \varphi ^2}=E\psi \nonumber$
The energy levels are given by
$E_m=\frac{m^2h^2}{8\pi ^2I} \nonumber$
where the quantum numbers, $m$, can have any of the values $m=1,\ 2,\ 3,\ \dots .,$(but not zero). Each of these energy levels is two-fold degenerate. That is, two quantum mechanical states of the molecule have the energy $E_m$.
The three-dimensional rotational motion of a diatomic molecule is approximated by the solutions of the Schrödinger equation for the motion of a rigid rotor in three dimensions. Again, the simplest model assumes that the potential term is zero for all angles of rotation. Letting $\theta$ and $\varphi$ be the two rotation angles required to describe the orientation in three dimensions, the Schrödinger equation is
$-\frac{h^2}{8{\pi }^2I}\left(\frac{1}{\mathrm{sin} \theta} \frac{\partial }{\partial \theta } \left(\mathrm{sin} \theta \frac{\partial \psi }{\partial \theta }\right)+\frac{1}{\mathrm{sin}^2 \theta }\frac{d^2\psi }{d{\varphi }^2}\right)=E\psi \nonumber$
The energy levels are given by
$E_J=\frac{h^2}{8{\pi }^2I}J\left(J+1\right) \nonumber$
where the quantum numbers, $J$, can have any of the values $J=0,\ 1,\ 2,\ 3,\ \dots .$ $E_J$ is $\left(2J+1\right)$-fold degenerate. That is, there are $2J+1$ quantum mechanical states of the molecule all of which have the same energy, $E_J$.
Equations for the rotational energy levels of larger molecules are more complex.
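For a diatomic molecule, the rotational levels and their degeneracies are easy to tabulate once the moment of inertia is known. The sketch below uses a hypothetical moment of inertia of about $1.45\times {10}^{-46}\ \mathrm{kg\ m^2}$, roughly that of a light diatomic molecule; the value is illustrative only.

```python
import math

h = 6.626e-34   # Planck's constant, J s

def rotational_levels(moment_of_inertia, J_max=3):
    """Return (J, E_J, degeneracy) tuples with E_J = h^2 J(J+1) / (8 pi^2 I)."""
    B = h**2 / (8.0 * math.pi**2 * moment_of_inertia)
    return [(J, B * J * (J + 1), 2 * J + 1) for J in range(J_max + 1)]

# Hypothetical diatomic with I ~ 1.45e-46 kg m^2
for J, E, g in rotational_levels(1.45e-46):
    print(J, E, g)   # energies grow as J(J+1); degeneracies are 1, 3, 5, 7, ...
```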
We approximate the wavefunction for a molecule by using a product of approximate wavefunctions, each of which models some subset of the motions that the molecule undergoes. In general, the wavefunctions that satisfy the molecule’s Schrödinger equation are degenerate; that is, two or more of these wavefunctions have the same energy. (The one-dimensional particle in a box and the one-dimensional harmonic oscillator have non-degenerate solutions. The rigid-rotor in a plane has doubly degenerate solutions; two wavefunctions have the same energy. The $J$-th energy level of the three-dimensional rigid rotor is $\left(2J+1\right)$-fold degenerate; there are $\left(2J+1\right)$ wavefunctions whose energy is $E_J$.) We use doubly subscripted symbols to represent the wavefunctions that satisfy the molecule’s Schrödinger equation. We write ${\psi }_{i,j}$ to represent all of the molecular wavefunctions whose energy is ${\epsilon }_i$. We let $g_i$ be the number of wavefunctions whose energy is ${\epsilon }_i$. We say that the energy level ${\epsilon }_i$ is $g_i$-fold degenerate. The wavefunctions
${\psi }_{i,1},\ {\psi }_{i,2},\ \dots ,\ {\psi }_{i,j},\dots ,\ {\psi }_{i,g_i} \nonumber$
are all solutions to the molecule’s Schrödinger equation; we have
$H_{molecule}{\psi }_{i,j}={\epsilon }_i{\psi }_{i,j} \nonumber$
for $j=1,\ 2,\ \dots ,\ g_i$. Every energy level ${\epsilon }_i$ is associated with $g_i$ quantum states. For simplicity, we can think of each of the $g_i$ wavefunctions, ${\psi }_{i,j}$, as a quantum state; however, the molecule’s Schrödinger equation is also satisfied by any set of $g_i$ independent linear combinations of the ${\psi }_{i,j}$. For present purposes, all that matters is that there are $g_i$ quantum-mechanical descriptions—quantum states—all of which have energy ${\epsilon }_i$.
18.7: Particle Spins and Statistics- Bose-Einstein and Fermi-Dirac Statistics
Our goal is to develop the theory of statistical thermodynamics from Boltzmann statistics. In this chapter, we explore the rudiments of quantum mechanics in order to become familiar with the idea that we can describe a series of discrete energy levels for any given molecule. For our purposes, that is all we need. We should note, however, that we are not developing the full story about the relationship between quantum mechanics and statistical thermodynamics. The spin of a particle is an important quantum mechanical property. It turns out that quantum mechanical solutions depend on the spin of the particle being described. Particles with integral spins behave differently from particles with half-integral spins. When we treat the statistical distribution of these particles, we need to treat particles with integral spins differently from particles with half-integral spins. Particles with integral spins are said to obey Bose-Einstein statistics; particles with half-integral spins obey Fermi-Dirac statistics.
Fortunately, both of these treatments converge to the Boltzmann distribution if the number of quantum states available to the particles is much larger than the number of particles. For macroscopic systems at ordinary temperatures, this is the case. In Chapters 19 and 20, we introduce the ideas underlying the theory of statistical mechanics. In Chapter 21, we derive the Boltzmann distribution from a set of assumptions that does not correspond to either the Bose-Einstein or the Fermi-Dirac requirement. In Chapter 25, we derive the Bose-Einstein and Fermi-Dirac distributions and show how they become equivalent to the Boltzmann distribution for most systems of interest in chemistry.
Suppose that we have two coins, one minted in 2001 and one minted in 2002. Let the probabilities of getting a head and a tail in a toss of the 2001 coin be $P_{H,1}$ and $P_{T,1}$, respectively. We assume that these outcomes exhaust the possibilities. From the laws of probability, we have: $1=\left(P_{H,1}+P_{T,1}\right)$. For the 2002 coin, we have $1=\left(P_{H,2}+P_{T,2}\right)$. The product of these two probabilities must also be unity. Expanding this product gives
\begin{align*} 1 &=\left(P_{H,1}+P_{T,1}\right)\left(P_{H,2}+P_{T,2}\right) \[4pt] &=P_{H,1}P_{H,2}+P_{H,1}P_{T,2}+P_{T,1}P_{H,2}+P_{T,1}P_{T,2} \end{align*}
This equation represents the probability of a trial in which we toss the 2001 coin first and the 2002 coin second. The individual terms are the probabilities of the possible outcomes of such a trial. It is convenient to give a name to this latter representation of the product; we will call it the expanded representation of the total probability sum.
Our procedure for multiplying two binomials generates a sum of four terms. Each term contains two factors. The first factor comes from the first binomial; the second term comes from the second binomial. Each of the four terms corresponds to a combination of an outcome from tossing the 2001 coin and an outcome from tossing the 2002 coin. Conversely, every possible combination of outcomes from tossing the two coins is represented in the sum. $P_{H,1}P_{H,2}$ represents the probability of getting a head from tossing the 2001 coin and a head from tossing the 2002 coin. $P_{H,1}P_{T,2}$ represents the probability of getting a head from tossing the 2001 coin and a tail from tossing the 2002 coin, etc. In short, there is a one-to-one correspondence between the terms in this sum and the possible combinations of the outcomes of tossing these two coins.
This analysis depends on our ability to tell the two coins apart. For this, the mint date is sufficient. If we toss the two coins simultaneously, the four possible outcomes remain the same. Moreover, if we distinguish the result of a first toss from the result of a second toss, etc., we can generate the same outcomes by using a single coin. If we use a single coin, we can represent the possible outcomes from two tosses by the ordered sequences $HH$, $HT$, $TH$, and $TT$, where the first symbol in each sequence is the result of the first toss and the second symbol is the result of the second toss. The ordered sequences $HT$ and $TH$ differ only in the order in which the symbols appear. We call such ordered sequences permutations.
Now let us consider a new problem. Suppose that we have two coin-like slugs that we can tell apart because we have scratched a “$1$” onto the surface of one and a “$2$” onto the surface of the other. Suppose that we also have two cups, one marked “$H$” and the other marked “$T$.” We want to figure out how many different ways we can put the two slugs into the two cups. We can also describe this as the problem of finding the number of ways we can assign two distinguishable slugs (objects) to two different cups (categories). There are four such ways: Cup $H$ contains slugs $1$ and $2$; Cup $H$ contains slug $1$ and Cup $T$ contains slug $2$; Cup $H$ contains slug $2$ and Cup $T$ contains slug $1$; Cup $T$ contains slugs $1$ and$\ 2$.
We note that, given all of the ordered sequences for tossing two coins, we can immediately generate all of the ways that two distinguishable objects (numbered slugs) can be assigned to two categories (Cups $H$ and $T$). For each ordered sequence, we assign the first object to the category corresponding to the first symbol in the sequence, and we assign the second object to the category corresponding to the second symbol in the sequence.
In short, there are one-to-one correspondences between the sequences of probability factors in the total probability sum, the possible outcomes from tossing two distinguishable coins, the possible sequences of outcomes from two tosses of a single coin, and the number of ways we can assign two distinguishable objects to two categories. (See Table 1.)
Table 1.
Problems Correspondences
Sequences of probability factors in the total probability sum $P_{H,1} P_{H,2}$ $P_{H,1} P_{T,2}$ $P_{T,1} P_{H,2}$ $P_{T,1} P_{T,2}$
Probability factors for coins distinguished by identification numbers $P_H P_H$ $P_H P_T$ $P_T P_H$ $P_T P_T$
Sequences from toss of a single coin $HH$ $HT$ $TH$ $TT$
Assignments of two distinguishable objects to two categories Cup $H$ holds slugs 1 & 2 Cup $H$ holds slug 1 and Cup $T$ holds slug 2 Cup $H$ holds slug 2 and Cup $T$ holds slug 1 Cup $T$ holds slugs 1 & 2
If the probability of tossing a head is constant, we have $P_{H,1}=P_{H,2}=P_H$ and $P_{T,1}=P_{T,2}=P_T$. Note that we are not assuming $P_H=P_T$. If we do not care about the order in which the heads and tails appear, we can simplify our equation for the product of probabilities to
$1=P^2_H+2P_HP_T+P^2_T \nonumber$
$P^2_H$ is the probability of tossing two heads, $P_HP_T$ is the probability of tossing one head and one tail, and $P^2_T$ is the probability of tossing two tails. We must multiply the $P_HP_T$-term by two, because there are two two-coin outcomes and correspondingly two combinations, $P_{H,1}P_{T,2}$ and $P_{T,1}P_{H,2}$, that have the same probability, $P_HP_T$. Completely equivalently, we can say that the reason for multiplying the $P_HP_T$-term by two is that there are two permutations, $HT$ and $TH$, which correspond to one head and one tail in successive tosses of a single coin.
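The expansion of the two-coin total-probability sum is easy to verify by brute force. The sketch below enumerates the four ordered outcomes for a hypothetical biased coin and groups them by the number of heads, reproducing $1=P^2_H+2P_HP_T+P^2_T$.

```python
from itertools import product

P_H, P_T = 0.6, 0.4   # hypothetical (biased) single-toss probabilities

# Enumerate the ordered outcomes of two tosses and group them by the number of heads.
totals = {}
for outcome in product('HT', repeat=2):
    p = 1.0
    for symbol in outcome:
        p *= P_H if symbol == 'H' else P_T
    n_heads = outcome.count('H')
    totals[n_heads] = totals.get(n_heads, 0.0) + p

print(totals)                # {2: 0.36, 1: 0.48, 0: 0.16}
print(sum(totals.values()))  # 1.0
```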
We have lavished considerable attention on four related but very simple problems. Now, we want to extend this analysis—first to tosses of multiple coins and then to situations in which multiple outcomes are possible for each of many independent events. Eventually we will find that understanding these problems enables us to build a model for the behavior of molecules that explains the observations of classical thermodynamics.
If we extend our analysis to tossing $n$ coins, which we label coins $1$, $2$, etc., we find:
\begin{align*} 1 &=\left(P_{H,1}+P_{T,1}\right)\left(P_{H,2}+P_{T,2}\right)\dots \left(P_{H,n}+P_{T,n}\right) \[4pt] &=\left(P_{H,1}P_{H,2}\dots P_{H,n}\right)+\left(P_{H,1}P_{H,2}{\dots P}_{H,i}\dots P_{T,n}\right)+\dots +\left(P_{T,1}P_{T,2}\dots P_{T,i}\dots P_{T,n}\right) \end{align*}
We write each of the product terms in this expanded representation of the total-probability sum with the second index, $r$, increasing from $1$ to $n$ as we read through the factors, $P_{X,r}$, from left to right. Just as for tossing only two coins:
1. Each product term is a sequence of probability factors that appears in the total probability sum.
2. Each product term corresponds to a possible outcome from tossing n coins that are distinguished from one another by identification numbers.
3. Each product term is equivalent to a possible outcome from repeated tosses of a single coin: the $r^{th}$ factor is $P_H$ or $P_T$ according as the $r^{th}$ toss produces a head or a tail.
4. Each product term is equivalent to a possible assignment of n distinguishable objects to the two categories $H$ and $T$.
In Section 3.9, we introduce the term population set to denote a set of numbers that represents a possible combination of outcomes. Here the possible combinations of outcomes are the numbers of heads and tails. If in five tosses we obtain $3$ heads and $2$ tails, we say that this group of outcomes belongs to the population set $\{3,2\}$. If in $n$ tosses, we obtain $n_H$ heads and $n_T$ tails, this group of outcomes belongs to the population set $\{n_H,n_T\}$. For five tosses, the possible population sets are $\left\{5,0\right\}$, $\left\{4,1\right\}$, $\left\{3,2\right\}$, $\left\{2,3\right\}$, $\left\{1,4\right\}$, and $\left\{0,5\right\}$. Beginning in the next chapter, we focus on the energy levels that are available to a set of particles and on the number of particles that has each of the available energies. Then the number of particles, $N_i$, that have energy ${\epsilon }_i$ is the population of the ${\epsilon }_i$-energy level. The set of all such numbers is the energy-level population set for the set of particles.
If we cannot distinguish one coin from another, the sequence $P_{H,1}P_{T,2}P_{H,3}P_{H,4}$ becomes $P_HP_TP_HP_H$. We say that $P_HP_TP_HP_H$ is distinguishable from $P_HP_HP_TP_H$ because the tails-outcome appears in the second position in $P_HP_TP_HP_H$ and in the third position in $P_HP_HP_TP_H$. We say that $P_{H,1}P_{T,2}P_{H,3}P_{H,4}$ and $P_{H,3}P_{T,2}P_{H,1}P_{H,4}$are indistinguishable, because both become $P_HP_TP_HP_H$. In general, many terms in the expanded form of the total probability sum belong to the population set corresponding to $n_H$ heads and $n_T$ tails. Each such term corresponds to a distinguishable permutation of $n_H$ heads and $n_T$ tails and the corresponding distinguishable permutation of $P_H$ and $P_T$ terms.
We use the notation $C\left(n_H,n_T\right)$ to denote the number of terms in the expanded form of the total probability sum in which there are $n_H$ heads and $n_T$ tails. $C\left(n_H,n_T\right)$ is also the number of distinguishable permutations of $n_H$ heads and $n_T$ tails or of $n_H$ P${}_{H}$-terms and $n_T$ P${}_{T}$-terms. The principal goal of our analysis is to find a general formula for $C\left(n_H,n_T\right)$. To do so, we make use of the fact that $C\left(n_H,n_T\right)$ is also the number of ways that we can assign $n$ objects (coins) to two categories (heads or tails) in such a way that $n_H$ objects are in one category (heads) and $n_T$ objects are in the other category (tails). We also call $C\left(n_H,n_T\right)$ the number of combinations possible for distinguishable coins in the population set $\{n_H,n_T\}$.
The importance of $C\left(n_H,n_T\right)$ is evident when we recognize that, if we do not care about the sequence (permutation) in which a particular number of heads and tails occurs, we can represent the total-probability sum in a much compressed form:
$1=P^n_H+nP^{n-1}_HP_T+\dots +C\left(n_H,n_T\right)P^{n_H}_HP^{n_T}_T+nP_HP^{n-1}_T+P^n_T \nonumber$
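The same brute-force enumeration extends to $n$ tosses and produces the coefficients $C\left(n_H,n_T\right)$ directly, by counting the terms of the expanded total-probability sum that belong to each population set. The sketch below uses five tosses of a hypothetical biased coin.

```python
from itertools import product

P_H, P_T, n = 0.6, 0.4, 5   # hypothetical biased coin, five tosses

# Every ordered sequence of n outcomes is one term of the expanded
# total-probability sum; count the terms in each population set.
counts = {}
for seq in product('HT', repeat=n):
    n_H = seq.count('H')
    counts[n_H] = counts.get(n_H, 0) + 1

for n_H in sorted(counts, reverse=True):
    print(f"C({n_H},{n - n_H}) = {counts[n_H]}")   # 1, 5, 10, 10, 5, 1

# The compressed total-probability sum: sum over population sets of
# C(n_H, n_T) * P_H^n_H * P_T^n_T
print(sum(c * P_H**n_H * P_T**(n - n_H) for n_H, c in counts.items()))   # 1.0
```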
In this representation, there are $n$ terms in the total-probability sum that have $n_H=n-1$ and $n_T=1$. These are the terms
$P_{H,1}P_{H,2}P_{H,3}{\dots P}_{H,i}\dots P_{H,n-1}{\boldsymbol{P}}_{\boldsymbol{T},\boldsymbol{n}} \nonumber$ $P_{H,1}P_{H,2}P_{H,3}{\dots P}_{H,i}\dots {\boldsymbol{P}}_{\boldsymbol{T},\boldsymbol{n}\boldsymbol{-}\boldsymbol{1}}P_{H,n} \nonumber$ $P_{H,1}P_{H,2}P_{H,3}\dots {\boldsymbol{P}}_{\boldsymbol{T},\boldsymbol{i}}\dots P_{H,n-1}P_{H,n} \nonumber$
$P_{H,1}P_{H,2}{\boldsymbol{P}}_{\boldsymbol{T},\boldsymbol{3}}{\dots P}_{H,i}\dots P_{H,n-1}P_{H,n} \nonumber$ $P_{H,1}{\boldsymbol{P}}_{\boldsymbol{T},\boldsymbol{2}}P_{H,3}{\dots P}_{H,i}\dots P_{H,n-1}P_{H,n} \nonumber$ ${\boldsymbol{P}}_{\boldsymbol{T},\boldsymbol{1}}P_{H,2}P_{H,3}{\dots P}_{H,i}\dots P_{H,n-1}P_{H,n} \nonumber$
Each of these terms represents the probability that $n-1$ heads and one tail will occur in the order shown. Each of these terms has the same value. Each of these terms is a distinguishable permutation of $n-1$ $P_H$ terms and one $P_T$ term. Each of these terms corresponds to a combination in which one of n numbered slugs is assigned to Cup $T$, while the remaining $n-1$ numbered slugs are assigned to Cup $H$. It is easy to see that there are $n$ such terms, because each term is the product of $n$ probabilities, and the tail can occur at any of the $n$ positions in the product. If we do not care about the order in which heads and tails occur and are interested only in the value of the sum of these $n$ terms, we can replace these $n$ terms by the one term $nP^{n-1}_HP_T$. We see that $nP^{n-1}_HP_T$ is the probability of tossing $n-1$ heads and one tail, irrespective of which toss produces the tail.
There is another way to show that there must be $n$ terms in the total-probability sum in which there are $n-1$ heads and one tail. This method relies on the fact that the number of such terms is the same as the number of combinations in which n distinguishable things are assigned to two categories, with $n-1$ of the things in one category and the remaining thing in the other category, $C\left(n-1,1\right)$. This method is a little more complicated, but it offers the great advantage that it can be generalized.
The new method requires that we think about all of the permutations we can create by reordering the results from any particular series of $n$ tosses. To see what we have in mind when we say all of the permutations, let $P_{X,k}$ represent the probability of toss number $k$, where for the moment we do not care whether the outcome was a head or a tail. When we say all of the permutations, we mean the number of different ways we can order (permute) n different values $P_{X,k}$. It is important to recognize that one and only one of these permutations is a term in the total-probability sum, specifically:
$P_{X,1}P_{X,2}P_{X,3}\dots P_{X,k}\dots P_{X,n} \nonumber$
in which the values of the second subscript are in numerical order. When we set out to construct all of these permutations, we see that there are $n$ ways to choose the toss to put first and $n-1$ ways to choose the toss to put second, so there are $n\left(n-1\right)$ ways to choose the first two tosses. There are $n-2$ ways to choose the third toss, so there are $n\left(n-1\right)\left(n-2\right)$ ways to choose the first three tosses. Continuing in this way through all $n$ tosses, we see that the total number of ways to order the results of n tosses is $n\left(n-1\right)\left(n-2\right)\left(n-3\right)\dots \left(3\right)\left(2\right)\left(1\right)=n!$
Next, we need to think about the number of ways we can permute $n$ values $P_{X,k}$ if $n-1$ of them are $P_{H,1}$, $P_{H,2}$, ..., $P_{H,r-1}$, $P_{H,r+1}$, ..., $P_{H,n}$ and one of them is $P_{T,r}$, and we always keep the one factor $P_{T,r}$ in the same position. By the argument above, there are $\left(n-1\right)!$ ways to permute the values $P_{H,s}$ in a set containing $n-1$ members. So for every term (product of factors $P_{X,k}$) that occurs in the total-probability sum, there are $\left(n-1\right)!$ other products (other permutations of the same factors) that differ only in the order in which the $P_{H,s}$ appear. The single tail outcome occupies the same position in each of these permutations. If the $r^{th}$ factor in the term in the total probability sum is $P_{T,r}$, then $P_{T,r}$ is the $r^{th}$ factor in each of the $\left(n-1\right)!$ permutations of this term. This is an important point; let us repeat it in slightly different words: For every term that occurs in the total-probability sum, there are $\left(n-1\right)!$ permutations of the same factors that leave the heads positions occupied by heads and the tails position occupied by the tail.
Equivalently, for every assignment of $n-1$ distinguishable objects to one of two categories, there are $\left(n-1\right)!$ permutations of these objects. There are $C\left(n-1,1\right)$ such assignments. Accordingly, there are a total of $\left(n-1\right)!C\left(n-1,1\right)$ permutations of the $n$ distinguishable objects. Since we also know that the total number of permutations of n distinguishable objects is $n!$, we have
$n!=\left(n-1\right)!C\left(n-1,1\right) \nonumber$
so that $C\left(n-1,1\right)=\frac{n!}{\left(n-1\right)!} \nonumber$
which is the same result that we obtained by our first and more obvious method.
The distinguishable objects within a category in a particular assignment can be permuted. We give these within-category permutations another name; we call them indistinguishable permutations. (This terminology reflects our intended application, which is to find the number of ways $n$ identical molecules can be assigned to a set of energy levels. We can tell two isolated molecules of the same substance apart only if they have different energies. We can distinguish molecules in different energy levels from one another. We cannot distinguish two molecules in the same energy level from one another. Two different permutations of the molecules within any one energy level are indistinguishable from one another.) For every term in the expanded representation of the total probability sum, indistinguishable permutations can be obtained by exchanging $P_H$ factors with one another, or by exchanging $P_T$ factors with one another, but not by exchanging $P_H$ factors with $P_T$ factors. That is, heads are exchanged with heads; tails are exchanged with tails; but heads are not exchanged with tails.
Now we can consider the general case. We let $C\left(n_H,n_T\right)$ be the number of terms in the total-probability sum in which there are $n_H$ heads and $n_T$ tails. We want to find the value of $C\left(n_H,n_T\right)$. Let’s suppose that one of the terms with $n_H$ heads and $n_T$ tails is
$\left(P_{H,a}P_{H,b}\dots P_{H,m}\right)\left(P_{T,r}P_{T,s}\dots P_{T,z}\right) \nonumber$
where there are $n_H$ indices in the set $\{a,\ b,\ \dots ,m\}$ and $n_T$ indices in the set $\{r,s,\dots ,z\}$. There are $n_H!$ ways to order the heads outcomes and $n_T!$ ways to order the tails outcomes. So, there are $n_H!n_T!$ possible ways to order $n_H$ heads and $n_T$ tails outcomes. This is true for any sequence in which there are $n_H$ heads and $n_T$ tails; there will always be $n_H!n_T!$ permutations of $n_H$ heads and $n_T$ tails, whatever the order in which the heads and tails appear. This is also true for every term in the total-probability sum that contains $n_H$ heads factors and $n_T$ tails factors. The number of such terms is $C\left(n_H,n_T\right)$. For every such term, there are $n_H!n_T!$ permutations of the same factors that leave the heads positions occupied by heads and the tails positions occupied by tails.
Accordingly, there are a total of $n_H!n_T!C\left(n_H,n_T\right)$ permutations of the $n$ distinguishable objects. The total number of permutations of n distinguishable objects is $n!$, so that
$n!=n_H!n_T!C\left(n_H,n_T\right) \nonumber$
and
$C\left(n_H,n_T\right)=\frac{n!}{n_H!n_T!} \nonumber$
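As a concrete check on this result, we can enumerate every sequence of heads and tails for a small number of tosses and count the sequences that belong to each population set. The short Python sketch below (the value of $n$ is an arbitrary choice made only for illustration; it is not part of the argument above) compares the enumerated counts with $n!/\left(n_H!n_T!\right)$.

```python
from itertools import product
from math import factorial

n = 4  # number of tosses; an arbitrary, illustrative choice

# Enumerate all 2^n distinguishable sequences of heads (H) and tails (T)
# and count how many sequences contain each possible number of heads.
counts = {}
for seq in product("HT", repeat=n):
    n_H = seq.count("H")
    counts[n_H] = counts.get(n_H, 0) + 1

# Compare the enumerated counts with C(n_H, n_T) = n!/(n_H! n_T!).
for n_H in range(n + 1):
    n_T = n - n_H
    C = factorial(n) // (factorial(n_H) * factorial(n_T))
    print(n_H, "heads:", counts.get(n_H, 0), "sequences; formula gives", C)
```

For $n=4$ the counts are 1, 4, 6, 4, 1, in agreement with the binomial coefficients.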
Equivalently, we can construct a sum of terms, $R$, in which the terms are all of the $n!$ permutations of $P_{H,i}$ factors for $n_H$ heads and $P_{T,j}$ factors for $n_T$ tails. The value of each term in $R$ is $P^{n_H}_HP^{n_T}_T$. So we have
$R=n!P^{n_H}_HP^{n_T}_T \nonumber$
$R$ contains all $C\left(n_H,n_T\right)$ of the $P^{n_H}_HP^{n_T}_T$-valued terms that appear in the total-probability sum. For each of these $P^{n_H}_HP^{n_T}_T$-valued terms there are $n_H!n_T!$ indistinguishable permutations that leave heads positions occupied by heads and tails positions occupied by tails. $R$ will also contain all of the $n_H!n_T!$ permutations of each of these $P^{n_H}_HP^{n_T}_T$-valued terms. That is, every term in $R$ is either a term in the expanded representation of the total probability sum or an indistinguishable permutation of such a term. It follows that $R$ is also given by
$R=n_H!n_T!C\left(n_H,n_T\right)P^{n_H}_HP^{n_T}_T \nonumber$
Equating these equations for R, we have
$n!P^{n_H}_HP^{n_T}_T=n_H!n_T!C\left(n_H,n_T\right)P^{n_H}_HP^{n_T}_T \nonumber$
and, again,
$C\left(n_H,n_T\right)=\frac{n!}{n_H!n_T!} \nonumber$
In summary: The total number of permutations is $n!$ The number of combinations of $n$ distinguishable things in which $n_H$ of them are assigned to category $H$ and $n_T=n-n_H$ are assigned to category $T$ is $C\left(n_H,n_T\right)$. (Every combination is a distinguishable permutation.) The number of indistinguishable permutations of the objects in each such combination is $n_H!n_T!$. The relationship among these quantities is
total number of permutations = (number of distinguishable combinations) $\times$ (number of indistinguishable permutations for each distinguishable combination)
We noted earlier that $C\left(n_H,n_T\right)$ is the formula for the binomial coefficients. If we do not care about the order in which the heads and tails arise, the probability of tossing $n_T$ tails and $n_H=n-n_T$ heads is
$C\left(n_H,n_T\right)P^{n_H}_HP^{n_T}_T=\left(\frac{n!}{n_H!n_T!}\right)P^{n_H}_HP^{n_T}_T \nonumber$
and the sum of such terms for all $n+1$ possible values of $n_T$ in the interval $0\le n_T\le n$ is the total probability for all possible outcomes from $n$ tosses of a coin. This total probability must be unity. That is, we have
$1={\left(P_H+P_T\right)}^n=\sum^n_{n_T=0}{C\left(n_H,n_T\right)P^{n_H}_HP^{n_T}_T}=\sum^n_{n_T=0}{\left(\frac{n!}{n_H!n_T!}\right)P^{n_H}_HP^{n_T}_T} \nonumber$
For an unbiased coin, $P_H=P_T={1}/{2}$, and $P^{n_H}_HP^{n_T}_T={\left({1}/{2}\right)}^n$, for all $n_T$. This means that the probability of tossing $n_H$ heads and $n_T$ tails is proportional to $C\left(n_H,n_T\right)$ where the proportionality constant is ${\left({1}/{2}\right)}^n$. The probability of $n^{\blacksquare }$ heads and $n-n^{\blacksquare }$ tails is the same as the probability of $n-n^{\blacksquare }$ heads and $n^{\blacksquare }$ tails.
Nothing in our development of the equation for the total probability requires that we set $P_H=P_T$, and in fact, the binomial probability relationship applies to any situation in which there are repeated trials, where each trial has two possible outcomes, and where the probability of each outcome is constant. If $P_H\neq P_T$, the symmetry observed for tossing coins does not apply, because
$P^{n-n^{\blacksquare }}_HP^{n^{\blacksquare }}_T\neq P^{n^{\blacksquare }}_HP^{n-n^{\blacksquare }}_T \nonumber$
This condition corresponds to a biased coin.
Another example is provided by a spinner mounted at the center of a circle painted on a horizontal surface. Suppose that a pie-shaped section accounting for $25\%$ of the circle’s area is painted white and the rest is painted black. If the spinner’s stopping point is unbiased, it will stop in the white zone with probability $P_W=0.25$ and in the black zone with probability $P_B=0.75$. After $n$ spins, the probability of $n_W$ white outcomes and $n_B$ black outcomes is
$\left(\frac{n!}{n_W!n_B!}\right){\left(0.25\right)}^{n_W}{\left(0.75\right)}^{n_B} \nonumber$
After $n$ spins, the sum of the probabilities for all possible combinations of white and black outcomes is
\begin{align*} 1 &={\left(P_W+P_B\right)}^n=\sum^n_{n_B=0}{C\left(n_W,n_B\right)P^{n_W}_WP^{n_B}_B} \\[4pt] &=\sum^n_{n_B=0}{\left(\frac{n!}{n_W!n_B!}\right)P^{n_W}_WP^{n_B}_B} \\[4pt] &=\sum^n_{n_B=0}{\left(\frac{n!}{n_W!n_B!}\right){\left(0.25\right)}^{n_W}{\left(0.75\right)}^{n_B}} \end{align*}
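We can also evaluate this distribution numerically. The sketch below (the number of spins is an assumed, illustrative value) computes the probability of every possible number of white outcomes for the spinner and confirms that the probabilities sum to unity.

```python
from math import factorial

P_W, P_B = 0.25, 0.75   # single-spin probabilities from the spinner example
n = 10                  # number of spins; an assumed, illustrative value

total = 0.0
for n_W in range(n + 1):
    n_B = n - n_W
    C = factorial(n) // (factorial(n_W) * factorial(n_B))
    p = C * P_W**n_W * P_B**n_B
    total += p
    print(f"{n_W} white, {n_B} black: probability {p:.5f}")

print("sum over all outcomes:", total)  # unity, to within rounding error
```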
19.02: Distribution of Results for Multiple Trials with Three Possible Outcomes
Let us extend the ideas we have developed for binomial probabilities to the case where there are three possible outcomes for any given trial. To be specific, suppose we have a coin-sized object in the shape of a truncated right-circular cone, whose circular faces are parallel to each other. The circular faces have different diameters. When we toss such an object, allowing it to land on a smooth hard surface, it can wind up resting on the big circular face ($\boldsymbol{H}$eads), the small circular face ($\boldsymbol{T}$ails), or on the conical surface ($\boldsymbol{C}$one-side). Let the probabilities of these outcomes in a single toss be $P_H$, $P_T$, and $P_C$, respectively. In general, we expect these probabilities to be different from one another; although, of course, we require $1=\left(P_H+P_T+P_C\right)$.
Following our development for the binomial case, we want to write an equation for the total probability sum after $n$ tosses. Let $n_H$, $n_T$, and $n_C$ be the number of $H$, $T$, and $C$ outcomes exhibited in $n_H+n_T+n_C=n$ trials. We let the probability coefficients be $C\left(n_H,n_T,n_C\right)$. The probability of $n_H$, $n_T$, $n_C$ outcomes in $n$ trials is
$C\left(n_H,n_T,n_C\right)P^{n_H}_HP^{n_T}_TP^{n_C}_C \nonumber$
and the total probability is
$1={\left(P_H+P_T+P_C\right)}^n=\sum_{n_H,n_T,n_C}{C\left(n_H,n_T,n_C\right)P^{n_H}_HP^{n_T}_TP^{n_C}_C} \nonumber$
where the summation is to be carried out over all combinations of integer values for $n_H$, $n_T$, and $n_C$, consistent with $n_H+n_T+n_C=n$.
To find $C\left(n_H,n_T,n_C\right)$, we proceed as before. We suppose that one of the terms with $n_H$ heads, $n_T$ tails, and $n_C$ cone-sides is
$\left(P_{H,a}P_{H,b}\dots P_{H,f}\right)\left(P_{T,g}P_{T,h}\dots P_{T,m}\right)\left(P_{C,p}P_{C,q}\dots P_{C,z}\right) \nonumber$
where there are $n_H$ indices in the set $\{a,\ b,\ \dots ,\ f\}$, $n_T$ indices in the set $\{g,\ h,\ \dots ,\ m\}$, and $n_C$ indices in the set $\{p,\ q,\ \dots ,\ z\}$. There are $n_H!$ ways to order the heads outcomes, $n_T!$ ways to order the tails outcomes, and $n_C!$ ways to order the cone-sides outcomes. So, there are $n_H!n_T!n_C!$ possible ways to order $n_H$ heads, $n_T$ tails, and $n_C$ cone-sides. There will also be $n_H!n_T!n_C!$ indistinguishable permutations of any combination (particular assignment) of $n_H$ heads, $n_T$ tails, and $n_C$ cone-sides. There are $n!$ possible permutations of $n$ probability factors and $C\left(n_H,n_T,n_C\right)$ distinguishable combinations with $n_H$ heads, $n_T$ tails, and $n_C$ cone-sides. As before, we have
total number of permutations = (number of distinguishable combinations) $\times$ (number of indistinguishable permutations for each distinguishable combination)
so that
$n!=n_H!n_T!n_C!C\left(n_H,n_T,n_C\right) \nonumber$
and hence, $C\left(n_H,n_T,n_C\right)=\frac{n!}{n_H!n_T!n_C!} \nonumber$
Equivalently, we can construct a sum of terms, $S$, in which the terms are all of the $n!$ permutations of $P_{H,r}$ factors for $n_H$ heads, $P_{T,s}$ factors for $n_T$ tails, and $P_{C,t}$ factors for $n_C$ cone-sides. The value of each term in $S$ will be $P^{n_H}_HP^{n_T}_TP^{n_C}_C$. Thus, we have
$S=n!P^{n_H}_HP^{n_T}_TP^{n_C}_C \nonumber$
$S$ will contain all $C\left(n_H,n_T,n_C\right)$ of the distinguishable combinations $n_H$ heads, $n_T$ tails, and $n_C$ cone-sides outcomes that give rise to $P^{n_H}_HP^{n_T}_TP^{n_C}_C$-valued terms. Moreover, $S$ will also include all of the $n_H!n_T!n_C!$ indistinguishable permutations of each of these $P^{n_H}_HP^{n_T}_TP^{n_C}_C$-valued terms, and we also have
$S=n_H!n_T!n_C!C\left(n_H,n_T,n_C\right)P^{n_H}_HP^{n_T}_TP^{n_C}_C \nonumber$
Equating these two expressions for S gives us the number of $P^{n_H}_HP^{n_T}_TP^{n_C}_C$-valued terms in the total-probability product,$\ C\left(n_H,n_T,n_C\right)$. That is,
$S=n!P^{n_H}_HP^{n_T}_TP^{n_C}_C=n_H!n_T!n_C!C\left(n_H,n_T,n_C\right)P^{n_H}_HP^{n_T}_TP^{n_C}_C \nonumber$
and, again, $C\left(n_H,n_T,n_C\right)=\frac{n!}{n_H!n_T!n_C!} \nonumber$
In the special case that $P_H=P_T=P_C={1}/{3}$, all of the products $P^{n_H}_HP^{n_T}_TP^{n_C}_C$ will have the value ${\left({1}/{3}\right)}^n$. Then the probability of any set of outcomes, $\{n_H,n_T,n_C\}$, is proportional to $C\left(n_H,n_T,n_C\right)$ with the proportionality constant ${\left({1}/{3}\right)}^n$.
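A quick numerical check of this result is straightforward. The sketch below (the single-toss probabilities and the number of tosses are assumed values, chosen only for illustration) sums $C\left(n_H,n_T,n_C\right)P^{n_H}_HP^{n_T}_TP^{n_C}_C$ over all population sets with $n_H+n_T+n_C=n$ and confirms that the total is unity.

```python
from math import factorial

# Assumed single-toss probabilities for the truncated-cone object; any values
# satisfying P_H + P_T + P_C = 1 would serve equally well.
P_H, P_T, P_C = 0.5, 0.2, 0.3
n = 6  # number of tosses; an assumed, illustrative value

total = 0.0
for n_H in range(n + 1):
    for n_T in range(n - n_H + 1):
        n_C = n - n_H - n_T
        C = factorial(n) // (factorial(n_H) * factorial(n_T) * factorial(n_C))
        total += C * P_H**n_H * P_T**n_T * P_C**n_C

print(total)  # 1.0, to within rounding error
```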
19.03: Distribution of Results for Multiple Trials with Many Possible Outcomes
It is now easy to extend our results to multiple trials with any number of outcomes. Let the outcomes be $A$, $B$, $C$, ..., $Z$, for which the probabilities in a single trial are $P_A$, $P_B$, $P_C$, ..., $P_Z$. We again want to write an equation for the total probability after $n$ trials. We let $n_A$, $n_B$, $n_C$, ..., $n_Z$ be the number of $A$, $B$, $C$, ..., $Z$ outcomes exhibited in $n_A+n_B+n_C+...+n_Z=n$ trials. If we do not care about the order in which the outcomes are obtained, the probability of $n_A$, $n_B$, $n_C$, ..., $n_Z$ outcomes in $n$ trials is
$C\left(n_A,n_B,n_C,\dots ,n_Z\right)P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z \nonumber$
and the total probability sum is
$1={\left(P_A+P_B+P_C+\dots +P_Z\right)}^n=\sum_{n_I}{C\left(n_A,n_B,n_C,\dots ,n_Z\right)P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z} \nonumber$
where the summation is to be carried out over all combinations of integer values for $n_A$, $n_B$, $n_C$, ..., $n_Z$ consistent with $n_A+n_B+n_C+...+n_Z=n$.
Let one of the terms for $n_A$ $A$-outcomes, $n_B$ $B$-outcomes, $n_C$ $C$-outcomes, ..., $n_Z$ $Z$-outcomes be
$\left(P_{A,a}P_{A,b}\dots P_{A,f}\right)\left(P_{B,g}P_{B,h}\dots P_{B,m}\right)\times \left(P_{C,p}P_{C,q}\dots P_{C,t}\right)\dots \left(P_{Z,u}P_{Z,v}\dots P_{Z,z}\right) \nonumber$
where there are $n_A$ indices in the set $\{a,\ b,\ \dots ,\ f\}$, $n_B$ indices in the set $\{g,\ h,\ \dots ,\ m\}$, $n_C$ indices in the set $\{p,\ q,\ \dots ,\ t\}$, ..., and $n_Z$ indices in the set $\{u,\ v,\ \dots ,\ z\}$. There are $n_A!$ ways to order the $A$-outcomes, $n_B!$ ways to order the $B$-outcomes, $n_C!$ ways to order the $C$-outcomes, ..., and $n_Z!$ ways to order the $Z$-outcomes. So, there are $n_A!n_B!n_C!\dots n_Z!$ ways to order $n_A$ $A$-outcomes, $n_B$ $B$-outcomes, $n_C$ $C$-outcomes, ..., and $n_Z$ $Z$-outcomes. The same is true for any other distinguishable combination; for every distinguishable combination belonging to the population set $\{n_A,\ n_B,\ n_C,\dots ,n_Z\}$ there are $n_A!n_B!n_C!\dots n_Z!$ indistinguishable permutations. Again, we can express this result as the general relationship:
total number of permutations = (number of distinguishable combinations) $\times$ (number of indistinguishable permutations for each distinguishable combination)
so that
$n!=n_A!n_B!n_C!\dots n_Z!C\left(n_A,n_B,n_C,\dots ,n_Z\right) \nonumber$
and $C\left(n_A,n_B,n_C,\dots ,n_Z\right)=\frac{n!}{n_A!n_B!n_C!\dots n_Z!} \nonumber$
Equivalently, we can construct a sum, $T$, in which we add up all of the $n!$ permutations of $P_{A,a}$ factors for $n_A$ $A$-outcomes, $P_{B,b}$ factors for $n_B$ $B$-outcomes, $P_{C,c}$ factors for $n_C$ $C$-outcomes, ..., and $P_{Z,z}$ factors for $n_Z$ $Z$-outcomes. The value of each term in $T$ will be $P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z$. So we have
$T=n!P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z \nonumber$
$T$ will contain all $C\left(n_A,n_B,n_C,\dots ,n_Z\right)$ of the $P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z$-valued products (distinguishable combinations) that are a part of the total-probability sum. Moreover, $T$ will also include all of the $n_A!n_B!n_C!\dots n_Z!$ indistinguishable permutations of each of these $P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z$-valued products. Then we also have
$T=n_A!n_B!n_C!\dots n_Z!\,C\left(n_A,n_B,n_C,\dots ,n_Z\right)P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z \nonumber$
Equating these two expressions for $T$ gives us the number of $P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z$-valued products:
$n!P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z=n_A!n_B!n_C!\dots n_Z!\,C\left(n_A,n_B,n_C,\dots ,n_Z\right)P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z \nonumber$
and hence,
$C\left(n_A,n_B,n_C,\dots ,n_Z\right)=\frac{n!}{n_A!n_B!n_C!\dots n_Z!} \nonumber$
In the special case that $P_A=P_B=P_C=\dots =P_Z$, all of the products $P^{n_A}_AP^{n_B}_BP^{n_C}_C\dots P^{n_Z}_Z$ have the same value. Then, the probability of any set of outcomes, $\{n_A,n_B,n_C,\dots ,n_Z\}$, is proportional to $C\left(n_A,n_B,n_C,\dots ,n_Z\right)$.
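For computation, the polynomial (multinomial) coefficient can be evaluated directly from this formula. The short function below is an illustrative sketch; the function name and the example population set are our own choices, not part of the development above.

```python
from math import factorial

def multinomial_coefficient(populations):
    """Number of distinguishable combinations, n!/(n_A! n_B! ... n_Z!),
    for a population set given as a list of occupation numbers."""
    n = sum(populations)
    c = factorial(n)
    for n_i in populations:
        c //= factorial(n_i)   # each intermediate quotient is an integer
    return c

# Example: 10 trials distributed over four outcomes as {4, 3, 2, 1}.
print(multinomial_coefficient([4, 3, 2, 1]))  # 10!/(4! 3! 2! 1!) = 12600
```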
19.04: Stirling's Approximation
The polynomial coefficient, $C$, is a function of the factorials of large numbers. Since $N!$ quickly becomes very large as $N$ increases, it is often impractical to evaluate $N!$ from the definition,
$N!=\left(N\right)\left(N-1\right)\left(N-2\right)\dots \left(3\right)\left(2\right)\left(1\right) \nonumber$
Fortunately, an approximation, known as Stirling’s formula or Stirling’s approximation, is available. Stirling’s approximation is a product of factors. Depending on the application and the required accuracy, one or two of these factors can often be taken as unity. Stirling’s approximation is
$N!\approx N^N \left(2\pi N\right)^{1/2}\mathrm{exp}\left(-N\right)\mathrm{exp}\left(\frac{1}{12N}\right)\approx N^N\left(2\pi N\right)^{1/2}\mathrm{exp}\left(-N\right)\approx N^N\mathrm{exp}\left(-N\right) \nonumber$
In many statistical thermodynamic arguments, the important quantity is the natural logarithm of $N!$ or its derivative, ${d ~ { \ln N!\ }}/{dN}$. In such cases, the last version of Stirling’s approximation is usually adequate, even though it affords a rather poor approximation for $N!$ itself.
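The short sketch below (the values of $N$ are chosen only for illustration) compares the three versions of Stirling’s approximation with the exact factorial. It illustrates the point just made: even when the crudest version, $N^N\mathrm{exp}\left(-N\right)$, is a poor approximation to $N!$ itself, it approximates $\ln N!$ much more closely.

```python
from math import factorial, exp, pi, log

for N in (10, 50):
    exact = factorial(N)
    crude = N**N * exp(-N)                   # N! ~ N^N exp(-N)
    better = crude * (2 * pi * N) ** 0.5     # ... times (2 pi N)^(1/2)
    best = better * exp(1.0 / (12 * N))      # ... times exp(1/(12 N))
    print(N, crude / exact, better / exact, best / exact)  # ratios to exact N!
    print(N, log(crude) / log(exact))        # ln N! is reproduced much better
```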
19.05: Problems
1. Leland got a train set for Christmas. It came with seven rail cars. (We say that all seven cars are “distinguishable.”) Four of the rail cars are box cars and three are tank cars. If we distinguish between permutations in which the box cars are coupled (lined up) differently but not between permutations in which tank cars are coupled differently, how many ways can the seven cars be coupled so that all of the tank cars are together? What are they? What formula can we use to compute this number?
(Hint: We can represent one of the possibilities as $b_1b_2b_3b_4T$. This is one of the possibilities in which the first four cars behind the engine are all box cars. There are $4!$ such possibilities; that is, there are $4!$ possible permutations for placing the four box cars.)
2. If we don’t care about the order in which the box cars are coupled, and we don’t care about the order in which the tank cars are coupled, how many ways can the rail cars in problem 1 be coupled so that all of the tank cars are together? What are they? What formula can we use to compute this number?
3. If we distinguish between permutations in which either the box cars or the tank cars in problem 1 are ordered differently, how many ways can the rail cars be coupled so that all of the tank cars are together? What formula can we use to compute this number?
4. How many ways can all seven rail cars in problem 1 be coupled if the tank cars need not be together?
5. If, as in the previous problem, we distinguish between permutations in which any of the rail cars are ordered differently, how many ways can the rail cars be coupled so that not all of the tank cars are together?
6. If we distinguish between box cars and tank cars, but we do not distinguish one box car from another box car, and we do not distinguish one tank car from another tank car, how many ways can the rail cars in problem 1 be coupled?
7. If Leland gets five flat cars for his birthday, he will have four box cars, three tank cars and five flat cars. How many ways will Leland be able to couple (permute) these twelve rail cars?
8. If we distinguish between box cars and tank cars, between box cars and flat cars, and between tank cars and flat cars, but we do not distinguish one box car from another box car, and we do not distinguish one tank car from another tank car, and we do not distinguish one flat car from another flat car, how many ways can the rail cars in problem seven be coupled? What formula can we use to compute this number?
9. We are given four distinguishable marbles, labeled $A--D$, and two cups, labeled $1$ and $2$. We want to explore the number of ways we can put two marbles in cup $1$ and two marbles in cup $2$. This is the number of combinations, $C\left(2,2\right)$, for the population set $N_1=2$, $N_2=2$.
(a) One combination is ${\left[AB\right]}_1{\left[CD\right]}_2$. Find the remaining combinations. What is $C\left(2,2\right)$?
(b) There are four permutations for the combination given in (a):$\ {\left[AB\right]}_1{\left[CD\right]}_2$; ${\left[BA\right]}_1{\left[CD\right]}_2$; ${\left[AB\right]}_1{\left[DC\right]}_2$; ${\left[BA\right]}_1{\left[DC\right]}_2$. Find all of the permutations for each of the remaining combinations.
(c) How many permutations are there for each combination?
(d) Write down all of the possible permutations of marbles $A--D$. Show that there is a one-to-one correspondence with the permutations in (b).
(e) Show that the total number of permutations is equal to the number of combinations times the number of permutations possible for each combination.
10. We are given seven distinguishable marbles, labeled $A--G$, and two cups, labeled $1$ and $2$. We want to find the number of ways we can put three marbles in cup $1$ and four marbles in cup$\ 2$. That is, we seek $C\left(3,4\right)$, the number of combinations in which $N_1=3$ and $N_2=4$. ${\left[ABC\right]}_1{\left[DEFG\right]}_2$ is one such combination.
(a) How many different ways can these marbles be placed in different orders without exchanging any marbles between cup $1$ and cup $2$? (This is the number of permutations associated with this combination.)
(b) Find a different combination with $N_1=3$ and $N_2=4$.
(c) How many permutations are possible for the marbles in (b)? How many permutations are possible for any combination with $N_1=3$ and $N_2=4$?
(d) If $C\left(3,4\right)$ is the number of combinations in which $N_1=3$ and $N_2=4$, and if $P$ is the number of permutations for each such combination, what is the total number of permutations possible for 7 marbles?
(e) How else can one express the number of permutations possible for 7 marbles?
(f) Equate your conclusions in (d) and (e). Find $C\left(3,4\right)$.
11.
(a) Calculate the probabilities of 0, 1, 2, 3, and 4 heads in a series of four tosses of an unbiased coin. The event of 2 heads is $20\%$ of these five events. Note particularly the probability of the event: 2 heads in 4 tosses.
(b) Calculate the probabilities of 0, 1, 2, 3, ..., 8, and 9 heads in a series of nine tosses of an unbiased coin. The events of 4 heads and 5 heads comprise $20\%$ of these ten cases. Calculate the probability of 4 heads or 5 heads; i.e., the probability of being in the middle $20\%$ of the possible events.
(c) Calculate the probabilities of 0, 1, 2, 3, ..., 13, and 14 heads in a series of fourteen tosses of an unbiased coin. The events of 6 heads, 7 heads, and 8 heads comprise $20\%$ of these fifteen cases. Calculate the probability of 6, 7, or 8 heads; i.e., the probability of being in the middle $20\%$ of the possible events.
(d) What happens to the probabilities for the middle $20\%$ of possible events as the number of tosses becomes very large? How does this relate to the fraction heads in a series of tosses when the total number of tosses becomes very large?
12. Let the value of the outcome heads be one and the value of the outcome tails be zero. Let the “score” from a particular simultaneous toss of $n$ coins be
$\mathrm{score}=1\times \left(\frac{number\ of\ heads}{number\ of\ coins}\right)\ +0\times \left(\frac{number\ of\ tails}{number\ of\ coins}\right) \nonumber$
Let us refer to the distribution of scores from tosses of $n$ coins as the “$S_n$ distribution.”
(a) The $S_1$ distribution comprises two outcomes: $\mathrm{\{}$1 head, 0 tail$\mathrm{\}}$ and $\mathrm{\{}$0 head, 1 tail$\mathrm{\}}$.
What is the mean of the $S_1$ distribution?
(b) What is the variance of the $S_1$ distribution?
(c) What is the mean of the $S_n$ distribution?
(d) What is the variance of the $S_n$ distribution?
13. Fifty unbiased coins are tossed simultaneously.
(a) Calculate the probability of 25 heads and 25 tails.
(b) Calculate the probability of 23 heads and 27 tails.
(c) Calculate the probability of 3 heads and 47 tails.
(d) Calculate the ratio of your results for parts (a) and (b).
(e) Calculate the ratio of your results for parts (a) and (c).
14. For $N=3,\ 6$ and $10$, calculate
(a) The exact value of $N!$
(b) The value of $N!$ according to the approximation $N!\approx N^N \left(2\pi N\right)^{1/2}\mathrm{exp}\left(-N\right)\mathrm{exp}\left(\frac{1}{12N}\right) \nonumber$
(c) The value of N! according to the approximation $N!\approx N^N \left(2\pi N\right)^{1/2}\mathrm{exp}\left(-N\right) \nonumber$
(d) The value of N! according to the approximation $N!\approx N^N\mathrm{exp}\left(-N\right) \nonumber$
(e) The ratio of the value in (b) to the corresponding value in (a).
(f) The ratio of the value in (c) to the corresponding value in (a).
(g) The ratio of the value in (d) to the corresponding value in (a).
(h) Comment.
15. Find $d ~ \ln N! /dN$ using each of the approximations $N!\approx N^N \left(2\pi N\right)^{1/2} \mathrm{exp}\left(-N\right)\mathrm{exp}\left(\frac{1}{12N}\right)\approx N^N \left(2\pi N\right)^{1/2} \mathrm{exp}\left(-N\right)\approx N^N\mathrm{exp}\left(-N\right) \nonumber$
How do the resulting approximations for $d ~ \ln N! /dN$ compare to one another as $N$ becomes very large?
16. There are three energy levels available to any one molecule in a crystal of the substance. Consider a crystal containing $1000$ molecules. These molecules are distinguishable because each occupies a unique site in the crystalline lattice. How many combinations (microstates) are associated with the population set $N_1=800$, $N_2=150$, $N_3=50$?
20.01: The Independent-Molecule Approximation
In Chapter 18, our survey of quantum mechanics introduces the idea that a molecule can have any of an infinite number of discrete energies, which we can put in order starting with the smallest. We now turn our attention to the properties of a system composed of a large number of molecules. This multi-molecule system must obey the laws of quantum mechanics. Therefore, there exists a Schrödinger equation, whose variables include all of the inter-nucleus, inter-electron, and electron-nucleus distance and potential terms in the entire multi-molecule system. The relevant boundary conditions apply at the physical boundaries of the macroscopic system. The solutions of this equation include a set of infinitely many wavefunctions, ${\mathit{\Psi}}_{i,j}$, each describing a quantum mechanical state of the entire multi-molecule system. In general, the collection of elementary particles that can be assembled into a particular multi-molecule system can also be assembled into many other multi-molecule systems. For example, an equimolar mixture of $CO$ and $H_2O$ can be reassembled into a system comprised of equimolar $CO_2$ and $H_2$, or into many other systems containing mixtures of $CO$, $H_2O$, $CO_2$, and $H_2$. Infinitely many quantum-mechanical states are available to each of these multi-molecule systems.
For every such multi-molecule wavefunction, ${\mathit{\Psi}}_{i,j}$, there is a corresponding system energy, $E_i$. In general, the system energy, $E_i$, is ${\mathit{\Omega}}_i$-fold degenerate; there are ${\mathit{\Omega}}_i$ wavefunctions, ${\mathit{\Psi}}_{i,1}$, ${\mathit{\Psi}}_{i,2}$, ..., ${\mathit{\Psi}}_{i,{\mathit{\Omega}}_i}$, whose energy is $E_i$. The wavefunctions include all of the interactions among the molecules of the system, and the energy levels of the system reflect all of these interactions. While generating and solving this multi-molecule Schrödinger equation is straightforward in principle, it is completely impossible in practice.
Fortunately, we can model multi-molecule systems in another way. The primary focus of chemistry is the study of the properties and reactions of molecules. Indeed, the science of chemistry exists, as we know it, only because the atoms comprising a molecule stick together more tenaciously than molecules stick to one another. (Where this is not true, we get macromolecular materials like metals, crystalline salts, etc.) This occurs because the energies that characterize the interactions of atoms within a molecule are much greater than the energies that characterize the interaction of one molecule with another. Consequently, the energy of the system can be viewed as the sum of two terms. One term is a sum of the energies that the component molecules would have if they were all infinitely far apart. The other term is a sum of the energies of all of the intermolecular interactions, which is the energy change that would occur if the molecules were brought from a state of infinite separation to the state of interest.
In principle, we can describe a multi-molecule system in this way with complete accuracy. This description has the advantage that it breaks a very large and complex problem into two smaller problems, one of which we have already solved: In Chapter 18, we see that we can approximate the quantum-mechanical description of a molecule and its energy levels by factoring molecular motions into translational, rotational, vibrational, and electronic components. It remains only to describe the intermolecular interactions. When intramolecular energies are much greater than intermolecular-interaction energies, it may be a good approximation to ignore the intermolecular interactions altogether. This occurs when we describe ideal gas molecules; in the limit that a gas behaves ideally, the force between any two of its molecules is nil.
In Chapter 23, we return to the idea of multi-molecule wavefunctions and energy levels. Meanwhile we assume that intermolecular interactions can be ignored. This is a poor approximation for many systems. However, it is a good approximation for many others, and it enables us to keep our description of the system simple while we use molecular properties in our development of the essential ideas of statistical thermodynamics.
We focus on developing a theory that gives the macroscopic thermodynamic properties of a pure substance in terms of the energy levels available to its individual molecules. To begin, we suppose that we solve the Schrödinger equation for an isolated molecule. In this Schrödinger equation, the variables include the inter-nucleus, inter-electron, and electron-nucleus distance and potential terms that are necessary to describe the molecule. The solutions are a set of infinitely many wavefunctions, ${\psi }_{i,j}$, each describing a different quantum-mechanical state of an isolated molecule. We refer to each of the possible wavefunctions as a quantum state of the molecule. For every such wavefunction, there is a corresponding molecular energy, ${\epsilon }_i$. Every unique molecular energy, ${\epsilon }_i$, is called an energy level. Several quantum states can have the same energy. When two or more quantum states have the same energy, we say that they belong to the same energy level, and the energy level is said to be degenerate. In general, there are $g_i$ quantum states that we can represent by the $g_i$ wavefunctions, ${\psi }_{i,1}$, ${\psi }_{i,2}$, ..., ${\psi }_{i,g_i}$, each of whose energy is ${\epsilon }_i$. The number of quantum states that have the same energy is called the degeneracy of the energy level. Figure 1 illustrates the terms we use to describe the quantum states and energy levels available to a molecule.
In our development of classical thermodynamics, we find it convenient to express the value of a thermodynamic property of a pure substance as the change that occurs during a formal process that forms one mole of the substance, in its standard state, from its unmixed constituent elements, in their standard states. In developing statistical thermodynamics, we find it convenient to express the value of a molecular energy, ${\epsilon }_i$, as the change that occurs during a formal process that forms a molecule of the substance, in one of its quantum states, ${\psi }_{i,j}$, from its infinitely separated, stationary, constituent atoms. That is, we let the isolated constituent atoms be the reference state for the thermodynamic properties of a pure substance.
20.02: The Probability of an Energy Level at Constant N, V, and T
If only pressure–volume work is possible, the state of a closed, reversible system can be specified by specifying its volume and temperature. Since the system is closed, the number, $N$, of molecules is constant. Let us consider a closed, equilibrated, constant-volume, constant-temperature system in which the total number of molecules is very large. Let us imagine that we can monitor the quantum state of one particular molecule over a very long time. Eventually, we are able to calculate the fraction of the elapsed time that the molecule spends in each of the quantum states. We label the available quantum states with the wavefunction symbols, ${\psi }_{i,j}$.
We assume that the fraction of the time that a molecule spends in the quantum state ${\psi }_{i,j}$ is the same thing as the probability of finding the molecule in quantum state ${\psi }_{i,j}$. We denote this probability as $\rho \left({\psi }_{i,j}\right)$. To develop the theory of statistical thermodynamics, we assume that this probability depends on the energy, and only on the energy, of the quantum state ${\psi }_{i,j}$. Consequently, any two quantum states whose energies are the same have the same probability, and the $g_i$-fold degenerate quantum states, ${\psi }_{i,j}$, whose energies are ${\epsilon }_i$, all have the same probability. In our imaginary monitoring of the state of a particular molecule, we observe that the probabilities of two quantum states are the same if and only if their energies are the same; that is, we observe $\rho \left({\psi }_{i,j}\right)=\mathrm{\ }\rho \left({\psi }_{k,m}\right)$ if and only if $i=k$.
The justification for this assumption is that the resulting theory successfully models experimental observations. We can ask, however, why we might be led to make this assumption in the first place. We can reason as follows: The fact that we observe a definite value for the energy of the macroscopic system implies that quantum states whose energies are much greater than the average molecular energy must be less probable than quantum states whose energies are smaller. Otherwise, the sum of the energies of high-energy molecules would exceed the energy of the system. Therefore, we can reasonably infer that the probability of a quantum state depends on its energy. On the other hand, we can think of no plausible reason for a given molecule to prefer one quantum state to another quantum state that has the same energy.
This assumption means that a single function suffices to specify the probability of finding a given molecule in any quantum state, ${\psi }_{i,j}$, and the only independent variable is the quantum-state energy, ${\epsilon }_i$. We denote the probability of a single quantum state, ${\psi }_{i,j}$, whose energy is ${\epsilon }_i$, as $\rho \left({\epsilon }_i\right)$. Since this is the probability of each of the $g_i$-fold degenerate quantum states, ${\psi }_{i,j}$, that have energy ${\epsilon }_i$, the probability of finding a given molecule in any energy level, ${\epsilon }_i$, is $P\left({\epsilon }_i\right)=g_i\rho \left({\epsilon }_i\right)$. We find it convenient to introduce “$P_i$” to abbreviate this probability; that is, we let
$P_i=\sum^{g_i}_{j=1}{\rho \left({\psi }_{i,j}\right)}=P\left({\epsilon }_i\right)=g_i\rho \left({\epsilon }_i\right) \nonumber$
(the probability of energy level $\mathrm{\epsilon}_{\mathrm{i}}$)
There is a $P_i$ for every energy level ${\epsilon }_i$. $P_i$ must be the same for any molecule, since every molecule has the same properties. If the population set $\{N^{\textrm{⦁}}_1,\ N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \}$ characterizes the equilibrium system, the fraction of the molecules that have energy ${\epsilon }_i$ is ${N^{\textrm{⦁}}_i}/{N}$. (Elsewhere, an energy-level population set is often called a “distribution.” Since we define a distribution somewhat differently, we avoid this usage.) Since the fraction of the molecules in an energy level at any instant of time is the same as the fraction of the time that one molecule spends in that energy level, we have
$P_i=P\left({\epsilon }_i\right)=g_i\rho \left({\epsilon }_i\right)=\frac{N^{\textrm{⦁}}_i}{N} \nonumber$
As long as the system is at equilibrium, this fraction is constant. In Chapter 21, we find an explicit equation for the probability function, $\rho \left({\epsilon }_i\right)$.
The energy levels, ${\epsilon }_i$, depend on the properties of the molecules. In developing Boltzmann statistics for non-interacting molecules, we assume that the probability of finding a molecule in a particular energy level is independent of the number of molecules present in the system. While $P_i$ and $\rho \left({\epsilon }_i\right)$ depend on the energy level, ${\epsilon }_i$, neither depends on the number of molecules, $N$. If we imagine inserting a barrier that converts an equilibrated collection of molecules into two half-size collections, each of the new collections is still at equilibrium. Each contains half as many molecules and has half the total energy of the original. In our model, the fraction of the molecules in any given energy level remains constant. Consequently, the probabilities associated with each energy level remain constant. (In Chapter 25, we introduce Fermi-Dirac and Bose-Einstein statistics. When we must use either of these models to describe the system, $P_i$ is affected by rules for the number of molecules that can occupy an energy level.)
The number of molecules and the total energy are extensive properties and vary in direct proportion to the size of the system. The probability, $P_i$, is an intensive variable that is a characteristic property of the macroscopic system. $P_i$ is a state function. $P_i$ depends on ${\epsilon }_i$. So long as the thermodynamic variables that determine the state of the system remain constant, the ${\epsilon }_i$ are constant. For a given macroscopic system in which only pressure–volume work is possible, the quantum mechanical energy levels, ${\epsilon }_i$, are constant so long as the system volume and temperature are constant. However, the ${\epsilon }_i$ are quantum-mechanical quantities that depend on our specification of the molecule and on the boundary values in our specification of the system. If we change any molecular properties or the dimensions of the system, the probabilities, $P_i$, change.
20.03: The Population Sets of a System at Equilibrium at Constant N, V, and T
In developing Boltzmann statistics, we assume that we can tell different molecules of the same substance apart. We say that the molecules are distinguishable. This assumption is valid for molecules that occupy lattice sites in a crystal. In a crystal, we can specify a particular molecule by specifying its position in the lattice. In other systems, we may be unable to distinguish between different molecules of the same substance. Most notably, we cannot distinguish between two molecules of the same substance in the gas phase. The fact that gas molecules are indistinguishable, while we assume otherwise in developing Boltzmann statistics, turns out to be a problem that is readily overcome. We discuss this in Section 24.2.
We want to model properties of a system that contains $N$ identical, distinguishable, non-interacting molecules. The solutions of the Schrödinger equation presume fixed boundary conditions. This means that the volume of this $N$-molecule system is constant. We assume also that the temperature of the $N$-molecule system is constant. Thus, our goal is a theory that predicts the properties of a system when $N$, $V$, and $T$ are specified. When there are no intermolecular interactions, the energy of the system is just the sum of the energies of the individual molecules. If we know how the molecules are allocated among the energy levels, we can find the energy of the system. Letting $N_i$ be the population of the energy level ${\epsilon }_i$, any such allocation is a population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$. We have
$N=\sum^{\infty }_{i=1}{N_i} \nonumber$
and the system energy is
$E=\sum^{\infty }_{i=1}{N_i}{\epsilon }_i \nonumber$
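For a small, hypothetical population set and an assumed set of molecular energies (the numbers below are illustrative only), these two sums are evaluated directly:

```python
# Assumed, illustrative values: populations of the first few energy levels
# and the corresponding single-molecule energies (arbitrary energy units).
populations = [6, 3, 1]        # N_1, N_2, N_3
energies = [0.0, 1.0, 2.5]     # epsilon_1, epsilon_2, epsilon_3

N = sum(populations)                                               # N = sum of N_i
E = sum(N_i * eps_i for N_i, eps_i in zip(populations, energies))  # E = sum of N_i * epsilon_i

print(N, E)  # 10 molecules; system energy 5.5 in the assumed units
```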
Let us imagine that we can assemble a system with the molecules allocated among the energy levels in any way we please. Let $\{N^o_1,\ N^o_2,\dots ,N^o_i,\dots \}$ represent an initial population set that describes a system that we assemble in this way. This population set corresponds to a well-defined system energy. We imagine immersing the container in a constant-temperature bath. Since the system can exchange energy with the bath, the molecules of the system gain or lose energy until the system attains the temperature of the bath in which it is immersed. As this occurs, the populations of the energy levels change. A series of different population sets characterizes the state of the system as it evolves toward thermal equilibrium. When the system reaches equilibrium, the population sets that characterize it are different from the initial one, $\{N^o_1,\ N^o_2,\dots ,N^o_i,\dots \}$.
Evidently, the macroscopic properties of such a system also change with time. The changes in the macroscopic properties of the system parallel the changing energy-level populations. At thermal equilibrium, macroscopic properties of the system cease to undergo any further change. In Section 3.9, we introduce the idea that the most probable population set, which we denote as
$\left\{N^{\textrm{⦁}}_1,\ N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \right\} \nonumber$
or its proxy,
$\left\{NP\left({\epsilon }_1\right),NP\left({\epsilon }_2\right),\dots ,NP\left({\epsilon }_i\right),\dots \right\} \nonumber$
(where $N=N^{\textrm{⦁}}_1+N^{\textrm{⦁}}_2+...+N^{\textrm{⦁}}_i+...$), is the best prediction we can make about the outcomes in a future set of experiments in which we find the energy of each of $N$ different molecules at a particular instant. We hypothesize that the most probable population set specifies all of the properties of the macroscopic system in its equilibrium state. When we develop the logical consequences of this hypothesis, we find a theory that expresses macroscopic thermodynamic properties in terms of the energy levels available to individual molecules. In the end, the justification of this hypothesis is that it enables us to calculate thermodynamic properties that agree with experimental measurements made on macroscopic systems.
Our hypothesis asserts that the properties of the equilibrium state are the same as the properties of the system when it is described by the most probable population set. Evidently, we can predict the system’s equilibrium state if we can find the equilibrium $N^{\textrm{⦁}}_i$ values, and vice versa. To within an arbitrary factor representing its size, an equilibrated system can be completely described by its intensive properties. In the present instance, the fractions ${N^{\textrm{⦁}}_1}/{N}$, ${N^{\textrm{⦁}}_2}/{N}$, ..., ${N^{\textrm{⦁}}_i}/{N},\dots$ describe the equilibrated system to within the factor, $N$, that specifies its size. Since we infer that $P_i=P\left({\epsilon }_i\right)={N^{\textrm{⦁}}_i}/{N}$, the equilibrated system is also described by the probabilities $\left(P_1,P_2,\dots ,\ P_i,\dots \right)$.
Our hypothesis does not assert that the most-probable population set is the only population set possible at equilibrium. A very large number of other population sets may describe an equilibrium system at different instants of time. However, when its state is specified by any such population set, the macroscopic properties of the system are indistinguishable from the macroscopic properties of the system when its state is specified by the most-probable population set. The most-probable population set characterizes the equilibrium state of the system in the sense that we can calculate the properties of the equilibrium state of the macroscopic system by using the single-molecule energy levels and the most probable population set—or its proxy. The relationship between a molecular energy level, ${\epsilon }_i$, and its equilibrium population, $N^{\textrm{⦁}}_i$, is called the Boltzmann equation. From $P_i={N^{\textrm{⦁}}_i}/{N}$, we see that the Boltzmann equation specifies the probability of finding a given molecule in energy level ${\epsilon }_i$.
Although we calculate thermodynamic properties from the most probable population set, the population set that describes the system can vary from instant to instant while the system remains at equilibrium. The central limit theorem enables us to characterize the amount of variation that can occur. When $N$ is comparable to the number of molecules in a macroscopic system, the probability that variation among population sets can result in a macroscopically observable effect is vanishingly small. The hypothesis is successful because the most probable population set is an excellent proxy for any other population set that the equilibrium system is remotely likely to attain.
We develop the theory of statistical thermodynamics for $N$-molecule systems by considering the energy levels, ${\epsilon }_i$, available to a single molecule that does not interact with other molecules. Thereafter, we develop a parallel set of statistical thermodynamic results by considering the energy levels, ${\hat{E}}_i$, available to a system of $N$ molecules. These $N$-molecule-system energies can reflect the effects of any amount of intermolecular interaction. We can apply the same arguments to find that the Boltzmann equation also describes the equilibrium properties of systems in which intermolecular interactions are important. That is, the probability, $P_i\left({\hat{E}}_i\right)$, that an $N$-molecule system has energy ${\hat{E}}_i$ is the same function of ${\hat{E}}_i$ as the molecular-energy probability, $P_i=P\left({\epsilon }_i\right)$, is of ${\epsilon }_i$.
When we finish our development based on single-molecule energy levels, we understand nearly all of the ideas that we need in order to complete the development for the energies of an $N$-molecule system. This development is an elegant augmentation of the basic argument called the ensemble treatment or the ensemble method. The ensemble treatment is due to J. Willard Gibbs; we discuss it in Chapter 23. For now, we simply note that our approach involves no wasted effort. When we discuss the ensemble method, we use all of the ideas that we develop in this chapter and the next. The extension of these arguments that is required for the ensemble treatment is so straightforward as to be (almost) painless.
20.04: How can Infinitely Many Probabilities Sum to Unity?
There are an infinite number of successively greater energies for a quantum mechanical system. We infer that the probability that a given energy level is occupied is a property of the energy level. Each of the probabilities must be between 0 and 1. When we sum the fixed probabilities associated with the energy levels, the sum contains an infinite number of terms. By the nature of probability, the sum of this infinite number of terms must be one:
\begin{align*} 1 &=P_1+P_2+\dots +P_i+\dots \\[4pt] &=P\left({\epsilon }_1\right)+P\left({\epsilon }_2\right)+\dots +P\left({\epsilon }_i\right)+\dots \\[4pt] &=\sum^{\infty }_{i=1}{P\left({\epsilon }_i\right)} \end{align*}
That is, the sum of the probabilities is an infinite series, which must converge: The sum of all of the occupancy probabilities must be unity. This can happen only if all later members of the series are very small. In the remainder of this chapter, we explore some of the thermodynamic ramifications of these facts. In the next chapter, we use this relationship to find the functional dependence of the $P_i$ on the energy levels, ${\epsilon }_i$. To obtain these results, we need to think further about the probabilities associated with the various population sets that can occur. Also, we need to introduce a new fundamental postulate.
To focus on the implications of this sum of probabilities, let us review geometric series. A geometric series is a sum of terms, in which each successive term is a multiple of its predecessor. A geometric series is an infinite sum that can converge:
$T=a+ar+ar^2+\dots +ar^i\dots =a\left(1+r+r^2+\dots +r^i+\dots \right)=a+a\sum^{\infty }_{i=1}{r^i} \nonumber$
Successive terms approach zero if $\left|r\right|<1$. If $\left|r\right|\ge 1$, successive terms do not become smaller, and the sum does not have a finite limit. If $\left|r\right|\ge 1$, we say that the infinite series diverges.
We can multiply an infinite geometric series by its constant factor to obtain
\begin{align*} rT &=ar+ar^2+ar^3+\dots +ar^i+\dots \\[4pt] &=a\left(r+r^2+r^3+\dots +r^i+\dots \right) \\[4pt] &=a\sum^{\infty }_{i=1}{r^i} \end{align*}
If $\left|r\right|<1$, we can subtract and find the value of the infinite sum: $T-rT=a \nonumber$ so that $T={a}/{\left(1-r\right)} \nonumber$
In a geometric series, the ratio of two successive terms is ${r^{n+1}}/{r^n}=r$. The condition of convergence for a geometric series can also be written as $\left|\frac{r^{n+1}}{r^n}\right|<1 \nonumber$
We might anticipate that any other series also converges if its successive terms become smaller at least as fast as those of a geometric series. In fact, this is true and is the basis for the ratio test for convergence of an infinite series. If we represent successive terms in an infinite series as $t_i$, their sum is $T=\sum^{\infty }_{i=0}{t_i} \nonumber$
The ratio test is a theorem which states that the series converges, and $T$ has a finite value, if
${\mathop{\mathrm{lim}}_{n\to \infty } \left|\frac{t_{n+1}}{t_n}\right|<1\ } \nonumber$
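Both statements are easy to verify numerically. The sketch below (the values of $a$ and $r$ are assumed, illustrative choices with $\left|r\right|<1$) compares a partial sum of a geometric series with $a/\left(1-r\right)$ and shows that the ratio of successive terms satisfies the ratio-test condition.

```python
a, r = 2.0, 0.4   # assumed values; any |r| < 1 gives a convergent series

terms = [a * r**i for i in range(50)]   # t_i = a r^i
partial_sum = sum(terms)

print(partial_sum, a / (1 - r))   # the partial sum is very close to a/(1-r)
print(terms[11] / terms[10])      # ratio of successive terms is r = 0.4 < 1
```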
One of our goals is to discover the relationship between the energy, ${\epsilon }_i$, of a quantum state and the probability that a molecule will occupy one of the quantum states that have this energy, $P_i=g_i\rho \left({\epsilon }_i\right)$. When we do so, we find that the probabilities for all of the quantum mechanical systems that we discuss in Chapter 18 satisfy the ratio test.
In a collection of distinguishable independent molecules at constant $N$, $V$, and $T$, the probability that a randomly selected molecule has energy ${\epsilon }_i$ is $P_i$; we have $1=P_1+P_2+\dots +P_i+\dots$. At any instant, every molecule in the $N$-molecule system has a specific energy, and the state of the system is described by a population set, $\{N_1,\ N_2,\dots ,N_i,\dots .\}$, wherein $N_i$ can have any value in the range $0\le N_i\le N$, subject to the condition that
$N=\sum^{\infty }_{i=1}{N_i} \nonumber$
The probabilities that we assume for this system of molecules have the properties we assume in Chapter 19 where we find the total probability sum by raising the sum of the energy-level probabilities to the $N^{th}$ power.
$1={\left(P_1+P_2+\dots +P_i+\dots \right)}^N=\sum_{\{N_i\}}{\frac{N!}{N_1!N_2!\dots N_i!\dots }}P^{N_1}_1P^{N_2}_2\dots P^{N_i}_i\dots \nonumber$
The total-probability sum is over all possible population sets, $\{N_1,\ N_2,\dots ,N_i,\dots \}$, which we abbreviate to $\{N_i\}$ when indicating the range of the summation. Each term in this sum represents the probability of the corresponding population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$. At any given instant, one of the possible population sets describes the way that the molecules of the physical system are apportioned among the energy levels. The corresponding term in the total probability sum represents the probability of this apportionment. It is not necessary that all of the energy levels be occupied. We can have $N_k=0$, in which case $P^{N_k}_k=P^0_k=1$ and $N_k!=1$. Energy levels that are not occupied have no effect on the probability of a population set. The unique population set
$\{N^{\textrm{⦁}}_1,\ N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \} \nonumber$
that we conjecture to characterize the equilibrium state is represented by one of the terms in this total probability sum. We want to focus on the relationship between a term in the total probability sum and the corresponding state of the physical system.
Each term in the total probability sum includes a probability factor, $P^{N_1}_1P^{N_2}_2\dots P^{N_i}_i\dots$. This factor is the probability that each energy level ${\epsilon }_i$ is occupied by $N_i$ molecules. This term is not affected by our assumption that the molecules are distinguishable. The probability factor is multiplied by the polynomial coefficient
$\frac{N!}{N_1!N_2!\dots N_i!\dots } \nonumber$
This factor is the number of combinations of distinguishable molecules that arise from the population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$. It is the number of ways that the $N$ distinguishable molecules can be assigned to the available energy levels so that $N_1$ of them are in energy level, ${\epsilon }_1$, etc.
The combinations for the population set {3,2} are shown in Figure 2.
The expression for the number of combinations takes the form it does only because the molecules can be distinguished from one another. To emphasize this point, let us find the number of combinations using the method we develop in Chapter 19. Briefly recapitulated, the argument is this:
1. We can permute the $N$ molecules in $N!$ ways. If we were to distinguish (as different combinations) any two permutations of all of the molecules, this would also be the number of combinations.
2. In fact, however, we do not distinguish between different permutations of those molecules that are assigned to the same energy level. If the $N_1$ molecules assigned to the first energy level are $B$, $C$, $Q$, $\dots$, $X$, we do not distinguish the permutation $BCQ\dots X$ from the permutation $CBQ\dots X$ or from any other permutation of these $N_1$ molecules. Then the complete set of $N!$ permutations contains a subset of $N_1!$ permutations, all of which are equivalent because they have the same molecules in the first energy level. So the total number of permutations, $N!$, over-counts the number of combinations by a factor of $N_1!$. We can correct for this over-count by dividing by $N_1!$. That is, after correcting for the over-counting for the $N_1$ molecules in the first energy level, the number of combinations is ${N!}/{N_1!}$. (If all $N$ of the molecules were in the first energy level, there would be only one combination. We would have $N=N_1$, and the number of combinations calculated from this formula would be ${N!}/{N!}=1$, as required.)
3. The complete set of $N!$ permutations also includes $N_2!$ permutations of the $N_2$ molecules in the second energy level. In finding the number of combinations, we want to include only one of these permutations, so correcting for the over-counting due to both the $N_1$ molecules in the first energy level and the $N_2$ molecules in the second energy level gives $\frac{N!}{N_1!N_2!} \nonumber$
4. Continuing this argument through all of the occupied energy levels, we see that the total number of combinations is
$C\left(N_1,N_2,\dots ,N_i,\dots \right)=\frac{N!}{N_1!N_2!\dots N_i!\dots } \nonumber$
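A short Python sketch can make this bookkeeping concrete: it enumerates every population set for a small hypothetical system, computes the polynomial coefficient $N!/\left(N_1!N_2!\dots \right)$ for each, and confirms that the corresponding probability terms sum to one. The three energy-level probabilities used here are arbitrary illustrative values, not quantities taken from the text.

```python
import math

def population_sets(N, m):
    """All population sets (N_1, ..., N_m) of nonnegative integers summing to N."""
    if m == 1:
        yield (N,)
        return
    for n1 in range(N + 1):
        for rest in population_sets(N - n1, m - 1):
            yield (n1,) + rest

def multinomial(ns):
    """N! / (N_1! N_2! ... N_m!) -- the number of combinations for a population set."""
    c = math.factorial(sum(ns))
    for n in ns:
        c //= math.factorial(n)
    return c

P = [0.5, 0.3, 0.2]   # hypothetical energy-level probabilities, P_1 + P_2 + P_3 = 1
N = 4                 # number of distinguishable molecules

total = 0.0
for ns in population_sets(N, len(P)):
    term = multinomial(ns)
    for n_i, p_i in zip(ns, P):
        term *= p_i ** n_i
    total += term

print(total)   # 1.0 (to rounding), since this is the expansion of (P_1 + P_2 + P_3)^N
```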
Because there are infinitely many energy levels and probabilities, $P_i$, there are infinitely many terms in the total-probability sum. Every energy available to the macroscopic system is represented by one or more terms in this total-probability sum. Since there is no restriction on the energy levels that can be occupied, there are an infinite number of such system energies. There is an enormously large number of terms each of which corresponds to an enormously large system energy. Nevertheless, the sum of all of these terms must be one. The $P_i$ form a convergent series, and the total probability sum must sum to unity.
Just as the $P_i$ series can converge only if the probabilities of high molecular energies become very small, so the total probability sum can converge only if the probabilities of high system energies become very small. If a population set has $N_i$ molecules in the $i^{th}$ energy level, the probability of that population set is proportional to $P^{N_i}_i$. We see therefore, that the probability of a population set in which there are many molecules in high energy levels must be very small. Terms in the total probability sum that correspond to population sets with many molecules in high energy levels must be negligible. Equivalently, at a particular temperature, macroscopic states in which the system energy is anomalously great must be exceedingly improbable.
What terms in the total probability sum do we need to consider? Evidently from among the infinitely many terms that occur, we can select a finite subset whose sum is very nearly one. If there are many terms that are small and nearly equal to one another, the number of terms in this finite subset could be large. Nevertheless, we can see that terms in this subset must involve the largest possible $P_i$ values raised to the smallest possible powers, $N_i$, consistent with the requirement that the $N_i$ sum to $N$.
If an equilibrium macroscopic system could have only one population set, the probability of that population set would be unity. Could an equilibrium system be characterized by two or more population sets for appreciable fractions of an observation period? Would this require that the macroscopic system change its properties with time as it jumps from one population set to another? Evidently, it would not, since our observations of macroscopic systems show that the equilibrium properties are unique. A system that wanders between two (or more) macroscopically distinguishable states cannot be at equilibrium. We are forced to the conclusion that, if a macroscopic equilibrium system has multiple population sets with non-negligible probabilities, the macroscopic properties associated with each of these population sets must be indistinguishably similar. (The alternative is to abandon the theory, which is useful only if its microscopic description of a system makes useful predictions about the system’s macroscopic behavior.)
To be a bit more precise about this, we recognize that our theory also rests on another premise: Any intensive macroscopic property of many independent molecules depends on the energy levels available to an individual molecule and the fraction of the molecules that populate each energy level. The average energy is a prime example. For the population set $\{N_1,\ N_2,\dots ,N_i,\dots .\}$, the average molecular energy is
$\overline{\epsilon }=\sum^{\infty }_{i=1}{\left(\frac{N_i}{N}\right)}{\epsilon }_i \nonumber$
We recognize that many population sets may contribute to the total probability sum at equilibrium. If we calculate essentially the same $\overline{\epsilon }$ from each of these contributing population sets, then all of the contributing population sets correspond to indistinguishably different macroscopic energies. We see in the next section that the central limit theorem guarantees that this happens whenever $N$ is as large as the number of molecules in a macroscopic system.
20.06: The Most Probable Population Set at Constant N, V, and T
We are imagining that we can examine a collection of $N$ distinguishable molecules and determine the energy of each molecule in the collection at any particular instant. If we do so, we find the population set, $\{N_1,\ N_2,\dots ,N_i,\dots .\}$, that characterizes the system at that instant. In Section 3.9, we introduce the idea that the most probable population set, $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \}$, or its proxy, $\{NP\left({\epsilon }_1\right),NP\left({\epsilon }_2\right),\dots ,NP\left({\epsilon }_i\right),\dots .\}$, is the best prediction we can make about the outcome of a future replication of this measurement. In Section 20.2, we hypothesize that the properties of the system when it is characterized by the most probable population set are indistinguishable from the properties of the system at equilibrium.
Now let us show that this hypothesis is implied by the central limit theorem. We suppose that the population set that characterizes the system varies from instant to instant and that we can find this population set at any given instant. The population set that we find at a particular instant comprises a random sample of $N$ molecular energies. For this sample, we can find the average energy from
$\overline{\epsilon }=\sum^{\infty }_{i=1}{\left(\frac{N_i}{N}\right)}{\epsilon }_i \nonumber$
The expected value of the molecular energy is $\left\langle \epsilon \right\rangle =\sum^{\infty }_{i=1}{P_i{\epsilon }_i} \nonumber$
It is important that we remember that $\overline{\epsilon }$ and $\left\langle \epsilon \right\rangle$ are not the same thing. There is a distribution of $\overline{\epsilon }$ values, one $\overline{\epsilon }$ value for each of the possible population sets $\{N_1,\ N_2,\dots ,N_i,\dots .\}$. In contrast, when $N$, $V$, and $T$ are fixed, the expected value, $\left\langle \epsilon \right\rangle$, is a constant; the value of $\left\langle \epsilon \right\rangle$ is completely determined by the values of the variables that determine the state of the system and fix the probabilities $P_i$. If our theory is to be useful, the value of $\left\langle \epsilon \right\rangle$ must be the per-molecule energy that we observe for the macroscopic system we are modeling.
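The distinction between $\overline{\epsilon }$ and $\left\langle \epsilon \right\rangle$ is easy to illustrate numerically. In the Python sketch below, each "observation" draws $N$ molecular energies from a hypothetical three-level distribution (the energies and probabilities are arbitrary illustrative values); the sample average $\overline{\epsilon }$ fluctuates from draw to draw, but it clusters ever more tightly around the fixed value $\left\langle \epsilon \right\rangle$ as $N$ grows.

```python
import random

energies = [0.0, 1.0, 2.0]   # hypothetical molecular energy levels
P        = [0.6, 0.3, 0.1]   # their fixed probabilities at the given N, V, T

expected = sum(p * e for p, e in zip(P, energies))   # <epsilon>, a constant of the system

random.seed(1)
for N in (10, 1_000, 100_000):
    sample = random.choices(energies, weights=P, k=N)   # one instantaneous population set
    avg = sum(sample) / N                               # epsilon-bar for that population set
    print(f"N = {N:>7}   epsilon-bar = {avg:.4f}   <epsilon> = {expected:.4f}")
```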
According to the central limit theorem, the average energy of a randomly selected sample, $\overline{\epsilon }$, approaches the expected value for the distribution, $\left\langle \epsilon \right\rangle$, as the number of molecules in the sample becomes arbitrarily large. In the present instance, we hypothesize that the most probable population set, or its proxy, characterizes the equilibrium system. When $N$ is sufficiently large, this hypothesis implies that the probability of the $i^{th}$ energy level is given by $P_i={N^{\textrm{⦁}}_i}/{N}$. Then the expected value of a molecular energy is
$\left\langle \epsilon \right\rangle =\sum^{\infty }_{i=1}{P_i{\epsilon }_i}=\sum^{\infty }_{i=1}{\left(\frac{N^{\textrm{⦁}}_i}{N}\right){\epsilon }_i} \nonumber$
Since the central limit theorem asserts that $\overline{\epsilon }$ approaches $\left\langle \epsilon \right\rangle$ as $N$ becomes arbitrarily large:
$0={\mathop{\mathrm{lim}}_{N\to \infty } \left(\overline{\epsilon }-\left\langle \epsilon \right\rangle \ \right)\ }={\mathop{\mathrm{lim}}_{N\to \infty } \sum^{\infty }_{i=1}{\left(\frac{N_i}{N}-P_i\right)}\ }{\epsilon }_i={\mathop{\mathrm{lim}}_{N\to \infty } \sum^{\infty }_{i=1}{\left(\frac{N_i}{N}-\frac{N^{\textrm{⦁}}_i}{N}\right)}{\epsilon }_i\ } \nonumber$
One way for the limit of this sum to be zero is for the limit of every individual term to be zero. If the ${\epsilon }_i$ were arbitrary, this would be the only way that the sum could always be zero. However, the ${\epsilon }_i$ and the $P_i$ are related, so we might think that the sum is zero because of these relationships.
To see that the limit of every individual term must in fact be zero, we devise a new distribution. We assign a completely arbitrary number, $X_i$, to each energy level. Now the $i^{th}$ energy level is associated with an $X_i$ as well as an ${\epsilon }_i$. We have an $X$ distribution as well as an energy distribution. We can immediately calculate the expected value of $X$. It is
$\left\langle X\right\rangle =\sum^{\infty }_{i=1}{P_iX_i} \nonumber$
When we find the population set $\{N_1,\ N_2,\dots ,N_i,\dots .\}$, we can calculate the corresponding average value of $X$. It is $\overline{X}=\sum^{\infty }_{i=1}{\left(\frac{N_i}{N}\right)}X_i \nonumber$
The central limit theorem applies to any distribution. So, it certainly applies to the $X$ distribution; the average value of $X$ approaches the expected value of $X$ as $N$ becomes arbitrarily large:
$0={\mathop{\mathrm{lim}}_{N\to \infty } \left(\overline{X}-\left\langle X\right\rangle \ \right)\ }={\mathop{\mathrm{lim}}_{N\to \infty } \sum^{\infty }_{i=1}{\left(\frac{N_i}{N}-P_i\right)}\ }X_i={\mathop{\mathrm{lim}}_{N\to \infty } \sum^{\infty }_{i=1}{\left(\frac{N_i}{N}-\frac{N^{\textrm{⦁}}_i}{N}\right)}X_i\ } \nonumber$
Now, because the $X_i$ can be chosen completely arbitrarily, the only way that the limit of this sum can always be zero is that every individual term becomes zero.
In the limit as $N\to \infty$, we find that
${N_i}/{N}\to {N^{\textrm{⦁}}_i}/{N} \nonumber$
As the number of molecules in the equilibrium system becomes arbitrarily large, the fraction of the molecules in each energy level at an arbitrarily selected instant approaches the fraction in that energy level in the equilibrium-characterizing most-probable population set, $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,\dots N^{\textrm{⦁}}_i\dots \}$. In other words, the only population sets that we have any significant chance of observing in a large equilibrium system are population sets whose occupation fractions, ${N_i}/{N}$, are all very close to those, ${N^{\textrm{⦁}}_i}/{N}$, in the equilibrium-characterizing population set. Estimating $P_i$ as the ratio ${N_i}/{N}$ gives essentially the same result whichever of these population sets we use. Below, we see that the ${\epsilon }_i$ and the $P_i$ determine the thermodynamic properties of the system. Consequently, when we calculate any observable property of the macroscopic system, each of these population sets gives the same result.
Since the only population sets that we have a significant chance of observing are those for which
${N_i}/{N}\approx {N^{\textrm{⦁}}_i}/{N} \nonumber$
we frequently say that we can ignore all but the most probable population set. What we have in mind is that the most probable population set is the only one we need in order to calculate the macroscopic properties of the equilibrium system. We are incorrect, however, if we allow ourselves to think that the most probable population set is necessarily much more probable than any of the others. Nor does the fact that the ${N_i}/{N}$ are all very close to the ${N^{\textrm{⦁}}_i}/{N}$ mean that the $N_i$ are all very close to the $N^{\textrm{⦁}}_i$. Suppose that the difference between the two ratios is ${10}^{-10}$. If $N={10}^{20}$, the difference between $N_i$ and $N^{\textrm{⦁}}_i$ is ${10}^{10}$, which probably falls outside the range of values that we usually understand by the words “very close.”
We develop a theory that includes a mathematical model for the probability that a molecule has any one of its quantum-mechanically possible energies. It turns out that we are frequently interested in macroscopic systems in which the number of energy levels greatly exceeds the number of molecules. For such systems, we find $NP_i\ll 1$, and it is no longer possible to say that a single most-probable population set, $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,\dots N^{\textrm{⦁}}_i,\dots \}$, describes the equilibrium state of the system. When it is very unlikely that any energy level is occupied by more than one molecule, the probability of any population set in which any $N_i$ is greater than one becomes negligibly small. We can approximate the total probability sum as
$1={\left(P_1+P_2+\dots +P_i+\dots \right)}^N\approx \sum_{\{N_i\}}{N!}P^{N_1}_1P^{N_2}_2\dots P^{N_i}_i\dots \nonumber$
However, the idea that the proxy, $\{NP\left({\epsilon }_1\right),NP\left({\epsilon }_2\right),\dots ,NP\left({\epsilon }_i\right),\dots .\}$, describes the equilibrium state of the system remains valid. In these circumstances, a great many population sets can have essentially identical properties; the properties calculated from any of these are indistinguishable from each other and indistinguishable from the properties calculated from the proxy. Since the equilibrium properties are fixed, the value of these extended products is fixed. For any of the population sets available to such a system at equilibrium, we have
$P^{N_1}_1P^{N_2}_2\dots P^{N_i}_i\dots =P^{{NP}_1}_1P^{{NP}_2}_2\dots P^{{NP}_i}_i\dots =\mathrm{constant} \nonumber$
It follows that, for some constant, $c$, we have
$c=\sum^{\infty }_{i=1}{NP_i{ \ln P_i\ }}=N\sum^{\infty }_{i=1}{P_i{ \ln P_i\ }} \nonumber$
As our development proceeds, we see that the probability of finding a molecule in an energy level is the central feature of the theory.
20.07: The Microstates of a Given Population Set
Thus far, we have considered only the probabilities associated with the assignments of distinguishable molecules to the allowed energy levels. In Section 20.2, we introduce the hypothesis that all of the $g_i$ degenerate quantum states with energy $\epsilon_i$ are equally probable, so that the probability that a molecule has energy $\epsilon _i$ is $P_i=P\left(\epsilon _i\right)=g_i\rho \left(\epsilon _i\right)$. Making this substitution, the total probability sum becomes
\begin{align*} 1&=\left(P_1+P_2+\dots +P_i+\dots \right)^N \\[4pt] &=\sum_{\left\{N_i\right\}}{\frac{N!}{N_1!N_2!\dots N_i!\dots }}P^{N_1}_1P^{N_2}_2\dots P^{N_i}_i\dots \\[4pt] &=\sum_{\left\{N_i\right\}}{\frac{N!g^{N_1}_1g^{N_2}_2\dots g^{N_i}_i\dots }{N_1!N_2!\dots N_i!\dots }}{\rho \left(\epsilon _1\right)}^{N_1}{\rho \left(\epsilon _2\right)}^{N_2}\dots {\rho \left(\epsilon _i\right)}^{N_i}\dots \\[4pt] &=\sum_{\left\{N_i\right\}}{N!} \prod^{\infty}_{i=1} \left(\frac{g^{N_i}_i}{N_i!}\right) \rho \left(\epsilon_i\right)^{N_i} \\[4pt] &=\sum_{\left\{N_i\right\}} W\prod^{\infty }_{i=1}\rho \left(\epsilon _i\right)^{N_i} \end{align*}
where we use the notation
$a_1\times a_2\times \dots \times a_i\times \dots \times a_{\omega }=\prod^{\omega }_{i=1}{a_i} \nonumber$
for extended products and introduce the function
\begin{align*} W &= W\left(N_i,g_i\right) \\[4pt] &=W\left(N_1,g_1,N_2,g_2,\dots ,N_i,g_i,\dots \right) \\[4pt] &=N!\prod^{\infty }_{i=1}{\left(\frac{g^{N_i}_i}{N_i!}\right)} \\[4pt] &= C\left(N_1,N_2,\dots ,N_i,\dots \right)\prod^{\infty }_{i=1}{g^{N_i}_i} \end{align*}
For reasons that become clear later, $W$ is traditionally called the thermodynamic probability. This name is somewhat unfortunate, because $W$ is distinctly different from an ordinary probability.
In Section 20.5, we note that $P^{N_1}_1P^{N_2}_2\dots P^{N_i}_i$ is the probability that $N_i$ molecules occupy each of the energy levels $\epsilon _i$ and that ${N!}/{\left(N_1!N_2!\dots N_i!\dots \right)}$ is the number of combinations of distinguishable molecules that arise from the population set $\{N_1,N_2,\dots ,N_i,\dots \}$. Now we observe that the extended product
${\rho \left(\epsilon _1\right)}^{N_1}{\rho \left(\epsilon _2\right)}^{N_2}\dots {\rho \left(\epsilon _i\right)}^{N_i}\dots . \nonumber$
is the probability of any one assignment of the distinguishable molecules to quantum states such that $N_i$ molecules are in quantum states whose energies are $\epsilon _i$. Since a given molecule of energy $\epsilon _i$ can be in any of the $g_i$ degenerate quantum states, the probability that it is in the energy level $\epsilon _i$ is $g_i$-fold greater than the probability that it is in any one of these quantum states.
Microstates
We call a particular assignment of distinguishable molecules to the available quantum states a microstate. For any population set, there are many combinations. When energy levels are degenerate, each combination gives rise to many microstates. The factor ${\rho \left(\epsilon _1\right)}^{N_1}{\rho \left(\epsilon _2\right)}^{N_2}\dots {\rho \left(\epsilon _i\right)}^{N_i}\dots .$ is the probability of any one microstate of the population set $\{N_1,N_2,\dots ,N_i,\dots \}$. Evidently, the thermodynamic probability
$W=N!\prod^{\infty }_{i=1}{\left(\frac{g^{N_i}_i}{N_i!}\right)} \label{micro}$
is the total number of microstates of that population set.
To see directly that the number of microstates is dictated by Equation \ref{micro}, let us consider the number of ways we can assign $N$ distinguishable molecules to the quantum states when the population set is $\{N_1,N_2,\dots ,N_i,\dots \}$ and energy level $\epsilon _i$ is $g_i$-fold degenerate. We begin by assigning the $N_1$ molecules in energy level $\epsilon _1$. We can choose the first molecule from among any of the $N$ distinguishable molecules and can choose to place it in any of the $g_1$ quantum states whose energy is $\epsilon _1$. The number of ways we can make these choices is ${Ng}_1$. We can choose the second molecule from among the $N-1$ remaining distinguishable molecules. In Boltzmann statistics, we can place any number of molecules in any quantum state, so there are again $g_1$ quantum states in which we can place the second molecule. The total number of ways we can place the second molecule is $\left(N-1\right)g_1$.
The number of ways the first and second molecules can be chosen and placed is therefore $N\left(N-1\right)g^2_1$. We find the number of ways that successive molecules can be placed in the quantum states of energy $\epsilon _1$ by the same argument. The last molecule whose energy is $\epsilon _1$ can be chosen from among the $\left(N-N_1+1\right)$ remaining molecules and placed in any of the $g_1$ quantum states. The total number of ways of placing the $N_1$ molecules in energy level $\epsilon _1$ is $N\left(N-1\right)\left(N-2\right)\dots \left(N-N_1+1\right)g^{N_1}_1$.
This total includes all possible orders for placing every set of $N_1$ distinguishable molecules into every possible set of quantum states. However, the order doesn’t matter; the only thing that affects the state of the system is which molecules go into which quantum state. (When we consider all of the ways our procedure puts all of the molecules into any of the quantum states, we find that any assignment of molecules $A$, $B$, and $C$ to any particular set of quantum states occurs six times. Selections in the orders $A$,$B$,$C$; $A$,$C$,$B$; $B$,$A$,$C$; $B$,$C$,$A$; $C$,$A$,$B$; and $C$,$B$,$A$ all put the same molecules in the same quantum states.) There are $N_1!$ orders in which our procedure chooses the $N_1$ molecules; to correct for this, we must divide by $N_1!$, so that the total number of assignments we want to include in our count is
$N\left(N-1\right)\left(N-2\right)\dots \left(N-N_1+1\right)g^{N_1}_1/N_1! \nonumber$
The first molecule that we assign to the second energy level can be chosen from among the $N-N_1$ remaining molecules and placed into any of the $g_2$ quantum states whose energy is $\epsilon _2$. The last one can be chosen from among the remaining $\left(N-N_1-N_2+1\right)$ molecules. The number of assignments of the $N_2$ molecules to $g_2$-fold degenerate quantum states whose energy is $\epsilon _2$ is
$\left(N-N_1\right)\left(N-N_1-1\right)\dots \left(N-N_1-N_2+1\right)g^{N_2}_2/N_2! \nonumber$
When we consider the number of assignments of molecules to quantum states with energies $\epsilon _1$ and $\epsilon _2$ we have
$N\left(N-1\right)\dots \left(N-N_1+1\right)\left(N-N_1\right)\left(N-N_1-1\right)\dots \nonumber$ $\times \left(N-N_1-N_2+1\right)\left(\frac{g^{N_1}_1}{N_1!}\right)\left(\frac{g^{N_2}_2}{N_2!}\right) \nonumber$
Let the last energy level to contain any molecules be $\epsilon _{\omega }$. The number of ways that the $N_{\omega }$ molecules can be assigned to the quantum states with energy $\epsilon _{\omega }$ is $N_{\omega }\left(N_{\omega }-1\right)\dots \left(1\right)g^{N_{\omega }}_{\omega }/N_{\omega }!$ The total number of microstates for the population set $\{N_1,N_2,\dots ,N_i,\dots \}$ becomes
$N\left(N-1\right)\dots \left(N-N_1\right)\left(N-N_1-1\right)\dots \nonumber$ $\times \left(N_{\omega }\right)\left(N_{\omega }-1\right)\dots \left(1\right)\prod^{\infty }_{i=1}{\left(\frac{g^{N_i}_i}{N_i!}\right)}=N!\prod^{\infty }_{i=1}{\left(\frac{g^{N_i}_i}{N_i!}\right)} \nonumber$
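This counting argument can be checked by brute force for a system small enough to enumerate. The Python sketch below takes a hypothetical system of three distinguishable molecules, two energy levels with degeneracies $g_1=2$ and $g_2=1$, and the population set $\{2,1\}$ (the same population set considered with Figure 3 below); it counts every assignment of molecules to quantum states directly and compares the count with $N!\prod{\left(g^{N_i}_i/N_i!\right)}$.

```python
import math
from itertools import product

g  = (2, 1)    # degeneracies of the two energy levels
Ns = (2, 1)    # population set {N_1, N_2} = {2, 1}
N  = sum(Ns)   # three distinguishable molecules

# Label each quantum state by (level, sub-state): level 1 has two states, level 2 has one.
states = [(lvl, k) for lvl, deg in enumerate(g) for k in range(deg)]

# Brute force: examine every way of placing the N distinguishable molecules in quantum states.
count = 0
for assignment in product(states, repeat=N):
    occupancy = tuple(sum(1 for (lvl, _) in assignment if lvl == i) for i in range(len(g)))
    if occupancy == Ns:
        count += 1

# Closed form: W = N! * prod(g_i**N_i / N_i!)
W = math.factorial(N)
for gi, ni in zip(g, Ns):
    W = W * gi**ni // math.factorial(ni)

print(count, W)   # both 12
```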
When we consider Fermi-Dirac and Bose-Einstein statistics, it is no longer true that the molecules are distinguishable. For Fermi-Dirac statistics, no more than one molecule can be assigned to a particular quantum state. For a given population set, Boltzmann, Fermi-Dirac, and Bose-Einstein statistics produce different numbers of microstates.
It is helpful to have notation that enables us to specify different combinations and different microstates. If $\epsilon _i$ is the energy associated with the wave equation that describes a particular molecule, it is convenient to say that the molecule is in energy level $\epsilon _i$; that is, its quantum state is one of those that has energy $\epsilon _i$. Using capital letters to represent molecules, we indicate that molecule $A$ is in energy level $\epsilon _i$ by writing $\epsilon _i\left(A\right)$. To indicate that $A$, $B$, and $C$ are in $\epsilon _i$, we write $\epsilon _i\left(A,B,C\right)$. Similarly, to indicate that molecules $D$ and $E$ are in $\epsilon _k$, we write $\epsilon _k\left(D,E\right)$. For this system of five molecules, the assignment $\epsilon _i\left(A,B,C\right)\epsilon _k\left(D,E\right)$ represents one of the possible combinations. The order in which we present the molecules that have a given energy is immaterial: $\epsilon _i\left(A,B,C\right)\epsilon _k\left(D,E\right)$ and $\epsilon _i\left(C,B,A\right)\epsilon _k\left(E,D\right)$ represent the same combination. When any one molecule is distinguishable from others of the same substance, assignments in which a given molecule has different energies are physically different and represent different combinations. The assignments $\epsilon _i\left(A,B,C\right)\epsilon _k\left(D,E\right)$ and $\epsilon _i\left(D,B,C\right)\epsilon _k\left(A,E\right)$ represent different combinations. In Figure 2, we represent these assignments more schematically.
Any two assignments in which a particular molecule occupies different quantum states give rise to different microstates. If the $i^{th}$ energy level is three-fold degenerate, a molecule in any of the quantum states ${\psi }_{i,1}$, ${\psi }_{i,2}$, or ${\psi }_{i,3}$ has energy $\epsilon _i$. Let us write
${\psi }_{i,1}\left(A,B\right){\psi }_{i,2}\left(C\right){\psi }_{k,1}\left(D,E\right) \nonumber$
to indicate the microstate arising from the combination $\epsilon _i\left(A,B,C\right)\epsilon _k\left(D,E\right)$ in which molecules $A$ and $B$ occupy ${\psi }_{i,1}$, molecule $C$ occupies ${\psi }_{i,2}$, and molecules $D$ and $E$ occupy ${\psi }_{k,1}$. Then,
${\psi }_{i,1}\left(A,B\right){\psi }_{i,2}\left(C\right){\psi }_{k,1}\left(D,E\right) \nonumber$ ${\psi }_{i,1}\left(B,C\right){\psi }_{i,2}\left(A\right){\psi }_{k,1}\left(D,E\right) \nonumber$ ${\psi }_{i,1}\left(A\right){\psi }_{i,2}\left(B,C\right){\psi }_{k,1}\left(D,E\right) \nonumber$
are three of the many microstates arising from the combination $\epsilon _i\left(A,B,C\right)\epsilon _k\left(D,E\right)$. Figure 3 shows all of the microstates possible for the population set $\{2,1\}$ when the quantum states of a molecule are ${\psi }_{1,1}$, ${\psi }_{1,2}$, and ${\psi }_{2,1}$.
20.08: The Probabilities of Microstates that Have the Same Energy
In Section 20.2, we introduce the assumption that, for a molecule in a constant-N-V-T system, for which the $g_i$ and ${\epsilon }_i$ are fixed, the probability of a quantum state, $\rho \left({\epsilon }_i\right)$, depends only on its energy. It follows that two or more quantum states that have the same energy must have equal probabilities. We accept the idea that the probability depends only on energy primarily because we cannot see any reason for a molecule to prefer one state to another if both states have the same energy.
We extend this thinking to multi-molecule systems. If two microstates have the same energy, we cannot see any reason for the system to prefer one rather than the other. In a constant-N-V-T system, in which the total energy is not otherwise restricted, each microstate of $\{N_1,\ N_2,\dots ,N_i,\dots .\}$ occurs with probability ${\rho \left({\epsilon }_1\right)}^{N_1}{\rho \left({\epsilon }_2\right)}^{N_2}\dots {\rho \left({\epsilon }_i\right)}^{N_i}\dots .$, and each microstate of $\{N^{\#}_1,N^{\#}_2,\ \dots ,N^{\#}_I,\dots \ \}$ occurs with probability ${\rho \left({\epsilon }_1\right)}^{N^{\#}_1}{\rho \left({\epsilon }_2\right)}^{N^{\#}_2}\dots {\rho \left({\epsilon }_i\right)}^{N^{\#}_i}\dots .$ When the energies of these population sets are equal, we infer that these probabilities are equal, and their value is a constant of the system. That is,
${\rho \left({\epsilon }_1\right)}^{N_1}{\rho \left({\epsilon }_2\right)}^{N_2}\dots {\rho \left({\epsilon }_i\right)}^{N_i}\dots . \nonumber$ $={\rho \left({\epsilon }_1\right)}^{N^{\#}_1}{\rho \left({\epsilon }_2\right)}^{N^{\#}_2}\dots {\rho \left({\epsilon }_i\right)}^{N^{\#}_i}\dots .\ \ \ \ ={\rho }_{MS,N,E}=\mathrm{constant} \nonumber$
where we introduce ${\rho }_{MS,N,E}$ to represent the probability of a microstate of a system of $N$ molecules that has total energy $E$. If $E=E^{\#}$, then ${\rho }_{MS,N,E}={\rho }_{MS,N,E^{\#}}$.
When we think about it critically, the logical basis for this equal-probability idea is not very impressive. While the idea is plausible, it is not securely rooted in any particular empirical observation or prior postulate. The equal-probability idea is useful only if it leads us to theoretical models that successfully mirror the behavior of real macroscopic systems. This it does. Accordingly, we recognize that the equal-probability idea is really a fundamental postulate about the behavior of quantum-mechanical systems. It is often called the principle of equal a priori probabilities:
Definition: principle of equal a priori probabilities
For a particular system, all microstates that have the same energy have the same probability.
Our development of statistical thermodynamics relies on the principle of equal a priori probabilities. For now, let us summarize the important relationships that the principle of equal a priori probabilities imposes on our microscopic model for the probabilities of two population sets of a constant-N-V-T system that have the same energy:
• A given population set $\{N_1,\ N_2,\dots ,N_i,\dots .\}$ gives rise to $W\left(N_i,g_i\right)$ microstates, and each of these microstates has energy $E=\sum^{\infty }_{i=1}{N_i{\epsilon }_i} \nonumber$
• A second population set, $\{N^{\#}_1,N^{\#}_2,\ \dots ,N^{\#}_I,\dots \ \}$, that has the same energy need not—and usually will not—give rise to the same number of microstates. In general, for two such population sets, $W\left(N_i,g_i\right)\neq W\left(N^{\#}_i,g_i\right) \nonumber$ However, because each microstate of either population set has the same energy, we have $E=\sum^{\infty }_{i=1}{N_i{\epsilon }_i}=\sum^{\infty }_{i=1}{N^{\#}_i{\epsilon }_i} \nonumber$
• The probability of a microstate of a given population set $\{N_1,\ N_2,\dots ,N_i,\dots .\}$ depends only on its energy: ${\rho \left({\epsilon }_1\right)}^{N_1}{\rho \left({\epsilon }_2\right)}^{N_2}\dots {\rho \left({\epsilon }_i\right)}^{N_i}\dots .={\rho }_{MS,N,E}=\mathrm{constant} \nonumber$
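A minimal numerical illustration of these points: the Python sketch below takes three hypothetical non-degenerate energy levels and two population sets that have the same total energy, and shows that they give rise to different numbers of microstates. The energies and population numbers are arbitrary illustrative choices.

```python
import math

def W(ns, gs):
    """Thermodynamic probability W = N! * prod(g_i**N_i / N_i!)."""
    c = math.factorial(sum(ns))
    for n, g in zip(ns, gs):
        c = c * g**n // math.factorial(n)
    return c

energies = (1, 2, 3)   # hypothetical non-degenerate levels
gs = (1, 1, 1)

set_a = (1, 2, 1)      # E = 1*1 + 2*2 + 1*3 = 8
set_b = (2, 0, 2)      # E = 2*1 + 0*2 + 2*3 = 8, the same system energy

for ns in (set_a, set_b):
    E = sum(n * e for n, e in zip(ns, energies))
    print(ns, "E =", E, "W =", W(ns, gs))   # W = 12 and W = 6, respectively
```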
20.09: The Probabilities of the Population Sets of an Isolated System
In principle, the energy of an equilibrium system that is in contact with a constant-temperature heat reservoir can vary slightly with time. In contrast, the energy of an isolated system is constant. A more traditional and less general statement of the equal a priori probability principle focuses on isolated systems, for which all possible microstates necessarily have the same energy:
All microstates of an isolated (constant energy) system occur with equal probability.
If we follow an isolated system over a long period of time, we expect to find that it spends approximately equal fractions of the time in each of the microstates available to it. In consequence, for an isolated system, the probability of a population set, $\{N_1,\ N_2,\dots ,N_i,\dots .\}$, is proportional to the number of microstates, $W\left(N_i,g_i\right)$, to which that population set gives rise.
In principle, the population sets of a constant-N-V-T system can be significantly different from those of a constant-N-V-E system. That is, if we move an isolated system, whose temperature is T, into thermal contact with a heat reservoir at constant-temperature T, the population sets that characterize the system can change. In practice, however, for a system containing a large number of molecules, the population sets that contribute to the macroscopic properties of the system must be essentially the same.
The fact that the same population sets are important in both systems enables us to make two further assumptions that become important in our development. We assume that the proportionality between the probability of a population set and $W\left(N_i,g_i\right)$, which is strictly true only for a constant-N-V-E system, is also true for the corresponding constant-N-V-T system. We also assume that the probabilities of a quantum state, $\rho \left({\epsilon }_i\right)$, and a microstate, ${\rho }_{MS,N,E}$, which we defined for the constant-N-V-T system, are the same for the corresponding constant-N-V-E system.
Let us see why we expect the same population sets to dominate the macroscopic properties of otherwise identical constant-energy and constant-temperature systems. Suppose that we isolate a constant-N-V-T system in such a way that the total energy, $E=\sum^{\infty }_{i=1}{N_i{\epsilon }_i}$, of the isolated system is exactly equal to the expected value, $\left\langle E\right\rangle =N\sum^{\infty }_{i=1}{P_i{\epsilon }_i}$, of the energy of the system when its temperature is constant. What we have in mind is a gedanken experiment, in which we monitor the energy of the thermostatted system as a function of time, waiting for an instant in which the system energy, $E=\sum^{\infty }_{i=1}{N_i{\epsilon }_i}$, is equal to the expected value of the system energy, $\left\langle E\right\rangle$. When this occurs, we instantaneously isolate the system.
We suppose that the isolation process is accomplished before any molecule can experience an energy change, so that the population set that characterizes the system immediately afterwards is the same as the one that characterizes it before. After isolation, of course, the molecules can exchange energy with one another, and many population sets may be available to the system.
Clearly, the value of every macroscopic property of the isolated system must be the same as its observable value in the original constant-temperature system. Our microscopic description of it is different. Every population set that is available to the isolated system has energy $E=\left\langle E\right\rangle$, and gives rise to
$W\left(N_i,g_i\right)=N!\prod^{\infty }_{i=1}{\left(\frac{g^{N_i}_i}{N_i!}\right)} \nonumber$
microstates. At the same temperature, each of these microstates occurs with the same probability. Since the isolated-system energy is $\left\langle E\right\rangle$, this probability is ${\rho }_{MS,N,\left\langle E\right\rangle }$. The probability of an available population set is $W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }$.
Since the temperature can span a range of values centered on $\left\langle T\right\rangle$, where $\left\langle T\right\rangle$ is equal to the temperature of the original constant-N-V-T system, there is a range of ${\rho }_{MS,N,\left\langle E\right\rangle }$ values spanning the (small) range of temperatures available to the constant-energy system. Summing over all of the population sets that are available to the isolated system, we find
$1=\sum_{\left\{N_i\right\},\ \ E=\left\langle E\right\rangle ,T=\left\langle T\right\rangle }{W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }}+\sum_{\left\{N_i\right\},\ \ E=\left\langle E\right\rangle ,T\neq \left\langle T\right\rangle }{W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }} \nonumber$
The addition of “$E=\left\langle E\right\rangle$” beneath the summation sign emphasizes that the summation is to be carried out over the population sets that are consistent with both the molecule-number and total-energy constraints and no others. The total probability sum breaks into two terms, one spanning population sets whose temperature is exactly $\left\langle T\right\rangle$ and another spanning all of the other population sets. (Remember that the $\rho \left({\epsilon }_i\right)$ are temperature dependent.)
The population sets available to the isolated system are slightly different from those available to the constant-temperature system. In our microscopic model, only population sets that have exactly the right total energy can occur in the isolated system. Only population sets that have exactly the right temperature can occur in the constant-temperature system.
Summing over all of the population sets that are available to the constant-temperature system, we partition the total probability sum into two terms:
$1=\sum_{\left\{N_i\right\},E=\left\langle E\right\rangle ,T=\left\langle T\right\rangle }{W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }}+\sum_{\left\{N_i\right\},\ \ E\neq \left\langle E\right\rangle ,T=\left\langle T\right\rangle }{W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }} \nonumber$
From the central limit theorem, we expect the constant-energy system to have (relatively) few population sets that fail to meet the condition $E=\left\langle E\right\rangle$. Likewise, we expect the constant-temperature system to have (relatively) few population sets that fail to meet the condition $T=\left\langle T\right\rangle$. The population sets that satisfy both of these criteria must dominate both sums. For the number of molecules in macroscopic systems, we expect the approximation to the total probability sum
$1=\sum_{\left\{N_i\right\},E}{W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }}\approx \sum_{\left\{N_i\right\},E=\left\langle E\right\rangle ,T=\left\langle T\right\rangle }{W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }} \nonumber$
to be very good. The same population sets dominate both the constant-temperature and constant-energy systems. Each system must have a most probable population set, $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \}$. If these are not identically the same set, they must be so close that the same macroscopic properties are calculated using either one.
Thus, the central limit theorem implies that the total probability sum, which we develop for the constant-temperature system, also describes the constant-energy system, so long as the number of molecules in the system is sufficiently large.
Now, two aspects of this development warrant elaboration. The first is that the probability of population sets that have energies and temperature that satisfy $E=\left\langle E\right\rangle$ and $T=\left\langle T\right\rangle$ exactly may actually be much less than one. The second is that constant-energy and constant-temperature systems are creatures of theory. No real system can actually have an absolutely constant energy or temperature.
Recognizing these facts, we see that when we stipulate $E=\left\langle E\right\rangle$ or $T=\left\langle T\right\rangle$, what we really mean is that $E=\left\langle E\right\rangle \pm \delta E$ and $T=\left\langle T\right\rangle \pm \delta T$, where the intervals $\pm \delta E$ and $\pm \delta T$ are vastly smaller than any differences we could actually measure experimentally. When we write $E\neq \left\langle E\right\rangle$ and $T\neq \left\langle T\right\rangle$, we really intend to specify energies and temperatures that fall outside the intervals $E=\left\langle E\right\rangle \pm \delta E$ and $T=\left\langle T\right\rangle \pm \delta T$. If the system contains sufficiently many molecules, the population sets whose energies and temperatures fall within the intervals $E=\left\langle E\right\rangle \pm \delta E$ and $T=\left\langle T\right\rangle \pm \delta T$ account for nearly all of the probability—no matter how small we choose $\delta E$ and $\delta T$. All of the population sets whose energies and temperatures fall within the intervals $E=\left\langle E\right\rangle \pm \delta E$ and $T=\left\langle T\right\rangle \pm \delta T$ correspond to the same macroscopically observable properties.
20.10: Entropy and Equilibrium in an Isolated System
In an isolated system, the probability of population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$ is $W\left(N_i,g_i\right){\rho }_{MS,N,\left\langle E\right\rangle }$, where ${\rho }_{MS,N,\left\langle E\right\rangle }$ is a constant. It follows that $W=W\left(N_i,g_i\right)$ is proportional to the probability that the system is in one of the microstates associated with the population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$. Likewise, $W^{\#}=W\left(N^{\#}_i,g_i\right)$ is proportional to the probability that the system is in one of the microstates associated with the population set $\{N^{\#}_1,N^{\#}_2,\dots N^{\#}_i,\dots \}$. Suppose that we observe the isolated system for a long time. Let $F$ be the fraction of the time that the system is in microstates of population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$ and $F^{\#}$ be the fraction of the time that the system is in microstates of the population set $\{N^{\#}_1,N^{\#}_2,\dots N^{\#}_i,\dots \}$. The principle of equal a priori probabilities implies that we would find
$\frac{F^{\#}}{F}=\frac{W^{\#}}{W} \nonumber$
Suppose that $W^{\#}$ is much larger than $W$. This means there are many more microstates for $\{N^{\#}_1,N^{\#}_2,\dots N^{\#}_i,\dots \}$ than there are for $\{N_1,\ N_2,\dots ,N_i,\dots \}$. The fraction of the time that the population set $\{N^{\#}_1,N^{\#}_2,\dots N^{\#}_i,\dots \}$ characterizes the system is much greater than the fraction of the time $\{N_1,\ N_2,\dots ,N_i,\dots \}$ characterizes it. Alternatively, if we examine the system at an arbitrary instant, we are much more likely to find the population set $\{N^{\#}_1,N^{\#}_2,\dots N^{\#}_i,\dots \}$ than the population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$. The larger $W\left(N_1,g_1,\ N_2,g_2,\dots ,N_i,g_i,\dots \right)$, the more likely it is that the system will be in one of the microstates associated with the population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$. In short, $W$ predicts the state of the system; it is a measure of the probability that the macroscopic properties of the system are those of the population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$.
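We can mimic this time-average argument with a small simulation. For a hypothetical isolated system of four distinguishable molecules on three non-degenerate levels (energies 1, 2, and 3) with the total energy fixed at 8, the Python sketch below generates microstates with equal probability (by rejection sampling) and tallies how often each population set appears; the observed frequencies are very nearly in the ratio of the corresponding $W$ values. All of the numbers are arbitrary illustrative choices.

```python
import random
from collections import Counter

energies = (1, 2, 3)   # hypothetical non-degenerate levels
N, E = 4, 8            # four distinguishable molecules, fixed total energy

random.seed(0)
counts = Counter()
for _ in range(200_000):
    micro = tuple(random.choice(energies) for _ in range(N))   # every microstate equally likely
    if sum(micro) == E:                                        # keep only the isolated-system energy
        pop = tuple(micro.count(e) for e in energies)          # the population set it belongs to
        counts[pop] += 1

for pop, n in sorted(counts.items()):
    print(pop, n)
# The three qualifying population sets, (0,4,0), (1,2,1), and (2,0,2), appear with
# frequencies very nearly in the ratio 1 : 12 : 6 -- the ratio of their W values.
```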
If an isolated system can undergo change, and we re-examine it after a few molecules have moved to different energy levels, we expect to find it in one of the microstates of a more-probable population set; that is, in one of the microstates of a population set for which $W$ is larger. At still later times, we expect to see a more-or-less smooth progression: the system is in microstates of population sets for which the values of $W$ are increasingly larger. This can continue only until the system occupies one of the microstates of the population set for which $W$ is a maximum or a microstate of one of the population sets whose macroscopic properties are essentially the same as those of the constant-$N$-$V$-$E$ population set for which $W$ is a maximum.
Once this occurs, later inspection may find the system in other microstates, but it is overwhelmingly probable that the new microstate will still be one of those belonging to the largest-$W$ population set or one of those that are macroscopically indistinguishable from it. Any of these microstates will belong to a population set for which $W$ is very well approximated by $W\left(\ N^{\textrm{⦁}}_1,g_1,\ N^{\textrm{⦁}}_2,g_2,\dots ,N^{\textrm{⦁}}_i,g_i,\dots \right)$. Evidently, the largest-$W$ population set characterizes the equilibrium state of either the constant-$N$-$V$-$T$ system or the constant-$N$-$V$-$E$ system. Either system can undergo change until $W$ reaches a maximum. Thereafter, it is at equilibrium and can undergo no further macroscopically observable change.
Boltzmann recognized this relationship between $W$, the thermodynamic probability, and equilibrium. He noted that the unidirectional behavior of $W$ in an isolated system undergoing spontaneous change is like the behavior we found for the entropy function. Boltzmann proposed that, for an isolated (constant energy) system, $S$ and $W$ are related by the equation $S=k{ \ln W\ }$, where $k$ is Boltzmann’s constant. This relationship associates an entropy value with every population set. For an isolated macroscopic system, equilibrium corresponds to a state of maximum entropy. In our microscopic model, equilibrium corresponds to the population set for which $W$ is a maximum. By the argument we make in §6, this population set must be well approximated by the most probable population set, $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \}$. That is, the entropy of the equilibrium state of the macroscopic system is
\begin{align*} S &= k \ln W_{max} \\[4pt] &=k \ln \frac{N!}{N^{\textrm{⦁}}_1!N^{\textrm{⦁}}_2!\dots N^{\textrm{⦁}}_i!\dots }+k \sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i \ln g_i} \end{align*}
This equation can be taken as the definition of entropy. Clearly, this definition is different from the thermochemical definition, $S={q^{rev}}/{T}$. We can characterize—imperfectly—the situation by saying that the two definitions provide alternative scales for measuring the same physical property. As we see below, our statistical theory enables us to define entropy in still more ways, all of which prove to be functionally equivalent. Gibbs characterized these alternatives as “entropy analogues;” that is, functions whose properties parallel those of the thermochemically defined entropy.
We infer that the most probable population set characterizes the equilibrium state of either the constant-temperature or the constant-energy system. Since our procedure for isolating the constant-temperature system affects only the thermal interaction between the system and its surroundings, the entropy of the constant-temperature system must be the same as that of the constant-energy system. Using $N^{\textrm{⦁}}_i=NP_i=Ng_i\rho \left({\epsilon }_i\right)$ and assuming that the approximation ${ \ln N^{\textrm{⦁}}_i!\ }=N^{\textrm{⦁}}_i{ \ln N^{\textrm{⦁}}_i\ }-N^{\textrm{⦁}}_i$ is adequate for all of the energy levels that make a significant contribution to $S$, substitution shows that the entropy of either system depends only on probabilities:
\begin{align*} S &= kN \ln N - kN - k\sum^{\infty }_{i=1}{\left[NP_i \ln \left(NP_i\right) - NP_i\right]} + k\sum^{\infty }_{i=1}{NP_i \ln g_i} \\[4pt] &= kN \ln N - kN - kN\sum^{\infty }_{i=1}{\left[P_i \ln N + P_i \ln P_i - P_i - P_i \ln g_i\right]} \\[4pt] &= k\left(N \ln N - N\right) - k\left(N \ln N - N\right)\sum^{\infty }_{i=1}{P_i} - kN\sum^{\infty }_{i=1}{P_i\left[\ln P_i - \ln g_i\right]} \\[4pt] &= -kN\sum^{\infty }_{i=1}{P_i \ln \rho \left({\epsilon }_i\right)} \end{align*}
The entropy per molecule, ${S}/{N}$, is proportional to the expected value of ${ \ln \rho \left({\epsilon }_i\right)\ }$; the proportionality constant is $-k$, the negative of Boltzmann’s constant. At constant temperature, $\rho \left({\epsilon }_i\right)$ depends only on ${\epsilon }_i$. The entropy per molecule depends only on the quantum state properties, $g_i$ and ${\epsilon }_i$.
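As a quick numerical illustration of this relationship, the Python sketch below evaluates $S/N=-k\sum{P_i \ln \rho \left({\epsilon }_i\right)}$ with $\rho \left({\epsilon }_i\right)=P_i/g_i$ for a hypothetical two-level molecule; the degeneracies and probabilities are arbitrary illustrative values.

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K

g = [1, 3]           # hypothetical degeneracies
P = [0.7, 0.3]       # hypothetical level probabilities; rho(eps_i) = P_i / g_i

s_per_molecule = -k_B * sum(p * math.log(p / gi) for p, gi in zip(P, g))
print(s_per_molecule)   # entropy per molecule, in J/K (~1.3e-23 for these numbers)
```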
20.11: Thermodynamic Probability and Equilibrium in an Isomerization Reaction
To relate these ideas to a change in a more specific macroscopic system, let us consider isomeric substances $A$ and $B$. (We consider this example further in Chapter 21.) In principle, we can solve the Schrödinger equation for a molecule of isomer $A$ and for a molecule of isomer $B$. We obtain all possible energy levels for a molecule of each isomer.${}^{1}$ If we list these energy levels in order, beginning with the lowest, some of these levels belong to isomer $A$ and the others belong to isomer $B$.
Now let us consider a mixture of $N_A$ molecules of $A$ and $N_B$ molecules of $B$. We suppose that individual molecules are distinguishable and that intermolecular interactions can be ignored. Since a group of atoms that can form an $A$ molecule can also form a $B$ molecule, every energy level is accessible to this group of atoms; that is, we can view both sets of energy levels as being available to the atoms that make up the molecules. For a given system energy, there will be many population sets in which only the energy levels belonging to isomer $A$ are occupied. For each of these population sets, there is a corresponding thermodynamic probability, $W$. Let $W^{max}_A$ be the largest of these thermodynamic probabilities. Similarly, there will be many population sets in which only the energy levels corresponding to isomer $B$ are occupied. Let $W^{max}_B$ be the largest of the thermodynamic probabilities associated with these population sets. Finally, there will be many population sets in which the occupied energy levels belong to both isomer $A$ and isomer $B$. Let $W^{max}_{A,B}$ be the largest of the thermodynamic probabilities associated with this group of population sets.
Now, $W^{max}_A$ is a good approximation to the number of ways that the atoms of the system can come together to form isomer $A$. $W^{max}_B$ is a good approximation to the number of ways that the atoms of the system can come together to form isomer $B$. At equilibrium, therefore, we expect
$K=\frac{N_B}{N_A}=\frac{W^{max}_B}{W^{max}_A} \nonumber$
If we consider the illustrative—if somewhat unrealistic—case of isomeric molecules whose energy levels all have the same degeneracy ($g_i=g$ for all $i$), we can readily see that the equilibrium system must contain some amount of each isomer. For a system containing $N$ molecules, $N!g^N$ is the numerator in each of the thermodynamic probabilities $W^{max}_A$, $W^{max}_B$, and $W^{max}_{A,B}$. The denominators are different. The denominator of $W^{max}_{A,B}$ must contain terms, $N_i!$, for essentially all of the levels represented in the denominator of $W^{max}_A$. Likewise, it must contain terms, $N_j!$, for essentially all of the energy levels represented in the denominator of $W^{max}_B$. Then the denominator of $W^{max}_{A,B}$ is a product of $N_k!$ terms that are generally smaller than the corresponding factorial terms in the denominators of $W^{max}_A$ and $W^{max}_B$. As a result, the denominators of $W^{max}_A$ and $W^{max}_B$ are larger than the denominator of $W^{max}_{A,B}$. In consequence, $W^{max}_{A,B}>W^{max}_A$ and $W^{max}_{A,B}>W^{max}_B$. (See problems 5 and 6.)
If we create the system as a collection of $A$ molecules, or as a collection of $B$ molecules, redistribution of the sets of atoms among all of the available energy levels must eventually produce a mixture of $A$ molecules and $B$ molecules. Viewed as a consequence of the principle of equal a priori probabilities, this occurs because there are necessarily more microstates of the same energy available to some mixture of $A$ and $B$ molecules than there are microstates available to either $A$ molecules alone or $B$ molecules alone. Viewed as a consequence of the tendency of the isolated system to attain the state of maximum entropy, this occurs because $k{ \ln W^{max}_{A,B}>k{ \ln W^{max}_A\ }\ }$ and $k{ \ln W^{max}_{A,B}>k{ \ln W^{max}_B\ }\ }$.
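The inequalities $W^{max}_{A,B}>W^{max}_A$ and $W^{max}_{A,B}>W^{max}_B$ are easy to verify numerically for a toy system. In the Python sketch below, isomer $A$ is assigned hypothetical non-degenerate levels with energies 1 and 3, and isomer $B$ the levels with energies 2 and 4; for six molecules with total energy 14, the sketch compares the maximum thermodynamic probability over population sets restricted to the $A$ levels, restricted to the $B$ levels, and allowed to use all of the levels. All of the numbers are arbitrary illustrative choices.

```python
import math
from itertools import product

levels_A = [1, 3]                        # hypothetical isomer-A energy levels (non-degenerate)
levels_B = [2, 4]                        # hypothetical isomer-B energy levels (non-degenerate)
all_levels = sorted(levels_A + levels_B)
N, E = 6, 14                             # six molecules, fixed total energy

def W(ns):
    c = math.factorial(sum(ns))
    for n in ns:
        c //= math.factorial(n)
    return c

def w_max(levels):
    """Largest W over all population sets on the given levels with the required N and E."""
    best = 0
    for ns in product(range(N + 1), repeat=len(levels)):
        if sum(ns) == N and sum(n * e for n, e in zip(ns, levels)) == E:
            best = max(best, W(ns))
    return best

print("W_max, A levels only:", w_max(levels_A))     # 15
print("W_max, B levels only:", w_max(levels_B))     # 6
print("W_max, all levels:   ", w_max(all_levels))   # 180
```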
20.12: The Degeneracy of an Isolated System and Its Entropy
In Section 20.9, we find that the sum of the probabilities of the population sets of an isolated system is
$1=\sum_{\left\{N_i\right\},E}{W\left(N_i,g_i\right){\rho }_{MS,N,E}}. \nonumber$
By the principle of equal a priori probabilities, ${\rho }_{MS,N,E}$ is a constant, and it can be factored out of the sum. We have
$1={\rho }_{MS,N,E}\sum_{\left\{N_i\right\},E}{W\left(N_i,g_i\right)} \nonumber$
Moreover, the sum of the thermodynamic probabilities over all allowed population sets is just the number of microstates that have energy $E$. This sum is just the degeneracy of the system energy, $E$. The symbol $\mathit{\Omega}_E$ is often given to this system-energy degeneracy. That is,
$\mathit{\Omega}_E=\sum_{\left\{N_i\right\},E}{W\left(N_i,g_i\right)} \nonumber$
The sum of the probabilities of the population sets of an isolated system becomes
$1={\rho }_{MS,N,E}{\mathit{\Omega}}_E \nonumber$
In Section 20.9, we infer that
$\rho_{MS,N,E}=\prod^{\infty }_{i=1}{\rho \left({\epsilon }_i\right)^{N_i}} \nonumber$
so we have
$1={\mathit{\Omega}}_E\prod^{\infty }_{i=1}\rho \left(\epsilon_i\right)^{N_i} \nonumber$
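For a system small enough to enumerate, ${\mathit{\Omega}}_E$ can be computed directly from this definition. The Python sketch below uses the same hypothetical system sampled in the earlier sketch—four distinguishable molecules on three non-degenerate levels with energies 1, 2, and 3, and total energy 8 (arbitrary illustrative numbers, deliberately different from those in the end-of-chapter problems); it lists every qualifying population set, its $W$, and their sum, ${\mathit{\Omega}}_E$.

```python
import math
from itertools import product

energies = (1, 2, 3)   # hypothetical non-degenerate levels
N, E = 4, 8            # four distinguishable molecules, fixed total energy

def W(ns):
    """W for non-degenerate levels (all g_i = 1): N! / (N_1! N_2! ...)."""
    c = math.factorial(sum(ns))
    for n in ns:
        c //= math.factorial(n)
    return c

omega_E = 0
for ns in product(range(N + 1), repeat=len(energies)):
    if sum(ns) == N and sum(n * e for n, e in zip(ns, energies)) == E:
        print("population set", ns, "  W =", W(ns))
        omega_E += W(ns)

print("Omega_E =", omega_E)   # 1 + 12 + 6 = 19
```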
20.13: The Degeneracy of an Isolated System and Its Entropy
In Section 20.10, we observe that the entropy of an isolated equilibrium system can be defined as $S=k{\ln W_{max}}$. In Section 20.12, we see that the system-energy degeneracy is a sum of terms, one of which is $W_{max}=W\left(N^{\textrm{⦁}}_i,g_i\right)$. That is, we have ${\Omega}_E=W_{max}+\sum_{\left\{N_i\right\}\neq \left\{N^{\textrm{⦁}}_i\right\},E_{total}}{W\left(N_i,g_i\right)} \nonumber$
where the last sum is taken over all energy-qualifying population sets other than the most-probable population set.
Let us now consider the relative magnitude of $\Omega_E$ and $W_{max}$. Clearly, $\Omega_E\ge W_{max}$. If only one population set is consistent with the total-molecule and total-energy constraints of the isolated system, then $\Omega_E=W_{max}$. In general, however, we must expect that there will be many, possibly an enormous number, of other population sets that meet the constraints. Ultimately, the relative magnitude of ${\Omega}_E$ and $W_{max}$ depends on the energy levels available to the molecules and the number of molecules in the system and so could be almost anything. However, rather simple considerations lead us to expect that, for most macroscopic collections of molecules, the ratio $\alpha ={\Omega_E}/W_{max}$ will be much less than $W_{max}$. That is, although the value of $\alpha$ may be very large, for macroscopic systems we expect to find $\alpha \ll W_{max}$. If $\Omega_E=W_{max}$, then $\alpha =1$, and ${\ln \alpha }=0$.
Because $W$ for any population set that contributes to ${\Omega}_E$ must be less than or equal to $W_{max}$, the maximum value of $\alpha$ must be less than the number of population sets which satisfy the system constraints. For macroscopic systems whose molecules have even a modest number of accessible energy levels, calculations show that $W_{max}$ is a very large number indeed. Calculation of $\alpha$ for even a small collection of molecules is intractable unless the number of accessible molecular energy levels is small. Numerical experimentation on small systems, with small numbers of energy levels, shows that the number of qualifying population sets increases much less rapidly than $W_{max}$ as the total number of molecules increases. Moreover, the contribution that most qualifying population sets make to ${\Omega}_E$ is much less than $W_{max}$.
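The kind of numerical experimentation mentioned here is easy to reproduce for a toy system. The Python sketch below fixes the energy per molecule at 2 for hypothetical non-degenerate levels with energies 1, 2, and 3 and tabulates, as $N$ grows, the number of qualifying population sets, $W_{max}$, and the ratio $\alpha ={\mathit{\Omega}}_E/W_{max}$; the level energies and the per-molecule energy are arbitrary illustrative choices.

```python
import math
from itertools import product

energies = (1, 2, 3)   # hypothetical non-degenerate levels

def W(ns):
    c = math.factorial(sum(ns))
    for n in ns:
        c //= math.factorial(n)
    return c

for N in (4, 8, 16, 32):
    E = 2 * N                       # keep the energy per molecule fixed at 2
    sets = [ns for ns in product(range(N + 1), repeat=3)
            if sum(ns) == N and sum(n * e for n, e in zip(ns, energies)) == E]
    Ws = [W(ns) for ns in sets]
    omega_E, W_max = sum(Ws), max(Ws)
    print(N, len(sets), W_max, round(omega_E / W_max, 2))
# The number of qualifying population sets and the ratio alpha = Omega_E/W_max grow
# far more slowly than W_max itself as N increases.
```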
For macroscopic systems, we can be confident that $W_{max}$ is enormously greater than $\alpha$. Hence $\Omega_E$ is enormously greater than $\alpha$. When we substitute for $W_{max}$ in the isolated-system entropy equation, we find
\begin{align*} S &=k \ln W_{max} \\[4pt] &=k \ln \left(\Omega_E/\alpha \right) \\[4pt] &=k \ln \Omega_E -k \ln \alpha \\[4pt] &\approx k \ln \Omega_E \end{align*}
where the last approximation is usually very good.
In many developments, the entropy of an isolated system is defined by the equation $S=k{\ln {\Omega}_E}$ rather than the equation we introduced first, $S=k{\ln W_{max}}$. From the considerations above, we expect the practical consequences to be the same. In Section 20.14, we see that the approximate equality of ${\ln W_{max}}$ and ${\ln {\Omega}_E}$ is a mathematical consequence of our other assumptions and approximations.
20.14: Effective Equivalence of the Isothermal and Constant-energy Conditions
In principle, an isolated system is different from a system with identical macroscopic properties that is in equilibrium with its surroundings. We emphasize this point, because this distinction is important in the logic of our development. However, our development also depends on the assumption that, when $N$ is a number that approximates the number of molecules in a macroscopic system, the constant-temperature and constant-energy systems are functionally equivalent.
In Section 20.9, we find that any calculation of macroscopic properties must produce the same result whether we consider the constant-temperature or the constant-energy system. The most probable population set, $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,\dots N^{\textrm{⦁}}_i,\dots \}$, provides an adequate description of the macroscopic state of the constant-temperature system precisely because it is representative of all the population sets that contribute significantly to the total probability of the constant-temperature system. The effective equivalence of the constant-temperature and constant-energy systems ensures that the most probable population set is also representative of all the population sets that contribute significantly to the total probability of the constant-energy system.
In Section 20.12, we see that the essential equivalence of the isothermal and constant-energy systems means that we have
$1={\Omega}_E\prod^{\infty }_{i=1}{\rho {\left({\epsilon }_i\right)}^{N^{\textrm{⦁}}_i}} \nonumber$
Taking logarithms of both sides, we find
${ \ln {\Omega}_E=-\sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i}}{ \ln \rho \left({\epsilon }_i\right)} \nonumber$
From $S=k{ \ln {\Omega}_E}$, it follows that
$S=-k\sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i} \ln \rho \left(\epsilon_i\right) \nonumber$
For the constant-temperature system, we have $N^{\textrm{⦁}}_i=NP_i$. When we assume that the equilibrium constant-temperature and constant-energy systems are essentially equivalent, the entropy of the N-molecule system becomes
\begin{align*} S &=-k\sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i}{ \ln \rho \left({\epsilon }_i\right)} \[4pt] &=-kN\sum^{\infty }_{i=1}{P_i}{ \ln \rho \left({\epsilon }_i\right)} \end{align*}
so that we obtain the same result from assuming that $S=k{ \ln {\Omega}_E}$ as we do in Section 20.10 from assuming that $S=k{ \ln W_{max}}$. Under the approximations we introduce, ${ \ln {\Omega}_E}$ and ${ \ln W_{max}}$ evaluate to the same thing.
20.15: Problems
1. Three non-degenerate energy levels are available to a set of five distinguishable molecules, $\{A,\ B,\ C,\ D,\ E\}$. The energies of these levels are $1$, $2$, and $3$, in arbitrary units. Find all of the population sets that are possible in this system. For each population set, find the system energy, $E$, and the number of microstates, $W$. For each system energy, $E$, list the associated population sets and the total number of microstates. How many population sets are there? What is $W_{max}$? If this system is isolated with $E=10$, how many population sets are possible? What is ${\mathit{\Omega}}_E$ for $E=10$?
2. For the particle in a box, the allowed energies are proportional to the squares of the successive integers. What population sets are possible for the distinguishable molecules, $\{A,\ B,\ C,\ D,\ E\}$, if they can occupy three quantum states whose energies are $1$, $4$, and $9$? For each population set, find the system energy, $E$, and the number of microstates. For each system energy, $E$, list the associated population sets and the total number of microstates. How many population sets are there? What is $W_{max}$? If this system is isolated with $E=24$, how many population sets are possible? What is ${\mathit{\Omega}}_E$ for $E=24$?
3. Consider the results you obtained in problem 2. In general, when the allowed energies are proportional to the squares of successive integers, how many population sets do you think will be associated with each system energy?
4.
(a) Compare $W$ for the population set $\{3,3,3\}$ to $W$ for the population set $\{2,5,2\}$. The energy levels are non-degenerate.
(b) Consider an $N$-molecule system that has a finite number, $M$, of quantum states. Show that $W$ is (at least locally) a maximum when $N_1=N_2=\dots =N_M={N}/{M}$. (Hint: Let $U={N}/{M}$, and assume that $N$ can be chosen so that $U$ is an integer. Let $W_U={N!}/{\left[U!U!\prod^{i=M-2}_{i=1}{U!}\right]} \nonumber$
and let $W_O={N!}/{\left[\left(U+1\right)!\left(U-1\right)!\prod^{i=M-2}_{i=1}{U!}\right]} \nonumber$
Show that ${W_O}/{W_U}<1$.)
5. The energy levels available to isomer $A$ are ${\epsilon }_0=1$, ${\epsilon }_2=2$, and ${\epsilon }_4=3$, in arbitrary units. The energy levels available to isomer B are ${\epsilon }_1=2$, ${\epsilon }_3=3$, and ${\epsilon }_5=4$. The energy levels are non-degenerate.
(a) A system contains five molecules. The energy of the system is $10$. List the population sets that are consistent with $N=5$ and $E=10$. Find $W$ for each of these population sets. What are $W^{max}_{A,B}$, $W^{max}_A$, and $W^{max}_B$? What is the total number of microstates, $\mathit{\Omega}_{A,B}$, available to the system in all of the cases in which $A$ and $B$ molecules are present? What is the ratio $\mathit{\Omega}_{A,B}/W^{max}_{A,B}$?
(b) Repeat this analysis for a system that contains six molecules and whose energy is $12$.
(c) Would the ratio $\mathit{\Omega}_{A,B}/W^{max}_{A,B}$ be larger or smaller for a system with $N=50$ and $E=100$?
(d) What would happen to this ratio if the number of molecules became very large, while the average energy per molecule remained the same?
6. In Section 20.11, we assume that all of the energy levels available to an isomeric pair of molecules have the same degeneracy. We then argue that the thermodynamic probabilities of a mixture of the isomers must be greater than the thermodynamic probability of either pure isomer: $W^{max}_{A,B}>W^{max}_A$ and $W^{max}_{A,B}>W^{max}_B$. Implicitly, we assume that many energy levels are multiply occupied: $N_i>1$ for many energy levels ${\epsilon }_i$. Now consider the case that $g_i>1$ for most ${\epsilon }_i$, but that nearly all energy levels are either unoccupied or contain only one molecule: $N_i=0$ or $N_i=1$. Show that under this assumption also, we must have $W^{max}_{A,B}>W^{max}_A$ and $W^{max}_{A,B}>W^{max}_B$.
Notes
${}^{1}$The statistical-mechanical procedures that have been developed for finding the energy levels available to a molecule express molecular energies as the difference between the molecule energy and the energy that its constituent atoms have when they are motionless. This is usually effected in two steps. The molecular energy levels are first expressed relative to the energy of the molecule’s own lowest energy state. The energy released when the molecule is formed in its lowest energy state from the isolated constituent atoms is then added. The energy of each level is then equal to the work done on the component atoms when they are brought together from infinite separation to form the molecule in that energy level. (Since energy is released in the formation of a stable molecule, the work done on the atoms and the energy of the resulting molecule are less than zero.) In our present discussion, we suppose that we can solve the Schrödinger equation to find the energies of the allowed quantum states. This corresponds to choosing the isolated constituent electrons and nuclei as the zero of energy for both isomers.
21: The Boltzmann Distribution Function
• 21.1: Finding the Boltzmann Equation
We previously introduced the principle of equal a priori probabilities, which asserts that any two microstates of an isolated system have the same probability. From the central limit theorem, we infer that an isolated system is functionally equivalent to a constant-temperature system when the system contains a sufficiently large number of molecules. From these ideas, we can now find the relationship between the energy values and the corresponding probabilities of the different states.
• 21.2: Lagrange's Method of Undetermined Multipliers
Lagrange’s method of undetermined multipliers is a method for finding the minimum or maximum value of a function subject to one or more constraints.
• 21.3: Deriving the Boltzmann Equation I
We can use Lagrange’s method to find the dependence of the quantum-state probability on its energy. This is an alternative way to derive the Boltzmann distribution.
• 21.4: Deriving the Boltzmann Equation II
This derivation of Boltzmann’s equation from maximum entropy is the most common introductory treatment and relies on the assumption that all of the Ni are large enough to justify treating them as continuous variables. This assumption proves to be invalid for many important systems, but the result obtained is clearly correct and leads to satisfactory agreement between microscopic models and the macroscopic properties of a wide variety of systems.
• 21.5: Partition Functions and Equilibrium - Isomeric Molecules
In Section 20.11, we discuss chemical equilibrium between isomers from the perspective afforded by Boltzmann’s definition of entropy. Now, let us consider equilibrium in this system from the perspective afforded by the energy-level probabilities.
• 21.6: Finding β and the Thermodynamic Functions for Distinguishable Molecules
All of a substance’s thermodynamic functions can be derived from the molecular partition function.
• 21.7: The Microscopic Model for Reversible Change
Now let us return to the closed (constant- N ) system to develop another perspective on the dependence of its macroscopic thermodynamic properties on the molecular energy levels and their probabilities. We undertake to describe the system using volume and temperature as the independent variables. In thinking about the energy-level probabilities, we stipulate that any parameters that affect the state of the system remain constant.
• 21.8: The Third Law of Thermodynamics
In some cases, however, the assumption that the entropy is zero at absolute zero leads to absolute entropy values that are not consistent with other experiments. In these cases, the absolute entropies can be brought into agreement with other entropy measurements by taking into account degeneracies.
• 21.9: The Partition Function for a System of N Molecules
The molecular origins of the energies of the system enter the ensemble treatment only indirectly. The theory deals with the relationships between the possible values of the energy of the system and its thermodynamic state. How molecular energy levels and intermolecular interactions give rise to these values of the system energy becomes a separate issue.
• 21.10: Problems
21.01: Finding the Boltzmann Equation
The probabilities of the energy levels of a constant-temperature system at equilibrium must depend only on the intensive variables that serve to characterize the equilibrium state. In Section 20.8, we introduce the principle of equal a priori probabilities, which asserts that any two microstates of an isolated system have the same probability. From the central limit theorem, we infer that an isolated system is functionally equivalent to a constant-temperature system when the system contains a sufficiently large number of molecules. From these ideas, we can now find the relationship between the energy values, ${\epsilon }_i$, and the corresponding probabilities,
$P_i=P\left({\epsilon }_i\right)=g_i\rho \left({\epsilon }_i\right).\nonumber$
Let us consider the microstates of an isolated system whose energy is $E^{\#}$. For any population set, $\{N_1,\ N_2,\dots ,N_i,\dots \}$, that has energy $E^{\#}$, the following relationships apply.
1. The sum of the energy-level populations is the total number of molecules: $N=N_1+N_2+\dots +N_i=\displaystyle \sum^{\infty }_{j=1}{N_j}\nonumber$
2. The energy of the system is the sum of the energies of its constituent molecules: $E^{\#}=N_1{\epsilon }_1+N_2{\epsilon }_2+\dots +N_i{\epsilon }_i=\displaystyle \sum^{\infty }_{j=1}{N_j}{\epsilon }_j\nonumber$
3. The product of powers of quantum-state probabilities is a constant: ${\rho \left({\epsilon }_1\right)}^{N_1}{\rho \left({\epsilon }_2\right)}^{N_2}\dots {\rho \left({\epsilon }_i\right)}^{N_i}\dots =\textrm{ĸ}\nonumber$ or, equivalently, \begin{align*} N_1{ \ln \rho \left({\epsilon }_1\right)\ }+N_2{ \ln \rho \left({\epsilon }_2\right)\ }+\dots +N_i{ \ln \rho \left({\epsilon }_i\right)\ }+ … &=\displaystyle \sum^{\infty }_{i=1}{N_i{ \ln \rho \left({\epsilon }_i\right)\ }} \[4pt] &={ \ln \textrm{ĸ}\ } \end{align*}\nonumber
4. For the system at constant temperature, the sum of the energy-level probabilities is one. When we infer that the constant-temperature system and the isolated system are functionally equivalent, we assume that this is true also for the isolated system: $1=P\left({\epsilon }_1\right)+P\left({\epsilon }_2\right)+\dots +P\left({\epsilon }_i\right)+\dots =\displaystyle \sum^{\infty }_{j=1}{P\left({\epsilon }_j\right)}\nonumber$
We want to find a function, $\rho \left(\epsilon \right)$, that satisfies all four of these conditions. One way is to keep trying functions that look like they might work until we find one that does. A slightly more sophisticated version of this approach is to try the most general possible version of each such function and see if any set of restrictions will make it work. We could even try an infinite series. Suppose that we are clever (or lucky) enough to try the series solution
${ \ln \rho \left(\epsilon \right)\ }=c_0+c_1\epsilon +\dots +c_i{\epsilon }^i+\dots =\displaystyle \sum^{\infty }_{k=0}{c_k}{\epsilon }^k\nonumber$
Then the third condition becomes
\begin{align*} {\ln \textrm{ĸ}\ } &=\displaystyle \sum^{\infty }_{i=1}{N_i}{ \ln \rho \ }\left({\epsilon }_i\right) \[4pt]&=\displaystyle \sum^{\infty }_{i=1}{N_i}\displaystyle \sum^{\infty }_{k=0}{\left[c_k{\epsilon }^k_i\right]}\[4pt]&=\displaystyle \sum^{\infty }_{k=0}{\displaystyle \sum^{\infty }_{i=1}{c_kN_i{\epsilon }^k_i}}=c_0\displaystyle \sum^{\infty }_{i=1}{N_i}{\epsilon }^0_i+c_1\displaystyle \sum^{\infty }_{i=1}{N_i}{\epsilon }^1_i+\dots +\displaystyle \sum^{\infty }_{k=2}{c_k\displaystyle \sum^{\infty }_{i=1}{N_i{\epsilon }^k_i}}\[4pt]&=c_0N+c_1E^{\#}+\dots +\displaystyle \sum^{\infty }_{k=2}{c_k\displaystyle \sum^{\infty }_{i=1}{N_i{\epsilon }^k_i}} \end{align*}\nonumber
We see that the coefficient of $c_0$ is $N$ and the coefficient of $c_1$ is the total energy, $E^{\#}$. Therefore, the sum of the first two terms is a constant. We can make the trial function satisfy the third condition if we set $c_k=0$ for all $k>1$. We find
${ \ln \textrm{ĸ}\ }=\displaystyle \sum^{\infty }_{i=1}{N_i}{ \ln \rho \ }\left({\epsilon }_i\right)=\displaystyle \sum^{\infty }_{i=1}{N_i}\left(c_0+c_1{\epsilon }_i\right)\nonumber$
The last equality is satisfied if, for each quantum state, we have
${ \ln \rho \ }\left({\epsilon }_i\right)=c_0+c_1{\epsilon }_i\nonumber$ or $\rho \left({\epsilon }_i\right)=\alpha \ \mathrm{exp}\left(c_1{\epsilon }_i\right)\nonumber$
where $\alpha =\mathrm{exp}\left(c_0\right)$. Since the ${\epsilon }_i$ are positive and the probabilities $\rho \left({\epsilon }_i\right)$ lie in the interval $0<\rho \left({\epsilon }_i\right)<1$, we must have $c_1<0$. Following custom, we let $c_1=-\beta$, where $\beta$ is a constant, and $\beta >0$. Then,
$\rho \left({\epsilon }_i\right)=\alpha \ \mathrm{exp}\left(-\beta {\epsilon }_i\right)\nonumber$ and $P_i=g_i\rho \left({\epsilon }_i\right)=\alpha g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)\nonumber$
The fourth condition is that the energy-level probabilities sum to one. Using this, we have
$1=\displaystyle \sum^{\infty }_{i=1}{P\left({\epsilon }_i\right)}=\alpha \displaystyle \sum^{\infty }_{i=1}{g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)}\nonumber$
The sum of exponential terms is so important that it is given a name. It is called the molecular partition function. It is often represented by the letter “$z$.” Letting
$z=\displaystyle \sum^{\infty }_{i=1}{g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)}\nonumber$ we have
$\alpha =\frac{1}{\displaystyle \sum^{\infty }_{i=1}{g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)}}=z^{-1}\nonumber$
Thus, we have the Boltzmann probability:
\begin{align*} P\left({\epsilon }_i\right) &=g_i\rho \left({\epsilon }_i\right) \[4pt] &=\frac{g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)}{\displaystyle \sum^{\infty }_{i=1}{g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)}} \[4pt] &=\frac{g_i}{z}\ \mathrm{exp}\left(-\beta {\epsilon }_i\right) \end{align*}
The probability of an energy level depends only on its degeneracy, $g_i$, its energy, ${\epsilon }_i$, and the constant $\beta$. Since the equilibrium-characterizing population set is determined by the probabilities, we have $P_i={N^{\textrm{⦁}}_i}/{N}$, and
$\frac{N^{\textrm{⦁}}_i}{N}=\frac{g_i}{z}\ \mathrm{exp}\left(-\beta {\epsilon }_i\right)\nonumber$
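A minimal numerical sketch of this result, written in Python with hypothetical level energies and degeneracies, evaluates $z$ and the energy-level probabilities directly from these equations.

```python
import math

k = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                              # temperature, K
beta = 1.0 / (k * T)

# Hypothetical molecular energy levels (J) and degeneracies.
energies = [0.0, 2.0e-21, 4.0e-21]     # epsilon_i
degeneracies = [1, 2, 2]               # g_i

# Molecular partition function  z = sum_i g_i exp(-beta * epsilon_i)
z = sum(g * math.exp(-beta * e) for g, e in zip(degeneracies, energies))

# Energy-level probabilities  P_i = g_i exp(-beta * epsilon_i) / z
P = [g * math.exp(-beta * e) / z for g, e in zip(degeneracies, energies)]

print("z =", z)
for i, p in enumerate(P, start=1):
    print(f"P_{i} = {p:.4f}")
print("sum of P_i =", sum(P))          # must equal 1
```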
In Section 21.2, we develop Lagrange’s method of undetermined multipliers. In Section 21.3, we develop the same result by applying Lagrange’s method to our model for the probabilities of the microstates of an isolated system. That is, we find the Boltzmann probability equation by applying Lagrange’s method to the entropy relationship,
$S=-Nk\displaystyle \sum^{\infty }_{i=1}{P_i}{ \ln \rho \left({\epsilon }_i\right)\ }\nonumber$
that we first develop in Section 20.10. In Section 21.4, we find the Boltzmann probability equation by using Lagrange’s method to find the values of $N^{\textrm{⦁}}_i$ that produce the largest possible value of $W$ in an isolated system. This argument requires us to assume that there is a very large number of molecules in each of the occupied energy levels of the most probable population set. Since our other arguments do not assume anything about the magnitude of the various $N^{\textrm{⦁}}_i$, it is evident that some of the assumptions we make when we apply Lagrange’s method to find the $N^{\textrm{⦁}}_i$ are not inherent characteristics of our microscopic model.
21.02: Lagrange’s Method of Undetermined Multipliers
Lagrange’s method of undetermined multipliers is a method for finding the minimum or maximum value of a function subject to one or more constraints. A simple example serves to clarify the general problem. Consider the function
$z=z_0\ \mathrm{exp}\left(x^2+y^2\right)\nonumber$
where $z_0$ is a constant. This function is a surface of revolution, which is tangent to the plane $z=z_0$ at $\left(0,0,z_0\right)$. The point of tangency is the minimum value of $z$. At any other point in the $xy$-plane, $z\left(x,y\right)$ is greater than $z_0$. If either $x$ or $y$ becomes arbitrarily large, $z$ does also. If we project a contour of constant $z$ onto the $xy$-plane, the projection is a circle of radius
$r=\left(x^2+y^2\right)^{1/2}. \nonumber$
Suppose that we introduce an additional condition; we require $y=1-x$. Then we ask for the smallest value of $z$ consistent with this constraint. In the $xy$-plane the constraint is a line of slope $-1$ and intercept $1$. A plane that includes this line and is parallel to the $z$-axis intersects the function $z$. As sketched in Figure 1, this intersection is a curve. Far away from the origin, the value of $z$ at which the intersection occurs is large. Nearer the origin, the value of $z$ is smaller, and there is some $\left(x,y\right)$ at which it is a minimum. Our objective is to find this minimum.
There is a straightforward solution of this problem; we can substitute the constraint equation for $y$ into the equation for $z$, making $z$ a function of only one variable, $x$. We have
\begin{align*} z&=z_0\ \mathrm{exp} \left(x^2+{\left(1-x\right)}^2\right) \[4pt] &=z_0\ \mathrm{exp} \left(2x^2-2x+1\right)\end{align*} \nonumber
To find the minimum, we equate the derivative to zero, giving
$0=\frac{dz}{dx}=\left(4x-2\right)z_0\ \mathrm{exp} \left(2x^2-2x+1\right)\nonumber$
so that the minimum occurs at $x={1}/{2}$, $y={1}/{2}$, and
$z=z_0\ \mathrm{exp}\left({1}/{2}\right)\nonumber$
Solving such problems by elimination of variables can become difficult. Lagrange’s method of undetermined multipliers is a general method, which is usually easy to apply and which is readily extended to cases in which there are multiple constraints. We can see how Lagrange’s method arises by thinking further about our particular example. We can imagine that we “walk” along the constraint line in the $xy$-plane and measure the $z$ that is directly overhead as we progress. The problem is to find the minimum value of $z$ that we encounter as we proceed along the line. This perspective highlights the central feature of the problem: While it is formally a problem in three dimensions ($x$, $y$, and $z$), the introduction of the constraint makes it a two-dimensional problem. We can think of one dimension as a displacement along the line $y=1-x$, from some arbitrary starting point on the line. The other dimension is the perpendicular distance from the $xy$-plane to the intersection with the surface $z$.
The relevant part of the $xy$-plane is just the one-dimensional constraint line. We can recognize this by parameterizing the line. Let $t$ measure location on the line relative to some initial point at which $t\ =\ 0$. Then we have $x=x\left(t\right)$ and $y=y\left(t\right)$ and
$z\left(x,y\right)=z\left(x\left(t\right),y\left(t\right)\right)=z\left(t\right).\nonumber$
The point we seek is the one at which ${dz}/{dt}=0$.
Now let us examine a somewhat more general problem. We want a general way to find the values $\left(x,y\right)$ that minimize (or maximize) a function $h=h\left(x,y\right)$ subject to a constraint of the form $c=g\left(x,y\right)$, where $c$ is a constant. As in our example, this constraint requires a solution in which $\left(x,y\right)$ are on a particular line. If we parameterize this problem, we have
$h=h\left(x,y\right)=h\left(x\left(t\right),y\left(t\right)\right)=h\left(t\right)\nonumber$
and
$c=g\left(x,y\right)=g\left(x\left(t\right),y\left(t\right)\right)=g\left(t\right)\nonumber$
Because $c$ is a constant, ${dc}/{dt}={dg}/{dt}=0$. The solution we seek is the point at which $h$ is an extremum. At this point, ${dh}/{dt}=0$. Therefore, at the point we seek, we have
$\frac{dh}{dt}={\left(\frac{\partial h}{\partial x}\right)}_y\frac{dx}{dt}+{\left(\frac{\partial h}{\partial y}\right)}_x\frac{dy}{dt}=0\nonumber$ and $\frac{dg}{dt}={\left(\frac{\partial g}{\partial x}\right)}_y\frac{dx}{dt}+{\left(\frac{\partial g}{\partial y}\right)}_x\frac{dy}{dt}=0\nonumber$
We can multiply either of these equations by any factor, and the product will be zero. We multiply ${dg}/{dt}$ by $\lambda$ (where $\lambda \neq 0$) and subtract the result from ${dh}/{dt}$. Then, at the point we seek,
$0=\frac{dh}{dt}-\lambda \frac{dg}{dt}={\left(\frac{\partial h}{\partial x}-\lambda \frac{\partial g}{\partial x}\right)}_y\frac{dx}{dt}+{\left(\frac{\partial h}{\partial y}-\lambda \frac{\partial g}{\partial y}\right)}_x\frac{dy}{dt}\nonumber$
Since we can choose $x\left(t\right)$ and $y\left(t\right)$ any way we please, we can insure that ${dx}/{dt}\neq 0$ and ${dy}/{dt}\neq 0$ at the solution point. If we do so, the terms in parentheses must be zero at the solution point.
Conversely, setting
${\left(\frac{\partial h}{\partial x}-\lambda \frac{\partial g}{\partial x}\right)}_y=0\nonumber$ and ${\left(\frac{\partial h}{\partial y}-\lambda \frac{\partial g}{\partial y}\right)}_x=0\nonumber$
is sufficient to insure that
$\frac{dh}{dt}=\lambda \frac{dg}{dt}\nonumber$
Since ${dg}/{dt}=0$, these conditions insure that ${dh}/{dt}=0$. This means that, if we can find a set $\{x,y,\lambda \}$ satisfying
${\left(\frac{\partial h}{\partial x}-\lambda \frac{\partial g}{\partial x}\right)}_y=0\nonumber$ and ${\left(\frac{\partial h}{\partial y}-\lambda \frac{\partial g}{\partial y}\right)}_x=0\nonumber$ and $c-g\left(x,y\right)=0\nonumber$
then the values of $x$ and $y$ must be those that make $h\left(x,y\right)$ an extremum, subject to the constraint that $c=g\left(x,y\right)$. We have not shown that the set $\{x,y,\lambda \}$ exists, but we have shown that if it exists, it is the desired solution.
A useful mnemonic simplifies the task of generating the family of equations that we need to use Lagrange’s method. The mnemonic calls upon us to form a new function, which is a sum of the function whose extremum we seek and a series of additional terms. There is one additional term for each constraint equation. We generate this term by putting the constraint equation in the form $c-g\left(x,y\right)=0$ and multiplying by an undetermined parameter. For the case we just considered, the mnemonic function is
$F_{mn}=h\left(x,y\right)+\lambda \left(c-g\left(x,y\right)\right)\nonumber$
We can generate the set of equations that describe the solution set, $\{x,y,\lambda \}$, by equating the partial derivatives of $F_{mn}$ with respect to $x$, $y$, and $\lambda$ to zero. That is, the solution set satisfies the simultaneous equations
$\frac{\partial F_{mn}}{\partial x}=0\nonumber$
$\frac{\partial F_{mn}}{\partial y}=0\nonumber$ and $\frac{\partial F_{mn}}{\partial \lambda }=0\nonumber$
If there are multiple constraint equations, $c_{\lambda }-g_{\lambda }\left(x,y\right)=0$, $c_{\alpha }-g_{\alpha }\left(x,y\right)=0$, and $c_{\beta }-g_{\beta }\left(x,y\right)=0$, then the mnemonic function is
$F_{mn}=h\left(x,y\right)+\lambda \left(c_{\lambda }-g_{\lambda }\left(x,y\right)\right)+\alpha \left(c_{\alpha }-g_{\alpha }\left(x,y\right)\right)+\beta \left(c_{\beta }-g_{\beta }\left(x,y\right)\right)\nonumber$
and the simultaneous equations that represent the constrained extremum are
• ${\partial F_{mn}}/{\partial }x=0$,
• ${\partial F_{mn}}/{\partial }y=0$,
• ${\partial F_{mn}}/{\partial }\lambda =0$,
• ${\partial F_{mn}}/{\partial }\alpha =0$, and
• ${\partial F_{mn}}/{\partial }\beta =0$.
To illustrate the use of the mnemonic, let us return to the example with which we began. The mnemonic equation is
$F_{mn}=z_0\ \mathrm{exp} \left(x^2+y^2\right)+\lambda \left(1-x-y\right)\nonumber$
so that
$\frac{\partial F_{mn}}{\partial x}=2xz_0\ \mathrm{exp} \left(x^2+y^2\right)-\lambda =0, \nonumber$
$\frac{\partial F_{mn}}{\partial y}=2yz_0\ \mathrm{exp} \left(x^2+y^2\right)-\lambda =0\nonumber$
and
$\frac{\partial F_{mn}}{\partial \lambda }=1-x-y=0 \nonumber$
which yield $x={1}/{2}$, $y={1}/{2}$, and $\lambda =z_0\ \mathrm{exp} \left({1}/{2}\right)$.
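If a numerical check of this example is wanted, the constrained minimum can be recovered with a standard optimizer. The sketch below assumes the SciPy library is available and takes $z_0=1$ for simplicity; neither choice is part of the development above.

```python
import numpy as np
from scipy.optimize import minimize

z0 = 1.0

def z_func(v):
    """The surface z = z0*exp(x^2 + y^2)."""
    x, y = v
    return z0 * np.exp(x**2 + y**2)

# The constraint y = 1 - x, written as g(x, y) = x + y - 1 = 0.
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] - 1.0}

result = minimize(z_func, x0=[0.0, 1.0], constraints=[constraint])
print("constrained minimum at (x, y) =", result.x)   # ~ (0.5, 0.5)
print("minimum z =", result.fun)                      # ~ exp(1/2) = 1.6487...
```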
21.03: Deriving the Boltzmann Equation I
In Sections 20.10 and 20.14, we develop the relationship between the system entropy and the probabilities of a microstate, $\rho \left({\epsilon }_i\right)$, and an energy level, $P_i=g_i\rho \left({\epsilon }_i\right)$, in our microscopic model. We find
\begin{align*} S &=-Nk\sum^{\infty }_{i=1}{P_i}{ \ln \rho \left({\epsilon }_i\right)\ } \[4pt] &=-Nk\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)}{ \ln \rho \left({\epsilon }_i\right)\ } \end{align*}
For an isolated system at equilibrium, the entropy must be a maximum, and hence
$-\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)}{ \ln \rho \left({\epsilon }_i\right)} \label{maxentropy}$
must be a maximum. We can use Lagrange’s method to find the dependence of the quantum-state probability on its energy. The $\rho \left({\epsilon }_i\right)$ must be such as to maximize entropy (Equation \ref{maxentropy}) subject to the constraints
$1=\sum^{\infty }_{i=1}{P_i}=\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)} \nonumber$
and
$\left\langle \epsilon \right\rangle =\sum^{\infty }_{i=1}{P_i{\epsilon }_i}=\sum^{\infty }_{i=1}{g_i{\epsilon }_i\rho \left({\epsilon }_i\right)} \nonumber$
where $\left\langle \epsilon \right\rangle$ is the expected value of the energy of one molecule. The mnemonic function becomes
$F_{mn}=-\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)}{ \ln \rho \left({\epsilon }_i\right)\ }+{\alpha }^*\left(1-\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)}\right)+\beta \left(\left\langle \epsilon \right\rangle -\sum^{\infty }_{i=1}{g_i{\epsilon }_i\rho \left({\epsilon }_i\right)}\right) \nonumber$
Equating the partial derivative with respect to $\rho \left({\epsilon }_i\right)$ to zero, $\frac{\partial F_{mn}}{\partial \rho \left({\epsilon }_i\right)}=-g_i{ \ln \rho \left({\epsilon }_i\right)\ }-g_i-{\alpha }^*g_i-\beta g_i{\epsilon }_i=0 \nonumber$
so that
$\rho \left({\epsilon }_i\right)={\mathrm{exp} \left(-{\alpha }^*-1\right)\ }{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ } \nonumber$
From
$1=\sum^{\infty }_{i=1}{P_i}=\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)} \nonumber$
the argument we use in Section 21.1 again leads to the partition function, $z$, and the Boltzmann equation
$P_i=g_i\rho \left({\epsilon }_i\right)=z^{-1}g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right) \nonumber$
21.04: Deriving the Boltzmann Equation II
In Section 20.9, we find that the probability of the population set $\{N_1,\ N_2,\dots ,N_i,\dots \}$ in an isolated system is
${\rho }_{MS,N,E}\ N!\prod^{\infty }_{i=1}{\frac{g^{N_i}_i}{N_i!}} \nonumber$
The thermodynamic probability
$W\left(N_i,g_i\right)=N!\prod^{\infty }_{i=1}{\frac{g^{N_i}_i}{N_i!}} \nonumber$
is the number of microstates of the population set. $\rho_{MS,N,E}$ is the constant probability of any one microstate. In consequence, as we see in Section 20.10, the probability of a population set is proportional to its thermodynamic probability, $W\left(N_i,g_i\right)$. It follows that the most probable population set is that for which $W\left(N_i,g_i\right)$ is a maximum. Our microscopic model asserts that the most probable population set, $\{N^{\textrm{⦁}}_1,\ N^{\textrm{⦁}}_2,\dots ,N^{\textrm{⦁}}_i,\dots \}$, characterizes the equilibrium state, because the equilibrium system always occupies either the most probable population set or another population set whose macroscopic properties are indistinguishable from those of the most probable one.
Evidently, the equilibrium-characterizing population set is the one for which $W\left(N_i,g_i\right)$, or ${ \ln W\left(N_i,g_i\right)\ }$, is a maximum. Let us assume that the $N_i$ are very large so that we can treat them as continuous variables, and we can use Stirling’s approximation for $N_i!$. Then we can use Lagrange’s method of undetermined multipliers to find the most probable population set by finding the set, $N_1,\ N_2,\dots ,N_i,\dots$, for which $\ln W\left(N_i,g_i\right)$ is a maximum, subject to the constraints
$N=\sum^{\infty }_{i=1}{N_i} \nonumber$
and
$E=\sum^{\infty }_{i=1}{N_i}{\epsilon }_i. \nonumber$
From our definition of the system, both $N$ and $E$ are constant. The mnemonic function is
\begin{align*} F_{mn} &={ \ln \left(\frac{N!g^{N_1}_1g^{N_2}_2\dots g^{N_i}_i\dots }{N_1!N_2!\dots N_i!\dots }\right)\ }+\alpha \left(N-\sum^{\infty }_{i=1}{N_i}\right)+\beta \left(E-\sum^{\infty }_{i=1}{N_i{\epsilon }_i}\right) \[4pt] &\approx N{ \ln N-N-\sum^{\infty }_{i=1}{N_i{ \ln N_i\ }}\ }+\sum^{\infty }_{i=1}{N_i}+\sum^{\infty }_{i=1}{N_i}{ \ln g_i\ }+\alpha \left(N-\sum^{\infty }_{i=1}{N_i}\right)+\beta \left(E-\sum^{\infty }_{i=1}{N_i{\epsilon }_i}\right) \end{align*}
Taking the partial derivative with respect to $N_i$ gives
$\frac{\partial F_{mn}}{\partial N_i}=-N_i\left(\frac{1}{N_i}\right)-{ \ln N_i\ }+1+{ \ln g_i\ }-\alpha -\beta {\epsilon }_i=-{ \ln N_i\ }+{ \ln g_i\ }-\alpha -\beta {\epsilon }_i \nonumber$
from which we have, for the population set with the largest possible thermodynamic probability,
$-{ \ln N^{\textrm{⦁}}_i\ }+{ \ln g_i\ }-\alpha -\beta {\epsilon }_i=0 \nonumber$ or $N^{\textrm{⦁}}_i=g_i{\mathrm{exp} \left(-\alpha \right)\ }{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ } \nonumber$
We can again make use of the constraint on the total number of molecules to find ${\mathrm{exp} \left(-\alpha \right)\ }$:
$N=\sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i}={\mathrm{exp} \left(-\alpha \right)\ }\sum^{\infty }_{i=1}{g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ }} \nonumber$
so that ${\mathrm{exp} \left(-\alpha \right)\ }=Nz^{-1}$, where $z$ is the partition function, $z=\sum^{\infty }_{i=1}{g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ }}$. Therefore, in the most probable population set, the number of molecules having energy ${\epsilon }_i$ is
$N^{\textrm{⦁}}_i=Nz^{-1}g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ } \nonumber$
The fraction with this energy is
$\dfrac{N^{\textrm{⦁}}_i}{N}=z^{-1}g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ } \nonumber$
This fraction is also the probability of finding an arbitrary molecule in one of the quantum states whose energy is ${\epsilon }_i$. When the isolated system and the corresponding constant-temperature system are functionally equivalent, this probability is $P_i$. As in the two previous analyses, we have
\begin{align*} P_i &=g_i\rho \left({\epsilon }_i\right) \[4pt] &=z^{-1}g_i\ \mathrm{exp}\left(-\beta {\epsilon }_i\right). \end{align*}
This derivation of Boltzmann’s equation from $W_{max}$ is the most common introductory treatment. It relies on the assumption that all of the $N_i$ are large enough to justify treating them as continuous variables. This assumption proves to be invalid for many important systems. (For ideal gases, we find that $N_i=0$ or $N_i=1$ for nearly all of the very large number of energy levels that are available to a given molecule.) Nevertheless, the result obtained is clearly correct; not only is it the same as the result of our two previous arguments, but also it leads to satisfactory agreement between microscopic models and the macroscopic properties of a wide variety of systems.
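A brute-force calculation on a very small, hypothetical isolated system illustrates the content of this derivation without invoking Stirling's approximation. The sketch below enumerates every population set allowed by the $N$ and $E$ constraints for three non-degenerate, evenly spaced levels, finds the set with the largest $W$, and checks that its populations fall off in the roughly constant ratio that an exponential, Boltzmann-like distribution requires. The values of $N$, $E$, and the level energies are invented for illustration.

```python
from math import factorial

# Hypothetical isolated system: non-degenerate levels with energies 0, 1, 2
# (arbitrary units), N molecules, fixed total energy E.
N, E = 60, 40

def W(pops):
    """Microstates of a population set with g_i = 1."""
    w = factorial(sum(pops))
    for n in pops:
        w //= factorial(n)
    return w

# Allowed sets satisfy N0 + N1 + N2 = N and N1 + 2*N2 = E.
allowed = []
for N2 in range(E // 2 + 1):
    N1 = E - 2 * N2
    N0 = N - N1 - N2
    if N0 >= 0 and N1 >= 0:
        allowed.append((N0, N1, N2))

N0, N1, N2 = max(allowed, key=W)
print("most probable population set:", (N0, N1, N2))
# For N_i proportional to exp(-beta*eps_i) with evenly spaced eps_i,
# successive population ratios should be (nearly) equal:
print("N1/N0 =", N1 / N0, "  N2/N1 =", N2 / N1)
```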
21.05: Partition Functions and Equilibrium - Isomeric Molecules
In Section 20.11, we discuss chemical equilibrium between isomers from the perspective afforded by Boltzmann’s definition of entropy. Now, let us consider equilibrium in this system from the perspective afforded by the energy-level probabilities. Let us assign even-integer labels to energy levels of isomer $A$ and odd-integer labels to energy levels of isomer $B$. A group of atoms that can arrange itself into either a molecule of $A$ or a molecule of $B$ can occupy any of these energy levels. The partition function for this group of molecules to which all energy levels are available is
$z_{A+B}=\sum^{\infty }_{i=1}{g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ }} \nonumber$
The fraction of molecules in the first (odd) energy level associated with molecules of isomer $B$ is
$\dfrac{N^{\textrm{⦁}}_1}{N_{A+B}}=g_1{\left(z_{A+B}\right)}^{-1}{\mathrm{exp} \left(-\beta {\epsilon }_1\right)\ } \nonumber$
and the fraction in the next is
$\dfrac{N^{\textrm{⦁}}_3}{N_{A+B}}=g_3{\left(z_{A+B}\right)}^{-1}{\mathrm{exp} \left(-\beta {\epsilon }_3\right)\ } \nonumber$
The total number of $B$ molecules is
$N^{\textrm{⦁}}_B=\sum_{i\ odd}{N_i} \nonumber$
so that the fraction of all of the molecules that are $B$ molecules is
$\dfrac{N^{\textrm{⦁}}_B}{N_{A+B}}={\left(z_{A+B}\right)}^{-1}\sum_{i\ odd}{g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ }}={z_B}/{z_{A+B}} \nonumber$
Likewise, the fraction that is $A$ molecules is
$\dfrac{N^{\textrm{⦁}}_A}{N_{A+B}}={\left(z_{A+B}\right)}^{-1}\sum_{i\ even}{g_i{\mathrm{exp} \left(-\beta {\epsilon }_i\right)\ }}={z_A}/{z_{A+B}} \nonumber$
The equilibrium constant for the equilibrium between $A$ and $B$ is
$K_{eq}=\frac{N^{\textrm{⦁}}_B}{N^{\textrm{⦁}}_A}=\frac{z_B}{z_A} \nonumber$
We see that the equilibrium constant for the isomerization reaction is simply equal to the ratio of the partition functions of the isomers.
It is always true that the equilibrium constant is a product of partition functions for reaction-product molecules divided by a product of partition functions for reactant molecules. However, the partition functions for the various molecules must be expressed with a common zero of energy. Choosing the infinitely separated component atoms as the zero-energy state for every molecule assures that this is the case. However, it is often convenient to express the partition function for a molecule by measuring each molecular energy level, ${\epsilon }_i$, relative to the lowest energy state of that isolated molecule. When we do this, the zero of energy is different for each molecule.
To adjust the energies in a molecule’s partition function so that they are expressed relative to the energy of the molecule’s infinitely separated atoms, we must add to each molecular energy the energy required to take the molecule from its lowest energy state to its isolated component atoms. If $z$ is the partition function when the ${\epsilon }_i$ are measured relative to the lowest energy state of the isolated molecule, $\mathrm{\Delta }\epsilon$ is the energy released when the isolated molecule is formed from its component atoms, and $z^{\mathrm{*}}$ is the partition function when the ${\epsilon }_i$ are measured relative to the molecule’s separated atoms, we have $z^{\mathrm{*}} = z\mathrm{\ }\mathrm{exp}\left({ + \mathrm{\Delta }\epsilon }/{kT}\right)$.
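The sketch below assembles these ideas for a pair of hypothetical isomers. It builds each molecular partition function from levels measured relative to that molecule's own ground state, shifts both onto the common zero of energy (the separated atoms) with $z^{\mathrm{*}} = z\ \mathrm{exp}\left({+\mathrm{\Delta }\epsilon }/{kT}\right)$, and forms $K_{eq}={z^{\mathrm{*}}_B}/{z^{\mathrm{*}}_A}$. All of the numerical values are invented for illustration.

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                # temperature, K
kT = k * T

def z_from_levels(levels, degeneracies):
    """Partition function with energies measured from the molecule's own
    lowest level."""
    return sum(g * math.exp(-e / kT) for e, g in zip(levels, degeneracies))

# Hypothetical isomers A and B: level energies (J) above each molecule's own
# ground state, degeneracies, and the energy (J) released when the molecule
# is formed from its separated atoms (the delta-epsilon of the text).
levels_A, g_A, dE_A = [0.0, 3.0e-21, 6.0e-21], [1, 1, 2], 8.00e-19
levels_B, g_B, dE_B = [0.0, 2.0e-21],          [1, 3],    7.90e-19

# Shift both partition functions to the common zero of energy:
# z* = z * exp(+delta_eps / kT)
zA_star = z_from_levels(levels_A, g_A) * math.exp(dE_A / kT)
zB_star = z_from_levels(levels_B, g_B) * math.exp(dE_B / kT)

K_eq = zB_star / zA_star       # equilibrium constant for A <-> B
print("K_eq = z*_B / z*_A =", K_eq)
```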
21.06: Finding β and the Thermodynamic Functions for Distinguishable Molecules
All of a substance’s thermodynamic functions can be derived from the molecular partition function. We begin with the entropy. We consider closed (constant N) systems of independent, distinguishable molecules in which only pressure–volume work is possible. In Sections 20.10 and 20.14, we find that two different approaches give the entropy of this system,
$S=-Nk\sum^{\infty }_{i=1}{P_i}{ \ln \rho \left(\epsilon_i\right)\ }. \nonumber$
In Sections 21.1, 21.3, and 21.4, we find that three different approaches give the Boltzmann equation,
$P_i=g_i\rho \left(\epsilon_i\right)=z^{-1}g_i\ \mathrm{exp}\left(-\beta \epsilon_i\right). \nonumber$
We have
$\ln \rho \left(\epsilon_i\right)=-\ln z -\beta \epsilon_i \nonumber$
Substituting, and recognizing that the energy of the N-molecule system is $E=N\left\langle \epsilon \right\rangle$, we find that the entropy of the system is
$S=kN\sum^{\infty }_{i=1}{P_i}\left[{ \ln z\ }+\beta \epsilon_i\right]=kN{ \ln z\ }\sum^{\infty }_{i=1}{P_i}+k\beta N\sum^{\infty }_{i=1}{P_i}\epsilon_i=kN{ \ln z\ }+k\beta E \nonumber$
In Section 10.1, we find that the fundamental equation implies that
${\left(\frac{\partial E}{\partial S}\right)}_V=T \nonumber$
Since the $\epsilon_i$ are fixed when the volume and temperature of the system are fixed, ${ \ln z\ }$ is constant when the volume and temperature of the system are constant. Differentiating $S=kN{ \ln z\ }+k\beta E$ with respect to $S$ at constant $V$, we find
$1=k\beta {\left(\frac{\partial E}{\partial S}\right)}_V=k\beta T \nonumber$ so that $\beta =\frac{1}{kT} \nonumber$
This is an important result: Because we have now identified all of the parameters in our microscopic model, we can write the results we have found in forms that are more useful:
1. $\underbrace{z=\sum^{\infty}_{i=1}{g_i} \mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right)}_{\text{molecular partition function}} \nonumber$
2. $\underbrace{P_i=g_i{\rho }\left({\epsilon}_i\right) =z^{-1}g_i\mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right)}_{\text{Boltzmann’s equation}} \nonumber$
3. $\underbrace{S=kN \ln z+\frac{E}{T}}_{\text{Entropy of an N-molecule system}} \nonumber$
To express the system energy in terms of the molecular partition function, we first observe that
$E=N\left\langle \epsilon \right\rangle =N\sum^{\infty }_{i=1}{P_i\epsilon_i}=Nz^{-1}\sum^{\infty }_{i=1}{g_i\epsilon_i}\mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right) \nonumber$
Then we observe that
\begin{align*} \left(\frac{\partial \ln z}{\partial T}\right)_V &= z^{-1} \sum^{\infty}_{i=1} g_i \left(\frac{\epsilon_i}{kT^2}\right) \mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right) \[4pt] &= \left(\frac{1}{NkT^2}\right) N z^{-1} \sum^{\infty}_{i = 1} g_i \epsilon_i \mathrm{exp} \left(\frac{-\epsilon_i}{kT}\right) \[4pt] &=\frac{E}{NkT^2} \end{align*} \nonumber
The system energy becomes
$\underbrace{E=NkT^2{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V}_{\text{energy of an N-molecule system}} \nonumber$
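A short numerical sketch, again with hypothetical energy levels and degeneracies, checks that the two routes to the system energy agree, and that the entropy computed from $S=kN{ \ln z\ }+{E}/{T}$ matches $S=-Nk\sum{P_i}{ \ln \rho \left(\epsilon_i\right)\ }$. The temperature derivative is taken by a finite difference.

```python
import math

k = 1.380649e-23                       # Boltzmann constant, J/K
N = 6.02214076e23                      # one mole of molecules
energies = [0.0, 1.5e-21, 3.0e-21]     # hypothetical epsilon_i, J
degeneracies = [1, 2, 1]               # hypothetical g_i

def z(T):
    return sum(g * math.exp(-e / (k * T)) for e, g in zip(energies, degeneracies))

T = 300.0
P = [g * math.exp(-e / (k * T)) / z(T) for e, g in zip(energies, degeneracies)]

# Energy two ways: directly from the probabilities, and from NkT^2 (d ln z/dT)_V.
E_direct = N * sum(p * e for p, e in zip(P, energies))
dT = 1.0e-3
dlnz_dT = (math.log(z(T + dT)) - math.log(z(T - dT))) / (2.0 * dT)
E_from_z = N * k * T**2 * dlnz_dT

# Entropy two ways: S = kN ln z + E/T, and S = -Nk sum_i P_i ln rho(eps_i),
# where rho(eps_i) = P_i / g_i is the quantum-state probability.
S_from_z = k * N * math.log(z(T)) + E_direct / T
S_direct = -N * k * sum(p * math.log(p / g) for p, g in zip(P, degeneracies))

print("E (direct)    =", E_direct, "J")
print("E (from ln z) =", E_from_z, "J")
print("S (from ln z) =", S_from_z, "J/K")
print("S (direct)    =", S_direct, "J/K")
```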
By definition, $A=E-TS$. Rearranging our entropy result, $S=kN{ \ln z\ }+{E}/{T}$, we have $E-TS=-NkT{ \ln z\ }$. Thus, $A=-NkT{ \ln z\ } \nonumber$ (Helmholtz free energy of an N-molecule system)
From $dA=-SdT-PdV$, we have
${\left(\frac{\partial A}{\partial V}\right)}_T=-P \nonumber$
(Here, of course, $P$ is the pressure of the system, not a probability.) Differentiating $A=-NkT{ \ln z\ }$ with respect to $V$ at constant $T$, we find
$P=NkT{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T \nonumber$ (pressure of an N-molecule system)
The pressure–volume product becomes
$PV=NkTV{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T \nonumber$
Substituting into $H=E+PV$, the enthalpy becomes
$H=NkT\left[T{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V+V{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T\right] \nonumber$ (enthalpy of an N-molecule system)
The Gibbs free energy is given by $G=A+PV$. Substituting, we find
$G=-NkT{ \ln z\ }+NkTV{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T \nonumber$ (Gibbs free energy of an N-molecule system)
The chemical potential can be found from
$\mu ={\left(\frac{\partial A}{\partial n}\right)}_{V,T} \nonumber$
At constant volume and temperature, $kT{ \ln z\ }$ is constant.
Substituting $N=n\overline{N}$ into $A=-NkT{ \ln z\ }$ and taking the partial derivative, we find
$\underbrace{\mu =-\overline{N}kT{ \ln z\ }=-RT \ln z}_{\text{chemical potential of distinguishable molecules}} \nonumber$
In statistical thermodynamics we frequently express the chemical potential per molecule, rather than per mole; then, $\mu ={\left(\frac{\partial A}{\partial N}\right)}_{V,T} \nonumber$ and
$\mu =-kT{ \ln z\ } \nonumber$(chemical potential per molecule)
21.07: The Microscopic Model for Reversible Change
Now let us return to the closed (constant-$N$) system to develop another perspective on the dependence of its macroscopic thermodynamic properties on the molecular energy levels and their probabilities. We undertake to describe the system using volume and temperature as the independent variables. In thinking about the energy-level probabilities, we stipulate that any parameters that affect the state of the system remain constant. Specifically, we mean that any parameters that appear in the Schrödinger equation remain constant. For example, the energy levels of a particle in a box depend on the mass of the particle and the length of the box. Any such parameter is called an exogenous variable. If we change an exogenous variable (say the length of the box) by a small amount, all of the energy levels change by a small amount, and all of the probabilities change by a small amount. The energy levels and their probabilities are smooth functions of the exogenous variable. If $\xi$ is the exogenous variable, we have
$P_i=P\left(\epsilon_i\right)=g_i\rho \left(\epsilon_i\left(\xi \right)\right) \nonumber$
A change in the exogenous variable corresponds to a reversible macroscopic process.
For a particle in a box, the successive ${\psi }_i$ are functions that depend on the quantum number, $i$, and the length of the box, $\ell$. When we change the length of the box, the wavefunction and its associated energy both change. Both are continuous functions of the length of the box. The energy is
$\epsilon_i=\frac{i^2h^2}{8m{\ell }^2} \nonumber$
Changing the length of the box is analogous to changing the volume of a system. A reversible volume change entails work. We see that changing the length of the box does work on the particle-in-a-box, just as changing the volume of a three-dimensional system does work on the system.
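A few lines of Python make this dependence explicit; the particle mass (an electron) and the nanometre-scale box lengths below are chosen arbitrarily for illustration.

```python
# Particle-in-a-box levels, eps_i = i^2 h^2 / (8 m l^2), for two box lengths.
h = 6.62607015e-34        # Planck constant, J s
m = 9.1093837015e-31      # particle mass (electron), kg

def box_levels(length, n_max=3):
    return [i**2 * h**2 / (8.0 * m * length**2) for i in range(1, n_max + 1)]

for length in (1.0e-9, 2.0e-9):                     # box lengths, m
    levels = ["%.3e" % e for e in box_levels(length)]
    print(f"l = {length:.1e} m: eps_1..eps_3 =", levels, "J")
# Doubling l lowers every level by a factor of four; the energy change at
# fixed populations is the work done on the particle discussed in the text.
```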
Temperature plays a central role in the description of equilibrium from the macroscopic perspective. We can see that temperature enters the description of equilibrium from the microscopic perspective through its effect on the probability factors. When we increase the temperature of a system, its energy increases. The average energy of its molecules increases. The probability of an energy level must depend on temperature. Evidently, the probabilities of energy levels that are higher than the original average energy increase when the temperature increases. The probabilities of energy levels that are lower than the original average energy decrease when the temperature increases. The effects of heat and work on the energy levels and their equilibrium populations are diagrammed in Figure $1$.
If our theory is to be useful, the energy we measure for a macroscopic system must be indistinguishably close to the expected value of the system energy as calculated from our microscopic model:
$E_{\mathrm{experiment}}\approx \left\langle E\right\rangle =N\left\langle \epsilon \right\rangle =N\sum^{\infty }_{i=1}{P_i\epsilon_i} \nonumber$
We can use this equation to relate the probabilities, $P_i$, to other thermodynamic functions. Dropping the distinction between the experimental and expected energies, and assuming that the $\epsilon_i$ and the $P_i$ are continuous variables, we find the total differential
$dE=N\sum^{\infty }_{i=1}{\epsilon_idP_i}+N\sum^{\infty }_{i=1}{{P_id\epsilon }_i} \nonumber$
This equation is important because it describes a reversible macroscopic process in terms of the microscopic variables $\epsilon_i$ and $P_i$.
Let us consider the first term. Since $N$ is a constant, we have from $N^{\textrm{⦁}}_i=P_iN$ that $dN^{\textrm{⦁}}_i=NdP_i$. Substituting, we have
$\left(dE\right)_{\epsilon_i}=N\sum^{\infty }_{i=1}{\epsilon_idP_i}=\sum^{\infty }_{i=1}{\epsilon_i}dN^{\textrm{⦁}}_i \nonumber$
This asserts that the energy of the system changes if we redistribute the molecules among the various energy levels. If the redistribution takes molecules out of lower energy levels and puts them into higher energy levels, the energy of the system increases. This is our statistical-mechanical picture of the shift in the equilibrium position that occurs when we heat a system of independent molecules; the allocation of molecules among the available energy levels shifts to put more molecules in higher energy levels and fewer in lower ones. This corresponds to an increase in the temperature of the macroscopic system.
In terms of the macroscopic system, the first term represents an increment of heat added to the system in a reversible process; that is,
$dq^{rev}=N\sum^{\infty }_{i=1}{\epsilon_idP_i} \nonumber$
The second term, $N\sum^{\infty }_{i=1}{{P_id\epsilon }_i}$, is a contribution to the change in the energy of the system from reversible changes in the energy of the various quantum states, while the number of molecules in each quantum state remains constant. This term corresponds to a process in which the quantum states (and their energies) evolve in a continuous way as the state of the system changes. The second term represents an increment of work done on the system in a reversible process; that is
$dw^{rev}=N\sum^{\infty }_{i=1}{{P_id\epsilon }_i} \nonumber$
Evidently, the total differential expression for $dE$ is the fundamental equation of thermodynamics expressed in terms of the variables we use to characterize the molecular system. It enables us to relate the variables that characterize our microscopic model of the molecular system to the variables that characterize the macroscopic system.
For a system in which the reversible work is pressure–volume work, the energy levels depend on the volume. At constant temperature we have
$dw^{rev}=-PdV=N\sum^{\infty }_{i=1}{{P_id\epsilon }_i}=N\sum^{\infty }_{i=1}{P_i{\left(\frac{\partial \epsilon_i}{\partial V}\right)}_TdV} \nonumber$
so that the system pressure, $P$, is related to the energy-level probabilities, $P_i$, as
$P=-N\sum^{\infty }_{i=1}{P_i{\left(\frac{\partial \epsilon_i}{\partial V}\right)}_T} \nonumber$
To evaluate the pressure, we must know how the energy levels depend on the volume of the system.
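The one-dimensional particle in a box provides a case in which this dependence is known exactly: since $\epsilon_n={n^2h^2}/{8m{\ell }^2}$, we have ${\left({\partial \epsilon_n}/{\partial \ell }\right)}_T=-{2\epsilon_n}/{\ell }$, so the per-molecule force on the ends of the box is $2\left\langle \epsilon \right\rangle /\ell$. The sketch below evaluates the sum at a hypothetical "toy" energy scale and recovers the one-dimensional analogue of the ideal-gas law in the high-temperature limit (compare problem 3 of Section 21.10).

```python
import math

# One-dimensional analogue: for a particle in a box, eps_n = n^2 * e1 with
# e1 = h^2/(8 m l^2), so (d eps_n/d l)_T = -2 eps_n / l, and the per-molecule
# force on the box ends is rho = -sum_n P_n (d eps_n/d l) = 2 <eps> / l.
# Units are chosen so that kT = 1 and e1 = 0.001 kT -- a hypothetical toy
# scale at which the partition-function sum converges quickly.
kT = 1.0
e1 = 1.0e-3            # h^2/(8 m l^2) in units of kT
l = 1.0                # box length, arbitrary units
n_max = 500

boltz = [math.exp(-e1 * n**2 / kT) for n in range(1, n_max + 1)]
z = sum(boltz)
eps_avg = sum(e1 * n**2 * b for n, b in zip(range(1, n_max + 1), boltz)) / z

rho = 2.0 * eps_avg / l
print("<eps>/kT   =", eps_avg / kT)    # ~ 1/2 in the continuum limit
print("rho*l / kT =", rho * l / kT)    # ~ 1: the 1-D analogue of PV = NkT
```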
The first term relates the entropy to the energy-level probabilities. Since $dq^{rev}=TdS=N\sum^{\infty }_{i=1}{\epsilon_idP_i}$, we have $dS=\frac{N}{T}\sum^{\infty }_{i=1}{\epsilon_idP_i} \nonumber$
From the Boltzmann distribution function we have
$P_i=z^{-1}g_i\mathrm{exp}\left({-\epsilon_i}/{kT}\right)$, or
$\epsilon_i=-kT\ln P_i +kT\ln g_i -kT\ln z \nonumber$
Substituting into our expression for $dS$, we find
$dS=-Nk\sum^{\infty }_{i=1}{\left(\ln P_i \right)}dP_i+Nk\sum^{\infty }_{i=1}{\left(\ln g_i \right)}dP_i-Nk\left(\ln z \right)\sum^{\infty }_{i=1}{dP_i} \nonumber$
Since $\sum^{\infty }_{i=1}{P_i}=1$, we have $\sum^{\infty }_{i=1}{dP_i}=0$, and the last term vanishes. Also,
$\sum^{\infty }_{i=1}{d\left(P_i\ln P_i \right)}=\sum^{\infty }_{i=1}{\left(\ln P_i \right){dP}_i}+\sum^{\infty }_{i=1}{dP_i}=\sum^{\infty }_{i=1}{\left(\ln P_i \right){dP}_i} \nonumber$
so that
$dS=-Nk\sum^{\infty }_{i=1}{d\left(P_i\ln P_i \right)}+Nk\sum^{\infty }_{i=1}{\left(\ln g_i \right)}dP_i \nonumber$
At any temperature, the probability ratio for any two successive energy levels is
$\frac{P_{i+1}\left(T\right)}{P_i\left(T\right)}=\frac{P_{i+1}}{P_i}=\frac{g_{i+1}}{g_i}\mathrm{exp}\left(\frac{-\left(\epsilon_{i+1}-\epsilon_i\right)}{kT}\right) \nonumber$
In the limit as the temperature goes to zero,
$\frac{P_{i+1}}{P_i}\to 0 \nonumber$
It follows that $P_1\left(0\right)=1$ and $P_i\left(0\right)=0$ for $i>1$. Integrating from $T=0$ to $T$, the entropy of the system goes from $S\left(0\right)=S_0$ to $S\left(T\right)$, and the energy-level probabilities go from $P_i\left(0\right)$ to $P_i\left(T\right)$. We have
$\int^{S\left(T\right)}_{S_0}{dS}=-Nk\sum^{\infty }_{i=1}{\int^{P_i\left(T\right)}_{P_i\left(0\right)}{d\left(P_i\ln P_i \right)}}+Nk\sum^{\infty }_{i=1}{\int^{P_i\left(T\right)}_{P_i\left(0\right)}{\left(\ln g_i \right)dP_i}} \nonumber$
so that
$S\left(T\right)-S_0=-Nk\sum^{\infty }_{i=1}{P_i\left(T\right)}\ln P_i\left(T\right) +NkP_1\left(0\right)\ln P_1\left(0\right) +Nk\sum^{\infty }_{i=1}{\left(\ln g_i \right)P_i\left(T\right)}-Nk\left(\ln g_1 \right)P_1\left(0\right) \nonumber$
Since $P_1\left(0\right)=1$, $\ln P_1\left(0\right)$ vanishes. The entropy change becomes
$S\left(T\right)-S_0=-Nk\sum^{\infty }_{i=1}{P_i}\left[\ln P_i -\ln g_i \right]-Nk\ln g_1 =-Nk\sum^{\infty }_{i=1}{P_i}\ln \rho \left(\epsilon_i\right) -Nk\ln g_1 \nonumber$
We have $S_0=Nk\ln g_1$. If $g_1=1$, the lowest energy level is non-degenerate, and $S_0=0$; then we have
$S=-Nk\sum^{\infty }_{i=1}{P_i}\ln \rho \left(\epsilon_i\right) \nonumber$
This is the entropy of an $N$-molecule, constant-volume, constant-temperature system that is in thermal contact with its surroundings at the same temperature. We obtain this same result in Sections 20.10 and 20.14 by arguments in which we assume that the system is isolated. In all of these arguments, we assume that the constant-temperature system and its isolated counterpart are functionally equivalent; that is, a group of population sets that accounts for nearly all of the probability in one system also accounts for nearly all of the probability in the other.
Because we obtain this result by assuming that the system is composed of $N$ independent, non-interacting, distinguishable molecules, the entropy of this system is $N$ times the entropy contribution of an individual molecule. We can write
$S_{\mathrm{molecule}}=-k\sum^{\infty }_{i=1}{P_i}\ln \rho \left(\epsilon_i\right) \nonumber$
21.08: The Third Law of Thermodynamics
In Section 21.7, we obtain the entropy by a definite integration. We take the lower limits of integration, at $T=0$, as $P_1\left(0\right)=1$ and $P_i\left(0\right)=0$, for $i>1$. In doing so, we apply the third law of thermodynamics, which states that the entropy of a perfect crystal can be chosen to be zero when the temperature is at absolute zero. The idea behind the third law is that, at absolute zero, the molecules of a crystalline substance all are in the lowest energy level that is available to them. The probability that a molecule is in the lowest energy state is, therefore, $P_1=1$, and the probability that it is any higher energy level, $i>1$, is $P_i=0$.
While the fact is not relevant to the present development, we note in passing that the energy of a perfect crystal is not zero at absolute zero. While all of the constituent particles will be in their lowest vibrational energy levels at absolute zero, the energies of these lowest vibrational levels are not zero. In the harmonic oscillator approximation, the lowest energy possible for each oscillator is ${h\nu }/{2}$. (See Section 18.5).
By a perfect crystalline substance we mean one in which the lowest energy level is non-degenerate; that is, for which $g_1=1$. We see that our entropy equation conforms to the third law when we let
$S_0=Nk \ln g_1 \nonumber$
so that $S_0=0$ when $g_1=1$.
Let us consider a crystalline substance in which the lowest energy level is degenerate; that is, one for which $g_1>1$. This substance is not a perfect crystal. In this case, the temperature-zero entropy is
$S_0=Nk \ln g_1 >0 \nonumber$
The question arises: How can we determine whether a crystalline substance is a perfect crystal? In Chapter 11, we discuss the use of the third law to determine the absolute entropy of substances at ordinary temperatures. If we assume that the substance is a perfect crystal at zero degrees when it is not, our theory predicts a value for the absolute entropy at higher temperatures that is too small, because it does not include the term $S_0=Nk\ln g_1$. When we use this too-small absolute entropy value to calculate entropy changes for processes involving the substance, the results do not agree with experiment.
Absolute entropies based on the third law have been experimentally determined for many substances. As a rule, the resulting entropies are consistent with other experimentally observed entropy changes. In some cases, however, the assumption that the entropy is zero at absolute zero leads to absolute entropy values that are not consistent with other experiments. In these cases, the absolute entropies can be brought into agreement with other entropy measurements by assuming that, indeed, $g_1>1$ for such substances. In any particular case, the value of $g_1$ that must be used is readily reconciled with other information about the substance.
For example, the third law entropy for carbon monoxide must be calculated taking $g_1=2$ in order to obtain a value that is consistent with other entropy measurements. This observation is readily rationalized. In perfectly crystalline carbon monoxide, all of the carbon monoxide molecules point in the same direction, as sketched in Figure 11-2. However, the two ends of the carbon monoxide molecule are very similar, with the consequence that the carbon monoxide molecules in the crystal point randomly in either of two directions. Thus there are two (approximately) equally energetic states for a carbon monoxide molecule in a carbon monoxide crystal at absolute zero, and we can take $g_1=2$. (We are over-simplifying here. We explore this issue further in Section 22-7.)
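With $g_1=2$, the residual entropy for one mole is $S_0=\overline{N}k{ \ln g_1\ }=R{ \ln 2\ }$; a two-line calculation gives its numerical value.

```python
import math

R = 8.314462618          # gas constant, J K^-1 mol^-1
g1 = 2                   # two nearly equivalent orientations of CO in the crystal

S0_molar = R * math.log(g1)
print("S0 = R ln 2 =", round(S0_molar, 2), "J K^-1 mol^-1")   # ~ 5.76
```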
21.09: The Partition Function for a System of N Molecules
At a given temperature, the Boltzmann equation gives the probability of finding a molecule in any of the energy levels that the molecule can occupy. Throughout our development, we assume that there are no energies of interaction among the molecules of the system. The molecular partition function contains information about the energy levels of only one molecule. We obtain equations for the thermodynamic functions of an \(N\)-molecule system in terms of this molecular partition function. However, since these results are based on assigning the same isolated-molecule energy levels to each of the molecules, they do not address the real-system situation in which intermolecular interactions make important contributions to the total energy of the system.
As we mention in Sections 20.1 and 20.3, the ensemble theory of statistical thermodynamics extends our arguments to express the thermodynamic properties of a macroscopic system in terms of all of the total energies that are available to the macroscopic system. The molecular origins of the energies of the system enter the ensemble treatment only indirectly. The theory deals with the relationships between the possible values of the energy of the system and its thermodynamic state. How molecular energy levels and intermolecular interactions give rise to these values of the system energy becomes a separate issue. Fortunately, ensemble theory just reuses—from a different perspective—all of the ideas we have just studied. The result is just the Boltzmann equation, again, but now the energies that appear in the partition function are the possible energies for the collection of \(N\) molecules, not the energies available to a single molecule.
21.10: Problems
1. Consider a system with three non-degenerate quantum states having energies ${\epsilon }_1=0.9\ kT$, ${\epsilon }_2=1.0\ kT$, and ${\epsilon }_3=1.1\ kT$. The system contains $N=3\times {10}^{10}$ molecules. Calculate the partition function and the number of molecules in each quantum state when the system is at equilibrium. This is the equilibrium population set $\{N^{\textrm{⦁}}_1,N^{\textrm{⦁}}_2,N^{\textrm{⦁}}_3\}$. Let $W_{mp}$ be the number of microstates associated with the equilibrium population set. Consider the population set that results when a fraction ${10}^{-5}$ of the molecules in ${\epsilon }_2$ is moved to each of ${\epsilon }_1$ and ${\epsilon }_3$. This is the population set $\{N^{\textrm{⦁}}_1+{10}^{-5}N^{\textrm{⦁}}_2,\ \ \ N^{\textrm{⦁}}_2-2\times {10}^{-5}N^{\textrm{⦁}}_2,\ \ \ N^{\textrm{⦁}}_3+{10}^{-5}N^{\textrm{⦁}}_2\}$. Let $W$ be the number of microstates associated with this non-equilibrium population set.
(a) What percentage of the molecules are moved in converting the first population set into the second?
(b) How do the energies of these two populations sets differ from one another?
(c) Find ${W_{mp}}/{W}$. Use Stirling’s approximation and carry as many significant figures as your calculator will allow. You need at least six.
(d) What does this calculation demonstrate?
2. Find the approximate number of energy levels for which $\epsilon <kT$ for a molecule of molecular weight $40$ in a box of volume ${10}^{-6}\ {\mathrm{m}}^3$ at $300$ K.
3. The partition function plays a central role in relating the probability of finding a molecule in a particular quantum state to the energy of that state. The energy levels available to a particle in a one-dimensional box are
${\epsilon }_n=\frac{n^2h^2}{8m{\ell }^2} \nonumber$
where $m$ is the mass of the particle and $\ell$ is the length of the box. For molecular masses and boxes of macroscopic lengths, the factor ${h^2}/{8m{\ell }^2}$ is a very small number. Consequently, the energy levels available to a molecule in such a box can be considered to be effectively continuous in the quantum number, $n$. That is, the partition function sum can be closely approximated by an integral in which the variable of integration, $n$, runs from $0$ to $\infty$.
(a) Obtain a formula for the partition function of a particle in a one-dimensional box. Integral tables give $\int^{\infty }_0 \mathrm{exp} \left(-an^2\right) dn=\sqrt{\pi /4a} \nonumber$
(b) The expected value of the energy of a molecule is given by $\left\langle \epsilon \right\rangle =kT^2{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V \nonumber$
What is $\left\langle \epsilon \right\rangle$ for a particle in a box?
(c) The relationship between the partition function and the per-molecule Helmholtz free energy is $A=-kT{ \ln z\ }$. For a molecule in a one-dimensional box, we have $dA=-SdT-\rho \,d\ell$, where $\rho$ is the per-molecule “pressure” on the ends of the box and $\ell$ is the length of the box. (The increment of work associated with changing the length of the box is $dw=-\rho \ d\ell$. In this relationship, $d\ell$ is the incremental change in the length of the box and $\rho$ is the one-dimensional “pressure” contribution from each molecule. $\rho$ is, of course, just the force that the molecule exerts on the end of the box; $\rho \,d\ell$ is the one-dimensional analog of $PdV$.) For the one-dimensional system, it follows that $\rho =-{\left(\frac{\partial A}{\partial \ell }\right)}_T \nonumber$
Use this information to find $\rho$ for a molecule in a one-dimensional box.
(d) We can find $\rho$ for a molecule in a one-dimensional box in another way. The per-molecule contribution to the pressure of a three-dimensional system is related to the energy-level probabilities, $P_i$, by
$P^{\mathrm{system}}_{\mathrm{molecule}}=-\sum^{\infty }_{n=1}{P_n}{\left(\frac{\partial {\epsilon }_n}{\partial V}\right)}_T \nonumber$
By the same argument we use for the three-dimensional case, we find that the per-molecule contribution to the “pressure” inside a one-dimensional box is
$\rho =-\sum^{\infty }_{n=1}{P_n}{\left(\frac{\partial {\epsilon }_n}{\partial \ell }\right)}_T \nonumber$
From the equation for the energy levels of a particle in a one dimensional box, find an equation for
${\left(\frac{\partial {\epsilon }_n}{\partial \ell }\right)}_T \nonumber$
(Hint: We can express this derivative as a simple multiple of ${\epsilon }_n$.)
(e) Using your result from part (d), show that the per molecule contribution, $\rho$, to the “one-dimensional pressure” of $N$ molecules in a one-dimensional box is $\rho ={2\left\langle \epsilon \right\rangle }/{\ell } \nonumber$
(f) Use your results from parts (b) and (e) to express $\rho$ as a function of $k$, $T$, and $\ell$.
(g) Let $\mathrm{\Pi }$ be the pressure of a system of $N$ molecules in a one-dimensional box. From your result in part (c) or part (f), give an equation for $\mathrm{\Pi }$. Show how this equation is analogous to the ideal gas equation.
• 22.1: Interpreting the Partition Function
Only quantum states whose energy is less than kT can make substantial contributions to the magnitude of a partition function. Very approximately, we can say that the partition function is equal to the number of quantum states for which the energy is less than kT . Each such quantum state will contribute approximately one to the sum that comprises the partition function; the contribution of the corresponding energy level will be approximately equal to its degeneracy.
• 22.2: Conditions under which Integrals Approximate Partition Functions
A common approximation is to substitute integrals for sums. This section looks at the constraints that must be satisfied in order to make the integral a good approximation to the sum.
• 22.3: Probability Density Functions from the Energies of Classical-mechanical Models
We could postulate that similar probability density functions apply to other energies derived from classical-mechanical models for molecular motion. We will see that this can indeed be done. The results correspond to the results that we get from the Boltzmann equation, where we assume for both derivations that many energy levels satisfy ϵ≪kT. The point is that, at a sufficiently high temperature, the behavior predicted by the quantum mechanical model and that predicted from classical mechanics converge.
• 22.4: Partition Functions and Average Energies at High Temperatures
It is important to remember that the use of integrals to approximate Boltzmann-equation sums assumes that there are a large number of energy levels, ϵi , for which ϵi≪kT . If we select a high enough temperature, the energy levels for any motion will always satisfy this condition. The energy levels for translational motion satisfy this condition even at sub-ambient temperatures. This is the reason that Maxwell’s derivation of the probability density function for translational motion is successful.
• 22.5: Energy Levels for a Three-dimensional Harmonic Oscillator
One of the earliest applications of quantum mechanics was Einstein’s demonstration that the union of statistical mechanics and quantum mechanics explains the temperature variation of the heat capacities of solid materials. The physical model underlying Einstein’s development is that a monatomic solid consists of atoms vibrating about fixed points in a lattice. The particles of this solid are distinguishable from one another, because the location of each lattice point is uniquely specified.
• 22.6: Energy and Heat Capacity of the "Einstein Crystal"
• 22.7: Applications of Other Entropy Relationships
• 22.8: Problems
22: Some Basic Applications of Statistical Thermodynamics
When it is a good approximation to say that the energy of a molecule is the sum of translational, rotational, vibrational, and electronic components, we have
${\epsilon }_{i,j,k,m}={\epsilon }_{t,i}+{\epsilon }_{r,j}+{\epsilon }_{v,k}+{\epsilon }_{e,m} \nonumber$
where the indices $i$, $j$, $k$, and $m$ run over all possible translational, rotational, vibrational, and electronic quantum states, respectively. Then the partition function for the molecule can be expressed as a product of the individual partition functions $z_t$, $z_r$, $z_v$, and $z_e$; that is,
\begin{align*} z_{\mathrm{molecule}} &=\sum_t{\sum_r{\sum_v{\sum_e{g_{t,i}}}}}g_{r,j}g_{v,k}g_{e,m}\mathrm{exp}\left(\frac{-{\epsilon }_{i,j,k,m}}{kT}\right) \[4pt] &=\sum_t{g_{t,i}}\mathrm{exp}\left(\frac{-{\epsilon }_{t,i}}{kT}\right)\sum_r{g_{r,j}}\mathrm{exp}\left(\frac{-{\epsilon }_{r,j}}{kT}\right) \sum_v{g_{v,k}}\mathrm{exp}\left(\frac{-{\epsilon }_{v,k}}{kT}\right)\sum_e{g_{e,m}}\mathrm{exp}\left(\frac{-{\epsilon }_{e,m}}{kT}\right) \[4pt] &=z_tz_rz_vz_e \end{align*}
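The factorization can be verified numerically for a toy case. In the following sketch the two short energy lists are invented solely for illustration; the point is that summing Boltzmann factors over all combined states gives the same number as multiplying the separate partition functions:

```python
import math
from itertools import product

kT = 1.0  # work in units of kT

# Hypothetical, non-degenerate energy levels (in units of kT) for two
# independent kinds of motion; the argument extends to any number of factors.
eps_a = [0.0, 0.3, 0.9]
eps_b = [0.0, 1.5]

# Partition function summed over all combined states (i, j)
z_combined = sum(math.exp(-(ea + eb) / kT) for ea, eb in product(eps_a, eps_b))

# Product of the separate partition functions
z_a = sum(math.exp(-e / kT) for e in eps_a)
z_b = sum(math.exp(-e / kT) for e in eps_b)

print(z_combined, z_a * z_b)   # the two numbers agree
```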
The magnitude of an individual partition function depends on the magnitudes of the energy levels associated with that kind of motion. Table 1 gives the contributions made to their partition functions by levels that have various energy values.
Table 1: Contributions to the partition function from energy levels of various magnitudes

| $\epsilon_i$ | $-\epsilon_i/kT$ | $\mathrm{exp}\left(-\epsilon_i/kT\right)$ | Type of Motion |
|---|---|---|---|
| ${10}^{-2}\ kT$ | $-{10}^{-2}$ | $0.990$ | |
| ${10}^{-1}\ kT$ | $-{10}^{-1}$ | $0.905$ | |
| $kT$ | $-1$ | $0.368$ | translational |
| $5\ kT$ | $-5$ | $0.0067$ | rotational |
| $10\ kT$ | $-10$ | $4.5\times {10}^{-5}$ | vibrational |
| $100\ kT$ | $-100$ | $3.7\times {10}^{-44}$ | electronic |
We see that only quantum states whose energy is less than $kT$ can make substantial contributions to the magnitude of a partition function. Very approximately, we can say that the partition function is equal to the number of quantum states for which the energy is less than $kT$. Each such quantum state will contribute approximately one to the sum that comprises the partition function; the contribution of the corresponding energy level will be approximately equal to its degeneracy. If the energy of a quantum state is large compared to $kT$, the fraction of molecules occupying that quantum state will be small. This idea is often expressed by saying that such states are “unavailable” to the molecule. It is then said that the value of the partition function is approximately equal to the number of available quantum states. When most energy levels are non-degenerate, we can also say that the value of the partition function is approximately equal to the number of available energy levels.
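The entries in Table 1 are simply values of $\mathrm{exp}\left(-\epsilon_i/kT\right)$; a one-line loop reproduces them (a sketch, with the energies expressed in units of $kT$):

```python
import math

# Boltzmann factors for the energy levels listed in Table 1, in units of kT
for x in (1e-2, 1e-1, 1, 5, 10, 100):
    print(f"epsilon = {x:>6} kT   exp(-epsilon/kT) = {math.exp(-x):.3g}")
```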
22.02: Conditions under which Integrals Approximate Partition Functions
The Boltzmann equation gives the equilibrium fraction of particles in the $i^{th}$ energy level, $\epsilon_i$, as
$\frac{N^{\textrm{⦁}}_i}{N}=\frac{g_i}{z}\mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right) \nonumber$
so the fraction of particles in energy levels less than $\epsilon_n$ is
$f\left(\epsilon_n\right)=z^{-1}\sum^{n-1}_{i=1}{g_i}\mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right) \nonumber$
where $z=\sum^{\infty }_{i=1}{g_i}\mathrm{exp}\left({-\epsilon_i}/{kT}\right)$. We can represent either of these sums as the area under a bar graph, where the height and width of each bar are $g_i\mathrm{exp}\left({-\epsilon_i}/{kT}\right)$ and unity, respectively. If $g_i$ and $\epsilon_i$ can be approximated as continuous functions, this area can be approximated as the area under the continuous function $y\left(i\right)=g_i\mathrm{exp}\left({-\epsilon_i}/{kT}\right)$. That is,
$\sum^{n-1}_{i=1}g_i\mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right)\approx \int^n_{i=0}{g_i\mathrm{exp}\left(\frac{-\epsilon_i}{kT}\right)}di \nonumber$
To evaluate this integral, we must know how both $g_i$ and $\epsilon_i$ depend on the quantum number, $i$.
Let us consider the case in which $g_i=1$ and look at the constraints that the $\epsilon_i$ must satisfy in order to make the integral a good approximation to the sum. The graphical description of this case is sketched in Figure 1. Since $\epsilon_i>\epsilon_{i-1}>0$, we have
$e^{-\epsilon_{i-1}/kT}-e^{-\epsilon_i/kT}>0 \nonumber$
For the integral to be a good approximation, we must have
$e^{-\epsilon_{i-1}/kT}\gg e^{-\epsilon_{i-1}/kT}-e^{-\epsilon_i/kT}>0, \nonumber$
which means that
$1\gg 1-e^{-\Delta \epsilon /kT}>0 \nonumber$
where $\Delta \epsilon =\epsilon_i-\epsilon_{i-1}$. Now,
$e^x\approx 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\dots \nonumber$
so that the approximation will be good if
$1\gg 1-\left(1-\frac{\Delta \epsilon }{kT}+\dots \right) \nonumber$ or $1\gg \frac{\Delta \epsilon }{kT} \nonumber$ or $kT\gg \Delta \epsilon \nonumber$
We can be confident that the integral is a good approximation to the exact sum whenever there are many pairs of energy levels, $\epsilon_i$ and $\epsilon_{i-1}$, that satisfy the condition
$\Delta \epsilon =\epsilon_i-\epsilon_{i-1}\ll kT. \nonumber$
If there are many energy levels that satisfy $\epsilon_i\ll kT$, there are necessarily many intervals, $\Delta \epsilon$, that satisfy $\Delta \epsilon \ll kT$. In short, if a large number of the energy levels of a system satisfy the criterion $\epsilon \ll kT$, we can use integration to approximate the sums that appear in the Boltzmann equation. In Section 24.3, we use this approach and the energy levels for a particle in a box to find the partition function for an ideal gas.
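As a numerical illustration of this criterion, the sketch below uses assumed, representative values (a molecule of molecular weight 40 in a one-dimensional box 1 cm long at 300 K, the sort of case treated in the problems) and shows that the spacing between adjacent translational levels near the thermally typical quantum number is many orders of magnitude smaller than $kT$:

```python
import math

h = 6.62607015e-34     # Planck constant, J s
k = 1.380649e-23       # Boltzmann constant, J/K
u = 1.66053907e-27     # atomic mass unit, kg

# Assumed, representative parameters (illustrative only)
m, L, T = 40 * u, 1.0e-2, 300.0
a  = h**2 / (8 * m * L**2)     # epsilon_n = a * n^2 for a particle in a box
kT = k * T

n_typical = math.sqrt(kT / a)          # quantum number where epsilon_n ~ kT
delta_eps = a * (2 * n_typical + 1)    # spacing epsilon_(n+1) - epsilon_n near there

print(f"n_typical      ~ {n_typical:.2e}")
print(f"delta_eps / kT ~ {delta_eps / kT:.2e}")   # ~1e-9: delta_eps << kT
```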
22.03: Probability Density Functions from the Energies of Classical-mechanical Models
Guided by our development of the Maxwell-Boltzmann probability density function for molecular velocities, we could postulate that similar probability density functions apply to other energies derived from classical-mechanical models for molecular motion. We will see that this can indeed be done. The results correspond to the results that we get from the Boltzmann equation, where we assume for both derivations that many energy levels satisfy $\epsilon \ll kT$. The essential point is that, at a sufficiently high temperature, the behavior predicted by the quantum mechanical model and that predicted from classical mechanics converge. This high-temperature approximation is a good one for translational motions but a very poor one for vibrational motions. These results further illuminate the differences between the classical-mechanical and the quantum-mechanical models for the behavior of molecules.
Let us look at how we can generate probability density functions based on the energies of classical-mechanical models for molecular motions. In the classical mechanical model, a particle moving in one dimension with velocity $v$ has kinetic energy ${mv^2}/{2}$. From the discussion above, if many velocities satisfy $kT\gg {mv^2}/{2}$, we can postulate a probability density function of the form
$\frac{df}{dv}=B_{\mathrm{trans}}\mathrm{\ exp}\left(\frac{-mv^2}{2kT}\right) \nonumber$
where $B_{\mathrm{trans}}$ is fixed by the condition
$\int^{\infty }_{-\infty }{\left(\frac{df}{dv}\right)dv}=B_{\mathrm{trans}}\int^{\infty }_{-\infty }{\mathrm{exp}\left(\frac{-mv^2}{2kT}\right)}dv=1 \nonumber$
Evidently, this postulate assumes that each velocity constitutes a quantum state and that the degeneracy is the same for all velocities. This assumption is successful for one-dimensional translation, but not for translational motion in two or three dimensions. The definite integral is given in Appendix D. We find
$B_{\mathrm{trans}}=\left(\frac{m}{2\pi kT}\right)^{1/2} \nonumber$ and
$\frac{df}{dv}={\left(\frac{m}{2\pi kT}\right)}^{1/2}\mathrm{exp}\left(\frac{-mv^2}{2kT}\right) \nonumber$
With $m/kT=\lambda$, this is the same as the result that we obtain in Section 4.4. With $B_{\mathrm{trans}}$ in hand, we can calculate the average energy associated with the motion of a gas molecule in one dimension
$\left\langle \epsilon \right\rangle =\int^{\infty}_{-\infty}{\left(\frac{mv^2}{2}\right)\left(\frac{df}{dv}\right)dv}={\left(\frac{m^3}{8\pi kT}\right)}^{1/2}\int^{\infty }_{-\infty }{v^2\mathrm{exp}\left(\frac{-mv^2}{2kT}\right)}dv \nonumber$
This definite integral is also given in Appendix D. We find $\left\langle \epsilon_{\mathrm{trans}}\right\rangle =\frac{kT}{2} \nonumber$
We see that we can obtain the average kinetic energy for one degree of translational motion by a simple argument that uses classical-mechanical energies in the Boltzmann equation. We can make the same argument for each of the other two degrees of translational motion. We conclude that each degree of translational freedom contributes ${kT}/{2}$ to the average energy of a gas molecule. For three degrees of translational freedom, the total contribution is ${3kT}/{2}$, which is the result that we first obtained in Section 2.10.
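This result is easy to confirm numerically. The sketch below integrates $\left({mv^2}/{2}\right)\left({df}/{dv}\right)$ over the normalized one-dimensional velocity distribution; the mass and temperature are arbitrary illustrative choices:

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
m = 6.6e-26               # arbitrary molecular mass, kg (about 40 u)
T = 300.0                 # arbitrary temperature, K
kT = k * T

B = math.sqrt(m / (2 * math.pi * kT))          # normalization constant B_trans

def f(v):                                      # df/dv, the 1-D velocity density
    return B * math.exp(-m * v * v / (2 * kT))

# Crude midpoint quadrature over a range wide enough to capture the density
v_max = 10 * math.sqrt(kT / m)
n_pts = 200_000
dv = 2 * v_max / n_pts

avg_eps = 0.0
for i in range(n_pts):
    v = -v_max + (i + 0.5) * dv
    avg_eps += 0.5 * m * v * v * f(v) * dv     # (m v^2 / 2) (df/dv) dv

print(avg_eps / kT)        # ~0.5, i.e. <epsilon> = kT/2
```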
Now let us consider a classical-mechanical model for a rigid molecule rotating in a plane. The classical kinetic energy is $\epsilon_{\mathrm{rot}}={I{\omega }^2}/{2}$, where $I$ is the molecule’s moment of inertia about the axis of rotation, and $\omega$ is the angular rotation rate. This has the same form as the translational kinetic energy, so if we assume $kT\gg {I{\omega }^2}/{2}$ and a probability density function of the form
$\frac{df}{d\omega }=B_{\mathrm{rot\ }}\mathrm{exp}\left(\frac{-I{\omega }^2}{2kT}\right) \nonumber$
finding $B_{\mathrm{rot\ }}$ and $\left\langle \epsilon_{\mathrm{rot\ }}\right\rangle$ follows exactly as before, and the average rotational kinetic energy is
$\left\langle \epsilon_{\mathrm{rot}}\right\rangle ={kT}/{2} \nonumber$
for a molecule with one degree of rotational freedom.
For a classical harmonic oscillator, the vibrational energy has both kinetic and potential energy components. They are ${mv^2}/{2}$ and ${\lambda x^2}/{2}$, where $v$ is the oscillator’s instantaneous velocity, $x$ is its instantaneous displacement, and $\lambda$ is the force constant. (We write the force constant as $\lambda$, as in Section 22.5, to avoid confusion with Boltzmann’s constant, $k$.) Both of these have the same form as the translational kinetic energy. If we can assume that $kT\gg {mv^2}/{2}$, that $kT\gg {\lambda x^2}/{2}$, and that the probability density functions are

$\frac{df}{dv}=B^{\mathrm{kinetic}}_{\mathrm{vib}}\ \mathrm{exp}\left(\frac{-mv^2}{2kT}\right) \nonumber$ and $\frac{df}{dx}=B^{\mathrm{potential}}_{\mathrm{vib}}\mathrm{exp}\left(\frac{-\lambda x^2}{2kT}\right) \nonumber$
the same arguments show that the average kinetic energy and the average potential energy are both ${kT}/{2}$:
$\left\langle \epsilon^{\mathrm{kinetic}}_{\mathrm{vib}}\right\rangle ={kT}/{2} \nonumber$ and $\left\langle \epsilon^{\mathrm{potential}}_{\mathrm{vib}}\right\rangle ={kT}/{2} \nonumber$
so that the average total vibrational energy is
$\left\langle \epsilon^{\mathrm{total}}_{\mathrm{vib}}\right\rangle =kT \nonumber$
In summary, because the energy for translational motion in one dimension, the energy for rotational motion about one axis, the vibrational kinetic energy in one dimension, and the vibrational potential energy in one dimension all have the same form ($\epsilon =Xu^2$), each of these modes can contribute ${kT}/{2}$ to the average energy of a molecule. For translation and rotation, the total is ${kT}/{2}$ for each degree of translational or rotational freedom. For vibration, because there is both a kinetic and a potential energy contribution, the total is $kT$ per degree of vibrational freedom.
Let us illustrate this for the particular case of a non-linear, triatomic molecule. From our discussion in Section 18.4, we see that there are three degrees of translational freedom, three degrees of rotational freedom, and three degrees of vibrational freedom. The contributions to the average molecular energy are
• $3\left({kT}/{2}\right)$ from translation
• $+3\left({kT}/{2}\right)$ from rotation
• $+3kT$ from vibration
• $=6kT$ in total
Since the heat capacity is
$C_V={\left(\frac{\partial \epsilon }{\partial T}\right)}_V \nonumber$

each translational degree of freedom can contribute ${k}/{2}$ to the heat capacity. Each rotational degree of freedom can also contribute ${k}/{2}$ to the heat capacity. Each vibrational degree of freedom can contribute $k$ to the heat capacity. It is important to remember that these results represent upper limits for real molecules. These limits are realized at high temperatures, or more precisely, at temperatures where many energy levels, $\epsilon_i$, satisfy $\epsilon_i\ll kT$.
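For the non-linear triatomic example above, the corresponding upper limit on the molar heat capacity follows directly (a sketch; per-mole values use the gas constant $R=\overline{N}k$):

```python
R = 8.314462618          # gas constant, J K^-1 mol^-1

n_trans, n_rot, n_vib = 3, 3, 3          # degrees of freedom, non-linear triatomic

# Equipartition (high-temperature) limit: k/2 per translational or rotational
# degree of freedom, k per vibrational degree of freedom; per mole this becomes R.
Cv_limit = (n_trans * 0.5 + n_rot * 0.5 + n_vib * 1.0) * R

print(f"C_V (upper limit) = 6R = {Cv_limit:.1f} J K^-1 mol^-1")   # about 49.9
```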
22.04: Partition Functions and Average Energies at High Temperatures
It is enlightening to find the integral approximations to the partition functions and average energies for our simple quantum-mechanical models of translational, rotational, and vibrational motions. In doing so, however, it is important to remember that the use of integrals to approximate Boltzmann-equation sums assumes that there are a large number of energy levels, $\epsilon_i$, for which $\epsilon_i\ll kT$. If we select a high enough temperature, the energy levels for any motion will always satisfy this condition. The energy levels for translational motion satisfy this condition even at sub-ambient temperatures. This is the reason that Maxwell’s derivation of the probability density function for translational motion is successful.
Rotational motion is an intermediate case. At sub-ambient temperatures, the classical-mechanical derivation can be inadequate; at ordinary temperatures, it is a good approximation. This can be seen by comparing the classical-theory prediction to experimental values for diatomic molecules. For diatomic molecules, the classical model predicts a constant-volume heat capacity of ${5k}/{2}$ from $3$ degrees of translational and $2$ degrees of rotational freedom. Since this does not include the contributions from vibrational motions, constant-volume heat capacities for diatomic molecules must be greater than ${5k}/{2}$ if both the translational and rotational contributions are accounted for by the classical model. For diatomic molecules at $298$ K, the experimental values are indeed somewhat larger than ${5k}/{2}$. (Hydrogen is an exception; its value is $2.47\ k$.)
Vibrational energies are usually so large that only a minor fraction of the molecules can be in higher vibrational levels at ordinary temperatures. If we try to increase the temperature enough to make the high-temperature approximation describe vibrational motions, most molecules decompose. Electronic energies are larger still, so electronic partition functions must likewise be evaluated directly from the defining sum.
The high-temperature limiting average energies can also be calculated from the Boltzmann equation and the appropriate quantum-mechanical energies. Recall that we find the following quantum-mechanical energies for simple models of translational, rotational, and vibrational motions:
Translation
$\epsilon^{\left(n\right)}_{\mathrm{trans}}=\frac{n^2h^2}{8m{\ell }^2} \nonumber$
($\mathrm{n\ =\ 1,\ 2,\ 3,\dots .}$ Derived for a particle in a box)
Rotation
$\epsilon^{\left(m\right)}_{\mathrm{rot}}=\frac{m^2h^2}{8{\pi }^2I} \nonumber$ ($\mathrm{m\ =\ 1,\ 2,\ 3,\ \dots .}$ Derived for rotation about one axis—each energy level is doubly degenerate)
Vibration
$\epsilon^{\left(n\right)}_{\mathrm{vibration}}=h\nu \left(n+\frac{1}{2}\right) \nonumber$ ($\mathrm{n\ =\ 0,\ 1,\ 2,\ 3,\dots .}$ Derived for simple harmonic motion in one dimension)
When we assume that the temperature is so high that many $\epsilon_i$ are small compared to $kT$, we find the following high-temperature limiting partition functions for these motions:
$z_{\mathrm{translation}}=\sum^{\infty }_{n=1}{\mathrm{exp}}\left(\frac{-n^2h^2}{8m{\ell }^2kT}\right)\approx \int^{\infty }_0{\mathrm{exp}}\left(\frac{-n^2h^2}{8m{\ell }^2kT}\right)dn={\left(\frac{2\pi mkT{\ell }^2}{h^2}\right)}^{1/2} \nonumber$
$z_{\mathrm{rotation}}=\sum^{\infty }_{m=1}{\mathrm{2\ exp}}\left(\frac{-m^2h^2}{8{\pi }^2IkT}\right)\approx 2\int^{\infty }_0{\mathrm{exp}}\left(\frac{-m^2h^2}{8{\pi }^2IkT}\right)dm={\left(\frac{8{\pi }^3IkT}{h^2}\right)}^{1/2} \nonumber$

$z_{\mathrm{vibration}}=\sum^{\infty }_{n=0}{\mathrm{exp}}\left(\frac{-h\nu }{kT}\left(n+\frac{1}{2}\right)\right)\approx \int^{\infty }_0{\mathrm{exp}}\left(\frac{-h\nu }{kT}\left(n+\frac{1}{2}\right)\right)dn=\frac{kT}{h\nu }\mathrm{exp}\ \left(\frac{-h\nu }{2kT}\right) \nonumber$
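Putting numbers into these limiting expressions shows why they succeed for translation, are reasonable for rotation, and fail for vibration at ordinary temperatures. The molecular parameters below are assumed, order-of-magnitude values chosen only for illustration:

```python
import math

h = 6.62607015e-34     # Planck constant, J s
k = 1.380649e-23       # Boltzmann constant, J/K
u = 1.66053907e-27     # atomic mass unit, kg

T  = 300.0
kT = k * T

# Assumed, order-of-magnitude parameters (illustrative only)
m  = 40 * u            # molecular mass
L  = 1.0e-2            # box length, m (one dimension)
I  = 1.5e-46           # moment of inertia, kg m^2 (typical small diatomic)
nu = 5.0e13            # vibrational frequency, s^-1

z_trans = math.sqrt(2 * math.pi * m * kT * L**2 / h**2)
z_rot   = math.sqrt(8 * math.pi**3 * I * kT / h**2)

print(f"z_translation ~ {z_trans:.2e}")   # ~1e9: enormously many levels below kT
print(f"z_rotation    ~ {z_rot:.1f}")     # ~20: many levels below kT, reasonable
print(f"kT/(h nu)     ~ {kT/(h*nu):.2f}") # < 1: high-T form fails for vibration at 300 K
```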
We can then calculate the average energy for each mode as
$\left\langle \epsilon \right\rangle =z^{-1}\int^{\infty }_0{\epsilon_n}{\mathrm{exp} \left(\frac{-\epsilon_n}{kT}\right)\ }dn \nonumber$
and find
\begin{align*} \left\langle \epsilon_{\mathrm{translation}}\right\rangle &=z^{-1}_{\mathrm{translation}}\int^{\infty }_0{\left(\frac{n^2h^2}{8m{\ell }^2}\right)\mathrm{\ exp}}\left(\frac{-n^2h^2}{8m{\ell }^2kT}\right)dn \[4pt] &=\frac{kT}{2} \[4pt] \left\langle \epsilon_{\mathrm{rotation}}\right\rangle &=z^{-1}_{\mathrm{rotation}}\int^{\infty }_0{2\left(\frac{m^2h^2}{8{\pi }^2I}\right)\mathrm{\ exp}}\left(\frac{-m^2h^2}{8{\pi }^2IkT}\right)dm \[4pt] &=\frac{kT}{2} \[4pt] \left\langle \epsilon_{\mathrm{vibration}}\right\rangle &=z^{-1}_{\mathrm{vibration}} \times \int^{\infty }_0{h\nu \left(n+\frac{1}{2}\right) \mathrm{\ exp}}\left(\frac{-h\nu }{kT}\left(n+\frac{1}{2}\right)\right)dn \[4pt] &=kT+\frac{h\nu }{2} \[4pt] &\approx kT \end{align*}
where the last approximation assumes that ${h\nu }/{2}\ll kT$. In the limit as $T\to 0$, the average energy of the vibrational mode becomes just ${h\nu }/{2}$. This is just the energy of the lowest vibrational state, implying that all of the molecules are in the lowest vibrational energy level at absolute zero.
22.05: Energy Levels for a Three-dimensional Harmonic Oscillator
One of the earliest applications of quantum mechanics was Einstein’s demonstration that the union of statistical mechanics and quantum mechanics explains the temperature variation of the heat capacities of solid materials. In Section 7.14, we note that the heat capacities of solid materials approach zero as the temperature approaches absolute zero. We also review the law of Dulong and Petit, which describes the limiting heat capacity of many solid elements at high (ambient) temperatures. The Einstein model accounts for both of these observations.
The physical model underlying Einstein’s development is that a monatomic solid consists of atoms vibrating about fixed points in a lattice. The particles of this solid are distinguishable from one another, because the location of each lattice point is uniquely specified. We suppose that the vibration of any one atom is independent of the vibrations of the other atoms in the lattice. We assume that the vibration results from a Hooke’s Law restoring force
$\mathop{F}\limits^{\rightharpoonup}=-\lambda \mathop{r}\limits^{\rightharpoonup}=-\lambda \left(x\mathop{\ i}\limits^{\rightharpoonup}+y\ \mathop{j}\limits^{\rightharpoonup}+z\ \mathop{k}\limits^{\rightharpoonup}\right) \nonumber$
that is zero when the atom is at its lattice point, for which $\mathop{r}\limits^{\rightharpoonup}=\left(0,0,0\right)$. The potential energy change when the atom, of mass $m$, is driven from its lattice point to the point $\mathop{r}\limits^{\rightharpoonup}=\left(x,y,z\right)$ is
$V=\int^{\mathop{r}\limits^{\rightharpoonup}}_{\mathop{r}\limits^{\rightharpoonup}=\mathop{0}\limits^{\rightharpoonup}}{-\mathop{F}\limits^{\rightharpoonup}\bullet d\mathop{r}\limits^{\rightharpoonup}}=\lambda \int^x_{x=0}{xdx}+\lambda \int^y_{y=0}{ydy}+\lambda \int^z_{z=0}{zdz}=\lambda \frac{x^2}{2}+\lambda \frac{y^2}{2}+\lambda \frac{z^2}{2} \nonumber$
The Schrödinger equation for this motion is
$-\frac{h^2}{8{\pi }^2m}\left[\frac{{\partial }^2\psi }{\partial x^2}+\frac{{\partial }^2\psi }{\partial y^2}+\frac{{\partial }^2\psi }{\partial z^2}\right]+\lambda \left[\frac{x^2}{2}+\frac{y^2}{2}+\frac{z^2}{2}\right]\psi =\epsilon \psi \nonumber$
where $\psi$ is a function of the three displacement coordinates; that is $\psi =\psi \left(x,y,z\right)$. We assume that motions in the $x$-, $y$-, and $z$-directions are completely independent of one another. When we do so, it turns out that we can express the three-dimensional Schrödinger equation as the sum of three one-dimensional Schrödinger equations
$\ \ \ \left[-\frac{h^2}{8{\pi }^2m}\frac{{\partial }^2{\psi }_x}{\partial x^2}+\lambda \frac{x^2{\psi }_x}{2}\right] \nonumber$ $+\left[-\frac{h^2}{8{\pi }^2m}\frac{{\partial }^2{\psi }_y}{\partial y^2}+\lambda \frac{y^2{\psi }_y}{2}\right] \nonumber$ $+\left[-\frac{h^2}{8{\pi }^2m}\frac{{\partial }^2{\psi }_z}{\partial z^2}+\lambda \frac{z^2{\psi }_z}{2}\right] \nonumber$ $={\epsilon }_x\,{\psi }_x+{\epsilon }_y\,{\psi }_y+{\epsilon }_z\,{\psi }_z \nonumber$
where any wavefunction ${\psi }^{\left(n\right)}_x$ is the same function as ${\psi }^{\left(n\right)}_y$ and ${\psi }^{\left(n\right)}_z$, and the corresponding energies ${\epsilon }^{\left(n\right)}_x$, ${\epsilon }^{\left(n\right)}_y$, and ${\epsilon }^{\left(n\right)}_z$ have the same values. The energy of the three-dimensional atomic motion is simply the sum of the energies for the three one-dimensional motions. That is,
${\epsilon }_{n,m,p}={\epsilon }^{\left(n\right)}_x+{\epsilon }^{\left(m\right)}_y+{\epsilon }^{\left(p\right)}_z, \nonumber$
which, for simplicity, we also write as
${\epsilon }_{n,m,p}={\epsilon }_n+{\epsilon }_m+{\epsilon }_p. \nonumber$
22.06: Energy and Heat Capacity of the "Einstein Crystal"
In Section 22.4, we find an approximate partition function for the harmonic oscillator at high temperatures. Because it is a geometric series, the partition function for the harmonic oscillator can also be obtained exactly at any temperature. By definition, the partition function for the harmonic oscillator is
$z=\sum^{\infty }_{n=0}{\mathrm{exp}}\left(\frac{-h\nu }{kT}\left(n+\frac{1}{2}\right)\right)=\mathrm{exp}\left(\frac{-h\nu }{2kT}\right)\sum^{\infty }_{n=0}{\mathrm{exp}}\left(\frac{-nh\nu }{kT}\right)=\mathrm{exp}\left(\frac{-h\nu }{2kT}\right)\sum^{\infty }_{n=0}{{\left[{\mathrm{exp} \left(\frac{-h\nu }{kT}\right)\ }\right]}^n} \nonumber$
This is just the infinite sum
$z=a\sum^{\infty }_{n=0}{r^n}=\frac{a}{1-r} \nonumber$ with $a=\mathrm{exp}\left(\frac{-h\nu }{2kT}\right) \nonumber$ and $r={\mathrm{exp} \left(\frac{-h\nu }{kT}\right)\ } \nonumber$
Hence, the exact partition function for the one-dimensional harmonic oscillator is
$z=\frac{\mathrm{exp} \left(-h\nu /2kT\right)}{1- \mathrm{exp} \left(-h\nu /kT\right)} \nonumber$
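It is instructive to compare this exact result with the high-temperature form $\left({kT}/{h\nu }\right)\mathrm{exp}\left(-h\nu /2kT\right)$ obtained in Section 22.4. The sketch below does so for a few arbitrary values of $kT/h\nu$:

```python
import math

def z_exact(x):        # x = h*nu/(k*T); exact geometric-series result
    return math.exp(-x / 2) / (1 - math.exp(-x))

def z_high_T(x):       # integral (high-temperature) approximation
    return math.exp(-x / 2) / x

for kT_over_hnu in (0.1, 1.0, 10.0, 100.0):
    x = 1.0 / kT_over_hnu
    print(f"kT/hnu = {kT_over_hnu:>6}:  exact = {z_exact(x):.4g},  "
          f"high-T approx = {z_high_T(x):.4g}")
```

The two forms agree when $kT\gg h\nu$ and diverge badly when $kT\ll h\nu$, as expected.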
The partition function for vibration in each of the other two dimensions is the same. To get the partition function for oscillation in all three dimensions, we must sum over all possible combinations of the three energies. Distinguishing the energies associated with motion in the $x$-, $y$-, and $z$-directions by the subscripts $n$, $m$, and $p$, respectively, we have for the three-dimensional harmonic oscillator:
\begin{aligned} z_{3D} & =\sum^{\infty }_{p=0} \sum^{\infty }_{m=0} \sum^{\infty }_{n=0} \mathrm{exp} \left[\frac{-\left({\epsilon }_n+{\epsilon }_m+{\epsilon }_p\right)}{kT}\right] \\[4pt] & =\sum^{\infty }_{p=0} \mathrm{exp}\left(\frac{-{\epsilon }_p}{kT}\right) \sum^{\infty }_{m=0} \mathrm{exp}\left(\frac{-{\epsilon }_m}{kT}\right) \sum^{\infty }_{n=0} \mathrm{exp}\left(\frac{-{\epsilon }_n}{kT}\right) \\[4pt] & =z^3 \end{aligned} \nonumber
Hence,
$z_{3D}=\left[\frac{\mathrm{exp} \left(-h\nu /2kT\right)}{1- \mathrm{exp} \left(-h\nu /kT\right)} \right]^3 \nonumber$
and the energy of a crystal of $N$, independent, distinguishable atoms is
\begin{aligned} E & =N\left\langle \epsilon \right\rangle \\[4pt] & =NkT^2 \left(\frac{\partial \ln z_{3D}}{ \partial T} \right)_V \\[4pt] & =\frac{3Nh\nu }{2}+\frac{3Nh\nu \,\mathrm{exp} \left(-h\nu /kT\right)}{1-\mathrm{exp} \left(-h\nu /kT\right)} \end{aligned} \nonumber
Taking the partial derivative with respect to temperature gives the heat capacity of this crystal. The molar heat capacity can be expressed in two ways that are useful for our purposes:
\begin{aligned} C_V & =\left(\frac{\partial \overline{E}}{\partial T}\right)_V \\[4pt] & =3\overline{N}k \left(\frac{h\nu }{kT}\right)^2\left[\frac{\mathrm{exp} \left(-h\nu /kT\right)}{\left(1-\mathrm{exp} \left(-h\nu /kT\right) \right)^2}\right] \\[4pt] & =3\overline{N}k \left(\frac{h\nu }{kT}\right)^2\left[\frac{\mathrm{exp} \left(h\nu /kT\right)}{\left(\mathrm{exp} \left(h\nu /kT\right)-1\right)^2}\right] \end{aligned} \nonumber
Consider the heat capacity at high temperatures. As the temperature becomes large, $h\nu /kT$ approaches zero. Then
$\mathrm{exp} \left(\frac{h\nu }{kT}\right) \approx 1+\frac{h\nu }{kT} \nonumber$
Using this approximation in the second representation of $C_V$ gives for the high temperature limit
\begin{aligned} C_V & \approx 3\overline{N}k \left(\frac{h\nu }{kT}\right)^2\left[\frac{1+h\nu /kT}{\left(1+{h\nu }/{kT}-1\right)^2}\right] \\[4pt] & \approx 3\overline{N}k\left(1+\frac{h\nu }{kT}\right) \\[4pt] & \approx 3\overline{N}k=3R \end{aligned} \nonumber
Since $C_V$ and $C_P$ are about the same for solids at ordinary temperatures, this result is essentially equivalent to the law stated by Dulong and Petit. Indeed, it suggests that the law would be more accurate if stated as a condition on $C_V$ rather than $C_P$, and this proves to be the case.
At low temperatures, $h\nu /kT$ becomes arbitrarily large and $\mathrm{exp} \left(-h\nu /kT\right)$ approaches zero. From the first representation of $C_V,$ we see that
$\mathop{\mathrm{lim}}_{T\to 0} \left(\frac{\partial \overline{E}}{\partial T}\right)_V =C_V=0 \nonumber$
In Section 10.9, we see that $C_P-C_V\to 0$ as $T\to 0$. Hence, the theory also predicts that $C_P\to 0$ as $T\to 0$, in agreement with experimental results.
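Both limiting behaviors can be seen by evaluating the Einstein heat-capacity expression at a few values of $h\nu /kT$ (a sketch; the molar form uses $\overline{N}k=R$):

```python
import math

R = 8.314462618     # gas constant, J K^-1 mol^-1

def einstein_Cv(x):
    """Molar Einstein heat capacity; x = h*nu/(k*T)."""
    return 3 * R * x**2 * math.exp(-x) / (1 - math.exp(-x))**2

for x in (0.01, 0.1, 1.0, 5.0, 20.0):
    print(f"h nu/kT = {x:>5}:  C_V = {einstein_Cv(x):8.3f} J K^-1 mol^-1  "
          f"({einstein_Cv(x) / (3 * R):.3f} of 3R)")
```

Small $h\nu /kT$ (high temperature) gives $C_V\approx 3R$, the Dulong and Petit value; large $h\nu /kT$ (low temperature) drives $C_V$ toward zero.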
The Einstein model assumes that energy variations in a solid near absolute zero are entirely due to variations in the vibrational energy. From the assumption that all of these vibrational motions are characterized by a single frequency, it predicts the limiting values for the heat capacity of a solid at high and low temperatures. At intermediate temperatures, the quantitative predictions of the Einstein model leave room for improvement. An important refinement developed by Peter Debye assumes a spectrum of vibrational frequencies and results in excellent quantitative agreement with experimental values at all temperatures.
We can give a simple qualitative interpretation for the result that heat capacities decrease to zero as the temperature goes to absolute zero. The basic idea is that, at a sufficiently low temperature, essentially all of the molecules in the system are in the lowest available energy level. Once essentially all of the molecules are in the lowest energy level, the energy of the system can no longer decrease in response to a further temperature decrease. Therefore, in this temperature range, the heat capacity is essentially zero. Alternatively, we can say that as the temperature approaches zero, the fraction of the molecules that are in the lowest energy level approaches one, and the energy of the system of $N$ molecules approaches the smallest value it can have.
The weakness in this qualitative view is that there is always a non-zero probability of finding molecules in a higher energy level, and this probability changes as the temperature changes. To firm up the simple picture, we need a way to show that the energy decreases more rapidly than the temperature near absolute zero. More precisely, we need a way to show that
${\mathop{\mathrm{lim}}_{T\to 0} {\left(\frac{\partial \overline{E}}{\partial T}\right)}_V\ }=C_V=0 \nonumber$
Since the Einstein model produces this result, it constitutes a quantitative validation of our qualitative model.
22.07: Applications of Other Entropy Relationships
In most cases, calculation of the entropy from information about the energy levels of a system is best accomplished using the partition function. Occasionally other entropy relationships are useful. We illustrate this by using the entropy relationship
$S=-Nk\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)}{ \ln \rho \left({\epsilon }_i\right)\ }+Nk{ \ln g_1\ } \nonumber$
to find the entropy of an $N$-molecule disordered crystal at absolute zero. To be specific, let us consider a crystal of carbon monoxide.
We can calculate the entropy of carbon monoxide at absolute zero from either of two perspectives. Let us first assume that the energy of a molecule is almost completely independent of the orientations of its neighbors in the crystal. Then the energy of any molecule in the crystal is essentially the same in either of the two orientations available to it. In this model for the system, we consider that there are two, non-degenerate, low-energy quantum states available to the molecule. We suppose that all other quantum states lie at energy levels whose probabilities are very small when the temperature is near absolute zero. We have $g_1=g_2=1$, ${\epsilon }_2\approx {\epsilon }_1$. Near absolute zero, we have $\rho \left({\epsilon }_2\right)\approx \rho \left({\epsilon }_1\right)\approx {1}/{2}$; for $i>2$, $\rho \left({\epsilon }_i\right)\approx 0$. The entropy becomes
\begin{aligned} S & =-Nk\sum^{\infty }_{i=1}{g_i\rho \left({\epsilon }_i\right)}{ \ln \rho \left({\epsilon }_i\right)\ }+Nk{ \ln g_1\ } \\[4pt] & =-Nk\left(\frac{1}{2}\right){ \ln \left(\frac{1}{2}\right)\ }-Nk\left(\frac{1}{2}\right){ \ln \left(\frac{1}{2}\right)\ } \\[4pt] & =-Nk{ \ln \left(\frac{1}{2}\right)\ } \\[4pt] & =Nk{ \ln 2\ } \end{aligned} \nonumber
Alternatively, we can consider that there is just one low-energy quantum state available to the molecule but that this quantum state is doubly degenerate. In this model, the energy of the molecule is the same in either of the two orientations available to it. We have $g_1=2$. Near absolute zero, we have $\rho \left({\epsilon }_1\right)\approx 1$; for $i>1$, $\rho \left({\epsilon }_i\right)\approx 0$. The summation term vanishes, and the entropy becomes
$S=Nk{ \ln g_1\ }=Nk{ \ln 2\ } \nonumber$
Either perspective implies the same value for the zero-temperature entropy of the $N$-molecule crystal.
Either of these treatments involves a subtle oversimplification. In our first model, we recognize that the carbon monoxide molecule must have a different energy in each of its two possible orientations in an otherwise perfect crystal. The energy of the orientation that makes the crystal perfect is slightly less than the energy of the other orientation. We introduce an approximation when we say that $\rho \left({\epsilon }_2\right)\approx \rho \left({\epsilon }_1\right)\approx {1}/{2}$. However, if ${\epsilon }_2$ is not exactly equal to ${\epsilon }_1$, this approximation cannot be valid at an arbitrarily low temperature. To see this, we let the energy difference between these orientations be ${\epsilon }_2-{\epsilon }_1=\Delta \epsilon >0$. At relatively high temperatures, at which $\Delta \epsilon \ll kT$, we have
$\frac{\rho \left({\epsilon }_2\right)}{\rho \left({\epsilon }_1\right)}={\mathrm{exp} \left(\frac{-\Delta \epsilon }{kT}\right)\ }\approx 1 \nonumber$
and $\rho \left({\epsilon }_2\right)\approx \rho \left({\epsilon }_1\right)\approx {1}/{2}$. At such temperatures, the system behaves as if the lowest energy level were doubly degenerate, with ${\epsilon }_2={\epsilon }_1$. However, since $T$ can be arbitrarily close to zero, this condition cannot always apply. No matter how small $\Delta \epsilon$ may be, there are always temperatures at which $\Delta \epsilon \gg kT$ and at which we have
$\frac{\rho \left({\epsilon }_2\right)}{\rho \left({\epsilon }_1\right)}\approx 0 \nonumber$
This implies that the molecule should always adopt the orientation that makes the crystal perfectly ordered when the temperature becomes sufficiently close to zero. This conclusion disagrees with the experimental observations.
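The collapse of this population ratio at low temperature is easy to see numerically. In the sketch below, the orientation energy difference is an assumed, purely illustrative value, expressed as $\Delta \epsilon /k=0.01\ \mathrm{K}$:

```python
import math

# Assumed, illustrative energy difference between the two orientations,
# expressed as a temperature: delta_eps / k = 0.01 K.
delta_eps_over_k = 0.01

for T in (300.0, 1.0, 0.01, 1.0e-4):
    ratio = math.exp(-delta_eps_over_k / T)     # rho(eps_2)/rho(eps_1)
    print(f"T = {T:>8} K   rho(eps2)/rho(eps1) = {ratio:.3g}")
```

At temperatures large compared with $\Delta \epsilon /k$ the two orientations are essentially equally populated; at temperatures small compared with $\Delta \epsilon /k$ the higher-energy orientation would be essentially unpopulated if equilibrium could actually be reached.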
Our second model assumes that the energy of a carbon monoxide molecule is the same in either of its two possible orientations. However, its interactions with the surrounding molecules cannot be exactly the same in each orientation; consequently, its energy cannot be exactly the same. From first principles, therefore, our second model cannot be strictly correct.
To resolve these apparent contradictions, we assume that the rate at which a carbon monoxide molecule can change its orientation within the lattice depends on temperature. For some temperature at which $\Delta \epsilon \ll kT$, the reorientation process occurs rapidly, and the two orientations are equally probable. As the temperature decreases, the rate of reorientation becomes very slow. If the reorientation process effectively ceases to occur while the condition $\Delta \epsilon \ll kT$ applies, the orientations of the component molecules remain those that occur at higher temperatures no matter how much the temperature decreases thereafter. This is often described by saying that molecular orientations become “frozen.” The zero-temperature entropy of the system is determined by the energy-level probabilities that describe the system at the temperature at which reorientation effectively ceases to occur.
22.08: Problems
1. The gravitational potential energies available to a molecule near the surface of the earth are $\epsilon \left(h\right)=mgh$. Each height, $h$, corresponds to a unique energy, so we can infer that the degeneracy of $\epsilon \left(h\right)$ is unity. Derive the probability density function for the distribution of molecules in the earth’s atmosphere. (See Problem 19 in Chapter 3.)
2. The value of the molecular partition function approximates the number of quantum states that are available to the molecule and whose energy is less than $kT$. How many such quantum states are available to a molecule of molecular weight $40$ that is confined in a volume of ${10}^{-6}\ {\mathrm{m}}^3$ at $300$ K?
• 23.1: Ensembles of N-molecule Systems
Imagine collecting $\hat{N}$ N-molecule, constant-volume, constant-temperature systems. An aggregate of many multi-molecule systems is called an ensemble. Just as we assume that no forces act among the non-interacting molecules we consider earlier, we assume that no forces act among the systems of the ensemble. However, as we emphasize above, our model for the systems of an ensemble recognizes that intermolecular forces among the molecules of an individual system can be important.
• 23.2: The Ensemble Entropy and the Value of β
• 23.3: The Thermodynamic Functions of the N-molecule System
We can find the other thermodynamic functions for the N-molecule system from the equations for $Z$ and $\hat{P}_i$ by the arguments we used previously.
23: The Ensemble Treatment
When we begin our discussion of Boltzmann statistics in Chapter 20, we note that there exists, in principle, a Schrödinger equation for an $N$-molecule system. For any particular set of boundary conditions, the solutions of this equation are a set of infinitely many wavefunctions, $\Psi_{i,j}$, for the $N$-molecule system. For every such wavefunction, there is a corresponding system energy, $E_i$. The wavefunctions reflect all of the attractive and repulsive interactions among the molecules of the system. Likewise, the energy levels of the system reflect all of these interactions.
In Section 20.12, we introduce the symbol $\Omega_E$ to denote the degeneracy of the energy, $E$, of an $N$-molecule system. Because the constituent molecules are assumed to be distinguishable and non-interacting, we have
$\Omega_E=\sum_{\left\{N_i\right\},E}{W\left(N_i,g_i\right)}\nonumber$
In the solution of the Schrödinger equation for a system of $N$ interacting molecules, each system-energy level, $E_i$, can be degenerate. We again let $\Omega$ denote the degeneracy of an energy level of the system. We use $\Omega_i$ (rather than $\Omega_{E_i}$) to represent the degeneracy of $E_i$. It is important to recognize that the symbol “$\Omega_i$” now denotes an intrinsic quantum-mechanical property of the N-particle system.
In Chapters 21 and 22, we denote the parallel properties of an individual molecule by ${\psi }_{i,j}$ for the molecular wavefunctions, ${\epsilon }_i$ for the corresponding energy levels, and $g_i$ for the degeneracy of the $i^{th}$ energy level. We imagine creating an $N$-molecule system by collecting $N$ non-interacting molecules in a fixed volume and at a fixed temperature.
In exactly the same way, we now imagine collecting $\hat{N}$ of these $N$-molecule, constant-volume, constant-temperature systems. An aggregate of many multi-molecule systems is called an ensemble. Just as we assume that no forces act among the non-interacting molecules we consider earlier, we assume that no forces act among the systems of the ensemble. However, as we emphasize above, our model for the systems of an ensemble recognizes that intermolecular forces among the molecules of an individual system can be important. We can imagine specifying the properties of the individual systems in a variety of ways. A collection is called a canonical ensemble if each of the systems in the ensemble has the same values of $N$, $V$, and $T$. (The sense of this name is that by specifying constant $N$, $V$, and $T$, we create the ensemble that can be described most simply.)
The canonical ensemble is a collection of $\hat{N}$ identical systems, just as the $N$-molecule system is a collection of $N$ identical molecules. We imagine piling the systems that comprise the ensemble into a gigantic three-dimensional stack. We then immerse the entire stack—the ensemble—in a constant temperature bath. The ensemble and its constituent systems are at the constant temperature $T$. The volume of the ensemble is $\hat{N}V$. Because we can specify the location of any system in the ensemble by specifying its $x$-, $y$-, and $z$-coordinates in the stack, the individual systems that comprise the ensemble are distinguishable from one another. Thus the ensemble is analogous to a crystalline $N$-molecule system, in which the individual molecules are distinguishable from one another because each occupies a particular location in the crystal lattice, the entire crystal is at the constant temperature, $T$, and the crystal volume is $NV_{\mathrm{molecule}}$.
Since the ensemble is a conceptual construct, we can make the number of systems in the ensemble, $\hat{N}$, as large as we please. Each system in the ensemble will have one of the quantum-mechanically allowed energies, $E_i$. We let the number of systems that have energy $E_1$ be $\hat{N}_1$. Similarly, we let the number with energy $E_2$ be $\hat{N}_2$, and the number with energy $E_i$ be $\hat{N}_i$. Thus at any given instant, the ensemble is characterized by a population set, $\{\hat{N}_1,\ \hat{N}_2,\ \dots ,\ \hat{N}_i,\dots \}$, in exactly the same way that an $N$-molecule system is characterized by a population set, $\{N_1,\ N_2,\dots ,\ N_i,\dots \}$. We have
$\hat{N}=\sum^{\infty }_{i=1}{\hat{N}_i}\nonumber$
While all of the systems in the ensemble are immersed in the same constant-temperature bath, the energy of any one system in the ensemble is completely independent of the energy of any other system. This means that the total energy of the ensemble, $\hat{E}$, is given by
$\hat{E}=\sum^{\infty }_{i=1}{\hat{N}_i}E_i\nonumber$
| Property | System | Ensemble |
|---|---|---|
| Quantum entity | Molecule at fixed volume and temperature | System comprising a collection of $N$ molecules at fixed volume and temperature |
| Aggregate of quantum entities | System comprising a collection of $N$ molecules at fixed volume and temperature | Ensemble comprising $\hat{N}$ systems, each of which contains $N$ molecules |
| Number of quantum entities in aggregate | $N$ | $\hat{N}$ |
| Wave functions/quantum states | ${\psi }_i$ | $\Psi_i$ |
| Energy levels | ${\epsilon }_i$ | $E_i$ |
| Energy-level degeneracies | $g_i$ | $\Omega_i$ |
| Probability that an energy level is occupied | $P_i$ | $\hat{P}_i$ |
| Number of quantum entities in the $i^{th}$ energy level | $N_i$ | $\hat{N}_i$ |
| Probability that a quantum state is occupied | $\rho \left({\epsilon }_i\right)$ | $\widehat{\rho }\left(E_i\right)$ |
| Energy of the aggregate’s $k^{th}$ population set | $E_k=\sum{N_{k,i}{\epsilon }_i}$ | ${\hat{E}}_k=\sum{\hat{N}_{k,i}E_i}$ |
| Expected value of the energy of the aggregate | $\left\langle E\right\rangle =N\sum{P_i{\epsilon }_i}$ | $\left\langle \hat{E}\right\rangle =\hat{N}\sum{\hat{P}_iE_i}$ |
The population set, $\{\hat{N}_1,\ \hat{N}_2,\ \dots ,\ \hat{N}_i,\dots \}$, that characterizes the ensemble is not constant in time. However, by the same arguments that we apply to the N-molecule system, there is a population set
$\{\hat{N}^{\textrm{⦁}}_1,\ \hat{N}^{\textrm{⦁}}_2,\dots ,\ \hat{N}^{\textrm{⦁}}_i,\dots \}\nonumber$
which characterizes the ensemble when it is at equilibrium in the constant-temperature bath.
We define the probability, $\hat{P}_i$, that a system of the ensemble has energy $E_i$ to be the fraction of the systems in the ensemble with this energy, when the ensemble is at equilibrium at the specified temperature. Thus, by definition,
$\hat{P}_i=\dfrac{\hat{N}^{\textrm{⦁}}_i}{\hat{N}}.\nonumber$
We define the probability that a system is in one of the states, $\Psi_{i,j}$, with energy $E_i$, as
$\widehat{\rho }\left(E_i\right)=\frac{\hat{P}_i}{\Omega_i}\nonumber$
The method we have used to construct the canonical ensemble insures that the entire ensemble is always at the specified temperature. If the component systems are at equilibrium, the ensemble is at equilibrium. The expected value of the ensemble energy is
$\left\langle \hat{E}\right\rangle =\hat{N}\sum^{\infty }_{i=1}{\hat{P}_iE_i=}\sum^{\infty }_{i=1}{\hat{N}^{\textrm{⦁}}_iE_i}\nonumber$
Because the number of systems in the ensemble, $\hat{N}$, is very large, we know from the central limit theorem that any observed value for the ensemble energy will be indistinguishable from the expected value. To an excellent approximation, we have at any time,
$\hat{E}=\left\langle \hat{E}\right\rangle\nonumber$
and
$\hat{N}^{\textrm{⦁}}_i=\hat{N}_i\nonumber$
The table above summarizes the terminology that we have developed to characterize molecules, $N$-molecule systems, and $\hat{N}$-system ensembles of $N$-molecule systems.
We can now apply to an ensemble of $\hat{N}$, distinguishable, non-interacting systems the same logic that we applied to a system of $N$, distinguishable, non-interacting molecules. The probabilities that a system is in one of its energy levels must sum to unity:
$1=\hat{P}_1+\hat{P}_2+\dots +\hat{P}_i+\dots\nonumber$
The total probability sum for the constant-temperature ensemble is
$1={\left(\hat{P}_1+\hat{P}_2+\dots +\hat{P}_i+\dots \right)}^\hat{N}=\sum_{\{\hat{N}_i\}}{\hat{W}\left(\hat{N}_i,\Omega_i\right)}{\widehat{\rho }\left(E_1\right)}^{\hat{N}_1}{\widehat{\rho }\left(E_2\right)}^{\hat{N}_2}\dots {\widehat{\rho }\left(E_i\right)}^{\hat{N}_i}\dots\nonumber$
where
$\hat{W}\left(\hat{N}_i,\Omega_i\right)=\hat{N}!\prod^{\infty }_{i=1}{\frac{\Omega^{\hat{N}_i}_i}{\hat{N}_i!}}\nonumber$
Moreover, we can imagine instantaneously isolating the ensemble from the temperature bath in which it is immersed. This is a wholly conceptual change, which we effect by replacing the fluid of the constant-temperature bath with a solid blanket of insulation. The ensemble is then an isolated system whose energy, $\hat{E}$, is constant. Every system of the isolated ensemble is immersed in a constant-temperature bath, where the constant-temperature bath consists of the $\hat{N}-1$ systems that make up the rest of the ensemble. This is an important feature of the ensemble treatment. It means that any conclusion we reach about the systems of the constant-energy ensemble is also a conclusion about each of the $\hat{N}$ identical, constant-temperature systems that comprise the isolated, constant-energy ensemble.
Only certain population sets, $\{\hat{N}_1,\ \hat{N}_2,\ \dots ,\ \hat{N}_i,\dots \}$, are consistent with the fixed value, $\hat{E}$, of the isolated ensemble. For each of these population sets, there are $\hat{W}\left(\hat{N}_i,\Omega_i\right)$ system states. The probability of each of these system states is proportional to ${\widehat{\rho }\left(E_1\right)}^{\hat{N}_1}{\widehat{\rho }\left(E_2\right)}^{\hat{N}_2}\dots {\widehat{\rho }\left(E_i\right)}^{\hat{N}_i}\dots$. By the principle of equal a priori probability, every system state of the fixed-energy ensemble occurs with equal probability. We again conclude that the population set that characterizes the equilibrium state of the constant-energy ensemble, $\{\hat{N}^{\textrm{⦁}}_1,\ \hat{N}^{\textrm{⦁}}_2,\dots ,\ \hat{N}^{\textrm{⦁}}_i,\dots \}$, is the one for which $\hat{W}$ or ${ \ln \hat{W}\ }$ is a maximum, subject to the constraints
$\hat{N}=\sum^{\infty }_{i=1}{\hat{N}_i}\nonumber$
and
$\hat{E}=\sum^{\infty }_{i=1}{\hat{N}_i}E_i\nonumber$
The fact that we can make $\hat{N}$ arbitrarily large ensures that any term, $\hat{N}^{\textrm{⦁}}_i$, in the equilibrium-characterizing population set can be very large, so that $\hat{N}^{\textrm{⦁}}_i$ can be found using Stirling’s approximation and Lagrange’s method of undetermined multipliers. We have the mnemonic function $F_{mn}=\hat{N}{ \ln \hat{N}-\hat{N}+\sum^{\infty }_{i=1}{\left(\hat{N}_i{ \ln \Omega_i\ }-\hat{N}_i{ \ln \hat{N}_i\ }+\hat{N}_i\right)}\ }+\alpha \left(\hat{N}-\sum^{\infty }_{i=1}{\hat{N}_i}\right)+\beta \left(\hat{E}-\sum^{\infty }_{i=1}{\hat{N}_i}E_i\right)\nonumber$ so that
${\left(\frac{\partial F_{mn}}{\partial \hat{N}^{\textrm{⦁}}_i}\right)}_{j\neq i}={ \ln \Omega_i\ }-\frac{\hat{N}^{\textrm{⦁}}_i}{\hat{N}^{\textrm{⦁}}_i}-{ \ln \hat{N}^{\textrm{⦁}}_i\ }+1-\alpha -\beta E_i=0\nonumber$
and
${ \ln \hat{N}^{\textrm{⦁}}_i\ }={ \ln \Omega_i\ }-\alpha -\beta E_i\nonumber$
or
$\hat{N}^{\textrm{⦁}}_i=\Omega_i\exp \left(-\alpha \right) \exp \left(-\beta E_i\right)\nonumber$
When we make use of the constraint on the total number of systems in the ensemble, we have
$\hat{N}=\sum^{\infty}_{i=1} \hat{N}^{\textrm{⦁}}_i =\exp \left(-\alpha \right)\sum^{\infty }_{i=1} \Omega_i \exp \left(-\beta E_i\right)\nonumber$
so that
$\exp \left(-\alpha \right)=\hat{N}Z^{-1}\nonumber$
where the partition function for a system of $N$ possibly-interacting molecules is
$Z=\sum^{\infty}_{i=1} \Omega_i \exp \left(-\beta E_i\right)\nonumber$
The probability that a system has energy $E_i$ is equal to the equilibrium fraction of systems in the ensemble that have energy $E_i$, so that
$\hat{P}_i=\frac{\hat{N}^{\textrm{⦁}}_i}{\hat{N}}=\frac{\Omega_i\exp \left(-\beta E_i\right)}{Z}\nonumber$
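A minimal sketch of this result for a hypothetical set of system energy levels (the $E_i$ and $\Omega_i$ values below are invented for illustration) computes $Z$ and the equilibrium fractions $\hat{P}_i$:

```python
import math

kT = 1.0                                   # work in units of kT

# Hypothetical system energy levels E_i (in units of kT) and degeneracies Omega_i
levels = [(0.0, 1), (1.0, 3), (2.5, 5)]

# System partition function Z = sum_i Omega_i exp(-E_i / kT)
Z = sum(Omega * math.exp(-E / kT) for E, Omega in levels)

# Boltzmann probabilities P_i = Omega_i exp(-E_i / kT) / Z
for E, Omega in levels:
    P = Omega * math.exp(-E / kT) / Z
    print(f"E = {E:3} kT  Omega = {Omega}   P = {P:.3f}")

print("sum of P_i =", sum(Om * math.exp(-E / kT) / Z for E, Om in levels))
```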
23.02: The Ensemble Entropy and the Value of β
At equilibrium, the entropy of the $\hat{N}$-system ensemble, $S_{\text{ensemble}}$, must be a maximum. By arguments that parallel those in Chapter 20, $\hat{W}$ is a maximum for the ensemble population set that characterizes this equilibrium state. Applying the Boltzmann definition to the ensemble, the ensemble entropy is $S_{\text{ensemble}}=k{ \ln {\hat{W}}_{\text{max}}\ }$. Since all $\hat{N}$ systems in the ensemble have effectively the same entropy, $S$, we have $S_{\text{ensemble}}=\hat{N}S$. When we assume that ${\hat{W}}_{\text{max}}$ occurs for the equilibrium population set, $\left\{\hat{N}^{\textrm{⦁}}_1,\ {\hat{N}}^{\textrm{⦁}}_2,\dots ,\ {\hat{N}}^{\textrm{⦁}}_i,\dots \right\}$, we have
${\hat{W}}_{\text{max}}=\hat{N}!\prod^{\infty }_{i=1}{\frac{\Omega^{\hat{N}^{\textrm{⦁}}_i}_i}{\hat{N}^{\textrm{⦁}}_i!}} \nonumber$
so that
$S_{\text{ensemble}}=\hat{N}S=k \ln \hat{N}! +k \sum^{\infty }_{i=1}{\hat{N}^{\textrm{⦁}}_i} {\ln \Omega_i} - k \sum^{\infty }_{i=1} { \ln \left(\hat{N}^{\textrm{⦁}}_i!\right) } \nonumber$
From the Boltzmann distribution function, ${\hat{N}^{\textrm{⦁}}_i}/{\hat{N}}=Z^{-1}\Omega_i{\mathrm{exp} \left(-\beta E_i\right)\ }$, we have
${ \ln \Omega_i\ }={ \ln Z\ }+{ \ln {\hat{N}}^{\textrm{⦁}}_i\ }+\beta E_i-{ \ln \hat{N}\ } \nonumber$
Substituting, and introducing Stirling’s approximation, we find
\begin{align*} \hat{N}S &=k\hat{N}{ \ln \hat{N}-k\hat{N}\ } + k\sum^{\infty }_{i=1}{\hat{N}^{\textrm{⦁}}_i\left({ \ln Z+{ \ln {\hat{N}}^{\textrm{⦁}}_i\ }\ }+\beta E_i-{ \ln \hat{N}\ }\right)}-k\sum^{\infty }_{i=1}{\left({\hat{N}}^{\textrm{⦁}}_i{ \ln {\hat{N}}^{\textrm{⦁}}_i-{\hat{N}}^{\textrm{⦁}}_i\ }\right)} \[4pt] &=\hat{N}k{ \ln Z\ }+k\beta \sum^{\infty }_{i=1}{\hat{N}^{\textrm{⦁}}_iE_i} \end{align*}
Since $\sum^{\infty }_{i=1}{\hat{N}^{\textrm{⦁}}_iE_i}$ is the energy of the $\hat{N}$-system ensemble and the energy of each system is the same, we have
$\sum^{\infty }_{i=1}{\hat{N}^{\textrm{⦁}}_iE_i}=E_{\text{ensemble}}=\hat{N}E \nonumber$
Substituting, we find
$S=k\beta E+k{ \ln Z\ } \nonumber$
where $S$, $E$, and $Z$ are the entropy, energy, and partition function for the $N$-molecule system. From the fundamental equation, we have
${\left(\frac{\partial E}{\partial S}\right)}_V=T \nonumber$
Differentiating $S=k\beta E+k{ \ln Z\ }$ with respect to entropy at constant volume, we find
$1=k\beta {\left(\frac{\partial E}{\partial S}\right)}_V \nonumber$ and it follows that $\beta =\frac{1}{kT} \nonumber$
We have, for the $N$-molecule system
$Z=\sum^{\infty }_{i=1}{\Omega_i}{\mathrm{exp} \left(\frac{-E_i}{kT}\right)\ } \nonumber$ (System partition function)
${\hat{P}}_i=Z^{-1}\Omega_i{\mathrm{exp} \left(\frac{-E_i}{kT}\right)\ } \nonumber$ (Boltzmann’s equation)
$S=\frac{E}{T}+k{ \ln Z\ } \nonumber$ (Entropy of the N-molecule system)
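As a concrete illustration of these three results, the following minimal Python sketch evaluates $Z$, the probabilities $\hat{P}_i$, and $S={E}/{T}+k{ \ln Z\ }$ for a small, hypothetical set of system energy levels and degeneracies. The levels, degeneracies, and temperature are invented for illustration; they are not taken from the text.

```python
# Minimal sketch: evaluate Z, P_i, and S = E/T + k ln Z for a hypothetical
# three-level system. All numerical inputs here are illustrative assumptions.
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # illustrative temperature, K

E     = [0.0, 2.0e-21, 5.0e-21]   # hypothetical system energy levels, J
Omega = [1, 3, 5]                 # hypothetical degeneracies

# System partition function: Z = sum_i Omega_i exp(-E_i/kT)
Z = sum(g * math.exp(-Ei / (k * T)) for g, Ei in zip(Omega, E))

# Boltzmann's equation: P_i = Omega_i exp(-E_i/kT) / Z
P = [g * math.exp(-Ei / (k * T)) / Z for g, Ei in zip(Omega, E)]

# Average system energy and the entropy relation S = E/T + k ln Z
E_avg = sum(p * Ei for p, Ei in zip(P, E))
S = E_avg / T + k * math.log(Z)

print(f"Z = {Z:.4f}   sum of P_i = {sum(P):.4f}")
print(f"<E> = {E_avg:.3e} J   S = {S:.3e} J/K")
```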
23.03: The Thermodynamic Functions of the N-molecule System
With the results of Section 23.2 in hand, we can find the other thermodynamic functions for the $N$-molecule system from the equations for $Z$ and $\hat{P}_i$ by the arguments we use in Chapters 20 and 21. Let us summarize these arguments. From
$E=\sum^{\infty }_{i=1}{\hat{P}_i}E_i\nonumber$
we have
$dE=\sum^{\infty }_{i=1}{E_id\hat{P}_i}+\sum^{\infty }_{i=1}{\hat{P}_i}{dE}_i\nonumber$
We associate the first term with ${dq}^{rev}$ and the second term with $dw=-PdV$; that is,
$dq^{rev}=TdS=\sum^{\infty }_{i=1} E_id\hat{P}_i = -kT\sum^{\infty}_{i=1} \ln \left(\frac{\hat{P}_i}{\Omega_i}\right) d\hat{P}_i-kT \ln Z \sum^{\infty }_{i=1} d\hat{P}_i\nonumber$
where we substitute
$E_i=-kT{ \ln \left(\frac{\hat{P}_i}{\Omega_i}\right)\ }-kT{ \ln Z\ }\nonumber$
which we obtain by solving Boltzmann’s equation, ${\hat{P}}_i=Z^{-1}\Omega_i{\mathrm{exp} \left(-E_i/kT\right)\ }$, for $E_i$. Since $\sum^{\infty }_{i=1}{d\hat{P}_i}=0$, we have for each system,
$dS=-k\sum^{\infty }_{i=1} \ln \left(\frac{\hat{P}_i}{\Omega_i}\right) d\hat{P}_i=-k\sum^{\infty }_{i=1}{\left\{\Omega_id\left(\frac{\hat{P}_i}{\Omega_i}{ \ln \frac{\hat{P}_i}{\Omega_i}\ }\right)-d\hat{P}_i\right\}}=-k\sum^{\infty }_{i=1}{d\left(\hat{P}_i{ \ln \frac{\hat{P}_i}{\Omega_i}\ }\right)}\nonumber$
The system entropy, $S$, and the system-energy-level probabilities, $\hat{P}_i$, are functions of temperature. Integrating from $T=0$ to $T$ and choosing the lower limits for the integrations on the right to be $\hat{P}_1\left(0\right)=1$ and $\hat{P}_i\left(0\right)=0$ for $i>1$, we have
$\int^S_{S_0}{dS}=-k\sum^{\infty }_{i=1}{\int^{\hat{P}_i\left(T\right)}_{\hat{P}_i\left(0\right)}{d\left(\hat{P}_i{ \ln \frac{\hat{P}_i}{\Omega_i}\ }\right)}}\nonumber$
Letting $\hat{P}_i\left(T\right)=\hat{P}_i$, the result is
\begin{align*} S-S_0 &= -k\hat{P}_1{ \ln \frac{\hat{P}_1}{\Omega_1}\ }+k \ln \frac{1}{\Omega_1} -k\sum^{\infty }_{i=2}{\hat{P}_i{ \ln \frac{\hat{P}_i}{\Omega_i}\ }} \\[4pt] &=-k\sum^{\infty }_{i=1}{\hat{P}_i{ \ln \frac{\hat{P}_i}{\Omega_i}\ }}-k \ln \Omega_1 \end{align*}\nonumber
From the partition function, we have
${ \ln \left(\frac{\hat{P}_i}{\Omega_i}\right)\ }=-\frac{E_i}{kT}+{ \ln Z\ }\nonumber$
so that
\begin{align*} S-S_0 &= -k\sum^{\infty }_{i=1}{\hat{P}_i}\left(-\frac{E_i}{kT}+{ \ln Z\ }\right)-k{ \ln \Omega_1\ } \\[4pt] &= \frac{1}{T}\sum^{\infty }_{i=1}{\hat{P}_i}E_i+k{ \ln Z\ }\sum^{\infty }_{i=1}{\hat{P}_i}-k{ \ln \Omega_1\ } \\[4pt] &= \frac{E}{T}+k{ \ln Z\ }-k{ \ln \Omega_1\ } \end{align*}\nonumber
We take the system entropy at absolute zero, $S_0$, to be
$S_0=k{ \ln \Omega_1\ }\nonumber$
If the lowest energy state is non-degenerate, $\Omega_1=1$, and $S_0=0$, so that
$S\left(T\right)=\frac{E}{T}+k{ \ln Z\ }\nonumber$
As in Section 21.6, we observe that
$E=\sum^{\infty }_{i=1}{\hat{P}_i}E_i=Z^{-1}\sum^{\infty }_{i=1}{\Omega_i}E_i{\mathrm{exp} \left(\frac{-E_i}{kT}\right)\ }\nonumber$ and that
${\left(\frac{\partial { \ln Z\ }}{\partial T}\right)}_V=Z^{-1}\sum^{\infty }_{i=1}{\Omega_i}\left(\frac{E_i}{kT^2}\right){\mathrm{exp} \left(\frac{-E_i}{kT}\right)\ }=\frac{E}{kT^2}\nonumber$
so that
$E=kT^2{\left(\frac{\partial { \ln Z\ }}{\partial T}\right)}_V\nonumber$
From $A=E-TS$ and the entropy equation, $S={E}/{T}+k{ \ln Z\ }$, the Helmholtz free energy of the system is
$A=-kT{ \ln Z\ }\nonumber$
For the system pressure, we find from
$P=-{\left(\frac{\partial A}{\partial V}\right)}_T\nonumber$ that $P=kT{\left(\frac{\partial { \ln Z\ }}{\partial V}\right)}_T\nonumber$
From $H=E+PV$, we find
$H=kT^2{\left(\frac{\partial { \ln Z\ }}{\partial T}\right)}_V+VkT{\left(\frac{\partial { \ln Z\ }}{\partial V}\right)}_T\nonumber$
and from $G=E+PV-TS$, we find
$G=VkT{\left(\frac{\partial { \ln Z\ }}{\partial V}\right)}_T-kT{ \ln Z\ }\nonumber$
For the chemical potential per molecule in the $N$-molecule system, we obtain
$\mu ={\left(\frac{\partial A}{\partial N}\right)}_{VT}=-kT{\left(\frac{\partial { \ln Z\ }}{\partial N}\right)}_{VT}\nonumber$
Thus, we have found the principal thermodynamic functions for the $N$-molecule system expressed in terms of ${ \ln Z\ }$ and its derivatives. The system partition function, $Z$, depends on the energy levels available to the $N$-molecule system. The thermodynamic functions we have obtained are valid for any system, including systems in which intermolecular forces make large contributions to the system energy. Of course, the system partition function, $Z$, must accurately reflect the effects of these forces.
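For a numerical illustration of these relationships, the short sketch below (again using invented energy levels, not data from the text) checks that the energy obtained from $E=kT^2{\left({\partial { \ln Z\ }}/{\partial T}\right)}_V$ by a finite-difference derivative matches the probability-weighted average $\sum{\hat{P}_iE_i}$, and that $A=-kT{ \ln Z\ }$ matches $E-TS$.

```python
# Consistency check on E = kT^2 (d ln Z/dT)_V and A = -kT ln Z for a
# hypothetical three-level system; all inputs are illustrative assumptions.
import math

k        = 1.380649e-23
E_levels = [0.0, 2.0e-21, 5.0e-21]   # hypothetical system energy levels, J
Omega    = [1, 3, 5]                 # hypothetical degeneracies

def lnZ(T):
    return math.log(sum(g * math.exp(-Ei / (k * T))
                        for g, Ei in zip(Omega, E_levels)))

T, dT = 300.0, 1.0e-3
# E from the temperature derivative of ln Z (central finite difference)
E_deriv = k * T**2 * (lnZ(T + dT) - lnZ(T - dT)) / (2 * dT)

# E as the probability-weighted average of the system energy levels
Z = math.exp(lnZ(T))
P = [g * math.exp(-Ei / (k * T)) / Z for g, Ei in zip(Omega, E_levels)]
E_avg = sum(p * Ei for p, Ei in zip(P, E_levels))

S = E_avg / T + k * lnZ(T)
A = -k * T * lnZ(T)

print(f"E from derivative: {E_deriv:.6e} J")
print(f"E from <E>       : {E_avg:.6e} J")
print(f"A = -kT ln Z     : {A:.6e} J   E - TS = {E_avg - T * S:.6e} J")
```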
In Chapter 24 we find that the partition function, $Z$, for a system of $N$, distinguishable, non-interacting molecules is related in a simple way to the molecular partition function, $z$. We find $Z=z^N$. When we substitute this result for $Z$ into the system partition functions developed above, we recover the same results that we developed in Chapters 20 and 21 for the thermodynamic properties of a system of $N$, distinguishable, non-interacting molecules.
The ensemble analysis shows that the thermodynamic functions for an \(N\)-molecule system can be developed from the principles of statistical mechanics whether the molecules of the system interact or not. The theory is valid irrespective of the strengths of inter-molecular attractions and repulsions. However, to carry out numerical calculations, it is necessary to know the energy levels for the \(N\)-molecule system. For systems in which the molecules interact, obtaining useful approximations to these levels is a difficult problem. As a result, many applications assume that the molecules do not interact with one another. In this chapter we apply the results from the ensemble theory to the particular case of ideal gases.
24: Indistinguishable Molecules - Statistical Thermodynamics of Ideal Gases
In Chapter 21, our analysis of a system of $N$ distinguishable and non-interacting molecules finds that the system entropy is given by
$S=\frac{E}{T}+Nk{ \ln z\ }=\frac{E}{T}+k{ \ln z^N\ } \nonumber$
where $E$ is the system energy and $z$ is the molecular partition function. From ensemble theory, we found
$S=\frac{E}{T}+k{ \ln Z\ } \nonumber$
where $Z$ is the partition function for the $N$-molecule system. Comparison implies that, for a system of $N$, distinguishable, non-interacting molecules, we have
$Z=z^N \nonumber$
We can obtain this same result by writing out the energy levels for the system in terms of the energy levels of the distinguishable molecules that make up the system. First we develop the obvious notation for the energy levels of the individual molecules. We let the energy levels of the first molecule be the set $\{{\epsilon }_{1,i}\}$, the energy levels of the second molecule be the set $\{{\epsilon }_{2,i}\}$, and so forth to the last molecule for which the energy levels are the set $\{{\epsilon }_{N,i}\}$. Thus, the $i^{th}$ energy level of the $r^{th}$ molecule is ${\epsilon }_{r,i}$. We let the corresponding energy-level degeneracy be $g_{r,i}$ and the partition function for the $r^{th}$ molecule be $z_r$. Since all of the molecules are identical, each has the same set of energy levels; that is, we have ${\epsilon }_{p,i}={\epsilon }_{r,i}$ and $g_{p,i}=g_{r,i}$ for any two molecules, $p$ and $r$, and any energy level, $i$. It follows that the partition function is the same for every molecule
$z_1=z_2=\dots =z_j=\dots =z_N=z=\sum^{\infty }_{i=1}{g_{r,i}}{\mathrm{exp} \left(\frac{-{\epsilon }_{r,i}}{kT}\right)\ } \nonumber$ so that $z_1z_2\dots z_r\dots z_N=z^N \nonumber$
We can write down the energy levels available to the system of $N$ distinguishable, non-interacting molecules. The energy of the system is just the sum of the energies of the constituent molecules, so the possible system energies consist of all of the possible sums of the distinguishable-molecule energies. Since there are infinitely many molecular energies, there are infinitely many system energies.
$E_1={\epsilon }_{1,1}+{\epsilon }_{2,1}+\dots +{\epsilon }_{r,1}+\dots +{\epsilon }_{N,1} \nonumber$ $E_2={\epsilon }_{1,2}+{\epsilon }_{2,1}+\dots +{\epsilon }_{r,1}+\dots +{\epsilon }_{N,1} \nonumber$ $E_3={\epsilon }_{1,3}+{\epsilon }_{2,1}+\dots +{\epsilon }_{r,1}+\dots +{\epsilon }_{N,1} \nonumber$ $\dots \nonumber$ $E_m={\epsilon }_{1,i}+{\epsilon }_{2,j}+\dots +{\epsilon }_{r,k}+\dots +{\epsilon }_{N,p} \nonumber$ $\dots \nonumber$
The product of the $N$ molecular partition functions is
$z_1z_2\dots z_r\dots z_N=\sum^{\infty }_{i=1}{g_{1,i}}{\mathrm{exp} \left(\frac{-{\epsilon }_{1,i}}{kT}\right)\ } \nonumber$ $\times \sum^{\infty }_{j=1}{g_{2,j}}{\mathrm{exp} \left(\frac{-{\epsilon }_{2,j}}{kT}\right)\ }\times \dots \times \sum^{\infty }_{k=1}{g_{r,k}}{\mathrm{exp} \left(\frac{-{\epsilon }_{r,k}}{kT}\right)\ }\times \nonumber$ $\dots \times \sum^{\infty }_{p=1}{g_{N,p}}{\mathrm{exp} \left(\frac{-{\epsilon }_{N,p}}{kT}\right)\ } \nonumber$
$=\sum^{\infty }_{i=1}{\sum^{\infty }_{j=1}{\dots \sum^{\infty }_{k=1}{\dots \sum^{\infty }_{p=1}{g_{1,i}g_{2,j}\dots g_{r,k}\dots g_{N,p}}}}} \nonumber$ $\times {\mathrm{exp} \left[\frac{-\left({\epsilon }_{1,i}+{\epsilon }_{2,j}+\dots +{\epsilon }_{r,k}+\dots +{\epsilon }_{N,p}\right)}{kT}\right]\ } \nonumber$
The sum in each exponential term is just the sum of $N$ single-molecule energies. Moreover, every possible combination of $N$ single-molecule energies occurs in one of the exponential terms. Each of these possible combinations is a separate energy level available to the system of $N$ distinguishable molecules.
The system partition function is
$Z=\sum^{\infty }_{i=1}{{\mathit{\Omega}}_i}{\mathrm{exp} \left(\frac{{-E}_i}{kT}\right)\ } \nonumber$
The $i^{th}$ energy level of the system is the sum
$E_i={\epsilon }_{1,v}+{\epsilon }_{2,w}+\dots +{\epsilon }_{r,k}+\dots +{\epsilon }_{N,y} \nonumber$
The degeneracy of the $i^{th}$ energy level of the system is the product of the degeneracies of the molecular energy levels that belong to it. We have
${\mathit{\Omega}}_i=g_{1,v}g_{2,w}\dots g_{r,k}\dots g_{N,y} \nonumber$
Thus, by a second, independent argument, we see that
$z_1z_2\dots z_r\dots z_N=z^N=Z \nonumber$ ($\mathrm{N}$ distinguishable, non-interacting molecules)
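A brute-force enumeration makes this second argument concrete. The sketch below (for a hypothetical three-level molecule; the levels, degeneracies, and value of $kT$ are invented for illustration) builds every combination of single-molecule levels, assigns each system level the product of the molecular degeneracies, and confirms that the resulting system partition function equals $z^N$.

```python
# Brute-force check that Z = z^N for N distinguishable, non-interacting molecules.
# The molecular levels, degeneracies, and kT are illustrative assumptions.
import math
from itertools import product

kT  = 4.0e-21                   # an arbitrary value of kT, J
eps = [0.0, 1.0e-21, 3.0e-21]   # hypothetical molecular energy levels, J
g   = [1, 2, 1]                 # hypothetical degeneracies
N   = 3                         # number of distinguishable molecules

# Molecular partition function z
z = sum(gi * math.exp(-ei / kT) for gi, ei in zip(g, eps))

# System partition function by enumerating every combination of single-molecule
# levels; the degeneracy of each system level is the product of molecular degeneracies.
Z = 0.0
for combo in product(range(len(eps)), repeat=N):
    E_i     = sum(eps[j] for j in combo)
    Omega_i = math.prod(g[j] for j in combo)
    Z += Omega_i * math.exp(-E_i / kT)

print(f"z^N = {z**N:.6f}")
print(f"Z   = {Z:.6f}")   # agrees with z^N
```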
In all of our considerations to this point, we focus on systems in which the molecules are distinguishable. This effectively confines the practical applications to crystalline solids. Since there is no way to distinguish one molecule of a given substance from another in the gas phase, it is evident that the assumptions we have used so far do not apply to gaseous systems. The number and importance of practical applications increases dramatically if we can extend the theory to describe the behavior of ideal gases.
We might suppose that distinguishability is immaterial—that there is no difference between the behavior of a system of distinguishable particles and an otherwise-identical system of indistinguishable particles. Indeed, this is an idea well worth testing. We know the partition function for a particle in a box, and we have every reason to believe that this should be a good model for the partition function describing the translational motion of a gas particle. If an ideal gas behaves as a collection of $N$ distinguishable particles-in-a-box, the translational partition function of the gas is just $z^N$. Thermodynamic properties calculated on this basis for, say, argon should agree with those observed experimentally. Indeed, when the comparison is made, this theory gives some properties correctly. The energy is correct; however, the entropy is not.
Thus, experiment demonstrates that the partition function for a system of indistinguishable molecules is different from that of an otherwise-identical system of distinguishable molecules. The reason for this becomes evident when we compare the microstates available to a system of distinguishable molecules to those available to a system of otherwise-identical indistinguishable molecules. Consider the distinguishable-molecule microstate whose energy is
$E_i={\epsilon }_{1,v}+{\epsilon }_{2,w}+\dots +{\epsilon }_{r,k}+\dots +{\epsilon }_{N,y} \nonumber$
As a starting point, we assume that every molecule is in a different energy level. That is, all of the $N$ energy levels, ${\epsilon }_{i,j}$, that appear in this sum are different. For the case in which the molecules are distinguishable, we can write down additional microstates that have this same energy just by permuting the energy values among the $N$ molecules. (A second microstate with this energy is $E_i = {\epsilon }_{\mathrm{1,}w} + {\epsilon }_{\mathrm{2,}v}\mathrm{+\dots +}{\epsilon }_{r,k}\mathrm{+\dots +}{\epsilon }_{N,y}$.) Since there are $N!$ such permutations, there are a total of $N!$ quantum states that have this same energy, and each of them appears as an exponential term in the product $z_1z_2\dots z_r\dots z_N=z^N$.
If, however, the $N$ molecules are indistinguishable, there is no way to tell one of these $N!$ assignments from another. They all become the same thing. All we know is that some one of the $N$ molecules has the energy ${\epsilon }_w$, another has the energy ${\epsilon }_v$, etc. This means that there is only one way that the indistinguishable molecules can have the energy $E_i$. It means also that the difference between the distinguishable-molecules case and the indistinguishable-molecules case is that, while they contain the same system energy levels, each level appears $N!$ more times in the distinguishable-molecules partition function than it does in the indistinguishable-molecules partition function. We have
$Z_{\mathrm{indistinguishable}}=\frac{1}{N!}Z_{\mathrm{distinguishable}}=\frac{1}{N!}z^N \nonumber$
In the next section, we see that nearly all of the molecules in a sample of gas must have different energies, so that this relationship correctly relates the partition function for a single gas molecule to the partition function for a system of $N$ indistinguishable gas molecules.
Before seeing that nearly all of the molecules in a macroscopic sample of gas actually do have different energies, however, let us see what happens if they do not. Suppose that just two of the indistinguishable molecules have the same energy. Then there are not $N!$ permutations of the energies among the distinguishable molecules; rather there are only ${N!}/{2!}$ such permutations. In this case, the relationship between the system and the molecular partition functions is
$Z_{\mathrm{indistinguishable}}=\frac{2!}{N!}Z_{\mathrm{distinguishable}}=\frac{2!}{N!}z^N \nonumber$
For the population set $\{N_1,\ N_2,\dots ,N_r,\dots ,N_{\omega }\}$ the relationship is
$Z_{\mathrm{indistinguishable}}=\frac{N_1!N_2!\dots N_r!\dots N_{\omega }!}{N!}z^N \nonumber$ which is much more complex than the case in which all molecules have different energies. Of course, if every energy level in the population set is occupied by at most one molecule, every $N_i!$ is unity, and the relationship reverts to the one with which we began.
$Z_{indistinguishable}=\frac{1}{N!}\left(\prod^{\infty }_{i=1}{N_i!}\right)z^N=\frac{1}{N!}\left(\prod^{\infty }_{i=1}{1}\right)z^N=\frac{1}{N!}z^N \nonumber$
The particle in a box is a quantum mechanical model for the motion of a point mass in one dimension. In Section 18.3, we find that the energy levels are
${\epsilon }_n=\frac{n^2h^2}{8m{\ell }^2} \nonumber$
so that the partition function for a particle in a one-dimensional box is
$z=\sum^{\infty }_{n=1} \mathrm{exp} \left(\frac{-n^2h^2}{8mkT{\ell }^2}\right) \nonumber$
When the mass approximates that of a molecule, the length of the box is macroscopic, and the temperature is not extremely low, there are a very large number of energy levels for which ${\epsilon }_n<kT$. When this is the case, we find in Section 22.4 that this sum can be approximated by an integral to obtain an expression for $z$ in closed form:
$z\approx \int^{\infty }_0 \mathrm{exp} \left(\frac{-n^2h^2}{8mkT{\ell }^2}\right)\ dn= \left(\frac{2\pi mkT}{h^2}\right)^{1/2}\ell \nonumber$
A particle in a three-dimensional rectangular box is a quantum mechanical model for an ideal gas molecule. The molecule moves in three dimensions, but the component of its motion parallel to any one coordinate axis is independent of its motion parallel to the others. This being the case, the kinetic energy of a particle in a three-dimensional box can be modeled as the sum of the energies for motion along each of the three independent coordinate axes that describe the translational motion of the particle. Taking the coordinate axes parallel to the faces of the box and labeling the lengths of the sides ${\ell }_x$, ${\ell }_y$, and ${\ell }_z$, the energy of the particle in the three-dimensional box becomes
$\epsilon ={\epsilon }_x+{\epsilon }_y+{\epsilon }_z \nonumber$
and the three-dimensional partition function becomes
\begin{aligned} z_t & =\sum^{\infty }_{n_x=1} \sum^{\infty }_{n_y=1} \sum^{\infty }_{n_z=1} \mathrm{exp} \left[\left(\frac{-h^2}{8mkT}\right)\left(\frac{n^2_x}{{\ell }^2_x}+\frac{n^2_y}{{\ell }^2_y}+\frac{n^2_z}{{\ell }^2_z}\right)\right] \\ & =\sum^{\infty }_{n_x=1} \mathrm{exp} \left(\frac{-n^2_xh^2}{8mkT{\ell }^2_x}\right) \sum^{\infty }_{n_y=1} \mathrm{exp} \left(\frac{-n^2_yh^2}{8mkT{\ell }^2_y}\right) \sum^{\infty }_{n_z=1} \mathrm{exp} \left(\frac{-n^2_zh^2}{8mkT{\ell }^2_z}\right) \end{aligned} \nonumber
or, recognizing this as the product of three one-dimensional partition functions,
$z_t=z_xz_yz_z. \nonumber$
Approximating each of these one-dimensional partition functions as an integral gives
$z_t= \left(\frac{2\pi mkT}{h^2}\right)^{3/2}{\ell }_x{\ell }_y{\ell }_z=\left(\frac{2\pi mkT}{h^2}\right)^{3/2}V \nonumber$
where the volume of the container is $V={\ell }_x{\ell }_y{\ell }_z$.
Let us estimate a lower limit for the molecular partition function for the translational motion of a typical gas at ambient temperature. The partition function increases with volume, $V$, so we want to select a volume that is near the smallest volume a gas can have. We can estimate this as the volume of the corresponding liquid at the same temperature. Let us calculate the molecular translational partition function for a gas whose molar mass is $0.040\ \mathrm{kg\ {mol}^{-1}}$ in a volume of $0.020\ \mathrm{L}$ at $300$ K. We find $z_t=5\times {10}^{27}$.
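The estimate is easy to reproduce. The following minimal Python sketch (the variable names are ours, not the text’s) evaluates $z_t={\left({2\pi mkT}/{h^2}\right)}^{3/2}V$ with the values quoted above.

```python
# Sketch of the estimate quoted above: translational molecular partition function
# for molar mass 0.040 kg/mol in 0.020 L at 300 K.
import math

h  = 6.62607015e-34     # Planck constant, J s
k  = 1.380649e-23       # Boltzmann constant, J/K
NA = 6.02214076e23      # Avogadro's number, 1/mol

M = 0.040               # molar mass, kg/mol
m = M / NA              # mass of one molecule, kg
T = 300.0               # K
V = 0.020e-3            # 0.020 L expressed in m^3

z_t = (2 * math.pi * m * k * T / h**2) ** 1.5 * V
print(f"z_t = {z_t:.1e}")   # about 5 x 10^27, as stated above
```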
Given $z_t$, we can estimate the probability that any one of the energy levels available to this molecule is occupied. For any energy level, the upper limit to the term $\mathrm{exp} \left({-\epsilon }_i/kT\right)$ is one. If the quantum numbers $n_x$, $n_y$, and $n_z$ are different from one another, the corresponding molecular energy is non-degenerate. To a good approximation, we have $g_i=1$. We find
$\frac{N_i}{\overline{N}}=\frac{g_i \mathrm{exp} \left(-{\epsilon }_i/kT\right)}{z_t}<\frac{1}{z_t}=2\times 10^{-28} \nonumber$
For one mole of this gas, we calculate $N_i<{\overline{N}}/{z_t}\approx 1\times 10^{-4}$. When a mole of this gas occupies $0.020\ \mathrm{L}$, the system density approximates that of a liquid. Therefore, even in circumstances selected to minimize the number of energy levels, there is less than one gas molecule per ten thousand energy levels.
For translational energy levels of gas molecules, it is an excellent approximation to say that each molecule occupies a different translational energy level. This is a welcome result, because it assures us that the translational partition function for a system containing a gas of $N$ indistinguishable non-interacting molecules is just
$Z_t=\frac{1}{N!} \left(\frac{2\pi mkT}{h^2}\right)^{3N/2}V^N \nonumber$
$Z_t$ is thus the translational partition function for a system of $N$ ideal gas molecules.
We derive $Z_t$ from the assumption that every equilibrium population number, $N^{\textrm{⦁}}_i$, for the molecular energy levels satisfies $N^{\textrm{⦁}}_i\le 1$. We use $Z_t$ and the ensemble-treatment results that we develop in Chapter 23 to find thermodynamic functions for the $N$-molecule ideal-gas system. The ensemble development assumes that the number of systems, $\hat{N}^{\textrm{⦁}}_i$, in the ensemble that have energy $E_i$ is very large. Since the ensemble is a creature of our imaginations, we can imagine that $\hat{N}$ is as big as it needs to be in order that $\hat{N}^{\textrm{⦁}}_i$ be big enough. The population sets $N^{\textrm{⦁}}_i$ and $\hat{N}^{\textrm{⦁}}_i$ are independent; they characterize different distributions. The fact that $N^{\textrm{⦁}}_i\le 1$ is irrelevant when we apply Lagrange’s method to find the distribution function for $\hat{N}^{\textrm{⦁}}_i$, the partition function $Z_t$, and the thermodynamic functions for the system. Consequently, the ensemble treatment enables us to find the partition function for an ideal gas, $Z_{IG}$, by arguments that avoid the questions that arise when we apply Lagrange’s method to the distribution of molecular translational energies.
At this point in our development, we have a theory that gives the thermodynamic properties of a polyatomic ideal gas molecule. To proceed, however, we must know the energy of every quantum state that is available to the molecule. There is more than one way to obtain this information. We will examine one important method—one that involves a further idealization of molecular behavior.
We have made great progress by using the ideal gas model, and as we have noted repeatedly, the essential feature of the ideal gas model is that there are no attractive or repulsive forces between its molecules. Now we assume that the molecule’s translational, rotational, vibrational, and electronic motions are independent of one another. We could say that this idealization defines super-ideal gas molecules; not only does one molecule not interact with another molecule, an internal motion of one of these molecules does not interact with the other internal motions of the same molecule!
The approximation that a molecule’s translational motion is independent of its rotational, vibrational, and electronic motions is usually excellent. The approximation that its intramolecular rotational, vibrational and electronic motions are also independent proves to be surprisingly good. Moreover, the very simple quantum mechanical systems that we describe in Chapter 18 prove to be surprisingly good models for the individual kinds of intramolecular motion. The remainder of this chapter illustrates these points.
In Chapter 18, we note that a molecule’s wavefunction can be approximated as a product of a wavefunction for rotations, a wavefunction for vibrations, and a wavefunction for electronic motions. (As always, we are simply quoting quantum mechanical results that we make no effort to derive; we begin with the knowledge that the quantum mechanical problems have been solved and that the appropriate energy levels are available for our use.) Our goal is to see how we can apply the statistical mechanical results we have obtained to calculate the thermodynamic properties of ideal gases. To illustrate the essential features, we consider diatomic molecules. The same considerations apply to polyatomic molecules; there are additional complications, but none that introduce new principles.
For diatomic molecules, we need to consider the energy levels for translational motion in three dimensions, the energy levels for rotation in three dimensions, the energy levels for vibration along the inter-nuclear axis, and the electronic energy states.
24.05: The Partition Function for a Gas of Indistinguishable Molecules
We represent the successive molecular energy levels as ${\epsilon }_i$ and the successive translational, rotational, vibrational, and electronic energy levels as ${\epsilon }_{t,a}$, ${\epsilon }_{r,b}$, ${\epsilon }_{v,c}$, and ${\epsilon }_{e,d}$. Now the first subscript specifies the energy mode; the second specifies the energy level. We approximate the successive energy levels of a diatomic molecule as
${\epsilon }_1={\epsilon }_{t,1}+{\epsilon }_{r,1}+{\epsilon }_{v,1}+{\epsilon }_{e,1} \nonumber$ ${\epsilon }_2={\epsilon }_{t,2}+{\epsilon }_{r,1}+{\epsilon }_{v,1}+{\epsilon }_{e,1} \nonumber$
$\dots \nonumber$ ${\epsilon }_i={\epsilon }_{t,a}+{\epsilon }_{r,b}+{\epsilon }_{v,c}+{\epsilon }_{e,d} \nonumber$
$\dots \nonumber$
In Section 22.1, we find that the partition function for the molecule becomes
\begin{align*} z&=\sum^{\infty }_{a=1}{\sum^{\infty }_{b=1}{\sum^{\infty }_{c=1}{\sum^{\infty }_{d=1}{g_{t,a}}}}}g_{r,b}g_{v,c}g_{e,d} \times {\mathrm{exp} \left[\frac{-\left({\epsilon }_{t,a}+{\epsilon }_{r,b}+{\epsilon }_{v,c}+{\epsilon }_{e,d}\right)}{kT}\right]\ } \\[4pt] &=z_tz_rz_vz_e \end{align*}
where $z_t$, $z_r$, $z_v$, and $z_e$ are the partition functions for the individual kinds of motion that the molecule undergoes; they are sums over the corresponding energy levels for the molecule. This is essentially the same argument that we use in Section 22.1 to show that the partition function for an $N$-molecule system is a product of $N$ molecular partition functions:
$Z=z^N. \nonumber$
We are now able to write the partition function for a gas containing $N$ molecules of the same substance. Since the molecules of a gas are indistinguishable, we use the relationship
$Z_{\mathrm{indistinguishable}}=\frac{1}{N!}z^N=\frac{1}{N!}{\left(z_tz_rz_vz_e\right)}^N \nonumber$
To make the notation more compact and to emphasize that we have specialized the discussion to the case of an ideal gas, let us replace “$Z_{\mathrm{indistinguishable}}$” with “$Z_{\mathrm{IG}}$”. Also, recognizing that $N!$ enters the relationship because of molecular indistinguishability, and molecular indistinguishability arises because of translational motion, we regroup the terms, writing
$Z_{\mathrm{IG}}=\left[\frac{{\left(z_t\right)}^N}{N!}\right]{\left(z_r\right)}^N{\left(z_v\right)}^N{\left(z_e\right)}^N \nonumber$
Our goal is to calculate the thermodynamic properties of the ideal gas. These properties depend on the natural logarithm of the ideal-gas partition function. This is a sum of terms:
${ \ln Z_{IG}\ }={ \ln \left[\frac{{\left(z_t\right)}^N}{N!}\right]+N{ \ln z_r\ }+N{ \ln z_v\ }+N{ \ln z_e\ }\ } \nonumber$
In our development of classical thermodynamics, we find it convenient to express the properties of a substance on a per-mole basis. For the same reasons, we focus on evaluating ${ \ln Z_{IG}\ }$ for one mole of gas; that is, for the case that $N$ is Avogadro’s number, $\overline{N}$. We now examine the relationships that enable us to evaluate each of these contributions to ${ \ln Z_{IG}\ }$.
We can make use of Stirling’s approximation to write the translational contribution to ${ \ln Z_{IG}\ }$ per mole of ideal gas. This is
$\ln \left[\frac{\left(z_t\right)^{\overline{N}}}{\overline{N}!}\right] =\overline{N} \ln z_t -\overline{N} \ln \overline{N} +\overline{N}=\overline{N}+\overline{N} \ln \frac{z_t}{\overline{N}} \nonumber$
(We omit the other factors in Stirling’s approximation. Their contribution to the thermodynamic values we calculate is less than the uncertainty introduced by the measurement errors in the molecular parameters we use.) In Section 24.3 we find the molecular partition function for translation:
$z_t= \left(\frac{2\pi mkT}{h^2}\right)^{3/2}V \nonumber$
For one mole of an ideal gas, $\overline{V}={\overline{N}kT}/{P}$. The translational contribution to the partition function for one mole of an ideal gas becomes
$\ln \left[\frac{\left(z_t\right)^{\overline{N}}}{\overline{N}!}\right] =\overline{N}+\overline{N} \ln \left[ \left(\frac{2\pi mkT}{h^2}\right)^{3/2}\frac{\overline{V}}{\overline{N}}\right] =\overline{N}+\overline{N} \ln \left[\left(\frac{2\pi mkT}{h^2}\right)^{3/2}\frac{kT}{P}\right] \nonumber$
24.07: The Electronic Partition Function of a Diatomic Molecule
Our quantum-mechanical model for a diatomic molecule takes the zero of energy to be the infinitely separated atoms at rest—that is, with no kinetic energy. The electrical interactions among the nuclei and electrons are such that, as the atoms approach one another, a bond forms and the energy of the two-atom system decreases. At some inter-nuclear distance, the energy reaches a minimum; at shorter inter-nuclear distances, the repulsive interactions between nuclei begin to dominate, and the energy increases. We can use quantum mechanics to find the wavefunction and energy of the molecule when the nuclei are separated to any fixed distance. By repeating the calculation at a series of inter-nuclear distances, we can find the distance at which the molecular energy is a minimum. We take this minimum energy as the electronic energy of the molecule, and the corresponding inter-nuclear distance as the bond length. This is the energy of the lowest electronic state of the molecule. The lowest electronic state is called the ground state.
Excited electronic states exist, and their energies can be estimated from spectroscopic measurements or by quantum mechanical calculation. For most molecules, these excited electronic states are at much higher energy than the ground state. When we compare the terms in the electronic partition function, we see that
${\mathrm{exp} \left({-{\epsilon }_{e,1}}/{kT}\right)\ }\gg {\mathrm{exp} \left({-{\epsilon }_{e,2}}/{kT}\right)\ } \nonumber$
The term for any higher energy level is insignificant compared to the term for the ground state. The electronic partition function becomes just
$z_e=g_1{\mathrm{exp} \left({-{\epsilon }_{e,1}}/{kT}\right)\ } \nonumber$
The ground-state degeneracy, $g_1$, is one for most molecules. For unusual molecules the ground-state degeneracy can be greater; for molecules with one unpaired electron, it is two.
The energy of the electronic ground state that we obtain by direct quantum mechanical calculation includes the energy effects of the motions of the electrons and the energy effects from the electrical interactions among the electrons and the stationary nuclei. Because we calculate it for stationary nuclei, the electronic energy does not include the energy of nuclear motions. The ground state electronic energy is the energy released when the atoms come together from infinite separation to a state in which they are at rest at the equilibrium inter-nuclear separation. This is just minus one times the work required to separate the atoms to an infinite distance, starting from the inter-nuclear separation with the smallest energy. On a graph of electronic (or potential) energy versus inter-nuclear distance, the ground state energy is just the depth of the energy well measured from the top down $\left({\epsilon }_{e,1}<0\right)$. The work required to separate one mole of these molecules into their constituent atoms is called the equilibrium dissociation energy, and conventionally given the symbol $D_e$. These definitions mean that $D_e>0$ and $D_e=-\overline{N}{\epsilon }_{e,1}$.
In practice, the energy of the electronic ground state is often estimated from spectroscopic measurements. By careful study of its spectra, it is possible to find out how much energy must be added, as a photon, to cause a molecule to dissociate into atoms. Expressed per mole, this energy is called the spectroscopic dissociation energy, and it is conventionally given the symbol $D_0$. These spectroscopic measurements involve the absorption of photons by real molecules. Before they absorb the photon, these molecules already have energy in the form of vibrational and rotational motions. So the real molecules that are involved in any spectroscopic measurement have energies that are greater than the energies of the hypothetical motionless-atom molecules at the bottom of the potential energy well. This means that less energy is required to separate the real molecule than is required to separate the hypothetical molecule at the bottom of the well. For any molecule, $D_e\mathrm{>}D_0$.
To have the lowest possible energy, a real molecule must be in its lowest rotational and lowest vibrational energy levels. As it turns out, a molecule can have zero rotational energy, but its vibrational energy can never be zero. In Section 24.8 we review the harmonic oscillator approximation. In its lowest vibrational energy level $\left(n=0\right)$, a diatomic molecule’s vibrational energy is ${h\nu }/{2}$. $D_0$ and $\nu$ can be estimated from spectroscopic experiments. We estimate
${\epsilon }_{e,1}=-\frac{D_e}{\overline{N}}=-\left(\frac{D_0}{\overline{N}}+\frac{h\nu }{2}\right) \nonumber$
and the molecular electronic partition function becomes
$z_e=g_1{\mathrm{exp} \left(\frac{D_0}{\overline{N}kT}+\frac{h\nu }{2kT}\right)\ } \nonumber$
or
$z_e=g_1{\mathrm{exp} \left(\frac{D_0}{RT}+\frac{h\nu }{2kT}\right)\ } \nonumber$
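Because ${D_0}/{RT}$ is large at ordinary temperatures, $z_e$ itself is an enormous number, and it is more convenient to work with ${ \ln z_e\ }$. The sketch below (assuming $g_1=1$ and using the $H_2$ values quoted later in Table 2: $D_0=432.073\ \mathrm{kJ\ {mol}^{-1}}$ and $\nu =1.31948\times {10}^{14}\ \mathrm{Hz}$) evaluates ${ \ln z_e\ }={D_0}/{RT}+{h\nu }/{2kT}$ at 298.15 K.

```python
# ln z_e = D0/RT + h*nu/(2kT) for H2 at 298.15 K, assuming g_1 = 1.
# D0 and nu are the H2 values listed later in Table 2.
h, k, R = 6.62607015e-34, 1.380649e-23, 8.31446
T  = 298.15
D0 = 432073.0        # J/mol
nu = 1.31948e14      # Hz

print(f"D0/RT      = {D0 / (R * T):.2f}")          # about 174.3
print(f"h nu/(2kT) = {h * nu / (2 * k * T):.2f}")  # about 10.6
print(f"ln z_e     = {D0 / (R * T) + h * nu / (2 * k * T):.1f}")
```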
We base the electronic potential energy for a diatomic molecule on a model in which the nuclei are stationary at the bottom of the electronic potential energy well. We now want to expand this model to include vibrational motion of the atoms along the line connecting their nuclei. It is simple, logical, and effective to model this motion using the quantum mechanical treatment of the classical (Hooke’s law) harmonic oscillator.
A Hooke’s law oscillator has a location, $r_0$, at which the restoring force, $F\left(r_0\right)$, and the potential energy, $\epsilon \left(r_0\right)$, are zero. As it is displaced from $r_0$, the oscillator experiences a restoring force that is proportional to the magnitude of the displacement, $dF=-\lambda \ dr$. Then, we have
$\int^r_{r_0}{dF}=-\lambda \ \int^r_{r_0}{dr} \nonumber$
so that $F\left(r\right)-F\left(r_o\right)=-\lambda \left(r-r_0\right)$. Since $F\left(r_o\right)=0$, we have $F\left(r\right)=-\lambda \left(r-r_0\right)$. The change in the oscillator’s potential energy is proportional to the square of the displacement,
$\epsilon \left(r\right)-\epsilon \left(r_o\right)=\int^r_{r_0}{-F\ dr}=\lambda \ \int^r_{r_0}{\left(r-r_0\right)dr\ }=\frac{\lambda }{2}{\left(r-r_0\right)}^2 \nonumber$
Since we take $\epsilon \left(r_o\right)=0$, we have $\epsilon \left(r\right)={\lambda {\left(r-r_0\right)}^2}/{2}$. Taking the second derivative, we find
$\frac{d^2\epsilon }{{dr}^2}=\lambda \nonumber$
Therefore, if we determine the electronic potential energy function accurately near $r_0$, we can find $\lambda$ from its curvature at $r_0$.
In Chapter 18, we note that the Schrödinger equation for such an oscillator can be solved and that the resulting energy levels are given by ${\epsilon }_n=h\nu \left(n+{1}/{2}\right)$ where $\nu$ is the vibrational frequency. The relationship between frequency and force constant is
$\nu =\frac{1}{2\pi }\sqrt{\frac{\lambda }{m}} \nonumber$
where the oscillator consists of a single moving mass, $m$. In the case where masses $m_1$ and $m_2$ oscillate along the line joining their centers, it turns out that the same equations describe the relative motion, if the mass, $m$, is replaced by the reduced mass
$\mu =\frac{m_1m_2}{m_1+m_2} \nonumber$
Therefore, in principle, we can find the characteristic frequency, $\nu$, of a diatomic molecule by accurately calculating the dependence of the electronic potential energy on $r$ in the vicinity of $r_0$. When we know $\nu$, we know the vibrational energy levels available to the molecule. Alternatively, as discussed in Section 24.7, we can obtain information about the molecule’s vibrational energy levels from its infrared absorption spectrum and use these data to find $\nu$. Either way, once we know $\nu$, we can evaluate the vibrational partition function. We have
$z_v=\sum^{\infty }_{n=0} \mathrm{exp} \left[-\frac{h\nu }{kT}\left(n+\frac{1}{2}\right)\right] =\frac{\mathrm{exp} \left(-h\nu /2kT\right)}{1- \mathrm{exp} \left(-h\nu /kT\right)} \nonumber$
where we take advantage of the fact that the vibrational partition function is the sum of a geometric series, as we show in Section 22.6.
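As a numerical check, the sketch below compares this closed form with a direct, truncated sum over the harmonic-oscillator levels. The frequency used is the $I_2$ value quoted later in Table 2; the temperature, 298.15 K, is the one used in the calculations that follow.

```python
# Vibrational partition function: geometric-series closed form versus direct sum.
# nu is the I2 value listed later in Table 2; T = 298.15 K.
import math

h, k = 6.62607015e-34, 1.380649e-23
T  = 298.15
nu = 6.43071e12                  # Hz
x  = h * nu / (k * T)            # h*nu/kT

z_closed = math.exp(-x / 2) / (1 - math.exp(-x))
z_sum    = sum(math.exp(-x * (n + 0.5)) for n in range(200))   # 200 terms is ample

print(f"closed form: {z_closed:.6f}")
print(f"direct sum : {z_sum:.6f}")
```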
24.09: The Rotational Partition Function of a Diatomic Molecule
For a diatomic molecule that is free to rotate in three dimensions, we can distinguish two rotational motions; however, their wave equations are intertwined, and the quantum mechanical result is that there is one set of degenerate rotational energy levels. The energy levels are
${\epsilon }_{r,J}=\frac{J\left(J+1\right)h^2}{8{\pi }^2I} \nonumber$
with degeneracies $g_J=2J+1$, where $J=0,\ 1,\ 2,\ 3,\dots$.
(Recall that $I$ is the moment of inertia, defined as $I=\sum{m_ir^2_i}$, where $r_i$ is the distance of the $i^{th}$ nucleus from the molecule’s center of mass. For a diatomic molecule, $XY$, whose internuclear distance is $r_{XY}$, the values of $r_X$ and $r_Y$ must satisfy the conditions $r_X+r_Y=r_{XY}$ and $m_Xr_X=m_Yr_Y$. From these relationships, it follows that the moment of inertia is $I=\mu r^2_{XY}$, where $\mu$ is the reduced mass.) For heteronuclear diatomic molecules, the rotational partition function is
$z_r=\sum^{\infty }_{J=0}{\left(2J+1\right)}{\mathrm{exp} \left[-\frac{J\left(J+1\right)h^2}{8{\pi }^2IkT}\right]\ } \nonumber$
For homonuclear diatomic molecules, there is a complication. This complication occurs in the quantum mechanical description of the rotation of any molecule for which there is more than one indistinguishable orientation in space. When we specify the locations of the atoms in a homonuclear diatomic molecule, like $H_2$, we must specify the coordinates of each atom. If we rotate this molecule by ${360}^{\mathrm{o}}$ in a plane, the molecule and the coordinates are unaffected. If we rotate it by only ${180}^{\mathrm{o}}$ in a plane, the coordinates of the nuclei change, but the rotated molecule is indistinguishable from the original molecule. Our mathematical model distinguishes the ${180}^{\mathrm{o}}$-rotated molecule from the original, unrotated molecule, but nature does not.
This means that there are twice as many energy levels in the mathematical model as actually occur in nature. The rotational partition function for a homonuclear diatomic molecule is exactly one-half of the rotational partition function for an “otherwise identical” heteronuclear diatomic molecule. To cope with this complication in general, it proves to be useful to define a quantity that we call the symmetry number for any molecule. The symmetry number is usually given the symbol $\sigma$; it is just the number of ways that the molecule can be rotated into indistinguishable orientations. For a homonuclear diatomic molecule, $\sigma =2$; for a heteronuclear diatomic molecule, $\sigma =1$.
Making use of the symmetry number, the rotational partition function for any diatomic molecule becomes
$z_r=\left(\frac{1}{\sigma }\right)\sum^{\infty }_{J=0}{\left(2J+1\right)}{\mathrm{exp} \left[-\frac{J\left(J+1\right)h^2}{8{\pi }^2IkT}\right]\ } \label{exact}$
For most molecules at ordinary temperatures, the lowest rotational energy level is much less than $kT$, and this infinite sum can be approximated to good accuracy as the corresponding integral. That is
$z_r \approx \left(\frac{1}{\sigma }\right)\int^{\infty }_{J=0}{\left(2J+1\right){\mathrm{exp} \left[-\frac{J\left(J+1\right)h^2}{8{\pi }^2IkT}\right]\ }}dJ \nonumber$
Initial impressions notwithstanding, this integral is easily evaluated. The substitutions $a={h^2}/{8{\pi }^2IkT}$ and $u=J\left(J+1\right)$, for which $du=\left(2J+1\right)dJ$, yield
\begin{align} z_r & \approx \left(\frac{1}{\sigma }\right)\int^{\infty }_{u=0} \mathrm{exp} \left(-au\right) du \\[4pt] & \approx \left(\frac{1}{\sigma }\right)\left(\frac{1}{a}\right)=\frac{8{\pi }^2IkT}{\sigma h^2} \label{approx}\end{align}
To see that this is a good approximation for most molecules at ordinary temperatures, we calculate the successive terms in the partition function of the hydrogen molecule at $25\ ^{\mathrm{o}}\mathrm{C}$. The results are shown in Table 1. We choose hydrogen because the energy difference between successive rotational energy levels becomes greater the smaller the values of $I$ and $T$. Since hydrogen has the smallest moment of inertia of any molecule, the integral approximation will be less accurate for hydrogen than for any other molecule at the same temperature. For hydrogen, summing the first seven terms in the exact calculation (Equation \ref{exact}) gives $z_{\mathrm{rotation}}=1.87989$, whereas the approximate calculation (Equation \ref{approx}) gives $1.70284$. This difference corresponds to a difference of $245\ \mathrm{J}$ in the rotational contribution to the standard Gibbs free energy of molecular hydrogen.
Table 1: Rotational Partition Function Contributions for Molecular Hydrogen at 298 K
J $z_J=\frac{\left(2J+1\right)}{\sigma }\ \mathrm{exp} \left(-\frac{J\left(J+1\right)h^2}{8{\pi }^2IkT}\right)$ $z_r\approx \sum{z_J}$
0 0.50000 0.50000
1 0.83378 1.33378
2 0.42935 1.76313
3 0.10323 1.86637
4 0.01267 1.87904
5 0.00082 1.87986
6 0.00003 1.87989
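The entries in Table 1 are easy to reproduce. The sketch below recomputes the term-by-term sum and the integral approximation for $H_2$ at 298.15 K; the hydrogen atomic mass ($1.008\ \mathrm{g\ {mol}^{-1}}$) and the bond length quoted later in Table 2 are the only inputs, and the results agree with the table to within the rounding of the physical constants.

```python
# Reproduce Table 1: rotational partition function of H2 at 298.15 K, term by
# term, and compare with the integral approximation 8*pi^2*I*k*T/(sigma*h^2).
import math

h, k, NA = 6.62607015e-34, 1.380649e-23, 6.02214076e23
T     = 298.15
sigma = 2                          # homonuclear diatomic
m_H   = 1.008e-3 / NA              # mass of one H atom, kg
mu    = m_H * m_H / (m_H + m_H)    # reduced mass
r     = 7.4144e-11                 # H2 bond length, m (Table 2)
I     = mu * r**2                  # moment of inertia, kg m^2

a = h**2 / (8 * math.pi**2 * I * k * T)

running = 0.0
for J in range(7):
    z_J = (2 * J + 1) / sigma * math.exp(-J * (J + 1) * a)
    running += z_J
    print(f"J = {J}: z_J = {z_J:.5f}   running sum = {running:.5f}")

print(f"integral approximation: {1.0 / (sigma * a):.5f}")   # about 1.703
```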
In our discussion of ensembles, we find that the thermodynamic functions for a system can be expressed as functions of the system’s partition function. Now that we have found the molecular partition function for a diatomic ideal gas molecule, we can find the partition function, $Z_{IG}$, for a gas of $N$ such molecules. From this system partition function, we can find all of the thermodynamic functions for this $N$-molecule ideal-gas system. The system entropy, energy, and partition function are related to each other by the equation
$S=\frac{E}{T}+k{ \ln Z\ }_{IG} \nonumber$
Rearranging, and adding $\left(PV\right)_{\mathrm{system}}$ to both sides, we find the Gibbs free energy
$G=E-TS+\left(PV\right)_{\mathrm{system}}= \left(PV\right)_{\mathrm{system}}-kT \ln Z_{IG} \nonumber$
For a system of one mole of an ideal gas, we have $\left(PV\right)_{\mathrm{system}}=\overline{N}kT$. If the ideal gas is diatomic, we can substitute the molecular partition functions developed above to find
\begin{align*} G_{IG}&=\overline{N}kT-kT \ln Z_{IG} \\[4pt] &=\overline{N}kT-kT \ln \left[\frac{\left(z_t\right)^{\overline{N}}}{\overline{N}!}\right] -\overline{N}kT \ln z_r -\overline{N}kT \ln z_v -\overline{N}kT \ln z_e \\[4pt] &=\overline{N}kT-\overline{N}kT-\overline{N}kT \ln \left[\left(\frac{2\pi mkT}{h^2}\right)^{3/2}\frac{kT}{P}\right] -\overline{N}kT \ln \left(\frac{8{\pi }^2IkT}{\sigma h^2}\right) -\overline{N}kT \ln \left(\frac{\mathrm{exp} \left(-h\nu/2kT\right)}{1-\mathrm{exp} \left(-h\nu /kT\right)}\right) -\overline{N}kT\left(\frac{D_0}{RT}+\frac{h\nu }{2kT}\right) \end{align*}
For the standard Gibbs free energy of an ideal gas, we define the pressure to be one bar. Introduction of this condition $\left(P=P^o=1\ \mathrm{bar}={10}^5\ \mathrm{Pa}\right)$ and further simplification gives
$G^o_{IG}=-RT \ln \left[\left(\frac{2\pi mkT}{h^2}\right)^{3/2}\frac{kT}{P^o}\right] -RT \ln \left(\frac{8{\pi }^2IkT}{\sigma h^2}\right) -RT \ln \left(\frac{\mathrm{exp} \left(-h\nu/2kT\right)}{1-\mathrm{exp} \left(-h\nu /kT\right)}\right)-RT\left(\frac{D_0}{RT}+\frac{h\nu }{2kT}\right) \nonumber$
In this form, the successive terms represent, respectively, the translational, rotational, vibrational, and electronic contributions to the Gibbs free energy. Further simplification results because vibrational and electronic contributions from terms involving $h\nu /2kT$ cancel. This is a computational convenience. Factoring out $RT$,
$G^o_{IG}=-RT\left\{ \ln \left[\left(\frac{2\pi mkT}{h^2}\right)^{3/2}\frac{kT}{P^o}\right] + \ln \left(\frac{8{\pi }^2IkT}{\sigma h^2}\right) - \ln \left(1-\mathrm{exp} \left(-h\nu/kT \right) \right) +\frac{D_0}{RT}\right\} \nonumber$
24.11: The Standard Gibbs Free Energy for $H_2(g)$, $I_2(g)$, and $HI(g)$
For many diatomic molecules, the data needed to calculate $G^o_{IG}$ are readily available in various compilations. For illustration, we consider the molecules $H_2$, $I_2$, and $HI$. The necessary experimental data are summarized in Table 2.
Table 2: Data${}^{1}$ for the calculation of partition functions for $H_2 ~ (g)$, $I_2 ~ (g)$, and $HI ~ (g)$
Compound Molar mass, g mol$^{-1}$ $D_0$, kJ mol$^{-1}$ $\nu$, Hz $r_{XY}$, m
$H_2$ $2.016$ $432.073$ $1.31948 \times 10^{14}$ $7.4144 \times 10^{−11}$
$I_2$ $253.82$ $148.81$ $6.43071 \times 10^{12}$ $2.666 \times 10^{−10}$
$HI$ $127.918$ $294.67$ $6.69227 \times 10^{13}$ $1.60916 \times 10^{−10}$
The terms in the simplified equation for the standard Gibbs free energy at $298.15$ K are given in Table 3.
Table 3: Gibbs free energy components
Compound $\ln \left[ \left( \frac{2 \pi mkT}{h^2} \right)^{3/2} \frac{kT}{P^o} \right]$ $\ln \left( \frac{8 \pi^2 IkT}{\sigma h^2} \right)$ $- \ln \left( 1 - e^{-h \nu/kT} \right)$ $\frac{D_0}{RT}$
$H_2$ 126.23929 0.6312* 0.0000 174.295
$I_2$ 133.49256 7.932 0.4388 60.0289
$HI$ 132.46470 3.4604 0.00002 118.868
*Calculated as a sum of terms (see Table 1) rather than as the integral approximation.
Finally, the standard molar Gibbs Free Energies at $298.15$ K are summarized in Table 4.
Table 4: Calculated Gibbs free energies
Compound $G^o_{298 ~ \text{K}}, ~ \text{kJ mole}^{-1}$
$H_2$ −746.577
$I_2$ −500.471
$HI$ −631.622
These results can be used to calculate the standard Gibbs free energy change, at $298.15$ K, for the reaction
$H_2\left(g\right)+I_2\left(g\right)\to 2HI\left(g\right). \nonumber$
We find
${\Delta }_rG^o_{298}=2G^o\left(HI,\ g,\ 298.15\ \mathrm{K}\right)-G^o\left(H_2,\ g,\ 298.15\ \mathrm{K}\right)-G^o\left(I_2,\ g,\ 298.15\ \mathrm{K}\right)=-16.20\ \mathrm{kJ} \nonumber$
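This arithmetic is simple to verify directly from the Table 4 values:

```python
# Reaction Gibbs free energy at 298.15 K from the Table 4 values (kJ per mole of each gas).
G_H2, G_I2, G_HI = -746.577, -500.471, -631.622
dG = 2 * G_HI - G_H2 - G_I2
print(f"Delta_r G = {dG:.2f} kJ")   # -16.20 kJ, as above
```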
The standard Gibbs free energies of formation${}^{1}$ for $HI\left(g\right)$ and $I_2\left(g\right)$ are $1.7\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$ and $19.3\ \mathrm{kJ}\ {\mathrm{mol}}^{-1}$, respectively. Calculation of the Gibbs free energy of this reaction from thermochemical data gives ${\Delta }_rG^o\left(298.15\ \mathrm{K}\right)=-15.9\ \mathrm{kJ}$. The difference between this value and the value calculated above is $0.3\ \mathrm{kJ}$. The magnitude of this difference is consistent with the number of significant figures given for the tabulated thermochemical data. However, some error results because we have used the simplest possible quantum mechanical models for rotational and vibrational motions. The accuracy of the statistical–mechanical calculation can be increased by using models in which the vibrational oscillator does not follow Hooke’s law exactly and in which the rotating molecule is not strictly rigid.
24.13: The Reference State for Molecular Partition Functions
In Sections 24.11 and 24.12, we see that the standard Gibbs free energy, $G^o$, that we calculate from our statistical thermodynamic model is not the same quantity as the Gibbs free energy of formation, ${\Delta }_fG^o$. Nevertheless, these calculations show that we can use the statistical-thermodynamic Gibbs free energies of the reacting species to calculate the Gibbs free energy change for a reaction in exactly the same way that we use the corresponding Gibbs free energies of formation.
The use of Gibbs free energies of formation for these calculations is successful because we measure all Gibbs free energies of formation relative to the Gibbs free energies of the constituent elements in their standard states. By convention, we set the standard-state Gibbs free energies of the elements equal to zero, but this is incidental; our method is successful because the Gibbs free energies of the constituent elements cancel out when we calculate the Gibbs free energy change for a reaction from the Gibbs free energies of formation of the reacting species.
Our statistical-mechanical Gibbs free energies represent the Gibbs free energy change for a different process. They correspond to the formation of the molecule from its isolated constituent atoms. The isolated constituent atoms are the reference state for our statistical-mechanical calculation of standard molar Gibbs free energies. We choose the Gibbs free energies of the isolated atoms to be zero. (Whatever Gibbs free energies we might assign to the isolated atoms, they cancel out when we calculate the Gibbs free energy change for a reaction from the statistical-thermodynamic Gibbs free energies of the reacting species.)
When we sum the component energies of our model for a diatomic molecule, we have
${\epsilon }_{\mathrm{molecule}}={\epsilon }_t+{\epsilon }_r+{\epsilon }_v+{\epsilon }_e. \nonumber$
The smallest of these quantum-mechanically allowed values for ${\epsilon }_{\mathrm{molecule}}$ is particularly significant in our present considerations. Once we have created the molecule in its lowest energy state, we can reach any other state simply by adding energy to it. When the isolated constituent atoms are the reference state, the energy of the lowest-energy state of the molecule is the energy exchanged with the surroundings when the molecule is formed in this state from the constituent atoms.
Reviewing our models for the motions that a diatomic molecule can undergo, we see that the translational and rotational energies can be zero. The smallest vibrational energy is ${h\nu }/{2}$, and the smallest electronic energy is $-\left(\frac{D_0}{\overline{N}}+\frac{h\nu }{2}\right) \nonumber$
The minimum molecular energy is ${\epsilon }^{\mathrm{minimum}}_{\mathrm{molecule}}=-{D_0}/{\overline{N}}<0$. Since ${D_0}/{\overline{N}}$ is the energy required to just separate the diatomic molecule into its constituent atoms, the end product of this process is two stationary atoms, situated at an infinite distance from one another. Conversely, $-{\epsilon }^{\mathrm{minimum}}_{\mathrm{molecule}}={D_0}/{\overline{N}}$ is the energy released when the stationary constituent atoms approach one another from infinite separation to form the diatomic molecule in its lowest energy state. The reference state for the statistical-mechanical calculation of molecular thermodynamic properties is a set of isolated constituent atoms that have no kinetic energy. The stipulation that the reference-state atoms have no kinetic energy is often expressed by saying that the reference state is the constituent atoms at the absolute zero of temperature.
24.14: Problems
1. The partition function, $Z$, for a system of $N$, distinguishable, non-interacting molecules is $Z=z^N$, where $z$ is the molecular partition function, $z=\sum{g_i}{\mathrm{exp} \left({-{\epsilon }_i}/{kT}\right)\ }$, and the ${\epsilon }_i$ and $g_i$ are the energy levels available to the molecule and their degeneracies. Show that the thermodynamic functions for the $N$-molecule system depend on the molecular partition function as follows:
(a) $E=NkT^2{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V$
(b) $S=NkT{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V+Nk{ \ln z\ }$
(c) $A=-NkT{ \ln z\ }$
(d) $P_{\mathrm{system}}=NkT{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T$
(e) $H=NkT^2{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V+NkTV{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T$
(f) $G=-NkT{ \ln z\ }+NkTV{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T$
2. When the number of available quantum states is much larger than the number of molecules, the partition function, $Z$, for a system of $N$, indistinguishable, non-interacting molecules is $Z={z^N}/{N!}$, where $z$ is the molecular partition function, $z=\sum{g_i}{\mathrm{exp} \left({-{\epsilon }_i}/{kT}\right)\ }$, and the ${\epsilon }_i$ and $g_i$ are the energy levels available to the molecule and their degeneracies. Show that the thermodynamic functions for the N-molecule system depend on the molecular partition function as follows:
(a) $E=NkT^2{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V$
(b) $S=Nk\left[T{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V+{ \ln \frac{z}{N}\ }+1\right]$
(c) $A=-NkT\left[1+{ \ln \frac{z}{N}\ }\right]$
(d) $P_{\mathrm{system}}=NkT{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T$
(e) $H=NkT^2{\left(\frac{\partial { \ln z\ }}{\partial T}\right)}_V+NkTV{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T$
(f) $G=-NkT\left[1+{ \ln \frac{z}{N}\ }-V{\left(\frac{\partial { \ln z\ }}{\partial V}\right)}_T\right]$
3. The molecular partition function for the translational motion of an ideal gas is
$z_t= \left(\frac{2\pi mkT}{h^2}\right)^{3/2}V \nonumber$
The partition function for a gas of $N$, monatomic, ideal-gas molecules is $Z={z^N_t}/{N!}$. Show that the thermodynamic functions are as follows:
(a) $E=\frac{3}{2}NkT$
(b) $S=Nk\left[\frac{5}{2}+{ \ln \frac{z}{N}\ }\right]$
(c) $A=-NkT\left[1+{ \ln \frac{z}{N}\ }\right]$
(d) $P_{\mathrm{system}}=\frac{NkT}{V}$
(e) $H=\frac{5}{2}NkT$
(f) $G=-NkT{ \ln \frac{z}{N}\ }$
4. Find $E$, $S$, $A$, $H$, and $G$ for one mole of Xenon at $300$ K and $1$ bar.
Notes
${}^{1}$ Data from the Handbook of Chemistry and Physics, 79${}^{th}$ Ed., David R. Lide, Ed., CRC Press, New York, 1998.
25: Bose-Einstein and Fermi-Dirac Statistics
In developing the theory of statistical thermodynamics and the Boltzmann distribution function, we assume that molecules are distinguishable and that any number of molecules in a system can have the same quantum mechanical description. These assumptions are not valid for many chemical systems. Fortunately, it turns out that more rigorous treatment${}^{1,2}$ of the conditions imposed by quantum mechanics usually leads to the same conclusions as the Boltzmann treatment. The Boltzmann treatment can become inadequate when the system consists of low-mass particles (like electrons) or when the system temperature is near absolute zero.
In this chapter, we introduce some modifications that make our statistical model more rigorous. We consider systems that contain large numbers of particles. We address the effects that the principles of quantum mechanics have on the equilibrium states that are possible, but we continue to assume that the particles do not otherwise exert forces on one another. We derive distribution functions for statistical models that satisfy quantum-mechanical restrictions on the number of particles that can occupy a particular quantum state. Our primary objective is to demonstrate that the more rigorous models reduce to the Boltzmann distribution function for most chemical systems at common laboratory conditions.
We have been using the quantum mechanical result that the discrete energy levels of a molecule or other particle can be labeled ${\epsilon }_1$, ${\epsilon }_2$, ..., ${\epsilon }_i$, .... We have assumed that we can put any number of identifiable particles into any of these energy levels. We have assumed also that we can distinguish one particle from another, so that we can know the energy of any particular particle. In fact, we may not be able to tell the particles apart. In this case, we can know how many particles have a given energy, but we cannot distinguish the particles that have this energy from one another. Moreover, there is a quantum-mechanical theorem about the number of particles that can occupy a quantum state. If the particles have integral $\left(0,\ 1,\ 2,\ \dots \right)$ spin, any number of them can occupy the same quantum state. Such particles are said to follow Bose-Einstein statistics. If, on the other hand, the particles have half-integral $\left({1}/{2},\ {3}/{2},\ {5}/{2},\ \dots \right)$ spin, then only one of them can occupy a given quantum state. Such particles are said to follow Fermi-Dirac statistics.
Protons, neutrons, and electrons all have spin ${1}/{2}$. The spin of an atom or molecule is just the sum of the spins of its constituent elementary particles. If the number of protons, neutrons, and electrons is odd, the atom or molecule obeys Fermi-Dirac statistics. If it is even, the atom or molecule obeys Bose-Einstein statistics. For most molecules at temperatures that are not too close to absolute zero, the predicted difference in behavior is negligible. However, the isotopes of helium provide an important test of the theory. Near absolute zero, the behavior of ${He}^3$ differs markedly from that of ${He}^4$. The difference is consistent with the expected difference between the behavior of a spin-${1}/{2}$ particle (${He}^3$) and that of a spin-$0$ particle (${He}^4$).
Let us consider the total probability sum for a system of particles that follows Fermi-Dirac statistics. As before, we let ${\epsilon }_1$, ${\epsilon }_2$, ..., ${\epsilon }_i$, ... be the energies of the successive energy levels. We let $g_1$, $g_2$, ..., $g_i$, ... be the degeneracies of these levels. We let $N_1$, $N_2$, ..., $N_i$, ... be the number of particles in all of the degenerate quantum states of a given energy level. The probability of finding a particle in a quantum state depends on the number of particles in the system; we have $\rho \left(N_i,{\epsilon }_i\right)$ rather than $\rho \left({\epsilon }_i\right)$. Consequently, we cannot generate the total probability sum by expanding an equation like
$1={\left(P_1+P_2+\dots +P_i+\dots \right)}^N. \nonumber$
However, we continue to assume:
1. A finite subset of the population sets available to the system accounts for nearly all of the probability when the system is held in a constant-temperature environment.
2. Essentially the same finite subset of population sets accounts for nearly all of the probability when the system is isolated.
3. All of the microstates that have a given energy have the same probability. We let this probability be ${\rho }^{FD}_{MS,N,E}$.
As before, the total probability sum will be of the form $1=\sum_{\{N_i\}}{W^{FD}\left(N_i,{\epsilon }_i\right)}{\rho }^{FD}_{MS,N,E} \nonumber$
Each such term reflects the fact that there are $W^{FD}\left(N_i,{\epsilon }_i\right)$ ways to put $N_1$ particles in the $g_1$ quantum states of energy level ${\epsilon }_1$, and $N_2$ particles in the $g_2$ quantum states of energy level ${\epsilon }_2$, and, in general, $N_i$ particles in the $g_i$ quantum states of energy level ${\epsilon }_i$. Unlike Boltzmann statistics, however, the probabilities are different for successive particles, so the coefficient $W^{FD}$ is different from the polynomial coefficient, or thermodynamic probability, $W$. Instead, we must discover the number of ways to put $N_i$ indistinguishable particles into the $g_i$-fold degenerate quantum states of energy ${\epsilon }_i$ when a given quantum state can contain at most one particle.
These conditions can be satisfied only if $g_i\ge N_i$. If we put $N_i$ of the particles into quantum states of energy ${\epsilon }_i$, there are
1. $g_i$ ways to place the first particle, but only
2. $g_i-1$ ways to place the second, and
3. $g_i-2$ ways to place the third, and
4. $g_i-\left(N_i-1\right)$ ways to place the last one of the $N_i$ particles.
This means that there are
$\left(g_i\right)\left(g_i-1\right)\left(g_i-2\right)\dots \left(g_i-\left(N_i-1\right)\right)= \nonumber$
$=\frac{\left(g_i\right)\left(g_i-1\right)\left(g_i-2\right)\dots \left(g_i-\left(N_i-1\right)\right)\left(g_i-N_i\right)\dots \left(1\right)}{\left(g_i-N_i\right)!}=\frac{g_i!}{\left(g_i-N_i\right)!} \nonumber$
ways to place the $N_i$ particles. Because the particles cannot be distinguished from one another, we must exclude assignments which differ only by the way that the $N_i$ particles are permuted. To do so, we must divide by $N_i!$. The number of ways to put $N_i$ indistinguishable particles into $g_i$ quantum states with no more than one particle in a quantum state is $\frac{g_i!}{\left(g_i-N_i\right)!N_i!} \nonumber$
The number of ways to put indistinguishable Fermi-Dirac particles of the population set $\{N_1\mathrm{,\ }N_2\mathrm{,\dots ,\ }N_i\mathrm{,\dots }\}$ into the available energy states is
$W^{FD}\left(N_i,g_i\right)=\left[\frac{g_1!}{\left(g_1-N_1\right)!N_1!}\right]\times \left[\frac{g_2!}{\left(g_2-N_2\right)!N_2!}\right]\times \dots \times \left[\frac{g_i!}{\left(g_i-N_i\right)!N_i!}\right]\times \dots =\prod^{\infty }_{i=1}{\left[\frac{g_i!}{\left(g_i-N_i\right)!N_i!}\right]} \nonumber$
so that the total probability sum for a Fermi-Dirac system becomes
$1=\sum_{\{N_j\}}{\prod^{\infty }_{i=1}{\left[\frac{g_i!}{\left(g_i-N_i\right)!N_i!}\right]}{\left[{\rho }^{FD}\left({\epsilon }_i\right)\right]}^{N_i}} \nonumber$
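The combinatorial factor $\frac{g_i!}{\left(g_i-N_i\right)!N_i!}$ can be checked by brute force for small numbers. The following Python fragment (an illustration we have added, not part of the original text) enumerates every way of placing $N_i$ indistinguishable particles into $g_i$ quantum states with at most one particle per state and compares the count with the closed form.

```python
from itertools import combinations
from math import comb

def count_fd(g, n):
    """Enumerate placements of n indistinguishable particles in g quantum
    states with at most one particle per state (Fermi-Dirac counting)."""
    return sum(1 for _ in combinations(range(g), n))

for g, n in [(4, 2), (6, 3), (8, 5)]:
    assert count_fd(g, n) == comb(g, n)          # g!/((g-n)! n!)
    print(f"g = {g}, N = {n}: {comb(g, n)} arrangements")
```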
To find the Fermi-Dirac distribution function, we seek the population set $\{N_1\mathrm{,\ }N_2\mathrm{,\dots ,\ }N_i\mathrm{,\dots }\}$ for which $W^{FD}$ is a maximum, subject to the constraints
$N=\sum^{\infty }_{i=1}{N_i} \nonumber$ and $E=\sum^{\infty }_{i=1}{N_i}{\epsilon }_i \nonumber$
The mnemonic function becomes
$F^{FD}_{mn}=\sum^{\infty }_{i=1}{\ln g_i!}-\sum^{\infty }_{i=1}{\left[\left(g_i-N_i\right)\ln \left(g_i-N_i\right)-\left(g_i-N_i\right)\right]}-\sum^{\infty }_{i=1}{\left[N_i\ln N_i-N_i\right]}+\alpha \left[N-\sum^{\infty }_{i=1}{N_i}\right]+\beta \left[E-\sum^{\infty }_{i=1}{N_i}{\epsilon }_i\right] \nonumber$
We seek the $N^{\textrm{⦁}}_i$ for which $F^{FD}_{mn}$ is an extremum; that is, the $N^{\textrm{⦁}}_i$ satisfying
\begin{align*} 0&=\frac{\partial F^{FD}_{mn}}{\partial N_i}=\frac{g_i-N^{\textrm{⦁}}_i}{g_i-N^{\textrm{⦁}}_i}+\ln \left(g_i-N^{\textrm{⦁}}_i\right)-1-\frac{N^{\textrm{⦁}}_i}{N^{\textrm{⦁}}_i}-\ln N^{\textrm{⦁}}_i+1-\alpha -\beta {\epsilon }_i \\[4pt] &=\ln \left(g_i-N^{\textrm{⦁}}_i\right)-\ln N^{\textrm{⦁}}_i-\alpha -\beta {\epsilon }_i \end{align*}
Solving for $N^{\textrm{⦁}}_i$, we find
$N^{\textrm{⦁}}_i=\frac{g_ie^{-\alpha }e^{-\beta {\epsilon }_i}}{1+e^{-\alpha }e^{-\beta {\epsilon }_i}} \nonumber$
or, equivalently,
$\frac{N^{\textrm{⦁}}_i}{g_i}=\frac{1}{1+e^{\alpha }e^{\beta {\epsilon }_i}} \nonumber$
If $1\gg e^{-\alpha }e^{-\beta {\epsilon }_i}$ (or $1\ll e^{\alpha }e^{\beta {\epsilon }_i}$), the Fermi-Dirac distribution function reduces to the Boltzmann distribution function. It is easy to see that this is the case. From
$N^{\textrm{⦁}}_i=\frac{g_ie^{-\alpha }e^{-\beta {\epsilon }_i}}{1+e^{-\alpha }e^{-\beta {\epsilon }_i}}\approx g_ie^{-\alpha }e^{-\beta {\epsilon }_i} \nonumber$
and $N=\sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i}$, we have
$N=e^{-\alpha }\sum^{\infty }_{i=1}{g_i}e^{-\beta {\epsilon }_i}=e^{-\alpha }z \nonumber$
It follows that $e^{\alpha }={z}/{N}$. With $\beta ={1}/{kT}$, we recognize that ${N^{\textrm{⦁}}_i}/{N}$ is the Boltzmann distribution. For occupied energy levels, $e^{-\beta {\epsilon }_i}=e^{-{\epsilon_i}/{kT}}\approx 1$; otherwise, $e^{-\beta \epsilon_i}=e^{-\epsilon_i/kT}<1$. This means that the Fermi-Dirac distribution simplifies to the Boltzmann distribution whenever $1\gg e^{-\alpha }$. We can illustrate that this is typically the case by considering the partition function for an ideal gas.
Using the translational partition function for one mole of a monatomic ideal gas from Section 24.3, we have
\begin{align*} e^{\alpha } &=\frac{z_t}{N}=\left[\frac{2\pi mkT}{h^2}\right]^{3/2} \frac{\overline{V}}{\overline{N}} \\[4pt] &=\left[\frac{2\pi mkT}{h^2}\right]^{3/2} \frac{kT}{P^0} \end{align*}
For an ideal gas of molecular weight $40$ at $300$ K and $1$ bar, we find $e^{\alpha }=1.02\times {10}^7$ and $e^{-\alpha }=9.77\times {10}^{-8}$. Clearly, the condition we assume in demonstrating that the Fermi-Dirac distribution simplifies to the Boltzmann distribution is satisfied by molecular gases at ordinary temperatures. The value of $e^{\alpha }$ decreases as the temperature and the molecular weight decrease. To find $e^{\alpha }\approx 1$ for a molecular gas, it is necessary to consider very low temperatures.
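The numerical estimate quoted above is easy to reproduce. The short Python sketch below (our illustration; the constants are standard values) evaluates $e^{\alpha }={z_t}/{N}$ for a gas of molecular weight 40 at 300 K and 1 bar.

```python
import numpy as np

k = 1.380650e-23      # Boltzmann constant, J/K
h = 6.626069e-34      # Planck constant, J s
amu = 1.660539e-27    # atomic mass unit, kg

def exp_alpha(M, T, P):
    """e^alpha = z_t/N = (2 pi m k T / h^2)^(3/2) (kT/P) for an ideal gas."""
    m = M * amu
    return (2.0 * np.pi * m * k * T / h**2) ** 1.5 * k * T / P

ea = exp_alpha(40.0, 300.0, 1.0e5)
print(f"e^alpha  = {ea:.2e}")      # roughly 1e7
print(f"e^-alpha = {1/ea:.2e}")    # roughly 1e-7, so the Boltzmann limit applies
```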
Nevertheless, the Fermi-Dirac distribution has important applications. The behavior of electrons in a conductor can be modeled on the assumption that the electrons behave as a Fermi-Dirac gas whose energy levels are described by a particle-in-a-box model.
For particles that follow Bose-Einstein statistics, we let the probability of a microstate of energy $E$ in an $N$-particle system be ${\rho }^{BE}_{MS,N,E}$. For an isolated system of Bose-Einstein particles, the total probability sum is
$1=\sum_{\{N_i\}}{W^{BE}\left(N_i,g_i\right){\rho }^{BE}_{MS,N,E}} \label{eq1}$
We need to find $W^{BE}\left(N_i,g_i\right)$, the number of ways to assign indistinguishable particles to the quantum states, if any number of particles can occupy the same quantum state.
We begin by considering the number of ways that $N_i$ particles can be assigned to the $g_i$ quantum states associated with the energy level ${\epsilon }_i$. We see that the fewest number of quantum states that can be used is one; we can put all of the particles into one quantum state. At the other extreme, we cannot use more than the $N_i$ quantum states that we use when we give each particle its own quantum state. We can view this problem as finding the number of ways we can draw as many as $g_i$ boxes around $N_i$ points. Let us create a scheme for drawing such boxes. Suppose we have a linear frame on which there is a row of locations. Each location can hold one particle. The frame is closed at both ends. Between each successive pair of particle-holding locations, there is a slot, into which a wall can be inserted. This frame is sketched in Figure 1.
When we insert $\left(g_i-1\right)$ walls into these slots, the frame contains $g_i$ boxes. We want to be able to insert the walls so that the $N_i$ particles are distributed among the $g_i$ boxes in such a way that we can have any desired number of particles in any desired number of boxes. (Of course, placement of the walls is subject to the constraints that we use at most $g_i$ boxes and exactly $N_i$ particles.) We can achieve this by constructing the frame to have $\left(N_i+g_i-1\right)$ particle-holding locations. To see this, we think about the case that requires the largest number of particle-holding locations. This is the case in which all $N_i$ particles are in one box. (See Figure 2.) For this case, we need $N_i$ occupied locations and $\left(g_i-1\right)$ unoccupied locations.
Now we consider the number of ways that we can insert $\left(g_i-1\right)$ walls into the $\left(N_i+g_i-1\right)$ slots. The first wall can go into any of $\left(N_i+g_i-1\right)$ slots. The second can go into any of $\left(N_i+g_i-1-\left(1\right)\right)$ or $\left(N_i+g_i-2\right)$ slots. The last wall can go into any of $\left(N_i+g_i-1-\left(g_i-2\right)\right)$ or $\left(N_i+1\right)$ slots. The total number of ways of inserting the $\left(g_i-1\right)$ walls is therefore
\begin{align*} \left(N_i+g_i-1\right)\left(N_i+g_i-2\right)\dots \left(N_i+1\right) &=\frac{\left(N_i+g_i-1\right)\left(N_i+g_i-2\right)\dots \left(N_i+1\right)\left(N_i\right)\dots \left(2\right)\left(1\right)}{N_i!} \\[4pt] &=\frac{\left(N_i+g_i-1\right)!}{N_i!} \end{align*}
This total is greater than the answer we seek, because it includes all permutations of the walls. It does not matter whether the first, the second, or the last wall occupies a given slot. Therefore, the expression we have obtained over-counts the quantity we seek by the factor $\left(g_i-1\right)!$, which is the number of ways of permuting the $\left(g_i-1\right)$ walls. We have therefore that the $N_i$ particles can be assigned to $g_i$ quantum states in
$\frac{\left(N_i+g_i-1\right)!}{N_i!\left(g_i-1\right)!} \nonumber$
ways, and hence
\begin{align*} W^{BE}\left(N_i,g_i\right) &=\left[\frac{\left(N_1+g_1-1\right)!}{\left(g_1-1\right)!N_1!}\right]\times \left[\frac{\left(N_2+g_2-1\right)!}{\left(g_2-1\right)!N_2!}\right]\times \dots \times \left[\frac{\left(N_i+g_i-1\right)!}{\left(g_i-1\right)!N_i!}\right]\times \dots \\[4pt] &=\prod^{\infty }_{i=1}{\left[\frac{\left(N_i+g_i-1\right)!}{\left(g_i-1\right)!N_i!}\right]} \end{align*}
so that the total probability sum for a Bose-Einstein system becomes (via Equation \ref{eq1}):
$1=\sum_{\{N_j\}}{\prod^{\infty }_{i=1}{\left[\frac{\left(N_i+g_i-1\right)!}{\left(g_i-1\right)!N_i!}\right]}}{\left[{\rho }^{BE}\left({\epsilon }_i\right)\right]}^{N_i} \nonumber$
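As with the Fermi-Dirac case, the Bose-Einstein counting formula can be verified by direct enumeration for small systems. The Python fragment below (added for illustration) counts the multisets of $N_i$ particles distributed over $g_i$ states and compares the result with $\left(N_i+g_i-1\right)!/\left[N_i!\left(g_i-1\right)!\right]$.

```python
from itertools import combinations_with_replacement
from math import comb

def count_be(g, n):
    """Enumerate placements of n indistinguishable particles in g quantum
    states with no limit on occupancy (Bose-Einstein counting)."""
    return sum(1 for _ in combinations_with_replacement(range(g), n))

for g, n in [(4, 2), (5, 3), (3, 6)]:
    assert count_be(g, n) == comb(n + g - 1, n)   # (n+g-1)!/(n!(g-1)!)
    print(f"g = {g}, N = {n}: {comb(n + g - 1, n)} arrangements")
```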
To find the Bose-Einstein distribution function, we seek the population set $\{N_1\mathrm{,\ }N_2\mathrm{,\dots ,\ }N_i\mathrm{,\dots }\}$ for which $W^{BE}$ is a maximum, subject to the constraints
$N=\sum^{\infty }_{i=1}{N_i} \nonumber$ and $E=\sum^{\infty }_{i=1}{N_i}{\epsilon }_i \nonumber$
The mnemonic function is
$F^{BE}_{mn}=\sum^{\infty }_{i=1}{\left[\left(N_i+g_i-1\right)\ln \left(N_i+g_i-1\right)-\left(N_i+g_i-1\right)-N_i\ln N_i+N_i-\left(g_i-1\right)\ln \left(g_i-1\right)+\left(g_i-1\right)\right]}+\alpha \left(N-\sum^{\infty }_{i=1}{N_i}\right)+\beta \left(E-\sum^{\infty }_{i=1}{N_i{\epsilon }_i}\right) \nonumber$
We seek the $N^{\textrm{⦁}}_i$ for which $F^{BE}_{mn}$ is an extremum; that is, the $N^{\textrm{⦁}}_i$ satisfying
\begin{align*} 0&=\frac{\partial F^{BE}_{mn}}{\partial N^{\textrm{⦁}}_i} \\[4pt] &=\frac{N^{\textrm{⦁}}_i+g_i-1}{N^{\textrm{⦁}}_i+g_i-1}+\ln \left(N^{\textrm{⦁}}_i+g_i-1\right)-1-\frac{N^{\textrm{⦁}}_i}{N^{\textrm{⦁}}_i}-\ln N^{\textrm{⦁}}_i+1-\alpha -\beta {\epsilon }_i \\[4pt] &=-\ln \frac{N^{\textrm{⦁}}_i}{N^{\textrm{⦁}}_i+g_i-1}-\alpha -\beta {\epsilon }_i \end{align*}
Solving for $N^{\textrm{⦁}}_i$, we find
$N^{\textrm{⦁}}_i=\frac{\left(g_i-1\right)e^{-\alpha }e^{-\beta {\epsilon }_i}}{1-e^{-\alpha }e^{-\beta {\epsilon }_i}}\approx \frac{g_ie^{-\alpha }e^{-\beta {\epsilon }_i}}{1-e^{-\alpha }e^{-\beta {\epsilon }_i}} \nonumber$
where the last expression takes advantage of the fact that $g_i$ is usually a very large number, so the error introduced by replacing $\left(g_i-1\right)$ by $g_i$ is usually negligible. If $1\gg e^{-\alpha }e^{-\beta {\epsilon }_i}$, the Bose-Einstein distribution function reduces to the Boltzmann distribution function. As we find in Section 25.2, this is always the case for a molecular gas at ambient temperatures.
Notes
${}^{1}$Richard C. Tolman, The Principles of Statistical Mechanics, Dover Publications, Inc., New York, 1979, pp 367-378. (This is a republication of the book originally published in 1938 by Oxford University Press.)
${}^{2}$Malcolm Dole, Introduction to Statistical Thermodynamics, Prentice Hall, Inc., New York, 1954, pp 206-215.
[Scaled to $A_{\mathrm{r}}\left({}^{12}\mathrm{C}\right) = 12$, where ${}^{12}\mathrm{C}$ is a neutral atom in its nuclear and electronic ground state]
$\begin{array}{l l l l} \text{Atomic Number} & \text{Name} & \text{Symbol} & \text{Atomic Weight} \ \hline 1 & \text{Hydrogen} & \text{H} & 1.00794 \ 2 & \text{Helium} & \text{He} & 4.002602 \ 3 & \text{Lithium} & \text{Li} & [6.941(2)] \ 4 & \text{Beryllium} & \text{Be} & 9.012182 \ 5 & \text{Boron} & \text{B} & 10.811 \ 6 & \text{Carbon} & \text{C} & 12.0107 \ 7 & \text{Nitrogen} & \text{N} & 14.0067 \ 8 & \text{Oxygen} & \text{O} & 15.9994 \ 9 & \text{Fluorine} & \text{F} & 18.9984032 \ 10 & \text{Neon} & \text{Ne} & 20.1797 \ 11 & \text{Sodium} & \text{Na} & 22.989770 \ 12 & \text{Magnesium} & \text{Mg} & 24.3050 \ 13 & \text{Aluminum} & \text{Al} & 26.981538 \ 14 & \text{Silicon} & \text{Si} & 28.0855 \ 15 & \text{Phosphorus} & \text{P} & 30.973761 \ 16 & \text{Sulfur} & \text{S} & 32.065 \ 17 & \text{Chlorine} & \text{Cl} & 35.453 \ 18 & \text{Argon} & \text{Ar} & 39.948 \ 19 & \text{Potassium} & \text{K} & 39.0983 \ 20 & \text{Calcium} & \text{Ca} & 40.078 \ 21 & \text{Scandium} & \text{Sc} & 44.955910 \ 22 & \text{Titanium} & \text{Ti} & 47.867 \ 23 & \text{Vanadium} & \text{V} & 50.9415 \ 24 & \text{Chromium} & \text{Cr} & 51.9961 \ 25 & \text{Manganese} & \text{Mn} & 54.938049 \ 26 & \text{Iron} & \text{Fe} & 55.845 \ 27 & \text{Cobalt} & \text{Co} & 58.933200 \ 28 & \text{Nickel} & \text{Ni} & 58.6934 \ 29 & \text{Copper} & \text{Cu} & 63.546 \ 30 & \text{Zinc} & \text{Zn} & 65.39 \ 31 & \text{Gallium} & \text{Ga} & 69.723 \ 32 & \text{Germanium} & \text{Ge} & 72.64 \ 33 & \text{Arsenic} & \text{As} & 74.92160 \ 34 & \text{Selenium} & \text{Se} & 78.96 \ 35 & \text{Bromine} & \text{Br} & 79.904 \ 36 & \text{Krypton} & \text{Kr} & 83.80 \ 37 & \text{Rubidium} & \text{Rb} & 85.4678 \ 38 & \text{Strontium} & \text{Sr} & 87.62 \ 39 & \text{Yttrium} & \text{Y} & 88.90585 \ 40 & \text{Zirconium} & \text{Zr} & 91.224 \ 41 & \text{Niobium} & \text{Nb} & 92.90638 \ 42 & \text{Molybdenum} & \text{Mo} & 95.94 \ 43 & \text{Technetium*} & \text{Tc}^{98} & 97.9072 \ 44 & \text{Ruthenium} & \text{Ru} & 101.07 \ 45 & \text{Rhodium} & \text{Rh} & 102.90550 \ 46 & \text{Palladium} & \text{Pd} & 106.42 \ 47 & \text{Silver} & \text{Ag} & 107.8682 \ 48 & \text{Cadmium} & \text{Cd} & 112.411 \ 49 & \text{Indium} & \text{In} & 114.818 \ 50 & \text{Tin} & \text{Sn} & 118.710 \ 51 & \text{Antimony} & \text{Sb} & 121.760 \ 52 & \text{Tellurium} & \text{Te} & 127.60 \ 53 & \text{Iodine} & \text{I} & 126.90447 \ 54 & \text{Xenon} & \text{Xe} & 131.293 \ 55 & \text{Cesium} & \text{Cs} & 132.90545 \ 56 & \text{Barium} & \text{Ba} & 137.327 \ 57 & \text{Lanthanum} & \text{La} & 138.9055 \ 58 & \text{Cerium} & \text{Ce} & 140.116 \ 59 & \text{Praseodymium} & \text{Pr} & 140.90765 \ 60 & \text{Neodymium} & \text{Nd} & 144.24 \ 61 & \text{Promethium} & \text{Pm}^{145} & 144.9127 \ 62 & \text{Samarium} & \text{Sm} & 150.36 \ 63 & \text{Europium} & \text{Eu} & 151.964 \ 64 & \text{Gadolinium} & \text{Gd} & 157.25 \ 65 & \text{Terbium} & \text{Tb} & 158.92534 \ 66 & \text{Dysprosium} & \text{Dy} & 162.50 \ 67 & \text{Holmium} & \text{Ho} & 164.93032 \ 68 & \text{Erbium} & \text{Er} & 167.259 \ 69 & \text{Thulium} & \text{Tm} & 168.93421 \ 70 & \text{Ytterbium} & \text{Yb} & 173.04 \ 71 & \text{Lutetium} & \text{Lu} & 174.967 \ 72 & \text{Hafnium} & \text{Hf} & 178.49 \ 73 & \text{Tantalum} & \text{Ta} & 180.9479 \ 74 & \text{Tungsten} & \text{W} & 183.84 \ 75 & \text{Rhenium} & \text{Re} & 186.207 \ 76 & \text{Osmium} & \text{Os} & 190.23 \ 77 & \text{Iridium} & \text{Ir} & 192.217 \ 78 & \text{Platinum} & 
\text{Pt} & 195.078 \ 79 & \text{Gold} & \text{Au} & 196.96655 \ 80 & \text{Mercury} & \text{Hg} & 200.59 \ 81 & \text{Thallium} & \text{Tl} & 204.3833 \ 82 & \text{Lead} & \text{Pb} & 207.2 \ 83 & \text{Bismuth} & \text{Bi} & 208.98038 \ 84 & \text{Polonium*} & \text{Po}^{210} & 209.9829 \ 85 & \text{Astatine*} & \text{At}^{210} & 209.9871 \ 86 & \text{Radon*} & \text{Rn}^{222} & 222.0176 \ 87 & \text{Francium*} & \text{Fr}^{223} & 223.0197 \ 88 & \text{Radium*} & \text{Ra}^{226} & 226.0254 \ 89 & \text{Actinium*} & \text{Ac}^{227} & 227.0277 \ 90 & \text{Thorium*} & \text{Th} & 232.0381 \ 91 & \text{Protactinium*} & \text{Pa} & 231.03588 \ 92 & \text{Uranium*} & \text{U} & 238.02891 \ 93 & \text{Neptunium*} & \text{Np}^{237} & 237.0482 \ 94 & \text{Plutonium*} & \text{Pu}^{244} & 244.0642 \ 95 & \text{Americium*} & \text{Am}^{243} & 243.0614 \ 96 & \text{Curium*} & \text{Cm}^{247} & 247.0704 \ 97 & \text{Berkelium*} & \text{Bk}^{247} & 247.0703 \ 98 & \text{Californium*} & \text{Cf}^{251} & 251.0796 \ 99 & \text{Einsteinium*} & \text{Es}^{252} & 252.0830 \ 100 & \text{Fermium*} & \text{Fm}^{257} & 257.0951 \ 101 & \text{Mendelevium*} & \text{Md}^{258} & 258.0984 \ 102 & \text{Nobelium*} & \text{No}^{259} & 259.1010 \ 103 & \text{Lawrencium*} & \text{Lr}^{262} & 262.1097 \ 104 & \text{Rutherfordium*} & \text{Rf}^{261} & 261.1088 \end{array} \nonumber$
This table is slightly modified from that given in “ATOMIC WEIGHTS OF THE ELEMENTS, 1999” published by the International Union of Pure and Applied Chemistry, Inorganic Chemistry Division, Commission on Atomic Weights and Isotopic Abundances. Elements 105 – 118 are omitted. The Commission’s report was prepared for publication by T. B. Coplen, U.S. Geological Survey, 431 National Center, Reston, Virginia 20192, USA. See http://www.physics.curtin.edu.au/iupac/docs/Atwt1999.doc
*Element has no stable nuclides. Three such elements ($\text{Th}$, $\text{Pa}$, and $\text{U}$) have a characteristic terrestrial isotopic composition, and for these an atomic weight is tabulated. When the element symbol is listed with an atomic number, that isotope has the longest half-life and its atomic weight is tabulated.
$\begin{array}{l l} \textbf{Quantity} & \textbf{Symbol} & \textbf{Value} & \textbf{Unit} \ \text{speed of light in a vacuum} & \text{c} & 299 792 458 & \text{m s}^{–1} \ \text{Planck constant} & \text{h} & 6.626 068 76 \times 10^{–34} & \text{J s} \ \text{elementary charge} & \text{e} & 1.602 176 462 \times 10^{–19} & \text{C} \ \text{electron mass} & \text{m}_{\text{e}} & 9.109 381 88 \times 10^{–31} & \text{kg} \ \text{proton mass} & \text{m}_{\text{p}} & 1.672 621 58 \times 10^{–27} & \text{kg} \ \text{Avogadro constant} & \overline{ \text{N}} & 6.022 141 99 \times 10^{23} & \text{mol}^{–1} \ \text{Faraday constant} & \text{F} & 96 485.341 5 & \text{C mol}^{–1} \ \text{molar gas constant} & \text{R} & 8.314 472 & \text{J mol}^{–1} ~ \text{K}^{–1} \ ~ & \text{R} & 8.314 472 \times 10^{–2} & \text{bar L mol}^{–1} ~ \text{K}^{–1} \ ~ & \text{R} & 8.205 745 \times 10^{–2} & \text{atm L mol}^{–1} ~ \text{K}^{–1} \ ~ & \text{R} & 1.987 206 & \text{cal mol}^{–1} ~ \text{K}^{–1} \ \text{Boltzmann constant} & \text{k} & 1.380 650 3 \times 10^{–23} & \text{J K}^{–1} \ \begin{array}{l l} \text{standard acceleration of free} \ \text{fall at the earth’s surface†} \end{array} & \text{g} & 9.806 650 & \text{m s}^{–2} \end{array} \nonumber$
Source: 1998 CODATA recommended values. Peter J. Mohr and Barry N. Taylor, Reviews of Modern Physics, Vol. 72, No. 2, pp. 351-495, 2000. See www.physics.nist.gov/constants.
†While it is included here for convenience, g is not a Fundamental Constant. Value from David R. Lide, CRC Handbook of Chemistry and Physics, 79th Ed., CRC Press, 1999-2000, pp. 1-27, reproduced from NIST Special Publication 811, Guide for the Use of the International System of Units (Superintendent of Documents, U. S. Government Printing Office, 1991).
26.05: Units and Conversion Factors
$\begin{array}{r c l} \hline 1 \text{ m} & = & 10^{10} \text{ Å} \ ~ & = & 3.280 840 \text{ ft} \ ~ & = & 39.370 08 \text{ in} \ \hline 1 \text{ kg} & = & 2.204 622 \text{ lb} \ 1 \text{ lb} & = & 453.592 5 \text{ g} \ \hline 1 \text{ m}^3 & = & 10^3 \text{ L} \ ~ & = & 35.314 67 \text{ ft}^3 \ ~ & = & 264.172 0 \text{ US liquid gallon} \ \hline 1 \text{ N} & = & 1 \text{ kg m s}^{–2} \ ~ & = & 1 \text{ Pa m}^2 \ ~ & = & 10^{–5} \text{ bar m}^2 \ ~ & = & 10^5 \text{ dyne} ~ ( \text{g cm s}^{–2}) \ 1 \text{ poundal} & = & 0.138 255 0 \text{ N} \ \hline 1 \text{ bar} & = & 10^5 \text{ Pa} \ ~ & = & 10^5 \text{ N m}^{–2} \ ~ & = & 0.986 92 \text{ atm} \ ~ & = & 750.064 \text{ torr} \ 1 \text{ atm} & = & 101 325 \text{ Pa} \ ~ & = & 1.013 25 \text{ bar} \ \hline 1 \text{ J} & = & 1 \text{ N m} \ ~ & = & 1 \text{ Pa m}^3 \ ~ & = & 1 \text{ W s} \ ~ & = & 1 \text{ C V} \ ~ & = & 10^7 \text{ erg} ~ ( \text{g cm}^2 \text{ s}^{–2}) \ ~ & = & 10^{–5} \text{ bar m}^3 \ ~ & = & 10^{–2} \text{ bar L} \ ~ & = & 9.869 23 \times 10^{-3} \text{ L atm} \ ~ & = & 0.239 0 \text{ cal} \ 1 \text{ cal} & = & 4.184 \text{ J} \ 1 \text{ eV} & = & 1.602 176 462 \times 10^{–19} \text{ J} \ ~ & ~ & \begin{array}{l} \text{(Energy released when one electron} \ \text{experiences a potential change of one volt.)} \end{array} \ \hline \end{array} \nonumber$
Conversion factors in this table are given by, or calculated from, values given by David R. Lide, CRC Handbook of Chemistry and Physics, 79th Ed., CRC Press, 1999-2000, pp. 1-24 to 1-31, reproduced from NIST Special Publication 811, Guide for the Use of the International System of Units (Superintendent of Documents, U. S. Government Printing Office, 1991).
26.07: Appendix D. Some Important Definite Integrals
We frequently need the values of the definite integrals below. These values are available in standard tables. Note that integrands involving even powers of the argument are even functions; integrands involving odd powers are odd functions. (A function, $f(x)$, is even if $f(x) = f(-x)$; it is odd if $f(x) = -f(-x)$.) The integrals are given over the interval $0 < x < \infty$. For integrands that are even functions, the integrals over the interval $- \infty < x < \infty$ are twice the integrals over the interval $0 < x < \infty$. For integrands that are odd functions, the integrals over the interval $- \infty < x < \infty$ are zero.
$\begin{array}{l} \int_0^{\infty} \text{exp} \left( -ax^2 \right) dx = \frac{1}{2} \sqrt{ \frac{\pi}{a}} \ \int_0^{ \infty} x \text{ exp} \left( -ax^2 \right) dx = \frac{1}{2a} \ \int_0^{ \infty} x^2 \text{ exp} \left( -ax^2 \right) dx = \frac{1}{4} \sqrt{\frac{\pi}{a^3}} \ \int_0^{\infty} x^3 \text{ exp} \left( -ax^2 \right) dx = \frac{1}{2a^2} \ \int_0^{ \infty} x^4 \text{ exp} \left( -ax^2 \right) dx = \frac{3}{8} \sqrt{ \frac{\pi}{a^5}} \ \int_0^{ \infty} x^6 \text{ exp} \left( -ax^2 \right) dx = \frac{15}{16} \sqrt{ \frac{\pi}{a^7}} \end{array} \nonumber$
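These closed forms are easy to spot-check numerically. The following Python sketch (our addition) compares each tabulated result with numerical quadrature for an arbitrary choice of $a$; it assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0   # arbitrary positive constant for the spot check

# (power of x, closed-form value from the table above)
cases = [
    (0, 0.5 * np.sqrt(np.pi / a)),
    (1, 1.0 / (2.0 * a)),
    (2, 0.25 * np.sqrt(np.pi / a**3)),
    (3, 1.0 / (2.0 * a**2)),
    (4, 3.0 / 8.0 * np.sqrt(np.pi / a**5)),
    (6, 15.0 / 16.0 * np.sqrt(np.pi / a**7)),
]

for n, exact in cases:
    numeric, _ = quad(lambda x, n=n: x**n * np.exp(-a * x**2), 0.0, np.inf)
    print(f"n = {n}: quadrature = {numeric:.10f}, table = {exact:.10f}")
```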
• 1.1: Describing a System Quantum Mechanically
As a starting point it is useful to review the postulates of quantum mechanics, and use this as an opportunity to elaborate on some definitions and properties of quantum systems.
• 1.2: Matrix Mechanics
Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925. Matrix mechanics was the first conceptually autonomous and logically consistent formulation of quantum mechanics.
• 1.3: Basic Quantum Mechanical Models
This section summarizes the results that emerge for common models for quantum mechanical objects. These form the starting point for describing the motion of electrons and the translational, rotational, and vibrational motions for molecules. Thus they are the basis for developing intuition about more complex problems.
• 1.4: Exponential Operators
Throughout our work, we will make use of exponential operators that act on a wavefunction to move it in time and space. Therefore they also referred to as propagators. Of particular interest to us is the time-evolution operator.
• 1.5: Numerically Solving the Schrödinger Equation
Often the bound potentials that we encounter are complex, and the time-independent Schrödinger equation will need to be evaluated numerically. There are two common numerical methods for solving for the eigenvalues and eigenfunctions of a potential. Both methods require truncating and discretizing a region of space that is normally spanned by an infinite dimensional Hilbert space.
01: Overview of Time-Independent Quantum Mechanics
As a starting point it is useful to review the postulates of quantum mechanics, and use this as an opportunity to elaborate on some definitions and properties of quantum systems.
1. The Wavefunction
Quantum mechanical matter exhibits wave-particle duality in which the particle properties emphasize classical aspects of the object’s position, mass, and momentum, and the wave properties reflect its spatial delocalization and ability to interfere constructively or destructively with other particles or waves. As a result, in quantum mechanics the physical properties of the system are described by the wavefunction $\Psi$. The wavefunction is a time-dependent complex probability amplitude function that is itself not observable; however, it encodes all properties of the system’s particles and fields. Depending on the context, particle is a term that will refer to a variety of objects―such as electrons, nucleons, and atoms―that fill space and have mass, but also retain wavelike properties. Fields refer to a variety of physical quantities that are continuous in time and space, which have energy and influence the behavior of particles.
In the general sense, the wavefunction, or state, does not refer to a three dimensional physical space in which quantum particles exist, but rather an infinite dimensional linear vector space, or Hilbert space, that accounts for all possible observable properties of the system. We can represent the wavefunction in physical space, $\Psi(\mathbf{r})$ by carrying out a projection onto the desired spatial coordinates. As a probability amplitude function, the wavefunction describes the statistical probability of locating particles or fields in space and time. Specifically, we claim that the square of the wavefunction is proportional to a probability density (probability per unit volume). In one dimension, the probability of finding a particle in a space between x and x+dx at a particular time t is
$P(\mathbf{x}, \mathbf{t}) d x=\Psi^{*}(\mathbf{x}, \mathbf{t}) \Psi(\mathbf{x}, \mathbf{t}) \mathrm{d} \mathrm{x}$
We will always assume that the wavefunctions for a particle are properly normalized, so that $\int \mathrm{P}(\mathbf{x}, \mathrm{t}) \mathrm{dx}=1$.
2. Operators
Quantum mechanics parallels Hamilton’s formulation of classical mechanics, in which the properties of particles and fields are described in terms of their position and momenta. Each particle described by the wavefunction will have associated with it one or more degrees of freedom that are defined by the dimensionality of the problem. For each degree of freedom, particles which are described classically by a position x and momentum $p_{x}$ will have associated with it a quantum mechanical operator $\hat{x} \text { or } \hat{p}_{x}$ which will be used to describe physical properties and experimental observables. Operators correspond to dynamical variables, whereas static variables, such as mass, do not have operators associated with them. In practice there is a quantum/classical correspondence which implies that the quantum mechanical behavior can often be deduced from the classical dynamical equations by substituting the quantum mechanical operator for the corresponding classical variables. In the case of position and momenta, these operators are $x \rightarrow \hat{x} \text { and } \hat{p}_{x}=-i \hbar(\partial / \partial x)$. Table 1 lists some important operators that we will use. Note that time does not have an operator associated with it, and for our purposes is considered an immutable variable that applies uniformly to the entire system.
Table $1$: Some important operators that we will use.
$\begin{array}{|l|l|l|l|} \hline & & \text {Classical variable} & \text {Operator} \\ \hline \text {Position} & (1 \mathrm{D}) & x & \hat{x} \\ \hline & (3 \mathrm{D}) & r & \hat{r} \\ \hline \text {Linear momentum} & (1 \mathrm{D}) & p_{\mathrm{x}} & \hat{p}_{x}=-i \hbar(\partial / \partial x) \\ \hline & (3 \mathrm{D}) & p & \hat{p}=-i \hbar \nabla \\ \hline \text {Function of position and momentum} & (1 \mathrm{D}) & f\left(x, p_{\mathrm{x}}\right) & f\left(\hat{x}, \hat{p}_{x}\right) \\ \hline \text {Angular momentum} & (3 \mathrm{D}) & \bar{L}=\bar{r} \times \bar{p} & \hat{L}=-i \hbar \hat{r} \times \bar{\nabla} \\ \hline \text {z-component of orbital angular momentum} & & & \hat{L}_{z}=-i \hbar(\partial / \partial \phi) \\ \hline \end{array} \nonumber$
What do operators do? Operators map one state of the system to another―also known as acting on the wavefunction:
$\hat{\mathrm{A}} \Psi_{0}=\Psi_{\mathrm{A}} \label{2}$
Here $\Psi_{0}$ is the initial wavefunction and $\Psi_{\mathrm{A}}$ refers to the wavefunction after the action of the operator $\hat{\mathrm{A}}$. Whereas the variable x represents a position in physical space, the operator $\hat{x}$ maps the wavefunction from Hilbert space onto physical space. Operators also represent a mathematical operation on the wavefunction that influences or changes it, for instance moving in time and space. Operators may be simply multiplicative, as with the operator $\hat{x}$, or they may take differential or integral forms. The gradient $\bar{\nabla}$, divergence $\nabla \cdot$, and curl $\nabla \times$ are examples of differential operators, whereas Fourier and Laplace transforms are integral operators.
When writing an operator, it is always understood to be acting on a wavefunction to the right. For instance, the operator $\hat{p}_{x}$ says that one should differentiate the wavefunction to its right with respect to $x$ and then multiply the result by $-i \hbar$. The operator $\hat{x}$ simply means multiply the wavefunction by x. Since operators generally do not commute, a series of operators must be applied in the prescribed right-to-left order.
$\hat{\mathrm{B}} \hat{\mathrm{A}} \Psi_{0}=\hat{\mathrm{B}} \Psi_{A}=\Psi_{\mathrm{B}, \mathrm{A}} \label{3}$
One special characteristic of operators that we will look for is whether operators are Hermitian. A Hermitian operator is equal to its adjoint (conjugate transpose): $\hat{A}=\hat{A}^{\dagger}$.
Of particular interest is the Hamiltonian, $\hat{H}$, an operator corresponding to the total energy of the system. The Hamiltonian operator describes all interactions between particles and fields, and thereby determines the state of the system. The Hamiltonian is a sum of the total kinetic and potential energy for the system of interest, $\hat{H}=\hat{T}+\hat{V}$, and is obtained by substituting the position and momentum operators into the classical Hamiltonian. For one particle under the influence of a potential,
$\hat{H}=-\frac{\hbar^{2}}{2 m} \nabla^{2}+V(\hat{r}, t) \label{4}$
Notation: In the following chapters, we will denote operators with a circumflex only when we are trying to explicitly note its role as an operator, but otherwise we take the distinction between variables and operators to be understood.
3. Eigenvalues and Eigenfunctions
The properties of a system described by mapping with the operator $\hat{A}$ can only take on the values a that satisfy an eigenvalue equation
$\hat {A} \Psi = a \Psi \label{5}$
For instance, if the state of the system is $\Psi (x) = e^{i p x / \hbar}$, the momentum operator $\hat {p} _ {x} = - i \hbar ( \partial / \partial x )$ returns the eigenvalue $p$ (a scalar) times the original wavefunction. Then $\Psi (x)$ is said to be an eigenfunction of $\hat {p} _ {x}$. For the Hamiltonian, the solutions to the eigenvalue equation
$\hat {H} \Psi = E \Psi \label{6}$
yield the possible energies of the system. The set of all possible eigenvectors is also known as the set of eigenstates $\psi_{i}$. Equation (6) is the time-independent Schrödinger equation (TISE).
4. Linear Superposition
The eigenstates of $\hat{A}$ form a complete orthonormal basis. In Hilbert space the wavefunction is expressed as a linear combination of orthonormal functions,
$\Psi = \sum _ {i = 0}^{\infty} c _ {i} \psi _ {i} \label{7}$
where $c _ {i}$ are complex numbers. The eigenvectors $\psi_{i}$ are orthogonal and complete:
$\int _ {- \infty}^{+ \infty} d \tau \psi _ {i}^{*} \psi _ {j} = \delta _ {i j} \label{8}$
and
$\sum _ {i = 0}^{\infty} \left| c _ {i} \right|^{2} = 1 \label{9}$
The choice of orthonormal functions in which to represent the system is not unique and is referred to as selecting a basis set. The change of basis set is effectively a transformation that rotates the wavefunction in Hilbert space.
5. Expectation Values
The outcome of a quantum measurement cannot be known with arbitrary accuracy; however, we can statistically describe the probability of measuring a certain value. The measurement of a value associated with the operator is obtained by calculating the expectation value of the operator
$\langle A\rangle=\int d \tau \Psi^{*} \hat{A} \Psi \label{10}$
Here the integration is over Hilbert space. The brackets $\langle\ldots\rangle$ refer to an average value that will emerge from a large series of measurements on identically prepared systems. Whereas $\langle A\rangle$ is an average value, the variance in the distribution of measured values can be calculated from $\Delta A^{2}=\left\langle A^{2}\right\rangle-\langle A\rangle^{2}$. Since an observable must be real valued, operators corresponding to observables are Hermitian:
$\int d \tau\, \Psi^{*} \hat{A} \Psi=\int d \tau\,\left(\hat{A} \Psi\right)^{*} \Psi \label{11}$
As a consequence, a Hermitian operator must have real eigenvalues and orthogonal eigenfunctions.
6. Commutators
Operators are associative but not necessarily commutative. Commutators determine whether two operators commute. The commutator of two operators $\hat{A} \text {and} \hat{B}$ is defined as
$[\hat{A}, \hat{B}]=\hat{A} \hat{B}-\hat{B} \hat{A} \label{12}$
If we first make an observation of an eigenvalue $a$ for $\hat{A}$, we cannot be assured of determining a unique eigenvalue $b$ for a second operator $\hat{B}$. This is only possible if the system is an eigenstate of both $\hat{A}$ and $\hat{B}$. This would allow one to state that $\hat{A} \hat{B} \psi=\hat{B} \hat{A} \psi$ or alternatively $[\hat{A}, \hat{B}] \psi=0$. If the operators commute, the commutator is zero, and $\hat{A}$ and $\hat{B}$ have simultaneous eigenfunctions. If the operators do not commute, one cannot specify $a$ and $b$ exactly; however, the product of their variances is bounded by $\Delta A^{2} \Delta B^{2} \geq\left|\left\langle\tfrac{1}{2 i}[\hat{A}, \hat{B}]\right\rangle\right|^{2}$. As an example, we see that $\hat{p}_{x}$ and $\hat{p}_{y}$ commute, but $\hat{x}$ and $\hat{p}_{x}$ do not. Thus we can specify the momentum of a particle in the x and y coordinates precisely, but cannot specify both the momentum and position of a particle in the x dimension to arbitrary resolution. We find that $\left[\hat{x}, \hat{p}_{x}\right]=i \hbar$ and $\Delta x \Delta p_{x} \geq \hbar / 2$.
Note that for the case that the Hamiltonian can be written as a sum of commuting terms, as is the case for a set of independent or separable coordinates or momenta, then the total energy is additive in eigenvalues for each term, and the total eigenfunctions can be written as product states in the eigenfunctions for each term.
7. Time Dependence
The wavefunction evolves in time as described by the time-dependent Schrödinger equation (TDSE):
$i \hbar \frac {\partial \Psi} {\partial t} = \hat {H} \Psi \label{13}$
In the following chapter, we will see the reasoning that results in this equation.
8. Readings
1. P. Atkins and R. Friedman, Molecular Quantum Mechanics, 4th ed. (Oxford University Press, Oxford; New York, 2005)
2. G. Baym, Lectures on Quantum Mechanics. (Perseus Book Publishing, L.L.C., New York, 1969)
3. C. Cohen-Tannoudji, B. Diu and F. Laloë, Quantum Mechanics. (Wiley-Interscience, Paris, 1977)
4. D. J. Griffiths, Introduction to Quantum Mechanics, 2nd ed. (Pearson Prentice Hall, Upper Saddle River, NJ, 2005)
5. E. Merzbacher, Quantum Mechanics, 3rd ed. (Wiley, New York, 1998); A. Messiah, Quantum Mechanics. (Dover Publications, 1999)
6. J. J. Sakurai, Modern Quantum Mechanics, Revised Edition. (Addison-Wesley, Reading, MA, 1994)
Most of our work will make use of the matrix mechanics formulation of quantum mechanics. The wavefunction is written as $|\Psi\rangle$ and referred to as a ket vector. The complex conjugate $\Psi^{*}=\langle\Psi|$ is a bra vector, where $\langle a \Psi|=a^{*}\langle\Psi|$. The product of a bra and ket vector, $\langle\alpha \mid \beta\rangle$ is therefore an inner product (scalar), whereas the product of a ket and bra $|\beta\rangle\langle\alpha|$ is an outer product (matrix). The use of bra–ket vectors is the Dirac notation in quantum mechanics.
In the matrix representation, $|\Psi\rangle$ is represented as a column vector for the expansion coefficients $c_{i}$ in a particular basis set.
$|\Psi\rangle=\left(\begin{array}{c} c_{1} \ c_{2} \ c_{3} \ \vdots \end{array}\right) \label{14}$
The bra vector $\langle\Psi|$ refers to a row vector of the conjugate expansion coefficients $c_{i}^{*}$. Since wavefunctions are normalized, $\langle\Psi \mid \Psi\rangle=1$. Dirac notation has the advantage of brevity, often shortening the wavefunction to a simple abbreviated notation for the relevant quantum numbers in the problem. For instance, we can write eq. (1.1.7) as
$|\Psi\rangle=\sum_{i} c_{i}|i\rangle \label{15}$
where the sum is over all eigenstates and the $i^{\text {th}} \text { eigenstate }|i\rangle=\psi_{i}$. Implicit in this equation is that the expansion coefficient for the $i^{\text {th }} \text { eigenstate is } c_{i}=\langle i \mid \Psi\rangle$. With this brevity comes the tendency to hide some of the variables important to the description of the wavefunction. One has to be aware of this, and although we will use Dirac notation for most of our work, where detail is required, Schrödinger notation will be used.
The outer product $|i\rangle\langle i|$ is known as a projection operator because it can be used to project the wavefunction of the system onto the $i^{\mathrm{th}}$ eigenstate of the system as $|i\rangle\langle i \mid \Psi\rangle=c_{i}|i\rangle$. Furthermore, if we sum projection operators over the complete basis set, we obtain an identity operator
$\sum_{i}|i\rangle\langle i|=1 \label{16}$
which is a statement of the completeness of a basis set. The orthogonality of eigenfunctions (eq. (1.1.8)) is summarized as $\langle i \mid j\rangle=\delta_{i j}$.
The operator $\hat{A}$ is a square matrix that maps from one state to another
$\hat{A}\left|\Psi_{0}\right\rangle=\left|\Psi_{A}\right\rangle \label{17}$
and from eq. (1.1.6) the TISE is
$\hat{H}|\Psi\rangle=E|\Psi\rangle \label{18}$
where E is a diagonal matrix of eigenvalues whose solution is obtained from the characteristic equation
$\operatorname{det}(H-E \mathbf{I})=0 \label{19}$
The expectation value, a restatement of eq. (1.1.10), is written
$\langle A\rangle=\langle\Psi|\hat{A}| \Psi\rangle \label{20}$
or from eq. (\ref{15})
$\langle A\rangle=\sum_{i} \sum_{j} c_{i}^{*} c_{j} A_{i j} \label{21}$
where $A_{i j}=\langle i|A| j\rangle$ are the matrix elements of the operator $\hat{A}$. As we will see later, the matrix of expansion coefficients $\rho_{i j}=c_{i}^{*} c_{j}$ is known as the density matrix. From eq. (\ref{18}), we see that the expectation value of the Hamiltonian is the energy of the system,
$E=\langle\Psi|H| \Psi\rangle \label{22}$
Hermitian operators play a special role in quantum mechanics. The Hermitian adjoint of an operator $\hat{A} \text { is written } \hat{A}^{\dagger}$, and is defined as the conjugate transpose of $\hat{A}: \hat{A}^{\dagger}=\left(\hat{A}^{T}\right)^{*}$. From this we see $\langle\hat{A} \psi \mid \phi\rangle=\left\langle\psi \mid \hat{A}^{\dagger} \phi\right\rangle$. A Hermitian operator is one that is self-adjoint, i.e., $\hat{A}^{\dagger}=\hat{A}$. For a Hermitian operator, a unique unitary transformation exists that will diagonalize it.
Each basis set provides a different route to representing the same physical system, and a similarity transformation S transforms a matrix from one orthonormal basis to another. A transformation from the state $|\Psi\rangle \text { to the state }|\Theta\rangle$ can be expressed as
$|\Theta\rangle=S|\Psi\rangle$
where the elements of the matrix are $S_{i j}=\left\langle\theta_{i} \mid \psi_{j}\right\rangle$. Then the reverse transformation is
$|\Psi\rangle=S^{\dagger}|\Theta\rangle$
If $S^{-1} = S^{\dagger}$, then $S^{\dagger} S=1$ and the transformation is said to be unitary. A unitary transformation refers to a similarity transformation in Hilbert space that preserves the scalar product, i.e., the length of the vector. The transformation of an operator from one basis to another is obtained from $S^{\dagger} A S$ and diagonalizing refers to finding the unitary transformation that puts the matrix A in diagonal form.
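A small numerical example may make the diagonalization language concrete. The Python sketch below (our illustration, using an arbitrary random Hermitian matrix rather than any particular physical Hamiltonian) builds such a matrix, finds the unitary matrix $S$ whose columns are its eigenvectors, and confirms that $S^{\dagger} H S$ is diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary 4x4 Hermitian matrix standing in for a Hamiltonian in some basis
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = 0.5 * (M + M.conj().T)

E, S = np.linalg.eigh(H)          # eigenvalues and eigenvectors (columns of S)

assert np.allclose(S.conj().T @ S, np.eye(4))        # S is unitary
assert np.allclose(S.conj().T @ H @ S, np.diag(E))   # the similarity transform diagonalizes H

print(np.round(E, 4))
```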
Properties of operators
1. The inverse of $\hat{A}\left(\text { written } \hat{A}^{-1}\right)$ is defined by
$\hat{A}^{-1} \hat{A}=\hat{A} \hat{A}^{-1}=1$
2. The transpose of $\hat{A}\left(\text { written } A^{T}\right)$ is
$\left(A^{T}\right)_{n q}=A_{q n}$
If $A^{T}=-A$ then the matrix is antisymmetric.
3. The trace of $\hat{A}$ is defined as
$\operatorname{Tr}(\hat{A})=\sum_{q} A_{q q}$
The trace of a matrix is invariant to a similarity operation.
4. The Hermitian adjoint of $\hat{A}\left(\text { written } \hat{A}^{\dagger}\right)$ is
$\begin{array}{l} \hat{A}^{\dagger}=\left(\hat{A}^{T}\right)^{*} \ \left(\hat{A}^{\dagger}\right)_{n q}=\left(\hat{A}_{q n}\right)^{*} \end{array}$
5. $\hat{A}$ is Hermitian if $\hat{A}^{\dagger}=\hat{A}$
$\left(\hat{A}^{T}\right)^{*}=\hat{A}$
If $\hat{A}$ is Hermitian, then $\hat{A}^{n}$ is Hermitian and $e^{\hat{A}}$ is Hermitian. For a Hermitian operator, $\langle\psi \mid \hat{A} \varphi\rangle=\langle\hat{A} \psi \mid \varphi\rangle$. Expectation values of Hermitian operators are real, so all physical observables are associated with Hermitian operators.
6. $\hat{A}$ is a unitary operator if its adjoint is also its inverse:
$\begin{array}{l} \hat{A}^{\dagger}=\hat{A}^{-1} \ \left(\hat{A}^{T}\right)^{*}=\hat{A}^{-1} \ \hat{A} \hat{A}^{\dagger}=1 \quad \Rightarrow \quad\left(\hat{A} \hat{A}^{\dagger}\right)_{n q}=\delta_{n q} \end{array}$
7. If $\hat{A}^{\dagger}=-\hat{A}$, then $\hat{A}$ is said to be anti-Hermitian. Anti-Hermitian operators have imaginary expectation values. Any operator can be decomposed into its Hermitian and anti-Hermitian parts as
$\begin{array}{l} \hat{A}=\hat{A}_{H}+\hat{A}_{A H} \ \hat{A}_{H}=\frac{1}{2}\left(\hat{A}+\hat{A}^{\dagger}\right) \ \hat{A}_{A H}=\frac{1}{2}\left(\hat{A}-\hat{A}^{\dagger}\right) \end{array}$
Properties of commutators
From the definition of a commutator:
$[\hat{A}, \hat{B}]=\hat{A} \hat{B}-\hat{B} \hat{A}$
we find it is anti-symmetric to exchange:
$[\hat{A}, \hat{B}]=-[\hat{B}, \hat{A}]$
and distributive:
$[\hat{A}, \hat{B}+\hat{C}]=[\hat{A}, \hat{B}]+[\hat{A}, \hat{C}]$
These properties lead to a number of useful identities:
$\left[\hat{A}, \hat{B}^{n}\right]=n \hat{B}^{n-1}[\hat{A}, \hat{B}]$
$\left[\hat{A}^{n}, \hat{B}\right]=n \hat{A}^{n-1}[\hat{A}, \hat{B}]$
$[\hat{A}, \hat{B} \hat{C}]=[\hat{A}, \hat{B}] \hat{C}+\hat{B}[\hat{A}, \hat{C}]$
$[[\hat{C}, \hat{B}], \hat{A}]=[[\hat{A}, \hat{B}], \hat{C}]$
$[\hat{A},[\hat{B}, \hat{C}]]+[\hat{B},[\hat{C}, \hat{A}]]+[\hat{C},[\hat{A}, \hat{B}]]=0$
The Hermitian conjugate of a commutator is
$[\hat{A}, \hat{B}]^{\dagger}=\left[\hat{B}^{\dagger}, \hat{A}^{\dagger}\right]$
Also, the commutator of two Hermitian operators is also Hermitian. The anti-commutator is defined as
$[\hat{A}, \hat{B}]_{+}=\hat{A} \hat{B}+\hat{B} \hat{A}$
and is symmetric to exchange. For two Hermitian operators, their product can be written in terms of the commutator and anti-commutator as
$\hat{A} \hat{B}=\frac{1}{2}[\hat{A}, \hat{B}]+\frac{1}{2}[\hat{A}, \hat{B}]_{+}$
The anti-commutator is the real part of the product of two operators, whereas the commutator is the imaginary part.
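Most of these relations can be verified numerically with random matrices. The Python fragment below (an added illustration; the matrices are arbitrary and carry no physical meaning) checks a few of them.

```python
import numpy as np

rng = np.random.default_rng(1)

def comm(A, B):
    return A @ B - B @ A

def anticomm(A, B):
    return A @ B + B @ A

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return 0.5 * (M + M.conj().T)

A, B, C = (random_hermitian(5) for _ in range(3))

# [A, BC] = [A, B]C + B[A, C]
assert np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C))

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
assert np.allclose(comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B)), 0.0)

# [A, B]^dagger = [B^dagger, A^dagger]
assert np.allclose(comm(A, B).conj().T, comm(B.conj().T, A.conj().T))

# AB = (1/2)[A, B] + (1/2)[A, B]_+
assert np.allclose(A @ B, 0.5 * comm(A, B) + 0.5 * anticomm(A, B))

print("commutator identities verified numerically")
```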
This section summarizes the results that emerge for common models for quantum mechanical objects. These form the starting point for describing the motion of electrons and the translational, rotational, and vibrational motions for molecules. Thus they are the basis for developing intuition about more complex problems.
Waves
Waves form the basis for our quantum mechanical description of matter. Waves describe the oscillatory amplitude of matter and fields in time and space, and can take a number of forms. The simplest form we will use is plane waves, which can be written as
$\psi ( \mathbf {r} , t ) = \mathbf {A} \exp [ i \mathbf {k} \cdot \mathbf {r} - \mathbf {i} \omega t ] \label{43}$
The angular frequency $ω$ describes the oscillations in time and is related to the number of cycles per second through $ν = ω/2π$. The wave amplitude also varies in space as determined by the wavevector $\mathbf {k}$, where the distance spanned by one cycle (the wavelength) is $λ = 2π /k$. Thus the wave propagates in time and space along a direction $\mathbf {k}$ with a vector amplitude $\mathbf {A}$ and a phase velocity $v_{\phi} = νλ = ω /k$.
Free Particles
For a free particle of mass $m$ in one dimension, the Hamiltonian only reflects the kinetic energy of the particle
$\hat {H} = \hat {T} = \frac {\hat {p}^{2}} {2 m} \label{44}$
Judging from the functional form of the momentum operator, we assume that the wavefunctions will have the form of plane waves
$\psi (x) = A e^{i k x} \label{45}$
Inserting this expression into the TISE, eq. (1.1.6), we find that
$k = \sqrt {\frac {2 m E} {\hbar^{2}}} \label{46}$
and set $A = 1 / \sqrt {2 \pi}$. Now, since we know that $E = p^{2} / 2 m$, we can write
$k = \frac {p} {\hbar} \label{47}$
$k$ is the wavevector, which we equate with the momentum of the particle.
Free particle plane waves $\psi _ {k} (x)$ form a complete and continuous basis set with which to describe the wavefunction. Note that the eigenfunctions, Equation (\ref{45}), are oscillatory over all space. Thus describing a plane wave allows one to exactly specify the wavevector or momentum of the particle, but one cannot localize it to any point in space. In this form, the free particle is not observable because its wavefunction extends infinitely and cannot be normalized. An observation, however, taking an expectation value of a Hermitian operator will collapse this wavefunction to yield an average momentum of the particle with a corresponding uncertainty relationship to its position.
Bound particles
Particle-in-a-Box
The minimal model for translational motion of a particle that is confined in space is given by the particle-in-a-box. For the case of a particle confined in one dimension in a box of length L with impenetrable walls, we define the Hamiltonian as
$\hat {H} = \frac {\hat {p}^{2}} {2 m} + V (x) \label{48}$
$V (x) = \left\{\begin{array} {l l} {0} & {0 < x < L _ {x}} \ {\infty} & {\text {otherwise}} \end{array} \right. \label{49}$
The boundary conditions require that the particle cannot have any probability of being within the wall, so the wavefunction should vanish at $x = 0$ and $L_x$, as with standing waves. We therefore assume a solution in the form of a sine function. The properly normalized eigenfunctions are
$\psi _ {n} = \sqrt {\frac {2} {L}} \sin \frac {n \pi x} {L} \quad n = 1,2,3 \dots \label{50}$
Here $n$ are the integer quantum numbers that describe the harmonics of the fundamental frequency $\pi/L$ whose oscillations will fit into the box while obeying the boundary conditions. We see that any state of the particle-in-a-box can be expressed in a Fourier series. On inserting Equation \ref{50} into the time-independent Schrödinger equation, we find the energy eigenvalues
$E _ {n} = \frac {n^{2} \pi^{2} \hbar^{2}} {2 m L^{2}} \label{51}$
Note that the spacing between adjacent energy levels, $E_{n+1}-E_{n}$, grows linearly with $n$. This model is readily extended to a three-dimensional box by separating the box into $x$, $y$, and $z$ coordinates. Then
$\hat {H} = \hat {H} _ {x} + \hat {H} _ {y} + \hat {H} _ {z} \label{52}$
in which each term is specified as Equation \ref{48}. Since $\hat {H} _ {x}$, $\hat {H} _ {y}$, $\hat {H} _ {z}$ commute, each dimension is separable from the others. Then we find
$\psi ( x , y , z ) = \psi _ {x} \psi _ {y} \psi _ {z} \label{53}$
and
$E _ {x , y , z} = E _ {x} + E _ {y} + E _ {z} \label{54}$
which follow the definitions given in Equation \ref{50} and \ref{51} above. The state of the system is now specified by three quantum numbers with positive integer values: $n_x$, $n_y$, $n_z$
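As a concrete illustration (our addition, with an arbitrary choice of an electron in a 1 nm cubic box), the Python sketch below evaluates the lowest few distinct three-dimensional box energies; permuting the quantum numbers leaves the energy unchanged, which is the origin of the degeneracies.

```python
import numpy as np
from itertools import product

hbar = 1.054572e-34   # J s
m_e  = 9.109382e-31   # kg (electron mass, an arbitrary choice of particle)
L    = 1.0e-9         # m (a 1 nm cubic box)
eV   = 1.602177e-19   # J

def box_energy(nx, ny, nz):
    """E = pi^2 hbar^2 (nx^2 + ny^2 + nz^2) / (2 m L^2) for a cubic box."""
    return np.pi**2 * hbar**2 * (nx**2 + ny**2 + nz**2) / (2.0 * m_e * L**2)

# Distinct energies (in eV) for quantum numbers 1..3 in each direction;
# permutations such as (2,1,1), (1,2,1), (1,1,2) give the same (degenerate) energy.
levels = sorted({round(box_energy(*n) / eV, 6) for n in product(range(1, 4), repeat=3)})
print(levels[:5])
```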
Figure 1. Particle-in-a-box potential wavefunctions that are plotted superimposed on their corresponding energy levels.
Figure 2. Harmonic oscillator potential showing wavefunctions that are superimposed on their corresponding energy levels.
Harmonic Oscillator
The harmonic oscillator Hamiltonian refers to a particle confined to a parabolic, or harmonic, potential. We will use it to represent vibrational motion in molecules, but it also becomes a general framework for understanding all bosons. For a classical particle bound in a one-dimensional potential, the potential near the minimum $x_0$ can be expanded as
$V (x) = V \left( x _ {0} \right) + \left( \frac {\partial V} {\partial x} \right) _ {x = x _ {0}} \left( x - x _ {0} \right) + \frac {1} {2} \left( \frac {\partial^{2} V} {\partial x^{2}} \right) _ {x = x _ {0}} \left( x - x _ {0} \right)^{2} + \cdots \label{55}$
Setting $x_0$ to 0 and taking $V\left(x_0\right)=0$, the leading term with a dependence on $x$ is the second-order (harmonic) term $V = \kappa x^{2} / 2$, where the force constant
$\kappa = \left( \partial^{2} V / \partial x^{2} \right) _ {x = 0}.$
The classical Hamiltonian for a particle of mass $m$ confined to this potential is
$H = \frac {p^{2}} {2 m} + \frac {1} {2} \kappa x^{2} \label{56}$
Noting that the force constant and frequency of oscillation are related by
$\kappa = m \omega _ {0}^{2},$
we can substitute operators for $p$ and $x$ in Equation \ref{56} to obtain the quantum Hamiltonian
$\hat {H} = - \frac {1} {2} \frac {\hbar^{2}} {m} \frac {\partial^{2}} {\partial x^{2}} + \frac {1} {2} m \omega _ {0}^{2} \hat {x}^{2} \label{57}$
We will also make use of reduced mass-weighted coordinates defined as
$p = \sqrt {\frac {1} {2 m \hbar \omega _ {0}}} \hat {p}\label{58A}$
$x = \sqrt {\frac {m \omega _ {0}} {2 \hbar}} \hat {x} \label{58B}$
for which the Hamiltonian can be written as
$\hat {H} = \hbar \omega _ {0} \left( p^{2} + x^{2} \right) \label{59}$
The eigenstates for the Harmonic oscillator are expressed in terms of Hermite polynomials
$\psi _ {n} (x) = \sqrt {\frac {\alpha} {2^{n} \sqrt {\pi} n !}} e^{- \alpha^{2} x^{2} / 2} \mathcal {H} _ {n} ( \alpha x ) \label{60}$
where $\alpha=\sqrt{m \omega_{0} / \hbar}$ and the Hermite polynomials are obtained from
$\mathcal {H} _ {n} (x) = ( - 1 )^{n} e^{x^{2}} \frac {d^{n}} {d x^{n}} e^{- x^{2}} \label{61}$
The corresponding energy eigenvalues are equally spaced in units of the vibrational quantum $\hbar \omega _ {0}$ above the zero-point energy $\hbar \omega _ {0} / 2$.
$E_{n}=\hbar \omega_{0}\left(n+\frac{1}{2}\right) \quad n=0,1,2 \ldots \label{62}$
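These eigenfunctions and eigenvalues are straightforward to evaluate numerically. The sketch below (not part of the original notes; it assumes units with $\hbar = m = \omega_0 = 1$, so $\alpha = 1$) builds Equation \ref{60} from SciPy's physicists' Hermite polynomials and checks the normalization together with the energy ladder of Equation \ref{62}.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

hbar, m, omega0 = 1.0, 1.0, 1.0   # illustrative unit choice
alpha = np.sqrt(m * omega0 / hbar)

def psi_n(n, x):
    """Harmonic oscillator eigenfunction built from physicists' Hermite polynomials."""
    norm = np.sqrt(alpha / (2.0**n * np.sqrt(np.pi) * factorial(n)))
    return norm * np.exp(-(alpha * x)**2 / 2) * eval_hermite(n, alpha * x)

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
for n in range(4):
    norm = np.sum(psi_n(n, x)**2) * dx       # should be ~1
    E_n = hbar * omega0 * (n + 0.5)          # Equation 62
    print(f"n={n}: <psi|psi> = {norm:.6f}, E_n = {E_n:.2f}")
```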
Raising and Lowering Operators for Harmonic Oscillators
From a practical point of view, it will be most useful for us to work problems involving harmonic oscillators in terms of raising and lowering operators (also known as creation and annihilation operators, or ladder operators). We define these as
$\hat {a} = \sqrt {\frac {m \omega _ {0}} {2 \hbar}} \left( \hat {x} + \frac {i} {m \omega _ {0}} \hat {p} \right) \label{63}$
$\hat {a}^{\dagger} = \sqrt {\frac {m \omega _ {0}} {2 \hbar}} \left( \hat {x} - \frac {i} {m \omega _ {0}} \hat {p} \right) \label{64}$
Note $a$ and $a^†$ operators are Hermitian conjugates of one another. These operators get their name from their action on the harmonic oscillator wavefunctions, which is to lower or raise the state of the system:
$\hat {a} | n \rangle = \sqrt {n} | n - 1 \rangle \label{65}$
and
$\hat {a}^{\dagger} | n \rangle = \sqrt {n + 1} | n + 1 \rangle$
Then we find that the position and momentum operators are
$\hat {x} = \sqrt {\frac {\hbar} {2 m \omega _ {0}}} \left( \hat {a}^{\dagger} + \hat {a} \right) \label{66}$
$\hat {p} = i \sqrt {\frac {m \hbar \omega _ {0}} {2}} \left( \hat {a}^{\dagger} - \hat {a} \right) \label{67}$
When we substitute these ladder operators for the position and momentum operators—known as second quantization—the Hamiltonian becomes
$\hat {H} = \hbar \omega _ {0} \left( \hat {n} + \frac {1} {2} \right) \label{68}$
The number operator is defined as $\hat {n} = \hat {a}^{\dagger} \hat {a}$ and returns the quantum number of the state: $\hat {n} | n \rangle = n | n \rangle$. The energy eigenvalues satisfying $\hat {H} | n \rangle = E _ {n} | n \rangle$ are given by Equation \ref{62}. Since the quantum numbers cannot be negative, we assert a boundary condition $\hat{a} | 0 \rangle = 0$, where $0$ refers to the null vector. The harmonic oscillator Hamiltonian expressed in raising and lowering operators, together with its commutation relationship
$\left[ a , a^{\dagger} \right] = 1 \label{69}$
is used as a general representation of all bosons, which for our purposes includes vibrations and photons.
Properties of raising and lowering operators
$a$ and $a^†$ operators are Hermitian conjugates of one another.
$\frac {1} {2} \left( a a^{\dagger} + a^{\dagger} a \right) = a^{\dagger} a + \frac {1} {2} \label{70}$
$\left[ a , a^{\dagger} \right] = 1 \label{71}$
$[ a , a ] = 0, \quad \left[ a^{\dagger} , a^{\dagger} \right] = 0 \label{72}$
$\left[ a , \left( a^{\dagger} \right)^{n} \right] = n \left( a^{\dagger} \right)^{n - 1} \label{73}$
$\left[ a^{\dagger} , a^{n} \right] = - n a^{n - 1} \label{74}$
$| n \rangle = \frac {1} {\sqrt {n !}} \left( a^{\dagger} \right)^{n} | 0 \rangle \label{75}$
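Because all of these relations are purely algebraic, they are easy to check in a truncated number-state basis. The sketch below (not part of the original notes; the basis size is an arbitrary illustrative choice) builds matrix representations of $\hat{a}$, $\hat{a}^\dagger$, $\hat{n}$, and $\hat{H}$ from Equation \ref{65} and verifies the commutator and the energy ladder.

```python
import numpy as np

N = 20  # truncated harmonic-oscillator basis size (illustrative)

# Lowering operator in the |n> basis: a|n> = sqrt(n)|n-1>, so a[n-1, n] = sqrt(n).
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Number operator and Hamiltonian (in units of hbar*omega0), Equation 68.
n_op = adag @ a
H = n_op + 0.5 * np.eye(N)

# Commutator [a, a^dagger] = 1 holds exactly except in the last row/column of the truncation.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True
print(np.diag(H)[:5])                                # [0.5 1.5 2.5 3.5 4.5]
```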
Morse Oscillator
The Morse oscillator is a model for a particle in a one-dimensional anharmonic potential energy surface with a dissociative limit at infinite displacement. It is commonly used for describing the spectroscopy of diatomic molecules and anharmonic vibrational dynamics, and most of its properties can be expressed through analytical expressions.3 The Morse potential is
$V (x) = D _ {e} \left[ 1 - e^{- \alpha x} \right]^{2} \label{76}$
where $x = \left( r - r _ {0} \right)$. $D_e$ sets the depth of the energy minimum at $r = r_0$ relative to the dissociation limit as $r → ∞$, and α sets the curvature of the potential. If we expand $V$ in powers of $x$ as described in Equation \ref{55}
$V (x) \approx \frac {1} {2} \kappa x^{2} + \frac {1} {6} g x^{3} + \frac {1} {24} h x^{4} + \cdots \label{77}$
we find that the harmonic, cubic, and quartic expansion coefficients are
$\kappa = 2 D _ {e} \alpha^{2},$
$g = - 6 D _ {e} \alpha^{3},$
and $h = 14 D _ {e} \alpha^{4}.$
The Morse oscillator Hamiltonian for a diatomic molecule of reduced mass $m_R$ bound by this potential is
$H = \frac {p^{2}} {2 m _ {R}} + V (x) \label{78}$
and has the eigenvalues
$E _ {n} = \hbar \omega _ {0} \left[ \left( n + \frac {1} {2} \right) - x _ {e} \left( n + \frac {1} {2} \right)^{2} \right] \label{79}$
Here $\omega _ {0} = \sqrt {2 D _ {e} \alpha^{2} / m _ {R}}$ is the fundamental frequency and $x _ {e} = \hbar \omega _ {0} / 4 D _ {e}$ is the anharmonic constant. As for the harmonic oscillator, the frequency is $\omega _ {0} = \sqrt {\kappa / m _ {R}}$. The anharmonic constant $x_e$ is commonly seen in the spectroscopic expression for the anharmonic vibrational energy levels
$G ( v ) = \omega _ {e} \left( v + \frac {1} {2} \right) - \omega _ {e} x _ {e} \left( v + \frac {1} {2} \right)^{2} + \omega _ {e} y _ {e} \left( v + \frac {1} {2} \right)^{3} + \cdots \label{80}$
From Equation \ref{79}, the ground state (or zero-point) energy is
$E _ {0} = \frac {1} {2} \hbar \omega _ {0} \left( 1 - \frac {1} {2} x _ {e} \right) \label{81}$
So the dissociation energy for the Morse potential is given by $D_{0}=D_{e}-E_{0}$. The transition energies are
$E _ {n} - E _ {m} = \hbar \omega _ {0} ( n - m ) \left[ 1 - x _ {e} \left( n + m + \frac {1} {2} \right) \right] \label{82}$
The proper harmonic expressions are obtained from the corresponding Morse oscillator expressions by setting $D _ {e} \rightarrow \infty$ or $x _ {e} \rightarrow 0$.
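The Morse level structure is easy to tabulate from Equation \ref{79}. The sketch below (not part of the original notes; all parameter values are arbitrary illustrative choices in units with $\hbar = 1$) also counts the number of bound states, which is finite because $E_n$ stops increasing once $(n + 1/2)$ reaches $1/(2x_e)$.

```python
import numpy as np

hbar = 1.0
De = 0.17      # well depth (assumed value)
alpha = 1.0    # range parameter (assumed value)
mR = 1800.0    # reduced mass (assumed value)

omega0 = np.sqrt(2 * De * alpha**2 / mR)   # fundamental frequency
xe = hbar * omega0 / (4 * De)              # anharmonic constant

def E_morse(n):
    """Morse eigenvalue, Equation 79."""
    return hbar * omega0 * ((n + 0.5) - xe * (n + 0.5)**2)

# Bound states exist only while dE/dn > 0, i.e. while (n + 1/2) < 1/(2*xe).
n_max = int(np.floor(1.0 / (2 * xe) - 0.5))
levels = [E_morse(n) for n in range(n_max + 1)]

print(f"bound states: {n_max + 1}")
print("fundamental transition E1 - E0 =", levels[1] - levels[0])
print("zero-point energy          E0 =", levels[0], " vs harmonic", 0.5 * hbar * omega0)
```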
Figure 3. Shape of the Morse potential illustrating the first six energy eigenvalues.
Figure 4. First six eigenfunctions of the Morse oscillator potential.
The wavefunctions for the Morse oscillator can also be expressed analytically in terms of associated Laguerre polynomials $\mathcal {L} _ {n}^{b} ( z )$
$\psi _ {n} = N _ {n} e^{- z / 2} z^{b / 2} \mathcal {L} _ {n}^{b} ( z ) \label{83}$
where $N_{n}=[\alpha \cdot b \cdot n ! / \Gamma(k-n)]^{1 / 2}$, $z=k \exp [-\alpha q], b=k-2 n-1$, and $k=4 D_{e} / \hbar \omega_{0}$. These expressions and those for matrix elements in $q, q^{2}, \mathrm{e}^{-\alpha q}, \text {and} q \mathrm{e}^{-\alpha q}$ have been given by Vasan and Cross.
Angular momentum
Angular Momentum Operators
To describe quantum mechanical rotation or orbital motion, one has to quantize angular momentum. The total orbital angular momentum operator is defined as
$\hat{L}=\hat{r} \times \hat{p}=-i \hbar(\hat{r} \times \nabla)$
It has three components $\left(\hat{L}_{x}, \hat{L}_{y}, \hat{L}_{z}\right)$ that generate rotation about the $x$, $y$, or $z$ axis, and whose squared magnitude is given by
$\hat{L}^{2}=\hat{L}_{x}^{2}+\hat{L}_{y}^{2}+\hat{L}_{z}^{2}$. The angular momentum operators follow the commutation relationships
$\left[ H , L _ {z} \right] = 0 \label{85A}$
$\left[ H , L^{2} \right] = 0 \label{85B}$
$\left[ L _ {x} , L _ {y} \right] = i \hbar L _ {z} \label{86}$
(In Equation \ref{86} the $x$, $y$, $z$ indices can be cyclically permuted.) There is an eigenbasis common to $H$, $L^2$, and one of the $L_i$, which we take to be $L_z$. The eigenvalues for the orbital angular momentum operator $L^2$ and the $z$-projection of the angular momentum $L_z$ are
$L^{2} | \ell m \rangle = \hbar^{2} \ell ( \ell + 1 ) | \ell m \rangle \quad \ell = 0,1,2 \ldots \label{87}$
$L _ {z} | \ell m \rangle = \hbar m | \ell m \rangle \quad m = 0 , \pm 1 , \pm 2 \ldots \pm \ell \label{88}$
where the eigenstates $| \ell m \rangle$ are labeled by the orbital angular momentum quantum number $\ell$, and the magnetic quantum number, $m$.
Similar to the strategy used for the harmonic oscillator, we can also define raising and lowering operators for the total angular momentum,
$\hat {L} _ {\pm} = \hat {L} _ {x} \pm \mathrm {i} \hat {L} _ {y} \label{89}$
which follow the commutation relations $\left[ \hat {L}^{2} , \hat {L} _ {\pm} \right] = 0$ and $\left[ \hat {L} _ {z} , \hat {L} _ {\pm} \right] = \pm \hbar \hat {L} _ {\pm}$, and satisfy the eigenvalue equation
$\hat {L} _ {\pm} | \ell m \rangle = A _ {\ell , m}^{\pm} | \ell , m \pm 1 \rangle \label{90}$
$A _ {\ell , m}^{\pm} = \hbar [ \ell ( \ell + 1 ) - m ( m \pm 1 ) ]^{1 / 2}$
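These relations fully determine the matrix representation of the angular momentum operators in the $|\ell m\rangle$ basis. The sketch below (not part of the original notes; it works in units of $\hbar$ and orders the basis as $m = -\ell, \ldots, +\ell$) constructs $L_z$ and $L_\pm$ from Equation \ref{90} and verifies the commutation relation of Equation \ref{86}.

```python
import numpy as np

def angular_momentum_matrices(l):
    """Matrices of Lx, Ly, Lz (in units of hbar) in the |l, m> basis, m = -l ... +l."""
    m = np.arange(-l, l + 1)
    Lz = np.diag(m).astype(complex)
    # <l, m+1| L+ |l, m> = sqrt(l(l+1) - m(m+1)), Equation 90.
    Lp = np.diag(np.sqrt(l * (l + 1) - m[:-1] * (m[:-1] + 1)), k=-1).astype(complex)
    Lm = Lp.conj().T
    Lx = 0.5 * (Lp + Lm)
    Ly = -0.5j * (Lp - Lm)
    return Lx, Ly, Lz

l = 2
Lx, Ly, Lz = angular_momentum_matrices(l)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

print(np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz))            # [Lx, Ly] = i Lz
print(np.allclose(L2, l * (l + 1) * np.eye(2 * l + 1)))   # L^2 = l(l+1) * identity
```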
Spherically Symmetric Potential
Let’s examine the role of angular momentum for the case of a particle experiencing a spherically symmetric potential V(r) such as the hydrogen atom, 3D isotropic harmonic oscillator, and free particles or molecules. For a particle with mass $m$, the Hamiltonian is
$\hat{H}=-\frac{\hbar^{2}}{2 m} \nabla^{2}+V(r) \label{91}$
Writing the kinetic energy operator in spherical coordinates,
$-\frac{\hbar^{2}}{2 m} \nabla^{2}=-\frac{\hbar^{2}}{2 m} \frac{1}{r^{2}} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial}{\partial r}\right)+\frac{\hat{L}^{2}}{2 m r^{2}} \label{92}$
where the square of the total angular momentum is
$\hat{L}^{2}=-\hbar^{2}\left[\frac{1}{\sin ^{2} \theta} \frac{\partial^{2}}{\partial \phi^{2}}+\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial}{\partial \theta}\right)\right] \label{93}$
We note that this representation separates the radial dependence in the Hamiltonian from the angular part. We therefore expect that the overall wavefunction can be written as a product of a radial and an angular part in the form
$\psi(r, \theta, \phi)=R(r) Y(\theta, \phi) \label{94}$
Substituting this into the TISE, we find that we can solve for the angular and radial wavefunctions separately. Considering first the angular part, we note that the potential is a function of $r$ only, so we need only consider the angular momentum. This leads to the identities in eqs. (\ref{87}) and (\ref{88}), and reveals that the $|\ell m\rangle$ wavefunctions projected onto spherical coordinates are the spherical harmonics
$Y_{\ell}^{m}(\theta, \phi)=N_{\ell m}^{Y} P_{\ell}^{|m|}(\cos \theta) \mathrm{e}^{i m \phi} \label{95}$
$P_{\ell}^{m}$ are the associated Legendre polynomials and the normalization factor is
$N_{\ell m}^{Y}=(-1)^{(m+|m|) / 2} i^{\ell}\left[\frac{2 \ell+1}{4 \pi} \frac{(\ell-|m|) !}{(\ell+|m|) !}\right]^{1 / 2}$
The angular components of the wavefunction are common to all eigenstates of spherically symmetric potentials. In chemistry, it is common to use real angular wavefunctions instead of the complex form in eq. (\ref{95}). These are constructed from the linear combinations $Y_{\ell}^{m} \pm Y_{\ell}^{-m}$.
Substituting eq. (\ref{92}) and eq. (\ref{87}) into eq. (\ref{91}) leads to a new Hamiltonian that can be inserted into the Schrödinger equation. This can be solved as a purely radial problem for a given value of $\ell$. It is convenient to define the radial distribution function $\chi(r)=r R(r)$, which allows the TISE to be rewritten as
$\left(-\frac{\hbar^{2}}{2 m} \frac{\partial^{2}}{\partial r^{2}}+U(r, \ell)\right) \chi=E \chi \label{96}$
U plays the role of an effective potential
$U(r, \ell)=V(r)+\frac{\hbar^{2}}{2 m r^{2}} \ell(\ell+1) \label{97}$
Equation (\ref{96}) is known as the radial wave equation. It looks like the TISE for a one-dimensional problem in r, where we could solve this equation for each value of $\ell$. Note U has a barrier due to centrifugal kinetic energy that scales as $r^{-2} \text {for} \ell>0$.
The wavefunctions defined in eq. (\ref{94}) are normalized such that
$\int|\psi|^{2} d \Omega=1 \label{98}$
where
$\int d \Omega \equiv \int_{0}^{\infty} r^{2} d r \int_{0}^{\pi} \sin \theta d \theta \int_{0}^{2 \pi} d \phi \label{99}$
If we integrate over only the angular variables, we find that the probability of finding a particle between a distance $r$ and $r+d r$ is $P(r)=r^{2}|R(r)|^{2}=|\chi(r)|^{2}$.
To this point the treatment of orbital angular momentum is identical for any spherically symmetric potential. Now we must consider the specific form of the potential; for instance, in the case of the isotropic harmonic oscillator, $V(r)=\frac{1}{2} \kappa r^{2}$. In the case of a free particle, we substitute $V(r)=0$ in eq. (\ref{97}) and find that the radial solutions can be written in terms of spherical Bessel functions, $j_{\ell}$. Then the solutions to the full wavefunction for the free particle can be written as
$\Psi(r, \theta, \phi)=j_{\ell}(\mathrm{k} r) Y_{\ell}^{m}(\theta, \phi) \label{100}$
where the wavevector k is defined as in eq. (\ref{46}).
Hydrogen Atom
For a hydrogen-like atom, a single electron of charge $-e$ interacts with a nucleus of charge $+Ze$ under the influence of a Coulomb potential
$V_{H}(r)=-\frac{Z e^{2}}{4 \pi \epsilon_{0}} \frac{1}{r} \label{101}$
We can simplify the expression by defining atomic units for distance and energy. The Bohr radius is defined as
$a_{0}=4 \pi \varepsilon_{0} \frac{\hbar^{2}}{m_{e} e^{2}}=5.2918 \times 10^{-11} \mathrm{~m} \label{102}$
and the Hartree is
$\mathcal{E}_{H}=\frac{1}{4 \pi \varepsilon_{0}} \frac{e^{2}}{a_{0}}=4.3598 \times 10^{-18} J=27.2 \mathrm{eV} \label{103}$
Written in terms of atomic units, we can see from eq. (\ref{103}) that eq. (\ref{101}) becomes $\left(V / \mathcal{E}_{H}\right)=-Z /\left(r / a_{0}\right)$. Thus the conversion effectively sets the SI variables $m_{\mathrm{e}}=e=\left(4 \pi \varepsilon_{0}\right)^{-1}=\hbar = 1$. Then the radial wave equation is
$\frac{\partial^{2} \chi}{\partial r^{2}}+\left(\frac{2 Z}{r}-\frac{\ell(\ell+1)}{r^{2}}\right) \chi=-2 E \chi \label{104}$
The effective potential within the parentheses in eq. (\ref{104}) is shown in Figure 5 for varying $\ell$. Solutions to the radial wavefunction for the hydrogen atom take the form
$R_{n \ell}(r)=N_{n \ell}^{R} \rho^{\ell} \mathcal{L}_{n+\ell}^{2 \ell+1}(\rho) e^{-\rho / 2} \label{105}$
where the reduced radius $\rho=2 r / n a_{0} \text {and} \mathcal{L}_{k}^{\alpha}(z)$ are the associated Laguerre polynomials. The principal quantum number takes on integer values $n=1,2,3 \ldots, \text {and} \ell$ is constrained such that $\ell= 0,1,2 \ldots n-1$. The radial normalization factor in eq. (\ref{105}) is
$N_{n \ell}^{R}=-\frac{2}{n^{2} a_{0}^{3 / 2}}\left[\frac{(n-\ell-1) !}{[(n+\ell) !]^{3}}\right]^{1 / 2} \label{106}$
The energy eigenvalues are
$E_{n}=-\frac{Z^{2}}{2 n^{2}} \mathcal{E}_{H} \label{107}$
Figure 5. The radial effective potential, $U_{e f f}(\rho)$
Figure 6. Radial probability density R and radial distribution function $\chi=r R$.
Electron Spin
In describing electronic wavefunctions, the electron spin also contributes to the total angular momentum and adds a spin factor to the total wavefunction. The electron spin angular momentum S and its z-projection are quantized as
$S^{2}\left|s m_{s}\right\rangle=\hbar^{2} s(s+1)\left|s m_{s}\right\rangle \quad s=0,1 / 2,1,3 / 2,2 \ldots \label{108}$
$S_{z}\left|s m_{s}\right\rangle=\hbar m_{s}\left|s m_{s}\right\rangle \quad m_{s}=-s,-s+1, \ldots, s \label{109}$
where the electron spin eigenstates $\left|s m_{s}\right\rangle$ are labeled by the electron spin angular momentum quantum number s and the spin magnetic quantum number $m_s$. The number of values of $S_{z}$ is $2 s+1$ and is referred to as the spin multiplicity. As fermions, electrons have half-integer spin, and each unpaired electron contributes $1 / 2$ to the electron spin quantum number s. A single unpaired electron has $s=1 / 2, \text {for which} m_{s}=\pm 1 / 2$ corresponding to spin-up and spin-down configurations. For multi-electron systems, the spin is calculated as the vector sum of spins, essentially $1 / 2$ times the number of unpaired electrons.
The resulting total angular momentum for an electron is $J=L+S$. J has associated with it the total angular momentum quantum number $j$, which takes on values of $j=|\ell-s|,|\ell-s|+1, \ldots \ell+s$. The additive nature of the orbital and spin contributions to the angular momentum leads to a total electronic wavefunction that is a product of spatial and spin wavefunctions.
$\Psi_{\text {tot}}=\Psi(r, \theta, \phi)\left|s m_{s}\right\rangle \label{110}$
Thus the state of an electron can be specified by four quantum numbers $\Psi_{t o t}=\left|n \ell m_{\ell} m_{s}\right\rangle$.
Rigid Rotor
In the case of a freely spinning anisotropic molecule, the total angular momentum J is obtained from the sum of the orbital angular momentum L and spin angular momentum S for the molecular constituents: $J=L+S, \text {where} L=\sum_{i} L_{i} \text {and} S=\sum_{i} S_{i}$. The case of the rigid rotor refers to the minimal model for the rotational quantum states of a freely spinning object that has cylindrical symmetry and no magnetic spin. Then, the Hamiltonian is given by the rotational kinetic energy
$H_{r o t}=\frac{\hat{J}^{2}}{2 I} \label{111}$
where $I$ is the moment of inertia about the principal axis of rotation. The eigenfunctions for this Hamiltonian are the spherical harmonics $Y_{J, M}(\theta, \phi)$
$\begin{array}{ll} \hat{J}^{2}\left|Y_{J, M}\right\rangle=\hbar^{2} J(J+1)\left|Y_{J, M}\right\rangle & J=0,1,2 \ldots \[4pt] \hat{J}_{z}\left|Y_{J, M}\right\rangle=M \hbar\left|Y_{J, M}\right\rangle & M=-J,-J+1, \ldots, J \end{array} \label{112}$
J is the rotational quantum number. M is its projection onto the z axis. The energy eigenvalues for $H_{\text {rot}}$ are
$E_{J, M}=\bar{B} J(J+1) \label{113}$
where the rotational constant is
$\bar{B}=\frac{\hbar^{2}}{2 I} \label{114}$
More commonly, $\bar{B}$ is given in units of $c m^{-1} \text {using} \bar{B}=h / 8 \pi^{2} I c$.
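As a quick numerical illustration (not part of the original notes), the sketch below evaluates $\bar{B}$ in cm$^{-1}$ and the first few rotational term values; the moment of inertia is an assumed value roughly appropriate for carbon monoxide.

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e10       # speed of light in cm/s (so B comes out in cm^-1)
I = 1.45e-46            # moment of inertia in kg m^2 (assumed value, roughly CO)

B_cm = h / (8 * np.pi**2 * I * c)          # rotational constant, B = h / (8 pi^2 I c)
for J in range(5):
    E = B_cm * J * (J + 1)                 # term value in cm^-1, Equation 113
    print(f"J = {J}: E = {E:8.2f} cm^-1, degeneracy 2J+1 = {2*J + 1}")
```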
Throughout our work, we will make use of exponential operators of the form
$\hat {T} = e^{- i \hat {A}} \label{115}$
We will see that these exponential operators act on a wavefunction to move it in time and space, and are therefore also referred to as propagators. Of particular interest to us is the time-evolution operator, $\hat {U} = e^{- i \hat {H} t / \hbar},$ which propagates the wavefunction in time. Note the operator $\hat{T}$ is a function of an operator, $f(\hat{A})$. A function of an operator is defined through its expansion in a Taylor series, for instance
$\hat {T} = e^{- i \hat {A}} = \sum _ {n = 0}^{\infty} \dfrac {( - i \hat {A} )^{n}} {n !} = 1 - i \hat {A} - \dfrac {\hat {A} \hat {A}} {2} - \cdots \label{116}$
Since we use them so frequently, let’s review the properties of exponential operators that can be established with Equation \ref{116}. If the operator $\hat {A}$ is Hermitian, then $\hat {T} = e^{- i \hat {A}}$ is unitary, i.e., $\hat {T}^{\dagger} = \hat {T}^{- 1}.$ Thus the Hermitian conjugate of $\hat {T}$ reverses the action of $\hat {T}$. For the time-propagator $\hat {U}$, $\hat {U}^{\dagger}$ is often referred to as the time-reversal operator.
The eigenstates of the operator $\hat {A}$ are also eigenstates of $f(\hat {A})$, and the eigenvalues are functions of the eigenvalues of $\hat {A}$. Namely, if you know the eigenvalues and eigenvectors of $\hat {A}$, i.e., $\hat {A} \varphi _ {n} = a _ {n} \varphi _ {n},$ you can show by expanding the function that
$f ( \hat {A} ) \varphi _ {n} = f \left( a _ {n} \right) \varphi _ {n} \label{117}$
Our most common application of this property will be to exponential operators involving the Hamiltonian. Given the eigenstates $\varphi _ {n}$, then $\hat {H} | \varphi _ {n} \rangle = E _ {n} | \varphi _ {n} \rangle$ implies
$e^{- i \hat {H} t / \hbar} | \varphi _ {n} \rangle = e^{- i E _ {n} t / \hbar} | \varphi _ {n} \rangle \label{118}$
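As a quick numerical illustration of these properties (not from the original notes), the sketch below uses `scipy.linalg.expm` as a stand-in for the series of Equation \ref{116}, with an arbitrary random Hermitian matrix playing the role of $\hat{H}$ and $\hbar = 1$. It checks that $\hat{U}$ is unitary and that it multiplies an eigenvector of $\hat{H}$ by the phase of Equation \ref{118}.

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
H = 0.5 * (M + M.T)              # random Hermitian "Hamiltonian" (illustrative)

t = 0.7
U = expm(-1j * H * t)            # exponential of an operator, evaluated numerically

E, V = eigh(H)
print(np.allclose(U.conj().T @ U, np.eye(6)))                     # unitarity: U^dagger U = 1
print(np.allclose(U @ V[:, 0], np.exp(-1j * E[0] * t) * V[:, 0])) # eigenstate acquires a phase
```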
Just as $\hat {U} = e^{- i \hat {H} t / \hbar}$ is the time-evolution operator that displaces the wavefunction in time, $\hat {D} _ {x} ( \lambda ) = e^{- i \hat {p} _ {x} \lambda / \hbar}$ is the spatial displacement operator that moves $\psi$ along the $x$ coordinate. The action of $\hat {D} _ {x} ( \lambda )$ is to displace the wavefunction by an amount $\lambda$
$| \psi ( x - \lambda ) \rangle = \hat {D} _ {x} ( \lambda ) | \psi (x) \rangle \label{119}$
Also, applying $\hat {D} _ {x} ( \lambda )$ to a position operator shifts the operator by $\lambda$
$\hat {D} _ {x}^{\dagger} \hat {x} \hat {D} _ {x} = \hat {x} + \lambda \label{120}$
Thus $e^{- i \hat {p} _ {x} \lambda / \hbar} | x \rangle$ is an eigenvector of $\hat {x}$ with eigenvalue $x + \lambda$ instead of $x$. The operator
$\hat {D} _ {x} = e^{- i \hat {p} _ {x} \lambda / \hbar}$ is a displacement operator for the $x$ position coordinate. Similarly, $\hat {D} _ {y} = e^{- i \hat {p} _ {y} \lambda / \hbar}$ generates displacements in $y$ and $\hat {D}_z$ in $z$. Similar to the time-propagator $\hat {U}$, the displacement operator $\hat {D}$ must be unitary, since the action of $\hat {D}^{\dagger} \hat {D}$ must leave the system unchanged. That is, if $\hat {D} _ {x}$ shifts the system from $x_0$ to $x$, then $\hat {D} _ {x}^{\dagger}$ shifts the system from $x$ back to $x_0$.
We know intuitively that linear displacements commute. For example, if we wish to shift a particle in two dimensions, x and y, the order of displacement does not matter. We end up at the same position, whether we move along x first or along y, as illustrated in Figure 7. In terms of displacement operators, we can write
\begin{aligned} \left|x_{2}, y_{2}\right\rangle &=e^{-i b p_{y} / \hbar} e^{-i a p_{x} / \hbar}\left|x_{1}, y_{1}\right\rangle \[4pt] &=e^{-i a p_{x} / \hbar} e^{-i b p_{y} / \hbar}\left|x_{1}, y_{1}\right\rangle \end{aligned}
These displacement operators commute, as expected from $[p_x,p_y] = 0.$
Similar to the displacement operator, we can define rotation operators that depend on the angular momentum operators, $L_x$, $L_y$, and $L_z$. For instance,
$\hat {R} _ {x} ( \phi ) = e^{- i \phi L _ {x} / \hbar}$
gives a rotation by angle $\phi$ about the $x$ axis. Unlike linear displacement, rotations about different axes do not commute. For example, consider a state representing a particle displaced along the z axis, $| z 0 \rangle$. Now the action of two rotations $\hat {R} _ {x}$ and $\hat {R} _ {y}$ by an angle of $\phi = \pi / 2$ on this particle differs depending on the order of operation, as illustrated in Figure 8. If we rotate first about $x$, the operation
$e^{- i \tfrac {\pi} {2} L _ {y} / \hbar} e^{- i \tfrac {\pi} {2} L _ {x} / \hbar} | z _ {0} \rangle \rightarrow | - y \rangle \label{122}$
leads to the particle on the –y axis, whereas the reverse order
$e^{- i \tfrac {\pi} {2} L _ {x} / \hbar} e^{- i \tfrac {\pi} {2} L _ {y} / \hbar} | z _ {0} \rangle \rightarrow | + x \rangle \label{123}$
leads to the particle on the +x axis. The final states of these two rotations taken in opposite order differ by a rotation about the z axis. Since rotations about different axes do not commute, we expect the angular momentum operators not to commute. Indeed, we know that
$\left[ L _ {x} , L _ {y} \right] = i \hbar L _ {z}$
where the commutator of rotations about the x and y axes is related by a z-axis rotation. As with rotation operators, we will need to be careful with time-propagators to determine whether the order of time-propagation matters. This, in turn, will depend on whether the Hamiltonians at two points in time commute.
Properties of exponential operators
1. If $\hat{A}$ and $\hat{B}$ do not commute, but $[ \hat {A} , \hat {B} ]$ commutes with $\hat{A}$ and $\hat{B}$, then
$e^{\hat {A} + \hat {B}} = e^{\hat {A}} e^{\hat {B}} e^{- \frac {1} {2} [ \hat {A} , \hat {B} ]} \label{124}$
$e^{\hat {A}} e^{\hat {B}} = e^{\hat {B}} e^{\hat {A}} e^{- [ \hat {B} , \hat {A} ]} \label{125}$
2. More generally, if $\hat{A}$ and $\hat{B}$ do not commute,
$e^{\hat {A}} e^{\hat {B}} = {\mathrm {exp}} \left[ \hat {A} + \hat {B} + \dfrac {1} {2} [ \hat {A} , \hat {B} ] + \dfrac {1} {12} ( [ \hat {A} , [ \hat {A} , \hat {B} ] ] + [ \hat {B} , [ \hat {B} , \hat {A} ] ] ) + \cdots \right] \label{126}$
3. The Baker–Hausdorff relationship:
$\mathrm {e}^{i \hat {G} \lambda} \hat {A} \mathrm {e}^{- i \hat {G} \lambda} = \hat {A} + i \lambda [ \hat {G} , \hat {A} ] + \left( \dfrac {i^{2} \lambda^{2}} {2 !} \right) [ \hat {G} , [ \hat {G} , \hat {A} ] ] + \ldots + \left( \dfrac {i^{n} \lambda^{n}} {n !} \right) [ \hat {G} , [ \hat {G} , [ \hat {G} , \ldots [ \hat {G} , \hat {A} ] ] ] \ldots ] + \ldots \label{127}$
where $λ$ is a number.
Often the bound potentials that we encounter are complex, and the time-independent Schrödinger equation will need to be evaluated numerically. There are two common numerical methods for solving for the eigenvalues and eigenfunctions of a potential. Both methods require truncating and discretizing a region of space that is normally spanned by an infinite dimensional Hilbert space. The Numerov method is a finite difference method that calculates the shape of the wavefunction by integrating step-by-step across a grid. The DVR method makes use of a transformation between a finite discrete basis and the finite grid that spans the region of interest.
The Numerov Method
A one-dimensional Schrödinger equation for a particle in a potential can be numerically solved on a grid that discretizes the position variable using a finite difference method. The TISE is
$[ T + V (x) ] \psi (x) = E \psi (x) \label{128}$
with
$T = - \dfrac {\hbar^{2}} {2 m} \dfrac {\partial^{2}} {\partial x^{2}},$
which we can write as
$\psi^{\prime \prime} (x) = - k^{2} (x) \psi (x) \label{129}$
where
$k^{2} (x) = \dfrac {2 m} {\hbar^{2}} [ E - V (x) ].$
If we discretize the variable $x$, choosing a grid spacing $\delta x$ over which $V$ varies slowly, we can use a three point finite difference to approximate the second derivative:
$f _ {i}^{\prime \prime} \approx \dfrac {1} {\delta x^{2}} \left( f \left( x _ {i + 1} \right) - 2 f \left( x _ {i} \right) + f \left( x _ {i - 1} \right) \right) \label{130}$
The discretized Schrödinger equation can then be written in the form
$\psi \left( x _ {i + 1} \right) - 2 \psi \left( x _ {i} \right) + \psi \left( x _ {i - 1} \right) = - \delta x^{2} k^{2} \left( x _ {i} \right) \psi \left( x _ {i} \right) \label{131}$
Using the equation for $\psi \left( x _ {i + 1} \right)$, one can iteratively solve for the eigenfunction. In practice, you discretize over a range of space such that the highest and lowest values lie in a region where the potential is very high or forbidden. Splitting the space into N points, choose the first two values $\psi \left( x _ {1} \right) = 0$ and $\psi \left( x _ {2} \right)$ to be a small positive or negative number, guess $E$, and propagate iteratively to $\psi \left( x _ {N} \right)$. A comparison of the wavefunction obtained by propagating from $x_1$ to $x_N$ with that obtained propagating from $x_N$ to $x_1$ tells you how good your guess of $E$ was.
The Numerov method improves on Equation \ref{131} by accounting for the fourth derivative of the wavefunction $\psi^{( 4 )}$, leading to errors on the order of $O \left( \delta x^{6} \right)$. Equation \ref{130} becomes
$f _ {i}^{\prime \prime} \approx \dfrac {1} {\delta x^{2}} \left( f \left( x _ {i + 1} \right) - 2 f \left( x _ {i} \right) + f \left( x _ {i - 1} \right) \right) - \dfrac {\delta x^{2}} {12} f _ {i}^{( 4 )} \label{132}$
By differentiating Equation \ref{129} we know
$\psi^{( 4 )} (x) = - \left( k^{2} (x) \psi (x) \right)^{\prime \prime}$
and the discretized Schrödinger equation becomes
$- k^{2} \left( x _ {i} \right) \psi \left( x _ {i} \right) = \dfrac {1} {\delta x^{2}} \left( \psi \left( x _ {i + 1} \right) - 2 \psi \left( x _ {i} \right) + \psi \left( x _ {i - 1} \right) \right) + \dfrac {1} {12} \left( k^{2} \left( x _ {i + 1} \right) \psi \left( x _ {i + 1} \right) - 2 k^{2} \left( x _ {i} \right) \psi \left( x _ {i} \right) + k^{2} \left( x _ {i - 1} \right) \psi \left( x _ {i - 1} \right) \right) \label{133}$
This equation leads to the iterative solution for the wavefunction
$\psi \left( x _ {i + 1} \right) = \dfrac{\psi \left( x _ {i} \right) \left( 2 - \dfrac {10 \delta x^{2}} {12} k^{2} \left( x _ {i} \right) \right) - \psi \left( x _ {i - 1} \right) \left( 1 + \dfrac {\delta x^{2}} {12} k^{2} \left( x _ {i - 1} \right) \right)}{1 + \dfrac {\delta x^{2}} {12} k^{2} \left( x _ {i + 1} \right)} \label{134}$
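A minimal sketch of this shooting procedure in Python is shown below (not part of the original notes). It assumes a harmonic potential in units with $\hbar = m = \omega = 1$, an illustrative grid, and hand-picked trial energies; the endpoint value $\psi(x_N)$ is the quantity one would drive to zero when refining the guess for $E$.

```python
import numpy as np

# Grid and potential for a 1D harmonic oscillator (illustrative choices).
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
V = 0.5 * x**2

def numerov_endpoint(E):
    """Propagate psi across the grid with the Numerov recursion and return psi at the far edge."""
    k2 = 2.0 * (E - V)                     # k^2(x) = 2m(E - V)/hbar^2
    psi = np.zeros_like(x)
    psi[0], psi[1] = 0.0, 1e-6             # boundary values deep in the forbidden region
    f = 1.0 + dx**2 * k2 / 12.0
    for i in range(1, len(x) - 1):
        psi[i + 1] = ((2.0 - 10.0 * dx**2 * k2[i] / 12.0) * psi[i]
                      - f[i - 1] * psi[i - 1]) / f[i + 1]
    return psi[-1]

# Near a true eigenvalue the endpoint passes through zero; scan trial energies to bracket it.
for E in [0.40, 0.50, 0.60]:
    print(f"E = {E:.2f}: psi(x_N) = {numerov_endpoint(E):.3e}")
```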
Discrete Variable Representation (DVR)
Numerical solutions to the wavefunctions of a bound potential in the position representation require truncating and discretizing a region of space that is normally spanned by an infinite dimensional Hilbert space. The DVR approach uses a real space basis set whose eigenstates $\varphi _ {i} (x)$ we know and that span the space of interest—for instance harmonic oscillator wavefunctions—to express the eigenstates of a Hamiltonian in a grid basis ($\theta _ {j}$) that is meant to approximate the real space continuous basis $\delta (x)$. The two basis sets, which we term the eigenbasis ($\varphi$) and grid basis ($\theta$), will be connected through a unitary transformation
$\Phi^{\dagger} \varphi (x) = \theta (x) \label{135}$ $\Phi \theta (x) = \varphi (x)$
For $N$ discrete points in the grid basis, there will be $N$ eigenvectors in the eigenbasis, so that the properties of projection and completeness hold in both bases. Wavefunctions can be obtained by constructing the Hamiltonian in the eigenbasis,
$H = T ( \hat {p} ) + V ( \hat {x} ),$ transforming to the DVR basis, $H^{D V R} = \Phi H \Phi^{\dagger},$ and then diagonalizing.
Here we will discuss a version of DVR in which the grid basis is set up to mirror the continuous $| x \rangle$ position eigenbasis. We begin by choosing the range of $x$ that contains the bound states of interest and discretizing it into $N$ points ($x_i$) equally spaced by $\Delta x$. We assume that the DVR basis functions $\theta _ {j} \left( x _ {i} \right)$ resemble the infinite dimensional position basis
$\theta _ {j} \left( x _ {i} \right) = \sqrt {\Delta x} \delta _ {i j} \label{136}$
Our truncation is enabled using a projection operator in the reduced space
$P _ {N} = \sum _ {i = 1}^{N} | \theta _ {i} \rangle \langle \theta _ {i} | \approx 1 \label{137}$
which is valid for appropriately high $N$. The complete Hamiltonian can be expressed in the DVR basis as
$H^{D V R} = T^{D V R} + V^{D V R}.$
For the potential energy, since $\left\{\theta _ {i} \right\}$ is localized with $\left\langle \theta _ {i} | \theta _ {j} \right\rangle = \delta _ {i j}$, we make the DVR approximation, which casts $V^{DVR}$ into a diagonal form that is equal to the potential energy evaluated at the grid point:
$V _ {i j}^{D V R} = \left\langle \theta _ {i} | V ( \hat {x} ) | \theta _ {j} \right\rangle \approx V \left( x _ {i} \right) \delta _ {i j} \label{138}$
This comes from approximating the transformation as $\Phi V ( \hat {x} ) \Phi^{\dagger} \approx V \left( \Phi \hat {x} \Phi^{\dagger} \right) .$
For the kinetic energy matrix elements $\left\langle \theta _ {i} | T ( \hat {p} ) | \theta _ {j} \right\rangle$, we need to evaluate second derivatives between different grid points. Fortunately, Colbert and Miller have simplified this process by finding an analytical form for the $T^{DVR}$ matrix for a uniformly gridded box with a grid spacing of $∆x$.
$T _ {i j}^{\mathrm {DVR}} = \dfrac {\hbar^{2} ( - 1 )^{i - j}} {2 m \Delta x^{2}} \left\{\begin{array} {c c} {\pi^{2} / 3} & {i = j} \[4pt] {2 / ( i - j )^{2}} & {i \neq j} \end{array} \right\} \label{139}$
This comes from a Fourier expansion in a uniformly gridded box. Naturally it looks oscillatory in $x$ with a period of $\Delta x$. The expression becomes exact in the limit $N \rightarrow \infty$ or $\Delta x \rightarrow 0$. The numerical routine is simple and efficient: we construct the Hamiltonian by filling it with the potential and kinetic energy matrix elements given by Equations \ref{138} and \ref{139}, and then diagonalize $H^{DVR}$, from which we obtain $N$ eigenvalues and the $N$ corresponding eigenfunctions.
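A compact sketch of this DVR recipe (not part of the original notes) is given below for a harmonic potential in units with $\hbar = m = \omega = 1$; the grid range and number of points are illustrative choices. It fills the Hamiltonian with Equations \ref{138} and \ref{139} and diagonalizes it with NumPy.

```python
import numpy as np

N = 201
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Potential energy: diagonal in the DVR basis (Equation 138).
V = np.diag(0.5 * x**2)

# Kinetic energy: Colbert-Miller matrix for a uniform, unbounded grid (Equation 139).
i = np.arange(N)
di = i[:, None] - i[None, :]
with np.errstate(divide="ignore"):
    T = (-1.0)**di / (2.0 * dx**2) * np.where(di == 0, np.pi**2 / 3.0,
                                              2.0 / di.astype(float)**2)

H = T + V
E = np.linalg.eigvalsh(H)
print(E[:5])   # close to the harmonic ladder 0.5, 1.5, 2.5, 3.5, 4.5
```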
• 2.1: Time-Evolution with a Time-Independent Hamiltonian
The time evolution of the state of a quantum system is described by the time-dependent Schrödinger equation.
• 2.2: Exponential Operators Again
Throughout our work, we will make use of exponential operators that act on a wavefunction to move it in time and space. Of particular interest to us is the time-propagator or time-evolution operator that propagates the wavefunction in time.
• 2.3: Two-Level Systems
It is common to reduce or map quantum problems onto a two level system (2LS). We will pick the most important states for our problem and find strategies for discarding or simplifying the influence of the remaining degrees of freedom.
02: Introduction to Time-Dependent Quantum Mechanics
The time evolution of the state of a quantum system is described by the time-dependent Schrödinger equation (TDSE):
$i \hbar \frac {\partial} {\partial t} \psi ( \overline {r} , t ) = \hat {H} ( \overline {r} , t ) \psi ( \overline {r} , t ) \label{1.1}$
$\hat{H}$ is the Hamiltonian operator which describes all interactions between particles and fields, and determines the state of the system in time and space. $\hat{H}$ is the sum of the kinetic and potential energy. For one particle under the influence of a potential
$\hat {H} = - \frac {\hbar^{2}} {2 m} \hat {\nabla}^{2} + \hat {V} ( \overline {r} , t ) \label{1.2}$
The state of the system is expressed through the wavefunction $\psi ( \overline {r} , t )$. The wavefunction is complex and cannot be observed itself, but through it we obtain the probability density
$P = | \psi ( \overline {r} , t ) |^{2},$
which characterizes the spatial probability distribution for the particles described by $\hat{H}$ at time $t$. Also, it is used to calculate the expectation value of an operator $\hat{A}$
\begin{align} \langle \hat {A} (t) \rangle &= \int \psi^{*} ( \overline {r} , t ) \hat {A} \psi ( \overline {r} , t ) d \overline {r} \[4pt] &= \langle \psi (t) | \hat {A} | \psi (t) \rangle \label{1.3} \end{align}
Physical observables must be real, and therefore will correspond to the expectation values of Hermitian operators ($\hat {A} = \hat {A}^{\dagger}$).
Our first exposure to time-dependence in quantum mechanics is often for the specific case in which the Hamiltonian $\hat{H}$ is assumed to be independent of time: $\hat {H} = \hat {H} ( \overline {r} )$. We then assume a solution with a form in which the spatial and temporal variables in the wavefunction are separable:
$\psi ( \overline {r} , t ) = \varphi ( \overline {r} ) T (t) \label{1.4}$
$i \hbar \frac {1} {T (t)} \frac {\partial} {\partial t} T (t) = \frac {\hat {H} ( \overline {r} ) \varphi ( \overline {r} )} {\varphi ( \overline {r} )} \label{1.5}$
Here the left-hand side is a function only of time, and the right-hand side is a function of space only ($\overline {r}$, or rather position and momentum). Equation \ref{1.5} can only be satisfied if both sides are equal to the same constant, $E$. Taking the right-hand side we have
$\frac {\hat {H} ( \overline {r} ) \varphi ( \overline {r} )} {\varphi ( \overline {r} )} = E \quad \Rightarrow \quad \hat {H} ( \overline {r} ) \varphi ( \overline {r} ) = E \varphi ( \overline {r} ) \label{1.6}$
This is the Time-Independent Schrödinger Equation (TISE), an eigenvalue equation, for which $\varphi ( \overline {r} )$ are the eigenstates and $E$ are the eigenvalues. Here we note that
$\langle \hat {H} \rangle = \langle \psi | \hat {H} | \psi \rangle = E,$
so $\hat{H}$ is the operator corresponding to $E$ and drawing on classical mechanics we associate $\hat{H}$ with the expectation value of the energy of the system. Now taking the left-hand side of Equation \ref{1.5} and integrating:
\begin{align} i \hbar \frac {1} {T (t)} \frac {\partial T} {\partial t} &= E \[4pt] \left( \frac {\partial} {\partial t} + \frac {i E} {\hbar} \right) T (t) &= 0 \label{1.7} \end{align}
which has solutions of the form
$T (t) = \exp ( - i E t / \hbar ) \label{1.8}$
So, in the case of a bound potential we will have a discrete set of eigenfunctions $\varphi _ {n} ( \overline {r} )$ with corresponding energy eigenvalues $E_n$ from the TISE, and there are a set of corresponding solutions to the TDSE.
$\psi _ {n} ( \overline {r} , t ) = \varphi _ {n} ( \overline {r} ) \underbrace{\exp \left( - i E _ {n} t / \hbar \right)}_{\text{phase factor}} \label{1.9}$
Phase Factor
For any complex number written in polar form (such as $re^{iθ}$), the phase factor is the complex exponential factor ($e^{iθ}$). The phase factor does not have any physical meaning, since the introduction of a phase factor does not change the expectation values of a Hermitian operator. That is
$\langle \phi |A|\phi \rangle = \langle \phi |e^{-i\theta}Ae^{i\theta}|\phi \rangle$
Since the only time-dependence in $\psi _ {n}$ is a phase factor, the probability density for an eigenstate is independent of time:
$P = \left| \psi _ {n} (t) \right|^{2} = \text {constant}.$
Therefore, the eigenstates $\varphi ( \overline {r} )$ do not change with time and are called stationary states.
However, more generally, a system may exist as a linear combination of eigenstates:
\begin{align} \psi ( \overline {r} , t ) &= \sum _ {n} c _ {n} \psi _ {n} ( \overline {r} , t ) \[4pt] &= \sum _ {n} c _ {n} e^{- i E _ {n} t / \hbar} \varphi _ {n} ( \overline {r} ) \label{1.10} \end{align}
where $c_n$ are complex amplitudes, with
$\sum _ {n} \left| c _ {n} \right|^{2} = 1. \nonumber$
For such a case, the probability density will oscillate with time. As an example, consider two eigenstates
\begin{align} \psi ( \overline {r} , t ) &= \psi _ {1} + \psi _ {2} \nonumber \[4pt] &= c _ {1} \varphi _ {1} e^{- i E _ {1} t / \hbar} + c _ {2} \varphi _ {2} e^{- i E _ {2} t / \hbar} \label{1.11} \end{align}
For this state the probability density oscillates in time as
\begin{align} P (t) & = | \psi |^{2} \nonumber \[4pt] &= \left| \psi _ {1} + \psi _ {2} \right|^{2} \nonumber \[4pt] & = \left| c _ {1} \varphi _ {1} \right|^{2} + \left| c _ {2} \varphi _ {2} \right|^{2} + c _ {1}^{*} c _ {2} \varphi _ {1}^{*} \varphi _ {2} e^{- i \left( \omega _ {2} - \omega _ {1} \right) t} + c _ {2}^{*} c _ {1} \varphi _ {2}^{*} \varphi _ {1} e^{+ i \left( \omega _ {2} - \omega _ {1} \right) t} \nonumber \[4pt] & = \left| \psi _ {1} \right|^{2} + \left| \psi _ {2} \right|^{2} + 2 \left| \psi _ {1} \psi _ {2} \right| \cos \left( \omega _ {2} - \omega _ {1} \right) t \label{1.12} \end{align}
where $\omega _ {n} = E _ {n} / \hbar$. We refer to this state of the system that gives rise to this time-dependent oscillation in probability density as a coherent superposition state, or coherence. More generally, the oscillation term in Equation \ref{1.12} may also include a time-independent phase factor $\phi$ that arises from the complex expansion coefficients.
As an example, consider the superposition of the ground and first excited states of the quantum harmonic oscillator. The basis wavefunctions, $\psi _ {0} (x)$ and $\psi _ {1} (x)$, and their stationary probability densities $P _ {i} = \left\langle \psi _ {i} (x) | \psi _ {i} (x) \right\rangle$ are independent of time.
If we create a superposition of these states with Equation \ref{1.11}, the time-dependent probability density oscillates, with $\langle x (t) \rangle$ bearing similarity to the classical motion. (Here $c_0 = 0.5$ and $c_1 = 0.87$.)
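A minimal numerical sketch of this superposition (not part of the original notes) is given below. It assumes units with $\hbar = m = \omega_0 = 1$ and uses the illustrative coefficients quoted above, $c_0 = 0.5$ and $c_1 \approx 0.87$, to show how $\langle x(t)\rangle$ oscillates at the frequency $\omega_{2} - \omega_{1} = E_1 - E_0$.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

c0, c1 = 0.5, np.sqrt(1 - 0.5**2)   # c1 ~ 0.866, matching the value quoted above
E0, E1 = 0.5, 1.5

def psi_n(n, x):
    """Harmonic oscillator eigenfunction for hbar = m = omega_0 = 1."""
    norm = np.sqrt(1.0 / (2.0**n * np.sqrt(np.pi) * factorial(n)))
    return norm * np.exp(-x**2 / 2) * eval_hermite(n, x)

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
for t in [0.0, np.pi / 2, np.pi]:   # the beat period is 2*pi/(E1 - E0)
    psi = c0 * psi_n(0, x) * np.exp(-1j * E0 * t) + c1 * psi_n(1, x) * np.exp(-1j * E1 * t)
    P = np.abs(psi)**2
    x_mean = np.sum(x * P) * dx      # <x(t)> oscillates at omega = E1 - E0
    print(f"t = {t:.2f}: <x> = {x_mean:+.3f}")
```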
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 405.
2. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; Ch. 1.
3. Schatz, G. C.; Ratner, M. A., Quantum Mechanics in Chemistry. Dover Publications: Mineola, NY, 2002; Ch. 2.
Throughout our work, we will make use of exponential operators of the form $\hat {T} = e^{- i \hat {A}}$ We will see that these exponential operators act on a wavefunction to move it in time and space. Of particular interest to us is the time-propagator or time-evolution operator $\hat {U} = e^{- i \hat {H} t / \hbar},$ which propagates the wavefunction in time. Note the operator $\hat {T}$ is a function of an operator, $f(\hat{A})$. A function of an operator is defined through its expansion in a Taylor series, for instance
$\hat {T} = e^{- i \hat {A}} = \sum _ {n = 0}^{\infty} \frac {( - i \hat {A} )^{n}} {n !} = 1 - i \hat {A} - \frac {\hat {A} \hat {A}} {2} - \cdots \label{1.13}$
Since we use them so frequently, let’s review the properties of exponential operators that can be established with Equation \ref{1.13}. If the operator $\hat{A}$ is Hermitian, then $\hat {T} = e^{- i \hat {A}}$ is unitary, i.e., $\hat {T}^{\dagger} = \hat {T}^{- 1}$. Thus the Hermitian conjugate of $\hat{T}$ reverses the action of $\hat{T}$. For the time-propagator $\hat{U}$, $U^{†}$ is often referred to as the time-reversal operator.
The eigenstates of the operator $\hat{A}$ are also eigenstates of $f(\hat{A})$, and the eigenvalues are functions of the eigenvalues of $\hat{A}$. Namely, if you know the eigenvalues and eigenvectors of $\hat{A}$, i.e., $\hat {A} \varphi _ {n} = a _ {n} \varphi _ {n}$, you can show by expanding the function that
$f ( \hat {A} ) \varphi _ {n} = f \left( a _ {n} \right) \varphi _ {n} \label{1.14}$
Our most common application of this property will be to exponential operators involving the Hamiltonian. Given the eigenstates $\varphi _ {n}$, then $\hat {H} | \varphi _ {n} \rangle = E _ {n} | \varphi _ {n} \rangle$ implies
$e^{- i \hat {H} t / \hbar} | \varphi _ {n} \rangle = e^{- i E _ {n} t / \hbar} | \varphi _ {n} \rangle \label{1.15}$
Just as $\hat {U} = e^{- i \hat {H} t / \hbar}$ is the time-evolution operator, which displaces the wavefunction in time, $\hat {D} _ {x} = e^{- i \hat {p} _ {x} x / \hbar}$ is the spatial displacement operator that moves $\psi$ along the $x$ coordinate. If we define $\hat {D} _ {x} ( \lambda ) = e^{- i \hat {p} _ {x} \lambda / \hbar},$ then the action of $\hat {D} _ {x} ( \lambda )$ is to displace the wavefunction by an amount $\lambda$
$| \psi ( x - \lambda ) \rangle = \hat {D} _ {x} ( \lambda ) | \psi (x) \rangle \label{1.16}$
Also, applying $\hat {D} _ {x} ( \lambda )$ to a position operator shifts the operator by $\lambda$
$\hat {D} _ {x}^{\dagger} \hat {x} \hat {D} _ {x} = \hat {x} + \lambda \label{1.17}$
Thus $e^{- i \hat {p} _ {x} \lambda / \hbar} | x \rangle$ is an eigenvector of $\hat{x}$ with eigenvalue $x + \lambda$ instead of $x$. The operator $\hat {D} _ {x} = e^{- i \hat {p} _ {x} \lambda / \hbar}$ is a displacement operator for the $x$ position coordinate. Similarly, $\hat {D} _ {y} = e^{- i \hat {p} _ {y} \lambda / \hbar}$ generates displacements in $y$ and $\hat{D}_z$ in $z$. Similar to the time-propagator $\hat{U}$, the displacement operator $\hat{D}$ must be unitary, since the action of $\hat{D}^{\dagger}\hat{D}$ must leave the system unchanged. That is, if $\hat{D}_x$ shifts the system from $x_0$ to $x$, then $\hat{D}_x^{\dagger}$ shifts the system from $x$ back to $x_0$.
We know intuitively that linear displacements commute. For example, if we wish to shift a particle in two dimensions, $x$ and $y$, the order of displacement does not matter. We end up at the same position. These displacement operators commute, as expected from $[\hat{p}_x,\hat{p}_y] = 0$
Similar to the displacement operator, we can define rotation operators that depend on the angular momentum operators, $L_x$, $L_y$, and $L_z$. For instance, $\hat {R} _ {x} ( \phi ) = e^{- i \phi L _ {x} / \hbar}$ gives a rotation by angle $\phi$ about the x-axis. Unlike linear displacement, rotations about different axes do not commute. For example, consider a state representing a particle displaced along the z-axis, $| z _ {0} \rangle$. Now the action of two rotations $\hat{R}_x$ and $\hat{R}_y$ by an angle of $\pi/2$ on this particle differs depending on the order of operation.
The results of these two rotations taken in opposite order differ by a rotation about the z–axis. Thus, because the rotations about different axes do not commute, we must expect the angular momentum operators, which generate these rotations, not to commute. Indeed, we know that $[L_x,L_y] = i\hbar L_z$ where the commutator of rotations about the x and y axes is related by a z-axis rotation. As with rotation operators, we will need to be careful with time-propagators to determine whether the order of time-propagation matters. This, in turn, will depend on whether the Hamiltonians at two points in time commute.
Useful Properties of Exponential Operators
Finally, it is worth noting some relationships that are important in evaluating the action of exponential operators:
1. The Baker–Hausdorff relationship: $\exp ( i \hat {G} \lambda ) \hat {A} \exp ( - i \hat {G} \lambda ) = \hat {A} + i \lambda [ \hat {G} , \hat {A} ] + \left( \frac {i^{2} \lambda^{2}} {2 !} \right) [ \hat {G} , [ \hat {G} , \hat {A} ] ] + \ldots + \left( \frac {i^{n} \lambda^{n}} {n !} \right) [ \hat {G} , [ \hat {G} , [ \hat {G} , \ldots [ \hat {G} , \hat {A} ] ] ] \ldots ] + \ldots \label{1.18}$
2. If $\hat {A}$ and $\hat {B}$ do not commute, but $[ \hat {A} , \hat {B} ]$ commutes with $\hat {A}$ and $\hat {B}$, then $e^{\hat {A} + \hat {B}} = e^{\hat {A}} e^{\hat {B}} e^{- \frac {1} {2} [ \hat {A} , \hat {B} ]}\label{1.19}$
3. $e^{\hat {A}} e^{\hat {B}} = e^{\hat {B}} e^{\hat {A}} e^{- [ \hat {B} , \hat {A} ]} \label{1.19B}$
Time-Evolution Operator
Since the TDSE is deterministic and linear in time, we can define an operator that describes the dynamics of the wavefunction:
$\psi (t) = \hat {U} \left( t , t _ {0} \right) \psi \left( t _ {0} \right) \label{1.20}$
$U$ is the time-propagator or time-evolution operator that evolves the quantum system as a function of time. It represents the solution to the time-dependent Schrödinger equation. To investigate its form we consider the TDSE for a time-independent Hamiltonian:
$\frac {\partial} {\partial t} \psi ( \overline {r} , t ) + \frac {i \hat {H}} {\hbar} \psi ( \overline {r} , t ) = 0 \label{1.21}$
To solve this, we will define an exponential operator $\hat {T} = \exp ( - i \hat {H} t / \hbar )$, which is defined through its expansion in a Taylor series:
\begin{align} \hat {T} &= \exp ( - i \hat {H} t / \hbar ) \[4pt] &= 1 - \frac {i \hat {H} t} {\hbar} + \frac {1} {2 !} \left( \frac {i \hat {H} t} {\hbar} \right)^{2} - \cdots \end{align} \label{1.22}
You can also confirm from the expansion that $\hat {T}^{- 1} = \exp ( i \hat {H} t / \hbar ),$ noting that $\hat{H}$ is Hermitian and commutes with $\hat{T}$. Multiplying Equation \ref{1.21} from the left by $\hat {T}^{- 1}$, we can write
$\frac {\partial} {\partial t} \left[ \exp \left( \frac {i \hat {H} t} {\hbar} \right) \psi ( \overline {r} , t ) \right] = 0 \label{1.23}$
and integrating $t _ {0} \rightarrow t$, we get
$\exp \left( \frac {i \hat {H} t} {\hbar} \right) \psi ( \overline {r} , t ) - \exp \left( \frac {i \hat {H} t _ {0}} {\hbar} \right) \psi \left( \overline {r} , t _ {0} \right) = 0 \label{1.24}$
$\psi ( \overline {r} , t ) = \exp \left( \frac {- i \hat {H} \left( t - t _ {0} \right)} {\hbar} \right) \psi \left( \overline {r} , t _ {0} \right) \label{1.25}$
So, comparing to Equation \ref{1.20}, we see that the time-propagator is
$\hat {U} \left( t , t _ {0} \right) = \exp \left( \frac {- i \hat {H} \left( t - t _ {0} \right)} {\hbar} \right) \label{1.26}$
For the time-independent Hamiltonian for which we know the eigenstates $\phi_n$ and eigenvalues $E_n$, we can express this in a practical form using Equation \ref{1.14}
$\psi _ {n} ( \overline {r} , t ) = e^{- i E _ {n} \left( t - t _ {0} \right) / \hbar} \psi _ {n} \left( \overline {r} , t _ {0} \right) \label{1.27}$
Alternatively, if we substitute the projection operator (or identity relationship)
$\sum _ {n} | \varphi _ {n} \rangle \langle \varphi _ {n} | = 1 \label{1.28}$
into Equation \ref{1.26}, we see
\begin{align} \hat {U} \left( t , t _ {0} \right) &= e^{- i \hat {H} \left( t - t _ {0} \right) / \hbar} \sum _ {n} | \varphi _ {n} \rangle \langle \varphi _ {n} | \[4pt] &= \sum _ {n} e^{- i \omega _ {n} \left( t - t _ {0} \right)} | \varphi _ {n} \rangle \langle \varphi _ {n} | \label{1.29} \end{align}
$\omega _ {n} = \frac {E _ {n}} {\hbar}$
So now we can write our time-developing wave-function as
\begin{align} | \psi ( \overline {r} , t ) \rangle & = \sum _ {n} e^{- i \omega _ {n} \left( t - t _ {0} \right)} | \varphi _ {n} \rangle \left\langle \varphi _ {n} | \psi \left( \overline {r} , t _ {0} \right) \right\rangle \[4pt] & = \sum _ {n} e^{- i \omega _ {n} \left( t - t _ {0} \right)} c _ {n} \left( t _ {0} \right) | \varphi _ {n} \rangle \[4pt] & = \sum _ {n} c _ {n} (t) | \varphi _ {n} \rangle \end{align} \label{1.30}
As written in Equation \ref{1.20}, we see that the time-propagator $\hat {U} \left( t , t _ {0} \right)$, acts to the right (on kets) to evolve the system in time. The evolution of the conjugate wavefunctions (bras) is under the Hermitian conjugate of $\hat {U} \left( t , t _ {0} \right)$, acting to the left:
$\langle \psi (t) | = \langle \psi \left( t _ {0} \right) | \hat {U}^{\dagger} \left( t , t _ {0} \right) \label{1.31}$
From its definition as an expansion and recognizing $\hat{H}$ as Hermitian, you can see that
$\hat {U}^{\dagger} \left( t , t _ {0} \right) = \exp \left[ \frac {i \hat {H} \left( t - t _ {0} \right)} {\hbar} \right] \label{1.32}$
Noting that $\hat{U}$ is unitary, $\hat {U}^{\dagger} = \hat {U}^{- 1}$, we often refer to $\hat {U}^{\dagger}$ as the time reversal operator.
Let’s use the time-propagator in a model calculation that we will refer to often. It is common to reduce or map quantum problems onto a two level system (2LS). We will pick the most important states for our problem and find strategies for discarding or simplifying the influence of the remaining degrees of freedom. Consider a 2LS with two unperturbed or “zeroth order” states $| \varphi _ {a} \rangle$ and $| \varphi _ {b} \rangle$ with energies ${\varepsilon} _ {a}$ and ${\varepsilon} _ {b}$, which are described by a zero-order Hamiltonian $H_0$:
\begin{align} \hat {H} _ {0} &= | \varphi _ {a} \rangle \varepsilon _ {a} \left\langle \varphi _ {a} | + | \varphi _ {b} \right\rangle \varepsilon _ {b} \langle \varphi _ {b} | \[4pt] &= \left( \begin{array} {l l} {\varepsilon _ {a}} & {0} \[4pt] {0} & {\varepsilon _ {b}} \end{array} \right) \label{1.33} \end{align}
These states interact through a coupling $V$ of the form
\begin{align} \hat {V} &= | \varphi _ {a} \rangle V _ {a b} \left\langle \varphi _ {b} | + | \varphi _ {b} \right\rangle V _ {b a} \langle \varphi _ {a} | \[4pt] &= \left( \begin{array} {l l} {0} & {V _ {a b}} \[4pt] {V _ {b a}} & {0} \end{array} \right) \label{1.34} \end{align}
The full Hamiltonian for the two coupled states is $\hat{H}$:
\begin{align} \hat {H} & = \hat {H} _ {0} + \hat {V} \[4pt] & = \left( \begin{array} {c c} {\varepsilon _ {a}} & {V _ {a b}} \[4pt] {V _ {b a}} & {\varepsilon _ {b}} \end{array} \right) \label{1.35} \end{align}
The zero-order states are $| \varphi _ {a} \rangle$ and $| \varphi _ {b} \rangle$. The coupling mixes these states, leading to two eigenstates of $\hat{H}$, $| \varphi _ {+} \rangle$ and $| \varphi _ {-} \rangle$, with corresponding energy eigenvalues ${\varepsilon} _ {+}$ and ${\varepsilon} _ {-}$, respectively.
We will ask: If we prepare the system in state $| \varphi _ {a} \rangle$, what is the time-dependent probability of observing it in $| \varphi _ {b} \rangle$? Since $| \varphi _ {a} \rangle$ and $| \varphi _ {b} \rangle$ are not eigenstates of $\hat{H}$, and since our time-propagation will be performed in the eigenbasis using Equation \ref{1.29}, we will need to find the transformation between these bases.
We start by searching for the eigenvalues of the Hamiltonian (Equation \ref{1.35}). Since the Hamiltonian is Hermitian, ($H _ {i j} = H _ {j i}^{*}$), we write
$V _ {a b} = V _ {b a}^{*} = V e^{- i \varphi} \label{1.36}$
$\hat {H} = \left( \begin{array} {c c} {\varepsilon _ {a}} & {V e^{- i \varphi}} \[4pt] {V e^{+ i \varphi}} & {\varepsilon _ {b}} \end{array} \right) \label{1.37}$
Often the couplings we describe are real, and we can neglect the phase factor $\varphi$. Now we define variables for the mean energy and energy splitting between the uncoupled states
$E = \frac {\varepsilon _ {a} + \varepsilon _ {b}} {2}$
$\Delta = \frac {\varepsilon _ {a} - \varepsilon _ {b}} {2} \label{1.39}$
We can then obtain the eigenvalues of the coupled system by solving the secular equation
$\operatorname {det} ( H - \lambda I ) = 0$
giving
$\varepsilon _ {\pm} = E \pm \Omega \label{1.41}$
Here we have defined another variable
$\Omega = \sqrt {\Delta^{2} + V^{2}} \label{1.42}$
To determine the eigenvectors of the coupled system $| \varphi _ {\pm} \rangle$, it proves to be a great simplification to define a mixing angle $\theta$ that describes the relative magnitude of the coupling relative to the zero-order energy splitting through
$\tan 2 \theta = \frac {V} {\Delta} \label{1.43}$
We see that the mixing angle adopts values such that $0 \leq \theta \leq \pi / 4$. Also, we note that
$\sin 2 \theta = V / \Omega \label{1.44}$
$\cos 2 \theta = \Delta / \Omega \label{1.45}$
In this representation the Hamiltonian (Equation \ref{1.37}) becomes
$\hat {H} = E \overline {I} + \Delta \left( \begin{array} {l l} {1} & {\tan 2 \theta e^{- i \varphi}} \[4pt] {\tan 2 \theta e^{+ i \varphi}} & {- 1} \end{array} \right) \label{1.46}$
and we can express the eigenvalues as
$\varepsilon _ {\pm} = E \pm \Delta \sec 2 \theta \label{1.47}$
Next we want to find $S$, the transformation that diagonalizes the Hamiltonian and which transforms the coefficients of the wavefunction from the zero-order basis to the eigenbasis. The eigenstates can be expanded in the zero-order basis in the form
$| \varphi _ {\pm} \rangle = c _ {a} | \varphi _ {a} \rangle + c _ {b} | \varphi _ {b} \rangle \label{1.48}$
So that the transformation can be expressed in matrix form as
$\left( \begin{array} {l} {\varphi _ {+}} \[4pt] {\varphi _ {-}} \end{array} \right) = S \left( \begin{array} {l} {\varphi _ {a}} \[4pt] {\varphi _ {b}} \end{array} \right)\label{1.49}$
To find $S$, we use the Schrödinger equation $\hat {H} | \varphi _ {\pm} \rangle = \varepsilon _ {\pm} | \varphi _ {\pm} \rangle$
substituting Equation \ref{1.48}. This gives
$S = \left( \begin{array} {l l} {\cos \theta \, e^{- i \varphi / 2}} & {\sin \theta \, e^{i \varphi / 2}} \[4pt] {- \sin \theta \, e^{- i \varphi / 2}} & {\cos \theta \, e^{i \varphi / 2}} \end{array} \right) \label{1.50}$
Note that $S$ is unitary since $S^{\dagger} = S^{- 1}$ and $\left( S^{T} \right)^{*} = S^{- 1}$. Also, the eigenbasis is orthonormal:
$\left\langle \varphi _ {+} | \varphi _ {+} \right\rangle = \left\langle \varphi _ {-} | \varphi _ {-} \right\rangle = 1 \quad \text{and} \quad \left\langle \varphi _ {+} | \varphi _ {-} \right\rangle = 0.$
Now, let’s examine the eigenstates in two limits:
1. Weak coupling ($| {V} / \Delta | \ll 1$). Here $\theta \approx 0$, and $| \varphi _ {+} \rangle$ corresponds to $| \varphi _ {a} \rangle$ weakly perturbed by the $V_{ab}$ interaction. $| \varphi _ {-} \rangle$ corresponds to $| \varphi _ {b} \rangle$. In another way, as $\theta \rightarrow 0$, we find $| \varphi _ {+} \rangle \rightarrow | \varphi _ {a} \rangle$ and $| \varphi _ {-} \rangle \rightarrow | \varphi _ {b} \rangle.$
2. Strong coupling ($| V / \Delta | \gg 1$). In this limit $\theta = \pi / 4$, and the $a/b$ basis states are indistinguishable. The eigenstates are symmetric and antisymmetric combinations: $| \varphi _ {\pm} \rangle = \frac {1} {\sqrt {2}} ( | \varphi _ {b} \rangle \pm | \varphi _ {a} \rangle ) \label{1.51}$ Note from Equation \ref{1.50} that the sign of $V$ dictates whether $| \varphi _ {+} \rangle$ or $| \varphi _ {-} \rangle$ corresponds to the symmetric or antisymmetric eigenstate. For negative $V \gg \Delta$, $\theta = - \pi / 4$, and the correspondence in Equation \ref{1.51} changes to $\mp$.
We can schematically represent the energies of these states with the following diagram. Here we explore the range of ${E} _ {\pm}$ available given a fixed coupling $V$ and varying the splitting $\Delta$.
This diagram illustrates an avoided crossing effect. The strong coupling limit is equivalent to a degeneracy point ($\Delta = 0$) between the states $| \varphi _ {a} \rangle$ and $| \varphi _ {b} \rangle$. The eigenstates completely mix the unperturbed states, yet remain split by the strength of interaction $2V$. We will return to the discussion of avoided crossings when we describe potential energy surfaces and the adiabatic approximation, where the dependence of $V$ and $\Delta$ on position $R$ must be considered.
Now we can turn to describing dynamics. The time evolution of this system is given by the time-propagator
$U (t) = | \varphi _ {+} \rangle e^{- i \omega _ {+} t} \langle \varphi _ {+} | + | \varphi _ {-} \rangle e^{- i \omega _ {-} t} \langle \varphi _ {-} | \label{1.52}$
where $\omega _ {\pm} = \varepsilon _ {\pm} / \hbar$. Since $\varphi _ {a}$ and $\varphi _ {b}$ are not the eigenstates, preparing the system in state $\varphi _ {a}$ will lead to time evolution! Let’s prepare the system so that it is initially in $\varphi _ {a}$.
$| \psi ( 0 ) \rangle = | \varphi _ {a} \rangle \label{1.53} \nonumber$
Evaluating the time-dependent amplitudes of initial and final states with the help of $S$, we find
\begin{align*} c _ {a} (t) & = \left\langle \varphi _ {a} | U (t) | \varphi _ {a} \right\rangle \[4pt] & = e^{- i E t / \hbar} \left[ \cos^{2} \theta \, e^{- i \Omega _ {R} t} + \sin^{2} \theta \, e^{i \Omega _ {R} t} \right] \label{1.54} \[4pt] c _ {b} (t) & = \left\langle \varphi _ {b} | U (t) | \varphi _ {a} \right\rangle \[4pt] & = - 2 i \sin \theta \cos \theta \, e^{- i E t / \hbar} \sin \Omega _ {R} t \label{1.55} \end{align*}
So, what is the probability that it is found in state $| \varphi _ {b} \rangle$ at time $t$?
\begin{aligned} P _ {b a} (t) & = \left| c _ {b} (t) \right|^{2} \[4pt] & = \frac {V^{2}} {V^{2} + \Delta^{2}} \sin^{2} \Omega _ {R} t \end{aligned} \label{1.56}
where
$\Omega _ {R} = \frac {1} {\hbar} \sqrt {\Delta^{2} + V^{2}} \label{1.57}$
$\Omega _ {R}$, the Rabi Frequency, represents the frequency at which probability amplitude oscillates between $\varphi _ {a}$ and $\varphi _ {b}$ states.
Notice that in the weak coupling limit ($V \rightarrow 0$), $\varphi _ {\pm} \rightarrow \varphi _ {a , b}$ (the zero-order states become the eigenstates), and the time-dependence disappears. In the strong coupling limit ($V \gg \Delta$), amplitude is exchanged completely between the zero-order states at a rate given by the coupling: $\Omega _ {R} \rightarrow V / \hbar$. Even in this limit it takes a finite amount of time for amplitude to move between states. To get $P=1$ requires a time $\tau$:
$\tau = \pi / \left( 2 \Omega _ {R} \right) = \pi \hbar / \left( 2 V \right). \nonumber$
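These static two-level results are easy to check numerically. The following minimal sketch (Python with NumPy; the parameter values, units with $\hbar = 1$, and variable names are illustrative assumptions, not part of the text) diagonalizes the two-level Hamiltonian, propagates $|\varphi_a\rangle$, and compares the resulting population transfer with the analytic formula of Equation \ref{1.56}.

```python
import numpy as np

# Parameters of the coupled two-level Hamiltonian (illustrative values, hbar = 1)
E, Delta, V = 0.0, 1.0, 0.5
H = np.array([[E + Delta, V],
              [V, E - Delta]], dtype=complex)   # zero-order basis {|phi_a>, |phi_b>}

# Eigenvalues should reproduce eps_± = E ± Omega (Eq. 1.41)
evals, evecs = np.linalg.eigh(H)                # ascending: (E - Omega, E + Omega)
Omega_R = np.sqrt(Delta**2 + V**2)              # Rabi frequency, Eq. 1.57 (hbar = 1)
print(evals, (E - Omega_R, E + Omega_R))

# Propagate |psi(0)> = |phi_a> and compare P_ba(t) with Eq. 1.56
t = np.linspace(0.0, 20.0, 400)
psi0 = np.array([1.0, 0.0], dtype=complex)
P_ba = np.array([abs((evecs @ np.diag(np.exp(-1j * evals * ti)) @ evecs.conj().T @ psi0)[1])**2
                 for ti in t])
P_analytic = V**2 / (V**2 + Delta**2) * np.sin(Omega_R * t)**2
print(np.max(np.abs(P_ba - P_analytic)))        # ~1e-15: numerics match the formula
```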
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; pp. 405-420.
2. Liboff, R. L., Introductory Quantum Mechanics. Addison-Wesley: Reading, MA, 1980; p. 77.
3. Mukamel, S., Principles of Nonlinear Optical Spectroscopy. Oxford University Press: New York, 1995; Ch. 2.
4. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/02%3A_Introduction_to_Time-Dependent_Quantum_Mechanics/2.03%3A_Two-Level_Systems.txt
|
Dynamical processes in quantum mechanics are described by a Hamiltonian that depends on time. Naturally the question arises: how do we deal with a time-dependent Hamiltonian? In principle, the time-dependent Schrödinger equation can be directly integrated by choosing a basis set that spans the space of interest. Using a potential energy surface, one can propagate the system forward in small time-steps and follow the evolution of the complex amplitudes in the basis states. In practice, even this is impossible for more than a handful of atoms when all degrees of freedom are treated quantum mechanically, and the mathematical complexity of the time-dependent Schrödinger equation makes exact analytical solutions unattainable for most molecular systems. We are thus forced to seek numerical solutions based on perturbation or approximation methods that reduce the complexity. Among these methods, time-dependent perturbation theory is the most widely used approach for calculations in spectroscopy, relaxation, and other rate processes. In this section we will work on classifying approximation methods and work out the details of time-dependent perturbation theory.
• 3.1: Time-Evolution Operator
We are seeking equations of motion for quantum systems that are equivalent to Newton’s—or more accurately Hamilton’s—equations for classical systems. The question is, if we know the wavefunction at a specific time, how does it change with time? How do we determine the wavefunction for some later time? We will use our intuition here, based largely on correspondence to classical mechanics.
• 3.2: Integrating the Schrödinger Equation Directly
How do we evaluate the time-propagator and obtain a time-dependent trajectory for a quantum system? Rather than general recipes, there exist an arsenal of different strategies that are suited to particular types of problems. The choice of how to proceed is generally dictated by the details of your problem, and is often an art-form. Considerable effort needs to be made to formulate the problem, particularly choosing an appropriate basis set for your problem.
• 3.3: Transitions Induced by Time-Dependent Potential
For many time-dependent problems, we can often partition the problem so that the time-dependent Hamiltonian contains a time-independent part ($H_0$) that we can describe exactly, and a time-dependent potential. The remaining degrees of freedom are discarded, and then only enter in the sense that they give rise to the interaction potential with $H_0$. This is effective if you have reason to believe that the external Hamiltonian can be treated classically, or if the influence of $H_0$ on the other degrees of freedom is negligible.
• 3.4: Resonant Driving of a Two-Level System
Let’s describe what happens when you drive a two-level system with an oscillating potential. Note, this is the form you would expect for an electromagnetic field interacting with charged particles, i.e. dipole transitions. In a simple sense, the electric field is
• 3.5: Schrödinger and Heisenberg Representations
The mathematical formulation of quantum dynamics that has been presented is not unique. So far, we have described the dynamics by propagating the wavefunction, which encodes probability densities. Ultimately, since we cannot measure a wavefunction, we are interested in observables, which are probability amplitudes associated with Hermitian operators, with time dependence that can be interpreted differently.
• 3.6: Interaction Picture
The interaction picture is a hybrid representation that is useful in solving problems with time-dependent Hamiltonians.
• 3.7: Time-Dependent Perturbation Theory
Perturbation theory refers to calculating the time-dependence of a system by truncating the expansion of the interaction picture time-evolution operator after a certain term. In practice, truncating the full time-propagator U is not effective, and only works well for times short compared to the inverse of the energy splitting between coupled states of your Hamiltonian.
• 3.8: Fermi’s Golden Rule
A number of important relationships in quantum mechanics that describe rate processes come from first-order perturbation theory. These expressions begin with two model problems that we want to work through: (1) time evolution after applying a step perturbation, and (2) time evolution after applying a harmonic perturbation. As before, we will ask: if we prepare the system in one state, what is the probability of observing the system in a different state following the perturbation?
03: Time-Evolution Operator
Let’s start at the beginning by obtaining the equation of motion that describes the wavefunction and its time evolution through the time propagator. We are seeking equations of motion for quantum systems that are equivalent to Newton’s—or more accurately Hamilton’s—equations for classical systems. The question is, if we know the wavefunction at time $t_o$, $| \psi (\vec{r}, t_o ) \rangle$, how does it change with time? How do we determine $| \psi (\vec{r}, t ) \rangle$ for some later time $t > t_o$? We will use our intuition here, based largely on correspondence to classical mechanics. To keep notation to a minimum, in the following discussion we will not explicitly show the spatial dependence of the wavefunction.
We start by assuming causality: $| \psi ( t_o ) \rangle$ precedes and determines $| \psi (t) \rangle$, which is crucial for deriving a deterministic equation of motion. Also, as usual, we assume time is a continuous variable:
$\lim _ {t \rightarrow t _ {0}} | \psi (t) \rangle = | \psi \left( t _ {0} \right) \rangle \label{2.1}$
Now define a “time-displacement operator” or “propagator” that acts on the wavefunction to the right and thereby propagates the system forward in time:
$| \psi (t) \rangle = U \left( t , t _ {0} \right) | \psi \left( t _ {0} \right) \rangle \label{2.2}$
We also know that the operator $U$ cannot be dependent on the state of the system $| \psi (t) \rangle$. This is necessary for conservation of probability, i.e., to retain normalization for the system. If
$| \psi \left( t _ {0} \right) \rangle = a _ {1} | \varphi _ {1} \left( t _ {0} \right) \rangle + a _ {2} | \varphi _ {2} \left( t _ {0} \right) \rangle \label{2.3}$
then
\begin{align} | \psi (t) \rangle & = U \left( t , t _ {0} \right) | \psi \left( t _ {0} \right) \rangle \[4pt] & = U \left( t , t _ {0} \right) a _ {1} | \varphi _ {1} \left( t _ {0} \right) \rangle + U \left( t , t _ {0} \right) a _ {2} | \varphi _ {2} \left( t _ {0} \right) \rangle \[4pt] & = a _ {1} (t) | \varphi _ {1} \rangle + a _ {2} (t) | \varphi _ {2} \rangle \end{align}. \label{2.4}
This is a reflection of the importance of linearity and the principle of superposition in quantum mechanical systems. While $|a_i(t)|$ typically is not equal to $|a_i(0)|$
$\sum _ {n} \left| a _ {n} (t) \right|^{2} = \sum _ {n} \left| a _ {n} \left( t _ {0} \right) \right|^{2} \label{2.5}$
This dictates that the differential equation of motion is linear in time.
Properties of U
We now make some important and useful observations regarding the properties of $U$.
1. Unitary. Note that for Equation \ref{2.5} to hold and for probability density to be conserved, $U$ must be unitary $P = \langle \psi (t) | \psi (t) \rangle = \left\langle \psi \left( t _ {0} \right) \left| U^{\dagger} U \right| \psi \left( t _ {0} \right) \right\rangle \label{2.6}$ which holds if $U^{\dagger} = U^{- 1}$.
2. Time continuity: The state is unchanged when the initial and final time-points are the same $U ( t , t ) = 1 \label{2.7}$
3. Composition property. If we take the system to be deterministic, then it stands to reason that we should get the same wavefunction whether we evolve to a target time in one step ($t_0 \rightarrow t_2$) or multiple steps ($t_0 \rightarrow t_1 \rightarrow t_2$). Therefore, we can write $U \left( t _ {2} , t _ {0} \right) = U \left( t _ {2} , t _ {1} \right) U \left( t _ {1} , t _ {0} \right) \label{2.8}$ Note, since $U$ acts to the right, order matters: \left.\begin{aligned} | \psi \left( t _ {2} \right) \rangle & = U \left( t _ {2} , t _ {1} \right) U \left( t _ {1} , t _ {0} \right) | \psi \left( t _ {0} \right) \rangle \[4pt] & = U \left( t _ {2} , t _ {1} \right) | \psi \left( t _ {1} \right) \rangle \end{aligned} \right. \label{2.9} Equation \ref{2.8} is already very suggestive of an exponential form for $U$. Furthermore, since time is continuous and the operator is linear, it also suggests that the time propagator depends only on the time interval $U \left( t _ {1} , t _ {0} \right) = U \left( t _ {1} - t _ {0} \right) \label{2.10}$
4. Time-reversal. The inverse of the time-propagator is the time reversal operator. From Equation \ref{2.8}:
\begin{align} U \left( t , t _ {0} \right) U \left( t _ {0} , t \right) = &1 \label{2.11} \[4pt] \therefore \,\, U^{- 1} \left( t , t _ {0} \right) &= U \left( t _ {0} , t \right) . \label{2.12} \end{align}
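These formal properties are easy to verify numerically for a time-independent Hamiltonian. The sketch below (Python with NumPy; the random 4×4 Hamiltonian and all parameter values are illustrative assumptions) builds $U(t) = e^{-iHt/\hbar}$ by diagonalization and checks unitarity, the composition property of Equation \ref{2.8}, and time reversal, Equation \ref{2.12}.

```python
import numpy as np

def propagator(H, t):
    """U(t) = exp(-i H t / hbar) for a time-independent Hermitian H (hbar = 1)."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# A random Hermitian matrix standing in for some Hamiltonian (illustrative)
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

t0, t1, t2 = 0.0, 0.7, 1.9
U20, U21, U10 = propagator(H, t2 - t0), propagator(H, t2 - t1), propagator(H, t1 - t0)

print(np.allclose(U20, U21 @ U10))                              # composition, Eq. (2.8)
print(np.allclose(U10.conj().T @ U10, np.eye(4)))               # unitarity, Eq. (2.6)
print(np.allclose(np.linalg.inv(U10), propagator(H, t0 - t1)))  # time reversal, Eq. (2.12)
```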
An Equation of Motion for U
Let’s find an equation of motion that describes the time-evolution operator using the change of the system for an infinitesimal time-step, $\delta t$: $U(t+ \delta t)$. Since
$\lim _ {\delta t \rightarrow 0} U ( t + \delta t , t ) = 1 \label{2.13}$
We expect that for small enough $\delta t$, $U$ will change linearly with $\delta t$. This is based on analogy to thinking of deterministic motion in classical systems. Setting $t_0$ to 0, so that $U(t,t_o) = U(t)$, we can write
$U ( t + \delta t ) = \left( 1 - i \hat {\Omega} (t) \delta t \right) U (t) \label{2.14}$
$\hat{\Omega}$ is a time-dependent Hermitian operator, which is required for $U$ to be unitary. We can now write a differential equation for the time-development of $U(t,t_o)$, the equation of motion for $U$:
$\dfrac {d U (t)} {d t} = \lim _ {\delta t \rightarrow 0} \dfrac {U ( t + \delta t ) - U (t)} {\delta t} \label{2.15}$
So from Equation \ref{2.14} we have:
$\dfrac {\partial U \left( t , t _ {0} \right)} {\partial t} = - i \hat {\Omega} U \left( t , t _ {0} \right) \label{2.16}$
You can now see why the operator needed a complex argument: otherwise, probability density would not be conserved; it would rise or decay. Instead, the amplitude oscillates between different states of the system.
We note that $\hat {\Omega}$ has units of frequency. Since quantum mechanics fundamentally associates frequency and energy as $E = \hbar \omega$, and since the Hamiltonian is the operator corresponding to the energy, and responsible for time evolution in Hamiltonian mechanics, we write
$\hat {\Omega} = \dfrac {\hat {H}} {\hbar} \label{2.17}$
With that substitution we have an equation of motion for $U$:
$\mathrm {i} \hbar \dfrac {\partial} {\partial t} U \left( t , t _ {0} \right) = \hat {H} U \left( t , t _ {0} \right) \label{2.18}$
Multiplying from the right by $| \psi(t_o) \rangle$ gives the TDSE:
$i \hbar \dfrac {\partial} {\partial t} | \psi \rangle = \hat {H} | \psi \rangle \label{2.19}$
If you use the Hamiltonian for a free particle ($- \left( \hbar^{2} / 2 m \right) \left( \partial^{2} / \partial x^{2} \right)$), this looks like a classical wave equation, except that it is linear in time. Rather, this looks like a diffusion equation with imaginary diffusion constant. We are also interested in the equation of motion for $U^{\dagger}$ which describes the time evolution of the conjugate wavefunctions. Following the same approach and recognizing that $U^{\dagger} \left( t , t _ {0} \right)$, acts to the left:
$\langle \psi (t) | = \langle \psi \left( t _ {0} \right) | U^{\dagger} \left( t , t _ {0} \right) \label{2.20}$
we get
$- i \hbar \dfrac {\partial} {\partial t} U^{\dagger} \left( t , t _ {0} \right) = U^{\dagger} \left( t , t _ {0} \right) \hat {H} \label{2.21}$
Evaluating the Time-Evolution Operator
At first glance it may seem straightforward to integrate Equation \ref{2.18}. If $H$ is a function of time, then the integration of $i \hbar \dfrac{d U}{U} = H\, dt$ gives
$U \left( t , t _ {0} \right) = \exp \left[ \frac {- i} {\hbar} \int _ {t _ {0}}^{t} H \left( t^{\prime} \right) d t^{\prime} \right] \label{2.22}$
Following our earlier definition of the time-propagator, this exponential would be cast as a series expansion
$U \left( t , t _ {0} \right) = 1 - \frac {i} {\hbar} \int _ {t _ {0}}^{t} H \left( t^{\prime} \right) d t^{\prime} + \frac {1} {2 !} \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d t^{\prime} \int _ {t _ {0}}^{t} d t^{\prime \prime} H \left( t^{\prime} \right) H \left( t^{\prime \prime} \right) + \ldots \label{2.23}$
This approach is dangerous, since we are not properly treating $H$ as an operator. Looking at the second term in Equation \ref{2.23}, we see that this expression integrates over both possible time-orderings of the two Hamiltonian operations, which would only be proper if the Hamiltonians at different times commute: $\left[ H(t'),H(t'') \right] =0$.
Now, let’s proceed a bit more carefully assuming that the Hamiltonians at different times do not commute. Integrating Equation \ref{2.18} directly from $t_0$ to $t$ gives
$U \left( t , t _ {0} \right) = 1 - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau H ( \tau ) U \left( \tau , t _ {0} \right) \label{2.24}$
This is the solution; however, it is not very practical since $U(t,t_o)$ is a function of itself. But we can make an iterative expansion by repetitive substitution of $U$ into itself. The first step in this process is
\begin{align} U \left( t , t _ {0} \right) &= 1 - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau H ( \tau ) \left[ 1 - \frac {i} {\hbar} \int _ {t _ {0}}^{\tau} d \tau^{\prime} H \left( \tau^{\prime} \right) U \left( \tau^{\prime} , t _ {0} \right) \right] \[4pt] & = 1 + \left( \frac {- i} {\hbar} \right) \int _ {t _ {0}}^{t} d \tau H ( \tau ) + \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d \tau \int _ {t _ {0}}^{\tau} d \tau^{\prime} H ( \tau ) H \left( \tau^{\prime} \right) U \left( \tau^{\prime} , t _ {0} \right)\end{align} \label{2.25}
Note in the last term of this equation, that the integration limits enforce a time-ordering; that is, the first integration variable $\tau'$ must precede the second $\tau$. Pictorially, the area of integration is the triangular region $t _ {0} \leq \tau^{\prime} \leq \tau \leq t$.
The next substitution step gives
\begin{align} U \left( t , t _ {0} \right) & = 1 + \left( \frac {- i} {\hbar} \right) \int _ {t _ {0}}^{t} d \tau H ( \tau ) \nonumber \[4pt] & + \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d \tau \int _ {t _ {0}}^{\tau} d \tau^{\prime} H ( \tau ) H \left( \tau^{\prime} \right) \label{2.26} \[4pt] & + \left( \frac {- i} {\hbar} \right)^{3} \int _ {t _ {0}}^{t} d \tau \int _ {t _ {0}}^{\tau} d \tau^{\prime} \int _ {t _ {0}}^{\tau^{\prime}} d \tau^{\prime \prime} H ( \tau ) H \left( \tau^{\prime} \right) H \left( \tau^{\prime \prime} \right) U \left( \tau^{\prime \prime} , t _ {0} \right) \nonumber \end{align}
From this expansion, you should be aware that there is a time-ordering to the interactions. For the third term, $\tau^{\prime \prime}$ acts before $\tau^{\prime}$, which acts before $\tau$: $t _ {0} \leq \tau^{\prime \prime} \leq \tau^{\prime} \leq \tau \leq t$
What does this expression represent? Imagine you are starting in state $| \psi _ {0} \rangle = | \ell \rangle$ and you want to describe how the system evolves toward a target state $| \psi \rangle = | k \rangle$. The possible paths by which one can shift amplitude and evolve the phase can be pictured in terms of these time variables.
The first-order term in Equation \ref{2.26} represents all actions of the Hamiltonian which act to directly couple $| \ell \rangle$ and $| k \rangle$. The second-order term describes possible transitions from $| \ell \rangle$ to $| k \rangle$ via an intermediate state $| m \rangle$. The expression for $U$ describes all possible paths between initial and final state. Each of these paths interferes in ways dictated by the acquired phase of our eigenstates under the time-dependent Hamiltonian.
The solution for $U$ obtained from this iterative substitution is known as the positive time-ordered exponential
$U \left( t , t _ {0} \right) \equiv \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {t _ {0}}^{t} d \tau H ( \tau ) \right] \equiv \hat {T} \exp \left[ \frac {- i} {\hbar} \int _ {t _ {0}}^{t} d \tau H ( \tau ) \right] = 1 + \sum _ {n = 1}^{\infty} \left( \frac {- i} {\hbar} \right)^{n} \int _ {t _ {0}}^{t} d \tau _ {n} \int _ {t _ {0}}^{\tau _ {n}} d \tau _ {n - 1} \cdots \int _ {t _ {0}}^{\tau _ {2}} d \tau _ {1} H \left( \tau _ {n} \right) H \left( \tau _ {n - 1} \right) \cdots H \left( \tau _ {1} \right) \label{2.27}$
($\hat{T}$ is known as the Dyson time-ordering operator.) In this expression the time-ordering is
$\left. \begin{array} {l} {t _ {0} \rightarrow \tau _ {1} \rightarrow \tau _ {2} \rightarrow \tau _ {3} \dots \tau _ {n} \rightarrow t} \[4pt] {t _ {0} \rightarrow \quad \dots \quad \tau^{\prime \prime} \rightarrow \tau^{\prime} \rightarrow \tau} \end{array} \right.\label{2.28}$
So, this expression tells you about how a quantum system evolves over a given time interval, and it allows for any possible trajectory from an initial state to a final state through any number of intermediate states. Each term in the expansion accounts for more possible transitions between different intermediate quantum states during this trajectory.
Compare the time-ordered exponential with the traditional expansion of an exponential:
$1 + \sum _ {n = 1}^{\infty} \frac {1} {n !} \left( \frac {- i} {\hbar} \right)^{n} \int _ {t _ {0}}^{t} d \tau _ {n} \ldots \int _ {t _ {0}}^{t} d \tau _ {1} H \left( \tau _ {n} \right) H \left( \tau _ {n - 1} \right) \ldots H \left( \tau _ {1} \right) \label{2.29}$
Here the time-variables assume all values, and therefore all orderings of the $H(\tau_i)$ are calculated. The areas are normalized by the $n!$ factor (there are $n!$ time-orderings of the $\tau_n$ times). We are also interested in the Hermitian conjugate of $U \left( t , t _ {0} \right)$, which has the equation of motion in Equation \ref{2.21}. If we repeat the method above, remembering that $U^{\dagger} \left( t , t _ {0} \right)$ acts to the left, then we obtain
$U^{\dagger} \left( t , t _ {0} \right) = 1 + \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau U^{\dagger} ( t , \tau ) H ( \tau ) \label{2.30}$
Performing iterative substitution leads to a negative-time-ordered exponential:
$U^{\dagger} \left( t , t _ {0} \right) \equiv \exp _ {-} \left[ \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau H ( \tau ) \right] \label{2.31}$
Here the $H(\tau_i)$ act to the left.
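To see concretely why the time-ordering matters, consider the following minimal numerical sketch (Python with NumPy/SciPy; the rotating spin Hamiltonian, the time step, and units with $\hbar = 1$ are illustrative assumptions). It compares the time-ordered product of short-time propagators with the naive exponential of $\int H(t)\,dt$ for a Hamiltonian that does not commute with itself at different times.

```python
import numpy as np
from scipy.linalg import expm

# H(t) = cos(w t) * sigma_x + sin(w t) * sigma_z does not commute with itself at
# different times, so the ordering of the factors matters (illustrative model, hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w = 2.0
H = lambda t: np.cos(w * t) * sx + np.sin(w * t) * sz

t_final, n = 2.0, 4000
dt = t_final / n
ts = (np.arange(n) + 0.5) * dt            # midpoints of the small time steps

# Time-ordered product of short-time propagators: later times act to the left
U_ordered = np.eye(2, dtype=complex)
for t in ts:
    U_ordered = expm(-1j * H(t) * dt) @ U_ordered

# Naive exponential of the integrated Hamiltonian (Eq. 2.22 / 2.29)
U_naive = expm(-1j * sum(H(t) * dt for t in ts))

print(np.linalg.norm(U_ordered - U_naive))   # clearly nonzero: ordering matters
```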
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 1340.
2. Merzbacher, E., Quantum Mechanics. 3rd ed.; Wiley: New York, 1998; Ch. 14.
3. Mukamel, S., Principles of Nonlinear Optical Spectroscopy. Oxford University Press: New York, 1995; Ch. 2.
4. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994; Ch. 2.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.01%3A_Time-Evolution_Operator.txt
|
Okay, how do we evaluate the time-propagator and obtain a time-dependent trajectory for a quantum system? Expressions such as the time-ordered exponentials are daunting, and there are no simple ways in which to handle this. One cannot truncate the exponential because usually this is not a rapidly converging series. Also, the solutions oscillate rapidly as a result of the phase acquired at the energy of the states involved, which leads to a formidable integration problem. Rapid oscillations require small time steps, when in fact the time scales of the dynamics we care about may be much longer. For instance, in a molecular dynamics problem, the highest frequency oscillations may result from electronically excited states with periods of less than a femtosecond, while the nuclear dynamics that you hope to describe may occur on many picosecond time scales. Rather than general recipes, there exist an arsenal of different strategies that are suited to particular types of problems. The choice of how to proceed is generally dictated by the details of your problem, and is often an art-form. Considerable effort needs to be made to formulate the problem, particularly choosing an appropriate basis set for your problem. Here it is our goal to gain some insight into the types of strategies available, working mainly with the principles, rather than the specifics of how it’s implemented.
Let’s begin by discussing the most general approach. With adequate computational resources, we can choose the brute force approach of numerical integration. We start by choosing a basis set and defining the initial state $\psi_0$. Then, we can numerically evaluate the time-dependence of the wavefunction over a time period $t$ by discretizing time into $n$ small steps of width $\delta t = t / n$ over which the change of the system is small. A variety of strategies can be pursued in practice.
One possibility is to expand your wavefunction in the basis set of your choice
$| \psi (t) \rangle = \sum _ {n} c _ {n} (t) | \varphi _ {n} \rangle \label{2.32}$
and solve for the time-dependence of the expansion coefficients. Substituting into the right side of the TDSE,
$i \hbar \frac {\partial} {\partial t} | \psi \rangle = \hat {H} | \psi \rangle \label{2.33}$
and then acting from the left by $\langle k |$ on both sides leads to an equation that describes their time dependence:
$i \hbar \frac {\partial c _ {k} (t)} {\partial t} = \sum _ {n} H _ {k n} (t) c _ {n} (t) \label{2.34}$
or in matrix form $i \hbar \dot {c} = H c$. This represents a set of coupled first-order differential equations in which amplitude flows between different basis states at rates determined by the matrix elements of the time-dependent Hamiltonian. Such equations are straightforward to integrate numerically. We recognize that we can integrate on a grid if the time step forward ($\delta t$) is small enough that the Hamiltonian is essentially constant. Then Equation \ref{2.34} becomes
$i \hbar \delta c _ {k} (t) = \sum _ {n} H _ {k n} (t) c _ {n} (t) \delta t \label{2.35}$
and the system is propagated as
$c _ {k} ( t + \delta t ) = c _ {k} (t) + \delta c _ {k} (t) \label{2.36}$
The downside of such a calculation is the unusually small time-steps and significant computational cost required.
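As a minimal sketch of Equations \ref{2.34}–\ref{2.36} (Python with NumPy; the two-level Hamiltonian, the step size, and units with $\hbar = 1$ are illustrative assumptions), the forward-Euler scheme below propagates the coefficient vector directly. It also illustrates the cost: the step must be tiny, and even then the scheme is not exactly unitary, so the norm drifts slowly.

```python
import numpy as np

# Forward-Euler integration of i*hbar*(dc/dt) = H c, Eqs. (2.34)-(2.36), hbar = 1.
# Static two-level Hamiltonian with illustrative parameter values.
H = np.array([[1.0, 0.3],
              [0.3, -1.0]], dtype=complex)

dt, nsteps = 1.0e-4, 200_000
c = np.array([1.0, 0.0], dtype=complex)       # start entirely in basis state 1
for _ in range(nsteps):
    c = c + (-1j * dt) * (H @ c)              # c(t + dt) = c(t) + delta c, Eq. (2.36)

print(np.sum(np.abs(c)**2))   # slightly above 1: the Euler step is not exactly unitary
```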
Similarly, we can use a grid with short time steps to simplify our time-propagator as
$\hat {U} ( t + \delta t , t ) = \exp \left[ - \frac {i} {\hbar} \int _ {t}^{t + \delta t} d t^{\prime} \hat {H} \left( t^{\prime} \right) \right] \approx \exp \left[ - \frac {i} {\hbar} \delta t \hat {H} (t) \right] \label{2.37}$
Therefore the time propagator can be written as a product of $n$ propagators over these small intervals.
\begin{align} \hat {U} (t) & = \lim _ {\delta t \rightarrow 0} \left[ \hat {U} _ {n} \hat {U} _ {n - 1} \cdots \hat {U} _ {2} \hat {U} _ {1} \right] \label{2.38A} \[4pt] & = \lim _ {n \rightarrow \infty} \prod _ {j = 0}^{n - 1} \hat {U} _ {j} \label{2.38B} \end{align}
Here the time-propagation over the jth small time step is
\left.\begin{aligned} \hat {U} _ {j} & = \exp \left[ - \frac {i} {\hbar} \delta t \hat {H} _ {j} \right] \[4pt] \hat {H} _ {j} & = \hat {H} ( j \delta t ) \end{aligned} \right. \label{2.39}
Note that the expressions in Equations \ref{2.38A} and \ref{2.38B} are operators time ordered from right to left, which we denote with the “+” subscript. Although Equation \ref{2.38B} is exact in the limit $\delta t \rightarrow 0$ (or $n→∞$), we can choose a finite number such that $H(t)$ does not change much over the time $\delta t$. In this limit the time propagator does not change much and can be approximated as an expansion
$\hat {U} _ {j} \approx 1 - \frac {i} {\hbar} \delta t \hat {H} _ {j} \label{2.40}.$
In a general sense this approach is not very practical. The first reason is that the time step is determined by $\delta \mathrm {t} < \hbar / | H |$ which is typically very small in comparison to the dynamics of interest. The second complication arises when the potential and kinetic energy operators in the Hamiltonian don’t commute. Taking the Hamiltonian to be $\hat {H} = \hat {T} + \hat {V}$
\left.\begin{aligned} e^{- i \hat {H} (t) \delta t / \hbar} & = e^{- i ( \hat {T} (t) + \hat {V} (t) ) \delta t / \hbar} \[4pt] & \approx e^{- i \hat {T} (t) \delta t / \hbar} e^{- i \hat {V} (t) \delta t / \hbar} \end{aligned} \right. \label{2.41}
The second line makes the Split Operator approximation, which states that the time propagator over a short enough period can be approximated as a product of independent propagators evolving the system over the kinetic and potential energy. The validity of this approximation depends on how well these operators commute and on the time step, with the error scaling like $\frac {1} {2} [ \hat {T} (t) , \hat {V} (t) ] ( \delta t / \hbar )^{2}$, meaning that we should use a time step such that $\delta t < \left\{2 \hbar^{2} / [ \hat {T} (t) , \hat {V} (t) ] \right\}^{1 / 2}$
This approximation can be improved by symmetrizing the split operator as
$e^{- i \hat {H} (t) \delta t / \hbar} \approx e^{- i \hat {V} (t) \frac {\delta t} {2} / \hbar} e^{- i \hat {T} (t) \delta t / \hbar} e^{- i \hat {V} (t) \frac {\delta t} {2} / \hbar} \label{2.42}$
Here the error scales as $\frac {1} {12} ( \delta t / \hbar )^{3} \left\{[ \hat {T} , [ \hat {T} , \hat {V} ] ] + \frac {1} {2} [ \hat {V} , [ \hat {V} , \hat {T} ] ] \right\}$. There is no significant increase in computational effort since half of the operations can be combined as
$e^{- i \hat {V} _ {j + 1} \delta t / 2 \hbar} e^{- i \hat {V} _ {j} \delta t / 2 \hbar} \approx e^{- i \hat {V} _ {j} \delta t / \hbar}$
to give $U (t) \approx e^{- i \hat {V} _ {n} \delta t / 2 \hbar} \left[ \prod _ {j = 1}^{n - 1} e^{- i \hat {V} _ {j} \delta t / \hbar} e^{- i \hat {T} _ {j} \delta t / \hbar} \right] e^{- i \hat {V} _ {0} \delta t / 2 \hbar} \label{2.44}$ where $\hat {V} _ {j} \equiv \hat {V} ( j \delta t )$ and $\hat {T} _ {j} \equiv \hat {T} ( j \delta t )$.
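The sketch below is a minimal grid implementation of the symmetrized split-operator step of Equation \ref{2.42} (Python with NumPy; the harmonic potential, grid size, and time step are illustrative assumptions, with $\hbar = m = 1$). The kinetic propagator is applied in momentum space via an FFT, so each factor is diagonal in the representation in which it is applied.

```python
import numpy as np

# Symmetrized split-operator propagation (Eq. 2.42) of a 1D Gaussian wavepacket in a
# harmonic potential; grid, potential, and time step are illustrative (hbar = m = 1).
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * x**2                       # potential energy on the grid
T = 0.5 * k**2                       # kinetic energy in momentum space

dt, nsteps = 0.01, 1000
expV_half = np.exp(-1j * V * dt / 2)
expT = np.exp(-1j * T * dt)

psi = np.exp(-(x - 2.0)**2 / 2) / np.pi**0.25     # displaced Gaussian, <x(0)> = 2
for _ in range(nsteps):
    psi = expV_half * psi                         # half-step in V
    psi = np.fft.ifft(expT * np.fft.fft(psi))     # full kinetic step in k-space
    psi = expV_half * psi                         # half-step in V

print(np.sum(np.abs(psi)**2) * dx)      # norm stays 1: each factor is unitary
print(np.sum(x * np.abs(psi)**2) * dx)  # <x(t)> oscillates at the oscillator frequency
```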
Readings
1. Tannor, D. J., Introduction to Quantum Mechanics: A Time-Dependent Perspective. University Science Books: Sausilito, CA, 2007.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.02%3A_Integrating_the_Schrodinger_Equation_Directly.txt
|
For many time-dependent problems, most notably in spectroscopy, we can often partition the problem so that the time-dependent Hamiltonian contains a time-independent part $H_0$ that we can describe exactly, and a time-dependent potential $V(t)$
$H = H _ {0} + V (t) \label{2.45}$
The remaining degrees of freedom are discarded, and then only enter in the sense that they give rise to the interaction potential with $H_0$. This is effective if you have reason to believe that the external Hamiltonian can be treated classically, or if the influence of $H_0$ on the other degrees of freedom is negligible. From Equation \ref{2.45}, there is a straightforward approach to describing the time evolving wavefunction for the system in terms of the eigenstates and energy eigenvalues of $H_0$.
To begin, we know the complete set of eigenstates and eigenvalues for the system Hamiltonian
$H _ {0} | n \rangle = E _ {n} | n \rangle \label{2.46}$
The state of the system can then be expressed as a superposition of these eigenstates:
$| \psi (t) \rangle = \sum _ {n} c _ {n} (t) | n \rangle \label{2.47}$
The TDSE can be used to find an equation of motion for the eigenstate coefficients
$c _ {k} (t) = \langle k | \psi (t) \rangle \label{2.48}$
Starting with
$\frac {\partial | \psi \rangle} {\partial t} = \frac {- i} {\hbar} H | \psi \rangle \label{2.49}$
$\frac {\partial c _ {k} (t)} {\partial t} = - \frac {i} {\hbar} \langle k | H | \psi (t) \rangle \label{2.50}$
and from Equation \ref{2.47}
$\frac {\partial c _ {k} (t)} {\partial t} = - \frac {i} {\hbar} \sum _ {n} \langle k | H | n \rangle c _ {n} (t) \label{2.51}$
Already we see that the time evolution amounts to solving a set of coupled linear ordinary differential equations. These are rate equations with complex rate constants, which describe the feeding of one state into another. Substituting Equation \ref{2.45} we have:
\left.\begin{aligned} \frac {\partial c _ {k} (t)} {\partial t} & = - \frac {i} {\hbar} \sum _ {n} \left\langle k \left| \left( H _ {0} + V (t) \right) \right| n \right\rangle c _ {n} (t) \[4pt] & = - \frac {i} {\hbar} \sum _ {n} \left[ E _ {n} \delta _ {k n} + V _ {k n} (t) \right] c _ {n} (t) \end{aligned} \right. \label{2.52}
or $\frac {\partial c _ {k} (t)} {\partial t} + \frac {i} {\hbar} E _ {k} c _ {k} (t) = - \frac {i} {\hbar} \sum _ {n} V _ {k n} (t) c _ {n} (t) \label{2.53}$
Next, we define and substitute
$c _ {m} (t) = e^{- i E _ {m} t / \hbar} b _ {m} (t) \label{2.54}$
which implies a definition for the wavefunction as
$| \psi (t) \rangle = \sum _ {n} b _ {n} (t) e^{- i E _ {n} t / \hbar} | n \rangle \label{2.55}$
This defines a slightly different complex amplitude, that allows us to simplify things considerably. Notice that
$\left| b _ {k} (t) \right|^{2} = \left| c _ {k} (t) \right|^{2}.$
Also, $b _ {k} ( 0 ) = c _ {k} ( 0 )$. In practice what we are doing is pulling out the “trivial” part of the time evolution, the time-evolving phase factor, which typically oscillates much faster than the changes to the amplitude of $b$ or $c$.
We will come back to this strategy when we discuss the interaction picture.
Now Equation \ref{2.53} becomes
$e^{- i E _ {k} t / \hbar} \frac {\partial b _ {k}} {\partial t} = - \frac {i} {\hbar} \sum _ {n} V _ {k n} (t) e^{- i E _ {n} t / \hbar} b _ {n} (t) \label{2.56}$
or $i \hbar \frac {\partial b _ {k}} {\partial t} = \sum _ {n} V _ {k n} (t) e^{- i \omega _ {n k} t} b _ {n} (t) \label{2.57}$
These equations are exact. They are a set of coupled differential equations that describe how probability amplitude moves through eigenstates due to a time-dependent potential. Except in simple cases, these equations cannot be solved analytically, but it is often straightforward to integrate them numerically.
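As an illustration, the following minimal sketch (Python with NumPy; the two-level energies, the Gaussian-pulse coupling $V_{ab}(t)$, the step size, and units with $\hbar = 1$ are all illustrative assumptions) integrates Equation \ref{2.57} with a simple Runge–Kutta stepper and reports the population transferred by the pulse.

```python
import numpy as np

# RK4 integration of Eq. (2.57) for two states coupled by a Gaussian pulse V_ab(t)
# (all parameter values illustrative, hbar = 1); b = (b_a, b_b).
E_a, E_b = 0.0, 2.0
w_ba = E_b - E_a
V0, t_c, sigma = 0.4, 10.0, 2.0
V = lambda t: V0 * np.exp(-(t - t_c)**2 / (2 * sigma**2))

def rhs(t, b):
    # i db_k/dt = sum_n V_kn(t) exp(-i w_nk t) b_n, with w_nk = E_n - E_k
    return np.array([-1j * V(t) * np.exp(-1j * w_ba * t) * b[1],
                     -1j * V(t) * np.exp(+1j * w_ba * t) * b[0]])

b = np.array([1.0, 0.0], dtype=complex)
dt, nsteps = 1.0e-3, 20_000
for i in range(nsteps):
    t = i * dt
    k1 = rhs(t, b)
    k2 = rhs(t + dt / 2, b + dt / 2 * k1)
    k3 = rhs(t + dt / 2, b + dt / 2 * k2)
    k4 = rhs(t + dt, b + dt * k3)
    b = b + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(b[0])**2 + abs(b[1])**2)    # total probability stays 1
print(abs(b[1])**2)                   # population transferred to |b> by the pulse
```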
When can we use the approach described here? Consider partitioning the full Hamiltonian into two components, one that we want to study, $H_0$, and the remaining degrees of freedom, $H_1$. For each part, we have knowledge of the complete eigenstates and eigenvalues of the Hamiltonian: $H _ {i} | \psi _ {i , n} \rangle = E _ {i , n} | \psi _ {i , n} \rangle$. These subsystems will interact with one another through $H_{int}$. If we are careful to partition this in such a way that $H_{int}$ is small compared to $H_0$ and $H_1$, then it should be possible to properly describe the state of the full system as product states of the subsystems: $| \psi \rangle = | \psi _ {0} \psi _ {1} \rangle$. Further, we can write a time-dependent Schrödinger equation for the motion of each subsystem as:
$i \hbar \frac {\partial | \psi _ {1} \rangle} {\partial t} = H _ {1} | \psi _ {1} \rangle \label{2.58}$
Within these assumptions, we can write the complete time-dependent Schrödinger equation in terms of the two sub-states:
$i \hbar | \psi _ {0} \rangle \frac {\partial | \psi _ {1} \rangle} {\partial t} + i \hbar | \psi _ {1} \rangle \frac {\partial | \psi _ {0} \rangle} {\partial t} = | \psi _ {0} \rangle H _ {1} | \psi _ {1} \rangle + | \psi _ {1} \rangle H _ {0} | \psi _ {0} \rangle + H _ {\mathrm {int}} | \psi _ {0} \rangle | \psi _ {1} \rangle \label{2.59}$
Then left operating by $\langle \psi _ {1} |$ and making use of Equation \ref{2.58}, we can write
$i \hbar \frac {\partial | \psi _ {0} \rangle} {\partial t} = \left[ H _ {0} + \left\langle \psi _ {1} \left| H _ {\mathrm {int}} \right| \psi _ {1} \right\rangle \right] | \psi _ {0} \rangle \label{2.60}$
This is equivalent to the TDSE for a Hamiltonian of form (Equation \ref{2.45}) where the external interaction $V (t) = \left\langle \psi _ {1} \left| H _ {\mathrm {int}} (t) \right| \psi _ {1} \right\rangle$ comes from integrating the 1-2 interaction over the sub-space of $| \psi _ {1} \rangle$. So this represents a time-dependent mean field method.
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 308.
2. Merzbacher, E., Quantum Mechanics. 3rd ed.; Wiley: New York, 1998; Ch. 14.
3. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; Sec. 2.3.
4. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994; Ch. 2.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.03%3A_Transitions_Induced_by_Time-Dependent_Potential.txt
|
Let’s describe what happens when you drive a two-level system with an oscillating potential.
$V (t) = V \cos \omega t \label{2.61}$
$V _ {k \ell} (t) = V _ {k \ell} \cos \omega t \label{2.62}$
Note, this is the form you would expect for an electromagnetic field interacting with charged particles, i.e. dipole transitions. In a simple sense, the electric field is $\overline {E} (t) = \overline {E} _ {0} \cos \omega t$ and the interaction potential can be written as $V = - \overline {E} \cdot \overline {\mu},$ where $\overline {\mu}$ represents the dipole operator.
We will look at the form of this interaction a bit more carefully later. We now couple two states $| a \rangle$ and $| b \rangle$ with the oscillating field. Here the energy of the states is ordered so that $\varepsilon _ {b} > \varepsilon _ {a}$. Let’s ask: if the system starts in $| a \rangle$, what is the probability of finding it in $| b \rangle$ at time $t$?
The system of differential equations that describe this problem is:
\begin{align} i \hbar \frac {\partial} {\partial t} b _ {k} (t) & = \sum _ {n = a , b} b _ {n} (t) V _ {k n} (t) e^{- i \omega _ {n k} t} \[4pt] & = \sum _ {n = a , b} b _ {n} (t) V _ {k n} e^{- i \omega _ {n k} t} \cdot \frac {1} {2} \left( e^{- i \omega t} + e^{i \omega t} \right) \label{2.63} \end{align}
where $\cos \omega t$ is written in its complex exponential form. Writing this explicitly,
$i \hbar \dot {b} _ {b} = \frac {1} {2} b _ {a} V _ {b a} \left[ e^{i \left( \omega _ {b a} - \omega \right) t} + e^{i \left( \omega _ {b a} + \omega \right) t} \right] + \frac {1} {2} b _ {b} V _ {b b} \left[ e^{i \omega t} + e^{- i \omega t} \right]$
$i \hbar \dot {b} _ {a} = \frac {1} {2} b _ {a} V _ {a a} \left[ e^{i \omega t} + e^{- i \omega t} \right] + \frac {1} {2} b _ {b} V _ {a b} \left[ e^{i \left( \omega _ {a b} - \omega \right) t} + e^{i \left( \omega _ {a b} + \omega \right) t} \right]$
or alternatively, changing the last term:
$i \hbar \dot {b} _ {a} = \frac {1} {2} b _ {a} V _ {a a} \left[ e^{i \omega t} + e^{- i \omega t} \right] + \frac {1} {2} b _ {b} V _ {a b} \left[ e^{- i \left( \omega _ {b a} + \omega \right) t} + e^{- i \left( \omega _ {b a} - \omega \right) t} \right]$
Here the expressions have been written in terms of the frequency $\omega_{ba}$. Two of these terms are dropped, since (for our case) the diagonal matrix elements $V_{ii} =0$. We also make the secular approximation (or rotating wave approximation) in which the nonresonant terms are dropped. When $\omega _ {b a} \approx \omega$, terms like $e^{\pm i \omega t}$ or $e^{i \left( \omega _ {b a} + \omega \right) t}$ oscillate very rapidly (relative to $\left| V _ {b a} \right|^{- 1}$) and so do not contribute much to the change of $b_n$. (Remember, we take the frequencies $\omega_{b a}$ and $\omega$ to be positive). So now we have:
$\dot {b} _ {b} = \frac {- i} {2 \hbar} b _ {a} V _ {b a} e^{i \left( \omega _ {b a} - \omega \right) t} \label{2.66}$
$\dot {b} _ {a} = \frac {- i} {2 \hbar} b _ {b} V _ {a b} e^{- i \left( \omega _ {b a} - \omega \right) t} \label{2.67}$
Note that the coefficients are oscillating at the same frequency but phase shifted to one another. Now if we differentiate Equation \ref{2.66}:
$\ddot {b} _ {b} = \frac {- i} {2 \hbar} \left[ \dot {b} _ {a} V _ {b a} e^{i \left( \omega _ {b a} - \omega \right) t} + i \left( \omega _ {b a} - \omega \right) b _ {a} V _ {b a} e^{i \left( \omega _ {b a} - \omega \right) t} \right] \label{2.68}$
Rewrite Equation \ref{2.66}:
$b _ {a} = \frac {2 i \hbar} {V _ {b a}} \dot {b} _ {b} e^{- i \left( \omega _ {b a} - \omega \right) t} \label{2.69}$
and substitute Equations \ref{2.69} and \ref{2.67} into Equation \ref{2.68}, we get a linear second-order equation for $b_b$:
$\ddot {b} _ {b} - i \left( \omega _ {b a} - \omega \right) \dot {b} _ {b} + \frac {\left| V _ {b a} \right|^{2}} {4 \hbar^{2}} b _ {b} = 0 \label{2.70}$
This is just the second order differential equation for a damped harmonic oscillator:
$a \ddot {x} + b \dot {x} + c x = 0 \label{2.71}$
$x = e^{- ( b / 2 a ) t} ( A \cos \mu t + B \sin \mu t ) \label{2.72}$
with
$\mu = \frac {1} {2 a} \sqrt {4 a c - b^{2}}$
With a little more manipulation, and remembering the initial conditions $b_b(0)=0$ and $b_{a} (0) =1$, we find
$P _ {b} (t) = \left| b _ {b} (t) \right|^{2} = \frac {\left| V _ {b a} \right|^{2}} {\left| V _ {b a} \right|^{2} + \hbar^{2} \left( \omega _ {b a} - \omega \right)^{2}} \sin^{2} \Omega _ {R} t \label{2.73}$
Where the Rabi Frequency
$\Omega _ {R} = \frac {1} {2 \hbar} \sqrt {\left| V _ {b a} \right|^{2} + \hbar^{2} \left( \omega _ {b a} - \omega \right)^{2}} \label{2.74}$
Also,
$P _ {a} = 1 - P _ {b} \label{2.75}$
The amplitude oscillates back and forth between the two states at a frequency dictated by the coupling between them. [Note a result we will return to later: electric fields couple quantum states, creating coherences!]
An important observation is the importance of resonance between the driving potential and the energy splitting between states. To get transfer of probability density you need the driving field to be at the same frequency as the energy splitting. On resonance, you always drive probability amplitude entirely from one state to another.
The efficiency of driving between the $| a \rangle$ and $|b \rangle$ states drops off with detuning. The maximum value of $P_b$, $P_{b,max} = \left| V _ {b a} \right|^{2} / \left[ \left| V _ {b a} \right|^{2} + \hbar^{2} \left( \omega _ {b a} - \omega \right)^{2} \right]$, is a Lorentzian in the detuning, peaked at $\omega = \omega_{ba}$:
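A minimal sketch that generates this curve from Equation \ref{2.73} (Python with NumPy and Matplotlib; the values of $\omega_{ba}$ and $V_{ba}$ are illustrative assumptions, with $\hbar = 1$):

```python
import numpy as np
import matplotlib.pyplot as plt

# Maximum of P_b from Eq. (2.73): a Lorentzian in the detuning (w_ba - w).
# Parameter values are illustrative; hbar = 1.
w_ba, V_ba = 10.0, 0.5
w = np.linspace(8.0, 12.0, 500)
P_max = V_ba**2 / (V_ba**2 + (w_ba - w)**2)

plt.plot(w, P_max)
plt.axvline(w_ba, ls='--', lw=0.5)        # resonance at w = w_ba
plt.xlabel(r'driving frequency $\omega$')
plt.ylabel(r'maximum of $P_b$')
plt.show()
```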
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.04%3A_Resonant_Driving_of_a_Two-Level_System.txt
|
The mathematical formulation of quantum dynamics that has been presented is not unique. So far, we have described the dynamics by propagating the wavefunction, which encodes probability densities. Ultimately, since we cannot measure a wavefunction, we are interested in observables, which are probability amplitudes associated with Hermitian operators, with time dependence that can be interpreted differently. Consider the expectation value:
\begin{align} \langle \hat {A} (t) \rangle & = \langle \psi (t) | \hat {A} | \psi (t) \rangle = \left\langle \psi ( 0 ) \left| U^{\dagger} \hat {A} U \right| \psi ( 0 ) \right\rangle \[4pt] & = ( \langle \psi ( 0 ) | U^{\dagger} ) \hat {A} ( U | \psi ( 0 ) \rangle ) \label{S rep} \[4pt] & = \left\langle \psi ( 0 ) \left| \left( U^{\dagger} \hat {A} U \right) \right| \psi ( 0 ) \right\rangle \label{2.76} \end{align}
The last two expressions are written to emphasize alternate “pictures” of the dynamics. Equation \ref{S rep}, known as the Schrödinger picture, refers to everything we have done so far. Here we propagate the wavefunction or eigenvectors in time as $U | \psi \rangle$. Operators are unchanged because they carry no time-dependence. Alternatively, we can work in the Heisenberg picture (Equation \ref{2.76}) that uses the unitary property of $U$ to time-propagate the operators as $\hat {A} (t) = U^{\dagger} \hat {A} U,$ but the wavefunction is now stationary. The Heisenberg picture has an appealing physical picture behind it, because particles move. That is, there is a time-dependence to position and momentum.
Schrödinger Picture
In the Schrödinger picture, the time-development of $| \psi \rangle$ is governed by the TDSE
$i \hbar \frac {\partial} {\partial t} | \psi \rangle = H | \psi \rangle \label{2.77A}$
or equivalently, the time propagator:
$| \psi (t) \rangle = U \left( t , t _ {0} \right) | \psi \left( t _ {0} \right) \rangle \label{2.77B}$
In the Schrödinger picture, operators are typically independent of time, $\partial A / \partial t = 0$. What about observables? For expectation values of operators
$\langle A (t) \rangle = \langle \psi | A | \psi \rangle$
\begin{align} i \hbar \frac {\partial} {\partial t} \langle \hat {A} (t) \rangle & = i \hbar \left[ \left\langle \psi | \hat {A} | \frac {\partial \psi} {\partial t} \right\rangle + \left\langle \frac {\partial \psi} {\partial t} | \hat {A} | \psi \right\rangle + \cancel{\left\langle \psi \left| \frac {\partial \hat {A}} {\partial t} \right| \psi \right\rangle} \right] \[4pt] & = \langle \psi | \hat {A} H | \psi \rangle - \langle \psi | H \hat {A} | \psi \rangle \[4pt] & = \langle [ \hat {A} , H ] \rangle \label{2.78} \end{align}
If $\hat{A}$ is independent of time (as we expect in the Schrödinger picture), and if it commutes with $\hat{H}$, it is referred to as a constant of motion.
Heisenberg Picture
From Equation \ref{2.76}, we can distinguish the Schrödinger picture from Heisenberg operators:
$\langle \hat {A} (t) \rangle = \langle \psi (t) | \hat {A} | \psi (t) \rangle _ {S} = \left\langle \psi \left( t _ {0} \right) \left| U^{\dagger} \hat {A} U \right| \psi \left( t _ {0} \right) \right\rangle _ {S} = \langle \psi | \hat {A} (t) | \psi \rangle _ {H} \label{2.79}$
where the operator is defined as
\left.\begin{aligned} \hat {A} _ {H} (t) & = U^{\dagger} \left( t , t _ {0} \right) \hat {A} _ {S} U \left( t , t _ {0} \right) \[4pt] \hat {A} _ {H} \left( t _ {0} \right) & = \hat {A} _ {S} \end{aligned} \right. \label{2.80}
Note, the pictures have the same wavefunction at the reference point $t_0$. Since the wavefunction should be time-independent, $\partial | \psi _ {H} \rangle / \partial t = 0$, we can relate the Schrödinger and Heisenberg wavefunctions as
$| \psi _ {S} (t) \rangle = U \left( t , t _ {0} \right) | \psi _ {H} \rangle \label{2.81}$
So,
$| \psi _ {H} \rangle = U^{\dagger} \left( t , t _ {0} \right) | \psi _ {S} (t) \rangle = | \psi _ {S} \left( t _ {0} \right) \rangle \label{2.82}$
As expected for a unitary transformation, in either picture the eigenvalues are preserved:
\begin{align} \hat {A} | \varphi _ {i} \rangle _ {S} & = a _ {i} | \varphi _ {i} \rangle _ {S} \[4pt] U^{\dagger} \hat {A} U U^{\dagger} | \varphi _ {i} \rangle _ {S} & = a _ {i} U^{\dagger} | \varphi _ {i} \rangle _ {S} \[4pt] \hat {A} _ {H} | \varphi _ {i} \rangle _ {H} & = a _ {i} | \varphi _ {i} \rangle _ {H} \end{align} \label{2.83}
The time evolution of the operators in the Heisenberg picture is:
\begin{aligned} \frac {\partial \hat {A} _ {H}} {\partial t} & = \frac {\partial} {\partial t} \left( U^{\dagger} \hat {A} _ {s} U \right) = \frac {\partial U^{\dagger}} {\partial t} \hat {A} _ {s} U + U^{\dagger} \hat {A} _ {s} \frac {\partial U} {\partial t} + U^{\dagger} \cancel{\frac {\partial \hat {A}} {\partial t}} U \[4pt] &= \frac {i} {\hbar} U^{\dagger} H \hat {A} _ {S} U - \frac {i} {\hbar} U^{\dagger} \hat {A} _ {S} H U + \left( \cancel{\frac {\partial \hat {A}} {\partial t}} \right) _ {H} \[4pt] &= \frac {i} {\hbar} H _ {H} \hat {A} _ {H} - \frac {i} {\hbar} \hat {A} _ {H} H _ {H} \[4pt] &= - \frac {i} {\hbar} [ \hat {A} , H ] _ {H} \end{aligned} \label{2.84}
The result
$i \hbar \frac {\partial} {\partial t} \hat {A} _ {H} = [ \hat {A} , H ] _ {H} \label{2.85}$
is known as the Heisenberg equation of motion. Here I have written the odd looking $H _ {H} = U^{\dagger} H U$. This is mainly to remind one about the time-dependence of $\hat{H}$. Generally speaking, for a time-independent Hamiltonian $U = e^{- i H t / h}$, $U$ and $H$ commute, and $H_H =H$. For a time-dependent Hamiltonian, $U$ and $H$ need not commute.
Classical equivalence for particle in a potential
The Heisenberg equation is commonly applied to a particle in an arbitrary potential. Consider a particle with an arbitrary one-dimensional potential
$H = \frac {p^{2}} {2 m} + V (x) \label{2.86}$
For this Hamiltonian, the Heisenberg equation gives the time-dependence of the momentum and position as
$\dot {p} = - \frac {\partial V} {\partial x} \label{2.87}$
$\dot {x} = \frac {p} {m} \label{2.88}$
Here, I have made use of
$\left[ \hat {x}^{n} , \hat {p} \right] = i \hbar n \hat {x}^{n - 1} \label{2.89}$
$\left[ \hat {x} , \hat {p}^{n} \right] = i \hbar n \hat {p}^{n - 1} \label{2.90}$
Curiously, the factors of $\hbar$ have vanished in Equations \ref{2.87} and \ref{2.88}, and quantum mechanics does not seem to be present. Instead, these equations indicate that the position and momentum operators follow the same equations of motion as Hamilton’s equations for the classical variables. If we integrate Equation \ref{2.88} over a time period $t$ we find that the expectation value for the position of the particle follows the classical motion.
$\langle x (t) \rangle = \frac {\langle p \rangle t} {m} + \langle x ( 0 ) \rangle \label{2.91}$
We can also use the time derivative of Equation \ref{2.88} to obtain an equation that mirrors Newton’s second law of motion, $F=ma$:
$m \frac {\partial^{2} \langle x \rangle} {\partial t^{2}} = - \langle \nabla V \rangle \label{2.92}$
These observations underlie Ehrenfest’s Theorem, a statement of the classical correspondence of quantum mechanics, which states that the expectation values for the position and momentum operators will follow the classical equations of motion.
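A minimal numerical sketch of these statements (Python with NumPy; the truncated number-basis size, the coherent-state amplitude, and units with $\hbar = m = \omega = 1$ are illustrative assumptions): it builds the Heisenberg-picture operator $\hat{x}_H(t) = U^{\dagger}\hat{x}U$ for a harmonic oscillator and confirms that $\langle x(t)\rangle$ follows the classical trajectory.

```python
import numpy as np
from math import factorial

# Heisenberg-picture check of Ehrenfest's theorem for a harmonic oscillator
# (hbar = m = omega = 1) in a truncated number basis; all sizes/values illustrative.
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)
p = 1j * (a.conj().T - a) / np.sqrt(2)
H = p @ p / 2 + x @ x / 2

evals, evecs = np.linalg.eigh(H)
U = lambda t: evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Coherent state |alpha> (real alpha): <x(0)> = sqrt(2)*alpha, <p(0)> = 0
alpha = 1.5
c = np.array([np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(factorial(n)) for n in range(N)])

for t in (0.0, 0.5, 1.0, 2.0):
    x_H = U(t).conj().T @ x @ U(t)             # Heisenberg operator, Eq. (2.80)
    print(t, np.real(c.conj() @ x_H @ c), np.sqrt(2) * alpha * np.cos(t))
```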
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 312.
2. Mukamel, S., Principles of Nonlinear Optical Spectroscopy. Oxford University Press: New York, 1995.
3. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; Ch. 4.
4. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994; Ch. 2.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.05%3A_Schrodinger_and_Heisenberg_Representations.txt
|
The interaction picture is a hybrid representation that is useful in solving problems with time-dependent Hamiltonians in which we can partition the Hamiltonian as
$H (t) = H_0 + V (t) \label{2.93}$
$H_0$ is a Hamiltonian for the degrees of freedom we are interested in, which we treat exactly, and can be (although for us usually will not be) a function of time. $V(t)$ is a time-dependent potential which can be complicated. In the interaction picture, we will treat each part of the Hamiltonian in a different representation. We will use the eigenstates of $H_0$ as a basis set to describe the dynamics induced by $V(t)$, assuming that $V(t)$ is small enough that eigenstates of $H_0$ are a useful basis. If $H_0$ is not a function of time, then there is a simple time-dependence to this part of the Hamiltonian that we may be able to account for easily. Setting $V$ to zero, we can see that the time evolution of the exact part of the Hamiltonian $H_0$ is described by
$\frac {\partial} {\partial t} U_0 \left( t , t_0 \right) = - \frac {i} {\hbar} H_0 (t) U_0 \left( t , t_0 \right) \label{2.94}$
where,
$U_0 \left( t , t_0 \right) = \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {t_0}^{t} d \tau H_0 ( \tau ) \right] \label{2.95}$
or, for a time-independent $H_0$,
$U_0 \left( t , t_0 \right) = e^{- i H_0 \left( t - t_0 \right) / \hbar} \label{2.96}$
We define a wavefunction in the interaction picture $| \psi _ {I} \rangle$ in terms of the Schrödinger wavefunction through:
$| \psi _ {S} (t) \rangle \equiv U_0 \left( t , t_0 \right) | \psi _ {I} (t) \rangle \label{2.97}$
or
$| \psi _ {I} \rangle = U_0^{\dagger} | \psi _ {S} \rangle \label{2.98}$
Effectively the interaction representation defines wavefunctions in such a way that the phase accumulated under $e^{- i H_0 t / \hbar}$ is removed. For small $V$, these are typically high frequency oscillations relative to the slower amplitude changes induced by $V$.
Now we need an equation of motion that describes the time evolution of the interaction picture wavefunctions. We begin by substituting Equation \ref{2.97} into the TDSE:
\begin{align} i \hbar \frac {\partial} {\partial t} | \psi _ {S} \rangle & = H | \psi _ {S} \rangle \[4pt] i \hbar \frac {\partial} {\partial t} \left( U_0 | \psi _ {I} \rangle \right) & = \left( H_0 + V (t) \right) U_0 | \psi _ {I} \rangle \[4pt] H_0 U_0 | \psi _ {I} \rangle + i \hbar U_0 \frac {\partial | \psi _ {I} \rangle} {\partial t} & = H_0 U_0 | \psi _ {I} \rangle + V (t) U_0 | \psi _ {I} \rangle \end{align}
Left-multiplying by $U_0^{\dagger}$ and cancelling the $H_0$ terms gives
$\therefore \quad i \hbar \frac {\partial | \psi _ {I} \rangle} {\partial t} = V_I | \psi _ {I} \rangle \label{2.101}$
where
$V_I (t) = U_0^{\dagger} \left( t , t_0 \right) V (t) U_0 \left( t , t_0 \right) \label{2.102}$
$| \psi _ {I} \rangle$ satisfies the Schrödinger equation with a new Hamiltonian in Equation \ref{2.102}: the interaction picture Hamiltonian, $V_I(t)$. We have performed a unitary transformation of $V(t)$ into the frame of reference of $H_0$, using $U_0$. Note that the matrix elements of $V_I$ are
$\left( V_I \right) _ {k l} = \left\langle k \left| V_I \right| l \right\rangle = e^{- i \omega _ {l k} t} V _ {k l}$
where $k$ and $l$ are eigenstates of $H_0$. We can now define a time-evolution operator in the interaction picture:
$| \psi _ {I} (t) \rangle = U _ {I} \left( t , t_0 \right) | \psi _ {I} \left( t_0 \right) \rangle \label{2.103}$
where
$U _ {I} \left( t , t_0 \right) = \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {t_0}^{t} d \tau V_I ( \tau ) \right] \label{2.104}$
Now we see that
\begin{aligned} \left|\psi_{S}(t)\right\rangle &=U_{0}\left(t, t_{0}\right)\left|\psi_{I}(t)\right\rangle \[4pt] &=U_{0}\left(t, t_{0}\right) U_{I}\left(t, t_{0}\right)\left|\psi_{I}\left(t_{0}\right)\right\rangle \[4pt] &=U_{0}\left(t, t_{0}\right) U_{I}\left(t, t_{0}\right)\left|\psi_{S}\left(t_{0}\right)\right\rangle \end{aligned}
$\therefore U\left(t, t_{0}\right)=U_{0}\left(t, t_{0}\right) U_{I}\left(t, t_{0}\right)\label{2.106}$
Also, the time evolution of conjugate wavefunction in the interaction picture can be written
$U^{\dagger} \left( t , t_0 \right) = U _ {I}^{\dagger} \left( t , t_0 \right) U_0^{\dagger} \left( t , t_0 \right) = \exp _ {-} \left[ \frac {i} {\hbar} \int _ {t_0}^{t} d \tau V_I ( \tau ) \right] \exp _ {-} \left[ \frac {i} {\hbar} \int _ {t_0}^{t} d \tau H_0 ( \tau ) \right] \label{2.107}$
For the last two expressions, the order of these operators certainly matters. So what changes about the time-propagation in the interaction representation? Let’s start by writing out the time-ordered exponential for $U$ in Equation \ref{2.106} using Equation \ref{2.104}:
\begin{align} U \left( t , t_0 \right) &= U_0 \left( t , t_0 \right) + \left( \frac {- i} {\hbar} \right) \int _ {t_0}^{t} d \tau U_0 ( t , \tau ) V ( \tau ) U_0 \left( \tau , t_0 \right) + \cdots \[4pt] &= U_0 \left( t , t_0 \right) + \sum _ {n = 1}^{\infty} \left( \frac {- i} {\hbar} \right)^{n} \int _ {t_0}^{t} d \tau _ {n} \int _ {t_0}^{\tau _ {n}} d \tau _ {n - 1} \cdots \int _ {t_0}^{\tau _ {2}} d \tau _ {1} U_0 \left( t , \tau _ {n} \right) V \left( \tau _ {n} \right) U_0 \left( \tau _ {n} , \tau _ {n - 1} \right) \ldots \times U_0 \left( \tau _ {2} , \tau _ {1} \right) V \left( \tau _ {1} \right) U_0 \left( \tau _ {1} , t_0 \right) \label{2.108} \end{align}
Here I have used the composition property of $U \left( t , t_0 \right)$. The same positive time-ordering applies. Note that the interactions $V(\tau_i)$ are not in the interaction representation here. Rather we used the definition in Equation \ref{2.102} and collected terms. Now consider how $U$ describes the time-dependence if we initiate the system in an eigenstate of $H_0$, $| l \rangle$, and observe the amplitude in a target eigenstate $| k \rangle$. The system evolves in eigenstates of $H_0$ during the different time periods, with the time-dependent interactions $V$ driving the transitions between these states. The first-order term describes direct transitions between $l$ and $k$ induced by $V$, integrated over the full time period. Before the interaction, phase is acquired as $e^{- i E _ {\ell} \left( \tau - t_0 \right) / \hbar}$, whereas after the interaction, phase is acquired as $e^{- i E _ {k} ( t - \tau ) / \hbar}$. Higher-order terms in the time-ordered exponential account for all possible intermediate pathways.
We now know how the interaction picture wavefunctions evolve in time. What about the operators? First of all, from examining the expectation value of an operator we see
\left.\begin{aligned} \langle \hat {A} (t) \rangle & = \langle \psi (t) | \hat {A} | \psi (t) \rangle \[4pt] & = \left\langle \psi \left( t_0 \right) \left| U^{\dagger} \left( t , t_0 \right) \hat {A} U \left( t , t_0 \right) \right| \psi \left( t_0 \right) \right\rangle \[4pt] & = \left\langle \psi \left( t_0 \right) \left| U _ {I}^{\dagger} U_0^{\dagger} \hat {A} U_0 U _ {I} \right| \psi \left( t_0 \right) \right\rangle \[4pt] & = \left\langle \psi _ {I} (t) \left| \hat {A} _ {I} \right| \psi _ {I} (t) \right\rangle \end{aligned} \right. \label{2.109}
where
$A _ {I} \equiv U_0^{\dagger} A _ {S} U_0 \label{2.110}$
So the operators in the interaction picture also evolve in time, but under $H_0$. This can be expressed as a Heisenberg equation by differentiating
$\frac {\partial} {\partial t} \hat {A} _ {I} = \frac {i} {\hbar} \left[ H_0 , \hat {A} _ {I} \right] \label{2.111}$
Also, we know
$\frac {\partial} {\partial t} | \psi _ {I} \rangle = \frac {- i} {\hbar} V_I (t) | \psi _ {I} \rangle \label{2.112}$
Notice that the interaction representation is a partition between the Schrödinger and Heisenberg representations. Wavefunctions evolve under $V_I$, while operators evolve under $H_0$:
$\text {For } H_0 = 0 ,\ V (t) = H \quad \Rightarrow \quad \frac {\partial \hat {A}} {\partial t} = 0 ; \quad \frac {\partial} {\partial t} | \psi _ {S} \rangle = \frac {- i} {\hbar} H | \psi _ {S} \rangle \quad \text{(Schrödinger)}$
$\text {For } H_0 = H ,\ V (t) = 0 \quad \Rightarrow \quad \frac {\partial \hat {A}} {\partial t} = \frac {i} {\hbar} [ H , \hat {A} ] ; \quad \frac {\partial \psi} {\partial t} = 0 \quad \text{(Heisenberg)} \label{2.113}$
The relationship between $U_I$ and $b_n$
Earlier we described how time-dependent problems with Hamiltonians of the form $H = H_0 + V (t)$ could be solved in terms of the time-evolving amplitudes in the eigenstates of $H_0$. We can describe the state of the system as a superposition
$| \psi (t) \rangle = \sum _ {n} c _ {n} (t) | n \rangle \label{2.114}$
where the expansion coefficients $c _ {k} (t)$ are given by
\left.\begin{aligned} c _ {k} (t) & = \langle k | \psi (t) \rangle = \left\langle k \left| U \left( t , t_0 \right) \right| \psi \left( t_0 \right) \right\rangle \[4pt] & = \left\langle k \left| U_0 U _ {I} \right| \psi \left( t_0 \right) \right\rangle \[4pt] & = e^{- i E _ {k} t / \hbar} \left\langle k \left| U _ {I} \right| \psi \left( t_0 \right) \right\rangle \end{aligned} \right. \label{2.115}
Now, comparing equations \ref{2.115} and \ref{2.54} allows us to recognize that our earlier modified expansion coefficients $b_n$ were expansion coefficients for interaction picture wavefunctions
$b _ {k} (t) = \langle k | \psi _ {I} (t) \rangle = \left\langle k \left| U _ {I} \right| \psi \left( t_0 \right) \right\rangle \label{2.116}$
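The equivalence in Equation \ref{2.116} can be verified numerically. The short Python sketch below (illustrative only; the two-level Hamiltonian, drive, and time step are arbitrary assumptions, with $\hbar = 1$) propagates a driven two-level system and confirms that the Schrödinger-picture coefficients $c_k(t)$ and the interaction-picture coefficients $b_k(t)$ differ only by the phase factor $e^{-iE_k t/\hbar}$, so both give the same populations.

```python
import numpy as np

# Illustrative two-level model (hbar = 1): H(t) = H0 + V0*cos(omega*t)
E = np.array([0.0, 1.0])                              # eigenvalues of H0 (assumed)
H0 = np.diag(E)
V0 = 0.05 * np.array([[0.0, 1.0], [1.0, 0.0]])        # weak off-diagonal coupling (assumed)
omega = 1.0                                           # drive frequency, near resonance (assumed)

psi = np.array([1.0, 0.0], dtype=complex)             # start in eigenstate |l>
dt, nsteps, t = 0.001, 20000, 0.0
for _ in range(nsteps):
    w, U = np.linalg.eigh(H0 + V0 * np.cos(omega * t))
    psi = U @ (np.exp(-1j * w * dt) * (U.conj().T @ psi))   # short-time propagator
    t += dt

c_k = psi                                             # Schrodinger-picture coefficients c_k(t)
b_k = np.exp(1j * E * t) * c_k                        # interaction picture: b_k = e^{+iE_k t} c_k
print("|c_k|^2 =", np.abs(c_k)**2)                    # populations are identical in both pictures
print("|b_k|^2 =", np.abs(b_k)**2)
```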
Readings
1. Mukamel, S., Principles of Nonlinear Optical Spectroscopy. Oxford University Press: New York, 1995.
2. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; Ch. 4.
Perturbation theory refers to calculating the time-dependence of a system by truncating the expansion of the interaction picture time-evolution operator after a certain term. In practice, truncating the full time-propagator $U$ is not effective, and only works well for times short compared to the inverse of the energy splitting between coupled states of your Hamiltonian. The interaction picture applies to Hamiltonians that can be cast as
$H=H_o + V(t)$
and allows us to focus on the influence of the coupling. We can then treat the time evolution under $H_o$ exactly, but truncate the influence of $V(t)$. This works well for weak perturbations. Let’s look more closely at this.
We know the eigenstates for $H_o$:
$H _ {0} | n \rangle = E _ {n} | n \rangle$
and we can calculate the evolution of the wavefunction that results from $V(t)$:
$| \psi _ {I} (t) \rangle = \sum _ {n} b _ {n} (t) | n \rangle \label{2.117}$
For a given state $k$, we calculate $b_k(t)$ as:
$b _ {k} = \left\langle k \left| U _ {I} \left( t , t _ {0} \right) \right| \psi \left( t _ {0} \right) \right\rangle \label{2.118}$
where
$U _ {I} \left( t , t _ {0} \right) = \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {t _ {0}}^{t} V _ {I} ( \tau ) d \tau \right ]\label{2.119}$
Now we can truncate the expansion after a few terms. This works well for small changes in amplitude of the quantum states with small coupling matrix elements relative to the energy splittings involved ($\left| b _ {k} (t) \right| \approx \left| b _ {k} ( 0 ) \right| ; | V | \ll \left| E _ {k} - E _ {n} \right|$). As we will see, the results we obtain from perturbation theory are widely used for spectroscopy, condensed phase dynamics, and relaxation. Let’s take the specific case where we have a system prepared in $| \ell \rangle$, and we want to know the probability of observing the system in $| k \rangle$ at time $t$ due to $V(t)$:
$P _ {k} (t) = \left| b _ {k} (t) \right|^{2} \label{2.120}$
Expanding
$b _ {k} (t) = \left\langle k \left| \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau V _ {I} ( \tau ) \right] \right| \ell \right\rangle$
\left.\begin{aligned} b _ {k} (t) = \langle k | \ell \rangle & - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau \left\langle k \left| V _ {I} ( \tau ) \right| \ell \right\rangle \ & + \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d \tau _ {2} \int _ {t _ {0}}^{\tau _ {2}} d \tau _ {1} \left\langle k \left| V _ {I} \left( \tau _ {2} \right) V _ {I} \left( \tau _ {1} \right) \right| \ell \right\rangle + \ldots \end{aligned} \right. \label{2.121}
Now, using
$\left\langle k \left| V _ {I} (t) \right| \ell \right\rangle = \left\langle k \left| U _ {0}^{\dagger} V (t) U _ {0} \right| \ell \right\rangle = e^{- i \omega _ {l k} t} V _ {k \ell} (t) \label{2.122}$
we obtain:
$b _ {k} (t) = \delta _ {k \ell} - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau _ {1} e^{- i \omega _ {\ell k} \tau _ {1}} V _ {k \ell} \left( \tau _ {1} \right) \quad \text{“first-order”} \label{2.123}$
$+ \sum _ {m} \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d \tau _ {2} \int _ {t _ {0}}^{\tau _ {2}} d \tau _ {1} e^{- i \omega _ {m k} \tau _ {2}} V _ {k m} \left( \tau _ {2} \right) e^{- i \omega _ {\ell m} \tau _ {1}} V _ {m \ell} \left( \tau _ {1} \right) \quad \text{“second-order”} \label{2.124}$
$+ ...$
The first-order term allows only direct transitions between $| \ell \rangle$ and $| k \rangle$, as allowed by the matrix element in $V$, whereas the second-order term accounts for transitions occurring through all possible intermediate states $| m \rangle$. For perturbation theory, the time-ordered integral is truncated at the appropriate order. Including only the first integral is first-order perturbation theory. The order to which a perturbation theory calculation is carried should be judged by which pathways between $| \ell \rangle$ and $| k \rangle$ need to be accounted for and which ones are allowed by the matrix elements.
For first-order perturbation theory, the expression in Equation \ref{2.123} is the solution to the differential equation that you get for direct coupling between $| \ell \rangle$ and $| k \rangle$:
$\frac {\partial} {\partial t} b _ {k} = \frac {- i} {\hbar} e^{- i \omega _ {\ell k} t} V _ {k \ell} (t) b _ {\ell} ( 0 ) \label{2.125}$
This indicates that the solution does not allow for the feedback between $| \ell \rangle$ and $| k \rangle$ that accounts for changing populations. This is the reason we say that validity requires
$\left| b _ {k} (t) \right|^{2} - \left| b _ {k} ( 0 ) \right|^{2} \ll 1.$
If the initial state of the system $\left|\psi_{0}\right\rangle$ is not an eigenstate of $H_0$, we can express it as a superposition of eigenstates,
$b _ {k} (t) = \sum _ {n} b _ {n} ( 0 ) \left\langle k \left| U _ {I} \right| n \right\rangle \label{2.126}$
Another observation applies to first-order perturbation theory. If the system is initially prepared in a state $| \ell \rangle$, and a time-dependent perturbation is turned on and then turned off over the time interval $t = -\infty$ to $+\infty$, then the complex amplitude in the target state $| k \rangle$ is just related to the Fourier transform of $V_{\ell k}(t)$ evaluated at the energy gap $\omega_{\ell k}$.
$b _ {k} (t) = - \frac {i} {\hbar} \int _ {- \infty}^{+ \infty} d \tau \,e^{- i \omega _ {\ell k} \tau} V _ {k \ell} ( \tau ) \label{2.127}$
If the Fourier transform pair is defined in the following manner:
$\tilde {V} ( \omega ) \equiv \tilde {\mathcal {F}} [ V (t) ] = \int _ {- \infty}^{+ \infty} d t \,V (t) \exp ( i \omega t ) \label{2.128}$
$V (t) \equiv \tilde {\mathcal {F}}^{- 1} [ \tilde {V} ( \omega ) ] = \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d \omega\, \tilde {V} ( \omega ) \exp ( - i \omega t ) \label{2.129}$
Then we can write the probability of transfer to state $k$ as
$P _ {k \ell} = \frac {2 \pi \left| \tilde {V} _ {k \ell} \left( \omega _ {k \ell} \right) \right|^{2}} {\hbar^{2}} \label{2.130}$
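Equation \ref{2.127} can be evaluated directly for a pulse that turns on and off. The sketch below (illustrative only; the Gaussian pulse shape, its width and amplitude, and the energy gap are arbitrary assumptions, with $\hbar = 1$) compares a brute-force quadrature of the amplitude with the closed-form Fourier transform of a Gaussian.

```python
import numpy as np

hbar = 1.0
w_kl = 2.0                                # energy gap omega_kl (assumed)
sigma, V0 = 0.4, 0.1                      # Gaussian pulse width and peak coupling (assumed)

t = np.linspace(-20, 20, 40001)
V_kl = V0 * np.exp(-t**2 / (2 * sigma**2))            # V_{kl}(t), switched on and off

# Eq. 2.127: b_k = -(i/hbar) * integral dt e^{-i w_lk t} V_kl(t), with w_lk = -w_kl
b_k = (-1j / hbar) * np.trapz(np.exp(1j * w_kl * t) * V_kl, t)

# closed-form Fourier transform of the Gaussian pulse, evaluated at w_kl
V_tilde = V0 * sigma * np.sqrt(2 * np.pi) * np.exp(-(w_kl * sigma)**2 / 2)
print(abs(b_k)**2, (V_tilde / hbar)**2)   # the two agree
```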
Example: First-order Perturbation Theory
Let’s consider a simple model for vibrational excitation induced by the compression of a harmonic oscillator. We will subject a harmonic oscillator initially in its ground state to a Gaussian compression pulse, which increases its force constant.
First, write the complete time-dependent Hamiltonian:
$H (t) = T + V (t) = \frac {p^{2}} {2 m} + \frac {1} {2} k (t) x^{2} \label{2.131}$
Now, partition it according to $H=H_o + V(t)$ in such a manner that we can write $H_o$ as a harmonic oscillator Hamiltonian. This involves partitioning the time-dependent force constant into two parts:
$k (t) = k _ {0} + \delta k (t)$
$k _ {0} = m \Omega^{2}$
$\delta k (t) = \delta k _ {0} \exp \left( - \frac {\left( t - t _ {0} \right)^{2}} {2 \sigma^{2}} \right) \label{2.133}$
$H=\underbrace{\frac{p^{2}}{2 m}+\frac{1}{2} k_{0} x^{2}}_{H_{0}}+\underbrace{\frac{1}{2} \delta k_{0} x^{2} \exp \left(-\frac{\left(t-t_{0}\right)^{2}}{2 \sigma^{2}}\right)}_{V(t)}$
Here $\delta k _ {0}$ is the magnitude of the induced change in the force constant, and $\sigma$ is the time-width of the Gaussian perturbation. So, we know the eigenstates of $H_0$: $H _ {0} | n \rangle = E _ {n} | n \rangle$
$H_{0}=\hbar \Omega\left(a^{\dagger} a+\frac{1}{2}\right)$
and
$E _ {n} = \hbar \Omega \left( n + \frac {1} {2} \right)$
Now we ask, if the system is in $|0\rangle$ before applying the perturbation, what is the probability of finding it in state n after the perturbation?
For $n \neq 0$
$b _ {n} (t) = \frac {- i} {\hbar} \int _ {t _ {0}}^{t} d \tau V _ {n 0} ( \tau ) e^{i \omega _ {n 0} \tau} \label{2.135}$
Using
$\omega _ {n 0} = \left( E _ {n} - E _ {0} \right) / \hbar = n \Omega$
and recognizing that we can set the limits to $t_0 = -\infty$ and $t = +\infty$
$b _ {n} (t) = \frac {- i} {2 \hbar} \delta k _ {0} \left\langle n \left| x^{2} \right| 0 \right\rangle \int _ {- \infty}^{+ \infty} d \tau e^{i n \Omega \tau} e^{- \tau^{2} / 2 \sigma^{2}} \label{2.136}$
This leads to
$b _ {n} (t) = \frac {- i} {2 \hbar} \delta k _ {0} \sqrt {2 \pi} \sigma \left\langle n \left| x^{2} \right| 0 \right\rangle e^{- n^{2} \sigma^{2} \Omega^{2} / 2} \label{2.137}$
Here we made use of an important identity for Gaussian integrals:
$\int _ {- \infty}^{+ \infty} \exp \left( a x^{2} + b x + c \right) d x = \sqrt {\frac {- \pi} {a}} \exp \left( c - \frac {1} {4} \frac {b^{2}} {a} \right) \label{2.138}$
and
$\int _ {- \infty}^{+ \infty} \exp \left( - a x^{2} + i b x \right) d x = \sqrt {\frac {\pi} {a}} \exp \left( - \frac {b^{2}} {4 a} \right) \label{2.139}$
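These identities are easy to spot-check numerically (a sketch; the test values of $a$ and $b$ are arbitrary):

```python
import numpy as np

a, b = 1.3, 0.7                              # arbitrary test values, a > 0
x = np.linspace(-50, 50, 200001)
numeric = np.trapz(np.exp(-a * x**2 + 1j * b * x), x)
closed_form = np.sqrt(np.pi / a) * np.exp(-b**2 / (4 * a))
print(numeric, closed_form)                  # real parts agree; imaginary part is ~ 0
```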
What about the matrix element?
$x^{2} = \frac {\hbar} {2 m \Omega} \left( a + a^{\dagger} \right)^{2} = \frac {\hbar} {2 m \Omega} \left( a a + a^{\dagger} a + a a^{\dagger} + a^{\dagger} a^{\dagger} \right)\label{2.140}$
From these we see that first-order perturbation theory will not allow transitions to $n = 1$, only $n = 0$ and $n = 2$. Generally this would not be realistic, because you would certainly expect excitation to $n=1$ to dominate over excitation to $n=2$. A real system would also be anharmonic, in which case the leading term in the expansion of the potential $V(x)$ that is linear in $x$ would not vanish as it does for a harmonic oscillator, and this would lead to matrix elements that raise and lower the excitation by one quantum.
However for the present case,
$\left\langle 2 \left| x^{2} \right| 0 \right\rangle = \sqrt {2} \frac {\hbar} {2 m \Omega} \label{2.141}$
So,
$b _ {2} = \frac {- i \sqrt {\pi} \delta k _ {0} \sigma} {2 m \Omega} e^{- 2 \sigma^{2} \Omega^{2}} \label{2.142}$
and we can write the probability of occupying the $n = 2$ state as
$P _ {2} = \left| b _ {2} \right|^{2} = \frac {\pi \delta k _ {0}^{2} \sigma^{2}} {2 m^{2} \Omega^{2}} e^{- 4 \sigma^{2} \Omega^{2}} \label{2.143}$
From the exponential argument, significant transfer of amplitude occurs when the compression pulse width is small compared to the vibrational period.
$\sigma \ll \dfrac {1} {\Omega} \label{2.144}$
In this regime, the potential is changing faster than the atoms can respond to the perturbation. In practice, when considering a solid-state problem, with frequencies matching those of acoustic phonons and unit cell dimensions, we need perturbations that move faster than the speed of sound, i.e., a shock wave. The opposite limit, $\sigma \Omega > > 1$, is the adiabatic limit. In this case, the perturbation is so slow that the system always remains entirely in n=0, even while it is compressed.
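The competition between the $\sigma^2$ prefactor and the exponential in Equation \ref{2.143} is easy to see numerically. The sketch below (the oscillator mass, frequency, and size of $\delta k_0$ are arbitrary assumptions, with $\hbar = 1$) evaluates $P_2$ as a function of the pulse width: the transfer is largest for pulses much shorter than the vibrational period $2\pi/\Omega$ and is exponentially suppressed in the adiabatic limit $\sigma\Omega \gg 1$.

```python
import numpy as np

hbar = 1.0
m, Omega = 1.0, 1.0                          # oscillator mass and frequency (assumed)
dk0 = 0.1 * m * Omega**2                     # small change in force constant (assumed)

sigma = np.linspace(0.01, 3.0, 300)          # pulse widths
P2 = (np.pi * dk0**2 * sigma**2) / (2 * m**2 * Omega**2) * np.exp(-4 * sigma**2 * Omega**2)  # Eq. 2.143

print("P2 is largest near sigma*Omega =", sigma[np.argmax(P2)] * Omega)
print("P2 in the adiabatic limit (sigma*Omega = 3):", P2[-1])
```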
Now, let’s consider the validity of this first-order treatment. Perturbation theory does not allow for $b_n$ to change much from its initial value. First we re-write Equation \ref{2.143} as
$P _ {2} = \sigma^{2} \Omega^{2} \frac {\pi} {2} \left( \frac {\delta k _ {0}^{2}} {k _ {0}^{2}} \right) e^{- 4 \sigma^{2} \Omega^{2}} \label{2.145}$
Now for changes that don’t differ much from the initial value, $P _ {2} \ll 1$
$\sigma^{2} \Omega^{2} \frac {\pi} {2} \left( \frac {\delta k _ {0}^{2}} {k _ {0}^{2}} \right) \ll 1 \label{2.146}$
Generally, the magnitude of the perturbation $\delta k _ {0}$ must be small compared to $k_0$.
One step further…
The preceding example was simple, but it tracks the general approach to setting up problems that you treat with time-dependent perturbation theory. The approach relies on writing a Hamiltonian that can be cast into a Hamiltonian that you can treat exactly $H_0$, and time-dependent perturbations that shift amplitudes between its eigenstates. For this scheme to work well, we need the magnitude of perturbation to be small, which immediately suggests working with a Taylor series expansion of the potential. For instance, take a one-dimensional potential for a bound particle, $V(x)$, which is dependent on the form of an external variable y. We can expand the potential in x about its minimum $x = 0$ as
\begin{align} V (x) &= \frac {1} {2 !} \left. \frac {\partial^{2} V} {\partial x^{2}} \right| _ {x = 0} x^{2} + \frac {1} {2 !} \left. \frac {\partial^{2} V} {\partial x \partial y} \right| _ {x = 0} x y + \frac {1} {3 !} \sum _ {y , z} \left. \frac {\partial^{3} V} {\partial x \partial y \partial z} \right| _ {x = 0} x y z + \cdots \label{2.147} \ &= \frac {1} {2} k x^{2} + V^{( 2 )} x y + \left( V _ {3}^{( 3 )} x^{3} + V _ {2}^{( 3 )} x^{2} y + V _ {1}^{( 3 )} x y^{2} \right) + \cdots\end{align}
The first term is the harmonic force constant for $x$, and the second term is a bi-linear coupling whose magnitude $V^{(2)}$ indicates how much a change in the variable y influences the variable $x$. The remaining terms are cubic expansion terms. $V_3^{(3)}$ is the cubic anharmonicity of $V(x)$, and the remaining two terms are cubic couplings that describe the dependence of x and y. Introducing a time-dependent potential is equivalent to introducing a time-dependence to the operator y, where the form and strength of the interaction is subsumed into the amplitude $V$. In the case of the previous example, our formulation of the problem was equivalent to selecting only the $V _ {2}^{( 3 )}$ term, so that $\delta k _ {0} / 2 = V _ {2}^{( 3 )}$, and giving the value of y a time-dependence described by the Gaussian waveform.
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 1285.
2. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; Ch. 4.
3. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994; Ch. 2.
A number of important relationships in quantum mechanics that describe rate processes come from first-order perturbation theory. These expressions begin with two model problems that we want to work through:
1. time evolution after applying a step perturbation, and
2. time evolution after applying a harmonic perturbation.
As before, we will ask: if we prepare the system in the state $| \ell \rangle$, what is the probability of observing the system in state $| k \rangle$ following the perturbation?
Constant Perturbation (or a Step Perturbation)
The system is prepared such that $| \psi ( - \infty ) \rangle = | \ell \rangle$. A constant perturbation of amplitude $V$ is applied at $t_o$:
$V (t) = V \Theta \left( t - t _ {0} \right) = \left\{\begin{array} {l l} {0} & {t < t _ {0}} \ {V} & {t \geq t _ {0}} \end{array} \right. \label{2.148}$
Here $\Theta \left( t - t _ {0} \right)$ is the Heaviside step function, which is 0 for $t < t_0$ and 1 for $t \geq t_0$. Now, turning to first-order perturbation theory, for the amplitude in $k \neq \ell$ we have:
$b _ {k} = - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau \,e^{i \omega _ {k l} \left( \tau - t _ {0} \right)} V _ {k \ell} ( \tau ) \label{2.149}$
Here $V_{k\ell}$ is independent of time.
Setting $t_o = 0$
\begin{align} b _ {k} &= - \frac {i} {\hbar} V _ {k \ell} \int _ {0}^{t} d \tau \,e^{i \omega _ {k \ell} \tau} \[4pt] &= - \frac {V _ {k \ell}} {E _ {k} - E _ {\ell}} \left[ \exp \left( i \omega _ {k \ell} t \right) - 1 \right] \[4pt] &= - \frac {2 i V _ {k \ell} e^{i \omega _ {k \ell} t / 2}} {E _ {k} - E _ {\ell}} \sin \left( \omega _ {k \ell} t / 2 \right) \label{2.150} \end{align}
For Equation \ref{2.150}, the following identity was used
$e^{i \theta} - 1 = 2 i e^{i \theta / 2} \sin ( \theta / 2 ).$
Now the probability of being in the $k$ state is
\begin{align} P _ {k} &= \left| b _ {k} \right|^{2} \[4pt] &= \frac {4 \left| V _ {k \ell} \right|^{2}} {\left| E _ {k} - E _ {\ell} \right|^{2}} \sin^{2} \left( \frac {\omega _ {k \ell} t} {2} \right) \label{2.151} \end{align}
If we write this using the energy splitting variable we used earlier:
$\Delta = \left( E _ {k} - E _ {\ell} \right) / 2$
then
$P _ {k} = \frac {V^{2}} {\Delta^{2}} \sin^{2} ( \Delta t / \hbar ) \label{2.152}$
Fortunately, we have the exact result for the two-level problem to compare this approximation to
$P _ {k} = \frac {V^{2}} {V^{2} + \Delta^{2}} \sin^{2} \left( \sqrt {\Delta^{2} + V^{2}} t / \hbar \right) \label{2.153}$
From comparing Equation \ref{2.152} and \ref{2.153}, it is clear that the perturbation theory result works well for $V \ll \Delta$, as expected for this approximation approach.
Let’s examine the time-dependence to $P_k$, and compare the perturbation theory (solid lines) to the exact result (dashed lines) for different values of $\Delta$.
The worst correspondence is for $\Delta=0.5$ (red curves), for which the behavior appears quadratic and the probability quickly exceeds unity. It is certainly unrealistic, but we do not expect that the expression will hold for the “strong coupling” case: $\Delta \ll V$. One begins to have quantitative accuracy in the regime $P _ {k} (t) - P _ {k} ( 0 ) < 0.1$ or $\Delta > 4V$.
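This comparison is easy to reproduce numerically. The sketch below (a rough illustration; the value of $V$ and the set of $\Delta$ values are arbitrary assumptions, with $\hbar = 1$) evaluates Equations \ref{2.152} and \ref{2.153} on a common time grid and reports how far the first-order result strays from the exact one.

```python
import numpy as np

hbar, V = 1.0, 1.0                            # coupling strength (assumed)
t = np.linspace(0, 10, 1000)

for Delta in (0.5, 2.0, 8.0):                 # energy splitting parameter (assumed values)
    P_pert = (V**2 / Delta**2) * np.sin(Delta * t / hbar)**2                                # Eq. 2.152
    P_exact = (V**2 / (V**2 + Delta**2)) * np.sin(np.sqrt(Delta**2 + V**2) * t / hbar)**2   # Eq. 2.153
    print(f"Delta = {Delta}: max |P_pert - P_exact| = {np.max(np.abs(P_pert - P_exact)):.3f}")
```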
Now let’s look at the dependence on $\Delta$. We can write the first-order result Equation \ref{2.152} as
$P _ {k} = \frac {V^{2} t^{2}} {\hbar^{2}} \text{sinc}^{2} ( \Delta t / \hbar ) \label{2.154}$
where
$\text{sinc} (x) = \dfrac{\sin (x)}{x}.$
If we plot the probability of transfer from $| \ell \rangle$ to $| k \rangle$ as a function of the energy level splitting ($E_k-E_{\ell}$), we have:
The probability of transfer is sharply peaked where energy of the initial state matches that of the final state, and the width of the energy mismatch narrows with time. Since
$\lim _ {x \rightarrow 0} \operatorname {sinc} (x) = 1,$
we see that the short time behavior is a quadratic growth in $P_k$
$\lim _ {\Delta \rightarrow 0} P _ {k} = \dfrac{V^{2} t^{2}}{\hbar^{2}} \label{2.155}$
The integrated area grows linearly with time.
Uncertainty
Since the energy spread of states to which transfer is efficient scales approximately as $E _ {k} - E _ {\ell} < 2 \pi \hbar / t$, this observation is sometimes referred to as an uncertainty relation with
$\Delta E \cdot \Delta t \geq 2 \pi \hbar$
However, remember that this is really just an observation of the principles of Fourier transforms. A frequency can only be determined as accurately as the length of the time over which you observe oscillations. Since time is not an operator, it is not a true uncertainty relation like
$\Delta p \cdot \Delta x \geq 2 \pi \hbar.$
In the long time limit, the $\text{sinc}^2 (x)$ function narrows to a delta function:
$\lim _ {t \rightarrow \infty} \frac {\sin^{2} ( x t / 2 )} {t\, x^{2}} = \frac {\pi} {2} \delta (x) \label{2.156}$
So the long time probability of being in the $k$ state is
$\lim _ {t \rightarrow \infty} P _ {k} (t) = \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) t \label{2.157}$
The delta function enforces energy conservation, saying that the energies of the initial and target state must be the same in the long time limit. What is interesting in Equation \ref{2.157} is that we see a probability growing linearly in time. This suggests a transfer rate that is independent of time, as expected for simple first-order kinetics:
$w _ {k} (t) = \frac {\partial P _ {k} (t)} {\partial t} = \frac {2 \pi \left| V _ {k \ell} \right|^{2}} {\hbar} \delta \left( E _ {k} - E _ {\ell} \right) \label{2.158}$
This is one statement of Fermi’s Golden Rule—the state-to-state form—which describes relaxation rates from first-order perturbation theory. We will show that this rate properly describes long time exponential relaxation rates that you would expect from the solution to
$\dfrac{d P}{d t} = - w P.$
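As a sanity check on the Golden Rule rate (a sketch only; the evenly spaced quasi-continuum, the constant coupling $V$, and the level spacing are illustrative assumptions, with $\hbar = 1$), one can sum the first-order probability of Equation \ref{2.151} over a dense band of final states and compare the growth of the total transfer probability with $2\pi|V|^2\rho/\hbar$:

```python
import numpy as np

hbar = 1.0
V = 0.003                                     # constant coupling to each continuum state (assumed)
dE = 0.001                                    # level spacing of the quasi-continuum (assumed)
rho = 1.0 / dE                                # density of states
E_k = (np.arange(-2000, 2000) + 0.5) * dE     # final-state energies centered on E_l = 0

for t in (1.0, 3.0, 5.0):
    # first-order probability for each final state (Eq. 2.151), summed over the band
    P_total = np.sum(4 * V**2 / E_k**2 * np.sin(E_k * t / (2 * hbar))**2)
    print(f"t = {t}:  P_total / t = {P_total / t:.4f}")

print("Golden Rule rate 2*pi*V^2*rho/hbar =", 2 * np.pi * V**2 * rho / hbar)
```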
Harmonic Perturbation
The second model calculation is the interaction of a system with an oscillating perturbation turned on at time $t_0 = 0$. The results will be used to describe how a light field induces transitions in a system through dipole interactions.
Again, we are looking to calculate the transition probability between states $\ell$ and $k$:
$V (t) = V \cos \omega t \Theta (t) \label{2.159}$
\begin{align} V _ {k \ell} (t) &= V _ {k \ell} \cos \omega t \label{2.160} \[4pt] &= \frac {V _ {k \ell}} {2} \left[ e^{- i \omega t} + e^{i \omega t} \right] \end{align}
Setting $t _ {0} \rightarrow 0$, first-order perturbation theory leads to
\begin{align} b _ {k} &= \frac {- i} {\hbar} \int _ {t _ {0}}^{t} d \tau\, V _ {k \ell} ( \tau ) e^{i \omega _ {k \ell} \tau} \[4pt] &= \frac {- i V _ {k \ell}} {2 \hbar} \int _ {0}^{t} d \tau \left[ e^{i \left( \omega _ {k \ell} - \omega \right) \tau} + e^{i \left( \omega _ {k \ell} + \omega \right) \tau} \right] \[4pt] &= \frac {- i V _ {k \ell}} {2 \hbar} \left[ \frac {e^{i \left( \omega _ {k \ell} - \omega \right) t} - 1} {\omega _ {k \ell} - \omega} + \frac {e^{i \left( \omega _ {k \ell} + \omega \right) t} - 1} {\omega _ {k \ell} + \omega} \right] \end{align}
Using
$e^{i \theta} - 1 = 2 i e^{i \theta / 2} \sin ( \theta / 2 )$
as before:
$b _ {k} = \frac {V _ {k \ell}} {\hbar} \left[ \underbrace{\frac {e^{i \left( \omega _ {k \ell} - \omega \right) t / 2} \sin \left[ \left( \omega _ {k \ell} - \omega \right) t / 2 \right]} {\omega _ {k \ell} - \omega}}_{\text{absorption}} + \underbrace{\frac {e^{i \left( \omega _ {k \ell} + \omega \right) t / 2} \sin \left[ \left( \omega _ {k \ell} + \omega \right) t / 2 \right]} {\omega _ {k \ell} + \omega}}_{\text{stimulated emission}} \right] \label{2.162}$
Notice that these terms are only significant when $\omega \approx \pm \omega _ {k \ell}$. The condition for efficient transfer is resonance, a matching of the frequency of the harmonic interaction with the energy splitting between quantum states. Consider the resonance conditions that will maximize each of these:
If we consider only absorption,
$P _ {k \ell} = \left| b _ {k} \right|^{2} = \frac {\left| V _ {k \ell} \right|^{2}} {\hbar^{2} \left( \omega _ {k \ell} - \omega \right)^{2}} \sin^{2} \left[ \frac {1} {2} \left( \omega _ {k \ell} - \omega \right) t \right] \label{2.163}$
We can compare this with the exact expression:
$P _ {k \ell} = \left| b _ {k} \right|^{2} = \frac {\left| V _ {k \ell} \right|^{2}} {\hbar^{2} \left( \omega _ {k \ell} - \omega \right)^{2} + \left| V _ {k \ell} \right|^{2}} \sin^{2} \left[ \frac {1} {2 \hbar} \sqrt {\left| V _ {k \ell} \right|^{2} + \hbar^{2} \left( \omega _ {k \ell} - \omega \right)^{2}} \, t \right] \label{2.164}$
Again, we see that the first-order expression is valid for couplings $\left| V _ {k \ell} \right|$ that are small relative to the detuning $\Delta \omega = \left( \omega _ {k \ell} - \omega \right)$. The maximum probability for transfer is on resonance $\omega _ {k \ell} = \omega$
Similar to our description of the constant perturbation, the long time limit for this expression leads to a delta function $\delta \left( \omega _ {k \ell} - \omega \right)$. In this long time limit, we can neglect interferences between the resonant and antiresonant terms. The rates of transitions between $k$ and $\ell$ states determined from $w _ {k \ell} = \partial P _ {k} / \partial t$ becomes
$w _ {k \ell} = \frac {\pi} {2 \hbar^{2}} \left| V _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{2.165}$
We can examine the limitations of this formula. When we look for the behavior on resonance, expanding the sin(x) shows us that Pk rises quadratically for short times:
$\lim _ {\Delta \omega \rightarrow 0} P _ {k} (t) = \frac {\left| V _ {k \ell} \right|^{2}} {4 \hbar^{2}} t^{2} \label{2.166}$
This clearly will not describe long-time behavior, but it will hold for small $P_k$, so we require
$t \ll \frac {2 \hbar} {V _ {k \ell}} \label{2.167}$
At the same time, we cannot observe the system on too short a time scale. We need the field to make several oscillations for this to be considered a harmonic perturbation.
$t > \frac {1} {\omega} \approx \frac {1} {\omega _ {k \ell}} \label{2.168}$
These relationships imply that we require
$V _ {k \ell} \ll \hbar \omega _ {k \ell} \label{2.169}$
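For orientation, the first-order lineshape of Equation \ref{2.163} can be evaluated as a function of detuning (a sketch; the coupling, observation time, and frequency grid are arbitrary assumptions, with $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
V_kl, t = 0.01, 50.0                          # coupling and observation time (assumed)
detuning = np.linspace(-1.0, 1.0, 2001)       # omega_kl - omega

x = np.where(np.abs(detuning) < 1e-12, 1e-12, detuning)       # avoid the removable singularity
P = (V_kl**2 / (hbar**2 * x**2)) * np.sin(0.5 * x * t)**2     # Eq. 2.163

print("on resonance:", P[len(P) // 2], " vs (V t / 2 hbar)^2 =", (V_kl * t / (2 * hbar))**2)
print("first zeros of the lineshape at detuning = +/- 2*pi/t =", 2 * np.pi / t)
```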
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 1299.
2. McHale, J. L., Molecular Spectroscopy. 1st ed.; Prentice Hall: Upper Saddle River, NJ, 1999; Ch. 4.
3. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994.
At a fundamental level, the basic laws governing the time evolution of isolated quantum mechanical systems are invariant under time reversal. That is, there is no preferred direction to the arrow of time. The Time Dependent Schrödinger Equation is reversible, meaning that one can find solutions for propagating either forward or backward in time. If one reverses the sign of time and thereby momenta of objects, we should be able to go back where the system was at an earlier time. We can see this in the exact solution to the two-level problem, where amplitude oscillates between the two states with a frequency that depends on the coupling. If we reverse the sign of the time, the motion is reversed. In contrast, when a quantum system is in contact with another system having many degrees of freedom, a definite direction emerges to the arrow of time, and the system’s dynamics is no longer reversible. Such irreversible systems are dissipative, meaning they decay in time from a prepared state to a state where phase relationships between the basis states are lost.
• 4.1: Introduction to Dissipative Dynamics
How does irreversible behavior, a hallmark of chemical systems, arise from the deterministic Time Dependent Schrödinger Equation? We will answer this question specifically in the context of quantum transitions from a given energy state of the system to energy states of its surroundings. Qualitatively, such behavior can be expected to arise from destructive interference between oscillatory solutions of the system and the set of closely packed manifold of energy states of the bath.
04: Irreversible Relaxation
How does irreversible behavior, a hallmark of chemical systems, arise from the deterministic Time Dependent Schrödinger Equation? We will answer this question specifically in the context of quantum transitions from a given energy state of the system to energy states of its surroundings. Qualitatively, such behavior can be expected to arise from destructive interference between oscillatory solutions of the system and the set of closely packed manifold of energy states of the bath. To illustrate this point, consider the following calculation for the probability amplitude for an initial state of the system coupled to a finite but growing number of randomly chosen states belonging to the bath.
Here, even with only 100 or 1000 states, recurrences in the initial state amplitude are suppressed by destructive interference between paths. Clearly in the limit that the accepting states are truly continuous, the initial amplitude prepared in $\ell$ will be spread through an infinite number of continuum states. We will look at this more closely by describing the relaxation of an initially prepared state as a result of coupling to a continuum of states of the surroundings. This is common to all dissipative processes in which the surroundings to the system of interest form a continuous band of states.
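A minimal version of the calculation described above can be sketched as follows (the bath bandwidth, target decay rate, and random seed are all illustrative assumptions, with $\hbar = 1$): the state $|\ell\rangle$ is coupled with equal strength to $N$ bath states at random energies, the full Hamiltonian is diagonalized, and the survival probability $|\langle\ell|\psi(t)\rangle|^2$ is followed in time. The coupling is scaled so that the initial decay rate is the same for each $N$; only the recurrences change.

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, W, rate = 1.0, 2.0, 0.05                # bath bandwidth and target Golden Rule rate (assumed)

def survival(N, times):
    """Survival probability of |l> coupled equally to N randomly placed bath states."""
    rho = N / W                                # density of bath states
    V = np.sqrt(rate * hbar / (2 * np.pi * rho))   # coupling chosen so the Golden Rule rate is fixed
    E = np.concatenate(([0.0], rng.uniform(-W / 2, W / 2, N)))   # E_l = 0 plus bath energies
    H = np.diag(E)
    H[0, 1:] = V
    H[1:, 0] = V
    w, U = np.linalg.eigh(H)
    weights = np.abs(U[0, :])**2               # |<a|l>|^2 for each eigenstate |a>
    amp = weights[None, :] * np.exp(-1j * np.outer(times, w) / hbar)
    return np.abs(amp.sum(axis=1))**2          # |<l|psi(t)>|^2

times = np.linspace(0, 200, 800)
for N in (10, 100, 1000):
    P = survival(N, times)
    print(f"N = {N:5d}:  late-time max P_l = {P[len(times) // 2:].max():.3f}")
```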
To begin, let us define a continuum. We are familiar with eigenfunctions being characterized by quantized energy levels, where only discrete values of the energy are allowed. However, this is not a general requirement. Discrete levels are characteristic of particles in bound potentials, but free particles can take on a continuous range of energies given by their momentum,
$E = \dfrac{\langle p^2 \rangle}{2m}.$
The same applies to dissociative potential energy surfaces, and bound potentials in which the energy exceeds the binding energy. For instance, photoionization or photodissociation of a molecule involves a light field coupling a bound state into a continuum. Other examples are common in condensed matter. The intermolecular motions of a liquid, the lattice vibrations of a crystal, or the allowed energies within the band structure of a metal or semiconductor are all examples of a continuum.
For a discrete state embedded in such a continuum, the Golden Rule gives the probability of transition from the system state $| \ell \rangle$ to a continuum state $| k \rangle$ as:
$\overline {w} _ {k \ell} = \frac {\partial \overline {P} _ {k \ell}} {\partial t} = \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \rho \left( E _ {k} = E _ {\ell} \right)$
The transition rate $\overline {w} _ {k \ell}$ is constant in time when $\left| V _ {k \ell} \right|^{2}$ is constant in time, which will be true for short time intervals. Under these conditions, integrating the rate equation on the left gives
\begin{align} \overline {P} _ {k \ell} &= \overline {w} _ {k \ell} \left( t - t _ {0} \right) \[4pt] \overline {P} _ {\ell \ell} &= 1 - \overline {P} _ {k \ell}. \end{align}
The probability of transition to the continuum of bath states varies linearly in time. As we noted, this will clearly only work for times such that
$P _ {k} (t) - P _ {k} ( 0 ) \ll 1.$
What long time behavior do we expect? A time independent rate with population governed by
$\overline {w} _ {k \ell} = \partial \overline {P} _ {k \ell} / \partial t$
is a hallmark of first order kinetics and exponential relaxation. In fact, for exponential relaxation out of a state $\ell$, the short time behavior looks just like the first order result:
\begin{align} \overline {P} _ {\ell \ell} (t) &= \exp \left( - \overline {w} _ {k \ell} t \right) \ &= 1 - \overline {w} _ {k \ell} t + \ldots\label{3.4} \end{align}
So we might believe that $\overline {w} _ {k \ell}$ represents the tangent to the relaxation behavior at $t = 0$. The problem with the first-order result is that it does not account for the depletion of the initial state. In fact, we will see when we look a bit more carefully that the long-time relaxation behavior of state $\ell$ is exponential and governed by the golden rule rate. The decay of the initial state is irreversible because there is feedback with a distribution of destructively interfering phases.
Let’s formulate this problem a bit more carefully. We will look at transitions to a continuum of states $\{k \}$ from an initial state $\ell$ under a constant perturbation.
These together form a complete set; so for
$H (t) = H _ {0} + V (t)$
with $H _ {0} | n \rangle = E _ {n} | n \rangle$.
$1 = \sum _ {n} | n \rangle \langle n | = | \ell \rangle \langle \ell | + \sum _ {k} | k \rangle \langle k | \label{3.5}$
As we go on, you will see that we can identify $\ell$ with the “system” and $\{k \}$ with the “bath” when we partition
$H _ {0} = H _ {S} + H _ {B}.$
Now let’s make some simplifying assumptions. For transitions into the continuum, we will assume that transitions only occur between $\ell$ and states of the continuum, but that there are no interactions between states of the continuum: $\left\langle k | V | k^{\prime} \right\rangle = 0$. This can be rationalized by thinking of this problem as a discrete set of states interacting with a continuum of normal modes. Moreover, we will assume that the coupling of the initial to continuum states is a constant for all states $k$: $\langle \ell | V | k \rangle = \left\langle \ell | V | k^{\prime} \right\rangle = \cdots$. For reasons that we will see later, we will also retain the diagonal matrix element $\langle \ell | V | \ell \rangle = V_{\ell \ell}$. With these assumptions, we can summarize the Hamiltonian for our problem as
\begin{aligned} H(t) &= H_{0} + V(t) \[4pt] H_{0} &= |\ell\rangle E_{\ell}\langle\ell| + \sum_{k}|k\rangle E_{k}\langle k| \[4pt] V(t) &= \sum_{k}\left[ |k\rangle V_{k \ell}\langle\ell| + |\ell\rangle V_{\ell k}\langle k| \right] + |\ell\rangle V_{\ell \ell}\langle\ell| \label{3.6}\end{aligned}
We are seeking a more accurate description of the occupation of the initial and continuum states, for which we will use the interaction picture expansion coefficients
$b _ {k} (t) = \left\langle k \left| U _ {I} \left( t , t _ {0} \right) \right| \ell \right\rangle \label{3.7}$
Earlier, we saw that the exact solution to $U_I$ was:
$U _ {I} \left( t , t _ {0} \right) = 1 - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau V _ {I} ( \tau ) U _ {I} \left( \tau , t _ {0} \right) \label{3.8}$
This form was not very practical, since $U_I$ is a function of itself. For first-order perturbation theory, we set the factor $U _ {I} \left( \tau , t_0 \right)$ in the final term of this equation to 1. Here, in order to keep the feedback between $|\ell \rangle$ and the continuum states, we retain it as is.
$b _ {k} (t) = \langle k | \ell \rangle - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau \left\langle k \left| V _ {I} ( \tau ) U _ {I} \left( \tau , t _ {0} \right) \right| \ell \right\rangle \label{3.9}$
Inserting the projection operator of Equation \ref{3.5} and using Equation \ref{3.7}, and recognizing that $k \neq \ell$,
$b _ {k} (t) = - \frac {i} {\hbar} \sum _ {n} \int _ {t _ {0}}^{t} d \tau e^{i \omega _ {k n} \tau} V _ {k n} b _ {n} ( \tau ) \label{3.10}$
Note, $V_{kn}$ is not a function of time. Equation \ref{3.10} expresses the occupation of state $k$ in terms of the full history of the system from $t _ {0} \rightarrow t$ with amplitude flowing back and forth between the states n. Equation \ref{3.10} is just the integral form of the coupled differential equations that we used before:
$i \hbar \frac {\partial b _ {k}} {\partial t} = \sum _ {n} e^{i \omega _ {k n} t} V _ {k n} b _ {n} (t) \label{3.11}$
These exact forms allow for feedback between all the states, in which the amplitudes $b_k$ depend on all other states. Since you only feed from $\ell$ into $k$, we can remove the summation in Equation \ref{3.10} and express the complex amplitude of a state within the continuum as
$b _ {k} = - \frac {i} {\hbar} V _ {k \ell} \int _ {t _ {0}}^{t} d \tau e^{i \omega _ {k \ell} \tau} b _ {\ell} ( \tau ) \label{3.12}$
We want to calculate the rate of leaving $| \ell \rangle$, including feeding from continuum back into initial state. From Equation \ref{3.11} we can separate terms involving the continuum and the initial state:
$i \hbar \frac {\partial} {\partial t} b _ {\ell} = \sum _ {k \neq \ell} e^{i \omega _ {\ell k} t} V _ {\ell k} b _ {k} + V _ {\ell \ell} b _ {\ell} \label{3.13}$
Now substituting Equation \ref{3.12} into Equation \ref{3.13}, and setting $t_0 =0$:
$\frac {\partial b _ {\ell}} {\partial t} = - \frac {1} {\hbar^{2}} \sum _ {k \neq \ell} \left| V _ {k \ell} \right|^{2} \int _ {0}^{t} b _ {\ell} ( \tau ) e^{i \omega _ {k \ell} ( \tau - t )} d \tau - \frac {i} {\hbar} V _ {\ell \ell} b _ {\ell} (t) \label{3.14}$
This is an integro-differential equation that describes how the time-development of $b_ℓ$ depends on the entire history of the system. Note we have two time variables for the two propagation routes:
$\left. \begin{array} {l} {\tau : | \ell \rangle \rightarrow | k \rangle} \ {t : | k \rangle \rightarrow | \ell \rangle} \end{array} \right. \label{3.15}$
The next assumption is that $b_ℓ$ varies slowly relative to $\omega_{kℓ}$, so we can remove it from the integral. This is effectively a weak coupling statement: $\hbar \omega _ {k \ell} \gg V _ {k \ell}$. $b_\ell$ is a function of time, but since it is in the interaction picture it evolves slowly compared to the $\omega_{kℓ}$ oscillations in the integral.
$\frac {\partial b _ {\ell}} {\partial t} = b _ {\ell} \left[ - \frac {1} {\hbar^{2}} \sum _ {k \neq \ell} \left| V _ {k \ell} \right|^{2} \int _ {0}^{t} e^{i \omega _ {k \ell} ( \tau - t )} d \tau - \frac {i} {\hbar} V _ {\ell \ell} \right] \label{3.16}$
Now we want the long-time evolution of $b_\ell$; for times $\omega _ {k \ell} t \gg 1$, we will investigate the integration limit $t \rightarrow \infty$.
Note
Complex integration of Equation \ref{3.16}: Defining $t^{\prime} = t - \tau$
$\int _ {0}^{t} e^{i \omega _ {k \ell} ( \tau - t )} d \tau = \int _ {0}^{t} e^{- i \omega _ {k \ell} t^{\prime}} d t^{\prime} \label{3.17}$
The integral $\lim _ {T \rightarrow \infty} \int _ {0}^{T} e^{- i \omega t^{\prime}} d t^{\prime}$ is purely oscillatory and not well behaved. The strategy to solve this is to add an infinitesimal convergence factor and integrate:
\begin{align} \lim _ {\varepsilon \rightarrow 0^{+}} \int _ {0}^{\infty} e^{- ( i \omega + \varepsilon ) t^{\prime}} d t^{\prime} & = \lim _ {\varepsilon \rightarrow 0^{+}} \frac {1} {i \omega + \varepsilon} \ & = \lim _ {\varepsilon \rightarrow 0^{+}} \left( \frac {\varepsilon} {\omega^{2} + \varepsilon^{2}} - i \frac {\omega} {\omega^{2} + \varepsilon^{2}} \right) \ & = \pi \delta ( \omega ) - i \mathbb {P} \frac {1} {\omega} \label{3.19} \end{align}
(This expression is valid when used under an integral) In the final term we have written in terms of the Cauchy Principle Part:
$\mathbb {P} \left( \frac {1} {x} \right) = \left\{\begin{array} {l l} {\frac {1} {x}} & {x \neq 0} \ {0} & {x = 0} \end{array} \right. \label{3.20}$
Using Equation \ref{3.19}, Equation \ref{3.16} becomes \ref{3.21}
$\frac {\partial b _ {\ell}} {\partial t} = b _ {\ell} \left[ - \underbrace {\frac {\pi} {\hbar^{2}} \sum _ {k \neq \ell} \left| V _ {k \ell} \right|^{2} \delta \left( \omega _ {k \ell} \right)} _ {\text {term 1}} - \frac {i} {\hbar} \underbrace{ \left( V _ {\ell \ell} + \mathbb {P} \sum _ {k \neq \ell} \frac {\left| V _ {k \ell} \right|^{2}} {E _ {k} - E _ {\ell}} \right)} _ {\text {term 2}} \right] \label{3.21}$
Note that Term 1 is just the Golden Rule rate, written explicitly as a sum over continuum states instead of an integral
$\sum _ {k \neq \ell} \delta \left( \omega _ {k \ell} \right) \Rightarrow \hbar \rho \left( E _ {k} = E _ {\ell} \right) \label{3.22}$
$\overline {w} _ {k \ell} = \int d E _ {k} \rho \left( E _ {k} \right) \left[ \frac {2 \pi} {\hbar} \left| V _ {k l} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) \right] \label{3.23}$
Term 2 is just the correction of the energy of $E_ℓ$ from second-order time-independent perturbation theory,
$\Delta E _ {\ell} = \langle \ell | V | \ell \rangle + \sum _ {k \neq \ell} \frac {\left| \langle k | V | \ell \rangle \right|^{2}} {E _ {k} - E _ {\ell}} \label{3.25}$
So, the time evolution of $b _ {\ell}$ is governed by a simple first-order differential equation
$\frac{\partial b_{\ell}}{\partial t}=b_{\ell}\left(-\frac{\bar{w}_{k \ell}}{2}-\frac{i}{\hbar} \Delta E_{\ell}\right)$
Which can be solved with $b _ {\ell} ( 0 ) = 1$ to give
$b _ {\ell} (t) = \exp \left( - \frac {\overline {w} _ {k l} t} {2} - \frac {i} {\hbar} \Delta E _ {\ell} t \right) \label{3.26}$
We see that one has exponential decay of the amplitude $b _ {\ell}$! This is irreversible relaxation resulting from coupling to the continuum. Now, since the Schrödinger picture amplitude also carries the phase evolution under $H_0$, we switch from the interaction picture back to the Schrödinger picture,
$c _ {\ell} (t) = \exp \left[ - \left( \frac {\overline {w} _ {k \ell}} {2} + i \frac {E _ {\ell}^{\prime}} {\hbar} \right) t \right] \label{3.27}$
with the corrected energy
$E _ {\ell}^{\prime} \equiv E _ {\ell} + \Delta E \label{3.28}$
and
$P _ {\ell} = \left| c _ {\ell} \right|^{2} = \exp \left[ - \overline {w} _ {k \ell} t \right] \label{3.29}$
The solutions to the Time Dependent Schrödinger Equation are expected to be complex and oscillatory. What we see here is a real dissipative component and an imaginary dispersive component. The probability decays exponentially from the initial state. Fermi’s Golden Rule rate tells you about long times!
Now, what is the probability of appearing in any of the states $|k \rangle$? Using Equation \ref{3.12}:
$b _ {k} (t) = - \frac {i} {\hbar} \int _ {0}^{t} V _ {k \ell} e^{i \omega _ {k \ell} \tau} b _ {\ell} ( \tau ) d \tau$
$= V _ {k \ell} \frac {1 - \exp \left( - \frac {\overline {w} _ {k \ell}} {2} t - \frac {i} {\hbar} \left( E _ {\ell}^{\prime} - E _ {k} \right) t \right)} {E _ {k} - E _ {\ell}^{\prime} + i \hbar \overline {w} _ {k \ell} / 2}$
$= V _ {k \ell} \frac {1 - c _ {\ell} (t)} {E _ {k} - E _ {\ell}^{\prime} + i \hbar \overline {w} _ {k \ell} / 2} \label{3.30}$
If we investigate the long time limit ($t \rightarrow \infty$), noting that $P _ {k \ell} = \left| b _ {k} \right|^{2}$, we find
$P _ {k \ell} = \frac {\left| V _ {k \ell} \right|^{2}} {\left( E _ {k} - E _ {\ell}^{\prime} \right)^{2} + \Gamma^{2} / 4} \label{3.31}$
with
$\Gamma \equiv \overline {w} _ {k \ell} \cdot \hbar \label{3.32}$
The probability distribution for occupying states within the continuum is described by a Lorentzian distribution with maximum probability centered at the corrected energy of the initial state $E _ {\ell}^{\prime}$. The width of the distribution is given by the relaxation rate, which reflects $\left| V _ {k \ell} \right|^{2} \rho \left( E _ {\ell} \right)$, the coupling to the continuum and the density of states.
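As a quick numerical check (a sketch; the coupling and density of states are arbitrary assumptions, with $\hbar = 1$), one can evaluate Equation \ref{3.31} on a grid of final-state energies and confirm that the Lorentzian has a full width of $\Gamma = \hbar\overline{w}_{k\ell}$ and that the probability summed over the continuum is approximately one.

```python
import numpy as np

hbar = 1.0
V, rho = 0.01, 500.0                           # coupling and density of states (assumed)
w_rate = 2 * np.pi * V**2 * rho / hbar         # Golden Rule rate
Gamma = hbar * w_rate                          # Lorentzian width, Eq. 3.32
E_l = 0.0                                      # corrected initial-state energy (assumed)

E_k = np.linspace(-5, 5, 1000001)
P_kl = V**2 / ((E_k - E_l)**2 + Gamma**2 / 4)  # Eq. 3.31

print("FWHM of P_kl =", Gamma)
print("total probability over the continuum =", np.trapz(P_kl, E_k) * rho)   # approximately 1
```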
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; p. 1344.
2. Merzbacher, E., Quantum Mechanics. 3rd ed.; Wiley: New York, 1998; p. 510.
3. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; p. 305.
4. Schatz, G. C.; Ratner, M. A., Quantum Mechanics in Chemistry. Dover Publications: Mineola, NY, 2002; Ch. 9.
The density matrix or density operator is an alternate representation of the state of a quantum system for which we have previously used the wavefunction. Although describing a quantum system with the density matrix is equivalent to using the wavefunction, one gains significant practical advantages using the density matrix for certain time-dependent problems—particularly relaxation and nonlinear spectroscopy in the condensed phase.
05: The Density Matrix
The density matrix or density operator is an alternate representation of the state of a quantum system for which we have previously used the wavefunction. Although describing a quantum system with the density matrix is equivalent to using the wavefunction, one gains significant practical advantages using the density matrix for certain time-dependent problems—particularly relaxation and nonlinear spectroscopy in the condensed phase.
The density matrix is defined as the outer product of the wavefunction with its conjugate.
$\rho (t) \equiv | \psi (t) \rangle \langle \psi (t) | \label{4.1}$
This implies that if you specify a state $| x \rangle$, then $\langle x | \rho | x \rangle$ gives the probability of finding a particle in the state $| x \rangle$. Its name derives from the observation that it plays the quantum role of a probability density. If you think of the statistical description of a classical observable obtained from moments of a probability distribution $P$, then $ρ$ plays the role of $P$ in the quantum case:
\begin{align} \langle A \rangle &= \int A P ( A ) d A \label{4.2} \[4pt] &= \langle \psi | A | \psi \rangle = \operatorname {Tr} [ A \rho ] \label{4.3} \end{align}
where $Tr[…]$ refers to tracing over the diagonal elements of the matrix,
$T r [ \cdots ] = \sum _ {a} \langle a | \cdots | a \rangle.$
The last expression is obtained as follows. If the wavefunction for the system is expanded as
$| \psi (t) \rangle = \sum _ {n} c _ {n} (t) | n \rangle \label{4.4}$
the expectation value of an operator is
$\langle \hat {A} (t) \rangle = \sum _ {n , m} c _ {n} (t) c _ {m} ^{*} (t) \langle m | \hat {A} | n \rangle \label{4.5}$
Also, from Equation \ref{4.1} we obtain the elements of the density matrix as
\left.\begin{aligned} \rho (t) & {= \sum _ {n , m} c _ {n} (t) c _ {m} ^{*} (t) | n \rangle \langle m |} \[4pt] & {\equiv \sum _ {n , m} \rho _ {n m} (t) | n \rangle \langle m |} \end{aligned} \right. \label{4.6}
We see that $\rho_{nm}$, the density matrix elements, are made up of the time-evolving expansion coefficients. Substituting into Equation \ref{4.5} we see that
\left.\begin{aligned} \langle \hat {A} (t) \rangle & = \sum _ {n , m} A _ {m n} \rho _ {m n} (t) \[4pt] & = \operatorname {Tr} [ \hat {A} \rho (t) ] \end{aligned} \right. \label{4.7}
In practice this makes evaluating expectation values as simple as tracing over a product of matrices.
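As a concrete check of Equation \ref{4.7} (a sketch with an arbitrary two-level superposition and observable, chosen only for illustration):

```python
import numpy as np

# arbitrary normalized two-level superposition and Hermitian observable (illustrative only)
c = np.array([0.6, 0.8 * np.exp(0.5j)])
A = np.array([[0.0, 1.0], [1.0, 0.0]])

rho = np.outer(c, c.conj())                    # density matrix rho_nm = c_n c_m*
print(np.trace(A @ rho))                       # Tr[A rho]
print(c.conj() @ A @ c)                        # <psi|A|psi>, identical
```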
What information is in the density matrix elements, $\rho_{nm}$? The diagonal elements ($n = m$) give the probability of occupying a quantum state:
$\rho _ {n n} = c _ {n} c _ {n} ^{*} = p _ {n} \geq 0 \label{4.8}$
For this reason, diagonal elements are referred to as populations. The off-diagonal elements ($n \neq m$) are complex and have a time-dependent phase factor
$\rho _ {n m} = c _ {n} (t) c _ {m} ^{*} (t) = c _ {n} c _ {m} ^{*} \mathrm {e} ^{- i \omega _ {nm} t} \label{4.9}$
Since these describe the coherent oscillatory behavior of coherent superpositions in the system, these are referred to as coherences.
So why would we need the density matrix? It becomes a particularly important tool when dealing with mixed states, which we take up later. Mixed states refer to statistical mixtures in which we have imperfect information about the system, for which we must perform statistical averages in order to describe quantum observables. For mixed states, calculations with the density matrix are greatly simplified. Given that you have a statistical mixture, and can describe $p_k$, the probability of occupying quantum state $| \psi _ {k} \rangle$, evaluation of expectation values is simplified with a density matrix:
$\langle \hat {A} (t) \rangle = \sum _ {k} p _ {k} \left\langle \psi _ {k} (t) | \hat {A} | \psi _ {k} (t) \right\rangle \label{4.10}$
$\rho (t) \equiv \sum _ {k} p _ {k} | \psi _ {k} (t) \rangle \langle \psi _ {k} (t) | \label{4.11}$
$\langle \hat {A} (t) \rangle = \operatorname {Tr} [ \hat {A} \rho (t) ] \label{4.12}$
Evaluating expectation value is the same for pure or mixed states.
Properties of the Density Matrix
We can now summarize some properties of the density matrix, which follow from the definitions above:
1. $\rho$ is Hermitian since $\rho _ {n m} ^{*} = \rho _ {m n}$
2. Since probability must be normalized, $\operatorname {Tr} ( \rho ) = 1$
3. We can ascertain the degree of “pure-ness” of a quantum state from $\operatorname {Tr} \left( \rho ^{2} \right) \left\{\begin{array} {l} {= 1 \text { for pure state}} \[4pt] {< 1 \text { for mixed state}} \end{array} \right.$
In addition, when working with the density matrix it is convenient to make note of these trace properties:
1. The trace over a product of matrices is invariant to cyclic permutation of the matrices: $\operatorname {Tr} ( A B C ) = \operatorname {Tr} ( C A B ) = \operatorname {Tr} ( B C A )$
2. From this result we see that the trace is invariant to unitary transformation: $\operatorname {Tr} \left( S ^{\dagger} A S \right) = \operatorname {Tr} \left( S ^{- 1} A S \right) = \operatorname {Tr} ( A )$
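These properties are easy to verify numerically (a sketch with arbitrary example states and random matrices, for illustration only):

```python
import numpy as np

# a pure superposition state and an equal statistical mixture (illustrative example)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
rho_mixed = np.diag([0.5, 0.5])

for name, rho in (("pure ", rho_pure), ("mixed", rho_mixed)):
    print(name, "Tr(rho) =", np.trace(rho).real, " Tr(rho^2) =", np.trace(rho @ rho).real)

# cyclic invariance of the trace, checked with random matrices
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
print(np.allclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))   # True
```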
The equation of motion for the density matrix follows naturally from the definition of $\rho$ and the time-dependent Schrödinger equation.
\begin{align} \dfrac {\partial \rho} {\partial t} &= \dfrac {\partial} {\partial t} [ | \psi \rangle \langle \psi | ] \[4pt] &= \left[ \dfrac {\partial} {\partial t} | \psi \rangle \right] \langle \psi | + | \psi \rangle \dfrac {\partial} {\partial t} \langle \psi | \[4pt] &= \dfrac {- i} {\hbar} H | \psi \rangle \langle \psi | + \dfrac {i} {\hbar} | \psi \rangle \langle \psi | H . \label{4.13} \[4pt] &= \dfrac {- i} {\hbar} [ H , \rho ] \label{4.14} \end{align}
Equation \ref{4.14} is the Liouville-Von Neumann equation. It is isomorphic to the Heisenberg equation of motion, since $ρ$ is also an operator. The solution to Equation \ref{4.14} is
$\rho (t) = U \rho ( 0 ) U^{\dagger} \label{4.15}$
This can be demonstrated by first integrating Equation \ref{4.14} to obtain
$\rho (t) = \rho ( 0 ) - \dfrac {i} {\hbar} \int _ {0}^{t} d \tau [ H ( \tau ) , \rho ( \tau ) ] \label{4.16}$
If we expand Equation \ref{4.16} by iteratively substituting into itself, the expression is the same as when we substitute
$U = \exp _ {+} \left[ - \dfrac {i} {\hbar} \int _ {0}^{t} d \tau H ( \tau ) \right] \label{4.17}$
into Equation \ref{4.15} and collect terms by orders of $H(\tau)$.
Note that Equation \ref{4.15} and the cyclic invariance of the trace imply that the time-dependent expectation value of an operator can be calculated either by propagating the operator (Heisenberg) or the density matrix (Schrödinger or interaction picture):
\left.\begin{aligned} \langle \hat {A} (t) \rangle & = \operatorname {Tr} [ \hat {A} \rho (t) ] \[4pt] & = \operatorname {Tr} \left[ \hat {A} U \rho _ {0} U^{\dagger} \right] \[4pt] & = \operatorname {Tr} \left[ \hat {A} (t) \rho _ {0} \right] \end{aligned} \right. \label{4.18}
For a time-independent Hamiltonian it is straightforward to show that the density matrix elements evolve as
\begin{align} \rho _ {n m} (t) &= \langle n | \rho (t) | m \rangle \[4pt] &= \left\langle n | U | \psi _ {0} \right\rangle \left\langle \psi _ {0} \left| U^{\dagger} \right| m \right\rangle \label{4.19} \[4pt] &= e^{- i \omega _ {n m} \left( t - t _ {0} \right)} \rho _ {n m} \left( t _ {0} \right) \label{4.20} \end{align}
From this we see that populations, $\rho _ {n n} (t) = \rho _ {n n} \left( t _ {0} \right)$, are time-invariant, and coherences oscillate at the energy splitting $\omega _ {n m}$.
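This behavior can be checked directly for a time-independent two-level Hamiltonian (a sketch; the energies and the initial superposition are arbitrary assumptions, with $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.5])                       # eigenenergies of H (assumed)
psi0 = np.array([0.6, 0.8], dtype=complex)     # initial superposition (assumed)
rho0 = np.outer(psi0, psi0.conj())

for t in (0.0, 1.0, 2.0):
    U = np.diag(np.exp(-1j * E * t / hbar))    # U = exp(-iHt/hbar) in the eigenbasis
    rho_t = U @ rho0 @ U.conj().T              # Eq. 4.15
    print(f"t = {t}: populations = {np.real(np.diag(rho_t))},"
          f" coherence phase = {np.angle(rho_t[0, 1]):+.3f}")
```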
5.03: The Density Matrix in the Interaction Picture
For the case in which we wish to describe a material Hamiltonian $H_0$ under the influence of an external potential $V(t)$,
$H (t) = H _ {0} + V (t) \label{4.21}$
we can also formulate the density operator in the interaction picture, $\rho_I$. From our original definition of the interaction picture wavefunctions
$| \psi _ {I} \rangle = U _ {0}^{\dagger} | \psi _ {S} \rangle \label{4.22}$
We obtain $\rho_I$ as
$\rho _ {I} = U _ {0}^{\dagger} \rho _ {S} U _ {0} \label{4.23}$
Similar to the discussion of the density operator in the Schrödinger equation, above, the equation of motion in the interaction picture is
$\dfrac {\partial \rho _ {I}} {\partial t} = - \dfrac {i} {\hbar} \left[ V _ {I} (t) , \rho _ {I} (t) \right] \label{4.24}$
where, as before, $V _ {I} = U _ {0}^{\dagger} V U _ {0}$.
Equation \ref{4.24} can be integrated to obtain
$\rho _ {I} (t) = \rho _ {I} \left( t _ {0} \right) - \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} \left[ V _ {I} \left( t^{\prime} \right) , \rho _ {I} \left( t^{\prime} \right) \right] \label{4.25}$
Repeated substitution of $\rho _ {I} (t)$ into itself in this expression gives a perturbation series expansion
\begin{align} \rho _ {I} (t) &= \rho _ {0} - \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t _ {1} \left[ V _ {I} \left( t _ {1} \right) , \rho _ {0} \right] \[4pt] & + \left( - \dfrac {i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d t _ {2} \int _ {t _ {0}}^{t _ {2}} d t _ {1} \left[ V _ {I} \left( t _ {2} \right) , \left[ V _ {I} \left( t _ {1} \right) , \rho _ {0} \right] \right] + \cdots \[4pt] & + \left( - \dfrac {i} {\hbar} \right)^{n} \int _ {t _ {0}}^{t} d t _ {n} \int _ {t _ {0}}^{t _ {n}} d t _ {n - 1} \cdots \int _ {t _ {0}}^{t _ {2}} d t _ {1} \left[ V _ {I} \left( t _ {n} \right) , \left[ V _ {I} \left( t _ {n - 1} \right) , \left[ \cdots , \left[ V _ {I} \left( t _ {1} \right) , \rho _ {0} \right] \right] \right] \right] + \cdots \label{4.26}\[4pt] &= \rho^{( 0 )} + \rho^{( 1 )} + \rho^{( 2 )} + \cdots + \rho^{( n )} + \cdots \label{4.27} \end{align}
Here $\rho _ {0} = \rho \left( t _ {0} \right)$ and $\rho^{( n )}$ is the nth-order expansion of the density matrix. This perturbative expansion will play an important role later in the description of nonlinear spectroscopy. An nth order expansion term will be proportional to the observed polarization in an nth order nonlinear spectroscopy, and the commutators observed in Equation \ref{4.26} are proportional to nonlinear response functions. Similar to Equation \ref{4.15}, Equation \ref{4.26} can also be expressed as
$\rho _ {I} (t) = U _ {0} \rho _ {I} ( 0 ) U _ {0}^{\dagger} \label{4.28}$
This is the solution to the Liouville equation in the interaction picture. In describing the time-evolution of the density matrix, particularly when describing relaxation processes later, it is useful to use a superoperator notation to simplify the expressions above. The Liouville equation can be written in shorthand in terms of the Liouvillian superoperator $\hat {\hat {\mathcal {L}}}$
$\dfrac {\partial \hat {\rho} _ {I}} {\partial t} = \dfrac {- i} {\hbar} \hat {\hat{\mathcal {L}}} \hat {\rho} _ {I} \label{4.29}$
where $\hat {\hat {\mathcal {L}}}$ is defined in the Schrödinger picture as
$\hat {\hat {L}} \hat {A} \equiv [ H , \hat {A} ] \label{4.30}$
Similarly, the time propagation described by Equation \ref{4.28} can also be written in terms of a superoperator $\hat {\boldsymbol {\hat {G}}}$, the time-propagator, as
$\rho _ {I} (t) = \hat {\hat {G}} (t) \rho _ {I} ( 0 ) \label{4.31}$
$\hat {\boldsymbol {\hat {G}}}$ is defined in the interaction picture as
$\hat {\hat {G}} \hat {A} _ {I} \equiv U _ {0} \hat {A} _ {I} U _ {0}^{\dagger} \label{4.32}$
Given the eigenstates of $H_0$, the propagation for a particular density matrix element is
\begin{align} \hat {\hat{G}} (t) \rho _ {a b} & = e^{- i H _ {0} t / \hbar} | a \rangle \langle b | e^{i H _ {0} t / \hbar} \[4pt] &= e^{- i \omega _ {a b} t} | a \rangle \langle b | \end{align} \label{4.33}
Using the Liouville space time-propagator, the evolution of the density matrix to arbitrary order in Equation \ref{4.26} can be written as
$\rho _ {I}^{( n )} = \left( - \dfrac {i} {\hbar} \right)^{n} \int _ {t _ {0}}^{t} d t _ {n} \int _ {t _ {0}}^{t _ {n}} d t _ {n - 1} \ldots \int _ {t _ {0}}^{t _ {2}} d t _ {1} \hat {G} \left( t - t _ {n} \right) V \left( t _ {n} \right) \hat {G} \left( t _ {n} - t _ {n - 1} \right) V \left( t _ {n - 1} \right) \cdots \hat {G} \left( t _ {2} - t _ {1} \right) V \left( t _ {1} \right) \rho _ {0} \label{4.34}$
In quantum mechanics, the adiabatic approximation refers to those solutions to the Schrödinger equation that make use of a time-scale separation between fast and slow degrees of freedom, and use this to find approximate solutions as product states in the fast and slow degrees of freedom. Perhaps the most fundamental and commonly used version is the Born–Oppenheimer (BO) approximation, which underlies much of how we conceive of molecular electronic structure and is the basis of potential energy surfaces. The BO approximation assumes that the motion of electrons is much faster than that of the nuclei due to their large difference in mass, and therefore electrons adapt very rapidly to any changes in nuclear geometry. That is, the electrons “adiabatically follow” the nuclei. As a result, we can solve for the electronic state of a molecule for fixed nuclear configurations. Gradually stepping nuclear configurations and solving for the energy leads to a potential energy surface, or adiabatic state. Much of our description of chemical reaction dynamics is presented in terms of propagation on these potential energy surfaces. The barriers on these surfaces are how we describe the rates of chemical reactions and transition states. The trajectories along these surfaces are used to describe mechanism.
More generally, the adiabatic approximation can be applied in other contexts in which there is a time-scale separation between fast and slow degrees of freedom, for instance in the study of vibrational dynamics, where the bond vibrations of molecules occur much faster than the intermolecular motions of a liquid or solid. It is also generally implicit in a separation of the Hamiltonian into a system and a bath, a method we will often use to solve condensed matter problems. As widely used as the adiabatic approximation is, there are times when it breaks down, and it is important to understand when this approximation is valid and what the consequences are when it is not. This will be particularly important for describing time-dependent quantum mechanical processes involving transitions between potential energy surfaces.
• 6.1: Born–Oppenheimer Approximation
Exact solutions using the molecular Hamiltonian are intractable for most problems of interest, so we turn to simplifying approximations. The BO approximation is motivated by noting that the nuclei are far more massive than an electron. When the distances separating the particles are not unusually small, the kinetic energy of the nuclei is small relative to the other terms in the Hamiltonian. This means that the electrons move and adapt rapidly—adiabatically—in response to shifting nuclear positions.
• 6.2: Nonadiabatic Effects
Even without the BO approximation, we note that the nuclear-electronic product states form a complete basis in which to express the total vibronic wavefunction. The resulting Hamiltonian, referred to as the coupled channel Hamiltonian, contains terms that describe deviations from the BO approximation and are referred to as nonadiabatic terms. These depend on the spatial gradient of the wavefunction in the region of interest, and act to couple adiabatic Born–Oppenheimer states.
• 6.3: Diabatic and Adiabatic States
Although the Born–Oppenheimer surfaces are the most straightforward and commonly calculated, they may not be the most chemically meaningful states.
• 6.4: Adiabatic and Nonadiabatic Dynamics
The BO approximation never explicitly addresses electronic or nuclear dynamics, but neglecting the nuclear kinetic energy to obtain potential energy surfaces has implicit dynamical consequences.
• 6.5: Landau–Zener Transition Probability
The adiabatic approximation has significant limitations in the vicinity of curve crossings. This phenomenon is better described through transitions between diabatic surfaces. The Landau–Zener expression gives the transition probabilities as a result of propagating through the crossing between diabatic surfaces.
06: Adiabatic Approximation
As a starting point, it is helpful to review the Born–Oppenheimer Approximation (BOA). For a molecular system, the Hamiltonian can be written in terms of the kinetic energy of the nuclei ($N$) and electrons ($e$) and the potential energy for the Coulomb interactions of these particles.
\begin{align} \hat {H} &= \hat {T} _ {e} + \hat {T} _ {N} + \hat {V} _ {e e} + \hat {V} _ {N N} + \hat {V} _ {e N} \\[4pt] &= - \sum _ {i = 1}^{n} \frac {\hbar^{2}} {2 m _ {e}} \nabla _ {i}^{2} - \sum _ {J = 1}^{N} \frac {\hbar^{2}} {2 M _ {J}} \nabla _ {J}^{2} + \frac {e^{2}} {4 \pi \varepsilon _ {0}} \left( \sum _ {i > j} \frac {1} {\left| \mathbf {r} _ {i} - \mathbf {r} _ {j} \right|} + \sum _ {J > K} \frac {Z _ {J} Z _ {K}} {\left| \mathbf {R} _ {J} - \mathbf {R} _ {K} \right|} - \sum _ {i , J} \frac {Z _ {J}} {\left| \mathbf {r} _ {i} - \mathbf {R} _ {J} \right|} \right) \label{5.1} \end{align}
Here and in the following, we will use lowercase variables to refer to electrons and uppercase to nuclei. The variables $n$, $i$, $\mathbf {r}$, $\nabla _ {r}^{2}$, and $m_e$ refer to the number, index, position, Laplacian, and mass of electrons, respectively, and $N$, $J$, $\mathbf {R}$, and $M$ refer to the nuclei. $e$ is the electron charge, and $Z$ is the atomic number of the nucleus. Note, this Hamiltonian does not include relativistic effects such as spin-orbit coupling.
The time-independent Schrödinger equation is
$\hat {H} ( \hat {\mathbf {r}} , \hat {\mathbf {R}} ) \Psi ( \hat {\mathbf {r}} , \hat {\mathbf {R}} ) = E \Psi ( \hat {\mathbf {r}} , \hat {\mathbf {R}} ) \label{5.2}$
$\Psi ( \hat {\mathbf {r}} , \hat {\mathbf {R}} )$ is the total vibronic wavefunction, where “vibronic” refers to the combined electronic and nuclear eigenstates. Exact solutions using the molecular Hamiltonian are intractable for most problems of interest, so we turn to simplifying approximations. The BO approximation is motivated by noting that the nuclei are far more massive than an electron ($m_e \ll M_J$). With this criterion, and when the distances separating the particles are not unusually small, the kinetic energy of the nuclei is small relative to the other terms in the Hamiltonian. Physically, this means that the electrons move and adapt rapidly—adiabatically—in response to shifting nuclear positions. This offers an avenue to solving for $\Psi$ by fixing the position of the nuclei, solving for the electronic wavefunctions $\psi_i$, and then iterating for varying $\boldsymbol {R}$ to obtain effective electronic potentials on which the nuclei move.
Since it is fixed for the electronic calculation, we proceed by treating $\mathbf {R}$ as a parameter rather than an operator, set $\hat{T}_N$ to 0, and solve the electronic TISE:
$\hat {H} _ {e l} ( \hat {\mathbf {r}} , \mathbf {R} ) \psi _ {i} ( \hat {\mathbf {r}} , \mathbf {R} ) = U _ {i} ( \mathbf {R} ) \psi _ {i} ( \hat {\mathbf {r}} , \mathbf {R} ) \label{5.3}$
$U_i$ are the electronic energy eigenvalues for the fixed nuclei, and the electronic Hamiltonian in the BO approximation is
$\hat {H} _ {e l} = \hat {T} _ {e} + \hat {V} _ {e e} + \hat {V} _ {e N} \label{5.4}$
In Equation \ref{5.3}, $\psi_i$ is the electronic wavefunction for fixed $\mathbf {R}$, with $i = 0$ referring to the electronic ground state. Repeating this calculation for varying $\mathbf {R}$, we obtain $U_i(R)$, an effective or mean-field potential for the electronic states on which the nuclei can move. These effective potentials are known as Born–Oppenheimer or adiabatic potential energy surfaces (PES).
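This scanning procedure can be mimicked with a minimal numerical sketch. The two-state electronic Hamiltonian below uses made-up diabatic curves and a constant coupling and is purely illustrative, not a molecular calculation; diagonalizing it at each fixed $R$ plays the role of solving Equation \ref{5.3}, and the eigenvalues traced over $R$ are the adiabatic potential energy surfaces.

```python
import numpy as np

# Minimal sketch (illustrative model, not a molecular calculation): a two-state
# electronic Hamiltonian with made-up diabatic curves and a constant coupling.
# Diagonalizing H_el(R) at each fixed R mimics the BO procedure of Eq. (5.3).
def H_el(R, V_ab=0.05):
    Ua = 0.5 * (R - 1.0)**2           # "bound" diabatic curve (harmonic, illustrative)
    Ub = 0.4 + 0.3 * np.exp(-2.0*R)   # "repulsive" diabatic curve (illustrative)
    return np.array([[Ua, V_ab],
                     [V_ab, Ub]])

R_grid = np.linspace(0.0, 3.0, 301)
U = np.array([np.linalg.eigvalsh(H_el(R)) for R in R_grid])
# U[:, 0] and U[:, 1] are the ground- and excited-state adiabatic PES on this grid.
```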
For the nuclear degrees of freedom, we can define a Hamiltonian for the $i$th electronic PES:
$\hat {H} _ {N u c , i} = \hat {T} _ {N} + U _ {i} ( \hat {R} ) \label{5.5}$
which satisfies a TISE for the nuclear wave functions $\Phi(R)$ :
$\hat {H} _ {N u c , i} \Phi _ {i J} ( R ) = E _ {i J} \Phi _ {i J} ( R ) \label{5.6}$
Here $J$ refers to the $J$th eigenstate for nuclei evolving on the $i$th PES. Equation \ref{5.5} is referred to as the BO Hamiltonian.
The BOA effectively separates the nuclear and electronic contributions to the wavefunction, allowing us to express the total wavefunction as a product of these contributions
$\Psi ( \mathbf {r} , \mathbf {R} ) = \Phi ( \mathbf {R} ) \psi ( \mathbf {r} , \mathbf {R} )$ and the eigenvalues as sums of the electronic and nuclear contributions:
$E = E _ {N} + E _ {e}.$ The BOA does not treat the nuclei classically. However, it is the basis for semiclassical dynamics methods in which the nuclei evolve classically on a potential energy surface and interact with quantum electronic states. If we treat the nuclear dynamics classically, then the electronic Hamiltonian can be thought of as depending on $\mathbf {R}$ or, equivalently, on time through the nuclear velocities or momenta. If the nuclei move infinitely slowly, the electrons will adiabatically follow the nuclei, and a system prepared in an electronic eigenstate will remain in that eigenstate for all times.
6.02: Nonadiabatic Effects
Even without the BO approximation, we note that the nuclear-electronic product states form a complete basis in which to express the total vibronic wavefunction:
$\Psi ( \mathbf {r} , \mathbf {R} ) = \sum _ {i , J} c _ {i , J} \Phi _ {i , J} ( \mathbf {R} ) \psi _ {i} ( \mathbf {r} , \mathbf {R} ) \label{5.7}$
We can therefore use this form to investigate the consequences of the BO approximation. For a given vibronic state, the action of the Hamiltonian on the wavefunction in the TISE is
$\hat {H} \Psi _ {i , J} = \left( \hat {T} _ {N} ( \mathbf {R} ) + \hat {H} _ {e l} ( \mathbf {R} ) \right) \Phi _ {i , J} ( \mathbf {R} ) \psi _ {i} ( \mathbf {r} , \mathbf {R} ) \label{5.8}$
Expanding the Laplacian in the nuclear kinetic energy via the chain rule as
$\nabla^{2} A B = \left( \nabla^{2} A \right) B + 2 ( \nabla A ) \nabla B + A \nabla^{2} B, \nonumber$
we obtain an expression with three terms
\begin{align} \hat {H} \Psi _ {i , J} & = \Phi _ {i , J} ( \mathbf {R} ) \left( \hat {T} _ {N} ( \mathbf {R} ) + U _ {i} ( \mathbf {R} ) \right) \psi _ {i} ( \mathbf {R} ) \nonumber \[4pt] & - \sum _ {J} \frac {\hbar^{2}} {M _ {J}} \nabla _ {R} \Phi _ {i , J} ( \mathbf {R} ) \nabla _ {R} \psi _ {i} ( \mathbf {R} ) \nonumber \[4pt] & - \sum _ {J} \frac {\hbar^{2}} {2 M _ {J}} \Phi _ {i , J} ( \mathbf {R} ) \nabla _ {R}^{2} \psi _ {i} ( \mathbf {R} ) \label{5.9} \end{align}
This expression is exact for vibronic problems, and is referred to as the coupled channel Hamiltonian. Note that if we set the last two terms in Equation \ref{5.9} to zero, we are left with $\hat {H} = \hat {T} _ {N} + U$ which is just the Hamiltonian we used in the Born–Oppenheimer approximation, Equation 6.1.7. Therefore, the last two terms describe deviations from the BO approximation, and are referred to as nonadiabatic terms. These depend on the spatial gradient of the wavefunction in the region of interest, and act to couple adiabatic Born–Oppenheimer states. The coupled channel Hamiltonian has a form that is reminiscent of a perturbation theory Hamiltonian in which the Born–Oppenheimer states play the role of the zero-order Hamiltonian being perturbed by a nonadiabatic coupling
$\hat {H} = \hat {H} _ {B O} + \hat {V} \label{5.10}$
To investigate this relationship further, it is helpful to write this Hamiltonian in its matrix form. We obtain the matrix elements by sandwiching the Hamiltonian between two projection operators and evaluating
$\hat {H} _ {i , I , j , J} = \iint d \mathbf {r} \, d \mathbf {R} \, \Psi _ {i , I}^{*} ( \mathbf {r} , \mathbf {R} ) \hat {H} ( \mathbf {r} , \mathbf {R} ) \Psi _ {j , J} ( \mathbf {r} , \mathbf {R} ) \label{5.11}$
Making use of Equation \ref{5.9} we find that the Hamiltonian can be expressed in three terms
\begin{align} \hat {H} _ {i , I , j , J} &= \int d \mathbf {R} \, \Phi _ {i , I} ( \mathbf {R} ) \left( \hat {T} _ {N} ( \mathbf {R} ) + U _ {j} ( \mathbf {R} ) \right) \Phi _ {j , J} ( \mathbf {R} ) \, \delta _ {i , j} \\[4pt] &\quad - \sum _ {I} \frac {\hbar^{2}} {M _ {I}} \int d \mathbf {R} \, \Phi _ {i , I} ( \mathbf {R} ) \left( \nabla _ {R} \Phi _ {j , J} ( \mathbf {R} ) \right) \cdot \mathbf {F} _ {i j} ( \mathbf {R} ) \\[4pt] &\quad - \sum _ {I} \frac {\hbar^{2}} {2 M _ {I}} \int d \mathbf {R} \, \Phi _ {i , I} ( \mathbf {R} ) \Phi _ {j , J} ( \mathbf {R} ) \mathbf {G} _ {i j} ( \mathbf {R} ) \label{5.12} \end{align}
where
\begin{align} \mathbf {F} _ {i j} ( \mathbf {R} ) & = \int d \mathbf {r} \, \psi _ {i}^{*} ( \mathbf {r} , \mathbf {R} ) \nabla _ {R} \psi _ {j} ( \mathbf {r} , \mathbf {R} ) \\[4pt] \mathbf {G} _ {i j} ( \mathbf {R} ) & = \int d \mathbf {r} \, \psi _ {i}^{*} ( \mathbf {r} , \mathbf {R} ) \nabla _ {R}^{2} \psi _ {j} ( \mathbf {r} , \mathbf {R} ) \label{5.13} \end{align}
The first term in Equation \ref{5.12} gives the BO Hamiltonian. In the latter two terms, $\mathbf {F}$ is referred to as the nonadiabatic, first-order, or derivative coupling, and $\mathbf {G}$ is the second-order nonadiabatic coupling or diagonal BO correction. Although they are evaluated by integrating over electronic degrees of freedom, both depend parametrically on the position of the nuclei. In most circumstances the last term is much smaller than the other two, so that we can concentrate on the second term in evaluating couplings between adiabatic states. For our purposes, we can write the nonadiabatic coupling in Equation \ref{5.10} as
$\hat {V} _ {i , I , j , J} ( \mathbf {R} ) = - \sum _ {I} \frac {\hbar^{2}} {M _ {I}} \int d \mathbf {R} \Phi _ {i , I} ( \mathbf {R} ) \nabla _ {R} \Phi _ {j , J} ( \mathbf {R} ) \cdot \mathbf {F} _ {i j} ( \mathbf {R} ) \label{5.14}$
This emphasizes that the coupling between surfaces depends parametrically on the nuclear positions, the gradient of the electronic and nuclear wavefunctions, and the spatial overlap of those wavefunctions.
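A rough numerical sketch of the derivative coupling can be made with the same kind of made-up two-state model used earlier (again purely illustrative, not a molecular calculation). It estimates $\mathbf{F}_{01}(R)$ of Equation \ref{5.13} by finite differences of the adiabatic electronic eigenvectors:

```python
import numpy as np

# Minimal sketch (illustrative two-state model): estimate the first-order nonadiabatic
# (derivative) coupling F_01(R) = <psi_0 | d/dR | psi_1> of Eq. (5.13) by finite
# differences of the adiabatic eigenvectors.
def H_el(R, V_ab=0.05):
    Ua = 0.5 * (R - 1.0)**2
    Ub = 0.4 + 0.3 * np.exp(-2.0 * R)
    return np.array([[Ua, V_ab], [V_ab, Ub]])

def F_01(R, dR=1e-4):
    _, v0 = np.linalg.eigh(H_el(R))
    _, vp = np.linalg.eigh(H_el(R + dR))
    _, vm = np.linalg.eigh(H_el(R - dR))
    # eigh returns eigenvectors with an arbitrary sign; fix the phase by requiring a
    # positive overlap with the eigenvectors at R so the numerical derivative is smooth
    for i in range(2):
        if v0[:, i] @ vp[:, i] < 0:
            vp[:, i] *= -1
        if v0[:, i] @ vm[:, i] < 0:
            vm[:, i] *= -1
    dv = (vp - vm) / (2.0 * dR)
    return v0[:, 0] @ dv[:, 1]

R_grid = np.linspace(0.0, 3.0, 301)
F = np.array([F_01(R) for R in R_grid])
# F(R) is sharply peaked near the avoided crossing, where the adiabatic gap is smallest.
```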
6.03: Diabatic and Adiabatic States
Although the Born–Oppenheimer surfaces are the most straightforward and commonly calculated, they may not be the most chemically meaningful states. As an example consider the potential energy curves for the diatomic $\ce{NaCl}$. The chemically distinct potential energy surfaces one is likely to discuss have distinct atomic or ionic character at large separation between the atoms. These “diabatic” curves focus on physical effects, but are not eigenstates. In the figure, the ionic state $| a \rangle$ is influenced by the Coulomb attraction between ions that draws them together, leading to a stable configuration at $R_{eq}$ once these attractive terms are balanced by nuclear repulsive forces. However, the neutral atoms ($\ce{Na^{0}}$ and $\ce{Cl^{0}}$) have a potential energy surface $| b \rangle$ which is dominated by repulsive interactions. The adiabatic potentials from the BO Hamiltonian will reflect significant coupling between the diabatic electronic states. BO states of the same symmetry will exhibit an avoided crossing where the electronic energy between corresponding diabatic states is equal. As expected from our earlier discussion, the splitting at the crossing for this one-dimensional system would be $2V_{ab}$, twice the coupling between diabatic states.
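A quick numerical check of this statement, using an illustrative pair of diabatic curves and a constant coupling (arbitrary parameters, not $\ce{NaCl}$ data), confirms that the adiabatic gap at the diabatic crossing point is $2V_{ab}$:

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch (illustrative model): at the geometry where the diabatic curves cross,
# the adiabatic splitting equals 2 V_ab.
V_ab = 0.05
Ua = lambda R: 0.5 * (R - 1.0)**2
Ub = lambda R: 0.4 + 0.3 * np.exp(-2.0 * R)

R_cross = brentq(lambda R: Ua(R) - Ub(R), 1.0, 3.0)     # diabatic crossing point
H = np.array([[Ua(R_cross), V_ab], [V_ab, Ub(R_cross)]])
gap = np.diff(np.linalg.eigvalsh(H))[0]
print(gap, 2 * V_ab)     # both 0.1
```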
The adiabatic potential energy surfaces are important in interpreting the reaction dynamics, as can be illustrated with the reaction between $\ce{Na}$ and $\ce{Cl}$ atoms. If the neutral atoms are prepared on the ground state at large separation and slowly brought together, the atoms are weakly repelled until the separation reaches the transition state $R^‡$. Here we cross into the regime where the ionic configuration has lower energy. As a result of the nonadiabatic couplings, we expect that an electron will transfer from $\ce{Na^{0}}$ to $\ce{Cl^{0}}$, and the ions will then feel an attractive force leading to an ionic bond with separation $R_{eq}$.
Diabatic states can be defined in an endless number of ways, whereas the adiabatic surfaces are unique. In that respect, the term “nonadiabatic” is also used to refer to all possible diabatic surfaces. However, diabatic states are generally chosen so that the nonadiabatic electronic couplings in Equations 6.2.14 and 6.2.15 are zero. This can be accomplished by making the electronic wavefunction independent of $R$.
As seen above, for coupled states with the same symmetry the couplings repel the adiabatic states and we get an avoided crossing. However, it is still possible for two adiabatic states to cross. Mathematically this requires that the energies of the adiabatic states be degenerate ($E _ {\alpha} = E _ {\beta}$) and that the coupling at that configuration be zero ($V _ {\alpha \beta} = V _ {\beta \alpha} = 0$). This isn’t possible for a one-dimensional problem, such as the $\ce{NaCl}$ example above, unless symmetry dictates that the nonadiabatic coupling vanishes. To accomplish this for a Hermitian coupling operator you need two independent nuclear coordinates, which enable you to independently tune the adiabatic splitting and coupling. This leads to a single point in the two-dimensional space at which degeneracy exists, which is known as a conical intersection (an important topic that is not discussed further here).
6.04: Adiabatic and Nonadiabatic Dynamics
The BO approximation never explicitly addresses electronic or nuclear dynamics, but neglecting the nuclear kinetic energy to obtain potential energy surfaces has implicit dynamical consequences. As we discussed for our $\ce{NaCl}$ example, moving the neutral atoms together slowly allows the electrons to equilibrate completely at each forward step, resulting in propagation on the adiabatic ground state. This is the essence of the adiabatic approximation. If you prepare the system in $\Psi _ {\alpha}$, an eigenstate of $H$ at the initial time $t_0$, and propagate slowly enough, then $\Psi _ {\alpha}$ will remain an eigenstate for all times:
$H (t) \Psi _ {\alpha} (t) = E _ {\alpha} (t) \Psi _ {\alpha} (t) \label{5.15}$
Equivalently, this means that the nth eigenfunction of $H(t_0)$ will also be the nth eigenfunction of $H (t)$. In this limit, there are no transitions between BO surfaces, and the dynamics only reflect the phases acquired from the evolving system. That is, the time propagator can be expressed as
$U \left( t , t _ {0} \right) _ {a d i a b a t i c} = \sum _ {\alpha} | \alpha \rangle \langle \alpha | \exp \left( - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} E _ {\alpha} \left( t^{\prime} \right) \right) \label{5.16}$
In the opposite limit, we also know that if the atoms were incident on each other so fast (with such high kinetic energy) that the electron did not have time to transfer at the crossing, that the system would pass smoothly through the crossing along the diabatic surface. In fact it is expected that the atoms would collide and recoil. This implies that there is an intermediate regime in which the velocity of the system is such that the system will split and follow both surfaces to some degree.
In a more general sense, we would like to understand the criteria for adiabaticity that enable a time-scale separation between the fast and slow degrees of freedom. Speaking qualitatively about any time-dependent interaction between quantum mechanical states, the time-scale that separates the fast and slow propagation regimes is determined by the strength of coupling between those states. We know that two coupled states exchange amplitude at a rate dictated by the Rabi frequency $\Omega _ {R}$, which in turn depends on the energy splitting and coupling of the states. For systems in which there is significant nonperturbative transfer of population between two states $a$ and $b$, the time scale over which this can occur is approximately $\Delta \mathrm {t} \approx 1 / \Omega _ {\mathrm {R}} \approx \hbar / V _ {\mathrm {ab}}$. This is not precise, but it provides a reasonable starting point for discussing “slow” versus “fast”. “Slow” in an adiabatic sense means that the time-dependent interaction changes over a period long compared to $\hbar / V _ {\mathrm {ab}}$. In the case of our $\ce{NaCl}$ example, we would be concerned with the time scale over which the atoms pass through the crossing region between diabatic states, which is determined by the incident velocity between atoms.
Adiabaticity Criterion
Let’s investigate these issues by looking more carefully at the adiabatic approximation. Since the adiabatic states ($\Psi _ {\alpha} (t) \equiv | \alpha \rangle$) are orthogonal for all times, we can evaluate the time propagator as
$U (t) = \sum _ {\alpha} e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} (t) d t^{\prime}} | \alpha \rangle \langle \alpha | \label{5.17}$
and the time-dependent wavefunction is
$\Psi (t) = \sum _ {\alpha} b _ {\alpha} (t) e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} (t) d t^{\prime}} | \alpha \rangle \label{5.18}$
Although these are adiabatic states we recognize that the expansion coefficients can be time dependent in the general case. So, we would like to investigate the factors that govern this time-dependence. To make the notation more compact, let’s define the time-rate of change of the eigenfunction as
$| \dot {\alpha} \rangle = \frac {\partial} {\partial t} | \Psi _ {\alpha} (t) \rangle \label{5.19}$
If we substitute the general solution Equation \ref{5.18} into the TDSE, we get
$i \hbar \sum _ {\alpha} \left( \dot {b} _ {\alpha} | \alpha \rangle + b _ {\alpha} | \dot {\alpha} \rangle - \frac {i} {\hbar} E _ {\alpha} b _ {\alpha} | \alpha \rangle \right) e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} (t^{\prime}) d t^{\prime}} = \sum _ {\alpha} b _ {\alpha} E _ {\alpha} | \alpha \rangle e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} (t^{\prime}) d t^{\prime}} \label{5.20}$
Note, the third term on the left hand side equals the right hand term. Acting on both sides from the left with $\langle \beta |$ leads to
$- \dot {b} _ {\beta} e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\beta} (t^{\prime}) d t^{\prime}} = \sum _ {\alpha} b _ {\alpha} \langle \beta | \dot {\alpha} \rangle e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} (t^{\prime}) d t^{\prime}} \label{5.21}$
We can break up the terms in the summation into one for the target state $| \beta \rangle$ and one for the remaining states.
$- \dot {b} _ {\beta} = b _ {\beta} \langle \beta | \dot {\beta} \rangle + \sum _ {\alpha \neq \beta} b _ {\alpha} \langle \beta | \dot {\alpha} \rangle \exp \left[ - \frac {i} {\hbar} \int _ {0}^{t} d t^{\prime} E _ {\alpha \beta} \left( t^{\prime} \right) \right] \label{5.22}$
where
$E _ {\alpha \beta} (t) = E _ {\alpha} (t) - E _ {\beta} (t).$
The adiabatic approximation applies when we can neglect the summation in Equation \ref{5.22}, or equivalently when $\langle \beta | \dot {\alpha} \rangle \ll \langle \beta | \dot {\beta} \rangle$ for all $| \alpha \rangle$. In that case, the system propagates on the adiabatic state $| \beta \rangle$ independent of the other states: $\dot {b} _ {\beta} = - b _ {\beta} \langle \beta | \dot {\beta} \rangle$. The evolution of the coefficients is
\begin{align} b _ {\beta} (t) & = b _ {\beta} ( 0 ) \exp \left[ - \int _ {0}^{t} \left\langle \beta \left( t^{\prime} \right) | \dot {\beta} \left( t^{\prime} \right) \right\rangle d t^{\prime} \right] \\[4pt] & \approx b _ {\beta} ( 0 ) \exp \left[ \frac {i} {\hbar} \int _ {0}^{t} E _ {\beta} \left( t^{\prime} \right) d t^{\prime} \right] \label{5.23} \end{align}
Here we note that in the adiabatic approximation
$E _ {\beta} (t) = \langle \beta (t) | H (t) | \beta (t) \rangle.$
Equation \ref{5.23} indicates that in the adiabatic approximation the population in the states never changes, only their phase. The second term on the right in Equation \ref{5.22} describes the nonadiabatic effects, and the overlap integral
$\langle \beta | \dot {\alpha} \rangle = \left\langle \Psi _ {\beta} | \frac {\partial \Psi _ {\alpha}} {\partial t} \right\rangle \label{5.24}$
determines the magnitude of this effect. $\langle \beta | \dot {\alpha} \rangle$ is known as the nonadiabatic coupling (even though it refers to couplings between adiabatic surfaces), or as the geometrical phase. Note the parallels to the expression for the nonadiabatic coupling that appeared in evaluating the validity of the Born–Oppenheimer approximation; however, here the gradient of the wavefunction is evaluated in time rather than in the nuclear position. It would appear that we can make some connections between these two results by linking the gradient variables through the momentum or velocity of the particles involved.
So, when can we neglect the nonadiabatic effects? We can obtain an expression for the nonadiabatic coupling by expanding
$\frac {\partial} {\partial t} [ H | \alpha \rangle = E _ {\alpha} | \alpha \rangle ] \label{5.25}$
and acting from the left with $\langle \beta |$, which for $\alpha \neq \beta$ leads to
$\langle \beta | \dot {\alpha} \rangle = \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha} - E _ {\beta}} \label{5.26}$
For adiabatic dynamics to hold $\langle \beta | \dot {\alpha} \rangle \ll \langle \beta | \dot {\beta} \rangle$, and so we can say
$\frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha} - E _ {\beta}} \ll - \frac {i} {\hbar} E _ {\beta} \label{5.27}$
So how accurate is the adiabatic approximation for a finite time-period over which the systems propagates? We can evaluate Equation \ref{5.22}, assuming that the system is prepared in state $| \alpha \rangle$ and that the occupation of this state never varies much from one. Then the occupation of any other state can be obtained by integrating over a period
\begin{align} \dot {b} _ {\beta} & = \langle \beta | \dot {\alpha} \rangle \exp \left[ - \frac {i} {\hbar} \int _ {0}^{\tau} d t^{\prime} E _ {\alpha \beta} \left( t^{\prime} \right) \right] \\[4pt] b _ {\beta} & \approx i \hbar \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha \beta}^{2}} \left\{ \exp \left[ - \frac {i} {\hbar} E _ {\alpha \beta} \tau \right] - 1 \right\} \\[4pt] & = 2 \hbar \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha \beta}^{2}} e^{- i E _ {\alpha \beta} \tau / 2 \hbar} \sin \left( \frac {E _ {\alpha \beta} \tau} {2 \hbar} \right) \label{5.28} \end{align}
Here I used
$e^{i \theta} - 1 = 2 i e^{i \theta / 2} \sin ( \theta / 2 ).$
For $| b _ {\beta} | \ll 1$, we expand the $\sin$ term and find
$\left\langle \Psi _ {\beta} \left| \frac {\partial H} {\partial t} \right| \Psi _ {\alpha} \right\rangle \ll E _ {\alpha \beta} / \tau \label{5.29}$
This is the criterion for adiabatic dynamics, which can be seen to break down near adiabatic curve crossings where $E _ {\alpha \beta} = 0$, regardless of how fast we propagate through the crossing. Even away from curve crossing, there is always the possibility that nuclear kinetic energies are such that ($\partial H / \partial t$) will be greater than or equal to the energy splitting between adiabatic states.
6.05: Landau–Zener Transition Probability
Clearly the adiabatic approximation has significant limitations in the vicinity of curve crossings. This phenomenon is better described through transitions between diabatic surfaces. To begin, how do we link the temporal and spatial variables in the curve crossing picture? We need a time-rate of change of the energy splitting, $\dot {E} = d E _ {a b} / d t$. The Landau–Zener expression gives the transition probabilities as a result of propagating through the crossing between diabatic surfaces at a constant $\dot {E}$. If the energy splitting between states varies linearly in time near the crossing point, then setting the crossing point to $t = 0$ we write
$E _ {a} - E _ {b} = \dot {E} t \label{5.30}$
If the coupling between surfaces $V_{ab}$ is constant, the transition probability for crossing from surface $a$ to $b$ for a trajectory that passes through the crossing is
$P _ {b a} = 1 - \exp \left[ - \frac {2 \pi V _ {a b}^{2}} {\hbar | \dot {E} |} \right] \label{5.31}$
and $P _ {a a} = 1 - P _ {b a}$. Note if $V_{ab} =0$ then $P_{ba} =0$, but if the splitting sweep rate $\dot {E}$ is small as determined by
$2 \pi V _ {a b}^{2} \gg \hbar | \dot {E} |\label{5.32}$
then we obtain the result expected for the adiabatic dynamics $P _ {b a} \approx 1$.
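The following sketch (illustrative parameters, $\hbar = 1$, not from the original notes) integrates the two-level time-dependent Schrödinger equation through a linear crossing of diabatic states and compares the final population transfer with Equation \ref{5.31}:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch (illustrative parameters, hbar = 1): numerically sweep a two-level
# system through a linear crossing of diabatic states and compare the final population
# transfer with the Landau-Zener result, Eq. (5.31).
hbar = 1.0
V = 0.15          # constant diabatic coupling V_ab
Edot = 1.0        # sweep rate of the splitting, d(E_a - E_b)/dt

def H(t):
    return np.array([[ 0.5 * Edot * t, V],
                     [ V, -0.5 * Edot * t]])

T, dt = 100.0, 0.005
c = np.array([1.0, 0.0], dtype=complex)    # start in diabatic state |a> well before the crossing
for t in np.arange(-T, T, dt):
    c = expm(-1j * H(t + dt / 2) * dt / hbar) @ c    # midpoint short-time propagator

P_ba_numeric = abs(c[1])**2
P_ba_LZ = 1.0 - np.exp(-2.0 * np.pi * V**2 / (hbar * abs(Edot)))
print(P_ba_numeric, P_ba_LZ)   # the two agree closely (~0.13 for these parameters)
```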
We can provide a classical interpretation of Equation \ref{5.31} by relating $\dot {E}$ to the velocity of the particles involved in the crossing. We define the velocity as
$v = \dfrac{\partial R}{\partial t}$
and the slope of the diabatic surfaces at the crossing,
$F _ {i} = \partial E _ {i} / \partial R.$
Recognizing
$\left( E _ {a} - E _ {b} \right) / t = v \left( F _ {a} - F _ {b} \right) \label{5.33}$
we find
$P _ {b a} = 1 - \exp \left[ - \frac {2 \pi V _ {a b}^{2}} {\hbar v \left| F _ {a} - F _ {b} \right|} \right] \label{5.34}$
In the context of potential energy surfaces, what this approximation says is that you need to know the slopes of the potentials at their crossing point, the coupling between them, and their relative velocity in order to extract the rates of chemical reactions.
Readings
1. Truhlar, D. D., Potential Energy Surfaces. In The Encyclopedia of Physical Science and Technology, 3rd ed.; Meyers, R. A., Ed. Academic Press: New York, 2001; Vol. 13, pp 9-17.
2. Tully, J. C., Nonadiabatic Dynamics Theory. J. Chem. Phys. 2012, 137, 22A301.
One of the most important topics in time-dependent quantum mechanics is the description of spectroscopy, which refers to the study of matter through its interaction with electromagnetic radiation. Classically, light–matter interactions are a result of an oscillating electromagnetic field resonantly interacting with charged particles in the matter, most often bound electrons. We observe these processes either through changes to the light induced by the matter, such as absorption or emission of new light fields, or by light-induced changes to the matter, such as ionization and photochemistry. By studying such processes as a function of the control variables for the light field (amplitude, frequency, polarization, phase, etc.), we can deduce properties of the samples.
07: Interaction of Light and Matter
The term “spectroscopy” comes from the Latin “spectron” for spirit or ghost and the Greek "σκοπιεν" for to see. These roots are telling because in molecular spectroscopy you use light to interrogate matter, but you actually never see the molecules, only their influence on the light. Different types of spectroscopy give you different perspectives. This indirect contact with the microscopic targets means that the interpretation of spectroscopy requires a model, whether it is stated or not. Modeling and laboratory practice of spectroscopy are dependent on one another, and spectroscopy is only as useful as its ability to distinguish different models. This makes an accurate theoretical description of the underlying physical process governing the interaction of light and matter important.
Quantum mechanically, we will treat spectroscopy as a perturbation induced by the light which acts to couple quantum states of the charged particles in the matter, as we have discussed earlier. Our starting point is to write a Hamiltonian for the light–matter interaction, which in the most general sense would be of the form
$H = H _ {M} + H _ {L} + H _ {L M} \label{6.1}$
Although the Hamiltonian for the matter may be time-dependent, we will treat the Hamiltonian for the matter $H_M$ as time-independent, whereas the electromagnetic field $H_L$ and its interaction with the matter $H_{LM}$ are time-dependent. A quantum mechanical treatment of the light would describe the light in terms of photons for different modes of electromagnetic radiation, which we will describe later. We begin with a semiclassical treatment of the problem, which describes the matter quantum mechanically and the light field classically. We assume that a light field described by a time-dependent vector potential acts on the matter, but the matter does not influence the light. (Strictly, energy conservation requires that any change in energy of the matter be matched with an equal and opposite change in the light field.) For the moment, we are just interested in the effect that the light has on the matter. In that case, we can really ignore $H_L$, and we have a Hamiltonian for the system that is
\left.\begin{aligned} H & \approx H _ {M} + H _ {L M} (t) \[4pt] & = H _ {0} + V (t) \end{aligned} \right. \label{6.2}
which we can solve in the interaction picture. We will derive an explicit expression for the Hamiltonian $H_{LM}$ in the Electric Dipole Approximation. Here, we will derive a Hamiltonian for the light–matter interaction, starting with the force experienced by a charged particle in an electromagnetic field, developing a classical Hamiltonian for this interaction, and then substituting quantum operators for the matter:
$\left. \begin{array} {l} {p \rightarrow - i \hbar \hat {\nabla}} \\ {x \rightarrow \hat {x}} \end{array} \right. \label{6.3}$
In order to get the classical Hamiltonian, we need to work through two steps:
1. describe electromagnetic fields, specifically in terms of a vector potential, and
2. describe how the electromagnetic field interacts with charged particles.
7.02: Classical Light–Matter Interactions
Classical Plane Electromagnetic Waves
As a starting point, it is helpful to first summarize the classical description of electromagnetic fields. A derivation of the plane wave solutions to the electric and magnetic fields and vector potential is described in the appendix in Section 6.6.
Maxwell’s equations describe electric ($\overline {E}$) and magnetic fields ($\overline {B}$); however, to construct a Hamiltonian, we must use the time-dependent interaction potential (rather than a field). To construct the potential representation of $\overline {E}$ and $\overline {B}$, you need a vector potential $\overline {A} ( \overline {r} , t )$, and a scalar potential $\varphi ( \overline {r} , t )$. For electrostatics we normally think of the field being related to the electrostatic potential through $\overline {E} = - \nabla \varphi$, but for a field that varies in time and in space, the electrodynamic potential must be expressed in terms of both $\overline {A}$ and $\varphi$.
In general, an electromagnetic wave written in terms of the electric and magnetic fields requires six variables (the $x$, $y$, and $z$ components of $E$ and $B$). This is an overdetermined problem; Maxwell’s equations constrain these variables. The potential representation has four variables ($A _ {x}$, $A _ {y}$, $A _ {z}$, and $\varphi$), but these are still not uniquely determined. We choose a constraint—a representation or gauge—that allows us to uniquely describe the wave. Choosing a gauge such that $\varphi=0$ (i.e., the Coulomb gauge) leads to a unique description of $\overline {E}$ and $\overline {B}$:
$- \overline {\nabla}^{2} \overline {A} ( \overline {r} , t ) + \frac {1} {c^{2}} \frac {\partial^{2} \overline {A} ( \overline {r} , t )} {\partial t^{2}} = 0 \label{6.4}$
and
$\overline {\nabla} \cdot \overline {A} = 0 \label{6.5}$
This wave equation for the vector potential gives a plane wave solution for charge free space and suitable boundary conditions:
$\overline {A} ( \overline {r} , t ) = A _ {0} \hat {\varepsilon} e^{i ( \overline {k} \cdot \overline {r} - \omega t )} + A _ {0}^{*} \hat {\varepsilon} e^{- i ( \overline {k} \cdot \overline {r} - \omega t )} \label{6.6}$
This describes the wave oscillating in time at an angular frequency $\omega$ and propagating in space in the direction along the wave vector $\overline {k}$, with a spatial period $\lambda = 2 \pi / | \overline {k} |$. Writing the relationship between $k$, $\omega$, and $\lambda$ in a medium with index of refraction $n$ in terms of their values in free space:
$k = n k _ {0} = \frac {n \omega _ {0}} {c} = \frac {2 \pi n} {\lambda _ {0}} \label{6.7}$
The wave has an amplitude $A_0$, which is directed along the polarization unit vector $\hat {\varepsilon}$. Since $\overline {\nabla} \cdot \overline {A} = 0$, we see that $\overline {k} \cdot \hat {\varepsilon} = 0$ or $\overline {k} \perp \hat {\varepsilon}$. From the vector potential we can obtain $\overline {E}$ and $\overline {B}$
\begin{align} \overline {E} & = - \frac {\partial \overline {A}} {\partial t} \[4pt] & = i \omega A _ {0} \hat {\varepsilon} \left( e^{i ( \overline {k} \cdot \overline {r} - \omega t )} - e^{- i ( \overline {k} \cdot \overline {r} - \omega t )} \right) \label{6.8} \end{align}
\begin{align} \overline {B} & = \overline {\nabla} \times \overline {A} \[4pt] & = i ( \overline {k} \times \hat {\varepsilon} ) A _ {0} \left( e^{i ( \overline {k} \cdot \overline {r} - \omega t )} - e^{- i ( \overline {k} \cdot \overline {r} - \omega t )} \right) \label{6.9} \end{align}
If we define a unit vector along the magnetic field polarization as
$\hat {b} = ( \overline {k} \times \hat {\varepsilon} ) / | \overline {k} | = \hat {k} \times \hat {\varepsilon},$
we see that the wave vector, the electric field polarization and magnetic field polarization are mutually orthogonal $\hat {k} \perp \hat {\varepsilon} \perp \hat {b}$.
Also, by comparing Equation \ref{6.6} and \ref{6.8} we see that the vector potential oscillates as $\cos(\omega t)$, whereas the electric and magnetic fields oscillate as $\sin(\omega t)$. If we define
$\frac {1} {2} E _ {0} = i \omega A _ {0} \label{6.10}$
$\frac {1} {2} B _ {0} = i | k | A _ {0} \label{6.11}$
then,
$\overline {E} ( \overline {r} , t ) = \left| E _ {0} \right| \hat {\varepsilon} \sin ( \overline {k} \cdot \overline {r} - \omega t ) \label{6.12}$
$\overline {B} ( \overline {r} , t ) = \left| B _ {0} \right| \hat {b} \sin ( \overline {k} \cdot \overline {r} - \omega t ) \label{6.13}$
Note that
$E _ {0} / B _ {0} = \omega / | k | = c.$
We will want to express the amplitude of the field in a manner that is experimentally accessible. The intensity $I$, the energy flux through a unit area, is most easily measured. It is the time-averaged value of the Poynting vector
$I = \langle \overline {S} \rangle = \frac {1} {2} \varepsilon _ {0} c E _ {0}^{2} \quad \left( \mathrm {W} / \mathrm {m}^{2} \right) \label{6.15}$
An alternative representation of the amplitude that is useful for describing quantum light fields is the energy density
$U = \frac {I} {c} = \frac {1} {2} \varepsilon _ {0} E _ {0}^{2} \quad \left( \mathrm {J} / \mathrm {m}^{3} \right) \label{6.16}$
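As a short numerical illustration (the intensity value is an arbitrary example, not from the original notes), Equations \ref{6.15} and \ref{6.16} convert a measured intensity into a field amplitude and an energy density:

```python
import numpy as np

# Minimal sketch (illustrative numbers): convert a measured intensity into the field
# amplitude E_0 and energy density using Eqs. (6.15) and (6.16).
eps0 = 8.854e-12      # vacuum permittivity, F/m
c = 2.998e8           # speed of light, m/s

I = 1.0e7             # intensity, W/m^2 (= 1 kW/cm^2, an arbitrary example value)
E0 = np.sqrt(2.0 * I / (eps0 * c))   # field amplitude in V/m, from I = (1/2) eps0 c E0^2
U = I / c                            # energy density in J/m^3, Eq. (6.16)
print(E0, U)          # ~8.7e4 V/m and ~3.3e-2 J/m^3 for this intensity
```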
Classical Hamiltonian for radiation field interacting with charged particle
Now, we obtain a classical Hamiltonian that describes charged particles interacting with a radiation field in terms of the vector potential. Start with Lorentz force on a particle with charge $q$:
$\overline {F} = q ( \overline {E} + \overline {v} \times \overline {B} ) \label{6.17}$
Here v is the velocity of the particle. Writing this for one direction ($x$) in terms of the Cartesian components of $\overline {E}$, $\overline {v}$, and $\overline {B}$, we have:
$F _ {x} = q \left( E _ {x} + v _ {y} B _ {z} - v _ {z} B _ {y} \right) \label{6.18}$
In Lagrangian mechanics, this force can be expressed in terms of the total potential energy
$F _ {x} = - \frac {\partial U} {\partial x} + \frac {d} {d t} \left( \frac {\partial U} {\partial v _ {x}} \right) \label{6.19}$
Using the relationships that describe $\overline {E}$ and $\overline {B}$ in terms of $\overline {A}$ and $\varphi$, inserting into Equation \ref{6.18}, and working it into the form of Equation \ref{6.19}, we can show that
$U = q \varphi - q \overline {v} \cdot \overline {A} \label{6.20}$
This is derived elsewhere [4] and is readily confirmed by substituting it into Equation \ref{6.19}. Now we can write a Lagrangian in terms of the kinetic and potential energy of the particle
\begin{align} L &= T - U \label{6.21} \[4pt] &= \frac {1} {2} m \overline {v}^{2} + q \overline {v} \cdot \overline {A} - q \varphi \label{6.22} \end{align}
The classical Hamiltonian is related to the Lagrangian as
\begin{align} H & = \overline {p} \cdot \overline {v} - L \\[4pt] & = \overline {p} \cdot \overline {v} - \frac {1} {2} m \overline {v}^{2} - q \overline {v} \cdot \overline {A} + q \varphi \label{6.23} \end{align}
Recognizing
$\overline {p} = \frac {\partial L} {\partial \overline {v}} = m \overline {v} + q \overline {A} \label{6.24}$
we write
$\overline {v} = \frac {1} {m} ( \overline {p} - q \overline {A} ) \label{6.25}$
Now substituting Equations \ref{6.25} into Equation \ref{6.23}, we have
\begin{align} H &= \frac {1} {m} \overline {p} \cdot ( \overline {p} - q \overline {A} ) - \frac {1} {2 m} ( \overline {p} - q \overline {A} )^{2} - \frac {q} {m} ( \overline {p} - q \overline {A} ) \cdot A + q \varphi \label{6.26} \[4pt] &= \frac {1} {2 m} [ \overline {p} - q \overline {A} ( \overline {r} , t ) ]^{2} + q \varphi ( \overline {r} , t ) \label{6.27} \end{align}
This is the classical Hamiltonian for a particle in an electromagnetic field. In the Coulomb gauge ($\varphi = 0$), the last term is dropped.
We can write a Hamiltonian for a single particle in a bound potential $V_0$ in the absence of an external field as
$H _ {0} = \frac {\overline {p}^{2}} {2 m} + V _ {0} ( \overline {r} ) \label{6.28}$
and in the presence of the EM field,
$H = \frac {1} {2 m} ( \overline {p} - q \overline {A} ( \overline {r} , t ) )^{2} + V _ {0} ( \overline {r} ) \label{6.29}$
Expanding we obtain
$H = H _ {0} - \frac {q} {2 m} ( \overline {p} \cdot \overline {A} + \overline {A} \cdot \overline {p} ) + \frac {q^{2}} {2 m} | \overline {A} ( \overline {r} , t ) |^{2} \label{6.30}$
Generally the last term which goes as the square of $A$ is small compared to the cross term, which is proportional to first power of $A$. This term should be considered for extremely high field strength, which is non-perturbative and significantly distorts the potential binding molecules together, i.e., when it is similar in magnitude to $V_0$. One can estimate that this would start to play a role at intensity levels $>10^{15}\, W/cm^2$, which may be observed for very high energy and tightly focused pulsed femtosecond lasers. So, for weak fields we have an expression that maps directly onto solutions we can formulate in the interaction picture:
$H = H _ {0} + V (t) \label{6.31}$
with
$V (t) = - \frac {q} {2 m} ( \overline {p} \cdot \overline {A} + \overline {A} \cdot \overline {p} ) \label{6.32}$
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Lalöe, F., Quantum Mechanics. Wiley-Interscience: Paris, 1977; Appendix III.
2. Jackson, J. D., Classical Electrodynamics. 2nd ed.; John Wiley and Sons: New York, 1975.
3. McHale, J. L., Molecular Spectroscopy. 1st ed.; Prentice Hall: Upper Saddle River, NJ, 1999.
4. Merzbacher, E., Quantum Mechanics. 3rd ed.; Wiley: New York, 1998.
5. Sakurai, J. J., Modern Quantum Mechanics, Revised Edition. Addison-Wesley: Reading, MA, 1994.
6. Schatz, G. C.; Ratner, M. A., Quantum Mechanics in Chemistry. Dover Publications: Mineola, NY, 2002; pp. 82-83.
7.03: Quantum Mechanical Electric Dipole Hamiltonian
Now we are in a position to substitute the quantum mechanical momentum for the classical momentum:
$\overline {p} = - i \hbar \overline {\nabla} \label{6.33}$
Here the vector potential remains classical and only modulates the interaction strength:
$V (t) = \frac {i \hbar} {2 m} q ( \overline {\nabla} \cdot \overline {A} + \overline {A} \cdot \overline {\nabla} ) \label{6.34}$
We can show that $\overline {\nabla} \cdot \overline {A} = \overline {A} \cdot \overline {\nabla}$ when acting on a wavefunction. For instance, if we are operating on a wavefunction on the right, we can use the chain rule to write $\overline {\nabla} \cdot ( \overline {A} | \psi \rangle ) = ( \overline {\nabla} \cdot \overline {A} ) | \psi \rangle + \overline {A} \cdot ( \overline {\nabla} | \psi \rangle ).$ The first term is zero since we are working in the Coulomb gauge ($\overline {\nabla} \cdot \overline {A} = 0$). Now we have
\begin{align} V (t) & = \frac {i \hbar q} {m} \overline {A} \cdot \overline {\nabla} \[4pt] & = - \frac {q} {m} \overline {A} \cdot \hat {p} \label{6.35} \end{align}
We can generalize Equation \ref{6.35} for the case of multiple charged particles, as would be appropriate for interactions involving a molecular Hamiltonian:
\begin{align} V (t) &= - \sum _ {j} \frac {q _ {j}} {m _ {j}} \overline {A} \left( \overline {r} _ {j} , t \right) \cdot \hat {p} _ {j} \label{6.36} \[4pt] &= - \sum _ {j} \frac {q _ {j}} {m _ {j}} \left[ A _ {0} \hat {\varepsilon} \cdot \hat {p} _ {j} e^{i \left( \overline {k} \cdot \overline {r} _ {j} - \omega t \right)} + A _ {0}^{*} \hat {\varepsilon} \cdot \hat {p} _ {j}^{\dagger} e^{- i \left( \overline {k} \cdot \overline {r} _ {j} - \omega t \right)} \right] \label{6.37} \end{align}
Under most of the circumstances we will encounter, we can neglect the wave vector dependence of the interaction potential. This applies if the wavelength of the field is much larger than the dimensions of the molecules we are interrogating, i.e., $\lambda \rightarrow \infty$ and $| k | \rightarrow 0$. To see this, let’s define $r_0$ as the center of mass of a molecule and expand about that position:
\begin{align} e^{i \overline {k} \cdot \overline {r} _ {i}} & = e^{i \overline {k} \cdot \overline {r} _ {0}} e^{i \overline {k} \cdot \left( \overline {r} _ {i} - \overline {r} _ {0} \right)} \[4pt] & = e^{i \overline {k} \cdot \overline {r} _ {0}} e^{i \overline {k} \cdot \delta \overline {r} _ {i}} \label{6.38} \end{align}
For interactions with UV, visible, and infrared radiation, wavelengths are measured in hundreds to thousands of nanometers. This is orders of magnitude larger than the dimensions that describe charge distributions in molecules ($\delta \overline {r} _ {i} = \overline {r} _ {i} - \overline {r} _ {0}$). Under those circumstances $| k | \delta r \ll 1$, and setting $\overline {r _ {0}} = 0$ means that $e^{i \overline {k} \cdot \overline {r}} \rightarrow 1$. This is known as the electric dipole approximation. Implicit in this is also the statement that all molecules within a macroscopic volume experience an interaction with a spatially uniform, homogeneous electromagnetic field.
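As a rough illustration, for visible light with $\lambda_0 = 500\ \mathrm{nm}$ and a molecular dimension $\delta r \approx 1\ \mathrm{nm}$, $|\overline{k}|\,\delta r = 2 \pi\, \delta r / \lambda_0 \approx 0.013 \ll 1$, so the neglected spatial variation enters only at the percent level.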
Certainly there are circumstances where the electric dipole approximation is poor. In the case where the wavelength of light is on the same scale as molecular dimensions, the light will now have to interact with spatially varying charge distributions, which will lead to scattering of the light and interference between the light scattered from different spatial regions. We will not concern ourselves with this limit further. We also retain the spatial dependence for certain other types of light–matter interactions. For instance, we can expand Equation \ref{6.38} as
$e^{i \overline {k} \cdot \overline {r_i}} \approx e^{i \overline {k} \cdot \overline {r} _ {0}} \left[ 1 + i \overline {k} \cdot \left( \overline {r} _ {i} - \overline {r} _ {0} \right) + \ldots \right] \label{6.39}$
We retain the second term for electric quadrupole transitions (the interaction of the charge distribution with the gradient of the electric field) and for magnetic dipole transitions (Section 6.7).
Now, using $A _ {0} = i E _ {0} / 2 \omega$, we write Equation \ref{6.35} as
\begin{align} V (t) &= \frac {- i q E _ {0}} {2 m \omega} \left[ \hat {\varepsilon} \cdot \hat {p} e^{- i \omega t} - \hat {\varepsilon} \cdot \hat {p} e^{i \omega t} \right] \label{6.40} \\[4pt] & = \frac {- q E _ {0}} {m \omega} ( \hat {\varepsilon} \cdot \hat {p} ) \sin \omega t \\[4pt] & = \frac {- q} {m \omega} ( \overline {E} (t) \cdot \hat {p} ) \label{6.41} \end{align}
or for a collection of charged particles (molecules):
$V (t) = - \left( \sum _ {j} \frac {q _ {j}} {m _ {j}} \left( \hat {\varepsilon} \cdot \hat {p} _ {j} \right) \right) \frac {E _ {0}} {\omega} \sin \omega t \label{6.42}$
This is the interaction Hamiltonian in the electric dipole approximation.
In Equation \ref{6.39}, the second term must be retained in certain cases, where the variation of the vector potential over the dimensions of the molecule must be considered. This will be the case when one describes interactions with short wavelength radiation, such as x-rays. Then the scattering of radiation by electronic states of molecules and the interference between transmitted and scattered fields are important. The second term is also retained for electric quadrupole transitions and magnetic dipole transitions, as described in the appendix in Section 6.7. Electric quadrupole transitions require a gradient of the electric field across the molecule, and are generally an effect that is $\sim 10^{-3}$ of the electric dipole interaction.
Transition Dipole Matrix Elements
We are seeking to use this Hamiltonian to evaluate the transition rates induced by $V(t)$ from our first-order perturbation theory expression. For a perturbation
$V (t) = V _ {0} \sin \omega t$
the rate of transitions induced by field is
$w _ {k \ell} = \frac {\pi} {2 \hbar} \left| V _ {k \ell} \right|^{2} \left[ \delta \left( E _ {k} - E _ {\ell} - \hbar \omega \right) + \delta \left( E _ {k} - E _ {\ell} + \hbar \omega \right) \right] \label{6.43}$
which depends on the matrix elements for the Hamiltonian in Equation \ref{6.42}. Note that in first-order perturbation theory the matrix elements are evaluated using the unperturbed wavefunctions. Thus, we evaluate the matrix elements of the electric dipole Hamiltonian using the eigenfunctions of $H_0$:
$V _ {k \ell} = \left\langle k \left| V _ {0} \right| \ell \right\rangle = \frac {- q E _ {0}} {m \omega} \langle k | \hat {\varepsilon} \cdot \hat {p} | \ell \rangle \label{6.44}$
We can evaluate $\langle k | \overline {p} | \ell \rangle$ using an expression that holds for any one-particle Hamiltonian:
$\left[ \hat {r} , \hat {H} _ {0} \right] = \frac {i \hbar \hat {p}} {m} \label{6.45}$
This expression gives
\begin{align} \langle k | \hat {p} | \ell \rangle & = \frac {m} {i \hbar} \left\langle k \left| \hat {r} \hat {H} _ {0} - \hat {H} _ {0} \hat {r} \right| \ell \right\rangle \[4pt] & = \frac {m} {i \hbar} \left( \langle k | \hat {r} | \ell \rangle E _ {\ell} - E _ {k} \langle k | \hat {r} | \ell \rangle \right) \[4pt] & = i m \omega _ {k \ell} \langle k | \hat {r} | \ell \rangle \label{6.46} \end{align}
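Equation \ref{6.46} can be checked numerically. The sketch below (a harmonic oscillator example that is not part of the original notes, with $\hbar = m = \omega = 1$) builds $\hat{x}$ and $\hat{p}$ in a truncated number basis and verifies $\langle k|\hat{p}|\ell\rangle = i m \omega_{k\ell}\langle k|\hat{x}|\ell\rangle$ for a low-lying pair of states:

```python
import numpy as np

# Minimal sketch (harmonic oscillator, hbar = m = omega = 1): check the identity of
# Eq. (6.46), <k|p|l> = i m w_kl <k|x|l>, using matrix representations in a truncated
# number basis. Only low-lying elements are compared, away from the truncation edge.
N = 30
n = np.arange(N - 1)
a = np.diag(np.sqrt(n + 1), k=1)            # lowering operator
x = (a + a.T) / np.sqrt(2.0)                # position operator
p = 1j * (a.T - a) / np.sqrt(2.0)           # momentum operator
E = np.arange(N) + 0.5                      # eigenenergies of H0

k, l = 3, 4
w_kl = E[k] - E[l]                          # transition frequency (hbar = 1)
print(np.allclose(p[k, l], 1j * w_kl * x[k, l]))   # True
```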
So we have
$V _ {k \ell} = - i q E _ {0} \frac {\omega _ {k \ell}} {\omega} \langle k | \hat {\varepsilon} \cdot \overline {r} | \ell \rangle \label{6.47}$
or for many charged particles
$V _ {k \ell} = - i E _ {0} \frac {\omega _ {k \ell}} {\omega} \left\langle k \left| \hat {\varepsilon} \cdot \sum _ {j} q _ {j} \hat {r} _ {j} \right| \ell \right\rangle \label{6.48}$
The matrix element can be written in terms of the dipole operators, which describes the spatial distribution of charges,
$\hat {\mu} = \sum _ {j} q _ {j} \hat {r} _ {j} \label{6.49}$
We can see that it is the quantum analog of the classical dipole moment, which describes the distribution of charge density $\rho$ in the molecule:
$\overline {\mu} = \int d \overline {r} \overline {r} \rho ( \overline {r} ) \label{6.50}$
The strength of interaction between light and matter is given by the matrix element in the dipole operator,
$\mu _ {f i} \equiv \langle f | \overline {\mu} \cdot \hat {\varepsilon} | i \rangle \label{6.51}$
which is known as the transition dipole moment. In order to have absorption, the part $\langle f | \mu | i \rangle$, which is a measure of the change of charge distribution between $| f \rangle$ and $| i \rangle$, should be non-zero. In other words, the incident radiation has to induce a change in the charge distribution of matter to get an effective absorption rate. This matrix element is the basis of selection rules based on the symmetry of the matter charge eigenstates. The second part, namely the electric field polarization vector, says that the electric field of the incident radiation must project onto the matrix elements of the dipole moment between the final and initial states of the charge distribution.
Then the matrix elements in the electric dipole Hamiltonian are
$V _ {k \ell} = - i E _ {0} \frac {\omega _ {k \ell}} {\omega} \mu _ {k l} \label{6.52}$
This expression allows us to write in a simplified form the well-known interaction potential for a dipole in a field:
$V (t) = - \overline {\mu} \cdot \overline {E} (t) \label{6.53}$
Note that we have reversed the order of terms because they commute. This leads to an expression for the rate of transitions between quantum states induced by the light field:
\begin{align} w _ {k \ell} & = \frac {\pi} {2 \hbar} \left| E _ {0} \right|^{2} \frac {\omega _ {k \ell}^{2}} {\omega^{2}} \left| \overline {\mu} _ {k \ell} \right|^{2} \left[ \delta \left( E _ {k} - E _ {\ell} - \hbar \omega \right) + \delta \left( E _ {k} - E _ {\ell} + \hbar \omega \right) \right] \\[4pt] & = \frac {\pi} {2 \hbar^{2}} \left| E _ {0} \right|^{2} \left| \overline {\mu} _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{6.54} \end{align}
In essence, Equation \ref{6.54} is an expression for the absorption and emission spectrum since the rate of transitions can be related to the power absorbed from or added to the light field. More generally, we would express the spectrum in terms of a sum over all possible initial and final states, the eigenstates of $H_0$:
$w _ {f i} = \sum _ {i , f} \frac {\pi} {\hbar^{2}} \left| E _ {0} \right|^{2} \left| \mu _ {f i} \right|^{2} \left[ \delta \left( \omega _ {f i} - \omega \right) + \delta \left( \omega _ {f i} + \omega \right) \right] \label{6.55}$
7.04: Relaxation and Line-Broadening
Let’s describe absorption to a state that is coupled to a continuum. What happens to the probability of absorption if the excited state decays exponentially?
We can start with the first-order expression
$\frac {\partial} {\partial t} b _ {k} = - \frac {i} {\hbar} e^{i \omega _ {k l} t} V _ {k \ell} (t) \label{6.56}$
where we make the approximation $b _ {\ell} (t) \approx 1$. We can add irreversible relaxation to the description of $b_k$ using our earlier expression for the relaxation of
$b _ {k} (t) = \exp \left[ - \overline {w} _ {n k} t / 2 - i \Delta E _ {k} t / \hbar \right].$
In this case, we will neglect the correction to the energy $\Delta E _ {k} = 0$, so
$\frac {\partial} {\partial t} b _ {k} = - \frac {i} {\hbar} e^{i \omega _ {k \ell} t} V _ {k \ell} (t) - \frac {\overline {w} _ {n k}} {2} b _ {k} \label{6.57}$
Or using $V (t) = - i E _ {0} \overline {\mu} _ {k l} \sin \omega t$
\begin{align} \frac {\partial} {\partial t} b _ {k} & = \frac {- i} {\hbar} e^{i \omega _ {k \ell} t} \sin ( \omega t ) \, V _ {k \ell} - \frac {\overline {w} _ {n k}} {2} b _ {k} (t) \\[4pt] & = \frac {E _ {0} \omega _ {k \ell}} {2 i \hbar \omega} \left[ e^{i \left( \omega _ {k \ell} + \omega \right) t} - e^{i \left( \omega _ {k \ell} - \omega \right) t} \right] \overline {\mu} _ {k \ell} - \frac {\overline {w} _ {n k}} {2} b _ {k} (t) \label{6.58} \end{align}
The solution to the differential equation
$\dot {y} + a y = b e^{i \alpha t} \label{6.59}$
is
$y (t) = A e^{- a t} + \frac {b e^{i \alpha t}} {a + i \alpha} \label{6.60}$
with
$b _ {k} (t) = A e^{- \overline {w} _ {n k} t / 2} + \frac {E _ {0} \overline {\mu} _ {k \ell}} {2 i \hbar} \left[ \frac {e^{i \left( \omega _ {k \ell} + \omega \right) t}} {\overline {w} _ {n k} / 2 + i \left( \omega _ {k \ell} + \omega \right)} - \frac {e^{i \left( \omega _ {k \ell} - \omega \right) t}} {\overline {w} _ {n k} / 2 + i \left( \omega _ {k \ell} - \omega \right)} \right] \label{6.61}$
Let’s look at absorption only, in the long time limit:
$b _ {k} (t) = \frac {E _ {0} \overline {\mu} _ {k \ell}} {2 \hbar} \left[ \frac {e^{i \left( \omega _ {k l} - \omega \right) t}} {\omega _ {k \ell} - \omega - i \overline {w} _ {n k} / 2} \right] \label{6.62}$
For which the probability of transition to $k$ is
$P _ {k} = \left| b _ {k} \right|^{2} = \frac {E _ {0}^{2} \left| \mu _ {k \ell} \right|^{2}} {4 \hbar^{2}} \frac {1} {\left( \omega _ {k \ell} - \omega \right)^{2} + \overline {w} _ {n k}^{2} / 4} \label{6.63}$
The frequency dependence of the transition probability has a Lorentzian form:
The full width at half maximum (FWHM) of this Lorentzian is $\overline{w}_{nk}$, the relaxation rate from $k$ into the continuum $n$. Note that the line width is related to the system dynamics rather than to the manner in which we introduced the perturbation. The line width or line shape is an additional feature that we interpret in our spectra, and it commonly originates from irreversible relaxation or other processes that destroy the coherence first set up by the light field.
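A minimal numerical sketch (arbitrary parameters, prefactors dropped; not from the original notes) evaluates the lineshape factor in Equation \ref{6.63} and confirms that its full width at half maximum equals $\overline{w}_{nk}$:

```python
import numpy as np

# Minimal sketch (arbitrary parameters, prefactor dropped): the transition probability
# of Eq. (6.63) versus the detuning (w - w_kl) is a Lorentzian whose full width at half
# maximum equals the relaxation rate w_nk.
w_nk = 0.2                                   # relaxation rate into the continuum
detuning = np.linspace(-2.0, 2.0, 20001)
P = 1.0 / (detuning**2 + w_nk**2 / 4.0)      # lineshape factor from Eq. (6.63)

half_max = P.max() / 2.0
fwhm = detuning[P >= half_max][-1] - detuning[P >= half_max][0]
print(fwhm, w_nk)                            # both ~0.2
```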
The rate of absorption induced by a monochromatic electromagnetic field is
$w _ {k \ell} ( \omega ) = \frac {\pi} {2 \hbar^{2}} \left| E _ {0} ( \omega ) \right|^{2} \left| \langle k | \hat {\varepsilon} \cdot \overline {\mu} | \ell \rangle \right|^{2} \delta \left( \omega _ {k \ell} - \omega \right) \label{6.64}$
The rate is clearly dependent on the strength of the field. The variable that you can most easily measure is the intensity $I$, the energy flux through a unit area, which is the time-averaged value of the Poynting vector, $S$:
$S = \varepsilon _ {0} c^{2} ( \overline {E} \times \overline {B} ) \label{6.65}$
$I = \langle S \rangle = \frac {1} {2} \varepsilon _ {0} c E _ {0}^{2} \label{6.66}$
Using this we can write
$w _ {k \ell} = \frac {\pi} {3 \varepsilon _ {0} c \hbar^{2}} I ( \omega ) | \langle k | \overline {\mu} | \ell \rangle |^{2} \delta \left( \omega _ {k \ell} - \omega \right) \label{6.67}$
where I have also made use of the uniform distribution of polarizations applicable to an isotropic field:
$\left| \overline {E} _ {0} \cdot \hat {x} \right|^{2} = \left| \overline {E} _ {0} \cdot \hat {y} \right|^{2} = \left| \overline {E} _ {0} \cdot \hat {z} \right|^{2} = \frac {1} {3} \left| E _ {0} \right|^{2}.$
Now let’s relate the rates of absorption to a quantity that is directly measured, an absorption cross section $\alpha$:
\begin{align} \alpha &= \frac {\text {total energy absorbed per unit time}} {\text {total incident intensity (energy/unit time/area)}} \label{6.68} \[4pt] &= \frac {\hbar \omega w _ {k \ell}} {I} \end{align}
Note $\alpha$ has units of cm$^2$. The golden rule rate for absorption also gives the same rate for stimulated emission. Given two levels $| m \rangle$ and $| n \rangle$,
$w _ {n m} = w _ {m n}$
$\therefore \left( \alpha _ {A} \right) _ {n m} = \left( \alpha _ {S E} \right) _ {m n} \label{6.69}$
We can now use a phenomenological approach to calculate the change in the intensity of incident light, $I$, due to absorption and stimulated emission as it passes through a sample of length $L$. Given that we have a thermal distribution of identical non-interacting particles with quantum states such that the level $| m \rangle$ is higher in energy than $| n \rangle$:
$\frac {d I} {d x} = - N _ {n} \alpha _ {A} I + N _ {m} \alpha _ {s E} I \label{6.70}$
$\frac {d I} {I} = - \left( N _ {n} - N _ {m} \right) \alpha\, d x \label{6.71}$
Here $N_n$ and $N_m$ are the populations of the lower and upper states, respectively, expressed as number densities (cm$^{-3}$). Note that $I$ and $\alpha$ are both functions of the frequency of the incident light. If $N$ is the molecular density,
$N _ {n} = N \left( \frac {e^{- \beta E _ {n}}} {Z} \right) \label{6.72}$
Integrating Equation \ref{6.71} over a path length $L$, we have
\begin{align} T &= \frac {I} {I _ {0}} \[4pt] &= e^{- \Delta N \alpha L} \label{6.73} \[4pt] &\approx e^{- N \alpha L} \end{align}
We see that the transmission of light through the sample decays exponentially as a function of path length.
$\Delta N = N _ {n} - N _ {m}$
is the thermal population difference between the states. The final approximation in Equation \ref{6.73}, $\Delta N \approx N$, comes from the high-frequency limit applicable to optical spectroscopy, where the upper state carries negligible thermal population. Equation \ref{6.73} can also be written in terms of the familiar Beer–Lambert Law:
$A = - \log \frac {I} {I _ {0}} = \epsilon C L \label{6.74}$
where $A$ is the absorbance and $C$ is the sample concentration in mol L$^{-1}$, which is related to the number density via Avogadro's number $N_A$,
$C \left[ \operatorname {mol} L^{- 1} \right] = \frac {N \left[ c m^{- 3} \right]} {N _ {A}} \times 1,000 \label{6.75}$
In Equation \ref{6.74}, the characteristic molecular quantity that describes the sample's ability to absorb the light is $\epsilon$, the molar decadic extinction coefficient, given in L mol$^{-1}$ cm$^{-1}$. With these units, we see that we can equate $\epsilon$ with the cross section as
$\epsilon = \frac {N _ {A} \alpha} {2303} \label{6.76}$
In the context of sample absorption characteristics, our use of the variable $\alpha$ for the absorption cross section (units of cm$^2$) should not be confused with its other common use as an absorption coefficient (units of cm$^{-1}$), which is equal to $N\alpha$ in Equation \ref{6.73}.
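As a worked numerical example (with hypothetical values for $\epsilon$, $C$, and $L$), the sketch below converts a molar extinction coefficient to a cross section via Equation \ref{6.76}, builds the number density via Equation \ref{6.75}, and confirms that the transmission from Equation \ref{6.73} reproduces the Beer–Lambert absorbance $\epsilon C L$ of Equation \ref{6.74}.

```python
import numpy as np

N_A = 6.022e23          # Avogadro's number, mol^-1

# Hypothetical sample parameters
eps = 5.0e4             # molar extinction coefficient, L mol^-1 cm^-1
C   = 1.0e-5            # concentration, mol L^-1
L   = 1.0               # path length, cm

# Cross section from Equation (6.76): alpha = 2303 * eps / N_A, in cm^2 per molecule
alpha = 2303.0 * eps / N_A
print("cross section alpha = %.3e cm^2" % alpha)

# Number density from Equation (6.75): N (cm^-3) = C * N_A / 1000
N = C * N_A / 1000.0

# Transmission from Equation (6.73) in the high-frequency limit (Delta N ~ N)
T = np.exp(-N * alpha * L)
A = -np.log10(T)        # absorbance, Equation (6.74)
print("T = %.3f, A = %.3f, eps*C*L = %.3f" % (T, A, eps * C * L))
```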
These relationships also allow us to obtain the magnitude of the transition dipole matrix element from absorption spectra by integrating over the absorption line shape:
\begin{align} \left| \mu _ {i f} \right|^{2} &= \frac {6 \varepsilon _ {0} \hbar^{2} 2303 c} {N _ {A} n} \int \frac {\varepsilon ( v )} {v} d v \label{6.77} \[4pt] &= \left( 108.86\, L \,mol^{- 1}\, cm^{-1}D^{-2} \right)^{- 1} \int \frac {\varepsilon ( v )} {v} d v \end{align}
Here the absorption line shape is expressed in molar decadic units and the frequency in wavenumbers.
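The sketch below illustrates this prescription for a hypothetical Lorentzian absorption band: integrating $\varepsilon(\nu)/\nu$ over the band in wavenumbers and applying the second form of Equation \ref{6.77} returns the transition dipole magnitude in Debye.

```python
import numpy as np

# Hypothetical Lorentzian absorption band in molar decadic units,
# plotted against wavenumber (cm^-1)
nu = np.linspace(8000.0, 12000.0, 8001)          # cm^-1
nu0, fwhm, eps_max = 10000.0, 200.0, 1.0e4       # band center, width, peak epsilon
eps = eps_max * (fwhm / 2)**2 / ((nu - nu0)**2 + (fwhm / 2)**2)

# Integrated band intensity of eps(nu)/nu, in L mol^-1 cm^-1
integral = np.trapz(eps / nu, nu)

# Second form of Equation (6.77): |mu|^2 in Debye^2
mu_sq = integral / 108.86
print("|mu_if| = %.2f D" % np.sqrt(mu_sq))       # transition dipole for this synthetic band
```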
Readings
1. Herzberg, G., Molecular Spectra and Molecular Structure: Infrared and Raman Spectra of Polyatomic Molecules. Prentice-Hall: New York, 1939; Vol. II, p. 261.
2. McHale, J. L., Molecular Spectroscopy. 1st ed.; Prentice Hall: Upper Saddle River, NJ, 1999.
7.06: Appendix - Review of Free Electromagnetic Field
Here we review the derivation of the vector potential for the plane wave in free space. We begin with Maxwell’s equations (SI):
\begin{align} \overline {\nabla} \cdot \overline {B} &= 0 \label{6.78} \[4pt] \overline {\nabla} \cdot \overline {E} &= \rho / \varepsilon _ {0} \label{6.79} \[4pt] \overline {\nabla} \times \overline {E} &= - \dfrac {\partial \overline {B}} {\partial t} \label{6.80} \[4pt] \overline {\nabla} \times \overline {B} &= \mu _ {0} \overline {J} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial \overline {E}} {\partial t} \label{6.81} \end{align}
Here the variables are: $\overline {E}$, electric field; $\overline {B}$, magnetic field; $\overline {J}$, current density; $\rho$, charge density; $\varepsilon _ {0}$, electric permittivity of free space; $\mu _ {0}$, magnetic permeability of free space. We are interested in describing $\overline {E}$ and $\overline {B}$ in terms of a vector and a scalar potential, $\overline {A}$ and $\varphi$.
Next, let’s review some basic properties of vectors and scalars. Generally, a vector field $\overline {F}$ assigns a vector to each point in space. The divergence of the field
$\overline {\nabla} \cdot \overline {F} = \dfrac {\partial F _ {x}} {\partial x} + \dfrac {\partial F _ {y}} {\partial y} + \dfrac {\partial F _ {z}} {\partial z} \label{6.82}$
is a scalar. For a scalar field $\phi$, the gradient
$\nabla \phi = \dfrac {\partial \phi} {\partial x} \hat {x} + \dfrac {\partial \phi} {\partial y} \hat {y} + \dfrac {\partial \phi} {\partial z} \hat {z} \label{6.83}$
is a vector describing the direction and magnitude of the rate of change of $\phi$ at a point in space. Here
$\hat {x}$, $\hat {y}$, and $\hat {z}$
are unit vectors. Also, the curl
$\overline {\nabla} \times \overline {F} = \left| \begin{array} {l l l} {\hat {x}} & {\hat {y}} & {\hat {z}} \ {\dfrac {\partial} {\partial x}} & {\dfrac {\partial} {\partial y}} & {\dfrac {\partial} {\partial z}} \ {F _ {x}} & {F _ {y}} & {F _ {z}} \end{array} \right|$
is a vector whose $x$, $y$, and $z$ components describe the circulation of the field about the corresponding axis. Some useful identities from vector calculus that we will use are
\begin{align} \overline {\nabla} \cdot ( \overline {\nabla} \times \overline {F} ) &= 0 \label{6.85} \[4pt] \nabla \times ( \nabla \phi ) &= 0 \label{6.86} \[4pt] \nabla \times ( \overline {\nabla} \times \overline {F} ) &= \overline {\nabla} ( \overline {\nabla} \cdot \overline {F} ) - \overline {\nabla}^{2} \overline {F} \label{6.87} \end{align}
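These identities can also be verified symbolically; the following sketch (an aside to the text, using SymPy's vector module) checks Equations \ref{6.85} and \ref{6.86} for generic smooth fields.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')

# Generic smooth scalar and vector fields built from arbitrary functions of (x, y, z)
f = sp.Function('f')(R.x, R.y, R.z)
F = (sp.Function('Fx')(R.x, R.y, R.z) * R.i
     + sp.Function('Fy')(R.x, R.y, R.z) * R.j
     + sp.Function('Fz')(R.x, R.y, R.z) * R.k)

print(divergence(curl(F)))   # Equation (6.85): should print 0
print(curl(gradient(f)))     # Equation (6.86): should print 0 (the zero vector)
```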
Gauge Transforms
We now introduce a vector potential $\overline {A} ( \overline {r} , t )$ and a scalar potential $\varphi ( \overline {r} , t )$, which we will relate to $\overline {E}$ and $\overline {B}$. Since
$\overline {\nabla} \cdot \overline {B} = 0$
and
$\overline {\nabla} \cdot ( \overline {\nabla} \times \overline {A} ) = 0,$
we can immediately relate the vector potential and magnetic field
$\overline {B} = \overline {\nabla} \times \overline {A} \label{6.88}$
Inserting this into Equation \ref{6.80} and rewriting, we can relate the electric field and vector potential:
$\overline {\nabla} \times \left[ \overline {E} + \dfrac {\partial \overline {A}} {\partial t} \right] = 0 \label{6.89}$
Comparing Equations \ref{6.89} and \ref{6.86} allows us to state that there exists a scalar function $\varphi$ such that
$\overline {E} = - \dfrac {\partial \overline {A}} {\partial t} - \nabla \varphi \label{6.90}$
So summarizing our results, we see that the potentials $\overline {A}$ and $\varphi$ determine the fields $\overline {B}$ and $\overline {E}$:
\begin{align} \overline {B} ( \overline {r} , t ) &= \overline {\nabla} \times \overline {A} ( \overline {r} , t ) \label{6.91} \[4pt] \overline {E} ( \overline {r} , t ) &= - \overline {\nabla} \varphi ( \overline {r} , t ) - \dfrac {\partial} {\partial t} \overline {A} ( \overline {r} , t ) \label{6.92} \end{align}
We are interested in determining the classical wave equation for $\overline {A}$ and $\varphi$. Using Equation \ref{6.91}, differentiating Equation \ref{6.92}, and substituting into Equation \ref{6.81}, we obtain
$\overline {\nabla} \times ( \overline {\nabla} \times \overline {A} ) + \varepsilon _ {0} \mu _ {0} \left( \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} + \overline {\nabla} \dfrac {\partial \varphi} {\partial t} \right) = \mu _ {0} \overline {J} \label{6.93}$
Using Equation \ref{6.87},
$\left[ - \overline {\nabla}^{2} \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} \right] + \overline {\nabla} \left( \overline {\nabla} \cdot \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial \varphi} {\partial t} \right) = \overline {\mu} _ {0} \overline {J} \label{6.94}$
From Equation \ref{6.90}, we have
$\overline {\nabla} \cdot \overline {E} = - \dfrac {\partial \overline {\nabla} \cdot \overline {A}} {\partial t} - \overline {\nabla}^{2} \varphi \label{6.95}$
and using Equation \ref{6.79},
$- \dfrac {\partial \left( \overline {\nabla} \cdot \overline {A} \right)} {\partial t} - \overline {\nabla}^{2} \varphi = \rho / \varepsilon _ {0} \label{6.96}$
Notice from Equations \ref{6.91} and \ref{6.92} that we only need to specify four field components ($A_{x}, A_{y}, A_{z}, \varphi$) to determine all six $\bar{E}$ and $\bar{B}$ components. But $\bar{E}$ and $\bar{B}$ do not uniquely determine $\bar{A}$ and $\varphi$. So we can construct $\bar{A}$ and $\varphi$ in any number of ways without changing $\bar{E}$ and $\bar{B}$. Notice that if we change $\bar{A}$ by adding $\bar{\nabla} \chi$, where $\chi$ is any function of $\bar{r}$ and $t$, this will not change $\bar{B}$ (since $\nabla \times(\nabla \chi)=0$). It will change $\bar{E}$ by $\left(-\frac{\partial}{\partial t} \bar{\nabla} \chi\right)$, but we can compensate by changing $\varphi$ to $\varphi^{\prime}=\varphi-(\partial \chi / \partial t)$. Then $\bar{E}$ and $\bar{B}$ will both be unchanged. This freedom to change the representation (gauge) without changing $\bar{E}$ and $\bar{B}$ is gauge invariance. We can define a gauge transformation with
$\bar{A}^{\prime}(\bar{r}, t)=\bar{A}(\bar{r}, t)+\bar{\nabla} \chi(\bar{r}, t) \label{6.97}$
$\varphi^{\prime}(\bar{r}, t)=\varphi(\bar{r}, t)-\frac{\partial}{\partial t} \chi(\bar{r}, t) \label{6.98}$
Up to this point, $\bar{A}^{\prime}$ and $\varphi^{\prime}$ are undetermined. Let’s choose a $\chi$ such that:
$\overline {\nabla} \cdot \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial \varphi} {\partial t} = 0 \label{6.99}$
which is known as the Lorentz condition. Then, from Equation \ref{6.94}:
$- \nabla^{2} \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} = \mu _ {0} \overline {J} \label{6.100}$
The right hand side of this equation can be set to zero when no currents are present. From Equation \ref{6.96}, we have:
$\varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \varphi} {\partial t^{2}} - \nabla^{2} \varphi = \dfrac {\rho} {\varepsilon _ {0}} \label{6.101}$
Equations \ref{6.100} and \ref{6.101} are wave equations for $\overline {A}$ and $\varphi$. Within the Lorentz gauge, we can still arbitrarily add another $\chi$; it must only satisfy Equation \ref{6.99}. If we substitute Equations \ref{6.97} and \ref{6.98} into Equation \ref{6.99}, we see
$\nabla^{2} \chi - \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \chi} {\partial t^{2}} = 0 \label{6.102}$
So we can make further choices/constraints on $\bar{A}$ and $\varphi$ as long as $\chi$ obeys Equation \ref{6.102}. We now choose $\varphi=0$, the Coulomb gauge, and from Equation \ref{6.99} we see
$\overline {\nabla} \cdot \overline {A} = 0 \label{6.103}$
So the wave equation for our vector potential when the field is far from any currents ($\overline{J} = 0$) is
$- \overline {\nabla}^{2} \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} = 0 \label{6.104}$
The solutions to this equation are plane waves:
$\overline {A} = \overline {A} _ {0} \sin ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) \label{6.105}$
where $\alpha$ is a phase and $\overline{k}$ is the wave vector, which points along the direction of propagation and has a magnitude
$k^{2} = \omega^{2} \mu _ {0} \varepsilon _ {0} = \omega^{2} / c^{2} \label{6.106}$
Since $\overline {\nabla} \cdot \overline {A} = 0$ (Equation \ref{6.103}), then
$- \overline {k} \cdot \overline {A} _ {0} \cos ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) = 0$
and
$\overline {k} \cdot \overline {A} _ {0} = 0 \label{6.107}$
So the direction of the vector potential is perpendicular to the direction of wave propagation ($\overline {k} \perp \overline {A _ {0}}$). From Equations \ref{6.91} and \ref{6.92}, we see that for $\varphi = 0$:
\begin{align} \overline {E} &= - \dfrac {\partial \overline {A}} {\partial t} \[4pt] &= - \omega \overline {A} _ {0} \cos ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) \label{6.108} \[4pt] \overline {B} &= \overline {\nabla} \times \overline {A} \[4pt] &= - \left( \overline {k} \times \overline {A} _ {0} \right) \cos ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) \label{6.109} \end{align}
Here the electric field is parallel with the vector potential, and the magnetic field is perpendicular to the electric field and the direction of propagation ($\overline {k} \perp \overline {E} \perp \overline {B}$). The Poynting vector describing the direction of energy propagation is
$\overline {S} = \varepsilon _ {0} c^{2} ( \overline {E} \times \overline {B} )$
and its average value, the intensity, is
$I = \langle S \rangle = \dfrac {1} {2} \varepsilon _ {0} c E _ {0}^{2}.$
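As a consistency check on these results (using illustrative SI values, not values from the text), the sketch below evaluates the fields of Equations \ref{6.108}-\ref{6.109} for a plane wave propagating along $z$ and polarized along $x$, and verifies that the cycle-averaged Poynting vector reproduces $I = \tfrac{1}{2}\varepsilon_0 c E_0^2$.

```python
import numpy as np

eps0, c = 8.854e-12, 2.998e8            # SI constants
E0 = 1.0e5                              # hypothetical field amplitude, V/m
omega = 2 * np.pi * 5.0e14              # optical frequency, rad/s

t = np.linspace(0.0, 2 * np.pi / omega, 10001)
# Fields at r = 0 from Equations (6.108)-(6.109): E along x, B along y
Ex = -E0 * np.cos(omega * t)
By = -(E0 / c) * np.cos(omega * t)      # |k x A0| = k A0 = E0 / c

Sz = eps0 * c**2 * Ex * By              # z component of the Poynting vector
print("<S> =", Sz.mean())
print("eps0*c*E0^2/2 =", 0.5 * eps0 * c * E0**2)
```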
7.07: Appendix - Magnetic Dipole and Electric Quadrupole Transitions
The second term in the expansion in eq. (6.39) leads to magnetic dipole and electric quadrupole transitions, which we will describe here. The interaction potential is
$V^{(2)}(t)=-\frac{q}{m}\left[i A_{0}(\hat{\varepsilon} \cdot \bar{p})(\bar{k} \cdot \bar{r}) e^{-i \omega t}-i A_{0}^{*}(\hat{\varepsilon} \cdot \bar{p})(\bar{k} \cdot \bar{r}) e^{i \omega t}\right]$
We can use the identity
\begin{aligned} (\hat{\varepsilon} \cdot \bar{p})(\bar{k} \cdot \bar{r}) &=\hat{\varepsilon} \cdot(\overline{p r}) \cdot \bar{k} \ &=\frac{1}{2} \hat{\varepsilon}(\overline{p r}-\overline{r p}) \bar{k}+\frac{1}{2} \hat{\varepsilon}(\overline{p r}+\overline{r p}) \bar{k} \end{aligned}
to separate V(t) into two distinct light–matter interaction terms:
$V^{(2)}(t)=V_{m a g}^{(2)}(t)+V_{Q}^{(2)}(t) \label{6.112}$
$V_{m a g}^{(2)}(t)=\frac{-i q}{2 m} \hat{\varepsilon} \cdot(\overline{p r}-\overline{r p}) \cdot \bar{k}\left(A_{0} e^{-i \omega t}+A_{0}^{*} e^{i \omega t}\right) \label{6.113}$
$V_{Q}^{(2)}(t)=\frac{-i q}{2 m} \hat{\varepsilon} \cdot(\overline{p r}+\overline{r p}) \cdot \bar{k}\left(A_{0} e^{-i \omega t}+A_{0}^{*} e^{i \omega t}\right) \label{6.114}$
where the first $V_{\operatorname{mag}}^{(2)}$ gives rise to magnetic dipole transitions, and the second $V_{Q}^{(2)}$ leads to electric quadrupole transitions.
For the notation above, $\overline{p r}$ represents an outer product (tensor product $\bar{p} \otimes \bar{r}$), so that
$\hat{\varepsilon} \cdot(\overline{p r}) \cdot \bar{k}=\left(\begin{array}{lll} \varepsilon_{x} & \varepsilon_{y} & \varepsilon_{z} \end{array}\right)\left(\begin{array}{ccc} p_{x} r_{x} & p_{x} r_{y} & p_{x} r_{z} \ p_{y} r_{x} & p_{y} r_{y} & p_{y} r_{z} \ p_{z} r_{x} & p_{z} r_{y} & p_{z} r_{z} \end{array}\right)\left(\begin{array}{c} k_{x} \ k_{y} \ k_{z} \end{array}\right)$
This expression is meant to imply that the component of $\bar{r}$ that lies along $\bar{k}$ can influence the magnitude of $\bar{p}$ along $\hat{\varepsilon}$. Alternatively this term could be written $\sum_{a, b=x, y, z} \varepsilon_{a}\left(p_{a} r_{b}\right) k_{b}$. These interaction potentials can be simplified and made more intuitive. Considering first Equation \ref{6.113}, we can use the vector identity $(\bar{A} \times \bar{B}) \cdot(\bar{C} \times \bar{D})=(\bar{A} \cdot \bar{C})(\bar{B} \cdot \bar{D})-(\bar{A} \cdot \bar{D})(\bar{B} \cdot \bar{C})$ to show
\begin{aligned} \frac{1}{2} \hat{\varepsilon} \cdot(\overline{p r}-\overline{r p}) \cdot \bar{k} &=\frac{1}{2}[(\hat{\varepsilon} \cdot \bar{p})(\bar{r} \cdot \bar{k})-(\hat{\varepsilon} \cdot \bar{r})(\bar{p} \cdot \bar{k})]=\frac{1}{2}[(\bar{k} \times \hat{\varepsilon}) \cdot(\bar{r} \times \bar{p})] \ &=\frac{1}{2}(\bar{k} \times \hat{\varepsilon}) \cdot \bar{L} \end{aligned} \label{6.116}
For electronic spectroscopy, $\bar{L}$ is the orbital angular momentum. Since the vector $\bar{k} \times \hat{\varepsilon}$ describes the direction of the magnetic field $\bar{B}$, and since $A_{0}=B_{0} / 2 i k$,
$V_{m d}^{(2)}(t)=\frac{-q}{2 m} \bar{B}(t) \cdot \bar{L} \quad \bar{B}(t)=\bar{B}_{0} \cos \omega t$
More generally, $\bar{B} \cdot \bar{L}$ becomes $\bar{B} \cdot(\bar{L}+2 \bar{S})$ when considering the spin degrees of freedom. In the case of an electron,
$\frac{q \bar{L}}{m}=\frac{2 c}{\hbar} \beta \bar{L}=\frac{2 c}{\hbar} \bar{\mu}_{m a g}$
where the Bohr magneton is $\beta = e \hbar / 2 m c$ and $\bar{\mu}_{m a g}=\beta \bar{L}$ is the magnetic dipole operator. So we have the form for the magnetic dipole interaction
$V_{m a g}^{(2)}(t)=-\frac{c}{\hbar} \bar{B}(t) \cdot \bar{\mu}_{m a g}$
For electric quadrupole transitions, one can simplify Equation \ref{6.114} by evaluating matrix elements of the operator $(\overline{p r}+\overline{r p})$:
$\overline{p r}+\overline{r p}=\frac{i m}{\hbar}\left[\left[H_{0}, \bar{r}\right] \bar{r}-\bar{r}\left[\bar{r}, H_{0}\right]\right]=\frac{-i m}{\hbar}\left[\bar{r} \bar{r}, H_{0}\right]$
and
$V_{Q}^{(2)}(t)=\frac{-q}{2 \hbar} \hat{\varepsilon} \cdot\left[\bar{r} \bar{r}, H_{0}\right] \cdot \bar{k}\left(A_{0} e^{-i \omega t}+A_{0}^{*} e^{i \omega t}\right) \label{6.121}$
Here $\bar{r} \bar{r}$ is an outer product of vectors. For a system of many charges (i), we define the quadrupole moment, a traceless second rank tensor
$\begin{array}{l} \bar{Q}=\sum_{i} q_{i} \bar{r} \otimes \bar{r} \ Q_{m n}=\sum_{i} q_{i}\left(3 r_{m i} \cdot r_{n i}-r_{i}^{2} \delta_{m n}\right) \quad m, n=x, y, z \end{array}$
Now, using $A_{0}=E_{0} / 2 i \omega$, Equation \ref{6.121} becomes
$V(t)=-\frac{1}{2 i \hbar \omega} \bar{E}(t) \cdot\left[\overline{\bar{Q}}, H_{0}\right] \cdot \hat{k} \quad \bar{E}(t)=\bar{E}_{0} \cos \omega t \label{6.123}$
Since the matrix element $\left\langle k\left|\left[Q, H_{0}\right]\right| \ell\right\rangle=\hbar \omega_{k \ell} \overline{\bar{Q}}_{k \ell}$, we can write the electric quadrupole transition moment as
\begin{aligned} V_{k \ell} &=\frac{i E_{0} \omega_{k \ell}}{2 \omega}\langle k|\hat{\varepsilon} \cdot \overline{\bar{Q}} \cdot \hat{k}| \ell\rangle \ &=\frac{i E_{0} \omega_{k \ell}}{2 \omega} \overline{\bar{Q}}_{k \ell} \end{aligned}
08: Mixed States and the Density Matrix
Molecules in dense media interact with one another, and as a result no two molecules have the same state. Energy placed into one degree of freedom will ultimately leak irreversibly into its environment. We cannot write down an exact Hamiltonian for these problems; however, we can concentrate on a few degrees of freedom that are observed in a measurement, and try and describe the influence of the surroundings in a statistical manner.
• 8.1: Mixed States
A mixed state refers to any case in which we describe the behavior of an ensemble for which there is initially no phase relationship between the elements of the mixture. Examples include a system at thermal equilibrium and independently prepared states. For mixed states we have imperfect information about the system, and we use statistical averages in order to describe quantum observables.
• 8.2: Density Matrix for a Mixed State
8.01: Mixed States
Conceptually we are now switching gears to develop tools and ways of thinking about condensed phase problems. What we have discussed so far is the time-dependent properties of pure states, the states of a quantum system that can be characterized by a single wavefunction. For pure states one can precisely write the Hamiltonian for all particles and fields in the system of interest. These are systems that are isolated from their surroundings, or isolated systems to which we introduce a time-dependent potential. For describing problems in condensed phases, things are different. Molecules in dense media interact with one another, and as a result no two molecules have the same state. Energy placed into one degree of freedom will ultimately leak irreversibly into its environment. We cannot write down an exact Hamiltonian for these problems; however, we can concentrate on a few degrees of freedom that are observed in a measurement, and try and describe the influence of the surroundings in a statistical manner.
These observations lead to the concept of mixed states or statistical mixtures. A mixed state refers to any case in which we describe the behavior of an ensemble for which there is initially no phase relationship between the elements of the mixture. Examples include a system at thermal equilibrium and independently prepared states. For mixed states we have imperfect information about the system, and we use statistical averages in order to describe quantum observables.
How does a system get into a mixed state? Generally, if you have two systems and you put these in contact with each other, the interaction between the two will lead to a new system that is inseparable. Consider two systems $H_S$ and $H_B$ for which the eigenstates of $H_S$ are $| n \rangle$ and those of $H_B$ are $| \alpha \rangle$.
$H _ {0} = H _ {S} + H _ {B}$
$\left. \begin{array} {l} {H _ {S} | n \rangle = E _ {n} | n \rangle} \ {H _ {B} | \alpha \rangle = E _ {\alpha} | \alpha \rangle} \end{array} \right. \label{0.2}$
Before these systems interact, the state of the system $| \psi _ {0} \rangle$ can be described as product states of $| n \rangle$ and $| \alpha \rangle$.
$| \psi _ {0} \rangle = | \psi _ {S}^{0} \rangle | \psi _ {B}^{0} \rangle \label{0.3}$
$| \psi _ {S}^{0} \rangle = \sum _ {n} s _ {n} | n \rangle$
$| \psi _ {B}^{0} \rangle = \sum _ {\alpha} b _ {\alpha} | \alpha \rangle$
$| \psi _ {0} \rangle = \sum _ {n , \alpha} s _ {n} b _ {\alpha} | n \alpha \rangle$
where $s$ and $b$ are expansion coefficients. After these states are allowed to interact, we have a new state $| \psi (t) \rangle$. The new state can still be expressed in the zero-order basis, although this does not represent the eigenstates of the new Hamiltonian:
$H = H _ {0} + V \label{0.6}$
$| \psi (t) \rangle = \sum _ {n , \alpha} c _ {n , \alpha} | n \alpha \rangle \label{0.7}$
For any point in time, $c _ {n , \alpha}$ is the complex amplitude for the mixed $| n \alpha \rangle$ state. Generally speaking, at any time after bringing the systems into contact, $c _ {n , \alpha} \neq s _ {n} b _ {\alpha}$. The coefficient $c_{n, \alpha}$ encodes $P _ {n , \alpha} = \left| c _ {n , \alpha} \right|^{2}$, the joint probability for finding the particle of $\left|\psi_{S}\right\rangle$ in state $|n\rangle$ and simultaneously finding the particle of $\left|\psi_{B}\right\rangle$ in state $|\alpha\rangle$. In the case of experimental observables, we are typically able to make measurements on the system $H_S$, and are blind to $H_B$. Then we are interested in the probability of occupying a particular eigenstate of the system averaged over the bath degrees of freedom:
$P _ {n} = \sum _ {\alpha} P _ {n , \alpha} = \sum _ {\alpha} \left| c _ {n , \alpha} \right|^{2} = \left\langle \left| c _ {n} \right|^{2} \right\rangle \label{0.8}$
Now let’s look at the thinking that goes into describing ensembles. Imagine a room temperature solution of molecules dissolved in a solvent. The same molecular Hamiltonian and wavefunctions can be used to express the state of any molecule in the ensemble. However, the details of the amplitudes of the eigenstates at any moment will also depend on the time-dependent local environment.
We will describe this problem with the help of a molecular Hamiltonian $H_{m o l}^{(j)}$, which describes the state of the molecule j within the solution through the wavefunction $\left|\psi^{(j)}\right\rangle$. We also have a Hamiltonian for the liquid $H_{l i q}$ into which we wrap all of the solvent degrees of freedom. The full Hamiltonian for the solution can be expressed in terms of a sum over N solute molecules and the liquid, the interactions between solute molecules $H_{i n t}$, and any solute-solvent interactions $H_{mol-liq}$:
$\overline {H} = \sum _ {j = 1}^{N} H _ {m o l}^{( j )} + H _ {l i q} + \sum _ {j , k = 1 \atop j > k}^{N} H _ {\text {int}}^{( j , k )} + \sum _ {j = 1}^{N} H _ {m o l - l i q}^{( j )} \label{0.9}$
For our purposes, we take the molecular Hamiltonian to be the same for all solute molecules, i.e., $H_{m o l}^{(j)}=H_{m o l}$ which obeys a TISE
$H _ {m o l} | \psi _ {n} \rangle = E _ {n} | \psi _ {n} \rangle \label{0.10}$
We will express the state of each molecule in this isolated molecule eigenbasis. For the circumstances we are concerned with, where there are no interactions or correlations between solute molecules, we are allowed to neglect $H_{int}$. Implicit in this statement is that we believe there is no quantum mechanical phase relationship between the different solute molecules. We will also drop $H_{liq}$, since it is not the focus of our interests and will not influence the conclusions. We can therefore write the Hamiltonian for any individual molecule as
$H^{( j )} = H _ {m o l} + H _ {m o l - l i q}^{( j )} \label{0.11}$
and the statistically averaged Hamiltonian
$\overline {H} = \frac {1} {N} \sum _ {j = 1}^{N} H^{( j )} = H _ {m o l} + \left\langle H _ {m o l - l i q} \right\rangle \label{0.12}$
This Hamiltonian reflects an ensemble average of the molecular Hamiltonian under the influence of a varying solute–solvent interaction. To describe the state of any particular molecule, we can define a molecular wavefunction $\left|\psi^{(j)}\right\rangle$, which we express as an expansion in the isolated molecule eigenstates,
$| \psi^{( j )} \rangle = \sum _ {n} c _ {n}^{( j )} | \psi _ {n} \rangle \label{0.13}$
Here the expansion coefficients vary by molecule because of their interaction with the liquid, but they are all expressed in terms of the isolated molecule eigenstates. Note that this expansion is in essence the same as Equation \ref{0.7}, with the association $c _ {n}^{( j )} \Leftrightarrow c _ {n , \alpha}$. In either case, the mixed state arises from varying interactions with the environment. These may be static and appear from ensemble averaging, or time-dependent and arise from fluctuations in the environment. Recognizing the independence of different molecules, the wavefunction for the complete system $|\Psi\rangle$ can be expressed in terms of the wavefunctions for the individual molecules under the influence of their local environment $\left|\psi^{(j)}\right\rangle$:
$\overline {H} | \Psi \rangle = \overline {E} | \Psi \rangle \label{0.14}$
$| \Psi \rangle = | \psi^{( 1 )} \psi^{( 2 )} \psi^{( 3 )} \cdots \rangle = \prod _ {j = 1}^{N} | \psi^{( j )} \rangle \label{0.15}$
$\overline {E} = \sum _ {j = 1}^{N} E^{( j )} \label{0.16}$
We now turn our attention to expectation values that we would measure in an experiment. First we recognize that for an individual molecule $j$, the expectation value for an internal operator would be expressed
$\left\langle A^{( j )} \right\rangle = \left\langle \psi^{( j )} \left| \hat {A} \left( p _ {j} , q _ {j} \right) \right| \psi^{( j )} \right\rangle \label{0.17}$
This purely quantum mechanical quantity is itself an average. It represents the mean value obtained for a large number of measurements made on an identically prepared system, and reflects the need to average over the intrinsic quantum uncertainties in the position and momenta of particles. In the case of a mixed state, we must also average the expectation value over the ensemble of different molecules. In the case of our solution, this would involve an average of the expectation value over the N molecules.
$\langle \langle A \rangle \rangle = \frac {1} {N} \sum _ {j = 1}^{N} \left\langle A^{( j )} \right\rangle \label{0.18}$
Double brackets are written here to emphasize that conceptually there are two levels of statistics in this average. The first involves the uncertainty over measurements of the same molecule in the identical pure state, whereas the second is an average over variations of the state of the system within an ensemble. However, we will drop this notation when we are dealing with ensembles, and take it as understood that expectation values must be averaged over a distribution. Expanding Equation \ref{0.18} with the use of Equations \ref{0.13} and \ref{0.17} allows us to write
\begin{align} \langle A \rangle &= \frac {1} {N} \sum _ {n , m} \sum _ {j = 1}^{N} c _ {m}^{( j )} \left( c _ {n}^{( j )} \right)^{*} \left\langle \psi _ {n} | \hat {A} | \psi _ {m} \right\rangle \[4pt] &= \sum _ {n , m} \left\langle c _ {m} c _ {n}^{*} \right\rangle \left\langle \psi _ {n} | \hat {A} | \psi _ {m} \right\rangle \label{0.19} \end{align}
The second line simplifies the first by performing the ensemble average over the complex wavefunction amplitudes. We use this average to define a density matrix, or density operator, $\rho$, whose matrix elements are
$\rho _ {m n} = \left\langle c _ {m} c _ {n}^{*} \right\rangle \label{0.20}$
Then the expectation value becomes
\left.\begin{aligned} \langle A \rangle & = \sum _ {n , m} \rho _ {m n} A _ {n m} \ & = \operatorname {Tr} ( \rho \hat {A} ) \end{aligned} \right. \label{0.21}
Here the trace $Tr[...]$ refers to a trace over the diagonal elements of the matrix $\sum_{a}\langle a|\cdots| a\rangle$. Although these matrices were evaluated in the basis of the molecular eigenstates, we emphasize that the definition and evaluation of the density matrix and operator matrix elements are not specific to a particular basis set.
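A small numerical sketch (with a randomly generated ensemble and an arbitrary Hermitian operator, purely for illustration) makes the equivalence of Equations \ref{0.18}-\ref{0.21} concrete: averaging $\langle\psi^{(j)}|\hat A|\psi^{(j)}\rangle$ over the ensemble gives the same number as $\operatorname{Tr}(\rho \hat{A})$ built from $\rho_{mn} = \langle c_m c_n^*\rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, N = 3, 5000                  # 3 molecular eigenstates, N ensemble members

# Random normalized expansion coefficients c_n^(j) for each molecule j
c = rng.normal(size=(N, dim)) + 1j * rng.normal(size=(N, dim))
c /= np.linalg.norm(c, axis=1, keepdims=True)

# An arbitrary Hermitian operator A in the same basis
X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
A = X + X.conj().T

# Ensemble average of the expectation value, Equations (0.17)-(0.18)
direct = np.mean([np.vdot(ci, A @ ci).real for ci in c])

# Density matrix rho_mn = <c_m c_n*>, Equation (0.20), then Tr(rho A), Equation (0.21)
rho = np.einsum('jm,jn->mn', c, c.conj()) / N
via_trace = np.trace(rho @ A).real

print(direct, via_trace)          # identical to numerical precision
```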
Although this is just one example, the principles apply quite generally to mixed states in the condensed phase. The wavefunction is a quantity that is meant to describe a molecular or nanoscale object. To the extent that finite temperature, fluctuations, disorder, and spatial separation ensure that phase relationships are randomized between different nano-environments, one can characterize the molecular properties of condensed phases as mixed states in which ensemble averaging is used to describe the interactions of these molecular environments with their surroundings.
The name density matrix derives from the observation that it plays the quantum role of a probability density. Comparing Equation \ref{0.21} with the statistical determination of the mean value of $A$,
$\langle A \rangle = \sum _ {i = 1}^{M} P \left( A _ {i} \right) A _ {i} \label{0.22}$
we see that $\rho$ plays the role of the probability distribution function $P(A)$. Since $\rho$ is Hermitian, it can be diagonalized, and in this diagonal basis the density matrix elements are in fact the statistical weights or probabilities of occupying a given state of the system.
Returning to our example, comparing Equation \ref{0.22} with Equation \ref{0.18} also implies that $P(A_i) = 1/N$, i.e., that the contribution from each molecule to the average is statistically equivalent. Note also that the state of the system described by Equation \ref{0.15} is a system of fixed energy. So, the probability density in Equation \ref{0.18} indicates that this expression applies to a microcanonical ensemble ($N$, $V$, $E$), in which any realization of a system at fixed energy is equally probable and the statistical weight is the inverse of the number of microstates: $P=1 / \Omega$. In the case of a system in contact with a heat bath at temperature $T$, i.e., the canonical ensemble ($N$, $V$, $T$), we instead express the average in terms of the probability that a member of an ensemble with fixed average energy occupies a state of energy $E$.
8.02: Density Matrix for a Mixed State
Based on the discussion of mixed states in Section 8.1, we are led to define the expectation value of an operator for a mixed state as
$\langle \hat {A} (t) \rangle = \sum _ {j} p _ {j} \langle \psi^{( j )} (t) | \hat {A} | \psi^{( j )} (t) \rangle \label{0.23}$
where $p_j$ is the probability of finding a system in the state defined by the wavefunction $| \psi^{( j )} \rangle$. Correspondingly, the density matrix for a mixed state is defined as:
$\rho (t) \equiv \sum _ {j} p _ {j} | \psi^{( j )} (t) \rangle \langle \psi^{( j )} (t) | \label{0.24}$
For the case of a pure state, only one wavefunction $| \psi^{( k )} \rangle$ specifies the state of the system, and $p _ {j} = \delta _ {j k}$. Then the density matrix is as we described before,
$\rho (t) = | \psi (t) \rangle \langle \psi (t) | \label{0.25}$
with the density matrix elements
\left.\begin{aligned} \rho (t) & {= \sum _ {n , m} c _ {n} (t) c _ {m}^{*} (t) | n \rangle \langle m |} \ & {\equiv \sum _ {n , m} \rho _ {n m} (t) | n \rangle \langle m |} \end{aligned} \right. \label{0.26}
For mixed states, using the separation of system ($a$) and bath ($\alpha$) degrees of freedom that we used above, the expectation value of an operator $A$ can be expressed as
\begin{aligned} \langle A (t) \rangle & = \sum _ {a , \alpha} \sum _ {b , \beta} c _ {a , \alpha}^{*} c _ {b , \beta} \langle a \alpha | \hat{A} | b \beta \rangle \end{aligned} \label{0.27}
Here, the density matrix elements are
$\rho _ {a , \alpha , b , \beta} = c _ {a , \alpha}^{*} c _ {b , \beta},$
We are now in a position where we can average the system quantities over the bath configurations. If we consider that the operator $A$ is only a function of the system coordinates, we can make further simplifications; an example is the dipole operator of a molecule dissolved in a liquid. Then we can average the expectation value of $A$ over the bath degrees of freedom as
\left.\begin{aligned} \langle A (t) \rangle & = \sum _ {a , \alpha} \sum _ {b , \beta} c _ {a , \alpha}^{*} c _ {b , \beta} \langle a | A | b \rangle \delta _ {\alpha , \beta} \ & = \sum _ {a , b} \left( \sum _ {\alpha} c _ {a , \alpha}^{*} c _ {b , \alpha} \right) A _ {a b} \ & \equiv \sum _ {a , b} \left( \rho _ {S} \right) _ {b a} A _ {a b} \ & = T r \left[ \rho _ {S} A \right] \end{aligned} \right. \label{0.28}
Here we have defined a density matrix for the system degrees of freedom (also called the reduced density matrix, $\sigma$)
$\rho _ {s} = | \psi _ {s} \rangle \langle \psi _ {s} | \label{0.29}$
with density matrix elements that traced over the bath states:
$\left\langle b \left| \rho _ {S} \right| a \right\rangle = \sum _ {\alpha} c _ {a , \alpha}^{*} c _ {b , \alpha} \label{0.30}$
The “s” subscript should not be confused with the Schrödinger picture wavefunctions. To relate this to our earlier expression for $\rho$, Equation \ref{0.25}, it is useful to note that the density matrix of the system is obtained by tracing over the bath degrees of freedom:
\left.\begin{aligned} \rho _ {S} & = T r _ {B} ( \rho ) \ & = \sum _ {\alpha} \left\langle \alpha | \rho | \alpha \right\rangle \end{aligned} \right. \label{0.31}
Also, note that for a tensor product of operators
$\operatorname {Tr} ( A \otimes B ) = \operatorname {Tr} ( A ) \operatorname {Tr} ( B ) \label{0.32}$
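As an illustration of the reduced density matrix (using the standard convention $\rho = |\psi\rangle\langle\psi|$ and randomly chosen amplitudes, not values from the text), the sketch below traces a joint system-bath density matrix over the bath index and checks that $\langle A\rangle = \operatorname{Tr}[\rho_S A]$ for a system-only operator, as in Equations \ref{0.28}-\ref{0.31}.

```python
import numpy as np

rng = np.random.default_rng(1)
dim_s, dim_b = 2, 4                     # system states |a>, bath states |alpha>

# Normalized joint-state amplitudes c_{a,alpha}, as in Equation (0.7)
c = rng.normal(size=(dim_s, dim_b)) + 1j * rng.normal(size=(dim_s, dim_b))
c /= np.linalg.norm(c)

# Full density matrix rho_{a alpha, b beta} = c_{a alpha} c*_{b beta}
rho = np.einsum('ax,by->axby', c, c.conj())

# Reduced (system) density matrix: trace over the bath index alpha
rho_s = np.einsum('axbx->ab', rho)

# A system-only operator; compare the direct average with Tr(rho_s A)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
direct = np.einsum('ax,ab,bx->', c.conj(), A, c).real
print(direct, np.trace(rho_s @ A).real)   # equal
```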
To interpret what the system density matrix represents, let’s manipulate it a bit. Since $\rho _ {S}$ is Hermitian, it can be diagonalized by a unitary transformation $T$, where the new eigenbasis $| m \rangle$ represents the mixed states of the original $| \psi _ {S} \rangle$ system.
$\rho _ {S} = \sum _ {m} | m \rangle \rho _ {m m} \langle m | \label{0.33}$
$\sum _ {m} \rho _ {m m} = 1 \label{0.34}$
The density matrix elements represent the probability of occupying state $| m \rangle$, which includes the influence of the bath. To obtain these diagonalized elements, we apply the transformation $T$ to the system density matrix:
\begin{aligned} \left( \rho _ {S} \right) _ {m m} & = \sum _ {a , b} T _ {m b} \left( \rho _ {S} \right) _ {b a} T _ {a m}^{\dagger} \ & = \sum _ {a , b , \alpha} c _ {b , \alpha} T _ {m b} c _ {a , \alpha}^{*} T _ {m a}^{*} \ & = \sum _ {\alpha} f _ {m , \alpha} f _ {m , \alpha}^{*} \ & = \left| f _ {m} \right|^{2} = p _ {m} \geq 0 \end{aligned} \label{0.35}
The quantum mechanical interaction of one system with another causes the system to be in a mixed state after the interaction. The mixed states, which are generally inseparable from the original states, are described by
$| \psi _ {S} \rangle = \sum _ {m} f _ {m} | m \rangle \label{0.36}$
If we only observe a few degrees of freedom, we can calculate observables by tracing over unobserved degrees of freedom. This forms the basis for treating relaxation phenomena.
Readings
1. Blum, K., Density Matrix Theory and Applications. Plenum Press: New York, 1981.
2. Mukamel, S., Principles of Nonlinear Optical Spectroscopy. Oxford University Press: New York, 1995.
09: Irreversible and Random Processes
In condensed phases, intermolecular interactions and collective motions act to modify the state of a molecule in a time-dependent fashion. Liquids, polymers, and other soft matter experience intermolecular interactions that lead to electronic and structural motions. Atoms and molecules in solid form are subject to fluctuations that result from thermally populated phonons and defect states that influence electronic, optical, and transport properties. As a result, the properties and dynamics of an internal variable that we may observe in an experiment are mixed with its surroundings. In studying mixed states we cannot write down an exact Hamiltonian for these problems; however, we can describe the influence of the surroundings in a statistical manner. This requires a conceptual change.
• 9.1: Concepts and Definitions
Perhaps the most significant change between isolated states and condensed matter is the dynamics. From the time-dependent Schrödinger equation, we see that the laws governing the time evolution of isolated quantum mechanical systems are invariant under time reversal. That is, there is no intrinsic directionality to time. If one reverses the sign of time and thereby momenta of objects, we should be able to exactly reverse the motion and propagate the system to where it was at an earlier time.
• 9.2: Thermal Equilibrium
For a statistical mixture at thermal equilibrium, individual molecules can occupy a distribution of energy states.
• 9.3: Fluctuations
Systems at thermal equilibrium are macroscopically time-invariant; however, they are microscopically dynamic, with molecules exploring the range of microstates that are thermally accessible. Local variations in energy result in changes in molecular position, orientation, and structure, and are responsible for the activation events that allow chemical equilibria to be established.
9.01: Concepts and Definitions
As one change to our thinking, we now have to be concerned with ensembles. Most often, we will be concerned with systems in an equilibrium state with a fixed temperature for which many quantum states are accessible to the system. For comparing calculations of pure quantum states to experimental observables on macroscopic samples, we assume that all molecules have been prepared and observed in the same manner, so that the quantum expectation values for the internal operators can be compared directly to experimental observations. For mixed states, we have seen the need to perform an additional layer of averaging over the ensemble in the calculation of expectation values.
Perhaps the most significant change between isolated states and condensed matter is the dynamics. From the time-dependent Schrödinger equation, we see that the laws governing the time evolution of isolated quantum mechanical systems are invariant under time reversal. That is, there is no intrinsic directionality to time. If one reverses the sign of time and thereby the momenta of objects, we should be able to exactly reverse the motion and propagate the system to where it was at an earlier time. This is also the case for classical systems evolving under Newton’s equations of motion. In contrast, when a quantum system is in contact with another system having many degrees of freedom, a definite direction emerges for time, “the arrow of time,” and the system’s dynamics is no longer reversible. In such irreversible systems a well-defined prepared state decays in time to an equilibrium state where energy has been dissipated and phase relationships are lost between the various degrees of freedom.
Additionally, condensed phase systems on a local, microscopic scale all have a degree of randomness or noisiness to their dynamics that represents local fluctuations in energy on the scale of $k _ {B} T$. This behavior is observed even though the equations of motion that govern the dynamics are deterministic. Why? It is because we generally have imperfect knowledge about all of the degrees of freedom influencing the system, or experimentally view its behavior through a highly restricted perspective. For instance, it is common in experiments to observe the behavior of condensed phases through a molecular probe embedded within or under the influence of its surroundings. The physical properties of the probe are intertwined with the dynamics of the surrounding medium, and to us this appears as random behavior, for instance as Brownian motion. Other examples of the appearance of randomness from deterministic equations of motion include weather patterns, financial markets, and biological evolution. So, how do irreversible behavior and random fluctuations, hallmarks of all chemical systems, arise from the deterministic time-dependent Schrödinger equation? This fascinating question will be the central theme in our efforts going forward.
Definitions
Let’s begin by establishing some definitions and language that will be useful for us. We first classify chemical systems of interest as equilibrium or non-equilibrium systems. An equilibrium system is one in which the macroscopic properties (i.e., the intensive variables) are invariant with time, or at least invariant on the time scales over which one executes experiments and observes the system. Further, there are no steady state concentration or energy gradients (currents) in the system. Although they are macroscopically invariant, equilibrium states are microscopically dynamic.
For systems at thermal equilibrium we will describe their time-dependent behavior as dynamically reversible or irreversible. For us, reversible will mean that a system evolves deterministically. Knowledge of the state of the system at one point in time and the equation of motion means that you can describe the state of the system for all points in time later or previously. Irreversible systems are not deterministic. That is, knowledge of the state of the system at one point in time does not provide enough information to precisely determine its past state.
Since all real systems are irreversible in the strictest sense, the distinction is often related to the time scale of observation. For a given system, the dynamics will appear deterministic on a short enough time scale, whereas on very long time scales they appear random. For instance, the dynamics of a dilute gas appear ballistic on time scales short compared to the mean collision time between particles, whereas the motion appears random and diffusive on much longer time scales. Memory refers to the ability to maintain deterministic motion and reversibility, and we will quantify the decay of memory in the system with correlation functions. For the case of quantum dynamics, we are particularly interested in the phase relationships between quantum degrees of freedom that result from deterministic motion under the time-dependent Schrödinger equation.
Nonequilibrium states refer to open or closed systems that have been acted on externally, moving them from equilibrium by changing the population or energy of the quantum states available to the system. Thermodynamically, work is performed on the system, leading to a free-energy gradient that the nonequilibrium system will minimize as it re-equilibrates. For nonequilibrium states, we will be interested in relaxation processes, which refer to the time-dependent processes involved in re-equilibrating the system. Dissipation refers to the relaxation processes involving redistribution of energy as a nonequilibrium state returns toward a thermal distribution. However, there are other relaxation processes such as the randomization of the orientation of an aligned system or the randomization of the phase of synchronized oscillations.
Statistics
With the need to describe ensembles, we will use statistical descriptions of the properties and behavior of a system. The variable $A$, which can be a classical internal variable or a quantum operator, can be described statistically in terms of the mean and mean-square values of $A$ determined from a large number of measurements:
$\langle A \rangle = \frac {1} {N} \sum _ {i = 1}^{N} A _ {i} \label{8.1}$
$\left\langle A^{2} \right\rangle = \frac {1} {N} \sum _ {i = 1}^{N} A _ {i}^{2} \label{8.2}$
Here, the summation over $i$ refers to averaging over $N$ independent measurements. Alternatively, these equations can be expressed as
$\langle A \rangle = \sum _ {n = 1}^{M} P _ {n} A _ {n} \label{8.3}$
$\left\langle A^{2} \right\rangle = \sum _ {n = 1}^{M} P _ {n} A _ {n}^{2} \label{8.4}$
The sum over $n$ refers to a sum over the $M$ possible values that $A$ can take, weighted by $P_n$, the probability of observing a particular value $A_n$. When the accessible values come from a continuous as opposed to discrete distribution, one can describe the statistics in terms of the moments of the distribution function, $P(A)$, which characterizes the probability of observing $A$ between $A$ and $A+dA$
$\langle A \rangle = \int d A A P ( A ) \label{8.5}$
$\left\langle A^{2} \right\rangle = \int d A A^{2} P ( A ) \label{8.6}$
For time-dependent processes, we recognize that it is possible that these probability distributions carry a time dependence, $P(A,t)$. The ability to specify a value for $A$ is captured in the variance of the distribution
$\sigma^{2} = \left\langle A^{2} \right\rangle - \langle A \rangle^{2} \label{8.7}$
We will apply averages over probability distributions to the description of ensembles of molecules; however, we should emphasize that a statistical description of a quantum system also applies to a pure state. A fundamental postulate is that the expectation value of an operator
$\langle \hat {A} \rangle = \langle \psi | \hat {A} | \psi \rangle$
is the mean value of $A$ obtained over many observations on identically prepared systems. The mean and variance of this expectation value represent the fundamental quantum uncertainty in a measurement.
To take this a step further and characterize the statistical relationship between two variables, one can define a joint probability distribution, $P(A,B)$, which characterizes the probability of observing $A$ between $A$ and $A+dA$ and $B$ between $B$ and $B+dB$. The statistical relationship between the variables can also emerge from moments of $P(A,B)$. The most important measure is a correlation function
$C _ {A B} = \langle A B \rangle - \langle A \rangle \langle B \rangle \label{8.8}$
You can see that this is the covariance, the variance for a bivariate distribution. It is a measure of the correlation between the variables $A$ and $B$; that is, for a specific value of $A$, what are the associated statistics for $B$? To interpret this it helps to define a correlation coefficient
$r = \frac {C _ {A B}} {\sigma _ {A} \sigma _ {B}} \label{8.9}$
$r$ can take on values from +1 to -1. If $r = 1$ then there is perfect correlation between the two distributions. If the variables $A$ and $B$ depend the same way on a common internal variable, then they are correlated. If no statistical relationship exists between the two distributions, then they are uncorrelated, $r = 0$, and $\langle A B \rangle = \langle A \rangle \langle B \rangle$. It is also possible that the distributions depend in an equal and opposite manner on an internal variable, in which case we call them anti-correlated with $r = -1$.
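The sketch below illustrates these definitions with synthetic data (the variables and noise levels are invented for the example): two variables driven by a common internal variable give a correlation coefficient near $+1$, and reversing the sign of one gives a value near $-1$.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 100_000

# Two variables that depend on a common internal variable x, plus weak noise
x = rng.normal(size=M)
A = x + 0.1 * rng.normal(size=M)
B = 2.0 * x + 0.1 * rng.normal(size=M)

C_AB = np.mean(A * B) - A.mean() * B.mean()      # Equation (8.8)
r = C_AB / (A.std() * B.std())                   # Equation (8.9)

C_neg = np.mean(A * (-B)) - A.mean() * (-B).mean()
r_neg = C_neg / (A.std() * B.std())

print(r, r_neg)    # close to +1 and -1, respectively
```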
9.02: Thermal Equilibrium
For a statistical mixture at thermal equilibrium, individual molecules can occupy a distribution of energy states. An equilibrium system at temperature $T$ has the canonical probability distribution
$\rho _ {e q} = \frac {e^{- \beta H}} {Z} \label{8.10}$
$Z$ is the partition function and $\beta = \left( k _ {B} T \right)^{- 1}$. Classically, we can calculate the equilibrium ensemble average value of a variable $A$ as
$\langle A \rangle = \int d \mathbf {p} \, \int d \mathbf {q} A ( \mathbf {p} , \mathbf {q} ; t ) \rho _ {e q} ( \mathbf {p} , \mathbf {q} ) \label{8.11}$
In the quantum mechanical case, we can obtain an equilibrium expectation value of $A$ by averaging $\langle A \rangle$ over the thermal occupation of quantum states:
$\langle A \rangle = \operatorname {Tr} \left( \rho _ {e q} A \right) \label{8.12}$
where $\rho_{eq}$ is the density matrix at thermal equilibrium and is a diagonal matrix characterized by Boltzmann weighted populations in the quantum states:
$\rho _ {n n} = p _ {n} = \frac {e^{- \beta E _ {n}}} {Z} \label{8.13}$
In fact, the equilibrium density matrix is defined by Equation \ref{8.10}, as we can see by calculating its matrix elements using
$\left( \rho _ {e q} \right) _ {n m} = \frac {1} {Z} \left\langle n \left| e^{- \beta \hat {H}} \right| m \right\rangle = \frac {e^{- \beta E _ {n}}} {Z} \delta _ {n m} = p _ {n} \delta _ {n m} \label{8.15}$
Note also that
$Z = \operatorname {Tr} \left( e^{- \beta \hat {H}} \right) \label{8.16}$
Equation \ref{8.12} can also be written as
$\langle A \rangle = \sum _ {n} p _ {n} \langle n | A | n \rangle \label{8.14}$
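A short numerical sketch (for a hypothetical three-level system with energies in units of $k_BT$) shows Equations \ref{8.10}-\ref{8.16} in action: the equilibrium density matrix is diagonal with Boltzmann populations, and Equations \ref{8.12} and \ref{8.14} give the same thermal average.

```python
import numpy as np

# Hypothetical three-level system; energies in units of k_B T (beta = 1)
E = np.array([0.0, 1.0, 2.5])
beta = 1.0

boltz = np.exp(-beta * E)
Z = boltz.sum()                          # Equation (8.16) for a diagonal Hamiltonian
rho_eq = np.diag(boltz / Z)              # Equation (8.10): diagonal populations p_n

A = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.2],
              [0.0, 0.2, 3.0]])          # some Hermitian observable

print(np.trace(rho_eq @ A))                           # Equation (8.12)
print(sum(boltz[n] / Z * A[n, n] for n in range(3)))  # Equation (8.14), same value
```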
It may not be obvious how Equation \ref{8.14} relates to our previous expression for mixed states
$\langle A \rangle = \sum _ {n , m} \left\langle c _ {n}^{*} c _ {m} \right\rangle A _ {m n} = \operatorname {Tr} ( \rho \hat {A} ).$
Remember that for an equilibrium system we are dealing with a statistical mixture in which no coherences (no phase relationships) are present in the sample. The lack of coherence is the important property that allows the equilibrium ensemble average of $\left\langle c _ {m} c _ {n}^{*} \right\rangle$ to be equated with the thermal population $p_n$. To evaluate this average we recognize that these are complex numbers, and that the equilibrium ensemble average of the expansion coefficients is equivalent to phase averaging over the expansion coefficients. Since at equilibrium all phases are equally probable
$\left\langle c _ {n}^{*} c _ {m} \right\rangle = \frac {1} {2 \pi} \int _ {0}^{2 \pi} c _ {n}^{*} c _ {m} \, d \phi _ {n m} = \frac {1} {2 \pi} \left| c _ {n} \right| \left| c _ {m} \right| \int _ {0}^{2 \pi} e^{- i \phi _ {n m}} d \phi _ {n m} \label{8.17}$
where
$c _ {n} = \left| c _ {n} \right| e^{i \phi _ {n}}$
and
$\phi _ {n m} = \phi _ {n} - \phi _ {m}.$
The integral in Equation \ref{8.17} is quite clearly zero unless $\phi _ {n m} = 0$, i.e., unless $m = n$, giving
$\left\langle c _ {n}^{*} c _ {m} \right\rangle = \delta _ {n m} \, p _ {n} = \delta _ {n m} \frac {e^{- \beta E _ {n}}} {Z} \label{8.18}$
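The phase-averaging argument is easy to confirm numerically; the sketch below (with arbitrary fixed magnitudes and uniformly random phases) shows that the off-diagonal average $\langle c_n^* c_m\rangle$ vanishes while the diagonal average returns the population $|c_n|^2$.

```python
import numpy as np

rng = np.random.default_rng(4)
Nens = 200_000

# Fixed magnitudes, independent uniformly random phases (arbitrary example values)
cn = 0.8 * np.exp(1j * rng.uniform(0, 2 * np.pi, Nens))
cm = 0.6 * np.exp(1j * rng.uniform(0, 2 * np.pi, Nens))

print(np.mean(cn.conj() * cm))          # off-diagonal (n != m): averages to ~0
print(np.mean(cn.conj() * cn).real)     # diagonal: |c_n|^2 = 0.64, the population
```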
9.03: Fluctuations
“Fluctuations” refers to the random or noisy time evolution of a microscopic subsystem embedded in an actively evolving environment. Randomness is a property of all chemical systems to some degree, but we will focus on an environment that is at or near thermal equilibrium. Systems at thermal equilibrium are macroscopically time-invariant; however, they are microscopically dynamic, with molecules exploring the range of microstates that are thermally accessible. Local variations in energy result in changes in molecular position, orientation, and structure, and are responsible for the activation events that allow chemical equilibria to be established.
If we wish to describe an internal variable $A$ for a system at thermal equilibrium, we can obtain the statistics of $A$ by performing the ensemble averages described above. The resulting averages would be time-invariant. However, if we observe a member of the ensemble as a function of time, $A_i(t)$, its behavior is generally observed to fluctuate randomly. The fluctuations in $A_i(t)$ vary about a mean value $\langle A \rangle$, sampling thermally accessible values which are described by an equilibrium probability distribution function $P(A)$. $P(A)$ determines the potential of mean force, the free energy projected as a function of $A$:
$F ( A ) = - k _ {B} T \ln P ( A ) \label{8.19}$
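As a simple illustration of Equation \ref{8.19} (using samples drawn from an assumed Gaussian equilibrium distribution, so the exact answer is known), histogramming the sampled values of $A$ and taking $-k_BT \ln P(A)$ recovers the underlying harmonic free-energy surface.

```python
import numpy as np

rng = np.random.default_rng(5)
kT = 1.0

# Samples of A drawn from an assumed equilibrium distribution P(A) ~ exp(-A^2/2),
# i.e., a harmonic free-energy surface F(A) = A^2 / 2 (in units of k_B T)
A = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

hist, edges = np.histogram(A, bins=60, range=(-3, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

F = -kT * np.log(hist)       # Equation (8.19), up to an additive constant
F -= F.min()
print(np.round(F[::10], 2))  # approximately centers[::10]**2 / 2
```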
Given enough time, we expect that one molecule in a homogeneous medium will be able to sample all available configurations of the system. Moreover, a histogram of the values sampled by one molecule is expected to converge to $P(A)$. Such a system is referred to as ergodic. Specifically, in an ergodic system, it is possible to describe the macroscopic properties either by averaging the values sampled over time by a single member of the ensemble, or by performing an average over the realizations of $A$ for the entire ensemble at one point in time. That is, the statistics for $A$ can be expressed as a time average or an ensemble average. For an equilibrium system, the ensemble average is
$\langle A \rangle = \operatorname {Tr} \left( \rho _ {e q} A \right) = \sum _ {n} \frac {e^{- \beta E _ {n}}} {Z} \langle n | A | n \rangle \label{8.20}$
and the time average is
$\overline {A} = \lim _ {T \rightarrow \infty} \frac {1} {T} \int _ {0}^{T} d t \, A _ {i} (t) \label{8.21}$
These quantities are equal for an ergodic system:
$\langle A \rangle = \overline {A}$
Equilibrium systems are ergodic. From Equation \ref{8.21}, we see that the term ergodic also carries a dynamical connotation: a system is ergodic if one member of the ensemble has evolved long enough to sample the equilibrium probability distribution. Experimental observations made on shorter time scales effectively probe a nonequilibrium system.
10: Time-Correlation Functions
Time-correlation functions are an effective and intuitive way of representing the dynamics of a system, and are one of the most common tools of time-dependent quantum mechanics. They provide a statistical description of the time evolution of an internal variable or expectation value for an ensemble at thermal equilibrium. They are generally applicable to any time-dependent process, but are commonly used to describe random (or stochastic) and irreversible processes in condensed phases. We will use them extensively in the description of spectroscopy and relaxation phenomena. Although they can be used to describe the oscillatory behavior of ensembles of pure quantum states, our work is motivated by finding a general tool that will help us deal with the inherent randomness of molecular systems at thermal equilibrium. They will be effective at characterizing irreversible relaxation processes and the loss of memory of an initial state in a fluctuating environment.
• 10.1: Definitions, Properties, and Examples of Correlation Functions
Returning to the microscopic fluctuations of a molecular variable A, there seems to be little information in observing the trajectory of a variable characterizing the time-dependent behavior of an individual molecule. However, these dynamics are not entirely random, since they are a consequence of time-dependent interactions with the environment. We can provide a statistical description of the characteristic time scales and amplitudes of these changes by comparing the value of A at time t with the value of A at a later time t'.
• 10.2: Correlation Function from a Discrete Trajectory
In practice classical correlation functions in molecular dynamics simulations or single molecule experiments are determined from a time-average over a long trajectory at discretely sampled data points.
• 10.3: Quantum Time-Correlation Functions
Quantum correlation functions involve the equilibrium (thermal) average over a product of Hermitian operators evaluated two times
• 10.4: Transition Rates from Correlation Functions
Here we will show that the rate of leaving an initially prepared state, typically expressed by Fermi’s Golden Rule through a resonance condition in the frequency domain, can be expressed in the time-domain picture in terms of a time-correlation function for the interaction of the initial state with others.
10: Time-Correlation Functions
10.01: Definitions, Properties, and Examples of Correlation Functions

Returning to the microscopic fluctuations of a molecular variable $A$, there seems to be little information in observing the trajectory for a variable characterizing the time-dependent behavior of an individual molecule. However, these dynamics are not entirely random, since they are a consequence of time-dependent interactions with the environment. We can provide a statistical description of the characteristic time scales and amplitudes of these changes by comparing the value of $A$ at time $t$ with the value of $A$ at a later time $t'$.
We define a time-correlation function (TCF) as a time-dependent quantity, $A(t)$, multiplied by that quantity at some later time, $A(t')$, and averaged over an equilibrium ensemble:
$C _ {A A} \left( t , t^{\prime} \right) \equiv \left\langle A (t) A \left( t^{\prime} \right) \right\rangle _ {e q}\label{9.1}$
The classical form of the correlation function is evaluated as
$C _ {A A} \left( t , t^{\prime} \right) = \int d \mathbf {p} \int d \mathbf {q} A ( \mathbf {p} , \mathbf {q} ; t ) A \left( \mathbf {p} , \mathbf {q} ; t^{\prime} \right) \rho _ {e q} ( \mathbf {p} , \mathbf {q} ) \label{9.2}$
whereas the quantum correlation function can be evaluated as
\begin{align} C _ {A A} \left( t , t^{\prime} \right) &= \operatorname {Tr} \left[ \rho _ {e q} A (t) A \left( t^{\prime} \right) \right] \\[4pt] &= \sum _ {n} p _ {n} \left\langle n \left| A (t) A \left( t^{\prime} \right) \right| n \right\rangle \label{9.3} \end{align}
where
$p _ {n} = e^{- \beta E _ {n}} / Z.$
These are autocorrelation functions, which correlate the same variable at two points in time, but one can also define a cross-correlation function that describes the correlation of two different variables in time:
$C _ {A B} \left( t , t^{\prime} \right) \equiv \left\langle A (t) B \left( t^{\prime} \right) \right\rangle \label{9.4}$
So, what does a time-correlation function tell us? Qualitatively, a TCF describes how long a given property of a system persists until it is averaged out by microscopic motions and interactions with its surroundings. It describes how and when a statistical relationship has vanished. We can use correlation functions to describe various time-dependent chemical processes. For instance, we will use $\langle \mu (t) \mu ( 0 ) \rangle$ - the dynamics of the molecular dipole moment - to describe absorption spectroscopy. We will also use them for relaxation processes induced by the interaction of a system and bath:
$\left\langle H _ {S B} (t) H _ {S B} ( 0 ) \right\rangle.$
Classically, you can use TCFs to characterize transport processes. For instance a diffusion coefficient is related to the velocity correlation function:
$D = \frac {1} {3} \int _ {0}^{\infty} d t \langle v (t) v ( 0 ) \rangle.$
Properties of Correlation Functions
A typical correlation function for random fluctuations in the variable $A$ at thermal equilibrium decays from an initial maximum value as the fluctuations become uncorrelated. It is described by a number of properties:
1. When evaluated at $t = t’$, we obtain the maximum amplitude, the mean square value of $A$, which is positive for an autocorrelation function and independent of time. $C _ {A A} ( t , t ) = \langle A (t) A (t) \rangle = \left\langle A^{2} \right\rangle \geq 0 \label{9.5}$
2. For long time separations, as thermal fluctuations act to randomize the system, the values of A become uncorrelated $\lim _ {t \rightarrow \infty} C _ {A A} \left( t , t^{\prime} \right) = \langle A (t) \rangle \left\langle A \left( t^{\prime} \right) \right\rangle = \langle A \rangle^{2} \label{9.6}$
3. Since they are equilibrium quantities, correlation functions are stationary. That means they do not depend on the absolute points of observation ($t$ and $t'$), but rather on the time interval between observations. A stationary random process means that the reference point can be shifted by an arbitrary value $T$ $C _ {A A} \left( t , t^{\prime} \right) = C _ {A A} \left( t + T , t^{\prime} + T \right) \label{9.7}$ So, choosing $T = - t^{\prime}$ and defining the time interval $\tau \equiv t - t^{\prime}$, we see that only $\tau$ matters $C _ {A A} \left( t , t^{\prime} \right) = C _ {A A} \left( t - t^{\prime} , 0 \right) = C _ {A A} ( \tau ) \label{9.8}$ Implicit in this statement is an understanding that we take the time-average value of $A$ to be equal to the equilibrium ensemble-average value of $A$, i.e., the system is ergodic. So, the correlation of fluctuations can be expressed either as a time average over a trajectory of one molecule $\overline {A (t) A ( 0 )} = \lim _ {T \rightarrow \infty} \frac {1} {T} \int _ {0}^{T} d \tau\, A _ {i} ( t + \tau ) A _ {i} ( \tau ) \label{9.9}$ or as an equilibrium ensemble average $\langle A (t) A ( 0 ) \rangle = \sum _ {n} \frac {e^{- \beta E _ {n}}} {Z} \langle n | A (t) A ( 0 ) | n \rangle \label{9.10}$
4. Classical correlation functions are real and even in time: \begin{align} \left\langle A (t) A \left( t^{\prime} \right) \right\rangle &= \left\langle A \left( t^{\prime} \right) A (t) \right\rangle \\[4pt] C _ {A A} ( \tau ) &= C _ {A A} ( - \tau ) \label{9.11} \end{align}
5. When we observe fluctuations about an average (Figure $1$), we often redefine the correlation function in terms of the deviation from average $\delta A \equiv A - \langle A \rangle \label{9.12}$ and $C _ {\delta A \delta A} (t) = \langle \delta A (t) \delta A ( 0 ) \rangle = C _ {A A} (t) - \langle A \rangle^{2} \label{9.13}$ Now we see that in the long time limit, when correlation is lost, $\lim _ {t \rightarrow \infty} C _ {\delta A \delta A} (t) = 0$ and the zero-time value is just the variance $C _ {\delta A \delta A} ( 0 ) = \left\langle \delta A^{2} \right\rangle = \left\langle A^{2} \right\rangle - \langle A \rangle^{2} \label{9.14}$
6. The characteristic time scale of a random process is the correlation time, $\tau _ {c}$, which characterizes the time scale for the TCF to decay to zero. We can obtain $\tau _ {c}$ from $\tau _ {c} = \frac {1} {\left\langle \delta A^{2} \right\rangle} \int _ {0}^{\infty} d t \langle \delta A (t) \delta A ( 0 ) \rangle \label{9.15}$ which is readily verified for an exponential form $C (t) = C ( 0 ) \exp \left( - t / \tau _ {c} \right),$ as shown below.
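Substituting the exponential form into Equation \ref{9.15} confirms that the integral returns the decay constant itself:

$\tau _ {c} = \frac {1} {\left\langle \delta A^{2} \right\rangle} \int _ {0}^{\infty} d t\, \left\langle \delta A^{2} \right\rangle e^{- t / \tau _ {c}} = \int _ {0}^{\infty} d t\, e^{- t / \tau _ {c}} = \tau _ {c} \nonumber$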
Example $1$: Velocity Autocorrelation Function for Gas
Let’s analyze a dilute gas of molecules which have a Maxwell–Boltzmann distribution of velocities. We focus on the component of the molecular velocity along the $\hat{x}$ direction, $v_x$. We know that the average velocity is $\left\langle v _ {x} \right\rangle = 0$. The velocity correlation function is
$C _ {v _ {x} v _ {x}} ( \tau ) = \left\langle v _ {x} ( \tau ) v _ {x} ( 0 ) \right\rangle \nonumber$
From the equipartition principle the average translational energy is
$\frac {1} {2} m \left\langle v _ {x}^{2} \right\rangle = k _ {B} T / 2 \nonumber$
For time scales short compared to the time between molecular collisions, the velocity of any given molecule remains constant, so the velocity correlation function is also constant at its initial value $\left\langle v_x^2 \right\rangle = k_BT/m$. This non-interacting regime corresponds to the behavior of an ideal gas.
For any real gas, there will be collisions that randomize the direction and speed of the molecules, so that any molecule over a long enough time will sample the various velocities within the Maxwell–Boltzmann distribution. From the trajectory of $x$-velocities for a given molecule we can calculate $C _ {v _ {x} v _ {x}} ( \tau )$ using time-averaging. The correlation function will decay with a correlation time $\tau_c$, which is related to the mean time between collisions. After enough collisions, the correlation with the initial velocity is lost and $C _ {v _ {x} v _ {x}} ( \tau )$ approaches $\left\langle v _ {x} \right\rangle^{2} = 0$. Finally, we can determine the diffusion constant for the gas, which relates the time and mean square displacement of the molecules:
$\left\langle x^{2} (t) \right\rangle = 2 D _ {x} t.\nonumber$
From
$D _ {x} = \int _ {0}^{\infty} d t \left\langle v _ {x} (t) v _ {x} ( 0 ) \right\rangle\nonumber$
we have
$D _ {x} = k _ {B} T \tau _ {c} / m\nonumber$
In viscous fluids $\tau _ {c} / m$ is called the mobility, $\mu$.
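As a numerical illustration of this result, the sketch below integrates an assumed single-exponential velocity autocorrelation function and compares the result with $k_BT\tau_c/m$. The parameters (roughly appropriate for N$_2$ near room temperature) are assumptions, not values from the text.

```python
import numpy as np

# Numerical check of D_x = kB*T*tau_c/m for an ASSUMED single-exponential
# velocity autocorrelation function C(t) = <v_x^2> exp(-t/tau_c).
# Parameters are illustrative (roughly N2 near room temperature).
kB    = 1.380649e-23          # J/K
T     = 300.0                 # K
m     = 28 * 1.66054e-27      # kg, molecular mass of N2
tau_c = 1.0e-10               # s, assumed mean time between collisions

v2 = kB * T / m               # <v_x^2> from equipartition
t  = np.linspace(0.0, 20 * tau_c, 20_001)
C  = v2 * np.exp(-t / tau_c)

# D_x = integral_0^inf C(t) dt, evaluated by the trapezoid rule
D_numeric  = np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(t))
D_analytic = kB * T * tau_c / m

print(f"D_x from integral of C_vv(t): {D_numeric:.3e} m^2/s")
print(f"D_x = kB*T*tau_c/m          : {D_analytic:.3e} m^2/s")
```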
Example $2$: Dipole Moment Correlation Function
Now consider the correlation function for the dipole moment of a polar diatomic molecule in a dilute gas, $\overline {\mu}$. For a rigid rotating object, we can decompose the dipole into a magnitude and a direction unit vector:
$\overline {\mu} _ {i} = \mu _ {0} \cdot \hat {u}.\nonumber$
We know that $\langle \overline {\mu} \rangle = 0$ since all orientations of the gas phase molecules are equally probable. The correlation function is
\begin{align*} C _ {\mu \mu} (t) & = \langle \overline {\mu} (t) \cdot \overline {\mu} ( 0 ) \rangle \\[4pt] & = \left\langle \mu _ {0}^{2} \right\rangle \langle \hat {u} (t) \cdot \hat {u} ( 0 ) \rangle \end{align*}
This correlation function projects the time-dependent orientation of the molecule onto the initial orientation. Free inertial rotational motion will lead to oscillations in the correlation function as the dipole spins. The oscillations in this correlation function can be related to the speed of rotation and thereby the molecule’s moment of inertia (discussed below). Any apparent damping in this correlation function would reflect the thermal distribution of angular velocities. In practice a real gas would also have the collisional damping effects described in Example $1$ superimposed on this relaxation process.
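The connection between damping and the thermal distribution of angular velocities can be checked numerically with an assumed, simplified model: a free planar (two-dimensional) rotor rather than the three-dimensional diatomic discussed above. For a planar rotor $\hat{u}(t)\cdot\hat{u}(0)=\cos\omega t$, and averaging over a Gaussian thermal distribution of angular velocities (variance $k_BT/I$) gives $\langle \hat{u}(t)\cdot\hat{u}(0)\rangle = e^{-k_BT t^{2}/2I}$. All parameters below are assumptions used only for illustration.

```python
import numpy as np

# Orientational correlation <u(t).u(0)> for an ASSUMED free planar (2D)
# rotor: u(t).u(0) = cos(omega*t), with omega Gaussian-distributed with
# variance kB*T/I, so the thermal average is exp(-kB*T*t^2 / (2*I)).
rng = np.random.default_rng(0)

kB = 1.380649e-23             # J/K
T  = 300.0                    # K
I  = 1.45e-46                 # kg m^2, assumed moment of inertia (~CO)

sigma_omega = np.sqrt(kB * T / I)                  # thermal spread of omega
omega = sigma_omega * rng.standard_normal(200_000) # sampled angular velocities

t = np.linspace(0.0, 3.0 / sigma_omega, 200)
C_numeric  = np.array([np.mean(np.cos(omega * ti)) for ti in t])
C_analytic = np.exp(-kB * T * t**2 / (2 * I))

print("max |numerical - analytic| =", np.max(np.abs(C_numeric - C_analytic)))
```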
Example $3$: Harmonic Oscillator Correlation Function
The time-dependent motion of a harmonic vibrational mode is given by Newton’s law in terms of the acceleration and restoring force as $m \ddot {q} = - \kappa q$ or $\ddot {q} = - \omega^{2} q$ where the force constant is $\kappa = m \omega^{2}$. We can write a common solution to this equation as
$q (t) = q ( 0 ) \cos \omega t\nonumber$
Furthermore, the equipartition theorem says that the equilibrium thermal energy in a harmonic vibrational mode is
$\frac {1} {2} \kappa \left\langle q^{2} \right\rangle = \frac {k _ {B} T} {2}\nonumber$
We therefore can write the correlation function for the harmonic vibrational coordinate as
\begin{align*} C _ {q q} (t) &= \langle q (t) q ( 0 ) \rangle \\[4pt] &= \left\langle q^{2} \right\rangle \cos \omega t \\[4pt] & = \frac {k _ {B} T} {\kappa} \cos \omega t \end{align*}
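One detail is worth making explicit: the same result holds when the initial velocity is retained in the solution, $q (t) = q ( 0 ) \cos \omega t + \left( \dot{q} ( 0 ) / \omega \right) \sin \omega t$, because position and velocity are uncorrelated at thermal equilibrium, $\langle \dot{q} ( 0 ) q ( 0 ) \rangle = 0$, so the cross term vanishes:

\begin{align*} \langle q (t) q ( 0 ) \rangle &= \left\langle q ( 0 )^{2} \right\rangle \cos \omega t + \frac{1}{\omega} \langle \dot{q} ( 0 ) q ( 0 ) \rangle \sin \omega t \\[4pt] &= \left\langle q^{2} \right\rangle \cos \omega t \end{align*}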
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.01%3A_Definitions_Properties_and_Examples_of_Correlation_Functions.txt
|
10.02: Correlation Function from a Discrete Trajectory

In practice classical correlation functions in molecular dynamics simulations or single molecule experiments are determined from a time average over a long trajectory at discretely sampled data points. Let’s evaluate $C _ {A A}$ for a discrete and finite trajectory in which we are given a series of $N$ observations of the dynamical variable $A$ at equally separated time points $t_i$. The separation between time points is $t _ {i + 1} - t _ {i} = \Delta t$, and the length of the trajectory is $T = N \Delta t$. Then we have
$C _ {A A} = \frac {1} {T} \sum _ {i , j = 1}^{N} \Delta t A \left( t _ {i} \right) A \left( t _ {j} \right) = \frac {1} {N} \sum _ {i , j = 1}^{N} A _ {i} A _ {j} \label{9.16}$
where $A _ {i} = A \left( t _ {i} \right)$. To make this more useful we want to express it in terms of the time interval between points, $\tau = t _ {j} - t _ {i} = ( j - i ) \Delta t$, and average over all possible pairwise products of $A$ separated by $\tau$. Defining a new count integer $n = j - i$, we can express the delay as $\tau = n \Delta t$. For a finite data set there is a different number of observations to average over at each time interval $n$. We have the most pairwise products—$N$ to be precise—when the time points are equal ($t_i = t_j$). We only have one data pair for the maximum delay $\tau = T$. Therefore, the number of pairwise products for a given delay $\tau$ is $N - n$. So we can write Equation \ref{9.16} as
$C _ {A A} ( \tau ) = C ( n ) = \frac {1} {N - n} \sum _ {i = 1}^{N - n} A _ {i + n} A _ {i} \label{9.17}$
Note that this expression will only be calculated for positive values of $n$, for which $t_j \geq t_i$. As an example, consider the following calculation for fluctuations in a vibrational frequency $\omega(t)$, which consists of 32,000 consecutive frequencies in units of $\mathrm{cm}^{-1}$ for points separated by 10 femtoseconds, and has a mean value of $\omega _ {0} = 3244\ \mathrm {cm}^{- 1}$. This trajectory illustrates that there are fast fluctuations on femtosecond time scales, but the behavior is seemingly random on 100 picosecond time scales.
After determining the variation from the mean, $\delta \omega \left( t _ {i} \right) = \omega \left( t _ {i} \right) - \omega_0$, the frequency correlation function is determined from Equation \ref{9.17}, with the substitution $\delta \omega \left( t _ {i} \right) \rightarrow A _ {i}$.
We can see that the correlation function reveals no frequency correlation on time scales of $10^4$–$10^5$ fs; however, a decay of the correlation function is observed for short delays, signifying the loss of memory in the fluctuating frequency on the $10^3$ fs time scale. From Equation \ref{9.15}, we find that the correlation time is $\tau_c = 785\ \mathrm{fs}$.
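A minimal implementation of Equations \ref{9.17} and \ref{9.15} is sketched below. Because the 32,000-point frequency data set itself is not reproduced here, the sketch generates a synthetic, exponentially correlated trajectory as a stand-in; only the procedure, not the numerical result, corresponds to the example above.

```python
import numpy as np

# Discrete-trajectory correlation function (Eq. 9.17) and correlation time
# (Eq. 9.15). The trajectory below is SYNTHETIC -- an exponentially
# correlated random process standing in for the frequency data in the text.
rng = np.random.default_rng(0)
dt_fs    = 10.0        # time step in fs, as in the example above
tau_true = 785.0       # fs, target correlation time of the synthetic data
N        = 32_000      # number of points, as in the example above

c = np.exp(-dt_fs / tau_true)          # AR(1) memory parameter
s = 10.0 * np.sqrt(1.0 - c**2)         # gives sigma(delta omega) ~ 10 cm^-1
dw = np.empty(N)
dw[0] = 0.0
for i in range(1, N):
    dw[i] = c * dw[i - 1] + s * rng.standard_normal()
dw -= dw.mean()                        # work with the fluctuation delta-omega

def correlation_function(A, n_max):
    """C(n) = (1/(N-n)) * sum_i A[i+n] A[i]  (Eq. 9.17), for n = 0 ... n_max."""
    L = len(A)
    return np.array([np.dot(A[n:], A[:L - n]) / (L - n) for n in range(n_max + 1)])

n_max = 500                            # delays out to 5 ps
C = correlation_function(dw, n_max)

# Correlation time, Eq. (9.15): integrate C(t)/C(0) over the delay time.
tau_c = np.sum(C / C[0]) * dt_fs
print(f"estimated correlation time ~ {tau_c:.0f} fs "
      f"(input {tau_true:.0f} fs; finite-trajectory noise limits the accuracy)")
```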
10.03: Quantum Time-Correlation Functions
Quantum correlation functions involve the equilibrium (thermal) average over a product of Hermitian operators evaluated two times. The thermal average is implicit in writing
$C _ {A A} ( \tau ) = \langle A ( \tau ) A ( 0 ) \rangle.$
Naturally, this also invokes a Heisenberg representation of the operators, although in almost all cases, we will be writing correlation functions as interaction picture operators
$A _ {I} (t) = e^{i H _ {0} t / \hbar} A e^{- i H _ {0} t / \hbar}.$
To emphasize the thermal average, the quantum correlation function can also be written as
$C _ {A A} ( \tau ) = \left\langle \frac {e^{- \beta H}} {Z} A ( \tau ) A ( 0 ) \right\rangle \label{9.18}$
with $\beta = \left( k _ {\mathrm {B}} T \right)^{- 1}$. If we evaluate this for a time-independent Hamiltonian in a basis of states $| n \rangle$, inserting a projection operator leads to our previous expression
$C _ {A A} ( \tau ) = \sum _ {n} p _ {n} \langle n | A ( \tau ) A ( 0 ) | n \rangle \label{9.19}$
with $p _ {n} = e^{- \beta E _ {n}} / Z$. Given the case of a time-independent Hamiltonian for which we have knowledge of the eigenstates, we can also express the correlation function in the Schrödinger picture as
\begin{align} C _ {A A} ( \tau ) &= \sum _ {n} p _ {n} \left\langle n \left| U^{\dagger} ( \tau ) A U ( \tau ) A \right| n \right\rangle \\[4pt] &= \sum _ {n , m} p _ {n} \langle n | A | m \rangle \langle m | A | n \rangle e^{- i \omega _ {m n} \tau} \\[4pt] &= \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} e^{- i \omega _ {m n} \tau} \label{9.20} \end{align}
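Equation \ref{9.20} is easy to evaluate numerically for a small model system. In the sketch below the four energy levels and the Hermitian operator $A$ are made-up assumptions (with $\hbar = 1$ and $\beta = 1$), used only to show that the quantum TCF is complex and satisfies the symmetry derived in the next subsection.

```python
import numpy as np

# Evaluate the quantum correlation function of Eq. (9.20) for a small model
# system with ASSUMED energy levels and a random Hermitian operator A.
# Units: hbar = 1 and beta = 1.
rng = np.random.default_rng(3)

E    = np.array([0.0, 1.0, 2.5, 4.0])      # assumed eigenenergies
beta = 1.0
p    = np.exp(-beta * E)
p   /= p.sum()                              # thermal populations p_n

M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                    # Hermitian operator, A[m, n] = <m|A|n>

omega = E[:, None] - E[None, :]             # omega_mn = E_m - E_n  (hbar = 1)
w     = np.abs(A)**2 * p[None, :]           # weights p_n |A_mn|^2

t = np.linspace(-10.0, 10.0, 801)
C = np.array([np.sum(w * np.exp(-1j * omega * ti)) for ti in t])

# Consistency checks: C(0) = <A^2> = Tr(rho A A), and C*(t) = C(-t) (Eq. 9.23)
C0_direct = np.real(np.trace(np.diag(p) @ A @ A))
print("C(0)              =", C[len(t) // 2])
print("Tr(rho A^2)       =", C0_direct)
print("max |C*(t)-C(-t)| =", np.max(np.abs(C.conj() - C[::-1])))
```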
Properties of Quantum Correlation Functions
There are a few properties of quantum correlation functions for Hermitian operators that can be obtained using the properties of the time-evolution operator. First, we can show that correlation functions are stationary:
\left.\begin{aligned} \left\langle A (t) A \left( t^{\prime} \right) \right\rangle & = \left\langle U^{\dagger} (t) A ( 0 ) U (t) U^{\dagger} \left( t^{\prime} \right) A ( 0 ) U \left( t^{\prime} \right) \right\rangle \\[4pt] & = \left\langle U \left( t^{\prime} \right) U^{\dagger} (t) A U (t) U^{\dagger} \left( t^{\prime} \right) A \right\rangle \\[4pt] & = \left\langle U^{\dagger} \left( t - t^{\prime} \right) A U \left( t - t^{\prime} \right) A \right\rangle \\[4pt] & = \left\langle A \left( t - t^{\prime} \right) A ( 0 ) \right\rangle \end{aligned} \right. \label{9.21}
Similarly, we can show
$\langle A ( - t ) A ( 0 ) \rangle = \langle A (t) A ( 0 ) \rangle^{*} = \langle A ( 0 ) A (t) \rangle \label{9.22}$
or in short
$C _ {A A}^{*} (t) = C _ {A A} ( - t ) \label{9.23}$
Note that the quantum $C_{AA}(t)$ is complex. You cannot directly measure a quantum correlation function, but observables are often related to the real or imaginary part of correlation functions.
$C _ {A A} (t) = C _ {A A}^{\prime} (t) + i C _ {A A}^{\prime \prime} (t) \label{9.24}$
The real and imaginary parts of $C_{AA}(t)$ can be separated as
\left.\begin{aligned} C _ {A A}^{\prime} (t) & = \frac {1} {2} \left[ C _ {A A} (t) + C _ {A A}^{*} (t) \right] = \frac {1} {2} [ \langle A (t) A ( 0 ) \rangle + \langle A ( 0 ) A (t) \rangle ] \\[4pt] & = \frac {1} {2} \left\langle [ A (t) , A ( 0 ) ] _ {+} \right\rangle \end{aligned} \right. \label{9.25}
\left.\begin{aligned} C _ {A A}^{\prime \prime} (t) & = \frac {1} {2 i} \left[ C _ {A A} (t) - C _ {A A}^{*} (t) \right] = \frac {1} {2 i} [ \langle A (t) A ( 0 ) \rangle - \langle A ( 0 ) A (t) \rangle ] \\[4pt] & = \frac {1} {2 i} \langle [ A (t) , A ( 0 ) ] \rangle \end{aligned} \right. \label{9.26}
Above, $[ A , B ] _ {+} \equiv A B + B A$ is the anticommutator. As illustrated below, the real part is even in time and can be expanded as a Fourier series in cosines, whereas the imaginary part is odd and can be expanded in sines. We will see later that the magnitude of the real part grows with temperature, but the imaginary part does not. At 0 K, the real and imaginary components have equal amplitudes, but as one approaches the high temperature or classical limit, the real part dominates the imaginary part.
We will also see in our discussion of linear response that $C'_{AA}$ and $C''_{AA}$ are directly proportional to the step response function $S$ and the impulse response function $R$, respectively. $R$ describes how a system is driven away from equilibrium by an external potential, whereas $S$ describes the relaxation of the system to equilibrium when a force holding it away from equilibrium is released. Classically, the two are related by $R \propto \partial S / \partial t$.
Since time and frequency are conjugate variables, we can also define a spectral or frequency-domain correlation function by the Fourier transformation of the TCF. The Fourier transform and its inverse are defined as
\begin{align} \tilde {C} _ {A A} ( \omega ) &= \tilde {\mathcal {F}} \left[ C _ {A A} (t) \right] \\[4pt] &= \int _ {- \infty}^{+ \infty} \mathrm {e}^{i \omega t} C _ {A A} (t) \,d t \label{9.27} \end{align}
\begin{align} C _ {A A} (t) &= \tilde {\mathcal {F}}^{- 1} \left[ \tilde {C} _ {A A} ( \omega ) \right] \\[4pt] &= \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} \mathrm {e}^{- i \omega t} \tilde {C} _ {A A} ( \omega ) \,d \omega \label{9.28} \end{align}
Examples of the frequency-domain correlation functions are shown below.
For a time-independent Hamiltonian, as we might have in an interaction picture problem, the Fourier transform of the TCF in Equation \ref{9.20} gives
$\tilde {C} _ {A A} ( \omega ) = \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} \delta \left( \omega - \omega _ {m n} \right) \label{9.29}$
This expression looks very similar to the Golden Rule transition rate from first-order perturbation theory. In fact, the Fourier transform of time-correlation functions evaluated at the energy gap gives the transition rate between states that we obtain from first-order perturbation theory. Note that this expression is valid whether the initial states $n$ are higher or lower in energy than final states $m$, and accounts for upward and downward transitions. If we compare the ratio of upward and downward transition rates between two states $i$ and $j$, we have
$\frac {\tilde {C} _ {A A} \left( \omega _ {i j} \right)} {\tilde {C} _ {A A} \left( \omega _ {j i} \right)} = \frac {p _ {j}} {p _ {i}} = e^{- \beta \left( E _ {j} - E _ {i} \right)} \label{9.30}$
This is one way of showing the principle of detailed balance, which relates upward and downward transition rates at equilibrium to the difference in thermal occupation between states:
$\tilde {C} _ {A A} ( \omega ) = e^{\beta \hbar \omega} \tilde {C} _ {A A} ( - \omega ) \label{9.31}$
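As a quick check of Equation \ref{9.31} (a worked example added here, not part of the original discussion), consider a two-level system with ground state $g$ and excited state $e$ separated by $\hbar\omega_0$, and an operator $A$ with only off-diagonal matrix elements. Equation \ref{9.29} then contains only two lines, and the ratio of their weights reproduces detailed balance:

\begin{align*} \tilde {C} _ {A A} ( \omega _ {0} ) &= p _ {g} \left| A _ {e g} \right|^{2}, \qquad \tilde {C} _ {A A} ( - \omega _ {0} ) = p _ {e} \left| A _ {g e} \right|^{2} \\[4pt] \frac {\tilde {C} _ {A A} ( \omega _ {0} )} {\tilde {C} _ {A A} ( - \omega _ {0} )} &= \frac {p _ {g}} {p _ {e}} = e^{\beta \hbar \omega _ {0}} \end{align*}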
The detailed balance relationship (Equation \ref{9.31}), together with a Fourier transform of Equation \ref{9.23}, allows us to obtain the real and imaginary components using
$\tilde {C} _ {A A} ( \omega ) \pm \tilde {C} _ {A A} ( - \omega ) = \left( 1 \pm e^{- \beta \hbar \omega} \right) \tilde {C} _ {A A} ( \omega ) \label{9.32}$
$\tilde {C} _ {A A}^{\prime} ( \omega ) = \frac {1} {2} \left( 1 + e^{- \beta \hbar \omega} \right) \tilde {C} _ {A A} ( \omega ) \label{9.33}$
$\tilde {C} _ {A A}^{\prime \prime} ( \omega ) = \frac {1} {2} \left( 1 - e^{- \beta \hbar \omega} \right) \tilde {C} _ {A A} ( \omega ) \label{9.34}$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.02%3A_Correlation_Function_from_a_Discrete_Trajectory.txt
|
10.04: Transition Rates from Correlation Functions

We have already seen that the rates obtained from first-order perturbation theory are related to the Fourier transform of the time-dependent external potential evaluated at the energy gap between the initial and final state. Here we will show that the rate of leaving an initially prepared state, typically expressed by Fermi’s Golden Rule through a resonance condition in the frequency domain, can be expressed in the time-domain picture in terms of a time-correlation function for the interaction of the initial state with others. The state-to-state form of Fermi’s Golden Rule is
$w _ {k \ell} = \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) \label{9.35}$
We will look specifically at the case of a system at thermal equilibrium in which the initially populated states $\ell$ are coupled to all states $k$. Time-correlation functions are expressions that apply to systems at thermal equilibrium, so we will thermally average this expression.
$\overline {w} _ {k \ell} = \frac {2 \pi} {\hbar} \sum _ {k , \ell} p _ {\ell} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) \label{9.36}$
where $p _ {\ell} = e^{- \beta E _ {\ell}} / Z$ and $Z$ is the partition function. The energy conservation statement expressed in terms of $E$ or $\omega$ can be converted to the time domain using the definition of the delta function
$\delta ( \omega ) = \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t e^{i \omega t} \label{9.37}$
giving
$\overline {w} _ {k \ell} = \frac {1} {\hbar^{2}} \sum _ {k , \ell} p _ {\ell} \left| V _ {k \ell} \right|^{2} \int _ {- \infty}^{+ \infty} d t\, e^{i \left( E _ {k} - E _ {\ell} \right) t / \hbar} \label{9.38}$
Writing the matrix elements explicitly and recognizing that in the interaction picture,
$e^{- i H _ {0} t / \hbar} | \ell \rangle = e^{- i E _ {\ell} t / \hbar} | \ell \rangle,$
we have
\begin{align} \overline {w} _ {k \ell} &= \frac {1} {\hbar^{2}} \sum _ {k , \ell} p _ {\ell} \int _ {- \infty}^{+ \infty} d t\, e^{i \left( E _ {k} - E _ {\ell} \right) t / \hbar} \langle \ell | V | k \rangle \langle k | V | \ell \rangle \label{9.39} \\[4pt] &= \frac {1} {\hbar^{2}} \sum _ {k , \ell} p _ {\ell} \int _ {- \infty}^{+ \infty} d t \langle \ell | V | k \rangle \left\langle k \left| e^{i H _ {0} t / \hbar} V e^{- i H _ {0} t / \hbar} \right| \ell \right\rangle \label{9.40} \end{align}
Then, since $\sum _ {k} | k \rangle \langle k | = 1$,
\begin{align} \overline {w} _ {k \ell} &= \frac {1} {\hbar^{2}} \sum _ {\ell} p _ {\ell} \int _ {- \infty}^{+ \infty} d t \left\langle \ell \left| V _ {I} ( 0 ) V _ {I} (t) \right| \ell \right\rangle \label{9.41} \\[4pt] &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \left\langle V _ {I} (t) V _ {I} ( 0 ) \right\rangle \label{9.42} \end{align}
As before
$V _ {I} (t) = e^{i H _ {0} t / \hbar} V e^{- i H _ {0} t / \hbar}$
The final expression in Equation \ref{9.42} indicates that integrating over a correlation function for the time-dependent interaction of the initial state with its surroundings gives the relaxation or transfer rate. This is a general expression. Although the derivation emphasized specific eigenstates, Equation \ref{9.42} shows that with a knowledge of a time-dependent interaction potential of any sort, we can calculate transition rates from the time-correlation function for that potential.
The same approach can be taken using the rates of transition in an equilibrium system induced by a harmonic perturbation
$\overline {w} _ {k \ell} = \frac {\pi} {2 \hbar^{2}} \sum _ {\ell , k} p _ {\ell} \left| V _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{9.43}$
resulting in a similar expression for the transition rate in terms of an interaction potential time-correlation function
\begin{align} \overline {w} _ {k \ell} &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t\, e^{- i \omega t} \left\langle V _ {I} ( 0 ) V _ {I} (t) \right\rangle \\[4pt] &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \,e^{i \omega t} \left\langle V _ {I} (t) V _ {I} ( 0 ) \right\rangle \label{9.44} \end{align}
We will look at this more closely in the following section. Note that here the transfer rate is expressed in terms of a Fourier transform of a correlation function for the time-dependent interaction potential, evaluated at the driving frequency. Although Equation \ref{9.42} is not written as a Fourier transform, it can in practice be evaluated by Fourier transforming the interaction correlation function and taking its value at zero frequency.
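As an illustration of that last point, the sketch below uses an assumed model correlation function, $\left\langle V _ {I} (t) V _ {I} ( 0 ) \right\rangle = \Delta^{2} e^{- | t | / \tau _ {c}}$, for which Equation \ref{9.42} gives $\overline{w} = 2 \Delta^{2} \tau _ {c} / \hbar^{2}$. The coupling strength, correlation time, and unit system are all assumptions chosen for illustration only.

```python
import numpy as np

# Rate from Eq. (9.42): the zero-frequency value of the Fourier transform of
# the interaction correlation function. Model (ASSUMED):
#   <V_I(t) V_I(0)> = Delta^2 * exp(-|t|/tau_c),
# for which the analytic answer is w = 2 * Delta^2 * tau_c / hbar^2.
hbar  = 1.0        # units with hbar = 1 (assumption)
Delta = 0.2        # coupling amplitude (assumed)
tau_c = 5.0        # correlation time of the bath interaction (assumed)

t   = np.linspace(-200.0, 200.0, 400_001)
dt  = t[1] - t[0]
Cvv = Delta**2 * np.exp(-np.abs(t) / tau_c)

w_numeric  = np.sum(Cvv) * dt / hbar**2          # integral over all time
w_analytic = 2.0 * Delta**2 * tau_c / hbar**2

print(f"rate from integral of <V_I(t)V_I(0)>: {w_numeric:.6f}")
print(f"analytic 2*Delta^2*tau_c/hbar^2     : {w_analytic:.6f}")
```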
Readings on time-correlation functions
1. Berne, B. J., Time-Dependent Properties of Condensed Media. In Physical Chemistry: An Advanced Treatise, Vol. VIIIB, Henderson, D., Ed. Academic Press: New York, 1971.
2. Berne, B. J.; Pecora, R., Dynamic Light Scattering. R. E. Krieger Publishing Co.: Malabar, FL, 1990.
3. Chandler, D., Introduction to Modern Statistical Mechanics. Oxford University Press: New York, 1987.
4. Mazenko, G., Nonequilibrium Statistical Mechanics. Wiley-VCH: Weinheim, 2006.
5. McHale, J. L., Molecular Spectroscopy. 1st ed.; Prentice Hall: Upper Saddle River, NJ, 1999.
6. McQuarrie, D. A., Statistical Mechanics. Harper & Row: New York, 1976; Ch. 21.
7. Schatz, G. C.; Ratner, M. A., Quantum Mechanics in Chemistry. Dover Publications: Mineola, NY, 2002; Ch. 6.
8. Wang, C. H., Spectroscopy of Condensed Media: Dynamics of Molecular Interactions. Academic Press: Orlando, 1985.
9. Zwanzig, R., Nonequilibrium Statistical Mechanics. Oxford University Press: New York, 2001.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.04%3A_Transition_Rates_from_Correlation_Functions.txt
|