Using expressions for $v_{mp}$, $v_{ave}$, or $v_{rms}$, it is fairly simple to derive expressions for kinetic energy from the expression

$E_{kin} = \dfrac{1}{2} mv^2 \nonumber$

It is important to remember that there will be a full distribution of molecular speeds in a thermalized sample of gas: some molecules will be traveling faster and some more slowly. It is also important to recognize that the most probable, average, and RMS kinetic energy terms that can be derived from the Kinetic Molecular Theory do not depend on the mass of the molecules (Table 2.4.1). As such, it can be concluded that the average kinetic energy of the molecules in a thermalized sample of gas depends only on the temperature. However, the average speed does depend on the molecular mass. So, for a given temperature, light molecules will travel faster on average than heavier molecules.

Table 2.4.1: Kinetic Properties of a Thermalized Ensemble (i.e., one that follows the Maxwell-Boltzmann Distribution)

Most probable: speed $\sqrt{\dfrac{2k_BT}{m}}$, kinetic energy $k_BT$
Average: speed $\sqrt{\dfrac{8k_BT}{\pi m}}$, kinetic energy $\dfrac{4k_BT}{\pi}$
Root-mean-square: speed $\sqrt{\dfrac{3k_BT}{m}}$, kinetic energy $\dfrac{3}{2} k_BT$

The Ideal Gas Law

The expression for the root-mean-square molecular speed can be used to show that the Kinetic Molecular model of gases is consistent with the ideal gas law. Consider the expression for pressure

$p =\dfrac{N_{tot}m}{3V} \langle v^2 \rangle \nonumber$

Replacing $\langle v^2 \rangle$ with the square of the RMS speed expression yields

$p = \dfrac{N_{tot}m}{3V} \left( \dfrac{3k_BT}{m}\right) \nonumber$

which simplifies to

$p = \dfrac{N_{tot}k_BT}{V} \nonumber$

Noting that $N_{tot} = n \cdot N_A$, where $n$ is the number of moles and $N_A$ is Avogadro's number,

$p = \dfrac{nN_Ak_BT}{V} \nonumber$

or

$pV = nN_Ak_BT \nonumber$

Finally, noting that $N_A \cdot k_B = R$,

$pV = nRT \nonumber$

That's kind of cool, no?
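The chain of substitutions above can also be checked numerically. The following sketch (the particle count, volume, and N2-like molecular mass are hypothetical choices, not values from the text) computes the pressure from the kinetic theory expression and from $p = N_{tot}k_BT/V$; the two agree exactly:

```python
# Numerical check that p = (N m / 3V) <v^2> reduces to p = N kB T / V
# when <v^2> = 3 kB T / m. Sample size and volume are arbitrary.
kB = 1.380649e-23               # Boltzmann constant, J/K
m = 28.0e-3 / 6.02214076e23     # mass of one N2-like molecule, kg

T = 298.0      # K
V = 1.0e-3     # m^3 (1 L)
N = 2.5e22     # number of molecules (hypothetical sample)

v2 = 3.0 * kB * T / m                 # mean-square speed from kinetic theory
p_kinetic = N * m * v2 / (3.0 * V)    # pressure from the kinetic model
p_ideal = N * kB * T / V              # pressure from the ideal gas law

print(p_kinetic, p_ideal)   # the two expressions agree
```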
The only assumption (beyond the postulates of the Kinetic Molecular Theory) is that the distribution of velocities for a thermalized sample of gas is described by the Maxwell-Boltzmann distribution law. The next development will be to use the Kinetic Molecular Theory to describe molecular collisions (which are essential events in many chemical reactions.)

Collisions with the Wall

In the derivation of an expression for the pressure of a gas, it is useful to consider the frequency with which gas molecules collide with the walls of the container. To derive this expression, consider the "collision volume" swept out along the x direction in a time $\Delta t$:

$V_{col} = v_x \Delta t\ \cdot A \nonumber$

All of the molecules within this volume that are moving toward the wall (i.e., with a positive x-component of velocity) will collide with it. Since, on average, half of the molecules are moving toward the wall at any instant, the number of collisions is

$N_{col} = \dfrac{N}{V} \dfrac{\langle v_x \rangle \Delta t \cdot A}{2} \nonumber$

and the frequency of collisions with the wall per unit area per unit time is given by

$Z_w = \dfrac{N}{V} \dfrac{\langle v_x \rangle}{2} \nonumber$

In order to expand this model into a more useful form, one must relate $\langle v_x \rangle$ (the average magnitude of the x-component of velocity) to the average speed. Averaging over the Maxwell distribution for a single component gives

$\langle v_x \rangle = \sqrt{\dfrac{2k_BT}{\pi m}} \nonumber$

and since

$\langle v \rangle = \sqrt{\dfrac{8k_BT}{\pi m}} \nonumber$

it can be shown that

$\langle v_x \rangle = \dfrac{1}{2} \langle v \rangle \nonumber$

and so

$Z_w = \dfrac{1}{4} \dfrac{N}{V} \langle v \rangle \nonumber$

The factor of $N/V$ is often referred to as the "number density" as it gives the number of molecules per unit volume. At 1 atm pressure and 298 K, the number density for an ideal gas is approximately $2.5 \times 10^{19}$ molecules/cm3. (This value is easily calculated using the ideal gas law.) By comparison, the average number density for the universe is approximately 1 molecule/cm3.
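The number density quoted above, and the wall collision flux $Z_w$, can be reproduced in a few lines (N2 at 1 atm and 298 K is used here as a representative gas; the choice of gas is an assumption of the sketch):

```python
import math

kB = 1.380649e-23                # Boltzmann constant, J/K
p = 101325.0                     # 1 atm in Pa
T = 298.0                        # K
m = 28.0e-3 / 6.02214076e23      # mass of one N2 molecule, kg

n = p / (kB * T)                               # number density N/V, molecules/m^3
v_avg = math.sqrt(8 * kB * T / (math.pi * m))  # mean speed <v>, m/s
Zw = 0.25 * n * v_avg                          # wall collision flux, m^-2 s^-1

print(n * 1e-6)   # ~2.5e19 molecules/cm^3, as quoted in the text
print(v_avg, Zw)
```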
An important consequence of the kinetic molecular theory is what it predicts in terms of effusion and diffusion effects. Effusion is defined as a loss of material across a boundary. A common example of effusion is the loss of gas inside of a balloon over time. The rate at which gases will effuse from a balloon is affected by a number of factors, but one of the most important is the frequency with which molecules collide with the interior surface of the balloon. Since this frequency is a function of the average molecular speed, it has an inverse dependence on the square root of the molecular weight:

$\text{Rate of effusion} \propto \dfrac{1}{\sqrt{MW}} \nonumber$

This relationship can be used to compare the relative rates of effusion for gases of different molar masses.

The Knudsen Cell Experiment

A Knudsen cell is a chamber in which a thermalized sample of gas is kept, but allowed to effuse through a small orifice in the wall. The gas sample can be modeled using the Kinetic Molecular Theory as a collection of particles traveling throughout the cell, colliding with one another and also with the wall. If a small orifice is present, any molecules that would have collided with that portion of the wall are instead lost through the orifice. This makes a convenient arrangement for measuring the vapor pressure of the material inside the cell, as the total mass lost by effusion through the orifice is proportional to the vapor pressure of the substance. The vapor pressure $p$ can be related to the mass lost by the expression

$p = \dfrac{g}{A \Delta t} \sqrt{\dfrac{2 \pi RT}{MW}} \nonumber$

where $g$ is the mass lost, $A$ is the area of the orifice, $\Delta t$ is the time the effusion is allowed to proceed, $T$ is the temperature, and $MW$ is the molar mass of the compound in the vapor phase. A schematic of what a Knudsen cell might look like is given below.
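A quick numerical sketch of the Knudsen relation follows. All of the input values here are hypothetical (they do not correspond to any exercise or tabulated substance); in SI units (kg, m², s, K, kg/mol with R in J mol⁻¹ K⁻¹) the result comes out in pascals:

```python
import math

R = 8.314        # gas constant, J mol^-1 K^-1

# Hypothetical Knudsen cell run (all values invented for illustration)
g = 1.00e-5      # mass lost, kg
A = 1.0e-6       # orifice area, m^2 (1 mm^2)
dt = 3600.0      # effusion time, s (1 hr)
T = 300.0        # K
MW = 0.100       # molar mass, kg/mol

# p = (g / (A dt)) * sqrt(2 pi R T / MW)
p_vapor = (g / (A * dt)) * math.sqrt(2 * math.pi * R * T / MW)
print(p_vapor)   # vapor pressure in Pa
```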
2.06: Collisions with Other Molecules

A major concern in the design of many experiments is collisions of gas molecules with other molecules in the gas phase. For example, molecular beam experiments often depend on a lack of molecular collisions in the beam, since collisions could degrade the beam through chemical reactions or simply by knocking molecules out of it. In order to predict the frequency of molecular collisions, it is useful to first define the conditions under which collisions will occur. For convenience, consider all of the molecules to be spherical and fixed in position, except for one which is allowed to move through a "sea" of the other molecules. A molecular collision will occur every time the center of the moving molecule comes within one molecular diameter of the center of another molecule. One can easily determine the number of molecules the moving molecule will "hit" by determining the number of molecules that lie within the "collision cylinder".
Because we fixed the positions of all but one of the molecules, we must use the relative speed of the moving molecule, which is given by

$v_{rel} = \sqrt{2} \times \,v \nonumber$

The volume of the collision cylinder is given by

$V_{col} = \sqrt{2}\, v\,\Delta t \, \sigma \nonumber$

where the collisional cross section $\sigma$, which is determined by the size of the molecule, is given by

$\sigma = \pi d^2 \nonumber$

Some values of $\sigma$ are given in the table below:

Table 2.6.1: Collisional cross-section of Select Species

Molecule: $\sigma$ (nm2)
He: 0.21
Ne: 0.24
N2: 0.43
CO2: 0.52
C2H4: 0.64

Since the number of molecules within the collision cylinder is given by

$N_{col} = \dfrac{N}{V} V_{col} \nonumber$

and since the number density ($N/V$) is given by

$\dfrac{N}{V} = \dfrac{p}{k_BT} \nonumber$

the number of collisions is given by

$N_{col} = \dfrac{p}{k_BT} ( \sqrt{2} \, v \,\Delta t \,\sigma) \nonumber$

The frequency of collisions (the number of collisions per unit time, using the average speed) is then given by

$Z = \dfrac{\sqrt{2} p \sigma}{k_B T} \langle v \rangle \nonumber$

Perhaps a more useful value is the mean free path ($\lambda$), which is the distance a molecule can travel on average before it collides with another molecule. This is easily derived from the collision frequency: how far a molecule travels between collisions is given by the ratio of how fast it is traveling to how often it hits other molecules:

$\lambda = \dfrac{\langle v \rangle}{Z} \nonumber$

Thus, the mean free path is given by

$\lambda = \dfrac{k_B T}{\sqrt{2} p \sigma} \nonumber$

The mere fact that molecules undergo collisions represents a deviation from the kinetic molecular theory. For example, if molecules were infinitesimally small ($\sigma \approx 0$), then the mean free path would be infinitely long! The finite size of molecules represents one significant deviation from ideality. Another important deviation stems from the fact that molecules do exhibit attractive and repulsive forces between one another.
These forces depend on a number of parameters, such as the distance between molecules and the temperature (or average kinetic energy of the molecules.)
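As a numerical sketch of the collision expressions above, the mean free path and collision frequency can be evaluated for N2 at 1 atm and 298 K, using the cross section from Table 2.6.1 (the choice of gas and conditions is an assumption of the example):

```python
import math

kB = 1.380649e-23                # Boltzmann constant, J/K
p = 101325.0                     # 1 atm in Pa
T = 298.0                        # K
sigma = 0.43e-18                 # N2 cross section, 0.43 nm^2 in m^2
m = 28.0e-3 / 6.02214076e23      # mass of one N2 molecule, kg

v_avg = math.sqrt(8 * kB * T / (math.pi * m))   # mean speed <v>, m/s
lam = kB * T / (math.sqrt(2) * p * sigma)       # mean free path, m
Z = v_avg / lam                                 # collision frequency, s^-1

print(lam, Z)   # a few tens of nanometers; billions of collisions per second
```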
While the ideal gas law is sufficient for predicting a large number of properties and behaviors of gases, there are a number of situations in which deviations from ideality are extremely important.

The van der Waals Equation

Several equations of state have been suggested to account for the deviations from ideality. One simple, but useful, expression is that proposed by Johannes Diderik van der Waals (1837 – 1923) (Johannes Diderik van der Waals - Biographical, 2014). van der Waals' equation introduced corrections to the pressure and volume terms of the ideal gas law in order to account for intermolecular interactions and molecular size, respectively.

$\left ( p + \dfrac{a}{V_m^2} \right) (V_m - b) = RT \label{vdw}$

or

$p =\dfrac{RT}{V_m-b} - \dfrac{a}{V_m^2} \label{vdw2}$

In this expression, $a$ and $b$ are parameters characteristic of a given substance which can be measured and tabulated. In general, molecules with large intermolecular forces will have large values of $a$, and large molecules will have large values of $b$. Some van der Waals constants are given in Table $1$.

Table $1$: van der Waals constants for select Species

Gas: a (atm L2 mol-2), b (L/mol)
He: 0.0341, 0.0238
N2: 1.352, 0.0387
CO2: 3.610, 0.0429
C2H4: 4.552, 0.0305

The van der Waals model is useful because its parameters are so simple to interpret in terms of molecular size and intermolecular forces. But it does have limitations as well (as is the case with every scientific model!) Some other useful two-parameter and three-parameter (or more) equations of state include the Redlich-Kwong, Dieterici, and Clausius models (Table $2$). These have the advantage that they allow for temperature dependence in some of the parameters, which, as will be seen later, is necessary to model certain behaviors of real gases.

Table $2$: Other Equations of State

Model: Equation of State
Ideal: $p = \dfrac{RT}{V_m}$
van der Waals (van der Waals J.
D., 1867): $p =\dfrac{RT}{V_m-b} - \dfrac{a}{V_m^2}$
Redlich-Kwong (Redlich & Kwong, 1949): $p =\dfrac{RT}{V_m-b} - \dfrac{a}{\sqrt{T} V_m (V_m +b)}$
Dieterici (Dieterici, 1899): $p =\dfrac{RT}{V_m-b} \exp \left( \dfrac{-a}{V_mRT} \right)$
Clausius: $p =\dfrac{RT}{V_m-b} - \dfrac{a}{T (V_m + c)^2}$
Virial Equations: $p =\dfrac{RT}{V_m} \left(1+ \dfrac{B(T)}{V_m} +\dfrac{C(T)}{V_m^2} + \dots \right)$

The Virial Equation

A very handy expression that allows for deviations from ideal behavior is the Virial Equation of state. This is a simple power series expansion in which the higher-order terms contain all of the deviations from the ideal gas law.

$p =\dfrac{RT}{V_m} \left(1+ \dfrac{B(T)}{V_m} +\dfrac{C(T)}{V_m^2} + \dots \right ) \label{viral}$

In the limit that $B(T)$ (the Second Virial Coefficient) and $C(T)$ are zero, the equation becomes the ideal gas law. Also, when the molar volume of the gas is large, the contributions from the third, fourth, and higher terms decrease in magnitude, allowing one to truncate the series at a convenient point. The second virial coefficient can be predicted from a theoretical intermolecular potential function by

$B(T) = N_A \int _{r=0}^{\infty} \left[ 1- \exp \left(-\dfrac{U(r)}{k_BT} \right) \right] 2\pi r^2 \,dr \nonumber$

The quality of an intermolecular potential can be determined (partially) by the potential's ability to predict the value of the second virial coefficient, $B(T)$.

The Lennard-Jones Potential

An intermolecular potential function is used to describe the interactions between molecules. These interactions will have to include attractive forces, which draw molecules together, and repulsive forces, which push them apart. If the molecules are hard spheres, lacking any attractive interactions, the potential function is fairly simple:

$U(r) = \begin{cases} \infty & \text{for } r\leq \sigma \\ 0 & \text{for } r>\sigma \end{cases} \nonumber$

In this function, $\sigma$ is determined by the size of the molecules.
If two molecules come within a distance $\sigma$ of one another, they collide, bouncing off in a perfectly elastic collision. Real molecules, however, will have a range of intermolecular separations through which they experience attractive forces (the so-called "soft wall" of the potential surface.) And then at very small separations, the repulsive forces will dominate, pushing the molecules apart (the so-called "hard wall" of the potential surface.) A commonly used intermolecular potential, $U(r)$, is the Lennard-Jones potential. This function has the form

$U(r) = 4 \epsilon \left[ \underbrace{\left(\dfrac{\sigma}{r}\right)^{12}}_{\text{repulsive term}} - \underbrace{\left(\dfrac{\sigma}{r}\right)^{6}}_{\text{attractive term}} \right] \nonumber$

where $\sigma$ governs the width of the potential well (it is the separation at which $U(r) = 0$), and $\epsilon$ governs the depth. The distance between molecules is given by $r$. The repulsive interactions between molecules are contained in the first term and the attractive interactions in the second term.

Taylor Series Expansion

A commonly used method of creating a power series based on another equation is the Taylor Series Expansion. This is an expansion of a function about a useful reference point, where each of the terms is generated by differentiating the original function. For a function $f(x)$, the Taylor series $F(x)$ can be generated from the expression

$F(x) = f(a) + \left.\dfrac{d}{dx} f(x) \right|_{x=a} (x-a) + \dfrac{1}{2!} \left. \dfrac{d^2}{dx^2} f(x) \right|_{x=a} (x-a)^2 + \dots \nonumber$

This can be applied to any equation of state to derive an expression for the virial coefficients in terms of the parameters of the equation of state.

Application to the van der Waals equation

The van der Waals equation can be written in terms of molar volume (Equation \ref{vdw2}).
Multiplying the right-hand side by $\frac{u}{u}$ (where $u = 1/V_m$) yields:

$p =\dfrac{RTu}{1-bu} - au^2 \nonumber$

This expression can be Taylor expanded (to the first three terms) about $u = 0$ (which corresponds to an infinite molar volume.) The coefficients needed for the expansion are

$p(u=0)=0 \nonumber$

$\dfrac{dp}{du} \Big|_{u=0} = \left [ \dfrac{RT}{1-bu} + \dfrac{bRTu}{(1-bu)^2} - 2au \right]_{u=0} = RT \nonumber$

$\dfrac{1}{2!}\dfrac{d^2p}{du^2} \Big|_{u=0} = \dfrac{1}{2} \left [ \dfrac{2bRT}{(1-bu)^2} + \dfrac{2b^2RTu}{(1-bu)^3} - 2a \right]_{u=0} = bRT - a \nonumber$

$\dfrac{1}{3!}\dfrac{d^3p}{du^3} \Big|_{u=0} = RTb^2 \nonumber$

And the virial equation can then be expressed in terms of the van der Waals parameters as

$p = 0 + RT\,u + (bRT -a)\,u^2 + RTb^2\,u^3 \nonumber$

Substituting $u = 1/V_m$ and simplifying gives the desired result:

$p= RT \left[ \dfrac{1}{V_m} + \dfrac{b-\frac{a}{RT}}{V_m^2} + \dfrac{b^2}{V_m^3} + \dots \right] \nonumber$

And the second virial coefficient is given by

$B(T) = b-\dfrac{a}{RT} \nonumber$

The Boyle Temperature

A useful way in which deviations from ideality can be expressed is by defining the compression factor ($Z$) given by

$Z = \dfrac{pV_m}{RT} \nonumber$

where $V_m$ is the molar volume. For an ideal gas, $Z = 1$ under all combinations of $p$, $V_m$, and $T$. However, real gases will show some deviation (although all gases approach ideal behavior at low $p$, high $V_m$, and high $T$.) The compression factor for nitrogen at several temperatures is shown below over a range of pressures. As can be seen, the gas behaves more nearly ideally over a longer range of pressure at the higher temperatures. In general, there is one temperature, the Boyle temperature, at which a gas approaches ideal behavior asymptotically as the pressure goes to zero, and thus behaves ideally over a broad range of lower pressures.
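The expansion result $B(T) = b - a/RT$ can be checked numerically: at large molar volume, $B(T)$ is the limit of $V_m(Z - 1)$. The sketch below does this for CO2 using the van der Waals constants from Table 1 (the temperature and the large test volume are arbitrary choices):

```python
R = 0.08206           # gas constant, L atm mol^-1 K^-1
T = 298.0             # K (arbitrary choice)
a, b = 3.610, 0.0429  # van der Waals constants for CO2 (Table 1)

def p_vdw(V):
    """van der Waals pressure at molar volume V (L/mol)."""
    return R * T / (V - b) - a / V**2

# (1) From the Taylor expansion result
B_analytic = b - a / (R * T)

# (2) Numerically, from B(T) = lim_{V->inf} V (Z - 1), with Z = pV/RT
V = 1.0e4             # a very large molar volume, L/mol
Z = p_vdw(V) * V / (R * T)
B_numeric = V * (Z - 1)

print(B_analytic, B_numeric)   # both about -0.105 L/mol at 298 K
```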
The Boyle temperature is found by solving

$\lim_{p \rightarrow 0} \left( \dfrac{\partial Z}{\partial p} \right) = 0 \nonumber$

or

$\lim_{1/V_m \rightarrow 0} \left( \dfrac{\partial Z}{\partial \left(\frac{1}{V_m} \right)} \right) = 0 \nonumber$

Using the virial equation of state (Equation \ref{viral}), the Boyle temperature can be expressed in terms of the virial coefficients. Starting with the compression factor

$Z = 1 +\dfrac{B}{V_m} + \dots \nonumber$

and then differentiating with respect to $1/V_m$ yields

$\dfrac{\partial Z}{\partial \left(\frac{1}{V_m}\right)} = B \nonumber$

So it can be concluded that at the Boyle temperature, the second virial coefficient $B$ is equal to zero. This should make some sense, given that the second virial coefficient provides the leading deviation from the ideal gas law at low pressure, and so it must vanish if the gas is to behave ideally there.

Critical Behavior

The isotherms (lines of constant temperature) of CO2 reveal a very large deviation from ideal behavior. At high temperatures, CO2 behaves according to Boyle's Law. However, at lower temperatures, the gas begins to condense to form a liquid at high pressures. At one specific temperature, the critical temperature, the isotherm begins to display this critical behavior. The temperature, pressure, and molar volume ($p_c$, $T_c$, and $V_c$) at this point define the critical point. In order to solve for expressions for the critical constants, one requires three equations. The equation of state provides one relationship. The second can be generated by recognizing that the slope of the isotherm at the critical point is zero. And finally, the third expression is derived by recognizing that the isotherm passes through an inflection point at the critical point.
Using the van der Waals equation as an example, these three equations can be generated as follows:

$p_c = \dfrac{RT_c}{V_c-b} - \dfrac{a}{V_c^2} \nonumber$

$\left( \dfrac{\partial p}{\partial V_m} \right)_{T} = -\dfrac{RT_c}{(V_c-b)^2} + \dfrac{2a}{V_c^3} = 0 \nonumber$

$\left( \dfrac{\partial^2 p}{\partial V_m^2} \right)_{T} = \dfrac{2RT_c}{(V_c-b)^3} - \dfrac{6a}{V_c^4} = 0 \nonumber$

Solving these expressions for $p_c$, $T_c$, and $V_c$ yields

$p_c = \dfrac{a}{27b^2} \nonumber$

$T_c = \dfrac{8a}{27bR} \nonumber$

$V_c = 3b \nonumber$

The critical variables can be used in this fashion to determine the values of the molecular parameters used in an equation of state (such as the van der Waals equation) for a given substance.

The Principle of Corresponding States

The principle of corresponding states was proposed by van der Waals in 1913 (van der Waals J. D., 1913). He noted that the compression factor at the critical point

$Z_c = \dfrac{p_cV_c}{RT_c} \nonumber$

is very nearly the same for many substances. This is consistent with what is predicted by the van der Waals equation, which predicts $Z_c = 0.375$ irrespective of substance. Further, it can be noted that, based on reduced variables defined by

$p_r= \dfrac{p}{p_c} \nonumber$

$V_r= \dfrac{V}{V_c} \nonumber$

$T_r= \dfrac{T}{T_c} \nonumber$

several physical properties are found to be comparable for real substances. For example (Guggenheim, 1945), for argon, krypton, nitrogen, oxygen, carbon dioxide, and methane the critical compression factor is

$\dfrac{p_cV_c}{RT_c} \approx 0.292 \nonumber$

Also, the compression factor can be plotted as a function of reduced pressure for several substances at several reduced isotherms with surprising consistency irrespective of the substance.
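The critical-point formulas above can be exercised numerically. Using the CO2 van der Waals constants from Table 1, the sketch below evaluates $p_c$, $T_c$, and $V_c$, and confirms that the model's $Z_c$ comes out to 0.375 regardless of the values of $a$ and $b$:

```python
R = 0.08206           # gas constant, L atm mol^-1 K^-1
a, b = 3.610, 0.0429  # van der Waals constants for CO2 (Table 1)

# Critical constants from the van der Waals model
p_c = a / (27 * b**2)       # atm
T_c = 8 * a / (27 * b * R)  # K
V_c = 3 * b                 # L/mol

# Critical compression factor; algebraically 3/8 for any a, b
Z_c = p_c * V_c / (R * T_c)

print(p_c, T_c, V_c)
print(Z_c)   # 0.375
```

The predicted $T_c$ of roughly 304 K is quite close to the observed critical temperature of CO2, which is part of why the van der Waals model remains a useful teaching tool.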
Q2.1 Assuming the form of the Maxwell distribution allowing for motion in three directions to be

$f(v) = N v^2 \exp \left( -\dfrac{m v^2}{2 k_BT} \right) \label{MBFullN}$

derive the correct expression for $N$ such that the distribution is normalized. Hint: a table of definite integrals indicates

$\int_0^{\infty} x^{2} e^{-ax^2} dx = \dfrac{1}{4} \dfrac{\sqrt{\pi}}{a^{3/2}}$

Q2.2 Dry ice (solid CO2) has a density of 1.6 g/cm3. Assuming spherical molecules, estimate the collisional cross section for $CO_2$. How does it compare to the value listed in the text?

Q2.3 Calculate the pressure exerted by 1.00 mol of Ar, N2, and CO2 as an ideal gas, a van der Waals gas, and a Redlich-Kwong gas, at 25 °C and 24.4 L.

Q2.4 The compression factor $Z$ for CO2 at 0 °C and 100 atm is 0.2007. Calculate the volume of a 2.50 mole sample of CO2 at 0 °C and 100 atm.

Q2.5 Record your results from Q2.3 in the table below.

Gas: $Ar$, $N_2$, $CO_2$
ideal:
van der Waals:
Redlich-Kwong:

Q2.6 What is the maximum pressure that will afford a N2 molecule a mean free path of at least 1.00 m at 25 °C?

Q2.7 In a Knudsen cell, the effusion orifice is measured to be 0.50 mm2. If a sample of naphthalene is allowed to effuse for 1.0 hr at a temperature of 40.3 °C, the cell loses 0.0236 g. From these data, calculate the vapor pressure of naphthalene at this temperature.

Q2.8 The vapor pressure of scandium was determined using a Knudsen cell [Kirkorian, J. Phys. Chem., 67, 1586 (1963)]. The data from the experiment are given below.

Vapor Pressure of Scandium
Temperature: 1555.4 K
Time: 110.5 min
Mass loss: 9.57 mg
Diameter of orifice: 0.2965 cm

From these data, find the vapor pressure of scandium at 1555.4 K.

Q2.9 A thermalized sample of gas is one that has a distribution of molecular speeds given by the Maxwell-Boltzmann distribution. Considering a sample of N2 at 25 °C, what fraction of the molecules have a speed less than

1. the most probable speed?
2. the average speed?
3. the RMS speed?
4. the RMS speed of helium atoms under the same conditions?
Q2.10 Assume that a person has a body surface area of 2.0 m2. Calculate the number of collisions per second with the total surface area of this person at 25 °C and 1.00 atm. (For convenience, assume air is 100% N2.)

Q2.11 Two identical balloons are inflated to a volume of 1.00 L with a particular gas. After 12 hours, the volume of one balloon has decreased by 0.200 L. In the same time, the volume of the other balloon has decreased by 0.0603 L. If the lighter of the two gases was helium, what is the molar mass of the heavier gas?

Q2.12 Assuming it is a van der Waals gas, calculate the critical temperature, pressure, and volume for $CO_2$.

Q2.13 Find an expression in terms of van der Waals coefficients for the Boyle temperature. (Hint: use the virial expansion of the van der Waals equation to find an expression for the second virial coefficient!)

Q2.14 Consider a gas that follows the equation of state

$p =\dfrac{RT}{V_m - b}$

Using a virial expansion, find an expression for the second virial coefficient.

Q2.15 Consider a gas that obeys the equation of state

$p =\dfrac{RT}{V_m - b}$

where $a$ and $b$ are non-zero constants. Does this gas exhibit critical behavior? If so, find expressions for $p_c$, $V_c$, and $T_c$ in terms of the constants $a$, $b$, and $R$.

Q2.16 Consider a gas that obeys the equation of state

$p = \dfrac{nRT}{V- nb}-\dfrac{an}{V}$

1. Find an expression for the Boyle temperature in terms of the constants $a$, $b$, and $R$.
2. Does this gas exhibit critical behavior? If so, find expressions for $p_c$, $V_c$, and $T_c$ in terms of the constants $a$, $b$, and $R$.

2.S: Gases (Summary)

Learning Objectives

After mastering the material covered in this chapter, one will be able to:

1. Understand the relationships demonstrated by and perform calculations using the empirical gas laws (Boyle's Law, Charles' Law, Gay-Lussac's Law, and Avogadro's Law, as well as the combined gas law.)
2.
Understand and be able to utilize the ideal gas law in applications important in chemistry.
3. State the postulates of the Kinetic Molecular Theory of gases.
4. Utilize the Maxwell and Maxwell-Boltzmann distributions to describe the relationship between temperature and the distribution of molecular speeds.
5. Derive an expression for pressure based on the predictions of the kinetic molecular theory for the collisions of gas molecules with the walls of a container.
6. Derive and utilize an expression for the frequency with which molecules in a gas sample collide with other molecules.
7. Derive and utilize an expression for the mean free path of molecules based on temperature, pressure, and collisional cross section.
8. Explain how the van der Waals (and other) model(s) allow for deviations from ideal behavior of gas samples.
9. Derive an expression for the Boyle temperature and interpret the results based on how a gas's behavior approaches that of an ideal gas.
10. Explain and utilize the Principle of Corresponding States.

Vocabulary and Concepts

• average
• Boyle temperature
• collisional cross section
• compression factor
• critical point
• critical temperature
• diffusion
• effusion
• empirical
• empirical gas laws
• frequency of collisions
• frequency of collisions with the wall
• gas law constant
• ideal gas law
• intermolecular potential
• isotherm
• Kinetic Molecular Theory
• Knudsen cell
• Lennard-Jones potential
• maximum probability
• Maxwell's distribution
• Maxwell-Boltzmann distribution
• mean free path
• normalization constant
• number density
• principle of corresponding states
• reduced variables
• root-mean-square
• Second Virial Coefficient
• Taylor Series Expansion
• van der Waals' equation
• Virial Equation
Thermodynamics is the study of how energy flows into and out of systems and how it flows through the universe. People have been studying thermodynamics for a very long time and have developed the field a great deal, including the incorporation of high-level mathematics into the process. Many of the relationships may look cumbersome or complicated, but they are always describing the same basic thing: the flow of energy through the universe.

• 3.1: Prelude to Thermodynamics
Thermodynamics is the study of how energy flows into and out of systems and how it flows through the universe. People have been studying thermodynamics for a very long time and have developed the field a great deal, including the incorporation of high-level mathematics into the process. Many of the relationships may look cumbersome or complicated, but they are always describing the same basic thing: the flow of energy through the universe.

• 3.2: Work and Heat
Joule was able to show that work and heat can have the same effect on matter – a change in temperature! It would then be reasonable to conclude that heating, as well as doing work on a system, will increase its energy content, and thus its ability to perform work in the surroundings. This leads to an important construct of the First Law of Thermodynamics: The capacity of a system to do work is increased by heating the system or doing work on it.

• 3.3: Reversible and Irreversible Pathways
It is convenient to use the work of expansion to exemplify the difference between work that is done reversibly and that which is done irreversibly. The example of expansion against a constant external pressure is an example of an irreversible pathway. It does not mean that the gas cannot be re-compressed. It does, however, mean that there is a definite direction of spontaneous change at all points along the expansion.

• 3.4: Calorimetry
As chemists, we are concerned with chemical changes and reactions.
The thermodynamics of chemical reactions can be very important in terms of controlling the production of desired products and preventing safety hazards such as explosions. As such, measuring and understanding the thermochemistry of chemical reactions is not only useful, but essential!

• 3.5: Temperature Dependence of Enthalpy
It is often required to know thermodynamic functions (such as enthalpy) at temperatures other than those available from tabulated data. Fortunately, the conversion to other temperatures isn't difficult.

• 3.6: Reaction Enthalpies
Reaction enthalpies are important, but difficult to tabulate. However, because enthalpy is a state function, it is possible to use Hess' Law to simplify the tabulation of reaction enthalpies. Hess' Law is based on the addition of reactions. By knowing the reaction enthalpy for constituent reactions, the enthalpy of a reaction that can be expressed as the sum of the constituent reactions can be calculated.

• 3.7: Lattice Energy and the Born-Haber Cycle
An important enthalpy change is the lattice energy. This is the energy necessary to take one mole of a crystalline solid to ions in the gas phase. A very handy construct in thermodynamics is that of the thermodynamic cycle. This can be represented graphically to help to visualize how all of the pieces of the cycle add together. A very good example of this is the Born-Haber cycle, describing the formation of an ionic solid.

• 3.E: First Law of Thermodynamics (Exercises)
Exercises for Chapter 3 "First Law of Thermodynamics" in Fleming's Physical Chemistry Textmap.

• 3.S: First Law of Thermodynamics (Summary)
Summary for Chapter 3 "First Law of Thermodynamics" in Fleming's Physical Chemistry Textmap.
03: First Law of Thermodynamics

Albert Einstein, a noted physicist, said of thermodynamics (Einstein, 1979):

"A law is more impressive the greater the simplicity of its premises, the more different are the kinds of things it relates, and the more extended its range of applicability. (...) It is the only physical theory of universal content of which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown."

Thermodynamics is the study of how energy flows into and out of systems and how it flows through the universe. People have been studying thermodynamics for a very long time and have developed the field a great deal, including the incorporation of high-level mathematics into the process. Many of the relationships may look cumbersome or complicated, but they are always describing the same basic thing: the flow of energy through the universe. Energy, of course, can be used to do many useful things, such as allowing us to drive our cars, use electronic devices, heat our homes, and cook our food. Chemistry is important as well, since many of the processes by which we generate energy depend on chemical reactions (such as the combustion of hydrocarbons to generate heat, or electron transfer reactions to generate electron flow.)

The previous chapter investigated gases, which are convenient systems to use to frame many discussions of thermodynamics since they can be modeled using specific equations of state such as the ideal gas law or the van der Waals equation. These relationships depend on an important class of variables known as state variables. State variables are those variables which depend only upon the current conditions affecting a system. Pressure, temperature, and molar volume are examples of state variables. A number of variables required to describe the flow of energy in a system do depend on the pathway a system follows to come into its current state. To illustrate the difference, consider climbing a mountain.
You may choose to walk straight up the side of the mountain, or you may choose to circle the mountain several times in order to get to the top. These two pathways will differ in terms of how far you actually walk (a path-dependent variable) to attain the same change in altitude (an example of a state variable.)
One of the pioneers in the field of modern thermodynamics was James P. Joule (1818 - 1889). Among the experiments Joule carried out was an attempt to measure the effect on the temperature of a sample of water caused by doing work on the water. Using a clever apparatus in which a falling weight turned paddles within an insulated canister filled with water, Joule was able to measure a temperature increase in the water. Thus, Joule was able to show that work and heat can have the same effect on matter – a change in temperature! It would then be reasonable to conclude that heating a system, as well as doing work on it, will increase its energy content, and thus its ability to perform work in the surroundings. This leads to an important construct of the First Law of Thermodynamics: The capacity of a system to do work is increased by heating the system or doing work on it. The internal energy (U) of a system is a measure of its capacity to supply energy that can do work within the surroundings, making U the ideal variable to keep track of the flow of heat and work energy into and out of a system. Changes in the internal energy of a system ($\Delta U$) can be calculated by $\Delta U = U_f - U_i \label{FirstLaw}$ where the subscripts $i$ and $f$ indicate initial and final states of the system. $U$, as it turns out, is a state variable. In other words, the amount of energy available in a system to be supplied to the surroundings is independent of how that energy came to be available. That's important because the manner in which energy is transferred is path dependent. There are two main methods by which energy can be transferred to or from a system. These are suggested in the previous statement of the first law of thermodynamics.
Mathematically, we can restate the first law as $\Delta U = q + w \nonumber$ or $dU = dq + dw \nonumber$ where q is defined as the amount of energy that flows into a system in the form of heat and w is the amount of energy lost due to the system doing work on the surroundings.

Heat

Heat is the kind of energy that, in the absence of other changes, would have the effect of changing the temperature of the system. A process in which heat flows into a system is endothermic from the standpoint of the system ($q_{system} > 0$, $q_{surroundings} < 0$). Likewise, a process in which heat flows out of the system (into the surroundings) is called exothermic ($q_{system} < 0$, $q_{surroundings} > 0$). In the absence of any energy flow in the form of work, the flow of heat into or out of a system can be measured by a change in temperature. In cases where it is difficult to measure temperature changes of the system directly, the amount of heat energy transferred in a process can be measured using a change in temperature of the surroundings. (This concept will be used later in the discussion of calorimetry.)

An infinitesimal amount of heat flow into or out of a system can be related to a change in temperature by $dq = C\, dT \nonumber$ where C is the heat capacity and has the definition $C = \dfrac{dq}{dT} \nonumber$ Heat capacities generally have units of J mol-1 K-1 and magnitudes equal to the number of J needed to raise the temperature of 1 mol of substance by 1 K. Similar to a heat capacity is a specific heat, which is defined per unit mass rather than per mole. The specific heat of water, for example, has a value of 4.184 J g-1 K-1 (at constant pressure – a pathway distinction that will be discussed later.)

Example $1$: Heat required to Raise Temperature

How much energy is needed to raise the temperature of 5.0 g of water from 21.0 °C to 25.0 °C?
Solution

\begin{align*} q &=mC \Delta T \\[4pt] &= (5.0 \,\cancel{g}) \left(4.184 \dfrac{J}{\cancel{g} \, \cancel{°C}}\right) (25.0 \cancel{°C} - 21.0 \cancel{°C}) \\[4pt] &= 84\, J \end{align*}

What is a partial derivative? A partial derivative, like a total derivative, is a slope. It gives a magnitude for how quickly a function changes value when one of its independent variables changes. Mathematically, a partial derivative is defined for a function $f(x_1,x_2, \dots x_n)$ by $\left( \dfrac{ \partial f}{\partial x_i} \right)_{x_{j \neq i}} = \lim_{\Delta x_i \rightarrow 0} \left( \dfrac{f(x_1, x_2, \dots, x_i +\Delta x_i, \dots x_n) - f(x_1,x_2, \dots x_i, \dots x_n) }{\Delta x_i} \right) \nonumber$ Because it measures how much a function changes for a change in a given independent variable, infinitesimal changes in the function can be described by $df = \sum_i \left( \dfrac{\partial f}{\partial x_i} \right)_{x_{j \neq i}} dx_i \nonumber$ so that each contribution to the total change in the function $f$ can be considered separately.

For simplicity, consider an ideal gas. The pressure can be calculated for the gas using the ideal gas law. In this expression, pressure is a function of temperature and molar volume. $p(V,T) = \dfrac{RT}{V} \nonumber$ The partial derivatives of $p$ can be expressed in terms of $T$ and $V$ as well. $\left( \dfrac{\partial p}{ \partial V} \right)_{T} = - \dfrac{RT}{V^2} \label{max1}$ and $\left( \dfrac{\partial p}{ \partial T} \right)_{V} = \dfrac{R}{V} \label{max2}$ So that the change in pressure can be expressed $dp = \left( \dfrac{\partial p}{ \partial V} \right)_{T} dV + \left( \dfrac{\partial p}{ \partial T} \right)_{V} dT \label{eq3}$ or by substituting Equations \ref{max1} and \ref{max2} $dp = \left( - \dfrac{RT}{V^2} \right ) dV + \left( \dfrac{R}{V} \right) dT \nonumber$ Macroscopic changes can be expressed by integrating the individual pieces of Equation \ref{eq3} over appropriate intervals.
$\Delta p = \int_{V_1}^{V_2} \left( \dfrac{\partial p}{ \partial V} \right)_{T} dV + \int_{T_1}^{T_2} \left( \dfrac{\partial p}{ \partial T} \right)_{V} dT \nonumber$ This can be thought of as two consecutive changes. The first is an isothermal (constant temperature) expansion from $V_1$ to $V_2$ at $T_1$, and the second is an isochoric (constant volume) temperature change from $T_1$ to $T_2$ at $V_2$. For example, suppose one needs to calculate the change in pressure for an ideal gas expanding from 1.0 L/mol at 200 K to 3.0 L/mol at 400 K. The set up might look as follows. $\Delta p = \underbrace{ \int_{V_1}^{V_2} \left( - \dfrac{RT_1}{V^2} \right ) dV}_{\text{isothermal expansion}} + \underbrace{ \int_{T_1}^{T_2}\left( \dfrac{R}{V_2} \right) dT}_{\text{isochoric heating}} \nonumber$ or \begin{align*} \Delta p &= \int_{1.0 \,L/mol}^{3.0 \,L/mol} \left( - \dfrac{R( 200\,K)}{V^2} \right ) dV + \int_{200 \,K}^{400\,K }\left( \dfrac{R}{3.0 \, L/mol} \right) dT \\[4pt] &= \left[ \dfrac{R(200\,K)}{V} \right]_{ 1.0\, L/mol}^{3.0\, L/mol} + \left[ \dfrac{RT}{3.0 \, L/mol} \right]_{ 200\,K}^{400\,K} \\[4pt] &= R \left[ \left( \dfrac{200\,K}{3.0\, L/mol} - \dfrac{200\,K}{1.0\, L/mol}\right) + \left( \dfrac{400\,K}{3.0\, L/mol} - \dfrac{200\,K}{3.0\, L/mol}\right) \right] \\[4pt] &= -5.47 \, atm \end{align*}

Alternatively, one could calculate the change as an isochoric temperature change from $T_1$ to $T_2$ at $V_1$ followed by an isothermal expansion from $V_1$ to $V_2$ at $T_2$: $\Delta p = \int_{T_1}^{T_2}\left( \dfrac{R}{V_1} \right) dT + \int_{V_1}^{V_2} \left( - \dfrac{RT_2}{V^2} \right ) dV \nonumber$ or \begin{align*} \Delta p &= \int_{200 \,K}^{400\,K }\left( \dfrac{R}{1.0 \, L/mol} \right) dT + \int_{1.0 \,L/mol}^{3.0 \,L/mol} \left( - \dfrac{R( 400\,K)}{V^2} \right ) dV \\[4pt] &= \left[ \dfrac{RT}{1.0 \, L/mol} \right]_{ 200\,K}^{400\,K} + \left[ \dfrac{R(400\,K)}{V} \right]_{ 1.0\, L/mol}^{3.0\, L/mol} \\[4pt] &= R \left[ \left( \dfrac{400\,K}{1.0\, L/mol} - \dfrac{200\,K}{1.0\, L/mol}\right) + \left( \dfrac{400\,K}{3.0\, L/mol} - \dfrac{400\,K}{1.0\, L/mol}\right) \right] \\[4pt] &= -5.47 \, atm \end{align*}

This result demonstrates an important property of pressure: pressure is a state variable, and so the calculation of changes in pressure does not depend on the pathway!

Work

Work can take several forms, such as expansion against a resisting pressure, extending length against a resisting tension (like stretching a rubber band), stretching a surface against a surface tension (like stretching a balloon as it inflates) or pushing electrons through a circuit against a resistance. The key to defining the work that flows in a process is to start with an infinitesimal amount of work defined by what is changing in the system.

Table 3.1.1: Changes to the System

Type of work | Displacement | Resistance | dw
Expansion | $dV$ (volume) | $-p_{ext}$ (pressure) | $-p_{ext}\,dV$
Electrical | $dQ$ (charge) | $W$ (resistance) | $-W\,dQ$
Extension | $dL$ (length) | $t$ (tension) | $t\,dL$
Stretching | $dA$ (area) | $s$ (surface tension) | $s\,dA$

The pattern followed is always an infinitesimal displacement multiplied by a resisting force. The total work can then be determined by integrating along the pathway the change follows.

Example $2$: Work from a Gas Expansion

What is the work done by 1.00 mol of an ideal gas expanding from a volume of 22.4 L to a volume of 44.8 L against a constant external pressure of 0.500 atm?

Solution

$dw = -p_{ext} dV \nonumber$ since the pressure is constant, we can integrate easily to get total work \begin{align*} w &= -p_{ext} \int_{V_1}^{V_2} dV \\[4pt] &= -p_{ext} ( V_2-V_1) \\[4pt] &= -(0.500 \,atm)(44.8 \,L - 22.4 \,L) \left(\dfrac{8.314 \,J}{0.08206 \,atm\,L}\right) \\[4pt] &= -1130 \,J = -1.13 \;kJ \end{align*} Note: The ratio of gas law constants can be used to convert between atm∙L and J quite conveniently!
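The two calculations above, the path-independent pressure change and the irreversible expansion work, lend themselves to a quick numerical check. The sketch below is illustrative only (the variable names and the unit-conversion approach are mine, not from the text):

```python
# Numerical check of the two pressure pathways and the expansion work above.
R = 0.08206                    # gas constant, L atm mol^-1 K^-1
J_per_atmL = 8.314 / 0.08206   # J per (atm L), the ratio of gas constants

# Pathway 1: isothermal expansion at 200 K, then isochoric heating at 3.0 L/mol.
dp_1 = (R * 200 / 3.0 - R * 200 / 1.0) + (R * 400 / 3.0 - R * 200 / 3.0)

# Pathway 2: isochoric heating at 1.0 L/mol, then isothermal expansion at 400 K.
dp_2 = (R * 400 / 1.0 - R * 200 / 1.0) + (R * 400 / 3.0 - R * 400 / 1.0)

# Work of expansion against a constant external pressure of 0.500 atm.
w = -0.500 * (44.8 - 22.4) * J_per_atmL   # in J

print(round(dp_1, 2), round(dp_2, 2))  # both give -5.47 atm
```

Both pathways give the same pressure change, as expected for a state variable, and the work comes out near -1.13 kJ.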
The most common example of work in the systems discussed in this book is the work of expansion. It is also convenient to use the work of expansion to exemplify the difference between work that is done reversibly and that which is done irreversibly. The example of expansion against a constant external pressure is an example of an irreversible pathway. It does not mean that the gas cannot be re-compressed. It does, however, mean that there is a definite direction of spontaneous change at all points along the expansion.

Imagine instead a case where the expansion has no spontaneous direction of change because there is no net force pushing the gas to seek a larger or smaller volume. The only way this is possible is if the pressure of the expanding gas is the same as the external pressure resisting the expansion at all points along the expansion. With no net force pushing the change in one direction or the other, the change is said to be reversible or to occur reversibly. The work of a reversible expansion of an ideal gas is fairly easy to calculate. If the gas expands reversibly, the external pressure ($p_{ext}$) can be replaced by a single value ($p$) which represents both the pressure of the gas and the external pressure. $dw = -pdV \nonumber$ or $w = - \int p \,dV \nonumber$ But now that the external pressure is not constant, $p$ cannot be extracted from the integral. Fortunately, however, there is a simple relationship that tells us how $p$ changes with changing $V$ – the equation of state! If the gas is assumed to be an ideal gas $w = - \int p \,dV = -\int \left( \dfrac{nRT}{V}\right) dV \nonumber$ And if the temperature is held constant (so that the expansion follows an isothermal pathway) the $nRT$ term can be extracted from the integral. $w = -nRT \int_{V_1}^{V_2} \dfrac{dV}{V} = -nRT \ln \left( \dfrac{V_2}{V_1} \right) \label{isothermal}$ Equation \ref{isothermal} is derived for ideal gases only; a van der Waals gas would result in a different expression.
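To illustrate that closing remark, the sketch below compares the reversible isothermal work for an ideal gas with that for a van der Waals gas over the same expansion (1.00 mol, 273 K, 22.4 L to 44.8 L). The $a$ and $b$ constants used are typical literature-style values for N2 and are my assumption, not taken from the text:

```python
import math

R = 0.08206                    # L atm mol^-1 K^-1
J_per_atmL = 8.314 / 0.08206   # J per (atm L)

n, T = 1.00, 273.0     # mol, K
V1, V2 = 22.4, 44.8    # L
a, b = 1.37, 0.0387    # assumed vdW constants for N2: L^2 atm mol^-2, L mol^-1

# Ideal gas: w = -nRT ln(V2/V1)
w_ideal = -n * R * T * math.log(V2 / V1) * J_per_atmL

# van der Waals gas: integrate -p dV with p = nRT/(V - nb) - a n^2/V^2
w_vdw = (-n * R * T * math.log((V2 - n * b) / (V1 - n * b))
         - a * n**2 * (1 / V2 - 1 / V1)) * J_per_atmL

print(round(w_ideal), round(w_vdw))  # close, but not identical
```

At these fairly dilute conditions the two answers differ by only a few joules, which is why the ideal gas form is usually adequate.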
Example $1$: Gas Expansion

What is the work done by 1.00 mol of an ideal gas expanding reversibly from a volume of 22.4 L to a volume of 44.8 L at a constant temperature of 273 K?

Solution

Using Equation \ref{isothermal} to calculate this \begin{align*} w & = -(1.00 \, \cancel{mol}) \left(8.314\, \dfrac{J}{\cancel{mol}\,\cancel{ K}}\right) (273\,\cancel{K}) \ln \left( \dfrac{44.8\,L}{22.4 \,L} \right) \nonumber \\[4pt] & = -1570 \,J = -1.57 \;kJ \end{align*}

Note: A reversible expansion will always produce more work than an irreversible expansion (such as an expansion against a constant external pressure) when the final states of the two expansions are the same!

The work of expansion can be depicted graphically as the area under the p-V curve depicting the expansion. Comparing Examples $1$ and $3.1.2$, for which the initial and final volumes were the same, and the constant external pressure of the irreversible expansion was the same as the final pressure of the reversible expansion, such a graph looks as follows. The work is depicted as the shaded portion of the graph. It is clear to see that the reversible expansion (the work for which is shaded in both light and dark gray) exceeds that of the irreversible expansion (shaded in dark gray only) due to the changing pressure of the reversible expansion. In general, it will always be the case that the work generated by a reversible pathway connecting initial and final states will be the maximum work possible for the expansion.

It should be noted (although it will be proven in a later chapter) that $\Delta U$ for an isothermal reversible process involving only p-V work is 0 for an ideal gas. This is true because the internal energy, U, is a measure of a system’s capacity to convert energy into work. In order to do this, the system must somehow store that energy.
The only mode in which an ideal gas can store this energy is in the translational kinetic energy of the molecules (otherwise, molecular collisions would not need to be elastic, which as you recall, was a postulate of the kinetic molecular theory!) And since the average kinetic energy is a function only of the temperature, it (and therefore $U$) can only change if there is a change in temperature. Hence, for any isothermal process for an ideal gas, $\Delta U=0$. And, perhaps just as usefully, for an isothermal process involving an ideal gas, $q = -w$, as any energy that is expended by doing work must be replaced with heat, lest the system temperature drop. Constant Volume Pathways One common pathway which processes can follow is that of constant volume. This will happen if the volume of a sample is constrained by a great enough force that it simply cannot change. It is not uncommon to encounter such conditions with gases (since they are highly compressible anyhow) and also in geological formations, where the tremendous weight of a large mountain may force any processes occurring under it to happen at constant volume. If reversible changes in which the only work that can be done is that of expansion (so-called p-V work) are considered, the following important result is obtained: $dU = dq + dw = dq - pdV \nonumber$ However, $dV = 0$ since the volume is constant! 
As such, $dU$ can be expressed only in terms of the heat that flows into or out of the system at constant volume $dU = dq_v \nonumber$ Recall that $dq$ can be found by $dq = C\, dT \label{eq1}$ This suggests an important definition for the constant volume heat capacity ($C_V$), which is $C_V \equiv \left( \dfrac{\partial U}{\partial T}\right)_V \nonumber$ When Equation \ref{eq1} is integrated, the result is $q = \int _{T_1}^{T_2} nC_V \,dT \label{isochoric}$

Example $2$: Isochoric Pathway

Consider 1.00 mol of an ideal gas with $C_V = 3/2 R$ that undergoes a temperature change from 125 K to 255 K at a constant volume of 10.0 L. Calculate $\Delta U$, $q$, and $w$ for this change.

Solution

Since this is a constant volume process $w = 0 \nonumber$ Equation \ref{isochoric} is applicable for an isochoric process, $q = \int _{T_1}^{T_2} nC_V \,dT \nonumber$ Assuming $C_V$ is independent of temperature: \begin{align*} q & = nC_V \int _{T_1}^{T_2} dT \\[4pt] &= nC_V ( T_2-T_1) \\[4pt] & = (1.00 \, mol) \left( \dfrac{3}{2}\, 8.314\, \dfrac{J}{mol \, K}\right) (255\, K - 125 \,K) \\[4pt] & = 1620 \,J = 1.62\, kJ \end{align*} Since this is a constant volume pathway, \begin{align*} \Delta U & = q + \cancel{w} \\ & = 1.62 \,kJ \end{align*}

Constant Pressure Pathways

Most laboratory-based chemistry occurs at constant pressure. Specifically, it is exposed to the constant air pressure of the laboratory, glove box, or other container in which reactions are taking place. For constant pressure changes, it is convenient to define a new thermodynamic quantity called enthalpy.
$H \equiv U+ pV \nonumber$ or \begin{align*} dH &\equiv dU + d(pV) \\[4pt] &= dU + pdV + Vdp \end{align*} For reversible changes at constant pressure ($dp = 0$) for which only p-V work is done \begin{align} dH & = dq + dw + pdV + Vdp \\[4pt] & = dq - \cancel{pdV} + \cancel{pdV} + \cancelto{0}{Vdp} \\ & = dq \label{heat} \end{align} And just as in the case of constant volume changes, this implies an important definition for the constant pressure heat capacity $C_p \equiv \left( \dfrac{\partial H}{\partial T} \right)_p \nonumber$

Example $3$: Isobaric Gas Expansion

Consider 1.00 mol of an ideal gas with $C_p = 5/2 R$ that undergoes a temperature change from 125 K to 255 K at a constant pressure of 10.0 atm. Calculate $\Delta U$, $\Delta H$, $q$, and $w$ for this change.

Solution

$q = \int_{T_1}^{T_2} nC_p \,dT \nonumber$ Assuming $C_p$ is independent of temperature: \begin{align*} q & = nC_p \int _{T_1}^{T_2} dT \\ & = nC_p (T_2-T_1) \\ & = (1.00 \, mol) \left( \dfrac{5}{2}\, 8.314 \dfrac{J}{mol \, K}\right) (255\, K - 125\, K) = 2700\, J = 2.70\, kJ \end{align*} So via Equation \ref{heat} (specifically the integrated version of it using differences instead of differentials) $\Delta H = q = 2.70 \,kJ \nonumber$ \begin{align*} \Delta U & = \Delta H - \Delta (pV) \\ & = \Delta H -nR\Delta T \\ & = 2700\, J - (1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K}\right) (255\, K - 125 \,K) \\ & = 1620 \,J = 1.62\, kJ \end{align*} Now that $\Delta U$ and $q$ are determined, the work can be calculated \begin{align*} w & =\Delta U -q \\ & = 1.62\,kJ - 2.70\,kJ = -1.08\;kJ \end{align*} It makes sense that $w$ is negative since this process is a gas expansion.

Example $4$: Isothermal Gas Expansion

Calculate $q$, $w$, $\Delta U$, and $\Delta H$ for 1.00 mol of an ideal gas expanding reversibly and isothermally at 273 K from a volume of 22.4 L and a pressure of 1.00 atm to a volume of 44.8 L and a pressure of 0.500 atm.
Solution

Since this is an isothermal expansion, Equation \ref{isothermal} is applicable \begin{align*} w & = -nRT \ln \dfrac{V_2}{V_1} \\ & = -(1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K}\right) (273\, K) \ln \left(\dfrac{44.8\,L}{22.4\,L} \right) \\ & = -1570\,J = -1.57\,kJ \end{align*} Since the process is isothermal and the gas is ideal, $\Delta U = 0$, so \begin{align*} q &= \Delta U - w \\ &= -w = +1.57\,kJ \end{align*} Since this is an isothermal expansion of an ideal gas $\Delta H = \Delta U + \Delta (pV) = 0 + 0 \nonumber$ where $\Delta (pV) = 0$ due to Boyle’s Law!

Adiabatic Pathways

An adiabatic pathway is defined as one in which no heat is transferred ($q = 0$). Under these circumstances, if an ideal gas expands, it is doing work ($w < 0$) against the surroundings (provided the external pressure is not zero!) and as such the internal energy must drop ($\Delta U <0$). And since $\Delta U$ is negative, there must also be a decrease in the temperature ($\Delta T < 0$). How big will the decrease in temperature be and on what will it depend? The key to answering these questions comes in the solution to how we calculate the work done.

If the adiabatic expansion is reversible and done on an ideal gas, $dw = -pdV \nonumber$ and, since $q = 0$, $dw = dU = nC_v\,dT \label{Adiabate2}$ Equating these two terms yields $- pdV = nC_v dT \nonumber$ Using the ideal gas law for an expression for $p$ ($p = nRT/V$) $- \dfrac{nRT}{V} dV = nC_vdT \nonumber$ And rearranging to gather the temperature terms on the right and volume terms on the left yields $\dfrac{dV}{V} = -\dfrac{C_V}{R} \dfrac{dT}{T} \nonumber$ This expression can be integrated on the left between $V_1$ and $V_2$ and on the right between $T_1$ and $T_2$. Assuming that $C_V/R$ is independent of temperature over the range of integration, it can be pulled from the integrand in the term on the right.
$\int_{V_1}^{V_2} \dfrac{dV}{V} = -\dfrac{C_V}{R} \int_{T_1}^{T_2} \dfrac{dT}{T} \nonumber$ The result is $\ln \left(\dfrac{V_2}{V_1} \right) = - \dfrac{C_V}{R} \ln \left( \dfrac{T_2}{T_1} \right) \nonumber$ or $\left(\dfrac{V_2}{V_1} \right) = \left(\dfrac{T_2}{T_1} \right)^{- \frac{C_V}{R}} \nonumber$ or $V_1T_1^{\frac{C_V}{R}} = V_2T_2^{\frac{C_V}{R}} \nonumber$ or $T_2 = T_1 \left(\dfrac{V_1}{V_2} \right)^{\frac{R} {C_V}} \label{Eq4Alternative}$ Once $\Delta T$ is known, it is easy to calculate $w$, $\Delta U$ and $\Delta H$.

Example $5$:

1.00 mol of an ideal gas ($C_V = 3/2 R$) initially occupies 22.4 L at 273 K. The gas expands adiabatically and reversibly to a final volume of 44.8 L. Calculate $\Delta T$, $q$, $w$, $\Delta U$, and $\Delta H$ for the expansion.

Solution

Since the pathway is adiabatic: $q =0 \nonumber$ Using Equation \ref{Eq4Alternative} \begin{align*} T_2 & = T_1 \left(\dfrac{V_1}{V_2} \right)^{\frac{R} {C_V}} \\ & =(273\,K) \left( \dfrac{22.4\,L}{44.8\,L} \right)^{2/3} \\ & = 172\,K \end{align*} So $\Delta T = 172\,K - 273\,K = -101\,K \nonumber$ For calculating work, we integrate Equation \ref{Adiabate2} to get \begin{align*} w & = \Delta U = nC_v \Delta T \\ & = (1.00 \, mol) \left(\dfrac{3}{2}\, 8.314\, \dfrac{J}{mol \, K} \right) (-101\,K ) \\ & = -1260 \,J = -1.26\,kJ \end{align*} \begin{align*} \Delta H & = \Delta U + nR\Delta T \\ & = -1260\,J + (1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K} \right) (-101\,K ) \\ & = -2100\,J \end{align*}

The following table shows recipes for calculating $q$, $w$, $\Delta U$, and $\Delta H$ for an ideal gas undergoing a reversible change along the specified pathway.
Table 3.2.1: Thermodynamic Properties for a Reversible Expansion or Compression

Pathway | $q$ | $w$ | $\Delta U$ | $\Delta H$
Isothermal | $nRT \ln (V_2/V_1)$ | $-nRT \ln (V_2/V_1)$ | 0 | 0
Isochoric | $C_V \Delta T$ | 0 | $C_V \Delta T$ | $C_V \Delta T + V\Delta p$
Isobaric | $C_p \Delta T$ | $- p\Delta V$ | $C_p \Delta T - p\Delta V$ | $C_p \Delta T$
Adiabatic | 0 | $C_V \Delta T$ | $C_V \Delta T$ | $C_p \Delta T$
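These recipes, and the worked examples above, can be verified with a short numerical sketch (illustrative only; all names are mine). It uses the same gas as the examples: 1.00 mol with $C_V = 3/2 R$ and $C_p = 5/2 R$.

```python
R = 8.314               # gas constant, J mol^-1 K^-1
n = 1.00                # mol
Cv, Cp = 1.5 * R, 2.5 * R

# Isochoric (Example 2): 125 K -> 255 K at constant volume.
q_v = n * Cv * (255 - 125)         # = dU, since w = 0

# Isobaric (Example 3): 125 K -> 255 K at constant pressure.
q_p = n * Cp * (255 - 125)         # = dH
dU_p = q_p - n * R * (255 - 125)   # dU = dH - nR dT for an ideal gas
w_p = dU_p - q_p                   # first law: w = dU - q

# Adiabatic reversible (Example 5): 22.4 L -> 44.8 L starting at 273 K.
T2 = 273 * (22.4 / 44.8) ** (R / Cv)   # final temperature
w_ad = n * Cv * (T2 - 273)             # = dU, since q = 0

print(round(q_v), round(q_p), round(w_p), round(T2))
```

The output reproduces the examples: roughly 1.62 kJ for the isochoric heat, 2.70 kJ and -1.08 kJ for the isobaric heat and work, and a final adiabatic temperature near 172 K.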
As chemists, we are concerned with chemical changes and reactions. The thermodynamics of chemical reactions can be very important in terms of controlling the production of desired products and preventing safety hazards such as explosions. As such, measuring and understanding the thermochemistry of chemical reactions is not only useful, but essential!

Calorimetry

The techniques of calorimetry can be used to measure q for a chemical reaction directly. The enthalpy change for a chemical reaction is of significant interest to chemists. An exothermic reaction will release heat ($q_{reaction} < 0$, $q_{surroundings} > 0$) causing the temperature of the surroundings to increase. Conversely, an endothermic reaction ($q_{reaction} > 0$, $q_{surroundings} < 0$) will draw heat from the surroundings, causing the temperature of the surroundings to drop. Measuring the temperature change in the surroundings allows for the determination of how much heat was released or absorbed in the reaction.

Bomb Calorimetry

Bomb calorimetry is used predominantly to measure the heat evolved in combustion reactions, but can be used for a wide variety of reactions. A typical bomb calorimetry setup is shown here. The reaction is contained in a heavy metallic container (the bomb) forcing the reaction to occur at constant volume. As such, the heat evolved (or absorbed) by the reaction is equal to the change in internal energy ($\Delta U_{rxn}$). The bomb is then submerged in a reproducible quantity of water, the temperature of which is monitored with a high-precision thermometer. For combustion reactions, the bomb will be loaded with a small sample of the compound to be combusted, and then the bomb is filled with a high pressure (typically about 10 atm) of O2. The reaction is initiated by supplying heat using a short piece of resistive wire carrying an electrical current.
The calorimeter must be calibrated by carrying out a reaction for which $\Delta U_{rxn}$ is well known, so that the resulting temperature change can be related to the amount of heat released or absorbed. A commonly used reaction is the combustion of benzoic acid. This makes a good choice since benzoic acid reacts reliably and reproducibly under normal bomb calorimetry conditions. The “water equivalent” of the calorimeter can then be calculated from the temperature change using the following relationship: $W = \dfrac{n\Delta U_c +e_{wire}+e_{other}}{\Delta T} \nonumber$ where $n$ is the number of moles of benzoic acid used, $\Delta U_c$ is the internal energy of combustion for benzoic acid (3225.7 kJ mol-1 at 25 °C), $e_{wire}$ accounts for the energy released in the combustion of the fuse wire, $e_{other}$ accounts for any other corrections (such as heat released due to the combustion of residual nitrogen in the bomb), and $\Delta T$ is the measured temperature change in the surrounding water bath.

Once the “water equivalent” is determined for a calorimeter, the temperature change can be used to find $\Delta U_c$ for an unknown compound from the temperature change created upon combustion of a known quantity of the substance. $\Delta U_c = \dfrac{W \Delta T - e_{wire} - e_{other}}{n_{sample}} \nonumber$ The experiment above is known as “isothermal bomb calorimetry” as the entire assembly sits in a constant temperature laboratory. Another approach is to employ “adiabatic bomb calorimetry” in which the assembly sits inside of a water jacket, the temperature of which is controlled to match the temperature of the water inside the insulated container. By matching this temperature, there is no thermal gradient, and thus no heat leaks into or out of the assembly during an experiment (and hence the experiment is effectively “adiabatic”).

Finding $\Delta U_c$

The enthalpy of combustion can be calculated from the internal energy change if the balanced chemical reaction is known.
Recall from the definition of enthalpy $\Delta H = \Delta U + \Delta (pV) \nonumber$ and if the gas-phase reactants and products can be treated as ideal gases ($pV = nRT$) $\Delta H = \Delta U + RT \Delta n_{gas} \nonumber$ at constant temperature. For the combustion of benzoic acid at 25 °C $\ce{C6H5COOH (s) + 15/2 O_2(g) -> 7 CO2(g) + 3 H2O(l)} \nonumber$ it can be seen that $\Delta n_{gas}$ is -0.5 mol of gas for every mole of benzoic acid reacted.

Example $1$: Combustion of Naphthalene

A student burned a 0.7842 g sample of benzoic acid ($\ce{C7H6O2}$) in a bomb calorimeter initially at 25.0 °C and saw a temperature increase of 2.02 °C. She then burned a 0.5308 g sample of naphthalene ($\ce{C10H8}$) (again from an initial temperature of 25 °C) and saw a temperature increase of 2.24 °C. From this data, calculate $\Delta H_c$ for naphthalene (assuming $e_{wire}$ and $e_{other}$ are unimportant.)

Solution

First, the water equivalent: $W = \dfrac{\left[ (0.7842\,g) \left(\frac{1\,mol}{122.124 \, g} \right)\right] (3225.7 \,kJ/mol)}{2.02 \,°C} = 10.254 \, kJ/°C \nonumber$ Then $\Delta U_c$ for the sample: $\Delta U_c = \dfrac{(10.254\, kJ/°C)(2.24\,°C )}{(0.5308 \,g)\left(\frac{1\,mol}{128.174 \, g} \right) } = 5546.4 \, kJ/mol \nonumber$ $\Delta H_c$ is then given by $\Delta H_c = \Delta U_c + RT \Delta n_{gas} \nonumber$ The reaction for the combustion of naphthalene at 25 °C is: $\ce{ C10H8(s) + 12 O2(g) -> 10 CO2(g) + 4 H2O(l)} \nonumber$ with $\Delta n_{gas} = -2$. So $\Delta H_c = 5546.4 \,kJ/mol + \left( \dfrac{8.314}{1000} \,kJ/(mol \, K) \right) (298 \,K) (-2) = 5541\, kJ/mol \nonumber$ The literature value (Balcan, Arzik, & Altunata, 1996) is 5150.09 kJ/mol. So that’s not too far off!
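The bookkeeping in this kind of calculation is easy to script. The sketch below is illustrative (variable names are mine; $e_{wire}$ and $e_{other}$ are neglected, as in the example):

```python
M_BA, M_NAP = 122.124, 128.174   # molar masses, g/mol
dUc_BA = 3225.7                  # kJ/mol, calibration value for benzoic acid

# Water equivalent from the benzoic acid run (0.7842 g, dT = 2.02 °C).
W = (0.7842 / M_BA) * dUc_BA / 2.02          # kJ/°C

# Internal energy of combustion from the naphthalene run (0.5308 g, dT = 2.24 °C).
dUc_nap = W * 2.24 / (0.5308 / M_NAP)        # kJ/mol

# Convert to an enthalpy of combustion with dn_gas = -2.
R = 8.314e-3                                 # kJ mol^-1 K^-1
dHc_nap = dUc_nap + R * 298 * (-2)

print(round(W, 3), round(dHc_nap))
```

The water equivalent comes out near 10.254 kJ/°C and the enthalpy of combustion near 5541 kJ/mol.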
It is often required to know thermodynamic functions (such as enthalpy) at temperatures other than those available from tabulated data. Fortunately, the conversion to other temperatures is not difficult. At constant pressure $dH = C_p \,dT \nonumber$ And so for a temperature change from $T_1$ to $T_2$ $\Delta H = \int_{T_1}^{T_2} C_p\, dT \label{EQ1}$ Equation \ref{EQ1} is often referred to as Kirchhoff's Law. If $C_p$ is independent of temperature, then $\Delta H = C_p \,\Delta T \label{intH}$ If the temperature dependence of the heat capacity is known, it can be incorporated into the integral in Equation \ref{EQ1}. A common empirical model used to fit heat capacities over broad temperature ranges is $C_p(T) = a+ bT + \dfrac{c}{T^2} \label{EQ15}$ After combining Equations \ref{EQ15} and \ref{EQ1}, the enthalpy change for the temperature change can be obtained by a simple integration $\Delta H = \int_{T_1}^{T_2} \left(a+ bT + \dfrac{c}{T^2} \right) dT \label{EQ2}$ Solving the definite integral yields \begin{align} \Delta H &= \left[ aT + \dfrac{b}{2} T^2 - \dfrac{c}{T} \right]_{T_1}^{T_2} \\ &= a(T_2-T_1) + \dfrac{b}{2}(T_2^2-T_1^2) - c \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{ineq} \end{align} This expression can then be used with experimentally determined values of $a$, $b$, and $c$, some of which are shown in the following table.

Table $1$: Empirical Parameters for the temperature dependence of $C_p$

Substance | a (J mol-1 K-1) | b (J mol-1 K-2) | c (J K mol-1)
C(gr) | 16.86 | 4.77 x 10-3 | -8.54 x 105
CO2(g) | 44.22 | 8.79 x 10-3 | -8.62 x 105
H2O(l) | 75.29 | 0 | 0
N2(g) | 28.58 | 3.77 x 10-3 | -5.0 x 104
Pb(s) | 22.13 | 1.172 x 10-2 | 9.6 x 104

Example $1$: Heating Lead

What is the molar enthalpy change for a temperature increase from 273 K to 353 K for Pb(s)?

Solution

The enthalpy change is given by Equation \ref{EQ1} with the temperature dependence of $C_p$ given by Equation \ref{EQ15}, using the parameters in Table $1$.
This results in the integral form (Equation \ref{ineq}): $\Delta H = a(T_2-T_1) + \dfrac{b}{2}(T_2^2-T_1^2) - c \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$ When substituted with the relevant parameters for Pb(s) from Table $1$: \begin{align*} \Delta H = \,& \left(22.13\, \dfrac{J}{mol\,K}\right) ( 353\,K - 273\,K) \\ & + \dfrac{1.172 \times 10^{-2} \frac{J}{mol\,K^2}}{2} \left( (353\,K)^2 - (273\,K)^2 \right) \\ &- 9.6 \times 10^4 \dfrac{J\,K}{mol} \left( \dfrac{1}{353\,K} - \dfrac{1}{273\,K} \right) \\ \Delta H = \, & 1770.4 \, \dfrac{J}{mol}+ 293.5\, \dfrac{J}{mol}+ 79.7 \, \dfrac{J}{mol} \\ = & 2143.6 \,\dfrac{J}{mol} \end{align*}

For chemical reactions, the reaction enthalpy at differing temperatures can be calculated from $\Delta H_{rxn}(T_2) = \Delta H_{rxn}(T_1) + \int_{T_1}^{T_2} \Delta C_p \,dT \nonumber$

Example $2$: Enthalpy of Formation

The enthalpy of formation of NH3(g) is -46.11 kJ/mol at 25 °C. Calculate the enthalpy of formation at 100 °C.

Solution

The formation reaction (per mole of NH3) is $\ce{1/2 N2(g) + 3/2 H2(g) \rightleftharpoons NH3(g)} \nonumber$ with $\Delta H \,(298\, K) = -46.11\, kJ/mol$

Compound | Cp (J mol-1 K-1)
N2(g) | 29.12
H2(g) | 28.82
NH3(g) | 35.06

\begin{align*} \Delta C_p & = 35.06\, \dfrac{J}{mol\,K} - \dfrac{1}{2} \left(29.12\, \dfrac{J}{mol\,K}\right) - \dfrac{3}{2}\left(28.82\, \dfrac{J}{mol\,K}\right) = -22.73 \, \dfrac{J}{mol\,K} \\[4pt] \Delta H (373\,K) & = \Delta H (298\,K) + \Delta C_p\,\Delta T \\ & = -46110\, \dfrac{J}{mol} + \left(-22.73\, \dfrac{J}{mol\,K}\right) (373\,K -298\,K) \\ & = -47.8\, \dfrac{kJ}{mol} \end{align*}
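The closed-form Kirchhoff integral is simple to evaluate numerically. The sketch below uses the Pb(s) parameters (illustrative only; names are mine):

```python
# Kirchhoff's law with C_p(T) = a + bT + c/T^2 for Pb(s).
a, b, c = 22.13, 1.172e-2, 9.6e4   # J/(mol K), J/(mol K^2), J K/mol
T1, T2 = 273.0, 353.0              # K

# Closed-form integral of C_p(T) dT from T1 to T2.
dH = (a * (T2 - T1)
      + (b / 2) * (T2**2 - T1**2)
      - c * (1 / T2 - 1 / T1))     # J/mol

print(round(dH, 1))  # about 2.14 kJ/mol
```

Swapping in the parameters for another substance from the table changes only the first line.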
Reaction enthalpies are important, but difficult to tabulate. However, because enthalpy is a state function, it is possible to use Hess’ Law to simplify the tabulation of reaction enthalpies. Hess’ Law is based on the addition of reactions. By knowing the reaction enthalpy for constituent reactions, the enthalpy of a reaction that can be expressed as the sum of the constituent reactions can be calculated. The key lies in the canceling of reactants and products that occur in the “data” reactions but not in the “target” reaction.

Example $1$:

Find $\Delta H_{rxn}$ for the reaction $2 CO(g) + O_2(g) \rightarrow 2 CO_2(g) \nonumber$ Given $C(gr) + ½ O_2(g) \rightarrow CO(g) \nonumber$ with $\Delta H_1 = -110.53 \,kJ$ $C(gr) + O_2(g) \rightarrow CO_2(g) \nonumber$ with $\Delta H_2 = -393.51\, kJ$

Solution

The target reaction can be generated from the data reactions. ${\color{red} -2 \times} \left[ C(gr) + ½\, O_2(g) \rightarrow CO(g) \right] \nonumber$ which is $2 CO(g) \rightarrow 2 C(gr) + O_2(g) \nonumber$ plus ${ \color{red} 2 \times} \left[ C(gr) + O_2(g) \rightarrow CO_2(g) \right] \nonumber$ which is $2 C(gr) + 2 O_2(g) \rightarrow 2 CO_2(g) \nonumber$ equals $2 CO(g) + O_2(g) \rightarrow 2 CO_2(g) \nonumber$ so ${ \color{red} -2 \times} \Delta H_1 = 221.06\, kJ \nonumber$ ${ \color{red} 2 \times} \Delta H_2 = -787.02 \, kJ \nonumber$ ${ \color{red} -2 \times} \Delta H_1 + { \color{red} 2 \times} \Delta H_2 = -565.96 \,kJ \nonumber$

Standard Enthalpy of Formation

One of the difficulties with many thermodynamic state variables (such as enthalpy) is that while it is possible to measure changes, it is impossible to measure an absolute value of the variable itself. In these cases, it is necessary to define a zero to the scale defining the variable. For enthalpy, the definition of a zero is that the standard enthalpy of formation of a pure element in its standard state is zero. All other enthalpy changes are defined relative to this standard. Thus it is essential to very carefully define a standard state.
Definition: the Standard State The standard state of a substance is the most stable form of that substance at 1 atmosphere pressure and the specified temperature. Using this definition, a convenient reaction for which enthalpies can be measured and tabulated is the standard formation reaction. This is a reaction which forms one mole of the substance of interest in its standard state from elements in their standard states. The enthalpy of a standard formation reaction is the standard enthalpy of formation ($\Delta H_f^o$). Some examples are • $NaCl(s)$: $Na(s) + ½ Cl_2(g) \rightarrow NaCl(s) \nonumber$ with $\Delta H_f^o = -411.2\, kJ/mol$ • $C_3H_8(g)$: $3 C(gr) + 4 H_2(g) \rightarrow C_3H_8(g) \nonumber$ with $\Delta H_f^o = -103.8\, kJ/mol$ It is important to note that the standard state of a substance is temperature dependent. For example, the standard state of water at -10 °C is solid, whereas the standard state at room temperature is liquid. Once these values are tabulated, calculating reaction enthalpies becomes a snap. Consider the heat of combustion ($\Delta H_c$) of methane (at 25 °C) as an example. $CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(l) \nonumber$ The reaction can be expressed as a combination of the following standard formation reactions.
$C(gr) + 2 H_2(g) \rightarrow CH_4(g) \nonumber$ with $\Delta H_f^o = -74.6\, kJ/mol$ $C(gr) + O_2(g) \rightarrow CO_2(g) \nonumber$ with $\Delta H_f^o = -393.5\, kJ/mol$ $H_2(g) + ½ O_2(g) \rightarrow H_2O(l) \nonumber$ with $\Delta H_f^o = -285.8 \,kJ/mol$ The target reaction can be generated from the following combination of reactions ${ \color{red} -1 \times} \left[ C(gr) + 2 H_2(g) \rightarrow CH_4(g)\right] \nonumber$ $CH_4(g) \rightarrow C(gr) + 2 H_2(g) \nonumber$ with $\Delta H_f^o ={ \color{red} -1 \times} \left[ -74.6\, kJ/mol \right]= 74.6\, kJ/mol$ $C(gr) + O_2(g) \rightarrow CO_2(g) \nonumber$ with $\Delta H_f^o = -393.5\, kJ/mol$ ${ \color{red} 2 \times} \left[ H_2(g) + ½ O_2(g) \rightarrow H_2O(l) \right] \nonumber$ $2H_2(g) + O_2(g) \rightarrow 2H_2O(l) \nonumber$ with $\Delta H_f^o = {\color{red} 2 \times} \left[ -285.8 \,kJ/mol \right] = -571.6\, kJ/mol$. $CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(l) \nonumber$ with $\Delta H_c^o = -890.5\, kJ/mol$ Alternately, the reaction enthalpy could be calculated from the following relationship $\Delta H_{rxn} = \sum_{products} \nu \cdot \Delta H_f^o - \sum_{reactants} \nu \cdot \Delta H_f^o \nonumber$ where $\nu$ is the stoichiometric coefficient of a species in the balanced chemical reaction. For the combustion of methane, this calculation is \begin{align} \Delta H_{rxn} & = (1\,mol) \left(\Delta H_f^o(CO_2)\right) + (2\,mol) \left(\Delta H_f^o(H_2O)\right) - (1\,mol) \left(\Delta H_f^o(CH_4)\right) \ & = (1\,mol) (-393.5 \, kJ/mol) + (2\,mol) \left(-285.8 \, kJ/mol \right) - (1\,mol) \left(-74.6 \, kJ/mol \right) \ & = -890.5 \, kJ \end{align} \nonumber A note about units is in order. Note that reaction enthalpies have units of kJ, whereas enthalpies of formation have units of kJ/mol. The reason for the difference is that enthalpies of formation (or for that matter enthalpies of combustion, sublimation, vaporization, fusion, etc.)
refer to specific substances and/or specific processes involving those substances. As such, the total enthalpy change is scaled by the amount of substance used. General reactions, on the other hand, have to be interpreted in a very specific way. When examining a reaction like the combustion of methane $CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(l) \nonumber$ with $\Delta H_{rxn} = -890.5\, kJ$, the correct interpretation is that the reaction of one mole of CH4(g) with two moles of O2(g) to form one mole of CO2(g) and two moles of H2O(l) releases 890.5 kJ at 25 °C. Ionization Reactions Ionized species appear throughout chemistry. The energy changes involved in the formation of ions can be measured and tabulated for several substances. In the case of the formation of positive ions, the enthalpy change to remove a single electron at 0 K is defined as the ionization potential. $M(g) \rightarrow M^+(g) + e^- \nonumber$ with $\Delta H (0 K) \equiv 1^{st} \text{ ionization potential (IP)}$ The removal of subsequent electrons requires energies called the 2nd Ionization potential, 3rd ionization potential, and so on. $M^+(g) \rightarrow M^{2+}(g) + e^- \nonumber$ with $\Delta H(0 K) ≡ 2^{nd} IP$ $M^{2+}(g) \rightarrow M^{3+}(g) + e^- \nonumber$ with $\Delta H(0 K) ≡ 3^{rd} IP$ An atom can have as many ionization potentials as it has electrons, although since very highly charged ions are rare, only the first few are important for most atoms. Similarly, the electron affinity can be defined for the formation of negative ions. In this case, the first electron affinity is defined by $X(g) + e^- \rightarrow X^-(g) \nonumber$ with $-\Delta H(0 K) \equiv 1^{st} \text{ electron affinity (EA)}$ The minus sign is included in the definition in order to make electron affinities mostly positive. Some atoms (such as noble gases) will have negative electron affinities since the formation of a negative ion is very unfavorable for these species.
Just as in the case of ionization potentials, an atom can have several electron affinities. $X^-(g) + e^- \rightarrow X^{2-}(g) \nonumber$ with $-\Delta H(0 K) ≡ 2^{nd} EA$. $X^{2-}(g) + e^- \rightarrow X^{3-}(g) \nonumber$ with $-\Delta H(0 K) ≡ 3^{rd} EA$. Average Bond Enthalpies In the absence of standard formation enthalpies, reaction enthalpies can be estimated using average bond enthalpies. This method is not perfect, but it can be used to get ball-park estimates when more detailed data are not available. A bond dissociation energy $D$ is defined by $XY(g) \rightarrow X(g) + Y(g) \nonumber$ with $\Delta H \equiv D(X-Y)$ In this process, one adds energy to the reaction to break bonds, and extracts energy for the bonds that are formed. $\Delta H_{rxn} = \sum (\text{bonds broken}) - \sum (\text{bonds formed}) \nonumber$ As an example, consider the combustion of gaseous ethanol: $C_2H_5OH(g) + 3 O_2(g) \rightarrow 2 CO_2(g) + 3 H_2O(g) \nonumber$ In this reaction, five C-H bonds, one C-C bond, one C-O bond, one O-H bond, and three O=O bonds must be broken. Also, four C=O bonds and six O-H bonds are formed.
Bond Average Bond Energy (kJ/mol)
C-H 413
C-C 348
C-O 358
O=O 495
C=O 799
O-H 463
The reaction enthalpy is then given by \begin{align} \Delta H_c = \, &5(413 \,kJ/mol) + 1(348\, kJ/mol) + 1(358 \,kJ/mol) + 1(463 \,kJ/mol) \nonumber \ & + 3(495\, kJ/mol) - 4(799 \,kJ/mol) - 6(463\, kJ/mol) \nonumber \ =\,& -1255\, kJ/mol \end{align} \nonumber Because the bond energies are defined for gas-phase reactants and products, this method does not account for the enthalpy change of condensation to form liquids or solids, and so the result may be off systematically due to these differences. Also, since the bond enthalpies are averaged over a large number of molecules containing the particular type of bond, the results may deviate due to the variance in the actual bond enthalpy in the specific molecule under consideration. Typically, reaction enthalpies derived by this method are only reliable to within ± 5-10%.
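The bonds-broken-minus-bonds-formed estimate lends itself to a small calculation. The sketch below (names are ours, not from the text) applies the tabulated average bond energies to the combustion of methane, CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g); because average bond energies refer to gas-phase species, the result is best compared with the combustion enthalpy to gaseous water rather than the -890.5 kJ/mol value computed earlier for liquid water.

```python
# A minimal bonds-broken-minus-bonds-formed estimator using the average bond
# energies tabulated above (kJ/mol). The example bond counts are for
# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(g), with all species in the gas phase.

BOND_ENERGY = {"C-H": 413, "C-C": 348, "C-O": 358,
               "O=O": 495, "C=O": 799, "O-H": 463}

def estimate_dH(broken, formed):
    """Each argument maps a bond type to a count; result in kJ/mol."""
    return (sum(n * BOND_ENERGY[bond] for bond, n in broken.items())
            - sum(n * BOND_ENERGY[bond] for bond, n in formed.items()))

dH = estimate_dH(broken={"C-H": 4, "O=O": 2},
                 formed={"C=O": 2, "O-H": 4})
print(dH)  # -808 kJ/mol
```

The roughly 80 kJ/mol difference from the liquid-water value is about the condensation enthalpy of two moles of water, illustrating the gas-phase caveat stated above.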
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/03%3A_First_Law_of_Thermodynamics/3.06%3A_Reaction_Enthalpies.txt
An important enthalpy change is the Lattice Energy, which is the energy required to take one mole of a crystalline solid to ions in the gas phase. For $\ce{NaCl(s)}$, the lattice energy is defined as the enthalpy of the reaction $\ce{ NaCl(s) \rightarrow Na^{+}(g) + Cl^{-}(g) } \nonumber$ with $\Delta H$ called the lattice energy ($\Delta H_{Lat}$). The Born-Haber Cycle A very handy construct in thermodynamics is that of the thermodynamic cycle. This can be represented graphically to help visualize how all of the pieces of the cycle add together. A very good example of this is the Born-Haber cycle, describing the formation of an ionic solid. Two pathways can be envisioned for the formation. Added together, the two pathways form a cycle. In one pathway, the ionic solid is formed directly from elements in their standard states. $\ce{Na(s) + 1/2 Cl_2 \rightarrow NaCl(s)} \nonumber$ with $\Delta H_f(NaCl)$. The other pathway involves a series of steps that take the elements from neutral species in their standard states to ions in the gas phase. $Na(s) \rightarrow Na(g) \nonumber$ with $\Delta H_{sub}(Na)$ $Na(g) \rightarrow Na^+(g) + e^- \nonumber$ with $1^{st}\, IP(Na)$ $½ Cl_2(g) \rightarrow Cl(g) \nonumber$ with $½ D(Cl-Cl)$ $Cl(g) + e^- \rightarrow Cl^-(g) \nonumber$ with $-1^{st}\, EA(Cl)$ $Na^+(g) + Cl^-(g) \rightarrow NaCl(s) \nonumber$ with $-\Delta H_{Lat}(NaCl)$ It should be clear that when added (after proper manipulation if needed), the second set of reactions yields the first reaction. Because of this, the total enthalpy changes must also add. (Note the minus signs: the electron affinity is defined as $-\Delta H$ for electron attachment, and the lattice energy is defined for the dissociation direction.) $\Delta H_{sub}(Na) + 1^{st} IP(Na) + ½ D(Cl-Cl) - 1^{st} EA(Cl) - \Delta H_{Lat}(NaCl) = \Delta H_f(NaCl) \nonumber$ This can be depicted graphically, the advantage being that arrows can be used to indicate endothermic or exothermic changes. An example of the Born-Haber Cycle for NaCl is shown below.
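Because the legs of the cycle simply add, a Born-Haber calculation is easy to script. The minimal sketch below uses the K/Br values from the potassium bromide exercise that follows (all in kJ/mol); the variable names are ours, and the signs on the electron affinity and lattice terms follow the definitions above.

```python
# Born-Haber sketch for K(s) + 1/2 Br2(l) -> KBr(s). EA and the lattice
# energy enter with minus signs: EA is defined as -dH for electron
# attachment, and the lattice energy is defined for dissociation.

dH_sub = 89.0    # K(s) -> K(g)
dH_vap = 31.0    # Br2(l) -> Br2(g)
D_BrBr = 193.0   # Br2(g) -> 2 Br(g)
IP1_K  = 419.0   # K(g) -> K+(g) + e-
EA1_Br = 194.0   # Br(g) + e- -> Br-(g), as -dH
dH_lat = 672.0   # KBr(s) -> K+(g) + Br-(g)

# Only half a mole of Br2 is vaporized and dissociated per mole of KBr.
dH_f = dH_sub + 0.5 * dH_vap + 0.5 * D_BrBr + IP1_K - EA1_Br - dH_lat
print(dH_f)  # -246.0 kJ/mol
```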
In many applications, all but one leg of the cycle is known, and the job is to determine the magnitude of the missing leg. Exercise $1$: Potassium Bromide Find $\Delta H_f$ for KBr given the following data. $\ce{K(s) \rightarrow K(g)} \nonumber$ with $\Delta H_{sub} = 89\, kJ/mol$ $\ce{Br_2(l) \rightarrow Br_2(g) } \nonumber$ with $\Delta H_{vap} = 31\, kJ/mol$ $\ce{Br_2(g) \rightarrow 2 Br(g)} \nonumber$ with $D(Br-Br) = 193\, kJ/mol$ $\ce{K(g) \rightarrow K^+(g) + e^- } \nonumber$ with $1^{st}\, IP(K) = 419\, kJ/mol$ $\ce{Br(g) + e^- \rightarrow Br^-(g) } \nonumber$ with $1^{st}\, EA(Br) = 194\, kJ/mol$ $\ce{KBr(s) \rightarrow K^+(g) + Br^-(g)} \nonumber$ with $\Delta H_{Lat} = 672\, kJ/mol$ Answer $\Delta H_f = -246 \,kJ/mol$ Note: This cycle required the extra leg of the vaporization of Br2. Many cycles involve ions with greater than unit charge and may require extra ionization steps as well! 3.E: First Law of Thermodynamics (Exercises) Q3.1 In an attempt to measure the heat equivalent of mechanical work (as Joule did in his famous experiment) a student uses an apparatus similar to that shown below: The 1.50 kg weight is lifted 30.0 cm against gravity ($g = 9.8\, m/s^2$). If the specific heat of water is 4.184 J/(g °C), what is the expected temperature increase of the 1.5 kg of water in the canister? Q3.2 1.00 mol of an ideal gas, initially occupying 12.2 L at 298 K, expands isothermally against a constant external pressure of 1.00 atm until the pressure of the gas is equal to the external pressure. Calculate $\Delta p$, $q$, $w$, $\Delta U$, and $\Delta H$ for the expansion. Q3.3 Consider 1.00 mol of an ideal gas expanding isothermally at 298 K from an initial volume of 12.2 L to a final volume of 22.4 L. Calculate $\Delta p$, $q$, $w$, $\Delta U$, and $\Delta H$ for the expansion. Q3.4 Consider 1.00 mol of an ideal gas (CV = 3/2 R) occupying 22.4 L that undergoes an isochoric (constant volume) temperature increase from 298 K to 342 K.
Calculate $\Delta p$, $q$, $w$, $\Delta U$, and $\Delta H$ for the change. Q3.5 Consider 1.00 mol of an ideal gas (Cp = 5/2 R) initially at 1.00 atm that undergoes an isobaric expansion from 12.2 L to 22.4 L. Calculate $\Delta T$, $q$, $w$, $\Delta U$, and $\Delta H$ for the change. Q3.6 Consider 1.00 mol of an ideal gas (CV = 3/2 R) initially at 12.2 L that undergoes an adiabatic expansion to 22.4 L. Calculate $\Delta T$, $q$, $w$, $\Delta U$, and $\Delta H$ for the change. Q3.7 Derive an expression for the work of an isothermal, reversible expansion of a gas that follows the equation of state (in which $a$ is a parameter of the gas) $pV = nRT -\dfrac{an^2}{V}$ from $V_1$ to $V_2$. Q3.8 Use the following data [Huff, Squitieri, and Snyder, J. Am. Chem. Soc., 70, 3380 (1948)] to calculate the standard enthalpy of formation of tungsten carbide, $WC(s)$. Reaction $\Delta H^o$ (kJ) $C(gr) + O_2(g) \rightarrow CO_2(g)$ -393.51 $WC(s) + 5/2 O_2(g) \rightarrow WO_3(s) + CO_2(g)$ -1195.79 $W(s) + 3/2 O_2(g) \rightarrow WO_3(s)$ -837.42 Q3.9 The standard molar enthalpy of combustion ($\Delta H_c$) of propane gas is given by $C_3H_8(g) + 5 O_2(g) \rightarrow 3 CO_2(g) + 4 H_2O(l)$ with $\Delta H_c = -2220 \,kJ/mol$ The standard molar enthalpy of vaporization ($\Delta H_{vap}$) for liquid propane is given by $C_3H_8(l) \rightarrow C_3H_8(g)$ with $\Delta H_{vap} = 15\, kJ/mol$ 1. Calculate the standard enthalpy of combustion of liquid propane. 2. Calculate the standard internal energy change of vaporization ($\Delta U_{vap}$) for liquid propane. 3. Calculate the standard internal energy change of combustion ($\Delta U_c$) for liquid propane. Q3.10 The enthalpy of combustion ($\Delta H_c$) of aluminum borohydride, $Al(BH_4)_3(l)$, was measured to be -4138.4 kJ/mol [Rulon and Mason, J. Am. Chem. Soc., 73, 5491 (1951)].
The combustion reaction for this compound is given by $Al(BH_4)_3(l) + 6 O_2(g) \rightarrow ½ Al_2O_3(s) + 3/2 B_2O_3(s) + 6 H_2O(l)$ Given the following additional data, calculate the enthalpy of formation of $Al(BH_4)_3(g)$. • $Al_2O_3(s)$: $\Delta H_f = -1669.8 \, kJ/mol$ • $B_2O_3(s)$: $\Delta H_f = -1267.8 \, kJ/mol$ • $H_2O(l)$: $\Delta H_f = -285.84 \, kJ/mol$ • $Al(BH_4)_3(l)$: $\Delta H_{vap} = 30.125 \, kJ/mol$ Q3.11 The standard enthalpy of formation ($\Delta H_f^o$) for water vapor is -241.82 kJ/mol at 25 °C. Use the data in the following table to calculate the value at 100 °C. Substance $C_p$ (J mol-1 K-1) H2(g) 28.84 O2(g) 29.37 H2O(g) 33.58 Q3.12 $\Delta C_p = (1.00 + 2.00 \times 10^{-3} T)\, J/K$ and $\Delta H_{298} = -5.00\, kJ$ for a dimerization reaction $2 A \rightarrow A_2$ Find the temperature at which $\Delta H = 0$. Q3.13 From the following data, determine the lattice energy of $CaBr_2$. $Ca(s) \rightarrow Ca(g)$ with $\Delta H_{sub} = 129\, kJ/mol$ $Br_2(l) \rightarrow Br_2(g)$ with $\Delta H_{vap} = 31\, kJ/mol$ $Br_2(g) \rightarrow 2 Br(g)$ with $D(Br-Br) = 193 \, kJ/mol$ $Ca(g) \rightarrow Ca^+(g) + e^-$ with $1^{st} \, IP(Ca) = 589.8 \, kJ/mol$ $Ca^+(g) \rightarrow Ca^{2+}(g) + e^-$ with $2^{nd}\, IP(Ca) = 1145.4 \,kJ/mol$ $Br(g) + e^- \rightarrow Br^-(g)$ with $1^{st}\, EA(Br) = 194 \, kJ/mol$ $Ca(s) + Br_2(l) \rightarrow CaBr_2(s)$ with $\Delta H_f = -675 \, kJ/mol$ Q3.15 Using average bond energies (Table T3), estimate the reaction enthalpy for the reaction $C_2H_4 + HBr \rightarrow C_2H_5Br$ Contributors and Attributions • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay) 3.S: First Law of Thermodynamics (Summary) Learning Objectives After mastering the material covered in this chapter, one will be able to: 1. Define the internal energy of a system as a measure of its capacity to do work on its surroundings. 2.
Define work and heat and relate them to changes in the internal energy of a system. 3. Explain the difference between path dependent variables and path independent variables. 4. Define enthalpy in terms of internal energy, pressure, and volume. 5. Calculate First Law quantities such as \(q\), \(w\), \(\Delta U\) and \(\Delta H\), for an ideal gas undergoing changes in temperature, pressure, and/or volume along isothermal, isobaric, isochoric, or adiabatic pathways. 6. Perform calculations using data collected using calorimetry (at either constant pressure or constant volume). 7. Write a formation reaction (the reaction for which the standard enthalpy of formation is defined) for any compound. 8. Use enthalpies of formation to calculate reaction enthalpies. 9. Estimate reaction enthalpies from average bond dissociation enthalpies. 10. Define and utilize enthalpies for phase changes such as \(\Delta H_{fus}\), \(\Delta H_{sub}\), and \(\Delta H_{vap}\) to calculate the heat energy transferred in the corresponding phase change processes. 11. Define important thermodynamic functions such as ionization energy, electron affinity, bond dissociation energy, and lattice energy. Construct a Born-Haber cycle diagram using these values to describe the formation of an ionic crystalline compound. Vocabulary and Concepts • adiabatic • Bomb calorimetry • bond dissociation energy • Born-Haber cycle • calorimetry • combustion reactions • constant pressure heat capacity • constant volume • constant volume heat capacity • electron affinity • endothermic • enthalpy • enthalpy of combustion • exothermic • First Law of Thermodynamics • heat • heat capacity • Hess’ Law • internal energy • ionization potential • isothermal • maximum work • Reaction enthalpies • reversible • reversibly • specific heat • standard enthalpy of formation • standard formation reaction • state variables • work • work of expansion
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/03%3A_First_Law_of_Thermodynamics/3.07%3A_Lattice_Energy_and_the_Born-Haber_Cycle.txt
• 4.1: Prelude to Putting the First Law to Work Because thermodynamics is kind enough to deal in a number of state variables, the functions that define how those variables change must behave according to some very well determined mathematics. This is the true power of thermodynamics! • 4.2: Total and Exact Differentials Total differentials are used to identify how a change in a property depends on changes in the natural variables of that property. • 4.3: Compressibility and Expansivity A very important property of a substance is how compressible it is. Gases are very compressible, so when subjected to high pressures, their volumes decrease significantly (think Boyle’s Law!) Solids and liquids, however, are not as compressible. However, they are not entirely incompressible! High pressure will lead to a decrease in volume, even if it is only slight. And, of course, some substances are more compressible than others. • 4.4: The Joule Experiment Joule's experiment concluded that dq=0 (and dT=0) when a gas is expanded against a vacuum. And because dV>0 for the gas that underwent the expansion into an open space, the internal pressure must also be zero! • 4.5: The Joule-Thomson Effect Joule and Thomson conducted an experiment in which they pumped gas at a steady rate through a lead pipe that was cinched to create a constriction. The cooling observed as the gas expanded from a high-pressure region to a lower-pressure region was extremely important and led to a common design of modern refrigerators. Not all gases undergo a cooling effect upon expansion. • 4.6: Useful Definitions and Relationships Several useful definitions have been stated that connect partial derivatives to experimental measurements. Together, these relationships and definitions make a powerful set of tools that can be used to derive a number of very useful expressions.
• 4.E: Putting the First Law to Work (Exercises) Exercises for Chapter 4 "Putting the First Law to Work" in Fleming's Physical Chemistry Textmap. • 4.S: Putting the First Law to Work (Summary) Summary for Chapter 4 "Putting the First Law to Work" in Fleming's Physical Chemistry Textmap. 04: Putting the First Law to Work As has been seen in previous chapters, many important thermochemical quantities can be expressed in terms of partial derivatives. Two important examples are the molar heat capacities $C_p$ and $C_V$, which can be expressed as $C_p = \left(\dfrac{\partial H}{\partial T}\right)_p \nonumber$ and $C_V = \left(\dfrac{\partial U}{\partial T}\right)_V \nonumber$ These are properties that can be measured experimentally and tabulated for many substances. These quantities can be used to calculate changes in $H$ or $U$ since they represent the slope of a surface ($H$ or $U$) in the direction of the specified path (constant $p$ or $V$). This allows us to use the following kinds of relationships: $dH = \left(\dfrac{\partial H}{\partial T}\right)_p dT \nonumber$ and $\Delta H = \int \left(\dfrac{\partial H}{\partial T}\right)_p dT \nonumber$ Because thermodynamics is kind enough to deal in a number of state variables, the functions that define how those variables change must behave according to some very well determined mathematics. This is the true power of thermodynamics! 4.02: Total and Exact Differentials The fact that we can define the constant volume heat capacity as $C_V \equiv \left( \dfrac{\partial U}{\partial T} \right)_V \label{compress}$ suggests that the internal energy depends very intimately on two variables: volume and temperature. In fact, we will see that for a single component system, all other state variables are determined once two state variables are defined. In the case of internal energy, we might write $U=f(V,T)$ or $U(V,T)$. This suggests that the way to change $U$ is to change either $V$ or $T$ (or both!)
And if there is a mathematical function that relates the internal energy to these two variables, it should be easy to see how it changes when either (or both!) are changed. This can be written as a total differential: $dU = \left( \dfrac{\partial U}{\partial V} \right)_T dV + \left( \dfrac{\partial U}{\partial T} \right)_V dT \label{total}$ Even without knowing the actual mathematical function relating the variables to the property, we can imagine how to calculate changes in the property from this expression. $\Delta U = \int _{V_1}^{V_2} \left( \dfrac{\partial U}{\partial V} \right)_T dV + \int _{T_1}^{T_2} \left( \dfrac{\partial U}{\partial T} \right)_V dT \nonumber$ In words, this implies that we can think of a change in $U$ occurring due to an isothermal change followed by an isochoric change. And all we need to know is the slope of the surface in each pathway direction. There are a couple of very important experiments people have done to explore the measurement of those kinds of slopes. Understanding them, it turns out, depends on two very important physical properties of substances. Exact Differentials We have seen that the total differential of $U(V, T)$ can be expressed as Equation \ref{total}. In general, if a differential can be expressed as $df(x,y) = P\,dx + Q\,dy \nonumber$ the differential will be an exact differential if it follows the Euler relation $\left( \dfrac{\partial P}{\partial y} \right)_x = \left( \dfrac{\partial Q}{\partial x} \right)_y \label{euler}$ In order to illustrate this concept, consider $p(V, T)$ using the ideal gas law. $p= \dfrac{RT}{V} \nonumber$ The total differential of $p$ can be written $dp = \left( - \dfrac{RT}{V^2} \right) dV + \left( \dfrac{R}{V} \right) dT \label{Eq10}$ Example $1$: Euler Relation Does Equation \ref{Eq10} follow the Euler relation (Equation \ref{euler})? Solution Let’s confirm!
\begin{align*} \left[ \dfrac{\partial}{\partial T} \left( - \dfrac{RT}{V^2} \right) \right]_V &\stackrel{?}{=} \left[ \dfrac{\partial}{\partial V} \left( \dfrac{R}{V} \right) \right]_T \[4pt] \left( - \dfrac{R}{V^2} \right) &\stackrel{\checkmark }{=} \left( - \dfrac{R}{V^2} \right) \end{align*} \nonumber $dp$ is, in fact, an exact differential. The differentials of all of the thermodynamic functions that are state functions will be exact. Heat and work are not exact differentials, and $dw$ and $dq$ are called inexact differentials instead.
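The Euler relation for the ideal-gas $dp$ above can also be checked numerically with central finite differences, which is handy when the symbolic derivatives are messy. A sketch, with our own function names:

```python
# Numerical check of the Euler relation for dp = P dV + Q dT of an ideal gas:
# the T-derivative of P (the dV coefficient) must equal the V-derivative of
# Q (the dT coefficient).

R = 8.314  # J mol^-1 K^-1

def P(V, T):
    return -R * T / V**2   # coefficient of dV in dp

def Q(V, T):
    return R / V           # coefficient of dT in dp

def dPdT(V, T, h=1e-3):
    return (P(V, T + h) - P(V, T - h)) / (2 * h)

def dQdV(V, T, h=1e-7):
    return (Q(V + h, T) - Q(V - h, T)) / (2 * h)

V, T = 0.0224, 298.0  # m^3/mol and K, an arbitrary test point
lhs, rhs = dPdT(V, T), dQdV(V, T)
print(abs(lhs - rhs) / abs(rhs) < 1e-4)  # True: dp is exact
```

An inexact differential such as $dw$ would fail this symmetry test, which is exactly what distinguishes state functions from path functions.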
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/04%3A_Putting_the_First_Law_to_Work/4.01%3A_Prelude_to_Putting_the_First_Law_to_Work.txt
Isothermal Compressibility ($\kappa_T$) A very important property of a substance is how compressible it is. Gases are very compressible, so when subjected to high pressures, their volumes decrease significantly (think Boyle’s Law!) Solids and liquids, however, are not as compressible. However, they are not entirely incompressible! High pressure will lead to a decrease in volume, even if it is only slight. And, of course, some substances are more compressible than others. To quantify just how compressible substances are, it is necessary to define the property. The isothermal compressibility is defined by the fractional differential change in volume due to a change in pressure. $\kappa_T \equiv - \dfrac{1}{V} \left( \dfrac{\partial V}{\partial p} \right)_T \label{compress}$ The negative sign is important in order to keep the value of $\kappa_T$ positive, since an increase in pressure will lead to a decrease in volume. The $1/V$ term is needed to make the property intensive so that it can be tabulated in a useful manner. Isobaric Thermal Expansivity ($\alpha$) Another very important property of a substance is how its volume will respond to changes in temperature. Again, gases respond profoundly to changes in temperature (think Charles’ Law!) whereas solids and liquids will have more modest (but not negligible) responses to changes in temperature. (For example, if mercury or alcohol didn’t expand with increasing temperature, we wouldn’t be able to use those substances in thermometers.) The definition of the isobaric thermal expansivity (sometimes called the expansion coefficient) is $\alpha \equiv \dfrac{1}{V} \left( \dfrac{\partial V}{\partial T} \right)_p \label{expand}$ As was the case with the isothermal compressibility, the $1/V$ term is needed to make the property intensive, and thus able to be tabulated in a useful fashion. In the case of expansion, volume tends to increase with increasing temperature, so the partial derivative is positive.
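Both definitions can be checked numerically for an ideal gas, for which $V(T,p) = RT/p$ gives $\kappa_T = 1/p$ and $\alpha = 1/T$ analytically. A small sketch (names are ours):

```python
# Numerical sketch of the two definitions for an ideal gas, where
# V(T, p) = RT/p gives kappa_T = 1/p and alpha = 1/T analytically.

R = 8.314                 # J mol^-1 K^-1
T, p = 298.0, 101325.0    # K, Pa

def V(T, p):
    return R * T / p      # molar volume of an ideal gas, m^3/mol

h = 1e-3  # small step for the central differences
kappa_T = -(V(T, p + h) - V(T, p - h)) / (2 * h) / V(T, p)
alpha   =  (V(T + h, p) - V(T - h, p)) / (2 * h) / V(T, p)

print(abs(kappa_T * p - 1) < 1e-6)  # True: kappa_T = 1/p
print(abs(alpha * T - 1) < 1e-8)    # True: alpha = 1/T
```

For a real substance the same finite-difference recipe applies to measured $V(T,p)$ data, which is essentially how tabulated values of $\kappa_T$ and $\alpha$ are produced.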
Deriving an Expression for a Partial Derivative (Type I): The reciprocal rule Consider a system that is described by three variables, and for which one can write a mathematical constraint on the variables $F(x, y, z) = 0 \nonumber$ Under these circumstances, one can specify the state of the system by varying only two parameters independently, because the third parameter will then have a fixed value. As such, one could define two functions: $z(x, y)$ and $y(x,z)$. This allows one to write the total differentials for $dz$ and $dy$ as follows $dz = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy \label{eq5}$ and $dy= \left( \dfrac{\partial y}{\partial x} \right)_z dx + \left( \dfrac{\partial y}{\partial z} \right)_x dz \label{eq6}$ Substituting the Equation \ref{eq6} expression into Equation \ref{eq5}: \begin{align} dz &= \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left[ \left( \dfrac{\partial y}{\partial x} \right)_z dx + \left( \dfrac{\partial y}{\partial z} \right)_x dz \right] \[4pt] &= \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_z dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial z} \right)_x dz \label{eq7} \end{align} If the system undergoes a change following a pathway where $x$ is held constant ($dx = 0$), this expression simplifies to $dz = \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial z} \right)_x dz \nonumber$ And so for changes for which $dz \neq 0$, $\left( \dfrac{\partial z}{\partial y} \right)_x = \dfrac{1}{\left( \dfrac{\partial y}{\partial z} \right)_x } \nonumber$ This reciprocal rule is very convenient in the manipulation of partial derivatives. But it can also be derived in a straightforward, albeit less rigorous, manner.
Begin by writing the total differential for $z(x,y)$ (Equation \ref{eq5}): $dz = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy \nonumber$ Now, divide both sides by $dz$ and constrain to constant $x$. $\left.\dfrac{dz}{dz} \right\rvert_{x}= \left( \dfrac{\partial z}{\partial x} \right)_y \left.\dfrac{dx}{dz} \right\rvert_{x} + \left( \dfrac{\partial z}{\partial y} \right)_x \left.\dfrac{dy}{dz} \right\rvert_{x} \label{eq10}$ Noting that $\left.\dfrac{dz}{dz} \right\rvert_{x} =1 \nonumber$ $\left.\dfrac{dx}{dz} \right\rvert_{x} = 0 \nonumber$ and $\left.\dfrac{dy}{dz} \right\rvert_{x} = \left( \dfrac{\partial y}{\partial z} \right)_{x} \nonumber$ Equation \ref{eq10} becomes $1= \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial z} \right)_x \nonumber$ or $\left( \dfrac{\partial z}{\partial y} \right)_x = \dfrac{1}{\left( \dfrac{\partial y}{\partial z} \right)_x} \nonumber$ This “formal” method of partial derivative manipulation is convenient and useful, although it is not mathematically rigorous. However, it does work for the kind of partial derivatives encountered in thermodynamics because the variables are state variables and the differentials are exact.
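The reciprocal rule is easy to verify numerically. The sketch below (function names are ours) checks that $(\partial p/\partial V)_T$ and $(\partial V/\partial p)_T$ are reciprocals for an ideal gas at fixed temperature:

```python
# Numerical check of the reciprocal rule for an ideal gas at fixed T:
# the slope (dp/dV)_T should be the reciprocal of (dV/dp)_T.

R = 8.314
T = 298.0
V = 0.0244            # m^3/mol, an arbitrary test point

def p_of_V(V):
    return R * T / V  # p(V) at fixed T

def V_of_p(p):
    return R * T / p  # V(p) at fixed T

p = p_of_V(V)
hV, hp = 1e-8, 1e-2   # step sizes scaled to each variable
dpdV = (p_of_V(V + hV) - p_of_V(V - hV)) / (2 * hV)
dVdp = (V_of_p(p + hp) - V_of_p(p - hp)) / (2 * hp)

print(abs(dpdV * dVdp - 1) < 1e-4)  # True: the slopes are reciprocals
```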
Deriving an Expression for a Partial Derivative (Type II): The Cyclic Permutation Rule This alternative derivation follows the initial steps in the derivation above to Equation \ref{eq7}: $dz = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_z dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial z} \right)_x dz \nonumber$ If the system undergoes a change following a pathway where $z$ is held constant ($dz = 0$), this expression simplifies to $0 = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_z dx \nonumber$ And so for changes in which $dx \neq 0$ $\left( \dfrac{\partial z}{\partial x} \right)_y = - \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_z \nonumber$ This cyclic permutation rule is very convenient in the manipulation of partial derivatives. But it can also be derived in a straightforward, albeit less rigorous, manner. As with the derivation above, we begin by writing the total differential of $z(x,y)$ $dz = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy \nonumber$ Now, divide both sides by $dx$ and constrain to constant $z$.
$\left.\dfrac{dz}{dx} \right\rvert_{z}= \left( \dfrac{\partial z}{\partial x} \right)_y \left.\dfrac{dx}{dx} \right\rvert_{z} + \left( \dfrac{\partial z}{\partial y} \right)_x \left.\dfrac{dy}{dx} \right\rvert_{z} \label{eq21}$ Note that $\left.\dfrac{dz}{dx} \right\rvert_{z} =0 \nonumber$ $\left.\dfrac{dx}{dx} \right\rvert_{z} =1 \nonumber$ and $\left.\dfrac{dy}{dx} \right\rvert_{z} = \left( \dfrac{\partial y}{\partial x} \right)_{z} \nonumber$ Equation \ref{eq21} becomes $0 = \left( \dfrac{\partial z}{\partial x} \right)_y + \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_{z} \nonumber$ which is easily rearranged to $\left( \dfrac{\partial z}{\partial x} \right)_y = - \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_{z} \nonumber$ This type of transformation is very convenient, and will be used often in the manipulation of partial derivatives in thermodynamics. Example $1$: Expanding Thermodynamic Functions Derive an expression for $\dfrac{\alpha}{\kappa_T} \label{e1}$ in terms of derivatives of thermodynamic functions using the definitions in Equations \ref{compress} and \ref{expand}. Solution Substituting Equations \ref{compress} and \ref{expand} into Equation \ref{e1} $\dfrac{\alpha}{\kappa_T}= \dfrac{\dfrac{1}{V} \left( \dfrac{\partial V}{\partial T} \right)_p}{- \dfrac{1}{V} \left( \dfrac{\partial V}{\partial p} \right)_T} \nonumber$ Simplifying (canceling the $1/V$ terms and using transformation Type I to invert the partial derivative in the denominator) yields $\dfrac{\alpha}{\kappa_T} = - \left( \dfrac{\partial V}{\partial T} \right)_p \left( \dfrac{\partial p}{\partial V} \right)_T \nonumber$ Applying Transformation Type II gives the final result: $\dfrac{\alpha}{\kappa_T} = \left( \dfrac{\partial p}{\partial T} \right)_V \nonumber$
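The result of Example 1 can be confirmed for an ideal gas, where $\alpha = 1/T$, $\kappa_T = 1/p$, and $(\partial p/\partial T)_V = R/V_m$, so the identity holds in closed form. A minimal numeric sketch (names are ours):

```python
# The identity alpha/kappa_T = (dp/dT)_V checked for an ideal gas:
# alpha = 1/T, kappa_T = 1/p, so alpha/kappa_T = p/T = R/Vm.

R = 8.314        # J mol^-1 K^-1
T = 298.0        # K
Vm = 0.0244      # m^3/mol, roughly the molar volume near 1 atm and 298 K

p = R * T / Vm
alpha   = 1.0 / T   # (1/V)(dV/dT)_p for pV = RT
kappa_T = 1.0 / p   # -(1/V)(dV/dp)_T for pV = RT
dpdT_V  = R / Vm    # (dp/dT)_V for pV = RT

print(abs(alpha / kappa_T - dpdT_V) < 1e-9)  # True
```

The quotient $\alpha/\kappa_T$ is useful experimentally precisely because it converts two easily measured coefficients into the hard-to-measure constant-volume slope $(\partial p/\partial T)_V$.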
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/04%3A_Putting_the_First_Law_to_Work/4.03%3A_Compressibility_and_Expansivity.txt
Going back to the expression for changes in internal energy that stems from assuming that $U$ is a function of $V$ and $T$ (or $U(V, T)$ for short) $dU = \left( \dfrac{\partial U}{\partial V} \right)_TdV+ \left( \dfrac{\partial U}{\partial T} \right)_V dT \nonumber$ one quickly recognizes one of the terms as the constant volume heat capacity, $C_V$. And so the expression can be re-written $dU = \left( \dfrac{\partial U}{\partial V} \right)_T dV + C_V dT \nonumber$ But what about the first term? The partial derivative is a coefficient called the “internal pressure”, and given the symbol $\pi_T$. $\pi_T = \left( \dfrac{\partial U}{\partial V} \right)_T \nonumber$ James Prescott Joule (1818-1889) recognized that $\pi_T$ should have units of pressure (energy/volume = pressure) and designed an experiment to measure it. He immersed two copper spheres, A and B, connected by a stopcock, in a water bath. Sphere A was filled with a sample of gas while sphere B was evacuated. The idea was that when the stopcock was opened, the gas in sphere A would expand ($\Delta V > 0$) against the vacuum in sphere B (doing no work since $p_{ext} = 0$). The change in the internal energy could be expressed $dU = \pi_T dV + C_V dT \nonumber$ But also, from the first law of thermodynamics $dU = dq + dw \nonumber$ Equating the two $\pi_T dV + C_V dT = dq + dw \nonumber$ and since $dw = 0$ $\pi_T dV + C_V dT = dq \nonumber$ Joule concluded that $dq = 0$ (and $dT = 0$ as well) since he did not observe a temperature change in the water bath, which could only have been caused by the metal spheres either absorbing or emitting heat. And because $dV > 0$ for the gas that underwent the expansion into an open space, $\pi_T$ must also be zero! In truth, the gas did undergo a temperature change, but it was too small to be detected within his experimental precision.
Later, we (once we develop the Maxwell Relations) will show that $\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial p}{\partial T} \right)_V -p \label{eq3}$ Application to an Ideal Gas For an ideal gas $p = RT/V$, so it is easy to show that $\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{R}{V} \label{eq4}$ Combining Equations \ref{eq3} and \ref{eq4} gives $\left( \dfrac{\partial U}{\partial V} \right)_T = \dfrac{RT}{V} - p \label{eq5}$ And since $p = RT/V$, Equation \ref{eq5} simplifies to $\left( \dfrac{\partial U}{\partial V} \right)_T = p -p = 0 \nonumber$ So while Joule’s observation was consistent with limiting ideal behavior, his result was really an artifact of his experimental uncertainty masking what actually happened. Application to a van der Waals Gas For a van der Waals gas, $p = \dfrac{RT}{V-b} - \dfrac{a}{V^2} \label{eqV1}$ so $\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{R}{V-b} \label{eqV2}$ and $\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{R}{V-b} - p \label{eqV3}$ Substitution of the expression for $p$ (Equation \ref{eqV1}) into Equation \ref{eqV3} yields $\left( \dfrac{\partial U}{\partial V} \right)_T = \dfrac{a}{V^2} \nonumber$ In general, it can be shown that $\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{\alpha}{\kappa_T} \nonumber$ And so the internal pressure can be expressed entirely in terms of measurable properties $\left( \dfrac{\partial U}{\partial V} \right)_T = T \dfrac{\alpha}{\kappa_T}-p \nonumber$ and need not apply to only gases (real or ideal)!
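This closed-form result is easy to check numerically. The sketch below (an illustration, not from the text) computes $\pi_T = T(\partial p/\partial T)_V - p$ for a van der Waals gas by finite differences and compares it with $a/V^2$; the constants $a$ and $b$ are representative values for Xe, assumed here only for illustration.

```python
# Internal pressure of a van der Waals gas: pi_T = T*(dp/dT)_V - p,
# which should equal a/V^2. Constants a, b are representative Xe values
# (assumed for illustration only).
R = 0.08206          # L atm / (mol K)
a, b = 4.19, 0.0511  # atm L^2 mol^-2 and L mol^-1
T, V = 298.0, 10.0   # K and L/mol

def p_vdw(T, V):
    """van der Waals pressure for one mole."""
    return R * T / (V - b) - a / V**2

# (dp/dT)_V by a central finite difference
dT = 1e-3
dpdT_V = (p_vdw(T + dT, V) - p_vdw(T - dT, V)) / (2 * dT)

pi_T = T * dpdT_V - p_vdw(T, V)
print(pi_T, a / V**2)  # both ~0.0419 atm
```

The finite-difference value matches the analytic $a/V^2$ because $p$ is linear in $T$ for this equation of state.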
In 1852, working with William Thomson (who would later become Lord Kelvin), Joule conducted an experiment in which they pumped gas at a steady rate through a lead pipe that was cinched to create a constriction. On the upstream side of the constriction, the gas was at a higher pressure than on the downstream side. Also, the temperature of the gas was carefully monitored on either side of the constriction. The cooling that they observed as the gas expanded from a high pressure region to a lower pressure region was extremely important and led to a common design of modern refrigerators. Not all gases undergo a cooling effect upon expansion. Some gases, such as hydrogen and helium, will experience a warming effect upon expansion under conditions near room temperature and pressure. The direction of temperature change can be determined by measuring the Joule-Thomson coefficient, $\mu_{JT}$. This coefficient has the definition $\mu_{JT} \equiv \left( \dfrac{\partial T}{\partial p} \right)_H \nonumber$ Schematically, the Joule-Thomson coefficient can be measured by measuring the temperature change a gas undergoes for a given pressure drop (Figure $1$). The apparatus is insulated so that no heat can be transferred in or out, making the expansion isenthalpic. The typical behavior of the Joule-Thomson coefficient can be summarized in Figure $2$. At the combinations of $T$ and $p$ for which $\mu_{JT} > 0$ (inside the shaded region), the sample will cool upon expansion. At those $p$ and $T$ conditions outside of the shaded region, where $\mu_{JT} < 0$, the gas will undergo a temperature increase upon expansion. And along the boundary, a gas will undergo neither a temperature increase nor a decrease upon expansion. For a given pressure, there are typically two temperatures at which $\mu_{JT}$ changes sign. These are the upper and lower inversion temperatures.
Using the tools of mathematics, it is possible to express the Joule-Thomson coefficient in terms of measurable properties. Consider enthalpy as a function of pressure and temperature: $H(p, T)$. This suggests that the total differential $dH$ can be expressed $dH= \left( \dfrac{\partial H}{\partial p} \right)_T dp+ \left( \dfrac{\partial H}{\partial T} \right)_p dT \label{totalH}$ It will be shown later (again, once we develop the Maxwell Relations) that $\left( \dfrac{\partial H}{\partial p} \right)_T = -T \left( \dfrac{\partial V}{\partial T} \right)_p + V \nonumber$ A simple substitution shows $\left( \dfrac{\partial H}{\partial p} \right)_T = - TV \alpha + V = V(1-T\alpha) \nonumber$ So $dH = V(1-T\alpha) dp + C_p dT \nonumber$ For an ideal gas, $\alpha = 1/T$, so $dH = \cancelto{0}{V\left(1-T\dfrac{1}{T}\right) dp} + C_p dT \nonumber$ which causes the first term to vanish. So for a constant enthalpy expansion ($dH = 0$), there can be no change in temperature ($dT = 0$). This means that gases show non-zero values of $\mu_{JT}$ only because they deviate from ideal behavior! Example $1$: Derive an expression for $\mu_{JT}$ in terms of $\alpha$, $C_p$, $V$, and $T$.
Solution Using the total differential for $H(p, T)$ (Equation \ref{totalH}): $dH= \left( \dfrac{\partial H}{\partial p} \right)_T dp+ \left( \dfrac{\partial H}{\partial T} \right)_p dT \nonumber$ Dividing by $dp$ and constraining to constant $H$: $\left.\dfrac{dH}{dp} \right\rvert_{H}= \left( \dfrac{\partial H}{\partial p} \right)_T \left.\dfrac{dp}{dp} \right\rvert_{H} + \left( \dfrac{\partial H}{\partial T} \right)_p \left.\dfrac{dT}{dp} \right\rvert_{H} \nonumber$ Noting that $\left.\dfrac{dH}{dp} \right\rvert_{H} = 0 \nonumber$ $\left.\dfrac{dp}{dp} \right\rvert_{H} = 1 \nonumber$ and $\left.\dfrac{dT}{dp} \right\rvert_{H} = \left(\dfrac{\partial T}{\partial p} \right)_{H} \nonumber$ so $0 = \left( \dfrac{\partial H}{\partial p} \right)_T + \left( \dfrac{\partial H}{\partial T} \right)_p \left(\dfrac{\partial T}{\partial p} \right)_{H} \nonumber$ We can then use the following substitutions: $\left( \dfrac{\partial H}{\partial p} \right)_T = V(1-T \alpha) \nonumber$ $\left( \dfrac{\partial H}{\partial T} \right)_p = C_p \nonumber$ $\left(\dfrac{\partial T}{\partial p} \right)_{H} = \mu_{JT} \nonumber$ To get $0 = V(1-T \alpha) + C_p \mu_{JT} \nonumber$ And solving for $\mu_{JT}$ gives $\mu_{JT} = \dfrac{V}{C_p}(T \alpha -1) \nonumber$
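The final expression can be exercised numerically. In the sketch below (not part of the original text) the molar volume, heat capacity, and expansivity are assumed, roughly nitrogen-like illustrative values; the last lines confirm that an ideal gas, with $\alpha = 1/T$, gives $\mu_{JT} = 0$.

```python
# mu_JT = (V/C_p)*(T*alpha - 1), from the result derived above.
# Vm, Cp, and alpha are assumed, roughly N2-like illustrative values.
T = 298.0          # K
Vm = 24.4e-3       # m^3/mol (near-ideal molar volume at ~1 atm)
Cp = 29.1          # J/(mol K)
alpha = 1.1 / T    # assumed: slightly larger than the ideal-gas value 1/T

mu_JT = (Vm / Cp) * (T * alpha - 1.0)
print(mu_JT)       # K/Pa; positive, so this gas cools on expansion

# An ideal gas (alpha = 1/T) gives mu_JT = 0 (to floating-point precision):
mu_JT_ideal = (Vm / Cp) * (T * (1.0 / T) - 1.0)
print(mu_JT_ideal)
```

The sign of $T\alpha - 1$ is what decides whether the gas cools or warms, which is exactly the inversion-temperature behavior described above.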
In this chapter (and in the previous chapter), several useful definitions have been stated. Toolbox of useful Relationships The following “measurable quantities” have been defined: • Heat Capacities: $C_V \equiv \left( \dfrac{\partial U}{\partial T} \right)_V \nonumber$ and $C_p \equiv \left( \dfrac{\partial H}{\partial T} \right)_p \nonumber$ • Coefficient of Thermal Expansion: $\alpha \equiv \dfrac{1}{V} \left( \dfrac{\partial V}{\partial T} \right)_p \nonumber$ or $\left( \dfrac{\partial V}{\partial T} \right)_p = V \alpha \nonumber$ • Isothermal Compressibility: $\kappa_T \equiv - \dfrac{1}{V} \left( \dfrac{\partial V}{\partial p} \right)_T \nonumber$ or $\left( \dfrac{\partial V}{\partial p} \right)_T = -V \kappa _T \nonumber$ The following relation has been derived: $\dfrac{ \alpha}{\kappa_T} = \left( \dfrac{\partial p}{\partial T} \right)_V \nonumber$ And the following relationships were given without proof (yet!): $\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial p}{\partial T} \right)_V - p \nonumber$ and $\left( \dfrac{\partial H}{\partial p} \right)_T = - T \left( \dfrac{\partial V}{\partial T} \right)_p + V \nonumber$ Together, these relationships and definitions make a powerful set of tools that can be used to derive a number of very useful expressions. Example $1$: Expanding Thermodynamic Function Derive an expression for $\left( \dfrac{\partial H}{\partial V} \right)_T$ in terms of measurable quantities.
Solution: Begin by using the total differential of $H(p, T)$: $dH = \left( \dfrac{\partial H}{\partial p} \right)_T dp + \left( \dfrac{\partial H}{\partial T} \right)_p dT \nonumber$ Divide by $dV$ and constrain to constant $T$ (to generate the partial of interest on the left): $\left.\dfrac{dH}{dV} \right\rvert_{T}= \left( \dfrac{\partial H}{\partial p} \right)_T \left.\dfrac{dp}{dV} \right\rvert_{T} + \cancelto{0}{\left( \dfrac{\partial H}{\partial T} \right)_p \left.\dfrac{dT}{dV} \right\rvert_{T}} \nonumber$ The last term on the right will vanish (since $dT = 0$ at constant $T$). After converting to partial derivatives $\left(\dfrac{\partial H}{\partial V} \right)_{T} = \left( \dfrac{\partial H}{\partial p} \right)_T \left(\dfrac{\partial p}{\partial V} \right)_{T} \label{eq5}$ This result is simply a demonstration of the “chain rule” on partial derivatives! But now we are getting somewhere. We can now substitute for $\left( \dfrac{\partial H}{\partial p} \right)_T$ using our “toolbox of useful relationships”: $\left(\dfrac{\partial H}{\partial V} \right)_{T} = \left[ -T \left(\dfrac{\partial V}{\partial T} \right)_{p} +V \right] \left(\dfrac{\partial p}{\partial V} \right)_{T} \nonumber$ Using the distributive property of multiplication, this expression becomes $\left(\dfrac{\partial H}{\partial V} \right)_{T} = -T \left(\dfrac{\partial V}{\partial T} \right)_{p}\left(\dfrac{\partial p}{\partial V} \right)_{T} + V \left(\dfrac{\partial p}{\partial V} \right)_{T} \label{eq7}$ Using the cyclic permutation rule (Transformation Type II), the first term of Equation \ref{eq7} can be simplified
$\left(\dfrac{\partial H}{\partial V} \right)_{T} = T \left(\dfrac{\partial p}{\partial T} \right)_{V} + V \left(\dfrac{\partial p}{\partial V} \right)_{T} \nonumber$ And now all of the partial derivatives on the right can be expressed in terms of $\alpha$ and $\kappa_T$ (along with $T$ and $V$, which are also “measurable properties”). $\left(\dfrac{\partial H}{\partial V} \right)_{T} = T \dfrac{\alpha}{\kappa_T} + V \dfrac{1}{-V \kappa_T} \nonumber$ or $\left(\dfrac{\partial H}{\partial V} \right)_{T} = \dfrac{1}{\kappa_T} ( T \alpha -1) \nonumber$ Example $2$: Isothermal Compression Calculate $\Delta H$ for the isothermal compression of ethanol which will decrease the molar volume by $0.010\, L/mol$ at 300 K. (For ethanol, $\alpha = 1.1 \times 10^{-3 }K^{-1}$ and $\kappa_T = 7.9 \times 10^{-5} atm^{-1}$). Solution Integrating the total differential of $H$ at constant temperature results in $\Delta H = \left(\dfrac{\partial H}{\partial V} \right)_{T} \Delta V \nonumber$ From Example $1$, we know that $\left(\dfrac{\partial H}{\partial V} \right)_{T} = \dfrac{1}{\kappa_T} ( T \alpha -1) \nonumber$ so $\Delta H = \left [ \dfrac{1}{ 7.9 \times 10^{-5} atm^{-1}} \left( (300 \,K) (1.1 \times 10^{-3 }K^{-1}) -1 \right) \right] ( - 0.010\, L/mol ) \nonumber$ $\Delta H = \left( 84.81 \, \dfrac{\cancel{atm\,L}}{mol}\right) \underbrace{\left(\dfrac{8.314\,J}{0.08206\, \cancel{atm\,L}}\right)}_{\text{conversion factor}} = 8590 \, J/mol \nonumber$ Contributors • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
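The arithmetic in Example 2 can be reproduced in a few lines (a sketch using only the numbers given in the example; the atm·L → J conversion uses the ratio of the two common values of the gas constant):

```python
# Delta H for the isothermal compression of ethanol (Example 2):
# (dH/dV)_T = (T*alpha - 1)/kappa_T, so Delta H = (dH/dV)_T * Delta V.
alpha   = 1.1e-3   # K^-1
kappa_T = 7.9e-5   # atm^-1
T       = 300.0    # K
dV      = -0.010   # L/mol

dH_atmL = (T * alpha - 1.0) / kappa_T * dV  # atm L / mol
dH_J    = dH_atmL * (8.314 / 0.08206)       # 1 atm L is about 101.3 J
print(dH_atmL, dH_J)  # ~84.8 atm L/mol, ~8.6e3 J/mol
```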
Q4.1 Given the relationship $\left( \dfrac{\partial U}{\partial V} \right)_T= T\left( \dfrac{\partial p}{\partial T} \right)_V-p$ show that $\left( \dfrac{\partial U}{\partial V} \right)_T =0$ for an ideal gas. Q4.2 Determine if the following differential is exact, and if so, find the function $z(x, y)$ that satisfies the expression. $dz = 4xy\,dx + 2x^2 dy$ Q4.3 For a van der Waals gas, $\left(\dfrac{\partial U}{\partial V}\right)_T = \left(\dfrac{an^2}{V^2}\right)$ Find an expression in terms of $a$, $n$, $V$, and $R$ for $\left(\dfrac{\partial T}{\partial V}\right)_U$ if $C_V = 3/2 R$. Use the expression to calculate the temperature change for 1.00 mol of Xe ($a$ = 4.19 atm L$^2$ mol$^{-2}$) expanding adiabatically against a vacuum from 10.0 L to 20.0 L. Q4.4 Given the following data, calculate the change in volume for 50.0 cm$^3$ of 1. neon and 2. copper due to a decrease in pressure from 1.00 atm to 0.750 atm at 298 K. Substance $\kappa_T$ (at 1.00 atm and 298 K) Ne 1.00 atm$^{-1}$ Cu $0.735 \times 10^{-6}$ atm$^{-1}$ Q4.5 Consider a gas that follows the equation of state $p =\dfrac{nRT}{V-nb}$ derive an expression for 1. the isobaric thermal expansivity, $\alpha$ 2. the Joule-Thomson coefficient, $\mu_{JT}$, given that $\mu_{JT} = \dfrac{V}{C_p} (T \alpha -1)$ Q4.6 Given $\left(\dfrac{\partial H}{\partial p}\right)_T = -T \left(\dfrac{\partial V}{\partial T}\right)_p +V$ derive an expression for $\left(\dfrac{\partial U}{\partial p}\right)_T$ in terms of measurable properties. Use your result to calculate the change in the internal energy of 18.0 g of water when the pressure is increased from 1.00 atm to 20.0 atm at 298 K. Q4.7 Derive an expression for $\left(\dfrac{\partial U}{\partial T}\right)_p$ Begin with the definition of enthalpy, in order to determine $dH = dU + pdV + Vdp$ Finish by dividing by dT and constraining to constant pressure.
Make substitutions for the measurable quantities, and solve for $\left(\dfrac{\partial U}{\partial T}\right)_p .$ Q4.8 Derive an expression for the difference between $C_p$ and $C_V$ in terms of the internal pressure, $\alpha$, $p$ and $V$. Using the definition for $H$ as a starting point, show that $\left(\dfrac{\partial H}{\partial T}\right)_p = \left(\dfrac{\partial U}{\partial T}\right)_p + p \left(\dfrac{\partial V}{\partial T}\right)_p$ Now, find an expression for $\left(\dfrac{\partial U}{\partial T}\right)_p$ by starting with $U(V,T)$ and writing an expression for the total differential $dU(V,T)$. Divide this expression by $dT$ and constrain to constant $p$. Substitute this into the previous expression and solve for $\left(\dfrac{\partial H}{\partial T}\right)_p - \left(\dfrac{\partial U}{\partial T}\right)_V .$ Q4.9 Evaluate the expression you derived in problem 8 for an ideal gas, assuming that the internal pressure of an ideal gas is zero. Contributors and Attributions • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay) 4.S: Putting the First Law to Work (Summary) Learning Objectives After mastering the material covered in this chapter, one will be able to: 1. Express the total differential of a thermodynamic function in terms of partial differentials involving two independent state variables: $dU = \left( \dfrac{\partial U}{\partial V} \right)_T dV + \left( \dfrac{\partial U}{\partial T} \right)_V dT \label{total}$ 2. Utilize the Euler relation to define an exact differential. 3. Derive and utilize partial differential transformation types I and II: $\left( \dfrac{\partial z}{\partial x} \right)_y = - \left( \dfrac{\partial z}{\partial y} \right)_x \left( \dfrac{\partial y}{\partial x} \right)_{z} \nonumber$ and $\left( \dfrac{\partial z}{\partial y} \right)_x = \dfrac{1}{\left( \dfrac{\partial y}{\partial z} \right)_x} \nonumber$ 4.
Define and describe the meaning of the isobaric thermal expansivity coefficient ($\alpha$) and the isothermal compressibility coefficient ($\kappa_T$). 5. Derive expressions for $\alpha$ and $\kappa_T$ for gases based on an assumed equation of state. 6. Define internal pressure and describe the experiment Joule used to attempt to measure it. 7. Calculate a value for the internal pressure based on $\alpha$ and $\kappa_T$ for a given substance. 8. Derive an expression for the internal pressure of a gas based on an assumed equation of state, given $\left(\dfrac{\partial U}{\partial V} \right)_{T} = T \dfrac{\alpha}{\kappa_T} - p \nonumber$ 9. Demonstrate that the internal pressure of an ideal gas is zero. 10. Define and describe the physical meaning of the Joule-Thomson coefficient. 11. Derive an expression for the Joule-Thomson coefficient in terms of $\alpha$, $C_p$, $V$, and $T$ given $\left(\dfrac{\partial H}{\partial V} \right)_{T} = \dfrac{1}{\kappa_T} ( T \alpha -1) \nonumber$ 12. Demonstrate that the Joule-Thomson coefficient ($\mu_{JT}$) for an ideal gas is zero. 13. Derive expressions for the temperature and pressure dependence of enthalpy and internal energy in terms of measurable properties. Use these expressions to calculate changes in enthalpy and internal energy for specific substances based on the values of those measurable properties when the temperature or pressure is changed. Vocabulary and Concepts • Euler relation • exact differential • internal pressure • isobaric thermal expansivity • isothermal compressibility • Joule-Thomson coefficient • total differential
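As a small demonstration of this toolbox (not part of the original text), the sketch below evaluates the internal pressure and $(\partial H/\partial V)_T$ for liquid water at 298 K; the $\alpha$ and $\kappa_T$ values are representative literature magnitudes, assumed here only for illustration.

```python
# Applying the toolbox relations to liquid water at 298 K and 1 atm.
# alpha and kappa_T are representative magnitudes (assumed values).
T = 298.0          # K
p = 1.0            # atm
alpha = 2.57e-4    # K^-1
kappa_T = 4.59e-5  # atm^-1

# Internal pressure: (dU/dV)_T = T*alpha/kappa_T - p
pi_T = T * alpha / kappa_T - p
print(pi_T)        # ~1700 atm: enormous compared with a gas

# (dH/dV)_T = (T*alpha - 1)/kappa_T
dHdV_T = (T * alpha - 1.0) / kappa_T
print(dHdV_T)      # negative here, since T*alpha < 1 for water
```

This is the sense in which the relations "need not apply to only gases": once $\alpha$ and $\kappa_T$ are measured for a condensed phase, the same expressions apply directly.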
• 5.1: Introduction to the Second Law The second law of thermodynamics, which introduces us to the topic of entropy, is amazing in how it constrains what we can experience and what we can do in the universe. A spontaneous process is one that will occur without external forces pushing it. A process can be spontaneous even if it happens very slowly. Unfortunately, thermodynamics is silent on the topic of how fast processes will occur, but it provides us with a powerful toolbox for predicting which processes will be spontaneous. • 5.2: Heat Engines and the Carnot Cycle To simplify his analysis of the inner workings of an engine, Carnot devised a useful construct for examining what affects engine efficiency. His construct is the heat engine. The idea behind a heat engine is that it will take energy in the form of heat, and transform it into an equivalent amount of work. Unfortunately, such a device is impractical. As it turns out, nature prevents the complete conversion of energy into work with perfect efficiency. This leads to an important statement of the Second Law. • 5.3: Entropy In addition to learning that the efficiency of a Carnot engine depends only on the high and low temperatures, more interesting things can be derived through the exploration of this system. • 5.4: Calculating Entropy Changes Entropy changes are fairly easy to calculate so long as one knows the initial and final states. For example, if the initial and final volume are the same, the entropy can be calculated by assuming a reversible, isochoric pathway and determining an expression for dq/T. That term can then be integrated from the initial condition to the final conditions to determine the entropy change. • 5.5: Comparing the System and the Surroundings It is oftentimes important to calculate both the entropy change of the system as well as that of the surroundings. Depending on the size of the surroundings, they can provide or absorb as much heat as is needed for a process without changing temperature.
As such, it is oftentimes a very good approximation to consider the changes to the surroundings as happening isothermally, even though it may not be the case for the system (which is often smaller). • 5.6: Entropy and Disorder A common interpretation of entropy is that it is somehow a measure of chaos or randomness. There is some utility in that concept. Given that entropy is a measure of the dispersal of energy in a system, the more chaotic a system is, the greater the dispersal of energy will be, and thus the greater the entropy will be. • 5.7: The Third Law of Thermodynamics One important consequence of Boltzmann’s proposal is that a perfectly ordered crystal (i.e. one that has only one energetic arrangement in its lowest energy state) will have an entropy of 0. This makes entropy qualitatively different than other thermodynamic functions. For example, in the case of enthalpy, it is impossible to have a zero on the scale without setting an arbitrary reference (i.e., the enthalpy of formation of elements in their standard states is zero.) But entropy has a natural zero! • 5.8: Adiabatic Compressibility The isothermal compressibility is a very useful quantity, as it can be measured for many different substances and tabulated. Also, as we will see in the next chapter, it can be used to evaluate several different partial derivatives involving thermodynamic variables. • 5.E: The Second Law (Exercises) Exercises for Chapter 5 "The Second Law" in Fleming's Physical Chemistry Textmap. • 5.S: The Second Law (Summary) Summary for Chapter 5 "The Second Law" in Fleming's Physical Chemistry Textmap. 05: The Second Law Rudolph Clausius is kind enough in his 1879 work “The Mechanical Theory of Heat” (Clausius, 1879) to indicate where we have been in our discussion of thermodynamics, as well as where we are going. “The fundamental laws of the universe which correspond to the two fundamental theorems of the mechanical theory of heat: 1. The energy of the universe is constant. 2.
The entropy of the universe tends to a maximum.” ― Rudolf Clausius, The Mechanical Theory Of Heat The second law of thermodynamics, which introduces us to the topic of entropy, is amazing in how it constrains what we can experience and what we can do in the universe. As Sean M. Carroll, a Caltech theoretical physicist, suggests in a 2010 interview with Wired Magazine (Biba, 2010), I’m trying to understand how time works. And that’s a huge question that has lots of different aspects to it. A lot of them go back to Einstein and spacetime and how we measure time using clocks. But the particular aspect of time that I’m interested in is the arrow of time: the fact that the past is different from the future. We remember the past but we don’t remember the future. There are irreversible processes. There are things that happen, like you turn an egg into an omelet, but you can’t turn an omelet into an egg. We, as observers of nature, are time travelers. And the constraints on what we can observe as we move through time stem from the second law of thermodynamics. But more than just understanding what the second law says, we are interested in what sorts of processes are possible. And even more to the point, what sorts of processes are spontaneous. A spontaneous process is one that will occur without external forces pushing it. A process can be spontaneous even if it happens very slowly. Unfortunately, thermodynamics is silent on the topic of how fast processes will occur, but it provides us with a powerful toolbox for predicting which processes will be spontaneous. But in order to make these predictions, a new thermodynamic law and variable is needed since the first law (which defined $\Delta U$ and $\Delta H$) is insufficient.
Consider the following processes: $NaOH(s) \rightarrow Na^+(aq) + OH^-(aq) \nonumber$ with $\Delta H < 0$ $NaHCO_3(s) \rightarrow Na^+(aq) + HCO_3^-(aq) \nonumber$ with $\Delta H > 0$ Both reactions will occur spontaneously, but one is exothermic and the other endothermic. So while it is intuitive to think that an exothermic process will be spontaneous, there is clearly more to the picture than simply the release of energy as heat when it comes to making a process spontaneous. The Carnot cycle becomes a useful thought experiment to explore in helping to answer the question of why a process is spontaneous. Contributors • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
Heat Engines Sadi Carnot (1796 – 1832) (Mendoza, 2016), a French physicist and engineer, was very interested in the improvement of steam engines to perform the tasks needed by modern society. To simplify his analysis of the inner workings of an engine, Carnot devised a useful construct for examining what affects engine efficiency. His construct is the heat engine. The idea behind a heat engine is that it will take energy in the form of heat, and transform it into an equivalent amount of work. Unfortunately, such a device is impractical. As it turns out, nature prevents the complete conversion of energy into work with perfect efficiency. This leads to an important statement of the Second Law of Thermodynamics. It is impossible to convert heat into an equivalent amount of work without some other changes occurring in the universe. As such, a more reasonable picture of the heat engine is one which will allow for losses of energy to the surroundings. The fraction of energy supplied to the engine that can be converted to work defines the efficiency of the engine. The Carnot Cycle The Carnot cycle is a theoretical cyclic heat engine that can be used to examine what is possible for an engine whose job is to convert heat into work. For simplicity, all energy provided to the engine occurs isothermally (and reversibly) at a temperature $T_h$ and all of the energy lost to the surroundings also occurs isothermally and reversibly at temperature $T_l$. In order to ensure this, the system must change between the two temperatures adiabatically. Thus, the cycle consists of four reversible legs, two of which are isothermal, and two of which are adiabatic. 1. Isothermal expansion from $p_1$ and $V_1$ to $p_2$ and $V_2$ at $T_h$. 2. Adiabatic expansion from $p_2$, $V_2$, $T_h$ to $p_3$, $V_3$, $T_l$. 3. Isothermal compression from $p_3$ and $V_3$ to $p_4$ and $V_4$ at $T_l$. 4. Adiabatic compression from $p_4$, $V_4$, $T_l$ to $p_1$, $V_1$, $T_h$.
Plotted on a pressure-volume diagram, the Carnot cycle traces out two isotherms connected by two adiabats. Because this is a closed cycle (the ending state is identical to the initial state) any state function must have a net change of zero as the system moves around the cycle. Furthermore, the efficiency of the engine can be expressed by the net amount of work the engine produces per unit of heat supplied to power the engine. $\epsilon = \dfrac{w_{net}}{q_h} \nonumber$ In order to examine this expression, it is useful to write down expressions for the heat and for the work done by the gas in each of the four legs of the engine cycle:
Leg I: $q_h = nRT_h \ln \left( \dfrac{V_2}{V_1} \right)$; work done by the gas $= nRT_h \ln \left( \dfrac{V_2}{V_1} \right)$
Leg II: $q = 0$; work done by the gas $= C_V(T_h - T_l)$
Leg III: $q_l = nRT_l \ln \left( \dfrac{V_4}{V_3} \right)$; work done by the gas $= nRT_l \ln \left( \dfrac{V_4}{V_3} \right)$
Leg IV: $q = 0$; work done by the gas $= C_V(T_l - T_h)$
The total amount of work done is given by the sum of the work terms. Clearly the terms for the two adiabatic legs cancel (as they have the same magnitude, but opposite signs.) So the total work done is given by $w_{tot} = nRT_h \ln \left( \dfrac{V_2}{V_1} \right) + nRT_l \ln \left( \dfrac{V_4}{V_3} \right) \nonumber$ The efficiency of the engine can be defined as the total work produced per unit of energy provided by the high temperature reservoir. $\epsilon = \dfrac{w_{tot}}{q_h} \nonumber$ or $\epsilon = \dfrac{ nRT_h \ln \left( \dfrac{V_2}{V_1} \right) + nRT_l \ln \left( \dfrac{V_4}{V_3} \right)}{nRT_h \ln \left( \dfrac{V_2}{V_1} \right) } \label{eff1}$ That expression has a lot of variables, but it turns out that it can be simplified dramatically. The choice of pathways connecting the states places a very important restriction on the relative values of $V_1$, $V_2$, $V_3$ and $V_4$. To understand this, we must consider how the work of adiabatic expansion is related to the initial and final temperatures and volumes.
In Chapter 3, it was shown that the initial and final temperatures and volumes of an adiabatic expansion are related by $V_iT_i^{C_V/R} = V_fT_f^{C_V/R} \nonumber$ or $\dfrac{V_i}{V_f} = \left( \dfrac{T_f}{T_i} \right)^{C_V/R} \nonumber$ Using the adiabatic expansion and compression legs (II and IV), this requires that $\dfrac{V_2}{V_3} = \left( \dfrac{T_l}{T_h} \right)^{C_V/R} \nonumber$ and $\dfrac{V_4}{V_1} = \left( \dfrac{T_h}{T_l} \right)^{C_V/R} \nonumber$ Since the second terms are reciprocals of one another, the first terms must be as well! $\dfrac{V_2}{V_3}=\dfrac{V_1}{V_4} \nonumber$ A simple rearrangement shows that $\dfrac{V_2}{V_1}=\dfrac{V_3}{V_4} \nonumber$ This is very convenient! It is what allows for the simplification of the efficiency expression (Equation \ref{eff1}): since $\ln \left( \dfrac{V_4}{V_3} \right) = - \ln \left( \dfrac{V_2}{V_1} \right)$, Equation \ref{eff1} becomes $\epsilon = \dfrac{ \cancel{nR}T_h \cancel{\ln \left( \dfrac{V_2}{V_1} \right)} - \cancel{nR}T_l \cancel{ \ln \left( \dfrac{V_2}{V_1} \right)}}{\cancel{nR}T_h \cancel{\ln \left( \dfrac{V_2}{V_1} \right)} } \nonumber$ Canceling terms in the numerator and denominator yields $\epsilon = \dfrac{T_h-T_l}{T_h} \label{eff2}$ This expression gives the maximum efficiency and depends only on the high and low temperatures! Also, it should be noted that the heat engine can be run backwards. By providing work to the engine, it can be forced to draw heat from the low temperature reservoir and dissipate it into the high temperature reservoir. This is how a refrigerator or heat pump works. The limiting efficiency of such a device can also be calculated using the temperatures of the hot and cold reservoirs. Example $1$: What is the maximum efficiency of a freezer set to keep ice cream at a cool -10 °C, while it is operating in a room that is 25 °C? What is the minimum amount of energy needed to remove 1.0 J from the freezer and dissipate it into the room?
Solution The efficiency is given by Equation \ref{eff2} and converting the temperatures to an absolute scale, the efficiency can be calculated as $\epsilon = \dfrac{298\,K - 263\,K}{298\,K} = 0.1174 \nonumber$ This value can be used in the following manner $energy_{transferred} = \epsilon (work_{required}) \nonumber$ So $1.0 \,J = 0.1174(w) \nonumber$ or $w = 8.5\, J \nonumber$ It is interesting to note that any arbitrary closed cyclical process can be described as a sum of infinitesimally small Carnot cycles, and so all of the conclusions reached for the Carnot cycle apply to any cyclical process. Contributors • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
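The numbers in this example can be checked with a few lines (a sketch; it uses the text's relation $energy_{transferred} = \epsilon \, (work_{required})$):

```python
# Carnot-limit numbers for the freezer example: -10 C inside, 25 C room.
Th, Tl = 298.0, 263.0   # K

eff = (Th - Tl) / Th
print(eff)              # ~0.117

# Work needed to move q = 1.0 J of heat, via q = eff * w (text's relation)
q = 1.0
w = q / eff
print(w)                # ~8.5 J
```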
In addition to learning that the efficiency of a Carnot engine depends only on the high and low temperatures, more interesting things can be derived through the exploration of this system. For example, consider the total heat transferred in the cycle: $q_{tot} = nRT_h \ln \left( \dfrac{V_2}{V_1} \right) + nRT_l \ln \left( \dfrac{V_4}{V_3} \right) \nonumber$ Making the substitution $\dfrac{V_2}{V_1} = \dfrac{V_3}{V_4} \nonumber$ the total heat flow can be seen to be given by $q_{tot} = nRT_h \ln \left( \dfrac{V_3}{V_4} \right) - nRT_l \ln \left( \dfrac{V_3}{V_4} \right) \nonumber$ It is clear that the two terms do not have the same magnitude, unless $T_h = T_l$. This is sufficient to show that $q$ is not a state function, since its net change around a closed cycle is not zero (as it must be for any state function). However, consider what happens when the sum of $q/T$ is considered: \begin{align*} \sum \dfrac{q}{T} &= \dfrac{nR \cancel{T_h} \ln \left( \dfrac{V_3}{V_4} \right)}{\cancel{T_h}} - \dfrac{nR \cancel{T_l} \ln \left( \dfrac{V_3}{V_4} \right)}{ \cancel{T_l}} \\[4pt] &= nR \ln \left( \dfrac{V_3}{V_4} \right) - nR \ln \left( \dfrac{V_3}{V_4} \right) \\[4pt] & = 0 \end{align*} This is the behavior expected for a state function! It leads to the definition of entropy in differential form, $dS \equiv \dfrac{dq_{rev}}{T} \nonumber$ In general, $dq_{rev}$ will be larger than $dq$ (since the reversible pathway defines the maximum heat flow.) So, it is easy to calculate entropy changes, as one needs only to define a reversible pathway that connects the initial and final states, and then integrate $dq/T$ over that pathway. And since $\Delta S$ is defined using $q$ for a reversible pathway, $\Delta S$ is independent of the actual path a system follows to undergo a change. Contributors and Attributions • Patrick E.
Fleming (Department of Chemistry and Biochemistry; California State University, East Bay) 5.04: Calculating Entropy Changes Entropy changes are fairly easy to calculate so long as one knows the initial and final states. For example, if the initial and final volumes are the same, the entropy change can be calculated by assuming a reversible, isochoric pathway and determining an expression for $\frac{dq}{T}$. That term can then be integrated from the initial conditions to the final conditions to determine the entropy change. Isothermal Changes If the initial and final temperatures are the same, the most convenient reversible path to use to calculate the entropy is an isothermal pathway. As an example, consider the isothermal expansion of an ideal gas from $V_1$ to $V_2$. As was derived in Chapter 3, $dq = nRT \dfrac{dV}{V} \nonumber$ So $dq/T$ is given by $\dfrac{dq}{T} = nR\dfrac{dV}{V} \nonumber$ and so $\Delta S = \int \dfrac{dq}{T} = nR \int_{V_1}^{V_2} \dfrac{dV}{V} = nR \ln \left( \dfrac{V_2}{V_1} \right) \label{isothermS}$ Example $1$: Entropy Change for a Gas Expansion Calculate the entropy change for 1.00 mol of an ideal gas expanding isothermally from a volume of 24.4 L to 48.8 L. Solution Recognizing that this is an isothermal process, we can use Equation \ref{isothermS} \begin{align*} \Delta S &= nR \ln \left( \dfrac{V_2}{V_1} \right) \\[4pt] &= (1.00 \, mol) (8.314\, J/(mol \, K)) \ln \left( \dfrac{48.8\,L}{24.4\,L } \right) \\[4pt] &= 5.76 \, J/K \end{align*} \nonumber Isobaric Changes For changes in which the initial and final pressures are the same, the most convenient pathway to use to calculate the entropy change is an isobaric pathway. In this case, it is useful to remember that $dq = nC_pdT \nonumber$ So $\dfrac{dq}{T} = nC_p \dfrac{dT}{T} \nonumber$ Integration from the initial to final temperature is used to calculate the change in entropy.
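Example 1 above can be checked numerically. Here is a minimal sketch in Python, using the volumes as given in the problem statement:

```python
import math

# Entropy change for the isothermal, reversible expansion of an ideal gas:
# dS = n R ln(V2/V1), with the values from the worked example.
n = 1.00              # mol
R = 8.314             # J/(mol K)
V1, V2 = 24.4, 48.8   # L (only the ratio matters, so any consistent units work)

dS = n * R * math.log(V2 / V1)
print(round(dS, 2))   # 5.76 J/K
```

Since the volume ratio is exactly 2, the result is simply $nR \ln 2$.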
If the heat capacity is constant over the temperature range, $\int_{T_1}^{T_2} \dfrac{dq}{T} = nC_p \int_{T_1}^{T_2} \dfrac{dT}{T} = nC_p \ln \left( \dfrac{T_2}{T_1} \right) \nonumber$ If the temperature dependence of the heat capacity is known, it can be incorporated into the integral. For example, if $C_p$ can be expressed as $C_p =a + bT + \dfrac{c}{T^2} \nonumber$ $\Delta S$ takes the form $\int_{T_1}^{T_2} \dfrac{dq}{T} = n \int_{T_1}^{T_2} \dfrac{a + bT + \dfrac{c}{T^2}}{T} dT \nonumber$ which simplifies to $\Delta S = n \int_{T_1}^{T_2} \left( \dfrac{a}{T} + b + \dfrac{c}{T^3} \right) dT \nonumber$ or $\Delta S = n \left[ a \ln \left( \dfrac{T_2}{T_1} \right) + b(T_2-T_1) - \dfrac{c}{2} \left( \dfrac{1}{T_2^2} -\dfrac{1}{T_1^2} \right) \right] \nonumber$ Isochoric Changes Similarly to the case of constant pressure, it is fairly simple to calculate $\Delta S$. Since $dq = nC_VdT \nonumber$ $\dfrac{dq}{T}$ is given by $\dfrac{dq}{T} = nC_V \dfrac{dT}{T} \nonumber$ And so for changes over which $C_V$ is independent of the temperature, $\Delta S$ is given by $\Delta S = nC_V \ln \left( \dfrac{T_2}{T_1} \right) \nonumber$ Adiabatic Changes The easiest pathway for which to calculate entropy changes is an adiabatic pathway. Since $dq = 0$ for an adiabatic change, $dS = 0$ as well. Phase Changes The entropy change for a phase change at constant pressure is given by $\Delta S = \dfrac{q}{T} = \dfrac{\Delta H_{phase}}{T} \label{phase}$ Example $2$: Entropy Change for Melting Ice The enthalpy of fusion for water is 6.01 kJ/mol. Calculate the entropy change for 1.0 mole of ice melting to form liquid at 273 K. Solution This is a phase transition at constant pressure (assumed) requiring Equation \ref{phase}: \begin{align*} \Delta S &= \dfrac{(1\,mol)(6010\, J/mol)}{273\,K} \\[4pt] &= 22 \,J/K \end{align*}
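The closed-form result for a temperature-dependent heat capacity can be checked against brute-force numerical integration. A sketch in Python; the coefficients a, b, and c below are invented for illustration and are not taken from the text:

```python
import math

# Isobaric entropy change with C_p = a + b*T + c/T**2.
# Hypothetical coefficients, chosen only to exercise the formula.
a, b, c = 28.0, 4.0e-3, -0.5e5    # J/(mol K), J/(mol K^2), (J K)/mol
n = 1.00                          # mol
T1, T2 = 300.0, 500.0             # K

# Closed form derived in the text:
dS_exact = n * (a * math.log(T2 / T1)
                + b * (T2 - T1)
                - (c / 2.0) * (1.0 / T2**2 - 1.0 / T1**2))

# Midpoint-rule integration of n*C_p/T dT as an independent check:
N = 100_000
h = (T2 - T1) / N
dS_num = sum(n * (a + b * T + c / T**2) / T * h
             for T in (T1 + (i + 0.5) * h for i in range(N)))

print(dS_exact, dS_num)   # the two values agree closely
```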
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/05%3A_The_Second_Law/5.03%3A_Entropy.txt
It is oftentimes important (for reasons that will be discussed in the next section) to calculate both the entropy change of the system as well as that of the surroundings. Depending on the size of the surroundings, they can provide or absorb as much heat as is needed for a process without changing temperature. As such, it is often a very good approximation to treat the changes to the surroundings as happening isothermally, even though that may not be the case for the system (which is generally smaller.) Example $1$ Consider 18.02 g (1.00 mol) of ice melting at 273 K in a room that is 298 K. Calculate $\Delta S$ for the ice, the surrounding room, and the universe. ($\Delta H_{fus} = 6.01\, kJ/mol$) Solution For the process under constant pressure, $q_{ice} = -q_{room}$: $q = n \Delta H_{fus} = (1.00 \, mol) (6010 \, J/mol) = 6010 \,J \nonumber$ For the ice: $\Delta S_{ice} = \dfrac{q_{ice}}{T_{ice}} = \dfrac{6010\,J}{273\,K} = 22.0\, J/K \nonumber$ For the room: $\Delta S_{room} = \dfrac{q_{room}}{T_{room}} = \dfrac{-6010\,J}{298\,K} = -20.2\, J/K \nonumber$ For the universe: \begin{align*} \Delta S_{univ} &=\Delta S_{ice} + \Delta S_{room} \\[4pt] &= 22.0\, J/K - 20.2\,J/K = 1.8\,J/K \end{align*} Note: $\Delta S_{univ}$ is positive, which is characteristic of a spontaneous change! Example $2$ A 10.0 g piece of metal (C = 0.250 J/g °C) initially at 95 °C is placed in 25.0 g of water initially at 15 °C in an insulated container. Calculate the final temperature of the metal and water once the system has reached thermal equilibrium. Also, calculate the entropy change for the metal, the water, and the entire system. Solution Heat will be transferred from the hot metal to the cold water. Since it has nowhere else to go, the final temperature can be calculated from the expression $q_w = -q_m \nonumber$ where $q_w$ is the heat absorbed by the water, and $q_m$ is the heat lost by the metal.
And since $q = mC\Delta T \nonumber$ it follows that $(25.0\,g)(4.184\,J/(g\, °C)) (T_f - 15 \,°C) = -(10.0\,g)(0.250 \, J/(g\, °C))(T_f-95\,°C) \nonumber$ A bit of algebra determines the final temperature to be: $T_f = 16.9 \, °C \nonumber$ To get the entropy changes, use the expression: $\Delta S= m C_p \ln \left( \dfrac{T_f}{T_i} \right) \nonumber$ So, for the water: \begin{align*} \Delta S_{water} &= (25.0\,g)(4.184\,J/(g\, °C)) \ln \left( \dfrac{289.9\,K}{288\,K} \right) \\[4pt] &= 0.689\, J/K \end{align*} And for the metal: \begin{align*} \Delta S_{metal} &= (10.0\,g)(0.250\,J/(g\, °C)) \ln \left( \dfrac{289.9\,K}{368\,K} \right) \\[4pt] &= - 0.596 \, J/K \end{align*} For the system: \begin{align*}\Delta S_{sys} &= \Delta S_{water} + \Delta S_{metal} \\[4pt] &= 0.689\, J/K - 0.596 \, J/K = 0.093 \, J/K \end{align*} Note: The total entropy change is positive, suggesting that this will be a spontaneous process. This should make sense, since one expects heat to flow from the hot metal to the cool water rather than the other way around. Also, note that the sign of the entropy change is positive for the part of the system that absorbs the heat, and negative for the part that loses it. In summary, $\Delta S$ can be calculated fairly conveniently for a number of pathways. Table $1$: Summary of different ways to calculate $\Delta S_{sys} = \int \dfrac{dq_{rev}}{T}$ depending on the pathway. In each case the surroundings can be treated as isothermal, so $\Delta S_{surr} = -\dfrac{q_{sys}}{T_{surr}}$.
Adiabatic: $\Delta S_{sys} = 0$
Isothermal: $\Delta S_{sys} = nR \ln \left( \dfrac{V_2}{V_1} \right)$*
Isobaric: $\Delta S_{sys} = m C_p \ln \left( \dfrac{T_f}{T_i} \right)$
Isochoric: $\Delta S_{sys} = m C_V \ln \left( \dfrac{T_f}{T_i} \right)$
Phase Change: $\Delta S_{sys} = \dfrac{\Delta H_{phase}}{T}$
*for an ideal gas. And $\Delta S_{univ} = \Delta S_{sys} + \Delta S_{surr}. \nonumber$ This calculation is important as $\Delta S_{univ}$ provides the criterion for spontaneity for which we were searching from the outset.
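The metal-and-water example above is easy to verify numerically. A sketch in Python; small differences in the third decimal place arise because the worked solution rounds $T_f$ to 16.9 °C before taking the logarithms:

```python
import math

# Hot metal (10.0 g, C = 0.250 J/(g °C), 95 °C) dropped into
# cool water (25.0 g, C = 4.184 J/(g °C), 15 °C) in an insulated container.
m_w, C_w, Ti_w = 25.0, 4.184, 15.0
m_m, C_m, Ti_m = 10.0, 0.250, 95.0

# Energy balance m_w*C_w*(Tf - Ti_w) = -m_m*C_m*(Tf - Ti_m), solved for Tf:
Tf = (m_w * C_w * Ti_w + m_m * C_m * Ti_m) / (m_w * C_w + m_m * C_m)
print(round(Tf, 1))   # 16.9 °C

# dS = m*C*ln(Tf/Ti), with temperatures converted to kelvin:
dS_w = m_w * C_w * math.log((Tf + 273.15) / (Ti_w + 273.15))
dS_m = m_m * C_m * math.log((Tf + 273.15) / (Ti_m + 273.15))
print(dS_w, dS_m, dS_w + dS_m)   # ~ +0.68, -0.60, +0.08 J/K
```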
This also suggests a new way to state the second law: The entropy of the universe increases in any spontaneous change. If we think of the direction of spontaneity as the natural direction of change, we can see that entropy and the second law are tied inextricably to the natural direction of the flow of time. Basically, we can expect the entropy of the universe to continue to increase as time flows into the future. We can overcome this natural tendency toward greater entropy by doing work on a system. This is why it requires such great effort, for example, to straighten a messy desk, but little effort for the desk to get messy over time. Clausius Inequality The Second Law can be summed up in a very simple mathematical expression called the Clausius Inequality, $\Delta S_{universe} \ge 0 \nonumber$ which must be true for any spontaneous process. It is not the most convenient criterion for spontaneity, but it will do for now. In the next chapter, we will derive a criterion which is more useful to us as chemists, who would rather focus on the system itself than on both the system and its surroundings. Another statement of the Clausius theorem is $\oint \dfrac{dq}{T} \le 0 \nonumber$ with equality holding only when the system transfers all heat reversibly.
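The melting-ice example from this section gives a concrete numerical illustration of the Clausius inequality. A minimal sketch in Python:

```python
# 1.00 mol of ice melting at 273 K in a 298 K room (values from this section).
n = 1.00                       # mol
dH_fus = 6010.0                # J/mol
T_ice, T_room = 273.0, 298.0   # K

q = n * dH_fus                 # heat absorbed by the ice
dS_ice = q / T_ice             # ~ +22.0 J/K
dS_room = -q / T_room          # ~ -20.2 J/K
dS_univ = dS_ice + dS_room     # > 0, as required for a spontaneous process

print(round(dS_ice, 1), round(dS_room, 1), round(dS_univ, 1))
```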
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/05%3A_The_Second_Law/5.05%3A_Comparing_the_System_and_the_Surroundings.txt
A common interpretation of entropy is that it is somehow a measure of chaos or randomness. There is some utility in that concept. Given that entropy is a measure of the dispersal of energy in a system, the more chaotic a system is, the greater the dispersal of energy will be, and thus the greater the entropy will be. Ludwig Boltzmann (1844 – 1906) (O'Connor & Robertson, 1998) understood this concept well, and used it to derive a statistical approach to calculating entropy. Boltzmann proposed a method for calculating the entropy of a system based on the number of energetically equivalent ways a system can be constructed. Boltzmann proposed an expression, which in its modern form is: $S = k_B \ln(W) \label{Boltz}$ This rather famous equation is etched on Boltzmann’s grave marker in commemoration of his profound contributions to the science of thermodynamics (Figure $1$). Example $1$: Calculate the entropy of a carbon monoxide crystal, containing 1.00 mol of $\ce{CO}$, and assuming that the molecules are randomly oriented in one of two equivalent orientations. Solution For $N$ molecules, each of which can take one of two equivalent orientations, $W = 2^N$, so the Boltzmann formula (Equation \ref{Boltz}) becomes $S = k_B \ln 2^N = N k_B \ln 2 \nonumber$ The calculation is then straightforward. \begin{align*} S &= \left(1.00 \, mol \cdot \dfrac{6.022\times 10^{23}}{1\,mol} \right) (1.38 \times 10^{-23}\, J/K) \ln 2 \\[4pt] &= 5.76\, J/K \end{align*} 5.07: The Third Law of Thermodynamics One important consequence of Boltzmann’s proposal is that a perfectly ordered crystal (i.e. one that has only one energetic arrangement in its lowest energy state) will have an entropy of 0. This makes entropy qualitatively different from other thermodynamic functions. For example, in the case of enthalpy, it is impossible to have a zero on the scale without setting an arbitrary reference (which is that the enthalpy of formation of elements in their standard states is zero.) But entropy has a natural zero! It is the state at which a system has perfect order.
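The residual-entropy calculation in the CO example above reduces to $S = N k_B \ln 2$, which is quick to confirm numerically:

```python
import math

# Residual entropy of 1.00 mol of CO with two equivalent orientations
# per molecule: W = 2^N, so S = k_B ln(2^N) = N k_B ln 2.
N_A = 6.022e23     # molecules/mol
k_B = 1.38e-23     # J/K
n = 1.00           # mol

S = n * N_A * k_B * math.log(2)
print(round(S, 2))   # 5.76 J/K
```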
This also has another important consequence, in that it suggests that there must also be a zero to the temperature scale. These consequences are summed up in the Third Law of Thermodynamics. The entropy of a perfectly ordered crystal at 0 K is zero. This also suggests that absolute molar entropies can be calculated by $S = \int_0^{T} \dfrac{C}{T} dT \nonumber$ where $C$ is the heat capacity. An entropy value determined in this manner is called a Third Law Entropy. Naturally, the heat capacity will have some temperature dependence. It will also change abruptly if the substance undergoes a phase change. Unfortunately, it is exceedingly difficult to measure heat capacities very near 0 K. Fortunately, many substances follow the Debye extrapolation: at very low temperatures, their heat capacities are proportional to $T^3$. Using this assumption, we have a temperature-dependence model that allows us to extrapolate to absolute zero from the heat capacity measured at as low a temperature as can be reached. Example $1$ SiO2 is found to have a molar heat capacity of 0.777 J mol-1 K-1 at 15 K (Yamashita, et al., 2001). Calculate the molar entropy of SiO2 at 15 K. Solution Using the Debye model, the heat capacity is given by $C_p = aT^3 \nonumber$ The value of $a$ can be determined from the measured heat capacity: $a = \dfrac{C_p(15\,K)}{(15\,K)^3} \nonumber$ The entropy is then calculated by $S = \int_0^{15\,K} \dfrac{aT^3}{T} dT = \dfrac{a(15\,K)^3}{3} = \dfrac{C_p(15\,K)}{3} = 0.259\, J\, mol^{-1}\, K^{-1} \nonumber$ Calculating a Third Law Entropy: start at 0 K, and go from there!
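Under the Debye extrapolation, the entropy at the lowest measured temperature is simply one third of the heat capacity there, since $\int_0^T a T'^3 / T' \, dT' = aT^3/3$. A quick check in Python for the SiO2 example:

```python
# Debye extrapolation for SiO2: C_p = a*T^3, with C_p = 0.777 J/(mol K) at 15 K.
Cp15, T = 0.777, 15.0      # J/(mol K), K

a = Cp15 / T**3            # J/(mol K^4)
S = a * T**3 / 3.0         # = integral of a*T'^2 dT' from 0 to T = C_p(T)/3
print(round(S, 3))         # 0.259 J/(mol K)
```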
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/05%3A_The_Second_Law/5.06%3A_Entropy_and_Disorder.txt
In Chapter 4, we learned about the isothermal compressibility, $\kappa_T$, which is defined as $\kappa_T = - \dfrac{1}{V} \left(\dfrac{\partial V}{\partial p} \right)_T \nonumber$ $\kappa_T$ is a very useful quantity, as it can be measured for many different substances and tabulated. Also, as we will see in the next chapter, it can be used to evaluate several different partial derivatives involving thermodynamic variables. In his seminal work, Philosophiae Naturalis Principia Mathematica (Newton, 1723), Isaac Newton (1643 - 1727) calculated the speed of sound through air, assuming that sound was carried by isothermal compression waves. His calculated value of 949 ft/s was about 15% smaller than experimental determinations. He accounted for the difference by pointing to “non-ideal effects”. But it turns out that his error, albeit an understandable one (since sound waves do not appear to change bulk air temperatures), was that the compression waves are adiabatic, rather than isothermal. As such, there are small temperature oscillations that occur due to the adiabatic compression followed by expansion of the gas carrying the sound waves. The oversight was corrected by Pierre-Simon Laplace (1749 – 1827) (O'Connor & Robertson, Pierre-Simon Laplace, 1999). Laplace modeled the compression waves using the adiabatic compressibility, $\kappa_S$, defined by $\kappa_S =- \dfrac{1}{V} \left(\dfrac{\partial V}{\partial p} \right)_S \nonumber$ Since the entropy is defined by $dS = \dfrac{dq_{rev}}{T} \nonumber$ it follows that any adiabatic pathway ($dq = 0$) is also isentropic ($dS = 0$), or proceeds at constant entropy. Adiabatic pathways are also isentropic. A couple of interesting conclusions can be reached by following the derivation of an expression for the speed of sound where the sound waves are modeled as adiabatic compression waves. We can begin by expanding the description of $\kappa_S$ by using Partial Derivative Transformation Type II.
Applying this, the adiabatic compressibility can be expressed $\kappa_S =\dfrac{1}{V} \left(\dfrac{\partial V}{\partial S} \right)_p \left(\dfrac{\partial S}{\partial p} \right)_V \nonumber$ or by using transformation type I $\kappa_S =\dfrac{1}{V} \dfrac{ \left(\dfrac{\partial S}{\partial p }\right)_V}{ \left(\dfrac{\partial S}{\partial V} \right)_p} \nonumber$ Using a simple chain rule, the partial derivatives can be expanded to get something a little easier to evaluate: $\kappa_S =\dfrac{1}{V} \dfrac{ \left(\dfrac{\partial S}{\partial T }\right)_V \left(\dfrac{\partial T}{\partial p }\right)_V }{ \left(\dfrac{\partial S}{\partial T} \right)_p \left(\dfrac{\partial T}{\partial V }\right)_p} \label{eq10}$ The utility here is that $\left(\dfrac{\partial S}{\partial T }\right)_V = \dfrac{C_V}{T} \label{Note1}$ $\left(\dfrac{\partial S}{\partial T }\right)_p = \dfrac{C_p}{T} \label{Note2}$ This means that Equation \ref{eq10} simplifies to $\kappa_S = \dfrac{C_V}{C_p} \left( \dfrac{1}{V} \dfrac{ \left(\dfrac{\partial T}{\partial p }\right)_V }{ \left(\dfrac{\partial T}{\partial V }\right)_p} \right) \nonumber$ Simplifying what is in the parentheses yields $\kappa_S = \dfrac{C_V}{C_p} \left( \dfrac{1}{V} \left(\dfrac{\partial T}{\partial p }\right)_V \left(\dfrac{\partial V}{\partial T }\right)_p \right) \nonumber$ $\kappa_S = \dfrac{C_V}{C_p} \left( - \dfrac{1}{V} \left(\dfrac{\partial V}{\partial p }\right)_T \right) \nonumber$ $\kappa_S = \dfrac{C_V}{C_p} \kappa_T \nonumber$ As will be shown in the next chapter, $C_p$ is always bigger than $C_V$, so $\kappa_S$ is always smaller than $\kappa_T$. But there is more! We can use this methodology to revisit how pressure affects volume along an adiabat.
In order to do this, we would like to evaluate the partial derivative $\left(\dfrac{\partial V}{\partial p }\right)_S \nonumber$ This can be expanded in the same way as above $\left(\dfrac{\partial V}{\partial p }\right)_S = - \dfrac{ \left(\dfrac{\partial V}{\partial S}\right)_p }{ \left(\dfrac{\partial p}{\partial S }\right)_V } \nonumber$ And further expand $\left(\dfrac{\partial V}{\partial p }\right)_S = - \dfrac{ \left(\dfrac{\partial V}{\partial T}\right)_p \left(\dfrac{\partial T}{\partial S}\right)_p}{ \left(\dfrac{\partial p}{\partial T}\right)_V \left(\dfrac{\partial T}{\partial S}\right)_V} \label{eq20}$ And as before, noting that the relationships in Equations \ref{Note1} and \ref{Note2}, Equation \ref{eq20} can be simplified to $\left(\dfrac{\partial V}{\partial p }\right)_S= - \dfrac{C_V}{C_p} \left(\dfrac{\partial V}{\partial T}\right)_p \left(\dfrac{\partial T}{\partial p}\right)_V \nonumber$ $= \dfrac{C_V}{C_p} \left(\dfrac{\partial V}{\partial p}\right)_T \label{eq22}$ Or defining $\gamma = C_p/C_V$, Equation \ref{eq22} can be easily rearranged to $\gamma \left(\dfrac{\partial V}{\partial p}\right)_S = \left(\dfrac{\partial V}{\partial p}\right)_T \nonumber$ The right-hand derivative is easy to evaluate if we assume a specific equation of state. For an ideal gas, $\left(\dfrac{\partial V}{\partial p}\right)_T = - \dfrac{nRT}{p^2} = - \dfrac{V}{p} \nonumber$ Substitution yields $\gamma \left(\dfrac{\partial V}{\partial p}\right)_S = - \dfrac{V}{p} \nonumber$ which is now looking like a form that can be integrated. 
Separation of variables yields $\gamma \dfrac{dV}{V} = -\dfrac{dp}{p} \nonumber$ And integration (assuming that $\gamma$ is independent of volume) yields $\gamma \int_{V_1}^{V_2} \dfrac{dV}{V} = -\int_{p_1}^{p_2} \dfrac{dp}{p} \nonumber$ or $\gamma \ln \left( \dfrac{V_2}{V_1} \right) = -\ln \left( \dfrac{p_2}{p_1} \right) \nonumber$ which is easily manipulated to show that $p_1V_1^{\gamma} = p_2V_2^{\gamma} \nonumber$ or $pV^{\gamma} = \text{constant} \nonumber$ which is what we previously determined for the behavior of an ideal gas along an adiabat. Finally, it should be noted that the correct expression for the speed of sound is given by $v_{sound} = \sqrt{\dfrac{1}{\rho \, \kappa_S}} \nonumber$ where $\rho$ is the density of the medium. For an ideal gas, this expression becomes $v_{sound} = \sqrt{\dfrac{\gamma RT}{M}} \nonumber$ where $M$ is the molar mass of the gas. Isaac Newton’s derivation, based on the idea that sound waves involved isothermal compressions, produces a result which is missing the factor of $\gamma$ under the square root, accounting for the systematic deviation from experiment which he observed.
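The difference between Newton's isothermal result and Laplace's adiabatic one is easy to see numerically. A sketch for dry air near room temperature; $\gamma = 1.4$ and $M = 28.97$ g/mol are standard textbook values, assumed here for illustration:

```python
import math

R = 8.314        # J/(mol K)
T = 298.0        # K
M = 0.02897      # kg/mol, mean molar mass of air
gamma = 1.4      # Cp/Cv for a diatomic ideal gas

v_adiabatic = math.sqrt(gamma * R * T / M)   # Laplace's result
v_isothermal = math.sqrt(R * T / M)          # Newton's result (missing gamma)

print(round(v_adiabatic), round(v_isothermal))   # 346 vs 292 m/s
```

The adiabatic value agrees well with the measured speed of sound in air; the isothermal value falls roughly 15% short, just as Newton found.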
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/05%3A_The_Second_Law/5.08%3A_Adiabatic_Compressibility.txt
Q5.1 What is the minimum amount of work needed to remove 10.0 J of energy from a freezer at -10.0 °C, depositing the energy into a room that is 22.4 °C? Q5.2 Consider the isothermal, reversible expansion of 1.00 mol of a monatomic ideal gas (CV = 3/2 R) from 10.0 L to 25.0 L at 298 K. Calculate $q$, $w$, $\Delta U$, $\Delta H$, and $\Delta S$ for the expansion. Q5.3 Consider the isobaric, reversible expansion of 1.00 mol of a monatomic ideal gas (Cp = 5/2 R) from 10.0 L to 25.0 L at 1.00 atm. Calculate $q$, $w$, $\Delta U$, $\Delta H$, and $\Delta S$ for the expansion. Q5.4 Consider the isochoric, reversible temperature increase of 1.00 mol of a monatomic ideal gas (CV = 3/2 R) occupying 25.0 L from 298 K to 345 K. Calculate $q$, $w$, $\Delta U$, $\Delta H$, and $\Delta S$ for this process. Q5.5 Consider the adiabatic expansion of 1.00 mol of a monatomic ideal gas (CV = 3/2 R) from 10.0 L at 273 K to a final volume of 45.0 L. Calculate $\Delta T$, $q$, $w$, $\Delta U$, $\Delta H$, and $\Delta S$ for the expansion. Q5.6 15.0 g of ice ($\Delta H_{fus} = 6.009\, kJ/mol$) at 0 °C sits in a room that is at 21 °C. The ice melts to form liquid at 0 °C. Calculate the entropy change for the ice, the room, and the universe. Which has the largest magnitude? Q5.7 15.0 g of liquid water (Cp = 75.38 J mol-1 °C-1) at 0 °C sits in a room that is at 21 °C. The liquid warms from 0 °C to 21 °C. Calculate the entropy change for the liquid, the room, and the universe. Which has the largest magnitude? Q5.8 Calculate the entropy change for taking 12.0 g of H2O from the solid phase (Cp = 36.9 J mol-1 K-1) at -12.0 °C to liquid (Cp = 75.2 J mol-1 K-1) at 13.0 °C. The enthalpy of fusion for water is $\Delta H_{fus} = 6.009 \,kJ/mol$. Q5.9 Using Table T1, calculate the standard reaction entropies ($\Delta S^o$) for the following reactions at 298 K. 1. $CH_3CH_2OH(l) + 3 O_2(g) \rightarrow 2 CO_2(g) + 3 H_2O(l)$ 2. $C_{12}H_{22}O_{11}(s) + 12 O_2 \rightarrow 12 CO_2(g) + 11 H_2O(l)$ 3.
$2 POCl_3(l) \rightarrow 2 PCl_3(l) + O_2(g)$ 4. $2 KBr(s) + Cl_2(g) \rightarrow 2 KCl(s) + Br_2(l)$ 5. $SiH_4(g) + 2 Cl_2(g) \rightarrow SiCl_4(l) + 2 H_2(g)$ Q5.10 1.00 mole of an ideal gas is taken through a cyclic process involving three steps: 1. Isothermal expansion from V1 to V2 at T1 2. Isochoric heating from T1 to T2 at V2 3. Adiabatic compression from V2 to V1 For this cycle: (a) Graph the process on a V-T diagram. (b) Find $q$, $w$, $\Delta U$, and $\Delta S$ for each leg. (If you want, you can find $\Delta H$ too!) (c) Using the fact that $\Delta S$ for the entire cycle must be zero (entropy being a state function and all …), determine the relationship between V1 and V2 in terms of CV, T1 and T2. Q5.11 2.00 moles of a monatomic ideal gas (CV = 3/2 R) initially exert a pressure of 1.00 atm at 300.0 K. The gas undergoes the following three steps, all of which are reversible: 1. isothermal compression to a final pressure of 2.00 atm, 2. isobaric temperature increase to a final temperature of 400.0 K, and 3. a return to the initial state along a pathway in which $p = a+bT$, where $a$ and $b$ are constants. Sketch the cycle on a pressure-temperature plot, and calculate $\Delta U$ and $\Delta S$ for each of the legs. Are $\Delta U$ and $\Delta S$ zero for the sum of the three legs? Q5.12 A 10.0 g piece of iron (C = 0.443 J/g °C) initially at 97.6 °C is placed in 50.0 g of water (C = 4.184 J/g °C) initially at 22.3 °C in an insulated container. The system is then allowed to come to thermal equilibrium. Assuming no heat flow to or from the surroundings, calculate 1. the final temperature of the metal and water 2. the change in entropy for the metal 3. the change in entropy for the water 4. the change in entropy for the universe Q5.13 Consider a crystal of $CHFClBr$ as having four energetically equivalent orientations for each molecule. What is the expected residual entropy at 0 K for 2.50 mol of the substance?
Q5.14 A sample of a certain solid is measured to have a constant pressure heat capacity of 0.436 J mol-1 K-1 at 10.0 K. Assuming the Debye extrapolation model $C_p(T) = aT^3$ holds at low temperatures, calculate the molar entropy of the substance at 12.0 K. 5.S: The Second Law (Summary) Learning Objectives After mastering the material presented in this chapter, one will be able to: 1. Describe a Carnot engine and derive a relationship for its efficiency of converting heat into work, in terms of the two temperatures at which the engine operates. 2. Define entropy and be able to calculate entropy changes for systems (and the surroundings) undergoing changes which are definable as following various pathways, including constant temperature, constant pressure, constant volume, and adiabatic pathways. 3. Relate entropy to disorder in a crystal based on the number of equivalent orientations a single formula unit may take within the crystal. 4. State the Third Law of Thermodynamics, and use it to calculate total entropies for substances at a given temperature. 5. Understand how isothermal compressibility differs from adiabatic compressibility and relate that difference to the measurement of the speed of sound waves traveling through a gas medium. Vocabulary and Concepts • adiabatic compressibility • Carnot cycle • Clausius theorem • criterion for spontaneity • Debye Extrapolation • efficiency • entropy • heat engine • isentropic • second law of thermodynamics • speed of sound • spontaneous • spontaneous process • Third Law Entropy • Third Law of Thermodynamics
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/05%3A_The_Second_Law/5.E%3A_The_Second_Law_%28Exercises%29.txt
• 6.1: Free Energy Functions In the previous chapter, we saw that for a spontaneous process, ΔS for the universe > 0. While this is a useful criterion for determining whether or not a process is spontaneous, it is rather cumbersome, as it requires one to calculate not only the entropy change for the system, but also that of the surroundings. It would be much more convenient if there was a single criterion that would do the job and focus only on the system. As it turns out, there is, and it is found by introducing the free energy functions. • 6.2: Combining the First and Second Laws - Maxwell's Relations Modeling how the Gibbs and Helmholtz functions behave with varying temperature, pressure, and volume is fundamentally useful. But in order to do that, a little bit more development is necessary. • 6.3: ΔA, ΔG, and Maximum Work The functions A and G are oftentimes referred to as free energy functions. The reason for this is that they are a measure of the maximum work (in the case of ΔA ) or non p-V work (in the case of ΔG ) that is available from a process. • 6.4: Volume Dependence of Helmholtz Energy The Helmholtz function changes with changing volume at constant temperature. • 6.5: Pressure Dependence of Gibbs Energy The pressure and temperature dependence of G is also easy to describe. Specifically the pressure dependence of G is given by the pressure derivative at constant temperature. • 6.6: Temperature Dependence of A and G The Gibbs-Helmholtz equation can be used to determine how ΔG and ΔA change with changing temperatures. • 6.7: When Two Variables Change at Once So far, we have derived a number of expressions and developed methods for evaluating how thermodynamic variables change as one variable changes while holding the rest constant. But real systems are seldom this accommodating. If the change in a thermodynamic variable (such as G) is needed, contributions from both changes must be taken into account.
We’ve already seen how to express this in terms of a total differential. • 6.8: The Difference between Cp and Cv Constant volume and constant pressure heat capacities are very important in the calculation of many changes. The ratio Cp/CV=γ appears in many expressions as well (such as the relationship between pressure and volume along an adiabatic expansion.) It would be useful to derive an expression for the difference Cp–CV as well. As it turns out, this difference is expressible in terms of measurable physical properties of a substance. • 6.E: Putting the Second Law to Work (Exercises) Exercises for Chapter 6 "Putting the Second Law to Work" in Fleming's Physical Chemistry Textmap. • 6.S: Putting the Second Law to Work (Summary) Summary for Chapter 6 "Putting the Second Law to Work" in Fleming's Physical Chemistry Textmap. 06: Putting the Second Law to Work In the previous chapter, we saw that for a spontaneous process, $ΔS_{universe} > 0$. While this is a useful criterion for determining whether or not a process is spontaneous, it is rather cumbersome, as it requires one to calculate not only the entropy change for the system, but also that of the surroundings. It would be much more convenient if there was a single criterion that would do the job and focus only on the system. As it turns out, there is. Since we know that $\Delta S_{univ} \ge 0 \nonumber$ for any natural process, and $\Delta S_{univ} = \Delta S_{sys} + \Delta S_{surr} \nonumber$ all we need to do is to find an expression for $\Delta S_{sys}$ that can be determined by the changes in the system itself. Fortunately, we have already done that!
Recalling that at constant temperature $\Delta S = \dfrac{q_{rev}}{T} \nonumber$ and at constant pressure $\Delta H = q_p \nonumber$ it follows that at constant temperature and pressure (since $q_{surr} = -q_{sys}$) $\Delta S_{surr} = -\dfrac{\Delta H_{sys}}{T} \nonumber$ Substitution into the above equations yields an expression for the criterion of spontaneity that depends only on variables describing the changes in the system! $\Delta S_{univ} = \Delta S_{sys} -\dfrac{\Delta H_{sys}}{T} \ge 0 \nonumber$ so $\Delta S_{sys}-\dfrac{\Delta H_{sys}}{T} \ge 0 \nonumber$ Multiplying both sides by $-T$ (which flips the inequality) yields $\Delta H - T\Delta S \le 0 \label{chem}$ A similar derivation for constant volume processes results in the expression (at constant volume and temperature) $\Delta U - T\Delta S \le 0 \label{geo}$ Equation \ref{chem} is of great use to chemists, as most of chemistry occurs at constant pressure. For geologists, however, who are interested in processes that occur at very high pressures (say, under the weight of an entire mountain) where expansion is not a possibility, the constant volume expression of Equation \ref{geo} may be of greater interest. All of the above arguments can be made for systems in which the temperature is not constant by considering infinitesimal changes. The resulting expressions are $dH -TdS \le 0 \label{chem1}$ and $dU -TdS \le 0 \label{chem2}$ The Gibbs and Helmholtz Functions Equation \ref{chem1} suggests a very convenient thermodynamic function to help keep track of both the effects of entropy and enthalpy changes. This function, the Gibbs function (or Gibbs Free Energy) is defined by $G \equiv H -TS \nonumber$ A change in the Gibbs function can be expressed $\Delta G = \Delta H -\Delta (TS) \nonumber$ Or at constant temperature $\Delta G = \Delta H -T \Delta S \nonumber$ And the criterion for a process to be spontaneous is that $\Delta G < 0$.
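The interplay between $\Delta H$ and $T\Delta S$ can be made concrete with the melting of ice, for which both $\Delta H$ and $\Delta S$ are positive. A sketch in Python, using the fusion values from the previous chapter:

```python
# dG = dH - T*dS for ice -> water. At the melting point dG = 0,
# so the entropy of fusion is dS = dH/T_m.
dH = 6010.0        # J/mol, enthalpy of fusion of ice
T_m = 273.0        # K, normal melting point
dS = dH / T_m      # ~22.0 J/(mol K)

for T in (263.0, 283.0):   # below and above the melting point
    dG = dH - T * dS
    print(T, round(dG, 1), "spontaneous" if dG < 0 else "not spontaneous")
```

Below 273 K the (unfavorable) enthalpy term wins and melting is not spontaneous; above 273 K the entropy term dominates and $\Delta G$ turns negative.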
As such, it should be clear that spontaneity is not merely a function of the enthalpy change (although exothermic processes tend to be spontaneous) but also a function of the entropy change, weighted by the temperature. Going back to an earlier example, $NaOH(s) \rightarrow Na^+ (aq) + OH^- (aq) \nonumber$ with $\Delta H < 0$, and $NaHCO_3(s) \rightarrow Na^+ (aq) + HCO_3^- (aq) \nonumber$ with $\Delta H > 0$. It is easy to see why both processes are spontaneous. In the first case, the process is exothermic (favorable) and proceeds with an increase in entropy (also favorable) due to the formation of fragments in the liquid phase (more chaotic) from a very ordered solid (more ordered). The second reaction is endothermic (unfavorable) but proceeds with an increase in entropy (favorable). So, as long as the temperature is high enough, the entropy term will overwhelm the enthalpy term and cause the process to be spontaneous. The conditions for spontaneous processes at constant temperature and pressure are summarized in Table 6.1.1.

Table 6.1.1: Spontaneity Conditions for a Process under Constant Temperature and Pressure

$\Delta H$ | $\Delta S$ | Spontaneous?
> 0 | > 0 | At high T
> 0 | < 0 | At no T
< 0 | > 0 | At all T
< 0 | < 0 | At low T

Similarly to the Gibbs function, the Helmholtz function is defined by $A \equiv U -TS \nonumber$ and provides another important criterion for spontaneous processes at constant volume and temperature. At constant temperature, the change in the Helmholtz function can be expressed by $\Delta A \equiv \Delta U -T\Delta S \nonumber$ Based on similar arguments used for the Gibbs function, the Helmholtz function also can be used to predict which processes will be spontaneous at constant volume and temperature according to Table 6.1.2.

Table 6.1.2: Spontaneity Conditions for a Process under Constant Temperature and Volume

$\Delta U$ | $\Delta S$ | Spontaneous?
> 0 | > 0 | At high T
> 0 | < 0 | At no T
< 0 | > 0 | At all T
< 0 | < 0 | At low T

Calculating $\Delta G$ for Reactions

Much like in the case of enthalpy (and unlike entropy), free energy functions do not have an unambiguous zero on the energy scale. So, just as in the case of enthalpies of formation, by convention, the standard free energy of formation ($\Delta G_f^o$) for elements in their standard states is defined as zero. This allows for two important things to happen. First, $\Delta G_f^o$ can be measured and tabulated for any substance (in principle, at least). $\Delta G_f^o$ is determined to be $\Delta G_{rxn}^o$ for the reaction that forms one mole of a compound from elements in their standard states (similarly to how $\Delta H_f^o$ is defined). Secondly, tabulated $\Delta G_f^o$ values can be used to calculate standard reaction free energies ($\Delta G_{rxn}^o$) in much the same way as $\Delta H_f^o$ is used for reaction enthalpies.

Example $1$:

Given the following data at 298 K, calculate $\Delta G^o$ at 298 K for the following reaction: $C_2H_4(g) + H_2(g) \rightarrow C_2H_6(g) \nonumber$

Substance | $\Delta G_f^o$ (kJ/mol)
C2H4(g) | 68.4
C2H6(g) | -32.0

Solution

The $\Delta G_f^o$ values can be used to calculate $\Delta G^o$ for the reaction in exactly the same method as $\Delta H_f^o$ can be used to calculate a reaction enthalpy. $\Delta G^o = (1 \,mol)(-32.0\, kJ/mol) - (1\, mol)(68.4\, kJ/mol) \nonumber$ $\Delta G^o= -100.4\, kJ \nonumber$ Note: $H_2(g)$ is not included in the calculation since $\Delta G_f^o$ for $H_2(g)$ is 0, as it is an element in its standard state.
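As a quick numeric check of the worked example, the same bookkeeping (products minus reactants, weighted by stoichiometric coefficients) can be done in a few lines of Python:

```python
# Numeric check of the worked example: dG_rxn from standard free
# energies of formation, dG_rxn = sum(products) - sum(reactants).
# Formation values (kJ/mol) are those given in the example table;
# H2(g), an element in its standard state, contributes zero.

dGf = {"C2H4(g)": 68.4, "H2(g)": 0.0, "C2H6(g)": -32.0}

# C2H4(g) + H2(g) -> C2H6(g); products +, reactants -
coeffs = {"C2H4(g)": -1, "H2(g)": -1, "C2H6(g)": +1}

dG_rxn = sum(n * dGf[s] for s, n in coeffs.items())
print(f"dG_rxn = {dG_rxn:.1f} kJ")  # -100.4 kJ: spontaneous as written
```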
Modeling how the Gibbs and Helmholtz functions behave with varying temperature, pressure, and volume is fundamentally useful. But in order to do that, a little bit more development is necessary. To see the power and utility of these functions, it is useful to combine the First and Second Laws into a single mathematical statement. In order to do that, one notes that since $dS = \dfrac{dq}{T} \nonumber$ for a reversible change, it follows that $dq= TdS \nonumber$ And since $dw = -pdV \nonumber$ for a reversible expansion in which only p-V work is done, it also follows that (since $dU=dq+dw$): $dU = TdS - pdV \nonumber$ This is an extraordinarily powerful result. This differential for $dU$ can be used to simplify the differentials for $H$, $A$, and $G$. But even more useful are the constraints it places on the variables T, S, p, and V due to the mathematics of exact differentials! Maxwell Relations The above result suggests that the natural variables of internal energy are $S$ and $V$ (or the function can be considered as $U(S, V)$). So the total differential ($dU$) can be expressed: $dU = \left( \dfrac{\partial U}{\partial S} \right)_V dS + \left( \dfrac{\partial U}{\partial V} \right)_S dV \nonumber$ Also, by inspection (comparing the two expressions for $dU$) it is apparent that: $\left( \dfrac{\partial U}{\partial S} \right)_V = T \label{eq5A}$ and $\left( \dfrac{\partial U}{\partial V} \right)_S = -p \label{eq5B}$ But the value doesn’t stop there!
Since $dU$ is an exact differential, the Euler relation must hold that $\left[ \dfrac{\partial}{\partial V} \left( \dfrac{\partial U}{\partial S} \right)_V \right]_S= \left[ \dfrac{\partial}{\partial S} \left( \dfrac{\partial U}{\partial V} \right)_S \right]_V \nonumber$ By substituting Equations \ref{eq5A} and \ref{eq5B}, we see that $\left[ \dfrac{\partial}{\partial V} \left( T \right) \right]_S= \left[ \dfrac{\partial}{\partial S} \left( -p \right) \right]_V \nonumber$ or $\left( \dfrac{\partial T}{\partial V} \right)_S = - \left( \dfrac{\partial p}{\partial S} \right)_V \nonumber$ This is an example of a Maxwell Relation. These are very powerful relationships that allow one to substitute partial derivatives when one is more convenient (perhaps one that can be expressed entirely in terms of $\alpha$ and/or $\kappa_T$, for example.) A similar result can be derived based on the definition of $H$. $H \equiv U +pV \nonumber$ Differentiating (and using the chain rule on $d(pV)$) yields $dH = dU +pdV + Vdp \nonumber$ Making the substitution using the combined first and second laws ($dU = TdS - pdV$) for a reversible change involving only expansion (p-V) work $dH = TdS - \cancel{pdV} + \cancel{pdV} + Vdp \nonumber$ This expression can be simplified by canceling the $pdV$ terms. $dH = TdS + Vdp \label{eq2A}$ And much as in the case of internal energy, this suggests that the natural variables of $H$ are $S$ and $p$. Or $dH = \left( \dfrac{\partial H}{\partial S} \right)_p dS + \left( \dfrac{\partial H}{\partial p} \right)_S dp \label{eq2B}$ Comparing Equations \ref{eq2A} and \ref{eq2B} shows that $\left( \dfrac{\partial H}{\partial S} \right)_p= T \label{eq6A}$ and $\left( \dfrac{\partial H}{\partial p} \right)_S = V \label{eq6B}$ It is worth noting at this point that both (Equation \ref{eq5A}) $\left( \dfrac{\partial U}{\partial S} \right)_V \nonumber$ and (Equation \ref{eq6A}) $\left( \dfrac{\partial H}{\partial S} \right)_p \nonumber$ are equal to $T$.
So they are equal to each other $\left( \dfrac{\partial U}{\partial S} \right)_V = \left( \dfrac{\partial H}{\partial S} \right)_p \nonumber$ Moreover, the Euler Relation must also hold $\left[ \dfrac{\partial}{\partial p} \left( \dfrac{\partial H}{\partial S} \right)_p \right]_S= \left[ \dfrac{\partial}{\partial S} \left( \dfrac{\partial H}{\partial p} \right)_S \right]_p \nonumber$ so $\left( \dfrac{\partial T}{\partial p} \right)_S = \left( \dfrac{\partial V}{\partial S} \right)_p \nonumber$ This is the Maxwell relation on $H$. Maxwell relations can also be developed based on $A$ and $G$. The results of those derivations are summarized in Table 6.2.1.

Table 6.2.1: Maxwell Relations

Function | Differential | Natural Variables | Maxwell Relation
$U$ | $dU = TdS - pdV$ | $S, \,V$ | $\left( \dfrac{\partial T}{\partial V} \right)_S = - \left( \dfrac{\partial p}{\partial S} \right)_V$
$H$ | $dH = TdS + Vdp$ | $S, \,p$ | $\left( \dfrac{\partial T}{\partial p} \right)_S = \left( \dfrac{\partial V}{\partial S} \right)_p$
$A$ | $dA = -pdV - SdT$ | $V, \,T$ | $\left( \dfrac{\partial p}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T$
$G$ | $dG = Vdp - SdT$ | $p, \,T$ | $\left( \dfrac{\partial V}{\partial T} \right)_p = - \left( \dfrac{\partial S}{\partial p} \right)_T$

The Maxwell relations are extraordinarily useful in deriving the dependence of thermodynamic variables on the state variables of p, T, and V.
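The Maxwell relation on $A$ can be verified numerically for the one case where everything is known in closed form: the ideal gas. The sketch below assumes the molar ideal gas expressions $p = RT/V$ and $S = C_V \ln T + R \ln V + \text{const}$ (the constant drops out of the derivatives) and compares the two sides by central finite differences:

```python
# Finite-difference sanity check of the Maxwell relation on A,
# (dp/dT)_V = (dS/dV)_T, for one mole of a monatomic ideal gas.

import math

R = 8.314            # J/(mol K)
Cv = 1.5 * R         # monatomic ideal gas heat capacity

def p(T, V):         # equation of state, p = RT/V (per mole)
    return R * T / V

def S(T, V):         # molar entropy up to an additive constant
    return Cv * math.log(T) + R * math.log(V)

T0, V0, h = 298.0, 0.010, 1e-7   # K, m^3, finite-difference step

dp_dT = (p(T0 + h, V0) - p(T0 - h, V0)) / (2 * h)
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)

print(dp_dT, dS_dV)  # both approximately R/V0 = 831.4 J/(K m^3)
```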
Example $1$ Show that $\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{\alpha}{\kappa_T} - p \nonumber$ Solution Start with the combined first and second laws: $dU = TdS - pdV \nonumber$ Divide both sides by $dV$ and constrain to constant $T$: $\left.\dfrac{dU}{dV}\right|_{T} = \left.\dfrac{TdS}{dV}\right|_{T} - p \left.\dfrac{dV}{dV} \right|_{T} \nonumber$ Noting that $\left.\dfrac{dU}{dV}\right|_{T} =\left( \dfrac{\partial U}{\partial V} \right)_T \nonumber$ $\left.\dfrac{TdS}{dV}\right|_{T} = T \left( \dfrac{\partial S}{\partial V} \right)_T \nonumber$ $\left.\dfrac{dV}{dV} \right|_{T} = 1 \nonumber$ The result is $\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial S}{\partial V} \right)_T -p \nonumber$ Now, employ the Maxwell relation on $A$ (Table 6.2.1) $\left( \dfrac{\partial p}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T \nonumber$ to get $\left( \dfrac{\partial U}{\partial V} \right)_T = T \left( \dfrac{\partial p}{\partial T} \right)_V -p \nonumber$ and since $\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{\alpha}{\kappa_T} \nonumber$ It is apparent that $\left( \dfrac{\partial U}{\partial V} \right)_T = T\dfrac{\alpha}{\kappa_T} - p \nonumber$ Note: How cool is that? This result was given without proof in Chapter 4, but can now be proven analytically using the Maxwell Relations!
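For an ideal gas, $\alpha = 1/T$ and $\kappa_T = 1/p$, so the result above predicts that the internal pressure $(\partial U/\partial V)_T$ vanishes. A quick numeric check at an arbitrary state point:

```python
# For an ideal gas, alpha = 1/T and kappa_T = 1/p, so the internal
# pressure (dU/dV)_T = T*alpha/kappa_T - p should vanish: the energy
# of an ideal gas is independent of its volume.

T = 298.0           # K, arbitrary
p = 101325.0        # Pa, arbitrary

alpha = 1.0 / T     # isobaric expansivity of an ideal gas
kappa_T = 1.0 / p   # isothermal compressibility of an ideal gas

internal_pressure = T * alpha / kappa_T - p
print(internal_pressure)   # ~0 (to floating-point precision)
```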
The functions $A$ and $G$ are oftentimes referred to as free energy functions. The reason for this is that they are a measure of the maximum work (in the case of $\Delta A$) or maximum non p-V work (in the case of $\Delta G$) that is available from a process. To show this, consider the total differentials. First, consider the differential of $A$. $dA = dU -TdS - SdT \nonumber$ Substituting the combined first and second laws for $dU$, but expressing the work term as $dw$ (so that $dU = TdS + dw$), yields $dA = TdS + dw -TdS - SdT \nonumber$ And cancelling the $TdS$ terms gives $dA = dw - SdT \nonumber$ or at constant temperature ($dT = 0$) $dA = dw \nonumber$ Since the only assumption made here was that the change is reversible (allowing for the substitution of $TdS$ for $dq$), and $dw$ for a reversible change is the maximum amount of work, it follows that $dA$ gives the maximum work that can be produced from a process at constant temperature. Similarly, a simple expression can be derived for $dG$. Starting from the total differential of $G$: $dG = dU + pdV + Vdp - TdS - SdT \nonumber$ Using an expression for $dU = dq + dw$, where $dq = TdS$ and $dw$ is split into two terms, one ($dw_{pV} = -pdV$) describing the work of expansion and the other ($dw_e$) describing any other type of work (electrical, stretching, etc.): $dU = TdS - pdV + dw_e \nonumber$ $dG$ can be expressed as $dG = \cancel{TdS} - \cancel{pdV} +dw_e + \cancel{pdV} + Vdp - \cancel{TdS} - SdT \nonumber$ Cancelling the $TdS$ and $pdV$ terms leaves $dG = dw_e + Vdp - SdT \nonumber$ So at constant temperature ($dT = 0$) and pressure ($dp = 0$), $dG = dw_e \nonumber$ This implies that $dG$ gives the maximum amount of non p-V work that can be extracted from a process. This concept of $dA$ and $dG$ giving the maximum work (under the specified conditions) is where the term “free energy” comes from, as it is the energy that is free to do work in the surroundings.
If a system is to be optimized to do work in the surroundings (for example, a steam engine that may do work by moving a locomotive) the functions $A$ and $G$ will be important to understand. It will, therefore, be useful to understand how these functions change with changing conditions, such as volume, temperature, and pressure. Contributors • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay) 6.04: Volume Dependence of Helmholtz Energy If one needs to know how the Helmholtz function changes with changing volume at constant temperature, the following expression can be used: $\Delta A = \int_{V_1}^{V_2} \left( \dfrac{\partial A}{\partial V} \right)_T dV \label{eq1}$ But how does one derive an expression for the partial derivative in Equation \ref{eq1}? This is a fairly straightforward process that begins with the definition of $A$: $A = U - TS \nonumber$ Differentiating (and using the chain rule to evaluate $d(TS)$) yields $dA = dU - TdS - SdT \label{eq4}$ Now, it is convenient to use the combined first and second laws $dU = TdS - pdV \label{eq5}$ which assumes: 1. a reversible change and 2. only $pV$ work is being done. Substituting Equation \ref{eq5} into Equation \ref{eq4} yields $dA = \cancel{TdS} - pdV - \cancel{TdS} - SdT \label{eq6}$ Canceling the $TdS$ terms gives the important result $dA = - pdV - SdT \label{eq6.5}$ The natural variables of $A$ are therefore $V$ and $T$!
So the total differential of $A$ is conveniently expressed as $dA = \left( \dfrac{\partial A}{\partial V} \right)_T dV + \left( \dfrac{\partial A}{\partial T} \right)_V dT \label{Total2}$ and by simple comparison of Equations \ref{eq6.5} and \ref{Total2}, it is clear that $\left( \dfrac{\partial A}{\partial V} \right)_T = - p \nonumber$ $\left( \dfrac{\partial A}{\partial T} \right)_V = - S \nonumber$ And so, one can evaluate Equation \ref{eq1} as $\Delta A = - \int_{V_1}^{V_2} p\, dV \nonumber$ If the pressure is independent of the volume, it can be pulled out of the integral. $\Delta A = - p \int_{V_1}^{V_2} dV = -p (V_2-V_1) \nonumber$ Otherwise, the volume dependence of the pressure must be included. $\Delta A = - \int_{V_1}^{V_2} p(V)\, dV \nonumber$ Fortunately, this is easy if the substance is an ideal gas (or if some other equation of state can be used, such as the van der Waals equation.) Example $1$: Ideal Gas Expansion Calculate $\Delta A$ for the isothermal expansion of 1.00 mol of an ideal gas from 10.0 L to 25.0 L at 298 K.
Solution For an ideal gas, $p =\dfrac{nRT}{V} \nonumber$ So $\left( \dfrac{\partial A}{\partial V} \right)_T = -p \nonumber$ becomes $\left( \dfrac{\partial A}{\partial V} \right)_T = -\dfrac{nRT}{V} \nonumber$ And so (Equation \ref{eq1}) $\Delta A = \int_{V_1}^{V_2} \left( \dfrac{\partial A}{\partial V} \right)_T dV \nonumber$ becomes $\Delta A = -nRT \int_{V_1}^{V_2} \dfrac{dV}{V} \nonumber$ or $\Delta A = -nRT \ln \left( \dfrac{V_2}{V_1} \right) \nonumber$ Substituting the values from the problem $\Delta A = -(1.00\,mol)(8.314 \, J/(mol\,K))(298\,K) \ln \left( \dfrac{25.0\,L}{10.0\,L} \right) = -2.27 \times 10^3\,J \nonumber$ But further, it is easy to show that the Maxwell relation that arises from the simplified expression for the total differential of $A$ is $\left( \dfrac{\partial p}{\partial T} \right)_V = \left( \dfrac{\partial S}{\partial V} \right)_T \nonumber$ This particular Maxwell relation is exceedingly useful since one of the terms depends only on $p$, $V$, and $T$. As such it can be expressed in terms of our old friends, $\alpha$ and $\kappa_T$! $\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{\alpha}{\kappa_T} \nonumber$
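The final arithmetic in the example is easy to check numerically:

```python
# Numeric evaluation of the isothermal ideal gas result
# dA = -nRT ln(V2/V1) for the expansion example.

import math

n, R, T = 1.00, 8.314, 298.0   # mol, J/(mol K), K
V1, V2 = 10.0, 25.0            # L (only the ratio enters the logarithm)

dA = -n * R * T * math.log(V2 / V1)
print(f"dA = {dA:.0f} J")      # about -2270 J
```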
The pressure and temperature dependence of $G$ is also easy to describe. The best starting place is the definition of $G$. $G = U + pV -TS \label{eq1}$ Taking the total differential of $G$ yields $dG = dU + pdV + Vdp - TdS - SdT \nonumber$ The differential can be simplified by substituting the combined first and second law statement for $dU$ (consider a reversible process and $pV$ work only). $dG = \cancel{TdS} \bcancel{- pdV} + \bcancel{pdV} + Vdp - \cancel{TdS} - SdT \nonumber$ Canceling the $TdS$ and $pdV$ terms leaves $dG = V\,dp - S\,dT \label{Total1}$ This suggests that the natural variables of $G$ are $p$ and $T$. So the total differential $dG$ can also be expressed $dG = \left( \dfrac{\partial G}{\partial p} \right)_T dp + \left( \dfrac{\partial G}{\partial T} \right)_p dT \label{Total2}$ And by inspection of Equations \ref{Total1} and \ref{Total2}, it is clear that $\left( \dfrac{\partial G}{\partial p} \right)_T = V \nonumber$ and $\left( \dfrac{\partial G}{\partial T} \right)_p = -S \nonumber$ It is also clear that the Maxwell relation on $G$ is given by $\left( \dfrac{\partial V}{\partial T} \right)_p = - \left( \dfrac{\partial S}{\partial p} \right)_T \nonumber$ which is an extraordinarily useful relationship, since one of the terms is expressible entirely in terms of measurable quantities! $\left( \dfrac{\partial V}{\partial T} \right)_p = V\alpha \nonumber$ The pressure dependence of $G$ is given by the pressure derivative at constant temperature $\left( \dfrac{\partial G}{\partial p} \right)_T = V \label{Max2}$ which is simply the (molar) volume. For a fairly incompressible substance (such as a liquid or a solid) the molar volume will be essentially constant over a modest pressure range. Example $1$: Gold under Pressure The density of gold is 19.32 g/cm3. Calculate $\Delta G$ for a 1.00 mol sample of gold when the pressure on it is increased from 1.00 atm to 2.00 atm.
Solution The change in the Gibbs function due to an isothermal change in pressure can be expressed as $\Delta G =\int_{p_1}^{p_2} \left( \dfrac{\partial G}{\partial p} \right)_T dp \nonumber$ Substituting Equation \ref{Max2} results in $\Delta G =\int_{p_1}^{p_2} V dp \nonumber$ Assuming that the molar volume is independent of pressure over the stated pressure range, $\Delta G$ becomes $\Delta G = V(p_2-p_1) \nonumber$ So, the molar change in the Gibbs function can be calculated by substituting the relevant values. \begin{align} \Delta G & = \left( \dfrac{197.0\, g}{mol} \times \dfrac{1\,cm^3}{19.32\,g} \times \dfrac{1\,L}{1000\,cm^3} \right) (2.00 \,atm -1.00 \,atm) \underbrace{ \left(\dfrac{8.314 \,J}{0.08206\, atm\,L}\right)}_{\text{unit conversion}}\\ &= 1.033\,J \end{align} \nonumber 6.06: Temperature Dependence of A and G In differential form, the free energy functions can be expressed as $dA = -pdV - SdT \nonumber$ and $dG = Vdp - SdT \nonumber$ So by inspection, it is easy to see that $\left( \dfrac{\partial A}{\partial T} \right)_V = -S \nonumber$ and $\left( \dfrac{\partial G}{\partial T} \right)_p = -S \nonumber$ And so, it should be fairly straightforward to determine how each changes with changing temperature: $\Delta A = \int_{T_1}^{T_2} \left( \dfrac{\partial A}{\partial T} \right)_V dT = - \int_{T_1}^{T_2} S\,dT \nonumber$ and $\Delta G = \int_{T_1}^{T_2} \left( \dfrac{\partial G}{\partial T} \right)_p dT = - \int_{T_1}^{T_2} S\,dT \nonumber$ But the temperature dependence of the entropy needs to be known in order to evaluate the integral. A convenient work-around can be obtained starting from the definitions of the free energy functions.
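A quick numeric check of the gold example (assuming, as in the solution, a pressure-independent molar volume):

```python
# Numeric check of the gold example: dG = V * dp with the molar
# volume computed from the density, converted from L atm to J.

M = 197.0                 # g/mol, molar mass of gold
rho = 19.32               # g/cm^3, density of gold
V = M / rho / 1000.0      # molar volume in L (cm^3 -> L)

dp = 2.00 - 1.00          # atm
dG = V * dp * (8.314 / 0.08206)   # L atm -> J conversion

print(f"dG = {dG:.3f} J per mole of gold")
```

The tiny value reflects how small molar volumes of condensed phases are: the Gibbs function of a solid is very weakly pressure dependent.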
$A= U -TS \nonumber$ and $G = H -TS \nonumber$ Dividing by $T$ yields $\dfrac{A}{T} = \dfrac{U}{T} -S \nonumber$ and $\dfrac{G}{T} = \dfrac{H}{T} -S \nonumber$ Now differentiating each expression with respect to $T$ at constant $V$ or $p$ respectively (and recalling that $(\partial A/\partial T)_V = -S$ and $(\partial G/\partial T)_p = -S$) yields $\left( \dfrac{\partial \left( \frac{A}{T}\right)}{\partial T} \right)_V = - \dfrac{U}{T^2} \nonumber$ and $\left( \dfrac{\partial \left( \frac{G}{T}\right)}{\partial T} \right)_p = - \dfrac{H}{T^2} \nonumber$ Or differentiating with respect to $1/T$ provides a simpler form that is mathematically equivalent: $\left( \dfrac{\partial \left( \frac{A}{T}\right)}{\partial \left( \frac{1}{T}\right)} \right)_V = U \nonumber$ and $\left( \dfrac{\partial \left( \frac{G}{T}\right)}{\partial \left( \frac{1}{T}\right)} \right)_p = H \nonumber$ Focusing on the second expression (since all of the arguments apply to the first as well), we see a system that can be integrated. Multiplying both sides by $d(1/T)$ yields: $d \left( \dfrac{G}{T} \right) = H d \left( \dfrac{1}{T} \right) \nonumber$ Or for finite changes $\Delta G$ and $\Delta H$: $d \left( \dfrac{\Delta G}{T} \right) = \Delta H d \left( \dfrac{1}{T} \right) \nonumber$ and integration, assuming the enthalpy change is constant over the temperature interval, yields $\int_{T_1}^{T_2} d \left( \dfrac{\Delta G}{T} \right) = \Delta H \int_{T_1}^{T_2} d \left( \dfrac{1}{T} \right) \nonumber$ $\dfrac{\Delta G_{T_2}}{T_2} - \dfrac{\Delta G_{T_1}}{T_1} = \Delta H \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{GH1}$ Equation \ref{GH1} is the Gibbs-Helmholtz equation and can be used to determine how $\Delta G$ changes with changing temperature.
The equivalent equation for the Helmholtz function is $\dfrac{\Delta A_{T_2}}{T_2} -\dfrac{\Delta A_{T_1}}{T_1} = \Delta U \left(\dfrac{1}{T_2} -\dfrac{1}{T_1} \right) \label{GH2}$ Example $1$: Given the following data at 298 K, calculate $\Delta G$ at 500 K for the following reaction: $CH_4(g) + 2 O_2(g) \rightarrow CO_2(g) + 2 H_2O(g) \nonumber$

Compound | $\Delta G_f^o$ (kJ/mol) | $\Delta H_f^o$ (kJ/mol)
CH4(g) | -50.5 | -74.6
CO2(g) | -394.4 | -393.5
H2O(g) | -228.6 | -241.8

Solution $\Delta H$ and $\Delta G_{298\, K}$ can be calculated fairly easily. It will be assumed that $\Delta H$ is constant over the temperature range of 298 K – 500 K. $\Delta H = (1 \,mol)(-393.5 \,kJ/mol) + (2\, mol)(-241.8\, kJ/mol) - (1\, mol)(-74.6\, kJ/mol) = -802.5\, kJ \nonumber$ $\Delta G_{298} = (1\, mol)(-394.4\, kJ/mol) + (2\, mol)(-228.6\, kJ/mol) - (1\, mol)(-50.5\, kJ/mol) = -801.1\, kJ \nonumber$ So using Equation \ref{GH1} with the data just calculated gives $\dfrac{\Delta G_{500\,K}}{500 \,K} - \dfrac{-801.1\,kJ}{298\,K} = (-802.5\, kJ) \left( \dfrac{1}{500\,K} - \dfrac{1}{298\,K} \right) \nonumber$ $\Delta G_{500\,K} = -800.2\,kJ \nonumber$ Note: $\Delta G$ became a little bit less negative at the higher temperature, which is to be expected for a reaction which is exothermic. An increase in temperature should tend to make the reaction less favorable to the formation of products, which is exactly what is seen in this case!
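The Gibbs-Helmholtz arithmetic is easy to script. The sketch below evaluates $\Delta H$, $\Delta G$ at 298 K, and $\Delta G$ at 500 K directly from the tabulated formation values, assuming (as in the example) that $\Delta H$ is constant over the interval:

```python
# Gibbs-Helmholtz estimate of dG(T2) for CH4 + 2 O2 -> CO2 + 2 H2O(g),
# using the tabulated formation values at 298 K (kJ/mol) and assuming
# dH is constant between T1 and T2.

dH = (-393.5) + 2 * (-241.8) - (-74.6)     # kJ, from dHf values
dG298 = (-394.4) + 2 * (-228.6) - (-50.5)  # kJ, from dGf values

T1, T2 = 298.0, 500.0
# Gibbs-Helmholtz: dG(T2)/T2 = dG(T1)/T1 + dH * (1/T2 - 1/T1)
dG500 = T2 * (dG298 / T1 + dH * (1.0 / T2 - 1.0 / T1))

print(f"dH = {dH:.1f} kJ, dG(298 K) = {dG298:.1f} kJ, dG(500 K) = {dG500:.1f} kJ")
```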
So far, we have derived a number of expressions and developed methods for evaluating how thermodynamic variables change as one variable changes while holding the rest constant. But real systems are seldom this accommodating. For example, a piece of metal (such as a railroad rail) left in the sun will undergo both an increase in temperature and an expansion due to the absorption of energy from sunlight. So both $T$ and $V$ are changing at the same time! If the change in a thermodynamic variable (such as $G$) is needed, contributions from both changes must be taken into account. We’ve already seen how to express this in terms of a total differential. $dG = \left( \dfrac{\partial G}{\partial p} \right)_T dp + \left( \dfrac{\partial G}{\partial T} \right)_p dT \label{Total2}$ Fortunately, $G$ (like the other thermodynamic functions $U$, $H$, $S$, and $A$) is kind enough to be a state variable. This means that we can consider the changes independently and then simply add the results. Another way to think of this is that the system may follow either of two pathways to get from the initial conditions to the final conditions: • Pathway I: 1. An isothermal expansion from $V_1$ to $V_2$ at $T_1$ followed by 2. An isochoric temperature increase from $T_1$ to $T_2$ at $V_2$ • Pathway II: 1. An isochoric temperature increase from $T_1$ to $T_2$ at $V_1$ followed by 2. An isothermal expansion from $V_1$ to $V_2$ at $T_2$ And since $G$ has the good sense to be a state variable, the pathway connecting the initial and final states is unimportant. We are free to choose any path that is convenient to calculate the change. Example $1$: Non-Isothermal Gas Expansion Calculate the entropy change for 1.00 mol of a monatomic ideal gas (CV = 3/2 R) expanding from 10.0 L at 273 K to 22.0 L at 297 K.
Solution If one considers entropy to be a function of temperature and volume, one can write the total differential of entropy as $dS = \left( \dfrac{\partial S}{\partial T} \right)_V dT + \left( \dfrac{\partial S}{\partial V} \right)_T dV \nonumber$ and thus $\Delta S = \int_{T_1}^{T_2} \left( \dfrac{\partial S}{\partial T} \right)_V dT + \int_{V_1}^{V_2} \left( \dfrac{\partial S}{\partial V} \right)_T dV \nonumber$ The first term is the contribution due to an isochoric temperature change: \begin{align} \Delta S_{T_1 \rightarrow T_2} & = \int_{T_1}^{T_2} \left( \dfrac{\partial S}{\partial T} \right)_V dT \\ & = \int_{T_1}^{T_2} \dfrac{n C_V}{T} dT \\ & = nC_V \ln \left(\dfrac{T_2}{T_1} \right) \\ & = (1.00\, mol) \left( \dfrac{3}{2} \cdot 8.314 \dfrac{J}{mol\,K} \right) \ln \left(\dfrac{297\,K}{273\,K }\right) \\ & = 1.05 \,J/K \end{align} \nonumber The second term is the contribution due to an isothermal expansion: $\Delta S_{V_1 \rightarrow V_2} = \int_{V_1}^{V_2} \left( \dfrac{\partial S}{\partial V} \right)_T dV \label{second}$ From the Maxwell relation on $A$ $\left( \dfrac{\partial S}{\partial V} \right)_T = \left( \dfrac{\partial p}{\partial T} \right)_V \nonumber$ So Equation \ref{second} becomes \begin{align} \Delta S_{V_1 \rightarrow V_2} & = \int_{V_1}^{V_2} \left( \dfrac{\partial p}{\partial T} \right)_V dV \\ & = \int_{V_1}^{V_2} \left( \dfrac{nR}{V} \right) dV \\ & = nR \ln \left(\dfrac{V_2}{V_1} \right) \\ &= (1.00\, mol) \left( 8.314 \dfrac{J}{mol\,K} \right) \ln \left(\dfrac{22.0\,L}{10.0\,L }\right) \\ & = 6.56\, J/K \end{align} \nonumber And the total entropy change is \begin{align} \Delta S_{tot} & = \Delta S_{T_1 \rightarrow T_2} + \Delta S_{V_1 \rightarrow V_2} \\ & = 1.05\,J/K + 6.56 \,J/K \\ & = 7.61\,J/K \end{align} \nonumber Deriving an expression for a partial derivative (Type III) Thermodynamics involves many variables.
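Both integrals evaluate to logarithms for an ideal gas, so the whole example reduces to two lines of arithmetic. A numeric check:

```python
# Numeric check of the two-contribution entropy change for the
# monatomic ideal gas example: dS = n*Cv*ln(T2/T1) + n*R*ln(V2/V1).

import math

n, R = 1.00, 8.314    # mol, J/(mol K)
Cv = 1.5 * R          # monatomic ideal gas, J/(mol K)

T1, T2 = 273.0, 297.0 # K
V1, V2 = 10.0, 22.0   # L (only the ratio matters)

dS_T = n * Cv * math.log(T2 / T1)   # isochoric heating contribution
dS_V = n * R * math.log(V2 / V1)    # isothermal expansion contribution

print(f"dS(T) = {dS_T:.2f} J/K, dS(V) = {dS_V:.2f} J/K, "
      f"total = {dS_T + dS_V:.2f} J/K")
```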
But for a single component sample of matter, only two state variables are needed to describe the system and fix all of the thermodynamic properties of the system. As such, it is conceivable that two functions can be specified as functions of the same two variables. In general terms: $z(x, y)$ and $w(x, y)$. So an important question that can be answered is, “What happens to $z$ if $w$ is held constant, but $x$ is changed?” To explore this, consider the total differential of $z$: $dz = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy \label{eq5}$ but $z$ can also be considered a function of $x$ and $w(x, y)$. This implies that the total differential can also be written as $dz = \left( \dfrac{\partial z}{\partial x} \right)_w dx + \left( \dfrac{\partial z}{\partial w} \right)_x dw \label{eq6}$ and these two total differentials must be equal to one another! $\left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy = \left( \dfrac{\partial z}{\partial x} \right)_w dx + \left( \dfrac{\partial z}{\partial w} \right)_x dw \nonumber$ If we constrain the system to a change in which $w$ remains constant, the last term will vanish since $dw = 0$. $\left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy = \left( \dfrac{\partial z}{\partial x} \right)_w dx \label{eq10}$ but also, since $w$ is a function of $x$ and $y$, the total differential for $w$ can be written $dw = \left( \dfrac{\partial w}{\partial x} \right)_y dx + \left( \dfrac{\partial w}{\partial y} \right)_x dy \nonumber$ And it too must be zero for a process in which $w$ is held constant.
$0 = \left( \dfrac{\partial w}{\partial x} \right)_y dx + \left( \dfrac{\partial w}{\partial y} \right)_x dy \nonumber$ From this expression, it can be seen that $dy = - \left( \dfrac{\partial w}{\partial x} \right)_y \left( \dfrac{\partial y}{\partial w} \right)_x dx \nonumber$ Substituting this into Equation \ref{eq10} yields $\left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x \left[ - \left( \dfrac{\partial w}{\partial x} \right)_y \left( \dfrac{\partial y}{\partial w} \right)_x dx \right] = \left( \dfrac{\partial z}{\partial x} \right)_w dx \label{eq20}$ which simplifies to $\left( \dfrac{\partial z}{\partial x} \right)_y dx - \left( \dfrac{\partial z}{\partial w} \right)_x \left( \dfrac{\partial w}{\partial x} \right)_y dx = \left( \dfrac{\partial z}{\partial x} \right)_w dx \nonumber$ So for $dx \neq 0$, it follows that $\left( \dfrac{\partial z}{\partial x} \right)_y - \left( \dfrac{\partial z}{\partial w} \right)_x \left( \dfrac{\partial w}{\partial x} \right)_y = \left( \dfrac{\partial z}{\partial x} \right)_w \nonumber$ or $\left( \dfrac{\partial z}{\partial x} \right)_y = \left( \dfrac{\partial z}{\partial x} \right)_w + \left( \dfrac{\partial z}{\partial w} \right)_x \left( \dfrac{\partial w}{\partial x} \right)_y \label{final1}$ As with partial derivative transformation types I and II, this result can be achieved in a formal, albeit less mathematically rigorous method. Consider $z(x, w)$. This allows us to write the total differential for $z$: $dz = \left( \dfrac{\partial z}{\partial x} \right)_w dx + \left( \dfrac{\partial z}{\partial w} \right)_x dw \nonumber$ Now, divide by $dx$ and constrain to constant $y$.
$\left.\dfrac{dz}{dx} \right\rvert_{y}= \left( \dfrac{\partial z}{\partial x} \right)_w \left.\dfrac{dx}{dx} \right\rvert_{y} + \left( \dfrac{\partial z}{\partial w} \right)_x \left.\dfrac{dw}{dx} \right\rvert_{y} \nonumber$ noting that $dx/dx = 1$ and converting the other ratios to partial derivatives yields $\left( \dfrac{\partial z}{\partial x} \right)_y = \left( \dfrac{\partial z}{\partial x} \right)_w + \left( \dfrac{\partial z}{\partial w} \right)_x \left( \dfrac{\partial w}{\partial x} \right)_y \label{final2}$ which agrees with the previous result (Equation \ref{final1})! Again, the method is not mathematically rigorous, but it works so long as $w$, $x$, $y$, and $z$ are state functions and the total differentials $dw$, $dx$, $dy$, and $dz$ are exact.
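The transformation can be sanity-checked numerically for any smooth pair of functions. The sketch below uses the hypothetical pair $z(x,y) = x^2 y$ and $w(x,y) = x + y$ (chosen purely for illustration) and compares both sides of the identity by central finite differences:

```python
# Finite-difference check of the "type III" transformation
#   (dz/dx)_y = (dz/dx)_w + (dz/dw)_x * (dw/dx)_y
# for the illustrative pair z(x, y) = x**2 * y and w(x, y) = x + y,
# so that holding w constant means y = w - x.

h = 1e-6
x0, y0 = 1.3, 0.7
w0 = x0 + y0

def z(x, y):
    return x**2 * y

# Left side: vary x at constant y.
lhs = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)

# (dz/dx)_w: vary x while y = w0 - x keeps w fixed.
dz_dx_w = (z(x0 + h, w0 - (x0 + h)) - z(x0 - h, w0 - (x0 - h))) / (2 * h)

# (dz/dw)_x: vary w at constant x (so y = w - x varies).
dz_dw_x = (z(x0, (w0 + h) - x0) - z(x0, (w0 - h) - x0)) / (2 * h)

dw_dx_y = 1.0   # w = x + y, so (dw/dx)_y = 1 exactly

rhs = dz_dx_w + dz_dw_x * dw_dx_y
print(lhs, rhs)   # both ~ 2*x0*y0 = 1.82
```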
Constant volume and constant pressure heat capacities are very important in the calculation of many changes. The ratio $C_p/C_V = \gamma$ appears in many expressions as well (such as the relationship between pressure and volume along an adiabatic expansion.) It would be useful to derive an expression for the difference $C_p - C_V$ as well. As it turns out, this difference is expressible in terms of measurable physical properties of a substance, such as $\alpha$, $\kappa_T$, $p$, $V$, and $T$. In order to derive an expression, let’s start from the definitions. $C_p= \left( \dfrac{\partial H}{\partial T} \right)_p \nonumber$ and $C_V= \left( \dfrac{\partial U}{\partial T} \right)_V \nonumber$ The difference is thus $C_p-C_V = \left( \dfrac{\partial H}{\partial T} \right)_p - \left( \dfrac{\partial U}{\partial T} \right)_V \nonumber$ In order to evaluate this difference, consider the definition of enthalpy: $H = U + pV \nonumber$ Differentiating this yields $dH = dU + pdV + Vdp \nonumber$ Dividing this expression by $dT$ and constraining to constant $p$ gives $\left.\dfrac{dH}{dT} \right\rvert_{p}= \left.\dfrac{dU}{dT} \right\rvert_{p} +p \left.\dfrac{dV}{dT} \right\rvert_{p} + V \left.\dfrac{dp}{dT} \right\rvert_{p} \nonumber$ The last term is kind enough to vanish (since $dp = 0$ at constant pressure). After converting the remaining terms to partial derivatives: $\left( \dfrac{\partial H}{\partial T} \right)_p = \left( \dfrac{\partial U}{\partial T} \right)_p + p \left( \dfrac{\partial V}{\partial T} \right)_p \label{total4}$ This expression is starting to show some of the players. For example, $\left( \dfrac{\partial H}{\partial T} \right)_p = C_p \nonumber$ and $\left( \dfrac{\partial V}{\partial T} \right)_p = V \alpha \nonumber$ So Equation \ref{total4} becomes $C_p = \left( \dfrac{\partial U}{\partial T} \right)_p + pV\alpha \label{eq5}$ In order to evaluate the partial derivative above, first consider $U(V, T)$.
Then the total differential $dU$ can be expressed $dU = \left( \dfrac{\partial U}{\partial V} \right)_T dV + \left( \dfrac{\partial U}{\partial T} \right)_V dT \nonumber$ Dividing by $dT$ and constraining to constant $p$ will generate the partial derivative we wish to evaluate: $\left.\dfrac{dU}{dT} \right\rvert_{p}= \left( \dfrac{\partial U}{\partial V} \right)_T \left.\dfrac{dV}{dT} \right\rvert_{p} + \left( \dfrac{\partial U}{\partial T} \right)_V \left.\dfrac{dT}{dT} \right\rvert_{p} \nonumber$ The last term will become unity, so after converting to partial derivatives, we see that $\left( \dfrac{\partial U}{\partial T} \right)_p= \left( \dfrac{\partial U}{\partial V} \right)_T \left( \dfrac{\partial V}{\partial T} \right)_p+ \left( \dfrac{\partial U}{\partial T} \right)_V \label{eq30}$ (This, incidentally, is an example of partial derivative transformation type III.) Now we are getting somewhere! $\left( \dfrac{\partial U}{\partial T} \right)_V = C_V \nonumber$ and $\left( \dfrac{\partial V}{\partial T} \right)_p = V\alpha \nonumber$ So Equation \ref{eq30} can be rewritten $\left( \dfrac{\partial U}{\partial T} \right)_p= \left( \dfrac{\partial U}{\partial V} \right)_T V\alpha + C_V \nonumber$ If we can find an expression for $\left( \dfrac{\partial U}{\partial V} \right)_T \nonumber$ we are almost home free! Fortunately, that is an easy expression to derive. Begin with the combined expression of the first and second laws: $dU = TdS - pdV \nonumber$ Now, divide both sides by $dV$ and constrain to constant $T$.
$\left.\dfrac{dU}{dV} \right\rvert_{T}= T \left.\dfrac{dS}{dV} \right\rvert_{T} - p \left.\dfrac{dV}{dV} \right\rvert_{T} \nonumber$ The last term is unity, so after conversion to partial derivatives, we see $\left( \dfrac{\partial U}{\partial V} \right)_T= T \left( \dfrac{\partial S}{\partial V} \right)_T - p \label{eq40}$ A Maxwell relation (specifically the Maxwell relation on $A$) can be used $\left( \dfrac{\partial S}{\partial V} \right)_T = \left( \dfrac{\partial p}{\partial T} \right)_V \nonumber$ Substituting this into Equation \ref{eq40} yields $\left( \dfrac{\partial U}{\partial V} \right)_T= T \left( \dfrac{\partial p}{\partial T} \right)_V - p \nonumber$ and since $\left( \dfrac{\partial p}{\partial T} \right)_V = \dfrac{\alpha}{\kappa_T} \nonumber$ then $\left( \dfrac{\partial U}{\partial V} \right)_T= T \dfrac{\alpha}{\kappa_T} - p \nonumber$ Now, substituting this expression into Equation \ref{eq30} gives \begin{align*} \left( \dfrac{\partial U}{\partial T} \right)_p &= \left[ T \dfrac{\alpha}{\kappa_T} - p \right] V\alpha + C_V \\[4pt] &= \dfrac{TV \alpha^2}{\kappa_T} - pV\alpha + C_V \end{align*} Substituting this into Equation \ref{eq5} yields $C_p = \left[ \dfrac{TV \alpha^2}{\kappa_T} - \cancel{ pV\alpha} + C_V \right] + \cancel{pV\alpha} \nonumber$ The $pV\alpha$ terms cancel. Subtracting $C_V$ from both sides gives the desired result: $C_p - C_V = \dfrac{TV \alpha^2}{\kappa_T} \label{final}$ And this is a completely general result, since the only assumptions made were those that allowed us to use the combined first and second laws in the form $dU = TdS - pdV. \nonumber$ That means that this expression can be applied to any substance whether gas, liquid, animal, vegetable, or mineral. But what is the result for an ideal gas?
For an ideal gas, $\alpha = \dfrac{1}{T} \nonumber$ and $\kappa_T=\dfrac{1}{p} \nonumber$ Substitution back into Equation \ref{final} yields \begin{align*} C_p - C_V &= \dfrac{TV \left(\dfrac{1}{T}\right)^2}{\left(\dfrac{1}{p}\right)} \\[4pt] &= \dfrac{pV}{T} \\[4pt] &= R \end{align*} So for an ideal gas, $C_p - C_V = R$. That is good to know, no? Example $1$ Derive the expression for the difference between $C_p$ and $C_V$ by beginning with the definition of $H$, differentiating, dividing by $dT$, and constraining to constant $V$ (to generate the partial derivative definition of $C_V$). In this approach, you will need to find expressions for $\left( \dfrac{\partial H}{\partial T} \right)_V \nonumber$ and $\left( \dfrac{\partial H}{\partial p} \right)_T \nonumber$ and also utilize the Maxwell Relation on $G$. Solution Begin with the definition of enthalpy. $H = U +pV \nonumber$ Differentiate the expression. $dH = dU + pdV + Vdp \nonumber$ Now, divide by $dT$ and constrain to constant $V$ (as described in the instructions) to generate the partial derivative definition of $C_V$: $\left.\dfrac{dH}{dT} \right\rvert_{V}= \left.\dfrac{dU}{dT} \right\rvert_{V} + p \left.\dfrac{dV}{dT} \right\rvert_{V} + V \left.\dfrac{dp}{dT} \right\rvert_{V} \nonumber$ The middle term vanishes (since $dV = 0$ at constant volume), leaving $\left(\dfrac{\partial H}{\partial T} \right)_{V}= \left(\dfrac{\partial U}{\partial T} \right)_{V} + V \left(\dfrac{\partial p}{\partial T} \right)_{V} \label{eq20E}$ Now what is needed is an expression for $\left(\dfrac{\partial H}{\partial T} \right)_{V}. \nonumber$ This can be derived from the total differential for $H(p,T)$ by dividing by $dT$ and constraining to constant $V$. $dH = \left(\dfrac{\partial H}{\partial p} \right)_{T} dp + \left(\dfrac{\partial H}{\partial T} \right)_{p} dT \nonumber$ $\left(\dfrac{\partial H}{\partial T} \right)_{V}=\left(\dfrac{\partial H}{\partial p} \right)_{T} \left(\dfrac{\partial p}{\partial T} \right)_{V} + \left(\dfrac{\partial H}{\partial T} \right)_{p} \label{eq30E}$ This again is an example of partial derivative transformation type III. To continue, we need an expression for $\left(\dfrac{\partial H}{\partial p} \right)_{T}.
\nonumber$ This can be quickly generated by considering the total differential of $H(p,S)$, its natural variables: $dH = TdS + Vdp \nonumber$ Dividing by $dp$ and constraining to constant $T$ yields $\left.\dfrac{dH}{dp} \right\rvert_{T}= T \left.\dfrac{dS}{dp} \right\rvert_{T} + V \left.\dfrac{dp}{dp} \right\rvert_{T} \nonumber$ $\left(\dfrac{\partial H}{\partial p} \right)_{T}= T \left(\dfrac{\partial S}{\partial p} \right)_{T} + V \label{eq40E}$ Using the Maxwell Relation on $G$, we can substitute $- \left(\dfrac{\partial V}{\partial T} \right)_{p}= \left(\dfrac{\partial S}{\partial p} \right)_{T} \nonumber$ So Equation \ref{eq40E} becomes $\left(\dfrac{\partial H}{\partial p} \right)_{T}= - T \left(\dfrac{\partial V}{\partial T} \right)_{p} + V \nonumber$ Now, substitute this back into Equation \ref{eq30E}: $\left(\dfrac{\partial H}{\partial T} \right)_{V}= \left[ - T \left(\dfrac{\partial V}{\partial T} \right)_{p} + V \right] \left(\dfrac{\partial p}{\partial T} \right)_{V} + \left(\dfrac{\partial H}{\partial T} \right)_{p} \nonumber$ $\left(\dfrac{\partial H}{\partial T} \right)_{V}= - T \left(\dfrac{\partial V}{\partial T} \right)_{p} \left(\dfrac{\partial p}{\partial T} \right)_{V} + V \left(\dfrac{\partial p}{\partial T} \right)_{V} + \left(\dfrac{\partial H}{\partial T} \right)_{p} \nonumber$ This can now be substituted for $\left(\dfrac{\partial H}{\partial T} \right)_{V}$ in Equation \ref{eq20E}: $- T \left(\dfrac{\partial V}{\partial T} \right)_{p} \left(\dfrac{\partial p}{\partial T} \right)_{V} + \cancel{V \left(\dfrac{\partial p}{\partial T} \right)_{V}} + \left(\dfrac{\partial H}{\partial T} \right)_{p} = \left(\dfrac{\partial U}{\partial T} \right)_{V} + \cancel{V \left(\dfrac{\partial p}{\partial T} \right)_{V}} \label{eq50E}$ Several terms cancel one another. Equation \ref{eq50E} can then be rearranged to yield $\left(\dfrac{\partial H}{\partial T} \right)_{p} - \left(\dfrac{\partial U}{\partial T} \right)_{V} = T \left(\dfrac{\partial V}{\partial T} \right)_{p} \left(\dfrac{\partial p}{\partial T} \right)_{V} \nonumber$ or $C_p - C_V = \dfrac{TV \alpha^2}{\kappa_T} \nonumber$ which might look familiar (Equation \ref{final})!
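The general result and its ideal-gas limit are easy to check numerically. Below is a minimal sketch (Python; SI units, with the function name chosen purely for illustration), confirming that $C_p - C_V = TV\alpha^2/\kappa_T$ collapses to $R$ when $\alpha = 1/T$ and $\kappa_T = 1/p$:

```python
# Check that Cp - Cv = T V alpha^2 / kappa_T reduces to R for an ideal gas.
R = 8.314  # gas constant, J/(mol K)

def cp_minus_cv(T, V, alpha, kappa_T):
    """General result: Cp - Cv = T V alpha^2 / kappa_T (any substance)."""
    return T * V * alpha**2 / kappa_T

T = 298.15         # K
p = 101325.0       # Pa
V = R * T / p      # ideal-gas molar volume, m^3/mol
alpha = 1.0 / T    # ideal gas: isobaric expansivity
kappa_T = 1.0 / p  # ideal gas: isothermal compressibility

print(cp_minus_cv(T, V, alpha, kappa_T))  # ≈ 8.314 J/(mol K), i.e. R
```

The same function applied to a measured $\alpha$, $\kappa_T$, and molar volume for a liquid or solid gives the (much smaller) difference for condensed phases.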
Q6.1 Using Table T1, calculate the standard reaction Gibbs functions ($\Delta G^o$) for the following reactions at 298 K. 1. $CH_3CH_2OH(l) + 3 O_2(g) \rightarrow 2 CO_2(g) + 3 H_2O(l)$ 2. $C_6H_{12}O_6(s) + 6 O_2(g) \rightarrow 6 CO_2(g) + 6 H_2O(l)$ 3. $2 POCl_3(l) \rightarrow 2 PCl_3(l) + O_2(g)$ 4. $2 KBr(s) + Cl_2(g) \rightarrow 2 KCl(s) + Br_2(l)$ 5. $SiH_4(g) + 2 Cl_2(g) \rightarrow SiCl_4(l) + 2 H_2(g)$ Q6.2 Estimate $\Delta G$ at 1000 K from its value at 298 K for the reaction $C(s) + 2 H_2(g) \rightarrow CH_4(g)$ with $\Delta G = -50.75\, kJ$ at 298 K. Q6.3 The standard Gibbs function of formation ($\Delta G_f^o$) of $PbO_2(s)$ is -217.4 kJ/mol at 298 K. Assuming $O_2$ is an ideal gas, find the standard Helmholtz function of formation ($\Delta A_f^o$) for $PbO_2$ at 298 K. Q6.4 Calculate the entropy change for 1.00 mol of an ideal monatomic gas ($C_V = \tfrac{3}{2}R$) undergoing an expansion and simultaneous temperature increase from 10.0 L at 298 K to 205.0 L at 455 K. Q6.5 Consider a gas that obeys the equation of state $p =\dfrac{nRT}{V-nb}$ 1. Find expressions for $\alpha$ and $\kappa_T$ for this gas. 2. Evaluate the difference between $C_p$ and $C_V$ for the gas. Q6.6 Show that $\left( \dfrac{\partial C_p}{\partial p} \right)_T=0$ for an ideal gas. Q6.7 Derive the thermodynamic equation of state $\left( \dfrac{\partial H}{\partial p} \right)_T = V( 1- T \alpha)$ Q6.8 Derive the thermodynamic equation of state $\left( \dfrac{\partial U}{\partial V} \right)_T = T \dfrac{ \alpha}{\kappa_T} -p$ Q6.9 The “Joule Coefficient” is defined by $\mu_J = \left( \dfrac{\partial T}{\partial V} \right)_U$ Show that $\mu_J = \dfrac{1}{C_V} \left( p - \dfrac{T \alpha}{\kappa_T }\right)$ and evaluate the expression for an ideal gas. Q6.10 Derive expressions for the pressure derivatives $\left( \dfrac{\partial X}{\partial p} \right)_T$ where $X$ is $U$, $H$, $A$, $G$, and $S$ at constant temperature in terms of measurable properties.
(The derivation of $\left( \dfrac{\partial H}{\partial p} \right)_T$ was done in problem Q6.7). Evaluate the expressions for • $\left( \dfrac{\partial S}{\partial p} \right)_T$ • $\left( \dfrac{\partial H}{\partial p} \right)_T$ • $\left( \dfrac{\partial U}{\partial p} \right)_T$ for a van der Waals gas. Q6.11 Derive expressions for the volume derivatives $\left( \dfrac{\partial X}{\partial V} \right)_T$ where $X$ is $U$, $H$, $A$, $G$, and $S$ at constant temperature in terms of measurable properties. (The derivation of $\left( \dfrac{\partial U}{\partial V} \right)_T$ was done in problem Q6.8.) Evaluate the expressions for a van der Waals gas. Q6.12 Evaluate the difference between $C_p$ and $C_V$ for a gas that obeys the equation of state $p =\dfrac{nRT}{V-nb}$ Q6.13 The adiabatic compressibility ($\kappa_S$) is defined by $\kappa_S = -\dfrac{1}{V} \left( \dfrac{\partial V}{\partial p} \right)_S$ Show that for an ideal gas, $\kappa_S = \dfrac{1}{p \gamma}$ 6.S: Putting the Second Law to Work (Summary) Learning Objectives After mastering the material presented in this chapter, one will be able to: 1. Define the free energy functions $A$ and $G$, and relate changes in these functions to the spontaneity of a given process at constant volume and pressure respectively. 2. Use the definitions of entropy and reversible work of expansion to write an equation that combines the first and second laws of thermodynamics. 3. Utilize the combined first and second law relationship to derive Maxwell Relations stemming from the definitions of $U$, $H$, $A$, and $G$. 4. Utilize the Maxwell Relations to derive expressions that govern changes in thermodynamic variables as systems move along specified pathways (such as constant temperature, pressure, volume, or adiabatic pathways.) 5. Derive and utilize an expression describing the volume dependence of $A$. 6.
Derive and utilize an expression describing the pressure dependence of $G$. 7. Derive and utilize expressions that describe the temperature dependence of $A$ and $G$. 8. Derive an expression for, and evaluate, the difference between $C_p$ and $C_V$ for any substance, in terms of $T$, $V$, $\alpha$, and $\kappa_T$. Vocabulary and Concepts • free energy • Gibbs Free Energy • Gibbs function • Gibbs-Helmholtz equation • Helmholtz function • maximum work • Maxwell Relation • standard free energy of formation ($\Delta G_f^o$)
Up until this point, we have considered single-component systems which do not change in composition. By and large, nature consists of much more complicated systems, containing many components and continually undergoing changes in composition through phase changes or chemical reactions or both! In order to expand our thermodynamic toolbox, we will begin by discussing mixtures. • 7.1: Thermodynamics of Mixing A natural place to begin a discussion of mixtures is to consider a mixture of two gases. • 7.2: Partial Molar Volume The partial molar volume of compound A in a mixture of A and B can be defined using the total differential of V. • 7.3: Chemical Potential The chemical potential tells how the Gibbs function will change as the composition of the mixture changes. And since systems tend to seek a minimum aggregate Gibbs function, the chemical potential will point to the direction the system can move in order to reduce the total Gibbs function. • 7.4: The Gibbs-Duhem Equation The Gibbs-Duhem equation relates how the chemical potential can change for a given composition while the system maintains equilibrium. So for a binary system, consisting of components A and B (the two most often studied compounds in all of chemistry) • 7.5: Non-ideality in Gases - Fugacity The relationship for chemical potential was derived assuming ideal gas behavior. But for real gases that deviate widely from ideal behavior, the expression has only limited applicability. In order to use the simple expression on real gases, a “fudge” factor is introduced called fugacity. Fugacity is used instead of pressure. • 7.6: Colligative Properties Colligative properties are important properties of solutions as they describe how the properties of the solvent will change as solute (or solutes) is (are) added. • 7.7: Solubility The maximum solubility of a solute can be determined using the same methods we have used to describe colligative properties.
If this chemical potential is lower than that of a pure solid solute, the solute will dissolve into the liquid solvent (in order to achieve a lower chemical potential!) So the point of saturation is reached when the chemical potential of the solute in the solution is equal to that of the pure solid solute. • 7.8: Non-ideality in Solutions - Activity The bulk of the discussion in this chapter dealt with ideal solutions. However, real solutions will deviate from this kind of behavior. So much as in the case of gases, where fugacity was introduced to allow us to use the ideal models, activity is used to allow for the deviation of real solutes from limiting ideal behavior. • 7.E: Mixtures and Solutions (Exercises) Exercises for Chapter 7 "Mixtures and Solutions" in Fleming's Physical Chemistry Textmap. • 7.S: Mixtures and Solutions (Summary) Summary for Chapter 7 "Mixtures and Solutions" in Fleming's Physical Chemistry Textmap. 07: Mixtures and Solutions A natural place to begin a discussion of mixtures is to consider a mixture of two gases. Consider samples of the two gases filling two partitions in a single container, both at the same pressure and temperature, having volumes $V_A$ and $V_B$. After being allowed to mix isothermally (and taking $V_A = V_B$ for concreteness), the partial pressures of the two gases will drop by a factor of 2 (although the total pressure will still be the original value) and the volumes occupied by the two gases will double. Assuming ideal behavior, so that interactions between individual gas molecules are unimportant, it is fairly easy to calculate $\Delta H$ for each gas, as it is simply an isothermal expansion. The total enthalpy of mixing is then given by $\Delta H_{mix} = \Delta H_A + \Delta H_B \nonumber$ And since the enthalpy change for an isothermal expansion of an ideal gas is zero, $\Delta H_{mix} =0 \nonumber$ is a straightforward conclusion. This will be the criterion for an ideal mixture.
In general, real mixtures will deviate from this limiting ideal behavior due to interactions between molecules and other concerns. Also, many substances undergo chemical changes when they mix with other substances. But for now, we will limit ourselves to discussing mixtures in which no chemical reactions take place. Entropy of Mixing The entropy change induced due to isothermal mixing (assuming again no interactions between the molecules in the gas mixture) is again going to be the sum of the contributions from isothermal expansions of the two gases. Fortunately, entropy changes for isothermal expansions are easy to calculate for ideal gases. $\Delta S = nR \ln \left( \dfrac{V_2}{V_1}\right) \nonumber$ If $V_A$ and $V_B$ are the initial volumes of gases A and B, the total volume after mixing is $V_A + V_B$, and the total entropy change is $\Delta S_{mix} = n_AR \ln \left( \dfrac{V_A + V_B}{V_A}\right) + n_BR \ln \left( \dfrac{V_A + V_B}{V_B}\right) \nonumber$ Noting that the term $\dfrac{V_A + V_B}{V_A} = \dfrac{1}{\chi_A} \nonumber$ where $\chi_A$ is the mole fraction of $A$ after mixing, and that $n_A$ can be expressed as the product of $\chi_A$ and the total number of moles, the expression can be rewritten $\Delta S_{mix} = n_{tot} R \left[ -\chi_A \ln (\chi_A) - \chi_B \ln (\chi_B) \right] \nonumber$ It should be noted that because the mole fraction is always between 0 and 1, $\ln (\chi_i) < 0$. As such, the entropy change for a system undergoing isothermal mixing is always positive, as one might expect (since mixing will make the system less ordered). The entropy change for a system undergoing isothermal mixing is always positive. Free Energy of Mixing Calculating $\Delta G_{mix}$ should be no more difficult than calculating $\Delta S_{mix}$.
For isothermal mixing and constant total pressure $\Delta G_{mix} = \Delta H_{mix} - T\Delta S_{mix} \nonumber$ and so it follows from above that for the isothermal mixing of two gases at constant total pressure $\Delta G_{mix} = n_{tot} RT \left[ \chi_A \ln (\chi_A) + \chi_B \ln (\chi_B) \right] \nonumber$ The relationships describing the isothermal mixing of two ideal gases $A$ and $B$ are summarized in the graph below. Again, because $\ln (\chi_i) < 0$, then $\Delta G_{mix} < 0$ implying that mixing is always a spontaneous process for an ideal solution. This is true for gases. But for many combinations of liquids or solids, the strong intermolecular forces may make mixing unfavorable (for example in the case of vegetable oil and water). Also, these interactions may make the volume non-additive as well (as in the case of ethanol and water). Mixing is always a spontaneous process for an ideal solution.
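Since $\Delta H_{mix} = 0$ for an ideal mixture, both $\Delta S_{mix}$ and $\Delta G_{mix}$ follow directly from the mole fractions. A short numerical sketch (Python; the function names are illustrative, not from the text):

```python
import math

R = 8.314  # J/(mol K)

def entropy_of_mixing(n_tot, x_A):
    """Ideal binary mixture: dS_mix = -n_tot R [x_A ln x_A + x_B ln x_B]."""
    x_B = 1.0 - x_A
    return -n_tot * R * (x_A * math.log(x_A) + x_B * math.log(x_B))

def gibbs_of_mixing(n_tot, x_A, T):
    """Ideal mixture: dG_mix = -T dS_mix, because dH_mix = 0."""
    return -T * entropy_of_mixing(n_tot, x_A)

# 1 mol total, equimolar, 298 K:
print(entropy_of_mixing(1.0, 0.5))       # ≈ +5.76 J/K (= R ln 2, the maximum)
print(gibbs_of_mixing(1.0, 0.5, 298.0))  # ≈ -1717 J (negative: spontaneous)
```

Sweeping `x_A` from 0 to 1 reproduces the familiar symmetric mixing curves, with the extremum at the equimolar composition.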
The partial molar volume of compound A in a mixture of A and B can be defined as $V_A = \left (\dfrac{\partial V}{\partial n_A} \right)_{p,T,n_B} \nonumber$ Using this definition, a change in volume for the mixture can be described using the total differential of $V$: $dV = \left( \dfrac{\partial V}{\partial n_A}\right)_{p,T,n_B} dn_A + \left( \dfrac{\partial V}{\partial n_B}\right)_{p,T,n_A} dn_B \nonumber$ or $dV = V_A \, dn_A + V_B\,dn_B \nonumber$ and integration (at fixed composition) yields $V = \int _0^{n_A} V_A \, dn_A + \int _0^{n_B} V_B\,dn_B \nonumber$ $V = V_A \, n_A + V_B\,n_B \nonumber$ This result is important as it demonstrates an important quality of partial molar quantities. Specifically, if $\xi_i$ represents the partial molar property $X$ for component $i$ of a mixture, the total property $X$ for the mixture is given by $X = \sum_{i} \xi_in_i \nonumber$ It should be noted that while the volume of a substance is never negative, the partial molar volume can be. An example of this appears in the dissolution of a strong electrolyte in water. Because the water molecules in the solvation sphere of the ions are physically closer together than they are in bulk pure water, there is a volume decrease when the electrolyte dissolves. This is easily observable at high concentrations where a larger fraction of the water in the sample is tied up in solvation of the ions. 7.03: Chemical Potential In much the same fashion as the partial molar volume is defined, the partial molar Gibbs function is defined for compound $i$ in a mixture: $\mu_i = \left( \dfrac{\partial G}{\partial n_i} \right) _{p,T,n_j\neq i} \label{eq1}$ This particular partial molar function is of particular importance, and is called the chemical potential. The chemical potential tells how the Gibbs function will change as the composition of the mixture changes.
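The additivity relation $X = \sum_i \xi_i n_i$ noted above is simple to apply numerically. A quick sketch (Python), using illustrative placeholder values for the partial molar volumes of a hypothetical binary mixture (not measured data):

```python
def total_property(partial_molar_values, moles):
    """Additivity of partial molar quantities: X = sum_i xi_i * n_i."""
    return sum(x_i * n_i for x_i, n_i in zip(partial_molar_values, moles))

# Hypothetical partial molar volumes (cm^3/mol) at one fixed composition:
V_partial = [17.5, 56.0]   # components A and B (illustrative values only)
n = [2.0, 1.0]             # moles of A and B

print(total_property(V_partial, n))  # 91.0 cm^3 for the mixture
```

Note that the partial molar volumes themselves depend on composition, so this additivity holds at a given composition; it is not a claim that pure-component molar volumes add.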
And since systems tend to seek a minimum aggregate Gibbs function, the chemical potential will point to the direction the system can move in order to reduce the total Gibbs function. In general, the total change in the Gibbs function ($dG$) can be calculated from $dG = \left( \dfrac{\partial G}{\partial p} \right) _{T,n_i} dp + \left( \dfrac{\partial G}{\partial T} \right) _{p, n_i }dT + \sum_i \left( \dfrac{\partial G}{\partial n_i} \right) _{p,T,n_j\neq i} dn_i \nonumber$ Or, by substituting the definition for the chemical potential, and evaluating the pressure and temperature derivatives as was done in Chapter 6: $dG = Vdp - SdT + \sum_i \mu_i dn_i \nonumber$ But as it turns out, the chemical potential can be defined as the partial molar derivative of any of the four major thermodynamic functions $U$, $H$, $A$, or $G$: Table $1$: Chemical potential can be defined as the partial molar derivative of any of the four major thermodynamic functions $dU = TdS - pdV + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial U}{\partial n_i} \right) _{S,V,n_j\neq i}$ $dH = TdS + Vdp + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial H}{\partial n_i} \right) _{S,p,n_j\neq i}$ $dA = -pdV - SdT + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial A}{\partial n_i} \right) _{V,T,n_j\neq i}$ $dG = Vdp - SdT + \sum_i \mu_i dn_i$ $\mu_i = \left( \dfrac{\partial G}{\partial n_i} \right) _{p,T,n_j\neq i}$ The last definition, in which the chemical potential is defined as the partial molar Gibbs function, is the most commonly used, and perhaps the most useful (Equation \ref{eq1}). As the partial molar Gibbs function, it is easy to show that $d\mu = Vdp - SdT \nonumber$ where $V$ is the molar volume, and $S$ is the molar entropy.
Using this expression, it is easy to show that $\left( \dfrac{\partial \mu}{\partial p} \right) _{T} = V \nonumber$ and so at constant temperature $\int_{\mu^o}^{\mu} d\mu = \int_{p^o}^{p} V\,dp \label{eq5}$ For a substance whose molar volume is fairly independent of pressure at constant temperature (i.e., $\kappa_T$ is very small), Equation \ref{eq5} becomes $\int_{\mu^o}^{\mu} d\mu = V \int_{p^o}^{p} dp \nonumber$ $\mu - \mu^o = V(p-p^o) \nonumber$ or $\mu = \mu^o + V(p-p^o) \nonumber$ where $p^o$ is a reference pressure (generally the standard pressure of 1 atm) and $\mu^o$ is the chemical potential at the standard pressure. If the substance is highly compressible (such as a gas) the pressure dependence of the molar volume is needed to complete the integral. If the substance is an ideal gas $V =\dfrac{RT}{p} \nonumber$ So at constant temperature, Equation \ref{eq5} then becomes $\int_{\mu^o}^{\mu} d\mu = RT \int_{p^o}^{p} \dfrac{dp}{p} \label{eq5b}$ or $\mu = \mu^o + RT \ln \left(\dfrac{p}{p^o} \right) \nonumber$
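The ideal-gas form of the chemical potential can be evaluated directly. A minimal sketch (Python; the function name is illustrative):

```python
import math

R = 8.314  # J/(mol K)

def mu_minus_mu_standard(T, p, p_standard=1.0):
    """mu - mu^o = RT ln(p/p^o) for an ideal gas at constant T.
    Pressures may be in any unit, as long as p and p^o use the same one."""
    return R * T * math.log(p / p_standard)

# Doubling the pressure at 298 K raises mu by RT ln 2:
print(mu_minus_mu_standard(298.0, 2.0))  # ≈ +1717 J/mol
# Halving it lowers mu by the same amount:
print(mu_minus_mu_standard(298.0, 0.5))  # ≈ -1717 J/mol
```

The logarithmic dependence means compressing a gas tenfold costs the same chemical-potential increase at any starting pressure.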
For a system at equilibrium, the Gibbs-Duhem equation must hold: $\sum_i n_i d\mu_i = 0 \label{eq1}$ This relationship places a compositional constraint upon any changes in the chemical potential in a mixture at constant temperature and pressure for a given composition. This result is easily derived when one considers that $\mu_i$ represents the partial molar Gibbs function for component $i$. And as with other partial molar quantities, $G_{tot} = \sum_i n_i \mu_i \nonumber$ Taking the derivative of both sides yields $dG_{tot} = \sum_i n_i d \mu_i + \sum_i \mu_i d n_i \nonumber$ But $dG$ can also be expressed as $dG = Vdp - SdT + \sum_i \mu_i d n_i \nonumber$ Setting these two expressions equal to one another $\sum_i n_i d \mu_i + \cancel{ \sum_i \mu_i d n_i } = Vdp - SdT + \cancel{ \sum_i \mu_i d n_i} \nonumber$ And after canceling terms, one gets $\sum_i n_i d \mu_i = Vdp - SdT \label{eq41}$ For a system at constant temperature and pressure ($dp = 0$ and $dT = 0$) $Vdp - SdT = 0 \label{eq42}$ Substituting Equation \ref{eq42} into \ref{eq41} results in the Gibbs-Duhem equation (Equation \ref{eq1}). This expression relates how the chemical potential can change for a given composition while the system maintains equilibrium. So for a binary system, consisting of components $A$ and $B$ (the two most often studied compounds in all of chemistry) $d\mu_B = -\dfrac{n_A}{n_B} d\mu_A \nonumber$ 7.05: Non-ideality in Gases - Fugacity The relationship for chemical potential $\mu = \mu^o + RT \ln \left( \dfrac{p}{p^o} \right) \nonumber$ was derived assuming ideal gas behavior. But for real gases that deviate widely from ideal behavior, the expression has only limited applicability. In order to use the simple expression on real gases, a “fudge” factor is introduced called fugacity. Using fugacity instead of pressure, the chemical potential expression becomes $\mu = \mu^o + RT \ln \left( \dfrac{f}{f^o} \right) \nonumber$ where $f$ is the fugacity.
Fugacity is related to pressure, but contains all of the deviations from ideality within it. To see how it is related to pressure, consider that a change in chemical potential for a single component system can be expressed as $d\mu = Vdp - SdT \nonumber$ and so $\left(\dfrac{\partial \mu}{\partial p} \right)_T = V \label{eq3}$ Differentiating the expression for chemical potential above with respect to pressure at constant temperature results in $\left(\dfrac{\partial \mu}{\partial p} \right)_T = \left \{ \dfrac{\partial}{\partial p} \left[ \mu^o + RT \ln \left( \dfrac{f}{f^o} \right) \right] \right \}_T \nonumber$ which simplifies to $\left(\dfrac{\partial \mu}{\partial p} \right)_T = RT \left[ \dfrac{\partial \ln (f)}{\partial p} \right]_T = V \nonumber$ Multiplying both sides by $p/RT$ gives $p \left[ \dfrac{\partial \ln (f)}{\partial p} \right]_T = \dfrac{pV}{RT} =Z \nonumber$ where $Z$ is the compression factor as discussed previously. Now, we can use the expression above to obtain the fugacity coefficient $\gamma$, as defined by $f= \gamma p \nonumber$ Taking the natural logarithm of both sides yields $\ln f= \ln \gamma + \ln p \nonumber$ or $\ln \gamma = \ln f - \ln p \nonumber$ Differentiating with respect to pressure at constant temperature and using the result above, $\left(\dfrac{\partial \ln \gamma}{\partial p} \right)_T = \left(\dfrac{\partial \ln f}{\partial p} \right)_T - \dfrac{d \ln p}{dp} = \dfrac{Z}{p} - \dfrac{1}{p} \nonumber$ Finally, integrating from $0$ to $p$ yields $\ln \gamma = \int_0^{p} \left( \dfrac{ Z-1}{p}\right) dp \nonumber$ If the gas behaves ideally, $\gamma = 1$. In general, this will be the limiting value as $p \rightarrow 0$ since all gases behave ideally as the pressure approaches 0.
The advantage to using the fugacity in this manner is that it allows one to use the expression $\mu = \mu^o + RT \ln \left( \dfrac{f}{f^o}\right) \nonumber$ to calculate the chemical potential, ensuring that Equation \ref{eq3} holds even for gases that deviate from ideal behavior!
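The integral for $\ln \gamma$ can be evaluated numerically once a model for $Z(p)$ is chosen. The sketch below (Python) uses a truncated virial-like model $Z = 1 + B'p$ with an assumed constant $B'$, for which the integral is $B'p$ exactly, so the numerical result can be checked by hand:

```python
import math

def ln_fugacity_coeff(Z_of_p, p, n=10000):
    """ln(gamma) = integral from 0 to p of (Z - 1)/p' dp'.
    Simple right-endpoint rectangle rule; the integrand stays finite
    as p' -> 0 provided Z -> 1 at zero pressure."""
    dp = p / n
    return sum((Z_of_p((i + 1) * dp) - 1.0) / ((i + 1) * dp) for i in range(n)) * dp

B_prime = -7.0e-4                  # assumed virial-like coefficient, 1/bar
Z = lambda pr: 1.0 + B_prime * pr  # Z < 1: attraction-dominated gas

p = 50.0  # bar
lg = ln_fugacity_coeff(Z, p)
print(lg)            # ≈ -0.035 (= B'p for this model)
print(math.exp(lg))  # gamma ≈ 0.966, so f = gamma * p ≈ 48.3 bar
```

For a real gas one would supply $Z(p)$ from an equation of state or from tabulated compressibility data in place of the assumed linear model.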
Colligative properties are important properties of solutions as they describe how the properties of the solvent will change as solute (or solutes) is (are) added. Before discussing these important properties, let us first review some definitions. • Solution – a homogeneous mixture. • Solvent – The component of a solution with the largest mole fraction • Solute – Any component of a solution that is not the solvent. Solutions can exist in solid (alloys of metals are an example of solid-phase solutions), liquid, or gaseous (aerosols are examples of gas-phase solutions) forms. For the most part, this discussion will focus on liquid-phase solutions. Freezing Point Depression In general (and as will be discussed in Chapter 8 in more detail) a liquid will freeze when $\mu_{solid} \le \mu_{liquid} \nonumber$ As such, the freezing point of the solvent in a solution will be affected by anything that changes the chemical potential of the solvent. As it turns out, the chemical potential of the solvent is reduced by the presence of a solute. In a mixture, the chemical potential of component $A$ can be calculated by $\mu_A=\mu_A^o + RT \ln \chi_A \label{chemp}$ And because $\chi_A$ is always less than (or equal to) 1, the chemical potential is always reduced by the addition of another component. The condition under which the solvent will freeze is $\mu_{A,solid} = \mu_{A,liquid} \nonumber$ where the chemical potential of the liquid is given by Equation \ref{chemp}, which rearranges to $\dfrac{ \mu_A -\mu_A^o}{RT} = \ln \chi_A \nonumber$ To evaluate the temperature dependence of the chemical potential, it is useful to consider the temperature derivative at constant pressure.
$\left[ \dfrac{\partial}{\partial T} \left( \dfrac{\mu_A-\mu_A^o}{RT} \right) \right]_{p} = \left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \nonumber$ $- \dfrac{\mu_A - \mu_A^o}{RT^2} + \dfrac{1}{RT} \left[ \left( \dfrac{\partial \mu_A}{\partial T} \right)_p -\left( \dfrac{\partial \mu_A^o}{\partial T} \right)_p \right] =\left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \label{bigeq}$ Recalling that $\mu = H - TS \nonumber$ and $\left( \dfrac{\partial \mu}{\partial T} \right)_p =-S \nonumber$ Equation \ref{bigeq} becomes $- \dfrac{(H_A -TS_A - H_A^o + TS^o_A)}{RT^2} + \dfrac{1}{RT} \left[ -S_A + S_A^o\right] =\left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \label{bigeq2}$ And noting that in the case of the solvent freezing, $\mu_A$ refers to the solid solvent, so $H_A$ is the enthalpy of the pure solid solvent and $H_A^o$ is the enthalpy of the pure liquid solvent. So $H_A^o - H_A = \Delta H_{fus} \nonumber$ Equation \ref{bigeq2} then becomes $\dfrac{\Delta H_{fus}}{RT^2} - \cancel{ \dfrac{-S_A + S_A^o}{RT}} + \cancel{\dfrac{-S_A + S_A^o}{RT}}=\left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \nonumber$ or $\dfrac{\Delta H_{fus}}{RT^2} = \left( \dfrac{\partial \ln \chi_A}{\partial T} \right)_p \nonumber$ Separating the variables puts the equation into an integrable form. $\int_{T^o}^T \dfrac{\Delta H_{fus}}{RT^2} dT = \int d \ln \chi_A \label{int1}$ where $T^{o}$ is the freezing point of the pure solvent and $T$ is the temperature at which the solvent will begin to solidify in the solution. After integration of Equation \ref{int1}: $- \dfrac{\Delta H_{fus}}{R} \left( \dfrac{1}{T} - \dfrac{1}{T^{o}} \right) = \ln \chi_A \label{int3}$ This can be simplified further by noting that $\dfrac{1}{T} - \dfrac{1}{T^o} = \dfrac{T^o - T}{TT^o} = \dfrac{\Delta T}{TT^o} \nonumber$ where $\Delta T$ is the difference between the freezing temperature of the pure solvent and that of the solvent in the solution.
Also, for small deviations from the pure freezing point, $TT^o$ can be replaced by the approximate value $(T^o)^2$. So Equation \ref{int3} becomes $- \dfrac{\Delta H_{fus}}{R(T^o)^2} \Delta T = \ln \chi_A \label{int4}$ Further, for dilute solutions, for which $\chi_A$, the mole fraction of the solvent, is very nearly 1, then $\ln \chi_A \approx -(1 -\chi_A) = -\chi_B \nonumber$ where $\chi_B$ is the mole fraction of the solute. After a small bit of rearrangement, this results in an expression for freezing point depression of $\Delta T = \left( \dfrac{R(T^o)^2}{\Delta H_{fus}} \right) \chi_B \nonumber$ The first factor can be replaced by $K_f$: $\dfrac{R(T^o)^2}{\Delta H_{fus}} = K_f \nonumber$ which is the cryoscopic constant for the solvent. $\Delta T$ gives the magnitude of the reduction of freezing point for the solution. Since $\Delta H_{fus}$ and $T^o$ are properties of the solvent, the freezing point depression property is independent of the solute and is a property based solely on the nature of the solvent. Further, since $\chi_B$ was introduced as $(1 - \chi_A)$, it represents the sum of the mole fractions of all solutes present in the solution. It is important to keep in mind that for a real solution, freezing of the solvent changes the composition of the solution by decreasing the mole fraction of the solvent and increasing that of the solute. As such, the magnitude of $\Delta T$ will change as the freezing process continually removes solvent from the liquid phase of the solution. Boiling Point Elevation The derivation of an expression describing boiling point elevation is similar to that for freezing point depression. In short, the introduction of a solute into a liquid solvent lowers the chemical potential of the solvent, causing it to favor the liquid phase over the vapor phase. As such, the temperature must be increased to increase the chemical potential of the solvent in the liquid solution until it is equal to that of the vapor-phase solvent.
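When the cryoscopic constant is tabulated in molality units (as it usually is), the mole-fraction form above picks up a factor of the solvent molar mass: $K_f = M_{solvent}R(T^o)^2/\Delta H_{fus}$. A quick numerical check (Python) using approximate literature values for water recovers the familiar value:

```python
def cryoscopic_constant(M_solvent, T_fus, dH_fus):
    """K_f = M R T_fus^2 / dH_fus in K kg/mol (molality form).
    M_solvent in kg/mol, T_fus in K, dH_fus in J/mol."""
    R = 8.314  # J/(mol K)
    return M_solvent * R * T_fus**2 / dH_fus

# Approximate literature values for water:
M_water = 0.018015   # kg/mol
T_fus = 273.15       # K
dH_fus = 6010.0      # J/mol (enthalpy of fusion of ice)
print(cryoscopic_constant(M_water, T_fus, dH_fus))  # ≈ 1.86 K kg/mol
```

Swapping in the boiling temperature and the enthalpy of vaporization gives the ebullioscopic constant $K_b$ in the same way.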
The increase in the boiling point can be expressed as $\Delta T = K_b \chi_B \nonumber$ where $\dfrac{R(T^o)^2}{\Delta H_{vap}} = K_b \nonumber$ is called the ebullioscopic constant and, like the cryoscopic constant, is a property of the solvent that is independent of the solute or solutes. A very elegant derivation of the form of the models for freezing point depression and boiling point elevation has been shared by F. E. Schubert (Schubert, 1983). Cryoscopic and ebullioscopic constants are generally tabulated using molality as the unit of solute concentration rather than mole fraction. In this form, the equation for calculating the magnitude of the freezing point decrease or the boiling point increase is $\Delta T = K_f \,m \nonumber$ or $\Delta T = K_b \,m \nonumber$ where $m$ is the concentration of the solute in moles per kg of solvent. Some values of $K_f$ and $K_b$ are shown in the table below.

Substance | $K_f$ (°C kg mol-1) | $T^o_f$ (°C) | $K_b$ (°C kg mol-1) | $T^o_b$ (°C)
Water | 1.86 | 0.0 | 0.51 | 100.0
Benzene | 5.12 | 5.5 | 2.53 | 80.1
Ethanol | 1.99 | -114.6 | 1.22 | 78.4
CCl4 | 29.8 | -22.3 | 5.02 | 76.8

Example $1$: A solution of 3.00 g of an unknown compound in 25.0 g of CCl4 boils at 81.5 °C. What is the molar mass of the compound? Solution The approach here is to find the number of moles of solute in the solution.
First, find the concentration of the solution: $(81.5\, °C- 76.8\, °C) = \left( 5.02\, °C\,kg/mol \right) m \nonumber$ $m= 0.936\, mol/kg \nonumber$ Using the number of kg of solvent, one finds the number of moles of solute: $\left( 0.936 \,mol/\cancel{kg} \right) (0.0250\,\cancel{kg}) =0.0234 \, mol \nonumber$ The ratio of mass to moles yields the final answer: $\dfrac{3.00 \,g}{0.0234\,mol} = 128 \,g/mol \nonumber$ Vapor Pressure Lowering For much the same reason as the lowering of freezing points and the elevation of boiling points for solvents into which a solute has been introduced, the vapor pressure of a volatile solvent will be decreased due to the introduction of a solute. The magnitude of this decrease can be quantified by examining the effect the solute has on the chemical potential of the solvent. In order to establish equilibrium between the solvent in the solution and the solvent in the vapor phase above the solution, the chemical potentials of the two phases must be equal. $\mu_{vapor} = \mu_{solvent} \nonumber$ If the solute is not volatile, the vapor will be pure, so (assuming ideal behavior) $\mu_{vap}^o + RT \ln \dfrac{p'}{p^o} = \mu_A^o + RT \ln \chi_A \label{eq3}$ where $p'$ is the vapor pressure of the solvent over the solution. Similarly, for the pure solvent in equilibrium with its vapor $\mu_A^o = \mu_{vap}^o + RT \ln \dfrac{p_A}{p^o} \label{eq4}$ where $p^o$ is the standard pressure of 1 atm, and $p_A$ is the vapor pressure of the pure solvent.
Substituting Equation \ref{eq4} into Equation \ref{eq3} yields $\cancel{\mu_{vap}^o} + RT \ln \dfrac{p'}{p^o}= \left ( \cancel{\mu_{vap}^o} + RT \ln \dfrac{p_A}{p^o} \right) + RT \ln \chi_A \nonumber$ The terms for $\mu_{vap}^o$ cancel, leaving $RT \ln \dfrac{p'}{p^o}= RT \ln \dfrac{p_A}{p^o} + RT \ln \chi_A \nonumber$ Subtracting $RT \ln(p_A/p^o)$ from both sides produces $RT \ln \dfrac{p'}{p^o} - RT \ln \dfrac{p_A}{p^o} = RT \ln \chi_A \nonumber$ which rearranges to $RT \ln \dfrac{p'}{p_A} = RT \ln \chi_A \nonumber$ Dividing both sides by $RT$ and then exponentiating yields $\dfrac{p'}{p_A} = \chi_A \nonumber$ or $p'=\chi_Ap_A \label{RL}$ This last result is Raoult’s Law. A more formal derivation would use the fugacities of the vapor phases, but would look essentially the same. Also, as in the case of freezing point depression and boiling point elevation, this derivation did not rely on the nature of the solute! However, unlike freezing point depression and boiling point elevation, this derivation did not rely on the solute being dilute, so the result should apply over the entire range of concentrations of the solution. Example $2$: Consider a mixture of two volatile liquids A and B. The vapor pressure of pure A is 150 Torr at some temperature, and that of pure B is 300 Torr at the same temperature. What is the total vapor pressure above a mixture of these compounds with a mole fraction of B of 0.600? What is the mole fraction of B in the vapor that is in equilibrium with the liquid mixture? Solution Using Raoult’s Law (Equation \ref{RL}) $p_A = (0.400)(150\, Torr) =60.0 \,Torr \nonumber$ $p_B = (0.600)(300\, Torr) =180.0 \,Torr \nonumber$ $p_{tot} = p_A + p_B = 240 \,Torr \nonumber$ To get the mole fractions in the gas phase, one can use Dalton’s Law of partial pressures.
$\chi_A = \dfrac{ p_A}{p_{tot}} = \dfrac{60.0 \,Torr}{240\,Torr} = 0.250 \nonumber$ $\chi_B = \dfrac{ p_B}{p_{tot}} = \dfrac{180.0 \,Torr}{240\,Torr} = 0.750 \nonumber$ And, of course, it is also useful to note that the sum of the mole fractions is 1 (as it must be!) $\chi_A+\chi_B =1 \nonumber$ Osmotic Pressure Osmosis is a process by which solvent can pass through a semi-permeable membrane (a membrane through which solvent can pass, but not solute) from an area of low solute concentration to a region of high solute concentration. The osmotic pressure is the pressure that, when exerted on the region of high solute concentration, will halt the process of osmosis. The nature of osmosis and the magnitude of the osmotic pressure can be understood by examining the chemical potential of a pure solvent and that of the solvent in a solution. The chemical potential of the solvent in the solution (before any extra pressure is applied) is given by $\mu_A = \mu_A^o + RT \ln \chi_A \nonumber$ And since $\chi_A < 1$, the chemical potential of the solvent in a solution is always lower than that of the pure solvent. So, to prevent osmosis from occurring, something needs to be done to raise the chemical potential of the solvent in the solution. This can be accomplished by applying pressure to the solution. Specifically, the process of osmosis will stop when the chemical potential of the solvent in the solution is increased to the point of being equal to that of the pure solvent. The criterion, therefore, for osmosis to cease is $\mu_A^o(p) = \mu_A(\chi_A, p+\pi) \nonumber$ To determine the magnitude of the osmotic pressure $\pi$, the pressure dependence of the chemical potential is needed in addition to understanding the effect the solute has on lowering the chemical potential of the solvent in the solution.
The magnitude, therefore, of the increase in chemical potential due to the application of excess pressure $\pi$ must be equal to the magnitude of the reduction of chemical potential by the reduced mole fraction of the solvent in the solution. We already know that the chemical potential of the solvent in the solution is reduced by an amount given by $\mu^o_A - \mu_A = -RT \ln \chi_A \nonumber$ And the increase in chemical potential due to the application of excess pressure is given by $\mu(p+\pi) = \mu(p) + \int _{p}^{p+\pi} \left( \dfrac{\partial \mu}{\partial p} \right)_T dp \nonumber$ The integral on the right can be evaluated by recognizing $\left( \dfrac{\partial \mu}{\partial p} \right)_T = V \nonumber$ where $V$ is the molar volume of the substance. Combining these expressions results in $-RT \ln \chi_A = \int_{p}^{p+\pi} V\,dp \nonumber$ If the molar volume of the solvent is independent of pressure (has a very small value of $\kappa_T$, which is the case for most liquids) the term on the right becomes $\int_{p}^{p+\pi} V\,dp = \left. V p \right |_{p}^{p+\pi} = V\pi \nonumber$ Also, for values of $\chi_A$ very close to 1, $\ln \chi_A \approx -(1- \chi_A) = - \chi_B \nonumber$ So, for dilute solutions $\chi_B RT = V\pi \nonumber$ or, after rearrangement, $\pi = \dfrac{\chi_B RT}{V} \nonumber$ again, where $V$ is the molar volume of the solvent. And finally, since $\chi_B/V$ approximates the molar concentration of the solute $B$ for cases where $n_B \ll n_A$, one can write a simplified version of the expression which can be used in the case of very dilute solutions: $\pi = [B]RT \nonumber$ When a pressure exceeding the osmotic pressure $\pi$ is applied to the solution, the chemical potential of the solvent in the solution can be made to exceed that of the pure solvent on the other side of the membrane, causing reverse osmosis to occur. This is a very effective method, for example, for recovering pure water from a mixture such as a salt/water solution.
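The dilute-solution result $\pi = [B]RT$ is simple to evaluate numerically. A minimal sketch follows; the 0.100 M concentration is an arbitrary illustrative choice, not a value from the text:

```python
# van 't Hoff osmotic pressure for a dilute solution: pi = [B] * R * T
R = 0.08206  # L atm mol^-1 K^-1

def osmotic_pressure(conc, T):
    """conc in mol/L, T in K -> osmotic pressure in atm."""
    return conc * R * T

pi = osmotic_pressure(0.100, 298.0)  # a hypothetical 0.100 M solution at 25 degrees C
print(round(pi, 2))  # 2.45 atm -- large, even for a modest concentration
```

The large magnitude of osmotic pressures, compared with the corresponding freezing point depressions for the same solution, is why osmometry is a sensitive route to the molar masses of large solutes.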
The maximum solubility of a solute can be determined using the same methods we have used to describe colligative properties. The chemical potential of the solute in a liquid solution can be expressed $\mu_{B} (solution) = \mu_B^o (liquid) + RT \ln \chi_B \nonumber$ If this chemical potential is lower than that of a pure solid solute, the solute will dissolve into the liquid solvent (in order to achieve a lower chemical potential!) So the point of saturation is reached when the chemical potential of the solute in the solution is equal to that of the pure solid solute. $\mu_B^o (solid) = \mu_B^o (liquid) + RT \ln \chi_B \nonumber$ Since the mole fraction at saturation is of interest, we can solve for $\ln \chi_B$: $\ln \chi_B = \dfrac{\mu_B^o (solid) - \mu_B^o (liquid)}{RT} \nonumber$ The difference in the chemical potentials is the negative of the molar Gibbs function for the phase change of fusion. So this can be rewritten $\ln \chi_B = \dfrac{-\Delta G_{fus}^o}{RT} \nonumber$ It would be convenient if the solubility could be expressed in terms of the enthalpy of fusion for the solute rather than the Gibbs function change. Fortunately, the Gibbs-Helmholtz equation gives us a means of making this change.
Noting that $\left( \dfrac{\partial \left( \dfrac{\Delta G}{T} \right)}{\partial T} \right)_p = -\dfrac{\Delta H}{T^2} \nonumber$ differentiation of the above expression for $\ln \chi_B$ with respect to $T$ at constant $p$ yields $\left( \dfrac{\partial \ln \chi_B}{\partial T} \right)_p = \dfrac{1}{R} \dfrac{\Delta H_{fus}}{T^2} \nonumber$ Separating the variables puts this into an integrable form that can be used to see how solubility will vary with temperature: $\int_0^{\ln \chi_B} d \ln \chi_B = \dfrac{1}{R} \int_{T_f}^{T} \dfrac{\Delta H_{fus} dT}{T^2} \nonumber$ So if the enthalpy of fusion is constant over the temperature range of $T_f$ to the temperature of interest, $\ln \chi_B = \dfrac{\Delta H_{fus}}{R} \left( \dfrac{1}{T_f} - \dfrac{1}{T} \right) \nonumber$ And $\chi_B$ will give the mole fraction of the solute in a saturated solution at the temperature $T$. The value depends on both the enthalpy of fusion and the normal melting point of the solute. 7.08: Non-ideality in Solutions - Activity The bulk of the discussion in this chapter dealt with ideal solutions. However, real solutions will deviate from this kind of behavior. Much as in the case of gases, where fugacity was introduced to allow us to use the ideal models, activity is used to allow for the deviation of real solutes from limiting ideal behavior. The activity of a solute is related to its concentration by $a_B=\gamma \dfrac{m_B}{m^o} \nonumber$ where $\gamma$ is the activity coefficient, $m_B$ is the molality of the solute, and $m^o$ is unit molality. The activity coefficient is unitless in this definition, and so the activity itself is also unitless. Furthermore, the activity coefficient approaches unity as the molality of the solute approaches zero, ensuring that dilute solutions behave ideally.
The use of activity to describe the solute allows us to use the simple model for chemical potential by inserting the activity of a solute in place of its mole fraction: $\mu_B =\mu_B^o + RT \ln a_B \nonumber$ The problem that then remains is the measurement of the activity coefficients themselves, which may depend on temperature, pressure, and even concentration. Activity Coefficients for Ionic Solutes For an ionic substance that dissociates upon dissolving $MX(s) \rightarrow M^+(aq) + X^-(aq) \nonumber$ the chemical potential of the cation can be denoted $\mu_+$ and that of the anion as $\mu_-$. For a solution, the total molar Gibbs function of the solutes is given by $G = \mu_+ + \mu_- \nonumber$ where $\mu = \mu^* + RT \ln a \nonumber$ where $\mu^*$ denotes the chemical potential of an ideal solution, and $a$ is the activity of the solute. Substituting this into the above relationship yields $G = \mu^*_+ + RT \ln a_+ + \mu_-^* + RT \ln a_- \nonumber$ Using a molal definition for the activity coefficient $a_i = \gamma_im_i \nonumber$ the expression for the total molar Gibbs function of the solutes becomes $G = \mu_+^* + RT \ln \gamma_+ m_+ + \mu_-^* + RT \ln \gamma_- m_- \nonumber$ This expression can be rearranged to yield $G = \mu_+^* + \mu_-^* + RT \ln m_+m_- + RT \ln \gamma_+\gamma _- \nonumber$ where all of the deviation from ideal behavior comes from the last term. Unfortunately, it is impossible to experimentally deconvolute the term into the specific contributions of the two ions. So instead, we use a geometric average to define the mean activity coefficient, $\gamma _\pm$.
$\gamma_{\pm} = \sqrt{\gamma_+\gamma_-} \nonumber$ For a substance that dissociates according to the general process $M_xX_y(s) \rightarrow x M^{y+} (aq) + yX^{x-} (aq) \nonumber$ the expression for the mean activity coefficient is given by $\gamma _{\pm} = (\gamma_+^x \gamma_-^y)^{1/(x+y)} \nonumber$ Debye-Hückel Law In 1923, Debye and Hückel (Debye & Hückel, 1923) suggested a means of calculating the mean activity coefficients from experimental data. Briefly, they suggest that $\log _{10} \gamma_{\pm} = -\dfrac{1.824 \times 10^6}{(\epsilon T)^{3/2}} |z_+z_- | \sqrt{I} \nonumber$ where $\epsilon$ is the dielectric constant of the solvent, $T$ is the temperature in K, $z_+$ and $z_-$ are the charges on the ions, and $I$ is the ionic strength of the solution. $I$ is given by $I = \dfrac{1}{2} \dfrac{m_+ z_+^2 + m_-z_-^2}{m^o} \nonumber$ For a solution in water at 25 °C, the prefactor evaluates to approximately 0.509, so the limiting law takes the familiar form $\log_{10} \gamma_{\pm} = -0.509\, |z_+z_-| \sqrt{I} \nonumber$
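A sketch of the limiting-law calculation for water at 25 °C (taking $\epsilon \approx 78.5$; note that the limiting law carries a negative sign, so $\gamma_\pm < 1$ for any ionic solute):

```python
from math import sqrt

def mean_activity_coeff(z_plus, z_minus, I, eps=78.54, T=298.15):
    """Debye-Hueckel limiting law: log10(gamma_pm) = -A * |z+ z-| * sqrt(I).

    A = 1.824e6 / (eps*T)**1.5, which is about 0.509 for water at 25 C.
    I is the (dimensionless) ionic strength on a molality basis."""
    A = 1.824e6 / (eps * T) ** 1.5
    log10_gamma = -A * abs(z_plus * z_minus) * sqrt(I)
    return 10.0 ** log10_gamma

# 0.010 mol/kg NaCl: I = 0.5 * (0.010 * 1**2 + 0.010 * 1**2) = 0.010
gamma = mean_activity_coeff(1, -1, 0.010)
print(round(gamma, 3))  # 0.889 -- below 1, as the limiting law requires
```

Even at a modest 0.010 mol/kg, the mean activity coefficient is noticeably below unity, which is why ionic solutions deviate from ideal behavior at much lower concentrations than solutions of neutral solutes.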
Q7.1 The compression factor ($Z$) for O2 at 200 K is measured to have the following values:

p (atm)	Z
1.000	0.9970
4.000	0.9880
7.000	0.9788
10.000	0.9700

Using numerical integration, calculate the fugacity coefficient for O2 at 200 K from these data. Q7.2 The normal boiling point of ethanol is 78.4 °C. Its enthalpy of vaporization is 38.6 kJ/mol. Estimate the vapor pressure of ethanol at 24.4 °C. Q7.3 When 20.0 grams of an unknown nonelectrolyte compound are dissolved in 500.0 grams of benzene, the freezing point of the resulting solution is 3.77 °C. The freezing point of pure benzene is 5.444 °C and the cryoscopic constant ($K_f$) for benzene is 5.12 °C/m. What is the molar mass of the unknown compound? Q7.4 Consider a mixture of two volatile liquids, A and B. The vapor pressure of pure liquid A is 324.3 Torr and that of pure liquid B is 502.3 Torr. What is the total vapor pressure over a mixture of the two liquids for which xB = 0.675? Q7.5 Consider the following expression for osmotic pressure $\pi V = \chi_BRT$ where $\pi$ is the osmotic pressure, $V$ is the molar volume of the solvent, $\chi_B$ is the mole fraction of the solute, $R$ is the gas law constant, and $T$ is the temperature (in Kelvin). The molar volume of a particular solvent is 0.0180 L/mol. 0.200 g of a solute (B) is dissolved in 1.00 mol of the solvent. The osmotic pressure of the solution is then measured to be 0.640 atm at 298 K. Calculate the molar mass of the solute. Q7.6 At 300 K, the vapor pressure of HCl(g) over a solution of $HCl$ in $GeCl_4$ is summarized in the following table. Calculate the Henry’s Law constant for HCl based on these data.

$\chi_{HCl}$	$P_{HCl}$ (kPa)
0.005	32.0
0.012	76.9
0.019	121.8

Q7.7 Consider the mixing of 1.00 mol of hexane (C6H14) with 1.00 mol of benzene (C6H6). Calculate $\Delta H$, $\Delta S$, and $\Delta G$ of mixing, if the mixing occurs ideally at 298 K. Contributors and Attributions • Patrick E.
Fleming (Department of Chemistry and Biochemistry; California State University, East Bay) 7.S: Mixtures and Solutions (Summary) Learning Objectives After mastering the material in this chapter, one will be able to 1. Describe the thermodynamics of mixing and calculate \(\Delta H\), \(\Delta S\), and \(\Delta G\) of mixing for an ideal solution. 2. Define chemical potential, and calculate its value as a function of pressure and composition. 3. Derive expressions for the colligative properties and perform calculations using the relationships. 4. Estimate the maximum solubility of a solute in a solvent based on the concept of equality of chemical potential at saturation. 5. Define fugacity and activity. 6. Calculate the mean activity coefficients of ions in solution based on the ionic strength of the solution. Vocabulary and Concepts • activity • activity coefficient • chemical potential • cryoscopic constant • ebullioscopic constant • enthalpy of mixing • fugacity • fugacity coefficient • Gibbs-Duhem equation • ideal mixture • ionic strength • mean activity coefficient • osmosis • osmotic pressure • Raoult’s Law • solute • solution • solvent • the partial molar Gibbs function
• 8.1: Prelude to Phase Equilibrium From the very elementary stages of our journey to describe the physical nature of matter, we learn to classify matter into three (or more) phases: solid, liquid, and gas. This is a fairly easy classification system that can be based on such simple ideas as shape and volume. • 8.2: Single Component Phase Diagrams The stability of phases can be predicted by the chemical potential, in that the most stable form of the substance will have the minimum chemical potential at the given temperature and pressure. • 8.3: Criterion for Phase Equilibrium The thermodynamic criterion for phase equilibrium is simple. It is based upon the chemical potentials of the components in a system. For simplicity, consider a system with only one component. For the overall system to be in equilibrium, the chemical potential of the compound in each phase present must be the same. • 8.4: The Clapeyron Equation Based on the thermodynamic criterion for equilibrium, it is possible to draw some conclusions about the state variables p and T and how they are related along phase boundaries. This results in the Clapeyron equation. • 8.5: The Clausius-Clapeyron Equation The Clapeyron equation can be developed further for phase equilibria involving the gas phase as one of the phases. This is the case for either sublimation (solid → gas) or vaporization (liquid → gas). • 8.6: Phase Diagrams for Binary Mixtures As suggested by the Gibbs Phase Rule, the most important variables describing a mixture are pressure, temperature and composition. In the case of single component systems, composition is not important so only pressure and temperature are typically depicted on a phase diagram. However, for mixtures with two components, the composition is of vital importance, so there is generally a choice that must be made as to whether the other variable to be depicted is temperature or pressure.
• 8.7: Liquid-Vapor Systems - Raoult’s Law Liquids tend to be volatile, and as such will enter the vapor phase when the temperature is increased to a high enough value (provided they do not decompose first!) A volatile liquid is one that has an appreciable vapor pressure at the specified temperature. An ideal mixture containing at least one volatile liquid can be described using Raoult’s Law. • 8.8: Non-ideality - Henry's Law and Azeotropes The preceding discussion was based on the behavior of ideal solutions of volatile compounds, in which both compounds follow Raoult’s Law. Henry’s Law can be used to describe deviations from this ideal behavior. • 8.9: Solid-Liquid Systems - Eutectic Points Phase diagrams are often complex, with multiple phases that exhibit differing non-ideal behavior such as minimum boiling azeotropes, eutectic points (the composition for which the mixture of the two solids has the lowest melting point), and incongruent melting, where the stable compound formed by two solids is only stable in the solid phase and will decompose upon melting. • 8.10: Cooling Curves The method that is used to map the phase boundaries on a phase diagram is to measure the rate of cooling for a sample of known composition. The rate of cooling will change as the sample (or some portion of it) begins to undergo a phase change. These “breaks” will appear as changes in slope in the temperature-time curve. • 8.E: Phase Equilibrium (Exercises) Exercises for Chapter 8 "Phase Equilibrium" in Fleming's Physical Chemistry Textmap. • 8.S: Phase Equilibrium (Summary) Summary for Chapter 8 "Phase Equilibrium" in Fleming's Physical Chemistry Textmap. 08: Phase Equilibrium From the very elementary stages of our journey to describe the physical nature of matter, we learn to classify matter into three (or more) phases: solid, liquid, and gas. This is a fairly easy classification system that can be based on such simple ideas as shape and volume.
Phase	Shape	Volume
Solid	Fixed	Fixed
Liquid	Variable	Fixed
Gas	Variable	Variable

As we have progressed, we have seen that solids and liquids are not completely incompressible, as they may have non-zero values of $\kappa_T$. And we learn that there are a number of finer points to describing the nature of the phases about which we all learn in grade school. In this chapter, we will employ some of the tools of thermodynamics to explore the nature of phase boundaries and see what we can conclude about them.
The stability of phases can be predicted by the chemical potential, in that the most stable form of the substance will have the minimum chemical potential at the given temperature and pressure. This can be summarized in a phase diagram like the one shown below. In this diagram, the phase boundaries can be determined by measuring the rate of cooling at constant pressure. A typical cooling curve is shown below. The temperature will decrease over time as a sample is allowed to cool. When the substance undergoes a phase change, say from liquid to solid, the temperature will stop changing while heat is extracted due to the phase change. The temperature of the halt provides one point on the phase boundary, at the pressure at which the cooling curve was measured. The same data can be obtained by heating the system using a technique such as differential scanning calorimetry. In this experiment, heat is supplied to a sample at a constant rate, and the temperature of the sample is measured, with breaks occurring at the phase change temperatures. 8.03: Criterion for Phase Equilibrium The thermodynamic criterion for phase equilibrium is simple. It is based upon the chemical potentials of the components in a system. For simplicity, consider a system with only one component. For the overall system to be in equilibrium, the chemical potential of the compound in each phase present must be the same. Otherwise, there will be some mass migration from one phase to another, decreasing the total chemical potential of the phase from which material is being removed, and increasing the total chemical potential of the phase into which the material is being deposited. So for each pair of phases present ($\alpha$ and $\beta$) the following must be true: $\mu_\alpha = \mu_\beta \nonumber$ Gibbs Phase Rule The Gibbs phase rule describes the number of compositional and phase variables that can be varied freely for a system at equilibrium.
For each phase present in a system, the mole fraction of all but one component can be varied independently. However, the relationship $\sum_i \chi_i =1 \nonumber$ places a constraint on the last mole fraction. As such, there are $C – 1$ compositional degrees of freedom for each phase present, where $C$ is the number of components in the mixture. Similarly, the chemical potential of each component must be the same in every phase present, leading to $P – 1$ thermodynamic constraints placed on each component. Finally, there are two state variables that can be varied (such as pressure and temperature), adding two additional degrees of freedom to the system. The net number of degrees of freedom is determined by adding all of the degrees of freedom and subtracting the number of thermodynamic constraints. \begin{align} F &= 2+ P(C-1) - C(P-1) \nonumber \\[4pt] &= 2 + PC - P -PC +C \nonumber \\[4pt] &= 2+C-P \label{Phase} \end{align} Equation \ref{Phase} is the Gibbs phase rule. Example $1$: Show that the maximum number of phases that can co-exist at equilibrium for a single component system is $P = 3$. Solution The maximum number of phases will occur when the number of degrees of freedom is zero. \begin{align*} 0 &= 2+1 -P \\[4pt] P&=3 \end{align*} Note: This shows that there can never be a “quadruple point” for a single component system! Because a system at its triple point has no degrees of freedom, the triple point makes a very convenient physical condition at which to define a temperature. For example, the International Temperature Scale of 1990 (ITS-90) uses the triple points of hydrogen, neon, oxygen, argon, mercury, and water to define several low temperatures. (The calibration of a platinum resistance thermometer at the triple point of argon, for example, is described by Strouse (Strouse, 2008)).
The advantage of using a triple point is that the compound sets both the temperature and pressure, rather than forcing the researcher to set a pressure and then measure the temperature of a phase change, introducing an extra parameter that can introduce uncertainty into the measurement.
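The counting in the Gibbs phase rule is simple enough to capture in a one-line function, which makes checking cases like the triple point mechanical:

```python
def degrees_of_freedom(C, P):
    """Gibbs phase rule: F = C - P + 2 for C components and P phases."""
    return C - P + 2

print(degrees_of_freedom(1, 3))  # 0 -- the triple point of a pure substance
print(degrees_of_freedom(1, 2))  # 1 -- along a two-phase boundary, p fixes T
print(degrees_of_freedom(2, 2))  # 2 -- e.g. T and composition for a binary two-phase system
```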
Based on the thermodynamic criterion for equilibrium, it is possible to draw some conclusions about the state variables $p$ and $T$ and how they are related along phase boundaries. First, the chemical potentials of the two phases $\alpha$ and $\beta$ in equilibrium with one another must be equal. $\mu_{\alpha} = \mu_{\beta} \label{eq1}$ Also, any infinitesimal changes to the chemical potential of one phase must be offset by an infinitesimal change to the chemical potential of the other phase that is equal in magnitude. $\mu_{\alpha} + d\mu_{\alpha} = \mu_{\beta}+ d\mu_{\beta} \label{eq2}$ Taking the difference between Equations \ref{eq1} and \ref{eq2} shows that $d\mu_{\alpha} = d\mu_{\beta} \nonumber$ And since $d\mu$ can be expressed in terms of molar volume and molar entropy $d\mu = Vdp - SdT \nonumber$ it is clear that there will be constraints placed on changes of temperature and pressure while maintaining equilibrium between the phases. $V_{\alpha} dp - S_{\alpha} dT = V_{\beta} dp - S_{\beta} dT \nonumber$ Gathering pressure terms on one side and temperature terms on the other $(V_{\alpha} - V_{\beta} ) dp = (S_{\alpha} - S_{\beta}) dT \nonumber$ The differences $V_{\alpha} - V_{\beta}$ and $S_{\alpha} - S_{\beta}$ are the changes in molar volume and molar entropy for the phase change respectively. So the expression can be rewritten $\Delta V dp = \Delta S dT \nonumber$ or $\dfrac{dp}{dT} = \dfrac{\Delta S}{\Delta V} \label{clap1}$ Equation \ref{clap1} is the Clapeyron equation. This expression makes it easy to see how the phase diagram for water is qualitatively different than that for most substances. Specifically, the negative slope of the solid-liquid boundary on a pressure-temperature phase diagram for water is very unusual, and arises due to the fact that for water, the molar volume of the liquid phase is smaller than that of the solid phase.
Given that for a phase change $\Delta S_{phase} = \dfrac{\Delta H_{phase}}{T} \nonumber$ the Clapeyron equation is sometimes written $\dfrac{dp}{dT} = \dfrac{\Delta H}{T \Delta V} \label{clap2}$ Example $1$: Freezing Water Calculate the magnitude of the change in freezing point for water ($\Delta H_{fus} = 6.009\, kJ/mol$; the density of ice is $\rho_{ice} = 0.9167\, g/cm^3$ while that for liquid water is $\rho_{liquid} = 0.9999\, g/cm^3$) for an increase in pressure of $1.00\, atm$ at $273\, K$. Solution The molar volume of ice is given by $\left( \dfrac{18.016\, g}{1\,mol} \right) \left(\dfrac{1\,cm^3}{0.9167\, g} \right)\left(\dfrac{1\, L}{1000\,cm^3} \right) = 0.01965 \, \dfrac{L}{mol} \nonumber$ The molar volume of liquid water at 0 °C is given by $\left( \dfrac{18.016\, g}{1\,mol} \right) \left(\dfrac{1\,cm^3}{0.9999\, g} \right)\left(\dfrac{1\, L}{1000\,cm^3} \right) = 0.01802 \, \dfrac{L}{mol} \nonumber$ So $\Delta V$ for the phase change of $\text{solid} \rightarrow \text{liquid}$ (which corresponds to an endothermic change) is $0.01802 \, \dfrac{L}{mol} - 0.01965 \, \dfrac{L}{mol} = -1.63 \times 10^{-3} \, \dfrac{L}{mol} \nonumber$ To find the change in temperature, use the Clapeyron Equation (Equation \ref{clap2}) and separate the variables $dp = \dfrac{\Delta H_{fus}}{\Delta V} \dfrac{dT}{T} \nonumber$ Integration (with the assumption that $\Delta H_{fus}/\Delta V$ does not change much over the temperature range) yields $\int_{p_1}^{p_2} dp = \dfrac{\Delta H_{fus}}{\Delta V} \int_{T_1}^{T_2}\dfrac{dT}{T} \nonumber$ $p_2-p_1 = \Delta p = \dfrac{\Delta H_{fus}}{\Delta V} \ln \left( \dfrac{T_2}{T_1} \right) \nonumber$ or $T_2 = T_1\, \text{exp} \left(\dfrac{\Delta V \Delta p}{\Delta H_{fus}} \right) \nonumber$ so $T_2 = (273\,K) \, \text{exp} \left(\dfrac{(1\, atm)\left(-1.63 \times 10^{-3} \, \dfrac{L}{mol} \right) }{6009 \dfrac{J}{mol} } \underbrace{\left( \dfrac{8.314\,J}{0.08206 \, atm\,L} \right)}_{\text{conversion factor}} \right) = 272.992\,K \nonumber$ $\Delta T = T_2-T_1 = 272.992\,K - 273\,K = -0.0075 \,K \nonumber$ So the melting point will decrease by about 0.0075 K for each additional atmosphere of pressure. Note that the phase with the smaller molar volume is favored at the higher pressure (as expected from Le Chatelier's principle)!
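Because the unit handling in this kind of calculation (molar volumes as $M/\rho$, and the conversion between J and L·atm) is easy to get wrong, a short numerical sketch is a useful check on the ice/water result:

```python
from math import exp

# Pressure dependence of the melting point of ice via the integrated Clapeyron
# equation, T2 = T1 * exp(dV * dp / dH_fus), with explicit unit handling.
M = 18.016                       # g/mol
V_solid = M / 0.9167 / 1000.0    # L/mol (molar volume = molar mass / density)
V_liquid = M / 0.9999 / 1000.0   # L/mol
dV = V_liquid - V_solid          # L/mol; negative, since ice is less dense than water
dH_fus = 6009.0                  # J/mol
dp = 1.00                        # atm
J_per_L_atm = 8.314 / 0.08206    # ~101.3 J per L*atm

T1 = 273.0
T2 = T1 * exp(dV * dp * J_per_L_atm / dH_fus)
print(round(T2 - T1, 4))  # -0.0075 K per atm of added pressure
```

The tiny magnitude of the shift is the expected result: liquid and solid molar volumes differ by only a fraction of a percent, so enormous pressures are needed to move the freezing point appreciably.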
The Clapeyron equation can be developed further for phase equilibria involving the gas phase as one of the phases. This is the case for either sublimation ($\text{solid} \rightarrow \text{gas}$) or vaporization ($\text{liquid} \rightarrow \text{gas}$). In the case of vaporization, the change in molar volume can be expressed $\Delta V = V_{gas} -V_{liquid} \nonumber$ Since substances undergo a very large increase in molar volume upon vaporization, the molar volume of the condensed phase (liquid in this case) is negligibly small compared to the molar volume of the gas (i.e., $V_{gas} \gg V_{liquid}$). So, $\Delta V \approx V_{gas} \nonumber$ And if the vapor can be treated as an ideal gas, $V_{gas} = \dfrac{RT}{p} \nonumber$ Substitution into the Clapeyron equation yields $\dfrac{dp}{dT} = \dfrac{p\Delta H_{vap}}{RT^2} \nonumber$ Separating the variables puts the equation into an integrable form. $\dfrac{dp}{p} = \dfrac{\Delta H_{vap}}{R} \dfrac{dT}{T^2} \label{diffCC}$ Noting that $\dfrac{dT}{T^2} =- d\left(\dfrac{1}{T} \right) \nonumber$ makes the integration very easy. If the enthalpy of vaporization is independent of temperature over the range of conditions, $\int_{p_1}^{p_2} \dfrac{dp}{p} = - \dfrac{\Delta H_{vap}}{R} \int_{T_1}^{T_2} d\left(\dfrac{1}{T} \right) \nonumber$ $\ln \left( \dfrac{p_2}{p_1}\right) = - \dfrac{\Delta H_{vap}}{R} \left( \dfrac{1}{T_2} -\dfrac{1}{T_1} \right) \label{CC}$ This is the Clausius-Clapeyron equation. It can also be used to describe the boundary between solid and vapor phases by substituting the enthalpy of sublimation ($\Delta H_{sub}$). Example $1$ The vapor pressure of a liquid triples when the temperature is increased from 25 °C to 45 °C. What is the enthalpy of vaporization for the liquid? Solution The problem can be solved using the Clausius-Clapeyron equation (Equation \ref{CC}).
The following values can be used: $p_2 = 3 p_1$ $T_2 = 318\, K$ $p_1 = p_1$ $T_1 = 298\, K$ Substitution into the Clausius-Clapeyron equation yields $\ln \left( \dfrac{3p_1}{p_1}\right) = - \dfrac{\Delta H_{vap}}{8.314 \dfrac{J}{mol\,K}} \left( \dfrac{1}{318\,K} -\dfrac{1}{298\,K} \right) \nonumber$ $\Delta H_{vap} = 43280 \,\dfrac{J}{mol} = 43.28 \, \dfrac{kJ}{mol} \nonumber$ The Clausius-Clapeyron equation also suggests that a plot of $\ln(p)$ vs. $1/T$ should yield a straight line, the slope of which is $-\Delta H_{vap}/R$ (provided that $\Delta H_{vap}$ is independent of temperature over the range of temperatures involved). $\ln(p) = - \dfrac{\Delta H_{vap}}{R} \left( \dfrac{1}{T} \right) + const. \nonumber$ This graphical approach is very useful when there are several pairs of measurements of vapor pressure and temperature. Such a plot is shown below for water. For water, whose vapor pressure has a very large temperature dependence, the linear relationship of $\ln(p)$ vs. $1/T$ holds fairly well over a broad range of temperatures. So even though there is some curvature to the data, a straight line fit still results in a reasonable description of the data (depending, of course, on the precision needed in the experiment.) For this fit of the data, $\Delta H_{vap}$ is found to be 43.14 kJ/mol. Temperature Dependence of $\Delta H_{vap}$ For systems that warrant it, the temperature dependence of $\Delta H_{vap}$ can be included in the derivation of the model to fit vapor pressure as a function of temperature.
For example, if the enthalpy of vaporization is assumed to take the following empirical form $\Delta H_{vap} = \Delta H_o + aT + bT^2 \nonumber$ and substituting it into the differential form of the Clausius-Clapeyron equation (Equation \ref{diffCC}) generates $\dfrac{dp}{p} = \dfrac{\Delta H_o + aT + bT^2}{R} \dfrac{dT}{T^2} \nonumber$ or $\dfrac{dp}{p} = \dfrac{\Delta H_o}{R} \dfrac{dT}{T^2} + \dfrac{a}{R} \dfrac{dT}{T} + \dfrac{b}{R} dT \nonumber$ And so the integrated form becomes $\ln (p) = - \dfrac{\Delta H_o}{R} \left(\dfrac{1}{T}\right) + \dfrac{a}{R} \ln T + \dfrac{b}{R} T + c \nonumber$ where $c$ is the constant of integration. The results of fitting these data to the temperature-dependent model are: $\Delta H_o$ = 43080 J mol-1, $a$ = 0.01058 J mol-1 K-1, $b$ = 0.000501 J mol-1 K-2, and $c$ = 20.50. This results in calculated values of $\Delta H_{vap}$ of 43.13 kJ/mol at 298 K, and 43.15 kJ/mol at 373 K. The results are a little bit skewed since there are no data above 100 °C included in the fit. A larger temperature dependence would be found if the higher-temperature data were included in the fit.
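The fitted empirical form is easy to evaluate directly; a short sketch using the fit parameters quoted above:

```python
def dH_vap(T, dH0=43080.0, a=0.01058, b=0.000501):
    """Temperature-dependent enthalpy of vaporization (J/mol) from the
    empirical fit Delta_H_vap(T) = Delta_H_0 + a*T + b*T**2, using the
    fitted parameters for water quoted in the text."""
    return dH0 + a * T + b * T**2

print(round(dH_vap(298.0) / 1000, 2), "kJ/mol")  # 43.13 kJ/mol at 298 K
print(round(dH_vap(373.0) / 1000, 2), "kJ/mol")  # 43.15 kJ/mol at 373 K
```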
As suggested by the Gibbs Phase Rule, the most important variables describing a mixture are pressure, temperature, and composition. In the case of single-component systems, composition is not important, so only pressure and temperature are typically depicted on a phase diagram. However, for mixtures with two components, the composition is of vital importance, so there is generally a choice that must be made as to whether the other variable to be depicted is temperature or pressure. Temperature-composition diagrams are very useful in the description of binary systems, many of which will form two-phase systems at a variety of temperatures and compositions. In this section, we will consider several types of cases where the compositions of binary mixtures are conveniently depicted using these kinds of phase diagrams. Partially Miscible Liquids A pair of liquids is considered partially miscible if there is a set of compositions over which the liquids will form a two-phase liquid system. This is a common situation and is the general case for a pair of liquids where one is polar and the other non-polar (such as water and vegetable oil). Another case that is commonly used in the organic chemistry laboratory is the combination of diethyl ether and water. In this case, the differential solubility of solutes in the immiscible solvents allows the two-phase liquid system to be used to separate solutes using a separatory funnel method. As is the case for most solutes, their solubility is dependent on temperature. For many binary mixtures of immiscible liquids, miscibility increases with increasing temperature. And then at some temperature (known as the upper critical temperature), the liquids become miscible in all proportions. An example of a phase diagram that demonstrates this behavior is shown in Figure $1$.
An example of a binary combination that shows this kind of behavior is that of methyl acetate and carbon disulfide, for which the critical temperature is approximately 230 K at one atmosphere (Ferloni & Spinolo, 1974). Similar behavior is seen for hexane/nitrobenzene mixtures, for which the critical temperature is 293 K. Another condition that can occur is for the two immiscible liquids to become completely miscible below a certain temperature, or to have a lower critical temperature. An example of a pair of compounds that shows this behavior is water and trimethylamine. A typical phase diagram for such a mixture is shown in Figure $2$. Some combinations of substances show both an upper and a lower critical temperature, forming two-phase liquid systems at temperatures between these two temperatures. An example of a combination of substances that demonstrates this behavior is nicotine and water. The Lever Rule The composition and amount of material in each phase of a two-phase liquid can be determined using the lever rule. This rule can be explained using the following diagram. Suppose that the temperature and composition of the mixture are given by point b in the above diagram. The horizontal line segment that passes through point b is terminated at points a and c, which indicate the compositions of the two liquid phases. Point a indicates the mole fraction of compound B ($\chi_B^A$) in the layer that is predominantly A, whereas point c indicates the composition ($\chi_B^B$) of the layer that is predominantly compound B. The relative amounts of material in the two layers are then inversely proportional to the lengths of the tie-line segments a-b and b-c, which are given by $l_A$ and $l_B$ respectively.
In terms of mole fractions, $l_A = \chi_B - \chi_B^A \nonumber$ and $l_B = \chi_B^B - \chi_B \nonumber$ The number of moles of material in the A layer ($n_A$) and the number of moles in the B layer ($n_B$) are inversely proportional to the lengths of the two lines $l_A$ and $l_B$. $n_A l_A = n_B l_B \nonumber$ Or, substituting the above definitions of the lengths $l_A$ and $l_B$, the ratio of these two lengths gives the ratio of moles in the two phases. $\dfrac{n_A}{n_B} = \dfrac{l_B}{l_A} = \dfrac{ \chi_B^B - \chi_B}{\chi_B - \chi_B^A} \nonumber$
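The lever-rule ratio is simple to compute; a minimal sketch (the tie-line compositions below are hypothetical, chosen only for illustration):

```python
def mole_ratio(x_B, x_B_A, x_B_B):
    """Return n_A/n_B, the ratio of moles in the A-rich layer to moles
    in the B-rich layer, via the lever rule:
        n_A/n_B = l_B/l_A = (x_B_B - x_B) / (x_B - x_B_A)
    x_B is the overall mole fraction of B; x_B_A and x_B_B are the
    tie-line endpoint compositions of the A-rich and B-rich layers."""
    l_A = x_B - x_B_A   # segment a-b
    l_B = x_B_B - x_B   # segment b-c
    return l_B / l_A

# Hypothetical tie line: A-rich layer at chi_B = 0.10, B-rich layer at
# chi_B = 0.80, overall composition chi_B = 0.30.
print(round(mole_ratio(0.30, 0.10, 0.80), 3))  # 2.5
```

The overall composition lies closer to the A-rich endpoint, so that layer holds 2.5 times as many moles as the B-rich layer.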
Liquids tend to be volatile, and as such will enter the vapor phase when the temperature is increased to a high enough value (provided they do not decompose first!) A volatile liquid is one that has an appreciable vapor pressure at the specified temperature. An ideal mixture containing at least one volatile liquid can be described using Raoult’s Law. Raoult’s Law Raoult’s law can be used to predict the total vapor pressure above a mixture of two volatile liquids. As it turns out, the composition of the vapor will be different than that of the two liquids, with the more volatile compound having a larger mole fraction in the vapor phase than in the liquid phase. This is summarized in the following theoretical diagram for an ideal mixture of two compounds, one having a pure vapor pressure of $p_A^o = 450\, Torr$ and the other having a pure vapor pressure of $p_B^o = 350\, Torr$. In Figure $1$, the liquid phase is represented at the top of the graph where the pressure is higher. Oftentimes, it is desirable to depict the phase diagram at a single pressure so that temperature and composition are the variables included in the graphical representation. In such a diagram, the vapor (which exists at higher temperatures) is indicated at the top of the diagram, while the liquid is at the bottom. A typical temperature vs. composition diagram is depicted in Figure $2$ for an ideal mixture of two volatile liquids. In this diagram, $T_A^o$ and $T_B^o$ represent the boiling points of pure compounds $A$ and $B$. If a system having the composition indicated by $\chi_B^c$ has its temperature increased to that indicated by point c, the system will consist of two phases: a liquid phase with a composition indicated by $\chi_B^d$ and a vapor phase with a composition indicated by $\chi_B^b$. The relative amounts of material in each phase can be described by the lever rule, as described previously.
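For the ideal two-component case, Raoult's Law plus Dalton's Law gives both the total pressure and the vapor composition; a short sketch using the pure vapor pressures from the theoretical diagram (450 and 350 Torr):

```python
def ideal_vapor(x_B, pA0, pB0):
    """Total vapor pressure and vapor-phase mole fraction of B above an
    ideal binary liquid mixture (Raoult's Law + Dalton's Law)."""
    p_A = (1.0 - x_B) * pA0   # partial pressure of A
    p_B = x_B * pB0           # partial pressure of B
    p_tot = p_A + p_B
    return p_tot, p_B / p_tot

# Equimolar liquid with pA0 = 450 Torr and pB0 = 350 Torr:
p_tot, y_B = ideal_vapor(0.5, 450.0, 350.0)
print(p_tot, round(y_B, 4))  # 400.0 Torr; y_B = 0.4375
```

Since $y_B < \chi_B$, the vapor is richer in A, the more volatile component, exactly as the diagram indicates.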
Further, if the vapor with composition $\chi_B^b$ is condensed (the temperature is lowered to that indicated by point b') and revaporized, the new vapor will have the composition consistent with $\chi_B^{a}$. This demonstrates how the more volatile liquid (the one with the lower boiling temperature, which is A in the case of the above diagram) can be purified from the mixture by collecting and re-evaporating fractions of the vapor. If the liquid were the desired product, one would collect fractions of the residual liquid to achieve the desired result. This process is known as distillation. 8.08: Non-ideality - Henry's Law and Azeotropes The preceding discussion was based on the behavior of ideal solutions of volatile compounds, for which both compounds follow Raoult’s Law. Real solutions, however, often show deviations from Raoult's Law, particularly for the component present at low mole fraction. Henry’s Law can be used to describe these deviations. $p_B = k_H \chi_B \nonumber$ For which the Henry’s Law constant ($k_H$) is determined for the specific compound. Henry’s Law is often used to describe the solubilities of gases in liquids. The relationship to Raoult’s Law is summarized in Figure $1$. Henry’s Law is depicted by the upper straight line and Raoult’s Law by the lower. Example $1$: Solubility of Carbon Dioxide in Water The solubility of $CO_2(g)$ in water at 25 oC is 3.32 x 10-2 M with a partial pressure of $CO_2$ over the solution of 1 bar. Assuming the density of a saturated solution to be 1 kg/L, calculate the Henry’s Law constant for $CO_2$. Solution In one L of solution, there is 1000 g of water (assuming the mass of CO2 dissolved is negligible.)
$(1000 \,g) \left( \dfrac{1\, mol}{18.02\,g} \right) = 55.5\, mol\, H_2O \nonumber$ The solubility of $CO_2$ can be used to find the number of moles of $CO_2$ dissolved in 1 L of solution also: $\dfrac{3.32 \times 10^{-2} mol}{L} \cdot 1 \,L = 3.32 \times 10^{-2} mol\, CO_2 \nonumber$ and so the mole fraction of $CO_2$ is $\chi_b = \dfrac{3.32 \times 10^{-2} mol}{55.5 \, mol} = 5.98 \times 10^{-4} \nonumber$ And so $10^5\, Pa = 5.98 \times 10^{-4} k_H \nonumber$ or $k_H = 1.67 \times 10^8\, Pa \nonumber$ Azeotropes An azeotrope is a mixture whose vapor and liquid phases have the same composition, so that the composition does not change upon boiling. Azeotropes can be either maximum boiling or minimum boiling, as shown in Figure $\PageIndex{2; left}$. Regardless, distillation cannot purify past the azeotrope point, since the vapor and liquid phases have the same composition. If a system forms a minimum boiling azeotrope and also has a range of compositions and temperatures at which two liquid phases exist, the phase diagram might look like Figure $\PageIndex{2; right}$: Another possibility that is common is for two substances to form a two-phase liquid, form a minimum boiling azeotrope, but for the azeotrope to boil at a temperature below which the two liquid phases become miscible. In this case, the phase diagram will look like Figure $3$. Example $1$: In the diagram, the makeup of the system in each region is summarized below. The point e indicates the azeotrope composition and boiling temperature. 1. Single phase liquid (mostly compound A) 2. Single phase liquid (mostly compound B) 3. Single phase liquid (mostly A) and vapor 4. Single phase liquid (mostly B) and vapor 5. Vapor (miscible at all mole fractions since it is a gas) Solution Within each two-phase region (III, IV, and the two-phase liquid region), the lever rule will apply to describe the composition of each phase present.
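The arithmetic of the worked example can be checked with a short script (assuming, as in the text, that 1 L of solution contains 1000 g of water):

```python
# Henry's Law constant for CO2 in water from the solubility data in the
# example: 3.32e-2 M at a CO2 partial pressure of 1 bar (1e5 Pa).
n_water = 1000.0 / 18.02     # mol H2O in 1 L of solution (~55.5 mol)
n_co2 = 3.32e-2              # mol CO2 dissolved in 1 L
x_co2 = n_co2 / n_water      # mole fraction; solute << solvent
k_H = 1.0e5 / x_co2          # Pa, from p_B = k_H * chi_B
print(f"{k_H:.3g} Pa")       # ~1.67e+08 Pa
```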
So, for example, the system with the composition and temperature represented by point b (a single-phase liquid which is mostly compound A, designated by the composition at point a, and vapor with a composition designated by that at point c) will be described by the lever rule using the lengths of tie lines $l_A$ and $l_B$.
A phase diagram for two immiscible solids and the liquid phase (which is miscible in all proportions) is shown in Figure $1$. The point labeled “e2” is the eutectic point, meaning the composition for which the mixture of the two solids has the lowest melting point. The four main regions can be described as below: 1. Two-phase solid 2. Solid (mostly A) and liquid (A and B) 3. Solid (mostly B) and liquid (A and B) 4. Single phase liquid (A and B) The unlabeled regions on the sides of the diagram indicate regions where one solid is so miscible in the other that only a single-phase solid forms. This is different from the “two-phase solid” region, where there are two distinct phases, meaning there are regions (crystals perhaps) that are distinctly A or B, even though they are intermixed within one another. Region I contains two phases: a solid phase that is mostly compound A, and a liquid phase which contains both A and B. A sample in region II (such as the temperature/composition combination depicted by point b) will consist of two phases: one is a liquid mixture of A and B with a composition given by that at point a, and the other is a single-phase solid that is mostly pure compound B, but with traces of A entrained within it. As always, the lever rule applies in determining the relative amounts of material in the two phases. In the case where the widths of the small regions on either side of the phase diagram are negligibly small, a simplified diagram with a form similar to that shown in Figure $2$ can be used. In this case, it is assumed that the solids never form a single phase! The tin-lead system exhibits such behavior. Another important case is that for which the two compounds A and B can react to form a third chemical compound C. If the compound C is stable in the liquid phase (does not decompose upon melting), the phase diagram will look like Figure $3$.
In this diagram, the vertical boundary at $\chi_B = 0.33$ is indicative of the compound $C$ formed by $A$ and $B$. From the mole fraction of $B$, it is evident that the formula of compound $C$ is $A_2B$. The reaction that forms compound C is $2 A + B \rightarrow C \nonumber$ Thus, at overall compositions where $\chi_B < 0.33$, there is excess compound A (B is the limiting reagent) and for $\chi_B > 0.33$ there is an excess of compound $B$ ($A$ is now the limiting reagent.) With this in mind, the makeup of the sample in each region can be summarized as • Two phase solid (A and C) • Two phase solid (C and B) • Solid A and liquid (A and C) • Solid C and liquid (A and C) • Solid C and liquid (C and B) • Solid B and liquid (C and B) • Single phase liquid (A and C or C and B, depending on which is present in excess) Zinc and magnesium are an example of two elements that demonstrate this kind of behavior, with the third compound having the formula $Zn_2Mg$ (Ghosh, Mezbahul-Islam, & Medraj, 2011). Incongruent Melting Oftentimes, the stable compound formed by two solids is only stable in the solid phase. In other words, it will decompose upon melting. As a result, the phase diagram will take a slightly different form, as is shown in Figure $4$. In this diagram, the formula of the stable compound is $AB_3$ (consistent with $\chi_B = 0.75$). But you will notice that the boundary separating the two two-phase solid regions does not extend all of the way to the single-phase liquid portion of the diagram. This is because the compound will decompose upon melting. The process of decomposition upon melting is also called incongruent melting. The makeup of each region can be summarized as 1. Two phase solid (A and C) 2. Two phase solid (C and B) 3. Solid A and liquid (A and B) 4. Solid C and liquid (A and B) 5. Solid B and liquid (A and B) There are many examples of pairs of compounds that show this kind of behavior.
One combination is sodium and potassium, which form a compound ($Na_2K$) that is unstable in the liquid phase and so it melts incongruently (Rossen & Bleiswijk, 1912).
The method that is used to map the phase boundaries on a phase diagram is to measure the rate of cooling for a sample of known composition. The rate of cooling will change as the sample (or some portion of it) begins to undergo a phase change. These “breaks” will appear as changes in slope in the temperature-time curve. Consider a binary mixture for which the phase diagram is as shown in Figure \(\PageIndex{1A}\). A cooling curve for a sample that begins at the temperature and composition given by point a is shown in Figure \(\PageIndex{1B}\). As the sample cools from point a, the temperature will decrease at a rate determined by the sample composition, the geometry of the experiment (for example, one expects more rapid cooling if the sample has more surface area exposed to the cooler surroundings), and the temperature difference between the sample and the surroundings. When the temperature reaches that at point b, some solid compound B will begin to form. This will lead to a slowing of the cooling due to the exothermic nature of solid formation. But also, the composition of the liquid will change, becoming richer in compound A as B is removed from the liquid phase in the form of a solid. This will continue until the liquid attains the composition at the eutectic point (point c in the diagram.) When the temperature reaches that at point c, both compounds A and B will solidify, and the composition of the liquid phase will remain constant. As such, the temperature will stop changing, creating what is called the eutectic halt. Once all of the material has solidified (at the time indicated by point c’), the cooling will continue at a rate determined by the heat capacities of the two solids A and B, the composition, and (of course) the geometry of the experimental setup. By measuring cooling curves for samples of varying composition, one can map the entire phase diagram.
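The break-and-halt structure described above can be sketched numerically with a toy model (all parameters here are hypothetical, chosen only to reproduce the qualitative shape of a cooling curve):

```python
def cooling_curve(T0=900.0, T_surr=300.0, T_break=700.0, T_eut=500.0,
                  k=0.01, slow=0.4, halt_steps=40, n=600, dt=1.0):
    """Toy cooling curve: Newton's-law cooling, with the rate reduced
    below T_break (exothermic solid formation slows cooling) and a flat
    eutectic halt at T_eut while the remaining liquid solidifies."""
    T, halt_left, curve = T0, halt_steps, []
    for _ in range(n):
        curve.append(T)
        if halt_left > 0 and T == T_eut:
            halt_left -= 1          # eutectic halt: temperature constant
            continue
        rate = k * (T - T_surr)     # Newton's law of cooling
        if T < T_break:
            rate *= slow            # "break": slope decreases at point b
        floor = T_eut if halt_left > 0 else T_surr
        T = max(T - rate * dt, floor)
    return curve

curve = cooling_curve()
print(curve.count(500.0))  # the halt shows up as ~40 repeated points at T_eut
```

Plotting `curve` against step number reproduces the qualitative features of Figure 1B: an initial exponential decay, a slope break at `T_break`, a flat eutectic halt, and renewed cooling after solidification is complete.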
8.S: Phase Equilibrium (Summary) Learning Objectives After mastering the material in this chapter, one will be able to 1. State the thermodynamic criterion for equilibrium in terms of chemical potential. 2. Derive and interpret the Gibbs Phase Rule. 3. Derive the Clapeyron equation from the thermodynamic criterion for equilibrium. 4. Interpret the slope of phase boundaries on a pressure-temperature phase diagram in terms of the relevant changes in entropy and molar volume for the given phase change. 5. Derive the Clausius-Clapeyron equation, stating all of the necessary approximations. 6. Use the Clausius-Clapeyron equation to calculate the vapor pressure of a substance or the enthalpy of a phase change from pressure-temperature data. 7. Interpret phase diagrams for binary mixtures, identifying the phases and components present in each region. 8. Perform calculations using Raoult’s Law and Henry’s Law to relate vapor pressure to composition in the liquid phase. 9. Describe the distillation process, explaining how the composition of liquid and vapor phases can differ, and how azeotropic compositions place bottlenecks in the distillation process. 10. Describe how cooling curves are used to derive phase diagrams by locating phase boundaries. Vocabulary and Concepts • azeotrope • Clapeyron equation • Clausius-Clapeyron equation • compositional degrees of freedom • cooling curve • distillation • eutectic halt • eutectic point • Gibbs phase rule • Henry’s Law • incongruent melting • lever rule • lower critical temperature • phase diagram • platinum resistance thermometer • Raoult’s law • scanning calorimetry • thermodynamic constraints • triple point • upper critical temperature • volatile liquid
As was discussed in Chapter 6, the natural tendency of chemical systems is to seek a state of minimum Gibbs function. Once the minimum is achieved, movement in any chemical direction will not be spontaneous. It is at this point that the system achieves a state of equilibrium. • 9.1: Prelude to Chemical Equilibria The small is great, the great is small; all is in equilibrium in necessity... - Victor Hugo in “Les Miserables” • 9.2: Chemical Potential Equilibrium can be understood as occurring at the composition of a reaction mixture at which the aggregate chemical potential of the products is equal to that of the reactants. • 9.3: Activities and Fugacities To this point, we have mostly ignored deviations from ideal behavior. But it should be noted that thermodynamic equilibrium constants are not expressed in terms of concentrations or pressures, but rather in terms of activities and fugacities. • 9.4: Pressure Dependence of Kp - Le Châtelier's Principle Since the equilibrium constant is a function of the change in Gibbs energy, which is defined for a specific composition (all reactants in their standard states at unit pressure or fugacity), changes in pressure have no effect on equilibrium constants for a fixed temperature. However, changes in pressure can have profound effects on the compositions of equilibrium mixtures. • 9.5: Degree of Dissociation Reactions such as the one in the previous example involve the dissociation of a molecule. Such reactions can be easily described in terms of the fraction of reactant molecules that actually dissociate to achieve equilibrium in a sample. This fraction is called the degree of dissociation. • 9.6: Temperature Dependence of Equilibrium Constants - the van ’t Hoff Equation The value of Kp is independent of pressure, although the composition of a system at equilibrium may be very much dependent on pressure. Temperature dependence is another matter. Because the value of $\Delta G^o$ is dependent on temperature, the value of Kp is as well.
The form of the temperature dependence can be taken from the definition of the Gibbs function. • 9.7: The Dumas Bulb Method for Measuring Decomposition Equilibrium A classic example of an experiment that is employed in many physical chemistry laboratory courses uses a Dumas Bulb method to measure the dissociation of N2O4(g) as a function of temperature. In this experiment, a glass bulb is used to create a constant volume container in which a volatile substance can evaporate, or achieve equilibrium with other gases present. • 9.8: Acid-Base Equilibria A great many processes involve proton transfer, or acid-base types of reactions. As many biological systems depend on carefully controlled pH, these types of processes are extremely important. • 9.9: Buffers Buffer solutions, which are of enormous importance in controlling pH in various processes, can be understood in terms of acid/base equilibrium. A buffer is created in a solution which contains both a weak acid and its conjugate base. This combination can absorb excess H+ or supply H+ to replace what is lost due to neutralization. The calculation of the pH of a buffer is straightforward using an ICE table approach. • 9.10: Solubility of Ionic Compounds The solubility of ionic compounds in water can also be described using the concepts of equilibrium. Ksp is the solubility product and is the equilibrium constant that describes the solubility of an electrolyte. • 9.E: Chemical Equilibria (Exercises) Exercises for Chapter 9 "Chemical Equilibria" in Fleming's Physical Chemistry Textmap. • 9.S: Chemical Equilibria (Summary) Summary for Chapter 9 "Chemical Equilibria" in Fleming's Physical Chemistry Textmap. 09: Chemical Equilibria The small is great, the great is small; all is in equilibrium in necessity... - Victor Hugo in “Les Miserables” As was discussed in Chapter 6, the natural tendency of chemical systems is to seek a state of minimum Gibbs function.
Once the minimum is achieved, movement in any chemical direction will not be spontaneous. It is at this point that the system achieves a state of equilibrium. From the diagram above, it should be clear that the direction of spontaneous change is determined by the sign of $\left(\frac{\partial G}{\partial \xi}\right)_{p,T} \nonumber$ If the slope of the curve is negative, the reaction will favor a shift toward products. And if it is positive, the reaction will favor a shift toward reactants. This is a non-trivial point, as it underscores the importance of the composition of the reaction mixture in the determination of the direction of the reaction.
Equilibrium can be understood as occurring at the composition of a reaction mixture at which the aggregate chemical potential of the products is equal to that of the reactants. Consider the simple reaction $A(g) \rightleftharpoons B(g) \nonumber$ The criterion for equilibrium will be $\mu_A=\mu_B \nonumber$ If the gases behave ideally, the chemical potentials can be described in terms of the mole fractions of $A$ and $B$ $\mu_A^o + RT \ln\left( \dfrac{p_A}{p_{tot}} \right) = \mu_B^o + RT \ln\left( \dfrac{p_B}{p_{tot}} \right) \label{eq2}$ where Dalton’s Law has been used to express the mole fractions. $\chi_i = \dfrac{p_i}{p_{tot}} \nonumber$ Equation \ref{eq2} can be simplified by collecting all chemical potential terms on the left $\mu_A^o - \mu_B^o = RT \ln \left( \dfrac{p_B}{p_{tot}} \right) - RT \ln\left( \dfrac{p_A}{p_{tot}} \right) \label{eq3}$ Combining the logarithm terms and recognizing that $\mu_A^o - \mu_B^o = -\Delta G^o \nonumber$ for the reaction, one obtains $-\Delta G^o = RT \ln \left( \dfrac{p_B}{p_{A}} \right) \nonumber$ And since $p_B/p_A = K_p$ for this reaction (assuming perfectly ideal behavior), one can write $\Delta G^o = -RT \ln K_p \nonumber$ Another way to achieve this result is to consider the Gibbs function change for a reaction mixture in terms of the reaction quotient. The reaction quotient can be expressed as $Q_p = \dfrac{\prod_i p_i^{\nu_i}}{\prod_j p_j^{\nu_j}} \nonumber$ where $\nu_i$ are the stoichiometric coefficients for the products, and $\nu_j$ are those for the reactants.
Or, if the stoichiometric coefficients are defined by expressing the reaction as a sum $0 =\sum_i \nu_i X_i \nonumber$ where $X_i$ refers to one of the species in the reaction, and $\nu_i$ is then the stoichiometric coefficient for that species, it is clear that $\nu_i$ will be negative for a reactant (since its concentration or partial pressure will decrease as the reaction moves forward) and positive for a product (since the concentration or partial pressure will be increasing.) If the stoichiometric coefficients are expressed in this way, the expression for the reaction quotient becomes $Q_p = \prod_i p_i^{\nu_i} \nonumber$ Using this expression, the Gibbs function change for the system can be calculated from $\Delta G =\Delta G^o + RT \ln Q_p \nonumber$ And since at equilibrium $\Delta G = 0 \nonumber$ and $Q_p=K_p \nonumber$ it is evident that $\Delta G_{rxn}^o = -RT \ln K_p \label{triangle}$ It is in this simple way that $K_p$ and $\Delta G^o$ are related. It is also of value to note that the criterion for a spontaneous chemical process is that $\Delta G_{rxn} < 0$, rather than $\Delta G_{rxn}^o$, as is stated in many texts! Recall that $\Delta G_{rxn}^o$ is a function of all of the reactants and products being in their standard states of unit fugacity or activity. However, the direction of spontaneous change for a chemical reaction is dependent on the composition of the reaction mixture. Similarly, the magnitude of the equilibrium constant is insufficient to determine whether a reaction will spontaneously form reactants or products, as the direction the reaction will shift is a function of not just the equilibrium constant, but also the composition of the reaction mixture!
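This last point can be illustrated with a quick calculation (the value of ΔG° below is hypothetical, chosen only for illustration):

```python
import math

R = 8.314  # J/(mol K)

def delta_G_rxn(dG0, Q, T=298.15):
    """Delta_G = Delta_G° + RT ln Q; its sign (not the sign of Delta_G°)
    determines the direction of spontaneous change for a given mixture."""
    return dG0 + R * T * math.log(Q)

# Hypothetical reaction with Delta_G° = -20 kJ/mol (K ~ 3.2e3).
# Products are favored overall, yet a mixture with Q >> K still shifts
# back toward the reactants:
print(delta_G_rxn(-20000.0, 1.0) < 0)    # True: shifts toward products
print(delta_G_rxn(-20000.0, 1.0e5) > 0)  # True: shifts toward reactants
```

Even with a large negative ΔG°, a mixture that is already past the equilibrium composition (Q > K) has ΔG > 0 and shifts back toward reactants.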
Example $1$: Based on the data below at 298 K, calculate the value of the equilibrium constant ($K_p$) for the reaction $2 NO(g) + O_2(g) \rightleftharpoons 2 NO_2(g) \nonumber$ $\Delta G_f^o$ (kJ/mol): 86.55 for $NO(g)$ and 51.53 for $NO_2(g)$. Solution First calculate the value of $\Delta G_{rxn}^o$ from the $\Delta G_{f}^o$ data. $\Delta G_{rxn}^o = 2 \times (51.53 \,kJ/mol) - 2 \times (86.55 \,kJ/mol) = -70.04 \,kJ/mol \nonumber$ And now use the value to calculate $K_p$ using Equation \ref{triangle}. $-70040\, J/mol = -(8.314\, J/(mol\, K)) (298 \, K) \ln K_p \nonumber$ $K_p = 1.89 \times 10^{12} \nonumber$ Note: as expected for a reaction with a very large negative $\Delta G_{rxn}^o$, the equilibrium constant is very large, favoring the formation of the products. 9.03: Activities and Fugacities To this point, we have mostly ignored deviations from ideal behavior. But it should be noted that thermodynamic equilibrium constants are not expressed in terms of concentrations or pressures, but rather in terms of activities and fugacities (both being discussed in Chapter 7). Based on these quantities, $K_p = \prod_i f_i^{\nu_i} \label{eq1}$ and $K_c = \prod_i a_i^{\nu_i} \nonumber$ And since activities and fugacities are unitless, thermodynamic equilibrium constants are unitless as well. Further, it can be noted that the activities of solids and pure liquids are unity (assuming ideal behavior) since they are in their standard states at the given temperature. As such, these species never change the magnitude of the equilibrium constant and are generally omitted from the equilibrium constant expression. Thermodynamic equilibrium constants are unitless. Kp and Kc Oftentimes it is desirable to express the equilibrium constant in terms of concentrations (or activities for systems that deviate from ideal behavior.) To make this conversion, the relationship between pressure and concentration from the ideal gas law can be used.
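The arithmetic of Example 1 can be checked in a few lines:

```python
import math

R = 8.314   # J/(mol K)
T = 298.0   # K

# Delta_G°_rxn for 2 NO(g) + O2(g) -> 2 NO2(g) from the formation data:
dG_rxn = 2 * 51.53e3 - 2 * 86.55e3     # = -70040 J/mol
Kp = math.exp(-dG_rxn / (R * T))       # from Delta_G° = -RT ln Kp
print(f"{Kp:.3g}")  # ~1.89e+12
```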
$p= RT \left( \dfrac{n}{V}\right) \nonumber$ And noting that the concentration is given by ($n/V$), the expression for the equilibrium constant (Equation \ref{eq1}) becomes $K_p = \prod_i (RT[X_i])^{\nu_i} \label{eq}$ Since, for a given temperature, $RT$ is a constant, it can be factored out of the expression, leaving \begin{align} K_p &=\left( \prod_i(RT)^{\nu_i} \right) \left( \prod_i [X_i]^{\nu_i}\right) \\[10pt] &= (RT)^{\sum_i \nu_i} \prod_i [X_i]^{\nu_i} \\[10pt] &= (RT)^{\sum_i \nu_i} K_c \end{align} \nonumber This conversion works for reactions in which all reactants and products are in the gas phase. Care must be used when applying this relationship to heterogeneous equilibria.
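The conversion can be wrapped in a small helper (a sketch; the helper name is our own, and the choice R = 0.08206 L atm mol⁻¹ K⁻¹ assumes pressures in atm and concentrations in mol/L):

```python
def Kc_from_Kp(Kp, delta_nu, T, R=0.08206):
    """Kc = Kp * (RT)**(-delta_nu), where delta_nu = sum(nu_i) is the
    change in moles of gas (products minus reactants). With R in
    L atm/(mol K), Kp is atm-based and Kc is mol/L-based."""
    return Kp * (R * T) ** (-delta_nu)

# For 2 NO + O2 -> 2 NO2, delta_nu = 2 - 3 = -1, so Kc = Kp * RT:
print(Kc_from_Kp(1.89e12, -1, 298.0) > 1.89e12)  # True, since delta_nu < 0
```

When delta_nu = 0 the factor of (RT) drops out entirely and Kc equals Kp.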
Since the equilibrium constant $K_p$ is a function of $\Delta G^o_{rxn}$, which is defined for a specific composition (all reactants and products in their standard states at unit pressure or fugacity), changes in pressure have no effect on equilibrium constants for a fixed temperature. However, changes in pressure can have profound effects on the compositions of equilibrium mixtures. To demonstrate the relationship, one must recall Dalton’s law of partial pressures. According to this relationship, the partial pressure of a component of a gas-phase mixture can be expressed $p_i = \chi_i p_{tot} \nonumber$ It is the combination of mole fractions that describes the composition of the equilibrium mixture. Substituting the above expression into the expression for $K_p$ yields $K_p = \prod_i (\chi_ip_{tot})^{\nu_i} \nonumber$ This expression can be factored into two pieces: one containing the mole fractions and thus describing the composition, and one containing the total pressure. $K_p =\left( \prod_i \chi_i^{\nu_i} \right) \left( \prod_i p_{tot}^{\nu_i} \right) \nonumber$ The second factor is a constant for a given total pressure. If the first term is given the symbol $K_x$, the expression becomes $K_p=K_x (p_{tot})^{\sum_i \nu_i} \nonumber$ In this expression, $K_x$ has the same form as an equilibrium constant $K_x = \prod_i \chi_i^{\nu_i} \nonumber$ but is not itself a constant. The value of $K_x$ will vary with varying composition, and will need to vary with varying total pressure (in most cases) in order to maintain a constant value of $K_p$. Example $1$: Consider the following reaction at equilibrium. $A(g) + 2 B(g) \rightleftharpoons C(g) + D(g) \nonumber$ In which direction will the equilibrium shift if the volume of the reaction vessel is decreased? Solution A decrease in the volume will lead to an increase in total pressure.
Since the equilibrium constant can be expressed as $K_p = \dfrac{p_C p_D}{p_A p_B^2} = \dfrac{\chi_C \chi_D}{\chi_A \chi_B^2} (p_{tot})^{-1} \nonumber$ An increase in pressure will lead to an increase in $K_x$ to maintain a constant value of $K_p$. So the reaction will shift to form more of the products $C$ and $D$. Note: This should make some sense, since a shift to the side of the reaction with fewer moles of gas will lower the total pressure of the reaction mixture, thus relieving the stress introduced by increasing the pressure. This is exactly what is expected according to Le Chatelier's principle. It should be noted that there are several ways one can affect the total pressure of a gas-phase equilibrium. These include the introduction or removal of reactants or products (perhaps through condensation or some other physical process), a change in volume of the reaction vessel, or the introduction of an inert gas that does not participate in the reaction itself. (Changes in the temperature will be discussed in a later section.) Le Chatelier's principle can be used as a guide to predict how the equilibrium composition will respond to a change in pressure. Le Chatelier's principle: When a stress is introduced to a system at equilibrium, the system will adjust so as to reduce the stress. Le Chatelier’s principle is fairly clear on how to think about the addition or removal of reactants or products. For example, the addition of a reactant will cause the system to shift to reduce the partial pressure of the reactant. It can do this by forming more products. An important exception to the rule that increasing the total pressure will cause a shift in the reaction favoring the side with fewer moles of gas occurs when the total pressure is increased by introducing an inert gas to the mixture. The reason is that the introduction of an inert gas (at constant volume) increases the total pressure but does not affect the partial pressures of the individual species.
Example $2$:

A 1.0 L vessel is charged with 1.00 atm of A, and the following reaction is allowed to come to equilibrium at 298 K.

$A(g) \rightleftharpoons 2 B(g) \nonumber$

with $K_p = 3.10$.

1. What are the equilibrium partial pressures and mole fractions of A and B?
2. If the volume of the container is doubled, what are the equilibrium partial pressures and mole fractions of A and B?
3. If 1.000 atm of Ar (an inert gas) is introduced into the system described in b), what are the equilibrium partial pressures and mole fractions of A and B once equilibrium is reestablished?

Solution

Part a: First, we can use an ICE[1] table to solve part a).

              A               2 B
Initial       1.00 atm        0
Change        -x              +2x
Equilibrium   1.00 atm - x    2x

So (for convenience, consider $K_p$ to have units of atm)

$3.10 \,atm = \dfrac{(2x)^2}{1.00 \,atm - x} \nonumber$

Solving for $x$ yields values of

$x_1= -1.349 \,atm \nonumber$

$x_2= 0.574 \,atm \nonumber$

Clearly, $x_1$, while a solution to the mathematical problem, is not physically meaningful since the equilibrium pressure of B cannot be negative. So the equilibrium partial pressures are given by

$p_A = 1.00 \,atm - 0.574\, atm = 0.426 \,atm \nonumber$

$p_B = 2(0.574 \,atm) = 1.148 \,atm \nonumber$

So the mole fractions are given by

$\chi_A = \dfrac{0.426 \,atm}{0.426\,atm + 1.148\,atm} = 0.271 \nonumber$

$\chi_B=1-\chi_A = 1-0.271 = 0.729 \nonumber$

Part b: The volume is doubled. Again, an ICE table is useful. The initial pressures will be half of the equilibrium pressures found in part a).

              A                 2 B
Initial       0.213 atm         0.574 atm
Change        -x                +2x
Equilibrium   0.213 atm - x     0.574 atm + 2x

So the new equilibrium pressures can be found from

$3.10 \,atm = \dfrac{(0.574\,atm + 2x)^2}{0.213\,atm - x} \nonumber$

And the values of $x$ that solve the problem are

$x_1= -1.4077 \,atm \nonumber$

$x_2= 0.05875 \,atm \nonumber$

We reject the negative root, since it would result in a negative partial pressure of B.
So the new equilibrium partial pressures are

$p_A = 0.154\, atm \nonumber$

$p_B = 0.692\, atm \nonumber$

And the mole fractions are

$\chi_A = 0.182 \nonumber$

$\chi_B = 0.818 \nonumber$

We can see that the mole fraction of $A$ decreased and the mole fraction of $B$ increased. This is the result expected by Le Chatelier's principle since the lower total pressure favors the side of the reaction with more moles of gas.

Part c: We introduce 1.000 atm of an inert gas. The partial pressures of A and B are unchanged:

$p_A = 0.154 \,atm \nonumber$

$p_B = 0.692 \,atm \nonumber$

$p_{Ar} = 1.000\, atm \nonumber$

And because the partial pressures of A and B are unaffected, the equilibrium does not shift! What is affected is the composition, and so the mole fractions will change.

$\chi_A = \dfrac{0.154 \,atm}{0.154\, atm + 0.692 \,atm + 1.000\, atm} = 0.0834 \nonumber$

$\chi_B = \dfrac{0.692 \,atm}{0.154\, atm + 0.692 \,atm + 1.000\, atm} = 0.3749 \nonumber$

$\chi_{Ar} = \dfrac{1.000\, atm}{0.154\, atm + 0.692 \,atm + 1.000\, atm} = 0.5417 \nonumber$

And since

$K_p = K_x(p_{tot}) \nonumber$

$\dfrac{(0.3749)^2}{0.0834} (1.846\,atm) = 3.1 \nonumber$

Within round-off error, the value obtained is the equilibrium constant. So the conclusion is that the introduction of an inert gas, even though it increases the total pressure, does not induce a change in the partial pressures of the reactants and products, so it does not cause the equilibrium to shift.

[1] ICE is an acronym for "Initial, Change, Equilibrium". An ICE table is a tool used to solve equilibrium problems in terms of the unknown amount of moles (or something proportional to moles, such as pressure or concentration) by which a system must shift to establish equilibrium. See (Tro, 2014) or a similar General Chemistry text for more background and information.
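Parts a) and b) of the example above reduce to solving the same quadratic with different initial pressures. The short Python sketch below (the function name and structure are illustrative, not from the text) reproduces the worked values to within rounding:

```python
import math

def equilibrate(p_A0, p_B0, Kp):
    """Solve Kp = (p_B0 + 2x)^2 / (p_A0 - x) for A(g) <=> 2 B(g),
    pressures in atm, and return the equilibrium partial pressures.
    Expanding gives 4x^2 + (4 p_B0 + Kp) x + (p_B0^2 - Kp p_A0) = 0."""
    a, b, c = 4.0, 4.0 * p_B0 + Kp, p_B0**2 - Kp * p_A0
    x = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)  # physical (positive) root
    return p_A0 - x, p_B0 + 2 * x

# Part a: 1.00 atm of A, no B initially
pA, pB = equilibrate(1.00, 0.0, 3.10)
print(round(pA, 3), round(pB, 3))    # ~0.426 atm and ~1.149 atm

# Part b: doubling the volume halves each pressure before re-equilibration
pA2, pB2 = equilibrate(pA / 2, pB / 2, 3.10)
print(round(pA2, 3), round(pB2, 3))  # ~0.154 atm and ~0.691 atm
```

The slight differences in the last digit relative to the worked example come from carrying full precision through part b) rather than the rounded 0.213 atm and 0.574 atm.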
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/09%3A_Chemical_Equilibria/9.04%3A_Pressure_Dependence_of_Kp_-_Le_Chatelier%27s_Principle.txt
Reactions such as the one in the previous example involve the dissociation of a molecule. Such reactions can be easily described in terms of the fraction of reactant molecules that actually dissociate to achieve equilibrium in a sample. This fraction is called the degree of dissociation. For the reaction in the previous example

$A(g) \rightleftharpoons 2 B(g) \nonumber$

the degree of dissociation can be used to fill out an ICE table. If the reaction is started with $n$ moles of $A$, and $\alpha$ is the fraction of $A$ molecules that dissociate, the ICE table will look as follows.

              $A$              $2 B$
Initial       $n$              $0$
Change        $-\alpha n$      $+2n\alpha$
Equilibrium   $n(1 - \alpha)$  $2n\alpha$

The mole fractions of $A$ and $B$ can then be expressed by

\begin{align*} \chi_A &= \dfrac{n(1-\alpha)}{n(1-\alpha)+2n\alpha} \[4pt] &= \dfrac{1 -\alpha}{1+\alpha} \[4pt] \chi_B &= \dfrac{2 \alpha}{1+\alpha} \end{align*}

Based on these mole fractions

\begin{align} K_x &= \dfrac{\left( \dfrac{2 \alpha}{1+\alpha}\right)^2}{\dfrac{1 -\alpha}{1+\alpha}} \[4pt] &= \dfrac{4 \alpha^2}{1-\alpha^2} \end{align} \nonumber

And so $K_p$, which can be expressed as

$K_p = K_x(p_{tot})^{\sum \nu_i} \label{oddEq}$

is given by

$K_p = \dfrac{4 \alpha^2}{(1-\alpha^2)} (p_{tot}) \nonumber$

Example $1$

Based on the values given below, find the equilibrium constant at 25 °C and degree of dissociation for a system that is at a total pressure of 1.00 atm for the reaction

$N_2O_4(g) \rightleftharpoons 2 NO_2(g) \nonumber$

                          $N_2O_4(g)$   $NO_2(g)$
$\Delta G_f^o$ (kJ/mol)   99.8          51.3

Solution

First, the value of $K_p$ can be determined from $\Delta G_{rxn}^o$ via an application of Hess' Law.
\begin{align*} \Delta G_{rxn}^o &= 2 \left( 51.3 \, kJ/mol \right) - 99.8 \,kJ/mol \[4pt] &= 2.8\, kJ/mol \end{align*}

So, using the relationship between thermodynamics and equilibria

\begin{align*} \Delta G_{rxn}^o &= -RT \ln K_p \[4pt] 2800\, J/mol &= -(8.314\, J/(mol\,K)) ( 298 \,K) \ln K_p \[4pt] K_p &= 0.323 \end{align*}

The degree of dissociation can then be calculated from the ICE tables at the top of the page for the dissociation of $N_2O_4(g)$:

\begin{align*} K_p &= \dfrac{4 \alpha^2}{1-\alpha^2} (p_{tot}) \[4pt] 0.323 \,atm & = \dfrac{4 \alpha^2}{1-\alpha^2} (1.00 \,atm) \end{align*}

Solving for $\alpha$,

$\alpha = 0.273 \nonumber$

Note: since $\alpha$ represents the fraction of N2O4 molecules dissociated, it must be a positive number between 0 and 1.

Example $2$

Consider the gas-phase reaction

$A + 2B \rightleftharpoons 2C \nonumber$

A reaction vessel is initially filled with 1.00 mol of A and 2.00 mol of B. At equilibrium, the vessel contains 0.60 mol C at a total pressure of 0.890 atm and 1350 K.

1. How many mol of A and B are present at equilibrium?
2. What is the mole fraction of A, B, and C at equilibrium?
3. Find values for $K_x$, $K_p$, and $\Delta G_{rxn}^o$.

Solution

Let's build an ICE table!

              A              2 B             2 C
Initial       1.00 mol       2.00 mol        0
Change        -x             -2x             +2x
Equilibrium   1.00 mol - x   2.00 mol - 2x   2x = 0.60 mol

From the equilibrium measurement of the number of moles of C, x = 0.30 mol. So at equilibrium,

              A          2 B        2 C
Equilibrium   0.70 mol   1.40 mol   0.60 mol

The total number of moles at equilibrium is 2.70 mol. From these data, the mole fractions can be determined.
\begin{align*} \chi_A &= \dfrac{0.70\,mol}{2.70\,mol} = 0.259 \[4pt] \chi_B &= \dfrac{1.40\,mol}{2.70\,mol} = 0.519 \[4pt] \chi_C &= \dfrac{0.60\,mol}{2.70\,mol} = 0.222 \end{align*}

So $K_x$ is given by

$K_x = \dfrac{(0.222)^2}{(0.259)(0.519)^2} = 0.7064 \nonumber$

And $K_p$ is given by Equation \ref{oddEq}, so

$K_p = 0.7064(0.890 \,atm)^{-1} = 0.794\,atm^{-1} \nonumber$

The thermodynamic equilibrium constant is unitless, of course, since the pressures are all divided by 1 atm. So the value of $K_p$ is 0.794. This value can be used to calculate $\Delta G_{rxn}^o$ using

$\Delta G_{rxn}^o = -RT \ln K_p \nonumber$

so

\begin{align*} \Delta G_{rxn}^o &= - (8.314 \, J/(mol\,K))( 1350\, K) \ln (0.794) \[4pt] &= 2590 \, J/mol \end{align*}
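The first example above (obtaining $K_p$ from $\Delta G^o_{rxn}$ and then $\alpha$ from $K_p = 4\alpha^2/(1-\alpha^2)\,p_{tot}$) can be checked with a few lines of Python. This is a sketch using the text's atm convention; the function name is illustrative.

```python
import math

R = 8.314  # J/(mol K)

def alpha_from_Kp(Kp, p_tot):
    """Degree of dissociation for A <=> 2B.
    Rearranging Kp = 4a^2/(1-a^2) * p_tot gives a = sqrt(Kp/(4 p_tot + Kp))."""
    return math.sqrt(Kp / (4 * p_tot + Kp))

# N2O4 <=> 2 NO2 at 298 K: dG = 2(51.3) - 99.8 = 2.8 kJ/mol
dG = (2 * 51.3 - 99.8) * 1000              # J/mol
Kp = math.exp(-dG / (R * 298))
print(round(Kp, 3))                        # ~0.323
print(round(alpha_from_Kp(Kp, 1.00), 3))   # ~0.273
```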
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/09%3A_Chemical_Equilibria/9.05%3A_Degree_of_Dissociation.txt
The value of $K_p$ is independent of pressure, although the composition of a system at equilibrium may be very much dependent on pressure. Temperature dependence is another matter. Because the value of $\Delta G_{rxn}^o$ is dependent on temperature, the value of $K_p$ is as well. The form of the temperature dependence follows from the Gibbs-Helmholtz equation. At constant pressure

$\dfrac{\Delta G^o_{T_2}}{T_2} - \dfrac{\Delta G^o_{T_1}}{T_1} = \Delta H^o \left(\dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$

Substituting

$\Delta G^o = -RT \ln K \nonumber$

for the two values of $\Delta G^o$ and using the appropriate temperatures yields

$\dfrac{-R{T_2} \ln K_2}{T_2} - \dfrac{-R{T_1} \ln K_1}{T_1} = \Delta H^o \left(\dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \nonumber$

And simplifying the expression so that only terms involving $K$ are on the left and all other terms are on the right results in the van 't Hoff equation, which describes the temperature dependence of the equilibrium constant.

$\ln \left(\dfrac{\ K_2}{\ K_1}\right) = - \dfrac{\Delta H^o}{R} \left(\dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{vH}$

Because of the assumptions made in the derivation of the Gibbs-Helmholtz equation, this relationship only holds if $\Delta H^o$ is independent of temperature over the range being considered. This expression also suggests that a plot of $\ln(K)$ as a function of $1/T$ should produce a straight line with a slope equal to $–\Delta H^o/R$. Such a plot is known as a van 't Hoff plot, and can be used to determine the reaction enthalpy.

Example $1$

A certain reaction has a value of $K_p = 0.0260$ at 25 °C and $\Delta H_{rxn}^o = 32.4 \,kJ/mol$. Calculate the value of $K_p$ at 37 °C.

Solution

This is a job for the van 't Hoff equation!

• T1 = 298 K
• T2 = 310 K
• $\Delta H_{rxn}^o = 32.4 \,kJ/mol$
• K1 = 0.0260
• K2 = ?
So Equation \ref{vH} becomes

\begin{align*} \ln \left( \dfrac{\ K_2}{0.0260} \right) &= - \dfrac{32400 \,J/mol}{8.314 \,J/(mol \,K)} \left(\dfrac{1}{310\, K} - \dfrac{1}{298 \,K} \right) \[4pt] K_2 &= 0.0431 \end{align*}

Note: the value of $K_2$ increased with increasing temperature, which is what is expected for an endothermic reaction. An increase in temperature should result in an increase of product formation in the equilibrium mixture. But unlike a change in pressure, a change in temperature actually leads to a change in the value of the equilibrium constant!

Example $2$

Given the following average bond enthalpies for $\ce{P-Cl}$ and $\ce{Cl-Cl}$ bonds, predict whether or not an increase in temperature will lead to a larger or smaller degree of dissociation for the reaction

$\ce{PCl_5 \rightleftharpoons PCl_3 + Cl_2} \nonumber$

X-Y       D(X-Y) (kJ/mol)
P-Cl      326
Cl-Cl     240

Solution

The estimated reaction enthalpy is given by the total energy expended breaking bonds minus the energy recovered by the formation of bonds. Since this reaction involves breaking two P-Cl bonds (costing 652 kJ/mol) and forming one Cl-Cl bond (recovering 240 kJ/mol), it is clear that the reaction is endothermic (by approximately 412 kJ/mol). As such, an increase in temperature should increase the value of the equilibrium constant, causing the degree of dissociation to be increased at the higher temperature.

9.07: The Dumas Bulb Method for Measuring Decomposition Equilibrium

A classic example of an experiment that is employed in many physical chemistry laboratory courses uses a Dumas bulb method to measure the dissociation of N2O4(g) as a function of temperature (Mack & France, 1934). In this experiment, a glass bulb is used to create a constant volume container in which a volatile substance can evaporate, or achieve equilibrium with other gases present.
The latter is of interest in the case of the reaction

$N_2O_4(g) \rightleftharpoons 2 NO_2(g) \label{eq1}$

The reaction is endothermic, so at higher temperatures, a larger degree of dissociation is observed.

The procedure is to first calibrate the internal volume of the Dumas bulb. This is done using a heavy gas (such as SF6) and comparing the mass of the bulb when evacuated to the mass of the bulb full of the calibrant gas at a particular pressure and temperature. The Dumas bulb is then charged with a pure sample of the gas to be investigated (such as N2O4) and placed in a thermalized bath. It is then allowed to come to equilibrium. Once equilibrium is established, the stopcock is opened to allow gas to escape until the internal pressure matches the pressure of the room. The stopcock is then closed and the bulb weighed to determine the total mass of gas remaining inside. The experiment is repeated at higher and higher temperatures, so that at each subsequent measurement the larger degree of dissociation creates more molecules of gas and an increase in pressure in the bulb (along with the higher temperature), which then leads to the expulsion of gas when the pressure is equilibrated.

The degree of dissociation is then determined based on the calculated gas density at each temperature.

$\alpha = \dfrac{\rho_1-\rho_2}{\rho_2(n-1)} \nonumber$

where $\rho_1$ is the theoretical density if no dissociation occurs (calculated from the ideal gas law for the given temperature, pressure, and molar mass of the dissociating gas), $\rho_2$ is the measured density, and $n$ is the number of fragments into which the dissociating gas dissociates (e.g., $n = 2$ for Equation \ref{eq1}). The equilibrium constant is then calculated as

$K = \dfrac{4 \alpha^2}{1-\alpha^2} \left( \dfrac{p}{1.00 \,atm} \right) \nonumber$

Finally, a van 't Hoff plot is generated to determine the reaction enthalpy.
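The full analysis chain (density to $\alpha$ to $K$ at each temperature, then a van 't Hoff extraction of $\Delta H$) can be sketched in Python. The masses, bulb volume, and temperatures below are hypothetical illustration values, not data from the text, and ideal-gas behavior is assumed; with only two temperatures the "plot" reduces to the two-point van 't Hoff equation.

```python
import math

R_J   = 8.314      # J/(mol K)
R_atm = 0.082057   # L atm/(mol K)

def alpha_from_density(rho_theory, rho_meas, n_frag=2):
    """Degree of dissociation: alpha = (rho_theory - rho_meas)/(rho_meas (n-1)),
    where rho_theory assumes no dissociation and rho_meas is observed."""
    return (rho_theory - rho_meas) / (rho_meas * (n_frag - 1))

def K_from_alpha(alpha, p_atm):
    """K = 4 a^2 / (1 - a^2) * (p / 1 atm) for a 1 -> 2 dissociation."""
    return 4 * alpha**2 / (1 - alpha**2) * p_atm

def dH_vant_hoff(K1, T1, K2, T2):
    """Reaction enthalpy (J/mol) from K at two temperatures."""
    return -R_J * math.log(K2 / K1) / (1 / T2 - 1 / T1)

# hypothetical N2O4 runs at 1.00 atm in a 0.250 L bulb (M = 92.01 g/mol)
Ks, Ts = [], []
for T, mass_g in [(298.0, 0.740), (318.0, 0.640)]:
    rho_theory = 92.01 / (R_atm * T)        # g/L if no dissociation
    a = alpha_from_density(rho_theory, mass_g / 0.250)
    Ks.append(K_from_alpha(a, 1.00))
    Ts.append(T)

print([round(K, 3) for K in Ks])
print(round(dH_vant_hoff(Ks[0], Ts[0], Ks[1], Ts[1]) / 1000, 1), "kJ/mol")
```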
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/09%3A_Chemical_Equilibria/9.06%3A_Temperature_Dependence_of_Equilibrium_Constants_-_the_van_t_Hoff_Equation.txt
A great many processes involve proton transfer, or acid-base types of reactions. As many biological systems depend on carefully controlled pH, these types of processes are extremely important. The pH is defined by

$pH \equiv -\log a(\ce{H^{+}}) \approx -\log [\ce{H^{+}}] \label{eq1}$

where $a$ is the activity of hydronium ions and $[\ce{H^{+}}]$ is the concentration of hydronium ions (both in mol/L). The dissociation of a weak acid in water is governed by the equilibrium defined by

$HA(aq) \rightleftharpoons H^+(aq) + A^-(aq) \nonumber$

The equilibrium constant for such a reaction, $K_a$, takes the form

$K_a = \dfrac{[H^+][A^-]}{[HA]} \label{eq3}$

As is the case for all thermodynamic equilibrium constants, the concentrations are replaced by activities and the equilibrium constant is unitless. However, if all species behave ideally (have unit activity coefficients), the units can be used as a very useful guide in solving problems.

Example $1$: Acetic Acid

What is the pH of a 0.200 M HOAc (acetic acid) solution? ($K_a = 1.8 \times 10^{-5}$ M)

Solution

An ICE table will come in very handy here!

              [HOAc]          [H+]   [OAc-]
Initial       0.200 M         0      0
Change        -x              +x     +x
Equilibrium   0.200 M - x     x      x

The equilibrium problem can then be set up as

$K_a = \dfrac{[H^+][OAc^-]}{[HOAc]} \nonumber$

Substituting the values that are known

$1.8 \times 10^{-5} =\dfrac{ x^2}{0.200 \, M -x} \nonumber$

This produces a quadratic equation, and thus two values of $x$ which satisfy the relationship.

$x_1 = -0.001906 \,M \nonumber$

$x_2 = 0.001888 \,M \nonumber$

The negative root is not physically meaningful since the concentrations of $H^+$ and $OAc^-$ cannot be negative. Using the value of $x_2$ as $[H^+]$, the pH is then calculated (via Equation \ref{eq1}) to be

$pH \approx -\log_{10} (0.001888) = 2.72 \nonumber$

The Auto-ionization of Water

Water is a very important solvent as water molecules have large dipole moments which create favorable interactions with ionic compounds.
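The quadratic in the example above can be solved in closed form; a minimal sketch follows, with activity coefficients taken as unity and an illustrative function name:

```python
import math

def weak_acid_pH(C0, Ka):
    """pH of a weak monoprotic acid of formal concentration C0 (mol/L),
    solving x^2/(C0 - x) = Ka exactly: x^2 + Ka x - Ka C0 = 0."""
    x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C0)) / 2  # positive root = [H+]
    return -math.log10(x)

print(round(weak_acid_pH(0.200, 1.8e-5), 2))  # ~2.72, as in the example
```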
Water also has a large dielectric constant, which damps the electric field generated by ions in solution, making the interactions with water more favorable than those with other ions in solution in many cases. But water also dissociates into ions through the reaction

$H_2O(l) \rightleftharpoons H^+(aq) + OH^-(aq) \label{eqA}$

The equilibrium constant governing this dissociation is highly temperature dependent. The data below are presented by Bandura and Lvov (Bandura & Lvov, 2006).

T (°C)   0       25      50      75      100
pKw      14.95   13.99   13.26   12.70   12.25

From these data, a van 't Hoff plot can be constructed. There is some curvature to the line, suggesting some (albeit small) temperature dependence of $\Delta H_{rxn}$ for Equation \ref{eqA}. However, from the fit of these data, a value of $\Delta H_{rxn}$ can be determined to be 52.7 kJ/mol. Of particular note is that the dissociation is endothermic, so increases in temperature will lead to a greater degree of dissociation.

Example $2$: Neutral Water

What is the pH of neutral water at 37 °C (normal human body temperature)? Neutral water has no excess of $[H^+]$ over $[OH^-]$ or vice versa.

Solution

From the best-fit line in the van 't Hoff plot of Figure $1$, the value of $K_w$ can be calculated:

$\ln (K_w) = - \dfrac{6338 \,K}{310\,K} - 11.04 \nonumber$

$K_w= 2.12 \times 10^{-14}\,M^2 \nonumber$

Since $K_w$ gives the product of $[H^+]$ and $[OH^-]$ (which must be equal in a neutral solution),

$[H^+] = \sqrt{2.12 \times 10^{-14}\,M^2} = 1.456 \times 10^{-7}\,M \nonumber$

And the pH is given by (Equation \ref{eq1}):

$pH = -\log_{10} (1.456 \times 10^{-7}) = 6.84 \nonumber$

Note: This is slightly less than a pH of 7.00, which is normally considered to be "neutral." But a pH of 7.00 is only neutral at 25 °C! At higher temperatures, neutral pH is a lower value due to the endothermic nature of the auto-ionization of water.
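The calculation in the example generalizes to any temperature using the fitted van 't Hoff line quoted above (slope -6338 K, intercept -11.04); a brief sketch:

```python
import math

def neutral_pH(T_K, slope=-6338.0, intercept=-11.04):
    """Neutral pH from the fitted line ln(Kw) = slope/T + intercept.
    The defaults are the fit parameters quoted in the text."""
    Kw = math.exp(slope / T_K + intercept)
    return -math.log10(math.sqrt(Kw))  # [H+] = sqrt(Kw) when neutral

print(round(neutral_pH(310.0), 2))  # ~6.84 at body temperature
print(round(neutral_pH(298.0), 2))  # ~7.02 (the fit slightly overshoots 7.00)
```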
While warmer water has a higher $[H^+]$ concentration, it also has an equally higher $[OH^-]$ concentration, so it is still technically neutral.

The Hydrolysis of a Weak Base

Hydrolysis is defined as a reaction with water that splits a water molecule. The hydrolysis of a weak base (such as the conjugate base of a weak acid) defines the equilibrium constant Kb.

$A^- + H_2O \rightleftharpoons HA + OH^- \nonumber$

For this reaction, the equilibrium constant $K_b$ is given by

$K_b = \dfrac{[HA][OH^-]}{[A^-]} \nonumber$

The concentration (or activity) of the pure compound H2O is not included in the equilibrium expression because, being a pure compound in its standard state, it has unit activity throughout the process of establishing equilibrium. Further, it should be noted that when Kb is combined with the expression for Ka for the weak acid HA (Equation \ref{eq3}),

$K_a K_b = \left( \dfrac{[H^+][A^-]}{[HA]} \right) \left( \dfrac{[HA][OH^-]}{[A^-]} \right) = [H^+][OH^-] = K_w \nonumber$

As a consequence, if one knows $K_a$ for a weak acid, one also knows $K_b$ for its conjugate base, since the product results in $K_w$.

Example $3$:

What is the pH of a 0.150 M solution of KF? (For HF, pKa = 3.17 at 25 °C)

Solution

The problem involves the hydrolysis of the conjugate base of HF, F-. The hydrolysis reaction is

$F^- + H_2O \rightleftharpoons HF + OH^- \nonumber$

An ICE table is in order here.

              [F-]           [HF]   [OH-]
Initial       0.150 M        0      0
Change        -x             +x     +x
Equilibrium   0.150 M - x    x      x

So the expression for $K_b$ is

$K_b = \dfrac{K_w}{K_a} = \dfrac{1.0 \times 10^{-14}\, M^2}{10^{-3.17}\, M} = \dfrac{x^2}{0.150 \,M-x} \nonumber$

In this case, the small value of $K_b$ ensures that the value of x will be negligibly small compared to 0.150 M.
In this limit, the value of $x$ (which is equal to $[OH^-]$) is

$x = [OH^-] = 1.49 \times 10^{-6}\,M \nonumber$

So $[H^+]$ is given by

$[H^+] =\dfrac{K_w}{[OH^-]} = \dfrac{10^{-14} M^2}{1.49 \times 10^{-6} M} = 6.71 \times 10^{-9} \,M \nonumber$

And the pH is given by (Equation \ref{eq1}):

$pH = -\log_{10} (6.71 \times 10^{-9}) = 8.17 \nonumber$

Note: The pH of this salt solution is slightly basic. This is to be expected, as KF can be thought of as being formed in the reaction of a weak acid (HF) with a strong base (KOH). In the competition to control the pH, the strong base ends up winning the battle.
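The hydrolysis calculation above can be sketched in a few lines, using the same small-x approximation as the example (the function name is illustrative):

```python
import math

Kw = 1.0e-14

def salt_pH(C0, pKa):
    """pH of a solution of the conjugate base (concentration C0, mol/L) of a
    weak acid with the given pKa, using the small-x approximation x^2/C0 = Kb."""
    Kb = Kw / 10**(-pKa)
    x = math.sqrt(Kb * C0)          # [OH-]
    return -math.log10(Kw / x)      # pH from [H+] = Kw/[OH-]

print(round(salt_pH(0.150, 3.17), 2))  # ~8.17, as in the example
```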
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/09%3A_Chemical_Equilibria/9.08%3A_Acid-Base_Equilibria.txt
Buffer solutions, which are of enormous importance in controlling pH in various processes, can be understood in terms of acid/base equilibrium. A buffer is created in a solution which contains both a weak acid and its conjugate base. This creates a reservoir that can absorb excess H+ or supply H+ to replace what is lost due to neutralization. The calculation of the pH of a buffer is straightforward using an ICE table approach.

Example $1$:

What is the pH of a solution that is 0.150 M in KF and 0.250 M in HF?

Solution

The reaction of interest is

$HF \rightleftharpoons H^+ + F^- \nonumber$

Let's use an ICE table!

              [HF]           [H+]   [F-]
Initial       0.250 M        0      0.150 M
Change        -x             +x     +x
Equilibrium   0.250 M - x    x      0.150 M + x

$K_a = \dfrac{[H^+][F^-]}{[HF]} \nonumber$

$10^{-3.17} M = \dfrac{x(0.150 \,M + x)}{0.250 \,M - x} \nonumber$

This expression results in a quadratic relationship, leading to two values of $x$ that will make it true. Rejecting the negative root, the remaining root of the equation indicates

$[ H^+]= 0.00111\,M \nonumber$

So the pH is given by

$pH = -\log_{10} (0.00111) = 2.95 \nonumber$

For buffers made from acids with sufficiently large values of pKa, the buffer problem can be simplified since the concentrations of the acid and its conjugate base will be determined by their pre-equilibrium values. In these cases, the pH can be calculated using the Henderson-Hasselbalch approximation. If one considers the expression for $K_a$

$K_a = \dfrac{[H^+][A^-]}{[HA]} = [H^+]\dfrac{[A^-]}{[HA]} \nonumber$

Taking the log of both sides and multiplying by -1 yields

$pK_a= pH - \log_{10} \dfrac{[A^-]}{[HA]} \nonumber$

A rearrangement produces the form of the Henderson-Hasselbalch approximation.

$pH= pK_a + \log_{10} \dfrac{[A^-]}{[HA]} \nonumber$

It should be noted that this approximation will fail if: 1. the $pK_a$ is too small, 2. the concentration $[A^-]$ is too small, or 3.
$[HA]$ is too small, since the equilibrium concentrations will deviate wildly from the pre-equilibrium values under these conditions.

9.10: Solubility of Ionic Compounds

The solubility of ionic compounds in water can also be described using the concepts of equilibrium. If you consider the dissociation of a generic salt MX

$MX(s) \rightleftharpoons M^+(aq) + X^-(aq) \nonumber$

the equilibrium expression is

$K_{sp} = [M^+][X^-] \nonumber$

$K_{sp}$ is the solubility product and is the equilibrium constant that describes the solubility of an electrolyte. And again, the pure solid MX is not included in the expression since it has unit activity throughout the establishment of equilibrium.

Example $1$:

What is the maximum solubility of CuS at 25 °C? ($K_{sp} = 1 \times 10^{-36}\, M^2$)

Solution

Yup – time for an ICE table.

              CuS   Cu2+   S2-
Initial             0      0
Change              +x     +x
Equilibrium         x      x

So the equilibrium expression is

$1 \times 10^{-36} M^2 = x^2 \nonumber$

$x = \sqrt{ 1 \times 10^{-36} \,M^2 } = 1 \times 10^{-18}\, M \nonumber$

Example $2$: Common Ion

What is the maximum solubility of $\ce{CuS}$ at 25 °C in 0.100 M $\ce{Na2S}$ ($K_{sp} = 1 \times 10^{-36}\, M^2$)?

Solution

In this problem we need to consider the existence of S2-(aq) from the complete dissociation of the strong electrolyte Na2S. An ICE table will help, as usual.

              CuS   Cu2+   S2-
Initial             0      0.100 M
Change              +x     +x
Equilibrium         x      0.100 M + x

Given the minuscule magnitude of the solubility product, x will be negligibly small compared to 0.100 M, so the equilibrium expression is

$1 \times 10^{-36} M^2 = x(0.100\,M) \nonumber$

$x = 1 \times 10^{-35} \,M \nonumber$

The huge reduction in solubility is due to the common ion effect. The existence of sulfide in the solution due to sodium sulfide greatly reduces the solution's capacity to support additional sulfide from the dissociation of $\ce{CuS}$.
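The two solubility examples can be checked numerically. One caveat worth noting: with $K_{sp} \sim 10^{-36}$, the exact quadratic suffers catastrophic cancellation in double-precision arithmetic when a common ion is present, so the sketch below (a hypothetical helper, not from the text) falls back to the same $x \approx K_{sp}/C$ approximation used in Example 2 in that regime.

```python
import math

def solubility(Ksp, common_ion=0.0):
    """Molar solubility x of a 1:1 salt MX from x*(common_ion + x) = Ksp.
    When the common-ion term dominates, the quadratic formula loses all
    precision in floating point, so use x ~ Ksp/C instead."""
    if common_ion**2 > 1e6 * Ksp:
        return Ksp / common_ion
    return (-common_ion + math.sqrt(common_ion**2 + 4 * Ksp)) / 2

print(solubility(1e-36))          # 1e-18 M in pure water
print(solubility(1e-36, 0.100))   # 1e-35 M with 0.100 M sulfide present
```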
9.S: Chemical Equilibria (Summary)

Vocabulary and Concepts

• common ion effect
• conjugate base
• degree of dissociation
• dissociation of a weak acid
• Dumas bulb
• Henderson-Hasselbalch equation
• Le Chatelier's principle
• reaction quotient
• solubility product
• thermodynamic equilibrium constant
• van 't Hoff equation
• van 't Hoff plot
• weak acid
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/09%3A_Chemical_Equilibria/9.09%3A_Buffers.txt
Elon Musk, an innovator in the field of harnessing renewable sources to generate electric power, sees a huge potential for electric cars to change the way Americans drive.

• Selling an electric sports car creates an opportunity to fundamentally change the way America drives. - Elon Musk
• I've actually made a prediction that within 30 years a majority of new cars made in the United States will be electric. And I don't mean hybrid, I mean fully electric. - Elon Musk

Given the importance of energy production (and in particular, production from renewable sources) alluded to by Richard Smalley in his address to the United States Congress (see Chapter 1), Elon Musk's vision seems well-aligned with Smalley's priority. The generation and consumption of electrical energy and how it is harnessed to do work lends itself very nicely to discussion within the framework of thermodynamics. In this chapter, we will use some of the tools we have developed to relate electrochemical processes to thermodynamic variables, and to frame discussions of a few important topics.

• 10.1: Electricity
In 1799, Alessandro Volta showed that electricity could be generated by stacking copper and zinc disks submerged in sulfuric acid. The reactions that Volta produced in his voltaic pile included both oxidation and reduction processes that could be considered as half-reactions. The half-reactions can be classified as oxidation (the loss of electrons), which happens at the anode, and reduction (the gain of electrons), which occurs at the cathode.
• 10.2: The Connection to ΔG
A criterion for spontaneity, $\Delta G$ also indicates the maximum amount of non-p-V work a system can produce at constant temperature and pressure. And since electrical work is non-p-V work, it seems like a natural fit to extend this discussion to electrochemistry.
• 10.3: Half Cells and Standard Reduction Potentials
Much like G itself, E can only be measured as a difference, so a convention is used to set a zero to the scale.
Toward this end, convention sets the reduction potential of the standard hydrogen electrode (SHE) to 0.00 V.

• 10.4: Entropy of Electrochemical Cells
The Gibbs function is related to entropy through its temperature dependence, and a similar relationship can be derived for the temperature variance of E.
• 10.5: Concentration Cells
The generation of an electrostatic potential difference is dependent on the creation of a difference in chemical potential between two half-cells. One important manner in which this can be created is by creating a concentration difference. Using the Nernst equation, the potential difference for a concentration cell (one in which both half-cells involve the same half-reaction) can be expressed.
• 10.E: Electrochemistry (Exercises)
Exercises for Chapter 10 "Electrochemistry" in Fleming's Physical Chemistry Textmap.
• 10.S: Electrochemistry (Summary)
Summary for Chapter 10 "Electrochemistry" in Fleming's Physical Chemistry Textmap.

10: Electrochemistry

Electricity has been known for some time. Ancient Egyptians, for example, referred to electric fish in the Nile River as early as 2750 BC (Moller & Kramer, 1991). In 1600, William Gilbert studied what would later be seen to be electrostatic attraction by creating static charges rubbing amber (Stewart, 2001). And Benjamin Franklin's famous experiment (although it is actually uncertain if he performed the experiment) of attaching a metal key to a kite string occurred in 1752, and showed that lightning is an electrical phenomenon (Uman, 1987). One of the biggest breakthroughs in the study of electricity as a chemical phenomenon was made by Alessandro Volta, who in 1799 showed that electricity could be generated by stacking copper and zinc disks submerged in sulfuric acid (Routledge, 1881). The reactions that Volta produced in his voltaic pile included both oxidation and reduction processes that could be considered as half-reactions.
The half-reactions can be classified as oxidation (the loss of electrons), which happens at the anode, and reduction (the gain of electrons), which occurs at the cathode. Those half-reactions were

$\underbrace{Zn \rightarrow Zn^{2+} + 2 e^-}_{\text{anode}} \nonumber$

$\underbrace{2 H^+ + 2 e^- \rightarrow H_2}_{\text{cathode}} \nonumber$

The propensity of zinc to oxidize coupled with that of hydrogen to reduce creates a potential energy difference between the electrodes at which these processes occur. And like any potential energy difference, it can create a force which can be used to do work. In this case, the work is that of pushing electrons through a circuit. The work of such a process can be calculated by integrating

$dw_e = -E \,dQ \nonumber$

where $E$ is the potential energy difference, and $dQ$ is an infinitesimal amount of charge carried through the circuit. The infinitesimal amount of charge carried through the circuit can be expressed as

$dQ = e\,dN \nonumber$

where $e$ is the charge carried on one electron ($1.6 \times 10^{-19} C$) and $dN$ is the infinitesimal change in the number of electrons. Thus, if the potential energy difference is constant

$w_e = -e\,E \int_0^{N} dN = -N\,e\,E \nonumber$

But since the number of electrons carried through a circuit is an enormous number, it is far more convenient to express this in terms of the number of moles of electrons carried through the circuit. Noting that the number of moles ($n$) is given by

$n=\dfrac{N}{N_A} \nonumber$

the charge carried by one mole of electrons is given by

$F = N_A e = 96485\,C/mol \nonumber$

where $F$ is Faraday's constant and has the magnitude of one Faraday (the total charge carried by one mole of electrons). The Faraday is named after Michael Faraday (1791-1867) (Doc, 2014), a British physicist who is credited with inventing the electric motor, among other accomplishments.
Putting the pieces together, the total electrical work accomplished by pushing n moles of electrons through a circuit with a potential difference $E$ is

$w_e = -nFE \nonumber$

10.02: The Connection to ΔG

Recall that in addition to being used as a criterion for spontaneity, $\Delta G$ also indicates the maximum amount of non-p-V work a system can produce at constant temperature and pressure. And since $w_e$ is non-p-V work, it seems like a natural fit that

$\Delta G = -nFE \nonumber$

If all of the reactants and products in the electrochemical cell are in their standard states, it follows that

$\Delta G^o = -nFE^o \nonumber$

where $E^o$ is the standard cell potential. Noting that the molar Gibbs function change can be expressed in terms of the reaction quotient $Q$ by

$\Delta G = \Delta G^o + RT \ln Q \nonumber$

it follows that

$-nFE = -nFE^o + RT \ln Q \nonumber$

Dividing by $–nF$ yields

$E = E^o - \dfrac{RT}{nF} \ln Q \nonumber$

which is the Nernst equation. This relationship allows one to calculate the cell potential of an electrochemical cell as a function of the specific activities of the reactants and products. In the Nernst equation, n is the number of electrons transferred per reaction equivalent. For the specific reaction harnessed by Volta in his original battery, Eo = 0.763 V (at 25 oC) and $n = 2$. So if the Zn2+ and H+ ions are at a concentration that gives them unit activity, and the H2 gas is at a partial pressure that gives it unit fugacity:

$E = 0.763\,V - \dfrac{RT}{nF} \ln (1) = 0.763\,V \nonumber$
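The two working relationships of this section, $w_e = -nFE$ and the Nernst equation, can be sketched together in Python (the function names are illustrative; T defaults to 25 °C):

```python
import math

R = 8.314     # J/(mol K)
F = 96485.0   # C/mol (Faraday's constant)

def nernst(E0, n, Q, T=298.15):
    """Cell potential (V) from the Nernst equation, E = E0 - (RT/nF) ln Q."""
    return E0 - (R * T) / (n * F) * math.log(Q)

def electrical_work(n_mol, E):
    """Electrical work (J), w_e = -nFE, for n_mol moles of electrons."""
    return -n_mol * F * E

# Volta's cell with all species at unit activity: Q = 1, so E = E0
E = nernst(0.763, 2, 1.0)
print(E)                          # 0.763 V
print(electrical_work(2.0, E))    # ~ -1.47e5 J per mole of reaction
```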
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/10%3A_Electrochemistry/10.01%3A_Electricity.txt
Much like $G$ itself, $E$ can only be measured as a difference, so a convention is used to set a zero to the scale. Toward this end, convention sets the reduction potential of the standard hydrogen electrode (SHE) to 0.00 V. $\ce{Zn \rightarrow Zn^{2+} + 2e^{-}} \nonumber$ with $E_{ox}^o = 0.763\, V$ $\ce{2 H^{+} + 2 e^{-} \rightarrow H2 } \nonumber$ with $E_{red}^o = 0.000 \,V$ Standard Hydrogen Electrode The standard hydrogen electrode is constructed so that H2 gas flows over an inert electrode made of platinum, and can interact with an acid solution which provides H+ for the half reaction $\ce{2 H^+(aq) + 2 e^{-} -> H_2(g)} \nonumber$ Both H+ and H2 need to have unit activity (or fugacity), which if the solution and gas behave ideally means a concentration of 1 M and a pressure of 1 bar. Electrochemical Cells Standard reduction potentials can be measured relative to the convention of setting the reduction potential of the Standard Hydrogen Electrode (SHE) to zero. A number of values are shown in Table P1. Example $1$: Cell Potential and Spontaneity Which pair of reactants will produce a spontaneous reaction if everything is present in its standard state at 25 °C? • $\ce{Fe}$ and $\ce{Cu^{2+}}$ or • $\ce{Fe^{2+}}$ and $\ce{Cu}$ Solution The species with the larger standard reduction potential (Table P1) will force the other to oxidize. From the table, $\ce{Cu^{2+} + 2 e^{-} \rightarrow Cu} \nonumber$ with $E^o = 0.337\, V$ $\ce{Fe^{2+} + 2 e^{-} \rightarrow Fe} \nonumber$ with $E^o = -0.440\, V$ So the iron half-reaction will flip (so that iron is oxidizing) and the spontaneous reaction under standard conditions will be $Cu^{2+} + Fe \rightarrow Cu + Fe^{2+} \nonumber$ with $E^o = 0.777\, V$ Calculating Cell Potentials Using values measured relative to the SHE, it is fairly easy to calculate the standard cell potential of a given reaction.
For example, consider the reaction $\ce{ 2 Ag^{+}(aq) + Cu(s) \rightarrow 2 Ag(s) + Cu^{2+}(aq)} \nonumber$ Before calculating the cell potential, we should review a few definitions. The anode half-reaction, which is defined as the half-reaction in which oxidation occurs, is $\ce{Cu(s) \rightarrow Cu^{2+}(aq) + 2 e^{-}} \nonumber$ And the cathode half-reaction, defined as the half-reaction in which reduction takes place, is $\ce{Ag^+(aq) + e- \rightarrow Ag(s)}\nonumber$ Using standard cell notation, the conditions (such as the concentrations of the ions in solution) can be represented. In the standard cell notation, the anode is on the left-hand side, and the cathode on the right. The two are typically separated by a salt bridge, which is designated by a double vertical line. A single vertical line indicates a phase boundary. Hence for the reaction above, if the silver ions are at a concentration of 0.500 M, and the copper (II) ions are at a concentration of 0.100 M, the standard cell notation would be $\ce{Cu(s) | Cu^{2+}(aq, \,0.100\, M) || Ag^+ (aq,\, 0.500\, M) | Ag(s)} \nonumber$ Example $2$: Cell Potential Under Nonstandard Conditions Calculate the cell potential at 25 °C for the cell indicated by $\ce{Cu(s) | Cu^{2+}(aq, \,0.100\, M) || Ag^+ (aq,\, 0.500\, M) | Ag(s)} \nonumber$ Solution In order to calculate the cell potential ($E$), the standard cell potential must first be obtained. The standard cell potential at 25 °C is given by \begin{align*} E^o_{cell} &= E^o_{cathode} -E^o_{anode} \[4pt] &= 0.799 \,V - 0.337\,V \[4pt] &=0.462\,V \end{align*} And for a cell at non-standard conditions, such as those indicated above, the Nernst equation can be used to calculate the cell potential.
At 25 °C, the cell potential is given by \begin{align*} E_{cell} &= E^o_{cell} - \dfrac{RT}{nF} \ln \left( \dfrac{[Cu^{2+}]}{[Ag^+]^2} \right) \[4pt] &= 0.462\,V - \dfrac{(8.314 \,J/(mol\,K)) (298\,K) }{2(96484\,C)} \ln \left( \dfrac{0.100\,M}{(0.500\,M)^2} \right) \end{align*} Note that $[Ag^+]$ appears squared in the reaction quotient, since two silver ions are reduced per reaction equivalent. Noting that $1\, J/C = 1\, V$, $E = 0.474\,V \nonumber$ Example $3$: Cell Potential under Non-Standard Conditions Calculate the cell potential at 25 °C for the cell defined by $Ni(s) | Ni^{2+}\, (aq, \,0.500\, M) || Cu^{2+}(aq, \,0.100\, M) | Cu(s) \nonumber$ Solution We will use the Nernst equation. First, we need to determine $E^o$. Using Table P1, it is apparent that $\ce{Cu^{2+} + 2 e^{-} \rightarrow Cu } \nonumber$ with $E^o = 0.337 \,V$ $\ce{ Ni^{2+} + 2 e^{-} \rightarrow Ni} \nonumber$ with $E^o = -0.250\, V$ So copper, having the larger reduction potential, will be reduced at the cathode, forcing nickel to oxidize at the anode. So $E^o$ for the cell will be given by \begin{align*} E^o_{cell} &= E^o_{cathode} -E^o_{anode} \[4pt] &= 0.337 \,V -(-0.250\,V) \[4pt] &= 0.587\,V \end{align*} And the cell potential is then given by the Nernst Equation \begin{align*} E_{cell} &= E^o_{cell} - \dfrac{RT}{nF} \ln Q \[4pt] &= 0.587\,V - \dfrac{(8.314 \,J/(mol\,K)) (298\,K) }{2(96484\,C)} \ln \left( \dfrac{0.500\,M}{0.100\,M} \right) \[4pt] &= 0.566\,V \end{align*} Measuring the Voltage A typical galvanic electrochemical cell can be constructed similar to what is shown in the diagram above. The electrons flow from the anode (the electron source) to the cathode (the electron sink.) The salt bridge allows for the flow of ions to complete the circuit while minimizing the introduction of a junction potential. 10.04: Entropy of Electrochemical Cells The Gibbs function is related to entropy through its temperature dependence $\left( \dfrac{\partial \Delta G}{\partial T} \right)_p = - \Delta S \nonumber$ A similar relationship can be derived for the temperature dependence of $E^o$.
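Both cell-potential calculations can be reproduced with a few lines of Python (a sketch; the function and variable names are mine). The key detail is that each ion's concentration enters the reaction quotient raised to its stoichiometric coefficient, so $[Ag^+]$ is squared in the silver/copper cell.

```python
import math

R, F, T = 8.314, 96484, 298.0   # J/(mol K), C/mol, K

def cell_potential(E0, n, Q):
    """Nernst equation: E = E0 - (RT/nF) ln Q."""
    return E0 - (R * T / (n * F)) * math.log(Q)

# Cu(s) | Cu2+ (0.100 M) || Ag+ (0.500 M) | Ag(s):  Q = [Cu2+]/[Ag+]^2
E_ag = cell_potential(0.799 - 0.337, 2, 0.100 / 0.500**2)

# Ni(s) | Ni2+ (0.500 M) || Cu2+ (0.100 M) | Cu(s):  Q = [Ni2+]/[Cu2+]
E_ni = cell_potential(0.337 - (-0.250), 2, 0.500 / 0.100)
```

For the silver cell $Q < 1$, so the potential rises above $E^o$; for the nickel cell $Q > 1$, so it falls below.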
$nF \left( \dfrac{\partial E^o}{\partial T} \right)_p = \Delta S \label{eq2}$ Consider the following data for the Daniel cell (Buckbeei, Surdzial, & Metz, 1969), which is defined by the following reaction $Zn(s) + Cu^{2+}(aq) \rightleftharpoons Zn^{2+}(aq) + Cu(s) \nonumber$

T (°C):  0       10      20      25      30      40
Eo (V):  1.1028  1.0971  1.0929  1.0913  1.0901  1.0887

From a fit of the data to a quadratic function, the temperature dependence of $\left( \dfrac{\partial E^o}{\partial T} \right)_p \nonumber$ is easily established. The quadratic fit to the data results in $\left( \dfrac{\partial E^o}{\partial T} \right)_p = 3.8576 \times 10^{-6} \dfrac{V}{°C^2}(T) - 6.3810 \times 10^{-4} \dfrac{V}{°C} \nonumber$ So, at 25 °C, $\left( \dfrac{\partial E^o}{\partial T} \right)_p = -5.4166 \times 10^{-4}\, V/K \nonumber$ noting that $K$ can be substituted for $°C$, since a temperature difference has the same magnitude on either scale. So the entropy change, calculated from Equation \ref{eq2}, is $\Delta S = nF \left( \dfrac{\partial E^o}{\partial T} \right)_p = (2\,mol)(96484\,C/mol) (-5.4166 \times 10^{-4} V/K) \nonumber$ Because $1\,C \times 1\,V = 1\,J \nonumber$ the standard entropy change for the Daniel cell reaction at 25 °C is $\Delta S = -104.5\, J/(mol\,K). \nonumber$ It is the negative entropy change that leads to an increase in standard cell potential at lower temperatures. For a reaction such as $Pb(s) + 2 H^+(aq) \rightarrow Pb^{2+}(aq) + H_2(g) \nonumber$ which has a large increase in entropy (due to the production of a gas-phase product), the standard cell potential decreases with decreasing temperature. As this is the reaction used in most car batteries, it explains why it can be difficult to start one's car on a very cold winter morning. The topic of temperature dependence of several standard cell potentials is reported and discussed by Bratsch (Bratsch, 1989).
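The arithmetic of the last step is worth checking directly. A quick Python sketch (variable names are mine), starting from the fitted slope at 25 °C:

```python
F = 96484            # C/mol
n = 2                # moles of electrons per reaction equivalent
dE0_dT = -5.4166e-4  # V/K, the fitted slope of E0 vs T at 25 degrees C

# Since 1 C x 1 V = 1 J, the product comes out directly in J/(mol K)
dS = n * F * dE0_dT
```

The result, about -104.5 J/(mol K), matches the value quoted for the Daniel cell.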
10.05: Concentration Cells The generation of an electrostatic potential difference is dependent on the creation of a difference in chemical potential between two half-cells. One important manner in which this can be created is by creating a concentration difference. Using the Nernst equation, the potential difference for a concentration cell (one in which both half-cells involve the same half-reaction) can be expressed as $E = -\dfrac{RT}{nF} \ln \dfrac{[\text{oxidizing}]}{[\text{reducing}]} \nonumber$ where $[\text{oxidizing}]$ is the ion concentration in the half-cell where oxidation occurs (the anode) and $[\text{reducing}]$ is that in the half-cell where reduction occurs (the cathode). Example $1$ Calculate the cell potential (at 25 °C) for the concentration cell defined by $Cu(s) | Cu^{2+} (aq,\, 0.00100 \,M) || Cu^{2+} (aq,\, 0.100 \,M) | Cu(s) \nonumber$ Solution Since the oxidation and reduction half-reactions are the same, $E_{cell}^o =0\,V \nonumber$ The cell potential at 25 °C is calculated using the Nernst equation: $E = -\dfrac{RT}{nF} \ln Q \nonumber$ Substituting the values from the problem: \begin{align*} E_{cell} &= - \dfrac{(8.314 \,J/(mol\,K)) (298\,K) }{2(96484\,C)} \ln \left( \dfrac{0.00100\,M}{0.100\,M} \right) \[4pt] &= 0.059\,V \end{align*} 10.S: Electrochemistry (Summary) Vocabulary and Concepts • anode • cathode • concentration cell • half-reactions • Nernst equation • oxidation • reduction • salt bridge • standard cell notation • standard cell potential • Standard Hydrogen Electrode • voltaic pile
Chemical kinetics is the study of how fast chemical reactions proceed from reactants to products. This is an important topic because while thermodynamics will tell us about the direction of spontaneous change, it is silent as to how fast processes will occur. But additionally, the power of studying reaction rates is that it gives us insight into the actual pathways chemical processes follow to proceed from reactants to products. • 11.1: Reaction Rate The rate of a chemical reaction (or the reaction rate) can be defined by the time needed for a change in concentration to occur. But there is a problem in that this allows for the definition to be made based on concentration changes for either the reactants or the products. Plus, due to stoichiometric concerns, the rates at which the concentrations change are generally different! • 11.2: Measuring Reaction Rates There are several methods that can be used to measure chemical reaction rates. A common method is to use spectrophotometry to monitor the concentration of a species that will absorb light. If it is possible, it is preferable to measure the appearance of a product rather than the disappearance of a reactant, due to the low background interference of the measurement. • 11.3: Rate Laws A rate law is any mathematical relationship that relates the concentration of a reactant or product in a chemical reaction to time. Rate laws can be expressed in either derivative (or ratio, for finite time intervals) or integrated form. • 11.4: 0th order Rate Law If the reaction follows a zeroth order rate law, it can be expressed in terms of the time-rate of change of [A]. The solution of the differential equation suggests that a plot of concentration as a function of time will produce a straight line. • 11.5: 1st order rate law If the reaction follows a first order rate law, it can be expressed in terms of the time-rate of change of [A].
The solution of the differential equation suggests that a plot of log concentration as a function of time will produce a straight line. • 11.6: 2nd order Rate Laws If the reaction follows a second order rate law, it can be expressed in terms of the time-rate of change of [A]. The solution of the differential equation suggests that a plot of 1/concentration as a function of time will produce a straight line. • 11.7: The Method of Initial Rates The method of initial rates is a commonly used technique for deriving rate laws. As the name implies, the method involves measuring the initial rate of a reaction. The measurement is repeated for several sets of initial concentration conditions to see how the reaction rate varies. This might be accomplished by determining the time needed to exhaust a particular amount of a reactant (preferably one on which the reaction rate does not depend!) • 11.8: The Method of Half-Lives Another method for determining the order of a reaction is to examine the behavior of the half-life as the reaction progresses. The half-life can be defined as the time it takes for the concentration of a reactant to fall to half of its original value. The method of half-lives involves measuring the half-life's dependence on concentration. • 11.9: Temperature Dependence In general, increases in temperature increase the rates of chemical reactions. It is easy to see why, since most chemical reactions depend on molecular collisions. And as we discussed in Chapter 2, the frequency with which molecules collide increases with increased temperature. But also, the kinetic energy of the molecules increases, which should increase the probability that a collision event will lead to a reaction. An empirical model was proposed by Arrhenius to account for this phenomenon.
• 11.10: Collision Theory Collision Theory was first introduced in the 1910s by Max Trautz (Trautz, 1916) and William Lewis (Lewis, 1918) to try to account for the magnitudes of rate constants in terms of the frequency of molecular collisions, the collisional energy, and the relative orientations of the molecules involved in the collision. • 11.11: Transition State Theory Transition state theory was proposed in 1935 by Henry Eyring, and further developed by Meredith G. Evans and Michael Polanyi (Laidler & King, 1983), as another means of accounting for chemical reaction rates. It is based on the idea that a molecular collision that leads to reaction must pass through an intermediate state known as the transition state. • 11.E: Chemical Kinetics I (Exercises) Exercises for Chapter 11 "Chemical Kinetics I" in Fleming's Physical Chemistry Textmap. • 11.S: Chemical Kinetics I (Summary) Summary for Chapter 11 "Chemical Kinetics I" in Fleming's Physical Chemistry Textmap. 11: Chemical Kinetics I The rate of a chemical reaction (or the reaction rate) can be defined by the time needed for a change in concentration to occur. But there is a problem in that this allows for the definition to be made based on concentration changes for either the reactants or the products. Plus, due to stoichiometric concerns, the rates at which the concentrations change are generally different! Toward this end, the following convention is used.
For a general reaction $a A + b B \rightarrow c C + d D \nonumber$ the reaction rate can be defined by any of the ratios $\text{rate} = - \dfrac{1}{a} \dfrac{\Delta [A]}{\Delta t} = - \dfrac{1}{b} \dfrac{\Delta[B]}{\Delta t} = + \dfrac{1}{c} \dfrac{\Delta [C]}{\Delta t} = + \dfrac{1}{d} \dfrac{ \Delta [D]}{\Delta t} \nonumber$ Or for infinitesimal time intervals $\text{rate} = - \dfrac{1}{a} \dfrac{d[A]}{dt} = - \dfrac{1}{b} \dfrac{d[B]}{dt} = + \dfrac{1}{c} \dfrac{d[C]}{dt} = + \dfrac{1}{d} \dfrac{d[D]}{dt} \nonumber$ Example $1$: Under a certain set of conditions, the rate of the reaction $N_2 + 3 H_2 \rightarrow 2 NH_3 \nonumber$ is $6.0 \times 10^{-4}\, M/s$. Calculate the time-rate of change for the concentrations of N2, H2, and NH3. Solution Due to the stoichiometry of the reaction, $\text{rate} = - \dfrac{d[N_2]}{dt} = - \dfrac{1}{3} \dfrac{d[H_2]}{dt} = + \dfrac{1}{2} \dfrac{d[NH_3]}{dt} \nonumber$ so $\dfrac{d[N_2]}{dt} = -6.0 \times 10^{-4} \,M/s \nonumber$ $\dfrac{d[H_2]}{dt} = -1.8 \times 10^{-3} \,M/s \nonumber$ $\dfrac{d[NH_3]}{dt} = +1.2 \times 10^{-3} \,M/s \nonumber$ Note: The time derivatives for the reactants are negative because the reactant concentrations are decreasing, and those of products are positive since the concentrations of products increase as the reaction progresses.
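The bookkeeping in this example generalizes: multiply the reaction rate by each species' signed stoichiometric coefficient. A minimal sketch (the helper function is mine, not from the text):

```python
def species_rates(rate, coeffs):
    """Given the reaction rate and signed stoichiometric coefficients
    (negative for reactants, positive for products), return d[X]/dt."""
    return {species: nu * rate for species, nu in coeffs.items()}

# N2 + 3 H2 -> 2 NH3, with rate = 6.0e-4 M/s
r = species_rates(6.0e-4, {"N2": -1, "H2": -3, "NH3": +2})
```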
There are several methods that can be used to measure chemical reaction rates. A common method is to use spectrophotometry to monitor the concentration of a species that will absorb light. If it is possible, it is preferable to measure the appearance of a product rather than the disappearance of a reactant, due to the low background interference of the measurement. However, high-quality kinetic data can be obtained either way. The Stopped-Flow Method The stopped-flow method involves using flow control (which can be provided by syringes or other valves) to control the flow of reactants into a mixing chamber where the reaction takes place. The reaction mixture can then be probed spectrophotometrically. Stopped-flow methods are commonly used in physical chemistry laboratory courses (Progodich, 2014). Some methods depend on measuring the initial rate of a reaction, which can be subject to a great deal of experimental uncertainty due to fluctuations in instrumentation or conditions. Other methods require a broad range of time and concentration data. These methods tend to produce more reliable results as they can make use of the broad range of data to smooth over random fluctuations that may affect measurements. Both approaches (initial rates and full concentration profile data methods) will be discussed below. 11.03: Rate Laws A rate law is any mathematical relationship that relates the concentration of a reactant or product in a chemical reaction to time. Rate laws can be expressed in either derivative (or ratio, for finite time intervals) or integrated form. One of the more common general forms a rate law for the reaction $A + B \rightarrow products \nonumber$ may take is $\text{rate}=k[A]^\alpha[B]^\beta \nonumber$ where $k$, $\alpha$, and $\beta$ are experimentally determined values. However, a rate law can take many different forms, some of which can be quite intricate and complex. The powers $\alpha$ and $\beta$ need not be integers.
For example $\text{rate}=k[A]^\alpha [B]^{1/2} \label{ex2}$ is a rate law that is observed for some reactions. Sometimes, the concentrations of products must be included. $\text{rate}=\dfrac{k[A]^{1/2}[B]}{[P]} \nonumber$ In some cases, the concentration of a catalyst or enzyme is important. For example, many enzyme-mediated reactions in biological systems follow the Michaelis-Menten rate law, which is of the form $\text{rate}=\dfrac{V_{max}[S]}{K_M + [S]} \nonumber$ where $V_{max}$ and $K_M$ are factors that are determined experimentally, and $[S]$ is the concentration of the substrate in the reaction. Order For those cases where the rate law can be expressed in the form $\text{rate}=k[A]^\alpha[B]^\beta[C]^\gamma \nonumber$ where $A$, $B$, and $C$ are reactants (or products or catalysts, etc.) involved in the reaction, the reaction is said to be of $\alpha$ order in $A$, $\beta$ order in $B$, and $\gamma$ order in $C$. The reaction is said to be $\alpha +\beta + \gamma$ order overall. Some examples are shown in the following table:

Table 11.3.1: Example Rate Laws

Rate law | Order in A | Order in B | Order in C | Overall order
$\text{rate}=k$ | 0 | 0 | 0 | 0
$\text{rate}=k[A]$ | 1 | 0 | 0 | 1
$\text{rate}=k[A]^2$ | 2 | 0 | 0 | 2
$\text{rate}=k[A][B]$ | 1 | 1 | 0 | 2
$\text{rate}=k[A]^2[B]$ | 2 | 1 | 0 | 3
$\text{rate}=k[A][B][C]$ | 1 | 1 | 1 | 3

Reaction orders can also be fractional, such as for Equation \ref{ex2}, which is 1st order in $A$ and half order in $B$. The order can also be negative, such as $\text{rate}=k \dfrac{[A]}{[B]} \nonumber$ which is 1st order in A, and -1 order in B. In this case, a build-up of the concentration of B will retard (slow) the reaction. In all cases, the order of the reaction with respect to a specific reactant or product (or catalyst, or whatever) must be determined experimentally. As a general rule, the stoichiometry cannot be used to predict the form of the rate law.
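The Michaelis-Menten form above is a good illustration of a rate law whose effective order changes with concentration: at low $[S]$ it is approximately first order in substrate (rate $\approx (V_{max}/K_M)[S]$), while at high $[S]$ it saturates to zeroth order (rate $\approx V_{max}$). A short numerical sketch (the parameter values are illustrative, not from the text):

```python
def mm_rate(S, Vmax, Km):
    """Michaelis-Menten rate law: rate = Vmax [S] / (Km + [S])."""
    return Vmax * S / (Km + S)

Vmax, Km = 1.0, 0.10   # illustrative values in arbitrary units

low  = mm_rate(1e-4, Vmax, Km)   # [S] << Km: rate ~ (Vmax/Km)[S], 1st order
high = mm_rate(100.0, Vmax, Km)  # [S] >> Km: rate ~ Vmax, 0th order
```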
However, the rate law can be used to gain some insight into the possible pathways by which the reaction can proceed. That is the topic of Chapter 12. For now we will focus on three useful methods that are commonly used in chemistry to determine the rate law for a reaction from experimental data. Empirical Methods Perhaps the simplest of the methods to be used are the empirical methods, which rely on the qualitative interpretation of a graphical representation of the concentration vs time profile. In these methods, some function of concentration is plotted as a function of time, and the result is examined for a linear relationship. For the following examples, consider a reaction of the form $A + B \rightarrow products \nonumber$ in which A is one of the reactants. In order to employ these empirical methods, one must generate the forms of the integrated rate laws. 11.04: 0th order Rate Law If the reaction follows a zeroth order rate law, it can be expressed in terms of the time-rate of change of [A] (which will be negative since A is a reactant): $-\dfrac{d[A]}{dt} = k \nonumber$ In this case, it is straightforward to separate the variables. Placing time variables on the right and [A] on the left $d[A] = - k \,dt \nonumber$ In this form, it is easy to integrate. If the concentration of A is [A]0 at time t = 0, and the concentration of A is [A] at some arbitrary time later, the form of the integral is $\int _{[A]_o}^{[A]} d[A] = - k \int _{0}^{t}\,dt \nonumber$ which yields $[A] - [A]_o = -kt \nonumber$ or $[A] = [A]_o -kt \nonumber$ This suggests that a plot of concentration as a function of time will produce a straight line, the slope of which is –k, and the intercept of which is [A]0. If such a plot is linear, then the data are consistent with 0th order kinetics. If they are not, other possibilities must be considered.
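The predicted linearity can be seen without a plot: concentrations generated from $[A] = [A]_o - kt$ drop by the same amount in every equal time step. A minimal sketch (the values of $k$ and $[A]_o$ are illustrative):

```python
k, A0 = 0.005, 1.00   # illustrative rate constant (M/s) and initial conc. (M)

times = [0, 10, 20, 30, 40]
A = [A0 - k * t for t in times]   # [A] = [A]0 - kt

# equal time steps give equal concentration drops: the plot is a straight line
diffs = [round(a2 - a1, 12) for a1, a2 in zip(A, A[1:])]
```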
11.05: 1st order rate law A first order rate law would take the form $-\dfrac{d[A]}{dt} = k[A] \nonumber$ Again, separating the variables by placing all of the concentration terms on the left and all of the time terms on the right yields $\dfrac{d[A]}{[A]} =-k\,dt \nonumber$ This expression is also easily integrated as before $\int_{[A]_o}^{[A]} \dfrac{d[A]}{[A]} =-k \int_{0}^{t}\,dt \nonumber$ Noting that $\dfrac{dx}{x} = d (\ln x) \nonumber$ the form of the integrated rate law becomes $\ln [A] - \ln [A]_o = -kt \nonumber$ or $\ln [A] = \ln [A]_o - kt \label{In1}$ This form implies that a plot of the natural logarithm of the concentration is a linear function of the time. And so a plot of ln[A] as a function of time should produce a linear plot, the slope of which is -k, and the intercept of which is ln[A]0. Example $1$: Consider the following kinetic data. Use a graph to demonstrate that the data are consistent with first order kinetics. Also, if the data are first order, determine the value of the rate constant for the reaction.

Time (s):  0      10     20     50     100    150    200    250    300
[A] (M):   0.873  0.752  0.648  0.414  0.196  0.093  0.044  0.021  0.010

Solution The plot looks as follows: From this plot, it can be seen that the rate constant is 0.0149 s-1. The concentration at time $t = 0$ can also be inferred from the intercept. It should also be noted that the integrated rate law (Equation \ref{In1}) can be expressed in exponential form: $[A] = [A]_o e^{-kt} \nonumber$ Because of this functional form, 1st order kinetics are sometimes referred to as exponential decay kinetics. Many processes, including radioactive decay of nuclides follow this type of rate law.
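The fit in the example can be reproduced with an ordinary least-squares line through $\ln[A]$ versus $t$ (plain Python; no plotting, just the slope):

```python
import math

t = [0, 10, 20, 50, 100, 150, 200, 250, 300]
A = [0.873, 0.752, 0.648, 0.414, 0.196, 0.093, 0.044, 0.021, 0.010]
y = [math.log(a) for a in A]

# least-squares slope of ln[A] vs t
n = len(t)
tbar, ybar = sum(t) / n, sum(y) / n
slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))

k = -slope   # for first-order decay, slope = -k
```

The recovered slope is negative and gives $k \approx 0.0149\, s^{-1}$, in agreement with the graphical result.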
If the reaction follows a second order rate law, the same methodology can be employed. The rate can be written as $-\dfrac{d[A]}{dt} = k [A]^2 \label{eq1A}$ The separation of concentration and time terms (this time keeping the negative sign on the left for convenience) yields $-\dfrac{d[A]}{[A]^2} = k \,dt \nonumber$ The integration then becomes $- \int_{[A]_o}^{[A]} \dfrac{d[A]}{[A]^2} = \int_{t=0}^{t}k \,dt \label{eq1}$ And noting that $- \dfrac{dx}{x^2} = d \left(\dfrac{1}{x} \right) \nonumber$ the result of integrating Equation \ref{eq1} is $\dfrac{1}{[A]} -\dfrac{1}{[A]_o} = kt \nonumber$ or $\dfrac{1}{[A]} = \dfrac{1}{[A]_o} + kt \nonumber$ And so a plot of $1/[A]$ as a function of time should produce a linear plot, the slope of which is $k$, and the intercept of which is $1/[A]_0$. Other 2nd order rate laws are a little bit trickier to integrate, as the integration depends on the actual stoichiometry of the reaction being investigated. For example, for a reaction of the type $A + B \rightarrow P \nonumber$ that has rate laws given by $-\dfrac{d[A]}{dt} = k [A][B] \nonumber$ and $-\dfrac{d[B]}{dt} = k [A][B] \nonumber$ the integration will depend on the decrease of [A] and [B] (which will be related by the stoichiometry), which can be expressed in terms of the concentration of the product [P]. $[A] = [A]_o - [P] \label{eqr1}$ and $[B] = [B]_o - [P]\label{eqr2}$ The concentration dependence on $A$ and $B$ can then be eliminated if the rate law is expressed in terms of the production of the product.
$\dfrac{d[P]}{dt} = k [A][B] \label{rate2}$ Substituting the relationships for $[A]$ and $[B]$ (Equations \ref{eqr1} and \ref{eqr2}) into the rate law expression (Equation \ref{rate2}) yields $\dfrac{d[P]}{dt} = k ( [A]_o - [P]) ([B]_o - [P]) \label{rate3}$ Separation of concentration and time variables results in $\dfrac{d[P]}{( [A]_o - [P]) ([B]_o - [P])} = k\,dt \nonumber$ Noting that at time $t = 0$, $[P] = 0$, the integrated form of the rate law can be generated by solving the integral $\int_{0}^{[P]} \dfrac{d[P]}{( [A]_o - [P]) ([B]_o - [P])} = \int_{0}^{t} k\,dt \nonumber$ Consulting a table of integrals reveals that for $a \neq b$[1], $\int \dfrac{dx}{(a-x)(b-x)} = \dfrac{1}{b-a} \ln \left(\dfrac{b-x}{a-x} \right) \nonumber$ Applying the definite integral (as long as $[A]_0 \neq [B]_0$) results in $\left. \dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B]_0-[P]}{[A]_0-[P]} \right) \right |_0^{[P]} = \left. k\, t \right|_0^t \nonumber$ $\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B]_0-[P]}{[A]_0-[P]} \right) -\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B]_0}{[A]_0} \right) =k\, t \label{finalint}$ Substituting Equations \ref{eqr1} and \ref{eqr2} into Equation \ref{finalint} and simplifying (combining the natural logarithm terms) yields $\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B][A]_o}{[A][B]_o} \right) = kt \nonumber$ For this rate law, a plot of $\ln([B]/[A])$ as a function of time will produce a straight line, the slope of which is $m = ([B]_0 - [A]_0)k. \nonumber$ In the limit that $[A]_0 = [B]_0$, $[A] = [B]$ at all times, due to the stoichiometry of the reaction. As such, the rate law becomes $\text{rate} = k [A]^2 \nonumber$ and integrating directly, as in Equation \ref{eq1A}, gives the integrated rate law (as before) $\dfrac{1}{[A]} = \dfrac{1}{[A]_o} + kt \nonumber$ Example $2$: Confirming Second Order Kinetics Consider the following kinetic data. Use a graph to demonstrate that the data are consistent with second order kinetics.
Also, if the data are second order, determine the value of the rate constant for the reaction.

time (s):  0      10     30     60     100    150    200
[A] (M):   0.238  0.161  0.098  0.062  0.041  0.029  0.023

Solution The plot looks as follows: From this plot, it can be seen that the rate constant is 0.200 M-1 s-1. The concentration at time $t = 0$ can also be inferred from the intercept. [1] This integral form can be generated by using the method of partial fractions. See (House, 2007) for a full derivation. 11.07: The Method of Initial Rates The method of initial rates is a commonly used technique for deriving rate laws. As the name implies, the method involves measuring the initial rate of a reaction. The measurement is repeated for several sets of initial concentration conditions to see how the reaction rate varies. This might be accomplished by determining the time needed to exhaust a particular amount of a reactant (preferably one on which the reaction rate does not depend!) A typical set of data for a reaction $A + B \rightarrow products \nonumber$ might appear as follows:

Run | [A] (M) | [B] (M) | Rate (M/s)
1 | 0.0100 | 0.0100 | 0.0347
2 | 0.0200 | 0.0100 | 0.0694
3 | 0.0200 | 0.0200 | 0.2776

The analysis of this data involves taking the ratios of rates measured where one of the concentrations does not change. For example, assuming a rate law of the form $\text{rate} = k [A]^{\alpha}[B]^{\beta} \label{orRL}$ the ratio of runs $i$ and $j$ generates the following relationship. $\dfrac{\text{rate}_i}{\text{rate}_j} = \dfrac{k [A]_i^{\alpha}[B]_i^{\beta}}{k [A]_j^{\alpha}[B]_j^{\beta}} \nonumber$ So using runs $1$ and $2$, $\dfrac{0.0347\, M/s}{0.0694\, M/s} = \dfrac{\cancel{k} (0.01\,M)^{\alpha} \cancel{(0.01\,M)^{\beta}}}{\cancel{k} (0.02\,M)^{\alpha} \cancel{(0.01\,M)^{\beta}}} \nonumber$ this simplifies to $\dfrac{1}{2} = \left( \dfrac{1}{2} \right)^{\alpha} \nonumber$ So clearly, $\alpha = 1$ and the reaction is 1st order in $A$.
Taking the ratio using runs 2 and 3 yields $\dfrac{0.0694\, M/s}{0.2776\, M/s} = \dfrac{\cancel{k} \cancel{(0.02\,M)^{\alpha}} (0.01\,M)^{\beta}}{\cancel{k} \cancel{(0.02\,M)^{\alpha}} (0.02\,M)^{\beta}} \nonumber$ This simplifies to $\dfrac{1}{4} = \left( \dfrac{1}{2} \right)^{\beta} \label{Me1}$ By inspection, one can conclude that $\beta = 2$, and that the reaction is second order in B. But if it is not so clear (as it might not be if the concentration is not incremented by a factor of 2), the value of $\beta$ can be determined by taking the natural logarithm of both sides of Equation $\ref{Me1}$. $\ln \dfrac{1}{4} = \ln \left( \dfrac{1}{2} \right)^{\beta} \nonumber$ $= \beta \ln \left( \dfrac{1}{2} \right) \nonumber$ Dividing both sides by $\ln(1/2)$ $\dfrac{ \ln\left( \dfrac{1}{4} \right)}{\ln\left( \dfrac{1}{2} \right)} =\beta \dfrac{ \ln \left( \dfrac{1}{2} \right)}{\ln \left( \dfrac{1}{2} \right)} \nonumber$ or $\beta = \dfrac{-1.3863}{-0.69315} = 2 \nonumber$ And so the rate law (Equation \ref{orRL}) can be expressed as $\text{rate} = k [A][B]^{2} \nonumber$ and is 1st order in A, 2nd order in B, and 3rd order overall. The rate constant can then be evaluated by substituting one of the runs into the rate law (or using all of the data and taking an average). Arbitrarily selecting the first run for this, $0.0347 \,M/s = k (0.01 \, M)(0.01 \, M)^{2} \nonumber$ This results in a value of $k$ $k = \dfrac{0.0347 \,M/s} {(0.01 \, M)(0.01 \, M)^2} = 3.47 \times 10^{4} \, M^{-2} s^{-1} \nonumber$ It is useful to note that the units on $k$ are consistent with a 3rd order rate law.
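The ratio-and-logarithm procedure can be automated. The sketch below (function and variable names are mine) recovers the orders and the rate constant from the three runs:

```python
import math

runs = [  # ([A], [B], initial rate) in M and M/s
    (0.0100, 0.0100, 0.0347),
    (0.0200, 0.0100, 0.0694),
    (0.0200, 0.0200, 0.2776),
]

def order(c_i, c_j, r_i, r_j):
    """Exponent from two runs in which only one concentration changes."""
    return math.log(r_i / r_j) / math.log(c_i / c_j)

alpha = order(runs[0][0], runs[1][0], runs[0][2], runs[1][2])  # vary [A]
beta  = order(runs[1][1], runs[2][1], runs[1][2], runs[2][2])  # vary [B]

# evaluate k from the first run: rate = k [A]^alpha [B]^beta
A, B, rate = runs[0]
k = rate / (A ** round(alpha) * B ** round(beta))
```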
Another method for determining the order of a reaction is to examine the behavior of the half-life as the reaction progresses. The half-life can be defined as the time it takes for the concentration of a reactant to fall to half of its original value. The method of half-lives involves measuring the half-life's dependence on concentration. The expected behavior can be predicted using the integrated rate laws we derived earlier. Using the definition of the half-life, at time $t_{1/2}$ the concentration $[A]$ drops to half of its original value, $[A]_0$. $[A] = \dfrac{1}{2} [A]_o \nonumber$ at $t=t_{1/2}$. So if the reaction is 0th order in $A$, after one half-life $\dfrac{1}{2} [A]_o = [A]_o - k t_{1/2} \nonumber$ Solving for $t_{1/2}$ reveals the dependence of the half-life on the initial concentration. $t_{1/2} = \dfrac{[A]_o}{2k} \nonumber$ So as the original concentration is decreased, the half-life of a 0th order reaction will also decrease. Similarly, for a first order reaction, $\dfrac{1}{2} [A]_o = [A]_o e^{- k t_{1/2}} \nonumber$ and solving for $t_{1/2}$ results in a concentration independent expression $t_{1/2} = \dfrac{\ln 2}{k} \nonumber$ It is because the half-life of a 1st order reaction is independent of concentration that it is oftentimes used to describe the rate of first order processes (such as radioactive decay.) For a 2nd order reaction, the half-life can be expressed based on the integrated rate law. $\dfrac{1}{\dfrac{1}{2} [A]_o} = \dfrac{1}{[A]_o} + k t_{1/2} \nonumber$ Solving for $t_{1/2}$ yields $t_{1/2} = \dfrac{1}{k [A]_o} \nonumber$ In the case of a second order reaction, the half-life increases with decreasing initial concentration.
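The three solved half-life expressions can be collected into one small sketch (the values of $k$ and $[A]_o$ are illustrative; note which cases depend on $[A]_o$):

```python
import math

def half_life(order, k, A0=None):
    """Half-life for the simple 0th, 1st, and 2nd order rate laws."""
    if order == 0:
        return A0 / (2 * k)       # shrinks as [A]0 falls
    if order == 1:
        return math.log(2) / k    # independent of [A]0
    if order == 2:
        return 1 / (k * A0)       # grows as [A]0 falls
    raise ValueError("only orders 0, 1, and 2 are handled here")

k, A0 = 0.10, 1.0   # illustrative values in consistent units
t0 = half_life(0, k, A0)
t1 = half_life(1, k)
t2 = half_life(2, k, A0)
```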
Table $1$: Calculated half-lives for reactions following simple rate laws

Order | Half-life | Behavior
0th | $t_{1/2} = \dfrac{[A]_o}{2k}$ | Decreases as the reaction progresses (as [A] decreases)
1st | $t_{1/2} = \dfrac{\ln 2}{k}$ | Remains constant as the reaction progresses (is independent of concentration)
2nd | $t_{1/2} = \dfrac{1}{k[A]_o}$ | Increases with decreasing concentration

For reactions in which the rate law depends on the concentration of more than one species, the half-life can take a much more complex form that may depend on the initial concentrations of multiple reactants, or for that matter, products! Example $1$: Radiocarbon Dating Carbon-14 decays into nitrogen-14 with first order kinetics and with a half-life of 5730 years. $\ce{^{14}C} \rightarrow \ce{^{14}N} \nonumber$ What is the rate constant for the decay process? What percentage of carbon-14 will remain after a biological sample has stopped ingesting carbon-14 for 1482 years? Solution The rate constant is fairly easy to calculate: $k = \dfrac{\ln 2}{t_{1/2}} = \dfrac{\ln 2}{5730\, yr} = 1.21 \times 10^{-4} \,yr^{-1} \nonumber$ Now the integrated rate law can be used to solve the second part of the problem. $[\ce{^{14}C} ] = [\ce{^{14}C}]_o e^{-k t} \nonumber$ This can be rewritten in terms of the relative loss of $[\ce{^{14}C} ]$: $\dfrac{[\ce{^{14}C} ] }{[\ce{^{14}C}]_o} = e^{-k t} \nonumber$ so $\dfrac{[\ce{^{14}C} ] }{[\ce{^{14}C}]_o} = e^{- (1.21 \times 10^{-4} \,yr^{-1} )(1482 \,yr)} = 0.836 \nonumber$ So after 1482 years, 83.6 % of the $\ce{^{14}C}$ is still left. Example $2$: Based on the following concentration data as a function of time, determine the behavior of the half-life as the reaction progresses. Use this information to determine if the following reaction is 0th order, 1st order, or 2nd order in A. Also, use the data to estimate the rate constant for the reaction.
time (s) [A] (M) 0 1.200 10 0.800 20 0.600 30 0.480 40 0.400 50 0.343 60 0.300 70 0.267 80 0.240 90 0.218 100 0.200 Solution If the original concentration is taken as 1.200 M, half of the original concentration is 0.600 M. The reaction takes 20 seconds to reduce the concentration to half of its original value. If the original concentration is taken as 0.800 M, it clearly takes 30 seconds for the concentration to reach half of that value. Based on this methodology, the following table is easy to generate: $[A]_o$ (M) 1.200 0.800 0.600 0.400 $t_{1/2}$ (s) 20 30 40 60 The half-life increases as the initial concentration decreases, which is the behavior expected of a 2nd order reaction. The rate constant can be calculated using any of these values: \begin{align} k &=\dfrac{1}{[A]_o t_{1/2}} \\[4pt] &= \dfrac{1}{(0.800\,M)(30\,s)} \\[4pt] &= 0.0417 \, M^{-1}s^{-1} \end{align} \nonumber
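The bookkeeping in this example can be automated. In the sketch below (the `half_life` helper is a hypothetical convenience written just for this data set), successive half-lives are read directly off the table, and $1/([A]_o\, t_{1/2})$ gives the same rate constant for every starting concentration, as expected for 2nd order kinetics:

```python
# Concentration-vs-time data from the example (t in s, [A] in M)
data = {0: 1.200, 10: 0.800, 20: 0.600, 30: 0.480, 40: 0.400,
        50: 0.343, 60: 0.300, 70: 0.267, 80: 0.240, 90: 0.218, 100: 0.200}

def half_life(c0):
    """Time for the concentration to fall from c0 to c0/2 (grid values only)."""
    t0 = next(t for t, c in data.items() if abs(c - c0) < 1e-9)
    t1 = next(t for t, c in data.items() if abs(c - c0 / 2) < 1e-9)
    return t1 - t0

halves = {c0: half_life(c0) for c0 in (1.200, 0.800, 0.600, 0.400)}
# The half-life grows as [A]0 shrinks (2nd order behavior), and
# 1/([A]0 * t_half) gives the same k for every starting concentration:
k_estimates = [1 / (c0 * t) for c0, t in halves.items()]
```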
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/11%3A_Chemical_Kinetics_I/11.08%3A_The_Method_of_Half-Lives.txt
In general, increases in temperature increase the rates of chemical reactions. It is easy to see why, since most chemical reactions depend on molecular collisions. And as we discussed in Chapter 2, the frequency with which molecules collide increases with increased temperature. But also, the kinetic energy of the molecules increases, which should increase the probability that a collision event will lead to a reaction. An empirical model was proposed by Arrhenius to account for this phenomenon. The Arrhenius model (Arrhenius, 1889) can be expressed as $k = A e^{-E_a/RT} \nonumber$ Although the model is empirical, some of the parameters can be interpreted in terms of the energy profile of the reaction. $E_a$, for example, is the activation energy, which represents the energy barrier that must be overcome in a collision to lead to a reaction. If the rate constant for a reaction is measured at two temperatures, the activation energy can be determined by taking the ratio. This leads to the following expression for the Arrhenius model: $\ln \left( \dfrac{k_2}{k_1} \right) = - \dfrac{E_a}{R} \left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right) \label{Arrhenius}$ Example $1$: For a given reaction, the rate constant doubles when the temperature is increased from 25 °C to 35 °C. What is the Arrhenius activation energy for this reaction? Solution The energy of activation can be calculated from the Arrhenius Equation (Equation \ref{Arrhenius}).
$\ln \left( \dfrac{2k_1}{k_1} \right) = - \dfrac{E_a}{8.314 \, \dfrac{J}{mol\, K}} \left( \dfrac{1}{308\,K} - \dfrac{1}{298\,K} \right) \nonumber$ Solving for the activation energy yields $E_a = 52.9\, kJ/mol \nonumber$ Preferably, however, the rate constant is measured at several temperatures, and then the activation energy can be determined using all of the measurements, by fitting them to the expression $\ln (k) = - \dfrac{E_a}{RT} + \ln (A) \nonumber$ This can be done graphically by plotting the natural logarithm of the rate constant as a function of $1/T$ (with the temperature measured in K). The result should be a straight line (for a well-behaved reaction!) with a slope of $–E_a/R$. There are some theoretical models (such as collision theory and transition state theory) which suggest the form of the Arrhenius model, but the model itself is purely empirical. A general feature, however, of the theoretical approaches is to interpret the activation energy as an energy barrier which must be overcome in a collision in order to lead to a chemical reaction. 11.10: Collision Theory Collision Theory was first introduced in the 1910s by Max Trautz (Trautz, 1916) and William Lewis (Lewis, 1918) to try to account for the magnitudes of rate constants in terms of the frequency of molecular collisions, the collisional energy, and the relative orientations of the molecules involved in the collision. The rate of a reaction, according to collision theory, can be expressed as $\text{rate} = Z_{AB} F \label{majorrate}$ where $Z_{AB}$ is the frequency of collisions between the molecules $A$ and $B$ involved in the reaction, and $F$ is the fraction of those collisions that will lead to a reaction. The factor $F$ has two important contributors, the energy of the collision and the orientation of the molecules when they collide. The first term, $Z_{AB}$, can be taken from the kinetic molecular theory discussed in Chapter 2.
$Z_{AB} = \left( \dfrac{8k_BT}{\pi \mu} \right)^{1/2} \sigma_{AB} [A][B] \label{collision}$ where the first factor is the average relative speed, in which $\mu$ is the reduced mass of the A-B collisional system, $\sigma_{AB}$ is the collisional cross section, and $[A]$ and $[B]$ are the concentrations of $A$ and $B$. The factor $F$ depends on the activation energy. Assuming a Boltzmann (or Boltzmann-like) distribution of energies, the fraction of molecular collisions that will have enough energy to overcome the activation barrier is given by $F= e^{-E_a/RT} \label{arr}$ Combining Equations \ref{collision} and \ref{arr}, the rate of the reaction (Equation \ref{majorrate}) is predicted by $\text{rate} = \left( \dfrac{8k_BT}{\pi \mu} \right) ^{1/2} \sigma_{AB} e^{-E_a/RT} [A][B] \nonumber$ So if the rate law can be expressed as a second order rate law $\text{rate} = k [A][B] \nonumber$ it is clear that the rate constant $k$ is given by $k=\left( \dfrac{8k_BT}{\pi \mu} \right)^{1/2} \sigma_{AB} e^{-E_a/RT} \nonumber$ By comparison, the theory predicts the form of the Arrhenius prefactor to be $A= \left( \dfrac{8k_BT}{\pi \mu} \right)^{1/2} \sigma_{AB} \nonumber$ It should be noted that collision theory appears to apply only to bimolecular reactions, since it takes two molecules to collide. But there are many reactions that have first order rate laws, but are initiated by bimolecular steps in the mechanisms. (Reaction mechanisms will form a large part of the discussion in Chapter 12.) Consider, as an example, the decomposition of $N_2O_5$, which follows the reaction $2 N_2O_5 \rightarrow 4 NO_2 + O_2 \nonumber$ Under a certain set of conditions, the following concentrations are observed as a function of time.
time (s) $[N_2O_5]$ (M) $[NO_2]$ (M) $[O_2]$ (M) 0 0.0260 0.0000 0.0000 100 0.0219 0.0081 0.0016 200 0.0185 0.0150 0.0030 300 0.0156 0.0207 0.0041 400 0.0132 0.0256 0.0051 500 0.0111 0.0297 0.0059 600 0.0094 0.0332 0.0066 700 0.0079 0.0361 0.0072 Graphically, these data look as follows: The data for $N_2O_5$ can be analyzed empirically to show that the reaction is first order in $N_2O_5$, with a rate constant of 1.697 x 10-3 s-1. (The graph is shown below.) So the rate law for the reaction is $\text{rate} = 1.697 \times 10^{-3}s^{-1} [N_2O_5] \nonumber$ So how can collision theory be used to understand the rate constant? As it turns out, the mechanism for the reaction has a bimolecular initiation step $N_2O_5 + M \rightleftharpoons N_2O_5^* + M \nonumber$ where $N_2O_5^*$ is an energetically activated form of $N_2O_5$ which can either relax to reform $N_2O_5$ or decompose to form the products of the reaction. Because the initiation step is bimolecular, collision theory can be used to understand the rate law, but because the activated species formed in that step undergoes slow conversion to products unimolecularly, the overall rate is observed to be first order in $N_2O_5$. The analysis of reaction mechanisms, and reconciliation with observed rate laws, form the subjects of Chapter 12.
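The claim that these data are first order in $N_2O_5$ is easy to check: a least-squares fit of $\ln [N_2O_5]$ versus $t$ should be linear, with slope $-k$. A minimal sketch:

```python
import math

t = [0, 100, 200, 300, 400, 500, 600, 700]     # s
c = [0.0260, 0.0219, 0.0185, 0.0156,
     0.0132, 0.0111, 0.0094, 0.0079]           # [N2O5], M
y = [math.log(x) for x in c]

# least-squares slope of ln[N2O5] vs t; for 1st order kinetics, slope = -k
n = len(t)
tbar, ybar = sum(t) / n, sum(y) / n
slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))
k = -slope  # close to the quoted value of about 1.70e-3 s^-1
```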
Transition state theory was proposed in 1935 by Henry Eyring, and further developed by Meredith G. Evans and Michael Polanyi (Laidler & King, 1983), as another means of accounting for chemical reaction rates. It is based on the idea that a molecular collision that leads to reaction must pass through an intermediate state known as the transition state. For example, the reaction $A + BC \rightarrow AB + C \nonumber$ would have an intermediate ($ABC$) where the $B-C$ bond is partially broken, and the $A-B$ bond is partially formed. $A + B-C \rightarrow (A-B-C)^‡ \rightarrow A-B + C \nonumber$ So the reaction is mediated by the formation of an activated complex (denoted with the double-dagger symbol ‡) and the decomposition of that complex into the reaction products. Using this theory, the rate of reaction can be expressed as the product of two factors $\text{rate} = (\text{transition state concentration}) \times (\text{decomposition frequency}) \nonumber$ If the formation of the activated complex is considered to reach an equilibrium, $K^‡ = \dfrac{[ABC]^‡}{[A][BC]} \nonumber$ So the concentration of the transition state complex can be expressed by $[ABC]^‡ = K^‡ [A][BC] \nonumber$ Using the relationship from Chapter 9 for the equilibrium constant, $K^‡$ can be expressed in terms of the free energy of formation of the complex ($\Delta G^‡$) $K^‡ = e^{-\Delta G^‡ /RT} \nonumber$ And so the reaction rate is given by $\text{rate} = (\text{frequency}) [A][BC]e^{-\Delta G^‡ /RT} \nonumber$ and the remaining task is to derive an expression for the frequency factor. If the frequency is taken to be equal to the vibrational frequency of the bond being broken in the activated complex in order to form the reaction products, it can be expressed in terms of the energy of the oscillation of that bond as the complex vibrates.
$E = h\nu = k_BT \nonumber$ or $\nu = \dfrac{k_BT}{h} \nonumber$ The reaction rate is then predicted to be $\text{rate} = \dfrac{k_BT}{h} [A][BC]e^{-\Delta G^‡ /RT} \nonumber$ And the rate constant is thus given by $k = \dfrac{k_BT}{h} e^{-\Delta G^‡ /RT} \nonumber$ An alternative description gives the transition state formation equilibrium constant in terms of the partition functions describing the reactants and the transition state: $K^‡ = \dfrac{Q^‡}{Q_A Q_{BC}} e^{-\Delta G^‡ /RT} \nonumber$ where $Q_i$ is the partition function describing the $i^{th}$ species. The partition function of the transition state can be expressed as the product of the partition function excluding any contribution from the vibration leading to the bond cleavage that forms the products and the partition function of that specific vibrational mode $Q^‡ = Q^{‡'}q_v^‡ \nonumber$ In this case, $q_v^‡$ can be expressed by $q_v^‡ = \dfrac{1}{1-e^{-h\nu^‡/k_BT}} \approx \dfrac{k_BT}{h\nu^‡} \nonumber$ So the equilibrium constant can be expressed $K^‡ = \dfrac{k_BT}{h\nu^‡} \dfrac{Q^{‡'}}{Q_A Q_{BC}} e^{-\Delta G^‡ /RT} \nonumber$ And so the rate constant, which is the product of $\nu^‡$ and $K^‡$, is given by $k = \dfrac{k_BT}{h} \dfrac{Q^{‡'}}{Q_A Q_{BC}} e^{-\Delta G^‡ /RT} \nonumber$ which looks very much like the Arrhenius equation proposed quite a few years earlier! Thus, if one understands the vibrational dynamics of the activated complex, and can calculate the partition functions describing the reactants and the transition state, one can, at least in theory, predict the rate constant for the reaction. In the next chapter, we will take a look at how kinetics studies can shed some light on chemical reaction mechanisms. 11.S: Chemical Kinetics I (Summary) The results of the integration of these simple rate laws can be summarized in the following table. Order Elementary Reaction Integrated rate law Linear plot 0 - $[A] = [A]_o -kt$ $[A]$ vs.
$t$ 1 $A \rightarrow P$ $\ln [A] = \ln [A]_o - kt$ or $[A] = [A]_o e^{-kt}$ $\ln[A]$ vs. $t$ 2 $A + A \rightarrow P$ $\dfrac{1}{[A]} = \dfrac{1}{[A]_o} + kt$ $\dfrac{1}{[A]}$ vs. $t$ $A + B \rightarrow P$ $\dfrac{1}{[B]_0-[A]_0} \ln \left( \dfrac{[B][A]_o}{[A][B]_o} \right) = kt$ $\ln \left( \dfrac{[B]}{[A]} \right)$ vs. $t$
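Before leaving Chapter 11, it is worth noting that the universal frequency factor $k_BT/h$ appearing in the transition state theory rate constant is simple to evaluate; at room temperature it is on the order of $10^{12}$–$10^{13}\, s^{-1}$. A quick numerical check:

```python
kB = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s

nu = kB * 298 / h   # universal TST frequency factor at 298 K
# nu comes out to roughly 6.2e12 s^-1
```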
In the previous chapter, we discussed the rates of chemical reactions. In this chapter, we will expand on the concepts of chemical reaction rates by exploring what the rate law implies about the mechanistic pathways that reactions actually follow to proceed from reactants to products. Typically, one determines a rate law that describes a chemical reaction, and then suggests a mechanism that can be (or might not be!) consistent with the observed kinetics. This chapter will be concerned with reconciling reaction mechanisms with predicted rate laws. • 12.1: Reaction Mechanisms A reaction mechanism is a set of elementary reaction steps that, when taken in aggregate, define a chemical pathway that connects reactants to products. An elementary reaction is one that proceeds by a single process, such as a molecular (or atomic) decomposition or a molecular collision. • 12.2: Concentration Profiles for Some Simple Mechanisms To illustrate how mechanisms may affect the concentration profile for a reaction, we can examine some simple mechanisms. • 12.3: The Connection between Reaction Mechanisms and Reaction Rate Laws The great value of chemical kinetics is that it can give us insights into the actual reaction pathways (mechanisms) that reactants take to form the products of reactions. Analyzing a reaction mechanism to determine the type of rate law that is consistent (or not consistent) with the specific mechanism can give us significant insight. • 12.4: The Rate Determining Step Approximation The rate determining step approximation is one of the simplest approximations one can make to analyze a proposed mechanism to deduce the rate law it predicts. Simply stated, the rate determining step approximation says that a mechanism can proceed no faster than its slowest step. • 12.5: The Steady-State Approximation One of the most commonly used and most attractive approximations is the steady state approximation.
This approximation can be applied to the rate of change of the concentration of a highly reactive (short-lived) intermediate that holds a constant value over a long period of time. • 12.6: The Equilibrium Approximation In many cases, the formation of a reactive intermediate (or even a longer-lived intermediate) involves a reversible step. This is the case if the intermediate can decompose to reform reactants with a significant probability as well as moving on to form products. In many cases, this will lead to a pre-equilibrium condition in which the equilibrium approximation can be applied. • 12.7: The Lindemann Mechanism The Lindemann mechanism (Lindemann, Arrhenius, Langmuir, Dhar, Perrin, & Lewis, 1922) is a useful one to demonstrate some of the techniques we use for relating chemical mechanisms to rate laws. In this mechanism, a reactant is collisionally activated to a highly energetic form that can then go on to react to form products. • 12.8: The Michaelis-Menten Mechanism The Michaelis-Menten mechanism (Michaelis & Menten, 1913) is one which many enzyme-mediated reactions follow. The basic mechanism involves an enzyme (E, a biological catalyst) and a substrate (S) which must combine to form an enzyme-substrate complex (ES) in order for the substrate to be degraded (or augmented) to form a product (P). • 12.9: Chain Reactions A large number of reactions proceed through a series of steps that can collectively be classified as a chain reaction. These reactions contain steps that can be classified as an initiation step (one that creates the intermediates from stable species), a propagation step (one that consumes an intermediate, but creates a new one), and a termination step (one that consumes intermediates without creating new ones). • 12.10: Catalysis There are many examples of reactions that involve catalysis.
One that is of current importance to the chemistry of the environment is the catalytic decomposition of ozone. • 12.11: Oscillating Reactions In most cases, the conversion of reactants into products is a fairly smooth process, in that the concentrations of the reactants decrease in a regular manner, and those of the products increase in a similar regular manner. However, some reactions can show irregular behavior in this regard. One particularly peculiar (but interesting!) phenomenon is that of oscillating reactions, in which reactant concentrations can rise and fall as the reaction progresses. • 12.E: Chemical Kinetics II (Exercises) Exercises for Chapter 12 "Chemical Kinetics II" in Fleming's Physical Chemistry Textmap. • 12.S: Chemical Kinetics II (Summary) Summary for Chapter 12 "Chemical Kinetics II" in Fleming's Physical Chemistry Textmap. 12: Chemical Kinetics II A reaction mechanism is a set of elementary reaction steps that, when taken in aggregate, define a chemical pathway that connects reactants to products. An elementary reaction is one that proceeds by a single process, such as a molecular (or atomic) decomposition or a molecular collision. Typically, elementary reactions come only in unimolecular $A \rightarrow products \nonumber$ and bimolecular $A + B \rightarrow products \nonumber$ form. Occasionally, an elementary step appears to be termolecular $A + B + C \rightarrow products \nonumber$ (involving the simultaneous collision of three atoms or molecules), but such a step is generally a pair of bimolecular steps acting in rapid succession, the first forming an activated complex, and the second stabilizing that complex chemically or physically. $A + B \rightarrow AB^* \nonumber$ $AB^* + C \rightarrow AB + C^* \nonumber$ The wonderful property of elementary reactions is that the molecularity defines the order of the rate law for the reaction step. The Requirements of a Reaction Mechanism A valid reaction mechanism must satisfy three important criteria: 1.
The sum of the steps must yield the overall stoichiometry of the reaction. 2. The mechanism must be consistent with the observed kinetics for the overall reaction. 3. The mechanism must account for the possibility of any observed side products formed in the reaction. Example $1$: For the reaction $A + B \xrightarrow{} C \nonumber$ is the following proposed mechanism valid? $A +A \xrightarrow{k_1} A_2 \nonumber$ $A_2 + B \xrightarrow{k_2} C + A \nonumber$ Solution Adding both proposed reactions gives $\cancel{2}A + \cancel{A_2} + B\xrightarrow{} \cancel{A_2} + C + \cancel{A} \nonumber$ Canceling those species that appear on both sides of the arrow leaves $A + B \xrightarrow{} C \nonumber$ which is the overall reaction, so the mechanism is at least stoichiometrically valid. However, it would still have to be consistent with the observed kinetics for the reaction and account for any side-products that are observed.
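The stoichiometric check in this example (summing the steps and canceling species that appear on both sides) can be sketched in code; the step representation below is a made-up convenience for this example, not a standard chemistry library:

```python
from collections import Counter

# Each step is a (reactants, products) pair of species-count mappings
steps = [({"A": 2}, {"A2": 1}),                   # A + A -> A2
         ({"A2": 1, "B": 1}, {"C": 1, "A": 1})]   # A2 + B -> C + A

lhs, rhs = Counter(), Counter()
for reactants, products in steps:
    lhs.update(reactants)
    rhs.update(products)

# Counter subtraction drops non-positive counts, i.e. cancels shared species
net_lhs, net_rhs = lhs - rhs, rhs - lhs
assert net_lhs == {"A": 1, "B": 1} and net_rhs == {"C": 1}  # A + B -> C
```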
To illustrate how mechanisms may affect the concentration profile for a reaction, we can examine some simple mechanisms $A \rightarrow B$ In this type of reaction, one substance is simply converting into another. An example of this type of reaction might be the isomerization of methylisocyanide to form acetonitrile (methylcyanide) (Redmon, Purvis, & Bartlett, 1978). If the reaction mechanism consists of a single unimolecular step, which is characterized by the rate constant $k_1$: $A \xrightarrow{k_1} B \nonumber$ then the rates of change of the concentrations of $A$ and $B$ may be written $\dfrac{d[A]}{dt} = - k_1 [A] \nonumber$ and $\dfrac{d[B]}{dt} = + k_1 [A] \nonumber$ A plot of the concentrations as a function of time would look as follows: It can be easily seen that the concentration of the reactant (A) decreases as time moves forward, and that of the product (B) increases. This will continue until reactant A is depleted. $A \rightleftharpoons B$ When the system can establish equilibrium, the rate of change of the concentrations of A and B will depend on both the forward and reverse reactions. If $k_1$ is the rate constant that characterizes the forward reaction $A \xrightarrow{k_1} B \nonumber$ and $k_{-1}$ that which characterizes the reverse $B \xrightarrow{k_{-1}} A \nonumber$ then the rates of change of the concentrations of A and B can be expressed $\dfrac{d[A]}{dt} = - k_1 [A] + k_{-1} [B] \nonumber$ and $\dfrac{d[B]}{dt} = + k_1 [A] - k_{-1} [B] \nonumber$ The concentration profile for this situation looks as follows: This profile is characterized by the fact that after a certain amount of time, the system achieves equilibrium and the concentrations stop changing (even though the forward and reverse reactions are still taking place). This is the nature of the dynamic equilibrium about which we speak all of the time in chemistry.
The final concentrations of [A] and [B] once equilibrium is established will depend on the ratio of $k_1$ and $k_{-1}$. Since the rate of formation of $A$ (from the reverse step) is equal to the rate of consumption of $A$ (from the forward step), the overall rate of change of the concentration of A is zero once equilibrium has been established. So it should be clear that $k_1[A] = k_{-1}[B] \nonumber$ or $\dfrac{k_1}{k_{-1}} = \dfrac{[B]}{[A]} \nonumber$ and the ratio of $k_1$ to $k_{-1}$ gives the value of the equilibrium constant! $A + C \rightarrow B + C$ Some reactions require a catalyst to mediate the conversion of reactants into products. A catalyst is a species that must be added (it is not formed as an intermediate), that appears in the mechanism (usually in a very early step) and often in the rate law, but that is re-formed later on so that it does not appear in the overall stoichiometry. If the reaction $A \rightarrow B \nonumber$ is aided by a catalyst $C$, then one possible (single-step) reaction mechanism might be $A + C \rightarrow B + C \nonumber$ In this case, $C$ is acting as a catalyst to the reaction. The rates of change of the concentrations can be found by $\dfrac{d[A]}{dt} = -k [A][C] \nonumber$ $\dfrac{d[B]}{dt} = k [A][C] \nonumber$ $\dfrac{d[C]}{dt} = - k [A][C] + k [A][C] = 0 \nonumber$ This is a very simplified picture of a catalyzed reaction. Generally a catalyzed reaction will require at least two steps: $A +C \rightarrow AC \nonumber$ $AC \rightarrow B + C \nonumber$ Later, we will see how the steady-state approximation actually predicts the above depicted concentration profile for the two-step mechanism when $AC$ is a short-lived species that can be treated as having a constant and small concentration. $A \rightarrow B \rightarrow C$ Another important (and very common) mechanistic feature is the formation of an intermediate.
This is a species that is formed in at least one of the mechanism steps, but does not appear in the overall stoichiometry for the reaction. This is different from a catalyst, which must be added to speed the reaction. A simple example of a reaction mechanism involving the formation of an intermediate is $A \xrightarrow{k_1} B \nonumber$ $B \xrightarrow{k_2} C \nonumber$ In this case, $C$ cannot form until an appreciable concentration of the intermediate $B$ has been created by the first step of the mechanism. The rates of change of the concentrations of $A$, $B$, and $C$ can be expressed $\dfrac{d[A]}{dt} = -k_1 [A] \nonumber$ $\dfrac{d[B]}{dt} = k_1 [A] - k_2 [B] \nonumber$ $\dfrac{d[C]}{dt} = k_2 [B] \nonumber$ The concentration profile is then shown below. Notice the delay in the formation of $C$. $A \rightleftharpoons B \rightarrow C$ In many cases, the formation of an intermediate involves a reversible step. This step is sometimes referred to as a pre-equilibrium step since it oftentimes will establish a near equilibrium while the reaction progresses. The result of combining a pre-equilibrium with an intermediate produces a profile that shows features of both of the simpler mechanisms. An example of such a mechanism is $A \xrightleftharpoons [k_1]{k_{-1}} B \nonumber$ $B \xrightarrow{k_2} C \nonumber$ In this case, the rates of change for the concentrations of $A$, $B$, and $C$ can be expressed by $\dfrac{d[A]}{dt} = -k_1 [A] + k_{-1} [B] \nonumber$ $\dfrac{d[B]}{dt} = k_1 [A] -k_{-1} [B] - k_2 [B] \nonumber$ $\dfrac{d[C]}{dt} = k_2 [B] \nonumber$ The concentration profile for this mechanism is shown below. Again, notice the delay in the production of the product $C$, due to the requirement that the concentration of B be sufficiently high to allow the second step to occur with an appreciable rate.
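The qualitative features described above (depletion of $A$, rise and fall of the intermediate $B$, and delayed growth of $C$) can be reproduced with a simple Euler integration of the $A \rightarrow B \rightarrow C$ rate equations. The rate constants below are arbitrary choices for illustration:

```python
# Euler integration of A -> B -> C (illustrative rate constants)
k1, k2 = 0.5, 0.2          # s^-1
A, B, C = 1.0, 0.0, 0.0    # initial concentrations
dt = 0.001
for _ in range(int(30 / dt)):   # integrate out to 30 s
    dA = -k1 * A
    dB = k1 * A - k2 * B
    dC = k2 * B
    A, B, C = A + dA * dt, B + dB * dt, C + dC * dt

# By 30 s, A is depleted, the intermediate B has risen and fallen,
# and C accounts for nearly all of the material
```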
$A \rightarrow B$ and $A \rightarrow C$ There are many cases where a reactant can follow pathways to different products (or sometimes even the same products!), and those pathways compete with one another. An example is the following simple mechanism: $A \xrightarrow{k_1} B \nonumber$ $A \xrightarrow{k_2} C \nonumber$ In this case, the rates of change of the concentrations can be expressed as $\dfrac{d[A]}{dt} = -k_1 [A] - k_2 [A] \nonumber$ $\dfrac{d[B]}{dt} = + k_1 [A] \nonumber$ $\dfrac{d[C]}{dt} = + k_2 [A] \nonumber$ Overall, the profile looks like two first order decompositions occurring at the same time, with the final concentration of the product formed with the larger rate constant being favored. One of the goals of studying chemical kinetics is to understand how to alter reaction conditions to favor the production of desirable reaction products. This can be accomplished by a number of means, such as alteration of concentrations, temperature, addition of catalysts, etc. Understanding the basics will (hopefully) lead to a better understanding of how concentration profiles can be altered by changing conditions.
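For the competing-pathways case, the rate equations can be solved in closed form: $[A]$ decays with the total rate constant $k_1 + k_2$, and the products appear in the fixed ratio $[B]/[C] = k_1/k_2$. A short sketch (with arbitrary rate constants) confirms this:

```python
import math

# Competing first-order pathways A -> B (k1) and A -> C (k2)
k1, k2 = 0.3, 0.1   # arbitrary illustrative rate constants, s^-1
A0, t = 1.0, 5.0
A = A0 * math.exp(-(k1 + k2) * t)   # A decays with rate constant k1 + k2
B = k1 / (k1 + k2) * (A0 - A)       # branching fractions of consumed A
C = k2 / (k1 + k2) * (A0 - A)
assert math.isclose(B / C, k1 / k2)  # product ratio is fixed at k1/k2
assert math.isclose(A + B + C, A0)   # mass balance
```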
The great value of chemical kinetics is that it can give us insights into the actual reaction pathways (mechanisms) that reactants take to form the products of reactions. Analyzing a reaction mechanism to determine the type of rate law that is consistent (or not consistent) with the specific mechanism can give us significant insight. For example, the reaction $A+ B \rightarrow C \nonumber$ might be proposed to follow one of two mechanistic pathways: $\underbrace{A + A \xrightarrow{k_1} A_2}_{\text{step 1}} \nonumber$ $\underbrace{ A_2 + B \xrightarrow{k_2} C}_{\text{step 2}} \nonumber$ or $\underbrace{A \xrightarrow{k_1} A^*}_{\text{step 1}} \nonumber$ $\underbrace{ A^* + B \xrightarrow{k_2} C}_{\text{step 2}} \nonumber$ The first mechanism predicts that the reaction should be second order in $A$, whereas the second mechanism predicts that it should be first order in $A$ (in the limit that the steady state approximation, discussed in the following sections, can be applied to $A_2$ and $A^*$). Based on the observed rate law being first or second order in A, one can rule out one of the mechanisms. Unfortunately, this kind of analysis cannot confirm a specific mechanism. Other evidence is needed to draw such conclusions, such as the spectroscopic observation of a particular reaction intermediate that can only be formed by a specific mechanism. In order to analyze mechanisms and predict rate laws, we need to build a toolbox of methods and techniques that are useful in certain limits. The next few sections will discuss this kind of analysis, specifically focusing on • the Rate Determining Step approximation, • the Steady State approximation, and • the Equilibrium approximation. Each type of approximation is important in certain limits, and they are oftentimes used in conjunction with one another to predict the final forms of rate laws.
12.04: The Rate Determining Step Approximation The rate determining step approximation is one of the simplest approximations one can make to analyze a proposed mechanism to deduce the rate law it predicts. Simply stated, the rate determining step approximation says that a mechanism can proceed no faster than its slowest step. So, for example, if the reaction $A + B \rightarrow C \nonumber$ is proposed to follow the mechanism $\underbrace{A +A \xrightarrow{k_1} A_2}_{\text{slow}} \nonumber$ $\underbrace{ A_2 \xrightarrow{k_2} C + A}_{\text{fast}} \nonumber$ the rate determining step approximation suggests that the rate (expressed in terms of the appearance of product $C$) should be determined by the slow initial step, and so the rate law will be $\dfrac{d[C]}{dt} = k_1[A]^2 \nonumber$ matching the order of the rate law to the molecularity of the slow step. Conversely, if the reaction mechanism is proposed as $\underbrace{A \xrightarrow{k_1} A^*}_{\text{slow}} \nonumber$ $\underbrace{ A^* + B \xrightarrow{k_2} C}_{\text{fast}} \nonumber$ the rate determining step approximation suggests that the rate of the reaction should be $\dfrac{d[C]}{dt} = k_1[A] \nonumber$ again, with the order of the rate law matching the molecularity of the rate determining step. 12.05: The Steady-State Approximation One of the most commonly used and most attractive approximations is the steady state approximation. This approximation can be applied to the rate of change of concentration of a highly reactive (short lived) intermediate that holds a constant value over a long period of time. The advantage here is that for such an intermediate ($I$), $\dfrac{d[I]}{dt} = 0 \nonumber$ So long as one can write an expression for the rate of change of the concentration of the intermediate $I$, the steady state approximation allows one to solve for its constant concentration.
For example, if the reaction $A +B \rightarrow C \label{total}$ is proposed to follow the mechanism \begin{align} A + A &\xrightarrow{k_1} A_2 \[4pt] A_2 + B &\xrightarrow{k_2} C + A \end{align} \nonumber The time-rate of change of the concentration of the intermediate $A_2$ can be written as $\dfrac{d[A_2]}{dt} = k_1[A]^2 - k_2[A_2][B] \nonumber$ In the limit that the steady state approximation can be applied to $A_2$ $\dfrac{d[A_2]}{dt} = k_1[A]^2 - k_2[A_2][B] \approx 0 \nonumber$ or $[A_2] \approx \dfrac{k_1[A]^2}{k_2[B]} \nonumber$ So if the rate of the overall reaction is expressed as the rate of formation of the product $C$, $\dfrac{d[C]}{dt} = k_2[A_2][B] \nonumber$ the above expression for $[A_2]$ can be substituted $\dfrac{d[C]}{dt} = k_2 \left ( \dfrac{k_1[A]^2}{k_2[B]} \right) [B] \nonumber$ or $\dfrac{d[C]}{dt} = k_1[A]^2 \nonumber$ and the reaction is predicted to be second order in $[A]$. Alternatively, if the mechanism for Equation \ref{total} is proposed to be \begin{align} A &\xrightarrow{k_1} A^* \[4pt] A^* + B &\xrightarrow{k_2} C \end{align} \nonumber then the rate of change of the concentration of $A^*$ is $\dfrac{d[A^*]}{dt} = k_1[A] - k_2[A^*][B] \nonumber$ And if the steady state approximation holds, then $[A^*] \approx \dfrac{k_1[A]}{k_2[B]} \nonumber$ So the rate of production of $C$ is \begin{align} \dfrac{d[C]}{dt} &= k_2[A^*][B] \[4pt] &= \bcancel{k_2} \left( \dfrac{k_1[A]}{\bcancel{k_2} \cancel{[B]}} \right) \cancel{[B]} \end{align} \nonumber or $\dfrac{d[C]}{dt} = k_1[A] \nonumber$ and the rate law is predicted to be first order in $A$. In this manner, the plausibility of either of the two reaction mechanisms is easily deduced by comparing the predicted rate law to that which is observed. If the prediction cannot be reconciled with observation, then the scientific method eliminates that mechanism from consideration.
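The steady-state result can also be checked numerically. In the sketch below (illustrative rate constants, with $B$ held in large excess so that its concentration is effectively constant), the intermediate concentration $[A^*]$ settles, after a brief induction period, onto the value $k_1[A]/(k_2[B])$ predicted by the approximation:

```python
# Euler integration of A -> A* (k1), A* + B -> C (k2), with B in large
# excess so [B] is effectively constant; rate constants are illustrative
k1, k2 = 0.01, 10.0
A, Astar, B = 1.0, 0.0, 1.0
dt = 1e-3
for _ in range(int(5 / dt)):   # integrate well past the short induction period
    dA = -k1 * A
    dAstar = k1 * A - k2 * Astar * B
    A, Astar = A + dA * dt, Astar + dAstar * dt

steady = k1 * A / (k2 * B)     # steady-state prediction for [A*]
assert abs(Astar - steady) / steady < 0.05
```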
12.06: The Equilibrium Approximation In many cases, the formation of a reactive intermediate (or even a longer lived intermediate) involves a reversible step. This is the case if the intermediate can decompose to reform reactants with a significant probability as well as moving on to form products. In many cases, this will lead to a pre-equilibrium condition in which the equilibrium approximation can be applied. An example of a reaction mechanism of this sort is $A + B \xrightleftharpoons [k_1]{k_{-1}} AB \nonumber$ $AB \xrightarrow{k_2} C \nonumber$ Given this mechanism, the application of the steady state approximation is cumbersome. However, if the initial step is assumed to achieve equilibrium, an expression can be found for $[AB]$. In order to derive this expression, one assumes that the rate of the forward reaction is equal to the rate of the reverse reaction for the initial step in the mechanism. $k_{1}[A][B] = k_{-1}[AB] \nonumber$ or $\dfrac{ k_{1}[A][B]}{k_{-1}} = [AB] \nonumber$ This expression can be substituted into an expression for the rate of formation of the product $C$: $\dfrac{d[C]}{dt} = k_2[AB] \nonumber$ or $\dfrac{d[C]}{dt} = \dfrac{ k_2 k_{1}}{k_{-1}}[A][B] \nonumber$ which predicts a reaction rate law that is first order in $A$, first order in $B$, and second order overall. Example $1$: Given the following mechanism, apply the equilibrium approximation to the first step to predict the rate law suggested by the mechanism.
$A + A \xrightleftharpoons [k_1]{k_{-1}} A_2 \nonumber$ $A_2+B \xrightarrow{k_2} C + A \nonumber$ Solution If the equilibrium approximation is valid for the first step, $k_{1}[A]^2 = k_{-1}[A_2] \nonumber$ or $\dfrac{ k_{1}[A]^2}{k_{-1}} \approx [A_2] \nonumber$ Plugging this into the rate equation for the second step $\dfrac{d[C]}{dt} = k_2[A_2][B] \nonumber$ yields $\dfrac{d[C]}{dt} = \dfrac{ k_2k_{1}}{k_{-1}} [A]^2[B] \nonumber$ Thus, the rate law has the form $\text{rate} = k' [A]^2[B] \nonumber$ which is second order in $A$, first order in $B$, and third order overall, and in which the effective rate constant ($k'$) is $k' = \dfrac{k_2k_1}{k_{-1}}. \nonumber$ Sometimes, the equilibrium approximation can suggest rate laws that have negative orders with respect to certain species. For example, consider the following reaction $A + 2B \rightarrow 2C \nonumber$ A proposed mechanism for which might be $A + B \xrightleftharpoons [k_1]{k_{-1}} I + C \nonumber$ $I+ B \xrightarrow{k_2} C \nonumber$ in which $I$ is an intermediate. Applying the equilibrium approximation to the first step yields $k_{1}[A][B] = k_{-1}[I][C] \nonumber$ or $\dfrac{ k_{1}[A][B]}{k_{-1}[C]} \approx [I] \nonumber$ Substituting this into an expression for the rate of formation of $C$, one sees $\dfrac{d[C]}{dt} = k_{2} [I] [B] \nonumber$ or $\dfrac{d[C]}{dt} = k_2 \dfrac{ k_{1}[A][B]}{k_{-1}[C]} [B] = \dfrac{ k_{2} k_{1}[A][B]^2}{k_{-1}[C]} \nonumber$ The rate law is then of the form $\text{rate} = k' \dfrac{[A][B]^2}{[C]} \nonumber$ which is first order in $A$, second order in $B$, negative one order in $C$, and second order overall. Also, $k'=\dfrac{k_2k_1}{k_{-1}}. \nonumber$ In this case, the negative order in $C$ means that a buildup of compound $C$ will cause the reaction to slow. These sorts of rate laws are not uncommon for reactions with a reversible initial step that forms some of the eventual reaction product.
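The negative order in $C$ is easy to illustrate numerically. The short sketch below (with made-up rate constants, chosen only for illustration) evaluates the predicted rate law $\text{rate} = k'[A][B]^2/[C]$ and shows the rate falling as product accumulates:

```python
# Evaluate the equilibrium-approximation rate law rate = k' [A][B]^2 / [C]
# for the mechanism A + B <=> I + C, I + B -> C.
# The rate constants k1, k-1, k2 below are illustrative values only.

k1, k_minus1, k2 = 2.0, 4.0, 1.0
k_prime = k2 * k1 / k_minus1          # effective rate constant k' = k2 k1 / k-1

def rate(A, B, C):
    return k_prime * A * B**2 / C

early = rate(A=1.0, B=1.0, C=0.1)     # little product present
late  = rate(A=1.0, B=1.0, C=1.0)     # product has accumulated
print(early, late)                    # the rate drops as [C] builds up
```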
The Lindemann mechanism (Lindemann, Arrhenius, Langmuir, Dhar, Perrin, & Lewis, 1922) is a useful one to demonstrate some of the techniques we use for relating chemical mechanisms to rate laws. In this mechanism, a reactant is collisionally activated to a highly energetic form that can then go on to react to form products. $A + A \xrightleftharpoons [k_1]{k_{-1}} A^* + A \nonumber$ $A^* \xrightarrow{k_2} P \nonumber$ If the steady state approximation is applied to the intermediate $A^*$ $\dfrac{d[A^*]}{dt} = k_1[A]^2 - k_{-1}[A^*][A] - k_2[A^*] \approx 0 \nonumber$ an expression can be derived for $[A^*]$. $[A^*]= \dfrac{ k_1[A]^2 }{k_{-1}[A] + k_2} \nonumber$ Substituting this into an expression for the rate of the production of the product $P$ $\dfrac{d[P]}{dt} = k_2[A^*] \nonumber$ yields $\dfrac{d[P]}{dt} = \dfrac{ k_2 k_1[A]^2 }{k_{-1}[A] + k_2} \nonumber$ In the limit that $k_{-1}[A] \gg k_2$, the rate law becomes first order in $[A]$ since $k_{-1}[A] + k_2 \approx k_{-1}[A]$. $\dfrac{d[P]}{dt} = \dfrac{ k_2 k_1 }{k_{-1}} [A] \nonumber$ This will happen if the second step is very slow (and is the rate determining step), such that the reverse of the first step “wins” in the competition for $A^*$. However, in the other limit, that $k_2 \gg k_{-1}[A]$, the reaction becomes second order in $[A]$ since $k_{-1}[A] + k_2 \approx k_2$. $\dfrac{d[P]}{dt} = k_1[A]^2 \nonumber$ which is consistent with the forward reaction of the first step being the rate determining step, since $A^*$ is removed from the reaction (through the formation of products) very quickly as soon as it is formed. Third-body Collisions Sometimes, the third-body collision is provided by an inert species $M$, perhaps by filling the reaction chamber with a heavy non-reactive species, such as Ar.
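The two limiting behaviors can be seen by evaluating the full rate expression over a wide range of $[A]$. In this sketch (the rate constants are arbitrary, chosen only to make the limits visible), the expression approaches $k_1[A]^2$ at low concentration and $(k_2 k_1/k_{-1})[A]$ at high concentration:

```python
# Lindemann rate law: d[P]/dt = k2 k1 [A]^2 / (k-1 [A] + k2),
# evaluated in its low- and high-concentration limits.
# Rate constants are arbitrary illustrative values.

k1, k_minus1, k2 = 1.0, 1.0, 1.0

def lindemann_rate(A):
    return k2 * k1 * A**2 / (k_minus1 * A + k2)

A_low, A_high = 1e-4, 1e4
low_ratio  = lindemann_rate(A_low) / (k1 * A_low**2)                  # -> 1 (second order limit)
high_ratio = lindemann_rate(A_high) / ((k2 * k1 / k_minus1) * A_high)  # -> 1 (first order limit)
print(low_ratio, high_ratio)
```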
In this case, the mechanism becomes $A + M \xrightleftharpoons [k_1]{k_{-1}} A^* + M \nonumber$ $A^* \xrightarrow{k_2} P \nonumber$ And in the limit that $[A^*]$ can be treated using the steady state approximation, the rate of production of the product becomes $\dfrac{d[P]}{dt} = \dfrac{ k_2 k_1[A][M] }{k_{-1}[M] + k_2} \nonumber$ And if the concentration of the third body collider is constant, it is convenient to define an effective rate constant, $k_{uni}$. $k_{uni} = \dfrac{ k_2 k_1[M] }{k_{-1}[M] + k_2 } \nonumber$ The utility is that important information about the individual step rate constants can be extracted by plotting $1/k_{uni}$ as a function of $1/[M]$. $\dfrac{1}{k_{uni}} = \dfrac{k_{-1}}{ k_2 k_1 } + \dfrac{1}{k_1} \left( \dfrac{1}{[M]} \right) \nonumber$ The plot should yield a straight line, the slope of which gives $1/k_1$, and the intercept of which gives $k_{-1}/(k_2 k_1)$. 12.08: The Michaelis-Menten Mechanism The Michaelis-Menten mechanism (Michaelis & Menten, 1913) is one which many enzyme-mediated reactions follow. The basic mechanism involves an enzyme ($E$, a biological catalyst) and a substrate ($S$) which must bind to form an enzyme-substrate complex ($ES$) in order for the substrate to be degraded (or augmented) to form a product ($P$). The overall reaction is $S \rightarrow P \nonumber$ And the simple two-step mechanism is given by $E + S \ce{<=>[k_1][k_{-1}]} ES \label{step1}$ $ES \xrightarrow{k_2} P \label{step2}$ Notice that the enzyme is necessary for the reaction to proceed, but is not part of the overall stoichiometry (as is the case for any catalyst!).
Equilibrium Approximation Derivation Applying the equilibrium approximation to the first step $k_1[E][S] \approx k_{-1}[ES] \label{equil}$ And using a mass conservation relationship on the enzyme (noting that the enzyme must be either in its bare form ($E$) or complexed with a substrate ($ES$)): $[E]_o = [E] + [ES] \nonumber$ or $[E] = [E]_o - [ES] \nonumber$ Substituting this into the equilibrium expression (Equation \ref{equil}) yields $k_1([E]_o - [ES])[S] = k_{-1}[ES] \nonumber$ Solving this expression for $[ES]$ stepwise reveals $k_1[E]_o[S] - k_1[ES][S] = k_{-1}[ES] \nonumber$ $k_1[E]_o[S] = k_{-1}[ES] + k_1[ES][S] \nonumber$ $= (k_{-1} + k_1[S] ) [ES] \nonumber$ $\dfrac{k_1[E]_o[S]}{k_1[S] + k_{-1}} = [ES] \nonumber$ Substituting this into the expression for the rate of production of the product $P$ $\dfrac{d[P]}{dt} = k_2[ES] \label{step3}$ yields $\dfrac{d[P]}{dt} = \dfrac{k_2 k_1 [E]_o [S]}{k_1[S] + k_{-1}} \nonumber$ Multiplying the top and bottom of the expression on the right hand side by $1/k_1$ gives the result $\dfrac{d[P]}{dt} = \dfrac{k_2[E]_o[S]}{[S] + \frac{k_{-1}}{k_1}} \nonumber$ The ratio $k_{-1}/k_1$ is the equilibrium constant that describes the dissociation of the enzyme-substrate complex, $K_d$ in Equation \ref{step1}. Noting that $k_2[E]_o$ gives the maximum rate ($V_{max}$), and that $\dfrac{d[P]}{dt}$ is the observed reaction rate, the rate law takes the form $\text{rate} = \dfrac{V_{max}[S]}{K_d+[S]} \nonumber$ The maximum rate is reached when the substrate concentration is high enough that essentially all of the enzyme is tied up in the complex ($[ES] \approx [E]_o$). Under these saturating conditions there are no free enzyme active sites left, so adding more substrate cannot speed the reaction, and the rate becomes independent of (i.e., 0th order in) the substrate concentration. At low substrate concentrations, by contrast, free enzyme is plentiful, and the rate is limited by how much substrate is available to form the complex.
In the limit that the substrate concentration is large compared to $K_d$ (i.e., $K_d + [S] \approx [S]$), the reaction ends up zeroth order in substrate. $\text{rate} = \dfrac{V_{max}[S]}{K_d+[S]} \approx \dfrac{V_{max}\cancel{[S]}}{\cancel{[S]}} = V_{max} \nonumber$ Hence, adding more substrate to the system under this limiting condition will have no effect on the observed rate. This is characteristic of a bottleneck in the mechanism, which would happen if there is a shortage of enzyme sites to which the substrate can attach. In the other extreme, in which $K_d$ is very large compared to the substrate concentration (i.e., $K_d + [S] \approx K_d$), the reaction becomes first order in substrate. $\text{rate} = \dfrac{V_{max}[S]}{K_d+[S]} \approx \dfrac{V_{max}[S]}{K_d} = \dfrac{V_{max}}{K_d}[S] \nonumber$ Steady-State Approximation Derivation In an alternate derivation (Briggs & Haldane, 1925) using the steady state approximation applied to the enzyme-substrate complex $\dfrac{d[ES]}{dt} = k_1[E][S] - k_{-1}[ES] - k_2[ES] \approx 0 \nonumber$ Solving for $[ES]$ gives the result $[ES] = \dfrac{k_1[E][S]}{k_{-1} + k_2} \nonumber$ or $[ES] = \dfrac{[E][S]}{K_M} \nonumber$ where $K_M = \dfrac{k_{-1}+k_2}{k_1} \nonumber$ $K_M$ is the Michaelis constant, which is affected by a number of factors, including pH, temperature, and the nature of the substrate itself. Proceeding as before, through the conservation of mass relationship and substitution into the expression for rate (Equation \ref{step3}), results in $\dfrac{d[P]}{dt} =\dfrac{V_{max}[S]}{K_M + [S]} \nonumber$ The advantage to this approach is that it accounts for the loss of $ES$ complex due to the production of products as well as the decomposition to reform the reactants $E$ and $S$. As before, in the limit that $[S] \gg K_M$, the reaction reaches its maximum rate ($V_{max}$) and becomes independent of any concentrations. However, in the limit that $[S] \ll K_M$, the reaction becomes first order in $[S]$.
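These two limits are easy to verify numerically. The sketch below (with arbitrary values for $V_{max}$ and $K_M$, chosen only for illustration) evaluates the Michaelis-Menten rate law at high and low substrate concentration:

```python
# Michaelis-Menten rate law: rate = Vmax [S] / (Km + [S]),
# checked in its zeroth-order (saturating) and first-order limits.
# Vmax and Km values are arbitrary, for illustration only.

Vmax, Km = 10.0, 0.5

def mm_rate(S):
    return Vmax * S / (Km + S)

S_big, S_small = 1e4 * Km, 1e-4 * Km
saturated = mm_rate(S_big)     # [S] >> Km: rate -> Vmax (zeroth order in S)
dilute    = mm_rate(S_small)   # [S] << Km: rate -> (Vmax/Km)[S] (first order in S)
print(saturated / Vmax)
print(dilute / ((Vmax / Km) * S_small))
```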
The Michaelis constant and $V_{max}$ parameters can be extracted in a number of ways. In the Lineweaver-Burk (Lineweaver & Burk, 1934) method, the reciprocal of the rate law is used to create a linear relationship. $\dfrac{1}{\text{rate}} = \dfrac{K_M + [S]}{V_{max}[S]} \nonumber$ or $\dfrac{1}{\text{rate}} = \dfrac{K_M}{V_{max}} \dfrac{1}{[S]} + \dfrac{1}{V_{max}} \nonumber$ So a plot of $1/\text{rate}$ as a function of $1/[S]$ results in a straight line, the slope of which is equal to $K_M/V_{max}$ and the intercept of which is $1/V_{max}$. This is called a Lineweaver–Burk plot. An example of a Lineweaver-Burk plot. (CC BY-SA 3.0; Diberri).
A large number of reactions proceed through a series of steps that can collectively be classified as a chain reaction. The reactions contain steps that can be classified as • initiation step – a step that creates the intermediates from stable species • propagation step – a step that consumes an intermediate, but creates a new one • termination step – a step that consumes intermediates without creating new ones These types of reactions are very common when the intermediates involved are radicals. An example is the reaction $H_2 + Br_2 \rightarrow 2HBr \nonumber$ The observed rate law for this reaction is $\text{rate} = \dfrac{k [H_2][Br_2]^{3/2}}{[Br_2] + k'[HBr]} \label{exp}$ A proposed mechanism is $Br_2 \ce{<=>[k_1][k_{-1}]} 2Br^\cdot \label{step1}$ $Br^\cdot + H_2 \ce{<=>[k_2][k_{-2}]} HBr + H^\cdot \label{step2}$ $H^\cdot + Br_2 \xrightarrow{k_3} HBr + Br^\cdot \label{step3}$ Based on this mechanism, the rate of change of concentrations for the intermediates ($H^\cdot$ and $Br^\cdot$) can be written, and the steady state approximation applied. $\dfrac{d[H^\cdot]}{dt} = k_2[Br^\cdot][H_2] - k_{-2}[HBr][H^\cdot] - k_3[H^\cdot][Br_2] =0 \nonumber$ $\dfrac{d[Br^\cdot]}{dt} = 2k_1[Br_2] - 2k_{-1}[Br^\cdot]^2 - k_2[Br^\cdot][H_2] + k_{-2}[HBr][H^\cdot] + k_3[H^\cdot][Br_2] =0 \nonumber$ Adding these two expressions cancels the terms involving $k_2$, $k_{-2}$, and $k_3$. The result is $2 k_1 [Br_2] - 2k_{-1} [Br^\cdot]^2 = 0 \nonumber$ Solving for $[Br^\cdot]$ $[Br^\cdot] = \sqrt{\dfrac{k_1[Br_2]}{k_{-1}}} \nonumber$ This can be substituted into an expression for the $H^\cdot$ that is generated by solving the steady state expression for $d[H^\cdot]/dt$.
$[H^\cdot] = \dfrac{k_2 [Br^\cdot] [H_2]}{k_{-2}[HBr] + k_3[Br_2]} \nonumber$ so $[H^\cdot] = \dfrac{k_2 \sqrt{\dfrac{k_1[Br_2]}{k_{-1}}} [H_2]}{k_{-2}[HBr] + k_3[Br_2]} \nonumber$ Now, armed with expressions for $H^\cdot$ and $Br^\cdot$, we can substitute them into an expression for the rate of production of the product $HBr$: $\dfrac{d[HBr]}{dt} = k_2[Br^\cdot] [H_2] + k_3 [H^\cdot] [Br_2] - k_{-2}[H^\cdot] [HBr] \nonumber$ After substitution and simplification, the result is $\dfrac{d[HBr]}{dt} = \dfrac{2 k_2 \left( \dfrac{k_1}{k_{-1}}\right)^{1/2} [H_2][Br_2]^{1/2}}{1+ \dfrac{k_{-2}}{k_3} \dfrac{[HBr]}{[Br_2]} } \nonumber$ Multiplying the top and bottom expressions on the right by $[Br_2]$ produces $\dfrac{d[HBr]}{dt} = \dfrac{2 k_2 \left( \dfrac{k_1}{k_{-1}}\right)^{1/2} [H_2][Br_2]^{3/2}}{[Br_2] + \dfrac{k_{-2}}{k_3} [HBr] } \nonumber$ which matches the form of the rate law found experimentally (Equation \ref{exp})! In this case, $k = 2k_2 \sqrt{ \dfrac{k_1}{k_{-1}}} \nonumber$ and $k'= \dfrac{k_{-2}}{k_3} \nonumber$ 12.10: Catalysis There are many examples of reactions that involve catalysis. One that is of current importance to the chemistry of the environment is the catalytic decomposition of ozone (Fahey, 2006). The overall reaction $O_3 + O^{\cdot} \xrightarrow{} 2 O_2 \nonumber$ can be catalyzed by atomic chlorine by the following mechanism.
$O_3 + Cl \xrightarrow{k_1} ClO + O_2 \nonumber$ $ClO + O \xrightarrow{k_2} Cl + O_2 \nonumber$ The rate of change of the intermediate ($ClO$) concentration is given by $\dfrac{d[ClO]}{dt} = k_1 [ O_3][Cl] - k_2 [ClO][O] \nonumber$ Applying the steady state approximation to this relationship and solving for $[ClO]$ produces $[ClO] =\dfrac{k_1[O_3][Cl]}{k_2[O]} \label{clo}$ The rate of production of $O_2$ (which is two times the rate of the reaction) is given by $\dfrac{d[O_2]}{dt} = k_1[O_3][Cl] + k_2[ClO][O] \nonumber$ Substituting the expression for $[ClO]$ (Equation \ref{clo}) into the above expression yields $\dfrac{d[O_2]}{dt} = k_1[O_3][Cl] + k_2 \left(\dfrac{k_1[ O_3][Cl]}{k_2[O]} \right) [O] \nonumber$ $= k_1 [O_3][Cl] + k_1[O_3][Cl] \nonumber$ $= 2k_1[O_3][Cl] \nonumber$ And so the rate of the reaction is predicted to be first order in $[O_3]$, first order in the catalyst $[Cl]$, and second order overall. $\text{rate} = k[O_3][Cl] \nonumber$ If the concentration of the catalyst is constant, the reaction kinetics will reduce to first order. $\text{rate} = k[O_3] \nonumber$ This catalytic cycle can be represented in the following diagram: On the left, atomic oxygen picks up an oxygen atom from $ClO$ to form $O_2$ and generate a $Cl$ atom, which can then react with $O_3$ to form $ClO$ and an $O_2$ molecule. The closed loop in the middle is characteristic of the catalytic cycle involving $Cl$ and $ClO$. Further, since $Cl$ acts as a catalyst, it can decompose many $O_3$ molecules without being degraded through side reactions. The introduction of chlorine atoms into the upper atmosphere is a major environmental problem, leading to the annual thinning and eventual opening of the ozone layer over Antarctica. The source of chlorine is from the decomposition of chlorofluorocarbons which are used as refrigerants and propellants due to their incredible stability near the Earth’s surface.
However, in the upper atmosphere, these compounds are subjected to ultraviolet radiation emitted by the sun and decompose to form the radicals responsible for the catalytic decomposition of ozone. The world community addressed this issue by drafting the Montreal Protocol (Secretariat, 2015), which focused on the emission of ozone-destroying compounds. The result of this action has brought about evidence of the Antarctic ozone hole healing (K, 2015). This is one very good example of science-guided political, industrial, and economic policies leading to positive changes for our environment. 12.11: Oscillating Reactions In most cases, the conversion of reactants into products is a fairly smooth process, in that the concentrations of the reactants decrease in a regular manner, and those of the products increase in a similar regular manner. However, some reactions can show irregular behavior in this regard. One particularly peculiar (but interesting!) phenomenon is that of oscillating reactions, in which reactant concentrations can rise and fall as the reaction progresses. One way this can happen is when the products of the reaction (or one of the steps) catalyze the reaction (or one of the steps). This process is called autocatalysis. An example of an autocatalyzed mechanism is the Lotka-Volterra mechanism. This is a three-step mechanism defined as follows: $A +X \xrightarrow{k_1} X + X \nonumber$ $X +Y \xrightarrow{k_2} Y + Y \nonumber$ $Y \xrightarrow{k_3} B \nonumber$ In this reaction, the concentration of reactant $A$ is held constant by continually adding it to the reaction mixture. The first step is autocatalyzed, so as it proceeds, it speeds up. However, an increase in the production of $X$ by the first reaction increases the rate of the second reaction as well, which is also autocatalyzed. Finally, the removal of $Y$ through the third reaction brings things to a halt, until the first reaction can again produce a buildup of $X$ to start the cycle over.
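The cycle described above can be sketched with a simple Euler integration of the rate equations $d[X]/dt = k_1[A][X] - k_2[X][Y]$ and $d[Y]/dt = k_2[X][Y] - k_3[Y]$, with $[A]$ held constant. All rate constants and initial concentrations below are arbitrary illustrative choices:

```python
# Euler integration of the Lotka-Volterra mechanism:
#   A + X -> 2X (k1, [A] held constant),  X + Y -> 2Y (k2),  Y -> B (k3).
# Rate constants and initial concentrations are arbitrary illustrative values.

k1A, k2, k3 = 1.0, 1.0, 1.0   # k1*[A] lumped into one constant since [A] is fixed
X, Y = 1.5, 1.0
dt, t_end = 1e-3, 20.0

history = []
t = 0.0
while t < t_end:
    dX = k1A * X - k2 * X * Y   # autocatalytic growth of X minus consumption by Y
    dY = k2 * X * Y - k3 * Y    # autocatalytic growth of Y minus removal to B
    X += dX * dt
    Y += dY * dt
    history.append(X)
    t += dt

print(max(history), min(history))  # [X] rises above and dips below its starting value
```

Plotting `history` against time would reproduce the oscillatory behavior described in the text; here the assertions simply check that $[X]$ both exceeds and falls below its initial value during the run.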
A plot of the concentration of $X$ and $Y$ as a function of time looks as follows: This mechanism follows kinetics predicted by what is called the predator-prey relationship. In this case, $X$ represents the “prey” and $Y$ represents the “predator”. The population of the predator cannot build up unless there is a significant population of prey on which the predators can feed. Likewise, the population of predators decreases when the population of the prey falls. And finally, there is a lag, as the rise and decline of the prey population controls the rise and fall of the predator population. The equations have been studied extensively and have applications not just in chemical kinetics, but in biology, economics, and elsewhere. One wonders if the equations can be applied to help to understand politics! 12.E: Chemical Kinetics II (Exercises) In preparation 12.S: Chemical Kinetics II (Summary) Vocabulary and Concepts • autocatalysis • bimolecular • catalyst • chain reaction • dynamic equilibrium • effective rate constant • elementary reaction • equilibrium approximation • equilibrium constant • initiation step • intermediate • Lindemann mechanism • Lotka-Volterra mechanism • Michaelis constant • Michaelis-Menten • molecularity • Montreal Protocol • oscillating reaction • predator-prey • pre-equilibrium • propagation step • radical • rate determining step • reaction mechanism • steady state approximation • termination step • termolecular • third-body collision • unimolecular
"A profound change has taken place during the present century in the opinions physicists have held on the mathematical foundations of their subject. Previously they supposed that the principles of Newtonian mechanics would provide the basis for the description of the whole of physical phenomenon and that all the theoretical physicists had to do was suitably to develop and apply these principles. With the recognition that there is no logical reason why Newtonian and classical principles should be valid outside the domains in which they have been experimentally verified has come the realization that departures from these principles are indeed necessary. Such departures find their expression through the introduction of new mathematical formalisms, new schemes of axioms and rules of manipulation, into the methods of theoretical physics." P. A. M. Dirac, "Quantum Mechanics" (1930). • 1.1: Blackbody Radiation Cannot Be Explained Classically All bodies emit thermal radiation spanning a broad range of wavelengths. • The amount and peak wavelength of the radiation depends on the temperature of the body, but not on its composition. • The higher the temperature, the more radiation is emitted and the shorter (or bluer) the wavelength of the bulk of the radiation. • 1.2: Quantum Hypothesis Used for Blackbody Radiation Law Energy is quantized in some systems, meaning that the system can have only certain energies and not a continuum of energies, unlike classical mechanics. • 1.3: Photoelectric Effect Explained with Quantum Hypothesis Einstein's theory of the photoelectric effect made the claim that electromagnetic radiation had to be thought of as a series of particles, called photons, which collide with the electrons on the surface and emit electrons when absorbed. 
This theory ran contrary to the belief that electromagnetic radiation was a wave, and thus it was not recognized as correct until 1916, when Robert Millikan experimentally confirmed the theory. • 1.4: The Hydrogen Atomic Spectrum Gases heated to incandescence were found to emit light with a series of sharp wavelengths. The emitted light analyzed by a spectrometer appears as a multitude of narrow bands of color. These so called line spectra are characteristic of the atomic composition of the gas. One such set of lines in the hydrogen atom emission are the Balmer lines, in which a phenomenological relationship was found between frequency and an integer of unknown origin. • 1.5: The Rydberg Formula and the Hydrogen Atomic Spectrum The Rydberg formula is used to describe the wavelengths of spectral lines and was formulated by the Swedish physicist Johannes Rydberg. The Rydberg formula explains the different energies of transition that occur between energy levels. When an electron moves from a higher energy level to a lower one, a photon is emitted. The Hydrogen atom can emit different wavelengths of light depending on the initial and final energy levels of the transition. • 1.6: Matter Has Wavelike Properties Matter waves are often referred to as de Broglie waves and have a wavelength (λ) related to the momentum, p, through the Planck constant, h: λ = h/p. • 1.7: de Broglie Waves can be Experimentally Observed An electron, indeed any particle, is neither a particle nor a wave. Describing the electron as a particle is a mathematical model that works well in some circumstances while describing it as a wave is a different mathematical model that works well in other circumstances. • 1.8: The Bohr Theory of the Hydrogen Atom The model we will describe here, due to Niels Bohr in 1913, is an early attempt to predict the allowed energies for single-electron atoms. It is observed that excited hydrogen atoms emit light at only discrete wavelengths.
Bohr's model was a non-phenomenological model (i.e., based on basic physics principles) that predicts the discrete nature of the spectral lines of Rydberg's formula and decomposes the Rydberg constant into the fundamental constants of the universe. • 1.9: The Heisenberg Uncertainty Principle The Heisenberg Uncertainty Principle is a fundamental theory in quantum mechanics that defines why a scientist cannot measure multiple quantum variables simultaneously. The principle asserts a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables, such as position x and momentum p, can be known. • 1.E: The Dawn of the Quantum Theory (Exercises) These are homework exercises to accompany Chapter 1 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap. Thumbnail: The Photoelectric effect requires quantum mechanics to describe accurately (CC BY-SA-NC 3.0; anonymous via LibreTexts). 01: The Dawn of the Quantum Theory Learning Objectives One experimental phenomenon that could not be adequately explained by classical physics was blackbody radiation. Objectives for this section include • Be familiar with black-body radiators • Apply Stefan-Boltzmann’s Law to estimate total light output from a radiator • Apply Wien’s Displacement Law to estimate the peak wavelength (or frequency) of the output from a black body radiator • Understand the Rayleigh-Jeans Law and how it fails to properly model black-body radiation All normal matter at temperatures above absolute zero emits electromagnetic radiation, which represents a conversion of a body's internal thermal energy into electromagnetic energy, and is therefore called thermal radiation. Conversely, all normal matter absorbs electromagnetic radiation to some degree. An object that absorbs ALL radiation falling on it, at all wavelengths, is called a blackbody.
When a blackbody is at a uniform temperature, its emission has a characteristic frequency distribution that depends on the temperature. This emission is called blackbody radiation. A room temperature blackbody appears black, as most of the energy it radiates is infra-red and cannot be perceived by the human eye. Because the human eye cannot perceive light waves at lower frequencies, a black body, viewed in the dark at the lowest just faintly visible temperature, subjectively appears grey, even though its objective physical spectrum peaks in the infrared range. When it becomes a little hotter, it appears dull red. As its temperature increases further it becomes yellow, white, and ultimately blue-white. Blackbody radiation has a characteristic, continuous frequency spectrum that experimentally depends only on the body's temperature. In fact, we can be much more precise: A body emits radiation at a given temperature and frequency exactly as well as it absorbs the same radiation. This statement was proved by Gustav Kirchhoff: the essential point is that if we instead suppose a particular body can absorb better than it emits, then in a room full of objects all at the same temperature, it will absorb radiation from the other bodies better than it radiates energy back to them. This means it will get hotter, and the rest of the room will grow colder, contradicting the second law of thermodynamics. Thus, a body must emit radiation exactly as well as it absorbs the same radiation at a given temperature and frequency in order to not violate the second law of thermodynamics. Any body at any temperature above absolute zero will radiate to some extent, the intensity and frequency distribution of the radiation depending on the detailed structure of the body. 
To begin analyzing heat radiation, we need to be specific about the body doing the radiating: the simplest possible case is an idealized body which is a perfect absorber, and therefore also (from the above argument) a perfect emitter. So how do we construct a perfect absorber in the laboratory? In 1859 Kirchhoff had a good idea: a small hole in the side of a large box is an excellent absorber, since any radiation that goes through the hole bounces around inside, a lot getting absorbed on each bounce, and has little chance of ever getting out again. So, we can do this in reverse: have an oven with a tiny hole in the side, and presumably the radiation coming out the hole is as good a representation of a perfect emitter as we’re going to find (Figure 1.1.2 ). By the 1890’s, experimental techniques had improved sufficiently that it was possible to make fairly precise measurements of the energy distribution of blackbody radiation. In 1895, at the University of Berlin, Wien and Lummer punched a small hole in the side of an otherwise completely closed oven, and began to measure the radiation coming out. The beam coming out of the hole was passed through a diffraction grating, which sent the different wavelengths/frequencies in different directions, all towards a screen. A detector was moved up and down along the screen to find how much radiant energy was being emitted in each frequency range. They found a radiation intensity/frequency curve close to the distributions in Figure 1.1.3 . By measuring the blackbody emission curves at different temperatures (Figure 1.1.3 ), they were also able to construct two important phenomenological Laws (i.e., formulated from experimental observations, not from basic principles of nature): Stefan-Boltzmann’s Law and Wien’s Displacement Law. Not all radiators are blackbody radiators The radiation of a blackbody radiator is produced by the thermal activity of the material, not the nature of the material, nor how it got thermally excited. 
Some examples of blackbodies include incandescent light bulbs, stars, and hot stove tops. The emission appears as a continuous spectrum (Figure 1.1.3 ) with multiple coexisting colors. However, not every radiator is a blackbody radiator. For example, the emission of a fluorescent bulb is not one. The following spectrum shows the distribution of light from a fluorescent light tube and is a mixture of discrete bands at different wavelengths of light, in contrast to the continuous spectra in Figure 1.1.3 for blackbody radiators. Fluorescent light bulbs contain a mixture of inert gases (usually argon and neon) together with a drop of mercury at low pressure. A different mix of visible colors blend to produce a light that appears to us white with different shadings. The Stefan-Boltzmann Law The first quantitative conjecture based on experimental observations was the Stefan-Boltzmann Law (1879), which states that the total power (i.e., integrated over all emitting frequencies in Figure 1.1.3 ) radiated from one square meter of black surface goes as the fourth power of the absolute temperature (Figure 1.1.4 ): $P = \sigma T^4 \label{Eq1}$ where • $P$ is the total amount of radiation emitted by an object per square meter ($Watts/ m^{2}$) • $\sigma$ is a constant called the Stefan-Boltzmann constant ($5.67 \times 10^{-8}\, Watts\; m^{-2} K^{-4}$) • $T$ is the absolute temperature of the object (in K) The Stefan-Boltzmann Law is easily observed by comparing the integrated value (i.e., under the curves) of the experimental black-body radiation distribution in Figure 1.1.3 at different temperatures. In 1884, Boltzmann derived this $T^4$ behavior from theory by applying classical thermodynamic reasoning to a box filled with electromagnetic radiation, using Maxwell’s equations to relate pressure to energy density. That is, the tiny amount of energy coming out of the hole (Figure 1.1.2 ) would of course have the same temperature dependence as the radiation intensity inside.
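The strong $T^4$ scaling of Equation \ref{Eq1} is easy to explore numerically. The short sketch below evaluates the Stefan-Boltzmann Law and confirms that doubling the absolute temperature increases the radiated power per unit area sixteen-fold:

```python
# Stefan-Boltzmann Law: P = sigma * T^4 (power per square meter of blackbody surface).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_power(T):
    """Total radiated power per square meter at absolute temperature T (in K)."""
    return SIGMA * T**4

P_300 = blackbody_power(300.0)   # roughly room temperature
P_600 = blackbody_power(600.0)   # doubled absolute temperature
print(P_600 / P_300)             # the T^4 law gives a factor of (2)^4 = 16
```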
Example 1.1.1 The sun’s surface temperature is 5700 K. 1. How much power is radiated by the sun? 2. Given that the distance to earth is about 200 sun radii, what is the maximum power possible from a one square kilometer solar energy installation? Solution (a) First, we calculate the surface area of the sun, and then the total power radiated. The sun has a radius of $6.96 \times 10^{8} m$, so its area is $A = 4 \pi R^{2}$. \begin{align*} A &= 4 (3.1416)(6.96 \times 10^{8} m)^{2} \[4pt] &= 6.08 \times 10^{18} m^2 \end{align*} \nonumber The power radiated per square meter of the sun's surface (via the Stefan-Boltzmann Law, Equation \ref{Eq1}) is \begin{align*} P &= (5.67 \times 10^{-8}\, Watts\; m^{-2} K^{-4})(5700 K)^{4} \[4pt] &= 5.98 \times 10^{7} Watts/m^2 \end{align*} \nonumber The total power radiated by the sun is thus: \begin{align*} P_{total} &= P A = (5.98 \times 10^{7} Watts/m^2)( 6.08 \times 10^{18} m^2) \[4pt] &= 3.6 \times 10^{26} Watts \end{align*} \nonumber (b) At a distance of 200 sun radii, the flux is reduced from its value at the sun's surface by the inverse-square factor $(1/200)^2$: \begin{align*} P_{earth} &= \dfrac{5.98 \times 10^{7} Watts/m^2}{(200)^{2}} \[4pt] &\approx 1.5 \times 10^{3} Watts/m^2 \end{align*} \nonumber A one square kilometer ($10^6\, m^2$) installation could therefore collect at most about $1.5 \times 10^{9}$ Watts. Wien’s Displacement Law The second phenomenological observation from experiment was Wien’s Displacement Law. Wien's law identifies the dominant (peak) wavelength, or color, of light coming from a body at a given temperature. As the oven temperature varies, so does the frequency at which the emitted radiation is most intense (Figure 1.1.3 ). In fact, that frequency is directly proportional to the absolute temperature: $\nu_{max} \propto T \label{Eq2}$ where the proportionality constant is $5.879 \times 10^{10} Hz/K$. Wien himself deduced this law theoretically in 1893, following Boltzmann’s thermodynamic reasoning. It had previously been observed, at least semi-quantitatively, by an American astronomer, Langley. This upward shift in $\nu_{max}$ with $T$ is familiar to everyone—when an iron is heated in a fire (Figure 1.1.1 ), the first visible radiation (at around 900 K) is deep red, the lowest frequency visible light.
Further increase in $T$ causes the color to change to orange then yellow, and finally blue at very high temperatures (10,000 K or more) for which the peak in radiation intensity has moved beyond the visible into the ultraviolet. Another representation of Wien's Law (Equation $\ref{Eq2}$) in terms of the peak wavelength of light is $\lambda_{max} = \dfrac{b}{T} \label{Eq2a}$ where $T$ is the absolute temperature in kelvin and $b$ is a constant of proportionality called Wien's displacement constant, equal to $2.898 \times 10^{−3} m\, K$, or more conveniently to obtain wavelength in micrometers, $b≈2900\; μm \cdot K$. This is an inverse relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength of the thermal radiation; the lower the temperature, the longer the wavelength. For visible radiation, hot objects emit bluer light than cool objects. Example 1.1.2 Suppose your body surface temperature is 90 °F. 1. How much radiant energy in $W\, m^{-2}$ would your body emit? 2. What is the peak wavelength of emitted radiation? 3. What is the total radiant energy emitted by your body in Watts? Note: The average adult human male has a body surface area of about 1.9 $m^2$ and the average body surface area for a woman is about 1.6 $m^2$. Solution (a) 90 °F is 305 K. We use the Stefan-Boltzmann Law (Equation \ref{Eq1}). The total amount of radiation emitted will be $P = \sigma T^4$.
\begin{align*} P &= (5.67 \times 10^{-8}\, Watts\; m^{-2} K^{-4})(305 K)^4 \[4pt] &= 491 W\, m^{-2} \end{align*} \nonumber The peak wavelength of emitted radiation is found using Wien's Law: \begin{align*} \lambda_{max} &= \frac{ 2.898 \times 10^{-3} m \cdot K}{T} \[4pt] &= \frac{ 2.898 \times 10^{-3} m \cdot K}{305 K} \[4pt] &= 9.5 \times 10^{-6} m = 9.5 \mu m\end{align*} \nonumber The total radiant power in Watts is: \begin{align*} \text{Power}_{\text{male}} &= (491 W\, m^{-2})(1.9 m^{2}) = 933 W \[4pt] \text{Power}_{\text{female}} &= (491 W\, m^{-2})(1.6 m^{2}) = 786 W \end{align*} \nonumber Example 1.1.3 : The Temperature of the Sun If the Sun has a surface temperature of 5700 K, what is the wavelength of maximum intensity of solar radiation? Solution If we substitute 5700 K for $T$ in Equation $\ref{Eq2a}$, we have \begin{align*} λ_{max} &= \dfrac{0.0029}{5700} \[4pt] &= 5.1 \times 10^{-7} \, m \end{align*} \nonumber Knowing that violet light has a wavelength of about $4.0 \times 10^{-7}$ meters, yellow about $5.6 \times 10^{-7}$ meters, and red about $6.6 \times 10^{-7}$ meters, what can we say about the color of the Sun's peak radiation? The peak wavelength of the Sun's radiation is at a slightly shorter wavelength than the color yellow, so it is a slightly greenish yellow. To see this greenish tinge to the Sun, you would have to look at it from space. It turns out that the Earth's atmosphere scatters some of the shorter waves of sunlight, which shifts its peak wavelength to pure yellow. Remember that thermal radiation always spans a wide range of wavelengths (Figure 1.1.2 ) and Equation \ref{Eq2a} only specifies the single wavelength that is the peak of the spectrum. So although the Sun appears yellowish-white, when you disperse sunlight with a prism you see radiation with all the colors of the rainbow. Yellow just represents a characteristic wavelength of the emission. Exercise 1.1.1 1.
At what wavelength does the sun emit most of its radiation if it has a temperature of 5,778 K? 2. At what wavelength does the earth emit most of its radiation if it has a temperature of 288 K? Answer a 500 nm Answer b 10.0 microns The Rayleigh-Jeans Law Lord Rayleigh and J. H. Jeans developed an equation which explained blackbody radiation at low frequencies. The equation, which seemed to express blackbody radiation, was built upon all the known assumptions of physics at the time. The key assumption Rayleigh and Jeans made was that infinitesimal amounts of energy are continuously added to the system as the frequency is increased. Classical physics assumed that energy emitted by atomic oscillations could have any continuous value. This was true for anything that had been studied up until that point, including things like acceleration, position, or energy. Their resulting Rayleigh-Jeans Law was \begin{align} d\rho \left( \nu ,T \right) &= \rho_{\nu} \left( T \right) d\nu \[4pt] &= \dfrac{8 \pi k_B T}{c^3} \nu^2 d\nu \label{Eq3} \end{align} Experimental data collected from blackbody radiators showed slightly different results than what was expected by the Rayleigh-Jeans law (Figure 1.1.5 ). The law had been studied and widely accepted by many physicists of the day, but the experimental results did not lie: something was different between what was theorized and what actually happened. The experimental results showed a bell type of curve, but according to the Rayleigh-Jeans law the intensity diverged as the frequency approached the ultraviolet region (Equation $\ref{Eq3}$). Ehrenfest later dubbed this the "ultraviolet catastrophe". It is important to emphasize that Equation $\ref{Eq3}$ is a classical result: the only inputs are classical dynamics and Maxwell’s electromagnetic theory. Differential vs. Integral Representation of the Distribution Radiation is understood as a continuous distribution of amplitude vs. wavelength or, equivalently, amplitude vs. frequency (Figure 1.1.5 ).
According to the Rayleigh-Jeans law, the intensity at a specific frequency $\nu$ and temperature is $\rho(\nu,T) = \dfrac{8\pi k_BT\nu^2}{c^3} \text{.} \nonumber$ However, in practice, we are more interested in frequency intervals. An exact frequency is the limit of a sequence of smaller and smaller intervals. If we make the assumption that, for a sufficiently small interval, $ρ(\nu,T)$ does not vary, we get the definition for the differential $dρ(ν,T)$ in Equation \ref{Eq3}. The assumption is fair due to the continuity of $ρ(\nu,T)$. This is the approximation of an integral on a very small interval $d\nu$ by the height of a point inside this interval ($\frac{8\pi k_BT\nu^2}{c^3}$) times its length ($d\nu$). So, if we sum an infinite number of small intervals like the one above, we get an integral. The total radiation between $\nu_1$ and $\nu_2$ will be: \begin{align*} \int_{\nu_1}^{\nu_2}\operatorname{d}\!\rho(\nu,T) &= \int_{\nu_1}^{\nu_2}\rho(\nu, T)\operatorname{d}\!\nu \[4pt] &= \int_{\nu_1}^{\nu_2}\frac{8\pi k_BT\nu^2}{c^3}\operatorname{d}\!\nu \[4pt] &= 8 \pi k_B T \frac{\nu_2^3 - \nu_1^3}{3 c^3}\text{.} \end{align*} \nonumber Observe that $ρ(\nu,T)$ is quadratic in $\nu$. Example 1.1.4 : The Ultraviolet Catastrophe What is the total spectral radiance of a radiator that follows the Rayleigh-Jeans law for its emission spectrum? Solution The total spectral radiance $\rho_{tot}(T)$ is the combined emission over all possible wavelengths (or equivalently, frequencies), which is an integral over the relevant distribution (Equation \ref{Eq3} for the Rayleigh-Jeans Law). \begin{align*} \rho_{tot}(T) &= \int_0^\infty d\rho \left( \nu ,T \right) \[4pt] &= \int_0^\infty \dfrac{8 \pi k_B T}{c^3} \nu^2 d\nu \end{align*} \nonumber but the integral $\int_0^\infty x^2\mathrm{d}x \nonumber$ does not converge.
Worse, it is infinite, $\lim_{k\to\infty}\int_0^k x^2\mathrm{d}x = \infty \nonumber$ Hence, the classically derived Rayleigh-Jeans law predicts that the radiance of a blackbody is infinite. Since radiance is power per angle and unit area, this also implies that the total power and hence the energy a blackbody emitter gives off is infinite, which is patently absurd. This is called the ultraviolet catastrophe because the absurd prediction is caused by the classical law not predicting the behavior at high frequencies/small wavelengths correctly (Figure 1.1.5 ).
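Both the finite band integral and the divergence are easy to verify numerically. The sketch below (function names are ours) checks the closed-form band integral derived above against a crude Riemann sum, and then shows that the total grows without bound as the upper frequency cutoff increases:

```python
import math

KB = 1.38e-23   # Boltzmann constant, J/K
C = 3.00e8      # speed of light, m/s

def rj_density(nu, T):
    """Rayleigh-Jeans spectral energy density rho(nu, T)."""
    return 8 * math.pi * KB * T * nu**2 / C**3

def rj_band(nu1, nu2, T):
    """Closed-form integral of rj_density from nu1 to nu2."""
    return 8 * math.pi * KB * T * (nu2**3 - nu1**3) / (3 * C**3)

# Check the closed form against a midpoint Riemann sum at T = 300 K:
T, nu1, nu2, n = 300.0, 1e12, 2e12, 100_000
dnu = (nu2 - nu1) / n
riemann = sum(rj_density(nu1 + (i + 0.5) * dnu, T) * dnu for i in range(n))
assert abs(riemann - rj_band(nu1, nu2, T)) / rj_band(nu1, nu2, T) < 1e-6

# The "catastrophe": the total keeps growing as the cutoff frequency rises.
print(rj_band(0, 1e14, T) < rj_band(0, 1e15, T) < rj_band(0, 1e16, T))  # True
```

Raising the cutoff by a factor of 10 raises the band total by a factor of 1000, which is the cubic growth responsible for the divergence.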
Learning Objectives • To understand how energy is quantized in blackbody radiation By the late 19th century, many physicists thought their discipline was well on the way to explaining most natural phenomena. They could calculate the motions of material objects using Newton’s laws of classical mechanics, and they could describe the properties of radiant energy using mathematical relationships known as Maxwell’s equations, developed in 1873 by James Clerk Maxwell, a Scottish physicist. The universe appeared to be a simple and orderly place, containing matter, which consisted of particles that had mass and whose location and motion could be accurately described, and electromagnetic radiation, which was viewed as having no mass and whose exact position in space could not be fixed. Thus matter and energy were considered distinct and unrelated phenomena. Soon, however, scientists began to look more closely at a few inconvenient phenomena that could not be explained by the theories available at the time. One experimental phenomenon that could not be adequately explained by classical physics was blackbody radiation (Figure 1.2.1 ). Attempts to explain or calculate this spectral distribution from classical theory were complete failures. A theory developed by Rayleigh and Jeans predicted that the intensity should go to infinity at short wavelengths. Since the intensity actually drops to zero at short wavelengths, the Rayleigh-Jeans result was called the ultraviolet catastrophe (Figure 1.2.1 dashed line). There was no agreement between theory and experiment in the ultraviolet region of the blackbody spectrum. Quantizing Electrons in the Radiator In 1900, the German physicist Max Planck (1858–1947) explained the ultraviolet catastrophe by proposing that the energy of electromagnetic waves is quantized rather than continuous. 
This means that for each temperature, there is a maximum intensity of radiation that is emitted in a blackbody object, corresponding to the peaks in Figure 1.2.1 , so the intensity does not follow a smooth curve as the temperature increases, as predicted by classical physics. Thus energy could be gained or lost only in integral multiples of some smallest unit of energy, a quantum (the smallest possible unit of energy). Energy can be gained or lost only in integral multiples of a quantum. Quantization Although quantization may seem to be an unfamiliar concept, we encounter it frequently in everyday life. For example, US money is in integral multiples of pennies. Similarly, musical instruments like a piano or a trumpet can produce only certain musical notes, such as C or F sharp. Because these instruments cannot produce a continuous range of frequencies, their frequencies are quantized. It is also similar to going up and down a hill using discrete stair steps rather than being able to move up and down a continuous slope. Your potential energy takes on discrete values as you move from step to step. Even electrical charge is quantized: an ion may have a charge of −1 or −2, but not −1.33 electron charges. Planck's quantization of energy is described by his famous equation: $E=h \nu \label{Eq1.2.1}$ where the proportionality constant $h$ is called Planck’s constant, one of the most accurately known fundamental constants in science $h=6.626070040(81) \times 10^{−34}\, J\cdot s \nonumber$ However, for our purposes, its value to four significant figures is sufficient: $h = 6.626 \times 10^{−34} \,J\cdot s \nonumber$ As the frequency of electromagnetic radiation increases, the magnitude of the associated quantum of radiant energy increases. By assuming that energy can be emitted by an object only in integral multiples of $hν$, Planck devised an equation that fit the experimental data shown in Figure 1.2.1 .
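The proportionality between frequency and quantum size is worth making concrete. A minimal sketch using the four-significant-figure value of $h$ quoted above (the function name is ours):

```python
H = 6.626e-34  # Planck's constant, J*s

def quantum_energy(nu):
    """Energy in joules of one quantum of radiation with frequency nu (Hz)."""
    return H * nu

# Red light (~4.3e14 Hz) versus an X-ray frequency (~3e18 Hz): the quantum
# grows in direct proportion to the frequency, by about four orders of magnitude.
e_red = quantum_energy(4.3e14)    # ~2.8e-19 J
e_xray = quantum_energy(3.0e18)   # ~2.0e-15 J
```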
We can understand Planck’s explanation of the ultraviolet catastrophe qualitatively as follows: At low temperatures, radiation with only relatively low frequencies is emitted, corresponding to low-energy quanta. As the temperature of an object increases, there is an increased probability of emitting radiation with higher frequencies, corresponding to higher-energy quanta. At any temperature, however, it is simply more probable for an object to lose energy by emitting a large number of lower-energy quanta than a single very high-energy quantum that corresponds to ultraviolet radiation. The result is a maximum in the plot of intensity of emitted radiation versus wavelength, as shown in Figure 1.2.1 , and a shift in the position of the maximum to lower wavelength (higher frequency) with increasing temperature. At the time he proposed his radical hypothesis, Planck could not explain why energies should be quantized. Initially, his hypothesis explained only one set of experimental data—blackbody radiation. If quantization were observed for a large number of different phenomena, then quantization would become a law. In time, a theory might be developed to explain that law. As things turned out, Planck’s hypothesis was the seed from which modern physics grew. Max Planck explained the spectral distribution of blackbody radiation as resulting from oscillations of electrons. Similarly, oscillations of electrons in an antenna produce radio waves. Max Planck concentrated on modeling the oscillating charges that must exist in the oven walls, radiating heat inwards and—in thermodynamic equilibrium—themselves being driven by the radiation field. He found he could account for the observed curve if he required these oscillators not to radiate energy continuously, as the classical theory would demand, but to lose or gain energy only in chunks, called quanta, of size $h\nu$, for an oscillator of frequency $\nu$ (Equation $\ref{Eq1.2.1}$).
With that assumption, Planck calculated the following formula for the radiation energy density inside the oven: \begin{align} d\rho(\nu,T) &= \rho_\nu (T) d\nu \[4pt] &= \dfrac {8 \pi h \nu^3}{c^3} \cdot \dfrac {1 }{\exp \left( \dfrac {h\nu}{k_B T}\right)-1} d\nu \label{Eq2a} \end{align} with • $\pi = 3.14159$ • $h$ = $6.626 \times 10^{-34} J\cdot s$ • $c$ = $3.00 \times 10^{8}\, m/s$ • $\nu$ is the frequency (in $1/s$) • $k_B$ = $1.38 \times 10^{-23} J/K$ • $T$ is absolute temperature (in Kelvin) Planck's radiation energy density (Equation $\ref{Eq2a}$) can also be expressed in terms of wavelength $\lambda$. $\rho (\lambda, T) = \dfrac {8 \pi hc}{\lambda ^5} \left(\dfrac {1}{ e^{\dfrac {hc}{\lambda k_B T}} - 1}\right) \label{Eq2b}$ with a wavelength of maximum energy density at: $\lambda_{max}=\frac{hc}{4.965k_BT} \nonumber$ Planck's equation (Equation $\ref{Eq2b}$) gave an excellent agreement with the experimental observations for all temperatures (Figure 1.2.2 ). Max Planck (1858–1947) Planck made many substantial contributions to theoretical physics, but his fame as a physicist rests primarily on his role as the originator of quantum theory. In addition to being a physicist, Planck was a gifted pianist, who at one time considered music as a career. During the 1930s, Planck felt it was his duty to remain in Germany, despite his open opposition to the policies of the Nazi government. One of his sons was executed in 1944 for his part in an unsuccessful attempt to assassinate Hitler, and bombing during the last weeks of World War II destroyed Planck’s home. After WWII, the major German scientific research organization was renamed the Max Planck Society. Exercise 1.2.1 Use Equation $\ref{Eq2b}$ to show that the units of $ρ(λ,T)\,dλ$ are $J/m^3$ as expected for an energy density. The near perfect agreement of this formula with precise experiments (e.g., Figure 1.2.3 ), and the consequent necessity of energy quantization, was the most important advance in physics in the century.
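Planck's result can be checked against its two touchstones numerically: at low frequencies ($h\nu \ll k_BT$) it must reduce to the Rayleigh-Jeans form, and its wavelength peak must reproduce Wien's displacement constant $b \approx 2.9 \times 10^{-3}\, m\, K$. A sketch, written with the energy density in the $8\pi h\nu^3/c^3$ form consistent with Equation \ref{Eq3} (function names are ours):

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 3.00e8      # speed of light, m/s
KB = 1.38e-23   # Boltzmann constant, J/K

def planck_density(nu, T):
    """Planck spectral energy density; expm1(x) = exp(x) - 1, accurate for small x."""
    return (8 * math.pi * H * nu**3 / C**3) / math.expm1(H * nu / (KB * T))

def rj_density(nu, T):
    """Rayleigh-Jeans spectral energy density (the classical limit)."""
    return 8 * math.pi * KB * T * nu**2 / C**3

T = 1000.0
# Low-frequency limit: at nu = 1e10 Hz, h*nu/(kB*T) ~ 5e-4, so Planck ~ Rayleigh-Jeans.
nu_low = 1e10
assert abs(planck_density(nu_low, T) / rj_density(nu_low, T) - 1) < 1e-3

# Wien peak: lambda_max = h*c/(4.965 kB T), so lambda_max * T should reproduce b.
lam_max = H * C / (4.965 * KB * T)
print(lam_max * T)   # ~2.90e-3 m*K
```

At high frequencies the exponential denominator suppresses the density, which is precisely how the quantum hypothesis cures the ultraviolet catastrophe.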
His blackbody curve was completely accepted as the correct one: more and more accurate experiments confirmed it time and again, yet the radical nature of the quantum assumption did not sink in. Planck was not too upset—he didn’t believe it either; he saw it as a technical fix that (he hoped) would eventually prove unnecessary. Part of the problem was that Planck’s route to the formula was long, difficult and implausible—he even made contradictory assumptions at different stages, as Einstein pointed out later. However, the result was correct anyway! The mathematics implied that the energy given off by a blackbody was not continuous, but given off at certain specific wavelengths, in regular increments. If Planck assumed that the energy of blackbody radiation was in the form $E = nh \nu \nonumber$ where $n$ is an integer, then he could explain what the mathematics represented. This was indeed difficult for Planck to accept, because at the time, there was no reason to presume that the energy should only be radiated at specific frequencies. Nothing in Maxwell’s laws suggested such a thing. It was as if the vibrations of a mass on the end of a spring could only occur at specific energies. Imagine the mass slowly coming to rest due to friction, but not in a continuous manner. Instead, the mass jumps from one fixed quantity of energy to another without passing through the intermediate energies. To use a different analogy, it is as if what we had always imagined as smooth inclined planes were, in fact, a series of closely spaced steps that only presented the illusion of continuity. Summary The agreement between Planck’s theory and the experimental observation provided strong evidence that the energy of electron motion in matter is quantized. In the next two sections, we will see that the energy carried by light also is quantized in units of $h \nu$. These packets of energy are called “photons.”
Learning Objectives • To be familiar with the photoelectron effect for bulk materials • Understand how the photoelectron kinetic energy and intensity vary as a function of incident light wavelength • Understand how the photoelectron kinetic energy and intensity vary as a function of incident light intensity • Describe what a workfunction is and relate it to ionization energy • Describe the photoelectric effect with Einstein's quantized photon model of light Nature, it seemed, was quantized (non-continuous, or discrete). If this was so, how could Maxwell’s equations correctly predict the result of the blackbody radiator? Planck spent a good deal of time attempting to reconcile the behavior of electromagnetic waves with the discrete nature of blackbody radiation, to no avail. It was not until 1905, with yet another paper published by Albert Einstein, that the wave nature of light was expanded to include the particle interpretation of light, which adequately explained Planck’s equation. The photoelectric effect was first documented in 1887 by the German physicist Heinrich Hertz and is therefore sometimes referred to as the Hertz effect. While working with a spark-gap transmitter (a primitive radio-broadcasting device), Hertz discovered that upon absorption of certain frequencies of light, substances would give off a visible spark. In 1899, J.J. Thomson identified this spark as light-excited electrons (called photoelectrons) leaving the metal's surface (Figure 1.3.1 ). The classical picture underlying the photoelectric effect was that the atoms in the metal contained electrons, which were shaken and caused to vibrate by the oscillating electric field of the incident radiation. Eventually some of them would be shaken loose, and would be ejected from the cathode.
It is worthwhile considering carefully how the number and speed of electrons emitted would be expected to vary with the intensity and color of the incident radiation along with the time needed to observe the photoelectrons. • Increasing the intensity of radiation would shake the electrons more violently, so one would expect more to be emitted, and they would shoot out at greater speed, on average. • Increasing the frequency of the radiation would shake the electrons faster, so it might cause the electrons to come out faster. For very dim light, it would take some time for an electron to work up to a sufficient amplitude of vibration to shake loose. Lenard's Experimental Results (Intensity Dependence) In 1902, Hertz's student, Philipp Lenard, studied how the energy of the emitted photoelectrons varied with the intensity of the light. He used a carbon arc light and could increase the intensity a thousand-fold. The ejected electrons hit another metal plate, the collector, which was connected to the cathode by a wire with a sensitive ammeter, to measure the current produced by the illumination (Figure 1.3.2 ). To measure the energy of the ejected electrons, Lenard charged the collector plate negatively, to repel the electrons coming towards it. Thus, only electrons ejected with enough kinetic energy to get up this potential hill would contribute to the current. Lenard discovered that there was a well defined minimum voltage that stopped any electrons getting through ($V_{stop}$). To Lenard's surprise, he found that $V_{stop}$ did not depend at all on the intensity of the light! Doubling the light intensity doubled the number of electrons emitted, but did not affect the kinetic energies of the emitted electrons. The more powerful oscillating field ejected more electrons, but the maximum individual energy of the ejected electrons was the same as for the weaker field (Figure 1.3.2 ). 
Millikan's Experimental Results (Wavelength Dependence) The American experimental physicist Robert Millikan followed up on Lenard's experiments and, using a powerful arc lamp, was able to generate sufficient light intensity to separate out the colors and check the photoelectric effect using light of different colors. He found that the maximum energy of the ejected electrons did depend on the color: the shorter wavelength, higher frequency light ejects photoelectrons with greater kinetic energy (Figure 1.3.3 ). As shown in Figure 1.3.4 , just the opposite of the classical behavior is observed in Lenard's and Millikan's experiments. The intensity affects the number of electrons, and the frequency affects the kinetic energy of the emitted electrons. From these sketches, we see that • the kinetic energy of the electrons is linearly proportional to the frequency of the incident radiation above a threshold value of $ν_0$ (no current is observed below $ν_0$), and the kinetic energy is independent of the intensity of the radiation, and • the number of electrons (i.e. the electric current) is proportional to the intensity and independent of the frequency of the incident radiation above the threshold value of $ν_0$ (i.e., no current is observed below $ν_0$). Classical Theory does not Describe Experiment Classical theory predicts that the energy carried by light is proportional to its amplitude, independent of its frequency, and this fails to explain the observed wavelength dependence in Lenard's and Millikan's observations. As with most of the experimental results we discuss in this text, the behavior described above is a simplification of the true experimental results observed in the laboratory. A more complete description involves more complex physics and instrumentation, which will be ignored for now.
Einstein's Quantum Picture In 1905 Einstein gave a very simple interpretation of Lenard's results. He borrowed Planck's hypothesis of quantized energy from the blackbody problem and assumed that the incoming radiation should be thought of as quanta of energy $h\nu$, with $\nu$ the frequency. In photoemission, one such quantum is absorbed by one electron. If the electron is some distance into the material of the cathode, some energy will be lost as it moves towards the surface. There will always be some electrostatic cost as the electron leaves the surface, which is the workfunction, $\Phi$. The most energetic electrons emitted will be those very close to the surface, and they will leave the cathode with kinetic energy $KE = h\nu - \Phi \label{Eq1}$ On cranking up the negative voltage on the collector plate until the current just stops, that is, to $V_{stop}$, the highest kinetic energy electrons ($KE_e$) must have had energy $eV_{stop}$ upon leaving the cathode. Thus, $eV_{stop} = h\nu - \Phi \label{Eq2}$ Thus, Einstein's theory makes a very definite quantitative prediction: if the frequency of the incident light is varied, and $V_{stop}$ plotted as a function of frequency, the slope of the line should be $\frac{h}{e}$ (Figure $\PageIndex{4A}$). It is also clear that there is a minimum light frequency for a given metal, $\nu_0$, that for which the quantum of energy is equal to $\Phi$ (Equation \ref{Eq1}). Light below that frequency, no matter how bright, will not eject electrons. According to both Planck and Einstein, the energy of light is proportional to its frequency rather than its amplitude, so there will be a minimum frequency $\nu_0$ needed to eject an electron with no residual energy. Since every photon of sufficient energy excites only one electron, increasing the light's intensity (i.e. the number of photons/sec) only increases the number of released electrons and not their kinetic energy.
In addition, no time is necessary for the atom to be heated to a critical temperature and therefore the release of the electron is nearly instantaneous upon absorption of the light. Finally, because the photons must be above a certain energy to satisfy the workfunction, a threshold frequency exists below which no photoelectrons are observed. This frequency is measured in units of Hertz (1/second) in honor of the discoverer of the photoelectric effect. Einstein's Equation $\ref{Eq1}$ explains the properties of the photoelectric effect quantitatively. A strange implication of this experiment is that light can behave as a kind of massless "particle", now known as a photon, whose energy $E=h\nu$ can be transferred to an actual particle (an electron), imparting kinetic energy to it, just as in an elastic collision between two massive particles such as billiard balls. Robert Millikan initially did not accept Einstein's theory, which he saw as an attack on the wave theory of light, and worked on the photoelectric effect for ten years, until 1916. He even devised techniques for scraping clean the metal surfaces inside the vacuum tube. For all his efforts he found disappointing results: he confirmed Einstein's theory after ten years. In his paper, Millikan was still desperately struggling to avoid this conclusion. However, by the time of his Nobel Prize acceptance speech, he had changed his mind rather drastically! Einstein's simple explanation (Equation \ref{Eq1}) completely accounted for the observed phenomena in Lenard's and Millikan's experiments (Figure 1.3.4 ) and began an investigation into the field we now call quantum mechanics. This new field seeks to explain phenomena, such as blackbody radiation and the photoelectric effect, that classical mechanics cannot, and to provide a more unified description of physics. The study of the photoelectric effect has also led to the creation of the new field of photoelectron spectroscopy.
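Einstein's relation and the threshold condition can be captured in a few lines of Python. A sketch in electron-volt units (function names are ours); the 2.71-eV calcium workfunction is the value used in Example 1.3.1 below:

```python
H_EV = 4.136e-15   # Planck's constant in eV*s
C = 3.00e8         # speed of light, m/s

def max_kinetic_energy(wavelength_m, phi_eV):
    """Max photoelectron KE in eV: h*nu - phi, or None below the threshold."""
    ke = H_EV * C / wavelength_m - phi_eV
    return ke if ke > 0 else None   # below threshold: no photoelectrons at all

def threshold_wavelength(phi_eV):
    """Longest wavelength (m) that can eject an electron from a given surface."""
    return H_EV * C / phi_eV

# Calcium (phi = 2.71 eV) illuminated with 420-nm violet light:
ke_420 = max_kinetic_energy(420e-9, 2.71)        # ~0.24 eV
lam_0 = threshold_wavelength(2.71) * 1e9         # ~458 nm
```

Note that `max_kinetic_energy(500e-9, 2.71)` returns `None`: a 500-nm photon carries less than 2.71 eV, so no matter how intense the beam, no photoelectrons appear.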
Einstein's theory of the photoelectron presented a completely different way to measure Planck's constant than from black-body radiation. The Workfunction (Φ) The workfunction is an intrinsic property of the metal. While the workfunction and ionization energy appear to be similar concepts, they are distinct. The workfunction of a metal is the minimum amount of energy necessary to remove an electron from the surface of the bulk (solid) metal (sometimes referred to as binding energy). $\ce{M (s) + \Phi \rightarrow M^{+}(s) + e^{-}}(\text{free with no kinetic energy}) \nonumber$ The workfunction is qualitatively similar to ionization energy ($\ce{IE}$), which is the amount of energy required to remove an electron from an atom or molecule in the gaseous state. $\ce{M (g) + IE \rightarrow M^{+}(g) + e^{-}} (\text{free with no kinetic energy}) \nonumber$ However, these two energies differ in magnitude (Table 1.3.1 ). For instance, copper has a workfunction of about 4.7 eV, but has a higher ionization energy of 7.7 eV. Generally, the ionization energies for metals are greater than the corresponding workfunctions (i.e., the electrons are less tightly bound in bulk metal). Table 1.3.1 : Workfunctions and Ionization Energies of Select Elements

| Element | Workfunction $\Phi$ (eV) | Ionization Energy (eV) |
|---|---|---|
| Lithium (Li) | 2.93 | 5.39 |
| Beryllium (Be) | 4.98 | 9.32 |
| Boron (B) | 4.45 | 8.298 |
| Carbon (C) | 5.0 | 11.26 |
| Sodium (Na) | 2.36 | 5.13 |
| Aluminum (Al) | 4.20 | 5.98 |
| Silicon (Si) | 4.85 | 8.15 |
| Potassium (K) | 2.3 | 4.34 |
| Iron (Fe) | 4.67 | 7.87 |
| Cobalt (Co) | 4.89 | 7.88 |
| Copper (Cu) | 4.7 | 7.7 |
| Gallium (Ga) | 4.32 | 5.99 |
| Germanium (Ge) | 5.0 | 7.89 |
| Arsenic (As) | 3.75 | 9.81 |
| Selenium (Se) | 5.9 | 9.75 |
| Silver (Ag) | 4.72 | 7.57 |
| Tin (Sn) | 4.42 | 7.34 |
| Cesium (Cs) | 1.95 | 3.89 |
| Gold (Au) | 5.17 | 9.22 |
| Mercury (Hg, liquid) | 4.47 | 10.43 |
| Bismuth (Bi) | 4.34 | 7.29 |

Example 1.3.1 : Calcium 1. What is the energy in joules and electron volts of a photon of 420-nm violet light? 2.
What is the maximum kinetic energy of electrons ejected from calcium by 420-nm violet light, given that the workfunction for calcium metal is 2.71 eV? Strategy To solve part (a), note that the energy of a photon is given by $E=h\nu$. For part (b), once the energy of the photon is calculated, it is a straightforward application of Equation \ref{Eq1} to find the ejected electron’s maximum kinetic energy, since $\Phi$ is given. Solution for (a) Photon energy is given by $E = h\nu \nonumber$ Since we are given the wavelength rather than the frequency, we solve the familiar relationship $c=\nu\lambda$ for the frequency, yielding $\nu=\dfrac{c}{\lambda} \nonumber$ Combining these two equations gives the useful relationship $E=\dfrac{hc}{\lambda} \nonumber$ Now substituting known values yields \begin{align*} E &= \dfrac{(6.63 \times 10^{-34}\; J \cdot s)(3.00 \times 10^8 m/s)}{420 \times 10^{-9}\; m} \[4pt] &= 4.74 \times 10^{-19}\; J \end{align*} \nonumber Converting to eV, the energy of the photon is \begin{align*} E&=(4.74 \times 10^{-19}\; J) \left( \dfrac{1 \;eV}{1.6 \times 10^{-19}\;J} \right) \[4pt] &= 2.96\; eV. \nonumber \end{align*} \nonumber Solution for (b) Finding the kinetic energy of the ejected electron is now a simple application of Equation \ref{Eq1}. Substituting the photon energy and binding energy yields \begin{align*} KE_e &=h\nu – \Phi \[4pt] &= 2.96 \;eV – 2.71 \;eV \[4pt] &= 0.246\; eV.\nonumber \end{align*} \nonumber Discussion The energy of this 420-nm photon of violet light is a tiny fraction of a joule, and so it is no wonder that a single photon would be difficult for us to sense directly—humans are more attuned to energies on the order of joules. But looking at the energy in electron volts, we can see that this photon has enough energy to affect atoms and molecules. 
A DNA molecule can be broken with about 1 eV of energy, for example, and typical atomic and molecular energies are on the order of eV, so that the photon in this example could have biological effects. The ejected electron (called a photoelectron) has a rather low energy, and it would not travel far, except in a vacuum. The electron would be stopped by a retarding potential of about 0.25 V. In fact, if the photon wavelength were longer and its energy less than 2.71 eV, then the formula would give a negative kinetic energy, an impossibility. This simply means that the 420-nm photons with their 2.96-eV energy are not much above the frequency threshold. You can show for yourself that the threshold wavelength is 459 nm (blue light). This means that if calcium metal is used in a light meter, the meter will be insensitive to wavelengths longer than those of blue light. Such a light meter would be insensitive to red light, for example. Exercise 1.3.1 : Silver What is the longest-wavelength electromagnetic radiation that can eject a photoelectron from silver? Is this in the visible range? Answer Given that the workfunction is 4.72 eV from Table 1.3.1 , only photons with wavelengths shorter than 263 nm will induce photoelectrons (calculated via $E = h\nu = hc/\lambda$). This is ultraviolet and not in the visible range. Exercise 1.3.2 Why is the workfunction of an element generally lower than the ionization energy of that element? Answer The workfunction of a metal refers to the minimum energy required to extract an electron from the surface of a (bulk) metal by the absorption of a photon of light. The workfunction will vary from metal to metal. In contrast, ionization energy is the energy needed to detach electrons from atoms and also varies with each particular atom, with the valence electrons requiring less energy to extract than core electrons (i.e., from lower shells) that are more closely bound to the nuclei.
The electrons in the metal lattice are less tightly bound (i.e., they are free to move within the metal), and removing one of these electrons is much easier than removing an electron from an isolated atom because the metallic bonding of the bulk metal reduces their binding energy. As we will show in subsequent chapters, the more delocalized a particle is, the lower its energy. Summary Although Hertz discovered the photoelectric effect in 1887, it was not until 1905 that a theory was proposed that explained the effect completely. The theory was proposed by Einstein and it made the claim that electromagnetic radiation had to be thought of as a series of particles, called photons, which collide with the electrons on the surface and eject them. This theory ran contrary to the belief that electromagnetic radiation was a wave, and thus it was not recognized as correct until 1916 when Robert Millikan experimentally confirmed the theory. The photoelectric effect is the process in which electromagnetic radiation ejects electrons from a material. Einstein proposed that photons are quanta of electromagnetic radiation with energy $E=h\nu$, where $\nu$ is the frequency of the radiation. All electromagnetic radiation is composed of photons. As Einstein explained, all characteristics of the photoelectric effect are due to the interaction of individual photons with individual electrons. The maximum kinetic energy $KE_e$ of ejected electrons (photoelectrons) is given by $KE_e=h\nu - \Phi$, where $h\nu$ is the photon energy and $\Phi$ is the workfunction (or binding energy) of the electron to the particular material. Conceptual Questions 1. Is visible light the only type of electromagnetic radiation that can cause the photoelectric effect? 2. Which aspects of the photoelectric effect cannot be explained without photons? Which can be explained without photons? Are the latter inconsistent with the existence of photons? 3. 
Is the photoelectric effect a direct consequence of the wave character of electromagnetic radiation or of the particle character of electromagnetic radiation? Explain briefly. 4. Insulators (nonmetals) have a higher $\Phi$ than metals, and it is more difficult for photons to eject electrons from insulators. Discuss how this relates to the free charges in metals that make them good conductors. 5. If you pick up and shake a piece of metal that has electrons in it free to move as a current, no electrons fall out. Yet if you heat the metal, electrons can be boiled off. Explain both of these facts as they relate to the amount and distribution of energy involved with shaking the object as compared with heating it.
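The arithmetic behind the calcium example and the 459-nm threshold claim above is easy to check numerically. The following sketch (added for illustration, not part of the original text) uses the same rounded constants as the worked example:

```python
# Photoelectric-effect bookkeeping for calcium (Example 1.3.1),
# using the rounded constants from the worked example.
h = 6.63e-34   # Planck's constant, J*s
c = 3.00e8     # speed of light, m/s
eV = 1.6e-19   # joules per electron volt

wavelength = 420e-9   # m, violet light
phi = 2.71            # eV, workfunction of calcium

E_photon = h * c / wavelength / eV   # photon energy E = hc/lambda, in eV
KE_max = E_photon - phi              # maximum photoelectron energy, KE = h*nu - Phi

# The threshold (longest) wavelength that still ejects an electron
# satisfies hc/lambda = Phi.
lam_threshold_nm = h * c / (phi * eV) * 1e9

print(f"E_photon = {E_photon:.2f} eV")           # ~2.96 eV
print(f"KE_max = {KE_max:.2f} eV")               # ~0.25 eV
print(f"threshold = {lam_threshold_nm:.0f} nm")  # ~459 nm (blue light)
```

Any wavelength longer than the threshold makes `KE_max` negative, which simply means no photoelectron is ejected.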
Overview • To introduce the concept of absorption and emission line spectra and describe the Balmer equation to describe the visible lines of atomic hydrogen. The first person to realize that white light was made up of the colors of the rainbow was Isaac Newton, who in 1666 passed sunlight through a narrow slit, then a prism, to project the colored spectrum on to a wall. This effect had been noticed previously, of course, not least in the sky, but previous attempts to explain it, by Descartes and others, had suggested that the white light became colored when it was refracted, the color depending on the angle of refraction. Newton clarified the situation by using a second prism to reconstitute the white light, making much more plausible the idea that the white light was composed of the separate colors. He then took a monochromatic component from the spectrum generated by one prism and passed it through a second prism, establishing that no further colors were generated. That is, light of a single color did not change color on refraction. He concluded that white light was made up of all the colors of the rainbow, and that on passing through a prism, these different colors were refracted through slightly different angles, thus separating them into the observed spectrum. Atomic Line Spectra The spectrum of hydrogen atoms, which turned out to be crucial in providing the first insight into atomic structure over half a century later, was first observed by Anders Ångström in Uppsala, Sweden, in 1853. His communication was translated into English in 1855. Ångström, the son of a country minister, was a reserved person, not interested in the social life that centered around the court. Consequently, it was many years before his achievements were recognized, at home or abroad (most of his results were published in Swedish). Most of what is known about atomic (and molecular) structure and mechanics has been deduced from spectroscopy. 
Figure 1.4.1 shows different types of spectra. A continuous spectrum can be produced by an incandescent solid or gas at high pressure (e.g., blackbody radiation is a continuum). An emission spectrum can be produced by a gas at low pressure excited by heat or by collisions with electrons. An absorption spectrum results when light from a continuous source passes through a cooler gas, consisting of a series of dark lines characteristic of the composition of the gas. Fraunhofer Lines In 1802, William Wollaston in England had discovered that the solar spectrum had tiny gaps - there were many thin dark lines in the rainbow of colors. These were investigated much more systematically by Joseph von Fraunhofer, beginning in 1814. He increased the dispersion by using more than one prism. He found an "almost countless number" of lines. He labeled the strongest dark lines A, B, C, D, etc. Between 1814 and 1823, Fraunhofer discovered nearly 600 dark lines in the solar spectrum viewed at high resolution and designated the principal features with the letters A through K, and weaker lines with other letters (Table 1.4.1 ). Modern observations of sunlight can detect many thousands of lines. It is now understood that these lines are caused by absorption by the outer layers of the Sun.

Table 1.4.1 : Major Fraunhofer lines and the elements they are associated with.

Designation   Element   Wavelength (nm)
y             O2        898.765
Z             O2        822.696
A             O2        759.370
B             O2        686.719
C             H         656.281
a             O2        627.661
D1            Na        589.592
D2            Na        588.995
D3 or d       He        587.5618

The Fraunhofer lines are typical spectral absorption lines. These dark lines are produced whenever a cold gas is between a broad spectrum photon source and the detector. In this case, a decrease in the intensity of light in the frequency of the incident photon is seen as the photons are absorbed, then re-emitted in random directions, which are mostly in directions different from the original one. 
This results in an absorption line, since the narrow frequency band of light initially traveling toward the detector has been turned into heat or re-emitted in other directions. By contrast, if the detector sees photons emitted directly from a glowing gas, then the detector often sees photons emitted in a narrow frequency range by quantum emission processes in atoms in the hot gas, resulting in an emission line. In the Sun, Fraunhofer lines are seen from gas in the outer regions of the Sun, which are too cold to directly produce emission lines of the elements they represent. Gases heated to incandescence were found by Bunsen, Kirchhoff and others to emit light with a series of sharp wavelengths. The emitted light analyzed by a spectrometer (or even a simple prism) appears as a multitude of narrow bands of color. These so-called line spectra are characteristic of the atomic composition of the gas. The line spectra of several elements are shown in Figure 1.4.3 . The Balmer Series of Hydrogen Obviously, if any pattern could be discerned in the spectral lines for a specific atom (in contrast to the mixture that Fraunhofer lines represent), that might be a clue as to the internal structure of the atom. One might be able to build a model. A great deal of effort went into analyzing the spectral data from the 1860's on. The big breakthrough was made by Johann Balmer, a math and Latin teacher at a girls' school in Basel, Switzerland. Balmer had done no physics before and made his great discovery when he was almost sixty. Balmer decided that the most likely atom to show simple spectral patterns was the lightest atom, hydrogen. Ångström had measured the four visible spectral lines to have wavelengths 656.21, 486.07, 434.01 and 410.12 nm (Figure 1.4.4 ). Balmer concentrated on just these four numbers, and found they were represented by the phenomenological formula: $\lambda = b \left( \dfrac{n_2^2}{n_2^2 -4} \right) \label{1.4.1}$ where $b$ = 364.56 nm and $n_2 = 3, 4, 5, 6$. 
The first four wavelengths of Equation $\ref{1.4.1}$ (with $n_2$ = 3, 4, 5, 6) were in excellent agreement with the experimental lines from Ångström (Table 1.4.2 ). Balmer predicted that other lines exist in the ultraviolet that correspond to $n_2 \ge 7$ and in fact some of them had already been observed, unbeknown to Balmer.

Table 1.4.2 : The Balmer Series of Hydrogen Emission Lines

$n_2$   $\lambda$ (nm)   color
3       656              red
4       486              teal
5       434              blue
6       410              indigo
7       397              violet
8       389              not visible
9       383              not visible
10      380              not visible

The $n_2$ integer in the Balmer series extends theoretically to infinity and the series represents a monotonically increasing energy (and frequency) of the emission lines with increasing $n_2$ values. Moreover, the energy difference between successive lines decreases as $n_2$ increases (Figure 1.4.4 ). This behavior converges to a highest possible energy as Example 1.4.1 demonstrates. If the lines are plotted according to their $\lambda$ on a linear scale, you will get the appearance of the spectrum in Figure 1.4.4 ; these lines are called the Balmer series. Balmer's general formula (Equation $\ref{1.4.1}$) can be rewritten in terms of the inverse wavelength, typically called the wavenumber ($\widetilde{\nu}$). \begin{align} \widetilde{\nu} &= \dfrac{1}{ \lambda} \\[4pt] &=R_H \left( \dfrac{1}{4} -\dfrac{1}{n_2^2}\right) \label{1.4.2} \end{align} where $n_2 = 3, 4, 5, 6$ and $R_H$ is the Rydberg constant (discussed in the next section) equal to 109,737 cm-1. He further conjectured that the 4 could be replaced by 9, 16, 25, … and this also turned out to be true - but these lines, further into the infrared, were not detected until the early twentieth century, along with the ultraviolet lines. 
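Balmer's fit is easy to reproduce numerically. The following sketch (added for illustration, not from the original text) evaluates Equation \ref{1.4.1} for the four visible lines and compares them with Ångström's measurements:

```python
# Balmer's phenomenological formula: lambda = b * n^2 / (n^2 - 4),
# with b = 364.56 nm, reproduces Angstrom's measured hydrogen lines.
b = 364.56  # nm

def balmer_wavelength(n2):
    return b * n2**2 / (n2**2 - 4)

for n2, measured in [(3, 656.21), (4, 486.07), (5, 434.01), (6, 410.12)]:
    predicted = balmer_wavelength(n2)
    print(f"n2 = {n2}: predicted {predicted:.2f} nm, measured {measured} nm")
```

The predictions agree with the measured wavelengths to within about 0.01 nm, which is what convinced Balmer (and everyone after him) that the formula was not a coincidence.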
The Wavenumber as a Unit of Frequency The relation between wavelength and frequency for electromagnetic radiation is $\lambda \nu= c \nonumber$ In the SI system of units the wavelength ($\lambda$) is measured in meters (m), and since wavelengths are usually very small one often uses the nanometer (nm), which is $10^{-9}\; m$. The frequency ($\nu$) in the SI system is measured in reciprocal seconds (1/s), which is called a hertz (after Heinrich Hertz, the discoverer of the photoelectric effect) and is represented by Hz. It is common to use the reciprocal of the wavelength in centimeters as a measure of the frequency of radiation. This unit is called a wavenumber, is represented by $\widetilde{\nu}$, and is defined by \begin{align*} \widetilde{\nu} &= \dfrac{1}{ \lambda} \\[4pt] &= \dfrac{\nu}{c} \end{align*} \nonumber The wavenumber is a convenient unit in spectroscopy because it is directly proportional to energy: \begin{align*} E &= \dfrac{hc}{\lambda} \\[4pt] &= hc \times \dfrac{1}{\lambda} \\[4pt] &= hc\widetilde{\nu} \\[4pt] &\propto \widetilde{\nu} \end{align*} \nonumber Example 1.4.1 : Balmer Series Calculate the longest and shortest wavelengths (in nm) emitted in the Balmer series of the hydrogen atom emission spectrum. Solution From the behavior of the Balmer equation (Equation $\ref{1.4.1}$ and Table 1.4.2 ), the value of $n_2$ that gives the longest (i.e., greatest) wavelength ($\lambda$) is the smallest value possible of $n_2$, which is ($n_2$=3) for this series. This results in \begin{align*} \lambda_{longest} &= (364.56 \;nm) \left( \dfrac{9}{9 -4} \right) \\[4pt] &= (364.56 \;nm) \left( 1.8 \right) \\[4pt] &= 656.2\; nm \end{align*} \nonumber This is also known as the $H_{\alpha}$ line of atomic hydrogen and is bright red (Figure 1.4.3 ). 
For the shortest wavelength, it should be recognized that the shortest wavelength (greatest energy) is obtained at the limit of greatest ($n_2$): $\lambda_{shortest} = \lim_{n_2 \rightarrow \infty} (364.56 \;nm) \left( \dfrac{n_2^2}{n_2^2 -4} \right) \nonumber$ This can be solved via L'Hôpital's Rule, or alternatively the limit can be expressed via the equally useful energy expression (Equation \ref{1.4.2}) and simply solved: \begin{align*} \widetilde{\nu}_{greatest} &= \lim_{n_2 \rightarrow \infty} R_H \left( \dfrac{1}{4} -\dfrac{1}{n_2^2}\right) \\[4pt] &= R_H \left( \dfrac{1}{4}\right) \\[4pt] &= 27,434 \;cm^{-1} \end{align*} \nonumber Since $\dfrac{1}{\widetilde{\nu}}= \lambda$ in units of cm, this converts to 364 nm as the shortest wavelength possible for the Balmer series. The Balmer series is particularly useful in astronomy because the Balmer lines appear in numerous stellar objects due to the abundance of hydrogen in the universe, and therefore are commonly seen and relatively strong compared to lines from other elements. 
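As a quick numerical check of this series limit (an illustrative sketch, not part of the original text), the limiting wavenumber and wavelength follow directly from $R_H$:

```python
# Balmer series limit: as n2 -> infinity, nu_tilde -> R_H / 4.
R_H = 109737.0  # Rydberg constant, cm^-1

nu_limit = R_H / 4.0           # series-limit wavenumber, cm^-1
lam_limit_nm = 1e7 / nu_limit  # convert cm to nm (1 cm = 1e7 nm)

print(f"{nu_limit:.0f} cm^-1 -> {lam_limit_nm:.1f} nm")
```

Every Balmer emission line therefore lies between roughly 364.5 nm (the series limit) and 656.2 nm (the $H_{\alpha}$ line).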
Learning Objectives • Describe Rydberg's theory for the hydrogen spectra. • Interpret the hydrogen spectrum in terms of the energy states of electrons. In an amazing demonstration of mathematical insight, in 1885 Balmer came up with a simple formula for predicting the wavelength of any of the lines in atomic hydrogen in what we now know as the Balmer series. Three years later, Rydberg generalized this so that it was possible to determine the wavelengths of any of the lines in the hydrogen emission spectrum. Rydberg suggested that all atomic spectra formed families with this pattern (he was unaware of Balmer's work). It turns out that there are families of spectra following Rydberg's pattern, notably in the alkali metals, sodium, potassium, etc., but not with the precision with which the hydrogen atom lines fit the Balmer formula, and low values of $n_2$ predicted wavelengths that deviate considerably. Rydberg's phenomenological equation is as follows: \begin{align} \widetilde{\nu} &= \dfrac{1}{ \lambda} \\[4pt] &=R_H \left( \dfrac{1}{n_1^2} -\dfrac{1}{n_2^2}\right) \label{1.5.1} \end{align} where $R_H$ is the Rydberg constant and is equal to 109,737 cm-1 ($2.18 \times 10^{−18}\, J$) and $n_1$ and $n_2$ are integers (whole numbers) with $n_2 > n_1$. For the Balmer lines, $n_1 =2$ and $n_2$ can be any whole number between 3 and infinity. The various combinations of numbers that can be substituted into this formula allow the calculation of the wavelength of any of the lines in the hydrogen emission spectrum; there is close agreement between the wavelengths generated by this formula and those observed in a real spectrum. Other Series The results given by Balmer and Rydberg for the spectrum in the visible region of the electromagnetic radiation start with $n_2 = 3$, and $n_1=2$. Is there a different series with the following formula (e.g., $n_1=1$)? 
$\dfrac{1}{\lambda} = R_{\textrm H} \left(\dfrac{1}{1^2} - \dfrac{1}{n^2} \right ) \label{1.5.2}$ The values for $n_2$ and wavenumber $\widetilde{\nu}$ for this series would be:

Table 1.5.1 : The Lyman Series of Hydrogen Emission Lines ($n_1=1$)

$n_2$   $\lambda$ (nm)   $\widetilde{\nu}$ (cm-1)
2       121              82,259
3       102              97,530
4       97               102,864
5       94               105,332
...     ...              ...

Do you know in what region of the electromagnetic radiation these lines are? Of course, these lines are in the UV region, and they are not visible, but they are detected by instruments; these lines form the Lyman series. The existence of the Lyman and Balmer series suggests the existence of more series. For example, the series with $n_1 = 3$ and $n_2 = 4, 5, 6, 7, ...$ is called the Paschen series. Other Series The spectral lines are grouped into series according to $n_1$ values. Lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series. For example, the ($n_1=1/n_2=2$) line is called "Lyman-alpha" (Ly-$\alpha$), while the ($n_1=3/n_2=7$) line is called "Paschen-delta" (Pa-$δ$). The first six series have specific names: • Lyman series with $n_1 = 1$ • Balmer series with $n_1 = 2$ • Paschen series (or Bohr series) with $n_1 = 3$ • Brackett series with $n_1 = 4$ • Pfund series with $n_1 = 5$ • Humphreys series with $n_1 = 6$ Example 1.5.1 : The Lyman Series The so-called Lyman series of lines in the emission spectrum of hydrogen corresponds to transitions from various excited states to the n = 1 orbit. Calculate the wavelength of the lowest-energy line in the Lyman series to three significant figures. In what region of the electromagnetic spectrum does it occur? Given: lowest-energy orbit in the Lyman series Asked for: wavelength of the lowest-energy Lyman line and corresponding region of the spectrum Strategy: 1. Substitute the appropriate values into Equation $\ref{1.5.1}$ (the Rydberg equation) and solve for $\lambda$. 2. 
Locate the region of the electromagnetic spectrum corresponding to the calculated wavelength. Solution: We can use the Rydberg equation (Equation \ref{1.5.1}) to calculate the wavelength: $\dfrac{1}{\lambda }=R_H \left ( \dfrac{1}{n_{1}^{2}} - \dfrac{1}{n_{2}^{2}}\right ) \nonumber$ A For the Lyman series, $n_1 = 1$. \begin{align*} \dfrac{1}{\lambda } &=R_H \left ( \dfrac{1}{n_{1}^{2}} - \dfrac{1}{n_{2}^{2}}\right ) \\[4pt] &=1.097 \times 10^{7}\, m^{-1}\left ( \dfrac{1}{1}-\dfrac{1}{4} \right )\\[4pt] &= 8.228 \times 10^{6}\; m^{-1} \end{align*} \nonumber Spectroscopists often talk about energy and frequency as equivalent. The cm-1 unit (wavenumbers) is particularly convenient. We can convert the answer in part A to cm-1. \begin{align*} \widetilde{\nu} &=\dfrac{1}{\lambda } \\[4pt] &= 8.228\times 10^{6}\cancel{m^{-1}}\left (\dfrac{\cancel{m}}{100\;cm} \right ) \\[4pt] &= 82,280\: cm^{-1} \end{align*} \nonumber and $\lambda = 1.215 \times 10^{−7}\; m = 122\; nm \nonumber$ This emission line is called Lyman alpha and is the strongest atomic emission line from the sun and drives the chemistry of the upper atmosphere of all the planets, producing ions by stripping electrons from atoms and molecules. It is completely absorbed by oxygen in the upper stratosphere, dissociating O2 molecules to O atoms which react with other O2 molecules to form stratospheric ozone. B This wavelength is in the ultraviolet region of the spectrum. Exercise 1.5.1 : The Pfund Series The Pfund series of lines in the emission spectrum of hydrogen corresponds to transitions from higher excited states to the $n_1 = 5$ level. Calculate the wavelength of the second line in the Pfund series to three significant figures. In which region of the spectrum does it lie? Answer $4.65 \times 10^3\, nm$; infrared The above discussion presents only a phenomenological description of hydrogen emission lines and fails to provide a probe of the nature of the atom itself. 
Clearly a continuum model based on classical mechanics is not applicable, and as the next Section demonstrates, a simple connection between spectra and atomic structure can be formulated.
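A short numerical sketch (added for illustration, not from the original text) shows how Equation \ref{1.5.1} generates any line in the hydrogen spectrum, including the Lyman-alpha line of Example 1.5.1 and the Pfund-series exercise:

```python
# Rydberg formula: 1/lambda = R_H * (1/n1^2 - 1/n2^2), with R_H in cm^-1.
R_H = 109737.0  # cm^-1

def rydberg_wavelength_nm(n1, n2):
    """Wavelength (nm) of the hydrogen emission line for the transition n2 -> n1."""
    nu_tilde = R_H * (1.0 / n1**2 - 1.0 / n2**2)  # wavenumber, cm^-1
    return 1e7 / nu_tilde                          # cm -> nm

lyman_alpha = rydberg_wavelength_nm(1, 2)   # ~122 nm, ultraviolet
pfund_second = rydberg_wavelength_nm(5, 7)  # ~4650 nm, infrared
print(f"Lyman-alpha: {lyman_alpha:.0f} nm")
print(f"Second Pfund line: {pfund_second:.0f} nm")
```

Because the wavenumber difference between levels shrinks as $n_2$ grows, each series piles up against its own short-wavelength limit at $R_H/n_1^2$.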
Learning Objectives • To introduce how the wave-particle duality of light extends to matter • To describe how matter (e.g., electrons and protons) can exhibit wavelike properties, e.g., interference and diffraction patterns • To use algebra to find the de Broglie wavelength or momentum of a particle when either one of these quantities is given The next real advance in understanding the atom came from an unlikely quarter - a student prince in Paris. Prince Louis de Broglie was a member of an illustrious family, prominent in politics and the military since the 1600's. Louis began his university studies with history, but his elder brother Maurice studied x-rays in his own laboratory, and Louis became interested in physics. After World War I, de Broglie focused his attention on Einstein's two major achievements, the theory of special relativity and the quantization of light waves. He wondered if there could be some connection between them. Perhaps the quantum of radiation really should be thought of as a particle. De Broglie suggested that if waves (photons) could behave as particles, as demonstrated by the photoelectric effect, then the converse, namely that particles could behave as waves, should be true. He associated a wavelength $\lambda$ to a particle with momentum $p$ using Planck's constant as the constant of proportionality: $\lambda =\dfrac{h}{p} \label{1.6.1}$ which is called the de Broglie wavelength. The fact that particles can behave as waves but also as particles, depending on which experiment you perform on them, is known as the wave-particle duality. Deriving the de Broglie Wavelength From the discussion of the photoelectric effect, we have the first part of the particle-wave duality, namely, that electromagnetic waves can behave like particles. These particles are known as photons, and they move at the speed of light. Any particle that moves at or near the speed of light has kinetic energy given by Einstein's special theory of relativity. 
In general, a particle of mass $m$ and momentum $p$ has an energy $E=\sqrt{p^2 c^2+m^2 c^4} \label{1.6.2}$ Note that if $p=0$, this reduces to the famous rest-energy expression $E=mc^2$. However, photons are massless particles (technically rest-massless) that always have a finite momentum $p$. In this case, Equation \ref{1.6.2} becomes $E=pc. \nonumber$ From Planck's hypothesis, one quantum of electromagnetic radiation has energy $E=h\nu$. Thus, equating these two expressions for the kinetic energy of a photon, we have $h\nu =\dfrac{hc}{\lambda}=pc \label{1.6.4}$ Solving for the wavelength $\lambda$ gives Equation \ref{1.6.1}: $\lambda=\dfrac{h}{p}= \dfrac{h}{mv} \nonumber$ where $v$ is the velocity of the particle. Hence, de Broglie argued that if particles can behave as waves, then a relationship like this, which pertains particularly to waves, should also apply to particles. Equation \ref{1.6.1} allows us to associate a wavelength $\lambda$ to a particle with momentum $p$. As the momentum increases, the wavelength decreases; i.e., short wavelengths and high momenta correspond to high energies. It is a common feature of quantum mechanics that particles and waves with short wavelengths correspond to high energies and vice versa. Having decided that the photon might well be a particle with a rest mass, even if very small, it dawned on de Broglie that in other respects it might not be too different from other particles, especially the very light electron. In particular, maybe the electron also had an associated wave. The obvious objection was that if the electron was wavelike, why had no diffraction or interference effects been observed? But there was an answer. If de Broglie's relation between momentum and wavelength also held for electrons, the wavelength was sufficiently short that these effects would be easy to miss. 
As de Broglie himself pointed out, the wave nature of light is not very evident in everyday life. As the next section will demonstrate, the validity of de Broglie’s proposal was confirmed by electron diffraction experiments of G.P. Thomson in 1926 and of C. Davisson and L. H. Germer in 1927. In these experiments it was found that electrons were scattered from atoms in a crystal and that these scattered electrons produced an interference pattern. These diffraction patterns are characteristic of wave-like behavior and are exhibited by both electrons (i.e., matter) and electromagnetic radiation (i.e., light). Example 1.6.1 : Electron Waves Calculate the de Broglie wavelength for an electron with a kinetic energy of 1000 eV. Solution To calculate the de Broglie wavelength (Equation \ref{1.6.1}), the momentum of the particle must be established and requires knowledge of both the mass and velocity of the particle. The mass of an electron is $9.109383 \times 10^{−28}\; g$ and the velocity is obtained from the given kinetic energy of 1000 eV: \begin{align*} KE &= \dfrac{mv^2}{2} \\[4pt] &= \dfrac{p^2}{2m} = 1000 \;eV \end{align*} \nonumber Solve for momentum $p = \sqrt{2 m KE} \nonumber$ convert to SI units $p = \sqrt{(1000 \; \cancel{eV}) \left( \dfrac{1.6 \times 10^{-19} \; J}{1\; \cancel{ eV}} \right) (2) (9.109383 \times 10^{-31}\; kg)} \nonumber$ expanding the definition of the joule into base SI units and canceling \begin{align*} p &= \sqrt{(3.2 \times 10^{-16} \;kg \cdot m^2/s^2 ) (9.109383 \times 10^{-31}\; kg)} \\[4pt] &= \sqrt{ 2.9 \times 10^{-46}\, kg^2 \;m^2/s^2 } \\[4pt] &= 1.7 \times 10^{-23}\; kg \cdot m/s \end{align*} \nonumber Now substitute the momentum into the equation for de Broglie's wavelength (Equation $\ref{1.6.1}$) with Planck's constant ($h = 6.626069 \times 10^{−34}\;J \cdot s$). 
After expanding units in Planck's constant \begin{align*} \lambda &=\dfrac{h}{p} \\[4pt] &= \dfrac{6.626069 \times 10^{−34}\;kg \cdot m^2/s}{1.7 \times 10^{-23}\; kg \cdot m/s} \\[4pt] &= 3.9 \times 10^{-11}\; m \\[4pt] &= 39\; pm \end{align*} \nonumber Exercise 1.6.1 : Baseball Waves Calculate the de Broglie wavelength for a fast ball thrown at 100 miles per hour and weighing 4 ounces. Comment on whether the wave properties of baseballs could be experimentally observed. Answer Following the unit conversions below, a 4 oz baseball has a mass of 0.11 kg. The velocity of a fast ball thrown at 100 miles per hour in m/s is 44.7 m/s. $m = \left(4 \; \cancel{oz}\right)\left(\frac{0.0283 \; kg}{1 \; \cancel{oz}}\right) = 0.11\; kg \nonumber$ $v = \left(\frac{100 \; \cancel{mi}}{\cancel{hr}}\right) \left(\frac{1609.34 \; m}{\cancel{mi}}\right) \left( \frac{1 \; \cancel{hr}}{3600 \; s}\right) = 44.7 \; m/s \nonumber$ The de Broglie wavelength of this fast ball is: $\lambda = \frac{h}{mv} = \frac{6.626069 \times 10^{-34}\;kg \cdot m^2/s}{(0.11 \; kg)(44.7 \;m/s)} = 1.3 \times 10^{-34}\; m \nonumber$ This wavelength is many orders of magnitude smaller than the ball itself (or even a single nucleus), so the wave properties of a baseball could never be observed experimentally. Exercise 1.6.2 : Electrons vs. Protons If an electron and a proton have the same velocity, which would have the longer de Broglie wavelength? 1. The electron 2. The proton 3. They would have the same wavelength Answer Equation \ref{1.6.1} shows that the de Broglie wavelength of a particle's matter wave is inversely proportional to its momentum (mass times velocity). Therefore the smaller-mass particle will have the smaller momentum and the longer wavelength. The electron is the lighter particle and will have the longer wavelength. This was the prince's Ph.D. thesis, presented in 1924. His thesis advisor was somewhat taken aback, and was not sure if this was sound work. He asked de Broglie for an extra copy of the thesis, which he sent to Einstein. Einstein wrote shortly afterwards: "I believe it is a first feeble ray of light on this worst of our physics enigmas" and the prince got his Ph.D.
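Both of these results are quick to verify numerically. The sketch below (added for illustration, not from the original text) computes the de Broglie wavelengths of the 1000-eV electron and the fast ball:

```python
import math

# de Broglie wavelength: lambda = h / p.
h = 6.626069e-34   # Planck's constant, J*s
eV = 1.6e-19       # joules per electron volt

# Electron with KE = 1000 eV: p = sqrt(2 m KE)
m_e = 9.109383e-31               # electron mass, kg
p_electron = math.sqrt(2 * m_e * 1000 * eV)
lam_electron = h / p_electron    # ~3.9e-11 m (~39 pm)

# 4-oz (0.11 kg) baseball at 100 mph (44.7 m/s)
p_ball = 0.11 * 44.7
lam_ball = h / p_ball            # ~1.3e-34 m, far too small to observe

print(f"electron: {lam_electron:.2e} m")
print(f"baseball: {lam_ball:.2e} m")
```

The 23-orders-of-magnitude gap between the two wavelengths is the whole story of why electrons diffract off crystals while baseballs never visibly do.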
Learning Objectives • To present the experimental evidence behind the wave-particle duality of matter The validity of de Broglie’s proposal was confirmed by electron diffraction experiments of G.P. Thomson in 1926 and of C. Davisson and L. H. Germer in 1927. In these experiments it was found that electrons were scattered from atoms in a crystal and that these scattered electrons produced an interference pattern. The interference pattern was just like that produced when water waves pass through two holes in a barrier to generate separate wave fronts that combine and interfere with each other. These diffraction patterns are characteristic of wave-like behavior and are exhibited by both matter (e.g., electrons and neutrons) and electromagnetic radiation. Diffraction patterns are obtained if the wavelength is comparable to the spacing between scattering centers. Diffraction occurs when waves encounter obstacles whose size is comparable with their wavelength. Continuing with our analysis of experiments that led to the new quantum theory, we now look at the phenomenon of electron diffraction. Diffraction of Light (Light as a Wave) It is well-known that light has the ability to diffract around objects in its path, leading to an interference pattern that is particular to the object. This is, in fact, how holography works (the interference pattern is created by allowing the diffracted light to interfere with the original beam so that the hologram can be viewed by shining the original beam on the image). A simple illustration of light diffraction is the Young double slit experiment (Figure 1.7.1 ). Here, light as waves (pictured as waves in a plane parallel to the double slit apparatus) impinge on the two slits. Each slit then becomes a point source for spherical waves that subsequently interfere with each other, giving rise to the light and dark fringes on the screen at the right. 
The double-slit experiments are a direct demonstration of wave phenomena via observed interference. These types of experiment were first performed by Thomas Young in 1801, as a demonstration of the wave behavior of light. In the basic version of this experiment, a light source illuminates a plate pierced by two parallel slits, and the light passing through the two slits is observed on a screen behind the plate (Figures 1.7.1 and 1.7.2 ). The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles (Figure 1.7.2 ). However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). Interference is a wave phenomenon in which two (or more) waves superimpose to form a resultant wave of greater or lower amplitude. It is the primary property used to identify wave behavior. Diffraction of Electrons (Electrons as Waves) According to classical physics, electrons should behave like particles - they travel in straight lines and do not curve in flight unless acted on by an external agent, like a magnetic field. In this model, if we fire a beam of electrons through a double slit onto a detector, we should get two bands of "hits", much as you would get if you fired a machine gun at the side of a house with two windows - you would get two areas of bullet-marked wall inside, and the rest would be intact (Figure 1.7.3 (left) and Figure 1.7.2 ). 
However, if the slits are made small enough and close enough together, we actually observe that the electrons are diffracting through the slits and interfering with each other just like waves (Figure 1.7.3 (right) and Figure 1.7.2 a,b). This means that the electrons have wave-particle duality, just like photons, in agreement with de Broglie's hypothesis discussed previously. In this case, they must have properties like wavelength and frequency. We can deduce the properties from the behavior of the electrons as they pass through our diffraction grating. This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, the Davisson–Germer experiment showed the wave nature of matter, and completed the theory of wave-particle duality. For physicists this idea was important because it meant that not only could any particle exhibit wave characteristics, but that one could use wave equations to describe phenomena in matter if one used the de Broglie wavelength. An electron microscope uses a beam of accelerated electrons as a source of illumination. Since the wavelength of electrons can be up to 100,000 times shorter than that of visible light photons, electron microscopes have a higher resolving power than light microscopes and can reveal the structure of smaller objects. A transmission electron microscope can achieve better than 50 pm resolution and magnifications of up to about 10,000,000x, whereas most light microscopes are limited by diffraction to about 200 nm resolution and useful magnifications below 2000x (Figure 1.7.4 ). Is Matter a Particle or a Wave? An electron, indeed any particle, is neither a particle nor a wave. Describing the electron as a particle is a mathematical model that works well in some circumstances, while describing it as a wave is a different mathematical model that works well in other circumstances. 
When you choose to do some calculation of the electron's behavior that treats it either as a particle or as a wave, you're not saying the electron is a particle or is a wave: you're just choosing the mathematical model that makes it easiest to do the calculation. Neutron Diffraction (Neutrons as Waves) Like all quantum particles, neutrons can also exhibit wave phenomena, and if their wavelength is short enough, atoms or their nuclei can serve as diffraction obstacles. When a beam of neutrons emanating from a reactor is slowed down and selected properly by speed, the neutron wavelength lies near one angstrom (0.1 nanometer), the typical separation between atoms in a solid material. Such a beam can then be used to perform a diffraction experiment. Neutrons interact directly with the nucleus of the atom, and the contribution to the diffracted intensity depends on each isotope; for example, regular hydrogen and deuterium contribute differently. It is also often the case that light (low Z) atoms contribute strongly to the diffracted intensity, even in the presence of large Z atoms. Example 1.7.1 : Neutron Diffraction Neutrons have no electric charge, so they do not interact with the atomic electrons. Hence, they are very penetrating (e.g., typically 10 cm in lead). Neutron diffraction was proposed in 1934, to exploit de Broglie’s hypothesis about the wave nature of matter. Calculate the momentum and kinetic energy of a neutron whose wavelength is comparable to atomic spacing ($1.8 \times 10^{-10}\, m$). Solution This is a simple use of de Broglie’s equation $\lambda = \dfrac{h}{p} \nonumber$ where we recognize that the wavelength of the neutron must be comparable to atomic spacing (assumed equal for convenience, so $\lambda = 1.8 \times 10^{-10}\, m$).
Rearranging the de Broglie wavelength relationship above to solve for momentum ($p$): \begin{align} p &= \dfrac{h}{\lambda} \nonumber \\[4pt] &= \dfrac{6.6 \times 10^{-34}\, J\, s}{1.8 \times 10^{-10}\, m} \nonumber \\[4pt] &= 3.7 \times 10^{-24}\, kg \,\,m\, \,s^{-1} \nonumber \end{align} \nonumber The relationship for kinetic energy is $KE = \dfrac{1}{2} mv^2 = \dfrac{p^2}{2m} \nonumber$ where $v$ is the velocity of the particle. From the reference table of physical constants, the mass of a neutron is $1.6749273 \times 10^{−27}\, kg$, so \begin{align*} KE &= \dfrac{(3.7 \times 10^{-24}\, kg \,\,m\, \,s^{-1} )^2}{2 (1.6749273 \times 10^{−27}\, kg)} \\[4pt] &=4.0 \times 10^{-21} J \end{align*} \nonumber The neutrons released in nuclear fission are ‘fast’ neutrons, i.e. much more energetic than this. Their wavelengths will be much smaller than atomic dimensions, so they are not useful for neutron diffraction. We slow down these fast neutrons by introducing a "moderator", which is a material (e.g., graphite) that neutrons can penetrate but in which they slow down appreciably.
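The arithmetic in this worked example is easy to check with a short script (a sketch; the constant values are standard, and the wavelength is the assumed atomic spacing from the problem statement):

```python
# Check of Example 1.7.1: momentum and kinetic energy of a neutron
# whose de Broglie wavelength matches a typical atomic spacing.
h = 6.62607015e-34      # Planck constant, J s
m_n = 1.6749273e-27     # neutron mass, kg
lam = 1.8e-10           # wavelength ~ atomic spacing, m (assumed)

p = h / lam             # de Broglie relation: p = h / lambda
KE = p**2 / (2 * m_n)   # kinetic energy from momentum

print(f"p  = {p:.3e} kg m/s")   # ~3.7e-24 kg m/s
print(f"KE = {KE:.3e} J")       # ~4.0e-21 J
```

The result, about $4 \times 10^{-21}\,J$, is comparable to thermal energies, which is why reactor neutrons must be moderated to thermal speeds before they are useful for diffraction.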
Learning Objectives • Introduce the fundamentals behind the Bohr Atom and demonstrate that it can predict Rydberg's equation for the atomic spectrum of hydrogen Rutherford's Failed Planetary Atom Ernest Rutherford had proposed a model of atoms based on the $\alpha$-particle scattering experiments of Hans Geiger and Ernest Marsden. In these experiments helium nuclei ($\alpha$-particles) were shot at thin gold metal foils. Most of the particles were not scattered; they passed unchanged through the thin metal foil. Some of the few that were scattered were scattered in the backward direction; i.e. they recoiled. This backward scattering requires that the foil contain heavy particles. When an $\alpha$-particle hits one of these heavy particles it simply recoils backward, just like a ball thrown at a brick wall. Since most of the α-particles don’t get scattered, the heavy particles (the nuclei of the atoms) must occupy only a very small region of the total space of the atom. Most of the space must be empty or occupied by very low-mass particles. These low-mass particles are the electrons that surround the nucleus. There are some basic problems with the Rutherford model. The Coulomb force that exists between oppositely charged particles means that a positive nucleus and negative electrons should attract each other, and the atom should collapse. To prevent the collapse, the electron was postulated to be orbiting the positive nucleus. The Coulomb force (discussed below) is used to change the direction of the velocity, just as a string pulls a ball in a circular orbit around your head or the gravitational force holds the moon in orbit around the Earth. This hypothesis seems plausible because of the similarity between gravitational and Coulombic interactions.
The expression for the force of gravity between two masses (Newton's Law of gravity) is $F_{gravity} \propto \dfrac{m_1m_2}{r^2}\label{1.8.1}$ with $m_1$ and $m_2$ representing the mass of object 1 and 2, respectively, and $r$ representing the distance between the objects' centers. The expression for the Coulomb force between two charged species is $F_{Coulomb} \propto \dfrac{Q_1Q_2}{r^2}\label{1.8.2}$ with $Q_1$ and $Q_2$ representing the charge of object 1 and 2, respectively, and $r$ representing the distance between the objects' centers. However, this analogy has a problem too. An electron going around in a circle is constantly being accelerated because its velocity vector is changing. A charged particle that is being accelerated emits radiation. This property is essentially how a radio transmitter works. A power supply drives electrons up and down a wire and thus transmits energy (electromagnetic radiation) that your radio receiver picks up. The radio then plays the music for you that is encoded in the waveform of the radiated energy. If the orbiting electron is generating radiation, it is losing energy. If an orbiting particle loses energy, the radius of the orbit decreases. To conserve angular momentum, the frequency of the orbiting electron increases. The frequency increases continuously as the electron collapses toward the nucleus. Since the frequency of the rotating electron and the frequency of the radiation that is emitted are the same, both change continuously to produce a continuous spectrum and not the observed discrete lines. Furthermore, if one calculates how long it takes for this collapse to occur, one finds that it takes about $10^{‑11}$ seconds. This means that nothing in the world based on the structure of atoms could exist for longer than about $10^{-11}$ seconds. Clearly something is terribly wrong with this classical picture, which means that something was missing at that time from the known laws of physics.
Conservative Forces can be explained with Potentials A conservative force is dependent only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point. When an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken. The potential can be constructed as simple derivatives for 1-D forces: $F = -\dfrac{dV}{dx} \nonumber$ or as gradients in 3-D forces $F = -\nabla V \nonumber$ where $\nabla$ is the vector of partial derivatives $\nabla = \left ( \dfrac{\partial}{\partial x}, \dfrac{\partial}{\partial y}, \dfrac{\partial}{\partial z} \right) \nonumber$ The most familiar conservative forces are gravity and Coulombic forces. The Coulomb force law (Equation $\ref{1.8.2}$) comes from the corresponding Coulomb potential (sometimes called the electrostatic potential) $V(r)=\dfrac{kQ_1 Q_2}{r} \label{1.8.5}$ and it can be easily verified that the Coulombic force from this interaction ($F(r)$) is $F(r)=-\dfrac{dV}{dr} \label{1.8.6}$ As $r$ is varied, the energy will change, so that we have an example of a potential energy curve $V(r)$ (Figure $\PageIndex{2; left}$). If $Q_1$ and $Q_2$ have the same sign, then the curve is a purely repulsive potential, i.e., the energy increases monotonically as the charges are brought together and decreases monotonically as they are separated. From this, it is easy to see that like charges (charges of the same sign) repel each other. If the charges are of opposite sign, then the curve appears roughly as in Figure $\PageIndex{2; right}$ and is a purely attractive potential. Thus, the energy decreases as the charges are brought together, implying that opposite charges attract. The Bohr Model It is observed that the line spectra discussed in the previous sections show that hydrogen atoms absorb and emit light at only discrete wavelengths.
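The relation $F(r) = -dV/dr$ for the Coulomb potential can be illustrated numerically with a central finite difference (a sketch; the two elementary charges and the 1 Å separation are arbitrary illustrative choices, not values from the text):

```python
k = 8.9875517873681764e9   # Coulomb constant, N m^2 / C^2
Q1 = Q2 = 1.602176634e-19  # two elementary charges (illustrative choice)

def V(r):
    """Coulomb potential energy, V(r) = k Q1 Q2 / r."""
    return k * Q1 * Q2 / r

def F_analytic(r):
    """Coulomb force magnitude, F(r) = k Q1 Q2 / r^2."""
    return k * Q1 * Q2 / r**2

r, dr = 1.0e-10, 1.0e-15
F_numeric = -(V(r + dr) - V(r - dr)) / (2 * dr)  # F = -dV/dr, central difference

print(F_numeric, F_analytic(r))  # the two values agree closely
```

With like charges the force comes out positive (repulsive, pointing toward larger $r$), matching the monotonically rising curve described above.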
This observation is connected to the discrete nature of the allowed energies of a quantum mechanical system. Quantum mechanics postulates that, in contrast to classical mechanics, the energy of a system can only take on certain discrete values. This leaves us with the question: How do we determine what these allowed discrete energy values are? After all, it seems that Planck's formula for the allowed energies came out of nowhere. The model we will describe here, due to Niels Bohr in 1913, is an early attempt to predict the allowed energies for single-electron atoms such as $\ce{H}$, $\ce{He^{+}}$, $\ce{Li^{2+}}$, $\ce{Be^{3+}}$, etc. Although Bohr's reasoning relies on classical concepts and hence, is not a correct explanation, the reasoning is interesting, and so we examine this model for its historical significance. Consider a nucleus with charge $+Ze$ and one electron orbiting the nucleus. In this analysis, we will use another representation of the constant $k$ in Coulomb's law (Equation $\ref{1.8.5}$), which is more commonly represented in the form: $k=\dfrac{1}{4\pi \epsilon_0} \label{1.8.7}$ where $\epsilon_0$ is known as the permittivity of free space with the numerical value $\epsilon_0 = 8.8541878\times 10^{-12} \ C^2 J^{-1} m^{-1}$. The total energy of the electron (the nucleus is assumed to be fixed in space at the origin) is the sum of kinetic and potential energies: $E_{total}=\underset{\text{kinetic energy}}{\dfrac{p^2}{2m_e}} - \underset{\text{potential energy}}{\dfrac{Ze^2}{4\pi \epsilon_0 r}} \nonumber$ The force on the electron is $\vec{F}=-\dfrac{Ze^2}{4\pi \epsilon_0 r^3}\vec{r} \nonumber$ and its magnitude is $F=|\vec{F}|=\dfrac{Ze^2}{4\pi \epsilon_0 r^3}|\vec{r}|=\dfrac{Ze^2}{4\pi \epsilon_0 r^2} \nonumber$ Since $\vec{F}=m_e \vec{a}$, it follows that the magnitudes satisfy $|\vec{F}|=m_e |\vec{a}|$. If we assume that the orbit is circular, then the acceleration is purely centripetal, so $|a|=\dfrac{v^2}{r} \nonumber$ where $v$ is the velocity of the electron.
Equating force $|F|$ to $m_e |a|$, we obtain $\dfrac{Ze^2}{4\pi \epsilon_0 r^2}=m_e\dfrac{v^2}{r} \nonumber$ or $\dfrac{Ze^2}{4\pi \epsilon_0}=m_e v^2 r \nonumber$ or $\dfrac{Ze^2 m_e r}{4\pi \epsilon_0}=(m_e vr)^2 \label{1.8.14}$ The reason for writing the equation this way is that the quantity $m_e vr$ is the classical orbital angular momentum of the electron. Bohr was familiar with Maxwell's theory of classical electromagnetism and knew that in a classical theory, the orbiting electron should radiate energy away and eventually collapse into the nucleus (Figure 1.8.1 ). He circumvented this problem by following Planck's idea underlying blackbody radiation and positing that the orbital angular momentum $m_e vr$ of the electron could only take on specific values $m_e vr=n\hbar\label{1.8.15}$ with $n=1,2,3,...$. Note that the electron must be in motion, so $n=0$ is not allowed. Substituting Equation $\ref{1.8.15}$ into Equation $\ref{1.8.14}$, we find $\dfrac{Ze^2 m_e r}{4\pi \epsilon_0}=n^2 \hbar^2 \label{1.8.16}$ Equation \ref{1.8.16} implies that orbits could only have certain allowed radii \begin{align}r_n &= \dfrac{4\pi \epsilon_0 \hbar^2}{Ze^2 m_e}n^2 \\[4pt] &=\dfrac{a_0}{Z}n^2 \label{1.8.16B} \end{align} with $n=1,2,3,...$. The collection of constants has been defined to be $a_0$ $a_0=\dfrac{4\pi \epsilon_0 \hbar^2}{e^2 m_e} \label{1.8.17}$ a quantity that is known as the Bohr radius. We can also calculate the allowed momenta since $m_e vr=n\hbar$, and $p=m_e v$.
Thus, \begin{align}p_n r_n &=n\hbar\\[4pt] p_n &=\dfrac{n\hbar}{r_n}\\[4pt] &=\dfrac{\hbar Z}{a_0 n} \\[4pt] &= \dfrac{Ze^2 m_e}{4\pi \epsilon_0 \hbar n}\end{align} \label{1.8.18} From $p_n$ and $r_n$, we can calculate the allowed energies from $E_n=\dfrac{p^2_n}{2m_e}-\dfrac{Ze^2}{4\pi \epsilon_0 r_n} \label{1.8.19}$ Substituting in the expressions for $p_n$ and $r_n$ and simplifying gives $E_n=-\dfrac{Z^2 e^4 m_e}{32\pi^2 \epsilon_{0}^{2}\hbar^2}\dfrac{1}{n^2}=-\dfrac{e^4 m_e}{8 \epsilon_{0}^{2}h^2}\dfrac{Z^2}{n^2} \label{1.8.20}$ We can redefine a new energy scale by defining the Rydberg as $1 \ Ry = \dfrac{e^4 m_e}{8\epsilon_{0}^{2} h^2} =2.18\times 10^{-18} \ J. \nonumber$ and this simplifies the allowed energies predicted by the Bohr model (Equation \ref{1.8.20}) to $E_n=-(2.18\times 10^{-18})\dfrac{Z^2}{n^2} \ J=-\dfrac{Z^2}{n^2} \ R_y \label{1.8.21}$ Hence, the energy of the electron in an atom also is quantized. Equation $\ref{1.8.21}$ gives the energies of the electronic states of the hydrogen atom. It is very useful in analyzing spectra to represent these energies graphically in an energy-level diagram. An energy-level diagram has energy plotted on the vertical axis with a horizontal line drawn to locate each energy level (Figure 1.8.4 ). These turn out to be the correct energy levels, apart from small corrections that cannot be accounted for in this pseudo-classical treatment. Despite the fact that the energies are essentially correct, the Bohr model masks the true quantum nature of the electron, which only emerges from a fully quantum mechanical analysis. Exercise 1.8.1 Calculate a value for the Bohr radius using Equation $\ref{1.8.16}$ to check that this equation is consistent with the value 52.9 pm. What would the radius be for $n = 1$ in the $\ce{Li^{2+}}$ ion?
Answer Starting from Equation \ref{1.8.16} and solving for $r$: \begin{align*} \dfrac{Ze^2m_er}{4πϵ_0} &=n^2ℏ^2 \\ r &=\dfrac{4 n^2 \hbar^2 πϵ_0}{Z e^2 m_e} \end{align*} \nonumber with • $e$ is the fundamental charge: $e=1.60217662 \times 10^{-19}\,C$ • $m_e$ is the mass of an electron: $m_e= 9.10938356 \times 10^{-31}kg$ • $\epsilon_o$ is the permittivity of free space: $\epsilon_o = 8.854 \times 10^{-12}C^2N^{-1}m^{-2}$ • $\hbar$ is the reduced Planck constant: $\hbar=1.0546 \times 10^{-34}m^2kg/s$ For the ground-state of the hydrogen atom: $Z=1$ and $n=1$. \begin{align*} r &=\dfrac{4 \hbar^2 πϵ_0}{e^2m_e} \\ &= \dfrac{4 (1.0546 \times 10^{-34}m^2kg/s)^2 \times π \times 8.854 \times 10^{-12}C^2N^{-1}m^{-2}}{(1.60217662 \times 10^{-19}C)^2(9.10938356 \times 10^{-31}kg)} \\ &=5.29 \times 10^{-11}m = 52.9\, pm\end{align*} \nonumber For the ground-state of the lithium +2 ion: $Z=3$ and $n=1$ \begin{align*} r &=\dfrac{4 \hbar^2 πϵ_0}{3 e^2m_e} \\ &= \dfrac{4 (1.0546 \times 10^{-34}m^2kg/s)^2 \times π \times 8.854\times10^{-12}C^2N^{-1}m^{-2}}{3(1.60217662 \times 10^{-19}C)^2(9.10938356 \times 10^{-31}kg)} \\ &=1.76 \times 10^{-11}m = 17.6 \,pm\end{align*} \nonumber As expected, the $\ce{Li^{2+}}$ ion has a smaller radius than the $\ce{H}$ atom because of the increased nuclear charge. Exercise 1.8.2 : Rydberg states How do the radii of the hydrogen orbits vary with $n$? Prepare a graph showing $r$ as a function of $n$. States of hydrogen atoms with $n = 200$ have been prepared (called Rydberg states). What is the diameter of the atoms in these states? Answer This is a straightforward application of Equation \ref{1.8.16B}. The hydrogen atom has only certain allowable radii and these radii can be predicted from the equation that relates them with each $n$. Note that the electron must be in motion so $n = 0$ is not allowed.
Knowing $4 \pi \epsilon_{0}=1.113 \times 10^{-10} \mathrm{C}^{2} \mathrm{J}^{-1} \mathrm{m}^{-1}$ and $\hbar=1.054 \times 10^{-34} \mathrm{J}\, \mathrm{s}$, along with \begin{aligned} e &=1.602 \times 10^{-19}\, \mathrm{C} \\ m_{e} &=9.109 \times 10^{-31}\, \mathrm{kg} \end{aligned} \nonumber and $Z$, the nuclear charge, we use this equation directly. A simplification can be made by taking advantage of the fact that $a_{0}=\frac{4 \pi \epsilon_{0} \hbar^{2}}{e^{2} m_{e}} \nonumber$ resulting in $r_{n}=\frac{a_{0}}{Z} n^{2} \nonumber$ where $a_{0}=5.292 \times 10^{-11} \mathrm{m}$, which is the Bohr radius. Suppose we want to find the radius for $n=200$; then $n^{2}=40000$, and plugging in directly we have \begin{align*} r_{n} &=\frac{\left(5.292 \times 10^{-11}\right)}{(1)}(40000) \\[4pt] &=2.117 \times 10^{-6}\, m \end{align*} \nonumber for the radius of a hydrogen atom with an electron excited to the $n=200$ state. The diameter is then $4.234 \times 10^{-6} \mathrm{m}$. The Wave Argument for Quantization The above discussion is based on a classical picture of an orbiting electron, with the quantization coming from the angular momentum requirement (Equation $\ref{1.8.15}$) lifted from Planck's quantization arguments. Hence, only certain trajectories (with differing radii) are stable. However, as discussed previously, the electron will also have a wavelike property with a de Broglie wavelength $\lambda$ $\lambda = \dfrac{h}{p} \nonumber$ Hence, a larger momentum $p$ implies a shorter wavelength. Since $p_n$ decreases as $n$ increases (Equation $\ref{1.8.18}$), the wavelength must increase with $n$; this is a common feature in quantum mechanics and will be often observed. In the Bohr atom, the circular symmetry and the wave property of the electron require that the electron waves have an integer number of wavelengths (Figure $\PageIndex{1A}$).
If not, then the waves will overlap imperfectly and cancel out (i.e., the electron will cease to exist) as demonstrated in Figure $\PageIndex{1B}$. A more detailed discussion of the effect of electron waves in atoms will be discussed in the following chapters. Derivation of the Rydberg Equation from Bohr Model Given a prediction of the allowed energies of a system, how could we go about verifying them? The general experimental technique known as spectroscopy permits us to probe the various differences between the allowed energies. Thus, if the prediction of the actual energies, themselves, is correct, we should also be able to predict these differences. Let us assume that we are able to place the electron in Bohr's hydrogen atom into an energy state $E_n$ for $n>1$, i.e. one of its so-called excited states. The electron will rapidly return to its lowest energy state, known as the ground state and, in doing so, emit light. The energy carried away by the light is determined by the condition that the total energy is conserved (Figure 1.8.6 ). Thus, if $n_i$ is the integer that characterizes the initial (excited) state of the electron, and $n_f$ is the final state (here we imagine that $n_f =1$, but the result applies whenever $n_f <n_i$, i.e., for emission) $E_{nf}=E_{ni}-h\nu \label{1.8.22}$ or $\nu=\dfrac{E_{ni}-E_{nf}}{h}=\dfrac{Z^2 e^4 m_e}{8\epsilon_{0}^{2} h^3}\left ( \dfrac{1}{n_{f}^{2}}-\dfrac{1}{n_{i}^{2}}\right ) \label{1.8.23}$ We can now identify the Rydberg constant $R_H$ with the ratio of constants on the right hand side of Equation $\ref{1.8.23}$ $R_H = \dfrac {m_ee^4}{8 \epsilon ^2_0 h^3 } \label {2-22}$ Evaluating $R_H$ from the fundamental constants in this formula gives a value within 0.5% of that obtained experimentally from the hydrogen atom spectrum. Thus, by observing the emitted light, we can determine the energy difference between the initial and final energy levels, which results in the emission spectra discussed in Sections 1.4 and 1.5.
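The claim that $R_H$ evaluated from fundamental constants agrees closely with experiment can be checked directly (a sketch; in this frequency form the constant comes out to about $3.29 \times 10^{15}\,s^{-1}$, and the CODATA constant values are assumed):

```python
# Rydberg constant (frequency form) from fundamental constants,
# R_H = m_e e^4 / (8 eps0^2 h^3), as in Equation 2-22.
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, C^2 J^-1 m^-1
h = 6.62607015e-34        # Planck constant, J s

R_H = m_e * e**4 / (8 * eps0**2 * h**3)
R_exp = 3.2898419603e15   # Rydberg frequency (infinite nuclear mass), s^-1

print(f"R_H (calc) = {R_H:.4e} s^-1")
print(f"relative deviation: {abs(R_H - R_exp) / R_exp:.1e}")
```

The small remaining deviation from the hydrogen-atom value comes mainly from treating the nucleus as fixed; replacing $m_e$ by the electron–proton reduced mass removes most of it.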
Different values of $n_f$ determine which emission spectrum is observed, and the examples shown in the figure are named after the individuals who first observed them. The figure below shows some of the transitions possible for different $n_f$ and $n_i$ values discussed previously. If the atom absorbs light it ends up in an excited state as a result of the absorption. The absorption is only possible for light of certain frequencies, and again, conservation of energy determines what these frequencies are. If light is absorbed, then the final energy $E_{nf}$ will be related to the initial energy $E_{ni}$ with $n_f >n_i$ by $E_{nf}=E_{ni}+h\nu \label{1.8.24}$ or $\nu=\dfrac{E_{nf}-E_{ni}}{h}=\dfrac{Z^2 e^4 m_e}{8\epsilon_{0}^{2}h^3}\left ( \dfrac{1}{n_{i}^{2}}-\dfrac{1}{n_{f}^{2}}\right ) \label{1.8.25}$ Exercise 1.8.3 1. Calculate the energy of a photon that is produced when an electron in a hydrogen atom goes from an orbit with $n = 4$ to an orbit with $n = 1$. 2. What happens to the energy of the photon as the initial value of $n$ approaches infinity? Answer a: \begin{align*} E_{nf} &= E_{ni} - h\nu \\ E_{photon} = h\nu &= E_{ni} - E_{nf}\\ &= \frac{Z^2e^4m_e}{8\epsilon_o^2h^2}\left(\frac{1}{n_f^2} - \frac{1}{n_i^2} \right)\\ &=\frac{e^4m_e}{8\epsilon_o^2h^2}\left(\frac{1}{1^2} - \frac{1}{4^2}\right)\\ &=2.18 \times 10^{-18}\left(1 - \frac{1}{16} \right)\\ &=2.04 \times 10^{-18}\, J \end{align*} \nonumber b: As $n_i \rightarrow \infty$ \begin{align*} E_{photon} &= \frac{e^4m_e}{8\epsilon_o^2h^2}\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right)\\ \frac{1}{n_i^2} &\rightarrow 0\\ E_{photon} &\rightarrow \frac{e^4m_e}{8\epsilon_o^2h^2}\left(\frac{1}{n_f^2}\right) \end{align*} \nonumber Bohr’s proposal explained the hydrogen atom spectrum, the origin of the Rydberg formula, and the value of the Rydberg constant. Specifically, it demonstrated that the integers in the Rydberg formula are a manifestation of quantization.
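The number in part (a) of the exercise above can be reproduced in a couple of lines (a sketch using the rounded Rydberg energy from Equation 1.8.21; the helper name `photon_energy` is introduced here for illustration):

```python
Ry = 2.18e-18  # Rydberg energy unit, J (Equation 1.8.21)

def photon_energy(n_i, n_f, Z=1):
    """Energy of the photon emitted in the transition n_i -> n_f (n_i > n_f)."""
    return Ry * Z**2 * (1 / n_f**2 - 1 / n_i**2)

E = photon_energy(n_i=4, n_f=1)
print(f"E_photon = {E:.3e} J")  # ~2.04e-18 J
```

Letting `n_i` grow large shows the limiting behavior of part (b): the photon energy approaches `Ry / n_f**2`, the ionization limit of the series.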
The energy, the angular momentum, and the radius of the orbiting electron all are quantized. This quantization also parallels the concept of stable orbits in the Bohr model. Only certain values of $E$, $M$, and $r$ are possible, and therefore the electron cannot collapse onto the nucleus by continuously radiating energy because it can only have certain energies, and it cannot be in certain regions of space. The electron can only jump from one orbit (quantum state) to another. The quantization means that the orbits are stable, and the electron cannot spiral into the nucleus in spite of the attractive Coulomb force. Although Bohr’s ideas successfully explained the hydrogen spectrum, they failed when applied to the spectra of other atoms. In addition a profound question remained. Why is angular momentum quantized in units of $\hbar$? As we shall see, de Broglie had an answer to this question, and this answer led Schrödinger to a general postulate that produces the quantization of angular momentum as a consequence. This quantization is not quite as simple as proposed by Bohr, and we will see that it is not possible to determine the distance of the electron from the nucleus as precisely as Bohr thought. In fact, since the position of the electron in the hydrogen atom is not at all as well defined as a classical orbit (such as the moon orbiting the earth) it is called an orbital. An electron orbital represents or describes the position of the electron around the nucleus in terms of a mathematical function called a wavefunction that yields the probability of positions of the electron.
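As a closing numerical check on the Bohr-model formulas of this section, a short script can reproduce the Bohr radius and the orbit radii worked out in Exercises 1.8.1 and 1.8.2 (a sketch; CODATA constant values are assumed):

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
eps0 = 8.8541878128e-12   # vacuum permittivity, C^2 J^-1 m^-1
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg

# Bohr radius, a0 = 4 pi eps0 hbar^2 / (e^2 m_e)  (Equation 1.8.17)
a0 = 4 * math.pi * eps0 * hbar**2 / (e**2 * m_e)

def r_n(n, Z=1):
    """Allowed orbit radius, r_n = (a0 / Z) n^2  (Equation 1.8.16B)."""
    return a0 / Z * n**2

print(f"a0 (H, n=1)    = {a0 * 1e12:.1f} pm")        # ~52.9 pm
print(f"Li2+, n=1      = {r_n(1, Z=3) * 1e12:.1f} pm")  # ~17.6 pm
print(f"H Rydberg n=200 = {r_n(200):.3e} m")          # ~2.12e-6 m
```

The micrometer-scale radius of the $n=200$ Rydberg state makes vivid just how weakly bound and spatially enormous such states are compared with the ground state.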
Learning Objectives • To understand that sometimes you cannot know everything about a quantum system, as demonstrated by the Heisenberg uncertainty principle. In classical physics, studying the behavior of a physical system is often a simple task due to the fact that several physical qualities can be measured simultaneously. However, this possibility is absent in the quantum world. In 1927 the German physicist Werner Heisenberg described such limitations as the Heisenberg Uncertainty Principle, or simply the Uncertainty Principle, stating that it is not possible to measure both the momentum and position of a particle simultaneously. The Heisenberg Uncertainty Principle is a fundamental theory in quantum mechanics that defines why a scientist cannot measure multiple quantum variables simultaneously. Until the dawn of quantum mechanics, it was held as a fact that all variables of an object could be known to exact precision simultaneously for a given moment. Newtonian physics placed no limits on how better procedures and techniques could reduce measurement uncertainty, so that it was conceivable that with proper care and accuracy all information could be defined. Heisenberg made the bold proposition that there is a lower limit to this precision, making our knowledge of a particle inherently uncertain. Probability Matter and photons are waves, implying they are spread out over some distance. What is the position of a particle, such as an electron? Is it at the center of the wave? The answer lies in how you measure the position of an electron. Experiments show that you will find the electron at some definite location, unlike a wave. But if you set up exactly the same situation and measure it again, you will find the electron in a different location, often far outside any experimental uncertainty in your measurement. Repeated measurements will display a statistical distribution of locations that appears wavelike (Figure 1.9.1 ).
After de Broglie proposed the wave nature of matter, many physicists, including Schrödinger and Heisenberg, explored the consequences. The idea quickly emerged that, because of its wave character, a particle’s trajectory and destination cannot be precisely predicted for each particle individually. However, each particle goes to a definite place (Figure 1.9.1 ). After compiling enough data, you get a distribution related to the particle’s wavelength and diffraction pattern. There is a certain probability of finding the particle at a given location, and the overall pattern is called a probability distribution. Those who developed quantum mechanics devised equations that predicted the probability distribution in various circumstances. It is somewhat disquieting to think that you cannot predict exactly where an individual particle will go, or even follow it to its destination. Let us explore what happens if we try to follow a particle. Consider the double-slit patterns obtained for electrons and photons in Figure 1.9.2 . The interference patterns build up statistically as individual particles fall on the detector. This can be observed for photons or electrons—for now, let us concentrate on electrons. You might imagine that the electrons are interfering with one another as any waves do. To test this, you can lower the intensity until there is never more than one electron between the slits and the screen. The same interference pattern builds up! This implies that a particle’s probability distribution spans both slits, and the particles actually interfere with themselves. Does this also mean that the electron goes through both slits? An electron is a basic unit of matter that is not divisible. But it is a fair question, and so we should look to see if the electron traverses one slit or the other, or both. One possibility is to have coils around the slits that detect charges moving through them.
What is observed is that an electron always goes through one slit or the other; it does not split to go through both. But there is a catch. If you determine that the electron went through one of the slits, you no longer get a double slit pattern—instead, you get single slit interference. There is no escape by using another method of determining which slit the electron went through. Knowing the particle went through one slit forces a single-slit pattern. If you do not observe which slit the electron goes through, you obtain a double-slit pattern. How does knowing which slit the electron passed through change the pattern? The answer is fundamentally important—measurement affects the system being observed. Information can be lost, and in some cases it is impossible to measure two physical quantities simultaneously to exact precision. For example, you can measure the position of a moving electron by scattering light or other electrons from it. Those probes have momentum themselves, and by scattering from the electron, they change its momentum in a manner that loses information. There is a limit to absolute knowledge, even in principle. Heisenberg’s Uncertainty Principle It is mathematically possible to express the uncertainty that, Heisenberg concluded, always exists if one attempts to measure the momentum and position of particles. First, we must define the variable “x” as the position of the particle, and define “p” as the momentum of the particle. The momentum of a photon of light is simply $h/λ$, where h represents Planck’s constant and $\lambda$ represents the wavelength of the photon. The uncertainty in the position of a photon of light can be taken to be roughly its wavelength ($\lambda$). To represent finite change in quantities, the Greek uppercase letter delta, or Δ, is placed in front of the quantity.
Therefore, $\Delta{p}=\dfrac{h}{\lambda} \label{1.9.1}$ $\Delta{x}= \lambda \label{1.9.2}$ By substituting $\Delta{x}$ for $\lambda$ into Equation $\ref{1.9.1}$, we derive $\Delta{p}=\dfrac{h}{\Delta{x}} \label{1.9.3}$ or, $\underset{\text{early form of uncertainty principle }}{\Delta{p}\Delta{x}=h} \label{1.9.4}$ A Common Trend in Quantum Systems Equation $\ref{1.9.4}$ can be derived by assuming the particle of interest is behaving as a particle, and not as a wave. Simply let $\Delta p=mv$, and $Δx=h/(m v)$ (from De Broglie’s expression for the wavelength of a particle). Substituting in $Δp$ for $mv$ in the second equation leads to Equation $\ref{1.9.4}$. Equation \ref{1.9.4} was further refined by Heisenberg and his colleague Niels Bohr, and was eventually rewritten as $\Delta{p_x}\Delta{x} \ge \dfrac{h}{4\pi} = \dfrac{\hbar}{2} \label{1.9.5}$ with $\hbar = \dfrac{h}{2\pi}= 1.0545718 \times 10^{-34}\; m^2 \cdot kg / s$. Equation $\ref{1.9.5}$ reveals that the more accurately a particle’s position is known (the smaller $Δx$ is), the less accurately the momentum of the particle in the x direction ($Δp_x$) is known. Mathematically, this occurs because the smaller $Δx$ becomes, the larger $Δp_x$ must become in order to satisfy the inequality. However, the more accurately momentum is known, the less accurately position is known (Figure 1.9.2 ). What is the Proper Definition of Uncertainty? Equation $\ref{1.9.5}$ relates the uncertainty of momentum and position. An immediate question that arises is whether $\Delta x$ represents the full range of possible $x$ values or half of it (e.g., $\langle x \rangle \pm \Delta x$). $\Delta x$ is the standard deviation and is a statistical measure of the spread of $x$ values. The use of half the possible range is the more accurate estimate of $\Delta x$. As we will demonstrate later, once we construct a wavefunction to describe the system, both $x$ and $\Delta x$ can be explicitly derived.
However, for now, Equation \ref{1.9.5} will work. For example: if a problem states that a particle is trapped in a box of length $L$, then the uncertainty of its position is $\pm L/2$. So the value of $\Delta x$ used in Equation $\ref{1.9.5}$ should be $L/2$, not $L$. Example 1.9.1 An electron is confined to the size of a magnesium atom with a 150 pm radius. What is the minimum uncertainty in its velocity? Solution The uncertainty principle (Equation $\ref{1.9.5}$): $\Delta{p}\Delta{x} \ge \dfrac{\hbar}{2} \nonumber$ can be written $\Delta{p} \ge \dfrac{\hbar}{2 \Delta{x}} \nonumber$ and substituting $\Delta p=m \Delta v$, since the mass is not uncertain, $\Delta{v} \ge \dfrac{\hbar}{2\; m\; \Delta{x}} \nonumber$ The relevant parameters are • mass of electron $m=m_e= 9.109383 \times 10^{-31}\; kg$ • uncertainty in position: $\Delta x=150 \times 10^{-12} m$ \begin{align*} \Delta{v} &\ge \dfrac{1.0545718 \times 10^{-34} \cancel{kg} m^{\cancel{2}} / s}{(2)\;( 9.109383 \times 10^{-31} \; \cancel{kg}) \; (150 \times 10^{-12} \; \cancel{m}) } \\[4pt] &= 3.9 \times 10^5\; m/s \end{align*} \nonumber Exercise 1.9.1 What is the maximum uncertainty in the velocity of the electron described in Example 1.9.1 ? Answer Infinity. There is no limit on the maximum uncertainty, just the minimum uncertainty. Understanding the Uncertainty Principle through Wave Packets and the Slit Experiment It is hard for most people to accept the uncertainty principle, because in classical physics the velocity and position of an object can be calculated with certainty and accuracy. However, in quantum mechanics, the wave-particle duality of electrons does not allow us to accurately calculate both the momentum and position because the wave is not in one exact location but is spread out over space. A "wave packet" can be used to demonstrate how either the momentum or position of a particle can be precisely calculated, but not both of them simultaneously.
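The minimum-uncertainty estimate from Example 1.9.1 above can be verified numerically (a sketch, using the same constants as the worked solution):

```python
# Check of Example 1.9.1: minimum velocity uncertainty of an electron
# confined within a magnesium atom of 150 pm radius.
hbar = 1.0545718e-34   # reduced Planck constant, J s
m_e = 9.109383e-31     # electron mass, kg
dx = 150e-12           # position uncertainty (atomic radius), m

dv_min = hbar / (2 * m_e * dx)   # from dp*dx >= hbar/2 with dp = m*dv
print(f"minimum dv = {dv_min:.2e} m/s")  # ~3.9e5 m/s
```

A spread of several hundred kilometers per second for a confined electron shows why the uncertainty principle dominates at atomic scales.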
An accumulation of waves of varying wavelengths can be combined to create an average wavelength through an interference pattern: this average wavelength is called the "wave packet". The more waves that are combined in the "wave packet", the more precise the position of the particle becomes and the more uncertain the momentum becomes because more wavelengths of varying momenta are added. Conversely, if we want a more precise momentum, we would add fewer wavelengths to the "wave packet" and then the position would become more uncertain. Therefore, there is no way to find both the position and momentum of a particle simultaneously. Several scientists have debated the Uncertainty Principle, including Einstein. Einstein created a slit experiment to try to disprove the Uncertainty Principle. He had light passing through a slit, which causes an uncertainty of momentum because the light behaves like a particle and a wave as it passes through the slit. Therefore, the momentum is unknown, but the initial position of the particle is known. Here is a video that demonstrates particles of light passing through a slit and as the slit becomes smaller, the final possible array of directions of the particles becomes wider. As the position of the particle becomes more precise when the slit is narrowed, the direction, or therefore the momentum, of the particle becomes less known as seen by a wider horizontal distribution of the light. Example 1.9.2 The speed of a 1.0 g projectile is known to within $10^{-6}\;m/s$. 1. Calculate the minimum uncertainty in its position. 2. What is the maximum uncertainty of its position? Solution a From Equation $\ref{1.9.5}$, we have $\Delta{p_x} = m \Delta v_x$ with $m=1.0\;g$.
Solving for $\Delta{x}$ to get \begin{align*} \Delta{x} &= \dfrac{\hbar}{2m\Delta v} \[4pt] &= \dfrac{1.0545718 \times 10^{-34} \; m^2 \cdot kg / s}{(2)(0.001 \; kg)(10^{-6} \;m/s)} \[4pt] &= 5.3 \times 10^{-26} \,m \end{align*} \nonumber This is negligible for all intents and purposes, as expected for any macroscopic object. b Unlimited (or the size of the universe). The Heisenberg uncertainty principle does not quantify the maximum uncertainty. Exercise 1.9.2 Estimate the minimum uncertainty in the speed of an electron confined to a hydrogen atom within a diameter of $1 \times 10^{-10} m$? Answer We need to quantify the uncertainty of the electron in position. We can estimate that as $\pm 5 \times 10^{-11}\, m$ (half the diameter). Hence, substituting the relevant numbers into Equation \ref{1.9.5} and solving for $\Delta v$ we get $\Delta v= 1.15 \times 10^6\, m/s \nonumber$ Notice that the uncertainty is significantly greater for the electron in a hydrogen atom than in the magnesium atom (Example 1.9.1 ) as expected since the magnesium atom is appreciably bigger. Heisenberg’s Uncertainty Principle not only helped shape the new school of thought known today as quantum mechanics, but it also helped discredit older theories. Most importantly, the Heisenberg Uncertainty Principle made it obvious that there was a fundamental error in the Bohr model of the atom. Since the position and momentum of a particle cannot be known simultaneously, Bohr’s theory that the electron traveled in a circular path of a fixed radius orbiting the nucleus was obsolete. Furthermore, Heisenberg’s uncertainty principle, when combined with other revolutionary theories in quantum mechanics, helped shape wave mechanics and the current scientific understanding of the atom. Humor: Heisenberg and the Police • Heisenberg get pulled over for speeding by the police. The officer asks him "Do you know how fast you were going?" • Heisenberg replies, "No, but we know exactly where we are!"
• The officer looks at him confused and says "you were going 108 miles per hour!" • Heisenberg throws his arms up and cries, "Great! Now we're lost!"
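The worked examples in this section reduce to one-line evaluations of Equation \ref{1.9.5}, and are easy to check numerically. A minimal Python sketch (the helper name and hard-coded constants are my own, taken from Examples 1.9.1 and 1.9.2):

```python
HBAR = 1.0545718e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(m, delta_x):
    """Minimum velocity uncertainty from dx*dp >= hbar/2 with dp = m*dv."""
    return HBAR / (2 * m * delta_x)

# Example 1.9.1: electron confined within a 150 pm radius
dv_electron = min_velocity_uncertainty(9.109383e-31, 150e-12)
print(f"dv >= {dv_electron:.2g} m/s")  # ~3.9e5 m/s

# Example 1.9.2: 1.0 g projectile with speed known to 1e-6 m/s
dx_projectile = HBAR / (2 * 0.001 * 1e-6)
print(f"dx >= {dx_projectile:.2g} m")  # ~5.3e-26 m
```

Both values match the hand calculations in the examples above.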
Solutions to select questions can be found online. 1.1A Sodium metal has a threshold frequency of $4.40 × 10^{14}$ Hz. What is the kinetic energy of a photoelectron ejected from the surface of a piece of sodium when the ejecting photon is $6.20 × 10^{14}$ Hz? What is the velocity of this photoelectron? From which region of the electromagnetic spectrum is this photon? 1.1B What is the longest-wavelength electromagnetic radiation that can eject a photoelectron from silver, given that the work function is 4.73 eV? Is this in the visible range? Solution 263 nm 1.1C Find the longest-wavelength photon that can eject an electron from potassium, given that the work function is 2.24 eV. Is this visible electromagnetic radiation? 1.1C What is the work function in eV of electrons in magnesium, if the longest-wavelength photon that can eject electrons is 337 nm? Solution 3.69 eV 1.1D Calculate the work function in eV of electrons in aluminum, if the longest-wavelength photon that can eject electrons is 304 nm. 1.1E What is the maximum kinetic energy in eV of electrons ejected from sodium metal by 450-nm electromagnetic radiation, given that the work function is 2.28 eV? Solution 0.483 eV 1.1F UV radiation having a wavelength of 120 nm falls on gold metal, to which electrons are bound by 4.82 eV. What is the maximum kinetic energy of the ejected photoelectrons? 1.1G Violet light of wavelength 400 nm ejects electrons with a maximum kinetic energy of 0.860 eV from sodium metal. What is the work function of electrons to sodium metal? Solution 2.25 eV 1.1H UV radiation having a 300-nm wavelength falls on uranium metal, ejecting 0.500-eV electrons. What is the work function of electrons to uranium metal? 1.1I What is the wavelength of electromagnetic radiation that ejects 2.00-eV electrons from calcium metal, given that the work function is 2.71 eV? What type of electromagnetic radiation is this?
Solution (a) 264 nm (b) Ultraviolet 1.1J Find the wavelength of photons that eject 0.100-eV electrons from potassium, given that the work function is 2.24 eV. Are these photons visible? 1.1K What is the maximum velocity of electrons ejected from a material by 80-nm photons, if they are bound to the material by 4.73 eV? Solution $1.95 \times 10^{6}\, m/s$ 1.1L Photoelectrons from a material with a work function of 2.71 eV are ejected by 420-nm photons. Once ejected, how long does it take these electrons to travel 2.50 cm to a detection device? 1.1M A laser with a power output of 2.00 mW at a wavelength of 400 nm is projected onto calcium metal. (a) How many electrons per second are ejected? (b) What power is carried away by the electrons, given that the work function is 2.71 eV? Solution (a) $4.02×10^{15}/s$ (b) 0.256 mW 1.1N (a) Calculate the number of photoelectrons per second ejected from a 1.00-mm$^2$ area of sodium metal by 500-nm electromagnetic radiation having an intensity of $1.30\; kW/m^2$ (the intensity of sunlight above the Earth’s atmosphere). (b) Given that the work function is 2.28 eV, what power is carried away by the electrons? (c) The electrons carry away less power than brought in by the photons. Where does the other power go? How can it be recovered? 1.1O Red light having a wavelength of 700 nm is projected onto magnesium metal to which electrons are bound by 3.68 eV. (a) Use $KE_ e=h\nu -\Phi$ to calculate the kinetic energy of the ejected electrons. (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or inconsistent? Solution (a) –1.90 eV (b) Negative kinetic energy (c) That the electrons would be knocked free. Unreasonable Results 1.1P (a) What is the work function of electrons to a material from which 4.00-eV electrons are ejected by 400-nm electromagnetic radiation? (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or inconsistent? 1.2A 1.
Suppose the electron in a hydrogen atom is in the circular Bohr orbit with n = 30. How many times per second does it go around? 2. Suppose now the electron drops to the n = 29 state, emitting a single photon. What is the frequency of this photon, in cycles per second? 3. Comment on the relation between your answers in (a), (b) above. What would you guess the relation to be for n = 300? 1.2B The $\mu$ (muon) is a cousin of the electron, the only difference being its mass is 207 times greater. The $μ$ has a lifetime of about $2.2\, \mu s$. If a beam of muons is directed at a solid, the muons will go into orbit around nuclei. The Bohr atom, with a muon replacing the electron, is a useful model for picturing this. 1. For a nucleus of charge Ze, how large is the n = 1 μ orbit compared with the electron orbit? 2. What is the frequency of the photon emitted by the $μ$ in the n = 2 to n = 1 transition? 3. For the gold nucleus, the $n = 1$ $μ$ orbit is inside the nucleus. Find the frequency of the emitted photon for $n = 2$ to $n = 1$ in this case. (Hint: you’ll need the radius of the gold nucleus. Assume here that the positive charge is uniformly spread throughout the nucleus.) 1.3 Beyond the infrared region, in the direction of lower energies, lies the microwave region. In this region, radiation is usually characterized by its frequency ($\nu$), expressed in units of $\text{MHz}$, where $\text{Hz}$ is one cycle per second. Given a microwave frequency of $2.0 \times 10^{4}\, \text{MHz}$, calculate $\nu$, $λ$, and the energy per photon for this radiation, and then compare the results with the figure below.
Solution The frequency ($\nu$) of the microwave radiation is given; converting to Hz gives $\nu = 2.0 \times 10^{4}\, \text{MHz} \left(\dfrac{10^6\, \text{Hz}}{1\, \text{MHz}}\right) = 2.0 \times 10^{10}\, s^{-1} \nonumber$ now we find the wavelength using the formula $λ=\dfrac{c}{v}=\dfrac{2.998 \times 10^{8} m \,s^{-1}}{2.0 \times 10^{10} \,s^{-1} } = 1.5 \times 10^{-2}\,m \nonumber$ finally we use $E=h\nu$ to calculate the energy $E = h\nu = (6.626 \times 10^{-34}\, J \cdot s)(2.0 \times 10^{10}\, s^{-1}) = 1.3 \times 10^{-23}\, J \nonumber$ 1.4 Compare the Planck Distribution and the Rayleigh-Jeans Distributions. For large values of ${\nu}$, which one would be greater? Solution The Planck Distribution is $d{\rho}=\dfrac{8{\pi}h}{c^3}\dfrac{{\nu}^3}{e^{\dfrac{h{\nu}}{k_BT}}-1}d{\nu}\nonumber$ And the Rayleigh-Jeans Distribution is $d{\rho}=\dfrac{8{\pi}{\nu^2}k_BT}{c^3}d{\nu}\nonumber$ For larger ${\nu}$, the Rayleigh-Jeans Distribution increases, while the Planck Distribution decreases because of the exponential term in the denominator outweighing the ${\nu}^3$ term. 1.4 Planck's principal assumption was that the energies of an electronic oscillator can only have values $E=nh\nu$ and $ΔE=h\nu$. In fact, as $\nu \to 0$ then $ΔE \to 0$ and $E$ becomes continuous. It should be expected that the nonclassical Planck distribution goes over to the classical Rayleigh-Jeans distribution at low frequencies, where ΔE →0. Prove that Equation 1.2 reduces to Equation 1.1 as $\nu \to 0$. Note: The Taylor expansion of an exponential $e^x \approx 1+x+\left(\dfrac{x^2}{2!}\right) + ... \nonumber$ can be truncated to $e^x \approx 1+x$ when $x$ is small. Solution It is important to know Planck's equation and put it to use: $d\rho(\nu,T)=\rho_\nu(T)\,d\nu=\dfrac{8\pi h}{c^3}\dfrac{\nu^3\,d\nu}{e^{h\nu/k_BT}-1} \nonumber$ Note: $\rho_\nu(T)\,d\nu$ is the radiant energy density between frequencies $\nu$ and $\nu+d\nu$. Now for small $x$ we have $e^x \approx 1+x$, and as $\nu \to 0$, $h\nu/k_BT \to 0$; once we have this we get the following $d\rho(\nu,T)=\dfrac{8\pi h}{c^3}\dfrac{\nu^3\,d\nu}{1+\dfrac{h\nu}{k_BT}-1}=\dfrac{8\pi h\nu^3 k_BT\,d\nu}{c^3 h\nu} = \dfrac{8\pi \nu^2 k_BT}{c^3}\,d\nu \nonumber$ and this is the classical Rayleigh-Jeans distribution. 1.5 The visible spectrum is in the 400-700 nm range, and contains about 40% of the sun’s radiation intensity.
Using the Planck Distribution, write an integral expression that can be evaluated to give this result (do not evaluate the integral). Solution The Planck Blackbody distribution in terms of wavelength is $\rho_\lambda(\lambda, T) d \lambda =\dfrac{2 hc^2}{\lambda^5}\dfrac{1}{ e^{\frac{hc}{\lambda k_\mathrm{B}T}} - 1} d\lambda\nonumber$ And so the intensity contained in the visible spectrum (from 400 nm and 700 nm) is $\int^{700}_{400}\rho_\lambda(\lambda, T) d\lambda = \int^{700\, nm}_{400\, nm}\dfrac{2 hc^2}{\lambda^5}\dfrac{1}{ e^{\frac{hc}{\lambda k_\mathrm{B}T}} - 1} d\lambda\nonumber$ The intensity contained in the whole spectrum can be given by $\int^{\infty}_{0}\rho_\lambda(\lambda, T) d\lambda = \int^{\infty}_{0}\dfrac{2 hc^2}{\lambda^5}\dfrac{1}{ e^{\frac{hc}{\lambda k_\mathrm{B}T}} - 1} d\lambda\nonumber$ And thus $40\% =100\% \times \dfrac{\rho_{visible}}{\rho_{total}}=100 \% \times \dfrac{\int^{700}_{400}{\dfrac{8\pi hc}{\lambda^5 \left(e^{\dfrac{hc}{\lambda k_bT}}-1\right)} d \lambda}}{\int^{\infty }_0{\dfrac{8\pi hc}{\lambda^5\left(e^{\dfrac{hc}{\lambda k_bT}}-1\right)}d \lambda}}\nonumber$ 1.13 What is the frequency and energy of a single 310 nm photon? Solution Given: $\lambda$ = 310 nm. To find the frequency: $\nu=\dfrac{c}{\lambda}\nonumber$ $\nu=\dfrac{2.99 \times 10^{8} \dfrac{m}{s}}{310 \ nm}\nonumber$ $\nu= 9.67 \times 10^{14} s^{-1} \nonumber$ To find the energy: $E = h \nu\nonumber$ $E = (6.626 \times 10^{-34}\, J \cdot s) \times \nu\nonumber$ $E = 6.41 \times 10^{-19} J\nonumber$ 1.14 A laser emits $3.3 \times 10^{17}$photons per second. If the energy per photon is $6.4 \times 10^{-20}J$ per photon, find the a) wattage and b) wavelength of the laser. In what region of the electromagnetic spectrum does the laser emit?
Solution a) $W=(6.4 \times 10^{-20}J)(3.3 \times 10^{17}\dfrac{1}{s})$ $W=0.02112\dfrac{J}{s}$ b) $E=h\dfrac{c}{\lambda}$ $\lambda=\dfrac{hc}{E}$ $\lambda=\dfrac{(3 \times 10^{8}\dfrac{m}{s})(6.626 \times 10^{-34}Js)}{6.4 \times 10^{-20}J}$ $\lambda=3.106 \times 10^{-6}m$ c) infrared spectrum 1.15 At what wavelength is blackbody emission at a maximum for a given temperature of 7500 K? Solution For a given temperature, the wavelength of maximum emission is given by Wien's displacement law: $T = \frac{2.9 \times 10^{-3}\text{mK}}{\lambda_\text{max}}$ Given: T = 7500K $7500 = \frac{2.9 \times 10^{-3}\text{mK}}{\lambda_\text{max}}$ $\lambda_\text{max}= \frac{2.9 \times 10^{-3}\text{mK}}{7500 \text{K}}$ $\lambda_\text{max}= 3.8 \times 10^{-7} \text{m}$ Q1.15 A light bulb is a blackbody radiator. What temperature is required such that $\lambda_{max} = \ 400 nm$ ? Solution $T=\dfrac{(2.90 \times 10^{-3}\; m \cdot K)}{400 \times 10^{-9}\; m} = 7250\; K.\nonumber$ 1.16 An unknown elemental metal has work function of $\Phi = 8.01 \times 10^{-19} J$. Upon illumination with UV light of wavelength 162 nm, electrons are ejected with velocity of at $2.95 \times 10^{3} \frac {m}{s}$. What is the threshold wavelength? What is the work function in units of eV? What metal does this correspond to (you will need to consult Table B1)? Solution This question involves a bit of a trick in that neither the wavelength of radiation nor the velocity of electrons is necessary to solve for the threshold wavelength or material as requested. To solve for the threshold wavelength, we employ the concept that the kinetic energy is 0 at the threshold frequency and then use a relation to solve for the threshold wavelength. $\dfrac {1}{2} mv^2 = h \nu - \Phi \tag {2-5}\nonumber$ So, $\nu_{threshold} = \dfrac {\Phi}{h} = 1.21 \times 10^{15} s^{-1}\nonumber$ and $\lambda_{threshold} = \dfrac {c}{\nu_{threshold}} = 248 nm\nonumber$ With a basic conversion of $1\;J = 6.242 \times 10^{18} eV\nonumber$ we see that the work function is 5 eV.
Using Table B1, we see that this value corresponds to Cobalt (discovered by Georg Brandt). 1.16 Given the work function of sodium is 1.87 eV, find the kinetic energy of the ejected electrons when light of frequency 2.3 times greater than the threshold frequency is used to excite the electrons. Solution step 1: convert work function from electron volts to joules $\phi=1.87 \ eV\nonumber$ 1 eV = $1.602 \times 10^{-19}J$ $\phi$=$1.87 eV \times \dfrac{1.602 \times 10^{-19}J}{1 \ eV}$=$2.995 \times 10^{-19}J$ Step 2: Solve for the threshold frequency $\phi$=$hf$ $\dfrac{\phi}{h}=f$ $\dfrac{2.995 \times 10^{-19}J}{6.626 \times 10^{-34}Js}=f$ $4.5 \times 10^{14}Hz=f$ Step 3: Use threshold frequency to solve for kinetic energy at desired conditions $KE=h(2.3f-f)$ $KE=h(1.3f)$ $KE=(6.626 \times 10^{-34}Js)(1.3(4.5 \times 10^{14}Hz))$ $KE=3.87 \times 10^{-19}J$ Q1.17 Find the kinetic energy of electrons ejected from the surface of tungsten irradiated with 250 nm radiation. The work function of tungsten is 4.50 eV. Solution The energy of the incident photon is $E = h\nu\nonumber$ we then use $c=\nu\lambda$ for the frequency to find $E=\dfrac{hc}{\lambda}\nonumber$ Then substitute values to get $E=\dfrac{(6.626 \times 10^{-34}\; J \cdot s)(3.00 \times 10^8 m/s)}{250 \times 10^{-9}\; m} = 7.95 \times 10^{-19}\; J\nonumber$ Convert to eV, to get $E=(7.95 \times 10^{-19}\; J) \dfrac{1 \;eV}{1.6 \times 10^{-19}\;J} = 4.97\; eV.\nonumber$ Use $KE_e=h\nu–\Phi$ to finally get $KE_e=h\nu – \Phi = 4.97 \;eV – 4.50 \;eV = 0.47\; eV.\nonumber$ 1.18 A smooth silver Thanksgiving platter and serving spoon (the pilgrims had knives and spoons but no forks) are irradiated with light of wavelength 317 nm. The work function is $\Phi = 6.825 \times 10^{-19} J$ . What is the kinetic energy of the ejected electrons [eV]? The threshold frequency? Solution We first solve for the threshold frequency.
$\dfrac {hc}{\lambda} = \Phi \ = h \nu\nonumber$ Rearrange to solve for $\nu$ \begin{align*} \nu &= \dfrac {\Phi}{h} \[4pt] &= 1.03 \times 10^{15} s^{-1} \end{align*}\nonumber Now we solve for kinetic energy. $\dfrac {1}{2} mv^2 = h \nu - \Phi \tag {2-5}\nonumber$ where $\nu = \nu_{radiation}\nonumber$ and we recall that $\nu = \dfrac {c}{\lambda_{radiation}}\nonumber$ Using the right hand side of that kinetic energy equation, we find the result to be ${KE} = 2.55 \times 10^{-19}\,J\nonumber$ Q1.18 When a clean surface of silver is irradiated with light of wavelength 255 nm, the work function of silver is 4.18 eV. Calculate the kinetic energy in eV of the ejected electrons and the threshold frequency. Solution Kinetic Energy of the electrons can be represented with the formula $KE= h\nu - \Phi$ We have to solve for the Kinetic energy in eV $KE= h\nu - \Phi$ substituting known values gives $KE= (6.626 \times 10^{-34}\, J \cdot s )\left(\dfrac{3 \times 10^8\, m/s}{255 \times 10^{-9}\, m}\right)- 6.69 \times 10^{-19}\, J\nonumber$ $KE = 1.105 \times 10^{-19}\, J \approx 0.690\, eV\nonumber$ The second part of the question asks us to solve for the threshold frequency $\nu_o= \dfrac{\Phi}{h}$ $\nu_o= \dfrac{6.69 \times 10^{-19}\, J}{6.626 \times 10^{-34}\, J \cdot s}$ $= 1.01 \times 10^{15} Hz$ Q1.21 A line in the Paschen series of hydrogen has a wavelength of 1.01 x 10-6 m. Find the original energy of the electron. Solution For the Paschen series n1= 3. To find n2 we have to use the Rydberg formula: $\dfrac{1}{\lambda} = R_H \times \left(\dfrac{1}{n_1^2} - \dfrac{1}{n_2^2}\right)\nonumber$ substituting our known values will give us $\dfrac{1}{1.01 \times 10^{-6} m } = 109677 cm^{-1} \left(\dfrac{1}{3^2} - \dfrac{1}{n_2^2}\right)\nonumber$ converting our units and using algebra gives us $0.0903 = \left(\dfrac{1}{9} - \dfrac{1}{n_2^2}\right)$ where $n_2 = 6.93 \approx 7$ We approximate to 7 since $n$ is an integer. 1.22 How does the energy change when a particle absorbs and releases a photon?
Show the effects on the state that the particle is in and the energy itself. Solution The energy would increase when a photon is absorbed and decrease when a photon is released. We have two equations $E = \dfrac{hc}{\lambda}\nonumber$ $\dfrac{1}{\lambda} = 109680\left(\dfrac{1}{n_1^2} - \dfrac{1}{n_2^2}\right)\nonumber$ When a photon is absorbed, the energy change of the particle, $\Delta E$, is positive (its energy increases); when a photon is released, $\Delta E$ is negative (its energy decreases). In both cases the magnitude of the change equals the photon energy, $|\Delta E| = hc/\lambda$. The second equation shows that for emission the electron drops from a higher quantum number $n_2$ to a lower quantum number $n_1$, so the final state is at a lower quantum number than the initial, and vice versa for when a photon is absorbed. 1.23 Show that the (a) wavelength of 100 nm occurs within the Lyman series, that (b) wavelength of 500 nm occurs within the Balmer series, and that (c) wavelength of 1000 nm occurs within the Paschen series. Identify the spectral regions to which these wavelengths correspond. Solution We can show the where the wavelengths occurs by calculate the maximum and minimum wavelengths of each series using the Rydberg formula. a) Lyman Series: $Max: \dfrac{1}{\lambda} = 109680\left(1 - \dfrac{1}{2^2}\right)cm^{-1}\nonumber$ $\lambda = 121.6 nm\nonumber$ $Min: \dfrac{1}{\lambda} = 109680\left(1 - \dfrac{1}{\infty}\right)cm^{-1}\nonumber$ $\lambda = 91.2 nm\nonumber$ The range for the Lyman series from 91.2 nm to 121.6 nm, therefore a wavelength of 100 nm occurs within the Lyman series. This corresponds to the ultraviolet region of the spectrum.
b) Balmer Series: $Max: \dfrac{1}{\lambda} = 109680\left(\dfrac{1}{2^2} - \dfrac{1}{3^2}\right)cm^{-1}\nonumber$ $\lambda = 656.5 nm\nonumber$ $Min: \dfrac{1}{\lambda} = 109680\left(\dfrac{1}{2^2} - \dfrac{1}{\infty}\right)cm^{-1}\nonumber$ $\lambda = 364.7 nm\nonumber$ The range for the Balmer series from 364.7 nm to 656.5 nm, therefore a wavelength of 500 nm occurs within the Balmer series. This corresponds to the visible region of the spectrum. c) Paschen Series: $Max: \dfrac{1}{\lambda} = 109680\left(\dfrac{1}{3^2} - \dfrac{1}{4^2}\right)cm^{-1}\nonumber$ $\lambda = 1875.6 nm\nonumber$ $Min: \dfrac{1}{\lambda} = 109680\left(\dfrac{1}{3^2} - \dfrac{1}{\infty}\right)cm^{-1}\nonumber$ $\lambda = 820.6 nm\nonumber$ The range for the Paschen series from 820.6 nm to 1875.6 nm, therefore a wavelength of 1000 nm occurs within the Paschen series. This corresponds to the near infrared region of the spectrum. 1.24 Calculate the wavelength and the energy of a photon associated with the series limit of the Balmer series. Solution First find the minimum wavelength for the Balmer series. \begin{align*} \dfrac{1}{\lambda} &= 109,680cm^{-1}\left(\dfrac{1}{2^2} \ - \ \dfrac{1}{\infty}\right) \[4pt] \lambda &= \ 364.7\, nm \end{align*} Now we can use the wavelength to find the energy. \begin{align*} E &= \dfrac{hc}{\lambda} \[4pt] &= \dfrac{(6.626 \times 10^{-34})(3 \times 10^8)}{364.7 \times \ 10^{-9}} \[4pt] &= 5.45 \times 10^{-19}\,J \end{align*} 1.25 For the following particles (a) an electron with a kinetic energy of 50eV, (b) a proton with a kinetic energy of 50eV , and (c) an electron in the second Bohr orbit of a hydrogen atom, calculate the de Broglie wavelength of each. Solution We use $\lambda \ = \dfrac{h}{p}$ in all cases to find $\lambda$. a.
$KE \ = \dfrac{mv^2}{2}\nonumber$ $50eV\left(\dfrac{1.602 \times 10^{-19}J}{1 eV}\right) \ = \ \dfrac{(v^2)(9.109 \times 10^{-31}kg)}{2}\nonumber$ $v = 4.19 \times 10^6\, m\centerdot s^{-1}\nonumber$ So \begin{align*} \lambda &= \dfrac{h}{p} \ = \ \dfrac{h}{mv} \[4pt] &= \dfrac{6.626 \times 10^{-34}J \centerdot s}{(9.109 \times 10^{-31}kg)(4.19 \times 10^6m\centerdot s^{-1})} \[4pt] &= 1.73 \times 10^{-10}m = 0.173\,nm \end{align*} b. Replace $m_e$ with $m_p$ in (a) to find $\lambda = 4.05 \times 10^{-3} nm$. c. We must first determine the velocity of an electron in the second Bohr orbit of a hydrogen atom. The velocity of an electron is given by the following equation: $v = \dfrac{nh}{2(\pi)m_{e}r}\nonumber$ and we know $r = \dfrac{\epsilon_0h^2n^2}{(\pi)m_ee^2}\nonumber$ substituting the two equations we find that $v \ = \ \dfrac{e^2}{2nh\epsilon_0}\nonumber$ For $n = 2$, because we are talking about the second orbit \begin{align*} v &= \ \dfrac{(1.602 \times 10^{-19}C)^2}{2(2)(6.626 \times 10^{-34}J\centerdot s)(8.854 \times 10^{-12}C^2J^{-1}m^{-1})} \[4pt] &= \ 1.09 \times 10^6m\centerdot s^{-1} \end{align*} So \begin{align*} \lambda &= \dfrac{h}{p} \ = \ \dfrac{h}{mv} \[4pt] &= \dfrac{6.626 \times 10^{-34}J\centerdot s}{(9.109 \times 10^{-31}kg)(1.09 \times 10^6m\centerdot s^{-1})} \[4pt] &= \ 6.64 \times 10^{-10}m = 0.664\,nm \end{align*} Q1.26 1. What is the velocity and wavelength of an electron with a voltage increase of 75 V? 2. What is the momentum of an electron with a de Broglie wavelength of 20 nm?(mass of an electron is $9.109 \times 10^{-31}\,kg$) Solution a.
\begin{align*} KE &= \text{(electron charge)} \times \text{(potential}) \[4pt]&= (1.602 \times 10^{-19}\,C)(75\,V) \[4pt] &= 1.2 \times 10^{-17}\,J \[4pt] &= (1/2)mv^2\end{align*} $v = \sqrt{\dfrac{2(KE)}{m}} = \sqrt{\dfrac{2(1.2 \times 10^{-17}J)}{(9.109 \times 10^{-31}kg)}} = 5.133 \times 10^{6}m*s^{-1}\nonumber$ \begin{align*} \lambda &= \dfrac{h}{mv} \[4pt] &= \dfrac{6.626 \times 10^{-34}J*s}{(9.109 \times 10^{-31}kg)(5.133 \times 10^{6}m*s^{-1})} \[4pt] &= 1.417 \times 10^{-10}\, m \end{align*} b. $\lambda = \dfrac{h}{p}\nonumber$ \begin{align*} p &= \dfrac{h}{\lambda} \[4pt] &= \dfrac{6.626 \times 10^{-34}J*s}{20 \times 10^{-9}m} \[4pt] &= 3.313 \times 10^{-26}kg*m*s^{-1} \end{align*} 1.27 Through what potential a proton must initially at rest fall so its de Broglie wavelength is $1.83 \times 10^{-10}\, m$? Q1.28 Calculate the energy and wavelength associated with a $\beta$ particle that has fallen through a potential difference of 3.2 V. Take the mass of a $\beta$ particle to be $9.1 \times 10^{-31}\text{kg}$. Solution A beta particle is an electron, so it has a -1 charge. $KE = (\beta \,\text{particle charge}) \times \, \text{Potential} = \vert-1.602 \times 10^{-19}\text{C}\vert\; \times \; 3.2\text{V}\nonumber$ $KE = 5.126 \times 10^{-19}\text{J per }\beta\text{ particle}\nonumber$ $\lambda = \dfrac{h}{p}\nonumber$ $KE = \dfrac{mv^2}{2} = \dfrac{p^2}{2m}\nonumber$ $p = \sqrt{2\;KE\;m} = \sqrt{2\;\times \;5.126 \times 10^{-19}\text{J}\;\times \;9.1 \times 10^{-31}\text{kg}}\nonumber$ $p = 9.66 \times 10^{-25}\text{kg m }s^{-1}\nonumber$ $\lambda = \dfrac{6.626 \times 10^{-34}\text{J s}}{9.66 \times 10^{-25}\text{kg m s}^{-1}} = 6.86 \times 10^{-10}\text{m}\nonumber$ Q1.28 If a proton is going through a potential difference of 3.0V, what is the momentum and wavelength associated with this proton?
(mass of a proton is equal to $1.6726 \times 10^{-27}\,kg$) Solution $(charge)*(potential) = KE\nonumber$ $charge = 1.602 \times 10^{-19}C\nonumber$ $(1.602 \times 10^{-19}C)*(3.0V) = KE\nonumber$ $KE = 4.806 \times 10^{-19}J\nonumber$ $KE = \dfrac{p^2}{2m} \nonumber$ $p= \sqrt{2(KE)m} = \sqrt{2(4.806 \times 10^{-19}J)(1.6726 \times 10^{-27}kg)} = 4.01 \times 10^{-23}kg*m*s^{-1}\nonumber$ $\lambda = h/p = \dfrac{6.626 \times 10^{-34}J \cdot s}{4.01 \times 10^{-23} kg*m*s^{-1}} = 1.65 \times 10^{-11}m = 16.5\, pm\nonumber$ 1.29 Neutron diffraction is a modern technique to study structure. In neutron diffraction, a collimated beam of neutrons is generated at some temperature from a high-energy neutron source. This is achieved at several accelerator facilities around the world. If the speed of a neutron is $v_n = (3k_BT/m)^{1/2}$ with $m$ the mass of the neutron, what is the required temperature so that the neutrons have a de Broglie wavelength of 200 pm? Take the mass of a neutron to be $1.67 \times 10^{-27}\, kg$. 1.29 While studying quantum mechanics one day, you wondered what temperature would be required for the Jumbo Jawbreaker you were about to eat to have a de Broglie wavelength of $1.9 \times 10^{-24}$ meters? Assuming that the speed of a Jumbo Jawbreaker can be calculated from the equation $\nu_n = (\dfrac{3k_BT}{m})^{\dfrac{1}{2}}$. You quickly measure the mass of your Jumbo Jawbreaker and found it to be $0.1kg$. Solution Knowing that the de Broglie wavelength has the form, $\lambda = \dfrac{h}{m\nu_n}\nonumber$ we can substitute the given equation for speed into the de Broglie wavelength equation $\lambda = \dfrac{h}{\sqrt{3mk_BT}} \nonumber$ rearrange to solve for temperature $T = \dfrac{h^2}{3mk_B\lambda^2}\nonumber$ Substituting in constants we can solve for temperature in Kelvin. Using $h = 6.626 \times 10^{-34}\, J \cdot s$, $m = 0.1kg$, $k_B = 1.381 \times 10^{-23}\, J \cdot K^{-1}$, and $\lambda = 1.9 \times 10^{-24}$ meters.
We find that $T = \dfrac{(6.626 \times 10^{-34}\, J \cdot s)^2}{3(0.1\,kg)(1.381 \times 10^{-23}\, J \cdot K^{-1})(1.9 \times 10^{-24}\,m)^2}\nonumber$ Therefore $T = 2.94 \times 10^{4}\,K \nonumber$ 1.30 For linear motion, show that a small change in the momentum, $\Delta\text{p}$, produces a change in kinetic energy, $\Delta\text{KE}$, of $\Delta\text{KE} = \dfrac{p_{0}}{m}\Delta\text{p}\nonumber$ where $p_{0}$ is initial momentum. Solution Since $\Delta\text{p} = dp$ and $\Delta\text{KE} = dKE$, $KE = \dfrac{p^2}{2m}\nonumber$ $dKE = \dfrac{p_{0}}{m}dp\nonumber$ $\Delta\text{KE} = \dfrac{p_{0}}{m}\Delta\text{p}\nonumber$ 1.31 Derive the Bohr formula for $\dfrac{1}{\lambda_{vac}}$ for a multi-proton and single electron atom such as $\ce{He^{+}}$ or $\ce{Li^{2+}}$. Solution Each of the Z protons in the nucleus interacts with the single electron through the same Coulomb force $(f)$. The total force of a nucleus with charge Z can be written as the sum of each proton individually interacting with the electron. $f_{Total} = \displaystyle\sum\limits_{i=1}^Z \dfrac{e^2}{4r^2\pi\epsilon_\circ}\nonumber$ Simplifying this expression we find that $f_{Total} = \dfrac{Ze^2}{4r^2\pi\epsilon_\circ}\nonumber$ To prevent the electron from spiraling into or away from the nucleus, the centrifugal force $f = \dfrac{m_ev^2}{r}$ is equal to the Coulombic force.
Therefore $\dfrac{Ze^2}{4r^2\pi\epsilon_\circ}=\dfrac{m_e\nu^2}{r}\nonumber$ For stability purposes a condition requires electrons to have a set number of complete wavelengths around the circumference of the orbit, or $2\pi r = n\lambda, \quad n = 1,2,3...\nonumber$ using the de Broglie wavelength formula $\lambda = \dfrac{h}{p} = \dfrac{h}{m\nu}$ we find that $m_e\nu r = \dfrac{nh}{2\pi}\nonumber$ Solving for $\nu$ and substituting into our force relationship $\dfrac{Ze^2}{4r^2\pi\epsilon_\circ}=\dfrac{m_e\nu^2}{r}$ We find that $r = \dfrac{n^2h^2\epsilon_\circ}{m_ee^2Z\pi}\nonumber$ Now solving for the total energy of the system $E = KE +V(r) \ = \dfrac{1}{2}m_e\nu^2 - \dfrac{Ze^2}{4r\pi\epsilon_\circ}\nonumber$ Substituting in $m_e\nu^2$ found above into the kinetic energy portion we find $E = \dfrac{Ze^2}{8r\pi\epsilon_\circ} - \dfrac{Ze^2}{4r\pi\epsilon_\circ} = -\dfrac{Ze^2}{8r\pi\epsilon_\circ}\nonumber$ Substituting $r$ from above we quantize the energy such that $E_n = \dfrac{-Z^2m_ee^4}{8n^2h^2\epsilon_\circ^2}\nonumber$ Since this energy is quantized, the change in energy states will occur where electrons are excited by light or $h\nu$ into higher quantum states. Therefore, with $n_2 > n_1$, $\Delta E = \dfrac{Z^2m_ee^4}{8h^2\epsilon_\circ^2} \big(\dfrac{1}{n_1^2}-\dfrac{1}{n_2^2}\big)=h\nu\nonumber$ Finally solve for $\dfrac{1}{\lambda_{vac}}$ remembering that $h\nu = \dfrac{hc}{\lambda_{vac}}$ where $c$ is the speed of light. We obtain our final solution $\boxed{\dfrac{1}{\lambda_{vac}} = \dfrac{Z^2m_ee^4}{8h^3c\epsilon_\circ^2} \big(\dfrac{1}{n_1^2}-\dfrac{1}{n_2^2}\big)}\nonumber$ 1.32 The series in the $\ce{He^{+}}$ spectrum that corresponds to the set of transitions where the electron falls from a higher level into the n = 4 state is called the Pickering series, an important series in solar astronomy. Derive the formula for the wavelengths of the observed lines in this series. In what region of the spectrum does it occur?
Solution We use the Bohr formula $\tilde{v} = Z^2R_H \left(\dfrac{1}{n_1^2} - \dfrac{1}{n_2^2}\right)$ In the Pickering series of the helium spectrum, $Z = 2$ and the lower level is $n_1 = 4$, so $\tilde{v} = 4(109,680\, cm^{-1})\left(\dfrac{1}{4^2} - \dfrac{1}{n_2^2}\right)$ where $n_2 = 5, 6, 7, 8....$ For $n_2 = 5$, $\tilde{v}\ = 9871\, \text{cm}^{-1} \nonumber$ or $\lambda = 1.013 \times 10^{-6}\, m \nonumber$ which lies in the near-infrared region of the spectrum. 1.33A Using the Bohr model, find the third ionization energy for the Lithium atom in eV and in J. Solution Energy transitions for a hydrogen-like atom are given by $\Delta E = Z^2 R_y \left(\dfrac{1}{n_i^2} - \dfrac{1}{n_f^2}\right) \nonumber$ where $Z$ is the atomic number and $R_y$ is 13.6 eV. When a hydrogen-like atom is ionized, the electron transitions to its highest bound state, at $n = \infty$, so its quantum number $n_f$ goes to infinity, making $1/n_f^2 = 0$. So $E_{ionization} = (3)^2(13.6\, eV)\left(\dfrac{1}{1^2} - 0\right) = 122.4\, eV \nonumber$ $122.4\, eV \times 1.602 \times 10^{-19}\, J/eV = 1.96 \times 10^{-17}\, J \nonumber$ 1.33B Find the ionization energy in eV and $kJ \centerdot mol^{-1}$ of singly ionized helium in the $n=3$ state using Bohr theory Solution To find the ionization energy of helium, consider the case where we move an electron from the $n=3$ state to an infinite distance from the nucleus. Using the Bohr formula for $\tilde{v}$.
$\tilde{v}=Z^2R_H\left(\dfrac{1}{n_1^2}-\dfrac{1}{n_2^2}\right)\nonumber$ $\tilde{v}=2^2(109680 \,cm^{-1})\left(\dfrac{1}{3^2}-\dfrac{1}{\infty^2}\right)\nonumber$ $\tilde{v}=4.87467 \times 10^4\, cm^{-1}\nonumber$ Then plugging into $E=hc \tilde{v}$ $E= (6.626 \times 10^{-34} J \centerdot s)(2.998 \times 10^8 m \centerdot s^{-1})(4.87467 \times 10^6 m^{-1})\nonumber$ $E=9.68 \times 10^{-19} J = 583\, kJ \centerdot mol^{-1} = 6.04\, eV\nonumber$ 1.34A The speed of an electron in the nth Bohr orbit is given by the equation: $v = \dfrac{e^2}{2\epsilon_0nh}\nonumber$ The force acting between an electron and proton a distance $r$ from one another is given by Coulomb's law: $f = \dfrac{e^2}{4\pi\epsilon_0{r^2}}\nonumber$ The centrifugal force acts in opposition to the Coulombic force and is given by the equation: $f = \dfrac{mv^2}{r}\nonumber$ Find the values of $v$ for the Bohr orbits of n = 4, n = 5, and n = 6, and find the total force in an atom between a proton and electron a distance of 5 x 10-11 m away from one another, with the electron moving at a speed of 2 x 106 m/s. Solution To find $v$ simply substitute the values for n into the equation: $v = \dfrac{e^2}{2\epsilon_0nh}\nonumber$ For the values of $v$ at n = 4, n = 5, n = 6, we get n = 4 $v_4= 546,923 \,m \cdot s^{-1}$ n = 5 $v_5 = 437,538 \,m \cdot s^{-1}$ n = 6 $v_6 = 364,615 \,m \cdot s^{-1}$ To find the net force between the proton and electron, subtract the Coulombic force from the centrifugal force and substitute appropriate values for the constants: $f = \dfrac{mv^2}{r} - \dfrac{e^2}{4\pi\epsilon_0{r^2}}\nonumber$ which gives: $f = 7.287 \times 10^{-8}\, N - 9.227 \times 10^{-8}\, N = -1.94 \times 10^{-8}\, N\nonumber$ The negative sign indicates that the Coulombic attraction exceeds the centrifugal term, so the net force is directed toward the nucleus. 1.34B Prove that the speed of electron in an nth Bohr orbit is $v = \dfrac{e^2}{2\epsilon_0nh}$ Then find the first few values of $v$ in the Bohr orbits. Solution First we have to know that the angular momentum of the electron revolving in the $n$th Bohr orbit is quantized: $mvr = \dfrac{nh}{2\pi}$ where $r$ is the radius of the $n$th Bohr orbit.
Kinetic energy of the electron is given as $\dfrac{mv^2}{2} = \dfrac{e^2}{2(4\pi\epsilon_0)r}$ So the radius $r$ must equal $r = \dfrac{e^2}{(4\pi\epsilon_0)mv^2}$ Now after substituting the value above into the first equation, we get $mv\left(\dfrac{e^2}{(4\pi\epsilon_0)mv^2}\right) = \dfrac{nh}{2\pi}$ Thus the speed of the electron in the $n$th Bohr orbit is $v = \dfrac{e^2}{2\epsilon_0nh}$ For the first few values of $v$ in the $n$th Bohr orbit, we get n = 1 $v_1= 2.188 \times 10^{6}\, m \cdot s^{-1}$ n = 2 $v_2 = 1.094 \times 10^{6}\, m \cdot s^{-1}$ n = 3 $v_3 = 7.292 \times 10^{5}\, m \cdot s^{-1}$ 1.35 What is the uncertainty in an electron's position if the uncertainty in measuring its velocity is 5 $m \cdot s^{-1}$? Solution According to the Heisenberg Uncertainty Principle $\Delta x \Delta p \geq \dfrac{\hbar}{2}\nonumber$ $\Delta x \geq \dfrac{\hbar}{2\Delta p}\nonumber$ Then by definition $\Delta p = m \Delta v$ $\Delta x \geq \dfrac{\hbar}{2m \Delta v}\nonumber$ $\Delta x \geq \dfrac{6.626 \times 10^{-34} J \centerdot s}{4\pi (9.109 \times 10^{-31} kg)(5\, m \centerdot s^{-1})}\nonumber$ $\Delta x \geq 1.16 \times 10^{-5}\,m\nonumber$ 1.35 What is the uncertainty in the speed of an electron if we locate it to within 50 pm? Solution It is known that the uncertainty of momentum is given by the expression $\Delta{p} = m\Delta{v} \nonumber$ and Heisenberg’s Uncertainty Principle states that $\Delta{x}\Delta{p} \ge h \nonumber$ Then \begin{align*} \Delta{x}(m\Delta{v}) &\ge h \[4pt] \Delta{v} &\ge \frac{h}{m\Delta{x}} \[4pt] &\ge \frac{6.626 \cdot 10^{-34} J \cdot s}{(9.109 \cdot 10^{-31} kg)(50 \cdot 10^{-12} m)} \[4pt] &\ge 1.45 \cdot 10^7 m \cdot s^{-1} \end{align*} 1.35 If we know the velocity of an electron to within $3.5 \times 10^{7}\dfrac{m}{s}$, then what is the uncertainty in its position?
Solution Using the Heisenberg Uncertainty Principle, $\Delta x \Delta p \geq h\nonumber$ with $\Delta p = m\Delta v$, and rearranging to solve for the uncertainty in position, $\Delta x \geq \dfrac{h}{m\Delta v}\nonumber$ we can use $h = 6.626 \times 10^{-34} J \cdot s$, $m = 9.109 \times 10^{-31} kg$, and $\Delta v = 3.5 \times 10^{7}\dfrac{m}{s}$ and find that $\Delta x \geq \dfrac{(6.626 \times 10^{-34} J \cdot s)}{(9.109 \times 10^{-31} kg)(3.5 \times 10^{7}\dfrac{m}{s})}\nonumber$ and thus $\Delta x \geq 2.078 \times 10^{-11}\, meters\nonumber$ 1.35 If a proton is located to within 1 angstrom, what is its uncertainty in velocity? Solution The Heisenberg uncertainty principle states $\Delta x \Delta p \geq \dfrac{h}{4\pi}\nonumber$ $\Delta x\, m_p \Delta v \geq \dfrac{h}{4\pi}\nonumber$ $\Delta v \geq \dfrac{h}{4\pi m_p \Delta x}\nonumber$ where $m_p$ is the mass of a proton and the uncertainty in position is taken to be on the same order as the region the proton is confined to, here 1 angstrom ($\Delta x = 10^{-10}\, m$): $\Delta v \geq \dfrac{6.626 \times 10^{-34}\, J \cdot s}{4\pi (1.67 \times 10^{-27}\, kg)(10^{-10}\, m)} = 315.7\, m \cdot s^{-1}\nonumber$ 1.36 If the position of an electron is within the 10 pm interval, what is the uncertainty of the momentum? Is this value similar to that of an electron in the first Bohr orbit? Solution According to the uncertainty principle for position and momentum, \begin{align*} ∆x∆p &≥ h \[4pt] \Delta p &≥ \dfrac{h}{∆x} \end{align*} by substituting the respective values we get, \begin{align*} \Delta p &≥ \dfrac{6.626 \times 10^{-34}\, J \cdot s}{10.0 \times 10^{-12}\, m} \[4pt] &≥ 6.6 \times 10^{-23}\, kg \cdot m \cdot s^{-1} \end{align*} Therefore, uncertainty in the momentum of an electron will be $6.6 \times 10^{-23}\, kg \cdot m \cdot s^{-1}$. We can calculate the momentum of an electron in the first Bohr radius by using $v$ since we know that, \begin{align*} p &= m_ev \[4pt] &= (9.109 \times 10^{-31} kg)(2.188 \times 10^{6} m \cdot s^{-1}) \[4pt] &= (1.992 \times 10^{-24} kg \cdot m \cdot s^{-1}) \end{align*} The uncertainty of the momentum of an electron somewhere in a 10 pm interval is greater than the momentum of an electron in the first Bohr radius.
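The uncertainty bounds in these last few problems can be cross-checked with a short script; this is an illustrative sketch that simply recomputes each bound from standard rounded constant values:

```python
import math

# Recomputing three uncertainty-principle bounds from standard constants
h   = 6.62607e-34    # Planck constant, J s
m_e = 9.10938e-31    # electron mass, kg
m_p = 1.67262e-27    # proton mass, kg

# electron with Delta v = 3.5e7 m/s, using Delta x >= h / (m Delta v)
dx_electron = h / (m_e * 3.5e7)
print(dx_electron)          # ~2.08e-11 m

# proton confined to 1 angstrom, using Delta v >= h / (4 pi m Delta x)
dv_proton = h / (4 * math.pi * m_p * 1e-10)
print(dv_proton)            # ~315 m/s

# electron located within 10 pm, using Delta p >= h / Delta x
dp_10pm = h / 10e-12
print(dp_10pm)              # ~6.6e-23 kg m s^-1
```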
1.37 The Heisenberg Uncertainty Principle $\Delta x\Delta p \ge \dfrac{h}{4 \pi}\nonumber$ Show both sides have the same units Solution $\Delta x$ has units of meters, and $\Delta p = m \Delta v\nonumber$ has units of $kg \cdot m \cdot s^{-1}$. Planck's constant has units of $J \cdot s\nonumber$ and $J= \dfrac{kg \cdot m^2}{s^2}\nonumber$ so applying these units we get $m\, \dfrac{kg\cdot m}{s} \ge \dfrac{kg \cdot m^2\cdot s}{s^2}\nonumber$ which simplifies to $\dfrac{kg \cdot m^2}{s} \ge \dfrac{kg \cdot m^2}{s}\nonumber$ Therefore both sides have the same units. 1.38 The relationship between energy and time can be seen through the following uncertainty principle: $∆E∆t ≥ h$. Through this relationship, it can be interpreted that for a particle of mass $m$, the energy ($E=mc^2$) can come from nothing and return to nothing within a time $\Delta t \le h/(mc^2)$. A real particle is one that lasts for time $\Delta t$ or more; likewise, particles that last for less than time $\Delta t$ are called virtual particles. For a charged subatomic particle, a pion, the mass is $2.5 \times 10^{-28} kg$. For a pion to be considered a real particle, what is its minimum lifetime? Solution Based on the uncertainty principle for energy and time: $∆E∆t ≥ h \nonumber$ $\Delta t≥ \dfrac{h}{mc^2}\nonumber$ since $E=mc^2$. By plugging in the values, you get \begin{align*} \Delta t &≥ \dfrac{6.626 \times 10^{-34}\, J \cdot s}{(2.5 \times 10^{-28}\, kg)(2.998 \times 10^{8}\, m \cdot s^{-1})^2} \[4pt] &≥ 2.9 \times 10^{-23}\, s \end{align*} Therefore, the minimum lifetime of the pion, if it is to be considered a real particle, is $2.9 \times 10^{-23} s$.
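The pion lifetime bound is a one-line computation; a minimal sketch using the values from the problem statement:

```python
# Minimal check of the pion lifetime bound (values from the problem statement)
h = 6.62607e-34      # Planck constant, J s
m_pion = 2.5e-28     # pion mass, kg
c = 2.998e8          # speed of light, m/s

dt_min = h / (m_pion * c**2)   # Delta t >= h / (m c^2)
print(dt_min)                  # ~2.9e-23 s
```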
The aim of this section is to give a fairly brief review of waves in various shaped elastic media—beginning with a taut string, then going on to an elastic sheet, a drumhead, first of rectangular shape then circular, and finally considering elastic waves on a spherical surface, like a balloon. The reason we look at this material here is that these are “real waves”, hopefully not too difficult to think about, and yet mathematically they are the solutions of the same wave equation the Schrödinger wavefunction obeys in various contexts, so should be helpful in visualizing solutions to that equation, in particular for the hydrogen atom. • 2.1: The One-Dimensional Wave Equation The mathematical description of the one-dimensional waves can be expressed as solutions to the "wave equation." It may not be surprising that not all possible waves will satisfy the wave equation for a specific system since waves solutions must satisfy both the initial conditions and the boundary conditions. This results in a subset of possible solutions. In the quantum world, this means that the boundary conditions are responsible somehow for the quantization phenomena in Chapter 1. • 2.2: The Method of Separation of Variables Method of separation of variables is one of the most widely used techniques to solve partial differential equations and  is based on the assumption that the solution of the equation is separable, that is, the final solution can be represented as a product of several functions, each of which is only dependent upon a single independent variable. If this assumption is incorrect, then clear violations of mathematical principles will be obvious from the analysis. • 2.3: Oscillatory Solutions to Differential Equations Characterizing the spatial and temporal components of a wave requires solving homogeneous second order linear differential equations with constant coefficients. This results in oscillatory solutions (in space and time). 
These solutions solved via specific boundary conditions are standing waves. • 2.4: The General Solution is a Superposition of Normal Modes Since the wave equation is a linear differential equation, the Principle of Superposition holds and the combination of two solutions is also a solution. • 2.5: A Vibrating Membrane It is pleasant to find that these waves in higher dimensions satisfy wave equations which are a very natural extension of the one we found for a string, and—very important—they also satisfy the Principle of Superposition, in other words, if waves meet, you just add the contribution from each wave. In the next two paragraphs, we go into more detail, but this Principle of Superposition is the crucial lesson. • 2.E: The Classical Wave Equation (Exercises) These are homework exercises to accompany Chapter 2 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap. 02: The Classical Wave Equation Learning Objectives • To introduce the wave equation including time and position dependence In the most general sense, waves are particles or other media with wavelike properties and structure (presence of crests and troughs). The simplest wave is the (spatially) one-dimensional sine wave (Figure 2.1.1 ) with a varying amplitude $A$ described by the equation: $A(x,t) = A_o \sin (kx - \omega t + \phi) \nonumber$ where • $A_o$ is the maximum amplitude of the wave, the maximum distance from the highest point of the disturbance in the medium (the crest) to the equilibrium point during one wave cycle. In Figure 2.1.1 , this is the maximum vertical distance between the baseline and the wave. • $x$ is the space coordinate • $t$ is the time coordinate • $k$ is the wavenumber • $\omega$ is the angular frequency • $\phi$ is the phase constant. One can categorize “waves” into two different groups: traveling waves and stationary waves.
Traveling Waves Traveling waves, such as ocean waves or electromagnetic radiation, are waves that “move,” meaning that they have a frequency and are propagated through time and space. Another way of describing this property of “wave movement” is in terms of energy transmission – a wave travels, or transmits energy, over a set distance. The most important kinds of traveling waves in everyday life are electromagnetic waves, sound waves, and perhaps water waves, depending on where you live. It is difficult to analyze waves spreading out in three dimensions, reflecting off objects, etc., so we begin with the simplest interesting examples of waves, those restricted to move along a line. Let’s start with a rope, like a clothesline, stretched between two hooks. You take one end off the hook, holding the rope, and, keeping it stretched fairly tight, wave your hand up and back once. If you do it fast enough, you’ll see a single bump travel along the rope: This is the simplest example of a traveling wave. You can make waves of different shapes by moving your hand up and down in different patterns, for example an upward bump followed by a dip, or two bumps. You’ll find that the traveling wave keeps the same shape as it moves down the rope. Taking the rope to be stretched tightly enough that we can take it to be horizontal, we’ll use its rest position as our x-axis (Figure 2.1.1 ). The $y$-axis is taken vertically upwards, and we only wave the rope in an up-and-down way, so actually $y(x,t)$ will be how far the rope is from its rest position at $x$ at time $t$: that is, Figure 2.1.2 shows where the rope is at a single time $t$. We can now express the observation that the wave “keeps the same shape” more precisely. Taking for convenience time $t = 0$ to be the moment when the peak of the wave passes $x = 0$, we graph here the rope’s position at t = 0 and some later times $t$ as a movie (Figure 2.1.3 ). 
Denoting the first function by $y(x,0) = f(x)$, the second is $y(x,t) = f(x- v t)$: it is the same function with the “same shape,” but just moved over by $v t$, where $v$ is the velocity of the wave. To summarize: on sending a traveling wave down a rope by jerking the end up and down, from observation the wave travels at constant speed and keeps its shape, so the displacement y of the rope at any horizontal position $x$ at time $t$ has the form $y(x,t)=f(x-v t) \label{2.1.0}$ We are neglecting frictional effects—in a real rope, the bump gradually gets smaller as it moves along. Standing Waves In contrast to traveling waves, standing waves, or stationary waves, remain in a constant position with crests and troughs in fixed intervals. One way of producing a variety of standing waves is by plucking a melody on a set of guitar or violin strings. When placing one’s finger on a part of the string and then plucking it with another, one has created a standing wave. The solutions to this problem involve the string oscillating in a sine-wave pattern (Figure 2.1.4 ) with no vibration at the ends. There is also no vibration at a series of equally-spaced points between the ends; these "quiet" places are nodes. The places of maximum oscillation are antinodes. Bound vs. Free particles and Traveling vs. Stationary Waves Traveling waves exhibit movement and propagate through time and space, while stationary waves have crests and troughs at fixed intervals separated by nodes. "Free" particles, like the photoelectron discussed in the photoelectric effect, exhibit traveling wave like properties. In contrast, electrons that are "bound" will exhibit stationary wave like properties. The latter was invoked for the Bohr atom for quantizing the angular momentum of an electron bound within a hydrogen atom.
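The statement that $y(x,t) = f(x-vt)$ "keeps the same shape" can be checked numerically: the displacement found at $x + v\,\Delta t$ a time $\Delta t$ later equals the displacement at $x$ now. This sketch uses an arbitrary Gaussian bump for $f$, assumed purely for illustration:

```python
import math

# "Same shape" check for y(x,t) = f(x - v t): the displacement at
# (x + v*dt, t + dt) equals the displacement at (x, t); the bump just
# translates at speed v.  A Gaussian bump is assumed only for illustration.
v = 3.0                              # wave speed (arbitrary)

def f(s):
    return math.exp(-s**2)           # the bump's shape

def y(x, t):
    return f(x - v * t)

x, t, dt = 0.5, 1.2, 0.25
print(y(x + v*dt, t + dt) - y(x, t))   # ~0: same displacement, shifted over
```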
The Wave Equation The mathematical description of the one-dimensional waves (both traveling and standing) can be expressed as $\dfrac{\partial^2 u(x,t)}{\partial x^2} = \dfrac{1}{v^2} \dfrac{\partial^2 u(x,t)}{\partial t^2} \label{2.1.1}$ where $u$ is the amplitude of the wave at position $x$ and time $t$, and $v$ is the velocity of the wave (Figure 2.1.2 ). Equation $\ref{2.1.1}$ is called the classical wave equation in one dimension and is a linear partial differential equation. It tells us how the displacement $u$ can change as a function of position and time. The solutions to the wave equation ($u(x,t)$) are obtained by appropriate integration techniques. It may not be surprising that not all possible waves will satisfy Equation $\ref{2.1.1}$; the waves that do must satisfy both the initial conditions and the boundary conditions, i.e., conditions on how the wave is produced and what is happening at the ends of the string. For example, for a standing wave on a string of length $L$ held taut at two ends (Figure 2.1.3 ), the boundary conditions are $u(0,t)=0\ \label{2.1.3a}$ and $u(L,t)=0 \label{2.1.3b}$ for all values of $t$. As expected, different systems will have different boundary conditions and hence different solutions. Mathematical Origin of Quantization The initial conditions and the boundary conditions used to solve the wave equation will result in restrictions on the "allowed" waves that can exist, in a similar fashion that only certain solutions exist for the electrons in the Bohr atom. The first six wave solutions $u(x,t)$ to Equation $\ref{2.1.1}$ subject to the boundary conditions in Equations $\ref{2.1.3a}$ and $\ref{2.1.3b}$ (discussed in detail later) result in the waves in Figure 2.1.5 . These are standing waves that exist with frequencies based on the number of nodes (0, 1, 2, 3,...) they exhibit (more discussed in the following Section).
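As an illustrative check (not from the text), a standing wave of the form $u(x,t) = \sin(n\pi x/L)\cos(n\pi v t/L)$ can be verified against the wave equation and the boundary conditions with finite differences; the values of $L$, $v$, and $n$ below are arbitrary:

```python
import math

# Finite-difference check that u(x,t) = sin(n*pi*x/L) * cos(n*pi*v*t/L)
# satisfies u_xx = (1/v^2) u_tt and the boundary conditions u(0,t) = u(L,t) = 0.
L, v, n = 1.0, 2.0, 3        # arbitrary illustrative values

def u(x, t):
    return math.sin(n * math.pi * x / L) * math.cos(n * math.pi * v * t / L)

x, t, h = 0.3, 0.17, 1e-5    # sample point and step size
u_xx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2
u_tt = (u(x, t + h) - 2*u(x, t) + u(x, t - h)) / h**2
print(u_xx - u_tt / v**2)    # ~0: the wave equation is satisfied
print(u(0.0, t), u(L, t))    # ~0, ~0: boundary conditions satisfied
```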
Curvature of Wave Solutions Since the acceleration of the wave amplitude (the right side of Equation $\ref{2.1.1}$) is proportional to the curvature $\dfrac{\partial^2 u}{\partial x^2}$ (the left side), greater curvature in the material produces a greater acceleration, i.e., a greater changing velocity of the wave (Figure 2.1.4 ) and a greater frequency of oscillation. As discussed later, the higher frequency waves (i.e., more nodes) are higher energy solutions; this is as expected from the experiments discussed in Chapter 1, including Planck's equation $E=h\nu$. Summary Waves exhibit movement and are propagated through time and space. The two basic types of waves are traveling and stationary. Both exhibit wavelike properties and structure (presence of crests and troughs) which can be mathematically described by a wavefunction or amplitude function. Both wave types display movement (up and down displacement), but in different ways. Traveling waves have crests and troughs which are constantly moving from one point to another as they travel over a length or distance. In this way, energy is transmitted along the length of a traveling wave. In contrast, standing waves have nodes at fixed positions; this means that the wave’s crests and troughs are also located at fixed intervals. Therefore, standing waves only experience vibrational movement (up and down displacement) on these set intervals - no movement or energy travels along the length of a standing wave.
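The curvature argument can be made concrete: for the standing-wave shapes $X_n(x) = \sin(n\pi x/L)$, the ratio $X_n''/X_n = -(n\pi/L)^2$, so the magnitude of the curvature (and with it the oscillation frequency and, via $E=h\nu$, the energy) grows as $n^2$. A small numerical sketch, with an arbitrary sample point chosen for illustration:

```python
import math

# Curvature of the standing-wave shapes X_n(x) = sin(n*pi*x/L), estimated by
# a central finite difference; the ratio X''/X should approach -(n*pi/L)^2.
L, x, h = 1.0, 0.123, 1e-5   # arbitrary sample point and step (illustration)
ratios = []
for n in (1, 2, 3):
    def X(s, n=n):
        return math.sin(n * math.pi * s / L)
    X_pp = (X(x + h) - 2*X(x) + X(x - h)) / h**2
    ratios.append(X_pp / X(x))
print(ratios)   # ~ [-9.87, -39.5, -88.8], i.e. -(n*pi/L)^2 growing as n^2
```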
Learning Objectives • To be introduced to the Separation of Variables technique as a method to solve wave equations Solving the wave equation involves identifying the functions $u(x,t)$ that solve the partial differential equation representing the amplitude of the wave at any position $x$ at any time $t$ $\dfrac{\partial^2 u(x,t)}{\partial x^2} = \dfrac{1}{v^2} \dfrac{\partial^2 u(x,t)}{\partial t^2} \label{2.1.1}$ This wave equation is a type of second-order partial differential equation (PDE) involving two variables - $x$ and $t$. PDEs differ from ordinary differential equations (ODEs), which involve functions of only one variable. However, this difference makes PDEs appreciably more difficult to solve. In fact, the vast majority of PDEs cannot be solved analytically, and those classes of special PDEs that can be solved analytically invariably involve converting the PDE into one or more ODEs and then solving the ODEs independently. One of these approaches is the method of separation of variables. Method of Separation of Variables The general application of the Method of Separation of Variables for a wave equation involves three steps: 1. We find all solutions of the wave equation with the general form $u(x,t)= X(x)T(t) \nonumber$ for some function $X(x)$ that depends on $x$ but not $t$ and some function $T(t)$ that depends only on $t$, but not $x$. It is of course too much to expect that all solutions of Equation $\ref{2.1.1}$ are of this form; however, if we find a set of solutions $\{X_i(x)T_i(t)\}$, then, since the wave equation is a linear equation, $u(x,t)=\sum_i c_i X_i(x)T_i(t) \label{gen1}$ is also a solution for any choice of the constants $c_i$. 2. Impose constraints on the solutions based on the knowledge of the system. These are called the boundary conditions, which specify the values of $u(x,t)$ at the extremes ("boundaries").
This is a similar constraint to the solution as in initial value problems, in which the conditions $x(t_i)$ are specified at a specific time $t_i$. The goal is then to select the constants $c_i$ in Equation \ref{gen1} so that the boundary conditions are also satisfied. Method of separation of variables is one of the most widely used techniques to solve partial differential equations and is based on the assumption that the solution of the equation is separable, that is, the final solution can be represented as a product of several functions, each of which is only dependent upon a single independent variable. If this assumption is incorrect, then clear violations of mathematical principles will be obvious from the analysis. A Vibrating String Held Fixed Between Two Points As discussed in Section 2.1, the solutions to the string example $u(x,t)$ for all $x$ and $t$ would be assumed to be a product of two functions: $X(x)$ and $T(t)$, where $X(x)$ is a function of only $x$, not $t$ and $T(t)$ is a function of $t$, but not $x$. $u(x,t)= X(x)T(t) \label{2.2.1}$ Substituting Equation $\ref{2.2.1}$ into the one-dimensional wave equation (Equation $\ref{2.1.1}$) gives $\dfrac{\partial^2 X(x)T(t)}{\partial x^2} = \dfrac{1}{v^2} \dfrac{\partial^2 X(x)T(t)}{\partial t^2} \label{2.2.2}$ Since $X$ is not a function of $t$ and $T$ is not a function of $x$, Equation $\ref{2.2.2}$ can be simplified $T(t) \dfrac{\partial^2 X(x)}{\partial x^2} = \dfrac{1}{v^2} X(x) \dfrac{\partial^2T(t)}{\partial t^2} \label{2.2.3}$ Collecting the expressions that depend on $x$ on the left side of Equation $\ref{2.2.3}$ and those that depend on $t$ on the right side results in $\dfrac{1}{X(x)} \dfrac{\partial^2 X(x)}{\partial x^2} = \dfrac{1}{v^2} \dfrac{1}{T(t)} \dfrac{\partial^2T(t)}{\partial t^2} \label{2.2.3a}$ Equation $\ref{2.2.3a}$ is an interesting equation: since the left side depends only on $x$ and the right side only on $t$, each side must equal the same fixed constant $K$, as that is the only way the equality can hold for all values of $t$ and $x$.
Therefore, the equation can be separated into two ordinary differential equations: $\dfrac{d^2T(t)}{dt^2} - Kv^2 T(t) = 0 \label{2.2.4a}$ $\dfrac{d^2X(x)}{dx^2} - K X(x) = 0 \label{2.2.4b}$ Hence, by substituting the new product solution form (Equation \ref{2.2.1}) into the original wave equation (Equation $\ref{2.1.1}$), we converted a partial differential equation of two variables ($x$ and $t$) into two ordinary differential equations (differential equations containing a function or functions of one independent variable and its derivatives). Each differential equation involves only one of the independent variables ($x$ or $t$). • If $K=0$, then the solution is the trivial $u(x,t)=0$ solution (i.e., no wave exists). • If $K > 0$, then the general solution of Equation $\ref{2.2.4b}$ is $X(x) = A e^{\sqrt{K}x} + B e^{-\sqrt{K}x} \label{2.2.5}$ At this stage, Equation $\ref{2.2.5}$ implies that the solution to the two ordinary differential wave equations will be an infinite number of waves with no quantization to limit those that are allowed (i.e., any values of $A$ and $B$ are possible). Narrowing down the general solution to a specific solution occurs when taking the boundary conditions into account. The boundary conditions for this problem are that the wave amplitude equals zero at the ends of the string $u(0,t) = X(x)T(t) = 0 \label{2.2.6a}$ $u(L,t) = X(x)T(t) = 0 \label{2.2.6b}$ for all times $t$. Applying the two boundary conditions in Equations $\ref{2.2.6a}$ and $\ref{2.2.6b}$ to the general solution in Equation $\ref{2.2.5}$ results in relationships between $A$ and $B$: $X(x=0)= A + B = 0 \label{2.2.7a}$ and $X(x=L)= A e^{\sqrt{K}L} + B e^{-\sqrt{K}L} = 0 \label{2.2.7b}$ Ignore the Trivial Solution One solution to this is that $A = B = 0$, but this is just the trivial solution $u(x,t)=0$ again, and one we ignore since it provides no physical solution to the problem other than the knowledge that $0=0$, which is not that inspiring of a result.
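The fact that the exponential solutions admit only the trivial answer can be seen numerically: Equations 2.2.7a and 2.2.7b form a 2x2 linear system in $A$ and $B$ whose determinant is nonzero for any $K>0$, so only $A=B=0$ satisfies both. An illustrative sketch with arbitrary values $K=4$, $L=1$:

```python
import math

# Boundary conditions A + B = 0 and A e^{sL} + B e^{-sL} = 0 (s = sqrt(K))
# as a 2x2 linear system; its determinant is nonzero for K > 0, so the only
# solution is A = B = 0.  Arbitrary illustrative values:
K, L = 4.0, 1.0
s = math.sqrt(K)
det = 1.0 * math.exp(-s * L) - 1.0 * math.exp(s * L)   # det [[1, 1], [e^{sL}, e^{-sL}]]
print(det)   # ~ -7.25, nonzero -> only the trivial solution A = B = 0
```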
Both Equations $\ref{2.2.4a}$ and $\ref{2.2.4b}$ can be generalized into the following equation $\dfrac{d^2y(x)}{dx^2} - k^2 y(x) = 0 \label{2.2.8}$ where $k$ is a real constant (i.e., not complex). Equation $\ref{2.2.8}$ is a homogeneous second order linear differential equation. The general solution to these types of differential equations has the form $y(x) = e^{\alpha x} \label{2.2.9}$ where $\alpha$ is a constant to be determined by the constraints of the system. Substituting Equation $\ref{2.2.9}$ into Equation $\ref{2.2.8}$ results in $\left( \alpha^2 - k^2 \right)y(x)=0 \label{2.2.10}$ For this equation to be satisfied, either • $\alpha^2 - k^2 = 0$ or • $y(x) = 0$. The latter is the trivial solution and is ignored, and therefore $\alpha^2 - k^2 = 0 \label{2.2.11}$ so $\alpha = \pm k \label{2.2.12}$ Hence, there are two solutions to the general Equation $\ref{2.2.8}$, as expected for a second order differential equation (first order differential equations have one solution), which result from substituting the $\alpha$ values from Equation $\ref{2.2.12}$ into Equation $\ref{2.2.9}$ $y(x) = e^{k\, x} \nonumber$ $y(x) = e^{-k\, x} \nonumber$ The general solution can then be any linear combination of these two equations $y(x) = c_1 e^{k\, x} + c_2 e^{-k\, x} \label{2.2.14}$ Example 2.2.1 : General Solution Solve $y'' + 3y' - 4y = 0 \nonumber$ Solution The strategy is to search for a solution of the form $y = e^{\alpha t } \nonumber$ The reason for this is that long ago some geniuses figured this stuff out and it works.
Now calculate derivatives $y' = \alpha e^{\alpha t } \nonumber$ $y'' = \alpha^2e^{\alpha t} \nonumber$ Substituting into the differential equation gives \begin{align*} \alpha ^2e^{\alpha t} + 3(\alpha e^{\alpha t}) - 4(e^{\alpha t}) &= 0 \[4pt] ( \alpha ^2 + 3\alpha - 4)e^{\alpha t} &= 0 \end{align*} \nonumber Now divide by $e^{\alpha t}$ to get \begin{align*} \alpha ^2 + 3\alpha - 4 &= 0 \[4pt] (\alpha - 1)(\alpha + 4) &= 0 \[4pt] \alpha &= 1 \end{align*} \nonumber and $\alpha = -4 \nonumber$ We can conclude that two solutions are $y_1 = e^t \nonumber$ and $y_2 = e^{-4t} \nonumber$ Now let $L(y) = y'' + 3y' - 4y \nonumber$ It is easy to verify that if $y_1$ and $y_2$ are solutions to $L(y) = 0 \nonumber$ then $y= c_1y_1 + c_2y_2 \nonumber$ is also a solution. More specifically we can conclude that $y = c_1e^t + c_2e^{-4t } \nonumber$ Represents a two dimensional family (vector space) of solutions. Later we will prove that this is the most general description of the solution space. Example 2.2.2 : Boundary Conditions Solve $y'' - y' - 6y = 0 \nonumber$ with $y(0) = 1$ and $y'(0) = 2$. Solution As before we seek solutions of the form $y = e^{rt} \nonumber$ Now calculate derivatives \begin{align*} y' &= re^{rt} \[4pt] y'' &= r^2e^{rt} \end{align*} \nonumber Substituting into the differential equation gives \begin{align*} r^2e^{rt} + (re^{rt}) - 6(e^{rt}) &= 0 \[4pt] ( r^2 - r - 6 )e^{rt} &= 0 \end{align*} \nonumber Now divide by $e^{rt}$ to get \begin{align*} r^2 - r - 6 &= 0 \[4pt] (r - 3)(r + 2) &= 0 \end{align*} \nonumber We can conclude that two solutions are $y_1 = e^{3t} \nonumber$ and $y_2 = e^{-2t} \nonumber$ We can conclude that $y = c_1e^{3t} + c_2e^{-2t} \nonumber$ Represents a two dimensional family (a "vector space") of solutions.
Now use the initial conditions to find that $1 = c_1 + c_2 \nonumber$ We have that $y' = 3c_1e^{3t} - 2c_2e^{-2t}\nonumber$ Plugging the initial condition into $y'$ gives $2 = 3c_1 - 2c_2 \nonumber$ This is a system of two equations and two unknowns. We can use linear algebra to arrive at $c_1 = \dfrac{4}{5}\nonumber$ and $c_2 = \dfrac {1}{5}\nonumber$ The final solution is $y = \dfrac{4}{5} e^{3t } + \dfrac{1}{5}e^{-2t} \nonumber$ When $K < 0$, the general solutions of Equations $\ref{2.2.4a}$ and $\ref{2.2.4b}$ are oscillatory in time and space, respectively, as discussed in the following section. Contributors and Attributions • Delmar Larsen (UC Davis)
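A quick numerical check of Example 2.2.2 (illustrative only): the claimed solution satisfies the differential equation and both initial conditions.

```python
import math

# Check that y = (4/5) e^{3t} + (1/5) e^{-2t} solves y'' - y' - 6y = 0
# with y(0) = 1 and y'(0) = 2.
def y(t):   return 0.8 * math.exp(3*t) + 0.2 * math.exp(-2*t)
def yp(t):  return 2.4 * math.exp(3*t) - 0.4 * math.exp(-2*t)   # y'
def ypp(t): return 7.2 * math.exp(3*t) + 0.8 * math.exp(-2*t)   # y''

print(y(0.0), yp(0.0))            # 1.0 2.0  (initial conditions)
t = 0.37                          # arbitrary test point
print(ypp(t) - yp(t) - 6*y(t))    # ~0  (residual of the ODE)
```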
Learning Objectives • Explore the basis of the oscillatory solutions to the wave equation • Understand the consequences of boundary conditions on the possible solutions • Rationalize how satisfying boundary conditions forces quantization (i.e., only solutions with specific wavelengths exist) The boundary conditions for the string held to zero at both ends argue that $u(x,t)$ collapses to zero at the extremes of the string (Figure 2.3.1 ). Unfortunately, when $K>0$, the general solution (Equation 2.2.5) results in a sum of exponential decays and growths that cannot achieve the boundary conditions (except for the trivial solution); hence $K<0$. This means we must introduce complex numbers due to the $\sqrt{K}$ terms in Equation 2.2.5. So we can rewrite $K$: $K = - p^2 \label{2.3.1}$ and Equation 2.2.4b can be written $\dfrac{d^2X(x)}{dx^2} +p^2 X(x) = 0 \label{2.3.2}$ The general solution to differential equations of the form of Equation \ref{2.3.2} is $X(x) = A e^{ipx} + B e^{-ipx} \label{2.3.3}$ Example 2.3.1 Verify that Equation $\ref{2.3.3}$ is the general form for differential equations of the form of Equation $\ref{2.3.2}$, which when combined with Equation $\ref{2.3.1}$ gives $X(x) = A e^{ipx} + B e^{-ipx} \nonumber$ Solution Expand the complex exponentials into trigonometric functions via the Euler formula ($e^{i \theta} = \cos \theta + i\sin \theta$) $X(x) = A \left[\cos (px) + i \sin (px) \right] + B \left[ \cos (px) - i \sin (px) \right] \nonumber$ collecting like terms $X(x) = (A + B ) \cos (px) + i (A - B) \sin (px) \label{2.3.6}$ Introduce new complex constants $c_1=A+B$ and $c_2=i(A-B)$ so that the general solution in Equation $\ref{2.3.6}$ can be expressed as oscillatory functions $X(x) = c_1 \cos (px) + c_2 \sin (px) \label{2.3.7}$ Now let's apply the boundary conditions from Equations 2.2.6a and 2.2.6b to determine the constants $c_1$ and $c_2$.
Substituting the first boundary condition ($X(x=0)=0$) into the general solutions of Equation $\ref{2.3.7}$ results in \begin{align} X(x=0) = c_1 \cos (0) + c_2 \sin (0) &=0 \nonumber \[4pt] c_1 + 0 &= 0 \nonumber \[4pt] c_1 &=0 \label{2.3.8c} \end{align} and substituting the second boundary condition ($X(x=L)=0$) into the general solutions of Equation $\ref{2.3.7}$ results in $X(x=L) = c_1 \cos (pL) + c_2 \sin (pL) = 0 \label{2.3.9}$ we already know that $c_1=0$ from the first boundary condition so Equation $\ref{2.3.9}$ simplifies to $c_2 \sin (pL) = 0 \label{2.3.10}$ Given the properties of sines, Equation $\ref{2.3.10}$ simplifies to $pL= n\pi \label{2.3.11}$ where $n=0$ gives the trivial solution, which we ignore, so $n = 1, 2, 3...$. $p = \dfrac{n\pi}{L} \label{2.3.12}$ Substituting Equations $\ref{2.3.12}$ and $\ref{2.3.8c}$ into Equation $\ref{2.3.7}$ results in $X(x) = c_2 \sin \left(\dfrac{n\pi x}{L} \right) \nonumber$ which can simplify to $X(x) = c_2 \sin \left( \omega x \right) \nonumber$ with $\omega=\dfrac{n\pi}{L} \nonumber$ A similar argument applies to the other half of the ansatz ($T(t)$). Exercise 2.3.1 Given two traveling waves: $\psi_1 = \sin{(c_1 x+c_2 t)} \; \textrm{ and } \; \psi_2 = \sin{(c_1 x-c_2 t)} \nonumber$ 1. Find the wavelength and the wave velocity of $\psi_1$ and $\psi_2$ 2. Find the following and identify nodes: $\psi_+ = \psi_1 + \psi_2 \; \textrm{ and } \; \psi_- = \psi_1 - \psi_2 \nonumber$ Solution a: $\psi_1$ is a sine function, and a sine function is zero at every integer multiple of $\pi$, i.e., at $n \pi$ where $n=0,\pm 1, \pm 2, ...$. Thus, $\psi_1 = 0$ when $c_1 x + c_2 t = \pi n$. Solving for $x$: $x = \frac{n \pi - c_2 t}{c_1} \nonumber$ The velocity of this wave is: $\frac{dx}{dt} = -\frac{c_2}{c_1} \nonumber$ Similarly for $\psi_2$: $\psi_2 = 0$ when $c_1 x - c_2 t = \pi n$.
Solving for $x$, for $\psi_2$: $x = \frac{n \pi + c_2 t}{c_1} \nonumber$ The velocity of this wave is: $\frac{dx}{dt} = \frac{c_2}{c_1} \nonumber$ The wavelength for each wave is twice the distance between two successive nodes. In other words, $\lambda = 2(x_{n} - x_{n-1}) = \frac{2 \pi}{c_1} \nonumber$ Solution b: Find $\psi_+ = \psi_1 + \psi_2 \; \textrm{ and } \; \psi_- = \psi_1 - \psi_2$. \begin{align*} \psi_+ &= \sin (c_1 x + c_2 t) + \sin (c_1 x - c_2 t) \\[4pt] &= \sin (c_1 x ) \cos (c_2 t) + \cancel{\cos(c_1 x) \sin(c_2 t)} + \sin (c_1 x ) \cos (c_2 t) - \cancel{\cos(c_1 x) \sin(c_2 t)} \\[4pt] &= 2\sin (c_1 x ) \cos (c_2 t) \end{align*} \nonumber This has a node at every $x= n \pi / c_1$ and \begin{align*} \psi_- &= \sin (c_1 x + c_2 t) - \sin (c_1 x - c_2 t) \\[4pt] &= \cancel{\sin (c_1 x ) \cos (c_2 t)} + \cos(c_1 x) \sin(c_2 t) - \cancel{\sin (c_1 x ) \cos (c_2 t)} + \cos(c_1 x) \sin(c_2 t) \\[4pt] &= 2\cos (c_1 x ) \sin (c_2 t) \end{align*} \nonumber which has a node wherever $\cos (c_1 x) = 0$, i.e., at every $x = (2n+1)\pi/2c_1$.
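The sum-to-product result for $\psi_+$ can be confirmed numerically. In this small sketch the values of $c_1$ and $c_2$ are arbitrary choices for illustration (not taken from the exercise); it checks the identity at several points and verifies that a node of the standing wave stays fixed at all times:

```python
import math

c1, c2 = 2.0, 3.0   # arbitrary wave parameters for this check

def psi_plus(x, t):
    # Sum of the two counter-propagating traveling waves
    return math.sin(c1 * x + c2 * t) + math.sin(c1 * x - c2 * t)

# Identity: psi_plus(x, t) = 2 sin(c1 x) cos(c2 t)
err = max(abs(psi_plus(x, t) - 2 * math.sin(c1 * x) * math.cos(c2 * t))
          for x in (0.1, 0.7, 1.3) for t in (0.0, 0.4, 0.9))

# The node at x = pi/c1 remains at rest for every t (a standing wave)
node = math.pi / c1
node_amplitude = max(abs(psi_plus(node, t)) for t in (0.0, 0.25, 0.5, 1.0))
```

Both `err` and `node_amplitude` come out at machine-precision level, consistent with the algebra above.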
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/02%3A_The_Classical_Wave_Equation/2.03%3A_Oscillatory_Solutions_to_Differential_Equations.txt
Learning Objectives • Separate the wave equation into individual spatial and temporal problems and solve them. • Demonstrate that the general solution can be a superposition of solutions (normal modes) As discussed previously, the solution to the string example, $u(x,t)$ for all $x$ and $t$, is assumed to be a product of two functions: $X(x)$ and $T(t)$, where $X(x)$ is a function of only $x$, not $t$, and $T(t)$ is a function of $t$, but not $x$. $u(x,t)= X(x)T(t) \nonumber$ By substituting this product solution form into the original wave equation, one can obtain two ordinary differential equations (differential equations containing a function of one independent variable and its derivatives). Each differential equation involves only one of the independent variables ($x$ or $t$). Spatial Dependence of the Solution: $X(x)$ The boundary conditions for the string held to zero at both ends argue that $u(x,t)$ collapses to zero at the extremes of the string. Unfortunately, when $K>0$, the general solution to the wave equation results in a sum of exponential decays and growths that cannot achieve the boundary conditions (except for the trivial solution $u(x,t)=0$); hence $K<0$. This means we must introduce complex numbers due to the $\sqrt{K}$ terms in Equation 2.2.5.
So we can rewrite $K$: $K = - p^2 \label{2.4.1}$ and Equation 2.2.4b can be written as $\dfrac{d^2X(x)}{dx^2} +p^2 X(x) = 0 \label{2.4.2}$ The general solution to differential equations of the form of Equation $\ref{2.4.2}$ is Equation 2.2.5 $X(x) = A e^{\sqrt{K}x} + B e^{-\sqrt{K}x} \label{2.4.3}$ that when substituted with Equation $\ref{2.4.1}$ gives $X(x) = A e^{ipx} + B e^{-ipx} \label{2.4.4}$ The complex exponentials can be expressed as trigonometric functions via Euler's formula ($e^{i \theta} = \cos \theta + i\sin \theta$) $X(x) = A \left[\cos (px) + i \sin (px) \right] + B \left[ \cos (px) - i \sin (px) \right] \nonumber$ collecting like terms $X(x) = (A + B ) \cos (px) + i (A - B) \sin (px) \label{2.4.6}$ Introduce new complex constants $C=A+B$ and $D=i(A-B)$ so that the general solution in Equation $\ref{2.4.6}$ can be expressed as oscillatory functions $X(x) =C \cos (px) + D \sin (px) \label{2.4.7}$ Exercise 2.4.1 Verify that Equation \ref{2.4.3} is the general solution for differential equations of the form of Equation $\ref{2.4.2}$. Answer In order to show that $X(x)=A e^{\sqrt{K} x}+B e^{-\sqrt{K} x} \label{2.2.4}$ is a general solution to the differential equation $\frac{d^{2} X(x)}{d x^{2}}+p^{2} X(x)=0, \nonumber$ we have to take the second derivative of Equation \ref{2.2.4}, substitute it and the original function into the appropriate locations in Equation \ref{2.4.2}, and verify that the result does in fact equal $0$.
First we have to take the first and then second derivative of Equation \ref{2.2.4} $\begin{array}{l} {\dfrac{d}{d x}\left(A e^{\sqrt{K} x}+B e^{-\sqrt{K} x}\right)=\sqrt{K} A e^{\sqrt{K} x}-\sqrt{K} B e^{-\sqrt{K} x}} \\ {\dfrac{d^{2}}{d x^{2}}\left(A e^{\sqrt{K} x}+B e^{-\sqrt{K} x}\right)=K A e^{\sqrt{K} x}+K B e^{-\sqrt{K} x}} \end{array} \nonumber$ Now that we have the second derivative of Equation \ref{2.2.4}, we plug the relevant values into Equation \ref{2.4.2} \begin{align*} \dfrac{d^{2} X(x)}{d x^{2}}+p^{2} X(x)=0 \\ K A e^{\sqrt{K} x}+K B e^{-\sqrt{K} x}+p^{2}\left(A e^{\sqrt{K} x}+B e^{-\sqrt{K} x}\right) \overset{?}{=} 0 \end{align*} \nonumber We are given in Equation \ref{2.4.1} that $K=-p^{2} \nonumber$ So $p^{2}=-K \nonumber$ Now we can plug that into our differential equation to simplify \begin{align*} K A e^{\sqrt{K} x}+K B e^{-\sqrt{K} x} - K\left(A e^{\sqrt{K} x}+B e^{-\sqrt{K} x} \right) &\overset{?}{=} 0 \\[4pt] \bcancel{ K A e^{\sqrt{K} x}} + \cancel{K B e^{-\sqrt{K} x}} - \bcancel{K A e^{\sqrt{K} x}} - \cancel{K B e^{-\sqrt{K} x}} & \overset{\checkmark}{=} 0 \end{align*} \nonumber As all of these terms cancel to equal 0, we prove that the solution given is a general solution to the differential equation. It is important to remember, though, that it is not the only solution to the differential equation. Now let's apply the boundary conditions from Equation 2.2.7 to determine the constants $C$ and $D$.
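The cancellation shown in Exercise 2.4.1 can also be confirmed numerically for the oscillatory case ($K<0$) using complex arithmetic. In this sketch, $K$, $A$, and $B$ are arbitrary illustrative values, and a finite-difference second derivative checks that $X'' + p^2 X = 0$:

```python
import cmath

# Arbitrary illustrative values: K = -p^2 with p = 2
K, p = -4.0, 2.0
A, B = 0.6 + 0.2j, -0.3 + 1.1j   # arbitrary complex constants

def X(x):
    # General solution A e^{sqrt(K) x} + B e^{-sqrt(K) x}; sqrt(K) = ip here
    return A * cmath.exp(cmath.sqrt(K) * x) + B * cmath.exp(-cmath.sqrt(K) * x)

def d2(f, x, h=1e-4):
    # Central finite-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# X'' + p^2 X = 0 should hold (up to finite-difference error)
residual = max(abs(d2(X, x) + p**2 * X(x)) for x in (0.0, 0.5, 1.5, 2.5))
```

Because $\sqrt{K}$ is purely imaginary when $K<0$, the exponentials stay bounded and oscillatory, which is exactly why this branch is compatible with the boundary conditions.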
Substituting the first boundary condition ($X(x=0)=0$) into the general solution of Equation $\ref{2.4.7}$ results in \begin{align} X(x=0) &= 0 \\[4pt] C \cos (0) + D \sin (0) &=0 \label{2.4.8a} \\[4pt] C + 0 &= 0 \label{2.4.8b} \\[4pt] C&=0 \label{2.4.8c} \end{align} and substituting the second boundary condition ($X(x=L)=0$) into the general solution of Equation $\ref{2.4.7}$ results in $X(x=L) = C \cos (pL) + D \sin (pL) = 0 \label{2.4.9}$ We already know that $C=0$ from the first boundary condition, so Equation $\ref{2.4.9}$ simplifies to $D \sin (pL) = 0 \label{2.4.10}$ Given the properties of sines, Equation \ref{2.4.10} requires $pL= n\pi \label{2.4.11}$ where $n=0$ gives the trivial solution, which we ignore, so $n = 1, 2, 3...$. $p = \dfrac{n\pi}{L} \label{2.4.12}$ Substituting Equations $\ref{2.4.12}$ and $\ref{2.4.8c}$ into Equation $\ref{2.4.7}$ results in $X(x) = D \sin \left(\dfrac{n\pi x}{L} \right) \label{2.4.13}$ Equation $\ref{2.4.13}$ presents a set of solutions to the spatial part of the wave equation subject to the boundary conditions (Figure 2.4.1 ). This set of solutions is infinitely large, with individual solutions distinguished from each other by the $n$ parameter introduced to account for the boundary conditions. This number is an example of the "quantum numbers" that appear ubiquitously in quantum mechanics and are uniquely defined for each system. Time Dependence of the Solution: $T(t)$ A similar argument applies to the other half of the ansatz, $T(t)$, which is obtained by solving Equation 2.2.4a; this equation qualitatively resembles the spatial differential equation solved above (Equation 2.2.4b). $\dfrac{d^2T(t)}{dt^2} - Kv^2 T(t) = 0 \nonumber$ However, the constraints extracted from solving the spatial dependence also apply to the time dependence.
When Equations $\ref{2.4.1}$ and $\ref{2.4.12}$ are substituted into Equation 2.2.4a, a more simplified expression is obtained $\dfrac{d^2T(t)}{dt^2} + p^2v^2 T(t) = \dfrac{d^2T(t)}{dt^2} + \left(\dfrac{n v \pi}{L}\right)^2 T(t) = 0 \label{2.4.14}$ Define a new constant $\omega_n$ $\omega_n= \left(\dfrac{n v \pi}{L}\right) \label{2.4.15}$ and substitute into Equation $\ref{2.4.14}$ $\dfrac{d^2T(t)}{dt^2} + \omega_n^2 T(t) = 0 \label{2.4.16}$ This has the same functional form as Equation $\ref{2.4.2}$, so the general solution is $T(t) = E \cos (\omega_n t) + F \sin (\omega_n t) \label{2.4.17}$ In contrast to the spatial dependence solution, we have no boundary conditions to use to identify the constants $E$ and $F$. The Principle of Superposition Now let's revisit the original ansatz solution to the classical wave equation (Equation 2.2.1), which can be substituted with Equations $\ref{2.4.13}$ and $\ref{2.4.17}$ \begin{align} u(x,t) &= X(x)T(t) \label{2.4.18a} \\[4pt] &= \left[D \sin \left(\dfrac{n\pi x}{L} \right) \right] \left( E \cos (\omega_n t) + F \sin (\omega_n t) \right) \label{2.4.18b} \end{align} We can collect constants again with $G=DE$ and $H=DF$ and introduce an $n$ dependence to each, since $E$ and $F$ may be $n$ dependent. $u_n(x,t) = \left[ G_n \cos (\omega_n t) + H_n \sin (\omega_n t) \right] \sin \left(\dfrac{n\pi x}{L} \right) \label{2.4.19}$ The functions represented in Equation $\ref{2.4.19}$ are a set of solutions, including both spatial and temporal features, that solve the wave equation of a string held tight on two ends. Linearity of the Wave Equation The wave equation has a very important property: if we have two solutions to the equation, then the sum of the two is also a solution to the equation.
It’s easy to check this: $\dfrac{\partial^2 (f+g)}{\partial x^2} = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 g}{\partial x^2} = \dfrac{1}{v^2} \dfrac{\partial^2f}{\partial t^2} +\dfrac{1}{v^2} \dfrac{\partial^2g}{\partial t^2} = \dfrac{1}{v^2} \dfrac{\partial^2(f+g)}{\partial t^2} \nonumber$ Any differential equation for which this property holds is called a linear differential equation. Also note that $af(x,t) + bg(x,t)$ is also a solution to the equation if $a$ and $b$ are constants. So you can add together—superpose—multiples of any two solutions of the wave equation to find a new function satisfying the equation. This important property is easy to interpret visually: if you can draw two wave solutions, then at each point on the string simply add the displacement $u_n(x,t)$ of one wave to the other $u_m(x,t)$—the sum of the two waves together is a solution. So, for example, as two traveling waves moving along the string in opposite directions meet each other, the displacement of the string at any point at any instant is just the sum of the displacements it would have had from the two waves singly. This simple addition of the displacements is called interference; if the waves meeting have displacements in opposite directions, the string will be displaced less than it would be by either wave alone. This is also called the Principle of Superposition. The Principle of Superposition states that the sum of two or more solutions is also a solution. Since the wave equation is a linear homogeneous differential equation, the total solution can be expressed as a sum of all possible solutions described by Equation $\ref{2.4.19}$.
\begin{align} u(x,t) &= \sum_{n=1}^{\infty} u_n(x,t) \label{2.4.20} \\[4pt] & = \sum_{n=1}^{\infty} \left( G_n \cos (\omega_n t) + H_n \sin (\omega_n t) \right) \sin \left(\dfrac{n\pi x}{L}\right) \label{2.4.21} \end{align} Each $u_n(x,t)$ solution is called a normal mode of the system and can be characterized via its corresponding angular frequency $\omega_n = \dfrac{n\pi v}{L}$ with $n=1,2,3...$. The spatial dependence of the first seven normal modes is shown in Figure 2.4.1 ; these are standing waves. The first term with $n=1$ is typically called the fundamental, and each subsequent mode is called an overtone or harmonic. The temporal dependence of the normal modes is sinusoidal with angular frequencies $\omega_n$ that can be expressed as natural frequencies $\nu_n$ via $\nu_n = \dfrac{\omega_n}{2 \pi} = \dfrac{nv}{2L} \label{2.4.22}$ Hence, as the spatial curvature of the normal mode increases, the temporal oscillation of that mode also increases. This is a common trait in quantum mechanical systems and is a direct consequence of the wave equation.
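These normal modes and their superposition can be illustrated numerically. The sketch below uses arbitrary illustrative values of $L$ and $v$ (not from the text), builds a superposition of the first three modes with $G_n = 1$ and $H_n = 0$, and checks by finite differences that the sum satisfies both the wave equation and the fixed-end boundary conditions:

```python
import math

L, v = 1.0, 2.0   # illustrative string length and wave speed

def u(x, t):
    # Superposition of the first three normal modes (G_n = 1, H_n = 0),
    # with omega_n = n pi v / L
    return sum(math.sin(n * math.pi * x / L) * math.cos(n * math.pi * v * t / L)
               for n in (1, 2, 3))

def d2(f, s, h=1e-4):
    # Central finite-difference approximation of f''(s)
    return (f(s + h) - 2 * f(s) + f(s - h)) / h**2

# u_xx = (1/v^2) u_tt should hold at interior points (the wave equation)
residual = max(abs(d2(lambda x: u(x, t0), x0) - d2(lambda t: u(x0, t), t0) / v**2)
               for x0 in (0.2, 0.5, 0.8) for t0 in (0.0, 0.3))

# Fixed ends: u(0, t) = u(L, t) = 0 at all times
ends = max(abs(u(0.0, t)) + abs(u(L, t)) for t in (0.0, 0.3, 0.7))
```

The same check passes for any choice of the $G_n$ and $H_n$ coefficients, which is the Principle of Superposition in action.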
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/02%3A_The_Classical_Wave_Equation/2.04%3A_The_General_Solution_is_a_Superposition_of_Normal_Modes.txt
Learning Objectives • To apply the wave equations to a two-dimensional membrane (rectangles and circles) • To recognize the possible geometries of nodes in two-dimensional systems So far, we’ve looked at waves in one dimension, traveling along a string or sound waves going down a narrow tube. However, waves in more than one dimension are very familiar—water waves on the surface of a pond, or sound waves moving out from a source in three dimensions. It is pleasant to find that these waves in higher dimensions satisfy wave equations which are a very natural extension of the one we found for a string, and—very important—they also satisfy the Principle of Superposition, in other words, if waves meet, you just add the contribution from each wave. In the next two paragraphs, we go into more detail, but this Principle of Superposition is the crucial lesson. The Wave Equation and Superposition in More Dimensions What happens in higher dimensions? Let’s consider two dimensions, for example waves in an elastic sheet like a drumhead. If the rest position for the elastic sheet is the ($x$, $y$) plane, so when it’s vibrating it’s moving up and down in the z-direction, its configuration at any instant of time is a function $u(x,y,t)$. In fact, we could do the same thing we did for the string, looking at the total forces on a little bit and applying Newton’s Second Law. In this case that would mean taking one little bit of the drumhead, and instead of a small stretch of string with tension pulling the two ends, we would have a small square of the elastic sheet, with tension pulling all around the edge. Remember that the net force on the bit of string came about because the string was curving around, so the tensions at the opposite ends tugged in slightly different directions, and did not cancel. The $\frac{\partial^2}{\partial x^2}$ term measured that curvature, the rate of change of slope.
In two dimensions, thinking of a small square of the elastic sheet, things are more complicated. Visualize the bit of sheet to be momentarily like a tiny patch on a balloon: you’ll see it curves in two directions, and tension forces must be tugging all around the edges. The total force on the little square comes about because the tension forces on opposite sides are out of line if the surface is curving around; now we have to add two sets of almost-opposite forces from the two pairs of sides. The math is not shown here, but it is at least plausible that the equation is: $\dfrac{ \partial^2 u(x,y,t)}{\partial x^2} + \dfrac{ \partial^2 u(x,y,t)}{\partial y^2} = \dfrac{1}{v^2} \dfrac{ \partial^2 u(x,y,t)}{\partial t^2} \label{2.5.1}$ The physics of this equation is that the acceleration of a tiny bit of the sheet comes from out-of-balance tensions caused by the sheet curving around in both the x- and y-directions; this is why there are two terms on the left hand side. And, going to three dimensions is easy: add one more term to give $\dfrac{ \partial^2 u(x,y,z,t)}{\partial x^2} + \dfrac{ \partial^2 u(x,y,z,t)}{\partial y^2} + \dfrac{ \partial^2 u(x,y,z,t)}{\partial z^2} = \dfrac{1}{v^2} \dfrac{ \partial^2 u(x,y,z,t)}{\partial t^2} \label{2.5.2}$ This sum of partial differentiations in space is so common in physics that there’s a shorthand: $\nabla^2 = \dfrac{ \partial^2}{\partial x^2} + \dfrac{ \partial^2}{\partial y^2} + \dfrac{ \partial^2}{\partial z^2} \label{2.5.4}$ so Equation \ref{2.5.2} can be more easily written as $\nabla^2 u(x,y,z,t) = \dfrac{1}{v^2} \dfrac{\partial^2 u(x,y,z,t)}{\partial t^2} \label{2.5.3}$ Just as we found traveling harmonic waves in one dimension (no boundary conditions) $u(x,t) = A \sin (kx -\omega t) \label{2.5.5}$ with $\omega=v k$, you can verify that the three-dimensional equation has harmonic solutions $u(x,y,z,t) = A \sin (k_x x +k_y y +k_z z -\omega t) \label{2.5.6}$ with $\omega = v |\vec{k}|$ where $|\vec{k}| = \sqrt{k_x^2+k_y^2+k_z^2}$ and $\vec{k}$ is a vector in the direction the wave is moving. The electric and magnetic fields in a radio wave or light wave have just this form (or, closer to the source, a very similar equivalent expression for outgoing spheres of waves, rather than plane waves). It’s important to realize that the 2D wave equation (Equation \ref{2.5.1}) is still a linear equation, so the Principle of Superposition still holds. If two waves on an elastic sheet, or the surface of a pond, meet each other, the result at any point is given by simply adding the displacements from the individual waves. We’ll begin by thinking about waves propagating freely in two and three dimensions, then later consider waves in restricted areas, such as a drum head. Vibrational Modes of a Rectangular Membrane A one-dimensional wave does not have a choice in how it propagates: it just moves along the line (well, it could get partly reflected by some change in the line and part of it go backwards). However, when we go to higher dimensions, how a wave disturbance starting in some localized region spreads out is far from obvious. But we can begin by recalling some simple cases: dropping a pebble into still water causes an outward moving circle of ripples. If we grant that light is a wave, we notice a beam of light changes direction on going from air into glass. Of course, it is not immediately evident that light is a wave: we’ll talk a lot more about that later. Solving for the function $u(x,y,t)$ in a vibrating, rectangular membrane is done in a similar fashion to the string: by separation of variables and setting boundary conditions. A few solutions (both temporal and spatial) are shown below together with their quantum numbers ($n_x$ and $n_y$).
The solved function is very similar, where $u(x,y,t) = A_{nm} \cos(\omega_{nm} t + \phi_{nm}) \sin \left(\dfrac {n_x \pi x}{a}\right) \sin\left(\dfrac {n_y\pi y}{b}\right) \label{2.5.6b}$ where • $a$ is the length of the rectangular membrane and $b$ is the width, and • $n_x$ and $n_y$ are two quantum numbers (one in each dimension). As with the solutions to 1-D wave equations, a node is a point (or line) on a structure that does not move while the rest of the structure is vibrating. Example $1$: Nodal Geometries in Rectangular Membranes For the following 2-D solutions in Figure 2.5.1 , how many nodes are there, what is their geometry, and how would you characterize them? 1. ($n_x=1$, $n_y=1$) 2. ($n_x=2$, $n_y=1$) Solution a. The ($n_x=1$, $n_y=1$) solution in Figure 2.5.1 has zero nodes. That is, no point on the membrane (other than the boundaries) remains stationary during the motion of the membrane. b. The ($n_x=2$, $n_y=1$) solution in Figure 2.5.1 has one node. It is a line at half the length of the x direction and extends over the entire length of the y direction. Exercise $1$ For the ($n_x=2$, $n_y=2$) solution to a rectangular membrane in Figure 2.5.1 : how many nodes are there, what is their geometry, and how would you characterize them? Answer There are two nodes. They are lines: one is at half the length of the x direction and extends over the entire length of the y direction, and one is at half the length of the y direction and extends over the entire length of the x direction. Vibrational Modes of a Circular Membrane The basic principles of a vibrating rectangular membrane apply to other 2-D membranes, including circular membranes. However, the mathematics and solutions are a bit more complicated.
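Before turning to the circular case, the nodal-line counting in the example and exercise above can be sketched numerically. The membrane dimensions below are arbitrary illustrative choices, and `interior_nodal_lines` simply encodes the observation that the $(n_x, n_y)$ mode has $n_x-1$ interior nodal lines parallel to $y$ and $n_y-1$ parallel to $x$:

```python
import math

a, b = 1.0, 1.0   # illustrative membrane dimensions

def mode(nx, ny, x, y):
    # Spatial part of the (nx, ny) rectangular-membrane mode
    return math.sin(nx * math.pi * x / a) * math.sin(ny * math.pi * y / b)

def interior_nodal_lines(nx, ny):
    # (nx - 1) nodal lines parallel to y plus (ny - 1) parallel to x
    return (nx - 1) + (ny - 1)

counts = {(1, 1): interior_nodal_lines(1, 1),   # 0 nodes
          (2, 1): interior_nodal_lines(2, 1),   # 1 node
          (2, 2): interior_nodal_lines(2, 2)}   # 2 nodes

# The (2, 1) mode vanishes along the entire line x = a/2
line_max = max(abs(mode(2, 1, a / 2, y)) for y in (0.1, 0.4, 0.8))
```

The vanishing of `mode(2, 1, a/2, y)` for every sampled $y$ is the nodal line identified in part b of the example.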
The solutions are best represented in polar notation (instead of rectangular as in Equation \ref{2.5.6b}) and have the following functional form $u(r, \theta, t)=J_{m}\left(\lambda_{m n} r\right) \cos m \theta \cos c \lambda_{m n} t \nonumber$ where $J_m$ are Bessel functions (these are oscillatory functions) and $\lambda$ are constants. This system has two quantum numbers ($m$ and $n$) that serve the same function as $n_x$ and $n_y$ do in the rectangular membranes. In the animations in Figure 2.5.2 , the nodal diameters and circles show up as white regions that do not oscillate, while the red and blue regions indicate positive and negative displacements. Figure 2.5.2 (left) shows the fundamental mode shape for a vibrating circular membrane, while the other two modes are excited modes with more complex nodal character. Exercise $2$ How many nodes are there in the three solutions for a circular membrane in Figure 2.5.2 , what are their geometries, and how would you characterize them? Answer (left) zero nodes (middle) two nodes. They are circular at approximately 1/3 and 2/3 the radius (these are radial nodes = at fixed radii) (right) four nodes. They are lines at 45° angles through the center (these are angular nodes = at fixed angles).
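The fundamental mode of the circular membrane is fixed by the first zero of $J_0$, since the displacement must vanish at the rim. The sketch below evaluates $J_0$ from its power series in pure Python (the 40-term truncation is an arbitrary but ample choice for small arguments) and locates that first zero by bisection:

```python
def J0(x, terms=40):
    # Power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(x / 2) ** 2 / (k + 1) ** 2
    return total

# Bisect for the first zero of J0; J0(2) > 0 and J0(3) < 0 bracket it
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J0(lo) * J0(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
first_zero = 0.5 * (lo + hi)   # known value ~2.4048
```

For a membrane of radius $R$, requiring $J_0(\lambda_{01} R) = 0$ then gives $\lambda_{01} = \text{first\_zero}/R$ for the fundamental.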
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/02%3A_The_Classical_Wave_Equation/2.05%3A_A_Vibrating_Membrane.txt
Solutions to select questions can be found online. 2.1A Find the general solutions to the following differential equations: 1. $\dfrac{d^{2}y}{dx^{2}} - 4y = 0$ 2. $\dfrac{d^{2}y}{dx^{2}} - 3\dfrac{dy}{dx} - 54y = 0$ 3. $\dfrac{d^{2}y}{dx^{2}} + 9y = 0$ 2.1B Find the general solutions to the following differential equations: 1. $\dfrac{d^{2}y}{dx^{2}} - 16y = 0$ 2. $\dfrac{d^{2}y}{dx^{2}} - 6\dfrac{dy}{dx} + 27y = 0$ 3. $\dfrac{d^{2}y}{dx^{2}} + 100y = 0$ 2.1C Find the general solutions to the following differential equations: 1. $\dfrac{dy}{dx} - 4\sin(x)y = 0$ 2. $\dfrac{d^{2}y}{dx^{2}} - 5\dfrac{dy}{dx}+6y = 0$ 3. $\dfrac{d^{2}y}{dx^{2}} = 0$ 2.2A Practice solving these first and second order homogeneous differential equations with given boundary conditions: 1. $\dfrac{dy}{dx} = ay$ with $y(0) = 11$ 2. $\dfrac{d^2y}{dt^2} = ay$ with $y(0) = 6$ and $y'(0) = 4$ 3. $\dfrac{d^2y}{dt^2} + \dfrac{dy}{dt} - 42y = 0$ with $y(0) = 2$ and $y'(0) = 0$ 2.3A Prove that $x(t) = \cos \left( \sqrt{\dfrac{k}{m}}\, t \right)$ oscillates with a frequency $\nu = \dfrac{1}{2\pi}\sqrt{\dfrac{k}{m}} \nonumber$ Prove that $x(t) = \cos \left( \sqrt{\dfrac{k}{m}}\, t \right)$ also has a period $T = {2\pi}\sqrt{\dfrac{m}{k}} \nonumber$ where $k$ is the force constant and $m$ is the mass of the body. 2.3B Try to show that $x(t)=\sin(\omega t)\nonumber$ oscillates with a frequency $\nu = \omega/2\pi\nonumber$ Explain your reasoning. Can you give another function $x(t)$ that has the same frequency? 2.3C Which two functions oscillate with the same frequency? 1. $x(t)=\cos( \omega t)$ 2. $x(t)=\sin (2 \omega t)$ 3. $x(t)=A\cos( \omega t)+B\sin( \omega t)$ 2.3D Prove that $x(t) = \cos(\omega t)$ oscillates with a frequency $\nu = \dfrac{\omega}{2\pi} \nonumber$ Prove that $x(t) = A \cos(\omega t) + B \sin(\omega t)$ oscillates with the same frequency: $\nu = \dfrac{\omega}{2\pi} \nonumber$ 2.4 Show that the differential equation: $\dfrac{d^2y}{dx^2} + y(x) = 0\nonumber$ has a solution $y(x)= 2\sin x + \cos x \nonumber$ 2.7 For a classical harmonic oscillator, the displacement is given by $\xi (t)=v_0 \sqrt{\dfrac{m}{k}} \sin \left( \sqrt{\dfrac{k}{m}}\, t \right) \nonumber$ where $\xi=x-x_0$. Derive an expression for the velocity as a function of time, and determine the times at which the velocity of the oscillator is zero. 2.11 Verify that $Y(x,t) = A \sin \left(\dfrac{2\pi }{\lambda}(x-vt) \right)\nonumber$ represents a wave of frequency $\nu = v/\lambda$ and wavelength $\lambda$ traveling to the right with a velocity $v$. 2.13A Explain (in words) how to expand the Hamiltonian into two dimensions and use it to solve for the energy 2.13B Given that the Schrödinger equation for a two-dimensional box, with sides $a$ and $b$, is $\dfrac{∂^2 Ψ}{∂x^2} + \dfrac{∂^2 Ψ}{∂y^2} +\dfrac{(8π^2mE) }{h^2}Ψ(x,y) = 0 \nonumber$ and it has the boundary conditions of $Ψ(0,y)= Ψ(a,y)=0$ and $Ψ(x,0)= Ψ(x,b)=0$ for all $x$ and $y$ values, show that $E_{2,2}=\left(\dfrac{h^2}{2ma^2}\right)+\left(\dfrac{h^2}{2mb^2}\right). \nonumber$ 2.14 Explain, in words, how to expand the Schrödinger equation into a three-dimensional box 2.18 Solving the differential equation for a pendulum gives us the following equation, $\phi(t)= c_1\cos \left( \sqrt{\dfrac{g}{L}}\, t \right) +c_2\sin \left( \sqrt{\dfrac{g}{L}}\, t \right) \nonumber$ Assuming $c_1=2$, $c_2= 5$, $g=7$ and $L=3$, what is the position of the pendulum initially? Does this make sense in the real world? Why or why not? (We can ignore units for this problem). 2.23 Consider a particle of mass $m$ in a one-dimensional box of length $a$. Its average energy is given by $\langle{E}\rangle = \dfrac{1}{2m}\langle p^2\rangle\nonumber$ Because $\langle p \rangle = 0\nonumber$ $\langle p^2\rangle = \sigma^{2}_{p}\nonumber$ where $\sigma_p$ can be called the uncertainty in $p$.
Using the Uncertainty Principle, show that the energy must be at least as large as $\hbar^2/8ma^2$ because $\sigma_x$, the uncertainty in $x$, cannot be larger than $a$. 2.33 Prove $y(x, t) = A\cos[2π/λ(x - vt)]$ is a wave traveling to the right with velocity $v$, wavelength $λ$, and period $λ/v$.
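Problem 2.23 can be sanity-checked numerically: the uncertainty-principle lower bound $\hbar^2/8ma^2$ is compared with the actual particle-in-a-box ground-state energy $E_1 = h^2/8ma^2$. The sketch below uses CODATA constants; the 1 nm box length is an arbitrary illustrative choice:

```python
import math

# CODATA values; a = 1 nm is an arbitrary illustrative box length
hbar = 1.054571817e-34   # reduced Planck constant, J s
h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
a = 1.0e-9               # box length, m

# Uncertainty-principle lower bound on the energy (problem 2.23)
E_bound = hbar**2 / (8 * m_e * a**2)

# Actual particle-in-a-box ground state, E_1 = h^2 / (8 m a^2)
E_ground = h**2 / (8 * m_e * a**2)

ratio = E_ground / E_bound   # equals (h/hbar)^2 = (2 pi)^2, so the bound holds
```

The true ground state exceeds the bound by a factor of $(2\pi)^2 \approx 39.5$, so the uncertainty-principle estimate is consistent but, as usual, not tight.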
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/02%3A_The_Classical_Wave_Equation/2.E%3A_The_Classical_Wave_Equation_%28Exercises%29.txt
The particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move in a small space surrounded by impenetrable barriers. The model is mainly used as a hypothetical example to illustrate the differences between classical and quantum systems. In classical systems, for example a ball trapped inside a large box, the particle can move at any speed within the box and it is no more likely to be found at one position than another. However, when the well becomes very narrow (on the scale of a few nanometers), quantum effects become important. The particle may only occupy certain positive energy levels. The particle in a box model provides one of the very few problems in quantum mechanics which can be solved analytically, without approximations. This means that the observable properties of the particle (such as its energy and position) are related to the mass of the particle and the width of the well by simple mathematical expressions. Due to its simplicity, the model allows insight into quantum effects without the need for complicated mathematics. It is one of the first quantum mechanics problems taught in undergraduate physics courses, and it is commonly used as an approximation for more complicated quantum systems. • 3.1: The Schrödinger Equation Erwin Schrödinger posited an equation that predicts both the allowed energies of a system as well as address the wave-particle duality of matter. Schrödinger equation for de Broglie's matter waves cannot be derived from some other principle since it constitutes a fundamental law of nature. Its correctness can be judged only by its subsequent agreement with observed phenomena (a posteriori proof). • 3.2: Linear Operators in Quantum Mechanics An operator is a generalization of the concept of a function. Whereas a function is a rule for turning one number into another, an operator is a rule for turning one function into another function. 
• 3.3: The Schrödinger Equation is an Eigenvalue Problem To every dynamical variable in quantum mechanics, there corresponds an eigenvalue equation. The eigenvalues represent the possible measured values of the operator. • 3.4: Wavefunctions Have a Probabilistic Interpretation In the most commonly accepted interpretation of the wavefunction, the square of the modulus is proportional to the probability density (probability per unit volume) that the electron is in the volume $d\tau$ located at $r_i$. Since the wavefunction represents the wave properties of matter, the probability amplitude $P(x,t)$ will also exhibit wave-like behavior. • 3.5: The Energy of a Particle in a Box is Quantized The particle in the box model system is the simplest non-trivial application of the Schrödinger equation, but one which illustrates many of the fundamental concepts of quantum mechanics. • 3.6: Wavefunctions Must Be Normalized To maintain the probabilistic interpretation of the wavefunction, the probability of a measurement of x yielding a result between -∞ and +∞ must be 1. Therefore, wavefunctions should be normalized (if possible) to ensure this requirement. • 3.7: The Average Momentum of a Particle in a Box is Zero From the mathematical expressions for the wavefunctions and energies for the particle-in-a-box, we can answer a number of interesting questions. Key to addressing these questions is the formulation and use of expectation values. This is demonstrated in the module and used in the context of evaluating average properties (energy, position, and momentum of the particle in a box). • 3.8: The Uncertainty Principle - Estimating Uncertainties from Wavefunctions The operators x and p are not compatible and there is no measurement that can precisely determine both x and p simultaneously. The uncertainty principle is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point.
Consequently there usually is significant uncertainty in the position of a quantum particle in space. • 3.9: A Particle in a Three-Dimensional Box The 1D particle in the box problem can be expanded to consider a particle within a 3D box with three lengths $a$, $b$, and $c$, when there is NO FORCE (i.e., no potential) acting on the particles inside the box. Motion, and hence the quantization properties of each dimension, is independent of the other dimensions. This Module introduces the concept of degeneracy, where multiple wavefunctions (different quantum numbers) have the same energy. • 3.E: The Schrödinger Equation and a Particle in a Box (Exercises) These are homework exercises to accompany Chapter 3 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap. Thumbnail: The quantum wavefunction of a particle in a 2D infinite potential well of dimensions $L_x$ and $L_y$. The wavenumbers are $n_x=2$ and $n_y=2$. (Public Domain; Inductiveload). 03: The Schrodinger Equation and a Particle in a Box Learning Objectives • To be introduced to the general properties of the Schrödinger equation and its solutions. De Broglie’s doctoral thesis, defended at the end of 1924, created a lot of excitement in European physics circles. Shortly after it was published in the fall of 1925, Peter Debye, Professor of Theoretical Physics at Zurich and Einstein's successor, suggested to Erwin Schrödinger that he give a seminar on de Broglie’s work. Schrödinger gave a polished presentation, but at the end Debye remarked that he considered the whole theory rather childish: why should a wave confine itself to a circle in space? It wasn’t as if the circle was a waving circular string, real waves in space diffracted and diffused, in fact they obeyed three-dimensional wave equations, and that was what was needed. This was a direct challenge to Schrödinger, who spent some weeks in the Swiss mountains working on the problem and constructing his equation.
There is no rigorous derivation of Schrödinger’s equation from previously established theory, but it can be made very plausible by thinking about the connection between light waves and photons, and constructing an analogous structure for de Broglie’s waves and electrons (and, later, other particles). The Schrödinger Equation: A Better Approach While the Bohr model is able to predict the allowed energies of any single-electron atom or cation, it is by no means a general approach. Moreover, it relies heavily on classical ideas, clumsily grafting quantization onto an essentially classical picture, and therefore provides no real insights into the true quantum nature of the atom. Any rule that might be capable of predicting the allowed energies of a quantum system must also account for the wave-particle duality and implicitly include a wave-like description for particles. Nonetheless, we will attempt a heuristic argument to make the result at least plausible. In classical electromagnetic theory, it follows from Maxwell's equations that each component of the electric and magnetic fields in vacuum is a solution of the 3-D wave equation for electromagnetic waves: $\nabla^2 \Psi(x,y,z,t) -\dfrac{1}{c^2}\dfrac{\partial ^2 \Psi(x,y,z,t) }{\partial t^2}=0\label{3.1.1}$ The wave equation in Equation $\ref{3.1.1}$ is the three-dimensional analog to the wave equation presented earlier (Equation 2.1.1) with the velocity fixed to the known speed of light: $c$. Instead of a partial derivative $\dfrac{\partial^2}{\partial x^2}$ in one dimension, the Laplacian (or "del-squared") operator is introduced: $\nabla^2=\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}\label{3.1.2}$ Correspondingly, the solution to this 3D wave equation is a function of four independent variables: $x$, $y$, $z$, and $t$ and is generally called the wavefunction $\psi$. We will attempt now to create an analogous equation for de Broglie's matter waves.
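The claim that each field component satisfies the 3-D wave equation can be illustrated numerically for a plane wave. This sketch uses units with $c = 1$ and an arbitrary wavevector (both illustrative choices), applying the Laplacian via central finite differences and comparing with $(1/c^2)\,\partial^2\Psi/\partial t^2$:

```python
import math

c = 1.0                       # units with the wave speed set to 1
kx, ky, kz = 1.0, 2.0, 2.0    # arbitrary wavevector components, |k| = 3
omega = c * math.sqrt(kx**2 + ky**2 + kz**2)   # dispersion omega = c|k|

def psi(x, y, z, t):
    # One plane-wave field component
    return math.sin(kx * x + ky * y + kz * z - omega * t)

def d2(f, s, h=1e-4):
    # Central finite-difference approximation of f''(s)
    return (f(s + h) - 2 * f(s) + f(s - h)) / h**2

x0, y0, z0, t0 = 0.3, 0.1, 0.7, 0.2
laplacian = (d2(lambda x: psi(x, y0, z0, t0), x0) +
             d2(lambda y: psi(x0, y, z0, t0), y0) +
             d2(lambda z: psi(x0, y0, z, t0), z0))
rhs = d2(lambda t: psi(x0, y0, z0, t), t0) / c**2
residual = abs(laplacian - rhs)
```

The residual vanishes (to finite-difference accuracy) precisely because $\omega = c|\vec{k}|$; changing `omega` to any other value breaks the check.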
Accordingly, let us consider only one-dimensional wave motion propagating in the x-direction. At a given instant of time, the form of a wave might be represented by a function such as $\Psi(x)=f\left(\dfrac {2\pi x}{ \lambda}\right)\label{3.1.3}$ where $f(\theta)$ represents a sinusoidal function such as $\sin\theta$, $\cos\theta$, $e^{i\theta}$, $e^{-i\theta}$ or some linear combination of these. The most suggestive form will turn out to be the complex exponential, which is related to the sine and cosine by Euler's formula $e^{\pm i\theta}=\cos\theta \pm i \sin\theta \label{3.1.4}$ Each of the above is a periodic function, its value repeating every time its argument increases by $2\pi$. This happens whenever $x$ increases by one wavelength $\lambda$. At a fixed point in space, the time-dependence of the wave has an analogous structure: $T(t)=f(2\pi\nu t)\label{3.1.5}$ where $\nu$ gives the number of cycles of the wave per unit time. Taking into account both $x$ and $t$ dependence, we consider a wavefunction of the form $\Psi(x,t)=\exp\left[2\pi i\left(\dfrac{x}{\lambda}-\nu t\right)\right]\label{3.1.6}$ representing waves traveling from left to right. Now we make use of the Planck formula ($E=h\nu$) and de Broglie formula ($p=\frac{h}{\lambda}$) to replace $\nu$ and $\lambda$ by their particle analogs. This gives $\Psi(x,t)=\exp \left[\dfrac{i(px-Et)}{\hbar} \right] \label{3.1.7}$ where $\hbar \equiv \dfrac{h}{2\pi}\label{3.1.8}$ Since Planck's constant occurs in most formulas with the denominator $2\pi$, the $\hbar$ symbol was introduced by Paul Dirac. Equation $\ref{3.1.7}$ represents in some way the wavelike nature of a particle with energy $E$ and momentum $p$.
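As a quick numerical illustration (a sketch using NumPy; the particular values of $\lambda$, $\nu$, and $t$ below are arbitrary choices), both Euler's formula and the wavelength-periodicity of the plane wave in Equation 3.1.6 can be spot-checked:

```python
import numpy as np

# Spot-check Euler's formula e^{i*theta} = cos(theta) + i*sin(theta)
# at several sample angles (a numerical check, not a proof).
theta = np.linspace(0.0, 2 * np.pi, 7)
lhs = np.exp(1j * theta)
rhs = np.cos(theta) + 1j * np.sin(theta)
assert np.allclose(lhs, rhs)

# The plane wave exp[2*pi*i*(x/lam - nu*t)] repeats whenever x grows by
# one wavelength lam; lam, nu, and t are arbitrary choices here.
lam, nu, t = 0.7, 3.0, 0.2
x = np.linspace(0.0, 2.0, 50)
psi = np.exp(2j * np.pi * (x / lam - nu * t))
psi_shifted = np.exp(2j * np.pi * ((x + lam) / lam - nu * t))
assert np.allclose(psi, psi_shifted)
```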
The time derivative of Equation $\ref{3.1.7}$ gives $\dfrac{\partial\Psi}{\partial t} = -\left(\dfrac{iE}{\hbar} \right ) \exp \left[\dfrac{i(px-Et)}{\hbar} \right]\label{3.1.9}$ Thus from a simple comparison of Equations $\ref{3.1.7}$ and $\ref{3.1.9}$ $i\hbar\dfrac{\partial\Psi}{\partial t} = E\Psi\label{3.1.10}$ Analogously, differentiation of Equation $\ref{3.1.7}$ with respect to $x$ gives $-i\hbar\dfrac{\partial\Psi}{\partial x} = p\Psi\label{3.1.11}$ and then the second derivative $-\hbar^2\dfrac{\partial^2\Psi}{\partial x^2} = p^2\Psi\label{3.1.12}$ The energy and momentum for a nonrelativistic free particle (i.e., all energy is kinetic with no potential energy involved) are related by $E=\dfrac{1}{2}mv^2=\dfrac{p^2}{2m}\label{3.1.13}$ Substituting Equations $\ref{3.1.12}$ and $\ref{3.1.10}$ into Equation $\ref{3.1.13}$ shows that $\Psi(x,t)$ satisfies the following partial differential equation $i\hbar\dfrac{\partial\Psi}{\partial t}=-\dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi}{\partial x^2}\label{3.1.14}$ Equation $\ref{3.1.14}$ is the applicable differential equation describing the wavefunction of a free particle that is not bound by any external forces, or equivalently not in a region where its potential energy $V(x,t)$ varies.
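The consistency just derived can also be checked symbolically. The sketch below (using SymPy) substitutes the plane wave of Equation 3.1.7, with $E = p^2/2m$, into both sides of Equation 3.1.14 and verifies that they agree:

```python
import sympy as sp

# Plane wave Psi = exp[i(p*x - E*t)/hbar] with E = p**2/(2*m) should
# satisfy the free-particle equation i*hbar dPsi/dt = -(hbar^2/2m) Psi''.
x, t = sp.symbols('x t', real=True)
p, m, hbar = sp.symbols('p m hbar', positive=True)
E = p**2 / (2 * m)

Psi = sp.exp(sp.I * (p * x - E * t) / hbar)

lhs = sp.I * hbar * sp.diff(Psi, t)                 # i*hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2)       # -(hbar^2/2m) d2Psi/dx2
assert sp.simplify(lhs - rhs) == 0
```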
For a particle with a non-zero potential energy $V(x)$, the total energy $E$ is then a sum of kinetic and potential energies $E=\dfrac{p^2}{2m}+V(x)\label{3.1.15}$ We postulate that Equation $\ref{3.1.14}$ for matter waves can be generalized to $\underbrace{ i\hbar\dfrac{\partial\Psi(x,t)}{\partial t}=\left[-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}+V(x)\right]\Psi(x,t) }_{\text{time-dependent Schrödinger equation in 1D}}\label{3.1.16}$ For matter waves in three dimensions, Equation $\ref{3.1.16}$ is then expanded to $\underbrace{ i\hbar\dfrac{\partial}{\partial t}\Psi(\vec{r},t)=\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\Psi(\vec{r},t)}_{\text{time-dependent Schrödinger equation in 3D}}\label{3.1.17}$ Here the potential energy and the wavefunctions $\Psi$ depend on the three space coordinates $x$, $y$, $z$, which we write for brevity as $\vec{r}$. Notice that the potential energy is assumed to depend on position only and not time (i.e., particle motion). This is applicable for conservative forces, for which a potential energy function $V(\vec{r})$ can be formulated. The Laplacian Operator The three second derivatives in parentheses together are called the Laplacian operator, or del-squared, \begin{align} \nabla^2 &= \nabla \cdot \nabla \nonumber\[4pt] &= \left ( \frac {\partial ^2}{\partial x^2} + \dfrac {\partial ^2}{\partial y^2} + \dfrac {\partial ^2}{\partial z^2} \right ) \label {3-20} \end{align} with the del operator, $\nabla = \left ( \vec {x} \frac {\partial}{\partial x} + \vec {y} \frac {\partial}{\partial y} + \vec {z} \frac {\partial }{\partial z} \right ) \label{3-21}$ Remember from basic calculus that when the del operator operates directly on a scalar field (e.g., $\nabla f(x,y,z)$), it denotes the gradient (i.e., the direction of locally steepest slope) of the field. The symbols with arrows in Equation \ref{3-21} are unit vectors.
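The del and del-squared operators are easy to mishandle, so a short symbolic sketch (SymPy, with an arbitrarily chosen sample field) shows both in action:

```python
import sympy as sp

# Gradient and Laplacian of a sample scalar field f = x^2*y + z^2
# (the field is an arbitrary choice for illustration).
x, y, z = sp.symbols('x y z', real=True)
f = x**2 * y + z**2

grad_f = [sp.diff(f, v) for v in (x, y, z)]              # components of del f
laplacian_f = sum(sp.diff(f, v, 2) for v in (x, y, z))   # del^2 f

assert grad_f == [2 * x * y, x**2, 2 * z]
assert laplacian_f == 2 * y + 2
```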
Equation $\ref{3.1.17}$ is the time-dependent Schrödinger equation describing the wavefunction amplitude $\Psi(\vec{r}, t)$ of matter waves associated with the particle within a specified potential $V(\vec{r})$. Its formulation in 1926 represents the start of modern quantum mechanics (Heisenberg in 1925 proposed another version known as matrix mechanics). For conservative systems, the energy is a constant, and the time-dependent factor from Equation $\ref{3.1.7}$ can be separated from the space-only factor (via the Separation of Variables technique discussed in Section 2.2) $\Psi(\vec{r},t)=\psi(\vec{r})e^{-iEt / \hbar}\label{3.1.18}$ where $\psi(\vec{r})$ is the time-independent wavefunction, which depends only on the space coordinates. Putting Equation $\ref{3.1.18}$ into Equation $\ref{3.1.17}$ and cancelling the exponential factors, we obtain the time-independent Schrödinger equation: $\textcolor{red}{ \underbrace{\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\psi(\vec{r})=E\psi(\vec{r})} _{\text{time-independent Schrödinger equation}}} \label{3.1.19}$ The overall form of Equation $\ref{3.1.19}$ is not unusual or unexpected, as it uses the principle of the conservation of energy. Most of our applications of quantum mechanics to chemistry will be based on this equation (with the exception of spectroscopy). The terms of the time-independent Schrödinger equation can then be interpreted as the total energy of the system, equal to the system kinetic energy plus the system potential energy. In this respect, it is just the same as in classical physics. Time Dependence of the Wavefunctions Notice that the wavefunctions used with the time-independent Schrödinger equation (i.e., $\psi(\vec{r})$) do not have explicit $t$ dependences like the wavefunctions of the time-dependent analog in Equation $\ref{3.1.17}$ (i.e., $\Psi(\vec{r},t)$). That does not imply that there is no time dependence to the wavefunction.
Equation \ref{3.1.18} argues that the time-dependent (i.e., full spatial and temporal) wavefunction ($\Psi(\vec{r},t)$) differs from the time-independent (i.e., spatial only) wavefunction $\psi(\vec{r})$ by a "phase factor" of constant magnitude. Using the Euler relationship in Equation \ref{3.1.4}, the total wavefunction above can be expanded $\Psi(\vec{r},t)=\psi(\vec{r})\left(\cos \dfrac{Et}{\hbar} - i \, \sin \dfrac{Et}{\hbar} \right) \label{eq30}$ This means the total wavefunction has a complex behavior with a real part and an imaginary part. Moreover, using the trigonometric identity $\sin (\theta) = \cos (\theta - \pi/2)$, Equation \ref{eq30} can be further simplified to $\Psi(\vec{r},t)=\psi(\vec{r})\cos \left(\dfrac{Et}{\hbar} \right) - i \psi(\vec{r})\cos \left(\dfrac{Et}{\hbar} - \dfrac{\pi}{2} \right) \nonumber$ This shows that both the real and the imaginary components of the total wavefunction oscillate; the imaginary part oscillates out of phase by $\frac{\pi}{2}$ with respect to the real part. Note that while all wavefunctions have a time-dependence, that dependence may not matter in simple quantum problems, as the next sections discuss, and can often be ignored. Before we embark on this, however, let us pause to comment on the validity of quantum mechanics. Despite its weirdness, its abstractness, and its strange view of the universe as a place of randomness and unpredictability, quantum theory has been subject to intense experimental scrutiny. It has been found to agree with experiments to better than $10^{-10}\%$ for all cases studied so far. When the Schrödinger Equation is combined with a quantum description of the electromagnetic field, a theory known as quantum electrodynamics, the result is one of the most accurate theories of matter that has ever been put forth. Keeping this in mind, let us forge ahead in our discussion of the quantum universe and how to apply quantum theory to both model and real situations.
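The phase-factor behavior described above can be illustrated numerically. In this sketch (NumPy, with toy units $\hbar = 1$ and arbitrary choices for $E$ and $\psi$), the real and imaginary parts of $\Psi$ oscillate while the probability density $|\Psi|^2$ stays fixed:

```python
import numpy as np

hbar = 1.0
E = 2.5                                  # arbitrary energy in toy units
x = np.linspace(0.0, 1.0, 200)
psi = np.sqrt(2.0) * np.sin(np.pi * x)   # a real spatial wavefunction

for t in (0.0, 0.3, 1.7):
    Psi = psi * np.exp(-1j * E * t / hbar)
    # the density never changes in time ...
    assert np.allclose(np.abs(Psi) ** 2, psi ** 2)
    # ... even though the real and imaginary parts oscillate out of phase
    assert np.allclose(Psi.real, psi * np.cos(E * t / hbar))
    assert np.allclose(Psi.imag, -psi * np.sin(E * t / hbar))
```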
Learning Objectives • Classical-Mechanical quantities are represented by linear operators in Quantum Mechanics • Understand that the "algebra" of scalars and functions does not always carry over to operators (specifically, the commutative property) The bracketed object in the time-independent Schrödinger Equation $\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\psi(\vec{r})=E\psi(\vec{r}) \label{3.1.19}$ is called an operator. An operator is a generalization of the concept of a function applied to a function. Whereas a function is a rule for turning one number into another, an operator is a rule for turning one function into another. For the time-independent Schrödinger Equation, the operator of relevance is the Hamiltonian operator (often just called the Hamiltonian) and is the most ubiquitous operator in quantum mechanics. $\hat{H} = -\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r}) \nonumber$ We often (but not always) indicate that an object is an operator by placing a 'hat' over it, e.g., $\hat{H}$. The time-independent Schrödinger Equation can then be simplified from Equation \ref{3.1.19} to $\hat{H} \psi(\vec{r}) = E\psi(\vec{r}) \label{simple}$ Equation \ref{simple} says that the Hamiltonian operator operates on the wavefunction to produce the energy, which is a scalar (i.e., a number, an observable quantity) times the wavefunction. Such an equation, where the operator, operating on a function, produces a constant times the function, is called an eigenvalue equation. The function is called an eigenfunction, and the resulting numerical value is called the eigenvalue. Eigen here is the German word meaning self or own. We will discuss this in detail in later Sections. Fundamental Properties of Operators Most properties of operators are straightforward, but they are summarized below for completeness.
The sum and difference of two operators $\hat{A}$ and $\hat{B}$ are given by $(\hat{A} \pm \hat{B}) f = \displaystyle \hat{A} f \pm \hat{B} f \nonumber$ The product of two operators is defined by $\hat{A} \hat{B} f \equiv \hat{A} [ \hat{B} f ] \nonumber$ Two operators are equal if $\hat{A} f = \hat{B} f \nonumber$ for all functions $f$. The identity operator $\hat{1}$ does nothing (or multiplies by 1) ${\hat 1} f = f \nonumber$ The $n$-th power of an operator $\hat{A}^n$ is defined as $n$ successive applications of the operator, e.g. $\hat{A}^2 f = \hat{A} \hat{A} f \nonumber$ The associative law holds for operators $\hat{A}(\hat{B}\hat{C}) = (\hat{A}\hat{B})\hat{C} \nonumber$ The commutative law does not generally hold for operators. In general, but not always, $\hat{A} \hat{B} \neq \hat{B}\hat{A}. \label{comlaw}$ To help identify whether the inequality in Equation \ref{comlaw} holds for any two specific operators, we define the commutator. Definition: The Commutator It is convenient to define the commutator of $\hat{A}$ and $\hat{B}$ $[\hat{A}, \hat{B}] \equiv \hat{A} \hat{B} - \hat{B} \hat{A} \nonumber$ If $\hat{A}$ and $\hat{B}$ commute, then $[\hat{A} ,\hat{B} ] = 0. \nonumber$ If the commutator is not zero, the order of operating matters and the operators are said to "not commute." Moreover, the commutator obeys the antisymmetry property $[\hat{A} ,\hat{B} ] = - [\hat{B} ,\hat{A} ].
\nonumber$ Linear Operators The action of an operator that turns the function $f(x)$ into the function $g(x)$ is represented by $\hat{A}f(x)=g(x)\label{3.2.1}$ The most common kind of operators encountered are linear operators, which satisfy the following two conditions: $\underset{\text{Condition A}}{\hat{O}(f(x)+g(x)) = \hat{O}f(x)+\hat{O}g(x)} \label{3.2.2a}$ and $\underset{\text{Condition B}}{\hat{O}cf(x) = c \hat{O}f(x)} \label{3.2.2b}$ where • $\hat{O}$ is a linear operator, • $c$ is a constant that can be a complex number ($c = a + ib$), and • $f(x)$ and $g(x)$ are functions of $x$ If an operator fails to satisfy either Equations $\ref{3.2.2a}$ or $\ref{3.2.2b}$ then it is not a linear operator. Example 3.2.1 Is this operator $\hat{O} = -i \hbar \dfrac{d}{dx}$ linear? Solution To confirm that an operator is linear, both conditions in Equations $\ref{3.2.2a}$ and $\ref{3.2.2b}$ must be demonstrated. Condition A (Equation $\ref{3.2.2a}$): $\hat{O}(f(x)+g(x)) = -i \hbar \dfrac{d}{dx} \left( f(x)+g(x)\right) \nonumber$ From basic calculus, we know that we can use the sum rule for differentiation \begin{align*} \hat{O}(f(x)+g(x)) &= -i \hbar \dfrac{d}{dx} f(x) - i \hbar \dfrac{d}{dx} g(x) \[4pt] &= \hat{O}f(x)+\hat{O}g(x) \;\;\; \checkmark \end{align*} \nonumber Condition A is confirmed. Does Condition B (Equation $\ref{3.2.2b}$) hold? $\hat{O}cf(x) = -i \hbar \dfrac{d}{dx} c f(x) \nonumber$ Also from basic calculus, the constant $c$ can be factored out of the derivative \begin{align*}\hat{O}cf(x) &= - c i \hbar\dfrac{d}{dx} f(x) \[4pt] &= c \hat{O}f(x) \;\;\; \checkmark \end{align*} \nonumber Yes. This operator is a linear operator (this is the linear momentum operator). Exercise 3.2.1 Confirm whether the square root operator $\hat{O}f(x) = \sqrt{f(x)}$ is linear or not. Answer To confirm that an operator is linear, both conditions in Equations \ref{3.2.2a} and \ref{3.2.2b} must be demonstrated. Let's look first at Condition B. Does Condition B (Equation $\ref{3.2.2b}$) hold?
$\hat{O}cf(x) = c\hat{O}{ f(x) } \nonumber$ $\sqrt{c f(x) } \neq c\sqrt{f(x)} \nonumber$ Condition B does not hold, therefore the square root operator is not linear. Most operators encountered in quantum mechanics are linear operators. Hermitian Operators An important property of operators is suggested by considering the Hamiltonian for the particle in a box: $\hat{H}=-\dfrac{\hbar^2}{2m}\frac{d^2}{dx^2} \label{1}$ Let $f(x)$ and $g(x)$ be arbitrary functions which obey the same boundary values as the eigenfunctions of $\hat{H}$ (e.g., they vanish at $x = 0$ and $x = a$). Consider the integral $\int_0^a \! f(x) \, \hat{H} \, g(x) \, \mathrm{d}x =-\frac{\hbar^2}{2m} \int_0^a \! f(x) \, g''(x) \, \mathrm{d}x \label{2}$ Now, using integration by parts, $\int_0^a \! f(x) \, g''(x) \, \mathrm{d}x = - \int_0^a \! f'(x) \, g'(x) \, \mathrm{d}x + \, f(x) \, g'(x) \Big|_0^a \label{3}$ The boundary terms vanish by the assumed conditions on $f$ and $g$. A second integration by parts transforms Equation $\ref{3}$ to $\int_0^a \! f''(x) \, g(x) \, \mathrm{d}x \, - \, f'(x) \, g(x) \Big|_0^a \nonumber$ It follows therefore that $\int_0^a \! f(x) \, \hat{H} \, g(x) \, \mathrm{d}x=\int_0^a g(x) \, \hat{H} \, f(x) \, \mathrm{d}x \label{4}$ An obvious generalization for complex functions will read $\int_0^a \! f^*(x) \, \hat{H} \, g(x) \, \mathrm{d}x=\left(\int_0^a g^*(x) \, \hat{H} \, f(x) \, \mathrm{d}x\right)^* \label{5}$ In mathematical terminology, an operator $\hat{A}$ for which $\int \! f^* \, \hat{A} \, g \, \mathrm{d}\tau=\left(\int \! g^* \, \hat{A} \, f \, \mathrm{d}\tau\right)^* \label{6}$ for all functions $f$ and $g$ which obey specified boundary conditions is classified as hermitian or self-adjoint. Evidently, the Hamiltonian is a hermitian operator. It is postulated that all quantum-mechanical operators that represent dynamical variables are hermitian. The term is also used for specific types of matrices in linear algebra courses.
All quantum-mechanical operators that represent dynamical variables are hermitian.
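The hermiticity condition above can be tested numerically. The sketch below (NumPy; the toy units and the particular test functions are arbitrary choices) discretizes the kinetic-energy Hamiltonian on $[0, a]$ and checks Equation 6 for two complex functions that vanish at the endpoints:

```python
import numpy as np

# Discretize H = -(hbar^2/2m) d^2/dx^2 on [0, a] and check that
#   integral f* (H g) dx == ( integral g* (H f) dx )*
# for two complex test functions vanishing at the endpoints.
hbar = m = a = 1.0           # toy units; any positive values behave the same
n = 400
x = np.linspace(0.0, a, n)
dx = x[1] - x[0]

def H(psi):
    """Second-difference kinetic-energy operator with psi(0) = psi(a) = 0."""
    out = np.zeros_like(psi)
    out[1:-1] = -(hbar**2 / (2 * m)) * (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return out

# arbitrary complex test functions obeying the boundary conditions
f = np.sin(np.pi * x / a) * np.exp(2j * np.pi * x / a)
g = (1 + 0.5j) * np.sin(2 * np.pi * x / a)

lhs = dx * np.sum(np.conj(f) * H(g))
rhs = np.conj(dx * np.sum(np.conj(g) * H(f)))
assert np.allclose(lhs, rhs)
```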
Learning Objectives • To recognize that each quantum mechanical observable is determined by solving an eigenvalue problem, with different operators for different observables • Confirm whether a specific wavefunction is an eigenfunction of a specific operator and extract the corresponding observable (the eigenvalue) • To recognize that the Schrödinger equation, just like all measurables, is also an eigenvalue problem, with the eigenvalue ascribed to the total energy • Identify and manipulate several common quantum mechanical operators As per the definition, an operator acting on a function gives another function; however, a special case occurs when the generated function is proportional to the original $\hat{A}\psi \propto \psi\label{3.3.1a}$ This case can be expressed in terms of an equality by introducing a proportionality constant $k$ $\hat{A}\psi = k \psi\label{3.3.1b}$ Not all functions will solve an equation like Equation \ref{3.3.1b}. If a function does, then $\psi$ is known as an eigenfunction and the constant $k$ is called its eigenvalue (these terms are hybrids with German, the purely English equivalents being "characteristic function" and "characteristic value", respectively). Solving eigenvalue problems is discussed in most linear algebra courses. In quantum mechanics, every experimental measurable $a$ is the eigenvalue of a specific operator ($\hat{A}$): $\hat{A} \psi=a \psi \label{3.3.2a}$ The eigenvalues $a$ represent the possible measured values of the $\hat{A}$ operator. Classically, $a$ would be allowed to vary continuously, but in quantum mechanics, $a$ typically has only a restricted set of allowed values (hence the quantum aspect). The time-independent Schrödinger equation is the best known instance of an eigenvalue equation in quantum mechanics, with its eigenvalues corresponding to the allowed energy levels of the quantum system.
${ \left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\psi(\vec{r})=E\psi(\vec{r})} \label{3.3.3}$ The object on the left that acts on $\psi (x)$ is an example of an operator. $\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right] \label{3.3.4}$ In effect, what it says to do is "take the second derivative of $\psi (x)$, multiply the result by $-(\hbar^2 /2m)$ and then add $V(x)\psi (x)$ to the result of that." Quantum mechanics involves many different types of operators. This one, however, plays a special role because it appears on the left side of the Schrödinger equation. It is called the Hamiltonian operator and is denoted as $\hat{H}=-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r}) \label{3.3.5}$ Therefore, the time-dependent Schrödinger equation can be (and more commonly is) written as $\hat{H} \psi (x,t) = i \hbar \dfrac{\partial}{\partial t} \psi(x,t) \label{3.3.6a}$ and the time-independent Schrödinger equation $\hat{H}\psi (x)=E \psi (x) \label{3.3.6b}$ Note that the functional form of Equation \ref{3.3.6b} is the same as the general eigenvalue equation in Equation \ref{3.3.1b} where the eigenvalues are the (allowed) total energies ($E$). The Hamiltonian, named after the Irish mathematician William Rowan Hamilton, comes from the formulation of Classical Mechanics that is based on the total energy, $H = T + V$, rather than Newton's second law, $F = ma$. Equation $\ref{3.3.6b}$ says that the Hamiltonian operator operates on the wavefunction to produce the energy $E$, which is a scalar (e.g., expressed in Joules) times the wavefunction. Correspondence Principle Note that $\hat{H}$ is derived from the classical energy $p^2 /2m+V(x)$ simply by replacing $p \rightarrow -i\hbar(d/dx)$. This is an example of the Correspondence Principle, initially proposed by Niels Bohr, which states that the behavior of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers.
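The replacement recipe $p \rightarrow -i\hbar(d/dx)$ can be exercised symbolically. This sketch (SymPy) builds the momentum and Hamiltonian operators from it, confirms the Hamiltonian form of Equation 3.3.5, and verifies the non-commutativity of position and momentum, $[\hat{x}, \hat{p}] = i\hbar$, from the previous section:

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, hbar = sp.symbols('m hbar', positive=True)
psi = sp.Function('psi')(x)
V = sp.Function('V')

def p_op(g):
    """Linear momentum operator: p -> -i*hbar*d/dx."""
    return -sp.I * hbar * sp.diff(g, x)

def h_op(g):
    """Hamiltonian built from the classical recipe p**2/(2m) + V(x)."""
    return p_op(p_op(g)) / (2 * m) + V(x) * g

# The construction reproduces -(hbar^2/2m) psi'' + V(x) psi
expected = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V(x) * psi
assert sp.simplify(h_op(psi) - expected) == 0

# Position and momentum do not commute: [x, p] psi = i*hbar*psi
commutator = x * p_op(psi) - p_op(x * psi)
assert sp.simplify(commutator - sp.I * hbar * psi) == 0
```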
It is a general principle of Quantum Mechanics that there is an operator for every physical observable. A physical observable is anything that can be measured. If the wavefunction that describes a system is an eigenfunction of an operator, then the value of the associated observable is extracted from the eigenfunction by operating on the eigenfunction with the appropriate operator. The value of the observable for the system is then the eigenvalue, and the system is said to be in an eigenstate. Equation $\ref{3.3.6b}$ states this principle mathematically for the case of energy as the observable. If the wavefunction is not an eigenfunction of the operator, then the measurement will still give an eigenvalue (by definition), but not necessarily the same one for each measurement (this will be discussed in more detail in a later section). Common Operators Although we could theoretically come up with an infinite number of operators, in practice there are a few which are much more important than any others. • Linear Momentum: The linear momentum operator of a particle moving in one dimension (the $x$-direction) is $\hat p_x = -i \hbar \dfrac{\partial}{\partial x} \label{3.3.7}$ and can be generalized in three dimensions: $\hat {\vec{p}} = -i \hbar \nabla \label{3.3.8}$ • Position The position operator of a particle moving in one dimension (the $x$-direction) is $\hat x = x \label{3.3.9D}$ and can be generalized in three dimensions: $\hat {\vec{r}} = \vec{r} \label{3.3.10D}$ where ${\vec{r}} = (x,y,z)$.
• Kinetic Energy Classically, the kinetic energy of a particle moving in one dimension (the $x$-direction), in terms of momentum, is $KE_{classical}= \dfrac{p_x^2}{2m} \label{3.3.9}$ Quantum mechanically, the corresponding kinetic energy operator is $\hat {KE}_{quantum}= -\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2}\label{3.3.10}$ and can be generalized in three dimensions: $\hat {KE}_{quantum}= -\dfrac{\hbar^2}{2m} \nabla^2 \label{3.3.11}$ • Angular Momentum: Angular momentum requires a more complex discussion, but is the cross product of the position operator $\hat{\vec{r}}$ and the momentum operator $\hat p$ $\hat {\vec{L}} = -i \hbar ( \vec{r} \times \nabla) \label{3.3.12}$ • Hamiltonian: The Hamiltonian operator corresponds to the total energy of the system $\hat {H} = - \dfrac{\hbar^2}{2m} \dfrac{{\partial}^2}{{\partial x}^2} + V(x) \label{3.3.13}$ and it represents the total energy of the particle of mass $m$ in the potential $V(x)$. The Hamiltonian in three dimensions is $\hat{H}=-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r}) \label{3.3.5a}$ • Total Energy: The energy operator from the time-dependent Schrödinger equation $i \hbar \dfrac{\partial}{\partial t} \Psi(x,t)= \hat {H}\Psi(x,t) \label{3.3.14}$ The right hand side of Equation $\ref{3.3.14}$ involves the Hamiltonian operator of Equation $\ref{3.3.5a}$. In addition to determining system energies, the Hamiltonian operator dictates the time evolution of the wavefunction $\hat {H} \Psi(x,t) = i \hbar \dfrac{\partial \Psi(x,t)}{\partial t} \label{3.3.15}$ This aspect will be discussed in more detail elsewhere. Eigenstate, Eigenvalues, Wavefunctions, Measurables and Observables In general, the wavefunction gives the "state of the system" for the system under discussion. It stores all the information available to the observer about the system. Often in discussions of quantum mechanics, the terms eigenstate and wavefunction are used interchangeably.
The term eigenvalue is used to designate the value of the measurable quantity associated with the wavefunction. • If you want to measure the energy of a particle, you have to operate on the wavefunction with the Hamiltonian operator (Equation \ref{3.3.5}). • If you want to measure the momentum of a particle, you have to operate on the wavefunction with the momentum operator (Equation \ref{3.3.7}). • If you want to measure the position of a particle, you have to operate on the wavefunction with the position operator (Equation \ref{3.3.9D}). • If you want to measure the kinetic energy of a particle, you have to operate on the wavefunction with the kinetic energy operator (Equation \ref{3.3.10}). When discussing the eigenstates of the Hamiltonian ($\hat{H}$), the associated eigenvalues represent energies, and within the context of the momentum operators, the associated eigenvalues refer to the momentum of the particle. However, not all wavefunctions ($\psi$) are eigenstates of a given operator, and if they are not, they can usually be written as superpositions of its eigenstates ($\phi_i$). $\psi = \sum_i c_i \phi_i \nonumber$ This will be discussed in more detail in later sections. While a wavefunction may not be an eigenstate of the operator for an observable, when that observable is measured, the wavefunction collapses into one of that operator's eigenstates, and only eigenvalues can be observed. Another way to say this is that the wavefunction "collapses" into an eigenstate of the observable. Because quantum mechanical operators have different forms, their associated eigenstates are similarly often (i.e., most of the time) different. For example, a wavefunction that is an eigenstate of total energy is generally not an eigenstate of momentum: if a wavefunction is an eigenstate of one operator (e.g., momentum), that state is not necessarily an eigenstate of a different operator (e.g., energy), although it can be.
The wavefunction immediately after a measurement is an eigenstate of the operator associated with this measurement. What happens to the wavefunction after the measurement is a different topic. Example 3.3.1 Determine whether the following wavefunctions are eigenstates of linear momentum, of kinetic energy, of both, or of neither: 1. $\psi = A \sin(ax)$ 2. $\psi = N e^{-ix/\hbar}$ Strategy This question is asking if the eigenvalue equation holds for the operators and these wavefunctions. This is just asking if these wavefunctions are solutions to Equation \ref{3.3.1b} using the operators in Equations \ref{3.3.7} and \ref{3.3.10}, i.e., are these equations true: $\hat p_x \psi = p_x \psi \label{ex1}$ $\hat {KE} \psi = KE \psi \label{ex2}$ where $p_x$ and $KE$ are the measurables (eigenvalues) for these operators. Solution a. Let's evaluate the left side of the linear momentum eigenvalue problem (Equation \ref{ex1}) $-i \hbar \dfrac{\partial}{\partial x} A \sin(ax) = -i \hbar Aa \cos(ax) \nonumber$ and compare to the right side of Equation \ref{ex1} $p_x A\sin(ax) \nonumber$ These are not the same, so this wavefunction is not an eigenstate of momentum. Let's look at the left side of the kinetic energy eigenvalue problem (Equation \ref{ex2}) \begin{align*} -\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} A \sin(ax) &= -\dfrac{\hbar^2}{2m} \dfrac{\partial}{\partial x} Aa \cos(ax) \[4pt] &= +\dfrac{\hbar^2}{2m} Aa^2 \sin(ax) \end{align*} \nonumber and compare to the right side $KE A\sin(ax) \nonumber$ These are the same, so this specific wavefunction is an eigenstate of kinetic energy. Moreover, the measured kinetic energy will be $KE = +\dfrac{\hbar^2}{2m} a^2 \nonumber$ b.
Let's look at the left side of Equation \ref{ex1} for linear momentum $-i \hbar \dfrac{\partial}{\partial x} N e^{-ix/\hbar} = -N e^{-ix/\hbar} \nonumber$ and the right side of Equation \ref{ex1} $p_x N e^{-ix/\hbar} \nonumber$ These have the same form, so this wavefunction is an eigenstate of momentum with eigenvalue $p_x = -1$ (note that the normalization constant $N$ is common to both sides and cannot appear in the eigenvalue). Let's look at the left side of Equation \ref{ex2} for kinetic energy \begin{align*} -\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} N e^{-ix/\hbar} &= + i \dfrac{\hbar}{2m} \dfrac{\partial}{\partial x} N e^{-ix/\hbar} \[4pt] &= + \dfrac{1}{2m} N e^{-ix/\hbar} \end{align*} \nonumber and the right side $KE N e^{-ix/\hbar} \nonumber$ These are the same, so this wavefunction is an eigenstate of kinetic energy. And the measured kinetic energy will be $KE = \dfrac{1}{2m} \nonumber$ This wavefunction is an eigenstate of both momentum and kinetic energy. Exercise 3.3.1 Is $\psi = M e^{-bx}$ an eigenstate of linear momentum and of kinetic energy (or neither, or both)? Answer $\psi$ is an eigenstate of linear momentum with an eigenvalue of $ib\hbar$ and also an eigenstate of kinetic energy with an eigenvalue of $-\dfrac{\hbar^2 b^2}{2m}$.
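The example and exercise above can be double-checked with SymPy; this sketch reads each eigenvalue off as the ratio of the operated wavefunction to the original wavefunction:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, m, b, N, M = sp.symbols('hbar m b N M', positive=True)

def p_op(g):
    """Linear momentum operator -i*hbar*d/dx."""
    return -sp.I * hbar * sp.diff(g, x)

def ke_op(g):
    """Kinetic energy operator -(hbar^2/2m) d^2/dx^2."""
    return -hbar**2 / (2 * m) * sp.diff(g, x, 2)

# part (b) of the example: psi = N*exp(-i*x/hbar)
psi_b = N * sp.exp(-sp.I * x / hbar)
assert sp.simplify(p_op(psi_b) / psi_b) == -1                  # p_x = -1
assert sp.simplify(ke_op(psi_b) / psi_b - 1 / (2 * m)) == 0    # KE = 1/(2m)

# the exercise: psi = M*exp(-b*x)
psi_e = M * sp.exp(-b * x)
assert sp.simplify(p_op(psi_e) / psi_e - sp.I * b * hbar) == 0
assert sp.simplify(ke_op(psi_e) / psi_e + hbar**2 * b**2 / (2 * m)) == 0
```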
Learning Objectives • To understand that wavefunctions can have probabilistic interpretations. • To calculate probabilities directly from a wavefunction For a single-particle system, the wavefunction $\Psi(\vec{r},t)$, or $\psi(\vec{r})$ for the time-independent case, represents the amplitude of the still vaguely defined matter waves. Since wavefunctions can in general be complex functions, the physical significance cannot be found from the function itself, because the $\sqrt {-1}$ is not a property of the physical world. Rather, the physical significance is found in the product of the wavefunction and its complex conjugate, i.e. the absolute square of the wavefunction, which is also called the square of the modulus (also called absolute value). \begin{align} P(\vec{r},t) &= \Psi^*(\vec{r},t) \Psi(\vec{r},t) \[4pt] &= {|\Psi(\vec{r},t)|}^2 \label {3.4.1} \end{align} where $\vec{r}$ is a vector $(x, y, z)$ specifying a point in three-dimensional space. The square is used, rather than the modulus itself, just like the intensity of a light wave depends on the square of the electric field. In 1926, Born proposed the most commonly accepted interpretation of the wavefunction: the square of the modulus (Equation $\ref{3.4.1}$) is proportional to the probability density (probability per unit volume) that the particle is in the volume $d\tau$ located at $\vec{r}$. Since the wavefunction represents the wave properties of matter, the probability density $P(\vec{r},t)$ will also exhibit wave-like behavior. Probability density is the three-dimensional analog of the diffraction pattern that appears on the two-dimensional screen in the double-slit diffraction experiment for electrons. The idea that we can understand the world of atoms and molecules only in terms of probabilities is disturbing to some, who are seeking more satisfying descriptions through ongoing research.
The Born interpretation therefore calls the wavefunction the probability amplitude, the absolute square of the wavefunction is called the probability density, and the probability density times a volume element in three-dimensional space ($d\tau$) is the probability $P$. The probability that a single quantum particle moving in one spatial dimension will be found in a region $x\in[a,b]$ if a measurement of its location is performed is $P(x\in[a,b])=\int_{a}^{b}|\psi (x)|^2 dx \label{3.4.2}$ In three dimensions, Equation $\ref{3.4.2}$ becomes $P(\vec{r} \in V)=\int_{V}|\psi (\vec{r})|^2 d\tau \label{3.4.3}$ This integration extends over a specified volume ($V$) with the symbol $d\tau$ designating the appropriate volume element (including a Jacobian) of the coordinate system adopted: • Cartesian: $d\tau = dx\,dy\,dz \nonumber$ • Spherical: $d\tau = r^2 \sin \theta \,dr\, d\theta \,d \phi \nonumber$ • Cylindrical: $d\tau = r \,dr\, d \phi \,dz \nonumber$ For rectilinear Cartesian space, Equation $\ref{3.4.3}$ can be expanded with each dimension explicitly indicated: $P(\vec{r} \in V)=\int_{a_x}^{b_x} \int_{a_y}^{b_y} \int_{a_z}^{b_z}|\psi (x,y,z)|^2 dx\,dy\,dz \label{3.4.4}$ where the limits of integration are selected to encompass the volume $V$ of consideration. The Born interpretation (Equation \ref{3.4.1}) relating the wavefunction to probability places certain demands on the mathematical behavior of wavefunctions: not just any mathematical function can be a valid wavefunction. Required Properties of Wavefunction • The wavefunction must be a single-valued function of all its coordinates, since the probability density ought to be uniquely determined at each point in space. • The wavefunction must be finite everywhere, since an infinite probability has no meaning. • The wavefunction must be continuous everywhere, as expected for a physically-meaningful probability density.
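Equation 3.4.2 is easy to put into practice numerically. The sketch below (NumPy) uses the normalized function $\psi(x) = \sqrt{2/L}\,\sin(\pi x/L)$ on $[0, L]$, chosen purely as an example (it will reappear for the particle in a box), and integrates $|\psi|^2$ over a region:

```python
import numpy as np

L = 1.0

def prob(a, b, n=200001):
    """P(x in [a, b]) = integral_a^b |psi(x)|^2 dx by the trapezoidal rule."""
    x = np.linspace(a, b, n)
    density = (2.0 / L) * np.sin(np.pi * x / L) ** 2   # |psi|^2
    dx = x[1] - x[0]
    return np.sum((density[:-1] + density[1:]) / 2.0) * dx

assert np.isclose(prob(0, L), 1.0)         # normalized over the whole box
assert np.isclose(prob(0, L / 2), 0.5)     # density is symmetric about L/2
# analytic value for [0, L/4] is 1/4 - 1/(2*pi), roughly 0.0908
assert np.isclose(prob(0, L / 4), 0.25 - 1 / (2 * np.pi), atol=1e-8)
```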
The conditions that the wavefunction be single-valued, finite and continuous--in short, "well behaved"--lead to restrictions on solutions of the Schrödinger equation such that only certain values of the energy and other dynamical variables are allowed. This is called quantization and is the feature that gives quantum mechanics its name. It is important to note that this interpretation does not mean the particle is distributed over a large region as a sort of "charge cloud". The wavefunction describes the motion of an electron that behaves like a wave and satisfies a wave equation. This is akin to how a grade distribution in a large class does not represent a smearing of grades for a single student, but only makes sense when taking into account that the distribution is the result of many measurables (e.g., student performances). Example 3.4.1 Show that the square of the modulus of $\Psi(\vec{r},t) = \psi(\vec{r}) e^{-i\omega t}$ is time independent. What insight regarding stationary states do you gain from this proof? Example 3.4.2 According to the Born interpretation, what is the physical significance of $e\psi^*(r_0)\psi(r_0)\,d\tau$?
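The probability recipe of Equation \ref{3.4.2} is easy to try numerically. The short Python sketch below uses a hypothetical normalized Gaussian wavefunction (an illustrative choice, not one derived in this section) and integrates $|\psi|^2$ over an interval:

```python
# A numerical sketch of Equation 3.4.2: the probability of finding the
# particle in [a, b] is the integral of |psi|^2 over that interval.
# The wavefunction is an illustrative normalized Gaussian,
# psi(x) = (2/pi)^(1/4) exp(-x^2).
import numpy as np
from scipy.integrate import quad

def psi(x):
    return (2 / np.pi) ** 0.25 * np.exp(-x ** 2)

def probability(a, b):
    """P(x in [a, b]) = integral of |psi(x)|^2 dx from a to b."""
    value, _ = quad(lambda x: psi(x) ** 2, a, b)
    return value

# Integrating over all space must give 1 (total probability).
print(probability(-np.inf, np.inf))   # -> ~1.0
# Probability of finding the particle near the center of the packet.
print(probability(-0.5, 0.5))         # -> ~0.683
```

The first integral checks the normalization of the assumed wavefunction; the second shows how the Born rule converts $|\psi|^2$ into a probability for a finite region.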
Learning Objectives • Solve the particle-in-a-box model used to describe a particle trapped in a 1-D well • Characterize the particle-in-a-box eigenstates (i.e., wavefunctions) and the eigenenergies as a function of the quantum number • Demonstrate that the eigenstates are orthogonal The particle in the box model system is the simplest non-trivial application of the Schrödinger equation, but one which illustrates many of the fundamental concepts of quantum mechanics. For a particle moving in one dimension (again along the x-axis), the Schrödinger equation can be written $-\dfrac{\hbar^2}{2m}\psi {}''(x)+ V (x)\psi (x) = E \psi (x) \nonumber$ Assume that the particle can move freely between two endpoints $x = 0$ and $x = L$, but cannot penetrate past either end. This is equivalent to a potential energy dependent on $x$ with $V(x)=\begin{cases} 0 & 0\leq x\leq L \\ \infty & x< 0 \;\text{and}\; x> L \end{cases} \nonumber$ This potential is represented in Figure 3.5.1 . The infinite potential energy constitutes an impenetrable barrier since the particle would have an infinite potential energy if found there, which is clearly impossible. The particle is thus bound to a "potential well" since the particle cannot penetrate beyond $x = 0$ or $x = L$ $\psi (x)=0\; \; \; \text{for} \; \; x<0\; \; \text{and}\; \; x>L\label{3.5.3}$ By the requirement that the wavefunction be continuous, it must be true as well that $\psi (0)=0\; \; \; \text{and}\; \; \; \psi (L)=0\label{3.5.4}$ which constitutes a pair of boundary conditions on the wavefunction within the box. Inside the box, $V(x) = 0$, so the Schrödinger equation reduces to the free-particle form: $-\dfrac{\hbar^2}{2m}\psi{}''(x)=E\psi (x) \label{3.5.5}$ with $0\leq x\leq L$.
We again have the differential equation $\psi {}''(x) +k^2\psi (x)=0 \label{3.5.6}$ with $k^2 = \dfrac{2mE}{\hbar^2} \label{3.5.6a}$ The general solution can be written $\psi (x)=A\: \sin\; kx\,+\, B\: \cos\; kx\label{3.5.7}$ where $A$ and $B$ are constants to be determined by the boundary conditions in Equation $\ref{3.5.4}$. By the first condition, we find $\psi (0)=A\, \sin\, 0\, +\, B\, \cos\, 0\, =\, B\,= 0\label{3.5.8}$ The second boundary condition at $x = L$ then implies $\psi (L)=A\, \sin\, kL\,=\, 0\label{3.5.9}$ It is assumed that $A \neq 0$, for otherwise $\psi(x)$ would be zero everywhere and the particle would disappear (i.e., the trivial solution). The condition that $\sin kL = 0$ implies that $kL\, =\, n\pi \label{3.5.10}$ where $n$ is an integer, positive, negative or zero. The case $n = 0$ must be excluded, for then $k = 0$ and again $\psi(x)$ would vanish everywhere. Eliminating $k$ between Equations $\ref{3.5.6a}$ and $\ref{3.5.10}$, we obtain $E_{n}=\dfrac{\hbar^2\pi^2}{2mL^2}\, n^2=\dfrac{h^2}{8mL^2}n^2 \label{3.5.11}$ with $n=1,2,3...$. These are the only values of the energy which allow solutions of the Schrödinger Equation $\ref{3.5.5}$ consistent with the boundary conditions in Equation $\ref{3.5.4}$. The integer $n$, called a quantum number, is appended as a subscript on $E$ to label the allowed energy levels. Negative values of $n$ add nothing new because the energies in Equation $\ref{3.5.11}$ depend on $n^2$. Figure 3.5.2 shows part of the energy-level diagram for the particle in a box. The occurrence of discrete or quantized energy levels is characteristic of a bound system, that is, one confined to a finite region in space. For the free particle, the absence of confinement allowed an energy continuum. Note that, in both cases, the number of energy levels is infinite: denumerably infinite for the particle in a box, but nondenumerably infinite for the free particle.
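Equation \ref{3.5.11} is easy to evaluate numerically. The Python sketch below computes the first few levels for an electron in a 1 nm box; the box length and the choice of an electron are illustrative assumptions, not values from the text:

```python
# Numerical sketch of Equation 3.5.11: E_n = h^2 n^2 / (8 m L^2).
H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # J per eV

def energy(n, m, L):
    """Particle-in-a-box energy of level n (n = 1, 2, 3, ...) in joules."""
    return H ** 2 * n ** 2 / (8 * m * L ** 2)

L_box = 1e-9  # an illustrative 1 nm box
for n in (1, 2, 3):
    E = energy(n, M_E, L_box)
    print(f"n={n}: {E:.3e} J = {E / EV:.3f} eV")
# The levels grow as n^2, so the spacing between adjacent levels widens.
```

For this box the ground state comes out near 0.38 eV, and the $n^2$ scaling of Equation \ref{3.5.11} is visible directly in the ratios of successive energies.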
The particle in a box assumes its lowest possible energy when $n = 1$, namely $E_{1}=\dfrac{h^2}{8mL^2}\label{3.5.12}$ The state of lowest energy for a quantum system is termed its ground state. Zero Point Energy An interesting point is that $E_{1} > 0$, whereas the corresponding classical system would have a minimum energy of zero. This is a recurrent phenomenon in quantum mechanics. The residual energy of the ground state, that is, the energy in excess of the classical minimum, is known as zero point energy. In effect, the kinetic energy, hence the momentum, of a bound particle cannot be reduced to zero. The minimum value of momentum is found by equating $E_{1}$ to $p^2/2m$, giving $p_{min} = \pm h/2L$. This can be expressed as an uncertainty in momentum given by $\Delta p\approx h/L$. Coupling this with the uncertainty in position, $\Delta x\approx L$, from the size of the box, we can write $\Delta x\Delta p\approx h\label{3.5.13}$ This is in accord with the Heisenberg uncertainty principle. The particle-in-a-box eigenfunctions are obtained from Equation $\ref{3.5.7}$ with $B = 0$ and $k = n\pi/L$, in accordance with Equation $\ref{3.5.10}$ $\psi _{n}(x)=A\, \sin\dfrac{n\pi x}{L} \label{3.5.14}$ with $n=1,2,3...$. These, like the energies, can be labeled by the quantum number $n$. The constant $A$, thus far arbitrary, can be adjusted so that $\psi _{n}(x)$ is normalized. The normalization condition is, in this case, $\int_{0}^{L}\left[\psi_{n}(x) \right]^2\,dx=1\label{3.5.15}$ the integration running over the domain of the particle $0\leq x\leq L$.
Substituting Equation $\ref{3.5.14}$ into Equation $\ref{3.5.15}$, \begin{align} A^2\: \int_{0}^{L}\, \sin^2\, \dfrac{n\pi x}{L}dx &=A^2\dfrac{L}{n\pi}\int_{0}^{n\pi}\sin^2\, \theta \,d\theta \\[4pt] &=A^2\dfrac{L}{2}=1\label{3.5.16} \end{align} We have made the substitution $\theta=n\pi x/L$ and used the fact that the average value of $\sin^2 \theta$ over an integral number of half wavelengths equals 1/2 (alternatively, one could refer to standard integral tables). From Equation $\ref{3.5.16}$, we can identify the general normalization constant $A = \sqrt{ \dfrac{2}{L}} \nonumber$ for all values of $n$. Finally we can write the normalized eigenfunctions: $\psi _{n}(x)=\sqrt{\dfrac{2}{L}} \sin \dfrac{n\pi x}{L} \label{3.5.17}$ with $n=1,2,3...$. The first few eigenfunctions and the corresponding probability distributions are plotted in Figure 3.5.3 . There is a close analogy between the states of this quantum system and the modes of vibration of a violin string. The patterns of standing waves on the string are, in fact, identical in form to the wavefunctions in Equation $\ref{3.5.17}$. Nodes and Curvature A significant feature of the particle-in-a-box quantum states is the occurrence of nodes. These are points, other than the two end points (which are fixed by the boundary conditions), at which the wavefunction vanishes. At a node there is exactly zero probability of finding the particle. The nth quantum state has, in fact, $n-1$ nodes. It is generally true that the number of nodes increases with the energy of the quantum state, which can be rationalized by the following qualitative argument. As the number of nodes increases, so does the number and the steepness of the 'wiggles' in the wavefunction. It's like skiing down a slalom course. Accordingly, the average curvature, given by the second derivative, must increase. But the kinetic energy operator is proportional to the second derivative of the wavefunction. Therefore, the more nodes, the higher the energy.
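The node-counting rule can be illustrated with a quick numerical sketch that counts sign changes of $\psi_n$ in the interior of the box ($L = 1$ is an arbitrary convenience here):

```python
# Numerical sketch of the node-counting rule: psi_n has n - 1 interior
# nodes (sign changes), so the node count tracks the energy ordering.
import numpy as np

L = 1.0
x = np.linspace(0, L, 10001)[1:-1]   # interior grid points only

def count_nodes(n):
    """Count interior sign changes of psi_n = sin(n pi x / L)."""
    psi = np.sin(n * np.pi * x / L)
    return int(np.sum(psi[:-1] * psi[1:] < 0))

print([count_nodes(n) for n in (1, 2, 3, 4)])  # -> [0, 1, 2, 3]
```

The $n$th state indeed shows $n-1$ nodes, matching the qualitative argument that more nodes mean more curvature and hence higher energy.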
This will prove to be an invaluable guide in more complex quantum systems. Example 3.5.1 : Excited State Probabilities For a particle in a one-dimensional box of length $L$, the second excited state wavefunction ($n=3$) is $\psi_3=\sqrt{\dfrac{2}{L}}\sin{\dfrac{3\pi x}{L}} \nonumber$ 1. What is the probability that the particle is in the left half of the box? 2. What is the probability that the particle is in the middle third of the box? Solution The probability that the particle will be found between $a$ and $b$ is $P(a,b)=\int^b_a \psi^2\,dx \nonumber$ For this problem, $\psi_3=\sqrt{\dfrac{2}{L}}\sin{\dfrac{3\pi x}{L}} \nonumber$ therefore, \begin{align*} P(a,b) &=\dfrac{2}{L} \int^b_a \sin^2 {\dfrac{3\pi x}{L}}\,dx \\[4pt] &=\left.\dfrac{2}{L}\left(\dfrac{x}{2}-\dfrac{L\sin \left( \dfrac{6\pi x}{L}\right)}{12\pi} \right) \right|^b_a \\[4pt] &=\dfrac{b-a}{L}-\dfrac{1}{6\pi} \left[ \sin \left(\dfrac{6\pi b}{L}\right) - \sin \left( \dfrac{6\pi a}{L} \right) \right] \end{align*} \nonumber (a) The probability that the particle is in the left half of the box is \begin{align*} P\left(0,\dfrac{L}{2}\right) &=\dfrac{\dfrac{L}{2}-0}{L}-\dfrac{1}{6\pi}\left[ \sin \left( \dfrac{6\pi \left(\dfrac{L}{2}\right)}{L}\right) - \sin \left( \dfrac{6\pi (0)}{L}\right) \right] \\[4pt] &= \dfrac{1}{2} \end{align*} \nonumber (b) The probability that the particle is in the middle third of the box is \begin{align*} P\left(\dfrac{L}{3},\dfrac{2L}{3}\right) &=\dfrac{\dfrac{2L}{3}-\dfrac{L}{3}}{L}-\dfrac{1}{6\pi}\left[{\sin {\dfrac{6\pi \left(\dfrac{2L}{3}\right)}{L} }-\sin {\dfrac{6\pi \left(\dfrac{L}{3}\right) }{L} }}\right] \\[4pt] &=\dfrac{1}{3} \end{align*} \nonumber Exercise 3.5.1 : Ground State Probability For a particle in a one-dimensional box, the ground state wavefunction is $\psi_1 = \sqrt{\dfrac{2}{L}} \sin\dfrac{x\pi}{L} \nonumber$ What is the probability that the particle is in the left half of the box in the ground state?
Answer \begin{align} P\left(0,\frac{L}{2}\right) &= \dfrac{2}{L} \int^{\frac{L}{2}}_0 \sin^2 \dfrac{\pi x}{L}\, dx \nonumber\\ &=\dfrac{2}{L} \left[ \dfrac{x}{2}-\dfrac{L}{4\pi}\sin \dfrac{2\pi x}{L} \right]^{\frac{L}{2}}_0 \nonumber\\ &=\dfrac{2}{L}\left( \dfrac{L}{4}-\dfrac{L}{4\pi}\sin \pi \right) \nonumber\\ &=\dfrac{1}{2} \nonumber \end{align} \nonumber This is the same answer as for the $\psi_3$ state in Example 3.5.1 . This is because the eigenstate squared (i.e., the probability density) for the particle in a 1D box will always be symmetric around the center of the box. So there will be equal probability to be on either side (i.e., no side is favored). Time Dependence and Complex Nature of Wavefunctions Recall that the time-dependence of the wavefunction with a time-independent potential was discussed in Section 3.1 and is expressed as $\Psi(x,t)=\psi(x)e^{-iEt / \hbar} \nonumber$ so for the particle in a box, these are $\Psi _{n}(x,t)=\sqrt{\dfrac{2}{L}} \sin \dfrac{n\pi x}{L}\, e^{-iE_nt / \hbar} \label{PIBtime}$ with $E_n$ given by Equation \ref{3.5.11}. The phase part of Equation \ref{PIBtime} can be expanded into a real and an imaginary component. So the total wavefunction for a particle in a box is $\Psi(x,t)= \underbrace{\left(\sqrt{\dfrac{2}{L}} \sin \dfrac{n\pi x}{L}\right) \left(\cos \dfrac{E_nt}{\hbar}\right)}_{\text{real part}} - \underbrace{i \left(\, \sqrt{\dfrac{2}{L}} \sin \dfrac{n\pi x}{L} \right) \left(\sin \dfrac{E_nt}{\hbar} \right) }_{\text{imaginary part}} \nonumber$ which can be simplified (slightly) to $\Psi(x,t)= \underbrace{\left(\sqrt{\dfrac{2}{L}} \sin \dfrac{n\pi x}{L}\right) \left(\cos \dfrac{E_nt}{\hbar}\right)}_{\text{real part}} - \underbrace{i \left(\, \sqrt{\dfrac{2}{L}} \sin \dfrac{n\pi x}{L} \right) \cos \left( \dfrac{E_nt}{\hbar} - \dfrac{\pi}{2} \right) }_{\text{imaginary part}} \nonumber$ As discussed previously, the imaginary part of the total wavefunction oscillates $\pi/2$ out of phase with respect to the real part.
This is demonstrated in the time-dependent behavior of the first three eigenfunctions in Figure 3.5.4 . Note that as $n$ increases, the energy of the wavefunction increases (Equation \ref{3.5.11}), the number of nodes and antinodes increases, and the frequency of oscillation of the wavefunction increases. It is generally true in quantum systems (not just for particles in boxes) that the number of nodes in a wavefunction increases with the energy of the quantum state. Orthonormality of the Eigenstates Another important property of the eigenfunctions (Equation $\ref{3.5.17}$) applies to the integral over a product of two different eigenfunctions. It is easy to see from Figure 3.5.5 that the integral $\int_{0}^{L}\psi _{2}(x)\psi _{1}(x)dx=0 \label{3.5.18}$ The integral in Equation \ref{3.5.18} is zero whenever the two eigenstates differ, provided it is taken over the entire range of the system (from $-\infty$ to $\infty$ for a 1-D particle in a box, although this reduces to the region from $0$ to $L$ since the eigenstates are zero outside of the box). To prove Equation \ref{3.5.18} for the particle in a box model, we can use the trigonometric identity $\sin\,\alpha \: \sin\, \beta =\dfrac{1}{2}\left[\cos(\alpha -\beta )-\cos(\alpha +\beta )\right] \label{trig}$ to show that $\int_{0}^{L}\psi _{m}(x)\psi _{n}(x)dx=0\: \: \: \text{if}\: \: \: m \neq n\label{3.5.19}$ This property is called orthogonality. We will show in the next chapter that this is a general property of quantum-mechanical eigenfunctions.
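Equation \ref{3.5.19} can also be verified directly for specific pairs of states. A minimal symbolic sketch with sympy (assuming it is available) carries out the overlap integrals exactly:

```python
# Symbolic check (sympy) of orthogonality: the overlap integral of two
# different particle-in-a-box eigenfunctions over [0, L] vanishes.
import sympy as sp

x, L = sp.symbols('x L', positive=True)

def overlap(m, n):
    """<psi_m | psi_n> for normalized box eigenfunctions."""
    psi_m = sp.sqrt(2 / L) * sp.sin(m * sp.pi * x / L)
    psi_n = sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)
    return sp.integrate(psi_m * psi_n, (x, 0, L))

print(overlap(1, 2), overlap(2, 3), overlap(1, 3))  # -> 0 0 0
print(overlap(1, 1))                                # -> 1
```

The zero results for $m \neq n$ are exact, not numerical approximations, which is what the trigonometric-identity proof above guarantees.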
The normalization (Equation $\ref{3.5.15}$) together with the orthogonality (Equation $\ref{3.5.19}$) can be combined into a single relationship $\int_{0}^{L}\psi _{m}(x)\psi _{n}(x)dx=\delta _{mn}\label{3.5.20}$ in terms of the Kronecker delta $\delta _{mn}=\begin{cases} 1 & \text{if} \,\, m=n \\[4pt] 0 & \text{if} \,\, m\neq n \end{cases}\label{3.5.21}$ A set of functions $\begin{Bmatrix}\psi_{n}\end{Bmatrix}$ which obeys Equation $\ref{3.5.20}$ is called orthonormal. Example 3.5.2 Evaluate 1. $\langle \psi_3| \psi_3 \rangle$ 2. $\langle \psi_4| \psi_4 \rangle$ 3. $\langle \psi_3| \psi_4 \rangle$ 4. $\langle \psi_4| \psi_3 \rangle$ for the normalized wavefunctions: $|\psi_3 \rangle = \sqrt{ \dfrac{2}{L}} \sin\dfrac{3\pi x}{L} \nonumber$ and $|\psi_4 \rangle = \sqrt{ \dfrac{2}{L}} \sin\dfrac{4\pi x}{L}\nonumber$ Strategy These are four different integrals and we can solve them directly or use orthonormality (Equation \ref{3.5.20}) to evaluate them. a. \begin{align*} \langle \psi_3| \psi_3 \rangle &= \int_{-\infty}^{+\infty} \left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{3\pi x}{L} \right)\left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{3\pi x}{L} \right)\, dx \\[4pt] &= \dfrac{2}{L} \int_{-\infty}^{+\infty} \sin^2 \dfrac{3\pi x}{L} \, dx \end{align*} \nonumber The integrand is an even function, so the integral cannot be dismissed as zero by symmetry. We can use the trigonometric relationship in Equation \ref{trig} (with $\alpha = \beta$) to get $\dfrac{2}{L} \int_{-\infty}^{+\infty} \sin^2 \dfrac{3\pi x}{L} \, dx = \dfrac{2}{L} \int_{-\infty}^{+\infty} \dfrac{1}{2} \left(1 - \cos \dfrac{6\pi x}{L}\right) \, dx \nonumber$ and we can continue the fun. However, there is no need, since we can recognize that $\langle \psi_3| \psi_3 \rangle$ is 1 by the normalization criterion, which is folded into the orthonormality condition (Equation \ref{3.5.20}). Therefore $\langle \psi_3| \psi_3 \rangle = 1$. b.
\begin{align*} \langle \psi_4| \psi_4 \rangle &= \int_{-\infty}^{+\infty} \left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{4\pi x}{L} \right)\left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{4\pi x}{L} \right)\, dx \\[4pt] &= \dfrac{2}{L} \int_{-\infty}^{+\infty} \sin^2 \dfrac{4\pi x}{L} \, dx \end{align*} \nonumber We can expand and solve, but again there is no need. The wavefunctions are normalized, therefore $\langle \psi_4| \psi_4 \rangle = 1$. c. \begin{align*} \langle \psi_3| \psi_4 \rangle &= \int_{-\infty}^{+\infty} \left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{3\pi x}{L} \right)\left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{4\pi x}{L} \right)\, dx \\[4pt] &= \dfrac{2}{L} \int_{-\infty}^{+\infty} \sin \left( \dfrac{3\pi x}{L} \right) \sin \left( \dfrac{4\pi x}{L} \right)\, dx \end{align*} \nonumber We can expand this integral and evaluate it, but since the integrand is odd about the center of the box (an even function times an odd function), this integral is zero. Alternatively, we can use the orthogonality criterion folded into the orthonormality condition (Equation \ref{3.5.20}). d. \begin{align*} \langle \psi_4| \psi_3 \rangle &= \int_{-\infty}^{+\infty} \left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{4\pi x}{L} \right)\left( \sqrt{ \dfrac{2}{L}} \sin\dfrac{3\pi x}{L} \right)\, dx \\[4pt] &= \dfrac{2}{L} \int_{-\infty}^{+\infty} \sin \left(\dfrac{4\pi x}{L} \right) \sin \left( \dfrac{3\pi x}{L} \right)\, dx \end{align*} \nonumber This is the same integrand as in part c, so the same symmetry argument gives zero. Moreover, since the wavefunctions are real, $\langle \psi_4| \psi_3 \rangle = \langle \psi_3| \psi_4 \rangle \nonumber$ which also means $\langle \psi_4| \psi_3 \rangle = 0 \nonumber$ from the results of part c.
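The orthonormality relation can also be seen all at once numerically: sample the first few eigenfunctions on a grid and tabulate every overlap integral as a matrix, which should reproduce the Kronecker delta. The grid-based rectangle-rule integration and the choice $L = 1$ are conveniences of this sketch:

```python
# Numerical sketch of Equation 3.5.20: the overlap matrix
# S[m][n] = <psi_m | psi_n> should reproduce the Kronecker delta.
import numpy as np

L = 1.0
N = 20000                       # number of grid intervals
x = np.linspace(0.0, L, N + 1)
dx = x[1] - x[0]

# Rows are psi_1 ... psi_4 sampled on the grid.
basis = np.array([np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
                  for n in range(1, 5)])

# Rectangle-rule overlap integrals (the endpoints vanish, so this is
# effectively the trapezoidal rule here).
S = basis @ basis.T * dx
print(np.round(S, 6))  # -> 4x4 identity matrix
```

The ones on the diagonal are the normalization conditions and the zeros off the diagonal are the orthogonality conditions, packaged together exactly as in the Kronecker-delta relation.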
Learning Objectives • Calculate the probability of an event from the wavefunction • Understand the utility and importance of normalizing wavefunctions • Demonstrate how to normalize an arbitrary wavefunction Extracting Probabilities Since wavefunctions can in general be complex functions, the physical significance of wavefunctions cannot be found from the functions themselves because the $\sqrt {-1}$ is not a property of the physical world. Rather, the physical significance is found in the product of the wavefunction and its complex conjugate, i.e. the absolute square of the wavefunction, which is also called the square of the modulus. $\Psi^*(r , t ) \Psi (r , t ) = {|\Psi (r , t)|}^2 \label {3.6.1}$ where $r$ is a vector specifying a point in three-dimensional space. The square is used, rather than the modulus itself, just as the intensity of a light wave depends on the square of the electric field. Remember that the Born interpretation is that $\psi^*(r_i)\psi(r_i)\, d\tau$ is the probability that the electron is in the volume $d\tau$ located at $r_i$. The Born interpretation therefore calls the wavefunction the probability amplitude, the absolute square of the wavefunction the probability density, and the probability density times a volume element in three-dimensional space ($d\tau$) the probability. Since the squared magnitude $|\psi|^2$ of the wavefunction of a particle can be interpreted as the probability density, the probability for a one-dimensional wavefunction between the points $x=a$ and $x=b$ can be calculated by $P_{1D}=\int\limits_{a}^{b}|\psi(x)|^{2} \mathrm{d} x \label{prob}$ This is just the area under the $|\psi|^2$ curve (Figure 3.6.1 ).
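Equation \ref{prob} is straightforward to evaluate numerically. As a sketch (Python with scipy, and $L = 1$ chosen for convenience), the probabilities worked out by hand in Example 3.5.1 for the $n = 3$ particle-in-a-box state can be recovered by direct integration:

```python
# Numerical evaluation of the 1-D probability integral for the n = 3
# particle-in-a-box density, recovering the results of Example 3.5.1.
import numpy as np
from scipy.integrate import quad

L = 1.0
def density(x):
    """|psi_3(x)|^2 for the particle in a box with L = 1."""
    return (2 / L) * np.sin(3 * np.pi * x / L) ** 2

p_left, _ = quad(density, 0, L / 2)         # left half of the box
p_mid, _ = quad(density, L / 3, 2 * L / 3)  # middle third of the box
print(p_left, p_mid)  # -> 0.5 and ~0.3333
```

Both values match the analytic answers of 1/2 and 1/3, illustrating that the "area under the $|\psi|^2$ curve" picture is a literal computational recipe.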
If the probability of a two-dimensional wavefunction is being evaluated, then Equation \ref{prob} is amended to include a double integral: $P_{2D}=\iint\limits_{a_1, a_2}^{b_1, b_2}|\psi(x,y)|^{2} \mathrm{d} x \mathrm{d} y \nonumber$ and similarly a triple integral would be used for calculating probabilities of three-dimensional wavefunctions: $P_{3D}=\iiint\limits_{a_1, a_2, a_3}^{b_1, b_2,b_3}|\psi(x,y,z)|^{2} \mathrm{d} x \mathrm{d} y \mathrm{d} z \nonumber$ Example 3.6.1 : Probability of a Particle in a Box Calculate the probability of finding an electron at $L/2$ in a box of infinite height within an interval ranging from $\dfrac {L}{2} - \dfrac {L}{200}$ to $\dfrac {L}{2} + \dfrac {L}{200}$ for the $n = 1$ and $n = 2$ states. Since the length of the interval, $L/100$, is small compared to $L$, you can get an approximate answer without explicitly integrating. Solution The wavefunction for the particle in a box is $\psi(x)=\sqrt{\frac{2}{L}} \sin \left(\frac{n \pi x}{L}\right) \nonumber$ and the wavefunction for the $n=1$ state is $\psi_{n=1}=\sqrt{\frac{2}{L}} \sin \left(\frac{\pi x}{L}\right) \nonumber$ From the interpretation that the wavefunction modulus squared is the probability density, we can establish the following integral to solve the problem (note the limits of integration) $P=\frac{2}{L} \int_{\frac{99 L}{200}}^{\frac{101L}{200}} \sin ^{2}\left(\frac{\pi x}{L}\right) dx \label{Ex3}$ We can solve this, but we can also recognize that Equation \ref{Ex3} is just calculating an area that can be approximated as the area of a rectangle with a height of $\frac{2}{L} \sin ^{2}\left(\frac{\pi x}{L}\right)$ evaluated at $x=L/2$ and a width $\Delta x = L/100$ (Figure 3.6.2 ).
This area can be computed: \begin{align*} P &\approx \frac{2}{L} \cancelto{1}{\sin ^{2}\left(\frac{\pi (L/2)}{L}\right)} \Delta x \\[4pt] & \approx \left(\frac{2}{L} \right) (L/100) \\[4pt] & \approx 1/50 = 0.02\end{align*} \nonumber Given that the wavefunction is sinusoidal, the actual probability of finding an electron within the given interval at $\frac{L}{2}$ is slightly less than this estimate, because the rectangle uses the height of the probability density at its peak at $\frac{L}{2}$ (Figure 3.6.2 ). The wavefunction for the $n = 2$ state is $\psi_{n=2}=\sqrt{\frac{2}{L}} \sin \left(\frac{2 \pi x}{L}\right)\nonumber$ so the integral that we need to construct and solve is $P=\frac{2}{L} \int_{\frac{99 L}{200}}^{\frac{101L}{200}} \sin ^{2}\left(\frac{2 \pi x}{L}\right) dx \nonumber$ We can use the same graphical interpretation as above, but using the probability density of the $\psi_2$ wavefunction (Figure 3.6.3 ). \begin{align*} P &\approx \frac{2}{L} \cancelto{0}{\sin ^{2}\left(\frac{ 2\pi (L/2)}{L}\right)} \Delta x \\[4pt] & \approx 0\end{align*} \nonumber The probability of finding an electron in a box at $\frac{L}{2}$ for $n=2$ is approximately zero. Exercise 3.6.1 Show that the square of the modulus of $\Psi(r,t) = \psi(r) e^{-i\omega t}$ is time independent. What insight regarding stationary states do you gain from this proof? Solution The square of the modulus of a wavefunction is $\Psi(r,t)^*\Psi(r,t)$, so for wavefunctions of this form, the square of the modulus is \begin{align*} \Psi(r,t)^*\Psi(r,t) &= \psi^*(r) \cancel{e^{+i\omega t}} \psi(r) \cancel{e^{-i\omega t}} \\[4pt] &=|\psi(r)|^2 \end{align*} \nonumber Hence, there is no time dependence to the modulus of wavefunctions of this form, which from the probability interpretation of the wavefunction means that the probability density is time-independent. Normalizing of the Wavefunction A probability is a real number between 0 and 1, inclusive.
An outcome of a measurement which has a probability 0 is an impossible outcome, whereas an outcome which has a probability 1 is a certain outcome. According to Equation $\ref{3.6.1}$, the probability of a measurement of $x$ yielding a result between $-\infty$ and $+\infty$ is $P_{x \in -\infty:\infty}(t) = \int_{-\infty}^{\infty}\vert\psi(x,t)\vert^{ 2} dx. \label{3.6.2}$ However, a measurement of $x$ must yield a value between $-\infty$ and $+\infty$, since the particle has to be located somewhere. It follows that $P_{x \in -\infty:\infty}(t) =1$, or $\int_{-\infty}^{\infty}\vert\psi(x,t)\vert^{ 2} dx = 1, \label{3.6.3}$ which is generally known as the normalization condition for the wavefunction. Example 3.6.2 : Normalizing a Gaussian Wavepacket Normalize the wavefunction of a Gaussian wave packet, centered on $x=x_o$ with characteristic width $\sigma$: $\psi(x) = \psi_0 {\rm e}^{-(x-x_0)^{ 2}/(4 \sigma^2)}.\label{3.6.4}$ Solution To determine the normalization constant $\psi_0$, we simply substitute Equation $\ref{3.6.4}$ into Equation $\ref{3.6.3}$, to obtain $\vert\psi_0\vert^{ 2}\int_{-\infty}^{\infty}{\rm e}^{-(x-x_0)^{ 2}/(2 \sigma^2)} dx = 1. \nonumber$ Changing the variable of integration to $y=(x-x_0)/(\sqrt{2} \sigma)$, we get $\vert\psi_0\vert^{ 2}\sqrt{2} \sigma \int_{-\infty}^{\infty}{\rm e}^{-y^2} dy=1. \nonumber$ However, from an integral table we know $\int_{-\infty}^{\infty}{\rm e}^{-y^2} dy = \sqrt{\pi},\nonumber$ which implies that $\vert\psi_0\vert^{ 2} = \dfrac{1}{(2\pi \sigma^2)^{1/2}}. \nonumber$ Hence, a general normalized Gaussian wavefunction takes the form $\psi(x) = \dfrac{e^{\rm{i} \phi}}{(2\pi \sigma^2)^{1/4}}e^{-(x-x_0)^2/(4 \sigma^2)} \nonumber$ where $\phi$ is an arbitrary real phase-angle. 
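The normalization constant found in Example 3.6.2 can be spot-checked numerically for particular values of $x_0$ and $\sigma$ (the values below are arbitrary illustrative choices):

```python
# Numerical spot check of Example 3.6.2: with
# psi_0 = (2 pi sigma^2)^(-1/4), the Gaussian wave packet has unit norm.
# x0 and sigma are arbitrary illustrative values.
import numpy as np
from scipy.integrate import quad

x0, sigma = 1.5, 0.7
psi0 = (2 * np.pi * sigma ** 2) ** (-0.25)

def psi(x):
    return psi0 * np.exp(-(x - x0) ** 2 / (4 * sigma ** 2))

norm, _ = quad(lambda x: psi(x) ** 2, -np.inf, np.inf)
print(norm)  # -> 1.0
```

Repeating the check with other values of $x_0$ and $\sigma$ confirms that the normalization constant depends only on $\sigma$, as the closed-form result shows.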
Exercise 3.6.2 Normalize this wavefunction for a particle in a harmonic well: $\psi = x e^{-x^2} \nonumber$ Answer $\psi = 2 \left(\dfrac{2}{\pi} \right)^{1/4} x e^{-x^2} \nonumber$ Time Dependence to the Wavefunction Now, it is important to demonstrate that if a wavefunction is initially normalized then it stays normalized as it evolves in time according to the time-dependent Schrödinger equation. If this is not the case then the probability interpretation of the wavefunction is untenable, since it does not make sense for the probability that a measurement of $x$ yields any possible outcome (which is, manifestly, unity) to change in time. Hence, we require that $\dfrac{d}{dt} \int_{-\infty}^{\infty}\vert \psi(x,t)\vert^{ 2} dx = 0 \nonumber$ for wavefunctions satisfying the time-dependent Schrödinger equation (this results from the time-dependent Schrödinger equation and Equation $\ref{3.6.3}$). The above equation gives $\dfrac{d}{dt} \int_{-\infty}^{\infty}\psi^* \psi \, dx=\int_{-\infty}^{\infty} \left(\dfrac{\partial\psi^*}{\partial t} \psi+\psi^* \dfrac{\partial\psi}{\partial t}\right) dx=0. \label{3.6.11}$ Now, multiplying Schrödinger's equation by $\psi^{*}/({\rm i} \hbar)$, we obtain $\psi^* \dfrac{\partial\psi}{\partial t}= \dfrac{\rm{i}\hbar}{2m} \psi^* \dfrac{\partial^2 \psi}{\partial x^2} - \dfrac{\rm{i}}{\hbar} V \vert \psi \vert^2 \label{3.6.12}$ The complex conjugate of this expression yields $\psi \dfrac{\partial\psi^*}{\partial t}= -\dfrac{\rm{i}\hbar}{2m} \psi \dfrac{\partial^2 \psi^*}{\partial x^2} + \dfrac{\rm{i}}{\hbar} V \vert \psi \vert^2 \label{3.6.13}$ since • $(A B)^* = A^* B^*$, • $A^{* *}=A$, and • $i^*= -i$.
Summing Equations \ref{3.6.12} and \ref{3.6.13} results in \begin{align} \dfrac{\partial\psi^*}{\partial t} \psi + \psi^{*} \dfrac{\partial\psi}{\partial t} &= \dfrac{\rm{i}\hbar}{2m} \left(\psi^* \dfrac{\partial^2 \psi}{\partial x^2} - \psi \dfrac{\partial^2\psi^*}{\partial x^2}\right) \\[4pt] &=\dfrac{\rm{i}\hbar}{2m} \dfrac{\partial}{\partial x} \left(\psi^* \dfrac{\partial \psi}{\partial x} - \psi \dfrac{\partial\psi^*}{\partial x}\right). \label{3.6.14} \end{align} Equations $\ref{3.6.11}$ and $\ref{3.6.14}$ can be combined to produce $\dfrac{d}{dt}\int_{-\infty}^{\infty} \vert\psi\vert^2 dx= \dfrac{\rm{i}\hbar}{2m} \left[\psi^*\dfrac{\partial\psi}{\partial x}- \psi\dfrac{\partial\psi^*}{\partial x} \right]_{-\infty}^{\infty} = 0. \label{3.6.15}$ The above equation is satisfied provided the wavefunction vanishes at infinity $\lim_{\vert x\vert\rightarrow\infty} \vert\psi\vert = 0 \label{3.6.16}$ However, this is a necessary condition for the integral on the left-hand side of Equation $\ref{3.6.3}$ to converge. Hence, we conclude that all wavefunctions which are square-integrable [i.e., are such that the integral in Equation \ref{3.6.3} converges] have the property that if the normalization condition (Equation $\ref{3.6.3}$) is satisfied at one instant in time then it is satisfied at all subsequent times. Not all Wavefunctions can be Normalized Not all wavefunctions can be normalized according to the scheme set out in Equation $\ref{3.6.3}$. For instance, a plane-wave wavefunction for a quantum free particle $\Psi(x,t) = \psi_0 {\rm e}^{ {\rm i} (k x-\omega t)} \nonumber$ is not square-integrable, and, thus, cannot be normalized. For such wavefunctions, the best we can say is that $P_{x \in a:b}(t) \propto \int_{a}^{b}\vert\Psi(x,t)\vert^{ 2} dx.\nonumber$ In the following, all wavefunctions are assumed to be square-integrable and normalized, unless otherwise stated.
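The non-normalizability of the plane wave can be made concrete with a small numerical sketch: since $|\Psi|^2 = |\psi_0|^2$ everywhere, the normalization integral grows linearly with the size of the integration region instead of converging. The choices $\psi_0 = 1$ and $k = 2$ below are arbitrary:

```python
# Numerical sketch: the plane-wave normalization integral over [-w, w]
# grows linearly with w, so it diverges as w -> infinity.
import numpy as np

k = 2.0  # arbitrary wavenumber

def norm_integral(w, npts=200001):
    """Rectangle-rule integral of |psi|^2 over [-w, w] for psi_0 = 1."""
    x = np.linspace(-w, w, npts)
    psi = np.exp(1j * k * x)
    dx = x[1] - x[0]
    return np.sum(np.abs(psi) ** 2) * dx

for w in (10.0, 100.0, 1000.0):
    print(w, norm_integral(w))  # grows ~2w: there is no finite norm
```

Widening the interval by a factor of ten makes the integral ten times larger, which is exactly the divergence the text describes.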
Learning Objectives • Calculate the expectation value for a measurement • Apply the expectation value concept to calculate average properties of a particle in a box model • Understand the origin of a zero-point energy/zero-point motion. • Extend the concept of orthogonality from vectors to mathematical functions (and wavefunctions). Now that we have mathematical expressions for the wavefunctions and energies for the particle-in-a-box, we can answer a number of interesting questions. The answers to these questions use quantum mechanics to predict some important and general properties for electrons, atoms, molecules, gases, liquids, and solids. Key to addressing these questions is the formulation and use of expectation values. This is demonstrated below and used in the context of evaluating average properties (the energy of the particle in a box for the case below). Classical Expectation Values The expectation value is the probabilistic expected value of the result (measurement) of an experiment. It is not the most probable value of a measurement; indeed the expectation value may even have zero probability of occurring. The expected value (or expectation, mathematical expectation, mean, or first moment) refers to the value of a variable one would "expect" to find if one could repeat the random variable process an infinite number of times and take the average of the values obtained. More formally, the expected value is a weighted average of all possible values. Example 3.7.1 : Classical Expectation Value of Exam Scores (a discretized example) A classical example is calculating the expectation value (i.e. average) of the exam grades in the class.
For example if the class scores for an exam were 65 67 94 43 67 76 94 67 The discrete way is to sum up all scores and divide by the number of students: $\langle s \rangle = \dfrac{\displaystyle \sum_i^N s(i)}{N} \label{Cl1}$ which for this set of scores is \begin{align*} \langle s \rangle &= \dfrac{65 + 67 +94 +43 +67+76+94+67}{8} \\[4pt] &= 71.625 \end{align*} \nonumber Notice that the average is not an allowable score on an individual exam. Equation $\ref{Cl1}$ can be rewritten with "probability" or "probability weights" $\langle s \rangle = \sum_i^N s(i) P_s(i) \label{Cl2}$ where $P_s(i)$ is the probability of observing a score of $s$. This is just the number of times it occurs in a dataset divided by the number of elements in that data set. Applying Equation \ref{Cl2} to the set of scores, we need to calculate these weights: Score 65 67 94 43 76 $P_s$ 1/8 3/8 2/8 1/8 1/8 As with all probabilities, the sum of all possible probabilities must be one. We can confirm that for the weights here: $\dfrac{1}{8} + \dfrac{3}{8} + \dfrac{2}{8} + \dfrac{1}{8} + \dfrac{1}{8} = \dfrac{8}{8} =1 \nonumber$ This is the discretized "normalization" criterion (the same reason we normalize wavefunctions). So, now we can use Equation $\ref{Cl2}$ properly \begin{align*} \langle s \rangle &= 65 \times \dfrac{1}{8} + 67\times \dfrac{3}{8} +94\times \dfrac{2}{8} + 43 \times\dfrac{1}{8} +76\times \dfrac{1}{8} \\[4pt] &= 71.625 \end{align*} \nonumber Hence, Equation $\ref{Cl2}$ gives the same result, as expected, as Equation $\ref{Cl1}$. Quantum Expectation Values The extension of the classical expectation (average) approach in Example 3.7.1 using Equation \ref{Cl2} to evaluating quantum mechanical expectation values requires three small changes: 1. Switch from discretized to continuous variables 2. Substitute the wavefunction squared for the probability weights (i.e., the probability distribution) 3.
Use an operator instead of the scalar Hence, the quantum mechanical expectation value $\langle o \rangle$ for an observable, $o$, associated with an operator, $\hat{O}$, is given by $\langle o \rangle = \int _{-\infty}^{+\infty} \psi^* \hat{O} \psi \, dx \label{expect}$ where $x$ is the range of space that is integrated over (i.e., an integration over all possible probabilities). The expectation value changes as the wavefunction changes and the operator used (i.e, which observable you are averaging over). In general, changing the wavefunction changes the expectation value for that operator for a state defined by that wavefunction. Average Energy of a Particle in a Box If we generalize this conclusion, such integrals give the average value for any physical quantity by using the operator corresponding to that physical observable in the integral in Equation $\ref{expect}$. In the equation below, the symbol $\left \langle H \right \rangle$ is used to denote the average value for the total energy. \begin{align} \left \langle H \right \rangle &= \int \limits ^{\infty}_{-\infty} \psi ^* (x) \hat {H} \psi (x) dx \[4pt] &= \int \limits ^{\infty}_{-\infty} \psi ^* (x) \hat {KE} \psi (x) dx + \int \limits ^{\infty}_{-\infty} \psi ^* (x) \hat {V} \psi (x) dx \[4pt] &= \underset{\text {average kinetic energy} }{ \int \limits ^{\infty}_{-\infty} \psi ^* (x) \left ( \frac {-\hbar ^2}{2m} \right ) \frac {\partial ^2 }{ \partial x^2} \psi (x) dx} + \underset{ \text {average potential energy} }{\int \limits ^{\infty}_{-\infty} \psi ^* (x) \hat{V} (x) \psi (x) dx} \label{3-35} \end{align} The Hamiltonian operator consists of a kinetic energy term and a potential energy term. The kinetic energy operator involves differentiation of the wavefunction to the right of it. This step must be completed before multiplying by the complex conjugate of the wavefunction. The potential energy, however, usually depends only on position and not momentum (i.e., it involves conservative forces). 
The potential energy operator therefore only involves the coordinates of a particle and does not involve differentiation. For this reason we do not need to use a caret over $V$ in Equation $\ref{3-35}$. Equation \ref{3-35} can be simplified $\langle H \rangle = \langle KE \rangle + \langle V \rangle \label{3-35 braket}$ The potential energy integral then involves only products of functions, and the order of multiplication does not affect the result, e.g. 6×4 = 4×6 = 24. This property is called the commutative property. The average potential energy therefore can be written as $\left \langle V \right \rangle = \int \limits ^{\infty}_{-\infty} V (x) \psi ^* (x) \psi (x) dx \label{3-36}$ This integral is telling us to take the probability that the particle is in the interval $dx$ at $x$, which is $ψ^*(x)ψ(x)dx$, multiply this probability by the potential energy at $x$, and sum (i.e., integrate) over all possible values of $x$. This procedure is just the way to calculate the average potential energy $\left \langle V \right \rangle$ of the particle. Exercise 3.7.2 : Particle in Box Evaluate the two integrals in Equation $\ref{3-35}$ for the PIB wavefunction $ψ(x) = \sqrt{\dfrac{2}{L}} \sin(k x)$ with the potential function $V(x) = 0$ from 0 to the length of a box $L$ with $k= \dfrac{\pi }{L}$. 
Solution The average kinetic energy is \begin{align*} \langle KE \rangle &= \int \limits ^{L}_{0} \left(\sqrt{\dfrac{2}{L}}\right) \sin(kx) \left ( \frac {-\hbar ^2}{2m} \right ) \frac {\partial ^2 }{ \partial x^2} \left(\sqrt{\dfrac{2}{L}}\right) \sin(kx) dx \[4pt] &= \left(\dfrac{2}{L}\right) \int \limits ^{L}_{0} \sin(kx) \left ( \frac {-\hbar ^2}{2m} \right ) \frac {\partial }{ \partial x} \cos(kx)(k) dx \[4pt] &= \left(\dfrac{2}{L}\right) \int \limits ^{L}_{0} \sin(kx) \left ( \frac {-\hbar ^2}{2m} \right ) \sin(kx)(k)(-k) dx \[4pt] &= \left(\dfrac{2}{L}\right) \left ( \frac { k^2 \hbar ^2}{2m} \right ) \int \limits ^{L}_{0} \sin^2(kx) dx \end{align*} \nonumber We can solve this integral using the standard half-angle representation from an integral table. Or we can recognize that we already did this integral when we normalized the PIB wavefunction by rewriting this integral: \begin{align*} \langle KE \rangle &= \left(\dfrac{2}{L}\right) \left ( \frac { k^2 \hbar ^2}{2m} \right ) \int \limits ^{L}_{0} \sin^2(kx) dx \[4pt] &= \left ( \frac { k^2 \hbar ^2}{2m} \right ) \int \limits ^{L}_{0} \left(\dfrac{2}{L}\right) \sin^2(kx) dx \[4pt] &=\left ( \frac { k^2 \hbar ^2}{2m} \right ) \cancelto{1}{\int \limits ^{L}_{0} \psi^*(x)\psi(x) dx} \[4pt] &= \frac { k^2 \hbar ^2}{2m} \end{align*} \nonumber Thus, the average kinetic energy of this particular system is $\langle KE \rangle = \dfrac { k^2 \hbar ^2}{2m} = \dfrac { \pi^2 \hbar ^2}{2mL^2} \nonumber$ Hence, the average kinetic energy of the wavefunction depends on the quantum number $n$ (here $n=1$, since $k = \pi/L$). The average potential energy is $\langle V \rangle = \int \limits ^{\infty}_{-\infty} \sin(kx) 0 \sin(kx) dx = 0 \nonumber$ Thus, the average potential energy of the PIB is 0 irrespective of the wavefunction.
Hence via Equation \ref{3-35 braket} for this system and set of wavefunctions $\langle H \rangle = \dfrac { \pi^2 \hbar ^2}{2mL^2} \nonumber$ This is the same result obtained from solving the eigenvalue equation for the PIB. However, if the wavefunctions used were NOT eigenstates of energy, then we cannot use the eigenvalue approach and need to rely on the expectation values to describe the energy of the system. What is the lowest energy for a particle in a box? The lowest energy level is $E_1$, and it is important to recognize that this lowest energy of a particle in a box is not zero. This finite energy is called the zero-point energy, and the motion associated with this energy is called the zero-point motion. Any system that is restricted to some region of space is said to be bound. The zero-point energy and motion are manifestations of the wave properties and the Heisenberg Uncertainty Principle, and are general properties of bound quantum mechanical systems. Exercise 3.7.2 : Progressing to the Classical Limit What happens to the energy level spacing for a particle-in-a-box when $mL^2$ becomes much larger than $h^2$? What does this result imply about the relevance of quantization of energy to baseballs in a box between the pitching mound and home plate? What implications does quantum mechanics have for the game of baseball in a world where $h$ is so large that baseballs exhibit quantum effects? Answer As $mL^2$ becomes much larger than $h^2$, as it is for everyday objects, the spacing between energy levels becomes much smaller. This shows how the quantization of energy levels becomes irrelevant for an everyday object: the energy levels of a baseball in a box between the pitching mound and home plate would appear continuous for such a large mass and box length.
If $h$ were so large that a baseball experienced quantum effects, then a game of baseball would be far less predictable. In a classical world, the position of a baseball can be easily predicted from the everyday understanding of projectile motion; in such a quantum world, however, the baseball would not follow a predictable trajectory, but would instead behave wave-like, with only a probability of being found at a given position. The first derivative of a function is the rate of change of the function, and the second derivative is the rate of change in the rate of change, also known as the curvature. A function with a large second derivative is changing very rapidly. Since the second derivative of the wavefunction occurs in the Hamiltonian operator that is used to calculate the energy by using the Schrödinger equation, a wavefunction that has sharper curvatures than another, i.e. larger second derivatives, should correspond to a state having a higher energy. A wavefunction with more nodes than another over the same region of space must have sharper curvatures and larger second derivatives, and therefore should correspond to a higher energy state. Exercise 3.7.3 : Nodes and Energies Identify a relationship between the number of nodes in a wavefunction and its energy by examining the graphs you made above. A node is the point where the amplitude passes through zero. What does the presence of many nodes mean about the shape of the wavefunction? Average Position of a Particle in a Box We can calculate the most probable position of the particle from knowledge of the probability distribution, $ψ^* ψ$.
For the ground-state particle in a box wavefunction with $n=1$ (Figure $\PageIndex{1a}$) $\psi_{n=1} = \sqrt{\dfrac{2}{L}} \sin \left(\dfrac{\pi x}{L} \right) \label{PIB}$ This state has the following probability distribution (Figure $\PageIndex{1b}$): $\psi^*_{n=1} \psi_{n=1} = \dfrac{2}{L} \sin^2 \left(\dfrac{\pi x}{L} \right) \nonumber$ The expectation value for position with the $\hat{x} = x$ operator for any wavefunction (Equation $\ref{expect}$) is $\langle x \rangle = \int _{-\infty}^{+\infty} \psi^* x \psi \, dx \nonumber$ which for the ground-state wavefunction (Equation $\ref{PIB}$) shown in Figure 3.7.1 is \begin{align} \langle x \rangle &= \int _{-\infty}^{+\infty} \sqrt{\dfrac{2}{L}} \sin \left(\dfrac{\pi x}{L} \right) x \sqrt{\dfrac{2}{L}} \sin \left(\dfrac{\pi x}{L} \right) \, dx \[4pt] &= \dfrac{2}{L} \int _{-\infty}^{+\infty} x \sin^2 \left(\dfrac{\pi x}{L} \right) \, dx \label{GSExpect} \end{align} Solution by Inspection Without even having to evaluate Equation $\ref{GSExpect}$, we can get the expectation value from simply inspecting $\psi^*_{n=1} \psi_{n=1}$ in (Figure $\PageIndex{1; right}$). This is a symmetric distribution around the center of the box ($L/2$), so the particle is just as likely to be found in the left half as in the right half. More specifically, at any point a fixed distance away from the center, $\psi^*_{n=1} \psi_{n=1} (L/2 + \Delta x) = \psi^*_{n=1} \psi_{n=1} (L/2 - \Delta x) \nonumber$ Contributions to the average from equal distances on either side of the center therefore cancel in pairs, so $\langle x \rangle = \dfrac{L}{2} \nonumber$ (For $n=1$ the center also happens to be the most probable position, but it is the symmetry of the distribution, not its maximum, that fixes the average.) Exercise 3.7.4 Use the general form of the particle-in-a-box wavefunction for any $n$ to find the mathematical expression for the position expectation value $\left \langle x \right \rangle$ for a box of length L. How does $\left \langle x \right \rangle$ depend on $n$? Average Momentum of a Particle in a Box What is the average momentum of a particle in the box?
We start with Equation $\ref{expect}$ and use the momentum operator $\hat{p}_{x}=-i\hbar\dfrac{\partial}{\partial x}\label{3.2.3a}$ We note that the particle-in-a-box wavefunctions are not eigenfunctions of the momentum operator (Exercise 3.7.5 ). However, this does not mean that Equation $\ref{expect}$ is inapplicable, as Example 3.7.3 demonstrates. Example 3.7.3 : The Average Momentum of a Particle in a Box is Zero Even though the wavefunctions are not momentum eigenfunctions, we can calculate the expectation value for the momentum. Show that the expectation or average value for the momentum of an electron in the box is zero in every state (i.e., arbitrary values of $n$). Strategy First write the expectation value integral (Equation $\ref{expect}$) with the momentum operator. Then insert the expression for the wavefunction and evaluate the integral as shown here. Answer \begin{align*} \left \langle p \right \rangle &= \int \limits ^L_0 \psi ^*_n (x) \left ( -i\hbar \dfrac {d}{dx} \right ) \psi _n (x) dx \[4pt] &= \int \limits ^L_0 \left (\dfrac {2}{L} \right )^{1/2} \sin \left(\dfrac {n \pi x}{L}\right) \left ( -i\hbar \dfrac {d}{dx} \right ) \left (\dfrac {2}{L} \right )^{1/2} \sin \left(\dfrac {n \pi x }{L} \right) dx \[4pt] &= -i \hbar \left (\dfrac {2}{L} \right ) \int \limits ^L_0 \sin \left(\dfrac {n \pi x}{L}\right) \left ( \dfrac {d}{dx} \right ) \sin \left(\dfrac {n \pi x}{L}\right) dx \[4pt] &= -i \hbar \left (\dfrac {2}{L} \right ) \left ( \dfrac {n \pi}{L} \right ) \int \limits ^L_0 \sin \left(\dfrac {n \pi x}{L} \right) \cos \left(\dfrac {n \pi x}{L}\right) dx \[4pt] &= 0 \end{align*} \nonumber Note that this makes sense since the particle spends an equal amount of time traveling in the $+x$ and $–x$ direction. Interpretation It may seem that this means the particle in a box does not have any momentum, which is incorrect because we know the energy is never zero.
In fact, the energy that we obtained for the particle-in-a-box is entirely kinetic energy because we set the potential energy at 0. Since the kinetic energy is the momentum squared divided by twice the mass, it is easy to understand how the average momentum can be zero and the kinetic energy finite. Exercise 3.7.5 Show that the particle-in-a-box wavefunctions are not eigenfunctions of the momentum operator (Equation $\ref{3.2.3a}$). Answer The easiest way to address this question is to ask if the PIB wavefunction also satisfies the eigenvalue equation using the momentum operator instead of the Hamiltonian operator (3rd postulate of QM). That is $\hat{p}_x \psi(n) = p \psi(n) \nonumber$ with the following PIB wavefunctions $\psi_{n}=\sqrt{\dfrac{2}{L}} \sin \left(\dfrac{n \pi x}{L}\right) \nonumber$ and $\hat{p}=- i\hbar \dfrac{d}{d x } \nonumber$ and $p$ is a real scalar (since it is a measurable). \begin{align*} \hat{p}_x \psi_{n} &= - i\hbar \dfrac{d}{d x} \left[\sqrt{\dfrac{2}{L}} \sin \left(\dfrac{n \pi x}{L}\right)\right] \[4pt] &= -i \hbar \sqrt{ \dfrac{2}{L}} \cos \left(\dfrac{n \pi x}{L}\right) \left(\dfrac{n \pi}{L}\right) \[4pt] &\neq p \psi_{n} \end{align*} \nonumber The result is a cosine function, which cannot be written as a constant times the original sine function. Hence, the PIB wavefunctions are NOT eigenfunctions of the momentum operator. Alternative Solution An alternative, albeit more complicated, approach is to recognize that the uncertainty of $p$ must be zero if the wavefunction is an eigenstate of the momentum operator.
Hence $\sqrt{\langle p^{2}\rangle - \langle p \rangle ^{2}}=0 \nonumber$ This requires calculating the $\langle p^{2}\rangle$ and $\langle p \rangle$ expectation values: \begin{align*} \langle p \rangle &=\int_{0}^{L} \psi^{*}\left[-i \hbar \dfrac{d}{d x}\right] \psi \,d x \[4pt] &=-i \hbar \int_{0}^{L} \dfrac{2}{L} \sin \left(\dfrac{n \pi x}{L}\right) \dfrac{d}{dx} \sin \left(\dfrac{n \pi x}{L}\right) dx \[4pt] &=-i \hbar \dfrac{2 n \pi}{L^2} \int_{0}^{L} \sin \left(\dfrac{n \pi x}{L}\right) \cos \left(\dfrac{n \pi x}{L}\right) dx \[4pt] &= 0 \end{align*} \nonumber This integral is zero since the integrand is proportional to $\sin \left(\dfrac{2 n \pi x}{L}\right)$, which integrates to zero over a whole number of periods (you can expand the integrand and confirm this). \begin{align*} \langle p^{2} \rangle &=\int_{0}^{L} \psi^{*} \left[-i \hbar \dfrac{d}{d x}\right]^{2} \psi \,d x \[4pt] &=-\dfrac{2\hbar^2}{L} \int_{0}^{L} \sin \left(\dfrac{n \pi x}{L}\right) \dfrac{d^{2} }{d x^{2}} \sin \left(\dfrac{n \pi x}{L}\right) d x \[4pt] &=\dfrac{2\hbar^2}{L} \left(\dfrac{n \pi}{L}\right)^2 \int_{0}^{L} \sin^2 \left(\dfrac{n \pi x}{L}\right) d x \[4pt] &= \dfrac{n^2 \pi^2 \hbar^2}{L^2} \end{align*} \nonumber The remaining integral equals $L/2$, which follows from the normalization of the wavefunction. Now we calculate the uncertainty in momentum in the PIB wavefunctions: \begin{align*} \sqrt{\langle p^{2} \rangle - \langle p \rangle^2} &=\sqrt{\dfrac{n^2 \pi^2 \hbar^2}{L^2} -0^{2}} \[4pt] &= \dfrac{n \pi \hbar}{L} \[4pt] &\neq 0 \end{align*} \nonumber Since the uncertainty is not 0, different measurements (experiments) will result in different values of the momentum. Hence, the PIB wavefunctions are not eigenfunctions of the momentum operator. It must be equally likely for the particle-in-a-box to have a momentum $-p$ as $+p$. The average of $+p$ and $–p$ is zero, yet $p^2$ and the average of $p^2$ are not zero.
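The particle-in-a-box expectation values discussed above ($\langle x \rangle = L/2$, $\langle p \rangle = 0$, and $\langle p^2 \rangle = (n\pi\hbar/L)^2$) can be checked numerically. Below is a minimal Python sketch using midpoint-rule integration with $\hbar = m = L = 1$; the grid size `N` is just chosen for accuracy and is not part of the physics:

```python
import math

# Midpoint-rule check of the particle-in-a-box expectation values,
# with hbar = m = L = 1 so that <p^2> should equal (n*pi)^2.

def pib_expectations(n, N=100000):
    L = 1.0
    k = n * math.pi / L
    dx = L / N
    avg_x = avg_p = avg_p2 = 0.0
    for i in range(N):
        x = (i + 0.5) * dx
        psi = math.sqrt(2 / L) * math.sin(k * x)
        dpsi = math.sqrt(2 / L) * k * math.cos(k * x)   # dpsi/dx
        d2psi = -k * k * psi                            # d^2 psi/dx^2
        avg_x += psi * x * psi * dx                     # <x>
        avg_p += -psi * dpsi * dx                       # proportional to <p>; should vanish
        avg_p2 += -psi * d2psi * dx                     # <p^2> in units of hbar^2
    return avg_x, avg_p, avg_p2

for n in (1, 2, 3):
    ax, ap, ap2 = pib_expectations(n)
    print(n, round(ax, 4), round(ap, 8), round(ap2, 3))
```

Each line should report $\langle x \rangle \approx 0.5$, $\langle p \rangle \approx 0$, and $\langle p^2 \rangle \approx (n\pi)^2$, confirming that the momentum uncertainty $n\pi\hbar/L$ is nonzero even though the average momentum vanishes.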
The information that the particle is equally likely to have a momentum of $+p$ or $–p$ is contained in the wavefunction. In fact, the sine function is a representation of the two momentum eigenfunctions $e^{+ikx}$ and $e^{-ikx}$ (Figure 3.7.2 ). Exercise 3.7.6 Write the particle-in-a-box wavefunction as a normalized linear combination of the momentum eigenfunctions $e^{ikx}$ and $e^{-ikx}$ by using Euler’s formula. Show that the momentum eigenvalues for these two functions are $p = +ħk$ and $-ħk$. The interpretation of the results of Exercise 3.7.6 is physically interesting. The exponential wavefunctions in the linear combination for the sine function represent the two opposite directions in which the electron can move. One exponential term represents movement to the left and the other term represents movement to the right (Figure 3.7.2 ). The electrons are moving, they have kinetic energy and momentum, yet the average momentum is zero. Did we just Violate the Uncertainty Principle? Does the fact that the average momentum of an electron is zero and the average position is $L/2$ violate the Heisenberg Uncertainty Principle? No, because the Heisenberg Uncertainty Principle pertains to the uncertainty in the momentum and in the position, not to the average values. Quantitative values for these uncertainties can be obtained to compare with the limit set by the Heisenberg Uncertainty Principle for the product of the uncertainties in the momentum and position. However, to do this we need a quantitative definition of uncertainty, which is discussed in the following Section. Orthogonality In vector calculus, orthogonality is the relation of two lines at right angles to one another (i.e., perpendicularity), but is generalized into $n$ dimensions via zero amplitude "dot products" or "inner products." Hence, orthogonality is thought of as describing non-overlapping, uncorrelated, or independent objects of some kind. 
The concept of orthogonality extends to functions (wavefunctions or otherwise) too. Two functions $\psi_A$ and $\psi_B$ are said to be orthogonal if $\int \limits _{all space} \psi _A^* \psi _B d\tau = 0 \label{3.7.3}$ In general, eigenfunctions of a quantum mechanical operator with different eigenvalues are orthogonal. Are the eigenfunctions of the particle-in-a-box Hamiltonian orthogonal? Exercise 3.7.7 Evaluate the integral $\int \psi ^*_m \psi _n dx$ for all possible pairs of particle-in-a-box eigenfunctions from $n=1$ to $n=3$ (use symmetry arguments whenever possible) and explain what the results say about orthogonality of the functions.
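The orthogonality check of Exercise 3.7.7 can also be done numerically. A short Python sketch (midpoint-rule integration with $L = 1$; the grid size is just an illustrative choice):

```python
import math

# Midpoint-rule evaluation of the overlap integral between PIB
# eigenfunctions psi_m and psi_n on a box of length L = 1.

def overlap(m, n, N=20000):
    L = 1.0
    dx = L / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * dx
        psi_m = math.sqrt(2 / L) * math.sin(m * math.pi * x / L)
        psi_n = math.sqrt(2 / L) * math.sin(n * math.pi * x / L)
        total += psi_m * psi_n * dx
    return total

for m in (1, 2, 3):
    row = [round(overlap(m, n), 6) for n in (1, 2, 3)]
    print(m, row)   # approximately 1 on the diagonal, 0 off the diagonal
```

The diagonal entries confirm normalization, and the vanishing off-diagonal entries confirm that eigenfunctions with different $n$ are orthogonal.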
Learning Objectives • Expand on the introduction of Heisenberg's Uncertainty Principle by calculating the $\Delta x$ or $\Delta p$ directly from the wavefunction As will be discussed in Section 4.6, the operators $\hat{x}$ and $\hat{p}$ are not compatible and there is no measurement that can precisely determine the corresponding observables ($x$ and $p$) simultaneously. Hence, there must be an uncertainty relation between them that specifies how uncertain we are about one quantity given a definite precision in the measurement of the other. Presumably, if one can be determined with infinite precision, then there will be an infinite uncertainty in the other. The uncertainty in a general quantity $A$ is $\Delta A = \sqrt{\langle A^2 \rangle - \langle A \rangle ^2} \label{3.8.1}$ where $\langle A^2 \rangle$ and $\langle A \rangle$ are the expectation values of the $\hat{A}^2$ and $\hat{A}$ operators for a specific wavefunction. Extending Equation \ref{3.8.1} to $x$ and $p$ results in the following uncertainties $\Delta x = \sqrt{\langle x^2 \rangle - \langle x \rangle ^2} \label{3.8.2a}$ $\Delta p = \sqrt{\langle p^2 \rangle - \langle p \rangle ^2} \label{3.8.2b}$ These quantities can be expressed explicitly in terms of the (time-dependent) wavefunction $\Psi (x, t)$ using the fact that \begin{align} \langle x \rangle &= \langle \Psi(t)\vert \hat{x}\vert\Psi(t)\rangle \label{3.8.3} \[4pt] &=\int \Psi^{*}(x,t) x \Psi(x,t)\;dx \end{align} and \begin{align} \langle x^2 \rangle &= \langle \Psi(t)\vert \hat{x}^2 \vert\Psi(t)\rangle \label{3.8.4} \[4pt] &= \int \Psi^{*}(x,t) x^2 \Psi(x,t)\;dx \end{align} The bra-ket expressions in Equations $\ref{3.8.3}$ and $\ref{3.8.4}$ are Dirac notation for the integrals that follow them. Similarly using the definition of the linear momentum operator: $\hat{p}_x = - i \hbar \dfrac{\partial}{ \partial x}
\nonumber$ So \begin{align} \langle p \rangle &= \langle \Psi(t)\vert \hat{p} \vert\Psi(t)\rangle \label{3.8.5} \[4pt] &= \int \Psi^{*}(x,t) \left(- i \hbar {\partial \over \partial x}\right)\Psi(x,t)\,dx \end{align} and \begin{align} \langle p^2 \rangle &= \langle \Psi(t)\vert \hat{p}^2\vert\Psi(t)\rangle \label{3.8.6} \[4pt] &= \int \Psi ^{*} (x, t)\left(-\hbar^2{\partial^2 \over \partial x^2}\right)\Psi(x,t) \;dx \end{align} Time-dependent vs. time-independent wavefunction The expectation values above are formulated with the total time-dependent wavefunction $\Psi(x,t)$, which is a function of both $x$ and $t$. However, it is easy to show that the same expectation values would be obtained if the time-independent wavefunction $\psi(x)$, a function of $x$ only, were used. If $V(x)$ in $\hat{H}$ is time independent, then the wavefunctions are stationary and the expectation values are time-independent. You can easily confirm this by comparing the expectation values using the general formula for a stationary wavefunction $\Psi(x,t)=\psi(x)e^{-iEt / \hbar} \nonumber$ and for $\psi(x)$. The Heisenberg uncertainty principle can be quantitatively connected to the properties of a wavefunction, i.e., calculated via the expectation values outlined above: $\Delta p \Delta x \ge \dfrac {\hbar}{2} \label {3.8.8}$ This essentially states that the greater the certainty with which either $x$ or $p$ can be measured, the greater the uncertainty in the other. Hence, as $Δp$ approaches 0, $Δx$ must approach $\infty$, which is the case of the free particle (e.g., with $V(x)=0$), where the momentum of a particle can be determined precisely. Example 3.8.1 : Uncertainty with a Gaussian wavefunction A particle is in a state described by the wavefunction $\psi{(x)} =\left(\dfrac{2a}{π}\right)^{\frac{1}{4}} e^{−ax^2} \label{Ex1eq1}$ where $a$ is a constant and $−∞≤ x ≤ ∞$. Verify that the value of the product $∆p∆x$ is consistent with the predictions from the uncertainty principle (Equation \ref{3.8.8}).
Solution Let's calculate the average of $x$: \begin{align} \langle x\rangle &= \int ^\infty_{-\infty} \psi^{*}x \psi \,dx \nonumber \[4pt] &= \int ^\infty_{-\infty}(2a/π)^\frac{1}{4} e^{−ax^2} x (2a/π)^\frac{1}{4} e^{−ax^2} \,dx \nonumber \[4pt] &= \int ^\infty_{-\infty}x(2a/π)^\frac{1}{2} e^{−2ax^2}\,dx \nonumber \[4pt] &= 0\nonumber \end{align} \nonumber since the integrand is an odd function (an even function times an odd function is an odd function). This makes sense given that the Gaussian wavefunction is symmetric around $x=0$. Let's calculate the average of $x^2$: \begin{align} \langle x^2\rangle &= \int ^\infty_{-\infty} \psi^{*}{x^2}\psi \,dx \nonumber \[4pt] &= \int ^\infty_{-\infty}(2a/π)^\frac{1}{4} e^{−ax^2} (x^2) (2a/π)^\frac{1}{4} e^{−ax^2} \, dx \nonumber \[4pt] &= \int ^\infty_{-\infty}x^2(2a/π)^\frac{1}{2} e^{−2ax^2} \, dx \nonumber \[4pt] & =\frac{1}{4a} \nonumber \end{align} \nonumber Let's calculate the average of $p$: \begin{align} \langle p\rangle &= \int ^\infty_{-\infty} \psi^{*} p \psi \,dx \nonumber \[4pt] &= \int ^\infty_{-\infty} \left(\dfrac{2a}{π}\right)^{\frac{1}{4}} e^{−ax^2} \left(-i\hbar \frac{d}{dx}\right) \left(\dfrac{2a}{π}\right)^{\frac{1}{4}} e^{−ax^2} \,dx \nonumber \[4pt] &= \int ^\infty_{-\infty} \left(\dfrac{2a}{π}\right)^{\frac{1}{4}} e^{−ax^2} (- i \hbar ) \left(\dfrac{2a}{π}\right)^{\frac{1}{4}} e^{−ax^2} (-2ax)\,dx \nonumber \[4pt] &=0 \nonumber \end{align} \nonumber since the integrand is an odd function.
Let's calculate the average of $p^2$: \begin{align} \langle p^2\rangle &= \int ^\infty_{-\infty} \psi^{*}{p^2} \psi dx \nonumber \[4pt] &= -\hbar^2 \left(\dfrac{2a}{\pi}\right) ^{1/2} \int_{-\infty}^{\infty} 2a(2a x^2 - 1) e^{- 2a x^2} dx \nonumber \[4pt] &= -8\hbar^2 a^2 \left(\dfrac{2a}{\pi}\right) ^{1/2} \int_{0}^{\infty} x^2 e^{- 2a x^2} dx + 4\hbar^2 a \left(\dfrac{2a}{\pi}\right)^{1/2} \int_{0}^{\infty} e^{- 2a x^2} dx \nonumber \[4pt] &= a\hbar^2 \nonumber \end{align} \nonumber We use Equation \ref{3.8.1} to check on the uncertainty \begin{align*} \Delta{x^2} &= \langle x^2 \rangle - \langle x \rangle^2 = \dfrac{1}{4a} - 0 \[4pt] \Delta{x} &= \sqrt{\Delta{x^2}} = \dfrac{1}{2\sqrt{a}}\[4pt] \Delta{p^2} &= \langle p^2 \rangle - \langle p \rangle^2 = a\hbar^2 - 0 \[4pt] \Delta{p} &= \sqrt{\Delta{p^2}} = \hbar \sqrt{a} \end{align*} \nonumber Finally we have $\Delta{p}\Delta{x} = \left(\dfrac{1}{2\sqrt{a}}\right) (\hbar \sqrt{a}) = \dfrac{\hbar}{2} \nonumber$ Not only does the Heisenberg uncertainty principle hold (Equation \ref{3.8.8}), but the equality is established for this wavefunction. This is because the Gaussian wavefunction (Equation \ref{Ex1eq1}) is special as discussed later. Exercise 3.8.1 A particle is in a state described by the ground state wavefunction of a particle in a box $\psi = \sqrt{\dfrac{2}{L}} \sin \left(\dfrac{\pi x}{L}\right) \nonumber$ where $L$ is the length of the box and $0≤ x ≤ L$. Verify that the value of the product $∆p∆x$ is consistent with the predictions from the uncertainty principle (Equation \ref{3.8.8}). The uncertainty principle is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space.
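The closed-form results of Example 3.8.1 can be verified numerically. Below is a Python sketch using midpoint-rule integration; the choices $a = 1$, $\hbar = 1$, and the integration window are illustrative, not part of the physics:

```python
import math

# Midpoint-rule check of Example 3.8.1 with a = 1 and hbar = 1:
# for the Gaussian wavefunction, dx*dp should equal hbar/2 exactly.

def gaussian_uncertainties(a=1.0, xmax=8.0, N=200000):
    step = 2 * xmax / N
    norm = (2 * a / math.pi) ** 0.25
    x2 = p2 = 0.0
    for i in range(N):
        x = -xmax + (i + 0.5) * step
        psi = norm * math.exp(-a * x * x)
        d2psi = (4 * a * a * x * x - 2 * a) * psi   # second derivative of psi
        x2 += psi * x * x * psi * step              # <x^2>  (<x> = 0 by symmetry)
        p2 += -psi * d2psi * step                   # <p^2> in units of hbar^2
    return math.sqrt(x2), math.sqrt(p2)

dx, dp = gaussian_uncertainties()
print(round(dx, 6), round(dp, 6), round(dx * dp, 6))   # 0.5 1.0 0.5
```

With $a = 1$ this reproduces $\Delta x = 1/(2\sqrt{a}) = 0.5$, $\Delta p = \hbar\sqrt{a} = 1$, and the minimum-uncertainty product $\hbar/2$.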
Learning Objectives • To demonstrate how the particle in 1-D box problem can extend to a particle in a 3D box • Introduction to nodal surfaces (e.g., nodal planes) The quantum particle in the 1D box problem can be expanded to consider a particle in higher dimensions, as demonstrated elsewhere for a quantum particle in a 2D box. Here we continue the expansion into a particle trapped in a 3D box with three lengths $L_x$, $L_y$, and $L_z$. As with the other systems, there is NO FORCE (i.e., no potential) acting on the particles inside the box (Figure 3.9.1 ). The potential for the particle inside the box is $V(\vec{r}) = 0 \nonumber$ for • $0 \leq x \leq L_x$ • $0 \leq y \leq L_y$ • $0 \leq z \leq L_z$ and $V(\vec{r}) = \infty$ outside these ranges (i.e., whenever $x$, $y$, or $z$ falls outside the box). Here $\vec{r}$ is the position vector with components along the three axes of the 3-D box: $\vec{r} = x\hat{x} + y\hat{y} + z\hat{z}$. When the potential energy is infinite, the wavefunction equals zero. When the potential energy is zero, the wavefunction obeys the Time-Independent Schrödinger Equation $-\dfrac{\hbar^{2}}{2m}\nabla^{2}\psi(r) + V(r)\psi(r) = E\psi(r) \label{3.9.1}$ Since we are dealing with a 3-dimensional box, we need to include all three axes in the Schrödinger equation: $-\dfrac{\hbar^{2}}{2m}\left(\dfrac{d^{2}\psi(r)}{dx^{2}} + \dfrac{d^{2}\psi(r)}{dy^{2}} + \dfrac{d^{2}\psi(r)}{dz^{2}}\right) = E\psi(r) \label{3.9.2}$ The easiest way to solve this partial differential equation is to write the wavefunction as a product of individual functions for each independent variable (i.e., the Separation of Variables technique): $\psi{(x,y,z)} = X(x)Y(y)Z(z) \label{3.9.3}$ Now each function has its own variable: • $X(x)$ is a function of variable $x$ only • $Y(y)$ is a function of variable $y$ only • $Z(z)$ is a function of variable $z$ only Now substitute Equation $\ref{3.9.3}$ into Equation $\ref{3.9.2}$ and divide it by the $XYZ$ product: $\dfrac{d^{2}\psi}{dx^{2}} =
YZ\dfrac{d^{2}X}{dx^{2}} \Rightarrow \dfrac{1}{X}\dfrac{d^{2}X}{dx^{2}} \nonumber$ $\dfrac{d^{2}\psi}{dy^{2}} = XZ\dfrac{d^{2}Y}{dy^{2}} \Rightarrow \dfrac{1}{Y}\dfrac{d^{2}Y}{dy^{2}} \nonumber$ $\dfrac{d^{2}\psi}{dz^{2}} = XY\dfrac{d^{2}Z}{dz^{2}} \Rightarrow \dfrac{1}{Z}\dfrac{d^{2}Z}{dz^{2}} \nonumber$ $\left(-\dfrac{\hbar^{2}}{2mX} \dfrac{d^{2}X}{dx^{2}}\right) + \left(-\dfrac{\hbar^{2}}{2mY} \dfrac{d^{2}Y}{dy^{2}}\right) + \left(-\dfrac{\hbar^{2}}{2mZ} \dfrac{d^{2}Z}{dz^{2}}\right) = E \label{3.9.4}$ $E$ is an energy constant, equal to the sum of the three separation constants $\varepsilon_{x}$, $\varepsilon_{y}$, and $\varepsilon_{z}$. Since each term in Equation $\ref{3.9.4}$ depends on a different variable, for the sum to remain constant each term must separately equal its own constant. Setting each term equal to its own constant gives three ordinary differential equations: $\dfrac{d^{2}X}{dx^{2}} + \dfrac{2m}{\hbar^{2}} \varepsilon_{x}X = 0 \label{3.9.5a}$ $\dfrac{d^{2}Y}{dy^{2}} + \dfrac{2m}{\hbar^{2}} \varepsilon_{y}Y = 0 \label{3.9.5b}$ $\dfrac{d^{2}Z}{dz^{2}} + \dfrac{2m}{\hbar^{2}} \varepsilon_{z}Z = 0 \label{3.9.5c}$ Now we can add all the energies together to get the total energy: $\varepsilon_{x}+ \varepsilon_{y} + \varepsilon_{z} = E \label{3.9.6}$ Do these equations look familiar? They should, because we have now reduced the 3D box into three particle-in-a-1D-box problems!
$\dfrac{d^{2}X}{dx^{2}} + \dfrac{2m}{\hbar^{2}} \varepsilon_{x}X = 0 \label{3.9.7}$ This has the same form as the 1-D particle-in-a-box equation $\dfrac{d^{2}\psi}{dx^{2}} = -\dfrac{4\pi^{2}}{\lambda^{2}}\psi \nonumber$ so the solutions and boundary conditions are identical to the 1-D case, with $n = 1, 2,..\infty \nonumber$ The normalized wavefunction for each variable has the familiar 1-D form: $\psi(x) = \begin{cases} \sqrt{\dfrac{2}{L_x}}\sin{\dfrac{n \pi x}{L_x}} & \mbox{if } 0 \leq x \leq L_x \ 0 & \mbox{otherwise} \end{cases} \nonumber$ The normalized wavefunctions for each variable (to substitute into Equation \ref{3.9.3}) are: $X(x) = \sqrt{\dfrac{2}{L_x}} \sin \left( \dfrac{n_{x}\pi x}{L_x} \right) \label{3.9.8a}$ $Y(y) = \sqrt{\dfrac{2}{L_y}} \sin \left(\dfrac{n_{y}\pi y}{L_y} \right) \label{3.9.8b}$ $Z(z) = \sqrt{\dfrac{2}{L_z}} \sin \left( \dfrac{n_{z}\pi z}{L_z} \right) \label{3.9.8c}$ The limits of the three quantum numbers • $n_{x} = 1, 2, 3, ...\infty$ • $n_{y} = 1, 2, 3, ...\infty$ • $n_{z} = 1, 2, 3, ...\infty$ For each separation constant use the de Broglie Energy equation: $\varepsilon_{x} = \dfrac{n_{x}^{2}h^{2}}{8mL_x^{2}} \label{3.9.9}$ with $n_{x} = 1...\infty$, and analogously for $\varepsilon_y$ and $\varepsilon_z$. Combine Equation $\ref{3.9.3}$ with Equations $\ref{3.9.8a}$-$\ref{3.9.8c}$ to find the wavefunctions inside a 3D box. $\psi(r) = \sqrt{\dfrac{8}{V}}\sin \left( \dfrac{n_{x} \pi x}{L_x} \right) \sin \left(\dfrac{n_{y} \pi y}{L_y}\right) \sin \left(\dfrac{ n_{z} \pi z}{L_z} \right) \label{3D wave}$ with $V = \underbrace{L_x \times L_y \times L_z}_{\text{volume of box}} \nonumber$ To find the total energy, substitute Equation $\ref{3.9.9}$ (and its $y$ and $z$ analogues) into Equation $\ref{3.9.6}$: $E_{n_x,n_y,n_z} = \dfrac{h^{2}}{8m}\left(\dfrac{n_{x}^{2}}{L_x^{2}} + \dfrac{n_{y}^{2}}{L_y^{2}} + \dfrac{n_{z}^{2}}{L_z^{2}}\right) \label{3.9.10}$ Notice the similarity between the energies of a particle in a 3D box (Equation $\ref{3.9.10}$) and a 1D box.
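Equation 3.9.10 is easy to evaluate programmatically. A small Python sketch (energies are reported in units of $h^2/8m$; the box lengths and quantum numbers below are illustrative):

```python
# Particle-in-a-3D-box energies from Equation 3.9.10, in units of h^2/(8m).

def energy_3d(n, L):
    """Return E * 8m/h^2 for quantum numbers n = (nx, ny, nz) and lengths L."""
    return sum((ni / Li) ** 2 for ni, Li in zip(n, L))

cube = (1.0, 1.0, 1.0)
print(energy_3d((1, 1, 1), cube))        # 3.0  (ground state of a cube)
print(energy_3d((2, 1, 1), cube),
      energy_3d((1, 2, 1), cube),
      energy_3d((1, 1, 2), cube))        # 6.0 6.0 6.0 (a degenerate set)
```

For a cube the three permutations of (2,1,1) give identical energies, previewing the degeneracy discussion that follows.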
Degeneracy in a 3D Cube The energy of the particle in a 3-D cube (i.e., $L_x=L_y=L_z=L$) in the ground state is given by Equation $\ref{3.9.10}$ with $n_x=1$, $n_y=1$, and $n_z=1$. This energy ($E_{1,1,1}$) is hence $E_{1,1,1} = \dfrac{3 h^{2}}{8mL^2} \nonumber$ The ground state has only one wavefunction and no other state has this specific energy; the ground state and the energy level are said to be non-degenerate. However, in the 3-D cubical box potential the energy of a state depends upon the sum of the squares of the quantum numbers (Equation \ref{3.9.10}). A particle having a particular energy in an excited state may have several different stationary states or wavefunctions. If so, these states and energy eigenvalues are said to be degenerate. For the first excited state, three combinations of the quantum numbers $(n_x,\, n_y, \, n_z )$ are $(2,\,1,\,1),\, (1,2,1),\, (1,1,2)$. The sum of the squares of the quantum numbers in each combination is the same (equal to 6). Each wavefunction has the same energy: $E_{2,1,1} =E_{1,2,1} = E_{1,1,2} = \dfrac{6 h^{2}}{8mL^2} \nonumber$ Corresponding to these combinations, three different wavefunctions and three different states are possible. Hence, the first excited state is said to be three-fold or triply degenerate. The number of independent wavefunctions for the stationary states of an energy level is called the degree of degeneracy of the energy level. The value of energy levels with the corresponding combinations and sum of squares of the quantum numbers $n^2 \,= \, n_x^2+n_y^2+n_z^2 \nonumber$ as well as the degree of degeneracy are depicted in Table 3.9.1 . Table 3.9.1 : Degeneracy properties of the particle in a 3-D cube with $L_x=L_y=L_z=L$.
| $n_x^2+n_y^2+n_z^2$ | Combinations of ($n_x$, $n_y$, $n_z$) | Total Energy ($E_{n_x,n_y,n_z}$) | Degree of Degeneracy |
|---|---|---|---|
| 3 | (1,1,1) | $\dfrac{3 h^{2}}{8mL^2}$ | 1 |
| 6 | (2,1,1) (1,2,1) (1,1,2) | $\dfrac{6 h^{2}}{8mL^2}$ | 3 |
| 9 | (2,2,1) (1,2,2) (2,1,2) | $\dfrac{9 h^{2}}{8mL^2}$ | 3 |
| 11 | (3,1,1) (1,3,1) (1,1,3) | $\dfrac{11 h^{2}}{8mL^2}$ | 3 |
| 12 | (2,2,2) | $\dfrac{12 h^{2}}{8mL^2}$ | 1 |
| 14 | (3,2,1) (3,1,2) (2,3,1) (2,1,3) (1,3,2) (1,2,3) | $\dfrac{14 h^{2}}{8mL^2}$ | 6 |
| 17 | (2,2,3) (3,2,2) (2,3,2) | $\dfrac{17 h^{2}}{8mL^2}$ | 3 |
| 18 | (1,1,4) (1,4,1) (4,1,1) | $\dfrac{18 h^{2}}{8mL^2}$ | 3 |
| 19 | (1,3,3) (3,1,3) (3,3,1) | $\dfrac{19 h^{2}}{8mL^2}$ | 3 |
| 21 | (1,2,4) (1,4,2) (2,1,4) (2,4,1) (4,1,2) (4,2,1) | $\dfrac{21 h^{2}}{8mL^2}$ | 6 |

Example 3.9.1 : Accidental Degeneracies When is there degeneracy in a 3-D box when none of the sides are of equal length (i.e., $L_x \neq L_y \neq L_z$)? Solution From simple inspection of Equation \ref{3.9.10} or Table 3.9.1, it is clear that degeneracy originates from different combinations of $n_x^2/L_x^2$, $n_y^2/L_y^2$ and $n_z^2/L_z^2$ that give the same value. Such degeneracies occur whenever at least two of these quantities happen to coincide. For example if $\dfrac{n_x^2}{L_x^2} = \dfrac{n_y^2}{L_y^2} \nonumber$ there will be a degeneracy. Degeneracies will also exist if $\dfrac{n_y^2}{L_y^2} = \dfrac{n_z^2}{L_z^2} \nonumber$ or if $\dfrac{n_x^2}{L_x^2} = \dfrac{n_z^2}{L_z^2} \nonumber$ and especially if $\dfrac{n_x^2}{L_x^2} = \dfrac{n_y^2}{L_y^2} = \dfrac{n_z^2}{L_z^2}. \nonumber$ There are two general kinds of degeneracies in quantum mechanics: degeneracies due to a symmetry (i.e., $L_x=L_y$) and accidental degeneracies like those above. Exercise 3.9.1 The 6th energy level of a particle in a 3D cube box is 6-fold degenerate. 1. What is the energy of the 7th energy level? 2. What is the degeneracy of the 7th energy level?
Answer a $\dfrac{17 h^{2}}{8mL^2}$ Answer b three-fold (i.e., there are three wavefunctions that share the same energy).
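The entries in Table 3.9.1 can be verified by brute-force enumeration of the quantum numbers. The following Python sketch (illustrative, not from the text) counts how many triples $(n_x, n_y, n_z)$ share each value of $n_x^2+n_y^2+n_z^2$, which fixes the energy $E = n^2h^2/8mL^2$ for a cube:

```python
from collections import Counter
from itertools import product

def degeneracies(n_max=4):
    """For a cubical box, count how many (nx, ny, nz) triples share each
    value of nx^2 + ny^2 + nz^2 (i.e., the degree of degeneracy)."""
    counts = Counter(nx**2 + ny**2 + nz**2
                     for nx, ny, nz in product(range(1, n_max + 1), repeat=3))
    return dict(sorted(counts.items()))

g = degeneracies()
# The first few levels reproduce Table 3.9.1:
print({k: g[k] for k in (3, 6, 9, 11, 12, 14, 17)})
# {3: 1, 6: 3, 9: 3, 11: 3, 12: 1, 14: 6, 17: 3}
```

This also confirms the exercise: the 6th level ($n^2 = 14$) is 6-fold degenerate and the 7th level ($n^2 = 17$) is 3-fold degenerate.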
Solutions to select questions can be found online. 3.2 Determine which of the following operators are linear and which are nonlinear: 1. $\hat{A}f(x)= f(x)^2$ [square f(x)] 2. $\hat{A}f(x)= f^*(x)$ [form the complex conjugate of f(x)] 3. $\hat{A}f(x)= 0$ [multiply f(x) by zero] 4. $\hat{A}f(x)= [f(x)]^{-1}$ [take the reciprocal of f(x)] 5. $\hat{A}f(x)= f(0)$ [evaluate f(x) at x=0] 6. $\hat{A}f(x)= \ln f(x)$ [take the log of f(x)] Solution It is important to note that an operator $\hat{A}$ is linear if $\underbrace{\hat{A}[c_1f_1(x)+c_2f_2(x)]}_{\text{left side}}= \underbrace{c_1\hat{A}f_1(x)+c_2\hat{A}f_2(x) }_{\text{right side}}\nonumber$ and the operator is nonlinear if $\underbrace{ \hat{A}[c_1f_1(x)+c_2f_2(x)]}_{\text{left side}} \neq \underbrace{ c_1\hat{A}f_1(x)+c_2\hat{A}f_2(x) }_{\text{right side}}\nonumber$ a) Evaluate the left side \begin{align*} \hat{A}[c_1f_1(x)+c_2f_2(x)] &= [c_1f_1(x)+c_2f_2(x)]^2 \\ &= c_1^2 f_1(x)^2+2c_1c_2f_1(x) f_2(x)+c_2^2f_2(x)^2 \end{align*} Evaluate the right side $c_1 \hat{A} f_1(x)+c_2\hat{A}f_2(x)=c_1[f_1(x)]^2+c_2[f_2(x)]^2 \neq \hat{A}[c_1f_1(x)+c_2f_2(x)] \nonumber$ This operator is nonlinear. b) Evaluate the left side $\hat{A}[c_1f_1(x)+c_2f_2(x)] = c_1^*f_1^*(x) + c_2^*f_2^*(x)\nonumber$ Evaluate the right side $c_1\hat{A}f_1(x) + c_2\hat{A}f_2(x) = c_1f_1^*(x) + c_2f_2^*(x)\nonumber$ The two sides agree only when $c_1$ and $c_2$ are real; for complex constants the coefficients are conjugated on the left side but not on the right. This operator is therefore nonlinear (it is antilinear). c) Evaluate the left side $\hat{A}[c_1f_1(x)+c_2f_2(x)] = 0\nonumber$ Evaluate the right side $c_1\hat{A}f_1(x) + c_2\hat{A}f_2(x) = c_1 \cdot 0 + c_2 \cdot 0 = 0 = \hat{A}[c_1f_1(x) + c_2f_2(x)]\nonumber$ This operator is linear. d) Evaluate the left side $\hat{A}[c_1f_1(x)+c_2f_2(x)] = \dfrac{1}{c_1f_1(x) + c_2f_2(x)}\nonumber$ Evaluate the right side $c_1\hat{A}f_1(x) + c_2\hat{A}f_2(x) = \dfrac{c_1}{f_1(x)} + \dfrac{c_2}{f_2(x)} \neq \hat{A}[c_1f_1(x) + c_2f_2(x)]\nonumber$ This operator is nonlinear. e) Evaluate the left side
$\hat{A}[c_1f_1(x)+c_2f_2(x)] = c_1f_1(0) + c_2f_2(0)\nonumber$ Evaluate the right side $c_1\hat{A}f_1(x) + c_2\hat{A}f_2(x) = c_1f_1(0) + c_2f_2(0) = \hat{A}[c_1f_1(x) + c_2f_2(x)]\nonumber$ This operator is linear. f) Evaluate the left side $\hat{A}[c_1f_1(x)+c_2f_2(x)] = \ln [c_1f_1(x) + c_2f_2(x)]\nonumber$ Evaluate the right side $c_1\hat{A}f_1(x) + c_2\hat{A}f_2(x) = c_1 \ln f_1(x) + c_2 \ln f_2(x) \neq \hat{A}[c_1f_1(x) + c_2f_2(x)]\nonumber$ This operator is nonlinear. 3.8 Show that for a particle in a box of length $a$ in the state $n=3$ there are 3 locations along the x axis where the probability density is at a maximum. Solution The probability density for a particle in a box in the state $n=3$ is ${\psi}^*{\psi}=\dfrac{2}{a}{\sin}^2\dfrac{3\pi x}{a}\nonumber$ To maximize the probability density, take its derivative, set it equal to zero, and solve for $x$. $\dfrac{d}{dx}\left[\dfrac{2}{a}{{\sin}^2 \dfrac{3{\pi}x}{a}\ }\right]=\dfrac{2}{a}\cdot 2\cdot {\sin \dfrac{3{\pi}x}{a}\ }\cdot {\cos \dfrac{3{\pi}x}{a}\ }\cdot \dfrac{3\pi}{a}=0\nonumber$ ${\sin \dfrac{3{\pi}x}{a}\ }{\cos \dfrac{3{\pi}x}{a}\ }=0\nonumber$ We do not choose values of $x$ that make ${\sin \dfrac{3{\pi}x}{a}\ }=0$, as the probability density is zero there (these are minima). We only choose the zeros of ${\cos \dfrac{3{\pi}x}{a}\ }$. The values of $x$ which make ${\cos \dfrac{3{\pi}x}{a}\ } =0 \nonumber$ are $\dfrac{3\pi x}{a}=\dfrac{2m+1}{2}\pi \ \ \ \ \ \ m=0,1,2,\dots \nonumber$ $x=\dfrac{\left(2m+1\right)a}{6}\nonumber$ We only choose $m=0,1,2$ and not 3 because $m=3$ would give $x=\dfrac{7a}{6}$, which is outside the box. So the locations are $x=\dfrac{a}{6}\nonumber$ $x=\dfrac{a}{2}\nonumber$ $x=\dfrac{5a}{6}\nonumber$ 3.13 What range for $L$ is possible for $\sigma_x$ given: $\sigma_x = \sqrt{\langle x^2\rangle - \langle x \rangle^2}\nonumber$ where $L$ is the length of the 1-D box? Hint: Remember that $\sigma_x$ is the uncertainty in the position of a particle in a box.
Solution For a particle in a box: $\langle x \rangle = \dfrac{L}{2}$ and $\langle x^2 \rangle = \dfrac{L^2}{3}-\dfrac{L^2}{2n^2\pi^2}$ so $\sigma_x = \sqrt{\dfrac{L^2}{3}-\dfrac{L^2}{2n^2\pi^2} - \left(\dfrac{L}{2}\right)^2} = L\sqrt{\dfrac{1}{12}-\dfrac{1}{2n^2\pi^2}}$ This grows from $\approx 0.18L$ for $n=1$ toward $L/\sqrt{12} \approx 0.29L$ as $n \to \infty$, so $\sigma_x$ is always less than $L$. 3.14 Using the trigonometric identity $\cos(2x)=2\cos^{2}x-1\nonumber$ show that $\int_0^a \left[ 2 \cos^2 \dfrac{n\pi x}{a} -1 \right] dx = 0\nonumber$ By the identity, the integrand equals $\cos \dfrac{2n\pi x}{a}$, and $\int_0^a \cos \dfrac{2n\pi x}{a} dx = \dfrac{a}{2n\pi} \sin(2n\pi) = 0\nonumber$ 3.18 Is the wavefunction $\phi_n= \sqrt{\dfrac{2}{L}} \sin{\left(\dfrac{n\pi x}{L}\right)}$ orthonormal over $0 \leq{x} \leq{L}$? Explain your reasoning. Solution For a wavefunction to be orthonormal, it has to satisfy two conditions: 1) it has to be orthogonal and 2) it has to be normalized. To show that it is orthogonal: $\int_0^L \phi_m \phi\ast_n dx = 0\nonumber$ when $m\neq n$. To show that the wavefunction is normalized, it must follow that $\int_0^L \phi_n \phi\ast_n dx = 1 \nonumber$ when $m=n$. Because our wavefunction satisfies both conditions, it is an orthonormal function. For example, we find that $\langle \psi_3 |\psi_3 \rangle = \int \limits _{0}^{L} \dfrac{2}{L}\left(\sin\dfrac{3\pi x}{L}\right)^2 dx = 1\nonumber$ and $\langle \psi_3 |\psi_4 \rangle = \int \limits _{0}^{L} \dfrac{2}{L}\left(\sin\dfrac{3\pi x}{L}\right)\left(\sin\dfrac{4\pi x}{L}\right) dx = 0\nonumber$ From orthogonality, we learn that if $n$ is not equal to $m$, the overlap integral will always be zero, but if $n = m$ it will equal 1. 3.22 What is the Heisenberg Uncertainty Principle? Do position and momentum follow the uncertainty principle; why or why not? If they do, what is the minimum uncertainty in the velocity of an electron if it is known to be within 1.5 nm of a nucleus? Solution The Heisenberg Uncertainty Principle states that certain pairs of properties cannot be simultaneously measured to arbitrary precision. Position and momentum follow the principle.
The position and momentum operators do not commute — their commutator is $i\hbar$, not zero — and properties whose operators do not commute cannot both be measured to arbitrary precision. We know that $\Delta x \Delta p \geq \dfrac{\hbar}{2}\nonumber$ and that $p = mv$. This gives $m\Delta x \Delta v \geq \dfrac{\hbar}{2}\nonumber$ The mass of an electron is $m_e \approx 9.1 \times 10^{-31}\;kg$. The problem also gives $\Delta x$ to be 1.5 nm. From here, it becomes a plug and chug to solve for $\Delta v$: $\Delta v \geq \dfrac{\hbar}{2m_e\Delta x} = 3.86 \times 10^4\; m/s\nonumber$ 3.23 Describe the degeneracies of a two-dimensional box whose two sides have different lengths. Solution The energies of a two-dimensional box are given by $E = \dfrac{h^2}{8m} \left(\dfrac{n^{2}_{x}}{a^2}+\dfrac{n^{2}_{y}}{b^2}\right)\nonumber$ When $a \ne b$, the energy levels are in general non-degenerate, although accidental degeneracies can still occur whenever two different pairs $(n_x, n_y)$ happen to give the same value of $n_x^2/a^2 + n_y^2/b^2$. 3.26 How many degenerate states do the first three energy levels for a three-dimensional particle in a box have if $a=b=c$? 3.27 Metal porphyrin molecules are common in many proteins. The molecule is planar, so we can approximate its $\pi$ electrons as being confined inside a square. What are the energy levels and corresponding degeneracies of a particle in a square of side $m$? Porphyrin molecules have 18 $π$ electrons. If the length of the molecule is 850 pm, what is the lowest energy absorption of the porphyrin molecules? (The experimental value is ≈ 17,000 cm-1.) Solution (to 3.26) The first energy level is E(1,1,1), which has no degeneracy. The second energy level is E(2,1,1)=E(1,2,1)=E(1,1,2); therefore it has three degenerate states. The third energy level is E(2,2,1)=E(2,1,2)=E(1,2,2); therefore it also has three degenerate states. 3.26 For a two dimensional box of width $w$ and height $h=\sqrt{a}w$, calculate all possible energy combinations between $E_{11}$ and $E_{33}$ and note any degeneracy.
Solution The energy of a two dimensional particle in the box has the form $E = \dfrac{h^2}{8m}\Bigg(\dfrac{n_x^2}{w^2}+\dfrac{n_y^2}{h^2}\Bigg)\nonumber$ (here the $h$ in the numerator is Planck's constant and the $h$ in the denominator is the height of the box). Taking $a = \tfrac{1}{2}$, so that the height is $w/\sqrt{2}$, this simplifies to $E = \dfrac{h^2}{8m}\Bigg(\dfrac{n_x^2+2n_y^2}{w^2}\Bigg)\nonumber$ Now we can tabulate the energy levels in increasing order.

| $E_{n_xn_y}$ | $\dfrac{E \cdot 8mw^2}{h^2} = n_x^2+2n_y^2$ |
|---|---|
| $E_{11}$ | 3 |
| $E_{21}$ | 6 |
| $E_{12}$ | 9 |
| $E_{31}$ | 11 |
| $E_{22}$ | 12 |
| $E_{32}$ | 17 |
| $E_{13}$ | 19 |
| $E_{23}$ | 22 |
| $E_{33}$ | 27 |

No two of these values coincide, so for this aspect ratio none of the levels between $E_{11}$ and $E_{33}$ are degenerate. 3.32 In this problem, we will explore the quantum-mechanical problem of a free particle that is not restricted to a finite region. Remember that the quantized energies of a particle in a box are a direct result of the boundary conditions set by the confines of the box. When the potential energy $V(x)$ is equal to zero, the Schrödinger equation becomes $\dfrac{d^2\psi}{dx^2} + \dfrac{2mE}{\hbar^2}\psi(x) = 0\nonumber$ The two solutions to this Schrödinger equation are $\psi_1(x) = A_1e^{ikx}\nonumber$ $\psi_2(x) = A_2e^{-ikx}\nonumber$ where $k = \dfrac{(2mE)^{1/2}}{\hbar}\nonumber$ Show that $\psi_1(x)$ and $\psi_2(x)$ are solutions to the Schrödinger equation where the potential energy $V(x)$ is equal to zero. Solution In order to prove that $\psi_1(x)$ and $\psi_2(x)$ are solutions we need to mention a few values: $p = \hbar k \rightarrow k = p/\hbar \rightarrow k = \dfrac{(2mE)^{1/2}}{\hbar}\nonumber$ Now we have $\dfrac{d^2Ae^{\pm ikx}}{dx^2} + \dfrac{2mE}{\hbar^2}Ae^{\pm ikx}= 0\nonumber$ $A(\pm ik)^2e^{\pm ikx} + \dfrac{2mE}{\hbar^2}Ae^{\pm ikx} = 0 \rightarrow -k^2 + \dfrac{2mE}{\hbar^2} = 0 \nonumber$ after canceling the like terms. Thus, $k = \dfrac{(2mE)^{1/2}}{\hbar}$, which equals the original $k$ value. 3.32 Show that $E$ has to be a positive value, since when $E$ is negative the wavefunction becomes unbounded for large $x$ values. Solution If $E < 0$ then $k$ becomes imaginary, $k \rightarrow ik$: $\psi = Ae^{\pm ikx} = Ae^{\pm i(ik)x} = Ae^{\mp kx}$ For $\psi_1(x) = A_1e^{-kx}$ this will blow up for x → -
$\infty$. For $\psi_2(x) = A_2e^{kx}$ this will blow up for $x \rightarrow \infty$. 3.32 With $\hat{P}$ $\psi_1(x)$ and $\hat{P}$ $\psi_2(x)$ as eigenvalue equations, show that $\hat{P}\psi_1(x) = -i\hbar \dfrac{d\psi_1}{dx} = \hbar k\psi_1\nonumber$ and $\hat{P}\psi_2(x) = -i\hbar \dfrac{d\psi_2}{dx} = -\hbar k\psi_2\nonumber$ Solution $\hat{P}\psi_1(x) = -i\hbar \dfrac{d\psi_1}{dx} = -i\hbar \dfrac{d}{dx}A_1e^{+ikx} = -i\hbar A_1(ik)e^{ikx} = +\hbar kA_1e^{ikx} = + \hbar k\psi_1\nonumber$ $\hat{P}\psi_2(x) = -i\hbar \dfrac{d\psi_2}{dx} = -i\hbar \dfrac{d}{dx}A_2e^{-ikx} = -i\hbar A_2(-ik)e^{-ikx} = -\hbar kA_2e^{-ikx} = -\hbar k\psi_2\nonumber$ Now we can show that $E = \dfrac{p^2}{2m} = \dfrac{(\pm\hbar k)^2}{2m} = \dfrac{\hbar^2 k^2}{2m}\nonumber$ 3.32 Show that $\psi_1^{*}\psi_1(x) = A_1^{*}A_1 = \left | A_1\right |^2$ and that $\psi_2^{*}\psi_2(x) = A_2^{*}A_2 = \left | A_2\right |^2$ Solution \begin{align*} \psi_1^{*}\psi_1(x) &= (A_1e^{ikx})^*A_1e^{ikx} \\[4pt] &= A_1^{*}A_1e^{-ikx}e^{ikx} \\[4pt] &= A_1^{*}A_1e^{-ikx+ikx} = A_1^{*}A_1e^{0} \\[4pt] &= A_1^{*}A_1\end{align*} \begin{align*} \psi_2^{*}\psi_2(x) &= (A_2e^{-ikx})^*A_2e^{-ikx} \\[4pt] &= A_2^{*}A_2e^{ikx -ikx} \\[4pt] &= A_2^{*}A_2e^{0} = A_2^{*}A_2 \end{align*} $\psi$ thus has equal probability of being found everywhere, consistent with $\Delta x = \infty$ and $\Delta p = 0$. 3.33A Assuming that a particle is characterized by a standing de Broglie wave, come up with an equation for the allowed energies of a particle in a one-dimensional box.
Solution $\lambda = \dfrac{h}{p}\nonumber$ Because the waves are standing waves, an integral number of half wave-lengths will fit in the box, or: $a = \dfrac{n\lambda}{2}\nonumber$ and $a = \dfrac{nh}{2p}\nonumber$ Solving for $p$ yields $p = \dfrac{nh}{2a}\nonumber$ and the corresponding energy is $E = \dfrac {mv^2}{2} = \dfrac{p^2}{2m} =\dfrac{1}{2m} \dfrac {n^2h^2}{4a^2} = \dfrac{n^2h^2}{8ma^2} \nonumber$ 3.33B Derive the lowest allowed velocity for a proton in a box of length $10^{-14}$ m (approximate size of a nucleus), assuming the particle is described by a standing de Broglie wave. Solution The de Broglie wavelength is $λ = \dfrac{h}{p} = \dfrac{h}{m_pv}\nonumber$ For a one dimensional wave that has nodes on both ends of a box, an integer number of half wavelengths can fit, so $n \left(\dfrac{λ}{2} \right) = L\nonumber$ Substituting this wavelength into the de Broglie relationship, one gets $\nu = \dfrac{hn}{2m_pL}\nonumber$ The lowest allowed velocity will have $n = 1$: \begin{align*} \nu &= \dfrac{(6.626 \times 10^{-34})(1)}{2 \times (1.67 \times 10^{-27})(10^{-14})} \\[4pt] &= 19.8 \times 10^6\; m/s \end{align*} 3.33C If a particle in a one-dimensional box is described by standing de Broglie waves within the box, derive an equation for the allowed energies. Then use that equation to find the transition energy from n=1 to n=2 given the length of the box is 350 pm and the mass of an electron is $9.109 \times 10^{-31} kg$.
Solution The de Broglie formula is $\lambda=\dfrac{h}{p}\nonumber$ An integral number of half-wavelengths will fit in the box because the waves are standing waves, so $\dfrac{n\lambda}{2}=a\nonumber$ $\dfrac{nh}{2p}=a\nonumber$ Then solving for $p$: $p=\dfrac{nh}{2a}\nonumber$ Therefore the energy equation is $E=\dfrac{mv^2}{2}=\dfrac{p^2}{2m}=\dfrac{1}{2m} \dfrac{n^2h^2}{4a^2}=\dfrac{n^2h^2}{8ma^2}\nonumber$ Just plug into the equation to find the transition energy: $\Delta E=\dfrac{h^2}{8m_ea^2}(2^2-1^2)\nonumber$ $\Delta E=\dfrac{(6.626 \times 10^{-34} J \centerdot s)^2 (3) }{8(9.109 \times 10^{-31} kg)(350 \times 10^{-12}m)^2}\nonumber$ $\Delta E=1.47 \times 10^{-18}J \nonumber$ 3.35A Consider the two wavefunctions $\psi_n(x) = \sin\dfrac{n\pi x}{a} \nonumber$ with even $n$ and $\psi_n(x) = \cos\dfrac{n\pi x}{a} \nonumber$ with odd $n$. Prove that the wavefunctions can be symmetric and antisymmetric under the operation $x \to -x$, where $a$ is a constant. Given that the Schrödinger equation has the expression: $\hat{H}(x)\psi_n(x) = E_n\psi_n(x) \nonumber$ Through the operation $x \to -x$, the equation becomes: $\hat{H}(-x)\psi_n(-x) = E_n\psi_n(-x) \nonumber$ Show that $\hat{H}(x) = \hat{H}(-x)\nonumber$ Solution Substituting $x$ by $-x$: for odd $n$, $\psi_n(-x) = \cos\dfrac{-n\pi x}{a} = \cos\dfrac{n\pi x}{a} = \psi_n(x)$ For even $n$, $\psi_n(-x) = \sin\dfrac{-n\pi x}{a} = -\sin\dfrac{n\pi x}{a} = -\psi_n(x)$ Thus, the wavefunctions for odd $n$ are symmetric and those for even $n$ are antisymmetric. And, since $\hat{H}(x) = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}$ and $\hat{H}(-x) = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{d(-x)^2} = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}$ it follows that $\hat{H}(x) = \hat{H}(-x) \nonumber$ i.e., $\hat{H}$ is an even function of $x$. 3.35B Show that the Hamiltonian for the Rigid Rotor model is even, i.e., that $\hat{H}(\vec{r}) = \hat{H}(-\vec{r})$.
Solution The rigid rotor Hamiltonian contains only a kinetic term, $\hat{H}= -\dfrac{\hbar^2}{2\mu}\nabla^2 = -\dfrac{h^2}{8\pi^2 \mu}\nabla^2\nonumber$ where $\nabla^2 = \dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2}\nonumber$ Under the inversion $(x,y,z) \rightarrow (-x,-y,-z)$, each second derivative is unchanged, e.g. $\dfrac{\partial^2}{\partial (-x)^2} = \dfrac{\partial^2}{\partial x^2}\nonumber$ so $\nabla^2$, and hence $\hat{H}$, is unchanged: $\hat{H}(\vec{r})= \hat{H}(-\vec{r})\nonumber$
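The momentum eigenvalue relations derived in Problem 3.32 lend themselves to a quick numerical spot-check: applying $-i\hbar\, d/dx$ by a central finite difference to $e^{\pm ikx}$ should return $\pm\hbar k$ times the wavefunction. A Python sketch (illustrative; the wavevector and test point are arbitrary choices, not values from the text):

```python
import cmath

HBAR = 1.054571817e-34  # reduced Planck constant, J s

def p_hat(psi, x, h=1e-6):
    """Apply the momentum operator -i*hbar d/dx via a central difference."""
    return -1j * HBAR * (psi(x + h) - psi(x - h)) / (2 * h)

k = 2.0  # arbitrary wavevector (units of 1/m)
psi1 = lambda x: cmath.exp(1j * k * x)   # psi_1 = e^{+ikx}
psi2 = lambda x: cmath.exp(-1j * k * x)  # psi_2 = e^{-ikx}

x0 = 0.37  # arbitrary test point
print(p_hat(psi1, x0) / psi1(x0))  # ≈ +hbar*k
print(p_hat(psi2, x0) / psi2(x0))  # ≈ -hbar*k
```

Because $e^{\pm ikx}$ are eigenfunctions, the ratio $(\hat{P}\psi)/\psi$ is the same constant at every $x$, which is exactly what the eigenvalue equation asserts.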
• 4.1: The Wavefunction Specifies the State of a System Postulate 1: Every physically-realizable state of the system is described in quantum mechanics by a state function that contains all accessible physical information about the system in that state. • 4.2: Quantum Operators Represent Classical Variables Every observable in quantum mechanics is represented by an operator which is used to obtain physical information about the observable from the state function. For an observable that is represented in classical physics by a function $Q(x,p)$, the corresponding operator is $Q(\hat{x},\hat{p})$. • 4.3: Observable Quantities Must Be Eigenvalues of Quantum Mechanical Operators It is a general principle of Quantum Mechanics that there is an operator for every physical observable. A physical observable is anything that can be measured. If the wavefunction that describes a system is an eigenfunction of an operator, then the value of the associated observable is extracted from the eigenfunction by operating on the eigenfunction with the appropriate operator. The value of the observable for the system is the eigenvalue, and the system is said to be in an eigenstate. • 4.4: The Time-Dependent Schrödinger Equation The time-dependent Schrödinger equation predicts that wavefunctions can form standing waves (i.e., stationary states); once these states are classified and understood, it becomes easier to solve the time-dependent Schrödinger equation for any state. Stationary states can also be described by the time-independent Schrödinger equation (used when the Hamiltonian is not explicitly time dependent), though the solutions of the time-independent Schrödinger equation still carry a time dependence.
• 4.5: Eigenfunctions of Operators are Orthogonal We have seen that the eigenvalues of operators associated with experimental measurements are all real, and that the position and momentum of a particle cannot be determined exactly; here we see that the eigenfunctions of such operators are orthogonal. We now examine the generality of these insights by stating and proving some fundamental theorems. These theorems use the Hermitian property of quantum mechanical operators, which is described first. • 4.6: Commuting Operators Allow Infinite Precision If two operators commute then both quantities can be measured at the same time with infinite precision; if not, then there is a tradeoff in the accuracy of the measurement of one quantity vs. the other. This is the mathematical representation of the Heisenberg Uncertainty principle. • 4.E: Postulates and Principles of Quantum Mechanics (Exercises) These are homework exercises to accompany Chapter 4 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap. 04: Postulates and Principles of Quantum Mechanics Learning Objectives • Introduce the first postulate of quantum mechanics • Recognize invalid wavefunction categories In classical mechanics, the configuration or state of a system is given by a point $( x , p )$ in the space of coordinates and momenta. This specifies everything else in the system in a fully deterministic way, in that any observable $Q$ that can be expressed as $Q ( x , p )$ can be found, and any that cannot is irrelevant. Yet, as we have seen with the diffraction of electrons, it is impossible to know both the position and momentum of the electron exactly at every point along the trajectory. This is mathematically expressed as the famous position-momentum uncertainty principle. Hence, specifying a state by $( x , p )$ in classical mechanics clearly will not work in quantum mechanics. So what specifies the state of a quantum system?
This is where the first Postulate of quantum mechanics comes in. Postulate I The state of the system is completely specified by $\psi$. All possible information about the system can be found in the wavefunction $\psi$. The properties of a quantum mechanical system are determined by a wavefunction $\psi(r,t)$ that depends upon the spatial coordinates of the system and time, $r$ and $t$. For a single particle system, $r$ is the set of coordinates of that particle $r = (x_1, y_1, z_1)$. For more than one particle, $r$ is used to represent the complete set of coordinates $r = (x_1, y_1, z_1, x_2, y_2, z_2,\dots x_n, y_n, z_n)$. Since the state of a system is defined by its properties, $\psi$ specifies or identifies the state and sometimes is called the state function rather than the wavefunction. What does $\psi$ mean? This is best answered in terms of the probability density $P(x)$ that determines the probability (density) that an object in the state $ψ ( x )$ will be found at position $x$ (the Born interpretation). \begin{align} P ( x ) &= ψ^*(x)ψ(x) \\[4pt] &= |ψ(x)| ^2 \label{norm} \end{align} Hence, for valid (e.g., well-behaved) wavefunctions, the probability density in Equation $\ref{norm}$ must integrate to 1 over all space: $\int_{-\infty}^\infty \psi^*(x)\psi(x)\;dx=1 \label{4.1.1}$ Equation \ref{4.1.1} means that the chance to find a particle is 100% somewhere within all space (e.g. somewhere between $-\infty$ and $+\infty$). A wavefunction is said to be square-integrable if Equation \ref{4.1.1} can be satisfied (so that the Born interpretation can be applied). Definition: Square-Integrable Functions A complex-valued function, $f(x)$, is a square-integrable function if the integral of the square of the absolute value is finite. $\int_{-\infty}^\infty f^*(x) f(x)\;dx < \infty \nonumber$ For this to be true, the integrals of the positive and negative portions of the real and imaginary parts of $f(x)$ must both be finite.
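Square-integrability and normalization can be checked numerically. The sketch below (illustrative; the Gaussian test function is an assumption for the example, not a wavefunction from the text) integrates $|\psi|^2$ with a simple midpoint rule and confirms that $\pi^{-1/4}e^{-x^2/2}$ integrates to 1, i.e. it satisfies Equation (4.1.1):

```python
import math

def norm_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of |f(x)|^2 over [a, b]."""
    dx = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * dx)) ** 2 for i in range(n)) * dx

# A Gaussian e^{-x^2/2} has integral of |psi|^2 equal to sqrt(pi);
# dividing by pi^{1/4} normalizes it so the Born probability sums to 1.
psi = lambda x: math.exp(-x**2 / 2)
N = math.pi ** -0.25
total = norm_integral(lambda x: N * psi(x), -10, 10)
print(total)  # ≈ 1.0
```

The integration window $[-10, 10]$ stands in for $(-\infty, \infty)$; the Gaussian tail beyond it is negligibly small.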
Let us examine this set of examples in further detail in Figure 4.1.1. The first wavefunction $ψ_1$ is sharply peaked at a particular value of $x$, and the probability density, being its square, is likewise peaked there as well. This is the wavefunction for a particle well localized at a position given by the center of the peak, as the probability density is high there, and the width of the peak is small, so the uncertainty in the position is very small. The second wavefunction $ψ_2$ has the same peak profile, but shifted to a different position center. All of the properties of the first wavefunction hold here too, so this simply describes a particle that is well-localized at that different position. The third and fourth wavefunctions $ψ_3$ and $ψ_4$ respectively look like sinusoids of different spatial periods. The wavefunctions are actually complex functions of the form $ψ ( x ) = Ne^{ikx} \nonumber$ so only the real part is being plotted in Figure 4.1.1. Note that even though the wavevectors, $k$, of the oscillating wavefunctions are different, \begin{align*} P(x) &= ψ^*(x)ψ(x) \\[4pt] &= | Ne^{ikx} |^2 \\[4pt] &= N^2\left(e^{-ikx}\right) \left(e^{ikx}\right) \\[4pt] &= N^2 \end{align*} for all $k$, so the corresponding probability densities, $P(x)$, are the same except for the normalization constant (Equation $\ref{norm}$). We saw before that it does not make a whole lot of sense to think of a sinusoidal wave as being localized in some place. Indeed, the positions for these two wavefunctions are ill-defined, so they are not well-localized, and the uncertainty in the position is large in each case. This is the Heisenberg Uncertainty Principle in action. Ill-behaved (invalid) Wavefunctions The Born interpretation in Equation $\ref{norm}$ means that many wavefunctions which would be acceptable mathematical solutions of the Schrödinger equation are not acceptable because of their implications for the physical properties of the system.
To satisfy this interpretation, wavefunctions must be: • single valued, • continuous, and • finite. These aspects mean that the valid wavefunction must be one-to-one, it cannot have an undefined slope, and cannot go to $-\infty$ or $+\infty$. For example, the wavefunction must not be infinite over any finite region. If it is, then the integral in Equation $\ref{4.1.1}$ is equal to infinity. This implies that the particle described by such a wavefunction has a zero probability of being anywhere where the wavefunction is not infinite, but is certain to be found at all points where the wavefunction is infinite. The Born interpretation also renders unacceptable solutions of the Schrödinger equation for which $|ψ(x)|^2$ has more than one value at any point. This would suggest that there were multiple different probabilities of finding the particle at that point, which is clearly absurd. The requirement that the square modulus of the wavefunction must be single-valued usually implies that the wavefunction itself must be single valued. The function in Figure 4.1.3 violates this requirement. The grey lines indicate the region where the wavefunction is multivalued. Further restrictions arise because the wavefunction must satisfy the Schrödinger equation, which is a second-order differential equation. This implies that the second derivative of the function must exist, which requires the first derivative of the wavefunction to exist (otherwise the second derivative is also undefined and the wavefunction cannot be a solution of the Schrödinger equation). The functions in Figure 4.1.4 are also not acceptable for these reasons. Exercise 4.1.1 Determine if each of the following functions is acceptable as a wavefunction over the indicated regions: 1. $\cos x$ over $(0,\infty)$ 2. $e^x$ over $(-\infty,\infty)$ 3. $e^{-x}$ over $[0,\infty)$ 4. $\tan \theta$ over $[0, 2\pi]$ Solution a This is not an acceptable wavefunction. It is single-valued across the entire range.
There is a single value for each value of $x$, and the function is continuous over the defined limits of integration. However, it is not square-integrable: $\int_{0}^{\infty} | \cos(x) |^2 dx \; \xcancel{<} \; \infty \nonumber$ Solution b This is not an acceptable wavefunction. Over the limits of integration from $-\infty$ to $\infty$, this function is not square-integrable; it diverges as $x \rightarrow \infty$. Solution c This is an acceptable wavefunction over the given limits. It is finite over the given limits. It is continuous within the given limits. It is single-valued. It is square-integrable with $\displaystyle \int_{0}^{\infty} | \Psi(x) |^2 dx = \frac{1}{2}$. Solution d This is not an acceptable wavefunction. It is discontinuous over the limits of integration.
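Solution c's square-integrability claim is easy to confirm numerically. A minimal Python check (added for illustration) approximates $\int_0^\infty e^{-2x}\,dx$ with a midpoint rule:

```python
import math

def sq_integral(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of |f(x)|^2 over [a, b]."""
    dx = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * dx)) ** 2 for i in range(n)) * dx

# For e^{-x} on [0, infinity), |f|^2 = e^{-2x}; the tail past x = 40
# is negligible, and the integral converges to 1/2, so the function
# is square-integrable.
val = sq_integral(lambda x: math.exp(-x), 0.0, 40.0)
print(val)  # ≈ 0.5
```

The same helper applied to $\cos x$ or $e^x$ would simply keep growing as the upper limit is extended, which is the numerical signature of a non-square-integrable function.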
Learning Objectives • Understand how the correspondence principle argues that a unique quantum operator exists for every classical observable • Recognize several of the commonly used quantum operators An observable is a dynamic variable of a system that can be experimentally measured (e.g., position, momentum and kinetic energy). In systems governed by classical mechanics, an observable is a real-valued function (never complex). In quantum mechanics, by contrast, every observable is represented by an independent operator that is used to obtain physical information about the observable from the wavefunction. It is a general principle of quantum mechanics that there is an operator for every physical observable. For an observable that is represented in classical physics by a function $Q(x,p)$, the corresponding operator is $Q(\hat{x},\hat{p})$. Postulate II: The Correspondence Principle For every observable property of a system there is a corresponding quantum mechanical operator. This is often referred to as the Correspondence Principle. Classical dynamical variables, such as $x$ and $p$, are represented in quantum mechanics by linear operators which act on the wavefunction.
The operator for position of a particle in three dimensions is just the set of coordinates $x$, $y$, and $z$, which is written as a vector, $r$: \begin{align} \vec{r} &= (x , y , z ) \\[4pt] &= x \vec {i} + y \vec {j} + z \vec {k} \label {4.2.1} \end{align} The operator for a component of linear momentum is $\hat {p} _x = -i \hbar \dfrac {\partial}{\partial x} \label {4.2.2}$ and the operator for kinetic energy in one dimension is $\hat {T} _x = \left ( -\dfrac {\hbar ^2}{2m} \right ) \dfrac {\partial ^2}{\partial x^2} \label {4.2.3}$ and in three dimensions $\hat {p} = -i \hbar \nabla \label {4.2.4}$ and $\hat {T} = \left ( -\dfrac {\hbar ^2}{2m} \right ) \nabla ^2 \label {4.2.5}$ The total energy operator is called the Hamiltonian operator, $\hat{H}$, and consists of the kinetic energy operator plus the potential energy operator. $\hat {H} = - \frac {\hbar ^2}{2m} \nabla ^2 + \hat {V} (x, y , z ) \label{3-22}$ The Hamiltonian Operator The Hamiltonian operator is named after the Irish mathematician William Hamilton and comes from his formulation of Classical Mechanics, which is based on the total energy: $\hat{H} = \hat{T} + \hat{V} \nonumber$ rather than Newton's second law, $\vec{F} = m\vec{a} \nonumber$ In many cases only the kinetic energy of the particles and the electrostatic or Coulomb potential energy due to their charges are considered, but in general all terms that contribute to the energy appear in the Hamiltonian. These additional terms account for such things as external electric and magnetic fields and magnetic interactions due to magnetic moments of the particles and their motion.
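As a concrete check of the Hamiltonian acting as a derivative operator, the sketch below (illustrative; the 1 nm box length and the evaluation point are arbitrary assumptions) applies $\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}$ (with $V = 0$ inside the box) via a second finite difference to a particle-in-a-box eigenfunction and recovers the 1-D box energy $E_n = n^2h^2/8mL^2$:

```python
import math

HBAR = 1.054571817e-34  # J s
M_E = 9.1093837015e-31  # kg (electron)
L = 1e-9                # arbitrary 1 nm box

def psi(n, x):
    """Normalized particle-in-a-box eigenfunction inside the box."""
    return math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

def H_hat(f, x, h=1e-13):
    """Kinetic-energy operator (V = 0) via a second central difference."""
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return -HBAR**2 / (2 * M_E) * d2

n, x0 = 2, 0.31e-9  # arbitrary state and interior point
ratio = H_hat(lambda x: psi(n, x), x0) / psi(n, x0)
E_n = n**2 * math.pi**2 * HBAR**2 / (2 * M_E * L**2)  # = n^2 h^2 / 8mL^2
print(ratio / E_n)  # ≈ 1
```

The ratio $(\hat{H}\psi)/\psi$ is the same constant $E_n$ at every interior point, which is exactly what it means for $\psi_n$ to be an eigenfunction of the Hamiltonian.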
Table 4.2.1 : Some common Operators in Quantum Mechanics

| Name | Observable Symbol | Operator Symbol | Operation |
|---|---|---|---|
| Position (in 1D) | $x$ | $\hat{X}$ | Multiply by $x$ |
| Position (in 3D) | $\vec{r}$ | $\hat{R}$ | Multiply by $\vec{r}$ |
| Momentum (in 1D) | $p_{x}$ | $\hat{P_{x}}$ | $-i \hbar \dfrac{d}{dx}$ |
| Momentum (in 3D) | $\vec{p}$ | $\hat{P}$ | $-i \hbar \left[ \hat{i}\, \dfrac{d}{dx} + \hat{j}\, \dfrac{d}{dy} + \hat{k}\, \dfrac{d}{dz}\right]$ |
| Kinetic Energy (in 1D) | $T_{x}$ | $\hat{T_{x}}$ | $\dfrac{-\hbar^2}{2m} \dfrac{d^2}{dx^2}$ |
| Kinetic Energy (in 3D) | $T$ | $\hat{T}$ | $\dfrac{-\hbar^2}{2m} \left[\dfrac{d^{2}}{dx^{2}} + \dfrac{d^2}{dy^2} + \dfrac{d^2}{dz^2} \right] = \dfrac{- \hbar^2}{2m}\nabla^{2}$ |
| Potential Energy (in 1D) | $V(x)$ | $\hat{V}(x)$ | Multiply by $V(x)$ |
| Potential Energy (in 3D) | $V(x,y,z)$ | $\hat{V}(x,y,z)$ | Multiply by $V(x,y,z)$ |
| Total Energy | $E$ | $\hat{H}$ | $\dfrac{- \hbar^{2}}{2m} \nabla^2 + V(x,y,z)$ |
| Angular Momentum (x axis component) | $L_{x}$ | $\hat{L_{x}}$ | $-i \hbar \left[ y \dfrac{d}{dz} - z \dfrac{d}{dy}\right]$ |
| Angular Momentum (y axis component) | $L_{y}$ | $\hat{L_{y}}$ | $-i \hbar \left[ z \dfrac{d}{dx} - x \dfrac{d}{dz}\right]$ |
| Angular Momentum (z axis component) | $L_{z}$ | $\hat{L_{z}}$ | $-i \hbar \left[ x \dfrac{d}{dy} - y \dfrac{d}{dx}\right]$ |
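The angular-momentum entries in the table above can also be exercised numerically. In the sketch below (illustrative; the test function $f = x + iy$ is a hypothetical choice, proportional to the spherical harmonic $Y_1^1$ up to a radial factor), a finite-difference $\hat{L}_z$ returns the eigenvalue $+\hbar$:

```python
HBAR = 1.054571817e-34  # J s

def L_z(f, x, y, z, h=1e-6):
    """Angular momentum z-component -i*hbar (x d/dy - y d/dx),
    with the partial derivatives taken by central differences."""
    dfdy = (f(x, y + h, z) - f(x, y - h, z)) / (2 * h)
    dfdx = (f(x + h, y, z) - f(x - h, y, z)) / (2 * h)
    return -1j * HBAR * (x * dfdy - y * dfdx)

# f = x + iy is an eigenfunction of L_z with eigenvalue +hbar.
f = lambda x, y, z: x + 1j * y
pt = (0.4, -1.1, 0.7)  # arbitrary test point
print(L_z(f, *pt) / f(*pt))  # ≈ hbar
```

The same ratio comes out at any point, as it must for an eigenfunction; repeating the experiment with $f = x - iy$ would give $-\hbar$.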
Learning Objectives • To be introduced to the role of eigenvalue equations in obtaining observables from a system • Understand how expectation values are calculated if the wavefunction is not an eigenstate of the operator for the observable. Recall that we can identify the total energy operator, which is called the Hamiltonian operator, $\hat{H}$, as consisting of the kinetic energy operator plus the potential energy operator. $\hat {H} = - \dfrac {\hbar ^2}{2m} \nabla ^2 + \hat {V} (x, y , z ) \label{3-22}$ Using this notation, we write the Schrödinger Equation as $\hat {H} | \psi (x , y , z ) \rangle = E | \psi ( x , y , z ) \rangle \label{3-23}$ Equation $\ref{3-23}$ says that the Hamiltonian operator operates on the wavefunction to produce the energy, which is a number (a quantity of joules), times the wavefunction. Such an equation, where an operator acting on a function produces a constant times the function, is called an eigenvalue equation. The function is called an eigenfunction, and the resulting numerical value is called the eigenvalue. Eigen here is the German word meaning self or own. It is a general principle of Quantum Mechanics that there is an operator for every physical observable. A physical observable is anything that can be measured. If the wavefunction that describes a system is an eigenfunction of an operator, then the value of the associated observable is extracted from the eigenfunction by operating on the eigenfunction with the appropriate operator. The value of the observable for the system is the eigenvalue, and the system is said to be in an eigenstate. Equation $\ref{3-23}$ states this principle mathematically for the case of energy as the observable.
Postulate III: Obtaining Observables Requires Solving Eigenvalue Problems If a system is described by the eigenfunction $\Psi$ of an operator $\hat{A}$ then the value measured for the observable property corresponding to $\hat{A}$ will always be the eigenvalue $a$, which can be calculated from the eigenvalue equation. $\hat {A} | \Psi \rangle = a | \Psi \rangle \label {4.3.1}$ Consider a general real-space operator $A(x)$. When this operator acts on a general wavefunction $\psi(x)$ the result is usually a wavefunction with a completely different shape. However, there are certain special wavefunctions which are such that when $A$ acts on them the result is just a multiple of the original wavefunction. These special wavefunctions are called eigenstates, and the multiples are called eigenvalues. Thus, if $A | \psi_a(x) \rangle = a | \psi_a(x) \rangle \label{4.3.2}$ where $a$ is a complex number, then $\psi_a$ is called an eigenstate of $A$ corresponding to the eigenvalue $a$. Suppose that $A$ is an operator corresponding to some physical dynamical variable. Consider a particle whose wavefunction is $\psi_a$. The expectation value of $A$ in this state is simply \begin{align*} \langle A\rangle &= \int_{-\infty}^\infty \psi_a^{\ast} A \psi_a dx \[4pt] &= a \int_{-\infty}^\infty \psi_a^{\ast} \psi_a dx \[4pt] &= a. \end{align*} \nonumber where use has been made of Equation $\ref{4.3.2}$ and the normalization condition. Moreover, \begin{align*} \langle A^2\rangle &= \int_{-\infty}^\infty \psi_a^{\ast} A^2 \psi_a dx \[4pt] &= a \int_{-\infty}^\infty \psi_a^{\ast} A \psi_a dx \[4pt] &= a^2 \int_{-\infty}^\infty \psi_a^{\ast} \psi_a dx \[4pt] &= a^2. \end{align*} \nonumber So the variance of $A$ is \begin{align*} \sigma_A^{ 2} &= \langle A^2\rangle - \langle A\rangle^2 = a^2-a^2 \[4pt] &= 0. \end{align*} \nonumber The fact that the variance is zero implies that every measurement of $A$ is bound to yield the same result: namely, $a$.
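The zero-variance conclusion can be verified for a concrete eigenstate. Here is a sympy sketch using the particle-in-a-box ground state (the box model, with $V = 0$ inside the box, is our illustrative choice, not part of this derivation):

```python
import sympy as sp

x, L, hbar, m = sp.symbols("x L hbar m", positive=True)

# Particle-in-a-box ground state: an eigenstate of the Hamiltonian
psi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)

def H(f):
    """Hamiltonian inside the box: pure kinetic energy operator (V = 0)."""
    return -(hbar**2) / (2 * m) * sp.diff(f, x, 2)

# <H> and <H^2> evaluated over the box, 0 <= x <= L
expH = sp.integrate(psi * H(psi), (x, 0, L))
expH2 = sp.integrate(psi * H(H(psi)), (x, 0, L))

# Known ground-state energy of the box
E1 = sp.pi**2 * hbar**2 / (2 * m * L**2)

# Variance of the energy: zero, because psi is an eigenstate of H
variance = sp.simplify(expH2 - expH**2)
```

As the derivation above predicts, $\langle H \rangle = E_1$, $\langle H^2 \rangle = E_1^2$, and the variance vanishes identically.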
Thus, the eigenstate $\psi_a$ is a state which is associated with a unique value of the dynamical variable corresponding to $A$. This unique value is simply the associated eigenvalue determined by Equation $\ref{4.3.2}$. Expectation Values We have seen that $\vert\psi(x,t)\vert^{ 2}$ is the probability density of a measurement of a particle's displacement yielding the value $x$ at time $t$. Suppose that we made a large number of independent measurements of the displacement on an equally large number of identical quantum systems. In general, measurements made on different systems will yield different results. However, from the definition of probability, the mean of all these results is simply $\langle x\rangle = \int_{-\infty}^{\infty} x \vert\psi\vert^{ 2} dx \label{ 4.3.5}$ Here, $\langle x\rangle$ is called the expectation value of $x$. Similarly the expectation value of any function of $x$ is $\langle f(x)\rangle = \int_{-\infty}^{\infty} f(x) \vert\psi\vert^{ 2} dx.\label{ 4.3.6}$ Postulate IV The average value of an observable measured on a state described by the (normalized) wavefunction $\psi$ with operator $\hat{A}$ is given by the expectation value $\langle a \rangle$: \begin{align} \langle a \rangle &= \langle \psi | \hat{A} |\psi \rangle \[4pt] &= \int_{-\infty}^{\infty} \psi^* \hat{A} \psi dx \label{4.3.7} \end{align} If an unnormalized wavefunction were used, then Equation $\ref{4.3.7}$ changes to \begin{align} \langle a \rangle &= \dfrac{\langle \psi | \hat{A} |\psi \rangle}{\langle \psi | \psi \rangle} \[4pt] &=\dfrac{ \displaystyle \int_{-\infty}^{\infty} \psi^* \hat{A} \psi dx}{ \displaystyle \int_{-\infty}^{\infty} \psi^* \psi dx} \label{4.3.8} \end{align} The denominator is just the normalization requirement discussed earlier. In general, the results of the various different measurements of $x$ will be scattered around the expectation value $\langle x\rangle$.
The degree of scatter is parameterized by the quantity \begin{align} \sigma^2_x &= \int_{-\infty}^{\infty} \left(x-\langle x\rangle \right)^2 |\psi|^{ 2} dx \[4pt] &\equiv \langle x^2\rangle -\langle x\rangle^{2}, \label{4.3.9} \end{align} which is known as the variance of $x$. The square-root of this quantity, $\sigma_x$, is called the standard deviation of $x$. We generally expect the results of measurements of $x$ to lie within a few standard deviations of the expectation value (Figure 4.3.1 ). Example 4.3.1 For a particle in a box in its ground state, calculate the expectation value of the 1. position, 2. the linear momentum, 3. the kinetic energy, and 4. the total energy Solution First the wavefunction needs to be defined. From the particle in the box solutions, the ground state wavefunction ($n=1$) is $\psi = \sqrt{\dfrac{2}{L}} \sin \left ( \dfrac{\pi x}{L} \right ) \nonumber$ We can confirm that the wavefunction is normalized. $\int \psi^* \psi \, d\tau = \int_{0}^{L} \sqrt{\dfrac{2}{L}} \sin \left ( \dfrac{\pi x}{L} \right ) \sqrt{\dfrac{2}{L}} \sin \left ( \dfrac{\pi x}{L} \right ) \, dx = 1 \nonumber$ Hence, Equation $\ref{4.3.7}$ is the relevant equation to use.
The expectation value of the position is: \begin{align*} \left \langle x \right \rangle &= \int \psi^* x \psi \, d\tau = \int_{0}^{L} \sqrt{\dfrac{2}{L}} x \sin \left ( \dfrac{\pi x}{L} \right ) \sqrt{\dfrac{2}{L}} \sin \left ( \dfrac{\pi x}{L} \right ) \, dx \[4pt] &=\dfrac{2}{L} \int_{0}^{L} x \sin^2 \left ( \dfrac{\pi x}{L} \right ) \, dx \[4pt] &= \dfrac{L}{2} \end{align*} \nonumber The expectation value of the momentum is: \begin{align*} \left \langle p \right \rangle &= \int \psi^* \hat{p} \psi \, d\tau =\int_{0}^{L} \sqrt{\dfrac{2}{L}} \sin \left ( \dfrac{\pi x}{L} \right ) \left ( -i\hbar \dfrac{d}{dx} \right ) \sqrt{\dfrac{2}{L}} \sin \left ( \dfrac{\pi x}{L} \right ) \, dx \[4pt] &= -\dfrac{2i\hbar\pi}{L^2} \int_{0}^{L} \sin \left ( \dfrac{\pi x}{L} \right ) \cos \left ( \dfrac{\pi x}{L} \right ) \, dx \[4pt] &= 0 \end{align*} \nonumber The expectation value of the kinetic energy is: \begin{align*} \left \langle T \right \rangle &= \int \psi^* \hat{T} \psi \, d\tau = \dfrac{2}{L} \int_{0}^{L} \sin \left ( \dfrac{\pi x}{L} \right ) \left ( -\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} \right ) \sin \left ( \dfrac{\pi x}{L} \right ) \, dx \[4pt] &= \dfrac{\hbar^2 \pi^2}{2mL^2} \dfrac{2}{L} \int_{0}^{L} \sin^2 \left ( \dfrac{\pi x}{L} \right ) \, dx \[4pt] &= \dfrac{\hbar^2 \pi^2}{2mL^2} \end{align*} \nonumber The position "on average" is in the middle of the box ($L/2$). The particle has equal probability of traveling towards the left or right, so the average momentum (and velocity) must be zero. The average kinetic energy must equal the total energy of the ground state of the particle in the box, as there is no other energy component (i.e., $V=0$). Expanding the Wavefunction It is also possible to demonstrate that the eigenstates of an operator attributed to an observable form a complete set (i.e., that any general wavefunction can be written as a linear combination of these eigenstates).
However, the proof is quite difficult, and we shall not attempt it here. In summary, given an operator $\hat{A}$, any general wavefunction, $\psi(x)$, can be written $\psi = \sum_{i}c_i \phi_i\label{4.3.9A}$ where the $c_i$ are complex weights, and the $\phi_i(x)$ are the properly normalized (and mutually orthogonal) eigenstates of $\hat{A}$: i.e., $A \phi_i = a_i \phi_i \label{4.3.10}$ where $a_i$ is the eigenvalue corresponding to the eigenstate $\phi_i$, and $\int_{-\infty}^\infty \phi_i^\ast \phi_j dx = \delta_{ij}. \label{4.3.11}$ Here, $\delta_{ij}$ is called the Kronecker delta-function, and takes the value unity when its two indices are equal, and zero otherwise. It follows from Equations $\ref{4.3.9A}$ and $\ref{4.3.11}$ that $c_i = \int_{-\infty}^\infty \phi_i^\ast \psi dx. \label{4.3.12}$ Thus, the expansion coefficients in Equation $\ref{4.3.9A}$ are easily determined via Equation $\ref{4.3.12}$, given the wavefunction $\psi$ and the eigenstates $\phi_i$. Moreover, if $\psi$ is a properly normalized wavefunction then Equations $\ref{4.3.9A}$ and $\ref{4.3.11}$ yield $\sum_i \vert c_i\vert^2 =1. \label{4.3.13}$ Collapsing the Wavefunction Wavefunction collapse is said to occur when a wavefunction—initially in a superposition of several eigenstates—appears to reduce to a single eigenstate (by "observation"). A particle (or a system in general) can be found in a given state $\psi(x,t)$. Suppose now a measurement is performed on the wavefunction to characterize a specific property of the system. Mathematically, an operator $\hat{A}$ is associated with this measurement process, which we suppose has a complete orthonormal set of eigenstates $\{ \phi_i \}$, typically an infinite set of functions labeled by a quantum number $n$. The wavefunction $\Psi$ can be expanded in this basis; once the basis functions are selected, the wavefunction is completely specified by the coefficients $\{c_n\}$ of the expansion.
Therefore, if the system is perturbed, then the wavefunction will have another set of coefficients $\{c'_n\}$. If your wavefunction is an eigenstate of the operator, then each measurement via that operator will give the same result.
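The recipe of Equations 4.3.12 and 4.3.13 is easy to exercise numerically. In this sketch (numpy, with a hypothetical 60/40 superposition of the two lowest particle-in-a-box states as the sample wavefunction; both choices are ours, not the text's), the coefficients are recovered by direct integration:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 100001)

def integ(f):
    """Trapezoid-rule integral of samples f over the grid x."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def phi(n):
    """Normalized particle-in-a-box eigenstate n."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Sample wavefunction: a 60/40 mixture of the two lowest box states
psi = np.sqrt(0.6) * phi(1) + np.sqrt(0.4) * phi(2)

# c_i = integral of phi_i* psi  (Equation 4.3.12)
c = np.array([integ(phi(n) * psi) for n in range(1, 6)])

# For a normalized psi the weights sum to one (Equation 4.3.13)
total = np.sum(np.abs(c)**2)
```

The integration recovers $c_1 \approx \sqrt{0.6}$, $c_2 \approx \sqrt{0.4}$, essentially zero for the higher states, and $\sum_i |c_i|^2 \approx 1$.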
Learning Objectives • Recognize the differences between the time-dependent and the time-independent Schrödinger equations • To distinguish between stationary and non-stationary wavefunctions There are two "flavors" of Schrödinger equations: the time-dependent and the time-independent versions. The time-dependent Schrödinger equation predicts that wavefunctions can form standing waves (called stationary states) which, once classified and understood, make it easier to solve the time-dependent Schrödinger equation for any state. Stationary states can also be described by the time-independent Schrödinger equation (used only when the Hamiltonian is not explicitly time dependent). However, it should be noted that the solutions to the time-independent Schrödinger equation still have time dependencies. Time-Dependent Wavefunctions Recall that the time-independent Schrödinger equation $\hat{H}\psi (x)=E\psi (x) \label{4.4.1}$ yields the allowed energies and corresponding wavefunctions. However, it does not tell us how the system evolves in time. It would seem that something is missing, since, after all, classical mechanics tells us how the positions and velocities of a classical system evolve in time. The time dependence is given by solving Newton's second law $m\dfrac{d^2 x}{dt^2}=F(x) \label{4.4.2}$ But where is $t$ in quantum mechanics? First of all, what is it that must evolve in time? The answer is that the wavefunction (and associated probability density) must evolve. Suppose, therefore, that we prepare a system at $t=0$ according to a particular probability density $p(x,0)$ related to an amplitude $\Psi (x,0)$ by $p(x,0) =|\Psi (x,0)|^2 \label{4.4.3}$ How will this initial amplitude $\Psi (x,0)$ look at time $t$ later? Note, by the way, that $\Psi (x,0)$ does not necessarily need to be one of the eigenstates $\psi_n (x)$.
To address this, we refer to the time-dependent Schrödinger equation that tells us how $\Psi (x,t)$ will evolve starting from the initial condition $\Psi (x,0)$: $\hat{H}\Psi (x,t)=i\hbar\dfrac{\partial}{\partial t}\Psi (x,t) \label{4.4.4}$ It is important to know how it works physically and when it is sufficient to work with the time-independent version of the Schrödinger equation (Equation \ref{4.4.1}). Postulate V The time dependence of wavefunctions is governed by the Time-Dependent Schrödinger Equation (Equation $\ref{4.4.4}$). Stationary States Suppose that we are lucky enough to choose $\Psi (x,0) =\psi_n (x) \nonumber$ with corresponding probability density $p(x,0) =|\psi_n (x)|^2 \label{4.4.5}$ We will show that $\Psi (x,t) =\psi_n (x)e^{-iE_n t/\hbar} \label{4.4.5A}$ From the time-dependent Schrödinger equation \begin{align*}\dfrac{d\Psi}{dt} &= \psi_n (x) \left ( \dfrac{-iE_n}{\hbar} \right ) e^{-iE_n t/\hbar} \[4pt] i\hbar \dfrac{d\Psi }{dt} &= E_n \psi_n (x) e^{-iE_n t/\hbar} \end{align*} \label{4.4.6} Similarly \begin{align*} \hat{H}\Psi (x,t) &=e^{-iE_n t/\hbar}\hat{H}\psi_n (x) \[4pt] &=e^{-iE_n t/\hbar}E_n \psi_n (x) \label{4.4.7} \end{align*} Hence $\psi_n (x)e^{-iE_n t/\hbar}$ satisfies the Time-Dependent Schrödinger Equation (Equation $\ref{4.4.4}$). Consider the probability density for this wavefunction: $p(x,t)=|\Psi (x,t)|^2$ \begin{align*}p(x,t) &= \left [ \psi_n^* (x) e^{iE_n t/\hbar} \right ] \left [ \psi_n (x)e^{-iE_n t/\hbar} \right ] \[4pt] &= |\psi_{n}(x)|^2 e^{iE_n t/\hbar}e^{-iE_n t/\hbar}\[4pt] &= |\psi_n (x)|^2 =p(x,0)\end{align*} \label{4.4.8} The probability density does not change in time, and for this reason $\psi_n (x)$ is called a stationary state. In such a state, the energy remains fixed at the well-defined value $E_n$.
Nonstationary States Suppose, however, that we had chosen $\Psi (x,0)$ to be some arbitrary linear combination of the two lowest energy states: $\Psi (x,0) =a\psi_1 (x)+b\psi_2 (x) \label{4.4.9}$ for example $\Psi (x,0) =\dfrac{1}{\sqrt{2}}[\psi_1 (x)+\psi_2 (x)] \label{4.4.10}$ Then, the probability density at time $t$ $p(x,t) = |\Psi (x,t)|^2 \neq p(x,0) \label{4.4.11}$ For such a mixture to be possible, there must be sufficient energy in the system that there is some probability of measuring the particle to be in its excited state. Finally, suppose we start with a state $\Psi (x,0)=\dfrac{1}{\sqrt{2}} [\psi_1 (x) + \psi_2 (x)] \nonumber$ and we let this state evolve in time. At any point in time, the state $\Psi (x,t)$ will be some mixture of $\psi_1 (x)$ and $\psi_2 (x)$, and this mixture changes with time. Now, at some specific instant in time $t$, we measure the energy and obtain a value $E_1$. What is the state of the system just after the measurement is made? Once we make the measurement, then we know with 100% certainty that the energy is $E_1$. From the above discussion, there is only one possibility for the state of the system, and that has to be the wavefunction $\psi_1 (x)$, since in this state we know with 100% certainty that the energy is $E_1$. Hence, just after the measurement, the state must be $\psi_1 (x)$, which means that because of the measurement, any further dependence on $\psi_2 (x)$ drops out, and for all time thereafter, there is no dependence on $\psi_2 (x)$. Consequently, any subsequent measurement of the energy would yield the value $E_1$ with 100% certainty. This discontinuous change in the quantum state of the system as a result of the measurement is known as the collapse of the wavefunction. The idea that the evolution of a system can change as a result of a measurement is one of the topics that is currently debated among quantum theorists.
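The contrast between stationary and nonstationary states is easy to see numerically. This sketch (numpy, working in units where $\hbar = m = L = 1$, with particle-in-a-box states as the illustrative basis; all of these choices are ours, not the text's) evolves a single eigenstate and a 50/50 superposition and compares their probability densities at two times:

```python
import numpy as np

L, hbar, m = 1.0, 1.0, 1.0
x = np.linspace(0.0, L, 2001)

def phi(n):
    """Normalized particle-in-a-box eigenstate n."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):
    """Box energy levels E_n = n^2 pi^2 hbar^2 / (2 m L^2)."""
    return n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)

def Psi_stationary(t):
    # Equation 4.4.5A: an eigenstate just acquires a phase factor
    return phi(1) * np.exp(-1j * E(1) * t / hbar)

def Psi_mixed(t):
    # Equation 4.4.10 evolved in time: each component gets its own phase
    return (phi(1) * np.exp(-1j * E(1) * t / hbar)
            + phi(2) * np.exp(-1j * E(2) * t / hbar)) / np.sqrt(2.0)

# The stationary state's density never changes...
drift_stationary = np.max(np.abs(np.abs(Psi_stationary(0.3))**2
                                 - np.abs(Psi_stationary(0.0))**2))
# ...while the superposition's density sloshes back and forth in the box
drift_mixed = np.max(np.abs(np.abs(Psi_mixed(0.3))**2
                            - np.abs(Psi_mixed(0.0))**2))
```

The phase factor cancels in $|\Psi|^2$ for the eigenstate, but the cross term $\psi_1\psi_2\cos[(E_2-E_1)t/\hbar]$ makes the superposition's density genuinely time dependent.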
The Quantum Observer Effect The fact that measuring a quantum system changes its time evolution means that the experimenter is now coupled to the quantum system. This observer effect means that the act of observing will influence the phenomenon being observed. In classical mechanics, this coupling does not exist. A classical system will evolve according to Newton's laws of motion independent of whether or not we observe it. This is not true for quantum systems. The very act of observing the system changes how it evolves in time. Put another way, by simply observing a system, we change it!
Learning Objectives • Understand the properties of a Hermitian operator and their associated eigenstates • Recognize that all experimental observables are obtained by Hermitian operators Consideration of the quantum mechanical description of the particle-in-a-box exposed two important properties of quantum mechanical systems. We saw that the eigenfunctions of the Hamiltonian operator are orthogonal, and we also saw that the position and momentum of the particle could not be determined exactly. We now examine the generality of these insights by stating and proving some fundamental theorems. These theorems use the Hermitian property of quantum mechanical operators that correspond to observables, which is discussed first. Hermitian Operators Since the eigenvalues of a quantum mechanical operator correspond to measurable quantities, the eigenvalues must be real, and consequently a quantum mechanical operator must be Hermitian. To prove this, we start with the premises that $ψ$ and $φ$ are functions, $\int d\tau$ represents integration over all coordinates, and the operator $\hat {A}$ is Hermitian by definition if $\int \psi ^* \hat {A} \psi \,d\tau = \int (\hat {A} ^* \psi ^* ) \psi \,d\tau \label {4-37}$ This equation means that the complex conjugate of $\hat {A}$ can operate on $ψ^*$ to produce the same result after integration as $\hat {A}$ operating on $ψ$, followed by integration. To prove that a quantum mechanical operator $\hat {A}$ is Hermitian, consider the eigenvalue equation and its complex conjugate. $\hat {A} \psi = a \psi \label {4-38}$ $\hat {A}^* \psi ^* = a^* \psi ^* = a \psi ^* \label {4-39}$ Note that $a^* = a$ because the eigenvalue is real. Multiply Equation $\ref{4-38}$ and $\ref{4-39}$ from the left by $ψ^*$ and $ψ$, respectively, and integrate over the full range of all the coordinates. Note that $ψ$ is normalized.
The results are $\int \psi ^* \hat {A} \psi \,d\tau = a \int \psi ^* \psi \,d\tau = a \label {4-40}$ $\int \psi \hat {A}^* \psi ^* \,d \tau = a \int \psi \psi ^* \,d\tau = a \label {4-41}$ Since both integrals equal $a$, they must be equivalent. $\int \psi ^* \hat {A} \psi \,d\tau = \int \psi \hat {A}^* \psi ^* \,d\tau \label {4-42}$ The operator $\hat {A}^*$ acting on $\psi^*$ produces a new function, and since functions commute under multiplication, Equation $\ref{4-42}$ can be rewritten as $\int \psi ^* \hat {A} \psi d\tau = \int (\hat {A}^*\psi ^*) \psi d\tau \label{4-43}$ This equality means that $\hat {A}$ is Hermitian. Orthogonality Theorem Eigenfunctions of a Hermitian operator are orthogonal if they have different eigenvalues. Because of this theorem, we can identify orthogonal functions easily without having to integrate or conduct an analysis based on symmetry or other considerations. Proof $ψ$ and $φ$ are two eigenfunctions of the operator $\hat {A}$ with real eigenvalues $a_1$ and $a_2$, respectively. Since the eigenvalues are real, $a_1^* = a_1$ and $a_2^* = a_2$. $\hat {A} \psi = a_1 \psi \nonumber$ $\hat {A}^* φ ^* = a_2 φ ^* \nonumber$ Multiply the first equation by $φ^*$ and the second by $ψ$ and integrate. $\int φ ^* \hat {A} \psi \,d\tau = a_1 \int φ ^* \psi \,d\tau \nonumber$ $\int \psi \hat {A}^* φ ^* \,d\tau = a_2 \int \psi φ ^* \,d\tau \label {4-45}$ Subtract the two equations in Equation \ref{4-45} to obtain $\int φ ^*\hat {A} \psi \,d\tau - \int \psi \hat {A} ^* φ ^* \,d\tau = (a_1 - a_2) \int φ ^* \psi \,d\tau \label {4-46}$ The left-hand side of Equation \ref{4-46} is zero because $\hat {A}$ is Hermitian, yielding $0 = (a_1 - a_2 ) \int φ ^* \psi \, d\tau \label {4-47}$ If $a_1$ and $a_2$ in Equation \ref{4-47} are not equal, then the integral must be zero. This result proves that nondegenerate eigenfunctions of the same operator are orthogonal.
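The Hermitian property just used can be checked explicitly for the momentum operator. A sympy sketch (our choices: sympy itself, and two box functions that vanish at the boundaries so that the boundary terms from integration by parts drop out):

```python
import sympy as sp

x, L, hbar = sp.symbols("x L hbar", positive=True)

# Two functions that vanish at the walls x = 0 and x = L
psi = sp.sin(sp.pi * x / L)
phi = sp.sin(2 * sp.pi * x / L)

def p_op(f):
    """1D momentum operator: -i*hbar*d/dx."""
    return -sp.I * hbar * sp.diff(f, x)

# Hermiticity demands these two integrals be equal
lhs = sp.integrate(sp.conjugate(psi) * p_op(phi), (x, 0, L))
rhs = sp.integrate(sp.conjugate(p_op(psi)) * phi, (x, 0, L))

difference = sp.simplify(lhs - rhs)   # 0, as the Hermitian property requires
```

Note that the equality holds only because the boundary terms vanish; Hermiticity of a differential operator always depends on the function space as well as the operator itself.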
$\square$ Two wavefunctions, $\psi_1(x)$ and $\psi_2(x)$, are said to be orthogonal if $\int_{-\infty}^{\infty}\psi_1^\ast \psi_2 \,dx = 0. \label{4.5.1}$ Consider two eigenstates of $\hat{A}$, $\psi_a(x)$ and $\psi_{a'}(x)$, which correspond to the two different eigenvalues $a$ and $a'$, respectively. Thus, $A\psi_a = a \psi_a \label{4.5.2}$ $A\psi_{a'} = a' \psi_{a'} \label{4.5.3}$ Multiplying the complex conjugate of the first equation by $\psi_{a'}(x)$, and the second equation by $\psi^*_{a}(x)$, and then integrating over all $x$, we obtain $\int_{-\infty}^\infty (A \psi_a)^\ast \psi_{a'} dx = a \int_{-\infty}^\infty\psi_a^\ast \psi_{a'} dx, \label{ 4.5.4}$ $\int_{-\infty}^\infty \psi_a^\ast (A \psi_{a'}) dx = a' \int_{-\infty}^{\infty}\psi_a^\ast \psi_{a'} dx. \label{4.5.5}$ However, because $\hat{A}$ is Hermitian (Equation $\ref{4-43}$), the left-hand sides of the above two equations are equal. Hence, we can write $(a-a') \int_{-\infty}^\infty\psi_a^\ast \psi_{a'} dx = 0. \nonumber$ By assumption, $a \neq a'$, yielding $\int_{-\infty}^\infty\psi_a^\ast \psi_{a'} dx = 0. \nonumber$ In other words, eigenstates of a Hermitian operator corresponding to different eigenvalues are automatically orthogonal. The eigenvalues of operators associated with experimental measurements are all real. Example 4.5.1 Draw graphs and use them to show that the particle-in-a-box wavefunctions for $\psi(n = 2)$ and $\psi(n = 3)$ are orthogonal to each other. Solution The two PIB wavefunctions are qualitatively similar when plotted. These wavefunctions are orthogonal when $\int_{-\infty}^{\infty} \psi(n=2) \psi(n=3) dx =0 \nonumber$ and when the PIB wavefunctions are substituted this integral becomes \begin{align*} \int_0^L \sqrt{\dfrac{2}{L}} \sin \left( \dfrac{2 \pi x}{L} \right) \sqrt{\dfrac{2}{L}} \sin \left( \dfrac{3 \pi x}{L} \right) dx &= ? \[4pt] \dfrac{2}{L} \int_0^L \sin \left( \dfrac{2 \pi x}{L} \right) \sin \left( \dfrac{3 \pi x}{L} \right) dx &= ?
\end{align*} \nonumber We can expand the integrand using trigonometric identities to help solve the integral, but it is easier to take advantage of the symmetry of the integrand: with respect to the center of the box, the $\psi(n=2)$ wavefunction is odd (blue curve in the above figure) and the $\psi(n=3)$ wavefunction is even (purple curve). Their product (odd times even) is an odd function about the center, and the integral of an odd function over a symmetric interval is zero. Therefore the $\psi(n=2)$ and $\psi(n=3)$ wavefunctions are orthogonal. This argument can be repeated for other pairs to confirm that the entire set of PIB wavefunctions is mutually orthogonal, as the Orthogonality Theorem guarantees. Orthogonality of Degenerate Eigenstates Consider two eigenstates of $\hat{A}$, $\psi_a$ and $\psi'_a$, which correspond to the same eigenvalue, $a$. Such eigenstates are termed degenerate. The above proof of the orthogonality of different eigenstates fails for degenerate eigenstates. Note, however, that any linear combination of $\psi_a$ and $\psi'_a$ is also an eigenstate of $\hat{A}$ corresponding to the eigenvalue $a$. Thus, even if $\psi_a$ and $\psi'_a$ are not orthogonal, we can always choose two linear combinations of these eigenstates which are orthogonal. For instance, if $\psi_a$ and $\psi'_a$ are properly normalized, we can define the overlap integral $S= \int_{-\infty}^\infty \psi_a^\ast \psi_a' dx ,\label{ 4.5.10}$ It is easily demonstrated (but not here) that $\psi_a'' = \frac{\vert S\vert}{\sqrt{1-\vert S\vert^2}}\left(\psi_a - S^{-1} \psi_a'\right) \label{4.5.11}$ is a properly normalized eigenstate of $\hat{A}$, corresponding to the eigenvalue $a$, which is orthogonal to $\psi_a$. It is straightforward to generalize the above argument to three or more degenerate eigenstates. Hence, we conclude that the eigenstates of a Hermitian operator are, or can be chosen to be, mutually orthogonal.
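The symmetry argument of Example 4.5.1 can also be confirmed by brute-force integration. A short sympy sketch (sympy is our choice; the box states come from the example):

```python
import sympy as sp

x, L = sp.symbols("x L", positive=True)

def pib(n):
    """Normalized particle-in-a-box eigenstate n."""
    return sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)

# Overlap of the n=2 and n=3 box states (Example 4.5.1): zero
overlap_23 = sp.integrate(pib(2) * pib(3), (x, 0, L))

# The theorem guarantees this for every nondegenerate pair...
overlap_12 = sp.integrate(pib(1) * pib(2), (x, 0, L))

# ...while each state remains normalized
norm_2 = sp.integrate(pib(2) * pib(2), (x, 0, L))
```

The same three-line pattern checks any pair of box states, so the mutual orthogonality of the whole set can be sampled as exhaustively as patience allows.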
Theorem: Gram-Schmidt Orthogonalization Degenerate eigenfunctions are not automatically orthogonal, but can be made so mathematically via the Gram-Schmidt Orthogonalization. The above theorem argues that if the eigenvalues of two eigenfunctions are the same, then the functions are said to be degenerate, and linear combinations of the degenerate functions can be formed that will be orthogonal to each other. Since the two eigenfunctions have the same eigenvalue, the linear combination also will be an eigenfunction with that eigenvalue. The proof of this theorem shows us one way to produce orthogonal degenerate functions. Proof If $\psi_a$ and $\psi'_a$ are degenerate, but not orthogonal, we can define a new composite wavefunction $\psi_a'' = \psi'_a - S\psi_a$ where $S$ is the overlap integral: $S= \langle \psi_a | \psi'_a \rangle \nonumber$ then $\psi_a$ and $\psi_a''$ will be orthogonal. \begin{align*} \langle \psi_a | \psi_a'' \rangle &= \langle \psi_a | \psi'_a - S\psi_a \rangle \[4pt] &= \cancelto{S}{\langle \psi_a | \psi'_a \rangle} - S \cancelto{1}{\langle \psi_a |\psi_a \rangle} \[4pt] &= S - S =0 \end{align*} \nonumber $\square$ Exercise 4.5.2 Find $N$ that normalizes $\psi$ if $\psi = N(φ_1 − Sφ_2)$ where $φ_1$ and $φ_2$ are normalized wavefunctions and $S$ is their overlap integral. $S= \langle φ_1 | φ_2 \rangle \nonumber$ Answer Remember that to normalize an arbitrary wavefunction, we find a constant $N$ such that $\langle \psi | \psi \rangle = 1$.
This equates to the following procedure: \begin{align*} \langle\psi | \psi\rangle =\left\langle N\left(φ_{1} - Sφ_{2}\right) | N\left(φ_{1} - Sφ_{2}\right)\right\rangle &= 1 \[4pt] N^2\left\langle \left(φ_{1} - Sφ_{2}\right) | \left(φ_{1}-Sφ_{2}\right)\right\rangle &=1 \[4pt] N^2 \left[ \cancelto{1}{\langle φ_{1}|φ_{1}\rangle} - S \cancelto{S}{\langle φ_{2}|φ_{1}\rangle} - S \cancelto{S}{\langle φ_{1}|φ_{2}\rangle} + S^2 \cancelto{1}{\langle φ_{2}| φ_{2}\rangle} \right] &= 1 \[4pt] N^2(1 - S^2 \cancel{-S^2} + \cancel{S^2})&=1 \[4pt] N^2(1-S^2) &= 1 \end{align*} \nonumber therefore $N = \dfrac{1}{\sqrt{1-S^2}} \nonumber$ We conclude that the eigenstates of operators are, or can be chosen to be, mutually orthogonal.
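Exercise 4.5.2's result, $N = 1/\sqrt{1-S^2}$, can be checked numerically. The sketch below uses two overlapping unit-width Gaussians as $φ_1$ and $φ_2$ (an illustrative choice of ours, not from the exercise):

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 240001)

def integ(f):
    """Trapezoid-rule integral of samples f over the grid x."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def gaussian(center):
    """Unit-width Gaussian centered at `center`, normalized numerically."""
    f = np.exp(-(x - center)**2 / 2.0)
    return f / np.sqrt(integ(f * f))

phi1, phi2 = gaussian(0.0), gaussian(1.0)

S = integ(phi1 * phi2)            # overlap integral <phi1|phi2>
N = 1.0 / np.sqrt(1.0 - S**2)     # normalization constant from the exercise

psi = N * (phi1 - S * phi2)
norm = integ(psi * psi)           # should come out to 1
orth = integ(phi2 * psi)          # and psi is orthogonal to phi2
```

For unit-width Gaussians a distance $d$ apart the overlap is $e^{-d^2/4}$, so here $S \approx 0.78$: a strongly non-orthogonal pair, yet the single constant $N$ still restores normalization exactly.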
Learning Objectives • To connect the Heisenberg Uncertainty principle to the commutation relations. • Develop proficiency in calculating the commutator of two operators. If two operators commute, then both quantities can be measured at the same time with infinite precision; if not, then there is a tradeoff in the accuracy of the measurement of one quantity vs. the other. This is the mathematical representation of the Heisenberg Uncertainty principle. Commuting Operators One important property of operators is that the order of operation matters. Thus, in general, $\hat{A}\{\hat{E}f(x)\} \not= \hat{E}\{\hat{A}f(x)\} \nonumber$ unless the two operators commute. Two operators commute if the following equation is true: $\left[\hat{A},\hat{E}\right] = \hat{A}\hat{E} - \hat{E}\hat{A} = 0 \nonumber$ To determine whether two operators commute, first operate with $\hat{A}\hat{E}$ on a function $f(x)$. Then operate with $\hat{E}\hat{A}$ on the same function $f(x)$. If the same answer is obtained, subtracting the two results will give zero and the two operators commute. If two operators commute, then they can have the same set of eigenfunctions. By definition, two operators $\hat {A}$ and $\hat {B}$ commute if the effect of applying $\hat {A}$ then $\hat {B}$ is the same as applying $\hat {B}$ then $\hat {A}$, i.e. $\hat {A}\hat {B} = \hat {B} \hat {A}. \nonumber$ For example, the operations brushing-your-teeth and combing-your-hair commute, while the operations getting-dressed and taking-a-shower do not. This theorem is very important. If two operators commute and consequently have the same set of eigenfunctions, then the corresponding physical quantities can be evaluated or measured exactly simultaneously with no limit on the uncertainty. As mentioned previously, the eigenvalues of the operators correspond to the measured values.
If $\hat {A}$ and $\hat {B}$ commute and $ψ$ is an eigenfunction of $\hat {B}$ with eigenvalue $b$, then $\hat {B} \hat {A} \psi = \hat {A} \hat {B} \psi = \hat {A} b \psi = b \hat {A} \psi \label {4-49}$ Equation $\ref{4-49}$ says that $\hat {A} \psi$ is an eigenfunction of $\hat {B}$ with eigenvalue $b$, which means that when $\hat {A}$ operates on $ψ$, it cannot change $ψ$. At most, $\hat {A}$ operating on $ψ$ can produce a constant times $ψ$. $\hat {A} \psi = a \psi \label {4-50}$ $\hat {B} (\hat {A} \psi ) = \hat {B} (a \psi ) = a \hat {B} \psi = ab\psi = b (a \psi ) \label {4-51}$ Equation $\ref{4-51}$ shows that Equation $\ref{4-50}$ is consistent with Equation $\ref{4-49}$. Consequently $ψ$ also is an eigenfunction of $\hat {A}$ with eigenvalue $a$. Example 4.6.1 Do the following pairs of operators commute? 1. $\hat{A} = \dfrac{d}{dx} \nonumber$ and $\hat{E} = x^2 \nonumber$ 2. $\hat{B}= \dfrac {h} {x} \nonumber$ and $\hat{C}\{f(x)\} = f(x) +3 \nonumber$ 3. $\hat{J} = 3x$ and $\hat{O} = x^{-1}$ Solution a This requires evaluating $\left[\hat{A},\hat{E}\right]$, which requires solving for $\hat{A} \{\hat{E} f(x)\}$ and $\hat{E} \{\hat{A} f(x)\}$ for an arbitrary wavefunction $f(x)$ and asking if they are equal. $\hat{A} \{\hat{E} f(x)\} = \hat{A}\{ x^2 f(x) \}= \dfrac{d}{dx} \{ x^2 f(x)\} = 2xf(x) + x^2 f'(x) \nonumber$ using the product rule of differentiation. $\hat{E} \{\hat{A}f(x)\} = \hat{E}\{f'(x)\} = x^2 f'(x) \nonumber$ Now ask if they are equal $\left[\hat{A},\hat{E}\right]f(x) = 2x f(x) + x^2 f'(x) - x^2f'(x) = 2x f(x) \not= 0 \nonumber$ Therefore the two operators do not commute. Solution b This requires evaluating $\left[\hat{B},\hat{C}\right]$ as in part a.
$\hat{B} \{\hat{C}f(x)\} = \hat{B}\{f(x) +3\} = \dfrac {h}{x} (f(x) +3) = \dfrac {h f(x)}{x} + \dfrac{3h}{x} \nonumber$ $\hat{C} \{\hat{B}f(x)\} = \hat{C} \left\{ \dfrac {h} {x} f(x)\right\} = \dfrac {h f(x)} {x} +3 \nonumber$ Now ask if they are equal $\left[\hat{B},\hat{C}\right]f(x) = \dfrac {h f(x)} {x} + \dfrac {3h} {x} - \dfrac {h f(x)} {x} -3 \not= 0\nonumber$ The two operators do not commute. Solution c This requires evaluating $\left[\hat{J},\hat{O}\right]$ $\hat{J} \{\hat{O}f(x) \} = \hat{J} \left\{ \dfrac{f(x)}{x} \right\} = 3x \, \dfrac{f(x)}{x} = 3f(x) \nonumber$ $\hat{O} \{\hat{J}f(x) \}= \hat{O} \{3x f(x)\} = \dfrac{3x f(x)}{x} = 3f(x) \nonumber$ $\left[\hat{J},\hat{O}\right]f(x) = 3f(x) - 3f(x) = 0 \nonumber$ Because the difference is zero, the two operators commute. General Heisenberg Uncertainty Principle Although it will not be proven here, there is a general statement of the uncertainty principle in terms of the commutation property of operators. If two operators $\hat {A}$ and $\hat {B}$ do not commute, then the uncertainties (standard deviations $σ$) in the physical quantities associated with these operators must satisfy $\sigma _A \sigma _B \ge \dfrac{1}{2} \left| \int \psi ^* [ \hat {A} \hat {B} - \hat {B} \hat {A} ] \psi \,d\tau \right| \label{4-52}$ where the integral inside the absolute-value bars is the expectation value of the commutator, and $|\,|$ signifies the modulus or absolute value. If $\hat {A}$ and $\hat {B}$ commute, then the right-hand-side of Equation $\ref{4-52}$ is zero, so either or both $σ_A$ and $σ_B$ could be zero, and there is no restriction on the uncertainties in the measurements of the eigenvalues $a$ and $b$. If $\hat {A}$ and $\hat {B}$ do not commute, then the right-hand-side of Equation $\ref{4-52}$ will not be zero, and neither $σ_A$ nor $σ_B$ can be zero unless the other is infinite. Consequently, $a$ and $b$ cannot both be eigenvalues of the same wavefunctions and cannot be measured simultaneously to arbitrary precision.
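The commutator worked out in Example 4.6.1a can be verified symbolically. A minimal sympy sketch (sympy and the undetermined symbolic function $f$ are our choices):

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")(x)   # an arbitrary test function f(x)

# A-hat = d/dx, E-hat = multiply by x^2  (Example 4.6.1a)
AE = sp.diff(x**2 * f, x)        # A{E f(x)} = 2x f + x^2 f'
EA = x**2 * sp.diff(f, x)        # E{A f(x)} = x^2 f'

# [A, E] f(x) = (AE - EA) f(x) = 2x f(x), so the operators do not commute
commutator = sp.simplify(AE - EA)
```

Leaving $f$ undetermined is the symbolic analogue of the "arbitrary wavefunction" in the worked solution: the residue $2xf(x)$ survives no matter what $f$ is.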
Exercise 4.6.1 Show that the commutator for position and momentum in one dimension equals $i ħ$ and that the right-hand-side of Equation $\ref{4-52}$ therefore equals $ħ/2$ giving $\sigma _x \sigma _{px} \ge \frac {\hbar}{2}$. Applications Operators are very common with a variety of purposes. They are used to figure out the energy of a wavefunction using the Schrödinger Equation. $\hat{H}\psi = E\psi \nonumber$ They also help to explain observations made experimentally. An example of this is the relationship between the magnitude of the angular momentum and its components. $\left[\hat{L}^2, \hat{L}_x\right] = \left[\hat{L}^2, \hat{L}_y\right] = \left[\hat{L}^2, \hat{L}_z\right] = 0 \nonumber$ However, the components do not commute with one another. An additional property of operators that commute is that both quantities can be measured simultaneously. Thus, the magnitude of the angular momentum and ONE of the components (usually $z$) can be known at the same time; however, NOTHING is known about the other components. The physical quantities corresponding to operators that commute can be measured simultaneously to any precision. Example $\PageIndex{2A}$ Determine whether the following two operators commute: $\hat{K} = \alpha \displaystyle \int_{1}^{\infty} dx \nonumber$ and $\hat{H} = \dfrac{d}{dx}\nonumber$ Solution Evaluate $\left[\hat{K},\hat{H}\right]\nonumber$ Example $\PageIndex{2B}$ Determine whether the following two operators commute: $\hat{I} = 5\nonumber$ and $\hat{L} = \displaystyle \int_{1}^{\infty} dx\nonumber$ Solution The operator $\hat{I}$ is multiplication by a real constant and commutes with everything. Thus, these two operators commute.
We can also directly evaluate the commutator $\left[\hat{I},\hat{L}\right]$: $\left[\hat{I},\hat{L}\right] f(x) = 5 \displaystyle \int_{1}^{\infty} f(x)\, dx - \displaystyle \int_{1}^{\infty} 5 f(x)\, dx = 0 \nonumber$ Exercise $\PageIndex{2C}$ Show that the components of the angular momentum do not commute. \begin{align*} \hat{L}_x &= -i \hbar \left[ -\sin \phi \dfrac {\partial} {\partial \theta} - \cot \theta \cos \phi \dfrac {\partial} {\partial \phi} \right] \[4pt] \hat{L}_y &= -i \hbar \left[ \cos \phi \dfrac {\partial} {\partial \theta} - \cot \theta \sin \phi \dfrac {\partial} {\partial \phi} \right] \[4pt] \hat{L}_z &= -i\hbar \dfrac {\partial} {\partial\phi} \end{align*} \nonumber Solution This requires evaluating the following commutators: $\left[\hat{L}_z,\hat{L}_x\right] = i\hbar \hat{L}_y \nonumber$ $\left[\hat{L}_x,\hat{L}_y\right] = i\hbar \hat{L}_z \nonumber$ $\left[\hat{L}_y,\hat{L}_z\right] = i\hbar \hat{L}_x \nonumber$
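The position–momentum commutator from Exercise 4.6.1 can likewise be verified numerically. A minimal sketch (an added illustration) in natural units with $\hbar = 1$; the Gaussian test function and sample points are arbitrary choices:

```python
import math

hbar = 1.0                              # natural units (assumption)

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def p_hat(f):
    # momentum operator applied to f: (p f)(x) = -i*hbar f'(x)
    return lambda x: -1j * hbar * deriv(f, x)

f = lambda x: math.exp(-x**2)           # arbitrary normalizable test function

for x in (0.2, 0.9, 1.5):
    x_p = x * p_hat(f)(x)               # x{p f}
    p_x = p_hat(lambda t: t * f(t))(x)  # p{x f}
    comm = x_p - p_x                    # [x, p] f
    assert abs(comm - 1j * hbar * f(x)) < 1e-6   # [x, p] f = i*hbar f

print("[x, p] = i*hbar confirmed numerically")
```

The product rule is what generates the extra term: $\hat{p}\{x f\} = -i\hbar(f + x f')$, so the $x f'$ pieces cancel in the commutator and $i\hbar f$ remains.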
Solutions to select questions can be found online. 4.3 The function $ψ^*ψ$ has to be real, nonnegative, finite, and of definite value everywhere. Why? Solution If we follow the Born interpretation of wavefunctions, then $ψ^*ψ$ is a probability density and hence must follow standard probability properties, including being non-negative, finite, and of a definite value at any relevant point in the space of the wavefunction. Moreover, the integral of $ψ^*ψ$ over all this space must be equal to 1. 4.5 Why are the following functions not acceptable wave functions for a 1D particle in a box with length $a$ ? $N$ is a normalization constant. 1. ${\psi}=N{\cos \dfrac{n{\pi}x}{a}\ }$ 2. ${\psi}=\dfrac{N}{\sin \dfrac{n{\pi}x}{a}\ }$ 3. ${\psi}=N{\tan \dfrac{{\pi}x}{a}\ }$ Solution The boundary conditions that need to be met are ${\psi}\left(0\right)={\psi}\left(a\right)=0$. 1. The cosine function does not meet them, since $\cos(0) = 1 \neq 0$. 2. The proposed wavefunction blows up to infinity at $x=0$ and $x=a$, where the sine in the denominator vanishes. 3. The tangent is not defined at $x=\dfrac{a}{2}$. 4.12 Show that the set of functions $\sqrt{\dfrac{2}{L}}\sin\left(\dfrac{n\pi x}{L}\right)$ where $n$ = 1,2,3... is orthonormal.
Solution Let $\psi_n =\sqrt{\dfrac{2}{L}}\sin \left(\dfrac{n\pi x}{L} \right)$ Because these functions are real, $\psi_n^* = \psi_n$, and $\int_0^L \psi_n^*\psi_m dx = \int_0^L \sqrt{\dfrac{2}{L}}\sin \left(\dfrac{n\pi x}{L} \right)\sqrt{\dfrac{2}{L}}\sin \left(\dfrac{m\pi x}{L}\right)dx\nonumber$ Letting $n=m$ \begin{align*} \int_0^L \psi_n^*\psi_n dx &= \dfrac{2}{L}\int_0^L \sin \left(\dfrac{n\pi x}{L} \right) \sin\left(\dfrac{n\pi x}{L}\right)dx \[4pt] &= \dfrac{2}{L}\int_0^L \sin^2 \left(\dfrac{n\pi x}{L} \right)dx\end{align*} $\dfrac{2}{L}\int_0^L \sin^2 \left(\dfrac{n\pi x}{L} \right)dx = 1\nonumber$ Letting $n \neq m$ \begin{align*} \int_0^L \psi_n^*\psi_m dx &= \dfrac{2}{L}\int_0^L \sin \left(\dfrac{n\pi x}{L} \right) \sin\left(\dfrac{m\pi x}{L}\right)dx \[4pt] &=\dfrac{2}{L}\dfrac{1}{2} \int_0^L\left[ \cos \left(\dfrac{(n-m)\pi x}{L} \right) - \cos\left(\dfrac{(n+m)\pi x}{L}\right) \right]dx \[4pt] &= \dfrac{1}{L} \left[\dfrac{L}{(n-m)\pi} \left[\sin\left(\dfrac{(n-m)\pi L}{L}\right)-\sin\left(\dfrac{(n-m)\pi 0}{L}\right)\right]-\dfrac{L}{(n+m)\pi} \left[\sin\left(\dfrac{(n+m)\pi L}{L}\right)-\sin\left(\dfrac{(n+m)\pi 0}{L}\right)\right]\right] =0 \end{align*} and thus $\sqrt{\dfrac{2}{L}}\sin \left(\dfrac{n\pi x}{L}\right)$ (n=1, 2, 3, ...) are orthonormal. 4.13 Show that $(a\cdot b)\, c = \sum_{ik} a_i b_i c_k e_k$ $\sum_{i}a_ie_i\cdot\sum_{j}b_je_j\,\sum_{k}c_ke_k=\sum_{ik} a_i b_i c_k e_k\nonumber$ $\sum_{i}\sum_{j}a_ib_j(e_i\cdot e_j)\,\sum_{k}c_ke_k=\sum_{ik} a_i b_i c_k e_k\nonumber$ $e_i \cdot e_j=\delta_{ij}\nonumber$ which equals 1 when $i=j$ and 0 otherwise, so $\sum_{i}a_ib_i \sum_{k}c_ke_k=\sum_{ik} a_i b_i c_k e_k\nonumber$ $\sum_{ik}a_ib_ic_ke_k=\sum_{ik} a_i b_i c_k e_k\nonumber$ 4.14 Determine if the following operators commute $\hat{B} = \dfrac{d}{dx}\nonumber$ and $\hat{C} = x^5\nonumber$ Solution We must evaluate $\left[\hat{B},\hat{C}\right]$ by solving for $\hat{B} \{\hat{C} f(x)\}$ and $\hat{C} \{\hat{B} f(x)\}$ for a wavefunction $f(x)$ and see if they are equal.
$\hat{B} \{\hat{C} f(x)\} = \hat{B}\{ x^5 f(x) \}= \dfrac{d}{dx} \{ x^5 f(x)\} = 5x^4 f(x) + x^5 f'(x)\nonumber$ $\hat{C} \{\hat{B}f(x)\} = \hat{C}\{f'(x)\} = x^5 f'(x)\nonumber$ Since $\left[\hat{B},\hat{C}\right] f(x) = 5x^4 f(x) + x^5 f'(x) - x^5 f'(x) = 5x^4 f(x) \not= 0\nonumber$ the two operators do not commute. 4.15 Do the following combinations of angular momentum operators commute? Show work to justify the answer (do not just write "yes" or "no"). 1. $\textbf{L}_x$ and $\textbf{L}_y$ 2. $\textbf{L}_y$ and $\textbf{L}_z$ 3. $\textbf{L}_z$ and $\textbf{L}_x$ with $\textbf{L}_x = -{\rm i}\,\hbar\left(y\,\dfrac{\partial}{\partial z} - z\,\dfrac{\partial} {\partial y}\right) \nonumber$ $\textbf{L}_y = -{\rm i}\,\hbar\left(z\,\dfrac{\partial}{\partial x} - x\,\dfrac{\partial} {\partial z}\right) \nonumber$ $\textbf{L}_z = -{\rm i}\,\hbar\left(x\,\dfrac{\partial}{\partial y} - y\,\dfrac{\partial} {\partial x}\right) \nonumber$ Estimate the answer to Part C based on the pattern gathered from parts A and B; no work necessary for Part C. Solution a. $[\textbf{L}_x, \textbf{L}_y]\Psi = (y p_z - z p_y)(z p_x - x p_z)\Psi - (z p_x - x p_z)(y p_z - z p_y)\Psi \nonumber$ $= (y p_z z p_x - x y p_z p_z - z^{2} p_y p_x + x z p_y p_z)\Psi - (y z p_x p_z - z^{2} p_x p_y - x y p_z p_z + x p_z z p_y)\Psi\nonumber$ $= y p_x (p_z z - z p_z)\Psi + x p_y (z p_z - p_z z)\Psi = i\hbar (x p_y - y p_x)\Psi\nonumber$ so $[\textbf{L}_x, \textbf{L}_y] = i\hbar \textbf{L}_z \nonumber$ Does not commute, i.e., the commutator is not zero. b. $[\textbf{L}_y, \textbf{L}_z]\Psi = (z p_x - x p_z)(x p_y - y p_x)\Psi - (x p_y - y p_x)(z p_x - x p_z)\Psi\nonumber$ $= (z p_x x p_y - y z p_x p_x - x^{2} p_z p_y + x y p_z p_x)\Psi - (x z p_y p_x - x^{2} p_y p_z - y z p_x p_x + y p_x x p_z)\Psi\nonumber$ $= z p_y (p_x x - x p_x)\Psi + y p_z (x p_x - p_x x)\Psi = i\hbar (y p_z - z p_y)\Psi\nonumber$ so $[\textbf{L}_y, \textbf{L}_z] = i\hbar \textbf{L}_x \nonumber$ Does not commute, i.e., the commutator is not zero. c. This part only requires that we notice the rotation of variables and consistency of format/equations.
In doing so, we better understand the relation between the parts of the angular momentum operator. The work below does not need to be shown for credit, but it may clarify things or make the solution clearer if you are still having trouble assessing and using the pattern. $[\textbf{L}_z, \textbf{L}_x]\Psi = (x p_y - y p_x)(y p_z - z p_y)\Psi - (y p_z - z p_y)(x p_y - y p_x)\Psi\nonumber$ $= (x p_y y p_z - x z p_y p_y - y^{2} p_x p_z + y z p_x p_y)\Psi - (y p_z x p_y - y^{2} p_z p_x - x z p_y p_y + z p_y y p_x)\Psi\nonumber$ $= x p_z (p_y y - y p_y)\Psi + z p_x (y p_y - p_y y)\Psi = i\hbar (z p_x - x p_z)\Psi\nonumber$ so $[\textbf{L}_z, \textbf{L}_x] = i\hbar \textbf{L}_y \nonumber$ Does not commute, i.e., the commutator is not zero. These calculations show that you can have only one well-defined component of the angular momentum, because the uncertainty principle says the others will not be known (since they do not commute). 4.17 For two operators to commute, what property must hold? Use the operators $\hat{L}^2$ and $\hat{L}_z$ as an example to show that this property holds. Solution The commutator of the two operators, when applied to a wavefunction, must give zero. $\hat{L}^2\hat{L}_z\psi(x) - \hat{L}_z\hat{L}^2\psi(x) = 0$ $\left(\hat{L}^2\hat{L}_z - \hat{L}_z\hat{L}^2\right)\psi(x)= \hat{0}\psi(x)$ $\hat{L}^2\hat{L}_z - \hat{L}_z\hat{L}^2= 0$ 4.21 Show that the angular momentum and kinetic energy operators commute and therefore can be measured simultaneously to arbitrary precision. Solution Show that $[ \hat{K} , \hat{L}] \ = \ 0\nonumber$ where the operators can be broken up into 3 components $L_x = -{\rm i}\,\hbar\left(y\,\dfrac{\partial}{\partial z} - z\,\dfrac{\partial} {\partial y}\right) \nonumber$ $L_y = -{\rm i}\,\hbar\left(z\,\dfrac{\partial}{\partial x} - x\,\dfrac{\partial} {\partial z}\right) \nonumber$ $L_z = -{\rm i}\,\hbar\left(x\,\dfrac{\partial}{\partial y} - y\,\dfrac{\partial} {\partial x}\right) \nonumber$ and $\hat{K_x} \ = \ \dfrac{-\hbar^2}{2m} \dfrac{\partial ^2}{\partial x^2}$ . The same can be written for $\hat{K}$ in the y and z directions.
$[ \hat{K} , \hat{L} ] = [\hat{K_x},\hat{L_x}] + [\hat{K_y},\hat{L_y}] + [\hat{K_z},\hat{L_z}] \nonumber$ For the x-direction $[\hat{K_x} , \hat{L_x}] = \left[ \dfrac{-\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2}, -i\hbar \Big( y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y} \Big) \right] \nonumber$ $= \dfrac{-\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} \Big(-i\hbar \Big( y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y} \Big) \Big) - \Big(-i\hbar \Big( y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y} \Big) \Big) \dfrac{-\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2}\nonumber$ $= \dfrac{i\hbar^3}{2m} \Big( y\dfrac{\partial^3}{\partial x^2 \partial z} - z\dfrac{\partial^3}{\partial x^2 \partial y} \Big) - \dfrac{i\hbar^3}{2m} \Big( y\dfrac{\partial^3}{\partial x^2 \partial z} - z\dfrac{\partial^3}{\partial x^2 \partial y} \Big) \ = \ 0\nonumber$ The process can be repeated for the y and z directions, and following the same steps the commutators turn out to be 0. Therefore, kinetic energy and angular momentum commute. 4.22 Show that the position and angular momentum operators commute. Can the position and angular momentum be measured simultaneously to an arbitrary precision? Solution First, we must prove that the position operator, $\mathbf{\hat{R}} = \mathbf{i}\hat{x} + \mathbf{j}\hat{y} + \mathbf{k}\hat{z}$, and the angular momentum operator $\mathbf{\hat{L}} = \mathbf{i}\hat{L_x} + \mathbf{j}\hat{L_y} + \mathbf{k}\hat{L_z}$, commute.
In order to prove the commutation, $[\mathbf{\hat{R}},\mathbf{\hat{L}}] = [\mathbf{i}\hat{x} + \mathbf{j}\hat{y} + \mathbf{k}\hat{z},\mathbf{i}\hat{L_x} + \mathbf{j}\hat{L_y} + \mathbf{k}\hat{L_z}]\nonumber$ $= \ [\hat{x},\hat{L_x}] \ + \ [\hat{y},\hat{L_y}] \ + \ [\hat{z},\hat{L_z}]\nonumber$ $\ = \ 0\nonumber$ where we have used the fact that $\mathbf{i}\centerdot\mathbf{i} = \mathbf{j}\centerdot\mathbf{j} = \mathbf{k}\centerdot\mathbf{k} = 1\nonumber$ and $\mathbf{i}\centerdot\mathbf{j} = \mathbf{j}\centerdot\mathbf{k} = \mathbf{k}\centerdot\mathbf{i} = 0\nonumber$ Now that we have proved that the two operators commute, the relationship of commutation means that the position and total angular momentum of any electrons can be measured simultaneously to arbitrary precision. 4.25 If both $|Ψ_n \rangle$ and $|Ψ_m \rangle$ satisfy the time-independent Schrödinger Equation (these are called stationary states) $|Ψ_n(x,t) \rangle = Ψ_n(x)e^{-iE_nt/ \hbar}\nonumber$ and $| Ψ_m(x,t) \rangle = Ψ_m(x)e^{-iE_mt/ \hbar}\nonumber$ show that any linear superposition of the two wavefunctions $|Ψ(x,t) \rangle = c_n | Ψ_n(x,t) \rangle + c_m |Ψ_m(x,t) \rangle \nonumber$ also satisfies the time-dependent Schrödinger Equation. Solution The time-dependent Schrödinger Equation is $\hat{H}Ψ(x,t) = i\hbar \dfrac{\partial Ψ(x,t)}{\partial t}\nonumber$ Plug $Ψ(x,t)$ into the left-hand side and use $\hat{H}Ψ_n(x) = E_nΨ_n(x)$: $\hat{H}\left[c_nΨ_n(x)e^{-iE_nt/\hbar} + c_mΨ_m(x)e^{-iE_mt/\hbar}\right] = E_nc_nΨ_n(x)e^{-iE_nt/\hbar} + E_mc_mΨ_m(x)e^{-iE_mt/\hbar}\nonumber$ Plug $Ψ(x,t)$ into the right-hand side: $i\hbar \dfrac{\partial}{\partial t}\left[c_nΨ_n(x)e^{-iE_nt/\hbar} + c_mΨ_m(x)e^{-iE_mt/\hbar}\right] = i\hbar\left[\dfrac{-iE_n}{\hbar}c_nΨ_n(x)e^{-iE_nt/\hbar} + \dfrac{-iE_m}{\hbar}c_mΨ_m(x)e^{-iE_mt/\hbar}\right]\nonumber$ $= E_nc_nΨ_n(x)e^{-iE_nt/\hbar} + E_mc_mΨ_m(x)e^{-iE_mt/\hbar}\nonumber$ Since $\hat{H}Ψ(x,t)$ and $i\hbar\, \partial Ψ(x,t)/\partial t$ are equal, the superposition satisfies the time-dependent equation. 4.26 Starting with $\langle x \rangle = \int \psi^*(x,t) x \psi(x,t) dx \nonumber$ and the time-dependent Schrödinger equation, demonstrate that $\dfrac{d\langle x \rangle }{dt}=\int \psi^* \dfrac{i}{\hbar}(\hat H x- x\hat H)\psi dx \nonumber$ Given that $\hat H = \dfrac {-\hbar^2}{2m} \dfrac{d^2}{dx^2}+V(x)\nonumber$ show that $\hat H x- x\hat H = -2 \dfrac {\hbar^2}{2m} \dfrac{d}{dx} = -\dfrac {\hbar^2}{m} \dfrac {i}{\hbar} \hat P_x = -\dfrac {i\hbar}{m}\hat P_x\nonumber$ 4.28 Derive the condition on operators that arises from forcing eigenvalues to be real with complex conjugates. Solution Starting with an eigenvalue problem with $\hat{G}$ as our operator we recognize $\hat{G}\psi = \lambda\psi\nonumber$ Solving for our eigenvalue we must multiply by our complex conjugate wavefunction and integrate both sides to see $\int\psi^*\hat{G}\psi d\tau = \int\psi^*\lambda\psi d\tau = \lambda\int\psi^*\psi d\tau =\lambda\nonumber$ We can repeat this calculation but with the complex conjugate of our initial eigenvalue problem $\hat{G}^*\psi^* = \lambda^*\psi^*\nonumber$ Solving for our eigenvalue we multiply by $\psi$ and integrate both sides to find that $\int\psi\hat{G}^*\psi^* d\tau = \int\psi\lambda^*\psi^* d\tau = \lambda^*\int\psi\psi^* d\tau =\lambda^*\nonumber$ Since we restricted $\lambda$ to be real, both eigenvalue problems return the same eigenvalue. We can then relate the operator side of both equations to know that $\boxed{\int\psi^*\hat{G}\psi d\tau=\int\psi\hat{G}^*\psi^* d\tau}\nonumber$ 4.31 Prove that the position operator is Hermitian.
Solution We must see if the operator satisfies the following requirement to be Hermitian: $\int^\infty_{-\infty} (\hat{A}\psi)^*\psi\,dx = \int^\infty_{-\infty} \psi^*\hat{A}\psi\,dx\nonumber$ Substitute $\hat{X}$ for $\hat{A}$ into the above equation: $\int^\infty_{-\infty} (\hat{X}\psi)^*\psi\,dx = \int^\infty_{-\infty} \psi^*\hat{X}\psi\,dx\nonumber$ $\int^\infty_{-\infty} \psi^*\hat{X}^*\psi\,dx = \int^\infty_{-\infty} \psi^*\hat{X}\psi\,dx\nonumber$ Since $\hat{X}^* \equiv \hat{X}$: $\int^\infty_{-\infty} \psi^*\hat{X}\psi\,dx = \int^\infty_{-\infty} \psi^*\hat{X}\psi\,dx\nonumber$ Therefore the position operator is Hermitian. 4.31 Prove that the momentum operator is Hermitian. Solution An operator $\hat{A}$ is Hermitian if $\int_{-\infty}^{\infty}\psi_{j}^{*}\hat{A}\psi_{i}\,dx = \int_{-\infty}^{\infty}\psi_{i}(\hat{A}\psi_{j})^{*}\,dx$. The momentum operator is $\hat{P} = -i\hbar \dfrac{d}{dx}$. We start from $\int_{-\infty}^{\infty} \psi_{j}^{*}\left(-i\hbar \dfrac{d\psi_{i}}{dx}\right)dx = -i\hbar \int_{-\infty}^{\infty} \psi_{j}^{*}\, d\psi_{i}\nonumber$ Using integration by parts with $u = \psi_{j}^{*}$ and $dv = d\psi_{i}$: $= -i\hbar\left[\psi_{j}^{*}\psi_{i}\right]_{-\infty}^{\infty} + i\hbar\int_{-\infty}^{\infty} \psi_{i}\,\dfrac{d\psi_{j}^{*}}{dx}\,dx\nonumber$ For a confined particle the product $\psi_{j}^{*}\psi_{i}$ goes to zero at each of the endpoints, so the boundary term vanishes and $\int_{-\infty}^{\infty} \psi_{j}^{*}\hat{P}\psi_{i}\,dx = \int_{-\infty}^{\infty} \psi_{i}\left(i\hbar\dfrac{d\psi_{j}^{*}}{dx}\right)dx = \int_{-\infty}^{\infty} \psi_{i}\left(\hat{P}\psi_{j}\right)^{*}dx\nonumber$ which is the Hermitian condition for the momentum operator. 4.32 Which of the following operators are Hermitian: 1. $x$, 2. $d/dx$ 3. $hd^2/dx^2$ 4. $id^2/dx^2$ Solution A Hermitian operator $\hat{A}$ satisfies $\int Ψ^*\hat{A}Ψ\,dx = \int Ψ\hat{A}^*Ψ^*\,dx \nonumber$ x $\int Ψ^*xΨ\,dx = \int ΨxΨ^*\,dx\nonumber$ where $x^* = x$.
Operator $x$ is Hermitian. d/dx $\int Ψ^* \dfrac{d}{dx}Ψ\,dx = \int Ψ^*\, dΨ\nonumber$ Here we can use integration by parts, $\int u\,dv = uv - \int v\,du$, with $u=Ψ^*$ and $dv = dΨ$: $= \left[Ψ^*Ψ\right]_{-\infty}^{\infty} - \int Ψ\,dΨ^*\nonumber$ The boundary term $\left[Ψ^*Ψ\right]_{-\infty}^{\infty}$ is 0 because of the assumption that the wavefunction approaches 0 as one extends to infinity in both directions, so $= -\int Ψ\dfrac{dΨ^*}{dx}\,dx = \int Ψ\left(-\dfrac{d}{dx}\right)Ψ^*\,dx\nonumber$ Since $(d/dx)^* = d/dx$, not $-d/dx$, this operator is not Hermitian. hd2/dx2 $\int Ψ^*h\dfrac{d^2}{dx^2}Ψ\,dx = h\int Ψ^*\,d\left(\dfrac{dΨ}{dx}\right)\nonumber$ Integrating by parts with $u=Ψ^*$ and $dv=d(dΨ/dx)$: $= h\left[Ψ^*\dfrac{dΨ}{dx}\right]_{-\infty}^{\infty} - h\int \dfrac{dΨ}{dx}\,dΨ^* = - h\int \dfrac{dΨ^*}{dx}\,dΨ\nonumber$ The boundary term vanishes because the wavefunction (and hence $dΨ/dx$) approaches 0 at both infinities. Integrating by parts a second time, with $u=dΨ^*/dx$ and $dv=dΨ$: $= -h\left[Ψ\dfrac{dΨ^*}{dx}\right]_{-\infty}^{\infty} + h\int Ψ\,\dfrac{d^2Ψ^*}{dx^2}\,dx = \int Ψ\,h\dfrac{d^2}{dx^2}Ψ^*\,dx\nonumber$ $(h\,d^2/dx^2)^* = h\,d^2/dx^2$, so this operator is Hermitian. id2/dx2 The same two integrations by parts give $\int Ψ^*i\dfrac{d^2}{dx^2}Ψ\,dx = \int Ψ\,i\dfrac{d^2}{dx^2}Ψ^*\,dx\nonumber$ but $\left(i\dfrac{d^2}{dx^2}\right)^* = -i\dfrac{d^2}{dx^2}\nonumber$ so this operator is NOT Hermitian. 4.32 Determine whether the following operators are Hermitian and whether they commute: $\hat{A}=i \dfrac{d}{dx}\nonumber$ and $\hat{B}=i \dfrac{d^2}{dx^2}\nonumber$ Given that $-\infty < x < \infty$ and the functions the operators act on are well behaved. Solution If the operator satisfies this condition it is Hermitian: $\int_{-\infty}^{\infty} f^*\left(x\right)\hat{A}f\left(x\right)dx=\int_{-\infty}^{\infty} f\left(x\right)\hat{A}^*f^*\left(x\right)dx\nonumber$ A) $\int_{-\infty}^{\infty} f^*\left(i\dfrac{df}{dx}\right)dx=i\int_{-\infty}^{\infty} f^*\dfrac{df}{dx}dx=i\left(\left[f^* f\right]_{-\infty}^{\infty}-\int_{-\infty}^{\infty} f\dfrac{df^*}{dx}dx\right)\nonumber$ $=-i\int_{-\infty}^{\infty} f\dfrac{df^*}{dx}dx=\int_{-\infty}^{\infty} f\left(-i\dfrac{d}{dx}\right)f^*dx = \int_{-\infty}^{\infty} f\left(i\dfrac{d}{dx}\right)^*f^*dx\nonumber$ This operator is Hermitian. B) $\int_{-\infty}^{\infty} f^*\left(i\dfrac{d^2f}{dx^2}\right)dx=i\left[f^* \dfrac{df}{dx}\right]_{-\infty}^{\infty}-i\int_{-\infty}^{\infty} \dfrac{df^*}{dx}\dfrac{df}{dx}dx\nonumber$ $=-i\left[f\dfrac{df^*}{dx}\right]_{-\infty}^{\infty}+i\int_{-\infty}^{\infty} f\dfrac{d^2f^*}{dx^2}dx=-\int_{-\infty}^{\infty} f\left(i\dfrac{d^2}{dx^2}\right)^*f^*dx\nonumber$ This operator is not Hermitian. If the operators commute they have to satisfy this condition $\hat{A}\hat{B}f=\hat{B}\hat{A}f\nonumber$
$\hat{A}\hat{B}f=i\dfrac{d}{dx}\left(i\dfrac{d^2f}{dx^2}\right)=-\dfrac{d^3f}{dx^3}\nonumber$ $\hat{B}\hat{A}f=i\dfrac{d^2}{dx^2}\left(i\dfrac{df}{dx}\right)=-\dfrac{d^3f}{dx^3}\nonumber$ This pair of operators commutes. 4.34 Consider two wavefunctions $\psi_1(x) = A\sin(k_1x) + B\cos(k_1x)$ and $\psi_2(x) = C\sin(k_2x) + D\cos(k_2x)\nonumber$ Given the boundary conditions: $\psi(0) = 0\nonumber$ and $\dfrac{d\psi_1}{dx} = \dfrac{d\psi_2}{dx} \;\;\; \text{at } x=0\nonumber$ which give $A+B = C, \quad k_1(A-B) = k_2C\nonumber$ and given an expression $R = \dfrac {B^2}{A^2}$, derive the simplest expression for $R$ based on the terms from the boundary conditions provided above. Solution Since $A+B = C$ and $k_1(A-B) = k_2C$, $k_1(A-B) = k_2 (A+B)$ $k_1A - k_1B = k_2A + k_2B$ $(k_1 - k_2)A = (k_1 + k_2)B$ Thus, $\dfrac{B}{A} = \dfrac{k_1 - k_2}{k_1 + k_2}$ $R = \dfrac {B^2}{A^2} =\left (\dfrac{B}{A} \right)^2= \left (\dfrac{k_1 - k_2}{k_1 + k_2} \right)^2$ 4.34 A particle moving in one dimension encounters a potential step at $x = 0$: the potential energy is zero for $x < 0$ and $V_0$ for $x > 0$. Will the particle be reflected when its energy is greater than the barrier height $V_0$? Solution When $x <0$ the Schrödinger equation is as follows: $\dfrac{-\hbar^{2}}{2m}\dfrac{d^2\psi_{1}}{dx^2} = E\psi_{1}\nonumber$ and the solution to this equation is: $\psi_1(x)= Ae^{ik_1x}+Be^{-ik_1x}\nonumber$ where $k_1 = \left(\dfrac{2mE}{\hbar^2}\right)^{1/2}\nonumber$ In region two, where $x>0$: $-\dfrac{\hbar^2}{2m}\dfrac{d^2\psi_2}{dx^2}+V_0 \psi_2= E\psi_2\nonumber$ and the solution to the equation is: $\psi_2(x)= Ce^{ik_2x}+ De^{-ik_2x}\nonumber$ with $k_2= \left[\dfrac{2m(E-V_0)}{\hbar^2}\right]^{1/2}\nonumber$ Notice the difference between the two Schrödinger equations.
The first equation does not have a potential energy term because the region before the step has zero potential energy; after the step, the Schrödinger equation has a potential energy term because the particle has potential energy $V_0$ there. Matching the two solutions and their first derivatives at $x = 0$ determines how much of the incident wave is reflected and how much is transmitted. Solving for these coefficients shows that even when the energy of the particle is greater than the barrier height, a fraction of the incident particles is reflected at the step: the reflection is partial rather than total, and it becomes negligible only when $E \gg V_0$. This partial reflection at a step the particle classically has enough energy to pass over is a purely quantum mechanical effect.
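The step-potential matching can be made concrete with a short numeric sketch (an added illustration, with $\hbar = m = 1$ and arbitrary $E$ and $V_0$ chosen so that $E > V_0$). Matching $\psi_1(0) = \psi_2(0)$ and $\psi_1'(0) = \psi_2'(0)$ with $D = 0$ (a wave incident from the left) gives $B/A = (k_1 - k_2)/(k_1 + k_2)$:

```python
import math

hbar, m = 1.0, 1.0        # natural units (assumption)
V0 = 2.0                  # step height (arbitrary)
E = 5.0                   # particle energy, chosen with E > V0

k1 = math.sqrt(2 * m * E) / hbar
k2 = math.sqrt(2 * m * (E - V0)) / hbar

# Matching psi and dpsi/dx at x = 0 gives the amplitude ratios
R = ((k1 - k2) / (k1 + k2)) ** 2      # reflection probability |B/A|^2
T = 4 * k1 * k2 / (k1 + k2) ** 2      # transmission probability (k2/k1)|C/A|^2

assert R > 0                          # partial reflection even though E > V0
assert abs(R + T - 1) < 1e-12         # probability flux is conserved
print(f"R = {R:.4f}, T = {T:.4f}")
```

Classically no particle would be reflected for $E > V_0$, so the nonzero $R$ is a purely quantum effect; note the transmission probability carries the flux factor $k_2/k_1$ so that $R + T = 1$.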
• The harmonic oscillator is common: It appears in many everyday examples: Pendulums, springs, electronics (such as the RLC circuit), standing waves on a string, etc. It's trivial to set up demonstrations of these phenomena, and we see them constantly. • The harmonic oscillator is intuitive: We can picture the forces on systems such as a pendulum or a plucked string. This makes it simple to study in the classroom. In contrast, there are many "everyday" examples that are not intuitive. • The harmonic oscillator is mathematically simple: Math is part of physics. In studying simple harmonic motion, students can immediately use the formulas that describe its motion. These formulas are understandable: for example, the equation for frequency shows the intuitive result that increasing spring stiffness increases frequency. • 5.1: A Harmonic Oscillator Obeys Hooke's Law The simple harmonic oscillator, a nonrelativistic particle in a quadratic potential, is an excellent model for a wide range of systems in nature. Indeed, it was for this system that quantum mechanics was first formulated: the blackbody radiation formula of Planck. • 5.2: The Equation for a Harmonic-Oscillator Model of a Diatomic Molecule Contains the Reduced Mass of the Molecule Viewing the multi-body system as a single particle allows the separation of the internal motion of the particle (vibration and rotation) from the displacement of the center of mass. This approach greatly simplifies many calculations and problems. • 5.3: The Harmonic Oscillator Approximates Molecular Vibrations The quantum harmonic oscillator is the quantum analog of the classical harmonic oscillator and is one of the most important model systems in quantum mechanics.
This is due in part to the fact that an arbitrary potential curve $V(x)$ can usually be approximated as a harmonic potential in the vicinity of a stable equilibrium point. • 5.4: The Harmonic Oscillator Energy Levels In this section we contrast the classical and quantum mechanical treatments of the harmonic oscillator, and we describe some of the properties that can be calculated using the quantum mechanical harmonic oscillator model. • 5.5: The Harmonic Oscillator and Infrared Spectra Infrared (IR) spectroscopy is one of the most common and widely used spectroscopic techniques employed mainly by inorganic and organic chemists due to its usefulness in determining structures of compounds and identifying them. Chemical compounds have different chemical properties due to the presence of different functional groups. • 5.6: The Harmonic Oscillator Wavefunctions involve Hermite Polynomials The quantum-mechanical description of vibrational motion using the harmonic oscillator model will produce vibrational quantum numbers, vibrational wavefunctions, quantized vibrational energies, and a zero-point energy. • 5.7: Hermite Polynomials are either Even or Odd Functions Hermite polynomials were defined by Laplace (1810), though in scarcely recognizable form, and studied in detail by Chebyshev (1859). Chebyshev's work was overlooked, and they were named later after Charles Hermite, who wrote on the polynomials in 1864, describing them as new. They were consequently not new, although in later 1865 papers Hermite was the first to define the multidimensional polynomials. • 5.8: The Energy Levels of a Rigid Rotor In the rigid rotor model, the distance between the particles does not change as they rotate. • 5.9: The Rigid Rotator is a Model for a Rotating Diatomic Molecule To develop a description of the rotational states, we will consider the molecule to be a rigid object, i.e. the bond lengths are fixed and the molecule cannot vibrate.
This model for rotation is called the rigid-rotor model. It is a good approximation (even though a molecule vibrates as it rotates, and the bonds are elastic rather than rigid) because the amplitude of the vibration is small compared to the bond length. • 5.E: The Harmonic Oscillator and the Rigid Rotor (Exercises) These are homework exercises to accompany Chapter 5 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap. Thumbnail: The rigid rotor model for a diatomic molecule. (CC BY-SA 3.0 Unported; Mysterioso via Wikipedia) 05: The Harmonic Oscillator and the Rigid Rotor The motion of two atoms in a diatomic molecule can be separated into translational, vibrational, and rotational motions. Both rotational and vibrational motions are internal motions that do not change the center of mass for the molecule (Figure 5.1.1 ), which is described by translational motion. Quantum translational motions can be modeled with the particle in a box model discussed previously, and rotation and vibration can be modeled via the rigid rotor and harmonic oscillator models, respectively. Before delving into the quantum mechanical harmonic oscillator, we will introduce the classical harmonic oscillator (i.e., involving classical mechanics) to build an intuition that we will extend to the quantum world. A classical description of the vibration of a diatomic molecule is needed because the quantum mechanical description begins with replacing the classical energy with the Hamiltonian operator in the Schrödinger equation. It also is interesting to compare and contrast the classical description with the quantum mechanical picture. The Classical Harmonic Oscillator Simple harmonic oscillators about a potential energy minimum can be thought of as a ball rolling frictionlessly in a curved dish or a pendulum swinging frictionlessly back and forth (Figure 5.1.2 ). The restoring forces are precisely the same in either horizontal direction.
If we consider the bond to behave like a mass on a spring (Figure 5.1.2 ), then this restoring force ($F$) is proportional to the displacement ($x$) from the equilibrium length ($x_o$) - this is Hooke's Law: $F = - kx \label {5.1.2}$ where $k$ is the force constant. Hooke's Law says that the force is proportional to, but in opposite direction to, the displacement ($x$). The force constant reflects the stiffness of the spring. The idea incorporated into the application of Hooke's Law to a diatomic molecule is that when the atoms move away from their equilibrium positions, a restoring force is produced that increases proportionally with the displacement from equilibrium. The potential energy for such a system increases quadratically with the displacement. $V (x) = \dfrac {1}{2} k x^2 \label {5.1.3}$ Hooke's Law or the harmonic (i.e. quadratic) potential given by Equation $\ref{5.1.3}$ is an excellent approximation for the vibrational oscillations of molecules. The magnitude of the force constant $k$ depends upon the nature of the chemical bond in molecular systems just as it depends on the nature of the spring in mechanical systems. The larger the force constant, the stiffer the spring or the stiffer the bond. Since it is the electron distribution between the two positively charged nuclei that holds them together, a double bond with more electrons has a larger force constant than a single bond, and the nuclei are held together more tightly. Caution A stiff bond with a large force constant is not necessarily a strong bond with a large dissociation energy. A harmonic oscillator has no dissociation energy since it CANNOT be broken - there is always a restoring force to keep the molecule together. This is one of many deficiencies in using the harmonic oscillator model to describe molecular vibrations. Two atoms or one? 
You may have questioned the applicability of the harmonic oscillator model involving one moving mass bound to a fixed wall via a spring, as in Figure 5.1.2 , for the vibration of a diatomic molecule with two moving masses, as in Figure 5.1.1 . It turns out that the two are mathematically equivalent, with the internal vibrational motion described by a single reduced particle with a reduced mass $μ$. For a diatomic molecule (Figure 5.1.3 ), the vector $\vec{r}$ corresponds to the internuclear axis. The magnitude or length of $r$ is the bond length, and the orientation of $r$ in space gives the orientation of the internuclear axis in space. Changes in the orientation correspond to rotation of the molecule, and changes in the length correspond to vibration. The change in the bond length from the equilibrium bond length is the vibrational coordinate for a diatomic molecule. Example 5.1.1 1. Show that minus the first derivative of the harmonic potential energy function in Equation $\ref{5.1.3}$ with respect to $x$ is the Hooke's Law force in Equation \ref{5.1.2}. 2. Show that the second derivative is the force constant, $k$. 3. At what value of $x$ is the potential energy a minimum; at what value of $x$ is the force zero? 4. Sketch graphs to compare the potential energy and the force for a system with a large force constant to one with a small force constant. Solution a Hooke's Law for a spring entails that the restoring force $F$ is equal to $-k$ times the distance compressed or stretched, $x$ (Equation \ref{5.1.2}). The derivative of $V(x) = 0.5 k x^2$ is $V'(x) = (2)\left(\dfrac{1}{2}\right)kx = kx. \nonumber$ The negative of this is $-V'(x) = -kx$ which is exactly equal to Hooke's Law. Solution b The second derivative $V"(x) = \dfrac{d}{dx} kx = k \nonumber$ Thus, the second derivative of this equation for potential energy is equal to the force constant, $k$.
Solution c To find the minimum potential energy, set the first derivative equal to zero and solve for $x$. When $V'(x) = kx = 0$, then $x$ must equal zero, so the potential energy is a minimum at $x=0$. Plugging this into Hooke's Law, $F(0) = -k(0) = 0$, so the force is also zero at $x=0$. Solution d The force constant has a drastic effect on both the potential energy and the force. For a system with a large force constant, a small change in $x$ produces a large change in the potential energy and the force, whereas for a system with a small force constant the same change in $x$ produces only a small change in both. Solving the Harmonic Oscillator Model The classical equation of motion for a one-dimensional simple harmonic oscillator with a particle of mass $m$ attached to a spring having spring constant $k$ is $m \dfrac{d^2x(t)}{dt^2} = -kx(t) \label{5.1.4a}$ which can be rewritten in the standard form: $\dfrac{d^2x(t)}{dt^2} + \dfrac{k}{m}x(t) = 0 \label{5.1.4b}$ Equation $\ref{5.1.4a}$ is a linear second-order differential equation that can be solved by standard methods (for example, with an exponential trial solution). The resulting solution to Equation $\ref{5.1.4a}$ is $x(t) = x_o \sin (\omega t + \phi) \label{5.1.5}$ with $\omega = \sqrt{\dfrac{k}{m}} \label{5.1.6}$ and the momentum has time dependence \begin{align*} p &= mv \[4pt] &=mx_o \omega \cos (\omega t + \phi) \label{5.1.7} \end{align*} Figure 5.1.4 shows the displacement of the bond from its equilibrium length as a function of time. Such motion is called harmonic. Example 5.1.2 Substitute the following functions into Equation $\ref{5.1.4b}$ to demonstrate that they are both possible solutions to the classical equation of motion. 1. $x(t) = x_0 e^{i \omega t}$ 2. $x(t) = x_0 e^{-i \omega t}$ where $\omega = \sqrt {\dfrac {k}{m}} \nonumber$ Note that the Greek symbol $\omega$ for frequency represents the angular frequency $2π\nu$.
Solution a This requires simply placing the given function $x(t) = x_0 e^{i \omega t}$ into Equation $\ref{5.1.4b}$. \begin{align*} \frac{d^2 x(t) }{dt^2} + \frac{k}{m} x(t) &= 0 \[4pt] \frac{d^2 }{dt^2} \left( x_o e^{i \omega t} \right)+ \frac{k}{m}x_o e^{i \omega t} &= 0\[4pt] x_o \frac{d^2 }{dt^2} \left( e^{i \omega t} \right)+ \frac{k}{m}x_o e^{i \omega t} &= 0 \[4pt] x_o i^2 \omega^2 \left( e^{i \omega t} \right)+ \frac{k}{m}x_o e^{i \omega t} &= 0 \[4pt] x_o i^2 \omega^2 e^{i \omega t} + \frac{k}{m}x_o e^{i \omega t} &= 0 \[4pt] x_o i^2 \omega^2 + \frac{k}{m}x_o &= 0 \[4pt] -x_o \frac{k}{m}+ \frac{k}{m}x_o = 0 \; \textrm{ with } \; \omega &= \sqrt{\frac{k}{m}} \end{align*} \nonumber Solution b This requires simply placing the given function $x(t) = x_0 e^{-i \omega t}$ into Equation $\ref{5.1.4b}$. \begin{align*} \frac{d^2 x(t) }{dt^2} + \frac{k}{m} x(t) &= 0 \[4pt] \frac{d^2 }{dt^2} \left( x_o e^{-i \omega t} \right)+ \frac{k}{m}x_o e^{-i \omega t}& = 0 \[4pt] x_o \frac{d^2 }{dt^2} \left( e^{-i \omega t} \right)+ \frac{k}{m}x_o e^{-i \omega t} &= 0 \[4pt] x_o i^2 \omega^2 \left( e^{-i \omega t} \right)+ \frac{k}{m}x_o e^{-i \omega t} &= 0 \[4pt] x_o i^2 \omega^2 e^{-i \omega t} + \frac{k}{m}x_o e^{-i \omega t}& = 0\[4pt] x_o i^2 \omega^2 + \frac{k}{m}x_o &= 0 \[4pt] -x_o \frac{k}{m}+ \frac{k}{m}x_o = 0 \; \textrm{ with } \; \omega &= \sqrt{\frac{k}{m}} \end{align*} \nonumber Exercise 5.1.1 Show that sine and cosine functions also are solutions to Equation $\ref{5.1.4b}$. Answer Using Equation $\ref{5.1.4b}$ $\frac{d^2x(t)}{dt^2} +\frac{k}{m}x(t)=0 \nonumber$ with $\omega =\sqrt{\frac{k}{m}}$. 
For \begin{align*} x(t)=x_{o} \sin(\omega t+\phi) \end{align*} \nonumber Take the second derivative of $x(t)$ \begin{align*} \frac{d^2x(t)}{dt^2} &=-\omega^2x_{o}\sin(\omega t+\phi) \[4pt] &=-\frac{k}{m}x_{o} \sin \left(\sqrt{\frac{k}{m}}t+\phi \right) \end{align*} \nonumber Plug in $x(t)$ and the second derivative of $x(t)$ into Equation $\ref{5.1.4b}$ \begin{align*} -\frac{k}{m}x_{o}\sin \left(\sqrt{ \frac{k}{m}} t + \phi \right) + \frac{k}{m} x_{o} \sin \left(\sqrt{\frac{k}{m}} t + \phi \right) =0 \end{align*} \nonumber Hence, the sine function is a solution to Equation $\ref{5.1.4b}$ We do the same for the cosine function $x(t)=x_{o}\cos(\omega t + \phi) \nonumber$ Take the second derivative of $x(t)$ \begin{align*} \frac{d^2x(t)}{dt^2} &=-\omega^2x_{o}\cos(\omega t + \phi) \[4pt] &=-\frac{k}{m} x_{o} \cos\left( \sqrt{\frac{k}{m}} t + \phi \right ) \end{align*} \nonumber Plug in $x(t)$ and the second derivative of $x(t)$ into Equation $\ref{5.1.4b}$ \begin{align*} -\frac{k}{m}x_{o} \cos\left( \sqrt{\frac{k}{m}} t + \phi \right) + \frac{k}{m}x_{o} \cos\left( \sqrt{\frac{k}{m}}t + \phi \right)=0 \end{align*} \nonumber The cosine function is also a solution to Equation $\ref{5.1.4b}$. Exercise 5.1.2 Identify what happens to the frequency of the motion as the force constant increases in one case and as the mass increases in another case. If the force constant is increased 9-fold and the mass is increased by 4-fold, by what factor does the frequency change? Answer This is a simple application of Equation \ref{5.1.6}. As the force constant increases, the frequency of the motion increases, while as the mass increases, the frequency of the motion decreases. If the force constant increased 9-fold and the mass increased 4-fold, $ω=\sqrt{\dfrac{9k}{4m}}= \dfrac{3}{2} \sqrt{\dfrac{k}{m}} \nonumber$ The frequency of motion would increase by a factor of 3/2.
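The 9-fold/4-fold scaling in this answer is easy to confirm numerically. A minimal sketch (the values of $k$ and $m$ below are arbitrary illustrations, not from the text):

```python
import math

def angular_frequency(k: float, m: float) -> float:
    """Angular frequency of a classical harmonic oscillator, omega = sqrt(k/m)."""
    return math.sqrt(k / m)

# Arbitrary illustrative force constant (N/m) and mass (kg)
k, m = 100.0, 1.0e-26

omega = angular_frequency(k, m)
omega_scaled = angular_frequency(9 * k, 4 * m)

# Increasing k 9-fold and m 4-fold scales omega by sqrt(9/4) = 3/2
print(omega_scaled / omega)  # 1.5
```

Because $\omega \propto \sqrt{k/m}$, any common rescaling of $k$ and $m$ changes the frequency by the square root of the ratio of the two scale factors.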
Harmonic Oscillator Energies The energy of the vibration is the sum of the kinetic energy and the potential energy. The momentum associated with the harmonic oscillator is $p = m \dfrac {dx}{dt} \label {5.1.8}$ so combining Equations \ref{5.1.8} and \ref{5.1.3}, the total energy can be written as \begin{align} E &= T + V \[4pt] &= \dfrac {p^2}{2 m} + \dfrac {k}{2} x^2 \label {5.1.9} \end{align} The total energy of the harmonic oscillator is equal to the maximum potential energy stored in the spring when $x = \pm A$, called the turning points (Figure 5.1.5 ). The total energy (Equation $\ref{5.1.9}$) is continuously being shifted between potential energy stored in the spring and kinetic energy of the mass. The motion of a classical oscillator is confined to the region where its kinetic energy is nonnegative, which is what the energy relation Equation \ref{5.1.9} says. Physically, it means that a classical oscillator can never be found beyond its turning points, and its energy depends only on how far the turning points are from its equilibrium position. The energy of a classical oscillator changes in a continuous way. The lowest energy that a classical oscillator may have is zero, which corresponds to a situation where an object is at rest at its equilibrium position. The zero-energy state of a classical oscillator simply means no oscillations and no motion at all (a classical particle sitting at the bottom of the potential well in Figure 5.1.5 ). When an object oscillates, no matter how big or small its energy may be, it spends the longest time near the turning points, because this is where it slows down and reverses its direction of motion. Therefore, the probability of finding a classical oscillator between the turning points is highest near the turning points and lowest at the equilibrium position. (Note that this is not a statement of preference of the object to go to lower energy. It is a statement about how quickly the object moves through various regions.) 
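The continual exchange between kinetic and potential energy, with the total fixed at $\frac{1}{2}kA^2$, can be verified numerically by sampling the classical trajectory $x(t) = A\sin(\omega t + \phi)$ and $p(t) = mA\omega\cos(\omega t + \phi)$. A minimal sketch with arbitrary illustrative parameters (not values from the text):

```python
import math

# Arbitrary illustrative oscillator parameters
m, k = 2.0, 8.0          # mass and force constant
A, phi = 0.5, 0.3        # amplitude and phase
omega = math.sqrt(k / m)

def total_energy(t: float) -> float:
    """E = p^2/(2m) + (1/2) k x^2 along the classical trajectory."""
    x = A * math.sin(omega * t + phi)
    p = m * A * omega * math.cos(omega * t + phi)
    return p**2 / (2 * m) + 0.5 * k * x**2

energies = [total_energy(0.01 * n) for n in range(1000)]

# Energy is conserved and equals its value at the turning points, (1/2) k A^2
print(max(energies) - min(energies))      # ~0 (within floating-point error)
print(abs(energies[0] - 0.5 * k * A**2))  # ~0
```

The spread of the sampled energies is zero to floating-point precision, confirming that the trajectory merely shuttles a fixed total energy between the kinetic and potential terms.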
Example 5.1.3 1. What happens to the frequency of the oscillation as the vibration is excited with more and more energy? 2. What happens to the maximum amplitude of the vibration as it is excited with more and more energy? Solution a. Frequency The energy of the (quantum) harmonic oscillator can be written as $E_{v}=h \nu\left(v+\dfrac{1}{2}\right) \nonumber$ and the frequency of oscillation is $\omega=\sqrt{\frac{k}{m}}$. Notice that the frequency depends only on the stiffness ($k$) and the mass ($m$) of the oscillator, and not on the energy. Hence, increasing the energy of the vibration does not affect its frequency. b. Amplitude The kinetic and potential terms for the energy of the harmonic oscillator can be written as \begin{align*} E &=K+V \[4pt] &=\frac{1}{2} m \omega^{2} A^{2} \sin ^{2} \omega t+\frac{1}{2} k A^2 \cos^2 \omega t \end{align*} \nonumber with $\omega=\sqrt{\frac{k}{m}}$ so \begin{align*} E &=\frac{1}{2} k A^{2}\left(\sin ^{2} \omega t+\cos^2 \omega t\right) \[4pt] &= \frac{1}{2} k A^2 \end{align*} \nonumber Since $E = \frac{1}{2}kA^2$, the maximum amplitude $A$ of the vibration increases as the energy increases. Exercise 5.1.3 If a molecular vibration is excited by collision with another molecule and is given a total energy $E_{hit}$ as a result, what is the maximum amplitude of the oscillation? Is there any constraint on the magnitude of energy that can be introduced? Answer Within the harmonic approximation, the total energy of the molecular vibration is $E_{h i t}=T+V=\frac{p^{2}}{2 m}+\frac{k}{2} x^2 \nonumber$ The maximum amplitude of a harmonic oscillator occurs at the displacement $x$ where the kinetic energy term of the total energy equals zero $E_{hit}=\frac{k}{2}x^2 \nonumber$ Solving for $x$ gives the maximum amplitude: $x=\sqrt{\frac{2}{k} E_{h i t}} \nonumber$ The energy introduced cannot be greater than the energy required to break the bond between the atoms.
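The amplitude result of this exercise can be sketched in a few lines of code; the collision energy below is an arbitrary illustration, not a value from the text, while $k = 478\,N/m$ is an HCl-like force constant of the kind used later in this chapter:

```python
import math

def max_amplitude(E_hit: float, k: float) -> float:
    """Maximum displacement of a harmonic oscillator with total energy E_hit:
    at the turning point all energy is potential, E_hit = (1/2) k x^2."""
    return math.sqrt(2 * E_hit / k)

k = 478.0        # force constant in N/m (HCl-like value)
E_hit = 5.0e-20  # illustrative collision energy in J

A = max_amplitude(E_hit, k)
# Consistency check: at the turning point the potential energy equals E_hit
assert abs(0.5 * k * A**2 - E_hit) < 1e-30
print(A)  # displacement in meters, on the order of 1e-11 m
```

Note that the formula is only meaningful while $E_{hit}$ stays below the bond dissociation energy, since the harmonic potential itself never allows the bond to break.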
For studying the energetics of molecular vibration we take the simplest example, a diatomic heteronuclear molecule $\ce{AB}$. Let the respective masses of atoms $\ce{A}$ and $\ce{B}$ be $m_A$ and $m_B$. For diatomic molecules, we define the reduced mass $\mu_{AB}$ by: $\mu_{AB}=\dfrac{m_A\, m_B}{m_A+m_B} \label{5.2.1}$ The reduced mass allows a two-body system to be represented as a single-body one. When the motion (displacement, vibration, rotation) of two bodies is governed only by their mutual interaction, the problem can be recast as the motion of a single particle with the reduced mass. Reduced Mass Viewing the multi-body system as a single particle allows the separation of the motion (vibration and rotation) of the particle from the displacement of the center of mass. This approach greatly simplifies many calculations and problems. This concept is readily used in the general motion of diatomics, i.e. the simple harmonic oscillator (vibrational displacement between two bodies, following Hooke's Law), the rigid rotor approximation (the moment of inertia about the center of mass of a two-body system), spectroscopy, and many other applications. Example 5.2.1 : Reduced Mass Determine the reduced mass of the two-body system of a proton and electron with $m_{proton} = 1.6727 \times 10^{-27}\, kg$ and $m_{electron} = 9.110 \times 10^{-31}\, kg$. Answer \begin{align*} \mu_{pe} &= \dfrac{(1.6727 \times 10^{-27})(9.110 \times 10^{-31})}{1.6727 \times 10^{-27} + 9.110 \times 10^{-31}} \[4pt] &= 9.105 \times 10^{-31} kg \end{align*} \nonumber The Quantum Harmonic Oscillator The classical harmonic oscillator is a simple yet powerful representation of the energetics of an oscillating spring system. Central to this model is the formulation of the quadratic potential energy $V(x) \approx \dfrac {1}{2} kx^2 \label{potential}$ One problem with this classical formulation is that it is not general.
We cannot use it, for example, to describe vibrations of diatomic molecules, where quantum effects are important. This requires formulating and solving the Schrödinger equation using the potential in Equation \ref{potential}. $\hat{H} | \psi \rangle = \left[ \dfrac {-\hbar^2}{2\mu} \dfrac {d^2 }{d x^2} + \dfrac {1}{2}kx^2 \right ] | \psi \rangle = E|\psi \rangle \nonumber$ Solving this quantum harmonic oscillator is appreciably harder than solving the Schrödinger equation for the simpler particle-in-a-box model and is outside the scope of this text. However, as with most quantum models (and in contrast to the classical harmonic oscillator), the energies are quantized in terms of a quantum number ($v$ in this case): \begin{align} E_v &= \hbar \left(\sqrt {\dfrac {k}{\mu}} \right) \left(v + \dfrac {1}{2} \right) \[4pt] &= h \nu \left(v+\dfrac {1}{2} \right) \end{align} \nonumber with the natural vibrational frequency of the system given as $\nu = \dfrac{1}{2 \pi}\sqrt {\dfrac {k}{\mu}} \label{freq}$ and the mass, $\mu$, is the reduced mass of the system (Equation \ref{5.2.1}). Warning Be careful to distinguish $\nu$, the symbol for the natural frequency (a Greek nu), from $v$, the harmonic oscillator quantum number (a Latin v). Caution: Do Not Use Atomic Weights to Calculate Reduced Masses The vibrational frequencies given by Equation \ref{freq} depend on the force constants ($k$) and the atomic masses of the vibrating nuclei via the reduced mass ($\mu$). It should be clear that the substitution of one isotope of an atom in a molecule for another isotope will affect the atomic masses and therefore the reduced mass (via Equation \ref{5.2.1}) and therefore the vibrational frequencies (via Equation \ref{freq}). It is important to remember that the Periodic Table gives only atomic weights of elements, which are abundance-weighted averages over the isotopes normally encountered in the laboratory (Table 5.2.1 ).
To properly discuss vibrational frequencies of molecules, we need to know (or denote) the specific isotopes in the molecule. Check Table A4 for that information. Table 5.2.1 : Atomic Mass and Isotope Composition. Consult Table A4 for a more extensive table. atomic mass (in amu) isotopic abundance (%) 1H 1.007825 99.985 2H 2.0140 0.015 35Cl 35.968852 75.77 37Cl 36.965903 24.23 79Br 78.918336 50.69 81Br 80.916289 49.31 Example 5.2.2 : Isotope Effect What are the reduced masses of $\ce{^1H^35Cl}$ and $\ce{^1H^37Cl}$? If the spring constants for vibrations of both molecules are equal and estimated at $478 \,N/m$, what are the natural vibrational frequencies of these two molecules? Solution The periodic table gives an atomic weight of 35.45 amu for chlorine, but remember this is the average over the natural abundances of the chlorine isotopes, which is dictated primarily by two isotopes: $\ce{^35Cl}$ and $\ce{^37Cl}$. For this problem, we need the exact masses of the $\ce{^1H}$, $\ce{^35Cl}$, and $\ce{^37Cl}$ isotopes. Check Table A4 for that information. For $\ce{^1H^35Cl}$: \begin{align*} \text{Reduced mass} &= \dfrac{m_1m_2}{m_1+m_2} \[4pt] &= \dfrac{m_\ce{H}m_\ce{^35Cl}}{m_\ce{H}+m_\ce{^35Cl}} \[4pt] &= \dfrac{(1.0078)(34.9688)}{1.0078 + 34.9688}\, amu \[4pt] &= 0.9796\,amu\end{align*} \nonumber or, converted into kg, $1.6267 \times 10^{-27}\,kg$. For $\ce{^1H^37Cl}$: \begin{align*} \text{Reduced mass} &= \dfrac{m_1m_2}{m_1+m_2} \[4pt] &= \dfrac{m_\ce{H}m_\ce{^37Cl}}{m_\ce{H}+m_\ce{^37Cl}} \[4pt] &= \dfrac{(1.0078)(36.9659)}{1.0078 + 36.9659}\, amu \[4pt] &= 0.9810\,amu\end{align*} \nonumber or, converted into kg, $1.6291 \times 10^{-27}\,kg$. This is only about 0.14% bigger. Equation \ref{freq} is used to predict the respective vibrational frequencies of these two molecules.
For $\ce{^1H^35Cl}$: \begin{align*} \nu &= \dfrac{1}{2 \pi}\sqrt {\dfrac {k}{\mu}} \[4pt] &= \dfrac{1}{2 \pi}\sqrt {\dfrac {478 \,N/m}{1.6267 \times 10^{-27}\,kg}} \[4pt] &= 8.627×10^{13} s^{-1} \end{align*} \nonumber For $\ce{^1H^37Cl}$: \begin{align*} \nu &= \dfrac{1}{2 \pi}\sqrt {\dfrac {k}{\mu}} \[4pt] &= \dfrac{1}{2 \pi}\sqrt {\dfrac {478 \,N/m}{1.6291 \times 10^{-27}\,kg}} \[4pt] &= 8.621×10^{13} s^{-1} \end{align*} \nonumber As with the differences in the reduced masses, the difference in the vibrational frequencies of these two molecules is quite small. However, high-resolution IR spectroscopy can easily distinguish the vibrations of these two molecules. Exercise 5.2.1 will demonstrate that this "isotope effect" is not always a small effect. Exercise 5.2.1 : Hydrogen Chloride The force constant is weakly sensitive to the specific isotopes in a molecule (and we typically assume it is isotope independent). If $k = 478 \,N/m$ for both $\ce{H^{35}Cl}$ and $\ce{D^{35}Cl}$, what are the vibrational frequencies of these two diatomic molecules? Answer $\ce{H^{35}Cl}$: 2886 cm-1 $\ce{D^{35}Cl}$: 2081 cm-1 • Eugene Lee
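The isotope comparisons in this section can be reproduced with a short script. The isotope masses come from Table 5.2.1 and $k = 478\,N/m$ from the examples above; the printed wavenumbers will differ slightly from the quoted values because of rounding in the masses and in $k$:

```python
import math

AMU_TO_KG = 1.66053906660e-27   # atomic mass unit in kg
C_CM = 2.99792458e10            # speed of light in cm/s

def reduced_mass_amu(m1: float, m2: float) -> float:
    """Reduced mass mu = m1*m2/(m1+m2) (Equation 5.2.1), in amu."""
    return m1 * m2 / (m1 + m2)

def wavenumber(k: float, mu_amu: float) -> float:
    """Harmonic vibrational frequency in cm^-1 for force constant k (N/m)."""
    mu = mu_amu * AMU_TO_KG
    nu_hz = math.sqrt(k / mu) / (2 * math.pi)
    return nu_hz / C_CM

k = 478.0  # N/m, as in the examples above
masses = {"1H": 1.007825, "2H": 2.0140, "35Cl": 35.968852, "37Cl": 36.965903}

for a, b in [("1H", "35Cl"), ("1H", "37Cl"), ("2H", "35Cl")]:
    mu = reduced_mass_amu(masses[a], masses[b])
    print(f"{a}{b}: mu = {mu:.4f} amu, nu = {wavenumber(k, mu):.0f} cm^-1")
```

The output makes the point of the section concrete: swapping ³⁵Cl for ³⁷Cl barely changes the frequency, while substituting D for H shifts it by roughly a factor of $1/\sqrt{2}$.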