The focus of this chapter is on the interaction of ultraviolet, visible, and infrared radiation with matter. Because these techniques use optical materials to disperse and focus the radiation, they often are identified as optical spectroscopies. For convenience we will use the simpler term spectroscopy in place of optical spectroscopy; however, you should understand that we are considering only a limited part of a much broader area of analytical techniques.

• 14.1: Vocabulary
• 14.2: Microwave Spectroscopy. Microwave rotational spectroscopy uses microwave radiation to measure the energies of rotational transitions for molecules in the gas phase. It accomplishes this through the interaction of the electric dipole moment of the molecules with the electromagnetic field of the exciting microwave photon.
• 14.3: Infrared Spectroscopy. Infrared spectroscopy is the analysis of infrared light interacting with a molecule, measured as absorption, emission, or reflection. The main use of this technique is in organic and inorganic chemistry, where it is used to determine functional groups in molecules. IR spectroscopy measures the vibrations of atoms, and from these vibrations it is possible to determine the functional groups. Generally, stronger bonds and lighter atoms vibrate at higher stretching frequencies (wavenumbers).
• 14.4: Electronic Spectroscopy. This page explains what happens when organic compounds absorb UV or visible light, and why the wavelength of light absorbed varies from compound to compound.
• 14.5: Nuclear Magnetic Resonance. Nuclear Magnetic Resonance (NMR) is a nucleus-specific (Nuclear) spectroscopy that has far-reaching applications throughout the physical sciences and industry. NMR uses a large magnet (Magnetic) to probe the intrinsic spin properties of atomic nuclei. Like all spectroscopies, NMR uses a component of electromagnetic radiation (radio frequency waves) to promote transitions between nuclear energy levels (Resonance).
• 14.6: Electron Spin Resonance. Electron Paramagnetic Resonance (EPR) is a remarkably useful form of spectroscopy used to study molecules or atoms with an unpaired electron. It is less widely used than NMR because stable molecules often do not have unpaired electrons. However, EPR can be used analytically to observe labeled species in situ, either biologically or in a chemical reaction.
• 14.7: Fluorescence and Phosphorescence. Fluorescence and phosphorescence are photoluminescence processes in which a material emits photons after excitation.
• 14.8: Lasers. LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. A laser is a type of light source with the unique characteristics of directionality, brightness, and monochromaticity. The goal of this module is to explain how a laser operates (stimulated or spontaneous emission), describe important components, and give some examples of types of lasers and their applications.
• 14.9: Optical Rotatory Dispersion and Circular Dichroism. Circular dichroism, an absorption spectroscopy, uses circularly polarized light to investigate structural aspects of optically active chiral media. It is mostly used to study biological molecules, their structure, and interactions with metals and other molecules.
• 14.E: Spectroscopy (Exercises)

14: Spectroscopy

14.01: Vocabulary

Electromagnetic radiation—light—is a form of energy whose behavior is described by the properties of both waves and particles.
Some properties of electromagnetic radiation, such as its refraction when it passes from one medium to another, are explained best by describing light as a wave. Other properties, such as absorption and emission, are better described by treating light as a particle. The exact nature of electromagnetic radiation remains unclear, as it has since the development of quantum mechanics in the first quarter of the 20th century.1 Nevertheless, the dual models of wave and particle behavior provide a useful description for electromagnetic radiation.

Electromagnetic radiation consists of oscillating electric and magnetic fields that propagate through space along a linear path and with a constant velocity. In a vacuum electromagnetic radiation travels at the speed of light, c, which is 2.998 × 10^8 m/s. When electromagnetic radiation moves through a medium other than a vacuum its velocity, v, is less than the speed of light in a vacuum. The difference between v and c is sufficiently small (<0.1%) that the speed of light to three significant figures, 3.00 × 10^8 m/s, is accurate enough for most purposes. The oscillations in the electric and magnetic fields are perpendicular to each other, and to the direction of the wave's propagation. Figure $1$ shows an example of plane-polarized electromagnetic radiation, consisting of a single oscillating electric field and a single oscillating magnetic field.

An electromagnetic wave is characterized by several fundamental properties, including its velocity, amplitude, frequency, phase angle, polarization, and direction of propagation.2 For example, the amplitude of the oscillating electric field at any point along the propagating wave is

$A_\ce{t} = A_\ce{e}\sin(2πνt + \phi)$

where At is the magnitude of the electric field at time t, Ae is the electric field's maximum amplitude, ν is the wave's frequency—the number of oscillations in the electric field per unit time—and $\phi$ is a phase angle, which accounts for the fact that At need not have a value of zero at t = 0. The identical equation for the magnetic field is

$A_\ce{t} = A_\ce{m}\sin(2πνt + \phi)$

where Am is the magnetic field's maximum amplitude.

Units

Other properties also are useful for characterizing the wave behavior of electromagnetic radiation. The wavelength, λ, is defined as the distance between successive maxima (Figure $1$). For ultraviolet and visible electromagnetic radiation the wavelength is usually expressed in nanometers (1 nm = 10^-9 m), and for infrared radiation it is given in microns (1 μm = 10^-6 m). The relationship between wavelength and frequency is

$λ = \dfrac{c}{ν}$

Another useful unit is the wavenumber, $\tilde{ν}$, which is the reciprocal of wavelength

$\tilde{ν} = \dfrac{1}{λ}$

Wavenumbers are frequently used to characterize infrared radiation, with the units given in cm^-1.
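These conversions are easy to check numerically. The sketch below is a minimal Python illustration (the function and variable names are our own, not from any standard library); Example 1 that follows applies the same arithmetic by hand to the sodium D line.

```python
# Convert a wavelength to frequency (s^-1) and wavenumber (cm^-1),
# using nu = c / lambda and nu_tilde = 1 / lambda.

C = 2.998e8  # speed of light in a vacuum, m/s

def wavelength_to_frequency(wavelength_m: float) -> float:
    """Frequency in s^-1 for a wavelength given in meters."""
    return C / wavelength_m

def wavelength_to_wavenumber(wavelength_m: float) -> float:
    """Wavenumber in cm^-1 for a wavelength given in meters."""
    return 1.0 / (wavelength_m * 100.0)  # 1 m = 100 cm

# Sodium D line, 589 nm
lam = 589e-9  # m
print(f"frequency:  {wavelength_to_frequency(lam):.3e} s^-1")   # ~5.09e14 s^-1
print(f"wavenumber: {wavelength_to_wavenumber(lam):.3e} cm^-1") # ~1.70e4 cm^-1
```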
Example $1$

In 1817, Josef Fraunhofer studied the spectrum of solar radiation, observing a continuous spectrum with numerous dark lines. Fraunhofer labeled the most prominent of the dark lines with letters. In 1859, Gustav Kirchhoff showed that the D line in the sun's spectrum was due to the absorption of solar radiation by sodium atoms. The wavelength of the sodium D line is 589 nm. What are the frequency and the wavenumber for this line?

Solution

The frequency and wavenumber of the sodium D line are

$ν = \dfrac{c}{λ} = \mathrm{\dfrac{3.00×10^8\: m/s}{589×10^{−9}\: m} = 5.09×10^{14}\: s^{−1}}$

$\tilde{ν} = \dfrac{1}{λ} = \mathrm{\dfrac{1}{589×10^{−9}\: m} × \dfrac{1\: m}{100\: cm} = 1.70×10^4\: cm^{−1}}$

Exercise $1$

Another historically important series of spectral lines is the Balmer series of emission lines from hydrogen. One of the lines has a wavelength of 656.3 nm. What are the frequency and the wavenumber for this line?

Regions of the Spectrum

The frequency and wavelength of electromagnetic radiation vary over many orders of magnitude. For convenience, we divide electromagnetic radiation into different regions—the electromagnetic spectrum—based on the type of atomic or molecular transition that gives rise to the absorption or emission of photons (Figure $2$). The boundaries between the regions of the electromagnetic spectrum are not rigid, and overlap between spectral regions is possible.

Above, we defined several characteristic properties of electromagnetic radiation, including its energy, velocity, amplitude, frequency, phase angle, polarization, and direction of propagation. A spectroscopic measurement is possible only if the photon's interaction with the sample leads to a change in one or more of these characteristic properties.

We can divide spectroscopy into two broad classes of techniques. In one class of techniques there is a transfer of energy between the photon and the sample. Table $1$ provides a list of several representative examples.

Table $1$: Examples of Spectroscopic Techniques Involving an Exchange of Energy Between a Photon and the Sample

| Type of Energy Transfer | Region of Electromagnetic Spectrum | Spectroscopic Technique |
|---|---|---|
| absorption | γ-ray | Mössbauer spectroscopy |
| | X-ray | X-ray absorption spectroscopy |
| | UV/Vis | UV/Vis spectroscopy; atomic absorption spectroscopy |
| | IR | infrared spectroscopy; Raman spectroscopy |
| | Microwave | microwave spectroscopy |
| | Radio wave | electron spin resonance spectroscopy; nuclear magnetic resonance spectroscopy |
| emission (thermal excitation) | UV/Vis | atomic emission spectroscopy |
| photoluminescence | X-ray | X-ray fluorescence |
| | UV/Vis | fluorescence spectroscopy; phosphorescence spectroscopy; atomic fluorescence spectroscopy |
| chemiluminescence | UV/Vis | chemiluminescence spectroscopy |

Line Widths

A spectral line extends over a range of frequencies, not a single frequency (i.e., it has a nonzero linewidth). There are multiple mechanisms responsible for this broadening (and for accompanying line shifts); only two are discussed below.

• Lifetime Broadening. This mechanism originates directly from the Heisenberg uncertainty principle, which relates the lifetime $\Delta t$ of an excited state (due to spontaneous radiative decay) to the uncertainty $\Delta E$ of its energy. Since the system is changing in time, it is impossible to determine the energies of its wavefunctions exactly: $\Delta E \Delta t \ge \dfrac{\hbar}{2}$ Hence, systems whose excited states have short lifetimes have a large energy uncertainty and a broad emission line. As this relation suggests, lifetime broadening can be changed experimentally if the decay rates can be artificially suppressed or enhanced.
• Doppler Broadening. This mechanism is intrinsic to gases: the atoms in a gas that are emitting radiation have a distribution of velocities, and each emitted photon is "red"- or "blue"-shifted by the Doppler effect depending on the velocity of the emitting atom relative to the observer.

The Doppler effect is the change in frequency of a wave (or other periodic event) for an observer moving relative to its source. It is commonly heard when a vehicle sounding a siren or horn approaches, passes, and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession.

The effect is usually illustrated with sound, but it applies to light as well. (Left) A stationary sound source produces sound waves at a constant frequency $ν$, and the wave-fronts propagate symmetrically away from the source at a constant speed c (the speed of sound in the medium, roughly 330 m/s). The distance between wave-fronts is the wavelength. All observers hear the same frequency, equal to the actual frequency of the source, $ν_o = ν$. (Right) The same sound source radiates sound waves at a constant frequency in the same medium, but the source is now moving, so the center of each new wavefront is slightly displaced in the direction of motion. As a result, the wave-fronts bunch up in front of the source and spread further apart behind it. An observer in front of the source hears a higher frequency. (Images used with permission from Wikipedia; credit Lookang.)

If the speeds are small compared to the speed of the wave, the relationship between the observed frequency $ν_o$ and the emitted frequency $ν$ is approximately

$ν_o =\left(1+\dfrac{\Delta v}{c}\right)ν$

where $\Delta v$ is the velocity of the source along the line of sight (positive for an approaching source).

The Doppler effect also applies in spectroscopy. For example, the higher the temperature of a gas, the wider the distribution of velocities in the gas (via the Maxwell-Boltzmann distribution). Since the spectral line is a combination of all of the emitted radiation, the higher the temperature of the gas, the broader the spectral line emitted from that gas. If the average velocity of a gas is non-zero, then a Doppler shift will be observed that is correlated with the amplitude of this average velocity. This is seen, for example, in the redshift of spectral lines in the optical spectrum of a supercluster of distant galaxies, as compared to that of the Sun: the Sun is moving only weakly (if at all) with respect to Earth, while the galaxies are moving away. (Image from Wikipedia; credit Georg Wiora.)

Doppler broadening is one of the explanations for the broadening of spectral lines, and as such gives an indication of the temperature of the observed material. Other causes of velocity distributions may exist, however, for example due to turbulent motion. Doppler broadening can also be used to determine the velocity distribution of a gas given its absorption spectrum. In particular, this has been used to determine the velocity distribution of interstellar gas clouds.
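As a rough numerical illustration of the approximate relation above, the Python sketch below shifts the sodium D line (frequency from Example 1) for a source moving at a few hundred m/s. The velocities are illustrative values of our own choosing; note how tiny the shift is relative to the line frequency.

```python
# Doppler shift of an emission line, using the low-speed approximation
# nu_obs = (1 + delta_v / c) * nu, with delta_v > 0 for an approaching source.

C = 2.998e8  # speed of light, m/s

def doppler_shifted_frequency(nu: float, delta_v: float) -> float:
    """Observed frequency for emitted frequency nu (s^-1) and
    line-of-sight source velocity delta_v (m/s, + = approaching)."""
    return (1.0 + delta_v / C) * nu

nu_emitted = 5.09e14          # sodium D line, s^-1 (from Example 1)
for v in (+500.0, -500.0):    # approaching / receding at 500 m/s
    nu_obs = doppler_shifted_frequency(nu_emitted, v)
    print(f"v = {v:+6.0f} m/s -> shift = {nu_obs - nu_emitted:+.3e} s^-1")
# Shifts of ~±8.5e8 s^-1, i.e. about 2 parts in 10^6 of the line frequency.
```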
Example $2$

Find the uncertainty of simultaneously measuring the frequency and wavelength of an emission from an excited molecule, if the wavelength is 430 nm and the excited state lifetime is 0.50 nanoseconds.

Solution

1. Use Heisenberg's uncertainty principle and the relationship between energy and frequency to find the uncertainty of the frequency:

$\Delta E \Delta t \geq \dfrac{h}{4 \pi} \nonumber$

$\Delta E = h \Delta ν \nonumber$

$h \Delta ν \Delta t \geq \dfrac{h}{4 \pi} \nonumber$

$\Delta ν \geq \dfrac{1}{4 \pi \Delta t} \nonumber$

The maximum value for Δt is the lifetime of the excited state:

$\Delta ν \geq \dfrac{1}{4 \pi \times 0.50\ \text{ns} \times \dfrac{1\ \text{s}}{10^9\ \text{ns}}} \nonumber$

$\Delta ν \geq 1.6 \times 10^8\ s^{-1} \nonumber$

2. Use the uncertainty of the frequency and the relationship between frequency and wavelength to find the uncertainty of the wavelength:

$\lambda = \dfrac{c}{ν} \nonumber$

$| \Delta \lambda | = \dfrac{c}{ν^2} | \Delta ν | \nonumber$

$| \Delta \lambda | = \dfrac{\lambda ^2 | \Delta ν | }{c} \nonumber$

$| \Delta \lambda | = \dfrac{(430\ nm)^2 \times 1.6 \times 10^8\ s^{-1}}{2.998 \times 10^8\ m/s} \times \dfrac{1\ m}{10^9\ nm} \nonumber$

$| \Delta \lambda | = 9.9 \times 10^{-5}\ nm \nonumber$

Absorption and Emission

In absorption spectroscopy a photon is absorbed by an atom or molecule, which undergoes a transition from a lower-energy state to a higher-energy, or excited, state (Figure $3$). The type of transition depends on the photon's energy. The electromagnetic spectrum in Figure $2$, for example, shows that absorbing a photon of visible light promotes one of the atom's or molecule's valence electrons to a higher-energy level. When a molecule absorbs infrared radiation, on the other hand, one of its chemical bonds experiences a change in vibrational energy.

When a sample absorbs electromagnetic radiation, the number of photons passing through it decreases. The measurement of this decrease in photons, which we call absorbance, is a useful analytical signal. Note that each of the energy levels in Figure $3$ has a well-defined value because they are quantized. Absorption occurs only when the photon's energy, hν, matches the difference in energy, ∆E, between two energy levels. A plot of absorbance as a function of the photon's energy is called an absorbance spectrum. Figure $4$, for example, shows the absorbance spectrum of cranberry juice.

When an atom or molecule in an excited state returns to a lower energy state, the excess energy often is released as a photon, a process we call emission. There are several ways in which an atom or molecule may end up in an excited state, including thermal energy, absorption of a photon, or a chemical reaction. Emission following the absorption of a photon is also called photoluminescence, and emission following a chemical reaction is called chemiluminescence. A typical emission spectrum is shown in Figure $6$.

Selection Rules

The transition probability is defined as the probability that a particular spectroscopic transition will take place. When an atom or molecule absorbs a photon, the probability of its transition from one energy level to another depends on two things: the nature of the initial and final state wavefunctions, and how strongly photons interact with those wavefunctions. Transition strengths are used to describe transition probabilities, and selection rules are utilized to determine whether a transition is allowed or not. Electric dipole transitions are by far the most important for the topics covered in this module.

In an atom or molecule, an electromagnetic wave (for example, visible light) can induce an oscillating electric or magnetic moment. If the frequency of this induced moment matches the frequency corresponding to the energy difference between one wavefunction $\psi_1$ and another wavefunction $\psi_2$, the interaction between the atom or molecule and the electromagnetic field is resonant. Typically, the amplitude of this (electric or magnetic) moment is called the transition moment.
In quantum mechanics, the probability of a transition from one wavefunction $\psi_1$ to another wavefunction $\psi_2$ is given by $|\vec{M}_{21}|^2$, where $\vec{M}_{21}$ is called the transition dipole moment, or transition moment, from $\psi_1$ to $\psi_2$. $\vec{M}_{21}$ can be written as

$\vec{M}_{21}=\int \psi_2\vec{\mu}\psi_1d\tau \label{Select}$

where $\psi_1$ and $\psi_2$ are two different wavefunctions of one molecule, and $\vec{\mu}$ is the electric dipole moment operator. For a system of charges $q_n$, the dipole moment operator can be written as

$\displaystyle \vec{\mu}=\sum_{n}q_n\vec{r}_n$

where $\vec{r}_{n}$ is the position vector operator for the nth charge. The nature of $\psi_1$ and $\psi_2$ (e.g., the quantum numbers associated with each wavefunction) determines the magnitude of $\vec{M}_{21}$. Large values of $\vec{M}_{21}$ signify transitions with strong probabilities; small $\vec{M}_{21}$ values represent weak probabilities. A transition with zero probability is a forbidden transition.

For electronic wavefunctions (of either atoms or molecules), the two primary selection rules governing transitions between electronic energy levels are:

1. $ΔS = 0$ (the Spin Rule)
2. $Δl = \pm 1$ (the Orbital Rule, or Laporte rule)

The spin multiplicity can be calculated from the quantum number $S$ of the total spin, or from the number of unpaired electrons (as when determining the paramagnetic properties of molecules). The spin multiplicity is $(2S+1)$, where

$S = \underset{\text{spin quantum #}}{\sum s}$

The Spin Rule says that allowed transitions must involve the promotion of electrons without a change in their spin. The Orbital Rule says that transitions within a given set of p or d orbitals (i.e., those which only involve a redistribution of electrons within a given subshell) are forbidden. The orbital rule can be used to construct Grotrian diagrams, which show the allowed electronic transitions between the energy levels of atoms (Figure $6$) by taking into account the specific selection rules related to the system (e.g., the Orbital or Spin Rules).

Transitions not permitted by the selection rules are said to be forbidden, which means that in theory they must not occur. In practice they may still occur, but with very low probability (see Table $3$ below).
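The way symmetry can force the transition-moment integral to vanish is easy to see in a toy model. The sketch below uses sympy to evaluate $\int \psi_2\, x\, \psi_1\, dx$ for one-dimensional particle-in-a-box states; the choice of the particle-in-a-box model is our own illustration (the spin and Laporte rules above concern atomic and molecular electronic states), but the mechanism is the same: when the integrand is odd about the box center, the transition moment is exactly zero.

```python
# Illustrative evaluation of the transition-moment integral M_21 = <psi_2|mu|psi_1>
# for the 1-D particle in a box, with mu ~ q*x (charge factored out).
# States of the same parity about the box center give M_21 = 0 (forbidden).

import sympy as sp

x, L = sp.symbols("x L", positive=True)

def psi(n):
    """Normalized particle-in-a-box wavefunction (n = 1, 2, ...)."""
    return sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)

def transition_moment(n1, n2):
    """<psi_n2 | x | psi_n1> over the box, as an exact symbolic result."""
    return sp.integrate(psi(n2) * x * psi(n1), (x, 0, L)).simplify()

for n1, n2 in [(1, 2), (1, 3), (2, 3)]:
    print(f"n = {n1} -> {n2}:  M = {transition_moment(n1, n2)}")
# 1 -> 2 and 2 -> 3 give nonzero moments (allowed); 1 -> 3 gives zero (forbidden).
```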
The Beer-Lambert Law

The Beer-Lambert law relates the attenuation of light to the properties of the material through which the light is traveling. This page takes a brief look at the Beer-Lambert law and explains the use of the terms absorbance and molar absorptivity in UV-visible absorption spectrometry.

For each wavelength of light passing through the spectrometer, the intensity of the light passing through the reference cell is measured. This is usually referred to as $I_o$ - that's $I$ for intensity. The intensity of the light passing through the sample cell is also measured for that wavelength - given the symbol $I$. If $I$ is less than $I_o$, then the sample has absorbed some of the light (neglecting reflection of light off the cuvette surface). A simple bit of math is then done in the computer to convert this into something called the absorbance of the sample - given the symbol $A$.

Note

The absorbance of a transition depends on two external assumptions:

1. The absorbance is directly proportional to the concentration ($c$) of the sample solution used in the experiment.
2. The absorbance is directly proportional to the length of the light path ($l$), which is equal to the width of the cuvette.

Assumption one relates the absorbance to concentration and can be expressed as

$A \propto c \label{1}$

The absorbance ($A$) is defined via the incident intensity $I_o$ and transmitted intensity $I$ by

$A=\log_{10} \left( \dfrac{I_o}{I} \right) \label{2}$

Assumption two can be expressed as

$A \propto l \label{3}$

Combining Equations $\ref{1}$ and $\ref{3}$:

$A \propto cl \label{4}$

This proportionality can be converted into an equality by including a proportionality constant:

$A = \epsilon c l \label{5}$

This formula is the common form of the Beer-Lambert law, although it can also be written in terms of intensities:

$A=\log_{10} \left( \dfrac{I_o}{I} \right) = \epsilon l c \label{6}$

The constant $\epsilon$ is called the molar absorptivity or molar extinction coefficient, and is a measure of the probability of the electronic transition. On most of the diagrams you will come across, the absorbance ranges from 0 to 1, but it can go higher than that. An absorbance of 0 at some wavelength means that no light of that particular wavelength has been absorbed; the intensities of the sample and reference beam are both the same, so the ratio $I_o/I$ is 1, and log10 of 1 is zero.

Example $3$

In a sample with an absorbance of 1 at a specific wavelength, what is the relative amount of light that was absorbed by the sample?

Solution

This question does not need the Beer-Lambert law (Equation $\ref{5}$) to solve, only the definition of absorbance (Equation $\ref{2}$):

$A=\log_{10} \left( \dfrac{I_o}{I} \right) \nonumber$

The relative loss of intensity is

$\dfrac{I_o-I}{I_o} = 1- \dfrac{I}{I_o} \nonumber$

Equation $\ref{2}$ can be rearranged using the properties of logarithms to solve for the relative loss of intensity:

$10^A= \dfrac{I_o}{I} \nonumber$

$10^{-A}= \dfrac{I}{I_o} \nonumber$

$1-10^{-A}= 1- \dfrac{I}{I_o} \nonumber$

Substituting in $A=1$:

$1- \dfrac{I}{I_o}= 1-10^{-1} = 1- \dfrac{1}{10} = 0.9 \nonumber$

Hence 90% of the light at that wavelength has been absorbed, and the transmitted intensity is 10% of the incident intensity. To confirm, substitute these values into Equation $\ref{2}$ to get the absorbance back:

$\dfrac{I_o}{I} = \dfrac{100}{10} =10 \nonumber$

and

$\log_{10} 10 = 1 \nonumber$

You will find that various different symbols are given for some of the terms in the equation - particularly for the concentration and the solution length. The Greek letter epsilon in these equations is called the molar absorptivity - or sometimes the molar absorption coefficient. The larger the molar absorptivity, the more probable the electronic transition. In UV spectroscopy, the concentration of the sample solution is measured in mol L^-1 and the length of the light path in cm. Thus, given that absorbance is unitless, the units of molar absorptivity are L mol^-1 cm^-1. However, since the units of molar absorptivity are always the above, it is customarily reported without units.

Example $4$

Guanosine has a maximum absorbance at 275 nm, with $\epsilon_{275} = 8400 M^{-1} cm^{-1}$, and the path length is 1 cm. Using a spectrophotometer, you find that $A_{275}= 0.70$. What is the concentration of guanosine?

Solution

To solve this problem, you must use Beer's law:

$A = \epsilon lc \nonumber$

0.70 = (8400 M^-1 cm^-1)(1 cm)($c$)

Dividing both sides by (8400 M^-1 cm^-1)(1 cm) gives

$c$ = 8.3 × 10^-5 mol/L
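The Beer-Lambert arithmetic above is simple enough to capture in a few lines of Python. The sketch below (function names are our own) implements Equations $\ref{2}$ and $\ref{5}$ and re-runs Examples 3 and 4.

```python
import math

def absorbance_from_transmittance(T: float) -> float:
    """A = log10(Io/I) = -log10(T), where T = I/Io."""
    return -math.log10(T)

def fraction_absorbed(A: float) -> float:
    """Relative intensity loss 1 - I/Io for a given absorbance."""
    return 1.0 - 10.0 ** (-A)

def concentration(A: float, epsilon: float, l: float) -> float:
    """Beer-Lambert law rearranged: c = A / (epsilon * l)."""
    return A / (epsilon * l)

# Example 3: A = 1 means 90% of the light is absorbed.
print(fraction_absorbed(1.0))              # 0.9

# Example 4: guanosine, A = 0.70, epsilon = 8400 M^-1 cm^-1, l = 1 cm.
print(concentration(0.70, 8400.0, 1.0))    # ~8.3e-05 mol/L
```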
Example $5$

A substance is dissolved in a solution at 4 g/liter, the length of the cuvette is 2 cm, and only 50% of a certain light beam is transmitted. What is the extinction coefficient?

Solution

Using the Beer-Lambert law, we can compute the absorption coefficient:

$A = - \log_{10} \left(\dfrac{I_t}{I_o} \right) = - \log_{10}(0.5) = 0.301 = \epsilon \times (4\ g/L) \times (2\ cm) = 8\epsilon \nonumber$

Then we obtain $\epsilon$ = 0.0376 L g^-1 cm^-1.

Example $6$

In Example 5 above, what is the molar absorption coefficient if the molecular weight is 100 g/mol?

Solution

It can be obtained simply by multiplying the absorption coefficient by the molecular weight: $\epsilon$ = 0.0376 × 100 = 3.76 L·mol^-1·cm^-1.

The proportion of the light absorbed will depend on how many molecules it interacts with. Suppose you have a strongly colored organic dye. If it is in a reasonably concentrated solution, it will have a very high absorbance because there are lots of molecules to interact with the light. However, in an incredibly dilute solution, it may be very difficult to see that it is colored at all, and the absorbance will be very low. Suppose then that you wanted to compare this dye with a different compound. Unless you took care to make allowance for the concentration, you couldn't make any sensible comparisons about which one absorbed the most light.

Example $7$

In Example 5 above, how much of the light beam is transmitted when the concentration is 8 g/liter?

Solution

Since we know $\epsilon$, we can calculate the transmission using the Beer-Lambert law:

$A = -\log_{10} \left(\dfrac{I_t}{I_o}\right) = 0.0376 \times 8 \times 2 = 0.6016 \nonumber$

Therefore, $I_t/I_o = 10^{-0.6016} = 0.25$, i.e., 25% of the light is transmitted.

Example $8$

The absorption coefficient of a glycogen-iodine complex is 0.20 at light of 450 nm. What is the concentration when the transmission is 40% in a cuvette of 2 cm?

Solution

This can also be solved using the Beer-Lambert law:

$- \log_{10}\left(\dfrac{I_t}{I_o}\right) = - \log_{10}(0.4) = 0.20 \times c \times 2 \nonumber$

Then $c$ = 0.995.

The Beer-Lambert law (Equation $\ref{5}$) can be rearranged to obtain an expression for $\epsilon$ (the molar absorptivity):

$\epsilon = \dfrac{A}{lc} \label{8}$

Remember that the absorbance of a solution will vary as the concentration or the size of the container varies. Molar absorptivity compensates for this by dividing by both the concentration and the length of the solution that the light passes through. Essentially, it works out a value for what the absorbance would be under a standard set of conditions - the light traveling 1 cm through a solution of 1 mol dm^-3. That means that you can then make comparisons between one compound and another without having to worry about the concentration or solution length.

Values for molar absorptivity can vary hugely. For example, ethanal has two absorption peaks in its UV-visible spectrum - both in the ultra-violet. One of these corresponds to an electron being promoted from a lone pair on the oxygen into a $\pi$ anti-bonding orbital; the other from a $\pi$ bonding orbital into a $\pi$ anti-bonding orbital. Table $2$ gives values for the molar absorptivity of a solution of ethanal in hexane. Notice that there are no units given for the absorptivity. That is quite common; since the length is assumed to be in cm and the concentration in mol dm^-3, the units are mol^-1 dm^3 cm^-1.

Table $2$:

| electron jump | wavelength of maximum absorption (nm) | molar absorptivity |
|---|---|---|
| lone pair to $\pi$ anti-bonding orbital | 290 | 15 |
| $\pi$ bonding to $\pi$ anti-bonding orbital | 180 | 10,000 |

The ethanal obviously absorbs much more strongly at 180 nm than it does at 290 nm.
(Although, in fact, the 180 nm absorption peak is outside the range of most spectrometers.) You may come across diagrams of absorption spectra plotting absorptivity on the vertical axis rather than absorbance. However, if you look at the figures above and the scales that are going to be involved, you are not really going to be able to spot the absorption at 290 nm; it will be a tiny little peak compared to the one at 180 nm. To get around this, you may also come across diagrams in which the vertical axis is plotted as log10(molar absorptivity). If you take the logs of the two numbers in the table, 15 becomes 1.18, while 10,000 becomes 4. That makes it possible to plot both values easily, but produces strangely squashed-looking spectra!

Table $3$: Expected intensities of electronic transitions

| Transition type | Typical values of ε / m^2 mol^-1 |
|---|---|
| Spin forbidden and Laporte forbidden | 0.1 |
| Spin allowed and Laporte forbidden | 1 - 10 |
| Spin allowed and Laporte allowed (e.g., charge transfer bands) | 1,000 - 10^6 |
14.02: Microwave Spectroscopy
Microwave rotational spectroscopy uses microwave radiation to measure the energies of rotational transitions for molecules in the gas phase. It accomplishes this through the interaction of the electric dipole moment of the molecules with the electromagnetic field of the exciting microwave photon.

Introduction

To probe the pure rotational transitions for molecules, scientists use microwave rotational spectroscopy. This spectroscopy utilizes photons in the microwave range to cause transitions between the quantum rotational energy levels of a gas molecule. The reason why the sample must be in the gas phase is that intermolecular interactions hinder rotations in the liquid and solid phases of the molecule. For microwave spectroscopy, molecules can be broken down into 5 categories based on their shape and the inertia around their 3 orthogonal rotational axes. These 5 categories include diatomic molecules, linear molecules, spherical tops, symmetric tops, and asymmetric tops.

Classical Mechanics

For the rigid rotor, the Hamiltonian reduces to the kinetic energy, $H = T$, since

$H = T + V$

where $T$ is kinetic energy and $V$ is potential energy, and the potential energy $V$ is 0 because there is no resistance to the rotation (similar to the particle-in-a-box model). Since $H = T$, we can also say that

${T = }\dfrac{1}{2}\sum{m_{i}v_{i}^2}$

However, we have to express $v_i$ in terms of rotation. The angular velocity $\vec{\omega}$ relates the linear velocity of the ith particle to its position:

$\vec{v}_{i} = \vec{\omega}\times\vec{r}_{i}$

Thus we can rewrite the T equation as

$T = \dfrac{1}{2}\sum{m_{i}\,\vec{v}_{i}\cdot\left(\vec{\omega}\times\vec{r}_{i}\right)}$

Using the cyclic property of the scalar triple product and factoring out $\omega$, we can rewrite the T equation as

$T = \dfrac{\omega}{2}\sum{m_{i}\left(\vec{r}_{i}\times\vec{v}_{i}\right)} = \dfrac{\omega}{2}\sum{l_{i}} = \omega\dfrac{L}{2}$

where $l_i$ is the angular momentum of the ith particle, and L is the angular momentum of the entire system. Also, we know from physics that

$L = I\omega$

where I is the moment of inertia of the rigid body relative to the axis of rotation. We can rewrite the T equation as

$T = \omega\dfrac{{I}\omega}{2} = \dfrac{1}{2}{I}\omega^2$

Quantum Mechanics

The Hamiltonian for the quantum rigid rotor is

$\hat{H} = \dfrac{\hat{J}^{2}}{2I}$

where $\hat{J}$ is the rotational angular momentum operator, and the Schrödinger equation for the rigid rotor is

$\dfrac{\hat{J}^{2}}{2I}\psi = E\psi$

Thus, we get

$E_J = \dfrac{J(J+1)h^2}{8\pi^2I}$

where $J$ is the rotational quantum number. If we let

$B = \dfrac {h}{8 \pi^2I}$

where $B$ is the rotational constant (here in frequency units), then we can substitute it into the $E_J$ equation and get

$E_{J} = J(J+1)Bh$

Considering the transition energy between two adjacent energy levels, the difference grows in steps of 2Bh: from $J = 0$ to $J = 1$, $\Delta{E_{0 \rightarrow 1}}$ is 2Bh, and from J = 1 to J = 2, $\Delta{E}_{1 \rightarrow 2}$ is 4Bh.
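The 2Bh step pattern is easy to verify numerically. The sketch below computes $E_J = J(J+1)Bh$ and the transition energies for successive levels; the value of B (about 57.6 GHz, roughly the rotational constant of CO) is an illustrative choice of ours, not a number from the text.

```python
# Rigid-rotor rotational levels E_J = J(J+1) * B * h and the resulting
# transition energies, which step up as 2Bh, 4Bh, 6Bh, ...
# B ~ 57.6 GHz is an illustrative value (roughly CO), not from the text.

H = 6.626e-34   # Planck's constant, J s
B = 57.6e9      # rotational constant, Hz

def E(J: int) -> float:
    """Rotational energy of level J in joules."""
    return J * (J + 1) * B * H

for J in range(4):
    dE = E(J + 1) - E(J)
    print(f"J = {J} -> {J+1}: dE = {dE:.3e} J = {dE / (B * H):.0f} * Bh")
# Prints 2, 4, 6, 8 * Bh, i.e. absorption lines equally spaced by 2B.
```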
Theory

When a gas molecule is irradiated with microwave radiation, a photon can be absorbed through the interaction of the photon's electric field with the electrons in the molecule. For the microwave region this energy absorption is in the range needed to cause transitions between rotational states of the molecule. However, only molecules with a permanent dipole that changes upon rotation can be investigated using microwave spectroscopy. This is because there must be a charge difference across the molecule for the oscillating electric field of the photon to impart a torque upon the molecule around an axis that is perpendicular to this dipole and that passes through the molecule's center of mass.

This interaction can be expressed by the transition dipole moment for the transition between two rotational states:

$\text{Probability of Transition}=\int \psi_{rot}^{*}(F)\hat\mu \psi_{rot}(I)d\tau$

where $\psi_{rot}^{*}(F)$ is the complex conjugate of the wavefunction for the final rotational state, $\psi_{rot}(I)$ is the wavefunction of the initial rotational state, and $\hat\mu$ is the dipole moment operator with Cartesian components μx, μy, μz. For this integral to be nonzero the integrand must be an even function, because any odd function integrated from negative infinity to positive infinity, or over any other symmetric limits, is always zero.

In addition to the constraints imposed by the transition moment integral, transitions between rotational states are also limited by the nature of the photon itself. A photon contains one unit of angular momentum, so when it interacts with a molecule it can only impart one unit of angular momentum to the molecule. This leads to the selection rule that a transition can only occur between rotational energy levels that are one quantum rotation level (J) apart:

$\Delta\textrm{J}=\pm 1$

The transition moment integral and the selection rule for rotational transitions tell us whether a transition from one rotational state to another is allowed. However, they do not take into account whether the state being transitioned from is actually populated, meaning that the molecule is in that energy state. This leads to the concept of the Boltzmann distribution of states: a statistical distribution of energy states for an ensemble of molecules based on the temperature of the sample.

$\dfrac{n_J}{n_0} = \dfrac{e^{-E_{rot}(J)/RT}}{\sum_{J} e^{-E_{rot}(J)/RT}}$

where
• $E_{rot}(J)$ is the molar energy of the J rotational energy state of the molecule,
• R is the gas constant,
• T is the temperature of the sample,
• $n_J$ is the number of molecules in the J rotational level, and
• $n_0$ is the total number of molecules in the sample.

This distribution of energy states is the main contributing factor for the observed absorption intensity distribution seen in the microwave spectrum: the absorption peak corresponding to the transition from the most populated energy state is the largest, with the peaks on either side steadily decreasing.
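A short numerical sketch makes this population argument concrete. Note that we fold in the (2J+1) degeneracy of each rotational level, which the expression above omits but which is what produces the characteristic maximum at intermediate J; the rotational constant and temperature are illustrative choices of ours.

```python
# Relative Boltzmann populations of rigid-rotor levels. The (2J+1)
# degeneracy factor is our addition (the text's formula omits it); with
# it, the population peaks at an intermediate J, mirroring the intensity
# pattern seen in microwave absorption spectra.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.626e-34       # Planck constant, J s
B = 57.6e9          # illustrative rotational constant (~CO), Hz
T = 298.0           # temperature, K

def weight(J: int) -> float:
    """Unnormalized population of level J: degeneracy * Boltzmann factor."""
    E = J * (J + 1) * B * H          # rigid-rotor energy, J
    return (2 * J + 1) * math.exp(-E / (K_B * T))

weights = [weight(J) for J in range(25)]
total = sum(weights)
for J in (0, 5, 7, 10, 15, 20):
    print(f"J = {J:2d}: fraction = {weights[J] / total:.3f}")
# The fractions rise, peak near J ~ 7 for these numbers, then fall off.
```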
Degrees of Freedom

A molecule can have three types of degrees of freedom and a total of 3N degrees of freedom, where N equals the number of atoms in the molecule. These degrees of freedom can be broken down into 3 categories:

• Translational: These are the simplest of the degrees of freedom and entail the movement of the entire molecule's center of mass. This movement can be completely described by three orthogonal vectors and thus contains 3 degrees of freedom.
• Rotational: These are rotations around the center of mass of the molecule, and like the translational movement they can be completely described by three orthogonal vectors, so this category again contains only 3 degrees of freedom. However, in the case of a linear molecule, only two degrees of freedom are present, because rotation about the bond axis of the molecule has a negligible inertia.
• Vibrational: These are any other types of movement not assigned to rotational or translational movement, and thus there are 3N − 6 degrees of vibrational freedom for a nonlinear molecule and 3N − 5 for a linear molecule. These vibrations include bending, stretching, wagging and many other aptly named internal movements of a molecule. These various vibrations arise due to the numerous combinations of different stretches, contractions, and bends that can occur between the bonds of atoms in the molecule.

Each of these degrees of freedom is able to store energy. However, in the case of rotational and vibrational degrees of freedom, energy can only be stored in discrete amounts, due to the quantized energy levels described by quantum mechanics. In the case of rotations, the energy stored depends on the rotational inertia of the gas along with the corresponding quantum number describing the energy level.

Rotational Symmetries

To analyze molecules for rotational spectroscopy, we can break molecules down into 5 categories based on their shapes and their moments of inertia around their 3 orthogonal rotational axes:

1. Diatomic Molecules
2. Linear Molecules
3. Spherical Tops
4. Symmetrical Tops
5. Asymmetrical Tops

Diatomic Molecules

The rotations of a diatomic molecule can be modeled as a rigid rotor: two masses attached to each other with a fixed distance between them. Its moment of inertia (I) is equal to the square of the fixed distance between the two masses multiplied by the reduced mass of the rigid rotor:

$\large I_e= \mu r_e^2$

$\large \mu=\dfrac{m_1 m_2} {m_1+m_2}$

Using quantum mechanics it can be shown that the energy levels of the rigid rotor depend on its moment of inertia and the rotational quantum number J:

$E(J) = B_e J(J+1)$

$B_e = \dfrac{h}{8 \pi^2 cI_e}$

However, this rigid rotor model fails to take into account that bonds do not act like a rod with a fixed distance, but like a spring: as the angular velocity of the molecule increases, so does the distance between the atoms. This leads to the nonrigid rotor model, in which a centrifugal distortion term ($D_e$) is added to the energy equation to account for this stretching during rotation:

$E(J)(cm^{-1}) = B_e J(J+1) - D_e J^2(J+1)^2$

This means that for a diatomic molecule the transition energy between two rotational states equals

$E=B_e[J'(J'+1)-J''(J''+1)]-D_e[J'^2(J'+1)^2-J''^2(J''+1)^2]\label{8}$

where J' is the quantum number of the final rotational energy state and J'' is the quantum number of the initial rotational energy state. Using the selection rule $\Delta{J}= \pm 1$, the spacing between peaks in the microwave absorption spectrum of a diatomic molecule is

$E_R =(2B_e-4D_e)+(2B_e-12D_e){J}''-12D_e J''^2-4D_e J''^3$
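The reduced-mass and moment-of-inertia formulas above translate directly into code. The sketch below computes $B_e$ for carbon monoxide; the masses and equilibrium bond length are illustrative literature-style values (not given in the text), and the result is consistent with the ~57.6 GHz constant used in the earlier sketches.

```python
# Rotational constant of a diatomic from its reduced mass and bond length:
# I = mu * r^2, B = h / (8 pi^2 c I), expressed in cm^-1.

import math

H = 6.626e-34        # Planck constant, J s
C_CM = 2.998e10      # speed of light, cm/s
AMU = 1.6605e-27     # atomic mass unit, kg

def rotational_constant(m1_amu: float, m2_amu: float, r_m: float) -> float:
    """B_e in cm^-1 for a rigid diatomic rotor."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU  # reduced mass, kg
    I = mu * r_m ** 2                                  # moment of inertia, kg m^2
    return H / (8 * math.pi ** 2 * C_CM * I)

# Carbon monoxide: ~12.000 and ~15.995 amu, r_e ~ 1.128 Angstrom (illustrative)
B_CO = rotational_constant(12.000, 15.995, 1.128e-10)
print(f"B(CO) ~ {B_CO:.2f} cm^-1")   # ~1.92 cm^-1, so lines spaced ~3.8 cm^-1
```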
Linear Molecules

Linear molecules behave in the same way as diatomic molecules when it comes to rotations, so they can be modeled as a non-rigid rotor just like diatomic molecules and have the same equation for their rotational energy levels. The only difference is that there are now more masses along the rotor, so the moment of inertia is the sum of each mass multiplied by the square of its distance from the center of mass of the rotor:

$\large I_e=\sum_{j=1}^{n} m_j r_{ej}^2$

where $m_j$ is the mass of the jth mass on the rotor and $r_{ej}$ is the equilibrium distance between the jth mass and the center of mass of the rotor.

Spherical Tops

Spherical tops are molecules in which all three orthogonal rotations have equal inertia, and they are highly symmetrical. This means that the molecule has no dipole, and for this reason spherical tops do not give a microwave rotational spectrum. Examples include CH4 and SF6.

Symmetrical Tops

Symmetrical tops are molecules with two rotational axes that have the same inertia and one unique rotational axis with a different inertia. Symmetrical tops can be divided into two categories based on the relationship between the inertia of the unique axis and the inertia of the two axes with equivalent inertia. If the unique rotational axis has a greater inertia than the degenerate axes, the molecule is called an oblate symmetrical top. If the unique rotational axis has a lower inertia than the degenerate axes, the molecule is called a prolate symmetrical top. For simplification, think of these two categories as frisbees for oblate tops and footballs for prolate tops. Figure $3$: Symmetric tops: (left) geometrical example of an oblate top and (right) a prolate top. (Images used with permission from Wikipedia.)

In the case of linear molecules there is one degenerate rotational axis, which in turn has a single rotational constant. With symmetrical tops there is now one unique axis and two degenerate axes, so an additional rotational constant, and an additional quantum number, are needed to describe the rotational energy levels of the symmetric top. These two additions give the following rotational energy levels of a prolate or oblate symmetric top:

$E_{(J,K)}(cm^{-1})=B_e J(J+1)+(A_e-B_e)K^2$

where $B_e$ is the rotational constant of the two degenerate axes, $A_e$ is the rotational constant of the unique axis, $J$ is the total rotational angular momentum quantum number, and K is the quantum number that represents the portion of the total angular momentum that lies along the unique rotational axis. This leads to the property that $K$ is always less than or equal to $J$. Thus we get the two selection rules for symmetric tops:

$\Delta J = 0, \pm1$ and $\Delta K=0$ when $K\neq 0$

$\Delta J = \pm1$ and $\Delta K=0$ when $K=0$

However, like the rigid rotor approximation for linear molecules, we must also take into account the elasticity of the bonds in symmetric tops. Therefore, in a similar manner to the nonrigid rotor, we add centrifugal distortion terms, but this time one for each quantum number and one for the coupling between the two:

$E_{(J,K)}(cm^{-1})=B_e J(J+1)-D_{eJ} J^2(J+1)^2+(A_e-B_e)K^2-D_{eK} K^4-D_{eJK} J(J+1)K^2$
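A quick enumeration shows how the K-dependence splits each J level. The sketch below evaluates the rigid symmetric-top term values (centrifugal distortion neglected); the A and B constants are arbitrary illustrative numbers chosen to represent a prolate top.

```python
# Symmetric-top rotational term values E(J, K) = B*J(J+1) + (A - B)*K^2
# (centrifugal distortion neglected). The constants below are arbitrary
# illustrative numbers in cm^-1 for a prolate case (A > B).

def term_value(J: int, K: int, A: float, B: float) -> float:
    """Rotational term value in cm^-1; requires |K| <= J."""
    if abs(K) > J:
        raise ValueError("K cannot exceed J")
    return B * J * (J + 1) + (A - B) * K ** 2

A, B = 5.0, 1.0  # cm^-1, unique-axis constant larger (prolate-like)
for J in range(3):
    for K in range(J + 1):
        print(f"J={J}, K={K}: E = {term_value(J, K, A, B):5.1f} cm^-1")
# For fixed J, levels with larger K lie higher when A > B (prolate top).
```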
Asymmetrical Tops

Asymmetrical tops have three orthogonal rotational axes that all have different moments of inertia, and most molecules fall into this category. Unlike linear molecules and symmetric tops, these molecules do not have a simplified equation for the energy levels of their rotations. They do not follow a specific pattern and usually have very complex microwave spectra.

Additional Rotationally Sensitive Spectroscopies

In addition to microwave spectroscopy, IR spectroscopy can also be used to probe rotational transitions in a molecule. However, in the case of IR spectroscopy the rotational transitions are coupled to the vibrational transitions of the molecule. One other spectroscopy that can probe the rotational transitions in a molecule is Raman spectroscopy, which uses UV-visible light scattering to determine energy levels in a molecule. However, a very high-sensitivity detector must be used to analyze the rotational energy levels of a molecule.
14.03: Infrared Spectroscopy
Infrared spectroscopy is the analysis of infrared light interacting with a molecule. The interaction can be analyzed in three ways: by measuring absorption, emission, or reflection. The main use of this technique is in organic and inorganic chemistry, where it is used by chemists to determine functional groups in molecules. IR spectroscopy measures the vibrations of atoms, and from these vibrations it is possible to determine the functional groups. Generally, stronger bonds and lighter atoms vibrate at higher stretching frequencies (wavenumbers).

• How an FTIR Spectrometer Operates. FTIR spectrometers (Fourier Transform Infrared Spectrometers) are widely used in organic synthesis, polymer science, petrochemical engineering, the pharmaceutical industry, and food analysis. In addition, since FTIR spectrometers can be hyphenated to chromatography, the mechanisms of chemical reactions and the detection of unstable substances can be investigated with such instruments.
• Identifying the Presence of Particular Groups. This page explains how to use an infrared spectrum to identify the presence of a few simple bonds in organic compounds.
• Infrared: Application. Infrared spectroscopy, an analytical technique that takes advantage of the vibrational transitions of a molecule, has been of great significance to scientific researchers in many fields such as protein characterization, nanoscale semiconductor analysis, and space exploration.
• Infrared: Interpretation. Infrared spectroscopy is the study of the interaction of infrared light with matter. The fundamental measurement obtained in infrared spectroscopy is an infrared spectrum, which is a plot of measured infrared intensity versus wavelength (or frequency) of light.
• Infrared Spectroscopy. Infrared (IR) spectroscopy is one of the most common and widely used spectroscopic techniques, employed mainly by inorganic and organic chemists due to its usefulness in determining the structures of compounds and identifying them. Chemical compounds have different chemical properties due to the presence of different functional groups.
• Interpreting Infrared Spectra. This chapter focuses on infrared (IR) spectroscopy. The wavelengths found in infrared radiation are a little longer than those found in visible light. IR spectroscopy is useful for finding out what kinds of bonds are present in a molecule, and knowing what kinds of bonds are present is a good start towards knowing what the structure could be.
• IR Spectroscopy Background
• The Fingerprint Region. The fingerprint region, to the right-hand side of the diagram (from about 1500 to 500 cm^-1), usually contains a very complicated series of absorptions. These are mainly due to all manner of bending vibrations within the molecule.

14.04: Electronic Spectroscopy

This page explains what happens when organic compounds absorb UV or visible light, and why the wavelength of light absorbed varies from compound to compound.

Molecular Absorption of Light

When we were talking about the various sorts of molecular orbitals present in organic compounds earlier, you will have come across a diagram showing their relative energies. When light passes through the compound, energy from the light is used to promote an electron from a bonding or non-bonding orbital into one of the empty anti-bonding orbitals. The possible electron jumps that light might cause are shown in that diagram. In each possible case, an electron is excited from a full orbital into an empty anti-bonding orbital.
Each jump takes energy from the light, and a big jump obviously needs more energy than a small one. Each wavelength of light has a particular energy associated with it. If that particular amount of energy is just right for making one of these energy jumps, then that wavelength will be absorbed - its energy will have been used in promoting an electron.

An absorption spectrometer works in a range from about 200 nm (in the near ultra-violet) to about 800 nm (in the very near infra-red). Only a limited number of the possible electron jumps absorb light in that region. Look again at the possible jumps. This time, the important jumps are shown in black, and a less important one in grey. The grey dotted arrows show jumps which absorb light outside the region of the spectrum we are working in. Remember that bigger jumps need more energy and so absorb light with a shorter wavelength. The jumps shown with grey dotted arrows absorb UV light of wavelength less than 200 nm.

The important jumps are:

• from $\pi$ bonding orbitals to $\pi$ anti-bonding orbitals;
• from non-bonding orbitals to $\pi$ anti-bonding orbitals;
• from non-bonding orbitals to sigma anti-bonding orbitals.

That means that in order to absorb light in the region from 200 - 800 nm (which is where the spectra are measured), the molecule must contain either $\pi$ bonds or atoms with non-bonding orbitals. Remember that a non-bonding orbital is a lone pair on, say, oxygen, nitrogen or a halogen. Groups in a molecule which absorb light are known as chromophores.

What does an absorption spectrum look like?

The diagram below shows a simple UV-visible absorption spectrum for buta-1,3-diene - a molecule we will talk more about later. Absorbance (on the vertical axis) is just a measure of the amount of light absorbed. The higher the value, the more of a particular wavelength is being absorbed. You will see that absorption peaks at a value of 217 nm. This is in the ultra-violet and so there would be no visible sign of any light being absorbed - buta-1,3-diene is colorless.

In buta-1,3-diene, CH2=CH-CH=CH2, there are no non-bonding electrons. That means that the only electron jumps taking place (within the range that the spectrometer can measure) are from $\pi$ bonding to $\pi$ anti-bonding orbitals.

A chromophore such as the carbon-oxygen double bond in ethanal, for example, obviously has $\pi$ electrons as a part of the double bond, but also has lone pairs on the oxygen atom. That means that both of the important absorptions from the last energy diagram are possible. You can get an electron excited from a $\pi$ bonding to a $\pi$ anti-bonding orbital, or you can get one excited from an oxygen lone pair (a non-bonding orbital) into a $\pi$ anti-bonding orbital.

The non-bonding orbital has a higher energy than a $\pi$ bonding orbital. That means that the jump from an oxygen lone pair into a $\pi$ anti-bonding orbital needs less energy, and so absorbs light of a lower frequency and therefore a longer wavelength. Ethanal can therefore absorb light of two different wavelengths:

• the $\pi$ bonding to $\pi$ anti-bonding absorption peaks at 180 nm. These $\pi \rightarrow \pi^*$ transitions involve moving an electron from a bonding $\pi$ orbital to an antibonding $\pi^*$ orbital; they tend to have molar absorptivities on the order of 10,000.
• the non-bonding to $\pi$ anti-bonding absorption peaks at 290 nm. These $n \rightarrow \pi^*$ transitions involve moving an electron from a nonbonding electron pair to an antibonding $\pi^*$ orbital; they tend to have molar absorptivities of less than 2000.
Both of these absorptions are in the ultra-violet, but most spectrometers will not pick up the one at 180 nm because they work in the range from 200 - 800 nm.

Conjugation and Delocalization

Consider these three molecules: Ethene contains a simple isolated carbon-carbon double bond, but the other two have conjugated double bonds. In these cases, there is delocalization of the $\pi$ bonding orbitals over the whole molecule. Now look at the wavelengths of the light which each of these molecules absorbs.

| molecule | wavelength of maximum absorption (nm) |
|---|---|
| ethene | 171 |
| buta-1,3-diene | 217 |
| hexa-1,3,5-triene | 258 |

All of the molecules give similar UV-visible absorption spectra - the only difference being that the absorptions move to longer and longer wavelengths as the amount of delocalization in the molecule increases. Why is this? You can actually work out what must be happening.

• The maximum absorption is moving to longer wavelengths as the amount of delocalization increases.
• Therefore the maximum absorption is moving to lower frequencies as the amount of delocalization increases.
• Therefore absorption needs less energy as the amount of delocalization increases.
• Therefore there must be a smaller energy gap between the bonding and anti-bonding orbitals as the amount of delocalization increases.

. . . and that's what is happening. Compare ethene with buta-1,3-diene. In ethene, there is one $\pi$ bonding orbital and one $\pi$ anti-bonding orbital. In buta-1,3-diene, there are two $\pi$ bonding orbitals and two $\pi$ anti-bonding orbitals. This is all discussed in detail on the introductory page that you should have read. The highest occupied molecular orbital is often referred to as the HOMO - in these cases, it is a $\pi$ bonding orbital. The lowest unoccupied molecular orbital (the LUMO) is a $\pi$ anti-bonding orbital. Notice that the gap between these has fallen. It takes less energy to excite an electron in the buta-1,3-diene case than with ethene. In the hexa-1,3,5-triene case, it is less still.

If you extend this to compounds with really massive delocalization, the wavelength absorbed will eventually be long enough to be in the visible region of the spectrum, and the compound will then be seen as colored. A good example of this is the orange plant pigment, beta-carotene - present in carrots, for example.

Why is beta-carotene orange?

Beta-carotene has the sort of delocalization that we've just been looking at, but on a much greater scale, with 11 carbon-carbon double bonds conjugated together. The diagram shows the structure of beta-carotene with the alternating double and single bonds shown in red. The more delocalization there is, the smaller the gap between the highest energy $\pi$ bonding orbital and the lowest energy $\pi$ anti-bonding orbital. To promote an electron therefore takes less energy in beta-carotene than in the cases we've looked at so far, because the gap between the levels is smaller. Remember that less energy means a lower frequency of light gets absorbed, and that's equivalent to a longer wavelength. Beta-carotene absorbs throughout the ultra-violet region into the violet, but particularly strongly in the visible region between about 400 and 500 nm with a peak about 470 nm.
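The trend that more delocalization means a smaller HOMO-LUMO gap and a longer absorption wavelength can be illustrated with the crude free-electron (particle-in-a-box) model of a conjugated chain. This model and the geometry below (a chain of N conjugated carbons at roughly 1.4 Å per bond) are our own illustrative assumptions, not part of the text, and the model reproduces only the qualitative red shift, not the exact wavelengths.

```python
# Free-electron (particle-in-a-box) sketch of the HOMO -> LUMO gap for a
# conjugated chain of N carbons contributing N pi electrons. This toy model
# captures only the trend: longer conjugation -> smaller gap -> longer
# absorption wavelength. Geometry and model are illustrative assumptions.

H = 6.626e-34      # Planck constant, J s
M_E = 9.109e-31    # electron mass, kg
C = 2.998e8        # speed of light, m/s

def absorption_wavelength_nm(n_carbons: int, bond_length_m: float = 1.40e-10) -> float:
    """Estimated HOMO->LUMO absorption wavelength for an N-carbon chain."""
    L = (n_carbons - 1) * bond_length_m        # box length along the chain
    homo = n_carbons // 2                      # N pi electrons fill N/2 levels
    lumo = homo + 1
    dE = (lumo**2 - homo**2) * H**2 / (8 * M_E * L**2)  # E_n = n^2 h^2 / 8mL^2
    return H * C / dE * 1e9                    # lambda = hc / dE, in nm

for n in (2, 4, 6, 22):  # ethene, butadiene, hexatriene, ~beta-carotene chain
    print(f"{n:2d} conjugated carbons: lambda ~ {absorption_wavelength_nm(n):4.0f} nm")
# The computed wavelengths lengthen steadily with chain length, mirroring
# the 171 -> 217 -> 258 nm trend in the table above (qualitatively only).
```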
The wavelengths associated with the various colors are approximately:

| color | wavelength (nm) |
|---|---|
| violet | 380 - 435 |
| blue | 435 - 500 |
| cyan | 500 - 520 |
| green | 520 - 565 |
| yellow | 565 - 590 |
| orange | 590 - 625 |
| red | 625 - 740 |

So if the absorption is strongest in the violet to cyan region, what color will you actually see? It is tempting to think that you can work it out from the colors that are left - and in this particular case, you wouldn't be far wrong. Unfortunately, it is not as simple as that! Sometimes what you actually see is quite unexpected. Mixing different wavelengths of light does not give you the same result as mixing paints or other pigments. You can, however, sometimes get some estimate of the color you would see using the idea of complementary colors.

If you arrange some colors in a circle, you get a "color wheel". The diagram shows one possible version of this; an internet search will throw up many different versions! Colors directly opposite each other on the color wheel are said to be complementary colors. Blue and yellow are complementary colors; red and cyan are complementary; and so are green and magenta. Mixing together two complementary colors of light will give you white light.

What this all means is that if a particular color is absorbed from white light, what your eye detects by mixing up all the other wavelengths of light is its complementary color. In the beta-carotene case, the situation is more confused because you are absorbing such a range of wavelengths. However, if you think of the peak absorption running from the blue into the cyan, it would be reasonable to think of the color you would see as being opposite that, where yellow runs into red - in other words, orange.

Franck-Condon: Electronic and Vibrational Coupling

So far, we have come across one big rule of photon absorbance: to be absorbed, a photon's energy has to match an energy difference within the compound that is absorbing it. In the case of visible or ultraviolet light, the energy of a photon is roughly in the region that would be appropriate to promote an electron to a higher energy level. Different wavelengths would be able to promote different electrons, depending on the energy difference between an occupied electronic energy level and an unoccupied one. Other types of electromagnetic radiation would not be able to promote an electron, but they would be coupled to other events. For example, absorption of infrared light is tied to vibrational energy levels, and microwave radiation is tied to rotational energy levels in molecules. Thus, one reason a photon may or may not be absorbed has to do with whether its energy corresponds to the available energy differences within the molecule or ion that it encounters.

Photons face other limitations. One of these is a moderate variation on our main rule. It is called the Franck-Condon principle. According to this idea, when an electron is excited from its normal position, the ground state, to a higher energy level, the optimal positions of atoms in the molecule may need to shift. Because electronic motion is much faster than nuclear motion, however, any shifting of atoms needed to optimize positions as they should be in the excited state will have to wait until after the electron gets excited. In that case, when the electron lands and the atoms aren't yet in their lowest energy positions for the excited state, the molecule will find itself in an excited vibrational state as well as an excited electronic state.
That means the required energy for excitation does not just correspond to the difference in electronic energy levels; it is fine-tuned to reach a vibrational energy level, which is quantized as well (a short numerical sketch follows this list).
• The Franck-Condon Principle states that electronic transitions are vertical.
• A vertical transition is one in which none of the nuclei move while the electron journeys from one state to another.
• A vertical transition may begin in a vibrational ground state of an electronic ground state and end in a vibrational excited state of an electronic excited state.
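As a rough illustration of this fine-tuning, the sketch below treats the excited-state vibration as a harmonic ladder and adds vibrational quanta on top of a purely electronic gap. The 4.5 eV gap and 1600 cm-1 spacing are made-up illustrative values, not data for any particular molecule.

```python
# Vibronic transition energies: an electronic gap plus v' vibrational quanta.
# All numbers below are illustrative assumptions, not data for a real molecule.
h = 6.626e-34      # Planck constant, J s
c_m = 2.998e8      # speed of light, m/s
c_cm = 2.998e10    # speed of light, cm/s (for wavenumbers)
eV = 1.602e-19     # J per electronvolt

E_electronic = 4.5 * eV            # assumed purely electronic (0-0) gap
vib_quantum = h * c_cm * 1600      # assumed excited-state spacing of 1600 cm^-1

for v_prime in range(4):           # transitions ending in v' = 0, 1, 2, 3
    E_total = E_electronic + v_prime * vib_quantum
    wavelength_nm = h * c_m / E_total * 1e9
    print(f"0 -> v' = {v_prime}: {E_total/eV:.3f} eV, photon at {wavelength_nm:.1f} nm")
```

Each step in the progression needs a slightly more energetic (shorter wavelength) photon, which is why vibronic structure appears as a series of closely spaced absorption bands.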
Nuclear Magnetic Resonance (NMR) is a nucleus-specific (Nuclear) spectroscopy that has far-reaching applications throughout the physical sciences and industry. NMR uses a large magnet (Magnetic) to probe the intrinsic spin properties of atomic nuclei. Like all spectroscopies, NMR uses a component of electromagnetic radiation (radio frequency waves) to promote transitions between nuclear energy levels (Resonance). Most chemists use NMR for structure determination of small molecules.

Introduction

In 1946, NMR was co-discovered by Purcell, Pound and Torrey of Harvard University and Bloch, Hansen and Packard of Stanford University. The discovery first came about when it was noticed that magnetic nuclei, such as 1H and 31P (read: proton and phosphorus-31), were able to absorb radio frequency energy when placed in a magnetic field of a strength that was specific to the nucleus. Upon absorption, the nuclei begin to resonate, and different atoms within a molecule resonate at different frequencies. This observation allows a detailed analysis of the structure of a molecule. Since then, NMR has been applied to solids, liquids and gases, and to kinetic and structural studies, resulting in 6 Nobel prizes awarded in the field of NMR. More information about the history of NMR can be found in the NMR History page. Here, the fundamental concepts of NMR are presented.

Spin and Magnetic Properties

The nucleus consists of elementary particles called neutrons and protons, which possess an intrinsic property called spin. Like electrons, the spin of a nucleus can be described using quantum numbers: I for the spin and m for its orientation in a magnetic field. Atomic nuclei with even numbers of both protons and neutrons have zero spin; all other nuclei have a non-zero spin. Furthermore, all nuclei with a non-zero spin have a magnetic moment, $\mu$, given by

$\mu=\gamma I$

where $\gamma$ is the gyromagnetic ratio, a proportionality constant between the magnetic dipole moment and the angular momentum, specific to each nucleus (Table $1$).

Table $1$: The gyromagnetic ratios for several common nuclei

Nuclei   Spin   Gyromagnetic Ratio (MHz/T)   Natural Abundance (%)
1H       1/2    42.576                       99.9985
13C      1/2    10.705                       1.07
31P      1/2    17.235                       100
27Al     5/2    11.103                       100
23Na     3/2    11.262                       100
7Li      3/2    16.546                       92.41
29Si     1/2    -8.465                       4.68
17O      5/2    5.772                        0.038
15N      1/2    -4.361                       0.368

The magnetic moment of the nucleus forces the nucleus to behave as a tiny bar magnet. In the absence of an external magnetic field, each magnet is randomly oriented. During the NMR experiment the sample is placed in an external magnetic field, $B_0$, which forces the bar magnets to align with (low energy) or against (high energy) $B_0$. During the NMR experiment, a spin flip of the magnets occurs, requiring an exact quantum of energy. To understand this rather abstract concept it is useful to consider the NMR experiment in terms of the nuclear energy levels.

Nuclear Energy Levels

As mentioned above, an exact quantum of energy must be used to induce the spin flip or transition. For a nucleus of spin I, there are 2I+1 energy levels. For a spin-1/2 nucleus, there are only two: the low energy level occupied by the spins aligned with $B_0$ and the high energy level occupied by spins aligned against $B_0$. Each energy level is given by

$E=-m\hbar \gamma B_0 \label{E1}$

where $m$ is the magnetic quantum number, in this case +/- 1/2. The energy levels for $I>1/2$, known as quadrupolar nuclei, are more complex and information regarding them can be found here.
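To put numbers on Equation \ref{E1}, here is a minimal Python sketch that evaluates the two Zeeman levels for a proton; the 9.4 T field is an assumed, typical spectrometer value, and γ for 1H is taken from Table 1.

```python
import math

# Zeeman energy levels E = -m * hbar * gamma * B0 for a spin-1/2 nucleus (1H).
hbar = 1.055e-34                   # reduced Planck constant, J s
gamma_1H = 42.576e6 * 2 * math.pi  # 1H gyromagnetic ratio (Table 1), rad s^-1 T^-1
B0 = 9.4                           # assumed external field, tesla

for m in (+0.5, -0.5):
    E = -m * hbar * gamma_1H * B0
    print(f"m = {m:+.1f}: E = {E:+.3e} J")

delta_E = hbar * gamma_1H * B0          # gap between the two levels
nu = gamma_1H * B0 / (2 * math.pi)      # resonance (Larmor) frequency, Hz
print(f"Delta E = {delta_E:.3e} J, resonance at {nu/1e6:.1f} MHz")
```

At 9.4 T the two levels are split by only about 2.7 x 10-25 J and the resonance lands near 400 MHz, which is why a 9.4 T instrument is called a 400 MHz spectrometer; the expressions for this gap and frequency are developed next.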
The energy difference between the energy levels is then

$\Delta E=\hbar \gamma B_0 \label{E2}$

where $\hbar$ is the reduced Planck constant. A schematic showing how the energy levels are arranged for a spin-1/2 nucleus is shown below. Note how the strength of the magnetic field plays a large role in the energy level difference. In the absence of an applied field the nuclear energy levels are degenerate. The splitting of the degenerate energy levels due to the presence of a magnetic field is known as Zeeman splitting.

Energy Transitions (Spin Flip)

In order for the NMR experiment to work, a spin flip between the energy levels must occur. The energy difference between the two states corresponds to the energy of the electromagnetic radiation that causes the nuclei to change their energy levels. For most NMR spectrometers, $B_0$ is on the order of tesla (T) while $\gamma$ is on the order of $10^7$ rad s$^{-1}$ T$^{-1}$. Consequently, the electromagnetic radiation required is in the radio frequency range, on the order of tens to hundreds of MHz. The energy of a photon is represented by

$E=h\nu$

and thus the frequency necessary for absorption to occur is represented as:

$\nu=\dfrac{\gamma B_0}{2\pi}$

Hence, the NMR experiment measures the resonant frequency that causes a spin flip. For more advanced NMR users, the sections on NMR detection and Larmor frequency should be consulted.

Nuclear Shielding

The power of NMR is based on the concept of nuclear shielding, which allows for structural assignments. Every atom is surrounded by electrons, which orbit the nucleus. Charged particles moving in a loop create a magnetic field which is felt by the nucleus. Therefore the local electronic environment surrounding the nucleus slightly changes the magnetic field experienced by the nucleus, which in turn causes slight changes in the energy levels! This is known as shielding. Nuclei that experience different magnetic fields due to local electronic interactions are known as inequivalent nuclei. The change in the energy levels requires a different frequency to excite the spin flip, which, as will be seen below, creates a new peak in the NMR spectrum. Shielding thus allows for the structural determination of molecules. The shielding of the nucleus allows chemically inequivalent environments to be determined by Fourier transforming the NMR signal. The result is a spectrum, shown below, that consists of a set of peaks in which each peak corresponds to a distinct chemical environment. The area underneath a peak is directly proportional to the number of nuclei in that chemical environment. Additional details about the structure manifest themselves in the form of different NMR interactions, each altering the NMR spectrum in a distinct manner. The x-axis of an NMR spectrum is given in parts per million (ppm) and the relation to shielding is explained here.

Relaxation

Relaxation refers to the phenomenon of nuclei returning to their thermodynamically stable states after being excited to higher energy levels (Figure $4$). The energy absorbed when a transition from a lower energy level to a higher energy level occurs is released when the opposite happens. This can be a fairly complex process based on the different timescales of the relaxation. The two most common types of relaxation are spin-lattice relaxation (T1) and spin-spin relaxation (T2). A more complex treatment of relaxation is given elsewhere. To understand relaxation, the entire sample must be considered. By placing the nuclei in an external magnetic field, the nuclei create a bulk magnetization along the z-axis.
The spins of the nuclei are also coherent. The NMR signal may be detected as long as the spins are coherent with one another. The NMR experiment moves the bulk magnetization from the z-axis to the x-y plane, where it is detected.
• Spin-Lattice Relaxation ($T_1$): T1 is the time constant for the recovery of the bulk magnetization along the z-axis after it has been tipped into the x-y plane; after a time T1, the magnetization is within 1/e (about 37%) of its equilibrium value. The more efficient the relaxation process, the smaller the T1 value. In solids, since motions between molecules are limited, T1 values are large. Spin-lattice relaxation measurements are usually carried out by pulse methods.
• Spin-Spin Relaxation ($T_2$): T2 is the time it takes for the spins to lose coherence with one another. T2 is either shorter than or equal to T1.

Applications

The two major areas where NMR has proven to be of critical importance are the fields of medicine and chemistry, with new applications being developed daily. Nuclear magnetic resonance imaging, better known as magnetic resonance imaging (MRI), is an important medical diagnostic tool used to study the function and structure of the human body. It provides detailed images of any part of the body, especially soft tissue, in all possible planes, and has been used in the areas of cardiovascular, neurological, musculoskeletal and oncological imaging. Unlike alternatives such as computed tomography (CT), it does not use ionizing radiation and hence is very safe to administer. In many laboratories today, chemists use nuclear magnetic resonance to determine the structures of important chemical and biological compounds. In NMR spectra, different peaks give information about different atoms in a molecule according to their specific chemical environments and the bonding between atoms. The most common isotopes used to detect NMR signals are 1H and 13C, but there are many others, such as 2H, 3He, 15N, 19F, etc., that are also in use. NMR has also proven to be very useful in other areas such as environmental testing, the petroleum industry, process control, Earth's-field NMR and magnetometers. Non-destructive testing saves a lot of money on expensive biological samples, which can be used again if more trials need to be run. The petroleum industry uses NMR equipment to measure the porosity of different rocks and the permeability of different underground fluids. Magnetometers are used to measure the various magnetic fields that are relevant to one's study.

Contributors and Attributions
• Derrick Kaseman (UC Davis) and Revathi Srinivasan Ganesh Iyer (UCD)
Though less used than Nuclear Magnetic Resonance (NMR), Electron Paramagnetic Resonance (EPR) is a remarkably useful form of spectroscopy used to study molecules or atoms with an unpaired electron. It is less widely used than NMR because stable molecules often do not have unpaired electrons. However, EPR can be used analytically to observe labeled species in situ either biologically or in a chemical reaction.

Introduction

Electron Paramagnetic Resonance (EPR) is also known as Electron Spin Resonance (ESR). The sample is held in a very strong magnetic field, while electromagnetic (EM) radiation is applied monochromatically (Figure 1: a monochromatic electromagnetic beam). This portion of EPR is analogous to simple spectroscopy, where absorbance by the sample of a single wavelength or range of wavelengths of EM radiation is monitored by the end user, i.e., absorbance. The unpaired electrons can occupy either the ms = +1/2 or the ms = -1/2 state (Figure 2). From here either the magnetic field $B_0$ or the frequency of the incident radiation is varied. Today most researchers adjust the EM radiation in the microwave region; the aim is to find the exact point at which the electrons can jump from the less energetic ms = -1/2 level to ms = +1/2. More electrons occupy the lower ms value (see Boltzmann Distribution), so overall there is a net absorption of energy. This absorbance value, paired with the associated wavelength, can be used with the equation below to generate a graph showing how absorption relates to frequency or magnetic field.

$\Delta E=h\nu=g_e \beta_B B_0$

where ge equals 2.0023193 for a free electron; $\beta_B$ is the Bohr magneton, equal to 9.2740 x 10-24 J T-1; and B0 is the external magnetic field.

Theory

Like NMR, EPR can be used to observe the geometry of a molecule through its magnetic moment and the difference in electron and nucleus mass. EPR has mainly been used for the detection and study of free radical species, either in testing or in analytical experimentation. "Spin labeling" chemical species can be a powerful technique for both quantification and investigation of otherwise invisible factors. In the EPR spectrum of a free electron, only one line (one peak) is observed. But in the EPR spectrum of hydrogen, two lines (2 peaks) are observed, because there is an interaction between the nucleus and the unpaired electron. This is called hyperfine splitting. The distance between the two lines (two peaks) is called the hyperfine splitting constant (A). By using 2NI+1, we can calculate the number of hyperfine lines in a multiplet of an EPR transition, where N is the number of equivalent nuclei and I is the nuclear spin. For example, for nitroxide radicals, the nuclear spin of 14N is 1, so with N=1 and I=1 we have 2 x 1 x 1 + 1 = 3, which means that a spin-1 nucleus splits the EPR transition into a triplet. To absorb microwaves, there must be unpaired electrons in the system; no EPR signal will be observed if the system contains only paired electrons, since there will be no resonant absorption of microwave energy. Molecules such as NO, NO2, and O2 do have unpaired electrons in their ground states. EPR can also be performed on proteins with paramagnetic ions such as Mn2+, Fe3+ and Cu2+. Additionally, molecules containing stable nitroxide radicals, such as 2,2,6,6-tetramethyl-1-piperidinyloxyl (TEMPO, Figure 3) and the di-tert-butyl nitroxide radical, can be studied.
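To see the resonance condition in action, here is a minimal Python sketch that solves $\Delta E = h\nu = g_e \beta_B B_0$ for the field at which a free electron resonates; the 9.5 GHz microwave frequency is an assumed, typical X-band value, while g_e and the Bohr magneton are the constants quoted above.

```python
# Resonance field for a free electron: B0 = h * nu / (g_e * beta_B).
h = 6.626e-34        # Planck constant, J s
g_e = 2.0023193      # free-electron g factor (from the text)
beta_B = 9.2740e-24  # Bohr magneton, J/T
nu = 9.5e9           # assumed X-band microwave frequency, Hz

B0 = h * nu / (g_e * beta_B)
delta_E = h * nu
print(f"Resonance field B0 = {B0:.4f} T for nu = {nu/1e9:.1f} GHz")
print(f"Energy gap Delta E = {delta_E:.3e} J")
```

The result, about 0.34 T, is the field at which g ≈ 2 signals are typically observed at X-band; a hyperfine-split spectrum would show lines centered on this field, separated by the constant A.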
Singlet and Triplet Excited States

Understanding the difference between fluorescence and phosphorescence requires knowledge of electron spin and the differences between singlet and triplet states. The Pauli Exclusion Principle states that two electrons in an atom cannot have the same four quantum numbers (\(n\), \(l\), \(m_l\), \(m_s\)); only two electrons can occupy each orbital, and they must have opposite spin states. These opposite spin states are called spin pairing. Because of this spin pairing, most molecules do not exhibit a magnetic field and are diamagnetic. Diamagnetic molecules are neither attracted nor repelled by a static magnetic field. Free radicals are paramagnetic because they contain unpaired electrons whose magnetic moments are attracted to a magnetic field. A singlet state is one in which all the electron spins are paired in the molecular electronic state, so the electronic energy levels do not split when the molecule is exposed to a magnetic field. A doublet state occurs when there is an unpaired electron that gives two possible orientations when exposed to a magnetic field, imparting different energies to the system. A singlet or a triplet can form when one electron is excited to a higher energy level. In an excited singlet state, the electron is promoted with the same spin orientation it had in the ground state (paired). In a triplet excited state, the promoted electron has the same spin orientation (parallel) as the other unpaired electron. The difference between the spins of the ground singlet, excited singlet, and excited triplet states is shown in Figure \(1\). The singlet, doublet and triplet designations are derived using the equation for multiplicity, 2S+1, where S is the total spin angular momentum (the sum of all the electron spins). Individual spins are denoted as spin up (s = +1/2) or spin down (s = -1/2). If we calculate the multiplicity for the excited singlet state, the equation gives 2(+1/2 + -1/2)+1 = 2(0)+1 = 1, making the center orbital diagram in the figure a singlet state. If the spin multiplicity for the excited triplet state is calculated, we obtain 2(+1/2 + +1/2)+1 = 2(1)+1 = 3, which gives a triplet state as expected. The difference between a molecule in the ground and excited state is that the molecule is diamagnetic in the ground state and paramagnetic in the triplet state. This difference in spin state makes the transition from singlet to triplet (or triplet to singlet) much less probable than singlet-to-singlet transitions. The singlet-to-triplet (or reverse) transition involves a change in electronic spin state. For this reason, the lifetime of the triplet state is longer than that of the singlet state, typically by a factor of roughly $10^4$. The radiation that induces the transition from the ground state to the excited triplet state has a low probability of occurring, so these absorption bands are less intense than singlet-singlet absorption bands. The excited triplet state can be populated from the excited singlet state of certain molecules, which results in phosphorescence. These spin multiplicities in ground and excited states can be used to explain the transitions of photoluminescent molecules via the Jablonski diagram.

Jablonski Diagrams

The Jablonski diagram drawn below is a partial energy diagram that represents the energy of a photoluminescent molecule in its different energy states. The lowest and darkest horizontal line represents the ground-state electronic energy of the molecule, which is the singlet state labeled as \(S_o\).
At room temperature, the majority of the molecules in a solution are in this state. The upper lines represent the energy states of the three excited electronic states: S1 and S2 represent the electronic singlet states (left) and T1 represents the first electronic triplet state (right). For each excited electronic state, the darker line represents its ground vibrational state. The energy of the triplet state is lower than the energy of the corresponding singlet state. There are numerous vibrational levels associated with each electronic state, denoted by the thinner lines. Absorption transitions (blue lines in Figure \(2\)) can occur from the ground singlet electronic state (So) to various vibrational levels in the singlet excited electronic states. A transition directly from the ground singlet electronic state to the triplet electronic state is unlikely, because it would require the promoted electron to end up with its spin parallel to that of the electron left behind (Figure \(1\)). Such a transition involves a change in multiplicity and thus has a low probability of occurring; it is a forbidden transition. Molecules also undergo vibrational relaxation to lose any excess vibrational energy that remains when excited to the electronic states (\(S_1\) and \(S_2\)), as shown by the wavy lines in Figure \(2\). The concept of forbidden transitions is used to explain and compare the peaks of absorption and emission.

Relaxation and Fluorescence

Sometimes, when an excited state species relaxes, giving off a photon, the wavelength of the photon is different from the one that initially led to excitation. When this happens, the photon is invariably red-shifted; its wavelength is longer than the initial one. This situation is called "fluorescence". How can that be? Isn't energy quantized? How is the molecule suddenly taking a commission out of the energy the original photon brought with it? This discrepancy is related to the Franck-Condon principle from the previous page. When an electron is promoted to an electronic excited state, it often ends up in an excited vibrational state as well. Thus, some of the energy put into electronic excitation is immediately passed into vibrational energy. Vibrational energy, however, doesn't just travel in photons. It can be gained or lost through molecular collisions and heat transfer. The electron might simply drop down again immediately; a photon would be emitted of exactly the same wavelength as the one that was previously absorbed. On the other hand, if the molecule relaxes into a lower vibrational state, some of that initial energy will have been lost as heat. When the electron relaxes, the distance back to the ground state is a little shorter. The photon that is emitted will have lower energy and longer wavelength than the initial one. Just how does a molecule undergo vibrational relaxation? Vibrational energy is the energy used to lengthen or shorten bonds, or to widen or squeeze bond angles. Given a big enough molecule, some of this vibrational energy could be transferred into bond lengths and angles farther away from the electronic transition. Otherwise, if the molecule is small, it may transfer some of its energy in collisions with other molecules.

Note

There are many examples of energy being transferred this way in everyday life. In a game of pool, one billiard ball can transfer its energy to another, sending it toward the pocket.
Barry Bonds can transfer a considerable amount of energy through his bat into a baseball, sending it out of the park, just as Serena Williams can send a whole lot of energy whizzing back at her sister.

Exercise 1

How does the energy of an electronic absorption compare to other processes? To find out, you might consider the excitation of an entire mole of molecules, rather than a single molecule absorbing a single photon. Calculate the energy in kJ/mol for the following transitions. (A short computational sketch for checking these conversions appears at the end of this section.)
1. absorbance at 180 nm (ultraviolet)
2. absorbance at 476 nm (blue)
3. absorbance at 645 nm (red)

Exercise 2

How does the energy of an excitation between vibrational states compare to that of an electronic excitation? Typically, infrared absorptions are reported in cm-1, which is simply what it looks like: the reciprocal of the wavelength in cm. Because wavelength and frequency are inversely related, wavenumbers are treated as a frequency unit. Calculate the energy in kJ/mol for the following transitions.
1. absorbance at 3105 cm-1
2. absorbance at 1695 cm-1
3. absorbance at 963 cm-1

When two molecules collide, one can drop to a lower vibrational state while the other hops up to a higher vibrational state with the energy it gains. In the drawing below, the red molecule is in an electronically and vibrationally excited state. In a collision, it transfers some of its vibrational energy to the blue molecule.

Radiationless Transitions: Internal Conversion

If electrons can get to a lower energy state, and give off a little energy at a time, by hopping down to lower and lower vibrational levels, do they need to give off a giant photon at all? Maybe they can relax all the way down to the ground state via vibrational relaxation. That is certainly the case. Given lots of vibrational energy levels, and an excited state that is low enough in energy so that some of its lower vibrational levels overlap with some of the higher vibrational levels of the ground state, the electron can hop over from one state to the other without releasing a photon. This event is called a "radiationless transition", because it occurs without release of a photon. The electron simply slides over from a low vibrational state of the excited electronic state to a high vibrational state of the electronic ground state. If the electron then simply keeps dropping a vibrational level at a time back to the ground state, the process is called "internal conversion". Internal conversion has an important consequence. Because the absorption of UV and visible light can result in energy transfer into vibrational states, much of the energy that is absorbed from these sources is converted into heat. That can be a good thing if you happen to be a marine iguana trying to warm up in the sun after a plunge in the icy Pacific. It can also be a tricky thing if you are a process chemist trying to scale up a photochemical reaction for commercial production of a pharmaceutical, because you have to make sure the system has adequate cooling available.

Radiationless Transitions: Intersystem Crossing

There is a very similar event, called "intersystem crossing", that leads to the electron getting caught between the excited state and the ground state. Just as, little by little, vibrational relaxation can lead the electron back onto the ground state energy surface, it can also lead the electron into states that are intermediate in energy. For example, suppose an organic molecule undergoes electronic excitation. Generally, organic molecules have no unpaired electrons.
Their ground states are singlet states. According to one of our selection rules for electronic excitation, the excited state must also have no unpaired electrons. In other words, the spin on the electron that gets excited is the same after excitation as it was before excitation. However, that's not the lowest possible energy state for that electron. When we think about atomic orbital filling, there is a rule that governs the spin on the electrons in degenerate orbitals: in the lowest energy state, spin is maximized (Hund's rule). In other words, when we draw a picture of the valence electron configuration of nitrogen, we show nitrogen's three p electrons each in its own orbital, with their spins parallel. The picture with three unpaired electrons, all with parallel spins, shows a nitrogen in the quartet spin state. Having one of those spins point the other way would result in a different spin state. One pair of electrons in the p level would be spin-paired, one up and one down, even though they are in different p orbitals. That would leave one electron without an opposite partner. The nitrogen would be in a doublet spin state. That is not what happens. The quartet state is lower in energy than the doublet state. That's just one of the rules of quantum mechanics (Hund's rule): maximize spin when orbitals are singly occupied. It's the same in a molecule: the triplet state is lower in energy than the corresponding singlet state. Why didn't the electron get excited to the triplet state in the first place? That's against the rules. But sliding down vibrationally onto the triplet state from the singlet excited state is not, because it doesn't involve absorption of a photon. Intersystem crossing can have important consequences in reaction chemistry because it allows access to triplet states that are not normally available in many molecules. Because triplet states feature unpaired electrons, their reactivity is often typified by radical processes. That means an added suite of reactions can be accessed via this process.

Phosphorescence: A Radiationless Transition Followed by Emission

Intersystem crossing is one way a system can end up in a triplet excited state. Even though this state is lower in energy than a singlet excited state, it cannot be accessed directly via electronic excitation because that would violate the spin selection rule (\(\Delta S=0\)). That's where the electron gets stuck, though. The quick way back down to the bottom is by emitting a photon, but because that would involve a change in spin state, it is not allowed. Realistically speaking, that means it takes a long time. By "a long time", we might mean a few seconds, several minutes, or possibly even hours. Eventually, the electron can drop back down, accompanied by the emission of a photon. This situation is called "phosphorescence". Molecules that display phosphorescence are often incorporated into toys and shirts so that they will glow in the dark.
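As promised after Exercises 1 and 2, here is a minimal Python sketch for converting the listed wavelengths and wavenumbers into molar energies via $E = N_A hc/\lambda$ and $E = N_A hc\tilde{\nu}$; the numerical inputs are exactly those given in the exercises.

```python
# Molar photon energies for Exercises 1 and 2: E = N_A * h * c / lambda.
h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s
N_A = 6.022e23    # Avogadro's number, mol^-1

for wavelength_nm in (180, 476, 645):               # Exercise 1
    E = N_A * h * c / (wavelength_nm * 1e-9) / 1000
    print(f"{wavelength_nm} nm   ->  {E:6.1f} kJ/mol")

for wavenumber_cm in (3105, 1695, 963):             # Exercise 2
    E = N_A * h * (c * 100) * wavenumber_cm / 1000  # c in cm/s times nu-tilde
    print(f"{wavenumber_cm} cm^-1 ->  {E:6.2f} kJ/mol")
```

The electronic excitations come out in the hundreds of kJ/mol (comparable to bond energies), while the vibrational transitions are an order of magnitude smaller; that is the comparison the exercises are driving at.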
LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. A laser is a type of light source with the unique characteristics of directionality, brightness, and monochromaticity. The goal of this module is to explain how a laser operates (stimulated versus spontaneous emission), describe its important components, and give some examples of types of lasers and their applications.

Introduction

The word LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. In 1916, Albert Einstein discovered the physical principle responsible for this amplification; this foundational principle is called stimulated emission. It was widely accepted that the laser would represent a big leap in science and technology, even before Theodore H. Maiman built the first one in 1960. The 1964 Nobel Prize in physics was shared by Charles H. Townes, Nikolay Basov, and Aleksandr Prokhorov, with the citation "for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser-laser principle". The early lasers developed in the 1950s by Charles H. Townes and Arthur Schawlow were gas and solid-state lasers for use in spectroscopy. The principles of lasers were adapted from masers. MASER is an acronym that stands for Microwave Amplification by Stimulated Emission of Radiation. It uses the ideas of stimulated emission and population inversion to produce coherent, amplified radiation in the microwave region. Stimulated emission occurs when an incident photon induces an electron in an excited state to fall back to the ground state, so that amplified radiation is produced with the same direction and energy as the incident light. Population inversion exists when there is a greater population of electrons in the excited state than in the ground state; it is achieved through various pumping mechanisms. The laser uses these same ideas, except that the electromagnetic wave created is in the visible light region. When emission begins, the light oscillates within the resonant cavity and gains magnitude. Once enough light has been acquired, the laser beam is produced. This allows lasers to be used as powerful light sources. Three unique characteristics of a laser are its monochromaticity, directionality, and brightness. The monochromaticity of lasers is due to the fact that lasers are highly selective in the wavelength of light produced, which in turn is due to the resonant frequency inside the active material. Resonant frequency means that the light is oscillating in a single mode, creating a monochromatic beam of light. The property of directionality depends on the angle at which the light propagates out of the source. Since lasers have large spatial and temporal coherence, directionality is maximized. Temporal coherence means there are only small fluctuations in the phase of the light over time; spatial coherence means the phase varies little across the beam profile. Like monochromaticity, directionality is dependent on the resonant cavity of the active material. The property of brightness is a result of the directionality and the coherence of the light. Due to these properties, lasers today are used in simple laser pointers, cutting devices, the development of military technologies, spectroscopy, and medical treatments.
Their direct application to spectroscopy has allowed scientists to measure the lifetimes of excited state molecules, perform structural analysis, probe far regions of the atmosphere, study photochemistry, and use lasers as ionization sources.

History

One of the most important characteristics of light is that it has wave-like properties and that it is an electromagnetic wave. Experiments on blackbody radiation provided a comprehensive picture of the emission and absorption of electromagnetic waves. In 1900, Max Planck developed the theory that electromagnetic waves can only exist in distinct quantities of energy, which are directly proportional to a given frequency ($\nu$). In 1905, Albert Einstein proposed the dual nature of light, having both wave-like and particle-like properties. He used the photoelectric effect to show that light acts as a particle, with energy inversely proportional to its wavelength. This is important because the number of particles is directly related to how intense a light beam will be. In 1916, Einstein introduced the idea of stimulated emission - a key concept for lasers. In 1957, Townes and Schawlow proposed the concept of lasers in the infrared and optical region by adapting the concept of masers to produce monochromatic and coherent radiation. In 1953, Townes was the first to build a maser, with an ammonia gas source. Masers use stimulated emission to generate microwaves. Townes and other scientists wanted to develop the optical maser to generate light. Optical masers would soon adopt the name LASER: Light Amplification by Stimulated Emission of Radiation. An optical maser would need more energy than can be provided by microwave frequencies and a resonant cavity on the order of 1 μm or less. Townes and Schawlow proposed the use of a Fabry-Pérot interferometer equipped with parallel mirrors, where interference of radiation traveling back and forth between the parallel mirrors in the cavity allowed for the selection of certain wavelengths. Townes built an optical maser with potassium gas; it failed because the mirrors degraded over time. In 1957, Gordon Gould improved upon Townes' and Schawlow's laser concept. It was Gould who renamed the optical maser the laser. In April 1959, Gould filed a patent for the laser, and later, in March 1960, Townes and Schawlow also made a request for a patent. Since Gould's notebook was officially dated, the idea was recognized as his first, but he did not receive the patent until 1977.

Components

A laser consists of three main components: a lasing medium, a resonant or optical cavity, and an output coupler. The lasing medium consists of a group of atoms, molecules, or ions in solid, liquid or gaseous form, which acts as an amplifier for light waves. For amplification, the medium has to achieve population inversion, meaning a state in which the number of atoms in the upper energy level is greater than the number in the lower energy level. An external pumping source supplies the energy needed to obtain such a state of population inversion between a pair of energy levels of the atomic system. When the active medium is placed inside an optical resonator, the system acts as an oscillator.

Lasing Medium

The lasing medium is the component used to achieve lasing, such as the chromium in an aluminum oxide crystal, found in a ruby laser. Helium and neon gas are the two materials most commonly used in gas lasers. These are only a few examples of the lasing mediums or materials that have been used in the past and present states of the laser.
For further information about different types of lasing mediums, please refer to the section where Types of Lasers is discussed.

Optical Cavity and Output Coupler

Rays of light moving along an optical path tend to diverge as they propagate, so an optical cavity is needed to refocus the light. Figure 1 represents the basics of an optical cavity, in which the light moves back and forth between two mirrors. These redirect and focus the light each time it hits the surface of the mirrors. There are two types of cavities: stable cavities and unstable cavities. A stable cavity is one in which the ray of light does not diverge far from the optical axis; an unstable cavity is one in which the ray of light bounces off and away from a mirror's surface. The importance of the cavity is that it allows the laser to have the properties of directionality, monochromaticity and brightness. Light oscillating between the first mirror (Mo) and the second mirror (M1), separated by a distance d, will have a round-trip phase shift (RTPS) of $2\theta=2kd=q2\pi-\phi$. In Figure 1, a round trip can be described as the beam traveling from Mo to M1 and back to Mo. Resonance occurs in the cavity because the light propagating between the two mirrors is uniform. The ABCD law describes the fact that an optical cavity has a field distribution, because the field reproduces itself as it makes these round trips between the two parallel mirrors. The ABCD law was first applied to a Gaussian beam with a beam parameter, q, which is described as

$q_2=\dfrac {(Aq_1+B)}{(Cq_1+D)} \nonumber$

This law states that the beam oscillating through an optical system will change as it moves in the cavity. Fields that are created in an optical cavity have the same shape and phase as they make each trip back and forth. However, the one thing that does change is the size of the field, because the electromagnetic wave is unrestricted, unlike a wave in a short-circuited coaxial cable used to build a resonator or a microwave cavity mode. Since there is a field distribution of Eo at the surface of Mo, it can be said that there is a field distribution at the surface of M1. Since there will be a change in the size of the field, the electromagnetic wave will have a change in amplitude by $\rho_0\rho_1$ and a phase factor of $e^{-jk2d}$, creating additional fields. This is an example of a phasorial addition of all fields between Mo and M1, creating a total field ET (Figure 2). Phasorial addition is described by the RTPS, where each additional En will have a delay of angle ϕ which is related to kd. ET will always be greater than Eo only if ρ0 and ρ1 are not greater than 1 and ϕ=0. In this case, when ϕ=0, resonance is enhanced because all factors - ET traveling between Mo and M1, the intensity of the electromagnetic waves, the number of photons traveling between Mo and M1, and the amount of energy that is stored - are maximized. The resonant wavelength can also be determined by using the RTPS relationship, because

$k=\dfrac {\omega n}{c}=\dfrac {2\pi}{\lambda}$

Using $2\theta=2kd=q2\pi$:

$\dfrac{2\pi \cdot 2d}{\lambda}=q \cdot 2\pi$

$d=\dfrac {q\lambda}{2}$

where the wavelength of interest is given by $\lambda=\lambda_0/n$, n is the index of refraction, and $\lambda_0$ is the free-space wavelength. Since we are dealing with light as a wave, the light in the resonant cavity can also be described in terms of frequency, $\nu$, where

$k2d=\omega\dfrac {2nd}{c}=2\pi \nu \dfrac {2nd}{c}=q(2\pi)$

so that

$\nu=q\dfrac {c}{2nd}$
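Here is a minimal Python sketch of this mode condition, using the same numbers as Problem 1 at the end of this section (a 20 cm helium-neon cavity with n = 1.00): it evaluates ν = qc/2nd for two adjacent q values and the resulting mode spacing.

```python
# Resonant cavity modes: nu = q * c / (2 * n * d).
c = 2.998e8   # speed of light, m/s
n = 1.00      # index of refraction inside the cavity
d = 0.20      # mirror separation, m (20 cm, as in Problem 1)

for q in (632110, 632111):                 # two adjacent longitudinal modes
    nu = q * c / (2 * n * d)
    wavelength_nm = c / nu * 1e9
    print(f"q = {q}: nu = {nu/1e12:.4f} THz, lambda = {wavelength_nm:.4f} nm")

print(f"mode spacing = c/(2nd) = {c/(2*n*d)/1e6:.1f} MHz")
```

Adjacent modes differ by only about 750 MHz, a tiny fraction of the roughly 474 THz optical frequency, which is why the cavity length pins down the output wavelength so precisely.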
A Fabry-Pérot interferometer is a prime example of an optical cavity used in a laser. The Fabry-Pérot cavity is equipped with two parallel mirrors, one completely reflective and the other partially reflective. As light accumulates in the cavity after taking several round trips between the two mirrors, some light is transmitted through the partially reflective mirror and a laser beam is produced. The beam can be in pulsed mode or continuous-wave (CW) mode. To increase the performance of the resonant cavity, the length of the cavity (d) must be chosen so as to avoid a decrease in the laser beam intensity due to diffraction losses. The size of the aperture of the cavity is also important because it determines the strength, or intensity, of the laser beam. In fact, determining the best length of a resonant cavity will enhance the coupling conditions of the output coupler by producing a frequency that is stable, which ultimately generates a laser beam that is coherent and has high power. There are essentially six stages in the lasing process. First is the ground state, where there is no excitation of the lasing medium. Second is pumping, which is applied to the medium and leads to spontaneous emission. The third stage is when emitted photons collide with an excited molecule and stimulated emission occurs. In the fourth stage the photons are multiplied; those moving parallel to the cavity axis hit one mirror and are then reflected toward the second mirror. During the fifth stage this process continues until there is an accumulation of light that is coherent and of a specific frequency. Finally, the sixth stage is when the light, or laser beam, exits the partially reflective mirror, which is also known as the output coupler. The output coupler is the last important component of a laser because it must be efficient in order to produce an output of light with maximum intensity. If the output coupler is too transparent, then there is much more loss of electromagnetic waves, and this will decrease lasing significantly because population inversion will no longer be maintained. If the output coupler, or partially reflective mirror, is too reflective, then all the accumulated light built up in the resonant cavity will be trapped in the cavity. The beam will not pass through the output coupler, producing little to no light and making the laser ineffective.

Emission

Lasers create a high energy beam of light by stimulated emission or spontaneous emission. Within a molecule there are discrete energy levels. A simple molecular description has a low energy ground state (E1) and a high energy excited state (E2). When an electromagnetic wave, referred to as the incident light, irradiates a molecule, there are two processes that can occur: absorption and stimulated emission. Absorption occurs when the energy of the incident light matches the energy difference between the ground and excited state, causing the population in the ground state to be promoted to the excited state. The rate of absorption is given by the equation:

$\dfrac {dN_1}{dt}=-W_{12} N_1$

where N1 is the population in E1, and W12 is the probability of this transition. The probability of the transition can also be related to the photon flux (the intensity of the incident light):

$W_{12}=\sigma_{12} F$

where F is the photon flux and σ12 is the cross section of the transition, with units of area.
When absorption occurs, photons are removed from the incident light and the intensity of the light is decreased. Stimulated emission is the reverse of absorption. Stimulated emission has two main requirements: there must be population in the excited state, and the energy of the incident light must match the difference between the excited and ground state. When these two requirements are met, population from the excited state will move to the ground energy level. During this process a photon is emitted with the same energy and direction as the incident light. Unlike absorption, stimulated emission adds to the intensity of the incident light. The rate of stimulated emission is similar to the rate of absorption, except that it uses the population of the higher energy level:

$\dfrac {dN_2}{dt}=-W_{21} N_2$

Like absorption, the probability of the transition is related to the photon flux of the incident light through the equation:

$W_{21}=\sigma_{21} F$

When absorption and stimulated emission occur simultaneously in a system, the photon flux of the incident light can increase or decrease. The change in the photon flux is a combination of the rate equations for absorption and stimulated emission. This is given by the equation:

$dF=\sigma F(N_2-N_1 )d\tau \nonumber$

Spontaneous emission has the same characteristics as stimulated emission except that no incident light is required to cause the transition from the excited to the ground state. Population in the excited state is unstable and will decay to the ground state through several processes. Most decays involve non-radiative vibrational relaxation, but some molecules will decay while emitting a photon matching the energy difference between the two states. The rate of spontaneous emission is given by:

$\dfrac {dN_2}{dt}=-AN_2 \nonumber$

where A is the spontaneous emission probability, which depends on the transition involved. The coefficient A is an Einstein coefficient obtained from the spontaneous emission lifetime. Since spontaneous emission is not competing with absorption, the photon flux is based solely on the rate of spontaneous emission. The population ratio of a molecule or atom is found using the Boltzmann distribution and the energies of the ground state (E1) and the excited state (E2):

$\dfrac{N_2}{N_1} = e^{\dfrac{-(E_2-E_1)}{kT}} \nonumber$

Under normal conditions the majority, if not all, of the population is in the lower energy level (E1), because the energy of the excited state is greater than that of the ground state. The thermal energy normally available (kT) is not enough to overcome the difference, and the ratio of populations favors the ground state. For example, if the difference in energy between two states corresponds to absorption of light at 500 nm, the ratio of N1 to N2 is about $5.1 \times 10^{41}$:1 (a sketch of this calculation follows). The photon flux of the incident light is directly proportional to the difference in populations. Since the ground state has more population, the photon flux decreases: there is more absorption occurring than stimulated emission. In order to increase the photon flux, there must be more population in the excited state than in the ground state, a condition generally known as a population inversion. In a two-level energy system it is impossible to create the population inversion needed for a laser. Instead, three- or four-level energy systems are generally used (Figure 5).
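The quoted ratio is easy to reproduce; below is a minimal Python sketch evaluating the Boltzmann expression for a 500 nm gap, assuming room temperature (300 K).

```python
import math

# Boltzmann population ratio N2/N1 = exp(-(E2 - E1)/kT) for a 500 nm gap.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K
T = 300          # assumed temperature, K

delta_E = h * c / 500e-9             # energy gap matching 500 nm light
ratio = math.exp(-delta_E / (k * T))
print(f"Delta E = {delta_E:.3e} J")
print(f"N2/N1 = {ratio:.2e}, i.e. N1/N2 = {1/ratio:.2e}")
```

The inverse ratio comes out on the order of $10^{41}$, in line with the figure quoted above: essentially every molecule sits in the ground state, which is why pumping is required to create an inversion.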
Three-level processes involve pumping of population from the lowest energy level to the highest, third energy level. The population can then decay down to the second energy level or back down to the first energy level. The population that makes it to the second energy level is available for stimulated emission. Light matching the energy difference between the second and first energy levels will cause stimulated emission. Four-level systems follow roughly the same process, except that population is moved from the lowest state to the highest, fourth level. It then decays to the third level, and lasing happens when the incident light matches the energy between the third and second levels. After lasing, there is decay to the first level.

Pumping Process

Pumping is the movement of population from the ground state to a higher excited state. The general rate at which this is done is given by:

$\left(\dfrac {dN_g}{dt}\right)_p=W_p N_g$

where Ng is the population in the ground level and Wp is the pump rate. Pumping can be done optically, electrically, chemically (see chemical laser), using gases at high flow rates, or by nuclear fission. Only optical and electrical pumping are discussed in detail here.

Optical Pumping

Optical pumping uses light to create the necessary population inversion for a laser. Usually high pressure xenon or krypton lamps are used to excite solid or liquid laser systems. The active material in the laser absorbs the light from the pump lamp, promoting the population from the ground state to the higher energy state. The material used in the laser can be continuously exposed to the pumping light, which creates a continuous wave (CW) laser. A pulsed laser can be created by using flashes of pumping light. In optical pumping there are three types of efficiency: transfer, lamp radiative, and pump quantum efficiency. Transfer efficiency is the ratio of the energy created by the lamp to the power of the light emitted by the laser. The lamp radiative efficiency is the measure of how much electrical power is converted into light in the optical lamp. Pump quantum efficiency accounts for the ratio of population that decays to the correct energy level to the population that decays either back to the ground state or to another, incorrect energy level. For example, the overall pumping efficiency of the first ruby laser was around 1.1%. The average pump rate for optical pumping depends on the total efficiency of the pump ($\eta_p$), the volume of the laser material (V), the ground state population (Ng), the power input (P), and the frequency of the lasing transition ($\nu_0$):

$\langle W_p \rangle =\eta_p \dfrac{P}{VN_g \hbar \nu_0}$

Electrical Pumping

Electrical pumping is a much more complicated process than optical pumping. Usually used for gas and semiconductor lasers, electrical pumping uses an electrical current to excite and promote the ground state population. In a simple gas laser that contains only one species (A), current passes through the gas medium and creates electrons that collide with the gas molecules to produce excited state molecules (A*):

$A+e \longrightarrow A^*+e$

During electron impact either an ion or an excited state can be created. The ability to make the excited state depends mostly on the material used in the laser and not on the electrical pumping source, making it difficult to describe the efficiency of the pumping. Total efficiencies have been calculated and tabulated for most active materials used in electrical pumping; efficiencies range from < 0.1% for the N2 gas laser to 70% for some CO2 gas lasers.
Like the pumping rate of optical pumps, the rate of electrical pumping is found using the overall efficiency of the pump, the power applied, and the population of the ground state. However, instead of using the frequency of the ground-to-upper-state transition, electrical pumping uses the energy of the upper state ($\hbar \omega_p$) and the volume of the electron discharge (V):

$\langle W_p \rangle =\eta_p \dfrac{P}{VN_g \hbar \omega_p}$

Pulsed Operation

Q-Switching

The technique of Q-switching allows the generation of laser pulses of short duration, from a few nanoseconds to a few tens of nanoseconds, and high peak power, from a few megawatts to a few tens of megawatts. Suppose we put a shutter into the laser cavity. If the shutter is closed, laser action cannot occur and the population inversion can become very high. If the shutter is opened suddenly, the stored energy will be released in a short and intense light pulse. This technique is known as Q-switching. Q here denotes the ratio of the energy stored to the energy dissipated in the cavity. This technique is used in many types of solid-state lasers and CO2 lasers to get a high-power pulsed output. To produce the high inversion required for Q-switching, four requirements must be satisfied.
1. The lifetime of the upper level must be longer than the cavity buildup time.
2. The pumping flux duration must be longer than the cavity buildup time.
3. The initial cavity losses must be high enough during the pumping duration to prevent oscillation from occurring.
4. The cavity losses must be reduced instantaneously.

Mode-Locking

The technique of mode locking allows the generation of laser pulses of ultrashort duration, from less than a picosecond down to femtoseconds, and very high peak power, up to a few gigawatts. Mode locking is achieved by inducing the different longitudinal modes of a laser into a fixed phase relationship. When electromagnetic wave modes with different frequencies and random phases are combined, they produce a randomly varying average output. When the modes are added in phase, they combine to produce a total amplitude and intensity output with a repeated pulse.

Figure 8: Laser mode structure

Types of Lasers

There are many different types of lasers with a wide range of applications; below is a brief description of some of the main types.

Solid State Lasers

A solid-state laser is one that uses a solid active medium, generally in a rod shape. Inside the active material is a dopant that acts as the light emitting source. Optical pumping is used to create the population inversion in the active material. Solid-state lasers generally use stimulated emission as the mechanism for creating the high energy beam.

Ruby Laser

The ruby laser was the first operating laser and was built in 1960. It has a three-level energy system (Figure 6) that uses aluminum oxide, with some of the aluminum atoms replaced with chromium, as its active material. The chromium in the aluminum oxide crystal is the active part of the laser. Electrons in the ground state of chromium absorb the incident light and are promoted to higher energy states. The short-lived excited state relaxes down to a metastable state with a longer lifetime. Laser emission happens when there is relaxation from the metastable state back to the ground state. A xenon flash lamp emits light at wavelengths of 6600 Å and 4000 Å, matching the energy needed to excite the chromium atoms. In order to create resonance of the incident light in the active material, silver plating was put at both ends of the ruby rod.
One end was completely covered while the other end was partially covered, so lasing light could exit the system.

Nd:YAG Laser

Nd:YAG lasers are the most popular type of solid state laser. The laser medium is a crystal of Y3Al5O12, commonly called YAG, an acronym for yttrium aluminum garnet. A simplified energy-level scheme for Nd:YAG is shown in Fig. 9. The λ=1.06 μm laser transition is the strongest of the 4F3/2→4I11/2 transitions. The major application of the Nd laser is in various forms of material processing: drilling, spot welding, and laser marking. Because they can be focused to a very small spot, these lasers are also used in resistor trimming, circuit mask and memory repair, and in cutting out specialized circuits. Medical applications include many types of surgery. Many medical applications take advantage of low-loss optical fiber delivery systems that can be inserted into the body wherever the beam is needed. Nd lasers are also used in military applications such as range finding and target designation. High-power pulsed versions are also used for work in the X-ray spectral region. In addition, Nd lasers are used in scientific labs as good sources for pumping dye lasers and other types of lasers.

Semiconductor Laser

The semiconductor laser is another type of solid state laser that uses a semiconducting material like germanium or silicon. When the temperature of the semiconducting material is increased, electrons move from the valence band to the conduction band, creating holes in the valence band (Figure 7). Between the conduction band and the valence band is a region where there are no energy levels, called the band gap. Applying a voltage to the semiconductor causes electrons to move to the conduction band, creating a population inversion. Irradiating a semiconductor with incident light matching the energy of the band gap causes a large transition from the conduction band to the valence band, increasing and amplifying the incident light.

Gas Lasers

A gas laser contains an active material composed of a mixture of gases with similar energy states inside a small gas chamber. Electrical pumping is used to create the population inversion: one gas is excited through collisions with electrons and in turn excites the other gas through collisions.

Helium-Neon Laser

The helium-neon laser was the first gas laser. It consists of a long narrow tube that contains He and Ne gas. Mirrors are placed at both ends of the gas tube to form the resonant cavity, with one of the mirrors partially reflecting the incident light. Stimulated emission of the gas mixture is carried out by first exciting the He gas to a higher energy state through collisions with electrons from the electrical pumping source. The excited He atoms then collide with the Ne atoms, transferring their energy and exciting them to a higher energy level. The Ne atoms in the higher energy level then relax to a lower metastable energy state. Lasing occurs when there is relaxation from the metastable state to a lower energy state, causing spontaneous emission. The Ne gas then returns to the ground state when it collides with the outer walls of the gas tube (Figure 8).

Carbon Dioxide Laser

The carbon dioxide laser is a gas laser that uses the energy difference between rotational-vibrational energy levels. Within the vibrational levels of CO2 there are rotational sub-levels. A mixture of N2 and CO2 gas is placed inside a chamber, and the N2 molecules are excited through an electrical pumping mechanism.
The excited N2 molecules then collide with the CO2 molecules, transferring energy. This transfer of energy raises the CO2 to a higher vibrational level. The excited CO2 molecules then undergo spontaneous emission as they relax to lower rotational-vibrational levels, increasing the signal of the incident light (Figure 9). Carbon dioxide lasers are extremely efficient, around 70%, and powerful compared to other gas lasers, making them useful for welding and cutting.

Liquid Lasers

Liquid lasers consist of a liquid active material, usually composed of an organic dye compound. The most common type of liquid laser uses rhodamine 6G (Figure 10) dye mixed with alcohol and is excited by different types of lasers, such as an argon-ion laser or a nitrogen laser. Organic dyes are large compounds that have absorption bands in the UV or visible region with a strong, intense fluorescence spectrum. The free π electrons of the dye are excited using an optical pumping source, and the transition from the S1 to the S0 state creates the lasing light (see Jablonski diagrams). Liquids are generally used because they can easily be tuned to emit a certain wavelength by changing the resonant frequency within the cavity; wavelengths from the visible to the infrared can be covered. There are many benefits of liquid lasers: they can be cooled in a relatively short amount of time, they cannot be damaged the way a solid laser can, and their production is cost-effective. The efficiency of liquid lasers is low because the lifetime of the excited state is relatively short, there are many non-radiative decay processes, and the material degrades over time. Liquid lasers tend to be used only as pulsed lasers when tunability is required. Liquid lasers can be used for high-resolution spectroscopy since they are easily tuned over a wide range of wavelengths. They can also be used because they have concentrations which are manageable when dissolved in solids or other liquids.

Chemical Lasers

Chemical lasers are different from other lasers because the population inversion is the direct product of a chemical reaction, with energy released as a result of an exothermic reaction. Usually the reactions involve gases, where the energy created is used to make vibrationally excited molecules. The light used for lasing is then created from vibrational-rotational relaxation, as in the CO2 gas laser. An example of a chemical laser is the HF gas laser. Inside the gas chamber, fluorine and hydrogen react to form an excited HF molecule:

F + H2 → HF + H

The excess energy from the reaction allows HF to remain in its excited state. As it relaxes, light is emitted through spontaneous emission. Deuterium can also be used in place of hydrogen; deuterium fluoride is used for applications that require high power. For example, MIRACL, built for military research, was known to produce 2.2 megawatts of power. The uniqueness of a chemical laser is that the power required for lasing is produced in the reaction itself.

Laser Applications

The applications of lasers are numerous and cover both scientific and technological fields. In general, these applications are a direct consequence of the special characteristics of lasers. Below are a few examples of laser applications; for a complete list please go to en.Wikipedia.org/wiki/List_of...ons_for_lasers

Lidar

Lidar is short for light detection and ranging, an optical remote sensing technology that can be used for monitoring the environment.
A typical lidar system involves a transmitter of laser radiation and a receiver for the detection and analysis of backscattered light. A beam expander is usually used at the transmitter to reduce the divergence of the laser beam before it propagates into the atmosphere. The receiver includes a wavelength filter, a photodetector, and computers and electronics for data acquisition and analysis. Lidar systems date back to the 1930s, but with the advent of the laser, lidar has become one of the primary tools in atmospheric and environmental research. Lidar has also been put to various other uses. In agriculture, lidar can be used to create topographic maps that help farmers decide the appropriate amount of fertilizer to apply for a better crop yield. In archaeology, lidar can be used to create a geographic information system to help archaeologists find sites. In transportation, lidar has been used in autonomous cruise control systems to prevent road accidents, and police use lidar speed guns to enforce speed limits.

Laser in Material Processing

The beam of a laser is usually a few millimeters in diameter. For most material processing applications, lenses are used to increase the intensity of the beam. The beam from a laser is either plane or spherical, and after passing through a lens it should focus to a point. In actual practice, however, diffraction effects have to be taken into consideration, and the incoming beam focuses into a region of finite radius. If λ is the wavelength of the laser light, a is the radius of the beam, and f is the focal length of the lens, then the radius of the focused region is

$b = \dfrac{\lambda f}{a}$

If P represents the power of the laser beam, the intensity I obtained at the focused region is

$I = \dfrac{P}{\pi b^2} = \dfrac{P a^2}{\pi \lambda^2 f^2}$

High-power (P > 100 W) lasers are widely used in material processing such as welding, drilling, cutting, surface treatment, and alloying. The main advantages of the laser beam can be summarized as follows: (1) The heating produced by the laser is less than that in conventional processes, so material distortion is considerably reduced. (2) The possibility of working in inaccessible regions: any region that can be seen can be processed by a laser. (3) The process can be better controlled and easily automated. Against all these advantages, the disadvantages are: (1) the high cost of the laser system; (2) reliability and reproducibility problems of the laser system; and (3) safety problems.

Laser in Medicine

In the field of medicine, the major use of lasers is for surgery, such as laser eye surgery, commonly known as LASIK. There are also diagnostic applications, such as the clinical use of flow microfluorometers, Doppler velocimetry to measure blood velocity, and laser fluorescence bronchoscopy to detect tumors in their early phase. For surgery, the laser beam is used instead of a conventional scalpel. The infrared beam from the CO2 laser is strongly absorbed by water molecules in the tissue; it produces rapid evaporation of these molecules, consequently cutting the tissue. The main advantages of laser surgery can be summarized as follows: (1) High precision: the incision can be made with high precision, particularly when the beam is directed by means of a microscope. (2) The possibility of operating in inaccessible regions: laser surgery can be performed in any region of the body that can be observed by means of an optical system. (3) Limited damage to blood vessels and adjacent tissue.
However, the disadvantages are: (1) considerable cost; (2) the lower cutting speed of the laser scalpel; and (3) reliability and safety problems associated with the laser procedure.

Problems

1. Determine the free-space wavelength ($\lambda_0$) in Å and the frequency of the resonant cavity modes with beam parameters q1 = 632,110 and q2 = 632,111 in a helium-neon gas laser at 1 atm. The index of refraction n is 1.00, the length of the resonant cavity is 20 cm, and the wavelength region of interest is 6328 Å.

2. What wavelength of light will be released by the spontaneous emission of Ne gas, where the energy difference between the excited and ground states is $9.9 \times 10^{-19}\ J$?

3. What is the population ratio for the transition in the previous question at 300 K?

Answers

1. For q1, $\lambda_0 = 6328.0125$ Å; for q2, $\lambda_0 = 6328.0025$ Å; and $\nu = 474$ THz for both q values.

2. 200 nm

3. $N_2/N_1 = e^{-\Delta E / k_B T} \approx 1.7 \times 10^{-104}$
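The cavity-mode arithmetic in Problem 1 and the focusing formulas from the material-processing section above are easy to sanity-check numerically. The sketch below is illustrative only; the CO2-laser power, focal length, and beam radius in the second half are assumed example values, not numbers from the text.

```python
import math

c = 2.998e8   # speed of light, m/s
n = 1.00      # refractive index inside the cavity
L = 0.20      # cavity length, m

# Problem 1: standing-wave (longitudinal mode) condition q * lambda/(2n) = L
for q in (632110, 632111):
    lam = 2 * n * L / q                        # free-space wavelength, m
    print(q, round(lam * 1e10, 4), "Angstrom,", round(c / lam / 1e12), "THz")

# Focused-spot formulas b = lambda*f/a and I = P/(pi b^2), with assumed
# numbers for a 100 W CO2 laser, an f = 10 cm lens, and a 5 mm beam radius
lam, f, a, P = 10.6e-6, 0.10, 5e-3, 100.0
b = lam * f / a
print("b =", b, "m; I =", P / (math.pi * b**2), "W/m^2")
```

Running it reproduces the 6328.0125 Å / 6328.0025 Å wavelengths and the 474 THz frequency from Answer 1, and shows how strongly even a modest lens concentrates the beam's power.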
Contributors and Attributions

• Greg Allen (UCD), Arpana Vaniya (UCD), Zheng Zhang (UC Davis)
Circular Dichroism, an absorption spectroscopy, uses circularly polarized light to investigate structural aspects of optically active chiral media. It is mostly used to study biological molecules, their structure, and their interactions with metals and other molecules.

Introduction

Circular Dichroism (CD) is an absorption spectroscopy method based on the differential absorption of left and right circularly polarized light. Optically active chiral molecules will preferentially absorb one direction of the circularly polarized light. The difference in absorption of the left and right circularly polarized light can be measured and quantified. UV CD is used to determine aspects of protein secondary structure. Vibrational CD (IR CD) is used to study the structure of small organic molecules, proteins, and DNA. UV/Vis CD investigates charge-transfer transitions in metal-protein complexes.

Circular Polarization of Light

Electromagnetic radiation consists of oscillating electric and magnetic fields perpendicular to each other and to the direction of propagation. Most light sources emit waves in which these fields oscillate in all directions perpendicular to the propagation vector. Linearly polarized light occurs when the electric field vector oscillates in only one plane. In circularly polarized light, the electric field vector rotates around the propagation axis while maintaining a constant magnitude. Viewed down the axis of propagation, the vector appears to trace a circle over the period of one wave (one full rotation occurs in a distance equal to the wavelength). In linearly polarized light the direction of the vector stays constant and the magnitude oscillates; in circularly polarized light the magnitude stays constant while the direction rotates. As the radiation propagates, the electric field vector traces out a helix. The magnetic field vector is out of phase with the electric field vector by a quarter turn; traced together, the vectors form a double helix.

Light can be circularly polarized in two directions: left and right. If the vector rotates counterclockwise when the observer looks down the axis of propagation, the light is left circularly polarized (LCP). If it rotates clockwise, it is right circularly polarized (RCP). If LCP and RCP waves of the same amplitude are superimposed on one another, the resulting wave is linearly polarized.

Interaction with Matter

As with linearly polarized light, circularly polarized light can be absorbed by a medium. An optically active chiral compound will absorb the two directions of circularly polarized light by different amounts:

$\Delta A = A_l - A_r$

This can be extended to the Beer-Lambert law. The molar absorptivity of a medium will be different for LCP and RCP, so the Beer-Lambert law can be rewritten as

$\Delta A = (\varepsilon_l - \varepsilon_r)cl$

The difference in molar absorptivity is also known as the molar circular dichroism:

$\Delta \varepsilon = \varepsilon_l - \varepsilon_r$

The molar circular dichroism is not only wavelength dependent but also depends on the absorbing molecule's conformation, which can make it a function of concentration, temperature, and chemical environment. Any absorption of light results in a change in amplitude of the incident wave: absorption changes the intensity of the light, and intensity is the square of the amplitude. In a chiral medium the molar absorptivities of LCP and RCP light are different, so they will be absorbed by the medium in different amounts.
This differential absorption results in the LCP and RCP components having different amplitudes, which means the superimposed light is no longer linearly polarized; the resulting wave is elliptically polarized.

Molar Ellipticity

The CD spectrum is often reported in degrees of ellipticity, $\theta$, a measure of the ellipticity of the polarization given by

$\tan \theta = \frac{E_l - E_r}{E_l + E_r}$

where E is the magnitude of the electric field vector. The change in polarization is usually small, and the signal is often expressed in radians, where

$\theta = \frac{2.303}{4}(A_l - A_r)$

and is a function of wavelength. $\theta$ can be converted to degrees by multiplying by $\frac{180}{\pi}$, which gives $\theta = 32.98\, \Delta A$. The historically reported unit of CD experiments is the molar ellipticity, $[\theta]$, which removes the dependence on concentration and path length:

$[\theta] = 3298\, \Delta \varepsilon$

where the factor 3298 converts from the units of molar absorptivity to the historical units of degrees·cm²·dmol⁻¹. (A short numerical sketch of these conversions appears at the end of this section.)

Applications

Instrumentation

Most commercial CD instruments are based on the modulation techniques introduced by Grosjean and Legrand. Light is linearly polarized and passed through a monochromator. The single-wavelength light is then passed through a modulating device, usually a photoelastic modulator (PEM), which transforms the linear light into circularly polarized light. The incident light on the sample switches between LCP and RCP light. As the incident light switches direction of polarization, the absorption changes and the differential molar absorptivity can be calculated.

Biological molecules

The most widely used application of CD spectroscopy is identifying structural aspects of proteins and DNA. The peptide bonds in proteins are optically active, and the ellipticity they exhibit changes based on the local conformation of the molecule. Secondary structures of proteins can be analyzed using the far-UV (190-250 nm) region of light. The ordered $\alpha$-helix, $\beta$-sheet, $\beta$-turn, and random coil conformations all have characteristic spectra, and these unique spectra form the basis for protein secondary structure analysis. It should be noted that CD gives only the relative fractions of residues in each conformation, not where each structural feature lies in the molecule. In reporting CD data for large biomolecules it is necessary to convert the data into a normalized value that is independent of molecular length; to do this, the molar ellipticity is divided by the number of residues or monomer units in the molecule.

The real value of CD comes from its ability to show conformational changes in molecules. It can be used to determine how similar a wild-type protein is to a mutant, or to show the extent of denaturation with a change in temperature or chemical environment. It can also provide information about structural changes upon ligand binding. In order to interpret any of this information, the spectrum of the native conformation must first be determined. Some information about the tertiary structure of proteins can be determined using near-UV spectroscopy. Absorptions between 250 and 300 nm are due to the dipole orientation and surrounding environment of the aromatic amino acids (phenylalanine, tyrosine, and tryptophan) and of cysteine residues, which can form disulfide bonds. Near-UV techniques can also be used to provide structural information about the binding of prosthetic groups in proteins. Metal-containing proteins can be studied by visible CD spectroscopy.
Visible CD light excites the d-d transitions of metals in chiral environments. Free metal ions in solution do not give a CD signal, so the pH dependence of metal binding and the binding stoichiometry can be determined. Vibrational CD (VCD) spectroscopy uses IR light to determine the 3D structures of short peptides, nucleic acids, and carbohydrates. VCD has been used to show the shape and number of helices in A-, B-, and Z-DNA. VCD is still a relatively new technique and has the potential to be a very powerful tool, but resolving the spectra requires extensive ab initio calculations as well as high concentrations, and measurements must be performed in water, which may force the molecule into a non-native conformation.
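The ellipticity conversions above reduce to two one-line formulas. Here is a minimal Python sketch of them; the $\Delta A$ value fed in at the end is an arbitrary illustration, not data from the text.

```python
import math

def ellipticity_mdeg(delta_A):
    """theta = (2.303/4) * dA in radians, converted to millidegrees
    via 180/pi (equivalently theta[deg] = 32.98 * dA)."""
    return (2.303 / 4) * delta_A * (180 / math.pi) * 1000

def molar_ellipticity(delta_eps):
    """[theta] = 3298 * delta_eps, in deg cm^2 dmol^-1."""
    return 3298 * delta_eps

# Illustrative (assumed) differential absorbance of 1e-4:
print(ellipticity_mdeg(1e-4))   # ~3.30 mdeg
```

Typical protein CD signals are indeed only a few millidegrees, which is why the modulation techniques described under Instrumentation are needed.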
14.1: Vocabulary

Q14.2

Find the wavenumber and frequency of light with a wavelength of 700 nm (to three significant figures).

S14.2

1. The wavenumber is the reciprocal of the wavelength; its units are inverse centimeters.

$\tilde{\nu} = \dfrac{1}{\lambda} = \dfrac{1}{700\ \text{nm}} \times \dfrac{10^9\ \text{nm}}{100\ \text{cm}} = 1.43 \times 10^4\ \text{cm}^{-1}$

2. Use the relationship between frequency, speed, and wavelength to solve for the frequency.

$\nu = \dfrac{c}{\lambda} = \dfrac{2.998 \times 10^8\ \text{m/s}}{700\ \text{nm} \times \dfrac{\text{m}}{10^9\ \text{nm}}} = 4.28 \times 10^{14}\ \text{s}^{-1}$

Q14.2a

Convert $3 \times 10^4\; cm^{-1}$ to wavelength. What kind of spectroscopy uses this region?

Q14.2a

What are the frequency and wavenumber of a 740 nm photon?

Q14.2b

The wavelength of the red line in the hydrogen spectrum is 656 nm ($656 \times 10^{-9}\; m$). What are its wavenumber and frequency?

S14.2b

The wavenumber is $1/\lambda = 1/(656 \times 10^{-9}\ \text{m}) = 1.5 \times 10^6\ \text{m}^{-1}$.

Q14.2c

Convert 533 nm to wavenumber and frequency.

Q14.4a

Convert the following absorbances to percent transmittance: (a) 0.56, (b) 1.5, (c) 6.8.

Q14.4b

Calculate the percent transmittance from the following values of absorbance:

1. 4.0
2. 0.23
3. 1.6

S14.4b

Solve using the relationship between transmittance and absorbance, $-\log_{10} T = A$, so $T = 10^{-A}$. For percent transmittance, multiply T by 100%.

(a) $T = 10^{-4.0} \times 100\% = 0.010\%$

(b) $T = 10^{-0.23} \times 100\% = 59\%$

(c) $T = 10^{-1.6} \times 100\% = 2.5\%$

Answers: 0.010%, 59%, 2.5%

Q14.4c

Convert the following from percent transmittance to absorbance.

1. 0.10%
2. 23%
3. 84%

Q14.4c

What is the percent transmittance for the following absorbances?

1. 0.4
2. 1.2

S14.4c

We have $A = 2 - \log \%T$.

a. $0.4 = 2 - \log \%T \Rightarrow \%T = 10^{2 - 0.4} = 39.8\%$

b. $1.2 = 2 - \log \%T \Rightarrow \%T = 10^{2 - 1.2} = 6.3\%$

Q14.6a

When molecules are exposed to radiation with frequency $\nu$ such that $\Delta E = h\nu$, do they undergo a transition from a higher to a lower state, or from a lower to a higher state?

Q14.6b

Find the uncertainties in simultaneously measuring the frequency and wavelength of an emission, if the wavelength is 430 nm and the excited-state lifetime is 0.50 nanoseconds.

S14.6b

1. Use Heisenberg's uncertainty principle and the relationship between energy and frequency to find the uncertainty in the frequency.

$\Delta E\, \Delta t \geq \dfrac{h}{4\pi}, \qquad \Delta E = h\, \Delta\nu \quad \Rightarrow \quad \Delta\nu \geq \dfrac{1}{4\pi\, \Delta t}$

The maximum value for $\Delta t$ is the lifetime of the excited state:

$\Delta\nu \geq \dfrac{1}{4\pi \times 0.50\ \text{ns} \times \dfrac{\text{s}}{10^9\ \text{ns}}} = 1.6 \times 10^8\ \text{s}^{-1}$

2. Use the uncertainty in the frequency and the relationship between frequency and wavelength to find the uncertainty in the wavelength.

$\lambda = \dfrac{c}{\nu} \quad \Rightarrow \quad |\Delta\lambda| = \dfrac{c}{\nu^2}\,|\Delta\nu| = \dfrac{\lambda^2\, |\Delta\nu|}{c} = \dfrac{(430\ \text{nm})^2 \times 1.6 \times 10^8\ \text{s}^{-1}}{2.998 \times 10^{17}\ \text{nm/s}} \approx 9.9 \times 10^{-5}\ \text{nm}$

Q14.6c

Calculate the emission wavelength of an electronically excited molecule if the uncertainty in frequency ($\Delta\nu$) is $5.6 \times 10^6\ \text{s}^{-1}$ and the uncertainty in wavelength ($\Delta\lambda$) is $4 \times 10^{-6}$ nm.
(Given: $c = 3 \times 10^8\ \text{m/s} = 3 \times 10^{17}\ \text{nm/s}$.)

S14.6c

Since $\nu = c/\lambda$, we have $|\Delta\lambda| = \dfrac{\lambda^2\, |\Delta\nu|}{c}$. Then

$4 \times 10^{-6}\ \text{nm} = \dfrac{\lambda^2 \times 5.6 \times 10^6\ \text{s}^{-1}}{3 \times 10^{17}\ \text{nm/s}} \quad \Rightarrow \quad \lambda = 463\ \text{nm}$

Q14.8a

In the gas phase, using electronic spectroscopy, what is observed in the electronic spectra of diatomic molecules at high resolution in terms of structure and bands?

Q14.8b

Explain why decreasing the temperature increases the resolution of visible and UV spectra.

S14.8b

Decreasing the temperature lowers the kinetic energy of the molecules. The effects of Doppler and collisional broadening therefore decrease, and the resolution is enhanced.

Q14.8c

Measuring a UV spectrum at low temperature is a good way to enhance its resolution. Explain, and give one more example.

Q14.10

In a 5.0 mM solution, a solute absorbs 90% of visible light as the beam passes through an 80 mm cell. Calculate the molar absorptivity of this solute.

S14.10

The transmittance is $T = 1.00 - 0.90 = 0.10$, so the absorbance is $A = -\log T = -\log(0.10) = 1.0$. Next, use the Beer-Lambert law to determine the molar absorptivity:

$\epsilon = \dfrac{A}{bc} = \dfrac{1.0}{(8.0\ \text{cm})(5.0 \times 10^{-3}\ \text{M})} = 25\ \text{L mol}^{-1}\,\text{cm}^{-1}$

Q14.10

Calculate the absorbance $A$ for a molar absorptivity of $6.17\ \text{L mol}^{-1}\text{cm}^{-1}$ and $c = 0.52$ M when the light passes through a 2.3-cm cell.

S14.10

$A = \epsilon b c = (6.17)(2.3)(0.52) = 7.38$

Q14.12

Calculate $E_{vib}$ for a harmonic oscillator if $v = 2$ and $\nu = 3.24 \times 10^{13}\ \text{Hz}$.

S14.12

$E_{vib} = \left(v + \tfrac{1}{2}\right)h\nu = \left(2 + \tfrac{1}{2}\right)(6.626 \times 10^{-34}\ \text{J s})(3.24 \times 10^{13}\ \text{s}^{-1}) = 5.37 \times 10^{-20}\ \text{J}$

Q14.24

Given the molecules CO2 and H2O, show the fundamental vibrational modes for each molecule and explain which are IR active.

Q14.26

Given the molecules He2, F2, H2, O2, and Li2, rank them from lowest to highest fundamental frequency of vibration. Show your work.

S14.26

Assuming comparable force constants, this can be determined from the reduced masses: the molecule with the lowest reduced mass has the highest fundamental frequency of vibration, and vice versa.

$\mu_{He_2} = \dfrac{(4.003\ \text{amu})(4.003\ \text{amu})}{(4.003\ \text{amu}) + (4.003\ \text{amu})} = 2.0015\ \text{amu}$

A similar calculation applies to the other molecules, giving the ranking, lowest to highest: F2 < O2 < Li2 < He2 < H2.

14.5: Nuclear Magnetic Resonance

Q14.12

A chemist scanned a sample in an NMR machine and obtained a signal-to-noise (S/N) ratio of 2.0. How long would it take the chemist to generate a spectrum with an S/N ratio of 40, assuming 5 minutes per scan?

S14.12

The signal-to-noise ratio grows with the number of scans n as

$\left(\frac{S}{N}\right)_n = \frac{nS}{\sqrt{n}\,N} = \sqrt{n}\left(\frac{S}{N}\right)_1$

$\sqrt{n} = \left(\frac{S}{N}\right)_n \div \left(\frac{S}{N}\right)_1 = 40 \div 2 = 20 \quad \Rightarrow \quad n = 400$

At 5 minutes per scan, it would take 400 × 5 min = 2000 min ≈ 33.3 h to generate this spectrum.

Q14.28

Explain why pentacene crystals are blue in color, while tetracene crystals are orange. (Structures of pentacene and tetracene shown.)

Q14.29

Use the particle-in-a-one-dimensional-box model to calculate the longest-wavelength peak in the absorption spectrum of β-carotene (structure shown below).

Q14.32a

What does chemical shift measure? What can influence chemical shift?
S14.32a

Chemical shift measures the difference in resonance frequency between a nucleus of interest and a reference nucleus. It is influenced by electron shielding: the more electron density is pulled away from the proton of interest, the greater the chemical shift.

Q14.32b

You operate an NMR spectrometer at 100 MHz and find a signal of a compound at an unknown frequency downfield from the TMS peak. You know its chemical shift is 6.5 ppm. Calculate the unknown frequency for your lab report.

Q14.34b

Assuming the precession frequency is 100 MHz and $\gamma = 10.0 \times 10^5\ \text{T}^{-1}\text{s}^{-1}$, calculate the Larmor frequency for ¹⁷O.

Q14.32c

Identify the most shielded and most deshielded hydrogens in this compound:

S14.32c

The hydrogens on the far right end are the most shielded, and the hydrogens on the carbon bearing the chlorine are the most deshielded.

Q14.32d

The NMR signal of a compound is 280 Hz downfield from TMS on a spectrometer operating at 70 MHz. Find its chemical shift in ppm.

S14.32d

$δ = \dfrac{\nu - \nu_{ref}}{\nu_{spec}} \times 10^6 = \dfrac{280\ \text{Hz}}{70 \times 10^6\ \text{Hz}} \times 10^6 = 4.0\ \text{ppm}$

Q14.34

1. Calculate the magnetic field $B_0$ that corresponds to a precession frequency of 600 MHz for ¹H.
2. What field strength (in tesla) is needed to generate a ¹H frequency of 500 MHz?
3. How do spin-spin relaxation and spin-lattice relaxation differ from each other?
4. The ¹H NMR spectrum of toluene shows two peaks, from the methyl and aromatic protons, recorded at 60 MHz and 1.41 T. Given this information, what would the magnetic field be at 400 MHz?
5. What is the difference between ¹³C and ¹H NMR?

S14.34

1. $B_0 = 14.1$ T.
2. Using the equation from part 1 and solving for $B_0$ gives a field strength of 11.74 T.
3. See the section on relaxation.
4. Since the NMR frequency is directly proportional to the magnetic field strength, the field at 400 MHz is $B_0 = (400\ \text{MHz}/60\ \text{MHz}) \times 1.41\ \text{T} = 9.40\ \text{T}$.
5. See the section on applications.

Q14.34

Calculate the field strength in tesla needed to generate a ¹H frequency of 300 MHz.

S14.34

$B_0 = \dfrac{2\pi\nu}{\gamma} = \dfrac{2\pi (300 \times 10^6\ \text{s}^{-1})}{26.75 \times 10^7\ \text{T}^{-1}\text{s}^{-1}} = 7.04\ \text{T}$

Q14.36

On a 4.7 T spectrometer (200 MHz for ¹H), calculate the difference in frequency for two protons whose δ values differ by 1.25 ppm.

S14.36

$\Delta\nu = \dfrac{\Delta\delta \times \nu_{spec}}{10^6} = \dfrac{1.25 \times (200 \times 10^6\ \text{Hz})}{10^6} = 2.5 \times 10^2\ \text{Hz}$

Q14.38

Sketch the NMR spectrum of isobutyl alcohol, given the chemical shifts: -CH3 0.90 ppm, -CH- 1.68 ppm, -CH2- 3.26 ppm, O-H 4.49 ppm.

14.6: Electron Spin Resonance

Q14.42

You performed an electron spin resonance (ESR) experiment with the di-tert-butyl nitroxide radical and obtained 3 lines of equal intensity. You then combined the di-tert-butyl nitroxide radical with ascorbic acid and were about to run another ESR experiment when Jill stopped you. Why did Jill stop you?

14.7: Fluorescence and Phosphorescence

Q14.43

Someone has handed you data on the luminescence of a material as a function of time. How can you decide whether the luminescence process was fluorescence or phosphorescence?
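Many of the worked answers above are one-line arithmetic, so a short Python sketch can serve as a check; it reintroduces no new physics, only the same formulas used in the solutions.

```python
c = 2.998e8  # speed of light, m/s

# Q14.2: 700 nm light
lam = 700e-9                              # wavelength, m
print(1 / (lam * 100))                    # wavenumber: ~1.43e4 cm^-1
print(c / lam)                            # frequency:  ~4.28e14 s^-1

# Q14.4b: percent transmittance, %T = 100 * 10^(-A)
print([round(100 * 10**(-A), 3) for A in (4.0, 0.23, 1.6)])

# Q14.10: Beer-Lambert, eps = A / (b c), with A = 1.0, b = 8.0 cm, c = 5.0 mM
print(1.0 / (8.0 * 5.0e-3))               # 25 L mol^-1 cm^-1

# Q14.12 (NMR signal averaging): S/N grows as sqrt(n)
n = (40 / 2) ** 2                         # scans to go from S/N = 2 to 40
print(n, "scans =", n * 5 / 60, "hours")  # 400 scans, ~33.3 h
```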
Photochemistry is the branch of chemistry concerned with the chemical effects of light. Generally, this term is used to describe a chemical reaction caused by absorption of high-energy light. Photobiology is the scientific study of the interactions of light with living organisms and includes the study of photosynthesis, visual processing, circadian rhythms, bioluminescence, and the effects of ultraviolet radiation. Thumbnail: Photochemical immersion well reactor (50 mL) with a mercury-vapor lamp. (CC BY-SA 4.0, Masohe).

15: Photochemistry and Photobiology

So far, we have come across one big rule of photon absorbance: in order to be absorbed, a photon's energy has to match an energy difference within the compound that is absorbing it. In the case of visible or ultraviolet light, the energy of a photon is roughly in the region that would be appropriate to promote an electron to a higher energy level. Different wavelengths would be able to promote different electrons, depending on the energy difference between an occupied electronic energy level and an unoccupied one. Other types of electromagnetic radiation would not be able to promote an electron, but they are coupled to other events. For example, absorption of infrared light is tied to vibrational energy levels, and microwave radiation is tied to rotational energy levels in molecules. Thus, one reason a photon may or may not be absorbed has to do with whether its energy corresponds to the available energy differences within the molecule or ion that it encounters.

Franck-Condon: Electronic and Vibrational Coupling

Photons face other limitations. One of these is a moderate variation on our main rule, called the Franck-Condon principle. According to this idea, when an electron is excited from its normal position, the ground state, to a higher energy level, the optimal positions of atoms in the molecule may need to shift. Because electronic motion is much faster than nuclear motion, however, any shifting of atoms needed to optimize positions as they should be in the excited state will have to wait until after the electron gets excited. In that case, when the electron lands and the atoms aren't yet in their lowest energy positions for the excited state, the molecule will find itself in an excited vibrational state as well as an excited electronic state. That means the required energy for excitation doesn't just correspond to the difference in electronic energy levels; it is fine-tuned to reach a vibrational energy level, which is quantized as well.

• The Franck-Condon principle states that electronic transitions are vertical.
• A vertical transition is one in which none of the nuclei move while the electron journeys from one state to another.
• A vertical transition may begin in a vibrational ground state of an electronic ground state and end in a vibrational excited state of an electronic excited state.

LaPorte: Orbital Symmetry

There are other restrictions on electronic excitation. Symmetry selection rules, for instance, state that the donor orbital (from which the electron comes) and the acceptor orbital (to which the electron is promoted) must have different symmetry. The reasons for this rule are based in the mathematics of quantum mechanics. What constitutes the same symmetry vs. different symmetry is a little more complicated than we will get into here. Briefly, let's just look at one "symmetry element" and compare how two orbitals might differ with respect to that element.
If an orbital is centrosymmetric, one can imagine each point on the orbital reflecting through the very centre of the orbital to a point on the other side. At the end of the operation, the orbital appears unchanged. That means the orbital is symmetric with respect to a centre of inversion. If we do the same thing with a sigma antibonding orbital, things turn out differently. In the drawing, the locations of the atoms are labelled A and B, but the symmetry of the orbital itself doesn't depend on that. If we imagine sending each point on this orbital through the very centre to the other side, we arrive at a picture that looks exactly the opposite of what we started with. These two orbitals have different symmetry, and a transition from one to the other is allowed by symmetry.

Problem RO2.1.

Decide whether each of the following orbitals is centrosymmetric.

a) an s orbital b) a p orbital c) a d orbital d) a π orbital e) a π* orbital

Problem RO2.2.

Decide whether each of the following transitions would be allowed by symmetry.

a) π → π* b) p → π* c) p → σ* d) d → d

Symmetry selection rules are in reality more like "strong suggestions." They depend on the symmetry of the molecule remaining strictly static, but all kinds of distortions occur through molecular vibrations. Nevertheless, these rules influence the likelihood of a given transition, and that likelihood in turn influences the extinction coefficient, ε.

| Transition | ε, extinction coefficient |
|------------|---------------------------|
| π → π* | 3,000-25,000 M⁻¹ cm⁻¹ |
| p → π* | 20-150 M⁻¹ cm⁻¹ |
| p → σ* | 100-7,000 M⁻¹ cm⁻¹ |
| d → d | 5-400 M⁻¹ cm⁻¹ |

Spin State

Let's take a quick look at one last rule about electronic transitions. This rule concerns the spin of the excited electron, or more correctly, the "spin state" of the excited species. The spin state describes the number of unpaired electrons in the molecule or ion.

| Number of unpaired electrons | Spin state |
|------------------------------|------------|
| 0 | singlet |
| 1 | doublet |
| 2 | triplet |
| 3 | quartet |

The rule says that in an electronic transition, the spin state of the molecule must be preserved. That means if there are no unpaired electrons before the transition, then the excited species must also have no unpaired electrons. If there are two unpaired electrons before the transition, the excited state must also have two unpaired electrons.
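The spin-state table follows from the multiplicity formula $2S + 1$ with total spin $S = n/2$ for $n$ unpaired electrons; a few lines of Python make the pattern explicit (this is just the table above restated, not part of the original text).

```python
# Spin state from the number of unpaired electrons: S = n/2, and the
# multiplicity 2S + 1 gives singlet (1), doublet (2), triplet (3), ...
NAMES = {1: "singlet", 2: "doublet", 3: "triplet", 4: "quartet"}

def spin_state(n_unpaired: int) -> str:
    multiplicity = n_unpaired + 1        # 2*(n/2) + 1
    return NAMES.get(multiplicity, f"multiplicity {multiplicity}")

for n in range(4):
    print(n, "unpaired electron(s):", spin_state(n))
```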
Learning Objectives

1. Define the following:
   a. oxygenic photoautotroph
   b. anoxygenic photoautotroph
   c. photon
2. Name the two stages of photosynthesis.
3. State how all radiations in the electromagnetic spectrum travel.
4. State what constitutes visible light.
5. Define photon and describe what happens when photons of visible light energy strike certain atoms of pigments during photosynthesis, and how this can lead to the generation of ATP.
6. Describe the structure of a chloroplast and list the pigments it may contain.
7. Give the overall reaction for photosynthesis.
8. State the reactants and the products for photosynthesis and indicate which are oxidized and which are reduced.

For the light-dependent reactions:

1. Briefly describe the overall function of the light-dependent reactions in photosynthesis and state where in the chloroplast they occur.
2. State the reactants and the products for the light-dependent reactions.
3. Describe an antenna complex and state the function of the reaction center.
4. Briefly describe the overall function of Photosystem II in the light-dependent reactions of photosynthesis.
5. Briefly describe how ATP is generated by chemiosmosis during the light-dependent reactions of photosynthesis.
6. Briefly describe the overall function of Photosystem I in the light-dependent reactions of photosynthesis.
7. Compare noncyclic photophosphorylation and cyclic photophosphorylation in terms of the Photosystems involved and the products produced.

For the light-independent reactions:

1. Briefly describe the overall function of the light-independent reactions in photosynthesis and state where in the chloroplast they occur.
2. State how the light-dependent and light-independent reactions are linked during photosynthesis.
3. State the reactants and the products for the light-independent reactions.
4. Briefly describe the following stages of the Calvin cycle: CO2 fixation, production of G3P, and regeneration of RuBP.
5. State the significance of glyceraldehyde-3-phosphate (G3P) in the Calvin cycle.

Autotrophs are organisms that are able to synthesize organic molecules from inorganic materials. Photoautotrophs absorb and convert light energy into the stored energy of chemical bonds in organic molecules through a process called photosynthesis. Plants, algae, and the bacteria known as cyanobacteria are oxygenic photoautotrophs: they synthesize organic molecules from inorganic materials, convert light energy into chemical energy, use water as an electron source, and generate oxygen as an end product of photosynthesis. Some bacteria, such as the green and purple bacteria, are anoxygenic phototrophs. Unlike the oxygenic plants, algae, and cyanobacteria, anoxygenic phototrophs do not use water as an electron source and therefore do not evolve oxygen during photosynthesis; their electrons come from compounds such as hydrogen gas, hydrogen sulfide, and reduced organic molecules. In this section on photosynthesis, we will be concerned with the oxygenic phototrophs.

Photosynthesis is composed of two stages: the light-dependent reactions and the light-independent reactions. The light-dependent reactions convert light energy into chemical energy, producing ATP and NADPH. The light-independent reactions use the ATP and NADPH from the light-dependent reactions to reduce carbon dioxide and convert the energy to the chemical bond energy in carbohydrates such as glucose. Before we get to these photosynthetic reactions, however, we need to understand a little about the electromagnetic spectrum and chloroplasts.
The Electromagnetic Spectrum

Visible light constitutes a very small portion of a spectrum of radiation known as the electromagnetic spectrum. All radiations in the electromagnetic spectrum travel in waves, and different portions of the spectrum are categorized by their wavelength. A wavelength is the distance from the peak of one wave to that of the next. At one end of the spectrum are television and radio waves, with longer wavelengths and low energy. At the other end of the spectrum are gamma rays, with a very short wavelength and a great deal of energy. Visible light is the range of wavelengths of the electromagnetic spectrum that humans can see, a mixture of wavelengths ranging from 380 nanometers to 760 nanometers. It is this light that is used in photosynthesis.

Light and other types of radiation are composed of individual packets of energy called photons. The shorter the wavelength of the radiation, the greater the energy per photon. As will be seen shortly, when photons of visible light energy strike certain atoms of pigments during photosynthesis, that energy may push an electron from that atom to a higher energy level, where it can be picked up by an electron acceptor in an electron transport chain (see Fig. 15.2.1). ATP can then be generated by chemiosmosis.

Fig. 15.2.1: Interaction Between a Photon and an Atom. When photons of visible light energy strike certain atoms of pigments during photosynthesis, that energy may push an electron from that atom to a higher energy level where it can be picked up by an electron acceptor in an electron transport chain.

Chloroplasts

In eukaryotic cells, photosynthesis takes place in organelles called chloroplasts. Like mitochondria, chloroplasts are surrounded by an inner and an outer membrane. The inner membrane encloses a fluid-filled region called the stroma, which contains enzymes for the light-independent reactions of photosynthesis. Infolding of this inner membrane forms interconnected disk-like sacs called thylakoids, often arranged in stacks called grana. The thylakoid membrane, which encloses a fluid-filled thylakoid interior space, contains chlorophyll and other photosynthetic pigments as well as electron transport chains. The light-dependent reactions of photosynthesis occur in the thylakoids. The outer membrane of the chloroplast encloses the intermembrane space between the inner and outer chloroplast membranes (see Fig. 2).

The thylakoid membranes contain several pigments capable of absorbing visible light. Chlorophyll is the primary pigment of photosynthesis. Chlorophyll absorbs light in the blue and red regions of the visible light spectrum and reflects green light. There are two major types of chlorophyll: chlorophyll a, which initiates the light-dependent reactions of photosynthesis, and chlorophyll b, an accessory pigment that also participates in photosynthesis. The thylakoid membranes also contain other accessory pigments. Carotenoids are pigments that absorb blue and green light and reflect yellow, orange, or red. Phycocyanins absorb green and yellow light and reflect blue or purple. These accessory pigments absorb light energy and transfer it to chlorophyll. Photosynthetic prokaryotic cells do not possess chloroplasts; instead, thylakoid membranes are usually arranged around the periphery of the bacterium as infoldings of the cytoplasmic membrane.
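Since the energy per photon is what drives the chemistry that follows, it is worth making the wavelength-energy relation $E = hc/\lambda$ concrete. The short Python sketch below evaluates it at the visible-light limits quoted above and at 680 and 700 nm, the absorption peaks of the P680 and P700 pigments discussed in the next section; the per-mole scaling is a standard convention, not something stated in the text.

```python
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number

def photon_energy_kJ_per_mol(wavelength_nm):
    """E = h c / lambda, scaled to one mole of photons."""
    return h * c / (wavelength_nm * 1e-9) * N_A / 1000

# Visible-light limits quoted above, plus the P680/P700 absorption peaks
for lam in (380, 680, 700, 760):
    print(f"{lam} nm -> {photon_energy_kJ_per_mol(lam):.0f} kJ/mol")
```

The output (roughly 315 kJ/mol at 380 nm down to 157 kJ/mol at 760 nm) shows why the shorter-wavelength end of the visible spectrum is the more energetic one.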
Photosynthesis

As mentioned above, photoautotrophs use sunlight as a source of energy and, through the process of photosynthesis, reduce carbon dioxide to form carbohydrates such as glucose. The radiant energy is converted to the chemical bond energy within glucose and other organic molecules. The overall reaction for photosynthesis is as follows:

$6\,CO_2 + 12\,H_2O \xrightarrow{\text{light, chlorophyll}} C_6H_{12}O_6 + 6\,O_2 + 6\,H_2O$

Note that carbon dioxide (CO2) is reduced to produce glucose (C6H12O6), while water (H2O) is oxidized to produce oxygen (O2). Photosynthesis is composed of two stages: the light-dependent reactions and the light-independent reactions. We will now look at the role of each in the next two sections.

Light-Dependent Reactions

The exergonic light-dependent reactions of photosynthesis convert light energy into chemical energy, producing ATP and NADPH. These reactions occur in the thylakoids of the chloroplasts. The products of the light-dependent reactions, ATP and NADPH, are both required for the endergonic light-independent reactions. The light-dependent reactions can be summarized as follows:

12 H2O + 12 NADP+ + 18 ADP + 18 Pi + light (chlorophyll) → 6 O2 + 12 NADPH + 18 ATP

The light-dependent reactions involve two photosystems, called Photosystem I and Photosystem II. These photosystems include units called antenna complexes, composed of chlorophyll molecules and accessory pigments located in the thylakoid membrane. Photosystem I contains chlorophyll a molecules called P700 because they have an absorption peak of 700 nanometers. Photosystem II contains chlorophyll a molecules referred to as P680 because they have an absorption peak of 680 nanometers. Each antenna complex is able to trap light and transfer the energy to a complex of chlorophyll molecules and proteins called the reaction center (see Fig. 3). As photons are absorbed by chlorophyll and accessory pigments, that energy is eventually transferred to the reaction center, where it excites an electron to a higher energy level. There the electron may be accepted by an electron acceptor molecule of an electron transport chain (see Fig. 3), where the light energy is converted to chemical energy by chemiosmosis.

The most common light-dependent reaction in photosynthesis is called noncyclic photophosphorylation. Noncyclic photophosphorylation involves both Photosystem I and Photosystem II and produces ATP and $NADPH$. During noncyclic photophosphorylation, the generation of ATP is coupled to a one-way flow of electrons from $H_2O$ to $NADP^+$. We will now look at Photosystems I and II and their roles in noncyclic photophosphorylation.

1. As photons are absorbed by pigment molecules in the antenna complexes of Photosystem II, excited electrons from the reaction center are picked up by the primary electron acceptor of the Photosystem II electron transport chain. During this process, Photosystem II splits molecules of H2O into 1/2 O2, 2H+, and 2 electrons. These electrons continuously replace the electrons being lost by the P680 chlorophyll a molecules in the reaction centers of the Photosystem II antenna complexes (see Fig. 4). During this process, ATP is generated by the Photosystem II electron transport chain and chemiosmosis.
According to the chemiosmotic theory, as the electrons are transported down the electron transport chain, some of the energy released is used to pump protons across the thylakoid membrane from the stroma of the chloroplast to the thylakoid interior space, producing a proton gradient or proton motive force. As the accumulated protons in the thylakoid interior space pass back across the thylakoid membrane to the stroma through ATP synthase complexes, this proton motive force is used to generate ATP from ADP and Pi (see Fig. 4 and Fig. 5).

2. Meanwhile, photons are also being absorbed by pigment molecules in the antenna complex of Photosystem I, and excited electrons from the reaction center are picked up by the primary electron acceptor of the Photosystem I electron transport chain. The electrons being lost by the P700 chlorophyll a molecules in the reaction centers of Photosystem I are replaced by the electrons traveling down the Photosystem II electron transport chain. The electrons transported down the Photosystem I electron transport chain combine with 2H+ from the surrounding medium and NADP+ to produce NADPH + H+ (see Fig. 4).

Cyclic photophosphorylation occurs less commonly in plants than noncyclic photophosphorylation, most likely when there is too little NADP+ available. It is also seen in certain photosynthetic bacteria. Cyclic photophosphorylation involves only Photosystem I and generates ATP but not NADPH. As the electrons from the reaction center of Photosystem I are picked up by the electron transport chain, they are transported back to the reaction center chlorophyll. As the electrons are transported down the electron transport chain, some of the energy released is used to pump protons across the thylakoid membrane from the stroma of the chloroplast to the thylakoid interior space, producing a proton gradient or proton motive force. As the accumulated protons in the thylakoid interior space pass back across the thylakoid membrane to the stroma through ATP synthase complexes, this energy is used to generate ATP from ADP and Pi (see Fig. 6).

Light-Independent Reactions

The endergonic light-independent reactions of photosynthesis use the ATP and NADPH synthesized during the exergonic light-dependent reactions to provide the energy for the synthesis of glucose and other organic molecules from inorganic carbon dioxide and water. This is done by "fixing" carbon atoms from CO2 to the carbon skeletons of existing organic molecules. These reactions occur in the stroma of the chloroplasts. The light-independent reactions can be summarized as follows:

12 NADPH + 18 ATP + 6 CO2 → C6H12O6 (glucose) + 12 NADP+ + 18 ADP + 18 Pi + 6 H2O

Most plants use the Calvin (C3) cycle to fix carbon dioxide; C3 refers to the importance of 3-carbon molecules in the cycle. Some plants, known as C4 plants and CAM plants, differ in their initial carbon-fixation step.

1. The Calvin (C3) Cycle

There are three stages to the Calvin cycle: 1) CO2 fixation; 2) production of G3P; and 3) regeneration of RuBP. We will now look at each stage.
Stage 1: CO2 fixation

To begin the Calvin cycle, a molecule of CO2 reacts with a five-carbon compound called ribulose bisphosphate (RuBP), producing an unstable six-carbon intermediate which immediately breaks down into two molecules of the three-carbon compound phosphoglycerate (PGA) (see Fig. 7). The carbon that was part of inorganic CO2 is now part of the carbon skeleton of an organic molecule. The enzyme for this reaction is ribulose bisphosphate carboxylase, or Rubisco. A total of six molecules of CO2 must be fixed this way in order to produce one molecule of the six-carbon sugar glucose.

Stage 2: Production of G3P from PGA

The energy from ATP and the reducing power of NADPH (both produced during the light-dependent reactions) are now used to convert the molecules of PGA to glyceraldehyde-3-phosphate (G3P), another three-carbon compound (see Fig. 7). For every six molecules of CO2 that enter the Calvin cycle, twelve molecules of G3P are produced, but only two of them represent a net gain. Most of the G3P produced during the Calvin cycle (10 of every 12) is used to regenerate the RuBP so that the cycle can continue (see Fig. 7). The remaining molecules of G3P are used to synthesize glucose and other organic molecules. As can be seen in Fig. 7, two molecules of the three-carbon G3P can be used to synthesize one molecule of the six-carbon sugar glucose. G3P is also used to synthesize the other organic molecules required by photoautotrophs (see Fig. 8).

Stage 3: Regeneration of RuBP from G3P

As mentioned in the previous step, most of the G3P produced during the Calvin cycle (10 of every 12) is used to regenerate RuBP so that the cycle may continue (see Fig. 7). Ten molecules of the three-carbon compound G3P eventually form six molecules of the five-carbon compound ribulose phosphate (RP) (see Fig. 7). Each molecule of RP then becomes phosphorylated by the hydrolysis of ATP to produce ribulose bisphosphate (RuBP), the starting compound for the Calvin cycle (see Fig. 7).
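The carbon bookkeeping in the three stages above can be verified in a few lines. The sketch below simply counts carbons under the stoichiometry given in the text (6 CO2 fixed, 12 G3P formed, 10 recycled to RuBP, 2 to glucose); it is pure arithmetic, not a chemistry library.

```python
# Carbon bookkeeping for one pass of the Calvin cycle yielding one glucose
# (molecule counts x carbons per molecule)
fixed = 6 * 1            # 6 CO2, one carbon each
rubp = 6 * 5             # 6 RuBP, five carbons each
pga = 12 * 3             # 12 PGA from carboxylation and cleavage
g3p = 12 * 3             # 12 G3P after reduction with ATP/NADPH
to_rubp = 10 * 3         # 10 G3P recycled to regenerate 6 RuBP
to_glucose = 2 * 3       # 2 G3P left over -> 1 glucose (C6)

assert fixed + rubp == pga == g3p    # 36 carbons at every stage
assert to_rubp == 6 * 5              # 30 carbons return to RuBP
assert to_glucose == 6               # 6 carbons end up in glucose
print("carbon balance checks out")
```

The asserts all pass, which is another way of seeing why RP must be a five-carbon compound: ten three-carbon G3P molecules carry exactly the 30 carbons needed for six five-carbon sugars.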
Contributors and Attributions

• Dr. Gary Kaiser (COMMUNITY COLLEGE OF BALTIMORE COUNTY, CATONSVILLE CAMPUS)
Vision is such an everyday occurrence that we seldom stop to think and wonder how we are able to see the objects that surround us. Yet the vision process is a fascinating example of how light can produce molecular changes. The retina contains the molecules that undergo a chemical change upon absorbing light, but it is the brain that actually makes sense of the visual information to create an image.

Introduction

Light is one of the most important resources for civilization; it provides energy, delivered by the sun, and influences our everyday lives. Living organisms sense light from the environment with photoreceptors. Light carries energy in amounts that depend on wavelength: from long wavelength to short wavelength, the energy increases, and the region from 400 nm to 700 nm is the visible spectrum. In vision, light is the stimulus input: light energy entering the eyes stimulates the photoreceptors there. Light energy can also drive chemical conversions. Vitamin A, also known as retinol (the anti-dry-eye vitamin), is a required nutrient for human health. The precursor of vitamin A is present in a variety of plants as carotene. Vitamin A is critical for vision because it is needed by the retina of the eye: retinol can be converted to retinal, and retinal is a chemical necessary for rhodopsin. As light enters the eye, the 11-cis-retinal is isomerized to the all-trans form.

Mechanism of Vision

The molecule cis-retinal can absorb light at a specific wavelength. When visible light hits the cis-retinal, it undergoes an isomerization, or change in molecular arrangement, to all-trans-retinal. The new trans form does not fit as well into the protein, and so a series of geometry changes in the protein begins. The resulting complex is referred to as bathorhodopsin (there are other intermediates in this process, but we'll ignore them for now). The reaction above shows a lysine side chain of the opsin reacting with 11-cis-retinal when stimulated. By removing the oxygen atom from the retinal and two hydrogen atoms from the free amino group of the lysine, the linkage shown in the picture above is formed; it is called a Schiff base.

Signal Transduction Pathway

As the protein changes its geometry, it initiates a cascade of biochemical reactions that results in changes in charge, so that a large potential difference builds up across the plasma membrane. This potential difference is passed along to an adjoining nerve cell as an electrical impulse. The nerve cell carries this impulse to the brain, where the visual information is interpreted. The light image is mapped onto the surface of the retina by activating a series of light-sensitive cells known as rods and cones, or photoreceptors. The rods and cones convert the light into electrical impulses, which are transmitted to the brain via nerve fibers. The brain then determines which nerve fibers carried the electrical impulses activated by light at particular photoreceptors, and creates an image. The retina is lined with many millions of photoreceptor cells of two types: 7 million cones provide color information and sharpness of images, and 120 million rods are extremely sensitive detectors of white light that provide night vision. The tops of the rods and cones contain a region filled with membrane-bound discs, which contain the molecule cis-retinal bound to a protein called opsin.
The resulting complex is called rhodopsin, or "visual purple". In human eyes, the rods and cones react to light stimulation, and a series of chemical reactions happens in the cells. These cells receive light and pass signals on to other receiver cells. This chain of processes is called a signal transduction pathway, a mechanism that describes the ways cells react and respond to stimulation.
Learning Objectives

• Describe the biological impact of ionizing radiation.
• Define units for measuring radiation exposure.
• Explain the operation of common tools for detecting radioactivity.
• List common sources of radiation exposure in the US.

The increased use of radioisotopes has led to increased concerns over the effects of these materials on biological systems (such as humans). All radioactive nuclides emit high-energy particles or electromagnetic waves. When this radiation encounters living cells, it can cause heating, break chemical bonds, or ionize molecules. The most serious biological damage results when these radioactive emissions fragment or ionize molecules. For example, alpha and beta particles emitted from nuclear decay reactions possess much higher energies than ordinary chemical bond energies. When these particles strike and penetrate matter, they produce ions and molecular fragments that are extremely reactive. The damage this does to biomolecules in living organisms can cause serious malfunctions in normal cell processes, taxing the organism's repair mechanisms and possibly causing illness or even death (Figure $1$).

Ionizing vs. Nonionizing Radiation

There is a large difference in the magnitude of the biological effects of nonionizing radiation (for example, light and microwaves) and ionizing radiation, emissions energetic enough to knock electrons out of molecules (for example, α and β particles, γ rays, X-rays, and high-energy ultraviolet radiation) (Figure $2$). Energy absorbed from nonionizing radiation speeds up the movement of atoms and molecules, which is equivalent to heating the sample. Although biological systems are sensitive to heat (as we might know from touching a hot stove or spending a day at the beach in the sun), a large amount of nonionizing radiation is necessary before dangerous levels are reached. Ionizing radiation, however, may cause much more severe damage by breaking bonds or removing electrons in biological molecules, disrupting their structure and function. The damage can also be done indirectly, by first ionizing H2O (the most abundant molecule in living organisms), which forms an H2O+ ion that reacts with water, forming a hydronium ion and a hydroxyl radical:

$\ce{H2O^{+} + H2O -> H3O^{+} + \cdot OH}$

Biological Effects of Exposure to Radiation

Radiation can harm either the whole body (somatic damage) or eggs and sperm (genetic damage). Its effects are more pronounced in cells that reproduce rapidly, such as the stomach lining, hair follicles, bone marrow, and embryos. This is why patients undergoing radiation therapy often feel nauseous or sick to their stomach, lose hair, have bone aches, and so on, and why particular care must be taken when undergoing radiation therapy during pregnancy.

Different types of radiation have differing abilities to pass through material (Figure $4$). A very thin barrier, such as a sheet or two of paper, or the top layer of skin cells, usually stops alpha particles. Because of this, alpha particle sources are usually not dangerous if outside the body, but are quite hazardous if ingested or inhaled (see the Chemistry in Everyday Life feature on Radon Exposure). Beta particles will pass through a hand, or a thin layer of material like paper or wood, but are stopped by a thin layer of metal. Gamma radiation is very penetrating and can pass through a thick layer of most materials. Some high-energy gamma radiation is able to pass through a few feet of concrete.
Certain dense, high atomic number elements (such as lead) can effectively attenuate gamma radiation with thinner material and are used for shielding. The ability of various kinds of emissions to cause ionization varies greatly, and some particles have almost no tendency to produce ionization. Alpha particles have about twice the ionizing power of fast-moving neutrons, about 10 times that of β particles, and about 20 times that of γ rays and X-rays.

For many people, one of the largest sources of exposure to radiation is from radon gas (Rn-222). Radon-222 is an α emitter with a half-life of 3.82 days. It is one of the products of the radioactive decay series of U-238, which is found in trace amounts in soil and rocks. The radon gas that is produced slowly escapes from the ground and gradually seeps into homes and other structures above. Since it is about eight times denser than air, radon gas accumulates in basements and lower floors, and slowly diffuses throughout buildings (Figure $5$). Radon is found in buildings across the country, with amounts dependent on location. The average concentration of radon inside houses in the US (1.25 pCi/L) is about three times the level found in outside air, and about one in six houses has radon levels high enough that remediation efforts to reduce the radon concentration are recommended. Exposure to radon increases one's risk of getting cancer (especially lung cancer), and high radon levels can be as bad for health as smoking a carton of cigarettes a day. Radon is the number one cause of lung cancer in nonsmokers and the second leading cause of lung cancer overall. Radon exposure is believed to cause over 20,000 deaths in the US per year.

Measuring Radiation Exposure

Several different devices are used to detect and measure radiation, including Geiger counters, scintillation counters (scintillators), and radiation dosimeters (Figure $6$). Probably the best-known radiation instrument, the Geiger counter (also called the Geiger-Müller counter) detects and measures radiation. Radiation causes the ionization of the gas in a Geiger-Müller tube, and the rate of ionization is proportional to the amount of radiation. A scintillation counter contains a scintillator (a material that emits light, or luminesces, when excited by ionizing radiation) and a sensor that converts the light into an electric signal. Radiation dosimeters also measure ionizing radiation and are often used to determine personal radiation exposure. Commonly used types are electronic, film badge, thermoluminescent, and quartz fiber dosimeters.

A variety of units are used to measure various aspects of radiation (Table $1$). The SI unit for rate of radioactive decay is the becquerel (Bq), with 1 Bq = 1 disintegration per second. The curie (Ci) and millicurie (mCi) are much larger units and are frequently used in medicine (1 curie = 1 Ci = $3.7 \times 10^{10}$ disintegrations per second). The SI unit for measuring radiation dose is the gray (Gy), with 1 Gy = 1 J of energy absorbed per kilogram of tissue. In medical applications, the radiation absorbed dose (rad) is more often used (1 rad = 0.01 Gy; 1 rad results in the absorption of 0.01 J/kg of tissue). The SI unit measuring tissue damage caused by radiation is the sievert (Sv). This takes into account both the energy and the biological effects of the type of radiation involved in the radiation dose.
Table $1$: Units Used for Measuring Radiation

| Measurement Purpose | Unit | Quantity Measured | Description |
|---|---|---|---|
| activity of source | becquerel (Bq) | radioactive decays or emissions | amount of sample that undergoes 1 decay/second |
| activity of source | curie (Ci) | radioactive decays or emissions | amount of sample that undergoes $3.7 \times 10^{10}$ decays/second |
| absorbed dose | gray (Gy) | energy absorbed per kg of tissue | 1 Gy = 1 J/kg tissue |
| absorbed dose | radiation absorbed dose (rad) | energy absorbed per kg of tissue | 1 rad = 0.01 J/kg tissue |
| biologically effective dose | sievert (Sv) | tissue damage | Sv = RBE × Gy |
| biologically effective dose | roentgen equivalent for man (rem) | tissue damage | rem = RBE × rad |

The roentgen equivalent for man (rem) is the unit for radiation damage that is used most frequently in medicine (100 rem = 1 Sv). Note that the tissue damage units (rem or Sv) include the energy of the radiation dose (rad or Gy) along with a biological factor referred to as the RBE (relative biological effectiveness), an approximate measure of the relative damage done by the radiation. These are related by:

$\text{number of rems}=\text{RBE} \times \text{number of rads} \label{Eq2}$

with RBE approximately 10 for α radiation, 2(+) for protons and neutrons, and 1 for β and γ radiation.

Example $1$: Amount of Radiation

Cobalt-60 (t1/2 = 5.26 y) is used in cancer therapy since the $\gamma$ rays it emits can be focused in small areas where the cancer is located. A 5.00-g sample of Co-60 is available for cancer treatment.

1. What is its activity in Bq?
2. What is its activity in Ci?

Solution

The activity is given by:

$\textrm{Activity}=λN=\left( \dfrac{\ln 2}{t_{1/2} } \right) N=\mathrm{\left( \dfrac{\ln 2}{5.26\ y} \right) \times 5.00 \ g=0.659\ \dfrac{g}{y} \ of\ \ce{^{60}Co} \text{ that decay}} \nonumber$

And to convert this to decays per second:

$\mathrm{0.659\; \frac{g}{y} \times \dfrac{y}{365 \;day} \times \dfrac{1\; day}{ 24\; hours} \times \dfrac{1\; h}{3,600 \;s} \times \dfrac{1\; mol}{59.9\; g} \times \dfrac{6.02 \times 10^{23} \;atoms}{1 \;mol} \times \dfrac{1\; decay}{1\; atom}} \nonumber$

$\mathrm{=2.10 \times 10^{14} \; \frac{decay}{s}} \nonumber$

(a) Since $\mathrm{1\; Bq = 1\; \frac{ decay}{s}}$, the activity in becquerel (Bq) is:

$\mathrm{2.10 \times 10^{14} \dfrac{decay}{s} \times \left(\dfrac{1\ Bq}{1 \; \frac{decay}{s}} \right)=2.10 \times 10^{14} \; Bq} \nonumber$

(b) Since $\mathrm{1\ Ci = 3.7 \times 10^{10}\; \frac{decay}{s}}$, the activity in curie (Ci) is:

$\mathrm{2.10 \times 10^{14} \frac{decay}{s} \times \left( \dfrac{1\ Ci}{3.7 \times 10^{10} \frac{decay}{s}} \right) =5.7 \times 10^3\;Ci} \nonumber$

Exercise $1$

Tritium is a radioactive isotope of hydrogen ($t_{1/2} = \mathrm{12.32\; years}$) that has several uses, including self-powered lighting, in which electrons emitted in tritium radioactive decay cause phosphorus to glow. Its nucleus contains one proton and two neutrons, and the atomic mass of tritium is 3.016 amu. What is the activity of a sample containing 1.00 mg of tritium (a) in Bq and (b) in Ci?

Answer a

$\mathrm{3.56 \times 10^{11}\; Bq}$

Answer b

$\mathrm{9.62\; Ci}$
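Example 1 and Exercise 1 use the same two steps: count the atoms in the sample, then multiply by the decay constant $\lambda = \ln 2 / t_{1/2}$. A minimal Python sketch reproducing both results, with isotope masses and half-lives taken from the problems above:

```python
import math

N_A = 6.022e23          # atoms per mole
CI = 3.7e10             # disintegrations per second per curie
YEAR = 365 * 24 * 3600  # seconds per year

def activity_Bq(mass_g, molar_mass, half_life_s):
    """A = lambda * N, with lambda = ln(2) / t_half."""
    N = mass_g / molar_mass * N_A
    return math.log(2) / half_life_s * N

A_Co = activity_Bq(5.00, 59.9, 5.26 * YEAR)       # Example 1
A_H3 = activity_Bq(1.00e-3, 3.016, 12.32 * YEAR)  # Exercise 1
print(f"Co-60: {A_Co:.2e} Bq = {A_Co/CI:.1e} Ci")  # ~2.1e14 Bq, ~5.7e3 Ci
print(f"H-3:   {A_H3:.2e} Bq = {A_H3/CI:.2f} Ci")  # ~3.6e11 Bq, ~9.6 Ci
```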
Effects of Long-term Radiation Exposure on the Human Body

The effects of radiation depend on the type, energy, and location of the radiation source, and the length of exposure. As shown in Figure $8$, the average person is exposed to background radiation, including cosmic rays from the sun and radon from uranium in the ground (see the Chemistry in Everyday Life feature on Radon Exposure); radiation from medical exposure, including CAT scans, radioisotope tests, X-rays, and so on; and small amounts of radiation from other human activities, such as airplane flights (which are bombarded by increased numbers of cosmic rays in the upper atmosphere), radioactivity from consumer products, and a variety of radionuclides that enter our bodies when we breathe (for example, carbon-14) or through the food chain (for example, potassium-40, strontium-90, and iodine-131).

A short-term, sudden dose of a large amount of radiation can cause a wide range of health effects, from changes in blood chemistry to death. Short-term exposure to tens of rems of radiation will likely cause very noticeable symptoms or illness; a dose of about 500 rems is estimated to have a 50% probability of causing the death of the victim within 30 days of exposure. Exposure to radioactive emissions has a cumulative effect on the body during a person’s lifetime, which is another reason why it is important to avoid any unnecessary exposure to radiation. Health effects of short-term exposure to radiation are shown in Table $2$.

Table $2$: Health Effects of Radiation

| Exposure (rem) | Health Effect | Time to Onset (Without Treatment) |
|---|---|---|
| 5–10 | changes in blood chemistry | |
| 50 | nausea | hours |
| 55 | fatigue | |
| 70 | vomiting | |
| 75 | hair loss | 2–3 weeks |
| 90 | diarrhea | |
| 100 | hemorrhage | |
| 400 | possible death | within 2 months |
| 1000 | destruction of intestinal lining | |
| | internal bleeding | |
| | death | 1–2 weeks |
| 2000 | damage to central nervous system | |
| | loss of consciousness | minutes |
| | death | hours to days |

It is impossible to avoid some exposure to ionizing radiation. We are constantly exposed to background radiation from a variety of natural sources, including cosmic radiation, rocks, medical procedures, consumer products, and even our own atoms. We can minimize our exposure by blocking or shielding the radiation, moving farther from the source, and limiting the time of exposure.

Summary

We are constantly exposed to radiation from a variety of naturally occurring and human-produced sources. This radiation can affect living organisms. Ionizing radiation is the most harmful because it can ionize molecules or break chemical bonds, which damages the molecule and causes malfunctions in cell processes. It can also create reactive hydroxyl radicals that damage biological molecules and disrupt physiological processes. Radiation can cause somatic or genetic damage, and is most harmful to rapidly reproducing cells. Types of radiation differ in their ability to penetrate material and damage tissue, with alpha particles the least penetrating but potentially most damaging, and gamma rays the most penetrating. Various devices, including Geiger counters, scintillators, and dosimeters, are used to detect and measure radiation and monitor radiation exposure. We use several units to measure radiation: becquerels or curies for rates of radioactive decay; grays or rads for energy absorbed; and rems or sieverts for biological effects of radiation. Exposure to radiation can cause a wide range of health effects, from minor to severe, including death. We can minimize the effects of radiation by shielding with dense materials such as lead, moving away from the source of radiation, and limiting time of exposure.
Footnotes

1. Source: US Environmental Protection Agency

Glossary

becquerel (Bq): SI unit for rate of radioactive decay; 1 Bq = 1 disintegration/s.

curie (Ci): larger unit for rate of radioactive decay frequently used in medicine; 1 Ci = $3.7 \times 10^{10}$ disintegrations/s.

Geiger counter: instrument that detects and measures radiation via the ionization produced in a Geiger-Müller tube.

gray (Gy): SI unit for measuring radiation dose; 1 Gy = 1 J absorbed/kg tissue.

ionizing radiation: radiation that can cause a molecule to lose an electron and form an ion.

millicurie (mCi): one-thousandth of a curie, also frequently used in medicine; 1 mCi = $3.7 \times 10^{7}$ disintegrations/s.

nonionizing radiation: radiation that speeds up the movement of atoms and molecules; it is equivalent to heating a sample, but is not energetic enough to cause the ionization of molecules.

radiation absorbed dose (rad): unit for measuring radiation dose, frequently used in medical applications; 1 rad = 0.01 Gy.

radiation dosimeter: device that measures ionizing radiation and is used to determine personal radiation exposure.

relative biological effectiveness (RBE): measure of the relative damage done by radiation.

roentgen equivalent man (rem): unit for radiation damage, frequently used in medicine; 1 rem = 0.01 Sv.

scintillation counter: instrument that uses a scintillator—a material that emits light when excited by ionizing radiation—to detect and measure radiation.

sievert (Sv): SI unit measuring tissue damage caused by radiation; takes energy and biological effects of radiation into account.

16: Macromolecules

Macromolecules are very large molecules, such as proteins, commonly created by polymerization of smaller subunits (monomers). They are typically composed of thousands of atoms or more.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_for_the_Biosciences_(Chang)/15%3A_Photochemistry_and_Photobiology/15.04%3A_Biological_Effects_of_Radiation.txt
• 1.1: Things to review
This course requires that you are comfortable with some basic mathematical operations and basic calculus. It is imperative that you go over this chapter carefully and identify the topics you may need to review before starting the semester. You will be too busy learning new concepts soon, so this is the right time for you to seek help if you need it.
• 1.2: Odd and Even Functions
Many of you have probably heard about odd and even functions in previous courses, but for those who did not, here is a brief introduction.
• 1.3: The Exponential Function
Here, we will learn (or review) how to sketch exponential functions with negative exponents quickly. These types of functions appear very often in chemistry, so it is important that you know how to visualize them without the help of a computer or calculator.
• 1.4: The Period of a Periodic Function
A function f(x) is said to be periodic with period P if f(x)=f(x+P). In plain English, the function repeats itself in regular intervals of length P.
• 1.5: Exercises

01: Before We Begin...

This course requires that you are comfortable with some basic mathematical operations and basic calculus. It is imperative that you go over this chapter carefully and identify the topics you may need to review before starting the semester. You will be too busy learning new concepts soon, so this is the right time for you to seek help if you need it.

Notice: This chapter does not contain a review of topics you should already know. Instead, it gives you a list of topics that you should be comfortable with so you can review them independently if needed. Also, remember that you can use the formula sheet at all times, so locate the information you have and use it whenever needed!

1.1.1 The Equation of a Straight Line
• given two points, calculate the slope, the x-intercept and the y-intercept of the straight line through the points.
• given a graph of a straight line, write its corresponding equation.
• given the equation of a straight line, sketch the corresponding graph.

1.1.2 Trigonometric Functions
• definition of sin, cos, tan of an angle.
• values of the above trigonometric functions evaluated at 0, π/2, π, 3π/2, 2π.
• derivatives and primitives of sin and cos.

1.1.3 Logarithms
• the natural logarithm ($\ln x$) and its relationship with the exponential function.
• the decimal logarithm ($\log x$) and its relationship with the function $10^x$.
• properties of logarithms (natural and decimal):
• $\ln(1) =?$
• $\ln(ab) =?$
• $\ln(a/b) =?$
• $\ln(a^b ) =?$
• $\ln(1/a) =?$

1.1.4 The Exponential Function
• properties:
• $e^0 =?$
• $e^{−x} = 1/...?$
• $e^{−\infty} =?$
• $e^a e^b =?$
• $e^a/e^b =?$
• $(e^a)^b =?$

1.1.5 Derivatives
• concept of the derivative of a function in terms of the slope of the tangent line.
• derivative of a constant, $e^x$, $x^n$, $\ln x$, $\sin x$ and $\cos x$.
• derivative of the sum of two functions.
• derivative of the product of two functions.
• the chain rule.
• higher derivatives (second, third, etc).
• locating maxima, minima and inflection points.

1.1.6 Indefinite Integrals (Primitives)
• primitive of a constant, $x^n$, $x^{−1}$, $e^x$, $\sin x$ and $\cos x$.

1.1.7 Definite Integrals
• using limits of integration.
• properties of definite integrals (see recommended exercises below).

Test yourself! Identify what you need to review by taking this non-graded quiz. http://tinyurl.com/laq5aza
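As a quick self-check for the items in 1.1.1, here is a minimal Python sketch (ours, not part of the course material) that computes the slope and the two intercepts of the line through two points:

```python
# Slope and intercepts of the straight line through two points.
def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)       # assumes x1 != x2
    y_intercept = y1 - slope * x1       # b in y = m*x + b
    x_intercept = -y_intercept / slope  # assumes slope != 0
    return slope, y_intercept, x_intercept

print(line_through((0, 1), (2, 5)))     # -> (2.0, 1.0, -0.5)
```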
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/01%3A_Before_We_Begin.../1.01%3A_Things_to_review.txt
Many of you have probably heard about odd and even functions in previous courses, but for those who did not, here is a brief introduction.

• An odd function obeys the relation $f(x) = −f(−x)$. For example, $\sin x$ is odd because $\sin x = − \sin(−x)$.
• An even function obeys the relation $f(x) = f(−x)$. For example, $\cos x$ is even because $\cos x = \cos(−x)$.

Even functions are symmetric with respect to the $y$-axis. In other words, if you were to put a mirror perpendicular to the screen at $x = 0$, the right side of the plot would produce a reflection that would overlap with the left side of the plot. Check Figure $1$ to be sure you understand what this means.

Odd functions are symmetric in a different way. Imagine that you have an axis perpendicular to the screen that contains the point (0,0). Now rotate every point of your graph 180 degrees. If the plot you create after rotation overlaps with the plot before the rotation, the function is odd. Check Figure $1$ to be sure you understand what this means.

Note that functions do not necessarily need to be even or odd. The function $e^x$, for instance, is clearly neither, as $e^x \neq e^{−x}$ (the condition for even) and $e^x \neq −e^{−x}$ (the condition for odd). You can also sketch the function $e^x$ and verify that it does not have the symmetry of an odd or even function.

For any function,

$\int_{-a}^a f(x)dx = \int_{-a}^0 f(x) dx + \int_0^{a} f(x) dx$

For an odd function, this integral equals zero:

$\label{eq2} \int_{-a}^a f(x)dx = \int_{-a}^0 f(x) dx + \int_0^{a} f(x) dx = 0$

This should be obvious just by looking at the plot of $\sin x$. The area under the curve between 0 and $a$ cancels out with the area under the curve between $−a$ and 0. This is great news because it means that we do not have to bother integrating odd functions as long as the limits of integration span from $−a$ to $a$, where $a$ can be any number, including infinity. As it turns out, this happens often in quantum mechanics, so it is something worth keeping in mind!

For an even function,

$\int_{-a}^a f(x)dx = 2 \int_{0}^{a} f(x) dx$

because the area under the curve between 0 and $a$ equals the area under the curve between $−a$ and 0. This is not as helpful as Equation \ref{eq2}, but still might help in some cases. For example, let’s say that you need to evaluate $\int_{- \infty}^{\infty} x^2 e^{-x^2} dx$ and the only material that you have available is the formula sheet. You find an expression for

$\int_0^{\infty} x^{2n} e^{-ax^2} dx \nonumber$

which you can use with $n = 1$ to obtain

$\int_0^{\infty} x^2 e^{−x^2} dx = \frac{ \sqrt{\pi}}{4} \nonumber$

(be sure you can get this result on your own). How do you get $\int_{- \infty}^{\infty} x^2 e^{-x^2} dx$? If the integrand is even, you just need to multiply by 2. This is in fact an even function, because $x^2 = (−x)^2$, and therefore it is clear that $x^2 e^{−x^2} = (−x)^2 e^{−(−x)^2}$. Therefore,

$\int_{- \infty}^{\infty} x^2 e^{-x^2} dx = \frac{\sqrt{\pi}}{2} \nonumber$

It is useful to know that the product of two even functions or two odd functions is an even function, and the product of an odd function and an even function is odd. For example,

• $\sin^2 x$ is the product of two odd functions, and is therefore even.
• $\cos^2 x$ is the product of two even functions, and is therefore even.
• $\sin x \cos x$ is the product of an odd function and an even function, and is therefore odd.

Need help understanding how to identify odd and even functions?
External links: http://www.youtube.com/watch?v=8VgmBe3ulb8 http://www.youtube.com/watch?v=68enNRhFORc Odd, even or neither? See if you can classify the functions shown in this short quiz. http://tinyurl.com/l4pehb8
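The symmetry results above are also easy to check numerically. This sketch (ours, assuming scipy is available) uses the `quad` integrator to reproduce the worked example:

```python
# Numerical check of the odd/even integral properties.
import numpy as np
from scipy.integrate import quad

a = 5.0
odd, _ = quad(lambda x: x * np.exp(-x**2), -a, a)
print(odd)                              # ~0: odd integrand on [-a, a]

half, _ = quad(lambda x: x**2 * np.exp(-x**2), 0, np.inf)
full, _ = quad(lambda x: x**2 * np.exp(-x**2), -np.inf, np.inf)
print(full, 2 * half, np.sqrt(np.pi) / 2)   # all three agree
```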
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/01%3A_Before_We_Begin.../1.02%3A_Odd_and_Even_Functions.txt
In Section 1.1 you were asked to review some properties of the exponential function. Here, we will learn (or review) how to sketch exponential functions with negative exponents quickly. These types of functions appear very often in chemistry, so it is important that you know how to visualize them without the help of a computer or calculator.

Suppose we want to sketch the function $y = ae^{-kx}$, where $a$ and $k$ are positive real numbers (e.g. $y = 3e^{-2x}$). We could consider negative values of the variable $x$, but often $x$ represents a time or another variable for which only positive values make sense, so we will consider values of $x \geq 0$.

First, let’s find out the value of $y(0)$, where the function crosses the $y$-axis. Because $e^0 = 1$, $y(0) = a$. The number $a$ is often called “the pre-exponential factor” (i.e. the factor before the exponential), or the “amplitude” of the function. This is the amplitude because it will be the highest point in our plot. Now, let’s see what happens as $x \rightarrow \infty$. From Section 1.1, it is expected that you know that $e^{- \infty} = 0$, and therefore, you should be able to conclude that $y(x)$ decreases from an initial value $y(0) = a$ to zero. Whether the function will decay to zero slowly or rapidly will depend on the value of $k$.

Let’s now consider two points in the curve $y = ae^{-kx}$

$\begin{array}{c} y_1 = ae^{-kx_1} \\ y_2 = ae^{-kx_2} \end{array} \nonumber$

and let’s further assume that $y_1/y_2 = 2$ (see Figure $1$, left). What is the separation between $x_2$ and $x_1$ $(\Delta x = x_2 - x_1)$? We know that $y_1/y_2 = 2$, so

$2 = \frac{e^{-kx_1}}{e^{-kx_2}} \nonumber$

Using the properties of the exponential function:

$2 = e^{-k(x_1-x_2)} = e^{-k(- \Delta x)} = e^{k( \Delta x)} \nonumber$

and solving for $\Delta x$:

$\begin{array}{c} \ln(2) = k( \Delta x) \\ \Delta x = \ln(2)/k \end{array} \nonumber$

This means that the function decays to 50% of its former value every time we move $\ln 2/k$ units to the right in the $x$-axis. To sketch the function, we just need to remember that $\ln(2) \approx 0.7$. Therefore, to sketch $y = 3e^{-2x}$, we place the first point at $y = 3$ and $x = 0$, and the second point at $y = 3/2$ and $x \approx 0.7/2 = 0.35$. We can continue adding points following the same logic: every time we drop the value of $y$ by half, we move $\ln(2)/k$ units to the right in the $x$-axis (see Figure $1$, right).

Sketching an exponential function with negative exponent. http://tinyurl.com/n6pdvyh

1.04: The Period of a Periodic Function

A function $f(x)$ is said to be periodic with period $P$ if $f(x) = f(x + P)$. In plain English, the function repeats itself in regular intervals of length $P$. The period of the function of Figure $1$ is $2 \pi$.

We know that the period of $\sin (x)$ is $2 \pi$, but what is the period of the function $\sin (nx)$? The period of $\sin (x)$ is $2 \pi$, so:

$\sin (nx) = \sin (nx + 2 \pi) \nonumber$

By definition, for a periodic function of period $P$, the function repeats itself if we add $P$ to $x$:

$\sin (nx) = \sin (n(x + P)) = \sin (nx + nP) \nonumber$

Comparing the two equations: $2 \pi = nP$, and therefore $\textcolor{red}{P = 2\pi/n}$. For example, the period of $\sin (2x)$ is $\pi$, and the period of $\sin (3x)$ is $2 \pi/3$ (see Figure $2$). You can follow the same logic to prove that the period of $\cos (nx)$ is $2 \pi/n$. These are important results that we will use later in the semester, so keep them in mind!

Test yourself with this short quiz!
http://tinyurl.com/k4wop6l

1.05: Exercises

To see if you are on track, solve the following exercises using only the formula sheet (no calculators, computers, books, etc!).

1. Draw the straight line that has a $y$-intercept of 3/2 and a slope of 1/2.
2. Express $\frac{3}{4} - \frac{2}{3} + 1$ as a single fraction.
3. Simplify $(a -4a^3)/a^{-2}$.
4. Express $\ln 8 − 5 \ln 2$ as the logarithm of a single number.
5. Given $\ln P = − \frac{a}{RT} + b \ln T + c$, where $a$, $b$, $c$ and $R$ are constants, obtain $\frac{d(\ln P)}{dT}$
6. Obtain $\frac{dy}{dx}$ for
   1. $y = \sin x \; e^{mx}$ ($m$ is a constant).
   2. $y = \frac{1}{\sqrt{1−x^2}}$
7. Obtain the first, second and third derivatives of
   1. $y = e^{−2x}$
   2. $y = \cos(2x)$
   3. $y = 3 + 2x − 4x^2$
8. Evaluate $\int_0^{\pi} \cos 3 \theta d \theta$.
9. Use the properties of integrals and your previous result to evaluate $\int_{\pi}^0 \cos 3 \theta d \theta$. What about $\int_0^{\pi/4} \cos 3 \theta d \theta + \int_{\pi/4}^{\pi} \cos 3 \theta d \theta$?
10. Given $f(x) = \left\{\begin{matrix} 0 & \text{if } x<0 \\ 3+2x & \text{if } 0 <x<1 \\ 0 & \text{if } x>1 \end{matrix}\right.$ Sketch $f(x)$ and calculate $\int_{- \infty}^{\infty} f(x) dx$
11. What is the value of this integral? $\int_{- \infty}^{\infty} xe^{-x^2} dx$
12. Sketch $\sin(x/2)$. What is the period of the function?
13. The plots below (Figure $1$) represent the following functions: $y = 3e^{−x/2}, ~ y = 3e^{−x}, ~ y = 3e^{−2x}$ and $y = 2e^{−2x}$. Which one is which?
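After working the exercises by hand, you can confirm the sketching rule of Section 1.3 numerically. The short script below (ours) evaluates $y = 3e^{-2x}$ at steps of $\ln(2)/k$:

```python
# y = a*exp(-k*x) halves every ln(2)/k units of x.
import math

a, k = 3.0, 2.0
dx = math.log(2) / k                    # ~0.35 for k = 2
for n in range(4):
    x = n * dx
    print(f"x = {x:.2f}   y = {a * math.exp(-k * x):.3f}")
# prints y = 3.000, 1.500, 0.750, 0.375: halved at each step
```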
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/01%3A_Before_We_Begin.../1.03%3A_The_Exponential_Function.txt
Chapter Objectives

• Be able to perform basic arithmetic operations with complex numbers.
• Understand the different forms used to express complex numbers (cartesian, polar and complex exponentials).
• Calculate the complex conjugate and the modulus of a number expressed in the different forms (cartesian, polar and complex exponentials).
• Be able to manipulate complex functions.
• Be able to obtain expressions for the complex conjugate and the square of the modulus of a complex function.

• 2.1: Algebra with Complex Numbers
The imaginary unit i is defined as the square root of -1.
• 2.2: Graphical Representation and Euler Relationship
Complex numbers can be represented graphically as a point in a coordinate plane. In cartesian coordinates, the x-axis is used for the real part of the number, and the y-axis is used for the imaginary component. Complex numbers can be also represented in polar form. We can also represent complex numbers in terms of complex exponentials.
• 2.3: Complex Functions
The concepts of complex conjugate and modulus that we discussed above can also be applied to complex functions.
• 2.4: Problems

Thumbnail: A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i satisfies \(i^2 = −1\). (CC BY-SA 3.0 unported; Wolfkeeper via Wikipedia)

02: Complex Numbers

The imaginary unit $i$ is defined as the square root of -1: $i= \sqrt{-1}$. If $a$ and $b$ are real numbers, then the number $c= a+ib$ is said to be complex. The real number $a$ is the real part of the complex number $c$, and the real number $b$ is its imaginary part. If $a=0$, then the number is pure imaginary. All the rules of ordinary arithmetic apply with complex numbers; you just need to remember that $i^2=-1$. For example, if $z_1= 2+3i$ and $z_2=1-4i$:

• $z_1+ z_2=3-i$
• $z_1- z_2=1+7i$
• $\dfrac{1}{2} z_1+z_2=2-\dfrac{5}{2} i$
• $z_1z_2=(2+3i)(1-4i)=2-12 i^2-5i=14-5i$ (remember that $i^2 = -1$!)
• $z_1^2=(2+3i)(2+3i)=4+9 i^2+12i=-5+12i$

In order to divide complex numbers we will introduce the concept of complex conjugate.

Complex Conjugate

The complex conjugate of a complex number is defined as the number that has the same real part and an imaginary part which is the negative of the original number. It is usually denoted with a star: if $z = x + iy$, then $z^∗ = x − iy$. For example, the complex conjugate of $2-3i$ is $2+3i$. Notice that the product $zz^*$ is always real:

$\label{complexeq:eq1}(x+iy)(x-iy)=x^2-ixy+ixy+y^2=x^2+y^2.$

We’ll use this result in a minute. For now, let’s see how the complex conjugate allows us to divide complex numbers with an example:

Example $1$: Complex Division

Given $z_1= 2+3i$ and $z_2=1-4i$, obtain $z_1/z_2$.

Solution

$\dfrac{z_1}{z_2}=\dfrac{2+3i}{1-4i} \nonumber$

Multiply the numerator and denominator by the complex conjugate of the denominator:

$\dfrac{z_1}{z_2}=\dfrac{2+3i}{1-4i}\dfrac{1+4i}{1+4i} \nonumber$

This “trick” ensures that the denominator is a real number, since $zz^*$ is always real. In this case,

\begin{align*} (1-4i)(1+4i) &=1-4i+4i-16i^2 \\[4pt] &=17.
\end{align*}

The numerator is

\begin{align*} (2+3i)(1+4i) &=2+3i+8i+12i^2 \\[4pt] &=-10+11i \end{align*}

Therefore,

\begin{align*}\dfrac{z_1}{z_2} &=\dfrac{2+3i}{1-4i} \\[4pt] &=\displaystyle{\color{Maroon}-\dfrac{10}{17}+\dfrac{11}{17}i} \end{align*}

Example $2$

Calculate $(2-i)^3$ and express your result in cartesian form ($a + bi$).

Solution

\begin{align*} (2-i)^3 &= (2-i)(2-i)(2-i) \\[4pt] (2-i)(2-i) &=4-4i+i^2 \\[4pt] &=4-4i-1 \\[4pt] &=3-4i \\[4pt] (2-i)(2-i)(2-i) &=(3-4i)(2-i) \\[4pt] &=6-3i-8i+4i^2 \\[4pt] &=6-11i+4(-1) \\[4pt] &=\displaystyle{\color{Maroon}2-11i} \end{align*}

The concept of complex conjugate is also useful to calculate the real and imaginary part of a complex number. Given $z = x+iy$ and $z^*=x-iy$, it is easy to see that $z+z^*=2x$ and $z-z^*=2iy$. Therefore:

$\label{eq2} Re(z)=\dfrac{z+z^*}{2}$

and

$Im(z)=\dfrac{z-z^*}{2i}$

You may wonder what is so hard about finding the real and imaginary parts of a complex number by visual inspection. It is certainly not a problem if the number is expressed as $a+bi$, but it can be more difficult when dealing with more complicated expressions.
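Python’s built-in complex type follows the same arithmetic rules, so the worked examples above are easy to cross-check numerically (this is just an aid, not part of the text):

```python
# Cross-check of Examples 1 and 2 with Python's complex numbers.
z1, z2 = 2 + 3j, 1 - 4j
print(z1 * z2)           # (14-5j)
print(z1 / z2)           # (-0.588...+0.647...j), i.e. -10/17 + (11/17)i
print((2 - 1j)**3)       # (2-11j), as in Example 2
print(z1.conjugate())    # (2-3j)
```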
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/02%3A_Complex_Numbers/2.01%3A_Algebra_with_Complex_Numbers.txt
Complex numbers can be represented graphically as a point in a coordinate plane. In cartesian coordinates, the $x$-axis is used for the real part of the number, and the $y$-axis is used for the imaginary component. For example, the complex number $x+iy$ is represented as a point in Figure $1$.

Complex numbers can be also represented in polar form. We know that, given a point $(x,y)$ in the plane, $\cos\phi=x/r$ and $\sin\phi=y/r$. Therefore, the complex number $x+iy$ can be also represented as $r\cos\phi+i r\sin\phi$.

We can also represent complex numbers in terms of complex exponentials. This will sound weird for now, but we will see how common and useful this is in physical chemistry as we cover other topics this semester. The Euler relationship relates the trigonometric functions to a complex exponential:

$\label{eq1} e^{\pm i\phi}=\cos\phi\pm i\sin\phi$

We will prove this relationship using Taylor series later.

In summary, the complex number $\displaystyle{\color{Maroon} x+iy}$ can be expressed in polar coordinates as $\displaystyle{\color{Maroon}r\cos\phi+i r\sin\phi}$, and as a complex exponential as $\displaystyle{\color{Maroon}r e^{i\phi}}$. The relationships between $x,y$ and $r,\phi$ are given by the familiar trigonometric relationships: $x=r\cos\phi$ and $y=r\sin\phi$. Notice that

$r^2=x^2+y^2 \nonumber$

and

$\sin^2\phi+\cos^2\phi=1 \nonumber$

as we know from Pythagoras’ theorem.

Example $1$

Express $z= 1+i$ in the form $r e^{i\phi}$.

Solution

$x = 1$ and $y=1$. We know that $x=r\cos\phi$ and $y=r\sin\phi$. Dividing $y/x$, we get $y/x=\tan\phi$. In this problem $y=x$, and therefore $\tan\phi=1$ and $\phi=\pi/4$. To obtain $r$ we use $r^2=x^2+y^2$. In this case:

$x^2+y^2=2 \to r=\sqrt{2} \nonumber$

Therefore, $\displaystyle{\color{Maroon}z = \sqrt{2} e^{\frac{\pi}{4} i}}$

From Equation \ref{eq1} we can see how the trigonometric functions can be expressed as complex exponentials:

$\begin{split} \cos\phi=\frac{e^{i \phi}+e^{-i \phi}}{2}\\ \sin\phi=\frac{e^{i \phi}-e^{-i \phi}}{2i} \end{split}$

Again, this may look strange at this point, but it turns out that exponentials are much easier to manipulate than trigonometric functions (think about multiplying or dividing exponentials vs. trigonometric functions), so it is common that physical chemists write equations in terms of complex exponentials instead of cosines and sines.

In Equation $2.1.1$ we saw that $(x+iy)(x-iy)=x^2+y^2$. Now we know that this equals $r^2$, where $r$ is the modulus or absolute value of the vector represented in red in Figure $1$. Therefore, the modulus of a complex number, denoted as $|z|$, can be calculated as:

$\label{modulus} |z|^2= z z^* \to |z|=\sqrt{z z^*}$

Example $2$

Obtain the modulus of the complex number $z= 1+i$ (see Example $1$).

Solution

$|z|=\sqrt{z z^*}=\sqrt{(1+i)(1-i)}=\displaystyle{\color{Maroon}\sqrt{2}} \nonumber$

2.03: Complex Functions

The concepts of complex conjugate and modulus that we discussed above can also be applied to complex functions. For instance, in quantum mechanics, atomic orbitals are often expressed in terms of complex exponentials. For example, one of the $p$ orbitals of the hydrogen atom can be expressed in spherical coordinates ($r,\theta,\phi$) as

$\psi(r,\theta,\phi)=\dfrac{1}{8 \sqrt{a_0^5 \pi}} r e^{-r/2a_0} \sin\theta e^{i \phi} \nonumber$

We will work with orbitals and discuss their physical meaning throughout the semester.
For now, let’s write an expression for the square of the modulus of the orbital (Equation $2.2.2$):

$|\psi|^2=\psi \psi^* \nonumber$

The complex conjugate of a complex function is created by changing the sign of the imaginary part of the function (in lay terms, every time you see a +$i$ change it to a -$i$, and every time you see a -$i$ change it to a +$i$). Therefore:

\begin{align*} |\psi|^2 &=\left(\dfrac{1}{8 \sqrt{a_0^5 \pi}} r e^{-r/2a_0} \sin\theta e^{i \phi}\right)\left(\dfrac{1}{8 \sqrt{a_0^5 \pi}} r e^{-r/2a_0} \sin\theta e^{-i \phi}\right) \\[4pt] &=\dfrac{1}{64 a_0^5 \pi} r^2 e^{-r/a_0} \sin^2\theta \end{align*} \nonumber

Notice that $\psi \psi^*$ is always real because the term $e^{+i \phi} e^{-i \phi}=1.$ This has to be the case because $|\psi|^2$ represents the square of the modulus, and as we will discuss many times during the semester, it can be interpreted in terms of the probability of finding the electron in different regions of space. Because probabilities are physical quantities that are positive, it is good that $\psi \psi^*$ is guaranteed to be real!

Confused about the complex conjugate? See how to write the complex conjugate in the different notations discussed in this chapter in this short video: http://tinyurl.com/lcry7ma

2.04: Problems

Note: Always express angles in radians (e.g. $\pi/2$, not $90^{\circ}$). When expressing complex numbers in Cartesian form always finish your work until you can express them as $a+bi$. For example, if you obtain $\frac{2}{1+i}$, multiply and divide the denominator by its complex conjugate to obtain $1-i$. Remember: No calculators allowed!

Problem $1$

Given $z_1=1+i$, $z_2=1-i$ and $z_3=3e^{i \pi/2}$, obtain:

• $z_1 z_2$
• $z_1^2$
• $2z_1-3z_2$
• $|z_1|$
• $2z_1-3z_2^*$
• $\frac{z_1}{z_2}$
• Express $z_2$ as a complex exponential
• $|z_3|$
• $z_1+z_3$, and express the result in cartesian form
• Display the three numbers in the same plot (real part in the $x$-axis and imaginary part in the $y$-axis)

Problem $2$

The following family of functions are encountered in quantum mechanics:

$\Phi_m(\phi)=\frac{1}{\sqrt{2 \pi}}e^{i m \phi}, m= 0, \pm 1,\pm 2, \pm 3 \dots, 0 \le \phi \le 2\pi$

Notice the difference between $\Phi$ (the name of the function), and $\phi$ (the independent variable). The definition above defines a family of functions (one function for each value of $m$). For example, for $m=2$:

$\Phi_2(\phi)=\frac{1}{\sqrt{2 \pi}}e^{2i \phi},$

and for $m=-2$:

$\Phi_{-2}(\phi)=\frac{1}{\sqrt{2 \pi}}e^{-2i \phi},$

• Obtain $|\Phi_m(\phi)|^2$
• Calculate $\int_0 ^{2\pi}|\Phi_m(\phi)|^2 \mathrm{d}\phi$
• Calculate $\int_0 ^{2\pi}\Phi_m(\phi)\Phi_n^*(\phi) \mathrm{d}\phi$ for $m \neq n$
• Calculate $\int_0 ^{2\pi}\Phi_m(\phi) \mathrm{d}\phi$ for $m = 0$
• Calculate $\int_0 ^{2\pi}\Phi_m(\phi) \mathrm{d}\phi$ for $m \neq 0$

Problem $3$

Given the function

$f(r,\theta,\phi)=4 r e^{-2r/3} \sin{\theta}e^{-2i\phi/5}$

write down an expression for $|f(r,\theta,\phi)|^2$.
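Once you have worked the problems by hand, the standard-library `cmath` module gives a quick numerical cross-check of the polar form and Euler relationship of Section 2.2 (the script is ours, not part of the text):

```python
# Cross-check of Examples 1 and 2 of Section 2.2.
import cmath, math

z = 1 + 1j
r, phi = cmath.polar(z)
print(r, math.sqrt(2))              # both ~1.4142
print(phi, math.pi / 4)             # both ~0.7854
print(r * cmath.exp(1j * phi))      # back to (1+1j): Euler relationship
print(abs(z)**2, (z * z.conjugate()).real)   # |z|^2 = z z* = 2
```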
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/02%3A_Complex_Numbers/2.02%3A_Graphical_Representation_and_Euler_Relationship.txt
Chapter Objectives

• Learn how to obtain Maclaurin and Taylor expansions of different functions.
• Learn how to express infinite sums using the summation operator (\( \displaystyle \Sigma\))
• Understand how a series expansion can be used in the physical sciences to obtain an approximation that is valid in a particular regime (e.g. low concentration of solute, low pressure of a gas, small oscillations of a pendulum, etc).
• Understand how a series expansion can be used to prove a mathematical relationship.

• 3.1: Maclaurin Series
A function f(x) can be expressed as a series in powers of x as long as f(x) and all its derivatives are finite at x=0.
• 3.2: Linear Approximations
We can always approximate a function as a line as long as x is small. When we say ‘any function’ we of course imply that the function and all its derivatives need to be finite at x=0.
• 3.3: Taylor Series
Before discussing more applications of Maclaurin series, let’s expand our discussion to the more general case where we expand a function around values different from zero. Let’s say that we want to expand a function around the number h. If h=0, we call the series a Maclaurin series, and if h≠0 we call the series a Taylor series. Because Maclaurin series are a special case of the more general case, we can call all the series Taylor series and omit the distinction.
• 3.4: Other Applications of Maclaurin and Taylor series
• 3.5: Problems

Thumbnail: The graph shows the function \(\displaystyle y=\sin x\) and the Maclaurin polynomials \(\displaystyle p_1,p_3\) and \(\displaystyle p_5\). (CC BY-SA 3.0; OpenStax).

03: Series

A function $f(x)$ can be expressed as a series in powers of $x$ as long as $f(x)$ and all its derivatives are finite at $x=0$. For example, we will prove shortly that the function $f(x) = \dfrac{1}{1-x}$ can be expressed as the following infinite sum:

$\label{eq1}\dfrac{1}{1-x}=1+x+x^2+x^3+x^4 + \ldots$

We can write this statement in this more elegant way:

$\label{eq2}\dfrac{1}{1-x}=\displaystyle\sum_{n=0}^{\infty} x^{n}$

If you are not familiar with this notation, the right side of the equation reads “sum from $n=0$ to $n=\infty$ of $x^n.$” When $n=0$, $x^n = 1$, when $n=1$, $x^n = x$, when $n=2$, $x^n = x^2$, etc (compare with Equation \ref{eq1}). The term “series in powers of $x$” means a sum in which each summand is a power of the variable $x$. Note that the number 1 is a power of $x$ as well ($x^0=1$). Also, note that both Equations \ref{eq1} and \ref{eq2} are exact; they are not approximations.

Similarly, we will see shortly that the function $e^x$ can be expressed as another infinite sum in powers of $x$ (i.e. a Maclaurin series) as:

$\label{expfunction}e^x=1+x+\dfrac{1}{2} x^2+\dfrac{1}{6}x^3+\dfrac{1}{24}x^4 + \ldots$

Or, more elegantly:

$\label{expfunction2}e^x=\displaystyle\sum_{n=0}^{\infty}\dfrac{1}{n!} x^{n}$

where $n!$ is read “n factorial” and represents the product $1\times 2\times 3...\times n$. If you are not familiar with factorials, be sure you understand why $4! = 24$. Also, remember that by definition $0! = 1$, not zero.

At this point you should have two questions: 1) how do I construct the Maclaurin series of a given function, and 2) why on earth would I want to do this if $\dfrac{1}{1-x}$ and $e^x$ are fine-looking functions as they are. The answer to the first question is easy, and although you should know this from your calculus classes we will review it again in a moment. The answer to the second question is trickier, and it is what most students find confusing about this topic.
We will discuss different examples that aim to show a variety of situations in which expressing functions in this way is helpful.

How to obtain the Maclaurin Series of a Function

In general, a well-behaved function ($f(x)$ and all its derivatives are finite at $x=0$) will be expressed as an infinite sum of powers of $x$ like this:

$\label{eq3}f(x)=\displaystyle\sum_{n=0}^{\infty}a_n x^{n}=a_0+a_1 x + a_2 x^2 + \ldots + a_n x^n$

Be sure you understand why the two expressions in Equation \ref{eq3} are identical ways of expressing an infinite sum. The terms $a_n$ are called the coefficients, and are constants (that is, they are NOT functions of $x$). If you end up with the variable $x$ in one of your coefficients go back and check what you did wrong! For example, in the case of $e^x$ (Equation \ref{expfunction}), $a_0 =1, a_1=1, a_2 = 1/2, a_3=1/6$, etc. In the example of Equation \ref{eq1}, all the coefficients equal 1. We just saw that two very different functions can be expressed using the same set of functions (the powers of $x$). What makes $\dfrac{1}{1-x}$ different from $e^x$ are the coefficients $a_n$. As we will see shortly, the coefficients can be negative, positive, or zero.

How do we calculate the coefficients? Each coefficient is calculated as:

$\label{series:coefficients}a_n=\dfrac{1}{n!} \left( \dfrac{d^n f(x)}{dx^n} \right)_0$

That is, the $n$-th coefficient equals one over the factorial of $n$ multiplied by the $n$-th derivative of the function $f(x)$ evaluated at zero. For example, if we want to calculate $a_2$ for the function $f(x)=\dfrac{1}{1-x}$, we need to get the second derivative of $f(x)$, evaluate it at $x=0$, and divide the result by $2!$. Do it yourself and verify that $a_2=1$. In the case of $a_0$ we need the zeroth-order derivative, which equals the function itself (that is, $a_0 = f(0)$, because $\dfrac{1}{0!}=1$). It is important to stress that although the derivatives are usually functions of $x$, the coefficients are constants because they are expressed in terms of the derivatives evaluated at $x=0$.

Note that in order to obtain a Maclaurin series we evaluate the function and its derivatives at $x=0$. This procedure is also called the expansion of the function around (or about) zero. We can expand functions around other numbers, and these series are called Taylor series (see Section 3).

Example $1$

Obtain the Maclaurin series of $\sin(x)$.

Solution

We need to obtain all the coefficients ($a_0, a_1...$etc). Because there are infinitely many coefficients, we will calculate a few and we will find a general pattern to express the rest. We will need several derivatives of $\sin(x)$, so let’s make a table:

| $n$ | $\dfrac{d^n f(x)}{dx^n}$ | $\left( \dfrac{d^n f(x)}{dx^n} \right)_0$ |
|---|---|---|
| 0 | $\sin (x)$ | 0 |
| 1 | $\cos (x)$ | 1 |
| 2 | $-\sin (x)$ | 0 |
| 3 | $-\cos (x)$ | -1 |
| 4 | $\sin (x)$ | 0 |
| 5 | $\cos (x)$ | 1 |

Remember that each coefficient equals $\left( \dfrac{d^n f(x)}{dx^n} \right)_0$ divided by $n!$, therefore:

| $n$ | $n!$ | $a_n$ |
|---|---|---|
| 0 | 1 | 0 |
| 1 | 1 | 1 |
| 2 | 2 | 0 |
| 3 | $6$ | $-\dfrac{1}{6}$ |
| 4 | $24$ | 0 |
| 5 | $120$ | $\dfrac{1}{120}$ |

This is enough information to see the pattern (you can go to higher values of $n$ if you don’t see it yet):

1. the coefficients for even values of $n$ equal zero.
2. the coefficients for $n = 1, 5, 9, 13,...$ equal $1/n!$
3. the coefficients for $n = 3, 7, 11, 15,...$ equal $-1/n!$.
Recall that the general expression for a Maclaurin series is $a_0+a_1 x + a_2 x^2...a_n x^n$, and replace $a_0...a_n$ by the coefficients we just found:

$\displaystyle{\color{Maroon}\sin (x) = x - \dfrac{1}{3!} x^3+ \dfrac{1}{5!} x^5 -\dfrac{1}{7!} x^7...} \nonumber$

This is a correct way of writing the series, but in the next example we will see how to write it more elegantly as a sum.

Example $2$

Express the Maclaurin series of $\sin (x)$ as a sum.

Solution

In the previous example we found that:

$\label{series:sin}\sin (x) = x - \dfrac{1}{3!} x^3+ \dfrac{1}{5!} x^5 -\dfrac{1}{7!} x^7...$

We want to express this as a sum:

$\displaystyle\sum_{n=0}^{\infty}a_n x^{n} \nonumber$

The key here is to express the coefficients $a_n$ in terms of $n$. We just concluded that 1) the coefficients for even values of $n$ equal zero, 2) the coefficients for $n = 1, 5, 9, 13,...$ equal $1/n!$ and 3) the coefficients for $n = 3, 7, 11,...$ equal $-1/n!$. How do we put all this information together in a unique expression? Here are three possible (and equally good) answers:

• $\displaystyle{\color{Maroon}\sin (x)=\displaystyle\sum_{n=0}^{\infty} \left( -1 \right) ^n \dfrac{1}{(2n+1)!} x^{2n+1}}$
• $\displaystyle{\color{Maroon}\sin (x)=\displaystyle\sum_{n=1}^{\infty} \left( -1 \right) ^{(n+1)} \dfrac{1}{(2n-1)!} x^{2n-1}}$
• $\displaystyle{\color{Maroon}\sin (x)=\displaystyle\sum_{n=0}^{\infty} \cos(n \pi) \dfrac{1}{(2n+1)!} x^{2n+1}}$

This may look impossibly hard to figure out, but let me share a few tricks with you. First, we notice that the sign in Equation \ref{series:sin} alternates, starting with a “+”. A mathematical way of doing this is with a term $(-1)^n$ if your sum starts with $n=0$, or $(-1)^{(n+1)}$ if your sum starts with $n=1$. Note that $\cos (n \pi)$ does the same trick.

| $n$ | $(-1)^n$ | $(-1)^{n+1}$ | $\cos (n \pi)$ |
|---|---|---|---|
| 0 | 1 | -1 | 1 |
| 1 | -1 | 1 | -1 |
| 2 | 1 | -1 | 1 |
| 3 | -1 | 1 | -1 |

We have the correct sign for each term, but we need to generate the numbers $1, \dfrac{1}{3!}, \dfrac{1}{5!}, \dfrac{1}{7!},...$ Notice that the number “1” can be expressed as $\dfrac{1}{1!}$. To do this, we introduce the second trick of the day: we will use the expression $2n+1$ to generate odd numbers (if you start your sum with $n=0$) or $2n-1$ (if you start at $n=1$). Therefore, the expression $\dfrac{1}{(2n+1)!}$ gives $1, \dfrac{1}{3!}, \dfrac{1}{5!}, \dfrac{1}{7!},...$, which is what we need in the first and third examples (when the sum starts at zero). Lastly, we need to use only odd powers of $x$. The expression $x^{(2n+1)}$ generates the terms $x, x^3, x^5...$ when you start at $n=0$, and $x^{(2n-1)}$ achieves the same when you start your series at $n=1$.

Confused about writing sums using the sum operator $(\sum)$? This video will help: http://tinyurl.com/lvwd36q

Need help? The links below contain solved examples.

External links:
Finding the Maclaurin series of a function I: http://patrickjmt.com/taylor-and-maclaurin-series-example-1/
Finding the Maclaurin series of a function II: http://www.youtube.com/watch?v=dp2ovDuWhro
Finding the Maclaurin series of a function III: http://www.youtube.com/watch?v=WWe7pZjc4s8

Graphical Representation

From Equation $\ref{eq3}$ and the examples we discussed above, it should be clear at this point that any function whose derivatives are finite at $x=0$ can be expressed by using the same set of functions: the powers of $x$. We will call these functions the basis set. A basis set is a collection of linearly independent functions that can represent other functions when used in a linear combination.
Figure $1$ is a graphic representation of the first four functions of this basis set. To be fair, the first function of the set is $x^0=1$, so these would be the second, third, fourth and fifth. The full basis set is of course infinite in length. If we mix all the functions of the set with equal weights (we put the same amount of $x^2$ as we put $x^{245}$ or $x^{0}$), we obtain $(1-x)^{-1}$ (Equation \ref{eq1}). If we use only the odd terms, alternate the sign starting with a ‘+’, and weigh each term less and less using the expression $1/(2n-1)!$ for the $n$-th term, we obtain $\sin{x}$ (Equation \ref{series:sin}). This is illustrated in Figure $2$, where we multiply the even powers of $x$ by zero, and use different weights for the rest. Note that the ‘etcetera’ is crucial, as we would need to include an infinite number of functions to obtain the function $\sin{x}$ exactly.

Although we need an infinite number of terms to express a function exactly (unless the function is a polynomial, of course), in many cases we will observe that the weight (the coefficient) of each power of $x$ gets smaller and smaller as we increase the power. For example, in the case of $\sin{x}$, the contribution of $x^3$ is $1/6$th of the contribution of $x$ (in absolute terms), and the contribution of $x^5$ is $1/120$th. This tells you that the first terms are much more important than the rest, although all are needed if we want the sum to represent $\sin{x}$ exactly.

What if we are happy with a ‘pretty good’ approximation of $\sin{x}$? Let’s see what happens if we use up to $x^3$ and drop the higher terms. The result is plotted in blue in Figure $3$ together with $\sin{x}$ in red. We can see that the function $x-1/6 x^3$ is a very good approximation of $\sin{x}$ as long as we stay close to $x=0$. As we move further away from the origin the approximation gets worse and worse, and we would need to include higher powers of $x$ to get it better. This should be clear from Equation \ref{series:sin}, since the terms $x^n$ get smaller and smaller with increasing $n$ if $x$ is a small number. Therefore, if $x$ is small, we could write $\sin (x) \approx x - \dfrac{1}{3!} x^3$, where the symbol $\approx$ means approximately equal.

But why stop at $n=3$ and not $n=1$ or 5? The above argument suggests that the function $x$ might be a good approximation of $\sin{x}$ around $x=0$, when the term $x^3$ is much smaller than the term $x$. This is in fact the case, as shown in Figure $4$.

We have seen that we can get good approximations of a function by truncating the series (i.e. not using the infinite terms). Students usually get frustrated and want to know how many terms are ‘correct’. It takes a little bit of practice to realize there is no universal answer to this question. We would need some context to analyze how good an approximation we are happy with. For example, are we satisfied with the small error we see at $x= 0.5$ in Figure $4$? It all depends on the context. Maybe we are performing experiments where we have other sources of error that are much worse than this, so using an extra term will not improve the overall situation anyway. Maybe we are performing very precise experiments where this difference is significant. As you see, discussing how many terms are needed in an approximation out of context is not very useful. We will discuss this particular approximation when we learn about second order differential equations and analyze the problem of the pendulum, so hopefully things will make more sense then.
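The truncation discussion above is easy to reproduce numerically. The sketch below (ours; the helper name `sin_series` is hypothetical, not from the text) sums the first few nonzero terms of the series for $\sin{x}$ at a small and a larger argument:

```python
# Partial sums of the Maclaurin series of sin(x).
import math

def sin_series(x, n_terms):
    """Sum of the first n_terms nonzero terms of the sin(x) series."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(n_terms))

for x in (0.5, 3.0):
    print(f"x = {x}: exact sin(x) = {math.sin(x):.6f}")
    for n_terms in (1, 2, 3, 5):
        print(f"  {n_terms} term(s): {sin_series(x, n_terms):.6f}")
# Near x = 0, one or two terms already do well; farther out, more are needed.
```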
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/03%3A_Series/3.01%3A_Maclaurin_Series.txt
If you take a look at Equation $3.1.5$ you will see that we can always approximate a function as $a_0+a_1x$ as long as $x$ is small. When we say ‘any function’ we of course imply that the function and all its derivatives need to be finite at $x=0$. Looking at the definitions of the coefficients, we can write: $\label{eq1} f (x) \approx f(0) +f'(0)x$ We call this a linear approximation because Equation \ref{eq1} is the equation of a straight line. The slope of this line is $f'(0)$ and the $y$-intercept is $f(0)$. A fair question at this point is ‘why are we even talking about approximations?’ What is so complicated about the functions $\sin{x}$, $e^x$ or $\ln{(x+1)}$ that we need to look for an approximation? Are we getting too lazy? To illustrate this issue, let’s consider the problem of the pendulum, which we will solve in detail in the chapter devoted to differential equations. The problem is illustrated in Figure $1$, and those of you who took a physics course will recognize the equation below, which represents the law of motion of a simple pendulum. The second derivative refers to the acceleration, and the $\sin \theta$ term is due to the component of the net force along the direction of motion. We will discuss this in more detail later in this semester, so for now just accept the fact that, for this system, Newton’s law can be written as: $\frac{d^2\theta(t)}{dt^2}+\frac{g}{l} \sin{\theta(t)}=0 \nonumber$ This equation should be easy to solve, right? It has only a few terms, nothing too fancy other than an innocent sine function...How difficult can it be to obtain $\theta(t)$? Unfortunately, this differential equation does not have an analytical solution! An analytical solution means that the solution can be expressed in terms of a finite number of elementary functions (such as sine, cosine, exponentials, etc). Differential equations are sometimes deceiving in this way: they look simple, but they might be incredibly hard to solve, or even impossible! The fact that we cannot write down an analytical solution does not mean there is no solution to the problem. You can swing a pendulum and measure $\theta(t)$ and create a table of numbers, and in principle you can be as precise as you want to be. Yet, you will not be able to create a function that reflects your numeric results. We will see that we can solve equations like this numerically, but not analytically. Disappointing, isn’t it? Well... don’t be. A lot of what we know about molecules and chemical reactions came from the work of physical chemists, who know how to solve problems using numerical methods. The fact that we cannot obtain an analytical expression that describes a particular physical or chemical system does not mean we cannot solve the problem numerically and learn a lot anyway! But what if we are interested in small displacements only (that is, the pendulum swings close to the vertical axis at all times)? In this case, $\theta<<1$, and as we saw $\sin{\theta}\approx\theta$ (see Figure $3.1.4$). If this is the case, we have now: $\frac{d^2\theta(t)}{dt^2}+\frac{g}{l} \theta(t)=0 \nonumber$ As it turns out, and as we will see in Chapter 2, in this case it is very easy to obtain the solution we are looking for: $\theta(t)=\theta(t=0)\cos \left((\frac{g}{l})^{1/2}t \right) \nonumber$ This solution is the familiar ‘back and forth’ oscillatory motion of the pendulum you are familiar with. 
What you might not have known until today is that this solution assumes $\sin{\theta}\approx\theta$ and is therefore valid only if $\theta<<1$! There are lots of ‘hidden’ linear approximations in the equations you have learned in your physics and chemistry courses. You may recall your teachers telling you that a given equation is valid only at low concentrations, or low pressures, or low... you hopefully get the point.

A pendulum is of course not particularly interesting when it comes to chemistry, but as we will see through many examples during the semester, oscillations, generally speaking, are. The example below illustrates the use of series in a problem involving diatomic molecules, but before discussing it we need to provide some background. The vibrations of a diatomic molecule are often modeled in terms of the so-called Morse potential. This equation does not provide an exact description of the vibrations of the molecule under any condition, but it does a pretty good job for many purposes.

$\label{morse}V(R)=D_e\left(1-e^{-k(R-R_e)}\right)^2$

Here, $R$ is the distance between the nuclei of the two atoms, $R_e$ is the distance at equilibrium (i.e. the equilibrium bond length), $D_e$ is the dissociation energy of the molecule, $k$ is a constant that measures the strength of the bond, and $V$ is the potential energy. Note that $R_e$ is the distance at which the potential energy is a minimum, and that is why we call this the equilibrium distance. We would need to apply energy to separate the atoms even more, or to push them closer (Figure $2$).

At room temperature, there is enough thermal energy to induce small vibrations that displace the atoms from their equilibrium positions, but for stable molecules, the displacement is very small: $R-R_e\rightarrow0$. In the next example we will prove that under these conditions, the potential looks like a parabola, or in mathematical terms, $V(R)$ is proportional to the square of the displacement. This type of potential is called a ’harmonic potential’. A vibration is said to be simple harmonic if the potential is proportional to the square of the displacement (as in the simple spring problems you may have studied in physics).

Example $1$

Expand the Morse potential as a power series and prove that the vibrations of the molecule are approximately simple harmonic if the displacement $R-R_e$ is small.

Solution

The relevant variable in this problem is the displacement $R-R_e$, not the actual distance $R$. Let’s call the displacement $R-R_e=x$, and let’s rewrite Equation \ref{morse} as

$\label{morse2}V(R)=D_e\left(1-e^{-kx}\right)^2$

The goal is to prove that $V(R) =cx^2$ (i.e. the potential is proportional to the square of the displacement) when $x\rightarrow0$. The constant $c$ is the proportionality constant. We can approach this in two different ways. One option is to expand the function shown in Equation \ref{morse2} around zero. This would be correct, but it would involve some unnecessary work. The variable $x$ appears only in the exponential term, so a simpler option is to expand the exponential function, and plug the result of this expansion back into Equation \ref{morse2}. Let’s see how this works:

We want to expand $e^{-kx}$ as $a_0+a_1 x + a_2 x^2...a_n x^n$, and we know that the coefficients are $a_n=\frac{1}{n!} \left( \frac{d^n f(x)}{dx^n} \right)_0.$ The coefficient $a_0$ is $f(0)=1$.
The first three derivatives of $f(x)=e^{-kx}$ are

• $f'(x)=-ke^{-kx}$
• $f''(x)=k^2e^{-kx}$
• $f'''(x)=-k^3e^{-kx}$

When evaluated at $x=0$ we obtain $-k, k^2, -k^3...$, and therefore $a_n=\frac{(-1)^n k^n}{n!}$ for $n=0, 1, 2...$. Therefore,

$e^{-kx}=1-kx+k^2x^2/2!-k^3x^3/3!+k^4x^4/4!...$

and

$1-e^{-kx}=+kx-k^2x^2/2!+k^3x^3/3!-k^4x^4/4!...$

From the last result, when $x<<1$, we know that the terms in $x^2, x^3...$ will be increasingly smaller, so $1-e^{-kx}\approx kx$ and $(1-e^{-kx})^2\approx k^2x^2$. Plugging this result into Equation \ref{morse2} we obtain $V(R) \approx D_e k^2 x^2$, so we demonstrated that the potential is proportional to the square of the displacement when the displacement is small (the proportionality constant is $D_e k^2$). Therefore, stable diatomic molecules at room temperatures behave pretty much like a spring! (Don’t take this too literally. As we will discuss later, microscopic springs do not behave like macroscopic springs at all).
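A short numerical comparison makes the conclusion of Example $1$ concrete. In the sketch below (ours), the values of $D_e$ and $k$ are arbitrary illustrative numbers, not data for any real molecule:

```python
# Morse potential vs. its harmonic approximation De*k^2*x^2.
import numpy as np

De, k = 5.0, 1.5
for x in (0.01, 0.05, 0.1, 0.5):        # displacement x = R - Re
    morse = De * (1 - np.exp(-k * x))**2
    harmonic = De * k**2 * x**2
    print(f"x = {x:4.2f}   Morse = {morse:.5f}   harmonic = {harmonic:.5f}")
# The two agree closely for small x and drift apart as x grows.
```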
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Mathematical_Methods_in_Chemistry_(Levitus)/03%3A_Series/3.02%3A_Linear_Approximations.txt
Before discussing more applications of Maclaurin series, let’s expand our discussion to the more general case where we expand a function around values different from zero. Let’s say that we want to expand a function around the number $h$. If $h=0$, we call the series a Maclaurin series, and if $h\neq0$ we call the series a Taylor series. Because Maclaurin series are a special case of the more general case, we can call all the series Taylor series and omit the distinction. The following is true for a function $f(x)$ as long as the function and all its derivatives are finite at $h$: $\label{taylor} f(x)=a_0 + a_1(x-h)+a_2(x-h)^2+...+a_n(x-h)^n = \displaystyle\sum_{n=0}^{\infty}a_n(x-h)^n$ The coefficients are calculated as $\label{taylorcoeff} a_n=\frac{1}{n!}\left( \frac{d^n f}{dx^n}\right)_h$ Notice that instead of evaluating the function and its derivatives at $x=0$ we now evaluate them at $x=h$, and that the basis set is now $1, (x-h), (x-h)^2,...,(x-h)^n$ instead of $1, x, x^2,...,x^n$. A Taylor series will be a good approximation of the function at values of $x$ close to $h$, in the same way Maclaurin series provide good approximations close to zero. To see how this works let’s go back to the exponential function. Recall that the Maclaurin expansion of $e^x$ is shown in Equation $3.1.3$. We know what happens if we expand around zero, so to practice, let’s expand around $h=1$. The coefficient $a_0$ is $f(1)= e^1=e$. All the derivatives are $e^x$, so $f'(1)=f''(1)=f'''(1)...=e.$ Therefore, $a_n=\frac{e}{n!}$ and the series is therefore $\label{taylorexp} e\left[ 1+(x-1)+\frac{1}{2}(x-1)^2+\frac{1}{6}(x-1)^3+... \right]=\displaystyle\sum_{n=0}^{\infty}\frac{e}{n!}(x-1)^n$ We can use the same arguments we used before to conclude that $e^x\approx ex$ if $x\approx 1$. If $x\approx 1$, $(x-1)\approx 0$, and the terms $(x-1)^2, (x-1)^3$ will be smaller and smaller and will contribute less and less to the sum. Therefore, $e^x \approx e \left[ 1+(x-1) \right]=ex.$ This is the equation of a straight line with slope $e$ and $y$-intercept 0. In fact, from Equation $3.1.7$ we can see that all functions will look linear at values close to $h$. This is illustrated in Figure $1$, which shows the exponential function (red) together with the functions $1+x$ (magenta) and $ex$ (blue). Not surprisingly, the function $1+x$ provides a good approximation of $e^x$ at values close to zero (see Equation $3.1.3$) and the function $ex$ provides a good approximation around $x=1$ (Equation \ref{taylorexp}). Example $1$: Expand $f(x)=\ln{x}$ about $x=1$ Solution $f(x)=a_0 + a_1(x-h)+a_2(x-h)^2+...+a_n(x-h)^n, a_n=\frac{1}{n!}\left( \frac{d^n f}{dx^n}\right)_h \nonumber$ $a_0=f(1)=\ln(1)=0 \nonumber$ The derivatives of $\ln{x}$ are: $f'(x) = 1/x, f''(x)=-1/x^2, f'''(x) = 2/x^3, f^{(4)}(x)=-6/x^4, f^{(5)}(x)=24/x^5... \nonumber$ and therefore, $f'(1) = 1, f''(1)=-1, f'''(1) = 2, f^{(4)}(1)=-6, f^{(5)}(1)=24.... \nonumber$ To calculate the coefficients, we need to divide by $n!$: • $a_1=f'(1)/1!=1$ • $a_2=f''(1)/2!=-1/2$ • $a_3=f'''(1)/3!=2/3!=1/3$ • $a_4=f^{(4)}(1)/4!=-6/4!=-1/4$ • $a_n=(-1)^{n+1}/n$ The series is therefore: $f(x)=0 + 1(x-1)-1/2 (x-1)^2+1/3 (x-1)^3...=\displaystyle{\color{Maroon}\displaystyle\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}(x-1)^{n}} \nonumber$ Note that we start the sum at $n=1$ because $a_0=0$, so the term for $n=0$ does not have any contribution. Need help? The links below contain solved examples. 
External links:
Finding the Taylor series of a function I: http://patrickjmt.com/taylor-and-maclaurin-series-example-2/

3.04: Other Applications of Maclaurin and Taylor series

So far we have discussed how we can use power series to approximate more complex functions around a particular value. This is very common in physical chemistry, and you will apply it frequently in future courses. There are other useful applications of Taylor series in the physical sciences. Sometimes, we may use series to derive equations or prove relationships. Example $1$ illustrates this last point.

Example $1$

Calculate the following sum ($\lambda$ is a positive constant)

$\displaystyle\sum_{k=0}^{\infty}\frac{\lambda^k e^{-\lambda}}{k!} \nonumber$

Solution

Let’s ‘spell out’ the sum:

$\displaystyle\sum_{k=0}^{\infty}\frac{\lambda^k e^{-\lambda}}{k!}=e^{-\lambda} \left[1+\frac{\lambda^1}{1!}+\frac{\lambda^2}{2!}+\frac{\lambda^3}{3!}...\right] \nonumber$

The sum within the brackets is exactly $e^\lambda$. This is exact, and not an approximation, because we have all infinite terms. Therefore,

$\sum_{k=0}^{\infty}\frac{\lambda^k e^{-\lambda}}{k!}=e^{-\lambda}e^\lambda=1 \nonumber$

This would require that you recognize the term within brackets as the Maclaurin series of the exponential function. One simpler version of the problem would be to ask you to prove that the sum equals 1.

There are more ways we can use Taylor series in the physical sciences. We will see another type of application when we study differential equations. In fact, power series are extremely important in finding the solutions of a large number of equations that arise in quantum mechanics. The description of atomic orbitals, for example, requires that we solve differential equations that involve expressing functions as power series.

3.05: Problems

Problem $1$

Expand the following functions around the value of $x$ indicated in each case. In each case, write down at least four terms of the series, and write down the result as an infinite sum.

• $\sin{(ax)}$, $x=0$, $a$ is a constant
• $\cos{(ax)}$, $x=0$, $a$ is a constant
• $e^{ax}$, $x=0$, $a$ is a real constant
• $e^{-ax}$, $x=0$, $a$ is a real constant
• $\ln{(ax)}$, $x=1$, $a$ is a real constant

Problem $2$

Use the results of the previous problem to prove Euler’s relationship:

$e^{ix}=\cos x + i \sin x \nonumber$

Problem $3$

The osmotic pressure ($\pi$) of a solution is given by

$-RT \ln x_A=\pi V_m \nonumber$

where $V_m$ is the molar volume of the pure solvent, and $x_A$ is the mole fraction of the solvent. Show that in the case of a dilute solution

$RT x_B \approx \pi V_m \nonumber$

where $x_B$ is the mole fraction of the solute. Remember that the mole fractions of the solute and the solvent need to add up to 1. Note: you may use any of the results you obtained in Problem $1$.

Problem $4$

The following expression is known as the Butler-Volmer equation, and it is used in electrochemistry to describe the kinetics of an electrochemical reaction controlled solely by the rate of the electrochemical charge transfer process.

$j=j_0({e^{(1-\alpha)f\eta}-e^{-\alpha f \eta}}), ~ 0<\alpha<1 \text{ and } f>0, \eta>0 \nonumber$

Show that $j \approx j_0 f \eta$ when $f \eta <<1$. Note: you may use any of the results you obtained in Problem $1$.
Problem $5$ The energy density of black-body radiation ($\rho$) at temperature $T$ is given by Planck’s formula: $\rho(\lambda)=\frac{8\pi h c}{\lambda^5}[e^{hc/\lambda k T}-1]^{-1} \nonumber$ where $\lambda$ is the wavelength, $h$ is Planck’s constant, $k$ is Boltzmann’s constant, and $c$ is the speed of light. Show that the formula reduces to the classical Rayleigh-Jeans law $\rho = 8\pi kT/\lambda^4$ for long wavelengths ($\lambda \rightarrow \infty$). Hint: Define a variable $\nu = \lambda^{-1}$ and solve the problem for $\nu \rightarrow 0$. Note: you may use any of the results you obtained in Problem $1$. Problem $6$ Use series to prove that $\sum \limits _{k=0} ^\infty{\frac{\lambda^k e^{-\lambda}}{k!}}=1$, $\lambda$ is a positive real constant. Problem $7$ Write down the equation of a straight line that provides a good approximation of the function $e^x$ at values close to $x = 2$. Problem $8$ Use a Taylor expansion around $a$ to prove that $\ln{x} = \ln{a}+\displaystyle\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n a^n}(x-a)^n$
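Before attempting the proof in Problem $8$, it can be reassuring to spot-check the identity numerically. The sketch below (pure Python, not part of the original text; the values of $a$ and $x$ are arbitrary choices for illustration) sums the first 50 terms of the series:

```python
# Numerical spot-check of ln(x) = ln(a) + sum_{n>=1} (-1)^(n+1)/(n*a^n) * (x-a)^n.
import math

def ln_series(x, a, n_terms=50):
    s = math.log(a)
    for n in range(1, n_terms + 1):
        s += (-1)**(n + 1) / (n * a**n) * (x - a)**n
    return s

a, x = 2.0, 2.5      # the series converges for |x - a| < a
print("series:", ln_series(x, a))
print("ln(x): ", math.log(x))
```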
Objectives • Be able to identify the dependent and independent variables in a differential equation. • Be able to identify whether an ordinary differential equation (ODE) is linear or nonlinear. • Be able to identify the order of an ODE. • Be able to identify whether a first order ODE is separable or not. • Be able to find the general and particular solutions of linear first order ODEs. • Be able to find the general and particular solutions of separable first order ODEs. • Understand how to verify that the solution you got in a problem satisfies the differential equation and initial conditions. • Understand how to solve differential equations in the context of chemical kinetics. Understand the concepts of mass balance and half-life. • 4.1: Definitions and General Concepts A differential equation is an equation that defines a relationship between a function and one or more derivatives of that function. An ordinary differential equation (ODE) relates an unknown function of a single variable, y(t), to its derivatives. Differential equations arise in the mathematical models that describe most physical processes. • 4.2: 1st Order Ordinary Differential Equations We will discuss only two types of 1st order ODEs, which are the most common in the chemical sciences: linear 1st order ODEs, and separable 1st order ODEs. These two categories are not mutually exclusive, meaning that some equations can be both linear and separable, or neither linear nor separable. • 4.3: Chemical Kinetics The term chemical kinetics refers to the study of the rates of chemical reactions. Differential equations play a central role in the mathematical treatment of chemical kinetics. We will start with the simplest examples, and then we will move to more complex cases. We will focus on a couple of reaction mechanisms. The common theme will be to find expressions that will allow us to calculate the concentration of the different species that take part in the reaction at different reaction times. • 4.4: Problems 04: First Order Ordinary Differential Equations A differential equation is an equation that defines a relationship between a function and one or more derivatives of that function. For example, this is a differential equation that relates the function $y(t)$ with its first and second derivatives: $\dfrac{d^2}{dt^2}y(t)+\dfrac{1}{4}\dfrac{d}{dt}y(t)+y(t)=\sin(t) \nonumber$ The above example is an ordinary differential equation (ODE) because the unknown function, $y(t)$, is a function of a single variable (in this case $t$). Because we are dealing with a function of a single variable, only ordinary derivatives appear in the equation. If we were dealing with a function of two or more variables, the partial derivatives of the function would appear in the equation, and we would call this differential equation a partial differential equation (PDE). An example of a PDE is shown below. We will discuss PDEs towards the end of the semester. $\dfrac{\partial ^2u}{\partial x^2}+\dfrac{\partial ^2u}{\partial y^2}+\dfrac{\partial ^2u}{\partial z^2}=c^2 \dfrac{\partial ^2u}{\partial t^2} \nonumber$ Note that in the example above we got ‘lazy’ and used $u$ instead of $u(x,y,z,t)$. The fact that $u$ is a function of $x,y,z$, and $t$ is obvious from the derivatives, so you will often see that we relax the notation and do not write the variables explicitly with the function. Why do we need to study differential equations? The answer is simple: Differential equations arise in the mathematical models that describe most physical processes.
Figure $1$ illustrates three examples. The first column illustrates the problem of molecular transport (diffusion). Suppose the red circles represent molecules of sucrose (sugar) and the black circles molecules of water, and assume you are interested in modeling the concentration of sucrose as a function of position and time after you dissolve some sucrose right in the middle of the container. The differential equation describing the process is a partial differential equation because the concentration will be a function of two variables: $r$, the distance from the origin, and $t$, the time elapsed from the moment you started the experiment. The solution shown in the figure was obtained by assuming that all the molecules of sucrose are concentrated at the center at time zero. You could solve the differential equation with other initial conditions. The second column illustrates a chemical system where a compound A reacts to give B. The reaction is reversible, and it could, for example, represent two different isomers of the same molecule. The differential equation models how the concentration of A, [A], changes with time. We will in fact analyze this problem in detail and find the solution shown in the figure. As we will see, in this case we are also assuming certain initial conditions. Finally, the third column illustrates a mass attached to a spring. We will also analyze this equation, and the solution is not shown because it will be your job to get it in your homework. You may think this is a physics problem, but because molecules have chemical bonds, and atoms vibrate around their equilibrium positions, systems like these are of interest to chemists as well. To solve the differential equation means to find the function (or family of functions) that satisfies the equation. In our first example in Figure $1$, we would need to find the function $C(r,t)$ that satisfies the equation. In the second example we would need to find all the functions $A(t)$ that satisfy the equation. As we will see shortly, whether the solution is one function or a family of functions depends on whether we are restricted by initial conditions (e.g. at time zero [B] = 0) or not. The order of a differential equation (partial or ordinary) is the order of the highest derivative that appears in the equation. Below is an example of a first order ordinary differential equation: $\label{eq1}\dfrac{dy}{dx}+3y=8e^x$ In this example, we are looking for all the functions $y(x)$ that satisfy Equation \ref{eq1}. As usual, we will call $x$ the independent variable, and $y$ the dependent variable. Again, $y$ is of course $y(x)$, but often we do not write this explicitly to save space and time. This is an ordinary differential equation because $y$ is a function of a single variable. It is a first order differential equation because the highest derivative is of first order. It is also a linear differential equation because the dependent variable and all of its derivatives appear in a linear fashion. The distinction between linear and non-linear ODEs is very important because there are different methods to solve different types of differential equations. Mathematically, a first order linear ODE will have this general form: $\dfrac{dy}{dx}+f_1(x) y=f_2(x)$ It is crucial to understand that the linearity refers to the terms that include the dependent variable (in this case $y$). The terms involving $x$ ($f_1(x)$ and $f_2(x)$) can be non-linear, as in Equation \ref{eq1}. An example of a non-linear differential equation is shown below.
Note that the dependent variable appears in a transcendental function (in this case an exponential), and that makes this equation non-linear: $\dfrac{dy}{dx}+3e^y=8x \nonumber$ Analogously, a linear second order ODE will have the general form: $\label{eq:linear}\dfrac{d^2y}{dx^2}+f_1(x)\dfrac{dy}{dx}+f_2(x) y=f_3(x)$ Again, we don’t care whether the functions $f_1(x), f_2(x)$ and $f_3(x)$ are linear in $x$ or not. The only thing that matters is that the terms involving the dependent variable are. Identifying the dependent and the independent variables: Test yourself with this short quiz. http://tinyurl.com/ll22wnv Linear or non-linear? See if you can identify the linear ODEs in this short quiz. http://tinyurl.com/msldkp3 We are still defining concepts, but we haven’t said anything so far regarding how to solve differential equations! We still need to go over a few more things, so be patient. An $n$th-order differential equation together with $n$ auxiliary conditions imposed at the same value of the independent variable is called an initial value problem. For example, we may be interested in finding the function $y(x)$ that satisfies the following conditions: $\label{eq2}y''=e^{-x}, y(0)=1, y'(0)=4$ Notice that we are introducing different types of notations so you get used to seeing mathematical equations in different ‘flavors’. Here, $y''$ of course means $\dfrac{d^2y(x)}{dx^2}$. This is an initial value problem because we have a second-order differential equation with two auxiliary conditions imposed at the same value of $x$ (in this case $x=0$). There are infinitely many functions $y(x)$ that satisfy $y''=e^{-x}$, but only one will also satisfy the two initial conditions we imposed. If we were dealing with a first order differential equation we would need only one initial condition. We would need three for a third-order differential equation. How do we use initial conditions to find a solution? In general, the solution of a second order ODE will contain two arbitrary constants (in the example below, $c_1$ and $c_2$). This is what we will call the general solution of the differential equation. For example, $y(x)=e^{-x} + c_1 x+c_2$ is the general solution of $y''=e^{-x}$. We can verify this is true by taking the second derivative of this function. Again, we do not know yet how to get these solutions, but if we are given this solution, we know how to check if it is correct. It is clear that $c_1$ and $c_2$ can be in principle anything, so the solution of the ODE is a whole family of functions. However, if we are given initial conditions we are looking for a particular solution, one that not only satisfies the ODE, but also the initial conditions. Which function is that? The first initial condition states that $y(0)=1$. Therefore, $y(x)=e^{-x} + c_1 x+c_2\Rightarrow y(0)=1+c_2 \nonumber$ $y(0)=1 \Rightarrow 1=1+c_2 \Rightarrow c_2 =0 \nonumber$ So far, we demonstrated that the functions $y(x)=e^{-x} + c_1 x$ satisfy not only the ODE, but also the initial condition $y(0)=1$. We still have another initial condition, which will allow us to determine the value of $c_1$. $y'(x)=-e^{-x} + c_1\Rightarrow y'(0)=-1+c_1 \nonumber$ $y'(0)=4 \Rightarrow 4=-1+c_1 \Rightarrow c_1 =5 \nonumber$ Therefore, the function $y(x)=e^{-x}+5x$ is the particular solution of the initial value problem described in Equation \ref{eq2}. We can check our answer by verifying that this solution satisfies the three equations of the initial value problem:
1. $y''=e^{-x} \rightarrow \dfrac{d^2}{dx^2} (e^{-x}+5x)=e^{-x}$, so we know that the solution we found satisfies the differential equation. 2. $y(0)=1 \rightarrow e^{-0}+5\times 0=1$, so we know that the solution we found satisfies one of the initial conditions. 3. $y'(0)=4 \rightarrow \dfrac{d}{dx} (e^{-x}+5x)=-e^{-x}+5\Rightarrow y'(0)=4$, so we know that the solution we found satisfies the other initial condition as well. Therefore, we demonstrated that $y(x)=e^{-x}+5x$ is indeed the solution of the problem. An $n$th-order differential equation together with $n$ auxiliary conditions imposed at more than one value of the independent variable is called a boundary value problem. What is the difference between a boundary value problem and an initial value problem? In the first case, the conditions are imposed at different values of the independent variable, while in the second case, as we just saw, the conditions are imposed at the same value of the independent variable. For example, this would be a boundary value problem: $\label{eq3}y''=e^{-x}, y(0)=1, y(1)=0$ Notice that we still have two conditions because we are dealing with a second order differential equation. However, one condition deals with $x=1$ and the other with $x=0$. The conditions can refer to values of the first derivative, as in the previous example, or to values of the function itself, as in the example of Equation \ref{eq3}. Why do we need to distinguish between initial value and boundary value problems? The reason lies in a theorem that states that, for linear ODEs, the solution of an initial value problem exists and is unique, but a boundary value problem does not have the existence and uniqueness guarantee. The theorem is not that simple (for example it requires that the functions in $x$ ($f_{1...3}$ in Equation \ref{eq:linear}) are continuous), but the bottom line is that a solution is guaranteed to exist whenever the conditions are imposed at the same value of $x$, but not necessarily whenever the conditions are imposed at different values. We will see examples when we discuss second order ODEs, and in particular we will discuss how boundary conditions give rise to interesting physical phenomena. For example, we will see that boundary conditions are responsible for the fact that energies in atoms and molecules are quantized. Coming back to the boundary value problem of Equation \ref{eq3}, it is important to recognize that because the actual differential equation is the same as in the example of Equation \ref{eq2}, the general solution is still the same: $y(x)=e^{-x} + c_1 x+c_2$. However, the particular solution will be different (different values of $c_1$ and $c_2$), because we need to satisfy different conditions. As in the first example, the first boundary condition states that $y(0)=1$ so: $y(x)=e^{-x} + c_1 x+c_2\Rightarrow y(0)=1+c_2 \nonumber$ $y(0)=1 \Rightarrow 1=1+c_2 \Rightarrow c_2 =0 \nonumber$ as before. However, now $y(1)=e^{-1} + c_1 \nonumber$ where we have used the result $c_2=0$ so $y(1)=0 \Rightarrow 0=e^{-1}+c_1 \Rightarrow c_1 =-e^{-1} \nonumber$ and the particular solution is therefore $y(x)=e^{-x}-e^{-1}x$. As we did before, it is important that we check our solution. If we are right, this solution needs to satisfy all the relationships stated in Equation \ref{eq3}. 1. $y''=e^{-x} \rightarrow \dfrac{d^2}{dx^2} (e^{-x}-e^{-1}x)=e^{-x}$, so the solution satisfies the differential equation.
2. $y(0)=1 \rightarrow e^{-0}-e^{-1}\times 0=1$, so our solution satisfies one of the boundary conditions. 3. $y(1)=0 \rightarrow e^{-1}-e^{-1}\times1=0$, so our solution satisfies the other boundary condition as well. Figure $3$ illustrates the difference between the general solution and the particular solution. The general solution has two arbitrary constants, so there are an infinite number of functions that satisfy the differential equation. Three examples are shown in different colors. However, only one of these satisfies both boundary conditions (shown with the arrows). Using boundary conditions: see if you can obtain the particular solution of a second order ODE in this short quiz. http://tinyurl.com/lovh4x3 So far we have discussed how to: • Identify the dependent and independent variables. • Identify the order of the differential equation. • Identify whether the equation is linear or not. • Use initial or boundary conditions to obtain particular solutions from general solutions. • Check your results to be sure you satisfy the differential equation and all the initial or boundary conditions. We are obviously missing the most important question: How do we solve the differential equation? Unfortunately, there is no universal method, and in fact some differential equations cannot be solved analytically. We will see some examples of equations that cannot be solved analytically and we will discuss what can be done in those cases. In this class we will only discuss some differential equations of interest in physical chemistry. It is not our intention to cover the topic in a comprehensive manner, and we will not touch on other differential equations that might be of interest in other disciplines. Yet, the background you will get in this class will allow you to teach yourself more advanced topics in differential equations if your future career demands that you have a deeper knowledge of the subject.
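If you have access to a computer algebra system, you can let it do the bookkeeping for you. The sketch below (a minimal example, assuming SymPy is installed; it is not part of the original text) re-derives the particular solutions of the initial value problem of Equation \ref{eq2} and the boundary value problem of Equation \ref{eq3}:

```python
# Solving y'' = exp(-x) with SymPy, first as an initial value problem
# (conditions at x = 0 only) and then as a boundary value problem
# (conditions at x = 0 and x = 1).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2), sp.exp(-x))

# Initial value problem: y(0) = 1, y'(0) = 4
ivp = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 4})
print(ivp)   # expected: y(x) = exp(-x) + 5*x

# Boundary value problem: y(0) = 1, y(1) = 0
bvp = sp.dsolve(ode, y(x), ics={y(0): 1, y(1): 0})
print(bvp)   # expected: y(x) = exp(-x) - x/e
```

Checking the printed results against the particular solutions derived above is a good way to catch algebra mistakes.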
We will discuss only two types of 1st order ODEs, which are the most common in the chemical sciences: linear 1st order ODEs, and separable 1st order ODEs. These two categories are not mutually exclusive, meaning that some equations can be both linear and separable, or neither linear nor separable. Separable 1st order ODEs An ODE is called separable if it can be written as $\label{sep}\dfrac{dy}{dx}=\dfrac{g(x)}{h(y)}$ A separable differential equation is the easiest to solve because it readily reduces to a problem of integration: $\label{sep2} \int h(y)dy=\int g(x)dx$ For example: $\dfrac{dy}{dx}=4y^2x$ can be written as $y^{-2}dy=4xdx$ or $\dfrac{1}{4}y^{-2}dy=xdx$. This equation is separable because the terms multiplying $dy$ do not contain any terms involving $x$, and the terms multiplying $dx$ do not contain any terms involving $y$. This allows you to integrate and solve for $y(x)$: \begin{aligned} \int y^{-2}dy&=\int 4xdx \\ -\dfrac{1}{y}+c_1 &=2x^2+c_2 \\ y&=-\dfrac{1}{2x^2+c_3} \end{aligned} \nonumber where $c_3 = c_2-c_1$. Let’s see how to separate other equations. If you wanted to finish these problems you would integrate both sides and solve for the dependent variable, as shown in the solved examples below. For now, let’s concentrate on how to separate the terms involving the independent variable from the terms involving the dependent variable: Example 1: $y' = e^{−y} (3 − x) \nonumber$ $\frac{dy}{dx} = e^{−y} (3 − x) \nonumber$ Separated ODE: $e^y dy = (3 − x)dx \nonumber$ Example 2: $\theta' = \frac{t^2}{ \theta} \nonumber$ $\frac{d \theta}{dt} = \frac{t^2}{ \theta} \nonumber$ Separated ODE: $\theta d \theta = t^2dt \nonumber$ Example 3: $\frac{dA(t)}{dt} = \frac{2−t}{1−A(t)} \nonumber$ Separated ODE: $(1 − A(t))dA = (2 − t)dt \nonumber$ Example $1$ Solve the following differential equation: $\frac{dy}{dx} = yx^2$ Solution We first ‘separate’ the terms involving $y$ from the terms involving $x$: $\frac{1}{y} dy = x^2 dx \nonumber$ and then integrate both sides (it is crucial not to forget the integration constants): $\int \frac{1}{y} dy = \int x^2 dx \rightarrow \ln y + c_1 = \frac{1}{3} x^3 + c_2 \nonumber$ Remember that our goal is to find $y(x)$, so our job now is to solve for $y$: \begin{align*} \ln y + c_1 &= \frac{1}{3} x^3 + c_2 \\[4pt] \ln y &= \frac{1}{3} x^3 + c_2 − c_1 \\[4pt] &= \dfrac{1}{3} x^3 + c_3 \end{align*} $y = \exp \left( \frac{1}{3} x^3 + c_3 \right) = \exp \left( \dfrac{1}{3} x^3 \right) \exp(c_3) = \textcolor{red}{Ke^{x^3/3}} \nonumber$ Notice that $c_2 − c_1$ is a constant, so we re-named it $c_3$. In addition, $\exp(c_3)$ is also a constant, so we re-named it $K$. The names of the constants are not important. Just to be on the safe side, let’s verify that $Ke^{x^3/3}$ is indeed the solution of this differential equation. We’ll do it by substitution. On the left-hand side of the equation we have $\frac{dy}{dx}$, and on the right-hand side we have $yx^2$. We’ll replace $y$ by $Ke^{x^3/3}$ on both sides, and verify that the two sides are identical (the equality holds). $\frac{dy}{dx} = Ke^{x^3/3}x^2 \nonumber$ $yx^2 = Ke^{x^3/3}x^2 \nonumber$ We just verified that the function $Ke^{x^3/3}$ satisfies $\frac{dy}{dx} = yx^2$, so we know our solution is correct! Example $1$ illustrates why you need to be very comfortable with the properties of the logarithmic and exponential functions. These functions appear everywhere in the physical sciences, so if you found this challenging you need to review your algebra!
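If you would rather let the computer do the substitution, the minimal sketch below (assuming SymPy is installed; not part of the original text) performs the same check:

```python
# Verify by substitution that y = K*exp(x**3/3) satisfies dy/dx = y*x**2.
import sympy as sp

x, K = sp.symbols('x K')
y = K * sp.exp(x**3 / 3)

residual = sp.diff(y, x) - y * x**2   # dy/dx - y*x^2 should vanish
print(sp.simplify(residual))          # prints 0, for any value of K
```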
Example $2$ Solve the following differential equation: $\dfrac{dx}{dt} = x^2 t$ This might look similar to Example $1$, but notice that in this case, $x$ is the dependent variable. Solution We first ‘separate’ the terms involving $x$ from the terms involving $t$: $\dfrac{1}{x^2}dx=t dt \nonumber$ and then integrate both sides (it is crucial not to forget the integration constants): $\int\dfrac{1}{x^2}dx=\int tdt\rightarrow -x^{-1}+c_1=t^2/2+c_2 \nonumber$ Remember that our goal is to find $x(t)$, so our job now is to solve for $x$: $-x^{-1}+c_1=t^2/2+c_2 \nonumber$ $x^{-1}=-t^2/2+(c_1-c_2) \nonumber$ $x(t) =-\dfrac{1}{\dfrac{t^2}{2}+(c_2-c_1)}=\displaystyle{\color{Maroon}-\dfrac{1}{t^2/2+c}} \nonumber$ where $c=(c_2-c_1)$. Just to be on the safe side, let’s verify that our solution satisfies the differential equation. We’ll do it by substitution. On the left-hand side of the equation we have $\dfrac{dx}{dt}$, and on the right-hand side we have $t x^2$. We’ll replace $x$ by $-\dfrac{1}{t^2/2+c}$ on both sides, and verify that the two sides are identical (the equality holds). $\dfrac{dx}{dt} = \dfrac{t}{(t^2/2+c)^2} \nonumber$ $x^2t=\dfrac{t}{(t^2/2+c)^2} \nonumber$ We just verified that the function $-\dfrac{1}{t^2/2+c}$ satisfies $\dfrac{dx}{dt} = tx^2$, so we know our solution is correct! This example is also available as a video: http://tinyurl.com/kxdfqxq Watch an additional solved example: http://tinyurl.com/kem7e6h Linear 1st order ODEs A general first-order linear ODE can be written $\label{linear}\dfrac{dy}{dx}+p(x) y=q(x)$ Note that the linearity refers to the $y$ and $dy/dx$ terms; $p(x)$ and $q(x)$ do not need to be linear in $x$. You may need to rearrange terms to write your equation in the form shown in Equation \ref{linear}. For example, $dy=(8 e^x-3y)dx$ is linear, because it can be re-organized as $\label{exlin1}\dfrac{dy}{dx}+3y=8e^x$ Comparing Equations \ref{linear} and \ref{exlin1}, we see that in this case, $p(x) = 3$ and $q(x)=8e^x$. In this example $p(x)$ is a constant, but this need not be the case. The term $p(x)$ can be any function of $x$. First order linear ODEs can be solved by multiplying by the integrating factor $e^{ \int p(x)dx }$. This sounds very strange at first sight, but we will see how it works with the example of Equation \ref{exlin1}. Our very first step is to write the equation so it looks like Equation \ref{linear}. We then calculate the integrating factor, in this case $e^{ \int 3dx }=e^{3x}$. We next multiply both sides of the equation by the integrating factor: $\dfrac{dy}{dx} e^{3x}+3ye^{3x}=8e^xe^{3x}=8e^{4x} \nonumber$ In the next step, we recognize that the left-hand side is the derivative of the dependent variable multiplied by the integrating factor: $\dfrac{dy}{dx} e^{3x}+3ye^{3x}=\dfrac{d}{dx} (y e^{3x}) \nonumber$ This last step is the multiplication rule in reverse. If you start with $\dfrac{d}{dx} (y e^{3x})$, you can apply the multiplication rule to obtain $\dfrac{dy}{dx} e^{3x}+3ye^{3x}$. The whole point of calculating the integrating factor is that it guarantees that the left-hand side of the equation will always be the derivative of the dependent variable multiplied by the integrating factor. This will allow us to move $dx$ to the right side, and integrate: $\dfrac{d}{dx} (y e^{3x})=8e^{4x} \nonumber$ $d(y e^{3x})=8e^{4x}dx \nonumber$ $\int d(y e^{3x})=\int 8e^{4x}dx \nonumber$ The left-hand side of the above equation is $\int d(y e^{3x})=y e^{3x}$, in the same way that $\int dy=y$.
The right side is $\int 8e^{4x}dx=2e^{4x}+c$. Note that we included the integration constant only on one side. This is because we already saw that if we included integration constants on both sides we could group them into a single constant. So far we have $y e^{3x}=2e^{4x}+c \nonumber$ Now we need to solve for the dependent variable, in this case $y(x)$. Dividing both sides by $e^{3x}$: $y =2e^{x}+c e^{-3x} \nonumber$ which is the general solution of Equation \ref{exlin1}. Before moving on let’s verify that this function satisfies Equation \ref{exlin1} by substituting $y$ by $y =2e^{x}+c e^{-3x}$: $\dfrac{dy}{dx}=2e^x-3ce^{-3x} \nonumber$ $3y = 6e^{x}+3c e^{-3x} \nonumber$ $\dfrac{dy}{dx}+3y =2e^x-3ce^{-3x} +6e^{x}+3c e^{-3x}=8e^{x} \nonumber$ which equals the right-hand side of Equation \ref{exlin1}. This verifies that $y =2e^{x}+c e^{-3x}$ is indeed the general solution of Equation \ref{exlin1}. If we were given an initial condition we could calculate the value of $c$ and obtain the particular solution. Let’s review and list the steps we used to find the solution of a linear 1st order ODE: 1. Re-arrange the terms so the equation has the form $\dfrac{dy}{dx}+p(x) y=q(x)$ 2. Multiply both sides of the equation by the integrating factor $e^{\int p(x)dx}$ 3. Recognize that the left-hand side can be written as $\dfrac{d}{dx} \left( y e^{\int p(x)dx}\right)$ 4. Move $dx$ to the right-hand side and integrate. Remember the integration constants! 5. Solve for the dependent variable 6. If given, use the initial condition to calculate the value of the arbitrary constant. 7. Verify your solution is correct by substitution into the differential equation. See another solved example to see this method in action: http://tinyurl.com/lzluktp External Links: http://www.youtube.com/watch?v=HAb9JbBD2ig
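The seven steps above can be mirrored almost literally in a computer algebra system. The sketch below (a minimal illustration, assuming SymPy is installed; not part of the original text) applies the recipe to Equation \ref{exlin1}:

```python
# The integrating-factor recipe applied to dy/dx + 3y = 8*exp(x).
import sympy as sp

x, c = sp.symbols('x c')
p = sp.Integer(3)         # p(x) = 3
q = 8 * sp.exp(x)         # q(x) = 8*exp(x)

mu = sp.exp(sp.integrate(p, x))           # step 2: integrating factor e^{3x}
y = (sp.integrate(mu * q, x) + c) / mu    # steps 3-5: y = (integral of mu*q + c)/mu
print(sp.expand(y))                       # expected: 2*exp(x) + c*exp(-3*x)

# Step 7: verify by substitution that dy/dx + 3y = 8*exp(x)
print(sp.simplify(y.diff(x) + 3*y))       # expected: 8*exp(x)
```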
The term chemical kinetics refers to the study of the rates of chemical reactions. As we will see, differential equations play a central role in the mathematical treatment of chemical kinetics. We will start with the simplest examples, and then we will move to more complex cases. As you will see, in this section we will focus on a couple of reaction mechanisms. The common theme will be to find expressions that will allow us to calculate the concentration of the different species that take part in the reaction at different reaction times. Let’s start with the simplest case, in which a reactant A reacts to give the product B. We’ll assume the reaction proceeds in one step, meaning there are no intermediates that can be detected. $\label{1order} A \overset{k}{\rightarrow}B$ We’ll use the following notation for the time-dependent concentrations of A and B: [A]$(t)$, [B]$(t)$, or simply [A] and [B]. We’ll use [A]$_0$ and [B]$_0$ to denote the concentrations of A and B at time $t=0$. The constant $k$ is the rate constant of the reaction, and is a measure of how fast or slow the reaction is. It depends on the reaction itself (the chemical compounds A and B) and environmental factors such as temperature. The rate constant does not depend on the concentrations of the species involved in the reaction. The units of $k$ depend on the particular mechanism of the reaction, as we will see through the examples. For the case described above, the units will be 1/time (e.g. $s^{-1}$). The rate of the reaction ($r$) will be defined as the number of moles of A that disappear or the number of moles of B that appear per unit of time (e.g. per second) and volume (e.g. liter). This is true because of the stoichiometry of the reaction, as we will discuss in a moment. However, because the rate is a positive quantity, we will use a negative sign if we look at the disappearance of A: $r=-\frac{d[A]}{dt}=\frac{d[B]}{dt}$ The rate of the reaction, therefore, is a positive quantity with units of M s$^{-1}$, or in general, concentration per unit of time. As we will see, the rate of the reaction depends on the actual concentration of reactant, and therefore will in general decrease as the reaction progresses and the reactant is converted into product. Although all the molecules of A are identical, they do not need to react at the same instant. Consider the simple mechanism of Equation \ref{1order}, and imagine that every molecule of A has a probability $p = 0.001$ of reacting in every one-second interval. Suppose you start with 1 mole of A in a 1 L flask ($[A]_0 = 1 M$), and you measure the concentration of A one second later. How many moles of A do you expect to see? To answer this question, you can imagine that you get everybody in China (about one billion people) to throw a die at the same time, and that everybody who gets a six wins the game. How many winners do you expect to see? You know that the probability that each individual gets a six is $p=1/6$, and therefore one-sixth of the players will win in one round of the game. Therefore, you can predict that the number of winners will be $10^9/6$, and the number of losers will be $5/6\times10^9$. If we get the losers to play a second round, we expect that one-sixth of them will get a six, which accounts for $5/6\times10^9\times 1/6$ people. After the second round, therefore, we’ll still have $(5/6)^2\times10^9$ losers.
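The dice argument is easy to simulate. The short sketch below (pure Python, not part of the original text; the numbers are illustrative choices) gives every 'molecule' a reaction probability $p$ in each one-second interval and counts the survivors, which track $N_0(1-p)^t$ closely:

```python
# Monte Carlo version of the dice game: each particle survives a one-second
# interval with probability (1 - p). The survivor count follows N0*(1-p)^t.
import random

N0, p = 100_000, 0.001
survivors = N0
for t in range(1, 6):
    survivors = sum(1 for _ in range(survivors) if random.random() > p)
    print(f"t = {t} s: {survivors} survivors (expected ~{N0*(1-p)**t:.0f})")
```

Because the fraction that reacts in each interval is fixed, the number that reacts per interval is proportional to the number still present; the next paragraph turns exactly this observation into a differential equation.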
Following the same logic, the probability that a molecule of A reacts to give B in each one-second interval is $p = 1/1000$, and therefore in the first second you expect that $6\times10^{23}/1000$ molecules react and $999\times6\times10^{23}/1000$ remain unreacted. In other words, during the first second of your reaction 0.001 moles of A were converted into B, and therefore the rate of the reaction was $10^{-3}M s^{-1}$. During the second one-second interval of the reaction you expect that one-thousandth of the remaining molecules will react, and so on. Imagine that you come back one hour later (3,600 s). We expect that $(999/1000)^{3,600}\times 6\times10^{23}$ molecules will remain unreacted, which is about $1.6\times 10^{22}$ molecules. If you measure the reaction rate in the next second, you expect that one-thousandth of them ($1.6\times 10^{19}$ molecules, or $2.7\times 10^{-5}$ moles) will react to give B. The rate of the reaction, therefore, decreased from $10^{-3}M s^{-1}$ at $t=1s$ to $2.7\times10^{-5}M s^{-1}$ at $t = 1h$. You should notice that the fraction of molecules of A that react in each one-second interval is always the same (in this case one-thousandth). Therefore, the number of molecules that react per time interval is proportional to the number of molecules of A that remain unreacted at any given time. We just concluded that the rate of the reaction is proportional to the concentration of A: $\label{eq:1st} r=-\frac{d[A]}{dt}=\frac{d[B]}{dt}=k[A]$ The proportionality constant, $k$, is related to the probability that a molecule will react in a small time interval, as we discussed above. In this class, we will concentrate on solving differential equations such as the one above. This is a very simple differential equation that can be solved using different initial conditions. Let’s say that our goal is to find both [A]$(t)$ and [B]$(t)$. As chemists, we need to keep in mind that the law of mass conservation requires that $\label{mass} [A](t) + [B](t) = [A]_0 + [B]_0$ In plain English, the concentrations of A and B at any time need to add up to the sum of the initial concentrations, as one molecule of A converts into B, and we cannot create or destroy matter. Again, keep in mind that this equation will need to be modified according to the stoichiometry of the reaction. We will call an equation of this type a ‘mass balance’. Before solving this equation, let’s look at other examples. What are the differential equations that describe this sequential mechanism? $A \overset{k_1}{\rightarrow}B\overset{k_2}{\rightarrow}C \nonumber$ In this mechanism, A is converted into C through an intermediate, B. Everything we discussed so far will apply to each of these two elementary reactions (the two that make up the overall mechanism). From the point of view of A nothing changes. Because the rate of the first reaction does not depend on B, it is irrelevant that B is converted into C (imagine you give 1 dollar per day to a friend. It does not matter whether your friend saves the money or gives it to someone else, you still lose 1 dollar per day). $\label{cons1} \frac{d[A]}{dt}=-k_1[A]$ On the other hand, the rate of change of [B], $d[B]/dt$, is the rate at which B is created ($k_1[A]$) minus the rate at which it disappears by reacting into C ($k_2[B]$): $\frac{d[B]}{dt}=k_1[A]-k_2[B] \label{cons2}$ This can be read: The rate of change of [B] equals the rate at which [B] appears from A into B, minus the rate at which [B] disappears from B into C.
In each term, the rate is proportional to the reactant of the corresponding step: A for the first reaction, and B for the second step. What about C? Again, it is irrelevant that B was created from A (if you get 1 dollar a day from your friend, you don’t care if she got it from her parents, you still get 1 dollar per day). The rate at which C appears is proportional to the reactant in the second step: B. Therefore: $\frac{d[C]}{dt}=k_2[B] \label{cons3}$ The last three equations form a system of differential equations that need to be solved considering the initial conditions of the problem (e.g. initially we have A but not B or C). We’ll solve this problem in a moment, but we still need to discuss a few issues related to how we write the differential equations that describe a particular mechanism. Imagine that we are interested in $2 A + B \overset{k}{\rightarrow}3 C \nonumber$ We know that the rate of a reaction is defined as the change in concentration with time...but which concentration? Is it $d[A]/dt$? or $d[B]/dt$? or $d[C]/dt$? These are all different because 3 molecules of C are created each time 1 of B and 2 of A disappear. Which one should we use? Because 2 of A disappear every time 1 of B disappears: $d[A]/dt= 2 d[B]/dt$. Now, considering that rates are positive quantities, and that the derivatives for the reactants, $d[A]/dt$ and $d[B]/dt$, are negative: $r=-\frac{1}{2}\frac{d[A]}{dt}=-\frac{d[B]}{dt}=\frac{1}{3}\frac{d[C]}{dt}$ This example shows how to deal with the stoichiometric coefficients of the reaction. Note that in all our examples we assume that the reactions proceed as written, without any ‘hidden’ intermediate steps. First order reactions We have covered enough background, so we can start solving the mechanisms we introduced. Let’s start with the easiest one (Equations \ref{1order}, \ref{eq:1st} and \ref{mass}): $A \overset{k}{\rightarrow}B \nonumber$ $r=-\frac{d[A]}{dt}=\frac{d[B]}{dt}=k[A] \nonumber$ $[A](t) + [B](t) = [A]_0 + [B]_0 \nonumber$ This mechanism is called a first order reaction because the rate is proportional to the first power of the concentration of reactant. For a second-order reaction, the rate is proportional to the square of the concentration of reactant (see Problem $4.3$). Let’s start by finding $[A](t)$ from $-\frac{d[A]}{dt}=k[A]$. We’ll then obtain $[B](t)$ from the mass balance. This is a very simple differential equation because it is separable: $-\frac{d[A]}{dt}=k[A] \nonumber$ $\frac{d[A]}{[A]}=-kdt \nonumber$ We integrate both sides of the equation, and combine the two integration constants into one: $\ln{[A]}=-kt+c \nonumber$ We need to solve for [A]: $[A]=e^{-kt+c}=e^ce^{-kt}=c_2e^{-kt} \nonumber$ This is the general solution of the problem. Let’s assume we are given the following initial conditions: [A]$(t=0)=[A]_0$, [B]$(t=0)=0.$ We’ll use this information to find the arbitrary constant $c_2$: $[A](t=0)=c_2e^{0}=c_2 \nonumber$ Therefore, the particular solution is: $\label{eq:mdecay} [A](t)=[A]_0e^{-kt}$ What about [B]? From the mass balance (using $[B]_0=0$), $[B] = [A]_0 + [B]_0 - [A] = [A]_0 - [A]_0e^{-kt}= [A]_0\left(1-e^{-kt}\right)$. Figure $4$ shows three examples of decays with different rate constants. We can calculate the half-life of the reaction ($t_{1/2}$), defined as the time required for half the initial concentration of A to react.
From Equation \ref{eq:mdecay}: $\frac{[A](t)}{[A]_0}=e^{-kt} \nonumber$ When $t=t_{1/2}$, $\frac{1}{2}=e^{-kt_{1/2}} \nonumber$ $\ln{1/2}=-kt_{1/2}\rightarrow \ln{2}=kt_{1/2} \nonumber$ $t_{1/2}=\frac{\ln{2}}{k} \nonumber$ Note that in this case, the half-life does not depend on the initial concentration of A. This will not be the case for other types of mechanisms. Also, notice that we have already covered the concept of half-life in Chapter 1 (see Figure $1.3.1$), so this might be a good time to read that section again and refresh what we have already learned about sketching exponential decays. In physical chemistry, scientists often talk about the ‘relaxation time’ instead of the half-life. The relaxation time $\tau$ for a decay of the form $a e^{-b t}$ is $1/b$, so in this case, the relaxation time is simply $1/k$. Notice that the relaxation time has units of time, and it represents the time at which the concentration has decayed to $e^{-1}$ of its original value: $[A]=[A]_0e^{-t/\tau}\rightarrow \frac{[A]}{[A]_0}=e^{-t/\tau} \xrightarrow{t=\tau}\frac{[A]}{[A]_0}=e^{-1}\approx0.37 \nonumber$ The half-life and relaxation time are compared in Figure $5$ for a reaction with $k=0.5 s^{-1}$. Consecutive First Order Processes We will now analyze a more complex mechanism, which involves the formation of an intermediate species (B): $A \overset{k_1}{\rightarrow}B\overset{k_2}{\rightarrow}C \nonumber$ which is mathematically described by Equations \ref{cons1}, \ref{cons2} and \ref{cons3}. Let’s assume that initially the concentration of A is [A]$_0$, and the concentrations of B and C are zero. In addition, we can write a mass balance, which for these initial conditions is expressed as: $[A](t)+[B](t)+[C](t)=[A]_0 \nonumber$ Let’s summarize the equations we have: $\frac{d[A]}{dt}=-k_1[A]\label{consq1}$ $\frac{d[B]}{dt}=k_1[A]-k_2[B]\label{consq2}$ $\frac{d[C]}{dt}=k_2[B]\label{consq3}$ $[A]+[B]+[C]=[A]_0\label{consq4}$ $[A](t=0)=[A]_0\label{consq5}$ $[B](t=0)=[C](t=0)=0\label{consq6}$ Note that Equation \ref{consq4} is not independent of Equations \ref{consq1}-\ref{consq3}. If you take the derivative of Equation \ref{consq4} you get $d[A]/dt+d[B]/dt+d[C]/dt=0$, which is the same as you get if you add Equations \ref{consq1}-\ref{consq3}. This means that Equations \ref{consq1}-\ref{consq4} are not all independent, and three of them are enough for us to solve the problem. As you will see, the mass balance (Equation \ref{consq4}) will give us a very easy way of solving for [C] once we have [A] and [B], so we will use it instead of Equation \ref{consq3}. We need to solve the system of Equations \ref{consq1}-\ref{consq6}, and although there are methods to solve systems of differential equations (e.g. using linear algebra), this one is easy enough that it can be solved with what we learned so far. This is because not all equations contain all variables. In particular, Equation \ref{consq1} is a simple separable equation with dependent variable [A], which can be solved independently of [B] and [C]. We in fact just solved this equation in the First Order Reactions section, so let’s write down the result: $\label{eq:a(t)} [A](t)=[A]_0e^{-k_1t}$ Equation \ref{consq2} contains two dependent variables, but luckily we just obtained an expression for one of them. We can now re-write Equation \ref{consq2} as: $\label{consq7} \frac{d[B]}{dt}=k_1[A]_0e^{-k_1t}-k_2[B]$ Equation \ref{consq7} contains only one dependent variable, [B], one independent variable, $t$, and three constants: $k_1$, $k_2$ and $[A]_0$.
This is therefore an ordinary differential equation, and if it is either separable or linear, we will be able to solve it with the techniques we learned in this chapter. Recall Equation \ref{sep}, and verify that Equation \ref{consq7} cannot be separated as $\frac{d[B]}{dt}=\frac{g([B])}{h(t)} \nonumber$ Equation \ref{consq7} is not separable. Is it linear? Recall Equation \ref{linear} and check if you can write this equation as $\frac{d[B]}{dt}+p(t) [B]=q(t)$. We in fact can: $\label{consq8} \frac{d[B]}{dt}+k_2[B]=k_1[A]_0e^{-k_1t}$ Let’s use the list of steps delineated in Section 4.2. We need to calculate the integrating factor, $e^{\int p(t)dt}$, which in this case is $e^{ \int k_2dt }=e^{k_2t}$. We then multiply Equation \ref{consq8} by the integrating factor: $\frac{d[B]}{dt}e^{k_2t}+k_2[B]e^{k_2t}=k_1[A]_0e^{-k_1t}e^{k_2t}=k_1[A]_0e^{(k_2-k_1)t} \nonumber$ In the next step, we need to recognize that the left-hand side of the equation is the derivative of the product of the dependent variable and the integrating factor: $\frac{d}{dt}\left( [B]e^{k_2t}\right)=k_1[A]_0e^{(k_2-k_1)t} \nonumber$ We then take ‘$dt$’ to the right side of the equation and integrate both sides: $\int d \left( [B]e^{k_2t}\right)=\int k_1[A]_0e^{(k_2-k_1)t}dt \nonumber$ $[B]e^{k_2t}=\frac{1}{k_2-k_1} k_1[A]_0e^{(k_2-k_1)t}+c \nonumber$ $[B]=\frac{1}{k_2-k_1} k_1[A]_0\frac{e^{(k_2-k_1)t}}{e^{k_2t}}+\frac{c}{e^{k_2t}} \nonumber$ $[B]=\frac{k_1}{k_2-k_1} [A]_0e^{-k_1t}+ce^{-k_2t} \nonumber$ We have an arbitrary constant because this is a first order differential equation. Let’s calculate $c$ using the initial condition $[B](t=0)=0$: $0=\frac{k_1}{k_2-k_1} [A]_0e^{-k_1 0}+ce^{-k_2 0}=\frac{k_1}{k_2-k_1} [A]_0+c \nonumber$ $c=-\frac{k_1}{k_2-k_1} [A]_0 \nonumber$ And therefore, $[B]=\frac{k_1}{k_2-k_1} [A]_0e^{-k_1t}-\frac{k_1}{k_2-k_1} [A]_0e^{-k_2t} \nonumber$ $\label{eq:b(t)} [B]=\frac{k_1}{k_2-k_1} [A]_0\left(e^{-k_1t}-e^{-k_2t}\right)$ Before moving on, notice that we have assumed that $k_1\neq k_2$. We were not explicit, but we performed the integration with this assumption. If $k_1=k_2$ the exponential term becomes 1, which is not a function of $t$. In this case, the integral will obviously be different, so our answer assumes $k_1\neq k_2$. This is good news, since otherwise we would need to worry about the denominator of Equation \ref{eq:b(t)} being zero. You will solve the case $k_1= k_2$ in Problem 4.4. Now that we have [A] and [B], we can get the expression for [C]. We could use Equation \ref{consq3}: $\frac{d[C]}{dt}=k_2[B]=\frac{k_2k_1}{k_2-k_1} [A]_0\left(e^{-k_1t}-e^{-k_2t}\right) \nonumber$ This is not too difficult because the equation is separable. However, it is easier to get [C] from the mass balance, Equation \ref{consq4}: $[C]=[A]_0-[A]-[B] \nonumber$ Plugging in the answers we got for [A] and [B]: $[C]=[A]_0-[A]_0e^{-k_1t}-\frac{k_1}{k_2-k_1} [A]_0\left(e^{-k_1t}-e^{-k_2t}\right) \nonumber$ $\label{eq:c(t)} [C]=[A]_0\left[1-e^{-k_1t}-\frac{k_1}{k_2-k_1}\left(e^{-k_1t}-e^{-k_2t}\right)\right]$ Equations \ref{eq:a(t)}, \ref{eq:b(t)}, \ref{eq:c(t)} are the solutions we were looking for. If we had the values of $k_1$ and $k_2$ we could plot $[A](t)/[A]_0$, $[B](t)/[A]_0$ and $[C](t)/[A]_0$ and see how the three species evolve with time. If we had $[A]_0$ we could plot the actual concentrations, but notice that this does not add too much, because it just re-scales the $y$-axis but does not change the shape of the curves.
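If you want to inspect these curves numerically before looking at the figure, the sketch below (pure Python, not part of the original text; the rate constants are those used in Figure $6$) evaluates Equations \ref{eq:a(t)}, \ref{eq:b(t)} and \ref{eq:c(t)} at a few representative times:

```python
# Fractional concentrations for A -> B -> C with k1 = 0.1 s^-1, k2 = 0.5 s^-1.
import math

k1, k2 = 0.1, 0.5           # s^-1; the expressions below assume k1 != k2

def profiles(t):
    a = math.exp(-k1 * t)                                     # [A]/[A]0
    b = k1 / (k2 - k1) * (math.exp(-k1*t) - math.exp(-k2*t))  # [B]/[A]0
    c = 1.0 - a - b                                           # mass balance
    return a, b, c

for t in (0, 1, 2, 5, 10, 30):
    a, b, c = profiles(t)
    print(f"t = {t:>2} s: [A]/[A]0 = {a:.3f}, [B]/[A]0 = {b:.3f}, [C]/[A]0 = {c:.3f}")
```

Dividing by $[A]_0$ only re-scales the $y$-axis, which is why the fractional concentrations are enough to see the shapes of the three curves, including the maximum in [B] and the lag phase in [C].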
Figure $6$ shows the concentration profiles for a reaction with $k_1=0.1 s^{-1}$ and $k_2=0.5 s^{-1}$. Notice that because B is an intermediate, its concentration first increases, but then decreases as B is converted into C. The product C has a ‘lag phase’, because we need to wait until enough B is formed before we can see the concentration of C increase (first couple of seconds in this example). As you will see after solving your homework problems, the time at which the intermediate (B) achieves its maximum concentration depends on both $k_1$ and $k_2$. Reversible first order reactions So far we have discussed irreversible reactions. Yet, we know that many reactions are reversible, meaning that the reactant and product exist in equilibrium: $A \xrightleftharpoons[k_2]{k_1} B \nonumber$ The rate of change of [A], $\frac{d[A]}{dt}$, is the rate at which A appears ($k_2[B]$) minus the rate at which A disappears ($k_1[A]$): $\frac{d[A]}{dt}=k_2[B]-k_1[A] \nonumber$ We cannot solve this equation as it is, because it has two dependent variables, [A] and [B]. However, we can write [B] in terms of [A], or [A] in terms of [B], by using the mass balance: $[A](t)+[B](t)=[A]_0+[B]_0 \nonumber$ $\frac{d[A]}{dt}=k_2\left( [A]_0+[B]_0-[A]\right)-k_1[A] \nonumber$ $\frac{d[A]}{dt}=k_2\left( [A]_0+[B]_0\right)-\left(k_2+k_1\right)[A] \nonumber$ This is an ordinary, separable, first order differential equation, so it can be solved by direct integration. You will solve this problem in your homework, so let’s skip the steps and jump to the answer: $\label{eq:aeq} [A]=\frac{k_2\left ( [A]_0+[B]_0 \right )}{k_1+k_2}+\frac{k_1 [A]_0-k_2[B]_0 }{k_1+k_2}e^{-\left (k_1+k_2 \right )t}$ This is a reversible reaction, so if we wait long enough it will reach equilibrium. The concentration of [A] in equilibrium, [A]$_{eq}$, is the limit of the previous expression when $t\rightarrow \infty$. Because $e^{-x}\rightarrow 0$ when $x\rightarrow \infty$: $[A]_{eq}=\frac{k_2\left ( [A]_0+[B]_0 \right )}{k_1+k_2} \nonumber$ and we can re-write Equation \ref{eq:aeq} as $\label{eq:aeq2} [A]=[A]_{eq}+\left([A]_0-[A]_{eq}\right)e^{-\left (k_1+k_2 \right )t}$ As you will do in your homework, we can calculate $[B](t)$ from the mass balance as $[B](t)=[A]_0+[B]_0-[A](t)$. Equation \ref{eq:aeq2} is not too different from Equation \ref{eq:mdecay}. In the case of an irreversible reaction (Equation \ref{eq:mdecay}), [A] decays from an initial value [A]$_0$ to a final value $[A]_{t \rightarrow \infty}=0$ with a relaxation time $\tau=1/k$. For the reversible reaction, $[A]-[A]_{eq}$ decays from an initial value $[A]_0-[A]_{eq}$ to a final value $[A]_{t \rightarrow \infty}-[A]_{eq}=0$ with a relaxation time $\tau=(k_1+k_2)^{-1}$. This last statement is not trivial! It says that the rate at which a reaction approaches equilibrium depends on the sum of the forward and backward rate constants. In your homework you will be asked to prove that the ratio of the concentrations in equilibrium, $[B]_{eq}/[A]_{eq}$, is equal to the ratio of the forward and backward rate constants. In addition, from your introductory chemistry courses you should know that the equilibrium constant of a reaction ($K_{eq}$) is the ratio of the equilibrium concentrations of product and reactant. Therefore: $\label{eq:eql} \frac{[B]_{eq}}{[A]_{eq}}=K_{eq}=\frac{k_1}{k_2}$ This means that we can calculate the ratio of $k_1$ and $k_2$ from the concentrations of A and B we observe once equilibrium has been reached (i.e. once $\frac{d[A]}{dt}=\frac{d[B]}{dt}=0$).
At the same time, we can obtain the sum of $k_1$ and $k_2$ from the relaxation time of the process. If we have the sum and the ratio, we can calculate both $k_1$ and $k_2$. This all makes sense, but it requires that we can watch the reaction from an initial state outside equilibrium. If the system is already in equilibrium, $[A]_0=[A]_{eq}$, and $d[A]/dt=0$ at all times. A plot of [A]$(t)$ will look flat, and we will not be able to extract the relaxation time of the reaction. If, however, we have an experimental way of shifting the equilibrium so $[A]_0\neq[A]_{eq}$, we can measure the relaxation time by observing how the reaction returns to its equilibrium position. Advanced topic: How can we shift the equilibrium? One way is to produce a very quick change in the temperature of the system. The equilibrium constant of a reaction usually depends on temperature, so if a system is equilibrated at a given temperature (say $25^{\circ}C$), and we suddenly increase the temperature (e.g. to $26^{\circ}C$), the reaction will suddenly be away from its equilibrium condition at the new temperature. We can watch the system relax to the equilibrium concentrations at $26^{\circ}C$, and measure the relaxation time. This will allow us to calculate the rate constants at $26^{\circ}C$. Example $1$ Advanced topic The following figure illustrates the experimental procedure known as “T-jump”, in which a sudden change in temperature is used to shift the position of a reversible reaction out of equilibrium. The experiment starts at a temperature $T_1$, and the temperature is increased to $T_2$ instantaneously at time $t=0$. Because the equilibrium constant at $T_2$ is different from the equilibrium constant at $T_1$, the system needs to relax to the new equilibrium state. From the graph below estimate to the best of your abilities $K_{eq} (T_1)$, $K_{eq} (T_2)$, and the rate constants $k_1$, and $k_2$ at $T_2$. $A \xrightleftharpoons[k_2]{k_1} B \nonumber$ Solution At $T_1$, $[A]_{eq}=0.1 M$ and $[B]_{eq}=0.2 M$. The equilibrium constant is $[B]_{eq}/[A]_{eq}=2$. At $T_2$, $[A]_{eq}=0.225 M$ and $[B]_{eq}=0.075 M$. The equilibrium constant is $[B]_{eq}/[A]_{eq}=1/3$. Because $K_{eq}=\frac{k_1}{k_2}$, at $T=T_2$, $\frac{k_1}{k_2}=1/3$. To calculate the relaxation time let’s look at the expression for $[A](t)$ (Equation \ref{eq:aeq2}). $[A]=[A]_{eq}+\left([A]_0-[A]_{eq}\right)e^{-\left (k_1+k_2 \right )t} \nonumber$ When the time equals the relaxation time ($\tau=(k_1+k_2)^{-1}$), $[A](t=\tau)=[A]_{eq}+\left([A]_0-[A]_{eq}\right)e^{-1} \nonumber$ $[A](t=\tau)\approx 0.225M+\left(0.1M-0.225M\right)\times 0.37\approx 0.18M \nonumber$ From the graph, $[A]=0.18 M$ at $t = 2.5s$, and therefore the relaxation time is $\tau=(k_1+k_2)^{-1}= 2.5s$. We therefore have $k_1+k_2=2/5\ s^{-1}$ and $\frac{k_1}{k_2}=1/3$ (i.e. $k_2=3k_1$): $k_1+k_2=k_1+3k_1=4k_1=\frac{2}{5}\ s^{-1}\rightarrow k_1=0.1\ s^{-1},\ k_2=0.3\ s^{-1} \nonumber$ 4.04: Problems Problem $1$ In each case, • Identify the dependent and independent variables. • Determine if the differential equation is separable. • Determine if the differential equation is linear. • Find the general solution. • Find the particular solution using the given initial condition. • Verify that your solution satisfies the differential equation by substitution. 1. $\frac{dy}{dx}=\frac{y+2}{x-3}, y(0)=1$ 2. $x'=e^{x+t}, x'(0)=2$ 3. $\frac{dy}{dx}-\frac{3}{x}y=2x^4, y(1)=1$ 4. $\frac{df}{dt}=\frac{3t^2}{f}, f(2)=4$
5. $h'(t)+2h(t)=4, h(0)=1$ Problem $2$ Consider the reaction $A \overset{k}{\rightarrow}B$. The rate of disappearance of A is proportional to the concentration of A, so: $-\frac{d[A]}{dt}=k[A] \nonumber$ 1) Obtain [A]$(t)$ and [B]$(t)$. 2) Using the definition of half-life $(t_{1/2})$, obtain an expression for $t_{1/2}$ for this mechanism. Your result will be a function of $k$. 3) Sketch $[A](t)$ and $[B](t)$ for the case $k=0.1 s^{-1}$, $[B]_0=0$ and $[A]_0=10^{-3}M$. Remember that you are expected to do this without the help of a calculator. Problem $3$ Consider the reaction $2A \overset{k}{\rightarrow}B$. This mechanism is called a bimolecular reaction, because the reaction involves the collision of two molecules of reactant. In this case, the rate of disappearance of A is proportional to the square of the concentration of A, so: $-\frac{1}{2}\frac{d[A]}{dt}=k[A]^2 \nonumber$ Notice that the rate is proportional to the square of the concentration, so this is a second-order reaction. Assume that the initial concentration of [A] is [A]$_0$, and the initial concentration of [B] is zero. 1. Obtain an expression for [A]$(t)$. 2. Write down a mass balance (a relationship relating [A](t), [B](t), [A]$_0$ and [B]$_0$) and obtain [B]$(t)$. 3. Using the definition of half-life $(t_{1/2})$, obtain an expression for $t_{1/2}$ for this mechanism. Your result will be a function of $k$ and [A]$_0$. Problem $4$ Obtain $[A](t), [B](t),$ and $[C](t)$ for the following mechanism: $A \overset{k}{\rightarrow}B\overset{k}{\rightarrow}C \nonumber$ Assume $[A](0)=[A]_0$, and $[B](0)=[C](0)=0$ Note that this problem is identical to the one solved in Section 4.3 but with $k_1=k_2$. Be sure you identify the step where the two problems become different. Problem $5$ Consider the reaction $A \xrightleftharpoons[k_2]{k_1} B \nonumber$ modeled mathematically by the following ODE $\frac{d[A]}{dt}=k_2[B]-k_1[A] \nonumber$ The constants $k_1$ and $k_2$ represent the kinetic constants in the forward and backward direction respectively, and [A] and [B] represent the molar concentration of A and B. Assume you start with initial concentrations $[A] (t = 0) = [A]_0$ and $[B](t = 0) = [B]_0$. Mass conservation requires that $[A](t) + [B](t) = [A]_0 + [B]_0$ 1. Obtain [A](t) and [B](t) in terms of $k_1, k_2, [A]_0$ and $[B]_0$. 2. Obtain expressions for the concentrations of A and B in equilibrium: $[A]_{eq} =[A](t \rightarrow \infty$) and $[B]_{eq} =[B](t \rightarrow \infty$). 3. Prove that the equilibrium constant of the reaction, $\frac{[B]_{eq}}{[A]_{eq}} = K_{eq}$, can be expressed as $\frac{[B]_{eq}}{[A]_{eq}} = \frac{k_1}{k_2} \nonumber$ 4. Assume that $k_1=1 min^{-1}$, $k_2=\frac{1}{2} min^{-1}$, $[A]_0=0$ and $[B]_0=0.1 M$, calculate $[A]_{eq}$ and $[B]_{eq}$, and sketch $[A](t)$ and $[B](t)$ to the best of your abilities.
Chapter Objectives • Be able to obtain the general solution of any homogeneous second order ODE with constant coefficients. • Be able to obtain particular solutions when initial conditions are given. • Understand how to solve the equation of motion of a pendulum and a spring in non-viscous and viscous media. • Understand how to solve the Schrödinger equation for the one dimensional particle in the box. Obtain the normalized eigenfunctions and the eigenvalues. 05: Second Order Ordinary Differential Equations Solving second order ordinary differential equations is much more complex than solving first order ODEs. We just saw that there is a general method to solve any linear 1st order ODE. Unfortunately, this is not true for higher order ODEs. However, we can solve higher order ODEs if the coefficients are constants: $y''(x)+ k_1 y'(x) + k_2 y(x)+k_3=0 \nonumber$ The equation above is said to be homogeneous if $k_3=0$: $\label{eq:2ndorder} y''(x)+ k_1 y'(x) + k_2 y(x)=0$ It is possible to solve non-homogeneous ODEs, but in this course we will concentrate on the homogeneous cases. Second order linear equations occur in many important applications. For example, the motion of a mass on a spring, and any other simple oscillating system, is described by an equation of the form $m\frac{d^2u}{dt^2}+\gamma\frac{du}{dt}+k u=F(t) \nonumber$ We’ll analyze what the different parts of this equation mean in the examples. The equation above is homogeneous if $F(t)=0$, and because the parameters $m$, $\gamma$ and $k$ represent physical quantities that do not depend on the independent variable, it has constant coefficients. Let’s analyze Equation \ref{eq:2ndorder}, which is linear and homogeneous. This equation will be satisfied by a function whose derivatives are multiples of itself. This is the only way that we will get zero after adding a multiple of the function plus a multiple of its first derivative plus a multiple of the second derivative. You may be tempted to say that $\sin(x)$ satisfies this requirement, but its first derivative is $\cos(x)$, so it will not cancel out with the sine term when added together. The only functions that satisfy this requirement are the exponential functions $e^{\alpha x}$, with first and second derivatives $\alpha e^{\alpha x}$ and $\alpha^2e^{\alpha x}$ respectively. So, let’s assume that the answer we are looking for is an exponential function, $y(x)=e^{\alpha x}$, and let’s plug these expressions back into Equation \ref{eq:2ndorder}: $\alpha^2e^{\alpha x}+ k_1 \alpha e^{\alpha x} + k_2 e^{\alpha x}=0 \nonumber$ $e^{\alpha x}\left(\alpha^2+ k_1 \alpha + k_2 \right)=0 \nonumber$ The above equation tells us that either $e^{\alpha x}$ or $\left(\alpha^2+ k_1 \alpha + k_2 \right)$ is zero. In the first case, this would mean that $x$ is plus or minus infinity (depending on whether $\alpha$ is negative or positive). But this is too restrictive because we want to find a solution that is a function of $x$, so we don’t want to impose restrictions on our independent variable. We therefore consider $\left(\alpha^2+ k_1 \alpha + k_2 \right)=0 \nonumber$ This is a quadratic equation in $\alpha$, which we will call the auxiliary equation. The two roots are found from: $\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2} \nonumber$ This gives two answers, $\alpha_1$ and $\alpha_2$, which means there are at least two different exponential functions that are solutions of the differential equation: $e^{\alpha_1 x}$ and $e^{\alpha_2 x}$.
We will see that any linear combination of these two functions is also a solution, but before continuing, let’s look at a few examples. Notice that the argument of the square root can be positive, negative or zero, depending on the relative values of $k_1$ and $k_2$. This means that $\alpha_{1,2}$ can be imaginary, and the solutions can therefore be complex exponentials. Let’s look at the three situations individually through examples. Case I: $k_1^2-4k_2>0$ In this case, $\sqrt{k_{1}^{2}-4k_2}>0$, and therefore $\alpha_1$ and $\alpha_2$ are both real and different. For example: Find the solution of $y''(x) -5y'(x) +4 y(x) = 0$ subject to initial conditions $y(0)=1$ and $y'(0)=-1$. As we discussed above, we’ll assume the solution is $y(x)=e^{\alpha x}$, and we’ll determine which values of $\alpha$ satisfy this particular differential equation. Let’s replace $y(x), y'(x)$ and $y''(x)$ in the differential equation: $\alpha^2e^{\alpha x}-5 \alpha e^{\alpha x} +4 e^{\alpha x}=0 \nonumber$ $e^{\alpha x}\left(\alpha^2-5 \alpha + 4 \right)=0 \nonumber$ and with the arguments we discussed above: $\left(\alpha^2-5 \alpha +4 \right)=0 \nonumber$ $\alpha_{1,2}=\frac{-(-5)\pm \sqrt{(-5)^{2}-4\times 4}}{2} \nonumber$ from which we obtain $\alpha_1=1$ and $\alpha_2=4$. Therefore, $e^{x}$ and $e^{4x}$ are both solutions to the differential equation. Let’s prove this is true. If $y(x) = e^{4x}$, then $y'(x) = 4e^{4x}$ and $y''(x) = 16e^{4x}$. Substituting these expressions in the differential equation we get $y''(x) - 5y'(x) + 4y(x) = 16e^{4x}-5\times 4e^{4x}+4\times e^{4x}=0 \nonumber$ so $y(x) = e^{4x}$ clearly satisfies the differential equation. You can do the same with $y(x) = e^{x}$ and prove it is also a solution. However, none of these solutions satisfy both initial conditions, so clearly we are not finished. We found two independent solutions to the differential equation, and now we will claim that any linear combination of these two independent solutions ($c_1 y_1(x)+c_2 y_2(x)$) is also a solution. Mathematically, this means that if $y_1(x)$ and $y_2(x)$ are solutions, then $c_1 y_1(x)+c_2 y_2(x)$ is also a solution, where $c_1$ and $c_2$ are constants (i.e. not functions of $x$). Coming back to our example, the claim is that $c_1 e^{4x}+c_2 e^x$ is the general solution of this differential equation. Let’s see if it’s true: \begin{aligned} y(x)=c_1 e^{4x}+c_2 e^x \\ y'(x)=4c_1 e^{4x}+c_2 e^x \\ y''(x)=16c_1 e^{4x}+c_2 e^x \end{aligned} \nonumber Substituting in the differential equation: $y''(x) - 5y'(x) + 4y(x) = 16c_1 e^{4x}+c_2 e^x-5\times \left(4c_1 e^{4x}+c_2 e^x\right)+4\times \left(c_1 e^{4x}+c_2 e^x\right)=0 \nonumber$ so we just proved that the linear combination is also a solution, independently of the values of $c_1$ and $c_2$. It is important to notice that our general solution has now two arbitrary constants, as expected for a second order differential equation. We will determine these constants from the initial conditions to find the particular solution. The general solution is $y(x)=c_1e^{4x}+c_2e^{x}$. Let’s apply the first initial condition: $y(0)=1$. $y(0)=c_1+c_2=1 \nonumber$ This gives a relationship between $c_1$ and $c_2$. The second initial condition is $y'(0)=-1$. $y'(x)=4c_1e^{4x}+c_2e^x\rightarrow y'(0)=4c_1+c_2=-1 \nonumber$ We have two equations with two unknowns that we can solve to get $c_1=-2/3$ and $c_2=5/3$.
The particular solution is then: $y(x)=-\frac{2}{3}e^{4x}+\frac{5}{3}e^x \nonumber$ Case II: $k_1^2-4k_2<0$ In this case, $k_{1}^{2}-4k_2<0$, so $\sqrt{k_{1}^{2}-4k_2}=i \sqrt{-k_{1}^{2}+4k_2}$, where $\sqrt{-k_{1}^{2}+4k_2}$ is a real number. Therefore, in this case, $\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2}=\frac{-k_1\pm i \sqrt{-k_{1}^{2}+4k_2}}{2} \nonumber$ and then the two roots $\alpha_1$ and $\alpha_2$ are complex conjugates. Let’s see how it works with an example. Determine the solution of $y''(x) - 3y'(x) + \frac{9}{2}y(x) = 0$ subject to the initial conditions $y(0)=1$ and $y'(0)=-1$. Following the same methodology we discussed for the previous example, we assume $y(x)=e^{\alpha x}$, and use this expression in the differential equation to obtain a quadratic equation in $\alpha$: $\alpha_{1,2}=\frac{3\pm \sqrt{(-3)^{2}-4\times 9/2}}{2}=\frac{3\pm \sqrt{-9}}{2} \nonumber$ Therefore, $\alpha_1=\frac{3}{2}+\frac{3}{2}i$ and $\alpha_2=\frac{3}{2}-\frac{3}{2}i$, which are complex conjugates. The general solution is: \begin{aligned} y(x)=c_1e^{(\frac{3}{2}+\frac{3}{2}i)x}+c_2e^{(\frac{3}{2}-\frac{3}{2}i)x} \\ y(x)=c_1e^{\frac{3}{2}x}e^{\frac{3}{2}i x}+c_2e^{\frac{3}{2}x}e^{-\frac{3}{2}i x} \\ y(x)=e^{\frac{3}{2}x}\left(c_1e^{\frac{3}{2}i x}+c_2e^{-\frac{3}{2}i x}\right) \end{aligned} \nonumber This expression can be simplified using Euler’s formula: $e^{\pm ix}=\cos(x) \pm i \sin(x)$ (Equation $2.2.1$). $y(x)=e^{\frac{3}{2}x}\left[c_1\left(\cos(\frac{3}{2}x)+i \sin(\frac{3}{2}x) \right)+c_2\left(\cos(\frac{3}{2}x)-i \sin(\frac{3}{2}x) \right)\right] \nonumber$ Grouping the sines and cosines together: $y(x)=e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)(c_1+c_2)+i \sin(\frac{3}{2}x)(c_1-c_2) \right] \nonumber$ Renaming the constants $c_1+c_2=a$ and $i(c_1-c_2)=b$ $y(x)=e^{\frac{3}{2}x}\left[a\cos(\frac{3}{2}x)+b \sin(\frac{3}{2}x)\right] \nonumber$ Our general solution has two arbitrary constants, as expected from a second order ODE. As usual, we’ll use our initial conditions to determine their values. The first initial condition is $y(0)=1$: $y(0)=a=1$ (since $e^0=1$, $\cos(0)=1$ and $\sin(0)=0$). So far, we have $y(x)=e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)+b \sin(\frac{3}{2}x)\right] \nonumber$ The second initial condition is $y'(0)=-1$ $y'(x)=e^{\frac{3}{2}x}\left[-\sin(\frac{3}{2}x)+b \cos(\frac{3}{2}x)\right]\frac{3}{2}+\frac{3}{2}e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)+b \sin(\frac{3}{2}x)\right] \nonumber$ $y'(0)=\frac{3}{2}b +\frac{3}{2}=-1\rightarrow b=-\frac{5}{3} \nonumber$ The particular solution is, therefore: $y(x)=e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)-\frac{5}{3} \sin(\frac{3}{2}x)\right] \nonumber$ Notice that the function is real even though the roots were complex numbers. Case III: $k_1^2-4k_2=0$ The last case we will analyze is when $k_1^2-4k_2=0$, which results in $\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2}=\frac{-k_1}{2} \nonumber$ Therefore, the two roots are real, and identical. This means that $e^{-k_1 x/2}$ is a solution, but this creates a problem because we need another independent solution to create the general solution from a linear combination, and we have only one. The second solution can be found using a method called reduction of order. We will not discuss the method in detail, although you can see how it is used in this case at the end of the video http://tinyurl.com/mpl69ju.
The application of the method of reduction of order to this differential equation gives $(a+bx)e^{-k_1 x/2}$ as the general solution. The constants $a$ and $b$ are arbitrary constants that we will determine from the initial/boundary conditions. Notice that the exponential term is the one we found using the ’standard’ procedure. Let’s see how it works with an example. Determine the solution of $y''(x) - 8y'(x) + 16y(x) = 0$ subject to initial conditions $y(0)=1$ and $y'(0)= -1$. We follow the procedure of the previous examples and calculate the two roots: $\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2}=\frac{8\pm \sqrt{8^{2}-4\times 16}}{2}=4 \nonumber$ Therefore, $e^{4x}$ is a solution, but we don’t have another one to create the linear combination we need. The method of reduction of order gives: $y(x)=(a+bx)e^{4 x} \nonumber$ Since we accepted the result of the method of reduction of order without seeing the derivation, let’s at least show that this is in fact a solution. The first and second derivatives are: $y'(x)=be^{4 x}+4(a+bx)e^{4x} \nonumber$ $y''(x)=4be^{4 x}+4be^{4x}+16(a+bx)e^{4x} \nonumber$ Substituting these expressions in $y''(x)-8y'(x)+16y(x)=0$: $\left[4be^{4 x}+4be^{4x}+16(a+bx)e^{4x}\right]-8\left[be^{4 x}+4(a+bx)e^{4x}\right]+16\left[(a+bx)e^{4 x}\right]=0 \nonumber$ Because all these terms cancel out to give zero, the function $y(x)=(a+bx)e^{4 x}$ is indeed a solution of the differential equation. Coming back to our problem, we need to determine $a$ and $b$ from the initial conditions. Let’s start with $y(0)=1$: $y(0)=a=1 \nonumber$ So far, we have $y(x)=(1+bx)e^{4 x}$, and therefore $y'(x)=be^{4 x}+4(1+bx)e^{4x}$. The other initial condition is $y'(0)=-1$: $y'(0)=b+4=-1\rightarrow b=-5 \nonumber$ The particular solution, therefore, is $y(x)=(1-5x)e^{4x}$. This video contains an example of each of the three cases discussed above as well as the application of the method of reduction of order to case III. Remember that you can pause, rewind and fast forward so you can watch the videos at your own pace. http://tinyurl.com/mpl69ju
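Since the three worked examples share the same initial conditions, $y(0)=1$ and $y'(0)=-1$, they are easy to cross-check numerically. This is a minimal added sketch (assuming NumPy and SciPy are available) that integrates each ODE with scipy.integrate.solve_ivp and compares the result with the particular solutions derived above:

```python
import numpy as np
from scipy.integrate import solve_ivp

cases = {
    # name: (k1, k2, analytic particular solution with y(0)=1, y'(0)=-1)
    "I":   (-5, 4,   lambda x: -(2/3)*np.exp(4*x) + (5/3)*np.exp(x)),
    "II":  (-3, 4.5, lambda x: np.exp(1.5*x)*(np.cos(1.5*x) - (5/3)*np.sin(1.5*x))),
    "III": (-8, 16,  lambda x: (1 - 5*x)*np.exp(4*x)),
}

x_eval = np.linspace(0, 1, 50)
for name, (k1, k2, exact) in cases.items():
    # rewrite y'' + k1 y' + k2 y = 0 as a first order system in (y, y')
    rhs = lambda x, u: [u[1], -k1*u[1] - k2*u[0]]
    sol = solve_ivp(rhs, (0, 1), [1, -1], t_eval=x_eval, rtol=1e-10, atol=1e-12)
    err = np.max(np.abs(sol.y[0] - exact(x_eval)))
    print(f"Case {name}: max |numeric - analytic| = {err:.2e}")
```

The three reported errors should be tiny, which is a good sanity check on the algebra above.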
Note This section is also available in video format: http://tinyurl.com/kq7mrcq The motion of a frictionless pendulum We will now use what we learned so far to solve a problem of relevance in the physical sciences. We’ll start with the problem of the pendulum, and as we already discussed in Section 3.2, even though the pendulum is not particularly interesting as an application in chemistry, the topic of oscillations is of great interest because atoms in molecules vibrate around their bonds. The problem of the pendulum was introduced in Figure 3.5, which is reprinted again below: If you took a university physics course, you may recognize that Newton’s second law yields: $\label{eqn2} ml\frac{d^2\theta}{dt^2}+mg\sin{\theta}=0$ This, unfortunately, is a non-linear differential equation (the dependent variable, $\theta$, appears as the argument of a transcendental function). As we discussed in Section 3.2, this ODE has no analytical solution. It is possible to solve this equation numerically (and you will do it in the lab), but we cannot write down a closed-form expression for its solution. We also discussed that we can obtain analytical solutions if we assume that the angle $\theta$ is small at all times. This means that the solution we get is valid only if the pendulum oscillates very close to the line normal to the ceiling. You may be thinking that studying such a system is boring and useless, but again, as we discussed in Section 3.2, for most molecules at moderate temperatures the displacement of the atoms around their equilibrium position is very small. That is why studying oscillations of systems close to equilibrium makes sense for a chemist. We already discussed that if $\theta<<1$, then $\sin{\theta}\approx\theta$ (see Figure 3.4). Equation \ref{eqn2} can then be simplified to: $\frac{d^2\theta}{dt^2}+\frac{g}{l}\theta=0$ This equation is linear in $\theta$, is homogeneous, and has constant coefficients ($g$ is the acceleration of gravity and $l$ the length of the rod). The auxiliary equation of this ODE is: $\alpha^2+\frac{g}{l}=0 \nonumber$ and therefore, $\alpha=\pm i \sqrt{\frac{g}{l}} \nonumber$ The general solution is $\theta(t)=c_1 e^{\alpha_1t}+c_2 e^{\alpha_2t} \nonumber$ $\theta(t)=c_1 e^{i(g/l)^{1/2}t}+c_2 e^{-i(g/l)^{1/2}t} \nonumber$ We will get the values of the arbitrary constants from the initial conditions. Let’s assume that at time zero the value of $\theta$ was $\theta_0<<1$, and the value of $d\theta/dt$, which is a measure of the velocity, was $\theta'(0)=0$. Physically, it means that at time zero we are holding the pendulum still. At this point we can either use Euler relationships to simplify our result into a cosine and a sine, or use the initial conditions and use Euler’s relationship later. Either way will work, and how you choose to proceed is a matter of personal taste.
Let’s apply the initial conditions now: $\theta(t)=c_1 e^{i(g/l)^{1/2}t}+c_2 e^{-i(g/l)^{1/2}t}\rightarrow\theta(0)=c_1+c_2=\theta_0 \nonumber$ $\theta'(t)=c_1i(g/l)^{1/2} e^{i(g/l)^{1/2}t}-c_2i(g/l)^{1/2} e^{-i(g/l)^{1/2}t} \nonumber$ $\theta'(0)=c_1i(g/l)^{1/2}-c_2i(g/l)^{1/2}=0\rightarrow c_1=c_2 \nonumber$ Therefore, $c_1=c_2=\theta_0/2$, and our particular solution is: $\theta(t)=\frac{\theta_0}{2}\left(e^{i(g/l)^{1/2}t}+e^{-i(g/l)^{1/2}t}\right) \nonumber$ From Euler’s relationship we know that $e^{ix}+e^{-ix}=2\cos{x}$, so $\label{eqn3} \theta(t)=\theta_0\cos{\left(\left(\frac{g}{l}\right) ^{1/2}t\right)}$ This is of course the familiar periodic function that you saw in your physics course. Remember that we got here by assuming $\theta<<1$ at all times. As we saw in Section 1.4, the period of the function $\cos(nx)$ is $2\pi/n$. The period of the function $\theta_0\cos{\left(\left(\frac{g}{l}\right) ^{1/2}t\right)}$ is therefore $P=2\pi \left(\frac{l}{g} \right)^{1/2}$. The period has units of time, and it tells us the time that it takes for the pendulum to complete a whole cycle (see Figure $2$). We can also calculate the frequency of the motion, which is just the reciprocal of the period. If the period is the amount of time you need to wait to complete a full cycle, the reciprocal is the number of cycles per unit time. For example, if a pendulum with $l=0.1$ m takes 0.63 seconds to complete a cycle, we get 1.58 cycles per second. The frequency has units of reciprocal time. The fact that pendula with different lengths have different periods was used by a very creative mind to produce the beautiful display showcased at the lobby of the PSF building at ASU (right across from the elevators). There are some videos on YouTube demonstrating the idea (search for pendulum waves), but the one in the physics department at ASU is much more impressive, so go and check it out if you haven’t done so yet. The pendulum in a viscous medium The problem we just saw assumed that there was no friction, so the pendulum will oscillate forever without changing the amplitude. Let’s make the problem more realistic from the physical point of view and add a term that accounts for frictional resistance. The force due to friction is usually proportional to the velocity, so this new force introduces a term that depends on the first derivative of $\theta$: $\frac{d^2\theta}{dt^2}+\gamma \frac{d\theta}{dt}+\frac{g}{l}\theta=0$ The constant $\gamma$ depends on the medium, and it will be larger in e.g. water (more friction) than in air (less friction). The auxiliary equation is now $\alpha^2+\gamma \alpha+\frac{g}{l}=0 \nonumber$ and the two roots are: $\alpha_{1,2}=\frac{-\gamma \pm\sqrt{\gamma^2-4g/l}}{2} \nonumber$ and we see that the result will depend on the relative values of $\gamma^2$ and $4g/l$. We will analyze the case $\gamma^2<4g/l$ first (low friction regime). It is useful to always think about what one expects before doing any math. Think about the pendulum without friction, and imagine you do the same experiment in air (small friction). What do you think the plot of $\theta(t)$ vs $t$ would look like? Coming back to the math, let’s call $a=\sqrt{4g/l-\gamma^2}$ to simplify notation. In the low friction case, $a$ will be a real number.
The two roots will therefore be: $\alpha_{1,2}=\frac{-\gamma \pm ia}{2} \nonumber$ and the general solution will be $\theta(t)=c_1 e^{\alpha_1 t}+c_2 e^{\alpha_2 t} \nonumber$ $\theta(t)=c_1 e^{-\gamma t/2}e^{ia t/2}+c_2 e^{-\gamma t/2}e^{-ia t/2} \nonumber$ $\theta(t)=e^{-\gamma t/2}\left(c_1 e^{ia t/2}+c_2 e^{-ia t/2}\right) \nonumber$ At this point we can either use the initial conditions and use Euler’s relationship later, or we can use Euler’s equations now and the initial conditions later. Either way should work. $\theta(t)=e^{-\gamma t/2}\left[c_1 \cos{(ta/2)} +c_1 i \sin{(ta/2)}+c_2 \cos{(ta/2)} -c_2 i \sin{(ta/2)}\right] \nonumber$ $\theta(t)=e^{-\gamma t/2}\left[(c_1+c_2) \cos{(ta/2)} +(c_1-c_2) i \sin{(ta/2)}\right] \nonumber$ and grouping and re-naming constants: $\theta(t)=e^{-\gamma t/2}\left[c_3 \cos{(ta/2)} +c_4\sin{(ta/2)}\right] \nonumber$ We will now evaluate $c_3$ and $c_4$ from the initial conditions. Let’s assume again that $\theta(0)=\theta_0$ and $\theta'(0)=0$. Evaluating the previous equation at $t=0$: $\theta(0)=c_3=\theta_0 \nonumber$ so we have $\theta(t)=e^{-\gamma t/2}\left[\theta_0 \cos{(\frac{ta}{2})} +c_4\sin{(\frac{ta}{2})}\right] \nonumber$ $\theta'(t)=e^{-\gamma t/2}\left[-\frac{\theta_0 a}{2} \sin{(\frac{ta}{2})} +\frac{c_4 a}{2}\cos{(\frac{ta}{2})}\right]-\frac{\gamma}{2}e^{-\gamma t/2}\left[\theta_0 \cos{(\frac{ta}{2})} +c_4\sin{(\frac{ta}{2})}\right] \nonumber$ $\theta'(0)=c_4\frac{a}{2}-\theta_0\frac{\gamma}{2}=0 \nonumber$ $c_4=\frac{\gamma \theta_0}{a} \nonumber$ and therefore $\theta(t)=e^{-\gamma t/2}\left[\theta_0 \cos{\left(\frac{ta}{2}\right)} +\frac{\gamma \theta_0}{a}\sin{\left(\frac{ta}{2}\right)}\right] \nonumber$ $\theta(t)=\theta_0 e^{-\gamma t/2}\left[ \cos{\left(\frac{ta}{2}\right)} +\frac{\gamma }{a}\sin{\left(\frac{ta}{2}\right)}\right]$ If everything went well this equation should simplify to Equation \ref{eqn3} for the case $\gamma=0$. Recall that $a=\sqrt{4g/l-\gamma^2}$, so if $\gamma=0$: $\theta(t)=\theta_0 \left[ \cos{\left(\frac{ta}{2}\right)}\right] \nonumber$ $\theta(t)=\theta_0 \left[ \cos{\left(\sqrt{\frac{g}{l}}t\right)}\right] \nonumber$ This of course does not prove that our solution is correct, but it is always good to see that we recover a known equation for a particular case (in this case $\gamma=0$). Figure $3$ shows $\theta(t)/\theta_0$ for three cases with $g/l=1$ (i.e. a cord of length 9.8 m) and increasing values of friction. Remember that we are assuming that $\theta$ is small. So far we have analyzed the case $\gamma^2<4g/l$. As we increase the friction coefficient, we will reach the point where $\gamma^2=4g/l$. Look at Figure $3$, and think about what you would see when this happens. Mathematically, we will have a repeated root: $\alpha_{1,2}=\frac{-\gamma \pm\sqrt{\gamma^2-4g/l}}{2}=-\frac{\gamma }{2} \nonumber$ so $\theta(t)=e^{-\gamma t/2}$ is a solution, but we need to find an independent solution through the method of reduction of order.
We know that the general solution will be (Section 2.2): $\theta(t)=(c_1+c_2 t)e^{-\gamma t/2} \nonumber$ The arbitrary constants will be calculated from the initial conditions: $\theta(0)=(c_1)=\theta_0 \nonumber$ $\theta'(t)=-\frac{\gamma}{2}(\theta_0+c_2 t)e^{-\gamma t/2}+c_2e^{-\gamma t/2} \nonumber$ $\theta'(0)=-\frac{\gamma}{2}(\theta_0)+c_2=0 \nonumber$ $c_2=\frac{\gamma}{2}\theta_0 \nonumber$ Therefore, $\theta(t)=(c_1+c_2 t)e^{-\gamma t/2} \nonumber$ $\theta(t)=(\theta_0+\frac{\theta_0 \gamma}{2} t)e^{-\gamma t/2}=\theta_0(1+\frac{\gamma}{2} t)e^{-\gamma t/2} \nonumber$ The behavior of $\theta(t)$ is shown in Figure $4$ in black. This regime is called critically damped because it represents the point where the oscillations no longer occur as $\gamma$ increases. If we continue increasing the frictional coefficient, the pendulum will approach $\theta=0$ more and more slowly, but will never cross to the other side ($\theta<0$). Let’s find the mathematical expression for $\theta(t)$ for the case $\gamma^2>4g/l$. The two roots are now different and real: $\alpha_{1,2}=\frac{-\gamma \pm\sqrt{\gamma^2-4g/l}}{2} \nonumber$ and the general solution is therefore $\theta(t)=c_1e^{\alpha_1t}+c_2e^{\alpha_2t} \nonumber$ $\theta(t)=c_1e^{(-\gamma+b)t/2}+c_2e^{(-\gamma-b)t/2}; \; b=\sqrt{\gamma^2-4g/l} \nonumber$ The first initial condition is $\theta(0)=\theta_0$: $\theta(0)=c_1+c_2=\theta_0 \nonumber$ The second initial condition is $\theta'(0)=0$: $\theta'(t)=c_1\left(\frac{-\gamma+b}{2}\right)e^{(-\gamma+b)t/2}+c_2\left(\frac{-\gamma-b}{2}\right)e^{(-\gamma-b)t/2} \nonumber$ $\theta'(0)=c_1\left(\frac{-\gamma+b}{2}\right)+c_2\left(\frac{-\gamma-b}{2}\right)=0\nonumber$ Multiplying by 2 and grouping: $-(c_1+c_2)\gamma+(c_1-c_2)b=0 \nonumber$ The first initial condition yielded $c_1+c_2=\theta_0$, so $-\theta_0\gamma+(c_1-c_2)b=0 \nonumber$ $(c_1-c_2)=\theta_0\gamma/b \nonumber$ The two initial conditions gave two relationships between $c_1$ and $c_2$: $c_1+c_2=\theta_0 \nonumber$ $(c_1-c_2)=\theta_0\gamma/b \nonumber$ Solving this system of two equations with two unknowns: $c_1=\frac{\theta_0}{2}\left(1+\frac{\gamma}{b} \right) \nonumber$ $c_2=\frac{\theta_0}{2}\left(1-\frac{\gamma}{b} \right) \nonumber$ And finally we can write the particular solution as: $\theta(t)=c_1e^{(-\gamma+b)t/2}+c_2e^{(-\gamma-b)t/2}; \; b=\sqrt{\gamma^2-4g/l} \nonumber$ $\theta(t)=\frac{\theta_0}{2}\left(1+\frac{\gamma}{b} \right)e^{(-\gamma+b)t/2}+\frac{\theta_0}{2}\left(1-\frac{\gamma}{b} \right)e^{(-\gamma-b)t/2}; \; b=\sqrt{\gamma^2-4g/l} \nonumber$ $\theta(t)=\frac{\theta_0}{2}e^{-\gamma t/2}\left[\left(1+\frac{\gamma}{b} \right)e^{b t/2}+\left(1-\frac{\gamma}{b} \right)e^{-b t/2}\right]$ Figure $5$ shows results with $g/l=1$, and three different values of $\gamma$. Notice that $\gamma=2$ corresponds to the critically damped regime.
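If you would like to reproduce figures like the ones described above, the sketch below (an added illustration, not part of the original text) evaluates the three closed-form solutions derived in this section for $g/l=1$; the values of $\gamma$ are chosen for illustration, with $\gamma=0$ recovering the frictionless cosine, $\gamma=2$ the critically damped case, and the period of the frictionless curve equal to $2\pi\sqrt{l/g}$ (about 0.63 s for $l=0.1$ m, i.e. 1.58 cycles per second):

```python
import numpy as np
import matplotlib.pyplot as plt

g_over_l = 1.0
t = np.linspace(0, 20, 500)

def theta(t, gamma, theta0=1.0):
    """Closed-form theta(t) for the damped pendulum, chosen by regime."""
    disc = gamma**2 - 4*g_over_l
    if disc < 0:                      # underdamped: decaying oscillation
        a = np.sqrt(-disc)
        return theta0*np.exp(-gamma*t/2)*(np.cos(a*t/2) + (gamma/a)*np.sin(a*t/2))
    elif disc == 0:                   # critically damped
        return theta0*(1 + gamma*t/2)*np.exp(-gamma*t/2)
    else:                             # overdamped
        b = np.sqrt(disc)
        return 0.5*theta0*np.exp(-gamma*t/2)*((1 + gamma/b)*np.exp(b*t/2)
                                              + (1 - gamma/b)*np.exp(-b*t/2))

for gamma in (0.0, 0.5, 2.0, 6.0):    # frictionless, under-, critically, overdamped
    plt.plot(t, theta(t, gamma), label=f"gamma = {gamma}")
plt.xlabel("t"); plt.ylabel(r"$\theta(t)/\theta_0$"); plt.legend(); plt.show()
```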
Note This section is also available in video format: http://tinyurl.com/n8tgbf6 You may have noticed that all the examples we discussed so far in this chapter involve initial conditions, or in other words, conditions evaluated at the same value of the independent variable. We will see now how boundary conditions give rise to important consequences in the solutions of differential equations, which are extremely important in the description of atomic and molecular systems. Let’s start by asking ourselves whether all boundary value problems involving homogeneous second order ODEs have non-trivial solutions. The trivial solution is $y(x)=0$, which is a solution to any homogeneous ODE, but this solution is not particularly interesting from the physical point of view. For example, let’s solve the following problem: $y''(x)+3y(x)=0; \;y'(0)=0; \;y(1)=0 \nonumber$ Following the same procedure we have used in previous examples, we get the following general solution: $y(x)=a\cos(\sqrt{3}x)+b\sin(\sqrt{3}x) \nonumber$ The first boundary condition is $y'(0)=0$: $y'(x)=-\sqrt{3}a\sin(\sqrt{3}x)+\sqrt{3}b\cos(\sqrt{3}x)\rightarrow y'(0)=\sqrt{3}b=0 \rightarrow b=0 \nonumber$ Therefore, so far we have $y(x)=a \cos({\sqrt{3}x})$. The second boundary condition is $y(1)=0$, so $y(1)=a \cos{\sqrt{3}}=0\rightarrow a=0 \nonumber$ Therefore, the only particular solution for these particular boundary conditions is $y(x)=0$, the trivial solution. Let’s change the question and ask ourselves now if there is any number $\lambda$, so that the equation $y''(x)+\lambda y(x)=0; \;y'(0)=0; \;y(1)=0 \nonumber$ has a non-trivial solution. Our general solution depends on whether $\lambda$ is positive or negative. (The case $\lambda=0$ gives $y(x)=c_1+c_2x$, which also reduces to the trivial solution with these boundary conditions.) If $\lambda>0$ we have $y(x)=a\cos(\sqrt{\lambda}x)+b\sin(\sqrt{\lambda}x) \nonumber$ Notice that we are using results we obtained in previous sections, but you would need to show all your work! If $\lambda<0$ we have $y(x)=ae^{\sqrt{ |\lambda|}x}+be^{-\sqrt{ |\lambda|}x} \nonumber$ where $|\lambda|$ is the absolute value of $\lambda$. Let’s look at the case $\lambda<0$ first. The first boundary condition implies $y'(x)= \sqrt{|\lambda|}a e^{\sqrt{ |\lambda|}x} -\sqrt{|\lambda|}b e^{-\sqrt{ |\lambda|}x}\rightarrow y'(0)=\sqrt{|\lambda|}(a-b)=0\rightarrow a=b \nonumber$ and therefore $y(x)=a\left(e^{\sqrt{ |\lambda|}x}+e^{-\sqrt{ |\lambda|}x}\right)$. Using the second boundary condition: $y(1)=a\left(e^{\sqrt{ |\lambda|}}+e^{-\sqrt{ |\lambda|}}\right)=0\rightarrow a=0 \nonumber$ Therefore, if $\lambda <0$, the solution is always $y(x)=0$, the trivial solution. Let’s see what happens if $\lambda >0$. The general solution is $y(x)=a\cos(\sqrt{\lambda}x)+b\sin(\sqrt{\lambda}x)$, and applying the first boundary condition: $y'(x)=-\sqrt{\lambda}a\sin(\sqrt{\lambda}x)+\sqrt{\lambda}b\cos(\sqrt{\lambda}x)\rightarrow y'(0)=\sqrt{\lambda}b=0 \rightarrow b=0 \nonumber$ Therefore, so far we have $y(x)=a \cos{\sqrt{\lambda}x}$. The second boundary condition is $y(1)=0$, so $y(1)=a \cos({\sqrt{\lambda}})=0 \nonumber$ As before, $a=0$ is certainly a possibility, but this again would give the trivial solution, which we are trying to avoid. However, this is not our only option, because there are some values of $\lambda$ that also make $y(1)=0$.
These are $\sqrt{\lambda}=\frac{\pi}{2}, \frac{3\pi}{2}, \frac{5\pi}{2}...$, or in terms of $\lambda$: $\lambda=\frac{\pi^2}{4}, \frac{9\pi^2}{4}, \frac{25\pi^2}{4}... \nonumber$ This means that $y''(x)+ 3 y(x)=0; \;y'(0)=0; \;y(1)=0 \nonumber$ does not have a non-trivial solution, but $y''(x)+ (\pi^2/4) y(x)=0; \;y'(0)=0; \;y(1)=0 \nonumber$ does. The values of $\lambda$ that guarantee that the differential equation has non-trivial solutions are called the eigenvalues of the equation. The non-trivial solutions are called the eigenfunctions of the equation. We just found the eigenvalues, but what about the eigenfunctions? We just concluded that the solutions are $y(x)=a \cos{\sqrt{\lambda}x}$, and now we know that $\sqrt{\lambda}=\frac{\pi}{2}, \frac{3\pi}{2}, \frac{5\pi}{2}...$. We can write the eigenfunctions as: $y(x)=a \cos{\frac{(2n-1)\pi}{2}x} \; \; n=1, 2, 3... \nonumber$ We could also use $(2n+1)$ with $n=0,1,2...$. Notice that we do not have any information that allows us to calculate the constant $a$, so we leave it as an arbitrary constant. Also, notice that although we have infinite eigenvalues, the eigenvalues are discrete. The term discrete means that the variable can take values from a countable set (like the natural numbers). The opposite of discrete is continuous (like the real numbers). These discrete eigenvalues have very important consequences in quantum mechanics. In fact, you probably know from your introductory chemistry classes that atoms and molecules have energy levels that are discrete. Electrons can occupy one orbital or the next, but cannot be in between. These energies are the eigenvalues of differential equations with boundary conditions, so this is an amazing example of what boundary conditions can do!
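A numerical cross-check of these eigenvalues is straightforward. The added sketch below (assuming SciPy is available) brackets the roots of $\cos(\sqrt{\lambda})$, which is $y(1)$ up to the constant $a$, and compares them with $(2n-1)^2\pi^2/4$:

```python
import numpy as np
from scipy.optimize import brentq

# y(x) = cos(sqrt(lambda) x) already satisfies y'(0) = 0;
# the eigenvalues are the lambda > 0 with y(1) = cos(sqrt(lambda)) = 0.
f = lambda lam: np.cos(np.sqrt(lam))

eigs = []
grid = np.linspace(0.1, 100, 2000)
for lo, hi in zip(grid[:-1], grid[1:]):
    if f(lo)*f(hi) < 0:               # a sign change brackets a root
        eigs.append(brentq(f, lo, hi))

exact = [((2*n - 1)*np.pi/2)**2 for n in range(1, len(eigs) + 1)]
for num, an in zip(eigs, exact):
    print(f"numeric {num:8.4f}   analytic (2n-1)^2 pi^2/4 = {an:8.4f}")
```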
The main postulate of quantum mechanics establishes that the state of a quantum mechanical system is specified by a function called the wave function. The wave function is a function of the coordinates of the particle (the position) and time. We often deal with stationary states, i.e. states whose energy does not depend on time. For example, at room temperature and in the absence of electromagnetic radiation such as UV light, the energy of the only electron in the hydrogen atom is constant (the energy of the 1s orbital). In this case, all the information about the state of the particle is contained in a time-independent function, $\psi (\textbf{r})$, where $\textbf{r}$ is a vector that defines the position of the particle. In Section 2.3 we briefly mentioned that $|\psi|^2 = \psi^* \psi$ can be interpreted in terms of the probability of finding the electron in different regions of space. Because the probability of finding the particle somewhere in the universe is 1, the wave function needs to be normalized (that is, the integral of $|\psi|^2$ over all space has to equal 1). The fundamental equation in quantum mechanics is known as the Schrödinger equation, which is a differential equation whose solutions are the wave functions. For a particle of mass $m$ moving in one dimension in a potential field described by $U(x)$ the Schrödinger equation is: $-\frac{\hbar^2}{2m} \frac{d^2\psi(x)}{dx^2}+U(x)\psi(x)=E \psi(x)$ Notice that the position of the particle is defined by $x$, because we are assuming one-dimensional movement. The constant $\hbar$ (pronounced “h-bar”) is defined as $h/(2\pi)$, where $h$ is Planck’s constant. $U(x)$ is the potential energy the particle is subjected to, and depends on the forces involved in the system. For example, if we were analyzing the hydrogen atom, the potential energy would be due to the force of interaction between the proton (positively charged) and the electron (negatively charged), which depends on their distance. The constant $E$ is the total energy, equal to the sum of the potential and kinetic energies. This will be confusing until we start seeing a few examples, so don’t get discouraged and be patient. Let’s start by discussing the simplest (from the mathematical point of view) quantum mechanical system. Our system consists of a particle of mass $m$ that can move freely in one dimension between two “walls”. The walls are impenetrable, and therefore the probability that you find the particle outside this one-dimensional box is zero. This is not too different from a ping-pong ball bouncing inside a room. It does not matter how hard you bounce the ball against the wall, you will never find it on the other side. However, we will see that for microscopic particles (small mass), the system behaves very differently than for macroscopic particles (the ping-pong ball). The behavior of macroscopic systems is described by the laws of what we call classical mechanics, while the behavior of molecules, atoms and sub-atomic particles is described by the laws of quantum mechanics. The problem we just described is known as the “particle in the box” problem, and can be extended to more dimensions (e.g. the particle can move in a 3D box) or geometries (e.g. the particle can move on the surface of a sphere, or inside the area of a circle). The particle in a one-dimensional box We will start with the simplest case, which is a problem known as ’the particle in a one-dimensional box’ (Figure $1$).
This is a simple physical problem that, as we will see, provides a rudimentary description of conjugated linear molecules. In this problem, the particle is allowed to move freely in one dimension inside a ’box’ of length $L$. In this context, ’freely’ means that the particle is not subject to any force, so the potential energy inside the box is zero. The particle is not allowed to move outside the box, and physically, we guarantee this is true by imposing an infinite potential energy at the edges of the box ($x = 0$ and $x = L$) and outside the box ($x < 0$ and $x > L$). $U(x)=\left\{\begin{matrix} \infty & x<0 \\ 0 & 0<x<L\\ \infty & x>L \end{matrix}\right. \nonumber$ Because the potential energy outside the box is infinity, the probability of finding the particle in these regions is zero. This means that $\psi(x)=0$ if $x>L$ or $x<0$. What about $\psi(x)$ inside the box? In order to find the wave functions that describe the states of a system, we have to solve Schrödinger’s equation: $-\frac{\hbar^2}{2m} \frac{d^2\psi(x)}{dx^2}+U(x)\psi(x)=E \psi(x) \nonumber$ Inside the box $U(x)=0$, so: $-\frac{\hbar^2}{2m} \frac{d^2\psi(x)}{dx^2}=E \psi(x) \nonumber$ $\label{eqn1} \frac{\hbar^2}{2m} \frac{d^2\psi(x)}{dx^2}+ E\psi(x)=0$ Remember that $\hbar$ is a constant, $m$ is the mass of the particle (also constant), and $E$ the energy. The energy of the particle is a constant in the sense that it is not a function of $x$. We will see that there are many (in fact infinite) possible values for the energy of the particle, but these are numbers, not functions of $x$. With all this in mind, hopefully you will recognize that the Schrödinger equation for the one-dimensional particle in a box is a homogeneous second order ODE with constant coefficients. Do we have any initial or boundary conditions? In fact we do, because the wave function needs to be a continuous function of $x$. This means that there cannot be sudden jumps of probability density when moving through space. In particular in this case, it means that $\psi(0)=\psi(L)=0$, because the probability of finding the particle outside the box is zero. Let’s solve Equation \ref{eqn1}. We need to find the functions $\psi(x)$ that satisfy the ODE. The auxiliary equation is (remember that $m, \hbar, E$ are positive constants): $\frac{\hbar^2}{2m}\alpha^2+E=0 \nonumber$ $\alpha=\pm i \sqrt{\frac{2mE}{\hbar^2}} \nonumber$ and the general solution is therefore: $\psi(x)=c_1e^{i \sqrt{\frac{2mE}{\hbar^2}}x}+c_2e^{-i \sqrt{\frac{2mE}{\hbar^2}}x} \nonumber$ Because $\psi(0)=0$: $\psi(0)=c_1+c_2=0\rightarrow c_1=-c_2 \nonumber$ $\psi(x)=c_1\left(e^{i \sqrt{\frac{2mE}{\hbar^2}}x}-e^{-i \sqrt{\frac{2mE}{\hbar^2}}x}\right) \nonumber$ This can be simplified using Euler’s relationships: $e^{ix}-e^{-ix}=2i\sin{x}$ $\psi(x)=c_1(2i)\sin{\left(\sqrt{\frac{2mE}{\hbar^2}}x\right)} \nonumber$ $\psi(x)=A\sin{\left(\sqrt{\frac{2mE}{\hbar^2}}x\right)} \nonumber$ In the last step we recognized that $2ic_1$ is a constant, and called it $A$. The second boundary condition is $\psi(L)=0$: $\psi(L)=A\sin{\left(\sqrt{\frac{2mE}{\hbar^2}}L\right)}=0 \nonumber$ We can make $A=0$, but this will result in the wave function being zero at all values of $x$. This is what we called the ‘trivial solution’ before, and although it is a solution from the mathematical point of view, it is not when we think about the physics of the problem. If $\psi(x)=0$ the probability of finding the particle inside the box is zero.
However, the problem states that the particle cannot be found outside, so it has to be found inside with a probability of 1. This means that $\psi(x)=0$ is not a physically acceptable solution inside the box, and we are forced to consider the situations where $\sin{\left(\sqrt{\frac{2mE}{\hbar^2}}L\right)}=0 \nonumber$ We know the function $\sin{x}$ is zero at values of $x$ that are zero, or multiples of $\pi$: $\left(\sqrt{\frac{2mE}{\hbar^2}}L\right)=\pi,2\pi, 3\pi...=n\pi\;(n=1,2,3...\infty) \nonumber$ This means that our solution is: $\psi(x)=A\sin{\left(\frac{n\pi}{L}x\right)} \; (n=1,2,3...\infty) \nonumber$ Notice that we didn’t consider $n=0$ because that would again cause $\psi(x)$ to vanish inside the box. The functions $\psi(x)$ contain information about the state of the system, and are called the eigenfunctions. What about the energies? $\left(\sqrt{\frac{2mE}{\hbar^2}}L\right)=n\pi\rightarrow E=\left(\frac{n\pi}{L}\right)^2\frac{\hbar^2}{2m}\;(n=1, 2, 3...\infty) \nonumber$ The energies are the eigenvalues of this equation. Notice that there are infinite eigenfunctions, and each one has a defined eigenvalue. The first few are:

$n=1$: $\psi(x)=A\sin{\left(\frac{\pi}{L}x\right)}$, $E=\left(\frac{\pi}{L}\right)^2\frac{\hbar^2}{2m}$
$n=2$: $\psi(x)=A\sin{\left(\frac{2\pi}{L}x\right)}$, $E=\left(\frac{2\pi}{L}\right)^2\frac{\hbar^2}{2m}$
$n=3$: $\psi(x)=A\sin{\left(\frac{3\pi}{L}x\right)}$, $E=\left(\frac{3\pi}{L}\right)^2\frac{\hbar^2}{2m}$

The lowest energy state is described by the wave function $\psi=A\sin{\left(\frac{\pi}{L}x\right)}$, and its energy is $\left(\frac{\pi}{L}\right)^2\frac{\hbar^2}{2m}$. What about the constant $A$? Mathematically, any value would work, and none of the boundary conditions impose any restriction on its value. Physically, however, we have another restriction we haven’t fulfilled yet: the wave function needs to be normalized. The integral of $|\psi|^2$ over all space needs to be 1 because this function represents a probability. $\int_{-\infty}^{\infty}|\psi(x)|^2dx=1 \nonumber$ However, $\psi(x)=0$ outside the box, so the ranges $x<0$ and $x>L$ do not contribute to the integral. Therefore: $\int_{0}^{L}|\psi(x)|^2dx=\int_{0}^{L}A^2\sin^2{\left(\frac{n\pi}{L}x\right)}dx=1 \nonumber$ We will calculate $A$ from this normalization condition. Using the primitives found in the formula sheet, we get: $\int_{0}^{L}\sin^2{\left(\frac{n\pi}{L}x\right)}dx=L/2 \nonumber$ and therefore $A=\sqrt{\frac{2}{L}} \nonumber$ We can now write down our normalized wave function as: $\psi(x)=\sqrt{\frac{2}{L}}\sin{\left(\frac{n\pi}{L}x\right)} \; (n=1,2,3...\infty)$ We solved our first problem in quantum mechanics! Let’s discuss what we got, and what it means. First, because the potential energy inside the box is zero, the total energy equals the kinetic energy of the particle (i.e. the energy due to the fact that the particle is moving from left to right or from right to left). A ping-pong ball inside a macroscopic box can move at any speed we want, so its kinetic energy is not quantized. However, if the particle is an electron, its kinetic energy inside the box can adopt only quantized values of energy: $E=\left(\frac{n\pi}{L}\right)^2\frac{\hbar^2}{2m}\;(n=1, 2, 3...\infty)$. Interestingly, the particle cannot have zero energy ($n=0$ is not an option), so it cannot be at rest (our ping-pong ball can have zero kinetic energy without violating any physical law). If a ping-pong ball moves freely inside the box we can find it with equal probability close to the edges or close to the center.
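As an added numerical check (with $L=1$ chosen for convenience), the normalization of these eigenfunctions can be verified by integrating $|\psi|^2$ over the box:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0
psi = lambda x, n: np.sqrt(2/L)*np.sin(n*np.pi*x/L)

for n in (1, 2, 3):
    norm, _ = quad(lambda x: psi(x, n)**2, 0, L)
    print(f"n = {n}: integral of |psi|^2 over the box = {norm:.6f}")  # 1.000000
```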
Not for an electron in a one-dimensional box! The function $|\psi(x)|^2$ for the lowest energy state ($n=1$) is plotted in Figure $2$. The probability of finding the electron is greater at the center than it is at the edges; nothing like what we expect for a macroscopic system. The plot is symmetric around the center of the box, meaning the probability of finding the particle on the left side is the same as finding it on the right side. That is good news, because the problem is truly symmetric, and there are no extra forces attracting or repelling the particle on the left or right half of the box. Looking at Figure $2$, you may think that the probability of finding the particle at the center is really 2. How can this be? Probabilities cannot be greater than 1! This is a major source of confusion among students, so let’s clarify what it means. The function $|\psi(x)|^2$ is not a probability, but a probability density. Technically, this means that $|\psi(x)|^2dx$ is the probability of finding the particle between $x$ and $x+dx$. For example, for a box of length $L=1$, the probability of finding the particle between $x=0.5$ and $0.5001$ is $\approx|\psi(0.5)|^2\times 0.0001= 0.0002$. This is approximate because $\Delta x= 0.0001$ is small, but not infinitesimal. What about the probability of finding the particle between $x=0.5$ and $0.6$? We need to integrate $|\psi(x)|^2dx$ between $x=0.5$ and $x=0.6$: $p(0.5<x<0.6)=\int_{0.5}^{0.6} |\psi(x)|^2dx\approx 0.2 \nonumber$ Importantly, $p(0<x<1)=\int_{0}^{1} |\psi(x)|^2dx=1 \nonumber$ as should be the case for a normalized wave function. Notice that these probabilities refer to the lowest energy state ($n=1$), and will be different for states of increasing energy. The particle in the box problem is also available in video format: http://tinyurl.com/mjsmd2a Where is the chemistry? So far we talked about a system that sounds pretty far removed from anything we (chemists) care about. We understand electrons in atoms, but electrons moving in a one-dimensional box? To see why this is not such a crazy idea, let’s consider the molecule of carotene (the orange pigment in carrots). We know that all those double bonds are conjugated, meaning that the $\pi$ electrons are delocalized and relatively free to move around the bonds highlighted in red in Figure $3$. Because the length of each carbon-carbon bond is around 1.4 Å (Å stands for angstrom, and equals $10^{-10}m$), we can assume that the $\pi$ electrons move inside a one dimensional box of length $L = 21\times 1.4$ Å $= 29.4$ Å. This is obviously an approximation, as it is not true that the electrons move freely without being subject to any force. Yet, we will see that this simple model gives a good semi-quantitative description of the system. We already solved the problem of the particle in a box, and obtained the following eigenvalues: $E=\left(\frac{n\pi}{L}\right)^2\frac{\hbar^2}{2m}\;(n=1, 2, 3...\infty) \label{eqn2}$ These are the energies that the particle in the box is allowed to have. In this case, the particle in question is an electron, so $m$ is the mass of an electron. Notice that we have everything we need to use Equation \ref{eqn2}: $\hbar = 1.0545 \times 10^{-34} m^2 kg\, s^{-1}$, $m=9.109 \times 10^{-31}kg$, and $L = 2.94 \times 10^{-9} m$. This will allow us to calculate the allowed energies for the $\pi$ electrons in carotene.
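The probabilities quoted above are easy to reproduce. This short added sketch integrates the ground-state probability density numerically (again with $L=1$):

```python
import numpy as np
from scipy.integrate import quad

L, n = 1.0, 1
density = lambda x: (2/L)*np.sin(n*np.pi*x/L)**2   # |psi(x)|^2 for the ground state

p_narrow, _ = quad(density, 0.5, 0.5001)
p_wide, _   = quad(density, 0.5, 0.6)
p_all, _    = quad(density, 0, L)
print(f"p(0.5 < x < 0.5001) ~ {p_narrow:.4e}")   # ~2e-4
print(f"p(0.5 < x < 0.6)    ~ {p_wide:.3f}")     # ~0.2
print(f"p(0 < x < 1)        = {p_all:.3f}")      # 1.000
```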
For $n=1$ (the lowest energy state), we have: $E_1=\left(\frac{\pi}{L}\right)^2\frac{\hbar^2}{2m}=6.97 \times 10^{-21}J \nonumber$ Joule is the unit of energy, and $1J = kg\times m^2\times s^{-2}$. A very easy way of remembering this is to recall Einstein’s equation: $E = mc^2$, which tells you that energy is a mass times the square of a velocity (hence, $1J = 1kg (1 m/s)^2$). Coming back to Equation \ref{eqn2}, the allowed energies for the $\pi$ electrons in carotene are: $E_n=n^2\times6.97 \times 10^{-21}J\;(n=1, 2, 3...\infty) \nonumber$ Notice that the energies increase rapidly. The energy of the tenth level ($E_{10}$) is one hundred times the energy of the first! The number of levels is infinite, but of course we know that the electrons will fill the ones that are lower in energy. This is analogous to the hydrogen atom. We know there are an infinite number of energy levels, but in the absence of an external energy source we know the electron will be in the 1s orbital, which is the lowest energy level. This electron has an infinite number of levels available, but we need an external source of energy if we want the electron to occupy a higher energy state. The same concepts apply to molecules. As you have learned in general chemistry, we cannot have more than two electrons in a given level, so we will put our 22 $\pi$ electrons (2 per double bond) in the first 11 levels (Figure $4$, left). We can promote an electron to the first unoccupied level (in this case $n=12$) by using light of the appropriate frequency ($\nu$). The energy of a photon is $E = h\nu$, where $h$ is Planck’s constant. In order for the molecule to absorb light, the energy of the photons needs to match exactly the gap in energy between the highest occupied state (in this case $n=11$) and the lowest unoccupied state. The wavelength of light is related to the frequency as: $\lambda = c/\nu$, where $c$ is the speed of light. Therefore, in order to produce the excited state shown in the right side of Figure $4$, we have to use light of the following wavelength: $E=E_{12}-E_{11}=h\nu=h c/\lambda\rightarrow \lambda=hc/(E_{12}-E_{11}) \nonumber$ Recall that $E_n=n^2\times6.97 \times 10^{-21}J$, so $(E_{12}-E_{11})= (144-121)\times6.97 \times 10^{-21}J=1.60\times 10^{-19}J$. Therefore, $\lambda = \frac{6.626\times10^{-34} J\,s\times 3\times10^8 m\, s^{-1}}{1.60\times 10^{-19}J}=1.24\times10^{-6}m=1,242 nm \nonumber$ In the last step we expressed the result in nanometers ($1nm=10^{-9}m$), which is a common unit to describe the wavelength of light in the visible and ultraviolet regions of the electromagnetic spectrum. It is actually fairly easy to measure the absorption spectrum of carotene. You just need to have a solution of carotene, shine light of different colors (wavelengths) through the solution, and see what percentage of the light is transmitted. The light that is not transmitted is absorbed by the molecules due to transitions such as the one shown in Figure $4$. In reality, the absorption of carotene actually occurs at 497 nm, not at 1,242 nm. The discrepancy is due to the huge approximations of the particle in the box model. Electrons are subject to interactions with other electrons, and with the nuclei of the atoms, so it is not true that the potential energy is zero. Although the difference seems large, you should not be too disappointed about the result. It is actually pretty impressive that such a simple model can give a prediction that is not that far from the experimental result.
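These numbers can be verified with a few lines of code. The following added sketch reproduces $E_1$, the gap between the highest occupied and lowest unoccupied levels, and the predicted wavelength, using the constants quoted above:

```python
import numpy as np

hbar = 1.0545e-34      # J s
h    = 6.626e-34       # J s
c    = 3e8             # m/s
m_e  = 9.109e-31       # kg
L    = 29.4e-10        # box length: 21 bonds x 1.4 angstrom

E = lambda n: (n*np.pi/L)**2 * hbar**2 / (2*m_e)
print(f"E_1 = {E(1):.3e} J")             # ~6.97e-21 J
gap = E(12) - E(11)                      # highest occupied (11) -> lowest unoccupied (12)
print(f"E_12 - E_11 = {gap:.3e} J")      # ~1.60e-19 J
print(f"lambda = {h*c/gap*1e9:.0f} nm")  # ~1.24e3 nm, matching the estimate in the text
```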
Nowadays chemists use computers to analyze more sophisticated models that cannot be solved analytically in the way we just solved the particle in the box. Yet, there are some qualitative aspects of the particle in the box model that are useful despite the approximations. One of these aspects is that the wavelength of the absorbed light gets shorter as we reduce the size of the box. From Equation \ref{eqn2}, we can write: $h c/\lambda=\left(\frac{\pi}{L}\right)^2\frac{\hbar^2}{2m}(n_2^2-n_1^2) \nonumber$ where $n_2$ is the lowest unoccupied level, and $n_1$ is the highest occupied level. Because $n_2=n_1+1$, $h c/\lambda=\left(\frac{\pi}{L}\right)^2\frac{\hbar^2}{2m}((n_1+1)^2-n_1^2)=\left(\frac{\pi}{L}\right)^2\frac{\hbar^2}{2m}(2n_1+1) \nonumber$ Molecules that have a longer conjugated system will absorb light of longer wavelengths (less energy), and molecules with a shorter conjugated system will absorb light of shorter wavelengths (higher energy). For example, consider the following molecule, which is a member of a family of fluorescent dyes known as cyanines. The conjugated system contains 8 $\pi$ electrons, and the molecule absorbs light at around 550 nm. This wavelength corresponds to the green region of the visible spectrum. The solution absorbs green and lets everything else reach your eye. Red is the complementary color of green, so this molecule in solution will look red to you. Now look at this other cyanine, which has two extra $\pi$ electrons: The particle in a box model tells you that this cyanine should absorb light of longer wavelengths (less energy), so it should not surprise you to know that a solution of this compound absorbs light of about 670 nm. This corresponds to the orange-red region of the spectrum, and the solution will look blue to us. If we instead shorten the conjugated chain we will produce a compound that absorbs in the blue (450 nm), and that will be yellow when in solution. We just connected differential equations, quantum mechanics and the colors of things... impressive! 5.05: Problems Problem $1$ Solve the following initial value problems: 1. $\frac{d^2x}{dt^2}+\frac{dx}{dt}-2x=0; \;x(0)=1; \;x'(0)=0$ 2. $\frac{d^2x}{dt^2}+6\frac{dx}{dt}+9x=0; \;x(1)=0; \;x'(1)=1$ 3. $\frac{d^2x}{dt^2}+9x=0; \;x(\pi/3)=0; \;x'(\pi/3)=-1$ 4. $\frac{d^2x}{dt^2}-2\frac{dx}{dt}+2x=0; \;x(0)=1; \;x'(0)=0$ Problem $2$ The simple harmonic oscillator consists of a body moving in a straight line under the influence of a force whose magnitude is proportional to the displacement $x$ of the body from the point of equilibrium, and whose direction is towards this point. $\label{ode2:spring_1} F=-k(x-x_0)$ The force acts in the direction opposite to that of the displacement. The constant $k$ is a measure of how hard or soft the spring is. Newton’s law of motion states that the force applied on an object equals its mass multiplied by its acceleration. The variable $h=x-x_0$ represents the displacement of the spring from its undistorted length, and the acceleration is the second derivative of the displacement. Therefore: $\label{ode2:spring_2} F=m\frac{d^2h(t)}{dt^2}$ Combining equations \ref{ode2:spring_1} and \ref{ode2:spring_2} we obtain: $\label{ode2:spring_3} m\frac{d^2h(t)}{dt^2}=-kh(t)$ which is a second order differential equation. Notice that $m$ (the mass of the body) and $k$ (the spring constant) are not functions of time. Assume that the displacement $h$ and the velocity $h'$ at time $t=0$ are: $h(0) = A$ and $h'(0)=0$.
Physically, this means that the displacement at time zero is $A$, and the body is at rest. $\bullet$ Obtain an expression for $h(t)$. $\bullet$ What is the period of the function you found above? In the problem above we assumed that the forces due to friction were negligible. If the oscillator moves in a viscous medium, we need to include a frictional term in Newton’s equation. The force due to friction is proportional to the velocity of the mass ($h'(t)$), and its direction is opposite to the velocity. Therefore: $\label{ode2:spring_4} m\frac{d^2h(t)}{dt^2}=-kh(t)-\gamma \frac{dh(t)}{dt}$ where $\gamma$ is a constant that depends on the viscosity of the medium. $\bullet$ Obtain an expression for $h(t)$. You will have to consider the cases $\gamma^2<4mk$, $\gamma^2=4mk$ and $\gamma^2>4mk$ separately. The answers are printed below so you can check your results. Be sure you show all your work step by step. • $\gamma^2<4mk$: $h(t)=Ae^{-\gamma t/2m}\left[\cos\left(\frac{at}{2m}\right)+\frac{\gamma}{a}\sin\left(\frac{at}{2m}\right)\right], a=\sqrt{4mk-\gamma^2} \nonumber$ • $\gamma^2=4mk$: $h(t)=A\left(1+\frac{\gamma}{2m}t\right)e^{-\gamma t/2m} \nonumber$ • $\gamma^2>4mk$: $h(t)=\frac{A}{2}e^{-\gamma t/2m}\left[\left(e^{at/2m}+e^{-at/2m}\right)+\frac{\gamma}{a}\left(e^{at/2m}-e^{-at/2m}\right)\right], a=\sqrt{\gamma^2-4mk} \nonumber$ Problem $3$ Find the eigenfunctions ($f(x)$) and eigenvalues $\lambda$ of the following boundary value problems: • $-\frac{d^2}{dx^2}f(x)=\lambda f(x)$, $f(0)=0, f'(1)=0$ • $-\frac{d^2}{dx^2}f(x)=\lambda f(x)$, $f'(0)=0, f(\pi)=0$
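One more way to check your results for Problem 2: the added sketch below (the parameter values are chosen arbitrarily for illustration) integrates the damped-oscillator equation with scipy.integrate.solve_ivp and compares it against the printed underdamped answer:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, gamma, A = 1.0, 1.0, 0.5, 1.0    # illustrative values with gamma^2 < 4mk
a = np.sqrt(4*m*k - gamma**2)

# m h'' = -k h - gamma h', rewritten as a first order system in (h, h')
rhs = lambda t, u: [u[1], (-k*u[0] - gamma*u[1])/m]
t = np.linspace(0, 20, 200)
num = solve_ivp(rhs, (0, 20), [A, 0], t_eval=t, rtol=1e-10, atol=1e-12).y[0]

exact = A*np.exp(-gamma*t/(2*m))*(np.cos(a*t/(2*m)) + (gamma/a)*np.sin(a*t/(2*m)))
print(np.max(np.abs(num - exact)))     # tiny: the printed answer checks out
```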
Objectives • Learn how to solve second order ODEs using series. • Use the power series method to solve the Laguerre equation. • 6.1: Introduction to Power Series Solutions of Differential Equations Many important differential equations in physical chemistry are second order homogeneous linear differential equations, but do not have constant coefficients. The following examples are all important differential equations in the physical sciences: the Hermite equation, the Laguerre equation, and the Legendre equation. • 6.2: The Power Series Method The power series method is used to seek a power series solution to certain differential equations. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients. • 6.3: The Laguerre Equation Some differential equations can only be solved with power series methods. One such example is the Laguerre equation. This differential equation is important in quantum mechanics because it is one of several equations that appear in the quantum mechanical description of the hydrogen atom. The solutions of the Laguerre equation are called the Laguerre polynomials, and together with the solutions of other differential equations, form the functions that describe the orbitals of the hydrogen atom. • 6.4: Problems 06: Power Series Solutions of Differential Equations In Chapter 5 we discussed a method to solve linear homogeneous second order differential equations with constant coefficients. Many important differential equations in physical chemistry are second order homogeneous linear differential equations, but do not have constant coefficients. The following examples are all important differential equations in the physical sciences: • Hermite equation: $y''-2xy'+2ny=0 \nonumber$ • Laguerre equation: $xy''+(1-x)y'+ny=0 \nonumber$ • Legendre equation: $(1-x^2)y''-2xy'+l(l+1)y=0 \nonumber$ These equations do not have constant coefficients because some of the terms multiplying $y, y'$ and $y''$ are functions of $x$. In order to solve these differential equations, we will assume that the solution, $y(x)$, can be expressed as a Maclaurin series: $\label{eq:eq1}y(x)=\displaystyle\sum_{n=0}^{\infty}a_n x^{n}=a_0+a_1 x + a_2 x^2+...+a_n x^n+...$ This method will give us a series as the solution, but at this point we know that an infinite series is one way of representing a function, so we will not be too surprised. For example, instead of obtaining $e^x$ as the solution, we will get the series $\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}x^n$, which of course represents the same thing. Does it mean that we need to know all the series to be able to recognize which function is represented by the series we got as the answer? Not really. We will see that this method is useful when the solution can be expressed only as a series, but not as a known function. Even if this is the case, for simplicity we will see how the method works with a problem whose solution is a known function. We will then move to a problem whose solution can be expressed as a series only. 6.02: The Power Series Method We will use the series method to solve $\dfrac{dy}{dx}+y=0 \label{Eq1}$ This equation is a first order separable differential equation, and can be solved by direct integration to give $ce^{-x}$ (be sure you can do this on your own). In order to use the series method, we will first assume that the answer can be expressed as $y(x)=\displaystyle\sum_{n=0}^{\infty}a_n x^{n}. \nonumber$
Again, instead of obtaining the actual function $y(x)$, in this method we will obtain the series $\displaystyle\sum_{n=0}^{\infty}a_n x^{n}. \nonumber$ We will use the expression $y(x)=a_0+a_1 x + a_2 x^2+...+a_n x^n \label{Eq5}$ to calculate the derivatives we need and substitute in the differential equation. Given our initial assumption that the solution can be written as: $y(x)=a_0+a_1 x + a_2 x^2+a_3x^3+...+a_n x^n \nonumber$ we can write the first derivative as: $y'(x)=a_1 + a_2\times 2 x+a_3\times3 x^2+...+a_n\times n x^{n-1} \nonumber$ We’ll substitute these expressions in the differential equation we want to solve (Equation \ref{Eq1}): $\begin{array}{c} \dfrac{dy}{dx}+y =0 \\[4pt] \left(a_1 + a_2\times 2 x+a_3\times3 x^2 + ... + a_n\times n x^{n-1}\right)+\left(a_0+a_1 x + a_2 x^2+a_3x^3+...+a_n x^n\right) =0 \end{array} \nonumber$ and group the terms that have the same power of $x$: $(a_1+a_0) + (2a_2+a_1) x+(3a_3+a_2) x^2+(4a_4+a_3) x^3+...=0 \nonumber$ This expression needs to hold for all values of $x$, so all terms in parenthesis need to be zero: $(a_1+a_0)= (2a_2+a_1)=(3a_3+a_2)=(4a_4+a_3)=...=0 \nonumber$ The equations above give relationships among the different coefficients. Our solution will look like Equation \ref{Eq5}, but we know now that these coefficients are all related to each other. In the next step, we will express all the coefficients in terms of $a_0$. $\begin{array}{c} \left( a_1+a_0 \right) \rightarrow a_1=-a_0 \\ \left( 2a_2+a_1 \right) \rightarrow a_2=-a_1/2=a_0/2 \\ \left( 3a_3+a_2 \right) \rightarrow a_3=-a_2/3=-a_0/6 \\ \left( 4a_4+a_3 \right) \rightarrow a_4=-a_3/4=a_0/24 \end{array} \nonumber$ We can continue, but hopefully you already see the pattern: $a_n=a_0(-1)^n/n!$. We can then write our solution as: $y(x)=\displaystyle\sum_{n=0}^{\infty}a_n x^{n}=\displaystyle\sum_{n=0}^{\infty}a_0\dfrac{(-1)^n}{n!} x^{n}=a_0\displaystyle\sum_{n=0}^{\infty}\dfrac{(-1)^n}{n!} x^{n} \nonumber$ We got our solution in the shape of an infinite series. Again, in general, we will be happy with the result as it is, because chances are the series does not represent any combination of known functions. In this case, however, we know that the solution is $y(x)=ce^{-x}$, so it should not surprise you that the series $\displaystyle\sum_{n=0}^{\infty}\dfrac{(-1)^n}{n!} x^{n} \nonumber$ is the Maclaurin series of $e^{-x}$. The constant $a_0$ is an arbitrary constant, and can be calculated if we have an initial condition. The same procedure can be performed more elegantly in the following way: \begin{align*} y(x) &= \sum_{n=0}^{\infty}a_n x^{n} \\[4pt] y'(x) &= \sum_{n=1}^{\infty}na_n x^{n-1} \\[4pt] y'(x) + y(x) &=0\rightarrow \sum_{n=1}^{\infty}na_n x^{n-1}+ \sum_{n=0}^{\infty}a_n x^{n}=0 \end{align*} changing the ‘dummy’ index of the first sum: $\displaystyle\sum_{n=0}^{\infty}(n+1)a_{n+1} x^{n}+\displaystyle\sum_{n=0}^{\infty}a_n x^{n}=0 \nonumber$ and combining the two sums: $\displaystyle\sum_{n=0}^{\infty}\left[(n+1)a_{n+1}+a_n\right] x^{n}=0 \nonumber$ Because this result needs to be true for all values of $x$: $(n+1)a_{n+1}+a_n=0\rightarrow \dfrac{a_{n+1}}{a_n}=-\dfrac{1}{n+1} \nonumber$ The expression above is what is known as a recursion formula. It gives the value of the second coefficient in terms of the first, the third in terms of the second, etc. $\dfrac{a_{n+1}}{a_n}=-\dfrac{1}{n+1}\rightarrow \dfrac{a_{1}}{a_0}=-1; \;\dfrac{a_{2}}{a_1}=-\dfrac{1}{2};\;\dfrac{a_{3}}{a_2}=-\dfrac{1}{3};\;\dfrac{a_{4}}{a_3}=-\dfrac{1}{4}... \nonumber$
We know we want to express all the coefficients in terms of $a_0$. We can achieve this by multiplying all these terms: $\begin{array}{c} \dfrac{a_{1}}{a_0}\dfrac{a_{2}}{a_1}\dfrac{a_{3}}{a_2}...\dfrac{a_{n}}{a_{n-1}}=\dfrac{a_{n}}{a_0} \\ \dfrac{a_{n}}{a_0}=-1\times \left(-\dfrac{1}{2}\right)\times \left(-\dfrac{1}{3}\right)\times \left(-\dfrac{1}{4}\right)...\times \left(-\dfrac{1}{n}\right)=\dfrac{(-1)^n}{n!} \end{array} \nonumber$ and therefore, $a_n=a_0\dfrac{(-1)^n}{n!}$. Note: You do not need to worry about being ’elegant’. It is fine if you prefer to take the less ’elegant’ route! Example $1$ Solve the following equation using the power series method: $\dfrac{d^2y}{dx^2}+y=0 \nonumber$ Solution We start by assuming that the solution can be written as: $y(x)=a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4...\nonumber$ and therefore the first and second derivatives are: \begin{align*} y'(x) &=a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4... \\[4pt] y''(x) &=2a_2+2\times 3a_3x+3\times 4a_4x^2+4\times 5a_5x^3+5\times 6a_6x^4...\end{align*} \nonumber Notice that up to this point, this procedure is independent of the differential equation we are trying to solve. We now substitute these expressions in the differential equation: $(2a_2+2\times 3a_3x+3\times 4a_4x^2+4\times 5a_5x^3+5\times 6a_6x^4...)+(a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4...)=0\nonumber$ and group the terms in the same power of $x$: $(2a_2+a_0)+(2\times3a_3+a_1)x+(3\times 4a_4+a_2)x^2+(4\times 5a_5+a_3)x^3+(5\times 6a_6+a_4)x^4...=0\nonumber$ Because this needs to be true for all values of $x$, all the terms in parenthesis need to equal zero. $(2a_2+a_0)=(2\times 3a_3+a_1)=(3\times 4a_4+a_2)=(4\times 5a_5+a_3)=(5\times 6a_6+a_4)...=0\nonumber$ We have relationships between the odd coefficients and between the even coefficients, but we see that the odd and the even are not related. Let’s write all the odd coefficients in terms of $a_1$, and the even coefficients in terms of $a_0$: $a_2=-\dfrac{a_0}{2}=-\dfrac{a_0}{2!}$ $a_3=-\dfrac{a_1}{(2\times3)}=-\dfrac{a_1}{3!}$ $a_4=-\dfrac{a_2}{(3\times 4)}=\dfrac{a_0}{(2\times 3\times 4)}=\dfrac{a_0}{4!}$ $a_5=-\dfrac{a_3}{(4\times 5)}=\dfrac{a_1}{(2\times3\times 4\times5)}=\dfrac{a_1}{5!}$ $a_6=-\dfrac{a_4}{(5\times 6)}=-\dfrac{a_0}{(2\times 3 \times 4\times 5\times 6)}=-\dfrac{a_0}{6!}$ $a_7=-\dfrac{a_5}{(6\times 7)}=-\dfrac{a_1}{(2\times 3\times 4\times 5\times 6\times 7)}=-\dfrac{a_1}{7!}$ Substituting these relationships in the expression of $y(x)$: \begin{align} y(x) &=a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4... \\[4pt] &=a_0+a_1 x-\dfrac{a_0}{2!}x^2-\dfrac{a_1}{3!}x^3+\dfrac{a_0}{4!}x^4+\dfrac{a_1}{5!}x^5-\dfrac{a_0}{6!}x^6-\dfrac{a_1}{7!}x^7+... \\[4pt] &=a_0(1-\dfrac{1}{2!}x^2+\dfrac{1}{4!}x^4-\dfrac{1}{6!}x^6...)+a_1(x-\dfrac{1}{3!}x^3+\dfrac{1}{5!}x^5-\dfrac{1}{7!}x^7+...) \end{align} \nonumber which can be expressed as: $\displaystyle{\color{Maroon}y(x)=a_0\sum_{n=0}^{\infty}\dfrac{(-1)^n}{(2n)!}x^{2n}+a_1\sum_{n=0}^{\infty}\dfrac{(-1)^n}{(2n+1)!}x^{2n+1}} \nonumber$ This is the solution of our differential equation. If you check Chapter 3, you will recognize that these sums are the Maclaurin expansions of the functions cosine and sine. This should not surprise you, as the differential equation we just solved can be solved with the techniques we learned in Chapter 5 to obtain: $y(x)=c_1\cos{x}+c_2\sin{x} \nonumber$ Again, we used this example to illustrate the method, but it does not make a lot of sense to use the power series method to solve an ODE that can be solved using easier techniques.
This method is especially useful when the solution of the ODE can only be expressed as a power series.
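As a sanity check, here is a minimal numerical sketch (my own illustration, not part of the original text) that builds the series solution of the equation from Example $1$, $y''+y=0$, directly from the relationship $(n+1)(n+2)a_{n+2}+a_n=0$ obtained above, and compares the partial sum against the known solution $c_1\cos x + c_2\sin x$:

```python
# Illustration: power series solution of y'' + y = 0 from the recursion
# (n+1)(n+2) a_{n+2} + a_n = 0, with freely chosen a_0 and a_1.
import math

def series_solution(a0, a1, x, n_terms=30):
    """Sum a_n x^n, with a_{n+2} = -a_n / ((n+1)(n+2))."""
    a = [a0, a1]
    for n in range(n_terms - 2):
        a.append(-a[n] / ((n + 1) * (n + 2)))
    return sum(an * x**n for n, an in enumerate(a))

x = 1.3
print(series_solution(1.0, 1.0, x))   # truncated series with a_0 = a_1 = 1
print(math.cos(x) + math.sin(x))      # agrees with c_1 cos x + c_2 sin x
```

With 30 terms the two printed numbers agree to many decimal places, which is the numerical counterpart of recognizing the Maclaurin series of cosine and sine.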
So far we have used the power series method to solve equations that can be solved using simpler methods. Let's now turn our attention to differential equations that cannot be solved otherwise. One such example is the Laguerre equation. This differential equation is important in quantum mechanics because it is one of several equations that appear in the quantum mechanical description of the hydrogen atom. The solutions of the Laguerre equation are called the Laguerre polynomials, and together with the solutions of other differential equations, form the functions that describe the orbitals of the hydrogen atom. The Laguerre equation is $xy''+(1-x)y'+ny=0 \nonumber$ where $n=0, 1, 2...$.

Solving the n=0 Laguerre Equation

Here, for simplicity, we will solve the equation for a given value of $n$. That is, instead of solving the equation for a generic value of $n$, we will solve it first for $n=0$, then for $n=1$, and so on. Let's start with $n=0$. The differential equation then becomes: $xy''+y'-xy'=0. \label{Eq1}$

We start by assuming that the solution can be written as: $y(x)=a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4 + \ldots \nonumber$ and therefore the first and second derivatives are: \begin{align*} y'(x) &=a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4+\ldots \\[4pt] y''(x) &=2a_2+2\times 3a_3x+3\times 4a_4x^2+4\times 5a_5x^3+5\times 6a_6x^4 + \ldots \end{align*}

We then plug these expressions in the differential equation (Equation \ref{Eq1}): \begin{align*} xy''+y'-xy' &= 0 \\[4pt] x(2a_2+2\times 3a_3x+3\times 4a_4x^2+4\times 5a_5x^3+5\times 6a_6x^4 + \ldots)+ (a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4+\ldots)-x(a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4 + \ldots) &=0 \\[4pt] (2a_2x+2\times 3a_3x^2+3\times 4a_4x^3+4\times 5a_5x^4+5\times 6a_6x^5 + \ldots)+(a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4 + \ldots)-(a_1x+ 2a_2 x^2+3a_3x^3+4a_4x^4+5a_5x^5 + \ldots)&=0 \end{align*}

We then group the terms in the same power of $x$. However, to avoid writing a long equation, let's organize the information as a small table. Each line lists the terms that multiply a given power of $x$; each of these sums needs to be zero, and that gives us the relationships between the coefficients we need:

$x^0$: $a_1=0$
$x^1$: $2a_2+2a_2-a_1=0 \rightarrow a_2=a_1/4$
$x^2$: $6a_3+3a_3-2a_2=0 \rightarrow a_3=a_2\times2/9$
$x^3$: $12a_4+4a_4-3a_3=0 \rightarrow a_4=a_3\times3/16$
$x^4$: $20a_5+5a_5-4a_4=0 \rightarrow a_5=a_4\times4/25$

The first row tells us that $a_1=0$, and from the other rows, we conclude that all other coefficients with $n>1$ are also zero. Recall that $y(x)=a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4...$, so the solution is simply $y(x)=a_0$ (i.e. the solution is a constant). This solution may be disappointing to you because it is not a function of $x$. Don't worry, we'll get something more interesting in the next example.

Solving the n=1 Laguerre Equation

Let's see what happens when $n=1$. The differential equation becomes $xy''+y'-xy'+y=0. \label{Eq10}$ As always, we start by assuming that the solution can be written as: $y(x)=a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4 + \ldots \nonumber$ and therefore the first and second derivatives are: \begin{align*} y'(x) &=a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4 + \ldots \\[4pt] y''(x) &=2a_2+2\times 3a_3x+3\times 4a_4x^2+4\times 5a_5x^3+5\times 6a_6x^4 + \ldots \end{align*} and then plug these expressions in the differential equation (Equation \ref{Eq10}): \begin{align*} xy''+y'-xy'+y &= 0 \\[4pt] x(2a_2+2\times 3a_3x+3\times 4a_4x^2+4\times 5a_5x^3+5\times 6a_6x^4+\ldots)+(a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4+\ldots)-x(a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4+\ldots)+(a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4+\ldots) &=0 \\[4pt] (2a_2x+2\times 3a_3x^2+3\times 4a_4x^3+4\times 5a_5x^4+5\times 6a_6x^5+\ldots)+(a_1+ 2a_2 x+3a_3x^2+4a_4x^3+5a_5x^4+\ldots)-(a_1x+ 2a_2 x^2+3a_3x^3+4a_4x^4+5a_5x^5+\ldots)+(a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4+\ldots) &=0 \end{align*}

The next step is to group the terms in the same power of $x$. Let's make a table as we did before:

$x^0$: $a_1+a_0=0 \rightarrow a_1=-a_0$
$x^1$: $2a_2+2a_2-a_1+a_1=0 \rightarrow 4a_2=0$
$x^2$: $6a_3+3a_3-2a_2+a_2=0 \rightarrow a_3=a_2\times1/9$
$x^3$: $12a_4+4a_4-3a_3+a_3=0 \rightarrow a_4=a_3\times2/16$
$x^4$: $20a_5+5a_5-4a_4+a_4=0 \rightarrow a_5=a_4\times3/25$

We see that in this case $a_1=-a_0$, and $a_{n>1}=0$. Recall that $y(x)=a_0+a_1 x + a_2 x^2+a_3x^3+a_4x^4... \nonumber$ so the solution is $y(x)=a_0(1-x)$.

In physical chemistry, we define the Laguerre polynomials ($L_n(x)$) as the solutions of the Laguerre equation with $a_0=n!$. This is arbitrary and somewhat field-dependent. You may find other definitions, but we'll stick with $n!$ because it is the one that is more widely used in physical chemistry. With the last two examples we proved that $L_0(x)=1$ and $L_1(x)=1-x$. You'll obtain $L_2(x)$ and $L_3(x)$ in your homework.

6.04: Problems

Problem $1$

Solve the following differential equation $(1-x)y'(x)-y=0 \nonumber$ using 1) the separation of variables method and 2) the power series method, and prove that the two solutions are mathematically equivalent.

Problem $2$

Solve the following differential equation $y''(x)-y(x)=0 \nonumber$ using 1) the method we have learned for second order ODEs with constant coefficients and 2) the power series method, and prove that the two solutions are mathematically equivalent.

Problem $3$

Solve the Laguerre equation with $n=2$ and $n=3$. Write down $L_2(x)$ and $L_3(x)$
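If you want a way to check your answers, grouping powers of $x$ in the general Laguerre equation gives the recursion $(k+1)^2 a_{k+1}+(n-k)a_k=0$; this is my own consolidation of the steps worked above, not a formula from the text, but it reproduces the $n=0$ and $n=1$ cases. A short sketch:

```python
# Sketch (assumption: a_{k+1} = (k - n) a_k / (k+1)^2, obtained by grouping
# powers of x in x y'' + (1 - x) y' + n y = 0). With the text's convention
# a_0 = n!, the series truncates at k = n and yields L_n(x).
from math import factorial

def laguerre_coefficients(n):
    """Return [a_0, ..., a_n] for L_n(x) with a_0 = n!."""
    a = [float(factorial(n))]
    for k in range(n):                      # coefficients vanish past k = n
        a.append((k - n) / (k + 1) ** 2 * a[k])
    return a

print(laguerre_coefficients(0))  # [1.0]          -> L_0(x) = 1
print(laguerre_coefficients(1))  # [1.0, -1.0]    -> L_1(x) = 1 - x
```

Running it with other values of $n$ lets you verify your hand-derived polynomials term by term.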
Chapter Objectives

• Learn how to express periodic functions, identify them as even, odd or neither, and calculate their period.
• Compute the Fourier series of periodic functions.
• Understand the concept of orthogonal expansions and orthonormal functions.

• 7.1: Introduction to Fourier Series If we want to produce a series which will converge rapidly, so that we can truncate it after only a few terms, it is a good idea to choose basis functions that have as much as possible in common with the function to be represented. If we want to represent a periodic function, it is useful to use a basis set containing functions that are periodic themselves, like sines and cosines.
• 7.2: Fourier Series A Fourier series is a linear combination of sine and cosine functions, and it is designed to represent periodic functions.
• 7.3: Orthogonal Expansions The idea of expressing functions as a linear combination of the functions of a given basis set is more general than what we just saw. The sines and cosines are not the only functions we can use, although they are a particularly good choice for periodic functions. There is a fundamental theorem in function theory that states that we can construct any function using a complete set of orthonormal functions.
• 7.4: Problems

07: Fourier Series

In Chapter 3 we learned that a function $f(x)$ can be expressed as a series in powers of $x$ as long as $f(x)$ and all its derivatives are finite at $x=0$. We then extended this idea to powers of $x-h$, and called these series “Taylor series”. If $h=0$, the functions that form the basis set are the powers of $x: x^0, x^1, x^2...$, and in the more general case of $h\neq0$, the basis functions are $(x-h)^0, (x-h)^1, (x-h)^2...$

The powers of $x$ or $(x-h)$ are not the only choice of basis functions to expand a function in terms of a series. In fact, if we want to produce a series which will converge rapidly, so that we can truncate it after only a few terms, it is a good idea to choose basis functions that have as much as possible in common with the function to be represented. If we want to represent a periodic function, it is useful to use a basis set containing functions that are periodic themselves. For example, consider the following set of functions: $\sin{(nx)},\;n=1, 2, ..., \infty$. We can mix a finite number of these functions to produce a periodic function like the one shown in the left panel of Figure $2$, or an infinite number of functions to produce a periodic function like the one shown on the right. Notice that an infinite number of sine functions creates a function with straight lines! We will see that we can create all kinds of periodic functions by just changing the coefficients (i.e. the numbers multiplying each sine function).

So far everything sounds fine, but we have a problem. The functions $\sin{nx}$ are all odd, and therefore any linear combination will produce an odd periodic function. We might need to represent an even function, or a function that is neither odd nor even. This tells us that we need to expand our basis set to include even functions, and I hope you will agree the obvious choice is the cosine functions $\cos{(nx)}$. Below are two examples of even periodic functions that are produced by mixing a finite (left) or infinite (right) number of cosine functions. Notice that both are even functions.

Before moving on, we need to review a few concepts. First, since we will be dealing with periodic functions, we need to define the period of a function.
As we saw in Section 1.4, a function $f(x)$ is said to be periodic with period $P$ if $f(x)=f(x+P)$. For example, the period of the function of Figure $4$ is $2\pi$. How do we write the equation for this periodic function? We just need to specify the equation of the function between $-P/2$ and $P/2$. This range is shown as a red dotted line in Figure $4$, and as you can see, it has the width of a period and it is centered around $x=0$. If we have this information, we just need to extend the function to the left and to the right to create the periodic function.
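To make the idea of "mixing" sine functions concrete, here is a small numerical illustration (my own; the coefficients $1/n$ are an arbitrary choice): adding more terms of $\sum_n \sin(nx)/n$ produces a function that stays periodic but develops increasingly straight, sawtooth-like segments.

```python
# Illustration: partial sums of sum_{n=1}^{N} sin(n x) / n at a few points.
# As N grows, the mixture approaches a sawtooth-shaped periodic function.
import numpy as np

x = np.linspace(-np.pi, np.pi, 7)            # a few sample points
for n_terms in (3, 50):
    f = sum(np.sin(n * x) / n for n in range(1, n_terms + 1))
    print(n_terms, np.round(f, 3))
```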
A Fourier series is a linear combination of sine and cosine functions, and it is designed to represent periodic functions: $\label{eq:fourier} f(x)=\dfrac{a_0}{2}+\sum_{n=1}^{\infty}a_n \cos\left ( \dfrac{n\pi x}{L} \right )+\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right )$ The coefficients $a_0, a_1,a_2...a_n$ and $b_1, b_2....b_n$ are constants. It is important to notice that the period of the sine and cosine functions in Equation \ref{eq:fourier} is $P=2L/n$ (see Section 1.4). This means that we will be mixing sines and cosines of periods $2L$, $2L/2$, $2L/3$, $2L/4$, etc. As we will see, this linear combination will result in a periodic function of period $P = 2L$.

In addition, we need only the sine terms (which are odd functions) to represent an odd periodic function, so in this case all the $a_n$ coefficients (including $a_0$) will be zero. We need only the cosine terms (which are even functions) to represent an even function, so in this case all the $b_n$ coefficients will be zero. Why don't we have a $b_0$ term? This is because $\sin{(0)}=0$. In the case of the cosine terms, the $n=0$ term is separated from the sum, but it does not vanish because $\cos{(0)}\neq0$. This means that an odd periodic function with period $P=2L$ will be in general: $f(x)= b_1 \sin{\left(\dfrac{\pi x}{L}\right)}+b_2 \sin{\left(\dfrac{2\pi x}{L}\right)}+b_3 \sin{\left(\dfrac{3\pi x}{L}\right)}... \nonumber$

Let's say we want to construct an odd periodic function of period $P=2\pi$. Because the period is $2L$, this means that $L=\pi$: $f(x)= b_1 \sin{\left(x\right)}+b_2 \sin{\left(2x\right)}+b_3 \sin{\left(3x\right)}... \nonumber$ We in fact already saw an example like this in Figure $7.1.2$ (right). This periodic function, which is constructed using $b_n=1/n$, has a period of $2\pi$ as we just predicted. Let's see other examples with different coefficients. Notice that we are mixing the functions $\sin{\left(x\right)}, \sin{\left(2x\right)},\sin{\left(3x\right)}...$ using different coefficients, and we always create a periodic function with period $P=2\pi$.

Coming back to Equation \ref{eq:fourier}, we know that different coefficients will create different periodic functions, but they will all have a period $2L$. The obvious question now is how to calculate the coefficients that will create the function we want. Let's say that the periodic function is constructed by a periodic extension of the function $f(x)$, which is defined in the interval $[-L,L]$. One example would be the function of Figure $7.1.5$, which is defined in the interval $[-\pi,\pi]$. If we create the periodic extension of this function, we will create a periodic function with period $2\pi$. Analogously, by creating a periodic extension of a function defined in the interval $[-L,L]$ we will create a periodic function with period $2L$.

The coefficients of Equation \ref{eq:fourier} are calculated as follows: $\label{ao} a_0=\dfrac{1}{L}\int_{-L}^{L}f(x)dx$ $\label{an} a_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\cos{\left(\dfrac{n\pi x}{L} \right)}dx$ $\label{bn} b_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\sin{\left(\dfrac{n\pi x}{L} \right)}dx$ Notice that Equation \ref{ao} is a special case of Equation \ref{an}, and that we don't have a coefficient $b_0$ because $\sin{(0)}=0$. Because Equation \ref{eq:fourier} represents a periodic function with period $2L$, the integration is performed over one period centered at zero (that is, $L$ is half the period).
Alternative Formulation

Equation \ref{eq:fourier} is often written as: $\label{eq:fourier_alt} f(x)=a_0+\sum_{n=1}^{\infty}a_n \cos\left ( \dfrac{n\pi x}{L} \right )+\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right )$ If we choose to do this, we of course need to re-define the coefficient $a_0$ as: $a_0=\dfrac{1}{2L}\int_{-L}^{L}f(x)dx. \nonumber$ Both versions give of course the same series, and whether you choose one or the other is a matter of taste. You may see the two versions in different textbooks, so don't get confused!

Example $1$

Obtain the Fourier series of the periodic function represented in the figure.

Solution

$y(x)$ is a periodic function with period $P=2$. It can be constructed by the periodic extension of the function $f(x)=2x$, defined in the interval $[-1,1]$. Notice that this interval has a width equal to the period, and it is centered at zero. Because $y(x)$ is odd, we will not bother calculating the coefficients $a_n$. We could, but we would obtain zero for all of them. Equation \ref{eq:fourier}, therefore, reduces to: $y(x)=\sum\limits_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right ) \nonumber$

From Equation \ref{bn}, the coefficients $b_n$ are calculated as: $b_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\sin{\left(\dfrac{n\pi x}{L} \right)}dx \nonumber$ and in this case, because $L=1$ (half the period), $b_n=\int_{-1}^{1}(2x)\sin{\left(n\pi x \right)}dx=2\int_{-1}^{1}x\sin{\left(n\pi x \right)}dx \nonumber$

A primitive of $x\sin{\left(a x \right)}$ is $\dfrac{\sin{(ax)}}{a^2}-\dfrac{x \cos{(ax)}}{a}$ (see formula sheet), so $b_n=2\int_{-1}^{1}x\sin{\left(n\pi x \right)}dx=2\left[\dfrac{\sin{(n \pi)}}{(n\pi)^2}-\dfrac{\cos{(n\pi)}}{n \pi}-\left(\dfrac{\sin{(n \pi (-1))}}{(n\pi)^2}-\dfrac{(-1) \cos{(n\pi (-1))}}{n \pi}\right)\right] \nonumber$ Using the fact that $\sin{(n\pi)}$ is zero and $\cos{x}$ is an even function: $b_n=-4\dfrac{ \cos{(n\pi)}}{n \pi} \nonumber$

Let's write a few terms in a table:

$n=1$: $\cos{(n\pi)}=-1$, $b_n=\dfrac{4}{\pi}$
$n=2$: $\cos{(n\pi)}=1$, $b_n=-\dfrac{4}{2\pi}$
$n=3$: $\cos{(n\pi)}=-1$, $b_n=\dfrac{4}{3\pi}$
$n=4$: $\cos{(n\pi)}=1$, $b_n=-\dfrac{4}{4\pi}$
$n=5$: $\cos{(n\pi)}=-1$, $b_n=\dfrac{4}{5\pi}$

A general expression for $b_n$ is: $b_n=4 \dfrac{(-1)^{n+1}}{n\pi} \nonumber$ The series $y(x)=\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right ) \nonumber$ is then: $\label{eq:sawtooth} \displaystyle{\color{Maroon}y(x)=\dfrac{4}{\pi}\sum_{n=1}^{\infty}\dfrac{(-1)^{n+1}}{n} \sin\left (n\pi x \right )}$

As in the case of a Taylor series, Equation \ref{eq:sawtooth} is exact if we include the infinite terms of the series. If we truncate the series using a finite number of terms, we will create an approximation. Figure $1$ shows an example with 1, 2, 3 and 8 terms.

Example $2$

Obtain the Fourier series of the square wave formed by the periodic extension of the function: $f(x)=\left\{\begin{matrix}0 & -\pi\leq x\leq 0 \\ 1 &0<x\leq \pi \end{matrix}\right. \nonumber$

Solution

The periodic extension of the function $f(x)$ produces a periodic function with period $2\pi$. Strictly speaking, the resulting periodic function is neither even nor odd, so we would need to calculate all the coefficients. However, you may notice that the function would be odd if we were to subtract 1/2 from all points. In other words, the periodic function we are looking for will be a constant ($a_0$) plus an odd periodic function (sine series). We will calculate the constant, but from this discussion it should be obvious that we will get $a_0/2=1/2$.
We will also calculate the rest of the $a_n$ coefficients, but we now know they will all be zero. The first coefficient, $a_0$, is (Equation \ref{ao}): $a_0=\dfrac{1}{L}\int_{-L}^{L}f(x)dx \nonumber$ Here, $L=\pi$ (half the period), so: $a_0=\dfrac{1}{\pi}\int_{-\pi}^{\pi}f(x)dx=\dfrac{1}{\pi}\int_{0}^{\pi}1dx=1 \nonumber$ where we have used the fact that $f(x)=0$ in the interval $-\pi<x<0$.

The coefficients $a_n$ are (Equation \ref{an}): $a_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\cos{\left(\dfrac{n\pi x}{L} \right)}dx=\dfrac{1}{\pi}\int_{0}^{\pi}\cos{\left(n x \right)}dx=\dfrac{1}{\pi}\left[\dfrac{\sin{(nx)}}{n}\right]_0^\pi=\dfrac{\sin{(n\pi)}}{n\pi}=0 \nonumber$

The coefficients $b_n$ are (Equation \ref{bn}): $b_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\sin{\left(\dfrac{n\pi x}{L} \right)}dx=\dfrac{1}{\pi}\int_{0}^{\pi}\sin{\left(n x \right)}dx=\dfrac{1}{\pi}\left[-\dfrac{\cos{(nx)}}{n}\right]_0^\pi=-\dfrac{1}{n\pi}(\cos{(n\pi)}-\cos{(0)})=\dfrac{1-\cos{(n\pi)}}{n\pi} \nonumber$

Let's see a few terms in a table:

$n=1$: $\cos{(n\pi)}=-1$, $b_n=\dfrac{2}{\pi}$
$n=2$: $\cos{(n\pi)}=1$, $b_n=0$
$n=3$: $\cos{(n\pi)}=-1$, $b_n=\dfrac{2}{3\pi}$
$n=4$: $\cos{(n\pi)}=1$, $b_n=0$
$n=5$: $\cos{(n\pi)}=-1$, $b_n=\dfrac{2}{5\pi}$
$n=6$: $\cos{(n\pi)}=1$, $b_n=0$

The series is (Equation \ref{eq:fourier}): $f(x)=\dfrac{a_0}{2}+\sum_{n=1}^{\infty}a_n \cos\left( \dfrac{n\pi x}{L} \right)+\sum_{n=1}^{\infty}b_n \sin\left( \dfrac{n\pi x}{L} \right) \nonumber$ and with the coefficients we got we can write: $f(x)=\dfrac{1}{2}+\dfrac{2}{\pi}\sin{(x)}+\dfrac{2}{3\pi}\sin{(3x)}+\dfrac{2}{5\pi}\sin{(5x)}+... \nonumber$ or more elegantly: $\displaystyle{\color{Maroon}f(x)=\dfrac{1}{2}+\dfrac{2}{\pi} \sum_{n=0}^{\infty}\dfrac{1}{2n+1}\sin{[(2n+1)x]}} \nonumber$

Notice that, as expected, we have a sine series (which represents an odd periodic function) plus a constant (which 'pushes' the function up).
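Both examples can be checked symbolically. The sketch below (an illustration of mine, not part of the original solutions) evaluates the coefficient integrals of Equations \ref{an} and \ref{bn} with sympy:

```python
# Verification sketch: Fourier coefficients of Examples 1 and 2 via sympy.
import sympy as sp

x = sp.Symbol('x', real=True)
n = sp.Symbol('n', integer=True, positive=True)

# Example 1: f(x) = 2x on [-1, 1], L = 1
b_saw = sp.integrate(2 * x * sp.sin(n * sp.pi * x), (x, -1, 1))
print(sp.simplify(b_saw))   # expect -4*cos(pi*n)/(pi*n) = 4*(-1)**(n+1)/(pi*n)

# Example 2: square wave, L = pi; f = 1 on (0, pi) and 0 on (-pi, 0)
a0 = sp.integrate(sp.Integer(1), (x, 0, sp.pi)) / sp.pi
b_sq = sp.integrate(sp.sin(n * x), (x, 0, sp.pi)) / sp.pi
print(a0, sp.simplify(b_sq))  # expect 1 and (1 - cos(pi*n))/(pi*n)
```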
Note: As stated in Section 7.2, the coefficients of Equation \ref{eq:fourier} are defined as follows: $\label{ao} a_0=\dfrac{1}{L}\int_{-L}^{L}f(x)dx$ $\label{an} a_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\cos{\left(\dfrac{n\pi x}{L} \right)}dx$ $\label{bn} b_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\sin{\left(\dfrac{n\pi x}{L} \right)}dx$

The idea of expressing functions as a linear combination of the functions of a given basis set is more general than what we just saw. The sines and cosines are not the only functions we can use, although they are a particularly good choice for periodic functions. There is a fundamental theorem in function theory that states that we can construct any function using a complete set of orthonormal functions. The term orthonormal means that each function in the set is normalized, and that all functions of the set are mutually orthogonal. For a function in one dimension, the normalization condition is: $\label{eq:fourier_normalization} \int_{-\infty }^{\infty }{\left | f (x) \right |}^2\; dx=1$ Two functions $f(x)$ and $g(x)$ are said to be orthogonal if: $\label{eq:fourier_orthogonal} \int_{-\infty }^{\infty }{f (x) g^*(x) }\; dx=0$

The idea that you can construct a function with a linear combination of orthonormal functions is analogous to the idea of constructing a vector in three dimensions by combining the vectors $\vec{v_1}=(1,0,0), \vec{v_2}=(0,1,0),\vec{v_3}=(0,0,1)$, which as we all know are mutually orthogonal and have unit length. The basis set we use to construct a Fourier series is $\left\{1, \sin{\left(\frac{\pi}{L} x\right)}, \cos{\left(\frac{\pi}{L} x\right)}, \sin{\left(2\frac{\pi}{L} x\right)}, \cos{\left(2\frac{\pi}{L} x\right)}, \sin{\left(3\frac{\pi}{L} x\right)}, \cos{\left(3\frac{\pi}{L} x\right)},...\right\} \nonumber$

We will prove that these functions are mutually orthogonal in the interval $[0,2L]$ (one period). For example, let's prove that $\sin{\left(\frac{n\pi x}{L}\right)}$ and $1$ are orthogonal: $\int \sin\left (\frac{n\pi x}{L} \right )dx=-\frac{L}{n\pi}\cos\left (\frac{n\pi x}{L} \right ) \nonumber$ $\int_{0}^{2L} \sin\left (\frac{n\pi x}{L} \right )dx=-\frac{L}{n\pi}\cos\left (2n\pi \right )+\frac{L}{n\pi}\cos(0)=\frac{L}{n\pi}\left ( 1-\cos(2n\pi) \right )=0 \nonumber$

We can also prove that $\sin{\left(\frac{n\pi x}{L}\right)}$ is orthogonal to $\cos{\left(\frac{n\pi x}{L}\right)}$: $\int \sin\left (\frac{n\pi x}{L} \right ) \cos\left (\frac{n\pi x}{L} \right )dx=-\frac{L}{4n\pi}\cos\left (\frac{2n\pi x}{L} \right ) \nonumber$ $\int_{0}^{2L} \sin\left (\frac{n\pi x}{L} \right ) \cos\left (\frac{n\pi x}{L} \right )dx=-\frac{L}{4n\pi}\cos\left (4n\pi\right )+\frac{L}{4n\pi}\cos (0)=0 \nonumber$

Following the same procedure, we can also prove that $\int_{0}^{2L} \sin\left (\frac{n\pi x}{L} \right ) \sin\left (\frac{m\pi x}{L} \right )dx=0\;\;(n\neq m) \nonumber$ $\int_{0}^{2L} \cos\left (\frac{n\pi x}{L} \right ) \cos\left (\frac{m\pi x}{L} \right )dx=0\;\;(n\neq m) \nonumber$ and, with a similar calculation, that $\sin{\left(\frac{n\pi x}{L}\right)}$ and $\cos{\left(\frac{m\pi x}{L}\right)}$ with $n\neq m$ are orthogonal as well. The functions used in a Fourier series are therefore mutually orthogonal. Are they normalized? $\int_{0}^{2L} \sin^2\left (\frac{n\pi x}{L} \right )dx=L \nonumber$ $\int_{0}^{2L} \cos^2\left (\frac{n\pi x}{L} \right )dx=L \nonumber$ $\int_{0}^{2L} 1^2\;dx=2L \nonumber$ They are not!
The functions $\frac{1}{\sqrt{2L}}$, $\frac{1}{\sqrt{L}}\sin{\left(\frac{\pi}{L} x\right)}$ and $\frac{1}{\sqrt{L}}\cos{\left(\frac{\pi}{L} x\right)}$ are normalized, so we may argue that our orthonormal set should be: $\left\{\frac{1}{\sqrt{2L}},\frac{1}{\sqrt{L}} \sin{\left(\frac{\pi}{L} x\right)},\frac{1}{\sqrt{L}} \cos{\left(\frac{\pi}{L} x\right)},\frac{1}{\sqrt{L}} \sin{\left(2\frac{\pi}{L} x\right)}, \frac{1}{\sqrt{L}}\cos{\left(2\frac{\pi}{L} x\right)}, ...\right\} \nonumber$ and the series should be written as: $\label{eq:fourier2} f(x)=c_0\frac{1}{\sqrt{2L}}+\frac{1}{\sqrt{L}}\sum_{n=1}^{\infty}c_n \cos\left ( \frac{n\pi x}{L} \right )+\frac{1}{\sqrt{L}}\sum_{n=1}^{\infty}d_n \sin\left ( \frac{n\pi x}{L} \right )$ where we used the letters $c$ and $d$ to distinguish these coefficients from the ones defined in Equations \ref{ao}, \ref{an} and \ref{bn}. However, if we compare this expression to Equation \ref{eq:fourier}: $f(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n \cos\left ( \frac{n\pi x}{L} \right )+\sum_{n=1}^{\infty}b_n \sin\left ( \frac{n\pi x}{L} \right ) \nonumber$ we see that it is just a matter of how we define the coefficients. The coefficients $a_n$ and $b_n$ in Equation \ref{eq:fourier} equal the coefficients $c_n$ and $d_n$ in Equation \ref{eq:fourier2} divided by $\sqrt{L}$ (and the constant term is rescaled in the same way). In other words, the coefficients in Equation \ref{eq:fourier} already contain the normalization constants (look at Equations \ref{ao}, \ref{an} and \ref{bn}), so we can write the sines and cosines without writing a factor $1/\sqrt{L}$ every single time.

In conclusion, the set $\left\{1, \sin{\left(\frac{\pi}{L} x\right)}, \cos{\left(\frac{\pi}{L} x\right)}, \sin{\left(2\frac{\pi}{L} x\right)}, \cos{\left(2\frac{\pi}{L} x\right)}, \sin{\left(3\frac{\pi}{L} x\right)}, \cos{\left(3\frac{\pi}{L} x\right)},...\right\} \nonumber$ is not strictly orthonormal the way it is written, but it becomes orthonormal once we absorb the appropriate normalization constants into the coefficients. Therefore, the cosines and sines form a complete set that allows us to express any other function using a linear combination of its members. There are other orthonormal sets that are used in quantum mechanics to express a variety of functions. Just remember that we can construct any function using a complete set of orthonormal functions.

We can construct any function using a complete set of orthonormal functions.

7.04: Problems

Note: You will use some of these results in Chapter 12. Keep a copy of your work handy so you can use it again when needed.

Problem $1$

Consider the following periodic function:
• Is the function odd, even or neither?
• Calculate all the coefficients of the Fourier series of the function by hand (i.e. not in Mathematica). Express the function as a Fourier series.
• In the lab: Use the Manipulate function in Mathematica to plot the Fourier series. Observe how the finite sum gets closer to the actual triangular wave as you increase the upper bound of the sum.

Problem $2$

Consider the periodic function formed by the periodic extension of: $f(x)=\left\{\begin{matrix}-1/2 & -1\leq x\leq 0 \\ 1/2 &0<x \leq 1 \end{matrix}\right. \nonumber$
• Is the function odd, even or neither?
• Calculate all the coefficients of the Fourier series of the function by hand (i.e. not in Mathematica). Express the function as a Fourier series.
• In the lab: Use the Manipulate function in Mathematica to plot the Fourier series. Observe how the finite sum gets closer to the actual square wave as you increase the upper bound of the sum.
Problem $3$

The following functions are encountered in quantum mechanics: $\Phi _m(\phi)=\frac{1}{\sqrt{2 \pi}}e^{im\phi},\;m=0, \pm 1, \pm2,\pm3...\;\text{and}\;0\leq\phi\leq 2\pi \nonumber$ Prove that these functions are all normalized, and that any two functions of the set are mutually orthogonal. Hint: Consider the cases $m=0$ and $m\neq0$ separately, and remember that $e^{im\phi}=1$ when $m=0$. Don't forget to take into account the complex conjugate in the normalization condition! Hint 2: Check Chapter 2. You may have already solved this problem before!
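As a complement to the analytic proofs of Section 7.3, here is a quick numerical spot-check (illustration only; the value of $L$ is an arbitrary choice of mine) of the orthogonality and normalization integrals over one period:

```python
# Numerical check of the Section 7.3 integrals over one period [0, 2L]:
# distinct basis functions are orthogonal, and the squared norms are
# L (sines and cosines) and 2L (the constant function).
import numpy as np
from scipy.integrate import quad

L = 2.0  # arbitrary half-period
sin_n = lambda n: (lambda x: np.sin(n * np.pi * x / L))
cos_n = lambda n: (lambda x: np.cos(n * np.pi * x / L))

print(quad(lambda x: sin_n(1)(x) * cos_n(1)(x), 0, 2 * L)[0])  # ~0
print(quad(lambda x: sin_n(1)(x) * sin_n(2)(x), 0, 2 * L)[0])  # ~0
print(quad(lambda x: sin_n(3)(x) ** 2, 0, 2 * L)[0])           # ~L, not 1
print(quad(lambda x: 1.0, 0, 2 * L)[0])                        # 2L
```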
Chapter Objectives

• Review the concept of partial derivative.
• Review the properties of partial derivatives.
• Be able to use the properties of partial derivatives in the context of physical chemistry problems.
• Review the concept of double and triple integrals.
• Learn the concept of equation of state. Understand the concept of a van der Waals gas from the molecular point of view.
• Learn about phase transitions and critical phenomena.

• 8.1: Functions of Two Independent Variables A function of two independent variables, z=f(x,y), defines a surface in three-dimensional space. For a function of two or more variables, there are as many independent first derivatives as there are independent variables.
• 8.2: The Equation of State An equation of state is an expression relating the density of a fluid with its temperature and pressure. Note that the density is related to the number of moles and the volume, so it takes care of these two variables together. There is no single equation of state that predicts the behavior of all substances under all conditions.
• 8.3: The Chain Rule The chain rule allows us to create these 'universal' relationships between the derivatives of different coordinate systems.
• 8.4: Double and Triple Integrals We can extend the idea of a definite integral to more dimensions.
• 8.5: Real Gases We have already mentioned some thermodynamic variables, but in order to make more connections between chemistry and math we need to introduce some concepts that we need to start discussing real gases.
• 8.6: Problems

Thumbnail: Surface $Σ$ with closed boundary $∂Σ$. $\vec{F}$ could be the $\vec{E}$ or $\vec{B}$ fields. $n$ is the unit normal. (Public Domain; Maschen).

08: Calculus in More than One Variable

A (real) function of one variable, $y = f(x)$, defines a curve in the plane. The first derivative of a function of one variable can be interpreted graphically as the slope of a tangent line, and dynamically as the rate of change of the function with respect to the variable (Figure $1$). A function of two independent variables, $z=f (x,y)$, defines a surface in three-dimensional space. For a function of two or more variables, there are as many independent first derivatives as there are independent variables. For example, we can differentiate the function $z=f (x,y)$ with respect to $x$ keeping $y$ constant. This derivative represents the slope of the tangent line shown in Figure $2 \text{A}$. We can also take the derivative with respect to $y$ keeping $x$ constant, as shown in Figure $2 \text{B}$.

For example, let's consider the function $z=3x^2-y^2+2xy$. We can take the derivative of this function with respect to $x$ treating $y$ as a constant. The result is $6x+2y$. This is the partial derivative of the function with respect to $x$, and it is written: $\left (\frac{\partial z}{\partial x} \right )_y=6x+2y \nonumber$ where the small subscripts indicate which variables are held constant. Analogously, the partial derivative of $z$ with respect to $y$ is: $\left (\frac{\partial z}{\partial y} \right )_x=2x-2y \nonumber$ We can extend these ideas to functions of more than two variables. For example, consider the function $f(x,y,z)=x^2y/z$.
We can differentiate the function with respect to $x$ keeping $y$ and $z$ constant to obtain: $\left (\frac{\partial f}{\partial x} \right )_{y,z}=2x\frac{y}{z} \nonumber$ We can also differentiate the function with respect to $z$ keeping $x$ and $y$ constant: $\left (\frac{\partial f}{\partial z} \right )_{x,y}=-x^2y/z^2 \nonumber$ and differentiate the function with respect to $y$ keeping $x$ and $z$ constant: $\left (\frac{\partial f}{\partial y} \right )_{x,z}=\frac{x^2}{z} \nonumber$

Functions of two or more variables can be differentiated partially more than once with respect to either variable while holding the other constant to yield second and higher derivatives. For example, the function $z=3x^2-y^2+2xy$ can be differentiated with respect to $x$ two times to obtain: $\left ( \frac{\partial }{\partial x}\left ( \frac{\partial z}{\partial x} \right )_{y} \right )_{y}=\left ( \frac{\partial ^2z}{\partial x^2} \right )_{y}=6 \nonumber$ We can also differentiate with respect to $x$ first and $y$ second: $\left ( \frac{\partial }{\partial y}\left ( \frac{\partial f}{\partial x} \right )_{y} \right )_{x}=\left ( \frac{\partial ^2f}{\partial y \partial x} \right )=2 \nonumber$

If a function of two or more variables and its derivatives are single-valued and continuous, a property normally attributed to physical variables, then the mixed partial second derivatives are equal (Euler reciprocity): $\label{c2v:euler reciprocity} \left ( \frac{\partial ^2f}{\partial x \partial y} \right )=\left ( \frac{\partial ^2f}{\partial y \partial x} \right )$ For example, for $z=3x^2-y^2+2xy$: $\left ( \frac{\partial ^2f}{\partial y \partial x} \right )=\left ( \frac{\partial }{\partial y}\left ( \frac{\partial f}{\partial x} \right )_{y} \right )_{x}=\left ( \frac{\partial }{\partial y}\left ( 6x+2y\right ) \right )_{x}=2 \nonumber$ $\left ( \frac{\partial ^2f}{\partial x \partial y} \right )=\left ( \frac{\partial }{\partial x}\left ( \frac{\partial f}{\partial y} \right )_{x} \right )_{y}=\left ( \frac{\partial }{\partial x}\left ( -2y+2x\right ) \right )_{y}=2 \nonumber$

Another useful property of the partial derivatives is the so-called reciprocal identity, which holds when the same variables are held constant in the two derivatives: $\label{c2v:inverse} \left ( \frac{\partial y}{\partial x} \right )=\frac{1}{\left ( \frac{\partial x}{\partial y} \right )}$ For example, for $z=x^2y$: $\left ( \frac{\partial z}{\partial x} \right )_y=\left ( \frac{\partial }{\partial x} x^2y\right )_y=2xy \nonumber$ $\left ( \frac{\partial x}{\partial z} \right )_y=\left ( \frac{\partial }{\partial z} \sqrt{z/y} \right )_y=\frac{1}{2y} (z/y)^{-1/2}=\frac{1}{2xy}=\frac{1}{\left ( \frac{\partial z}{\partial x} \right )_y} \nonumber$

Finally, let's mention the cycle rule.
For a function $z(x,y)$: $\label{c2v:cycle} \left ( \frac{\partial y}{\partial x} \right )_z\left ( \frac{\partial x}{\partial z} \right )_y\left ( \frac{\partial z}{\partial y} \right )_x=-1$ We can construct other versions of the cycle rule by permuting the roles of the three variables. For example, for $z=x^2y$: $\left ( \frac{\partial y}{\partial x} \right )_z=\left ( \frac{\partial }{\partial x} (z/x^2)\right )_z=-2z/x^3 \nonumber$ $\left ( \frac{\partial x}{\partial z} \right )_y=\left ( \frac{\partial }{\partial z} \sqrt{z/y}\right )_y=\frac{1}{2y} (z/y)^{-1/2} \nonumber$ $\left ( \frac{\partial z}{\partial y} \right )_x=\left ( \frac{\partial }{\partial y} x^2y\right )_x=x^2 \nonumber$ $\left ( \frac{\partial y}{\partial x} \right )_z\left ( \frac{\partial x}{\partial z} \right )_y\left ( \frac{\partial z}{\partial y} \right )_x=-\frac{2z}{x^3}\frac{1}{2y} \left(\frac{y}{z}\right)^{1/2}x^2=-\left(\frac{z}{y}\right)^{1/2}\frac{1}{x}=-\left(\frac{x^2y}{y}\right)^{1/2}\frac{1}{x}=-1 \nonumber$

Before discussing partial derivatives any further, let's introduce a few physicochemical concepts to put our discussion in context.
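Before we do, here is a minimal sympy sketch (my own, not part of the text) that verifies Euler reciprocity and the cycle rule for the examples worked above:

```python
# Verification sketch for the examples above (assumes x, y, z > 0 so that
# x = sqrt(z/y) is the relevant branch).
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Euler reciprocity for f(x, y) = 3x^2 - y^2 + 2xy: both mixed partials are 2
f = 3 * x**2 - y**2 + 2 * x * y
print(sp.diff(f, x, y), sp.diff(f, y, x))

# Cycle rule for z = x^2 y, using the explicit expression for each variable
z = sp.Symbol('z', positive=True)
y_xz = z / x**2          # y(x, z)
x_yz = sp.sqrt(z / y)    # x(y, z), positive branch
z_xy = x**2 * y          # z(x, y)

product = sp.diff(y_xz, x) * sp.diff(x_yz, z) * sp.diff(z_xy, y)
print(sp.simplify(product.subs(z, x**2 * y)))   # -1
```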
Note: From the last section, the cycle rule is defined as follows: $\label{c2v:cycle} \left ( \frac{\partial y}{\partial x} \right )_z\left ( \frac{\partial x}{\partial z} \right )_y\left ( \frac{\partial z}{\partial y} \right )_x=-1$

The thermodynamic state of a system, such as a fluid, is defined by specifying a set of measurable properties sufficient so that all remaining properties are determined. For example, if you have a container full of a gas, you may specify the pressure, temperature and number of moles, and this should be sufficient for you to calculate other properties such as the density and the volume. In other words, the temperature ($T$), number of moles ($n$), volume ($V$) and pressure ($P$) are not all independent variables.

Ideal Gas Equation of State

To make sense of this statement, let's consider an ideal gas. You know from your introductory chemistry courses that temperature, number of moles, volume and pressure are related through a universal constant $R$: $\label{c2v:ideal} P=\dfrac{nRT}{V}$ If $P$ is expressed in atmospheres, $V$ in liters, and $T$ in Kelvin, then $R=0.082 \, \frac{L\times atm}{K \times mol}$. This expression tells you that the four variables cannot be changed independently. If you know three of them, you also know the fourth.

Equation \ref{c2v:ideal} is one particular case of what is known as an equation of state. An equation of state is an expression relating the density of a fluid with its temperature and pressure. Note that the density is related to the number of moles and the volume, so it takes care of these two variables together. There is no single equation of state that predicts the behavior of all substances under all conditions. Equation \ref{c2v:ideal}, for example, is a good approximation for non polar gases at low densities (low pressures and high temperatures). Other more sophisticated equations are better suited to describe other systems in other conditions, but there is no universal equation of state. In general, for a simple fluid, an equation of state will be a relationship between $P$ and the variables $T$, $V$ and $n$: $P=P(T,V,n)=P(T,V_m), \nonumber$ where $V_m$ is the molar volume, $V/n$. The molar volume is sometimes written as $\bar{V}$. For example, Equation \ref{c2v:ideal} can be rewritten as $P=\dfrac{RT}{\bar{V}}. \nonumber$

Let's 'play' with the equation of state for an ideal gas. The partial derivative $\left ( \dfrac{\partial P}{\partial T} \right )_{V,n}$ represents how the pressure changes as we change the temperature of the container at constant volume and constant $n$: $\left ( \dfrac{\partial P}{\partial T} \right )_{V,n}=\dfrac{nR}{V} \nonumber$ It is a relief that the derivative is positive, because we know that an increase in temperature causes an increase in pressure! This also tells us that if we increase the temperature by a small amount, the increase in pressure will be larger in a small container than in a large container.

The partial derivative $\left ( \dfrac{\partial P}{\partial V} \right )_{T,n}$ represents how the pressure changes as we change the volume of the container at constant temperature and constant $n$: $\left ( \dfrac{\partial P}{\partial V} \right )_{T,n}=-\dfrac{nRT}{V^2} \nonumber$ Again, we are happy to see the derivative is negative. If we increase the volume we should see a decrease in pressure as long as the temperature is held constant. This is not too different from squeezing a balloon (don't try this at home!).
We can also write an equation that represents how the volume changes with a change in pressure: $\left ( \dfrac{\partial V}{\partial P} \right )_{T,n}$. From Equation \ref{c2v:ideal}, $V=\dfrac{nRT}{P} \nonumber$ and therefore: $\left ( \dfrac{\partial V}{\partial P} \right )_{T,n}=-\dfrac{nRT}{P^2} \nonumber$

Let's compare these two derivatives: $\left ( \dfrac{\partial V}{\partial P} \right )_{T,n}=-\dfrac{nRT}{P^2}=-\dfrac{nRT}{(nRT/V)^2}=-\dfrac{V^2}{nRT}=\dfrac{1}{\left ( \dfrac{\partial P}{\partial V} \right )_{T,n}} \nonumber$ Surprised? You shouldn't be, based on the inverse rule (Equation \ref{c2v:inverse})! Note that this works because we hold the same variables constant in both cases. Now, you may argue that the inverse rule is not particularly useful because it doesn't take a lot of work to solve for $V$ and perform $\left ( \dfrac{\partial V}{\partial P} \right )_{T,n}$.

Dieterici's Equation of State

Let's consider a more complex equation of state known as Dieterici's equation of state for a real gas: $P=\dfrac{RT}{\bar{V}-b}e^{-a/(R\bar{V}T)} \nonumber$ Here, $a$ and $b$ are constants that depend on the particular gas (e.g. whether we are considering $\ce{H2}$ or $\ce{CO2}$). Let's say you are asked to obtain $\left ( \dfrac{\partial V}{\partial P} \right )_{T,n}$. What do you do? Do you find the inverse rule useful now?

Let's go back to the ideal gas, and calculate other partial derivatives: \begin{align*} \left ( \dfrac{\partial P}{\partial V} \right )_{T,n} &= -\dfrac{nRT}{V^2} \\[4pt] \left ( \dfrac{\partial V}{\partial T} \right )_{P,n}&=\dfrac{nR}{P} \\[4pt] \left ( \dfrac{\partial T}{\partial P} \right )_{V,n}&=\dfrac{V}{nR} \end{align*} Let's calculate the product: $\left ( \dfrac{\partial P}{\partial V} \right )_{T,n}\left ( \dfrac{\partial V}{\partial T} \right )_{P,n}\left ( \dfrac{\partial T}{\partial P} \right )_{V,n}=-\dfrac{nRT}{V^2}\dfrac{nR}{P}\dfrac{V}{nR}=-\dfrac{nRT}{VP}=-1 \nonumber$ In the last step, we used Equation \ref{c2v:ideal}. Surprised? You shouldn't be, based on the cycle rule (Equation \ref{c2v:cycle})! Again, this is not particularly useful for an ideal gas, but let's think about Dieterici's equation again and let's assume that you are interested in calculating $\left ( \dfrac{\partial V}{\partial T} \right )_{P,n}$. What would you do?
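One practical answer: combine the cycle rule with the reciprocal identity, which gives $\left ( \dfrac{\partial V}{\partial T} \right )_{P}=-\left ( \dfrac{\partial P}{\partial T} \right )_{V}\Big/\left ( \dfrac{\partial P}{\partial V} \right )_{T}$, and let a computer algebra system do the bookkeeping. A sketch (my own illustration, written per mole so that $V$ stands for $\bar{V}$):

```python
# Sketch: (dV/dT)_P for Dieterici's equation via the cycle rule, since the
# equation cannot easily be solved for V explicitly.
import sympy as sp

R, a, b, T, Vm = sp.symbols('R a b T V_m', positive=True)
P = R * T / (Vm - b) * sp.exp(-a / (R * Vm * T))

dVdT_P = -sp.diff(P, T) / sp.diff(P, Vm)   # cycle rule + reciprocal identity
print(sp.simplify(dVdT_P))
```

The point is that we never needed an explicit expression $V(T,P)$; both derivatives on the right-hand side are taken from the equation of state as written.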
We all know that the position of a point in space can be specified with two coordinates, $x$ and $y$, called the cartesian coordinates. We also know that we can choose instead to specify the position of the point using the distance from the origin ($r$) and the angle that the vector makes with the $x$ axis ($\theta$). The latter are what we call plane polar coordinates, which we will cover in much more detail in Chapter 10. The two coordinate systems are related by: $\label{c2v:eq:calculus2v_cartesian} x=r\cos{\theta}; \; \;y=r\sin{\theta}$ $\label{c2v:eq:calculus2v_polar} r=\sqrt{x^2+y^2}; \; \; \theta=\tan^{-1}(y/x)$

Let's assume that we are given a function in polar coordinates, for example $f(r,\theta)=e^{-3r}\cos{\theta}$, and we are asked to find the partial derivatives in cartesian coordinates, $(\partial f/\partial x)_y$ and $(\partial f/\partial y)_x$. We can of course re-write the function in terms of $x$ and $y$ and find the derivatives we need, but wouldn't it be wonderful if we had a universal formula that converts the derivatives in polar coordinates ($(\partial f/\partial r)_\theta$ and $(\partial f/\partial \theta)_r$) to the derivatives in cartesian coordinates? This would allow us to take the derivatives in the system the equation is expressed in (which is easy), and then translate the derivatives to the other system without thinking too much. The chain rule will allow us to create these 'universal' relationships between the derivatives of different coordinate systems.

Before using the chain rule, let's obtain $(\partial f/\partial x)_y$ and $(\partial f/\partial y)_x$ by re-writing the function in terms of $x$ and $y$. I want to show you how much work this would involve, so you can appreciate how useful using the chain rule is. Using Equations \ref{c2v:eq:calculus2v_cartesian} and \ref{c2v:eq:calculus2v_polar}, we can rewrite $f(r,\theta)=e^{-3r}\cos{\theta}$ as $f(x,y)=\dfrac{e^{-3(x^2+y^2)^{1/2}}x}{(x^2+y^2)^{1/2}} \nonumber$ We can obtain $(\partial f/\partial x)_y$ and $(\partial f/\partial y)_x$ from this expression, but it is certainly quite a bit of work.

What if I told you that $(\partial f/\partial x)_y$ is simply $\label{c2v:eq:calculus2v_chain1} \left(\dfrac{\partial f}{\partial x}\right)_y=\cos{\theta}\left(\dfrac{\partial f}{\partial r}\right)_\theta-\dfrac{\sin{\theta}}{r}\left(\dfrac{\partial f}{\partial \theta}\right)_r$ independently of the function $f$? We will derive this result shortly, but for now let me just mention that the procedure involves using the chain rule. You are probably sighing in relief, because the derivatives $(\partial f/\partial r)_\theta$ and $(\partial f/\partial \theta)_r$ are much easier to obtain: $\left(\dfrac{\partial f}{\partial r}\right)_\theta=-3e^{-3r}\cos{\theta} \nonumber$ $\left(\dfrac{\partial f}{\partial \theta}\right)_r=-e^{-3r}\sin{\theta} \nonumber$ and using Equation \ref{c2v:eq:calculus2v_chain1}, we can obtain the derivative we are looking for: $\left(\dfrac{\partial f}{\partial x}\right)_y=-\cos{\theta}\times3e^{-3r}\cos{\theta}+\dfrac{\sin{\theta}}{r}e^{-3r}\sin{\theta} \nonumber$ $\left(\dfrac{\partial f}{\partial x}\right)_y=-\cos^2{\theta}\times3e^{-3r}+\dfrac{\sin^2{\theta}}{r}e^{-3r}=e^{-3r}\left(\dfrac{\sin^2{\theta}}{r}-3\cos^2{\theta}\right) \nonumber$ $\left(\dfrac{\partial f}{\partial x}\right)_y=e^{-3{(x^2+y^2)^{1/2}}}\left(\dfrac{y^2}{(x^2+y^2)^{3/2}}-3\dfrac{x^2}{(x^2+y^2)}\right) \nonumber$

Hopefully this wasn't too painful, or at least, it was less tedious than it would have been had we not used the chain rule.
What about $(\partial f/\partial y)_x$? We can create an expression similar to Equation \ref{c2v:eq:calculus2v_chain1} and use it to relate $(\partial f/\partial y)_x$ with $(\partial f/\partial r)_\theta$ and $(\partial f/\partial \theta)_r$.

At this point you may be thinking that this all worked well because the function we had was easier to differentiate in polar coordinates than in cartesian coordinates. True, but this is the whole point. Many physical systems are described in polar coordinates more naturally than in cartesian coordinates (especially in three dimensions). This has to do with the symmetry of the system. For an atom, for example, it is much more natural to use spherical coordinates than cartesian coordinates. We could use cartesian, but the expressions would be much more complex and hard to work with. If we have equations that are more easily expressed in polar coordinates, getting the derivatives in polar coordinates will always be easier. But why would we want the derivatives in cartesian coordinates then?

A great example is the Schrödinger equation, which is at the core of quantum mechanics. We will talk more about this when we discuss operators, but for now, the Schrödinger equation is a partial differential equation (unless the particle moves in one dimension) that can be written as: $E\psi(\vec{r})=-\dfrac{\hbar^2}{2m}\nabla^2\psi(\vec{r})+V(\vec{r})\psi{(\vec{r})} \nonumber$ Because of the symmetry of the system, for atoms and molecules it is simpler to express the position of the particle ($\vec{r}$) in spherical coordinates. However, the operator $\nabla^2$ (known as the Laplacian) is defined in cartesian coordinates as: $\nabla^2f(x,y,z)=\left(\dfrac{\partial^2 f}{\partial x^2}\right)_{y,z}+\left(\dfrac{\partial^2 f}{\partial y^2}\right)_{x,z}+\left(\dfrac{\partial^2 f}{\partial z^2}\right)_{x,y} \nonumber$ In other words, the Laplacian instructs you to take the second derivatives of the function with respect to $x$, with respect to $y$ and with respect to $z$, and add the three together. We could express the functions $V(\vec{r})$ and $\psi{(\vec{r})}$ in cartesian coordinates, but again, this would lead to a terribly complex differential equation. Instead, we can express the Laplacian in spherical coordinates, and this is in fact the best approach. To do this, we would need to relate the derivatives in spherical coordinates to the derivatives in cartesian coordinates, and this is done using the chain rule.

Hopefully all this convinced you of the uses of the chain rule in the physical sciences, so now we just need to see how to use it for our purposes. In two dimensions, the chain rule states that if we have a function in one coordinate system $u(x,y)$, and these coordinates are functions of two other variables (e.g.
$x=x(\theta,r)$ and $y=y(\theta,r)$) then: $\left ( \dfrac{\partial u}{\partial r} \right )_\theta=\left ( \dfrac{\partial u}{\partial x} \right )_y\left ( \dfrac{\partial x}{\partial r} \right )_\theta+\left ( \dfrac{\partial u}{\partial y} \right )_x\left ( \dfrac{\partial y}{\partial r} \right )_\theta$ $\left ( \dfrac{\partial u}{\partial \theta} \right )_r=\left ( \dfrac{\partial u}{\partial x} \right )_y\left ( \dfrac{\partial x}{\partial \theta} \right )_r+\left ( \dfrac{\partial u}{\partial y} \right )_x\left ( \dfrac{\partial y}{\partial \theta} \right )_r$ Some students find 'tree' constructions of these variable dependencies useful. We can also consider $u=u(r,\theta)$, and $\theta=\theta(x,y)$ and $r=r(x,y)$, which gives: $\left ( \dfrac{\partial u}{\partial x} \right )_y=\left ( \dfrac{\partial u}{\partial r} \right )_\theta\left ( \dfrac{\partial r}{\partial x} \right )_y+\left ( \dfrac{\partial u}{\partial \theta} \right )_r\left ( \dfrac{\partial \theta}{\partial x} \right )_y$ $\left ( \dfrac{\partial u}{\partial y} \right )_x=\left ( \dfrac{\partial u}{\partial r} \right )_\theta\left ( \dfrac{\partial r}{\partial y} \right )_x+\left ( \dfrac{\partial u}{\partial \theta} \right )_r\left ( \dfrac{\partial \theta}{\partial y} \right )_x$

Example $1$

Derive Equation \ref{c2v:eq:calculus2v_chain1}.

Solution

We need to prove $\left(\dfrac{\partial f}{\partial x}\right)_y=\cos{\theta}\left(\dfrac{\partial f}{\partial r}\right)_\theta-\dfrac{\sin{\theta}}{r}\left(\dfrac{\partial f}{\partial \theta}\right)_r$. Using the chain rule: $\left ( \dfrac{\partial f}{\partial x} \right )_y=\left ( \dfrac{\partial f}{\partial \theta} \right )_r\left ( \dfrac{\partial \theta}{\partial x} \right )_y+\left ( \dfrac{\partial f}{\partial r} \right )_\theta\left ( \dfrac{\partial r}{\partial x} \right )_y \nonumber$ From Equations \ref{c2v:eq:calculus2v_cartesian} and \ref{c2v:eq:calculus2v_polar}: $\left ( \dfrac{\partial r}{\partial x} \right )_y=\dfrac{1}{2}(x^2+y^2)^{-1/2}(2x)=\dfrac{1}{2}(r^2)^{-1/2}(2r\cos{\theta})=\cos{\theta} \nonumber$ $\left ( \dfrac{\partial \theta}{\partial x} \right )_y=\dfrac{1}{1+(y/x)^2}\dfrac{(-y)}{x^2}=-\dfrac{1}{1+(y/x)^2}\dfrac{y}{x}\dfrac{1}{x}=-\dfrac{1}{1+\tan^2{\theta}}\tan{\theta}\dfrac{1}{r\cos{\theta}}=-\dfrac{1}{1+\dfrac{\sin^2{\theta}}{\cos^2{\theta}}}\dfrac{\sin{\theta}}{\cos{\theta}}\dfrac{1}{r\cos{\theta}}=-\dfrac{\sin{\theta}}{r} \nonumber$ Therefore, $\left ( \dfrac{\partial f}{\partial x} \right )_y=\cos{\theta}\left ( \dfrac{\partial f}{\partial r} \right )_\theta-\dfrac{\sin{\theta}}{r}\left ( \dfrac{\partial f}{\partial \theta} \right )_r \nonumber$
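Example $1$ can also be checked with a computer algebra system. The sketch below (my own illustration, restricted to the first quadrant so that $\theta=\tan^{-1}(y/x)$ holds) computes $(\partial f/\partial x)_y$ both directly in cartesian coordinates and through Equation \ref{c2v:eq:calculus2v_chain1}, and confirms that the two routes agree:

```python
# Check that the chain-rule formula reproduces the direct cartesian
# derivative for f = exp(-3r) cos(theta), on the first quadrant (x, y > 0).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r_xy = sp.sqrt(x**2 + y**2)
theta_xy = sp.atan(y / x)

r, theta = sp.symbols('r theta', positive=True)
f = sp.exp(-3 * r) * sp.cos(theta)

# Chain-rule route, then rewritten in cartesian coordinates
chain = sp.cos(theta) * sp.diff(f, r) - sp.sin(theta) / r * sp.diff(f, theta)
chain = chain.subs({r: r_xy, theta: theta_xy})

# Direct route: rewrite f(x, y) first, then differentiate with respect to x
direct = sp.diff(f.subs({r: r_xy, theta: theta_xy}), x)

print((direct - chain).subs({x: 0.7, y: 1.2}).evalf())  # ~0
print(sp.simplify(direct - chain))                       # 0
```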
We can extend the idea of a definite integral to more dimensions. If $f(x,y)$ is continuous over the rectangle $R=[a,b]\times[c,d]$ then, $\label{c2v:eq:doubleint} \underset{R}{\iint}f(x,y)dA=\int_{a}^{b}\int_{c}^{d}f(x,y)\, dy\, dx=\int_{c}^{d}\int_{a}^{b}f(x,y)\, dx\, dy$ If $f(x,y)\geq0$, then the double integral represents the volume $V$ of the solid that lies above the rectangle $R$ and below the surface $z=f(x,y)$ (Figure $1$).

We can compute the double integral of Equation \ref{c2v:eq:doubleint} as: $\underset{R}{\iint}f(x,y)dA=\int_{a}^{b}\left[ \int_{c}^{d}f(x,y)\, dy \right]dx \nonumber$ meaning that we will first compute $\int_{c}^{d}f(x,y)\, dy \nonumber$ holding $x$ constant and integrating with respect to $y$. The result will be a function containing only $x$, which we will integrate between $a$ and $b$ with respect to $x$.

For example, let's solve $\int_{0}^{3}\int_{1}^{2}{x^2 y\; dy\; dx}$. We'll start by solving $\int_{1}^{2}{x^2 y\; dy}$ holding $x$ constant: $\int_{1}^{2}{x^2 y\; dy}=x^2\int_{1}^{2}{y\; dy}=\dfrac{3}{2}x^2 \nonumber$ Now we integrate this function from 0 to 3 with respect to $x$: $\int_{0}^{3}\int_{1}^{2}{x^2 y\; dy\; dx}=\int_{0}^{3}{\dfrac{3}{2}x^2 \; dx}=\dfrac{27}{2} \nonumber$ You can of course integrate from 0 to 3 first with respect to $x$ holding $y$ constant, and then integrate the result with respect to $y$ from 1 to 2. Try it this way and verify you get the same result.

Triple integrals work in the same way. If $f(x,y,z)$ is continuous on the rectangular box $B=[a,b]\times[c,d]\times[r,s]$, then $\label{c2v:eq:tripleint} \underset{B}{\iiint}f(x,y,z)dV=\int_{r}^{s}\int_{c}^{d}\int_{a}^{b}f(x,y,z)\, dx\, dy\, dz$ This iterated integral means that we integrate first with respect to $x$ (keeping $y$ and $z$ fixed), then we integrate with respect to $y$ (keeping $z$ fixed), and finally we integrate with respect to $z$. There are five other possible orders in which we can integrate, all of which give the same value.

Triple integrals are used very often in physical chemistry to normalize probability density functions. For example, in quantum mechanics, the absolute square of the wave function, $\left | \psi (x,y,z) \right |^2$, is interpreted as a probability density, so that $\left | \psi (x,y,z) \right |^2 dx\,dy\,dz$ is the probability that the particle is inside the volume element $dx\,dy\,dz$. Since the probability of finding the particle somewhere in space is 1, we require that: $\label{c2v:eq:calculus2v_normalization} \int_{-\infty }^{\infty }\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz=1$ We already mentioned wave functions in Section 2.3, where we showed that $\left | \psi (x,y,z) \right |^2=\psi^*(x,y,z) \psi(x,y,z) \nonumber$ The normalization condition, therefore, can also be written as $\label{c2v:eq:calculus2v_normalization2} \int_{-\infty }^{\infty }\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }{ \psi^*\psi }\; dx \;dy \;dz=1$

Example $1$

In quantum mechanics, the lowest energy state of a particle confined in a three-dimensional box is represented by $\psi (x,y,z)=A\sin{\dfrac{\pi x}{a}}\sin{\dfrac{\pi y}{b}}\sin{\dfrac{\pi z}{c}}\;\text{if}\; \left\{ \begin{matrix} 0< x< a \\ 0< y< b \\ 0< z< c \end{matrix} \right.$ and $\psi (x,y,z)=0$ otherwise (outside the box). Here, $A$ is a normalization constant, and $a$, $b$ and $c$ are the lengths of the sides of the box.
Since the probability of finding the particle somewhere in space is 1, we require that $\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz=1 \nonumber$ Find the normalization constant $A$ in terms of $a,b,c$ and other constants.

Solution

Because $\psi(x,y,z)=0$ outside the box, $\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz=\int_{0 }^{c }\int_{0 }^{b }\int_{0 }^{a }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz \nonumber$ In general, $\left | \psi (x,y,z) \right |^2=\psi^* (x,y,z) \psi(x,y,z)$. However, because in this case the function is real, $\left | \psi (x,y,z) \right |^2=\left ( \psi (x,y,z) \right )^2$: $\int_{0 }^{c }\int_{0 }^{b }\int_{0 }^{a }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz=\int_{0 }^{c }\int_{0 }^{b }\int_{0 }^{a }{A^2 \sin^2\left( \dfrac{\pi x}{a}\right)\sin^2\left( \dfrac{\pi y}{b}\right)\sin^2\left( \dfrac{\pi z}{c}\right)}\; dx \;dy \;dz=1 \nonumber$

Because the integrand is a product of a function of $x$, a function of $y$ and a function of $z$, the triple integral factors into a product of three one-dimensional integrals: $\int_{0 }^{c }\int_{0 }^{b }\int_{0 }^{a }{A^2 \sin^2\left( \dfrac{\pi x}{a}\right)\sin^2\left( \dfrac{\pi y}{b}\right)\sin^2\left( \dfrac{\pi z}{c}\right)}\; dx \;dy \;dz= A^2\int_{0 }^{a }{\sin^2\left( \dfrac{\pi x}{a}\right)}dx\int_{0 }^{b }{\sin^2\left( \dfrac{\pi y}{b}\right)}dy\int_{0 }^{c }{\sin^2\left( \dfrac{\pi z}{c}\right)}dz \nonumber$

Using the formula sheet, we get $\int_{0 }^{a }{\sin^2\left( \dfrac{\pi x}{a}\right)}dx=a/2 \nonumber$ and analogously for the $y$ and $z$ integrals. Therefore, $A^2\int_{0 }^{a }{\sin^2\left( \dfrac{\pi x}{a}\right)}dx\int_{0 }^{b }{\sin^2\left( \dfrac{\pi y}{b}\right)}dy\int_{0 }^{c }{\sin^2\left( \dfrac{\pi z}{c}\right)}dz=A^2 \dfrac{a}{2} \dfrac{b}{2}\dfrac{c}{2}=\dfrac{A^2abc}{8}=1 \nonumber$ Solving for $A$: $\displaystyle{\color{Maroon}A=\left( \dfrac{8}{abc}\right)^{1/2}}$
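The same normalization can be checked with sympy (a verification sketch of mine, not part of the original solution):

```python
# Verify the particle-in-a-box normalization constant symbolically.
import sympy as sp

x, y, z, a, b, c, A = sp.symbols('x y z a b c A', positive=True)
psi = A * sp.sin(sp.pi * x / a) * sp.sin(sp.pi * y / b) * sp.sin(sp.pi * z / c)

norm = sp.integrate(psi**2, (x, 0, a), (y, 0, b), (z, 0, c))  # = A^2 abc / 8
print(sp.solve(sp.Eq(norm, 1), A))   # [2*sqrt(2)/sqrt(a*b*c)] = (8/(abc))^(1/2)
```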
So far in this chapter, most of you have not learned anything that you have not learned in your calculus courses. The math in this chapter is actually pretty easy, but yet, it is common that students find it very hard to apply these mathematical tools to actual problems in the physical sciences. To get you comfortable using math in chemistry we will first learn a little bit about gases and thermodynamics. Having some background will help us use the math in a context that we (chemists) can relate to. We have already mentioned some thermodynamic variables, but in order to make more connections between chemistry and math we need to introduce some concepts that we need to start discussing real gases. You will talk about these concepts in more depth in CHM 346.

An important thermodynamic variable that is used to characterize the state of a system is the internal energy ($U$). Let's think about a container containing a gas (e.g. $O_2$). The internal energy of the system is the sum of the following contributions:

• Kinetic energy: The kinetic energy is the energy that the molecules have due to their motions. It is related to their velocity, and as expected, it increases with increasing temperature, as molecules move faster.
• Vibrational and rotational energy: Molecules store energy in their bonds. As we already discussed, atoms vibrate around their equilibrium position, and there is an energy associated with these vibrations. The vibrational energy of a molecule also depends on temperature, and on the number of bonds. Atomic gases (He, Ar, etc.) do not have vibrational energy. Molecules also rotate, and there is energy stored in these motions. As in the case of vibrations, atomic gases do not have contributions from rotations. You will learn about vibrational and rotational energy in CHM 345.
• Potential energy: This is the energy due to the interactions between the molecules that make up the gas. Atoms (e.g. Ar) will also interact if brought close enough. The energy of the interactions between the molecules depends obviously on the chemical nature of the molecules. We know that polar molecules will interact more strongly than non-polar molecules, and atoms with more electrons (e.g. Ar) will interact more strongly than atoms of low atomic number (e.g. He). For a given gas, these interactions depend on the distance between the molecules.

For simplicity, we will concentrate on atomic gases, where the only contributions to $U$ are the kinetic energy (which depends on temperature only) and the potential energy.

You already learned about the simplest model used to describe the behavior of gases: the ideal (or perfect) gas. You learned that there are two assumptions behind the model. First, the particles do not have any size, meaning that you can push them together as close as you want. In reality, atoms have a size, and if you try to push them together too hard the electronic clouds will repel each other. The other assumption is that particles do not interact with each other at any distance. In reality, this makes sense only at very low densities, when molecules are very far away from each other. However, as molecules get closer together, they experience attractive forces that in many cases are so strong that they result in the formation of a liquid. Of course if we push them too close the forces become repulsive, but there is a range of distances in which attractive forces dominate. This makes sense for polar molecules, but what about atoms (e.g. Ar) that do not have a permanent dipole moment?
You probably heard about London dispersion forces, without which we would never be able to liquefy a noble gas. London forces are stronger for atoms containing more electrons, and that is why the boiling point of Xe is much higher than the boiling point of Ne. With all this in mind, let’s think about what happens to an ideal gas in the three situations depicted in Figure $1$. The densities (molecules per unit volume) increase as we increase the pressure applied to the container. Let’s assume the three containers are equilibrated at the same temperature, and let’s think about how the internal energy compares among the three cases. We do not have any equations yet, so we need to think in terms of the concepts we just discussed. We have two contributions to think about. The kinetic term should be the same in the three containers because the temperature is the same, and that is the only factor that determines the velocity of the molecules. What about the potential energy? The particles do not have a size, so there are no repulsive forces that arise if we try to push them together too close. They do not have any type of attractive interactions either, so an ideal gas does not store any potential energy. We just concluded that the internal energy of an ideal gas equals the kinetic energy, and it is therefore a function of the temperature of the gas, but not of its density. Now, ideal gases are simplified representations of real gases; they do not actually exist. In which of these three situations is a gas more likely to behave as an ideal gas? Clearly, when the density is low, the molecules are farther apart, the interactions between them are weaker, and we do not need to worry much about the potential energy.

The van der Waals Model

Let’s come back to the equation of state of an ideal gas (Equation \ref{c2v:ideal}): $P=\frac{nRT}{V} \label{c2v:ideal}$ In order to improve our description of gases we need to take into account the two factors that the ideal gas model neglects: the size of the molecules, and the interactions between them. The size of the molecules can be taken into account by assuming that the volume that the molecules have to move around in is not really $V$, but instead $V-nb$, where $nb$ is a measure of the volume that the molecules occupy themselves. In this context $b$ is a measure of the volume of one molecule, so $nb$ takes into account the volume of all molecules present in the gas. This first correction gives: $P=\frac{nRT}{V-nb}\label{calculus2v:eq:hardsphares}$ which is known as the ‘hard spheres’ model. Note that we have not introduced anything regarding interactions yet, so this model tells us that we can increase the density as much as we want without changing the internal energy until we get to the point where the spheres touch each other. Because they are ‘hard’, the force required to reduce the volume any further would be infinitely large (not too different from pushing a billiard ball inside another one). Translated into potential energy, this means that the potential energy will jump to infinity when the distance between the centers of the particles equals two times their radius. This is better than nothing, but not entirely realistic. In reality, molecules are not completely hard, and can be pushed against each other a little bit without an infinite force. What about interactions? We discussed that at moderate densities, attractive interactions dominate over repulsive interactions.
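Before we add attractions, it is instructive to see how much the hard-sphere correction alone changes the pressure. Below is a minimal numerical sketch (illustrative only; it assumes 1 mol of Ar and the value $b = 0.0320$ L/mol quoted in the table that follows):

```python
# A minimal sketch: ideal gas vs. 'hard spheres' pressure for 1 mol of Ar.
# b = 0.0320 L/mol is the excluded-volume parameter for Ar (see table below).
R = 0.083145   # gas constant in L bar / (mol K)
T = 300.0      # temperature in K
n = 1.0        # moles
b = 0.0320     # L/mol

for V in [10.0, 1.0, 0.1]:  # volume in L
    P_ideal = n * R * T / V
    P_hard = n * R * T / (V - n * b)
    print(f"V = {V:5.1f} L:  P_ideal = {P_ideal:8.2f} bar,  P_hard = {P_hard:8.2f} bar")
```

At $V = 10$ L the two pressures differ by about 0.3%, while at $V = 0.1$ L the hard-sphere pressure is almost 50% larger: the correction only matters when the molecules take up a significant fraction of the container.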
In order to incorporate a correction due to attractive interactions we need to recognize that the pressure of the gas needs to be smaller than the pressure of a gas without attractions (like the hard spheres model). The pressure of a gas is a measure of the collisions of the molecules with the walls of the container. Attractive forces should decrease this frequency, so the resulting pressure should be lower: $P=\frac{nRT}{V-nb}-C \nonumber$ where $C$ is a positive term that takes into account the attractive interactions. What should this term depend on? Clearly on the chemical nature of the molecules; it should be larger for atoms with more electrons (e.g. Xe) than for atoms with fewer electrons (e.g. He). In addition, it should depend on the density of the gas ($n/V$), as attractive forces are stronger the closer the molecules are. Van der Waals proposed the following equation that satisfies everything we just said: $\label{c2v:eq:vdw} P=\frac{nRT}{V-nb}-a \left(\frac{n}{V}\right)^2$ Check the following examples of van der Waals constants, and see if you understand how the values of $a$ and $b$ make sense in terms of the sizes of the molecules, and what you know about chemical interactions from your general chemistry courses. Pay attention to the units as well.

gas       $a$ (L$^2$ bar/mol$^2$)   $b$ (L/mol)
He        0.035                     0.0237
Ar        1.35                      0.0320
Kr        2.349                     0.0398
H$_2$     0.248                     0.02661
O$_2$     1.378                     0.03183
H$_2$O    5.536                     0.03049
CO$_2$    3.64                      0.0427

It is worth stressing that the van der Waals model is still a model; that is, it is not an exact representation of a real gas. It fixes many of the deficiencies of the ideal gas law, but it is still a model. Coming back to Figure $1$, imagine now that the containers are filled with a van der Waals gas. The three states now have different internal energies, because the density of the molecules is different, and that changes the forces between them. The potential energy is zero if the molecules are too far apart for any attractive or repulsive force to be significant, or when the attractive and repulsive forces exactly cancel each other. When attractive forces dominate we would need to exert work to separate the molecules, and the potential energy is negative. When repulsive forces dominate the potential energy is positive, and we would need to exert work to push the molecules closer. With all this in mind, in which of the three situations is the potential energy lower? The answer is container number 3. Molecules are closer (but not close enough to touch each other), so attractive forces are stronger than in container number 1. Attractive forces lower the internal energy of the system. These arguments allow us to plot the potential energy for a van der Waals gas as a function of the distance between the centers of the molecules: The potential energy is zero when the distance between molecules is much larger than their diameter. If we start increasing the density they get closer, and the attractive interactions become significant, lowering the internal energy. This continues until they touch each other. Because they are ‘hard spheres’ they cannot penetrate each other at all, and the potential energy jumps to infinity. This is equivalent to saying that the force required to push them closer together is infinitely large. We discussed the difference between a real gas and a model of a real gas. How does this plot look in reality, and how different is it from the van der Waals model? Below are examples for different gases.
Notice that ‘Ar$_2$’ does not refer to a gas made up of molecules of Ar$_2$, but instead to the interactions between two Ar atoms. All the gases in this figure, as we know, are monatomic. There are a few things worth noting. First, atoms with more electrons (look at a periodic table) show a deeper well. This makes sense because more electrons result in stronger attractive London forces. Also, atoms with more electrons experience these attractive forces at longer distances than atoms with fewer electrons. In all cases the potential energy increases very sharply when we continue to decrease the distance between the atoms, but the potential energy does not jump to infinity suddenly, as in the case of the van der Waals gas. This means the atoms are not exactly hard spheres. As expected, we can bring two atoms of He much closer than two atoms of Rn before we see these repulsive interactions, because atoms of He are much smaller than those of Rn.

Pressure-Volume Isotherms

Scientists started to study the behavior of gases back in the 1600s. As you can imagine, they had very rudimentary laboratory supplies, and their observations were mostly qualitative. One of the earliest quantitative studies in chemistry was performed by Robert Boyle, who noticed that the volume and the pressure of a fixed amount of gas at constant temperature change according to the simple law $P\propto 1/V$, where the symbol “$\propto$” means “proportional to”. This is of course true for an ideal gas, whose equation of state is $P=nRT/V$, but not for a real gas. Boyle’s law predicts that the volume of a gas decreases as the pressure is increased at constant temperature. Mathematically, this curve is called a hyperbola (hyperbolas are graphs where the product $xy$ is a constant), and physically we call these plots isotherms, because they represent the behavior at constant temperature (iso means equal in Greek). In other words, the isotherm for an ideal gas is a hyperbola, but the isotherm for a real gas will show deviations from the hyperbolic shape. We can of course go to the lab and measure the isotherms for any real gas in any range of temperatures we want. We know that whenever the conditions are such that the density of the gas is low, we expect interactions to be negligible, and therefore isotherms should be very close to the hyperbolas that Boyle described in the 1600s. What happens when interactions are significant? Some experimental isotherms for CO$_2$ are shown in Figure $4$. At higher temperatures (e.g. 50$^{\circ}$C), the isotherm is close to the prediction for an ideal gas: we can compress the gas to small volumes and the pressure will increase following Boyle’s law. The isotherms at higher temperatures will be even closer to the hyperbolas predicted by Boyle’s law because interactions become less and less noticeable. What happens at lower temperatures? Let’s consider the isotherm at 20$^{\circ}$C. Imagine you have a container with one mole of CO$_2$, and you start reducing its volume at constant temperature. The initial volume is somewhere between 0.5 and 0.6 L (point A in Figure $4$). As you start reducing the volume the pressure starts increasing in approximate agreement with Boyle’s law, but notice that important deviations are observed as you approach point B. When the pressure reaches 60 atm (point C), the behavior of the fluid deviates greatly from that predicted by the laws of ideal gases.
You continue reducing the volume of the gas, but the pressure not only does not go up as predicted by Boyle’s law, it remains constant for a while (between points C and E)! What is going on? How can we reduce the volume of a gas without increasing the pressure? The answer is that we are not dealing with a pure gas anymore: we are liquefying part of it. As we move from C to E, we increase the amount of liquid and reduce the amount of gas, and the pressure of the system remains constant as long as we have gas and liquid in equilibrium. Of course, the more liquid we have, the less volume the CO$_2$ occupies. As we mentioned before, ideal gases are not supposed to form liquids because, in principle, the molecules that make up the gas do not have any size and do not experience any interactions with the other molecules in the container. To form a liquid, molecules need to experience strong attractive forces, or otherwise the motions they experience due to their thermal energy would not allow them to stay close enough. Liquefying a gas, therefore, is clear experimental evidence of non-ideal behavior and of the existence of attractive interactions among molecules. Coming back to Figure $4$, any point on the horizontal line CDE represents a state where liquid and gas coexist. We call the gas “vapor” in these circumstances, but the distinction is more semantic than physical. When we reach point E, all the CO$_2$ molecules are part of the liquid. We can continue to reduce the volume, but the pressure of the container will go up much more dramatically than before, because we would need to exert a considerable amount of force to push the molecules of liquid closer together. In more technical terms, liquids are much less compressible than gases (the isothermal compressibility, $\kappa$, is defined in Problem 5 at the end of this chapter). The area highlighted in light blue in Figure $4$ represents the conditions under which CO$_2$ can exist in equilibrium between the liquid and vapor phases. For example, if we perform the experiment at 0$^{\circ}$C, we would start seeing the first drops of liquid when the pressure of the container reaches $\approx$ 35 atm. The pressure will remain constant as we continue to reduce the volume and we form more and more liquid. When no CO$_2$ remains in the vapor phase, reducing the volume even further would require that we increase the pressure of the container dramatically, as we would be compressing a liquid, not a gas. Notice that the length of the horizontal line that represents the coexistence of liquid and gas decreases as we increase the temperature. If we were to perform the experiment at 30$^{\circ}$C (not shown), we would see that the volume at which we see the first drop of liquid is not too different from the volume at which we stop seeing CO$_2$ in the gas phase. Liquefying all the gas would require a small change in volume, which means that at that particular temperature and pressure, the volume that the gas occupies is not too different from the volume the liquid occupies. Pretty strange if you think about it: a mole of an ideal gas occupies 22.4 L at 0$^{\circ}$C and 1 atm, and a mole of liquid water occupies only 18 mL, more than a thousand times less. If we think in terms of densities, the density of water at room temperature is about 1 g/mL, or 0.056 mol/mL. The density of an ideal gas at room temperature and 1 atm is $n/V=P/RT\approx 4 \times 10^{-5}$ mol/mL, again, around a thousand times less.
Yet, at 30$^{\circ}$C and around 80 atm, the density of CO$_2$ in the liquid state is almost the same as the density of CO$_2$ in the gas phase. If we put CO$_2$ in a high-pressure cell and increase $P$ to 80 atm, it would be hard for us to say whether the CO$_2$ is liquid or gas. At much lower pressures, the distinction between liquid and gas becomes much more evident, as we are used to from our daily experience.

Critical Behavior

There is a particular isotherm where the CDE line of Figure $4$ reduces to a point. In the case of CO$_2$, this isotherm is the one we measure at 31.04$^{\circ}$C, and it is so unique and important that it has a special name: the ‘critical isotherm’. At temperatures below the critical isotherm, we see that the gas condenses to form liquid, and that the pressure of the system remains constant as we convert more and more gas into liquid. The lower the temperature, the more different the densities of the vapor and the liquid are. This is very intuitive for us, because it is what we are used to seeing with the liquid we know best: water. In the case of water, we would need to increase the pressure to 218 atm and work at 374$^{\circ}$C to lose our ability to distinguish between liquid water and vapor. The conditions on Earth are so far away from the critical point that we can clearly distinguish liquid water from vapor from their densities. Coming back to CO$_2$, as we increase the temperature at high pressures (more than 60 atm), the liquid and the vapor states of the fluid become more and more similar. Right below the critical temperature we can hardly distinguish what is liquid and what is vapor, and at exactly the critical temperature, that distinction is lost. Above the critical temperature we never see a separation of phases; we just see a fluid that becomes denser as we reduce the volume of the container. Notice that the critical point is an inflection point in the critical isotherm. This happens at a particular molar volume (for CO$_2$, $V_c = 0.094$ L/mol) and at a particular pressure (for CO$_2$, $P_c = 72.9$ atm), which we call the critical molar volume and critical pressure of the fluid. If we want to liquefy CO$_2$, we need to do it at temperatures below its critical temperature (31.04$^{\circ}$C). At temperatures above this value the fluid will always be a gas, although it could be a very dense gas! Chemists call this state a ‘supercritical fluid’ just to differentiate it from a low-density gas such as CO$_2$ at 1 atm. Again, to give you an idea, one mole of CO$_2$ occupies about 25 L at 1 atm and 40$^{\circ}$C, and we call it a gas without thinking twice. From Figure $4$, at 80 atm and 40$^{\circ}$C one mole of CO$_2$ occupies about 0.15 L, about 170 times less than the gas we are used to seeing at 1 atm. This is a very dense fluid, but technically it is not a liquid because we are above the critical temperature. Instead, we use the term supercritical fluid. As it turns out, supercritical CO$_2$ is much more than a curiosity. It is used as a solvent for many industrial processes, including decaffeinating coffee and dry cleaning. From our discussion above, it is clear that ideal gases do not display critical behavior.
Again, ideal gases do not exist, so when we say that ideal gases do not display critical behavior we are just saying that 1) gases show critical behavior at conditions of temperature, pressure and molar volume that are very far from the conditions where the simple equation $PV=nRT$ describes the behavior of the gas, and that 2) if we want to describe a gas close to the critical point we need an equation of state that is consistent with critical behavior. If we plot the isotherms of an ideal gas ($P = nRT/V$) we will obtain hyperbolas at any temperature. Again, this works well with gases at very low densities, but because the model does not include interactions, it cannot possibly describe the isotherms at or around the critical point. Is the hard spheres model of Equation \ref{calculus2v:eq:hardsphares} consistent with the existence of the critical point? To answer this question, we could plot many isotherms according to this equation and see if the model gives one isotherm that has an inflection point like the one shown in Figure $4$. Once again, keep in mind that the figure contains the data we measure experimentally, which is what CO$_2$ is actually doing in nature. Equations of state are models that are meant to describe the system as closely as possible, but they are by definition simplifications of the real behavior. Coming back to the model of hard spheres, you can plot as many isotherms as you want, but you will see that none of them show an inflection point. This is pretty obvious from the equation, as the isotherms are basically the same as the ones we would get with the ideal gas equation, just shifted along the $x$-axis by the quantity $nb$. Physically speaking, this is not surprising, because the hard spheres model does not contain any parameter that accounts for the attractive interactions between molecules, and that is what we need to describe critical behavior. We know that the van der Waals equation is the simplest equation that introduces a term to account for attractive forces (Equation \ref{c2v:eq:vdw}), so it is likely that this equation might be consistent with critical behavior. Let’s discuss what we mean by this in more detail. Again, van der Waals gases do not exist in nature. They are a theoretical construction in which we think about molecules as hard spheres with kinetic energy that interact with each other so that the average interaction between two randomly oriented molecules is inversely proportional to the sixth power of the distance between them. If we could create a gas whose molecules followed these exact physical laws, the gas would behave exactly as the van der Waals equation predicts. So, now we wonder: would such a gas show an isotherm with an inflection point like the one shown for the case of real CO$_2$? The answer is yes, and the critical constants ($P_c, T_c$ and $V_c$) depend on the values of the van der Waals parameters, $a$ and $b$. Now, the fact that the van der Waals model predicts critical behavior does not mean at all that it describes the whole isotherm well. If you plot the van der Waals equation at different temperatures you will see that this model does not predict the “flat” $P-V$ part of the curve, where the liquid and the gas coexist. This is not surprising, as the treatment of the attractive interactions in the van der Waals model is too simple to describe the liquid state.
In fact, you will see that the van der Waals equation predicts that the derivative $\left(\frac{\partial P}{\partial V} \right)_{T,n}$ is positive in certain regions of the isotherm, which of course does not make any physical sense: compressing a gas will never lower its pressure, as the van der Waals equation predicts in these regions, so we can clearly see how the model fails when attractive forces become important. In any case, it is pretty impressive to see how such a simple equation predicts behavior as complex as the critical point.
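You can see this unphysical region numerically. The sketch below (illustrative only; it uses the CO$_2$ constants from the table above, $n = 1$ mol, and a temperature below the critical temperature of about 304 K) scans a van der Waals isotherm and reports where $(\partial P/\partial V)_{T,n} > 0$:

```python
import numpy as np

# van der Waals isotherm for 1 mol of CO2 (a and b from the table above).
R, a, b = 0.083145, 3.64, 0.0427   # L bar/(mol K), L^2 bar/mol^2, L/mol
T = 280.0                          # K, below Tc (about 304 K for CO2)

V = np.linspace(0.06, 1.0, 2000)   # molar volume in L/mol (V > b)
P = R * T / (V - b) - a / V**2     # van der Waals pressure

dPdV = np.gradient(P, V)           # numerical derivative along the isotherm
loop = V[dPdV > 0]                 # region where compression would *lower* P
print(f"dP/dV > 0 for V between {loop.min():.3f} and {loop.max():.3f} L/mol")
```

If you repeat the scan at a temperature above about 304 K you will find no volumes with $dP/dV>0$: above the critical temperature the van der Waals isotherms are monotonic, just like the experimental ones.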
Problem $1$

Given a generic equation of state $P = P(V, T, n)$, explain how you can obtain the derivative $\left(\frac{\partial V}{\partial T}\right)_{P,n} \nonumber$ using the properties of partial derivatives we learned in this chapter.

Problem $2$

The thermodynamic equation: $\left (\frac{\partial U}{\partial V} \right )_T=T\left (\frac{\partial P}{\partial T} \right )_V-P \nonumber$ shows how the internal energy of a system varies with the volume at constant temperature. Prove that

1. $\left (\frac{\partial U}{\partial V} \right )_T=0$ for an ideal gas.
2. $\left (\frac{\partial U}{\partial V} \right )_T=\frac{a}{V^2}$ for one mole of a van der Waals gas (Equation \ref{c2v:eq:vdw})

Problem $3$

Consider one mole of a van der Waals gas (Equation \ref{c2v:eq:vdw}) and show that $\left (\frac{\partial^2 P}{\partial V\partial T}\right )=\left (\frac{\partial^2 P}{\partial T\partial V} \right) \nonumber$

Problem $4$

Consider a van der Waals gas (Equation \ref{c2v:eq:vdw}) and show that $\left (\frac{\partial V}{\partial T}\right )_{P,n}=\frac{n R}{\left( P-\frac{n^2a}{V^2}+\frac{2n^3ab}{V^3} \right)} \nonumber$ Hint: Calculate derivatives that are easier to obtain and use the properties of partial derivatives to get the one the problem asks for. Do not use the answer in your derivation; obtain the derivative assuming you don’t know the answer and simplify your expression until it looks like the equation above.

Problem $5$

From the definitions of the expansion coefficient ($\alpha$) and the isothermal compressibility ($\kappa$): $\alpha=\frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P,n} \nonumber$ and $\kappa=-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T,n} \nonumber$ prove that $\left(\frac{\partial P}{\partial T}\right)_{V,n}=\frac{\alpha}{\kappa} \nonumber$ independently of the equation of state used. Note: A common mistake in this problem is to assume a particular equation of state. Use the cycle rule to find the required relationship independently of any particular equation of state.

Problem $6$

Derive an equation similar to Equation \ref{c2v:eq:calculus2v_chain1}, but that relates $\left ( \frac{\partial f}{\partial y} \right )_x \nonumber$ with $\left ( \frac{\partial f}{\partial r} \right )_\theta \nonumber$ and $\left ( \frac{\partial f}{\partial \theta} \right )_r \nonumber$

Problem $7$ (Extra-credit level)

The expression: $\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2} \nonumber$ is known as the Laplacian operator in two dimensions. When applied to a function $f(x,y)$, we get: $\nabla^2f(x,y)=\frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2} \nonumber$ Express $\nabla^2$ in polar coordinates (2D) assuming the special case where $r=a$ is a constant.

Problem $8$

Calculate $\int_{0}^{1}\int_{1}^{2}\int_{0}^{2}{\left( x^2+yz \right)\, dx\, dy\, dz}.$ Try three different orders of integration and verify that you always get the same result.

Problem $9$

Calculate $\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{\infty}{e^{-r}r^5\sin{\theta}\, dr\, d\theta\, d\phi}.$ Use only the formula sheet.

Problem $10$

How would Figure $8.5.2$, reproduced below, look for an ideal gas? Sketch the potential energy as a function of the distance between the atoms.
Problem $11$

From everything we learned in this chapter, and without doing any math, we should be able to determine the sign (>0, <0, or 0) of the following derivatives:

For an ideal gas: $\left(\frac{\partial U}{\partial T}\right)_{V,n} \nonumber$ $\left(\frac{\partial U}{\partial V}\right)_{T,n} \nonumber$

For a van der Waals gas: $\left(\frac{\partial U}{\partial T}\right)_{V,n} \nonumber$ $\left(\frac{\partial U}{\partial V}\right)_{T,n} \nonumber$

Be sure you can write a short sentence explaining your answers.

Problem $12$

The critical point is the state at which the liquid and gas phases of a substance first become indistinguishable. A gas above the critical temperature will never condense into a liquid, no matter how much pressure is applied. Mathematically, at the critical point: $\left(\frac{\partial P}{\partial V} \right)_{T,n}=0 \nonumber$ and $\left(\frac{\partial^2 P}{\partial V^2} \right)_{T,n}=0 \nonumber$ Obtain the critical constants of a van der Waals gas (Equation \ref{c2v:eq:vdw}) in terms of the parameters $a$ and $b$. Hint: obtain the first and second derivatives of $P$ with respect to $V$, set them equal to zero, and obtain $T_c$ and $V_c$ from these equations. Finally, replace these expressions in Equation \ref{c2v:eq:vdw} to obtain $P_c$.

Note

As derived in Section 8.3, $\label{c2v:eq:calculus2v_chain1} \left(\dfrac{\partial f}{\partial x}\right)_y=\cos{\theta}\left(\dfrac{\partial f}{\partial r}\right)_\theta-\dfrac{\sin{\theta}}{r}\left(\dfrac{\partial f}{\partial \theta}\right)_r$ As defined in Section 8.5, the van der Waals equation is: $\label{c2v:eq:vdw} P=\frac{nRT}{V-nb}-a \left(\frac{n}{V}\right)^2$

1. If you are not familiar with this, you need to read about it before moving on.
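If you want to check your answer to Problem 12 with a computer-algebra system, here is a minimal sympy sketch (purely illustrative, and a spoiler of sorts; it sets $n = 1$, so $V$ is the molar volume):

```python
import sympy as sp

# Symbolic check of the van der Waals critical constants (Problem 12), n = 1.
V, T, a, b, R = sp.symbols('V T a b R', positive=True)

P = R * T / (V - b) - a / V**2          # van der Waals pressure, molar form

# Both derivatives with respect to V vanish at the critical point:
sol = sp.solve([sp.diff(P, V), sp.diff(P, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
Pc = sp.simplify(P.subs({V: Vc, T: Tc}))

print(Vc, Tc, Pc)   # expect Vc = 3b, Tc = 8a/(27Rb), Pc = a/(27b^2)
```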
Chapter Objectives

• Understand the concept of the total differential.
• Understand the concept of exact and inexact differentials.
• Be able to test whether a differential is exact or not.
• Understand how to integrate differentials along different paths.
• Understand the role of exact and inexact differentials in thermodynamics.

Thumbnail: These are just three of the infinitely many paths from a state at point A at some time to a state at point B at some other time. A discussion of exact and inexact differentials is strongly connected to understanding the differences between paths and states (CC BY-SA 3.0; Matt McIrvin via Wikipedia)

09: Exact and Inexact Differentials

In Chapter 8 we learned that partial derivatives indicate how the dependent variable changes with one particular independent variable, keeping the others fixed. In the context of an equation of state $P=P(T,V,n)$, the partial derivative of $P$ with respect to $V$ at constant $T$ and $n$ is: $\left (\dfrac{\partial P}{\partial V} \right )_{T,n} \nonumber$ and physically represents how the pressure varies as we change the volume at constant temperature and constant $n$. The partial derivative of $P$ with respect to $T$ at constant $V$ and $n$ is: $\left (\dfrac{\partial P}{\partial T} \right )_{V,n} \nonumber$ and physically represents how the pressure varies as we change the temperature at constant volume and constant $n$. What happens with the dependent variable (in this case $P$) if we change two or more independent variables simultaneously? For an infinitesimal change in volume and temperature, we can write the change in pressure as: $\label{eq:differentials1} dP=\left (\dfrac{\partial P}{\partial V} \right )_{T,n} dV+\left (\dfrac{\partial P}{\partial T} \right )_{V,n} dT$ Equation \ref{eq:differentials1} is called the total differential of $P$, and it simply states that the change in $P$ is the sum of the individual contributions due to the change in $V$ at constant $T$ and the change in $T$ at constant $V$. This equation is true for infinitesimal changes. If the changes are not infinitesimal, we will integrate this expression to calculate the change in $P$.

Let’s now consider the volume of a fluid, which is a function of pressure, temperature and the number of moles: $V=V(n,T,P)$. The total differential of $V$, by definition, is: $\label{eq:differentials3} dV=\left (\frac{\partial V}{\partial T} \right )_{P,n} dT+\left (\frac{\partial V}{\partial P} \right )_{T,n} dP+\left (\frac{\partial V}{\partial n} \right )_{T,P} dn$ If we want to calculate the change in volume in a fluid upon small changes in $P, T$ and $n$, we could use: $\label{eq:differentials3b} \Delta V\approx \left (\frac{\partial V}{\partial T} \right )_{P,n} \Delta T+\left (\frac{\partial V}{\partial P} \right )_{T,n} \Delta P+\left (\frac{\partial V}{\partial n} \right )_{T,P} \Delta n$ Of course, if we know the function $V=V(n,T,P)$, we could also calculate $\Delta V$ as $V_f-V_i$, where the final and initial volumes are calculated using the final and initial values of $P, T$ and $n$. This seems easy, so why do we need to bother with Equation \ref{eq:differentials3b}? The reason is that sometimes we can measure the partial derivatives experimentally, but we do not have an equation of the type $V=V(n,T,P)$ to use. For example, the following quantities are accessible experimentally and tabulated for different fluids and materials:
• $\alpha=\frac{1}{V}\left(\frac{\partial V}{\partial T} \right )_{P,n}$ (coefficient of thermal expansion)
• $\kappa=-\frac{1}{V}\left(\frac{\partial V}{\partial P} \right )_{T,n}$ (isothermal compressibility)
• $V_m=\left(\frac{\partial V}{\partial n} \right )_{P,T}$ (molar volume)

Using these definitions, Equation \ref{eq:differentials3} becomes: $\label{eq:differentials4} dV=\alpha V dT-\kappa VdP+V_m dn$ You can find tables with experimentally determined values of $\alpha$ and $\kappa$ under different conditions, which you can use to calculate the changes in $V$. Again, as we will see later in this chapter, this equation will need to be integrated if the changes are not small. In any case, the point is that you may have access to information about the derivatives of the function, but not to the function itself (in this case $V$ as a function of $T, P, n$). In general, for a function $u=u(x_1, x_2...x_n)$, we define the total differential of $u$ as: $\label{eq:total_differential} du=\left (\frac{\partial u}{\partial x_1} \right )_{x_2...x_n} dx_1+\left (\frac{\partial u}{\partial x_2} \right )_{x_1, x_3...x_n} dx_2+...+\left (\frac{\partial u}{\partial x_n} \right )_{x_1...x_{n-1}} dx_n$

Example $1$

Calculate the total differential of the function $z=3x^3+3yx^2+xy^2$.

Solution

By definition, the total differential is: $dz=\left (\frac{\partial z}{\partial x} \right )_{y} dx+\left (\frac{\partial z}{\partial y} \right )_{x} dy \nonumber$ For the function given in the problem, $\left (\frac{\partial z}{\partial x} \right )_{y}=9x^2+6xy+y^2 \nonumber$ and $\left (\frac{\partial z}{\partial y} \right )_{x}=3x^2+2xy \nonumber$ and therefore, $\displaystyle{\color{Maroon}dz=(9x^2+6xy+y^2)dx+(3x^2+2xy)dy} \nonumber$
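Results like Example $1$ are easy to check with a computer-algebra system. Here is a minimal sympy sketch (illustrative only) that reproduces the total differential above:

```python
import sympy as sp

# Reproduce Example 1: the total differential of z = 3x^3 + 3yx^2 + xy^2.
x, y = sp.symbols('x y')
z = 3*x**3 + 3*y*x**2 + x*y**2

dz_dx = sp.diff(z, x)   # partial derivative at constant y
dz_dy = sp.diff(z, y)   # partial derivative at constant x
print(f"dz = ({dz_dx}) dx + ({dz_dy}) dy")
# dz = (9x^2 + 6xy + y^2) dx + (3x^2 + 2xy) dy, matching the example
```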
So far, we discussed how to calculate the total differential of a function. If you are given a function of more than one variable, you can calculate its total differential using the definition of the total differential of a function $u$: $du=\left(\frac{\partial u}{\partial x_1} \right)_{x_2...x_n} dx_1+\left( \frac{\partial u}{\partial x_2} \right)_{x_1, x_3...x_n} dx_2+...+\left( \frac{\partial u}{\partial x_n} \right)_{x_1...x_{n-1}} dx_n$. You will have one term for each independent variable. What if we are given a differential (e.g. $dz=(9x^2+6xy+y^2)dx+(3x^2+2xy)dy$; see Example 9.1) and we are asked to calculate the function whose total differential is $dz$? This is basically working Example 9.1 backwards: we know the differential, and we are looking for the function. Things are a little bit more complicated than this, because not all differentials are the total differentials of a function. For example, from the example above we know that $dz=(9x^2+6xy+y^2)dx+(3x^2+2xy)dy$ is the total differential of $z(x,y)=3x^3+3yx^2+xy^2.$ However, the differential $dz = xydx + x^2 dy$ is not the total differential of any function $z(x,y)$. You can write down every single function $z(x,y)$ on this planet, calculate their total differentials, and you will never see $dz = xydx + x^2 dy$ in your list. Therefore, the question we are addressing is the following: given a differential, 1) is it the total differential of any function? 2) if it is, which function? To illustrate the question, let’s say we are given the differential below (notice that I switched to $P, V,$ and $T$, which are variables you will encounter often in thermodynamics): $\label{eq:diff6} dP=\frac{RT}{V-b}dT+\left[\frac{RT}{(V-b)^2}-\frac{a}{TV^2}\right]dV$ The question is whether this is the total differential of a function $P=P(T,V)$ (we are told that $a$ and $b$ are constants, and we already know that $R$ is a constant). By definition of the total differential, if the function exists, its total differential will be: $\label{eq:diff7} dP=\left (\frac{\partial P}{\partial T} \right )_{V} dT+\left (\frac{\partial P}{\partial V} \right )_{T} dV$ Comparing Equations \ref{eq:diff6} and \ref{eq:diff7}, if the function exists, its derivatives will have to be: $\label{eq:diff8a} \left (\frac{\partial P}{\partial T} \right )_{V}=\frac{RT}{V-b}$ $\label{eq:diff8b} \left (\frac{\partial P}{\partial V} \right )_{T}=\left[\frac{RT}{(V-b)^2}-\frac{a}{TV^2}\right]$ If we find a function $P=P(T,V)$ that satisfies these equations at the same time, we know that Equation \ref{eq:diff6} is its total differential. From Equation \ref{eq:diff8a}, we can calculate $P$ by integrating with respect to $T$ at constant $V$: $\label{eq:diff9} \int dP=\int \frac{RT}{V-b}dT\rightarrow P=\frac{R}{V-b}\frac{T^2}{2}+f(V)$ where we included an integration constant ($f(V)$) that can be any function of $V$ (we are integrating at constant $V$). In order to get an expression for $P(T,V)$, we need to find $f(V)$ so we can complete the right side of Equation \ref{eq:diff9}. To do that, we are going to take the derivative of $P$ (Equation \ref{eq:diff9}) with respect to $V$, and compare with Equation \ref{eq:diff8b}: $\label{eq:diff10} \left(\frac{\partial P}{\partial V}\right)_T=-\frac{RT^2}{2(V-b)^2}+\frac{df(V)}{dV}$ Looking at Equations \ref{eq:diff8b} and \ref{eq:diff10}, we see that the two expressions do not match, regardless of which function we choose for $f(V)$.
This means that Equation \ref{eq:diff6} does not represent the total differential of any function $P(V,T)$. We call these differentials inexact differentials. If a differential is the total differential of a function, we call the differential exact.

What we did so far is correct, but it is not the easiest way to test whether a differential is exact or inexact. There is, in fact, a very easy way to test for exactness. We’ll derive the procedure below, but in the future we can use it without deriving it each time. Given the differential $dz=f_1(x,y)dx+f_2(x,y)dy$, the differential is exact if $\label{eq:test} \left(\frac{\partial f_1(x,y)}{\partial y} \right )_x=\left(\frac{\partial f_2(x,y)}{\partial x} \right )_y$ If Equation \ref{eq:test} does not hold, the differential is inexact. For instance, if $dz=(9x^2+6xy+y^2)dx+(3x^2+2xy)dy$, the functions $f_1$ and $f_2$ are $f_1=9x^2+6xy+y^2$ and $f_2=3x^2+2xy$. To test this differential, we perform the partial derivatives $\left(\frac{\partial f_1(x,y)}{\partial y} \right )_x =6x+2y$ and $\left(\frac{\partial f_2(x,y)}{\partial x} \right )_y=6x+2y$ The two derivatives are the same, and therefore the differential is exact.

Let’s prove why the test of Equation \ref{eq:test} works. Let’s consider a differential of the form $dz=f_1(x,y)dx+f_2(x,y)dy$. If the differential is exact, it is the total differential of a function $z(x,y)$, and therefore: $\label{eq:diff11} dz=f_1(x,y)dx+f_2(x,y)dy=\left (\frac{\partial z}{\partial x} \right )_{y} dx+\left (\frac{\partial z}{\partial y} \right )_{x} dy$ We know that the mixed partial derivatives of a function are independent of the order in which they are computed: $\left (\frac{\partial^2 z}{\partial y\partial x} \right )=\left (\frac{\partial^2 z}{\partial x\partial y} \right ) \nonumber$ From Equation \ref{eq:diff11}, $f_1(x,y)=\left (\frac{\partial z}{\partial x} \right )_{y} \rightarrow \left(\frac{\partial f_1(x,y)}{\partial y} \right )_x=\left (\frac{\partial^2 z}{\partial x\partial y} \right ) \nonumber$ $f_2(x,y)=\left (\frac{\partial z}{\partial y} \right )_{x} \rightarrow \left(\frac{\partial f_2(x,y)}{\partial x} \right )_y=\left (\frac{\partial^2 z}{\partial y\partial x} \right ) \nonumber$ Because the mixed partial derivatives are the same, for an exact differential: $\left(\frac{\partial f_1(x,y)}{\partial y} \right )_x=\left(\frac{\partial f_2(x,y)}{\partial x} \right )_y$ This equation is true only for an exact differential, because we derived it by assuming that the function $z=z(x,y)$ exists, so its mixed partial derivatives are the same. We can use this relationship to test whether a differential is exact or inexact. If the equality of Equation \ref{eq:test} holds, the differential is exact. If it does not hold, it is inexact.

Example $1$

Test whether the following differential is exact or inexact: $dz=\frac{1}{x^2}dx-\frac{y}{x^3}dy \nonumber$

Solution

To test whether $dz$ is exact or inexact, we compare the following derivatives $\left(\frac{\partial (1/x^2)}{\partial y} \right )_x\overset{?}{=}\left(\frac{\partial (-y/x^3)}{\partial x} \right )_y \nonumber$ $\left(\frac{\partial (1/x^2)}{\partial y} \right )_x=0 \nonumber$ $\left(\frac{\partial (-y/x^3)}{\partial x} \right )_y=3yx^{-4} \nonumber$ We conclude that $dz$ is inexact, and therefore there is no function $z(x,y)$ whose total differential is $dz$.

Example $2$

Determine whether the following differential is exact or inexact. If it is exact, determine $z=z(x,y)$.
$dz=(2x+y)dx+(x+y)dy \nonumber$

Solution

To test whether $dz$ is exact or inexact, we compare the following derivatives $\left(\frac{\partial (2x+y)}{\partial y} \right )_x\overset{?}{=}\left(\frac{\partial (x+y)}{\partial x} \right )_y \nonumber$ If this equality holds, the differential is exact. $\left(\frac{\partial (2x+y)}{\partial y} \right )_x=1 \nonumber$ $\left(\frac{\partial (x+y)}{\partial x} \right )_y=1 \nonumber$ Therefore, the differential is exact. Because it is exact, it is the total differential of a function $z(x,y)$. The total differential of $z(x,y)$ is, by definition, $dz=\left (\frac{\partial z}{\partial x} \right )_{y} dx+\left (\frac{\partial z}{\partial y} \right )_{x} dy \nonumber$ Comparing this expression to the differential $dz=(2x+y)dx+(x+y)dy$: $\left (\frac{\partial z}{\partial x} \right )_{y}=(2x+y)$ $\label{eq:examplea} \left (\frac{\partial z}{\partial y} \right )_{x}=(x+y)$ To find $z(x,y)$, we can integrate the first expression partially with respect to $x$ keeping $y$ constant: $\int dz=z=\int (2x+y)dx=x^2+xy+f(y) \nonumber$ So far we have $\label{eq:exampleb} z = x^2+xy+f(y)$ so we need to find the function $f(y)$ to complete the expression above and finish the problem. To do that, we’ll take the derivative of $z$ with respect to $y$, and compare with Equation \ref{eq:examplea}. The derivative of Equation \ref{eq:exampleb} is: $\left (\frac{\partial z}{\partial y} \right )_{x}=x+\frac{df(y)}{dy} \nonumber$ Comparing with Equation \ref{eq:examplea}, we notice that $\frac{df(y)}{dy}=y$, and integrating, we obtain $f(y)=y^2/2+c$. Therefore, $dz=(2x+y)dx+(x+y)dy$ is the total differential of $z=x^2+xy+y^2/2+c$. We can check our result by working the problem in the opposite direction. If we are given $z=x^2+xy+y^2/2+c$ and we are asked to calculate its total differential, we would apply the definition: $dz=\left (\frac{\partial z}{\partial x} \right )_{y} dx+\left (\frac{\partial z}{\partial y} \right )_{x} dy \nonumber$ and because $\left (\frac{\partial z}{\partial x} \right )_{y} =y+2x \nonumber$ and $\left (\frac{\partial z}{\partial y} \right )_{x}=y+x \nonumber$ we would write $dz=(2x+y)dx+(x+y)dy$, which is the differential we were given in the problem. Check two extra solved examples in this video: http://tinyurl.com/kq4qecu
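The exactness test is also easy to automate. Below is a minimal sympy sketch (illustrative; the helper `is_exact` is our own, not a library function) that applies Equation \ref{eq:test} to the differentials of this section:

```python
import sympy as sp

# Test dz = f1 dx + f2 dy for exactness by comparing the mixed partials,
# i.e. (d f1/d y) at constant x against (d f2/d x) at constant y.
x, y = sp.symbols('x y')

def is_exact(f1, f2):
    return sp.simplify(sp.diff(f1, y) - sp.diff(f2, x)) == 0

print(is_exact(9*x**2 + 6*x*y + y**2, 3*x**2 + 2*x*y))  # True  (exact)
print(is_exact(1/x**2, -y/x**3))                        # False (Example 1)
print(is_exact(2*x + y, x + y))                         # True  (Example 2)
```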
Distinguishing between exact and inexact differentials has very important consequences in thermodynamics. We already mentioned thermodynamic variables such as the internal energy ($U$), volume, pressure, and temperature, and you probably heard about entropy ($S$) and free energy ($G$). All these quantities can be used to specify the state of a system. They are properties of the current state of the system, and they do not depend on the way the system got to that state. For example, if you have a system consisting of 1 mol of He at 298 K and 1 atm, the system will have a given pressure, internal energy, entropy and free energy regardless of its history. You may have compressed the system from 2 atm, or heated the gas from 273 K. All this is irrelevant to specifying the pressure, entropy, etc., because all these variables are what we call state functions. State functions depend only on the state of the system. Other quantities, such as work ($w$) and heat ($q$), on the other hand, are not state functions. There is no such thing as an amount of work or heat in a system. The amounts of heat and work that “flow” during a process connecting specified initial and final states depend on how the process is carried out. Quantities that depend on the path followed between states are called path functions.

How is all this connected to differentials? Quantities whose values are independent of the path are state functions, and their differentials are exact ($dP$, $dV$, $dG$, $dT$...). Quantities that depend on the path followed between states are path functions, and their differentials are inexact ($dw$, $dq$). As we will discuss in a moment, when we integrate an exact differential the result depends only on the final and initial points, but not on the path chosen. However, when we integrate an inexact differential, the path will have a huge influence on the result, even if we start and end at the same points. We’ll come back to this shortly.

Knowing that a differential is exact will help you derive equations and prove relationships when you study thermodynamics in your advanced physical chemistry courses. For example, you will learn that all the state functions we mentioned above are related through these equations: $dU=TdS-PdV$ $dH=TdS+VdP$ $dA=-SdT-PdV$ $dG=-SdT+VdP$ Here, we introduced two new state functions we haven’t talked about yet: the enthalpy ($H$) and the Helmholtz free energy ($A$). You will learn what they mean physically in CHM 346, but for now, just accept the fact that they are state functions, just like the entropy and the free energy. Although we didn’t write it explicitly, $T, P, V$ and $S$ are not constants. When we talked about gases, we learned that $P, V$ and $T$ are not independent variables. If you change two of these variables you change the third; in other words, you cannot independently vary the pressure, volume and temperature. The equation of state tells you how the three variables depend on each other. For one mole of gas, you can write the equation of state as a function $P=P(V,T)$, or as a function $V=V(T,P)$, or as a function $T=T(P,V)$. In the same way, you cannot independently change the pressure, volume, temperature and entropy of a system. If you modify the pressure and temperature, the volume and entropy will change as well.
To make this clear, we can re-write the equations above as: $\label{eq:dU} dU=T(S,V)dS-P(S,V)dV$ $\label{eq:dP} dH=T(S,P)dS+V(S,P)dP$ $\label{eq:dA} dA=-S(T,V)dT-P(T,V)dV$ $\label{eq:dG} dG=-S(T,P)dT+V(T,P)dP$ Because $U, H, A$ and $G$ are all state functions, their differentials are exact. We can derive a few relationships just from this fact. For example, we see that $G=G(T,P)$, and because its total differential, by definition, is: $\label{dG_total} dG=\left (\frac{\partial G}{\partial T} \right )_{P} dT+\left (\frac{\partial G}{\partial P} \right )_{T} dP$ from Equations \ref{eq:dG} and \ref{dG_total} we rapidly conclude that $\left (\frac{\partial G}{\partial T} \right )_{P}=-S \nonumber$ and $\left (\frac{\partial G}{\partial P} \right )_{T}=V \nonumber$ With minimal math, we concluded that if we change the pressure of a system at constant temperature, the rate of change of the free energy equals the volume. At this point this does not mean a lot to you, but hopefully you can appreciate how knowing that $G$ is a state function is enough for you to derive a thermodynamic relationship! We can take this even further. Because $G$ is a state function, $dG$ is exact, and therefore, from Equation \ref{eq:dG}: $\label{eq:maxwell} -\left (\frac{\partial S}{\partial P} \right )_{T}=\left (\frac{\partial V}{\partial T} \right )_{P}$ We just derived one of the four Maxwell relations. You can derive the other three from Equations \ref{eq:dU}-\ref{eq:dA}. Notice that once again, we derived this equation from the knowledge that $G$ is a state function. Why are these equations useful? Let’s see an example using Equation \ref{eq:maxwell}. We can integrate this expression at constant $T$ to get: $\int_{P_1}^{P_2} dS=\Delta S=-\int_{P_1}^{P_2} \left (\frac{\partial V}{\partial T} \right )_{P} dP ~ \text{(constant T)} \nonumber$ This equation tells you that the change in entropy in a system can be calculated by integrating $\left (\frac{\partial V}{\partial T} \right )_{P}$ data. This is extremely powerful, as we can easily measure temperature, pressure and volume in the lab, but we don’t have an instrument that directly measures entropy!

9.04: A Mathematical Toolbox

In Chapter 8 we learned some important properties of partial derivatives, and in this chapter we learned about exact and inexact differentials. We saw many examples where these properties can be used to create relationships between thermodynamic variables. Usually we will try to calculate what we want from information we have (which is usually the information we can access experimentally). We just saw how we can calculate a change in entropy from quantities that are easy to measure in the lab: volume, temperature and pressure. In Chapter 8 we saw how we can get an expression for a partial derivative from partial derivatives that are much easier to calculate. I will summarize some of these mathematical relationships, and call it our “toolbox”. The more comfortable you get using these relationships, the easier it will be for you to derive the thermodynamic relationships you will come across in your advanced physical chemistry courses.

1. The Euler reciprocity rule (Equation $8.1.1$): $\left ( \frac{\partial ^2f}{\partial x \partial y} \right )=\left ( \frac{\partial ^2f}{\partial y \partial x} \right )$
2. The inverse rule (Equation $8.1.2$): $\left ( \frac{\partial y}{\partial x} \right )=\frac{1}{\left ( \frac{\partial x}{\partial y} \right )}$
3. The cycle rule (Equation $8.1.3$): $\left ( \frac{\partial y}{\partial x} \right )_z\left ( \frac{\partial x}{\partial z} \right )_y\left ( \frac{\partial z}{\partial y} \right )_x=-1$
4. The chain rule (Equations $8.3.4$ and $8.3.5$): $\left ( \frac{\partial u}{\partial r} \right )_\theta=\left ( \frac{\partial u}{\partial x} \right )_y\left ( \frac{\partial x}{\partial r} \right )_\theta+\left ( \frac{\partial u}{\partial y} \right )_x\left ( \frac{\partial y}{\partial r} \right )_\theta$ $\left ( \frac{\partial u}{\partial \theta} \right )_r=\left ( \frac{\partial u}{\partial x} \right )_y\left ( \frac{\partial x}{\partial \theta} \right )_r+\left ( \frac{\partial u}{\partial y} \right )_x\left ( \frac{\partial y}{\partial \theta} \right )_r$
5. The definition of the total differential (Equation $9.1.7$): $du=\left (\frac{\partial u}{\partial x_1} \right )_{x_2...x_n} dx_1+\left (\frac{\partial u}{\partial x_2} \right )_{x_1, x_3...x_n} dx_2+...+\left (\frac{\partial u}{\partial x_n} \right )_{x_1...x_{n-1}} dx_n$
6. The concept of the exact differential (Section 9.2)
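As a small illustration of the entropy result from Section 9.3, here is a minimal sympy sketch (illustrative only, for one mole of an ideal gas) that evaluates $\Delta S=-\int_{P_1}^{P_2}\left(\partial V/\partial T\right)_P\,dP$ at constant $T$:

```python
import sympy as sp

# Entropy change of 1 mol of ideal gas compressed isothermally from P1 to P2,
# using the Maxwell relation (dS/dP)_T = -(dV/dT)_P with V = RT/P.
T, P, R = sp.symbols('T P R', positive=True)
P1, P2 = sp.symbols('P_1 P_2', positive=True)

V = R * T / P              # molar volume of an ideal gas
dVdT = sp.diff(V, T)       # (dV/dT)_P = R/P
dS = sp.integrate(-dVdT, (P, P1, P2))
print(sp.simplify(dS))     # R*(log(P_1) - log(P_2)), i.e. -R ln(P2/P1)
```

The result is negative for a compression ($P_2 > P_1$), which makes physical sense: squeezing the gas reduces its entropy.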
In Section 9.1, we discussed that in order to properly calculate the change in pressure we would need to integrate the differential defined in Equation \ref{eq:differentials1}: $dP=\left (\dfrac{\partial P}{\partial V} \right )_{T,n} dV+\left (\dfrac{\partial P}{\partial T} \right )_{V,n} dT \label{eq:differentials1}$ This raises the question of how to integrate differentials such as this one. Before we focus on this question, let’s discuss what we expect for an exact differential. Let’s say that we know how to perform these integrals, and we integrate $dP$ from an initial pressure ($P_i$) to a final pressure ($P_f$) to calculate $\Delta P$ for a change that is not infinitesimal: $\Delta P = \int_{P_i}^{P_f} dP \nonumber$ Would it surprise you that the result equals the final pressure minus the initial pressure? $\Delta P = P_f - P_i \nonumber$ Hopefully not; the change in pressure will obviously be the final pressure minus the initial pressure, regardless of whether we did this slowly, fast, at constant volume, constant temperature, etc. In other words, you just need the information about the state of the system at the beginning and the end of the process, but you do not need to know anything about what happened in between. All this makes sense because $P$ is a state function, and the same argument applies to other state functions, such as entropy, internal energy, free energy, etc. We just reached an important conclusion: if we integrate an exact differential, the result will be independent of the path, and it will equal the function at the end point minus the function at the initial point. If we integrate an inexact differential this is not true, because we will be integrating the differential of a function that is not a state function. We’ll come back to this many times, but it is important that before getting lost in the math we keep in mind what to expect.

We already mentioned the word “path”, but what do we mean by that? In the example of the gas, the path would be described by the values of the temperature and volume at all times. For example, Figure $1$ shows two possible paths that result in the same change in pressure. We could imagine an infinite number of other options, and of course we are not restricted to keeping one variable constant while changing the other. We can integrate $dP$ along one path or the other, but we already know what we will get: $\int dP = P_2-P_1$, regardless of the path. Work and heat, on the other hand, are not state functions. There is no such thing as an amount of work or heat in a system. The amounts of heat and work that “flow” during a process connecting specified initial and final states depend on how the process is carried out. This means that if we want to calculate the work or the heat involved in the process, we would need to integrate the inexact differentials $dw$ and $dq$ indicating the particular path used to take the system from the initial to the final states: $q=\int\limits_{path} dq \nonumber$ $w=\int\limits_{path} dw \nonumber$ For one mole of an ideal gas ($P = RT/V$), $dP=\left (\dfrac{\partial P}{\partial V} \right )_{T,n} dV+\left (\dfrac{\partial P}{\partial T} \right )_{V,n} dT=-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT \nonumber$ From our previous discussion, we know the result of integrating the differential $dP=-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT \nonumber$ along any path.
The result should be the final pressure minus the initial pressure: $\label{eq:diff17} \Delta P=R(T_f/V_f-T_i/V_i)$ where the subscripts $f$ and $i$ refer to the final and initial states. Even if we know the answer, let’s do it anyway so we learn how to integrate differentials. We will consider the two paths depicted in Figure $1$. In both cases the initial temperature is 250 K, the initial volume is 30 L, the final temperature is 300 K, and the final volume is 20 L. Let’s start with the ‘red’ path. This path is the sum of two components, one where we change the volume at constant temperature, and another one where we change the temperature at constant volume. Let’s call these individual steps path 1 and path 2, so the total path is path 1 + path 2: $\int\limits_{path}dP=\int\limits_{path 1}dP+\int\limits_{path 2}dP \nonumber$ $\int\limits_{path}{\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)}=\int\limits_{path 1}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)+\int\limits_{path 2}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right) \nonumber$ In path 1, we keep the temperature constant, so $dT=0$. Furthermore, the temperature equals $T_i$ during the whole process: $\int\limits_{path 1}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int\limits_{path 1}\left(-\dfrac{RT_i}{V^2}dV\right)=\int_{V_i}^{V_f}-\dfrac{RT_i}{V^2}dV \nonumber$ In path 2, we keep the volume constant, so $dV=0$. Furthermore, the volume equals $V_f$ during the whole process: $\int\limits_{path 2}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int\limits_{path 2}\dfrac{R}{V_f} dT=\int_{T_i}^{T_f}\dfrac{R}{V_f} dT \nonumber$ Putting the two results together: $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int_{V_i}^{V_f}-\dfrac{RT_i}{V^2}dV+\int_{T_i}^{T_f}\dfrac{R}{V_f} dT \nonumber$ $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\left. \dfrac{RT_i}{V}\right|_{V_i}^{V_f}+\left. \dfrac{RT}{V_f}\right|_{T_i}^{T_f}=RT_i\left(\dfrac{1}{V_f}-\dfrac{1}{V_i}\right)+\dfrac{R}{V_f}(T_f-T_i) \nonumber$ $RT_i\left(\dfrac{1}{V_f}-\dfrac{1}{V_i}\right)+\dfrac{R}{V_f}(T_f-T_i)=R\left(\dfrac{T_i}{V_f}-\dfrac{T_i}{V_i}+\dfrac{T_f}{V_f}-\dfrac{T_i}{V_f} \right)=R\left(\dfrac{T_f}{V_f}-\dfrac{T_i}{V_i}\right) \nonumber$ The result is, as expected, identical to Equation \ref{eq:diff17}. Let’s now consider the two-step path depicted in green in Figure $1$. We’ll follow the same ideas we used for the path shown in red. In the first part of the path we change the temperature from $T_i$ to $T_f$ at constant volume, $V=V_i$. In the second part of the path we change the volume from $V_i$ to $V_f$ at constant temperature, $T=T_f$. In the first part, $dV=0$ and $V=V_i$ at all times. In the second part, $dT=0$ and $T=T_f$ at all times: $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int_{T_i}^{T_f}\left(\dfrac{R}{V_i} dT\right)+\int_{V_i}^{V_f}\left(-\dfrac{RT_f}{V^2}dV\right) \nonumber$ $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\left. \dfrac{RT}{V_i}\right|_{T_i}^{T_f}+\left. \dfrac{RT_f}{V}\right|_{V_i}^{V_f} \nonumber$ $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\dfrac{R}{V_i}(T_f-T_i)+RT_f\left(\dfrac{1}{V_f}-\dfrac{1}{V_i}\right)=R\left(\dfrac{T_f}{V_f}-\dfrac{T_i}{V_i}\right) \nonumber$ The result is, as expected, $P_f-P_i$ (Equation \ref{eq:diff17}).
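These two calculations are also easy to check numerically. The sketch below (illustrative; it uses scipy's quad for the one-dimensional integrals) integrates $dP$ along the red and green paths and compares the results with $P_f-P_i$:

```python
from scipy.integrate import quad

# Integrate dP = -(RT/V^2) dV + (R/V) dT for 1 mol of ideal gas along the
# two two-step paths of Figure 1, and compare with Pf - Pi.
R = 0.08206                      # L atm / (mol K)
Ti, Vi, Tf, Vf = 250.0, 30.0, 300.0, 20.0

# 'red' path: V changes at constant T = Ti, then T changes at constant V = Vf
red = quad(lambda V: -R*Ti/V**2, Vi, Vf)[0] + quad(lambda T: R/Vf, Ti, Tf)[0]

# 'green' path: T changes at constant V = Vi, then V changes at constant T = Tf
green = quad(lambda T: R/Vi, Ti, Tf)[0] + quad(lambda V: -R*Tf/V**2, Vi, Vf)[0]

exact = R * (Tf/Vf - Ti/Vi)      # Pf - Pi from the equation of state
print(red, green, exact)         # all three agree (about 0.547 atm)
```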
Because $dP$ is exact, it does not matter which path we choose to go from $(V_i,P_i)$ to $(V_f,P_f)$; the result of the integral of $dP$ will always be the same, $R\left(\dfrac{T_f}{V_f}-\dfrac{T_i}{V_i}\right)$. Let’s try another path; this time one that does not keep any of the variables constant at any time. Consider the path that is the straight line that joins the points $(V_i,T_i)$ and $(V_f,T_f)$. In order to integrate $dP$ along a particular path, we need the equation of the path indicating how the variables $V$ and $T$ are connected at all times. In this case, $T=a+bV$, where $a$ is the $y$-intercept and $b$ is the slope. You should be able to prove that the values of $a$ and $b$ for this path are: \begin{align*} a&=T_i-\dfrac{T_f-T_i}{V_f-V_i}V_i \[4pt] b&=\dfrac{T_f-T_i}{V_f-V_i} \end{align*} \nonumber Because $T=a+bV$, $dT=bdV$, $V=(T-a)/b$, and $dV=dT/b$. These relationships tell us how $T$ and $V$ are connected throughout the path, and we can therefore write these equivalent expressions: $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int_{V_i}^{V_f}\left(-\dfrac{R(a+bV)}{V^2}\right)dV+\int_{T_i}^{T_f}\left(\dfrac{bR}{T-a}\right) dT \nonumber$ $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int_{V_i}^{V_f}\left(-\dfrac{R(a+bV)}{V^2}\right)dV+\int_{V_i}^{V_f}\left(\dfrac{R}{V}\right) b\,dV \nonumber$ $\int\limits_{path}\left(-\dfrac{RT}{V^2}dV+\dfrac{R}{V} dT\right)=\int_{T_i}^{T_f}\left(-\dfrac{RT}{\left[(T-a)/b\right]^2}\right)\dfrac{1}{b}\,dT+\int_{T_i}^{T_f}\left(\dfrac{bR}{T-a} \right)dT \nonumber$ In the first case we just wrote the first integrand in terms of $V$ only and the second integrand in terms of $T$ only. To achieve this, we used the information from the path to see how $V$ and $T$ are related as we move from our initial to our final states. The same idea applies to the second and third lines, where we wrote everything in terms of $V$ or in terms of $T$. The three equations will give the same result regardless of whether the differential is exact or inexact. However, because we are integrating an exact differential, the result will be identical to the result we got for the two other paths that share the same initial and final states, and also identical to $P_f-P_i$. The three equations above are not too hard to solve, but they are more time consuming than the integrals we had to solve for the paths with sections where one or the other variable remains constant. This is powerful, because it means that if you are integrating an exact differential, you can get smart and solve the integral for a very easy path, as long as the initial and final states are the same. You know the result will be the same because the differential is exact. If, on the other hand, the differential is inexact, we are out of luck. The integral depends on the path, so we need to solve the path we are given. Because $dP$ is exact, the line integral equals $P_f-P_i$. A consequence of this is that the integral along a closed path (one where the final state is the same as the initial state, so $P_i=P_f$) is zero. Mathematically: $\oint dP=0 \nonumber$ where the circle inside the integration symbol means that the path is closed. This is true for any exact differential, but not necessarily true for a differential that is inexact. Coming back to thermodynamics, imagine one mole of a gas in a container whose volume is first reduced from 30 L to 20 L at a constant temperature T = 250 K.
You then heat the gas up to 300 K keeping the volume constant, then increase the volume back to 30 L keeping the temperature constant, and finally cool it down to 250 K at constant volume (see Figure $1$). Because the initial and final states are the same, the line integral of any state function is zero: $\Delta P=\oint dP=0 \nonumber$ $\Delta G= \oint dG= 0 \nonumber$ $\Delta S =\oint dS=0 \nonumber$ and so on. This closed path does not involve a change in pressure, free energy or entropy, because these functions are state functions, and the final state is identical to the initial state. On the other hand, $w=\oint dw\neq0 \nonumber$ $q= \oint dq\neq 0 \nonumber$ because we are integrating inexact differentials. Physically, you had to do work to compress the gas, heat it up, expand it, cool it down, etc. It does not matter that you end up exactly where you started; work and heat were involved in the process. It may be possible for a particular closed path to yield $w=0$ or $q=0$, but in general this does not need to be the case. Example $1$ Given the following differential $dz=x\;dy+2y\;dx$, and the closed path shown in the figure, calculate the line integral $\int\limits_{path}dz$ Note: This problem is also available in video format: http://tinyurl.com/mszuwr7 Solution The path is divided into three sections, so $\int\limits_{path}dz= \int_{a}dz+ \int_{b}dz+ \int_{c}dz \nonumber$ In section $a$: $y=2$, $dy=0$ ($y$ is a constant), and $x$ changes from an initial value of 2 to a final value of 1: $\int_{a}dz=\int_{2}^1 4 dx=4x\left. \right|_{2}^1=4-8=-4 \nonumber$ In section $b$: $y=1+x$, $dy=dx$, and $x$ changes from an initial value of 1 to a final value of 2: $\int_{b}dz=\int_{b}x\;dy+2y\;dx=\int_{1}^{2} x\; dx+\int_{1}^2 2(1+x) \;dx=\left.\dfrac{x^2}{2}\right|_{1}^{2}+2\left.\dfrac{(1+x)^2}{2}\right|_{1}^{2}=\dfrac{3}{2}+(9-4)=\dfrac{13}{2} \nonumber$ In section $c$: $x=2$, $dx=0$ ($x$ is a constant), and $y$ changes from an initial value of 3 to a final value of 2: $\int_{c}dz=\int_{3}^2 2 dy=2y\left. \right|_{3}^2=4-6=-2 \nonumber$ Therefore, $\int\limits_{path}dz= \int_{a}dz+ \int_{b}dz+ \int_{c}dz=-4+13/2-2=\displaystyle{\color{Maroon}1/2} \nonumber$ Notice that the integral is not zero even though the path is closed. This is not surprising given that the differential is inexact. Example $2$ Consider the differential $du=(x^2-y^2)dx+(2xy)dy \nonumber$ 1. Is $du$ exact or inexact? 2. Explain why each of the following is true or false: • $du$ is the total differential of some function $u(x,y)$. Find $u(x,y)$ if possible. • $\int_{a}du= \int_{b}du= \int_{c}du$ as long as $a,b$ and $c$ are paths in the $(x,y)$ space that share the same starting and ending points. 3. Calculate $\int\limits_{path}du$ if the path is the straight line joining the points (0,2) and (2,0). Solution: To test whether $du$ is exact or inexact, we compare the following derivatives $\left(\dfrac{\partial (x^2-y^2)}{\partial y} \right )_x\overset{?}{=}\left(\dfrac{\partial (2xy)}{\partial x} \right )_y \nonumber$ The derivative on the left is $-2y$, while the derivative on the right is $2y$. Because the two partial derivatives are not the same, the differential is inexact. Because the differential is inexact, it is not the total differential of a function $u(x,y)$. We cannot find the function because it does not exist. The line integrals $\int_{a}du, \int_{b}du$ and $\int_{c}du$ will in principle be different because the integral of an inexact differential depends not only on the initial and final states, but also on the path used to get from the initial to the final state.
The statement would be true for an exact differential. To calculate the integral along the straight line joining the points (0,2) and (2,0), we first need to find the equation $y(x)$ that describes this path. Sketching the function is not a must, but it might help. The equation of this straight line is $y=2-x$. Therefore, along the path, $y=2-x$ and $dy=-dx$. The variable $x$ changes from an initial value $x=0$ to a final value $x=2$: $\int\limits_{path}du=\int\limits_{path}(x^2-y^2)dx+(2xy)dy \nonumber$ It is important to stress that $x$ and $y$ are not independent along the path; instead, they are connected through the equation of the path. We will write the equation in terms of $x$ (we could do it in terms of $y$ with identical results): \begin{align*} \int\limits_{path}du &=\int\limits_{path}(x^2-y^2)dx+(2xy)dy \\[4pt] &=\int_{0}^2(x^2-(2-x)^2)dx+\int_{0}^2(2x(2-x))(-dx) \\[4pt] &=\int_{0}^2(x^2-(4+x^2-4x))dx-\int_{0}^2(4x-2x^2)dx \\[4pt] &=\int_{0}^2(4x-4)dx-\int_{0}^2(4x-2x^2)dx=\int_{0}^2(2x^2-4)dx \\[4pt] &=\left. \left(\dfrac{2x^3}{3}-4x\right)\right|_{0}^{2}=\dfrac{16}{3}-8=-\dfrac{8}{3} \\[4pt] &=\displaystyle{\color{Maroon}-\dfrac{8}{3}} \end{align*} \nonumber
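As a sanity check (my addition, not part of the original solution), the sympy sketch below reproduces the $-8/3$ result for the straight-line path, and then integrates $du$ along a different path joining the same two points to show that the value changes, as expected for an inexact differential:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = x**2 - y**2   # coefficient of dx in du
N = 2*x*y         # coefficient of dy in du

# Straight line from (0,2) to (2,0): y = 2 - x, so dy = -dx
straight = sp.integrate(M.subs(y, 2 - x) - N.subs(y, 2 - x), (x, 0, 2))
print(straight)   # -8/3, matching the result above

# A different path between the same points:
# (0,2) -> (0,0) along x = 0 (dx = 0), then (0,0) -> (2,0) along y = 0 (dy = 0)
leg1 = sp.integrate(N.subs(x, 0), (y, 2, 0))   # contributes 0
leg2 = sp.integrate(M.subs(y, 0), (x, 0, 2))   # contributes 8/3
print(leg1 + leg2)  # 8/3: a different value, because du is inexact
```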
To summarize, given a function $f(x,y)$, its total differential $df$ is, by definition: $df=\left (\dfrac{\partial f}{\partial x} \right )_{y} dx+\left (\dfrac{\partial f}{\partial y} \right )_{x} dy \nonumber$ Given an arbitrary differential $df=M(x,y)dx+N(x,y)dy \nonumber$ where $M$ and $N$ are functions of $x$ and $y$, the differential is exact if it is the total differential of a function $f(x,y)$. To test for exactness we compare the partial derivative of $M(x,y)$ with respect to $y$, and the partial derivative of $N(x,y)$ with respect to $x$: $\left(\dfrac{\partial M(x,y)}{\partial y} \right )_x\overset{?}{=}\left(\dfrac{\partial N(x,y)}{\partial x} \right )_y \nonumber$ If the derivatives are identical, we conclude that the differential $df$ is exact, and therefore it is the total differential of a function $f(x,y)$. To find the function, we notice that for an exact differential: $M(x,y)=\left (\dfrac{\partial f}{\partial x} \right )_{y} \nonumber$ $N(x,y)=\left (\dfrac{\partial f}{\partial y} \right )_{x} \nonumber$ We can then find the function by partial integration: $f(x,y)=\int df= \int M(x,y) dx ~ ( \text{at constant } y) \nonumber$ $f(x,y)=\int df= \int N(x,y) dy ~ ( \text{at constant } x) \nonumber$ It is important to keep in mind that the integration constant in the first case will be an arbitrary function of $y$, and in the second case an arbitrary function of $x$.

For an exact differential, the line integral does not depend on the path, but only on the initial and final points. Furthermore, because the differential is exact, it is the total differential of a state function $f(x,y)$. This means that the integral of $df$ along any path is simply the function $f$ evaluated at the final state minus the function $f$ evaluated at the initial state: $\int_{c}M(x,y)dx+N(x,y)dy=\int_{c}df=\Delta f=f(x_2,y_2)-f(x_1,y_1) \nonumber$ where $c$ represents the path that starts at the point $(x_1,y_1)$ and ends at the point $(x_2,y_2)$. If the initial and the final states are identical, for an exact differential: $\oint df=0 \nonumber$ For an inexact differential, $\int_{c}df$ will in general depend on the path $c$.

9.07: Problems

Problem $1$ Determine whether the following differentials are exact or inexact. If they are exact, determine $u=u(x,y)$. 1. $du=(2ax+by)dx+(bx+2cy)dy$ 2. $du=(x^2-y^2)dx+(2xy)dy$

Problem $2$ Determine whether $dz$ is exact or inexact. If it is exact, determine $z=z(P,T)$. $dz=-\frac{RT}{P^2}dP+\frac{R}{P}dT \nonumber$

Problem $3$ From Equation \ref{eq:dG}, and using the fact that $G$ is a state function, prove that the change in entropy ($\Delta S$) of one mole of an ideal gas whose pressure changes from an initial value $P_1$ to a final value $P_2$ at constant temperature is: $\Delta S =-R \ln{\frac{P_2}{P_1}}\nonumber$

Problem $4$ From Equations \ref{eq:dU}-\ref{eq:dA}, and using the fact that $U,H$ and $A$ are state functions, derive the three corresponding Maxwell relations.

Problem $5$ Given the following differential: $dz=xy dx + 2y dy\nonumber$ 1. Determine if it is exact or inexact. If it is, obtain $z(x,y)$. 2. Calculate the line integrals $\int_c{dz}$ for the paths enumerated below: 1. the line $y=2x$ from $x=0$ to $x=2$ 2. the curve $y = x^2$ from $x = 0$ to $x = 2$ 3. any other path of your choice that joins the same initial and final points.
Problem $6$ For a mole of a perfect monoatomic gas, the internal energy can be expressed as a function of the pressure and volume as $U = \frac{3}{2}PV\nonumber$ 1. Write the total differential of $U$, $dU$. 2. Calculate the line integrals $\int_c{dU}$ for the paths shown below ($c_1, c_2, c_3$): 3. Calculate $U(V_f,P_f)-U(V_i,P_i)$ and compare with the results of part 2 (Note: $f$ refers to the final state and $i$ to the initial state). 4. Considering your previous results, calculate $\int_c{dU}$ for the path below:

Note: The differentials referenced in Problems 3 and 4 are, as defined in Section 9.3: $\label{eq:dU} dU=T(S,V)dS-P(S,V)dV$ $\label{eq:dH} dH=T(S,P)dS+V(S,P)dP$ $\label{eq:dA} dA=-S(T,V)dT-P(T,V)dV$ $\label{eq:dG} dG=-S(T,P)dT+V(T,P)dP$
Objectives • Understand the concept of area and volume elements in cartesian, polar and spherical coordinates. • Be able to integrate functions expressed in polar or spherical coordinates. • Understand how to normalize orbitals expressed in spherical coordinates, and perform calculations involving triple integrals. • Understand the concept of probability distribution function. • 10.1: Coordinate Systems The simplest coordinate system consists of coordinate axes oriented perpendicularly to each other. These coordinates are known as cartesian coordinates or rectangular coordinates, and you are already familiar with their two-dimensional and three-dimensional representation. • 10.2: Area and Volume Elements In any coordinate system it is useful to define a differential area and a differential volume element. • 10.3: A Refresher on Electronic Quantum Numbers Each electron in an atom is described by four different quantum numbers.  The first three quantum numbers specify the particular orbital the electron occupies. Two electrons of opposite spin can occupy this orbital. • 10.4: A Brief Introduction to Probability We have talked about the fact that the wavefunction can be interpreted as a probability, but this is a good time to formalize some concepts and understand what we really mean by that. Let’s start by reviewing (or learning) a few concepts in probability theory. • 10.5: Problems 10: Plane Polar and Spherical Coordinates The simplest coordinate system consists of coordinate axes oriented perpendicularly to each other. These coordinates are known as cartesian coordinates or rectangular coordinates, and you are already familiar with their two-dimensional and three-dimensional representation. In the plane, any point $P$ can be represented by two signed numbers, usually written as $(x,y)$, where the coordinate $x$ is the perpendicular distance from the $y$ axis, and the coordinate $y$ is the perpendicular distance from the $x$ axis (Figure $1$, left). In space, a point is represented by three signed numbers, usually written as $(x,y,z)$ (Figure $1$, right). Often, positions are represented by a vector, $\vec{r}$, shown in red in Figure $1$. In three dimensions, this vector can be expressed in terms of the coordinate values as $\vec{r}=x\hat{i}+y\hat{j}+z\hat{k}$, where $\hat{i}=(1,0,0)$, $\hat{j}=(0,1,0)$ and $\hat{k}=(0,0,1)$ are the so-called unit vectors. We already know that often the symmetry of a problem makes it natural (and easier!) to use other coordinate systems. In two dimensions, the polar coordinate system defines a point in the plane by two numbers: the distance $r$ to the origin, and the angle $\theta$ that the position vector forms with the $x$-axis. Notice the difference between $\vec{r}$, a vector, and $r$, the distance to the origin (and therefore the modulus of the vector). Vectors are often denoted in bold face (e.g. r) without the arrow on top, so be careful not to confuse it with $r$, which is a scalar. While in cartesian coordinates $x$, $y$ (and $z$ in three-dimensions) can take values from $-\infty$ to $\infty$, in polar coordinates $r$ is a positive value (consistent with a distance), and $\theta$ can take values in the range $[0,2\pi]$.
The relationship between the cartesian and polar coordinates in two dimensions can be summarized as: $\label{eq:coordinates_1} x=r\cos\theta$ $\label{eq:coordinates_2} y=r\sin\theta$ $\label{eq:coordinates_3} r^2=x^2+y^2$ $\label{eq:coordinates_4} \tan \theta=y/x$ In three dimensions, the spherical coordinate system defines a point in space by three numbers: the distance $r$ to the origin, an azimuthal angle $\phi$ that measures the angle between the positive $x$-axis and the line from the origin to the point $P$ projected onto the $xy$-plane, and a polar angle $\theta$, defined as the angle between the $z$-axis and the line from the origin to the point $P$. Before we move on, it is important to mention that depending on the field, you may see the Greek letter $\theta$ (instead of $\phi$) used for the angle between the positive $x$-axis and the line from the origin to the point $P$ projected onto the $xy$-plane. That is, $\theta$ and $\phi$ may appear interchanged. This can be very confusing, so you will have to be careful. When using spherical coordinates, it is important that you see how these two angles are defined so you can identify which is which. Spherical coordinates are useful in analyzing systems that are symmetrical about a point. For example, a sphere that has the cartesian equation $x^2+y^2+z^2=R^2$ has the very simple equation $r = R$ in spherical coordinates. Spherical coordinates are the natural coordinates for physical situations where there is spherical symmetry (e.g. atoms). The relationship between the cartesian coordinates and the spherical coordinates can be summarized as: $\label{eq:coordinates_5} x=r\sin\theta\cos\phi$ $\label{eq:coordinates_6} y=r\sin\theta\sin\phi$ $\label{eq:coordinates_7} z=r\cos\theta$ These relationships are not hard to derive if one considers the triangles shown in Figure $4$.
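If you ever need these conversions numerically, they are straightforward to code. The Python sketch below is my own illustration (the function names are arbitrary) of Equations \ref{eq:coordinates_5}-\ref{eq:coordinates_7} and their inverses, following the convention used in this chapter ($\theta$ polar, $\phi$ azimuthal):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """theta is the polar angle (from the +z axis); phi is the azimuthal angle."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """Inverse conversion; assumes the point is not at the origin (r > 0)."""
    r = math.sqrt(x**2 + y**2 + z**2)
    theta = math.acos(z / r)                 # polar angle in [0, pi]
    phi = math.atan2(y, x) % (2 * math.pi)   # azimuthal angle in [0, 2*pi)
    return r, theta, phi

# Round trip: the point (r=1, theta=pi/3, phi=pi/4) survives both conversions
print(cartesian_to_spherical(*spherical_to_cartesian(1.0, math.pi/3, math.pi/4)))
```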
In any coordinate system it is useful to define a differential area and a differential volume element. In cartesian coordinates the differential area element is simply $dA=dx\;dy$ (Figure $1$), and the volume element is simply $dV=dx\;dy\;dz$. We already performed double and triple integrals in cartesian coordinates, and used the area and volume elements without paying any special attention. For example, in an earlier example we were required to integrate the function ${\left | \psi (x,y,z) \right |}^2$ over all space, and without thinking too much we used the volume element $dx\;dy\;dz$. We also knew that "all space" meant $-\infty\leq x\leq \infty$, $-\infty\leq y\leq \infty$ and $-\infty\leq z\leq \infty$, and therefore we wrote: $\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz=1 \nonumber$ But what if we had to integrate a function that is expressed in spherical coordinates? Would we just replace $dx\;dy\;dz$ by $dr\; d\theta\; d\phi$? The answer is no, because the volume element in spherical coordinates depends also on the actual position of the point. This will make more sense in a minute. Coming back to coordinates in two dimensions, it is intuitive to understand why the area element in cartesian coordinates is $dA=dx\;dy$ independently of the values of $x$ and $y$. This is shown in the left side of Figure $2$. However, in polar coordinates, we see that the areas of the gray sections, which are both constructed by increasing $r$ by $dr$, and by increasing $\theta$ by $d\theta$, depend on the actual value of $r$. Notice that the area highlighted in gray increases as we move away from the origin. The area shown in gray can be calculated from geometrical arguments as $dA=\left[\pi (r+dr)^2- \pi r^2\right]\dfrac{d\theta}{2\pi}.$ Because $dr$ is infinitesimally small, we can neglect the term $(dr)^2$, and $dA= r\; dr\;d\theta$ (see Figure $3$). Let's see how this affects a double integral with an example from quantum mechanics. The wave function of the ground state of a two dimensional harmonic oscillator is: $\psi(x,y)=A e^{-a(x^2+y^2)}$. We know that the quantity $|\psi|^2$ represents a probability density, and as such, needs to be normalized: $\int\limits_{all\;space} |\psi|^2\;dA=1 \nonumber$ This statement is true regardless of whether the function is expressed in polar or cartesian coordinates. However, the limits of integration, and the expression used for $dA$, will depend on the coordinate system used in the integration. In cartesian coordinates, "all space" means $-\infty<x<\infty$ and $-\infty<y<\infty$. The differential of area is $dA=dxdy$: $\int\limits_{all\;space} |\psi|^2\;dA=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} A^2e^{-2a(x^2+y^2)}\;dxdy=1 \nonumber$ In polar coordinates, "all space" means $0<r<\infty$ and $0<\theta<2\pi$. The differential of area is $dA=r\;drd\theta$. The function $\psi(x,y)=A e^{-a(x^2+y^2)}$ can be expressed in polar coordinates as: $\psi(r,\theta)=A e^{-ar^2}$ $\int\limits_{all\;space} |\psi|^2\;dA=\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi} A^2 e^{-2ar^2}r\;d\theta dr=1 \nonumber$ Both versions of the double integral are equivalent, and both can be solved to find the value of the normalization constant ($A$) that makes the double integral equal to 1.
In polar coordinates: $\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi} A^2 e^{-2ar^2}r\;d\theta dr=A^2\int\limits_{0}^{\infty}e^{-2ar^2}r\;dr\int\limits_{0}^{2\pi}\;d\theta =A^2\times\dfrac{1}{4a}\times2\pi=1 \nonumber$ Therefore, $A=\sqrt{2a/\pi}$ (the integral in $r$ was solved using the formula sheet). The same value is of course obtained by integrating in cartesian coordinates. It is now time to turn our attention to triple integrals in spherical coordinates. In cartesian coordinates, the differential volume element is simply $dV= dx\,dy\,dz$, regardless of the values of $x, y$ and $z$. Using the same arguments we used for polar coordinates in the plane, we will see that the differential of volume in spherical coordinates is not $dV=dr\,d\theta\,d\phi$. The geometrical derivation of the volume is a little bit more complicated, but from Figure $4$ you should be able to see that $dV$ depends on $r$ and $\theta$, but not on $\phi$. The volume of the shaded region is $\label{eq:dv} dV=r^2\sin\theta\,d\theta\,d\phi\,dr$ We will exemplify the use of triple integrals in spherical coordinates with some problems from quantum mechanics. We already introduced the Schrödinger equation, and even solved it for a simple system in Section 5.4. We also mentioned that spherical coordinates are the obvious choice when writing this and other equations for systems such as atoms, which are symmetric around a point. As we saw in the case of the particle in the box (Section 5.4), the solution of the Schrödinger equation has an arbitrary multiplicative constant. Because of the probabilistic interpretation of wave functions, we determine this constant by normalization. The same situation arises in three dimensions when we solve the Schrödinger equation to obtain the expressions that describe the possible states of the electron in the hydrogen atom (i.e. the orbitals of the atom). The Schrödinger equation is a partial differential equation in three dimensions, and the solutions will be wave functions that are functions of $r, \theta$ and $\phi$. The lowest energy state, which in chemistry we call the 1s orbital, turns out to be: $\psi_{1s}=Ae^{-r/a_0} \nonumber$ This particular orbital depends on $r$ only, which should not surprise a chemist given that the electron density in all $s$-orbitals is spherically symmetric. We will see that $p$ and $d$ orbitals depend on the angles as well. Regardless of the orbital, and the coordinate system, the normalization condition states that: $\int\limits_{all\;space} |\psi|^2\;dV=1 \nonumber$ For a wave function expressed in cartesian coordinates, $\int\limits_{all\;space} |\psi|^2\;dV=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\psi^*(x,y,z)\psi(x,y,z)\,dxdydz \nonumber$ where we used the fact that $|\psi|^2=\psi^* \psi$. In spherical coordinates, "all space" means $0\leq r\leq \infty$, $0\leq \phi\leq 2\pi$ and $0\leq \theta\leq \pi$. The differential $dV$ is $dV=r^2\sin\theta\,d\theta\,d\phi\,dr$, so $\int\limits_{all\;space} |\psi|^2\;dV=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ Let's see how we can normalize orbitals using triple integrals in spherical coordinates. Example $1$ When solving the Schrödinger equation for the hydrogen atom, we obtain $\psi_{1s}=Ae^{-r/a_0}$, where $A$ is an arbitrary constant that needs to be determined by normalization. Find $A$.
Solution In spherical coordinates, $\int\limits_{all\; space} |\psi|^2\;dV=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ Because this orbital is a real function, $\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)=\psi^2(r,\theta,\phi)$. In this case, $\psi^2(r,\theta,\phi)=A^2e^{-2r/a_0}$. Therefore, $\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi) \, r^2 \sin\theta \, dr d\theta d\phi=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}A^2e^{-2r/a_0}\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ $\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}A^2e^{-2r/a_0}\,r^2\sin\theta\,dr d\theta d\phi=A^2\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin\theta \;d\theta\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr \nonumber$ The result is a product of three integrals in one variable: $\int\limits_{0}^{2\pi}d\phi=2\pi \nonumber$ $\int\limits_{0}^{\pi}\sin\theta \;d\theta=-\cos\theta|_{0}^{\pi}=2 \nonumber$ $\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr=? \nonumber$ From the formula sheet: $\int_{0}^{\infty}x^ne^{-ax}dx=\dfrac{n!}{a^{n+1}}, \nonumber$ where $a>0$ and $n$ is a positive integer. In this case, $n=2$ and $a=2/a_0$, so: $\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr=\dfrac{2!}{(2/a_0)^3}=\dfrac{2}{8/a_0^3}=\dfrac{a_0^3}{4} \nonumber$ Putting the three pieces together: $A^2\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin\theta \;d\theta\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr=A^2\times2\pi\times2\times \dfrac{a_0^3}{4}=1 \nonumber$ $A^2\times \pi \times a_0^3=1\rightarrow A=\dfrac{1}{\sqrt{\pi a_0^3}} \nonumber$ The normalized 1s orbital is, therefore: $\displaystyle{\color{Maroon}\psi_{1s}=\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0}} \nonumber$ 10.03: A Refresher on Electronic Quantum Numbers Each electron in an atom is described by four different quantum numbers. • Principal quantum number: $n=1,2,3...\infty$. It determines the overall size and energy of an orbital. • Angular momentum quantum number: $l=0,1, 2...(n-1)$. It is related with the shape of the orbital. In chemistry, we usually use the letters $s,p,d,f...$ to denote an orbital with $l=0, 1,2, 3...$. For example, for the 1s orbital, $n=1$ and $l=0$. • Magnetic quantum number: $m_l=-l, -l+1,...,0,...l-1, l$. It specifies the orientation of the orbital. For a $p$ orbital, for example, $l=1$ and therefore $m_l$ can take the values $-1, 0, 1$. In general, there are $2l+1$ values of $m_l$ for a given value of $l$. That is why $p$ orbitals come in groups of 3, $d$ orbitals come in groups of 5, etc. • Spin quantum number: $m_s=-1/2$ or $1/2$. The Pauli exclusion principle states that no two electrons in the same atom can have identical values for all four of their quantum numbers. This means that no more than two electrons can occupy the same orbital, and that two electrons in the same orbital must have opposite spins. The first three quantum numbers specify the particular orbital the electron occupies. For example, the orbital $2p_{-1}$ is the orbital with $n=2$, $l=1$ and $m_l=-1$. Two electrons of opposite spin can occupy this orbital. So far we’ve been limited to the 1s orbital, but now that we are more comfortable with the nomenclature of orbitals, we can start doing some math with orbitals that have more complex expressions.
Example $1$: After solving the Schrödinger equation for the hydrogen atom, we obtain the following expression for the $2p_{+1}$ orbital: $\psi_{2p_{+1}}=Are^{-r/(2a_0)}\sin\theta e^{i\phi} \nonumber$ As usual, we obtain the constant $A$ from the normalization condition. Calculate $A$. Solution In three dimensions, the normalization condition is: $\int\limits_{all\;space} |\psi|^2\;dV=1 \nonumber$ Because the orbital is expressed in spherical coordinates: $\int\limits_{all\;space} |\psi|^2\;dV=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ For this particular orbital: $\psi_{2p_{+1}}=Are^{-r/(2a_0)}\sin\theta e^{i\phi} \nonumber$ $\psi_{2p_{+1}}^*=Are^{-r/(2a_0)}\sin\theta e^{-i\phi} \nonumber$ $\psi_{2p_{+1}}^* \psi_{2p_{+1}}=A^2r^2e^{-r/(a_0)}\sin^2\theta \left(e^{i\phi}e^{-i\phi}\right)=A^2r^2e^{-r/(a_0)}\sin^2\theta \nonumber$ so, $\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)\,r^2\sin\theta\,dr d\theta d\phi =\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty} {\color{Red}A^2r^2e^{-r/(a_0)}\sin^2\theta} \,{\color{Blue}r^2\sin\theta\,dr d\theta d\phi}=1 \nonumber$ where the part of the integrand highlighted in blue comes from the differential of volume ($dV$) and the part in red comes from $|\psi|^2$. We need to integrate the whole expression, so: $\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty} A^2r^2e^{-r/(a_0)}\sin^2\theta \,r^2\sin\theta\,dr d\theta d\phi=A^2\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty} r^4e^{-r/(a_0)}\sin^3\theta \,dr d\theta d\phi=A^2\int\limits_{0}^{2\pi}d\phi \int\limits_{0}^{\pi}\sin^3\theta\,d\theta\int\limits_{0}^{\infty} r^4e^{-r/(a_0)}\,dr \nonumber$ $\int\limits_{0}^{2\pi}d\phi=2\pi \nonumber$ $\int\limits_{0}^{\pi}\sin^3\theta\,d\theta=? \nonumber$ From the formula sheet: $\int \sin^3(ax)\, dx=\frac{1}{12a}\cos(3ax)-\frac{3}{4a}\cos(ax)+C$ so, $\int_{0}^{\pi} \sin^3\theta\, d\theta=\frac{1}{12}\cos(3\pi)-\frac{3}{4}\cos(\pi)-\frac{1}{12}\cos(0)+\frac{3}{4}\cos(0)=\frac{1}{12}(-1)-\frac{3}{4}(-1)-\frac{1}{12}(1)+\frac{3}{4}(1)=\frac{4}{3} \nonumber$ $\int\limits_{0}^{\infty} r^4e^{-r/(a_0)}\,dr=? \nonumber$ From the formula sheet: $\int_{0}^{\infty}x^ne^{-ax}dx=\frac{n!}{a^{n+1}},\; a>0, n$ is a positive integer. Here, $a=1/a_0$ and $n=4$, so: $\int\limits_{0}^{\infty} r^4e^{-r/(a_0)}\,dr=\frac{4!}{(1/a_0)^5}=24a_0^5 \nonumber$ Putting the three pieces together: $A^2\int\limits_{0}^{2\pi}d\phi \int\limits_{0}^{\pi}\sin^3\theta\,d\theta\int\limits_{0}^{\infty} r^4e^{-r/(a_0)}\,dr=A^2\times 2\pi\times \frac{4}{3}\times 24a_0^5=64a_0^5\pi A^2=1 \nonumber$ Solving for $A$: $A=\frac{1}{8\left(a_0^5 \pi\right)^{1/2}} \nonumber$ The normalized orbital is, therefore, $\displaystyle{\color{Maroon}\psi_{2p_{+1}}=\frac{1}{8\left(a_0^5 \pi\right)^{1/2}}re^{-r/(2a_0)}\sin\theta e^{i\phi}} \nonumber$
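A symbolic algebra system can verify normalization integrals like this one. The sympy sketch below (my addition; it simply redoes the triple integral above with $A=1$) recovers $64\pi a_0^5$ and hence the same normalization constant:

```python
import sympy as sp

r, theta, phi, a0 = sp.symbols('r theta phi a_0', positive=True)

# |psi|^2 for the unnormalized 2p+1 orbital (A = 1); the e^{+-i phi} factors cancel
psi_sq = r**2 * sp.exp(-r/a0) * sp.sin(theta)**2

# Integrate |psi|^2 dV over all space, with dV = r^2 sin(theta) dr dtheta dphi
norm2 = sp.integrate(psi_sq * r**2 * sp.sin(theta),
                     (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(norm2)               # 64*pi*a_0**5
print(1 / sp.sqrt(norm2))  # A = 1/(8*sqrt(pi)*a_0**(5/2)), i.e. 1/(8*(pi*a_0^5)^(1/2))
```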
We have talked about the fact that the wavefunction can be interpreted as a probability, but this is a good time to formalize some concepts and understand what we really mean by that. Let's start by reviewing (or learning) a few concepts in probability theory. First, a random variable is a variable whose value is subject to variations due to chance. For example, we could define a variable that equals the number of days that it rains in Phoenix every month, or the outcome of throwing a die (the number of dots facing up), or the time it takes for the next bus to arrive at the bus station, or the waiting time we will have to endure next time we call a customer service phone line. Some of these random variables are discrete; the number of rain days or the number of dots facing up in a die can take on only a countable number of distinct values. For the case of the die, the outcome can only be $\{1,2,3,4,5,6\}$. In contrast, a waiting time is a continuous random variable. If you could measure with enough precision, the random variable can take any positive real value. Coming back to physical chemistry, the position of an electron in an atom or molecule is a good example of a continuous random variable. Imagine a (admittedly silly) game that involves flipping two coins. You get one point for each tail, and two points for each head. The game has three possible outcomes: 2 points (if you get two tails), 3 points (if you get one tail and one head) and 4 points (if you get two heads). The outcomes do not have the same probability. The probability of getting two heads or two tails is 1/4, while the probability that you get one head and one tail is 1/2. If we define a random variable ($x$) that equals the number of points you get in one round of the game, the probabilities of the possible outcomes are $P(x=2)=1/4$, $P(x=3)=1/2$ and $P(x=4)=1/4$. The collection of outcomes is called the sample space. In this case, the sample space is $\{2,3,4\}$. If we add $P(x)$ over the sample space we get, as expected, 1. In other words, the probability that you get an outcome that belongs to the sample space needs to be 1, which makes sense because we defined the sample space as the collection of all possible outcomes. If you think about an electron in an atom, and define the position in polar coordinates, $r$ (the distance from the nucleus of the atom) is a random variable that can take any value from 0 to $\infty$. The sample space for the random variable $r$ is the set of positive real numbers. Coming back to our discrete variable $x$, our previous argument translates into $\sum\limits_{sample\; space}P(x)=1 \nonumber$ Can we measure probabilities? Not exactly, but we can measure the frequency of each outcome if we repeat the experiment a large number of times. For example, if we play this game three times, we do not know how many times we'll get 2, 3 or 4 points. But if we play the game a very large number of times, we know that half the time we will get 3 points, a quarter of the time we will get 2 points, and another quarter 4 points. The probability is the frequency of an outcome in the limit of an infinite number of trials. Formally, the frequency is defined as the number of times you obtain a given outcome divided by the total number of trials. Now, even if we do not have any way of predicting the outcome of our random experiment (the silly game we described above), if you had to bet, you would not think twice and bet on $x=3$ (one head and one tail).
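This claim is easy to test with a simulation. The Python sketch below (my addition; the choice of one million rounds is arbitrary) plays the game many times and prints the observed frequencies, which come out close to the probabilities 1/4, 1/2 and 1/4, along with an empirical average close to 3, anticipating the discussion of the mean below:

```python
import random
from collections import Counter

def play_round():
    """Flip two coins; each tail is worth 1 point, each head 2 points."""
    return sum(random.choice([1, 2]) for _ in range(2))

trials = 1_000_000
outcomes = Counter(play_round() for _ in range(trials))

for points in sorted(outcomes):
    print(points, outcomes[points] / trials)             # close to 1/4, 1/2, 1/4
print(sum(k * v for k, v in outcomes.items()) / trials)  # close to 3, the mean
```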
The fact that a random variable does not have a predictable outcome does not mean we do not have information about the distribution of probabilities. Coming back to our atom, we will be able to predict the value of $r$ at which the probability of finding the electron is highest, the average value of $r$, etc. Even if we know that $r$ can take values up to $\infty$, we know that it is much more likely to find it very close to the nucleus (e.g. within an angstrom) than far away (e.g. one inch). No physical law forbids the electron to be 1 inch from the nucleus, but the probability of that happening is so tiny that we do not even think about this possibility. The Mean of a Discrete Distribution Let's talk about the mean (or average) some more. What exactly is the average of a random variable? Coming back to our "game", it would be the average value of $x$ you would get if you could play the game an infinite number of times. You could also ask the whole planet to play the game once, and that would accomplish the same. The planet does not have an infinite number of people, but the average you get with several billion trials of the random experiment (throwing two coins) should be pretty close to the real average. We will denote the average (also called the mean) with angular brackets: $\left \langle x \right \rangle$. Let's say that we play this game $10^9$ times. We expect 3 points half the time (frequency = $1/2$), or in this case, $5\times 10^8$ times. We also expect 2 points or 4 points with a frequency of $1/4$, so in this case, $2.5 \times 10^8$ times. What is the average? $\left \langle x \right \rangle=\dfrac{1}{4}\times 2+\dfrac{1}{2}\times 3+\dfrac{1}{4}\times 4=3 \nonumber$ On average, the billion people playing the game (or you playing it a billion times) should get 3 points. This happens to be the most probable outcome, but it does not need to be the case. For instance, if you just flip one coin, you can get 1 point or 2 points with equal probability, and the average will be 1.5, which is not the most probable outcome. In fact, it is not even a possible outcome! In general, it should make sense that for a discrete variable: $\label{eq:mean_discrete} \left \langle x \right \rangle=\sum\limits_{i=1}^k P(x_i)x_i$ where the sum is performed over the whole sample space, which contains $k$ elements. Here, $x_i$ is each possible outcome, and $P(x_i)$ is the probability of obtaining that outcome (or the fraction of times you would obtain it if you were to perform a huge number of trials). Continuous Variables How do we translate everything we just said to a continuous variable? As an example, let's come back to the random variable $r$, which is defined as the distance of the electron in the hydrogen atom from its nucleus. As we will see shortly, the 1s electron in the hydrogen atom spends most of its time within a couple of angstroms from the nucleus. We may ask ourselves, what is the probability that the electron will be found exactly at 1Å from the nucleus? Mathematically, what is $P(r=1$ Å)? The answer will disappoint you, but this probability is zero, and it is zero for any value of $r$. The electron needs to be somewhere, but the probability of finding it at any particular value of $r$ is zero? Yes, that is precisely the case, and it is a consequence of $r$ being a continuous variable. Imagine that you get a random real number in the interval [0,1] (you could even do this on your calculator), and I ask you what is the probability that you get exactly $\pi/4$.
There are infinite real numbers in this interval, and all of them are equally probable, so the probability of getting each of them is $1/\infty=0$. Talking about the probabilities of particular outcomes is not very useful in the context of continuous variables. All the outcomes have a probability of zero, even if we intuitively know that the probability of finding the electron within 1Å is much larger than the probability of finding it at several miles. Instead, we will talk about the density of probability ($p(r)$). If you are confused about why the probability of a particular outcome is zero, check the video listed at the end of this section. A plot of $p(r)$ is shown in Figure $1$ for the case of the 1s orbital of the hydrogen atom. Again, we stress that $p(r)$ does not measure the probability corresponding to each value of $r$ (which is zero for all values of $r$), but instead, it measures a probability density. We already introduced this idea earlier. Formally, the probability density function $p(r)$ is defined in this way: $\label{eq:coordinates_pdf} P(a\leq r\leq b)=\int\limits_{a}^{b}p(r)dr$ This means that the probability that the random variable $r$ takes a value in the interval $[a,b]$ is the integral of the probability density function from $a$ to $b$. For a very small interval: $\label{eq:coordinates_pdf2} P(a\leq r\leq a+dr)=p(a)dr$ In conclusion, although $p(r)$ alone does not mean anything physically, $p(r)dr$ is the probability that the variable $r$ takes a value in the interval between $r$ and $r+dr$. For example, coming back to Figure $1$, $p(1$ Å) = 0.62, which does not mean at all that 62% of the time we'll find the electron at exactly 1Å from the nucleus. Instead, we can use it to calculate the probability that the electron is found in a very narrow region around 1Å . For example, $P(1\leq r\leq 1.001)\approx 0.62\times 0.001=6.2\times 10^{-4}$. This is only an approximation because the number 0.001, although much smaller than 1, is not an infinitesimal. In general, the concept of probability density function is easier to understand in the context of Equation \ref{eq:coordinates_pdf}. You can calculate the probability that the electron is found at a distance shorter than 1Å as: $P(0\leq r\leq 1)=\int\limits_{0}^{1}p(r)dr \nonumber$ and at a distance larger than 1Å but shorter than 2Å as $P(1\leq r\leq 2)=\int\limits_{1}^{2}p(r)dr \nonumber$ Of course the probability that the electron is somewhere in the universe is 1, so: $P(0\leq r\leq \infty)=\int\limits_{0}^{\infty}p(r)dr=1 \nonumber$ We haven't written $p(r)$ explicitly yet, but we will do so shortly so we can perform all these integrations and get the probabilities discussed above. Confused about continuous probability density functions? This video may help! http://tinyurl.com/m6tgoap The Mean of a Continuous Distribution For a continuous random variable $x$, Equation \ref{eq:mean_discrete} becomes: $\label{eq:mean_continuous} \left \langle x \right \rangle = \int\limits_{all\,outcomes}p(x) x \;dx$ Coming back to our atom: $\label{eq:mean_r} \left \langle r \right \rangle = \int\limits_{0}^{\infty}p(r) r \;dr$ Again, we will come back to this equation once we obtain the expression for $p(r)$ we need. But before doing so, let's expand this discussion to more variables.
So far, we have limited our discussion to one coordinate, so the quantity $P(a\leq r\leq b)=\int\limits_{a}^{b}p(r)dr$ represents the probability that the coordinate $r$ takes a value between $a$ and $b$, independently of the values of $\theta$ and $\phi$. This region of space is the spherical shell represented in Figure $2$ in light blue. The spheres in the figure are cut for clarity, but of course we refer to the whole shell that is defined as the region between two concentric spheres of radii $a$ and $b$. What if we are interested in the angles as well? Let’s say that we want the probability that the electron is found between $r_1$ and $r_2$, $\theta_1$ and $\theta_2$, and $\phi_1$ and $\phi_2$. This volume is shown in Figure $3$. The probability we are interested in is: $P(r_1\leq r\leq r_2, \theta_1\leq \theta \leq \theta_2, \phi_1 \leq \phi \leq\phi_2)=\int\limits_{\phi_1}^{\phi_2}\int\limits_{\theta_1}^{\theta_2}\int\limits_{r_1}^{r_2}p(r,\theta,\phi)\;r^2 \sin\theta dr d\theta d\phi \nonumber$ Notice that we are integrating in spherical coordinates, so we need to use the corresponding differential of volume. This probability density function, $p(r,\theta,\phi)$, is exactly what $|\psi(r,\theta,\phi)|^2$ represents! This is why we’ve been saying that $|\psi(r,\theta,\phi)|^2$ is a probability density. The function $|\psi(r,\theta,\phi)|^2$ does not represent a probability in itself, but it does when integrated between the limits of interest. Suppose we want to know the probability that the electron in the 1s orbital of the hydrogen atom is found between $r_1$ and $r_2$, $\theta_1$ and $\theta_2$, and $\phi_1$ and $\phi_2$. The answer to this question is: $\int\limits_{\phi_1}^{\phi_2}\int\limits_{\theta_1}^{\theta_2}\int\limits_{r_1}^{r_2}|\psi_{1s}|^2\;r^2 \sin\theta dr d\theta d\phi \nonumber$ Coming back to the case shown in Figure $2$, the probability that $r$ takes a value between $a$ and $b$ independently of the values of the angles, is the probability that $r$ lies between $a$ and $b$, and $\theta$ takes a value between 0 and $\pi$, and $\phi$ takes a value between 0 and $2\pi$: $\label{eq:coordinates9} P(a\leq r\leq b)=P(a\leq r\leq b,0 \leq \theta \leq \pi, 0 \leq \phi \leq 2\pi)=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{a}^{b}|\psi_{1s}|^2\;r^2 \sin\theta dr d\theta d\phi$ The Radial Density Function So far we established that $|\psi(r,\theta,\phi)|^2$ is a probability density function in spherical coordinates. We can perform triple integrals to calculate the probability of finding the electron in different regions of space (but not in a particular point!). It is often useful to know the likelihood of finding the electron in an orbital at any given distance away from the nucleus. This enables us to say at what distance from the nucleus the electron is most likely to be found, and also how tightly or loosely the electron is bound in a particular atom. This is expressed by the radial distribution function, $p(r)$, which is plotted in Figure $1$ for the 1s orbital of the hydrogen atom. In other words, we want a version of $|\psi(r,\theta,\phi)|^2$ that is independent of the angles. This new function will be a function of $r$ only, and can be used, among other things, to calculate the mean of $r$, the most probable value of $r$, the probability that $r$ lies in a given range of distances, etc. We already introduced this function in Equation \ref{eq:coordinates_pdf}. The question now is, how do we obtain $p(r)$ from $|\psi(r,\theta,\phi)|^2$? 
Let's compare Equation \ref{eq:coordinates_pdf} with Equation \ref{eq:coordinates9}: $P(a\leq r\leq b)=\int\limits_{a}^{b}p(r)dr \nonumber$ $P(a\leq r\leq b)=P(a\leq r\leq b,0 \leq \theta \leq \pi, 0 \leq \phi \leq 2\pi)=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{a}^{b}|\psi(r,\theta,\phi)|^2\;r^2 \sin\theta dr d\theta d\phi \nonumber$ We conclude that $\int\limits_{a}^{b}p(r)dr=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{a}^{b}|\psi(r,\theta,\phi)|^2\;r^2 \sin\theta dr d\theta d\phi \nonumber$ All $s$ orbitals are real functions of $r$ only, so $|\psi(r,\theta,\phi)|^2$ does not depend on $\theta$ or $\phi$. In this case: $\int\limits_{a}^{b}p(r)dr=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{a}^{b}\psi^2(r)\;r^2 \sin\theta dr d\theta d\phi=\int\limits_{0}^{2\pi}d\phi \int\limits_{0}^{\pi}\sin\theta \;d\theta \int\limits_{a}^{b}\psi^2(r)\;r^2 dr=4\pi\int\limits_{a}^{b}\psi^2(r)\;r^2 dr \nonumber$ Therefore, for an $s$ orbital, $p(r)=4\pi\psi^2(r)\;r^2 \nonumber$ For example, the normalized wavefunction of the 1s orbital is the solution of Example $10.1$: $\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0}$. Therefore, for the 1s orbital: $\label{eq:coordinates10} p(r)=\dfrac{4}{a_0^3}r^2e^{-2r/a_0}$ Equation \ref{eq:coordinates10} is plotted in Figure $1$. In order to create this plot, we need the value of $a_0$, which is a constant known as the Bohr radius, and equals $5.29\times10^{-11}\;m$ (or 0.529 Å). Look at the position of the maximum of $p(r)$; it is slightly above 0.5 Å and, more precisely, exactly at $r=a_0$! Now it is clear why $a_0$ is known as a radius: it is the distance from the nucleus at which the probability of finding the only electron of the hydrogen atom is greatest. In a way, $a_0$ is the radius of the atom, although we know this is not strictly true because the electron is not orbiting at a fixed $r$ as scientists believed a long time ago. In general, for any type of orbital, $\int\limits_{a}^{b}p(r)dr=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{a}^{b}|\psi(r,\theta,\phi)|^2\;r^2 \sin\theta dr d\theta d\phi=\int\limits_{a}^{b}{\color{Red}\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}|\psi(r,\theta,\phi)|^2\;r^2 \sin\theta\; d\theta d\phi} dr \nonumber$ On the right side of the equation, we just changed the order of integration to have $dr$ last, and color coded the expression so we can easily identify $p(r)$ as: $\label{eq:p(r)} p(r)=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}|\psi(r,\theta,\phi)|^2\;r^2 \sin\theta\; d\theta d\phi$ Equation \ref{eq:p(r)} is the mathematical formulation of what we wanted: a probability density function that does not depend on the angles. We integrate $\phi$ and $\theta$ so what we are left with represents the dependence with $r$. We can multiply both sides by $r$: $rp(r)=r\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}{\color{Red}|\psi(r,\theta,\phi)|^2}{\color{OliveGreen}r^2 \sin\theta\; d\theta d\phi}=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}{\color{Red}|\psi(r,\theta,\phi)|^2}r{\color{OliveGreen}r^2 \sin\theta\; d\theta d\phi} \nonumber$ and use Equation \ref{eq:mean_r} to calculate $\left \langle r \right \rangle$ $\label{eq:coordinates12} \left \langle r \right \rangle = \int\limits_{0}^{\infty}{\color{Magenta}p(r) r} \;dr=\int\limits_{0}^{\infty}{\color{Magenta}\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}|\psi(r,\theta,\phi)|^2r\;r^2 \sin\theta\; d\theta d\phi}dr$ The colors in these expressions are aimed to help you track where the different terms come from. Let's look at Equation \ref{eq:coordinates12} more closely.
Basically, we just concluded that: $\label{eq:coordinates13} \left \langle r \right \rangle = \int\limits_{all\;space}|\psi|^2r\;dV$ where $dV$ is the differential of volume in spherical coordinates. We know that $\psi$ is normalized, so $\int\limits_{all\;space}|\psi|^2\;dV=1 \nonumber$ If we multiply the integrand by $r$, we get $\left \langle r \right \rangle$. We will discuss an extension of this idea when we talk about operators. For now, let's use Equation \ref{eq:coordinates13} to calculate $\left \langle r \right \rangle$ for the 1s orbital. Example $1$ Calculate the average value of $r$, $\left \langle r \right \rangle$, for an electron in the 1s orbital of the hydrogen atom. The normalized wavefunction of the 1s orbital, expressed in spherical coordinates, is: $\psi_{1s}=\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0} \nonumber$ Solution The average value of $r$ is: $\left \langle r \right \rangle=\int\limits_{0}^{\infty}p(r)r\;dr \nonumber$ or $\left \langle r \right \rangle = \int\limits_{all\;space}|\psi|^2r\;dV \nonumber$ The difference between the first expression and the second expression is that in the first case, we already integrated over the angles $\theta$ and $\phi$. The second expression is a triple integral because $|\psi|^2$ still retains the angular information. We do not have $p(r)$, so either we obtain it first from $|\psi|^2$, or directly use $|\psi|^2$ and perform the triple integration: $\left \langle r \right \rangle = \int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}|\psi(r,\theta,\phi)|^2r\;{\color{Red}r^2 \sin\theta\; d\theta d\phi dr} \nonumber$ The expression highlighted in red is the differential of volume. For this orbital, $|\psi(r,\theta,\phi)|^2=\dfrac{1}{\pi a_0^3}e^{-2r/a_0} \nonumber$ and then, $\left \langle r \right \rangle = \dfrac{1}{\pi a_0^3}\int\limits_{0}^{\infty}e^{-2r/a_0}r^3\;dr\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin\theta\; d\theta=\dfrac{4}{a_0^3}\int\limits_{0}^{\infty}e^{-2r/a_0}r^3\;dr \nonumber$ From the formula sheet: $\int_{0}^{\infty}x^ne^{-ax}dx=\dfrac{n!}{a^{n+1}},\; a>0, n$ is a positive integer. Here, $n=3$ and $a= 2/a_0$. $\dfrac{4}{a_0^3}\int\limits_{0}^{\infty}r^3e^{-2r/a_0}\;dr=\dfrac{4}{a_0^3}\times \dfrac{3!}{(2/a_0)^4}=\dfrac{3}{2}a_0 \nonumber$ $\displaystyle{\color{Maroon}\left \langle r \right \rangle=\dfrac{3}{2}a_0} \nonumber$ From Example $1$, we notice that on average we expect to see the electron at a distance from the nucleus equal to 1.5 times $a_0$. This means that if you could measure $r$, and you perform this measurement on a large number of atoms of hydrogen, or on the same atom many times, you would, on average, see the electron at a distance from the nucleus $r=1.5 a_0$. However, the probability of seeing the electron is greatest at $r=a_0$ (Figure $1$). We see that the average of a distribution does not necessarily need to equal the value at which the probability is highest. (If you find this strange, think about a situation where 20 18-year-olds gather in a room with 4 60-year-olds: the average age in the room is 25, but the most probable age is 18.)

10.05: Problems

Problem $1$ The wave function describing the state of an electron in the 1s orbital of the hydrogen atom is: $\psi_{1s}=Ae^{-r/a_0}, \nonumber$ where $a_0$ is Bohr's radius (units of distance), and $A$ is a normalization constant. 1. Calculate $A$. 2. Calculate $\left \langle r\right \rangle$, the average value of the distance of the electron from the nucleus. 3. The radius of the hydrogen atom is taken as the most probable value of $r$ for the 1s orbital. Calculate the radius of the hydrogen atom. 4.
What is the probability that the electron is found at a distance from the nucleus equal to $a_0/2$? 5. What is the probability that the electron is found at a distance from the nucleus less than $a_0/2$? 6. We know that the probability that the electron is found at a distance from the nucleus $0 < r < \infty$ is 1. Using this fact and the result of the previous question, calculate the probability that the electron is found at a distance from the nucleus greater than $a_0/2$. Hint: $\int x^2 e^{ax}dx=e^{ax}\frac{\left ( 2-2ax+a^2x^2 \right )}{a^3}$ Note: Be sure you show all the steps!

Problem $2$ The wave function describing the state of an electron in the 2s orbital of the hydrogen atom is: $\psi_{2s}=Ae^{-r/2a_0}\left(2-\frac{r}{a_0}\right) \nonumber$ where $a_0$ is Bohr's radius (units of distance), and $A$ is a normalization constant. • Calculate $A$. • Calculate $\left \langle r\right \rangle$, the average value of the distance of the electron from the nucleus.

Problem $3$ Calculate the normalization constant of each of the following orbitals: $\psi_{2p+1}=A_1 r e^{-r/2a_0}\sin \theta e^{i\phi} \nonumber$ $\psi_{2p-1}=A_2 r e^{-r/2a_0}\sin \theta e^{-i\phi} \nonumber$
Chapter Objectives • Understand the concept of a mathematical operator. • Understand how to identify whether an operator is linear or not. • Understand the concept of eigenfunction and eigenvalue of an operator. • Learn how to perform algebraic operations with operators. • Understand the concept of the commutator. • Learn how to use operators in the context of quantum mechanics. 11: Operators Mathematical Operators A mathematical operator is a symbol standing for a mathematical operation or rule that transforms one object (function, vector, etc.) into another object of the same type. For example, when the derivative operator $d/dx$, also denoted by $\hat{D}_x$, operates on a function $f(x)$, the result is the function $df/dx$. We can apply the operator $\hat{D}_x$ to any function. For example, let's consider the function $g(x) = 2 \cos x+e^x$: $\hat{D}_xg(x)=-2 \sin x + e^x \nonumber$ In physical chemistry, most operators involve either differentiation or multiplication. For instance, the multiplication operator denoted by $\hat x$ means "multiply by $x$". Using the previous example, when $\hat x$ operates on $g(x)$ we get $\hat{x}g(x)=2x \cos x +x e^x \nonumber$ Before discussing what operators are good for, let's go over a few more examples. First, notice that we denote operators with a "hat". Let's define an operator $\hat A$ (read as "A hat") as $\hat x + \dfrac{d}{dx}$ $\label{1} \hat A=\hat x + \dfrac{d}{dx}$ This reads as "multiply the function by $x$ and add the result to the first derivative of the function with respect to $x$". The second term is equivalent to the operator we defined before, $\hat {D}_x$, and using one or the other is a matter of preference. Notice that the expression $\dfrac{d}{dx}$ does not require a "hat" because it is unambiguous. In the case of $x$, we need to use the "hat" to be sure we distinguish the operator (multiply by $x$) from the variable $x$. In the case of $\dfrac{d}{dx}$, the expression clearly needs to be applied to a function, so it is obviously an operator. When $\hat A$ operates on the function $g(x)$ (defined above), we obtain: $\hat A g(x)=\hat x g(x) + \dfrac{d g}{dx}=-2 \sin x + e^x +2x \cos x +x e^x \nonumber$ Linear Operators In quantum mechanics we deal only with linear operators. An operator is said to be linear if $\hat A(c_1 f_1 (x)+c_2 f_2 (x))=\hat A c_1 f_1 (x)+\hat A c_2 f_2 (x) \nonumber$ where $c_1$ and $c_2$ are constants (real or complex). For instance, the $\dfrac{d}{dx}$ operator is linear: $\dfrac{d}{dx}(c_1 f_1 (x)+c_2 f_2 (x))= \dfrac{d}{dx} c_1 f_1 (x)+\dfrac{d}{dx} c_2 f_2 (x) \nonumber$ If we define the operator $\hat B$ as the "square" operator (take the square of...), we notice that $\hat B$ is not linear because $\hat B(c_1 f_1 (x)+c_2 f_2 (x))=(c_1 f_1 (x))^2+(c_2 f_2 (x))^2+2c_1 f_1 (x)c_2 f_2 (x) \nonumber$ which is clearly different from $\hat B(c_1 f_1 (x))+\hat B (c_2 f_2 (x))= (c_1 f_1 (x))^2+(c_2 f_2 (x))^2 \nonumber$ Eigenfunctions and Eigenvalues A common problem in quantum mechanics is finding the functions ($f$) and constants ($a$) that satisfy $\label{eigenfunction} \hat A f = a f$ We will discuss the physical meaning of these functions and these constants later. For now, we will define the concept of eigenfunction and eigenvalue as follows: If the result of operating on a function is the same function multiplied by a constant, the function is called an eigenfunction of that operator, and the proportionality constant is called an eigenvalue.
We can test whether a particular function is an eigenfunction of a given operator or not. For instance, let's consider the operator $-\dfrac{d^2}{dx^2}$ and the function $g(x)$ defined in the previous section. Is $g(x)$ an eigenfunction of $-\dfrac{d^2}{dx^2}$? In lay terms: if we take the second derivative of $g(x)$ and change the sign of the result, do we get a function that can be expressed as $g(x)$ times a constant? Let's try it: $-\dfrac{d^2g(x)}{dx^2} = 2 \cos x - e^x \nonumber$ The result cannot be expressed as a constant times $g(x)$: $2 \cos x - e^x \neq a(2 \cos x+ e^x) \nonumber$ so $g(x)$ is not an eigenfunction of the operator $-\dfrac{d^2}{dx^2}$. Let's consider another function: $h(x) = 2\sin(bx)$, where $b$ is a constant. Is $h(x)$ an eigenfunction of the operator $-\dfrac{d^2}{dx^2}$? We'll take the second derivative of $h(x)$, multiply by $-1$, and check whether the result can be expressed as a constant times $h(x)$: $-\dfrac{d^2h(x)}{dx^2} = 2b^2 \sin (bx) \nonumber$ Notice that the result is $b^2$ times the function $h(x)$, so the conclusion is that $h(x)$ is an eigenfunction of the operator $-\dfrac{d^2}{dx^2}$, and that the corresponding eigenvalue is $b^2$. A common mistake is to conclude that the eigenvalue is $2b^2$. Be sure you understand why this is wrong. Also, notice that $b^2$ is a constant because it does not involve the variable $x$. Another common mistake is to write eigenvalues that are not constants, but contain the independent variable. So far we have learned how to test whether a given function is an eigenfunction of a given operator or not. How can we calculate the eigenfunctions of a given operator? In general, this involves solving a differential equation. For instance, the eigenfunctions of the operator $-\dfrac{d^2}{dx^2}$ satisfy the equation $-\dfrac{d^2f(x)}{dx^2}=a f(x), \nonumber$ where $a$ is the eigenvalue. This is an ordinary second order differential equation with constant coefficients, so it can be solved with the methods we learned in previous chapters. Can you solve it and find the eigenfunctions of the operator $-\dfrac{d^2}{dx^2}$? The eigenfunctions and eigenvalues of an operator play a central role in quantum mechanics. Before moving on, we'll introduce an important property that you will use often in your physical chemistry course: If two functions $f_1(x)$ and $f_2(x)$ are both eigenfunctions of an operator with the same eigenvalue, the linear combination $c_1f_1(x)+c_2f_2(x)$ will also be an eigenfunction with the same eigenvalue. For instance, the functions $e^{ax}$ and $e^{-ax}$ are both eigenfunctions of the operator $\dfrac{d^2}{dx^2}$ with eigenvalue $a^2$. Therefore, any linear combination $c_1e^{ax}+c_2e^{-ax}$ will be an eigenfunction of this operator with eigenvalue $a^2$, regardless of the values of $c_1$ and $c_2$. To prove it, take the second derivative of the function $c_1e^{ax}+c_2e^{-ax}$ and prove that it equals $a^2$ times $c_1e^{ax}+c_2e^{-ax}$. The function $\cos(ax)$ is also an eigenfunction of $\dfrac{d^2}{dx^2}$. However, the function $c_1e^{ax}+c_2\cos(ax)$ is not. What went wrong?
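This test is also easy to automate with a symbolic algebra package. The sympy sketch below is my own illustration (the helper name is made up): it applies $-\dfrac{d^2}{dx^2}$ to a function, divides the result by the function, and reports the ratio only if it is a constant, reproducing the two cases just worked out:

```python
import sympy as sp

x, b = sp.symbols('x b', positive=True)

def eigenvalue_of(f):
    """Return the eigenvalue if f is an eigenfunction of -d^2/dx^2, else None."""
    ratio = sp.simplify(-sp.diff(f, x, 2) / f)
    # An eigenvalue must be a constant: it may contain b, but not the variable x
    return ratio if x not in ratio.free_symbols else None

g = 2*sp.cos(x) + sp.exp(x)   # not an eigenfunction
h = 2*sp.sin(b*x)             # eigenfunction with eigenvalue b**2

print(eigenvalue_of(g))  # None
print(eigenvalue_of(h))  # b**2
```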
11.2: Operator Algebra

Let’s start by defining the identity operator, usually denoted by $\hat E$ or $\hat I$. The identity operator leaves the element on which it operates unchanged: $\hat E f(x)=f(x)$. This is analogous to multiplying by the number 1. We can add operators as follows: $(\hat A + \hat B)f=\hat A f + \hat B f. \nonumber$ For example, $(\hat x + \dfrac{d}{dx})f = \hat x f + \dfrac{df}{dx}=x f + \dfrac{df}{dx} \nonumber$ (remember that $\hat x$ means “multiply by $x$”). The product between two operators is defined as the successive operation of the operators, with the one on the right operating first. For example, $(\hat x \dfrac{d}{dx})f=\hat x (\dfrac{df}{dx})=x \dfrac{df}{dx}$. We first apply the operator on the right (in this case “take the derivative of the function with respect to $x$”), and then the operator on the left (“multiply by $x$ whatever you got in the first step”). We can use this definition to calculate the square of an operator. For example, if we define the operator $\hat A$ as $\hat A = \dfrac{d}{dx}$, the operator $\hat A^2$ is $\hat A \hat A = \dfrac{d}{dx} \dfrac{d}{dx} =\dfrac{d^2}{dx^2}$. Operator multiplication is not, in general, commutative: $\hat A \hat B \neq \hat B \hat A$. In other words, in general, the order of the operations matters. Before, we saw that $(\hat x \dfrac{d}{dx})f=x \dfrac{df}{dx}$. Let’s reverse the order of the operations: $(\dfrac{d}{dx} \hat x )f$. Now, we first multiply the function by $x$ and then take the derivative of the result: $(\dfrac{d}{dx} \hat x )f=\dfrac{d}{dx}(xf) =x \dfrac{df}{dx}+f$. In the last step, we calculated the derivative of the product using the differentiation rules we are familiar with. We just proved that $\hat x \dfrac{d}{dx} \neq \dfrac{d}{dx}\hat x$, or in other words, the order in which we apply these two operators matters (i.e. whether we first take the derivative and then multiply by $x$, or first multiply by $x$ and then take the derivative). Whether order matters or not has very important consequences in quantum mechanics, so it is useful to define the so-called commutator, defined as $\label{commutator} [\hat A,\hat B] = \hat A \hat B - \hat B \hat A.$ For example, the commutator of the operators $\hat x$ and $\dfrac{d}{dx}$, denoted by $[\hat x,\dfrac{d}{dx}]$, is by definition $\hat x \dfrac{d}{dx} - \dfrac{d}{dx}\hat x$. When $[\hat A,\hat B]=0$, the operators $\hat A$ and $\hat B$ are said to commute. Therefore, if the operators $\hat A$ and $\hat B$ commute, then $\hat A \hat B = \hat B \hat A$. When the operators $\hat A$ and $\hat B$ do not commute, $\hat A \hat B \neq \hat B \hat A$, and the commutator $[\hat A,\hat B]\neq 0$. Before we move on, it is important to recognize that the product of two operators is also an operator. For instance, let’s consider the product $\dfrac{d}{dx} \hat x$. This is an operator that, when applied to a function $f$, gives a new function $x \dfrac{df}{dx}+f$. For example, if $f= \sin (kx)$, $\dfrac{d}{dx} \hat x f = kx \cos(kx) + \sin(kx)$. In addition, notice that the operator $\dfrac{d}{dx} \hat x$ can be expressed as $\hat E + \hat x \dfrac{d}{dx}$, where $\hat E$ is the identity operator. When the operator $\hat E + \hat x \dfrac{d}{dx}$ operates on a function $f$, the result is the function itself (multiplied by one) plus $x$ times the derivative of the function, which is exactly what we get when we perform $\dfrac{d}{dx} \hat x f$.
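As a quick sanity check, the non-commutativity of $\hat x$ and $\dfrac{d}{dx}$ can be verified with a few lines of SymPy acting on an arbitrary function $f(x)$ (this snippet is an illustration, not part of the original text):

```python
# Illustrative check that x*(d/dx) and (d/dx)*x act differently on f(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

xD = x*sp.diff(f, x)        # first differentiate, then multiply by x
Dx = sp.diff(x*f, x)        # first multiply by x, then differentiate

print(sp.expand(xD - Dx))   # -> -f(x): [x, d/dx] acts as -1 times the identity
```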
Similarly, the commutator between two operators is also an operator. For example, let’s see what the commutator $[\hat x,\dfrac{d}{dx}]$ does to an arbitrary function $f$. Using the results of the previous paragraphs: $[\hat x,\dfrac{d}{dx}]f=\hat x \dfrac{df}{dx} - \dfrac{d}{dx}(\hat x f)=x \dfrac{df}{dx}-\left(x \dfrac{df}{dx}+f\right)=-f \nonumber$ This demonstrates that the operator $[\hat x,\dfrac{d}{dx}]$ equals the operator $-\hat E$ (“multiply by $-1$”). In other words, when the commutator $[\hat x,\dfrac{d}{dx}]$ (an operator) operates on a function $f$, the result is $-f$. Because $[\hat x,\dfrac{d}{dx}] \neq 0$, the operators $\hat x$ and $\dfrac{d}{dx}$ do not commute. This is directly related to the uncertainty principle, which (in its simplest form) states that the more precisely the position of some particle is determined, the less precisely its momentum can be known. We will see the connection between this statement and the commutator in a moment, and you will discuss this in a lot of detail in your future physical chemistry courses.

Example $1$

Find the commutator $[\hat x^2, \dfrac{d^2}{dx^2}] \nonumber$

Solution

Remember that the commutator is an operator, so your answer should be an operator as well (that is, it should not contain a function). To ‘see’ what the commutator does (so we can write the equivalent operator), we apply it to an arbitrary function: $[\hat x^2, \dfrac{d^2}{dx^2}]f=\hat x^2 \dfrac{d^2}{dx^2}f-\dfrac{d^2}{dx^2}\hat x^2 f \nonumber$ Remember that when we have expressions such as $\dfrac{d^2}{dx^2}\hat x^2 f$ we need to go from right to left, that is, we first multiply by $x^2$ and only then take the second derivative. $\dfrac{d^2}{dx^2}\hat x^2 f =\dfrac{d^2(x^2f)}{dx^2}=\dfrac{d(2xf+x^2 \dfrac{df}{dx})}{dx}=2x\dfrac{df}{dx}+2f+x^2\dfrac{d^2f}{dx^2}+2x\dfrac{df}{dx}=4x\dfrac{df}{dx}+2f+x^2\dfrac{d^2f}{dx^2} \nonumber$ $[\hat x^2, \dfrac{d^2}{dx^2}]f=\displaystyle{\color{Blue}\hat x^2 \dfrac{d^2}{dx^2}f}-\displaystyle{\color{Green}\dfrac{d^2}{dx^2}\hat x^2 f}=\displaystyle{\color{Blue}x^2\dfrac{d^2f}{dx^2}}-\displaystyle{\color{Green}\left(4x\dfrac{df}{dx}+2f+x^2\dfrac{d^2f}{dx^2}\right)}=-4x\dfrac{df}{dx}-2f \nonumber$ $[\hat x^2, \dfrac{d^2}{dx^2}]= \displaystyle{\color{Maroon}-4\hat x \dfrac{d}{dx}-2\hat E} \nonumber$ Again, your result should be an operator, and therefore should not contain the function $f$. Because $[\hat x^2, \dfrac{d^2}{dx^2}] \neq 0$, the two operators do not commute.

Common mistakes:
• to write the commutator as $[\hat x^2, \dfrac{d^2}{dx^2}] =-4x\dfrac{df}{dx}-2f$
• to use an actual function (e.g. $\sin x$) instead of an arbitrary function $f$
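The result of Example $1$ can also be verified symbolically. The following SymPy lines (illustrative, not from the original text) apply both orderings to an arbitrary $f(x)$ and print the difference:

```python
# Illustrative SymPy check of Example 1: [x^2, d^2/dx^2] applied to f(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

comm = x**2*sp.diff(f, x, 2) - sp.diff(x**2*f, x, 2)
print(sp.expand(comm))   # -> -4*x*Derivative(f(x), x) - 2*f(x)
```

Reading off the operator that produces this output from $f$ gives $-4\hat x \dfrac{d}{dx}-2\hat E$, as obtained by hand.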
11.3: Operators and Quantum Mechanics - an Introduction

We have already discussed that the main postulate of quantum mechanics establishes that the state of a quantum mechanical system is specified by a function called the wavefunction. The wavefunction is a function of the coordinates of the particle (the position) and time. We often deal with stationary states, i.e. states whose energy does not depend on time. For example, at room temperature and in the absence of electromagnetic radiation such as UV light, the energy of the only electron in the hydrogen atom is constant (the energy of the 1s orbital). In this case, all the information about the state of the particle is contained in a time-independent function, $\psi (\textbf{r})$, where $\textbf{r}$ is a vector that defines the position of the particle. In spherical coordinates $\textbf{r}$ is described in terms of $r,\theta$ and $\phi$ (note the difference between $\textbf{r}$ and $r$). For example, the wavefunction that describes the 1s orbital is: $\label{2} \psi(r,\theta,\phi)=\dfrac{1}{\sqrt{\pi}} \dfrac{1}{a_0^{3/2}}e^{-(r/a_0)}$ Notice that in this particular case the wavefunction is independent of $\theta$ and $\phi$. This makes sense because the 1s orbital has spherical symmetry, and therefore the probability of finding the electron in a particular region of space should depend on $r$ only. We also discussed one of the postulates of quantum mechanics: the function $|\psi (\textbf{r})|^2 dV=\psi^* (\textbf{r})\psi (\textbf{r})dV \nonumber$ is the probability that the particle lies in the volume element $dV$ located at $\textbf{r}$. We will now introduce three additional postulates:

1. Each observable in classical mechanics has an associated operator in quantum mechanics. Examples of observables are position, momentum, kinetic energy, total energy, angular momentum, etc. (Table $1$).
2. The outcomes of any measurement of the observable associated with the operator $\hat A$ are the eigenvalues $a$ that satisfy the eigenvalue equation $\hat A f = a f$ (Equation $11.1.2$).
3. The average value of the observable corresponding to $\hat A$ is given by $\label{avg} \iiint_{-\infty }^{\infty } \psi^{*}\hat A \psi dV$ where $dV$ is the differential of volume in the coordinates used to express $\psi$. We can perform this operation in two dimensions (e.g. if a particle is confined to a plane) by replacing $dV$ by $dA$ and performing a double integral, or in one dimension, by performing a single integral and replacing $dV$ by $dx$. In each case, we need to integrate over all space.

To illustrate these postulates let’s consider the hydrogen atom again. The wavefunction for an electron in a 1s orbital is shown in Equation \ref{2}.
Table $1$: Quantum mechanical operators for some physical observables.

Observable | Symbol in classical physics | Operator in QM | Operation
Position | $\textbf{r}$ | $\hat{\textbf{r}}$ | multiply by $\textbf{r}$
Momentum | $p_x$ | $\hat p_x$ | $-i \hbar \dfrac{\partial}{\partial x}$
Momentum | $p_y$ | $\hat p_y$ | $-i \hbar \dfrac{\partial}{\partial y}$
Momentum | $p_z$ | $\hat p_z$ | $-i \hbar \dfrac{\partial}{\partial z}$
Kinetic energy | $T$ | $\hat T$ | $-\dfrac{\hbar^2}{2m}\left(\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}\right)$
Potential energy | $V(\textbf{r})$ | $\hat V(\textbf{r})$ | multiply by $V(\textbf{r})$
Total energy | $E$ | $\hat H$ | $\hat T + \hat V$
Angular momentum | $l_x$ | $\hat l_x$ | $-i \hbar\left(y\dfrac{\partial}{\partial z}-z\dfrac{\partial}{\partial y}\right)$
Angular momentum | $l_y$ | $\hat l_y$ | $-i \hbar\left(z\dfrac{\partial}{\partial x}-x\dfrac{\partial}{\partial z}\right)$
Angular momentum | $l_z$ | $\hat l_z$ | $-i \hbar\left(x\dfrac{\partial}{\partial y}-y\dfrac{\partial}{\partial x}\right)$

Let’s say that we are able to measure the distance of the electron from the nucleus (i.e. $r$). What will we measure? According to the postulates listed above, if the wavefunction of Equation \ref{2} is an eigenfunction of the operator $\hat r$, then we will measure the corresponding eigenvalue. However, we can easily see that $\hat r \psi \neq a \psi$, since the operator $\hat r$ stands for “multiply by $r$”, and $r \dfrac{1}{\sqrt{\pi}} \dfrac{1}{a_0^{3/2}}e^{-(r/a_0)} \neq a \dfrac{1}{\sqrt{\pi}} \dfrac{1}{a_0^{3/2}}e^{-(r/a_0)}$. Remember that $a$ should be a constant in the eigenvalue equation (Equation $11.1.2$), so it cannot be a function of the coordinates ($r, \theta,\phi$). The fact that $\psi$ is not an eigenfunction of the operator $\hat r$ means that a measurement of the position of the particle will give a value that we cannot predict. In other words, the position of the particle is not quantized, and we cannot know the result of the measurement with certainty. This should not be surprising to us, since we know that electrons do not move around the nucleus in fixed orbits as chemists once thought. Instead, we can talk about the probability of finding the electron at different values of $r$. A measurement of the observable $r$ can in principle yield any value from 0 to $\infty$, although of course different values of $r$ will be observed with different probabilities (see Section 10.4). Although we cannot predict the outcome of a single observation, we can calculate the average value of a very large number of observations (formally an infinite number of observations). We already calculated the average $\langle r \rangle$ in Section 10.4. Let’s do it again following the formalism of operators.

Example $1$

The wavefunction of the 1s orbital is expressed in polar coordinates as: $\psi(r,\theta,\phi)=\dfrac{1}{\sqrt{\pi}} \dfrac{1}{a_0^{3/2}}e^{-(r/a_0)} \nonumber$ Obtain $\langle r \rangle$.

Solution

For an observable $A$: $\langle A \rangle=\int\limits_{all\;space}\psi^*\;\hat A\;\psi\;dV \nonumber$ For the observable $r$: $\langle r \rangle=\int\limits_{all\;space}\psi^*\;\hat r\;\psi\;dV \nonumber$ where $\hat r$ is the operator that corresponds to the observable $r$. According to Table $1$, the operator $\hat r$ is “multiply by $r$”.
Therefore: $\langle r \rangle=\int\limits_{all\;space}\psi^*\;r\;\psi\;dV \nonumber$ For the 1s orbital, $\psi=\psi^*=\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0} \nonumber$ and then, $\left \langle r \right \rangle = \int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}{\color{red}\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0}}\,r{\color{Blue}\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0}}{\color{OliveGreen} r^2 \sin\theta\; dr\; d\theta\; d\phi} \nonumber$ where $\psi^*$ is shown in red, $\hat r$ in black, $\psi$ in blue, and $dV$ in green. We already solved this integral in Example $10.4.1$, where we obtained $\left \langle r \right \rangle = \dfrac{1}{\pi a_0^3}\int\limits_{0}^{\infty}e^{-2r/a_0}r^3\;dr\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin\theta\; d\theta=\dfrac{4}{a_0^3}\int\limits_{0}^{\infty}e^{-2r/a_0}r^3\;dr=\dfrac{3}{2}a_0 \nonumber$ Therefore, the average value of $r$ is $3/2 a_0$. Remember that $a_0$ is a physical constant known as the Bohr radius, which is approximately 0.53 Å, where 1 Å (angstrom) equals $10^{-10}m$. Important: Because $\hat r$ is “multiply by $r$”, and the wavefunction is real, the integrand becomes $r \psi^2$. However, you need to be careful when the operator involves derivatives. The integrand is the complex conjugate of the wavefunction multiplied by the function that you obtain when you calculate $\hat A \psi$. See Test Yourself 11.6 for an example where the order of the operations is important. The result of Example $1$ shows that the average distance of a 1s electron from the nucleus is $3/2 a_0$, which is about $8\times10^{-11}m$ (0.79 Å). The fact that the wavefunction is not an eigenfunction of the operator $\hat r$ tells us that we cannot predict the result of a measurement of the variable $r$. What about other observables such as kinetic energy or total energy? Are the orbitals of the hydrogen atom eigenfunctions of the operator $\hat T$ (kinetic energy)? Let’s try it with the 1s orbital of Equation \ref{2} (our conclusion will be true for all other orbitals, as you will see in your advanced physical chemistry courses). Notice that the expressions of Table $1$ are written in cartesian coordinates, while the orbitals are expressed in spherical coordinates. We could express the orbitals in cartesian coordinates, but that would be a lot of work because, in principle, there are infinite orbitals. It is much wiser to express the operators in spherical coordinates, so we can use them any time we need them in a problem that is best described in this coordinate system. This can be done using the chain rule, as we saw in previous chapters. In spherical coordinates, the operator $\hat T$ is written as: $\label{kinetic} \dfrac{-\hbar^2}{2m}\left ( \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left (r^2 \dfrac{\partial}{\partial r} \right )+\dfrac{1}{r^2 \sin \theta}\dfrac{\partial}{\partial \theta}\left ( \sin \theta \dfrac{\partial}{\partial \theta} \right )+\dfrac{1}{r^2 \sin^2\theta}\dfrac{\partial^2}{\partial \phi^2} \right )$ where $m$ is the mass of the particle (in this case the electron). If you compare this expression to the one found in Table $1$, you may think we are complicating ourselves unnecessarily. However, it would be much more time consuming to convert every wavefunction we want to work with to cartesian coordinates, while obtaining Equation \ref{kinetic} from the expression in Table $1$ is a one time job.
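The $\langle r \rangle$ integral is also easy to verify with a computer algebra system. The sketch below (illustrative, not part of the original text) evaluates the triple integral with SymPy and recovers $\dfrac{3}{2}a_0$:

```python
# Illustrative SymPy evaluation of <r> for the 1s orbital.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
a0 = sp.symbols('a_0', positive=True)

psi = sp.exp(-r/a0)/sp.sqrt(sp.pi*a0**3)      # real wavefunction, so psi* = psi
integrand = psi*r*psi * r**2*sp.sin(theta)    # psi* (r psi) dV in spherical coordinates

avg_r = sp.integrate(integrand, (r, 0, sp.oo),
                     (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(sp.simplify(avg_r))                     # -> 3*a_0/2
```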
To see if the 1s orbital is an eigenfunction of the operator defined in Equation \ref{kinetic} we need to calculate $\dfrac{-\hbar^2}{2m}\left ( \dfrac{1}{r^2} \dfrac{\partial }{\partial r}\left (r^2 \dfrac{\partial \psi}{\partial r} \right )+\dfrac{1}{r^2 \sin \theta}\dfrac{\partial }{\partial \theta}\left ( \sin \theta \dfrac{\partial \psi}{\partial \theta} \right )+\dfrac{1}{r^2 \sin^2\theta}\dfrac{\partial^2 \psi}{\partial \phi^2} \right ) \nonumber$ and see if the result equals a constant times the function $\psi$. The problem is solved in Example $2$.

Example $2$

Decide whether the 1s orbital $\psi(r,\theta,\phi)=\dfrac{1}{\sqrt{\pi}} \dfrac{1}{a_0^{3/2}}e^{-(r/a_0)} \nonumber$ is an eigenfunction of the operator $\hat T$, defined in Equation \ref{kinetic}.

Solution

The 1s orbital depends on $r$ only, and therefore the derivatives with respect to $\theta$ and $\phi$ are zero (this will be true for all the s-orbitals). Therefore, Equation \ref{kinetic} reduces to: $\hat T=\dfrac{-\hbar^2}{2m}\left ( \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left (r^2 \dfrac{\partial}{\partial r} \right )\right ) \nonumber$ The function $\psi$ is an eigenfunction of $\hat T$ if the following relationship is true: $\hat T \psi = a \psi \nonumber$ Remember that $a$ should be a constant (i.e. it should not depend on $r, \theta, \phi$). Let’s calculate $\hat T \psi$. We first need to calculate the derivative of $\psi$ with respect to $r$, multiply the result by $r^2$, take the derivative with respect to $r$ of the result, divide the result by $r^2$, and finally multiply the result by $-\hbar^2 /(2m)$. To simplify notation, let’s call $A = \dfrac{1}{\sqrt{\pi}}\dfrac{1}{a_0^{3/2}}$, so that $\psi(r,\theta,\phi)=A e^{-(r/a_0)}$. $\dfrac{\partial \psi}{\partial r}=-\dfrac{A}{a_0}e^{-(r/a_0)} \nonumber$ \begin{align*} \dfrac{\partial}{\partial r}\left (r^2 \dfrac{\partial \psi}{\partial r} \right )&=\dfrac{\partial}{\partial r}\left (-\dfrac{A}{a_0}r^2 e^{-(r/a_0)} \right ) = -\dfrac{A}{a_0}\left ( 2r e^{-(r/a_0)}-\dfrac{1}{a_0}r^2e^{-(r/a_0)} \right)\\[4pt] \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left (r^2 \dfrac{\partial \psi}{\partial r} \right )&= -\dfrac{A}{a_0}\left ( \dfrac{2}{r} e^{-(r/a_0)}-\dfrac{1}{a_0}e^{-(r/a_0)}\right)\\[4pt] -\dfrac{\hbar^2}{2m}\dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left (r^2 \dfrac{\partial \psi}{\partial r} \right )&=\dfrac{A \hbar^2}{2m a_0}e^{-(r/a_0)} \left ( \dfrac{2}{r} -\dfrac{1}{a_0}\right)=\dfrac{ \hbar^2}{2m a_0} \left ( \dfrac{2}{r} -\dfrac{1}{a_0}\right)\psi\neq a \psi \end{align*} Therefore, $\psi$ is not an eigenfunction of $\hat T$, and we cannot predict the result of a measurement of the kinetic energy.

We will now consider the total energy (that is, the sum of the kinetic energy plus the potential energy). Because this is such an important property, the corresponding operator has a special name: the Hamiltonian ($\hat H$). To write down the Hamiltonian, we need to add the kinetic energy operator (Equation \ref{kinetic}) to the potential energy operator. However, in contrast to the kinetic energy term, the potential energy depends on the forces experienced by the particle, and therefore we cannot write a generic expression. If you took a physics course, you may be familiar with different expressions for the potential energy of different systems (e.g. two charged particles, a spring, a particle in a gravitational field, etc). In all cases, the potential energy depends on the coordinates of the particles.
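If you want to reproduce Example $2$ without doing the derivatives by hand, here is an illustrative SymPy sketch (not part of the original text). It applies the radial part of $\hat T$ and shows that the ratio $\hat T\psi/\psi$ still depends on $r$, so $\psi$ cannot be an eigenfunction:

```python
# Illustrative check that the 1s orbital is not an eigenfunction of T.
import sympy as sp

r = sp.symbols('r', positive=True)
a0, hbar, m = sp.symbols('a_0 hbar m', positive=True)

psi = sp.exp(-r/a0)    # radial part only; the normalization constant A cancels
Tpsi = -hbar**2/(2*m) * sp.diff(r**2*sp.diff(psi, r), r)/r**2

print(sp.simplify(Tpsi/psi))   # still contains r, so T psi != a*psi
```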
For example, for two charged point particles of opposite sign, the electrostatic potential associated with their interaction is $V(r)= k q_1 q_2/r$. Here, $k$ is a constant (see below), $q_1$ and $q_2$ are the charges of the two particles, and $r$ is the distance that separates them. If you look at Table $1$, you will see that the operator that corresponds to this expression is just “multiply by...”. This is because the potential energy depends on the coordinates, and not on the derivatives. Therefore, it is like the operator $\hat r$ we saw before. For the hydrogen atom, the potential energy arises from the interaction between the only electron and the only proton in the atom. Both have the same charge (in absolute value), but one is negative and the other one positive, so $q_1 q_2 = -\epsilon^2$, where $\epsilon$ is the charge of the proton. With this in mind, we can write the operator $\hat V$ as: $\hat{V} =-\dfrac{\epsilon^2}{4 \pi \epsilon_0} \dfrac{1}{r}$ It is important to understand that this is an operator which operates by “multiplying by...”. Therefore $\hat V \psi =-\dfrac{\epsilon^2}{4 \pi \epsilon_0} \dfrac{1}{r}\psi$. A common mistake is to forget the wavefunction $\psi$. The Hamiltonian for the hydrogen atom can then be expressed as the sum $\hat T + \hat V$: $\label{hamiltonian} \hat H =\dfrac{-\hbar^2}{2m}\left ( \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left (r^2 \dfrac{\partial}{\partial r} \right )+\dfrac{1}{r^2 \sin \theta}\dfrac{\partial}{\partial \theta}\left ( \sin \theta \dfrac{\partial}{\partial \theta} \right )+\dfrac{1}{r^2 \sin^2\theta}\dfrac{\partial^2}{\partial \phi^2} \right )-\dfrac{\epsilon^2}{4 \pi \epsilon_0} \dfrac{1}{r}$ According to the postulates of quantum mechanics, if the wavefunction defined in Equation \ref{2} is an eigenfunction of this Hamiltonian, every time we measure the total energy of the electron we will measure the corresponding eigenvalue. In other words, if this is true: $\hat H \psi = a \psi$, then the constant $a$ is the energy of one electron in the 1s orbital. If we used the wavefunction for the 2s orbital instead, we would get the energy of the 2s orbital, and so on. It is important to note that the constants in the potential energy term are related to the Bohr radius ($a_0$) as: $\dfrac{\epsilon^2}{4 \pi \epsilon_0} =\dfrac{\hbar^2}{m a_0}. \nonumber$ This relationship will allow you to simplify your result.

11.04: Problems

Problem $1$

Consider the operator $\hat A$ defined in Equation $11.1.1$ as $\hat A=\hat x + \dfrac{d}{dx}$. Is it linear or non-linear? Justify.

Problem $2$

Which of these functions are eigenfunctions of the operator $-\frac{d^2}{dx^2}$? Give the corresponding eigenvalue when appropriate. In each case $k$ can be regarded as a constant.
$f_1(x)=e^{ikx} \nonumber$ $f_2(x)=\cos(kx) \nonumber$ $f_3(x)=e^{-kx^2} \nonumber$ $f_4(x)=e^{ikx}-\cos(kx) \nonumber$

Problem $3$

In quantum mechanics, the $x$, $y$ and $z$ components of the angular momentum are represented by the following operators: \begin{align*} \hat{L}_x &=i\hbar\left(\sin\phi\frac{\partial}{\partial \theta}+\frac{\cos\phi}{\tan \theta}\frac{\partial}{\partial\phi}\right) \\[4pt] \hat{L}_y &=i\hbar\left(-\cos\phi\frac{\partial}{\partial \theta}+\frac{\sin\phi}{\tan \theta}\frac{\partial}{\partial\phi}\right) \\[4pt] \hat{L}_z &=-i\hbar\left(\frac{\partial}{\partial \phi}\right) \end{align*} The operator for the square of the magnitude of the orbital angular momentum, $\hat{L}^2=\hat{L}^2_x +\hat{L}^2_y+\hat{L}^2_z$, is: $\hat{L}^2=-\hbar^2\left(\frac{\partial^2}{\partial \theta^2}+\frac{1}{\tan \theta}\frac{\partial}{\partial\theta}+\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial \phi^2}\right) \nonumber$

a) Show that the three 2p orbitals of the H atom are eigenfunctions of both $\hat{L}^2$ and $\hat{L}_z$, and determine the corresponding eigenvalues. $\psi_{2p0}=\frac{1}{\sqrt{32\pi a_0^3}}r e^{-r/2 a_0}\cos\theta \nonumber$ $\psi_{2p+1}=\frac{1}{\sqrt{64\pi a_0^3}}r e^{-r/2 a_0}\sin\theta e^{i\phi} \nonumber$ $\psi_{2p-1}=\frac{1}{\sqrt{64\pi a_0^3}}r e^{-r/2 a_0}\sin\theta e^{-i\phi} \nonumber$

b) Calculate $\hat{L}_x\psi_{2p0}$. Is $\psi_{2p0}$ an eigenfunction of $\hat{L}_x$?

c) Calculate $\hat{L}_y\psi_{2p0}$. Is $\psi_{2p0}$ an eigenfunction of $\hat{L}_y$?

Problem $4$

Prove that $\left[\hat{L}_z,\hat{L}_x\right]=i\hbar \hat{L}_y \nonumber$

Problem $5$

For a system moving in one dimension, the momentum operator can be written as $\hat p = -i \hbar \frac{d}{dx} \nonumber$ Find the commutator $[\hat x, \hat p]$. Note: $\hbar$ is defined as $h/{2 \pi}$, where $h$ is Planck’s constant. It has been defined because the ratio $h/{2 \pi}$ appears often in quantum mechanics.

Problem $6$

We demonstrated that $\psi_{1s}$ is not an eigenfunction of $\hat T$. Yet, we can calculate the average kinetic energy of a 1s electron, $\left \langle T \right \rangle$. Use Equation $11.3.1$ to calculate an expression for $\left \langle T \right \rangle$.

Problem $7$

Use the Hamiltonian of Equation $11.3.5$ to calculate the energy of the electron in the 1s orbital of the hydrogen atom. The normalized wave function of the 1s orbital is: $\psi=\frac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0} \nonumber$

Problem $8$

The expression of Equation $11.3.1$ can be used to obtain the expectation (or average) value of the observable represented by the operator $\hat{A}$. The state of a particle confined in a one-dimensional box of length $a$ is described by the following wavefunction: $\psi(x)=\begin{cases} \sqrt{\frac{2}{a}}\sin\left(\frac{\pi x}{a} \right ) & \mbox{ if } 0\leq x\leq a \\ 0 & \mbox{otherwise} \end{cases} \nonumber$ The momentum operator for a one-dimensional system was introduced in Problem $5$.

a) Obtain an expression for $\hat{p}^2$ and determine if $\psi$ is an eigenfunction of $\hat{p}$ and $\hat{p}^2$. If possible, obtain the corresponding eigenvalues. Hint: $\hat{p}^2$ is the product $\hat{p}\hat{p}$.

b) Determine if $\psi$ is an eigenfunction of $\hat{x}$. If possible, obtain the corresponding eigenvalues.

c) Calculate the following expectation values: $\left \langle x \right \rangle$, $\left \langle p^2 \right \rangle$, and $\left \langle p \right \rangle$. Compare with the eigenvalues calculated in the previous questions.
Objectives

• Learn the method of separation of variables to solve simple partial differential equations.
• Understand how to apply the method of separation of variables to two important problems in the physical sciences: the wave equation in one dimension and molecular diffusion.

• 12.1: Introduction to Partial Differential Equations
Many important equations in physical chemistry, engineering, and physics describe how some physical quantity, such as a temperature or a concentration, varies with position and time.
• 12.2: The Method of Separation of Variables
The separation of variables is a method for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.
• 12.3: The Wave Equation in One Dimension
The wave equation is an important second-order linear partial differential equation that describes waves such as sound waves, light waves and water waves. In this course, we will focus on oscillations in one dimension.
• 12.4: Molecular Diffusion
Molecular diffusion is the thermal motion of molecules at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size and shape of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. The term "diffusion" is also generally used to describe the flux of other physical quantities like thermal energy (heat).
• 12.5: Problems

12: Partial Differential Equations

12.1: Introduction to Partial Differential Equations

Many important equations in physical chemistry, engineering, and physics describe how some physical quantity, such as a temperature or a concentration, varies with position and time. This means that one or more spatial variables and time serve as independent variables. For example, let’s consider the concentration of a chemical around the point $(x,y,z)$ at time $t$: $C = C (x, y, z, t)$. The differential equation that describes how $C$ changes with time is $\label{eq:pde1} \nabla^2C(x,y,z,t)=\frac{1}{D}\frac{\partial C(x,y,z,t)}{\partial t}$ where $\nabla^2$ is an operator known as the Laplacian. In cartesian three-dimensional coordinates: $\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2} \nonumber$ The constant $D$ is the diffusion coefficient, and determines how far molecules move on average in a given period of time. The diffusion coefficient depends on the size and shape of the molecule, and the temperature and viscosity of the solvent. The diffusion equation (Equation \ref{eq:pde1}) is a partial differential equation because the dependent variable, $C$, depends on more than one independent variable, and therefore its partial derivatives appear in the equation. Other important equations that are common in the physical sciences are: The heat equation: $\label{eq:pde2} \nabla^2T(x,y,z,t)=\frac{1}{\alpha}\frac{\partial T(x,y,z,t)}{\partial t}$ which is in a way a diffusion equation, except that the quantity that diffuses is heat, and not matter. This translates into a change in temperature instead of a change in concentration. The wave equation: $\label{eq:pde3} \nabla^2u(x,y,z,t)=\frac{1}{v^2}\frac{\partial^2 u(x,y,z,t)}{\partial t^2}$ which describes the displacement of all points of a vibrating thin string. Here, $v$ has units of speed, and it’s related to the elasticity of the string.
The time-independent Schrödinger equation: $\label{eq:pde4} -\frac{\hbar^2}{2m}\nabla^2\psi(x,y,z)+V\psi(x,y,z)=E\psi(x,y,z)$ which we have already introduced in previous chapters. Note that the Schrödinger equation becomes an ordinary differential equation for one-dimensional problems (e.g. the one-dimensional particle in a box), but it is a PDE for systems where particles are allowed to move in two or more dimensions. In this course, we will introduce the simplest examples of PDEs relevant to physical chemistry. As you will see, solving these equations analytically is rather complex, and the solutions depend greatly on initial and boundary conditions. Because solving these equations is time consuming, in your upper-level physical chemistry courses your teacher will often show you the solutions without going through the whole derivation. Yet, it is important that you go through all the work at least once for the simplest cases, so you know what is involved in solving the PDEs you will see in the future.
12.2: The Method of Separation of Variables

Most PDEs you will encounter in physical chemistry can be solved using a method called “separation of variables”. We will exemplify the method with one of the simplest PDEs, a first-order linear equation in two dimensions: $\label{eq:pde5} \dfrac{\partial f(x,y)}{\partial x}+\dfrac{\partial f(x,y)}{\partial y}=0$ The same strategy applies to more important equations, such as Laplace’s equation, whose solutions play a central role in many fields of science, including electromagnetism, astronomy, and fluid dynamics.

Separation of Variables: Steps

The following steps summarize the procedure we will follow to find the solution:

1. Assume that the solution of the differential equation can be expressed as the product of functions of each of the variables.
2. Substitute the product into the PDE.
3. Group terms that depend on each of the independent variables (in this case $x$ and $y$).
4. Identify the terms that need to equal constants.
5. Solve the ODEs (do not forget the integration constants!).
6. Put everything together. Your answer will have one or more constants that will eventually be determined from boundary conditions.

Step 1

The first step in the method of separation of variables is to assume that the solution of the differential equation, in this case $f(x,y)$, can be expressed as the product of a function of $x$ times a function of $y$: $f(x,y)=X(x)Y(y) \label{eq1}$ Don’t get confused with the nomenclature. We use lower case to denote the variable, and upper case to denote the function. We could have written Equation \ref{eq1} as $f(x,y)=h(x)g(y)$.

Step 2

In the second step, we substitute $f(x,y)$ in Equation \ref{eq:pde5} by Equation \ref{eq1}: $\dfrac{\partial X(x)Y(y)}{\partial x}+\dfrac{\partial X(x)Y(y)}{\partial y}=0$ $\label{eq:pde7} Y(y)\dfrac{\partial X(x)}{\partial x}+X(x)\dfrac{\partial Y(y)}{\partial y}=0$

Step 3

The third step involves reorganizing the terms of Equation \ref{eq:pde7} so all terms in $x$ and $y$ are grouped together. There is no universal method for this step. In this example, we’ll separate variables by dividing all terms by $X(x)Y(y)$, but in general you will need to figure out how to separate variables for the particular equation you are solving: $\label{eq:pde8} \dfrac{1}{X(x)}\dfrac{\partial X(x)}{\partial x}+\dfrac{1}{Y(y)}\dfrac{\partial Y(y)}{\partial y}=0$

Step 4

In the fourth step, we recognize that Equation \ref{eq:pde8} is the sum of two terms (it would be three if we were solving a problem in 3 dimensions), and each term depends on one variable only. In this case, the first term is a function of $x$ only, and the second term is a function of $y$ only. How can we add something that depends on $x$ only to something that depends on $y$ only and get zero? This sounds impossible, as terms in $x$ will never cancel out terms in $y$. The only way to make Equation \ref{eq:pde8} hold for any value of $x$ and $y$ is to force each summand to be a constant. The term $\dfrac{1}{X(x)}\dfrac{\partial X(x)}{\partial x}$ cannot be a function of $x$, and the term $\dfrac{1}{Y(y)}\dfrac{\partial Y(y)}{\partial y}$ cannot be a function of $y$: $\label{eq:pde9a} \dfrac{1}{X(x)}\dfrac{\partial X(x)}{\partial x}=K_1$ $\label{eq:pde9b} \dfrac{1}{Y(y)}\dfrac{\partial Y(y)}{\partial y}=K_2$ This step transforms a PDE into two ODEs. In general, we will have one ODE for each independent variable. In this particular case, because the two terms need to add up to zero, we have $K_1=-K_2$.

Step 5

In the fifth step, we solve the 2 ODEs using the methods we learned in previous chapters. We will get $X(x)$ from Equation \ref{eq:pde9a} and $Y(y)$ from Equation \ref{eq:pde9b}.
Both solutions will contain arbitrary constants that we will evaluate using initial or boundary conditions if given. In this case, the two equations are mathematically identical, and are separable 1st order ordinary differential equations. The solutions (which you should be able to get on your own) are: $X(x)=Ae^{K_1x}$ $Y(y)=Be^{-K_1y}$

Step 6

In step 6, we combine the one-variable solutions to obtain the many-variable solution we are looking for (Equation \ref{eq1}): $f(x,y)=X(x)Y(y)=Ae^{K_1x}Be^{-K_1y}=Ce^{K_1(x-y)}$ where $C$ is a constant. We should always finish by checking that our answer indeed satisfies the PDE we were trying to solve: $\dfrac{\partial f(x,y)}{\partial x}+\dfrac{\partial f(x,y)}{\partial y}=0$ \begin{align*} f(x,y)=Ce^{K_1(x-y)} &\rightarrow \dfrac{\partial f(x,y)}{\partial x}=CK_1e^{K_1(x-y)},\quad \dfrac{\partial f(x,y)}{\partial y}=-CK_1e^{K_1(x-y)} \\[4pt] &\rightarrow \dfrac{\partial f(x,y)}{\partial x}+\dfrac{\partial f(x,y)}{\partial y}=CK_1e^{K_1(x-y)}-CK_1e^{K_1(x-y)}=0 \end{align*}
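This final check is a one-liner in a computer algebra system. The following SymPy sketch (illustrative, not from the original text) confirms that $f(x,y)=Ce^{K_1(x-y)}$ satisfies Equation \ref{eq:pde5}:

```python
# Illustrative check that f(x, y) = C*exp(K1*(x - y)) solves df/dx + df/dy = 0.
import sympy as sp

x, y, C, K1 = sp.symbols('x y C K_1')
f = C*sp.exp(K1*(x - y))

print(sp.simplify(sp.diff(f, x) + sp.diff(f, y)))   # -> 0
```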
12.3: The Wave Equation in One Dimension

The wave equation is an important second-order linear partial differential equation that describes waves such as sound waves, light waves and water waves. In this course, we will focus on oscillations in one dimension. Let’s consider a thin string of length $l$ that is fixed at its two endpoints, and let’s call the displacement of the string from its horizontal position $u(x,t)$ (Figure $1$). The displacement of each point in the string is limited to one dimension, but because the displacement also depends on time, the one-dimensional wave equation is a PDE: $\label{eq:pde10} \dfrac{\partial^2u(x,t)}{\partial x^2}=\dfrac{1}{v^2}\dfrac{\partial^2 u(x,t)}{\partial t^2}$ Because the string is held at both ends, the PDE is subject to two boundary conditions: $\label{eq:pde11} u(0,t)=u(l,t)=0$ Using the method of separation of variables, we assume that the function $u(x,t)$ can be written as the product of a function of only $x$ and a function of only $t$: $\label{eq:pde12} u(x,t)=f(x)g(t)$ Substituting Equation \ref{eq:pde12} in Equation \ref{eq:pde10}: $\dfrac{\partial^2f(x)g(t)}{\partial x^2}=\dfrac{1}{v^2}\dfrac{\partial^2 f(x)g(t)}{\partial t^2} \nonumber$ $\label{eq:pde13} g(t)\dfrac{\partial^2 f(x)}{\partial x^2}=\dfrac{1}{v^2}f(x)\dfrac{\partial^2 g(t)}{\partial t^2}$ and separating the terms in $x$ from the terms in $t$: $\label{eq:pde14} \dfrac{1}{f(x)}\dfrac{\partial^2 f(x)}{\partial x^2}=\dfrac{1}{v^2}\dfrac{1}{g(t)}\dfrac{\partial^2 g(t)}{\partial t^2}$ Remember that $v$ is a constant, and we could leave it on either side of Equation \ref{eq:pde14}. The left side of this equation is a function of $x$ only, and the right side is a function of $t$ only. Because $x$ and $t$ are independent variables, the only way that the equality holds is that each side equals a constant. $\dfrac{1}{f(x)}\dfrac{\partial^2 f(x)}{\partial x^2}=\dfrac{1}{v^2}\dfrac{1}{g(t)}\dfrac{\partial^2 g(t)}{\partial t^2}=K \nonumber$ $K$ is called the separation constant, and will be determined by the boundary conditions. Note that after separation of variables, one PDE became two ODEs: $\label{eq:pde15a} \dfrac{1}{f(x)}\dfrac{\partial^2 f(x)}{\partial x^2}=K\rightarrow \dfrac{\partial^2 f(x)}{\partial x^2}-Kf(x)=0$ $\label{eq:pde15b} \dfrac{1}{v^2}\dfrac{1}{g(t)}\dfrac{\partial^2 g(t)}{\partial t^2}=K\rightarrow \dfrac{\partial^2 g(t)}{\partial t^2}-Kv^2g(t)=0$ These are both second order ordinary differential equations with constant coefficients, so we can solve them using the methods we learned in Chapter 5. From Equation \ref{eq:pde15a}, $\dfrac{\partial ^2f(x)}{\partial x^2}-Kf(x)=0 \nonumber$ which is a 2nd order ODE with auxiliary equation $\alpha^2-K=0\Rightarrow \alpha=\pm\sqrt{K} \nonumber$ and therefore $\label{eq:pde16} f(x)=c_1e^{\sqrt{K}x}+c_2e^{-\sqrt{K}x}$ We do not know yet if $K$ is positive, negative or zero, so we do not know if these are real or complex exponentials. We will use the boundary conditions ($f(0)=f(l)=0$) and see what happens: $f(x)=c_1e^{\sqrt{K}x}+c_2e^{-\sqrt{K}x}\rightarrow f(0)=c_1+c_2=0\Rightarrow c_1=-c_2 \nonumber$ $f(x)=c_1(e^{\sqrt{K}x}-e^{-\sqrt{K}x})\rightarrow f(l)=c_1(e^{\sqrt{K}l}-e^{-\sqrt{K}l})=0 \nonumber$ There are two ways to make $f(l)=c_1(e^{\sqrt{K}l}-e^{-\sqrt{K}l})=0. \nonumber$ We could choose $c_1=0$, but this choice would result in $f(x)=0$, which physically means the string is not vibrating at all (the displacement of all points is zero).
This is certainly a mathematically acceptable solution, but it is not a solution that represents the physical behavior of our string. Therefore, the only viable choice is $e^{\sqrt{K}l}=e^{-\sqrt{K}l}$. Let’s see what this means in terms of $K$. There is no positive value of $K$ that makes $e^{\sqrt{K}l}=e^{-\sqrt{K}l}. \nonumber$ If $K=0$, we obtain $f(x)=0$, which is again not physically acceptable. Then, the value of $K$ has to be negative, and $\sqrt{K}$ is an imaginary number: $e^{\sqrt{K}l}=e^{-\sqrt{K}l} \nonumber$ $e^{i\sqrt{|K|}l}=e^{-i\sqrt{|K|}l} \nonumber$ where $|K|=-K$ is the absolute value of $K<0$. Using Euler’s relationship: $\cos(\sqrt{|K|}l)+i\sin(\sqrt{|K|}l)=\cos(\sqrt{|K|}l)-i\sin(\sqrt{|K|}l) \nonumber$ $2i\sin(\sqrt{|K|}l)=0\rightarrow \sqrt{|K|}l=n\pi\rightarrow \sqrt{|K|}=\left( \dfrac{n\pi}{l}\right) \nonumber$ Now that we have an expression for $K$, we can write an expression for $f(x)$: $f(x)=c_1(e^{\sqrt{K}x}-e^{-\sqrt{K}x})=c_1(e^{i\sqrt{|K|}x}-e^{-i\sqrt{|K|}x})=2ic_1\sin(\sqrt{|K|}x) \nonumber$ $\label{eq:pde17} f(x)=A\sin\left(\dfrac{n\pi}{l}x\right)$ So far we got $f(x)$, so we need to move on and get an expression for $g(t)$ from Equation \ref{eq:pde15b}. Notice, however, that we now know the value of $K$, so let’s re-write Equation \ref{eq:pde15b} as: $\label{eq:pde 18} \dfrac{\partial^2 g(t)}{\partial t^2}+\left(\dfrac{n\pi}{l}\right)^2v^2g(t)=0$ This is another 2nd order ODE, with auxiliary equation $\alpha^2+\left(\dfrac{n\pi}{l}\right)^2v^2=0\rightarrow \alpha=\pm i \left( \dfrac{n\pi}{l} v\right) \nonumber$ we can then write $g(t)$ as: $g(t)=c_1e^{i \left( \dfrac{n\pi}{l} v\right) t}+c_2e^{-i \left( \dfrac{n\pi}{l} v\right) t} \nonumber$ which you should be able to prove can be rewritten as $\label{eq:pde_19} g(t)=c_3\sin \left( \dfrac{n\pi}{l} vt\right) +c_4\cos \left( \dfrac{n\pi}{l} vt\right)$ We cannot get the values of $c_3$ and $c_4$ yet because we do not have information about initial conditions. Before discussing this, however, let’s put the two pieces together: $u(x,t)=\sin \left( \dfrac{n\pi}{l}x \right) \left[p_n\sin \left( \dfrac{n\pi}{l} vt\right) +q_n\cos \left( \dfrac{n\pi}{l} vt\right) \right] \nonumber$ where we combined the constants $A$, $c_3$ and $c_4$ and re-named them $p_n$ and $q_n$. The subindices stress the fact that these constants depend on $n$, which will be important in a minute. Before we move on, and to simplify notation, let’s recognize that the quantity $\dfrac{n \pi v}{l}$ has units of reciprocal time. This is true because it needs to give a dimensionless number when multiplied by $t$. This means that, physically, $\dfrac{n \pi v}{l}$ represents a frequency, so we can call it $\omega_n$: $\label{eq:pde_20} u(x,t)=\sin \left( \dfrac{n\pi}{l}x \right) \left[p_n\sin \left(\omega_nt\right) +q_n\cos \left(\omega_nt\right) \right]\;n=1,2,...,\infty$ At this point, we recognize that we have an infinite number of solutions: $\label{eq:pde_21} u_1(x,t)=\sin \left( \dfrac{\pi}{l}x \right) \left[p_1\sin \left(\omega_1t\right) +q_1\cos \left(\omega_1 t\right) \right]$ $u_2(x,t)=\sin \left( 2\dfrac{\pi}{l}x \right) \left[p_2\sin \left(\omega_2t\right) +q_2\cos \left(\omega_2t\right) \right] \nonumber$ $\vdots \nonumber$ $u_n(x,t)=\sin \left( n\dfrac{\pi}{l}x \right) \left[p_n\sin \left(\omega_nt\right) +q_n\cos \left(\omega_nt\right) \right] \nonumber$ where $\omega_1, \omega_2,...,\omega_n=\dfrac{\pi v}{l},\dfrac{2\pi v}{l},...,\dfrac{n\pi v}{l}$.
As usual, the general solution is a linear combination of all these solutions: $\label{eq:pde_22} u(x,t)=c_1u_1(x,t)+c_2u_2(x,t)+...+c_nu_n(x,t)={\color{red}\sum_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\sin \left(\omega_nt\right) +b_n\cos \left(\omega_nt\right) \right]}$ where $a_n=c_np_n$ and $b_n=c_nq_n$. Notice that we have not used any initial conditions yet. We used the boundary conditions we were given ($u(0,t)=u(l,t)=0$), so Equation \ref{eq:pde_22} is valid regardless of initial conditions as long as the string is held fixed at both ends. As you may suspect, the values of $a_n$ and $b_n$ will be calculated from the initial conditions. However, notice that in order to describe the movement of the string at all times we will need to calculate an infinite number of $a_n$-values and an infinite number of $b_n$-values. This sounds pretty intimidating, but you will see how all the time you spent learning about Fourier series will finally pay off. Before we look into how to do that, let’s take a look at the individual solutions listed in Equation \ref{eq:pde_21}. Each $u_n(x,t)$ is called a normal mode. For example, for $n=1$, we have $u_1(x,t)=\sin \left( \dfrac{\pi}{l}x \right) \left[p_1\sin \left(\omega_1t\right) +q_1\cos \left(\omega_1t\right) \right] \nonumber$ which is called the fundamental mode, or first harmonic. Notice that this function is the product of a function that depends only on $x$ ($\sin \left( \dfrac{\pi}{l}x \right)$) and another function that depends only on $t$, i.e., $\left[p_1\sin \left(\omega_1t\right) +q_1\cos \left(\omega_1t\right) \right]. \nonumber$ The function on $t$ simply changes the amplitude of the sine function on $x$. For $n=2$, we have: $u_2(x,t)=\sin \left( 2\dfrac{\pi}{l}x \right) \left[p_2\sin \left(\omega_2t\right) +q_2\cos \left(\omega_2t\right) \right] \nonumber$ which is called the first overtone, or second harmonic. Again, this function is the product of one function that depends on $x$ only ($\sin \left( 2\dfrac{\pi}{l}x \right)$), and another one that depends on $t$ and changes the amplitude of $\sin \left( 2\dfrac{\pi}{l}x \right)$ without changing its overall shape. For $n=3$, we have: $u_3(x,t)=\sin \left( 3\dfrac{\pi}{l}x \right) \left[p_3\sin \left(\omega_3t\right) +q_3\cos \left(\omega_3t\right) \right] \nonumber$ which is called the second overtone, or third harmonic. Again, this function is the product of one function that depends on $x$ only ($\sin \left( 3\dfrac{\pi}{l}x \right)$), and another one that depends on $t$ and changes the amplitude of $\sin \left( 3\dfrac{\pi}{l}x \right)$ without changing its overall shape. If the initial shape of the string (i.e. the function $u(x,t)$ at time zero) is $\sin \left( \dfrac{\pi}{l}x \right)$ (Figure $2$), then the string will vibrate as shown in the figure, just changing the amplitude but not the overall shape. In more general terms, if $u(x,0)$ is one of the normal modes, the string will vibrate according to that normal mode, without mixing with the rest. However, in general, the shape of the string will be described by a linear combination of normal modes (Equation \ref{eq:pde_22}). If you recall from Chapter 7, a Fourier series tells you how to express a function as a linear combination of sines and cosines. The idea here is the same: we will express an arbitrary shape as a linear combination of normal modes, which are a collection of sine functions. In order to do that, we need information about the initial shape: $u(x,0)$.
We also need information about the initial velocity of all the points in the string: $\dfrac{\partial u(x,0)}{\partial t}$. The initial shape is the displacement of all points at time zero, and it is a function of $x$. Let’s call this function $y_1(x)$: $\label{eq:pde23} u(x,0)=y_1(x)$ The initial velocity of all points is also a function of $x$, and we will call it $y_2(x)$: $\label{eq:pde24} \dfrac{\partial u(x,0)}{\partial t}=y_2(x)$ Both functions together represent the initial conditions, and will be used to calculate all the $a_n$ and $b_n$ coefficients. To simplify the problem, let’s assume that at time zero we hold the string still, so the velocity of all points is zero: $\dfrac{\partial u(x,0)}{\partial t}=0 \nonumber$ Let’s see how we can use this information to finish the problem (i.e. calculate the coefficients $a_n$ and $b_n$). From Equations \ref{eq:pde_22} and \ref{eq:pde23}: $u(x,t)=\sum_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\sin \left(\omega_nt\right) +b_n\cos \left(\omega_nt\right) \right] \nonumber$ and applying the first initial condition: $\label{eq:pde_25} u(x,0)=\sum_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) [b_n]=y_1(x)$ This equation tells us that the initial shape, $y_1(x)$, can be described as an infinite sum of sine functions... sounds familiar? In Chapter 7, we saw that we can represent a periodic odd function $f(x)$ of period $2L$ as an infinite sum of sine functions (Equation $7.2.1$, $f(x)=\dfrac{a_0}{2}+\sum_{n=1}^{\infty}a_n \cos\left ( \dfrac{n\pi x}{L} \right )+\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right )$): $\label{eq:pde_26} f(x)=\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right )$ Comparing Equations \ref{eq:pde_25} and \ref{eq:pde_26}, we see that in order to calculate the $b_n$ coefficients of Equation \ref{eq:pde_22}, we need to create an odd extension of $y_1$ with period $2l$. Let’s see how this works with an example. Let’s assume that the initial displacement is given by the function shown in Figure $5$. Equation \ref{eq:pde_25} tells us that the function of Figure $5$ can be expressed as an infinite sum of sine functions. If we figure out which sum, we will have the coefficients $b_n$ we need to write down the expression of $u(x,t)$ we are seeking (Equation \ref{eq:pde_22}). We will still need the coefficients $a_n$, which will be calculated from the second initial condition (Equation \ref{eq:pde24}). Because we know the infinite sum of Equation \ref{eq:pde_25} describes an odd periodic function of period $2l$, our first step is to extend $y_1(x)$ in an odd fashion (Figure $6$). What is the Fourier series of the periodic function of Figure $6$? Using the methods we learned in Chapter 7, we obtain: $\label{eq:pde_27} y_1(x)=\dfrac{8A}{\pi^2}\left[\sin \dfrac{\pi x}{l}- \dfrac{1}{3^2}\sin\dfrac{3\pi x}{l} + \dfrac{1}{5^2}\sin\dfrac{5\pi x}{l}... \right]=\dfrac{8A}{\pi^2} \sum\limits_{n=0}^{\infty}\dfrac{(-1)^n}{(2n+1)^2} \sin \left( \dfrac{(2n+1)\pi}{l}x \right)$ From Equation \ref{eq:pde_25} $u(x,0)=\sum\limits_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) [b_n]=y_1(x)$ comparing Equations \ref{eq:pde_25} and \ref{eq:pde_27}: $\label{eq:pde_28} b_n = \begin{cases} 0 & n=2,4,6... \\ \dfrac{8A}{\pi^2n^2} & n=1,5,9... \\ -\dfrac{8A}{\pi^2n^2} & n=3,7,11... \end{cases}$ Great! We have all the coefficients $b_n$, so we are just one step away from our final goal of expressing $u(x,t)$. Our last step is to calculate the coefficients $a_n$.
We will use the last initial condition: $\dfrac{\partial u(x,0)}{\partial t}=y_2(x)$. Taking the partial derivative of Equation \ref{eq:pde_22}: $u(x,t)=\sum\limits_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\sin \left(\omega_nt\right) +b_n\cos \left(\omega_nt\right) \right] \nonumber$ $\dfrac{\partial u(x,t)}{\partial t}=\sum\limits_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\omega_n\cos \left(\omega_nt\right) -b_n\omega_n\sin \left(\omega_nt\right) \right] \nonumber$ $\label{eq:pde_29} \dfrac{\partial u(x,0)}{\partial t}=\sum\limits_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\omega_n\right]=y_2(x)$ Equation \ref{eq:pde_29} tells us that the function $y_2(x)$ can be expressed as an infinite sum of sine functions. Again, we need to create an odd extension of $y_2(x)$ and obtain its Fourier series: $y_2(x)=\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right )$. The coefficients $b_n$ of the Fourier series equal $a_n\omega_n$ (Equation \ref{eq:pde_29}). In this particular case: $\dfrac{\partial u(x,0)}{\partial t}=\sum\limits_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\omega_n\right]=0\rightarrow a_n=0 \nonumber$ The coefficients $a_n$ are zero, because the derivative needs to be zero for all values of $x$. Now that we have all coefficients $b_n$ and $a_n$ we are ready to wrap this up. From Equations \ref{eq:pde_22} and \ref{eq:pde_28}: \begin{align*} u(x,t) &=\sum\limits_{n=1}^{\infty}\sin \left( \dfrac{n\pi}{l}x \right) \left[a_n\sin \left(\omega_nt\right) +b_n\cos \left(\omega_nt\right) \right] \\[4pt] &=b_1\sin \left( \dfrac{\pi}{l}x \right)\cos \left( \omega_1t \right)+b_3\sin \left( \dfrac{3\pi}{l}x \right)\cos \left( \omega_3t \right)+b_5\sin \left( \dfrac{5\pi}{l}x \right)\cos \left( \omega_5t \right)+... \\[4pt] &=\dfrac{8A}{\pi^2}\left[\sin \left( \dfrac{\pi}{l}x \right)\cos \left( \omega_1t \right)-\dfrac{1}{3^2}\sin \left( \dfrac{3\pi}{l}x \right)\cos \left( \omega_3t \right)+\dfrac{1}{5^2}\sin \left( \dfrac{5\pi}{l}x \right)\cos \left( \omega_5t \right)...\right] \end{align*} Recalling that $\omega_n=\dfrac{n\pi}{l}v$: $u(x,t)=\dfrac{8A}{\pi^2}\left[\sin \left( \dfrac{\pi}{l}x \right)\cos \left( \dfrac{\pi }{l}vt \right)-\dfrac{1}{3^2}\sin \left( \dfrac{3\pi}{l}x \right)\cos \left( \dfrac{3\pi }{l}vt \right)+\dfrac{1}{5^2}\sin \left( \dfrac{5\pi}{l}x \right)\cos \left(\dfrac{5\pi }{l}vt \right)...\right] \nonumber$ $\label{eq:pde_30} {\color{Maroon}u(x,t)=\dfrac{8A}{\pi^2}\sum\limits_{n=0}^{\infty}\dfrac{(-1)^n}{(2n+1)^2}\sin \left( \dfrac{(2n+1)\pi}{l}x \right) \cos \left(\dfrac{(2n+1)\pi }{l}vt \right)}$ Success! We got a full description of the movement of the string. We just need to know the length of the string ($l$), the initial displacement of the midpoint ($A$) and the parameter $v$, and we can start plotting the shape of the string at different times. Just remember that Mathematica cannot plot a function defined as an infinite sum, so you will have to plot a truncated version of Equation \ref{eq:pde_30}. As usual, the more terms you include the better the approximation, but the longer the computer will take to execute the command. The parameter $v$ has units of length over time (e.g. m/s), and it depends on factors such as the material of the string, its tension, and its thickness. A string instrument like a guitar, for instance, has strings made of different materials, and held at different tensions.
When plucked, they produce vibrations of different frequencies, which we perceive as different musical notes. In general, the vibration of the string will be a linear combination of the normal modes we talked about earlier in this section. Each normal mode has a unique frequency ($\omega_n=\dfrac{n\pi}{l}v$), and if this frequency is within our audible range, we will perceive it as a pure musical note. A linear combination of normal modes contains many frequencies, and we perceive them as a more complex sound. Music is nice, but what about the applications of normal modes in chemistry? We already mentioned molecular vibrations in different chapters, and we know that the atoms in molecules are continuously vibrating following approximately harmonic motions. The same way that the vibration of the string of Figure $5$ can be expressed as a linear combination of all the normal modes (Figures $2$-$4$), we can express the vibrations of a polyatomic molecule as a linear combination of normal modes. As you will see in your advanced physical chemistry courses, a non-linear polyatomic molecule has $3n-6$ vibrational normal modes, where $n$ is the number of atoms. For the molecule of water, for example, we have 3 normal modes: the symmetric stretch, the asymmetric stretch, and the bend. Any other type of vibration can be expressed as a linear combination of these three normal modes. As you can imagine, these motions occur very fast. Typically, you may see on the order of $10^{12}$ vibrations per second. The most direct way of probing the vibrations of a molecule is through infrared spectroscopy, and in fact you will measure and analyze the vibrational spectra of simple molecules in your 300-level physical chemistry labs.
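As a concrete illustration of plotting a truncated version of Equation \ref{eq:pde_30}, here is a short Python/NumPy sketch (the text refers to Mathematica; this translation and the parameter values $l=1$, $A=0.1$, $v=1$ are arbitrary choices for illustration, not from the original):

```python
# Illustrative plot of the truncated plucked-string solution, Equation (eq:pde_30).
import numpy as np
import matplotlib.pyplot as plt

l, A, v = 1.0, 0.1, 1.0            # string length, midpoint displacement, wave speed
x = np.linspace(0, l, 500)

def u(x, t, nmax=50):
    """Series solution truncated after nmax terms."""
    total = np.zeros_like(x)
    for n in range(nmax):
        k = (2*n + 1)*np.pi/l
        total += (-1)**n/(2*n + 1)**2 * np.sin(k*x)*np.cos(k*v*t)
    return 8*A/np.pi**2 * total

for t in (0, 0.2, 0.4):            # snapshots of the string at different times
    plt.plot(x, u(x, t), label=f't = {t}')
plt.xlabel('x'); plt.ylabel('u(x,t)'); plt.legend(); plt.show()
```

As the text notes, the more terms you include the better the approximation; with 50 terms the triangular initial shape at $t=0$ is already reproduced very closely.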
12.4: Molecular Diffusion

Molecular diffusion is the thermal motion of molecules at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size and shape of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. The term "diffusion" is also generally used to describe the flux of other physical quantities. For instance, the diffusion of thermal energy (heat) is described by the heat equation, which is mathematically identical to the diffusion equation we’ll consider in this section. To visualize what we mean by molecular diffusion, consider a red dye diffusing in a test tube. Suppose the experiment starts by placing a sample of the dye in a thin layer half way down the tube. Diffusion occurs because all molecules move due to their thermal energy. Each molecule moves in a random direction, meaning that if you wait long enough, molecules will end up being randomly distributed throughout the tube. This means that there is a net movement of dye molecules from areas of high concentration (the central band) into areas of low concentration (Figure $1$). The rate of diffusion depends on temperature, the size and shape of the molecules, and the viscosity of the solvent. The diffusion equation that describes how the concentration of solute (in this case the red dye) changes with position and time is: $\label{eq:pde31} \nabla^2C(\mathbf{r},t)=\frac{1}{D}\frac{\partial C(\mathbf{r},t)}{\partial t}$ where $\mathbf{r}$ is a vector representing the position in a particular coordinate system. This equation is known as Fick’s second law of diffusion, and its solution depends on the dimensionality of the problem (1D, 2D, 3D), and the initial and boundary conditions. If molecules are able to move in one dimension only (for example in a tube that is much longer than its diameter): $\label{eq:pde32} \frac{\partial^2C(x,t)}{\partial x^2}=\frac{1}{D}\frac{\partial C(x,t)}{\partial t}$ Let’s assume that the tube has a length $L$ and it is closed at both ends. Mathematically, this means that the flux of molecules at $x = 0$ and $x = L$ is zero. The flux is defined as the number of moles of substance that cross a $1 m^2$ area per second, and it is mathematically defined as $\label{eq:pde33} J=-D\frac{\partial C(x,t)}{\partial x}$ If the tube is closed, molecules at $x=0$ cannot move from the right side to the left side, and molecules at $x=L$ cannot move from the left side to the right side. Mathematically: $\label{eq:pde34} \frac{\partial C(0,t)}{\partial x}=\frac{\partial C(L,t)}{\partial x}=0$ These will be the boundary conditions we will use to solve the problem. We still need an initial condition, which in this case is the initial concentration profile: $C(x,0)=y(x)$. Notice that because of mass conservation, the integral of $C(x)$ should be constant at all times. We will not use this to solve the problem, but we can verify that the solution satisfies this requirement. Regardless of initial conditions, the concentration profile will be expressed as (Problem 12.3): $\label{eq:pde_35} C(x,t)=\sum\limits_{n=0}^{\infty}a_n\cos\left(\frac{n\pi}{L}x \right)e^{-\left(\frac{n\pi}{L}\right)^2Dt}$ In order to calculate the coefficients $a_n$, we need information about the initial concentration profile: $C(x,0)=y(x)$. $\label{eq:pde_36} C(x,0)=\sum\limits_{n=0}^{\infty}a_n\cos\left(\frac{n\pi}{L}x \right)=y(x)$ This expression tells us that the function $y(x)$ can be expressed as an infinite sum of cosines.
We know from Chapter 7 that an infinite sum of cosines like this represents an even periodic function of period $2L$, so in order to find the coefficients $a_n$, we need to construct the even periodic extension of the function $y(x)$ and find its Fourier series. For example, assume that the initial concentration profile is given by Figure $2$. Imagine that you have water in the right half-side of the tube, and a 1M solution of red dye on the left half-side. At time zero, you remove the barrier that separates the two halves, and watch the concentration evolve as a function of time. Before we calculate these concentration profiles, let’s think about what we expect at very long times, when the dye is allowed to fully mix with the water. We know that the same number of molecules present initially need to be re-distributed in the full length of the tube, so the concentration profile needs to be constant at 0.5M. It is a good idea to sketch what you imagine happens in between before going through the math and seeing the results. In order to calculate the coefficients $a_n$ that we need to complete Equation \ref{eq:pde_35}, we need to express $C(x,0)$ as an infinite sum of cosine functions, and therefore we need the even extension of the function: Let’s calculate the Fourier series of this periodic function of period $2L$ (Equation $7.2.1$, $f(x)=\dfrac{a_0}{2}+\sum_{n=1}^{\infty}a_n \cos\left ( \dfrac{n\pi x}{L} \right )+\sum_{n=1}^{\infty}b_n \sin\left ( \dfrac{n\pi x}{L} \right )$ ). $f(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n \cos\left ( \frac{n\pi x}{L} \right ) \nonumber$ $a_0=\frac{1}{L}\int_{-L}^{L}f(x)dx \nonumber$ $a_n=\frac{1}{L}\int_{-L}^{L}f(x)\cos{\left(\frac{n\pi x}{L} \right)}dx \nonumber$ Let’s assume $L=1cm$ (we will use $D$ in units of $cm^2/s$ and $t$ in seconds): $a_0=\frac{1}{L}\int_{-L}^{L}f(x)dx=\int_{-1/2}^{1/2}1dx=1 \nonumber$ $a_n=\int_{-1/2}^{1/2}1\cos{\left(n\pi x\right)}dx=\frac{1}{n\pi}\left.\begin{matrix}\sin(n\pi x)\end{matrix}\right|_{-1/2}^{1/2}=\frac{1}{n\pi}[\sin{(n\pi/2)}-\sin{(-n\pi/2)}] \nonumber$ because $\sin{x}$ is odd: $a_n=\frac{2}{n\pi}\sin{(n\pi/2)}\rightarrow a_1=\frac{2}{\pi}, a_2=0, a_3=-\frac{2}{3\pi}, a_4=0, a_5=\frac{2}{5\pi}... \nonumber$ and therefore: $C(x,0)=\frac{1}{2}+\frac{2}{\pi}\sum\limits_{n=0}^{\infty}(-1)^{n}\frac{1}{2n+1}\cos{[(2n+1)\pi x]} \nonumber$ and the complete description of $C(x,t)$ is (Equation \ref{eq:pde_35}): $\label{eq:pde_37} C(x,t)=\frac{1}{2}+\frac{2}{\pi}\sum\limits_{n=0}^{\infty}(-1)^{n}\frac{1}{2n+1}\cos{[(2n+1)\pi x]}e^{-[(2n+1)\pi]^2Dt}$ Let’s plot $C(x,t)$ at different times assuming $L=1cm$ and $D=6.5 \times 10^{-6}cm^2 s^{-1}$, which is the diffusion coefficient of sucrose (regular sugar) in water. Notice that the derivative $\frac{\partial C(x,t)}{\partial x}$ is zero at both $x=0$ and $x=L=1cm$ at all times, as should be the case given the boundary conditions. Also, the area under the curve is constant due to mass conservation. In addition, notice how long it takes for diffusion to mix the two halves of the tube! It would take about a day for the concentration to be relatively homogeneous in a 1-cm tube, which explains why it is a good idea to stir the sugar in your coffee with a spoon instead of waiting for diffusion to do the job. Diffusion is inefficient because molecules move in random directions, and each time they bump into a molecule of water they change their direction.
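If you want to generate these concentration profiles yourself, the following Mathematica sketch evaluates Equation \ref{eq:pde_37} truncated after a finite number of terms; the truncation at 200 terms and the three plotting times are choices made here, not part of the original derivation:

(* C(x,t) from the truncated Fourier sum; L = 1 cm, D = 6.5*10^-6 cm^2/s *)
d = 6.5*10^-6;
c[x_, t_, nmax_] := 1/2 + (2/Pi) Sum[(-1)^n Cos[(2 n + 1) Pi x]*
     Exp[-((2 n + 1) Pi)^2 d t]/(2 n + 1), {n, 0, nmax}]
Plot[{c[x, 0, 200], c[x, 3600, 200], c[x, 86400, 200]}, {x, 0, 1},
  PlotRange -> {0, 1}]  (* profiles at t = 0, one hour, and one day *)

Even after a full day the profile is only approximately flat, in agreement with the discussion above.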
Imagine that you need to walk from the corner of Apache and Rural to the corner of College and University Ave, and every time you take a step you throw a four-sided die to decide whether to move south, north, east or west. You might eventually get to your destination, but it will likely take you a long time.
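This random-walk picture is easy to simulate. The short Mathematica sketch below (the step count of 1000 is an arbitrary choice) generates a two-dimensional walk like the one just described and plots the path of the walker:

(* each step is one unit north, south, east or west, chosen at random *)
steps = RandomChoice[{{0, 1}, {0, -1}, {1, 0}, {-1, 0}}, 1000];
ListLinePlot[Accumulate[steps]]  (* the sequence of positions visited by the walker *)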
Problem $1$ Find $f(x, y)$. Note: In order to obtain all possible solutions in each case you will have to consider that the separation constant can be positive, negative, or zero. 1. $2 \frac{ \partial f}{ \partial x} + \frac{ \partial f}{ \partial y} = 0$ 2. $y \frac{ \partial f}{ \partial x} − x \frac{ \partial f}{ \partial y} = 0$ 3. $\frac{\partial^2 f}{ \partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0$ Problem $2$ Consider a uniform string under tension and of length $L$. At time $t = 0$ the string is displaced as shown in the figure and released. The displacement of the string from its horizontal position $(u(x, t))$ depends on both $x$ and $t$ and satisfies the following PDE: $\frac{ \partial^2 u(x, t)}{\partial x^2} = \frac{1}{v^2} \frac{\partial^2 u(x, t)}{\partial t^2}$ where $v$ is a constant that depends on the characteristics of the string. 1. Obtain an expression for $u(x, t)$. Note that most of the problem is solved in the book, but you still need to show ALL steps. 2. In the lab: Assume $v = 440 \text{m/s}, ~ A = 5 \text{cm}$ and $L = 12 \text{cm}$. Create a function $u(x, t)$ with the result of 1). Remember that you can’t plot an infinite sum, so you will have to truncate it when plotting in Mathematica. Use the function "Manipulate" to generate an animation of the vibrating string (a sample approach is sketched after this problem set). Be sure you run it slowly so you can see the motion. Problem $3$ Use the separation of variables method to obtain an expression for $C(x, t)$ for the system described in Section 12.4: $\frac{\partial^2 C(x,t)}{\partial x^2} = \frac{1}{D} \frac{\partial C(x,t)}{\partial t} \nonumber$ $\frac{\partial C(0,t)}{\partial x} = \frac{\partial C(L,t)}{\partial x} = 0 \nonumber$ The solution is: $C(x, t) = \sum_{n=0}^{\infty} a_n \cos \left( \frac{n \pi}{L} x \right)e^{− \left( \frac{n \pi}{L} \right)^2 Dt} \nonumber$ where the coefficients $a_n$ depend on the initial concentration profile (initial conditions). Problem $4$ Continue the previous problem and obtain the full expression of $C(x, t)$ using the initial condition shown in Figure $1$: Figure $1$: The initial concentration profile, $C(x, 0)$. The diagram at the bottom represents a cartoon of what the tube would look like at time zero, with higher concentration of red dye at the center and zero concentration at both ends. The even extension of this function is (compare with Figure $12.3.6$) Figure $2$: The even extension of y(x) (Figure $1$). In the lab: • Assume $L = 1 \text{cm}, ~ A = 1 \text{M}$, and $D = 6.5\times 10^{-10}\text{m}^2/\text{s}$ (the diffusion coefficient of glucose in water). Use Manipulate to create a movie that shows how $C(x)$ changes with time. • Plot $C(0, t)$ (that is, the concentration at the end of the tube as a function of time). How long does it take until the concentration reaches 0.1M? (give an approximate value). This should demonstrate the need to stir your coffee after adding sugar (i.e. waiting until sugar diffuses on its own would take too long for you to enjoy a hot cup of coffee). • Repeat the previous question assuming $L = 1 \mu \text{m}$ (the order of magnitude of the diameter of the nucleus of a cell). This should demonstrate that diffusion is an efficient mechanism for molecular transport inside small cells like bacteria, or inside the nucleus of larger cells.
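As a starting point for the lab part of Problem 2, here is a minimal Mathematica sketch. It assumes a triangular initial pluck peaked at $x=L/2$ (the figure is not reproduced here, so the coefficients $b_n=8A\sin(n\pi/2)/(n\pi)^2$ below follow from that assumed shape, not necessarily the one in the problem), and converts $v$ to cm/s so the units match $L$:

(* assumed triangular pluck at x = L/2: b_n = 8 A Sin[n Pi/2]/(n Pi)^2 *)
A = 5; L = 12; v = 44000;  (* cm, cm and cm/s *)
u[x_, t_, nmax_] := Sum[(8 A/(n Pi)^2) Sin[n Pi/2]*
    Sin[n Pi x/L] Cos[n Pi v t/L], {n, 1, nmax}]
Manipulate[Plot[u[x, t, 40], {x, 0, L}, PlotRange -> {-A, A}],
  {t, 0, 0.002, 10^-6}]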
Chapter Objectives • Learn how to calculate the determinant of a square matrix. • Understand how to solve systems of simultaneous linear equations using determinants. • Learn the properties of determinants. 13: Determinants The concept of determinants has its origin in the solution of simultaneous linear equations. In physical chemistry, they are an important tool in quantum mechanics. Suppose you want to solve the following system of two equations with two unknowns ($x$ and $y$): $a_1x+b_1y=c_1 \nonumber$ $a_2x+b_2y=c_2 \nonumber$ In order to find $y$, we could use the following general procedure: we multiply the first equation by $a_2$ and the second by $a_1$, and subtract one line from the other to cancel the term in $x$: $a_1x+b_1y=c_1\overset{\times a_2}{\rightarrow}a_1a_2x+b_1a_2y=c_1a_2 \nonumber$ $a_2x+b_2y=c_2\overset{\times a_1}{\rightarrow}a_1a_2x+b_2a_1y=c_2a_1 \nonumber$ $\left.\begin{matrix} a_1a_2x+b_1a_2y=c_1a_2\ a_1a_2x+b_2a_1y=c_2a_1 \end{matrix}\right\} \rightarrow (b_2a_1-b_1a_2)y=a_1c_2-a_2c_1\rightarrow y=\dfrac{a_1c_2-a_2c_1}{b_2a_1-b_1a_2} \nonumber$ We can follow the same strategy to find $x$: we multiply the first equation by $b_2$ and the second by $b_1$, and subtract one line from the other to cancel the term in $y$: $a_1x+b_1y=c_1\overset{\times b_2}{\rightarrow}a_1b_2x+b_1b_2y=c_1b_2 \nonumber$ $a_2x+b_2y=c_2\overset{\times b_1}{\rightarrow}b_1a_2x+b_2b_1y=c_2b_1 \nonumber$ $\left.\begin{matrix} a_1b_2x+b_1b_2y=c_1b_2\ b_1a_2x+b_2b_1y=c_2b_1 \end{matrix}\right\} \rightarrow (b_2a_1-b_1a_2)x=b_2c_1-b_1c_2\rightarrow x=\dfrac{b_2c_1-b_1c_2}{b_2a_1-b_1a_2} \nonumber$ We define a $2\times 2$ determinant as: $\begin{vmatrix} a &b \ c& d \end{vmatrix}=ad-cb \nonumber$ The determinant, which is denoted with two parallel bars, is a number. For example, $\begin{vmatrix} 3 &-1 \ 1/2& 2 \end{vmatrix}=3\times 2-(-1)\times 1/2=13/2 \nonumber$ Let’s look at the expressions we obtained for $x$ and $y$, and write them in terms of determinants: $x=\dfrac{b_2c_1-b_1c_2}{b_2a_1-b_1a_2}=\dfrac{\begin{vmatrix} c_1 &b_1 \ c_2& b_2 \end{vmatrix}}{\begin{vmatrix} a_1 &b_1 \ a_2& b_2 \end{vmatrix}} \nonumber$ $y=\dfrac{a_1c_2-a_2c_1}{b_2a_1-b_1a_2}=\dfrac{\begin{vmatrix} a_1 &c_1 \ a_2& c_2 \end{vmatrix}}{\begin{vmatrix} a_1 &b_1 \ a_2& b_2 \end{vmatrix}} \nonumber$ Let’s look at our equations, and see how these determinants are constructed from the coefficients.
$a_1x+b_1y=c_1 \nonumber$ $a_2x+b_2y=c_2 \nonumber$ The determinant in the denominator of both $x$ and $y$ is the determinant of the coefficients on the left-side of the equal sign: $\left.\begin{matrix} {\color{Red} a_1}x+{\color{Red} b_1}y=c_1\ {\color{Red} a_2}x+{\color{Red} b_2}y=c_2 \end{matrix}\right\} {\begin{vmatrix} {\color{Red} a_1} &{\color{Red} b_1} \ {\color{Red} a_2}& {\color{Red} b_2} \end{vmatrix}} \nonumber$ The numerator in the expression of $y$ is built by replacing the coefficients in the $y$-column with the coefficients on the right side of the equation: $\left.\begin{matrix} {\color{Red} a_1}x+ b_1y={\color{OliveGreen} c_1}\ {\color{Red} a_2}x+b_2y={\color{OliveGreen} c_2} \end{matrix}\right\} {\begin{vmatrix} {\color{Red} a_1} &{\color{OliveGreen} c_1} \ {\color{Red} a_2}& {\color{OliveGreen} c_2} \end{vmatrix}} \nonumber$ The numerator in the expression of $x$ is built by replacing the coefficients in the $x$-column with the coefficients on the right side of the equation: $\left.\begin{matrix} a_1x+ {\color{Red} b_1}y={\color{OliveGreen} c_1}\ a_2x+{\color{Red} b_2}y={\color{OliveGreen} c_2} \end{matrix}\right\} {\begin{vmatrix} {\color{OliveGreen} c_1} &{\color{Red} b_1} \ {\color{OliveGreen} c_2}&{\color{Red} b_2} \end{vmatrix}} \nonumber$ We can extend this idea to $n$ equations with $n$ unknowns ($x_1, x_2, x_3,...,x_n).$ $\begin{matrix} a_{11}x_1&+&a_{12}x_2 &+&\cdots&+&a_{1n}x_n&=b_1 \ a_{21}x_1&+& a_{22}x_2&+&\cdots&+&a_{2n}x_n &=b_2\ \vdots&&\vdots & &\ddots & &\vdots&\vdots\ a_{n1}x_1&+& a_{n2}x_2&+&\cdots&+&a_{nn}x_n &=b_n\ \end{matrix} \nonumber$ Note that we use two subscripts to identify the coefficients. The first refers to the row, and the second to the column. Let’s define the determinant $D$ as the determinant of the coefficients of the equation (the ones on the left side of the equal sign): $D=\begin{vmatrix} a_{11} &a_{12} &\cdots &a_{1n}\ a_{21}&a_{22} &\cdots &a_{2n} \ \vdots& \vdots &\ddots & \vdots\ a_{n1} &a_{n2} &\cdots &a_{nn} \end{vmatrix} \nonumber$ and let’s define the determinant $D_k$ as the one obtained from $D$ by replacement of the $kth$ column of $D$ by the column with elements $b_1, b_2...b_n$. For example, $D_2$ is $D_2=\begin{vmatrix} a_{11} &b_1 &\cdots &a_{1n}\ a_{21}&b_2 &\cdots &a_{2n} \ \vdots& \vdots &\ddots & \vdots\ a_{n1} &b_{n} &\cdots &a_{nn} \end{vmatrix} \nonumber$ The unknowns of the system of equations are calculated as: $x_1=\dfrac{D_1}{D}, x_2=\dfrac{D_2}{D},...,x_n=\dfrac{D_n}{D} \nonumber$ For example, let’s say we want to find $x,y$ and $z$ in the following system of equations: $2x+3y+8z=0 \nonumber$ $x-\dfrac{1}{2}y-3z=\dfrac{1}{2} \nonumber$ $-x-y-z=\dfrac{1}{2} \nonumber$ We can calculate the unknowns as; $x=\dfrac{D_1}{D},y=\dfrac{D_2}{D},z=\dfrac{D_3}{D} \nonumber$ where $D=\begin{vmatrix} 2 &3 & 8 \ 1 &-1/2 &-3 \ -1 &-1 &-1 \end{vmatrix} \nonumber$ $D_1=\begin{vmatrix} 0 &3 & 8 \ 1/2 &-1/2 &-3 \ 1/2 &-1 &-1 \end{vmatrix} \nonumber$ $D_2=\begin{vmatrix} 2 &0 & 8 \ 1 &1/2 &-3 \ -1 &1/2 &-1 \end{vmatrix} \nonumber$ $D_3=\begin{vmatrix} 2 &3 & 0 \ 1 &-1/2 &1/2 \ -1 &-1 &1/2 \end{vmatrix} \nonumber$ In order to do this, we need to learn how to solve $3\times3$ determinants, or in general, $n \times n$ determinants.
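Cramer’s rule is easy to check with software. The following Mathematica sketch evaluates the four determinants of this example with the built-in Det function, so you can verify your hand calculations (here and in Problem 13.1):

d  = Det[{{2, 3, 8}, {1, -1/2, -3}, {-1, -1, -1}}];
d1 = Det[{{0, 3, 8}, {1/2, -1/2, -3}, {1/2, -1, -1}}];
d2 = Det[{{2, 0, 8}, {1, 1/2, -3}, {-1, 1/2, -1}}];
d3 = Det[{{2, 3, 0}, {1, -1/2, 1/2}, {-1, -1, 1/2}}];
{d1, d2, d3}/d  (* the solutions x, y and z *)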
There are several techniques to calculate determinants, but if this topic is new to you, expanding along the first row is the easiest (although maybe not the most computationally efficient) way of doing it. A $3 \times 3$ determinant is calculated as: $\label{eq:determinants1} \begin{vmatrix} a_{11} &a_{12} & a_{13} \ a_{21} &a_{22} &a_{23} \ a_{31} &a_{32} &a_{33} \end{vmatrix}= a_{11}\begin{vmatrix} a_{22}&a_{23}\ a_{32}&a_{33} \end{vmatrix}- a_{12}\begin{vmatrix} a_{21}&a_{23}\ a_{31}&a_{33} \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}\ a_{31}&a_{32} \end{vmatrix}$ Notice that we multiply each entry in the first row by the determinant formed by what is left after deleting the corresponding row and column. In addition, notice that we alternate signs. Formally, the sign that corresponds to a particular entry $a_{ij}$ is $(-1)^{i+j}$, but if we use the first row we will always start with a “+” and alternate signs afterwards. A schematic of the procedure is shown in Figure $1$. We can use the same idea to calculate a determinant of any size. For example, for a $4\times 4$ determinant: $\begin{vmatrix} a_{11} &a_{12} & a_{13} &a_{14}\ a_{21} &a_{22} &a_{23} &a_{24}\ a_{31} &a_{32} &a_{33} &a_{34}\ a_{41} &a_{42} &a_{43} &a_{44}\ \end{vmatrix}= a_{11}\begin{vmatrix} a_{22}&a_{23}&a_{24}\ a_{32}&a_{33}&a_{34}\ a_{42}&a_{43}&a_{44} \end{vmatrix}- a_{12}\begin{vmatrix} a_{21}&a_{23}&a_{24}\ a_{31}&a_{33}&a_{34}\ a_{41}&a_{43}&a_{44} \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}&a_{24}\ a_{31}&a_{32}&a_{34}\ a_{41}&a_{42}&a_{44} \end{vmatrix}- a_{14}\begin{vmatrix} a_{21}&a_{22}&a_{23}\ a_{31}&a_{32}&a_{33}\ a_{41}&a_{42}&a_{43} \end{vmatrix} \label{4x4} \nonumber$ The $3\times 3$ determinants are then calculated using Equation \ref{eq:determinants1}. Example $1$ Find $x$ in the following system of equations: \begin{align*}2x+3y+8z &=0 \[4pt] x-\frac{1}{2}y-3z &=\frac{1}{2} \ -x-y-z &=\frac{1}{2}\end{align*} Solution We can calculate the $x$ as; $x=\frac{D_1}{D} \nonumber$ where $D=\begin{vmatrix} 2 &3 & 8 \ 1 &-1/2 &-3 \ -1 &-1 &-1 \end{vmatrix} \nonumber$ and $D_1=\begin{vmatrix} 0 &3 & 8 \ 1/2 &-1/2 &-3 \ 1/2 &-1 &-1 \end{vmatrix} \nonumber$ $D$ is a 3x3 determinant and can be expanded using Equation \ref{eq:determinants1}: \begin{align*} D &= 2\begin{vmatrix} -1/2&-3\ -1&-1 \end{vmatrix}- 3\begin{vmatrix} 1&-3\ -1&-1 \end{vmatrix}+ 8\begin{vmatrix} 1&-1/2\ -1&-1 \end{vmatrix} \[4pt] &=2\times(-5/2)-(3)\times(-4)+8\times(-3/2)=-5 \end{align*} The determinant $D_1$ is similarly expanded: \begin{align*} D_1 &= 0\begin{vmatrix} -1/2&-3\ -1&-1 \end{vmatrix}- 3\begin{vmatrix} 1/2&-3\ 1/2&-1 \end{vmatrix} + 8\begin{vmatrix} 1/2&-1/2\ 1/2&-1 \end{vmatrix} \[4pt] &=0\times(-5/2)-(3)\times(1)+8\times(-1/4)=-5 \end{align*} So, $\displaystyle{\color{Maroon}x=\frac{D_1}{D}=1} \nonumber$ To practice, finish this problem and obtain $y$ and $z$ (Problem 13.1). Example $2$ Show that a $3\times 3$ determinant that contains zeros below the principal diagonal (top left to bottom right) is the product of the diagonal elements. Solution We are asked to prove that $D=\begin{vmatrix} a &b&c \ 0&d &e \ 0& 0 &f \end{vmatrix}=adf \nonumber$ Because we have two zeros in the first column, it makes more sense to calculate the determinant by expanding along the first column instead of the first row.
Yet, if you feel uncomfortable doing this at this point, we can expand along the first row as we just learned: $D=a\times\begin{vmatrix} d &e\ 0&f\end{vmatrix}-b\times\begin{vmatrix} 0 &e\ 0&f\end{vmatrix}+c\times\begin{vmatrix} 0 &d\ 0&0\end{vmatrix}=a\times d\times f \nonumber$ The conclusion is true in any dimension. Need help? The links below contain solved examples. The determinant of a 3x3 matrix: http://tinyurl.com/n2a3uxw External links: 13.03: The Determinant as a Volume Before discussing the properties of determinants, it will be useful to note that a determinant represents the volume of a box. In two dimensions, the absolute value of a $2\times 2$ determinant represents the area of the parallelogram whose sides are the two columns (or rows) of the determinant. Suppose you have two vectors: $\vec{v_1}=(b,d)$ and $\vec{v_2}=(a,c)$ (see Figure $2$). The area of the parallelogram constructed with these two vectors as sides is the absolute value of the determinant whose columns are $\vec{v_1}$ and $\vec{v_2}$: $A= \begin{vmatrix} b&a\ d&c \end{vmatrix} \nonumber$ A geometrical proof of this statement is shown in Figure $1$. The sign of the determinant is related to the orientation of the parallelogram. If you extend your right hand, and use your thumb and index finger to represent the two vectors, the determinant will be positive if the vector along your thumb is in the first column and the vector along your index finger is in the second column, and will be negative if it is the other way around (Figure $2$). For a $3\times 3$ determinant, its absolute value represents the volume of the parallelepiped (“the box”) whose edges are the vectors that are the columns of the determinant (Figure $3$). This notion will help us understand and remember some useful properties of determinants. For example, we can readily conclude that a determinant that contains non-zero entries only in the main diagonal (top left to bottom right) is the product of the diagonal entries: $\begin{vmatrix} a &0&0 \ 0&b &0 \ 0& 0 &c \end{vmatrix}=abc \nonumber$ This is true because the columns represent vectors that are aligned with the $x$, $y$ and $z$ axes respectively, so the volume of the resulting box is the product of the dimensions along $x$, $y$ and $z$ (Figure $4$).
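The area interpretation is also easy to verify numerically. In this short Mathematica sketch the two vectors are arbitrary examples, and Transpose places them as the columns of the matrix:

v1 = {1, 2}; v2 = {3, 1};      (* two example vectors in the plane *)
Abs[Det[Transpose[{v1, v2}]]]  (* 5, the area of the parallelogram with sides v1 and v2 *)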
13.04: Properties of Determinants 1- $\label{eq:identity} \begin{vmatrix} 1 &0&0 \ 0&1 &0 \ 0& 0 &1 \end{vmatrix}=1$ This is true in any dimension, and can be understood easily from geometrical arguments. In two dimensions, the columns represent unit vectors along the $x$ and $y$ axis, and the parallelogram is therefore a square of unit area. In three dimensions, the columns represent unit vectors along the $x$, $y$ and $z$ axis, and the box is therefore a cube of unit volume. 2- Antisymmetry: If two rows (or two columns) of a determinant are interchanged, the value of the determinant is multiplied by -1. This property is extremely useful in quantum mechanics, so it is worth remembering! $\label{eq:determinants4} \begin{vmatrix} a &b & c \ d &e &f \ g &h &i \end{vmatrix}=- \begin{vmatrix} b&a &c \ e&d &f\ h &g &i \end{vmatrix}=- \begin{vmatrix} d&e &f \ a &b &c\ g &h &i \end{vmatrix}$ We already discussed this property in two dimensions (see Figure $13.3.3$). 3- Scalars can be factored out from rows and columns. $\label{eq:determinants5} \begin{vmatrix} a&b &c \ \lambda d &\lambda e &\lambda f\ g &h&i \end{vmatrix} = \lambda \begin{vmatrix} a &b & c \ d &e &f \ g &h &i \end{vmatrix}$ Geometrically speaking, if you multiply the length of one of the edges of the parallelepiped by $\lambda$, the volume is also multiplied by $\lambda$. 4- Addition rule: If all the elements of any row (or column) are written as the sum of two terms, then the determinant can be written as the sum of two determinants $\label{eq:determinants6} \begin{vmatrix} a+b&c &d \ e+f &g &h\ i+j &k &l \end{vmatrix} = \begin{vmatrix} a&c & d \ e& g &h\ i&k &l \end{vmatrix} + \begin{vmatrix} b&c &d \ f& g &h\ j&k &l \end{vmatrix}$ $\label{eq:determinants6b} \begin{vmatrix} a+b&c+d &e+f \ g &h &i\ j &k &l \end{vmatrix}= \begin{vmatrix} a&c & e \ g &h &i\ j &k &l \end{vmatrix}+ \begin{vmatrix} b&d &f \ g &h &i\ j &k &l \end{vmatrix}$ 5- The value of a determinant is zero if two rows or two columns are equal. This is a consequence of property 2. Exchanging the two identical rows is supposed to change the sign of the determinant, but we know that exchanging two identical rows does nothing to the determinant. Therefore, the determinant has to be zero. Geometrically, two edges of the box would be the same, rendering a flat box with no volume. $\label{eq:determinants3} \begin{vmatrix} {\color{red}a} &{\color{Red}a} &{\color{Blue} b} \ {\color{red}c} &{\color{red}c} &{\color{Blue}d} \ {\color{red}e} &{\color{red}e} &{\color{Blue}f} \end{vmatrix}=0$ 6- The value of a determinant is unchanged if one row (or column) is added to or subtracted from another. This property is a consequence of properties 4 and 5: $\label{eq:determinants7} \begin{vmatrix} a+b&b &c \ d+e & e & f\ g+h &h &i \end{vmatrix}= \begin{vmatrix} a&b &c \ d&e& f\ g&h&i \end{vmatrix}+ \begin{vmatrix} b&b&c \ e&e& f\ h&h&i \end{vmatrix}$ 7- A special case of property 3 is that if all the elements of a row or column are zero, the value of the determinant is zero. In geometrical terms, if one of the edges is a point, the volume is zero. $\label{eq:determinants6extra} \begin{vmatrix} a &b &c \ 0 &0 &0 \ d &e &f \end{vmatrix}=0$ 8- The value of a determinant is zero if one row (or column) is a multiple of another row (or column). Geometrically, this means that two edges of the parallelepiped lie on the same line, and therefore the volume is zero.
This is a consequence of properties 3 and 5: $\label{eq:determinants8} \begin{vmatrix} a&b &c \ \lambda a &\lambda b &\lambda c\ d &e &f \end{vmatrix}= \lambda\begin{vmatrix} a&b & c\ a &b&c \ d&e &f \end{vmatrix}=0$ 9- Transposition: the value of the determinant is unchanged if its rows and columns are interchanged. This property can be derived from the previous properties, although it is a little complicated for the level of this course. $\label{eq:determinants2} \begin{vmatrix} {\color{red}a}&{\color{OliveGreen}b} &{\color{Blue} c} \ {\color{red}d} &{\color{OliveGreen}e} &{\color{Blue}f} \ {\color{red}g} &{\color{OliveGreen}h} &{\color{Blue}i} \end{vmatrix}= \begin{vmatrix} {\color{red}a} &{\color{red}d} &{\color{red}g} \ {\color{OliveGreen}b} &{\color{OliveGreen}e} &{\color{OliveGreen}h} \ {\color{Blue}c}& {\color{Blue}f}&{\color{Blue}i} \end{vmatrix}$ Geometrically, interchanging rows and columns rotates the parallelogram (or the box in 3D) without changing the area (or volume). Figure $1$ shows that $\begin{vmatrix}b &a\ d& c \end{vmatrix}=\begin{vmatrix}b &d\ a& c \end{vmatrix} \nonumber$ Example $1$ Determine the value of the following determinant by inspection. $\begin{vmatrix} 2&6 &1 \ -4 &4 &-2\ 2 &-3 &1 \end{vmatrix} \nonumber$ Solution We notice that the third column is a multiple of the first: \begin{align*} \begin{vmatrix} 2&6 &1 \ -4 &4 &-2\ 2 &-3 &1 \end{vmatrix} &= \begin{vmatrix} 1\times 2&6 &1 \ -2 \times 2 &4 &-2\ 1 \times 2 &-3 &1 \end{vmatrix} \[4pt] &=2\times \begin{vmatrix} 1&6 &1 \ -2 &4 &-2\ 1 &-3 &1 \end{vmatrix} \[4pt] &=\displaystyle{\color{Maroon}0} \end{align*} \nonumber The determinant is zero because two of its columns are the same. 13.05: Problems Problem $1$ Finish the problem of Example $13.1.1$ and obtain $y$ and $z$. Problem $2$ Use determinants to solve the equations: A) $\begin{array}{c} x+y+z=6\ x+2y+3z=14\ x+4y+9z=36 \end{array} \nonumber$ B) $\begin{array}{c} x+iy-z=0\ ix+y+z=0\ x+2y-iz=1 \end{array} \nonumber$ Problem $3$ Show that a $3\times 3$ determinant that contains zeros above the principal diagonal is the product of the diagonal elements. $D=\begin{vmatrix} a &0&0 \ b&c &0 \ d& e &f \end{vmatrix}=acf \nonumber$ Problem $4$ Prove that $D=\begin{vmatrix} 1 &2&3 \ 2&3 &3 \ 3& 4 &3 \end{vmatrix}=0 \nonumber$ using the properties of determinants (that is, without calculating the determinant!). Clearly state the properties you use in each step. Problem $5$ In previous lectures, we discussed how to perform double and triple integrals in different coordinate systems. For instance, we learned that the area elements and volume elements are: 2D: Cartesian: $dA= dx.dy$ Polar: $dA=r. dr. d\theta$ 3D: Cartesian: $dV= dx.dy.dz$ Spherical: $dV=r^2.\sin\theta dr. d\theta d\phi$ In general, for any coordinate system, we can express the area (or volume) element in a new coordinate system using the Jacobian ($J$). For example, in polar coordinates in two dimensions: $dA=dx.dy=J. dr.d\theta \nonumber$ where the Jacobian is defined as: $J=\left | \begin{matrix} \frac{\partial x}{\partial r} &\frac{\partial x}{\partial \theta} \ \ \frac{\partial y}{\partial r} &\frac{\partial y}{\partial \theta} \end{matrix} \right | \nonumber$ a) Calculate the Jacobian in two-dimensional polar coordinates and show that $dA=r. dr. d\theta$. In spherical coordinates, $dV=dx.dy.dz=J. dr.d\theta.
d\phi \nonumber$ where $J=\left | \begin{matrix} \frac{\partial x}{\partial r} &\frac{\partial x}{\partial \theta}&\frac{\partial x}{\partial \phi} \ \ \frac{\partial y}{\partial r} &\frac{\partial y}{\partial \theta}&\frac{\partial y}{\partial \phi}\ \ \frac{\partial z}{\partial r} &\frac{\partial z}{\partial \theta}&\frac{\partial z}{\partial \phi}\ \end{matrix} \right | \nonumber$ b) Calculate the Jacobian in three-dimensional spherical coordinates and show that $dV=r^2.\sin\theta dr. d\theta d\phi \nonumber$
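If you want to check your hand calculations for this problem, the Jacobian determinants can be evaluated symbolically in Mathematica; in this sketch $t$ and $p$ stand for $\theta$ and $\phi$, and D[list, {{vars}}] builds the matrix of partial derivatives:

(* part a: polar coordinates *)
Simplify[Det[D[{r Cos[t], r Sin[t]}, {{r, t}}]]]  (* r *)
(* part b: spherical coordinates *)
Simplify[Det[D[{r Sin[t] Cos[p], r Sin[t] Sin[p], r Cos[t]}, {{r, t, p}}]]]  (* r^2 Sin[t] *)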
Objectives • Be able to perform operations with vectors: addition, subtraction, dot product, cross product. • Understand how to calculate the modulus of a vector, including vectors containing complex entries. • Understand how to normalize vectors. • 14.1: Introduction to Vectors A vector is a quantity that has both a magnitude and a direction, and as such they are used to specify the position, velocity and momentum of a particle, or to specify a force. • 14.2: The Scalar Product The scalar product of vectors u and v , also known as the dot product or inner product, is defined as (notice the dot between the symbols representing the vectors) u⋅v=|u||v|cosθ, where θ is the angle between the vectors. Notice that the dot product is zero if the two vectors are perpendicular to each other, and equals the product of their absolute values if they are parallel. • 14.3: The Vector Product The vector product of two vectors is a vector defined as u×v=|u||v|n sin θ, where θ is again the angle between the two vectors, and n is the unit vector perpendicular to the plane formed by u and v. The direction of the vector n is given by the right-hand rule. • 14.4: Vector Normalization A vector of any given length can be divided by its modulus to create a unit vector (i.e. a vector of unit length). • 14.5: Problems 14: Vectors In this chapter we will review a few concepts you probably know from your physics courses. This chapter does not intend to cover the topic in a comprehensive manner, but instead touch on a few concepts that you will use in your physical chemistry classes. A vector is a quantity that has both a magnitude and a direction, and as such they are used to specify the position, velocity and momentum of a particle, or to specify a force. Vectors are usually denoted by boldface symbols (e.g. $\mathbf{u}$) or with an arrow above the symbol (e.g. $\vec{u}$). A tilde placed above or below the name of the vector is also commonly used in shorthand ($\widetilde{u}$,$\underset{\sim}{u}$). If we multiply a number $a$ by a vector $\mathbf{v}$, we obtain a new vector that is parallel to the original but with a length that is $a$ times the length of $\mathbf{v}$. If $a$ is negative $a\mathbf{v}$ points in the opposite direction to $\mathbf{v}$. We can express any vector in terms of the so-called unit vectors. These vectors, which are designated $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$ and $\hat{\mathbf{k}}$, have unit length and point along the positive $x, y$ and $z$ axis of the cartesian coordinate system (Figure $1$). The symbol $\hat{\mathbf{i}}$ is read "i-hat". Hats are used to denote that a vector has unit length. The length of $\mathbf{u}$ is its magnitude (or modulus), and is usually denoted by $u$: $\label{eq:vectors1} u=|\mathbf{u}|=(u_x^2+u_y^2+u_z^2)^{1/2}$ If we have two vectors $\mathbf{u}=u_x\hat{\mathbf{i}}+u_y \hat{\mathbf{j}}+u_z \hat{\mathbf{k}}$ and $\mathbf{v}=v_x \hat{\mathbf{i}}+v_y \hat{\mathbf{j}}+v_z \hat{\mathbf{k}}$, we can add them to obtain $\mathbf{u}+\mathbf{v}=(u_x+v_x)\hat{\mathbf{i}}+(u_y+v_y)\hat{\mathbf{j}}+(u_z+v_z)\hat{\mathbf{k}} \nonumber$ or subtract them to obtain: $\mathbf{u}-\mathbf{v}=(u_x-v_x)\hat{\mathbf{i}}+(u_y-v_y)\hat{\mathbf{j}}+(u_z-v_z)\hat{\mathbf{k}} \nonumber$ When it comes to multiplication, we can perform the product of two vectors in two different ways. The first, which gives a scalar (a number) as the result, is called scalar product or dot product. The second, which gives a vector as a result, is called the vector (or cross) product.
Both are important operations in physical chemistry.
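These operations are straightforward to reproduce in Mathematica, where vectors are entered as lists of components; the vectors below are arbitrary examples:

u = {1, -2, 1}; v = {1/2, 0, -1/2};  (* components along i, j and k *)
u + v  (* {3/2, -2, 1/2} *)
u - v  (* {1/2, -2, 3/2} *)
-3 v   (* {-3/2, 0, 3/2}: parallel to v but pointing in the opposite direction *)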
The scalar product of vectors $\mathbf{u}$ and $\mathbf{v}$, also known as the dot product or inner product, is defined as (notice the dot between the symbols representing the vectors) $\mathbf{u}\cdot \mathbf{v}=|\mathbf{u}||\mathbf{v}|\cos \theta \nonumber$ where $\theta$ is the angle between the vectors. Notice that the dot product is zero if the two vectors are perpendicular to each other, and equals the product of their absolute values if they are parallel. It is easy to prove that $\mathbf{u}\cdot \mathbf{v}=u_xv_x+u_yv_y+u_zv_z \nonumber$ Example $1$ Show that the vectors \begin{align*} \mathbf{u_1} &=\dfrac{1}{\sqrt{3}}\hat{\mathbf{i}}+\dfrac{1}{\sqrt{3}}\hat{\mathbf{j}}+\dfrac{1}{\sqrt{3}}\hat{\mathbf{k}} \[4pt] \mathbf{u_2} &=\dfrac{1}{\sqrt{6}}\hat{\mathbf{i}}-\dfrac{2}{\sqrt{6}}\hat{\mathbf{j}}+\dfrac{1}{\sqrt{6}}\hat{\mathbf{k}} \[4pt] \mathbf{u_3} &=-\dfrac{1}{\sqrt{2}}\hat{\mathbf{i}}+\dfrac{1}{\sqrt{2}}\hat{\mathbf{k}} \end{align*} \nonumber are of unit length and are mutually perpendicular. Solution The lengths of the vectors are: \begin{align*} |\mathbf{u_1}|&=\left[\left(\dfrac{1}{\sqrt{3}}\right)^2+\left(\dfrac{1}{\sqrt{3}}\right)^2+\left(\dfrac{1}{\sqrt{3}}\right)^2\right]^{1/2}=\left[\dfrac{1}{3}+\dfrac{1}{3}+\dfrac{1}{3}\right]^{1/2}=1 \[4pt] |\mathbf{u_2}| &=\left[\left(\dfrac{1}{\sqrt{6}}\right)^2+\left(-\dfrac{2}{\sqrt{6}}\right)^2+\left(\dfrac{1}{\sqrt{6}}\right)^2\right]^{1/2}=\left[\dfrac{1}{6}+\dfrac{4}{6}+\dfrac{1}{6}\right]^{1/2}=1 \[4pt] |\mathbf{u_3}| &=\left[\left(-\dfrac{1}{\sqrt{2}}\right)^2+\left(\dfrac{1}{\sqrt{2}}\right)^2\right]^{1/2}=\left[\dfrac{1}{2}+\dfrac{1}{2}\right]^{1/2}=1 \end{align*} \nonumber To test if two vectors are perpendicular, we perform the dot product: \begin{align*} \mathbf{u_1}\cdot \mathbf{u_2}&=\left(\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{6}}-\dfrac{1}{\sqrt{3}}\dfrac{2}{\sqrt{6}}+\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{6}}\right)=0 \[4pt] \mathbf{u_1}\cdot \mathbf{u_3} &=\left(-\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{3}}\dfrac{1}{\sqrt{2}}\right)=0 \[4pt] \mathbf{u_2}\cdot \mathbf{u_3} &=\left(-\dfrac{1}{\sqrt{6}}\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{6}}\dfrac{1}{\sqrt{2}}\right)=0 \end{align*} \nonumber Therefore, we just proved that the three pairs are mutually perpendicular, and the three vectors have unit length. In other words, these vectors are the vectors $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$ and $\hat{\mathbf{k}}$ rotated in space. If the dot product of two vectors (of any dimension) is zero, we say that the two vectors are orthogonal. If the vectors have unit length, we say they are normalized. If two vectors are both normalized and they are orthogonal, we say they are orthonormal. The set of vectors shown in the previous example form an orthonormal set. These concepts also apply to vectors that contain complex entries, but how do we perform the dot product in this case? In general, the square of the modulus of a vector is $|\mathbf{u}|^2=\mathbf{u}\cdot \mathbf{u}=u_x^2+u_y^2+u_z^2. \nonumber$ However, this does not work correctly for complex vectors. The square of $i$ is -1, meaning that we risk obtaining a negative or complex value for $|\mathbf{u}|^2$. To address this issue, we introduce a more general version of the dot product: $\mathbf{u}\cdot \mathbf{v}=u_x^*v_x+u_y^*v_y+u_z^*v_z, \nonumber$ where the “$*$ ” refers to the complex conjugate.
Therefore, to calculate the modulus of a vector $\mathbf{u}$ that has complex entries, we use its complex conjugate: $|\mathbf{u}|^2=\mathbf{u}^*\cdot \mathbf{u} \nonumber$ Example $2$: Calculating the Modulus of a vector Calculate the modulus of the following vector: $\mathbf{u}=\hat{\mathbf{i}}+i \hat{\mathbf{j}} \nonumber$ Solution $|\mathbf{u}|^2=\mathbf{u}^*\cdot \mathbf{u}=(\hat{\mathbf{i}}-i \hat{\mathbf{j}})(\hat{\mathbf{i}}+i \hat{\mathbf{j}})=(1)(1)+(-i)(i)=2\rightarrow |\mathbf{u}|=\sqrt{2} \nonumber$ Analogously, if vectors contain complex entries, we can test whether they are orthogonal or not by checking the dot product $\mathbf{u}^*\cdot \mathbf{v}$. Example $3$: Confirming orthogonality Determine if the following pair of vectors are orthogonal (do not confuse the imaginary number $i$ with the unit vector $\hat{\mathbf{i}}$!) $\mathbf{u}=\hat{\mathbf{i}}+(1-i)\hat{\mathbf{j}} \nonumber$ and $\mathbf{v}=(1+i)\hat{\mathbf{i}}+\hat{\mathbf{j}} \nonumber$ Solution $\mathbf{u}^*\cdot \mathbf{v}=(\hat{\mathbf{i}}+(1+i)\hat{\mathbf{j}})((1+i)\hat{\mathbf{i}}+\hat{\mathbf{j}})=(1)(1+i)+(1+i)(1)=2+2i\neq 0 \nonumber$ Therefore, the vectors are not orthogonal. 14.03: The Vector Product The vector product of two vectors is a vector defined as $\mathbf{u}\times \mathbf{v}=|\mathbf{u}| |\mathbf{v}| \mathbf{n} \sin\theta \nonumber$ where $\theta$ is again the angle between the two vectors, and $\mathbf{n}$ is the unit vector perpendicular to the plane formed by $\mathbf{u}$ and $\mathbf{v}$. The direction of the vector $\mathbf{n}$ is given by the right-hand rule. Extend your right hand and point your index finger in the direction of $\mathbf{u}$ (the vector on the left side of the $\times$ symbol) and your middle finger in the direction of $\mathbf{v}$. The direction of $\mathbf{n}$, which determines the direction of $\mathbf{u}\times \mathbf{v}$, is the direction of your thumb. If you want to revert the multiplication, and perform $\mathbf{v}\times \mathbf{u}$, you need to point your index finger in the direction of $\mathbf{v}$ and your middle finger in the direction of $\mathbf{u}$ (still using the right hand!). The resulting vector will point in the opposite direction (Figure $1$). The magnitude of $\mathbf{u}\times \mathbf{v}$ is the product of the magnitudes of the individual vectors times $\sin \theta$. This magnitude has an interesting geometrical interpretation: it is the area of the parallelogram formed by the two vectors (Figure $1$). The cross product can also be expressed as a determinant: $\mathbf{u}\times \mathbf{v}= \begin{vmatrix} \hat{\mathbf{i}}&\hat{\mathbf{j}}&\hat{\mathbf{k}}\ u_x&u_y&u_z\ v_x&v_y&v_z\ \end{vmatrix} \nonumber$ Example $1$: Given $\mathbf{u}=-2 \hat{\mathbf{i}}+\hat{\mathbf{j}}+\hat{\mathbf{k}}$ and $\mathbf{v}=3 \hat{\mathbf{i}}-\hat{\mathbf{j}}+\hat{\mathbf{k}}$, calculate $\mathbf{w}=\mathbf{u}\times \mathbf{v}$ and verify that the result is perpendicular to both $\mathbf{u}$ and $\mathbf{v}$.
Solution \begin{align*} \mathbf{u}\times \mathbf{v} &= \begin{vmatrix} \hat{\mathbf{i}}&\hat{\mathbf{j}}&\hat{\mathbf{k}}\ u_x&u_y&u_z\ v_x&v_y&v_z\ \end{vmatrix}=\begin{vmatrix} \hat{\mathbf{i}}&\hat{\mathbf{j}}&\hat{\mathbf{k}}\ -2&1&1\ 3&-1&1\ \end{vmatrix} \[4pt] &=\hat{\mathbf{i}}(1+1)-\hat{\mathbf{j}}(-2-3)+\hat{\mathbf{k}}(2-3) \[4pt] &=\displaystyle{\color{Maroon}2 \hat{\mathbf{i}}+5 \hat{\mathbf{j}}-\hat{\mathbf{k}}} \end{align*} \nonumber To verify that two vectors are perpendicular we perform the dot product: $\mathbf{u} \cdot \mathbf{w}=(-2)(2)+(1)(5)+(1)(-1)=0 \nonumber$ $\mathbf{v} \cdot \mathbf{w}=(3)(2)+(-1)(5)+(1)(-1)=0 \nonumber$ An important application of the cross product involves the definition of the angular momentum. If a particle with mass $m$ moves with velocity $\mathbf{v}$ (a vector), its (linear) momentum is $\mathbf{p}=m\mathbf{v}$. Let $\mathbf{r}$ be the position of the particle (another vector), then the angular momentum of the particle is defined as $\mathbf{l}=\mathbf{r}\times\mathbf{p} \nonumber$ The angular momentum is therefore a vector perpendicular to both $\mathbf{r}$ and $\mathbf{p}$. Because the position of the particle needs to be defined with respect to a particular origin, this origin needs to be specified when defining the angular momentum. 14.04: Vector Normalization A vector of any given length can be divided by its modulus to create a unit vector (i.e. a vector of unit length). We will see applications of unit (or normalized) vectors in the next chapter. For example, the vector $\mathbf{u}=\hat{\mathbf{i}}+\hat{\mathbf{j}}+i\hat{\mathbf{k}} \nonumber$ has a magnitude: $|\mathbf{u}|^2=1^2+1^2+(-i)(i)=3\rightarrow |\mathbf{u}|=\sqrt{3} \nonumber$ Therefore, to normalize this vector we divide all the components by its length: $\hat{\mathbf{u}}=\frac{1}{\sqrt{3}}\hat{\mathbf{i}}+\frac{1}{\sqrt{3}}\hat{\mathbf{j}}+\frac{i}{\sqrt{3}}\hat{\mathbf{k}} \nonumber$ Notice that we use the “hat” to indicate that the vector has unit length. Need help? The links below contain solved examples. Operations with vectors: http://tinyurl.com/mw4qmz8 External links: 14.05: Problems Problem $1$ Given the following vectors in 3D: \begin{aligned} \mathbf{v_1}=\hat{\mathbf{i}}-2 \hat{\mathbf{j}}+\hat{\mathbf{k}}\ \mathbf{v_2}=\frac{1}{2}\hat{\mathbf{i}}-\frac{1}{2}\hat{\mathbf{k}}\ \mathbf{v_3}=i \hat{\mathbf{i}}+\hat{\mathbf{j}}+\hat{\mathbf{k}}\ \mathbf{v_4}=-\hat{\mathbf{i}}+i \hat{\mathbf{j}}+\hat{\mathbf{k}} \end{aligned} \nonumber Calculate: 1. $\mathbf{v_1}-3\mathbf{v_2}$ 2. $\mathbf{v_3}+\frac{1}{2}\mathbf{v_4}$ 3. $\mathbf{v_1}\cdot\mathbf{v_2}$ 4. $\mathbf{v_3}\cdot\mathbf{v_4}$ 5. $\mathbf{v_1}\cdot\mathbf{v_3}$ 6. $\mathbf{v_1}\times\mathbf{v_2}$ 7. $|\mathbf{v_1}|$ 8. $|\mathbf{v_2}|$ 9. $|\mathbf{v_3}|$ 10. $|\mathbf{v_4}|$ 11. $\mathbf{\hat{v}_2}$ 12. $\mathbf{\hat{v}_4}$ What is the angle between $\mathbf{v_1}$ and $\mathbf{v_2}$? Are $\mathbf{v_3}$ and $\mathbf{v_4}$ orthogonal? Write a vector orthogonal to both $\mathbf{v_1}$ and $\mathbf{v_2}$.
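The cross-product example above, and the handling of complex entries, can both be checked in Mathematica (Cross, Dot and Conjugate are built-in; the vector c below is an arbitrary complex example):

u = {-2, 1, 1}; v = {3, -1, 1};
w = Cross[u, v]  (* {2, 5, -1} *)
{u.w, v.w}       (* {0, 0}: w is perpendicular to both u and v *)
c = {1, 1 - I, 0};    (* a vector with complex entries *)
Sqrt[Conjugate[c].c]  (* its modulus, Sqrt[3] *)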
Chapter Objectives • Learn the nomenclature used in linear algebra to describe matrices (rows, columns, triangular matrices, diagonal matrices, trace, transpose, singularity, etc). • Learn how to add, subtract and multiply matrices. • Learn the concept of inverse. • Understand the use of matrices as symmetry operators. • Understand the concept of orthogonality. • Understand how to calculate the eigenvalues and normalized eigenvectors of a 2 × 2 matrix. • Understand the concept of Hermitian matrix • 15.1: Definitions Some types of matrices have special names. • 15.2: Matrix Addition The sum of two matrices A and B (of the same dimensions) is a new matrix of the same dimensions, C = A+ B. The sum is defined by adding entries with the same indices. • 15.3: Matrix Multiplication If A has dimensions m×n and B has dimensions n×p , then the product AB is defined, and has dimensions m×p . • 15.4: Symmetry Operators A symmetry operation, such as a rotation around a symmetry axis or a reflection through a plane, is an operation that, when performed on an object, results in a new orientation of the object that is indistinguishable from the original. • 15.5: Matrix Inversion The inverse of a square matrix A , sometimes called a reciprocal matrix, is a matrix A−1 such that AA−1=I , where I is the identity matrix. • 15.6: Orthogonal Matrices A nonsingular matrix is called orthogonal when its inverse is equal to its transpose. • 15.7: Eigenvalues and Eigenvectors Since square matrices are operators, it should not surprise you that we can determine its eigenvalues and eigenvectors. The eigenvectors are analogous to the eigenfunctions we discussed for quantum mechanics. • 15.8: Hermitian Matrices A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose. Hermitian matrices are a generalization of the symmetric real matrices we just talked about, and they also have real eigenvalues, and eigenvectors that form a mutually orthogonal set. • 15.9: Problems 15: Matrices An $m\times n$ matrix $\mathbf{A}$ is a rectangular array of numbers with $m$ rows and $n$ columns. The numbers $m$ and $n$ are the dimensions of $\mathbf{A}$. The numbers in the matrix are called its entries. The entry in row $i$ and column $j$ is called $a_{ij}$. Some types of matrices have special names: • A square matrix:$\begin{pmatrix} 3 &-2 &4 \ 5 &3i &3 \ -i & 1/2 &9 \end{pmatrix} \nonumber$ with $m=n$ • A rectangular matrix:$\begin{pmatrix} 3 &-2 &4 \ 5 &3i &3 \end{pmatrix}\nonumber$ with $m\neq n$ • A column vector:$\begin{pmatrix} 3 \ 5\ -i \end{pmatrix}\nonumber$ with $n=1$ • A row vector:$\begin{pmatrix} 3 &-2 &4 \ \end{pmatrix}\nonumber$ with $m=1$ • The identity matrix:$\begin{pmatrix} 1 &0 &0 \ 0 &1 &0 \ 0&0 &1 \end{pmatrix}\nonumber$ with $a_{ij}=\delta_{i,j}$, where $\delta_{i,j}$ is a function defined as $\delta_{i,j}=1$ if $i=j$ and $\delta_{i,j}=0$ if $i\neq j$. • A diagonal matrix:$\begin{pmatrix} a &0 &0 \ 0 &b &0 \ 0&0 &c \end{pmatrix}\nonumber$ with $a_{ij}=c_i \delta_{i,j}$. • An upper triangular matrix:$\begin{pmatrix} a &b &c \ 0 &d &e \ 0&0 &f \end{pmatrix}\nonumber$ All the entries below the main diagonal are zero. • A lower triangular matrix:$\begin{pmatrix} a &0 &0 \ b &c &0 \ d&e &f \end{pmatrix}\nonumber$ All the entries above the main diagonal are zero. • A triangular matrix is one that is either lower triangular or upper triangular. 
The Trace of a Matrix The trace of an $n\times n$ square matrix $\mathbf{A}$ is the sum of the diagonal elements, and is formally defined as $Tr( \mathbf{A})=\sum_{i=1}^{n}a_{ii}$. For example, $\mathbf{A}=\begin{pmatrix} 3 &-2 &4 \ 5 &3i &3 \ -i & 1/2 &9 \end{pmatrix}\; ; Tr(\mathbf{A})=12+3i \nonumber$ Singular and Nonsingular Matrices A square matrix with nonzero determinant is called nonsingular. A matrix whose determinant is zero is called singular. (Note that you cannot calculate the determinant of a non-square matrix). The Matrix Transpose The matrix transpose, most commonly written $\mathbf{A}^T$, is the matrix obtained by exchanging $\mathbf{A}$’s rows and columns. It is obtained by replacing all elements $a_{ij}$ with $a_{ji}$. For example: $\mathbf{A}=\begin{pmatrix} 3 &-2 &4 \ 5 &3i &3 \end{pmatrix}\rightarrow \mathbf{A}^T=\begin{pmatrix} 3 &5\ -2 &3i\ 4&3 \end{pmatrix} \nonumber$ 15.02: Matrix Addition The sum of two matrices $\mathbf{A}$ and $\mathbf{B}$ (of the same dimensions) is a new matrix of the same dimensions, $\mathbf{C}$ = $\mathbf{A}$+ $\mathbf{B}$. The sum is defined by adding entries with the same indices: $c_{ij}=a_{ij}+b_{ij}$. $\begin{pmatrix} 3 &-2 &4 \ 5 &3i &3 \ -i & 1/2 &9 \end{pmatrix}+\begin{pmatrix} 0 &2 &1 \ -4 &-2i &i\ -i & 1/2 &-5 \end{pmatrix}=\begin{pmatrix} 3 &0 &5 \ 1 &i &3+i\ -2i & 1 &4 \end{pmatrix} \nonumber$ Need help? The link below contains solved examples: Matrix addition: http://tinyurl.com/m5skvpy External links:
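The trace, the transpose and matrix addition are all one-liners in Mathematica; the matrices below are the ones used in the examples of this section:

a = {{3, -2, 4}, {5, 3 I, 3}, {-I, 1/2, 9}};
Tr[a]  (* 12 + 3 I *)
Transpose[{{3, -2, 4}, {5, 3 I, 3}}]  (* exchanges rows and columns *)
a + {{0, 2, 1}, {-4, -2 I, I}, {-I, 1/2, -5}}  (* entry-by-entry sum *)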
If $\mathbf{A}$ has dimensions $m\times n$ and $\mathbf{B}$ has dimensions $n\times p$, then the product $\mathbf{AB}$ is defined, and has dimensions $m\times p$. The entry $(ab)_{ij}$ is obtained by multiplying row $i$ of $\mathbf{A}$ by column $j$ of $\mathbf{B}$, which is done by multiplying corresponding entries together and then adding the results: Example $1$ Calculate the product $\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} 1 &0 \ 5 &3 \ -1 &0 \end{pmatrix} \nonumber$ Solution We need to multiply a $3\times 3$ matrix by a $3\times 2$ matrix, so we expect a $3\times 2$ matrix as a result. $\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} 1 &0 \ 5 &3 \ -1 &0 \end{pmatrix}=\begin{pmatrix} a&b \ c&d \ e &f \end{pmatrix} \nonumber$ To calculate $a$, which is entry (1,1), we use row 1 of the matrix on the left and column 1 of the matrix on the right: $\begin{pmatrix} {\color{red}1} &{\color{red}-2} &{\color{red}4} \ 5 &0 &3 \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} {\color{red}1} &0 \ {\color{red}5} &3 \ {\color{red}-1} &0 \end{pmatrix}=\begin{pmatrix} {\color{red}a}&b \ c&d \ e &f \end{pmatrix}\rightarrow a=1\times 1+(-2)\times 5+4\times (-1)=-13 \nonumber$ To calculate $b$, which is entry (1,2), we use row 1 of the matrix on the left and column 2 of the matrix on the right: $\begin{pmatrix} {\color{red}1} &{\color{red}-2} &{\color{red}4} \ 5 &0 &3 \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} 1&{\color{red}0} \ 5&{\color{red}3} \ -1&{\color{red}0} \end{pmatrix}=\begin{pmatrix} a&{\color{red}b} \ c&d \ e &f \end{pmatrix}\rightarrow b=1\times 0+(-2)\times 3+4\times 0=-6 \nonumber$ To calculate $c$, which is entry (2,1), we use row 2 of the matrix on the left and column 1 of the matrix on the right: $\begin{pmatrix} 1&-2&4\ {\color{red}5} &{\color{red}0} &{\color{red}3} \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} {\color{red}1} &0 \ {\color{red}5} &3 \ {\color{red}-1} &0 \end{pmatrix}=\begin{pmatrix} a&b \ {\color{red}c}&d \ e &f \end{pmatrix}\rightarrow c=5\times 1+0\times 5+3\times (-1)=2 \nonumber$ To calculate $d$, which is entry (2,2), we use row 2 of the matrix on the left and column 2 of the matrix on the right: $\begin{pmatrix} 1&-2&4\ {\color{red}5} &{\color{red}0} &{\color{red}3} \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} 1&{\color{red}0} \ 5&{\color{red}3} \ -1&{\color{red}0} \end{pmatrix}=\begin{pmatrix} a&b \ c&{\color{red}d} \ e &f \end{pmatrix}\rightarrow d=5\times 0+0\times 3+3\times 0=0 \nonumber$ To calculate $e$, which is entry (3,1), we use row 3 of the matrix on the left and column 1 of the matrix on the right: $\begin{pmatrix} 1&-2&4\ 5&0&3 \ {\color{red}0} &{\color{red}1/2} &{\color{red}9} \end{pmatrix}\begin{pmatrix} {\color{red}1} &0 \ {\color{red}5} &3 \ {\color{red}-1} &0 \end{pmatrix}=\begin{pmatrix} a&b \ c&d \ {\color{red}e} &f \end{pmatrix}\rightarrow e=0\times 1+1/2\times 5+9\times (-1)=-13/2 \nonumber$ To calculate $f$, which is entry (3,2), we use row 3 of the matrix on the left and column 2 of the matrix on the right: $\begin{pmatrix} 1&-2&4\ 5&0&3 \ {\color{red}0} &{\color{red}1/2} &{\color{red}9} \end{pmatrix}\begin{pmatrix} 1&{\color{red}0} \ 5&{\color{red}3} \ -1&{\color{red}0} \end{pmatrix}=\begin{pmatrix} a&b \ c&d \ e&{\color{red}f} \end{pmatrix}\rightarrow f=0\times 0+1/2\times 3+9\times 0=3/2 \nonumber$ The result is: $\displaystyle{\color{Maroon}\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ 0 & 1/2 &9 \end{pmatrix}\begin{pmatrix} 1 &0 \ 5 &3 \ -1 &0 \end{pmatrix}=\begin{pmatrix} -13&-6 \ 2&0 \ -13/2 
&3/2 \end{pmatrix}} \nonumber$ Example $2$ Calculate $\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ \end{pmatrix}\begin{pmatrix} 1 \ 5 \ -1 \end{pmatrix}\nonumber$ Solution We are asked to multiply a $2\times 3$ matrix by a $3\times 1$ matrix (a column vector). The result will be a $2\times 1$ matrix (a vector). $\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ \end{pmatrix}\begin{pmatrix} 1 \ 5 \ -1 \end{pmatrix}=\begin{pmatrix} a \ b \end{pmatrix}\nonumber$ $a=1\times1+(-2)\times 5+ 4\times (-1)=-13\nonumber$ $b=5\times1+0\times 5+ 3\times (-1)=2\nonumber$ The solution is: $\displaystyle{\color{Maroon}\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ \end{pmatrix}\begin{pmatrix} 1 \ 5 \ -1 \end{pmatrix}=\begin{pmatrix} -13 \ 2 \end{pmatrix}}\nonumber$ Need help? The link below contains solved examples: Multiplying matrices of different shapes (three examples): http://tinyurl.com/kn8ysqq External links: The Commutator Matrix multiplication is not, in general, commutative. For example, we can perform $\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ \end{pmatrix}\begin{pmatrix} 1 \ 5 \ -1 \end{pmatrix}=\begin{pmatrix} -13 \ 2 \end{pmatrix} \nonumber$ but cannot perform $\begin{pmatrix} 1 \ 5 \ -1 \end{pmatrix}\begin{pmatrix} 1 &-2 &4 \ 5 &0 &3 \ \end{pmatrix} \nonumber$ Even with square matrices, which can be multiplied both ways, multiplication is not commutative. In this case, it is useful to define the commutator, defined as: $[\mathbf{A},\mathbf{B}]=\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A} \nonumber$ Example $3$ Given $\mathbf{A}=\begin{pmatrix} 3&1 \ 2&0 \end{pmatrix}$ and $\mathbf{B}=\begin{pmatrix} 1&0 \ -1&2 \end{pmatrix}$ Calculate the commutator $[\mathbf{A},\mathbf{B}]$ Solution $[\mathbf{A},\mathbf{B}]=\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}\nonumber$ $\mathbf{A}\mathbf{B}=\begin{pmatrix} 3&1 \ 2&0 \end{pmatrix}\begin{pmatrix} 1&0 \ -1&2 \end{pmatrix}=\begin{pmatrix} 3\times 1+1\times (-1)&3\times 0 +1\times 2 \ 2\times 1+0\times (-1)&2\times 0+ 0\times 2 \end{pmatrix}=\begin{pmatrix} 2&2 \ 2&0 \end{pmatrix}\nonumber$ $\mathbf{B}\mathbf{A}=\begin{pmatrix} 1&0 \ -1&2 \end{pmatrix}\begin{pmatrix} 3&1 \ 2&0 \end{pmatrix}=\begin{pmatrix} 1\times 3+0\times 2&1\times 1 +0\times 0 \ -1\times 3+2\times 2&-1\times 1+2\times 0 \end{pmatrix}=\begin{pmatrix} 3&1 \ 1&-1 \end{pmatrix}\nonumber$ $[\mathbf{A},\mathbf{B}]=\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}=\begin{pmatrix} 2&2 \ 2&0 \end{pmatrix}-\begin{pmatrix} 3&1 \ 1&-1 \end{pmatrix}=\begin{pmatrix} -1&1 \ 1&1 \end{pmatrix}\nonumber$ $\displaystyle{\color{Maroon}[\mathbf{A},\mathbf{B}]=\begin{pmatrix} -1&1 \ 1&1 \end{pmatrix}}\nonumber$ Multiplication of a vector by a scalar The multiplication of a vector $\vec{v_1}$ by a scalar $n$ produces another vector of the same dimensions that lies in the same direction as $\vec{v_1}$; $n\begin{pmatrix} x \ y \end{pmatrix}=\begin{pmatrix} nx \ ny \end{pmatrix} \nonumber$ The scalar can stretch or compress the length of the vector (and reverse its sense if it is negative), but cannot rotate it (Figure $2$). Multiplication of a square matrix by a vector The multiplication of a vector $\vec{v_1}$ by a square matrix produces another vector with the same dimensions as $\vec{v_1}$.
For example, we can multiply a $2\times 2$ matrix and a 2-dimensional vector: $\begin{pmatrix} a&b \ c&d \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=\begin{pmatrix} ax+by \ cx+dy \end{pmatrix} \nonumber$ For example, consider the matrix $\mathbf{A}=\begin{pmatrix} -2 &0 \ 0 &1 \end{pmatrix} \nonumber$ The product $\begin{pmatrix} -2&0 \ 0&1 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix} \nonumber$ is $\begin{pmatrix} -2x \ y \end{pmatrix} \nonumber$ We see that $2\times 2$ matrices act as operators that transform one 2-dimensional vector into another 2-dimensional vector. This particular matrix keeps the value of $y$ constant and multiplies the value of $x$ by -2 (Figure $3$). Notice that matrices are useful ways of representing operators that change the orientation and size of a vector. An important class of operators that are of particular interest to chemists are the so-called symmetry operators.
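In Mathematica the dot (.) performs matrix multiplication, so the commutator of Example 3 and the operator just described each take one line:

a = {{3, 1}, {2, 0}}; b = {{1, 0}, {-1, 2}};
a.b - b.a  (* the commutator [a, b]: {{-1, 1}, {1, 1}} *)
{{-2, 0}, {0, 1}}.{x, y}  (* {-2 x, y}: the matrix acts as an operator on the vector *)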
The symmetry of molecules is essential for understanding the structures and properties of organic and inorganic compounds. The properties of chemical compounds are often easily explained by consideration of symmetry. For example, the symmetry of a molecule determines whether the molecule has a permanent dipole moment or not. The theories that describe optical activity, infrared and ultraviolet spectroscopy, and crystal structure involve the application of symmetry considerations. Matrix algebra is the most important mathematical tool in the description of symmetry. A symmetry operation, such as a rotation around a symmetry axis or a reflection through a plane, is an operation that, when performed on an object, results in a new orientation of the object that is indistinguishable from the original. For example, if we rotate a square in the plane by $\pi/2$ or $\pi$ the new orientation of the square is superimposable on the original one (Figure $1$). If rotation by an angle $\theta$ of a molecule (or object) about some axis results in an orientation of the molecule (or object) that is superimposable on the original, the axis is called a rotation axis. The molecule (or object) is said to have an $n$-fold rotational axis, where $n$ is $2\pi/\theta$. The axis is denoted as $C_n$. The square of Figure $1$ has a $C_4$ axis perpendicular to the plane because a $90^{\circ}$ rotation leaves the figure indistinguishable from the initial orientation. This axis is also a $C_2$ axis because a $180^{\circ}$ rotation leaves the square indistinguishable from the original square. In addition, the figure has several other $C_2$ axes that lie on the same plane as the square. A symmetry operation moves all the points of the object from one initial position to a final position, and that means that symmetry operators are $3\times 3$ square matrices (or $2\times 2$ in two dimensions). The following equation represents the action of a symmetry operator $\hat A$ on the location of the point $(x,y,z)$ (a vector): $\hat A (x,y,z)=(x',y',z') \nonumber$ The vector $(x',y',z')$ represents the location of the point after the symmetry operation. Let’s come back to the rotation axes we discussed before. A 2-fold rotation around the $z-$axis changes the location of a point $(x,y,z)$ to $(-x,-y,z)$ (see Figure $2$). By convention, rotations are always taken in the counterclockwise direction. What is the matrix that represents the operator $\hat {C^z_2}$? The matrix transforms the vector $(x,y,z)$ into $(-x,-y,z)$, so $\hat {C^z_2}(x,y,z)=(-x,-y,z) \nonumber$ $\begin{pmatrix} a_{11}&a_{12}&a_{13} \ a_{21}&a_{22}&a_{23} \ a_{31}&a_{32}&a_{33} \end{pmatrix}\begin{pmatrix} x \ y \ z \end{pmatrix}=\begin{pmatrix} -x \ -y \ z \end{pmatrix}$ We know the matrix is a $3\times 3$ square matrix because it needs to multiply a 3-dimensional vector. In addition, we write the vector as a vertical column to satisfy the requirements of matrix multiplication. $\begin{pmatrix} a_{11}&a_{12}&a_{13} \ a_{21}&a_{22}&a_{23} \ a_{31}&a_{32}&a_{33} \end{pmatrix}\begin{pmatrix} x \ y \ z \end{pmatrix} \nonumber$ $a_{11}x+a_{12}y+a_{13}z=-x \nonumber$ $a_{21}x+a_{22}y+a_{23}z=-y \nonumber$ $a_{31}x+a_{32}y+a_{33}z=z \nonumber$ and we conclude that $a_{11}=-1$, $a_{12}=a_{13}=0$, $a_{22}=-1$, $a_{21}=a_{23}=0$ and $a_{33}=1$, $a_{31}=a_{32}=0$: $\hat{C^z_2}=\begin{pmatrix} -1&0&0 \ 0&-1&0 \ 0&0&1 \end{pmatrix} \nonumber$ Rotations are not the only symmetry operations we can perform on a molecule.
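A quick check in Mathematica confirms both the action of this operator and the fact that applying it twice restores every point (here x, y and z are left symbolic):

c2z = {{-1, 0, 0}, {0, -1, 0}, {0, 0, 1}};
c2z.{x, y, z}  (* {-x, -y, z} *)
MatrixPower[c2z, 2]  (* the identity matrix: two successive C2 rotations undo each other *)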
Figure $3$ illustrates the reflection of a point through the $xz$ plane. This operation transforms the vector $(x,y,z)$ into the vector $(x,-y,z)$. Symmetry operators involving reflections through a plane are usually denoted with the letter $\sigma$, so the operator that reflects a point through the $xz$ plane is $\hat{\sigma}_{xz}$: $\hat{\sigma}_{xz}(x,y,z)=(x,-y,z) \nonumber$ Following the same logic we used for the rotation matrix, we can write the $\hat{\sigma}_{xz}$ operator as: $\hat{\sigma}_{xz}=\begin{pmatrix} 1&0&0 \ 0&-1&0 \ 0&0&1 \end{pmatrix} \nonumber$ This is true because $\begin{pmatrix} 1&0&0 \ 0&-1&0 \ 0&0&1 \end{pmatrix}\begin{pmatrix} x \ y \ z \end{pmatrix}=\begin{pmatrix} x \ -y \ z \end{pmatrix} \nonumber$ As we said before, the symmetry properties of molecules are essential for understanding the structures and properties of organic and inorganic compounds. For example, those of you who took organic chemistry know that molecules that have an inversion center (Problem 15.3) do not have permanent dipole moments. The symmetry of molecules is also related to their ability to absorb light. Figure $4$ shows the three symmetry elements of the molecule of water (H$_2$O). This molecule has only one rotation axis, which is 2-fold, and therefore we call it a “$C_2$ axis”. It also has two mirror planes, one that contains the two hydrogen atoms ($\sigma_{yz}$), and another one perpendicular to it ($\sigma_{xz}$). Both planes contain the C$_2$ axis. As you will learn in your inorganic chemistry course, chemists organize molecules that share the same symmetry elements under a common group. For example, the group that contains molecules with these three symmetry elements is called “the $C_{2v}$ group”. Because the inversion operation (Problem 15.3) is not part of this group we know that all $C_{2v}$ molecules are polar. The molecule of methane (CH$_4$) has several symmetry elements, some of which we have not learned in this chapter. One that is relatively easy to identify is the C$_3$ rotation axis (Figure $5$). Can we write the matrix for the operator that corresponds to the 3-fold rotation? If we look at the molecule from the top, so the $z$-axis is perpendicular to the plane of the paper (or screen if you are reading this on a computer), we see that the rotation moves one hydrogen atom from one vertex of an equilateral triangle to the other one in a counterclockwise fashion. Therefore, we need a matrix that will move these vertices as shown in the figure. We need the coordinates of the three vertices, which can be obtained from simple geometrical arguments. If we place the green vertex at $x=0$ and $y=h$, then the position of the magenta vertex is $x=h\times \cos 30^{\circ}$ and $y=-h\times \sin 30^{\circ}$ and the position of the orange vertex is $x=-h\times \cos 30^{\circ}$ and $y=-h\times \sin 30^{\circ}$ (Figure $7$). The matrix we are looking for needs to rotate the magenta circle until it overlaps with the green circle: $\hat {C}_3(h \sqrt{3}/2,-h/2,z)=(0,h,z) \nonumber$ where we note that this rotation does not change the value of $z$. $\begin{pmatrix} a_{11}&a_{12}&a_{13} \ a_{21}&a_{22}&a_{23} \ a_{31}&a_{32}&a_{33} \end{pmatrix}\begin{pmatrix} h\sqrt{3}/2 \ -h/2\ z\end{pmatrix}=\begin{pmatrix} 0 \ h \ z \end{pmatrix} \nonumber$ Here, we have used the fact that $\cos30^{\circ}=\sqrt{3}/2$ and $\sin 30^{\circ}=1/2$.
Multiplying the matrix by the vector: $a_{11}h\sqrt{3}/2-a_{12}h/2+a_{13}z=0 \nonumber$ $a_{21}h\sqrt{3}/2-a_{22}h/2+a_{23}z=h \nonumber$ $a_{31}h\sqrt{3}/2-a_{32}h/2+a_{33}z=z \nonumber$ From these equations, we conclude that $a_{13}=a_{23}=a_{31}=a_{32}=0$, $a_{12}=\sqrt{3}a_{11}$, and $a_{22}=\sqrt{3}a_{21}-2$. So far the matrix looks like: $\hat {C}_3=\begin{pmatrix} a_{11}&\sqrt{3}a_{11}&0 \ a_{21}&\sqrt{3}a_{21}-2&0 \ 0&0&1 \end{pmatrix} \nonumber$ To find the remaining entries let’s apply the matrix to the vector $(0,h,z)$, which needs to rotate to $(-h \sqrt{3}/2,-h/2,z)$: $\begin{pmatrix} a_{11}&\sqrt{3}a_{11}&0 \ a_{21}&\sqrt{3}a_{21}-2&0 \ 0&0&1 \end{pmatrix}\begin{pmatrix} 0 \ h \ z \end{pmatrix}=\begin{pmatrix} -h \sqrt{3}/2 \ -h/2 \ z \end{pmatrix} \nonumber$ From this multiplication we get $\sqrt{3}a_{11}h=-h\sqrt{3}/2\rightarrow a_{11}=-1/2 \nonumber$ $(\sqrt{3}a_{21}-2)h=-h/2\rightarrow a_{21}=\sqrt{3}/2 \nonumber$ and therefore, $\hat {C}_3=\begin{pmatrix} -1/2&-\sqrt{3}/2&0 \ \sqrt{3}/2&-1/2&0 \ 0&0&1 \end{pmatrix} \nonumber$
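As with the 2-fold rotation, this result is easy to verify numerically. The sketch below is our addition (it assumes NumPy); it confirms that the matrix sends the magenta vertex onto the green one and that applying it three times gives back the identity, as any 3-fold rotation must:

```python
import numpy as np

s3 = np.sqrt(3)
C3 = np.array([[-1/2, -s3/2, 0],
               [ s3/2, -1/2, 0],
               [ 0,     0,   1]])

h = 1.0
magenta = np.array([h * s3 / 2, -h / 2, 0.0])
green = np.array([0.0, h, 0.0])

# The 120-degree rotation maps the magenta vertex onto the green one...
assert np.allclose(C3 @ magenta, green)
# ...and three successive applications give back the identity.
assert np.allclose(np.linalg.matrix_power(C3, 3), np.eye(3))
print("C3 checks passed")
```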
The inverse of a square matrix $\mathbf{A}$, sometimes called a reciprocal matrix, is a matrix $\mathbf{A}^{-1}$ such that $\mathbf{A}\mathbf{A}^{-1}=\mathbf{I}$, where $\mathbf{I}$ is the identity matrix. It is easy to obtain $\mathbf{A}^{-1}$ in the case of a $2\times 2$ matrix: $\mathbf{A}=\begin{pmatrix} a&b \ c&d \end{pmatrix};\;\mathbf{A}^{-1}=\begin{pmatrix} e&f \ g&h \end{pmatrix} \nonumber$ $\begin{pmatrix} a&b \ c&d \end{pmatrix}\begin{pmatrix} e&f \ g&h \end{pmatrix}=\begin{pmatrix} 1&0 \ 0&1 \end{pmatrix} \nonumber$ $\label{eq:matrices_inverse1} ae+bg=1$ $\label{eq:matrices_inverse2} af+bh=0$ $\label{eq:matrices_inverse3} ce+dg=0$ $\label{eq:matrices_inverse4} cf+dh=1$ From Equations \ref{eq:matrices_inverse1} and \ref{eq:matrices_inverse3}: $g=(1-ae)/b=-ce/d\rightarrow ae=cbe/d+1\rightarrow e\left(a-cb/d\right)=1\rightarrow e\left(ad-cb\right)=d\rightarrow e=d/(ad-cb)$. You can obtain expressions for $f,g$ and $h$ in a similar way to obtain: $\mathbf{A}^{-1}=\frac{1}{ad-bc}\begin{pmatrix} d&-b \ -c&a \end{pmatrix} \nonumber$ Notice that the term $(ad-bc)$ is the determinant of $\mathbf{A}$, and therefore $\mathbf{A}^{-1}$ exists only if $|\mathbf{A}|\neq 0$. In other words, the inverse of a singular matrix is not defined. If you think about a square matrix as an operator, the inverse “undoes” what the original matrix does. For example, the matrix $\begin{pmatrix} -2&0 \ 0&1 \end{pmatrix}$, when applied to a vector $(x,y)$, gives $(-2x,y)$: $\begin{pmatrix} -2&0 \ 0&1 \end{pmatrix}\begin{pmatrix} x\ y \end{pmatrix}=\begin{pmatrix} -2x\ y \end{pmatrix} \nonumber$ The inverse of $\mathbf{A}$, when applied to $(-2x,y)$, gives back the original vector, $(x,y)$: $\mathbf{A}^{-1}=\frac{1}{ad-bc}\begin{pmatrix} d&-b \ -c&a \end{pmatrix}\rightarrow \mathbf{A}^{-1}= -\frac{1}{2}\begin{pmatrix} 1&0 \ 0&-2 \end{pmatrix} \nonumber$ $-\frac{1}{2}\begin{pmatrix} 1&0 \ 0&-2 \end{pmatrix} \begin{pmatrix} -2x\ y \end{pmatrix}=\begin{pmatrix} x\ y \end{pmatrix} \nonumber$ It is of course possible to calculate the inverse of matrices of higher dimensions, but in this course you will not be required to do so by hand. 15.06: Orthogonal Matrices A nonsingular matrix is called orthogonal when its inverse is equal to its transpose: $\mathbf{A}^T =\mathbf{A}^{-1}\rightarrow \mathbf{A}^T\mathbf{A} = \mathbf{I}. \nonumber$ For example, for the matrix $\mathbf{A}=\begin{pmatrix} \cos \theta&-\sin \theta \ \sin \theta&\cos \theta \end{pmatrix}, \nonumber$ the inverse is $\mathbf{A}^{-1} =\begin{pmatrix} \cos \theta&\sin \theta \ -\sin \theta&\cos \theta \end{pmatrix} =\mathbf{A}^T \nonumber$ We do not need to calculate the inverse to see if the matrix is orthogonal. 
We can transpose the matrix, multiply the result by the matrix, and see if we get the identity matrix as a result: $\mathbf{A}^T=\begin{pmatrix} \cos \theta&\sin \theta \ -\sin \theta&\cos \theta \end{pmatrix} \nonumber$ $\mathbf{A}^T \mathbf{A} =\begin{pmatrix} \cos \theta&\sin \theta \ -\sin \theta&\cos \theta \end{pmatrix}\begin{pmatrix} \cos \theta&-\sin \theta \ \sin \theta&\cos \theta \end{pmatrix} =\begin{pmatrix} (\cos^2 \theta+ \sin^2\theta)&0 \ 0&(\sin^2 \theta+ \cos^2\theta) \end{pmatrix} =\begin{pmatrix} 1&0 \ 0&1 \end{pmatrix} \nonumber$ The columns of orthogonal matrices form a system of orthonormal vectors (Section 14.2): $\mathbf{M}=\begin{pmatrix} a_1&b_1&c_1 \ a_2&b_2&c_2 \ a_3&b_3&c_3 \end{pmatrix}\rightarrow \mathbf{a}=\begin{pmatrix}a_1 \ a_2 \ a_3\end{pmatrix}; \;\mathbf{b}=\begin{pmatrix}b_1 \ b_2 \ b_3\end{pmatrix}; \;\mathbf{c}=\begin{pmatrix}c_1 \ c_2 \ c_3\end{pmatrix} \nonumber$ $\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{c}=\mathbf{c}\cdot\mathbf{b}=0 \nonumber$ $|\mathbf{a}|=|\mathbf{b}|=|\mathbf{c}|=1 \nonumber$

Example $1$

Prove that the matrix $\mathbf{M}$ is an orthogonal matrix and show that its columns form a set of orthonormal vectors. $\mathbf{M}=\begin{pmatrix} 2/3&1/3&-2/3 \ 2/3&-2/3&1/3 \ 1/3&2/3&2/3 \end{pmatrix} \nonumber$ Note: This problem is also available in video format: http://tinyurl.com/k2tkny5

Solution

We first need to prove that $\mathbf{M}^T\mathbf{M}=\mathbf{I}$ $\mathbf{M}=\begin{pmatrix} {\color{red}2/3}&{\color{blue}1/3}&-2/3 \ {\color{red}2/3}&{\color{blue}-2/3}&1/3 \ {\color{red}1/3}&{\color{blue}2/3}&2/3 \end{pmatrix}\rightarrow \mathbf{M}^T=\begin{pmatrix} {\color{red}2/3}&{\color{red}2/3}&{\color{red}1/3} \ {\color{blue}1/3}&{\color{blue}-2/3}&{\color{blue}2/3} \ -2/3&1/3&2/3 \end{pmatrix} \nonumber$ $\mathbf{M}^T\mathbf{M}=\begin{pmatrix} 1&0&0 \ 0&1&0 \ 0&0&1 \end{pmatrix}=\mathbf{I} \nonumber$ Because $\mathbf{M}^T\mathbf{M}=\mathbf{I}$, the matrix is orthogonal. We now have to prove that the columns form a set of orthonormal vectors. The vectors are: $\mathbf{a}=2/3\mathbf{i}+2/3\mathbf{j}+1/3\mathbf{k}= \begin{pmatrix}2/3 \ 2/3 \ 1/3\end{pmatrix}\nonumber$ $\mathbf{b}=1/3\mathbf{i}-2/3\mathbf{j}+2/3\mathbf{k}= \begin{pmatrix}1/3 \ -2/3 \ 2/3\end{pmatrix}\nonumber$ $\mathbf{c}=-2/3\mathbf{i}+1/3\mathbf{j}+2/3\mathbf{k}= \begin{pmatrix}-2/3 \ 1/3 \ 2/3\end{pmatrix}\nonumber$ The moduli of these vectors are: $|\mathbf{a}|^2=(2/3)^2+(2/3)^2+(1/3)^2=1 \nonumber$ $|\mathbf{b}|^2=(1/3)^2+(-2/3)^2+(2/3)^2=1 \nonumber$ $|\mathbf{c}|^2=(-2/3)^2+(1/3)^2+(2/3)^2=1 \nonumber$ which proves that the vectors are normalized. The dot products of the three pairs of vectors are: $\mathbf{a}\cdot\mathbf{b}=(2/3)(1/3)+(2/3)(-2/3)+(1/3)(2/3)=0 \nonumber$ $\mathbf{a}\cdot\mathbf{c}=(2/3)(-2/3)+(2/3)(1/3)+(1/3)(2/3)=0 \nonumber$ $\mathbf{c}\cdot\mathbf{b}=(-2/3)(1/3)+(1/3)(-2/3)+(2/3)(2/3)=0 \nonumber$ which proves they are mutually orthogonal. Because the vectors are normalized and mutually orthogonal, they form an orthonormal set. Orthogonal matrices, when thought of as operators that act on vectors, are important because they produce transformations that preserve the lengths of the vectors and the relative angles between them. For example, in two dimensions, the matrix $\mathbf{M}_1=1/\sqrt{2}\begin{pmatrix} 1&1 \ 1&-1 \end{pmatrix} \nonumber$ is an operator that maps a vector onto another vector of the same length (for example, it takes $(1,0)$ to $\frac{1}{\sqrt{2}}(1,1)$; see Figure $1$), preserving the lengths of the vectors and the angles between them.
In other words, an orthogonal matrix rotates a shape without distorting it. If the columns are orthogonal vectors that are not normalized, as in $\mathbf{M}_2=\begin{pmatrix} 1&1 \ 1&-1 \end{pmatrix}, \nonumber$ the object changes in size but the shape is not distorted. If, however, the two columns are non-orthogonal vectors, the transformation will distort the shape. Figure $1$ shows an example with the matrix $\mathbf{M}_3=\begin{pmatrix} 1&2 \ -1&1 \end{pmatrix} \nonumber$ From this discussion, it should not surprise you that all the matrices that represent symmetry operators (Section 15.4) are orthogonal matrices. These operators are used to rotate and reflect the object around different axes and planes without distorting its size and shape.
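Both the $2\times 2$ inverse formula and the orthogonality conditions above lend themselves to a quick numerical check. The following sketch is our addition (it assumes NumPy; the matrices are taken from the examples above): it verifies the closed-form inverse against numpy.linalg.inv and confirms that $\mathbf{M}^T\mathbf{M}=\mathbf{I}$ for the $3\times 3$ example matrix:

```python
import numpy as np

# 2x2 inverse via the closed form derived above: (1/(ad-bc)) [[d,-b],[-c,a]]
def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "singular matrix has no inverse"
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[-2.0, 0.0], [0.0, 1.0]])
assert np.allclose(inverse_2x2(A), np.linalg.inv(A))

# Orthogonality check for the example matrix M: M^T M should be the identity
M = np.array([[2, 1, -2],
              [2, -2, 1],
              [1, 2, 2]]) / 3
assert np.allclose(M.T @ M, np.eye(3))
print(np.round(M.T @ M, 12))  # prints the 3x3 identity matrix
```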
Since square matrices are operators, it should not surprise you that we can determine their eigenvalues and eigenvectors. The eigenvectors are analogous to the eigenfunctions we discussed in Chapter 11. If $\mathbf{A}$ is an $n\times n$ matrix, then a nonzero vector $\mathbf{x}$ is called an eigenvector of $\mathbf{A}$ if $\mathbf{Ax}$ is a scalar multiple of $\mathbf{x}$: $\mathbf{A}\mathbf{x}=\lambda \mathbf{x}$ The scalar $\lambda$ is called an eigenvalue of $\mathbf{A}$, and $\mathbf{x}$ is said to be the corresponding eigenvector. For example, the vector $(2,0)$ is an eigenvector of $\mathbf{A}=\begin{pmatrix} -2&0 \ 0&1 \end{pmatrix} \nonumber$ with eigenvalue $\lambda=-2$: $\begin{pmatrix} -2&0 \ 0&1 \end{pmatrix}\begin{pmatrix} 2\ 0 \end{pmatrix}=-2\begin{pmatrix} 2\ 0 \end{pmatrix} \nonumber$ Notice that the matrix $\mathbf{A}$, like any other $2\times 2$ matrix, transforms a 2-dimensional vector into another one that in general will lie along a different direction. For example, if we take $(2,2)$, this matrix will transform it into $\mathbf{A}(2,2)=(-4,2)$, which has a different orientation. However, the vector $(2,0)$ is special, because this matrix transforms it into a vector that is a multiple of itself: $\mathbf{A}(2,0)=(-4,0)$. For this particular vector, the matrix behaves as a number! (in this case the number -2). In fact, we have a whole family of vectors that do the same: $\mathbf{A}(x,0)=(-2x,0)$, or in other words, any vector parallel to the $x-$axis. There is another family of vectors that makes $\mathbf{A}$ behave as a number: $\mathbf{A}(0,y)=(0,y)$, or in other words, any vector parallel to the $y-$axis makes $\mathbf{A}$ behave as the number 1. The argument above gives a geometrical interpretation to eigenvectors and eigenvalues. For a $2\times 2$ matrix, there are two ‘special’ lines in the plane. If we take a vector along one of these lines, the matrix behaves as a number we call the eigenvalue, and simply shrinks or expands the vector without changing its direction.

Example $1$

The vectors $\mathbf{x}_1=(-i,1)$ and $\mathbf{x}_2=(i,1)$ are the two eigenvectors of $\mathbf{A}=\begin{pmatrix} 1&1 \ -1&1 \end{pmatrix} \nonumber$ What are the corresponding eigenvalues?

Solution

By definition: $\begin{pmatrix} 1&1 \ -1&1 \end{pmatrix}\begin{pmatrix} -i\ 1 \end{pmatrix}=\lambda_1\begin{pmatrix} -i\ 1 \end{pmatrix}\nonumber$ where $\lambda_1$ is the eigenvalue corresponding to $\mathbf{x}_1$. We have: $\begin{pmatrix} 1&1 \ -1&1 \end{pmatrix}\begin{pmatrix} -i\ 1 \end{pmatrix}=\begin{pmatrix} -i+1\ i+1 \end{pmatrix}=(1+i)\begin{pmatrix} -i\ 1 \end{pmatrix}\nonumber$ and therefore $\lambda_1=(1+i)$. For the second eigenvector: $\begin{pmatrix} 1&1 \ -1&1 \end{pmatrix}\begin{pmatrix} i\ 1 \end{pmatrix}=\lambda_2\begin{pmatrix} i\ 1 \end{pmatrix}\nonumber$ where $\lambda_2$ is the eigenvalue corresponding to $\mathbf{x}_2$. We have: $\begin{pmatrix} 1&1 \ -1&1 \end{pmatrix}\begin{pmatrix} i\ 1 \end{pmatrix}=\begin{pmatrix} i+1\ -i+1 \end{pmatrix}=(1-i)\begin{pmatrix} i\ 1 \end{pmatrix}\nonumber$ and therefore $\lambda_2=(1-i)$. The obvious question now is how to find the eigenvalues of a matrix. We will concentrate on $2\times 2$ matrices, although there are of course methods to do the same in higher dimensions.
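A quick check of Example 1 with NumPy (our addition; note that numpy.linalg.eig may return the eigenpairs in a different order, and eigenvectors only up to an overall factor):

```python
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 1.0]])
evals, evecs = np.linalg.eig(A)
print(evals)  # [1.+1.j  1.-1.j], matching lambda = 1 +/- i

# Verify the stated eigenvector (-i, 1) directly: A x should equal (1+i) x
x1 = np.array([-1j, 1.0])
assert np.allclose(A @ x1, (1 + 1j) * x1)
```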
Let’s say that we want to find the eigenvectors of $\mathbf{A}=\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\nonumber$ The eigenvectors satisfy the following equation: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=\lambda\begin{pmatrix} x \ y \end{pmatrix}\nonumber$ Our first step will be to multiply the right side by the identity matrix. This is analogous to multiplying by the number 1, so it does nothing: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=\lambda\begin{pmatrix} 1&0 \ 0&1 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix} \nonumber$ We will now group all terms on the left side: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}-\lambda\begin{pmatrix} 1&0 \ 0&1 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=0 \nonumber$ distribute $\lambda$: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}-\begin{pmatrix} \lambda&0 \ 0&\lambda \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=0 \nonumber$ and group the two matrices into one: $\begin{pmatrix} 3-\lambda&2 \ -1&0-\lambda \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=0 \nonumber$ multiplying the matrix by the vector: $(3-\lambda)x+2y=0 \nonumber$ $-x-\lambda y=0 \nonumber$ Solving the second equation for $x$ (which gives $x=-\lambda y$) and substituting into the first: \begin{align*} (3-\lambda)(-\lambda y)+2y &=0 \[4pt] y[(3-\lambda)(-\lambda )+2] &=0 \end{align*} \nonumber We do not want to force $y$ to be zero, because we are trying to determine the eigenvector, which may have $y\neq 0$. Then, we conclude that $[(3-\lambda)(-\lambda )+2]=0 \label{characteristic equation}$ which is a quadratic equation in $\lambda$. Now, note that $[(3-\lambda)(-\lambda )+2]$ is the determinant $\begin{vmatrix} 3-\lambda&2 \ -1&-\lambda \end{vmatrix} \nonumber$ We just concluded that in order to solve $\begin{pmatrix} 3-\lambda&2 \ -1&0-\lambda \end{pmatrix}\begin{pmatrix} x \ y \end{pmatrix}=0 \nonumber$ we just need to look at the values of $\lambda$ that make the determinant of the matrix equal to zero: $\begin{vmatrix} 3-\lambda&2 \ -1&-\lambda \end{vmatrix}=0 \nonumber$ Equation \ref{characteristic equation} is called the characteristic equation of the matrix, and in the future we can skip a few steps and write it down directly. Let’s start the problem from scratch. Let’s say that we want to find the eigenvectors of $\mathbf{A}=\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix} \nonumber$ We just need to subtract $\lambda$ from the main diagonal, and set the determinant of the resulting matrix to zero: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\rightarrow\begin{pmatrix} 3-\lambda&2 \ -1&0-\lambda \end{pmatrix}\rightarrow \begin{vmatrix} 3-\lambda&2 \ -1&-\lambda \end{vmatrix}=0 \nonumber$ We get a quadratic equation in $\lambda$: $\begin{vmatrix} 3-\lambda&2 \ -1&-\lambda \end{vmatrix}=(3-\lambda)(-\lambda)+2=0 \nonumber$ which can be solved to obtain the two eigenvalues: $\lambda_1=1$ and $\lambda_2=2$.
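Numerically (a sketch we are adding, assuming NumPy), the characteristic polynomial $\lambda^2-3\lambda+2$ and the eigenvalues $1$ and $2$ can be recovered directly:

```python
import numpy as np

A = np.array([[3.0, 2.0], [-1.0, 0.0]])

# Coefficients of det(lambda I - A) = lambda^2 - 3 lambda + 2
print(np.poly(A))            # [ 1. -3.  2.]
print(np.linalg.eigvals(A))  # the roots, lambda = 1 and 2 (order may vary)
```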
Our next step is to obtain the corresponding eigenvectors, which satisfy: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}=1\begin{pmatrix} x_1 \ y_1 \end{pmatrix} \nonumber$ for $\lambda_1$ $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}=2\begin{pmatrix} x_2 \ y_2 \end{pmatrix} \nonumber$ for $\lambda_2$ Let’s solve them side by side: $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}=1\begin{pmatrix} x_1 \ y_1 \end{pmatrix} \nonumber$ $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}=1\begin{pmatrix} 1&0\ 0&1 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix} \nonumber$ $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}-1\begin{pmatrix} 1&0\ 0&1 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}=0 \nonumber$ $\begin{pmatrix} 3-1&2 \ -1&0-1 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}=0 \nonumber$ $\begin{pmatrix} 2&2 \ -1&-1 \end{pmatrix}\begin{pmatrix} x_1 \ y_1 \end{pmatrix}=0 \nonumber$ $2x_1+2y_1=0 \nonumber$ $-x_1-y_1=0 \nonumber$ Notice that these two equations are not independent, as the top is a multiple of the bottom one. Both give the same result: $y=-x$. This means that any vector that lies on the line $y=-x$ is an eigenvector of this matrix with eigenvalue $\lambda=1$. $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}=2\begin{pmatrix} x_2 \ y_2 \end{pmatrix} \nonumber$ $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}=2\begin{pmatrix} 1&0\ 0&1 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix} \nonumber$ $\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}-2\begin{pmatrix} 1&0\ 0&1 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}=0 \nonumber$ $\begin{pmatrix} 3-2&2 \ -1&0-2 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}=0 \nonumber$ $\begin{pmatrix} 1&2 \ -1&-2 \end{pmatrix}\begin{pmatrix} x_2 \ y_2 \end{pmatrix}=0 \nonumber$ $x_2+2y_2=0 \nonumber$ $-x_2-2y_2=0 \nonumber$ Notice that these two equations are not independent, as the top is a multiple of the bottom one. Both give the same result: $y=-x/2$. This means that any vector that lies on the line $y=-x/2$ is an eigenvector of this matrix with eigenvalue $\lambda=2$. Figure $1$ shows the lines $y=-x$ and $y=-x/2$. Any vector that lies along the line $y=-x/2$ is an eigenvector with eigenvalue $\lambda=2$, and any vector that lies along the line $y=-x$ is an eigenvector with eigenvalue $\lambda=1$. Eigenvectors that differ only in a constant factor are not treated as distinct. It is convenient and conventional to normalize the eigenvectors. Notice that we can calculate two normalized eigenvectors for each eigenvalue (pointing in one or the other direction), and the distinction between one or the other is not important. In the first case, we have $y=-x$. This means that any vector of the form $\begin{pmatrix} a \ -a \end{pmatrix}$ is an eigenvector, but we are looking for the value of $a$ that makes this eigenvector normalized. In other words, we want $(a)^2+(-a)^2=1$, which gives $a=\pm 1/\sqrt{2}$. In conclusion, both $\begin{array}{c c c} \dfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \ -1 \end{pmatrix} & \text{and} & \dfrac{1}{\sqrt{2}}\begin{pmatrix} -1 \ 1 \end{pmatrix} \end{array} \nonumber$ are normalized eigenvectors of $\mathbf{A}=\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix} \nonumber$ with eigenvalue $\lambda=1$. For $\lambda=2$, we have that $y=-x/2$.
This means that any vector of the form $\begin{pmatrix} a \ -a/2 \end{pmatrix}$ is an eigenvector, but we are looking for the value of $a$ that makes this eigenvector normalized. In other words, we want $(a)^2+(-a/2)^2=1$, which gives $a=\pm 2/\sqrt{5}$. In conclusion, both $\begin{array}{c c c} \dfrac{1}{\sqrt{5}}\begin{pmatrix} 2 \ -1 \end{pmatrix} & \text{and} & \dfrac{1}{\sqrt{5}}\begin{pmatrix} -2 \ 1 \end{pmatrix} \end{array} \nonumber$ are normalized eigenvectors of $\mathbf{A}=\begin{pmatrix} 3&2 \ -1&0 \end{pmatrix} \nonumber$ with eigenvalue $\lambda=2.$

Example $2$

Find the eigenvalues and normalized eigenvectors of $\mathbf{M}=\begin{pmatrix} 0&1 \ -1&0 \end{pmatrix} \nonumber$ The eigenvalues satisfy the characteristic equation: $\begin{vmatrix} -\lambda&1 \ -1&-\lambda \end{vmatrix}=0\rightarrow (-\lambda)(-\lambda)-(1)(-1)=\lambda^2+1=0\rightarrow \lambda_{1,2}=\pm i \nonumber$ For $\lambda =i$: $\begin{pmatrix} 0&1 \ -1&0 \end{pmatrix}\begin{pmatrix} x_1\ y_1 \end{pmatrix}=i\begin{pmatrix} x_1\ y_1 \end{pmatrix} \nonumber$ $y_1=ix_1 \nonumber$ $-x_1=i y_1 \nonumber$ Again, the two equations we get have the same information (or more formally, are linearly dependent). From either one, we get $y_1=i x_1$. Any vector of the form $\mathbf{u}=\begin{pmatrix} a\ ia \end{pmatrix} \nonumber$ is an eigenvector of $\mathbf{M}$ with eigenvalue $\lambda=i$. To normalize the vector (Section 14.4), we calculate the modulus of the vector using the dot product: $|\mathbf{u}|^2=\mathbf{u}^*\cdot\mathbf{u} \nonumber$ (see Section 14.2 for a discussion of the dot product of complex vectors) $|\mathbf{u}|^2=\mathbf{u}^*\cdot\mathbf{u}=a^2+(ia)(-ia)=a^2+a^2=2a^2\rightarrow |\mathbf{u}|=\sqrt{2}\,|a| \nonumber$ and we divide the vector by its modulus. The normalized eigenvectors for $\lambda =i$ are, therefore, $\hat{\mathbf{u}}=\pm \dfrac{1}{\sqrt{2}}\begin{pmatrix} 1\ i \end{pmatrix} \nonumber$ For $\lambda =-i$: $\begin{pmatrix} 0&1 \ -1&0 \end{pmatrix}\begin{pmatrix} x_1\ y_1 \end{pmatrix}=-i\begin{pmatrix} x_1\ y_1 \end{pmatrix} \nonumber$ $y_1=-ix_1 \nonumber$ $-x_1=-i y_1 \nonumber$ From either one, we get $y_1=-i x_1. \nonumber$ Any vector of the form $\mathbf{v}=\begin{pmatrix} a\ -ia \end{pmatrix} \nonumber$ is an eigenvector of $\mathbf{M}$ with eigenvalue $\lambda=-i$. To normalize the vector, we calculate the dot product: $|\mathbf{v}|^2=\mathbf{v}^*\cdot\mathbf{v}=a^2+(-ia)(ia)=a^2+a^2=2a^2\rightarrow |\mathbf{v}| =\sqrt{2}\,|a| \nonumber$ The normalized eigenvectors for $\lambda =-i$ are, therefore, $\hat{\mathbf{v}}=\pm \dfrac{1}{\sqrt{2}}\begin{pmatrix} 1\ -i \end{pmatrix} \nonumber$

Matrix Eigenvalues: Some Important Properties

1) The eigenvalues of a triangular matrix are the diagonal elements. $\begin{pmatrix} a&b&c\ 0&d&e\ 0&0&f \end{pmatrix}\rightarrow \lambda_1=a;\;\lambda_2=d;\;\lambda_3=f \nonumber$ 2) If $\lambda_1, \lambda_2, ..., \lambda_n$ are the eigenvalues of the matrix $\mathbf{A}$, then $|\mathbf{A}|= \lambda_1\lambda_2...\lambda_n$ 3) The trace of the matrix $\mathbf{A}$ is equal to the sum of all eigenvalues of the matrix $\mathbf{A}$. For example, for the matrix $\mathbf{A}=\begin{pmatrix} 1&1\ -2&4 \end{pmatrix}\rightarrow |\mathbf{A}|=6=\lambda_1\lambda_2;\;Tr(\mathbf{A})=5=\lambda_1+\lambda_2 \nonumber$ For a $2\times 2$ matrix, the trace and the determinant are sufficient information to obtain the eigenvalues: $\lambda_1=2$ and $\lambda_2=3$. 4) Symmetric matrices - those that have a "mirror plane" along the northwest-southeast (main) diagonal (i.e.
$\mathbf{A}=\mathbf{A}^T$) must have all real eigenvalues. Their eigenvectors are mutually orthogonal. For example, for the matrix $\mathbf{A}=\begin{pmatrix} 2&4&0\ 4&1&-1\ 0&-1&-3 \end{pmatrix} \nonumber$ the three eigenvalues are $\lambda_1=1+\sqrt{21}$, $\lambda_2=1-\sqrt{21}$, $\lambda_3=-2$, and the three eigenvectors: $\begin{array}{c} \mathbf{u}_1=\begin{pmatrix} -5-\sqrt{21}\ -4-\sqrt{21}\ 1 \end{pmatrix}, & \mathbf{u}_2=\begin{pmatrix} -5+\sqrt{21}\ -4+\sqrt{21}\ 1 \end{pmatrix}, & \text{and} & \mathbf{u}_3=\begin{pmatrix} 1\ -1\ 1 \end{pmatrix} \end{array} \nonumber$ You can prove the eigenvectors are mutually orthogonal by taking their dot products.

15.08: Hermitian Matrices

A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose. In other words, $a_{ij}=a_{ji}^*$ for all entries. The elements in the diagonal need to be real, because these entries need to equal their complex conjugates: $a_{ii}=a_{ii}^*$: $\begin{pmatrix} a&{\color{red}b+ci}&{\color{blue}d+ei}\ {\color{red}b-ci}&f&{\color{OliveGreen}g+hi}\ {\color{blue}d-ei}&{\color{OliveGreen}g-hi}&j \end{pmatrix} \nonumber$ where all the symbols in this matrix except for $i$ represent real numbers. Hermitian matrices are a generalization of the symmetric real matrices we just talked about, and they also have real eigenvalues, and eigenvectors that form a mutually orthogonal set.

15.09: Problems

Problem $1$

Given $\mathbf{A}=\begin{pmatrix} 2&3&-1 \ -5&0&6\ 0&2&3 \end{pmatrix}\; ;\mathbf{B}=\begin{pmatrix} 2 \ 1\0 \end{pmatrix}\; ;\mathbf{C}=\begin{pmatrix} 0&1\ 2&0\-1&3 \end{pmatrix} \nonumber$ Multiply all possible pairs of matrices.

Problem $2$

The matrix representation of a spin $1/2$ system was introduced by Pauli in 1926. The Pauli spin matrices are the matrix representation of the angular momentum operator for a single spin $1/2$ system and are defined as: $\mathbf{\sigma_x}=\begin{pmatrix} 0&1 \ 1&0 \end{pmatrix}\; ;\mathbf{\sigma_y}=\begin{pmatrix} 0&-i \ i&0 \end{pmatrix}\; ;\mathbf{\sigma_z}=\begin{pmatrix} 1&0\ 0&-1 \end{pmatrix} \nonumber$
1. Show that $\mathbf{\sigma_x}\mathbf{\sigma_y}=i\mathbf{\sigma_z}$, $\mathbf{\sigma_y}\mathbf{\sigma_z}=i\mathbf{\sigma_x}$ and $\mathbf{\sigma_z}\mathbf{\sigma_x}=i\mathbf{\sigma_y}$
2. Calculate the commutator $\left[\mathbf{\sigma_x},\mathbf{\sigma_y} \right]$.
3. Show that $\mathbf{\sigma_x}^2=\mathbf{\sigma_y}^2=\mathbf{\sigma_z}^2=\mathbf{I}$, where $\mathbf{I}$ is the identity matrix. Hint: as with numbers, the square of a matrix is the matrix multiplied by itself.

Problem $3$

The inversion operator, $\hat i$ transforms the point $(x,y,z)$ into $(-x,-y,-z)$. Write down the matrix that corresponds to this operator.

Problem $4$

Calculate the inverse of $\mathbf{A}$ by definition. $\mathbf{A}=\begin{pmatrix} 1&-2 \ 0&1 \end{pmatrix} \nonumber$

Problem $5$

Calculate the inverse of $\mathbf{A}$ by definition. $\mathbf{A}=\begin{pmatrix} \cos \theta&-\sin \theta \ \sin \theta&\cos \theta \end{pmatrix} \nonumber$

Problem $6$

Find the eigenvalues and normalized eigenvectors of $\mathbf{M_1}=\begin{pmatrix} 2&0 \ 0&-3 \end{pmatrix} \nonumber$ $\mathbf{M_2}=\begin{pmatrix} 1&1+i \ 1-i&1 \end{pmatrix} \nonumber$

Problem $7$

Given, $\mathbf{M_3}=\begin{pmatrix} 1&1-i \ 1+i&1 \end{pmatrix} \nonumber$
1. Show that the matrix is Hermitian.
2. Calculate the eigenvectors and prove they are orthogonal.
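As a numerical footnote to the symmetric-matrix example above (our addition; it assumes NumPy, whose eigh routine is specifically designed for symmetric and Hermitian matrices), the eigenvalues $1\pm\sqrt{21}$ and $-2$ are indeed real, and the returned eigenvectors come out mutually orthogonal:

```python
import numpy as np

A = np.array([[ 2.0, 4.0,  0.0],
              [ 4.0, 1.0, -1.0],
              [ 0.0, -1.0, -3.0]])

evals, evecs = np.linalg.eigh(A)   # eigh: for symmetric/Hermitian matrices
print(evals)                       # approximately [1-sqrt(21), -2, 1+sqrt(21)]
assert np.allclose(np.sort(evals),
                   np.sort([1 + np.sqrt(21), 1 - np.sqrt(21), -2.0]))

# eigh returns an orthonormal set of eigenvectors: V^T V = I
assert np.allclose(evecs.T @ evecs, np.eye(3))
```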
16: Formula Sheets

16.01: Some Important Numbers

$\begin{array}{c} \ln{2} \approx 0.69 \ e \approx 2.72 \ e^{-1} \approx 0.37 \end{array} \nonumber$

16.02: Quadratic Equation

$ax^2 + bx + c = 0 \rightarrow x = \frac{-b \pm \sqrt{b^2-4ac}}{2a} \nonumber$

16.03: Logarithms and Exponentials

• $\log_b xy = \log_b x + \log_b y$
• $\log_b x^y = y \log_b x$
• $\log_b 1 = 0$
• $\log_b (1/x) = − \log_b x$
• $e^{x+y} = e^x e^y$
• $e^{x−y} = e^x/e^y$

16.04: Trigonometric Identities

• $\sin^2 u + \cos^2 u = 1$
• $\tan u = \frac{\sin u}{\cos u}$
• $\sin \left( \frac{\pi}{2} − u \right) = \cos u$
• $\cos \left( \frac{\pi}{2} − u \right) = \sin u$
• $\sin (u \pm v) = \sin u \cos v \pm \cos u \sin v$
• $\cos (u \pm v) = \cos u \cos v \mp \sin u \sin v$
• $\sin (−u) = − \sin u$
• $\cos (−u) = \cos u$
• $\tan (−u) = − \tan u$
• $\sin u \cos v = \frac{1}{2} \left[ \sin (u + v) + \sin (u − v) \right]$
• $\sin u \sin v = \frac{1}{2} \left[ \cos (u − v) − \cos (u + v) \right]$
• $\cos u \cos v = \frac{1}{2} \left[ \cos (u − v) + \cos (u + v) \right]$

16.05: Complex Numbers

$re^{\pm i\phi}=r\cos{\phi}\pm i r \sin{\phi}=x \pm iy \nonumber$

16.06: Operators

$\begin{array}{c} [\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A} \ [\hat{A},\hat{B}]=0 \text{ if the operators } \hat{A} \text{ and } \hat{B} \text{ commute}. \end{array} \nonumber$

16.07: Taylor Series

$\begin{array}{c} f(x)=a_0+a_1(x-h)+a_2(x-h)^2+a_3(x-h)^3+... \ a_n=\frac{1}{n!}\left(\frac{d^n f(x)}{d x^n} \right)_h, n=1,2,3... \end{array} \nonumber$

16.08: Fourier Series

For a periodic function of period $2L$: $\begin{array}{c} f(x)=\frac{a_0}{2}+\sum\limits_{n=1}^{\infty}a_n\cos\left ( \frac{n \pi x}{L} \right )+\sum\limits_{n=1}^{\infty}b_n\sin\left ( \frac{n \pi x}{L} \right ) \ a_0=\frac{1}{L}\int\limits_{-L}^{L}f(x)dx \ a_n=\frac{1}{L}\int\limits_{-L}^{L}f(x)\cos{\left(\frac{n \pi x}{L} \right)}dx \ b_n=\frac{1}{L}\int\limits_{-L}^{L}f(x)\sin{\left(\frac{n \pi x}{L} \right)}dx \end{array} \nonumber$

16.09: Derivatives and Primitives (Indefinite Integrals)

| $f(x)$ | $f'(x)$ | $\int f(x)\,dx$ (each $\pm c$) |
| --- | --- | --- |
| $k$ | $0$ | $kx$ |
| $x^n$ | $n x^{n-1}$, $n \neq 0$ | $\frac{x^{n+1}}{n+1}$, $n \neq -1$ |
| $\frac{1}{x}$ | $-\frac{1}{x^2}$ | $\ln \vert x \vert$ |
| $a^x$ | $a^x \ln{a}$ | $\frac{a^x}{\ln{a}}$ |
| $e^x$ | $e^x$ | $e^x$ |
| $\log_a{x}$ | $\frac{1}{x \ln{a}}$ | $\frac{x \ln{x}-x}{\ln{a}}$ |
| $\ln{x}$ | $\frac{1}{x}$ | $x \ln{x}-x$ |
| $\sin{x}$ | $\cos{x}$ | $-\cos{x}$ |
| $\cos{x}$ | $-\sin{x}$ | $\sin{x}$ |
| $\tan{x}$ | $\frac{1}{\cos^2{x}}$ | $-\ln{(\cos{x})}$ |
| $\arcsin{x}$ | $\frac{1}{\sqrt{1-x^2}}$ | $x \arcsin{x}+\sqrt{1-x^2}$ |
| $\arccos{x}$ | $-\frac{1}{\sqrt{1-x^2}}$ | $x \arccos{x}-\sqrt{1-x^2}$ |
| $\arctan{x}$ | $\frac{1}{1+x^2}$ | $x \arctan{x}-\frac{1}{2}\ln{(1+x^2)}$ |
| $\frac{1}{a^2+x^2}$ | $\frac{-2x}{(a^2+x^2)^2}$ | $\frac{1}{a}\arctan{\left( \frac{x}{a}\right)}$ |
| $\frac{1}{\sqrt{a^2-x^2}}$ | $\frac{x}{(a^2-x^2)^{\frac{3}{2}}}$ | $\arcsin{\left( \frac{x}{a}\right)}$ |

• $\int \sin^2{(ax)}dx= \frac{x}{2}-\frac{\sin{(2ax)}}{4a}+c$
• $\int \cos^2{(ax)}dx= \frac{x}{2}+\frac{\sin{(2ax)}}{4a}+c$
• $\int \sin^3{(ax)}dx= \frac{1}{12a}\cos{(3ax)}-\frac{3}{4a}\cos{(ax)}+c$
• $\int \cos^3{(ax)}dx= \frac{1}{12a}\sin{(3ax)}+\frac{3}{4a}\sin{(ax)}+c$
• $\int x \cos{(ax)}dx=\frac{\cos{(ax)}}{a^2}+\frac{\sin{(ax)}}{a}x+c$
• $\int x \sin{(ax)}dx=\frac{\sin{(ax)}}{a^2}-\frac{\cos{(ax)}}{a}x+c$
• $\int x \sin^2{(ax)}dx=\frac{x^2}{4}-\frac{x\sin{(2ax)}}{4a}-\frac{\cos{(2ax)}}{8a^2}+c$
• $\int xe^{x^2}dx=e^{x^{2}}/2+c$
• $\int x e^{ax}=\frac{e^{ax}(ax-1)}{a^2}+c$
• $\int \frac{x}{x^2+1}dx=\frac{1}{2}\ln{(1+x^2)} +c$

16.10: Definite integrals
• $\int\limits_{0}^{\infty }xe^{-x^2}dx=\frac{1}{2}$
• $\int\limits_{0}^{\infty }e^{-ax}dx=\frac{1}{a}$, $a>0$
• $\int\limits_{0}^{\infty }\sqrt{x}e^{-ax}dx=\frac{1}{2a}\sqrt{\frac{\pi}{a}}$
• $\int\limits_{0}^{\infty }x^{2n+1}e^{-ax^2}dx=\frac{n!}{2a^{n+1}}$, $a>0$
• $\int\limits_{0}^{\infty }x^{2n}e^{-ax^2}dx=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2^{n+1}a^{n}}\sqrt{\frac{\pi}{a}}$
• $\int\limits_{0}^{\infty }x^{n}e^{-ax}dx=\frac{n!}{a^{n+1}}$, $a>0$, $n$ positive integer

16.11: Differentiation Rules

• $\frac{d[f(x)+g(x)]}{dx}=\frac{d f(x)}{dx}+\frac{d g(x)}{dx}$
• $\frac{d[f(x)g(x)]}{dx}=f(x)\frac{d g(x)}{dx}+g(x)\frac{d f(x)}{dx}$
• $\frac{d[a f(x)]}{dx}=a\frac{d f(x)}{dx}$
• $\frac{df(g(x))}{dx}=\frac{d f(g)}{dg}\frac{d g(x)}{dx}$

16.12: Partial Derivatives

• $\frac{\partial^2 z}{\partial x \partial y}=\frac{\partial^2 z}{\partial y \partial x}$
• $\left(\frac{\partial y}{\partial x}\right)_{z,u}=\frac{1}{\left(\partial x/\partial y\right)_{z,u}}$
• $\left(\frac{\partial y}{\partial x}\right)_{z}\left(\frac{\partial x}{\partial z}\right)_{y}\left(\frac{\partial z}{\partial y}\right)_{x}=-1$
• $du=\left(\frac{\partial u}{\partial x_1}\right)_{x_2,x_3...}dx_1+\left(\frac{\partial u}{\partial x_2}\right)_{x_1,x_3...}dx_2+\left(\frac{\partial u}{\partial x_3}\right)_{x_1,x_2...}dx_3$
• Given $u=u(x,y)$, $x=x(\theta,r)$ and $y=y(\theta,r)$ $\begin{array}{c} \left ( \frac{\partial u}{\partial r} \right )_\theta=\left ( \frac{\partial u}{\partial x} \right )_y\left ( \frac{\partial x}{\partial r} \right )_\theta+\left ( \frac{\partial u}{\partial y} \right )_x\left ( \frac{\partial y}{\partial r} \right )_\theta \ \left ( \frac{\partial u}{\partial \theta} \right )_r=\left ( \frac{\partial u}{\partial x} \right )_y\left ( \frac{\partial x}{\partial \theta} \right )_r+\left ( \frac{\partial u}{\partial y} \right )_x\left ( \frac{\partial y}{\partial \theta} \right )_r \end{array} \nonumber$

16.13: Coordinate Systems

Cartesian coordinates
2 dimensions: area element: $dA=dx\,dy$.
3 dimensions: volume element: $dV=dx\,dy\,dz$.

Polar coordinates
• $x=r\cos\theta$
• $y=r\sin\theta$
• $r^2=x^2+y^2$
• $\tan \theta=y/x$
• $dA=r\,dr\,d\theta$

Spherical coordinates
• $x=r\sin\theta\cos\phi$
• $y=r\sin\theta\sin\phi$
• $z=r\cos\theta$
• $r^2=x^2+y^2+z^2$
• $\theta=\cos^{-1}\left[\frac{z}{\sqrt{x^2+y^2+z^2}} \right]$
• $\phi=\tan^{-1}\left(\frac{y}{x}\right)$
• $dV=r^2\sin{\theta}\,dr\,d\phi\,d\theta$
Equilibrium thermodynamics and statistical mechanics are widely considered to be core subject matter for any practicing chemist [1]. There are plenty of reasons for this:

• A great many chemical phenomena encountered in the laboratory are well described by equilibrium thermodynamics.
• The physics of chemical systems at equilibrium is generally well understood and mathematically tractable.
• Equilibrium thermodynamics motivates our thinking and understanding about chemistry away from equilibrium.

This last point, however, raises a serious question: how well does equilibrium thermodynamics really motivate our understanding of nonequilibrium phenomena? Is it reasonable for an organometallic chemist to analyze a catalytic cycle in terms of rate-law kinetics, or for a biochemist to treat the concentration of a solute in an organelle as a bulk mixture of compounds? Under many circumstances, equilibrium thermodynamics suffices, but a growing number of outstanding problems in chemistry - from electron transfer in light-harvesting complexes to the chemical mechanisms behind immune system response - concern processes that are fundamentally out of equilibrium. This course endeavors to introduce the key ideas that have been developed over the last century to describe nonequilibrium phenomena. These ideas are almost invariably founded upon a statistical description of matter, as in the equilibrium case. However, since nonequilibrium phenomena contain a more explicit time-dependence than their equilibrium counterparts (consider, for example, the decay of an NMR signal or the progress of a reaction), the probabilistic tools we develop will require some time-dependence as well. In this chapter, we consider systems whose behavior is inherently nondeterministic, or stochastic, and we establish methods for describing the probability of finding the system in a particular state at a specified time.

• 1.1: Markov Processes
• 1.2: Master Equations The techniques developed in the basic theory of Markov processes are widely applicable, but there are of course many instances in which the discretization of time is either inconvenient or completely unphysical. In such instances, a master equation (more humbly referred to as a rate equation) may provide a continuous-time description of the system that is in keeping with all of our results about stochastic processes.
• 1.3: Fokker-Planck Equations We have already generalized the equations governing Markov processes to account for systems that evolve continuously in time, which resulted in the master equations. In this section, we adapt these equations further so that they may be suitable for the description of systems with a continuum of states, rather than a discrete, countable number of states.
• 1.4: The Langevin Equation A variety of interesting and important phenomena are subject to combinations of deterministic and stochastic processes. We concern ourselves now with a particular class of such phenomena which are described by Langevin equations. In its simplest form, a Langevin equation is an equation of motion for a system that experiences a particular type of random force. The archetypal system governed by a Langevin equation is a Brownian particle, that is, a particle undergoing Brownian motion.
• 1.5: Appendix: Applications to Brownian Motion

01: Stochastic Processes and Brownian Motion

Probability Distributions and Transitions

Suppose that an arbitrary system of interest can be in any one of $N$ distinct states. The system could be a protein exploring different conformational states; or a pair of molecules oscillating between a "reactants" state and a "products" state; or any system that can sample different states over time. Note here that $N$ is finite, that is, the available states are discretized. In general, we could consider systems with a continuous set of available states (and we will do so in section 1.3), but for now we will confine ourselves to the case of a finite number of available states. In keeping with our discretization scheme, we will also (again, for now) consider the time evolution of the system in terms of discrete timesteps rather than a continuous time variable. Let the system be in some unknown state $m$ at timestep $s$, and suppose we’re interested in the probability of finding the system in a specific state $n$, possibly but not necessarily the same as state $m$, at the next timestep $s+1$. We will denote this probability by $P(n, s+1)$. If we had knowledge of $m$, then this probability could be described as the probability of the system being in state $n$ at timestep $s+1$ given that the system was in state $m$ at timestep $s$. Probabilities of this form are known as conditional probabilities, and we will denote this conditional probability by $Q(m, s \mid n, s+1)$. In many situations of physical interest, the probability of a transition from state $m$ to state $n$ is time-independent, depending only on the nature of $m$ and $n$, and so we drop the timestep arguments to simplify the notation, $Q(m, s \mid n, s+1) \equiv Q(m, n)$. This observation may seem contradictory, because we are interested in the time-dependent probability of observing a system in a state $n$ while also claiming that the transition probability described above is time-independent. But there is no contradiction here, because the transition probability $Q$ - a conditional probability - is a different quantity from the time-dependent probability $P$ we are interested in. In fact, we can express $P(n, s+1)$ in terms of $Q(m, n)$ and other quantities as follows: Since we don’t know the current state $m$ of the system, we consider all possible states $m$ and multiply the probability that the system is in state $m$ at timestep $s$ by the probability of the system being in state $n$ at timestep $s+1$ given that it is in state $m$ at timestep $s$. Summing over all possible states $m$ gives $P(n, s+1)$ at timestep $s+1$ in terms of the corresponding probabilities at timestep $s$. Mathematically, this formulation reads $P(n, s+1)=\sum_{m} P(m, s) Q(m, n)$ We’ve made some progress towards a practical method of finding $P(n, s+1)$, but the current formulation Eq.(1.1) requires knowledge of both the transition probabilities $Q(m, n)$ and the probabilities $P(m, s)$ for all states $m$. Unfortunately, $P(m, s)$ is just as much a mystery to us as $P(n, s+1)$.
What we usually know and control in experiments are the initial conditions; that is, if we prepare the system in state $k$ at timestep $s=0$, then we know that $P(k, 0)=1$ and $P(n, 0)=0$ for all $n \neq k$. So how do we express $P(n, s+1)$ in terms of the initial conditions of the experiment? We can proceed inductively: if we can write $P(n, s+1)$ in terms of $P(m, s)$, then we can also write $P(m, s)$ in terms of $P(l, s-1)$ by the same approach: $P(n, s+1)=\sum_{l, m} P(l, s-1) Q(l, m) Q(m, n)$ Note that $Q$ has two parameters, each of which can take on $N$ possible values. Consequently we may choose to write $Q$ as an $N \times N$ matrix $\mathbf{Q}$ with matrix elements $(\mathbf{Q})_{m n}=Q(m, n)$. Rearranging the sums in Eq.(1.2) in the following manner, $P(n, s+1)=\sum_{l} P(l, s-1) \sum_{m} Q(l, m) Q(m, n)$ we recognize the sum over $m$ as the definition of a matrix product, $\sum_{m}(\mathbf{Q})_{l m}(\mathbf{Q})_{m n}=\left(\mathbf{Q}^{2}\right)_{l n}$ Hence, Eq.(1.2) can be recast as $P(n, s+1)=\sum_{l} P(l, s-1)\left(\mathbf{Q}^{2}\right)_{l n}$ This process can be continued inductively until $P(n, s+1)$ is written fully in terms of initial conditions. The final result is: \begin{aligned} P(n, s+1) &=\sum_{m} P(m, 0)\left(\mathbf{Q}^{s+1}\right)_{m n} \ &=P(k, 0)\left(\mathbf{Q}^{s+1}\right)_{k n} \end{aligned} where $k$ is the known initial state of the system (all other $m$ do not contribute to the sum since $P(m, 0)=0$ for $m \neq k$). Any process that can be described in this manner is called a Markov process, and the sequence of events comprising the process is called a Markov chain. A more rigorous discussion of the origins and nature of Markov processes may be found in, e.g., de Groot and Mazur [2].

The Transition Probability Matrix

We now consider some important properties of the transition probability matrix $\mathbf{Q}$. By virtue of its definition, $Q$ is not necessarily Hermitian: if it were Hermitian, every conceivable transition between states would have to have the same forward and backward probability, which is often not the case. Example: Consider a chemical system that can exist in either a reactant state A or a product state B, with forward reaction probability $p$ and backward reaction probability $q=1-p$, $\mathrm{A} \underset{q}{\stackrel{p}{\rightleftharpoons}} \mathrm{B}$ The transition probability matrix $\mathbf{Q}$ for this system is the $2 \times 2$ matrix $\mathbf{Q}=\left(\begin{array}{ll} q & p \ q & p \end{array}\right)$ To construct this matrix, we first observe that the given probabilities directly describe the off-diagonal elements $Q_{A B}$ and $Q_{B A}$; then we invoke conservation of probability. For example, if the system is in the reactant state $A$, it can only stay in $A$ or react to form product $B$; there are no other possible outcomes, so we must have $Q_{A A}+Q_{A B}=1$. This forces the value $1-p=q$ upon $Q_{A A}$, and a similar argument yields $Q_{B B}$. Clearly this matrix is not symmetric, hence it is not Hermitian either, thus demonstrating our first general observation about $\mathbf{Q}$. The non-Hermiticity of $\mathbf{Q}$ implies also that its eigenvalues $\lambda_{i}$ are not necessarily real-valued.
Nevertheless, $\mathbf{Q}$ yields two sets of eigenvectors, a left set $\chi_{i}$ and a right set $\phi_{i}$, which satisfy the relations \begin{aligned} &\chi_{i} \mathbf{Q}=\lambda_{i} \chi_{i} \ &\mathbf{Q} \phi_{i}=\lambda_{i} \phi_{i} \end{aligned} The left- and right-eigenvectors of $\mathbf{Q}$ are orthonormal, $\left\langle\chi_{i} \mid \phi_{j}\right\rangle=\delta_{i j}$ and they form a complete set, hence there is a resolution of the identity of the form $\sum_{i}\left|\phi_{i}\right\rangle\left\langle\chi_{i}\right|=1$ Conservation of probability further restricts the elements of $\mathbf{Q}$ to be nonnegative with $\sum_{n} \mathbf{Q}_{m n}=1$. It can be shown that this condition guarantees that all eigenvalues of $\mathbf{Q}$ are bounded by the unit circle in the complex plane, $\left|\lambda_{i}\right| \leq 1, \forall i$ Proof of Eq. (1.12): The $i^{\text {th }}$ eigenvalue of $\mathbf{Q}$ satisfies $\lambda_{i} \phi_{i}(n)=\sum_{m} Q_{n m} \phi_{i}(m)$ for each $n$. Take the absolute value of this relation, $\left|\lambda_{i} \phi_{i}(n)\right|=\left|\sum_{m} Q_{n m} \phi_{i}(m)\right|$ Now we can apply the triangle inequality to the right hand side of the equation: $\left|\sum_{m} Q_{n m} \phi_{i}(m)\right| \leq \sum_{m}\left|Q_{n m} \phi_{i}(m)\right|$ Also, since all elements of $\mathbf{Q}$ are nonnegative, $\left|\lambda_{i} \phi_{i}(n)\right| \leq \sum_{m} Q_{n m}\left|\phi_{i}(m)\right|$ Now, the $\phi_{i}(n)$ are finite, so there must be some constant $c>0$ such that $\left|\phi_{i}(n)\right| \leq c$ for all $n$, with equality for at least one state. Evaluating the inequality above at a state $n$ for which $\left|\phi_{i}(n)\right|=c$, it reads $c\left|\lambda_{i}\right| \leq c \sum_{m} Q_{n m}$ Finally, since $\sum_{m} Q_{n m}=1$, we have the desired result, $\left|\lambda_{i}\right| \leq 1$ Another key feature of the transition probability matrix $\mathbf{Q}$ is the following claim, which is intimately connected with the notion of an equilibrium state: $\text { Q always has the eigenvalue } \lambda=1$ Proof of Eq.(1.13): We refer now to the left eigenvectors of $\mathbf{Q}$: a given left eigenvector $\chi_{i}$ satisfies $\chi_{i}(n) \lambda_{i}=\sum_{m} \chi_{i}(m) Q_{m n}$ Summing over $n$, we find $\sum_{n} \chi_{i}(n) \lambda_{i}=\sum_{n} \sum_{m} \chi_{i}(m) Q_{m n}=\sum_{m} \chi_{i}(m)$ since $\sum_{n} Q_{m n}=1$. Thus, we have the following secular equation: $\left(\lambda_{i}-1\right) \sum_{n} \chi_{i}(n)=0$ Clearly, $\lambda=1$ is one of the eigenvalues satisfying this equation. The decomposition of the secular equation in the preceding proof has a direct physical interpretation: the eigenvalue $\lambda_{1}=1$ has a corresponding eigenvector which satisfies $\sum_{n} \chi_{1}(n)=1$; this stationary-state eigensolution corresponds to the steady state of the system, $\chi_{1}(n)=P_{\mathrm{st}}(n)$. It then follows from the normalization condition that $\phi_{1}(n)=1$. The remaining eigenvalues $\left|\lambda_{j}\right|<1$ each satisfy $\sum_{n} \chi_{j}(n)=0$ and hence correspond to zero-sum fluctuations about the equilibrium state. In light of these properties of $\mathbf{Q}$, we can define the time-dependent evolution of a system in terms of the eigenstates of $\mathbf{Q}$; this representation is termed the spectral decomposition of $P(n, s)$ (the set of eigenvalues of a matrix is also known as the spectrum of that matrix).
In the basis of left and right eigenvectors of $\mathbf{Q}$, the probability of being in state $n$ at timestep $s$, given the initial state as $n_{0}$, is $P(n, s)=\left\langle n_{0}\left|\mathbf{Q}^{s}\right| n\right\rangle=\sum_{i}\left\langle n_{0} \mid \phi_{i}\right\rangle \lambda_{i}^{s}\left\langle\chi_{i} \mid n\right\rangle$ If we (arbitrarily) assign the stationary state to $i=1$, we have $\lambda_{1}=1$ and $\chi_{1}=P_{s t}$, where $P_{s t}$ is the steady-state or equilibrium probability distribution. Thus, $P(n, s)=P_{s t}(n)+\sum_{i \neq 1} \phi_{i}\left(n_{0}\right) \lambda_{i}^{s} \chi_{i}(n)$ The spectral decomposition proves to be quite useful in the analysis of more complicated probability distributions, especially those that have sufficiently many states as to require computational analysis. Example: Consider a system which has three states with transition probabilities as illustrated in Figure 1.1. Notice that counterclockwise and clockwise transitions have differing probabilities, which allows this system to exhibit a net current or flux. Also, suppose that $p+q=1$ so that the system must switch states at every timestep. The transition probability matrix for this system is $\mathbf{Q}=\left(\begin{array}{lll} 0 & p & q \ q & 0 & p \ p & q & 0 \end{array}\right)$ To determine $P(s)$, we find the eigenvalues and eigenvectors of this matrix and use the spectral decomposition, Eq.(1.14). The secular equation is $\operatorname{Det}(\mathbf{Q}-\lambda \mathbf{I})=0$ and its roots are $\lambda_{1}=1, \quad \lambda_{\pm}=-\frac{1}{2} \pm \frac{1}{2} \sqrt{3(4 p q-1)}$ Notice that the nonequilibrium eigenvalues are complex unless $p=q=\frac{1}{2}$, which corresponds to the case of vanishing net flux. If there is a net flux, these complex eigenvalues introduce an oscillatory behavior to $P(s)$. In the special case $p=q=\frac{1}{2}$, the matrix $\mathbf{Q}$ is symmetric, so the left and right eigenvectors are identical, \begin{aligned} \chi_{1} &=\phi_{1}^{T}=\frac{1}{\sqrt{3}}(1,1,1) \ \chi_{2} &=\phi_{2}^{T}=\frac{1}{\sqrt{6}}(1,1,-2) \ \chi_{3} &=\phi_{3}^{T}=\frac{1}{\sqrt{2}}(1,-1,0) \end{aligned} where $T$ denotes transposition. Suppose the initial state is given as state 1, and we’re interested in the probability of being in state 3 at timestep $s$, $P_{1 \rightarrow 3}(s)$. According to the spectral decomposition formula Eq.(1.14), \begin{aligned} P_{1 \rightarrow 3}(s)=& \sum_{i} \phi_{i}(1) \lambda_{i}^{s} \chi_{i}(3) \ =&\left(\frac{1}{\sqrt{3}}\right)\left(1^{s}\right)\left(\frac{1}{\sqrt{3}}\right) \ &+\left(\frac{1}{\sqrt{6}}\right)(1)\left(-\frac{1}{2}\right)^{s}\left(\frac{1}{\sqrt{6}}\right)(-2) \ &+\left(\frac{1}{\sqrt{2}}\right)(1)\left(-\frac{1}{2}\right)^{s}\left(\frac{1}{\sqrt{2}}\right)(0) \ P_{1 \rightarrow 3}(s)=& \frac{1}{3}-\frac{1}{3}\left(-\frac{1}{2}\right)^{s} \end{aligned} Note that in the evaluation of each term, the first element of each right eigenvector $\phi$ and the third element of each left eigenvector $\chi$ was used, since we’re interested in the transition from state 1 to state 3. Figure $1.2$ is a plot of $P_{1 \rightarrow 3}(s)$; it shows that the probability oscillates about the equilibrium value of $\frac{1}{3}$, approaching the equilibrium value asymptotically.

Detailed Balance

Our last topic of consideration within the subject of Markov processes is the notion of detailed balance, which is probably already somewhat familiar from elementary kinetics.
Formally, a Markov process with transition probability matrix $\mathbf{Q}$ satisfies detailed balance if the following condition holds: $P_{\mathrm{st}}(n) Q_{n m}=P_{\mathrm{st}}(m) Q_{m n}$ And this steady state defines the equilibrium distribution: $P_{\mathrm{eq}}(n)=P_{\mathrm{st}}(n)=\lim _{t \rightarrow \infty} P(n, t)$ This relation generalizes the notion of detailed balance from simple kinetics that the rates of forward and backward processes at equilibrium should be equal: here, instead of considering only a reactant state and a product state, we require that all pairs of states be related by Eq.(1.16). Note also that this detailed balance condition is more general than merely requiring that $\mathbf{Q}$ be symmetric, as the simpler definition from elementary kinetics would imply. However, if a system obeys detailed balance, we can describe it using a symmetric matrix via the following transformation: let $V_{n m}=\frac{\sqrt{P_{\mathrm{st}}(n)}}{\sqrt{P_{\mathrm{st}}(m)}} Q_{n m}$ If we make the substitution $P(n, s)=\sqrt{P_{\mathrm{st}}(n)} \cdot \tilde{P}(n, s)$, some manipulation using equations (1.1) and (1.17) yields $\frac{d \tilde{P}(n, t)}{d t}=\sum_{m} \tilde{P}(m, t) V_{m n}$ The derivative $\frac{d \tilde{P}(n, s)}{d t}$ here is really the finite difference $\tilde{P}(n, s+1)-\tilde{P}(n, s)$ since we are considering discrete-time Markov processes, but we have introduced the derivative notation for comparison of this formula to later results for continuous-time systems. As we did for $\mathbf{Q}$, we can set up an eigensystem for $\mathbf{V}$, which yields a spectral decomposition similar to that of $\mathbf{Q}$ with the exception that the left and right eigenvectors $\psi$ of $\mathbf{V}$ are identical since $\mathbf{V}$ is symmetric; in other words, $\left\langle\psi_{i} \mid \psi_{j}\right\rangle=\delta_{i j}$. Furthermore, it can be shown that all eigenvalues not corresponding to the equilibrium state are either negative or zero; in particular, they are real. The eigenvectors of $\mathbf{V}$ are related to the left and right eigenvectors of $\mathbf{Q}$ by $\left|\phi_{i}\right\rangle=\frac{1}{\sqrt{P_{\mathrm{st}}}}\left|\psi_{i}\right\rangle \quad \text { and } \quad\left\langle\chi_{i}\right|=\sqrt{P_{\mathrm{st}}}\left\langle\psi_{i}\right|$ Example: Our final model Markovian system is a linear three-state chain (Figure 1.3) in which the system must pass through the middle state in order to get from either end of the chain to the other. Again we require that $p+q=1$. From this information, we can construct $\mathbf{Q}$, $\mathbf{Q}=\left(\begin{array}{ccc} q & p & 0 \ q & 0 & p \ 0 & q & p \end{array}\right)$ Notice how the difference between the three-site linear chain and the three-site ring of the previous example is manifest in the structure of $\mathbf{Q}$, particularly in the direction of the zero diagonal. This structural difference carries through to general $N$-site chains and rings. To determine the equilibrium probability distribution $P_{\text {st }}$ for this system, one could multiply $\mathbf{Q}$ by itself many times over and hope to find an analytic formula for $\lim _{s \rightarrow \infty} \mathbf{Q}^{s}$; however, a less tedious and more intuitive approach is to impose the stationarity condition $P_{\mathrm{st}}(n)=\sum_{m} P_{\mathrm{st}}(m) Q_{m n}$ directly, component by component.
For state 1, the stationarity condition reads $P_{\mathrm{st}}(1)=q P_{\mathrm{st}}(1)+q P_{\mathrm{st}}(2)$, which rearranges to $p P_{\mathrm{st}}(1)=q P_{\mathrm{st}}(2)$. Likewise, for state 3 it reads $P_{\mathrm{st}}(3)=p P_{\mathrm{st}}(2)+p P_{\mathrm{st}}(3)$, which rearranges to $q P_{\mathrm{st}}(3)=p P_{\mathrm{st}}(2)$; the condition for state 2 is then satisfied automatically. So the ratio $P_{\mathrm{st}}(1): P_{\mathrm{st}}(2): P_{\mathrm{st}}(3)$ in the equilibrium limit is $q^{2}: q p: p^{2}$. We merely have to normalize these probabilities by noting that $q^{2}+q p+p^{2}=(q+p)^{2}-q p=1-q p$. Thus, the equilibrium distribution is $P_{\mathrm{st}}=\frac{1}{1-q p}\left(q^{2}, q p, p^{2}\right)$ Plugging each pair of states into the detailed balance condition, we verify that this system satisfies detailed balance, and hence all of its eigenvalues are real, even though $\mathbf{Q}$ is not symmetric.
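These closed-form results are easy to confirm numerically. The sketch below is our addition (it assumes NumPy; the choice $p=0.7$ is arbitrary): it checks the stationary distribution and detailed balance for the linear chain, and reproduces the closed form $P_{1\to 3}(s)=\frac{1}{3}-\frac{1}{3}\left(-\frac{1}{2}\right)^{s}$ for the symmetric three-site ring of the earlier example:

```python
import numpy as np

p = 0.7          # arbitrary forward probability; q + p = 1
q = 1.0 - p

# Three-site linear chain
Q = np.array([[q, p, 0.0],
              [q, 0.0, p],
              [0.0, q, p]])

P_st = np.array([q**2, q*p, p**2]) / (1.0 - q*p)
assert np.allclose(P_st @ np.linalg.matrix_power(Q, 200), P_st)  # stationary

# Detailed balance: P_st(n) Q_nm = P_st(m) Q_mn for every pair of states
for n in range(3):
    for m in range(3):
        assert np.isclose(P_st[n] * Q[n, m], P_st[m] * Q[m, n])

# Symmetric three-site ring: P_{1->3}(s) = 1/3 - (1/3)(-1/2)^s
R = np.full((3, 3), 0.5) - 0.5 * np.eye(3)
for s in range(1, 8):
    assert np.isclose(np.linalg.matrix_power(R, s)[0, 2],
                      1/3 - (1/3) * (-0.5)**s)
print("Markov chain checks passed")
```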
Motivation and Derivation

The techniques developed in the basic theory of Markov processes are widely applicable, but there are of course many instances in which the discretization of time is either inconvenient or completely unphysical. In such instances, a master equation (more humbly referred to as a rate equation) may provide a continuous-time description of the system that is in keeping with all of our results about stochastic processes. To construct one, let each timestep have duration $\Delta t$, so that the Markov chain relation reads $P(n, t+\Delta t)=\sum_{m} P(m, t) Q_{m n}$. Subtracting $P(n, t)$ from both sides, dividing by $\Delta t$, and defining the matrix of transition rates $W_{m n}=\lim _{\Delta t \rightarrow 0}\left(Q_{m n}-\delta_{m n}\right) / \Delta t$, we obtain in the limit $\Delta t \rightarrow 0$ $\dot{P}(n, t)=\sum_{m} P(m, t) W_{m n}$ Eq. $(1.22)$ is a master equation. As the derivation suggests, $\mathbf{W}$ plays the role of a transition probability matrix in this formulation. You may notice that the master equation looks structurally very similar to rate equations in elementary kinetics; in fact, the master equation is a generalization of such rate equations, and the derivation above provides some formal justification for the rules we learn in kinetics for writing them down. The matrix $\mathbf{W}$ is analogous to the set of rate constants indicating the relative rates of reaction between species in the system, and the probabilities $P_{n}$ are analogous to the relative concentrations of these species. Example: Consider a random walk on a one-dimensional infinite lattice (see Figure 1.4). As indicated in the figure, the transition probability between a lattice point and either adjacent lattice point is $k$, and all other transition probabilities are zero (in other words, the system cannot "hop" over a lattice point without first occupying it). We can write down a master equation to describe the flow of probability among the lattice sites in a manner analogous to writing down a rate law. For any given site $n$ on the lattice, probability can flow into $n$ from either site $n-1$ or site $n+1$, and both of these occur at rate $k$; likewise, probability can flow out of state $n$ to either site $n+1$ or site $n-1$, both of which also happen at rate $k$. Hence, the master equation for all sites $n$ on the lattice is $\dot{P}_{n}=k\left(P_{n+1}+P_{n-1}-2 P_{n}\right)$ Now we define the average site of occupation as a sum over all sites, weighted by the probability of occupation at each site, $\bar{n}=\sum_{n=-\infty}^{\infty} n P_{n}(t)$ Then we can compute, for example, how this average site evolves with time, $\sum_{n=-\infty}^{\infty} n \dot{P}_{n}(t)=\dot{\bar{n}}=0$ Hence the average site of occupation does not change over time in this model, so if we choose the initial distribution to satisfy $\bar{n}=0$, then this will always be the average site of occupation. However, the mean square displacement $\overline{n^{2}}$ is not constant; in keeping with our physical interpretation of the model, the mean square displacement increases with time. In particular, $\sum_{n=-\infty}^{\infty} n^{2} \dot{P}_{n}(t)=\frac{d}{d t} \overline{n^{2}}=2 k$ If the initial probability distribution is a delta function on site $0$, $P_{n}(0)=\delta_{n 0}$, then it turns out that Fourier analysis provides a route towards a closed-form expression for the long-time limit of $P_{n}(t)$: \begin{aligned} P_{n}(t)=\frac{1}{2 \pi} \int_{0}^{2 \pi} e^{i n z} e^{-2 k(1-\cos z) t} d z \ \lim _{t \rightarrow \infty} P_{n}(t)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} e^{i n z} e^{-2 D\left(\frac{z^{2}}{2}\right) t} d z \end{aligned} $\lim _{t \rightarrow \infty} P_{n}(t)=\sqrt{\frac{1}{4 \pi D t}} e^{-\frac{n^{2}}{4 D t}}$ In the above manipulations, we have replaced $k$ with the diffusion constant $D$, the long-time limit of the rate constant (in this case, the two are identical).
Thus the probability distribution for occupying the various sites becomes Gaussian at long times.

Mean First Passage Time

One of the most useful quantities we can determine from the master equation for a random walk is the average time it takes for the random walk to reach a particular site $n_{s}$ for the first time. This quantity, called the mean first passage time, can be determined via the following trick: we place an absorbing boundary condition at $n_{s}$, $P_{n_{s}}(t)=0$. Whenever the walk reaches site $n_{s}$, it stays there for all later times. One then calculates the survival probability $S(t)$, that is, the probability that the walker has not yet visited $n_{s}$ at time $t$, $S(t)=\sum_{n \neq n_{s}} P_{n}(t)$ The mean first passage time $\langle t\rangle$ then corresponds to the time-integrated survival probability, $\langle t\rangle=\int_{0}^{\infty} S(t) d t$ Sometimes it is more convenient to write the mean first passage time in terms of the probability density of reaching site $n_{s}$ at time $t$. This quantity is denoted by $f(t)$ and satisfies $f(t)=-\frac{d S(t)}{d t}=\sum_{n \neq n_{s}} P_{n}(t) W_{n n_{s}}$ In terms of $f(t)$, the mean first passage time is given by $\langle t\rangle=\int_{0}^{\infty} t f(t) d t$ The mean first passage time is a quantity of interest in a number of current research applications. Rates of fluorescence quenching, electron transfer, and exciton quenching can all be formulated in terms of the mean first passage time of a stochastic process. Example: Let’s calculate the mean first passage time of the three-site model introduced in Figure $1.1$, with all transition rates having the same value $k$. Suppose the system is prepared in state 1, and we’re interested in knowing the mean first passage time for site 3. Applying the absorbing boundary condition at site 3, we derive the following master equations: $\left\{\begin{array}{l} \dot{P}_{1}=-2 k P_{1}+k P_{2} \ \dot{P}_{2}=k P_{1}-2 k P_{2} \ \dot{P}_{3}=k P_{1}+k P_{2} \end{array}\right.$ The transition matrix $\mathbf{W}$ corresponding to this system would have a zero column since $P_{3}$ does not occur on the right hand side of any of these equations; hence the sink leads to a zero eigenvalue that we can ignore. The relevant submatrix $\mathbf{W}_{1,2}=\left(\begin{array}{cc} -2 k & k \ k & -2 k \end{array}\right)$ has eigenvalues $\lambda_{1}=-k, \lambda_{2}=-3 k$. Using the spectral decomposition formula, we find that the survival probability is $S(t)=\sum_{i, n}\left\langle 1 \mid \psi_{i}\right\rangle e^{\lambda_{i} t}\left\langle\psi_{i} \mid n\right\rangle=e^{-k t}$ Hence, the previously defined probability density $f(t)$ is given by $f(t)=k e^{-k t}$, and the mean first passage time for site 3 is $\langle t\rangle=\frac{1}{k}$
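A numerical sketch of this last example (our addition; it assumes NumPy, and the rate $k=2$ is an arbitrary choice): diagonalizing the $2\times 2$ rate submatrix lets us propagate the populations of sites 1 and 2 exactly, after which the survival probability indeed decays as $e^{-kt}$ and its time integral gives $\langle t\rangle \approx 1/k$:

```python
import numpy as np

k = 2.0  # arbitrary rate constant
# Rate submatrix for sites 1 and 2; site 3 is an absorbing sink
W = np.array([[-2*k,  k],
              [ k,  -2*k]])

evals, V = np.linalg.eigh(W)     # eigenvalues -3k and -k, as in the text
P0 = np.array([1.0, 0.0])        # system prepared in state 1

ts = np.linspace(0.0, 20.0 / k, 20001)
# Propagate P(t) = V exp(L t) V^T P0 for all times at once
Pt = V @ (np.exp(np.outer(evals, ts)) * (V.T @ P0)[:, None])
S = Pt.sum(axis=0)               # survival probability S(t) = P1 + P2

assert np.allclose(S, np.exp(-k * ts))               # S(t) = e^{-kt}
mfpt = np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(ts))  # trapezoid rule
print(mfpt, 1.0 / k)             # both approximately 0.5
```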
Fokker-Planck Equations and Diffusion We have already generalized the equations governing Markov processes to account for systems that evolve continuously in time, which resulted in the master equations. In this section, we adapt these equations further so that they may be suitable for the description of systems with a continuum of states, rather than a discrete, countable number of states. Motivation and Derivation Consider once again the infinite one-dimensional lattice, with lattice spacing $\Delta x$ and timestep size $\Delta t$. In the previous section, we wrote down the master equation (discrete sites, continuous time) for this system, but here we will begin with the Markov chain expression (discrete sites, discrete time) for the system, $P(n, s+1)=\frac{1}{2}(P(n+1, s)+P(n-1, s))$ In terms of $\Delta x$ and $\Delta t$, this equation is $P(x, t+\Delta t)=\frac{1}{2}[P(x+\Delta x, t)+P(x-\Delta x, t)]$ Rearranging the previous equation as a finite difference, as in $\frac{P(x, t+\Delta t)-P(x, t)}{\Delta t}=\frac{(\Delta x)^{2}}{2 \Delta t} \cdot \frac{P(x+\Delta x, t)+P(x-\Delta x, t)-2 P(x, t)}{(\Delta x)^{2}}$ and taking the limits $\Delta x \rightarrow 0, \Delta t \rightarrow 0$ with the ratio $D=\frac{(\Delta x)^{2}}{2 \Delta t}$ held fixed, we arrive at the following differential equation: $\frac{\partial}{\partial t} P(x, t)=D \frac{\partial^{2}}{\partial x^{2}} P(x, t)$ This differential equation is called a diffusion equation with diffusion constant $D$, and it is a special case of the Fokker-Planck equation, which we will introduce shortly. The most straightforward route to the solution of the diffusion equation is via spatial Fourier transformation, $\tilde{P}(k, t)=\int_{-\infty}^{\infty} P(x, t) e^{i k x} d x$ In Fourier space, the diffusion equation reads $\frac{\partial}{\partial t} \tilde{P}(k, t)=-D k^{2} \tilde{P}(k, t)$ and its solution is $\tilde{P}(k, t)=\tilde{P}(k, 0) e^{-D k^{2} t}$ If we take a delta function $P(x, 0)=\delta\left(x-x_{0}\right)$ centered at $x_{0}$ as the initial condition, the solution in $x$-space is $P(x, t)=\frac{1}{\sqrt{4 \pi D t}} e^{-\frac{\left(x-x_{0}\right)^{2}}{4 D t}}$ Thus the probability distribution is a Gaussian in $x$ that spreads with time. Notice that this solution is essentially the same as the long-time solution to the spatially discretized version of the problem presented in the previous example. We are now in a position to consider a generalization of the diffusion equation known as the Fokker-Planck equation. In addition to the diffusion term $D \frac{\partial^{2}}{\partial x^{2}}$, we introduce a term linear in the first derivative with respect to $x$, which accounts for drift of the center of the Gaussian distribution over time. Consider a diffusion process on a three-dimensional potential energy surface $U(\mathbf{r})$. Conservation of probability requires that $\dot{P}(\mathbf{r}, t)=-\nabla \cdot \mathbf{J}$ where $\mathbf{J}$ is the probability current, $\mathbf{J}=-D \nabla P+\mathbf{J}_{U}$, and $\mathbf{J}_{U}$ is the current due to the potential $U(\mathbf{r})$. At equilibrium, we know that the probability current $\mathbf{J}=0$ and that the probability distribution should be Boltzmann-weighted according to energy, $P_{\mathrm{eq}}(\mathbf{r}) \propto e^{-\beta U(\mathbf{r})}$.
Therefore, at equilibrium, $-D \beta \nabla U(\mathbf{r}) P_{\mathrm{eq}}(\mathbf{r})+\mathbf{J}_{U}=0$ Solving Eq.(1.36) for $\mathbf{J}_{U}$ and plugging the result into Eq.(1.35) yields the Fokker-Planck equation, $\dot{P}(\mathbf{r}, t)=D \nabla \cdot[\nabla P(\mathbf{r}, t)+\beta \nabla U(\mathbf{r}) P(\mathbf{r}, t)]$ Properties of Fokker-Planck Equations Let’s return to one dimension to discuss some salient features of the Fokker-Planck equation. • First, the Fokker-Planck equation gives the expected results in the long-time limit: $\lim _{t \rightarrow \infty} P=P_{\text {eq }} \text { with } \dot{P}=0$ • Also, if we define the average position $\bar{x}=\int_{-\infty}^{\infty} x P(x) d x$, then the differential form of the Fokker-Planck equation can be used to verify that $\dot{\bar{x}}=D \beta\, \overline{\left(-\frac{\partial U}{\partial x}\right)}$ Since this averaged quantity is just the average force $\bar{F}$, Eq. $(1.39)$ can be combined with the Einstein relation $D \beta \zeta=1$ (see section 1.4) to justify that $\zeta \bar{v}=\bar{F}$; the meaning and significance of this equation, including the definition of $\zeta$, will be discussed in section 1.4. • The Fokker-Planck equation is linear in the first and second derivatives of $P$ with respect to $x$; it turns out that any spatial operator that is a linear combination of $\frac{\partial}{\partial x}, x \frac{\partial}{\partial x}$, and $\frac{\partial^{2}}{\partial x^{2}}$ will define a Gaussian process when used to describe the time evolution of a probability density. Thus, both the diffusion equation and the more general Fokker-Planck equation will always describe a Gaussian process. • One final observation about the Fokker-Planck equation is that it is only analytically solvable in a small number of special cases. This situation is exacerbated by the fact that it is not of Hermitian (self-adjoint) form. However, we can introduce the change of variable $P=e^{-\frac{\beta U}{2}} \Phi$; in terms of $\Phi$, the Fokker-Planck equation is Hermitian, $\frac{\partial \Phi}{\partial t}=D\left[\nabla^{2} \Phi-U_{\mathrm{eff}} \Phi\right]$ where $U_{\text {eff }}=\frac{(\beta \nabla U)^{2}}{4}-\frac{\beta \nabla^{2} U}{2}$. This transformed Fokker-Planck equation now bears the same functional form as the time-dependent Schrödinger equation, so all of the techniques associated with its solution can likewise be applied to Eq.(1.40). Example: One of the simplest, yet most useful, applications of the Fokker-Planck equation is the description of the diffusive harmonic oscillator, which can be treated analytically. Here we solve the Fokker-Planck equation for the one-dimensional diffusive oscillator with frequency $\omega$. The differential equation is $\frac{\partial P}{\partial t}=D \frac{\partial^{2}}{\partial x^{2}} P+\gamma \frac{\partial}{\partial x}(x P)$ where $\gamma=m \omega^{2} D \beta$. We can solve this equation in two steps: first, solve for the average position using Eq. $(1.39)$, $\dot{\bar{x}}=-\gamma \bar{x}$ Given the usual delta function initial condition $P(x, 0)=\delta\left(x-x_{0}\right)$, the average position is given by $\bar{x}(t)=x_{0} e^{-\gamma t}$ Thus, memory of the initial conditions decays exponentially for the diffusive oscillator.
Then, since the Fokker-Planck equation is linear in $P$ and bilinear in $x$ and $\frac{\partial}{\partial x}$, the full solution must take the form of a Gaussian, so we can write $P\left(x_{0}, x, t\right)=\frac{1}{\sqrt{2 \pi \alpha(t)}} \exp \left[-\frac{(x-\bar{x}(t))^{2}}{2 \alpha(t)}\right]$ where $\bar{x}(t)$ is the time-dependent mean position and $\alpha(t)$ is the time-dependent variance of the distribution. But we’ve already found $\bar{x}(t)$, so we can substitute it into the solution, $P\left(x_{0}, x, t\right)=\frac{1}{\sqrt{2 \pi \alpha(t)}} \exp \left[-\frac{\left(x-x_{0} e^{-\gamma t}\right)^{2}}{2 \alpha(t)}\right]$ Finally, from knowledge that the equilibrium distribution must satisfy the stationary condition $P_{\mathrm{eq}}(x)=\int_{-\infty}^{\infty} P\left(x_{0}, x, t\right) P_{\mathrm{eq}}\left(x_{0}\right) d x_{0}$ we can determine that $\alpha(t)=\frac{1-e^{-2 \gamma t}}{m \omega^{2} \beta}$ Thus the motion of the diffusive oscillator is fully described. The long and short-time limits of $P\left(x_{0}, x, t\right)$ are both of interest to us. At short times, $\lim _{t \rightarrow 0} P\left(x_{0}, x, t\right)=\sqrt{\frac{1}{4 \pi D t}} \exp \left[-\frac{\left(x-x_{0}\right)^{2}}{4 D t}\right]$ and the evolution of the probability looks like that of a random walk. In the long-time limit, on the other hand, we find the equilibrium probability distribution $\lim _{t \rightarrow \infty} P\left(x_{0}, x, t\right)=\sqrt{\frac{m \omega^{2} \beta}{2 \pi}} \exp \left[-\frac{1}{2} m \omega^{2} \beta x^{2}\right]$ which is Gaussian with no mean displacement and with variance determined by a thermal parameter and a parameter describing the shape of the potential. A Gaussian, Markovian process that exhibits exponential memory decay, such as this diffusive oscillator, is called an Ornstein-Uhlenbeck process.
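These closed-form results can be checked against a direct stochastic simulation. The sketch below (in Python, with assumed parameter values and our own variable names) integrates the equivalent overdamped Langevin dynamics $dx=-\gamma x\, dt+\sqrt{2 D}\, dW$ with a simple Euler-Maruyama scheme and compares the sampled mean and variance of $x$ against $\bar{x}(t)=x_{0} e^{-\gamma t}$ and $\alpha(t)$:

```python
# Ornstein-Uhlenbeck simulation of the diffusive oscillator (assumed values).
import numpy as np

rng = np.random.default_rng(0)
m, omega, beta, D = 1.0, 1.0, 1.0, 0.5
gamma = m * omega**2 * D * beta            # drift coefficient γ = mω²Dβ
x0, dt, nstep, ntraj = 2.0, 1e-3, 2000, 50000

x = np.full(ntraj, x0)
for _ in range(nstep):
    # Euler-Maruyama step for dx = -γ x dt + sqrt(2D) dW
    x += -gamma * x * dt + np.sqrt(2 * D * dt) * rng.standard_normal(ntraj)

t = nstep * dt
print(x.mean(), x0 * np.exp(-gamma * t))                              # x̄(t)
print(x.var(), (1 - np.exp(-2 * gamma * t)) / (m * omega**2 * beta))  # α(t)
```

Each printed pair should agree to within sampling error of a few parts in a thousand; as $t \rightarrow \infty$ the variance saturates at $1/m \omega^{2} \beta$, the equilibrium value found above.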
Our focus in this chapter has been on the description of purely stochastic processes. However, a variety of interesting and important phenomena are subject to combinations of deterministic and stochastic processes. We concern ourselves now with a particular class of such phenomena which are described by Langevin equations. In its simplest form, a Langevin equation is an equation of motion for a system that experiences a particular type of random force. The archetypal system governed by a Langevin equation is a Brownian particle, that is, a particle undergoing Brownian motion. (For a brief description of the nature and discovery of Brownian motion, see the Appendix). The Langevin equation for a Brownian particle in a one-dimensional fluid bath is $m \dot{v}(t)+\zeta v(t)=f(t)$ where $v(t)=\dot{x}(t)$ is the velocity of the Brownian particle, $\zeta$ is a coefficient describing friction between the particle and the bath, $m$ is the mass of the Brownian particle, and $f(t)$ is a random force. Though it is random, we can make a couple of useful assumptions about $f(t)$ : • The random force is equally likely to push in one direction as it is in the other, so the average over all realizations of the force is zero, $\langle f(t)\rangle_{f}=0$ • The random force exhibits no time correlation but has a characteristic strength factor $g$ that does not change over time, $\left\langle f\left(t_{1}\right) f\left(t_{2}\right)\right\rangle_{f}=g \delta\left(t_{1}-t_{2}\right)$ Random forces that obey these assumptions are called white noise, or more precisely, Gaussian white noise. In this case, all odd moments of $f$ will vanish, and all even moments can be expressed in terms of two-time correlation functions: for example, the fourth moment is given by \begin{aligned} \left\langle f\left(t_{1}\right) f\left(t_{2}\right) f\left(t_{3}\right) f\left(t_{4}\right)\right\rangle_{f} &=\left\langle f\left(t_{1}\right) f\left(t_{2}\right)\right\rangle_{f}\left\langle f\left(t_{3}\right) f\left(t_{4}\right)\right\rangle_{f} \\ &+\left\langle f\left(t_{1}\right) f\left(t_{3}\right)\right\rangle_{f}\left\langle f\left(t_{2}\right) f\left(t_{4}\right)\right\rangle_{f} \\ &+\left\langle f\left(t_{1}\right) f\left(t_{4}\right)\right\rangle_{f}\left\langle f\left(t_{2}\right) f\left(t_{3}\right)\right\rangle_{f} \end{aligned} In general, complex systems may exhibit time-dependent strength factors $g(t)$, but we will work with the more mathematically tractable white noise assumption for the random force. The formal solution to the Langevin equation Eq.(1.41) is $v(t)=v(0) e^{-\frac{\zeta}{m} t}+\frac{1}{m} \int_{0}^{t} e^{-\frac{\zeta}{m}(t-\tau)} f(\tau) d \tau$ In computing the average velocity under the white noise assumption, the second term of Eq.(1.42) vanishes thanks to the condition $\langle f(t)\rangle_{f}=0$. So the average velocity is simply $\langle v(t)\rangle_{f}=v(0) e^{-\frac{\zeta}{m} t}$ Of special interest is the velocity-velocity correlation function $C\left(t_{1}-t_{2}\right)=\left\langle v\left(t_{1}\right) v\left(t_{2}\right)\right\rangle_{f}$ which can also be computed from Eq.(1.42).
Invoking the white noise condition for $\left\langle f\left(t_{1}\right) f\left(t_{2}\right)\right\rangle_{f}$, we find that (for $t_{2} \geq t_{1}$) $\left\langle v\left(t_{1}\right) v\left(t_{2}\right)\right\rangle_{f}=\left(v(0)^{2}-\frac{g}{2 m \zeta}\right) e^{-\frac{\zeta}{m}\left(t_{1}+t_{2}\right)}+\frac{g}{2 m \zeta} e^{-\frac{\zeta}{m}\left(t_{2}-t_{1}\right)}$ So far, we have only performed an average over realizations of the random force, denoted by $\langle\ldots\rangle_{f}$; to proceed, we may also take a thermal average $\langle\ldots\rangle_{\beta}$, that is, the average over realizations of different initial velocities at inverse temperature $\beta$. Equipartition tells us that $\left\langle v_{0}^{2}\right\rangle_{\beta}=\frac{1}{m \beta}$; if we use Eq.(1.45) to write down an expression for $\left\langle\left\langle v\left(t_{1}\right) v\left(t_{2}\right)\right\rangle_{f}\right\rangle_{\beta}$ and apply equipartition, we arrive at the conclusion that $g=\frac{2 \zeta}{\beta}$ which is a manifestation of the fluctuation-dissipation theorem (the fluctuations in the random force, described by $g$, are proportional to the dissipation of energy via friction, described by $\zeta$ ). The properties of the velocity variable $v$ enumerated above imply that the distribution of velocities is Gaussian with exponential memory decay, like the diffusive oscillator in section 1.3, and so we can also think of this type of Brownian motion as an Ornstein-Uhlenbeck process. In particular, the probability distribution for the velocity is $P\left(v_{0}, v, t\right)=\sqrt{\frac{m \beta}{2 \pi\left(1-e^{-2 \gamma t}\right)}} \exp \left[-\frac{m \beta\left(v-v_{0} e^{-\gamma t}\right)^{2}}{2\left(1-e^{-2 \gamma t}\right)}\right]$ We now have a thorough description of the Brownian particle’s velocity, but what about the particle’s diffusion? We’d like to know how far away the Brownian particle can be expected to be found from its initial position as time passes. To proceed, we calculate the mean square displacement of the particle from its initial position, \begin{aligned} R^{2}(t) &=\left\langle(x(t)-x(0))^{2}\right\rangle \\ &=\int_{0}^{t} \int_{0}^{t}\left\langle v\left(\tau_{1}\right) v\left(\tau_{2}\right)\right\rangle d \tau_{2} d \tau_{1} \\ &=2 \int_{0}^{t}(t-\tau) C(\tau) d \tau \end{aligned} At long times, the mean square displacement behaves as $R^{2}(t)=2 t \int_{0}^{\infty} C(t) d t$ This linear scaling with time is the experimentally observed behavior of Brownian particles, where the proportionality constant is called the diffusion constant $D$; hence, we have found an expression for the macroscopic diffusion constant $D$ in terms of the correlation function, $D=\int_{0}^{\infty} C(t) d t$ Eq. (1.52) is known as the Green-Kubo relation, and it implies that the mean square displacement at long times is simply $\lim _{t \rightarrow \infty} R^{2}(t)=2 D t$ This result for the mean square displacement also scales linearly with the dimensionality of the system (i.e. in three dimensions, $R^{2}(t)=6 D t$ ). To determine the behavior of $R^{2}(t)$ at short times, note that $v(t) \approx v(0)$ for short times, so that $R^{2}(t)=\left\langle\left(\int_{0}^{t} v\left(t^{\prime}\right) d t^{\prime}\right)^{2}\right\rangle \approx\left\langle v_{0}^{2}\right\rangle t^{2}$. Therefore, the short-time limit of the mean square displacement is $\lim _{t \rightarrow 0} R^{2}(t)=\frac{1}{m \beta} t^{2}$ For times in between these extremes, the formal solution to the Langevin equation for the velocity would have to be integrated.
This can be done; sparing the details, the result after thermal averaging is $R^{2}(t)=\frac{2}{\beta \zeta}\left[t-\frac{1}{\gamma}\left(1-e^{-\gamma t}\right)\right]$ where $\gamma=\frac{\zeta}{m}$. As a final note, the Langevin equation as presented in this section is often modified to describe more complex systems. The most common modifications to the Langevin equation are: • The replacement of the friction coefficient $\zeta$ with a memory kernel $\gamma(t)$ that allows the system to have some memory of previous interactions. • The addition of a deterministic mean force $F=-\nabla U$, which permits the system to respond to forces beyond those due to interactions with the bath. Such modified Langevin equations, also known as Generalized Langevin equations or GLEs, will be explored in further detail in Chapter 4. The Langevin equation and its generalized counterparts provide the basis for a number of successful models of stochastic processes in chemical physics. [3]
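As a check on this interpolation formula and its two limits, the sketch below (in Python, with assumed parameters and the fluctuation-dissipation value $g=2 \zeta / \beta$ derived above) integrates the Langevin equation with an Euler-Maruyama scheme and compares the sampled mean square displacement with $R^{2}(t)$:

```python
# Langevin dynamics check of R²(t) (assumed parameters m = ζ = β = 1).
import numpy as np

rng = np.random.default_rng(1)
m, zeta, beta = 1.0, 1.0, 1.0
g = 2 * zeta / beta                      # fluctuation-dissipation: g = 2ζ/β
gamma = zeta / m
dt, nstep, ntraj = 1e-3, 20000, 2000

v = rng.standard_normal(ntraj) / np.sqrt(m * beta)  # thermal initial velocities
x = np.zeros(ntraj)
R2 = np.empty(nstep)
for i in range(nstep):
    # m dv = -ζ v dt + f dt, with <f(t)f(t')> = g δ(t-t')
    v += -gamma * v * dt + np.sqrt(g * dt) / m * rng.standard_normal(ntraj)
    x += v * dt
    R2[i] = np.mean(x**2)

t = dt * np.arange(1, nstep + 1)
R2_exact = (2 / (beta * zeta)) * (t - (1 - np.exp(-gamma * t)) / gamma)
print(np.max(np.abs(R2 / R2_exact - 1)))   # a few percent: pure sampling noise
```

At early times the measured $R^{2}(t)$ grows as $t^{2}/m\beta$ and at late times as $2 D t$ with $D=1/\beta\zeta$, reproducing the ballistic-to-diffusive crossover.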
Brownian motion is one of the simplest physical examples of a system whose description necessitates a nonequilibrium statistical description. As such, it is the token example that unifies all of the topics in this course, from Markov processes (Ch. 1) and response functions (Ch. 2) to diffusion constants (Ch. 3) and generalized Langevin equations (Ch. 4). In this appendix, the salient features of Brownian motion and the key results about Brownian motion that will be developed during the course are exposited together as a handy reference. Some basic properties of relevant integral transformations are also included in this Appendix. The discovery of Brownian motion predates the development of statistical mechanics and provided important insight to physicists of the early twentieth century in their first formulations of an atomic description of matter. A fine example of the importance of keeping an eye open for the unexpected in experimental science, Brownian motion was discovered somewhat serendipitously in 1828 by botanist Robert Brown while he was studying pollen under a microscope. Though many others before him had observed the jittery, random motion of fine particles in a fluid, Brown was the first to catalogue his observations [4] and use them to test hypotheses about the nature of the motion. Interest in the phenomenon was revived in 1905 by Albert Einstein, who successfully related observations about Brownian motion to underlying atomic properties. Einstein’s work on Brownian motion [5] is perhaps the least well known of the four paradigm-shifting papers he published in his "Miracle Year" of 1905, which goes to show just how extraordinary his early accomplishments were (the other three papers described the photoelectric effect, special relativity, and mass-energy equivalence)! Einstein determined that the diffusion of a Brownian particle in a fluid is proportional to the system temperature and inversely related to a coefficient of friction $\zeta$ characteristic of the fluid, $D=\frac{1}{\beta \zeta}$ Any physical description of Brownian motion will boil down to an equation of motion for the Brownian particle. The simplest way, conceptually, to model the system is to perform Newtonian dynamics on the Brownian particle and $N$ particles comprising the fluid, with random initial conditions (positions and velocities) for the fluid particles. By performing such calculations for all possible initial configurations of the fluid and averaging the results, we can obtain the correct picture of the stochastic dynamics. This procedure, however, is impossibly time-consuming in practice, and so a number of statistical techniques, such as Monte Carlo simulation, have been developed to make such calculations more practical. Alternatively, we can gain qualitative insight into Brownian dynamics by mean-field methods; that is, instead of treating each particle in the fluid explicitly, we can devise a means to describe their average influence on the Brownian particle, circumventing the tedium of tracking each particle’s trajectory independently. This approach gives rise to the Langevin equation of section 1.4, under the assumption that the fluid exerts a random force $f(t)$ on the Brownian particle that obeys the conditions of Gaussian white noise. 
For instantaneous (gas-phase) collisions of the fluid and Brownian particle, a Langevin equation with constant frictional coefficient $\zeta$ suffices, $m \dot{v}(t)+\zeta v(t)=f(t)$ However, if fluid-particle collisions are correlated, which is the case for any condensed-phase system, this correlation must be taken into account by imbuing the Brownian particle with memory of its previous interactions, embodied by a memory kernel $\gamma$, $m \dot{v}(t)+m \int_{0}^{t} \gamma(t-\tau) v(\tau) d \tau=f(t)$ where $\gamma(t) \rightarrow(\zeta / m)\, \delta(t)$ in the limit of uncorrelated collisions. We now present some of the key features of Brownian motion. Some of these results are derived in section 1.4; others are presented here for reference. Please consult the references at the end of this chapter for further details about the derivation of these properties. • Fick’s Law: The spreading of the Brownian particle’s spatial probability distribution over time is governed by Fick’s Law, $\frac{\partial}{\partial t} P(\mathbf{r}, t)=D \nabla^{2} P(\mathbf{r}, t)$ • Green-Kubo relation: The diffusion constant $D$ is tied to the particle’s velocity-velocity correlation function $C(t)$ by the Green-Kubo relation, $D=\int_{0}^{\infty} C(t) d t$ This essentially means that the diffusion constant is the area under the velocity-velocity correlation curve across all times $t>0$. • Solution of the Langevin Equation: All of the information we require from the Langevin equation is contained in the correlation function. Multiplication of the Langevin equation for $v\left(t_{1}\right)$ by the velocity $v\left(t_{2}\right)$ yields a differential equation for the correlation function, $\dot{C}+\int_{0}^{t} \gamma(t-\tau) C(\tau) d \tau=0$ The Laplace transform of this equation, $s \hat{C}(s)-C(0)+\hat{\gamma}(s) \hat{C}(s)=0$ has as its solution $\hat{C}(s)=\frac{C(0)}{s+\hat{\gamma}(s)}$ where $C(0)$ is the non-transformed velocity-velocity correlation function at $t=0$ and $s$ is the Laplace variable. • Einstein relation: The solution to the Langevin equation tells us that $\hat{C}(0)=\frac{C(0)}{\hat{\gamma}(0)}$ Additionally, a comparison of the Green-Kubo relation to the formula for the Laplace transform indicates that $\hat{C}(0)=D$. Finally, we can conclude from the equipartition theorem that $C(0)=\frac{1}{m \beta}$. Combining this information together, we arrive at Einstein’s relation, $D=\frac{1}{m \beta \hat{\gamma}(0)}$ In Chapter 4, the behavior of the velocity-velocity correlation function is explored for the cases in which the fluid is a bath of harmonic oscillators, a simple liquid, and an elastic solid. Their general functional forms are summarized here; further details can be found in Chapter 4. • Harmonic oscillators: $C(t)$ is periodic with amplitude $C(0)$ and frequency $\Omega_{0}$ (the Einstein frequency), where $\Omega_{0}^{2}=\gamma(0)$. • Liquids: $C(t)$ exhibits a few oscillations while decaying, eventually leveling out to zero. • Solids: Like a liquid, $C(t)$ will be damped, but like the harmonic oscillator model, the periodic structure of the solid will prevent $C(t)$ from decaying to zero; some oscillation at the Einstein frequency will continue indefinitely. Finally, we summarize the response of a Brownian particle to an external force $F$.
The modified Langevin equation for this situation is $\dot{v}(t)+\gamma v(t)=\frac{f(t)}{m}+\frac{F(t)}{m}$ In general, this Langevin equation is difficult to work with, but many forces of interest (such as EM fields) are oscillatory, so we assume an oscillatory form for the external force, $F(t)=F_{\omega} e^{-i \omega t}$ Then we can use the techniques developed in Chapter 2 to determine that the velocity in Fourier space is given by $\tilde{v}(\omega)=\chi(\omega) \tilde{F}(\omega)$ Finally, from this information it can be determined that the response function $K(t)$ is (see Chapter 2) $K(t)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \frac{e^{-i \omega t}}{-i \omega+\gamma} d \omega=e^{-\gamma t} \theta(t)$ These formulas are the basis for the Debye theory of dipole reorganization in a solvent, in the case where $F$ corresponds to the force due to the electric field $E(\omega)$ generated by the oscillating dipoles. Integral Transformations: We conclude with a summary of the Laplace and Fourier transforms, which are used regularly in this course and in chemical physics generally to solve and analyze differential equations. 1. Laplace transform: The Laplace transform of an arbitrary function $f(t)$ is $\hat{f}(s)=\int_{0}^{\infty} e^{-s t} f(t) d t$ Both the Laplace and Fourier transforms convert certain types of differential equations into algebraic equations, hence their utility in solving differential equations. Consequently, it is often useful to have expressions for the transforms of the first and second derivatives of $f(t)$ on hand: \begin{aligned} \widehat{f^{(1)}}(s) &=s \hat{f}(s)-f(0) \\ \widehat{f^{(2)}}(s) &=s^{2} \hat{f}(s)-s f(0)-f^{(1)}(0) \end{aligned} A convolution of two functions $F(t)=\int_{0}^{t} f(\tau) g(t-\tau) d \tau$ is also simplified by Laplace transformation; in Laplace space, it is just a simple product, $\hat{F}(s)=\hat{f}(s) \hat{g}(s)$ 2. Fourier transform: The Fourier transform of an arbitrary function $f(t)$ is $\tilde{f}(\omega)=\int_{-\infty}^{\infty} e^{i \omega t} f(t) d t$ The transforms of derivatives are even simpler in structure than their Laplace counterparts: \begin{aligned} &\widetilde{f^{(1)}}(\omega)=-i \omega \tilde{f}(\omega) \\ &\widetilde{f^{(2)}}(\omega)=-\omega^{2} \tilde{f}(\omega) \end{aligned} For an even function $f(t)$, the relationship between the Fourier and Laplace transforms can be determined by taking a Laplace transform of $f$ at $s=-i \omega$, from which we discover that $\tilde{f}(\omega)=2 \operatorname{Re} \hat{f}(-i \omega)$ References [1] The American Chemical Society. Undergraduate Professional Education in Chemistry: ACS Guidelines and Evaluation Procedures for Bachelor’s Degree Programs. ACS Committee on Professional Training, Spring 2008. [2] S. R. De Groot and P. Mazur. Non-Equilibrium Thermodynamics. New York: Dover, 1984. [3] N. G. van Kampen. Stochastic Processes in Physics and Chemistry. North Holland, 2007. [4] Robert Brown. A brief account of microscopical observations made in the months of June, July and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Philosophical Magazine, 4:161-173, 1828. [5] Albert Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik, 17:549-560, 1905.
• 2.1: Response, Relaxation, and Correlation At the beginning of the 21st century, the thermodynamics of systems far from equilibrium remains poorly understood. However, it turns out that many nonequilibrium phenomena can be described rather well in terms of equilibrium fluctuations; this is especially true of systems near equilibrium. • 2.2: Onsager Regression Theory At first glance, the relaxation of macroscopic non-equilibrium disturbances in a system might seem completely unrelated to the regression of microscopic fluctuations in the corresponding equilibrium system. However, they are intimately related by so-called fluctuation-dissipation theorems. The existence of this link between microscopic fluctuations and macroscopic relaxation was conjectured by Lars Onsager in 1931, some twenty years before it was finally proven to be true. • 2.3: Linear Response Theory and Causality The concept of linear response was introduced in section 2.1. Here, we explore further how the linear response of a system is quantified by considering the important relations regularly invoked by practitioners of linear response theory. 02: Non-equilibrium Thermodynamics At the beginning of the 21st century, the thermodynamics of systems far from equilibrium remains poorly understood. However, it turns out that many nonequilibrium phenomena can be described rather well in terms of equilibrium fluctuations; this is especially true of systems near equilibrium $[1,2]$. By designating a system as "near equilibrium", we mean that the system is perturbed from its equilibrium state by some time-dependent external force $f(t)$. The external force is deterministic, not random; typical examples include mechanical forces and forces due to an applied electric or magnetic field. This force drives the expectation values of some of the system’s observables away from their equilibrium values. For example, a typical observable $A$ affected by the external force might be the system’s velocity or its magnetic moment. If the response of the observable $A$ to the external force $f(t)$ satisfies the linearity property $\delta A(\lambda f(t), t)=\lambda \delta A(f(t), t)$ where $\delta A=A-\langle A\rangle_{e q}$ and $\lambda$ describes the strength of the force, then we call the time-dependent behavior of $A$ the linear response of $A$ to the external force $f(t)$. The linearity property Eq.(2.1) implies that the shape of the response curve $A$ vs. $t$ is independent of the value of $\lambda$ in the case of linear response. After achieving a short-lived nonequilibrium steady state (between $t_{2}$ and $t_{3}$ in Figure 2.1), the system is allowed to relax back to equilibrium. This process is also known as regression. Linear response and regression of a system driven from equilibrium are both described in terms of the time correlation function of the observable $A$, and so we turn first to the definition and properties of the time correlation function $[3,4]$. The time correlation function $C_{A A}\left(t, t^{\prime}\right)$ of the observable $A$ is defined by $C_{A A}\left(t, t^{\prime}\right)=\left\langle A(t) A\left(t^{\prime}\right)\right\rangle=\frac{\operatorname{Tr}\left[A(t) A\left(t^{\prime}\right) \rho_{e q}\right]}{\operatorname{Tr}\left[\rho_{e q}\right]}$ Here, $\rho_{e q}$ denotes the equilibrium density matrix of the system; hence the average denoted by $\langle\rangle$ is the ensemble average. 
This function describes how the value of $A$ at time $t$ is correlated to its value at time $t^{\prime}$; it is sometimes referred to as the autocorrelation function of $A$ to distinguish it from correlation functions between $A$ and other observables. For a system which is time-translational invariant, we often choose for convenience to set $t^{\prime}=0$ and to drop the subscript on $C_{A A}$, so that the time correlation function becomes simply $C(t)=\langle A(t) A(0)\rangle$ The correlation function may in general take on complex values. This result is in keeping with our phenomenological understanding of quantum mechanics in the following way. In order to measure the correlation function of an observable A, the quantity A must be measured twice (first at time zero, then again at time $t$ ). However, the first measurement at $t=0$ collapses the system wavepacket, and the state that would have been exhibited by the unperturbed system at time $t$ becomes irrecoverable. We now identify some important features and properties of correlation functions. 1. All inner products $\langle X \mid Y\rangle$ satisfy the Schwarz inequality $|\langle X \mid Y\rangle|^{2} \leq\left\langle X^{2}\right\rangle\left\langle Y^{2}\right\rangle$ Thus the correlation function for any relaxation process has the property $C^{2}(t)=|\langle A(t) A(0)\rangle|^{2} \leq\left\langle A^{2}(t)\right\rangle\left\langle A^{2}(0)\right\rangle \leq\left\langle A^{2}(0)\right\rangle^{2}=C^{2}(0)$ The second inequality above arises from the fact that $\left\langle A^{2}(t)\right\rangle \leq\left\langle A^{2}(0)\right\rangle$ for relaxation processes when $t>0$. More concisely, the Schwarz inequality implies that $|C(t)| \leq C(0)$ 2. Correlation functions are time-invariant, that is, their value depends only on the time interval between the two measurements of the observable: $\langle A(t) A(0)\rangle=\left\langle A\left(t-t_{0}\right) A\left(t_{0}\right)\right\rangle=\langle A(0) A(-t)\rangle$ 3. Time-invariance imparts the following identities on the time derivative of a time correlation function: $\dot{C}(t)=\langle\dot{A}(t) A(0)\rangle=-\langle A(0) \dot{A}(-t)\rangle=-\langle A(t) \dot{A}(0)\rangle$ 4. If the equilibrium value of $A$ is $\langle A\rangle_{e q}=0$, then the long-time limit of the correlation function is zero, $\lim _{t \rightarrow \infty}\langle A(t) A(0)\rangle=\langle A\rangle_{e q}\langle A(0)\rangle=0$ 5. For quantum systems, the time-invariance properties imply that $C(-t)=C^{*}(t)$. In the classical limit, the correlation function is always real-valued, so this relation becomes $C(-t)=C(t)$ and $C(t)$ is thus even. The fact that classical correlation functions are real-valued should seem sensible because we can (and do) measure correlation functions every day for classical systems, for example, when we try to steady a cord dangling from the ceiling. In this case, we determine the appropriate time and place to apply an external steadying force by looking for time correlations between the various motions the cord undergoes. Note that the derivative $\dot{C}(t)$ is correspondingly odd, with $\dot{C}(0)=0$, for classical time correlation functions. 6. For ergodic systems, the time correlation function can be calculated as a time average instead of an ensemble average: $\langle A(t) A(0)\rangle=\lim _{\tau \rightarrow \infty} \frac{1}{\tau} \int_{0}^{\tau} A\left(t+\tau^{\prime}\right) A\left(\tau^{\prime}\right) d \tau^{\prime}$ Since most systems amenable to analysis by the methods of statistical mechanics are inherently ergodic, we are generally free to choose whichever formulation is easier to work with.
The time average is often easier to implement experimentally because it only requires integration along a trajectory rather than a simultaneous sampling of every state accessible to the system. Example: The classical linear harmonic oscillator with mass $m$ and frequency $\omega$ obeys the equation of motion $\ddot{x}+\omega^{2} x=0$ If we provide initial conditions $x(0)$ and $\dot{x}(0)=v(0)$, then this equation of motion has the closed-form solution $x(t)=x(0) \cos \omega t+\frac{v(0)}{\omega} \sin \omega t$ Taking the inner product of $x(t)$ with the initial value $x(0)$, we find $\langle x(t) x(0)\rangle=\left\langle x^{2}(0)\right\rangle \cos \omega t+\frac{\langle v(0) x(0)\rangle}{\omega} \sin \omega t$ The second term is zero because positions and velocities are uncorrelated at equilibrium, $\langle v(0) x(0)\rangle=0$, so the time correlation function is just $C(t)=\left\langle x^{2}(0)\right\rangle \cos \omega t$ Finally, invoking the equipartition result $\left\langle x^{2}(0)\right\rangle=\frac{k T}{m \omega^{2}}$, where $k$ is the Boltzmann constant, the correlation function for the classical linear harmonic oscillator is $C(t)=\frac{k T}{m \omega^{2}} \cos \omega t$
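This result is simple to reproduce numerically. The sketch below (in Python, with assumed values of $m$, $\omega$, and $kT$) samples thermal initial conditions, propagates the closed-form trajectory, and compares the ensemble-averaged $\langle x(t) x(0)\rangle$ with $\frac{k T}{m \omega^{2}} \cos \omega t$:

```python
# Ensemble-averaged correlation function of the classical harmonic oscillator.
import numpy as np

rng = np.random.default_rng(2)
m, omega, kT = 1.0, 2.0, 1.0
nsamp = 200000
x0 = rng.standard_normal(nsamp) * np.sqrt(kT / (m * omega**2))  # <x²> = kT/mω²
v0 = rng.standard_normal(nsamp) * np.sqrt(kT / m)               # <v²> = kT/m

t = np.linspace(0.0, 10.0, 200)
# closed-form trajectory: x(t) = x(0) cos ωt + (v(0)/ω) sin ωt
C = np.array([np.mean((x0 * np.cos(omega*ti) + v0/omega * np.sin(omega*ti)) * x0)
              for ti in t])
print(np.max(np.abs(C - kT/(m*omega**2) * np.cos(omega*t))))  # ~ sampling noise
```

Note that a single isolated oscillator trajectory conserves energy and so does not by itself sample the thermal distribution; for this non-ergodic example the ensemble average over initial conditions, rather than the time average, is the appropriate route.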
At first glance, the relaxation of macroscopic non-equilibrium disturbances in a system might seem completely unrelated to the regression of microscopic fluctuations in the corresponding equilibrium system. However, they are intimately related by so-called fluctuation-dissipation theorems. The existence of this link between microscopic fluctuations and macroscopic relaxation was conjectured by Lars Onsager in 1931, some twenty years before it was finally proven to be true; hence it is often referred to as the Onsager regression hypothesis. To formulate the hypothesis, we consider an observable $A$ with $\langle A\rangle_{e q}=0$ that takes on a nonequilibrium average value $\Delta A$ due to an applied external force $f$ which acts during the time interval $t \leq 0$ but becomes identically zero for $t>0$. For $t \leq 0$, the ensemble average of $\Delta A$ can be expressed as $\Delta A=\frac{\left\langle A e^{-\beta(H-f A)}\right\rangle}{\left\langle e^{-\beta(H-f A)}\right\rangle} \approx \beta f\left[\langle A(0) A(0)\rangle-\langle A(0)\rangle^{2}\right]=\beta f C(0)$ where the approximation being made is truncation of the Taylor series for each exponential to first order. For $t>0$, the system evolves according to $H$ instead of $H-f A$, so $\Delta A$ is no longer stationary, but acquires a time-dependence: $\Delta A=\frac{\left\langle A(t) e^{-\beta(H-f A)}\right\rangle}{\left\langle e^{-\beta(H-f A)}\right\rangle} \approx \beta f\left[\langle A(t) A(0)\rangle-\langle A(0)\rangle^{2}\right]=\beta f C(t)$ Onsager’s hypothesis states that the relaxation of the non-equilibrium value of $\Delta A$ is related to its value at $t=0$ in the same way that the time correlation function for a spontaneous fluctuation is related to its value at $t=0$ : $\frac{\Delta A(t)}{\Delta A(0)}=\frac{C(t)}{C(0)}$ Example $1$ The transition state theory of chemical kinetics can be formulated through the Onsager relation we’ve just presented. Consider a chemical equilibrium established between two species A and B, $\mathrm{A} \underset{k_{b}}{\stackrel{k_{f}}{\rightleftharpoons}} \mathrm{B}$ with forward rate constant $k_{f}$ and backward rate constant $k_{b}$. Equilibrium populations We can describe the population dynamics of A and B deterministically in the macroscopic limit through a pair of coupled differential equations, $\left\{\begin{array}{l} \dot{P}_{\mathrm{A}}=-k_{f} P_{\mathrm{A}}+k_{b} P_{\mathrm{B}} \\ \dot{P}_{\mathrm{B}}=k_{f} P_{\mathrm{A}}-k_{b} P_{\mathrm{B}} \end{array}\right.$ The equilibrium state of this system satisfies the detailed balance condition $k_{f}\left\langle P_{\mathrm{A}}\right\rangle=k_{b}\left\langle P_{\mathrm{B}}\right\rangle$ where the angle brackets denote the equilibrium values of the populations. Taking the populations to be normalized to unity, $\left\langle P_{\mathrm{A}}\right\rangle+\left\langle P_{\mathrm{B}}\right\rangle=1$, we can express $\left\langle P_{\mathrm{A}}\right\rangle$ in terms of the rate constants: $\left\langle P_{\mathrm{A}}\right\rangle=\frac{\left\langle P_{\mathrm{A}}\right\rangle}{\left\langle P_{\mathrm{A}}\right\rangle+\left\langle P_{\mathrm{B}}\right\rangle}=\frac{k_{b}}{k_{f}+k_{b}}$ For notational simplicity, we introduce $k=k_{f}+k_{b}$ and refer to the equilibrium populations $\left\langle P_{\mathrm{A}}\right\rangle$ and $\left\langle P_{\mathrm{B}}\right\rangle$ by $q_{\mathrm{A}}$ and $q_{\mathrm{B}}$, respectively.
With this new notation, we can express the equilibrium populations of A and B as $\left\{\begin{array}{l} q_{\mathrm{A}}=\frac{k_{b}}{k} \\ q_{\mathrm{B}}=\frac{k_{f}}{k} \end{array}\right.$ If the initial state is all species A, the solution to the coupled differential equations indicates a decay to equilibrium with rate constant $k$, which we can write in terms of $\Delta P_{\mathrm{A}}(t)=P_{\mathrm{A}}(t)-q_{\mathrm{A}}$ as $\Delta P_{\mathrm{A}}(t)=\Delta P_{\mathrm{A}}(0) e^{-k t}$ Setting this result aside for a moment, note that if we consider the energies of species A and B to be potential wells connected along a reaction coordinate $x$, then we can write down an expression for the fluctuation in occupation number $n$ for each species as a function of $x$. The barrier between the A and B potential wells is a maximum at $x=x_{b}$; see Figure 2.2. Application of Onsager Regression hypothesis To reflect the fact that a particle to the left of the barrier is species A and a particle to the right is species B, we write the occupation numbers in terms of the Heaviside step function, $\left\{\begin{array}{l} n_{\mathrm{A}}=\theta\left(x_{b}-x\right) \\ n_{\mathrm{B}}=\theta\left(x-x_{b}\right) \end{array}\right.$ where $\left\langle n_{\mathrm{A}}\right\rangle=q_{\mathrm{A}}$ and $\left\langle n_{\mathrm{B}}\right\rangle=q_{\mathrm{B}}.$ Applying Onsager’s regression hypothesis to this example, we can relate the dissipation of $P_{\mathrm{A}}$ to the fluctuations in occupation number as follows: $\frac{C(t)}{C(0)}=\frac{\left\langle\delta n_{\mathrm{A}}(t) \delta n_{\mathrm{A}}(0)\right\rangle}{\left\langle\delta n_{\mathrm{A}}^{2}(0)\right\rangle}=\frac{\Delta P_{\mathrm{A}}(t)}{\Delta P_{\mathrm{A}}(0)}=e^{-k t}$ The second equality arises from our integrated rate equation for the dissipation of $P_{\mathrm{A}}$. Also note that $\left\langle\delta n_{\mathrm{A}}^{2}\right\rangle=\left\langle n_{\mathrm{A}}^{2}\right\rangle-\left\langle n_{\mathrm{A}}\right\rangle^{2}=q_{\mathrm{A}}-q_{\mathrm{A}}^{2}=q_{\mathrm{A}}-q_{\mathrm{A}}\left(1-q_{\mathrm{B}}\right)=q_{\mathrm{A}} q_{\mathrm{B}}$ Differentiating the fluctuation-dissipation relation above with respect to $t$ and invoking the identity just shown, we find $k e^{-k t}=-\frac{\left\langle\delta \dot{n}_{\mathrm{A}}(t) \delta n_{\mathrm{A}}(0)\right\rangle}{\left\langle\delta n_{\mathrm{A}}^{2}(0)\right\rangle}=\frac{\left\langle n_{\mathrm{A}}(t) \dot{n}_{\mathrm{A}}(0)\right\rangle}{q_{\mathrm{A}} q_{\mathrm{B}}}$ Recasting this equation in terms of the reaction coordinate $x$, we arrive at an expression for the time dependence of the forward rate constant $k_{f}(t)$, $k_{f}(t)=k_{f} e^{-k t}=\frac{\left\langle\theta\left(x(t)-x_{b}\right) \delta\left(x_{b}-x(0)\right) v\right\rangle}{\left\langle\theta\left(x_{b}-x(t)\right)\right\rangle}$ where $v=\dot{x}(0)$ is the initial velocity along the reaction coordinate. Expression for the TST rate constant Finally, to determine the transition state theory (TST) rate constant, we consider our time-dependent expression for $k_{f}$ in the short-time limit, since transition states typically only survive a few molecular vibrations.
In this limit, $\lim _{t \rightarrow 0^{+}} k_{f}(t)=\frac{\left\langle\theta\left(x\left(0^{+}\right)-x_{b}\right) \delta\left(x_{b}-x(0)\right) v\right\rangle}{\left\langle n_{\mathrm{A}}\right\rangle}=\frac{\left\langle\theta(v) \delta\left(x_{b}-x(0)\right) v\right\rangle}{\left\langle n_{\mathrm{A}}\right\rangle}$ From the kinetic theory of gases, we recognize that $\langle\theta(v) v\rangle=\sqrt{\frac{k_{B} T}{2 \pi m}}=(2 \pi m \beta)^{-1 / 2}$ If we stipulate now that the height of the barrier is $E_{b}$, some rearrangement of the preceding formulas reveals that $\frac{\left\langle\delta\left(x_{b}-x\right)\right\rangle}{\left\langle\theta\left(x_{b}-x\right)\right\rangle}=\sqrt{\frac{m \omega^{2} \beta}{2 \pi}} e^{-\beta E_{b}}$ where $\omega$ is the fundamental frequency of the left potential well. It follows that the TST rate constant takes on the simple form $k_{T S T}=\frac{\omega}{2 \pi} e^{-\beta E_{b}}$ To conclude our excursion into TST kinetics, note that the ratio $\frac{k(t)}{k_{T S T}}=\frac{\left\langle\theta\left(x(t)-x_{b}\right) \delta\left(x(0)-x_{b}\right) v\right\rangle}{\left\langle\theta\left(x\left(0^{+}\right)-x_{b}\right) \delta\left(x(0)-x_{b}\right) v\right\rangle}$ is always less than or equal to one. This result indicates that the TST flux is partially trapped in the product well while part of the TST flux recrosses back to the reactant state. This result is in keeping with our intuition of chemical dynamics in that every macroscopic reaction is, to some degree, a process of establishing equilibrium rather than a perfect flow from all reactants to all products.
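The regression hypothesis at the core of this example can also be tested directly by simulation. The sketch below (in Python, with assumed rate constants and a small fixed timestep; the discrete jump process is our own stand-in for the barrier-crossing dynamics) estimates the normalized equilibrium correlation function of $\delta n_{\mathrm{A}}$ as a time average along the trajectory and compares it with the macroscopic relaxation $e^{-k t}$, $k=k_{f}+k_{b}$:

```python
# Two-state jump process: equilibrium fluctuations vs. macroscopic relaxation.
import numpy as np

rng = np.random.default_rng(3)
kf, kb = 1.0, 0.5
k = kf + kb
dt, nstep = 1e-2, 400000

n = np.empty(nstep)                     # occupation number n_A (1 or 0)
state = 1
for i in range(nstep):
    n[i] = state
    rate = kf if state == 1 else kb     # escape rate out of the current state
    if rng.random() < rate * dt:        # small-dt jump probability
        state = 1 - state

dn = n - n.mean()                       # δn_A along the trajectory
lags = np.arange(300)                   # correlations out to t = 3 ≈ 4.5/k
C = np.array([np.mean(dn[:nstep - l] * dn[l:]) for l in lags])
# Onsager: C(t)/C(0) should match exp(-k t); agreement is limited only by
# sampling noise and the O(k dt) discretization of the jump probabilities.
print(np.max(np.abs(C / C[0] - np.exp(-k * lags * dt))))
```

The sampled $C(0)$ also recovers the variance $q_{\mathrm{A}} q_{\mathrm{B}}$ derived above, here $\approx 2/9$ for the assumed rates.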
The concept of linear response was introduced in section 2.1. Here, we explore further how the linear response of a system is quantified by considering the important relations regularly invoked by practitioners of linear response theory. Response Functions The motivating idea behind linear response is that the response of a system to an external force depends on the strength of that force at all times during which the force acts on the system. That is, the response at time $t$ depends on the history of the force’s action on the system. An appropriately weighted sum of the strength of the external force at each moment during the interaction will describe the overall response. Mathematically, therefore, we express the response as an integral over the history of the interaction, $\Delta A(t)=\int_{-\infty}^{\infty} K(t, \tau) f(\tau) d \tau$ The kernel $K(t, \tau)$ in this expression, which provides the weight for the strength of the external force at each time, is called the response function. The response function has two very important properties: • Time invariance: $K$ depends only on the time interval between $\tau$ and $t$, not on the two times independently. More succinctly, $K(t, \tau)=K(t-\tau)$ • Causality: The system cannot respond until the force has been applied. This places an upper limit of $t$ on the integration over the history of the external force. With these observations in place, we arrive at the standard formula describing the linear response of an observable $A$ to an external force $f(t)$, $\Delta A(t)=\int_{-\infty}^{t} K(t-\tau) f(\tau) d \tau$ Linear response - described by the response function $K(t)$ - and linear regression - described by the time correlation function $C(t)$ - are directly related to one another. To see the connection, consider a force $f(t)$ which is constant with strength $f$ for $t \leq 0$ and is zero for $t>0$. We have established two ways to describe the response of an observable $A$ to this force: • Linear regression: $\Delta A(t)=\beta f C(t)$ • Linear response: $\Delta A(t)=\int_{-\infty}^{0} K(t-\tau) f(\tau) d \tau$ From this information, we conclude that the correlation function and response function are related by $K(t)=-\beta \dot{C}(t) \theta(t)$ where $\theta(t)$ is the Heaviside function. Sometimes the linear response function is more conveniently expressed in the frequency domain, in which case it is called the frequency-dependent response function. In many physical situations, it plays the role of a susceptibility to a force and consequently is denoted by $\chi(\omega)$, $\chi(\omega)=\int_{0}^{\infty} e^{i \omega t} K(t) d t$ This response function is often partitioned into real and imaginary parts, which can also be thought of as even and odd parts, respectively, $\chi(\omega)=\chi^{\prime}(\omega)+i \chi^{\prime \prime}(\omega)$ Example: The response function for the classical linear harmonic oscillator can be quickly deduced from its time correlation function. Recall from the first example in this chapter that the time correlation function for the classical linear harmonic oscillator is $C(t)=\frac{k_{B} T}{m \omega^{2}} \cos \omega t$ Applying Eq.(2.17), we differentiate with respect to $t$ and multiply by $\beta=\frac{1}{k_{B} T}$ to determine that $K(t)=\frac{1}{m \omega} \sin (\omega t) \theta(t)$ This is the response function for the classical linear harmonic oscillator. 
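The general relation $K(t)=-\beta \dot{C}(t) \theta(t)$ lends itself to a quick numerical sanity check. The sketch below assumes an exponentially decaying correlation function $C(t)=C_{0} e^{-\gamma|t|}$ (an Ornstein-Uhlenbeck form chosen so the integrals converge, unlike the undamped oscillator above) and verifies that a constant force switched off at $t=0$ relaxes as $\Delta A(t)=\beta f C(t)$:

```python
# Check that the response to a step force reproduces linear regression:
# ΔA(t) = ∫_{-∞}^{0} K(t-τ) f dτ = f ∫_{t}^{∞} K(s) ds = β f C(t).
import numpy as np

beta, C0, gamma, f = 1.0, 1.0, 0.7, 0.2
dt = 1e-3
s = np.arange(0.0, 30.0, dt)                   # history grid for K
K = beta * gamma * C0 * np.exp(-gamma * s)     # K(t) = -β dC/dt for t > 0

t = np.linspace(0.0, 5.0, 6)
dA = np.array([f * K[s >= ti].sum() * dt for ti in t])
print(np.max(np.abs(dA - beta * f * C0 * np.exp(-gamma * t))))  # ≈ 0, up to O(dt)
```

The agreement is exact up to the Riemann-sum discretization, illustrating that the response function and the regression of fluctuations carry the same information.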
Absorption Power Spectra The frequency-dependent response function is directly related to the absorption spectrum: in fact, knowledge of $\chi(\omega)$ and the time-dependent external force $f(t)$ is sufficient to fully describe the absorption spectrum. The rate at which work is done on a system by a generalized external force $f(t)$ is $f(t) \dot{A}(t)$, where $A$ is the observable corresponding to the generalized force $f$. This quantity has units of power, so we can calculate the total absorption energy by integrating this power over time, $\int P(t) d t=\int f(t) \dot{A}(t) d t$ To recast this result in terms of $\chi(\omega)$, we first consider the Fourier transform of the time-dependent observable $A$, $\tilde{A}(\omega)=\int e^{i \omega t} A(t) d t$ Applying Eq.(2.16), we have $\tilde{A}(\omega)=\int e^{i \omega t} \int K(t-\tau) f(\tau) d \tau d t$ The following rearrangements allow us to express $\tilde{A}(\omega)$ entirely in terms of frequency-dependent functions: \begin{aligned} \tilde{A}(\omega) &=\iint e^{i \omega(t-\tau)} e^{i \omega \tau} K(t-\tau) f(\tau) d \tau d t \\ &=\int e^{i \omega(t-\tau)} K(t-\tau) d(t-\tau) \int e^{i \omega \tau} f(\tau) d \tau \\ &=\chi(\omega) \tilde{f}(\omega) \end{aligned} where $\tilde{f}(\omega)$ is the Fourier transform of $f(t)$. Returning to Eq. (2.20) for the absorption energy, $\int P(t) d t=\int f(t) \dot{A}(t) d t=\frac{1}{2 \pi} \int \tilde{f}(-\omega) \tilde{\dot{A}}(\omega) d \omega$ Rearranging the expression once again and evaluating the integral over time yields \begin{aligned} \int P(t) d t &=\frac{1}{2 \pi} \int(-i \omega) \tilde{f}(-\omega) \tilde{A}(\omega) d \omega \\ &=\frac{1}{2 \pi} \int(-i \omega) \chi(\omega)|\tilde{f}(\omega)|^{2} d \omega \\ &=\frac{1}{2 \pi} \int \omega \chi^{\prime \prime}(\omega)|\tilde{f}(\omega)|^{2} d \omega \end{aligned} Hence the absorption power spectrum $P(\omega)$ has the form $P(\omega)=\omega \chi^{\prime \prime}(\omega)|\tilde{f}(\omega)|^{2}$ Example: For a monochromatic force $F(t)=F \cos \omega_{0} t$, the Fourier transform of $F(t)$ is given by $\tilde{F}(\omega)=F \pi\left[\delta\left(\omega-\omega_{0}\right)+\delta\left(\omega+\omega_{0}\right)\right]$ Hence our expression for the absorption power spectrum Eq.(2.30) tells us that the absorption rate (i.e. the time average of $P(t)$ ) for such a system is $\lim _{\tau \rightarrow \infty} \frac{1}{\tau} \int_{0}^{\tau} P(t) d t=\frac{\omega_{0}}{2} \chi^{\prime \prime}\left(\omega_{0}\right)|F|^{2}$ Furthermore, using Eq. $(2.17)$ and the fact that $\chi^{\prime \prime}(\omega)$ is odd, we find that \begin{aligned} \chi^{\prime \prime}(\omega) &=\int_{0}^{\infty} \sin \omega t K(t) d t \\ &=\int_{0}^{\infty} \sin \omega t(-\beta \dot{C}(t)) d t \\ &=\beta \omega \int_{0}^{\infty} C(t) \cos \omega t d t \\ &=\frac{\beta \omega}{2} \int_{-\infty}^{\infty} e^{i \omega t} C(t) d t \\ &=\frac{\beta \omega}{2} \tilde{C}(\omega) \end{aligned} This simple relationship illustrates the close connection between frequency-dependent response functions (or susceptibilities) and time correlation functions. Causality and the Kramers-Kronig Relations Our final consideration in this chapter is the relationship between the real and imaginary parts of the frequency-dependent response function as defined in Eq.(2.19). The equations relating these two functions are known as the Kramers-Kronig relations.
In their most general form, they govern the response function as a function of the complex frequency $z=\omega+i \epsilon$, though under most physical circumstances of interest they can be expressed in terms of real-valued frequencies alone. The relations arise from the causality requirement, which we originally expressed by requiring $K(t)=0, \forall t<0$. It turns out that this requirement, along with the assumption that $\int_{0}^{\infty} K(t) d t$ converges, implies that the response function $\chi(z)$ is analytic on the upper half of the complex plane. Consider, then, the integral of $\frac{\chi(z)}{z-\omega_{0}}$ around a closed contour consisting of the real axis, indented by a small semicircle around the pole at $z=\omega_{0}$, and closed by a large semicircle in the upper half-plane. Since the integrand is analytic everywhere inside this contour, Cauchy’s theorem tells us that the contour integral vanishes, $\oint \frac{\chi(z)}{z-\omega_{0}} d z=0$ We can also, however, integrate piecewise over each part of the contour; some manipulation with the residue theorem is required, but the final result is $\oint \frac{\chi(z)}{z-\omega_{0}} d z=\mathcal{P} \int_{-\infty}^{\infty} \frac{\chi(z)}{z-\omega_{0}} d z-i \pi \chi\left(\omega_{0}\right)$ where $\mathcal{P}$ denotes the Cauchy principal value of the integral. Setting the two results above equal and solving for the response function, $\chi(\omega)=\frac{1}{i \pi} \mathcal{P} \int_{-\infty}^{\infty} \frac{\chi(z)}{z-\omega} d z$ The decomposition of Eq.(2.33) into real and imaginary parts yields the Kramers-Kronig relations, \begin{aligned} &\chi^{\prime}(\omega)=\frac{1}{\pi} \mathcal{P} \int_{-\infty}^{\infty} \frac{\chi^{\prime \prime}(z)}{z-\omega} d z \\ &\chi^{\prime \prime}(\omega)=-\frac{1}{\pi} \mathcal{P} \int_{-\infty}^{\infty} \frac{\chi^{\prime}(z)}{z-\omega} d z \end{aligned} As promised, this pair of equations provides a concise relationship between the real and imaginary parts of any response function $\chi(\omega)$.
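These relations are easy to confirm numerically for a concrete model. The sketch below assumes a damped-oscillator susceptibility, $\chi(\omega)=1 /\left[m\left(\omega_{0}^{2}-\omega^{2}-i \gamma \omega\right)\right]$ (which for $\gamma \rightarrow 0$ reduces to the transform of the oscillator response function found earlier), and reconstructs $\chi^{\prime}(\omega)$ from $\chi^{\prime \prime}$ by evaluating the principal-value integral with the singular point subtracted off:

```python
# Numerical Kramers-Kronig check for χ(ω) = 1 / [m(ω0² - ω² - iγω)].
import numpy as np

m, w0, gam = 1.0, 1.0, 0.3
def chi(w):
    return 1.0 / (m * (w0**2 - w**2 - 1j * gam * w))

L, N = 200.0, 2000001                    # truncation and grid for the P-integral
z = np.linspace(-L, L, N)
dz = z[1] - z[0]
chi2 = chi(z).imag                       # χ''(z) sampled on the grid

def kk_real(w):
    # P∫ χ''(z)/(z-ω) dz  =  ∫ [χ''(z)-χ''(ω)]/(z-ω) dz
    #                      + χ''(ω) ln[(L-ω)/(L+ω)]   (singularity subtracted)
    d, num = z - w, chi2 - chi(w).imag
    frac = np.zeros_like(z)
    mask = np.abs(d) > 1e-9
    frac[mask] = num[mask] / d[mask]
    return (frac.sum() * dz + chi(w).imag * np.log((L - w) / (L + w))) / np.pi

for w in (0.0, 0.5, 1.0, 2.0):
    print(kk_real(w), chi(w).real)       # the two columns should agree closely
```

Causality enters through the choice of model: the poles of this $\chi$ lie in the lower half-plane, exactly the analyticity structure the derivation above requires.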
Hydrodynamics describes the low-frequency, long-wavelength behavior of a system that is disturbed from equilibrium. When a system is disturbed from equilibrium, some quantities and parameters decay very quickly back to their equilibrium state, while others take a long time to relax [1]. Conserved quantities, such as particle number, momentum, and energy, take a long time to relax to equilibrium, while non-conserved quantities decay quickly [1]. Similarly, order parameters, such as average magnetization, take a long time to relax to equilibrium, while parameters which are not order parameters decay quickly [1]. Therefore, at long times, a non-equilibrium system can be completely described by order parameters and the densities of conserved quantities [1]. Hydrodynamic equations are the equations of motion for these quantities and parameters. For further information on the subjects covered in this chapter, please consult books by Reichl[1], Hansen and McDonald[2], and McQuarrie[3]. • 3.1: Light Scattering Scattering occurs when a propagating wave encounters a medium which alters the magnitude or direction of its wave vector. In this section, we will show that the behavior of light scattered from a medium is related to the density correlation functions of the medium. As a result, light scattering experiments can be used to probe the structure of a material. • 3.2: Navier-Stokes Hydrodynamic Equations • 3.3: Transport Coefficients 03: Hydrodynamics and Light Scattering Scattering and Correlation Functions A particle or light field propagating through space can be described by a wave vector $\vec{k}$. The direction of $\vec{k}$ indicates the direction of propagation of the wave, and the magnitude of $\vec{k}$ indicates the wave number, or inverse wavelength, of the wave. Scattering occurs when a propagating wave encounters a medium which alters the magnitude or direction of its wave vector. In this section, we will show that the behavior of light scattered from a medium is related to the density correlation functions of the medium. As a result, light scattering experiments can be used to probe the structure of a material. Neutron or Light Scattering In this section, we want to describe the behavior of a particle or light field that undergoes elastic scattering from a medium. This discussion could apply to x-ray, proton, neutron, or electron scattering, among others. Elastic scattering occurs when there is no transfer of energy from the particle to the scattering medium. The direction of the particle’s wave vector changes, but its wave number (or frequency) remains the same. A schematic of the scattering process is depicted in Figure 3.1. The incident particle or light field with wave vector $\overrightarrow{k_{o}}$ is scattered from the sample at point $\overrightarrow{r^{\prime}}$, changing its wave vector to $\overrightarrow{k_{f}}$. The vector $\overrightarrow{k_{f}}$ has the same magnitude as $\overrightarrow{k_{o}}$, but a different direction. The scattered light is detected at point $\vec{r}$. The scattered particle or light field can be modelled as a spherical wave.
The quantum mechanical expression for this wave is $\Psi_{s}=\frac{i}{\hbar} \int \frac{e^{i k\left|\vec{r}-\overrightarrow{r^{\prime}}\right|}}{\left|\vec{r}-\overrightarrow{r^{\prime}}\right|} e^{i \overrightarrow{k_{o}} \cdot \overrightarrow{r^{\prime}}} \rho\left(\overrightarrow{r^{\prime}}\right) d \overrightarrow{r^{\prime}}$ where $\rho(\vec{r})$ is the density of scattering agents and the integral is carried out over all scattering agents. In most light scattering experiments, the distance from the sample to the light detector is significantly larger than the size of the sample itself. In this case it is valid to make the assumption that $r \gg r^{\prime}$. Then $e^{i k\left|\vec{r}-\overrightarrow{r^{\prime}}\right|} \rightarrow e^{i k r-i \overrightarrow{k_{f}} \cdot \overrightarrow{r^{\prime}}}$ and the wavefunction can be written $\Psi_{s}=\frac{i}{\hbar} \frac{e^{i k r}}{r} \int \rho\left(\overrightarrow{r^{\prime}}\right) e^{-i\left(\overrightarrow{k_{f}}-\overrightarrow{k_{o}}\right) \cdot \overrightarrow{r^{\prime}}} d \overrightarrow{r^{\prime}}=\frac{i}{\hbar} \frac{e^{i k r}}{r} \int \rho\left(\overrightarrow{r^{\prime}}\right) e^{-i \vec{k} \cdot \overrightarrow{r^{\prime}}} d \overrightarrow{r^{\prime}}$ where $\vec{k}=\overrightarrow{k_{f}}-\overrightarrow{k_{o}}$ is the difference between the initial and scattered wave vector. We can also assume that the medium is composed of point particles with equal scattering amplitudes $a_{i}=a$, so the density is the sum over all points $\rho(\vec{r})=\sum_{i=1}^{N} a_{i} \delta\left(\vec{r}-\vec{r}_{i}\right)=a \sum_{i=1}^{N} \delta\left(\vec{r}-\vec{r}_{i}\right)$ Then the wavefunction simplifies to $\Psi_{s}=\frac{i}{\hbar} \frac{e^{i k r}}{r} a \sum_{i=1}^{N} e^{-i \vec{k} \cdot \overrightarrow{r_{i}}} \propto a \sum_{i=1}^{N} e^{-i \vec{k} \cdot \overrightarrow{r_{i}}}$ In light scattering experiments the measured quantity is the intensity of scattered light over the angle spanned by the detector. This quantity is called the scattering cross section $\frac{d \sigma}{d \Omega}$, and it is proportional to the square of the wavefunction: $I(\vec{k})=\left|\Psi_{s}\right|^{2} \propto \frac{1}{r^{2}} \frac{d \sigma}{d \Omega}=\frac{a^{2}}{r^{2}}\left\langle\left|\sum_{i=1}^{N} e^{-i \vec{k} \cdot \vec{r}_{i}}\right|^{2}\right\rangle=\frac{a^{2}}{r^{2}} N S(\vec{k})$ where $S(\vec{k})$ is called the static structure factor, and is defined as: $S(\vec{k})=\frac{1}{N}\left\langle\left|\sum_{i=1}^{N} e^{-i \vec{k} \cdot \vec{r}_{i}}\right|^{2}\right\rangle$ In order to find the scattering intensity, we must evaluate this term. The Static Structure Factor The static structure factor can be rewritten as: $S(k)=\frac{1}{N}\left\langle\rho_{k} \rho_{-k}\right\rangle$ where $\rho_{k}=\int e^{-i \vec{k} \cdot \vec{r}} \rho(\vec{r}) d \vec{r}$ and $\rho(r)$ is the local number density. For a homogeneous liquid, $\langle\rho(r)\rangle=\rho_{o}$. To model real systems, we can simplify the calculations by expressing the density correlations as a sum of the homogeneous density $\rho_{o}$ and local fluctuations $\delta \rho$, $\rho(r)=\rho_{o}+\delta \rho$ Using this separation, the scattering function can be written in two pieces: $S(k)=\frac{1}{N}\left\langle\rho_{k} \rho_{-k}\right\rangle=\rho_{0}(2 \pi)^{3} \delta(\vec{k})+\frac{1}{N}\left\langle\delta \rho_{k} \delta \rho_{-k}\right\rangle$ The first term arises from the homogeneous background and is called the forward scattering. The second term gives the scattering from the density fluctuations.
In an ideal gas, there is no interaction between the particles, so $\delta \rho=0$ and there is only forward scattering.

The Density Correlation Function

It is also helpful to think about the scattering in real space. Define the density correlation function $G(\vec{r})$ as the Fourier transform of $S(\vec{k})$ into coordinate space.

\begin{aligned} G(\vec{r}) &=\frac{1}{(2 \pi)^{3}} \int S(\vec{k}) e^{-i \vec{k} \cdot \vec{r}} d \vec{k} \\ &=\frac{1}{N} \frac{1}{(2 \pi)^{3}} \int e^{-i \vec{k} \cdot \vec{r}}\left\langle\sum_{i} e^{i \vec{k} \cdot \vec{r}_{i}} \sum_{j} e^{-i \vec{k} \cdot \vec{r}_{j}}\right\rangle d \vec{k} \\ &=\frac{1}{N} \sum_{i, j}\left\langle\delta\left(\vec{r}-\vec{r}_{i, j}\right)\right\rangle \end{aligned}

From this, we can see that the Fourier transform of the structure factor gives the probability of finding two particles separated by a vector $\vec{r}$. We can also write the density correlation function in a slightly different form:

\begin{aligned} G(\vec{r}) &=\frac{1}{N} \sum_{i, j} \int d \overrightarrow{r_{o}}\left\langle\delta\left(\overrightarrow{r_{i}}-\overrightarrow{r_{o}}-\vec{r}\right) \delta\left(\overrightarrow{r_{j}}-\overrightarrow{r_{o}}\right)\right\rangle \\ &=\frac{1}{N} \int d \overrightarrow{r_{o}}\left\langle\rho\left(\overrightarrow{r_{o}}+\vec{r}\right) \rho\left(\overrightarrow{r_{o}}\right)\right\rangle=\frac{V}{N}\langle\rho(\vec{r}) \rho(0)\rangle=\frac{1}{\rho_{o}}\langle\rho(\vec{r}) \rho(0)\rangle \end{aligned}

The Pair Distribution Function

To better understand the physical interpretation of the structure factor and the density correlation function, we can rewrite them in terms of the pair distribution function $g(r)$. The pair distribution function is given by:

$g(r)=\frac{V}{N^{2}}\left\langle\sum_{i \neq j} \delta\left(\vec{r}-\vec{r}_{i, j}\right)\right\rangle$

This gives the probability that, given a particle $i$, another particle $j$ will be found at a displacement $\vec{r}$ away. It is defined only for terms with $i \neq j$. We can write $g(r)$ as:

$g(r)=h(r)+1$

where $h(r)$ is the pair correlation function. The Fourier transform of the pair distribution function can be written:

$\tilde{g}(\vec{k})=\tilde{h}(\vec{k})+(2 \pi)^{3} \delta(\vec{k})$

This allows us to rewrite the structure factor and the density correlation function in terms of the interactions between individual pairs of particles. To write the structure factor $S(\vec{k})$ and the density correlation function $G(\vec{r})$ in terms of the pair distribution function, separate the summations into terms with $i=j$ and terms with $i \neq j$. The structure factor is written:

$S(\vec{k})=\frac{1}{N} \sum_{i, j}\left\langle e^{-i \vec{k} \cdot \vec{r}_{i}} e^{i \vec{k} \cdot \vec{r}_{j}}\right\rangle$

The terms with $i=j$ each contribute a value of $\frac{1}{N}$. After taking the summation over all $N$ particles, this gives a value of 1.

$S(\vec{k})=1+\frac{1}{N} \sum_{i \neq j}\left\langle e^{-i \vec{k} \cdot\left(\vec{r}_{i}-\vec{r}_{j}\right)}\right\rangle=1+\rho \tilde{g}(\vec{k})=1+\rho \tilde{h}(\vec{k})+(2 \pi)^{3} \delta(\vec{k}) \rho$

Now, the first two terms $1+\rho \tilde{h}(\vec{k})$ give the scattering due to the molecular structure, or fluctuations. The third term gives the forward scattering, which as we discussed earlier is the scattering that we would expect in a system with no fluctuations (an ideal gas). The density correlation function is written:

$G(\vec{r})=\frac{1}{N}\left\langle\sum_{i, j} \delta\left(\vec{r}_{i, j}-\vec{r}\right)\right\rangle$

When $i=j$, we are discussing a single particle.
Therefore, $\vec{r}_{i, j}=0$ and each term contributes $\frac{1}{N} \delta(\vec{r})$. After taking the summation over all $N$ particles, the $N$ cancels and we are left with $\delta(\vec{r})$.

\begin{aligned} G(\vec{r}) &=\delta(\vec{r})+\frac{1}{N}\left\langle\sum_{i \neq j} \delta\left(\vec{r}_{i, j}-\vec{r}\right)\right\rangle \\ &=\delta(\vec{r})+\rho g(\vec{r})=\delta(\vec{r})+\rho(h(\vec{r})+1) \end{aligned}

By writing the expressions for $S(\vec{k})$ and $G(\vec{r})$ in terms of $g(\vec{r})$, their physical interpretation becomes more clear. The pair distribution function for a typical liquid and a typical solid are shown in Figure 3.3 and Figure 3.4. If the particles have a diameter $d$, then clearly no two particle centers can approach closer than a distance $d$. Therefore, for both the solid and the liquid, $g(\vec{r})$ has a value of 0 from a distance 0 to a distance $d$. At this point, the probability rapidly increases and begins oscillating around a value of 1. In a liquid, there is short-range structure as weak intermolecular interactions form a series of solvation shells around a particle. However, these forces only act at short range, and as the distance increases the correlation decays to zero. In a solid, the structure persists throughout the sample, and therefore the oscillations do not decay.

Inelastic Scattering

The previous section described the behavior of a particle as it undergoes an elastic scattering event. In this section, we will address the phenomenon of inelastic scattering, which applies primarily to light fields. Inelastic scattering occurs when scattered light transfers some energy to the scattering material. While an elastic scattering event causes only a change in the direction of the wave vector, an inelastic scattering event causes both a change in the direction and the wavenumber of the scattered light. In other words, the scattered wave becomes frequency dispersed. Figure 3.5 gives a schematic of an inelastic scattering event.

Scattered Intensity

To calculate the intensity of scattered light from an inelastic scattering event, we can follow a very similar process to that which we used for elastic scattering: model the scattered light as a spherical wave, and simplify it by assuming that the distance from the sample to the light detector is large compared with the size of the sample, and that the medium is composed of point particles. However, there is one major difference. Since the scattered light can transfer energy to the material, the position of the particles now depends on time. The scattered wavefunction is then:

$\Psi_{s} \propto \frac{a}{r} \sum_{i=1}^{N} e^{-i \vec{k} \cdot \vec{r}_{i}(t)}$

so that the scattered amplitudes at different times are correlated through

$a^{2}\left\langle\sum_{i} e^{-i \vec{k} \cdot \vec{r}_{i}(t)} \sum_{j} e^{i \vec{k} \cdot \vec{r}_{j}(0)}\right\rangle$

By taking the temporal Fourier transform, we can find the differential cross section per unit frequency:

$\frac{d \sigma}{d \Omega\, d \omega}=a^{2} \int e^{i \omega t}\left\langle\sum_{i} e^{-i \vec{k} \cdot \vec{r}_{i}(t)} \sum_{j} e^{i \vec{k} \cdot \vec{r}_{j}(0)}\right\rangle d t=a^{2} N S(\vec{k}, \omega)$

Note that the structure factor $S$ is now dependent on both the wave vector $\vec{k}$ and the frequency $\omega$. Therefore, it is called the Dynamic Structure Factor.

The Intermediate Scattering Function

The intermediate scattering function is defined as the Fourier transform of the dynamic structure factor into real time.
\begin{aligned} F(\vec{k}, t) &=\frac{1}{2 \pi} \int S(\vec{k}, \omega) e^{i \omega t} d \omega \\ S(\vec{k}, \omega) &=\int F(\vec{k}, t) e^{-i \omega t} d t \end{aligned}

It is called the intermediate scattering function because it has one variable, the spatial dimension $k$, expressed in Fourier space, and the other variable, the time dimension $t$, expressed in real space. It can be expressed explicitly as:

$F(\vec{k}, t)=\frac{1}{N}\left\langle\rho_{k}(t) \rho_{-k}(0)\right\rangle$

where:

$\rho_{k}(t)=\sum_{i} e^{-i \vec{k} \cdot \overrightarrow{r_{i}}(t)}$

Note that this function looks identical to the static structure factor from section 1, except that now the density is a function of time.

The Van Hove Function

The Van Hove function is defined as the Fourier transform of the intermediate scattering function into real space.

$G(\vec{r}, t)=\frac{1}{(2 \pi)^{3}} \int F(\vec{k}, t) e^{i \vec{k} \cdot \vec{r}} d \vec{k}=\frac{1}{N} \sum_{i, j}\left\langle\delta\left(\vec{r}_{i}(t)-\overrightarrow{r_{j}}(0)-\vec{r}\right)\right\rangle$

The Van Hove function can also be expressed as

\begin{aligned} G(\vec{r}, t) &=\frac{1}{N} \int d \overrightarrow{r_{o}}\left\langle\sum_{i} \delta\left(\overrightarrow{r_{i}}(t)-\vec{r}-\overrightarrow{r_{o}}\right) \sum_{j} \delta\left(\overrightarrow{r_{j}}(0)-\overrightarrow{r_{o}}\right)\right\rangle \\ &=\frac{1}{N} \int d \overrightarrow{r_{o}}\left\langle\rho\left(\vec{r}+\overrightarrow{r_{o}}, t\right) \rho\left(\overrightarrow{r_{o}}, 0\right)\right\rangle \\ &=\frac{V}{N}\langle\rho(\vec{r}, t) \rho(0,0)\rangle \end{aligned}

where $\frac{V}{N}=\rho_{o}^{-1}$. The Van Hove function describes the correlation of density fluctuations at different times and positions.

It can be difficult to keep track of the many functions used to describe inelastic scattering. The following table summarizes these functions and their different spatial and temporal variables.

| Name | Symbol | Spatial Dimension | Temporal Dimension |
|---|---|---|---|
| Dynamic Structure Factor | $S(\vec{k}, \omega)$ | Fourier, $\vec{k}$ | Fourier, $\omega$ |
| Intermediate Scattering Function | $F(\vec{k}, t)$ | Fourier, $\vec{k}$ | Real, $t$ |
| Van Hove Function | $G(\vec{r}, t)$ | Real, $\vec{r}$ | Real, $t$ |

Two remarks:

1. If we are only interested in the spatial structure, we can integrate over the temporal dimension:

$S(\vec{k})=\frac{1}{2 \pi} \int S(\vec{k}, \omega) d \omega=F(\vec{k}, 0)$

This gives the spatial structure.

2. The density can again be expressed as the sum of a constant background $\rho_{o}$ and fluctuations $\delta \rho$:

$\rho=\rho_{o}+\delta \rho$

Then the dynamic structure factor can be expressed as

$S(\vec{k}, \omega)=(2 \pi)^{4} \delta(\vec{k}) \delta(\omega) \rho_{o}+\int e^{i \omega t} \frac{1}{N}\left\langle\delta \rho_{k}(t) \delta \rho_{-k}(0)\right\rangle d t$

In the first term, $\vec{k}=0$ and $\omega=0$. This is the forward, elastic, unscattered contribution, the only term present for an ideal gas. The second term gives the spectrum of density fluctuations in the fluid.
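To make the table above concrete, here is a minimal sketch of how $F(\vec{k}, t)$ could be estimated from a stored particle trajectory. This is an added illustration under simple assumptions (a trajectory array of shape (T, N, 3) and averaging over all time origins); it is not part of the original notes:

```python
import numpy as np

def intermediate_scattering(traj, k_vec):
    """Estimate F(k, t) = (1/N) <rho_k(t) rho_-k(0)> from a trajectory.

    traj  : (T, N, 3) array of positions at T consecutive time steps
    k_vec : (3,) wave vector
    Averages over all available time origins, using time-translation invariance.
    """
    T, N, _ = traj.shape
    rho_k = np.exp(-1j * traj @ k_vec).sum(axis=1)   # rho_k(t), shape (T,)
    F = np.zeros(T, dtype=complex)
    for t in range(T):
        # correlate rho_k(t0 + t) with rho_{-k}(t0) = conj(rho_k(t0))
        F[t] = np.mean(rho_k[t:] * np.conj(rho_k[:T - t]))
    return F.real / N   # F(k, 0) reduces to the static structure factor S(k)
```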
3.2: Navier-Stokes Hydrodynamic Equations
Basic Equations

Conservation of Mass

Consider a fixed volume in space, such as that pictured in Figure 3.6. The total number of particles in the region at any point in time can be found by integrating the density over all points:

$N=\int_{V} \rho(\vec{r}) d \vec{r}$

The change in $N$ over time depends upon the flux, which can be found by integration over a surface or, equivalently, a volume:

$\frac{d N}{d t}=J_{i n}-J_{o u t}=-\oint_{\partial V} \vec{J} \cdot d \vec{S}=-\int_{V} \vec{\nabla} \cdot \vec{J} d V$

We can rewrite the change in the number of particles in terms of the density:

$\frac{d N}{d t}=\int \frac{d \rho}{d t} d \vec{r}=-\int \vec{\nabla} \cdot \vec{J} d V$

Removing the spatial integration and rearranging gives

$\frac{d \rho}{d t}+\vec{\nabla} \cdot \vec{J}=0$

To express the equation in terms of density and velocity, we rewrite the flux as $\vec{J}=\rho \vec{v}$, so that $\vec{\nabla} \cdot \vec{J}=\vec{\nabla} \cdot(\rho \vec{v})$. Then the conservation of mass is given by:

$\frac{d \rho}{d t}+\vec{\nabla} \cdot(\rho \vec{v})=0$

Continuity Equations

In general, for any dynamic quantity $A$, we can define a density $\rho A$ and write down a continuity equation. This equation will be determined by the interaction between currents $\vec{J}$ and sources $\sigma$:

$\frac{\partial \rho A}{\partial t}+\vec{\nabla} \cdot \vec{J}=\sigma$

The total current $\vec{J}$ can be modelled as the sum of a conservative term $\vec{J}_{V}=\rho A \vec{v}$ and a dissipative term $\vec{J}_{D}$:

$\vec{J}=\vec{J}_{V}+\vec{J}_{D}$

The source $\sigma$ can be written as the sum of external sources $\sigma_{\text {ext }}$ and production sources $\sigma_{D}$:

$\sigma=\sigma_{\text {ext }}+\sigma_{D}$

Therefore, the continuity equation for $A$ can be written more explicitly as

$\frac{\partial \rho A}{\partial t}+\vec{\nabla} \cdot(\rho A \vec{v})+\vec{\nabla} \cdot \vec{J}_{D}=\sigma_{e x t}+\sigma_{D}$

In the physical world there are five conserved quantities: the density, the momentum (in three directions), and the energy (or entropy):

$A=\{1, m \vec{v}, S\}$

Therefore, we will find five continuity equations. We have already found the continuity equation for density, and in the next two sections we will find the equations for momentum and entropy.

Momentum Equation (Navier-Stokes Equations)

To find the continuity equation for momentum, substitute $A=m \vec{v}$ into the general continuity equation:

$\frac{\partial \rho m \vec{v}}{\partial t}+\vec{\nabla} \cdot(\rho m \vec{v}: \vec{v})+\vec{\nabla} \cdot \overrightarrow{\vec{J}}_{D}=\vec{\sigma}_{e x t}+\vec{\sigma}_{D}$

We assume that the production source is zero. The external sources are a body force and the pressure gradient, which act to create a net momentum or acceleration:

$\vec{\sigma}_{\text {ext }}=\rho \vec{F}-\vec{\nabla} P$

The terms representing the conservative and dissipative currents are both tensors. This is because momentum is a vector, and so the current, which represents the change in momentum, must be a tensor. The conservative current is given by

$\overrightarrow{\vec{J}}_{V}=(\rho m \vec{v}: \vec{v})$

The dissipative current is the stress tensor, $\overrightarrow{\vec{J}}_{D}=-\overrightarrow{\vec{\Pi}}$. The continuity equation for momentum can then be written as

$m \frac{\partial \rho \vec{v}}{\partial t}+\vec{\nabla} \cdot(\rho m \vec{v}: \vec{v})+\vec{\nabla} P=\rho \vec{F}+\vec{\nabla} \cdot \overrightarrow{\vec{\Pi}}$

Let's take a closer look at the stress tensor.
For an isotropic medium, the stress tensor can be expressed as

$\Pi_{i, j}=\eta_{B}(\vec{\nabla} \cdot \vec{v}) \delta_{i, j}+\eta\left(\partial_{i} v_{j}+\partial_{j} v_{i}-\frac{2}{3}(\vec{\nabla} \cdot \vec{v}) \delta_{i, j}\right)$

where $\eta_{B}$ is the bulk viscosity. It gives the expected change in volume resulting from an applied stress. Likewise, $\eta$ is the shear viscosity. This gives the expected amount of shearing, or change in shape, resulting from an applied stress. The final term $\partial_{i} v_{j}+\partial_{j} v_{i}-\frac{2}{3}(\vec{\nabla} \cdot \vec{v}) \delta_{i, j}$ is a traceless symmetric component which changes the shape, but not the volume, of the medium. We can express the divergence of the stress tensor as

$(\vec{\nabla} \cdot \overrightarrow{\vec{\Pi}})_{i}=\sum_{j} \nabla_{j} \Pi_{j, i}=\left(\frac{1}{3} \eta+\eta_{B}\right) \nabla_{i}(\vec{\nabla} \cdot \vec{v})+\eta \nabla^{2} v_{i}$

With this, we can rewrite the momentum continuity equation as

$m \frac{\partial \rho \vec{v}}{\partial t}+\vec{\nabla} \cdot(\rho m \vec{v}: \vec{v})+\vec{\nabla} P=\left(\frac{1}{3} \eta+\eta_{B}\right) \vec{\nabla}(\vec{\nabla} \cdot \vec{v})+\eta \nabla^{2} \vec{v}$

This is also called the Navier-Stokes equation.

Entropy Equation (Heat Diffusion)

To find the continuity equation for entropy, substitute $A=s$ into the general continuity equation. In this case, we are thinking of the entropy for each particle and not the entire system, so a lowercase $s$ is used.

$\frac{\partial \rho s}{\partial t}+\vec{\nabla} \cdot(\rho s \vec{v})+\vec{\nabla} \cdot \vec{J}_{D}=\sigma_{e x t}+\sigma_{D}$

We can simplify this expression by assuming that there are no sources that create or destroy entropy, so

$\sigma_{\text {ext }}+\sigma_{D}=0$

We also know that entropy flows from high temperatures to low temperatures, so

$\vec{J}_{D} \propto-\vec{\nabla} T$

Writing this explicitly using the constant $\lambda$ (the thermal conductivity),

$\vec{J}_{D}=-\frac{\lambda \vec{\nabla} T}{T}$

Then, substituting this in gives the continuity equation

$\frac{\partial \rho s}{\partial t}+\vec{\nabla} \cdot(\rho s \vec{v})-\lambda \vec{\nabla} \cdot\left(\frac{\vec{\nabla} T}{T}\right)=0$

We now have expressions for the 5 continuity equations for the number of particles, the momentum, and the energy:

\begin{aligned} \frac{d \rho}{d t}+\vec{\nabla} \cdot(\rho \vec{v}) &=0 \\ m \frac{\partial \rho \vec{v}}{\partial t}+\vec{\nabla} \cdot(\rho m \vec{v}: \vec{v})+\vec{\nabla} P &=\vec{\nabla} \cdot \overrightarrow{\vec{\Pi}} \\ \frac{\partial \rho s}{\partial t}+\vec{\nabla} \cdot(\rho s \vec{v})-\lambda \vec{\nabla} \cdot\left(\frac{\vec{\nabla} T}{T}\right) &=0 \end{aligned}

The solution to this set of equations gives $\rho(k, t)$. Though it is impossible to solve analytically, approximate solutions can be obtained by linearizing the equations.

Linearized Hydrodynamic Equations

The hydrodynamic equations are impossible to solve analytically. However, it is possible to obtain approximate solutions by linearizing the equations. Define the operator

$\mathcal{L}(A)=\frac{\partial A}{\partial t}$

For a time-independent quantity $A_{S}$,

$\mathcal{L}\left(A_{S}\right)=\frac{\partial A_{S}}{\partial t}=0$

We can construct any quantity $A$ as the sum of a time-independent, "stable" part $A_{S}$ and a fluctuating part $\delta A$:

$A=A_{S}+\delta A$

Then we can write $\mathcal{L}(A)$ as an expansion.
If we truncate the expansion at the first-order, linear term, we find that

$\mathcal{L}(A)=\mathcal{L}\left(A_{S}+\delta A\right)=\mathcal{L}\left(A_{S}\right)+\mathcal{L}(\delta A)=\frac{\partial \delta A}{\partial t}$

For a homogeneous solution, $\rho$ is a constant. There is no collective kinetic motion, only small Boltzmann motions which average to zero. Therefore, $\overrightarrow{v_{o}}=0$. Entropy is also a constant. Therefore, we have three constants: $\rho_{o}$, $\overrightarrow{v_{o}}=0$, $S_{o}$.

Since the density, velocity, and entropy are constants for a homogeneous solution, we can construct these quantities for a non-homogeneous solution by expanding around them:

\begin{aligned} \rho &=\rho_{o}+\delta \rho \\ S &=S_{o}+\delta S \\ \vec{v} &=\delta \vec{v} \end{aligned}

(Since $\overrightarrow{v_{o}}=0$, the velocity is itself a fluctuation.) We can also expand around a constant temperature and pressure:

\begin{aligned} T &=T_{o}+\delta T \\ P &=P_{o}+\delta P \end{aligned}

Start by substituting the density expansion into the density continuity equation:

\begin{aligned} \frac{d}{d t}\left(\rho_{o}+\delta \rho\right)+\vec{\nabla} \cdot\left[\left(\rho_{o}+\delta \rho\right) \vec{v}\right] &=0 \\ \frac{d \delta \rho}{d t}+\vec{\nabla} \cdot\left(\rho_{o} \vec{v}\right) &=0 \end{aligned}

This is the linearized number density continuity equation. To reach the final expression, we have used that $\frac{d \rho_{o}}{d t}=0$. We have also ignored the term $\vec{\nabla} \cdot(\delta \rho \vec{v})$ because it is of quadratic order and we can assume that it is negligible.

In order to linearize the continuity equation for entropy, begin by expanding the original expression:

$\rho \frac{\partial \delta s}{\partial t}+s_{o} \frac{\partial \rho}{\partial t}+s_{o} \vec{\nabla} \cdot(\rho \vec{v})+\rho \vec{\nabla} \cdot(\delta s \vec{v})-\lambda \vec{\nabla} \cdot\left(\frac{\vec{\nabla} T}{T}\right)=0$

The second and third terms can be combined and will go to zero by conservation of mass. The fourth term is of quadratic order and is negligible. Then, by substituting in the expansions and keeping only the linear terms, the expression simplifies to:

$\rho_{o} \frac{\partial \delta s}{\partial t}-\frac{\lambda}{T_{o}} \nabla^{2} \delta T=0$

Similarly, we can linearize the momentum continuity equation; the result is

$m \rho_{o} \frac{\partial \vec{v}}{\partial t}+\vec{\nabla} \delta P=\left(\frac{1}{3} \eta+\eta_{B}\right) \vec{\nabla}(\vec{\nabla} \cdot \vec{v})+\eta \nabla^{2} \vec{v}$

In summary, the linearized hydrodynamic equations are given by

\begin{aligned} \frac{d \delta \rho}{d t}+\rho_{o} \vec{\nabla} \cdot \vec{v} &=0 \\ m \rho_{o} \frac{\partial \vec{v}}{\partial t}+\vec{\nabla} \delta P &=\left(\frac{1}{3} \eta+\eta_{B}\right) \vec{\nabla}(\vec{\nabla} \cdot \vec{v})+\eta \nabla^{2} \vec{v} \\ \rho_{o} \frac{\partial \delta s}{\partial t}-\frac{\lambda}{T_{o}} \nabla^{2} \delta T &=0 \end{aligned}

Transverse Hydrodynamic Modes

In order to solve the eigenvalue equation, we need to decompose the velocity into its transverse and longitudinal components.
Begin by rewriting the velocity in terms of its Fourier components

$\vec{v}(\vec{r}, t)=\frac{1}{(2 \pi)^{3}} \int \vec{v}(\vec{k}, t) e^{i \vec{k} \cdot \vec{r}} d \vec{k}$

Through substitution, the momentum continuity equation becomes

$m \rho_{o} \frac{\partial \overrightarrow{v_{k}}}{\partial t}+i \vec{k} P_{k}=\left(\frac{1}{3} \eta+\eta_{B}\right)(i \vec{k})\left(i \vec{k} \cdot \overrightarrow{v_{k}}\right)+\eta(i k)^{2} \overrightarrow{v_{k}}$

Now, decompose $\vec{v}(\vec{k}, t)$ into its 3 components

$\vec{v}(\vec{k}, t)=v_{x k}(t) \hat{x}+v_{y k}(t) \hat{y}+v_{z k}(t) \hat{z}$

A longitudinal mode is one in which the velocity vector points parallel to the $\vec{k}$ vector, and a transverse mode is one in which the velocity vector points perpendicular to the $\vec{k}$ vector. We can decide arbitrarily that the $\vec{k}$ vector points in the $z$ direction. Therefore, $v_{z k}(t)$ is the longitudinal current, and $v_{x k}(t)$ and $v_{y k}(t)$ are the transverse currents. For a transverse component $v_{T k}$, the pressure term and the $\vec{k}(\vec{k} \cdot \overrightarrow{v_{k}})$ term drop out, since both point along $\vec{k}$. This leaves

$m \rho_{o} \frac{\partial v_{T k}}{\partial t}=-\eta k^{2} v_{T k}$

which is easy to solve, yielding the solution

$v_{T k}(t)=v_{T k}(0) e^{-\gamma_{T} k^{2} t}$

where $\gamma_{T}=\frac{\eta}{m \rho_{o}}$ is the kinematic shear viscosity. This result looks like a diffusion equation

$\frac{\partial \rho_{k}}{\partial t}=-D k^{2} \rho_{k}$

Therefore, $\gamma_{T}$ can be interpreted as a diffusion constant for the velocity.

Longitudinal Hydrodynamic Modes

Solving the Continuity Equations

It is much more difficult to solve for the longitudinal velocity component of the current because not as many terms go to zero.

Fourier Transform of the Density

Begin by writing the density in terms of its Fourier components

$\rho(\vec{r}, t)=\frac{1}{(2 \pi)^{3}} \int \rho(\vec{k}, t) e^{i \vec{k} \cdot \vec{r}} d \vec{k}$

Using this expression, the linearized hydrodynamic equations can be written in terms of the Fourier components of the density. Note that hereafter, for brevity, we will drop the $\delta$ signs of the transformed variables. Readers should keep in mind that these $k$-space variables always refer to the Fourier transform of the fluctuations away from equilibrium.

\begin{aligned} \frac{d \rho_{k}}{d t}+i k \rho_{o} v_{k} &=0 \\ m \rho_{o} \frac{\partial v_{k}}{\partial t}+i k P_{k}+\left(\frac{4}{3} \eta+\eta_{B}\right) k^{2} v_{k} &=0 \\ \rho_{o} \frac{\partial s_{k}}{\partial t}+\frac{\lambda}{T_{o}} k^{2} T_{k} &=0 \end{aligned}

We also denote the velocity as $v_{k}$ for simplicity. However, it is important to remember that this only refers to the longitudinal velocity, $v_{z, k}$.

Choosing Independent Variables

As written, the three continuity equations have five variables: $\rho_{k}, v_{k}, T_{k}, P_{k}$, and $s_{k}$. Luckily, these variables are not all independent. Let $\rho_{k}, v_{k}$, and $T_{k}$ be the three independent variables. We can use thermodynamic relations to rewrite $P_{k}$ and $s_{k}$ in terms of these variables. The Helmholtz free energy is a function of temperature and density, $F(T, \rho)$.
We can write this in differential form:

$d F=-S d T+P d \rho$

This is a total differential of the form:

$d z=\left(\frac{\partial z}{\partial x}\right)_{y} d x+\left(\frac{\partial z}{\partial y}\right)_{x} d y$

Using this, we can write the entropy $S$ and the pressure $P$ in differential form:

\begin{aligned} S &=-\left(\frac{\partial F}{\partial T}\right)_{\rho} \\ P &=\left(\frac{\partial F}{\partial \rho}\right)_{T} \end{aligned}

Define the variable $\alpha$ as

$\alpha=\left(\frac{\partial P}{\partial T}\right)_{\rho}=\frac{\partial}{\partial T}\left(\frac{\partial F}{\partial \rho}\right)_{T}=\frac{\partial}{\partial \rho}\left(\frac{\partial F}{\partial T}\right)_{\rho}=-\left(\frac{\partial S}{\partial \rho}\right)_{T}$

Here we have used the property that for continuous functions, the mixed partial second derivatives are equal. This gives one of the Maxwell relations. We will also use a couple of well-known relations, the isothermal speed of sound:

$c_{T}=\sqrt{\frac{1}{m}\left(\frac{\partial P}{\partial \rho}\right)_{T}}$

and the specific heat:

$C_{V}=T\left(\frac{\partial S}{\partial T}\right)_{\rho}$

With these relations in hand, we can rewrite the pressure $P$ and the entropy $S$ in terms of the temperature $T$ and the density $\rho$:

\begin{aligned} d P &=\left(\frac{\partial P}{\partial \rho}\right)_{T} d \rho+\left(\frac{\partial P}{\partial T}\right)_{\rho} d T=m c_{T}^{2} d \rho+\alpha d T \\ T_{o} d S &=T_{o}\left(\frac{\partial S}{\partial \rho}\right)_{T} d \rho+T_{o}\left(\frac{\partial S}{\partial T}\right)_{\rho} d T=-T_{o} \alpha d \rho+C_{V} d T \end{aligned}

The Condensed Equations

With these substitutions, we can rewrite the continuity equations in terms of the independent variables $\rho_{k}, v_{k}$, and $T_{k}$:

\begin{aligned} \frac{d \rho_{k}}{d t}+i k \rho_{o} v_{k} &=0 \\ \dot{v}_{k}+\frac{i k}{m \rho_{o}}\left[c_{T}^{2} m \rho_{k}+\alpha T_{k}\right]+b k^{2} v_{k} &=0 \\ \dot{T}_{k}+i k\left(\frac{T_{o} \alpha \rho_{o}}{C_{V}}\right) v_{k}+a k^{2} T_{k} &=0 \end{aligned}

where we have defined the constants $a$ and $b$

\begin{aligned} a &=\frac{\lambda}{\rho_{o} C_{V}} \\ b &=\left(\eta_{B}+\frac{4}{3} \eta\right) \frac{1}{m \rho_{o}} \end{aligned}

and $\dot{\rho}_{k}=-i k \rho_{o} v_{k}$ is used to simplify the last equation.
The Laplace Transform

To further simplify the equations, use the Laplace transform of each variable:

$\begin{array}{lll} \hat{\rho}_{k}(z)=\int_{0}^{\infty} e^{-z t} \rho_{k}(t) d t & \text { and } & \rho_{k}(t)=\frac{1}{2 \pi i} \int_{\delta-i \infty}^{\delta+i \infty} e^{z t} \hat{\rho}_{k}(z) d z \\ \hat{v}_{k}(z)=\int_{0}^{\infty} e^{-z t} v_{k}(t) d t & \text { and } & v_{k}(t)=\frac{1}{2 \pi i} \int_{\delta-i \infty}^{\delta+i \infty} e^{z t} \hat{v}_{k}(z) d z \\ \hat{T}_{k}(z)=\int_{0}^{\infty} e^{-z t} T_{k}(t) d t & \text { and } & T_{k}(t)=\frac{1}{2 \pi i} \int_{\delta-i \infty}^{\delta+i \infty} e^{z t} \hat{T}_{k}(z) d z \end{array}$

Using this transform, the continuity equations can be rewritten in matrix form:

$\left[\begin{array}{ccc} z & i k \rho_{o} & 0 \\ i k \frac{c_{T}^{2}}{\rho_{o}} & z+b k^{2} & \frac{i k}{m \rho_{o}} \alpha \\ 0 & \frac{i k}{C_{V}} \alpha T_{o} \rho_{o} & z+a k^{2} \end{array}\right]\left[\begin{array}{c} \hat{\rho}_{k} \\ \hat{v}_{k} \\ \hat{T}_{k} \end{array}\right]=\left[\begin{array}{c} \rho_{k}(0) \\ v_{k}(0) \\ T_{k}(0) \end{array}\right]$

Isothermal and Adiabatic Speed of Sound

The adiabatic $c_{S}$ and isothermal $c_{T}$ speeds of sound are given by:

\begin{aligned} c_{S}^{2} &=\frac{1}{m}\left(\frac{\partial P}{\partial \rho}\right)_{S} \\ c_{T}^{2} &=\frac{1}{m}\left(\frac{\partial P}{\partial \rho}\right)_{T} \end{aligned}

We can rewrite these quantities using:

$\left(\frac{\partial P}{\partial \rho}\right)_{T}=\frac{\left(\frac{\partial P}{\partial S}\right)_{T}}{\left(\frac{\partial \rho}{\partial S}\right)_{T}}=\frac{\left(\frac{\partial P}{\partial T}\right)_{S}\left(\frac{\partial T}{\partial S}\right)_{P}}{\left(\frac{\partial \rho}{\partial T}\right)_{S}\left(\frac{\partial T}{\partial S}\right)_{\rho}}=\frac{C_{V}}{C_{P}}\left(\frac{\partial P}{\partial \rho}\right)_{S}$

Here, we have used the constant-volume $C_{V}$ and constant-pressure $C_{P}$ heat capacities:

\begin{aligned} C_{V} &=T\left(\frac{\partial S}{\partial T}\right)_{\rho} \\ C_{P} &=T\left(\frac{\partial S}{\partial T}\right)_{P} \end{aligned}

and the identity for differentials:

$\left(\frac{\partial x}{\partial y}\right)_{z}=\left(\frac{\partial x}{\partial z}\right)_{y}\left(\frac{\partial z}{\partial y}\right)_{x}$

Now, we can show that the ratio is equal to:

$\frac{c_{T}^{2}}{c_{S}^{2}}=\frac{\frac{1}{m} \frac{C_{V}}{C_{P}}\left(\frac{\partial P}{\partial \rho}\right)_{S}}{\frac{1}{m}\left(\frac{\partial P}{\partial \rho}\right)_{S}}=\frac{C_{V}}{C_{P}}=\frac{1}{\gamma}$

Thermodynamic identities can be used to rewrite the quantity $m c_{T}^{2}\left(C_{P}-C_{V}\right)$. Start by writing the expression explicitly in terms of thermodynamic variables:

$m c_{T}^{2}\left(C_{P}-C_{V}\right)=T\left(\frac{\partial P}{\partial \rho}\right)_{T}\left[\left(\frac{\partial S}{\partial T}\right)_{P}-\left(\frac{\partial S}{\partial T}\right)_{\rho}\right]$

In order to simplify this expression, we will use another identity for differentials:

$\left(\frac{\partial x}{\partial y}\right)_{z}=\left(\frac{\partial x}{\partial y}\right)_{w}+\left(\frac{\partial x}{\partial w}\right)_{y}\left(\frac{\partial w}{\partial y}\right)_{z}$

Using this identity, combined with the identity introduced in the previous section, we can rewrite the first term in the expression:

$\left(\frac{\partial S}{\partial T}\right)_{P}=\left(\frac{\partial S}{\partial T}\right)_{\rho}+\left(\frac{\partial S}{\partial \rho}\right)_{T}\left(\frac{\partial \rho}{\partial T}\right)_{P}=\left(\frac{\partial S}{\partial T}\right)_{\rho}-\left(\frac{\partial S}{\partial \rho}\right)_{T}\left(\frac{\partial \rho}{\partial P}\right)_{T}\left(\frac{\partial P}{\partial T}\right)_{\rho}$

Now, plug this into the expression above and cancel terms, to obtain the new identity:

$m c_{T}^{2}\left(C_{P}-C_{V}\right)=-T_{o}\left(\frac{\partial S}{\partial \rho}\right)_{T}\left(\frac{\partial P}{\partial T}\right)_{\rho}$

Adiabatic and Isothermal Compressibility

The adiabatic $\chi_{S}$ and isothermal $\chi_{T}$ compressibilities are given by:

\begin{aligned} \chi_{S} &=\frac{1}{\rho}\left(\frac{\partial \rho}{\partial P}\right)_{S} \\ \chi_{T} &=\frac{1}{\rho}\left(\frac{\partial \rho}{\partial P}\right)_{T} \end{aligned}

Therefore,

$\gamma=\frac{C_{P}}{C_{V}}=\frac{\chi_{T}}{\chi_{S}}=\frac{c_{S}^{2}}{c_{T}^{2}}$

Eigensolution

Now, we can solve the set of continuity equations for the density. The density can be found from:

$\hat{\rho}_{k}(z)=\frac{\operatorname{Det}^{\prime} M(1 \mid 1)}{\operatorname{Det} M} \rho_{k}(0)=\left(M^{-1}\right)_{11} \rho_{k}(0)$

Note that the Laplace transform of the intermediate scattering function is:

$\hat{F}(\vec{k}, z)=\left(M^{-1}\right)_{11} \frac{1}{N}\left\langle\left|\rho_{k}\right|^{2}\right\rangle$

To solve for $\hat{\rho}_{k}(z)$, find $\operatorname{Det}^{\prime} M(1 \mid 1)$, the minor of the $(1,1)$ element, and $\operatorname{Det} M$:

\begin{aligned} \operatorname{Det}^{\prime} M(1 \mid 1) &=\left(z+a k^{2}\right)\left(z+b k^{2}\right)+\frac{k^{2} T_{o} \alpha^{2}}{m C_{V}} \\ &=\left(z+a k^{2}\right)\left(z+b k^{2}\right)+k^{2} c_{T}^{2}(\gamma-1) \end{aligned}

and

$\operatorname{Det} M=z\left(z+a k^{2}\right)\left(z+b k^{2}\right)+z k^{2} c_{S}^{2}+a k^{4} c_{T}^{2}$

where we have used some of the thermodynamic identities defined in the previous section. The eigenfrequencies can be obtained from $\operatorname{Det} M(z)=0$. The eigenvalues can be found using the perturbation expansion $z=s_{0}+s_{1} k+s_{2} k^{2}+\ldots$. The solutions are

\begin{aligned} z_{0} &=-a \frac{c_{T}^{2}}{c_{S}^{2}} k^{2}=-\frac{a}{\gamma} k^{2} \\ z_{\pm} &=\pm i c_{S} k-\Gamma k^{2} \end{aligned}

where

$\Gamma=\frac{1}{2}\left[(a+b)-\frac{a}{\gamma}\right]$

To lowest order in $k$, we have

$\left(M^{-1}\right)_{11}=\frac{\operatorname{Det}^{\prime} M(1 \mid 1)}{\operatorname{Det} M} \approx \frac{z^{2}+\left(1-\frac{1}{\gamma}\right) c_{S}^{2} k^{2}}{z^{3}+z k^{2} c_{S}^{2}}=\left(1-\frac{1}{\gamma}\right) \frac{1}{z}+\frac{1}{\gamma} \frac{z}{z^{2}+k^{2} c_{S}^{2}}$

Then, to second order in $k$ (reinstating the damping), we have

$\rho_{k}(t)=\rho_{k}(0)\left[\left(1-\frac{1}{\gamma}\right) e^{-\frac{a}{\gamma} k^{2} t}+\frac{1}{\gamma} \cos \left(c_{S} k t\right) e^{-\Gamma k^{2} t}\right]$

The first term gives the contribution from thermal fluctuations, while the second term gives the solution for a damped acoustic wave. Notice that the integrated intensity of the first term is $\left(1-\frac{1}{\gamma}\right)$ and the integrated intensity of the second term is $\frac{1}{\gamma}$.
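The perturbative roots above are easy to check numerically. The following sketch is an added illustration with arbitrary parameter values: it builds the hydrodynamic matrix for a small $k$ and compares its exact eigenvalues against $z_{0} \approx -\frac{a}{\gamma} k^{2}$ and $z_{\pm} \approx \pm i c_{S} k-\Gamma k^{2}$ (the value of $\alpha$ is constructed from the identity $\frac{T_{o} \alpha^{2}}{m C_{V}}=c_{T}^{2}(\gamma-1)$ used in the text):

```python
import numpy as np

# Arbitrary illustrative fluid parameters in consistent units
m, rho0, T0, Cv = 1.0, 0.8, 1.0, 1.0
cT, gamma = 1.2, 1.4                 # isothermal sound speed, Cp/Cv
a, b = 0.05, 0.08                    # thermal and viscous diffusivities
cS = cT * np.sqrt(gamma)             # adiabatic sound speed, cS^2 = gamma cT^2
alpha = np.sqrt(m * Cv * cT**2 * (gamma - 1) / T0)

k = 0.01
# The matrix is z*I + M0; Det M(z) = 0 means z is an eigenvalue of -M0.
M0 = np.array([
    [0.0,                   1j * k * rho0,                     0.0],
    [1j * k * cT**2 / rho0, b * k**2,       1j * k * alpha / (m * rho0)],
    [0.0,                   1j * k * alpha * T0 * rho0 / Cv,   a * k**2],
])
z_exact = np.linalg.eigvals(-M0)

Gamma = 0.5 * ((a + b) - a / gamma)
z_approx = np.array([-a / gamma * k**2,
                     1j * cS * k - Gamma * k**2,
                     -1j * cS * k - Gamma * k**2])
print(np.sort_complex(z_exact))      # thermal mode plus two sound modes
print(np.sort_complex(z_approx))     # should agree to O(k^2)
```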
Light Scattering

The Landau-Placzek ratio gives the ratio between the intensity of thermal and acoustic scattering:

$\frac{I_{\text {thermal }}}{I_{\text {acoustic }}}=\frac{\left\langle(\delta \rho)^{2}\right\rangle_{\text {thermal }}}{\left\langle(\delta \rho)^{2}\right\rangle_{\text {acoustic }}}=\frac{\left(\frac{\partial \rho}{\partial S}\right)_{P}^{2}\left\langle\Delta S^{2}\right\rangle}{\left(\frac{\partial \rho}{\partial P}\right)_{S}^{2}\left\langle\Delta P^{2}\right\rangle}=\frac{C_{P}-C_{V}}{C_{V}}=\gamma-1$

Note that the dynamic structure factor is twice the real part of the Laplace transform of the intermediate scattering function (Figure 3.9):

$S(\vec{k}, \omega)=\int_{-\infty}^{\infty} F(\vec{k}, t) e^{-i \omega t} d t=2 \operatorname{Re} \hat{F}(\vec{k}, z=-i \omega)$

The initial value of this function is

$F(\vec{k}, 0)=\frac{1}{N}\left\langle\left|\delta \rho_{k}\right|^{2}\right\rangle=1+\rho_{o} \tilde{h}(\vec{k})$

which in the $k \rightarrow 0$ limit equals $\frac{\rho_{o} \chi_{T}}{\beta}$.

Acoustic Scattering

By ignoring the coupling to entropy flow, we have

$d P=\left(\frac{\partial P}{\partial \rho}\right)_{S} d \rho$

so that

\begin{aligned} \frac{d \delta \rho_{k}}{d t}+i k \rho_{o} v_{k} &=0 \\ \dot{v}_{k}+\frac{i c_{S}^{2} k}{\rho_{o}} \rho_{k}+b k^{2} v_{k} &=0 \end{aligned}

For an ideal gas, $b=0$, and so we get a propagating sound wave

$z=\pm i c_{S} k$

In a viscous liquid, $b \neq 0$, and so we get a propagating acoustic wave with a damping term

$z=\pm i c_{S} k-\frac{1}{2} b k^{2}$
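The density relaxation derived above translates into the familiar Rayleigh-Brillouin triplet: Fourier transforming $\rho_{k}(t)$ gives a central (Rayleigh) Lorentzian of width $\frac{a}{\gamma} k^{2}$ and two Brillouin lines at $\omega=\pm c_{S} k$ of width $\Gamma k^{2}$, with integrated intensities in the Landau-Placzek ratio $\gamma-1$. A minimal sketch of this spectrum, added here as an illustration with arbitrary parameters:

```python
import numpy as np

def rayleigh_brillouin(omega, k, a, gamma, cS, Gamma):
    """Normalized hydrodynamic S(k, w): Fourier transform of
    (1 - 1/gamma) exp(-(a/gamma) k^2 |t|)
      + (1/gamma) cos(cS k t) exp(-Gamma k^2 |t|)."""
    def lorentzian(w, width):
        return 2 * width / (w**2 + width**2)
    central = (1 - 1 / gamma) * lorentzian(omega, a / gamma * k**2)
    stokes = 0.5 / gamma * lorentzian(omega - cS * k, Gamma * k**2)
    anti_stokes = 0.5 / gamma * lorentzian(omega + cS * k, Gamma * k**2)
    return central + stokes + anti_stokes

omega = np.linspace(-2.0, 2.0, 2001)
spectrum = rayleigh_brillouin(omega, k=1.0, a=0.05, gamma=1.4, cS=1.0, Gamma=0.04)
# Integrated Rayleigh / Brillouin intensity ratio is (1 - 1/gamma)/(1/gamma)
# = gamma - 1 = 0.4, the Landau-Placzek ratio.
```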
3.3: Transport Coefficients
Before we jump into the next section on transport equations, let's take a moment to briefly summarize what we have seen in this chapter and where we are going:

1. The response of a liquid to an external probe $\frac{d^{2} \sigma}{d \omega d \Omega}$ is given by spontaneous time-dependent fluctuations, described in $G(\vec{r}, t)$ or $S(\vec{k}, \omega)$.
2. Hydrodynamic equations describe the decay of spontaneous fluctuations.
3. Hydrodynamic modes can be used to find transport coefficients.

Diffusion Constant

We will begin our exploration of transport coefficients with the diffusion constant. We will use the concepts developed in this chapter to find three different expressions for the diffusion constant. These expressions are called Einstein's relation, the Green-Kubo relation, and the scattering function in the hydrodynamic limit.

Einstein's Relation

Define a single-particle correlation function

$G_{S}(\vec{r}, t)=\langle\delta(\vec{r}(t)-\vec{r}(0)-\vec{r})\rangle$

Taking the Fourier transform into $\vec{k}$ space gives the self-intermediate scattering function

$F_{S}(\vec{k}, t)=\langle\exp [-i \vec{k} \cdot(\vec{r}(t)-\vec{r}(0))]\rangle=\left\langle\rho_{s, k}(t) \rho_{s,-k}(0)\right\rangle$

All transport coefficients are defined for length and time scales where $k \rightarrow 0$ and $\omega \rightarrow 0$. In real space, they apply to relatively long length and time scales. Therefore, hydrodynamic theory applies. Recall that hydrodynamic theory applies on coarse-grained scales much larger and longer than the characteristic scales of molecular interactions. Apply Fick's law to the problem:

$\dot{\rho}=D \nabla^{2} \rho$

Therefore

$\dot{\rho}_{k}=-D k^{2} \rho_{k}$

and

$F_{S}(\vec{k}, t)=e^{-k^{2} D t}$

We now have two equations for $F_{S}(\vec{k}, t)$. Expand both of them to order $k^{2}$ and set them equal:

$1-k^{2} D t+\ldots=1-\frac{k^{2}}{2}\left\langle(z(t)-z(0))^{2}\right\rangle+\ldots$

Then solve for $D$:

$D=\lim _{t \rightarrow \infty} \frac{1}{2 t}\left\langle|z(t)-z(0)|^{2}\right\rangle$

This is Einstein's relation.

The Green-Kubo Relation

To find the Green-Kubo relation, use time-invariance to rewrite the thermal average in Einstein's relation:

\begin{aligned} \left\langle|z(t)-z(0)|^{2}\right\rangle &=\left\langle\int_{0}^{t} \int_{0}^{t} v\left(t_{1}\right) v\left(t_{2}\right) d t_{1} d t_{2}\right\rangle \\[4pt] &=2 \int_{0}^{t}(t-\tau) C(\tau) d \tau \end{aligned}

where

$C(t)=\left\langle v_{z}(t) v_{z}(0)\right\rangle=\frac{1}{3}\langle\vec{v}(t) \cdot \vec{v}(0)\rangle$

Therefore

$D=\lim _{t \rightarrow \infty} \frac{1}{2 t}\left\langle|z(t)-z(0)|^{2}\right\rangle=\int_{0}^{\infty} C(\tau) d \tau$

This is called the Green-Kubo relation. In general, for any variable $A(t)$ we have

$\int_{0}^{\infty}\langle\dot{A}(t) \dot{A}(0)\rangle d t=\lim _{t \rightarrow \infty} \frac{1}{2 t}\left\langle|A(t)-A(0)|^{2}\right\rangle$

Relation to Scattering

We can also relate the diffusion constant to scattering, such as incoherent neutron scattering.
The dynamic structure factor is related to the diffusion constant through

$S_{s}(k, \omega)=\int_{-\infty}^{\infty} e^{i \omega t} F_{s}(k, t) d t=\frac{2 D k^{2}}{\omega^{2}+\left(D k^{2}\right)^{2}}$

Then solve this equation for $D$:

$D=\frac{1}{2} \lim _{\omega \rightarrow 0} \lim _{k \rightarrow 0} \frac{\omega^{2}}{k^{2}} S_{s}(k, \omega)$

Therefore

\begin{aligned} D &=-\frac{1}{2} \lim _{\omega \rightarrow 0} \lim _{k \rightarrow 0} \frac{1}{k^{2}} \int \ddot{F}_{s}(k, t) e^{i \omega t} d t \\[4pt] &=\lim _{\omega \rightarrow 0} \operatorname{Re} \int_{0}^{\infty} C(t) e^{i \omega t} d t \\[4pt] &=\int_{0}^{\infty}\langle v(t) v(0)\rangle d t \end{aligned}

This is the final expression for the diffusion constant. In this section we showed that there are three different methods for finding the diffusion constant: Einstein's relation, the Green-Kubo relation, and the scattering function as $\omega \rightarrow 0$ and $k \rightarrow 0$. This process can be generalized for different types of transport coefficients. In the next two sections, we will evaluate the viscosity coefficients and the thermal transport coefficients using these three methods.

Viscosity Coefficients

In this section, we will evaluate the viscosity coefficients $\eta$ and $\eta_{B}$ using Einstein's relation, the Green-Kubo relation, and the scattering function in the hydrodynamic limit ($\omega \rightarrow 0$ and $k \rightarrow 0$).

1. The Transverse Current

Define the transverse current as the sum of the velocity components in the x-direction

$J_{x}=\sum_{i} v_{i x}(t) \delta\left(\vec{r}-\vec{r}_{i}(t)\right)$

The Fourier transform is

$J_{k}=\sum_{i} v_{i x}(t) \exp \left(-i \vec{k} \cdot \vec{r}_{i}(t)\right)$

Therefore, the transverse current correlation function is

\begin{aligned} C_{t}(k, t) &=\frac{1}{N}\left\langle J_{k}(t) J_{-k}(0)\right\rangle \\[4pt] &=\frac{1}{N} \sum_{i j}\left\langle v_{i x}(t) v_{j x}(0) \exp \left[-i \vec{k} \cdot\left(\vec{r}_{i}(t)-\vec{r}_{j}(0)\right)\right]\right\rangle \end{aligned}

On the other hand, the Navier-Stokes equation predicts that

$\dot{J}_{x}-\nu_{t} \nabla^{2} J_{x}=0$

where $\nu_{t}=\frac{\eta}{m \rho_{o}}$ is the kinematic shear viscosity. The Fourier transform of this relation is

$\dot{J}_{k}+\nu_{t} k^{2} J_{k}=0$

which yields the solution

$J_{k}(t)=J_{k}(0) e^{-\nu_{t} k^{2} t}$

Using this expression, the transverse current correlation function is

$C_{t}(k, t)=\frac{1}{N}\left\langle J_{k}(0) J_{-k}(0)\right\rangle e^{-\nu_{t} k^{2} t}=C_{t}(k, 0) e^{-\nu_{t} k^{2} t}$

Now, we have two different expressions for the transverse current correlation function.

2. To complete the expression for the transverse current correlation function, we must find $C_{t}(k, 0)$. Using the first expression for $C_{t}(k, t)$, we find that

\begin{aligned} C_{t}(k, 0) &=\frac{1}{N}\left\langle\sum_{i} v_{i x}(0) \exp \left(-i \vec{k} \cdot \vec{r}_{i}(0)\right) \sum_{j} v_{j x}(0) \exp \left(i \vec{k} \cdot \vec{r}_{j}(0)\right)\right\rangle \\ &=\frac{1}{N} \sum_{i j}\left\langle v_{o}^{2} \delta_{i j} \exp \left[-i \vec{k} \cdot\left(\vec{r}_{i}-\vec{r}_{j}\right)\right]\right\rangle \\ &=v_{o}^{2} \end{aligned}

where

$\left\langle v_{i x} v_{j x}\right\rangle=\delta_{i j}\left\langle v_{i x} v_{i x}\right\rangle=\delta_{i j} \frac{1}{\beta m}=\delta_{i j} v_{o}^{2}$

Note that $C_{t}(k, 0)$ is independent of $k$. Now, expand the two expressions for the transverse current to the order of $k^{2}$.
Set them equal and solve for $\nu_{t}$:

$C_{t}(k, t)=C_{t}(k, 0)\left(1-\nu_{t} k^{2} t\right)=\frac{1}{N} \sum_{i j}\left\langle v_{i}(t) v_{j}(0)\left[1-\frac{k^{2}}{2}\left(z_{i}(t)-z_{j}(0)\right)^{2}\right]\right\rangle$

Then we have

$C_{t}(k, 0)\, \nu_{t}=\lim _{t \rightarrow \infty} \frac{1}{2 t N} \sum_{i j}\left\langle v_{i}(t) v_{j}(0)\left[z_{i}(t)-z_{j}(0)\right]^{2}\right\rangle$

3. To simplify this equation, use the momentum conservation condition $\sum_{i} v_{i}(t)=\sum_{i} v_{i}(0)$. Then we can write that

$\left\langle\sum_{i} v_{i}(t) z_{i}^{2}(t) \sum_{j} v_{j}(0)\right\rangle=\left\langle\sum_{i j} v_{i}(t) z_{i}^{2}(t) v_{j}(t)\right\rangle=\sum_{i}\left\langle v_{i}^{2}(t) z_{i}^{2}(t)\right\rangle$

Then the viscosity coefficient is given by

$\eta=\lim _{t \rightarrow \infty} \frac{1}{V k_{B} T} \frac{1}{2 t}\left\langle[A(t)-A(0)]^{2}\right\rangle$

where $A=\sum_{i} P_{i x} z_{i}$. This is Einstein's expression for the viscosity coefficient.

4. Define $\sigma_{x z}$ as the time derivative of $A$:

$\dot{A}=\sigma_{x z}=\frac{d}{d t} \sum_{i} P_{i x} z_{i}$

Then we can write the viscosity coefficient as

$\eta=\frac{1}{V k_{B} T} \int_{0}^{\infty}\left\langle\sigma_{x z}(t) \sigma_{x z}(0)\right\rangle d t$

5. Define the Fourier transform of $C_{t}(\vec{r}, t)$ as $C_{t}(\vec{k}, \omega)$:

$C_{t}(\vec{k}, t)=v_{o}^{2} e^{-\nu_{t} k^{2} t} \Rightarrow C_{t}(\vec{k}, \omega)=v_{o}^{2} \frac{2 \nu_{t} k^{2}}{\omega^{2}+\left(\nu_{t} k^{2}\right)^{2}}$

Therefore, the viscosity coefficient can be written as

$\eta=\frac{\rho_{o} m^{2} \beta}{2} \lim _{\omega \rightarrow 0} \lim _{k \rightarrow 0} \frac{\omega^{2}}{k^{2}} C_{t}(\vec{k}, \omega)$

6. In general, $\sigma_{\alpha \beta}$ denotes

$\sigma_{\alpha \beta}=\frac{d}{d t} \sum_{i} P_{i \alpha} r_{i \beta}$

From the virial theorem, the thermal average of $\sigma_{\alpha \beta}$ is

$\left\langle\sigma_{\alpha \beta}\right\rangle=\delta_{\alpha \beta} P V$

The longitudinal current is given by

$J_{k}(t)=J_{k}(0) e^{-b k^{2} t}$

where

$b=\frac{1}{m \rho_{o}}\left(\eta_{B}+\frac{4}{3} \eta\right)$

Therefore, by analogy,

$\eta_{B}+\frac{4}{3} \eta=\frac{1}{V k_{B} T} \int_{0}^{\infty}\left\langle\delta \sigma_{z z}(t) \delta \sigma_{z z}(0)\right\rangle d t$

where $\delta \sigma_{z z}=\sigma_{z z}(t)-P V$.

Evaluation of the Thermal Transport Coefficients

1. Summary of the Transport Coefficients

Before we enter the topic of thermal transport, let's briefly review the transport coefficients we have defined in this chapter.

i) Diffusion Constant

$D=\int_{0}^{\infty}\left\langle v_{z}(t) v_{z}(0)\right\rangle d t$

ii) Viscosity Coefficients

\begin{aligned} \eta &=\frac{1}{V k_{B} T} \int_{0}^{\infty}\left\langle\sigma_{x z}(t) \sigma_{x z}(0)\right\rangle d t \\ \eta_{B}+\frac{4}{3} \eta &=\frac{1}{V k_{B} T} \int_{0}^{\infty}\left\langle\left[\sigma_{z z}(t)-P V\right]\left[\sigma_{z z}(0)-P V\right]\right\rangle d t \end{aligned}

where $\sigma_{\alpha \beta}=\frac{d}{d t} \sum_{i} P_{i \alpha} r_{i \beta}$

iii) Thermal Conductivity

$\lambda=\frac{1}{V k_{B} T^{2}} \int_{0}^{\infty}\langle\dot{A}(t) \dot{A}(0)\rangle d t$

where $A=\sum_{i} z_{i}\left[\frac{p_{i}^{2}}{2 m}+\frac{1}{2} \sum_{j \neq i} u_{i j}-\langle e\rangle\right]$

2. Mean Free Path Approximation

The mean free path approximation can be used to approximate the value of the diffusion constant and the viscosity coefficients. The mean free path approximation states that the motion of molecules is described by collisions. The behavior of these collisions is governed by two main assumptions:

i) The collisions are Markovian.
In other words, the velocity of a particle after a collision is random and is not correlated with the velocity before the collision.

ii) The distribution of times between collisions is Poissonian, $e^{-t / \tau_{c}}$.

Using this approximation, the diffusion constant is

$D=\int_{0}^{\infty}\left\langle v_{z}^{2}\right\rangle e^{-t / \tau_{c}} d t=\left\langle v_{z}^{2}\right\rangle \tau_{c}$

and the viscosity coefficient is

$\eta=\frac{1}{V k_{B} T} \int_{0}^{\infty}\left\langle\left(\sum_{i} P_{x i} v_{z i}\right)^{2}\right\rangle e^{-t / \tau_{c}} d t=\frac{N}{V k_{B} T}\left\langle P_{x i}^{2} v_{z}^{2}\right\rangle \tau_{c}$

3. Hard-Sphere Gas

For a hard-sphere gas, the average collision time $\tau_{c}$ is given by the mean free path $l$ divided by the mean speed $\bar{v}$:

$\tau_{c}=\frac{l}{\bar{v}}=\frac{1}{\sqrt{2} \pi \sigma^{2} \rho}\left[\frac{\pi m}{8 k_{B} T}\right]^{\frac{1}{2}}$

where $\sigma$ is the diameter of the particles. Then, substituting this expression for $\tau_{c}$ into $D$ and $\eta$ gives

\begin{aligned} D &=\frac{1}{4 \sigma^{2} \rho}\left[\frac{k_{B} T}{\pi m}\right]^{\frac{1}{2}} \\ \eta &=\frac{1}{4 \sigma^{2}}\left[\frac{m k_{B} T}{\pi}\right]^{\frac{1}{2}} \end{aligned}

Thermal Diffusion (Conduction)

Define the energy fluctuation mode

$E_{k}=\sum_{i} \delta e_{i}(t) e^{-i \vec{k} \cdot \vec{r}_{i}(t)}$

where $\delta e=e-\langle e\rangle$. The correlation function is

$C(k, t)=\sum_{i j}\left\langle\delta e_{i}(t) e^{-i \vec{k} \cdot \vec{r}_{i}(t)}\, \delta e_{j}(0) e^{i \vec{k} \cdot \vec{r}_{j}(0)}\right\rangle$

The initial value of this correlation function is

\begin{aligned} C(k, 0) &=\sum_{i j}\left\langle\delta e_{i} \delta e_{j}\right\rangle\left\langle\exp \left[-i \vec{k} \cdot\left(\overrightarrow{r_{i}}-\overrightarrow{r_{j}}\right)\right]\right\rangle \\[4pt] &=\sum_{i}\left\langle\delta e_{i} \delta e_{i}\right\rangle=\langle E(0) E(0)\rangle=N C_{V} k_{B} T^{2} \end{aligned}

where we have used the fact that $\left\langle\delta e_{i} \delta e_{j}\right\rangle=\delta_{i j}\left\langle(\delta e)^{2}\right\rangle$.

Now, expand the correlation function to the order of $k^{2}$:

\begin{aligned} C(k, t) &=C(k, 0)-\frac{k^{2}}{2} \sum_{i j}\left\langle\delta e_{i}(t) \delta e_{j}(0)\left(z_{i}(t)-z_{j}(0)\right)^{2}\right\rangle+\ldots \\[4pt] &=C(k, 0)-\frac{k^{2}}{2}\left\langle\left|\sum_{i} \delta e_{i}(t) z_{i}(t)-\sum_{i} \delta e_{i}(0) z_{i}(0)\right|^{2}\right\rangle+\ldots \end{aligned}

where we have used the conservation of energy to rewrite the expression. This allows us to write

$A=\sum_{i}\left[e_{i}(t)-\langle e\rangle\right] z_{i}(t)$

Conduction Equation

The conduction equation states that

$\frac{\partial \rho e}{\partial t}-\vec{\nabla} \cdot(\lambda \vec{\nabla} T)=0$

and therefore

$\frac{\partial E}{\partial t}-\frac{\lambda}{C_{V} \rho} \nabla^{2} E=0$

We can solve this equation for $E_{k}(t)$:

$E_{k}(t)=E_{k}(0) e^{-a k^{2} t}$

where $a=\frac{\lambda}{C_{V} \rho}$. Use this expression to write the correlation function

$C(k, t)=\left\langle E^{2}(0)\right\rangle e^{-a k^{2} t}=\left\langle E^{2}\right\rangle\left[1-a k^{2} t+\ldots\right]$

By equating the $k^{2}$ terms, we find that

$a k^{2} N C_{V} k_{B} T^{2}=\frac{k^{2}}{2 t}\left\langle|A(t)-A(0)|^{2}\right\rangle$

Therefore,

$\lambda=\frac{1}{V k_{B} T^{2}} \int_{0}^{\infty}\langle\dot{A}(t) \dot{A}(0)\rangle d t=\lim _{t \rightarrow \infty} \frac{1}{V k_{B} T^{2}} \frac{1}{2 t}\left\langle|A(t)-A(0)|^{2}\right\rangle$

where $e_{i}=\frac{p_{i}^{2}}{2 m}+\frac{1}{2} \sum_{j \neq i} U_{i j}$
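As a closing illustration for this section (an addition, not part of the original notes), the sketch below checks the Green-Kubo and Einstein routes to $D$ against each other on a synthetic trajectory. The velocity is generated as an Ornstein-Uhlenbeck (Langevin) process; the friction, temperature, and step size are arbitrary illustrative choices, and both estimators should approach the known answer $k_{B} T /(m \cdot \text{friction})$:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 1e-3, 200_000
friction, kBT, m = 5.0, 1.0, 1.0                 # arbitrary parameters
sigma = np.sqrt(2 * friction * kBT / m * dt)     # noise strength per step

# Ornstein-Uhlenbeck velocity: dv = -friction*v dt + sigma dW
v = np.zeros(n_steps)
for i in range(1, n_steps):
    v[i] = v[i-1] - friction * v[i-1] * dt + sigma * rng.normal()

# Green-Kubo: D = integral of <v(t) v(0)> dt   (expected kBT/(m*friction) = 0.2)
n_corr = 2000
C = np.array([np.mean(v[: n_steps - t] * v[t:]) for t in range(n_corr)])
D_gk = np.trapz(C, dx=dt)

# Einstein: D = <|z(t) - z(0)|^2> / (2 t) at long times
z = np.cumsum(v) * dt
t_long = n_steps // 2
D_e = np.mean((z[t_long:] - z[:-t_long]) ** 2) / (2 * t_long * dt)
print(D_gk, D_e)   # both near 0.2
```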
In the last chapter, we explored the low-frequency, long-wavelength behavior of a system that is disturbed from equilibrium. In the first section of this chapter, we study the opposite limit, and describe how a system behaves at very short times and high frequencies. The study of systems in this limit is referred to as molecular dynamics. We are ultimately interested in developing a set of expressions that describe a system at all times and frequencies. In section 2, we will introduce the projection operator and use it to derive the Generalized Langevin Equation. The projection operator allows us to study only the portion of the system we are interested in, and treat the rest as a statistical bath. In section 3, we will use the GLE to derive the viscoelastic model for transverse current. Finally, in section 4, we will introduce mode-coupling theory and discuss its ability to predict long-time tails in velocity correlation functions. For further information on the subjects covered in this chapter, please consult books by Hansen and McDonald [1], McQuarrie [2], Boon and Yip [3], and Berne [4].

04: Time Correlation Functions

Moment Expansion

In chapter 2, we introduced the concept of the time-correlation function. The correlation function for an operator $A(t)$ is given by

$C(t)=\langle A(t) A(0)\rangle=\operatorname{Tr} A(t) A(0) \rho_{e q}$

where the equilibrium density matrix is given by

$\rho_{e q}=\frac{e^{-\beta \mathcal{H}}}{Q}$

Here $\mathcal{H}$ and $Q$ are the Hamiltonian and partition function for the system. The time evolution of $A$ is given by

$A(t)=e^{i \mathcal{L} t} A(0)$

or

$\dot{A}(t)=i \mathcal{L} A(t)$

Here, $\mathcal{L}$ is an operator which describes the time evolution of an operator. For quantum mechanical systems, $\mathcal{L}$ is defined as the Liouville operator

$i \mathcal{L}=\frac{1}{i \hbar}[\ldots, \mathcal{H}]$

and for classical systems it is defined as the Poisson operator

$i \mathcal{L}=\{\ldots, \mathcal{H}\}$

The evolution operator $\mathcal{L}$ is Hermitian, $\mathcal{L}^{+}=\mathcal{L}$. This operator will be discussed in much more detail in section 4.2. The value of a correlation function in the short-time limit $t \rightarrow 0$ can be approximated using a moment expansion. As shown in Eq.(4.1), the correlation function of a quantity $A(t)$ is given by

$C(t)=\langle A(t) A(0)\rangle$

This quantity $C(t)$ can be written as a Taylor expansion

$C(t)=\sum_{n=0}^{\infty} \frac{t^{n}}{n !} C^{(n)}(0)$

This formula can be simplified by noting that all correlation functions are even in time. As a result, any odd-order derivative of $C(t)$ will be zero when evaluated at $t=0$.
Therefore, all of the odd terms of this expansion can be dropped:

$C(t)=\sum_{n=0}^{\infty} \frac{t^{2 n}}{(2 n) !} C^{(2 n)}(0)$

The derivatives of the correlation function can be written as

$C^{(2 n)}(0)=\left\langle A^{(2 n)}(0) A(0)\right\rangle$

Using this expression, the Taylor expansion can be written in terms of the function $A(t)$:

$C(t)=\sum_{n=0}^{\infty} \frac{t^{2 n}}{(2 n) !}\left\langle A^{(2 n)}(0) A(0)\right\rangle$

This expression can be further simplified using the identity $\left\langle A^{(2 n)}(0) \mid A(0)\right\rangle=(-1)^{n}\left\langle A^{(n)}(0) \mid A^{(n)}(0)\right\rangle$, where the notation $\langle A \mid B\rangle=\left\langle A B^{+}\right\rangle=\operatorname{Tr} A B^{+} \rho_{e q}$:

$C(t)=\sum_{n=0}^{\infty}(-1)^{n} \frac{t^{2 n}}{(2 n) !}\left\langle A^{(n)} A^{(n)}\right\rangle$

In this expression, we are only concerned with the value of $A(t)$ at time 0, and so the explicit time dependence has been dropped. This expression could also be obtained by performing a Taylor expansion of $A(t)$ and substituting it into Eq.(4.1). We can use the Fourier transform of $C(t)$ to find a general expression for the moments. Since

$C(t)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \tilde{C}(\omega) e^{-i \omega t} d \omega$

the time derivatives can be evaluated easily as

$\left\langle\omega^{2 n}\right\rangle=(-1)^{n}\left(\frac{\partial}{\partial t}\right)^{2 n} C(t)\bigg|_{t=0}=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \tilde{C}(\omega)\, \omega^{2 n} d \omega$

In the next sections, this method is applied to velocity correlation functions and self-scattering functions.

Velocity Correlation Function and Self-Scattering Functions

1. The Velocity Correlation Function

The velocity correlation function for the $z$-directed motion of a particle is defined as

$C(t)=\frac{1}{3}\langle\vec{v}(t) \cdot \vec{v}(0)\rangle=\langle\dot{z}(t) \dot{z}(0)\rangle$

This expression can be evaluated using Eq.(4.2). At short times, the value of $C(t)$ can be reasonably approximated by taking the first two moments of the expansion

$C(t)=\frac{1}{3}\langle\vec{v} \mid \vec{v}\rangle-\frac{t^{2}}{2} \frac{1}{3}\langle\dot{\vec{v}} \mid \dot{\vec{v}}\rangle+\cdots$

The first moment is simply the mean squared thermal velocity in the $z$-direction

$\frac{1}{3}\langle\vec{v} \mid \vec{v}\rangle=\frac{1}{\beta m}=v_{o}^{2}$

where $\beta=\left(k_{B} T\right)^{-1}$. The second moment can be evaluated using Newton's equation, $F=m a$. Since $a=\dot{v}$ and $F=-\nabla U$, where $U$ is the potential energy, $\dot{v}=\frac{F}{m}=-\frac{\nabla U}{m}$. Therefore, the second moment is given by

$\frac{1}{3}\langle\dot{\vec{v}} \mid \dot{\vec{v}}\rangle=\frac{1}{3} \frac{\langle\nabla U \mid \nabla U\rangle}{m^{2}}$

To evaluate this expression, note that by isotropy $\frac{1}{3}\langle\nabla U \mid \nabla U\rangle=\left\langle\partial_{z} U\, \partial_{z} U\right\rangle$, and write the average in its explicit form

$\frac{\left\langle\partial_{z} U\, \partial_{z} U\right\rangle}{m^{2}}=\frac{1}{m^{2} Q} \int d z\, \partial_{z} U\, \partial_{z} U e^{-\beta U}$

Note that $\partial_{z} U e^{-\beta U}=-\frac{1}{\beta} \partial_{z} e^{-\beta U}$. This allows us to combine terms in the integral to get the expression:

$-\frac{1}{\beta m^{2} Q} \int d z\, \partial_{z} U\left(\partial_{z} e^{-\beta U}\right)$

Now, carry out a partial integration to get the expression:

$\frac{1}{\beta m^{2} Q} \int d z\, \partial_{z}^{2} U e^{-\beta U}=\frac{1}{\beta m}\left\langle\frac{\partial_{z}^{2} U}{m}\right\rangle$

Note that we have proven a general property here.
For any operator $A$,

$\langle\nabla U A\rangle=\frac{1}{Q} \int d \mathbf{r}\, A \nabla U e^{-\beta U}=k_{B} T\langle\nabla A\rangle$

We have shown that the second term in the expansion of $C(t)$ is proportional to $\left\langle\frac{\partial_{z}^{2} U}{m}\right\rangle$, the curvature of the potential averaged with the Boltzmann weight. This term is called the Einstein frequency $\Omega_{o}^{2}$. It is the average collision frequency of the particles in the system. For the specific case of a harmonic potential this is simply the squared frequency $\omega^{2}$. However, it can be defined for many types of systems. Simply find the collision frequency for each pair of particles in the system and sum over all pairs. For the velocity correlation function, this can be expressed as

$\Omega_{o}^{2}=\frac{1}{3 m}\left\langle\nabla^{2} U\right\rangle=\frac{\rho}{3 m} \int d \mathbf{r}\, g(\mathbf{r}) \nabla^{2} \phi$

where $\phi$ is the pairwise potential between each set of two particles. Finally, we can write the expansion of $C(t)$ through its second moment as

$C(t) \simeq v_{o}^{2}\left(1-\frac{t^{2}}{2} \Omega_{0}^{2}\right)$

2. Self-Intermediate Scattering Function

The moment expansion method of estimating the short-time behavior of correlation functions can also be applied to self-scattering functions. In chapter 3, we introduced the self-density of a particle $i$ as

$n_{s}(\mathbf{R}, t)=\delta(\mathbf{R}-\vec{r}(t))$

which has the Fourier transform

$n_{s}(\vec{k}, t)=e^{-i \vec{k} \cdot \vec{r}(t)}$

The self-intermediate scattering function is defined as

$F_{s}(\vec{k}, t)=\left\langle n_{s}(\vec{k}, t) \mid n_{s}(\vec{k}, 0)\right\rangle=\left\langle e^{-i \vec{k} \cdot \vec{r}(t)} \mid e^{-i \vec{k} \cdot \vec{r}(0)}\right\rangle=\left\langle e^{-i \vec{k} \cdot(\vec{r}(t)-\vec{r}(0))}\right\rangle$

We can apply Eq.(4.2) to estimate the short-time behavior of this function. The zeroth moment is trivial to evaluate:

$C_{0}=F_{s}(\vec{k}, 0)=\left\langle e^{-i \vec{k} \cdot(\vec{r}(0)-\vec{r}(0))}\right\rangle=1$

The second moment is given by

$C_{2}=\left\langle\omega^{2}\right\rangle=\left\langle\dot{n}_{s} \mid \dot{n}_{s}\right\rangle=\left\langle-i\left(\vec{k} \cdot \dot{\vec{r}}(0)\right) e^{-i \vec{k} \cdot \vec{r}(0)} \mid-i\left(\vec{k} \cdot \dot{\vec{r}}(0)\right) e^{-i \vec{k} \cdot \vec{r}(0)}\right\rangle$

This can be simplified to

$C_{2}=\left\langle(\vec{k} \cdot \vec{v}(0))^{2} e^{-i \vec{k} \cdot(\vec{r}(0)-\vec{r}(0))}\right\rangle=k^{2} v_{o}^{2}$

We can define $\omega_{o}=k v_{o}$, which gives the second moment of the correlation function

$C_{2}=\omega_{o}^{2}$

The fourth moment of this correlation function is given by

$C_{4}=\left\langle\omega^{4}\right\rangle=\left\langle\ddot{n}_{s} \mid \ddot{n}_{s}\right\rangle=\left\langle-i \frac{d}{d t}\left(\left(\vec{k} \cdot \dot{\vec{r}}(0)\right) e^{-i \vec{k} \cdot \vec{r}(0)}\right) \mid-i \frac{d}{d t}\left(\left(\vec{k} \cdot \dot{\vec{r}}(0)\right) e^{-i \vec{k} \cdot \vec{r}(0)}\right)\right\rangle$

Evaluate these derivatives using the product rule and multiply out the terms. The resulting equation will have four terms, two of which cancel out. The remaining two terms are

$C_{4}=\left\langle(\vec{k} \cdot \vec{v})^{4}\right\rangle+\left\langle(\vec{k} \cdot \dot{\vec{v}})^{2}\right\rangle$

The first term is simply $3 \omega_{o}^{4}$. The second term can be evaluated by following a similar method to the one we used to calculate the second moment of the velocity correlation function in the previous section. As we demonstrated in that problem, the derivative of the velocity is equal to minus the gradient of the potential divided by the mass.
Therefore, this term can be written as

$\left\langle(\vec{k} \cdot \dot{\vec{v}})^{2}\right\rangle=\frac{k^{2}}{m^{2}}\left\langle\nabla_{z} U \nabla_{z} U\right\rangle$

Using Eq.(4.4), we can rewrite this term as

$\frac{k_{B} T k^{2}}{m^{2}}\left\langle\nabla_{z}^{2} U\right\rangle$

Finally, by doing some rearranging and using $v_{o}^{2}=\frac{k_{B} T}{m}$, we find that this term can be written as

$\frac{k_{B} T}{m} k^{2}\left\langle\frac{\nabla_{z}^{2} U}{m}\right\rangle=k^{2} v_{o}^{2}\left\langle\frac{\nabla_{z}^{2} U}{m}\right\rangle=\omega_{o}^{2} \Omega_{o}^{2}$

where $\Omega_{o}^{2}$ is the Einstein frequency, as defined in the previous section. Therefore, the short-time expansion of $F_{s}(\vec{k}, t)$,

$F_{s}(\vec{k}, t)=1-\left\langle\omega^{2}\right\rangle \frac{t^{2}}{2 !}+\left\langle\omega^{4}\right\rangle \frac{t^{4}}{4 !}-\cdots$

can be evaluated to

$F_{s}(\vec{k}, t)=1-\omega_{o}^{2} \frac{t^{2}}{2 !}+\left(3 \omega_{o}^{4}+\omega_{o}^{2} \Omega_{o}^{2}\right) \frac{t^{4}}{4 !}-\cdots$

3. Free-Particle Limit (Ideal Fluid)

We can use the short-time expansion of the self-intermediate scattering function to find an expression for $F_{s}(\vec{k}, t)$ in the free-particle limit. In the free-particle limit, we assume that the particles behave as an ideal gas; that is, there is no attraction or repulsion between the particles, and their interaction potential is zero, $\phi(\vec{r})=0$. Recall that the Einstein frequency can be written as (Eq.(4.5))

$\Omega_{o}^{2}=\frac{\rho}{3 m} \int d \vec{r}\, g(\vec{r}) \nabla^{2} \phi(\vec{r})$

Therefore, if the interaction potential is zero, the Einstein frequency will also be zero. Our expansion for $F_{s}(\vec{k}, t)$ becomes

$F_{s}(\vec{k}, t)=1-\omega_{o}^{2} \frac{t^{2}}{2 !}+\omega_{o}^{4} \frac{t^{4}}{8}-\cdots$

This is simply the short-time expansion of the function

$F_{s}(\vec{k}, t)=e^{-\frac{1}{2} \omega_{o}^{2} t^{2}}$

For free particles the self-intermediate scattering function takes on a Gaussian form. Only ideal systems can be truly described with the free-particle model. However, there are many real systems that also show this limiting behavior. Using these results, we can find the condition for a system that will allow us to ignore the effects of molecular collisions. From Eq.(4.6), we can see that the scattering function will take on a Gaussian form when $\Omega_{o}^{2}=0$ (the ideal case) or when $\omega_{o}^{2} \Omega_{o}^{2}$ is sufficiently smaller than $3 \omega_{o}^{4}$ that it can be ignored. Therefore, the condition for ignoring collisions can be written as

$\Omega_{o}^{2} \ll 3 \omega_{o}^{2}$

Using the definitions of $\omega_{o}^{2}$ and $v_{o}^{2}$ and rearranging, we find

$k \gg \frac{\Omega_{o}}{\sqrt{\frac{3 k_{B} T}{m}}}$

Now, define the parameter $l$ as

$l=\frac{1}{\Omega_{o}} \sqrt{\frac{3 k_{B} T}{m}}$

This is the average thermal velocity, $\sqrt{\frac{3 k_{B} T}{m}}$, divided by the average collision frequency $\Omega_{o}$. Therefore, it can be interpreted as the mean free path of the particles, or the average distance a particle can travel before experiencing a collision. With the definition of $l$ in hand, we can rewrite the condition as

$k \gg \frac{1}{l} \quad \text{or} \quad \lambda \ll l$

This indicates that a system can be treated in the free-particle limit when the wavelength, or spatial range, that is used to investigate the system is less than the mean free path travelled by the particles. For further discussion of self-intermediate scattering functions, please see Dynamics of the Liquid State by Umberto Balucani [5].
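A quick numerical check of the free-particle result is possible, since for free particles $\vec{r}(t)-\vec{r}(0)=\vec{v} t$ exactly. The sketch below (an added illustration with arbitrary parameters) samples Maxwell-Boltzmann velocities and compares the directly sampled $F_{s}(k, t)=\langle e^{-i k v t}\rangle$ with the Gaussian $e^{-\frac{1}{2} \omega_{o}^{2} t^{2}}$:

```python
import numpy as np

rng = np.random.default_rng(2)
kBT, m, k = 1.0, 1.0, 2.0            # arbitrary illustrative parameters
v0 = np.sqrt(kBT / m)                 # thermal velocity, v0^2 = kBT/m
omega0 = k * v0

vz = rng.normal(0.0, v0, size=100_000)   # Maxwell-Boltzmann z-velocities
t = np.linspace(0.0, 2.0 / omega0, 50)

# Free particles: r(t) - r(0) = v t, so F_s(k, t) = <exp(-i k v t)>
F_sampled = np.array([np.mean(np.exp(-1j * k * vz * ti)) for ti in t]).real
F_gaussian = np.exp(-0.5 * omega0**2 * t**2)
print(np.max(np.abs(F_sampled - F_gaussian)))   # small sampling error only
```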
Collective Properties 1. Density Fluctuations We can extend our previous discussion of the self-density function $n_{s}(\vec{r}, t)$ by considering the density function $\rho$, which is simply a sum of self-density functions $\rho(\vec{r}, t)=\sum_{i} \delta\left(\vec{r}-\vec{r}_{i}(t)\right)$ We define the density fluctuation as $\delta \rho(\vec{r}, t)=\rho(\vec{r}, t)-\langle\rho\rangle$ The Fourier transform of the density fluctuation is given by $\rho_{k}(t)=\sum_{i} e^{-i \vec{k} \vec{r}_{i}(t)}-(2 \pi)^{3} \delta(\vec{k}) \rho_{o}$ where $\rho_{o}=\langle\rho\rangle$. Then, we define the intermediate scattering function as the correlation function of $\rho_{k}(t)$ $F(\vec{k}, t)=\frac{1}{N}\left\langle\rho_{k}(t) \mid \rho_{k}(0)\right\rangle=\frac{1}{N}\langle\rho(\vec{k}, t) \mid \rho(-\vec{k}, 0)\rangle$ Once again, we can find an expression for the short time behavior of $F(\vec{k}, t)$ using the moment expansion in equation Eq.(4.2). We can find the zeroth moment of $F(\vec{k}, t)$ by substituting in the definition of $\rho_{k}(t)$ and solving at time $t=0$. $C_{0}=F(\vec{k}, 0)=\frac{1}{N}\left\langle\sum_{j} e^{i \vec{k} \vec{r}_{j}(0)} \sum_{i} e^{-i \vec{k} \vec{r}_{i}(0)}\right\rangle$ Note that when we consider the correlation of a particle with itself (that is, when $i=j$), the terms in the exponentials will cancel, giving a value of 1. Summing over all $N$ particles gives a value of $N$. Therefore, we can write the zeroth moment as $C_{0}=1+\frac{1}{N}\left\langle\sum_{i \neq j} e^{-i \vec{k} \vec{r}_{i j}}\right\rangle-(2 \pi)^{3} \delta(\vec{k}) \rho_{o}$ where $\vec{r}_{i j}=\vec{r}_{i}(0)-\vec{r}_{j}(0)$. In Chapter 3, we defined the second term as $\rho_{o} \tilde{g}(\vec{k})$, the Fourier transform of the pair distribution function $g(\vec{r})$. The zeroth moment becomes $C_{0}=1+\rho_{o} \tilde{g}(\vec{k})-(2 \pi)^{3} \delta(\vec{k}) \rho_{o}=1+\rho_{o} \tilde{h}(\vec{k})=S(\vec{k})$ where $S(\vec{k})$ is the static structure factor. From thermodynamics, we know that $S(0)=\rho_{o} k_{B} T \chi_{T} \leq 1$ where $\chi_{T}$ is the isothermal compressibility, $\chi_{T}=\frac{1}{\rho_{o}}\left(\frac{\partial \rho}{\partial p}\right)_{T}$ The pairwise correlation function arises from the real-space van Hove correlation function $G(\vec{r}, t)=\frac{1}{N}\left\langle\sum_{i, j} \delta\left(\vec{r}-\vec{r}_{i}(t)+\vec{r}_{j}(0)\right)\right\rangle$ which is related to the density fluctuations by $G(\vec{r}, t)-\rho_{o}=\frac{1}{\rho_{o}}\langle\delta \rho(\vec{r}, t) \delta \rho(0,0)\rangle$ At time $t=0$, the van Hove function becomes $G(\vec{r}, 0)=\delta(\vec{r})+\rho_{o} g(\vec{r})$, so that $G(\vec{r}, 0)-\rho_{o}=\delta(\vec{r})+\rho_{o} h(\vec{r})$, where $g=1+h$ 1. The Short Time Expansion In the previous section, we demonstrated that the zeroth moment $C_{0}$ of the short-time expansion of the intermediate scattering function is given by the static structure factor $S(\vec{k})$. Therefore, we can write $F(\vec{k}, t)=S(\vec{k})-\left\langle\omega^{2}\right\rangle \frac{t^{2}}{2 !}+\left\langle\omega^{4}\right\rangle \frac{t^{4}}{4 !}-\cdots$ To evaluate the second and fourth moments, we will consider the interactions of each particle with itself (the self-part, $i=j$) separately from the interactions of each particle with other particles (the distinct part, $i \neq j$).
To evaluate the self-part, use the results from section 2: \begin{aligned} \dot{n}_{k}=\sum_{i=1}^{N}-i\left(\vec{k} \vec{v}_{i}\right) e^{-i \vec{k} \vec{r}_{i}(t)} \ \ddot{n}_{k}=\sum_{i=1}^{N}\left[-\left(\vec{k} \vec{v}_{i}\right)^{2}-i\left(\vec{k} \dot{\vec{v}}_{i}\right)\right] e^{-i \vec{k} \vec{r}_{i}(t)} \end{aligned} Then we can evaluate the second moment of the self-part as $C_{2}=\left\langle\omega^{2}\right\rangle=\frac{1}{N}\left\langle\dot{n}_{k} \mid \dot{n}_{k}\right\rangle=\left\langle(\vec{k} \vec{v})^{2}\right\rangle=\omega_{o}^{2}$ This gives the entire value of the second moment because the $i \neq j$ terms do not contribute. The fourth moment is given by $C_{4}=\left\langle\omega^{4}\right\rangle=\frac{1}{N} \sum_{i, j}\left\langle\left[\left(\vec{k} \vec{v}_{i}\right)^{2}\left(\vec{k} \vec{v}_{j}\right)^{2}+i\left(\vec{k} \dot{\vec{v}}_{i}\right)\left(\vec{k} \vec{v}_{j}\right)^{2}-i\left(\vec{k} \dot{\vec{v}}_{j}\right)\left(\vec{k} \vec{v}_{i}\right)^{2}+\left(\vec{k} \dot{\vec{v}}_{i}\right)\left(\vec{k} \dot{\vec{v}}_{j}\right)\right] e^{-i \vec{k} \vec{r}_{i j}}\right\rangle$ Both the self-part and the distinct part contribute to the fourth moment. $C_{4}=\left\langle\omega^{4}\right\rangle=\frac{1}{N}\left[\sum_{i=j}\langle\cdots\rangle+\sum_{i \neq j}\langle\cdots\rangle\right]$ When $i=j$, the middle two terms of the fourth moment cancel out and the exponential becomes 1. Therefore, the self-part of the fourth moment is given by $\left\langle(\vec{k} \vec{v})^{4}\right\rangle+\left\langle(\vec{k} \dot{\vec{v}})^{2}\right\rangle=3 \omega_{o}^{4}+\omega_{o}^{2} \Omega_{o}^{2}$ We can evaluate each of the terms of the distinct part of the fourth moment separately.
The first term is given by $\frac{1}{N} \sum_{i \neq j}\left\langle\left(\vec{k} \vec{v}_{i}\right)^{2}\left(\vec{k} \vec{v}_{j}\right)^{2} e^{-i \vec{k} \vec{r}_{i j}}\right\rangle=\left(k^{2} v_{o}^{2}\right)^{2} \frac{1}{N} \sum_{i \neq j}\left\langle e^{-i \vec{k} \vec{r}_{i j}}\right\rangle=\omega_{o}^{4} \tilde{g} \rho_{o}$ The second and third terms can be combined to give \begin{aligned} \frac{1}{N} \sum_{i \neq j}\left\langle\left[i\left(\vec{k} \dot{\vec{v}}_{i}\right)\left(\vec{k} \vec{v}_{j}\right)^{2}-i\left(\vec{k} \dot{\vec{v}}_{j}\right)\left(\vec{k} \vec{v}_{i}\right)^{2}\right] e^{-i \vec{k} \vec{r}_{i j}}\right\rangle \ \quad=\frac{1}{N} \sum_{i \neq j}\left\langle\left(k v_{o}\right)^{2}\left[i \vec{k} \dot{\vec{v}}_{i}-i \vec{k} \dot{\vec{v}}_{j}\right] e^{-i \vec{k} \vec{r}_{i j}}\right\rangle \ \quad=-2 \omega_{o}^{4} \frac{1}{N} \sum_{i \neq j}\left\langle e^{-i \vec{k} \vec{r}_{i j}}\right\rangle=-2 \omega_{o}^{4} \tilde{g} \rho_{o} \end{aligned} And the fourth term gives $\frac{1}{N} \sum_{i \neq j}\left\langle\left(\vec{k} \dot{\vec{v}}_{i}\right)\left(\vec{k} \dot{\vec{v}}_{j}\right) e^{-i \vec{k} \vec{r}_{i j}}\right\rangle$ $=\frac{k^{2}}{m^{2}} \frac{1}{N} \sum_{i \neq j}\left\langle\nabla_{z i} U \nabla_{z j} U e^{-i \vec{k} \vec{r}_{i j}}\right\rangle$ Using Eq.(4.4), we can write this as \begin{aligned} \frac{k^{2}}{m^{2}} \frac{1}{N} \sum_{i \neq j}\left\langle\left(-\frac{1}{\beta} \frac{\partial}{\partial z_{i}} \frac{\partial}{\partial z_{j}} U+\frac{k^{2}}{\beta^{2}}\right) e^{-i \vec{k} \vec{r}_{i j}}\right\rangle \ =-\omega_{o}^{2} \Omega_{L}^{2}+\omega_{o}^{4} \tilde{g} \rho_{o} \end{aligned} Then the distinct part of the fourth moment is given by $\omega_{o}^{4} \tilde{g} \rho_{o}-2 \omega_{o}^{4} \tilde{g} \rho_{o}+\omega_{o}^{4} \tilde{g} \rho_{o}-\omega_{o}^{2} \Omega_{L}^{2}=-\omega_{o}^{2} \Omega_{L}^{2}$ where $\Omega_{L}^{2}=\frac{1}{m}\left\langle\partial_{z}^{2} \phi e^{-i \vec{k} \vec{z}_{i j}}\right\rangle=\frac{\rho_{o}}{m} \int d \vec{r} e^{-i \vec{k} \vec{z}} \partial_{z}^{2} \phi g(\vec{r})$ Therefore, the fourth moment of the intermediate scattering function is given by $C_{4}=\left\langle\omega^{4}\right\rangle=3 \omega_{o}^{4}+\omega_{o}^{2} \Omega_{o}^{2}-\omega_{o}^{2} \Omega_{L}^{2}$ 1. Comparison to Self-Intermediate Scattering Function With our results from the previous sections, we can write the short-time expansion of the intermediate scattering function as $F(\vec{k}, t)=S(\vec{k})-\omega_{o}^{2} \frac{t^{2}}{2 !}+\left[3 \omega_{o}^{2}+\Omega_{o}^{2}-\Omega_{L}^{2}\right] \omega_{o}^{2} \frac{t^{4}}{4 !}-\cdots$ We can interpret $S(\vec{k})-\omega_{o}^{2} \frac{t^{2}}{2 !}$ as the initial decay term and define the frequency $\omega_{L}^{2}=3 \omega_{o}^{2}+\Omega_{o}^{2}-\Omega_{L}^{2}$. For comparison, the self-intermediate scattering function is given by $F_{s}(\vec{k}, t)=1-\omega_{o}^{2} \frac{t^{2}}{2 !}+\left[3 \omega_{o}^{2}+\Omega_{o}^{2}\right] \omega_{o}^{2} \frac{t^{4}}{4 !}-\cdots$ How do these compare in the long wavelength limit $k \rightarrow 0$? In the short-time limit, the scattering functions are largely determined by the first terms in the expansions. We see that $\lim _{k \rightarrow 0} S(\vec{k})=S(0) \leq 1$ Therefore, in this limit, the intermediate scattering function $F(\vec{k}, t)$ decays more slowly than the self-intermediate scattering function $F_{s}(\vec{k}, t)$.
Transverse and Longitudinal Current Transverse and longitudinal current were introduced in chapter 3, where the Navier-Stokes equation was used to predict their rate of dissipation. Here, we will apply the short-time expansion to the current correlation functions to define the transverse and longitudinal speeds of sound and find their behavior in the free-particle limit. To review, the current is defined as $\vec{J}_{k}(t)=\sum_{i} \vec{v}_{i}(t) e^{-i \vec{k} \vec{r}_{i}}$ Longitudinal current exists when the direction of motion of the particles (the velocity) is parallel with the direction of propagation of the waves. For waves propagating in the z-direction, the longitudinal current is given by $J_{L}(k, t)=\sum_{i} \dot{z}_{i}(t) e^{-i k z_{i}}$ Transverse current exists when the direction of motion of the particles is perpendicular to the direction of propagation of the waves. For waves propagating in the z-direction, the transverse current is given by $J_{T}(k, t)=\sum_{i} \dot{x}_{i}(t) e^{-i k z_{i}}$ The longitudinal current correlation function is given by $C_{L}=\frac{1}{N}\left\langle J_{L}(\vec{k}, t) \mid J_{L}(\vec{k}, 0)\right\rangle$ And the transverse current correlation function is given by $C_{T}=\frac{1}{N}\left\langle J_{T}(\vec{k}, t) \mid J_{T}(\vec{k}, 0)\right\rangle$ Using Eq.(4.2), we can write the short time expansion of each of these functions as \begin{aligned} &C_{L}(\vec{k}, t)=v_{o}^{2}\left(1-\omega_{L}^{2} \frac{t^{2}}{2}\right)+\cdots \ &C_{T}(\vec{k}, t)=v_{o}^{2}\left(1-\omega_{T}^{2} \frac{t^{2}}{2}\right)+\cdots \end{aligned} In the long wavelength limit the transverse and longitudinal frequencies $\omega_{T}$ and $\omega_{L}$ are related to the transverse and longitudinal speeds of sound by $\omega_{L, T}^{2}=k^{2} c_{L, T}^{2}$ And the transverse and longitudinal speeds of sound are given by \begin{aligned} &c_{L}^{2}=3 v_{o}^{2}+\frac{\rho_{o}}{2 m} \int d \vec{r}\, g(\vec{r})\, z^{2}\, \partial_{z}^{2} \phi \ &c_{T}^{2}=v_{o}^{2}+\frac{\rho_{o}}{2 m} \int d \vec{r}\, g(\vec{r})\, z^{2}\, \partial_{x}^{2} \phi \end{aligned} Therefore, in the long wavelength limit, $\omega_{L}^{2} \sim 3 \omega_{T}^{2}$ 1. Free-Particle Limit In the free-particle limit, the forces between particles can be ignored. The longitudinal and transverse current correlation functions are then given by \begin{aligned} C_{L}(\vec{k}, t)=\left\langle v_{z}^{2} e^{-i k v_{z} t}\right\rangle=v_{o}^{2}\left(1-\omega_{o}^{2} t^{2}\right) e^{-\frac{1}{2}\left(\omega_{o} t\right)^{2}} \ C_{T}(\vec{k}, t)=\left\langle v_{x}^{2} e^{-i k v_{z} t}\right\rangle=v_{o}^{2} e^{-\frac{1}{2}\left(\omega_{o} t\right)^{2}} \end{aligned} We can see that the Fourier transform of the transverse correlation function $\tilde{C}_{T}(\vec{k}, \omega)$ is a Gaussian, while the Fourier transform of the longitudinal correlation function $\tilde{C}_{L}(\vec{k}, \omega)$ has peaks at $\omega=\pm \sqrt{2} \omega_{o}$
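The free-particle forms of $C_{L}$ and $C_{T}$ can likewise be checked by direct sampling over Maxwell velocities (an illustrative sketch with assumed parameter values, not from the original text):

```python
import numpy as np

# Free-particle current correlations, sampled over Maxwell velocities:
#   C_T(k,t) = <v_x^2 e^{-i k v_z t}>          = v0^2 exp(-(w0 t)^2 / 2)
#   C_L(k,t) = <v_z^2 e^{-i k v_z t}> = v0^2 (1 - w0^2 t^2) exp(-(w0 t)^2 / 2)
# with w0 = k * v0.
rng = np.random.default_rng(1)
k, v0, N = 1.0, 1.0, 500_000
vx = rng.normal(0.0, v0, N)
vz = rng.normal(0.0, v0, N)
w0 = k * v0

for t in (0.5, 1.0, 2.0):
    phase = np.exp(-1j * k * vz * t)
    CT = np.mean(vx**2 * phase).real
    CL = np.mean(vz**2 * phase).real
    g = np.exp(-0.5 * (w0 * t)**2)
    print(f"t={t}: C_T={CT:.4f} (exact {v0**2 * g:.4f}), "
          f"C_L={CL:.4f} (exact {v0**2 * (1 - (w0*t)**2) * g:.4f})")
```

Note that $C_{L}$ changes sign at $\omega_{o} t=1$, which is the time-domain signature of the two peaks at $\omega=\pm \sqrt{2} \omega_{o}$ in its spectrum.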
4.2: Projection Operator Method
In chapter 3, we explored the long time behavior of time correlation functions, and in the previous section, we explored their short-time behavior. However, we are ultimately interested in finding expressions for the time correlation functions that apply at all time scales. There are many different ways to approach this problem. In principle, we could simply calculate the position and velocity of each particle in the system at all times. Then, any other phase-space variable $A(t)$ could easily be determined. However, this is clearly not viable for macroscopic systems, which contain on the order of $10^{23}$ individual particles. Another approach, and the one we will explore here, is to consider only that part of the system that we care about and treat all the rest as a statistical bath. This can be accomplished using projection operator techniques. To understand this method, it is useful to consider an analogy to a three dimensional vector space. Any three dimensional vector can be projected onto a vector of interest to find its component in that direction. Similarly, we can project the position and velocity information for the entire system onto $A(t)$, and treat the rest as a statistical bath. To illustrate this idea, let $A(t)$ be the velocity of a Brownian particle. We could always calculate $A(t)$ by determining the positions and velocities of all the particles in the system. However, this would be very time consuming and generate much unnecessary information. Instead, we can project the system velocity onto the velocity of the Brownian particle and treat the rest of the system as a bath. We have already solved this problem for one specific case: in chapter 1, we used the Langevin equation to describe the evolution of the velocity of a particle under the influence of friction and a random force. In this section, we will use the projection operator technique to derive the Generalized Langevin Equation. However, first we need to define some terms. Definitions 1. The Projection Operator Given the column vectors $A$ and $B$, the projection of $B$ onto $A$ is given by the expression $\mathcal{P}_{A} B=\frac{\langle B \mid A\rangle}{\langle A \mid A\rangle} A$ By definition, $\mathcal{P}_{A}^{2}=\mathcal{P}_{A}$ For an equilibrium system, the inner product is $\langle B \mid A\rangle=\operatorname{Tr} B A^{+} \rho_{e q}$ or, in the phase space representation, $\langle B \mid A\rangle=\int d \Gamma B(\Gamma) A^{+}(\Gamma) \rho_{e q}$ Similarly, we can define the orthogonal operator $\mathcal{Q}=1-\mathcal{P}$, which projects onto the subspace that is orthogonal to $A$. 2. Operator Identity If $a$ and $b$ are operators, the following are identities \begin{aligned} &\frac{1}{s-a-b}=\frac{1}{s-a}+\frac{1}{s-a-b} b \frac{1}{s-a} \ &e^{(a+b) t}=e^{a t}+\int_{0}^{t} e^{(a+b)(t-\tau)} b e^{a \tau} d \tau \end{aligned} 3. The Liouville Operator The time evolution of an operator $A$ in a system with the Hamiltonian $\mathcal{H}$ is found using the Liouville operator $\mathcal{L}$ $\frac{d A}{d t}=i \mathcal{L} A$ The Liouville operator is a special form of operator called a "superoperator" because it acts upon other operators rather than on functions. In quantum mechanics, the Liouville operator for a system with the Hamiltonian $\mathcal{H}$ is defined as $i \mathcal{L} A \equiv \frac{1}{i \hbar}[A, \mathcal{H}]$ where $[\ldots, \ldots]$ indicates the commutator.
In the classical limit as $\hbar \rightarrow 0$, this becomes $i \mathcal{L} A \equiv\{A, \mathcal{H}\}$ where $\{\ldots, \ldots\}$ is the Poisson bracket. One important property of $\mathcal{L}$ is that it is Hermitian. This property is demonstrated in the following proof (constant prefactors are dropped for clarity) \begin{aligned} \langle\mathcal{L} A \mid B\rangle=\operatorname{Tr}\left([A, \mathcal{H}] B^{+} \rho\right) \ =\operatorname{Tr}\left(A \mathcal{H} B^{+} \rho-\mathcal{H} A B^{+} \rho\right) \ =\operatorname{Tr}\left(A \mathcal{H} B^{+} \rho-A B^{+} \mathcal{H} \rho\right) \ =\operatorname{Tr}\left(A\left[\mathcal{H}, B^{+}\right] \rho\right) \ =\operatorname{Tr}\left(A[B, \mathcal{H}]^{+} \rho\right) \ =\langle A \mid \mathcal{L} B\rangle \end{aligned} The Generalized Langevin Equation The Liouville equation $\frac{d}{d t} A(t)=i \mathcal{L} A(t)$ has the formal solution $A(t)=e^{i \mathcal{L} t} A(0)$ From this equation it is clear that the function $e^{i \mathcal{L} t}$ acts as a time propagator of $A$ from an initial value $A(0)$. However, it is not very helpful in this form. We will use the projection operator to rewrite this equation in a more useful form. To simplify the notation, $A(0)$ will be written as $A$ from now on. Start by writing the new equation of motion for $A(t)$ $\frac{d A}{d t}=i \mathcal{L} e^{i \mathcal{L} t} A$ Insert the identity, $I=(\mathcal{P}+\mathcal{Q})$ $\frac{d A}{d t}=e^{i \mathcal{L} t}(\mathcal{P}+\mathcal{Q}) i \mathcal{L} A=e^{i \mathcal{L} t} \mathcal{P} i \mathcal{L} A+e^{i \mathcal{L} t} \mathcal{Q} i \mathcal{L} A$ Begin by evaluating the first term. Using the definition of the projection operator, we can rewrite this as \begin{aligned} e^{i \mathcal{L} t} \mathcal{P} i \mathcal{L} A=e^{i \mathcal{L} t} \frac{\langle i \mathcal{L} A \mid A\rangle}{\langle A \mid A\rangle} A \ =i \frac{\langle\mathcal{L} A \mid A\rangle}{\langle A \mid A\rangle} e^{i \mathcal{L} t} A \ =i \Omega A(t) \end{aligned} where $\Omega$ is called the frequency matrix and is defined as $\Omega=\frac{\langle\mathcal{L} A \mid A\rangle}{\langle A \mid A\rangle}$ To evaluate the second term, we will need to rewrite the time propagator in terms of $\mathcal{P}$ and $\mathcal{Q}$. Start by inserting the identity, and then rewrite the expression using the operator identity defined in section A.2, with $a=i \mathcal{Q} \mathcal{L}$, $b=i \mathcal{P} \mathcal{L}$, and $(a+b)=i \mathcal{L}$ \begin{aligned} e^{i \mathcal{L} t} &=e^{i(\mathcal{P}+\mathcal{Q}) \mathcal{L} t} \ &=e^{i \mathcal{Q} \mathcal{L} t}+\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} i \mathcal{P} \mathcal{L} e^{i \mathcal{Q} \mathcal{L} \tau} d \tau \end{aligned} Now, apply this expansion to $i \mathcal{Q} \mathcal{L} A$ $e^{i \mathcal{L} t} i \mathcal{Q} \mathcal{L} A=e^{i \mathcal{Q} \mathcal{L} t} i \mathcal{Q} \mathcal{L} A+\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} i \mathcal{P} \mathcal{L} e^{i \mathcal{Q} \mathcal{L} \tau}(i \mathcal{Q} \mathcal{L} A) d \tau$ To understand this expression, start by examining the first term. The operator $\mathcal{Q}$ projects the system into the solvent degrees of freedom, which are orthogonal to $A$. However, we are primarily interested in describing only the propagation in the $A$ direction. Therefore, this term gives the random force or noise in the system, which we will denote $R(t)$ $R(t)=e^{i \mathcal{Q} \mathcal{L} t} i \mathcal{Q} \mathcal{L} A$ where $R(0)=i \mathcal{Q} \mathcal{L} A$ and $e^{i \mathcal{Q} \mathcal{L} t}$ describes the time propagation of $R(t)$. The second term in this expression describes the friction in the system. One interesting thing to note is that the expression for $R(t)$ appears in this term, indicating that the friction and noise of the system are related.
This relation is called the fluctuation-dissipation theorem, and will be given more explicitly later. Using the definition of $R(t)$, we can rewrite the second term in the expression as $\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} i \mathcal{P} \mathcal{L} e^{i \mathcal{Q} \mathcal{L} \tau}(i \mathcal{Q} \mathcal{L} A) d \tau=\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} i \mathcal{P} \mathcal{L} R(\tau) d \tau$ Then, use the definition of the projection operator $\mathcal{P}$ to write $\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} \frac{\langle i \mathcal{L} R(\tau) \mid A\rangle}{\langle A \mid A\rangle} A d \tau$ Since the noise term $R(\tau)$ is already projected into the orthogonal space, we can always operate on it with $\mathcal{Q}$ without changing its value (recall that for any projection operator $\mathcal{P}$, $\mathcal{P}^{2}=\mathcal{P}$) $\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} \frac{\langle i \mathcal{L} \mathcal{Q} R(\tau) \mid A\rangle}{\langle A \mid A\rangle} A d \tau$ Then, use the fact that $\mathcal{Q}$ and $\mathcal{L}$ are both Hermitian operators to rearrange the expression $-\int_{0}^{t} e^{i \mathcal{L}(t-\tau)} \frac{\langle R(\tau) \mid i \mathcal{Q} \mathcal{L} A\rangle}{\langle A \mid A\rangle} A d \tau$ Finally, use the definition $R(0)=i \mathcal{Q} \mathcal{L} A$ and $e^{i \mathcal{L}(t-\tau)} A=A(t-\tau)$ to write the expression as $-\int_{0}^{t} \frac{\langle R(\tau) \mid R(0)\rangle}{\langle A \mid A\rangle} A(t-\tau) d \tau$ Define the memory kernel $\kappa(t)$ as $\kappa(t)=\frac{\langle R(t) \mid R(0)\rangle}{\langle A \mid A\rangle}$ This definition makes the fluctuation-dissipation theorem explicit. The second term can then be written as simply $-\int_{0}^{t} \kappa(\tau) A(t-\tau) d \tau$ With all of this in hand, we can finally write out the full Generalized Langevin equation $\frac{d A}{d t}=i \Omega A(t)-\int_{0}^{t} \kappa(\tau) A(t-\tau) d \tau+R(t)$ where the frequency matrix is $\Omega=\frac{\langle\mathcal{L} A \mid A\rangle}{\langle A \mid A\rangle}$ the random force is $R(t)=e^{i \mathcal{Q} \mathcal{L} t} i \mathcal{Q} \mathcal{L} A$ and the memory kernel, which defines the fluctuation-dissipation theorem, is $\kappa(t)=\frac{\langle R(t) \mid R(0)\rangle}{\langle A \mid A\rangle}$ Let's take a closer look at the frequency matrix and the memory kernel. For one-dimensional problems, the frequency matrix will evaluate to zero. To understand why, remember that $i \mathcal{L} A=\frac{d A}{d t}$. This allows us to rewrite the numerator of the frequency matrix as $\left\langle\frac{d A}{d t} \mid A\right\rangle$, which is simply the derivative of the correlation function $C(t)=\langle A(t) \mid A(0)\rangle$, evaluated at zero. Since all correlation functions are even in time, the derivative at zero must equal zero. This rule will apply to all of the problems that we address in this section. As stated earlier, the definition of the memory kernel relates the fluctuation, or noise in the system, to the dissipation of $A$. The fluctuation term, $\langle R(t) \mid R(0)\rangle\langle A \mid A\rangle^{-1}$, will be zero when the noise in the system is zero. This indicates that in a system with no noise there is also no friction, and $A$ will not decay. Applications of the GLE 1. GLE for Brownian Motion In chapter 1, we used the Langevin equation to explore the motion of a Brownian particle. Here, we will perform the same analysis using the Generalized Langevin equation. Recall that Brownian motion describes the discrete and random motion that is observed when a large particle is immersed in a fluid of smaller particles.
We want to use the GLE to describe the velocity of the large particle without having to solve for the motion of the entire bath. Begin by writing the GLE for the velocity of the particle. For this system, the frequency matrix $\Omega$ is zero, so the full GLE is given by $\frac{d v}{d t}=-\int_{0}^{t} \gamma(t-\tau) v(\tau) d \tau+\frac{f(t)}{m}$ where $\gamma(t)$ represents the memory kernel and $\frac{f(t)}{m}=R(t)$ represents the random force. For this system, the memory kernel is given by $\gamma(t)=\frac{\langle f(t) \mid f(0)\rangle}{m^{2}\langle v \mid v\rangle}$ The normalization factor $\langle v \mid v\rangle$ is simply the average value of the squared velocity, $\left\langle v^{2}\right\rangle=v_{o}^{2}=\frac{k_{B} T}{m}$. Therefore we can write this as $\gamma(t)=\frac{\beta}{m}\langle f(t) \mid f(0)\rangle$ where $\beta=\left(k_{B} T\right)^{-1}$. The friction coefficient for the system is given by $\xi(t)=m \gamma(t)$. Using this, we can write the fluctuation-dissipation relation $\xi(t)=\beta\langle f(t) \mid f(0)\rangle$ We can use the GLE to find the velocity autocorrelation function $C(t)=\langle v(t) v(0)\rangle$ for the Brownian particle. Begin by multiplying the GLE through by $v(0)$ and taking the thermal average. \begin{aligned} \frac{d v}{d t} &=-\int_{0}^{t} \gamma(t-\tau) v(\tau) d \tau+\frac{f(t)}{m} \ \frac{d v}{d t} v(0) &=-\int_{0}^{t} \gamma(t-\tau) v(\tau) v(0) d \tau+\frac{f(t)}{m} v(0) \ \left\langle\frac{d v}{d t} v(0)\right\rangle &=-\int_{0}^{t} \gamma(t-\tau)\langle v(\tau) v(0)\rangle d \tau+\frac{1}{m}\langle f(t) v(0)\rangle \ \frac{d C(t)}{d t} &=-\int_{0}^{t} \gamma(t-\tau) C(\tau) d \tau \end{aligned} Here we have used the fact that the random force is uncorrelated with the initial velocity, $\langle f(t) v(0)\rangle=0$. This gives us the equation of motion for $C(t)$, which can be solved using Laplace transformation. The Laplace transform of this equation gives $s \hat{C}(s)+\hat{\gamma}(s) \hat{C}(s)=C(0)$ Using $C(0)=\langle v(0) v(0)\rangle=v_{o}^{2}$ and rearranging, we get the general Laplace transformed solution for $C(t)$ $\hat{C}(s)=\frac{v_{o}^{2}}{s+\hat{\gamma}(s)}$ which can be solved for specified values of $\gamma(t)$. The Laplace transformed solution for $C(t)$ can be used to find an equation for the diffusion constant $D$. The Green-Kubo relation defines the diffusion constant as $D=\int_{0}^{\infty} C(t) d t=\hat{C}(s=0)$ Using the solution that we derived above $D=\frac{v_{o}^{2}}{\hat{\gamma}(0)}=\frac{k_{B} T}{m \hat{\gamma}(0)}$ This is a generalized form of Einstein's relation, which we derived in chapter 1 for the Brownian particle. The Brownian particle experiences white noise, which can be modelled by making the memory function a delta function, $\gamma(t)=\gamma_{o} \delta(t)$. Then the GLE simplifies to \begin{aligned} \frac{d v}{d t} &=-\int_{0}^{t} \gamma_{o} \delta(t-\tau) v(\tau) d \tau+\frac{f(t)}{m} \ &=-\gamma_{o} v(t)+\frac{f(t)}{m} \end{aligned} which has the formal solution (chapter 1) $v(t)=v(0) e^{-\gamma_{o} t}+\frac{1}{m} \int_{0}^{t} e^{-\gamma_{o}(t-\tau)} f(\tau) d \tau$ and the correlation function $C(t)=C(0) e^{-\gamma_{o} t}$. Using the white-noise memory function, we can also reproduce Einstein's relation from chapter 1. The Laplace transform of $\gamma(t)=\gamma_{o} \delta(t)$ is $\hat{\gamma}(s)=\gamma_{o}$. Substituting this for $\hat{\gamma}(0)$ in Eq. (4.28) and using the friction coefficient $\xi=m \gamma_{o}$ gives the familiar Einstein relation $D=\frac{k_{B} T}{m \hat{\gamma}(0)}=\frac{k_{B} T}{\xi}$
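Before moving on, the white-noise limit can be checked with a direct stochastic simulation. This is an illustrative sketch (not from the original text); the noise amplitude follows from the fluctuation-dissipation relation above, and all numerical parameter values are assumptions.

```python
import numpy as np

# Euler-Maruyama sketch of the memoryless Langevin equation
#   dv = -gamma0 * v * dt + sqrt(2 * gamma0 * v0^2 * dt) * xi,
# whose velocity autocorrelation should be C(t) = v0^2 exp(-gamma0 t).
rng = np.random.default_rng(2)
gamma0, v0, dt, nsteps, ntraj = 1.0, 1.0, 1e-3, 4000, 5000

v = rng.normal(0.0, v0, ntraj)          # start from thermal equilibrium
v_initial = v.copy()
C = np.empty(nsteps)
for n in range(nsteps):
    C[n] = np.mean(v * v_initial)       # ensemble-averaged <v(t) v(0)>
    v += -gamma0 * v * dt + rng.normal(0.0, np.sqrt(2*gamma0*v0**2*dt), ntraj)

t = np.arange(nsteps) * dt
print(np.max(np.abs(C - v0**2 * np.exp(-gamma0 * t))))   # ~ sampling noise
```

The maximum deviation from the exponential prediction shrinks as the number of trajectories grows, consistent with $C(t)=v_{o}^{2} e^{-\gamma_{o} t}$.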
1. Exponential-Decay Memory In addition to the delta function memory kernel, which gives the dynamics of a Brownian particle, we can also consider a case in which the friction has the same overall strength $\gamma_{o}$ but varies with time. We can model this with the exponential-decay memory kernel $\gamma(t)=\gamma_{o} \alpha e^{-\alpha t}$ This memory kernel has the special property that no matter the value of $\alpha$, the integral of the function will always equal $\gamma_{o}$. In the limit as $\alpha \rightarrow \infty$, this function approaches $\gamma_{o} \delta\left(t^{+}\right)$. The correlation function for this memory kernel is relatively easy to find because the Laplace transform of an exponential decay function is well defined. For the exponential decay $\gamma(t)$ defined above, the Laplace transform is $\hat{\gamma}(s)=\frac{\gamma_{o} \alpha}{s+\alpha}$ Therefore, to solve the correlation function, we only need to find the value of $\gamma_{o}$. This can be estimated using the definition of the memory kernel $\gamma(0)=\frac{\langle i \mathcal{Q} \mathcal{L} v \mid i \mathcal{Q} \mathcal{L} v\rangle}{\left\langle v^{2}\right\rangle}$ Here, $i \mathcal{L} v=\frac{d v}{d t}$ is simply the acceleration. Using Newton's law, we can write $i \mathcal{L} v=\frac{F}{m}=\frac{-1}{m} \frac{\partial U}{\partial x}$, minus the gradient of the potential divided by the mass, which is the non-random component of the force. Putting everything together, we find that the memory kernel evaluated at zero is $\gamma(0)=\frac{\left\langle\partial_{x}^{2} U\right\rangle}{m} \equiv \Omega_{o}^{2}$ This is the average curvature of the potential. For a harmonic oscillator, this is simply the squared frequency. We can now use the Laplace transform of the exponential-decay memory kernel to find the correlation function. $\hat{C}(s)=\frac{v_{o}^{2}}{s+\frac{\Omega_{o}^{2}}{s+\alpha}}=v_{o}^{2} \frac{s+\alpha}{s^{2}+s \alpha+\Omega_{o}^{2}}$ This is relatively easy to solve because the denominator is quadratic. To generate the solutions, find the eigenvalues by solving the quadratic equation $s^{2}+s \alpha+\Omega_{o}^{2}=0$ [1]. This gives the results \begin{aligned} \lambda_{\pm} &=-\frac{\alpha}{2} \pm \sqrt{\frac{\alpha^{2}-4 \Omega_{o}^{2}}{4}} \ C(t) &=\frac{v_{o}^{2}}{\lambda_{+}-\lambda_{-}}\left(\lambda_{+} e^{\lambda_{-} t}-\lambda_{-} e^{\lambda_{+} t}\right) \end{aligned} Some interesting results arise from this solution. We can see that if $\alpha<2 \Omega_{o}$, then $\lambda_{\pm}$ are complex numbers and $C(t)$ becomes oscillatory, $C(t)=v_{o}^{2} e^{-\alpha t / 2}\left(\cos \Delta t+\frac{\alpha}{2 \Delta} \sin \Delta t\right)$ where $\Delta=\sqrt{\Omega_{o}^{2}-\frac{\alpha^{2}}{4}}$. We can examine these results for different relations between $\alpha$ and $\Omega_{o}$. 1. Solids When $\alpha \ll \Omega$, the decay time is much longer than one oscillation period. The correlation function shows persistent oscillations at many frequencies. There is virtually no damping and the decay occurs primarily through dephasing (see Figure 4.1). Physically, this represents a solid. In a solid, each individual particle is locked into position by strong bonding between itself and its neighbors. If it is disturbed from equilibrium, it can only vibrate within the small area allowed by these bonds. 1. Liquids When $\alpha \lesssim \Omega$, the decay time is longer than one oscillation period. The correlation function shows one or two oscillations which are quickly damped out and a long time decay tail (Figure 4.2). Physically, this represents a liquid. At short times, a molecule in a liquid is "trapped" within a solvation shell formed by weak intermolecular bonds.
When it is disturbed from equilibrium, it will initially vibrate within this shell. However, at longer times, this vibration will cause a rearrangement of the solvation shell, allowing the molecule to travel away from its initial position. This damps out the oscillations. 1. Gases When $\alpha \gg \Omega$, the decay time is shorter than one oscillation period. The correlation function decays completely before undergoing an oscillation (Figure 4.3). Physically, this represents a gas. In a gas, the molecules are not confined by intermolecular bonding, and the correlation function will decay without any oscillation (see the numerical sketch at the end of this section). 1. Generalized Diffusion Constant We can use the GLE to derive the Green-Kubo relation for the generalized diffusion constant. Following a similar procedure as that used for the velocity correlation function, we can show that the equation of motion for the intermediate scattering function (which we have discussed in depth in Chapter 3 and in section I.C of chapter 4) is given by a memory equation of the same form, in which the memory kernel is simply the generalized diffusion constant $D(\vec{k}, t)$ multiplied by $k^{2}$. Therefore, the equation of motion for the intermediate scattering function can be written as $\dot{F}(\vec{k}, t)=-k^{2} \int_{0}^{t} D(\vec{k}, \tau) F(\vec{k}, t-\tau) d \tau$ In the long time limit $t \rightarrow \infty$ and the long wavelength limit $k \rightarrow 0$, we find the Green-Kubo relation $D=\int_{0}^{\infty} D(0, t) d t=\int_{0}^{\infty} C(t) d t$
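Returning to the exponential-decay memory kernel, the three regimes can be visualized with a minimal numerical sketch (illustrative parameter values; not part of the original text). Differentiating the memory equation $\dot{C}=-\int_{0}^{t} \gamma(t-\tau) C(\tau) d \tau$ once converts it into the damped-oscillator ODE $\ddot{C}+\alpha \dot{C}+\Omega_{o}^{2} C=0$ with $C(0)=v_{o}^{2}$ and $\dot{C}(0)=0$, which is integrated directly:

```python
import numpy as np

# Exponential-decay memory: d2C/dt2 + alpha*dC/dt + Omega0^2*C = 0,
# C(0) = v0^2, C'(0) = 0. Semi-implicit Euler integration.
def velocity_correlation(alpha, Omega0=1.0, v0=1.0, dt=1e-3, nsteps=20000):
    C, Cdot = v0**2, 0.0
    out = np.empty(nsteps)
    for n in range(nsteps):
        out[n] = C
        Cdot += (-alpha * Cdot - Omega0**2 * C) * dt
        C += Cdot * dt
    return out

for alpha, label in ((0.1, "solid-like "), (1.0, "liquid-like"),
                     (10.0, "gas-like   ")):
    C = velocity_correlation(alpha)
    print(label, np.round(C[::4000], 3))  # oscillatory vs. monotonic decay
```

For $\alpha \ll \Omega_{o}$ the sampled values oscillate with little damping, for $\alpha \lesssim \Omega_{o}$ they show one or two damped oscillations, and for $\alpha \gg \Omega_{o}$ they decay monotonically, matching the solid, liquid, and gas pictures above.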
4.3: Viscoelastic Model
Introduction The Generalized Langevin Equation and Mode-Coupling theory are subsets of molecular hydrodynamics, the theory that was developed to bridge the gap between hydrodynamics and molecular dynamics. Hydrodynamics, which we discussed in chapter 3, describes the macroscopic, long time behavior of systems in the limit as $t \rightarrow \infty$ and $k \rightarrow 0$. It uses the transport coefficients $D, \lambda$, $\eta$, and $\eta_{B}$ to predict long time fluctuations. Molecular dynamics, which we discussed in section I of chapter 4, describes the microscopic, short time behavior of systems in the limit as $t \rightarrow 0$ and $k \rightarrow \infty$. In this limit, systems behave as static liquid structures, and their dynamics are largely determined by the pairwise interaction potential. In this section, we will use the GLE to derive the viscoelastic model for transverse current. By taking the appropriate limits, we can show that the results of the viscoelastic model are consistent with those of hydrodynamics and molecular dynamics, and that this model provides a successful bridge between the two limits. Phenomenological Viscosity Consider a constant shear force applied to a viscous liquid. At long times, the shear stress $\sigma_{x z}$ in the liquid is related to the rate of strain $\partial_{z} v_{x}+\partial_{x} v_{z}$ by $\sigma_{x z}=-\eta\left(\partial_{z} v_{x}+\partial_{x} v_{z}\right)$ Liquids behaving in this fashion do not support shear waves. However, if the force is applied instantaneously, the system does not have the time to relax like a liquid. Instead, it behaves like an elastic solid. The stress is now proportional to the strain rather than the rate of strain. The short time response is $\sigma_{x z}=-G\left(\partial_{z} x+\partial_{x} z\right)$ where $G$ is the modulus of rigidity and $x$ and $z$ are components of the displacement. When the liquid is behaving like a solid, it supports shear waves propagating at a speed of $v_{s}=\sqrt{\frac{G}{\rho m}}$. To determine the time scale on which the liquid behaves like an elastic solid, define the constant $\tau_{M}=\frac{\eta}{G}$ This is the Maxwell relaxation time. For timescales $t$ with $\frac{t}{\tau_{M}} \ll 1$ the system behaves like an elastic solid. For timescales with $\frac{t}{\tau_{M}} \gg 1$ the system behaves like a viscous liquid. Viscoelastic Approximation To interpolate between the two extremes, we can write $\left(\frac{1}{\eta}+\frac{1}{G} \frac{\partial}{\partial t}\right) \sigma_{x z}=-\left(\frac{\partial}{\partial z} v_{x}+\frac{\partial}{\partial x} v_{z}\right)$ The Laplace transform of this equation yields $\hat{\eta}(s)=\frac{G}{s+\frac{G}{\eta}}=\frac{G}{s+\frac{1}{\tau_{M}}}$ In the steady-state limit, as $s \rightarrow 0$ $\lim _{s \rightarrow 0} \hat{\eta}=\eta$ and in the high-frequency limit, as $s \rightarrow \infty$ $\lim _{s \rightarrow \infty} \hat{\eta}=\frac{G}{s}$ Transverse Current Correlation Function We will use the transverse current correlation function to demonstrate the viscoelastic approximation.
In Section I, we defined the transverse current as $J_{T}(\vec{k}, t)=\sum_{j=1}^{N} \dot{x}_{j} e^{i \vec{k} \vec{z}_{j}}$ and the transverse current correlation function as $C_{T}(\vec{k}, t)=\frac{1}{N}\left\langle J_{T}(\vec{k}, t) \mid J_{T}(\vec{k}, 0)\right\rangle$ We have studied the transverse current in both the hydrodynamic limit $(k \rightarrow 0, \omega \rightarrow 0)$ and the short-time expansion limit $(k \rightarrow \infty, \omega \rightarrow \infty)$. In chapter 3, we used the Navier-Stokes equation to find an equation of motion for the transverse correlation function in the hydrodynamic limit $\dot{C}_{T}(\vec{k}, t)=-\nu k^{2} C_{T}(\vec{k}, t)$ This has the solution $C_{T}(\vec{k}, t)=C_{T}(\vec{k}, 0) e^{-\nu k^{2} t}=v_{o}^{2} e^{-\nu k^{2} t}$ where $\nu=\frac{\eta}{m \rho}$ is the kinematic shear viscosity. Therefore, in the hydrodynamic limit, transverse current fluctuations decay exponentially with a rate determined by $\nu$. In section I of this chapter, we used the short-time expansion approximation to show that in the $k \rightarrow \infty, \omega \rightarrow \infty$ limit, the transverse current correlation function can be written as $C_{T}(\vec{k}, t)=v_{o}^{2}\left(1-\omega_{T}^{2} \frac{t^{2}}{2}\right)+\cdots$ where the transverse frequency $\omega_{T}$ is related to the transverse speed of sound $c_{T}(k)$ by $\omega_{T}^{2}=k^{2} c_{T}^{2}(k)$ And the transverse speed of sound is given by $c_{T}^{2}(k)=v_{o}^{2}+\frac{\rho}{m} \int g(\vec{r}) \partial_{x}^{2} U(\vec{r}) \frac{\left[1-e^{i k z}\right]}{k^{2}} d \vec{r}$ where $g(\vec{r})$ is the pairwise correlation function and $U(\vec{r})$ is the pairwise interaction potential. This frequency term can also be written as [3] $\omega_{T}^{2}=\frac{k^{2}}{n M} G_{\infty}(k)$ where $G_{\infty}(k)$ is the high-frequency shear modulus. This indicates that at short times and wavelengths, the dissipation effects are diminished and transverse current fluctuations can propagate through the material with speed $c_{T}(k)$. Using the Generalized Langevin equation, we can generate a model for transverse current fluctuations that replicates the results of hydrodynamics and the short-time expansion when the appropriate limits are taken. Begin by writing the GLE for transverse current. Since the frequency matrix is zero, the GLE is written $\dot{J}_{T}(k, t)=-\int_{0}^{t} \kappa(t-\tau) J_{T}(k, \tau) d \tau+f(t)$ where $\kappa(t)$ is the memory function and $f(t)$ is the noise term. Multiplying through by $J_{T}(k, 0)$ and taking the average gives us the equation of motion for the transverse current correlation function $\dot{C}_{T}(k, t)=-\int_{0}^{t} \kappa(t-\tau) C_{T}(k, \tau) d \tau$ Take a closer look at the memory kernel $\kappa(t)=\frac{\langle R(t) \mid R(0)\rangle}{\langle A \mid A\rangle}=\frac{\left\langle e^{i \mathcal{Q} \mathcal{L} t} i \mathcal{Q} \mathcal{L} J_{T}(k, 0) \mid i \mathcal{Q} \mathcal{L} J_{T}(k, 0)\right\rangle}{\left\langle J_{T}(k, 0) \mid J_{T}(k, 0)\right\rangle}$ The normalization factor is simply $\frac{\beta m}{\rho}$.
By writing the projection operator $\mathcal{Q}$ as $1-\mathcal{P}$ and eliminating terms, this can be written \begin{aligned} \kappa(t) &=\frac{\beta m}{\rho}\left\langle e^{i(1-\mathcal{P}) \mathcal{L} t} i \mathcal{L} J_{T}(k, 0) \mid i \mathcal{L} J_{T}(k, 0)\right\rangle \ &=\frac{\beta m}{\rho}\left\langle\dot{J}_{T}(k, 0)\left|e^{i(1-\mathcal{P}) \mathcal{L} t}\right| \dot{J}_{T}(k, 0)\right\rangle \end{aligned} The equation of motion for the transverse current can be written as [5] $\dot{J}_{T}(\vec{k}, t)=i \frac{k}{m} \sigma^{z x}(\vec{k}, t)$ where $\sigma^{z x}(\vec{k}, t)$ is the zx-component of the microscopic stress tensor $\sigma^{z x}(\vec{k})=\sum_{i}\left\{m v_{i, z} v_{i, x}-\frac{1}{2} \sum_{j \neq i} \frac{z_{i j} x_{i j}}{r_{i j}^{2}} \mathcal{P}_{k}\left(r_{i j}\right)\right\} e^{i k z_{i}}$ Then the memory kernel becomes $\kappa(t)=k^{2} \frac{\beta}{\rho m}\left\langle\sigma^{z x}(\vec{k}, 0)\left|e^{i(1-\mathcal{P}) \mathcal{L} t}\right| \sigma^{z x}(\vec{k}, 0)\right\rangle=k^{2} \nu(k, t)$ where $\nu(k, t)$ is defined as $\nu(k, t)=\frac{\beta}{\rho m}\left\langle\sigma^{z x}(\vec{k}, 0)\left|e^{i(1-\mathcal{P}) \mathcal{L} t}\right| \sigma^{z x}(\vec{k}, 0)\right\rangle$ This demonstrates that the memory kernel is proportional to $k^{2}$. Then the transverse current correlation function can be written $\dot{C}_{T}(k, t)=-k^{2} \int_{0}^{t} \nu(k, t-\tau) C_{T}(k, \tau) d \tau$ The memory kernel is the key element that links the two limits. In general, the presence of the propagator $e^{i \mathcal{Q} \mathcal{L} t}$ makes it very difficult to evaluate $\nu(k, t)$ explicitly. However, the presence of $\mathcal{Q}$ indicates that we can separate out fast and slow motions and use this to construct a form for $\nu(k, t)$ that will bridge the short and long time limits. To find this form, the viscoelastic model starts by assuming that the memory kernel has an exponential form: $\nu(k, t)=\nu(k, 0) \exp \left[\frac{-t}{\tau_{M}(k)}\right]$ where $\tau_{M}(k)$ is the Maxwell relaxation time, discussed above. Before using this function, it is necessary to specify the values of the two parameters, $\nu(k, 0)$ and $\tau_{M}(k)$. These can be found by taking the short and long time limits of the GLE and comparing them to the short-time expansion and hydrodynamic results, respectively. The Short Time Limit The value of $\nu(k, t)$ at short times can be obtained by comparing the GLE at time $t=0$ to the short time expansion of the transverse correlation function. To find the GLE at time $t=0$, take its time derivative and evaluate it at $t=0$ \begin{aligned} \ddot{C}_{T}(k, t) &=-k^{2} \frac{d}{d t} \int_{0}^{t} \nu(k, t-\tau) C_{T}(k, \tau) d \tau \ \ddot{C}_{T}(k, 0) &=-k^{2} \nu(k, 0) C_{T}(k, 0) \[4pt] &=-k^{2} \nu(k, 0) v_{o}^{2} \end{aligned} The first two terms of the short time expansion of the correlation function are $C_{T}(\vec{k}, t)=v_{o}^{2}\left(1-\omega_{T}^{2} \frac{t^{2}}{2}\right)+\cdots$ The second derivative of this expansion gives $\ddot{C}_{T}(k, 0)=-k^{2} c_{t}^{2}(k) v_{o}^{2}$ Comparison of equations (4.38) and (4.35) shows that $\nu(k, 0)=c_{t}^{2}(k)=\frac{G(k)}{\rho m}$ Further, we see that in this limit the material supports propagating waves. The form of the waves can be found by solving the differential equation Eq.(4.38), and is given by $C_{T}(k, t)=v_{o}^{2} \cos \left(\omega_{T} t-k x\right)$ where $\omega_{T}=k c_{t}(k)$ and the speed of the waves is $c_{t}=\sqrt{\frac{G(k)}{\rho m}}$.
The Hydrodynamic Limit The value of $\nu(k, t)$ at long times can be obtained by comparing the hydrodynamic equation to the long time limit of the GLE for $C_{T}(k, t)$: $\dot{C}_{T}(k, t)=-k^{2} \int_{0}^{t} \nu(k, t-\tau) C_{T}(k, \tau) d \tau$ To take the long time limit of this equation, note that the memory function will generally be characterized by some relaxation time $\tau_{\kappa}$. When the time $t$ is much greater than this relaxation time, the major contribution to the integral will come from $\tau \sim t$. Therefore, we can approximate $C_{T}(k, \tau) \sim C_{T}(k, t)$. With this approximation, the correlation function can be taken out of the integral in the GLE: $\dot{C}_{T}(k, t)=-k^{2} C_{T}(k, t) \int_{0}^{\infty} \nu(k, \tau) d \tau$ where the integration limit has been extended to $\infty$ to indicate that we are taking the long time limit. This result should be identical to the hydrodynamic solution in the long time and long wavelength limit. By taking the long wavelength limit $(k \rightarrow 0)$ and comparing to the hydrodynamic result (Eq.(4.35)), we see that this only holds when: $\int_{0}^{\infty} \nu(0, \tau) d \tau=\nu=\frac{\eta}{\rho m}$ The Viscoelastic Solution We now have the information we need to construct the explicit form of the viscoelastic memory kernel. $\nu(k, t)=\nu(k, 0) \exp \left[\frac{-t}{\tau_{M}(k)}\right]$ From the short time limit, we found that $\nu(k, 0)=\frac{G(k)}{\rho m}$, which allows us to write $\nu(k, t)=\frac{G(k)}{\rho m} \exp \left[\frac{-t}{\tau_{M}(k)}\right]$ From the long time limit, we know that $\int_{0}^{\infty} \nu(0, \tau) d \tau=\frac{\eta}{\rho m}$ Now, plug in the exponential memory kernel for $k=0$ $\int_{0}^{\infty} \frac{G(0)}{\rho m} \exp \left[\frac{-\tau}{\tau_{M}(0)}\right] d \tau=\frac{\eta}{\rho m}$ The elastic modulus has no time dependence, so it can be taken out of the integral $\frac{G(0)}{\rho m} \int_{0}^{\infty} \exp \left[\frac{-\tau}{\tau_{M}(0)}\right] d \tau=\frac{\eta}{\rho m}$ Finally, evaluate the integral to find the Maxwell relaxation time at $k=0$. It is reasonable to assume that the Maxwell relaxation time remains constant over all $k$ values. Therefore, the Maxwell relaxation time can be written as the ratio of the shear viscosity coefficient of the liquid to the modulus of rigidity of the elastic solid at $k=0$. $\tau_{M}=\frac{\eta}{G(0)}$ When $\tau_{M}$ is small compared to the time $t$, the viscosity term dominates and the system will behave as a viscous liquid. However, when $\tau_{M}$ is large compared to the time $t$, the system does not have time to respond to a stimulus as a viscous liquid. The modulus of rigidity dominates, and the material will behave as an elastic solid, supporting propagating shear waves. Finally, we can use the Maxwell relaxation time to write the explicit form of the viscoelastic memory kernel. $\nu(k, t)=\frac{G(k)}{\rho m} \exp \left[\frac{-G(0) t}{\eta}\right]$ With this memory kernel in hand, we can now go on to find an explicit solution to the transverse current correlation function. To find the equation for the viscoelastic wave, we first find the Laplace transform of the transverse current correlation function $\hat{C}_{T}(k, s)=\frac{C_{T}(k, t=0)}{s+k^{2} \hat{\nu}(k, s)}=\frac{v_{o}^{2}}{s+k^{2} \hat{\nu}(k, s)}$ Now, solve this equation using the exponential memory kernel $\nu(k, t)=\frac{G(k)}{\rho m} \exp \left[\frac{-t}{\tau_{M}}\right]$.
The Laplace transform of an exponential decay is well defined, $\mathcal{L}\left[e^{-\alpha t} u(t)\right]=\frac{1}{s+\alpha}$ where $u(t)$ is the unit step function. Therefore, the Laplace transform of the viscoelastic memory kernel is $\hat{\nu}(k, s)=\frac{\nu(k, 0)}{s+\tau_{M}^{-1}}$ Plug this into the Laplace transform of the transverse current correlation function \begin{aligned} \hat{C}_{T}(k, s)=\frac{v_{o}^{2}}{s+k^{2}\left(\frac{\nu(k, 0)}{s+\tau_{M}^{-1}}\right)} \ \hat{C}_{T}(k, s)=\frac{v_{o}^{2}\left(s+\tau_{M}^{-1}\right)}{s\left(s+\tau_{M}^{-1}\right)+k^{2} \nu(k, 0)} \end{aligned} Since the denominator is quadratic, it is relatively easy to find the inverse Laplace transform, using the same method as that presented in section 4.2.C.2, or reference [1]. $C_{T}(k, t)=\frac{v_{o}^{2}}{\lambda_{+}-\lambda_{-}}\left(\lambda_{+} e^{\lambda_{-} t}-\lambda_{-} e^{\lambda_{+} t}\right)$ where the eigenvalues $\lambda_{\pm}$ are given by the solutions to the quadratic equation $s^{2}+\tau_{M}^{-1} s+k^{2} \nu(k, 0)=0$: $\lambda_{\pm}=-\frac{\tau_{M}^{-1}}{2} \pm \sqrt{\frac{\tau_{M}^{-2}-4 k^{2} \nu(k, 0)}{4}}$ Complex eigensolutions exist if $\frac{1}{\tau_{M}^{2}}<4 k^{2} \nu(k, 0)$ Recall that $\nu(k, 0)=c_{t}^{2}(k)$. Then we can rewrite the above inequality in terms of the wavenumber $k>\frac{1}{2 \tau_{M} c_{t}(k)}$ Define the critical wavenumber, $k_{c}=\left(2 \tau_{M} c_{t}(k)\right)^{-1}$; for $k>k_{c}$ the liquid supports propagating shear waves. For more information on the viscoelastic approximation and its application to transverse current, please see Chapter 6 of Molecular Hydrodynamics by Jean-Pierre Boon and Sidney Yip [3] and chapter 3 and chapter 6 of Dynamics of the Liquid State by Umberto Balucani [5].
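As a quick numerical illustration of the crossover at $k_{c}$ (all material parameters below are assumed values, not from the text):

```python
import numpy as np

# Eigenvalues of s^2 + s/tauM + k^2 nu(k,0) = 0 for the viscoelastic model,
# with nu(k,0) = c_t^2 = G/(rho*m). G, eta, and rho*m are assumed values.
G, eta, rho_m = 5.0, 1.0, 1.0
tauM = eta / G
c_t = np.sqrt(G / rho_m)
k_c = 1.0 / (2.0 * tauM * c_t)            # critical wavenumber

for k in (0.5 * k_c, 2.0 * k_c):
    disc = complex(1.0 / tauM**2 - 4.0 * (k * c_t)**2)
    lam_p = (-1.0 / tauM + np.sqrt(disc)) / 2.0
    lam_m = (-1.0 / tauM - np.sqrt(disc)) / 2.0
    print(f"k/k_c={k/k_c:.1f}  lambda_+={lam_p:.3f}  lambda_-={lam_m:.3f}")
# Below k_c both roots are real (overdamped relaxation); above k_c they
# acquire imaginary parts, i.e., the liquid supports propagating shear waves.
```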
4.4: Long-time Tails and Mode-coupling Theory
The Problem In the Langevin description of Brownian motion, the velocity correlation function decays exponentially, $C(t)=v_{o}^{2} e^{-\gamma t}$, where the memory kernel is related to the diffusion constant through Einstein's relation $\gamma=\frac{1}{m D \beta}$ In 1967, Alder and Wainwright used computer simulations to calculate the velocity correlation function of hard-sphere gases [6]. They found that at long times, $C(t)$ exhibits a power-law decay rather than an exponential decay. That is, $C(t)$ decays according to $t^{-\frac{3}{2}}$ $\lim _{t \rightarrow \infty} C(t) \propto \frac{1}{t^{\frac{3}{2}}}$ This is the famous long-time tail problem in kinetic theory. Memory Effects The fundamental assumption underlying the exponential decay model of the velocity correlation function is that collisions between particles in a dilute hard-sphere gas are independent. This means that after each collision, a particle will lose memory of its original velocity until its motion has become completely randomized. This assumption leads to the exponential decay $C(t)=v_{o}^{2} e^{-t / \tau_{\text {coll }}}$, where $\tau_{\text {coll }}=\frac{1}{\gamma}$ is the average collision time. However, it is also possible that collisions are not completely independent but instead are correlated. A correlation would occur if, for example, two particles collide and then collide again after undergoing some number of other collisions (see Figures $4.5$ and 4.6). This implies that there is a long term memory in the system which would lead to a decay that is slower than an exponential. To estimate the form of this decay, we can consider the probability that, following a collision, a particle remains at or returns to its initial position after a time $t$, $P(r=0, t)$. To make a rough estimate of this probability, imagine that at any moment in time we can draw a "probability sphere" for the particle. The probability of finding the particle inside the sphere is constant, and the probability of finding the particle outside the sphere is zero. At time $t=0$, the particle has not had time to travel away from its initial position. Therefore, $P(r, 0)=\delta(r)$. As $t$ increases, the particle begins to diffuse away from its initial position, and the radius of the sphere grows diffusively, $R \propto \sqrt{t}$. To estimate $P(r=0, t)$ from the probability sphere, note that the probability of finding the particle at any point in space must sum to unity: $\int P(r, t) d r=1$ We only need to integrate over the volume of the sphere, since the probability of finding the particle anywhere else is zero. Within the sphere, the probability is a constant, $P(r, t)=P(t)$. Therefore, this integral simplifies to $1=\int_{\text {sphere }} P(t) d r \propto V P(t)$ The volume of the sphere goes as $R^{d} \propto t^{\frac{d}{2}}$, where $d$ is the spatial dimension. Therefore, the probability goes as $P(r=0, t)=\frac{1}{V} \propto \frac{1}{R^{d}} \propto \frac{1}{t^{\frac{d}{2}}}$ For a three dimensional system, we get a result consistent with Alder and Wainwright's observations $P(r=0, t) \propto \frac{1}{t^{\frac{3}{2}}}$ This shows that memory effects may be the source of the power law decay.
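The $t^{-d / 2}$ estimate can be illustrated with a simple random-walk simulation (not part of the original text; the trajectory count, step distribution, and counting radius are arbitrary assumptions):

```python
import numpy as np

# Return-probability scaling P(r=0,t) ~ t^{-d/2}: count the fraction of
# 3D Gaussian random walkers found within a small ball around the origin
# at several times; that fraction times t^{3/2} should be roughly constant.
rng = np.random.default_rng(4)
ntraj, d, radius = 500_000, 3, 1.5
checkpoints = {25, 100, 400}

r = np.zeros((ntraj, d))
for t in range(1, 401):
    r += rng.normal(0.0, 1.0, (ntraj, d))     # one diffusive step
    if t in checkpoints:
        frac = np.mean(np.sum(r**2, axis=1) < radius**2)
        print(f"t={t:3d}  fraction={frac:.2e}  fraction*t^1.5={frac*t**1.5:.3f}")
```

The rescaled fraction is approximately constant across the checkpoints, confirming the $t^{-3 / 2}$ decay of the return probability in three dimensions.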
We can construct a simple model of the behavior of a system for which memory effects are important. In this model, a particle with a velocity $v_{o}$ creates a velocity field through its interactions with other particles. This velocity field can in turn influence the long time behavior of the particle. We can start by getting a rough estimate of this velocity field from the transverse current $J_{T}(k, t)=J_{T}(k, 0) e^{-\nu k^{2} t}$ where $\nu=\frac{\eta}{m \rho}$ is the kinematic viscosity, $\eta$ is the shear viscosity, and $J_{T}(k, 0) \propto v_{o}$. The transverse current is written in $k$-space. By transforming this into real space, we obtain an expression for the dissipation of the velocity field over time and space $v(r, t) \propto \frac{v_{o}}{(\sqrt{4 \pi \nu t})^{3}} e^{\frac{-r^{2}}{4 \nu t}}$ The velocity field dissipates due to friction. At short times, the decay has a Gaussian form. However, at long times the decay is dominated by the prefactor, which goes as $t^{-\frac{3}{2}}$. Hydrodynamic Model A simple way of deriving the above result would be to evaluate the velocity correlation function $C(t)=\langle v(t) v(0)\rangle$ Using the hydrodynamic model, we can find this correlation function by taking the equilibrium average of the non-equilibrium average thermal velocity $C(t)=\langle v(t) v(0)\rangle=\int d \vec{r}_{o} \frac{1}{V} \int d \vec{v}_{o} f_{B} v_{o}\langle v(t)\rangle_{\text {n.e.}}$ where $f_{B}$ is the Boltzmann distribution and $\langle v(t)\rangle_{\text {n.e.}}$ is a non-equilibrium velocity field: $\langle v(t)\rangle_{\text {n.e.}}=\frac{\left\langle v_{s}(t) \delta\left(v_{s}-v_{o}\right) \delta\left(r_{s}-r_{o}\right)\right\rangle}{\left\langle\delta\left(v_{s}-v_{o}\right) \delta\left(r_{s}-r_{o}\right)\right\rangle}$ Here, $s$ describes the tagged particle. The non-equilibrium velocity field can be represented as a coupling of the linear modes $\rho^{s}$ and $v_{s}$. \begin{aligned} \langle v(t)\rangle_{\text {n.e.}}=\int \rho^{s}(r, t) v_{s}(r, t) d r \ =\frac{1}{\rho_{o}} \int \rho^{s}(r, t) J(r, t) d r \ =\frac{\rho_{o}^{-1}}{(2 \pi)^{3}} \int \rho^{s}(k, t) J(k, t) d k \end{aligned} We can solve this using the solutions of the hydrodynamic modes: \begin{aligned} \rho_{s}(k, t)=\rho_{s}(k, 0) e^{-k^{2} D t} \ J_{\perp}(k, t)=J_{\perp}(k, 0) e^{-k^{2} \nu t} \ J_{L}(k, t)=J_{L}(k, 0) e^{-k^{2} \frac{\Gamma}{2} t \pm i c_{s} k t} \end{aligned} For transverse modes, \begin{aligned} v_{o}\langle v(t)\rangle_{\text {n.e.}}=\frac{1}{(2 \pi)^{3} \rho_{o}} \frac{2}{3} \int \rho^{s}(k, 0) e^{-k^{2} D t} J_{x}(k, 0) e^{-k^{2} \nu t} v_{x}^{s}(0) d k \ =\frac{1}{(2 \pi)^{3} \rho_{o}} \frac{2}{3} \int \rho^{s}(k, 0) J_{x}(k, 0) v_{x}^{s}(0) e^{-k^{2}(D+\nu) t} d k \end{aligned} Then, take the equilibrium average $\left\langle\rho^{s}(k, 0) J_{x}(k, 0) v_{x}^{s}(0)\right\rangle=v_{o}^{2}$ Finally, \begin{aligned} C(t)=\frac{2}{3 \rho_{o}} v_{o}^{2} \frac{1}{(2 \pi)^{3}} \int d \vec{k} e^{-k^{2}(D+\nu) t} \ =\frac{v_{o}^{2}}{12 \rho_{o}} \frac{1}{[\pi t(D+\nu)]^{\frac{3}{2}}} \ \propto \frac{1}{t^{\frac{3}{2}}} \end{aligned} Mode-Coupling Theory As shown above, the correlation of a given dynamic quantity decays predominantly into pairs of hydrodynamic modes with conserved variables. Mode-Coupling Theory is the formalism that calculates their coupling. From the discussion above, the velocity of the tagged particle is coupled to a bilinear mode, $A=\rho_{s}(k) J_{x}^{*}(k)$. Then $C(t)=\left\langle v_{x}\left|e^{i \mathcal{L} t}\right| v_{x}\right\rangle \Rightarrow\left\langle v_{x}\left|\mathcal{P} e^{i \mathcal{L} t} \mathcal{P}\right| v_{x}\right\rangle$ where $\mathcal{P}$ is the projection operator associated with $A$.
By expanding the projection operator $C(t)=\left\langle v_{s} \mid A\right\rangle\langle A \mid A\rangle^{-1}\left\langle A\left|e^{i \mathcal{L} t}\right| A\right\rangle\langle A \mid A\rangle^{-1}\left\langle A \mid v_{s}\right\rangle$ Now, use the linear hydrodynamic modes to evaluate the correlation function. \begin{aligned} \rho_{s}=e^{-i k r_{s}} \ J_{x}=\sum_{i} v_{x i} e^{-i k r_{i}} \end{aligned} Then $\left\langle v_{s} \mid A\right\rangle=\sum_{i}\left\langle v_{x s} \mid e^{-i k r_{s}} v_{x i} e^{i k r_{i}}\right\rangle=v_{o}^{2}$ and $\langle A \mid A\rangle=N v_{o}^{2}$ so that $\left\langle v_{s} \mid A\right\rangle\langle A \mid A\rangle^{-1}=\frac{1}{N}$ Therefore, \begin{aligned} C(t)=\frac{1}{N^{2}} \sum_{k} \sum_{k^{\prime}}\left\langle A(k)\left|e^{i \mathcal{L} t}\right| A\left(k^{\prime}\right)\right\rangle \ \approx \frac{1}{N^{2}} \sum_{k} \sum_{k^{\prime}}\left\langle\rho_{s}(k, t) \mid \rho_{s}\left(k^{\prime}, 0\right)\right\rangle\left\langle J_{x}(k, t) \mid J_{x}\left(k^{\prime}, 0\right)\right\rangle \ =\frac{1}{N} \sum_{k} F_{s}(k, t) C_{t}(k, t) \ =\frac{V}{N} \frac{1}{(2 \pi)^{3}} \int d \vec{k} F_{s}(\vec{k}, t) C_{t}(\vec{k}, t) \end{aligned} Now, $F_{s}(\vec{k}, t)=e^{-k^{2} D t}$ and $C_{t}(\vec{k}, t)=v_{o}^{2} e^{-k^{2} \nu t}$ Then $C(t)=\frac{1}{\rho} \frac{v_{o}^{2}}{(2 \pi)^{3}} \int d \vec{k} e^{-k^{2}(D+\nu) t}$ By incorporating the three spatial components of $J$ and $v$, we have $C(t)=\frac{1}{12 \rho} v_{o}^{2}\left[\frac{1}{\pi(D+\nu) t}\right]^{\frac{3}{2}}$
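The Gaussian $k$-integral that produces the $t^{-3 / 2}$ tail can be checked numerically (an illustrative sketch with assumed values of $D$, $\nu$, and $t$; not part of the original text):

```python
import numpy as np
from scipy.integrate import quad

# Check the identity underlying the long-time tail:
#   (2*pi)^{-3} * \int d^3k exp(-k^2 (D+nu) t) = [4 pi (D+nu) t]^{-3/2},
# which decays as t^{-3/2}.
D, nu, t = 0.5, 1.5, 2.0
a = (D + nu) * t
integral, _ = quad(lambda k: 4*np.pi*k**2*np.exp(-a*k**2), 0, np.inf)
print(integral / (2*np.pi)**3, (4*np.pi*a)**-1.5)   # the two values agree
```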
References [1] Jean-Pierre Hansen and Ian R. McDonald. Theory of Simple Liquids. Burlington, MA: Elsevier Academic Press, 2006. [2] Donald A. McQuarrie. Statistical Mechanics. Sausalito: University Science Books, 2000. [3] Jean-Pierre Boon and Sidney Yip. Molecular Hydrodynamics. New York: McGraw-Hill, 1980. [4] Bruce J. Berne and Robert Pecora. Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics. New York: Wiley, 1976. [5] Umberto Balucani. Dynamics of the Liquid State. New York: Oxford University Press, 1994. [6] B. J. Alder and T. E. Wainwright. Phys. Rev. A, 1, 1970.
We will specifically be dealing with the description of coherent nonlinear spectroscopy, which is the term used to describe the case where one or more input fields coherently act on the dipoles of the sample to generate a macroscopic oscillating polarization. This polarization acts as a source to radiate a signal that we detect in a well-defined direction. This class includes experiments such as pump-probes, transient gratings, photon echoes, and coherent Raman methods. However, understanding these experiments allows one to rather quickly generalize to other techniques. The detection schemes can be organized as follows:
• Coherent detection, $I_{coherent}\propto|\sum_i\mu_i|^2\nonumber$ — Linear: absorption. Nonlinear: pump-probe transient absorption, photon echoes, transient gratings, CARS, impulsive Raman scattering.
• Spontaneous detection, $I_{spont}\propto\sum_i|\mu_i|^2\nonumber$ — Linear: fluorescence, phosphorescence, Raman, and light scattering. Nonlinear: fluorescence-detected nonlinear spectroscopy, i.e. stimulated emission pumping, time-dependent Stokes shift.
Spontaneous and coherent signals are both emitted from all samples; however, the relative amplitude of the two depends on the time-scale of dephasing within the sample. For electronic transitions in which dephasing is typically much faster than the radiative lifetime, spontaneous emission is the dominant emission process. For the case of vibrational transitions where non-radiative relaxation is typically a picosecond process and radiative relaxation is a µs or longer process, spontaneous emission is not observed. The description of coherent nonlinear spectroscopies is rooted in the calculation of the polarization, $P$. The polarization is a macroscopic collective dipole moment per unit volume, and for a molecular system is expressed as a sum over the displacement of all charges for all molecules being interrogated by the light. Sum over molecules: $\bar P (\bar r) = \sum_m \bar \mu_m \delta ( \bar r - \bar R_m)$ Sum over charges on molecules: $\bar \mu_m \equiv \sum_{\alpha} q_{m\alpha} (\bar r_{m\alpha} - \bar R_m)$ In coherent spectroscopies, the input fields $E$ act to create a macroscopic, coherently oscillating charge distribution $\bar P(\omega) = \chi \bar E (\omega)$ as dictated by the susceptibility of the sample. The polarization acts as a source to radiate a new electromagnetic field, which we term the signal $\bar E_{sig}$. (Remember that an accelerated charge radiates an electric field.) In the electric dipole approximation, the polarization is one term in the current and charge densities that you put into Maxwell's equations. From our earlier description of freely propagating electromagnetic waves, the wave equation for a transverse, plane wave was $\bar \nabla^2 \bar E(\bar r, t) - \frac{1}{c^2}\frac{\partial^2\bar E(\bar r, t)}{\partial t^2} = 0$ which gave a solution for a sinusoidal oscillating field with frequency $\omega$ propagating in the direction of the wavevector $k$. In the present case, the polarization acts as a source − an accelerated charge − and we can write $\bar \nabla^2 \bar E(\bar r, t) - \frac{1}{c^2}\frac{\partial^2\bar E(\bar r, t)}{\partial t^2} = \frac{4\pi}{c^2}\frac{\partial^2\bar P(\bar r, t)}{\partial t^2}$ The polarization can be described by solutions of the form $\bar P (\bar r,t) = P(t)\exp(i \bar k_{sig}' \cdot \bar r -i\omega_{sig}t) + c.c.$ As we will discuss further later, the wavevector and frequency of the polarization depend on the frequency and wave vector of incident fields.
$\bar k_{sig} = \sum_n\pm \bar k_n \qquad \omega_{sig} = \sum_n\pm \omega_n$

These relationships enforce momentum and energy conservation for the problem. The oscillating polarization radiates a coherent signal field, $\bar E_{sig}$, in a wave-vector-matched direction $\bar k_{sig}$. Although a single dipole radiates as a $\sin\theta$ field distribution relative to the displacement of the charge,1 for an ensemble of dipoles that have been coherently driven by external fields, $P$ is given by (2.6) and the radiation of the ensemble only constructively adds along $\bar k_{sig}$. For the radiated field we obtain

$\bar E_{sig} (\bar r,t) = E_{sig}(\bar r,t)\exp(i \bar k_{sig} \cdot \bar r -i\omega_{sig}t) + c.c.$

This solution comes from solving (2.5) for a thin sample of length $l$, for which the radiated signal amplitude grows and becomes directional as it propagates through the sample. The emitted signal is

$\bar E_{sig}(t) = i\frac{2\pi\omega_{sig}}{nc}\,l\,\bar P(t)\,\mathrm{sinc}\left(\frac{\Delta kl}{2}\right)e^{i\Delta kl/2}$

Here we note that the signal field is proportional to the oscillating polarization, although there is a $\pi/2$ phase shift between the two, $\bar E_{sig}\propto i \bar P$, because in the sample the polarization is related to the gradient of the field. $\Delta k$ is the wave-vector mismatch between the wavevector of the polarization $\bar k_{sig}'$ and the radiated field $\bar k_{sig}$, which we will discuss more later.

For the purpose of our work, we obtain the polarization from the expectation value of the dipole operator

$\bar P(t) \Rightarrow \langle\bar{\mu}(t)\rangle$

The treatment we will use for the spectroscopy is semi-classical, and follows the formalism that was popularized by Mukamel.2 As before, our Hamiltonian can generally be written as

$H = H_0 + V(t)$

where the material system is described by $H_0$ and treated quantum mechanically, and the electromagnetic fields $V(t)$ are treated classically and take the standard form

$V(t)=-\bar \mu \cdot \bar E$

The fields only act to drive transitions between quantum states of the system. We take the interaction with the fields to be sufficiently weak that we can treat the problem with perturbation theory. Thus, nth-order perturbation theory will be used to describe the nonlinear signal derived from interacting with n electromagnetic fields.

1. The radiation pattern in the far field for the electric field emitted by a dipole aligned along the z axis is $E(r,\theta,\phi,t)=-\frac{p_0k^2}{4\pi\epsilon_0}\frac{\sin \theta}{r}\sin{(k \cdot r - \omega t)}$ (written in spherical coordinates). See Jackson, Classical Electrodynamics.

2. S. Mukamel, Principles of Nonlinear Optical Spectroscopy. (Oxford University Press, New York, 1995).

01: Coherent Spectroscopy and the Nonlinear Polarization

Absorption is the simplest example of a coherent spectroscopy. In the semi-classical picture, the polarization induced by the electromagnetic field radiates a signal field that is out-of-phase with the transmitted light. To describe this, all of the relevant information is in $R(t)$ or $\chi(\omega)$.

$\bar P(t)=\int\limits_0^{\infty} d\tau R(\tau)E(t-\tau)$

$\bar P(\omega)=\chi(\omega) \bar E(\omega)$

Let's begin with a frequency-domain description of the absorption spectrum, which we previously found was proportional to the imaginary part of the susceptibility, $\chi''$.1 We consider one monochromatic field incident on the sample that resonantly drives dipoles in the sample to create a polarization, which subsequently re-radiates a signal field (free induction decay).
For one input field, the energy and momentum conservation conditions dictate that $ω_{in} =ω_{sig}$ and $k_{in} = k_{sig}$; that is, a signal field of the same frequency propagates in the direction of the transmitted excitation field. In practice, an absorption spectrum is measured by characterizing the frequency-dependent transmission decrease on adding the sample, $A = −\log (I_{out}/I_{in})$. For the perturbative case, let's take the change of intensity $\delta I = I_{in} − I_{out}$ to be small, so that $A \approx \delta I$ and $I_{in} \approx I_{out}$. Then we can write the measured intensity after the sample as

\begin{align*} I_{out} &= |E_{out} + E_{sig}|^2 \\[4pt] &= |E_{out} + (iP)|^2 \\[4pt] &= |E_{out}+i\chi E_{in}|^2 \\[4pt] &\approx |E_{in} + i\chi E_{in}|^2 \\[4pt] &= |E_{in}|^2|1+i(\chi' + i\chi'')|^2 \\[4pt] &= I_{in}(1-2\chi''+\cdots) \\[4pt] \Rightarrow I_{out} &= I_{in}-\delta I \end{align*}

Here we have made use of the assumption that $|\chi|\ll 1$. We see that as a result of the phase shift between the polarization and the radiated field, the absorbance is proportional to $\chi''$: $\delta I = 2\chi'' I_{in}$.

A time-domain approach to absorption draws on Eq. (2.1.1) and should recover the relationships to the dipole autocorrelation function that we discussed previously. Equating $\bar P(t)$ with $\langle\bar{\mu}(t)\rangle$, we can calculate the polarization in the density matrix picture as

$\bar P(t) = Tr\left(\mu_I(t)\rho_I^{(1)}(t)\right)$

where the first-order expansion of the density matrix is

$\rho_I^{(1)}=-\frac{i}{\hbar}\int\limits_{-\infty}^{t} dt_1[V_I(t_1),\rho_{eq}]$

Substituting eq. (2.13) we find

\begin{align*} \bar P(t) &= Tr\left(\mu_I(t)\frac{i}{\hbar}\int\limits_{-\infty}^{t} dt'[-\mu_I(t')E(t'),\rho_{eq}]\right) \\[4pt] &= -\frac{i}{\hbar}\int\limits_{-\infty}^{t} dt'E(t')Tr\left(\mu_I(t)\left[\mu_I(t'),\rho_{eq}\right]\right) \\[4pt] &= +\frac{i}{\hbar}\int\limits_{0}^{\infty} d\tau E(t-\tau ) Tr\left(\left[\mu_I(\tau),\mu_I(0)\right]\rho_{eq}\right) \end{align*}

In the last line, we switched variables to the time interval $\tau=t-t'$, and made use of the cyclic invariance of the trace, $Tr\left(A\left[B,C\right]\right) = Tr\left(\left[A,B\right]C\right)$. Now comparing to Eq. (2.1.1), we see, as expected,

$R(\tau)=\frac{i}{\hbar}\theta(\tau)Tr\left(\left[\mu_I(\tau),\mu_I(0)\right]\rho_{eq}\right)$

So the linear response function is the difference of two correlation functions, or, more precisely, the imaginary part of the dipole correlation function.

$R(\tau)=\frac{i}{\hbar}\theta(\tau)\left(C(\tau)-C^*(\tau)\right)$

$C(\tau)=Tr\left(\mu_I(\tau)\mu_I(0)\rho_{eq}\right) \nonumber$

$C^*(\tau)=Tr\left(\mu_I(\tau)\rho_{eq}\mu_I(0)\right)$

Also, as we would expect, when we use an impulsive driving potential to induce a free induction decay (i.e., $E(t)=E_0\delta(t)$), the polarization is directly proportional to the response function, which can be Fourier transformed to obtain the absorption lineshape.

1. Remember the following relationships of the susceptibility with the complex dielectric constant $\epsilon(\omega)$, the index of refraction $n(\omega)$, and the absorption coefficient $\kappa(\omega)$:

$\epsilon(\omega)=1+4\pi\chi(\omega) \nonumber$

$\sqrt{\epsilon(\omega)}=\tilde n(\omega)= n(\omega)+i\kappa(\omega) \nonumber$
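As noted above, an impulsive field makes the polarization directly proportional to $R(\tau)$, whose Fourier transform gives the absorption lineshape. A minimal numerical sketch (arbitrary parameters, and one particular sign convention for the one-sided transform) verifying that a damped free induction decay transforms to a Lorentzian of width $2\Gamma$ centered at $\omega_{ba}$:

```python
import numpy as np

w_ba, Gamma = 10.0, 0.5            # transition frequency and dephasing rate (arb. units)
t = np.linspace(0, 30, 30_000)     # long enough that exp(-Gamma t) has fully decayed

# free induction decay of the dipole correlation function
C = np.exp(-1j * w_ba * t - Gamma * t)

# one-sided Fourier transform; the real part is the absorptive (Lorentzian) lineshape
w = np.linspace(5, 15, 501)
chi_abs = np.array([np.trapz(np.exp(1j * wi * t) * C, t).real for wi in w])

print(f"peak at w = {w[np.argmax(chi_abs)]:.2f} (expect {w_ba})")
above = w[chi_abs > chi_abs.max() / 2]
print(f"FWHM = {above[-1] - above[0]:.2f} (expect 2*Gamma = {2 * Gamma})")
```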
For nonlinear spectroscopy, we will calculate the polarization arising from interactions with multiple fields. We will use a perturbative expansion of $P$ in powers of the incoming fields

$\bar P(t)=P^{(0)}+P^{(1)}+P^{(2)}+\cdots$

where $P^{(n)}$ refers to the polarization arising from $n$ incident light fields. So, $P^{(2)}$ and higher are the nonlinear terms. We calculate $P$ from the density matrix

\begin{align*} \bar P(t) &=Tr\left(\bar\mu_I(t)\rho_I(t)\right) \\[4pt] &=Tr\left(\bar\mu_I\rho_I^{(0)}\right)+Tr\left(\bar\mu_I\rho_I^{(1)}(t)\right)+Tr\left(\bar\mu_I\rho_I^{(2)}(t)\right)+\cdots \end{align*}

As we wrote earlier, $\rho_I^{(n)}$ is the nth-order expansion of the density matrix

$\rho^{(0)}=\rho_{eq}\nonumber$

$\rho^{(1)}=-\frac{i}{\hbar}\int\limits_{-\infty}^tdt_1\left[V_I(t_1),\rho_{eq}\right]$

$\rho^{(2)}=\left(-\frac{i}{\hbar}\right)^2\int\limits_{-\infty}^tdt_2\int\limits_{-\infty}^{t_2}dt_1\left[V_I(t_2),\left[V_I(t_1),\rho_{eq}\right]\right]\nonumber$

$\rho_I^{(n)}=\left(-\frac{i}{\hbar}\right)^n\int\limits_{-\infty}^{t}dt_n\int\limits_{-\infty}^{t_n}dt_{n-1}\cdots\int\limits_{-\infty}^{t_2}dt_1\left[V_I(t_n),\left[V_I(t_{n-1}),\left[\cdots\left[V_I(t_1),\rho_{eq}\right]\cdots\right]\right]\right]$

Let's examine the second-order polarization in order to describe the nonlinear response function. Earlier we stated that we could write the second-order nonlinear response arising from two time-ordered interactions with external potentials in the form

$\bar P^{(2)}(t)=\int\limits_0^{\infty}d\tau_2\int\limits_0^{\infty}d\tau_1R^{(2)}\left(\tau_2,\tau_1\right)\bar E_1\left(t-\tau_2-\tau_1\right)\bar E_2\left(t-\tau_2\right) \label{2.2.4}$

We can compare this result to what we obtain from $P^{(2)}(t)=Tr\left(\mu_I(t)\rho_I^{(2)}(t)\right)$. Substituting as we did in the linear case,

\begin{aligned} P^{(2)}(t) &=\operatorname{Tr}\left\{\mu_{I}(t)\left(-\frac{i}{\hbar}\right)^{2} \int_{-\infty}^{t} d t_{2} \int_{-\infty}^{t_{2}} d t_{1}\left[V_{I}\left(t_{2}\right),\left[V_{I}\left(t_{1}\right), \rho_{e q}\right]\right]\right\} \\ &=\left(\frac{i}{\hbar}\right)^{2} \int_{-\infty}^{t} d t_{2} \int_{-\infty}^{t_{2}} d t_{1} E_{2}\left(t_{2}\right) E_{1}\left(t_{1}\right) \operatorname{Tr}\left\{\left[\left[\mu_{I}(t), \mu_{I}\left(t_{2}\right)\right], \mu_{I}\left(t_{1}\right)\right] \rho_{e q}\right\} \\ &=\left(\frac{i}{\hbar}\right)^{2} \int_{0}^{\infty} d \tau_{2} \int_{0}^{\infty} d \tau_{1} E_{2}\left(t-\tau_{2}\right) E_{1}\left(t-\tau_{2}-\tau_{1}\right) \operatorname{Tr}\left\{\left[\left[\mu_{I}\left(\tau_{1}+\tau_{2}\right), \mu_{I}\left(\tau_{1}\right)\right], \mu_{I}(0)\right] \rho_{e q}\right\} \end{aligned} \label{2.2.5}

In the last line we switched variables to the time intervals $t_1=t-\tau_1-\tau_2$ and $t_2=t-\tau_2$, and enforced the time-ordering $t_1 \le t_2$. Comparison of eqs. (2.2.4) and (2.2.5) allows us to state that the second-order nonlinear response function is

$R^{(2)}\left(\tau_{1}, \tau_{2}\right)=\left(\frac{i}{\hbar}\right)^{2} \theta\left(\tau_{1}\right) \theta\left(\tau_{2}\right) \operatorname{Tr}\left\{\left[\left[\mu_{I}\left(\tau_{1}+\tau_{2}\right), \mu_{I}\left(\tau_{1}\right)\right], \mu_{I}(0)\right] \rho_{e q}\right\} \label{2.2.6}$

Again, for impulsive interactions (i.e., delta function light pulses), the nonlinear polarization is directly proportional to the response function.
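For a small model system, the nested-commutator expression (2.2.6) can be evaluated directly with matrix algebra. The sketch below uses a hypothetical three-level system with made-up energies and dipole matrix elements ($\hbar = 1$); it is a literal transcription of the formula, not a full spectroscopic calculation.

```python
import numpy as np

hbar = 1.0
energies = np.array([0.0, 1.0, 1.8])     # model eigenenergies (arb. units)
mu = np.array([[0.0, 1.0, 0.0],          # model dipole operator coupling 0-1 and 1-2
               [1.0, 0.0, 0.7],
               [0.0, 0.7, 0.0]])
rho_eq = np.diag([1.0, 0.0, 0.0])        # equilibrium: all population in the ground state

def mu_I(t):
    """Interaction-picture dipole operator mu_I(t) = U0^dag(t) mu U0(t)."""
    U = np.diag(np.exp(-1j * energies * t / hbar))
    return U.conj().T @ mu @ U

def comm(A, B):
    return A @ B - B @ A

def R2_response(tau1, tau2):
    """Eq. (2.2.6): (i/hbar)^2 Tr{[[mu_I(tau1+tau2), mu_I(tau1)], mu_I(0)] rho_eq}."""
    nested = comm(comm(mu_I(tau1 + tau2), mu_I(tau1)), mu_I(0.0))
    return (1j / hbar)**2 * np.trace(nested @ rho_eq)

val = R2_response(0.5, 1.0)
print(val.real, abs(val.imag))   # a response function is real; the imaginary part is roundoff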
Similar exercises to the linear and second-order response can be used to show that the nonlinear response function to arbitrary order $R^{(n)}$ is

\begin{aligned} R^{(n)}\left(\tau_{1}, \tau_{2}, \ldots \tau_{n}\right) &=\left(\frac{i}{\hbar}\right)^{n} \theta\left(\tau_{1}\right) \theta\left(\tau_{2}\right) \ldots \theta\left(\tau_{n}\right) \\ & \times \operatorname{Tr}\left\{\left[\left[\ldots\left[\mu_{I}\left(\tau_{n}+\cdots+\tau_{1}\right), \mu_{I}\left(\tau_{n-1}+\cdots+\tau_{1}\right)\right], \ldots\right], \mu_{I}(0)\right] \rho_{e q}\right\} \end{aligned} \label{2.2.7}

We see that in general the nonlinear response functions are sums of correlation functions, and the nth-order response has $2^n$ correlation functions contributing. These correlation functions differ by whether sequential operators act on the bra or ket side of $\rho$ when enforcing the time-ordering. Since the bra and ket sides represent conjugate wavefunctions, these correlation functions will contain coherences with differing phase relationships during subsequent time-intervals.

To see more specifically what a particular term in these nested commutators refers to, let's look at $R^{(2)}$ and enforce the time-ordering.

Term 1 in eq. (2.2.6):

\begin{aligned} Q_1 &= Tr\left(\mu_I(\tau_1+\tau_2)\,\mu_I(\tau_1)\,\mu_I(0)\,\rho_{eq}\right) \\ &= Tr\left(U_0^{\dagger}(\tau_1+\tau_2)\,\mu\,U_0(\tau_1+\tau_2)\,U_0^{\dagger}(\tau_1)\,\mu\,U_0(\tau_1)\,\mu\,\rho_{eq}\right) \\ &= Tr\left(\mu\,U_0(\tau_2)\,\mu\,U_0(\tau_1)\,\mu\,\rho_{eq}\,U_0^{\dagger}(\tau_1)\,U_0^{\dagger}(\tau_2)\right) \end{aligned}

where the last line follows from $U_0(\tau_1+\tau_2)U_0^{\dagger}(\tau_1)=U_0(\tau_2)$ and the cyclic invariance of the trace. Reading from right to left:

1. The dipole acts on the ket of $\rho_{eq}$.
2. Evolve under $H_0$ during $\tau_1$.
3. The dipole acts on the ket.
4. Evolve under $H_0$ during $\tau_2$.
5. Multiply by $\mu$ and take the trace.

This is a KET/KET interaction. At each point of interaction with the external potential, the dipole operator acted on the ket side of $\rho$. Different correlation functions are distinguished by the order in which they act on the bra or ket. We only count the interactions with the incident fields, and the convention is that the final operator that we use prior to the trace acts on the ket side. So the term $Q_1$ is a ket/ket interaction.

An alternate way of expressing this correlation function is in terms of the time-propagator for the density matrix, a superoperator defined through $\hat G(t)\rho_{ab}=U_0(t)|a\rangle\langle b|U_0^{\dagger}(t)$. Remembering the time-ordering, this allows $Q_1$ to be written as

$Q_1 = Tr\left(\mu\hat G(\tau_2)\mu\hat G(\tau_1)\mu\rho_{eq}\right) \label{2.2.8}$

Term 2:

\begin{aligned} Q_2 &=Tr\left(\mu_I(0)\mu_I(\tau_1+\tau_2)\mu_I(\tau_1)\rho_{eq}\right) \\ &=Tr\left(\mu_I(\tau_1+\tau_2)\mu_I(\tau_1)\rho_{eq}\mu_I(0)\right) \end{aligned}\nonumber

This is a BRA/KET interaction. For the remaining terms we note that the bra-side interaction is the complex conjugate of the ket side, so of the four terms in eq. (2.2.6), we can identify only two independent terms:

$Q_1 \Rightarrow ket/ket ~~~~~ Q_1^* \Rightarrow bra/bra ~~~~~ Q_2 \Rightarrow ket/bra ~~~~~ Q_2^* \Rightarrow bra/ket \nonumber$

This is a general observation. For $R^{(n)}$, you really only need to calculate $2^{n-1}$ correlation functions.
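The counting argument ($2^n$ terms, of which $2^{n-1}$ are independent) can be checked by brute-force enumeration. A short sketch, assuming the convention stated above that conjugate partners are related by swapping bra and ket at every interaction:

```python
from itertools import product

n = 3   # order of the response: three incident field interactions

# each incident interaction acts on either the ket or the bra side of rho
paths = list(product(("ket", "bra"), repeat=n))
print(f"{len(paths)} = 2^{n} total interaction sequences")     # 8 for n = 3

# a bra/ket-swapped sequence is the complex conjugate of its partner, so keeping
# only sequences whose first interaction is ket-side selects one per conjugate pair
independent = [p for p in paths if p[0] == "ket"]
for p in independent:
    conj = "/".join("bra" if s == "ket" else "ket" for s in p)
    print("/".join(p), " (conjugate:", conj + ")")
print(f"{len(independent)} = 2^{n - 1} independent correlation functions")
```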
So for $R^{(2)}$ we write

$R^{(2)}=\left(\frac{i}{\hbar}\right)^{2} \theta\left(\tau_{1}\right) \theta\left(\tau_{2}\right) \sum_{\alpha=1}^{2}\left[Q_{\alpha}\left(\tau_{1}, \tau_{2}\right)-Q_{\alpha}^{*}\left(\tau_{1}, \tau_{2}\right)\right] \label{2.2.9}$

where

$Q_1=Tr\left[\mu_I(\tau_1+\tau_2)\mu_I(\tau_1)\mu_I(0)\rho_{eq}\right] \label{2.2.10}$

$Q_2=Tr\left[\mu_I(\tau_1)\mu_I(\tau_1+\tau_2)\mu_I(0)\rho_{eq}\right] \label{2.2.11}$

So what is the difference in these correlation functions? Once there is more than one excitation field, and more than one time period during which coherences can evolve, then one must start to carefully watch the relative phase that coherences acquire during different consecutive time-periods, $\phi(\tau)=\omega_{ab}\tau$. To illustrate, consider wavepacket evolution: a light interaction can impart positive or negative momentum ($\pm\bar k_{in}$) to the evolution of the wavepacket, which influences the direction of propagation and the phase of motion relative to other states. Any subsequent field that acts on this state must account for the time-dependent overlap of these wavepackets with other target states. The different terms in the nonlinear response function account for all of the permutations of interactions and the phase acquired by the coherences involved. The sum describes the evolution including possible interference effects between different interaction pathways.

1.03: Third Order Response

Since $R^{(2)}$ orientationally averages to zero for isotropic systems, the third-order nonlinear response describes the most widely used class of nonlinear spectroscopies.

$R^{(3)}\left(\tau_{1}, \tau_{2}, \tau_{3}\right)=\left(\frac{i}{\hbar}\right)^{3} \theta\left(\tau_{3}\right) \theta\left(\tau_{2}\right) \theta\left(\tau_{1}\right) \operatorname{Tr}\left\{\left[\left[\left[\mu_{I}\left(\tau_{1}+\tau_{2}+\tau_{3}\right), \mu_{I}\left(\tau_{1}+\tau_{2}\right)\right], \mu_{I}\left(\tau_{1}\right)\right], \mu_{I}(0)\right] \rho_{e q}\right\}$

which may be written as a sum of four independent correlation functions and their conjugates:

$R^{(3)}\left(\tau_{1}, \tau_{2}, \tau_{3}\right)=\left(\frac{i}{\hbar}\right)^{3} \theta\left(\tau_{3}\right) \theta\left(\tau_{2}\right) \theta\left(\tau_{1}\right) \sum_{\alpha=1}^{4}\left[R_\alpha \left(\tau_{1}, \tau_{2}, \tau_{3}\right) - R^*_\alpha \left(\tau_{1}, \tau_{2}, \tau_{3}\right)\right]$

Here the convention for the time-ordered interactions with the density matrix is R1 = ket/ket/ket; R2 = bra/ket/bra; R3 = bra/bra/ket; and R4 = ket/bra/bra.
In the eigenstate representation, the individual correlation functions can be explicitly written in terms of a sum over all possible intermediate states (a, b, c, d):

\begin{aligned} R_{1} &=\sum_{a, b, c, d} p_{a}\left\langle\mu_{a d}\left(\tau_{1}+\tau_{2}+\tau_{3}\right) \mu_{d c}\left(\tau_{1}+\tau_{2}\right) \mu_{c b}\left(\tau_{1}\right) \mu_{b a}(0)\right\rangle \\ R_{2} &=\sum_{a, b, c, d} p_{a}\left\langle\mu_{a d}(0) \mu_{d c}\left(\tau_{1}+\tau_{2}\right) \mu_{c b}\left(\tau_{1}+\tau_{2}+\tau_{3}\right) \mu_{b a}\left(\tau_{1}\right)\right\rangle \\ R_{3} &=\sum_{a, b, c, d} p_{a}\left\langle\mu_{a d}(0) \mu_{d c}\left(\tau_{1}\right) \mu_{c b}\left(\tau_{1}+\tau_{2}+\tau_{3}\right) \mu_{b a}\left(\tau_{1}+\tau_{2}\right)\right\rangle \\ R_{4} &=\sum_{a, b, c, d} p_{a}\left\langle\mu_{a d}\left(\tau_{1}\right) \mu_{d c}\left(\tau_{1}+\tau_{2}\right) \mu_{c b}\left(\tau_{1}+\tau_{2}+\tau_{3}\right) \mu_{b a}(0)\right\rangle \end{aligned}

1.04: Summary - General Expressions for nth Order Nonlinear Response

For an nth-order nonlinear signal, there are n interactions with the incident electric field or fields that give rise to the radiated signal. Counting the radiated signal, there are n+1 fields involved (n+1 light-matter interactions), so that nth-order spectroscopy is at times referred to as (n+1)-wave mixing. The radiated nonlinear signal field is proportional to the nonlinear polarization:

$P^{(n)}(t)=\int_0^{\infty}d\tau_n\dotsi\int_0^{\infty}d\tau_1R^{(n)}\left(\tau_1,\tau_2,\dotsc\tau_n\right)\bar E_1\left(t-\tau_n-\dotsb -\tau_1\right) \dotsm \bar E_n(t-\tau_n)$

\begin{aligned} R^{(n)}\left(\tau_{1}, \tau_{2}, \ldots \tau_{n}\right) &=\left(\frac{i}{\hbar}\right)^{n} \theta\left(\tau_{1}\right) \theta\left(\tau_{2}\right) \ldots \theta\left(\tau_{n}\right) \\ & \times \operatorname{Tr}\left\{\left[\left[\ldots\left[\mu_{I}\left(\tau_{n}+\cdots+\tau_{1}\right), \mu_{I}\left(\tau_{n-1}+\cdots+\tau_{1}\right)\right], \ldots\right], \mu_{I}(0)\right] \rho_{e q}\right\} \end{aligned} \label{2.4.2}

Here the interactions of the light and matter are expressed in terms of a sequence of consecutive time intervals $\tau_1 \dotso \tau_n$ prior to observing the system. For delta-function interactions, $\bar E_i(t-t_0)=|\bar E_i|\delta(t-t_0)$, the polarization and response function are directly proportional:

$P^{(n)}(t)=R^{(n)}(\tau_1,\tau_2,\dotsc \tau_{n-1},t)|\bar E_1|\dotso|\bar E_n| \label{2.4.3}$
In practice, the nonlinear response functions as written above provide little insight into the molecular origin of particular nonlinear signals. These multiply nested terms are difficult to understand when faced with the numerous light-matter interactions, which can take on a huge range of permutations when performing experiments on a system with multiple quantum states. The different terms in the response function can lead to an array of different nonlinear signals that vary not only microscopically by the time-evolution of the molecular system, but also differ macroscopically in terms of the frequency and wavevector of the emitted radiation.

Diagrammatic perturbation theory (DPT) is a simplified way of keeping track of the contributions to a particular nonlinear signal given a particular set of states in $H_0$ that are probed in an experiment. It uses a series of simple diagrams to represent the evolution of the density matrix for $H_0$, showing repeated interaction of $\rho$ with the fields followed by time-propagation under $H_0$. From a practical sense, DPT allows us to interpret the microscopic origin of a signal with a particular frequency and wavevector of detection, given the specifics of the quantum system we are studying and the details of the incident radiation. It provides a shorthand form of the correlation functions contributing to a particular nonlinear signal, which can be used to understand the microscopic information content of particular experiments. It is also a bookkeeping method that allows us to keep track of the contributions of the incident fields to the frequency and wavevector of the nonlinear polarization.

There are two types of diagrams we will discuss, Feynman and ladder diagrams, each of which has certain advantages and disadvantages. For both types of diagrams, the first step in drawing a diagram is to identify the states of $H_0$ that will be interrogated by the light-fields. The diagrams show an explicit series of absorption or stimulated emission events induced by the incident fields, which appear as action of the dipole operator on the bra or ket side of the density matrix. They also symbolize the coherence or population state in which the density matrix evolves during a given time interval. The trace taken at the end, following the action of the final dipole operator (i.e., the signal emission), is represented by a final wavy line connecting dipole-coupled states.

02: Diagrammatic Perturbation Theory

Feynman diagrams are the easiest way of tracking the state of coherences in different time periods, and for noting absorption and emission events.

1. A double line represents the ket and bra sides of $\rho$.
2. Time evolution is upward.
3. Lines intersecting the diagram represent a field interaction. Absorption is designated through an inward-pointing arrow; emission through an outward-pointing arrow. Action on the left line is action on the ket, whereas the right line is the bra.
4. The system evolves freely under $H_0$ between interactions, and the density matrix element for that period is often explicitly written.

2.02: Ladder Diagrams

Ladder diagrams1 are helpful for describing experiments on multistate systems and/or with multiple frequencies; however, it is difficult to immediately see the state of the system during a given time interval. They naturally lend themselves to a description of interactions in terms of the eigenstates of $H_0$.

1. Multiple states are arranged vertically by energy.
2. Time propagates to the right.
3. Arrows connecting levels indicate resonant interactions.
Absorption is an upward arrow and emission is downward. A solid line is used to indicate action on the ket, whereas a dotted line is action on the bra.
4. Free propagation occurs under $H_0$ between interactions, but the state of the density matrix is not always obvious.

For each light-matter interaction represented in a diagram, there is an understanding of how this action contributes to the response function and the final nonlinear polarization state. Each light-matter interaction acts on one side of $\rho$, either through absorption or stimulated emission. Each interaction adds a dipole matrix element $\mu_{ij}$ that describes the interaction amplitude and any orientational effects.2 Each interaction adds input electric field factors to the polarization, which are used to describe the frequency and wavevector of the radiated signal. The action of the final dipole operator must return you to a diagonal element to contribute to the signal. Remember that action on the bra is the complex conjugate of the ket, and absorption is the complex conjugate of stimulated emission. A table summarizing these interactions contributing to a diagram is below.

| Interaction | field factor | contrib. to $R^{(n)}$ | contrib. to $\bar k_{sig}$, $\omega_{sig}$ |
| --- | --- | --- | --- |
| KET SIDE absorption | $(\bar\mu_{ba}\cdot\bar E_n)\,e^{+i\bar k_n\cdot\bar r - i\omega_nt}$ | $\bar\mu_{ba}\cdot\hat\epsilon_n$ | $+\bar k_n$, $+\omega_n$ |
| KET SIDE stimulated emission | $(\bar\mu_{ba}\cdot\bar E_n^*)\,e^{-i\bar k_n\cdot\bar r + i\omega_nt}$ | $\bar\mu_{ba}\cdot\hat\epsilon_n$ | $-\bar k_n$, $-\omega_n$ |
| BRA SIDE absorption | $(\bar\mu_{ba}^*\cdot\bar E_n^*)\,e^{-i\bar k_n\cdot\bar r + i\omega_nt}$ | $\bar\mu_{ba}^*\cdot\hat\epsilon_n$ | $-\bar k_n$, $-\omega_n$ |
| BRA SIDE stimulated emission | $(\bar\mu_{ba}^*\cdot\bar E_n)\,e^{+i\bar k_n\cdot\bar r - i\omega_nt}$ | $\bar\mu_{ba}^*\cdot\hat\epsilon_n$ | $+\bar k_n$, $+\omega_n$ |
| SIGNAL EMISSION (final trace; convention: ket side) | | $\bar\mu_{ba}\cdot\hat\epsilon_{an}$ | |

Once you have written down the relevant diagrams, being careful to identify all permutations of interactions of your system states with the fields relevant to your signal, the correlation functions contributing to the material response and the frequency and wavevector of the signal field can be readily obtained. It is convenient to write the correlation function as a product of several factors for each event during the series of interactions:

1. Start with a factor $p_n$ signifying the probability of occupying the initial state, typically a Boltzmann factor.
2. Read off products of transition dipole moments for interactions with the incident fields, and for the final signal emission.
3. Multiply by terms that describe the propagation under $H_0$ between interactions. As a starting point for understanding an experiment, it is valuable to include the effects of relaxation of the system eigenstates in the time-evolution using a simple phenomenological approach. Coherences and populations are propagated by assigning the damping constant $\Gamma_{ab}$ to propagation of the $\rho_{ab}$ element: $\hat G(\tau)\rho_{ab}=\exp[-i\omega_{ab}\tau-\Gamma_{ab}\tau]\rho_{ab} \label{3.2.1}$ Note $\Gamma_{ab} = \Gamma_{ba}$ and $G_{ab}^* = G_{ba}$. We can then recognize $\Gamma_{ii}=1/T_1$ as the population relaxation rate for state i and $\Gamma_{ij} = 1/T_2$ as the dephasing rate for the coherence $\rho_{ij}$.
4. Multiply by a factor of $(-1)^n$, where $n$ is the number of bra-side interactions.
This factor accounts for the fact that in evaluating the nested commutator, some correlation functions are subtracted from others.
5. The radiated signal will have frequency $\displaystyle\omega_{sig}=\sum_i\omega_i$ and wave vector $\displaystyle\bar k_{sig}=\sum_i\bar k_i$, with the signs of $\omega_i$ and $\bar k_i$ taken from the table above.

1. D. Lee and A. C. Albrecht, "A unified view of Raman, resonance Raman, and fluorescence spectroscopy (and their analogues in two-photon absorption)." Adv. Infrared and Raman Spectr. 12, 179 (1985).

2. To properly account for all orientational factors, the transition dipole moment must be projected onto the incident electric field polarization $\hat\epsilon$, leading to the terms in the table. This leads to a nonlinear polarization that can have x, y, and z polarization components in the lab frame. These are obtained by projecting the matrix element prior to the final trace onto the desired analyzer axis $\hat\epsilon_{an}$.
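To make the five rules above concrete, a correlation-function contribution is just a product of a Boltzmann factor, transition dipoles, phenomenological propagators, and $(-1)^n$. A sketch for a two-level ket/ket/ket pathway (the term labeled R1 in the third-order examples later), with made-up parameters and $\hbar = 1$:

```python
import numpy as np

# model two-level parameters (assumed values)
p_a   = 1.0      # rule 1: probability of occupying the initial (ground) state
mu_ba = 0.5      # transition dipole matrix element
w_ba  = 10.0     # transition frequency
G_ba  = 0.2      # coherence dephasing rate Gamma_ba
G_aa  = 0.05     # ground-state population relaxation rate Gamma_aa

def G(w, gamma, tau):
    """Phenomenological propagator of rho_ab: exp(-i w tau - gamma tau), eq. (3.2.1)."""
    return np.exp(-1j * w * tau - gamma * tau)

def R1_pathway(tau1, tau2, tau3):
    """ket/ket/ket pathway: no bra-side interactions, so the (-1)^n factor is +1.
    Rule 2 gives four dipole factors; rule 3 gives three propagators."""
    return (p_a * mu_ba**4
            * G(w_ba, G_ba, tau1)     # |b><a| coherence during tau1
            * G(0.0,  G_aa, tau2)     # |a><a| population during tau2 (w_aa = 0)
            * G(w_ba, G_ba, tau3))    # |b><a| coherence during tau3

print(R1_pathway(0.3, 1.0, 0.3))
```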
Let's consider the diagrammatic approach to the linear absorption problem, using a two-level system with a lower level $a$ and upper level $b$. There is only one independent correlation function in the linear response function,

\begin{aligned} C(t) &= Tr\left[\mu(t)\mu(0)\rho_{eq}\right] \\[4pt] &=Tr\left[\mu\hat G(t)\mu\rho_{eq}\right]\end{aligned}\label{3.3.1}

This does not need to be known before starting, but is useful to consider, since it should be recovered in the end. The system will be taken to start in the ground state $\rho_{aa}$. Linear response only allows for one input field interaction, which must be absorption, and which we take to be a ket-side interaction. We can now draw two diagrams:

With this diagram, we can begin by describing the signal characteristics in terms of the induced polarization. The product of incident fields indicates:

$E_1e^{-i\omega_1t+i\bar k_1\cdot\bar r} \Rightarrow P(t)e^{-i\omega_{sig}t+i\bar k_{sig}\cdot\bar r}\label{3.3.2}$

so that

$\omega_{sig}=\omega_1 ~~~~~ \bar k_{sig}=\bar k_1 \label{3.3.3}$

As expected, the signal will radiate with the same frequency and in the same direction as the incoming beam. Next we can write down the correlation function for this term. Working from the bottom of the diagram up, the factors are (1) the initial population, (2) the first dipole interaction, (3) propagation under $H_0$, and (4) the final dipole interaction and trace:

\begin{aligned} C(t) &=p_{a}\left[\mu_{b a}\right]\left[e^{-i \omega_{b a} t-\Gamma_{b a} t}\right]\left[\mu_{a b}\right] \\ &=p_{a}\left|\mu_{b a}\right|^{2} e^{-i \omega_{b a} t-\Gamma_{b a} t} \end{aligned} \label{3.3.4}

More sophisticated ways of treating the time-evolution under $H_0$ in step (3) could take the form of some of our earlier treatments of the absorption lineshape:

\begin{aligned}\hat{G}(\tau) \rho_{a b} & \sim \rho_{a b} \exp \left[-i \omega_{a b} \tau\right] F(\tau) \\ &=\rho_{a b} \exp \left[-i \omega_{a b} \tau-g(\tau)\right] \end{aligned} \label{3.3.5}

Note that one could draw four possible permutations of the linear diagram when considering bra and ket side interactions, and initial population in states a and b. However, there is no new dynamical content in these extra diagrams, and they are generally taken to be understood through one diagram. Diagram ii is just the complex conjugate of eq. (3.3.4), so adding this signal contribution gives:

$C(t)-C^*(t)=-2ip_a|\mu_{ba}|^2\sin (\omega_{ba}t)e^{-\Gamma_{ba}t} \label{3.3.6}$

Accounting for the thermally excited population initially in b leads to the expected two-level system response function that depends on the population difference

$R(t)=\frac{2}{\hbar}(p_a-p_b)|\mu_{ba}|^2\sin (\omega_{ba}t)e^{-\Gamma_{ba}t} \label{3.3.7}$

2.04: Example - Second-Order Response for a Three-Level System

The second-order response is the simplest nonlinear case, but in molecular spectroscopy it is less commonly used than third-order measurements. The signal generation requires a lack of inversion symmetry, which makes it useful for studies of interfaces and chiral systems. However, let's show how one would diagrammatically evaluate the second-order response for a very specific system pictured below. If we only have population in the ground state at equilibrium, and if only resonant interactions are allowed, the permutations of unique diagrams are as follows:

From the frequency conservation conditions, it should be clear that process i is a sum-frequency signal for the incident fields, whereas diagrams ii-iv refer to difference-frequency schemes. To better interpret what these diagrams refer to, let's look at iii.
Reading in a time-ordered manner, we can write the correlation function corresponding to this diagram as

\begin{aligned} C_{2} &=\operatorname{Tr}\left[\mu_I(\tau_1+\tau_2)\,\mu_I(\tau_1)\,\rho_{e q}\,\mu_I(0)\right] \\[4pt] &=(-1)^{1}\, \mu_{b c}\, \hat{G}_{c b}\left(\tau_{2}\right) \mu_{c a}\, \hat{G}_{a b}\left(\tau_{1}\right) \rho_{a a}\, \mu_{b a}^{*} \\[4pt] &=-p_{a}\, \mu_{a b}\, \mu_{b c}\, \mu_{c a}\, e^{-i \omega_{a b} \tau_{1}-\Gamma_{a b} \tau_{1}} e^{-i \omega_{c b} \tau_{2}-\Gamma_{c b} \tau_{2}} \end{aligned} \label{3.4.1}

Note that a literal interpretation of the final trace in diagram iv would imply an absorption event, an upward transition from b to c. What does this have to do with radiating a signal? On the one hand, it is important to remember that a diagram is just mathematical shorthand, and that one can't distinguish absorption and emission in the final action of the dipole operator prior to taking a trace. The other thing to remember is that such a diagram always has a complex conjugate associated with it in the response function. The complex conjugate of iv, a $Q_2^*$ ket/bra term, shown below, has a downward transition (emission) as the final interaction. The combination $Q_2 - Q_2^*$ ultimately describes the observable.

Now, consider the wavevector matching conditions for the second-order signal iii. Remembering that the magnitude of the wavevector is $|\bar k| = \omega/c = 2\pi / \lambda$, the lengths of the vectors will be scaled by the resonance frequencies. When the two incident fields are crossed at a slight angle, the signal would be phase-matched such that it is radiated closest to beam 2. Note that the most efficient wavevector matching here would be when fields 1 and 2 are collinear.
Now let's look at examples of diagrammatic perturbation theory applied to third-order nonlinear spectroscopy. Third-order nonlinearities describe the majority of coherent nonlinear experiments that are used, including pump-probe experiments, transient gratings, photon echoes, coherent anti-Stokes Raman spectroscopy (CARS), and degenerate four-wave mixing (4WM). These experiments are described by some or all of the eight correlation functions contributing to $R^{(3)}$:

$R^{(3)}=\left(\frac{i}{\hbar}\right)^3\sum_{\alpha=1}^4\left[R_\alpha - R^*_\alpha\right] \label{3.5.1}$

The diagrams and corresponding response first require that we specify the system eigenstates. The simplest case, which allows us to discuss a number of examples of third-order spectroscopy, is a two-level system. Let's write out the diagrams and correlation functions for a two-level system starting in $\rho_{aa}$, where the dipole operator couples $|b\rangle$ and $|a\rangle$.

As an example, let's write out the correlation function for R2 obtained from the diagram above. This term is important for understanding photon echo experiments and contributes to pump-probe and degenerate four-wave mixing experiments.

\begin{aligned} R_{2} &=(-1)^{2} p_{a}\left(\mu_{b a}^{*}\right)\left[e^{-i \omega_{a b} \tau_{1}-\Gamma_{a b} \tau_{1}}\right]\left(\mu_{b a}\right)\left(e^{\cancel{-i \omega_{b b} \tau_{2}}-\Gamma_{b b} \tau_{2}}\right)\left(\mu_{a b}^{*}\right)\left[e^{-i \omega_{b a} \tau_{3}-\Gamma_{b a} \tau_{3}}\right]\left(\mu_{a b}\right) \\ &=p_{a}\left|\mu_{a b}\right|^{4} \exp \left[-i \omega_{b a}\left(\tau_{3}-\tau_{1}\right)-\Gamma_{b a}\left(\tau_{1}+\tau_{3}\right)-\Gamma_{b b}\left(\tau_{2}\right)\right] \end{aligned} \label{3.5.2}

The diagrams show how the input field contributions dictate the signal field frequency and wavevector. Recognizing the dependence of $E_{sig}^{(3)} \sim P^{(3)} \sim R_2(E_1E_2E_3)$, these are obtained from the product of the incident field contributions

\begin{aligned} \bar E_1 \bar E_2 \bar E_3 &= \left(E_1^*e^{+i\omega_1t-i\bar k_1\cdot\bar r}\right)\left(E_2e^{-i\omega_2t+i\bar k_2\cdot \bar r}\right)\left(E_3e^{-i\omega_3t+i\bar k_3\cdot\bar r}\right) \\ &\implies E_1^*E_2E_3e^{-i\omega_{sig}t+i\bar k_{sig}\cdot\bar r} \end{aligned} \label{3.5.3}

\begin{aligned} \therefore ~~ \omega_{sig2} &= -\omega_1 + \omega_2 + \omega_3 \\ \bar k_{sig2} &= -\bar k_1 +\bar k_2 +\bar k_3 \end{aligned} \label{3.5.4}

Now, let's compare this to the response obtained from R4. For these we obtain

$R_4=p_a|\mu_{ab}|^4\exp\left[-i\omega_{ba}(\tau_3+\tau_1)-\Gamma_{ba}(\tau_1+\tau_3)-\Gamma_{bb}(\tau_2)\right] \label{3.5.5}$

\begin{aligned} \omega_{sig4} &= +\omega_1 - \omega_2 + \omega_3 \\ \bar k_{sig4} &= +\bar k_1 -\bar k_2 +\bar k_3 \end{aligned} \label{3.5.6}

Note that the R2 and R4 terms are identical, except for the phase acquired during the initial period: $\exp[i\phi]=\exp[\pm i\omega_{ba}\tau_1]$. The R2 term evolves in conjugate coherences during the $\tau_1$ and $\tau_3$ periods, whereas the R4 term evolves in the same coherence state during both periods:

- R4: coherences $|b\rangle\langle a| \rightarrow |b\rangle\langle a|$ in $\tau_1$ and $\tau_3$; phase acquired $e^{-i\omega_{ba}(\tau_1+\tau_3)}$
- R2: coherences $|a\rangle\langle b| \rightarrow |b\rangle\langle a|$ in $\tau_1$ and $\tau_3$; phase acquired $e^{-i\omega_{ba}(\tau_3-\tau_1)}$

The R2 term has the property of time-reversal: the phase acquired during $\tau_1$ is reversed in $\tau_3$.
For that reason the term is called "rephasing." Rephasing signals are selected in photon echo experiments and are used to distinguish line-broadening mechanisms and study spectral diffusion. For R4, the phase is acquired continuously in $\tau_1$ and $\tau_3$, and this term is called "nonrephasing." Analysis of R1 and R3 reveals that these terms are non-rephasing and rephasing, respectively.

For the present case of a third-order spectroscopy applied to a two-level system, we observe that the two rephasing functions R2 and R3 have the same emission frequency and wavevector, and would therefore both contribute equally to a given detection geometry. The two terms differ in which population state they propagate during the $\tau_2$ variable. Similarly, the non-rephasing functions R1 and R4 each have the same emission frequency and wavevector, but differ by the $\tau_2$ population. For transitions between more than two system states, these terms could be separated by frequency or wavevector (see appendix). Since the rephasing pair R2 and R3 both contribute equally to a signal scattered in the $-k_1 + k_2 + k_3$ direction, they are also referred to as $S_I$. The nonrephasing pair R1 and R4 both scatter in the $+k_1 - k_2 + k_3$ direction and are labeled as $S_{II}$. Our findings for the four independent correlation functions are summarized below.

| | | $\omega_{sig}$ | $\bar k_{sig}$ | $\tau_2$ population |
| --- | --- | --- | --- | --- |
| $S_I$ (rephasing) | R2 | $-\omega_1 +\omega_2 +\omega_3$ | $-\bar k_1 +\bar k_2 +\bar k_3$ | excited state |
| | R3 | $-\omega_1 +\omega_2 +\omega_3$ | $-\bar k_1 +\bar k_2 +\bar k_3$ | ground state |
| $S_{II}$ (non-rephasing) | R1 | $+\omega_1 -\omega_2 +\omega_3$ | $+\bar k_1 -\bar k_2 +\bar k_3$ | ground state |
| | R4 | $+\omega_1 -\omega_2 +\omega_3$ | $+\bar k_1 -\bar k_2 +\bar k_3$ | excited state |
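The rephasing property is easy to verify directly from eqs. (3.5.2) and (3.5.5). A minimal sketch with made-up parameters (damping switched off to isolate the phase evolution):

```python
import numpy as np

p_a, mu, w_ba = 1.0, 1.0, 10.0     # model values
G_ba = G_bb = 0.0                  # no damping, to isolate the phase

def R2(t1, t2, t3):
    """Rephasing term, eq. (3.5.2): phase depends on (tau3 - tau1)."""
    return p_a * mu**4 * np.exp(-1j * w_ba * (t3 - t1) - G_ba * (t1 + t3) - G_bb * t2)

def R4(t1, t2, t3):
    """Non-rephasing term, eq. (3.5.5): phase depends on (tau3 + tau1)."""
    return p_a * mu**4 * np.exp(-1j * w_ba * (t3 + t1) - G_ba * (t1 + t3) - G_bb * t2)

# for R2 the phase acquired in tau1 is undone in tau3: at tau3 = tau1 it returns to 1
print(np.isclose(R2(1.3, 0.5, 1.3), 1.0))   # True  (rephased)
print(np.isclose(R4(1.3, 0.5, 1.3), 1.0))   # False (phase keeps accumulating)
```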
A Fourier-Laplace transform of $P^{(3)}(t)$ with respect to the time intervals allows us to obtain an expression for the third-order nonlinear susceptibility, $\chi^{(3)}\left(\omega_1,\omega_2,\omega_3\right)$:

$P^{(3)}\left(\omega_{sig}\right)=\chi^{(3)}\left(\omega_{sig};\omega_1,\omega_2,\omega_3\right)\bar E_1\bar E_2\bar E_3 \label{3.6.1}$

$\chi^{(n)}=\int_0^{\infty}d\tau_ne^{i\Omega_n\tau_n}\dotsi\int_0^{\infty}d\tau_1e^{i\Omega_1\tau_1}R^{(n)}\left(\tau_1,\tau_2,\dotsc\tau_n\right) \label{3.6.2}$

Here the Fourier transform conjugate variables $\Omega_m$ to the time-interval $\tau_m$ are the sum over all frequencies for the incident field interactions up to the period for which you are evolving:

$\Omega_m=\sum_{i=1}^m\omega_i \label{3.6.3}$

For instance, the conjugate variable for the third time-interval of a $k_1-k_2+k_3$ experiment is the sum over the three preceding incident frequencies, $\Omega_3=\omega_1-\omega_2+\omega_3$.

In general, $\chi^{(3)}$ is a sum over many correlation functions and includes a sum over states:

$\chi^{(3)}\left(\omega_1,\omega_2,\omega_3\right)=\frac{1}{6}\left(\frac{i}{\hbar}\right)^3\sum_{abcd}p_a\sum_{\alpha=1}^4\left[\chi_\alpha-\chi_\alpha^*\right] \label{3.6.4}$

Here a is the initial state and the sum is over all possible intermediate states. Also, to describe frequency-domain experiments, we have to permute over all possible time-orderings. Most generally, the eight terms in $R^{(3)}$ lead to 48 terms for $\chi^{(3)}$, as a result of the 3!=6 permutations of the time-ordering of the input fields.2

Given a set of diagrams, we can write the nonlinear susceptibility directly as follows:

1. Read off products of light-matter interaction factors.
2. Multiply by resonance denominator terms that describe the propagation under $H_0$. In the frequency domain, if we apply eq. (3.6.2) to response functions that use phenomenological time-propagators of the form eq. (3.2.1), we obtain $\hat G(\tau_m)\rho_{ab}\implies\frac{1}{(\Omega_m-\omega_{ba})-i\Gamma_{ba}} \label{3.6.5}$ where $\Omega_m$ is defined in eq. (3.6.3).
3. As for the time domain, multiply by a factor of $(-1)^n$ for $n$ bra-side interactions.
4. The radiated signal will have frequency $\omega_{sig}=\sum_i\omega_i$ and wavevector $\bar k_{sig}=\sum_i\bar k_i$.

As an example, consider the term for R2 applied to a two-level system that we wrote in the time domain in eq. (3.5.2):

\begin{aligned} \chi_{2} &=\left|\mu_{b a}\right|^{4} \frac{(-1)}{\omega_{a b}-\left(-\omega_{1}\right)-i \Gamma_{a b}} \cdot \frac{1}{\cancel{\omega_{b b}}-\left(\omega_{2}-\omega_{1}\right)-i \Gamma_{b b}} \cdot \frac{(-1)}{\omega_{b a}-\left(\omega_{3}+\omega_{2}-\omega_{1}\right)-i \Gamma_{b a}} \\ &=\left|\mu_{b a}\right|^{4} \frac{1}{\omega_{1}-\omega_{b a}-i \Gamma_{b a}} \cdot \frac{1}{-\left(\omega_{2}-\omega_{1}\right)-i \Gamma_{b b}} \cdot \frac{1}{-\left(\omega_{3}+\omega_{2}-\omega_{1}-\omega_{b a}\right)-i \Gamma_{b a}} \end{aligned} \label{3.6.6}

The terms are written from a diagram with each interaction and propagation adding a resonant denominator term (here reading left to right). The full frequency-domain response is a sum over multiple terms like these.

1. Prior, Y. A complete expression for the third order susceptibility $\chi^{(3)}$: perturbative and diagrammatic approaches. IEEE J. Quantum Electron. QE-20, 37 (1984). See also, Dick, B. Response functions and susceptibilities for multiresonant nonlinear optical spectroscopy: Perturbative computer algebra solution including feeding. Chem. Phys. 171, 59 (1993).

2.
Bloembergen, N., Lotem, H. & Lynch, R. T. Lineshapes in coherent resonant Raman scattering. Indian J. Pure Appl. Phys. 16, 151 (1978).

2.07: Appendix - Third-Order Diagrams for a Four-Level System

The third-order response function can describe interaction with up to four eigenstates of the system Hamiltonian. These are examples of correlation functions within $R^{(3)}$ for a four-level system representative of vibronic transitions accompanying an electronic excitation, as relevant to resonance Raman spectroscopy. Note that these diagrams present only one example of multiple permutations that must be considered given a particular time-sequence of incident fields that may have variable frequency. The signal frequency comes from summing all incident resonance frequencies, accounting for the sign of the excitation. The products of transition matrix elements are written in a time-ordered fashion without the projection onto the incident field polarization needed to properly account for orientational effects. The $R_1$ term is more properly written $\langle\left(\bar\mu_{ba}\cdot\hat\epsilon_1\right)\left(\bar\mu_{cb}\cdot\hat\epsilon_2\right)\left(\bar\mu_{dc}\cdot\hat\epsilon_3\right)\left(\bar\mu_{ad}\cdot\hat\epsilon_{an}\right)\rangle$. Note that the product of transition dipole matrix elements obtained from the sequence of interactions can always be re-written in the cyclically invariant form $\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}$. This is one further manifestation of the closed loops formed by the sequence of interactions.

2.08: Appendix - Third-Order Diagrams for a Vibration

The third-order nonlinear response functions for infrared vibrational spectroscopy are often applied to a weakly anharmonic vibration. For high-frequency vibrations in which only the $\nu = 0$ state is initially populated, when the incident fields are resonant with the fundamental vibrational transition, we generally consider diagrams involving the system eigenstates $\nu = 0, 1$, and 2, and which include $\nu$ = 0-1 and $\nu$ = 1-2 resonances. Then, there are three distinct signal contributions. Note that for the $S_I$ and $S_{II}$ signals there are two types of contributions: two diagrams in which all interactions are with the $\nu$ = 0-1 transition (fundamental), and one diagram in which there are two interactions with $\nu$ = 0-1 and two with $\nu$ = 1-2 (the overtone). These two types of contributions have opposite signs, which can be seen by counting the number of bra-side interactions, and have emission frequencies of $\omega_{10}$ or $\omega_{21}$. Therefore, for harmonic oscillators, which have $\omega_{10} = \omega_{21}$ and $\sqrt{2}\mu_{10}=\mu_{21}$, we can see that the signal contributions should destructively interfere and vanish. This is a manifestation of the finding that harmonic systems display no nonlinear response. Some deviation from harmonic behavior is required to observe a signal, such as vibrational anharmonicity $\omega_{10} \ne \omega_{21}$, electrical anharmonicity $\sqrt{2}\mu_{10}\ne\mu_{21}$, or level-dependent damping $\Gamma_{10}\ne\Gamma_{21}$ or $\Gamma_{00}\ne\Gamma_{11}$.
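The destructive interference for a perfectly harmonic vibration can be checked numerically. In the sketch below, the two fundamental-only pathways are taken to enter with weight $\mu_{10}^4$ each (positive sign) and emit at $\omega_{10}$, while the overtone pathway enters with weight $\mu_{10}^2\mu_{21}^2$ and opposite sign, emitting at $\omega_{21}$; treat these weights as an illustrative assumption consistent with the sign-counting argument above, not a full response calculation.

```python
import numpy as np

def emitted(t, w10, w21, mu10, mu21):
    """Sum of the tau_3-period signal contributions for one time-ordering:
    two fundamental pathways (+) at w10, one overtone pathway (-) at w21."""
    return 2 * mu10**4 * np.exp(-1j * w10 * t) - mu10**2 * mu21**2 * np.exp(-1j * w21 * t)

t = np.linspace(0, 5, 6)

# harmonic oscillator: w21 = w10 and mu21 = sqrt(2) mu10  ->  exact cancellation
print(np.allclose(emitted(t, 10.0, 10.0, 1.0, np.sqrt(2)), 0))   # True

# weakly anharmonic: w21 slightly below w10  ->  a finite signal appears
print(np.max(np.abs(emitted(t, 10.0, 9.8, 1.0, np.sqrt(2)))))    # nonzero
```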
Third-order nonlinear spectroscopies are the most widely used class of nonlinear methods, including the common pump-probe experiment. This section will discuss a number of these methods. The approach here is meant to be practical, with the emphasis on trying to connect the particular signals with their microscopic origin. This approach can be used for describing any experiment in terms of the wave-vector, frequency and time-ordering of the input fields, and the frequency and wavevector of the signal.

• 3.1: Selecting signals by wavevector. The question is how to select particular contributions to the signal. It won't be possible to uniquely select particular diagrams. However, you can use the properties of the incident and detected fields to help with selectivity.
• 3.2: Photon Echo. The photon echo experiment is most commonly used to distinguish static and dynamic linebroadening, and time-scales for energy gap fluctuations.
• 3.3: Transient Grating. The transient grating is a third-order technique used for characterizing numerous relaxation processes, but is uniquely suited for looking at optical excitations with well-defined spatial period.
• 3.4: Pump-Probe. The pump-probe or transient absorption experiment is perhaps the most widely used third-order nonlinear experiment. It can be used to follow many types of time-dependent relaxation processes and chemical dynamics, and is most commonly used to follow population relaxation, chemical kinetics, or wavepacket dynamics and quantum beats.
• 3.5: CARS (Coherent Anti-Stokes Raman Scattering)

03: Third-Order Nonlinear Spectroscopies

The question is how to select particular contributions to the signal. It won't be possible to uniquely select particular diagrams. However, you can use the properties of the incident and detected fields to help with selectivity. Here is a strategy for describing a particular experiment:

1. Start with the wavevector and frequency of the signal field of interest.
2. (a) Time domain: define a time-ordering along the incident wavevectors, or (b) frequency domain: define the frequencies along the incident wavevectors.
3. Sum up diagrams for correlation functions that will scatter into the wave-vector matched direction, keeping only resonant terms (rotating wave approximation). In the frequency domain, use ladder diagrams to determine which correlation functions yield signals that pass through your filter/monochromator.

Let's start by discussing how one can distinguish a rephasing signal from a non-rephasing signal. Consider two degenerate third-order experiments $(\omega_1 = \omega_2 = \omega_3 = \omega_{sig})$ distinguished by the signal wave-vector for a particular time-ordering. We choose a box geometry, where the three incident fields (a, b, c) are crossed in the sample, incident from three corners of the box, as shown. (Colors in these figures are not meant to represent the frequencies of the incident fields, which are all the same, but only to distinguish the beams.) Since the frequencies are the same, the length of the wavevector $|k|=2\pi n/\lambda$ is equal for each field; only its direction varies. Vector addition of the contributing terms from the incident fields indicates that the signal $\bar k_{sig}$ will be radiated in the direction of the last corner of the box when observed after the sample.
$\bar k_{sig} = +\bar k_a - \bar k_b + \bar k_c$

Comparing the wavevector matching condition for this signal with those predicted by the third-order Feynman diagrams, we see that we can select the non-rephasing signals R1 and R4 by setting the time-ordering of pulses such that a = 1, b = 2, and c = 3. The rephasing signals R2 and R3 are selected with the time-ordering a = 2, b = 1, and c = 3.

Here the wave-vector matching for the rephasing signal is imperfect. The vector sum of the incident fields $\bar k_{sig}$ dictates the direction of propagation of the radiated signal (momentum conservation), whereas the magnitude of the signal wavevector $\bar k_{sig}'$ is dictated by the radiated frequency (energy conservation). The efficiency of radiating the signal field falls off with the wave-vector mismatch $\Delta k=\bar k_{sig}-\bar k_{sig}'$ as

$|\bar E_{sig}(t)|\propto\bar P(t)\,\mathrm{sinc}\left(\Delta kl/2\right)$

where $l$ is the path length (see eq. 2.10).
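The vector bookkeeping for the box geometry is simple to script. A sketch with an assumed wavelength, crossing angle, and sample length: it forms $\bar k_{sig} = +\bar k_a - \bar k_b + \bar k_c$, compares its magnitude with the energy-conserving value, and evaluates the resulting sinc-squared efficiency.

```python
import numpy as np

lam = 800e-9                       # common wavelength of the degenerate beams (m), assumed
k = 2 * np.pi / lam                # magnitude of every incident wavevector
theta = np.deg2rad(2.0)            # beam angle from the box axis, an assumed value

def kvec(sx, sy):
    """Wavevector from one corner of the box, propagating mostly along +z."""
    return k * np.array([sx * np.sin(theta), sy * np.sin(theta), np.cos(theta)])

# three incident beams enter from three corners of the box
ka, kb, kc = kvec(+1, +1), kvec(+1, -1), kvec(-1, -1)

k_sig = ka - kb + kc               # momentum conservation: signal at the fourth corner
print("transverse direction:", np.round(k_sig[:2] / (k * np.sin(theta))))   # -> [-1.  1.]

# energy conservation fixes the radiated |k'_sig| = k; the difference is the mismatch
dk = np.linalg.norm(k_sig) - k
l = 100e-6                         # assumed sample path length (m)
eff = np.sinc(dk * l / (2 * np.pi))**2      # np.sinc(x) = sin(pi x)/(pi x)
print(f"dk = {dk:.3e} 1/m, dk*l/2 = {dk * l / 2:.3f} rad, relative intensity = {eff:.3f}")
```

For small crossing angles the mismatch scales as $k\sin^2\theta/2$, so the efficiency penalty is modest for thin samples, consistent with the sinc dependence quoted above.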
The photon echo experiment is most commonly used to distinguish static and dynamic linebroadening, and time-scales for energy gap fluctuations. The rephasing character of $R_2$ and $R_3$ allows you to separate homogeneous and inhomogeneous broadening. To demonstrate this, let's describe a photon echo experiment for an inhomogeneous lineshape, that is, a convolution of a homogeneous lineshape of width $Γ$ with a static inhomogeneous distribution of width $Δ$. Remember that linear spectroscopy cannot distinguish the two:

$R(\tau)=|\mu_{ab}|^2e^{-i\omega_{ab}\tau-g(\tau)}-c.c. \label{4.2.1}$

For an inhomogeneous distribution, we could average the homogeneous response, $g(t)=\Gamma_{ba}t$, with an inhomogeneous distribution

$R=\int d\omega_{ab}G\left(\omega_{ab}\right)R\left(\omega_{ab}\right) \label{4.2.2}$

which we take to be Gaussian

$G(\omega_{ba})=\exp\left(-\frac{\left(\omega_{ba}-\langle\omega_{ba}\rangle\right)^2}{2\Delta^2}\right) \label{4.2.3}$

Equivalently, since a convolution in the frequency domain is a product in the time domain, we can set

$g(t)=\Gamma_{ba}t+\frac{1}{2}\Delta^2t^2 \label{4.2.4}$

So for the case that $\Delta \gg \Gamma$, the absorption spectrum is a broad Gaussian lineshape centered at the mean frequency $\langle\omega_{ba}\rangle$, which just reflects the static distribution $\Delta$ rather than the dynamics in $\Gamma$.

Now look at the experiment in which two pulses are crossed to generate a signal in the direction

$k_{sig}=2k_2-k_1 \label{4.2.5}$

This signal is a special case of the signal $(k_3+k_2-k_1)$ where the second and third interactions are both derived from the same beam. Both rephasing diagrams contribute here, but since the second and third interactions are coincident, $\tau_2=0$ and $R_2=R_3$. The nonlinear signal can be obtained by integrating the homogeneous response,

$R^{(3)}(\omega_{ab})=|\mu_{ab}|^4p_ae^{-i\omega_{ab}(\tau_1-\tau_3)}e^{-\Gamma_{ab}(\tau_1+\tau_3)} \label{4.2.6}$

over the inhomogeneous distribution as in eq. (4.2.2). This leads to

$R^{(3)}=|\mu_{ab}|^4p_ae^{-i\langle\omega_{ab}\rangle(\tau_1-\tau_3)}e^{-\Gamma_{ab}(\tau_1+\tau_3)}e^{-(\tau_1-\tau_3)^2\Delta^2/2} \label{4.2.7}$

For $\Delta \gg \Gamma_{ab}$, $R^{(3)}$ is sharply peaked at $\tau_1=\tau_3$, i.e. $e^{-(\tau_1-\tau_3)^2\Delta^2/2}\approx\delta(\tau_1-\tau_3)$. The broad distribution of frequencies rapidly dephases during $\tau_1$, but is rephased (or refocused) during $\tau_3$, leading to a large constructive enhancement of the polarization at $\tau_1=\tau_3$. This rephasing enhancement is called an echo.

In practice, the signal is observed with an integrating intensity-level detector placed in the signal scattering direction. For a given pulse separation $\tau$ (setting $\tau_1=\tau$), we calculate the integrated signal intensity radiated from the sample during $\tau_3$ as

$I_{sig}(\tau)=|E_{sig}|^2\propto\int_{-\infty}^{\infty}d\tau_3|P^{(3)}(\tau,\tau_3)|^2 \label{4.2.8}$

In the inhomogeneous limit $(\Delta \gg \Gamma_{ab})$, we find

$I_{sig}(\tau)\propto|\mu_{ab}|^8e^{-4\Gamma_{ab}\tau} \label{4.2.9}$

In this case, the only source of relaxation of the polarization amplitude at $\tau_1=\tau_3$ is $\Gamma_{ab}$. At this point inhomogeneity is removed and only the homogeneous dephasing is measured. The factor of four in the decay rate reflects the fact that damping of the initial coherence evolves over two periods $\tau_1+\tau_3=2\tau$, and that an intensity-level measurement doubles the decay rate of the polarization.
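The echo can be reproduced numerically by averaging eq. (4.2.6) over a Gaussian frequency distribution. A sketch with arbitrary parameters in the inhomogeneous limit $\Delta \gg \Gamma$, also checking the $e^{-4\Gamma\tau}$ decay of the integrated intensity of eq. (4.2.9):

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma, Delta = 0.2, 5.0            # homogeneous and inhomogeneous widths (arb. units)
w_mean = 100.0
w_ab = w_mean + Delta * rng.standard_normal(20_000)   # static Gaussian distribution

def P3(tau1, tau3):
    """Inhomogeneous average of the rephasing response, eq. (4.2.6)."""
    return np.mean(np.exp(-1j * w_ab * (tau1 - tau3) - Gamma * (tau1 + tau3)))

# the ensemble-averaged polarization rephases at tau3 = tau1 (the echo)
tau3 = np.linspace(0.0, 4.0, 401)
P = np.abs([P3(2.0, t3) for t3 in tau3])
print(f"|P(3)| peaks at tau3 = {tau3[np.argmax(P)]:.2f} (echo expected at tau1 = 2.0)")

# integrated intensity, eq. (4.2.8): the product below is constant if I ~ exp(-4 Gamma tau)
for tau in (0.5, 1.0, 1.5):
    t3 = np.linspace(0.0, 2 * tau + 4.0, 400)
    I = np.trapz([abs(P3(tau, x))**2 for x in t3], t3)
    print(f"tau = {tau}: I_sig * exp(+4 Gamma tau) = {I * np.exp(4 * Gamma * tau):.4f}")
```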
3.03: Transient Grating

The transient grating is a third-order technique used for characterizing numerous relaxation processes, but is uniquely suited for looking at optical excitations with a well-defined spatial period. The first two pulses are set time-coincident, so you cannot distinguish which field interacts first. Therefore, the signal will have contributions both from $k_{sig} = k_1 − k_2 + k_3$ and $k_{sig} = −k_1 + k_2 + k_3$. That is, the signal depends on $R_1+R_2+R_3+R_4$.

Consider the terms contributing to the polarization that arise from the first two interactions. For two time-coincident pulses of the same frequency, the first two fields have an excitation profile in the sample

$\bar E_a\bar E_b=E_aE_b \exp\left[-i(\omega_a-\omega_b)t+i(\bar k_a-\bar k_b)\cdot\bar r\right]+c.c. \label{4.3.1}$

If the beams are crossed at an angle $2\theta$,

\begin{aligned} \bar k_a &=|k_a|(\hat z\cos{\theta}+\hat x\sin{\theta}) \\ \bar k_b &=|k_b|(\hat z\cos{\theta}-\hat x\sin{\theta}) \end{aligned} \label{4.3.2}

with

$|k_a|=|k_b|=\frac{2\pi n}{\lambda} \label{4.3.3}$

the excitation of the sample is a spatially varying interference pattern along the transverse direction

$\bar E_a \bar E_b=E_aE_b \exp[i\bar\beta\cdot\bar x]+c.c. \label{4.3.4}$

The grating wavevector is

\begin{aligned} \bar\beta &=\bar k_a-\bar k_b \\[4pt] |\bar\beta| &= \frac{4\pi n}{\lambda}\sin{\theta} =\frac{2\pi}{\eta} \end{aligned} \label{4.3.5}

This spatially varying field pattern is called a grating, and has a fringe spacing

$\eta=\frac{\lambda}{2n\sin{\theta}} \label{4.3.6}$

Absorption images this pattern into the sample, creating a spatial pattern of excited and ground state molecules. A time-delayed probe beam can scatter off this grating, where the wavevector matching conditions are equivalent to the constructive interference of scattered waves at the Bragg angle off a diffraction grating. For $\omega_1=\omega_2=\omega_3=\omega_{sig}$ the diffraction condition is incidence of $\bar k_3$ at an angle $\theta$, leading to scattering of a signal out of the sample at an angle $-\theta$. Most commonly, we measure the intensity of the scattered light, as given in eq. (4.2.8).

More generally, we should think of excitation with this pulse pair leading to a periodic spatial variation of the complex index of refraction of the medium. Absorption can create an excited-state grating, whereas subsequent relaxation can deposit heat, creating a periodic temperature profile (a thermal grating). Nonresonant scattering processes (Rayleigh and Brillouin scattering) can create a spatial modulation in the real index of refraction. Thus, the transient grating signal will be sensitive to any processes which act to wash out the spatial modulation of the grating pattern:

• Population relaxation leads to a decrease in the grating amplitude, observed as a decrease in diffraction efficiency. $I_{sig}(\tau) \propto \exp[-2\Gamma_{bb}\tau] \label{4.3.7}$

• Thermal or mass diffusion along $\hat x$ acts to wash out the fringe pattern. For a diffusion constant D the decay of diffraction efficiency is $I_{sig}(\tau)\propto \exp[-2\beta^2D\tau] \label{4.3.8}$

• Rapid heating by the excitation pulses can launch counter-propagating acoustic waves along $\hat x$, which can modulate the diffracted beam at a frequency dictated by the time for sound to propagate over the fringe spacing in the sample.
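Orders of magnitude are easy to check with eqs. (4.3.5), (4.3.6), and (4.3.8). A sketch with assumed visible-light parameters (all numbers are illustrative):

```python
import numpy as np

lam   = 532e-9             # excitation wavelength (m), assumed
n     = 1.33               # refractive index, assumed (water-like)
theta = np.deg2rad(3.0)    # half of the crossing angle 2*theta, assumed

beta = (4 * np.pi * n / lam) * np.sin(theta)   # grating wavevector |beta|, eq. (4.3.5)
eta  = 2 * np.pi / beta                        # fringe spacing, eq. (4.3.6)
print(f"fringe spacing eta = {eta * 1e6:.2f} um")

D = 1.4e-7                                     # thermal diffusivity (m^2/s), assumed
rate = 2 * beta**2 * D                         # diffusive decay rate, eq. (4.3.8)
print(f"thermal grating 1/e decay time = {1 / rate * 1e6:.2f} us")
```

Because the decay rate scales as $\beta^2$, repeating the measurement at several crossing angles separates diffusive washout from angle-independent population relaxation.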
The pump-probe or transient absorption experiment is perhaps the most widely used third-order nonlinear experiment. It can be used to follow many types of time-dependent relaxation processes and chemical dynamics, and is most commonly used to follow population relaxation, chemical kinetics, or wavepacket dynamics and quantum beats. The principle is quite simple, and the theoretical formalism of nonlinear spectroscopy is often unnecessary to interpret the experiment. Two pulses separated by a delay $\tau$ are crossed in a sample: a pump pulse and a time-delayed probe pulse. The pump pulse $E_{pu}$ creates a non-equilibrium state, and the time-dependent changes in the sample are characterized by the probe pulse $E_{pr}$ through the pump-induced intensity change on the transmitted probe, $\Delta I$.

Described as a third-order coherent nonlinear spectroscopy, the signal is radiated collinear to the transmitted probe field, so the wavevector matching condition is $\bar k_{sig}=+\bar k_{pu} -\bar k_{pu} +\bar k_{pr}=\bar k_{pr}$. There are two interactions with the pump field and the third interaction is with the probe. Similar to the transient grating, the time-ordering of pump interactions cannot be distinguished, so the terms that contribute to scattering along the probe are $k_{sig}=\pm k_1 \mp k_2 + k_3$ (i.e., all correlation functions R1 to R4). In fact, the pump-probe can be thought of as the transient grating experiment in the limit of zero grating wavevector ($\theta$ and $\beta\rightarrow 0$).

The detector observes the intensity of the transmitted probe and nonlinear signal

$I=\frac{nc}{4\pi}|E_{pr}'+E_{sig}|^2 \label{4.4.1}$

$E_{pr}'$ is the transmitted probe field corrected for linear propagation through the sample. The measured signal is typically the differential intensity on the probe field with and without the pump field present:

$\Delta I(\tau)=\frac{nc}{4\pi}\left\{|E_{pr}'+E_{sig}(\tau)|^2-|E_{pr}'|^2\right\} \label{4.4.2}$

If we work under conditions of a weak signal relative to the transmitted probe, $|E_{pr}'|\gg|E_{sig}|$, then the differential intensity in eq. (4.4.2) is dominated by the cross term

\begin{aligned} \Delta I(\tau) & \approx \frac{n c}{4 \pi}\left[E_{p r}^{\prime} E_{s i g}^{*}(\tau)+c . c .\right] \\ &=\frac{n c}{2 \pi} \operatorname{Re}\left[E_{p r}^{\prime} E_{s i g}^{*}(\tau)\right] \end{aligned}\label{4.4.3}

So the pump-probe signal is directly proportional to the nonlinear response. Since the signal field is related to the nonlinear polarization through a $\pi/2$ phase shift,

$\bar E_{sig}(\tau)=i\frac{2\pi\omega_{sig}\ell}{nc}P^{(3)}(\tau)\label{4.4.4}$

the measured pump-probe signal is proportional to the imaginary part of the polarization

$\Delta I(\tau)=2\omega_{sig}\ell\, Im\left[E_{pr}'P^{(3)}(\tau)\right]\label{4.4.5}$

which is also proportional to the correlation functions derived from the resonant diagrams we considered earlier.

Dichroic and Birefringent Response

In analogy to what we observed earlier for linear spectroscopy, the nonlinear changes in absorption of the transmitted probe field are related to the imaginary part of the susceptibility, or the imaginary part of the index of refraction. In addition to the fully resonant processes, it is also possible for the pump field to induce nonresonant changes to the polarization that modulate the real part of the index of refraction.
These can be described through a variety of nonresonant interactions, such as nonresonant Raman, the optical Kerr effect, coherent Rayleigh or Brillouin scattering, or the second hyperpolarizability of the sample. In this case, we can describe the time-development of the polarization and radiated signal field as

\begin{aligned} P^{(3)}(\tau,\tau_3) &= P^{(3)}(\tau,\tau_3)e^{-i\omega_{sig}\tau_3}+\left[P^{(3)}(\tau,\tau_3)\right]^*e^{+i\omega_{sig}\tau_3} \\ &= 2\operatorname{Re}\left[P^{(3)}(\tau,\tau_3)\right]\cos{(\omega_{sig}\tau_3)}+2\operatorname{Im}\left[P^{(3)}(\tau,\tau_3)\right]\sin{(\omega_{sig}\tau_3)} \end{aligned}\label{4.4.6}

\begin{aligned}\bar E_{sig}(\tau_3) &=\frac{4\pi\omega_{sig}\ell}{nc}\left(\operatorname{Re}\left[P^{(3)}(\tau,\tau_3)\right]\sin{(\omega_{sig}\tau_3)}+\operatorname{Im}\left[P^{(3)}(\tau,\tau_3)\right]\cos{(\omega_{sig}\tau_3)}\right) \\ &= E_{bir}(\tau,\tau_3)\sin{(\omega_{sig}\tau_3)}+E_{dic}(\tau,\tau_3)\cos{(\omega_{sig}\tau_3)} \end{aligned}\label{4.4.7}

Here the signal is expressed as a sum of two contributions, referred to as the birefringent $(E_{bir})$ and dichroic $(E_{dic})$ responses. As before, the imaginary part, or dichroic response, describes the sample-induced amplitude variation in the signal field, whereas the birefringent response corresponds to the real part of the nonlinear polarization and represents the phase shift or retardance of the signal field induced by the sample. In this scheme, the transmitted probe is

$\bar E_{pr}'(\tau_3)=E_{pr}'(\tau_3)\cos{(\omega_{pr}\tau_3)} \label{4.4.8}$

so that

$\Delta I(\tau)\approx\frac{nc}{2\pi}\left[E_{pr}'(\tau)E_{dic}(\tau)\right] \label{4.4.9}$

Because the signal is in quadrature with the polarization ($\pi/2$ phase shift), the absorptive or dichroic response is in phase with the transmitted probe, whereas the birefringent part is not observed. If we allow for the phase of the probe field to be controlled, for instance through a quarter-wave plate before the sample, then we can write

$\bar E_{pr}'(\tau_3,\phi)=E_{pr}'(\tau_3)\cos{(\omega_{pr}\tau_3+\phi)} \label{4.4.10}$

$I(\tau,\phi)\approx\frac{nc}{2\pi}\left[E_{pr}'(\tau)E_{bir}(\tau)\sin{(\phi)}+E_{pr}'(\tau)E_{dic}(\tau)\cos{(\phi)}\right] \label{4.4.11}$

The birefringent and dichroic responses of the molecular system can now be observed for phases of $\phi=\pi/2,3\pi/2\dots$ and $\phi=0,\pi\dots$, respectively.

Incoherent pump-probe experiments

What information does the pump-probe experiment contain? Since the time delay we control is the second time interval $\tau_2$, the diagrams for a two-level system indicate that these measure population relaxation:

$\Delta I(\tau)\propto|\mu_{ab}|^4e^{-\Gamma_{bb}\tau} \label{4.4.12}$

In fact, measuring population changes and relaxation is the most common use of this experiment. When dephasing is very rapid, the pump-probe can be interpreted as an incoherent experiment, and the differential intensity (or absorption) change is proportional to the change of population of the states observed by the probe field. The pump-induced population changes in the probe states can be described by rate equations that describe the population relaxation, redistribution, or chemical kinetics.

For the case where the pump and probe frequencies are the same, the signal decays as a result of population relaxation of the initially excited state. The two-level system diagrams indicate that the evolution in $\tau_2$ is differentiated by evolution in the ground or excited state. These diagrams reflect the equal signal contributions from the ground state bleach (loss of ground state population) and stimulated emission from the excited state. For direct relaxation from excited to ground state, the loss of population in the excited state $\Gamma_{bb}$ is the same as the refilling of the hole in the ground state $\Gamma_{aa}$, so that $\Gamma_{aa}=\Gamma_{bb}$. If population relaxation from the excited state is through an intermediate, then the pump-probe decay will reflect equal contributions from both processes, which can be described by coupled first-order rate equations. When the resonance frequencies of the pump and probe fields are different, then the incoherent pump-probe signal is related to the joint probability of exciting the system at $\omega_{pu}$ and detecting at $\omega_{pr}$ after waiting a time $\tau$, $P(\omega_{pr},\tau;\omega_{pu})$.
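To make the rate-equation picture concrete, here is a minimal sketch (not from the original text; the three-level scheme and rate constants are illustrative assumptions) that integrates the populations for relaxation through an intermediate and builds the resulting differential probe intensity:

```python
import numpy as np

# Hypothetical three-level kinetics: excited state b decays to an
# intermediate state i (rate k1), which refills the ground state a (rate k2).
k1, k2 = 1.0 / 2.0, 1.0 / 10.0          # illustrative rates (1/ps)
tau = np.linspace(0.0, 50.0, 501)       # pump-probe delay (ps)

Nb = np.exp(-k1 * tau)                  # excited-state population
Ni = k1 / (k2 - k1) * (np.exp(-k1 * tau) - np.exp(-k2 * tau))  # intermediate
Na = 1.0 - Nb - Ni                      # recovering ground-state population

# Ground-state bleach (the unfilled hole, 1 - Na) plus stimulated emission (Nb)
dI = -((1.0 - Na) + Nb)
print(dI[0], dI[-1])                    # -2 at tau = 0; recovers toward 0
```

The biexponential recovery of `dI` is exactly the signature of population flow through the intermediate described above.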
Coherent pump-probe experiments

Ultrafast pump-probe measurements on the timescale of vibrational dephasing operate in a coherent regime where wavepackets prepared by the pump pulse modulate the probe intensity. This provides a mechanism for studying the dynamics of excited electronic states with coupled vibrations and photoinitiated chemical reaction dynamics. If we consider the case of pump-probe experiments on electronic states where $\omega_{pu}=\omega_{pr}$, our description of the pump-probe from Feynman diagrams indicates that the pump pulse creates excitations on both the excited state and the ground state. Both wavepackets will contribute to the signal.

There are two equivalent ways of describing the experiment, which mirror our earlier description of electronic spectroscopy for an electronic transition coupled to nuclear motion. The first is to describe the spectroscopy in terms of the eigenstates of $H_0$, $|e,n\rangle$. The second draws on the energy gap Hamiltonian to describe the spectroscopy as two electronic levels $H_S$ that interact with the vibrational degrees of freedom $H_B$, and the wavepacket dynamics are captured by $H_{SB}$.

For the eigenstate description, a two-level system is inadequate to capture the wavepacket dynamics. Instead, we describe the spectroscopy in terms of the four-level system diagrams given earlier. In addition to the population relaxation terms, we see that the R2 and R4 terms describe the evolution of coherences in the excited electronic state, whereas the R1 and R3 terms describe the ground state wavepacket. For an underdamped wavepacket these coherences are observed as quantum beats on the pump-probe signal.

3.05: CARS (Coherent Anti-Stokes Raman Scattering)

Used to drive ground state vibrations with optical pulses or cw fields.

• Two fields, with a frequency difference equal to a vibrational transition energy, are used to excite the vibration.
• The first field is the "pump" and the second is the "Stokes" field.
• A second interaction with the pump frequency leads to a signal that radiates at the anti-Stokes frequency, $\omega_{sig}=2\omega_P-\omega_S$, and the signal is observed background-free next to the transmitted pump field: $\bar k_{sig}=2\bar k_P-\bar k_S$.

The experiment is described by R1 to R4, and the polarization is

\begin{aligned} R^{(3)} &=\bar \mu_{ev'} \bar \mu_{v'g} e^{-i\omega_{eg}\tau-\Gamma_{eg}\tau} \bar \mu_{gv} \bar \mu_{ve} +c.c. \\ &=\bar\alpha_{eg}e^{-i\omega_{eg}\tau-\Gamma_{eg}\tau}\bar\alpha_{ge}+c.c. \end{aligned} \nonumber

The CARS experiment is similar to a linear experiment in which the lineshape is determined by the Fourier transform of $C(\tau)=\langle\bar\alpha(\tau)\bar\alpha(0)\rangle$. The same processes contribute to Optical Kerr Effect experiments and Impulsive Stimulated Raman Scattering.
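As a numerical illustration of this Fourier-transform relationship (a sketch with illustrative parameters, not taken from the text), one can generate a damped oscillatory correlation function and recover its lineshape:

```python
import numpy as np

# Damped coherence: C(t) ~ exp(-i*w0*t - Gamma*t), with illustrative
# w0 and Gamma in angular-frequency units.
w0, Gamma = 10.0, 0.5
t = np.linspace(0.0, 40.0, 4096)
C = np.exp(-1j * w0 * t - Gamma * t)

# FT with the e^{+i w t} convention (numpy's inverse transform) gives a
# Lorentzian line centered at w0 with half-width Gamma.
spec = np.fft.fftshift(np.fft.ifft(C)) * t.size
w = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0])) * 2.0 * np.pi
print(w[np.argmax(np.abs(spec))])       # ~ 10
```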
• 4.1: Eigenstate vs. System/Bath Perspectives — From our earlier work on electronic spectroscopy, we found that there are two equivalent ways of describing spectroscopic problems, which can be classified as the eigenstate and system/bath perspectives. Let's summarize these before turning back to nonlinear spectroscopy.
• 4.2: Energy Gap Fluctuations — How do transition energy gap fluctuations enter into the nonlinear response? As we did in the case of linear experiments, we will make use of the second cumulant approximation to relate dipole correlation functions to the energy gap correlation function.
• 4.3: Nonlinear Response with the Energy Gap Hamiltonian — In a manner that parallels our description of the linear response from a system coupled to a bath, the nonlinear response can also be partitioned into a system, bath, and energy gap Hamiltonian, leading to similar averages over the fluctuations of the energy gap.
• 4.4: How Can You Characterize Fluctuations and Spectral Diffusion?

04: Characterizing Fluctuations

From our earlier work on electronic spectroscopy, we found that there are two equivalent ways of describing spectroscopic problems, which can be classified as the eigenstate and system/bath perspectives. Let's summarize these before turning back to nonlinear spectroscopy, using electronic spectroscopy as the example:

1) Eigenstate: The interaction of light and matter is treated with the interaction picture Hamiltonian $H = H_0 +V (t)$. $H_0$ is the full material Hamiltonian, expressed as a function of nuclear and electronic coordinates, and is characterized by eigenstates which are the solution to $H_0|n\rangle = E_n|n\rangle$. In the electronic case $|n\rangle=|e,n_1,n_2\dots\rangle$ represents the labels for a particular vibronic state. The dipole operator in $V(t)$ couples these states. Given that we have such detailed knowledge of the matter, we can obtain an absorption spectrum in two ways. In the time domain, we know

$C_{\mu\mu}(t)=\sum_np_n\langle n|\mu(t)\mu(0)|n\rangle = \sum_{n,m}p_n|\mu_{nm}|^2e^{-i\omega_{mn}t} \label{5.1.1}$

The absorption lineshape is then related to the Fourier transform of $C(t)$,

$\sigma(\omega)=\sum_{n,m}p_n|\mu_{nm}|^2\frac{1}{\omega-\omega_{nm}-i\Gamma_{nm}} \label{5.1.2}$

where the phenomenological damping constant $\Gamma_{nm}$ has first been added to the correlation function in eq. (5.1.1). This approach works well if you have an intimate knowledge of the Hamiltonian, if your spectrum is highly structured, and if irreversible relaxation processes are of minor importance.

2) System/Bath: In condensed phases, irreversible dynamics and featureless lineshapes suggest a different approach. In the system/bath or energy gap representation, we separate our Hamiltonian into two parts: the system $H_S$ contains a few degrees of freedom $Q$ which we treat in detail, and the remaining degrees of freedom ($q$) are in the bath $H_B$. Ideally, the interaction between the two sets, $H_{SB}(q,Q)$, is weak.

$H_0=H_S+H_B+H_{SB} \label{5.1.3}$

Spectroscopically we usually think of the dipole operator as acting on the system state, i.e., the dipole operator is a function of $Q$.
If we then know the eigenstates of $H_S$, $H_S|n\rangle = E_n|n\rangle$ where $|n\rangle = |g\rangle$ or $|e\rangle$ for the electronic case, the dipole correlation function is

$C_{\mu\mu}(t)=|\mu_{eg}|^2e^{-i\langle\omega_{eg}\rangle t} \left\langle \exp\left[-i\int_0^tH_{SB}(t')dt'\right]\right\rangle \label{5.1.4}$

The influence of the dark states in $H_B$ is to modulate or change the spectroscopic energy gap $\omega_{eg}$ in a form dictated by the time-dependent system-bath interaction. The system/bath approach is a natural way of treating condensed phase problems where you can't treat all of the nuclear motions (liquid/lattice) explicitly. Also, you can imagine hybrid approaches if there are several system states that you wish to investigate spectroscopically.

4.02: Energy Gap Fluctuations

How do transition energy gap fluctuations enter into the nonlinear response? As we did in the case of linear experiments, we will make use of the second cumulant approximation to relate dipole correlation functions to the energy gap correlation function $C_{eg}(\tau)$. Remembering that for the case of a system-bath interaction that linearly couples the system and bath nuclear coordinates, the cumulant expansion allows the linear spectroscopy to be expressed in terms of the lineshape function $g(t)$

$C_{\mu\mu}(t)=|\mu_{eg}|^2e^{-i\omega_{eg}t}e^{-g(t)} \label{5.2.1}$

$g(t)=\int_0^tdt''\int_0^{t''}dt'\underbrace{\frac{1}{\hbar^2}\langle\delta H_{eg}(t')\delta H_{eg}(0)\rangle}_{C_{eg}(t')} \label{5.2.2}$

$C_{eg}(\tau)=\langle\delta\omega_{eg}(\tau)\delta\omega_{eg}(0)\rangle \label{5.2.3}$

$g(t)$ is a complex function for which the imaginary components describe nuclear motion modulating or shifting the energy gap, whereas the real part describes the fluctuations and damping that lead to line broadening. When $C_{eg}(\tau)$ takes on an undamped oscillatory form, $C_{eg}(\tau)=De^{i\omega_0\tau}$, as we might expect for coupling of the electronic transition to a nuclear mode with frequency $\omega_0$, we recover the expressions that we originally derived for the electronic absorption lineshape, in which $D$ is the coupling strength and is related to the Franck-Condon factor. Here we are interested in discerning line-broadening mechanisms, and the time scale of random fluctuations that influence the transition energy gap. Summarizing our earlier results, we can express the lineshape functions for energy gap fluctuations in the homogeneous and inhomogeneous limits as follows.

The Homogeneous Limit

The bath fluctuations are infinitely fast, and only characterized by a magnitude:

$C_{eg}(\tau)=\Gamma\delta(\tau) \label{5.2.4}$

In this limit, we obtain the phenomenological damping result

$g(t)=\Gamma t \label{5.2.5}$

which leads to homogeneous Lorentzian lineshapes with width $\Gamma$.

The Inhomogeneous Limit

The bath fluctuations are infinitely slow, and again characterized by a magnitude, but there is no decay of the correlations

$C_{eg}(\tau)=\Delta^2 \label{5.2.6}$

This limit recovers the static limit, and the Gaussian inhomogeneous lineshape, where $\Delta$ is the width of the distribution of frequencies.

$g(t)=\frac{1}{2}\Delta^2t^2 \label{5.2.7}$

The Intermediate Regime

The intermediate regime is when the energy gap fluctuates on the same time scale as the experiment.
The simplest description is the stochastic model, which describes the loss of correlation with a time scale $\tau_c$

$C_{eg}(\tau)=\Delta^2\exp(-\tau/\tau_c) \label{5.2.8}$

which leads to

$g(t)=\Delta^2\tau_c^2\left[\exp(-t/\tau_c)+t/\tau_c-1\right] \label{5.2.9}$

For an arbitrary form of the dynamics of the bath, we can construct $g(t)$ as a sum over independent modes, $g(t)=\sum_ig_i(t)$. Alternatively, for a continuous distribution of modes, we can describe the bath in terms of the spectral density $\rho(\omega)$ that describes the coupled nuclear motions

$\rho(\omega)=\frac{1}{2\pi\omega^2}\operatorname{Im}\left[\tilde C_{eg}(\omega)\right] \label{5.2.10}$

\begin{aligned} g(t) &= \int_{-\infty}^{+\infty}d\omega\frac{1}{2\pi\omega^2}\tilde C_{eg}(\omega)\left[\exp(-i\omega t)+i\omega t-1\right] \\ &= \int_{-\infty}^{+\infty}d\omega\rho(\omega)\left(\coth{\left(\frac{\beta\hbar\omega}{2}\right)}(1-\cos{\omega t})+i(\sin{\omega t}-\omega t)\right) \end{aligned} \label{5.2.11}

To construct an arbitrary form of the bath, the phenomenological Brownian oscillator model allows us to build the bath from a set of damped oscillators indexed by $i$,

\begin{aligned} C_{eg}''(\omega) &=\sum_i\xi_iC_i''(\omega) \\ C_i''(\omega) &= \frac{\hbar}{m_i}\frac{\omega\Gamma_i}{(\omega_i^2-\omega^2)^2+4\omega^2\Gamma_i^2} \end{aligned} \label{5.2.12}

Here $\xi_i$ is the coupling coefficient for oscillator $i$.
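The crossover between the homogeneous and inhomogeneous limits is easy to see numerically. The following sketch (illustrative parameters; the half-maximum search is a rough FWHM estimate) evaluates the absorption lineshape $\sigma(\omega)\propto 2\operatorname{Re}\int_0^\infty dt\, e^{i\omega t}e^{-g(t)}$, with $\omega$ measured relative to $\langle\omega_{eg}\rangle$, for the stochastic model of eq. (5.2.9):

```python
import numpy as np

# Kubo stochastic lineshape: g(t) = (Delta*tau_c)^2 [exp(-t/tau_c) + t/tau_c - 1].
# Delta = 1 sets the frequency scale; the tau_c values are illustrative.
Delta = 1.0
t = np.linspace(0.0, 200.0, 4096)
dt = t[1] - t[0]
w = np.linspace(-4.0, 4.0, 401)
kernel = np.exp(1j * np.outer(w, t))        # e^{i w t} for all (w, t) pairs

for tau_c in (0.1, 1.0, 10.0):              # fast -> slow bath fluctuations
    g = (Delta * tau_c) ** 2 * (np.exp(-t / tau_c) + t / tau_c - 1.0)
    sigma = 2.0 * np.real(kernel @ np.exp(-g)) * dt   # Riemann-sum half FT
    width = w[sigma > 0.5 * sigma.max()]
    print(tau_c, width[-1] - width[0])      # FWHM: narrow Lorentzian for small
                                            # tau_c (motional narrowing), broad
                                            # Gaussian for large tau_c
```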
In a manner that parallels our description of the linear response from a system coupled to a bath, the nonlinear response can also be partitioned into a system, bath, and energy gap Hamiltonian, leading to similar averages over the fluctuations of the energy gap. In the general case, the four correlation functions contributing to the third-order response that emerge from eq. (2.3.3) are

$\begin{array}{l} R_{1}=\sum_{a b c d} p_{a}\left\langle\mu_{a b}\left(\tau_{3}+\tau_{2}+\tau_{1}\right) \mu_{b c}\left(\tau_{2}+\tau_{1}\right) \mu_{c d}\left(\tau_{1}\right) \mu_{d a}(0) F_{a b c d}^{(1)}\right\rangle \\ R_{2}=\sum_{a b c d} p_{a}\left\langle\mu_{a b}\left(\tau_{1}\right) \mu_{b c}\left(\tau_{2}+\tau_{1}\right) \mu_{c d}\left(\tau_{3}+\tau_{2}+\tau_{1}\right) \mu_{d a}(0) F_{a b c d}^{(2)}\right\rangle \\ R_{3}=\sum_{a b c d} p_{a}\left\langle\mu_{d a}(0) \mu_{a b}\left(\tau_{2}+\tau_{1}\right) \mu_{b c}\left(\tau_{3}+\tau_{2}+\tau_{1}\right) \mu_{c d}\left(\tau_{1}\right) F_{a b c d}^{(3)}\right\rangle \\ R_{4}=\sum_{a b c d} p_{a}\left\langle\mu_{d a}(0) \mu_{a b}\left(\tau_{1}\right) \mu_{b c}\left(\tau_{3}+\tau_{2}+\tau_{1}\right) \mu_{c d}\left(\tau_{2}+\tau_{1}\right) F_{a b c d}^{(4)}\right\rangle \end{array} \label{5.3.1}$

Here a, b, c, and d are indices for system eigenstates, and the dephasing functions are

$\begin{array}{l} F_{a b c d}^{(1)}=\exp \left[-i \int_{\tau_{2}+\tau_{1}}^{\tau_{3}+\tau_{2}+\tau_{1}} \omega_{b a}(\tau) d \tau-i \int_{\tau_{1}}^{\tau_{2}+\tau_{1}} \omega_{c a}(\tau) d \tau-i \int_{0}^{\tau_{1}} \omega_{d a}(\tau) d \tau\right] \\ F_{a b c d}^{(2)}=\exp \left[-i \int_{\tau_{2}+\tau_{1}}^{\tau_{3}+\tau_{2}+\tau_{1}} \omega_{d c}(\tau) d \tau-i \int_{\tau_{1}}^{\tau_{2}+\tau_{1}} \omega_{d b}(\tau) d \tau-i \int_{0}^{\tau_{1}} \omega_{d a}(\tau) d \tau\right] \\ F_{a b c d}^{(3)}=\exp \left[-i \int_{\tau_{2}+\tau_{1}}^{\tau_{3}+\tau_{2}+\tau_{1}} \omega_{b c}(\tau) d \tau+i \int_{\tau_{1}}^{\tau_{2}+\tau_{1}} \omega_{c a}(\tau) d \tau+i \int_{0}^{\tau_{1}} \omega_{d a}(\tau) d \tau\right] \\ F_{a b c d}^{(4)}=\exp \left[-i \int_{\tau_{2}+\tau_{1}}^{\tau_{3}+\tau_{2}+\tau_{1}} \omega_{b c}(\tau) d \tau+i \int_{\tau_{1}}^{\tau_{2}+\tau_{1}} \omega_{d b}(\tau) d \tau+i \int_{0}^{\tau_{1}} \omega_{d a}(\tau) d \tau\right] \end{array} \label{5.3.2}$

As before, $\omega_{ab}=H_{ab}/\hbar$. These expressions describe the correlated dynamics of the dipole operator acting between multiple resonant transitions, in which the amplitude, frequency, and orientation of the dipole operator may vary with time.

As a further simplification, let's consider the specific form of the nonlinear response for a fluctuating two-level system. If we allow only for two states e and g, and apply the Condon approximation, eqs. (5.3.1) and (5.3.2) give

$R_1(\tau_1,\tau_2,\tau_3)=p_g|\mu_{eg}|^4e^{i\omega_{eg}(\tau_1+\tau_3)}\left\langle \exp\left(-i\int_0^{\tau_1}d\tau\,\omega_{eg}(\tau)-i\int_{\tau_1+\tau_2}^{\tau_1+\tau_2+\tau_3}d\tau\,\omega_{eg}(\tau)\right)\right\rangle \label{5.3.3}$

$R_2(\tau_1,\tau_2,\tau_3)=p_g|\mu_{eg}|^4e^{-i\omega_{eg}(\tau_1-\tau_3)}\left\langle \exp\left(i\int_0^{\tau_1}d\tau\,\omega_{eg}(\tau)-i\int_{\tau_1+\tau_2}^{\tau_1+\tau_2+\tau_3}d\tau\,\omega_{eg}(\tau)\right)\right\rangle \label{5.3.4}$

These are the rephasing (R2) and non-rephasing (R1) functions, written for a two-level system. These expressions only account for the correlation of fluctuating frequencies while the system evolves during the coherence periods $\tau_1$ and $\tau_3$.
Since they neglect any difference in relaxation on the ground or excited state during the population period $\tau_2$, R2 = R3 and R1 = R4. They also ignore reorientational relaxation of the dipole. In the case that the fluctuations of the two states follow Gaussian statistics, we can also apply the cumulant expansion to the third-order response function. In this case, for a two-level system, the four correlation functions are expressed in terms of the lineshape function as:

\begin{aligned} R_1 &= \left(\frac{i}{\hbar}\right)^3p_g|\mu_{eg}|^4e^{-i\omega_{eg}\tau_1-i\omega_{eg}\tau_3}\, \exp\left[-g^*(\tau_3)-g(\tau_1)-g^*(\tau_2)+g^*(\tau_2+\tau_3)+g(\tau_1+\tau_2)-g(\tau_1+\tau_2+\tau_3)\right] \\ R_2 &= \left(\frac{i}{\hbar}\right)^3p_g|\mu_{eg}|^4e^{i\omega_{eg}\tau_1-i\omega_{eg}\tau_3}\, \exp\left[-g^*(\tau_3)-g^*(\tau_1)+g(\tau_2)-g(\tau_2+\tau_3)-g^*(\tau_1+\tau_2)+g^*(\tau_1+\tau_2+\tau_3)\right] \\ R_3 &= \left(\frac{i}{\hbar}\right)^3p_g|\mu_{eg}|^4e^{i\omega_{eg}\tau_1-i\omega_{eg}\tau_3}\, \exp\left[-g(\tau_3)-g^*(\tau_1)+g^*(\tau_2)-g^*(\tau_2+\tau_3)-g^*(\tau_1+\tau_2)+g^*(\tau_1+\tau_2+\tau_3)\right] \\ R_4 &= \left(\frac{i}{\hbar}\right)^3p_g|\mu_{eg}|^4e^{-i\omega_{eg}\tau_1-i\omega_{eg}\tau_3}\, \exp\left[-g(\tau_3)-g(\tau_1)-g(\tau_2)+g(\tau_2+\tau_3)+g(\tau_1+\tau_2)-g(\tau_1+\tau_2+\tau_3)\right] \end{aligned} \label{5.3.5}

These expressions provide the most direct way of accounting for fluctuations or periodic modulation of the spectroscopic energy gap in nonlinear spectroscopies.

Example $1$: Two-Pulse Photon Echo

For the two-pulse photon echo experiment on a system with inhomogeneous broadening:

• Set $g(t)=\Gamma_{eg}t+\frac{1}{2}\Delta^2t^2$. For this simple model $g(t)$ is real.
• Set $\tau_2=0$, giving $R_2=R_3=\left(\frac{i}{\hbar}\right)^3p_g|\mu_{eg}|^4e^{i\omega_{eg}\tau_1-i\omega_{eg}\tau_3} \exp\left[-2g(\tau_3)-2g(\tau_1) + g(\tau_1+\tau_3)\right] \nonumber$
• Substituting $g(t)$ into this expression gives the same result as before. $R^{(3)}\propto e^{-i\omega_{eg}(\tau_1-\tau_3)}e^{-\Gamma_{eg}(\tau_1+\tau_3)}e^{-(\tau_1-\tau_3)^2\Delta^2/2} \label{5.3.7}$

Similar expressions can also be derived for an arbitrary number of eigenstates of the system Hamiltonian.1 In that case, eqs. (5.3.1) become
$\begin{array}{l} R_{1}=\sum_{a b c d} p_{a} \mu_{a b} \mu_{b c} \mu_{c d} \mu_{d a} \exp \left[-i\left\langle\omega_{b a}\right\rangle \tau_{3}-i\left\langle\omega_{c a}\right\rangle \tau_{2}-i\left\langle\omega_{d a}\right\rangle \tau_{1}\right] F_{a b c d}^{(1)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right) \\ R_{2}=\sum_{a b c d} p_{a} \mu_{a b} \mu_{b c} \mu_{c d} \mu_{d a} \exp \left[-i\left\langle\omega_{d c}\right\rangle \tau_{3}-i\left\langle\omega_{d b}\right\rangle \tau_{2}-i\left\langle\omega_{d a}\right\rangle \tau_{1}\right] F_{a b c d}^{(2)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right) \\ R_{3}=\sum_{a b c d} p_{a} \mu_{a b} \mu_{b c} \mu_{c d} \mu_{d a} \exp \left[-i\left\langle\omega_{b c}\right\rangle \tau_{3}+i\left\langle\omega_{c a}\right\rangle \tau_{2}+i\left\langle\omega_{d a}\right\rangle \tau_{1}\right] F_{a b c d}^{(3)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right) \\ R_{4}=\sum_{a b c d} p_{a} \mu_{a b} \mu_{b c} \mu_{c d} \mu_{d a} \exp \left[-i\left\langle\omega_{b c}\right\rangle \tau_{3}+i\left\langle\omega_{d b}\right\rangle \tau_{2}+i\left\langle\omega_{d a}\right\rangle \tau_{1}\right] F_{a b c d}^{(4)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right) \end{array} \label{5.3.8}$

The dephasing functions are written in terms of lineshape functions with a somewhat different form:

\begin{aligned} -\ln \left[F_{a b c d}^{(1)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right)\right]=& h_{b b}\left(\tau_{3}\right)+h_{c c}\left(\tau_{2}\right)+h_{d d}\left(\tau_{1}\right)+h_{b c}^{+}\left(\tau_{3}, \tau_{2}\right) \\ &+h_{c d}^{+}\left(\tau_{3}, \tau_{2}\right)+f_{b d}^{+}\left(\tau_{3}, \tau_{1} ; \tau_{2}\right) \\ -\ln \left[F_{a b c d}^{(2)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right)\right]=&\left[h_{c c}\left(\tau_{3}\right)\right]^{*}+\left[h_{b b}\left(\tau_{2}\right)\right]^{*}+h_{d d}\left(\tau_{1}+\tau_{2}+\tau_{3}\right)+\left[h_{b c}^{+}\left(\tau_{3}, \tau_{2}\right)\right]^{*} \\ &+h_{c d}^{-}\left(\tau_{1}+\tau_{2}+\tau_{3}, \tau_{3}\right)+\left[f_{b d}^{-}\left(\tau_{2}, \tau_{1}+\tau_{2}+\tau_{3} ; \tau_{3}\right)\right]^{*} \\ -\ln \left[F_{a b c d}^{(3)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right)\right]^{*} &=\left[h_{b b}\left(\tau_{3}\right)\right]^{*}+h_{c c}\left(\tau_{2}+\tau_{3}\right)+h_{d d}\left(\tau_{1}\right)+h_{c d}^{+}\left(\tau_{2}+\tau_{3}, \tau_{1}\right) \\ &-f_{b c}^{-}\left(\tau_{3}, \tau_{2}+\tau_{3} ; \tau_{2}\right)-f_{b d}^{+}\left(\tau_{3}, \tau_{1} ; \tau_{2}\right) \\ -\ln \left[F_{a b c d}^{(4)}\left(\tau_{3}, \tau_{2}, \tau_{1}\right)\right]^{*} &=h_{c c}\left(\tau_{3}\right)+h_{d d}\left(\tau_{1}+\tau_{2}\right)+\left[h_{b b}\left(\tau_{2}+\tau_{3}\right)\right]^{*}-h_{b c}^{-}\left(\tau_{3}, \tau_{2}+\tau_{3}\right) \\ &+h_{c d}^{+}\left(\tau_{1}+\tau_{2}, \tau_{3}\right)-f_{b d}^{-}\left(\tau_{1}+\tau_{2}, \tau_{2}+\tau_{3} ; \tau_{3}\right) \end{aligned} \label{5.3.9}

where

\begin{aligned} h_{nm}(\tau) &= \int_0^\tau d\tau_2'\int_0^{\tau_2'}d\tau_1'\,C_{nm}(\tau_2'-\tau_1') \\ h_{nm}^\pm(\tau_2,\tau_1) &= \int_0^{\tau_2}d\tau_2'\int_0^{\tau_1}d\tau_1'\,C_{nm}(\tau_2'\pm\tau_1') \\ f_{nm}^\pm(\tau_2,\tau_1;\tau_3) &= \int_0^{\tau_2}d\tau_2'\int_0^{\tau_1}d\tau_1'\,C_{nm}(\tau_2'\pm\tau_1'+\tau_3) \end{aligned} \label{5.3.10}
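Returning to the two-level result of Example 1, the cumulant expressions are straightforward to evaluate numerically. This sketch (illustrative $\Gamma$ and $\Delta$; $g(t)$ real, as in the example) shows the echo ridge along $\tau_1=\tau_3$:

```python
import numpy as np

# Echo envelope |R2| for g(t) = Gamma*t + 0.5*Delta^2*t^2 with tau2 = 0.
Gamma, Delta = 0.2, 2.0                 # illustrative: Delta >> Gamma
tau1 = np.linspace(0.0, 5.0, 201)
tau3 = np.linspace(0.0, 5.0, 201)
T1, T3 = np.meshgrid(tau1, tau3, indexing="ij")

def g(t):
    return Gamma * t + 0.5 * Delta**2 * t**2

R = np.exp(-2.0 * g(T1) - 2.0 * g(T3) + g(T1 + T3))   # real envelope

# For Delta >> Gamma the maximum along tau3 tracks tau3 = tau1 (the echo):
i = np.argmin(np.abs(tau1 - 2.0))       # fix tau1 = 2
print(tau3[np.argmax(R[i])])            # ~ 2, i.e. rephasing at tau3 = tau1
```

Working through the exponent by hand reproduces eq. (5.3.7): the quadratic terms combine to $-\Delta^2(\tau_1-\tau_3)^2/2$ and the linear terms to $-\Gamma_{eg}(\tau_1+\tau_3)$.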
The rephasing ability of the photon echo experiment provides a way of characterizing memory of the energy gap transition frequency initially excited by the first pulse. For a static inhomogeneous lineshape, perfect memory of transition frequencies is retained through the experiment, whereas homogeneous broadening implies extremely rapid dephasing. So, let's first examine the polarization for a two-pulse photon echo experiment on a system with homogeneous and inhomogeneous broadening by varying $\Delta/\Gamma_{eg}$, plotting the polarization as proportional to the response in eq. (5.3.7). We see that following the third pulse, the polarization (red line) is damped during $\tau_3$ through homogeneous dephasing at a rate $\Gamma_{eg}$, regardless of $\Delta$. However, in the inhomogeneous case $\Delta\gg\Gamma_{eg}$, any inhomogeneity is rephased at $\tau_1=\tau_3$. The shape of this echo is a Gaussian with width $\sim 1/\Delta$. The shape of the echo polarization is a competition between the homogeneous damping and the inhomogeneous rephasing.

Normally, one detects the integrated intensity of the radiated echo field. Setting the pulse delay $\tau_1=\tau$,

$S(\tau)\propto\int_0^{\infty}d\tau_3|P^{(3)}(\tau,\tau_3)|^2 \label{5.4.1}$

$S(\tau)=\exp\left(-4\Gamma_{eg}\tau+\frac{\Gamma_{eg}^2}{\Delta^2}\right)\cdot \operatorname{erfc}\left(-\Delta\tau+\frac{\Gamma_{eg}}{\Delta}\right) \label{5.4.2}$

where $\operatorname{erfc}(x)=1-\operatorname{erf}(x)$ is the complementary error function. For the homogeneous and inhomogeneous limits of this expression we find

$\Delta \ll \Gamma_{eg} \Rightarrow S(\tau)\propto e^{-2\Gamma_{eg}\tau} \label{5.4.3}$

$\Delta \gg \Gamma_{eg} \Rightarrow S(\tau)\propto e^{-4\Gamma_{eg}\tau} \label{5.4.4}$

In either limit, the inhomogeneity is removed from the measured decay. In the intermediate case, we observe that the leading term in eq. (5.4.2) decays whereas the second term rises with time. This reflects the competition between homogeneous damping and inhomogeneous rephasing. As a result, for the intermediate case $(\Delta \approx \Gamma_{eg})$ we find that the integrated signal $S(\tau)$ has a maximum for $\tau\gt 0$. The delay of maximum signal, $\tau^*$, is known as the peak shift. The observation of a peak shift is an indication of an imperfect ability to rephase. Homogeneous dephasing, i.e., fluctuations fast on the time scale of $\tau$, acts to scramble memory of the phase of the coherence initially created by the first pulse. In the same way, spectral diffusion (processes which randomly modulate the energy gap on time scales equal to or longer than $\tau$) randomizes phase and destroys the ability for an echo to form by rephasing.

To characterize these processes through an energy gap correlation function, we can perform a three-pulse photon echo experiment. The three-pulse experiment introduces a waiting time $\tau_2$ between the two coherence periods, which acts to define a variable shutter speed for the experiment. The system evolves as a population during this period, and therefore there is nominally no phase acquired. We can illustrate this through a lens analogy: for an inhomogeneous distribution of oscillators with different frequencies, we define the phase acquired during a time period through $e^{i\phi}=e^{i(\delta\omega_i t)}$. Since we are in a population state during $\tau_2$, there is no evolution of phase. Now to this picture we can add spectral diffusion as a slower random modulation of the phase acquired during all time periods.
If the system can spectrally diffuse during $\tau_2$, this degrades the ability of the system to rephase and echo formation is diminished. Since spectral diffusion destroys the rephasing, the system appears more and more "homogeneous" as $\tau_2$ is incremented. Experimentally, one observes how the peak shift of the integrated echo changes with the waiting time $\tau_2$; it will be observed to shift toward $\tau^*=0$ as $\tau_2$ grows. In fact, one can show that the peak shift decays with $\tau_2$ with a form given by the correlation function for system-bath interactions:

$\tau^*(\tau_2)\propto C_{eg}(\tau_2) \label{5.4.5}$

Using the lineshape function for the stochastic model, $g(t)=\Delta^2\tau_c^2\left[\exp(-t/\tau_c)+t/\tau_c-1\right]$, you can see that for times $\tau_2\gt\tau_c$,

$\tau^*(\tau_2)\propto \exp(-\tau_2/\tau_c)\propto \left\langle\delta\omega_{eg}(\tau_2)\delta\omega_{eg}(0)\right\rangle \label{5.4.6}$

Thus echo peak shift measurements are a general method to determine the form of $C_{eg}(\tau)$, or equivalently $C_{eg}''(\omega)$ or $\rho(\omega)$. The measurement time scale is limited only by the population lifetime.
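A short numerical sketch (illustrative parameters; scipy is assumed to be available) evaluates eq. (5.4.2) and locates the peak shift:

```python
import numpy as np
from scipy.special import erfc

# Integrated two-pulse echo, eq. (5.4.2), for increasing inhomogeneity.
Gamma = 1.0
tau = np.linspace(0.0, 3.0, 3001)

for Delta in (0.2, 1.0, 5.0):
    S = np.exp(-4.0 * Gamma * tau + Gamma**2 / Delta**2) \
        * erfc(-Delta * tau + Gamma / Delta)
    # tau* moves off zero once Delta sufficiently exceeds Gamma
    print(Delta, tau[np.argmax(S)])
```

The same loop run over an exponentially decaying $\Delta(\tau_2)$, mimicking spectral diffusion, shows $\tau^*$ collapsing toward zero with waiting time, which is the content of eq. (5.4.5).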
What is two-dimensional spectroscopy? It is a method that characterizes the underlying correlations between two spectral features. Our examination of pump-probe experiments indicates that the third-order response reports on the correlation between different spectral features. Let's look at this in more detail using a system with two excited states as an example, for which the absorption spectrum shows two spectral features at $\omega_{ba}$ and $\omega_{ca}$. Imagine a double resonance (pump-probe) experiment in which we choose a tunable excitation frequency $\omega_{pump}$, and for each pump frequency we measure changes in the absorption spectrum as a function of $\omega_{probe}$. Generally speaking, we expect resonant excitation to induce a change of absorbance. The question is: what do we observe if we pump at $\omega_{ba}$ and probe at $\omega_{ca}$? If nothing happens, then we can conclude that microscopically, there is no interaction between the degrees of freedom that give rise to the ba and ca transitions. However, a change of absorbance at $\omega_{ca}$ indicates that in some manner the excitation of $\omega_{ba}$ is correlated with $\omega_{ca}$. Microscopically, there is a coupling or chemical conversion that allows deposited energy to flow between the coordinates. Alternatively, we can say that the observed transitions occur between eigenstates whose character and energy encode the molecular interactions between the coupled degrees of freedom (here labeled $b$ and $c$).

Now imagine that you perform this double resonance experiment measuring the change in absorption for all possible values of $\omega_{pump}$ and $\omega_{probe}$, and plot these as a two-dimensional contour plot.1 This is a two-dimensional spectrum that reports on the correlation of spectral features observed in the absorption spectrum. Diagonal peaks reflect the case where the same resonance is pumped and probed. Cross peaks indicate a cross-correlation that arises from pumping one feature and observing a change in the other. The principles of correlation spectroscopy in this form were initially developed in the area of magnetic resonance, but are finding increasing use in the areas of optical and infrared spectroscopy.

Double resonance analogies such as these illustrate the power of a two-dimensional spectrum to visualize the molecular interactions in a complex system with many degrees of freedom. Similarly, we can see how a 2D spectrum can separate components of a mixture through the presence or absence of cross peaks. Also, it becomes clear how an inhomogeneous lineshape can be decomposed into the distribution of configurations, and the underlying dynamics within the ensemble. Take an inhomogeneous lineshape with width $\Delta$ and mean frequency $\langle\omega_{ab}\rangle$, which is composed of a distribution of homogeneous transitions of width $\Gamma$. We will now subject the system to the same narrow band excitation followed by probing the differential absorption $\Delta A$ at all probe frequencies. Here we observe that the contours of the two-dimensional lineshape report on the inhomogeneous broadening: the lineshape is elongated along the diagonal axis $(\omega_1=\omega_3)$. The diagonal linewidth is related to the inhomogeneous width $\Delta$, whereas the antidiagonal width $\left[(\omega_1+\omega_3)/2=\langle\omega_{ab}\rangle\right]$ is determined by the homogeneous linewidth $\Gamma$.
1. Here we use the right-hand rule convention for the frequency axes, in which the pump or excitation frequency is on the horizontal axis and the probe or detection frequency is on the vertical axis. Different conventions are in use, which does lead to confusion. We note that the first presentations of two-dimensional spectra in the case of 2D Raman and 2D IR spectra used the RHR convention, whereas the first 2D NMR and 2D electronic measurements used the LHR convention.

5.02: 2D Spectroscopy from Third Order Response

These examples indicate that narrow band pump-probe experiments can be used to construct 2D spectra, so in fact the third-order nonlinear response should describe 2D spectra. To describe these spectra, we can think of the excitation as a third-order process arising from a sequence of interactions with the system eigenstates. For instance, taking our initial example with three levels, one of the contributing factors is of the form R2. Setting $\tau_2=0$ and neglecting damping, the response function is

$R_2(\tau_1,\tau_3)=p_a|\mu_{ab}|^2|\mu_{ac}|^2e^{-i\omega_{ba}\tau_1-i\omega_{ca}\tau_3} \label{7.1}$

The time domain behavior describes the evolution from one coherent state to another, driven by the light fields. A more intuitive description is in the frequency domain, which we obtain by Fourier transforming eq. (7.1):

\begin{aligned} \tilde R_2(\omega_1,\omega_3) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{i\omega_1\tau_1+i\omega_3\tau_3}R_2(\tau_1,\tau_3)d\tau_1d\tau_3 \\ &=p_a|\mu_{ab}|^2|\mu_{ac}|^2\left\langle\delta(\omega_3-\omega_{ca})\delta(\omega_1-\omega_{ba})\right\rangle \\ &\equiv p_a|\mu_{ab}|^2|\mu_{ac}|^2P(\omega_3,\tau_2;\omega_1) \end{aligned} \label{7.2}

The function P looks just like the covariance $\langle xy \rangle$ that describes the correlation of two variables x and y. In fact, P is a joint probability function that describes the probability of exciting the system at $\omega_{ba}$ and observing the system at $\omega_{ca}$ (after waiting a time $\tau_2$). In particular, this diagram describes the cross peak in the upper left of the initial example we discussed.

5.03: Fourier Transform Spectroscopy

The last example underscores the close relationship between time and frequency domain representations of the data. Similar information to the frequency-domain double resonance experiment is obtained by Fourier transformation of the coherent evolution periods in a time domain experiment with short broadband pulses. In practice, the use of Fourier transforms requires a phase-sensitive measurement of the radiated signal field, rather than the intensity measured by photodetectors. This can be obtained by beating the signal against a reference pulse (or local oscillator) on a photodetector. If we measure the cross term between a weak signal and strong local oscillator:

\begin{aligned} \delta I_{LO}(\tau_{LO}) &= |E_{sig}+E_{LO}|^2-|E_{LO}|^2 \\ &\approx 2\operatorname{Re}\int_{-\infty}^{+\infty}d\tau_3\, E_{sig}(\tau_3)E_{LO}(\tau_3-\tau_{LO}) \end{aligned} \label{8.1}

For a short pulse $E_{LO}$, $\delta I(\tau_{LO})\propto E_{sig}(\tau_{LO})$. By acquiring the signal as a function of $\tau_1$ and $\tau_{LO}$ we can obtain the time domain signal and numerically Fourier transform to obtain a 2D spectrum. Alternatively, we can perform these operations in reverse order, using a grating or other dispersive optic to spatially disperse the frequency components of the signal. This is in essence an analog Fourier transform.
The interference between the spatially dispersed Fourier components of the signal and LO is subsequently detected:

$\delta I(\omega_3)=|E_{LO}(\omega_3)+E_{sig}(\omega_3)|^2-|E_{LO}(\omega_3)|^2 \nonumber$
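To illustrate the numerical route, a minimal 2D spectrum can be generated by double Fourier transform of the model response of eq. (7.1). This is a sketch with illustrative frequencies and damping; the $e^{+i\omega t}$ convention of eq. (7.2) corresponds to numpy's inverse transform:

```python
import numpy as np

# Model R2 response (eq. 7.1 with tau2 = 0) plus phenomenological damping.
w_ba, w_ca, Gamma = 8.0, 12.0, 0.5      # illustrative parameters
t = np.linspace(0.0, 20.0, 512)
T1, T3 = np.meshgrid(t, t, indexing="ij")
R2 = np.exp(-1j * (w_ba * T1 + w_ca * T3) - Gamma * (T1 + T3))

# Double FT over tau1 and tau3 with the e^{+i w t} convention:
spec = np.abs(np.fft.fftshift(np.fft.ifft2(R2)))
w = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0])) * 2.0 * np.pi
i, j = np.unravel_index(spec.argmax(), spec.shape)
print(w[i], w[j])                       # cross peak near (w1, w3) = (8, 12)
```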
One of the unique characteristics of 2D spectroscopy is the ability to characterize molecular couplings.1 This allows one to understand microscopic relationships between different objects, and with knowledge of the interaction mechanism, determine the structure or reveal the dynamics of the system. To understand how 2D spectra report on molecular interactions, we will discuss the spectroscopy using a model for two coupled electronic or vibrational degrees of freedom. Since the 2D spectrum reports on the eigenstates of the coupled system, understanding the coupling between microscopic states requires a model for the eigenstates in the basis of the interacting coordinates of interest. Traditional linear spectroscopy does not provide enough constraints to uniquely determine these variables, but 2D spectroscopy provides this information through a characterization of two-quantum eigenstates. Since it takes less energy to excite one coordinate if a coupled coordinate already has energy in it, a characterization of the energy of the combination mode with one quantum of excitation in each coordinate provides a route to obtaining the coupling. This principle lies behind the use of overtone and combination band molecular spectroscopy to unravel anharmonic couplings.

The language for the different variables of the Hamiltonian of two coupled coordinates varies considerably by discipline. A variety of terms in use are grouped below; we will use the first (header) term in each grouping.

• System Hamiltonian $H_S$: local mode Hamiltonian, exciton Hamiltonian, Frenkel exciton Hamiltonian, coupled oscillators
• Local or site basis (i, j): sites, local modes, oscillators, chromophores
• Eigenbasis (a, b): eigenstates, exciton states, delocalized states
• One-quantum eigenstates: fundamental (v = 0-1), one-exciton states, exciton band
• Two-quantum eigenstates: combination mode or band, overtone, doubly excited states, biexciton, two-exciton states

The model for two coupled coordinates can take many forms. We will pay particular attention to a Hamiltonian that describes the coupling between two local vibrational modes i and j coupled through a bilinear interaction of strength J:

\begin{aligned} H_{vib} &= H_i+H_j+V_{i,j} \\ &=\frac{p_i^2}{2m_i}+V(q_i)+\frac{p_j^2}{2m_j}+V(q_j)+Jq_iq_j \end{aligned} \label{9.1}

An alternate form cast in the ladder operators for vibrational or electronic states is the Frenkel exciton Hamiltonian

$H_{vib,harmonic}\approx\hbar\omega_i\left(a_i^\dagger a_i\right)+\hbar\omega_j\left(a_j^\dagger a_j\right)+J\left(a_i^\dagger a_j+a_i a_j^\dagger\right) \label{9.2}$

$H_{elec}=E_ia_i^\dagger a_i+E_ja_j^\dagger a_j+\left(J_{ij}a_i^\dagger a_j+c.c\right) \label{9.3}$

The bilinear interaction is the simplest form by which the energy of one state depends on the other. One can think of it as the leading term in the expansion of the coupling between the two local states. Higher-order expansion terms are used in another common form, the cubic anharmonic coupling between normal modes of vibration

$H_{vib}=\left(\frac{p_i^2}{2m_i}+\frac{1}{2}k_iq_i^2+\frac{1}{6}g_{iii}q_i^3\right)+\left(\frac{p_j^2}{2m_j}+\frac{1}{2}k_jq_j^2+\frac{1}{6}g_{jjj}q_j^3\right)+\left(\frac{1}{2}g_{iij}q_i^2q_j+\frac{1}{2}g_{ijj}q_iq_j^2\right) \label{9.4}$

In the case of eq. (9.2), the eigenstates and energy eigenvalues for the one-quantum states are obtained by diagonalizing the 2x2 matrix

$H_S^{(1)}=\begin{pmatrix} E_{i=1} & J \\ J & E_{j=1} \end{pmatrix} \label{9.5}$

$E_{i=1}$ and $E_{j=1}$ are the one-quantum energies for the local modes $q_i$ and $q_j$.
These give the system energy eigenvalues

$E_{a/b}=\bar E \pm\left(\Delta E^2+J^2\right)^{1/2} \label{9.6}$

$\bar E=\frac{1}{2}\left(E_{i=1}+E_{j=1}\right) \qquad \Delta E=\frac{1}{2}\left(E_{i=1}-E_{j=1}\right) \label{9.7}$

$E_a$ and $E_b$ can be observed in the linear spectrum, but are not sufficient to unravel the three variables (site energies $E_i$, $E_j$ and coupling J) relevant to the Hamiltonian; more information is needed. For the purposes of 2D spectroscopy, the coupling is encoded in the two-quantum eigenstates. Since it takes less energy to excite a vibration $|i\rangle$ if a coupled mode $|j\rangle$ already has energy, we can characterize the strength of interaction from the system eigenstates by determining the energy of the combination mode $E_{ab}$ relative to the sum of the fundamentals:

$\Delta_{ab}=E_a+E_b-E_{ab} \label{9.8}$

In essence, with a characterization of $E_{ab},E_a,E_b$ one has three variables that constrain $E_i,E_j,J$. The relationship between $\Delta_{ab}$ and J depends on the model. Working specifically with the vibrational Hamiltonian eq. (9.1), there are three two-quantum states that must be considered. Expressed as product states in the two local modes these are $|i,j\rangle = |20\rangle, |02\rangle,$ and $|11\rangle$. The two-quantum energy eigenvalues of the system are obtained by diagonalizing the 3x3 matrix

$H_S^{(2)}=\begin{pmatrix} E_{i=2} & 0 & \sqrt{2}J \\ 0 & E_{j=2} & \sqrt{2}J \\ \sqrt{2}J & \sqrt{2}J & E_{i=1}+E_{j=1} \end{pmatrix} \label{9.9}$

Here $E_{i=2}$ and $E_{j=2}$ are the two-quantum energies for the local modes $q_i$ and $q_j$. These are commonly expressed in terms of $\delta E_i$, the anharmonic shift of the i = 1-2 energy gap relative to the i = 0-1 one-quantum energy:

\begin{aligned} \delta E_i &= \left(E_{i=1}-E_{i=0}\right)-\left(E_{i=2}-E_{i=1}\right) \\ \delta\omega_i &= \omega_{10}^i-\omega_{21}^i \end{aligned} \label{9.10}

Although there are analytical solutions to eq. (9.9), it is more informative to examine solutions in two limits. In the strong coupling limit ($J\gg\Delta E$), one finds

$\Delta_{ab}=J \label{9.11}$

For vibrations with the same anharmonicity $\delta E$ with weak coupling between them ($J\ll\Delta E$), perturbation theory yields

$\Delta_{a b}=\delta E \frac{J^{2}}{\Delta E^{2}} \label{9.12}$

This result is similar to the perturbative solution for weakly coupled oscillators of the form given by eq. (9.4)

$\Delta_{a b}=g_{i i j}^{2}\left(\frac{4 E_{i}}{E_{j}^{2}-4 E_{i}^{2}}\right)+g_{i j j}^{2}\left(\frac{4 E_{j}}{E_{i}^{2}-4 E_{j}^{2}}\right) \label{9.13}$
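These limiting results are easy to check by direct diagonalization. The following sketch (site energies, coupling, and anharmonic shift are illustrative; it assumes a common local anharmonicity so that $E_{i=2}=2E_{i=1}-\delta E$) builds the one- and two-quantum blocks of eqs. (9.5) and (9.9) and evaluates $\Delta_{ab}$ from eq. (9.8):

```python
import numpy as np

# Illustrative weak-coupling parameters (cm^-1): site energies, bilinear
# coupling J, and a common local anharmonic shift dE (E_{i=2} = 2E_i - dE).
Ei, Ej, J, dE = 2050.0, 2100.0, 10.0, 20.0

H1 = np.array([[Ei, J],
               [J, Ej]])
E1 = np.linalg.eigvalsh(H1)              # one-quantum energies Ea, Eb

c = np.sqrt(2.0) * J
H2 = np.array([[2 * Ei - dE, 0.0, c],
               [0.0, 2 * Ej - dE, c],
               [c, c, Ei + Ej]])
E2, V = np.linalg.eigh(H2)               # two-quantum manifold

k = np.argmax(np.abs(V[2]))              # eigenstate with largest |11> weight
print(E1.sum() - E2[k])                  # Delta_ab ~ 3.8 cm^-1 here; the
                                         # weak-coupling estimate of eq. (9.12)
                                         # gives dE*J^2/DE^2 = 3.2 cm^-1
```

The small discrepancy between the numerical and perturbative values reflects the higher-order terms dropped in eq. (9.12).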
Example $1$: $\ce{Rh(CO)2(acac)}$

So, how do these variables present themselves in 2D spectra? Here it is helpful to use a specific example: the strongly coupled carbonyl vibrations of $\ce{Rh(CO)2(acac)}$, or RDC. For the purpose of 2D spectroscopy with infrared fields resonant with the carbonyl transitions, there are six quantum states (counting the ground state) that must be considered. Coupling between the two degenerate CO stretches leads to symmetric and antisymmetric one-quantum eigenstates, which are more commonly referred to by their normal mode designations: the symmetric and asymmetric stretching vibrations. For n = 2 coupled vibrations, there are n(n+1)/2 = 3 two-quantum eigenstates. In the normal mode designation, these are the first overtones of the symmetric and asymmetric modes and the combination band.

This leads to a six-level system for the system eigenstates, which we designate by the number of quanta in the symmetric and asymmetric stretch: $|00\rangle$, $|s\rangle$ = $|10\rangle$, $|a\rangle$ = $|01\rangle$, $|2s\rangle$ = $|20\rangle$, $|2a\rangle$ = $|02\rangle$, and $|sa\rangle$ = $|11\rangle$. For a model electronic system, there are four essential levels that need to be considered, since Fermi statistics does not allow two electrons in the same state: $|00\rangle$, $|10\rangle$, $|01\rangle$, and $|11\rangle$.

We now calculate the nonlinear third-order response for this six-level system, assuming that all of the population is initially in the ground state. To describe a double-resonance or Fourier transform 2D correlation spectrum in the variables $\omega_1$ and $\omega_3$, we include all terms relevant to pump-probe experiments: $-k_1 +k_2 +k_3$ ($S_{I}$, rephasing) and $k_1 - k_2 +k_3$ ($S_{II}$, non-rephasing). After summing over many interaction permutations using the phenomenological propagator, keeping only dipole-allowed transitions with ±1 quantum, we find that we expect eight resonances in a 2D spectrum. For the case of the rephasing spectrum $S_I$:

\begin{aligned} S_I(\omega_1,\omega_3) &= \frac{2|\mu_{s,0}|^4}{\left[i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{s,0})+\Gamma_{s,0}\right]} + \frac{2|\mu_{a,0}|^4}{\left[i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{a,0})+\Gamma_{a,0}\right]} \\ &+ \frac{2|\mu_{a,0}|^2|\mu_{s,0}|^2}{\left[i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{a,0})+\Gamma_{a,0}\right]} + \frac{2|\mu_{a,0}|^2|\mu_{s,0}|^2}{\left[i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{s,0})+\Gamma_{s,0}\right]} \\ &- \frac{|\mu_{s,0}|^2|\mu_{2s,s}|^2}{\left[i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{s,0}+\Delta_{s})+\Gamma_{2s,s}\right]} - \frac{|\mu_{a,0}|^2|\mu_{2a,a}|^2}{\left[i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{a,0}+\Delta_{a})+\Gamma_{2a,a}\right]} \\ &-\frac{|\mu_{s,0}|^2|\mu_{as,s}|^2+\mu_{0,s}\mu_{a,0}\mu_{as,a}\mu_{s,as}}{\left[i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{a,0}+\Delta_{as})+\Gamma_{as,s}\right]} - \frac{|\mu_{a,0}|^2|\mu_{as,a}|^2+\mu_{0,a}\mu_{s,0}\mu_{as,s}\mu_{a,as}}{\left[i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{s,0}+\Delta_{as})+\Gamma_{as,a}\right]} \\ &\equiv {\bf 1+1'+2+2'+3+3'+4+4'} \end{aligned} \label{9.14}

To discuss these peaks we examine how they appear in the experimental Fourier transform 2D IR spectrum of RDC, plotted both in differential absorption mode and in absolute value mode. We note that there are eight peaks, labeled according to the terms in eq. (9.14) from which they arise. Each peak specifies a sequence of interactions with the system eigenstates, with excitation at a particular $\omega_1$ and detection at a given $\omega_3$. Notice that in the excitation dimension $\omega_1$ all of the peaks lie on one of the fundamental frequencies. Along the detection axis $\omega_3$, resonances are seen at all six one-quantum transitions present in our system. More precisely, there are four features: two diagonal and two cross peaks, each of which is split into a pair. The positive diagonal and cross peak features represent evolution on the fundamental transitions, while the split negative features arise from propagation in the two-quantum manifold.
The diagonal peaks represent a sequence of interactions with the field that leaves the coherence on the same transition during both periods, whereas the split peak represents promotion from the fundamental to the overtone during detection. The overtone is anharmonically shifted, and therefore the splitting between the peaks, $\Delta_a,\Delta_s$, gives the diagonal anharmonicity. The cross peaks arise from the transfer of excitation from one fundamental to the other, while the shifted peak represents promotion to the combination band for detection. The combination band is shifted in frequency due to coupling between the two modes, and therefore the splitting between the peaks in the off-diagonal features, $\Delta_{as}$, gives the off-diagonal anharmonicity.

Notice for each split pair of peaks that, in the limit that the anharmonicity vanishes, the two peaks in each feature would overlap. Given that they have opposite sign, the peaks would destructively interfere and vanish for a harmonic system. This is a manifestation of the rule that a nonlinear response vanishes for a harmonic system. So, in fact, a 2D spectrum will have signatures of whatever types of vibrational interactions lead to imperfect interference between these two contributions. Nonlinearity of the transition dipole moment will lead to imperfect cancellation of the peaks at the amplitude level, and nonlinear coupling with a bath will lead to different lineshapes for the two features.

With an assignment of the peaks in the spectrum, one has mapped out the energies of the one- and two-quantum system eigenstates. These eigenvalues act to constrain any model that will be used to interpret the system. One can now evaluate how models for the coupled vibrations match the data. For instance, when fitting the RDC spectrum to the Hamiltonian in eq. (9.1) for two coupled anharmonic local modes with a potential of the form $V\left(q_{i}\right)=\frac{1}{2} k_{i} q_{i}^{2}+\frac{1}{6} g_{i i i} q_{i}^{3}$, we obtain $\hbar \omega_{10}^{i}=\hbar \omega_{10}^{j}=2074 \mathrm{~cm}^{-1}$, $J_{i j}=35 \mathrm{~cm}^{-1}$, and $g_{iii} = g_{jjj} = 172 \mathrm{~cm}^{-1}$. Alternatively, we can describe the spectrum through eq. (9.4) as symmetric and asymmetric normal modes with diagonal and off-diagonal anharmonicity. This leads to $\hbar \omega_{10}^{a}=2038 \mathrm{~cm}^{-1}$, $\hbar \omega_{10}^{s}=2108 \mathrm{~cm}^{-1}$, $g_{a a a}=g_{s s s}=32 \mathrm{~cm}^{-1}$, and $g_{s s a}=g_{a a s}=22 \mathrm{~cm}^{-1}$. Provided that one knows the origin of the coupling and its spatial or angular dependence, one can use these parameters to obtain a structure.

5.05: Two-dimensional spectroscopy to characterize spectral diffusion

A more intuitive, albeit difficult, approach to characterizing spectral diffusion is with a two-dimensional correlation technique. Returning to our example of a double resonance experiment, let's describe the response from an inhomogeneous lineshape with width $\Delta$ and mean frequency $\langle\omega_{ab}\rangle$, which is composed of a distribution of homogeneous transitions of width $\Gamma$. We will now subject the system to excitation by a narrow band pump field, probe the differential absorption $\Delta A$ at all probe frequencies, and then repeat this for all pump frequencies. In constructing a two-dimensional representation of this correlation spectrum, we observe that the lineshape is elongated along the diagonal axis $(\omega_1=\omega_3)$.
The diagonal linewidth is related to the inhomogeneous width $\Delta$, whereas the antidiagonal width $\left[(\omega_1+\omega_3)/2=\langle\omega_{ab}\rangle\right]$ is determined by the homogeneous linewidth $\Gamma$. For the system exhibiting spectral diffusion, we recognize that we can introduce a waiting time $\tau_2$ between excitation and detection, which provides a controlled period over which the system can evolve. One can see that when $\tau_2$ varies from much less to much greater than the correlation time $\tau_c$, the lineshape will gradually become symmetric. This reflects the fact that at long times the system excited at any one frequency can be observed at any other with equilibrium probability. That is, the correlation between excitation and detection frequencies vanishes:

$\begin{array}{l} \sum_{i j}\left\langle\delta\left(\omega_{1}-\omega_{e g}^{(i)}\right) \delta\left(\omega_{3}-\omega_{e g}^{(j)}\right)\right\rangle \\ \quad \rightarrow \sum_{i j}\left\langle\delta\left(\omega_{1}-\omega_{e g}^{(i)}\right)\right\rangle\left\langle\delta\left(\omega_{3}-\omega_{e g}^{(j)}\right)\right\rangle \end{array} \label{10.1}$

To characterize the energy gap correlation function, we choose a metric that describes the change as a function of $\tau_2$. For instance, the ellipticity

$E\left(\tau_{2}\right)=\frac{a^{2}-b^{2}}{a^{2}+b^{2}} \label{10.2}$

is directly proportional to $C_{eg}(\tau_2)$.

The photon echo experiment is the time domain version of this double-resonance or hole burning experiment. If we examine $R_2$ in the inhomogeneous and homogeneous limits, we can plot the polarization envelope as a function of $\tau_1$ and $\tau_3$. In the inhomogeneous limit, an echo ridge decaying as $e^{-\Gamma t}$ extends along $\tau_1=\tau_3$. It decays with the inhomogeneous distribution in the perpendicular direction. In the homogeneous limit, the response is symmetric in the two time variables. Fourier transformation allows these envelopes to be expressed as the lineshapes above. Here again $\tau_2$ is a control variable that allows us to characterize $C_{eg}(\tau)$ through the change in echo profile or lineshape.

5.06: Appendix - Third Order Diagrams Corresponding to Peaks in a 2D Spectrum of C

*Diagrams that do not contribute to double-resonance experiments, but do contribute to Fourier-transform measurements. Rephasing diagrams correspond to the terms in eq. (9.14). Using a phenomenological propagator, the $S_{II}$ non-rephasing diagrams lead to the following expressions for the eight peaks in the 2D spectrum:
\begin{aligned} S_{II}(\omega_1,\omega_3) &= \frac{2|\mu_{s,0}|^4+|\mu_{a,0}|^2|\mu_{s,0}|^2}{\left[-i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{s,0})+\Gamma_{s,0}\right]} + \frac{2|\mu_{a,0}|^4+|\mu_{a,0}|^2|\mu_{s,0}|^2}{\left[-i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{a,0})+\Gamma_{a,0}\right]} \\ &+ \frac{|\mu_{a,0}|^2|\mu_{s,0}|^2}{\left[-i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{a,0})+\Gamma_{a,0}\right]} + \frac{|\mu_{a,0}|^2|\mu_{s,0}|^2}{\left[-i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{s,0})+\Gamma_{s,0}\right]} \\ &- \frac{|\mu_{s,0}|^2|\mu_{2s,s}|^2+\mu_{s,0}\mu_{0,a}\mu_{as,s}\mu_{a,as}}{\left[-i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{s,0}+\Delta_{s})+\Gamma_{2s,s}\right]} - \frac{|\mu_{a,0}|^2|\mu_{2a,a}|^2+\mu_{a,0}\mu_{0,s}\mu_{as,a}\mu_{s,as}}{\left[-i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{a,0}+\Delta_{a})+\Gamma_{2a,a}\right]} \\ &-\frac{|\mu_{s,0}|^2|\mu_{as,s}|^2}{\left[-i(\omega_1+\omega_{s,0})+\Gamma_{s,0}\right]\left[i(\omega_3-\omega_{a,0}+\Delta_{as})+\Gamma_{as,s}\right]} - \frac{|\mu_{a,0}|^2|\mu_{as,a}|^2}{\left[-i(\omega_1+\omega_{a,0})+\Gamma_{a,0}\right]\left[i(\omega_3-\omega_{s,0}+\Delta_{as})+\Gamma_{as,a}\right]} \\ &\equiv {\bf 1+1'+2+2'+3+3'+4+4'} \end{aligned} \label{9.1.1}
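As a closing numerical note on the ellipticity metric of section 5.05 (a sketch that assumes a joint-Gaussian 2D lineshape and a stochastic frequency correlation; all parameters are illustrative), eq. (10.2) directly tracks the decay of the frequency correlation with waiting time:

```python
import numpy as np

# Joint-Gaussian 2D lineshape whose pump/probe frequency correlation decays
# as c(tau2) = exp(-tau2/tau_c); Delta sets the inhomogeneous width.
Delta, tau_c = 1.0, 1.0
w = np.linspace(-3.0, 3.0, 201)
W1, W3 = np.meshgrid(w, w, indexing="ij")

for tau2 in (0.1, 1.0, 10.0):
    c = np.exp(-tau2 / tau_c)               # frequency correlation
    a2 = Delta**2 * (1.0 + c)               # diagonal variance (axis a)
    b2 = Delta**2 * (1.0 - c)               # antidiagonal variance (axis b)
    # S is the 2D lineshape one would contour-plot at this waiting time:
    S = np.exp(-(W1 + W3) ** 2 / (4.0 * a2) - (W1 - W3) ** 2 / (4.0 * b2))
    E = (a2 - b2) / (a2 + b2)               # ellipticity, eq. (10.2)
    print(tau2, E)                          # equals c(tau2): E -> 0 as the
                                            # lineshape becomes symmetric
```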
Physical chemistry is concerned with the gray area that lies between physics (the study of energy) and chemistry (the study of matter). As such, physical chemistry is all about how energy can be stored through, extracted from, and used to drive chemical reactions and chemical systems. A major topic that focuses on how energy and matter interact and affect one another is thermodynamics. But before diving into thermodynamics, it is important to set down a few definitions that make it possible to begin slicing up the topic.

• 1.1: The System and the Surroundings — The Zeroth Law of Thermodynamics deals with the temperature of a system. And while it may seem intuitive as to what terms like "temperature" and "system" mean, it is important to define these terms. The easiest terms to define are the ones used to describe the system of interest and the surroundings, both of which are subsets of the universe.
• 1.2: Pressure and Molar Volume — Italian physicist Evangelista Torricelli (1608 – 1647) was the inventor of an ingenious device that could be used to measure air pressure.
• 1.3: Temperature — Another important variable that describes the state of a system is the system's temperature. Like pressure, temperature scales experienced an important process of development over time. Three of the most important temperature scales in US culture are the Fahrenheit, Celsius, and Kelvin scales.
• 1.4: The Zeroth Law of Thermodynamics — How does one use or measure temperature? Fortunately, there is a simple and intuitive relationship which can be used to design a thermometer – a device used to measure temperature and temperature changes. The zeroth law of thermodynamics can be stated as follows: if a system A is in thermal equilibrium with a system B, which is also in thermal equilibrium with system C, then systems A and C share a property called temperature.
• 1.5: Work and Energy — Temperature, pressure, and volume are important variables in the description of physical systems. They will also be important to describe how energy flows from one system to another. Generally, energy can flow in two important forms: 1) work and 2) heat. The bookkeeping needed to track the flow of energy is what the subject of thermodynamics is all about.
• 1.E: The Basics (Exercises) — Exercises for Chapter 1, "The Basics," in Fleming's Physical Chemistry Textmap.
• 1.S: The Basics (Summary) — Summary for Chapter 1, "The Basics," in Fleming's Physical Chemistry Textmap.

Thumbnail: Frying an egg is an example of a chemical change induced by the addition of thermal energy (via heat). Image used with permission (CC BY-SA 3.0; Managementboy).

01: The Basics

The Zeroth Law of Thermodynamics deals with the temperature of a system. And while it may seem intuitive as to what terms like "temperature" and "system" mean, it is important to define these terms. The easiest terms to define are the ones used to describe the system of interest and the surroundings, both of which are subsets of the universe.

• Universe – everything
• System – the subset of the universe that is being studied and/or measured
• Surroundings – every part of the universe that is not the system itself

As it turns out, there can be several types of systems, depending on the nature of the boundary that separates the system from the surroundings, and specifically whether or not it allows the transfer of matter or energy across it.
• Open System – allows for both mass and energy transfer across its boundary
• Closed System – allows for energy transfer across its boundary, but not mass transfer
• Isolated System – allows neither mass nor energy transfer across its boundary
Further, systems can be homogeneous (consisting of only a single phase of matter, with a uniform concentration of every substance present throughout) or heterogeneous (containing multiple phases and/or varying concentrations of the constituents throughout).
A very important variable that describes a system is its composition, which can be specified by the number of moles of each component or the concentration of each component. The number of moles of a substance is given by the ratio of the number of particles to Avogadro’s number
$n =\dfrac{N}{N_A} \nonumber$
where $n$ is the number of moles, $N$ is the number of particles (atoms, molecules, or formula units) and $N_A$ is Avogadro’s number ($N_A = 6.022 \times 10^{23}\, mol^{-1}$).
Other important variables used to describe a system include pressure, temperature, and volume. Still other variables can often be determined if these state variables are known. Oftentimes, knowing a small number of state variables is all that is required to determine all of the other properties of a system. The relationship that allows these other properties to be determined from the values of a few state variables is called an equation of state.
Variables that describe a system can be either intensive (independent of the amount of any given substance present in the system) or extensive (dependent on the amount of substance present in the system). Temperature and color are examples of intensive variables, whereas volume and mass are examples of extensive variables. The value of intensive properties is that they can be conveniently tabulated for various substances, whereas extensive properties would be specific to individual systems. Oftentimes it is the case that the ratio of two extensive variables results in an intensive variable (since the amount of substance cancels out). An example of this is density, which is the ratio of mass and volume. Another example is molar volume ($V_m$), which is the ratio of volume and number of moles of substance. For a given substance, the molar volume is inversely proportional to its density. In a homogeneous system, an intensive variable will describe not just the system as a whole, but also any subset of that system. However, this may not be the case in a heterogeneous system!
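To make this bookkeeping concrete, here is a minimal Python sketch (the particle count, mass, and volume are illustrative values for a sample of water, not data from the text) showing how a mole count follows from a particle count, and how the ratio of two extensive variables (mass and volume) yields the intensive density and molar volume:

```python
# Minimal sketch: composition bookkeeping for a sample of water.
# The numerical values (N, mass, volume) are illustrative assumptions.

N_A = 6.022e23          # Avogadro's number, particles per mole

N = 3.011e24            # number of H2O molecules in the sample (assumed)
n = N / N_A             # moles: n = N / N_A
print(f"n = {n:.2f} mol")                     # -> 5.00 mol

mass = 90.07            # g   (extensive; 5 mol of H2O at 18.015 g/mol)
volume = 90.3           # cm3 (extensive, assumed near room temperature)

density = mass / volume                       # intensive: g/cm3
molar_volume = volume / n                     # intensive: cm3/mol
print(f"density      = {density:.3f} g/cm3")
print(f"molar volume = {molar_volume:.1f} cm3/mol")

# V_m = M / rho: molar volume is inversely proportional to density.
M = mass / n                                  # molar mass, g/mol
print(f"M/rho        = {M / density:.1f} cm3/mol")  # matches molar_volume
```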
Italian physicist Evangelista Torricelli (1608 – 1647) (Evangelista Torricelli) was the inventor of an ingenious device that could be used to measure air pressure. Basically, he took a glass tube closed at one end and filled it with mercury. He then inverted it, submerging the open end below the surface level in a pool of mercury. The mercury in the glass tube was then allowed to drain, leaving a vacuum (known as a “Torricellian vacuum”) in the open space at the closed end of the tube. Remarkably, the tube did not drain completely! Torricelli was able to use the residual column height to measure the pressure of the air pushing down on the surface of the pool of mercury. The larger the pressure pushing down on the exposed surface, the larger the observed column height.
The ambient air pressure can be computed by equating the force generated by the mass of the mercury in the column to the force generated by the ambient air pressure (after normalizing for surface area). The resulting relationship is
$p = \rho \,g \,h \nonumber$
where $\rho$ is the density of the mercury (13.6 g/cm³), $g$ is the acceleration due to gravity, and $h$ is the height of the column. Torricelli found that at sea level, the height of the column was 76 cm.
$p = \left(13.6\, \dfrac{g}{cm^3} \right) \left(9.8 \,\dfrac{m}{s^2}\right) ( 76 \, cm) \left(\dfrac{100^2\, cm^2}{m^2} \right) \left( \dfrac{1\, kg}{1000\, g} \right) \left( \dfrac{1\,N}{1\, kg\, m\, s^{-2}} \right) \approx 101,000\, N/m^2 = 1.01 \times 10 ^5\, Pa \nonumber$
A force of 1 N acting on an area of 1 m² defines a pascal (Pa). A standard atmosphere is 101,325 Pa (101.325 kPa), or 76.0 cm Hg (760 mm Hg). Another commonly used unit of pressure is the bar:
$1\, bar = 100,000 \,Pa \nonumber$
1.03: Temperature
Another important variable that describes the state of a system is the system’s temperature. Like pressure, temperature scales went through an important process of development over time. Three of the most important temperature scales in US culture are the Fahrenheit, Celsius, and Kelvin scales.
Daniel Gabriel Fahrenheit wanted to develop a temperature scale that would be convenient to use in his laboratory. He wanted it to be of convenient magnitude and wanted to avoid having to use any negative values for temperature. So he defined the zero of his temperature scale to be the lowest temperature he could create in his laboratory, which was in a saturated brine/water/ice slurry. He then defined 100 °F as his own body temperature. As a result, using his temperature scale, water has a normal melting point (the temperature at 1.00 atm pressure at which water ice melts) of 32 °F. Similarly, water boils (again at 1 atm pressure) at a temperature of 212 °F. The difference between these values is 180 °F.
Anders Celsius also thought a 100-degree temperature scale made sense; his scale was given the name “the centigrade scale.” He defined 0 °C on his scale as the normal boiling point of water, and 100 °C as the normal freezing point. By today’s standards, this inverted temperature scale makes little sense. The modern Celsius temperature scale defines 0 °C as the normal freezing point of water and 100 °C as the normal boiling point, a difference of 100 °C. Comparing this to the Fahrenheit scale, one can easily construct a simple equation to convert between the two scales.
$212\, °F = (100\, °C) m + b \nonumber$
$32\, °F = (0 \,°C) m + b \nonumber$
Solving these equations for $m$ and $b$ yields
$m= \dfrac{9 \, °F}{5\, °C} \nonumber$
$b= 32\,°F \nonumber$
And so conversion between the two scales is fairly simple.
$y\, °F = x\, \cancel{°C} \left( \dfrac{9 \, °F}{5\, \cancel{°C}} \right) + 32\,°F \nonumber$
$x\,°C = (y \, \cancel{°F} - 32\, \cancel{°F}) \left( \dfrac{5 \, °C}{9\,\cancel{ °F}} \right) \nonumber$
Many physical properties of matter suggest that there is an absolute minimum temperature that can be attained by any sample. Several types of experiments show this minimum temperature to be -273.15 °C. An absolute temperature scale is one that assigns this minimum temperature a value of 0. One particularly useful absolute scale is named after William Thomson, Lord Kelvin (1824 – 1907) (Kelvin, Lord William Thomson, 2007). The Kelvin scale fixes the normal melting temperature of water at 273.15 K and the normal boiling point at 373.15 K. As such, temperatures can be converted using the following expression:
$z\, K = x\, \cancel{°C} \left( \dfrac{1 \, K}{1\, \cancel{°C}} \right) + 273.15 \,K \nonumber$
1.04: The Zeroth Law of Thermodynamics
Temperature is an important property when it comes to measuring energy flow through a system. But how does one use or measure temperature? Fortunately, there is a simple and intuitive relationship which can be used to design a thermometer – a device used to measure temperature and temperature changes. The zeroth law of thermodynamics can be stated as follows: If a system A is in thermal equilibrium with a system B, which is also in thermal equilibrium with system C, then systems A and C share a property called temperature. This basic principle has been used to define standard temperature scales, guiding the International Committee for Weights and Measures in its adoption of the International Temperature Scale of 1990 (ITS-90) (Mangum & Furukawa, 1990). ITS-90 is defined by using various physical properties of substances (such as the triple point of water) which occur at very specific temperatures and pressures, and then assigning values to measurable quantities such as the resistance of a standard platinum resistance thermometer (Strouse, 2008).
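Collecting the scale conversions from this section and the last, a minimal Python sketch might look like the following (the function names are my own, not from the text):

```python
# Minimal sketch of the temperature-scale conversions above.
# Function names are illustrative, not from the text.

def f_to_c(f):
    """Fahrenheit -> Celsius: subtract 32 °F, then scale by 5 °C / 9 °F."""
    return (f - 32.0) * 5.0 / 9.0

def c_to_f(c):
    """Celsius -> Fahrenheit: scale by 9 °F / 5 °C, then add 32 °F."""
    return c * 9.0 / 5.0 + 32.0

def c_to_k(c):
    """Celsius -> Kelvin: shift by 273.15 K."""
    return c + 273.15

print(f_to_c(212.0))   # 100.0  (normal boiling point of water)
print(c_to_f(-40.0))   # -40.0  (the two scales cross at -40)
print(c_to_k(0.0))     # 273.15 (normal melting point of water)
```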
Temperature, pressure, and volume are important variables in the description of physical systems. They will also be important to describe how energy flows from one system to another. Generally, energy can flow in two important forms: 1) work and 2) heat. The bookkeeping needed to track the flow of energy is what the subject of Thermodynamics is all about, so these topics will be discussed at length in subsequent chapters. However, a little bit of review is in order, just to set the foundation for the discussions that are forthcoming.
Energy
Energy is an important entity in the modern world. We use energy to light our homes, drive our cars, and power our electronic devices. According to Richard Smalley, co-winner of the 1996 Nobel Prize in Chemistry, energy is one of the biggest (if not the biggest) challenges we face moving into the 21st century (energy.senate.gov, 2004):
Energy is at the core of virtually every problem facing humanity. We cannot afford to get this wrong. … Somehow we must find the basis for energy prosperity for ourselves and the rest of humanity for the 21st century. By the middle of this century we should assume we will need to at least double world energy production from its current level, with most of this coming from some clean, sustainable, CO2-free source. For worldwide peace and prosperity it needs to be cheap. - Richard Smalley, Testimony to the Senate Committee on Energy and Natural Resources, April 26, 2004
Energy can be measured in a multitude of different units, including joules (J), kilojoules (kJ), calories (cal), and kilocalories (kcal), as well as several other sets of units such as kJ/mol or kcal/mol. A calorie (cal) was once defined as the amount of energy needed to raise the temperature of 1 g of water by 1 °C. This definition suggests a convenient property of water called the specific heat:
$C = \dfrac{1\,cal}{g\, °C} \nonumber$
The modern definition of a calorie is 4.184 joules, where a joule is the energy necessary to move a mass a distance of 1 m against a resisting force of 1 N:
$1 \,J = 1\,N \cdot m = \dfrac{1 \,kg \, m^2}{s^2} \nonumber$
A dietary Calorie (Cal) is equal to 1000 cal, or 1 kcal, and is often listed on the labels of food containers to indicate the energy content of the food inside.
Energy can take the form of potential energy (stored energy) or kinetic energy (realized energy). Kinetic energy is the energy of motion. Potential energy, on the other hand, can be defined as the energy stored in a system that can be converted to kinetic energy someplace in the universe. The kinetic energy of a particle can be expressed as
$E_{kin} = \dfrac{1}{2} mv^2 \label{transKE}$
where $m$ is the mass of the particle, and $v$ is the magnitude of its velocity (or speed). Equation \ref{transKE} describes the kinetic energy associated with translation; other expressions exist for different motions (e.g., rotation or vibration).
An example of a system in which energy is converted between kinetic energy and potential energy is a Hooke’s Law oscillator. According to Hooke’s Law, the force acting on an object is proportional in magnitude to the displacement of the object from an equilibrium position, and opposite in sign:
$F = -kx \label{hook}$
In this equation, $F$ is the force, $x$ is the displacement from equilibrium, and $k$ is the constant of proportionality. The negative sign is necessary to ensure that the force acting on the object is one that will tend to restore it to the equilibrium position ($x = 0$), irrespective of whether $x$ is positive or negative.
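To see this interconversion of kinetic and potential energy numerically, here is a minimal Python sketch of a Hooke's Law oscillator (the mass, spring constant, and initial displacement are arbitrary illustrative values; the potential $U = \tfrac{1}{2}kx^2$ used for the bookkeeping is derived in the next paragraph):

```python
# Minimal sketch: energy bookkeeping for a Hooke's Law oscillator.
# m, k, and the initial displacement are arbitrary illustrative values.
m = 1.0    # kg
k = 4.0    # N/m
x, v = 0.5, 0.0        # start displaced, at rest
dt = 1e-4              # s, integration time step

for step in range(20001):
    if step % 5000 == 0:
        KE = 0.5 * m * v**2          # kinetic energy, (1/2) m v^2
        U = 0.5 * k * x**2           # potential energy (derived below)
        print(f"t={step*dt:4.1f} s  x={x:+.3f} m  KE={KE:.4f} J  "
              f"U={U:.4f} J  E={KE+U:.4f} J")
    # velocity-Verlet step under the restoring force F = -kx
    a = -k * x / m
    x += v * dt + 0.5 * a * dt**2
    a_new = -k * x / m
    v += 0.5 * (a + a_new) * dt

# KE and U trade off as the mass oscillates, while E = KE + U stays constant.
```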
As an object that follows Hooke’s Law moves away from its equilibrium position, the kinetic energy of its motion is converted into potential energy until there is no more kinetic energy left. At this point, the motion changes direction, returning the object to the equilibrium position by converting potential energy back into kinetic energy. As the object is displaced along the x-axis (in the case shown in Figure 1.5.1, this would be accomplished by stretching or compressing the spring), the potential energy increases. The force acting on the object will also increase as the object is displaced, and will be directed opposite to the direction of displacement (Equation \ref{hook}).
According to Newtonian physics, the potential energy $U(x)$ is given by the negative integral of the force with respect to position:
$U (x) = - \int F(x) dx \label{energy}$
Substituting Equation \ref{hook} into Equation \ref{energy} yields
$U(x) = - \int (-k x)dx = \dfrac{1}{2} kx^2 + constant \nonumber$
With the proper choice of coordinate system and other definitions, the constant of integration can arbitrarily be made zero (for example, by choosing it to offset any other forces acting on the object, such as the force due to gravity). The kinetic energy is then given by the total energy minus the potential energy (since the total energy must be constant due to the conservation of energy in the system!)
$E_{kin} = E_{tot} -U(x) \nonumber$
Work
Work is defined as the amount of energy expended to move a mass against a resisting force. For a mass being moved along a surface, the amount of energy expended must be sufficient to overcome the resisting force (perhaps due to friction) and also sufficient to cause motion along the entire path. The energy expended as work in this case (if the force is independent of the position of the object being moved) is given by
$w= - F \,\Delta x \nonumber$
where $F$ is the magnitude of the resisting force, and $\Delta x$ is the displacement of the object. The negative sign is necessary since the force is acting in the opposite direction of the motion. A more general expression, and one that can be used if the force is not constant over the entire motion, is
$dw = -F \,dx \nonumber$
This expression can then be integrated, including any dependence $F$ might have on $x$, as needed for a given system.
Another important form of work is that of the expansion of a gas sample against an external pressure. In this case, the displacement is defined by a change in volume for the sample:
$dw = -p_{ext} dV \nonumber$
This is a very convenient expression and will be used quite often when discussing the work expended in the expansion of a gas.
The conversion of potential energy into kinetic energy generally is accomplished through work which is done someplace in the universe. As such, the concepts of energy and work are inextricably intertwined. They will be central to the study of thermodynamics.
The work of a “reversible” expansion
An important case of limiting ideal behavior[1] is that of the reversible expansion. For a change to be reversible, there can be no net force pushing the change in one direction or the other. In order for this to be the case, the internal pressure (that of the system) and the external pressure (that of the surroundings) must be the same:
$p_{int} = p_{ext} = p \nonumber$
In this case, the work of expansion can be calculated by integrating the expression for dw.
$w = \int dw = -\int p dV \nonumber$
Making a simple substitution from the ideal gas law
$p=\dfrac{nRT}{V} \label{IGL}$
allows the work to be expressed in terms of volume and temperature. If the temperature is constant (so that it can be placed before the integral), the expression becomes
$w= - nRT \int_{V_1}^{V_2} \dfrac{dV}{V} = -nRT \ln \left( \dfrac{V_2}{V_1} \right) \label{workIG}$
where $V_1$ and $V_2$ are the initial and final volumes of the expansion, respectively.
Example $1$:
Consider 1.00 mol of an ideal gas, expanding isothermally at 273 K, from an initial volume of 11.2 L to a final volume of 22.4 L. What is the final pressure of the gas? Calculate the work of the expansion if it occurs
1. against a constant external pressure equal to the final pressure you have calculated.
2. reversibly.
Solution
First, let’s calculate the final pressure via Equation \ref{IGL}:
$p = \dfrac{nRT}{V} = \dfrac{(1.00\,mol)(0.08206\, atm\,L\,mol^{-1} K^{-1})(273\,K)}{22.4\, L} = 1.00 \, atm \nonumber$
(This may be a relationship you remember from General Chemistry – that 1 mole of an ideal gas occupies 22.4 L at 0 °C!)
Okay – now for the irreversible expansion against a constant external pressure:
$dw = -p_{ext}dV \nonumber$
so
$w = -p_{ext} \int _{V_1}^{V_2} dV = -p_{ext} \Delta V \nonumber$
$w = -(1.00 \, atm)(22.4 \, L- 11.2\,L) =-11.2 \,atm\,L \nonumber$
But what the heck is an atm L? It is actually a fairly simple thing to convert from units of atm L to J by using the ideal gas law constants:
$w = -(11.2 \,atm\,L) \left( \dfrac{8.314\, \frac{J}{mol\, K}}{0.08206\, \frac{atm\, L}{mol\, K}} \right) = -1130 \,J \nonumber$
Note that the negative sign indicates that the system is expending energy by doing work on the surroundings. (This concept will be vital in Chapter 3!)
Now for the reversible pathway. The work done by the system can be calculated for this change using Equation \ref{workIG}:
$w= -(1.00\, mol) (8.314\, J\, mol^{-1} K^{-1}) (273 \,K) \ln \left( \dfrac{22.4\,L}{11.2\,L} \right) = -1570\,J \nonumber$
Notes:
• First, notice how the value for the gas law constant, R, was chosen in order to match the units required in the problem. (Read and recite that previous sentence to yourself a few times. The incorrect choice of the value of R is one of the most common errors made by students in physical chemistry! By learning how the units will dictate your choice of R, you will save yourself a considerable number of headaches as you learn physical chemistry!)
• Second, you may note that the magnitude of the work done by the system in the reversible expansion is larger than that of the irreversible expansion. This will always be the case! (A short numerical check of this example appears at the end of the chapter.)
[1] There are many cases of “limiting ideal behavior” which we use to derive and/or explore the nature of chemical systems. The most obvious case, perhaps, is that of the Ideal Gas Law.
1.E: The Basics (Exercises)
Q1.1 Convert the temperatures indicated to complete the following table (each row gives one known value):
• 25 °C = ___ °F = ___ K
• 98.6 °F = ___ °C = ___ K
• 373.15 K = ___ °F = ___ °C
• -40 °F = ___ °C = ___ K
• 32 °F = ___ °C = ___ K
Q1.2 Make a graph representing the potential energy of a harmonic oscillator as a function of displacement from equilibrium. On the same graph, include a function describing the kinetic energy as a function of displacement from equilibrium, as well as the total energy of the system.
Q1.3 Calculate the work required to move a 3.2 kg mass 10.0 m against a resistive force of 9.80 N.
Q1.4 Calculate the work needed for a 22.4 L sample of gas to expand to 44.8 L against a constant external pressure of 0.500 atm.
Q1.5 If the internal and external pressure of an expanding gas are equal at all points along the entire expansion pathway, the expansion is called “reversible.” Calculate the work of a reversible expansion for 1.00 mol of an ideal gas expanding from 22.4 L at 273 K to a final volume of 44.8 L.
1.S: The Basics (Summary)
Learning Objectives
Upon mastering the material covered in this chapter, one should be able to do the following:
1. Write down expressions from which work of motion and of expansion can be calculated.
2. Express the “Zeroth Law of Thermodynamics.”
3. Convert between temperatures on several scales that are commonly used.
4. Define boundaries that differentiate between a system and its surroundings.
5. Perform calculations involving specific heat and understand how the specific heat governs temperature changes for the flow of a given amount of energy.
Vocabulary and Concepts
• calorie (cal)
• Calorie (Cal)
• closed system
• energy
• equation of state
• extensive
• heterogeneous
• homogeneous
• Ideal Gas Law
• ideal gas law constant
• intensive
• isolated system
• joule
• kinetic energy
• limiting ideal behavior
• open system
• platinum resistance thermometer
• potential energy
• pressure
• reversible expansion
• specific heat
• state variables
• surroundings
• system
• temperature
• universe
• work
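As a numerical companion to Example 1 of Section 1.5 (referenced there), here is a minimal Python sketch of the two expansion-work calculations, one against a constant external pressure and one reversible:

```python
import math

# Minimal sketch reproducing Example 1 of Section 1.5:
# 1.00 mol of ideal gas, isothermal expansion at 273 K, 11.2 L -> 22.4 L.
R_ATM = 0.08206   # atm L / (mol K)
R_J = 8.314       # J / (mol K)

n, T = 1.00, 273.0
V1, V2 = 11.2, 22.4                  # L

# Final pressure from the ideal gas law, p = nRT/V
p2 = n * R_ATM * T / V2
print(f"p2 = {p2:.2f} atm")          # -> 1.00 atm

# (a) irreversible, against constant p_ext = p2:  w = -p_ext (V2 - V1)
w_irrev = -p2 * (V2 - V1)            # atm L
w_irrev_J = w_irrev * (R_J / R_ATM)  # convert atm L -> J via the R ratio
print(f"w(irrev) = {w_irrev_J:.0f} J")   # -> about -1130 J

# (b) reversible, isothermal:  w = -nRT ln(V2/V1)
w_rev = -n * R_J * T * math.log(V2 / V1)
print(f"w(rev)   = {w_rev:.0f} J")       # -> about -1570 J
```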
Gases comprise a very important type of system that can be modeled using thermodynamics. This is true because gas samples can be described by very simple equations of state, such as the ideal gas law. In this chapter, both macroscopic and microscopic descriptions of gases will be used to demonstrate some of the important tools of thermodynamics.
• 2.1: The Empirical Gas Laws A number of important relationships describing the nature of gas samples have been derived completely empirically (meaning based solely on observation, rather than on any attempt to define the theoretical reasons these relationships may exist). These are the empirical gas laws.
• 2.2: The Ideal Gas Law The ideal gas law combines the empirical laws into a single expression. It also predicts the existence of a single, universal gas constant, which turns out to be one of the most important fundamental constants in science. As derived here, it is based entirely on empirical data. It represents “limiting ideal behavior.” As such, deviations from the behavior suggested by the ideal gas law can be understood in terms of what conditions are required for ideal behavior to be followed (or approached).
• 2.3: The Kinetic Molecular Theory of Gases The gas laws were derived from empirical observations. Connecting them to fundamental properties of the gas particles is a subject of great interest. The Kinetic Molecular Theory is one such approach. In its modern form, the Kinetic Molecular Theory of gases is based on five basic postulates.
• 2.4: Kinetic Energy It is also important to recognize that the most probable, average, and RMS kinetic energy terms that can be derived from the Kinetic Molecular Theory do not depend on the mass of the molecules. As such, it can be concluded that the average kinetic energy of the molecules in a thermalized sample of gas depends only on the temperature. However, the average speed depends on the molecular mass. So, for a given temperature, light molecules will travel faster on average than heavier molecules.
• 2.5: Graham’s Law of Effusion An important consequence of the kinetic molecular theory is what it predicts in terms of effusion and diffusion effects. Effusion is defined as a loss of material across a boundary.
• 2.6: Collisions with Other Molecules A major concern in the design of many experiments is collisions of gas molecules with other molecules in the gas phase. For example, molecular beam experiments are often dependent on a lack of molecular collisions in the beam that could degrade the nature of the molecules in the beam through chemical reactions or simply being knocked out of the beam.
• 2.7: Real Gases While the ideal gas law is sufficient for the prediction of large numbers of properties and behaviors for gases, there are a number of times that deviations from ideality are extremely important.
• 2.E: Gases (Exercises) Exercises for Chapter 2 "Gases" in Fleming's Physical Chemistry Textmap.
• 2.S: Gases (Summary) Summary for Chapter 2 "Gases" in Fleming's Physical Chemistry Textmap.
02: Gases
A number of important relationships describing the nature of gas samples have been derived completely empirically (meaning based solely on observation, rather than on any attempt to define the theoretical reasons these relationships may exist). These are the empirical gas laws.
Boyle’s Law
One of the important relationships governing gas samples that can be modeled mathematically is the relationship between pressure and volume.
Robert Boyle (1627 – 1691) (Hunter, 2004) did experiments to confirm the observations of Richard Towneley and Henry Power, showing that for a fixed sample of gas at a constant temperature, pressure and volume are inversely proportional:
$pV = \text{constant} \nonumber$
or
$p_1V_1=p_2V_2 \nonumber$
Boyle used a glass U-tube that was closed at one end, with the lower portion filled with mercury (trapping a sample of air in the closed end). By adding mercury to the open end, he was able to observe and quantify the compression of the trapped air.
Charles’ Law
Charles’ Law states that the volume of a fixed sample of gas at constant pressure is proportional to the temperature. For this law to work, there must be an absolute minimum to the temperature scale, since there is certainly an absolute minimum to the volume scale!
$\dfrac{V}{T} = \text{constant} \nonumber$
or
$\dfrac{V_1}{T_1} = \dfrac{V_2}{T_2} \nonumber$
The second law of thermodynamics also predicts an absolute minimum temperature, but that will be developed in a later chapter.
Gay-Lussac’s Law
Gay-Lussac’s Law states that the pressure of a fixed sample of gas at constant volume is proportional to the temperature. As with Charles’ Law, this suggests the existence of an absolute minimum to the temperature scale, since the pressure can never be negative.
$\dfrac{p}{T} = \text{constant} \nonumber$
or
$\dfrac{p_1}{T_1} = \dfrac{p_2}{T_2} \nonumber$
Combined Gas Law
Boyle’s, Charles’, and Gay-Lussac’s Laws can be combined into a single empirical formula that can be useful. For a given amount of gas, the following relationship must hold:
$\dfrac{pV}{T} = \text{constant} \nonumber$
or
$\dfrac{p_1V_1}{T_1} = \dfrac{p_2V_2}{T_2} \nonumber$
Avogadro’s Law
Amedeo Avogadro (1776 – 1856) (Encyclopedia, 2016) did extensive work with gases in his studies of matter. In the course of his work, he noted an important relationship between the number of moles of a gas and its volume. Avogadro’s Law (Avogadro, 1811) states that at the same temperature and pressure, any sample of gas has the same number of molecules per unit volume:
$\dfrac{n}{V} = \text{constant} \nonumber$
or
$\dfrac{n_1}{V_1} = \dfrac{n_2}{V_2} \nonumber$
2.02: The Ideal Gas Law
The ideal gas law combines the empirical laws into a single expression. It also predicts the existence of a single, universal gas constant, which turns out to be one of the most important fundamental constants in science:
$pV = nRT \nonumber$
The ideal gas law constant is of fundamental importance and can be expressed in a number of different sets of units:
Value Units
0.08206 atm L mol⁻¹ K⁻¹
8.314 J mol⁻¹ K⁻¹
1.987 cal mol⁻¹ K⁻¹
The ideal gas law, as derived here, is based entirely on empirical data. It represents “limiting ideal behavior.” As such, deviations from the behavior suggested by the ideal gas law can be understood in terms of what conditions are required for ideal behavior to be followed (or at least approached). For this reason, it would be nice if there were a theory of gases that would suggest the form of the ideal gas law and also the value of the gas law constant. As it turns out, the kinetic molecular theory of gases does just that!
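Since the ideal gas law will be used constantly from here on, a minimal Python sketch showing it in action, with $R$ chosen to match the units of the problem, may be a useful reference (the numerical scenario is illustrative):

```python
# Minimal sketch: the ideal gas law pV = nRT with unit-matched R values.
R_ATM = 0.08206   # atm L / (mol K)
R_J = 8.314       # J / (mol K)

def pressure_atm(n, V_L, T_K):
    """p = nRT/V, with V in litres; returns p in atm."""
    return n * R_ATM * T_K / V_L

# 1.00 mol at 0 °C (273.15 K) in 22.4 L -> about 1 atm
print(f"{pressure_atm(1.00, 22.4, 273.15):.3f} atm")

# The combined gas law p1V1/T1 = p2V2/T2 follows for fixed n:
p1, V1, T1 = 1.00, 22.4, 273.15
V2, T2 = 22.4, 546.30            # double the temperature at fixed volume
p2 = p1 * V1 * T2 / (T1 * V2)    # Gay-Lussac limit: p doubles
print(f"{p2:.2f} atm")
```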
Theoretical models attempting to describe the nature of gases date back to some of the earliest scientific inquiries into the nature of matter, and even earlier! In about 50 BC, Lucretius, a Roman philosopher, proposed that macroscopic bodies were composed of atoms that continually collide with one another and are in constant motion, despite the observable reality that the body itself is at rest. However, Lucretius’ ideas went largely ignored as they deviated from those of Aristotle, whose views were more widely accepted at the time.
In 1738, Daniel Bernoulli (Bernoulli, 1738) published a model that contains the basic framework for the modern Kinetic Molecular Theory. Rudolf Clausius furthered the model in 1857 by (among other things) introducing the concept of mean free path (Clausius, 1857). These ideas were further developed by Maxwell (Maxwell, Molecules, 1873). But because atomic theory was not yet fully embraced in the early 20th century, it was not until Albert Einstein published one of his seminal works describing Brownian motion (Einstein, 1905), in which he modeled matter using a kinetic theory of molecules, that the idea of an atomic (or molecular) picture really took hold in the scientific community.
In its modern form, the Kinetic Molecular Theory of gases is based on five basic postulates.
1. Gas particles obey Newton’s laws of motion and travel in straight lines unless they collide with other particles or the walls of the container.
2. Gas particles are very small compared to the average distances between them.
3. Molecular collisions are perfectly elastic, so that kinetic energy is conserved.
4. Gas particles do not interact with other particles except through collisions. There are no attractive or repulsive forces between particles.
5. The average kinetic energy of the particles in a sample of gas is proportional to the temperature.
Qualitatively, this model predicts the form of the ideal gas law:
1. More particles mean more collisions with the wall ($p \propto n$).
2. A smaller volume means more frequent collisions with the wall ($p \propto 1/V$).
3. Higher molecular speeds mean more frequent collisions with the walls ($p \propto T$).
Putting all of these together yields
$p \propto \dfrac{nT}{V} \quad \text{or} \quad p = k\, \dfrac{nT}{V} \nonumber$
which is exactly the form of the ideal gas law! The remainder of the job is to derive a value for the constant of proportionality ($k$) that is consistent with experimental observation.
For simplicity, imagine a collection of gas particles in a fixed-volume container, with all of the particles traveling at the same speed. What implications would the kinetic molecular theory have for such a sample? One approach to answering this question is to derive an expression for the pressure of the gas.
The pressure is going to be determined by considering the collisions of gas molecules with the wall of the container. Each collision will impart some force. So the greater the number of collisions, the greater the pressure will be. Also, the larger the force imparted per collision, the greater the pressure will be. And finally, the larger the area over which collisions are spread, the smaller the pressure will be.
$p \propto \dfrac{ (\text{number of collisions}) \times (\text{force imparted per collision})}{\text{area}} \nonumber$
First off, the pressure that the gas exerts on the walls of the container is due entirely to the force imparted each time a molecule collides with the interior surface of the container.
This force will be scaled by the number of molecules that hit the area of the wall in a given time. For this reason, it is convenient to define a “collision volume”
$V_{col}= (v_x \cdot \Delta t) \cdot A \nonumber$
where $v_x$ is the speed at which the molecules are traveling in the x direction, $\Delta t$ is the time interval (the product $v_x \cdot \Delta t$ gives the length of the collision-volume box), and $A$ is the area of the wall with which the molecules will collide. Half of the molecules within this volume will collide with the wall, since half will be traveling toward it and half will be traveling away from it. The number of molecules in this collision volume is given by the total number of molecules in the sample and the fraction of the total volume that is the collision volume. Thus, the number of molecules that will collide with the wall is given by
$N_{col} =\dfrac{1}{2} N_{tot} \dfrac{V_{col}}{V} \nonumber$
Substituting the expression for the collision volume gives
$N_{col} =\dfrac{1}{2} N_{tot} \dfrac{(v_x \cdot \Delta t) \cdot A}{V} \nonumber$
The magnitude of the force imparted per collision is determined by the time rate of change of the momentum of each particle as it hits the surface. It can be calculated by determining the total momentum change and dividing by the total time required for the event. Since each colliding molecule will change its velocity from $v_x$ to $-v_x$, the magnitude of the momentum change is $2(mv_x)$. Thus the force imparted per collision is given by
$F = \dfrac{2(mv_x)}{\Delta t} \nonumber$
and the total force imparted is
\begin{align} F_{tot} &= N_{col} \dfrac{2 (mv_x)}{\Delta t} \\[4pt] &= \dfrac{1}{2} N_{tot} \left[ \dfrac{(v_x\Delta t)A}{V} \right] \left[ \dfrac{2(m v_x)}{\Delta t} \right] \\[4pt] &= N_{tot} \left(\dfrac{mv_x^2}{V} \right) A \end{align} \nonumber
Since the pressure is given as the total force exerted per unit area, the pressure is given by
$p = \dfrac{F_{tot}}{A} = N_{tot} \left( \dfrac{mv_x^2}{V} \right) = \dfrac{N_{tot}m}{V} v_x^2 \nonumber$
The question then becomes how to deal with the velocity term. Initially, it was assumed that all of the molecules had the same velocity, and so the magnitude of the velocity in the x direction was merely a function of the trajectory. However, real samples of gases comprise molecules with an entire distribution of molecular speeds and trajectories. To deal with this distribution of values, we replace $v_x^2$ with its average over the distribution, $\langle v_x^2 \rangle$:
$p = \dfrac{N_{tot}m}{V} \langle v_x^2 \rangle \label{press}$
The distribution function for velocities in the x direction, known as the Maxwell-Boltzmann distribution, is given by:
$f(v_x) = \underbrace{\sqrt{ \dfrac{m}{2\pi k_BT} }}_{\text{normalization term}} \underbrace{\exp \left(\dfrac{-mv_x^2}{2k_BT} \right)}_{\text{exponential term}} \nonumber$
This function has two parts: a normalization constant and an exponential term. The normalization constant is derived by noting that
$\int _{-\infty}^{\infty} f(v_x) dv_x =1 \label{prob}$
Normalizing the Maxwell-Boltzmann Distribution
The Maxwell-Boltzmann distribution has to be normalized because it is a continuous probability distribution. As such, the sum of the probabilities for all possible values of $v_x$ must be unity. And since $v_x$ can take any value between $-\infty$ and $\infty$, Equation \ref{prob} must be true.
So if the form of $f(v_x)$ is assumed to be
$f(v_x) = N \exp \left(-\dfrac{mv_x^2}{2k_BT} \right) \nonumber$
the normalization constant $N$ can be found from
$\int_{-\infty}^{\infty} f(v_x) dv_x = \int_{-\infty}^{\infty} N \exp \left(\dfrac{-mv_x^2}{2k_BT} \right) dv_x =1 \nonumber$
The expression can be simplified by letting $\alpha = m/2k_BT$. It is then more simply written
$N \int_{-\infty}^{\infty} \exp \left(-\alpha v_x^2 \right) dv_x =1 \nonumber$
A table of definite integrals says that
$\int_{-\infty}^{\infty} e^{- a x^2} dx = \sqrt{\dfrac{\pi}{a}} \nonumber$
So
$N \sqrt{\dfrac{\pi}{\alpha}} = 1 \quad \text{or} \quad N = \sqrt{\dfrac{\alpha}{\pi}} = \left( \dfrac{m}{2\pi k_BT} \right) ^{1/2} \nonumber$
And thus the normalized distribution function is given by
$f(v_x) = \left( \dfrac{m}{2\pi k_BT} \right) ^{1/2} \exp \left( -\dfrac{m v_x^2}{2 k_BT} \right) \label{MB}$
Calculating an Average from a Probability Distribution
Calculating an average for a finite set of data is fairly easy. The average is calculated by
$\bar{x} = \dfrac{1}{N} \sum_{i=1}^N x_i \nonumber$
But how does one proceed when the set of data is infinite? Or how does one proceed when all one knows are the probabilities for each possible measured outcome? It turns out that that is fairly simple too!
$\bar{x} = \sum_{i=1}^N x_i P_i \nonumber$
where $P_i$ is the probability of measuring the value $x_i$. This can also be extended to problems where the measurable properties are not discrete (like the numbers that result from rolling a pair of dice) but rather come from a continuous parent population. In this case, if $P(x)\,dx$ gives the probability of measuring an outcome between $x$ and $x + dx$, the average value can then be determined by
$\bar{x} = \int x P(x) dx \nonumber$
where $P(x)$ is the function describing the probability distribution, with the integration taking place across all possible values that $x$ can take.
Calculating the average value of $v_x$
A value that is useful (and will be used in further developments) is the average velocity in the x direction. This can be derived using the probability distribution, as shown in the mathematical development box above. The average value of $v_x$ is given by
$\langle v_x \rangle = \int _{-\infty}^{\infty} v_x f(v_x)\, dv_x \nonumber$
This integral will, by necessity, be zero. This must be the case, as the distribution is symmetric, so that half of the molecules are traveling in the +x direction and half in the –x direction. These motions will have to cancel. So a more satisfying result will be given by considering the magnitude of $v_x$, which gives the speed in the x direction. Since this cannot be negative, and given the symmetry of the distribution, the problem becomes
$\langle |v_x |\rangle = 2 \int _{0}^{\infty} v_x f(v_x)\, dv_x \nonumber$
In other words, we will consider only half of the distribution, and then double the result to account for the half we ignored. For simplicity, we will write the distribution function as
$f(v_x) = N \exp(-\alpha v_x^2) \nonumber$
where
$N= \left( \dfrac{m}{2\pi k_BT} \right) ^{1/2} \nonumber$
and
$\alpha = \dfrac{m}{2k_BT} \nonumber$
A table of definite integrals shows
$\int_{0}^{\infty} x e^{- a x^2} dx = \dfrac{1}{2a} \nonumber$
so
$\langle |v_x| \rangle = 2N \left( \dfrac{1}{2\alpha}\right) = \dfrac{N}{\alpha} \nonumber$
Substituting our definitions for $N$ and $\alpha$ produces
$\langle |v_x| \rangle = \left( \dfrac{m}{2\pi k_BT} \right)^{1/2} \left( \dfrac{2 k_BT}{m} \right) = \left( \dfrac{2 k_BT}{ \pi m} \right)^{1/2} \nonumber$
This expression indicates the average speed for motion in one direction.
However, real gas samples have molecules not only with a distribution of molecular speeds but also a random distribution of directions. Using normal vector magnitude properties (or simply the Pythagorean Theorem), it can be seen that
$\langle v^2 \rangle = \langle v_x^2 \rangle + \langle v_y^2 \rangle + \langle v_z^2 \rangle \nonumber$
Since the direction of travel is random, the velocity can have any component in the x, y, or z directions with equal probability. As such, the average values of the squared x, y, and z components of velocity should be the same. And so
$\langle v^2 \rangle = 3 \langle v_x^2 \rangle \nonumber$
Substituting this into the expression for pressure (Equation \ref{press}) yields
$p =\dfrac{ N_{tot}m}{3V} \langle v^2 \rangle \nonumber$
All that remains is to determine the form of the distribution of velocity magnitudes the gas molecules can take. One of the first people to address this distribution was James Clerk Maxwell (1831 – 1879). In his 1860 paper (Maxwell, Illustrations of the dynamical theory of gases. Part 1. On the motions and collisions of perfectly elastic spheres, 1860), he proposed a form for this distribution of speeds which proved to be consistent with observed properties of gases (such as their viscosities). He derived this expression based on a transformation of coordinate system from Cartesian coordinates ($x$, $y$, $z$) to spherical polar coordinates ($v$, $\theta$, $\phi$). In this new coordinate system, $v$ represents the magnitude of the velocity (or the speed) and all of the directional data is carried in the angles $\theta$ and $\phi$. The infinitesimal volume unit becomes
$dx\,dy\,dz\, = v^2 \sin( \theta) \,dv\,d\theta \,d\phi \nonumber$
Applying this transformation of coordinates, and ignoring the angular part (since he was interested only in the speed), Maxwell’s distribution (Equation \ref{MB}) took the following form
$f(v) = N v^2 \exp \left( -\dfrac{m v^2}{2 k_BT} \right) \label{MBFullN}$
This function has three basic parts to it: a normalization constant ($N$), a velocity dependence ($v^2$), and an exponential term that contains the kinetic energy ($\tfrac{1}{2} mv^2$). Since the function represents the fraction of molecules with the speed $v$, the sum of the fractions for all possible speeds must be unity. This sum can be calculated as an integral. The normalization constant ensures that
$\int_0^{\infty} f(v) dv = 1 \nonumber$
Choosing the normalization constant as
$N =4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \nonumber$
yields the final form of the Maxwell distribution of molecular speeds:
$f(v) =4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 }\, v^2 \exp \left( -\dfrac{m v^2}{2 k_BT} \right) \label{MBFull}$
At low speeds, the $v^2$ term causes the function to increase with increasing $v$, but then at larger values of $v$, the exponential term causes it to drop back down asymptotically to zero.
The distribution will spread over a larger range of speeds at higher temperatures, but collapse to a smaller range of values at lower temperatures (Table 2.3.1).
Calculating the Average Speed
Using the Maxwell distribution as a distribution of probabilities, the average molecular speed in a sample of gas molecules can be determined.
\begin{align} \langle v \rangle & = \int _{0}^{\infty} v \,f(v) dv \\[4pt] & = \int _{0}^{\infty} v\, 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } v^2 \exp \left( -\dfrac{m v^2}{2 k_BT} \right)\, dv \\[4pt] & = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \int _{0}^{\infty} v^3 \exp \left( -\dfrac{m v^2}{2 k_BT} \right)\, dv \end{align} \nonumber
The following can be found in a table of integrals:
$\int_0^{\infty} x^{2n+1} e^{-ax^2} dx = \dfrac{n!}{2a^{n+1}} \nonumber$
So
$\langle v \rangle = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \left[ \dfrac{1}{2 \left( \dfrac{m}{2 k_B T} \right) ^2 } \right] \nonumber$
which simplifies to
$\langle v \rangle = \left( \dfrac{8 k_BT}{\pi m} \right) ^{1/2} \nonumber$
Note: the value of $\langle v \rangle$ is twice that of $\langle |v_x| \rangle$, which was derived in an earlier example!
$\langle v \rangle = 2\langle |v_x| \rangle \nonumber$
Example $1$:
What is the average value of the squared speed according to the Maxwell distribution law?
Solution
\begin{align} \langle v^2 \rangle & = \int _{0}^{\infty} v^2 \,f(v) dv \\[4pt] & = \int _{0}^{\infty} v^2\, 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } v^2 \exp \left( -\dfrac{m v^2}{2 k_BT} \right)\, dv \\[4pt] & = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \int _{0}^{\infty} v^4 \exp \left( -\dfrac{m v^2}{2 k_BT} \right)\, dv \end{align} \nonumber
A table of integrals indicates that
$\int_0^{\infty} x^{2n} e^{-ax^2} dx = \dfrac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{2^{n+1}a^n} \sqrt{\dfrac{\pi}{a}} \nonumber$
Substitution (noting that $n = 2$) yields
$\langle v^2 \rangle = 4\pi \sqrt{\left( \dfrac{m}{2\pi k_BT} \right) ^3 } \left[ \dfrac{1 \cdot 3}{2^3 \left( \dfrac{m}{2 k_BT} \right) ^2 } \sqrt{\dfrac{\pi}{\left( \dfrac{m}{2 k_BT} \right)}} \right] \nonumber$
which simplifies to
$\langle v^2 \rangle = \dfrac{3 k_BT}{ m} \nonumber$
Note: The square root of this average squared speed is called the root-mean-square (RMS) speed, and has the value
$v_{rms} = \sqrt{ \langle v^2 \rangle } = \left( \dfrac{3 k_BT}{ m} \right)^{1/2} \nonumber$
The entire distribution is also affected by molecular mass. For lighter molecules, the distribution is spread across a broader range of speeds at a given temperature, but collapses to a smaller range for heavier molecules (Table 2.3.2).
The probability distribution function can also be used to derive an expression for the most probable speed ($v_{mp}$), the average speed ($v_{ave}$), and the root-mean-square speed ($v_{rms}$) as functions of the temperature and the masses of the molecules in the sample. The most probable speed is the one with the maximum probability. That will be the speed that yields the maximum value of $f(v)$. It is found by solving the expression
$\dfrac{d}{dv} f(v) = 0 \nonumber$
for the value of $v$ that makes it true. This will be the value that gives the maximum value of $f(v)$ for the given temperature. Similarly, the average value can be found using the distribution in the following fashion
$v_{ave} = \langle v \rangle \nonumber$
and the root-mean-square (RMS) speed by finding the square root of the average value of $v^2$. Both are demonstrated above.
$v_{rms} = \sqrt{ \langle v^2 \rangle} \nonumber$
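As a quick numerical illustration of the three characteristic speeds derived above, here is a minimal Python sketch (the choice of N2 at 298 K is an arbitrary example, not from the text):

```python
import math

# Minimal sketch: characteristic speeds from the Maxwell distribution,
# evaluated for N2 at 298 K (an arbitrary illustrative choice).
k_B = 1.380649e-23        # J/K, Boltzmann constant
N_A = 6.022e23            # 1/mol, Avogadro's number
T = 298.0                 # K
m = 28.0e-3 / N_A         # kg per N2 molecule (M = 28.0 g/mol)

v_mp = math.sqrt(2 * k_B * T / m)               # most probable: (2kT/m)^(1/2)
v_ave = math.sqrt(8 * k_B * T / (math.pi * m))  # average: (8kT/(pi m))^(1/2)
v_rms = math.sqrt(3 * k_B * T / m)              # RMS: (3kT/m)^(1/2)

print(f"v_mp  = {v_mp:.0f} m/s")   # ~ 421 m/s
print(f"v_ave = {v_ave:.0f} m/s")  # ~ 475 m/s
print(f"v_rms = {v_rms:.0f} m/s")  # ~ 515 m/s
# Note the fixed ordering v_mp < v_ave < v_rms at any temperature.
```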