Recall that the Fourier transform of a time correlation function can be related to some kind of frequency spectrum. For example, the Fourier transform of the velocity autocorrelation function of a particular degree of freedom $q$ of interest ($v = \dot{q}$), $I(\omega) = \int_0^{\infty}\;dt\; e^{i\omega t} C_{\rm vv}(t) \nonumber$ gives the relevant frequencies contributing to the dynamics of $q$, but does not give amplitudes. This "frequency" spectrum is simply obtained by taking the Laplace transform of $C_{\rm vv}(t)$ with $s = -i\omega$. Since $C_{\rm vv}(t)$ carries information about the relevant frequencies of the system, the decay of $C_{\rm vv}(t)$ in time is a measure of how strongly coupled the motion of $q$ is to the rest of the bath, i.e., how much of an overlap there is between the relevant frequencies of the bath and those of $q$. The more of an overlap there is, the more mixing there will be between the system and the bath, and hence, the more rapidly the motion of the system will become vibrationally "out of phase" or decorrelated with itself. Thus, the decay time of $C_{\rm vv}(t)$, which is denoted $T_2$, is called the vibrational dephasing time. Another measure of the strength of the coupling between the system and the bath is the time required for the system to dissipate energy into the bath when it is excited away from equilibrium. This time can be obtained by studying the decay of the energy autocorrelation function: $C_{\varepsilon\varepsilon}(t) = {\langle \varepsilon (0)\varepsilon (t)\rangle \over \langle \varepsilon ^2\rangle } \nonumber$ where $\varepsilon (t)$ is defined to be $\varepsilon (t) = {1 \over 2}m\dot{q}^2 + \phi(q) - kT \nonumber$ The decay time of this correlation function is denoted $T_1$. The question then becomes: what are these characteristic decay times and how are they related? To answer this, we will take a phenomenological approach. We will assume the validity of the GLE for $q$: $m\ddot{q} = -{\partial \phi \over \partial q} - \int_0^t\;d\tau\;\dot{q}(\tau)\zeta(t-\tau)+ R(t) \nonumber$ and use it to calculate $C_{\rm vv}(t)$ and $C_{\varepsilon\varepsilon}(t)$.
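Before specializing to a harmonic potential, here is a quick numerical illustration of the spectrum just defined. This is only a sketch, not part of the original development: the damped-cosine form of $C_{\rm vv}(t)$ and the parameters below are illustrative assumptions chosen to mimic a weakly damped oscillator.

```python
import numpy as np

# Model velocity autocorrelation function (assumed form for illustration):
gamma, Omega = 0.2, 3.0              # illustrative decay rate and frequency
dt = 1.0e-3
t = dt * np.arange(200_000)          # grid long enough for C_vv to decay away
C_vv = np.exp(-gamma * t / 2) * np.cos(Omega * t)

def I(w):
    """Riemann-sum estimate of Re \int_0^inf dt e^{i w t} C_vv(t)."""
    return (np.exp(1j * w * t) * C_vv).sum().real * dt

ws = np.linspace(0.0, 6.0, 301)
spec = [I(w) for w in ws]
print("spectral peak near w =", ws[int(np.argmax(spec))])   # close to Omega
```

The peak of the computed spectrum sits at the oscillator's dominant frequency, which is exactly the information $I(\omega)$ is said to carry.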
Suppose the potential $\phi (q)$ is harmonic and takes the form $\phi(q) = {1 \over 2}m{\omega}^2q^2 \nonumber$ Substituting into the GLE and dividing through by $m$ gives $\ddot{q} = -{\omega}^2 q - \int_0^t\;d\tau\;\dot{q}(t-\tau)\gamma(\tau) + f(t) \nonumber$ where $\gamma(t) = {\zeta(t) \over m}\;\;\;\;\;f(t) = {R(t) \over m} \nonumber$ An equation of motion for $C_{\rm vv}(t)$ can be obtained directly by multiplying both sides of the GLE by $\dot{q}(0)$ and averaging over a canonical ensemble: $\langle \dot{q}(0)\ddot{q}(t)\rangle = -{\omega}^2\langle \dot {q} (0) q (t) \rangle - \int _0^t d \tau \langle \dot {q} (0) \dot {q} (t-\tau)\rangle \gamma(\tau) + \langle \dot{q}(0)f(t)\rangle \nonumber$ Recall that $\langle \dot{q}(0)f(t)\rangle = {1 \over m}\langle \dot{q}(0)R(t)\rangle = 0 \nonumber$ and note that $\langle \dot{q}(0)\ddot{q}(t)\rangle = {d \over dt}\langle \dot{q}(0)\dot{q}(t)\rangle ={dC_{\rm vv} \over dt} \nonumber$ also $\int_0^t\;d\tau\;\langle \dot{q}(0)\dot{q}(\tau)\rangle = \langle \dot{q}(0)q(t)\rangle - \langle \dot{q}(0)q(0)\rangle = \langle \dot{q}(0)q(t)\rangle \nonumber$ so that $\langle \dot{q}(0)q(t)\rangle = \int_0^t\;d\tau\;C_{\rm vv}(\tau) \nonumber$ Combining these results gives an equation for $C_{\rm vv}(t)$: ${d \over dt}C_{\rm vv}(t) = -\int_0^t\;d\tau\;\left({\omega}^2 +\gamma(t-\tau)\right)C_{\rm vv}(\tau) \equiv -\int_0^t\;d\tau\;K(t-\tau)C_{\rm vv}(\tau) \nonumber$ which is known as the memory function equation, and the kernel $K(t) = {\omega}^2 + \gamma(t)$ is known as the memory function or memory kernel. This type of integro-differential equation is called a Volterra equation, and it can be solved by Laplace transforms. Taking the Laplace transform of both sides gives $s\tilde{C}_{\rm vv}(s) - C_{\rm vv}(0) = -\tilde{C}_{\rm vv}(s)\tilde{K}(s) \nonumber$ However, it is clear that $C_{\rm vv}(0)=1$ and also $\tilde{K}(s) = {{\omega}^2 \over s} + \tilde{\gamma}(s) \nonumber$ Thus, it follows that $s\tilde{C}_{\rm vv}(s) - 1 = -\left({{\omega}^2 \over s} + \tilde{\gamma}(s)\right)\tilde{C}_{\rm vv}(s) \nonumber$ or $\tilde{C}_{\rm vv}(s) = {s \over s^2 + s\tilde{\gamma}(s) + {\omega}^2} \nonumber$
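As an aside, the memory function equation can also be integrated directly in the time domain, which provides a useful check on the analytic inversion that follows. The sketch below is an illustration only: the exponential memory kernel and the parameters are assumptions, and a simple Euler/rectangle-rule discretization is used.

```python
import numpy as np

# dC/dt = -\int_0^t K(t - tau) C(tau) dtau,  K(t) = w^2 + gamma(t),
# with an assumed exponential memory kernel gamma(t) = g0 * exp(-t / tau_c).
w, g0, tau_c = 3.0, 0.5, 0.3          # illustrative parameters
dt, n = 2.0e-3, 5000
t = dt * np.arange(n)
K = w**2 + g0 * np.exp(-t / tau_c)

C = np.zeros(n)
C[0] = 1.0                            # C_vv(0) = 1 by normalization
for i in range(n - 1):
    conv = np.dot(K[i::-1], C[:i + 1]) * dt   # rectangle-rule convolution
    C[i + 1] = C[i] - dt * conv

# C now holds a damped oscillation near the (renormalized) frequency w.
print(C[:3])
```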
In order to perform the inverse Laplace transform, we need the poles of the integrand, which will be determined by the solutions of $s^2 + s \tilde {\gamma} (s) + \omega ^2 = 0 \nonumber$ which we could solve directly if we knew the explicit form of $\tilde{\gamma}(s)$. However, if $\omega$ is sufficiently larger than $\tilde{\gamma}(0)$, then it is possible to develop a perturbation solution to this equation. Let us assume the solutions for $s$ can be written as $s = s_0 + s_1 + s_2 + \cdots \nonumber$ Substituting in this ansatz gives $(s_0+s_1+s_2+\cdots)^2 + (s_0+s_1+s_2+\cdots)\tilde{\gamma}(s_0+s_1+s_2+\cdots) + {\omega}^2=0 \nonumber$ Since we are assuming $\tilde{\gamma}$ is small, then to lowest order, we have $s_0^2 + {\omega}^2 = 0 \nonumber$ so that $s_0 = \pm i{\omega}$. The first order equation then becomes $2s_0s_1 + s_0\tilde{\gamma}(s_0) = 0 \nonumber$ or $s_1 = -{\tilde{\gamma}(s_0) \over 2} = -{\tilde{\gamma}(\pm i{\omega}) \over 2} \nonumber$ Note, however, that $\tilde{\gamma}(\pm i{\omega}) = \int_0^{\infty}\;dt\;\gamma(t)e^{\pm i\omega t} = \int_0^{\infty}\;dt\;\left[\gamma(t)\cos{\omega}t \pm i\gamma(t)\sin{\omega}t\right] = \gamma'({\omega}) \pm i\gamma''({\omega}) \nonumber$ Thus, stopping at the first order result, the poles of the integrand occur at $s \approx \pm i\left({\omega}+ \gamma''({\omega})\right) - {\gamma ' (\omega) \over 2}\equiv \pm i\Omega - {\gamma'({\omega}) \over 2} \nonumber$ Define $s_+ = i\Omega - {\gamma'({\omega}) \over 2} \;\;\;\;\; s_- = -i\Omega - {\gamma'({\omega}) \over 2} \nonumber$ Then $\tilde{C}_{\rm vv}(s) \approx {s \over (s-s_+)(s-s_-)} \nonumber$ and $C_{\rm vv}(t)$ is then given by the contour integral $C_{\rm vv}(t) = {1 \over 2\pi i}\oint\;{se^{st}\;ds \over (s-s_+)(s-s_-)} \nonumber$ Taking the residue at each pole, we find $C_{\rm vv}(t) = {s_+e^{s_+t} \over (s_+-s_-)} + {s_-e^{s_-t} \over (s_--s_+)} \nonumber$ which can be simplified to give $C_{\rm vv}(t) = e^{-\gamma'({\omega})t/2}\left[\cos\Omega t - {\gamma'({\omega}) \over 2\Omega}\sin\Omega t\right] \nonumber$ Thus, we see that the GLE predicts that $C_{\rm vv}(t)$ oscillates with a frequency $\Omega$ and decays exponentially. From the exponential decay, we can directly read off the time $T_2$: ${1 \over T_2} = {\gamma'({\omega}) \over 2} = {\zeta'({\omega}) \over 2m} \nonumber$ That is, the real part of the Fourier (Laplace) transform of the friction kernel, evaluated at the renormalized frequency and divided by $2m$, gives the inverse of the vibrational dephasing time! By a similar scheme, one can easily show that the position autocorrelation function $C_{\rm qq}(t) = \langle q(0)q(t)\rangle$ decays with the same dephasing time. Its explicit form is $C_{\rm qq}(t) = e^{-\gamma'({\omega})t/2}\left[\cos\Omega t + {\gamma'({\omega}) \over 2\Omega}\sin\Omega t\right] \nonumber$ The energy autocorrelation function $C_{\varepsilon\varepsilon}(t)$ can be expressed in terms of the more primitive correlation functions $C_{\rm vv}(t)$ and $C_{\rm qq}(t)$. It is a straightforward, although extremely tedious, matter to show that the relation, valid for the harmonic potential of mean force, is $C_{\varepsilon\varepsilon}(t) = {1 \over 2}C_{\rm vv}^2(t) + {1 \over 2} C^2_{\rm qq} (t) + {1 \over {\omega}^2}\dot{C}_{\rm qq}^2(t) \nonumber$ Substituting in the expressions for $C_{\rm vv}(t)$ and $C_{\rm qq}(t)$ gives $C_{\varepsilon\varepsilon}(t) = e^{-\gamma'({\omega})t}\times ({\rm oscillatory\ functions\ of\ }t) \nonumber$ so that the decay time $T_1$ can be seen to be ${1 \over T_1} = \gamma'({\omega}) = {\zeta'({\omega}) \over m} \nonumber$ and therefore, the relation between $T_1$ and $T_2$ can be seen immediately to be ${1 \over T_2} = {1 \over 2T_1} \nonumber$ The incredible fact is that this result is also true quantum mechanically. That is, by doing a simple, purely classical treatment of the problem, we obtained a result that turns out to be the correct quantum mechanical result!
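The relation $T_2 = 2T_1$ can be checked numerically from the explicit forms above. In this sketch the parameters are again illustrative, and $\omega \approx \Omega$ is assumed in the $1/\omega^2$ prefactor (valid when the friction is weak):

```python
import numpy as np

gp, Omega = 0.2, 3.0                       # assumed gamma'(w) and renormalized frequency
t = np.linspace(0.0, 30.0, 3001)
env = np.exp(-gp * t / 2)                  # exp(-gamma' t / 2) envelope of C_vv, C_qq
C_vv = env * (np.cos(Omega * t) - gp / (2 * Omega) * np.sin(Omega * t))
C_qq = env * (np.cos(Omega * t) + gp / (2 * Omega) * np.sin(Omega * t))
dC_qq = np.gradient(C_qq, t)               # numerical time derivative of C_qq

C_ee = 0.5 * C_vv**2 + 0.5 * C_qq**2 + dC_qq**2 / Omega**2

# C_ee should track the exp(-gamma' t) envelope, i.e. decay twice as fast
# as C_vv's exp(-gamma' t / 2), up to O(gamma'/Omega) corrections:
print(np.max(np.abs(C_ee) / np.exp(-gp * t)))   # about 1
```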
Just how big are these times? If $\omega$ is very large compared to any typical frequency relevant to the bath, then the friction kernel evaluated at this frequency will be extremely small, giving rise to a long decay time. This result is expected since, if $\omega$ is large compared to the bath frequencies, there are very few ways in which the system can dissipate energy into the bath. The situation changes dramatically, however, if a small amount of anharmonicity is added to the potential of mean force. The figure below illustrates the point for a harmonic diatomic molecule interacting with a Lennard-Jones bath. The top panel shows the velocity autocorrelation function for an oscillator whose frequency is approximately 3 times the characteristic frequency of the bath, while the bottom panel shows the velocity autocorrelation function for the case that the frequency disparity is a factor of 6. (Figure 1: velocity autocorrelation functions for the two frequency ratios.)
Learning Objectives

In this Chapter, you should have learned about the following things:

• Why quantum mechanics is needed; that is, what things classical mechanics does not describe correctly. How quantum and classical descriptions can sometimes agree and when they will not. How certain questions can only be asked when classical mechanics applies, not when quantum mechanics is needed.
• The Schrödinger equation, operators, wave functions, eigenvalues and eigenfunctions and their relations to experimental observations.
• Time propagation of wave functions.
• Free particle motion and corresponding eigenfunctions in one, two, and three dimensions and the associated energy levels, and the relevance of these models to various chemistry issues.
• Action quantization and the resulting semi-classical wave functions and how this point of view offers connections between classical and quantum perspectives.

In this portion of the text, most of the topics that are appropriate to an undergraduate reader are covered. Many of these subjects are subsequently discussed again in Chapter 5, where a broad perspective of what theoretical chemistry is about is offered. They are treated again in greater detail in Chapters 6-8, where the three main disciplines of theory (electronic structure, chemical dynamics, and statistical mechanics) are covered in depth appropriate to a graduate-student reader.

01: The Basics of Quantum Mechanics

The field of theoretical chemistry deals with the structures, bonding, reactivity, and physical properties of atoms, molecules, radicals, and ions, all of whose sizes range from ca. 1 Å for atoms and small molecules to a few hundred Å for polymers and biological molecules such as DNA and proteins. Sometimes these building blocks combine to form nanoscopic materials (e.g., quantum dots, graphene sheets) whose dimensions span up to thousands of Å, making them amenable to detection using specialized microscopic tools. However, description of the motions and properties of the particles comprising such small systems has been found to not be amenable to treatment using classical mechanics. Their structures, energies, and other properties have only been successfully described within the framework of quantum mechanics. This is why quantum mechanics has to be mastered as part of learning theoretical chemistry. We know that all molecules are made of atoms that, in turn, contain nuclei and electrons. As I discuss in this Chapter, the equations that govern the motions of electrons and of nuclei are not the familiar Newton equations $\textbf{F} = m \textbf{a} \tag{1.1}$ but a new set of equations called Schrödinger equations. When scientists first studied the behavior of electrons and nuclei, they tried to interpret their experimental findings in terms of classical Newtonian motions, but such attempts eventually failed. They found that such small, light particles behaved in a way that simply is not consistent with the Newton equations. Let me now illustrate some of the experimental data that gave rise to these paradoxes and show you how the scientists of those early times then used these data to suggest new equations that these particles might obey. I want to stress that the Schrödinger equation was not derived but postulated by these scientists. In fact, to date, to the best of my knowledge, no one has been able to derive the Schrödinger equation.
From the pioneering work of Bragg on diffraction of x-rays from planes of atoms or ions in crystals, it was known that peaks in the intensity of diffracted x-rays having wavelength $\lambda$ would occur at scattering angles $\theta$ determined by the famous Bragg equation: $n \lambda = 2 d \sin{\theta} \tag{1.2}$ where $d$ is the spacing between neighboring planes of atoms or ions. These quantities are illustrated in Figure 1.1 shown below. There are many such diffraction peaks, each labeled by a different value of the integer $n$ ($n = 1, 2, 3, \cdots$). The Bragg formula can be derived by considering when two photons, one scattering from the second plane in the figure and the second scattering from the third plane, will undergo constructive interference. This condition is met when the extra path length covered by the second photon (i.e., the length from points $A$ to $B$ to $C$) is an integer multiple of the wavelength of the photons. The importance of these x-ray scattering experiments to electrons and nuclei appears in the experiments of Davisson and Germer in 1927, who scattered electrons of (reasonably) fixed kinetic energy $E$ from metallic crystals. These workers found that plots of the number of scattered electrons as a function of scattering angle $\theta$ displayed peaks at angles $\theta$ that obeyed a Bragg-like equation. The startling thing about this observation is that electrons are particles, yet the Bragg equation is based on the properties of waves. An important observation derived from the Davisson-Germer experiments was that the scattering angles $\theta$ observed for electrons of kinetic energy $E$ could be fit to the Bragg equation if a wavelength were ascribed to these electrons that was defined by $\lambda = \dfrac{h}{\sqrt{2m_e E}} \tag{1.3}$ where $m_e$ is the mass of the electron and $h$ is the constant introduced by Max Planck and Albert Einstein in the early 1900s to relate a photon’s energy $E$ to its frequency $\nu$ via $E = h\nu$. These amazing findings were among the earliest to suggest that electrons, which had always been viewed as particles, might have some properties usually ascribed to waves. That is, as de Broglie suggested in 1925, an electron seems to have a wavelength inversely related to its momentum, and to display wave-type diffraction. I should mention that analogous diffraction was also observed when other small, light particles (e.g., protons, neutrons, nuclei, and small atomic ions) were scattered from crystal planes. In all such cases, Bragg-like diffraction is observed and the Bragg equation is found to govern the scattering angles if one assigns a wavelength to the scattering particle according to $\lambda = \dfrac{h}{\sqrt{2m E}} \tag{1.4}$ where $m$ is the mass of the scattered particle and $h$ is Planck’s constant ($6.62\times 10^{-27}$ erg sec). The observation that electrons and other small, light particles display wave-like behavior was important because these particles are what all atoms and molecules are made of. So, if we want to fully understand the motions and behavior of molecules, we must be sure that we can adequately describe such properties for their constituents. Because the classical Newtonian equations do not contain factors that suggest wave properties for electrons or nuclei moving freely in space, the above behaviors presented significant challenges.
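A small worked example of Eqs. 1.3 and 1.2 may help. This is only a sketch: the 54 eV electron energy and the 0.91 Å plane spacing are assumed values, chosen to roughly match the Davisson-Germer conditions.

```python
import math

h  = 6.626e-34        # Planck's constant, J s
me = 9.109e-31        # electron mass, kg
eV = 1.602e-19        # joules per electron volt

E = 54 * eV                                # assumed electron kinetic energy
lam = h / math.sqrt(2 * me * E)            # de Broglie wavelength, Eq. 1.3
print(f"lambda = {lam * 1e10:.2f} Angstrom")        # about 1.67 Angstrom

d = 0.91e-10                               # assumed spacing between planes
theta = math.degrees(math.asin(lam / (2 * d)))      # n lambda = 2 d sin(theta), n = 1
print(f"first-order Bragg angle = {theta:.1f} degrees")
```

The computed wavelength of roughly 1.7 Å is comparable to atomic plane spacings, which is precisely why electron diffraction from crystals is observable.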
Another problem that arose in early studies of atoms and molecules resulted from the study of the photons emitted from atoms and ions that had been heated or otherwise excited (e.g., by electric discharge). It was found that each kind of atom (e.g., H or C or O) emitted photons whose frequencies $\nu$ were of very characteristic values. An example of such emission spectra is shown in Figure 1.2 for hydrogen atoms. In the top panel, we see all of the lines emitted with their wavelengths indicated in nanometers. The other panels show how these lines have been analyzed (by scientists whose names are associated) into patterns that relate to the specific energy levels between which transitions occur to emit the corresponding photons. In the early attempts to rationalize such spectra in terms of electronic motions, one described an electron as moving about the atomic nuclei in circular orbits such as shown in Figure 1.3. A circular orbit was thought to be stable when the outward centrifugal force characterized by radius $r$ and speed $v$ ($m_e v^2/r$) on the electron perfectly counterbalanced the inward attractive Coulomb force ($Ze^2/r^2$) exerted by the nucleus of charge $Z$: $m_e \dfrac{v^2}{r} = \dfrac{Ze^2}{r^2} \tag{1.5}$ This equation, in turn, allows one to relate the kinetic energy $\dfrac{1}{2} m_e v^2$ to the Coulombic energy $Ze^2/r$, and thus to express the total energy $E$ of an orbit in terms of the radius of the orbit: $E = \dfrac{1}{2} m_e v^2 - \dfrac{Ze^2}{r} = -\dfrac{1}{2} \dfrac{Ze^2}{r} \tag{1.6}$ The energy characterizing an orbit of radius $r$, relative to the $E = 0$ reference of energy at $r \rightarrow \infty$, becomes more and more negative (i.e., lower and lower) as $r$ becomes smaller. This relationship between outward and inward forces allows one to conclude that the electron should move faster as it moves closer to the nucleus since $v^2 = Ze^2/(r m_e)$. However, nowhere in this model is a concept that relates to the experimental fact that each atom emits only certain kinds of photons. It was believed that photon emission occurred when an electron moving in a larger circular orbit lost energy and moved to a smaller circular orbit. However, the Newtonian dynamics that produced the above equation would allow orbits of any radius, and hence any energy, to be followed. Thus, it would appear that the electron should be able to emit photons of any energy as it moved from orbit to orbit. The breakthrough that allowed scientists such as Niels Bohr to apply the circular-orbit model to the observed spectral data involved first introducing the idea that the electron has a wavelength and that this wavelength $\lambda$ is related to its momentum by the de Broglie equation $\lambda = h/p$. The key step in the Bohr model was to also specify that the radius of the circular orbit be such that the circumference of the circle $2\pi r$ be equal to an integer ($n$) multiple of the wavelength $\lambda$. Only in this way will the electron’s wave experience constructive interference as the electron orbits the nucleus. Thus, the Bohr relationship that is analogous to the Bragg equation that determines at what angles constructive interference can occur is $2 \pi r = n \lambda. \tag{1.7}$ Both this equation and the analogous Bragg equation are illustrations of what we call boundary conditions; they are extra conditions placed on the wavelength to produce some desired character in the resultant wave (in these cases, constructive interference).
Of course, there remains the question of why one must impose these extra conditions when the Newton dynamics do not require them. The resolution of this paradox is one of the things that quantum mechanics does. Returning to the above analysis and using $\lambda = h/p = h/(m_e v)$, $2\pi r = n\lambda$, as well as the force-balance equation $m_e v^2/r = Ze^2/r^2$, one can then solve for the radii that stable Bohr orbits obey: $r = \left(\dfrac{nh}{2\pi}\right)^2 \dfrac{1}{m_e Z e^2} \tag{1.8}$ and, in turn, for the velocities of electrons in these orbits $v = \dfrac{Z e^2}{nh/2\pi}. \tag{1.9}$ These two results then allow one to express the sum of the kinetic ($\dfrac{1}{2} m_e v^2$) and Coulomb potential ($-Ze^2/r$) energies as $E = -\dfrac{1}{2} m_e Z^2 \dfrac{e^4}{(nh/2\pi)^2}. \tag{1.10}$ Just as in the Bragg diffraction result, which specified at what angles special high intensities occurred in the scattering, there are many stable Bohr orbits, each labeled by a value of the integer $n$. Those with small $n$ have small radii (scaling as $n^2$), high velocities (scaling as $1/n$) and more negative total energies (n.b., the reference zero of energy corresponds to the electron at $r = \infty$, and with $v = 0$). So, it is the result that only certain orbits are allowed that causes only certain energies to occur and thus only certain energies to be observed in the emitted photons. It turned out that the Bohr formula for the energy levels (labeled by $n$) of an electron moving about a nucleus could be used to explain the discrete line emission spectra of all one-electron atoms and ions (i.e., $H$, $He^+$, $Li^{+2}$, etc., sometimes called hydrogenic species) to very high precision. In such an interpretation of the experimental data, one claims that a photon of energy $h\nu = R \left(\dfrac{1}{n_f^2} - \dfrac{1}{n_i^2}\right) \tag{1.11}$ is emitted when the atom or ion undergoes a transition from an orbit having quantum number $n_i$ to a lower-energy orbit having $n_f$. Here the symbol $R$ is used to denote the following collection of factors: $R = \dfrac{1}{2} m_e Z^2 \dfrac{e^4}{\Big(\dfrac{h}{2\pi}\Big)^2} \tag{1.12}$ which, for $Z = 1$, is called the Rydberg unit of energy and is equal to 13.6 eV.
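To get a feel for Eqs. 1.11 and 1.12, here is a small worked example (a sketch using standard constants) computing the visible hydrogen emission lines, the Balmer series terminating on $n_f = 2$ for $Z = 1$:

```python
# Photon energies and wavelengths from Eq. 1.11 for hydrogen (Z = 1),
# using R = 13.6 eV and h*c = 1239.84 eV nm.
R_eV = 13.6
hc_eV_nm = 1239.84

n_f = 2                                   # Balmer series terminates on n = 2
for n_i in range(3, 8):
    E_photon = R_eV * (1.0 / n_f**2 - 1.0 / n_i**2)   # emission n_i -> n_f
    lam_nm = hc_eV_nm / E_photon
    print(f"{n_i} -> {n_f}:  {E_photon:.3f} eV,  {lam_nm:.1f} nm")
# The n_i = 3 line comes out near 656 nm, the familiar red Balmer-alpha line.
```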
The Bohr formula for energy levels did not agree as well with the observed pattern of emission spectra for species containing more than a single electron. However, it does give a reasonable fit, for example, to the Na atom spectra if one examines only transitions involving only the single 3s valence electron. Moreover, it can be greatly improved if one introduces a modification designed to treat the penetration of the Na atom’s 3s and higher orbitals within the regions of space occupied by the 1s, 2s, and 2p orbitals. Such a modification to the Bohr model is achieved by introducing the idea of a so-called quantum defect $\delta$ into the principal quantum number $n$ so that the expression for the $n$-dependence of the orbital energies changes to $E = \dfrac{-R}{(n-\delta)^2} \tag{1.13}$

Example 1.1

For example, choosing $\delta$ equal to 0.41, 1.37, 2.23, 3.19, or 4.13 for Li, Na, K, Rb, and Cs, respectively, in this so-called Rydberg formula, one finds decent agreement with the $n$-dependence of the energy spacings of the singly excited valence states of these atoms. The fact that $\delta$ is larger for Na than for Li and largest for Cs reflects the fact that the 3s orbital of Na penetrates the 1s, 2s, and 2p shells while the 2s orbital of Li penetrates only the 1s shell and the 6s orbital of Cs penetrates the $n =$ 1, 2, 3, 4, and 5 shells. It turns out this Rydberg formula can also be applied to certain electronic states of molecules. In particular, for closed-shell cations such as $NH_4^+$, $H_3O^+$, protonated alcohols and protonated amines (even on side chains of amino acids), an electron can be attached into a so-called Rydberg orbital to form corresponding neutral radicals such as $NH_4$, $H_3O$, $R-NH_3$, or $R-OH_2$. For example, in $NH_4$, the electron is bound to an underlying $NH_4^+$ cation core. The lowest-energy state of this Rydberg species is often labeled 3s because $NH_4^+$ is isoelectronic with the $Na^+$ cation, which binds an electron in its 3s orbital in its ground state. As in the cases of alkali atoms, these Rydberg molecules also possess excited electronic states. For example, the $NH_4$ radical has states labeled 3p, 3d, 4s, 4p, 4d, 4f, etc. By making an appropriate choice of the quantum defect parameter $\delta$, the energy spacings among these states can be fit reasonably well to the Rydberg formula (Equation 1.13). In Figure 1.3.a several Rydberg orbitals of $NH_4$ are shown. These Rydberg orbitals can be quite large (their sizes scale as $n^2$), clearly have the s, p, or d angular shapes, and possess the expected number of radial nodes. However, for molecular Rydberg orbitals, and unlike atomic Rydberg orbitals, the three $p$, five $d$, seven $f$, etc. orbitals are not degenerate; instead they are split in energy in a manner reflecting the symmetry of the underlying cation. For example, for $NH_4$, the three $3p$ orbitals are degenerate and belong to $t_2$ symmetry in the $T_d$ point group; the five $3d$ orbitals are split into three degenerate $t_2$ and two degenerate $e$ orbitals. So, the Bohr model works well for one-electron atoms or ions, and the quantum defect-modified Bohr equation describes reasonably well some states of alkali atoms and of Rydberg molecules. The primary reason for the breakdown of the Bohr formula is the neglect of electron-electron Coulomb repulsions in its derivation, which are qualitatively corrected for by using the quantum defect parameter for Rydberg atoms and molecules. Nevertheless, the success of the Bohr model made it clear that discrete emission spectra could only be explained by introducing the concept that not all orbits were allowed. Only special orbits that obeyed a constructive-interference condition were really accessible to the electron’s motions. This idea that not all energies were allowed, but only certain quantized energies could occur, was essential to achieving even a qualitative sense of agreement with the experimental fact that emission spectra were discrete. In summary, two experimental observations on the behavior of electrons that were crucial to the abandonment of Newtonian dynamics were the observations of electron diffraction and of discrete emission spectra. Both of these findings seem to suggest that electrons have some wave characteristics and that these waves have only certain allowed (i.e., quantized) wavelengths. So, now we have some idea about why Newton’s equations fail to account for the dynamical motions of light and small particles such as electrons and nuclei.
We see that extra conditions (e.g., the Bragg condition or constraints on the de Broglie wavelength) could be imposed to achieve some degree of agreement with experimental observation. However, we still are left wondering what equations can be applied to properly describe such motions and why the extra conditions are needed. It turns out that a new kind of equation based on combining wave and particle properties needed to be developed to address such issues. These are the so-called Schrödinger equations to which we now turn our attention. As I said earlier, no one has yet shown that the Schrödinger equation follows deductively from some more fundamental theory. That is, scientists did not derive this equation; they postulated it. Some idea of how the scientists of that era dreamed up the Schrödinger equation can be had by examining the time and spatial dependence that characterizes so-called traveling waves. It should be noted that the people who worked on these problems knew a great deal about waves (e.g., sound waves and water waves) and the equations they obeyed. Moreover, they knew that waves could sometimes display the characteristic of quantized wavelengths or frequencies (e.g., fundamentals and overtones in sound waves). They knew, for example, that waves in one dimension that are constrained at two points (e.g., a violin string held fixed at two ends) undergo oscillatory motion in space and time with characteristic frequencies and wavelengths. For example, the motion of the violin string just mentioned can be described as having an amplitude $A(x,t)$ at a position $x$ along its length at time $t$ given by $A(x,t) = A(x,0) \cos(2\pi \nu t), \tag{1.14}$ where $\nu$ is its oscillation frequency. The amplitude’s spatial dependence also has a sinusoidal dependence given by $A(x,0) = A \sin \left(\dfrac{2\pi x}{\lambda}\right) \tag{1.15}$ where $\lambda$ is the crest-to-crest length of the wave. Two examples of such waves in one dimension are shown in Figure 1.4. In these cases, the string is fixed at $x = 0$ and at $x = L$, so the wavelengths belonging to the two waves shown are $\lambda = 2L$ and $\lambda = L$. If the violin string were not clamped at $x = L$, the waves could have any value of $\lambda$. However, because the string is attached at $x = L$, the allowed wavelengths are quantized to obey $\lambda = \dfrac{2L}{n}, \tag{1.16}$ where $n = 1, 2, 3, 4, \cdots$. The equation that such waves obey, called the wave equation, reads: $\dfrac{\partial^2A(x,t)}{\partial t^2} = c^2 \dfrac{\partial^2A}{\partial x^2} \tag{1.17}$ where $c$ is the speed at which the wave travels. This speed depends on the composition of the material from which the violin string is made; stiff string material produces waves with higher speeds than does softer material. Using the earlier expressions for the $x$- and $t$-dependences of the wave $A(x,t)$, we find that the wave’s frequency and wavelength are related by the so-called dispersion equation: $\nu^2 = \left(\dfrac{c}{\lambda}\right)^2, \tag{1.18}$ or $c = \lambda \nu. \tag{1.19}$ This relationship implies, for example, that an instrument string made of a very stiff material (large $c$) will produce a higher frequency tone for a given wavelength (i.e., a given value of $n$) than will a string made of a softer material (smaller $c$). For waves moving on the surface of, for example, a rectangular two-dimensional surface of lengths $L_x$ and $L_y$, one finds $A(x,y,t) = \sin \left(\dfrac{n_x \pi x}{L_x}\right) \sin\left(\dfrac{n_y \pi y}{L_y}\right) \cos\left(2\pi \nu t\right). \tag{1.20}$
Hence, the waves are quantized in two dimensions because their wavelengths must be constrained to cause $A(x,y,t)$ to vanish at $x = 0$ and $x = L_x$ as well as at $y = 0$ and $y = L_y$ for all times $t$. It is important to note, in closing this discussion of waves on strings and surfaces, that it is not being a solution to the Schrödinger equation that results in quantization of the wavelengths. Instead, it is the condition that the wave vanish at the boundaries that generates the quantization. You will see this trend time and again throughout this text; when a wave function is subject to specific constraints at its inner or outer boundary (or both), quantization will result; if these boundary conditions are not present, quantization will not occur. Let us now return to the issue of waves that describe electrons moving. The pioneers of quantum mechanics examined functional forms similar to those shown above. For example, forms such as $A = \exp[\pm 2\pi i(\nu t - x/\lambda)]$ were considered because they correspond to periodic waves that evolve in $x$ and $t$ under no external $x$- or $t$-dependent forces. Noticing that $\dfrac{d^2A}{dx^2} = - \left(\dfrac{2\pi}{\lambda} \right)^2 A \tag{1.21}$ and using the de Broglie hypothesis $\lambda = h/p$ in the above equation, one finds $\dfrac{d^2A}{dx^2} = - p^2 \Big(\dfrac{2\pi}{h}\Big)^2 A. \tag{1.22}$ If $A$ is supposed to relate to the motion of a particle of momentum $p$ under no external forces (since the waveform corresponds to this case), $p^2$ can be related to the energy $E$ of the particle by $E = p^2/2m$. So, the equation for $A$ can be rewritten as: $\dfrac{d^2A}{dx^2} = - 2m E \Big(\dfrac{2\pi}{h}\Big)^2 A, \tag{1.23}$ or, alternatively, $- \Big(\dfrac{h}{2\pi}\Big)^2 \dfrac{1}{2m} \dfrac{d^2A}{dx^2} = E A. \tag{1.24}$ Returning to the time-dependence of $A(x,t)$ and using $\nu = E/h$, one can also show that $i \Big(\dfrac{h}{2\pi}\Big) \dfrac{dA}{dt} = E A, \tag{1.25}$ which, using the first result, suggests that $i \Big(\dfrac{h}{2\pi}\Big) \dfrac{dA}{dt} = - \Big(\dfrac{h}{2\pi}\Big)^2 \dfrac{1}{2m} \dfrac{d^2A}{dx^2}. \tag{1.26}$ This is a primitive form of the Schrödinger equation that we will address in much more detail below. Briefly, what is important to keep in mind is that the use of the de Broglie and Planck/Einstein connections ($\lambda = h/p$ and $E = h\nu$), both of which involve the constant $h$, produces suggestive connections between $i \Big(\dfrac{h}{2\pi}\Big) \dfrac{d}{dt} \text{ and } E \tag{1.27}$ and between $p^2 \text {and } - \Big(\dfrac{h}{2\pi}\Big)^2 \dfrac{d^2}{dx^2} \tag{1.28}$ or, alternatively, between $p \text{ and } - i \Big(\dfrac{h}{2\pi}\Big) \dfrac{d}{dx}.\tag{1.29}$ These connections between physical properties (energy $E$ and momentum $p$) and differential operators are some of the unusual features of quantum mechanics. The above discussion about waves and quantized wavelengths, as well as the observations about the wave equation and differential operators, are not meant to provide or even suggest a derivation of the Schrödinger equation. Again, the scientists who invented quantum mechanics did not derive its working equations. Instead, the equations and rules of quantum mechanics have been postulated and designed to be consistent with laboratory observations. My students often find this to be disconcerting because they are hoping and searching for an underlying fundamental basis from which the basic laws of quantum mechanics follow logically.
I try to remind them that this is not how theory works. Instead, one uses experimental observation to postulate a rule or equation or theory, and one then tests the theory by making predictions that can be tested by further experiments. If the theory fails, it must be refined, and this process continues until one has a better and better theory. In this sense, quantum mechanics, with all of its unusual mathematical constructs and rules, should be viewed as arising from the imaginations of scientists who tried to invent a theory that was consistent with experimental data and which could be used to predict things that could then be tested in the laboratory. Thus far, this theory has proven to be reliable, but, of course, we are always searching for a new and improved theory that describes how small light particles move. If it helps you to be more accepting of quantum theory, I should point out that the quantum description of particles reduces to the classical Newton description under certain circumstances. In particular, when treating heavy particles (e.g., macroscopic masses and even heavier atoms), it is often possible to use Newton dynamics. Soon, we will discuss in more detail how the quantum and classical dynamics sometimes coincide (in which case one is free to use the simpler Newton dynamics). So, let us now move on to look at this strange Schrödinger equation that we have been digressing about for so long.
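Before doing so, readers who like to check algebra can verify the traveling-wave manipulations of the preceding section symbolically. This is only a sketch: the waveform, the de Broglie and Planck/Einstein substitutions, and the free-particle relation $E = p^2/2m$ are exactly the assumptions stated above.

```python
import sympy as sp

# For A = exp(2*pi*i*(x/lambda - nu*t)) with lambda = h/p, nu = E/h and
# E = p^2/(2m), the two sides of the primitive Schrodinger equation
# i*hbar dA/dt = -(hbar^2/2m) d^2A/dx^2 must agree.
x, t, p, m, h = sp.symbols('x t p m h', positive=True)
hbar = h / (2 * sp.pi)
E = p**2 / (2 * m)
lam, nu = h / p, E / h
A = sp.exp(2 * sp.pi * sp.I * (x / lam - nu * t))

lhs = sp.I * hbar * sp.diff(A, t)
rhs = -hbar**2 / (2 * m) * sp.diff(A, x, 2)
print(sp.simplify(lhs - rhs))   # prints 0: the identity holds
```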
It has been well established that electrons moving in atoms and molecules do not obey the classical Newton equations of motion. People long ago tried to treat electronic motion classically, and found that features observed clearly in experimental measurements simply were not consistent with such a treatment. Attempts were made to supplement the classical equations with conditions that could be used to rationalize such observations. For example, early workers required that the angular momentum $\textbf{L} = \textbf{r} \times \textbf{p}$ be allowed to assume only integer multiples of $h/2\pi$ (which is often abbreviated as $\hbar$), which can be shown to be equivalent to the Bohr postulate $n \lambda = 2\pi r$. However, until scientists realized that a new set of laws, those of quantum mechanics, applied to light, microscopic particles, a wide gulf existed between laboratory observations of molecule-level phenomena and the equations used to describe such behavior. Quantum mechanics is cast in a language that is not familiar to most students of chemistry who are examining the subject for the first time. Its mathematical content and how it relates to experimental measurements both require a great deal of effort to master. With these thoughts in mind, I have organized this material in a manner that first provides a brief introduction to the two primary constructs of quantum mechanics: operators, and wave functions that obey a Schrödinger equation. Next, I demonstrate the application of these constructs to several chemically relevant model problems. By learning the solutions of the Schrödinger equation for a few model systems, the student can better appreciate the treatment of the fundamental postulates of quantum mechanics as well as their relation to experimental measurement, for which the wave functions of the known model problems offer important interpretations.

Operators

Each physically measurable quantity has a corresponding operator. The eigenvalues of the operator tell the only values of the corresponding physical property that can be observed in an experimental probe of that property. Some operators have a continuum of eigenvalues, but others have only discrete quantized eigenvalues. Any experimentally measurable physical quantity $F$ (e.g., energy, dipole moment, orbital angular momentum, spin angular momentum, linear momentum, kinetic energy) has a classical mechanical expression in terms of the Cartesian positions $\{q_i\}$ and momenta $\{p_i\}$ of the particles that comprise the system of interest. Each such classical expression is assigned a corresponding quantum mechanical operator $\textbf{F}$ formed by replacing each momentum $p_j$ in the classical form by the differential operator $-i\hbar \dfrac{\partial}{\partial q_j} \tag{1.1}$ and leaving the coordinates $q_j$ that appear in $F$ untouched. If one is working with a classical quantity expressed in terms of curvilinear coordinates, it is important that this quantity first be rewritten in Cartesian coordinates. The replacement of the Cartesian momenta by $-i\hbar\dfrac{\partial}{\partial q_j}$ can then be made and the resultant expression can be transformed back to the curvilinear coordinates if desired.

Example 1.2.1

For example, the classical energy of $N$ particles (with masses $m_l$) moving in a potential field containing both quadratic and linear coordinate-dependence can be written as $F=\sum_{l=1}^N \left(\dfrac{p_l^2}{2m_l} + \dfrac{1}{2} k(q_l-q_l^0)^2 + L(q_l-q_l^0)\right). \tag{1.2}$
The quantum mechanical operator associated with this $F$ is $\textbf{F}=\sum_{l=1}^N \left(- \dfrac{\hbar^2}{2m_l} \dfrac{\partial^2}{\partial{q_l^2}} + \dfrac{1}{2} k(q_l-q_l^0)^2 + L(q_l-q_l^0) \right).\tag{1.3}$ Such an operator would occur when, for example, one describes the sum of the kinetic energies of a collection of particles (the first term in Eq. 1.3), plus the sum of "Hooke's Law" parabolic potentials (the second term in Eq. 1.3), and the interactions of the particles with an externally applied field (the last term in Eq. 1.3) whose potential energy varies linearly as the particles move away from their equilibrium positions $\{q_l^0\}$. Let us try more examples. The sum of the $z$-components of angular momenta (recall that vector angular momentum $\textbf{L}$ is defined as $\textbf{L} = \textbf{r} \times \textbf{p}$) of a collection of $N$ particles has the following classical expression $F=\sum_{j=1}^N (x_jp_{yj} - y_jp_{xj}),\tag{1.4}$ and the corresponding operator is $\textbf{F}=-i\hbar \sum_{j=1}^N (x_j\dfrac{\partial}{\partial{y_j}} - y_j\dfrac{\partial}{\partial{x_j}}). \tag{1.5}$ If one transforms these Cartesian coordinates and derivatives into polar coordinates, the above expression reduces to $\textbf{F} = -i \hbar \sum_{j=1}^N \dfrac{\partial}{\partial{\phi_j}} \tag{1.6}$ where $\phi_j$ is the azimuthal angle of the $j^{th}$ particle. The $x$-component of the dipole moment for a collection of $N$ particles has a classical form of $F= \sum_{j=1}^N Z_je \, x_j,\tag{1.7}$ for which the quantum operator is $\textbf{F}= \sum_{j=1}^N Z_je \, x_j, \tag{1.8}$ where $Z_je$ is the charge on the $j^{th}$ particle. Notice that in this case, classical and quantum forms are identical because $\textbf{F}$ contains no momentum operators. Remember, the mapping from $F$ to $\textbf{F}$ is straightforward only in terms of Cartesian coordinates. To map a classical function $F$, given in terms of curvilinear coordinates (even if they are orthogonal), into its quantum operator is not at all straightforward. The mapping can always be done in terms of Cartesian coordinates, after which a transformation of the resulting coordinates and differential operators to a curvilinear system can be performed. The relationship of these quantum mechanical operators to experimental measurement lies in the eigenvalues of the quantum operators. Each such operator has a corresponding eigenvalue equation $\textbf{F} \chi_j = \alpha_j \chi_j \tag{1.9}$ in which the $\chi_j$ are called eigenfunctions and the scalar numbers $\alpha_j$ are called eigenvalues. All such eigenvalue equations are posed in terms of a given operator ($\textbf{F}$ in this case) and those functions $\{\chi_j\}$ that $\textbf{F}$ acts on to produce the function back again but multiplied by a constant (the eigenvalue). Because the operator $\textbf{F}$ usually contains differential operators (coming from the momentum), these equations are differential equations. Their solutions $\chi_j$ depend on the coordinates that $\textbf{F}$ contains as differential operators. An example will help clarify these points. The differential operator $d/dy$ acts on what functions (of $y$) to generate the same function back again but multiplied by a constant? The answer is functions of the form $\exp(ay)$ since $\dfrac{d (\exp(ay))}{dy} = a \exp(ay). \tag{1.10}$ So, we say that $\exp(ay)$ is an eigenfunction of $d/dy$ and $a$ is the corresponding eigenvalue.
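These eigenvalue relations can also be checked symbolically. A minimal sketch (using sympy; the two functions are the ones discussed above, namely $\exp(ay)$ for $d/dy$ and $\exp(im\phi)$ for the azimuthal angular momentum operator of Eq. 1.6):

```python
import sympy as sp

y, a, phi, m, hbar = sp.symbols('y a phi m hbar')

# exp(a*y) is an eigenfunction of d/dy with eigenvalue a:
f = sp.exp(a * y)
print(sp.simplify(sp.diff(f, y) / f))                    # prints a

# exp(i*m*phi) is an eigenfunction of -i*hbar*d/dphi with eigenvalue m*hbar:
g = sp.exp(sp.I * m * phi)
print(sp.simplify((-sp.I * hbar * sp.diff(g, phi)) / g)) # prints hbar*m
```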
As I will discuss in more detail shortly, the eigenvalues of the operator $\textbf{F}$ tell us the only values of the physical property corresponding to the operator $\textbf{F}$ that can be observed in a laboratory measurement. Some $\textbf{F}$ operators that we encounter possess eigenvalues that are discrete or quantized. For such properties, laboratory measurement will result in only those discrete values. Other $\textbf{F}$ operators have eigenvalues that can take on a continuous range of values; for these properties, laboratory measurement can give any value in this continuous range. An important characteristic of the quantum mechanical operators formed as discussed above for any measurable property is the fact that they are Hermitian. An operator $\textbf{F}$ that acts on coordinates denoted $q$ is Hermitian if $\int \phi_I^* \textbf{F} \phi_J dq = \int [\textbf{F} \phi_I]^* \phi_J dq \tag{1.11}$ or, equivalently, $\int \phi_I^* \textbf{F} \phi_J dq = \left[\int \phi_J^* \textbf{F} \phi_I dq\right]^* \tag{1.12}$ for any functions $\phi_I(q)$ and $\phi_J(q)$. It is easy to show that the operator corresponding to any power of the coordinate $q$ itself obeys this identity, but what about the corresponding momentum operator $-i\hbar \dfrac{\partial}{\partial q}$? Let’s take the left hand side of the above identity for $\textbf{F} = -i \hbar \dfrac{\partial}{\partial q} \tag{1.13}$ and rewrite it using integration by parts as follows: $\int_{-\infty}^{+\infty}\phi_I^*(q) \left[-i\hbar \frac{\partial \phi_J(q)}{\partial q}\right]dq = -i\hbar \int_{-\infty}^{+\infty}\phi_I^*(q) \frac{\partial \phi_J(q)}{\partial q}\,dq = -i\hbar \left\{-\int_{-\infty}^{+\infty}\frac{\partial \phi_I^*(q)}{\partial q}\phi_J(q)\, dq + \phi_I^*(\infty)\phi_J(\infty) - \phi_I^*(-\infty)\phi_J(-\infty)\right\}$ If the functions $\phi_I(q)$ and $\phi_J(q)$ are assumed to vanish at $\pm\infty$, the right-hand side of this equation can be rewritten as $i\hbar \int_{-\infty}^{+\infty}\frac{\partial\phi_I^*(q)}{\partial q}\phi_J(q)\, dq = \int_{-\infty}^{\infty} \left[-i\hbar \frac{\partial \phi_I(q)}{\partial q}\right]^*\phi_J(q)\, dq = \left[\int_{-\infty}^{\infty}\phi_J^*(q) \left[-i\hbar \frac{\partial \phi_I(q)}{\partial q}\right]dq\right]^* .$ So, $-i\hbar \dfrac{\partial}{\partial q}$ is indeed a Hermitian operator. Moreover, using the fact that $q_j$ and $p_j$ are Hermitian, one can show that any operator $\textbf{F}$ formed using the rules described above is also Hermitian. One thing you need to be aware of concerning the eigenfunctions of any Hermitian operator is that each pair of eigenfunctions $\psi_n$ and $\psi_{n'}$ belonging to different eigenvalues displays a property termed orthonormality. This property means that not only may $\psi_n$ and $\psi_{n'}$ each be normalized so their probability densities integrate to unity $1= \int |\psi_n|^2 dx = \int |\psi_{n'}|^2 dx,\tag{1.14}$ but they are also orthogonal to each other $0 = \int \psi_n^* \psi_{n'} dx \tag{1.15}$ where the complex conjugate * of the first function appears only when the $\psi$ solutions contain imaginary components (e.g., the functions $\exp(im\phi)$, which are eigenfunctions of the $z$-component of angular momentum $-i \hbar \dfrac{\partial}{\partial\phi}$). The orthogonality condition can be viewed as similar to the condition of two vectors $\textbf{v}_1$ and $\textbf{v}_2$ being perpendicular, in which case their scalar (sometimes called dot) product vanishes $\textbf{v}_1 \cdot \textbf{v}_2 = 0$. I want you to keep this property in mind because you will soon see that it is a characteristic of all eigenfunctions of any Hermitian operator.
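The Hermiticity of the momentum operator can also be seen numerically. In the sketch below (assumptions: a uniform grid and functions that vanish at the grid boundary, mirroring the boundary conditions used in the integration by parts above), the centered-difference matrix representing $-i\hbar\,\partial/\partial q$ equals its own conjugate transpose:

```python
import numpy as np

n, dq, hbar = 200, 0.05, 1.0
# Centered-difference first-derivative matrix with zero boundary values:
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dq)
P = -1j * hbar * D                     # discrete analogue of -i*hbar d/dq
print(np.allclose(P, P.conj().T))      # True: the matrix is Hermitian
```

The real antisymmetric derivative matrix becomes Hermitian once multiplied by $-i\hbar$, which is the discrete counterpart of the integration-by-parts argument.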
It is common to write the integrals displaying the normalization and orthogonality conditions in the following so-called Dirac notation $1 = \langle \psi_n | \psi_n\rangle$ and $0 = \langle \psi_n | \psi_{n'}\rangle ,$ where the $| \rangle$ and $\langle |$ symbols represent $\psi$ and $\psi^*$, respectively, and putting the two together in the $\langle | \rangle$ construct implies the integration over the variables that $\psi$ depends upon. The Hermitian character of an operator $\textbf{F}$ means that this operator forms a Hermitian matrix when placed between pairs of functions and the coordinates are integrated over. For example, the matrix representation of an operator $\textbf{F}$ when acting on a set of functions denoted $\{\phi_J\}$ is: $F_{I,J} = \langle \phi_I | \textbf{F}|\phi_J\rangle = \int \phi_I^* \textbf{F} \phi_J dq.$ For all of the operators formed following the rules stated earlier, one finds that these matrices have the following property: $F_{I,J} = F_{J,I}^*$ which makes the matrices what we call Hermitian. If the functions upon which $\textbf{F}$ acts and $\textbf{F}$ itself have no imaginary parts (i.e., are real), then the matrices turn out to be symmetric: $F_{I,J} = F_{J,I} .$ The importance of the Hermiticity or symmetry of these matrices lies in the fact that it can be shown that such matrices have all real (i.e., not complex) eigenvalues and have eigenvectors that are orthogonal (or, in the case of degenerate eigenvalues, can be chosen to be orthogonal). Let’s see how these conditions follow from the Hermiticity property. If the operator $\textbf{F}$ has two eigenfunctions $\psi_1$ and $\psi_2$ having eigenvalues $\lambda_1$ and $\lambda_2$, respectively, then $\textbf{F} \psi_1 = \lambda_1 \psi_1.$ Multiplying this equation on the left by $\psi_2^*$ and integrating over the coordinates (denoted $q$) that $\textbf{F}$ acts on gives $\int \psi_2^*\textbf{F} \psi_1 dq = \lambda_1 \int \psi_2^*\psi_1 dq.$ The Hermitian nature of $\textbf{F}$ allows us to also write $\int \psi_2^*\textbf{F} \psi_1 dq = \int ( \textbf{F} \psi_2)^* \psi_1 dq,$ which, because $\textbf{F} \psi_2 = \lambda_2 \psi_2,$ gives $\lambda_1 \int \psi_2^*\psi_1 dq = \int \psi_2^*\textbf{F} \psi_1 dq = \int ( \textbf{F} \psi_2)^* \psi_1 dq = \lambda_2 \int \psi_2^*\psi_1 dq.$ If $\lambda_1$ is not equal to $\lambda_2$, the only way the left-most and right-most terms in this equality can be equal is if $\int \psi_2^*\psi_1 dq = 0,$ which means the two eigenfunctions are orthogonal. If the two eigenfunctions $\psi_1$ and $\psi_2$ have equal eigenvalues, the above derivation can still be used to show that $\psi_1$ and $\psi_2$ are orthogonal to the other eigenfunctions $\{\psi_3, \psi_4,$ etc.$\}$ of $\textbf{F}$ that have different eigenvalues. For the eigenfunctions $\psi_1$ and $\psi_2$ that are degenerate (i.e., have equal eigenvalues), we cannot show that they are orthogonal (because they need not be so). However, because any linear combination of these two functions is also an eigenfunction of $\textbf{F}$ having the same eigenvalue, we can always choose a combination that makes $\psi_1$ and $\psi_2$ orthogonal to one another.
Finally, for any given eigenfunction $\psi_1$, we have $\int \psi_1^*\textbf{F} \psi_1 dq = \lambda_1 \int \psi_1^*\psi_1 dq .$ However, the Hermitian character of $\textbf{F}$ allows us to rewrite the left hand side of this equation as $\int \psi_1^*\textbf{F} \psi_1 dq = \int [\textbf{F}\psi_1]^*\psi_1 dq = [\lambda_1]^* \int \psi_1^*\psi_1 dq.$ These two equations can only remain valid if $[\lambda_1]^* = \lambda_1,$ which means that $\lambda_1$ is a real number (i.e., has no imaginary part). So, all quantum mechanical operators have real eigenvalues (this is good since these eigenvalues are what can be measured in any experimental observation of that property) and can be assumed to have orthogonal eigenfunctions. It is important to keep these facts in mind because we make use of them many times throughout this text.

Wave functions

The eigenfunctions of a quantum mechanical operator depend on the coordinates upon which the operator acts. The particular operator that corresponds to the total energy of the system is called the Hamiltonian operator. The eigenfunctions of this particular operator are called wave functions. A special case of an operator corresponding to a physically measurable quantity is the Hamiltonian operator $H$ that relates to the total energy of the system. The energy eigenstates $\Psi$ of the system are functions of the coordinates $\{q_j\}$ that $H$ depends on and of time $t$. The function $|\Psi(q_j,t)|^2 = \Psi^*\Psi$ gives the probability density for observing the coordinates at the values $q_j$ at time $t$. For a many-particle system such as the $H_2O$ molecule, the wave function depends on many coordinates. For $H_2O$, it depends on the $x$, $y$, and $z$ (or $r$, $\theta$, and $\phi$) coordinates of the ten electrons and the $x$, $y$, and $z$ (or $r$, $\theta$, and $\phi$) coordinates of the oxygen nucleus and of the two protons; a total of thirty-nine coordinates appear in $\Psi$. If one is interested in what the probability distribution is for finding the corresponding momenta $p_j$ at time $t$, the wave function $\Psi(q_j, t)$ has to first be written as a combination of the eigenfunctions of the momentum operators $-i\hbar \dfrac{\partial}{\partial q_j}$. Expressing $\Psi(q_j,t)$ in this manner is possible because the momentum operator is Hermitian and it can be shown that the eigenfunctions of any Hermitian operator form a complete set of functions. The momentum operator’s eigenfunctions are $\frac{1}{\sqrt{2\pi\hbar}} \exp(ip_j q_j/\hbar),$ and they obey $-i\hbar \dfrac{\partial}{\partial q_j} \frac{1}{\sqrt{2\pi\hbar}} \exp(i p_j q_j/\hbar) = p_j \frac{1}{\sqrt{2\pi\hbar}} \exp(ip_j q_j/\hbar).$ These eigenfunctions can also be shown to be orthonormal. Expanding $\Psi(q_j,t)$ in terms of these normalized momentum eigenfunctions gives $\Psi(q_j,t) = \int dp_j\, C(p_j,t)\, \frac{1}{\sqrt{2\pi\hbar}} \exp(ip_j q_j/\hbar).$ We can find the expansion coefficients $C(p_j,t)$ by multiplying the above equation by the complex conjugate of another (labeled $p_{j'}$) momentum eigenfunction and integrating over $q_j$: $C(p_{j'},t) = \int dq_j\, \frac{1}{\sqrt{2\pi\hbar}} \exp(-ip_{j'} q_j/\hbar)\, \Psi(q_j,t).$ The quantities $|C(p'_j,t)|^2$ then give the probability of finding momentum $p'_j$ at time $t$.
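Here is a numerical sketch of this expansion for a model one-dimensional wave function. The Gaussian wave packet, the grid, and the parameters are all illustrative assumptions: projecting $\Psi$ onto the momentum eigenfunctions with a fast Fourier transform gives a momentum distribution $|C(p)|^2$ peaked at the packet's mean momentum $\hbar k_0$.

```python
import numpy as np

hbar, k0, sigma = 1.0, 5.0, 1.0
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

# Model wave function: a Gaussian wave packet carrying momentum hbar*k0
psi = np.exp(1j * k0 * x - x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize on the grid

# Projection onto exp(i p x / hbar) eigenfunctions via the FFT:
C = np.fft.fftshift(np.fft.fft(psi)) * dx
p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
print("peak momentum =", p[np.argmax(np.abs(C)**2)])  # close to hbar*k0 = 5.0
```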
In classical mechanics, the coordinates $q_j$ and their corresponding momenta $p_j$ are functions of time. The state of the system is then described by specifying $q_j(t)$ and $p_j(t)$. In quantum mechanics, the concept that $q_j$ is known as a function of time is replaced by the concept of the probability density $|\Psi(q_j,t)|^2$ for finding coordinate $q_j$ at a particular value at a particular time, or the probability density $|C(p'_j,t)|^2$ for finding momentum $p'_j$ at time $t$. The Hamiltonian eigenstates are especially important in chemistry because many of the tools that chemists use to study molecules probe the energy states of the molecule. For example, most spectroscopic methods are designed to determine which energy state (electronic, vibrational, rotational, nuclear spin, etc.) a molecule is in. However, there are other experimental measurements that measure other properties (e.g., the $z$-component of angular momentum or the total angular momentum). As stated earlier, if the state of some molecular system is characterized by a wave function $\Psi$ that happens to be an eigenfunction of a quantum mechanical operator $\textbf{F}$, one can immediately say something about what the outcome will be if the physical property $F$ corresponding to the operator $\textbf{F}$ is measured. In particular, since $\textbf{F} \chi_j = \lambda_j \chi_j,$ where $\lambda_j$ is one of the eigenvalues of $\textbf{F}$, we know that the value $\lambda_j$ will be observed if the property $F$ is measured while the molecule is described by the wave function $\Psi = \chi_j$. In fact, once a measurement of a physical quantity $F$ has been carried out and a particular eigenvalue $\lambda_j$ has been observed, the system's wave function $\Psi$ becomes the eigenfunction $\chi_j$ that corresponds to that eigenvalue. That is, the act of making the measurement causes the system's wave function to become the eigenfunction of the property that was measured. This is what is meant when one hears that the act of making a measurement can change the state of the system in quantum mechanics. What happens if some other property $G$, whose quantum mechanical operator is $\textbf{G}$, is measured in a case where we have already determined $\Psi = \chi_j$? We know from what was said earlier that some eigenvalue $\mu_k$ of the operator $\textbf{G}$ will be observed in the measurement. But, will the molecule's wave function remain, after $G$ is measured, the eigenfunction $\Psi = \chi_j$ of $\textbf{F}$, or will the measurement of $G$ cause $\Psi$ to be altered in a way that makes the molecule's state no longer an eigenfunction of $\textbf{F}$? It turns out that if the two operators $\textbf{F}$ and $\textbf{G}$ obey the condition $\textbf{F} \textbf{G} = \textbf{G} \textbf{F},$ then, when the property $G$ is measured, the wave function $\Psi = \chi_j$ will remain unchanged. This property that the order of application of the two operators does not matter is called commutation; that is, we say the two operators commute if they obey this property.
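As a concrete illustration of operators that do not commute (a sketch; the test function $f(x)$ is arbitrary): applying $xp - px$, with $p = -i\hbar\, d/dx$, to any $f(x)$ returns $i\hbar f(x)$ rather than zero, so position and momentum fail this commutation condition.

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

# Momentum operator acting on a function g:
p = lambda g: -sp.I * hbar * sp.diff(g, x)

commutator = x * p(f) - p(x * f)      # (x p - p x) applied to f(x)
print(sp.simplify(commutator))        # prints I*hbar*f(x), not 0
```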
Let us see how this property leads to the conclusion about $\Psi$ remaining unchanged if the two operators commute. In particular, we apply the $\textbf{G}$ operator to the above eigenvalue equation from which we concluded that $\Psi = \chi_j$: $\textbf{G} \textbf{F} \chi_j = \textbf{G} \lambda_j \chi_j.$ Next, we use the commutation to re-write the left-hand side of this equation, and use the fact that $\lambda_j$ is a scalar number, to thus obtain: $\textbf{F} \textbf{G} \chi_j = \lambda_j \textbf{G} \chi_j.$ So, now we see that $\textbf{G}\chi_j$ itself is an eigenfunction of $\textbf{F}$ having eigenvalue $\lambda_j$. So, unless there are more than one eigenfunction of $\textbf{F}$ corresponding to the eigenvalue $\lambda_j$ (i.e., unless this eigenvalue is degenerate), $\textbf{G}\chi_j$ must itself be proportional to $\chi_j$. We write this proportionality conclusion as $\textbf{G} \chi_j = \mu_j \chi_j,$ which means that $\chi_j$ is also an eigenfunction of $\textbf{G}$. This, in turn, means that measuring the property $G$ while the system is described by the wave function $\Psi = \chi_j$ does not change the wave function; it remains $\chi_j$. If there are more than one function $\{\chi_{j_1}, \chi_{j_2}, \cdots, \chi_{j_M}\}$ that are eigenfunctions of $\textbf{F}$ having the same eigenvalue $\lambda_j$, then the relation $\textbf{F} \textbf{G} \chi_j = \lambda_j \textbf{G} \chi_j$ only allows us to conclude that $\textbf{G} \chi_j$ is some combination of these degenerate functions $\textbf{G} \chi_j = \sum_{k=1}^{M} C_k \chi_{j_k}.$ Below, I offer some examples that I hope will clarify what these rules mean and how they relate to laboratory measurements. In summary, when the operators corresponding to two physical properties commute, once one measures one of the properties (and thus causes the system to be an eigenfunction of that operator), subsequent measurement of the second operator will (if the eigenvalue of the first operator is not degenerate) produce a unique eigenvalue of the second operator and will not change the system wave function. If either of the two properties is subsequently measured (even over and over again), the wave function will remain unchanged and the value observed for the property being measured will remain the same as the original eigenvalue observed. However, if the two operators do not commute, one simply cannot reach the above conclusions. In such cases, measurement of the property corresponding to the first operator will lead to one of the eigenvalues of that operator and cause the system wave function to become the corresponding eigenfunction. However, subsequent measurement of the second operator will produce an eigenvalue of that operator, but the system wave function will be changed to become an eigenfunction of the second operator and thus no longer the eigenfunction of the first. I think an example will help clarify this discussion. Let us consider the following orbital angular momentum operators for $N$ particles $\textbf{L} = \sum_{j=1}^N (\textbf{r}_j \times \textbf{p}_j)$ whose Cartesian components are $\textbf{L}_z = -i\hbar \sum_{j=1}^{N} \Big(x_j \frac{\partial}{\partial y_j} - y_j \frac{\partial}{\partial x_j}\Big)$ $\textbf{L}_x = -i\hbar \sum_{j=1}^{N} \Big(y_j \frac{\partial}{\partial z_j} - z_j \frac{\partial}{\partial y_j}\Big)$ $\textbf{L}_y = -i\hbar \sum_{j=1}^{N} \Big(z_j \frac{\partial}{\partial x_j} - x_j \frac{\partial}{\partial z_j}\Big)$ and $\textbf{L}^2 = \textbf{L}_x^2 + \textbf{L}_y^2 +\textbf{L}_z^2$ It turns out that the operator $\textbf{L}^2$ can be shown to commute with any one of $\textbf{L}_z$, $\textbf{L}_x$, or $\textbf{L}_y$, but $\textbf{L}_z$, $\textbf{L}_x$, and $\textbf{L}_y$ do not commute with one another (we will discuss these operators in considerably more detail in Chapter 2, Section 2.7; for now, please accept these statements). Let us assume a measurement of $\textbf{L}_z$ is carried out and one obtains the value $2\hbar$. Thus far, all one knows is that the system can be described by a wave function that is some combination of $D$, $F$, $G$, $H$, etc. angular momentum functions $|L, m=2\rangle$ having different $L$-values but all having $m = 2$: $\Psi = \sum_{L \ge 2} C_L |L, m=2\rangle ,$ but one does not know the amplitudes $C_L$ telling how much a given $L$-value contributes to $\Psi$.
One can express $\Psi$ as such a linear combination because the Hermitian quantum mechanical operators formed as described above can be shown to possess complete sets of eigenfunctions; this means that any function (of the appropriate variables) can be written as a linear combination of these eigenfunctions, as done above. If one subsequently carries out a measurement of $\textbf{L}^2$, the fact that $\textbf{L}^2$ and $\textbf{L}_z$ commute means that this second measurement will not alter the fact that $\Psi$ contains only contributions with $m = 2$, but it will result in observing only one specific $L$-value. The probability of observing any particular $L$-value will be given by $|C_L|^2$. Once this measurement is realized, the wave function will contain only terms having that specific $L$-value and $m = 2$. For example, if $L = 3$ is found, we know the wave function has $L = 3$ and $m = 2$, so we know it is an $F$-symmetry function with $m = 2$, but we don't know any more. That is, we don't know if it is an $n = 4, 5, 6,$ etc. $F$-function. What now happens if we make a measurement of $\textbf{L}_x$ when the system is in the $L = 3$, $m=2$ state (recall, this $m = 2$ is a value of the $\textbf{L}_z$ component of angular momentum)? Because $\textbf{L}_x$ and $\textbf{L}^2$ commute, the measurement of $\textbf{L}_x$ will not alter the fact that $\Psi$ contains only $L = 3$ components. However, because $\textbf{L}_x$ and $\textbf{L}_z$ do not commute, we cannot assume that $\Psi$ is an eigenfunction of $\textbf{L}_x$; it will be a combination of eigenfunctions of $\textbf{L}^2$ having $L = 3$ but having $m$-values between $-3$ and $3$, with $m$ now referring to the eigenvalue of $\textbf{L}_x$ (no longer to $\textbf{L}_z$) $\Psi = \sum_{m=-3}^3 C_m |L=3, m\rangle .$ When $\textbf{L}_x$ is measured, the value $m\hbar$ will be found with probability $|C_m|^2$, after which the wave function will be the $|L=3, m\rangle$ eigenfunction of $\textbf{L}^2$ and $\textbf{L}_x$ (and no longer an eigenfunction of $\textbf{L}_z$). I understand that these rules of quantum mechanics can be confusing, but I assure you they are based on laboratory observations about how atoms, ions, and molecules behave when subjected to state-specific measurements. So, I urge you to get used to the fact that quantum mechanics has rules and behaviors that may be new to you but need to be mastered by you.

The Schrödinger Equation

This equation is an eigenvalue equation for the energy or Hamiltonian operator; its eigenvalues provide the only allowed energy levels of the system.

The Time-Dependent Equation

If the Hamiltonian operator contains the time variable explicitly, one must solve the time-dependent Schrödinger equation.

Before moving deeper into understanding what quantum mechanics means, it is useful to learn how the wave functions $\psi$ are found by applying the basic equation of quantum mechanics, the Schrödinger equation, to a few exactly soluble model problems. Knowing the solutions to these 'easy' yet chemically very relevant models will then facilitate learning more of the details about the structure of quantum mechanics. The Schrödinger equation is a differential equation depending on time and on all of the spatial coordinates necessary to describe the system at hand (thirty-nine for the $H_2O$ example cited above).
It is usually written $H \Psi = i \hbar \dfrac{\partial \Psi}{\partial t}$ where $\Psi(q_j,t)$ is the unknown wave function and $H$ is the operator corresponding to the total energy of the system. This Hermitian operator is called the Hamiltonian and is formed, as stated above, by first writing down the classical mechanical expression for the total energy (kinetic plus potential) in Cartesian coordinates and momenta and then replacing all classical momenta $p_j$ by their quantum mechanical operators $p_j = - i\hbar\dfrac{\partial}{\partial q_j}$. For the $H_2O$ example used above, the classical mechanical energy of all thirteen particles is $E = \sum_{i=1}^{30} \frac{p_i^2}{2m_e} + \frac{1}{2} \sum_{j\ne i=1}^{10} \frac{e^2}{r_{i,j}} - \sum_{a=1}^3\sum_{i=1}^{10} \frac{Z_ae^2}{r_{i,a}} + \sum_{a=1}^9 \frac{p_a^2}{2m_a} + \frac{1}{2} \sum_{b\ne a=1}^3 \frac{Z_aZ_be^2}{r_{a,b}}$ where the indices $i$ and $j$ are used to label the ten electrons whose thirty Cartesian coordinates and thirty Cartesian momenta are {$q_i$} and {$p_i$}, and $a$ and $b$ label the three nuclei whose charges are denoted {$Z_a$} and whose nine Cartesian coordinates and nine Cartesian momenta are {$q_a$} and {$p_a$}. The electron and nuclear masses are denoted $m_e$ and {$m_a$}, respectively. The corresponding Hamiltonian operator is $H = \sum_{i=1}^{30} \Big[- \frac{\hbar^2}{2m_e} \frac{\partial^2}{\partial q_i^2} \Big]+ \frac{1}{2} \sum_{j\ne i=1}^{10} \frac{e^2}{r_{i,j}} - \sum_{a=1}^3\sum_{i=1}^{10} \frac{Z_ae^2}{r_{i,a}} + \sum_{a=1}^9 \Big[- \frac{\hbar^2}{2m_a} \frac{\partial^2}{\partial q_a^2} \Big]+ \frac{1}{2} \sum_{b\ne a=1}^3 \frac{Z_aZ_be^2}{r_{a,b}}$ where $r_{i,j}$, $r_{i,a}$, and $r_{a,b}$ denote the distances between electron pairs, electrons and nuclei, and nuclear pairs, respectively. Notice that $\textbf{H}$ is a second order differential operator in the space of the thirty-nine Cartesian coordinates that describe the positions of the ten electrons and three nuclei. It is a second order operator because the momenta appear in the kinetic energy as $p_j^2$ and $p_a^2$, and the quantum mechanical operator for each momentum $p = -i\hbar \dfrac{\partial}{\partial q}$ is of first order. The Schrödinger equation for the $H_2O$ example at hand then reads $\left\{\sum_{i=1}^{30} \Big[- \frac{\hbar^2}{2m_e} \frac{\partial^2}{\partial q_i^2} \Big] + \frac{1}{2} \sum_{j\ne i} \frac{e^2}{r_{i,j}} - \sum_{a=1}^3\sum_{i=1}^{10} \frac{Z_ae^2}{r_{i,a}} \right\} \Psi + \left\{\sum_{a=1}^9 \Big[- \frac{\hbar^2}{2m_a} \frac{\partial^2}{\partial q_a^2} \Big]+ \frac{1}{2} \sum_{b\ne a} \frac{Z_aZ_be^2}{r_{a,b}} \right\} \Psi = i \hbar \frac{\partial \Psi}{\partial t}$ The Hamiltonian in this case contains $t$ nowhere. An example of a case where $\textbf{H}$ does contain $t$ occurs, for example, when an oscillating electric field $E \cos(\omega t)$ along the $x$-axis interacts with the electrons and nuclei and a term $\sum_{a=1}^{3} Z_ae X_a E \cos(\omega t) - \sum_{j=1}^{10} e x_j E \cos(\omega t)$ is added to the Hamiltonian. Here, $X_a$ and $x_j$ denote the $x$ coordinates of the $a^{\rm th}$ nucleus and the $j^{\rm th}$ electron, respectively.
The Time-Independent Equation

If the Hamiltonian operator does not contain the time variable explicitly, one can solve the time-independent Schrödinger equation.

In cases where the classical energy, and hence the quantum Hamiltonian, do not contain terms that are explicitly time dependent (e.g., interactions with time-varying external electric or magnetic fields would add time-dependent terms to the above classical energy expression), the separation of variables technique can be used to reduce the Schrödinger equation to a time-independent equation. In such cases, $\textbf{H}$ is not explicitly time dependent, so one can assume that $\Psi(q_j,t)$ is of the form (n.b., this step is an example of the use of the separation of variables method to solve a differential equation) $\Psi(q_j,t) = \Psi(q_j) F(t).$ Substituting this ansatz into the time-dependent Schrödinger equation gives $\Psi(q_j) i\hbar \frac{\partial F}{\partial t} = F(t) \textbf{H}\Psi(q_j) .$ Dividing by $\Psi(q_j) F(t)$ then gives $F^{-1} \Big(i\hbar \frac{\partial F}{\partial t}\Big) = \Psi^{-1} (\textbf{H}\Psi(q_j) ).$ Since $F(t)$ is only a function of time $t$, and $\Psi(q_j)$ is only a function of the spatial coordinates {$q_j$}, and because the left hand and right hand sides must be equal for all values of $t$ and of {$q_j$}, both the left and right hand sides must equal a constant. If this constant is called $E$, the two equations that are embodied in this separated Schrödinger equation read as follows: $H \Psi(q_j) = E\Psi(q_j),$ $i\hbar \frac{dF(t)}{dt} = E F(t).$ The first of these equations is called the time-independent Schrödinger equation; it is an eigenvalue equation in which one is asked to find functions that yield a constant multiple of themselves when acted on by the Hamiltonian operator. Such functions are called eigenfunctions of $\textbf{H}$ and the corresponding constants are called eigenvalues of $\textbf{H}$. For example, if $\textbf{H}$ were of the form $\textbf{H} = - \dfrac{\hbar^2}{2I}\dfrac{\partial^2}{\partial \phi^2}$, then functions of the form $\exp(i m\phi)$ would be eigenfunctions because $- \frac{\hbar^2}{2I} \frac{\partial^2}{\partial \phi^2} \exp(i m\phi) = \frac{m^2\hbar^2}{2I} \exp(i m\phi).$ In this case, $\dfrac{m^2\hbar^2}{2I}$ is the eigenvalue. In this example, the Hamiltonian contains the square of an angular momentum operator (recall earlier that we showed the $z$-component of angular momentum $L_z$ for a single particle is equal to $- i\hbar \dfrac{d}{d\phi}$). When the Schrödinger equation can be separated to generate a time-independent equation describing the spatial coordinate dependence of the wave function, the eigenvalue $E$ must be returned to the equation determining $F(t)$ to find the time-dependent part of the wave function. By solving $i\hbar \frac{dF(t)}{dt} = E F(t)$ once $E$ is known, one obtains $F(t) = \exp( -i Et/ \hbar),$ and the full wave function can be written as $\Psi(q_j,t) = \Psi(q_j) \exp (-i Et/\hbar).$ For the above example, the time dependence is expressed by $F(t) = \exp \Big( -i t { \frac{m^2 \hbar^2}{2I} }\frac{1}{\hbar}\Big).$ In such cases, the spatial probability density $|\Psi(q_j,t)|^2$ does not depend upon time because the product $\exp (-i Et/\hbar) \exp (i Et/\hbar)$ reduces to unity.
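The rotor example above is simple enough to verify symbolically. The following sketch (my own check, using sympy) confirms both the eigenvalue $\dfrac{m^2\hbar^2}{2I}$ and the separated time factor $F(t)$:

```python
import sympy as sp

phi, t = sp.symbols('phi t', real=True)
m = sp.symbols('m', integer=True)
hbar, Imom, E = sp.symbols('hbar I E', positive=True)

# Check that exp(i m phi) is an eigenfunction of H = -(hbar^2 / 2I) d^2/dphi^2
psi = sp.exp(sp.I * m * phi)
Hpsi = -hbar**2 / (2 * Imom) * sp.diff(psi, phi, 2)
print(sp.simplify(Hpsi / psi))  # -> hbar**2*m**2/(2*I), the eigenvalue

# Check that F(t) = exp(-i E t / hbar) solves i hbar dF/dt = E F
F = sp.exp(-sp.I * E * t / hbar)
print(sp.simplify(sp.I * hbar * sp.diff(F, t) - E * F))  # -> 0
```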
In summary, whenever the Hamiltonian does not depend on time explicitly, one can solve the time-independent Schrödinger equation first and then obtain the time dependence as $\exp(-i Et/\hbar)$ once the energy $E$ is known. In the case of molecular structure theory, it is a quite daunting task even to approximately solve the full Schrödinger equation because it is a partial differential equation depending on all of the coordinates of the electrons and nuclei in the molecule. For this reason, there are various approximations that one usually implements when attempting to study molecular structure using quantum mechanics. It should be noted that it is possible to prepare in the laboratory, even when the Hamiltonian contains no explicit time dependence, wave functions that are time dependent and that have time-dependent spatial probability densities. For example, one can prepare a state of the Hydrogen atom that is a superposition of the $2s$ and $2p_z$ wave functions $\Psi(r,t=0) = C_1 \psi_{2s} (r) +C_2 \psi_{2p_z} (r)$ where the two eigenstates obey $H \psi_{2s} (r) = E_{2s} \psi_{2s} (r)$ and $H \psi_{2p_z} (r) = E_{2p_z}\psi_{2p_z} (r).$ When $\textbf{H}$ does not contain $t$ explicitly, it is possible to then express $\Psi(r,t)$ in terms of $\Psi(r,t=0)$ as follows: $\Psi(r,t) = \exp\Big(-\dfrac{iHt}{\hbar}\Big)[ C_1 \psi_{2s} (r) +C_2 \psi_{2p_z} (r)] = \left[ C_1 \psi_{2s} (r) \exp\Big(\frac{-itE_{2s}}{\hbar}\Big)+C_2 \psi_{2p_z} (r) \exp\Big(\frac{-itE_{2p_z}}{\hbar}\Big)\right].$ This function, which is a superposition of $2s$ and $2p_z$ functions, does indeed obey the full time-dependent Schrödinger equation $\textbf{H} \Psi = i\hbar \dfrac{\partial \Psi}{\partial t}$. The probability of observing the system in the $2s$ state if a measurement capable of making this determination were carried out is $\left|C_1 \exp\Big(\frac{-itE_{2s}}{\hbar}\Big)\right|^2 = |C_1|^2$ and the probability of finding it in the $2p_z$ state is $\left|C_2 \exp\Big(\frac{-itE_{2p_z}}{\hbar}\Big)\right|^2 = |C_2|^2,$ both of which are independent of time. This does not mean that $\Psi$, or the spatial probability density that $\Psi$ describes, is time-independent, because the product $\left[C_1 \psi_{2s} (r) \exp\Big(\frac{-itE_{2s}}{\hbar}\Big)+C_2 \psi_{2p_z} (r)\exp\Big(\frac{-itE_{2p_z}}{\hbar}\Big)\right]^* \left[C_1 \psi_{2s} (r)\exp\Big(\frac{-itE_{2s}}{\hbar}\Big)+C_2 \psi_{2p_z} (r) \exp\Big(\frac{-itE_{2p_z}}{\hbar}\Big)\right]$ contains cross terms that depend on time. It is important to note that applying $\exp(-iHt/\hbar)$ to such a superposition state in the manner shown above, which then produces a superposition of states each of whose amplitudes carries its own time dependence, only works when $\textbf{H}$ has no time dependence. If $\textbf{H}$ were time-dependent, $i\hbar \dfrac{\partial}{\partial t}$ acting on $\exp(-iHt/\hbar) \Psi(r,t=0)$ would contain an additional factor involving $\dfrac{\partial\textbf{H}}{\partial t}$, as a result of which one would not have $\textbf{H} \Psi= i\hbar \dfrac{\partial\Psi}{\partial t}$.

Contributors and Attributions

Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry

Integrated by Tomoyuki Hayashi (UC Davis)
One of the most important approximations relating to applying quantum mechanics to molecules and molecular ions is known as the Born-Oppenheimer (BO) approximation. The basic idea behind this approximation involves realizing that in the full electrons-plus-nuclei Hamiltonian operator introduced above $H = \sum_i \left[- \dfrac{\hbar^2}{2m_e} \dfrac{\partial^2}{\partial q_i^2} \right]+ \dfrac{1}{2} \sum_{j\ne i} \dfrac{e^2}{r_{i,j}} - \sum_{a,i} \dfrac{Z_ae^2}{r_{i,a}} + \sum_a \left[- \dfrac{\hbar^2}{2m_a} \dfrac{\partial^2}{\partial q_a^2}\right]+ \dfrac{1}{2} \sum_{b\ne a} \dfrac{Z_aZ_b e^2}{r_{a,b}}$ the time scales with which the electrons and nuclei move are usually quite different. In particular, the heavy nuclei (i.e., even a H nucleus weighs nearly 2000 times what an electron weighs) move (i.e., vibrate and rotate) more slowly than do the lighter electrons. For example, typical bond vibrational motions occur over time scales of ca. $10^{-14}$ s, molecular rotations require 10-100 times as long, but electrons undergo periodic motions within their orbits on the $10^{-17}$ s timescale if they reside within core or valence orbitals. Thus, we expect the electrons to be able to promptly "adjust" their motions to the much more slowly moving nuclei. This observation motivates us to consider solving the Schrödinger equation for the movement of the electrons in the presence of fixed nuclei as a way to represent the fully adjusted state of the electrons at any fixed positions of the nuclei. Of course, we then have to have a way to describe the differences between how the electrons and nuclei behave in the absence of this approximation and how they move within the approximation. These differences give rise to so-called non-Born-Oppenheimer corrections, radiationless transitions, surface hops, and non-adiabatic transitions, which we will deal with later. It should be noted that this separation of time scales between fast electronic and slow vibration and rotation motions does not apply as well to, for example, Rydberg states of atoms and molecules. As discussed earlier, in such states, the electron in the Rydberg orbital has much lower speed and much larger radial extent than for typical core or valence orbitals. For this reason, corrections to the BO model are usually more important to make when dealing with Rydberg states. The electronic Hamiltonian that pertains to the motions of the electrons in the presence of clamped nuclei $H = \sum_i \left[- \dfrac{\hbar^2}{2m_e} \dfrac{\partial^2}{\partial q_i^2} \right]+ \dfrac{1}{2} \sum_{j\ne i} \dfrac{e^2}{r_{i,j}} - \sum_{a,i} \dfrac{Z_ae^2}{r_{i,a}} + \dfrac{1}{2} \sum_{b\ne a} \dfrac{Z_a Z_b e^2}{r_{a,b}}$ produces as its eigenvalues, through the equation $H \psi_K(q_j|q_a) = E_K (q_a) \psi_K(q_j|q_a),$ energies $E_K (q_a)$ that depend on where the nuclei are located (i.e., the {$q_a$} coordinates). As its eigenfunctions, one obtains what are called electronic wave functions {$\psi_K(q_i|q_a)$}, which also depend on where the nuclei are located. The energies $E_K(q_a)$ are what we usually call potential energy surfaces. An example of such a surface is shown in Figure 1.5. This surface depends on two geometrical coordinates $\{q_a\}$ and is a plot of one particular eigenvalue $E_J(q_a)$ vs. these two coordinates. Although this plot has more information on it than we shall discuss now, a few features are worth noting.
There appear to be three minima (i.e., points where the derivatives of $E_J$ with respect to both coordinates vanish and where the surface has positive curvature). These points correspond, as we will see toward the end of this introductory material, to geometries of stable molecular structures. The surface also displays two first-order saddle points (labeled transition structures A and B) that connect the three minima. These points have zero first derivative of $E_J$ with respect to both coordinates but have one direction of negative curvature. As we will show later, these points describe transition states that play crucial roles in the kinetics of transitions among the three stable geometries. Keep in mind that Figure 1.5 shows just one of the $E_J$ surfaces; each molecule has a ground-state surface (i.e., the one that is lowest in energy) as well as an infinite number of excited-state surfaces. Let's now return to our discussion of the BO model and ask what one does once one has such an energy surface in hand. The motions of the nuclei are subsequently, within the BO model, assumed to obey a Schrödinger equation in which $\displaystyle\sum_a \left[- \dfrac{\hbar^2}{2m_a} \dfrac{\partial^2}{\partial q_a^2} \right]+ \dfrac{1}{2} \sum_{b\ne a} \dfrac{Z_aZ_be^2}{r_{a,b}} + E_K(q_a)$ defines a rotation-vibration Hamiltonian for the particular energy state $E_K$ of interest. The rotational and vibrational energies and wave functions belonging to each electronic state (i.e., for each value of the index $K$ in $E_K(q_a)$) are then found by solving a Schrödinger equation involving this rotation-vibration Hamiltonian. This BO model forms the basis of much of how chemists view molecular structure and molecular spectroscopy. For example, as applied to formaldehyde $H_2C=O$, we speak of the singlet ground electronic state (with all electrons spin paired and occupying the lowest energy orbitals) and its vibrational and rotational states as well as the $n\rightarrow \pi^*$ and $\pi\rightarrow \pi^*$ electronic states and their vibrational and rotational levels. Although much more will be said about these concepts later in this text, the student should be aware of the concepts of electronic energy surfaces (i.e., the {$E_K(q_a)$}) and the vibration-rotation states that belong to each such surface. I should point out that the $3N$ Cartesian coordinates {$q_a$} used to describe the positions of the molecule's $N$ nuclei can be replaced by 3 Cartesian coordinates $(X,Y,Z)$ specifying the center of mass of the $N$ nuclei and $3N-3$ other so-called internal coordinates that can be used to describe the molecule's orientation (these coordinates appear in the rotational kinetic energy) and its bond lengths and angles (these coordinates appear in the vibrational kinetic and potential energies). When center-of-mass and internal coordinates are used in place of the $3N$ Cartesian coordinates, the Born-Oppenheimer energy surfaces {$E_K(q_a)$} are seen to depend only on the internal coordinates. Moreover, if the molecule's energy does not depend on its orientation (e.g., if it is moving freely in the gas phase), the {$E_K(q_a)$} will also not depend on the 3 orientational coordinates, but only on the $3N-6$ vibrational coordinates. Having been introduced to the concepts of operators, wave functions, the Hamiltonian and its Schrödinger equation, it is important to now consider several examples of the applications of these concepts.
The examples treated below were chosen to provide the reader with valuable experience in solving the Schrödinger equation; they were also chosen because they form the most elementary chemical models of electronic motions in conjugated molecules and in atoms, rotations of linear molecules, and vibrations of chemical bonds.

Your First Application of Quantum Mechanics - Motion of a Particle in One Dimension

This is a very important problem whose solutions chemists use to model a wide variety of phenomena. Let's begin by examining the motion of a single particle of mass $m$ in one direction, which we will call $x$, while under the influence of a potential denoted $V(x)$. The classical expression for the total energy of such a system is $E = \dfrac{p^2}{2m} + V(x)$, where $p$ is the momentum of the particle along the $x$-axis. To focus on specific examples, consider how this particle would move if $V(x)$ were of the forms shown in Figure 1.6, where the total energy $E$ is denoted by the position of the horizontal line.

The Classical Probability Density

I would like you to imagine what the probability density would be for this particle moving with total energy $E$ and with $V(x)$ varying as the above three plots illustrate. To conceptualize the probability density, imagine the particle to have a blinking lamp attached to it and think of this lamp blinking say 100 times for each time it takes for the particle to complete a full transit from its left turning point, to its right turning point and back to the former. The turning points $x_L$ and $x_R$ are the positions at which the particle, if it were moving under Newton's laws, would reverse direction (as the momentum changes sign) and turn around. These positions can be found by asking where the momentum goes to zero: $0 = p = \sqrt{2m(E-V(x))}.$ These are the positions where all of the energy appears as potential energy $E = V(x)$ and correspond in the above figures to the points where the dark horizontal lines touch the $V(x)$ plots as shown in the central plot. The probability density at any value of $x$ represents the fraction of time the particle spends at this value of $x$ (i.e., within $x$ and $x+dx$). Think of forming this density by allowing the blinking lamp attached to the particle to shed light on a photographic plate that is exposed to this light for many oscillations of the particle between $x_L$ and $x_R$. Alternatively, one can express the probability $P(x) dx$ that the particle spends between $x$ and $x + dx$ by dividing the spatial distance $dx$ by the velocity ($p/m$) of the particle at the point $x$: $P(x)dx = \dfrac{m}{\sqrt{2m(E-V(x))}}\; dx.$ Because $E$ is constant throughout the particle's motion, $P(x)$ will be small at $x$ values where the particle is moving quickly (i.e., where $V$ is low) and will be high where the particle moves slowly (where $V$ is high). So, the photographic plate will show a bright region where $V$ is high (because the particle moves slowly in such regions) and less brightness where $V$ is low. Note, however, that as $x$ approaches the classical turning points, the velocity approaches zero, so the above expression for $P(x)$ will approach infinity. It does not mean the probability of finding the particle at the turning point is infinite; it means that the probability density is infinite there. This divergence of $P(x)$ is a characteristic of the classical probability density that will be seen to be very different from the quantum probability density.
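One can make this classical density concrete with a short numerical sketch. The harmonic potential $V(x) = \frac{1}{2}m\omega^2 x^2$ used here is my own choice of example; $P(x)$ is computed from the formula just given:

```python
import numpy as np

# Classical probability density P(x) = m / sqrt(2m(E - V(x))) (up to normalization)
# for a harmonic potential V(x) = (1/2) m w^2 x^2 (my choice of example potential).
m, w, E = 1.0, 1.0, 0.5          # arbitrary units
xT = np.sqrt(2 * E / (m * w**2)) # turning points at x = -xT and x = +xT

x = np.linspace(-xT + 1e-6, xT - 1e-6, 2001)  # stay just inside the turning points
V = 0.5 * m * w**2 * x**2
P = m / np.sqrt(2 * m * (E - V))              # unnormalized density
P /= np.trapz(P, x)                           # normalize so that the integral of P is 1

# P(x) is smallest at x = 0 (particle fastest) and diverges near the turning points
print(P[len(x) // 2], P[0], P[-1])
```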
The bottom line is that the probability densities anticipated by analyzing the classical Newtonian dynamics of this one particle would appear as the histogram plots shown in Figure 1.7 illustrate. Where the particle has high kinetic energy (and thus lower $V(x)$), it spends less time and $P(x)$ is small. Where the particle moves slowly, it spends more time and $P(x)$ is larger. For the plot on the right, $V(x)$ is constant within the "box", so the speed is constant, hence $P(x)$ is constant for all $x$ values within this one-dimensional box. I ask that you keep these plots in mind because they are very different from what one finds when one solves the Schrödinger equation for this same problem. Also please keep in mind that these plots represent what one expects if the particle were moving according to classical Newtonian dynamics (which we know it is not!).

Quantum Treatment

To solve for the quantum mechanical wave functions and energies of this same kind of problem, we first write the Hamiltonian operator as discussed above by replacing $p$ by $-i\hbar \dfrac{d}{dx}$: $H = -\dfrac{ \hbar^2}{2m} \dfrac{d^2}{dx^2} + V(x).$ We then try to find solutions $\psi(x)$ to $H\psi = E\psi$ that obey certain conditions. These conditions are related to the fact that $|\psi (x)|^2$ is supposed to be the probability density for finding the particle between $x$ and $x+dx$. To keep things as simple as possible, let's focus on the box potential $V$ shown in the right side of Figure 1.7. This potential, expressed as a function of $x$, is: $V(x) = \infty$ for $x< 0$ and for $x> L$; $V(x) = 0$ for $x$ between $0$ and $L$. The fact that $V$ is infinite for $x< 0$ and for $x> L$, and that the total energy $E$ must be finite, says that $\psi$ must vanish in these two regions ($\psi = 0$ for $x< 0$ and for $x> L$). This condition means that the particle cannot access regions of space where the potential is infinite. The second condition that we make use of is that $\psi (x)$ must be continuous; this means that the probability of the particle being at $x$ cannot be discontinuously related to the probability of it being at a nearby point. It is also true that the spatial derivative $\dfrac{d\psi}{dx}$ must be continuous except at points where the potential $V(x)$ has an infinite discontinuity like it does in the example shown on the right in Figure 1.7. The continuity of $\dfrac{d\psi}{dx}$ relates to continuity of momentum (recall, $-i \hbar \dfrac{\partial}{\partial x}$ is a momentum operator). When a particle moves under, for example, one of the two potentials shown on the left or middle of Figure 1.7, its momentum changes smoothly as kinetic and potential energy interchange during the periodic motion. In contrast, when moving under the potential on the right of Figure 1.7, the momentum undergoes a sudden change of direction when the particle hits either wall. So, even classically, the particle's momentum undergoes a discontinuity at such hard-wall turning points. These conditions of continuity of $\psi$ (and its spatial first derivative) and that $\psi$ must vanish in regions of space where the potential is extremely high were postulated by the pioneers of quantum mechanics so that the predictions of the quantum theory would be in line with experimental observations.
Energies and Wave functions

The second-order differential equation $- \dfrac{\hbar^2}{2m} \dfrac{d^2\psi}{dx^2} + V(x)\psi = E\psi$ has two solutions (because it is a second order equation) in the region between $x= 0$ and $x= L$ where $V(x) = 0$: $\psi = \sin(kx)$ and $\psi = \cos(kx),$ where $k$ is defined as $k=\sqrt{2mE/\hbar^2}.$ Hence, the most general solution is some combination of these two: $\psi = A \sin(kx) + B \cos(kx).$ We could, alternatively, use $\exp(ikx)$ and $\exp(-ikx)$ as the two independent solutions (we do so later in Section 1.4 to illustrate) because $\sin(kx)$ and $\cos(kx)$ can be rewritten in terms of $\exp(ikx)$ and $\exp(-ikx)$; that is, they span exactly the same space. The fact that $\psi$ must vanish at $x= 0$ (n.b., $\psi$ vanishes for $x< 0$ because $V(x)$ is infinite there and $\psi$ is continuous, so it must vanish at the point $x= 0$) means that the weighting amplitude of the $\cos(kx)$ term must vanish because $\cos(kx) = 1$ at $x = 0$. That is, $B = 0.$ The amplitude of the $\sin(kx)$ term is not affected by the condition that $\psi$ vanish at $x= 0$, since $\sin(kx)$ itself vanishes at $x= 0$. So, now we know that $\psi$ is really of the form: $\psi(x) = A \sin(kx).$ The condition that $\psi$ also vanish at $x= L$ (because it vanishes for $x > L$ where $V(x)$ again is infinite) has two possible implications. Either $A = 0$ or $k$ must be such that $\sin(kL) = 0$. The option $A = 0$ would lead to an answer $\psi$ that vanishes at all values of $x$ and thus a probability that vanishes everywhere. This is unacceptable because it would imply that the particle is never observed anywhere. The other possibility is that $\sin(kL) = 0$. Let's explore this answer because it offers the first example of energy quantization that you have probably encountered. As you know, the sine function vanishes at integral multiples of $\pi$. Hence $kL$ must be some multiple of $\pi$; let's call the integer $n$ and write $Lk = n\pi$ (using the definition of $k$) in the form: $L\sqrt{\dfrac{2mE}{\hbar^2}} = n\pi.$ Solving this equation for the energy $E$, we obtain: $E = \dfrac{n^2 \pi^2 \hbar^2}{2mL^2}$ This result says that the only energy values that are capable of giving a wave function $\psi (x)$ that will obey the above conditions are these specific $E$ values. In other words, not all energy values are allowed in the sense that they can produce $\psi$ functions that are continuous and vanish in regions where $V(x)$ is infinite. If one uses an energy $E$ that is not one of the allowed values and substitutes this $E$ into $\sin(kx)$, the resultant function will not vanish at $x = L$. I hope the solution to this problem reminds you of the violin string that we discussed earlier. Recall that the violin string being tied down at $x = 0$ and at $x = L$ gave rise to quantization of the wavelength just as the conditions that $\psi$ vanish at $x = 0$ and $x = L$ gave energy quantization. Substituting $k = n\pi/L$ into $\psi = A \sin(kx)$ gives $\psi (x) = A \sin\Big(\dfrac{n\pi x}{L}\Big).$ The value of $A$ can be found by remembering that $|\psi|^2$ is supposed to represent the probability density for finding the particle at $x$. Such probability densities are supposed to be normalized, meaning that their integral over all $x$ values should amount to unity. So, we can find $A$ by requiring that $1 = \int |\psi(x)|^2 dx = |A|^2 \int \sin^2\Big(\dfrac{n\pi x}{L}\Big)dx$ where the integral ranges from $x = 0$ to $x = L$.
Looking up the integral of $\sin^2(ax)$ and solving the above equation for the so-called normalization constant $A$ gives $A = \sqrt{\dfrac{2}{L}}$ and so $\psi(x) = \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi x}{L}\Big).$ The values that $n$ can take on are $n = 1, 2, 3, \cdots$; the choice $n = 0$ is unacceptable because it would produce a wave function $\psi(x)$ that vanishes at all $x$. The full $x$- and $t$-dependent wave functions are then given as $\Psi(x,t) = \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi x}{L}\Big) \exp\bigg[-\dfrac{it}{\hbar}\dfrac{n^2 \pi^2\hbar^2}{2mL^2}\bigg].$ Notice that the spatial probability density $|\Psi(x,t)|^2$ is not dependent on time and is equal to $|\psi(x)|^2$ because the complex exponential disappears when $\Psi^*\Psi$ is formed. This means that the probability of finding the particle at various values of $x$ is time-independent. Another thing I want you to notice is that, unlike the classical dynamics case, not all energy values $E$ are allowed. In the Newtonian dynamics situation, $E$ could be specified and the particle's momentum at any $x$ value was then determined to within a sign. In contrast, in quantum mechanics, one must determine, by solving the Schrödinger equation, what the allowed values of $E$ are. These $E$ values are quantized, meaning that they occur only for discrete values $E = \dfrac{n^2 \pi^2\hbar^2}{2mL^2}$ determined by a quantum number $n$, by the mass of the particle $m$, and by characteristics of the potential ($L$ in this case).

Probability Densities

Let's now look at some of the wave functions $\psi (x)$ and compare the probability densities $|\psi (x)|^2$ that they represent to the classical probability densities discussed earlier. The $n=1$ and $n=2$ wave functions are shown in the top of Figure 1.8. The corresponding quantum probability densities are shown below the wave functions in two formats (as $x-y$ plots and shaded plots that could relate to the flashing light way of monitoring the particle's location that we discussed earlier). A more complete set of wave functions (for $n$ ranging from 1 to 7) is shown in Figure 1.9. Notice that as the quantum number $n$ increases, the energy $E$ also increases (quadratically with $n$ in this case) and the number of nodes in $\psi$ also increases. Also notice that the probability densities are very different from what we encountered earlier for the classical case. For example, look at the $n = 1$ and $n = 2$ densities and compare them to the classical density illustrated in Figure 1.10. The classical density is easy to understand because we are familiar with classical dynamics. In this case, we say that $P(x)$ is constant within the box because the fact that $V(x)$ is constant causes the kinetic energy and hence the speed of the particle to remain constant, and this is true for any energy $E$. In contrast, the $n = 1$ quantum wave function's $P(x)$ plot is peaked in the middle of the box and falls to zero at the walls. The $n = 2$ density $P(x)$ has two peaks (one to the left of the box midpoint, and one to the right), a node at the box midpoint, and falls to zero at the walls. One thing that students often ask me is "how does the particle get from being in the left peak to being in the right peak if it has zero chance of ever being at the midpoint where the node is?" The difficulty with this question is that it is posed in a terminology that asks for a classical dynamics answer.
That is, by asking "how does the particle get..." one is demanding an answer that involves describing its motion (i.e., it moves from here at time $t_1$ to there at time $t_2$). Unfortunately, quantum mechanics does not deal with issues such as a particle's trajectory (i.e., where it is at various times) but only with its probability of being somewhere (i.e., $|\psi|^2$). The next section will treat such paradoxical issues even further.

Classical and Quantum Probability Densities

As just noted, it is tempting for most beginning students of quantum mechanics to attempt to interpret the quantum behavior of a particle in classical terms. However, this adventure is full of danger and bound to fail because small light particles simply do not move according to Newton's laws. To illustrate, let's try to understand what kind of (classical) motion would be consistent with the $n = 1$ or $n = 2$ quantum $P(x)$ plots shown in Figure 1.8. However, as I hope you anticipate, this attempt at gaining classical understanding of a quantum result will not work in that it will lead to nonsensical results. My point in leading you to attempt such a classical understanding is to teach you that classical and quantum results are simply different and that you must resist the urge to impose a classical understanding on quantum results, at least until you understand under what circumstances classical and quantum results should or should not be comparable. For the $n = 1$ case in Figure 1.8, we note that $P(x)$ is highest at the box midpoint and vanishes at $x = 0$ and $x = L$. In a classical mechanics world, this would mean that the particle moves slowly near $x = \dfrac{L}{2}$ and more quickly near $x = 0$ and $x = L$. Because the particle's total energy $E$ must remain constant as it moves, in regions where it moves slowly, the potential it experiences must be high, and where it moves quickly, $V$ must be small. This analysis (n.b., based on classical concepts) would lead us to conclude that the $n =1$ $P(x)$ arises from the particle moving in a potential that is high near $x = \dfrac{L}{2}$ and low as $x$ approaches $0$ or $L$. A similar analysis of the $P(x)$ plot for $n = 2$ would lead us to conclude that the particle for which this is the correct $P(x)$ must experience a potential that is high midway between $x = 0$ and $x = \dfrac{L}{2}$, high midway between $x = \dfrac{L}{2}$ and $x = L$, and low near $x = \dfrac{L}{2}$ and near $x = 0$ and $x = L$. These conclusions are crazy because we know that the potential $V(x)$ for which we solved the Schrödinger equation to generate both of the wave functions (and both probability densities) is constant between $x = 0$ and $x = L$. That is, we know the same $V(x)$ applies to the particle moving in the $n = 1$ and $n = 2$ states, whereas the classical motion analysis offered above suggests that $V(x)$ is different for these two cases. What is wrong with our attempt to understand the quantum $P(x)$ plots? The mistake we made was in attempting to apply the equations and concepts of classical dynamics to a $P(x)$ plot that did not arise from classical motion. Simply put, one cannot ask how the particle is moving (i.e., what is its speed at various positions) when the particle is undergoing quantum dynamics. Most students, when first experiencing quantum wave functions and quantum probabilities, try to think of the particle moving in a classical way that is consistent with the quantum $P(x)$.
This attempt to retain a degree of classical understanding of the particle's movement is almost always met with frustration, as I illustrated with the above example and will illustrate later in other cases. Continuing with this first example of how one solves the Schrödinger equation and how one thinks of the quantized $E$ values and wave functions $\psi$, let me offer a little more optimistic note than offered in the preceding discussion. If we examine the $\psi(x)$ plot shown in Figure 1.9 for $n = 7$, and think of the corresponding $P(x) = |\psi(x)|^2$, we note that the $P(x)$ plot would look something like that shown in Figure 1.11. It would have seven maxima separated by six nodes. If we were to plot $|\psi(x)|^2$ for a very large $n$ value such as $n = 55$, we would find a $P(x)$ plot having 55 maxima separated by 54 nodes, with the maxima separated approximately by distances of $L/55$. Such a plot, when viewed in a coarse-grained sense (i.e., focusing with somewhat blurred vision on the positions and heights of the maxima) looks very much like the classical $P(x)$ plot in which $P(x)$ is constant for all $x$. Another way to look at the difference between the low-$n$ and high-$n$ quantum probability distributions is reflected in the so-called local de Broglie wavelength $\lambda_{\rm local}(x)=\dfrac{h}{\sqrt{2m(E-V(x))}}.$ It can be shown that the classical and quantum probabilities will be similar in regions of space where $\left|\dfrac{d\lambda_{\rm local}}{dx}\right| \ll 1.$ This inequality will be true when $E$ is much larger than $V$, which is consistent with the view that high quantum states behave classically, but it will not hold when $E$ is only slightly above $V$ (i.e., for low-energy quantum states and for any quantum state near classical turning points) or when $E$ is smaller than $V$ (i.e., in classically forbidden regions). In summary, it is a general result of quantum mechanics that the quantum $P(x)$ distributions for large quantum numbers take on the form of the classical $P(x)$ for the same potential $V$ that was used to solve the Schrödinger equation, except near turning points and in classically forbidden regions. It is also true that, at any specified energy, classical and quantum results agree better when one is dealing with heavy particles than for light particles. For example, a given energy $E$ corresponds to a higher $n$ quantum number in the particle-in-a-box formula $E_n = \dfrac{n^2\pi^2\hbar^2}{2mL^2}$ for a heavier particle than for a lighter particle. Hence, heavier particles, moving with a given energy $E$, have more classical probability distributions. To gain perspective about this matter, in the table shown below, I give the energy levels $E_n = \dfrac{n^2\pi^2\hbar^2}{2mL^2}$ in kcal mol$^{-1}$ for a particle whose mass is 1, 2000, 20,000, or 200,000 times an electron's mass, constrained to move within a one-dimensional region of length $L$ (in Bohr units denoted $a_0$; 1 $a_0$ = 0.529 Å).
Energies $E_n$ (kcal mol$^{-1}$) for various $m$ and $L$ combinations:

| | $L = 1\ a_0$ | $L = 10\ a_0$ | $L = 100\ a_0$ | $L = 1000\ a_0$ |
|---|---|---|---|---|
| $m = 1\ m_e$ | $3.1 \times 10^{3}\,n^2$ | $3.1 \times 10^{1}\,n^2$ | $3.1 \times 10^{-1}\,n^2$ | $3.1 \times 10^{-3}\,n^2$ |
| $m = 2000\ m_e$ | $1.5 \times 10^{0}\,n^2$ | $1.5 \times 10^{-2}\,n^2$ | $1.5 \times 10^{-4}\,n^2$ | $1.5 \times 10^{-6}\,n^2$ |
| $m = 20{,}000\ m_e$ | $1.5 \times 10^{-1}\,n^2$ | $1.5 \times 10^{-3}\,n^2$ | $1.5 \times 10^{-5}\,n^2$ | $1.5 \times 10^{-7}\,n^2$ |
| $m = 200{,}000\ m_e$ | $1.5 \times 10^{-2}\,n^2$ | $1.5 \times 10^{-4}\,n^2$ | $1.5 \times 10^{-6}\,n^2$ | $1.5 \times 10^{-8}\,n^2$ |

Clearly, for electrons, even when free to roam over 50-500 nanometers (e.g., $L = 100\ a_0$ or $L = 1000\ a_0$), one does not need to access a very high quantum state to reach 1 kcal mol$^{-1}$ of energy (e.g., $n = 3$ would be adequate for $L = 100\ a_0$). Recall, it is high quantum states where one expects the classical and quantum spatial probability distributions to be similar. So, when treating electrons, one is probably (nearly) always going to have to make use of quantum mechanics and one will not be able to rely on classical mechanics. For light nuclei, with masses near 2000 times the electron's mass, if the particle is constrained to a small distance range (e.g., 1-10 $a_0$), again even low quantum states will have energies in excess of 1 kcal mol$^{-1}$. Only when free to move over 100 to 1000 $a_0$ does 1 kcal mol$^{-1}$ correspond to relatively large quantum numbers for which one expects near-classical behavior. The data shown in the above table can also be used to estimate when quantum behavior such as Bose-Einstein condensation can be expected. When constrained to 100 $a_0$, particles in the 1 amu mass range have translational energies in the $0.15 n^2$ cal mol$^{-1}$ range. Realizing that $R = 1.98$ cal mol$^{-1}$ K$^{-1}$, this means that translational temperatures near 0.1 K would be needed to cause these particles to occupy their $n = 1$ ground state. In contrast, particles with masses in the range of 100 amu, even when constrained to distances of ca. 5 Å, require $n$ to exceed ca. 10 before having 1 kcal mol$^{-1}$ of translational energy. When constrained to 50 Å, 1 kcal mol$^{-1}$ requires $n$ to exceed 1000. So, heavy particles will, even at low energies, behave classically except if they are constrained to very short distances. We will encounter this so-called quantum-classical correspondence principle again when we examine other model problems. It is an important property of solutions to the Schrödinger equation because it is what allows us to bridge the gap between using the Schrödinger equation to treat small light particles and the Newton equations for macroscopic (big, heavy) systems.

Time Propagation of Wave functions

For a particle-in-a-box system that exists in an eigenstate $\psi(x) = \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi x}{L}\Big)$ having an energy $E_n = \dfrac{n^2 \pi^2\hbar^2}{2mL^2}$, the time-dependent wave function is $\Psi(x,t) = \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi x}{L}\Big) \exp\Big(-\dfrac{itE_n}{\hbar}\Big),$ which can be generated by applying the so-called time evolution operator $U(t,0)$ to the wave function at $t = 0$: $\Psi(x,t) = U(t,0) \Psi(x,0),$ where an explicit form for $U(t,t')$ is: $U(t,t') = \exp\bigg[-\dfrac{i(t-t')H}{\hbar}\bigg].$ The function $\Psi(x,t)$ has a spatial probability density that does not depend on time because $\Psi^*(x,t) \Psi(x,t) = \dfrac{2}{L} \sin^2\Big(\dfrac{n\pi x}{L}\Big) = \Psi^*(x,0) \Psi(x,0),$ since $\exp\Big(-\dfrac{itE_n}{\hbar}\Big) \exp\Big(\dfrac{itE_n}{\hbar}\Big) = 1$. However, it is possible to prepare systems (even in real laboratory settings) in states that are not single eigenstates; we call such states superposition states.
For example, consider a particle moving along the $x$-axis within the box potential but in a state whose wave function at some initial time $t = 0$ is $\Psi(x,0) = \dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{\pi x}{L}\Big) - \dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{2\pi x}{L}\Big).$ This is a superposition of the $n =1$ and $n = 2$ eigenstates. The probability density associated with this function is $|\Psi(x,0)|^2 = \dfrac{1}{2}\Big\{\dfrac{2}{L} \sin^2\Big(\dfrac{\pi x}{L}\Big)+ \dfrac{2}{L} \sin^2\Big(\dfrac{2\pi x}{L}\Big) -2\dfrac{2}{L} \sin\Big(\dfrac{\pi x}{L}\Big)\sin\Big(\dfrac{2\pi x}{L}\Big)\Big\}.$ The $n = 1$ and $n = 2$ components, the superposition $\Psi$, and the probability density at $t = 0$ are shown in the first three panels of Figure 1.12. It should be noted that the probability density associated with this superposition state is not symmetric about the $x=\dfrac{L}{2}$ midpoint even though the $n = 1$ and $n = 2$ component wave functions and densities are. Such a density describes the particle localized more strongly in the large-$x$ region of the box than in the small-$x$ region at $t = 0$. Now, let's consider the superposition wave function and its density at later times. Applying the time evolution operator $\exp\Big(-\dfrac{itH}{\hbar}\Big)$ to $\Psi(x,0)$ generates this time-evolved function at time $t$: $\Psi(x,t) = \exp\Big(-\dfrac{itH}{\hbar}\Big) \left\{\dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{\pi x}{L}\Big) - \dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{2\pi x}{L}\Big)\right\} = \dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{\pi x}{L}\Big) \exp\Big(-\dfrac{itE_1}{\hbar}\Big) - \dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{2\pi x}{L}\Big) \exp\Big(-\dfrac{itE_2}{\hbar}\Big) .$ The spatial probability density associated with this $\Psi$ is: $|\Psi(x,t)|^2 = \dfrac{1}{2}\Bigg\{\dfrac{2}{L} \sin^2\Big(\dfrac{\pi x}{L}\Big)+ \dfrac{2}{L} \sin^2\Big(\dfrac{2\pi x}{L}\Big)-2\dfrac{2}{L} \cos\Big(\dfrac{(E_2-E_1)t}{\hbar}\Big) \sin\Big(\dfrac{\pi x}{L}\Big)\sin\Big(\dfrac{2\pi x}{L}\Big)\Bigg\}.$ At $t = 0$, this function clearly reduces to that written earlier for $\Psi(x,0)$. Notice that as time evolves, this density changes because of the $\cos\Big(\dfrac{(E_2-E_1)t}{\hbar}\Big)$ factor it contains. In particular, note that as $t$ moves through a period of time $\delta t = \dfrac{\pi\hbar}{E_2-E_1}$, the cos factor changes sign. That is, for $t = 0$, the cos factor is $+1$; for $t = \dfrac{\pi\hbar}{E_2-E_1}$, the cos factor is $-1$; for $t = \dfrac{2\pi\hbar}{E_2-E_1}$, it returns to $+1$. The result of this time-variation in the cos factor is that $|\Psi|^2$ changes in form from that shown in the bottom left panel of Figure 1.12 to that shown in the bottom right panel (at $t = \dfrac{\pi\hbar}{E_2-E_1}$) and then back to the form in the bottom left panel (at $t = \dfrac{2\pi\hbar}{E_2-E_1}$). One can interpret this time variation as describing the particle's probability density (not its classical position!), initially localized toward the right side of the box, moving to the left and then back to the right. Of course, this time evolution will continue over more and more cycles as time evolves further. This example illustrates once again the difficulty with attempting to localize particles that are being described by quantum wave functions.
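A short numerical check (my own illustration, in units where $\hbar = m = L = 1$) of the density formula above shows the probability sloshing from the right half of the box to the left half and back:

```python
import numpy as np

# Evaluate |Psi(x,t)|^2 for the 50/50 superposition of the n = 1 and n = 2
# box states, using the density formula above (units with hbar = m = L = 1).
hbar = m = L = 1.0
E1 = np.pi**2 * hbar**2 / (2 * m * L**2)
E2 = 4 * np.pi**2 * hbar**2 / (2 * m * L**2)
x = np.linspace(0, L, 2001)
s1, s2 = np.sin(np.pi * x / L), np.sin(2 * np.pi * x / L)

def density(t):
    cos_factor = np.cos((E2 - E1) * t / hbar)
    return 0.5 * (2 / L) * (s1**2 + s2**2 - 2 * cos_factor * s1 * s2)

left = x < L / 2
for t in [0.0, np.pi * hbar / (E2 - E1)]:
    frac_left = np.trapz(density(t)[left], x[left])
    print(f"t = {t:.3f}: probability in left half = {frac_left:.3f}")
# At t = 0 only ~0.08 of the probability is in the left half (density peaked on
# the right); at t = pi*hbar/(E2-E1) the density has moved left (~0.92).
```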
For example, a particle that is characterized by the eigenstate $\sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{\pi x}{L}\Big)$ is more likely to be detected near $x = \dfrac{L}{2}$ than near $x = 0$ or $x = L$ because the square of this function is large near $x = \dfrac{L}{2}$. A particle in the state $\sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{2\pi x}{L}\Big)$ is most likely to be found near $x = \dfrac{L}{4}$ and $x = \dfrac{3L}{4}$, but not near $x = 0$, $x = \dfrac{L}{2}$, or $x =L$. The issue of how the particle in the latter state moves from being near $x = \dfrac{L}{4}$ to $x = \dfrac{3L}{4}$ is not something quantum mechanics deals with. Quantum mechanics does not allow us to follow the particle's trajectory, which is what we need to know when we ask how it moves from one place to another. Nevertheless, superposition wave functions can offer, to some extent, the opportunity to follow the motion of the particle. For example, the superposition state written above as $\dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{\pi x}{L}\Big) - \dfrac{1}{\sqrt{2}} \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{2\pi x}{L}\Big)$ has a probability amplitude that changes with time as shown in Figure 1.12. Moreover, this amplitude's major peak does move from side to side within the box as time evolves. So, in this case, we can say with what frequency the major peak moves back and forth. In a sense, this allows us to follow the particle's movements, but only to the extent that we are satisfied with ascribing its location to the position of the major peak in its probability distribution. That is, we cannot really follow its precise location, but we can follow the location of where it is very likely to be found. However, notice that the time it takes the particle to move from right to left, $t = \dfrac{\pi\hbar}{E_2-E_1}$, is dependent upon the energy difference between the two states contributing to the superposition state, not to the energy of either of these states, which is very different from what one would expect if the particle were moving classically. These are important observations that I hope the student will keep fresh in mind. They are also important ingredients in modern quantum dynamics in which localized wave packets, which are similar to the superposed eigenstates discussed above, are used to detail the position and speed of a particle's main probability density peak. The above example illustrates how one time-evolves a wave function that is expressed as a linear combination (i.e., superposition) of eigenstates of the problem at hand. There is a large amount of current effort in the theoretical chemistry community aimed at developing efficient approximations to the $\exp\Big(-\dfrac{itH}{\hbar}\Big)$ evolution operator that do not require $\Psi(x,0)$ to be explicitly written as a sum of eigenstates. This is important because, for most systems of direct relevance to molecules, one cannot solve for the eigenstates; it is simply too difficult to do so. You can find a significantly more detailed treatment of the research-level treatment of this subject in my Theory Page web site and my QMIC textbook. However, let's spend a little time on a brief introduction to what is involved.
The problem is to express $\exp\Big(-\dfrac{itH}{\hbar}\Big) \Psi(q_j)$, where $\Psi(q_j)$ is some initial wave function but not an eigenstate, in a manner that does not require one to first find the eigenstates {$\psi_J$} of $H$ and to expand $\Psi$ in terms of these eigenstates: $\Psi (t=0) = \sum_J C_J \psi_J$ after which the desired function is written as $\exp\Big(-\dfrac{itH}{\hbar}\Big) \Psi(q_j) = \sum_J C_J \psi_J \exp\Big(-\dfrac{itE_J}{\hbar}\Big).$ The basic idea is to break the operator $H$ into its kinetic $T$ and potential $V$ energy components and to realize that the differential operators appear in $T$ only. The importance of this observation lies in the fact that $T$ and $V$ do not commute, which means that $TV$ is not equal to $VT$ (n.b., recall that for two quantities to commute means that their order of appearance does not matter). Why do they not commute? Because $T$ contains second derivatives with respect to the coordinates {$q_j$} that $V$ depends on, so, for example, $\dfrac{d^2}{dq^2}(V(q) \Psi(q))$ is not equal to $V(q)\dfrac{d^2}{dq^2}\Psi(q)$. The fact that $T$ and $V$ do not commute is important because the most obvious attempt to approximate $\exp\Big(-\dfrac{itH}{\hbar}\Big)$ is to write this single exponential in terms of $\exp\Big(-\dfrac{itT}{\hbar}\Big)$ and $\exp\Big(-\dfrac{itV}{\hbar}\Big)$. However, the identity $\exp\Big(-\dfrac{itH}{\hbar}\Big) = \exp\Big(-\dfrac{itV}{\hbar}\Big) \exp\Big(-\dfrac{itT}{\hbar}\Big)$ is not fully valid, as one can see by expanding all three of the above exponential factors as $\exp(x) = 1 + x + \dfrac{x^2}{2!} + \cdots,$ and noting that the two sides of the above equation only agree if one can assume that $TV = VT$, which, as we noted, is not true. In most modern approaches to time propagation, one divides the time interval $t$ into many (i.e., $P$ of them) small time slices $\tau = t/P$. One then expresses the evolution operator as a product of $P$ short-time propagators (the student should by now be familiar with the fact that $H$, $T$, and $V$ are operators, so, from now on, I will no longer necessarily use bold lettering for these quantities): $\exp\Big(-\dfrac{itH}{\hbar}\Big) = \exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \cdots = \left[\exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \right]^P.$ If one can then develop an efficient means of propagating for a short time $\tau$, one can then do so over and over again $P$ times to achieve the desired full-time propagation. It can be shown that the exponential operator involving $H$ can better be approximated in terms of the $T$ and $V$ exponential operators as follows: $\exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \approx \exp\Big(-\dfrac{\tau^2 (TV-VT)}{\hbar^2}\Big) \exp\Big(-\dfrac{i\tau V}{\hbar}\Big) \exp\Big(-\dfrac{i\tau T}{\hbar}\Big).$ So, if one can be satisfied with propagating for very short time intervals (so that the $\tau^2$ term can be neglected), one can indeed use $\exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \approx \exp\Big(-\dfrac{i\tau V}{\hbar}\Big) \exp\Big(-\dfrac{i\tau T}{\hbar}\Big)$ as an approximation for the propagator $U(t,0)$.
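The consequence of this non-commutativity is easy to demonstrate numerically. The sketch below is my own illustration (using small random Hermitian matrices in place of $T$ and $V$, with $\hbar = 1$); it shows that the error of the simple product splitting shrinks roughly as $\tau^2$:

```python
import numpy as np
from scipy.linalg import expm

# Illustration (not from the text): exp(-i*tau*(T+V)) differs from
# exp(-i*tau*V) exp(-i*tau*T) when T and V do not commute, with an error
# that shrinks roughly as tau^2.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4)); T = T + T.T   # random Hermitian "kinetic" matrix
V = np.diag(rng.standard_normal(4))            # diagonal "potential" matrix

for tau in [0.1, 0.05, 0.025]:
    exact = expm(-1j * tau * (T + V))
    split = expm(-1j * tau * V) @ expm(-1j * tau * T)
    print(tau, np.linalg.norm(exact - split))  # error drops ~4x when tau is halved
```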
It can also be shown that the so-called split short-time expression $\exp\Big(-\dfrac{i\tau H}{\hbar}\Big) \approx \exp\Big(-\dfrac{i\tau V}{2\hbar}\Big) \exp\Big(-\dfrac{i\tau T}{\hbar}\Big) \exp\Big(-\dfrac{i\tau V}{2\hbar}\Big)$ provides an even more accurate representation of the short-time propagator (because expansions of the left- and right-hand sides agree to higher orders in $\tau/\hbar$). To progress further, one then expresses $\exp\Big(-\dfrac{i\tau T}{\hbar}\Big)$ acting on $\exp\Big(-\dfrac{i\tau V}{2\hbar}\Big) \Psi(q)$ in terms of the eigenfunctions of the kinetic energy operator $T$. Note that these eigenfunctions do not depend on the nature of the potential $V$, so this step is valid for any and all potentials. The eigenfunctions of $T = - \dfrac{\hbar^2}{2m} \dfrac{d^2}{dq^2}$ are the momentum eigenfunctions that we discussed earlier $\psi_p(q) =\dfrac{1}{\sqrt{2\pi}} \exp\Big(\dfrac{ipq}{\hbar}\Big)$ and they obey the following orthogonality $\int \psi_{p'}^*(q) \psi_p(q) dq = \delta(p'-p)$ and completeness relations $\int \psi_p(q) \psi_p^*(q') dp = \delta(q-q').$ Writing $\exp\Big(-\dfrac{i\tau V}{2\hbar}\Big) \Psi(q)$ as $\exp\Big(-\dfrac{i\tau V}{2\hbar}\Big)\Psi(q) = \int dq' \delta(q-q') \exp\Big(-\dfrac{i\tau V(q')}{2\hbar}\Big)\Psi(q'),$ and using the above expression for $\delta(q-q')$ gives: $\exp\Big(-\dfrac{i\tau V}{2\hbar}\Big)\Psi(q) = \int \int \psi_p(q) \psi_p^*(q') \exp\Big(-\dfrac{i\tau V(q')}{2\hbar}\Big)\Psi(q') dq' dp.$ Then inserting the explicit expressions for $\psi_p(q)$ and $\psi_p^*(q')$ in terms of $\psi_p(q) =\dfrac{1}{\sqrt{2\pi}} \exp\Big(\dfrac{ipq}{\hbar}\Big)$ gives $\exp\Big(-\dfrac{i\tau V}{2\hbar}\Big)\Psi(q) = \int \int\dfrac{1}{\sqrt{2\pi}} \exp\Big(\dfrac{ipq}{\hbar}\Big)\dfrac{1}{\sqrt{2\pi}} \exp\Big(-\dfrac{ipq'}{\hbar}\Big) \exp\Big(-\dfrac{i\tau V(q')}{2\hbar}\Big)\Psi(q') dq' dp.$ Now, allowing $\exp\Big(-\dfrac{i\tau T}{\hbar}\Big)$ to act on $\exp\Big(-\dfrac{i\tau V}{2\hbar}\Big) \Psi(q)$ produces $\exp\Big(-\dfrac{i\tau T}{\hbar}\Big) \exp\Big(-\dfrac{i\tau V}{2\hbar}\Big)\Psi(q) = \int \int \exp\Big(-\dfrac{i\tau p^2}{2m\hbar}\Big)\dfrac{1}{2\pi} \exp\Big(\dfrac{ip(q-q')}{\hbar}\Big) \exp\Big(-\dfrac{i\tau V(q')}{2\hbar}\Big)\Psi(q') dq' dp.$ The integral over $p$ above can be carried out analytically and gives $\exp\Big(-\dfrac{i\tau T}{\hbar}\Big) \exp\Big(-\dfrac{i\tau V}{2\hbar}\Big)\Psi(q) = \sqrt{\dfrac{m}{2i\pi \tau\hbar}} \int \exp\Big(\dfrac{im(q-q')^2}{2\tau\hbar}\Big) \exp\Big(-\dfrac{i\tau V(q')}{2\hbar}\Big) \Psi(q') dq'.$ So, the final expression for the short-time propagated wave function is: $\Psi(q,t) = \sqrt{\dfrac{m}{2i\pi \tau\hbar}}\exp\Big(-\dfrac{i\tau V(q)}{2\hbar}\Big)\int \exp\Big(\dfrac{im(q-q')^2}{2\tau\hbar}\Big) \exp\Big(-\dfrac{i\tau V(q')}{2\hbar}\Big)\Psi(q') dq',$ which is the working equation one uses to compute $\Psi(q,t)$ knowing $\Psi(q)$. Notice that all one needs to know to apply this formula is the potential $V(q)$ at each point in space. One does not need to know any of the eigenfunctions of the Hamiltonian to apply this method. This is especially attractive when dealing with very large molecules or molecules in condensed media where it is essentially impossible to determine any of the eigenstates and where the energy spacings between eigenstates are extremely small.
However, one does have to use this formula over and over again to propagate the initial wave function through many small time steps $\tau$ to achieve full propagation for the desired time interval $t = P\tau$. Because this type of time propagation technique is a very active area of research in the theory community, it is likely to continue to be refined and improved. Further discussion of it is beyond the scope of this book, so I will not go further in this direction. The web site of Professor Nancy Makri provides access to further information about the quantum time propagation research area.
The number of dimensions depends on the number of particles and the number of spatial (and other) dimensions needed to characterize the position and motion of each particle. The number of dimensions also affects the number of quantum numbers that may be used to label eigenstates of the Hamiltonian. Schrödinger Equation Consider an electron of mass $m$ and charge $e$ moving on a two-dimensional surface that defines the $x,y$ plane (e.g., perhaps an electron is constrained to the surface of a solid by a potential that binds it tightly to a narrow region in the $z$-direction but allows it to roam freely over a rectangular area in the $x,y$ plane), and assume that the electron experiences a constant, time-independent potential $V_0$ at all points in this plane. For example, if $V_0$ is negative, it could reflect the binding energy of the electron relative to its energy in vacuum. The pertinent time-independent Schrödinger equation is: $- \dfrac{\hbar^2}{2m} \left(\dfrac{\partial^2}{\partial x^2} +\dfrac{\partial^2}{\partial y^2}\right)\psi(x,y) +V_0 \psi(x,y) = E\psi(x,y).$ The task at hand is to solve the above eigenvalue equation to determine the allowed energy states for this electron. Because there are no terms in this equation that couple motion in the $x$ and $y$ directions (e.g., no terms of the form $x^ay^b$ or $x\dfrac{\partial}{\partial y}$ or $y\dfrac{\partial}{\partial x}$), separation of variables can be used to write $\psi$ as a product $\psi(x,y)=A(x)B(y)$. Substitution of this form into the Schrödinger equation, followed by collecting together all $x$-dependent and all $y$-dependent terms, gives: $- \dfrac{\hbar^2}{2m} \frac{1}{A}\frac{\partial^2 A}{\partial x^2} - \dfrac{\hbar^2}{2m} \frac{1}{B}\frac{\partial^2 B}{\partial y^2} = E-V_0.$ Since the first term contains no $y$-dependence and the second contains no $x$-dependence, and because the right side of the equation is independent of both $x$ and $y$, both terms on the left must actually be constant (these two constants are denoted $E_x$ and $E_y$, respectively, realizing that they have units of energy). This observation allows two separate Schrödinger equations to be written: $- \dfrac{\hbar^2}{2m} \frac{1}{A}\frac{\partial^2 A}{\partial x^2} = E_x$ and $- \dfrac{\hbar^2}{2m} \frac{1}{B}\frac{\partial^2 B}{\partial y^2} = E_y.$ The total energy $E$ can then be expressed in terms of these separate energies $E_x$ and $E_y$ from $E_x + E_y = E-V_0$. Solutions to the $x-$ and $y-$ Schrödinger equations are easily seen to be: $A(x) = \exp\left(ix\sqrt{\frac{2mE_x}{\hbar^2}}\right) \text{and} \exp\left(-ix\sqrt{\frac{2mE_x}{\hbar^2}}\right)$ $B(y) = \exp\left(iy\sqrt{\frac{2mE_y}{\hbar^2}}\right) \text{and} \exp\left(-iy\sqrt{\frac{2mE_y}{\hbar^2}}\right).$ Two independent solutions are obtained for each equation because the $x-$ and $y-$space Schrödinger equations are both second-order differential equations (i.e., a second-order differential equation has two independent solutions). Boundary Conditions The boundary conditions, not the Schrödinger equation, determine whether the eigenvalues will be discrete or continuous. If the electron is entirely unconstrained within the $x,y$ plane, the energies $E_x$ and $E_y$ can assume any values; this means that the experimenter can inject the electron onto the $x,y$ plane with any total energy $E$ and any components $E_x$ and $E_y$ along the two axes as long as $E_x + E_y = E$.
In such a situation, one speaks of the energies along both coordinates as being in the continuum or not quantized. In contrast, if the electron is constrained to remain within a fixed area in the $x,y$ plane (e.g., a rectangular or circular region), then the situation is qualitatively different. Constraining the electron to any such specified area gives rise to boundary conditions that impose additional requirements on the above $A$ and $B$ functions. These constraints can arise, for example, if the potential $V(x,y)$ becomes very large for $x,y$ values outside the region, in which case, the probability of finding the electron outside the region is very small. Such a case might represent, for example, a situation in which the molecular structure of the solid surface changes outside the enclosed region in a way that is highly repulsive to the electron (e.g., as in the case of molecular corrals on metal surfaces). This case could then represent a simple model of so-called corrals in which the particle is constrained to a finite region of space. For example, if motion is constrained to take place within a rectangular region defined by $0 \le x\le L_x$; $0 \le y\le L_y$, then the continuity property that all wave functions must obey (because of their interpretation as probability densities, which must be continuous) causes $A(x)$ to vanish at 0 and at $L_x$. That is, because $A$ must vanish for $x < 0$ and must vanish for $x > L_x$, and because $A$ is continuous, it must vanish at $x = 0$ and at $x = L_x$. Likewise, $B(y)$ must vanish at 0 and at $L_y$. To implement these constraints for $A(x)$, one must linearly combine the above two solutions $\exp\left(ix\sqrt{\dfrac{2mE_x}{\hbar^2}}\right)$ and $\exp\left(-ix\sqrt{\dfrac{2mE_x}{\hbar^2}}\right)$ to achieve a function that vanishes at $x=0$: $A(x) = \exp\left(ix\sqrt{\frac{2mE_x}{\hbar^2}}\right) - \exp\left(-ix\sqrt{\frac{2mE_x}{\hbar^2}}\right).$ One is allowed to linearly combine solutions of the Schrödinger equation that have the same energy (i.e., are degenerate) because Schrödinger equations are linear differential equations.
An analogous process must be applied to $B(y)$ to achieve a function that vanishes at $y=0$: $B(y) = \exp\left(iy\sqrt{\frac{2mE_y}{\hbar^2}}\right) - \exp\left(-iy\sqrt{\frac{2mE_y}{\hbar^2}}\right).$ Further requiring $A(x)$ and $B(y)$ to vanish at $x=L_x$ and $y=L_y$, respectively, gives equations that can be obeyed only if $E_x$ and $E_y$ assume particular values: $\exp\left( iL_x \sqrt{\frac{2mE_x}{\hbar^2}} \right)-\exp\left(-iL_x\sqrt{\frac{2mE_x}{\hbar^2}}\right)=0\text{, and}$ $\exp \left( iL_y \sqrt{\frac{2mE_y}{\hbar^2}} \right) - \exp \left(-iL_y\sqrt{\frac{2mE_y}{\hbar^2}} \right) = 0.$ These equations are equivalent (i.e., using $\exp(ix) = \cos(x) + i \sin(x)$) to $\sin\left(L_x\sqrt{\frac{2mE_x}{\hbar^2}}\right) = \sin\left(L_y\sqrt{\frac{2mE_y}{\hbar^2}}\right) = 0.$ Knowing that $\sin(\theta)$ vanishes at $\theta = n\pi$, for $n=1,2,3,\cdots$ (although $\sin(n\pi)$ also vanishes for $n=0$, this choice gives a wave function that vanishes for all $x$ or $y$ and is therefore unacceptable because it represents zero probability density at all points in space), one concludes that the energies $E_x$ and $E_y$ can assume only values that obey: $L_x\sqrt{\frac{2mE_x}{\hbar^2}} =n_x\pi,$ $L_y\sqrt{\frac{2mE_y}{\hbar^2}} =n_y\pi$ or $E_x = \frac{n_x^2\pi^2 \hbar^2}{2mL_x^2}$ and $E_y = \frac{n_y^2\pi^2 \hbar^2}{2mL_y^2}, \text{ with } n_x \text{ and } n_y =1,2,3, \cdots$ and $E = V_0 +E_x+ E_y.$ It is important to stress that it is the imposition of boundary conditions, expressing the fact that the electron is spatially constrained, that gives rise to quantized energies. In the absence of spatial confinement, or with confinement only at $x = 0$ or $L_x$ or only at $y = 0$ or $L_y$, quantized energies would not be realized. In this example, confinement of the electron to a finite interval along both the $x$ and $y$ coordinates yields energies that are quantized along both axes. If the electron were confined along one coordinate (e.g., between $0 \le x\le L_x$) but not along the other (i.e., $B(y)$ is either restricted to vanish only at $y=0$ or at $y=L_y$ or at neither point), then the total energy $E$ lies in the continuum; its $E_x$ component is quantized but $E_y$ is not. Analogs of such cases arise, for example, for a triatomic molecule containing one strong and one weak bond. If the bond with the higher dissociation energy is excited to a level that is not enough to break it but that is in excess of the dissociation energy of the weaker bond, one has a situation that is especially interesting. In this case, one has two degenerate states: 1. one with the strong bond having high internal energy and the weak bond having low energy ($\psi_1$), and 2. a second with the strong bond having little energy and the weak bond having more than enough energy to rupture it ($\psi_2$). Although an experiment may prepare the molecule in a state that contains only the former component (i.e., $\Psi(t=0)= C_1\psi_1 + C_2\psi_2$ with $C_1 = 1$, $C_2 = 0$), coupling between the two degenerate functions (induced by terms in the Hamiltonian $H$ that have been ignored in defining $\psi_1$ and $\psi_2$) can cause the true wave function $\Psi = \exp(-itH/\hbar) \Psi(t=0)$ to acquire a component of the second function as time evolves. In such a case, one speaks of internal vibrational energy relaxation (IVR) giving rise to unimolecular decomposition of the molecule.
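As a quick numerical illustration of these boundary-condition-derived levels, the sketch below evaluates $E = V_0 + E_x + E_y$ from the formulas just obtained for a few $(n_x, n_y)$ pairs; the electron mass and the nanometer-scale box lengths are illustrative assumptions, not values taken from the text.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
eV   = 1.602176634e-19   # J
Lx, Ly = 1.0e-9, 2.0e-9  # illustrative box edge lengths (meters)

def E_2d_box(nx, ny, V0=0.0):
    """E = V0 + nx^2 pi^2 hbar^2/(2 m Lx^2) + ny^2 pi^2 hbar^2/(2 m Ly^2)."""
    Ex = nx**2 * np.pi**2 * hbar**2 / (2.0 * m_e * Lx**2)
    Ey = ny**2 * np.pi**2 * hbar**2 / (2.0 * m_e * Ly**2)
    return V0 + Ex + Ey

for nx, ny in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    print(f"(nx={nx}, ny={ny})  E = {E_2d_box(nx, ny) / eV:.3f} eV")
```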
Energies and Wave Functions for Bound States For discrete energy levels, the energies are specified functions that depend on quantum numbers, one for each degree of freedom that is quantized Returning to the situation in which motion is constrained along both axes, the resultant total energies and wave functions (obtained by inserting the quantum energy levels into the expressions for $A(x)$ and $B(y)$) are as follows: $E_x = \frac{n_x^2\pi^2 \hbar^2}{2mL_x^2}$ and $E_y = \frac{n_y^2\pi^2 \hbar^2}{2mL_y^2}$ $E = E_x + E_y +V_0$ $\psi(x,y) = \sqrt{\frac{1}{2L_x}}\sqrt{\frac{1}{2L_y}}\left[\exp\bigg(\frac{in_x\pi x}{L_x}\bigg) -\exp\bigg(-\frac{in_x\pi x}{L_x}\bigg)\right]\left[\exp\bigg(\frac{in_y\pi y}{L_y}\bigg) -\exp\bigg(-\frac{in_y\pi y}{L_y}\bigg)\right]$ with $n_x$ and $n_y =1,2,3, \cdots$. The two $\sqrt{\dfrac{1}{2L}}$ factors are included to guarantee that $\psi$ is normalized: $\int |\psi(x,y)|^2\, dx\, dy = 1.$ Normalization allows $|\psi(x,y)|^2$ to be properly identified as a probability density for finding the electron at a point $x$, $y$. Shown in Figure 1.13 are plots of four such two-dimensional wave functions for $n_x$ and $n_y$ values of (1,1), (2,1), (1,2) and (2,2), respectively. Note that the functions vanish on the boundaries of the box, and notice how the number of nodes (i.e., zeroes encountered as the wave function oscillates from positive to negative) is related to the $n_x$ and $n_y$ quantum numbers and to the energy. This pattern of more nodes signifying higher energy is one that we encounter again and again in quantum mechanics and is something the student should be able to use to guess the relative energies of wave functions when their plots are at hand. Finally, you should also notice that, as in the one-dimensional box case, any attempt to classically interpret the probabilities $P(x,y)$ corresponding to the above quantum wave functions will result in failure. As in the one-dimensional case, the classical $P(x,y)$ would be constant along slices of fixed $x$ and varying $y$ or slices of fixed $y$ and varying $x$ within the box because the speed is constant there. However, the quantum $P(x,y)$ plots, at least for small quantum numbers, are not constant. For large $n_x$ and $n_y$ values, the quantum $P(x,y)$ plots will again, via the quantum-classical correspondence principle, approach the (constant) classical $P(x,y)$ form except near the classical turning points (i.e., near the edges of the two-dimensional box). If, instead of being confined to a rectangular corral, the electron were constrained to lie within a circle of radius $R$, the Schrödinger equation is more favorably expressed in polar coordinates $(r, \theta)$.
Transforming the partial derivatives appearing in the Schrödinger equation $- \dfrac{\hbar^2}{2m} (\frac{\partial^2}{\partial x^2} +\frac{\partial^2}{\partial y^2})\psi(x,y) +V(x,y) \psi(x,y) = E\psi(x,y)$ into polar coordinates and realizing that the potential depends on $r$ but not on $\theta$ gives $- \dfrac{\hbar^2}{2m} (\frac{1}{r}\frac{\partial}{\partial r}(r\frac{\partial}{\partial r}) +\frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}) \psi(r, \theta) + V (r) \psi(r, \theta) = E\psi(r, \theta).$ Again using separation of variables to substitute $\psi(r, \theta) = A(r) B(\theta)$ into the Schrödinger equation and dividing by $AB$, we obtain $- \frac{1}{A} \dfrac{\hbar^2}{2m} (\frac{1}{r}\frac{\partial}{\partial r}(r\frac{\partial}{\partial r})A(r)) +V_0 - \frac{1}{B} \dfrac{\hbar^2}{2m} (\frac{1}{r^2}\frac{\partial^2 B(\theta)}{\partial \theta^2}) = E$ where $V_0$ is the value of the potential inside the circular region. The first two terms on the left and the $E$ on the right side contain no reference to $\theta$, so the quantity $\dfrac{1}{B} \dfrac{\partial^2 B(\theta)}{\partial \theta^2}$ must be independent of $\theta$: $\frac{1}{B} \frac{\partial^2 B(\theta)}{\partial \theta^2} = c$ Moreover, because the coordinates $(r, \theta)$ and $(r, \theta +2\pi)$ describe the same point in space, $B(\theta)$ must obey $B(\theta) = B(\theta +2 \pi).$ The solutions to the above differential equation for $B(\theta)$ subject to the periodicity condition are $B(\theta) = \frac{1}{\sqrt{2\pi}} \exp(\pm in \theta); \; n = 0, 1, 2, \ldots$ This means that the equation for the radial part of the wave function is $- \frac{1}{A} \dfrac{\hbar^2}{2m} (\frac{1}{r}\frac{\partial}{\partial r}(r\frac{\partial}{\partial r})A(r)) +V_0 + \dfrac{\hbar^2}{2m} \frac{n^2}{r^2} = E$ or $r^2 \frac{d^2A}{dr^2} + r \frac{dA}{dr} - n^2 A +\frac{2mr^2}{\hbar^2} (E-V_0)A = 0 .$ This differential equation is probably not familiar to you, but it turns out this is the equation obeyed by so-called Bessel functions. The Bessel functions labeled $J_n(ax)$ obey $x^2 \frac{d^2J}{dx^2} +x \frac{dJ}{dx} - n^2 J + a^2 x^2 J = 0$ so, our $A$ function is $A(r) = J_n\left(\sqrt{\frac{2m(E-V_0)}{\hbar^2}}r\right).$ The full wave functions are then $\psi(r, \theta) = A(r) B(\theta) = N J_n\left(\sqrt{\frac{2m(E-V_0)}{\hbar^2}}r\right) \frac{1}{\sqrt{2\pi}} \exp(\pm in \theta)$ where $N$ is a normalization constant. The energy eigenvalues $E_{j,n}$ cannot be expressed analytically as in the particle-in-a-box system (where we used knowledge of the zeros of the sine function to determine $E_n$). However, knowing that $A(r)$ must vanish at $r = R$, we can use tables (for example, see Kreyszig, E. Advanced Engineering Mathematics, 8th ed.; John Wiley and Sons, Inc.: New York, 1999) that give the values of $x$ at which $J_n(x)$ vanishes to determine the set of eigenvalues associated with each value of the angular momentum quantum number $n$. In the table shown below, we list the first five values at which $J_0$, $J_1$, and $J_2$ vanish.

Values of $x$ at which $J_n(x)$ vanish for $n = 0, 1$, and $2$
j=1 j=2 j=3 j=4 j=5
n=0 2.405 5.520 8.654 11.792 14.931
n=1 3.832 7.016 10.173 13.324 16.471
n=2 5.136 8.417 11.620 14.796 17.960

If we call the values at which $J_n(x)$ vanishes $z_{n,j}$, then the energies are given as $E_{n,j} = V_0 + \dfrac{(z_{n,j})^2 \hbar^2}{2mR^2}.$ From the ordering of the $z_{n,j}$ values shown in the table above, we can see that the ordering of the energy levels will be $z_{0,1}$, $z_{1,1}$, $z_{2,1}$, $z_{0,2}$, $z_{1,2}$, $z_{2,2}$, and so forth, regardless of the size of the circle $R$ or the mass of the particle $m$.
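The zeros $z_{n,j}$ need not be read from a table; they can also be generated directly, as in the following sketch, which uses SciPy's `jn_zeros` to build and order the lowest circular-well levels. The reduced units $\hbar = m = R = 1$ and $V_0 = 0$ are an illustrative choice.

```python
import numpy as np
from scipy.special import jn_zeros

# E_{n,j} = V0 + z_{n,j}^2 hbar^2 / (2 m R^2), where z_{n,j} is the j-th zero of J_n.
hbar, m, R, V0 = 1.0, 1.0, 1.0, 0.0   # illustrative reduced units

levels = []
for n in range(3):                              # angular quantum number n = 0, 1, 2
    for j, z in enumerate(jn_zeros(n, 3), 1):   # first three zeros of J_n
        levels.append((V0 + z**2 * hbar**2 / (2.0 * m * R**2), n, j))

for E, n, j in sorted(levels):
    # ordering matches z_{0,1}, z_{1,1}, z_{2,1}, z_{0,2}, ...
    print(f"n={n} j={j}  E = {E:7.3f}")
```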
The state with $n = -1$ has the same energy as that with $n = 1$; likewise, $n = -2$ has the same energy as $n=2$. So, all but the $n = 0$ states are doubly degenerate; the only difference between such pairs of states is the sense of the angular momentum terms $\exp(\pm in \theta)$. These energy levels depend on both the angular momentum quantum number $n$ and the radial quantum number $j$, and they depend upon $R$ much like the particle-in-a-box energies depend on the box length $L$. In Figure 1.13a we show plots of the probability densities $|\psi(r,\theta)|^2$ for $n = 0, 1$, and $2$ and for $j = 1, 3$, and $5$ to illustrate how the number of radial nodes increases as $j$ increases. The character of $|\psi(r,\theta)|^2$ also changes with $n$. For $n = 0$, there is high amplitude for the particle being in the center of the circle, but for $n > 0$, there is no amplitude in the center. This is analogous to what one finds for atomic orbitals; $s$ orbitals have non-zero amplitude at the nucleus, but $p$, $d$, and higher orbitals do not. Let’s examine a few more easy problems that can be solved analytically to some degree. This will help illustrate how boundary conditions generate quantization and how the number of quantum numbers depends on the dimensionality of the problem. When considering a particle of mass $m$ moving in three dimensions but constrained to remain within a sphere of radius $R$, we replace the three Cartesian coordinates $x, y,$ and $z$ by the spherical coordinates $r$, $\theta$, and $\phi$. Doing so changes the Schrödinger equation’s kinetic energy terms into what we show below $-\frac{\hbar^2}{2mr^2}\left(\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right)\right) -\frac{\hbar^2}{2m} \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial \psi}{\partial\theta}\right) -\frac{\hbar^2}{2m}\frac{1}{r^2\sin^2\theta}\frac{\partial^2 \psi}{\partial\phi^2}+ V(r)\psi = E\psi.$ Taking the potential to be $V_0$ (a constant) for $0 \le r \le R$, and infinite for $r > R$, we can again use separation of variables to progress in solving this three-dimensional differential equation. We substitute $\psi(r,\theta,\phi) = Y_{L,M} (\theta,\phi) F(r)$ into the Schrödinger equation and take into account that the so-called spherical harmonic functions $Y_{L,M} (\theta,\phi)$ obey the following: $\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial Y_{L,M}}{\partial\theta}\right)+ \frac{1}{\sin^2\theta}\frac{\partial^2 Y_{L,M}}{\partial\phi^2}= - L(L+1) Y_{L,M}.$ This reduces the Schrödinger equation to an equation for the radial function $F(r)$: $-\frac{\hbar^2}{2mr^2}\left(\frac{\partial}{\partial r}\left(r^2\frac{\partial F}{\partial r}\right)\right) + \frac{\hbar^2}{2mr^2}L(L+1)F + V_0 F = EF.$ Again, this equation is probably not familiar to you, but it can be recast in a way that makes it equivalent to the equation obeyed by so-called spherical Bessel functions $x^2\frac{d^2j_L(x)}{dx^2} +2x \frac{dj_L(x)}{dx} +[x^2 -L(L+1)] j_L(x) = 0$ by taking $x = \sqrt{\frac{2m(E-V_0)}{\hbar^2}} r.$ The result is that the wave functions for this problem reduce to $\psi(r,\theta,\phi) = N Y_{L,M} (\theta,\phi) j_L\bigg(\sqrt{\frac{2m(E-V_0)}{\hbar^2}} r\bigg)$ where $N$ is a normalization constant. The energies are determined by requiring $\psi(r,\theta,\phi)$ to vanish at $r = R$, which is analogous to insisting that the Bessel function $J_n$ vanish at $r = R$ in the earlier two-dimensional problem we studied.
The values of $x$ (call them $z_{L,n}$) at which $j_L(x)$ vanishes again can be found in various tabulations, including that cited earlier. Several of these values are tabulated below for illustration.

Values of $x$ at which $j_L(x)$ vanish for $L = 0, 1, 2, 3$, and $4$
n=1 n=2 n=3 n=4
L=0 3.142 6.283 9.425 12.566
L=1 4.493 7.725 10.904 14.066
L=2 5.763 9.095 12.323 15.515
L=3 6.988 10.417 13.698 16.924
L=4 8.183 11.705 15.040 18.301

From the values of $z_{L,n}$, one finds the energies from $E_{L,n} = V_0 + \frac{(z_{L,n})^2\hbar^2}{2mR^2}.$ Again, we see how the energy depends on the size of the constraining region (characterized by $R$) very much in the same way as in the earlier systems. We also see that $E$ depends on the angular momentum quantum number $L$ (much as it did in the preceding example) and on the mass of the particle. However, the energy ordering of these levels is different from what we have seen earlier, as reflected in the ordering of the $z_{L,n}$ values shown in the above table. The energies appear in the order ($L=0$, $n=1$; $L=1$, $n=1$; $L=2$, $n=1$; $L=0$, $n=2$; $L=3$, $n=1$; $L=1$, $n=2$; and so on), and this is true for any size sphere $R$ and any particle mass $m$. If, instead of being constrained to move within a spherical volume, the particle is constrained to move on the surface of a sphere of radius $R$, the radial coordinate is fixed at $r = R$ and the Schrödinger equation becomes $-\frac{\hbar^2}{2m} \frac{1}{R^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial \psi}{\partial\theta}\right) -\frac{\hbar^2}{2m}\frac{1}{R^2\sin^2\theta}\frac{\partial^2 \psi}{\partial\phi^2}+ V_0\psi = E\psi.$ Using $\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial Y_{L,M}}{\partial\theta}\right)+ \frac{1}{\sin^2\theta}\frac{\partial^2 Y_{L,M}}{\partial\phi^2}= - L(L+1) Y_{L,M}.$ we can see that the wave functions are the spherical harmonics and the energies are given by $E_{L,M} = V_0 +\frac{L(L+1)\hbar^2}{2mR^2}$ Note that the energies depend on $L$ but not on the $M$ quantum number. So, each state belonging to level $L$ is $2L+1$-fold degenerate because $M$ ranges from $-L$ to $L$. Finally, if instead of being constrained to move within a circle of radius $R$, the particle were constrained to move on the surface of the circle, the two-dimensional Schrödinger equation treated earlier would reduce to $- \frac{\hbar^2}{2mR^2} \frac{\partial^2\psi(\theta)}{\partial \theta^2} + V_0 \psi(\theta) = E\psi(\theta).$ The solutions are the familiar functions $\psi(\theta) = \sqrt{(1/2 \pi)} \exp(in\theta)$ with $n = 0, \pm 1, \pm 2, \ldots$ and the energies are $E_n = \frac{n^2 \hbar^2}{2mR^2} + V_0.$ Note that the quantization of energy arises because the angular momentum is quantized to be $n\hbar$; this condition arose, in turn, from the condition that $\psi(\theta) = \psi(\theta +2\pi).$ As with the case of a particle moving within the circular region, the states with $n > 0$ are doubly degenerate; the difference between pairs of such states reflecting the sense of their angular momentum. These model problems will be seen in Chapter 2 to be very useful representations of situations that arise when an electron is constrained within or on the surface of various nanoscopic particles.
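Before moving on, note that the spherical-well energy formula is easy to evaluate programmatically. The sketch below simply reuses the $z_{L,n}$ values from the table above, again with illustrative units $\hbar = m = R = 1$ and $V_0 = 0$, and prints the levels in increasing order so the $(L, n)$ ordering just described can be verified.

```python
# Spherical-well levels E_{L,n} = V0 + z_{L,n}^2 hbar^2 / (2 m R^2),
# using the z_{L,n} values quoted in the table above.
z = {0: [3.142, 6.283, 9.425, 12.566],
     1: [4.493, 7.725, 10.904, 14.066],
     2: [5.763, 9.095, 12.323, 15.515],
     3: [6.988, 10.417, 13.698, 16.924],
     4: [8.183, 11.705, 15.040, 18.301]}

hbar, m, R, V0 = 1.0, 1.0, 1.0, 0.0   # illustrative reduced units

levels = sorted((V0 + zz**2 * hbar**2 / (2.0 * m * R**2), L, n + 1)
                for L, zs in z.items() for n, zz in enumerate(zs))
for E, L, n in levels[:8]:
    # order: (L=0,n=1), (L=1,n=1), (L=2,n=1), (L=0,n=2), (L=3,n=1), (L=1,n=2), ...
    print(f"L={L} n={n}  E = {E:7.3f}")
```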
For now, they were discussed to illustrate how separations of variables can sometimes be used to decompose the Schrödinger equation into one-dimensional ordinary differential equations and to show how it is the boundary conditions (either constraining $\psi$ to vanish at certain distances or insisting that $\psi$ be periodic when appropriate) that produce the quantization. It is important to note that it is when a particle is spatially constrained (e.g., when its wave function was forced to vanish at two locations $x = 0$ and $x = L_x$) that quantized energy levels result. When the particle is not so spatially trapped, its energy will not be quantized. You will see this behavior over and over as we explore other models for electronic, vibrational, and rotational motions in molecules. Quantized Action Can Also be Used to Derive Energy Levels There is another approach that can be used to find energy levels and is especially straightforward to use for systems whose Schrödinger equations are separable. The so-called classical action (denoted $S$) of a particle moving with momentum $\textbf{p}$ along a path leading from initial coordinate $\textbf{q}_i$ at initial time $t_i$ to a final coordinate $\textbf{q}_f$ at time $t_f$ is defined by: $S = \int_{\textbf{q}_i;t_i}^{\textbf{q}_f;t_f}\textbf{p}\cdot\textbf{dq} .$ Here, the momentum vector $\textbf{p}$ contains the momenta along all coordinates of the system, and the coordinate vector $\textbf{q}$ likewise contains the coordinates along all such degrees of freedom. For example, in the two-dimensional particle-in-a-box problem considered above, $\textbf{q} = (x, y)$ has two components as does $\textbf{p} = (p_x, p_y)$, and the action integral is: $S =\int_{x_i;y_i;t_i}^{x_f;y_f;t_f}(p_xdx+p_ydy).$ In computing such actions, it is essential to keep in mind the sign of the momentum as the particle moves from its initial to its final positions. The examples given below will help clarify these matters and will show how to apply the idea. For systems for which the Hamiltonian is separable, the action integral decomposes into a sum of such integrals, one for each degree of freedom. In the two-dimensional example, the additivity of $H$: $H = H_x + H_y = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + V(x) + V(y)$ $= - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial y^2} + V(y)$ means that $p_x$ and $p_y$ can be independently solved for in terms of the potentials $V(x)$ and $V(y)$ as well as the energies $E_x$ and $E_y$ associated with each separate degree of freedom: $p_x = \pm\sqrt{2m(E_x-V(x))}$ $p_y = \pm\sqrt{2m(E_y-V(y))} ;$ the signs on $p_x$ and $p_y$ must be chosen to properly reflect the motion that the particle is actually undergoing at any instant of time. Substituting these expressions into the action integral yields: $S = S_x + S_y$ $= \int_{x_i;t_i}^{x_f;t_f} \pm\sqrt{2m(E_x-V(x))} dx + \int_{y_i;t_i}^{y_f;t_f} \pm\sqrt{2m(E_y-V(y))} dy.$ The relationship between these classical action integrals and the existence of quantized energy levels has been shown to involve equating the classical action for motion that is periodic between a left and right turning point, as for a classical particle undergoing periodic vibrational motion, to the following multiple of Planck's constant: $S_{\text{closed}} = \int_{q_i;t_i}^{q_f;t_f} p\,dq= \Big(n +\frac{1}{2}\Big) h,$ where the quantization index $n$ ranges from 0 to $\infty$ in steps of unity.
Alternatively, for motion in a closed angular path, as for a particle moving on a circular or elliptical path, the action quantization condition reads: $S_{\text{closed}} =\int_{q_i;t_i}^{q_f;t_f} p\,dq = nh,$ where again $n$ ranges from 0 to $\infty$ in steps of unity. When action-quantization as described above is applied to the so-called harmonic oscillator problem (this serves as the simplest reasonable model for vibration of a diatomic molecule AB) that we will study in quantum form later, one expresses the total energy as the sum of kinetic and potential energies $E = \frac{p^2}{2\mu} + \frac{k}{2} x^2$ where $\mu = \dfrac{m_Am_B}{m_A + m_B}$ is the reduced mass of the AB diatomic molecule, $k$ is the force constant describing the bond between A and B, $x$ is the bond-length displacement, and $p$ is the momentum associated with the bond length. The quantized action requirement then reads $\Big(n +\frac{1}{2}\Big) h = \int p dx = \int \sqrt{2\mu(E-k/2 x^2)} dx.$ This integral is carried out between $x = - \sqrt{2E/k}$ and $x = \sqrt{2E/k}$, the left and right turning points of the oscillatory motion, and back again to form a closed path. Carrying out this integral and equating it to $\Big(n +\frac{1}{2}\Big) h$ gives the following expression for the energy $E$: $E = \Big(n +\frac{1}{2}\Big) \hbar \sqrt{\dfrac{k}{\mu}} .$ If the quantum number $n$ is allowed to assume integer values ranging from $n = 0$, 1, 2, to infinity, these energy levels agree with the full quantum treatment’s results that we will obtain later. For an example of applying this approach to a problem involving motion along a closed loop, let’s consider the free (i.e., with no potential affecting its angular motion) rotation of a diatomic molecule AB having fixed bond length $R$. The rotational energy can be written as $E=\frac{p_\phi^2}{2\mu R^2}$ where $p_\phi$ is the momentum associated with rotation and $\mu$ is the reduced mass of the AB molecule. Solving for $p_\phi$ and inserting this into the action-quantization equation appropriate for motion along a closed loop gives $\int_{\phi=0}^{\phi=2\pi} p_\phi d\phi=\int_{\phi=0}^{\phi=2\pi} \sqrt{2\mu R^2E} d\phi = \sqrt{2\mu R^2 E}(2\pi) = nh.$ Solving for the energy $E$ then gives $E=\frac{(nh)^2}{(2\pi)^2 2\mu R^2}=\frac{n^2\hbar^2}{2\mu R^2},$ which is exactly the same result as we obtained earlier when solving the Schrödinger equation for the motion of a particle moving on a circle. Now, let’s apply action quantization to each of the independent coordinates of the two-dimensional particle-in-a-box problem. The two separate action quantization conditions read: $\Big(n_x +\dfrac{1}{2}\Big) h =\int_{x=0}^{x=L_x} \sqrt{2m(E_x-V(x))} dx +\int_{x=L_x}^{x=0} -\sqrt{2m(E_x-V(x))} dx$ $\Big(n_y +\dfrac{1}{2}\Big) h = \int_{y=0}^{y=L_y} \sqrt{2m(E_y-V(y))} dy +\int_{y=L_y}^{y=0} -\sqrt{2m(E_y-V(y))} dy .$ Notice that the signs of the momenta are positive in each of the first integrals appearing above (because the particle is moving from $x = 0$ to $x = L_x$, and analogously for $y$-motion, and thus has positive momentum) and negative in each of the second integrals (because the motion is from $x = L_x$ to $x = 0$ (and analogously for $y$-motion) and thus the particle has negative momentum). Within the region bounded by $0 \le x\le L_x; 0 \le y\le L_y$, the potential is constant and can be taken as zero (this just gives our reference point for total energy).
Using this fact, and reversing the upper and lower limits, and thus the sign, in the second integrals above, one obtains: $\Big(n_x +\dfrac{1}{2}\Big) h = 2\int_{x=0}^{x=L_x} \sqrt{2mE_x} dx= 2\sqrt{2mE_x} L_x$ $\Big(n_y +\dfrac{1}{2}\Big) h = 2\int_{y=0}^{y=L_y} \sqrt{2mE_y} dy= 2\sqrt{2mE_y} L_y.$ Solving for $E_x$ and $E_y$, one finds: $E_x =\frac{[\big(n_x +\dfrac{1}{2}\big)h]^2}{8mL_x^2}$ $E_y =\frac{[\big(n_y +\dfrac{1}{2}\big)h]^2}{8mL_y^2} .$ These are not the same quantized energy levels that arose when the wave function boundary conditions were matched at $x = 0$, $x = L_x$ and $y = 0$, $y = L_y$. In the Schrödinger equation approach, the energy expressions did not have the $+\dfrac{1}{2}$ factor that appears in the above action-based result. It turns out that, for potentials that are defined in a piecewise manner, as the particle-in-a-box potential is (i.e., the potential undergoes an infinite jump at $x = 0$ and $x = L$), the action quantization condition has to be modified. An example of how and why one has to make this modification is given in a paper from Prof. Bill Miller’s group (J. E. Adams and W. H. Miller, J. Chem. Phys. 67, 5775-5778 (1977)), but I will not discuss it further here because its details are beyond the level of this text. Suffice it to say that for periodic motion between two turning points on a smooth (i.e., non-piecewise) potential, $(n+\dfrac{1}{2})h$ is the correct action quantization value. For angular motion on a closed loop, $nh$ is the proper value. But, for periodic motion between turning points on a piecewise potential, the modifications discussed in the above reference must be applied to cause action quantization to reproduce the correct quantum result. The use of action quantization as illustrated above has become a very important tool. It has allowed scientists to make great progress toward bridging the gap between classical and quantum descriptions of molecular dynamics. In particular, by using classical concepts such as trajectories and then imposing quantized-action conditions, people have been able to develop so-called semi-classical models of molecular dynamics. In such models, one is able to retain a great deal of classical understanding while building in quantum effects such as energy quantization, zero-point energies, and interferences. Both at my Theory Page web site and from papers accessed on the web site of Professor William H. Miller, one of the pioneers of semi-classical theory as applied to chemistry, you can learn more about this subject. Before leaving this section, it is worth discussing a bit more the energy and angular momentum quantization that occurs when treating free one-dimensional rotational motion of a particle on a circle or a linear rigid molecule constrained to lie on a plane. When we used action quantization to address this kind of problem, we obtained quantized energies $E=\frac{n^2\hbar^2}{2\mu R^2}$ which, through the energy expression given in terms of angular momentum $E=\frac{p_\phi^2}{2\mu R^2},$ implies that the angular momentum itself is quantized $p_\phi=\pm n\hbar.$ This is the same result we obtain when we seek eigenfunctions and eigenvalues of the quantum-mechanical $\textbf{L}_z$ angular momentum operator. As we showed earlier, this operator, when computed as the $z$-component of $\textbf{R} \times \textbf{p}$, can be written in polar $(r, \theta, \phi)$ coordinates as $L_z = - i \hbar \dfrac{d}{d\phi}.$ The eigenfunctions of this operator have the form $\exp(ia\phi)$, and the eigenvalues are $a\hbar$.
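Before continuing the angular momentum discussion, note that the harmonic-oscillator action bookkeeping above is easy to check numerically: the sketch below evaluates the closed-path action $\oint p\,dx$ at the energies $E = (n + \tfrac{1}{2})\hbar\omega$ and confirms that it equals $(n + \tfrac{1}{2})h$. The unit choices $\mu = k = \hbar = 1$ are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Check that the closed-path harmonic-oscillator action, evaluated at
# E = (n + 1/2) hbar omega, equals (n + 1/2) h.
mu, k, hbar = 1.0, 1.0, 1.0          # illustrative reduced units
h = 2.0 * np.pi * hbar
omega = np.sqrt(k / mu)

for n in range(4):
    E = (n + 0.5) * hbar * omega
    xt = np.sqrt(2.0 * E / k)        # classical turning points at +/- xt
    p = lambda x: np.sqrt(np.maximum(2.0 * mu * (E - 0.5 * k * x**2), 0.0))
    S_half, _ = quad(p, -xt, xt)     # left turning point -> right turning point
    S = 2.0 * S_half                 # and back again: closed path
    print(f"n={n}:  S/h = {S / h:.6f}  (expected {n + 0.5})")
```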
Returning to the periodicity argument: because geometries with azimuthal angles equal to $\phi$ or equal to $\phi + 2\pi$ are exactly the same geometries, the function $\exp(ia\phi)$ should be exactly the same as $\exp(ia(\phi+2\pi))$. This can only be the case if $a$ is an integer. Thus, one concludes that only integral multiples of $\hbar$ can be allowed values of the $z$-component of angular momentum. Experimentally, one measures the $z$-component of an angular momentum by placing the system possessing the angular momentum in a magnetic field of strength $B$ and observing how many $z$-component energy states arise. This splitting in energy levels is termed the Zeeman effect. For example, a boron atom with one unpaired electron in its $2p$ orbital has one unit of orbital angular momentum, so one finds three separate $z$-component values, which are usually denoted $m = -1, m=0,$ and $m=1$. Another example is offered by the scandium atom with one unpaired electron in a $d$ orbital; this atom’s states split into five ($m = -2, -1, 0, 1, 2$) $z$-component states. In each case, one finds $2L + 1$ values of the $m$ quantum number, and, because $L$ is an integer, $2L + 1$ is an odd integer. Both of these observations are consistent with the expectation that only integer values can occur for $L_z$ eigenvalues as obtained from action quantization and from the boundary condition $\exp(ia\phi) = \exp(ia(\phi+2\pi))$. However, it has been observed that some species do not possess 3 or 5 or 7 or 9 $z$-component states but an even number of such states. In particular, electrons, protons, or neutrons are observed to have only two $z$-component eigenvalues. This also is observed in, for example, the boron atom mentioned above, if one examines the further splittings of the $2p$ ($m = -1, 0,$ and $1$) levels caused by the magnetic field’s action on the unpaired electron’s spin. Because, as we discuss later in this text, all angular momenta have $z$-component eigenvalues that are separated from one another by unit multiples of $\hbar$, one is forced to conclude that these three fundamental building-block particles (electrons, protons, and neutrons) have $z$-component eigenvalues of $\dfrac{1}{2} \hbar$ and $-\dfrac{1}{2} \hbar$. The appearance of half-integral angular momenta is not consistent with the action-quantization result or the observation made earlier that $\phi$ and $\phi + 2\pi$ correspond to exactly the same physical point in coordinate space, which, in turn, implies that only full-integer angular momenta are possible. The resolution of the above paradox (i.e., how can half-integer angular momenta exist?) involves realizing that some angular momenta correspond not to the $\textbf{R} \times \textbf{p}$ angular momenta of a physical mass rotating, but, instead, are intrinsic properties of certain particles. That is, the intrinsic angular momenta of electrons, protons, and neutrons cannot be viewed as arising from rotation of some mass that comprises these particles. Instead, such intrinsic angular momenta are fundamental built-in characteristics of these particles. For example, the two $\dfrac{1}{2} \hbar$ and $-\dfrac{1}{2} \hbar$ angular momentum states of an electron, usually denoted $\alpha$ and $\beta$, respectively, are two internal states of the electron that are degenerate in the absence of a magnetic field but which represent two distinct states of the electron. Analogously, a proton has $\dfrac{1}{2} \hbar$ and $-\dfrac{1}{2} \hbar$ states, as do neutrons.
All such half-integral angular momentum states cannot be accounted for using classical mechanics but are known to arise in quantum mechanics. This means that, when we teach introductory chemistry to young students, it is not correct to say that the up and down ($\alpha$ and $\beta$) spin states of an electron can be viewed in terms of the electron’s mass spinning clockwise or counterclockwise around some axis. Such spinning-mass angular momenta can only possess integer values; half-integer angular momenta cannot and should not be described in terms of spinning masses. Action Can Also be Used to Generate Wave Functions Action integrals computed from classical descriptions of motion on potential energy surfaces can also be used to generate approximate quantum wave functions. So doing offers yet another avenue for making connection between the classical and quantum worlds. To see how such a connection can arise directly from the Schrödinger equation, we begin with the time-independent Schrödinger equation for a single particle of mass $m$ moving on a potential $V(r)$ that depends on the particle’s position coordinates $r$: $E\Psi(r)=-\frac{\hbar^2}{2m}\nabla^2\Psi(r)+V(r)\Psi(r).$ Then, we express the complex wave function as a constant real amplitude $A$ multiplied by a complex phase which we write as: $\Psi(r)=A\exp(iW(r)/\hbar).$ Substituting this expression for $\Psi$ into the Schrödinger equation gives an equation for $W$: $E=V+\frac{(\nabla W)^2}{2m}-i\hbar\frac{\nabla^2W}{2m}.$ This equation contains both real and imaginary components (n.b., $W$ itself is complex). It is usually solved by assuming $W(r)$ can be expanded in a power series in the variable $\hbar$. This expansion is motivated by noting that if the $i\hbar\dfrac{\nabla^2 W}{2m}$ term in the above equation is neglected, the resulting equation $0=V-E+\frac{(\nabla W)^2}{2m}$ would make sense if $\nabla W(r)$ were equal to the classical momentum of the particle. So, taking the $\hbar\rightarrow 0$ limit of the equation for $W(r)$ appears to reduce this quantum mechanics equation to a classical result in which $\nabla W(r) = p(r)$. So, substituting $W(r)=W_0(r)+\hbar W_1(r)+\hbar^2W_2(r)+\cdots$ into the above equation for $W(r)$ and gathering together all terms of a given power in $\hbar$ produces equations for the various $W_n(r)$, the first two of which read: $0=2m(V-E)+(\nabla W_0)^2$ and $0=2\nabla W_0\cdot \nabla W_1-i\nabla^2 W_0.$ To simplify further discussion of this so-called semi-classical wave function theory, let us restrict attention to the case in which there is only one spatial coordinate. For the two- or three-dimensional cases, $\nabla W_0$ and $\nabla W_1$ are vector quantities, and the solution of these equations is considerably more complicated, especially if the potential $V(\textbf{r})$ cannot be separated into additive contributions from each of the variables. When there is only one spatial coordinate, $\nabla W_0$ and $\nabla W_1$ are scalar quantities. The first equation can be solved for $W_0(r)$ and gives two independent solutions (i.e., those corresponding to the $\pm$ sign): $W_0(r)=\pm\int\sqrt{2m(E-V(r^\prime))}\,dr^\prime,$ each of which will be real when $E > V(r)$ (i.e., in classically allowed regions of space) and imaginary when $E < V(r)$ (i.e., in classically forbidden regions). Notice that $W_0(r)$ contains an integrand equal to the classical momentum $p(r)=\sqrt{2m(E-V(r))}$.
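Before solving for $W_1$, it may be reassuring to verify the equation for $W$ symbolically. The following sketch substitutes $\Psi = A\exp(iW/\hbar)$ into the one-dimensional Schrödinger equation using SymPy and recovers the real-plus-imaginary equation quoted above; it is purely a consistency check, with all symbols generic.

```python
import sympy as sp

# Verify: substituting Psi = A exp(i W / hbar) into the 1D Schrödinger equation
# should reproduce E = V + (W')^2/(2m) - i hbar W''/(2m).
r = sp.symbols('r', real=True)
hbar, m, A, E = sp.symbols('hbar m A E', positive=True)
W = sp.Function('W')(r)
V = sp.Function('V')(r)

Psi = A * sp.exp(sp.I * W / hbar)
# Schrödinger residual: E Psi + (hbar^2 / 2m) Psi'' - V Psi, which must vanish
residual = E * Psi + hbar**2 / (2 * m) * sp.diff(Psi, r, 2) - V * Psi
print(sp.simplify(sp.expand(residual / Psi)))
# prints an expression equivalent to E - V - (W')^2/(2m) + I*hbar*W''/(2m),
# i.e., it vanishes exactly when E = V + (W')^2/(2m) - i hbar W''/(2m)
```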
The equation for $W_1(r)$ can also be solved: $W_1(r)=\frac{i}{2}\ln[\sqrt{2m(E-V(r))}].$ So, through first-order in $\hbar$, the semi-classical wave functions are $\Psi(r)=A\exp\left(\pm\frac{i}{\hbar}\int^r\sqrt{2m(E-V(r^\prime))}dr^\prime\right)\exp\left(-\frac{1}{2}\ln\left[\sqrt{2m(E-V(r))}\right]\right)$ $=\frac{1}{\sqrt{\sqrt{2m(E-V(r))}}}A\exp\left(\pm\frac{i}{\hbar}\int^r\sqrt{2m(E-V(r^\prime))}dr^\prime\right)$ These pairs of wave functions are often expressed as $\Psi=\frac{1}{\sqrt{\sqrt{2m(E-V(r))}}}A\exp\left(\pm\frac{i}{\hbar}\int^r\sqrt{2m(E-V(r^\prime))}dr^\prime\right)$ in regions of space where $E > V$, and $\Psi=\frac{1}{\sqrt{\sqrt{2m(-E+V(r))}}}A\exp\left(\pm\frac{1}{\hbar}\int^r\sqrt{2m(-E+V(r^\prime))}dr^\prime\right)$ in the classically forbidden regions where $V > E$. Notice that the wave functions in the classically allowed regions have probability densities given by $\Psi^*\Psi=\frac{A^2}{\sqrt{2m(E-V(r))}}$ which is exactly the classical probability density we discussed earlier in this Chapter. The probability is inversely proportional to the speed of the particle at location $r$, and has the same singularity as the classical probability at turning points (where $V = E$). In contrast, the probability densities in regions where $V > E$ either grow or decay exponentially within these classically forbidden regions. Let’s see how these semi-classical wave functions can be applied to some of the model problems we discussed earlier. For the one-dimensional particle-in-a-box problem, the two exponentially growing and decaying functions are not needed because in the regions $r < 0$ and $r > L$, the wave function can be taken to vanish. Within the region $0 \le r \le L$, there are two independent wave functions $\Psi=\frac{1}{\sqrt{\sqrt{2m(E-V(r))}}}A\exp\left(\pm\frac{i}{\hbar}\int^r\sqrt{2m(E-V(r^\prime))}dr^\prime\right),$ and the potential $V(r^\prime)$ is constant (let’s call the potential in this region $V_0$). So, the integration appearing in these two wave functions can be carried out to give $\Psi=\frac{1}{\sqrt{\sqrt{2m(E-V(r))}}}A\exp\left(\pm\frac{ir}{\hbar}\sqrt{2m(E-V_0)}\right).$ We can combine these two functions to generate a function that will vanish at $r = 0$ (as it must for this particle-in-a-box problem): $\Psi=\frac{1}{\sqrt{\sqrt{2m(E-V(r))}}}A\left[\exp\left(\frac{ir}{\hbar}\sqrt{2m(E-V_0)}\right)-\exp\left(-\frac{ir}{\hbar}\sqrt{2m(E-V_0)}\right)\right].$ We can then use the condition that $\Psi$ must also vanish at $r = L$ to obtain an equation that specifies the energies $E$ that are allowed: $0=\left[\exp\left(\frac{iL}{\hbar}\sqrt{2m(E-V_0)}\right)-\exp\left(-\frac{iL}{\hbar}\sqrt{2m(E-V_0)}\right)\right]=2i\sin\left(\frac{L}{\hbar}\sqrt{2m(E-V_0)}\right),$ which means that $E=V_0+\frac{n^2\pi^2\hbar^2}{2mL^2}.$ These energies are exactly the same as we found when we solved the Schrödinger equation for this model problem. It is informative to note that these semi-classical wave functions, which are not exact because they were obtained by retaining only terms up to the first power of $\hbar$, were able to generate quantum nodal patterns (i.e., interferences) and quantized energy levels even though they contained classical concepts such as the momentum at various positions in space. It was by superimposing two functions having the same energy that nodal patterns were obtained. Let’s now consider what happens when we apply the semi-classical wave function to the harmonic oscillator problem also discussed earlier.
In this case, there are two classical turning points $r_1$ and $r_2$ at which $E = V(r)$. The semi-classical wave functions appropriate to the three regions (two classically forbidden and one classically allowed) are: $\Psi_1=\frac{1}{\sqrt{\sqrt{2m(-E+V(r))}}}A_1\exp\left(-\frac{1}{\hbar}\int_{r_2}^r\sqrt{2m(-E+V(r^\prime))}dr^\prime\right),r\ge r_2.$ $\Psi_2=\frac{1}{\sqrt{\sqrt{2m(-E+V(r))}}}A_2\exp\left(\frac{1}{\hbar}\int_{r_1}^r\sqrt{2m(-E+V(r^\prime))}dr^\prime\right),r\le r_1.$ $\Psi_3=\frac{1}{\sqrt{\sqrt{2m(E-V(r))}}}\left[A_3\exp\left(\frac{i}{\hbar}\int_{r_1}^r\sqrt{2m(E-V(r^\prime))}dr^\prime\right)-A_{3^\prime}\exp\left(-\frac{i}{\hbar}\int_{r}^{r_2}\sqrt{2m(E-V(r^\prime))}dr^\prime\right)\right],r_1\le r\le r_2.$ The first two decay exponentially within the two classically forbidden regions. The third is a combination of the two independent solutions within the classically allowed region, with the amplitudes of the two solutions defined by the coefficients $A_3$ and $A_{3^\prime}$. The amplitudes $A_1$ and $A_2$ multiply the wave functions in the two classically forbidden regions, and all four amplitudes as well as the energy $E$ must be determined by (i) normalizing the total wave function to obey $\int_{-\infty}^{\infty}\Psi^*\Psi dr=1$ and (ii) matching the wave functions $\Psi_1$ and $\Psi_3$ and their first derivatives at $r = r_1$, and the wave functions $\Psi_2$ and $\Psi_3$ and their first derivatives at $r = r_2$. Before addressing how this wave function matching might be accomplished, let me point out an interesting property of the factor entering into the exponential of the semi-classical wave function. We first use the two expressions $\frac{dW_0}{dr}=\pm\sqrt{2m(E-V(r))}$ and $\frac{dW_1}{dr}=\dfrac{i\dfrac{d\sqrt{2m(E-V)}}{dr}}{2\sqrt{2m(E-V)}}$ given above for the first two components of $W(r)$ and then make use of the harmonic form of $V(r)$ $V(r)=\frac{1}{2}kr^2.$ Next, we evaluate the integral of $\dfrac{dW}{dr}$ for a closed classical path in which the system moves from the left turning point $r_1=-\sqrt{\frac{2E}{k}}$ to the right turning point $r_2=\sqrt{\frac{2E}{k}}$ and back again to the left turning point. The contribution from integrating $\dfrac{dW_0}{dr}$ along this closed path is (n.b., the plus sign is used for the first part of the path because the particle has positive momentum, and the minus sign applies to the return part of the path when the particle has negative momentum): $W_0=\int_{r_1}^{r_2}\sqrt{2m\Big(E-\frac{1}{2}kr^2\Big)}dr-\int_{r_2}^{r_1}\sqrt{2m\Big(E-\frac{1}{2}kr^2\Big)}dr$ which is exactly the action integral we treated earlier in this Chapter when we computed the action for the classical harmonic oscillator. The contribution from integrating $\dfrac{dW_1}{dr}$ along this closed path can be evaluated by first writing $\frac{dW_1}{dr}=\dfrac{i\dfrac{d\sqrt{2m(E-1/2 kr^2)}}{dr}}{2\sqrt{2m(E-1/2 kr^2)}}=\frac{-ikr}{4(E-1/2 kr^2)}.$ The integral from $r_1$ to $r_2$ of this quantity can be carried out (using the substitution $r = \sqrt{2E/k}\, y$) as $\dfrac{-ik}{4}\int_{-\sqrt{2E/k}}^{\sqrt{2E/k}}\dfrac{rdr}{(E-1/2kr^2)}=\dfrac{-ik}{4}\int_{-1}^{1}\dfrac{\dfrac{2E}{k}ydy}{E(1-y^2)}=\dfrac{-i}{4}\int_{-1}^{1}\dfrac{ydy}{(1-y)(1+y)}.$ The evaluation of the integral remaining on the right-hand side can be done using contour integration (undergraduate students may not have encountered this subject within complex variable theory; I refer them to pp. 367-377 of Methods of Theoretical Physics, P. M. Morse and H. Feshbach, McGraw-Hill, New York (1953) or p.
113 Applied Complex Variables, J. W. Dettman, Macmillan Co. New York (1965)). The basic equation from contour integration says that an integral of the form $\int \dfrac{f(z)}{(z-a)}dz$, where $z = a$ is a singularity, is equal to $2\pi i f(a)$. Our integral has singularities at $y = 1$ and at $y = -1$, so there are two such contributions. The net result is that our integral reduces to $\frac{-i}{4}\int_{-1}^{1}\frac{ydy}{(1-y)(1+y)}=\frac{i}{4}2\pi i\left[\frac{1}{2}+\frac{-1}{-2}\right]=-\frac{\pi}{2}.$ So, the contribution to the integral of $\dfrac{dW_1}{dr}$ arising from $r_1$ to $r_2$ is equal to $-\pi/2$. The integral from $r_2$ back to $r_1$ gives another factor of $-\dfrac{\pi}{2}$. Combining the integral of $\dfrac{dW_0}{dr}$ and the integral of $\dfrac{dW_1}{dr}$ (the latter multiplied by $\hbar$ because $W = W_0 + \hbar W_1 + \cdots$) gives the following final result $W=\int_{r_1}^{r_2}\sqrt{2m\Big(E-\frac{1}{2}kr^2\Big)}dr-\int_{r_2}^{r_1}\sqrt{2m\Big(E-\frac{1}{2}kr^2\Big)}dr-\pi \hbar$ If the original Bohr quantization is applied to the integral of $\dfrac{dW}{dr}$ along a closed classical path: $W=nh,\; n=1,2,3,\cdots$ our result above then says that $nh=\int_{r_1}^{r_2}\sqrt{2m\Big(E-\frac{1}{2}kr^2\Big)}dr-\int_{r_2}^{r_1}\sqrt{2m\Big(E-\frac{1}{2}kr^2\Big)}dr-\frac{1}{2}h$ which is the same as $\int p(r)dr=\big(n+\dfrac{1}{2}\big)h.$ This means that the $\dfrac{1}{2}$ factor that arises in the action quantization condition for periodic motions between two turning points can be viewed as arising from the first quantum correction (i.e., the term first order in $\hbar$) to the semi-classical wave function. Recall that equating this classical action integral to $\big(n+\dfrac{1}{2}\big) h$ gave the correct (i.e., quantum) energies for this harmonic oscillator problem. We have seen how a semi-classical wave function can be defined, what its spatial probability density is, how it can build in interference (to achieve proper nodal patterns), and how quantizing its action can give the correct allowed energy levels. However, there is one issue we have not fully addressed. To solve for the coefficients $(A_1, \ldots, A_{3^\prime})$ multiplying the semi-classical wave functions in the classically allowed and forbidden regions, the wave functions $\Psi_1$ and $\Psi_3$ and their first derivatives must be matched at $r = r_1$, and the wave functions $\Psi_2$ and $\Psi_3$ and their first derivatives must be matched at $r = r_2$. Unfortunately, the details of this matching process are rather complicated and require examining in more detail the nature of the wave functions near the classical turning points where each of $\Psi_1$, $\Psi_2$, and $\Psi_3$ contains factors of the form $\sqrt{\sqrt{2m(-E+V(r))}}$ in their denominators. It should be clear that matching functions and their derivatives that contain such singularities pose special challenges. I will not go further into this matter here; rather, I refer the interested reader to pp. 268-279 of Quantum Mechanics, 3rd Ed., L. I. Schiff, McGraw-Hill, New York (1968) for a good treatment of this so-called WKB approach to the matching issue. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis)
The model problems discussed in this Chapter form the basis for chemists’ understanding of the electronic states of atoms, molecules, nano-clusters, and solids as well as the rotational and vibrational motions and energy levels of molecules. In this Chapter, you should have learned about the following things. 1. Free particle energies and wave functions and their densities of states, as applied to polyenes, electrons on surfaces, in solids, and in nanoscopic materials, and as applied to bands of orbitals in solids. 2. The tight-binding or Hückel model for chemical bonding. 3. The hydrogenic radial and angular wave functions. These same angular functions occur whenever one is dealing with a potential that depends only on the radial coordinate, not the angular coordinates. 4. Electron tunneling and quasi-bound resonance states. 5. Angular momentum including coupling two or more angular momenta, and angular momentum as applied to rotations of rigid molecules including rigid rotors, symmetric, spherical, and asymmetric top rotations. Why half-integral angular momenta cannot be thought of as arising from rotational motion of a physical body. 6. Vibrations of diatomic molecules including the harmonic oscillator and Morse oscillator models including harmonic frequencies and anharmonicity. 02: Model Problems That Form Important Starting Points The particle-in-a-box type problems provide important models for several relevant chemical situations The particle-in-a-box model for motion in one or two dimensions discussed earlier can obviously be extended to three dimensions. For two and three dimensions, it provides a crude but useful picture for electronic states on surfaces (i.e., when the electron can move freely on the surface but cannot escape to the vacuum or penetrate deeply into the solid) or in metallic crystals, respectively. I say metallic crystals because it is in such systems that the outermost valence electrons are reasonably well treated as moving freely rather than being tightly bound to a valence orbital on one of the constituent atoms or within chemical bonds localized to neighboring atoms. Free motion within a spherical volume such as we discussed in Chapter 1 gives rise to eigenfunctions that are also used in nuclear physics to describe the motions of neutrons and protons in nuclei. In the so-called shell model of nuclei, the neutrons and protons fill separate $s$, $p$, $d$, etc. orbitals (refer back to Chapter 1 to recall how these orbitals are expressed in terms of spherical Bessel functions and what their energies are) with each type of nucleon forced to obey the Pauli exclusion principle (i.e., to have no more than two nucleons in each orbital because protons and neutrons are fermions). For example, $^4He$ has two protons in $1s$ orbitals and two neutrons in $1s$ orbitals, whereas $^3He$ has two $1s$ protons and one $1s$ neutron. To remind you, I display in Figure 2.1 the angular shapes that characterize $s$, $p$, and $d$ orbitals. Figure 2.1. The angular shapes of $s$, $p$, and $d$ functions This same spherical box model has also been used to describe the valence electrons in quasi-spherical nano-clusters of metal atoms such as $Cs_n$, $Cu_n$, $Na_n$, $Au_n$, $Ag_n$, and their positive and negative ions. Because of the metallic nature of these species, their valence electrons are essentially free to roam over the entire spherical volume of the cluster, which renders this simple model rather effective.
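In fact, this simple model predicts that clusters with certain closed-shell electron counts are especially stable, an idea developed in the next paragraph. A short sketch of that shell-filling logic is given below: each $(L, n)$ level of the spherical box holds $2(2L+1)$ electrons, and filling levels in order of increasing $z_{L,n}$ gives the cumulative counts at which shells close. The $z_{L,n}$ values reproduce the spherical Bessel zeros tabulated earlier; the interpretation of the closed-shell counts as the observed magic numbers is the model's prediction for an idealized infinite spherical well.

```python
# Shell filling in the spherical-box cluster model: each (L, n) level holds
# 2(2L+1) electrons. z_{L,n} values are the spherical Bessel zeros quoted earlier.
z = {0: [3.142, 6.283, 9.425], 1: [4.493, 7.725, 10.904],
     2: [5.763, 9.095, 12.323], 3: [6.988, 10.417, 13.698],
     4: [8.183, 11.705, 15.040]}

levels = sorted((zz, L) for L, zs in z.items() for zz in zs)
electrons = 0
for zz, L in levels[:8]:
    electrons += 2 * (2 * L + 1)
    print(f"L={L}  z={zz:6.3f}  cumulative electrons = {electrons}")
# -> closed-shell counts 2, 8, 18, 20, 34, 40, 58, 68 for the infinite spherical well
```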
In this spherical-box model, one thinks of each valence electron being free to roam within a sphere of radius $R$ (i.e., having a potential that is uniform within the sphere and infinite outside the sphere). The orbitals that solve the Schrödinger equation inside such a spherical box are not the same in their radial shapes as the $s$, $p$, $d$, etc. orbitals of atoms because, in atoms, there is an additional attractive Coulomb radial potential $V(r) = -Ze^2/r$ present. In Chapter 1, we showed how the particle-in-a-sphere radial functions can be expressed in terms of spherical Bessel functions. In addition, the pattern of energy levels, which was shown in Chapter 1 to be related to the values of $x$ at which the spherical Bessel functions $j_L(x)$ vanish, is not the same as in atoms, again because the radial potentials differ. However, the angular shapes of the spherical box problem are the same as in atomic structure because, in both cases, the potential is independent of $\theta$ and $\phi$. As the orbital plots shown above indicate, the angular shapes of $s$, $p$, and $d$ orbitals display varying numbers of nodal surfaces. The $s$ orbitals have none, $p$ orbitals have one, and $d$ orbitals have two. Analogous to how the number of nodes was related to the total energy of the particle constrained to the $xy$ plane, the number of nodes in the angular wave functions indicates the amount of angular or orbital rotational energy. Orbitals of $s$ shape have no angular energy, those of $p$ shape have less than do $d$ orbitals, etc. It turns out that the pattern of energy levels derived from this particle-in-a-spherical-box model can offer reasonably accurate descriptions of what is observed experimentally. In particular, when a cluster (or cluster ion) has a closed-shell electronic configuration in which, for a given radial quantum number $n$, all of the $s$, $p$, $d$ orbitals associated with that $n$ are doubly occupied, nanoscopic metal clusters are observed to display special stability (e.g., lack of chemical reactivity, large electron detachment energy). Clusters that produce such closed-shell electronic configurations are sometimes said to have magic-number sizes. The energy level expression given in Chapter 1 $E_{L,n} = V_0 + (z_{L,n})^2 \dfrac{\hbar^2}{2mR^2} \tag{2.1}$ for an electron moving inside a sphere of radius $R$ (and having a potential relative to the vacuum of $V_0$) can be used to model the energies of electrons within metallic nano-clusters. Each electron occupies an orbital having quantum numbers $n$, $L$, and $M$, with the energies of the orbitals given above in terms of the zeros $\{z_{L,n}\}$ of the spherical Bessel functions. Spectral features of the nano-clusters are then determined by the energy gap between the highest occupied and lowest unoccupied orbital and can be tuned by changing the radius ($R$) of the cluster or the charge (i.e., number of electrons) of the cluster. Another very useful application of the model problems treated in Chapter 1 is the one-dimensional particle-in-a-box, which provides a qualitatively correct picture for $\pi$-electron motion along the $p_{\pi}$ orbitals of delocalized polyenes. The one Cartesian dimension corresponds to motion along the delocalized chain. In such a model, the box length $L$ is related to the carbon-carbon bond length $R$ and the number $N$ of carbon centers involved in the delocalized network via $L=(N-1) R$. In Figure 2.2, such a conjugated network involving nine centers is depicted.
In this example, the box length would be eight times the C-C bond length. The eigenstates $\psi_n(x)$ and their energies $E_n$ represent orbitals into which electrons are placed. In the example case, if nine $\pi$ electrons are present (e.g., as in the 1,3,5,7-nonatetraene radical), the ground electronic state would be represented by a total wave function consisting of a product in which the lowest four $\psi$'s are doubly occupied and the fifth $\psi$ is singly occupied: $\Psi = \psi_1 \alpha\psi_1\beta \psi_2 \alpha \psi_2 \beta \psi_3 \alpha \psi_3\beta \psi_4 \alpha \psi_4 \beta \psi_5 \alpha. \tag{2.2}$ The $z$-component spin angular momentum states of the electrons are labeled $\alpha$ and $\beta$ as discussed earlier. We write the total wave function above as a product wave function because the total Hamiltonian involves the kinetic plus potential energies of nine electrons. To the extent that this total energy can be represented as the sum of nine separate energies, one for each electron, the Hamiltonian allows a separation of variables $H \cong \sum_{j=1}^9 H(j) \tag{2.3}$ in which each $H(j)$ describes the kinetic and potential energy of an individual electron. Of course, the full Hamiltonian contains electron-electron Coulomb interaction potentials $e^2/r_{i,j}$ that cannot be written in this additive form. However, as we will treat in detail in Chapter 6, it is often possible to approximate these electron-electron interactions in a form that is additive. Recall that when a partial differential equation has no operators that couple its different independent variables (i.e., when it is separable), one can use separation of variables methods to decompose its solutions into products. Thus, the (approximate) additivity of $H$ implies that solutions of $H \psi = E \psi$ are products of solutions to $H (j) \psi (\textbf{r}_j) = E_j \psi(\textbf{r}_j). \tag{2.4}$ The two lowest $\pi\pi^*$ excited states would correspond to states of the form $\psi^* = \psi_1\alpha \psi_1\beta \psi_2\alpha \psi_2\beta \psi_3\alpha \psi_3\beta \psi_4\alpha \psi_5\beta \psi_5\alpha, \tag{2.5a}$ and $\psi'^* = \psi_1\alpha \psi_1\beta \psi_2\alpha \psi_2\beta \psi_3\alpha \psi_3\beta \psi_4\alpha \psi_4\beta \psi_6\alpha,\tag{2.5b}$ where the spin-orbitals (orbitals multiplied by $\alpha$ or $\beta$) appearing in the above products depend on the coordinates of the various electrons. For example, $\psi_1\alpha \psi_1\beta \psi_2\alpha \psi_2\beta \psi_3\alpha \psi_3\beta \psi_4\alpha \psi_5\beta \psi_5\alpha \tag{2.6a}$ denotes $\psi_1\alpha(\textbf{r}_1) \psi_1\beta (\textbf{r}_2) \psi_2\alpha (\textbf{r}_3) \psi_2\beta (\textbf{r}_4) \psi_3\alpha (\textbf{r}_5) \psi_3\beta (\textbf{r}_6) \psi_4\alpha (\textbf{r}_7)\psi_5\beta (\textbf{r}_8) \psi_5\alpha (\textbf{r}_9). \tag{2.6b}$ The electronic excitation energies from the ground state to each of the above excited states within this model would be $\Delta{E^*} = \dfrac{ \pi^2 \hbar^2}{2m} \left[ \dfrac{5^2}{L^2} - \dfrac{4^2}{L^2}\right] \tag{2.7a}$ and $\Delta{E'^*} = \dfrac{ \pi^2 \hbar^2}{2m} \left[ \dfrac{6^2}{L^2} - \dfrac{5^2}{L^2}\right]. \tag{2.7b}$ It turns out that this simple model of $\pi$-electron energies provides a qualitatively correct picture of such excitation energies. Its simplicity allows one, for example, to easily suggest how a molecule's color (as reflected in the complementary color of the light the molecule absorbs) varies as the conjugation length $L$ of the molecule varies.
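A minimal numerical sketch of these two excitation energies, assuming a typical conjugated C-C bond length of $R_{CC} = 1.40$ Å so that $L = 8R_{CC}$:

```python
# A sketch evaluating Eqs. (2.7a) and (2.7b) for the nine-carbon chain with
# L = 8*R_CC; R_CC = 1.40 Angstrom is an assumed, typical conjugated C-C length.
import numpy as np
from scipy.constants import hbar, m_e, e, h, c

R_CC = 1.40e-10              # assumed C-C bond length (m)
L = 8 * R_CC                 # box length, L = (N - 1) R for N = 9 carbons

def E(n):
    """Particle-in-a-box orbital energy (any constant V0 cancels in differences)."""
    return n**2 * np.pi**2 * hbar**2 / (2 * m_e * L**2)

for label, dE in [("psi*  (4 -> 5)", E(5) - E(4)), ("psi'* (5 -> 6)", E(6) - E(5))]:
    print(f"{label}: dE = {dE/e:.2f} eV, lambda = {1e9 * h * c / dE:.0f} nm")
```

With these numbers the two transitions fall in the visible and near-UV regions, and lengthening the chain (larger $L$) pushes them to lower energy, which is the origin of the color trend discussed next.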
That is, longer conjugated molecules have lower-energy orbitals because $L^2$ appears in the denominator of the energy expression. As a result, longer conjugated molecules absorb light of lower energy than do shorter molecules. This simple particle-in-a-box model does not yield orbital energies that relate to ionization energies unless the potential inside the box is specified. Choosing the value of this potential $V_0$ that exists within the box such that $V_0 + \dfrac{\pi^2 \hbar^2}{2m} \dfrac{5^2}{L^2}$ is equal to minus the lowest ionization energy of the 1,3,5,7-nonatetraene radical gives energy levels (as $E = V_0 + \dfrac{\pi^2 \hbar^2}{2m} \dfrac{n^2}{L^2}$), which can then be used as approximations to ionization energies. The individual $\pi$-molecular orbitals $\psi_n = \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi x}{L}\Big) \tag{2.8}$ are depicted in Figure 2.3 for a model of the 1,3,5-hexatriene $\pi$-orbital system for which the box length $L$ is five times the distance $R_{CC}$ between neighboring pairs of carbon atoms. The magnitude of the $k^{th}$ C-atom centered atomic orbital in the $n^{th}$ $\pi$-molecular orbital is given by $\sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi(k-1)R_{CC}}{L}\Big).$ Figure 2.3. The phases of the six molecular orbitals of a chain containing six atoms. In this figure, positive amplitude is denoted by the clear spheres, and negative amplitude is shown by the darkened spheres. Where two spheres of like shading overlap, the wave function has enhanced amplitude (i.e., there is a bonding interaction); where two spheres of different shading overlap, a node occurs (i.e., there is an antibonding interaction). Once again, we note that the number of nodes increases as one ranges from the lowest-energy orbital to higher-energy orbitals. The reader is once again encouraged to keep in mind this ubiquitous characteristic of quantum mechanical wave functions. This simple model allows one to estimate spin densities at each carbon center and provides insight into which centers should be most amenable to electrophilic or nucleophilic attack. For example, radical attack at the $C_5$ carbon of the nine-atom nonatetraene system described earlier would be more facile for the ground state $\psi$ than for either $\psi^*$ or $\psi'^*$. In the former, the unpaired spin density resides in $\psi_5$, which varies as $\sin(5\pi x/8R_{CC})$ and therefore has non-zero amplitude at the $C_5$ site $x = L/2 = 4R_{CC}$. In $\psi^*$ and $\psi'^*$, the unpaired density is in $\psi_4$ and $\psi_6$, respectively, both of which have zero density at $C_5$ (because $\sin(n\pi x/8R_{CC})$ vanishes for $n = 4$ or $6$ at $x = 4R_{CC}$). Plots of the wave functions for $n$ ranging from 1 to 7 are shown in another format in Figure 2.4, where the nodal pattern is emphasized. I hope that by now the student is not tempted to ask how the electron gets from one region of high amplitude, through a node, to another high-amplitude region. Remember, such questions are cast in classical Newtonian language and are not appropriate when addressing the wave-like properties of quantum mechanics. Contributors and Attributions: Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah), Telluride Schools on Theoretical Chemistry; integrated by Tomoyuki Hayashi (UC Davis).
Not only does the particle-in-a-box model offer a useful conceptual representation of electrons moving in polyenes, but it also is the zeroth-order model of band structures in solids. Let us consider a simple one-dimensional crystal consisting of a large number of atoms or molecules, each with a single orbital (the blue spheres shown below) that it contributes to the bonding. Let us arrange these building blocks in a regular lattice as shown in Figure 2.5. In the top four rows of this figure we show the case with 1, 2, 3, and 5 building blocks. To the left of each row, we display the energy splitting pattern into which the building blocks' orbitals evolve as they overlap and form delocalized molecular orbitals. Not surprisingly, for $n = 2$, one finds a bonding and an antibonding orbital. For $n = 3$, one has a bonding, one non-bonding, and one antibonding orbital. Finally, in the bottom row, we attempt to show what happens for an infinitely long chain. The key point is that the discrete number of molecular orbitals appearing in the 1-5 orbital cases evolves into a continuum of orbitals called a band as the number of building blocks becomes large. This band of orbital energies ranges from its bottom (whose orbital consists of a fully in-phase bonding combination of the building block orbitals) to its top (whose orbital is a fully out-of-phase antibonding combination). In Figure 2.6 we illustrate these fully bonding and fully antibonding band orbitals for two cases: the bottom involving $s$-type building block orbitals, and the top involving $p_\sigma$-type orbitals. Notice that when the energy gap between the building block $s$ and $p_\sigma$ orbitals is larger than the dispersion (spread) in energy within the band of $s$ or band of $p_\sigma$ orbitals, a band gap occurs between the highest member of the $s$ band and the lowest member of the $p_\sigma$ band. The splitting between the $s$ and $p_\sigma$ orbitals is a property of the individual atoms comprising the solid and varies among the elements of the periodic table. For example, we teach students that the $2s$-$2p$ energy gap in C is smaller than the $3s$-$3p$ gap in Si, which is smaller than the $4s$-$4p$ gap in Ge. The dispersion in energies over which a given band of orbitals is spread as the atomic orbitals combine to form the band is determined by how strongly the orbitals on neighboring atoms overlap: small overlap produces small dispersion, and large overlap yields a broad band. So, the band structure of any particular system can vary from narrow bands (weak overlap) that do not span the energy gap between the energies of their constituent atomic orbitals to broad bands (strong overlap) that do. Depending on how many valence electrons each building block contributes, the various bands formed by overlapping the building-block orbitals of the constituent atoms will be filled to various levels. For example, if each building block orbital shown above has a single valence electron in an $s$-orbital (e.g., as in the case of the alkali metals), the $s$ band will be half filled in the ground state with $\alpha$- and $\beta$-spin-paired electrons. Such systems produce very good conductors because their partially filled $s$ bands allow electrons to move with very little (e.g., only thermal) excitation among other orbitals in this same band. On the other hand, for alkaline earth systems with two $s$ electrons per atom, the $s$ band will be completely filled.
In such cases, conduction requires excitation to the lowest members of the nearby $p$-orbital band. Finally, if each building block were an Al ($3s^2 3p^1$) atom, the $s$ band would be full and the $p$ band would be half filled. In Figure 2.6a, we show a qualitative depiction of the bands arising from sodium atoms' $1s$, $2s$, $2p$, and $3s$ orbitals. Notice that the $1s$ band is very narrow because there is little coupling between neighboring $1s$ orbitals, so they are only slightly stabilized or destabilized relative to their energies in the isolated Na atoms. In contrast, the $2s$ and $2p$ bands show greater dispersion (i.e., are wider), and the $3s$ band is even wider. The $1s$, $2s$, and $2p$ bands are full, but the $3s$ band is half filled, as a result of which solid Na is a good electrical conductor. In describing the band of states that arise from a given atomic orbital within a solid, it is common to display the variation in energies of these states as functions of the number of sign changes in the coefficients that describe each orbital as a linear combination of the constituent atomic orbitals. Using the one-dimensional array of $s$ and $p_{\sigma}$ orbitals shown in Figure 2.6 as an example, 1. The lowest member of the band deriving from the $s$ orbitals $\phi_o = s(1) + s(2) +s(3) + s(4) ... + s(N) \tag{2.1}$ is a totally bonding combination of all of the constituent $s$ orbitals on the $N$ sites of the lattice. 2. The highest-energy orbital in this band $\phi_N=s(1)- s(2) +s(3) - s(4) ... - s(N-1) + s(N) \tag{2.2}$ is a totally anti-bonding combination of the constituent $s$ orbitals. 3. Each of the intervening orbitals in this band has expansion coefficients that allow the orbital to be written as $\phi_n = \sum_{j=1}^N \cos \left( \dfrac{n(j-1)\pi}{N}\right)s(j)\tag{2.3}.$ Clearly, for small values of $n$, the series of expansion coefficients $\cos \left( \dfrac{n(j-1)\pi}{N}\right) \tag{2.4}$ has few sign changes as the index $j$ runs over the sites of the one-dimensional lattice. For larger $n$, there are more sign changes. Thus, thinking of the quantum number $n$ as labeling the number of sign changes and plotting the energies of the orbitals (on the vertical axis) versus $n$ (on the horizontal axis), we would obtain a plot that increases from $n = 0$ to $n = N$. In fact, such plots tend to display quadratic variation of the energy with $n$. This observation can be understood by drawing an analogy between the pattern of sign changes belonging to a particular value of $n$ and the number of nodes in the one-dimensional particle-in-a-box wave function, which also is used to model electronic states delocalized along a linear chain. As we saw in Chapter 1, the energies for this model system varied as $E= \dfrac{j^2\pi^2\hbar^2}{2mL^2} \tag{2.5}$ with $j$ being the quantum number ranging from $1$ to $\infty$. The lowest-energy state, with $j = 1$, has no nodes; the state with $j = 2$ has one node, and that with $j = n+1$ has $n$ nodes. So, if we replace $j$ by $(n+1)$, with $n$ the number of sign changes, and replace the box length $L$ by $(NR)$, where $R$ is the inter-atom spacing and $N$ is the number of atoms in the chain, we obtain $E= \dfrac{(n+1)^2\pi^2\hbar^2}{2mN^2R^2}$ from which one can see why the energy can be expected to vary as $(n/N)^2$. 4. In contrast, for the $p_{\sigma}$ orbitals, the lowest-energy orbital is $\phi_o=p_{\sigma}(1)- p_{\sigma}(2) +p_{\sigma}(3) - p_{\sigma}(4) ...
- p_{\sigma}(N-1) + p_{\sigma}(N) \tag{2.6}$ because this alternation in signs allows each orbital on one site to overlap in a bonding fashion with the orbitals on neighboring sites. 5. Therefore, the highest-energy orbital in the band is $\phi_N=p_{\sigma}(1) + p_{\sigma}(2) +p_{\sigma}(3) + p_{\sigma}(4) ... + p_{\sigma}(N-1) + p_{\sigma}(N) \tag{2.7}$ and is totally anti-bonding. 6. The intervening members of this band have orbitals given by $\phi_{N-n} = \sum_{j=1}^N \cos \left( \dfrac{n(j-1)\pi}{N}\right)p_{\sigma}(j) \tag{2.3}$ with low $n$ corresponding to high-energy orbitals (having few inter-atom sign changes but anti-bonding character) and high $n$ to low-energy orbitals (having many inter-atom sign changes). So, in contrast to the case for the $s$-band orbitals, plotting the energies of the orbitals (on the vertical axis) versus $n$ (on the horizontal axis), we would obtain a plot that decreases from $n = 0$ to $n = N$. For bands comprised of $p_{\pi}$ orbitals, the energies vary with the $n$ quantum number in a manner analogous to how the $s$ band varies because the orbital with no inter-atom sign changes is fully bonding. For two- and three-dimensional lattices comprised of $s$, $p$, and $d$ orbitals on the constituent atoms, the behavior of the bands derived from these orbitals follows analogous trends. It is common to describe the sign alternations arising from site to site in terms of a so-called $\textbf{k}$ vector. In the one-dimensional case discussed above, this vector has only one component, with elements labeled by the ratio ($n/N$) whose value characterizes the number of inter-atom sign changes. For lattices containing many atoms, $N$ is very large, so $n$ ranges from zero to a very large number. Thus, the ratio ($n/N$) ranges from zero to unity in small fractional steps, so it is common to think of these ratios as describing a continuous parameter varying from zero to one. Moreover, it is conventional to allow the $n$ index to range from $-N$ to $+N$, so the argument $n \pi /N$ in the cosine function introduced above varies from $-\pi$ to $+\pi$. In two or three dimensions the $\textbf{k}$ vector has two or three elements and can be written in terms of its two or three index ratios, respectively, as $\textbf{k}_2=\Big(\dfrac{n}{N},\dfrac{m}{M}\Big)$ $\textbf{k}_3=\Big(\dfrac{n}{N},\dfrac{m}{M},\dfrac{l}{L}\Big).$ Here, $N$, $M$, and $L$ would describe the number of unit cells along the three principal axes of the three-dimensional crystal; $N$ and $M$ do likewise in the two-dimensional lattice case. In such two- and three-dimensional crystal cases, the energies of orbitals within bands derived from $s$, $p$, $d$, etc. atomic orbitals display variations that also reflect the number of inter-atom sign changes. However, now there are variations as functions of the ($n/N$), ($m/M$), and ($l/L$) indices, and these variations can display rather complicated shapes depending on the symmetry of the atoms within the underlying crystal lattice. That is, as one moves within the three-dimensional space by specifying values of the indices ($n/N$), ($m/M$), and ($l/L$), one can move throughout the lattice in different symmetry directions. It is conventional in the solid-state literature to plot the energies of these bands as these three indices vary from site to site along various symmetry elements of the crystal and to assign a letter to label this symmetry element. The band that has no inter-atom sign changes is labeled as $\Gamma$ (sometimes G) in such plots of band structures.
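A minimal sketch of this picture, assuming illustrative $\alpha$ and $\beta$ values: diagonalizing an $N$-site nearest-neighbor chain Hamiltonian gives a band of levels which, for an open chain, are known to be $\varepsilon_n = \alpha + 2\beta\cos[n\pi/(N+1)]$ with $n = 1, \dots, N$ (a bandwidth of $4|\beta|$), and the number of inter-site sign changes grows from zero (fully bonding) to $N-1$ (fully anti-bonding) as the energy rises.

```python
# A sketch (arbitrary alpha, beta) of an N-site nearest-neighbor tight-binding
# chain: diagonalizing H gives the band; for an open chain the exact levels are
# E_n = alpha + 2*beta*cos(n*pi/(N+1)), n = 1..N, a band of width 4|beta|.
import numpy as np

alpha, beta, N = -5.0, -1.0, 30        # site energy and coupling (arbitrary units)

H = (np.diag(np.full(N, alpha))
     + np.diag(np.full(N - 1, beta), 1)
     + np.diag(np.full(N - 1, beta), -1))
energies, orbitals = np.linalg.eigh(H)  # eigenvalues come out in ascending order

n = np.arange(1, N + 1)
analytic = np.sort(alpha + 2 * beta * np.cos(n * np.pi / (N + 1)))
assert np.allclose(energies, analytic)

# The number of inter-site sign changes in each orbital grows with energy:
# 0 for the fully bonding orbital, up to N-1 for the fully anti-bonding one.
sign_changes = [int((np.diff(np.sign(orbitals[:, k])) != 0).sum()) for k in range(N)]
print(sign_changes)   # -> [0, 1, 2, ..., 29]
```

Plotting these energies against the number of sign changes shows the roughly quadratic rise near the band bottom discussed above; building the analogous matrix for $p_\sigma$-type couplings (an effectively positive $\beta$) simply inverts the ordering, as described in points 4-6.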
In much of our discussion below, we will analyze the behavior of various bands in the neighborhood of the $\Gamma$ point because this is where there are the fewest inter-atom nodes and thus the wave function is easiest to visualize. Let's consider a few examples to help clarify these issues. In Figure 2.6b, where we see the band structure of graphene, you can see the quadratic variations of the energies with $\textbf{k}$ as one moves away from the $\textbf{k} = 0$ point labeled $\Gamma$, with some bands increasing with $\textbf{k}$ and others decreasing with $\textbf{k}$. The band having an energy of ca. -17 eV at the $\Gamma$ point originates from bonding interactions involving $2s$ orbitals on the carbon atoms, while those having energies near 0 eV at the $\Gamma$ point derive from carbon $2p_\sigma$ bonding interactions. The parabolic increase with $\textbf{k}$ for the $2s$-based and decrease with $\textbf{k}$ for the $2p_\sigma$-based orbitals is clear and is expected based on our earlier discussion of how $s$ and $p_{\sigma}$ bands vary with $\textbf{k}$. The band having energy near -4 eV at the $\Gamma$ point involves $2p_\pi$ orbitals engaged in bonding interactions, and this band shows a parabolic increase with $\textbf{k}$ as expected as we move away from the $\Gamma$ point. These are the delocalized $\pi$ orbitals of the graphene sheet. The anti-bonding $2p_\pi$ band decreases quadratically with $\textbf{k}$ and has an energy of ca. 15 eV at the $\Gamma$ point. Because there are two atoms per unit cell in this case, there are a total of eight valence electrons (four from each carbon atom) to be accommodated in these bands. The eight carbon valence electrons fill the bonding $2s$ band and two $2p_\sigma$ bands fully, as well as the bonding $2p_\pi$ band. Only along the direction labeled P in Figure 2.6b do the bonding and anti-bonding $2p_\pi$ bands become degenerate (near 2.5 eV); the approach of these two bands is what allows graphene to be semi-metallic (i.e., to conduct at modest temperatures, high enough to promote excitations from the bonding $2p_\pi$ to the anti-bonding $2p_\pi$ band). It is interesting to contrast the band structure of graphene with that of diamond, which is shown in Figure 2.6c. The band having an energy of ca. -22 eV at the $\Gamma$ point derives from $2s$ bonding interactions, and the three bands near 0 eV at the $\Gamma$ point come from $2p_\sigma$ bonding interactions. Again, each of these bands displays the expected parabolic behavior as a function of $\textbf{k}$. In diamond's structure, which consists of two interpenetrating face-centered cubic lattices, there are two carbon atoms per unit cell, so we have a total of eight valence electrons to fill the four bonding bands. Notice that along no direction in $\textbf{k}$-space do these filled bonding bands become degenerate with, or get crossed by, any of the other bands. The other bands remain at higher energy along all $\textbf{k}$-directions, and thus the gap between the bonding bands and the others is large (ca. 5 eV or more along any direction in $\textbf{k}$-space). This is why diamond is an insulator; the band gap is very large. Finally, let's compare the graphene and diamond cases with a metallic case such as shown in Figure 2.6d for Al and for Ag. For Al and Ag, there is one atom per unit cell, so we have three valence electrons ($3s^2 3p^1$) and eleven valence electrons ($4d^{10} 5s^1$), respectively, to fill the bands shown in Figure 2.6d.
Focusing on the $\Gamma$ points in the Al and Ag band structure plots, we can say the following: 1. For Al, the $3s$-based band near -11 eV is filled, and the three $3p$-based bands near 11 eV have an occupancy of 1/6 (i.e., on average there is one electron in one of these three bands, each of which can hold two electrons). 2. The $3s$ and $3p$ bands are parabolic with positive and negative curvature, respectively. 3. Along several directions (e.g., $K$, $W$, $X$, $W$, $L$) there are crossings among the bands; these crossings allow electrons to be promoted from occupied to previously unoccupied bands. The partial occupancy of the $3p$ bands and the multiple crossings of bands are what allow Al to show metallic behavior. 4. For Ag, there are six bands between -4 eV and -8 eV. Five of these bands change little with $\textbf{k}$, and one shows somewhat parabolic dependence on $\textbf{k}$. The former five derive from $4d$ atomic orbitals that are contracted enough to not allow them to overlap much, and the latter is based on $5s$ bonding orbital interactions. 5. Ten of the valence electrons fill the five $4d$ bands, and the eleventh resides in the $5s$-based bonding band. 6. If the five $4d$-based bands are ignored, the remainder of the Ag band structure looks a lot like that for Al. There are numerous band crossings that include, in particular, the half-filled $5s$ band. These crossings and the partial occupancy of the $5s$ band cause Ag to have metallic character. One more feature of band structures that is often displayed is called the band density of states. An example of such a plot is shown in Figure 2.6e for the TiN crystal. The density of states at energy $E$ is computed by counting all those orbitals having an energy between $E$ and $E + dE$. Clearly, as seen in Figure 2.6e, for bands in which the orbital energies vary strongly with $\textbf{k}$ (i.e., so-called broad bands), the density of states is low; in contrast, for narrow bands, the density of states is high. The densities of states are important because their energies and energy spreads relate to electronic spectral features. Moreover, just as gaps between the highest occupied bands and the lowest unoccupied bands play central roles in determining whether the sample is an insulator, a conductor, or a semiconductor, gaps in the density of states suggest what frequencies of light will be absorbed or reflected via inter-band electronic transitions. The bands of orbitals arising in any solid lattice provide the orbitals that are available to be occupied by the number of electrons in the crystal. Systems whose highest-energy occupied band is completely filled and for which the gap in energy to the lowest unfilled band is large are called insulators because they have no way to easily (i.e., with little energy requirement) promote some of their higher-energy electrons from orbital to orbital and thus effect conduction. The case of diamond discussed above is an example of an insulator. If the band gap between a filled band and an unfilled band is small, it may be possible for thermal excitation (i.e., collisions with neighboring atoms or molecules) to cause excitation of electrons from the former to the latter, thereby inducing conductive behavior. The band structures of Al and Ag discussed above offer examples of this case. A simple depiction of how thermal excitations can induce conduction is illustrated in Figure 2.7.
Systems whose highest-energy occupied band is partially filled are also conductors because they have little spacing among their occupied and unoccupied orbitals, so electrons can flow easily from one to another. Al and Ag are good examples. To form a semiconductor, one starts with an insulator whose lower band is filled and whose upper band is empty, as shown by the broad bands in Figure 2.8. If this insulator material is synthesized with a small amount of "dopant" whose valence orbitals have energies between the filled and empty bands of the insulator, one can generate a semiconductor. If the dopant species has no valence electrons (i.e., has an empty valence orbital), it gives rise to an empty band lying between the filled and empty bands of the insulator, as shown in case a of Figure 2.8. In this case, the dopant band can act as an electron acceptor for electrons excited (either thermally or by light) from the filled band of the insulator into the dopant's empty band. Once electrons enter the dopant band, charge can flow (because the insulator's lower band is no longer filled) and the system thus becomes a conductor. Another case is illustrated in case b of Figure 2.8. Here, the dopant has a filled band that lies close in energy to the empty band of the insulator. Excitation of electrons from this dopant band to the insulator's empty band can induce current to flow (because now the insulator's upper band is no longer empty).
When a large number of neighboring orbitals overlap, bands are formed. However, the natures of these bands, their energy patterns, and their densities of states are very different in different dimensions. Before leaving our discussion of bands of orbitals and orbital energies in solids, I want to address a bit more the issue of the density of electronic states and what determines the energy range into which orbitals of a given band will split. First, let's recall the energy expression for the one- and two-dimensional electron-in-a-box cases, and let's generalize it to three dimensions. The general result is $E = \sum_j \dfrac{n_j^2 \pi^2\hbar^2}{2mL_j^2}\tag{2.3.1}$ where the sum over $j$ runs over the number of dimensions (1, 2, or 3), and $L_j$ is the length of the box along the $j^{th}$ direction. For one dimension, one observes a pattern of energy levels that grows with increasing $n$, and whose spacing between neighboring energy levels also grows, as a result of which the state density decreases with increasing $n$. However, in 2 and 3 dimensions, the pattern of energy level spacing displays a qualitatively different character, especially at high quantum number. Consider first the 3-dimensional case and, for simplicity, let's use a box that has equal-length sides $L$. In this case, the total energy $E$ is $\dfrac{\hbar^2\pi^2}{2mL^2}$ times $(n_x^2 + n_y^2 + n_z^2)$. The latter quantity can be thought of as the square of the length of a vector $\textbf{R}$ having three components $n_x$, $n_y$, $n_z$. Now think of three Cartesian axes labeled $n_x$, $n_y$, and $n_z$ and view a sphere of radius $R$ in this space. The volume of the one-eighth sphere having positive values of $n_x$, $n_y$, and $n_z$ and having radius $R$ is $\dfrac{1}{8} \Big(\dfrac{4}{3} \pi R^3\Big)$. Because each cube having unit length along the $n_x$, $n_y$, and $n_z$ axes corresponds to a single quantum wave function and its energy, the total number $N_{\rm tot}(E)$ of quantum states with positive $n_x$, $n_y$, and $n_z$ and with energy between zero and $E = \bigg(\dfrac{\hbar^2\pi^2}{2mL^2}\bigg)R^2$ is $N_{tot} = \frac{1}{8} \left(\frac{4}{3} \pi R^3\right) = \frac{1}{8} \left(\frac{4}{3} \pi \left[\frac{2mEL^2}{\hbar^2\pi^2}\right]^{3/2}\right)\tag{2.3.2}$ The number of quantum states with energies between $E$ and $E+dE$ is $\dfrac{dN_{tot}}{dE}dE$, which gives the density $\Omega(E)$ of states near energy $E$: $\Omega(E) = \frac{dN_{tot}}{dE} = \frac{1}{8} \left(\frac{4}{3} \pi \left[\frac{2mL^2}{\hbar^2\pi^2}\right]^{3/2} \frac{3}{2}\sqrt{E}\right) = \frac{\pi}{4} \left[\frac{2mL^2}{\hbar^2\pi^2}\right]^{3/2}\sqrt{E}. \tag{2.3.3}$ Notice that this state density increases as $E$ increases. This means that, in the 3-dimensional case, the number of quantum states per unit energy grows; in other words, the spacing between neighboring state energies decreases, very unlike the 1-dimensional case where the spacing between neighboring states grows as $n$ and thus $E$ grows. This growth in state density in the 3-dimensional case is a result of the degeneracies and near-degeneracies that occur. For example, the states with $n_x$, $n_y$, $n_z$ = 2, 1, 1 and 1, 1, 2 and 1, 2, 1 are degenerate, and those with $n_x$, $n_y$, $n_z$ = 5, 3, 1 or 5, 1, 3 or 1, 3, 5 or 1, 5, 3 or 3, 1, 5 or 3, 5, 1 are degenerate and nearly degenerate to those having quantum numbers 4, 4, 1 or 1, 4, 4 or 4, 1, 4.
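This $\sqrt{E}$ growth can be checked by brute force, counting states in units where $\hbar^2\pi^2/(2mL^2) = 1$ so that $E = n_x^2 + n_y^2 + n_z^2$; a minimal sketch:

```python
# A sketch (units with hbar^2 pi^2 / (2 m L^2) = 1, so E = nx^2 + ny^2 + nz^2):
# brute-force count of cubic-box states below E, compared with the continuum
# estimate N_tot(E) = (pi/6) E^{3/2} from Eq. (2.3.2).
import numpy as np

nmax = 40
n = np.arange(1, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
E_states = (nx**2 + ny**2 + nz**2).ravel()   # one energy per (nx, ny, nz) state

for E in [100.0, 400.0, 900.0]:              # requires sqrt(E) <= nmax; 30 <= 40
    counted = int(np.count_nonzero(E_states <= E))
    estimate = (np.pi / 6.0) * E**1.5
    print(f"E = {E:5.0f}: counted {counted:6d}, continuum estimate {estimate:8.0f}")
```

The agreement improves as $E$ grows, and differencing neighboring counts displays the $\sqrt{E}$ rise of $\Omega(E)$; the same counting restricted to two dimensions gives a nearly constant density, as derived next.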
In the 2-dimensional case, degeneracies also occur and cause the density of states to possess an $E$-dependence that differs from the 1- or 3-dimensional cases. Here, we think of states having energy $E = \bigg(\dfrac{\hbar^2\pi^2}{2mL^2}\bigg)R^2$, but with $R^2 = n_x^2 + n_y^2$. The total number of states having energy between zero and $E$ is now the area of the quarter circle having positive $n_x$ and $n_y$: $N_{\rm total}= \frac{1}{4}\pi R^2 = \frac{\pi}{4} E\left(\frac{2mL^2}{\hbar^2\pi^2}\right) \tag{2.3.4}$ So, the density of states between $E$ and $E+dE$ is $\Omega(E) = \frac{dN_{\rm total}}{dE} = \frac{\pi}{4} \left(\frac{2mL^2}{\hbar^2\pi^2}\right) \tag{2.3.5}$ That is, in this 2-dimensional case, the number of states per unit energy is constant for high $E$ values (where the analysis above applies best). This kind of analysis for the 1-dimensional case gives $N_{\rm total}= R = \sqrt{\frac{2mEL^2}{\hbar^2\pi^2}} \tag{2.3.6}$ so the state density between $E$ and $E+ dE$ is $\Omega(E) = \frac{1}{2} \sqrt{\frac{2mL^2}{\hbar^2\pi^2}}\, E^{-1/2}, \tag{2.3.7}$ which clearly shows the widening spacing, and thus lower state density, as one goes to higher energies. These findings about densities of states in 1, 2, and 3 dimensions are important because, in various problems one encounters in studying electronic states of extended systems such as solids, chains, and surfaces, one needs to know how the number of states available at a given total energy $E$ varies with $E$. A similar situation occurs when describing the translational states of an electron photoejected from an atom or molecule into the vacuum; here the 3-dimensional density of states applies. Clearly, the state density depends upon the dimensionality of the problem, and this fact is what I want the students reading this text to keep in mind. Before closing this Section, it is useful to overview how the various particle-in-box models can be used as qualitative descriptions for various chemical systems. 1a. The one-dimensional box model is most commonly used to model electronic orbitals in delocalized linear polyenes. 1b. The electron-on-a-circle model is used to describe orbitals in a conjugated cyclic ring such as in benzene. 2a. The rectangular box model can be used to model electrons moving within thin layers of metal deposited on a substrate or to model electrons in aromatic sheets such as the graphene shown in Figure 2.8a. 2b. The particle-within-a-circle model can describe states of electrons (or other light particles requiring quantum treatment) constrained within a circular corral. 2c. The particle-on-a-sphere's-surface model can describe states of electrons delocalized over the surface of fullerene-type species such as shown in the upper right of Figure 2.8b. 3a. The particle-in-a-sphere model, as discussed earlier, is often used to treat electronic orbitals of quasi-spherical nano-clusters composed of metallic atoms. 3b. The particle-in-a-cube model is often used to describe the bands of electronic orbitals that arise in three-dimensional crystals constructed from metallic atoms. In all of these models, the potential $V_0$, which is constant in the region where the electron is confined, controls the energies of all the quantum states relative to that of a free electron (i.e., an electron in vacuum with no kinetic energy). For some dimensionalities and geometries, it may be necessary to invoke more than one of these models to qualitatively describe the quantum states of systems for which the valence electrons are highly delocalized (e.g., metallic clusters and conjugated organics).
For example, for electrons residing on the surface of any of the three graphene tubes shown in Figure 2.8b, one expects quantum states (i) labeled with an angular momentum quantum number and characterizing the electrons' angular motions about the long axis of the tube, but also (ii) labeled by a long-axis quantum number characterizing the electron's energy component along the tube's long axis. For a three-dimensional tube-shaped nanoparticle composed of metallic atoms, one expects the quantum states to be (i) labeled with an angular momentum quantum number and a radial quantum number characterizing the electrons' angular motions about the long axis of the tube and its radial (Bessel function) character, but again also (ii) labeled by a long-axis quantum number characterizing the electron's energy component along the tube's long axis.
Now, let's examine what determines the energy range into which orbitals (e.g., $p_\pi$ orbitals in a polyene, metal, semiconductor, or insulator; $\sigma$ or $p_\sigma$ orbitals in a solid; or $\sigma$ or $\pi$ atomic orbitals in a molecule) split. I know that, in our earlier discussion, we talked about the degree of overlap between orbitals on neighboring atoms relating to the energy splitting, but now it is time to make this concept more quantitative. To begin, consider two orbitals, one on an atom labeled A and another on a neighboring atom labeled B; these orbitals could be, for example, the $1s$ orbitals of two hydrogen atoms, such as Figure 2.9 illustrates. However, the two orbitals could instead be two $p_\pi$ orbitals on neighboring carbon atoms such as are shown in Figure 2.10 as they form $\pi$ bonding and $\pi^*$ anti-bonding orbitals. Figure 2.10. Two atomic $p_\pi$ orbitals form a bonding $\pi$ and antibonding $\pi^*$ molecular orbital. In both of these cases, we think of forming the molecular orbitals (MOs) $\phi_K$ as linear combinations of the atomic orbitals (AOs) $\chi_a$ on the constituent atoms, and we express this mathematically as follows: $\phi_K = \sum_a C_{K,a} \chi_a,$ where the $C_{K,a}$ are called the LCAO-MO (linear combination of atomic orbitals to form molecular orbitals) coefficients. The MOs are supposed to be solutions to the Schrödinger equation in which the Hamiltonian $H$ involves the kinetic energy of the electron as well as the potentials $V_L$ and $V_R$ detailing its attraction to the left and right atomic centers (this one-electron Hamiltonian is only an approximation for describing molecular orbitals; more rigorous N-electron treatments will be discussed in Chapter 6): $H = - \dfrac{\hbar^2}{2m} \nabla^2 + V_L + V_R.$ In contrast, the AOs centered on the left atom A are supposed to be solutions of the Schrödinger equation whose Hamiltonian is $H = - \dfrac{\hbar^2}{2m} \nabla^2 + V_L$, and the AOs on the right atom B have $H = - \dfrac{\hbar^2}{2m} \nabla^2 + V_R$. Substituting $\phi_K = \sum_a C_{K,a} \chi_a$ into the MO's Schrödinger equation $H\phi_K = \varepsilon_K \phi_K$ and then multiplying on the left by the complex conjugate of $\chi_b$ and integrating over the $r$, $\theta$ and $\phi$ coordinates of the electron produces $\sum_a \langle \chi_b| - \dfrac{\hbar^2}{2m} \nabla^2 + V_L + V_R |\chi_a\rangle C_{K,a} = \varepsilon_K \sum_a \langle \chi_b|\chi_a\rangle C_{K,a}$ Recall that the Dirac notation $\langle a|b\rangle$ denotes the integral of $a^*$ and $b$, and $\langle a| op| b\rangle$ denotes the integral of $a^*$ and the operator $op$ acting on $b$. In what is known as the Hückel model in chemistry or the tight-binding model in solid-state theory, one approximates the integrals entering into the above set of linear equations as follows: 1. The diagonal integral $\langle \chi_b| - \dfrac{\hbar^2}{2m} \nabla^2 + V_L + V_R |\chi_b\rangle$ involving the AO centered on the right atom and labeled $\chi_b$ is assumed to be equivalent to $\langle \chi_b| - \dfrac{\hbar^2}{2m} \nabla^2 + V_R |\chi_b\rangle$, which means that the net attraction of this orbital to the left atomic center is neglected. Moreover, this integral is approximated in terms of the binding energy (denoted $\alpha$, not to be confused with the electron spin function $\alpha$) for an electron that occupies the $\chi_b$ orbital: $\langle \chi_b| - \dfrac{\hbar^2}{2m} \nabla^2 + V_R |\chi_b\rangle = \alpha_b$.
The physical meaning of $\alpha_b$ is the kinetic energy of the electron in $\chi_b$ plus the attraction of this electron to the right atomic center while it resides in $\chi_b$. Of course, an analogous approximation is made for the diagonal integral involving $\chi_a$: $\langle \chi_a| - \dfrac{\hbar^2}{2m} \nabla^2 + V_L |\chi_a\rangle = \alpha_a$. These $\alpha$ values are negative quantities because, as is the convention in electronic structure theory, energies are measured relative to the energy of the electron when it is removed from the orbital and possesses zero kinetic energy. 2. The off-diagonal integrals $\langle \chi_b| - \dfrac{\hbar^2}{2m} \nabla^2 + V_L + V_R |\chi_a\rangle$ are expressed in terms of a parameter $\beta_{a,b}$ which relates to the kinetic and potential energy of the electron while it resides in the “overlap region” in which both $\chi_a$ and $\chi_b$ are non-vanishing. This region is shown pictorially above as the region where the left and right orbitals touch or overlap. The magnitude of $\beta$ is assumed to be proportional to the overlap $S_{a,b}$ between the two AOs: $S_{a,b} = \langle \chi_a|\chi_b\rangle$. It turns out that $\beta$ is usually a negative quantity, which can be seen by writing it as $\langle \chi_b| - \dfrac{\hbar^2}{2m} \nabla^2 + V_L |\chi_a\rangle + \langle \chi_b| V_R |\chi_a\rangle$. Since $\chi_a$ is an eigenfunction of $- \dfrac{\hbar^2}{2m} \nabla^2 + V_L$ having the eigenvalue $\alpha_a$, the first term is equal to $\alpha_a$ (a negative quantity) times $\langle \chi_b|\chi_a\rangle$, the overlap $S$. The second quantity $\langle \chi_b| V_R |\chi_a\rangle$ is equal to the integral of the overlap density $\chi_b(r)\chi_a(r)$ multiplied by the (negative) Coulomb potential for the attractive interaction of the electron with the right atomic center. So, whenever $\chi_b(r)$ and $\chi_a(r)$ have positive overlap, $\beta$ will turn out to be negative. 3. Finally, in the most elementary Hückel or tight-binding model, the off-diagonal overlap integrals $\langle \chi_a|\chi_b\rangle = S_{a,b}$ are neglected and set equal to zero on the right side of the matrix eigenvalue equation. However, in some Hückel models, overlap between neighboring orbitals is explicitly treated, so, in some of the discussion below, we will retain $S_{a,b}$.
With these Hückel approximations, the set of equations that determine the orbital energies $\varepsilon_K$ and the corresponding LCAO-MO coefficients $C_{K,a}$ are written for the two-orbital case at hand as in the first $2\times2$ matrix equation shown below $\left[ \begin{array}{cc} \alpha & \beta \\ \beta & \alpha \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right] =\varepsilon \left[ \begin{array}{cc} 1 & S \\ S & 1 \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right]$ which is sometimes written as $\left[ \begin{array}{cc} \alpha-\varepsilon & \beta-\varepsilon S \\ \beta-\varepsilon S & \alpha-\varepsilon \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right] = \left[\begin{array}{c} 0\\ 0 \end{array} \right]$ These equations reduce with the assumption of zero overlap to $\left[ \begin{array}{cc} \alpha & \beta \\ \beta & \alpha \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right] =\varepsilon \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right]$ The $\alpha$ parameters are identical if the two AOs $\chi_a$ and $\chi_b$ are identical, as would be the case for bonding between the two $1s$ orbitals of two H atoms or two $2p_\pi$ orbitals of two C atoms or two $3s$ orbitals of two Na atoms. If the left and right orbitals were not identical (e.g., for bonding in $HeH^+$ or for the $\pi$ bonding in a C-O group), their $\alpha$ values would be different and the Hückel matrix problem would look like: $\left[ \begin{array}{cc} \alpha & \beta \\ \beta & \alpha' \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right] =\varepsilon \left[ \begin{array}{cc} 1 & S \\ S & 1 \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right]$ To find the MO energies that result from combining the AOs, one must find the values of $\varepsilon$ for which the above equations are valid. Taking the $2\times2$ matrix consisting of $\varepsilon$ times the overlap matrix to the left-hand side, the above set of equations reduces to the homogeneous set displayed earlier. It is known from matrix algebra that such a set of linear homogeneous equations (i.e., having zeros on the right-hand sides) can have non-trivial solutions (i.e., values of $C$ that are not simply zero) only if the determinant of the matrix on the left side vanishes. Setting this determinant equal to zero gives a quadratic equation in which the $\varepsilon$ values are the unknowns: $(\alpha-\varepsilon)^2 - (\beta - \varepsilon S)^2 = 0.$ This quadratic equation can be factored into a product $(\alpha - \beta - \varepsilon +\varepsilon S) (\alpha + \beta - \varepsilon -\varepsilon S) = 0$ which has two solutions $\varepsilon = \frac{\alpha + \beta}{1 + S}, \text{ and } \varepsilon = \frac{\alpha - \beta}{1 - S}.$ As discussed earlier, it turns out that the $\beta$ values are usually negative, so the lowest-energy such solution is the $\varepsilon = (\alpha + \beta)/(1 + S)$ solution, which gives the energy of the bonding MO. Notice that the energies of the bonding and anti-bonding MOs are not symmetrically displaced from the value $\alpha$ within this version of the Hückel model that retains orbital overlap. In fact, because of the $1+S$ and $1-S$ factors in the respective denominators, the bonding MO is lowered below $\alpha$ by less than the antibonding MO is raised above it.
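These closed-form results are easy to verify numerically by solving the generalized eigenvalue problem $\textbf{H}\textbf{C} = \varepsilon\textbf{S}\textbf{C}$; a minimal sketch, assuming illustrative values with $|\beta| > |\alpha|S$ so that the bonding combination indeed falls below $\alpha$:

```python
# A sketch (arbitrary illustrative parameters, chosen so |beta| > |alpha|*S):
# the 2x2 Hueckel problem with overlap, solved as the generalized eigenproblem
# H C = eps S C and compared with eps = (alpha +/- beta)/(1 +/- S).
import numpy as np
from scipy.linalg import eigh

alpha, beta, S = -11.0, -4.0, 0.2

H = np.array([[alpha, beta], [beta, alpha]])
Smat = np.array([[1.0, S], [S, 1.0]])

eps, C = eigh(H, Smat)     # generalized symmetric eigenvalue problem
print("numerical :", eps)
print("analytic  :", sorted([(alpha + beta) / (1 + S), (alpha - beta) / (1 - S)]))

# Asymmetry: here the bonding MO lies |beta - alpha*S|/(1+S) = 1.5 below alpha,
# while the antibonding MO lies |beta - alpha*S|/(1-S) = 2.25 above it.
```

The eigenvectors returned in C are S-orthonormal ($C^T S C = 1$), the overlap-aware analogue of the normalization condition used below.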
This asymmetric lowering and raising of the MOs relative to the energies of the constituent AOs is commonly observed in chemical bonds; that is, the antibonding orbital is more antibonding than the bonding orbital is bonding. This is another important thing to keep in mind because its effects pervade chemical bonding and spectroscopy. Having noted the effect of inclusion of AO overlap effects in the Hückel model, I should admit that it is far more common to utilize the simplified version of the Hückel model in which the $S$ factors are ignored. In so doing, one obtains patterns of MO orbital energies that do not reflect the asymmetric splitting in bonding and antibonding orbitals noted above. However, this simplified approach is easier to use and offers qualitatively correct MO energy orderings. So, let's proceed with our discussion of the Hückel model in its simplified version. To obtain the LCAO-MO coefficients corresponding to the bonding and antibonding MOs, one substitutes the corresponding $\varepsilon$ values into the linear equations $\left[ \begin{array}{cc} \alpha-\varepsilon & \beta \\ \beta & \alpha-\varepsilon \end{array} \right] \left[\begin{array}{c} C_L\\ C_R \end{array} \right] = \left[\begin{array}{c} 0\\ 0 \end{array} \right]$ and solves for the $C_a$ coefficients (actually, one can solve for all but one $C_a$, and then use normalization of the MO to determine the final $C_a$). For example, for the bonding MO, we substitute $\varepsilon = \alpha + \beta$ into the above matrix equation and obtain two equations for $C_L$ and $C_R$: $- \beta C_L + \beta C_R = 0$ $\beta C_L - \beta C_R = 0.$ These two equations are clearly not independent; either one can be solved for one $C$ in terms of the other $C$ to give: $C_L = C_R,$ which means that the bonding MO is $\phi = C_L (\chi_L + \chi_R).$ The final unknown, $C_L$, is obtained by noting that $\phi$ is supposed to be a normalized function $\langle \phi|\phi\rangle = 1$. Within this version of the Hückel model, in which the overlap $S$ is neglected, the normalization of $\phi$ leads to the following condition: $1 = \langle \phi|\phi\rangle = C_L^2 (\langle \chi_L|\chi_L\rangle + \langle \chi_R|\chi_R\rangle ) = 2 C_L^2$ with the final result depending on assuming that each $\chi$ is itself also normalized. So, finally, we know that $C_L = \frac{1}{\sqrt{2}}$, and hence the bonding MO is: $\phi = \frac{1}{\sqrt{2}} (\chi_L + \chi_R).$ Actually, the solution of $1 = 2 C_L^2$ could also have yielded $C_L = - \frac{1}{\sqrt{2}}$ and then we would have $\phi = - \frac{1}{\sqrt{2}} (\chi_L + \chi_R).$ These two solutions are not independent (one is just $-1$ times the other), so only one should be included in the list of MOs. However, either one is just as good as the other because, as shown very early in this text, all of the physical properties that one computes from a wave function depend not on $\psi$ but on $\psi^*\psi$. So, two wave functions that differ from one another by an overall sign factor, as we have here, have exactly the same $\psi^*\psi$ and thus are equivalent. In like fashion, we can substitute $\varepsilon = \alpha - \beta$ into the matrix equation and solve for the $C_L$ and $C_R$ values that are appropriate for the antibonding MO.
Doing so gives us: $\phi^* = \frac{1}{\sqrt{2}} (\chi_L - \chi_R)$ or, alternatively, $\phi^* = \frac{1}{\sqrt{2}} (\chi_R - \chi_L).$ Again, the fact that either expression for $\phi^*$ is acceptable shows a property of all solutions to any Schrödinger equation; any multiple of a solution is also a solution. In the above example, the two answers for $\phi^*$ differ by a multiplicative factor of $(-1)$. Let's try another example to practice using Hückel or tight-binding theory. In particular, I'd like you to imagine two possible structures for a cluster of three Na atoms (i.e., pretend that someone came to you and asked what geometry you think such a cluster would assume in its ground electronic state), one linear and one an equilateral triangle. Further, assume that the Na-Na distances in both such clusters are equal (i.e., that the person asking for your theoretical help is willing to assume that variations in bond lengths are not the crucial factor in determining which structure is favored). In Figure 2.11, I show the two candidate clusters and their $3s$ orbitals. Numbering the three Na atoms' valence $3s$ orbitals $\chi_1$, $\chi_2$, and $\chi_3$, we then set up the $3\times3$ Hückel matrix appropriate to the two candidate structures: $\left[ \begin{array}{ccc} \alpha & \beta &0\\ \beta & \alpha &\beta\\ 0 & \beta & \alpha \end{array} \right]$ for the linear structure (n.b., the zeros arise because $\chi_1$ and $\chi_3$ do not overlap and thus have no $\beta$ coupling matrix element). Alternatively, for the triangular structure, we find $\left[ \begin{array}{ccc} \alpha & \beta &\beta\\ \beta & \alpha &\beta\\ \beta & \beta & \alpha \end{array} \right]$ as the Hückel matrix. Each of these $3\times3$ matrices will have three eigenvalues that we obtain by subtracting $\varepsilon$ from their diagonals and setting the determinants of the resulting matrices to zero. For the linear case, doing so generates $(\alpha-\varepsilon)^3 - 2 \beta^2 (\alpha-\varepsilon) = 0,$ and for the triangle case it produces $(\alpha-\varepsilon)^3 - 3 \beta^2 (\alpha-\varepsilon) + 2\beta^3 = 0.$ The first cubic equation has three solutions that give the MO energies: $\varepsilon = \alpha + \sqrt{2} \beta, \varepsilon = \alpha, \text{ and } \varepsilon = \alpha - \sqrt{2} \beta,$ for the bonding, non-bonding and antibonding MOs, respectively. The second cubic equation also has three solutions: $\varepsilon = \alpha + 2\beta, \varepsilon = \alpha - \beta , \text{ and } \varepsilon = \alpha - \beta.$ So, for the linear and triangular structures, the MO energy patterns are as shown in Figure 2.12. For the neutral $Na_3$ cluster about which you were asked, you have three valence electrons to distribute among the lowest available orbitals. In the linear case, we place two electrons into the lowest orbital and one into the second orbital. Doing so produces a 3-electron state with a total energy of $E= 2(\alpha+\sqrt{2} \beta) + \alpha= 3\alpha +2\sqrt{2}\beta$. Alternatively, for the triangular species, we put two electrons into the lowest MO and one into either of the degenerate MOs, resulting in a 3-electron state with total energy $E = 3 \alpha + 3\beta$. Because $\beta$ is a negative quantity, the total energy of the triangular structure is lower than that of the linear structure since $3 > 2\sqrt{2}$. The above example illustrates how we can use Hückel or tight-binding theory to make qualitative predictions (e.g., which of two shapes is likely to be of lower energy).
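A minimal numerical version of this comparison, assuming illustrative $\alpha$ and $\beta$ values (with $\beta < 0$):

```python
# A sketch (alpha, beta arbitrary with beta < 0): MO energies and 3-electron
# total energies for the two candidate Na3 structures.
import numpy as np

alpha, beta = -5.0, -1.0

H_linear = np.array([[alpha, beta, 0.0],
                     [beta, alpha, beta],
                     [0.0, beta, alpha]])
H_triangle = np.full((3, 3), beta) + np.diag(np.full(3, alpha - beta))

for name, H in [("linear  ", H_linear), ("triangle", H_triangle)]:
    eps = np.linalg.eigvalsh(H)          # ascending MO energies
    E_total = 2 * eps[0] + eps[1]        # two electrons in the lowest MO, one above
    print(f"{name}: MOs = {np.round(eps, 3)}, E(3 electrons) = {E_total:.3f}")

# linear:   alpha + sqrt(2) beta, alpha, alpha - sqrt(2) beta -> E = 3a + 2*sqrt(2) b
# triangle: alpha + 2 beta, alpha - beta (doubly degenerate)  -> E = 3a + 3b (lower)
```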
Notice that all one needs to know to apply such a model to any set of atomic orbitals that overlap to form MOs is 1. the individual AO energies $\alpha$ (which relate to the electronegativity of the AOs), 2. the degree to which the AOs couple (the $\beta$ parameters, which relate to AO overlaps), and 3. an assumed geometrical structure whose energy one wants to estimate. This example and the earlier example pertinent to $H_2$ or the $\pi$ bond in ethylene also introduce the idea of symmetry. Knowing, for example, that $H_2$, ethylene, and linear $Na_3$ have a left-right plane of symmetry allows us to solve the Hückel problem in terms of symmetry-adapted atomic orbitals rather than in terms of primitive atomic orbitals as we did earlier. For example, for linear $Na_3$, we could use the following symmetry-adapted functions: $\chi_2$ and $\frac{1}{\sqrt{2}} (\chi_1 + \chi_3),$ both of which are even under reflection through the symmetry plane, and $\frac{1}{\sqrt{2}} (\chi_1 - \chi_3),$ which is odd under reflection. The $3\times3$ Hückel matrix would then have the form $\left[ \begin{array}{ccc} \alpha & \sqrt{2}\beta &0\\ \sqrt{2}\beta & \alpha &0\\ 0 & 0 & \alpha \end{array} \right]$ For example, $H_{1,2}$ and $H_{1,3}$ are evaluated as follows: $H_{1,2} = \langle \frac{1}{\sqrt{2}} (\chi_1 + \chi_3)|H|\chi_2\rangle = \frac{2}{\sqrt{2}} \beta = \sqrt{2}\beta$ $H_{1,3} = \langle \frac{1}{\sqrt{2}} (\chi_1 + \chi_3)|H| \frac{1}{\sqrt{2}} (\chi_1 - \chi_3)\rangle = \frac{1}{2}( \alpha - 0 + 0 - \alpha)= 0.$ The three eigenvalues of the above Hückel matrix are easily seen to be $\alpha$, $\alpha+\sqrt{2}\beta$, and $\alpha-\sqrt{2}\beta$, exactly as we found earlier. So, it is not necessary to go through the process of forming symmetry-adapted functions; the primitive Hückel matrix will give the correct answers even if you do not. However, using symmetry allows us to break the full ($3\times3$ in this case) Hückel problem into separate Hückel problems for each symmetry component (one odd function and two even functions in this case, so a $1\times1$ and a $2\times2$ sub-matrix). While we are discussing the issue of symmetry, let me briefly explain the concept of approximate symmetry, again using the above Hückel problem as it applies to ethylene as an illustrative example. Clearly, as illustrated in Figure 2.12a, at its equilibrium geometry the ethylene molecule has a plane of symmetry (denoted $\sigma_{X,Y}$) that maps nuclei and electrons from its left to its right and vice versa. This is the symmetry element that could be used to decompose the $2\times2$ Hückel matrix describing the $\pi$ and $\pi^*$ orbitals into two $1\times1$ matrices. However, if any of the four C-H bond lengths or HCH angles is displaced from its equilibrium value in a manner that destroys the perfect symmetry of this molecule, or if one of the C-H units were replaced by a C-CH$_3$ unit, it might appear that symmetry would no longer be a useful tool in analyzing the properties of this molecule's molecular orbitals. Fortunately, this is not the case. Even if there is not perfect symmetry in the nuclear framework of this molecule, the two atomic $p_\pi$ orbitals will combine to produce a bonding $\pi$ and antibonding $\pi^*$ orbital. Moreover, these two molecular orbitals will still possess nodal properties similar to those shown in Figure 2.12a even though they will not possess perfect even and odd character relative to the $\sigma_{X,Y}$ plane.
The bonding orbital will still have the same sign to the left of the $\sigma_{X,Y}$ plane as it does to the right, and the antibonding orbital will have the opposite sign to the left as it does to the right, but the magnitudes of these two orbitals will not be left-right equal. This is an example of the concept of approximate symmetry. It shows that one can use symmetry, even when it is not perfect, to predict the nodal patterns of molecular orbitals, and it is the nodal patterns that govern the relative energies of orbitals, as we have seen time and again. Let's see if you can do some of this on your own. Using the above results, would you expect the cation $Na_3^+$ to be linear or triangular? What about the anion $Na_3^-$? Next, I want you to substitute the MO energies back into the $3\times3$ matrix and find the $\chi_1$, $\chi_2$, and $\chi_3$ coefficients appropriate to each of the 3 MOs of the linear and of the triangular structure. See if doing so leads you to solutions that can be depicted as shown in Figure 2.13, and see if you can place each set of MOs in the proper energy ordering. Now, I want to show you how to broaden your horizons and use tight-binding theory to describe all of the bonds in a more complicated molecule such as ethylene, shown in Figure 2.14. What is different about this kind of molecule when compared with metallic or conjugated species is that the bonding can be described in terms of several pairs of valence orbitals that couple to form two-center bonding and antibonding molecular orbitals. Within the Hückel model described above, each pair of orbitals that touch or overlap gives rise to a $2\times2$ matrix. More correctly, all $n$ of the constituent valence orbitals form an $n\times n$ matrix, but this matrix is broken up into $2\times2$ blocks. Notice that this did not happen in the triangular $Na_3$ case where each AO touched two other AOs. For the ethylene case, the valence orbitals consist of (a) four equivalent C $sp^2$ orbitals that are directed toward the four H atoms, (b) four H $1s$ orbitals, (c) two C $sp^2$ orbitals directed toward one another to form the C-C $\sigma$ bond, and (d) two C $p_\pi$ orbitals that will form the C-C $\pi$ bond. This total of 12 orbitals generates 6 Hückel matrices as shown below the ethylene molecule. We obtain one $2\times2$ matrix for the C-C $\sigma$ bond of the form $\left[ \begin{array}{cc} \alpha_{sp^2} & \beta_{sp^2,sp^2} \\ \beta_{sp^2,sp^2} & \alpha_{sp^2} \end{array} \right]$ and one $2\times2$ matrix for the C-C $\pi$ bond of the form $\left[ \begin{array}{cc} \alpha_{p_\pi} & \beta_{p_\pi,p_\pi} \\ \beta_{p_\pi,p_\pi} & \alpha_{p_\pi} \end{array} \right]$ Finally, we also obtain four identical $2\times2$ matrices for the C-H bonds: $\left[ \begin{array}{cc} \alpha_{sp^2} & \beta_{sp^2,H} \\ \beta_{sp^2,H} & \alpha_{H} \end{array} \right]$ The above matrices produce 1. four identical C-H bonding MOs having energies $\varepsilon = \dfrac{(\alpha_H + \alpha_C) - \sqrt{(\alpha_H - \alpha_C)^2 + 4\beta^2}}{2},$ 2. four identical C-H antibonding MOs having energies $\varepsilon^* = \dfrac{(\alpha_H + \alpha_C) + \sqrt{(\alpha_H - \alpha_C)^2 + 4\beta^2}}{2},$ 3. one bonding C-C $\pi$ orbital with $\varepsilon = \alpha_{p\pi}+ \beta,$ 4. a partner antibonding C-C orbital with $\varepsilon^* = \alpha_{p\pi} - \beta,$ 5.
a C-C $\sigma$ bonding MO with $\varepsilon = \alpha_{sp^2}+ \beta,$ and 6. its antibonding partner with $\varepsilon^* = \alpha_{sp^2}- \beta.$ In all of these expressions, the $\beta$ parameter is supposed to be that appropriate to the specific orbitals that overlap as shown in the matrices. If you wish to practice this exercise of breaking a large molecule down into sets of interacting valence orbitals, try to see what Hückel matrices you obtain and what bonding and antibonding MO energies you obtain for the valence orbitals of methane shown in Figure 2.15. Before leaving this discussion of the Hückel/tight-binding model, I need to stress that it has its flaws (because it is based on approximations and involves neglecting certain terms in the Schrödinger equation). For example, it predicts (see above) that ethylene has four energetically identical C-H bonding MOs (and four degenerate C-H antibonding MOs). However, this is not what is seen when photoelectron spectra are used to probe the energies of these MOs. Likewise, it suggests that methane has four equivalent C-H bonding and antibonding orbitals, which, again, is not true. It turns out that, in each of these two cases (ethylene and methane), the experiments indicate a grouping of four nearly iso-energetic bonding MOs and four nearly iso-energetic antibonding MOs. However, there is some “splitting” among these clusters of four MOs. The splittings can be interpreted, within the Hückel model, as arising from couplings or interactions among, for example, one $sp^2$ or $sp^3$ orbital on a given C atom and another such orbital on the same atom. Such couplings cause the $n\times n$ Hückel matrix to not block-partition into groups of $2\times2$ sub-matrices, because now there exist off-diagonal $\beta$ factors that couple one pair of directed valence orbitals to another. When such couplings are included in the analysis, one finds that the clusters of MOs expected to be degenerate are not degenerate but are split, just as the photoelectron data suggest.
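To make these tight-binding results concrete, here is a minimal numerical sketch (an illustration added here, not part of the original development) that diagonalizes the Hückel matrices discussed above with NumPy. The $\alpha$ and $\beta$ values are arbitrary illustrative choices; any negative $\beta$ produces the same qualitative orderings.

import numpy as np

alpha, beta = -5.0, -1.0   # illustrative values; beta < 0 as for bonding interactions

# Linear Na3: atoms 1-2 and 2-3 touch; atoms 1 and 3 do not.
H_linear = np.array([[alpha, beta, 0.0],
                     [beta, alpha, beta],
                     [0.0, beta, alpha]])
print(np.linalg.eigvalsh(H_linear))    # -> alpha + sqrt(2) beta, alpha, alpha - sqrt(2) beta

# Triangular Na3: every AO touches the other two.
H_triangle = np.array([[alpha, beta, beta],
                       [beta, alpha, beta],
                       [beta, beta, alpha]])
print(np.linalg.eigvalsh(H_triangle))  # -> alpha + 2 beta, then a degenerate pair at alpha - beta

# One heteronuclear 2x2 C-H block; its eigenvalues reproduce the closed form
# [(alpha_H + alpha_C) -/+ sqrt((alpha_H - alpha_C)^2 + 4 beta^2)] / 2.
aC, aH, b = -6.0, -4.0, -1.5           # again, purely illustrative numbers
print(np.linalg.eigvalsh(np.array([[aC, b], [b, aH]])))
print(((aH + aC) - np.sqrt((aH - aC)**2 + 4*b**2))/2,
      ((aH + aC) + np.sqrt((aH - aC)**2 + 4*b**2))/2)

The first two outputs reproduce the linear-chain eigenvalues $\alpha$, $\alpha\pm\sqrt{2}\beta$ and the triangular pattern of $\alpha+2\beta$ with a degenerate pair at $\alpha-\beta$; the last two show that direct diagonalization of a heteronuclear $2\times2$ block agrees with the closed-form C-H energies quoted above.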
The Hydrogenic atom problem forms the basis of much of our thinking about atomic structure. To solve the corresponding Schrödinger equation requires separation of the $r$, $\theta$, and $\phi$ variables. The Schrödinger equation for a single particle of mass $\mu$ moving in a central potential (one that depends only on the radial coordinate $r$) can be written as $-\frac{\hbar^2}{2\mu}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right)\psi+V\left(\sqrt{x^2+y^2+z^2}\right)\psi=E\psi$ or, introducing the short-hand notation $\nabla^2$: $-\frac{\hbar^2}{2\mu} \nabla^2 \psi + V \psi = E\psi.$ This equation is not separable in Cartesian coordinates ($x,y,z$) because of the way $x,y,$ and $z$ appear together in the square root. However, it is separable in spherical coordinates, where it has the form $-\frac{\hbar^2}{2\mu r^2} \frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) -\frac{\hbar^2}{2\mu r^2} \frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial \psi}{\partial\theta}\right) -\frac{\hbar^2}{2\mu r^2}\frac{1}{\sin^2\theta}\frac{\partial^2 \psi}{\partial\phi^2}+V(r)\psi=-\frac{\hbar^2}{2\mu}\nabla^2\psi+V\psi=E\psi.$ Subtracting $V(r) \psi$ from both sides of the equation, multiplying by $-\frac{2\mu r^2}{\hbar^2}$, and then moving the derivatives with respect to $r$ to the right-hand side, one obtains $\frac{1}{\sin\theta}\frac{\partial}{\partial \theta} \left(\sin\theta\frac{\partial \psi}{\partial\theta} \right) + \frac{1}{\sin^2\theta}\frac{\partial^2 \psi}{\partial\phi^2} = -\frac{2\mu r^2}{\hbar^2}(E-V(r)) \psi - \left(\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right)\right).$ Notice that, except for $\psi$ itself, the right-hand side of this equation is a function of $r$ only; it contains no $\theta$ or $\phi$ dependence. Let's call the entire right-hand side $F(r) \psi$ to emphasize this fact. To further separate the $\theta$ and $\phi$ dependence, we multiply by $\sin^2\theta$ and subtract the $\theta$ derivative terms from both sides to obtain $\frac{\partial^2 \psi}{\partial\phi^2}= F(r)\psi\sin^2\theta - \sin\theta\frac{\partial}{\partial\theta} \left(\sin\theta\frac{\partial \psi}{\partial\theta} \right).$ Now we have separated the $\phi$ dependence from the $\theta$ and $r$ dependence. We now introduce the procedure used to separate variables in differential equations and assume $\psi$ can be written as a function of $\phi$ times a function of $r$ and $\theta$: $\psi = \Phi(\phi) Q(r,\theta)$. Dividing by $\Phi Q$, we obtain $\frac{1}{\Phi}\frac{\partial^2\Phi}{\partial \phi^2}= \frac{1}{Q}\left(F(r)\sin^2\theta Q-\sin\theta\frac{\partial }{\partial\theta}\left(\sin\theta\frac{\partial Q}{\partial\theta}\right)\right).$ Now all of the $\phi$ dependence is isolated on the left-hand side; the right-hand side contains only $r$ and $\theta$ dependence. Whenever one has isolated the entire dependence on one variable, as we have done above for the $\phi$ dependence, one can easily see that the left and right hand sides of the equation must equal a constant. For the above example, the left-hand side contains no $r$ or $\theta$ dependence and the right-hand side contains no $\phi$ dependence. Because the two sides are equal for all values of $r$, $\theta$, and $\phi$, they both must actually be independent of $r$, $\theta$, and $\phi$; that is, they are constant.
This again is what is done when one employs the separation of variables method in partial differential equations. For the above example, we therefore can set both sides equal to a so-called separation constant that we call $-m^2$. It will become clear shortly why we have chosen to express the constant in the form of minus the square of an integer. You may recall that we studied this same $\phi$-equation earlier and learned how the integer $m$ arises via the boundary condition that $\phi$ and $\phi +2\pi$ represent identical geometries. The $\Phi$ Equation The resulting $\Phi$ equation reads (the $''$ symbol is used to represent the second derivative) $\Phi'' + m^2\Phi = 0.$ This equation should be familiar because it is the equation that we treated much earlier when we discussed the z-component of angular momentum. So, its further analysis should also be familiar, but for completeness, I repeat much of it. The above equation has as its most general solution $\Phi = A e^{im\phi} + B e^{-im\phi} .$ Because the wave functions of quantum mechanics represent probability densities, they must be continuous and single-valued. The latter condition, applied to our $\Phi$ function, means (n.b., we used this in our earlier discussion of the z-component of angular momentum) that $\Phi(\phi) = \Phi(2\pi+\phi)$ or $Ae^{im\phi}(1-e^{2im\pi})+ Be^{-im\phi}(1-e^{-2im\pi})= 0.$ This condition is satisfied only when $m$ is an integer: $m = 0, \pm1, \pm 2, \cdots$. This provides another example of the rule that quantization comes from the boundary conditions on the wave function. Here $m$ is restricted to certain discrete values because the wave function must be such that when you rotate through $2\pi$ about the z-axis, you must get back what you started with. The $\Theta$ Equation Now returning to the equation in which the $\phi$ dependence was isolated from the $r$ and $\theta$ dependence and rearranging the $\theta$ terms to the left-hand side, we have $\frac{1}{\sin\theta}\frac{\partial }{\partial \theta} \left(\sin\theta\frac{\partial Q}{\partial\theta} \right) - \frac{m^2Q}{\sin^2\theta} = F(r)Q.$ In this equation we have separated the $\theta$ and $r$ terms, so we can further decompose the wave function by introducing $Q = \Theta(\theta) R(r)$, which yields $\frac{1}{\Theta\sin\theta}\frac{\partial }{\partial \theta} \left(\sin\theta\frac{\partial \Theta}{\partial\theta} \right) - \frac{m^2}{\sin^2\theta} = F(r)=-\lambda,$ where a second separation constant, $-\lambda$, has been introduced once the $r$- and $\theta$-dependent terms have been separated onto the right- and left-hand sides, respectively. We now can write the $\theta$ equation as $\frac{1}{\sin\theta}\frac{\partial }{\partial \theta} \left(\sin\theta\frac{\partial \Theta}{\partial\theta} \right) - \frac{m^2\Theta}{\sin^2\theta} = -\lambda\Theta,$ where $m$ is the integer introduced earlier. To solve this equation for $\Theta$, we make the substitutions $z =\cos\theta$ and $P(z) = \Theta(\theta)$, so $\sqrt{1-z^2}=\sin\theta$, and $\frac{\partial }{\partial \theta} = \frac{\partial z}{\partial \theta}\frac{\partial }{\partial z}= - \sin\theta \frac{\partial }{\partial z}.$ The range of values for $\theta$ was $0 \le \theta \le \pi$, so the range for $z$ is $-1 \le z \le 1$.
The equation for $\Theta$, when expressed in terms of $P$ and $z$, becomes $\frac{d}{dz}\left((1-z^2)\frac{dP}{dz}\right)- \frac{m^2P}{1-z^2}+ \lambda P= 0.$ Now we can look for polynomial solutions for $P$, because $z$ is restricted to be less than unity in magnitude. If $m = 0$, we first let $P= \sum_{k=0}a_kz^k,$ and substitute into the differential equation to obtain $\sum_{k=0}(k+2)(k+1)a_{k+2}z^k - \sum_{k=0}(k+1)ka_{k}z^k+ \lambda\sum_{k=0}a_kz^k = 0.$ Equating like powers of $z$ gives $a_{k+2} = \frac{a_k(k(k+1)-\lambda)}{(k+2)(k+1)}.$ Note that for large values of $k$ $\frac{a_{k+2}}{a_{k}} \rightarrow \frac{k^2\left(1+\frac{1}{k}\right)}{k^2\left(1+\frac{2}{k}\right)\left(1+\frac{1}{k}\right)} = 1.$ Since the coefficients do not decrease with $k$ for large $k$, this series will diverge for $z = \pm 1$ unless it truncates at finite order. This truncation only happens if the separation constant $\lambda$ obeys $\lambda = l(l+1)$, where $l$ is a non-negative integer (you can see this from the recursion relation giving $a_{k+2}$ in terms of $a_k$; only for these values of $\lambda$ will the numerator vanish). So, once again, we see that a boundary condition (i.e., that the wave function not diverge and thus be normalizable, in this case) gives rise to quantization. In this case, the values of $\lambda$ are restricted to $l(l+1)$; before, we saw that $m$ is restricted to $0, \pm1, \pm 2, \cdots$. Since the above recursion relation links every other coefficient, we can choose to solve for the even and odd functions separately. Choosing $a_0$ and then determining all of the even $a_k$ in terms of this $a_0$, followed by rescaling all of these $a_k$ to make the function normalized, generates an even solution. Choosing $a_1$ and determining all of the odd $a_k$ in like manner generates an odd solution. For $l= 0$, the series truncates after one term and results in $P_0(z) = 1$. For $l= 1$ the same thing applies and $P_1(z) = z$. For $l= 2$, $a_2 = -6 \dfrac{a_0}{2}= -3a_0$, so one obtains $P_2 = 3z^2-1$ (to within an overall normalization factor), and so on. These polynomials are called Legendre polynomials and are denoted $P_l(z)$. For the more general case where $m \ne 0$, one can proceed as above to generate a polynomial solution for the $\Theta$ function. Doing so results in the following solutions: $P_l^m(z)=(1-z^2)^{|m|/2}\frac{d^{|m|}P_l(z)}{dz^{|m|}}.$ These functions are called Associated Legendre polynomials, and they constitute the solutions to the $\Theta$ problem for non-zero $m$ values. The above $P$ and $e^{im\phi}$ functions, when re-expressed in terms of $\theta$ and $\phi$, yield the full angular part of the wave function for any centrosymmetric potential. These solutions are usually written (to within a normalization constant) as $Y_{l,m}(\theta,\phi)= P_l^m(\cos\theta)\frac{1}{\sqrt{2\pi}} \exp(im\phi),$ and are called spherical harmonics. They provide the angular solution of the $r,\theta,\phi$ Schrödinger equation for any problem in which the potential depends only on the radial coordinate. Such situations include all one-electron atoms and ions (e.g., $H$, $He^+$, $Li^{2+}$, etc.), the rotational motion of a diatomic molecule (where the potential depends only on bond length $r$), the motion of a nucleon in a spherically symmetrical box (as occurs in the shell model of nuclei), and the scattering of two atoms (where the potential depends only on interatomic distance). The $Y_{l,m}$ functions possess varying numbers of angular nodes, which, as noted earlier, give clear signatures of the angular or rotational energy content of the wave function.
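As a quick numerical check of the recursion just derived, the following sketch (an illustration, not part of the original text) builds the truncating $m = 0$ series for $\lambda = l(l+1)$ and compares it, up to an overall normalization constant, with SciPy's Legendre polynomials.

import numpy as np
from scipy.special import eval_legendre

def legendre_via_recursion(l, z):
    # Truncating series solution for lambda = l(l+1); a0 (or a1) is set to 1,
    # so the result matches the conventional P_l only up to normalization.
    lam = l * (l + 1)
    coeffs = np.zeros(l + 1)
    coeffs[l % 2] = 1.0
    for k in range(l % 2, l - 1, 2):
        coeffs[k + 2] = coeffs[k] * (k * (k + 1) - lam) / ((k + 2) * (k + 1))
    return np.polynomial.polynomial.polyval(z, coeffs)

z = np.linspace(-0.9, 0.9, 7)
for l in range(5):
    ours = legendre_via_recursion(l, z)
    ref = eval_legendre(l, z)
    mask = np.abs(ours) > 1e-12
    ratio = ref[mask] / ours[mask]
    print(l, "proportional to P_l:", np.allclose(ratio, ratio[0]))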
These angular nodes originate in the oscillatory nature of the Legendre and associated Legendre polynomials $P_l^m(\cos\theta)$; the higher $l$ is, the more sign changes occur within the polynomial. The $R$ Equation Let us now turn our attention to the radial equation, which is the only place that the explicit form of the potential appears. Using our earlier results for the equation obeyed by the $R(r)$ function and specifying $V(r)$ to be the Coulomb potential appropriate for an electron in the field of a nucleus of charge $+Ze$, yields: $\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)+\left(\frac{2\mu}{\hbar^2}\left(E+\frac{Ze^2}{r}\right)-\frac{L(L+1)}{r^2}\right) R = 0.$ We can simplify things considerably if we choose rescaled length and energy units, because doing so removes the factors that depend on $\mu$, $\hbar$, and $e$. We introduce a new radial coordinate $\rho$ and a quantity $\sigma$ as follows: $\rho=r\sqrt{\frac{-8\mu E}{\hbar^2}} \text{ and } \sigma = \frac{\mu Ze^2}{\hbar\sqrt{-2\mu E}}.$ Notice that if $E$ is negative, as it will be for bound states (i.e., those states with energy below that of a free electron infinitely far from the nucleus and with zero kinetic energy), $\rho$ and $\sigma$ are real. On the other hand, if $E$ is positive, as it will be for states that lie in the continuum, $\rho$ and $\sigma$ will be imaginary. These two cases will give rise to qualitatively different behavior in the solutions of the radial equation developed below. We now define a function $S$ such that $S(\rho) = R(r)$ and substitute $S$ for $R$ to obtain: $\frac{1}{\rho^2}\frac{d}{d\rho}\left(\rho^2\frac{dS}{d\rho}\right) + \left(-\frac{1}{4}-\frac{L(L+1)}{\rho^2}+\dfrac{\sigma}{\rho}\right) S = 0.$ The differential operator terms can be recast in several ways using $\frac{1}{\rho^2}\frac{d}{d\rho}\left(\rho^2\frac{dS}{d\rho}\right)=\frac{d^2 S}{d\rho^2} +\frac{2}{\rho}\frac{dS}{d\rho} =\frac{1}{\rho}\frac{d^2}{d\rho^2}(\rho S) .$ The strategy that we now follow is characteristic of solving second-order differential equations. We will examine the equation for $S$ at large and small $\rho$ values. Having found solutions at these limits, we will use a power series in $\rho$ to interpolate between these two limits. Let us begin by examining the solution of the above equation at small values of $\rho$ to see how the radial functions behave there. As $\rho\rightarrow0$, the term $-\dfrac{L(L+1)}{\rho^2}$ will dominate over $-\dfrac{1}{4} +\dfrac{\sigma}{\rho}$. Neglecting these other two terms, we find that, for small values of $\rho$ (or $r$), the solution should behave like $\rho^L$, and because the function must be normalizable, we must have $L \ge 0$. Since $L$ can be any non-negative integer, this suggests the following more general form for $S(\rho)$: $S(\rho) \approx \rho^L e^{-a\rho}.$ This form will ensure that the function is normalizable, since $S(\rho) \rightarrow 0$ as $\rho \rightarrow \infty$ for all $L$, as long as $\rho$ is a real quantity. If $\rho$ is imaginary, such a form may not be normalizable (see below for further consequences). Turning now to the behavior of $S$ for large $\rho$, we substitute this form into the above equation, keep only the terms with the largest power of $\rho$ (i.e., the $-\dfrac{1}{4}$ term), and allow the derivatives in the above differential equation to act on $\rho^L e^{-a\rho}$.
Upon so doing, we obtain the equation $a^2\rho^Le^{-a\rho} = \frac{1}{4}\rho^Le^{-a\rho} ,$ which leads us to conclude that the exponent in the large-$\rho$ behavior of $S$ is $a = \dfrac{1}{2}$. Having found the small-$\rho$ and large-$\rho$ behaviors of $S(\rho)$, we can take $S$ to have the following form to interpolate between large and small $\rho$-values: $S(\rho) = \rho^L e^{-\frac{\rho}{2}}P(\rho),$ where the function $P$ is expanded in an infinite power series in $\rho$ as $P(\rho) =\sum a_k\rho^k$. Again substituting this expression for $S$ into the above equation we obtain $P''\rho + P'(2L+2-\rho) + P(\sigma-L-1) = 0,$ and then substituting the power series expansion of $P$ and solving for the $a_k$'s we arrive at a recursion relation for the $a_k$ coefficients: $a_{k+1} = \frac{(k-\sigma+L+1)a_k}{(k+1)(k+2L+2)}.$ For large $k$, the ratio of expansion coefficients reaches the limit $\dfrac{a_{k+1}}{a_k}=\dfrac{1}{k}$, which, when substituted into $\sum a_k\rho^k$, gives the same behavior as the power series expansion of $e^\rho$. Because the power series expansion of $P$ describes a function that behaves like $e^\rho$ for large $\rho$, the resulting $S(\rho)$ function would not be normalizable, because the $e^{-\frac{\rho}{2}}$ factor would be overwhelmed by this $e^\rho$ dependence. Hence, the series expansion of $P$ must truncate in order to achieve a normalizable $S$ function. Notice that if $\rho$ is imaginary, as it will be if $E$ is in the continuum, the argument that the series must truncate to avoid an exponentially diverging function no longer applies. Thus, we see a key difference between bound (with $\rho$ real) and continuum (with $\rho$ imaginary) states. In the former case, the boundary condition of non-divergence arises; in the latter, it does not, because $e^{\frac{\rho}{2}}$ does not diverge if $\rho$ is imaginary. To truncate at a polynomial of order $n'$, we must have $n'-\sigma+L+1= 0$. This implies that the quantity $\sigma$ introduced previously is restricted to $\sigma = n'+L+1$, which is certainly an integer; let us call this integer $n$. If we label states in order of increasing $n = 1,2,3,\cdots$, we see that doing so is consistent with specifying a maximum order ($n'$) in the $P(\rho)$ polynomial $n' = 0,1,2,\cdots$, after which the $L$ value can run from $L = 0$, in steps of unity, up to $L = n-1$. Substituting the integer $n$ for $\sigma$, we find that the energy levels are quantized because $\sigma$ is quantized (equal to $n$): $E = -\frac{\mu Z^2 e^4}{2\hbar^2 n^2}$ and the scaled distance turns out to be $\rho = \frac{2Zr}{a_0n}.$ Here, the length $a_0=\dfrac{\hbar^2}{\mu e^2}$ is the so-called Bohr radius, which turns out to be 0.529 Å; it appears once the above $E$-expression is substituted into the equation for $\rho$. Using the recursion equation to solve for the polynomial's coefficients $a_k$ for any choice of $n$ and $L$ quantum numbers generates a so-called Laguerre polynomial, $P_{n-L-1}(\rho)$. They contain powers of $\rho$ from zero through $n-L-1$, and they have $n-L-1$ sign changes as the radial coordinate ranges from zero to infinity. It is these sign changes in the Laguerre polynomials that cause the radial parts of the hydrogenic wave functions to have $n-L-1$ nodes. For example, $3d$ orbitals have no radial nodes, but $4d$ orbitals have one; and, as shown in Figure 2.16, $3p$ orbitals have one while $3s$ orbitals have two. Once again, the higher the number of nodes, the higher the energy in the radial direction.
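The radial recursion relation can be checked numerically in the same spirit as the angular one. The sketch below (an illustration, not from the original text) builds $P_{n-L-1}(\rho)$ from the recursion with $\sigma = n$ and counts the sign changes of $S(\rho)=\rho^L e^{-\rho/2}P(\rho)$, which should number $n-L-1$.

import numpy as np

def radial_polynomial(n, L):
    # Coefficients a_k of the truncating series; the overall scale (a_0 = 1)
    # is arbitrary, since normalization is fixed afterward.
    sigma = n
    coeffs = [1.0]
    for k in range(n - L - 1):          # series truncates at order n - L - 1
        coeffs.append(coeffs[k] * (k - sigma + L + 1) / ((k + 1) * (k + 2 * L + 2)))
    return np.array(coeffs)

def count_radial_nodes(n, L, rho_max=60.0, npts=20000):
    rho = np.linspace(1e-6, rho_max, npts)
    P = np.polynomial.polynomial.polyval(rho, radial_polynomial(n, L))
    S = rho**L * np.exp(-rho / 2) * P   # S(rho) = rho^L e^{-rho/2} P(rho)
    return int(np.sum(np.sign(S[:-1]) != np.sign(S[1:])))

for n in range(1, 5):
    for L in range(n):
        assert count_radial_nodes(n, L) == n - L - 1
print("all radial node counts equal n - L - 1")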
Let me again remind you about the danger of trying to understand quantum wave functions or probabilities in terms of classical dynamics. What kind of potential $V(r)$ would give rise to, for example, the $3s$ $P(r)$ plot shown above? Classical mechanics suggests that $P$ should be large where the particle moves slowly and small where it moves quickly. So, the $3s$ $P(r)$ plot suggests that the radial speed of the electron has three regions where it is low (i.e., where the peaks in $P$ are) and two regions where it is very large (i.e., where the nodes are). This, in turn, suggests that the radial potential $V(r)$ experienced by the $3s$ electron is high in three regions (near peaks in $P$) and low in two regions. Of course, this conclusion about the form of $V(r)$ is nonsense and again illustrates how one must not be drawn into trying to think of the classical motion of the particle, especially for quantum states with small quantum number. In fact, the low quantum number states of such one-electron atoms and ions have their radial $P(r)$ plots focused in regions of $r$-space where the potential $-Ze^2/r$ is most attractive (i.e., largest in magnitude). Finally, we note that the energy quantization does not arise for states lying in the continuum, because the condition that the expansion of $P(\rho)$ terminate does not arise. The solutions of the radial equation appropriate to these scattering states (which relate to the scattering motion of an electron in the field of a nucleus of charge $Z$) are a bit outside the scope of this text, so we will not treat them further here. To review, separation of variables has been used to solve the full $r,\theta,\phi$ Schrödinger equation for one electron moving about a nucleus of charge $Z$. The $\theta$ and $\phi$ solutions are the spherical harmonics $Y_{L,m} (\theta,\phi)$. The bound-state radial solutions $R_{n,L}(r) = S(\rho) = \rho^Le^{-\frac{\rho}{2}}P_{n-L-1}(\rho)$ depend on the $n$ and $L$ quantum numbers and are given in terms of the Laguerre polynomials. Summary To summarize, the quantum numbers $L$ and $m$ arise through boundary conditions requiring that $\psi(\theta)$ be normalizable (i.e., not diverge) and $\psi(\phi) = \psi(\phi+2\pi)$. The radial equation, which is the only place the potential energy enters, is found to possess both bound states (i.e., states whose energies lie below the asymptote at which the potential vanishes and the kinetic energy is zero) and continuum states lying energetically above this asymptote. The former states are spatially confined by the potential, but the latter are not. The resulting hydrogenic wave functions (angular and radial) and energies are summarized on pp. 133-136 in the text by L. Pauling and E. B. Wilson for $n$ up to and including 6 and $L$ up to 5 (i.e., for $s, p, d, f, g,$ and $h$ orbitals). There are both bound and continuum solutions to the radial Schrödinger equation for the attractive Coulomb potential because, at energies below the asymptote, the potential confines the particle between $r=0$ and an outer classical turning point, whereas at energies above the asymptote, the particle is no longer confined by an outer turning point (see Figure 2.17). This provides yet another example of how quantized states arise when the potential spatially confines the particle, but continuum states arise when the particle is not spatially confined. Figure 2.17: Radial Potential for Hydrogenic Atoms and Bound and Continuum Orbital Energies.
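For a feel for the bound-state energy scale, here is a trivial numerical illustration (not from the original text) of $E_n = -\mu Z^2e^4/(2\hbar^2n^2)$, expressed in eV through the Rydberg energy and assuming an infinitely heavy nucleus; the levels crowd together as they approach the $E = 0$ asymptote that separates bound from continuum states.

RYDBERG_EV = 13.6057   # Rydberg energy in eV, infinite-nuclear-mass value

def hydrogenic_energy(Z, n):
    # E_n = -Rydberg * Z^2 / n^2 for a one-electron ion of nuclear charge Z
    return -RYDBERG_EV * Z**2 / n**2

for ion, Z in [("H", 1), ("He+", 2), ("Li2+", 3)]:
    levels = [hydrogenic_energy(Z, n) for n in range(1, 5)]
    print(ion, [round(E, 2) for E in levels], "-> 0 (continuum threshold)")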
The solutions of this one-electron problem form the qualitative basis for much of atomic and molecular orbital theory. For this reason, the reader is encouraged to gain a firmer understanding of the nature of the radial and angular parts of these wave functions. The orbitals that result are labeled by $n$, $l$, and $m$ quantum numbers for the bound states and by $l$ and $m$ quantum numbers and the energy $E$ for the continuum states. Much as the particle-in-a-box orbitals are used to qualitatively describe $\pi$-electrons in conjugated polyenes, these so-called hydrogen-like orbitals provide qualitative descriptions of orbitals of atoms with more than a single electron. By introducing the concept of screening as a way to represent the repulsive interactions among the electrons of an atom, an effective nuclear charge $Z_{\rm eff}$ can be used in place of $Z$ in the $\psi_{n,l,m}$ and $E_n$ to generate approximate atomic orbitals to be filled by electrons in a many-electron atom. For example, in the crudest approximation of a carbon atom, the two $1s$ electrons experience the full nuclear attraction, so $Z_{\rm eff} = 6$ for them, whereas the $2s$ and $2p$ electrons are screened by the two $1s$ electrons, so $Z_{\rm eff} = 4$ for them. Within this approximation, one then occupies two $1s$ orbitals with $Z = 6$, two $2s$ orbitals with $Z = 4$, and two $2p$ orbitals with $Z=4$ in forming the full six-electron wave function of the lowest-energy state of carbon. It should be noted that the use of screened nuclear charges as just discussed is different from the use of a quantum defect parameter $\delta$ as discussed regarding Rydberg orbitals in Chapter 1. The $Z = 4$ screened charge for carbon's $2s$ and $2p$ orbitals is attempting to represent the effect of the inner-shell $1s$ electrons on the $2s$ and $2p$ orbitals. The modification of the principal quantum number made by replacing $n$ by $(n-\delta)$ represents the penetration of the orbital with nominal quantum number $n$ inside its inner shells. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis)
As we have seen several times already, solutions to the Schrödinger equation display several properties that are very different from what one experiences in Newtonian dynamics. One of the most unusual and important is that the particles one describes using quantum mechanics can move into regions of space where they would not be allowed to go if they obeyed classical equations. We call these classically forbidden regions. Let us consider an example to illustrate this so-called tunneling phenomenon. Specifically, we think of an electron (a particle that we likely would use quantum mechanics to describe) moving in a direction we will call $R$ under the influence of a potential that is: 1. Infinite for $R < 0$ (this could, for example, represent a region of space within a solid material where the electron experiences very repulsive interactions with other electrons); 2. Constant and negative for some range of $R$ between $R = 0$ and $R_{\rm max}$ (this could represent the attractive interaction of the electrons with those atoms or molecules in a finite region or surface of a solid); 3. Constant and repulsive (i.e., positive) by an amount $\delta V + D_e$ for another finite region from $R_{\rm max}$ to $R_{\rm max} +\delta$ (this could represent the repulsive interactions between the electrons and a layer of molecules of thickness $\delta$ lying on the surface of the solid at $R_{\rm max}$); 4. Constant and equal to $D_e$ from $R_{\rm max} +\delta$ to infinity (this could represent the electron being removed from the solid, but with a work function energy cost of $D_e$, and moving freely in the vacuum above the surface and the ad-layer). Such a potential is shown in Figure 2.18. The piecewise nature of this potential allows the one-dimensional Schrödinger equation to be solved analytically. For energies lying in the range $D_e < E < D_e +\delta V$, an especially interesting class of solutions exists. These so-called resonance states occur at energies that are determined by the condition that the amplitude of the wave function within the region enclosed by the barrier (i.e., for $0 \le R \le R_{\rm max}$) be large. Let us now turn our attention to this specific energy regime, which also serves to introduce the tunneling phenomenon. The piecewise solutions to the Schrödinger equation appropriate to the resonance case are easily written down in terms of sin and cos or exponential functions, using the following three definitions: $k=\sqrt{\dfrac{2m_e E}{\hbar^2}},\;\;k'=\sqrt{\dfrac{2m_e (E-D_e)}{\hbar^2}},\;\;\kappa'=\sqrt{\dfrac{2m_e (D_e+\delta V-E)}{\hbar^2}}.$ The combination of $\sin(kR)$ and $\cos(kR)$ that solves the Schrödinger equation in the inner region and that vanishes at $R=0$ (because the function must vanish within the region where $V$ is infinite and, because it must be continuous, it must vanish at $R=0$) is: $\psi = A\sin(kR) \hspace{1cm} (\text{for }0 \le R \le R_{\rm max} ).$ Between $R_{\rm max}$ and $R_{\rm max} +\delta$, there are two solutions that obey the Schrödinger equation, so the most general solution is a combination of these two: $\psi = B^+ \exp(\kappa'R) + B^- \exp(-\kappa'R) \hspace{1cm} (\text{for }R_{\rm max} \le R \le R_{\rm max} +\delta).$ Finally, in the region beyond $R_{\rm max} +\delta$, we can use a combination of either $\sin(k'R)$ and $\cos(k'R)$ or $\exp(ik'R)$ and $\exp(-ik'R)$ to express the solution.
Unlike the region near $R=0$, where it was most convenient to use the sin and cos functions because one of them could be “thrown away” since it could not meet the boundary condition of vanishing at $R=0$, in this large-$R$ region either set is acceptable. We choose to use the $\exp(ik'R)$ and $\exp(-ik'R)$ set because each of these functions is an eigenfunction of the momentum operator $-i\hbar\dfrac{\partial}{\partial R}$. This allows us to discuss amplitudes for electrons moving with positive momentum and with negative momentum. So, in this region, the most general solution is $\psi = C \exp(ik'R) + D \exp(-ik'R) \hspace{1cm} (\text{for }R_{\rm max} +\delta \le R < \infty).$ There are four amplitudes ($A, B^+, B^-,$ and $C$) that can be expressed in terms of the specified amplitude $D$ of the incoming flux (e.g., pretend that we know the flux of electrons that our experimental apparatus shoots at the surface). Four equations that can be used to achieve this goal result when $\psi$ and $\dfrac{d\psi}{dR}$ are matched at $R_{\rm max}$ and at $R_{\rm max} + \delta$ (one of the essential properties of solutions to the Schrödinger equation is that they and their first derivatives are continuous; these properties relate to $\psi$ being a probability amplitude and the momentum $-i\hbar\dfrac{\partial}{\partial R}$ being continuous). These four equations are: $A\sin(kR_{\rm max}) = B^+ \exp(\kappa'R_{\rm max}) + B^- \exp(-\kappa'R_{\rm max}),$ $Ak\cos(kR_{\rm max}) = \kappa'B^+ \exp(\kappa'R_{\rm max}) - \kappa'B^- \exp(-\kappa'R_{\rm max}),$ $B^+ \exp(\kappa'(R_{\rm max} + \delta)) + B^- \exp(-\kappa'(R_{\rm max} + \delta)) = C \exp(ik'(R_{\rm max} + \delta)) + D \exp(-ik'(R_{\rm max} + \delta)),$ $\kappa'B^+ \exp(\kappa'(R_{\rm max} + \delta)) - \kappa'B^- \exp(-\kappa'(R_{\rm max} + \delta)) = ik'C \exp(ik'(R_{\rm max} + \delta)) -ik' D \exp(-ik'(R_{\rm max} + \delta)).$ It is especially instructive to consider the value of $A/D$ that results from solving this set of four equations in four unknowns, because the modulus of this ratio provides information about the relative amount of amplitude that exists behind the barrier (i.e., in the attractive inner region of the potential) compared to that existing in the asymptotic region as incoming flux. The result of solving for $A/D$ is: $\dfrac{A}{D} = \frac{4 \kappa'\exp(-ik'(R_{\rm max}+\delta))}{\exp(\kappa'\delta)(ik'-\kappa')(\kappa'\sin(kR_{\rm max})+k\cos(kR_{\rm max}))/ik'+ \exp(-\kappa'\delta)(ik'+\kappa')(\kappa'\sin(kR_{\rm max})-k\cos(kR_{\rm max}))/ik' }.$ To simplify this result in a manner that focuses on conditions where tunneling plays a key role in creating the resonance states, it is instructive to consider this result under conditions of a high (large $D_e + \delta V - E$) and thick (large $\delta$) barrier. In such a case, the factor $\exp(-\kappa'\delta)$ will be very small compared to its counterpart $\exp(\kappa'\delta)$, and so $\dfrac{A}{D} = 4\frac{ik'\kappa'}{ik'-\kappa'} \frac{\exp(-ik'(R_{\rm max}+\delta)) \exp(-\kappa'\delta)}{\kappa'\sin(kR_{\rm max})+k\cos(kR_{\rm max}) }.$ The $\exp(-\kappa'\delta)$ factor in $A/D$ causes the magnitude of the wave function in the inner region to be small in most circumstances; we say that incident flux must tunnel through the barrier to reach the inner region and that $\exp(-\kappa'\delta)$ governs the probability of this tunneling.
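Rather than inverting the four matching conditions by hand, one can also solve them numerically. The sketch below (an illustration, not part of the original text) assumes $\hbar = m_e = 1$ and purely illustrative well and barrier parameters; it solves the $4\times4$ linear system for $(A, B^+, B^-, C)$ with $D = 1$ and scans the energy, so the resonance energies show up as sharp maxima in $|A/D|$.

import numpy as np

De, dV, Rmax, delta = 1.0, 4.0, 3.0, 1.0   # illustrative barrier parameters

def inner_amplitude(E):
    # Wave vectors per the k, k', kappa' definitions above (hbar = m_e = 1).
    k = np.sqrt(2 * E)
    kp = np.sqrt(2 * (E - De))               # k' outside the barrier
    kap = np.sqrt(2 * (De + dV - E))         # kappa' inside the barrier
    ep, em = np.exp(kap * Rmax), np.exp(-kap * Rmax)
    Ep, Em = np.exp(kap * (Rmax + delta)), np.exp(-kap * (Rmax + delta))
    out = np.exp(1j * kp * (Rmax + delta))
    # Rows: psi and dpsi/dR matched at Rmax, then at Rmax + delta.
    M = np.array([
        [np.sin(k * Rmax),     -ep,       -em,        0],
        [k * np.cos(k * Rmax), -kap * ep,  kap * em,  0],
        [0,                     Ep,        Em,       -out],
        [0,                     kap * Ep, -kap * Em, -1j * kp * out]], dtype=complex)
    rhs = np.array([0, 0, 1 / out, -1j * kp / out], dtype=complex)  # D = 1
    A, Bp, Bm, C = np.linalg.solve(M, rhs)
    return abs(A)

E = np.linspace(De + 0.05, De + dV - 0.05, 2000)
amp = np.array([inner_amplitude(e) for e in E])
peaks = E[1:-1][(amp[1:-1] > amp[:-2]) & (amp[1:-1] > amp[2:])]
print("approximate resonance energies:", np.round(peaks, 3))

For these parameters, the peaks lie close to the energies at which $\sin(kR_{\rm max})$ nearly vanishes, a point returned to below.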
Keep in mind that, in the energy range we are considering ($D_e < E < D_e+\delta V$), a classical particle could not even enter the region $R_{\rm max} < R < R_{\rm max} + \delta$; this is why we call this the classically forbidden or tunneling region. A classical particle starting in the large-$R$ region cannot enter, let alone penetrate, this region, so such a particle could never end up in the $0 <R < R_{\rm max}$ inner region. Likewise, a classical particle that begins in the inner region can never penetrate the tunneling region and escape into the large-$R$ region. Were it not for the fact that electrons obey a Schrödinger equation rather than Newtonian dynamics, tunneling would not occur and, for example, scanning tunneling microscopy (STM), which has proven to be a wonderful and powerful tool for imaging molecules on and near surfaces, would not exist. Likewise, many of the devices that appear in our modern electronic tools and games, which depend on currents induced by tunneling through various junctions, would not be available. But, of course, tunneling does occur and it can have remarkable effects. Let us examine an especially important (in chemistry) phenomenon that takes place because of tunneling and that occurs when the energy $E$ assumes very special values. The magnitude of the $A/D$ factor in the above solutions of the Schrödinger equation can become large if the energy $E$ is such that the denominator in the above expression for $A/D$ approaches zero. This happens when $\kappa'\sin(kR_{\rm max})+k\cos(kR_{\rm max}) = 0$ or, equivalently, when $\tan(kR_{\rm max}) = - \frac{k}{\kappa'}.$ It can be shown that the above condition is similar to the energy quantization condition $\tan(kR_{\rm max}) = - \frac{k}{\kappa}$ that arises for bound states of a finite potential well similar to that shown above, but with the barrier between $R_{\rm max}$ and $R_{\rm max} + \delta$ missing and with $E$ below $D_e$. There is, however, a difference. In the bound-state situation, two energy-related parameters occur: $k =\sqrt{\dfrac{2\mu E}{\hbar^2}}$ and $\kappa = \sqrt{\dfrac{2\mu (D_e-E)}{\hbar^2}} .$ In the case we are now considering, $k$ is the same, but $\kappa' = \sqrt{\dfrac{2\mu (D_e+\delta V-E)}{\hbar^2}}$ rather than $\kappa$ occurs, so the two equations involving $\tan(kR_{\rm max})$ are not identical, but they are quite similar. Another observation that is useful to make about the situations in which $A/D$ becomes very large can be made by considering the case of a very high barrier (so that $\kappa'$ is much larger than $k$). In this case, the denominator that appears in $A/D$, $\kappa'\sin(kR_{\rm max})+k\cos(kR_{\rm max}) \simeq \kappa' \sin(kR_{\rm max}),$ can become small at energies satisfying $\sin(kR_{\rm max}) \simeq 0.$ This condition is nothing but the energy quantization condition that occurs for the particle-in-a-box potential shown in Figure 2.19. This potential is identical to the potential that we were examining for $0 \le R \le R_{\rm max}$, but extends to infinity beyond $R_{\rm max}$; the barrier and the dissociation asymptote displayed by our potential are absent. Let's consider what this tunneling problem has taught us. First, it showed us that quantum particles penetrate into classically forbidden regions. It showed that, at certain so-called resonance energies, tunneling is much more likely than at energies that are off-resonance.
In our model problem, this means that electrons impinging on the surface with resonance kinetic energies will have a very high probability of tunneling to produce an electron that is highly localized (i.e., trapped) in the $0 < R < R_{\rm max}$ region. Likewise, it means that an electron prepared (e.g., perhaps by photo-excitation from a lower-energy electronic state) within the $0 < R < R_{\rm max}$ region will remain trapped in this region for a long time (i.e., will have a low probability of tunneling outward). In the case just mentioned, it would make sense to solve the four equations for the amplitude $C$ of the outgoing wave in the $R > R_{\rm max}+\delta$ region in terms of the $A$ amplitude. If we were to solve for $C/A$ and then examine under what conditions the amplitude of this ratio would become small (so the electron cannot escape), we would find the same $\tan(kR_{\rm max}) = - \dfrac{k}{\kappa'}$ resonance condition as we found from the other point of view. This means that the resonance energies tell us for what collision energies the electron will tunnel inward and produce a trapped electron and, at these same energies, an electron that is trapped will not escape quickly. Whenever one has a barrier on a potential energy surface, at energies above the dissociation asymptote $D_e$ but below the top of the barrier ($D_e + \delta V$ here), one can expect resonance states to occur at special scattering energies $E$. As we illustrated with the model problem, these so-called resonance energies can often be approximated by the bound-state energies of a potential that is identical to the potential of interest in the inner region ($0 \le R \le R_{\rm max}$) but that extends to infinity beyond the top of the barrier (i.e., beyond the barrier, it does not fall back to values below $E$). The chemical significance of resonances is great. Highly rotationally excited molecules may have more than enough total energy to dissociate ($D_e$), but this energy may be stored in the rotational motion, and the vibrational energy may be less than $D_e$. In terms of the above model, high rotational angular momentum may produce a significant centrifugal barrier in the effective potential that characterizes the molecule's vibration, but the system's vibrational energy may lie significantly below $D_e$. In such a case, and when viewed in terms of motion on an angular-momentum-modified effective potential such as I show in Figure 2.20, the lifetime of the molecule with respect to dissociation is determined by the rate of tunneling through the barrier. Figure 2.20. Radial potential for non-rotating ($J = 0$) molecule and for rotating molecule. In this case, one speaks of rotational predissociation of the molecule. The lifetime $\tau$ can be estimated by computing the frequency $\nu$ at which flux that exists inside $R_{\rm max}$ strikes the barrier at $R_{\rm max}$ $\nu = \frac{\hbar k}{2\mu R_{\rm max}} \hspace{2cm} ({\rm sec})^{-1}$ and then multiplying by the probability $P$ that flux tunnels through the barrier from $R_{\rm max}$ to $R_{\rm max} + \delta$: $P = \exp(-2\kappa'\delta).$ The result is that $\tau^{-1}= \frac{\hbar k}{2\mu R_{\rm max}} \exp(-2\kappa'\delta)$ with the energy $E$ entering into $k$ and $\kappa'$ being determined by the resonance condition: $\kappa'\sin(kR_{\rm max})+k\cos(kR_{\rm max}) = {\rm minimum}.$
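To get a feel for the magnitudes involved, here is a small numeric sketch (an illustration, not from the original text) of the lifetime estimate just given, evaluated in atomic units ($\hbar = 1$) with purely illustrative parameters.

import numpy as np

hbar = 1.0
mu = 1836.0          # roughly a proton mass in atomic units (illustrative choice)
Rmax = 4.0           # inner turning region size, bohr (illustrative)
delta = 2.0          # barrier thickness, bohr (illustrative)
E = 0.010            # vibrational energy, hartree (illustrative)
barrier_top = 0.015  # D_e + delta V, hartree (illustrative)

k = np.sqrt(2 * mu * E) / hbar
kappa = np.sqrt(2 * mu * (barrier_top - E)) / hbar

nu = hbar * k / (2 * mu * Rmax)   # attempt frequency, (atomic time unit)^-1
P = np.exp(-2 * kappa * delta)    # tunneling probability per attempt
tau_au = 1 / (nu * P)
print(f"tau = {tau_au:.3e} a.u. = {tau_au * 2.4189e-17:.3e} s")

For these numbers the lifetime comes out on the order of microseconds, and one can see directly how sensitively it depends on the barrier thickness and height through the $\exp(-2\kappa'\delta)$ factor.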
We note that the probability of tunneling, $\exp(-2\kappa'\delta)$, falls off exponentially with a factor depending on the width $\delta$ of the barrier through which the particle must tunnel multiplied by $\kappa'$, which depends on the height of the barrier $D_e + \delta V$ above the energy $E$ available. This exponential dependence on the thickness and height of the barrier is something you should keep in mind, because it appears in all tunneling rate expressions. Another important case in which tunneling occurs is in electronically metastable states of anions. In so-called shape resonance states, the anion's extra electron experiences an attractive potential due to its interaction with the underlying neutral molecule's dipole, quadrupole, and induced electrostatic moments, as well as a centrifugal potential of the form $\dfrac{L(L+1)\hbar^2}{2m_eR^2}$ whose magnitude depends on the angular character of the orbital the extra electron occupies. When combined, the above attractive and centrifugal potentials produce an effective radial potential of the form shown in Figure 2.21 for the $N_2^-$ case, in which the added electron occupies the $\pi^*$ orbital, which has $L=2$ character when viewed from the center of the N-N bond. Again, tunneling through the barrier in this potential determines the lifetimes of such shape resonance states. Although the examples treated analytically above involved piecewise constant potentials (so the Schrödinger equation and the boundary matching conditions could be solved exactly), many of the characteristics observed carry over to more chemically realistic situations. In fact, one can often model chemical reaction processes in terms of motion along a reaction coordinate ($s$) from a region characteristic of reactant materials, where the potential surface is positively curved in all directions and all forces (i.e., gradients of the potential along all internal coordinates) vanish; to a transition state, at which the potential surface's curvature along $s$ is negative while all other curvatures are positive and all forces vanish; onward to product materials, where again all curvatures are positive and all forces vanish. A prototypical trace of the energy variation along such a reaction coordinate is shown in Figure 2.22. Near the transition state at the top of the barrier on this surface, tunneling through the barrier plays an important role if the masses of the particles moving in this region are sufficiently light. Specifically, if $H$ or $D$ atoms are involved in the bond breaking and forming in this region of the energy surface, tunneling must usually be considered in treating the dynamics. Within the above reaction path point of view, motion transverse to the reaction coordinate is often modeled in terms of local harmonic motion, although more sophisticated treatments of the dynamics are possible. This picture leads one to consider motion along a single degree of freedom, with respect to which much of the above treatment can be carried over, coupled to transverse motion along all other internal degrees of freedom taking place under an entirely positively curved potential (which therefore produces restoring forces to movement away from the streambed traced out by the reaction path). This point of view constitutes one of the most widely used and successful models of molecular reaction dynamics and is treated in more detail in Chapters 3 and 8 of this text. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis)
Orbital Angular Momentum A particle moving with momentum $\textbf{p}$ at a position $\textbf{r}$ relative to some coordinate origin has so-called orbital angular momentum equal to $\textbf{L} = \textbf{r} \times \textbf{p}$. The three components of this angular momentum vector in a Cartesian coordinate system located at the origin mentioned above are given in terms of the Cartesian coordinates of $\textbf{r}$ and $\textbf{p}$ as follows: ${L}_z = x p_y - y p_x ,$ ${L}_x = y p_z - z p_y ,$ ${L}_y = z p_x - x p_z .$ Using the fundamental commutation relations among the Cartesian coordinates and the Cartesian momenta: $[ q_k , p_j ] = q_k p_j - p_j q_k = i\hbar \delta_{j,k} \;\;( j,k = x,y,z) ,$ which are proven by considering quantities of the form $(x p_x - p_x x)f=-i\hbar \left[x\frac{\partial f}{\partial x}-\frac{\partial (xf)}{\partial x}\right]=i\hbar f,$ it can be shown that the above angular momentum operators obey the following set of commutation relations: $[\textbf{L}_x, \textbf{L}_y] = i\hbar \textbf{L}_z ,$ $[\textbf{L}_y, \textbf{L}_z] = i\hbar \textbf{L}_x ,$ $[\textbf{L}_z, \textbf{L}_x] = i\hbar \textbf{L}_y .$ Although the components of $\textbf{L}$ do not commute with one another, they can be shown to commute with the operator $\textbf{L}^2$ defined by $\textbf{L}^2 = \textbf{L}_x^2 + \textbf{L}_y^2 + \textbf{L}_z^2 .$ This new operator is referred to as the square of the total angular momentum operator. The commutation properties of the components of $\textbf{L}$ allow us to conclude that complete sets of functions can be found that are eigenfunctions of $\textbf{L}^2$ and of one, but not more than one, component of $\textbf{L}$. It is convention to select this one component as $\textbf{L}_z$, and to label the resulting simultaneous eigenstates of $\textbf{L}^2$ and $\textbf{L}_z$ as $|l,m\rangle$ according to the corresponding eigenvalues: $\textbf{L}^2 |l,m\rangle = \hbar^2 l(l+1) |l,m\rangle , \;\;l = 0,1,2,3,....$ $\textbf{L}_z |l,m\rangle = \hbar m|l,m\rangle , \;\;m= -l, -l+1, \cdots, l-1, l.$ These eigenfunctions of $\textbf{L}^2$ and of $\textbf{L}_z$ will not, in general, be eigenfunctions of either $\textbf{L}_x$ or of $\textbf{L}_y$. This means that any measurement of $\textbf{L}_x$ or $\textbf{L}_y$ will necessarily change the wave function if it begins as an eigenfunction of $\textbf{L}_z$. The above expressions for $\textbf{L}_x$, $\textbf{L}_y$, and $\textbf{L}_z$ can be mapped into quantum mechanical operators by substituting $x$, $y$, and $z$ as the corresponding coordinate operators and $-i\hbar \dfrac{\partial}{\partial x}$, $-i\hbar \dfrac{\partial}{\partial y}$, and $-i\hbar \dfrac{\partial}{\partial z}$ for $p_x$, $p_y$, and $p_z$, respectively. The resulting operators can then be transformed into spherical coordinates, the results of which are: $\textbf{L}_z =-i\hbar \frac{\partial}{\partial \phi} ,$ $\textbf{L}_x = i\hbar \left[\sin\phi \frac{\partial}{\partial \theta} + \cot\theta \cos\phi \frac{\partial}{\partial \phi}\right] ,$ $\textbf{L}_y = -i\hbar \left[\cos\phi \frac{\partial}{\partial \theta} - \cot\theta \sin\phi \frac{\partial}{\partial \phi}\right] ,$ $\textbf{L}^2 = - \hbar^2 \left[ \frac{1}{\sin\theta} \frac{\partial}{\partial \theta} \left(\sin\theta \frac{\partial}{\partial \theta}\right) + \frac{1}{\sin^2\theta} \frac{\partial^2}{\partial \phi^2}\right] .$ Properties of General Angular Momenta There are many types of angular momenta that one encounters in chemistry.
Orbital angular momenta, such as that introduced above, arise in electronic motion in atoms, in atom-atom and electron-atom collisions, and in rotational motion in molecules. Intrinsic spin angular momentum is present in electrons, $^1H$, $^2H$, $^{13}C$, and many other nuclei. In this Section, we will deal with the behavior of any and all angular momenta and their corresponding eigenfunctions. At times, an atom or molecule contains more than one type of angular momentum. The Hamiltonian's interaction potentials present in a particular species may or may not cause these individual angular momenta to be coupled to an appreciable extent (i.e., the Hamiltonian may or may not contain terms that refer simultaneously to two or more of these angular momenta). For example, the $NH^-$ ion, which has a $^2\Pi$ ground electronic state (its electronic configuration is $1\sigma^22\sigma^23\sigma^22p_{\pi x}^22p_{\pi y}^1$), has electronic spin, electronic orbital, and molecular rotational angular momenta. The full Hamiltonian $H$ contains terms that couple the electronic spin and orbital angular momenta, thereby causing them individually to not commute with $H$. In such cases, the eigenstates of the system can be labeled rigorously only by angular momentum quantum numbers $j$ and $m$ belonging to the total angular momentum operators $\textbf{J}^2$ and $\textbf{J}_z$. The total angular momentum of a collection of individual angular momenta is defined, component-by-component, as follows: $J_k = \sum_i J_k(i),$ where $k$ labels $x$, $y$, and $z$, and $i$ labels the constituents whose angular momenta couple to produce $\textbf{J}$. For the remainder of this Section, we will study eigenfunction-eigenvalue relationships that are characteristic of all angular momenta and which are consequences of the commutation relations among the angular momentum vector's three components. We will also study how one combines eigenfunctions of two or more angular momenta {$\textbf{J}(i)$} to produce eigenfunctions of the total $\textbf{J}$. Consequences of the Commutation Relations Any set of three operators that obey $[\textbf{J}_x, \textbf{J}_y] = i\hbar \textbf{J}_z ,$ $[\textbf{J}_y, \textbf{J}_z] = i\hbar \textbf{J}_x ,$ $[\textbf{J}_z, \textbf{J}_x] = i\hbar \textbf{J}_y ,$ will be taken to define an angular momentum $\textbf{J}$, whose square $\textbf{J}^2= \textbf{J}_x^2 + \textbf{J}_y^2 + \textbf{J}_z^2$ commutes with all three of its components. It is useful to also introduce two combinations of the operators $\textbf{J}_x$ and $\textbf{J}_y$: $\textbf{J}_{\pm} = \textbf{J}_x \pm i \textbf{J}_y ,$ and to refer to them as raising and lowering operators for reasons that will be made clear below. These new operators can be shown to obey the following commutation relations: $[\textbf{J}^2, \textbf{J}_{\pm}] = 0,$ $[\textbf{J}_z, \textbf{J}_{\pm}] = \pm \hbar \textbf{J}_{\pm} .$ Using only the above commutation properties, it is possible to prove important properties of the eigenfunctions and eigenvalues of $\textbf{J}^2$ and $\textbf{J}_z$. Let us assume that we have found a set of simultaneous eigenfunctions of $\textbf{J}^2$ and $\textbf{J}_z$; the fact that these two operators commute tells us that this is possible. Let us label the eigenvalues belonging to these functions: $\textbf{J}^2 |j,m\rangle = \hbar^2 f(j,m) |j,m\rangle ,$ $\textbf{J}_z |j,m\rangle = \hbar m|j,m\rangle ,$ in terms of the quantities $m$ and $f(j,m)$.
Although we certainly hint that these quantities must be related to certain $j$ and $m$ quantum numbers, we have not yet proven this, although we will soon do so. For now, we view $f(j,m)$ and $m$ simply as symbols that represent the respective eigenvalues. Because both $\textbf{J}^2$ and $\textbf{J}_z$ are Hermitian, eigenfunctions belonging to different $f(j,m)$ or $m$ quantum numbers must be orthogonal: $\langle j,m|j',m'\rangle = \delta_{m,m^\prime} \delta_{j,j^\prime} .$ We now prove several identities that are needed to discover the information about the eigenvalues and eigenfunctions of general angular momenta that we are after. Later in this Section, the essential results are summarized. There is a Maximum and a Minimum Eigenvalue for $\textbf{J}_z$ Because all of the components of $\textbf{J}$ are Hermitian, and because the scalar product of any function with itself is positive semi-definite, the following identity holds: $\langle j,m|\textbf{J}_x^2 + \textbf{J}_y^2|j,m\rangle = \langle \textbf{J}_x j,m| \textbf{J}_x j,m\rangle + \langle \textbf{J}_y j,m| \textbf{J}_y j,m\rangle \ge 0.$ However, $\textbf{J}_x^2 + \textbf{J}_y^2$ is equal to $\textbf{J}^2 - \textbf{J}_z^2$, so this inequality implies that $\langle j,m| \textbf{J}^2 - \textbf{J}_z^2 |j,m\rangle = \hbar^2 \{f(j,m) - m^2\} \ge 0,$ which, in turn, implies that $m^2$ must be less than or equal to $f(j,m)$. Hence, for any value of the total angular momentum eigenvalue $f$, the z-projection eigenvalue ($m$) must have a maximum and a minimum value and both of these must be less than or equal to the total angular momentum squared eigenvalue $f$. The Raising and Lowering Operators Change the $\textbf{J}_z$ Eigenvalue but not the $\textbf{J}^2$ Eigenvalue When Acting on $|j,m\rangle$ Applying the commutation relations obeyed by $\textbf{J}_{\pm}$ to $|j,m\rangle$ yields another useful result: $\textbf{J}_z \textbf{J}_{\pm} |j,m\rangle - \textbf{J}_{\pm} \textbf{J}_z |j,m\rangle = \pm \hbar \textbf{J}_{\pm} |j,m\rangle ,$ $\textbf{J}^2 \textbf{J}_{\pm} |j,m\rangle - \textbf{J}_{\pm} \textbf{J}^2 |j,m\rangle = 0.$ Now, using the fact that $|j,m\rangle$ is an eigenstate of $\textbf{J}^2$ and of $\textbf{J}_z$, these identities give $\textbf{J}_z \textbf{J}_{\pm} |j,m\rangle = (m\hbar \pm \hbar) \textbf{J}_{\pm} |j,m\rangle = \hbar (m\pm1) \textbf{J}_{\pm}|j,m\rangle ,$ $\textbf{J}^2 \textbf{J}_{\pm} |j,m\rangle = \hbar^2 f(j,m) \textbf{J}_{\pm} |j,m\rangle .$ These equations prove that the functions $\textbf{J}_{\pm} |j,m\rangle$ must either themselves be eigenfunctions of $\textbf{J}^2$ and $\textbf{J}_z$, with eigenvalues $\hbar^2 f(j,m)$ and $\hbar (m\pm1)$, respectively, or $\textbf{J}_{\pm} |j,m\rangle$ must equal zero. In the former case, we see that $\textbf{J}_{\pm}$ acting on $|j,m\rangle$ generates a new eigenstate with the same $\textbf{J}^2$ eigenvalue as $|j,m\rangle$ but with one unit of $\hbar$ higher or lower in $\textbf{J}_z$ eigenvalue. It is for this reason that we call $\textbf{J}_{\pm}$ raising and lowering operators. Notice that, although $\textbf{J}_{\pm} |j,m\rangle$ is indeed an eigenfunction of $\textbf{J}_z$ with eigenvalue $(m\pm1) \hbar$, $\textbf{J}_{\pm} |j,m\rangle$ is not identical to $|j,m\pm1\rangle$; it is only proportional to $|j,m\pm1\rangle$: $\textbf{J}_{\pm} |j,m\rangle = C^{\pm}_{j,m}|j,m\pm1\rangle .$ Explicit expressions for these $C^{\pm}_{j,m}$ coefficients will be obtained below.
Notice also that because the $\textbf{J}_{\pm} |j,m\rangle$, and hence $|j,m\pm1\rangle$, have the same $\textbf{J}^2$ eigenvalue as $|j,m\rangle$ (in fact, sequential application of $\textbf{J}_{\pm}$ can be used to show that all $|j,m'\rangle$, for all $m'$, have this same $\textbf{J}^2$ eigenvalue), the $\textbf{J}^2$ eigenvalue $f(j,m)$ must be independent of $m$. For this reason, $f$ can be labeled by one quantum number $j$. The $\textbf{J}^2$ Eigenvalues are Related to the Maximum and Minimum $\textbf{J}_z$ Eigenvalues, Which are Related to One Another Earlier, we showed that there exists a maximum and a minimum value for $m$, for any given total angular momentum. It is when one reaches these limiting cases that $\textbf{J}_{\pm} |j,m\rangle = 0$ applies. In particular, $\textbf{J}_{+} |j,m_{\rm max}\rangle = 0,$ $\textbf{J}_{-} |j,m_{\rm min}\rangle = 0.$ Applying the following identities: $\textbf{J}_{-} \textbf{J}_{+} = \textbf{J}^2 - \textbf{J}_z^2 -\hbar \textbf{J}_z ,$ $\textbf{J}_{+} \textbf{J}_{-} = \textbf{J}^2 - \textbf{J}_z^2 +\hbar \textbf{J}_z,$ respectively, to $|j,m_{\rm max}\rangle$ and $|j,m_{\rm min}\rangle$ gives $\hbar^2 \{ f(j,m_{\rm max}) - m_{\rm max}^2 - m_{\rm max}\} = 0,$ $\hbar^2 \{ f(j,m_{\rm min}) - m_{\rm min}^2 + m_{\rm min}\} = 0,$ which immediately gives the $\textbf{J}^2$ eigenvalues $f(j,m_{\rm max})$ and $f(j,m_{\rm min})$ in terms of $m_{\rm max}$ or $m_{\rm min}$: $f(j,m_{\rm max}) = m_{\rm max} (m_{\rm max}+1),$ $f(j,m_{\rm min}) = m_{\rm min} (m_{\rm min}-1).$ So, we now know the $\textbf{J}^2$ eigenvalues for $|j,m_{\rm max}\rangle$ and $|j,m_{\rm min}\rangle$. However, we earlier showed that $|j,m\rangle$ and $|j,m-1\rangle$ have the same $\textbf{J}^2$ eigenvalue (when we treated the effect of $\textbf{J}_{\pm}$ on $|j,m\rangle$) and that the $\textbf{J}^2$ eigenvalue is independent of $m$. If we therefore define the quantum number $j$ to be $m_{\rm max}$, we see that the $\textbf{J}^2$ eigenvalues are given by $\textbf{J}^2 |j,m\rangle = \hbar^2 j(j+1) |j,m\rangle .$ We also see that $f(j,m) = j(j+1) = m_{\rm max} (m_{\rm max}+1) = m_{\rm min} (m_{\rm min}-1),$ from which it follows that $m_{\rm min} = - m_{\rm max} .$ The $j$ Quantum Number Can Be Integer or Half-Integer Because the $m$-values run from $j$ to $-j$ in unit steps (a consequence of the property of the $\textbf{J}_{\pm}$ operators), there clearly can be only integer or half-integer values for $j$. In the former case, the $m$ quantum number runs over $-j, -j+1, -j+2, ..., -j+(j-1), 0, 1, 2, ... j$; in the latter, $m$ runs over $-j, -j+1, -j+2, ...-j+\Big(j-\dfrac{1}{2}\Big), \dfrac{1}{2}, \dfrac{3}{2}, ...j$. Only integer and half-integer values can range from $j$ to $-j$ in steps of unity. Species whose intrinsic angular momenta are integers are known as Bosons and those with half-integer spin are called Fermions.
More on $\textbf{J}_{\pm} |j,m\rangle$ Using the above results for the effect of $\textbf{J}_{\pm}$ acting on $|j,m\rangle$ and the fact that $\textbf{J}_{+}$ and $\textbf{J}_{-}$ are adjoints of one another (two operators $\textbf{F}$ and $\textbf{G}$ are adjoints if $\langle \psi|\textbf{F}|\chi\rangle = \langle \textbf{G}\psi|\chi\rangle$, for all $\psi$ and all $\chi$) allows us to write: $\langle j,m| \textbf{J}_{-} \textbf{J}_{+} |j,m\rangle = \langle j,m| (\textbf{J}^2 - \textbf{J}_z^2 -\hbar \textbf{J}_z ) |j,m\rangle = \hbar^2 \{j(j+1)-m(m+1)\} = \langle \textbf{J}_{+} j,m| \textbf{J}_{+} j,m\rangle = |C^+_{j,m}|^2,$ where $C^+_{j,m}$ is the proportionality constant between $\textbf{J}_{+}|j,m\rangle$ and the normalized function $|j,m+1\rangle$. Likewise, the effect of $\textbf{J}_{-}$ can be expressed as $\langle j,m| \textbf{J}_{+} \textbf{J}_{-} |j,m\rangle = \langle j,m| (\textbf{J}^2 - \textbf{J}_z^2 +\hbar \textbf{J}_z) |j,m\rangle = \hbar^2 \{j(j+1)-m(m-1)\} = \langle \textbf{J}_{-} j,m| \textbf{J}_{-} j,m\rangle = |C^-_{j,m}|^2,$ where $C^-_{j,m}$ is the proportionality constant between $\textbf{J}_{-} |j,m\rangle$ and the normalized $|j,m-1\rangle$. Thus, we can solve for $C^{\pm}_{j,m}$, after which the effect of $\textbf{J}_{\pm}$ on $|j,m\rangle$ is given by: $\textbf{J}_{\pm} |j,m\rangle = \hbar \sqrt{j(j+1) - m(m\pm1)}\; |j,m\pm1\rangle .$ Summary The above results apply to any angular momentum operators. The essential findings can be summarized as follows: (i) $\textbf{J}^2$ and $\textbf{J}_z$ have complete sets of simultaneous eigenfunctions. We label these eigenfunctions $|j,m\rangle$; they are orthonormal in both their $m$- and $j$-type indices: $\langle j,m| j',m'\rangle = \delta_{m,m^\prime} \delta_{j,j^\prime} .$ (ii) These $|j,m\rangle$ eigenfunctions obey: $\textbf{J}^2 |j,m\rangle = \hbar^2 j(j+1) |j,m\rangle , \{ j= \text{ integer or half-integer}\},$ $\textbf{J}_z |j,m\rangle = \hbar m|j,m\rangle , \{ m= -j,\text{ in steps of 1 to }+j\}.$ (iii) The raising and lowering operators $\textbf{J}_{\pm}$ act on $|j,m\rangle$ to yield functions that are eigenfunctions of $\textbf{J}^2$ with the same eigenvalue as $|j,m\rangle$ and eigenfunctions of $\textbf{J}_z$ with eigenvalue of $(m\pm1) \hbar$: $\textbf{J}_{\pm} |j,m\rangle = \hbar \sqrt{j(j+1) - m(m\pm1)}\; |j,m\pm1\rangle .$ (iv) When $\textbf{J}_{\pm}$ acts on the extremal states $|j,j\rangle$ or $|j,-j\rangle$, respectively, the result is zero. The results given above are, as stated, general. Any and all angular momenta have quantum mechanical operators that obey these equations. It is convention to designate specific kinds of angular momenta by specific letters; however, it should be kept in mind that no matter what letters are used, there are operators corresponding to $\textbf{J}^2$, $\textbf{J}_z$, and $\textbf{J}_{\pm}$ that obey relations as specified above, and there are eigenfunctions and eigenvalues that have all of the properties obtained above. For electronic or collisional orbital angular momenta, it is common to use $\textbf{L}^2$ and $\textbf{L}_z$; for electron spin, $\textbf{S}^2$ and $\textbf{S}_z$ are used; for nuclear spin, $\textbf{I}^2$ and $\textbf{I}_z$ are most common; and for molecular rotational angular momentum, $\textbf{N}^2$ and $\textbf{N}_z$ are most common (although sometimes $\textbf{J}^2$ and $\textbf{J}_z$ may be used). Whenever two or more angular momenta are combined or coupled to produce a total angular momentum, the latter is designated by $\textbf{J}^2$ and $\textbf{J}_z$.
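All of the summarized results can be verified numerically by building the $(2j+1)$-dimensional matrix representations of the operators in the $|j,m\rangle$ basis. The sketch below (an illustration, not from the original text) takes $\hbar = 1$ and checks the commutation relations, the $\textbf{J}^2$ eigenvalues, and the ladder-operator algebra for several integer and half-integer $j$ values.

import numpy as np

def angular_momentum_matrices(jval):
    m = np.arange(jval, -jval - 1, -1)        # m = j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    # <j,m+1| J+ |j,m> = sqrt(j(j+1) - m(m+1)), placed on the superdiagonal
    ladder = np.sqrt(jval * (jval + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(ladder, k=1).astype(complex)  # raising operator
    Jm = Jp.conj().T                           # lowering operator (adjoint of J+)
    Jx = (Jp + Jm) / 2
    Jy = (Jp - Jm) / 2j
    return Jx, Jy, Jz, Jp

for jval in (0.5, 1, 1.5, 2):
    Jx, Jy, Jz, Jp = angular_momentum_matrices(jval)
    J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
    dim = int(2 * jval + 1)
    assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)           # [Jx, Jy] = i Jz
    assert np.allclose(J2, jval * (jval + 1) * np.eye(dim))  # J^2 = j(j+1) * 1
    assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)                # [Jz, J+] = +J+
print("all angular momentum identities verified")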
Coupling of Angular Momenta If the Hamiltonian under study contains terms that couple two or more angular momenta $\textbf{J}(i)$, then only the components of the total angular momentum $\textbf{J} =\sum_i\textbf{J}(i)$ and the total $\textbf{J}^2$ will commute with $\textbf{H}$. It is therefore essential to label the quantum states of the system by the eigenvalues of $\textbf{J}_z$ and $\textbf{J}^2$ and to construct variational trial or model wave functions that are eigenfunctions of these total angular momentum operators. The problem of angular momentum coupling has to do with how to combine eigenfunctions of the uncoupled angular momentum operators, which are given as simple products of the eigenfunctions of the individual angular momenta $\prod_i |j_i,m_i\rangle$, to form eigenfunctions of $\textbf{J}^2$ and $\textbf{J}_z$. Eigenfunctions of $\textbf{J}_z$ Because the individual components of $\textbf{J}$ are formed additively, but $\textbf{J}^2$ is not, it is straightforward to form eigenstates of $\textbf{J}_z =\sum_i\textbf{J}_z(i);$ simple products of the form $\prod_i |j_i,m_i\rangle$ are eigenfunctions of $\textbf{J}_z$: $\textbf{J}_z \prod_i |j_i,m_i\rangle = \sum_k \textbf{J}_z(k) \prod_i |j_i,m_i\rangle = \sum_k \hbar m_k \prod_i |j_i,m_i\rangle ,$ and have $\textbf{J}_z$ eigenvalues equal to the sum of the individual $m_k\hbar$ eigenvalues. Hence, to form an eigenfunction with specified $J$ and $M$ eigenvalues, one must combine only those product states $\prod_i |j_i,m_i\rangle$ whose $m_i\hbar$ sum is equal to the specified $M$ value. Eigenfunctions of $\textbf{J}^2$; the Clebsch-Gordon Series The task is then reduced to forming eigenfunctions $|J,M\rangle$, given particular values for the {$j_i$} quantum numbers. When coupling pairs of angular momenta { $|j,m\rangle$ and $|j',m'\rangle$ }, the total angular momentum states can be written, according to what we determined above, as $|J,M\rangle = \sum_{m,m'} C^{J,M}_{j,m;j',m'} |j,m\rangle |j',m'\rangle ,$ where the coefficients $C^{J,M}_{j,m;j',m'}$ are called vector coupling coefficients (because angular momentum coupling is viewed much like adding two vectors $\textbf{j}$ and $\textbf{j}'$ to produce another vector $\textbf{J}$), and where the sum over $m$ and $m'$ is restricted to those terms for which $m+m' = M$. It is more common to express the vector coupling or so-called Clebsch-Gordon (CG) coefficients as $\langle j,m;j'm'|J,M\rangle$ and to view them as elements of a matrix whose columns are labeled by the coupled-state $J,M$ quantum numbers and whose rows are labeled by the quantum numbers characterizing the uncoupled product basis $j,m;j',m'$. It turns out that this matrix can be shown to be unitary so that the CG coefficients obey: $\sum_{m,m'} \langle j,m;j'm'|J,M\rangle ^* \langle j,m;j'm'|J',M'\rangle = \delta_{J,J^\prime} \delta_{M,M^\prime}$ and $\sum_{J,M} \langle j,n;j'n'|J,M\rangle \langle j,m;j'm'|J,M\rangle ^* = \delta_{n,m} \delta_{n',m'}.$ This unitarity of the CG coefficient matrix allows the inverse of the relation giving coupled functions in terms of the product functions: $|J,M\rangle = \sum_{m,m'} \langle j,m;j'm'|J,M\rangle |j,m\rangle |j',m'\rangle$ to be written as: $|j,m\rangle |j',m'\rangle = \sum_{J,M} \langle j,m;j'm'|J,M\rangle ^* |J,M\rangle$ $= \sum_{J,M} \langle J,M|j,m;j'm'\rangle |J,M\rangle .$ This result expresses the product functions in terms of the coupled angular momentum functions.
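As a concrete illustration, the short sketch below (assuming SymPy is installed; its sympy.physics.quantum.cg module provides a CG class that evaluates vector coupling coefficients symbolically) couples two spin-1/2 angular momenta and prints the coefficients $\langle \tfrac{1}{2},m;\tfrac{1}{2},m'|J,M\rangle$ for the two $M=0$ coupled states. The resulting columns can be checked against the orthonormality relations quoted above.

```python
from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1) / 2
# <j,m; j',m' | J,M> for j = j' = 1/2 and M = 0 (triplet and singlet)
for J in (1, 0):
    coeffs = {(m, -m): CG(half, m, half, -m, J, 0).doit()
              for m in (half, -half)}
    print(f"J={J}, M=0:", coeffs)
# expected: J=1 gives both coefficients sqrt(2)/2;
# J=0 gives sqrt(2)/2 and -sqrt(2)/2 (orthogonal to the J=1 column)
```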
Generation of the Clebsch-Gordon Coefficients The Clebsch-Gordon coefficients can be generated in a systematic manner; however, they can also be looked up in books where they have been tabulated (e.g., see Table 2.4 of R. N. Zare, Angular Momentum, John Wiley, New York (1988)). Here, we will demonstrate the technique by which the CG coefficients can be obtained, but we will do so for rather limited cases and refer the reader to more extensive tabulations for more cases. The strategy we take is to generate the $|J,J\rangle$ state (i.e., the state with maximum $M$-value) and to then use $\textbf{J}_{-}$ to generate $|J,J-1\rangle$, after which the state $|J-1,J-1\rangle$ (i.e., the state with one lower $J$ value) is constructed by finding a combination of the product states in terms of which $|J,J-1\rangle$ is expressed (because both $|J-1,J-1\rangle$ and $|J,J-1\rangle$ have the same $M$-value $M=J-1$) which is orthogonal to $|J,J-1\rangle$ (because $|J-1,J-1\rangle$ and $|J,J-1\rangle$ are eigenfunctions of the Hermitian operator $\textbf{J}^2$ corresponding to different eigenvalues, they must be orthogonal). This same process is then used to generate $|J,J-2\rangle$, $|J-1,J-2\rangle$ and (by orthogonality construction) $|J-2,J-2\rangle$, and so on. The States With Maximum and Minimum M-Values We begin with the state $|J,J\rangle$ having the highest $M$-value. This state must be formed by taking the highest $m$ and the highest $m'$ values (i.e., $m=j$ and $m'=j'$), and is given by: $|J,J\rangle = |j,j\rangle |j'j'\rangle .$ Only this one product is needed because only the one term with $m=j$ and $m'=j'$ contributes to the sum in the above CG series. The state $|J,-J\rangle = |j,-j\rangle |j',-j'\rangle$ with the minimum $M$-value is also given as a single product state. Notice that these states have $M$-values given as $\pm(j+j')$; since this is the maximum $M$-value, it must be that the $J$-value corresponding to this state is $J= j+j'$. States With One Lower M-Value But the Same $J$ Value Applying $\textbf{J}_{-}$ to $|J,J\rangle$, and expressing $\textbf{J}_{-}$ as the sum of lowering operators for the two individual angular momenta: $\textbf{J}_{-} = \textbf{J}_{-}(1) + \textbf{J}_{-}(2)$ gives $\textbf{J}_{-}|J,J\rangle = \hbar\sqrt{J(J+1) -J(J-1)} |J,J-1\rangle$ $= (\textbf{J}_{-}(1) + \textbf{J}_{-}(2)) |j,j\rangle |j'j'\rangle$ $= \hbar\sqrt{j(j+1) - j(j-1)} |j,j-1\rangle |j',j'\rangle + \hbar\sqrt{j'(j'+1)-j'(j'-1)} |j,j\rangle |j',j'-1\rangle .$ This result expresses $|J,J-1\rangle$ as follows: $|J,J-1\rangle = \frac{\sqrt{j(j+1)-j(j-1)} |j,j-1\rangle |j',j'\rangle + \sqrt{j'(j'+1)-j'(j'-1)} |j,j\rangle |j',j'-1\rangle }{\sqrt{J(J+1) -J(J-1)}};$ that is, the $|J,J-1\rangle$ state, which has $M=J-1$, is formed from the two product states $|j,j-1\rangle |j',j'\rangle$ and $|j,j\rangle |j',j'-1\rangle$ that have this same $M$-value. States With One Lower $J$ Value To find the state $|J-1,J-1\rangle$ that has the same $M$-value as the one found above but one lower $J$ value, we must construct another combination of the two product states with $M=J-1$ (i.e., $|j,j-1\rangle |j',j'\rangle$ and $|j,j\rangle |j',j'-1\rangle$) that is orthogonal to the combination representing $|J,J-1\rangle$; after doing so, we must scale the resulting function so it is properly normalized.
In this case, the desired function is: $|J-1,J-1\rangle = \dfrac{\sqrt{j(j+1)-j(j-1)} |j,j\rangle |j',j'-1\rangle -\sqrt{j'(j'+1)-j'(j'-1)} |j,j-1\rangle |j',j'\rangle}{\sqrt{J(J+1) -J(J-1)}} .$ It is straightforward to show that this function is indeed orthogonal to $|J,J-1\rangle$. States With Even Lower $J$ Values Having expressed $|J,J-1\rangle$ and $|J-1,J-1\rangle$ in terms of $|j,j-1\rangle |j',j'\rangle$ and $|j,j\rangle |j',j'-1\rangle$, we are now prepared to carry on with this stepwise process to generate the states $|J,J-2\rangle$, $|J-1,J-2\rangle$ and $|J-2,J-2\rangle$ as combinations of the product states with $M=J-2$. These product states are $|j,j-2\rangle |j',j'\rangle$, $|j,j\rangle |j',j'-2\rangle$, and $|j,j-1\rangle |j',j'-1\rangle$. Notice that there are precisely as many product states whose $m+m'$ values add up to the desired $M$-value as there are total angular momentum states that must be constructed (there are three of each in this case). The steps needed to find the state $|J-2,J-2\rangle$ are analogous to those taken above: 1. One first applies $\textbf{J}_{-}$ to $|J-1,J-1\rangle$ and to $|J,J-1\rangle$ to obtain $|J-1,J-2\rangle$ and $|J,J-2\rangle$, respectively, as combinations of $|j,j-2\rangle |j',j'\rangle$, $|j,j\rangle |j',j'-2\rangle$, and $|j,j-1\rangle |j',j'-1\rangle$. 2. One then constructs $|J-2,J-2\rangle$ as a linear combination of the $|j,j-2\rangle |j',j'\rangle$, $|j,j\rangle |j',j'-2\rangle$, and $|j,j-1\rangle |j',j'-1\rangle$ that is orthogonal to the combinations found for $|J-1,J-2\rangle$ and $|J,J-2\rangle$. Once $|J-2,J-2\rangle$ is obtained, it is then possible to move on to form $|J,J-3\rangle$, $|J-1,J-3\rangle$, and $|J-2,J-3\rangle$ by applying $\textbf{J}_{-}$ to the three states obtained in the preceding application of the process, and to then form $|J-3,J-3\rangle$ as the combination of $|j,j-3\rangle |j',j'\rangle$, $|j,j\rangle |j',j'-3\rangle$, $|j,j-2\rangle |j',j'-1\rangle$, $|j,j-1\rangle |j',j'-2\rangle$ that is orthogonal to the combinations obtained for $|J,J-3\rangle$, $|J-1,J-3\rangle$, and $|J-2,J-3\rangle$. Again notice that there are precisely the correct number of product states (four here) as there are total angular momentum states to be formed. In fact, the product states and the total angular momentum states are equal in number and are both members of orthonormal function sets (because $\textbf{J}^2(1)$, $\textbf{J}_z(1)$, $\textbf{J}^2(2)$, and $\textbf{J}_z(2)$ as well as $\textbf{J}^2$ and $\textbf{J}_z$ are Hermitian operators which have complete sets of orthonormal eigenfunctions). This is why the CG coefficient matrix is unitary; because it maps one set of orthonormal functions to another, with both sets containing the same number of functions. Example 1 Let us consider an example in which the spin and orbital angular momenta of the Si atom in its $^3P$ ground state can be coupled to produce various $^3P_J$ states. In this case, the specific values for $j$ and $j'$ are $j=S=1$ and $j'=L=1$. We could, of course, take $j=L=1$ and $j'=S=1$, but the final wave functions obtained would span the same space as those we are about to determine.
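Before tracing through the ladder-operator algebra of this example by hand, it may help to see the $1 \otimes 1$ coupling pattern that underlies it verified numerically. The sketch below (an illustration assuming NumPy, with $\hbar = 1$; the helper functions are ours) builds $\textbf{J}^2$ in the nine-dimensional product basis of two $j=1$ angular momenta and diagonalizes it; the eigenvalues $J(J+1)$ appear with exactly the degeneracies $2J+1$ expected for $J = 2, 1, 0$.

```python
import numpy as np

def jmat(j):
    # Jz and J+ in the |j,m> basis (m = j, ..., -j), hbar = 1
    m = j - np.arange(int(round(2 * j)) + 1)
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    return Jz, Jp

def total_J2(j1, j2):
    Jz1, Jp1 = jmat(j1); Jz2, Jp2 = jmat(j2)
    I1, I2 = np.eye(Jz1.shape[0]), np.eye(Jz2.shape[0])
    Jz = np.kron(Jz1, I2) + np.kron(I1, Jz2)   # additive components
    Jp = np.kron(Jp1, I2) + np.kron(I1, Jp2)
    return Jz @ Jz + 0.5 * (Jp @ Jp.T + Jp.T @ Jp)

# couple S = 1 with L = 1, as in the Si 3P example below
vals = np.linalg.eigvalsh(total_J2(1, 1)).round(8)
print(sorted(vals))   # 0 once (J=0), 2 three times (J=1), 6 five times (J=2)
```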
The state with highest $M$-value is the $^3P(M_S=1, M_L=1)$ state, which can be represented by the product of an $\alpha\alpha$ spin function (representing $S=1, M_S=1$) and a $3p_13p_0$ spatial function (representing $L=1, M_L=1$), where the first function corresponds to the first open-shell orbital and the second function to the second open-shell orbital. Thus, the maximum $M$-value is $M= 2$ and corresponds to a state with $J=2$: $|J=2,M=2\rangle = |2,2\rangle = \alpha\alpha 3p_13p_0 .$ Clearly, the state $|2,-2\rangle$ would be given as $\beta\beta 3p_{-1}3p_0$. The states $|2,1\rangle$ and $|1,1\rangle$ with one lower $M$-value are obtained by applying $\textbf{J}_{-} = \textbf{S}_{-} + \textbf{L}_{-}$ to $|2,2\rangle$ as follows: $\textbf{J}_{-} |2,2\rangle = \hbar\sqrt{J(J+1)-M(M-1)} |2,1\rangle = \hbar\sqrt{2(3)-2(1)} |2,1\rangle$ $= (\textbf{S}_{-} + \textbf{L}_{-}) \alpha\alpha 3p_13p_0 .$ To apply $\textbf{S}_{-}$ or $\textbf{L}_{-}$ to $\alpha\alpha 3p_13p_0$, one must realize that each of these operators is, in turn, a sum of lowering operators for each of the two open-shell electrons: $\textbf{S}_{-} = \textbf{S}_{-}(1) + \textbf{S}_{-}(2),$ $\textbf{L}_{-} = \textbf{L}_{-}(1) + \textbf{L}_{-}(2).$ The result above can therefore be continued as $(\textbf{S}_{-} + \textbf{L}_{-}) \alpha\alpha 3p_13p_0 = \hbar\sqrt{\dfrac{1}{2}\Big(\dfrac{3}{2}\Big)-\dfrac{1}{2}\Big(-\dfrac{1}{2}\Big)} \beta\alpha 3p_13p_0$ $+ \hbar\sqrt{\dfrac{1}{2}\Big(\dfrac{3}{2}\Big)-\dfrac{1}{2}\Big(-\dfrac{1}{2}\Big)} \alpha\beta 3p_13p_0$ $+ \hbar\sqrt{1(2)-1(0)} \alpha\alpha 3p_03p_0$ $+ \hbar\sqrt{1(2)-0(-1)} \alpha\alpha 3p_13p_{-1}.$ So, the function $|2,1\rangle$ is given by $|2,1\rangle = \dfrac{1}{2}[\beta\alpha 3p_13p_0 + \alpha\beta 3p_13p_0 + \sqrt{2} \alpha\alpha 3p_03p_0+ \sqrt{2} \alpha\alpha 3p_13p_{-1}],$ which can be rewritten as: $|2,1\rangle = \dfrac{1}{2}[(\beta\alpha + \alpha\beta) 3p_13p_0 + \sqrt{2} \alpha\alpha (3p_03p_0 + 3p_13p_{-1})].$ Writing the result in this way makes it clear that $|2,1\rangle$ is a combination of the product states $|S=1,M_S=0\rangle |L=1,M_L=1\rangle$ (the terms containing $|S=1,M_S=0\rangle = \frac{1}{\sqrt{2}}(\alpha\beta+\beta\alpha)$) and $|S=1,M_S=1\rangle |L=1,M_L=0\rangle$ (the terms containing $|S=1,M_S=1\rangle = \alpha\alpha$). There is a good chance that some readers have noticed that some of the terms in the $|2,1\rangle$ function would violate the Pauli exclusion principle. In particular, the term $\alpha\alpha 3p_03p_0$ places two electrons into the same orbitals and with the same spin. This electronic function would indeed violate the Pauli principle, and it should not be allowed to contribute to the final Si $^3P_J$ wave functions we are trying to form. The full resolution of how to deal with this paradox is given in the following Subsection, but for now let me say the following: (i) Once you have learned that all of the spin-orbital product functions shown for $|2,1\rangle$ (e.g., $\alpha\alpha 3p_03p_0$, $(\beta\alpha + \alpha\beta) 3p_13p_0$, and $\alpha\alpha 3p_13p_{-1}$) represent Slater determinants (we deal with this in the next Subsection) that are antisymmetric with respect to permutation of any pair of electrons, you will understand that the Slater determinant corresponding to $\alpha\alpha 3p_03p_0$ vanishes.
(ii) If, instead of considering the $3s^2 3p^2$ configuration of Si, we wanted to generate wave functions for the $3s^2 3p^1 4p^1$ $^3P_J$ states of Si, the same analysis as shown above would pertain, except that now the $|2,1\rangle$ state would have a contribution from $\alpha\alpha 3p_04p_0$. This contribution does not violate the Pauli principle, and its Slater determinant does not vanish. So, for the remainder of this treatment of the $^3P_J$ states of Si, don’t worry about terms arising that violate the Pauli principle; they will not contribute because their Slater determinants will vanish. To form the other function with $M=1$, the $|1,1\rangle$ state, we must find another combination of $|S=1,M_S=0\rangle |L=1,M_L=1\rangle$ and $|S=1,M_S=1\rangle |L=1,M_L=0\rangle$ that is orthogonal to $|2,1\rangle$ and is normalized. Since $|2,1\rangle = \frac{1}{\sqrt{2}} [|S=1,M_S=0\rangle |L=1,M_L=1\rangle + |S=1,M_S=1\rangle |L=1,M_L=0\rangle ],$ we immediately see that the requisite function is $|1,1\rangle = \frac{1}{\sqrt{2}} [|S=1,M_S=0\rangle |L=1,M_L=1\rangle - |S=1,M_S=1\rangle |L=1,M_L=0\rangle ].$ In the spin-orbital notation used above, this state is: $|1,1\rangle = \dfrac{(\beta\alpha + \alpha\beta) 3p_13p_0 - \sqrt{2} \alpha\alpha (3p_03p_0 + 3p_13p_{-1})}{2}.$ Thus far, we have found the $^3P_J$ states with $J=2, M=2; J=2, M=1;$ and $J=1, M=1$. To find the $^3P_J$ states with $J=2, M=0; J=1, M=0$; and $J=0, M=0,$ we must once again apply the $\textbf{J}_{-}$ tool. In particular, we apply $\textbf{J}_{-}$ to $|2,1\rangle$ to obtain $|2,0\rangle$ and we apply $\textbf{J}_{-}$ to $|1,1\rangle$ to obtain $|1,0\rangle$, each of which will be expressed in terms of $|S=1,M_S=0\rangle |L=1,M_L=0\rangle$, $|S=1,M_S=1\rangle |L=1,M_L=-1\rangle$, and $|S=1,M_S=-1\rangle |L=1,M_L=1\rangle$. The $|0,0\rangle$ state is then constructed to be a combination of these same product states which is orthogonal to $|2,0\rangle$ and to $|1,0\rangle$. The results are as follows: $|J=2,M=0\rangle = \frac{1}{\sqrt{6}}[2 |1,0\rangle |1,0\rangle + |1,1\rangle |1,-1\rangle + |1,-1\rangle |1,1\rangle ],$ $|J=1,M=0\rangle = \frac{1}{\sqrt{2}}[|1,1\rangle |1,-1\rangle - |1,-1\rangle |1,1\rangle ],$ $|J=0, M=0\rangle = \frac{1}{\sqrt{3}}[|1,0\rangle |1,0\rangle - |1,1\rangle |1,-1\rangle - |1,-1\rangle |1,1\rangle ],$ where, in all cases, a shorthand notation has been used in which the $|S,M_S\rangle |L,M_L\rangle$ product states have been represented by their quantum numbers, with the spin function always appearing first in the product. To finally express all three of these new functions in terms of spin-orbital products, it is necessary to give the $|S,M_S\rangle |L,M_L\rangle$ products with $M=0$ in terms of the spin-orbital products. For the spin functions, we have: $|S=1,M_S=1\rangle = \alpha\alpha,$ $|S=1,M_S=0\rangle = \frac{1}{\sqrt{2}}(\alpha\beta+\beta\alpha),$ $|S=1,M_S=-1\rangle = \beta\beta.$ For the orbital product function, we have: $|L=1, M_L=1\rangle = 3p_13p_0 ,$ $|L=1,M_L=0\rangle = \frac{1}{\sqrt{2}}(3p_03p_0 + 3p_13p_{-1}),$ $|L=1, M_L=-1\rangle = 3p_03p_{-1}.$ Coupling Angular Momenta of Equivalent Electrons If equivalent angular momenta are coupled (e.g., to couple the orbital angular momenta of a $p^2$ or $d^3$ configuration), there is a tool one can use to determine which of the term symbols violate the Pauli principle.
To carry out this step, one forms all possible unique (determinental) product states with non-negative $M_L$ and $M_S$ values and arranges them into groups according to their $M_L$ and $M_S$ values. For example, the “boxes” appropriate to the $p^2$ orbital occupancy that we considered earlier for Si are shown below: $\begin{array}{c|ccc} M_S \backslash M_L & 2 & 1 & 0 \\ \hline 1 & & |p_1\alpha p_0\alpha| & |p_1\alpha p_{-1}\alpha| \\ 0 & |p_1\alpha p_1\beta| & |p_1\alpha p_0\beta|,\ |p_0\alpha p_1\beta| & |p_1\alpha p_{-1}\beta|,\ |p_{-1}\alpha p_1\beta|,\ |p_0\alpha p_0\beta| \end{array}$ There is no need to form the corresponding states with negative $M_L$ or negative $M_S$ values because they are simply "mirror images" of those listed above. For example, the state with $M_L= -1$ and $M_S = -1$ is $|p_{-1}\beta p_0\beta |$, which can be obtained from the $M_L = 1, M_S = 1$ state $|p_1\alpha p_0\alpha |$ by replacing $\alpha$ by $\beta$ and replacing $p_1$ by $p_{-1}$. Given the box entries, one can identify those term symbols that arise by applying the following procedure over and over until all entries have been accounted for: 1. One identifies the highest $M_S$ value (this gives a value of the total spin quantum number that arises, $S$) in the box. For the above example, the answer is $S = 1$. 2. For all product states of this $M_S$ value, one identifies the highest $M_L$ value (this gives a value of the total orbital angular momentum quantum number, $L$, that can arise for this $S$). For the above example, the highest $M_L$ within the $M_S =1$ states is $M_L = 1$ (not $M_L = 2$), hence $L=1$. 3. Knowing an $S, L$ combination, one knows the first term symbol that arises from this configuration. In the $p^2$ example, this is $^3P$. 4. Because the level with these $L$ and $S$ quantum numbers contains $(2L+1)(2S+1)$ states with $M_L$ and $M_S$ quantum numbers running from $-L$ to $L$ and from $-S$ to $S$, respectively, one must remove from the original box this number of product states. To do so, one simply erases from the box one entry with each such $M_L$ and $M_S$ value. Actually, since the box need only show those entries with non-negative $M_L$ and $M_S$ values, only these entries need be explicitly deleted. In the $^3P$ example, this amounts to deleting nine product states with $M_L$, $M_S$ values of 1,1; 1,0; 1,-1; 0,1; 0,0; 0,-1; -1,1; -1,0; -1,-1. 5. After deleting these entries, one returns to step 1 and carries out the process again. For the $p^2$ example, the box after deleting the first nine product states looks as follows (entries shown in brackets should be viewed as already deleted in counting all of the $^3P$ states): $\begin{array}{c|ccc} M_S \backslash M_L & 2 & 1 & 0 \\ \hline 1 & & [\,|p_1\alpha p_0\alpha|\,] & [\,|p_1\alpha p_{-1}\alpha|\,] \\ 0 & |p_1\alpha p_1\beta| & [\,|p_1\alpha p_0\beta|\,],\ |p_0\alpha p_1\beta| & [\,|p_1\alpha p_{-1}\beta|\,],\ |p_{-1}\alpha p_1\beta|,\ |p_0\alpha p_0\beta| \end{array}$ It should be emphasized that the process of deleting or crossing off entries in various $M_L, M_S$ boxes involves only counting how many states there are; by no means do we identify the particular $L,S,M_L,M_S$ wave functions when we cross out any particular entry in a box. For example, when the $|p_1\alpha p_0\beta |$ product is deleted from the $M_L= 1, M_S=0$ box in accounting for the states in the $^3P$ level, we do not claim that $|p_1\alpha p_0\beta |$ itself is a member of the $^3P$ level; the $|p_0\alpha p_1\beta|$ product state could just as well have been eliminated when accounting for the $^3P$ states. Returning to the $p^2$ example at hand, after the $^3P$ term symbol's states have been accounted for, the highest $M_S$ value is 0 (hence there is an $S=0$ state), and within this $M_S$ value, the highest $M_L$ value is 2 (hence there is an $L=2$ state).
This means there is a $^1D$ level with five states having $M_L = 2,1,0,-1,-2$. Deleting five appropriate entries from the above box (again denoting deletions by brackets) leaves the following box: $\begin{array}{c|ccc} M_S \backslash M_L & 2 & 1 & 0 \\ \hline 1 & & [\,|p_1\alpha p_0\alpha|\,] & [\,|p_1\alpha p_{-1}\alpha|\,] \\ 0 & [\,|p_1\alpha p_1\beta|\,] & [\,|p_1\alpha p_0\beta|\,],\ [\,|p_0\alpha p_1\beta|\,] & [\,|p_1\alpha p_{-1}\beta|\,],\ [\,|p_{-1}\alpha p_1\beta|\,],\ |p_0\alpha p_0\beta| \end{array}$ The only remaining entry, which thus has the highest $M_S$ and $M_L$ values, has $M_S = 0$ and $M_L = 0$. Thus there is also a $^1S$ level in the $p^2$ configuration. Thus, unlike the non-equivalent $3p_14p_1$ case, in which $^3P, ^1P, ^3D, ^1D, ^3S,$ and $^1S$ levels arise, only the $^3P, ^1D,$ and $^1S$ levels arise in the $p^2$ situation. This "box method" is useful to carry out whenever one is dealing with equivalent angular momenta. If one has mixed equivalent and non-equivalent angular momenta, one can determine all possible couplings of the equivalent angular momenta using this method and then use the simpler vector coupling method to add the non-equivalent angular momenta to each of these coupled angular momenta. For example, the $p^2d^1$ configuration can be handled by vector coupling (using the straightforward non-equivalent procedure) $L=2$ (the $d$ orbital) and $S=\dfrac{1}{2}$ (the third electron's spin) to each of $^3P, ^1D,$ and $^1S$ arising from the $p^2$ configuration. The result is $^4F, ^4D, ^4P, ^2F, ^2D, ^2P, ^2G, ^2F, ^2D, ^2P, ^2S,$ and $^2D$. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis)
Rotational Motion For Rigid Diatomic and Linear Polyatomic Molecules This Schrödinger equation relates to the rotation of diatomic and linear polyatomic molecules. It also arises when treating the angular motions of electrons in any spherically symmetric potential. A diatomic molecule with fixed bond length $R$ rotating in the absence of any external potential is described by the following Schrödinger equation: $- \frac{\hbar^2}{2\mu} \left[ \frac{1}{R^2\sin\theta}\frac{\partial}{\partial \theta} \left(\sin\theta \frac{\partial}{\partial \theta} \right) + \frac{1}{R^2\sin^2\theta} \frac{\partial^2}{\partial \phi^2} \right] \psi = E \psi$ or $\frac{\textbf{L}^2\psi}{2\mu R^2} = E \psi,$ where $\textbf{L}^2$ is the square of the total angular momentum operator $\textbf{L}_x^2 + \textbf{L}_y^2 + \textbf{L}_z^2$ expressed in polar coordinates above. The angles $\theta$ and $\phi$ describe the orientation of the diatomic molecule's axis relative to a laboratory-fixed coordinate system, and $\mu$ is the reduced mass of the diatomic molecule $\mu=\dfrac{m_1m_2}{m_1+m_2}$. The differential operators can be seen to be exactly the same as those that arose in the hydrogen-like-atom case discussed earlier in this Chapter. Therefore, the same spherical harmonics that served as the angular parts of the wave function in the hydrogen-atom case now serve as the entire wave function for the so-called rigid rotor: $\psi = Y_{J,M}(\theta,\phi)$. These are exactly the same functions as we plotted earlier when we graphed the $s$ ($L=0$), $p$ ($L=1$), and $d$ ($L=2$) orbitals. The energy eigenvalues corresponding to each such eigenfunction are given as: $E_J = \frac{\hbar^2 J(J+1)}{2\mu R^2} = B J(J+1)$ and are independent of $M$. Thus each energy level is labeled by $J$ and is $2J+1$-fold degenerate (because $M$ ranges from $-J$ to $J$). Again, this is just like we saw when we looked at the hydrogen orbitals; the $p$ orbitals are 3-fold degenerate and the $d$ orbitals are 5-fold degenerate. The so-called rotational constant $B$ (defined as $\dfrac{\hbar^2}{2\mu R^2}$) depends on the molecule's bond length and reduced mass. Spacings between successive rotational levels (which are of spectroscopic relevance because, as shown in Chapter 6, angular momentum selection rules often restrict the changes $\Delta J$ in $J$ that can occur upon photon absorption to 1, 0, and -1) are given by $\Delta E = B (J+1)(J+2) - B J(J+1) = 2B(J+1).$ These energy spacings are of relevance to microwave spectroscopy which probes the rotational energy levels of molecules. In fact, microwave spectroscopy offers the most direct way to determine molecular rotational constants and hence molecular bond lengths. The rigid rotor provides the most commonly employed approximation to the rotational energies and wave functions of linear molecules. As presented above, the model restricts the bond length to be fixed. Vibrational motion of the molecule gives rise to changes in $R$, which are then reflected in changes in the rotational energy levels (i.e., there are different $B$ values for different vibrational levels). The coupling between rotational and vibrational motion gives rise to rotational $B$ constants that depend on vibrational state as well as dynamical couplings, called centrifugal distortions, which cause the total ro-vibrational energy of the molecule to depend on rotational and vibrational quantum numbers in a non-separable manner.
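A few lines of code make the $2B(J+1)$ spacing pattern concrete. In the sketch below (a minimal illustration assuming NumPy; the mass and bond-length values are round, roughly CO-like placeholders rather than fitted data for any particular molecule), the rotational constant is computed from $\mu$ and $R$, and the first few levels and spacings are printed in units of $B$.

```python
import numpy as np

hbar = 1.054571817e-34        # J s
amu = 1.66053907e-27          # kg

# illustrative diatomic: round, roughly CO-like numbers (not fitted data)
m1, m2 = 12.0 * amu, 16.0 * amu
R = 1.13e-10                  # bond length in m
mu = m1 * m2 / (m1 + m2)      # reduced mass
B = hbar**2 / (2.0 * mu * R**2)   # rotational constant, in joules

J = np.arange(6)
E = B * J * (J + 1)                              # E_J = B J(J+1)
print("E_J / B     :", np.round(E / B, 6))       # 0, 2, 6, 12, 20, 30
print("Delta E / B :", np.round((E[1:] - E[:-1]) / B, 6))  # 2, 4, 6, 8, 10
```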
Within this rigid rotor model, the absorption spectrum of a rigid diatomic molecule should display a series of peaks, each of which corresponds to a specific $J \rightarrow J+1$ transition. The energies at which these peaks occur should grow linearly with $J$ as shown above. An example of such a progression of rotational lines is shown in Figure 2.23. The energies at which the rotational transitions occur appear to fit the $\Delta E = 2B (J+1)$ formula rather well. The intensities of transitions from level $J$ to level $J+1$ vary strongly with $J$ primarily because the population of molecules in the absorbing level varies with $J$. These populations $P_J$ are given, when the system is at equilibrium at temperature $T$, in terms of the degeneracy ($2J+1$) of the $J^{\rm th}$ level and the energy of this level $B J(J+1)$ by the Boltzmann formula: $P_J = \frac{1}{Q} (2J+1) \exp\bigg(-\dfrac{BJ(J+1)}{kT}\bigg),$ where $Q$ is the rotational partition function: $Q = \sum_J (2J+1) \exp\bigg(-\dfrac{BJ(J+1)}{kT}\bigg).$ For low values of $J$, the degeneracy is low and the $\exp(-BJ(J+1)/kT)$ factor is near unity. As $J$ increases, the degeneracy grows linearly but the $\exp(-BJ(J+1)/kT)$ factor decreases more rapidly. As a result, there is a value of $J$, given by taking the derivative of $(2J+1) \exp(-BJ(J+1)/kT)$ with respect to $J$ and setting it equal to zero, $2J_{\rm max}+ 1 =\sqrt{\frac{2kT}{B}}$ at which the intensity of the rotational transition is expected to reach its maximum. This behavior is clearly displayed in the above figure. The eigenfunctions belonging to these energy levels are the spherical harmonics $Y_{L,M}(\theta,\phi)$ which are normalized according to $\int_0^\pi\int_0^{2\pi}Y_{L,M}^*(\theta,\phi)Y_{L',M'}(\theta,\phi)\sin\theta \,d\theta \,d\phi= \delta_{L,L'} \delta_{M,M'} .$ As noted above, these functions are identical to those that appear in the solution of the angular part of Hydrogenic atoms. The above energy levels and eigenfunctions also apply to the rotation of rigid linear polyatomic molecules; the only difference is that the moment of inertia $I$ entering into the rotational energy expression, which is $\mu R^2$ for a diatomic, is given by $I = \sum_a m_a R_a^2$ where $m_a$ is the mass of the $a^{\rm th}$ atom and $R_a$ is its distance from the center of mass of the molecule. Rotational Motions of Rigid Non-Linear Molecules The Rotational Kinetic Energy The classical rotational kinetic energy for a rigid polyatomic molecule is $H_{\rm rot} = \frac{J_a^2}{2I_a} + \frac{J_b^2}{2I_b} + \frac{J_c^2}{2I_c}$ where the $I_k (k = a, b, c)$ are the three principal moments of inertia of the molecule (the eigenvalues of the moment of inertia tensor). This tensor has elements in a Cartesian coordinate system ($K, K' = X, Y, Z$), whose origin is located at the center of mass of the molecule, that can be computed as: $I_{K,K} = \sum_j m_j (R_j^2 - R_{K,j}^2) \hspace{1cm} (\text{for }K = K')$ $I_{K,K'} = - \sum_j m_j R_{K,j} R_{K',j} \hspace{1cm} (\text{for } K \ne K').$ As discussed in more detail in R. N.
Zare, Angular Momentum, John Wiley, New York (1988), the components of the corresponding quantum mechanical angular momentum operators along the three principal axes are: $\textbf{J}_a = -i\hbar \cos\chi \left[\cot\theta \frac{\partial}{\partial \chi} - \frac{1}{\sin\theta}\frac{\partial}{\partial \phi} \right] -i\hbar \sin\chi \frac{\partial}{\partial \theta}$ $\textbf{J}_b = i\hbar \sin\chi \left[\cot\theta \frac{\partial}{\partial \chi} - \frac{1}{\sin\theta}\frac{\partial}{\partial \phi}\right] -i\hbar \cos\chi \frac{\partial}{\partial \theta}$ $\textbf{J}_c = - i\hbar \frac{\partial}{\partial \chi} .$ The angles $\theta$, $\phi$, and $\chi$ are the Euler angles needed to specify the orientation of the rigid molecule relative to a laboratory-fixed coordinate system. The corresponding square of the total angular momentum operator $\textbf{J}^2$ can be obtained as $\textbf{J}^2 = \textbf{J}_a^2 + \textbf{J}_b^2 + \textbf{J}_c^2$ $= - \hbar^2 \frac{\partial^2}{\partial \theta^2} - \hbar^2\cot\theta \frac{\partial}{\partial \theta} + \hbar^2 \frac{1}{\sin^2\theta} \left[\frac{\partial^2}{\partial \phi^2} + \frac{\partial^2}{\partial \chi^2} - 2 \cos\theta\frac{\partial^2}{\partial \phi\partial \chi} \right],$ and the component along the lab-fixed $Z$ axis $J_Z$ is $- i\hbar \partial /\partial \phi$ as we saw much earlier in this text. The Eigenfunctions and Eigenvalues for Special Cases Spherical Tops When the three principal moment of inertia values are identical, the molecule is termed a spherical top. In this case, the total rotational energy can be expressed in terms of the total angular momentum operator $\textbf{J}^2$ $\textbf{H}_{\rm rot} = \frac{\textbf{J}^2}{2I}.$ As a result, the eigenfunctions of $\textbf{H}_{\rm rot}$ are those of $\textbf{J}^2$ and $J_a$ as well as $J_Z$, both of which commute with $\textbf{J}^2$ and with one another. $J_Z$ is the component of $\textbf{J}$ along the lab-fixed $Z$-axis and commutes with $J_a$ because $J_Z = - i\hbar \partial /\partial \phi$ and $J_a = - i\hbar \partial /\partial \chi$ act on different angles. The energies associated with such eigenfunctions are $E(J,K,M) = \frac{\hbar^2 J(J+1)}{2I},$ for all $K$ (i.e., $J_a$ quantum numbers) ranging from $-J$ to $J$ in unit steps and for all $M$ (i.e., $J_Z$ quantum numbers) ranging from $-J$ to $J$. Each energy level is therefore $(2J + 1)^2$ degenerate because there are $2J + 1$ possible $K$ values and $2J + 1$ possible $M$ values for each $J$. The eigenfunctions $|J,M,K\rangle$ of $\textbf{J}^2$, $J_Z$ and $J_a$, are given in terms of the set of so-called rotation matrices $D_{J,M,K}$: $|J,M,K\rangle = \sqrt{\frac{2J+1}{8\pi^2}}D^*_{J,M,K}(\theta,\phi,\chi)$ which obey $\textbf{J}^2|J,M,K\rangle = \hbar^2 J(J+1) |J,M,K\rangle ,$ $\textbf{J}_a |J,M,K\rangle = \hbar K |J,M,K\rangle ,$ $\textbf{J}_Z |J,M,K\rangle = \hbar M |J,M,K\rangle .$ These $D_{J,M,K}$ functions are proportional to the spherical harmonics $Y_{J,M}(\theta,\phi)$ multiplied by $\exp(iK\chi)$, which reflects their $\chi$-dependence. Symmetric Tops Molecules for which two of the three principal moments of inertia are equal are called symmetric tops. Those for which the unique moment of inertia is smaller than the other two are termed prolate symmetric tops; if the unique moment of inertia is larger than the others, the molecule is an oblate symmetric top. An American football is prolate, and a Frisbee is oblate.
Again, the rotational kinetic energy, which is the full rotational Hamiltonian, can be written in terms of the total rotational angular momentum operator $\textbf{J}^2$ and the component of angular momentum along the axis with the unique principal moment of inertia: $\textbf{H}_{\rm rot} = \frac{\textbf{J}^2}{2I} + \textbf{J}_a^2\left[\frac{1}{2I_a} - \frac{1}{2I}\right]\text{, for prolate tops}$ $\textbf{H}_{\rm rot} = \frac{\textbf{J}^2}{2I} + \textbf{J}_c^2\left[\frac{1}{2I_c} - \frac{1}{2I}\right]\text{, for oblate tops}$ Here, the moment of inertia $I$ denotes that moment that is common to two directions; that is, $I$ is the non-unique moment of inertia. As a result, the eigenfunctions of $H_{\rm rot}$ are those of $\textbf{J}^2$ and $J_a$ or $J_c$ (and of $J_Z$), and the corresponding energy levels are: $E(J,K,M) = \frac{\hbar^2 J(J+1)}{2I} + \hbar^2 K^2 \left[\frac{1}{2I_a} - \frac{1}{2I}\right],$ for prolate tops $E(J,K,M) = \frac{\hbar^2 J(J+1)}{2I} + \hbar^2 K^2 \left[\frac{1}{2I_c} - \frac{1}{2I}\right],$ for oblate tops, again for $K$ and $M$ (i.e., $J_a$ or $J_c$ and $J_Z$ quantum numbers, respectively) ranging from $-J$ to $J$ in unit steps. Since the energy now depends on $K$, these levels are only $2J + 1$-fold degenerate due to the $2J + 1$ different $M$ values that arise for each $J$ value. Notice that for prolate tops, because $I_a$ is smaller than $I$, the energies increase with increasing $K$ for given $J$. In contrast, for oblate tops, since $I_c$ is larger than $I$, the energies decrease with $K$ for given $J$. The eigenfunctions $|J, M,K\rangle$ are the same rotation matrix functions as arise for the spherical-top case, so they do not require any further discussion at this time. Asymmetric Tops The rotational eigenfunctions and energy levels of a molecule for which all three principal moments of inertia are distinct (a so-called asymmetric top) cannot be expressed analytically in terms of the angular momentum eigenstates and the $J, M,$ and $K$ quantum numbers. In fact, no one has ever solved the corresponding Schrödinger equation for this case. However, given the three principal moments of inertia $I_a$, $I_b$, and $I_c$, a matrix representation of each of the three contributions to the rotational Hamiltonian $\textbf{H}_{\rm rot} = \frac{\textbf{J}_a^2}{2I_a} + \frac{\textbf{J}_b^2}{2I_b} + \frac{\textbf{J}_c^2}{2I_c}$ can be formed within a basis set of the {$|J, M, K\rangle$} rotation-matrix functions discussed earlier. This matrix will not be diagonal because the $|J, M, K\rangle$ functions are not eigenfunctions of the asymmetric top $\textbf{H}_{\rm rot}$. However, the matrix can be formed in this basis and subsequently brought to diagonal form by finding its eigenvectors {$C_{n, J,M,K}$} and its eigenvalues {$E_n$}. The vector coefficients express the asymmetric top eigenstates as $\psi_n (\theta,\phi,\chi) = \sum_{J, M, K} C_{n, J,M,K} |J, M, K\rangle .$ Because the total angular momentum $\textbf{J}^2$ still commutes with $H_{\rm rot}$, each such eigenstate will contain only one $J$ value, and hence $\psi_n$ can also be labeled by a $J$ quantum number: $\psi_{n,J} (\theta,\phi,\chi) = \sum_{M, K} C_{n, J,M,K} |J, M, K\rangle .$ To form the only non-zero matrix elements of $H_{\rm rot}$ within the $|J, M, K\rangle$ basis, one can use the following properties of the rotation-matrix functions (see, for example, R. N.
Zare, Angular Momentum, John Wiley, New York (1988)): $\langle J, M, K| \textbf{J}_a^2| J, M, K\rangle = \langle J, M, K| \textbf{J}_b^2| J, M, K\rangle$ $= \frac{1}{2} \langle J, M, K| \textbf{J}^2 - \textbf{J}_c^2 | J, M, K\rangle = \frac{\hbar^2}{2} [ J(J+1) - K^2 ],$ $\langle J, M, K| \textbf{J}_c^2| J, M, K\rangle = \hbar^2 K^2,$ $\langle J, M, K| \textbf{J}_a^2| J, M, K \pm 2\rangle = - \langle J, M, K| \textbf{J}_b^2| J, M, K \pm 2\rangle$ $= \frac{\hbar^2}{4} \sqrt{J(J+1) - K(K\pm 1)} \sqrt{J(J+1) -(K\pm 1)(K\pm 2)}$ $\langle J, M, K| \textbf{J}_c^2| J, M, K \pm 2\rangle = 0.$ Each of the elements of $\textbf{J}_c^2$, $\textbf{J}_a^2$, and $\textbf{J}_b^2$ must, of course, be multiplied, respectively, by $\dfrac{1}{2I_c}$, $\dfrac{1}{2I_a}$, and $\dfrac{1}{2I_b}$ and summed together to form the matrix representation of $H_{\rm rot}$. The diagonalization of this matrix then provides the asymmetric top energies and wave functions. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis)
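Although the asymmetric-top problem has no closed-form solution, the matrix construction just described takes only a few lines of code. The sketch below is a minimal illustration (assuming NumPy, with $\hbar = 1$, arbitrary illustrative moments of inertia, and the matrix elements quoted above; the function name is ours): it builds $\textbf{H}_{\rm rot}$ within a single-$J$ block of the $|J,M,K\rangle$ basis (the energies are independent of $M$) and diagonalizes it.

```python
import numpy as np

def asymmetric_top_energies(J, Ia, Ib, Ic):
    """Eigenvalues of H_rot for one J, in the |J,K> basis (hbar = 1)."""
    K = np.arange(-J, J + 1)
    JJ1 = J * (J + 1)
    # diagonal: <Ja^2> = <Jb^2> = (JJ1 - K^2)/2 and <Jc^2> = K^2
    H = np.diag((JJ1 - K**2) / 2.0 * (1/(2*Ia) + 1/(2*Ib)) + K**2 / (2*Ic))
    # off-diagonal K <-> K+2: +(1/4)sqrt(...)sqrt(...) from Ja^2, minus the
    # same quantity from Jb^2, each weighted by its 1/(2I) factor
    c = 0.25 * np.sqrt((JJ1 - K[:-2] * (K[:-2] + 1)) *
                       (JJ1 - (K[:-2] + 1) * (K[:-2] + 2)))
    off = (1/(2*Ia) - 1/(2*Ib)) * c
    H += np.diag(off, k=2) + np.diag(off, k=-2)
    return np.linalg.eigvalsh(H)

# illustrative moments of inertia in arbitrary units; J = 2 gives 5 levels
print(asymmetric_top_energies(2, 1.0, 2.0, 3.0))
```

As a quick sanity check, setting Ia equal to Ib makes the off-diagonal terms vanish and reproduces the diagonal (oblate symmetric-top) limit.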
This Schrödinger equation forms the basis for our thinking about bond stretching and angle bending vibrations as well as collective vibrations in solids called phonons. The radial motion of a diatomic molecule in its lowest ($J=0$) rotational level can be described by the following Schrödinger equation: $- \dfrac{\hbar^2}{2\mu r^2} \dfrac{\partial}{\partial r} \left(r^2\dfrac{\partial \psi}{\partial r}\right) +V(r) \psi = E \psi,$ where $\mu$ is the reduced mass $\mu = \dfrac{m_1m_2}{m_1+m_2}$ of the two atoms. If the molecule is rotating, then the above Schrödinger equation has an additional term $\dfrac{J(J+1) \hbar^2}{2\mu r^2}\psi$ on its left-hand side. Thus, each rotational state (labeled by the rotational quantum number $J$) has its own vibrational Schrödinger equation and thus its own set of vibrational energy levels and wave functions. It is common to examine the $J=0$ vibrational problem and then to use the vibrational levels of this state as approximations to the vibrational levels of states with non-zero $J$ values (treating the vibration-rotation coupling via perturbation theory). Let us thus focus on the $J=0$ situation. By substituting $\psi= \dfrac{\Phi(r)}{r}$ into this equation, one obtains an equation for $\Phi(r)$ in which the differential operators appear to be less complicated: $- \dfrac{\hbar^2}{2\mu} \dfrac{d^2\Phi}{dr^2} + V(r) \Phi = E \Phi.$ This equation is exactly the same as the equation seen earlier in this text for the radial motion of the electron in the hydrogen-like atoms except that the reduced mass $\mu$ replaces the electron mass $m$ and the potential $V(r)$ is not the Coulomb potential. If the vibrational potential is approximated as a quadratic function of the bond displacement $x = r-r_e$ expanded about the equilibrium bond length $r_e$ where $V$ has its minimum: $V = \dfrac{1}{2} k(r-r_e)^2,$ the resulting harmonic-oscillator equation can be solved exactly. Because the potential $V$ grows without bound as $x$ approaches $\infty$ or $-\infty$, only bound-state solutions exist for this model problem. That is, the motion is confined by the nature of the potential, so no continuum states exist in which the two atoms bound together by the potential are dissociated into two separate atoms. In solving the radial differential equation for this potential, the large-$r$ behavior is first examined. For large-$r$, the equation reads: $\dfrac{d^2\Phi}{dx^2} = \dfrac{1}{2} k x^2 \dfrac{2\mu}{\hbar^2} \Phi = \dfrac{k\mu}{\hbar^2} x^2 \Phi,$ where $x = r-r_e$ is the bond displacement away from equilibrium. Defining $\beta^2 =\dfrac{k\mu}{\hbar^2}$ and $\xi= \sqrt{\beta} x$ as a new scaled radial coordinate, and realizing that $\dfrac{d^2}{dx^2} = \beta \dfrac{d^2}{d\xi^2}$ allows the large-$r$ Schrödinger equation to be written as: $\dfrac{d^2\Phi}{d\xi^2} = \xi^2 \Phi$ which has the solution $\Phi_{\rm large-r} = \exp(- \xi^2/2).$ The general solution to the radial equation is then expressed as this large-$r$ solution multiplied by a power series in the $\xi$ variable: $\Phi = \exp(- \xi^2/2)\sum_{n=0}^\infty \xi^n C_n ,$ where the $C_n$ are coefficients to be determined. Substituting this expression into the full radial equation generates a set of recursion equations for the $C_n$ amplitudes. As in the solution of the hydrogen-like radial equation, the series described by these coefficients is divergent unless the energy $E$ happens to equal specific values. It is this requirement that the wave function not diverge so it can be normalized that yields energy quantization.
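The quantization forced by this non-divergence requirement can be mimicked numerically: discretizing the dimensionless oscillator equation on a finite grid and diagonalizing automatically selects the normalizable solutions. The sketch below (an illustration assuming NumPy; the grid size and box length are arbitrary convergence parameters, and grid diagonalization stands in for the series-termination argument) recovers energies close to $(n+\frac{1}{2})$ in units of $\hbar\sqrt{k/\mu}$.

```python
import numpy as np

# dimensionless oscillator: H = -(1/2) d^2/dxi^2 + (1/2) xi^2,
# energies measured in units of hbar*sqrt(k/mu)
n, L = 2000, 12.0
xi = np.linspace(-L, L, n)
h = xi[1] - xi[0]

# central-difference second derivative as a tridiagonal matrix
D2 = (np.diag(np.full(n, -2.0)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / h**2

H = -0.5 * D2 + np.diag(0.5 * xi**2)
E = np.linalg.eigvalsh(H)[:5]
print(E)   # approximately 0.5, 1.5, 2.5, 3.5, 4.5
```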
The energies of the states that arise by imposing this non-divergence condition are given by: $E_n = \hbar \sqrt{\dfrac{k}{\mu}} \left(n+\dfrac{1}{2}\right),$ and the eigenfunctions are given in terms of the so-called Hermite polynomials $H_n(y)$ as follows: $\psi_n(x) = \dfrac{1}{\sqrt{n! 2^n}} \left(\dfrac{\beta}{\pi}\right)^{1/4} \exp(- \beta x^2/2) H_n(\sqrt{\beta} x),$ where $\beta =\dfrac{\sqrt{k\mu}}{\hbar}$. Within this harmonic approximation to the potential, the vibrational energy levels are evenly spaced: $\Delta E = E_{n+1} - E_n = \hbar \sqrt{\dfrac{k}{\mu}} .$ In experimental data such evenly spaced energy level patterns are seldom seen; most commonly, one finds spacings $E_{n+1} - E_n$ that decrease as the quantum number $n$ increases. In such cases, one says that the progression of vibrational levels displays anharmonicity. Because the Hermite functions $H_n$ are odd or even functions of $x$ (depending on whether $n$ is odd or even), the wave functions $\psi_n(x)$ are odd or even. This splitting of the solutions into two distinct classes is an example of the effect of symmetry; in this case, the symmetry is caused by the symmetry of the harmonic potential with respect to reflection through the origin along the $x$-axis (i.e., changing $x$ to $-x$). Throughout this text, many symmetries arise; in each case, symmetry properties of the potential cause the solutions of the Schrödinger equation to be decomposed into various symmetry groupings. Such symmetry decompositions are of great use because they provide additional quantum numbers (i.e., symmetry labels) by which the wave functions and energies can be labeled. The basic idea underlying how such symmetries split the solutions of the Schrödinger equation into different classes relates to the fact that a symmetry operator (e.g., the reflection plane in the above example) commutes with the Hamiltonian. That is, the symmetry operator $\textbf{S}$ obeys $\textbf{S} \textbf{H} = \textbf{H} \textbf{S}.$ So $\textbf{S}$ leaves $\textbf{H}$ unchanged as it acts on $\textbf{H}$ (this allows us to pass $\textbf{S}$ through $\textbf{H}$ in the above equation). Any operator that leaves the Hamiltonian (i.e., the energy) unchanged is called a symmetry operator. If you have never learned about how point group symmetry can be used to help simplify the solution of the Schrödinger equation, this would be a good time to interrupt your reading and go to Chapter 4 and read the material there. The harmonic oscillator energies and wave functions comprise the simplest reasonable model for vibrational motion. Vibrations of a polyatomic molecule are often characterized in terms of individual bond-stretching and angle-bending motions, each of which is, in turn, approximated harmonically. This results in a total vibrational wave function that is written as a product of functions, one for each of the vibrational coordinates. Two of the most severe limitations of the harmonic oscillator model, the lack of anharmonicity (i.e., non-uniform energy level spacings) and lack of bond dissociation, result from the quadratic nature of its potential. By introducing model potentials that allow for proper bond dissociation (i.e., that do not increase without bound as $x \rightarrow \infty$), the major shortcomings of the harmonic oscillator picture can be overcome. The so-called Morse potential (see Figure 2.24) $V(r) = D_e (1-\exp(-a(r-r_e)))^2,$ is often used in this regard.
In this form, the potential is zero at $r = r_e$, the equilibrium bond length, and is equal to $D_e$ as $r \rightarrow\infty$. Sometimes, the potential is written as $V(r) = D_e (1-\exp(-a(r-r_e)))^2 -D_e$ so it vanishes as $r \rightarrow\infty$ and is equal to $-D_e$ at $r = r_e$. The latter form is reflected in Figure 2.24. In the Morse potential function, $D_e$ is the bond dissociation energy, $r_e$ is the equilibrium bond length, and $a$ is a constant that characterizes the steepness of the potential and thus affects the vibrational frequencies. The advantage of using the Morse potential to improve upon harmonic-oscillator-level predictions is that its energy levels and wave functions are also known exactly. The energies are given in terms of the parameters of the potential as follows: $E_n = \hbar \sqrt{\dfrac{k}{\mu}} \left[ \left(n+\dfrac{1}{2}\right) - \dfrac{ \left(n+\dfrac{1}{2}\right)^2\hbar \sqrt{k/\mu}}{4D_e} \right],$ where the force constant is given in terms of the Morse potential’s parameters by $k=2D_e a^2$. The Morse potential supports both bound states (those lying below the dissociation threshold for which vibration is confined by an outer turning point) and continuum states lying above the dissociation threshold (for which there is no outer turning point and thus no spatial confinement). Its degree of anharmonicity is governed by the ratio of the harmonic energy $\hbar \sqrt{\dfrac{k}{\mu}}$ to the dissociation energy $D_e$. The energy spacing between vibrational levels $n$ and $n+1$ is given by $E_{n+1} - E_n = \hbar \sqrt{\dfrac{k}{\mu}} \left( 1 - \dfrac{ (n+1)\hbar \sqrt{k/\mu}}{2D_e }\right).$ These spacings decrease until $n$ reaches the value $n_{\rm max}$ at which $1 - \dfrac{(n_{\rm max}+1) \hbar \sqrt{k/\mu}}{2D_e} = 0,$ after which the series of bound Morse levels ceases to exist (i.e., the Morse potential has only a finite number of bound states) and the Morse energy level expression shown above should no longer be used. It is also useful to note that, if $\dfrac{\sqrt{2D_e\mu}}{a \hbar}$ becomes too small (i.e., < 1.0 in the Morse model), the potential may not be deep enough to support any bound levels. It is true that some attractive potentials do not have a large enough $D_e$ value to have any bound states, and this is important to keep in mind. So, bound states are to be expected when there is a potential well (and thus the possibility of inner- and outer-turning points for the classical motion within this well) but only if this well is deep enough. The eigenfunctions of the harmonic and Morse potentials display nodal character analogous to what we have seen earlier in the particle-in-boxes model problems. Namely, as the energy of the vibrational state increases, the number of nodes in the vibrational wave function also increases. The state having vibrational quantum number $v$ has $v$ nodes. I hope that by now the student is getting used to seeing the number of nodes increase as the quantum number and hence the energy grows. As the quantum number $v$ grows, not only does the wave function have more nodes, but its probability distribution becomes more and more like the classical spatial probability, as expected. In particular for large-$v$, the quantum and classical probabilities are similar and are large near the outer turning point where the classical velocity is low.
They also have large amplitudes near the inner turning point, but this amplitude is rather narrow because the Morse potential drops off strongly to the right of this turning point; in contrast, to the left of the outer turning point, the potential decreases more slowly, so the large amplitudes persist over longer ranges near this turning point.
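The Morse level structure described above is easy to tabulate. In the sketch below (a minimal illustration assuming NumPy; energies are measured in units of the harmonic quantum $\hbar\sqrt{k/\mu}$, and the $D_e$ value is an arbitrary placeholder), the level energies, the shrinking spacings, and the highest bound level $n_{\rm max}$ are computed directly from the formulas given in this section.

```python
import numpy as np

De = 10.0                                   # dissociation energy, illustrative
n = np.arange(0, 25)
E = (n + 0.5) - (n + 0.5)**2 / (4.0 * De)   # Morse E_n, harmonic quantum = 1
spacing = E[1:] - E[:-1]                    # equals 1 - (n+1)/(2 De)

n_max = int(np.floor(2.0 * De - 1.0))       # from 1 - (n_max+1)/(2 De) = 0
print("n_max =", n_max)                     # 19 for De = 10
print("first spacings:", spacing[:4])       # 0.95, 0.90, 0.85, 0.80
print("E at n_max =", E[n_max], "vs De =", De)   # approaches De from below
```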
Learning Objectives In this Chapter, you will learn about the following things: 1. Characteristics of Born-Oppenheimer energy surfaces, and how to find local minima, transition states, intrinsic reaction paths, and intersection seams on them. 2. The harmonic normal modes of vibration extracted from the mass-weighted Hessian matrix, and how symmetry can be used to simplify the problem. Born-Oppenheimer energy surfaces (or the empirical functions often used to represent them) possess important critical points that detail the properties of stable molecular structures, transition states, intersection seams, and reaction paths, all of which play central roles in the theoretical description of chemical reactions and molecular properties. In this Chapter, you will learn about these special points on the surfaces, how to find them, and what to do with them once you know them. • 3.1: Strategies for Geometry Optimization and Finding Transition States The extension of the harmonic and Morse vibrational models to polyatomic molecules requires that the multidimensional energy surface be analyzed in a manner that allows one to approximate the molecule’s motions in terms of many nearly independent vibrations. In this Section, we will explore the tools that one uses to carry out such an analysis of the surface, but first it is important to describe how one locates the minimum-energy and transition-state geometries on such surfaces. • 3.2: Normal Modes of Vibration Having seen how one can use information about the gradients and Hessians on a Born-Oppenheimer surface to locate geometries corresponding to stable species and transition states, let us now move on to see how this same data is used to treat vibrations on this surface. • 3.3: Intrinsic Reaction Paths There is a special path connecting reactants, transition states, and products that is especially useful to characterize in terms of energy surface gradients and Hessians. This is the Intrinsic Reaction Path (IRP). The general procedure to construct an IRP is outlined in this module. 03: Characteristics of Energy Surfaces The extension of the harmonic and Morse vibrational models to polyatomic molecules requires that the multidimensional energy surface be analyzed in a manner that allows one to approximate the molecule’s motions in terms of many nearly independent vibrations. In this Section, we will explore the tools that one uses to carry out such an analysis of the surface, but first it is important to describe how one locates the minimum-energy and transition-state geometries on such surfaces. Finding Local Minima Many strategies that attempt to locate minima on molecular potential energy landscapes begin by approximating the potential energy $V$ for geometries (collectively denoted in terms of $3N$ Cartesian coordinates $\{q_j\}$) in a Taylor series expansion about some “starting point” geometry (i.e., the current molecular geometry in an iterative process or a geometry that you guessed as a reasonable one for the minimum or transition state that you are seeking): $V (q_k) = V(0) + \sum_k \left(\dfrac{\partial V}{\partial q_k}\right) q_k + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k \, + \, ... \label{3.1.1}$
Here, • $V(0)$ is the energy at the current geometry, • $\dfrac{\partial{V}}{\partial{q_k}} = g_k$ is the gradient of the energy along the $q_k$ coordinate, • $H_{j,k} = \dfrac{\partial^2{V}}{\partial{q_j}\partial{q_k}}$ is the second-derivative or Hessian matrix, and • $q_k$ is the length of the “step” to be taken along this Cartesian direction. An example of an energy surface in only two dimensions is given in Figure 3.1 where various special aspects are illustrated. For example, minima corresponding to stable molecular structures, transition states (first order saddle points) connecting such minima, and higher order saddle points are displayed. If the only knowledge that is available is $V(0)$ and the gradient components (e.g., computation of the second derivatives is usually much more computationally taxing than is evaluation of the gradient, so one is often forced to work without knowing the Hessian matrix elements), the linear approximation $V(q_k) = V(0) + \sum_k g_k \,q_k \label{3.1.2}$ suggests that one should choose “step” elements $q_k$ that are opposite in sign from that of the corresponding gradient elements $g_k = \dfrac{\partial{V}}{\partial{q_k}}$ if one wishes to move “downhill” toward a minimum. The magnitude of the step elements is usually kept small in order to remain within the “trust radius” within which the linear approximation to $V$ is valid to some predetermined desired precision (i.e., one wants to assure that $\sum_k g_k q_k$ is not too large). When second derivative data is available, there are different approaches to predicting what step $\{q_k\}$ to take in search of a minimum, and it is within such Hessian-based strategies that the concept of stepping along $3N-6$ independent modes arises. We first write the quadratic Taylor expansion $V (q_k) = V(0) + \sum_k g_k q_k + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k\label{3.1.3}$ in matrix-vector notation $V(\textbf{q}) = V(0) + \textbf{q}^{\textbf{T}} \cdot \textbf{g} + \dfrac{1}{2} \textbf{q}^{\textbf{T}} \cdot \textbf{H} \cdot \textbf{q} \label{3.1.4}$ with the elements $\{q_k\}$ collected into the column vector $\textbf{q}$ whose transpose is denoted $\textbf{q}^{\textbf{T}}$. Introducing the unitary matrix $\textbf{U}$ that diagonalizes the symmetric $\textbf{H}$ matrix, the above equation becomes $V(\textbf{q}) = V(0) + \textbf{q}^{\textbf{T}} \textbf{U} \, \textbf{U}^{\textbf{T}} \textbf{g} + \dfrac{1}{2} \textbf{q}^{\textbf{T}} \textbf{U} \, \textbf{U}^{\textbf{T}} \textbf{H} \textbf{U}\, \textbf{U}^{\textbf{T}} \textbf{q}. \label{3.1.5}$ Because $\textbf{U}^{\textbf{T}}\textbf{H}\textbf{U}$ is diagonal, we have $(\textbf{U}^{\textbf{T}}\textbf{H}\textbf{U})_{k,l} = \delta_{k,l} \lambda_k \label{3.1.6}$ where $\lambda_k$ are the eigenvalues of the Hessian matrix. For non-linear molecules, $3N-6$ of these eigenvalues will be non-zero; for linear molecules, $3N-5$ will be non-zero. The 5 or 6 zero eigenvalues of $\textbf{H}$ have eigenvectors that describe translation and rotation of the entire molecule; they are zero because the energy surface $V$ does not change if the molecule is rotated or translated. It can be difficult to properly identify the 5 or 6 translation and rotation eigenvalues of the Hessian because numerical precision issues often cause them to occur as very small positive or negative eigenvalues.
If the molecule being studied actually does possess internal (i.e., vibrational) eigenvalues that are very small (e.g., the torsional motion of the methyl group in ethane has a very small energy barrier as a result of which the energy is very weakly dependent on this coordinate), one has to be careful to properly identify the translation-rotation and internal eigenvalues. By examining the eigenvectors corresponding to all of the low Hessian eigenvalues, one can identify and thus separate the former from the latter. In the remainder of this discussion, I will assume that the rotations and translations have been properly identified and the strategies I discuss will refer to utilizing the remaining $3N-5$ or $3N-6$ eigenvalues and eigenvectors to carry out a series of geometry “steps” designed to locate energy minima and transition states. The eigenvectors of $\textbf{H}$ form the columns of the array $\textbf{U}$ that brings $\textbf{H}$ to diagonal form: $\sum_{l} H_{k,l} U_{l,m} = \lambda_m U_{k,m} \label{3.1.7}$ Therefore, if we define $Q_m = \sum_k U^T_{m,k} q_k \label{3.1.8a}$ and $G_m = \sum_k U^T_{m,k} g_k \label{3.1.8b}$ to be the components of the step $\{q_k\}$ and of the gradient along the $m^{th}$ eigenvector of $\textbf{H}$, the quadratic expansion of $V$ can be written in terms of steps along the $3N-5$ or $3N-6$ $\{Q_m\}$ directions that correspond to non-zero Hessian eigenvalues: $V (q_k) = V(0) + \sum_m G_m Q_m + \dfrac{1}{2} \sum_m Q_m \lambda_m Q_m.\label{3.1.9}$ The advantage to transforming the gradient, step, and Hessian to the eigenmode basis is that each such mode (labeled $m$) appears in an independent uncoupled form in the expansion of $V$. This allows us to take steps along each of the $Q_m$ directions in an independent manner with each step designed to lower the potential energy when we are searching for minima (strategies for finding a transition state will be discussed below). For each eigenmode direction, one can ask for what size step $Q$ would the quantity $GQ + \dfrac{1}{2} \lambda Q^2$ be a minimum. Differentiating this quadratic form with respect to $Q$ and setting the result equal to zero gives $Q_m = - \dfrac{G_m}{\lambda_m} \label{3.1.10}$ that is, one should take a step opposite the gradient but with a magnitude given by the gradient divided by the eigenvalue of the Hessian matrix. If the current molecular geometry is one that has all positive $\lambda_m$ values, this indicates that one may be “close” to a minimum on the energy surface (because all $\lambda_m$ are positive at minima). In such a case, the step $Q_m = - G_m/\lambda_m$ is opposed to the gradient along all $3N-5$ or $3N-6$ directions, much like the gradient-based strategy discussed earlier suggested. The energy change that is expected to occur if the step $\{Q_m\}$ is taken can be computed by substituting $Q_m = - G_m/\lambda_m$ into the quadratic equation for $V$: $V(\text{after step}) = V(0) + \sum_m G_m \bigg(- \dfrac{G_m}{\lambda_m}\bigg) + \dfrac{1}{2} \sum_m \lambda_m \bigg(- \dfrac{G_m}{\lambda_m}\bigg)^2 \label{3.1.11a}$ $= V(0) - \dfrac{1}{2} \sum_m \lambda_m \bigg(- \dfrac{G_m}{\lambda_m}\bigg)^2. \label{3.1.11b}$ This clearly suggests that the step will lead “downhill” in energy along each eigenmode as long as all of the $\lambda_m$ values are positive.
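A compact sketch of this Hessian-based stepping is given below (an illustration assuming NumPy; the function name and small-eigenvalue threshold are ours, and translations/rotations are assumed already projected out). For positive $\lambda_m$ the step is exactly the Newton step $-G_m/\lambda_m$; writing it as $-G_m/|\lambda_m|$ keeps the step opposed to the gradient even along a negative-curvature mode, as recommended below for minimum searches, and modes with tiny $|\lambda_m|$ fall back to a small steepest-descent step.

```python
import numpy as np

def minimum_search_step(grad, hess, eps=1e-4):
    """One eigenmode step toward a minimum: Q_m = -G_m/|lambda_m|."""
    lam, U = np.linalg.eigh(hess)        # eigenvalues lambda_m, columns of U
    G = U.T @ grad                       # gradient components along eigenmodes
    Q = np.where(np.abs(lam) > eps, -G / np.abs(lam), -0.1 * G)
    return U @ Q                         # step transformed back to Cartesians

# test on a purely quadratic surface V = g0.q + (1/2) q.H.q starting at q = 0;
# a single step should land exactly on the minimum q* = -H^{-1} g0
H = np.diag([0.5, 2.0, 10.0])
g0 = np.array([1.0, -1.0, 2.0])
step = minimum_search_step(g0, H)
print(step, -np.linalg.solve(H, g0))     # identical vectors
```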
For example, if one were to begin with a good estimate for the equilibrium geometries of ethylene and propene, one could place these two molecules at a distance $R_0$ longer than the expected inter-fragment equilibrium distance $R_{\rm vdW}$ in the van der Waals complex formed when they interact. Because both fragments are near their own equilibrium geometries and at a distance $R_0$ at which long-range attractive forces will act to draw them together, a strategy such as outlined above could be employed to locate the van der Waals minimum on their energy surface. This minimum is depicted qualitatively in Figure 3.1a. Beginning at $R_0$, one would find that $3N-6 = 39$ of the eigenvalues of the Hessian matrix are non-zero, where $N = 15$ is the total number of atoms in the ethylene-propene complex. Of these 39 non-zero eigenvalues, six will have eigenvectors describing radial and angular displacements of the two fragments relative to one another; the remaining 33 will describe internal vibrations of the complex. The eigenvalues belonging to the inter-fragment radial and angular displacements may be positive or negative (because you made no special attempt to orient the molecules at optimal angles and you may not have guessed the optimal equilibrium inter-fragment distance very well), so it would probably be wisest to begin the energy-minimization process by using gradient information to step downhill in energy until one reaches a geometry $R_1$ at which all 39 of the Hessian matrix eigenvalues are positive. From that point on, steps determined by both the gradient and Hessian (i.e., $Q_m = - G_m/\lambda_m$) can be used unless one encounters a geometry at which one of the eigenvalues $\lambda_m$ is very small, in which case the step $Q_m = - G_m/\lambda_m$ along this eigenmode could be unrealistically large. In this case, it would be better not to take $Q_m = - G_m/\lambda_m$ for the step along this particular direction but to take a small step in the direction opposite to the gradient to improve the chances of moving downhill. Such small-eigenvalue issues could arise, for example, if the torsion angle of propene’s methyl group happened, during the sequence of geometry steps, to move into a region where eclipsed rather than staggered geometries are accessed. Near eclipsed geometries, the Hessian eigenvalue describing twisting of the methyl group is negative; near staggered geometries, it is positive. Whenever one or more of the $\lambda_m$ are negative at the current geometry, one is in a region of the energy surface that is not sufficiently close to a minimum to blindly follow the prescription $Q_m = - G_m/\lambda_m$ along all modes. If only one $\lambda_m$ is negative, one anticipates being near a transition state (at which all gradient components vanish and all but one $\lambda_m$ are positive, with one $\lambda_n$ negative). In such a case, the above analysis suggests taking a step $Q_m = - G_m/\lambda_m$ along all of the modes having positive $\lambda_m$, but taking a step of opposite direction (e.g., $Q_n = + G_n/\lambda_n$, unless $\lambda_n$ is very small, in which case a small step opposite $G_n$ is best) along the direction having negative $\lambda_n$ if one is attempting to move toward a minimum. This is what I recommended in the preceding paragraph when an eclipsed geometry (which is a transition state for rotation of the methyl group) is encountered if one is seeking an energy minimum.
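The guarded step logic just described, for use when seeking a minimum, can be sketched as follows (again a hedged illustration, not a production optimizer; the thresholds small and fallback are arbitrary choices): along modes with a healthy positive eigenvalue one takes the Newton step, while along modes whose eigenvalue is negative or near zero one falls back to a small step opposite the gradient.

```python
import numpy as np

def guarded_min_step(g, H, small=1e-3, fallback=0.05, zero_tol=1e-6):
    """Minimum-search step: -G/lambda along well-behaved positive-curvature
    modes; a small step opposite the gradient along modes whose eigenvalue
    is negative or too close to zero for -G/lambda to be trusted."""
    lam, U = np.linalg.eigh(H)
    G = U.T @ g
    Q = np.zeros_like(G)
    for m in range(len(G)):
        if abs(lam[m]) < zero_tol:
            continue                          # translation/rotation: no step
        if lam[m] > small:
            Q[m] = -G[m] / lam[m]             # trusted Newton step
        else:
            Q[m] = -np.sign(G[m]) * fallback  # small downhill guard step
    return U @ Q                              # back to Cartesian components
```

For a transition-state search, discussed next, one would instead keep the step $Q_n = -G_n/\lambda_n$ along the single negative-curvature mode, since that step moves uphill toward the saddle point.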
Finding Transition States On the other hand, if one is in a region where one Hessian eigenvalue is negative (and the rest are positive) and if one is seeking to find a transition state, then taking steps $Q_m = - G_m/\lambda_m$ along all of the modes having positive eigenvalues and taking $Q_n = - G_n/\lambda_n$ along the mode having negative eigenvalue is appropriate. The steps $Q_m = - G_m/\lambda_m$ will act to keep the energy near its minimum along all but one direction, and the step $Q_n = - G_n/\lambda_n$ will move the system uphill in energy along the direction having negative curvature, exactly as one desires when “walking” uphill in a streambed toward a mountain pass. However, even the procedure just outlined for finding a transition state can produce misleading results unless some degree of chemical intuition is used. Let me give an example to illustrate this point. Let’s assume that one wants to begin near the geometry of the van der Waals complex involving ethylene and propene and to then locate the transition state on the reaction path leading to the [2+2] cyclo-addition product methyl-cyclobutane, as also shown in Figure 3.1a. Consider employing either of two strategies to begin the “walk” leading from the van der Waals complex to the desired transition state (TS): 1. One could find the lowest (non-translation or non-rotation) Hessian eigenvalue and take a small step uphill along this direction to begin a streambed walk that might lead to the TS. Using the smallest Hessian eigenvalue to identify a direction to explore makes sense because it is along this direction that the energy surface rises least abruptly (at least near the geometry of the reactants). 2. One could move the ethylene radially a bit (say 0.2 Å) closer to the propene to generate an initial geometry to begin the TS search. This makes sense because one knows the reaction must lead to inter-fragment carbon-carbon distances that are much shorter in the methyl-cyclobutane product than in the van der Waals complex. The first strategy suggested above will likely fail because the series of steps generated by walking uphill along the lowest Hessian eigenmode will produce a path leading from eclipsed to staggered orientation of propene’s methyl group. Indeed, this path leads to a TS, but it is not the [2+2] cyclo-addition TS that we want. The take-home lesson here is that uphill streambed walks beginning at a minimum on the reactants’ potential energy surface may or may not lead to the desired TS. Such walks are not foolish to attempt, but one should examine the nature of the eigenmode being followed to judge whether displacements along this mode make chemical sense. Clearly, only rotating the methyl group is not a good way to move from ethylene and propene to methyl-cyclobutane. The second strategy suggested above might succeed, but it would probably still need to be refined. For example, if the displacement of the ethylene toward the propene were too small, one would not have distorted the system enough to move it into a region where the energy surface has negative curvature along the reaction path, as it must have as one approaches the TS. So, if the Hessian eigenvalues whose eigenvectors possess substantial inter-fragment radial displacements are all positive, one has probably not moved the two fragments close enough together.
Probably the best way to then proceed would be to move the two fragments even closer (or, to move them along a linear synchronous path[1] connecting the reactants and products) until one finds a geometry at which a negative Hessian eigenvalue’s eigenmode has substantial components along what appears to be reasonable for the desired reaction path (i.e., substantial displacements leading to shorter inter-fragment carbon-carbon distances). Once one has found such a geometry, one can use the strategies detailed earlier (e.g., $Q_m = - G_m/\lambda_m$) to then walk uphill along one mode while minimizing along the other modes to move toward the TS. If successful, such a process will lead to the TS, at which all components of the gradient vanish and all but one eigenvalue of the Hessian is positive. The take-home lesson of the example is that it is wise to try to first find a geometry close enough to the TS to cause the Hessian to have a negative eigenvalue whose eigenvector has substantial character along directions that make chemical sense for the reaction path. In either a series of steps toward a minimum or toward a TS, once a step has been suggested within the eigenmode basis, one needs to express that step in terms of the original Cartesian coordinates $q_k$ so that these Cartesian values can be altered within the software program to effect the predicted step. Given values for the $3N-5$ or $3N-6$ step components $Q_m$ (n.b., the step components $Q_m$ along the 5 or 6 modes having zero Hessian eigenvalues can be taken to be zero because they would simply translate or rotate the molecule), one must compute the {$q_k$}. To do so, we use the relationship $Q_m = \sum_k U^T_{m,k} q_k\label{3.1.12}$ and write its inverse (using the unitary nature of the $\textbf{U}$ matrix): $q_k = \sum_m U_{k,m} Q_m \label{3.1.13}$ to compute the desired Cartesian step components. In using the Hessian-based approaches outlined above, one has to take special care when one or more of the Hessian eigenvalues is small. This often happens when 1. one has a molecule containing “soft modes” (i.e., degrees of freedom along which the energy varies little), or 2. as one moves from a region of negative curvature into a region of positive curvature (or vice versa); in such cases, the curvature must pass through or near zero. For these situations, the expression $Q_m = - G_m/\lambda_m$ can produce a very large step along the mode having small curvature. Care must be taken not to allow such artificially large steps to be taken. Energy Surface Intersections I should note that there are other important regions of potential energy surfaces that one must be able to locate and characterize. Above, we focused on local minima and transition states. Later in this Chapter, and again in Chapter 8, we will discuss how to follow so-called reaction paths that connect these two kinds of stationary points using the type of gradient and Hessian information that we introduced earlier in this Chapter. It is sometimes important to find geometries at which two Born-Oppenheimer energy surfaces $V_1(\text{q})$ and $V_2(\text{q})$ intersect because such regions often serve as efficient funnels for trajectories or wave packets evolving on one surface to undergo so-called non-adiabatic transitions to the other surface. Let’s spend a few minutes thinking about under what circumstances such surfaces can indeed intersect, because students often hear that surfaces do not intersect but, instead, undergo avoided crossings.
To understand the issue, let us assume that we have two wave functions $\Phi_1$ and $\Phi_2$, both of which depend on $3N-6$ coordinates $\{q\}$. These two functions are not assumed to be exact eigenfunctions of the Hamiltonian $H$, but likely are chosen to approximate such eigenfunctions. To find the improved functions $\Psi_1$ and $\Psi_2$ that more accurately represent the eigenstates, one usually forms linear combinations of $\Phi_1$ and $\Phi_2$, $\Psi_K = C_{K,1} \Phi_1 + C_{K,2} \Phi_2 \label{3.1.14}$ from which a 2x2 matrix eigenvalue problem arises: $\left|\begin{array}{cc} H_{1,1}-E & H_{1,2}\\ H_{2,1} & H_{2,2}-E \end{array}\right|=0$ This quadratic equation has two solutions $2E_\pm = (H_{1,1} + H_{2,2}) \pm \sqrt{(H_{1,1}-H_{2,2})^2 + 4H_{1,2}^2}$ These two solutions can be equal (i.e., the two state energies can cross) only if the square-root factor vanishes. Because this factor is a sum of two squares (each of which is non-negative), this can only happen if two identities hold simultaneously (i.e., at the same geometry): $H_{1,1} = H_{2,2} \label{3.1.15a}$ and $H_{1,2} = 0. \label{3.1.15b}$ The main point then is that in the $3N-6$ dimensional space, the two states will generally not have equal energy. However, in a space of dimension two lower (because there are two conditions that must simultaneously be obeyed: $H_{1,1} = H_{2,2}$ and $H_{1,2} = 0$), their energies may be equal. They do not have to be equal, but it is possible that they are. It is based upon such an analysis that one usually says that potential energy surfaces in $3N-6$ dimensions may undergo intersections in spaces of dimension $3N-8$. If the two states are of different symmetry (e.g., one is a singlet and the other a triplet), the off-diagonal element $H_{1,2}$ vanishes automatically, so only one other condition is needed to realize crossing. So, we say that two states of different symmetry can cross in a space of dimension $3N-7$. For a triatomic molecule with $3N-6 = 3$ internal degrees of freedom, this means that surfaces of the same symmetry can cross in a space of dimension 1 (i.e., along a line) while those of different symmetry can cross in a space of dimension 2 (i.e., in a plane). An example of such a surface intersection is shown in Figure 3.1c. Consider first the reaction of Al ($3s^2 3p^1$; $^2P$) with $H_2$ ($\sigma_g^2$; $^1\Sigma_g^+$) to form $AlH_2$ ($^2A_1$) as if it were to occur in $C_{2v}$ symmetry; the Al atom’s occupied 3p orbital can be directed in any of three ways. 1. If it is directed toward the midpoint of the H-H bond, it produces an electronic state of $^2A_1$ symmetry. 2. If it is directed out of the plane of the $AlH_2$, it gives a state of $^2B_1$ symmetry, and 3. if it is directed parallel to the H-H bond, it generates a state of $^2B_2$ symmetry. The $^2A_1$ state is, as shown in the upper left of Figure 3.1c, repulsive as the Al atom’s 3s and 3p orbitals begin to overlap with the hydrogen molecule’s $\sigma_g$ orbital at large $R$-values. The $^2B_2$ state, in which the occupied 3p orbital is directed sideways parallel to the H-H bond, leads to a shallow van der Waals well at long $R$ but also moves to higher energy at shorter $R$-values. The ground state of the $AlH_2$ molecule has its five valence orbitals occupied as follows: 1. two electrons occupy a bonding Al-H orbital of $a_1$ symmetry, 2. two electrons occupy a bonding Al-H orbital of $b_2$ symmetry, and 3. the remaining electron occupies a non-bonding orbital of $sp^2$ character localized on the Al atom and having $a_1$ symmetry.
This $a_1^2 b_2^2 a_1^1$ orbital occupancy of the $AlH_2$ molecule’s ground state does not correlate directly with any of the three degenerate configurations of the ground state of $Al + H_2$, which are $a_1^2 a_1^2 a_1^1$, $a_1^2 a_1^2 b_1^1$, and $a_1^2 a_1^2 b_2^1$ as explained earlier. It is this lack of direct configuration correlation that generates the reaction barrier shown in Figure 3.1c. Let us now return to the issue of finding the lower-dimensional ($3N-8$ or $3N-7$) space in which two surfaces cross, assuming one has available information about the gradients and Hessians of both of these energy surfaces $V_1$ and $V_2$. There are two components to characterizing the intersection space within which $V_1 = V_2$: 1. one has to first locate one geometry $\textbf{q}_0$ lying within this space, and then 2. one has to sample nearby geometries (e.g., that might have lower total energy) lying within this subspace where $V_1 = V_2$. To locate a geometry at which the difference function $F = [V_1 - V_2]^2$ vanishes, one can employ conventional functional minimization methods, such as those detailed earlier when discussing how to find energy minima, to locate where $F = 0$; the function whose minimum one now seeks is the squared difference of the two potential energy surfaces. Once one such geometry $\textbf{q}_0$ has been located, one subsequently tries to follow the seam (i.e., for a triatomic molecule, this is the one-dimensional line of crossing; for larger molecules, it is a $3N-8$ dimensional space) within which the function $F$ remains zero. Professor David Yarkony has developed efficient routines for characterizing such subspaces (D. R. Yarkony, Acc. Chem. Res. 31, 511-518 (1998)). The basic idea is to parameterize steps away from $\textbf{q}_0$ in a manner that constrains such steps to have no component along either the gradient of $(H_{1,1} - H_{2,2})$ or along the gradient of $H_{1,2}$. Because $V_1 = V_2$ requires having both $H_{1,1} = H_{2,2}$ and $H_{1,2} = 0$, taking steps obeying these two constraints allows one to remain within the subspace where $H_{1,1} = H_{2,2}$ and $H_{1,2} = 0$ are simultaneously obeyed. Of course, it is a formidable task to map out the entire $3N-8$ or $3N-7$ dimensional space within which the two surfaces intersect, and this is essentially never done. Instead, it is common to try to find, for example, the point within this subspace at which the two surfaces have their lowest energy. An example of such a point is labeled $R_{\rm MECP}$ in Figure 3.1c; it would be of special interest when studying reactions taking place on the lower-energy surface that have to access the surface-crossing seam to evolve onto the upper surface. The energy at $R_{\rm MECP}$ reflects the lowest energy needed to access this surface crossing. Such intersection-seam location procedures are becoming more commonly employed, but are still under very active development, so I will refer the reader to Prof. Yarkony’s paper cited above for further guidance. For now, it should suffice to say that locating such surface intersections is an important ingredient when one is interested in studying, for example, photochemical reactions in which the reactants and products may move from one electronic surface to another, or thermal reactions that require the system to evolve onto an excited state through a surface crossing.
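To make the first ingredient (locating one point with $V_1 = V_2$) concrete, here is a minimal sketch on a hypothetical two-coordinate diabatic model. The functions H11, H22, H12 and every numerical value are invented for illustration; real applications use ab initio matrix elements, and following the seam itself requires the gradient-projection machinery of the Yarkony approach rather than this naive minimization of $F$.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical diabatic model in two coordinates q = (q1, q2).
def H11(q): return (q[0] - 1.0) ** 2 + 0.5 * q[1] ** 2
def H22(q): return (q[0] + 1.0) ** 2 + 0.5 * q[1] ** 2 + 0.3
def H12(q): return 0.1 * q[1]

def adiabatic(q):
    """Lower and upper adiabatic energies from the 2x2 eigenvalue problem."""
    avg = 0.5 * (H11(q) + H22(q))
    half_gap = 0.5 * np.sqrt((H11(q) - H22(q)) ** 2 + 4.0 * H12(q) ** 2)
    return avg - half_gap, avg + half_gap

def F(q):
    """Squared surface difference; F = 0 exactly on the intersection seam."""
    V1, V2 = adiabatic(q)
    return (V1 - V2) ** 2

res = minimize(F, x0=np.array([0.4, 0.3]))
q0 = res.x
print(q0, F(q0))   # a geometry q0 at which V1 ~= V2
```

In this toy model the seam requires both $H_{1,1} = H_{2,2}$ (which fixes $q_1$) and $H_{1,2} = 0$ (which fixes $q_2 = 0$), mirroring the two simultaneous conditions derived above.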
Endnotes 1. This is a series of geometries $R_x$ defined through a linear interpolation (using a parameter $0 < x < 1$) between the $3N$ Cartesian coordinates $R_{\rm reactants}$ belonging to the equilibrium geometry of the reactants and the corresponding coordinates $R_{\rm products}$ of the products: $R_x = R_{\rm reactants}\, x + (1-x)\, R_{\rm products}$ (so that $x = 1$ reproduces the reactants and $x = 0$ the products).
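Written as a one-line function (a sketch; the array names are illustrative):

```python
import numpy as np

def lst_geometry(R_reactants, R_products, x):
    """Linear synchronous transit geometry for 0 < x < 1, following the
    interpolation formula above (x = 1 gives the reactants, x = 0 the
    products)."""
    return x * np.asarray(R_reactants) + (1.0 - x) * np.asarray(R_products)
```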
Having seen how one can use information about the gradients and Hessians on a Born-Oppenheimer surface to locate geometries corresponding to stable species and transition states, let us now move on to see how this same data is used to treat vibrations on this surface. For a polyatomic molecule whose electronic energy depends on the $3N$ Cartesian coordinates of its $N$ atoms, the potential energy $V$ can be expressed (approximately) in terms of a Taylor series expansion about any of the local minima. Of course, different local minima (i.e., different isomers) will have different values for the equilibrium coordinates and for the derivatives of the energy with respect to these coordinates. The Taylor series expansion of the electronic energy is written as: $V (q_k) = V(0) + \sum_k \left(\dfrac{\partial V}{\partial q_k}\right) q_k + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k \, + \, ...$ Here, • $V(0)$ is the energy at the current geometry, • $\dfrac{\partial{V}}{\partial{q_k}} = g_k$ is the gradient of the energy along the $q_k$ coordinate, • $H_{j,k} = \dfrac{\partial^2{V}}{\partial{q_j}\partial{q_k}}$ is the second-derivative or Hessian matrix, and • $q_k$ is the length of the “step” to be taken along this Cartesian direction. If the geometry corresponds to a minimum or transition state, the gradient terms will all vanish, and the Hessian matrix will possess $3N - 5$ (for linear species) or $3N -6$ (for non-linear molecules) positive eigenvalues and 5 or 6 zero eigenvalues (corresponding to 3 translational and 2 or 3 rotational motions of the molecule) for a minimum, and one negative eigenvalue and $3N-6$ or $3N-7$ positive eigenvalues for a transition state. The Newton Equations of Motion for Vibration The Kinetic and Potential Energy Matrices Truncating the Taylor series at the quadratic terms (assuming these terms dominate because only small displacements from the equilibrium geometry are of interest), one has the so-called harmonic potential: $V(q_k) = V(0) + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k \label{3.2.1}$ The classical mechanical equations of motion for the $3N$ coordinates $\{q_k\}$ can be written in terms of the above potential energy and the following kinetic energy function: $T = \dfrac{1}{2} \sum_j m_j \left(\dfrac{dq_j}{dt}\right)^2, \label{3.2.2}$ where $\dfrac{dq_j}{dt}$ is the time rate of change of the coordinate $q_j$ and $m_j$ is the mass of the atom on which the $j^{th}$ Cartesian coordinate resides. The Newton equations thus obtained are: $m_j\dfrac{d^2 q_j}{dt^2}=-\sum_k H_{j,k}q_k$ where the force along the $j^{th}$ coordinate is given by minus the derivative of the potential $V$ along this coordinate $\dfrac{\partial{V}}{\partial{q_j}}= \sum_k H_{j,k} q_k \label{3.2.3}$ within the harmonic approximation. These classical equations can more compactly be expressed in terms of the time evolution of a set of so-called mass-weighted Cartesian coordinates defined as: $x_j = q_j \sqrt{m_j} \label{3.2.4}$ in terms of which the above Newton equations become $\dfrac{d^2 x_j}{dt^2}=-\sum_k H'_{j,k}x_k$ and the mass-weighted Hessian matrix elements are $H'_{j,k} = \dfrac{H_{j,k}}{ \sqrt{m_jm_k} }. \label{3.2.5}$
The Harmonic Vibrational Energies and Normal Mode Eigenvectors Assuming that the $x_j$ undergo some form of sinusoidal time evolution: $x_j(t) = x_j (0) \cos(\omega t),$ and substituting this into the Newton equations produces a matrix eigenvalue equation: $\omega^2 x_j = \sum_k H'_{j,k} x_k$ in which the eigenvalues are the squares of the so-called normal mode vibrational frequencies and the eigenvectors give the amplitudes of motion along each of the $3N$ mass-weighted Cartesian coordinates that belong to each mode. Hence, to perform a normal-mode analysis of a molecule, one forms the mass-weighted Hessian matrix and then finds the $3N-5$ or $3N-6$ non-zero eigenvalues $\omega_j^2$ as well as the corresponding eigenvectors $x_k^{(j)}$. It is useful to note that, if this same kind of analysis were performed at a geometry corresponding to a transition state, $3N-6$ or $3N-7$ of the $\omega_j^2$ values would be positive, but one of them would be negative. The eigenvector corresponding to the negative eigenvalue of the mass-weighted Hessian points along a very important direction that we will discuss later; it is the direction of the so-called intrinsic reaction coordinate (IRC). When reporting the eigenvalues $\omega_j^2$ at such a transition-state geometry, one often says that there is one imaginary frequency because one of the $\omega_j^2$ values is negative; this value of $\omega_j^2$ characterizes the curvature of the energy surface along the IRC at the transition state. The positive vibrational eigenvalues of transition-state geometries are used, as discussed in Chapter 8, to evaluate statistical mechanics partition functions for reaction rates, and the negative $\omega_j^2$ value plays a role in determining the extent of tunneling through the barrier on the reaction surface. Within this harmonic treatment of vibrational motion, the total vibrational energy of the molecule is given as $E(\nu_1, \nu_2, \cdots \nu_{3N-5\text{ or }6}) =\sum_{j=1}^{3N-5\text{ or }6}\hbar\omega_j\Big(\nu_j+\dfrac{1}{2}\Big)$ a sum of $3N-5$ or $3N-6$ independent contributions, one for each normal mode. The corresponding total vibrational wave function $\Psi = \prod_{j=1}^{3N-5\text{ or }6} \psi_{\nu_j} (x^{(j)})$ is a product of $3N-5$ or $3N-6$ harmonic oscillator functions $\psi_{\nu_j} (x^{(j)})$, one for each normal mode. The energy gap between one vibrational level and another in which one of the $\nu_j$ quantum numbers is increased by unity (i.e., for fundamental vibrational transitions) is $\Delta E_{\nu_j \rightarrow \nu_j + 1} = \hbar \omega_j$ The harmonic model thus predicts that the "fundamental" ($\nu=0 \rightarrow \nu = 1$) and "hot band" ($\nu=1 \rightarrow \nu = 2$) transitions should occur at the same energy, and the overtone ($\nu=0 \rightarrow \nu=2$) transitions should occur at exactly twice this energy. One might wonder whether mass-weighted Cartesian coordinates would be better or more appropriate to use when locating minima and transition states on Born-Oppenheimer energy surfaces. Although mass-weighted coordinates are indeed essential for evaluating harmonic vibrational frequencies and, as we will see later, for tracing out so-called intrinsic reaction paths, their use produces the same minima and transition states as one finds using coordinates that are not mass-weighted.
This is because the condition that all components of the gradient $\dfrac{\partial V}{\partial q_j}=0$ of the energy surface vanish at a minimum or at a transition state will automatically be obeyed when expressed in terms of mass-weighted coordinates since $\dfrac{\partial V}{\partial q_j}=\dfrac{\partial V}{\partial x_j}\dfrac{\partial x_j}{\partial q_j}=\dfrac{\partial V}{\partial x_j}\sqrt{m_j}$ Notice that this means the geometries of all local minima and transition states on a given Born-Oppenheimer surface will be exactly the same regardless of what isotopes appear in the molecule. For example, for the reactions $H-CN \rightarrow H-NC$ or $D-CN \rightarrow D-NC$ $H_2C=O \rightarrow H_2 + CO$ or $HDC=O \rightarrow HD + CO$ or $D_2C=O \rightarrow D_2 + CO$ the geometries of the reactants, products, and transition states (for each of the distinct reactions) will not depend on the identity of the hydrogen isotopes. However, the harmonic vibrational frequencies will depend on the isotopes because the mass-weighted Hessian differs from the Hessian expressed in terms of non-mass-weighted coordinates. The Use of Symmetry Symmetry Adapted Modes It is often possible to simplify the calculation of the normal mode harmonic frequencies and eigenvectors by exploiting molecular point group symmetry. For molecules that possess symmetry at a particular stable geometry, the electronic potential $V(q_j)$ displays symmetry with respect to displacements of symmetry equivalent Cartesian coordinates. For example, consider the water molecule at its $C_{2v}$ equilibrium geometry as illustrated in Figure 3.2. A very small movement of the $H_2O$ molecule's left $H$ atom in the positive $x$ direction ($\Delta x_L$) produces the same change in the potential $V$ as a correspondingly small displacement of the right $H$ atom in the negative $x$ direction ($-\Delta x_R$). Similarly, movement of the left H in the positive $y$ direction ($\Delta y_L$) produces an energy change identical to movement of the right H in the positive $y$ direction ($\Delta y_R$). The equivalence of the pairs of Cartesian coordinate displacements is a result of the fact that the displacement vectors are connected by the point group operations of the $C_{2v}$ group. In particular, reflection of $\Delta x_L$ through the yz plane (the two planes are depicted in Figure 3.3) produces $-\Delta x_R$, and reflection of $\Delta y_L$ through this same plane yields $\Delta y_R$. More generally, it is possible to combine sets of Cartesian displacement coordinates {$q_k$} into so-called symmetry adapted coordinates {$Q_{\Gamma_j}$}, where the index $\Gamma$ labels the irreducible representation in the appropriate point group and $j$ labels the particular combination of that symmetry (i.e., there may be more than one kind of displacement that has a given symmetry $\Gamma$). These symmetry-adapted coordinates can be formed by applying the point group projection operators (that are treated in detail in Chapter 4) to the individual Cartesian displacement coordinates. To illustrate, again consider the $H_2O$ molecule in the coordinate system described above.
The $3N = 9$ mass-weighted Cartesian displacement coordinates ($X_L, Y_L, Z_L, X_O, Y_O, Z_O, X_R, Y_R, Z_R$) can be symmetry adapted by applying the following four projection operators: $P_{a_1} = 1 + \sigma_{yz} + \sigma_{xy} + C_2$ $P_{b_1} = 1 + \sigma_{yz} - \sigma_{xy} - C_2$ $P_{b_2} = 1 - \sigma_{yz} + \sigma_{xy} - C_2$ $P_{a_2} = 1 - \sigma_{yz} - \sigma_{xy} + C_2$ to each of the 9 original coordinates (the symbol $\sigma$ denotes reflection through a plane and $C_2$ means rotation about the molecule’s $C_2$ axis). Of course, one will not obtain 9 x 4 = 36 independent symmetry adapted coordinates in this manner; many identical combinations will arise, and only 9 will be independent. The independent combinations of $a_1$ symmetry (normalized to produce vectors of unit length) are $Q_{a_1,1} = \dfrac{1}{\sqrt{2}} [X_L- X_R]$ $Q_{a_1,2} = \dfrac{1}{\sqrt{2}} [Y_L + Y_R]$ $Q_{a_1,3} = [Y_O]$ Those of $b_2$ symmetry are $Q_{b_2,1} = \dfrac{1}{\sqrt{2}} [X_L+ X_R]$ $Q_{b_2,2} = \dfrac{1}{\sqrt{2}} [Y_L - Y_R]$ $Q_{b_2,3} = [X_O],$ and the combinations $Q_{b_1,1} = \dfrac{1}{\sqrt{2}} [Z_L + Z_R]$ $Q_{b_1,2} = [Z_O]$ are of $b_1$ symmetry, whereas $Q_{a_2,1} = \dfrac{1}{\sqrt{2}} [Z_L - Z_R]$ is of $a_2$ symmetry. Point Group Symmetry of the Harmonic Potential These nine symmetry-adapted coordinates $Q_{\Gamma_j}$ are expressed as unitary transformations of the original mass-weighted Cartesian coordinates: $Q_{\Gamma_j} = \sum_k C_{\Gamma_{j,k}} X_k$ These transformation coefficients $\{C_{\Gamma_{j,k}}\}$ can be used to carry out a unitary transformation of the 9x9 mass-weighted Hessian matrix. In so doing, we need only form blocks $H_{\Gamma_{j,l}} = \sum_{k,k'} C_{\Gamma_{j,k}} \dfrac{H_{k,k'}}{\sqrt{m_k m_{k'}}} C_{\Gamma_{l,k'}}$ within which the symmetries of the two modes are identical. The off-diagonal elements $H_{\Gamma_j\Gamma'_l}= \sum_{k,k'} C_{\Gamma_{j,k}} \dfrac{H_{k,k'}}{\sqrt{m_k m_{k'}}} C_{\Gamma'_{l,k'}}$ vanish because the potential $V(q_j)$ (and the full vibrational Hamiltonian $H = T + V$) commutes with the $C_{2v}$ point group symmetry operations. As a result, the 9x9 mass-weighted Hessian eigenvalue problem can be subdivided into two 3x3 matrix problems (of $a_1$ and $b_2$ symmetry), one 2x2 matrix of $b_1$ symmetry and one 1x1 matrix of $a_2$ symmetry. For example, the $a_1$ symmetry block is formed as follows: $\left[\begin{array}{ccc} \dfrac{1}{\sqrt{2}} & -\dfrac{1}{\sqrt{2}} & 0 \\ \dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{array}\right] \left[\begin{array}{ccc} m_H^{-1/2}\dfrac{\partial^2 V}{\partial x_L^2}m_H^{-1/2} & m_H^{-1/2}\dfrac{\partial^2 V}{\partial x_L \partial x_R}m_H^{-1/2} & m_H^{-1/2}\dfrac{\partial^2 V}{\partial x_L \partial y_O}m_O^{-1/2}\\ m_H^{-1/2}\dfrac{\partial^2 V}{\partial x_R \partial x_L}m_H^{-1/2} & m_H^{-1/2}\dfrac{\partial^2 V}{\partial x_R^2}m_H^{-1/2} & m_H^{-1/2}\dfrac{\partial^2 V}{\partial x_R \partial y_O}m_O^{-1/2}\\ m_O^{-1/2}\dfrac{\partial^2 V}{\partial y_O \partial x_L}m_H^{-1/2} & m_O^{-1/2}\dfrac{\partial^2 V}{\partial y_O \partial x_R}m_H^{-1/2} & m_O^{-1/2}\dfrac{\partial^2 V}{\partial y_O^2}m_O^{-1/2} \end{array}\right] \left[\begin{array}{ccc} \dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} & 0 \\ -\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{array}\right]$ The $b_2$, $b_1$ and $a_2$ blocks are formed in a similar manner.
The eigenvalues of each of these blocks provide the squares of the harmonic vibrational frequencies, and the eigenvectors provide the coefficients $\{C_{\Gamma_{j,k}}\}$ of the $j^{\rm th}$ normal mode of symmetry $\Gamma$ in terms of the mass-weighted Cartesian coordinates {$X_k$}. The relationship $X_k = q_k \sqrt{m_k}$ can then be used to express these coefficients in terms of the original Cartesian coordinates {$q_k$}. Regardless of whether symmetry is used to block diagonalize the mass-weighted Hessian, six (for non-linear molecules) or five (for linear species) of the eigenvalues will equal zero. The eigenvectors belonging to these zero eigenvalues describe the 3 translations and 2 or 3 rotations of the molecule. For example, when expressed in terms of the original (i.e., non-mass-weighted) Cartesian coordinates, $\dfrac{1}{\sqrt{3}}[x_L + x_R + x_O]$ $\dfrac{1}{\sqrt{3}}[y_L + y_R + y_O]$ $\dfrac{1}{\sqrt{3}}[z_L +z_R + z_O]$ are three translation eigenvectors of $b_2$, $a_1$ and $b_1$ symmetry, and $\dfrac{1}{\sqrt{2}}(z_L-z_R)$ is a rotation (about the $y$-axis in Figure 3.2) of $a_2$ symmetry. This rotation vector can be generated by applying the $a_2$ projection operator to $z_L$ or to $z_R$. The other two rotations are of $b_1$ and $b_2$ symmetry and involve spinning of the molecule about the $x$- and $z$-axes of Figure 3.2, respectively. So, of the 9 Cartesian displacements, 3 are of $a_1$ symmetry, 3 of $b_2$, 2 of $b_1$, and 1 of $a_2$. Of these, there are three translations ($a_1$, $b_2$, and $b_1$) and three rotations ($b_2$, $b_1$, and $a_2$). This leaves two vibrations of $a_1$ and one of $b_2$ symmetry. For the $H_2O$ example treated here, the three non-zero eigenvalues of the mass-weighted Hessian are therefore of $a_1$, $b_2$, and $a_1$ symmetry. They describe the symmetric and asymmetric stretch vibrations and the bending mode, respectively, as illustrated in Figure 3.4. The method of vibrational analysis presented here can be applied to any polyatomic molecule. One forms the mass-weighted Hessian and then computes its non-zero eigenvalues, which provide the squares of the normal modes’ harmonic vibrational frequencies. Point group symmetry can be used to block diagonalize this Hessian and to label the vibrational modes according to symmetry, as we show in Figure 3.5 for the $CF_4$ molecule in tetrahedral symmetry.
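The whole procedure is compact enough to sketch in a few lines of numpy. This is a hedged illustration: the Hessian, masses, and units are left abstract, no symmetry blocking is attempted, and a signed square root is used so that a transition state's negative $\omega^2$ shows up as a negative (i.e., imaginary) frequency. The toy example at the end also demonstrates the isotope dependence noted above: the same Hessian with different masses gives different frequencies.

```python
import numpy as np

def normal_modes(H, masses, zero_tol=1e-8):
    """Mass-weight the Hessian (H'_{jk} = H_{jk}/sqrt(m_j m_k)),
    diagonalize it, and discard the near-zero translation/rotation
    eigenvalues.  Returns signed frequencies (negative value means an
    imaginary frequency) and the corresponding eigenvectors."""
    m = np.asarray(masses, dtype=float)
    Hp = np.asarray(H) / np.sqrt(np.outer(m, m))
    w2, vecs = np.linalg.eigh(Hp)
    keep = np.abs(w2) > zero_tol
    omega = np.sign(w2[keep]) * np.sqrt(np.abs(w2[keep]))
    return omega, vecs[:, keep]

# Toy 2-coordinate example (hypothetical Hessian, arbitrary units):
# the same H with "H" vs. "D" masses gives isotope-dependent frequencies,
# while the stationary-point geometry itself would be unchanged.
H = np.array([[0.6, -0.1],
              [-0.1, 0.6]])
for masses in ([1.0, 1.0], [2.0, 2.0]):
    print(masses, normal_modes(H, masses)[0])
```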
As we will discuss in more detail in Chapter 8, there is a special path connecting reactants, transition states, and products that is especially useful to characterize in terms of energy surface gradients and Hessians. This is the Intrinsic Reaction Path (IRP). To construct an IRP, one proceeds as follows: Step 1: Once a transition state (TS) has been located, its mass-weighted Hessian matrix is formed and diagonalized. The normalized eigenvector $\textbf{s}$ belonging to the one negative eigenvalue of this matrix defines the initial direction(s) leading from the TS to either reactants or products (a unit vector along $\textbf{s}$ is one direction; a unit vector along $-\textbf{s}$ is the second). Step 2: One takes a small step (i.e., a displacement of the Cartesian coordinates {$q_j$} of the nuclei having a total length $L$) along the direction $\textbf{s}$, and this direction is taken to define the first step along the intrinsic reaction coordinate (IRC) that will eventually lead to the IRP. When $\textbf{s}$ is expressed in terms of its components {$s_j$} along the Cartesian coordinates {$q_j$} $\textbf{s} = \sum_j s_j q_j \label{3.3.1}$ the displacements $\{\delta{q_j}\}$ can be expressed as $\delta{q_j} = L s_j.\label{3.3.1b}$ Step 3: One re-evaluates the gradient and Hessian at this new geometry (call it {$\textbf{q}^0$}), forms the mass-weighted Hessian at {$\textbf{q}^0$}, and identifies the eigenmode having negative curvature. The gradient along this direction will no longer vanish (as it did at the TS), and the normalized eigenvector of this mode is now used to define the continuation of the direction $\textbf{s}$ along the IRC. Step 4: One then minimizes the energy along the $3N-6$ or $3N-7$ coordinates transverse to $\textbf{s}$. This can be done by expressing the energy in terms of the corresponding eigenmodes $\{Q_k\}$ of the mass-weighted Hessian $V=\sum_{k=1}^{3N-6\text{ or }3N-7}[g_k\delta Q_k+\frac{1}{2}\omega_k^2\delta Q_k^2]$ where $g_k$ is the component of the gradient of the energy along the eigenmode $Q_k$ and $\omega_k^2$ is the eigenvalue of the mass-weighted Hessian for this mode. This energy minimization transverse to $\textbf{s}$ is designed to constrain the “walk” downhill from the TS at (or near) the minimum in the streambed along which the IRC is evolving. After this energy minimization step, the Cartesian coordinates will be defined as {$\textbf{q}^1$}. Step 5: At {$\textbf{q}^1$}, one re-evaluates the gradient and Hessian, and proceeds as in Step 3 above. This process is continued, generating a series of geometries {$\textbf{q}^0, \textbf{q}^1 , \textbf{q}^2 , … \textbf{q}^K$} that define points on the IRC. At each of these geometries, the gradient will have its largest component (excluding at the TS, where all components vanish) along the direction of $\textbf{s}$ because the energy minimization process will cause its components transverse to $\textbf{s}$ to (at least approximately) vanish. Step 6: Eventually, a geometry will be reached at which all $3N-5$ or $3N-6$ of the eigenvalues of the mass-weighted Hessian are positive; here, one is evolving into a region where the curvature along the IRC is positive, which suggests one may be approaching a minimum. However, at this point, there will be one eigenmode (the one whose eigenvalue just changed from negative to positive) along which the gradient has its largest component. This eigenmode will continue to define the IRC’s direction $\textbf{s}$.
Step 7: One continues by taking a small step along $\textbf{s}$ downhill in energy, after which the energy is minimized along the modes transverse to $\textbf{s}$. This process is continued until the magnitude of the gradient (which always points along $\textbf{s}$) becomes small enough that one can claim to have reached a minimum. Step 8: The process described above will lead from the TS to either the reactants or products, and will define one branch of the IRP. To find the other branch, one returns to Step 2 and begins the entire process again but now taking the first small step in the opposite direction (i.e., along the negative of the eigenvector of the mass-weighted Hessian at the TS). Proceeding along this path, one generates the other branch of the IRP; the series of geometries leading from reactants, through the TS, to products defines the full IRP. At any point along this path, the direction $\textbf{s}$ is the direction of the IRC. This process for generating the IRP can be viewed as generating a series of Cartesian coordinates {$\textbf{q}^k$} lying along a continuous path {$\textbf{q}(s)$} that is the solution of the following differential equation $\frac{dq_j(s)}{ds}=-\frac{g_j(s)}{|g(s)|}$ where $q_j$ is the $j^{th}$ Cartesian coordinate, $g_j$ is the energy gradient along this Cartesian coordinate, $|g|$ is the norm of the total energy gradient, and $s$ is the continuous parameter describing movement along the IRC. The initial condition appropriate to solving this differential equation is that the initial step (i.e., at $s = 0$) is to be directed along (for one branch of the IRP) or opposed to (for the other branch) the eigenmode of the mass-weighted Hessian having negative eigenvalue at the TS.
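A minimal sketch of this differential equation in action on a hypothetical two-dimensional model surface follows. It uses a fixed step length and no transverse-mode minimization, so it only caricatures production IRC algorithms; the model potential $V = (x^2-1)^2 + y^2$ (a saddle at the origin, minima near $(\pm 1, 0)$) is invented for illustration.

```python
import numpy as np

def follow_irc(grad, q_ts, s_dir, step=0.01, max_steps=2000, gtol=1e-6):
    """Integrate dq/ds = -g/|g| starting with a small displacement
    along +s_dir (one branch) away from the transition-state geometry,
    where the gradient itself vanishes."""
    q = q_ts + step * s_dir              # initial kick off the TS
    path = [q.copy()]
    for _ in range(max_steps):
        g = grad(q)
        gnorm = np.linalg.norm(g)
        if gnorm < gtol:                 # gradient vanished: a minimum
            break
        q = q - step * g / gnorm         # fixed-length steepest-descent step
        path.append(q.copy())
    return np.array(path)

def grad(q):
    """Gradient of the toy surface V = (x^2 - 1)^2 + y^2."""
    x, y = q
    return np.array([4 * x * (x**2 - 1), 2 * y])

path = follow_irc(grad, q_ts=np.array([0.0, 0.0]),
                  s_dir=np.array([1.0, 0.0]))
print(path[-1])   # ends up dithering very near the minimum at (1, 0)
```

The fixed step means the walk oscillates within about one step length of the minimum rather than converging exactly; real implementations shrink the step and minimize transverse modes, as described in Steps 4 and 7.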
Learning Objectives In this Chapter, you should have learned about the following things: • Rayleigh-Schrödinger perturbation theory with several example applications. • The variational method for optimizing trial wave functions. • The use of point group symmetry. • Time dependent perturbation theory, primarily for sinusoidal perturbations characteristic of electromagnetic radiation. For all but the most elementary problems, many of which serve as fundamental approximations to the real behavior of molecules (e.g., the hydrogenic atom, the harmonic oscillator, the rigid rotor, particles in boxes), the Schrödinger equation cannot be solved exactly. It is therefore extremely useful to have tools that allow one to approach these insoluble problems by solving other Schrödinger equations that can be trusted to reasonably describe the solutions of the impossible problem. The approaches discussed in this Chapter are the most important tools of this type. Perturbation Theory In most practical applications of quantum mechanics to molecular problems, one is faced with the harsh reality that the Schrödinger equation pertinent to the problem at hand cannot be solved exactly. To illustrate how desperate this situation is, I note that neither of the following two Schrödinger equations has ever been solved exactly (meaning analytically): 1. The Schrödinger equation for the two electrons moving about the He nucleus: $\left[- \dfrac{\hbar^2}{2m_e} \nabla_1^2 - \dfrac{\hbar^2}{2m_e} \nabla_2^2 - \dfrac{2e^2}{r_1} - \dfrac{2e^2}{r_2} + \dfrac{e^2}{r_{1,2}}\right] \psi= E \psi,$ 2.
The Schrödinger equation for the two electrons moving in an $H_2$ molecule even if the locations of the two nuclei (labeled A and B) are held clamped as in the Born-Oppenheimer approximation: $\left[- \dfrac{\hbar^2}{2m_e} \nabla_1^2 - \dfrac{\hbar^2}{2m_e} \nabla_2^2 - \dfrac{e^2}{r_{1,A}} - \dfrac{e^2}{r_{2,A}} - \dfrac{e^2}{r_{1,B}} - \dfrac{e^2}{r_{2,B}} + \dfrac{e^2}{r_{1,2}}\right] \psi = E \psi$ These two problems are examples of what is called the “three-body problem,” meaning solving for the behavior of three bodies moving relative to one another. Motions of the sun, earth, and moon (even neglecting all the other planets and their moons) constitute another three-body problem. None of these problems, even the classical Newton’s equation for the sun, earth, and moon, have ever been solved exactly. So, what does one do when faced with trying to study real molecules using quantum mechanics? There are two very powerful tools that one can use to “sneak up” on the solutions to the desired equations by first solving an easier model problem and then using the solutions to this problem to approximate the solutions to the real Schrödinger problem of interest. For example, to solve for the energies and wave functions of a boron atom, one could use hydrogenic $1s$ orbitals (but with $Z = 5$) and hydrogenic $2s$ and $2p$ orbitals (with $Z = 3$ to account for the screening of the full nuclear charge by the two $1s$ electrons) as a starting point. To solve for the vibrational energies of a diatomic molecule whose energy vs. bond length $E(R)$ is known, one could use the Morse oscillator wave functions and energies as starting points. But, once one has decided on a reasonable model to use, how does one connect this model to the real system of interest? Perturbation theory and the variational method are the two tools that are most commonly used for this purpose, and it is these two tools that are covered in this Chapter. The perturbation theory approach provides a set of analytical expressions for generating a sequence of approximations to the true energy $E$ and true wave function $\psi$. This set of equations is generated, for the most commonly employed perturbation method, Rayleigh-Schrödinger perturbation theory (RSPT), as follows. First, one decomposes the true Hamiltonian $H$ into a so-called zeroth-order part $H^{(0)}$ (this is the Hamiltonian of the model problem used to represent the real system) and the difference ($H-H^{(0)}$), which is called the perturbation and usually denoted $V$: $H = H^{(0)} + V.$ It is common to associate with the perturbation $V$ a strength parameter $\lambda$, which could, for example, be associated with the strength of the electric field when the perturbation results from the interaction of the molecule of interest with an electric field. In such cases, it is usual to write the decomposition of $H$ as $H = H^{(0)} + \lambda V$ A fundamental assumption of perturbation theory is that the wave functions and energies for the full Hamiltonian $H$ can be expanded in a Taylor series involving various powers of the perturbation parameter $\lambda$. Hence, one writes the energy $E$ and the wave function $\psi$ as zeroth-, first-, second-, etc., order pieces which form the unknowns in this method: $E = E^{(0)} + E^{(1)} + E^{(2)} + E^{(3)} + \cdots$ $\psi = \psi^{(0)} + \psi^{(1)} + \psi^{(2)} + \psi^{(3)} + \cdots$ with $E^{(n)}$ and $\psi^{(n)}$ being proportional to $\lambda^n$. Next, one substitutes these expansions of $E$, $H$ and $\psi$ into $H\psi = E\psi$.
This produces one equation whose right and left hand sides both contain terms of various “powers” in the perturbation parameter $\lambda$. For example, terms of the form $E^{(1)}\psi^{(2)}$, $V \psi^{(2)}$, and $E^{(0)} \psi^{(3)}$ are all of third power (also called third order). Next, one equates the terms on the left and right sides that are of the same order. This produces a set of equations, each containing all the terms of a given order. The zeroth-, first-, and second-order such equations are given below: $H^{(0)} \psi^{(0)} = E^{(0)} \psi^{(0)},$ $H^{(0)} \psi^{(1)} + V \psi^{(0)} = E^{(0)} \psi^{(1)} + E^{(1)} \psi^{(0)}$ $H^{(0)} \psi^{(2)} + V \psi^{(1)} = E^{(0)} \psi^{(2)} + E^{(1)} \psi^{(1)} + E^{(2)} \psi^{(0)}.$ It is straightforward to see that the nth order expression in this sequence of equations can be written as $H^{(0)} \psi^{(n)} + V \psi^{(n-1)} = E^{(0)} \psi^{(n)} + E^{(1)} \psi^{(n-1)} + E^{(2)} \psi^{(n-2)} + E^{(3)} \psi^{(n-3)} + \cdots + E^{(n)} \psi^{(0)}.$ The zeroth-order equation simply instructs us to solve the model Schrödinger equation to obtain the zeroth-order wave function $\psi^{(0)}$ and its zeroth-order energy $E^{(0)}$. Since $H^{(0)}$ is a Hermitian operator, it has a complete set of such eigenfunctions, which we label $\{\psi^{(0)}_k\}$ and $\{E^{(0)}_k\}$. One of these states will be the one we are interested in studying (e.g., we might be interested in the effect of an external electric field on the $2s$ state of the hydrogen atom), but, as will become clear soon, we actually have to find the full set of $\{\psi^{(0)}_k\}$ and $\{E^{(0)}_k\}$ (e.g., we need to also find the $1s, 2p, 3s, 3p, 3d,$ etc. states of the hydrogen atom when studying the electric field’s effect on the $2s$ state). In the first-order equation, the unknowns are $\psi^{(1)}$ and $E^{(1)}$ (recall that $V$ is assumed to be known because it is the difference between the Hamiltonian one wants to solve and the model Hamiltonian $H^{(0)}$). To solve the first-order and higher-order equations, one expands each of the corrections to the wave function $\psi$ of interest in terms of the complete set of wave functions of the zeroth-order problem $\{\psi^{(0)}_J\}$. As noted earlier, this means that one must solve $H^{(0)} \psi^{(0)}_J = E^{(0)}_J \psi^{(0)}_J$ not just for the zeroth-order state one is interested in (denoted $\psi^{(0)}$ above), but for all of the other zeroth-order states $\{\psi^{(0)}_J\}$. For example, expanding $\psi^{(1)}$ in this manner gives: $\psi^{(1)}=\sum_J C_J^1\psi_J^{(0)}$ Now, the unknowns in the first-order equation become $E^{(1)}$ and the expansion coefficients. To solve $H^{(0)} \psi^{(1)} + V \psi^{(0)} = E^{(0)} \psi^{(1)} + E^{(1)} \psi^{(0)}$ one proceeds as follows: 1. First, one multiplies this equation on the left by the complex conjugate of the zeroth-order function for the state of interest $\psi^{(0)}$ and integrates over the variables on which the wave functions depend. This gives $\langle\psi^{(0)}|H^{(0)}|\psi^{(1)}\rangle + \langle\psi^{(0)}|V|\psi^{(0)}\rangle = E^{(0)} \langle\psi^{(0)}|\psi^{(1)}\rangle + E^{(1)} \langle\psi^{(0)}|\psi^{(0)}\rangle.$ The first and third terms cancel one another because $H^{(0)} \psi^{(0)} = E^{(0)} \psi^{(0)}$, and the fourth term reduces to $E^{(1)}$ because $\psi^{(0)}$ is assumed to be normalized. This allows the above equation to be written as $E^{(1)} = \langle\psi^{(0)} | V | \psi^{(0)}\rangle$ which is the RSPT expression for $E^{(1)}$.
It says the first-order correction to the energy $E^{(0)}$ of the unperturbed state can be evaluated by computing the average value of the perturbation with respect to the unperturbed wave function $\psi^{(0)}$. 2. Returning to the first-order equation and multiplying on the left by the complex conjugate of one of the other zeroth-order functions $\psi_J^{(0)}$ gives $\langle \psi_J^{(0)}|H^{(0)}|\psi^{(1)}\rangle + \langle\psi_J^{(0)} |V|\psi^{(0)}\rangle = E^{(0)} \langle\psi_J^{(0)}|\psi^{(1)}\rangle + E^{(1)} \langle \psi_J^{(0)}|\psi^{(0)}\rangle.$ Using $H^{(0)}\psi_J^{(0)} = E^{(0)}_J\psi_J^{(0)}$, the first term reduces to $E^{(0)}_J\langle \psi_J^{(0)}|\psi^{(1)}\rangle$, and the fourth term vanishes because $\psi_J^{(0)}$ is orthogonal to $\psi^{(0)}$ because these two functions are different eigenfunctions of $H^{(0)}$. This reduces the equation to $E^{(0)}_J\langle \psi_J^{(0)}|\psi^{(1)}\rangle + \langle\psi_J^{(0)} |V|\psi^{(0)}\rangle = E^{(0)} \langle\psi_J^{(0)} |\psi^{(1)}\rangle$ The unknown in this expression is $\langle\psi_J^{(0)} |\psi^{(1)}\rangle$, which is the expansion coefficient for the expansion of $\psi^{(1)}$ in terms of the zeroth-order functions $\{\psi_J^{(0)}\}$. In RSPT, one assumes that the only contribution of $\psi^{(0)}$ to the full wave function $\psi$ occurs in zeroth-order; this is referred to as assuming intermediate normalization of $\psi$. In other words, $\langle\psi^{(0)}|\psi\rangle = 1$ because $\langle\psi^{(0)}|\psi^{(0)}\rangle = 1$ and $\langle\psi^{(0)}|\psi^{(n)}\rangle = 0$ for $n = 1, 2, 3, \cdots$. So, the coefficients $\langle\psi_J^{(0)} |\psi^{(1)}\rangle$ appearing in the above equation are all one needs to describe $\psi^{(1)}$. 3. If the state of interest $\psi^{(0)}$ is non-degenerate in zeroth-order (i.e., none of the other $E^{(0)}_J$ is equal to $E^{(0)}$), this equation can be solved for the needed expansion coefficients $\langle \psi_J^{(0)}|\psi^{(1)}\rangle=\frac{\langle\psi_J^{(0)} |V|\psi^{(0)}\rangle}{E^{(0)}-E^{(0)}_J}$ which allow the first-order wave function to be written as $\psi^{(1)}=\sum_J\psi_J^{(0)}\frac{\langle\psi_J^{(0)} |V|\psi^{(0)}\rangle}{E^{(0)}-E^{(0)}_J}$ where the index $J$ is restricted such that $\psi_J^{(0)}$ is not equal to the state $\psi^{(0)}$ you are interested in. 4. However, if one or more of the zeroth-order energies $E^{(0)}_J$ is equal to $E^{(0)}$, an additional step needs to be taken before the above expression for $\psi^{(1)}$ can be used. If one were to try to solve $E^{(0)}_J\langle \psi_J^{(0)}|\psi^{(1)}\rangle + \langle\psi_J^{(0)} |V|\psi^{(0)}\rangle = E^{(0)} \langle \psi_J^{(0)}|\psi^{(1)}\rangle$ without taking this extra step, the $\langle\psi_J^{(0)} |\psi^{(1)}\rangle$ values for those states with $E^{(0)}_J = E^{(0)}$ could not be determined because the first and third terms would cancel and the equation would read $\langle \psi_J^{(0)}|V|\psi^{(0)}\rangle = 0$. The way RSPT deals with this paradox is to realize that, within a set of $N$ degenerate states, any $N$ orthogonal combinations of these states will also be degenerate. So RSPT assumes that one has already chosen the degenerate sets of zeroth-order states to make $\langle \psi_J^{(0)}|V| \psi_K^{(0)}\rangle = 0$ for $K \ne J$. This extra step is carried out in practice by forming the matrix representation of $V$ in the original set of degenerate zeroth-order states and then finding the unitary transformation among these states that diagonalizes this matrix. These transformed states are then what one uses as $\psi_J^{(0)}$ and $\psi^{(0)}$ in the RSPT expressions.
This means that the paradoxical result $\langle\psi_J^{(0)}|V|\psi^{(0)}\rangle = 0$ is indeed obeyed by this choice of states, so one does not need to determine the coefficients $\langle\psi_J^{(0)} |\psi^{(1)}\rangle$ for $\psi_J^{(0)}$ belonging to the degenerate zeroth-order states (i.e., these coefficients can be assumed to be zero). The bottom line is that the expression $\psi^{(1)}=\sum_J\psi_J^{(0)}\frac{\langle\psi_J^{(0)} |V|\psi^{(0)}\rangle}{E^{(0)}-E^{(0)}_J}$ remains valid, but the summation index $J$ is now restricted to exclude any members of the zeroth-order states that are degenerate with $\psi^{(0)}$. To obtain the expression for the second-order correction to the energy of the state of interest, one returns to $H^{(0)} \psi^{(2)} + V \psi^{(1)} = E^{(0)} \psi^{(2)} + E^{(1)} \psi^{(1)} + E^{(2)} \psi^{(0)}$ Multiplying on the left by the complex conjugate of $\psi^{(0)}$ and integrating yields $\langle\psi^{(0)}|H^{(0)}|\psi^{(2)}\rangle + \langle\psi^{(0)}|V|\psi^{(1)}\rangle = E^{(0)} \langle\psi^{(0)}|\psi^{(2)}\rangle + E^{(1)} \langle\psi^{(0)}|\psi^{(1)}\rangle + E^{(2)} \langle\psi^{(0)}|\psi^{(0)}\rangle.$ The intermediate normalization condition causes the fourth term to vanish, and the first and third terms cancel one another. Recalling the fact that $\psi^{(0)}$ is normalized, the above equation reduces to $\langle\psi^{(0)}|V|\psi^{(1)}\rangle = E^{(2)}.$ Substituting the expression obtained earlier for $\psi^{(1)}$ allows $E^{(2)}$ to be written as $E^{(2)}=\sum_J \frac{|\langle\psi_J^{(0)} |V|\psi^{(0)}\rangle|^2}{E^{(0)}-E^{(0)}_J}$ where, as before, the sum over $J$ is limited to states that are not degenerate with $\psi^{(0)}$ in zeroth-order. These are the fundamental working equations of Rayleigh-Schrödinger perturbation theory. They instruct us to compute the average value of the perturbation taken over a probability distribution equal to $\psi^{(0)}{}^*\psi^{(0)}$ to obtain the first-order correction to the energy $E^{(1)}$. They also tell us how to compute the first-order correction to the wave function and the second-order energy in terms of integrals coupling $\psi^{(0)}$ to other zeroth-order states and denominators involving the energy differences $E^{(0)}-E^{(0)}_J$. An analogous approach is used to solve the second- and higher-order equations. For example, the equation for the nth order energy and wave functions reads: $H^{(0)} \psi^{(n)} + V \psi^{(n-1)} = E^{(0)} \psi^{(n)} + E^{(1)} \psi^{(n-1)} + E^{(2)} \psi^{(n-2)} + E^{(3)} \psi^{(n-3)} + \cdots + E^{(n)} \psi^{(0)}$ The nth order energy is obtained by multiplying this equation on the left by $\psi^{(0)}{}^*$ and integrating over the relevant coordinates (and using the fact that $\psi^{(0)}$ is normalized and the intermediate normalization condition $\langle\psi^{(0)}|\psi^{(m)}\rangle = 0$ for all $m > 0$): $\langle\psi^{(0)}|V|\psi^{(n-1)}\rangle = E^{(n)}.$ This allows one to recursively solve for higher and higher energy corrections once the various lower-order wave functions $\psi^{(n-1)}$ are obtained.
To obtain the expansion coefficients for the $\psi^{(n)}$ expanded in terms of the zeroth-order states $\{\psi_J^{(0)}\}$, one multiplies the above $n^{th}$ order equation on the left by $\psi_J^{(0)}{}^*$ (one of the zeroth-order states not equal to the state $\psi^{(0)}$ of interest) and obtains $E^{(0)}_J\langle\psi_J^{(0)} |\psi^{(n)}\rangle + \langle\psi_J^{(0)} |V| \psi^{(n-1)}\rangle = E^{(0)} \langle\psi_J^{(0)} |\psi^{(n)}\rangle + E^{(1)} \langle \psi_J^{(0)}|\psi^{(n-1)}\rangle + E^{(2)} \langle \psi_J^{(0)}|\psi^{(n-2)}\rangle + E^{(3)} \langle\psi_J^{(0)} |\psi^{(n-3)}\rangle + \cdots + E^{(n)} \langle\psi_J^{(0)} |\psi^{(0)}\rangle.$ The last term on the right-hand side vanishes because $\psi_J^{(0)}$ and $\psi^{(0)}$ are orthogonal. The terms containing the nth order expansion coefficients $\langle\psi_J^{(0)} |\psi^{(n)}\rangle$ can be brought to the left-hand side to produce the following equation for these unknowns: $E^{(0)}_J\langle\psi_J^{(0)} |\psi^{(n)}\rangle - E^{(0)} \langle\psi_J^{(0)} |\psi^{(n)}\rangle = - \langle \psi_J^{(0)}|V| \psi^{(n-1)}\rangle + E^{(1)} \langle\psi_J^{(0)} |\psi^{(n-1)}\rangle + E^{(2)} \langle\psi_J^{(0)}|\psi^{(n-2)}\rangle + E^{(3)} \langle \psi_J^{(0)}|\psi^{(n-3)}\rangle + \cdots + E^{(n-1)} \langle \psi_J^{(0)}|\psi^{(1)}\rangle.$ As long as the zeroth-order energy $E^{(0)}_J$ is not degenerate with $E^{(0)}$ (or, the zeroth-order states have been chosen as discussed earlier to cause there to be no contribution to $\psi^{(n)}$ from such degenerate states), the above equation can be solved for the expansion coefficients $\langle \psi_J^{(0)}|\psi^{(n)}\rangle$, which then define $\psi^{(n)}$. The RSPT equations can be solved recursively to obtain even high-order energy and wave function corrections: 1. $\psi^{(0)}$, $E^{(0)}$, and $V$ are used to determine $E^{(1)}$ and $\psi^{(1)}$ as outlined above, 2. $E^{(2)}$ is determined from $\langle\psi^{(0)}|V|\psi^{(n-1)}\rangle = E^{(n)}$ with $n = 2$, and the expansion coefficients of $\psi^{(2)}$ $\{\langle\psi_J^{(0)}|\psi^{(2)}\rangle\}$ are determined from the above equation with $n = 2$, 3. $E^{(3)}$ (and higher $E^{(n)}$) are then determined from $\langle\psi^{(0)}|V|\psi^{(n-1)}\rangle = E^{(n)}$, and the expansion coefficients of $\psi^{(3)}$ $\{\langle\psi_J^{(0)}|\psi^{(3)}\rangle\}$ are determined from the above equation with $n = 3$. 4. This process can then be continued to higher and higher order. Although modern quantum mechanics uses high-order perturbation theory in some cases, much of what the student needs to know is contained in the first- and second-order results, to which I will therefore restrict our further attention. I recommend that students have in memory (their own brain, not a computer) the equations for $E^{(1)}$, $E^{(2)}$, and $\psi^{(1)}$ so they can make use of them even in qualitative applications of perturbation theory as we will discuss later in this Chapter. But, first, let’s consider an example problem that illustrates how perturbation theory is used in a more quantitative manner. Example Problem As we discussed earlier, an electron moving in a quasi-linear conjugated bond framework can be modeled as a particle in a box. An externally applied electric field of strength $\varepsilon$ interacts with the electron in a fashion that can be described by adding the perturbation $V = e\varepsilon\Big(x-\dfrac{L}{2}\Big)$ to the zeroth-order Hamiltonian. Here, $x$ is the position of the electron in the box, $e$ is the electron's charge, and $L$ is the length of the box. The perturbation potential varies in a linear fashion across the box, so it acts to pull the electron to one side of the box.
First, we will compute the first-order correction to the energy of the $n=1$ state and the first-order wave function for the $n=1$ state. In the wave function calculation, we will only compute the contribution to $\psi$ made by $\psi^{(0)}_2$ (this is just an approximation to keep things simple in this example). Let me now do all the steps needed to solve this part of the problem. Try to make sure you can do the algebra, but also make sure you understand how we are using the first-order perturbation equations. The zeroth-order wave functions and energies are given by $\psi^{(0)}_n= \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi x}{L}\right),$ and $E^{(0)}_n= \frac{\hbar^2\pi^2 n^2}{2mL^2},$ and the perturbation is $V = e\varepsilon\left(x-\frac{L}{2}\right).$ The first-order correction to the energy for the state having $n = 1$, which we denote $\psi^{(0)}$, is $E^{(1)} =\langle\psi^{(0)}|V|\psi^{(0)}\rangle =\left\langle\psi^{(0)}\left|e\varepsilon\left(x-\frac{L}{2}\right)\right|\psi^{(0)}\right\rangle$ $=\frac{2}{L}\int_0^L \sin^2\left(\frac{\pi x}{L}\right)e\varepsilon\left(x-\frac{L}{2}\right)dx$ $=\frac{2e\varepsilon}{L}\int_0^L\sin^2\left(\frac{\pi x}{L}\right)x\,dx -\frac{2e\varepsilon}{L}\frac{L}{2}\int_0^L\sin^2\left(\frac{\pi x}{L}\right)dx$ The first integral can be evaluated using the following identity with $a = \dfrac{\pi}{L}$: $\int_0^L x\sin^2(ax)\,dx=\left.\frac{x^2}{4}-\frac{x\sin(2ax)}{4a}-\frac{\cos(2ax)}{8a^2}\right|^L_0=\frac{L^2}{4}.$ The second integral can be evaluated using the substitution $\theta =\frac{\pi x}{L}$, $d\theta = \frac{\pi}{L}dx$: $\int_0^L\sin^2\left(\frac{\pi x}{L}\right)dx= \frac{L}{\pi}\int_0^\pi\sin^2\theta\, d\theta=\frac{L}{\pi}\left.\left(\frac{\theta}{2}-\frac{\sin(2\theta)}{4}\right)\right|^\pi_0=\frac{L}{\pi}\cdot\frac{\pi}{2}=\frac{L}{2}.$ Making all of these appropriate substitutions we obtain: $E^{(1)} =\frac{2e\varepsilon}{L}\left(\frac{L^2}{4}-\frac{L}{2}\frac{L}{\pi}\frac{\pi}{2}\right) = 0.$ This result, that the first-order correction to the energy vanishes, could have been foreseen. In the expression for $E^{(1)} = \langle\psi^{(0)}|V|\psi^{(0)}\rangle$, the product $\psi^{(0)}{}^*\psi^{(0)}$ is an even function under reflection of $x$ through the midpoint $x = \dfrac{L}{2}$; in fact, this is true for all of the particle-in-a-box wave functions. On the other hand, the perturbation $V = e\varepsilon\Big(x-\dfrac{L}{2}\Big)$ is an odd function under reflection through $x = \dfrac{L}{2}$. Thus, the integral $\langle\psi^{(0)}|V|\psi^{(0)}\rangle$ must vanish, as its integrand is an odd function. This simple example illustrates how one can use symmetry to tell ahead of time whether the integrals $\langle\psi^{(0)}|V|\psi^{(0)}\rangle$ and $\langle\psi_J^{(0)}|V|\psi^{(0)}\rangle$ contributing to the first-order and higher-order energies and wave functions will vanish.
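The vanishing of this first-order integral is also easy to confirm by direct numerical quadrature. The sketch below is my own check (not from the text), assuming units in which $L = 1$ and $e\varepsilon = 1$.

```python
import numpy as np

# Hedged check: E1 = (2/L) * integral of sin^2(pi x/L) * e*eps*(x - L/2) dx,
# which the symmetry argument above says is exactly zero.
L, e_eps = 1.0, 1.0
x = np.linspace(0.0, L, 200001)
integrand = (2.0/L) * np.sin(np.pi*x/L)**2 * e_eps * (x - L/2.0)
print(np.trapz(integrand, x))   # ~1e-16, i.e., zero to numerical precision
```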
The contribution to the first-order wave function made by the $n = 2$ state is given by $\psi^{(1)}=\frac{\langle\psi_2^{(0)} |V|\psi^{(0)}\rangle}{E^{(0)}-E^{(0)}_2}\psi^{(0)}_2$ $=\frac{\dfrac{2}{L}\left\langle\sin\left(\dfrac{\pi x}{L}\right)\left|e\varepsilon\left(x-\frac{L}{2}\right)\right|\sin\left(\dfrac{2 \pi x}{L}\right)\right\rangle}{\dfrac{\hbar^2\pi^2}{2mL^2}-\dfrac{\hbar^2\pi^2 2^2}{2mL^2}}$ The two integrals in the numerator involve $\int_0^L x\sin\left(\dfrac{2 \pi x}{L}\right)\sin\left(\dfrac{\pi x}{L}\right)dx$ and $\int_0^L \sin\left(\dfrac{2 \pi x}{L}\right)\sin\left(\dfrac{\pi x}{L}\right)dx.$ Using the integral identities $\int x\cos(ax)\,dx= \frac{1}{a^2}\cos(ax) + \frac{x}{a} \sin(ax)$ and $\int\cos(ax)\,dx = \frac{1}{a}\sin(ax),$ we obtain the following: $\int_0^L \sin\left(\dfrac{2 \pi x}{L}\right)\sin\left(\dfrac{\pi x}{L}\right)dx=\frac{1}{2}\left[\int_0^L \cos\left(\dfrac{\pi x}{L}\right)dx - \int_0^L \cos\left(\dfrac{3\pi x}{L}\right)dx\right]$ $=\frac{1}{2}\left[ \frac{L}{\pi}\sin\left(\dfrac{\pi x}{L}\right)\Bigg|^L_0 - \frac{L}{3\pi}\sin\left(\dfrac{3\pi x}{L}\right)\Bigg|^L_0\right] = 0$ and $\int_0^L x\sin\left(\dfrac{2 \pi x}{L}\right)\sin\left(\dfrac{\pi x}{L}\right)dx= \frac{1}{2}\left[\int_0^L x\cos\left(\dfrac{\pi x}{L}\right)dx - \int_0^L x\cos\left(\dfrac{3\pi x}{L}\right)dx\right]$ $=\frac{1}{2}\left[ \left(\frac{L^2}{\pi^2}\cos\left(\dfrac{\pi x}{L}\right)+ \frac{Lx}{\pi}\sin\left(\dfrac{\pi x}{L}\right) \right)\Bigg|^L_0 - \left(\frac{L^2}{9\pi^2}\cos\left(\dfrac{3\pi x}{L}\right)+ \frac{Lx}{3\pi}\sin\left(\dfrac{3\pi x}{L}\right)\right) \Bigg|^L_0\right]$ $=\frac{1}{2}\left[-\frac{2L^2}{\pi^2} +\frac{2L^2}{9\pi^2}\right] = \frac{L^2}{9\pi^2} -\frac{L^2}{\pi^2} = -\frac{8L^2}{9\pi^2}.$ Making all of these appropriate substitutions we obtain: $\psi^{(1)}=\frac{32mL^3e\varepsilon}{27\hbar^2\pi^4}\sqrt{\frac{2}{L}}\sin\left(\frac{2\pi x}{L}\right)$ for the first-order wave function (actually, only the $n = 2$ contribution). So, the wave function through first order (i.e., the sum of the zeroth- and first-order pieces) is $\psi^{(0)}+\psi^{(1)}=\sqrt{\frac{2}{L}}\sin\left(\frac{\pi x}{L}\right) + \frac{32mL^3e\varepsilon}{27\hbar^2\pi^4}\sqrt{\frac{2}{L}}\sin\left(\frac{2\pi x}{L}\right)$ In Figure 4.1 we show the $n = 1$ and $n = 2$ zeroth-order functions as well as the superposition function formed when the zeroth-order $n = 1$ and first-order $n = 1$ functions combine. Clearly, the external electric field acts to polarize the $n = 1$ wave function in a manner that moves its probability density toward the $x > \dfrac{L}{2}$ side of the box. The degree of polarization will depend on the strength of the applied electric field. For such a polarized superposition wave function, there should be a net dipole moment induced in the system. We can evaluate this dipole moment by computing the expectation value of the dipole moment operator: $\mu_{induced}= - e \int\psi^*\left(x-\frac{L}{2}\right)\psi\, dx$ with $\psi$ being the sum of our zeroth- and first-order wave functions. In computing this integral, we neglect the term quadratic in $\varepsilon$ because we are interested in only the term linear in $\varepsilon$, since this is what gives the dipole moment. Again, allow me to do the algebra and see if you can follow.
$\mu_{induced}= - e \int\psi^*\left(x-\frac{L}{2}\right)\psi\, dx,$ where $\psi=\psi^{(0)}+\psi^{(1)}.$ Thus $\mu_{induced}= - e \int_0^L(\psi^{(0)}+\psi^{(1)})^*\left(x-\frac{L}{2}\right)(\psi^{(0)}+\psi^{(1)}) dx$ $= - e \int_0^L \psi^{(0)}{}^*\left(x-\frac{L}{2}\right)\psi^{(0)} dx - e \int_0^L \psi^{(1)}{}^*\left(x-\frac{L}{2}\right)\psi^{(0)} dx - e \int_0^L \psi^{(0)}{}^*\left(x-\frac{L}{2}\right)\psi^{(1)} dx - e \int_0^L \psi^{(1)}{}^*\left(x-\frac{L}{2}\right)\psi^{(1)} dx$ The first integral is zero (we discussed this earlier when we used symmetry to explain why this vanishes). The fourth integral is neglected since it is proportional to $\varepsilon^2$ and we are interested in obtaining an expression for how the dipole varies linearly with $\varepsilon$. The second and third integrals are identical and can be combined to give: $\mu_{induced}= - 2 e \int_0^L \psi^{(0)}{}^*\left(x-\frac{L}{2}\right)\psi^{(1)} dx$ Substituting our earlier expressions for $\psi^{(0)}=\sqrt{\dfrac{2}{L}}\sin\left(\dfrac{\pi x}{L}\right)$ and $\psi^{(1)}=\frac{32mL^3e\varepsilon}{27\hbar^2\pi^4}\sqrt{\frac{2}{L}}\sin\left(\frac{2\pi x}{L}\right)$ we obtain: $\mu_{induced} = -2e\frac{32mL^3e\varepsilon}{27\hbar^2\pi^4} \frac{2}{L} \int_0^L \sin\left(\dfrac{\pi x}{L}\right) \left(x-\frac{L}{2}\right) \sin\left(\dfrac{2\pi x}{L}\right) dx$ These integrals are familiar from those we evaluated in computing $\psi^{(1)}$; doing them we finally obtain: $\mu_{induced}= -2e \frac{32mL^3e\varepsilon}{27\hbar^2\pi^4} \left(\frac{2}{L}\right) \left(\frac{-8L^2}{9\pi^2}\right) =\frac{mL^4 e^2\varepsilon}{\hbar^2\pi^6} \frac{2^{10}}{3^5}$ Now let's compute the polarizability, $\alpha$, of the electron in the $n=1$ state of the box, and try to understand physically why $\alpha$ should depend as it does upon the length of the box $L$. To compute the polarizability, we need to know that $\alpha = \left(\frac{\partial\mu}{\partial\varepsilon}\right)_{\varepsilon=0}.$ Using our induced moment result above, we then find $\alpha =\left(\frac{\partial\mu}{\partial\varepsilon}\right)_{\varepsilon=0} = \frac{mL^4 e^2}{\hbar^2\pi^6} \frac{2^{10}}{3^5}$ Notice that this finding suggests that the larger the box (i.e., the longer the conjugated molecule), the more polarizable the electron density. This result also suggests that the polarizability of conjugated polyenes should vary non-linearly (here, as $L^4$) with the length of the conjugated chain.

Other Examples

Let's consider a few more examples of how perturbation theory is used in chemistry, either quantitatively (i.e., to actually compute changes in energies and wave functions) or qualitatively (i.e., to interpret or anticipate how changes might alter energies or other properties).

The Stark effect

When a molecule is exposed to an electric field $\textbf{E}$, its electrons and nuclei experience a perturbation $V= \textbf{E} \cdot \Big( e\sum_n Z_n \textbf{R}_n - e \sum_i \textbf{r}_i \Big)$ where $Z_n$ is the charge of the $n^{th}$ nucleus whose position is $\textbf{R}_n$, $\textbf{r}_i$ is the position of the $i^{th}$ electron, and $e$ is the unit of charge. The effect of this perturbation on the energies is termed the Stark effect. The first-order change to the energy of this molecule is evaluated by calculating $E^{(1)}=\langle\psi|V|\psi\rangle=\textbf{E}\cdot \langle\psi|e\sum_n Z_n \textbf{R}_n-e\sum_i \textbf{r}_i|\psi\rangle$ where $\psi$ is the unperturbed wave function of the molecule (i.e., the wave function in the absence of the electric field).
The quantity inside the integral is the electric dipole operator, so this integral is the dipole moment of the molecule in the absence of the field. For species that possess no dipole moment (e.g., non-degenerate states of atoms and centro-symmetric molecules), this first-order energy vanishes. It vanishes in the two specific cases mentioned because $\psi$ is either even or odd under the inversion symmetry, so the product $\psi^* \psi$ is even, while the dipole operator is odd; the integrand is therefore odd and the integral vanishes. If one is dealing with a degenerate state of a centro-symmetric system, things are different. For example, the $2s$ and $2p$ states of the hydrogen atom are degenerate, so, to apply perturbation theory, one has to choose specific combinations that diagonalize the perturbation. This means one needs to first form the 2x2 matrix $\left(\begin{array}{cc} \langle 2s |V| 2s \rangle & \langle 2s |V| 2p_z \rangle \\ \langle 2p_z |V| 2s \rangle & \langle 2p_z |V| 2p_z \rangle \end{array}\right)$ where $z$ is taken to be the direction of the electric field. The diagonal elements of this matrix vanish due to parity symmetry, so the two eigenvalues are equal to $E^{(1)}_{\pm}=\pm\langle 2s|V| 2p_z \rangle.$ These are the two first-order energies (they are linear in $V$ and thus linear in $\textbf{E}$). So, in such degenerate cases, one can obtain linear Stark effects. The two corrected zeroth-order wave functions corresponding to these two shifted energies are $\psi^{(0)}_{\pm}=\frac{1}{\sqrt{2}}[2s\mp 2p_z]$ and correspond to orbitals polarized into or away from the electric field. The Stark effect example offers a good chance to explain a fundamental problem with applying perturbation theory. One of the basic assumptions of perturbation theory is that the unperturbed and perturbed Hamiltonians are both bounded from below (i.e., have a discrete lowest eigenvalue) and that each eigenvalue of the unperturbed Hamiltonian can be connected to a unique eigenvalue of the perturbed Hamiltonian. Considering the example just discussed, we can see that these assumptions are not met for the Stark perturbation. Consider the potential that an electron experiences within an atom or molecule close to a nucleus of charge $Z$. It is of the form (in atomic units, where energies are given in Hartrees (1 H = 27.21 eV) and distances in Bohr units (1 Bohr = 0.529 Å)) $V(r,\theta,\phi)=-\frac{Z}{r}-e\textbf{E}r\cos\theta$ where the first term is the Coulomb potential acting to attract the electron to the nucleus and the second is the electron-field potential, assuming the field is directed along the $z$-direction. In Figure 4.2 a we show this potential for a given value of the angle $\theta$. Along directions for which $\cos{\theta}$ is negative (to the right in Figure 4.2 a), this potential becomes large and positive as the distance $r$ of the electron from the nucleus increases; for bound states such as the $2s$ and $2p$ states discussed earlier, such regions are classically forbidden and the wave function decays exponentially in this direction. However, in directions along which $\cos{\theta}$ is positive, the potential is negative and strongly attractive for small $r$ (i.e., near the nucleus), then passes through a maximum (e.g., near $x = -2$ in Figure 4.2 a) at $r_{\rm max}=\sqrt{\frac{Z}{e\textbf{E}\cos\theta}}$ (obtained by setting $\partial V/\partial r = Z/r^2 - e\textbf{E}\cos\theta = 0$), where $V(r_{\rm max})=-2\sqrt{Ze\textbf{E}\cos\theta}$ (ca. -1 eV in Figure 4.2 a), and then decreases monotonically as $r$ increases.
In fact, this potential approaches $-\infty$ as $r$ approaches $\infty$, as we see in the left portion of Figure 4.2 a. The bottom line is that the total potential with the electric field present violates the assumptions on which perturbation theory is based. However, it turns out that perturbation theory can be used in such cases under certain conditions. For example, as applied to the Stark effect for the degenerate $2s$ and $2p$ levels of a hydrogenic atom (i.e., a one-electron system with nuclear charge $Z$), if the energy of the $2s$ and $2p$ states lies far below the maximum in the potential $V(r_{\rm max})$, perturbation theory can be used. We know the energies of hydrogenic ions vary with $Z$ and with the principal quantum number $n$ as $E_n(Z)=-13.6\,{\rm eV}\,\frac{Z^2}{n^2}=-\frac{Z^2}{2n^2}\,{\rm a.u.}$ So, as long as $-\frac{Z^2}{2n^2} \ll -2\sqrt{Ze\textbf{E}\cos\theta},$ the zeroth-order energy of the state will lie below the barrier on the potential surface. Because the wave function can penetrate this barrier, this state will no longer be a true bound state; it will be a metastable resonance state (recall, we studied such states in Chapter 1 where we learned about tunneling). However, if the zeroth-order energy lies far below the barrier, the extent of tunneling through the barrier will be small, so the state will have a long lifetime. In such cases, we can use perturbation theory to describe the effects of the applied electric field on the energies and wave functions of such metastable states, but we must realize that we are only allowed to do so in the limit of weak fields and for states that lie significantly below the barrier. In this case, perturbation theory describes the changes in the energy and wave function in regions of space where the zeroth-order wave function is bound, but does not describe at all the asymptotic part of the wave function where the electron is unbound. Another example of Stark effects in degenerate cases arises when considering how polar diatomic molecules' rotational energies are altered by an electric field. The zeroth-order wave functions appropriate to such cases are given by $\psi=Y_{J,M}(\theta,\phi)\chi_\nu(R)\psi_e(r|R)$ where the spherical harmonic $Y_{J,M}(\theta,\phi)$ is the rotational wave function, $\chi_\nu(R)$ is the vibrational function for level $\nu$, and $\psi_e(r|R)$ is the electronic wave function. The diagonal elements of the electric-dipole operator $\langle Y_{J,M}(\theta,\phi)\chi_\nu(R)\psi_e(r|R)|V|Y_{J,M}(\theta,\phi)\chi_\nu(R)\psi_e(r|R) \rangle$ vanish because the vibrationally averaged dipole moment, which arises as $\langle\mu\rangle = \langle \chi_\nu(R)\psi_e(r|R)| e\sum_n Z_n \textbf{R}_n-e\sum_i \textbf{r}_i |\chi_\nu(R)\psi_e(r|R) \rangle,$ is a vector quantity whose component along the electric field is $\langle \mu\rangle \cos\theta$ (again taking the field to lie along the $z$-direction). Thinking of $\cos\theta$ as $x$, so that $\sin\theta\, d\theta$ is $-dx$, the integrals $\langle Y_{J,M}(\theta,\phi)|\cos\theta|Y_{J,M}(\theta,\phi)\rangle=\int Y_{J,M}^*(\theta,\phi) \cos\theta\, Y_{J,M}(\theta,\phi) \sin\theta\, d\theta\, d\phi=\int Y_{J,M}^*(\theta,\phi)\, x\, Y_{J,M}(\theta,\phi)\, dx\, d\phi=0$ vanish because $|Y_{J,M}|^2$ is an even function of $x$ (i.e., of $\cos\theta$) while $x$ itself is odd. Because the perturbation (i.e., $\cos\theta$) has no $\phi$-dependence, matrix elements of the form $\int Y_{J,M'}^*(\theta,\phi) \cos\theta\, Y_{J,M}(\theta,\phi)\sin\theta\, d\theta\, d\phi$ with $M' \ne M$ also vanish, because the $\phi$ integration gives zero.
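These vanishing matrix elements can be verified numerically. The sketch below is my own illustration (not from the text), using SciPy's spherical harmonics: it checks that a $J' = J$ element vanishes, that an $M' \ne M$ element vanishes, and that a $J' = J + 1$ element survives.

```python
import numpy as np
from scipy.special import sph_harm

def cos_matrix_element(Jp, Mp, J, M, n_pol=2001, n_az=201):
    """Numerically integrate <Y_{Jp,Mp}| cos(theta) |Y_{J,M}> over the sphere."""
    pol = np.linspace(0.0, np.pi, n_pol)        # polar angle theta
    az = np.linspace(0.0, 2.0 * np.pi, n_az)    # azimuthal angle phi
    P, A = np.meshgrid(pol, az, indexing="ij")
    # note: scipy's sph_harm(m, n, theta, phi) takes the azimuth first
    Y1 = sph_harm(Mp, Jp, A, P)
    Y2 = sph_harm(M, J, A, P)
    f = np.conj(Y1) * np.cos(P) * Y2 * np.sin(P)
    return np.trapz(np.trapz(f, az, axis=1), pol)

print(abs(cos_matrix_element(2, 1, 2, 1)))   # J' = J (diagonal): ~0
print(abs(cos_matrix_element(2, 0, 2, 1)))   # M' != M: ~0
print(cos_matrix_element(2, 1, 1, 1).real)   # J' = J+1: non-zero (~0.447)
```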
This means that if one were to form the $(2J+1) \times (2J+1)$ matrix representation of $V$ for the $2J+1$ degenerate states $Y_{J,M}$ belonging to a given $J$, all of its elements would be zero. Thus the rotational energies of polar diatomic (or rigid linear polyatomic) molecules have no first-order Stark splittings. There will, however, be second-order Stark splittings, in which case we need to examine the terms that arise in the formula $E^{(2)}=\sum_J\frac{|\langle\psi_J^{(0)}|V|\psi^{(0)}\rangle|^2}{E^{(0)}-E^{(0)}_J}.$ For a zeroth-order state $Y_{J,M}$, only certain other zeroth-order states will have non-vanishing coupling matrix elements. These non-zero integrals are governed by $\langle Y_{J,M}|\cos\theta|Y_{J',M'}\rangle$, which can be shown to be $\langle Y_{J,M}|\cos\theta|Y_{J',M'}\rangle=\sqrt{\frac{(J+1)^2-M^2}{(2J+1)(2J+3)}}\delta_{M,M'}\text{ for }J'=J+1; \sqrt{\frac{J^2-M^2}{(2J-1)(2J+1)}}\delta_{M,M'}\text{ for }J'=J-1;$ of course, if $J = 0$, the term $J' = J-1$ does not occur. The limitation that $M$ must equal $M'$ arises, as above, because the perturbation contains no terms dependent on the variable $\phi$. The limitation that $J' = J\pm1$ comes from a combination of three conditions: 1. angular momentum coupling, which you learned about in Chapter 2, tells us that $\cos\theta$, which happens to be proportional to $Y_{1,0}(\theta,\phi)$, can couple to $Y_{J,M}$ to generate terms having $J+1$, $J$, or $J-1$ for their total angular momentum quantum number but only $M$ for their $J_z$ quantum number, 2. the $J+1$, $J$, and $J-1$ components arising from the product $\cos \theta\, Y_{J,M}$ must match $Y_{J',M'}$ for the integral not to vanish because $\langle Y_{J,M}|Y_{J',M'}\rangle = \delta_{J,J'} \delta_{M,M'}$, 3. finally, the $J = J'$ terms will vanish because of the inversion symmetry ($\cos\theta$ is odd under inversion but $|Y_{J,M}|^2$ is even). Using the fact that the perturbation is $\textbf{E}\langle \mu\rangle \cos\theta$, these two non-zero matrix elements can be used to express the second-order energy for the $J,M$ level as $E=\textbf{E}^2\langle\mu\rangle^2\left[\dfrac{\dfrac{(J+1)^2-M^2}{(2J+1)(2J+3)}}{-2hB(J+1)} + \dfrac{\dfrac{J^2-M^2}{(2J-1)(2J+1)}}{2hBJ}\right]$ where $h$ is Planck's constant and $B$ is the rotational constant for the molecule, $B=\frac{h}{8\pi^2\mu r_e^2}$ for a diatomic molecule of reduced mass $\mu$ and equilibrium bond length $r_e$ (so the zeroth-order rotational energies are $E_J = hBJ(J+1)$, and the denominators above are the differences $E_J - E_{J\pm1}$). Before moving on to another example, it is useful to point out some common threads that occur in many applications of perturbation theory and that will also be common to variational calculations that we discuss later in this Chapter. Once one has identified the specific zeroth-order state $\psi^{(0)}$ of interest, one proceeds as follows: 1. The first-order energy $E^{(1)} = \langle \psi^{(0)}|V| \psi^{(0)}\rangle$ is evaluated. In doing so, one should first make use of any symmetry (point group symmetry is treated later in this Chapter) such as inversion, angular momentum, spin, etc., to determine whether this expectation value will vanish by symmetry, in which case we don't bother to consider this matrix element any more. We used this earlier when considering $\langle 2s|\cos\theta|2s\rangle$, $\langle2p_\sigma|\cos\theta|2p_\sigma\rangle$, and $\langle Y_{J,M}|\cos\theta|Y_{J,M}\rangle$ to conclude that certain first-order energies are zero. 2. If $E^{(1)}$ vanishes (so the lowest-order effect is in second order) or if we want to examine higher-order corrections, we consider evaluating $E^{(2)}$.
Before doing so explicitly, we think about whether symmetry will limit the matrix elements $\langle\psi^{(0)}|V|\psi^{(0)}_n\rangle$ entering into the expression for $E^{(2)}$. For example, in the case just studied, we saw that only other zeroth-order states having $J' = J+1$ or $J' = J-1$ gave non-vanishing matrix elements. In addition, because $E^{(2)}$ contains energy denominators ($E^{(0)}-E^{(0)}_n$), we may choose to limit our calculation to those other zeroth-order states whose energies are close to our state of interest; this assumes that such states will contribute a dominant amount to the sum over $n$. When reading literature articles in which perturbation theory is employed, you will often encounter situations in which researchers have focused attention on zeroth-order states that are close in energy to the state of interest and that have the correct symmetry to couple strongly (i.e., have substantial $\langle \psi^{(0)}|V|\psi^{(0)}_n\rangle$) to that state.

Electron-electron Coulomb repulsion

In one of the most elementary pictures of atomic electronic structure, one uses nuclear charge screening concepts to partially account for electron-electron interactions. For example, in $1s^22s^1$ Li, one might posit a zeroth-order wave function consisting of a product $\psi=\phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3)$ in which two electrons occupy a $1s$ orbital and one electron occupies a $2s$ orbital. To find a reasonable form for the radial parts of these two orbitals, one could express each of them as a linear combination of (i) one orbital having hydrogenic $1s$ form with a nuclear charge of 3 and (ii) a second orbital of $2s$ form but with a nuclear charge of 1 (to account for the screening of the $Z = 3$ nucleus by the two inner-shell $1s$ electrons) $\phi_i(r)=C_i\chi_{1s,Z=3}(r)+D_i\chi_{2s,Z=1}(r)$ where the index $i$ labels the $1s$ and $2s$ orbitals to be determined. Next, one could determine the $C_i$ and $D_i$ expansion coefficients by requiring the $\phi_i$ to be approximate eigenfunctions of the Hamiltonian $h=-\frac{1}{2}\nabla^2-\frac{3}{r}$ that would be appropriate for an electron attracted to the Li nucleus but not experiencing any repulsions with other electrons. This would result in the following equation for the expansion coefficients: $\left(\begin{array}{cc} \langle \chi_{1s,Z=3} | -\frac{1}{2}\nabla^2-\frac{3}{r}|\chi_{1s,Z=3} \rangle & \langle \chi_{1s,Z=3} | -\frac{1}{2}\nabla^2-\frac{3}{r}| \chi_{2s,Z=1} \rangle \\ \langle \chi_{2s,Z=1} | -\frac{1}{2}\nabla^2-\frac{3}{r}| \chi_{1s,Z=3} \rangle & \langle \chi_{2s,Z=1} | -\frac{1}{2}\nabla^2-\frac{3}{r}| \chi_{2s,Z=1} \rangle \end{array}\right) \left(\begin{array}{c}C \\ D\end{array}\right) = E \left(\begin{array}{cc} \langle \chi_{1s,Z=3} | \chi_{1s,Z=3} \rangle & \langle \chi_{1s,Z=3} | \chi_{2s,Z=1} \rangle \\ \langle \chi_{2s,Z=1} | \chi_{1s,Z=3} \rangle & \langle \chi_{2s,Z=1} | \chi_{2s,Z=1} \rangle \end{array}\right) \left(\begin{array}{c}C \\ D\end{array}\right).$ This 2x2 generalized matrix eigenvalue problem can be solved for the $C_i$ and $D_i$ coefficients and for the energies $E_i$ of the $1s$ and $2s$ orbitals. The lower-energy solution will have $|C| > |D|$, and will be this model's description of the $1s$ orbital. The higher-energy solution will have $|D| > |C|$ and is the approximation to the $2s$ orbital.
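Numerically, a generalized eigenvalue problem of this kind is routinely solved with standard linear-algebra libraries. The sketch below is my own illustration; the matrix entries are hypothetical stand-ins, not actual Li integrals.

```python
import numpy as np
from scipy.linalg import eigh

# Sketch of the 2x2 generalized eigenvalue problem H c = E S c described above.
H = np.array([[-4.20, -0.50],
              [-0.50, -0.85]])   # <chi_i| -1/2 del^2 - 3/r |chi_j>  (made up)
S = np.array([[ 1.00,  0.25],
              [ 0.25,  1.00]])   # overlaps <chi_i|chi_j>            (made up)
E, C = eigh(H, S)                # scipy solves H c = E S c directly
print(E)         # orbital energies; the lower root plays the role of E_1s
print(C[:, 0])   # lower solution has |C| > |D|: dominantly the 1s-like function
```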
Using these $1s$ and $2s$ orbitals and the 3-electron wave function they form $\psi=\phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3)$ as a zeroth-order approximation, how do we then proceed to apply perturbation theory? The full three-electron Hamiltonian $H=\sum_{i=1}^3\left[-\frac{1}{2}\nabla_i^2-\frac{3}{r_i}\right]+\sum_{i<j=1}^3\frac{1}{r_{i,j}}$ can be decomposed into a zeroth-order part $H^{(0)}=\sum_{i=1}^3\left[-\frac{1}{2}\nabla_i^2-\frac{3}{r_i}\right]$ and a perturbation $V=\sum_{i<j=1}^3\frac{1}{r_{i,j}}.$ The zeroth-order energy of the wave function $\psi=\phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3)$ is $E^{(0)}=2E_{1s}+E_{2s}$ where each of the $E_{ns}$ are the energies obtained by solving the 2x2 matrix eigenvalue equation shown earlier. The first-order energy of this state can be written as $E^{(1)}=\langle \phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3)|V|\phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3) \rangle = J_{1s,1s}+2J_{1s,2s}$ with the Coulomb interaction integrals being defined as $J_{a,b}=\int \phi_a^*(r)\phi_a(r)\frac{1}{|r-r'|}\phi_b^*(r')\phi_b(r')\,dr\,dr'.$ To carry out the 3-electron integral appearing in $E^{(1)}$, one proceeds as follows. For the integral $\int[\phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3)]^*\frac{1}{r_{1,2}}\phi_{1s}(r_1)\alpha(1)\phi_{1s}(r_2)\beta(2)\phi_{2s}(r_3)\alpha(3)\,dr_1dr_2dr_3,$ one integrates over the 3 spin variables (using $\langle \alpha| \alpha\rangle=1$, $\langle \alpha| \beta\rangle=0$ and $\langle \beta| \beta\rangle=1$) and then integrates over the coordinate of the third electron using $\langle \phi_{2s}|\phi_{2s}\rangle=1$ to obtain $\int [\phi_{1s}(r_1)\phi_{1s}(r_2)]^*\frac{1}{r_{1,2}} \phi_{1s}(r_1)\phi_{1s}(r_2)\,dr_1dr_2,$ which is $J_{1s,1s}$. The two $J_{1s,2s}$ integrals arise when carrying out similar integrations for the terms arising from $1/r_{1,3}$ and $1/r_{2,3}$. So, through first order, the energy of the Li atom at this level of treatment is given by $E^{(0)}+E^{(1)}=2E_{1s}+E_{2s}+J_{1s,1s}+2J_{1s,2s}.$ The factor $2E_{1s}+E_{2s}$ contains the contributions from the kinetic energy and electron-nuclear Coulomb potential. The $J_{1s,1s}+2J_{1s,2s}$ terms describe the Coulombic repulsions among the three electrons. Each of the Coulomb integrals $J_{i,j}$ can be interpreted as the Coulombic interaction between two electrons (one at location $\textbf{r}$, the other at $\textbf{r}'$) averaged over the positions of these two electrons with their spatial probability distributions given by $|\phi_i(r)|^2$ and $|\phi_j(r')|^2$, respectively. Although the example just considered is rather primitive, it introduces a point of view that characterizes one of the most commonly employed models for treating atomic and molecular electronic structure: the Hartree-Fock (HF) mean-field model, which we will discuss more in Chapter 6. In the HF model, one uses as a zeroth-order Hamiltonian $H^{(0)}=\sum_{i=1}^3 \left[-\frac{1}{2}\nabla_i^2-\frac{3}{r_i}+V_{\rm HF}(r_i)\right]$ consisting of a sum of one-electron terms containing the kinetic energy, the Coulomb attraction to the nucleus (I use the Li atom as an example here), and a potential $V_{\rm HF}(\textbf{r}_i)$.
This potential, which is written in terms of Coulomb integrals similar to those we discussed earlier as well as so-called exchange integrals that we will discuss in Chapter 6, is designed to approximate the interaction of an electron at location $\textbf{r}_i$ with the other electrons in the atom or molecule. Because $H^{(0)}$ is one-electron additive, its eigenfunctions consist of products of eigenfunctions of the operator $h^{(0)}=-\frac{1}{2}\nabla^2-\frac{3}{r}+V_{\rm HF}(r).$ In effect, $V_{\rm HF}(\textbf{r}_i)$ offers an approximation to the true $1/r_{i,j}$ Coulomb interactions expressed in terms of a "smeared-out" electron distribution interacting with the electron at $\textbf{r}_i$. Perturbation theory is then used to treat the effect of the perturbation $V=\sum_{i<j=1}^N\frac{1}{r_{i,j}}-\sum_{i=1}^N V_{\rm HF}(r_i)$ on the zeroth-order states. We say that the perturbation, often called the fluctuation potential, corrects for the difference between the instantaneous Coulomb interactions among the $N$ electrons and the mean-field (average) interactions.
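As a closing numerical aside (my own check, not from the text): for spherically symmetric orbitals, Coulomb integrals of the type defined above reduce to radial quadratures, because the spherical average of $1/|r-r'|$ is $1/r_>$. For a hydrogenic $1s$ orbital of charge $Z$, the known closed form is $J_{1s,1s} = \tfrac{5}{8}Z$ in atomic units, which is also the origin of the $\tfrac{5}{8}Z_e$ repulsion term that appears in the variational treatment of two-electron ions in the next section.

```python
import numpy as np

# Hedged check (atomic units): J_{1s,1s} for a hydrogenic 1s orbital of
# nuclear charge Z has the known closed form 5Z/8.
Z = 3.0
r = np.linspace(1e-8, 12.0/Z, 2001)
P2 = 4.0 * Z**3 * r**2 * np.exp(-2.0*Z*r)              # normalized radial density
integrand = np.outer(P2, P2) / np.maximum.outer(r, r)  # 1/r_> kernel
J = np.trapz(np.trapz(integrand, r, axis=1), r)
print(J, 5.0*Z/8.0)                                    # both ~ 1.875
```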
Let us now turn to the other method that is used to solve Schrödinger equations approximately, the variational method. In this approach, one must again have some reasonable wavefunction $\psi^{(0)}$ that is used to approximate the true wavefunction. Within this approximate wavefunction, one embeds one or more variables {$\alpha_J$} that one subsequently varies to achieve a minimum in the energy of $\psi^{(0)}$ computed as an expectation value of the true Hamiltonian $H$: $E(\{\alpha_J\}) = \dfrac{\langle\psi^{(0)}| H | \psi^{(0)}\rangle}{\langle\psi^{(0)} | \psi^{(0)}\rangle}\nonumber$ The optimal values of the $\alpha_J$ parameters are determined by making $\dfrac{dE}{d\alpha_J} = 0\nonumber$ to achieve the desired energy minimum. We should also verify that the second-derivative matrix $\dfrac{\partial^2E}{\partial\alpha_J \partial \alpha_L}\nonumber$ has all positive eigenvalues; otherwise one may not have found the minimum. The theoretical basis underlying the variational method can be understood through the following derivation. Suppose that someone knew the exact eigenstates (i.e., true $\psi_K$ and true $E_K$) of the true Hamiltonian $H$. These states obey $H \psi_K = E_K \psi_K.\nonumber$ Because these true states form a complete set (it can be shown that the eigenfunctions of all the Hamiltonian operators we ever encounter have this property), our so-called "trial wavefunction" $\psi^{(0)}$ can, in principle, be expanded in terms of these $\psi_K$: $\psi^{(0)} = \displaystyle \sum_K c_K \psi_K.\nonumber$ Before proceeding further, allow me to overcome one likely misconception. What I am going through now is only a derivation of the working formula of the variational method. The final formula will not require us to ever know the exact $\psi_K$ or the exact $E_K$, but we are allowed to use them as tools in our derivation because we know they exist even if we never know them. With the above expansion of our trial function in terms of the exact eigenfunctions, let us now substitute this into the quantity $\dfrac{\langle\psi^{(0)}| H | \psi^{(0)}\rangle}{\langle\psi^{(0)} | \psi^{(0)}\rangle}\nonumber$ that the variational method instructs us to compute: $E=\dfrac{\langle\psi^{(0)}| H | \psi^{(0)}\rangle}{\langle\psi^{(0)} | \psi^{(0)}\rangle}= \dfrac{\left \langle \displaystyle \sum_K c_K \psi_K \left| H \right| \displaystyle \sum_L c_L \psi_L \right \rangle}{\left \langle\displaystyle \sum_K c_K \psi_K \Big| \displaystyle \sum_L c_L \psi_L \right \rangle} \nonumber$ Using the fact that the $\psi_K$ obey $H\psi_K = E_K \psi_K$ and that the $\psi_K$ are orthonormal, $\langle\psi_K|\psi_L\rangle = \delta_{K,L},\nonumber$ the above expression reduces to $E = \dfrac{\displaystyle \sum_K \langle c_K \psi_K | H | c_K \psi_K\rangle}{\displaystyle \sum_K\langle c_K \psi_K| c_K \psi_K\rangle} = \dfrac{\displaystyle \sum_K |c_K|^2 E_K}{\displaystyle \sum_K|c_K|^2}.\nonumber$ One of the basic properties of the kinds of Hamiltonians we encounter is that they have a lowest-energy state. Sometimes we say they are bounded from below, which means their energy states do not continue all the way to minus infinity. There are systems for which this is not the case (we saw one earlier when studying the Stark effect), but we will now assume that we are not dealing with such systems. This allows us to introduce the inequality $E_K \geq E_0$, which says that all of the energies are higher than or equal to the energy of the lowest state, which we denote $E_0$.
Introducing this inequality into the above expression gives $E \geq \dfrac{\displaystyle \sum_K |c_K|^2 E_0}{\displaystyle \sum_K|c_K|^2} = E_0.\nonumber$ This means that the variational energy, computed as $\dfrac{\langle\psi^{(0)}| H | \psi^{(0)}\rangle}{\langle\psi^{(0)} | \psi^{(0)}\rangle} \label{energy}$ will lie above the true ground-state energy no matter what trial function $\psi^{(0)}$ we use. The significance of the above result that $E \geq E_0$ is as follows. We are allowed to imbed into our trial wavefunction $\psi^{(0)}$ parameters that we can vary to make $E$, computed as Equation $\ref{energy}$, as low as possible because we know that we can never make it lower than the true ground-state energy. The philosophy then is to vary the parameters in $\psi^{(0)}$ to render $E$ as low as possible, because the closer $E$ is to $E_0$ the "better" is our variational wavefunction. Let me now demonstrate how the variational method is used in such a manner by solving an example problem.

Example $1$: Two-electron Atoms

Suppose you are given a trial wavefunction of the form: $\phi = \dfrac{Z_e^3}{\pi a_0^3}\exp\left(\dfrac{-Z_er_1}{a_0}\right) \exp\left(\dfrac{-Z_er_2}{a_0}\right)\nonumber$ to represent a two-electron ion of nuclear charge $Z$, and suppose that you are lucky enough that I have already evaluated the variational energy expression (Equation \ref{energy}), which I'll call $W$, for you and found $W =\left(Z_e^2-2ZZ_e+\dfrac{5}{8}Z_e\right)\dfrac{e^2}{a_0}.\nonumber$ Now, let's find the optimum value of the variational parameter $Z_e$ for an arbitrary nuclear charge $Z$ by setting $\dfrac{dW}{dZ_e} = 0$. After finding the optimal value of $Z_e$, we'll then find the optimal energy by plugging this $Z_e$ into the above $W$ expression. \begin{align*} \dfrac{dW}{dZ_e}= \left(2Z_e-2Z+\dfrac{5}{8}\right)\dfrac{e^2}{a_0}&= 0 \\[4pt] 2Z_e - 2Z +\dfrac{5}{8} &= 0 \\[4pt] 2Z_e &= 2Z -\dfrac{5}{8} \\[4pt] Z_e &= Z - \dfrac{5}{16} \\[4pt] &= Z - 0.3125 \end{align*} Note that 0.3125 (i.e., 5/16) represents the shielding factor of one $1s$ electron to the other, reducing the optimal effective nuclear charge by this amount (those familiar with Slater's Rules will not be surprised by this number). Now, using this optimal $Z_e$ in our energy expression gives \begin{align*} W &= Z_e\left(Z_e-2Z+\dfrac{5}{8}\right)\dfrac{e^2}{a_0} \\[4pt] &=\left(Z-\dfrac{5}{16}\right) \left(\left(Z-\dfrac{5}{16}\right)-2Z+\dfrac{5}{8}\right)\dfrac{e^2}{a_0} \\[4pt] &=\left(Z-\dfrac{5}{16}\right)\left(-Z+\dfrac{5}{16}\right)\dfrac{e^2}{a_0} \\[4pt] &= -\left(Z-\dfrac{5}{16}\right)\left(Z-\dfrac{5}{16}\right) \dfrac{e^2}{a_0} \\[4pt] &= -\left(Z-\dfrac{5}{16}\right)^2\dfrac{e^2}{a_0} \\[4pt] &= - (Z - 0.3125)^2(27.21)\ {\rm eV}\end{align*} Since $a_0$ is the Bohr radius 0.529 Å, $e^2/a_0$ = 27.21 eV, or one atomic unit of energy. Is this energy any good? The total energies of some two-electron atoms and ions have been experimentally determined to be as shown in Table $1$ below. Using our optimized expression for $W$, let's now calculate the estimated total energies of each of these atoms and ions as well as the percent error in our estimate for each ion. Table $1$: Comparison of Experimental (true) total energies with predicted total energies for selected two-electron species.
$\begin{array}{ccccc} Z & \text{Atom} & \text{Experimental} & \text{Calculated} & \%\ \text{Error} \\ \hline 1 & \text{H}^- & -14.35\text{ eV} & -12.86\text{ eV} & 10.38\% \\ 2 & \text{He} & -78.98\text{ eV} & -77.46\text{ eV} & 1.92\% \\ 3 & \text{Li}^+ & -198.02\text{ eV} & -196.46\text{ eV} & 0.79\% \\ 4 & \text{Be}^{+2} & -371.5\text{ eV} & -369.86\text{ eV} & 0.44\% \\ 5 & \text{B}^{+3} & -599.3\text{ eV} & -597.66\text{ eV} & 0.27\% \\ 6 & \text{C}^{+4} & -881.6\text{ eV} & -879.86\text{ eV} & 0.19\% \\ 7 & \text{N}^{+5} & -1218.3\text{ eV} & -1216.48\text{ eV} & 0.15\% \\ 8 & \text{O}^{+6} & -1609.5\text{ eV} & -1607.46\text{ eV} & 0.13\% \end{array}$

The energy errors are essentially constant over the range of $Z$, but produce a larger percentage error at small $Z$. Aside: In 1928, when quantum mechanics was quite young, it was not known whether the isolated, gas-phase hydride ion, $H^-$, was stable with respect to loss of an electron to form a hydrogen atom. Let's compare our estimated total energy for $H^-$ to the ground-state energy of a hydrogen atom plus an isolated electron (which is known to be -13.60 eV). When we use our expression for $W$ and take $Z = 1$, we obtain $W = -12.86$ eV, which is greater than -13.6 eV ($H + e^-$), so this simple variational calculation erroneously predicts $H^-$ to be unstable. More sophisticated variational treatments give a ground-state energy of $H^-$ of -14.35 eV, in agreement with experiment and confirming that $H^-$ is indeed stable with respect to electron detachment.

4.03: Linear Variational Method

A widely used example of variational methods is provided by the so-called linear variational method. Here one expresses the trial wave function as a linear combination of so-called basis functions {$\chi_j$}: $\psi=\sum_j c_j \chi_j.\nonumber$ Substituting this expansion into $\langle\psi|H|\psi\rangle$ and then making this quantity stationary with respect to variations in the $c_i$, subject to the constraint that $\psi$ remains normalized, $1=\langle\psi|\psi\rangle=\sum_i\sum_j c_i^*\langle\chi_i|\chi_j\rangle c_j,\nonumber$ gives $\sum_j \langle\chi_i|H|\chi_j\rangle c_j = E \sum_j \langle\chi_i|\chi_j\rangle c_j.\nonumber$ This is a generalized matrix eigenvalue problem that we can write in matrix notation as $\textbf{HC}=\textbf{ESC}.\nonumber$ It is called a generalized eigenvalue problem because of the appearance of the overlap matrix $\textbf{S}$ on its right-hand side. This set of equations for the $c_j$ coefficients can be made into a conventional eigenvalue problem as follows: 1. The eigenvectors $\textbf{v}_k$ and eigenvalues $s_k$ of the overlap matrix are found by solving $\sum_j S_{i,j}\nu_{k,j}=s_k\nu_{k,i}\nonumber$ All of the eigenvalues $s_k$ are positive because $\textbf{S}$ is a positive-definite matrix. 2. Next one forms the matrix $\textbf{S}^{-1/2}$ whose elements are $S_{i,j}^{-1/2}=\sum_k\nu_{k,i}\dfrac{1}{\sqrt{s_k}}\nu_{k,j}\nonumber$ (another matrix $\textbf{S}^{1/2}$ can be formed in a similar way by replacing $\dfrac{1}{\sqrt{s_k}}$ with $\sqrt{s_k}$). 3. One then multiplies the generalized eigenvalue equation on the left by $\textbf{S}^{-1/2}$ to obtain $\textbf{S}^{-1/2}\textbf{HC}=\textbf{E} \textbf{S}^{-1/2}\textbf{SC}.\nonumber$ 4. This equation is then rewritten, using $\textbf{S}^{-1/2}\textbf{S} = \textbf{S}^{1/2}$ and $1=\textbf{S}^{-1/2}\textbf{S}^{1/2}$, as $\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2} (\textbf{S}^{1/2}\textbf{C})=\textbf{E} (\textbf{S}^{1/2}\textbf{C}).\nonumber$ This is a conventional eigenvalue problem in which the matrix is $\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$ and the eigenvectors are $(\textbf{S}^{1/2}\textbf{C})$. The net result is that one can form $\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$ and then find its eigenvalues and eigenvectors. Its eigenvalues will be the same as those of the original generalized eigenvalue problem.
Its eigenvectors $(\textbf{S}^{1/2}\textbf{C})$ can be used to determine the eigenvectors $\textbf{C}$ of the original problem by multiplying by $\textbf{S}^{-1/2}$ $\textbf{C}= \textbf{S}^{-1/2} (\textbf{S}^{1/2}\textbf{C}).\nonumber$ Although the derivation of the matrix eigenvalue equations resulting from the linear variational method was carried out as a means of minimizing $\langle\psi|H|\psi\rangle$, it turns out that the solutions offer more than just an upper bound to the lowest true energy of the Hamiltonian. It can be shown that the nth eigenvalue of the matrix $\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$ is an upper bound to the true energy of the nth state of the Hamiltonian. A consequence of this is that, between any two eigenvalues of the matrix $\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$ there is at least one true energy of the Hamiltonian. This observation is often called the bracketing condition. The ability of linear variational methods to provide estimates to the ground- and excited-state energies from a single calculation is one of the main strengths of this approach.
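The steps above are easy to mirror numerically. The following sketch is my own illustration (the 2x2 $\textbf{H}$ and $\textbf{S}$ are hypothetical stand-ins); it carries out steps 1-4 and verifies that the back-transformed eigenvectors solve the original generalized problem.

```python
import numpy as np

# Sketch of the S^(-1/2) procedure above; H and S are hypothetical examples.
H = np.array([[-1.00, -0.40],
              [-0.40, -0.30]])
S = np.array([[1.00, 0.30],
              [0.30, 1.00]])
s, v = np.linalg.eigh(S)                    # step 1: S v_k = s_k v_k (all s_k > 0)
S_m12 = v @ np.diag(1.0/np.sqrt(s)) @ v.T   # step 2: form S^(-1/2)
Hp = S_m12 @ H @ S_m12                      # steps 3-4: S^(-1/2) H S^(-1/2)
E, Cp = np.linalg.eigh(Hp)                  # conventional eigenvalue problem
C = S_m12 @ Cp                              # back-transform: C = S^(-1/2)(S^(1/2)C)
print(E)                                    # generalized eigenvalues
print(np.allclose(H @ C, S @ C * E))        # True: C indeed solves H C = E S C
```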
It is assumed that the reader has previously learned how symmetry arises in molecular shapes and structures and what symmetry elements are (e.g., planes, axes of rotation, centers of inversion, etc.). We review and teach here only that material that is of direct application to symmetry analysis of molecular orbitals and vibrations and rotations of molecules. We use a specific example, the ammonia molecule, to introduce and illustrate the important aspects of point group symmetry because this example contains most of the complexities that arise in any application of group theory to molecular problems.

Example $1$: The $C_{3v}$ Symmetry Group of Ammonia

The ammonia molecule $NH_3$ belongs, in its ground-state equilibrium geometry, to the $C_{3v}$ point group. Its symmetry operations consist of two $C_3$ rotations, $C_3$, $C_3^2$ (rotations by 120° and 240°, respectively, about an axis passing through the nitrogen atom and lying perpendicular to the plane formed by the three hydrogen atoms), three vertical reflection operations, $\sigma_v$, $\sigma_{v'}$, $\sigma_{v''}$, and the identity operation. Corresponding to these six operations are symmetry elements: the three-fold rotation axis, $C_3$, and the three symmetry planes $\sigma_v$, $\sigma_{v'}$ and $\sigma_{v''}$ that contain the three $NH$ bonds and the $z$-axis (see Figure 4.3). These six symmetry operations form a mathematical group. A group is defined as a set of objects satisfying four properties. 1. A combination rule is defined through which two group elements are combined to give a result that we call the product. The product of two elements in the group must also be a member of the group (i.e., the group is closed under the combination rule). 2. One special member of the group, when combined with any other member of the group, must leave the group member unchanged (i.e., the group contains an identity element). 3. Every group member must have a reciprocal in the group. When any group member is combined with its reciprocal, the product is the identity element. 4. The associative law must hold when combining three group members (i.e., $(AB)C$ must equal $A(BC)$). The members of symmetry groups are symmetry operations; the combination rule is successive operation. The identity element is the operation of doing nothing at all. The group properties can be demonstrated by forming a multiplication table. Let us label the rows of the table by the first operation and the columns by the second operation. Note that this order is important because most groups are not commutative. The $C_{3v}$ group multiplication table is as follows: $\begin{array}{c|cccccc} & E & C_3 & C_3^2 & \sigma_v & \sigma_{v'} & \sigma_{v''} \\ \hline E & E & C_3 & C_3^2 & \sigma_v & \sigma_{v'} & \sigma_{v''} \\ C_3 & C_3 & C_3^2 & E & \sigma_{v'} & \sigma_{v''} & \sigma_v \\ C_3^2 & C_3^2 & E & C_3 & \sigma_{v''} & \sigma_v & \sigma_{v'} \\ \sigma_v & \sigma_v & \sigma_{v''} & \sigma_{v'} & E & C_3^2 & C_3 \\ \sigma_{v'} & \sigma_{v'} & \sigma_v & \sigma_{v''} & C_3 & E & C_3^2 \\ \sigma_{v''} & \sigma_{v''} & \sigma_{v'} & \sigma_v & C_3^2 & C_3 & E \end{array}$ Note the reflection plane labels do not move. That is, although we start with $H_1$ in the $\sigma_v$ plane, $H_2$ in $\sigma_{v'}$, and $H_3$ in $\sigma_{v''}$, if $H_1$ moves due to the first symmetry operation, $\sigma_v$ remains fixed and a different H atom lies in the $\sigma_v$ plane.
Matrices as Group Representations

In using symmetry to help simplify molecular orbital (mo) or vibration/rotation energy-level identifications, the following strategy is followed: 1. A set of $M$ objects belonging to the constituent atoms (or molecular fragments, in a more general case) is introduced. These objects are the orbitals of the individual atoms (or of the fragments) in the mo case; they are unit vectors along the Cartesian $x$, $y$, and $z$ directions located on each of the atoms, and representing displacements along each of these directions, in the vibration/rotation case. 2. Symmetry tools are used to combine these $M$ objects into $M$ new objects, each of which belongs to a specific symmetry of the point group. Because the Hamiltonian (electronic in the mo case and vibration/rotation in the latter case) commutes with the symmetry operations of the point group, the matrix representation of $H$ within the symmetry-adapted basis will be "block diagonal". That is, objects of different symmetry will not interact; only interactions among those of the same symmetry need be considered. To illustrate such symmetry adaptation, consider symmetry adapting the $2s$ orbital of $N$ and the three $1s$ orbitals of the three H atoms. We begin by determining how these orbitals transform under the symmetry operations of the $C_{3v}$ point group. The act of each of the six symmetry operations on the four atomic orbitals can be denoted as follows: $(S_N,S_1,S_2,S_3) \overset{E}{\rightarrow} (S_N,S_1,S_2,S_3)$ $(S_N,S_1,S_2,S_3) \overset{C_3}{\rightarrow} (S_N,S_3,S_1,S_2)$ $(S_N,S_1,S_2,S_3) \overset{C_3^2}{\rightarrow} (S_N,S_2,S_3,S_1)$ $(S_N,S_1,S_2,S_3) \overset{\sigma_v}{\rightarrow} (S_N,S_1,S_3,S_2)$ $(S_N,S_1,S_2,S_3) \overset{\sigma_{v'}}{\rightarrow} (S_N,S_3,S_2,S_1)$ $(S_N,S_1,S_2,S_3) \overset{\sigma_{v''}}{\rightarrow} (S_N,S_2,S_1,S_3)$ Here we are using the active view that a $C_3$ rotation rotates the molecule by 120°. The equivalent passive view is that the $1s$ basis functions are rotated -120°. In the $C_3$ rotation, $S_3$ ends up where $S_1$ began, $S_1$ ends up where $S_2$ began, and $S_2$ ends up where $S_3$ began. These transformations can be thought of in terms of a matrix multiplying a vector with elements $(S_N,S_1,S_2,S_3)$.
For example, if $D^{(4)}(C_3)$ is the representation matrix giving the $C_3$ transformation, then the above action of $C_3$ on the four basis orbitals can be expressed as: $D^{(4)}(C_3) \left(\begin{array}{c}S_N\\S_1\\S_2\\S_3\end{array}\right)= \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \left(\begin{array}{c}S_N\\S_1\\S_2\\S_3\end{array}\right)= \left(\begin{array}{c}S_N\\S_3\\S_1\\S_2\end{array}\right)$ We can likewise write matrix representations for each of the symmetry operations of the $C_{3v}$ point group: $D^{(4)}(C_3^2) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{array}\right) \hspace{30pt} D^{(4)}(E) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)$ $D^{(4)}(\sigma_v) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array}\right) \hspace{30pt} D^{(4)}(\sigma_{v'}) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right)$ $D^{(4)}(\sigma_{v''}) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)$ It is easy to verify that a $C_3$ rotation followed by a $\sigma_v$ reflection is equivalent to a $\sigma_{v'}$ reflection alone. In other words, $\sigma_v C_3 = \sigma_{v'}$: $(S_1,S_2,S_3)\overset{C_3}{\rightarrow}(S_3,S_1,S_2)\overset{\sigma_v}{\rightarrow} (S_3,S_2,S_1).$ Note that this same relationship is carried by the matrices: $D^{(4)}(\sigma_v)D^{(4)}(C_3) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array}\right) \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right) =D^{(4)}(\sigma_{v'})$ Likewise we can verify that $C_3\sigma_v = \sigma_{v''}$ directly, and we can notice that the matrices show the same identity: $D^{(4)}(C_3)D^{(4)}(\sigma_v) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array}\right) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) =D^{(4)}(\sigma_{v''}).$ In fact, one finds that the six matrices, $D^{(4)}(R)$, when multiplied together in all 36 possible ways, obey the same multiplication table as did the six symmetry operations. We say the matrices form a representation of the group because the matrices have all the properties of the group.

Characters of Representations

One important property of a matrix is the sum of its diagonal elements, which is called the trace of the matrix $D$ and is denoted $Tr(D)$: $Tr(D) = \sum_iD_{ii}=\chi .$ So, $\chi$ is called the trace or character of the matrix. In the above example $\chi (E) = 4$ $\chi (C_3) = \chi (C_3^2) = 1$ $\chi (\sigma_v) = \chi (\sigma_{v'}) = \chi (\sigma_{v''}) = 2.$ The importance of the characters of the symmetry operations lies in the fact that they do not depend on the specific basis used to form the matrix. That is, they are invariant to a unitary or orthogonal transformation of the objects used to define the matrices.
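Both claims, the representation property and the characters just quoted, are easy to verify numerically. The sketch below is my own illustration using the permutation matrices given above.

```python
import numpy as np

# Hedged check: D(sigma_v) D(C3) = D(sigma_v') in the (S_N, S_1, S_2, S_3) basis,
# and the characters chi(C3) = 1, chi(sigma_v) = 2 quoted above.
D_C3 = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 1, 0, 0],
                 [0, 0, 1, 0]])
D_sv = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
D_svp = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0]])
print(np.array_equal(D_sv @ D_C3, D_svp))   # True: matrices compose like the group
print(np.trace(D_C3), np.trace(D_sv))       # characters: 1 and 2
```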
As a result, they contain information about the symmetry operation itself and about the space spanned by the set of objects. The significance of this observation for our symmetry adaptation process will become clear later. Note that the characters of both rotations are the same, as are the characters of all three reflections. Collections of operations having identical characters are called classes. Each operation in a class of operations has the same character as other members of the class. The character of a class depends on the space spanned by the basis of functions on which the symmetry operations act.

Another Basis and Another Representation

Above we used $(S_N,S_1,S_2,S_3)$ as a basis. If, alternatively, we use the one-dimensional basis consisting of the $1s$ orbital on the N-atom, we obtain different characters, as we now demonstrate. The act of the six symmetry operations on this $S_N$ can be represented as follows: $S_N \overset{E}{\rightarrow} S_N \hspace{15pt} S_N \overset{C_3}{\rightarrow} S_N \hspace{15pt} S_N \overset{C_3^2}{\rightarrow} S_N;$ $S_N \overset{\sigma_v}{\rightarrow} S_N \hspace{15pt} S_N \overset{\sigma_{v'}}{\rightarrow} S_N \hspace{15pt} S_N \overset{\sigma_{v''}}{\rightarrow} S_N.$ We can represent this group of operations in this basis by the one-dimensional set of matrices: $D^{(1)} (E) = 1; \hspace{15pt} D^{(1)}(C_3) = 1; \hspace{15pt} D^{(1)}(C_3^2) = 1,$ $D^{(1)} (\sigma_v) = 1; \hspace{15pt} D^{(1)}(\sigma_{v'}) = 1; \hspace{15pt} D^{(1)}(\sigma_{v''}) = 1.$ Again we have $D^{(1)} (\sigma_v) D^{(1)}(C_3) = 1 \cdot 1 = D^{(1)}(\sigma_{v'}), \text{ and}$ $D^{(1)} (C_3) D^{(1)}(\sigma_v) = 1 \cdot 1 = D^{(1)}(\sigma_{v''}).$ These six 1x1 matrices form another representation of the group. In this basis, each character is equal to unity. The representation formed by allowing the six symmetry operations to act on the $1s$ N-atom orbital is clearly not the same as that formed when the same six operations acted on the $(S_N,S_1,S_2,S_3)$ basis. We now need to learn how to further analyze the information content of a specific representation of the group formed when the symmetry operations act on any specific set of objects.

Reducible and Irreducible Representations

Reducible Representations

Note that every matrix in the four-dimensional group representation labeled $D^{(4)}$ has the so-called block diagonal form $\left(\begin{array}{c|ccc} 1 & 0 & 0 & 0 \\ \hline 0 & A & B & C \\ 0 & D & E & F \\ 0 & G & H & I \end{array}\right)$ This means that these $D^{(4)}$ matrices are really a combination of two separate group representations (mathematically, it is called a direct sum representation). We say that $D^{(4)}$ is reducible into a one-dimensional representation $D^{(1)}$ and a three-dimensional representation formed by the 3x3 submatrices that we will call $D^{(3)}$.
$D^{(3)}(E) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \hspace{20pt} D^{(3)}(C_3) = \left(\begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right) \hspace{20pt} D^{(3)}(C_3^2) = \left(\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array}\right)$ $D^{(3)}(\sigma_v) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right) \hspace{20pt} D^{(3)}(\sigma_{v'}) = \left(\begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right) \hspace{20pt} D^{(3)}(\sigma_{v''}) = \left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right)$ The characters of $D^{(3)}$ are $\chi (E) = 3$, $\chi (2C_3) = 0$, $\chi (3\sigma_v) = 1$. Note that we would have obtained this $D^{(3)}$ representation directly if we had originally chosen to examine the basis $(S_1,S_2,S_3)$ alone; also note that these characters are equal to those of $D^{(4)}$ minus those of $D^{(1)}$.

Change in Basis

Now let us convert to a new basis that is a linear combination of the original $S_1,S_2,S_3$ basis: $T_1 = S_1 + S_2 + S_3$ $T_2 = 2S_1 - S_2 - S_3$ $T_3 = S_2 - S_3$ (Don't worry about how I constructed $T_1$, $T_2$, and $T_3$ yet. As will be demonstrated later, we form them by using symmetry projection operators defined below). We determine how the $T$ basis functions behave under the group operations by allowing the operations to act on the $S_j$ and interpreting the results in terms of the $T_i$. In particular, $(T_1,T_2 ,T_3) \overset{\sigma_v}{\rightarrow} (T_1,T_2,-T_3); \hspace{15pt} (T_1,T_2,T_3) \overset{E}{\rightarrow} (T_1,T_2,T_3);$ $(T_1,T_2,T_3) \overset{\sigma_{v'}}{\rightarrow} (S_3+S_2+S_1, 2S_3-S_2-S_1,S_2-S_1) = (T_1, -\frac{1}{2} T_2 - \frac{3}{2} T_3, - \frac{1}{2} T_2 + \frac{1}{2} T_3);$ $(T_1,T_2,T_3) \overset{\sigma_{v''}}{\rightarrow} (S_2+S_1+S_3, 2S_2-S_1-S_3,S_1-S_3) = (T_1, - \frac{1}{2} T_2 + \frac{3}{2} T_3, \frac{1}{2}T_2 + \frac{1}{2}T_3);$ $(T_1,T_2,T_3) \overset{C_3}{\rightarrow} (S_3+S_1+S_2, 2S_3-S_1-S_2,S_1-S_2) = (T_1, - \frac{1}{2}T_2 - \frac{3}{2}T_3, \frac{1}{2}T_2 - \frac{1}{2}T_3);$ $(T_1,T_2,T_3) \overset{C_3^2}{\rightarrow} (S_2+S_3+S_1, 2S_2-S_3-S_1,S_3-S_1) = (T_1, - \frac{1}{2}T_2 + \frac{3}{2}T_3, - \frac{1}{2}T_2 - \frac{1}{2}T_3).$ So the matrix representations in the new $T_i$ basis are: $D^{(3)}(E) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \hspace{20pt} D^{(3)}(C_3) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & -\frac{1}{2} & -\frac{3}{2} \\ 0 & \frac{1}{2} & -\frac{1}{2} \end{array}\right);$ $D^{(3)}(C_3^2) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & -\frac{1}{2} & \frac{3}{2} \\ 0 & -\frac{1}{2} & -\frac{1}{2} \end{array}\right) \hspace{20pt} D^{(3)}(\sigma_v) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array}\right);$ $D^{(3)}(\sigma_{v'}) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & -\frac{1}{2} & -\frac{3}{2} \\ 0 & -\frac{1}{2} & \frac{1}{2} \end{array}\right) \hspace{20pt} D^{(3)}(\sigma_{v''}) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & -\frac{1}{2} & \frac{3}{2} \\ 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right)$

Reduction of the Reducible Representation

These six matrices can be verified to multiply just as the symmetry operations do; thus they form another three-dimensional representation of the group. We see that in the $T_i$ basis the matrices are block diagonal.
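The block diagonality brought about by the $T_i$ basis can be checked directly as a similarity transformation. In the sketch below (my own illustration), the rows of $\textbf{B}$ express $T_1, T_2, T_3$ in terms of $S_1, S_2, S_3$.

```python
import numpy as np

# Hedged check: the T basis (T1 = S1+S2+S3, T2 = 2S1-S2-S3, T3 = S2-S3)
# block-diagonalizes the 3x3 S-basis representation, shown here for C3.
B = np.array([[1.0,  1.0,  1.0],
              [2.0, -1.0, -1.0],
              [0.0,  1.0, -1.0]])
D_C3_S = np.array([[0.0, 0.0, 1.0],   # C3 in the (S1, S2, S3) basis
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
D_C3_T = B @ D_C3_S @ np.linalg.inv(B)        # same operation in the T basis
print(np.round(D_C3_T, 3))                    # [[1,0,0],[0,-1/2,-3/2],[0,1/2,-1/2]]
print(np.trace(D_C3_S), np.trace(D_C3_T))     # traces equal: characters are invariant
```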
This means that the space spanned by the $T_i$ functions, which is the same space as the $S_j$ span, forms a reducible representation that can be decomposed into a one-dimensional space and a two-dimensional space (via formation of the $T_i$ functions). Note that the characters (traces) of the matrices are not changed by the change in bases. The one-dimensional part of the above reducible three-dimensional representation is seen to be the same as the totally symmetric representation we arrived at before, $D^{(1)}$. The two-dimensional representation that is left can be shown to be irreducible; it has the following matrix representations: $D^{(2)}(E) = \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) \hspace{15pt} D^{(2)}(C_3) = \left(\begin{array}{cc} -\frac{1}{2} & -\frac{3}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{array}\right) \hspace{15pt} D^{(2)}(C_3^2) = \left(\begin{array}{cc} -\frac{1}{2} & \frac{3}{2} \\ -\frac{1}{2} & -\frac{1}{2} \end{array}\right)$ $D^{(2)}(\sigma_v) = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) \hspace{15pt} D^{(2)}(\sigma_{v'}) = \left(\begin{array}{cc} -\frac{1}{2} & -\frac{3}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{array}\right) \hspace{15pt} D^{(2)}(\sigma_{v''}) = \left(\begin{array}{cc} -\frac{1}{2} & \frac{3}{2} \\ \frac{1}{2} & \frac{1}{2} \end{array}\right)$ The characters can be obtained by summing diagonal elements: $\chi (E) = 2, \chi (2C_3) = -1, \chi (3\sigma_v) = 0.$

Rotations as a Basis

Another one-dimensional representation of the group can be obtained by taking rotation about the $z$-axis (the $C_3$ axis) as the object on which the symmetry operations act: $R_z \overset{E}{\rightarrow} R_z \hspace{15pt} R_z \overset{C_3}{\rightarrow} R_z \hspace{15pt} R_z \overset{C_3^2}{\rightarrow} R_z;$ $R_z \overset{\sigma_v}{\rightarrow} -R_z \hspace{15pt} R_z \overset{\sigma_{v'}}{\rightarrow} -R_z \hspace{15pt} R_z \overset{\sigma_{v''}}{\rightarrow} -R_z.$ In writing these relations, we use the fact that reflection reverses the sense of a rotation. The matrix representations corresponding to this one-dimensional basis are: $D^{(1)}(E) = 1 \hspace{15pt} D^{(1)}(C_3) = 1 \hspace{15pt} D^{(1)}(C_3^2) = 1;$ $D^{(1)}(\sigma_v) = -1 \hspace{15pt} D^{(1)}(\sigma_{v'}) = -1 \hspace{15pt} D^{(1)} (\sigma_{v''}) = -1.$ These one-dimensional matrices can be shown to multiply together just like the symmetry operations of the $C_{3v}$ group. They form an irreducible representation of the group (because it is one-dimensional, it cannot be further reduced). Note that this one-dimensional representation is not identical to that found above for the $1s$ N-atom orbital, or the $T_1$ function.

Overview

We have found three distinct irreducible representations for the $C_{3v}$ symmetry group: two different one-dimensional representations and one two-dimensional representation. Are there any more? An important theorem of group theory shows that the number of irreducible representations of a group is equal to the number of classes. Since there are three classes of operation (i.e., $E$, $2C_3$ and $3\sigma_v$), we have found all the irreducible representations of the $C_{3v}$ point group. There are no more. The irreducible representations have standard names; the first $D^{(1)}$ (that arising from the $T_1$ and $1s_N$ orbitals) is called $A_1$, the $D^{(1)}$ arising from $R_z$ is called $A_2$ and $D^{(2)}$ is called $E$ (not to be confused with the identity operation $E$). We will see shortly where to find and identify these names.
Thus, our original $D^{(4)}$ representation was a combination of two $A_1$ representations and one $E$ representation. We say that $D^{(4)}$ is a direct sum representation: $D^{(4)} = 2A_1 \oplus E$. A consequence is that the characters of the combination representation $D^{(4)}$ can be obtained by adding the characters of its constituent irreducible representations. $\begin{array}{cccc} & E & 2C_3 & 3\sigma_v \ A_1 & 1 & 1 & 1 \ A_1 & 1 & 1 & 1 \ E & 2 & -1 & 0 \ \hline 2A_1 \oplus E & 4 & 1 & 2 \ \end{array}$ Decompose Reducible Representations in General Suppose you were given only the characters (4,1,2). How can you find out how many times $A_1$, $E$, and $A_2$ appear when you reduce $D^{(4)}$ to its irreducible parts? You want to find a linear combination of the characters of $A_1$, $A_2$, and $E$ that adds up to (4,1,2). You can treat the characters of matrices as vectors and take the dot product of $A_1$ with $D^{(4)}$: $\left(\begin{array}{cccccc} 1 & 1 & 1 & 1 & 1 & 1 \ E & C_3 & C_3^2 & \sigma_v & \sigma_{v'} & \sigma_{v''} \end{array}\right) \left(\begin{array}{cc} 4 & E \ 1 & C_3 \ 1 & C_3^2 \ 2 & \sigma_v \ 2 & \sigma_{v'} \ 2 & \sigma_{v''} \end{array}\right) = 4 + 1 + 1 + 2 + 2 + 2 = 12.$ The vector $(1,1,1,1,1,1)$ is not normalized; hence, to obtain the component of $(4,1,1,2,2,2)$ along this direction, one must divide by the squared norm of $(1,1,1,1,1,1)$, which is 6. The result is that the reducible representation contains $12/6 = 2$ $A_1$ components. Analogous projections in the $E$ and $A_2$ directions give components of 1 and 0, respectively. In general, to determine the number $n_\Gamma$ of times irreducible representation $\Gamma$ appears in the reducible representation with characters $\chi_{\rm red}$, one calculates $n_\Gamma =\dfrac{1}{g}\sum_R\chi_\Gamma(R)\chi_{\rm red}(R) ,$ where $g$ is the order of the group (i.e., the number of operations in the group; six in our example) and $\chi_\Gamma(R)$ are the characters of the $\Gamma^{\rm th}$ irreducible representation. Commonly Used Bases We could take any set of functions as a basis for a group representation. Commonly used sets include: Cartesian displacement coordinates $(x,y,z)$ located on the atoms of a polyatomic molecule (their symmetry treatment is equivalent to that involved in treating a set of p orbitals on the same atoms), quadratic functions such as the d orbitals $xy, yz, xz, x^2-y^2, z^2$, as well as rotations about the $x$, $y$ and $z$ axes. The transformation properties of these very commonly used bases are listed in the character tables shown in Section 4.4. Summary The basic idea of symmetry analysis is that any basis of orbitals, displacements, rotations, etc. transforms either as one of the irreducible representations or as a direct sum (reducible) representation. Symmetry tools are used to first determine how the basis transforms under action of the symmetry operations. They are then used to decompose the resultant representations into their irreducible components. More Examples The 2p Orbitals of Nitrogen For a function to transform according to a specific irreducible representation means that the function, when operated upon by a point-group symmetry operator, yields a linear combination of the functions that transform according to that irreducible representation.
For example, a $2p_z$ orbital ($z$ is the $C_3$ axis of $NH_3$) on the nitrogen atom belongs to the $A_1$ representation because it yields unity times itself when $C_3$, $C_3^2$, $\sigma_v$, $\sigma_{v'}$, $\sigma_{v''}$ or the identity operation acts on it. The factor of 1 means that $2p_z$ has $A_1$ symmetry since the characters (the numbers listed opposite $A_1$ and below $E, 2C_3,$ and $3\sigma_v$ in the $C_{3v}$ character table shown in Section 4.4) of all six symmetry operations are 1 for the $A_1$ irreducible representation. The $2p_x$ and $2p_y$ orbitals on the nitrogen atom transform as the $E$ representation since $C_3$, $C_3^2$, $\sigma_v$, $\sigma_{v'}$, $\sigma_{v''}$ and the identity operation map $2p_x$ and $2p_y$ among one another. Specifically, $C_3 \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) = \left(\begin{array}{cc} \cos 120^\circ & - \sin 120 ^\circ \ \sin 120^\circ & \cos 120 ^\circ \end{array}\right) \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right)$ $C_3^2\left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) = \left(\begin{array}{cc} \cos 240^\circ & - \sin 240 ^\circ \ \sin 240^\circ & \cos 240 ^\circ \end{array}\right) \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right)$ $E \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) = \left(\begin{array}{cc} 1 & 0 \ 0 & 1 \end{array}\right) \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right)$ $\sigma_v \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) = \left(\begin{array}{cc} -1 & 0 \ 0 & 1 \end{array}\right) \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right)$ $\sigma_{v'} \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) = \left(\begin{array}{cc} \frac{1}{2} &\frac{\sqrt{3}}{2} \ \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{array}\right) \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right)$ $\sigma_{v''}\left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) = \left(\begin{array}{cc} \frac{1}{2} &-\frac{\sqrt{3}}{2} \ -\frac{\sqrt{3}}{2} & -\frac{1}{2} \end{array}\right) \left(\begin{array}{c}2p_x \ 2p_y \end{array}\right) .$ The 2x2 matrices, which indicate how each symmetry operation maps $2p_x$ and $2p_y$ into some combinations of $2p_x$ and $2p_y$, are the representation matrices ($D^{(IR)}$) for that particular operation and for this particular irreducible representation (IR). For example, $\left(\begin{array}{cc} \frac{1}{2} &\frac{\sqrt{3}}{2} \ \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{array}\right) = D^{(E)}(\sigma_{v'})$ This set of matrices has the same characters as the $D^{(2)}$ matrices obtained earlier when the $T_i$ displacement vectors were analyzed, but the individual matrix elements are different because we used a different basis set (here $2p_x$ and $2p_y$; above it was $T_2$ and $T_3$). This illustrates the invariance of the trace to the specific representation; the trace depends only on the space spanned, not on the specific manner in which it is spanned. Short-Cut A short-cut device exists for evaluating the trace of such representation matrices (that is, for computing the characters). The diagonal elements of the representation matrices are the projections along each orbital of the effect of the symmetry operation acting on that orbital. For example, a diagonal element of the $C_3$ matrix is the component of $C_32p_y$ along the $2p_y$ direction. More rigorously, it is $\int 2p_y^*C_32p_y d\tau$. Thus, the character of the $C_3$ matrix is the sum of $\int 2p_y^*C_32p_y d\tau$ and $\int 2p_x^*C_32p_x d\tau$.
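The six $E$-representation matrices just listed can also be generated directly from the standard $2\times 2$ rotation form, which makes the character values easy to verify. A minimal sketch (Python with NumPy; the dictionary keys are our own labels):

```python
import numpy as np

def rot(theta_deg):
    """Rotation matrix acting on the (2p_x, 2p_y) column vector."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# The E-representation matrices in the (2p_x, 2p_y) basis, as listed above
D = {
    "E":     np.eye(2),
    "C3":    rot(120.0),
    "C3^2":  rot(240.0),
    "sv":    np.array([[-1.0, 0.0], [0.0, 1.0]]),
    "sv'":   np.array([[0.5,  np.sqrt(3) / 2], [np.sqrt(3) / 2, -0.5]]),
    "sv''":  np.array([[0.5, -np.sqrt(3) / 2], [-np.sqrt(3) / 2, -0.5]]),
}

# The traces give chi(E) = 2, chi(C3) = chi(C3^2) = -1, chi(sigma_v) = 0,
# the same characters as the D^(2) (T2, T3) matrices, though the entries differ.
for name, M in D.items():
    print(f"chi({name}) = {np.trace(M):+.1f}")

assert np.allclose(D["C3"] @ D["C3"], D["C3^2"])  # sample closure check
```

The final assertion illustrates that these matrices multiply together just as the corresponding symmetry operations do.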
In general, the character $\chi$ of any symmetry operation $S$ can be computed by allowing $S$ to operate on each orbital $\phi_i$, then projecting $S\phi_i$ along $\phi_i$ (i.e., forming $\int\phi_i^*S\phi_id\tau$), and summing these terms, $\sum_i\int\phi_i^*S\phi_id\tau= \chi(S).$ If these rules are applied to the $2p_x$ and $2p_y$ orbitals of nitrogen within the $C_{3v}$ point group, one obtains $\chi (E) = 2, \chi (C_3) = \chi (C_3^2) = -1, \chi (\sigma_v) = \chi (\sigma_{v'}) = \chi (\sigma_{v''}) = 0.$ This set of characters is the same as $D^{(2)}$ above and agrees with those of the $E$ representation for the $C_{3v}$ point group. Hence, $2p_x$ and $2p_y$ belong to or transform as the $E$ representation. This is why $(x,y)$ is to the right of the row of characters for the $E$ representation in the $C_{3v}$ character table shown in Section 4.4. In similar fashion, the $C_{3v}$ character table (please refer to this table now) states that $d_{x^2−y^2}$ and $d_{xy}$ orbitals on nitrogen transform as $E$, as do $d_{xz}$ and $d_{yz}$, but $d_{z^2}$ transforms as $A_1$. Earlier, we considered in some detail how the three $1s_H$ orbitals on the hydrogen atoms transform. Repeating this analysis using the short-cut rule just described, the traces (characters) of the 3 x 3 representation matrices are computed by allowing $E, 2C_3,$ and $3\sigma_v$ to operate on $1s_{H_1}$, $1s_{H_2}$, and $1s_{H_3}$ and then computing the component of the resulting function along the original function. The resulting characters are $\chi (E) = 3, \chi (C_3) = \chi (C_3^2) = 0,$ and $\chi (\sigma_v) = \chi (\sigma_{v'}) = \chi (\sigma_{v''}) = 1$, in agreement with what we calculated before. Using the orthogonality of characters taken as vectors, we can reduce the above set of characters to $A_1 + E$. Hence, we say that our orbital set of three $1s_H$ orbitals forms a reducible representation consisting of the sum of $A_1$ and $E$ IR's. This means that the three $1s_H$ orbitals can be combined to yield one orbital of $A_1$ symmetry and a pair that transform according to the $E$ representation. Projector Operators: Symmetry Adapted Linear Combinations of Atomic Orbitals To generate the above $A_1$ and $E$ symmetry-adapted orbitals, we make use of so-called symmetry projection operators $P_E$ and $P_{A_1}$. These operators are given in terms of linear combinations of products of characters times elementary symmetry operations as follows: $P_{A_1} =\sum_S\chi_{A_1}(S)S$ $P_E =\sum_S\chi_E(S)S,$ where $S$ ranges over $C_3$, $C_3^2$, $\sigma_v$, $\sigma_{v'}$, $\sigma_{v''}$ and the identity operation. The result of applying $P_{A_1}$ to, say, $1s_{H_1}$ is $P_{A_1} 1s_{H_1} = 1s_{H_1} + 1s_{H_2}+1s_{H_3}+1s_{H_2}+1s_{H_3}+1s_{H_1}\ = 2(1s_{H_1}+1s_{H_2}+1s_{H_3}) = \phi_{A_1},$ which is an (unnormalized) orbital having $A_1$ symmetry. Clearly, this same $\phi_{A_1}$ orbital would be generated by $P_{A_1}$ acting on $1s_{H_2}$ or $1s_{H_3}$. Hence, only one $A_1$ orbital exists. Likewise, $P_E1s_{H_1} = 2 \cdot 1s_{H_1} -1s_{H_2} -1s_{H_3} \equiv \phi_{E,1},$ which is one of the symmetry-adapted orbitals having $E$ symmetry. The other $E$ orbital can be obtained by allowing $P_E$ to act on $1s_{H_2}$ or $1s_{H_3}$: $P_E1s_{H_2} = 2 \cdot 1s_{H_2} -1s_{H_1} -1s_{H_3} \equiv \phi_{E,2}$ $P_E1s_{H_3} = 2 \cdot 1s_{H_3} -1s_{H_1} -1s_{H_2} \equiv \phi_{E,3} .$ It might seem as though three orbitals having $E$ symmetry were generated, but only two of these are really independent functions.
For example, $\phi_{E,3}$ is related to $\phi_{E,1}$ and $\phi_{E,2}$ as follows: $\phi_{E,3} = -(\phi_{E,1} + \phi_{E,2}).$ Thus, only $\phi_{E,1}$ and $\phi_{E,2}$ are needed to span the two-dimensional space of the $E$ representation. If we include $\phi_{E,1}$ in our set of orbitals and require our orbitals to be orthogonal, then we must find numbers $a$ and $b$ such that $\phi'_E = a\phi_{E,2} + b\phi_{E,3}$ is orthogonal to $\phi_{E,1}$: $\langle \phi_{E,1}|\phi'_E\rangle = 0$. A straightforward calculation gives $a = -b$, or $\phi'_E = a (1s_{H_2} -1s_{H_3}),$ which agrees with what we used earlier to construct the $T_i$ functions in terms of the $S_j$ functions. Summary Let us now summarize what we have learned thus far about point group symmetry. Any given set of atomic orbitals {$\phi_i$}, atom-centered displacements, or rotations can be used as a basis for the symmetry operations of the point group of the molecule. The characters $\chi(S)$ belonging to the operations $S$ of this point group within any such space can be found by summing the integrals $\int\phi_i^*S\phi_id\tau$ over all the atomic orbitals (or the corresponding unit-vector atomic displacements or rotations). The resultant characters will, in general, be reducible to a combination of the characters of the irreducible representations $\chi_i(S)$. To decompose the characters $\chi(S)$ of the reducible representation into a sum of characters $\chi_i(S)$ of the irreducible representations $\chi(S) = \sum_in_i\chi_i(S),$ it is necessary to determine how many times, $n_i$, the $i^{\rm th}$ irreducible representation occurs in the reducible representation. The expression for $n_i$ is $n_i =\dfrac{1}{g}\sum_S\chi(S)\chi_i(S)$ in which $g$ is the order of the point group, i.e., the total number of symmetry operations in the group (e.g., $g = 6$ for $C_{3v}$). For example, the reducible representation $\chi(E) = 3, \chi(C_3) = 0$, and $\chi(\sigma_v) = 1$ formed by the three $1s_H$ orbitals discussed above can be decomposed as follows: $n_{A_1} = \dfrac{1}{6} (3 \cdot 1 + 2 \cdot 0 \cdot 1 + 3 \cdot 1 \cdot 1 ) = 1,$ $n_{A_2} = \dfrac{1}{6} (3 \cdot 1 + 2 \cdot 0 \cdot 1 + 3 \cdot 1 \cdot (-1) ) = 0,$ $n_E = \dfrac{1}{6} (3 \cdot 2 + 2 \cdot 0 \cdot (-1) + 3 \cdot 1 \cdot 0 ) = 1.$ These equations state that the three $1s_H$ orbitals can be combined to give one $A_1$ orbital and, since $E$ is degenerate, one pair of $E$ orbitals, as established above. With knowledge of the $n_i$, the symmetry-adapted orbitals can be formed by allowing the projectors $P_i =\sum_S\chi_i(S)S$ to operate on each of the primitive atomic orbitals. How this is carried out was illustrated for the $1s_H$ orbitals in our earlier discussion. These tools allow a symmetry decomposition of any set of atomic orbitals into appropriate symmetry-adapted orbitals. Before considering other concepts and group-theoretical machinery, it should once again be stressed that these same tools can be used in symmetry analysis of the translational, vibrational and rotational motions of a molecule. The twelve motions of $NH_3$ (three translations, three rotations, six vibrations) can be described in terms of combinations of displacements of each of the four atoms in each of three $(x,y,z)$ directions. Hence, unit vectors placed on each atom directed in the $x$, $y$, and $z$ directions form a basis for action by the operations {$S$} of the point group. In the case of $NH_3$, the characters of the resultant 12 x 12 representation matrices form a reducible representation in the $C_{3v}$ point group: $\chi(E) = 12, \chi(C_3) = \chi(C_3^2) = 0$, $\chi(\sigma_v) = \chi(\sigma_{v'}) = \chi(\sigma_{v''}) = 2$.
For example, under $\sigma_v$ the $H_2$ and $H_3$ atoms are interchanged, so unit vectors on either one will not contribute to the trace. Unit z-vectors on $N$ and $H_1$ remain unchanged, as do the corresponding y-vectors; however, the x-vectors on $N$ and $H_1$ are reversed in sign. The total character for $\sigma_v$ is thus $4 - 2 = 2$, and the same analysis applies to $\sigma_{v'}$ and $\sigma_{v''}$. This representation can be decomposed as follows: $n_{A_1} = \dfrac{1}{6} (1\cdot 1\cdot 12 + 2 \cdot 1 \cdot 0 + 3 \cdot 1 \cdot 2 ) = 3,$ $n_{A_2} = \dfrac{1}{6} (1\cdot 1\cdot 12 + 2 \cdot 1 \cdot 0 + 3 \cdot (-1) \cdot 2 ) = 1,$ $n_E = \dfrac{1}{6} (1\cdot 2\cdot 12 + 2 \cdot (-1) \cdot 0 + 3 \cdot 0 \cdot 2 ) = 4.$ From the information on the right side of the $C_{3v}$ character table, translations of all four atoms in the $z$, $x$ and $y$ directions transform as $A_1(z)$ and $E(x,y)$, respectively, whereas rotations about the $z(R_z)$, $x(R_x)$, and $y(R_y)$ axes transform as $A_2$ and $E$. Hence, of the twelve motions, three translations have $A_1$ and $E$ symmetry and three rotations have $A_2$ and $E$ symmetry. This leaves six vibrations, of which two have $A_1$ symmetry, none have $A_2$ symmetry, and two (pairs) have $E$ symmetry. We could obtain symmetry-adapted vibrational and rotational bases by allowing symmetry projection operators of the irreducible representation symmetries to operate on various elementary Cartesian $(x,y,z)$ atomic displacement vectors centered on the four atoms. Direct Product Representations Direct Products in N-Electron Wave functions We now turn to the symmetry analysis of orbital products. Such knowledge is important because one is routinely faced with constructing symmetry-adapted $N$-electron configurations that consist of products of $N$ individual spin orbitals, one for each electron. A point-group symmetry operator $S$, when acting on such a product of orbitals, gives the product of $S$ acting on each of the individual orbitals $S(\phi_1\phi_2\phi_3...\phi_N) = (S\phi_1) (S\phi_2) (S\phi_3) ... (S\phi_N).$ For example, reflection of an $N$-orbital product through the $\sigma_v$ plane in $NH_3$ applies the reflection operation to all $N$ electrons. Just as the individual orbitals formed a basis for action of the point-group operators, the configurations ($N$-orbital products) form a basis for the action of these same point-group operators. Hence, the various electronic configurations can be treated as functions on which $S$ operates, and the machinery illustrated earlier for decomposing orbital symmetry can then be used to carry out a symmetry analysis of configurations. Another shortcut makes this task easier. Since the symmetry-adapted individual orbitals {$\phi_i, i = 1, ..., M$} transform according to irreducible representations, the representation matrices for the $N$-term products shown above consist of products of the matrices belonging to each $\phi_i$. This matrix product is not a simple product but what is called a direct product. To compute the characters of the direct product matrices, one multiplies the characters of the individual matrices of the irreducible representations of the $N$ orbitals that appear in the electron configuration. The direct-product representation formed by the orbital products can therefore be symmetry-analyzed (reduced) using the same tools as we used earlier.
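Before turning to the orbital-product example, here is the $n_\Gamma$ reduction formula in executable form, applied both to the twelve Cartesian displacements of $NH_3$ just analyzed and to the $E \times E$ direct-product characters that appear in the next paragraphs (a minimal sketch in Python; the data are simply the $C_{3v}$ class sizes and characters discussed above):

```python
import numpy as np

# C3v classes: E, 2C3, 3sigma_v
sizes = np.array([1, 2, 3])
chi = {
    "A1": np.array([1,  1,  1]),
    "A2": np.array([1,  1, -1]),
    "E":  np.array([2, -1,  0]),
}
g = sizes.sum()  # order of the group, g = 6

def reduce_rep(chi_red):
    """n_Gamma = (1/g) * sum over classes of (class size) * chi_Gamma * chi_red."""
    return {name: int(round(sizes @ (c * chi_red) / g)) for name, c in chi.items()}

# Twelve Cartesian displacements of NH3: chi = (12, 0, 2) -> 3 A1 + 1 A2 + 4 E
print(reduce_rep(np.array([12, 0, 2])))

# Direct product E x E: chi = (4, 1, 0) -> 1 A1 + 1 A2 + 1 E
print(reduce_rep(chi["E"] * chi["E"]))
```

The first call reproduces $3A_1 \oplus A_2 \oplus 4E$, and the second anticipates the $e^2$ decomposition discussed next.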
For example, if one is interested in knowing the symmetry of an orbital product of the form $a_1^2a_2^2e^2$ (note: lower case letters are used to denote the symmetry of electronic orbitals, whereas capital letters are reserved to label the overall configuration’s symmetry) in $C_{3v}$ symmetry, the following procedure is used. For each of the six symmetry operations in the $C_{3v}$ point group, the product of the characters associated with each of the six spin orbitals (orbital multiplied by $\alpha$ or $\beta$ spin) is formed $\chi(S) = \prod_j\chi_j(S)= (\chi_{A_1}(S))^2 (\chi_{A_2}(S))^2 (\chi_E(S))^2.$ In the specific case considered here, $\chi (E) = 4$, $\chi (2C_3) = 1$, and $\chi (3\sigma_v) = 0$. Notice that the contributions of any doubly occupied non-degenerate orbitals (e.g., $a_1^2$ and $a_2^2$) to these direct product characters $\chi(S)$ are unity because for all operators $(\chi_k(S))^2 = 1$ for any one-dimensional irreducible representation. As a result, only the singly occupied or degenerate orbitals need to be considered when forming the characters of the reducible direct-product representation $\chi(S)$. For this example this means that the direct-product characters can be determined from the characters $\chi_E(S)$ of the two active (i.e., non-closed-shell) orbitals - the $e^2$ orbitals. That is, $\chi(S) = \chi_E(S)\cdot\chi_E(S)$. From the direct-product characters $\chi(S)$ belonging to a particular electronic configuration (e.g., $a_1^2a_2^2e^2$), one must still decompose this list of characters into a sum of irreducible characters. For the example at hand, the direct-product characters $\chi(S)$ decompose into one $A_1$, one $A_2$, and one $E$ representation. This means that the $e^2$ configuration contains $A_1$, $A_2$, and $E$ symmetry elements. Projection operators analogous to those introduced earlier for orbitals can be used to form symmetry-adapted orbital products from the individual basis orbital products of the form $a_1^2a_2^2e_x^me_y^{m'}$, where $m$ and $m'$ denote the occupation (1 or 0) of the two degenerate orbitals $e_x$ and $e_y$. In Appendix III of Electronic Spectra and Electronic Structure of Polyatomic Molecules, G. Herzberg, Van Nostrand Reinhold Co., New York, N.Y. (1966), the resolution of direct products among various representations within many point groups is tabulated. When dealing with indistinguishable particles such as electrons, it is also necessary to further project the resulting orbital products to make them antisymmetric (for Fermions) or symmetric (for Bosons) with respect to interchange of any pair of particles. This step reduces the set of $N$-electron states that can arise. For example, in the above $e^2$ configuration case, only $^3A_2$, $^1A_1$, and $^1E$ states arise; the $^3E$, $^3A_1$, and $^1A_2$ possibilities disappear when the antisymmetry projector is applied. In contrast, for an $e^1e'^1$ configuration, all states arise even after the wave function has been made antisymmetric. The steps involved in combining the point group symmetry with permutational antisymmetry are illustrated in Chapter 6 of this text as well as in Chapter 10 of my QMIC text. Direct Products in Selection Rules Two states $\psi_a$ and $\psi_b$ that are eigenfunctions of a Hamiltonian $H_o$ in the absence of some external perturbation (e.g., electromagnetic field or static electric field or potential due to surrounding ligands) can be "coupled" by the perturbation $V$ only if the symmetries of $V$ and of the two wave functions obey a so-called selection rule.
In particular, only if the coupling integral $\int \psi_a^*V\psi_bd\tau= V_{a,b}$ is non-vanishing will the two states be coupled by $V$. The role of symmetry in determining whether such integrals are non-zero can be demonstrated by noting that the integrand, considered as a whole, must contain a component that is invariant under all of the group operations (i.e., belongs to the totally symmetric representation of the group) if the integral is to not vanish. In terms of the projectors introduced above, we must have $\sum_S\chi_A(S)S[\psi_a^*V\psi_b]$ not vanish. Here the subscript $A$ denotes the totally symmetric representation of whatever point group applies. The symmetry of the product $\psi_a^*V\psi_b$ is, according to what was covered earlier, given by the direct product of the symmetries of $\psi_a^*$, of $V$, and of $\psi_b$. So, the conclusion is that the integral will vanish unless this triple direct product contains, when it is reduced to its irreducible components, a component of the totally symmetric representation. Another way to state the above result, and the way it is more often used in practice, is that the integral $\int \psi_a^* V \psi_b d\tau$ will vanish unless the symmetry of the direct product $V\psi_b$ matches the symmetry of $\psi_a^*$. Only when these symmetries match will the triple direct product contain a non-zero component of the totally symmetric representation. This is very much the same as what we saw earlier in this Chapter when we discussed how angular momentum coupling could limit which states contribute to the second-order perturbation theory energy. The angular momenta of $V$ and of $\psi_b$, when coupled, must have a component that matches the angular momentum of $\psi_a$. To see how this result is used, consider the integral that arises in formulating the interaction of electromagnetic radiation with a molecule within the electric-dipole approximation: $\int \psi_a^* \textbf{r} \psi_b d\tau$ Here, $\textbf{r}$ denotes the quantum mechanical dipole moment operator, with $e$ the unit charge: $\textbf{r} = e\sum_nZ_n\textbf{R}_n - e\sum_i \textbf{r}_i,$ where $Z_n$ and $\textbf{R}_n$ are the charge and position of the $n^{\rm th}$ nucleus and $\textbf{r}_i$ is the position of the $i^{\rm th}$ electron. Now, consider evaluating this integral for the singlet $n\rightarrow \pi^*$ transition in formaldehyde. Here, the closed-shell ground state is of $^1A_1$ symmetry and the singlet excited state, which involves promoting an electron from the non-bonding $b_2$ lone pair orbital on the oxygen atom into the anti-bonding $\pi^*$ $b_1$ orbital on the CO moiety, is of $^1A_2$ symmetry ($b_1 \times b_2 = a_2$). The direct product of the two wave function symmetries thus contains only $a_2$ symmetry. The three components ($x$, $y$, and $z$) of the dipole operator have, respectively, $b_1$, $b_2$, and $a_1$ symmetry. Thus, the triple direct products give rise to the following possibilities: $a_2 \times b_1 = b_2$ $a_2 \times b_2 = b_1$ $a_2 \times a_1 = a_2$ There is no component of $a_1$ symmetry in the triple direct product, so the integral vanishes. The alternative way of reaching this same conclusion is to notice that the direct product of the symmetries of the $\pi^*$ $b_1$ orbital and the $b_2$ lone pair orbital is $a_2$ ($b_1 \times b_2 = a_2$), which does not match the symmetry of any component of the dipole operator. Either route allows us to conclude that the $n\rightarrow \pi^*$ excitation in formaldehyde is electric dipole forbidden.
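The formaldehyde selection-rule analysis can be mechanized in a few lines. The sketch below (Python with NumPy) encodes the standard $C_{2v}$ character-table rows and forms the direct products by multiplying characters; the helper function and its name are our own scaffolding:

```python
import numpy as np

# C2v character-table rows over the operations (E, C2, sigma_v(xz), sigma_v'(yz))
chi = {
    "a1": np.array([1,  1,  1,  1]),
    "a2": np.array([1,  1, -1, -1]),
    "b1": np.array([1, -1,  1, -1]),
    "b2": np.array([1, -1, -1,  1]),
}

def direct_product(*labels):
    """Multiply characters of one-dimensional irreps and identify the result."""
    prod = np.prod([chi[lab] for lab in labels], axis=0)
    return next(name for name, c in chi.items() if np.array_equal(c, prod))

print(direct_product("b1", "b2"))  # a2: symmetry of the n -> pi* excited state

# Triple direct products with each dipole component (x ~ b1, y ~ b2, z ~ a1)
for comp, lab in (("x", "b1"), ("y", "b2"), ("z", "a1")):
    print(comp, "->", direct_product("a2", lab))
# None of the three products is a1, so the transition is dipole forbidden.
```

Because every irreducible representation of $C_{2v}$ is one-dimensional, multiplying character rows suffices; for groups with degenerate representations, one would instead reduce the product characters with the $n_\Gamma$ formula used earlier.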
Overview We have shown how to make a symmetry decomposition of a basis of atomic orbitals (or Cartesian displacements or orbital products) into irreducible representation components. This tool is very helpful when studying spectroscopy and when constructing the orbital correlation diagrams that form the basis of the Woodward-Hoffmann rules, which play useful roles in predicting whether chemical reactions will have energy barriers in excess of their thermodynamic requirements. We also learned how to form the direct-product symmetries that arise when considering configurations consisting of products of symmetry-adapted spin orbitals. Finally, we learned how the direct-product analysis allows one to determine whether or not integrals of products of wave functions with operators between them vanish. This tool is of utmost importance in determining selection rules in spectroscopy and for determining the effects of external perturbations on the states of the species under investigation.
Shown below are character tables for a few of the simplest point groups; each row lists an irreducible representation, its characters under the group's operations, and the translations/rotations and quadratic functions that transform according to it. $\begin{array}{cc} C_1 & E \ A & 1 \end{array}$ $\begin{array}{ccccc} C_s & E & \sigma_h & & \ A' & 1 & 1 & x,y,R_z & x^2,y^2,z^2,xy \ A'' & 1 & -1 & z,R_x,R_y & yz,xz \end{array}$ $\begin{array}{ccccc} C_i & E & i & & \ A_g & 1 & 1 & R_x,R_y,R_z & x^2,y^2,z^2,xy,xz,yz \ A_u & 1 & -1 & x,y,z & \end{array}$ $\begin{array}{ccccc} C_2 & E & C_2 & & \ A & 1 & 1 & z,R_z & x^2,y^2,z^2,xy \ B & 1 & -1 & x,y,R_x,R_y & yz,xz \end{array}$ $\begin{array}{ccccccc} D_2 & E & C_2(z) & C_2(y) & C_2(x) & & \ A & 1 & 1 & 1 & 1 & & x^2,y^2,z^2 \ B_1 & 1 & 1 & -1 & -1 & z,R_z & xy \ B_2 & 1 & -1 & 1 & -1 & y,R_y & xz \ B_3 & 1 & -1 & -1 & 1 & x,R_x & yz \end{array}$ 4.06: Time Dependent Perturbation Theory When dealing with the effects of external perturbations (e.g., applied fields, collisions with other species), one needs to have a way to estimate the probabilities and rates of transitions among states of the system of interest induced by these perturbations. Time-dependent perturbation theory (TDPT) offers a framework within which such estimates can be achieved. Derivation In deriving the working equations of TDPT, one begins with the time-dependent Schrödinger equation $i\hbar \frac{\partial \Psi}{\partial t}=[H_0+V(t)]\Psi \label{1}$ in which $H_0$ is the Hamiltonian for the system whose transitions are to be probed, and $V(t)$ is the perturbation caused by the external field or the collision. The wave function that solves this equation is expanded in an order-by-order manner as in conventional perturbation theory $\Psi=\psi^{(0)}(r)\exp\Big(-it\frac{E^{(0)}}{\hbar}\Big)+\psi^{(1)}+\cdots \label{2}$ Here $\psi^{(0)}$ is the eigenfunction of $H_0$ from which transitions to other eigenstates (denoted $\psi^{(0)}_f$) of $H_0$ are being considered. Because, in the absence of the external perturbation $V(t)$, the states of $H_0$ are known to vary with time as $\exp\Big(-it\frac{E^{(0)}}{\hbar}\Big)$, this component of the time dependence of the total wave function is included in the above expansion. Then, the first-order correction $\psi^{(1)}$ is expanded in terms of the complete set of states {$\psi^{(0)}_f$}, after which the expansion coefficients {$C^{(1)}_f(t)$} become the unknowns to be solved for $\psi^{(1)}=\sum_f \psi^{(0)}_f(r)\exp\bigg(-it\frac{E^{(0)}_f}{\hbar}\bigg)C^{(1)}_f(t). \label{3}$ It should be noted that this derivation treats the zeroth-order states {$\psi^{(0)}$ and $\psi^{(0)}_f$} as eigenfunctions of $H_0$. However, in most practical applications of TDPT, {$\psi^{(0)}$ and $\psi^{(0)}_f$} are not known exactly and, in fact, are usually approximated by using variational or perturbative methods (e.g., to treat differences between HF mean-field and true Coulombic interactions among electrons). So, the derivation of TDPT that we are pursuing assumes the {$\psi^{(0)}$ and $\psi^{(0)}_f$} are exact eigenfunctions. When the final TDPT working equations are thus obtained, one usually substitutes perturbative or variational approximations to {$\psi^{(0)}$ and $\psi^{(0)}_f$} into these equations. Substituting the order-by-order expansion into the Schrödinger equation gives, for the left- and right-hand sides, $i\hbar \frac{\partial \Psi}{\partial t} = E^{(0)}\psi^{(0)}(r)\exp\Big(-it\frac{E^{(0)}}{\hbar}\Big)\ + \sum_f \left[ E^{(0)}_f\psi^{(0)}_f(r)\exp\bigg(-it\frac{E^{(0)}_f}{\hbar}\bigg)C^{(1)}_f(t) +i\hbar\psi^{(0)}_f(r)\exp\bigg(-it\frac{E^{(0)}_f}{\hbar}\bigg)\frac{dC^{(1)}_f(t)}{dt} \right] \label{4a}$ and $[H_0+V(t)]\Psi=E^{(0)}\psi^{(0)}(r)\exp\Big(-it\frac{E^{(0)}}{\hbar}\Big)\ +\sum_f E^{(0)}_f\psi^{(0)}_f(r)\exp\bigg(-it\frac{E^{(0)}_f}{\hbar}\bigg)C^{(1)}_f(t) +V(t)\psi^{(0)}(r)\exp\Big(-it\frac{E^{(0)}}{\hbar}\Big), \label{4b}$ respectively, through first-order.
Multiplying each of these equations on the left by the complex conjugate of a particular $\psi^{(0)}_f$ and integrating over the variables that $H_0$ depends on produces the following equation for the unknown first-order coefficients $i\hbar\frac{dC^{(1)}_f(t)}{dt}=\langle \psi^{(0)}_f|V(t)|\psi^{(0)} \rangle \exp\bigg(-it\frac{(E^{(0)}-E^{(0)}_f)}{\hbar}\bigg). \label{5}$ The states $\psi^{(0)}$ and $\psi^{(0)}_f$ can be different electronic states, vibrational states, or rotational states. In Chapter 15 of my book Quantum Mechanics in Chemistry referred to in Chapter 1, I treat each of these types of transitions in detail. In the present discussion, I will limit myself to the general picture of TDPT, rather than focusing on any of these particular forms of spectroscopic transitions. To proceed further, one needs to say something about how the perturbation $V(t)$ depends on time. In the most common application of TDPT, the perturbation is assumed to consist of a term that depends on spatial variables (denoted $v(r)$) multiplied by a time-dependent factor of sinusoidal character. An example of such a perturbation is provided by the electric dipole potential $V(t)=\textbf{E}\cdot [ e\sum_n Z_n \textbf{R}_n - e \sum_i \textbf{r}_i ]\cos(\omega t)$ characterizing photons of frequency $\omega$ interacting with the nuclei and electrons of a molecule. Here, $\textbf{E}\cdot [ e\sum_n Z_n \textbf{R}_n - e \sum_i \textbf{r}_i ]$ is the spatial part $v(\textbf{r})$ and $\cos(\omega t)$ is the time-dependence. To allow for the possibility that photons over a range of frequencies may impinge on the molecules, we can proceed with the derivation for photons of a given frequency $\omega$ and, after obtaining our final result, average over a distribution of frequencies characterized by a function $f(\omega)$ giving the number of photons with frequencies between $\omega$ and $\omega+d\omega$. For perturbations that do not vary in a sinusoidal manner (e.g., a perturbation arising from a collision with another molecule), the derivation follows a different path at this point (application 3 below). Because spectroscopic time-dependent perturbations are extremely common in chemistry, we will focus much of our attention on this class of perturbations in this Chapter. To proceed in deriving the working equations of TDPT, the above expression for $V(t)$ is inserted into the differential equation for the expansion coefficients and the equation is integrated from an initial time $t_i$ to a final time $t_f$. These times describe when the external perturbation is first turned on and when it is turned off, respectively. For example, a laser whose photon intensity profile is described by $f(\omega)$ might be pulsed on from $t_i$ to $t_f$, and one wants to know what fraction of the molecules initially in $\psi^{(0)}$ have undergone transitions to each of the $\psi^{(0)}_f$. Alternatively, the molecules may be flowing in a stream that passes through a laser light source that is continually on, entering the laser beam at $t_i$ and exiting from the laser beam at $t_f$. In either case, the molecules would be exposed to the photons from $t_i$ until $t_f$.
The result of integrating the differential equation is $\begin{split}C^{(1)}_f(t)&=\frac{1}{2i\hbar}\int_{t_i}^{t_f}\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r) \rangle [\exp(i\omega t)+\exp(-i\omega t)]\exp(i\omega_{f,0} t)\,dt\ &=\frac{1}{2i\hbar}\int_{t_i}^{t_f}\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r) \rangle [\exp(i(\omega+\omega_{f,0}) t)+\exp(-i(\omega-\omega_{f,0}) t)]\,dt\ &=\frac{1}{2i\hbar}\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r) \rangle\ &\times\left[\frac{\exp(i(\omega+\omega_{f,0}) t_f)-\exp(i(\omega+\omega_{f,0}) t_i)}{i(\omega+\omega_{f,0})} -\frac{\exp(-i(\omega-\omega_{f,0}) t_f)-\exp(-i(\omega-\omega_{f,0}) t_i)}{i(\omega-\omega_{f,0})} \right]\end{split} \label{6}$ where the transition frequencies $\omega_{f,0}$ are defined by $\omega_{f,0}=\frac{(E^{(0)}_f-E^{(0)})}{\hbar} \label{7}$ and $t$ is the time interval $t_f - t_i$. Now, if the frequency $\omega$ is close to one of the transition frequencies, the term with $(\omega-\omega_{f,0})$ in the denominator will be larger than the term containing $(\omega+\omega_{f,0})$. Of course, if $\psi^{(0)}$ has a higher energy than $\psi^{(0)}_f$, so one is studying stimulated emission spectroscopy, $\omega_{f,0}$ will be negative, in which case the term containing $(\omega+\omega_{f,0})$ will dominate. In on-resonance absorption spectroscopy conditions, the above expression for the first-order coefficients reduces to $C^{(1)}_f(t)=\frac{-1}{2i\hbar}\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle \frac{\exp(-i(\omega-\omega_{f,0}) t_f)-\exp(-i(\omega-\omega_{f,0}) t_i)}{i(\omega-\omega_{f,0})}. \label{8}$ The modulus squared of this quantity gives a measure of the probability of observing the system in state $\psi^{(0)}_f$ after being subjected to the photons of frequency $\omega$ for a length of time $t$. Using $1-\cos x = 2\sin^2(x/2)$, $|C^{(1)}_f(t)|^2=\frac{|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{4\hbar^2} \frac{2[1-\cos((\omega-\omega_{f,0})t)]}{(\omega-\omega_{f,0})^2} =\frac{|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{\hbar^2} \frac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2} . \label{9}$ The function $\dfrac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2}$ is plotted in Figure 4.4 for a given value of $t$ as a function of $\omega$. It is sharply peaked around $\omega = \omega_{f,0}$, decays rapidly as $|(\omega - \omega_{f,0})|$ increases, and displays recurrences of smaller and smaller intensity when $(\omega - \omega_{f,0})t$ passes through multiples of $\pi$. At larger values of $t$, the main peak in the plot of this function becomes narrower and higher such that, in the $t \rightarrow \infty$ limit, the area under this plot approaches $t\pi/2$: ${\rm Area}=\int\dfrac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2}d\omega=t\frac{\pi}{2}. \label{10}$ The importance of this observation about the area under the plot shown in Figure 4.4 can be appreciated by returning to our result $|C^{(1)}_f(t)|^2= \frac{|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{\hbar^2} \frac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2} \label{11}$ and introducing the fact that the photon source used to induce the transitions being studied most likely is not perfectly monochromatic.
If it is characterized, as suggested earlier, by a distribution of frequencies $f(\omega)$ that is broader than the width of the large central peak in Figure 4.4 (n.b., this will be true if the time duration $t$ is long enough), then when we average over $f(\omega)$ to obtain a result that directly relates to this kind of experiment, we obtain $\int_{-\infty}^\infty f(\omega)|C^{(1)}_f(t)|^2d\omega =\frac{|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{\hbar^2} \int_{-\infty}^\infty f(\omega)\frac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2}d\omega =\frac{\pi|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2t}{2\hbar^2}f(\omega_{f,0})=\langle |C^{(1)}_f(t)|^2 \rangle \label{12}$ We are allowed to write the integral over $\omega$ as ranging from $-\infty$ to $+\infty$ because the function shown in Figure 4.4 is so sharply peaked around $\omega_{f,0}$ that extending the range of integration makes no difference. We are allowed to factor the $f(\omega)$ out of the integral as $f(\omega_{f,0})$ by assuming the light source’s distribution function $f(\omega)$ is very smoothly varying (i.e., not changing much) in the narrow range of frequencies around $\omega_{f,0}$ where the function in Figure 4.4 is sharply peaked. The result of this derivation of TDPT is the above expression for the average probability of observing a transition from state $\psi^{(0)}$ to state $\psi^{(0)}_f$. This probability is seen to grow linearly with the time duration over which the system is exposed to the light source. Because we carried out this derivation within first-order perturbation theory, we should trust this result only under conditions where the effects of the perturbation are small. In the context of the example considered here, this means only for short times. That is, we should view $\frac{\pi|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2t}{2\hbar^2}f(\omega_{f,0})=\langle |C^{(1)}_f(t)|^2 \rangle \label{13}$ as expressing the short-time estimate of the probability of a transition from $\psi^{(0)}$ to $\psi^{(0)}_f$ and ${\rm Rate}=\frac{\pi|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{2\hbar^2}f(\omega_{f,0})$ (obtained as $\dfrac{d\langle |C^{(1)}_f(t)|^2 \rangle}{dt}$) as expressing the initial rate of such transitions within the first-order TDPT approximation. It should be noted that the rate expression given above will not be valid if the time duration $t$ of the perturbation does not obey $\omega_{f,0} t \gg \pi$; only when this condition is met can the function shown in Figure 4.4 be integrated to generate a probability prediction that grows linearly with time. So, one has to be careful when using pulsed lasers of very short duration to not employ the simplified rate expression given above (e.g., 1 eV corresponds to a frequency of ca. $2.4 \times 10^{14}\ {\rm s}^{-1}$, so to study an electronic transition of this energy, one needs to use a light source of duration significantly longer than $10^{-14}$ s to make use of the simplified result). The working equations of TDPT, given above, allow one to estimate (because this is a first-order theory) the rates of transitions from one quantum state to another induced by a perturbation whose spatial dependence is characterized by $v(r)$ and whose time dependence is sinusoidal. The same kind of coupling matrix elements $\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle$ as we experienced in time-independent PT govern the selection rules and intensities for these transitions, so there is no need to repeat how symmetry can be used to analyze these integrals.
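As a quick numerical illustration (not part of the original derivation), one can confirm that the area under the line-shape function $\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)/(\omega-\omega_{f,0})^2$ indeed approaches $t\pi/2$ as $t$ grows. A minimal sketch in Python:

```python
import numpy as np

# Trapezoidal-rule check that the area of sin^2(x t / 2) / x^2 tends to t*pi/2,
# where x = omega - omega_{f,0}.
for t in (5.0, 20.0, 80.0):
    x = np.linspace(-60.0, 60.0, 2_000_001)
    y = np.empty_like(x)
    nz = x != 0.0
    y[nz] = np.sin(0.5 * t * x[nz]) ** 2 / x[nz] ** 2
    y[~nz] = (0.5 * t) ** 2                # limiting value at resonance
    area = np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0
    print(f"t = {t:5.1f}:  area = {area:10.4f},  t*pi/2 = {t * np.pi / 2:10.4f}")
```

The small residual discrepancy comes from truncating the integration range; it shrinks as the range is widened, consistent with the peak becoming ever sharper at large $t$.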
Before closing this treatment of TDPT, it is useful to address a few issues that were circumvented in the derivation presented above. Application 1: Coupling to a Continuum In some cases, one is interested in transitions from a particular initial state $\psi^{(0)}(r)$ into a manifold of states that exist in a continuum having energies between $E^{(0)}_f$ and $E^{(0)}_f+dE^{(0)}_f$. This occurs, for example, when treating photoionization of a neutral or photodetachment of an anion; here the ejected electron exists in a continuum wave function whose density of states $\rho(E^{(0)}_f)$ is given by the formulas discussed in Chapter 2. In such cases, the expression given above for the rate is modified by summing over all final states having energies within $E^{(0)}_f$ and $E^{(0)}_f+dE^{(0)}_f$. Returning to the earlier expression $\int\rho(E^{(0)}_f)\frac{|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{\hbar^2} \int_{-\infty}^\infty f(\omega)\frac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2}d\omega\, dE^{(0)}_f \label{14}$ using $dE^{(0)}_f=\hbar\, d\omega_{f,0}$, and assuming the matrix elements $\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle$ do not vary significantly within the narrow range between $E^{(0)}_f$ and $E^{(0)}_f+dE^{(0)}_f$, one arrives at a rate expression of ${\rm Rate}=\frac{\pi|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}(r)\rangle|^2}{2\hbar^2}f(\omega_{f,0})\rho(E^{(0)}_f) \label{15}$ which is much like we obtained earlier but now contains the density of states $\rho(E^{(0)}_f)$. In some experiments, one may not have only a single state $\psi^{(0)}(r)$ that can absorb light of a given frequency $\omega$; in such a situation, attenuation of the light source at this frequency can occur through absorptions from many initial states into all possible final states whose energy differs from that of the initial state by $\hbar\omega$. In this case, the correct expression for the total rate of absorption of photons of energy $\hbar\omega$ is obtained by averaging the above result over the probabilities $P_i$ of the system being in various initial states (which we label $\psi^{(0)}_i$): ${\rm Rate}=\sum_i P_i \frac{\pi|\langle \psi^{(0)}_f|v(r)|\psi^{(0)}_i(r)\rangle|^2}{2\hbar^2}f(\omega_{f,i})\rho(E^{(0)}_f)\delta(\omega-\omega_{f,i}). \label{16}$ Here the $\delta(\omega-\omega_{f,i})$ function guarantees that only states $\psi^{(0)}_i$ and $\psi^{(0)}_f$ whose energies differ by $\hbar\omega$ are permitted to enter the sum. The nature of the initial-state probability $P_i$ depends on what kind of experiment is being carried out. $P_i$ might be a Boltzmann distribution if the initial states are in thermal equilibrium, for example. Application 2: Experimental Oscillations In Figure 4.4 the function $\dfrac{\sin^2(\frac{1}{2}(\omega-\omega_{f,0})t)}{(\omega-\omega_{f,0})^2}$ is plotted for one value of $t$ as a function of $\omega$. There also appear in this figure dots that represent experimental data. These data were obtained by allowing a stream of $HCN$ molecules to flow through a laser beam of width $L$ with the laser frequency tuned to $\omega$. From the flow velocity $v$ of the $HCN$ stream and the laser beam width $L$, one can determine the duration over which the molecules were exposed to the light source, $t = \dfrac{L}{v}$. After the molecules exited the laser beam, they were probed to determine whether they were in an excited state. This experiment was repeated for various values of the frequency $\omega$. The population of excited states was then plotted as a function of $\omega$ to obtain the data plotted in Figure 4.4.
This experiment is described in the text Molecules and Radiation, J. I. Steinfeld, MIT Press, Cambridge, Mass. (1981). This kind of experiment provided direct proof of the oscillatory frequency dependence observed in the population of excited states as predicted in our derivation of TDPT. Application 3: Collisionally induced Transitions To give an example of how one proceeds in TDPT when the perturbation is not oscillatory in time, let us consider an atom located at the origin of our coordinate system that experiences a collision with an ion of charge $\chi$ whose trajectory is described in Figure 4.5. As an approximation, we assume 1. that the ion moves in a straight line: $X = vt, Y = D, Z = 0$, characterized by an impact parameter $D$ and a velocity $v$ (this would be appropriate if the ion were moving so fast that it would not be deflected by interactions with the atom), 2. that the perturbation caused by the ion on the electrons of the atom at the origin can be represented by $-\sum_{i=1}^N\frac{\chi}{|\textbf{r}_i-\textbf{R}|} \label{17}$ where $\textbf{r}_i$ is the position of the $i^{\rm th}$ electron in the atom and $\textbf{R} = (vt, D, 0)$ is the position of the ion. The time dependence of the perturbation arises from the motion of the ion along the $X$-axis. Writing the distance $|\textbf{r}_i-\textbf{R}|$ as $|\textbf{r}_i-\textbf{R}|=\sqrt{(x_i-vt)^2+(y_i-D)^2+z_i^2} \label{18}$ and expanding in inverse powers of $\sqrt{D^2+(vt)^2}$, we can express the ion-atom interaction potential as $-\sum_{i=1}^N\frac{\chi}{|\textbf{r}_i-\textbf{R}|}=\sum_{i=1}^N\left[\frac{-\chi}{\sqrt{D^2+(vt)^2}}+\frac{-\chi(vtx_i+Dy_i-\frac{1}{2}r_i^2)}{(D^2+(vt)^2)^{3/2}}+\cdots\right]. \label{19}$ The first term contains no factors dependent on the atom’s electronic coordinates, so it plays no role in causing electronic transitions. In the second term, the factor $\frac{1}{2}r_i^2$ can be neglected compared to the $vtx_i+Dy_i$ terms because the ion is assumed to be somewhat distant from the atom’s valence electrons. To derive an equation for the probability of the atom undergoing a transition from $\psi^{(0)}(r)$ to $\psi^{(0)}_f(r)$, one returns to the TDPT expression $i\hbar\frac{dC^{(1)}_f(t)}{dt}=\langle \psi^{(0)}_f|V(t)|\psi^{(0)}(r)\rangle\exp\bigg(-it\frac{(E^{(0)}-E^{(0)}_f)}{\hbar}\bigg) \label{20}$ and substitutes the above expression for the perturbation to obtain $\frac{dC^{(1)}_f(t)}{dt}=\frac{1}{i\hbar} \langle \psi^{(0)}_f|\sum_{i=1}^N \frac{-\chi(vtx_i+Dy_i)}{(D^2+(vt)^2)^{3/2}}|\psi^{(0)}(r)\rangle \exp\bigg(-it\frac{(E^{(0)}-E^{(0)}_f)}{\hbar}\bigg). \label{21}$ This is the equation that must be solved to evaluate $C^{(1)}_f$ by integrating from $t = -\infty$ to $t = +\infty$ (representing the full collision with the ion starting far to the left on the $X$-axis and proceeding far to the right). There are two limiting cases in which the solution is straightforward. First, if the time duration of the collision (i.e., the time over which the ion is close to the atom) $\dfrac{D}{v}$ is long compared to $1/\omega_{f,0}$, where $\omega_{f,0}=\frac{(E^{(0)}_f-E^{(0)})}{\hbar}, \label{22}$ then the integrand will oscillate repeatedly during the time $\dfrac{D}{v}$, as a result of which the integral $C^{(1)}_f=\int_{-\infty}^\infty \frac{dC^{(1)}_f(t)}{dt}dt \label{23}$ will be vanishingly small. So, in this so-called adiabatic case (i.e., with the ion moving slowly relative to the oscillation frequency $\omega_{f,0}$), electronic transitions should not be expected.
In the other limit, $\omega_{f,0}\dfrac{D}{v} \ll 1$, the factor $\exp\bigg(-it\dfrac{(E^{(0)}-E^{(0)}_f)}{\hbar}\bigg)$ will remain approximately equal to unity, so the integration needed reduces to $C^{(1)}_f=\frac{1}{i\hbar}\int_{-\infty}^\infty\langle\psi^{(0)}_f|\sum_{i=1}^N\frac{-\chi(vtx_i+Dy_i)}{(D^2+(vt)^2)^{3/2}}|\psi^{(0)}(r)\rangle dt. \label{24}$ The integral involving $vtx_i$ vanishes because $vt$ is odd and the remainder of the integrand is an even function of $t$. The integral involving $Dy_i$ can be performed by trigonometric substitution ($vt = D \tan\theta$, so the denominator reduces to $D^3 \Big(1+\Big(\dfrac{\sin\theta}{\cos\theta}\Big)^2\Big)^{3/2} = \dfrac{D^3}{(\cos\theta)^3}$) and gives $C^{(1)}_f=\frac{-2\chi}{i\hbar v D}\langle\psi^{(0)}_f|\sum_{i=1}^N y_i|\psi^{(0)}(r)\rangle. \label{25}$ This result suggests that the probability of a transition $|C^{(1)}_f|^2=\frac{4\chi^2}{\hbar^2 v^2 D^2}|\langle\psi^{(0)}_f|\sum_{i=1}^N y_i|\psi^{(0)}(r)\rangle|^2 \label{26}$ should vary as the square of the ion’s charge and inversely with the squares of the collision speed and the impact parameter. Of course, this result cannot be trusted if the speed $v$ is too low because then the condition $\omega_{f,0}\dfrac{D}{v} \ll 1$ will not hold. This example shows how one must re-derive the equations of TDPT when dealing with perturbations whose time dependence is not sinusoidal. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry
In this Chapter, many of the basic concepts and tools of theoretical chemistry are discussed only at an introductory level and without providing much of the background needed to fully comprehend them. Most of these topics are covered again in considerably more detail in Chapters 6-8, which focus on the three primary sub-disciplines of the field. The purpose of the present Chapter is to give you an overview of the field that you will learn the details of in these later Chapters. It probably will mainly be of use to undergraduate students using this text to learn about theoretical chemistry; most graduate students and more senior scientists should be able to skip this Chapter or briefly glance through it. In this chapter, you should have learned about how theory and experiment address chemical structure, bonding, energetics, and change. You were introduced to several experimental probes that involve spectroscopic methods, and the three main sub-disciplines of theory were explained briefly to you. • 5.1: What is Theoretical Chemistry About? The science of chemistry deals with molecules including the radicals, cations, and anions they produce when fragmented or ionized. Chemists study isolated molecules (e.g., as occur in the atmosphere and in astronomical environments), solutions of molecules or ions dissolved in solvents, as well as solid, liquid, and plastic materials comprised of molecules. All such forms of molecular matter are what chemistry is about. • 5.2: Molecular Structure- Theory and Experiment Experimental data can only be interpreted, and thus used to extract molecular properties, through the application of theory. So, theory does not replace experiment, but serves both as a complementary component of chemical research (via simulation of molecular properties) and as the means by which we connect laboratory data to molecular properties. • 5.3: Chemical Change 05: An Overview of Theoretical Chemistry The science of chemistry deals with molecules including the radicals, cations, and anions they produce when fragmented or ionized. Chemists study isolated molecules (e.g., as occur in the atmosphere and in astronomical environments), solutions of molecules or ions dissolved in solvents, as well as solid, liquid, and plastic materials comprised of molecules. All such forms of molecular matter are what chemistry is about. Chemical science includes how to make molecules (synthesis), how to detect and quantitate them (analysis), how to probe their properties and the changes they undergo as reactions occur (physical). Molecular Structure- bonding, shapes, electronic structures One of the more fundamental issues chemistry addresses is molecular structure, which means how the molecule’s atoms are linked together by bonds and what the inter-atomic distances and angles are. Another component of structure analysis relates to what the electrons are doing in the molecule; that is, how the molecule’s orbitals are occupied and in which electronic state the molecule exists. For example, in the arginine molecule shown in Figure 5.1, a $HOOC-$ carboxylic acid group (its oxygen atoms are shown in red) is linked to an adjacent carbon atom (yellow), which itself is bonded to an $-NH_2$ amino group (whose nitrogen atom is blue). Also connected to the $\alpha$-carbon atom are a chain of three methylene $-CH_2-$ groups, a $-NH-$ group, then a carbon atom attached both by a double bond to an imine $-NH$ group and to an amino $-NH_2$ group.
The connectivity among the atoms in arginine is dictated by the well known valence preferences displayed by H, C, O, and N atoms. The internal bond angles are, to a large extent, also determined by the valences of the constituent atoms (i.e., the $sp^3$ or $sp^2$ nature of the bonding orbitals). However, there are other interactions among the several functional groups in arginine that also contribute to its ultimate structure. In particular, the hydrogen bond linking the $\alpha$-amino group’s nitrogen atom to the $-NH-$ group’s hydrogen atom causes this molecule to fold into a less extended structure than it otherwise might. What does theory have to do with issues of molecular structure and why is knowledge of structure so important? It is important because the structure of a molecule has a very important role in determining the kinds of reactions that molecule will undergo, what kind of radiation it will absorb and emit, and to what active sites in neighboring molecules or nearby materials it will bind. A molecule’s shape (e.g., rod-like, flat, globular, etc.) is one of the first things a chemist thinks of when trying to predict where, at another molecule or on a surface or at a cell membrane, the molecule will fit and be able to bind and perhaps react. The presence of lone pairs of electrons (which act as Lewis base sites), of $\pi$ orbitals (which can act as electron donor and electron acceptor sites), and of highly polar or ionic groups guides the chemist further in determining where on the molecule’s framework various reactant species (e.g., electrophilic or nucleophilic or radical) will be most strongly attracted. Clearly, molecular structure is a crucial aspect of the chemists’ toolbox. How does theory relate to molecular structure? As we discussed in Part 1 of this text, the Born-Oppenheimer approximation leads us to use quantum mechanics to predict the energy $E$ of a molecule for any positions ({$R_a$}) of its nuclei, given the number of electrons $N_e$ in the molecule (or ion). This means, for example, that the energy of the arginine molecule in its lowest electronic state (i.e., with the electrons occupying the lowest energy orbitals) can be determined for any location of the nuclei if the Schrödinger equation governing the movements of the electrons can be solved. If you have not had a good class on how quantum mechanics is used within chemistry, I urge you to take the time needed to master Part 1. In those pages, I introduce the central concepts of quantum mechanics and I show how they apply to several very important cases including 1. electrons moving in 1, 2, and 3 dimensions and how these models relate to electronic structures of polyenes and to electronic bands in solids, 2. the classical and quantum probability densities and how they differ, 3. time propagation of quantum wave functions, 4. the Hückel or tight-binding model of chemical bonding among atomic orbitals, 5. harmonic vibrations, 6. molecular rotations, 7. electron tunneling, 8. atomic orbitals’ angular and radial characteristics, 9. and point group symmetry and how it is used to label orbitals and vibrations. You need to know this material if you wish to understand most of what this text offers, so I urge you to read Part 1 if your education to date has not yet adequately exposed you to it. Let us now return to the discussion of how theory deals with molecular structure. We assume that we know the energy $E(\{R_a\})$ at various locations {$R_a$} of the nuclei.
In some cases, we denote this energy $V(R_a)$ and in others we use $E(R_a)$ because, within the Born-Oppenheimer approximation, the electronic energy $E$ serves as the potential $V$ for the molecule’s vibrational motions. As discussed in Part 1, one can then perform a search for the lowest energy structure (e.g., by finding where the gradient vector vanishes, $\dfrac{∂E}{∂R_a}=0$, and where the second derivative or Hessian matrix $\bigg(\dfrac{∂^2E}{∂R_a∂R_b}\bigg)$ has no negative eigenvalues). By finding such a local minimum in the energy landscape, theory is able to determine a stable structure of such a molecule. The word stable is used to describe these structures not because they are lower in energy than all other possible arrangements of the atoms but because the curvatures, as given in terms of eigenvalues of the Hessian matrix $\bigg(\dfrac{∂^2E}{∂R_a∂R_b}\bigg)$, are positive at this particular geometry. The procedures by which minima on the energy landscape are found may involve simply testing whether the energy decreases or increases as each geometrical coordinate is varied by a small amount. Alternatively, if the gradients $\dfrac{∂E}{∂R_a}$ are known at a particular geometry, one can perform searches directed downhill along the negative of the gradient itself. By taking a small step along such a direction, one can move to a new geometry that is lower in energy. If not only the gradients $\dfrac{∂E}{∂R_a}$ but also the second derivatives $\bigg(\dfrac{∂^2E}{∂R_a∂R_b}\bigg)$ are known at some geometry, one can make a more intelligent step toward a geometry of lower energy. For additional details about how such geometry optimization searches are performed within modern computational chemistry software, see Chapter 3 where this subject was treated in greater detail. It often turns out that a molecule has more than one stable structure (isomer) for a given electronic state. Moreover, the geometries that pertain to stable structures of excited electronic states are different from those obtained for the ground state (because the orbital occupancy, and thus the nature of the bonding, is different). Again using arginine as an example, its ground electronic state also has the structure shown in Figure 5.2 as a stable isomer. Notice that this isomer and that shown earlier have the atoms linked together in identical manners, but in the second structure the $\alpha$-amino group is involved in two hydrogen bonds while it is involved in only one in the former. In principle, the relative energies of these two geometrical isomers can be determined by solving the electronic Schrödinger equation while placing the constituent nuclei in the locations described in the two figures. If the arginine molecule is excited to another electronic state, for example, by promoting a non-bonding electron on its C=O oxygen atom into the neighboring C-O $\pi^*$ orbital, its stable structures will not be the same as in the ground electronic state. In particular, the corresponding C-O distance will be longer than in the ground state, but other internal geometrical parameters may also be modified (albeit probably less so than the C-O distance). Moreover, the chemical reactivity of this excited state of arginine will be different than that of the ground state because the two states have different orbitals available to react with attacking reagents.
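To illustrate the kind of search just described, the sketch below performs a steepest-descent walk on a hypothetical two-dimensional model potential (standing in for $E(\{R_a\})$; it is not a real molecular surface) and then checks, via a finite-difference Hessian, that the stationary point found has all positive eigenvalues and so is a stable local minimum:

```python
import numpy as np

def energy(r):
    """A made-up double-minimum model surface playing the role of E({R_a})."""
    x, y = r
    return (x**2 - 1.0)**2 + 2.0 * (y - 0.1 * x**2)**2

def gradient(r, h=1e-6):
    """Central finite-difference gradient dE/dR_a."""
    g = np.zeros_like(r)
    for a in range(r.size):
        dr = np.zeros_like(r)
        dr[a] = h
        g[a] = (energy(r + dr) - energy(r - dr)) / (2.0 * h)
    return g

def hessian(r, h=1e-4):
    """Finite-difference Hessian d2E/dR_a dR_b, symmetrized."""
    n = r.size
    H = np.zeros((n, n))
    for a in range(n):
        dr = np.zeros(n)
        dr[a] = h
        H[:, a] = (gradient(r + dr) - gradient(r - dr)) / (2.0 * h)
    return 0.5 * (H + H.T)

# Steepest descent: repeatedly step downhill along the negative gradient
r = np.array([0.8, 0.5])
for _ in range(2000):
    g = gradient(r)
    if np.linalg.norm(g) < 1e-7:
        break
    r -= 0.05 * g

print("stationary point:", r)
print("Hessian eigenvalues:", np.linalg.eigvalsh(hessian(r)))  # all positive here
```

For a real molecule, the same logic applies with $E$ supplied by an electronic-structure calculation; as described next, mass-weighting this Hessian before diagonalizing yields the harmonic vibrational frequencies.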
In summary, by solving the electronic Schrödinger equation at a variety of geometries and searching for geometries where the gradient vanishes and the Hessian matrix has all positive eigenvalues, one can find stable structures of molecules (and ions). The Schrödinger equation is a necessary aspect of this process because the movement of the electrons is governed by this equation rather than by Newtonian classical equations. The information gained after carrying out such a geometry optimization process includes (1) all of the inter-atomic distances and internal angles needed to specify the equilibrium geometry {$R_{a,\rm eq}$} and (2) the total electronic energy $E$ at this particular geometry. It is also possible to extract much more information from these calculations. For example, by multiplying elements of the Hessian matrix $\bigg(\dfrac{\partial^2 E}{\partial R_a\partial R_b}\bigg)$ by the inverse square roots of the atomic masses of the atoms labeled $a$ and $b$, one forms the mass-weighted Hessian $\bigg(\dfrac{1}{\sqrt{m_am_b}}\dfrac{\partial^2 E}{\partial R_a\partial R_b}\bigg)$ whose non-zero eigenvalues give the harmonic vibrational frequencies {$\omega_k$} of the molecule. The eigenvectors {$R_{k,a}$} of the mass-weighted Hessian matrix give the relative displacements in coordinates $R_{k,a}$ that accompany vibration in the $k^{\rm th}$ normal mode (i.e., they describe the normal mode motions); a brief numerical illustration of this analysis is given below. Details about how these harmonic vibrational frequencies and normal modes are obtained were discussed earlier in Chapter 3.

Molecular Change - Reactions and Interactions

1. Changes in bonding

Chemistry also deals with transformations of matter including changes that occur when molecules react, are excited (electronically, vibrationally, or rotationally), or undergo geometrical rearrangements. Again, theory forms the cornerstone that allows experimental probes of chemical change to be connected to the molecular level and that allows simulations of such changes. Molecular excitation may or may not involve altering the electronic structure of the molecule; vibrational and rotational excitation do not, but electronic excitation, ionization, and electron attachment do. As illustrated in Figure 5.3, where a bi-molecular reaction is displayed, chemical reactions involve breaking some bonds and forming others, and thus involve rearrangement of the electrons among various molecular orbitals. In this example, in part (a) the green atom collides with the brown diatomic molecule and forms the bound triatomic molecule (b). Alternatively, in (c) and (d), a pink atom collides with a green diatomic to break the bond between the two green atoms and form a new bond between the pink and green atoms. Both such reactions are termed bi-molecular because the basic step in which the reaction takes place requires a collision between two independent species (i.e., the atom and the diatomic). A simple example of a unimolecular chemical reaction is offered by the arginine molecule considered above. In the first structure shown for arginine, the carboxylic acid group retains its $HOOC-$ bonding. However, in the zwitterion structure of this same molecule, shown in Figure 5.4, the $HOOC-$ group has been deprotonated to produce a carboxylate anion group $-COO^-$, with the $H^+$ ion now bonded to the terminal imine group, thus converting it to an amino group and placing the net positive charge on the adjacent carbon atom.
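Before turning to how these two tautomers interconvert, here is the promised numerical illustration of the mass-weighted Hessian analysis. The collinear triatomic model below is entirely hypothetical: the force constant and masses are arbitrary round numbers of my choosing, used only to show the bookkeeping.

```python
import numpy as np

# Hypothetical collinear A-B-A molecule, motion along the axis only.
# Two harmonic bonds of force constant k couple the three atoms.
k = 500.0                          # force constant (arbitrary units)
m = np.array([16.0, 12.0, 16.0])   # atomic masses (arbitrary units)

# Hessian d2E/dR_a dR_b for E = k/2 (x2 - x1)^2 + k/2 (x3 - x2)^2
H = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Mass-weight: H_ab / sqrt(m_a m_b)
Hmw = H / np.sqrt(np.outer(m, m))

# Eigenvalues are squared angular frequencies; eigenvectors give the
# mass-weighted displacements of each normal mode.
w2, modes = np.linalg.eigh(Hmw)
freqs = np.sqrt(np.clip(w2, 0.0, None))

for wk, vec in zip(freqs, modes.T):
    print(f"omega = {wk:8.3f}   displacements = {vec}")
```

One eigenvalue comes out (numerically) zero, corresponding to overall translation; in three dimensions there are five or six such zero eigenvalues, which is why only the non-zero eigenvalues of the mass-weighted Hessian are identified with the harmonic frequencies {$\omega_k$}.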
The unimolecular tautomerization reaction in which the two forms of arginine are interconverted involves breaking an $O-H$ bond, forming a $N-H$ bond, and changing a carbon-nitrogen double bond into a carbon-nitrogen single bond. In such a process, the electronic structure is significantly altered, and, as a result, the two isomers can display very different chemical reactivities toward other reagents. Notice that, once again, the ultimate structure of the zwitterion tautomer of arginine is determined by the valence preferences of its constituent atoms as well as by hydrogen bonds formed among various functional groups (the carboxylate group, one amino group, and one $-NH-$ group).

Energy Conservation

In any chemical reaction, as in all physical processes (other than nuclear events, in which mass and energy can be interconverted), total energy must be conserved. Reactions in which the summation of the strengths of all the chemical bonds in the reactants exceeds the sum of the bond strengths in the products are termed endothermic. For such reactions, external energy must be provided to the reacting molecules to allow the reaction to occur. Exothermic reactions are those for which the bonds in the products exceed in strength those of the reactants. For exothermic reactions, no net energy input is needed to allow the reaction to take place. Instead, excess energy is generated and liberated when such reactions take place. In the former (endothermic) case, the energy needed by the reaction usually comes from the kinetic energy of the reacting molecules or of the molecules that surround them. That is, thermal energy from the environment provides the needed energy. Analogously, for exothermic reactions, the excess energy produced as the reaction proceeds is usually deposited into the kinetic energy of the product molecules and into that of surrounding molecules. For reactions that are very endothermic, it may be virtually impossible for thermal excitation to provide sufficient energy to effect reaction. In such cases, it may be possible to use a light source (i.e., photons whose energy can excite the reactant molecules) to induce reaction. When the light source causes electronic excitation of the reactants (e.g., one might excite one electron in the bound diatomic molecule discussed above from a bonding to an anti-bonding orbital), one speaks of inducing reaction by photochemical means.

Conservation of Orbital Symmetry - the Woodward-Hoffmann Rules

As an example of how important it is to understand the changes in bonding that accompany a chemical reaction, let us consider a reaction in which 1,3-butadiene is converted, via ring-closure, to form cyclobutene. Specifically, focus on the four $\pi$ orbitals of 1,3-butadiene as the molecule undergoes so-called disrotatory closing, along which the plane of symmetry that bisects and is perpendicular to the $C_2-C_3$ bond is preserved. The orbitals of the reactant and product can be labeled as being even (e) or odd (o) under reflection through this symmetry plane. It is not appropriate to label the orbitals with respect to their symmetry under the plane containing the four C atoms because, although this plane is indeed a symmetry operation for the reactants and products, it does not remain a valid symmetry throughout the reaction path. That is, in applying the Woodward-Hoffmann rules, we symmetry-label the orbitals using only those symmetry elements that are preserved throughout the reaction path being examined.
The four $\pi$ orbitals of 1,3-butadiene are of the following symmetries under the preserved symmetry plane (see the orbitals in Figure 5.5): $\pi_1= e, \pi_2= o, \pi_3=e, \pi_4= o$. The $\pi$ and $\pi^*$ and $\sigma$ and $\sigma^*$ orbitals of the product cyclobutene, which evolve from the four orbitals of the 1,3-butadiene, are of the following symmetry and energy order: $\sigma = e, \pi = e, \pi^* = o, \sigma^* = o$. The Woodward-Hoffmann rules instruct us to arrange the reactant and product orbitals in order of increasing energy and to then connect these orbitals by symmetry, starting with the lowest energy orbital and going through the highest energy orbital. This process gives the following so-called orbital correlation diagram: the even orbitals correlate as $\pi_1 \rightarrow \sigma$ and $\pi_3 \rightarrow \pi$, while the odd orbitals correlate as $\pi_2 \rightarrow \pi^*$ and $\pi_4 \rightarrow \sigma^*$. We then need to consider how the electronic configuration in which the electrons are arranged as in the ground state of the reactants evolves as the reaction occurs. We notice that the lowest two orbitals of the reactants, which are those occupied by the four $\pi$ electrons of the reactant, do not connect to the lowest two orbitals of the products, which are the orbitals occupied by the two $\sigma$ and two $\pi$ electrons of the products. This causes the ground-state configuration of the reactants ($\pi_1{}^2 \pi_2{}^2$) to evolve into an excited configuration ($\sigma^2 \pi^{*2}$) of the products. This, in turn, produces an activation barrier for the thermal disrotatory rearrangement (in which the four active electrons occupy these lowest two orbitals) of 1,3-butadiene to produce cyclobutene. If the reactants could be prepared, for example by photolysis, in an excited state having orbital occupancy $\pi_1{}^2\pi_2{}^1\pi_3{}^1$, then reaction along the path considered would not have any symmetry-imposed barrier because this singly excited configuration correlates to a singly-excited configuration $\sigma^2\pi^1\pi^{*1}$ of the products. The fact that the reactant and product configurations are of equivalent excitation level causes there to be no symmetry constraints on the photochemically induced reaction of 1,3-butadiene to produce cyclobutene. In contrast, the thermal reaction considered first above has a symmetry-imposed barrier because the orbital occupancy is forced to rearrange (two electrons must move from $\pi_2$, which correlates with $\pi^*$, into $\pi_3$, which correlates with $\pi$) for the ground-state wave function of the reactant to smoothly evolve into that of the product. Of course, if the reactants could be generated in an excited state having $\pi_1{}^2 \pi_3{}^2$ orbital occupancy, then products could also be produced directly in their ground electronic state. However, it is difficult, if not impossible, to generate such doubly-excited electronic states, so it is rare that one encounters reactions being induced via such states. It should be stressed that although these symmetry considerations may allow one to anticipate barriers on reaction potential energy surfaces, they have nothing to do with the thermodynamic energy differences of such reactions. What the above Woodward-Hoffmann symmetry treatment addresses is whether there will be symmetry-imposed barriers above and beyond any thermodynamic energy differences. The enthalpies of formation of reactants and products contain the information about the reaction's overall energy balance and need to be considered independently of the kind of orbital symmetry analysis just introduced.
As the above example illustrates, it is important to be aware of whether a chemical reaction occurs on the ground-state or an excited-state electronic surface. This example shows that one might want to photo-excite the reactant molecules to cause the reaction to occur at an accelerated rate. With the electrons occupying the lowest-energy orbitals, the ring-closure reaction can still occur, but it has to surmount a barrier to do so (it can employ thermal collisional energy to surmount this barrier), so its rate might be slow. If an electron is excited, there is no symmetry barrier to surmount, so the rate can be greater. Reactions that take place on excited states also have a chance to produce products in excited electronic states, and such excited-state products may emit light. Such reactions are called chemiluminescent because they produce light (luminescence) by way of a chemical reaction.

Rates of change

Rates of reactions play crucial roles in many aspects of our lives. Rates of various biological reactions determine how fast we metabolize food, and rates at which fuels burn in air determine whether an explosion or a calm flame will result. Chemists view the rate of any reaction among molecules (and perhaps photons or electrons if they are used to induce excitation in reactant molecules) to be related to (1) the frequency with which the reacting species encounter one another and (2) the probability that a set of such species will react once they do encounter one another. The former aspect relates primarily to the concentrations of the reacting species and the speeds with which they are moving. The latter has more to do with whether the encountering species collide in a favorable orientation (e.g., do the enzyme and substrate dock properly, or does the $Br^-$ ion collide with the $H_3C$ end of $H_3C-Cl$ or with the $Cl$ end in the S$_{\rm N}$2 reaction that yields $CH_3Br + Cl^-$?) and with sufficient energy to surmount any barrier that must be passed to effect breaking bonds in reactants to form new bonds in products. The rates of reactions can be altered by changing the concentrations of the reacting species, by changing the temperature, or by adding a catalyst. Concentrations and temperature control the collision rates among molecules, and temperature also controls the energy available to surmount barriers. Catalysts are molecules that are not consumed during the reaction but which cause the rate of the reaction to be increased (species that slow the rate of a reaction are called inhibitors). Most catalysts act by providing orbitals of their own that interact with the reacting molecules' orbitals to cause the energies of the latter to be lowered as the reaction proceeds. In the ring-closure reaction cited earlier, the catalyst's orbitals would interact (i.e., overlap) with the 1,3-butadiene's $\pi$ orbitals in a manner that lowers their energies and thus reduces the energy barrier that must be overcome for reaction to proceed. In addition to being capable of determining the geometries (bond lengths and angles), energies, and vibrational frequencies of species such as the isomers of arginine discussed above, theory also addresses questions of how and how fast transitions among these isomers occur. The issue of how chemical reactions occur focuses on the mechanism of the reaction, meaning how the nuclei move and how the electronic orbital occupancies change as the system evolves from reactants to products.
In a sense, understanding the mechanism of a reaction in detail amounts to having a mental moving picture of how the atoms and electrons move as the reaction is occurring. The issue of how fast reactions occur relates to the rates of chemical reactions. In most cases, reaction rates are determined by the frequency with which the reacting molecules access a critical geometry (called the transition state or activated complex) near which bond breaking and bond forming takes place. The reacting molecules' potential energy along the path connecting reactants through a transition state to products is often represented as shown in Figure 5.7. Figure 5.7 Energy vs. reaction progress plot showing the transition state or activated complex and the activation energy. In this figure, the potential energy (i.e., the electronic energy without the nuclei's kinetic energy included) is plotted along a coordinate connecting reactants to products. The geometries and energies of the reactants, products, and of the activated complex can be determined using the potential energy surface searching methods discussed briefly above and detailed earlier in Chapter 3. Chapter 8 provides more information about the theory of reaction rates and how such rates depend upon geometrical, energetic, and vibrational properties of the reacting molecules. The frequencies with which the transition state is accessed are determined by the amount of energy (termed the activation energy $E^*$) needed to access this critical geometry. For systems at or near thermal equilibrium, the probability of the molecule gaining energy $E^*$ is shown for three temperatures in Figure 5.8. For such cases, chemical reaction rates usually display a temperature dependence characterized by linear plots of $\ln(k)$ vs $1/T$. Of course, not all reactions involve molecules that have been prepared at or near thermal equilibrium. For example, in supersonic molecular beam experiments, the kinetic energy distribution of the colliding molecules is more likely to be of the type shown in Figure 5.9. In this figure, the probability is plotted as a function of the relative speed with which reactant molecules collide. It is common in making such collision speed plots to include the $v^2$ volume element factor in the plot. That is, the normalized probability distribution for molecules having reduced mass $\mu$ to collide with relative velocity components $v_x, v_y, v_z$ is $P(v_x, v_y, v_z) dv_x dv_y dv_z = \bigg(\dfrac{\mu}{2\pi kT}\bigg)^{3/2} \exp\bigg(-\dfrac{\mu(v_x^2+v_y^2+v_z^2)}{2kT}\bigg) dv_x dv_y dv_z.$ Because only the total collisional kinetic energy is important in surmounting reaction barriers, we convert this Cartesian velocity component distribution to one in terms of the collision speed $v = \sqrt{v_x^2+v_y^2+v_z^2}$. This is done by changing from Cartesian to polar coordinates (in which the radial variable is $v$ itself) and gives (after integrating over the two angular coordinates): $P(v)\, dv = 4\pi \bigg(\dfrac{\mu}{2\pi kT}\bigg)^{3/2} \exp\bigg(-\dfrac{\mu v^2}{2kT}\bigg) v^2\, dv.$ It is the $v^2$ factor in this speed distribution that causes the Maxwell-Boltzmann distribution to vanish at low speeds in the plot of Figure 5.9 (a quick numerical check appears below). Another kind of experiment in which non-thermal conditions are used to extract information about activation energies occurs within the realm of ion-molecule reactions, where one uses collision-induced dissociation (CID) to break a molecule apart.
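Before taking up the CID example, here is the promised check of the speed distribution just derived. The reduced mass and temperature below are illustrative values I have chosen (roughly a pair of light colliding molecules near room temperature); the sketch verifies that $P(v)$ integrates to one and that its maximum sits at the analytic most-probable speed $\sqrt{2kT/\mu}$.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
amu = 1.66054e-27          # atomic mass unit, kg
mu = 10.0 * amu            # illustrative reduced mass
T = 300.0                  # temperature, K

def P(v):
    # Maxwell-Boltzmann collision-speed distribution P(v)
    pref = 4.0 * np.pi * (mu / (2.0 * np.pi * kB * T))**1.5
    return pref * np.exp(-mu * v**2 / (2.0 * kB * T)) * v**2

v = np.linspace(0.0, 5000.0, 200001)
norm = np.trapz(P(v), v)                 # should be ~1
v_mp = v[np.argmax(P(v))]                # most probable speed
print(f"normalization = {norm:.6f}")
print(f"most probable speed = {v_mp:.1f} m/s "
      f"(analytic sqrt(2kT/mu) = {np.sqrt(2*kB*T/mu):.1f} m/s)")
```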
For example, when a complex consisting of a $Na^+$ cation bound to a uracil molecule is accelerated by an external electric field to a kinetic energy $E$ and subsequently allowed to impact into a gaseous sample of Xe atoms, the high-energy collision allows kinetic energy to be converted into internal energy. This collisional energy transfer may deposit into the $Na^+({\rm uracil})$ complex enough energy to overcome the $Na^+\cdots$uracil attractive binding energy, thus producing $Na^+$ and neutral uracil fragments. If the signal for production of $Na^+$ is monitored as the collision energy $E$ is increased, one generates a CID reaction rate profile such as I show in Figure 5.10. On the vertical axis is plotted a quantity proportional to the rate at which $Na^+$ ions are formed. On the horizontal axis is plotted the collision energy $E$ in two formats. The laboratory kinetic energy is simply one half the mass of the $Na^+({\rm uracil})$ complex multiplied by the square of the speed of these ion complexes measured with respect to a laboratory-fixed coordinate frame. The center-of-mass (CM) kinetic energy is the amount of energy available between the $Na^+({\rm uracil})$ complex and the Xe atom, and is given by $E_{\rm CM} = \dfrac{1}{2} \dfrac{m_{\rm complex} m_{Xe}}{m_{\rm complex} + m_{Xe}} v^2,$ where $v$ is the relative speed of the complex and the Xe atom, and $m_{Xe}$ and $m_{\rm complex}$ are the respective masses of the colliding partners. The most essential lesson to learn from such a graph is that no dissociation occurs if $E$ is below some critical threshold value, and the CID reaction $Na^+({\rm uracil}) \rightarrow Na^+ + {\rm uracil}$ occurs at higher and higher rates as the collision energy $E$ increases beyond the threshold. For the example shown above, the threshold energy is ca. 1.2-1.4 eV. These CID thresholds can provide us with estimates of reaction endothermicities and are especially useful when these energies are greatly in excess of what can be realized by simply heating the sample.

Statistical Mechanics: Treating Large Numbers of Molecules in Close Contact

When one has a large number of molecules that undergo frequent collisions (thereby exchanging energy, momentum, and angular momentum), the behavior of this collection of molecules can often be described in a simple way. At first glance, it seems unlikely that the treatment of a large number of molecules could require far less effort than that required to describe one or a few such molecules. To see the essence of what I am suggesting, consider a sample of 10 cm$^3$ of water at room temperature and atmospheric pressure. In this macroscopic sample, there are approximately $3.3 \times 10^{23}$ water molecules. If one imagines having an instrument that could monitor the instantaneous speed of a selected molecule, one would expect the instrumental signal to display a very jerky irregular behavior if the signal were monitored on time scales of the order of the time between molecular collisions. On this time scale, the water molecule being monitored may be moving slowly at one instant, but, upon collision with a neighbor, may soon be moving very rapidly. In contrast, if one monitors the speed of this single water molecule over a very long time scale (i.e., much longer than the average time between collisions), one obtains an average square of the speed that is related to the temperature $T$ of the sample via $\left\langle \dfrac{1}{2} mv^2 \right\rangle = \dfrac{3}{2} kT$. This relationship holds because the sample is at equilibrium at temperature $T$.
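To attach a number to this equilibrium relationship, here is a one-line sketch of the mean-square-speed/temperature connection, using the mass of a single water molecule (the temperature is my illustrative choice).

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66054e-27      # atomic mass unit, kg
m_H2O = 18.015 * amu   # mass of one water molecule, kg
T = 298.0              # room temperature, K

# <(1/2) m v^2> = (3/2) kB T  =>  root-mean-square speed
v_rms = np.sqrt(3.0 * kB * T / m_H2O)
print(f"v_rms of a water molecule at {T} K = {v_rms:.0f} m/s")
# prints roughly 640 m/s
```

So the "very jerky" single-molecule signal described next fluctuates about a long-time average corresponding to speeds of several hundred meters per second.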
An example of the kind of behavior I describe above is shown in Figure 5.11. In this figure, on the vertical axis is plotted the log of the energy (kinetic plus potential) of a single $CN^-$ anion in a solution with water as the solvent as a function of time. The vertical axis label says Eq. (8) because this figure was taken from a literature article. The $CN^-$ ion initially has excess vibrational energy in this simulation, which was carried out in part to model the energy flow from this hot solute ion to the surrounding solvent molecules. One clearly sees the rapid jerks in energy that this ion experiences as it undergoes collisions with neighboring water molecules. These jerks occur approximately every 0.01 ps, and some of them correspond to collisions that take energy from the ion and others to collisions that give energy to the ion. On longer time scales (e.g., over 1-10 ps), we also see a gradual drop-off in the energy content of the $CN^-$ ion, which illustrates the slow loss of its excess energy on the longer time scale. Now, let's consider what happens if we monitor a large number of molecules rather than a single molecule within the 10 cm$^3$ sample of $H_2O$ mentioned earlier. If we imagine drawing a sphere of radius $R$ and monitoring the average speed of all water molecules within this sphere, we obtain a qualitatively different picture if the sphere is large enough to contain many water molecules. For large $R$, one finds that the total kinetic energy of the $N$ water molecules residing inside the sphere (i.e., $\displaystyle\sum_{K =1}^N \dfrac{1}{2} mv_K^2$) is independent of time (even when considered at a sequence of times separated by fractions of a ps) and is related to the temperature $T$ through $\displaystyle\sum_K \dfrac{1}{2} mv_K^2 = \dfrac{3N}{2} kT$. This example shows that, at equilibrium, the long-time average of a property of any single molecule is the same as the instantaneous average of this same property over a large number of molecules. For the single molecule, one achieves the average value of the property by averaging its behavior over time scales lasting for many, many collisions. For the collection of many molecules, the same average value is achieved (at any instant of time) because the number of molecules within the sphere (which is proportional to $\dfrac{4}{3} \pi R^3$) is so much larger than the number near the surface of the sphere (proportional to $4\pi R^2$) that the molecules interior to the sphere are essentially at equilibrium for all times. Another way to say the same thing is to note that the fluctuations in the energy content of a single molecule are very large (i.e., the molecule undergoes frequent large jerks) but last a short time (i.e., the time between collisions). In contrast, for a collection of many molecules, the fluctuations in the energy for the whole collection are small at all times because fluctuations take place by exchange of energy with the molecules that are not inside the sphere (and thus relate to the surface-area-to-volume ratio of the sphere). So, if one has a large number of molecules that one has reason to believe are at thermal equilibrium, one can avoid trying to follow the instantaneous short-time detailed dynamics of any one molecule or of all the molecules. Instead, one can focus on the average properties of the entire collection of molecules.
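A minimal sketch of the averaging argument just made: draw random thermal kinetic energies and compare the fluctuation of a single molecule's energy with that of the mean over $N$ molecules. The Gaussian velocity components stand in for the Maxwell-Boltzmann distribution; no actual dynamics or collisions are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snapshots = 200

def mean_kinetic_energy(N):
    # kB*T and m set to 1; each molecule has 3 Gaussian velocity components,
    # so <(1/2) v^2> = 3/2 per molecule in these reduced units.
    v = rng.normal(size=(n_snapshots, N, 3))
    ke = 0.5 * np.sum(v**2, axis=2)     # per-molecule kinetic energies
    return ke.mean(axis=1)              # instantaneous N-molecule average

for N in (1, 100, 10000):
    e = mean_kinetic_energy(N)
    print(f"N = {N:6d}: mean = {e.mean():.3f}, "
          f"relative fluctuation = {e.std()/e.mean():.4f}")
# The relative fluctuation falls off as 1/sqrt(N), so the average over a
# large sphere of molecules is essentially constant in time.
```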
What this means for a person interested in theoretical simulations of such condensed-media problems is that there is no need to carry out a Newtonian molecular dynamics simulation of the system (or a quantum simulation) if it is at equilibrium, because the long-time averages of whatever is calculated can be found another way. How one achieves this is through the magic of statistical mechanics and statistical thermodynamics. One of the most powerful of the devices of statistical mechanics is the so-called Monte-Carlo simulation algorithm. Such theoretical tools provide a direct way to compute equilibrium averages (and small fluctuations about such averages) for systems containing large numbers of molecules. In Chapter 7, I provide a brief introduction to the basics of this sub-discipline of theoretical chemistry, where you will learn more about this exciting field. Sometimes we speak of the equilibrium behavior or the dynamical behavior of a collection of molecules. Let me elaborate a little on what these phrases mean. Equilibrium properties of molecular collections include the radial and angular distribution functions among various atomic centers. For example, the $O-O$ and $O-H$ radial distribution functions in liquid water and in ice are shown in Figure 5.12. Such properties represent averages, over long times or over a large collection of molecules, of some property that is not changing with time except on a very fast time scale corresponding to individual collisions. In contrast, dynamical properties of molecular collections include the folding and unfolding processes that proteins and other polymers undergo; the migrations of protons from water molecule to water molecule in liquid water and along $H_2O$ chains within ion channels; and the self-assembly of molecular monolayers on solid surfaces as the concentration of the molecules in the liquid overlayer varies. These are properties that occur on time scales much longer than those between molecular collisions and on time scales that we wish to probe by some experiment or by simulation. Having briefly introduced the primary areas of theoretical chemistry - structure, dynamics, and statistical mechanics - let us now examine each of them in somewhat greater detail, keeping in mind that Chapters 6-8 are where each is treated more fully.

Contributors and Attributions

Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry

Integrated by Tomoyuki Hayashi (UC Davis)
Experimental Probes of Molecular Shapes

I expect you are wondering why I want to discuss how experiments measure molecular shapes in this text, whose aim is to introduce you to the field of theoretical chemistry. In fact, theory and experimental measurement are very connected, and it is these connections that I wish to emphasize in the following discussion. In particular, I want to make it clear that experimental data can only be interpreted, and thus used to extract molecular properties, through the application of theory. So, theory does not replace experiment, but serves both as a complementary component of chemical research (via simulation of molecular properties) and as the means by which we connect laboratory data to molecular properties.

Rotational Spectroscopy

Most of us use rotational excitation of molecules in our every-day life. In particular, when we cook in a microwave oven, the microwave radiation, which has a frequency in the $10^9 - 10^{11}\ {\rm s}^{-1}$ range, inputs energy into the rotational motions of the (primarily) water molecules contained in the food. These rotationally hot water molecules then collide with neighboring molecules (i.e., other water as well as proteins and other molecules in the food and in the cooking vessel) to transfer some of their motional energy to them. Through this means, the translational kinetic energy of all the molecules inside the cooker increases. This process of rotation-to-translation energy transfer is how the microwave radiation ultimately heats the food, which cooks it. What happens when you put the food into the microwave oven in a metal container or with some other metal material? As shown in Chapter 2, the electrons in metals exist in very delocalized, partially filled orbitals called bands. These band orbitals are spread out throughout the entire piece of metal. The application of any external electric field (e.g., that belonging to the microwave radiation) causes these metal electrons to move throughout the metal. As these electrons accumulate more and more energy from the microwave radiation, they eventually have enough kinetic energy to be ejected into the surrounding air, forming a discharge. This causes the sparking that we see when we make the mistake of putting anything metal into our microwave oven. Let's now learn more about how the microwave photons cause the molecules to become rotationally excited. Using microwave radiation, molecules having dipole moment vectors ($\boldsymbol{\mu}$) can be made to undergo rotational excitation. In such processes, the time-varying electric field $\textbf{E} \cos(\omega t)$ of the microwave electromagnetic radiation interacts with the molecules via a potential energy of the form $V = \textbf{E} \cdot \boldsymbol{\mu} \cos(\omega t)$. This potential can cause energy to flow from the microwave energy source into the molecule's rotational motions when the energy of the former, $\hbar\omega$, matches the energy spacing between two rotational energy levels. This idea of matching the energy of the photons to the energy spacings of the molecule illustrates the concept of resonance and is something that is ubiquitous in spectroscopy, as we learned in mathematical detail in Chapter 4. Upon first hearing that the photon's energy must match an energy-level spacing in the molecule if photon absorption is to occur, it appears obvious and even trivial. However, upon further reflection, there is more to such resonance requirements than one might think.
Allow me to illustrate using this microwave-induced rotational excitation example by asking you to consider why photons whose energies $\hbar\omega$ considerably exceed the energy spacing $\Delta{E}$ will not be absorbed in this transition. That is, why is more than enough energy not good enough? The reason is that for two systems (in this case the photon's electric field and the molecule's rotation, which causes its dipole moment to also rotate) to interact and thus exchange energy (this is what photon absorption is), they must have very nearly the same frequencies. If the photon's frequency ($\omega$) exceeds the rotational frequency of the molecule by a significant amount, the molecule will experience an electric field that oscillates too quickly to induce a torque on the molecule's dipole that is always in the same direction and that lasts over a significant length of time. As a result, the rapidly oscillating electric field will not provide a coherent twisting of the dipole and hence will not induce rotational excitation. One simple example from every-day life can further illustrate this issue. When you try to push your friend, spouse, or child on a swing, you move your arms in resonance with the swinging person's movement frequency. Each time the person returns to you, your arms are waiting to give a push in the direction that gives energy to the swinging individual. This happens over and over again; each time they return, your arms have returned to be ready to give another push in the same direction. In this case, we say that your arms move in resonance with the swing's motion and offer a coherent excitation of the swinger. If you were to increase greatly the rate at which your arms are moving in their up and down pattern, the swinging person would not always experience a push in the correct direction when they return to meet your arms. Sometimes they would feel a strong in-phase push, but other times they would feel an out-of-phase push in the opposite direction. The net result is that, over a long period of time, they would feel random jerks from your arms, and thus would not undergo smooth energy transfer from you. This is why too high a frequency (and hence too high an energy) does not induce excitation. Let us now return to the case of rotational excitation by microwave photons. As we saw in Chapter 2, for a rigid diatomic molecule, the rotational energy spacings are given by $E_{J+1} - E_J = 2 (J+1) \bigg(\dfrac{\hbar^2}{2I} \bigg) = 2hcB (J+1) \tag{5.2.1}$ where $I$ is the moment of inertia of the molecule given in terms of its equilibrium bond length $r_e$ and its reduced mass $\mu=\dfrac{m_am_b}{m_a+m_b}$ as $I = \mu r_e^2$. Thus, in principle, measuring the rotational energy level spacings via microwave spectroscopy allows one to determine $r_e$ (a small numerical illustration is given below). The second identity above simply defines what is called the rotational constant $B$ in terms of the moment of inertia. The rotational energy levels described above give rise to a manifold of levels of non-uniform spacing as shown in Figure 5.13. The non-uniformity in spacings is a result of the quadratic dependence of the rotational energy levels $E_J$ on the rotational quantum number $J$: $E_J = J(J+1) \bigg(\dfrac{\hbar^2}{2I}\bigg).\tag{5.2.2}$ Moreover, the level with quantum number $J$ is $(2J+1)$-fold degenerate; that is, there are $2J+1$ distinct energy states and wave functions that have energy $E_J$ and that are distinguished by a quantum number $M$.
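Here is the promised illustration of extracting $r_e$ from a measured rotational constant. I use carbon monoxide with its well-known $B \approx 1.93\ {\rm cm}^{-1}$ as the assumed input; the recovered bond length should come out near the accepted $r_e \approx 1.13$ Å.

```python
import numpy as np

hbar = 1.054572e-34      # J s
c = 2.997925e10          # speed of light, cm/s
amu = 1.66054e-27        # kg

B = 1.93                 # rotational constant of CO, cm^-1 (textbook value)
m_C, m_O = 12.000, 15.995
mu = (m_C * m_O) / (m_C + m_O) * amu   # reduced mass, kg

# E_J = (hbar^2/2I) J(J+1) = h c B J(J+1)  =>  I = hbar / (4 pi c B)
I = hbar / (4.0 * np.pi * c * B)       # moment of inertia, kg m^2
r_e = np.sqrt(I / mu)                  # invert I = mu r_e^2

print(f"I   = {I:.3e} kg m^2")
print(f"r_e = {r_e*1e10:.3f} Angstrom")   # ~1.13 Angstrom
```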
The $2J+1$ states belonging to a given $J$ have identical energy but differ among one another by the orientation of their angular momentum in space (i.e., the orientation of how they are spinning). For polyatomic molecules, we know from Chapter 2 that things are more complicated because the rotational energy levels depend on three so-called principal moments of inertia ($I_a$, $I_b$, $I_c$) which, in turn, contain information about the molecule's geometry. These three principal moments are found by forming a 3x3 moment of inertia matrix having elements $I_{x,x} = \sum_a m_a [ (R_a-R_{\rm CofM})^2 -(x_a - x_{\rm CofM} )^2 ]\tag{5.2.3a}$ and $I_{x,y} = -\sum_a m_a [ (x_a - x_{\rm CofM}) ( y_a -y_{\rm CofM}) ]\tag{5.2.3b}$ expressed in terms of the Cartesian coordinates of the nuclei ($a$) and of the center of mass in an arbitrary molecule-fixed coordinate system (analogous definitions hold for $I_{z,z}$, $I_{y,y}$, $I_{x,z}$ and $I_{y,z}$). The principal moments are then obtained as the eigenvalues of this 3x3 matrix (a brief numerical sketch of this procedure is given below). For molecules with all three principal moments equal, the rotational energy levels are given by $E_{J,K} = \dfrac{\hbar^2J(J+1)}{2I}$, and are independent of the $K$ quantum number and of the $M$ quantum number that again describes the orientation of how the molecule is spinning in space. Such molecules are called spherical tops. For molecules (called symmetric tops) with two principal moments equal ($I_a = I_b$) and one unique moment $I_c$, the energies depend on two quantum numbers $J$ and $K$ and are given by $E_{J,K} = \dfrac{\hbar^2J(J+1)}{2I_a} + \hbar^2K^2 \bigg(\dfrac{1}{2I_c} - \dfrac{1}{2I_a}\bigg). \tag{5.2.4}$ Species having all three principal moments of inertia unique, termed asymmetric tops, have rotational energy levels for which no analytic formula is yet known. The $H_2O$ molecule, shown in Figure 5.14, is such an asymmetric top molecule. More details about the rotational energies and wave functions were given in Chapter 2. The moments of inertia that occur in the expressions for the rotational energy levels involve positions of atomic nuclei relative to the center of mass of the molecule. So, a microwave spectrum can, in principle, determine the moments of inertia and hence the geometry of a molecule. In the discussion given above, we treated these positions, and thus the moments of inertia, as fixed (i.e., not varying with time). Of course, these distances are not unchanging with time in a real molecule because the molecule's atomic nuclei undergo vibrational motions. Because of this, it is the vibrationally-averaged moments of inertia that must be incorporated into the rotational energy level formulas. Specifically, because the rotational energies depend on the inverses of moments of inertia, one must vibrationally average $(R_a-R_{\rm CofM})^{-2}$ over the vibrational motion that characterizes the molecule's movement. For species containing stiff bonds, the vibrational average $\langle \psi|(R_a -R_{\rm CofM})^{-2}|\psi\rangle$ of the inverse squares of atomic distances relative to the center of mass does not differ significantly from the equilibrium values $(R_{a,eq} -R_{\rm CofM})^{-2}$ of the same distances. However, for molecules such as weak van der Waals complexes (e.g., $(H_2O)_2$ or $Ar\cdots HCl$) that undergo floppy large-amplitude vibrational motions, there may be large differences between the equilibrium $(R_{a,eq} -R_{\rm CofM})^{-2}$ and the vibrationally averaged values $\langle \psi|(R_a -R_{\rm CofM})^{-2}|\psi\rangle$.
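As an aside on the machinery itself, here is the promised sketch of forming and diagonalizing the 3x3 inertia matrix defined above. The water-like geometry (bond length 0.96 Å, angle 104.5°) is an assumed illustrative input at the equilibrium structure, not a vibrationally averaged one.

```python
import numpy as np

amu = 1.66054e-27      # kg
ang = 1.0e-10          # m

# Approximate equilibrium geometry of H2O (illustrative input)
masses = np.array([15.995, 1.008, 1.008]) * amu
half = np.radians(104.5) / 2.0
r = 0.96 * ang
coords = np.array([[0.0, 0.0, 0.0],                          # O
                   [ r*np.sin(half), 0.0, r*np.cos(half)],   # H
                   [-r*np.sin(half), 0.0, r*np.cos(half)]])  # H

# Shift to the center of mass
com = (masses[:, None] * coords).sum(axis=0) / masses.sum()
x = coords - com

# Inertia matrix: I_ab = sum over atoms of m [ |R|^2 delta_ab - x_a x_b ]
I = np.zeros((3, 3))
for m, xi in zip(masses, x):
    I += m * (np.dot(xi, xi) * np.eye(3) - np.outer(xi, xi))

# Principal moments are the eigenvalues of this 3x3 matrix
Ia, Ib, Ic = np.sort(np.linalg.eigvalsh(I))
print(f"Ia, Ib, Ic = {Ia:.3e}, {Ib:.3e}, {Ic:.3e} kg m^2")
# Three distinct values: H2O is indeed an asymmetric top
```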
The proper treatment of the rotational energy level patterns in floppy molecules of this kind is still very much under active study by theoretical and experimental chemists. For this reason, it is a very challenging task to use microwave data on rotational energies to determine geometries (equilibrium or vibrationally averaged) for these kinds of molecules. So, in the area of rotational spectroscopy, theory plays several important roles:

1. It provides the basic equations in terms of which the rotational line spacings relate to moments of inertia.
2. It allows one, given the distribution of geometrical bond lengths and angles characteristic of the vibrational state the molecule exists in, to compute the proper vibrationally-averaged moment of inertia.
3. It can be used to treat large-amplitude floppy motions (e.g., by simulating the nuclear motions on a Born-Oppenheimer energy surface), thereby allowing rotationally resolved spectra of such species to provide proper moment of inertia (and thus geometry) information.

Vibrational Spectroscopy

The ability of molecules to absorb and emit infrared radiation as they undergo transitions among their vibrational energy levels is critical to our planet's health. It turns out that water and $CO_2$ molecules have bonds that vibrate in the $10^{13}-10^{14}\ {\rm s}^{-1}$ frequency range, which is within the infrared spectrum ($10^{11}-10^{14}\ {\rm s}^{-1}$). As solar radiation (primarily visible and ultraviolet) impacts the earth's surface, it is absorbed by molecules with electronic transitions in this energy range (e.g., colored molecules such as those contained in plant leaves and other dark material). These molecules are thereby promoted to excited electronic states. Some such molecules re-emit the photons that excited them, but most undergo so-called radiationless relaxation that allows them to return to their ground electronic state but with a substantial amount of internal vibrational energy. That is, these molecules become vibrationally very hot. Subsequently, these hot molecules, as they undergo transitions from high-energy vibrational levels to lower-energy levels, emit infrared (IR) photons. If our atmosphere were devoid of water vapor and $CO_2$, these IR photons would travel through the atmosphere and be lost into space. The result would be that much of the energy provided by the sun's visible and ultraviolet photons would be lost via IR emission. However, the water vapor and $CO_2$ do not allow so much IR radiation to escape. These greenhouse gases absorb the emitted IR photons to generate vibrationally hot water and $CO_2$ molecules in the atmosphere. These vibrationally excited molecules undergo collisions with other molecules in the atmosphere and at the earth's surface. In such collisions, some of their vibrational energy can be transferred to translational kinetic energy of the collision-partner molecules. In this manner, the temperature (which is a measure of the average translational energy) increases. Of course, the vibrationally hot molecules can also re-emit their IR photons, but there is a thick layer of such molecules forming a blanket around the earth, and all of these molecules are available to continually absorb and re-emit the IR energy. In this manner, the blanket keeps the IR radiation from escaping and thus keeps our atmosphere warm. Those of us who live in dry desert climates are keenly aware of such effects.
Clear cloudless nights in the desert can become very cold, primarily because much of the day's IR energy production is lost to radiative emission through the atmosphere and into space. Let's now learn more about molecular vibrations, how IR radiation excites them, and what theory has to do with this. When infrared (IR) radiation is used to excite a molecule, it is the vibrations of the molecule that are in resonance with the oscillating electric field $\textbf{E} \cos(\omega t)$. Molecules whose dipole moments vary as their vibrations occur interact with the IR electric field via a potential energy of the form $V = (\partial \boldsymbol{\mu}/\partial Q)\cdot\textbf{E} \cos(\omega t)$. Here $\partial \boldsymbol{\mu}/\partial Q$ denotes the change in the molecule's dipole moment $\boldsymbol{\mu}$ associated with motion along the vibrational normal mode labeled $Q$. As the IR radiation is scanned, it comes into resonance with various vibrations of the molecule under study, and radiation can be absorbed. Knowing the frequencies at which radiation is absorbed provides knowledge of the vibrational energy level spacings in the molecule. Absorptions associated with transitions from the lowest vibrational level to the first excited level are called fundamental transitions. Those connecting the lowest level to the second excited state are called first overtone transitions. Excitations from excited levels to even higher levels are named hot-band absorptions. Fundamental vibrational transitions occur at frequencies that characterize various functional groups in molecules (e.g., O-H stretching, H-N-H bending, N-H stretching, C-C stretching, etc.). As such, a vibrational spectrum offers an important fingerprint that allows the chemist to infer which functional groups are present in the molecule. However, when the molecule contains soft floppy vibrational modes, it is often more difficult to use information about the absorption frequency to extract quantitative information about the molecule's energy surface and its bonding structure. As was the case for rotational levels of such floppy molecules, the accurate treatment of large-amplitude vibrational motions of such species remains an area of intense research interest within the theory community. In a polyatomic molecule with $N$ atoms, there are many vibrational modes. The total vibrational energy of such a molecule can be approximated as a sum of terms, one for each of the $3N-6$ (or $3N-5$ for a linear molecule) vibrations: $E(v_1 ... v_{3N-5\text{ or }6}) = \sum_{j=1}^{3N-5\text{ or }6}\hbar\omega_j \big(v_j + \dfrac{1}{2}\big).$ Here, $\omega_j$ is the harmonic frequency of the $j^{\rm th}$ mode and $v_j$ is the vibrational quantum number associated with that mode. As we discussed in Chapter 3, the vibrational wave functions are products of harmonic vibrational functions for each mode: $\psi = \prod_{j=1}^{3N-5\text{ or }6} \psi_{v_j} (x_j),$ and the spacings between energy levels in which one of the normal-mode quantum numbers increases by unity are expressed as $\Delta E_{v_j} = E(...v_j+1 ...) - E (...v_j ...) = \hbar\omega_j.$ That is, the spacings between successive vibrational levels of a given mode are predicted to be independent of the quantum number $v_j$ within this harmonic model, as shown in Figure 5.15 (a small worked example appears below). In Chapter 3, the details connecting the local curvature (i.e., Hessian matrix elements) in a polyatomic molecule's potential energy surface to its normal modes of vibration are presented.
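Here is the promised worked example of the harmonic bookkeeping. The three frequencies are roughly those of the water molecule's bend and two stretches (approximate fundamental values of about 1595, 3657, and 3756 cm$^{-1}$, quoted from memory), used here only to illustrate the sum over modes.

```python
import numpy as np

h = 6.62607e-34        # Planck constant, J s
c = 2.997925e10        # speed of light, cm/s

# Approximate frequencies of H2O's 3N-6 = 3 modes, in cm^-1
omegas = np.array([1595.0, 3657.0, 3756.0])

def E_vib(v):
    # E(v1, v2, v3) = sum_j h c omega_j (v_j + 1/2), with omega_j in cm^-1
    v = np.asarray(v, dtype=float)
    return np.sum(h * c * omegas * (v + 0.5))

zpe = E_vib([0, 0, 0])
fundamental = E_vib([1, 0, 0]) - zpe     # excite the bend by one quantum
print(f"zero-point energy = {zpe:.3e} J")
print(f"bend fundamental  = {fundamental/(h*c):.1f} cm^-1")   # = 1595.0
```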
Experimental evidence clearly indicates that significant deviations from the harmonic oscillator energy expression occur as the quantum number $v_j$ grows. These deviations are explained in terms of the molecule's true potential $V(R)$ deviating strongly from the harmonic $\dfrac{1}{2}k (R-R_e)^2$ potential at higher energy, as shown in Figure 5.16. At larger bond lengths, the true potential is softer than the harmonic potential, and eventually reaches its asymptote, which lies at the dissociation energy $D_e$ above its minimum. This deviation of the true $V(R)$ from $\dfrac{1}{2} k(R-R_e)^2$ causes the true vibrational energy levels to lie below the harmonic predictions. It is conventional to express the experimentally observed vibrational energy levels, along each of the $3N-5$ or $6$ independent modes, in terms of an anharmonic formula similar to what we discussed for the Morse potential in Chapter 2: $E(v_j) = \hbar\bigg[\omega_j \big(v_j + \dfrac{1}{2}\big) - (\omega_x)_j \big(v_j + \dfrac{1}{2}\big)^2 + (\omega_y)_j \big(v_j + \dfrac{1}{2}\big)^3 + (\omega_z)_j \big(v_j + \dfrac{1}{2}\big)^4 + ... \bigg]$ The first term is the harmonic expression. The next is termed the first anharmonicity; it (usually) produces a negative contribution to $E(v_j)$ that varies as $\big(v_j + \dfrac{1}{2}\big)^2$. Subsequent terms are called higher anharmonicity corrections. The spacings between successive $v_j \rightarrow v_j + 1$ energy levels are then given by: $\Delta{E}_{v_j} = E(v_j + 1) - E(v_j) = \hbar [\omega_j - 2(\omega_x)_j (v_j + 1) + ...]$ A plot of the spacing between neighboring energy levels versus $v_j$ should be linear for values of $v_j$ where the harmonic and first anharmonicity terms dominate. The slope of such a plot is expected to be $-2\hbar(\omega_x)_j$ and the intercept at $v_j = 0$ should be $\hbar[\omega_j - 2(\omega_x)_j]$. Such a plot of experimental data, which clearly can be used to determine the $\omega_j$ and $(\omega_x)_j$ parameters of the vibrational mode of study, is shown in Figure 5.17. Figure 5.17 Birge-Sponer plot of vibrational energy spacings vs. quantum number. These so-called Birge-Sponer plots can also be used to determine dissociation energies of molecules if the vibration whose spacings are plotted corresponds to a bond-stretching mode. By linearly extrapolating such a plot of experimental $\Delta E_{v_j}$ values to large $v_j$ values, one can find the value of $v_j$ at which the spacing between neighboring vibrational levels goes to zero. This value, $v_{j,\rm max}$, specifies the quantum number of the last bound vibrational level for the particular bond-stretching mode of interest. The dissociation energy $D_e$ can then be computed by adding to $\dfrac{1}{2}\hbar\omega_j$ (the zero point energy along this mode) the sum of the spacings between neighboring vibrational energy levels from $v_j = 0$ to $v_j = v_{j,\rm max}$: $D_e = \dfrac{1}{2}\hbar\omega_j + \sum_{v_j=0}^{v_{j,\rm max}}\Delta E_{v_j}.$ So, in the case of vibrational spectroscopy, theory allows us to

• interpret observed infrared lines in terms of absorptions arising in localized functional groups;
• extract dissociation energies if a long progression of lines is observed in a bond-stretching transition;
• and treat highly non-harmonic floppy vibrations by carrying out dynamical simulations on a Born-Oppenheimer energy surface.
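As an illustration of the second bullet above, here is a sketch of a Birge-Sponer analysis applied to synthetic data. I generate level spacings from an assumed anharmonic formula with $\omega_j = 2000$ cm$^{-1}$ and $(\omega_x)_j = 20$ cm$^{-1}$ (arbitrary illustrative numbers), fit the line, extrapolate to find $v_{j,\rm max}$, and accumulate the spacings to estimate $D_e$.

```python
import numpy as np

# Assumed anharmonic parameters (cm^-1), chosen only for illustration
w, wx = 2000.0, 20.0

# Observed-like spacings Delta E(v) = w - 2 wx (v + 1), in cm^-1 units
v = np.arange(0, 20)
spacings = w - 2.0 * wx * (v + 1)

# Linear (Birge-Sponer) fit: slope = -2 wx, intercept at v=0 is w - 2 wx
slope, intercept = np.polyfit(v, spacings, 1)
print(f"fit: slope = {slope:.2f}, intercept = {intercept:.2f}")

# Extrapolate to the v at which the spacing vanishes (last bound level)
v_max = int(np.floor(-intercept / slope))
all_spacings = w - 2.0 * wx * (np.arange(0, v_max + 1) + 1)
D_e = 0.5 * w + np.sum(all_spacings[all_spacings > 0])
print(f"v_max ~ {v_max},  D_e ~ {D_e:.0f} cm^-1")
# For a true Morse oscillator, D_e = w^2/(4 wx) = 50000 cm^-1, for comparison
```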
X-Ray Crystallography

In x-ray crystallography experiments, one employs crystalline samples of the molecules of interest and makes use of the diffraction patterns produced by scattered x-rays to determine positions of the atoms in the molecule relative to one another using the famous Bragg formula: $n\lambda = 2d \sin \theta .$ In this equation, $\lambda$ is the wavelength of the x-rays, $d$ is a spacing between layers (planes) of atoms in the crystal, $\theta$ is the angle through which the x-ray beam is scattered, and $n$ is an integer (1, 2, ...) that labels the order of the scattered beam. Because the x-rays scatter most strongly from the inner-shell electrons of each atom, the interatomic distances obtained from such diffraction experiments are, more precisely, measures of distances between high electron densities in the neighborhoods of various atoms. The x-rays interact most strongly with the inner-shell electrons because it is these electrons whose characteristic Bohr frequencies of motion are (nearly) in resonance with the high frequency of such radiation. For this reason, x-rays can be viewed as being scattered from the core electrons that reside near the nuclear centers within a molecule. Hence, x-ray diffraction data offers a very precise and reliable way to probe inter-atomic distances in molecules. The primary difficulties with x-ray measurements are:

1. That one needs to have crystalline samples (often, materials simply cannot be grown as crystals),
2. That one learns about inter-atomic spacings as they occur in the crystalline state, not as they exist, for example, in solution or in gas-phase samples. This is especially problematic for biological systems where one would like to know the structure of the bio-molecule as it exists within the living organism.

Nevertheless, x-ray diffraction data and its interpretation through the Bragg formula provide one of the most widely used and reliable ways for probing molecular structure.

NMR Spectroscopy

NMR spectroscopy probes the absorption of radio-frequency (RF) radiation by the nuclear spins of the molecule. The nuclear spins most commonly probed are $^1H$ (protons), $^2H$ (deuterons), $^{13}C$, and $^{15}N$ nuclei. In the presence of an external magnetic field $B_0$ directed along the $z$-axis, each such nucleus has its spin states split in energy by an amount given by $B_0(1-\sigma_k)\gamma_k M_I$, where $M_I$ is the component of the $k^{\rm th}$ nucleus' spin angular momentum along the $z$-axis, $B_0$ is the strength of the external magnetic field, and $\gamma_k$ is a so-called gyromagnetic factor (i.e., a constant) that is characteristic of the $k^{\rm th}$ nucleus. This splitting of magnetic spin levels by a magnetic field is called the Zeeman effect, and it is illustrated in Figure 5.18. The factor $(1-\sigma_k)$ is introduced to describe the screening of the external $B_0$-field at the $k^{\rm th}$ nucleus caused by the electron cloud that surrounds this nucleus. In effect, $B_0(1-\sigma_k)$ is the magnetic field experienced local to the $k^{\rm th}$ nucleus. It is this $(1-\sigma_k)$ screening that gives rise to the phenomenon of chemical shifts in NMR spectroscopy, and it is this factor that allows NMR measurements of shielding factors ($\sigma_k$) to be related, by theory, to the electronic environment of a nucleus. In Figure 5.19 we display the chemical shifts of proton and $^{13}C$ nuclei in a variety of chemical bonding environments.
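To put numbers on these Zeeman splittings (anticipating the resonance condition derived just below), here is a sketch of the resonance frequency of a proton. The gyromagnetic ratio $\gamma_H \approx 2.675 \times 10^8$ rad s$^{-1}$ T$^{-1}$ is the accepted value; the field strength and shielding are illustrative choices of mine.

```python
from math import pi

gamma_H = 2.675e8   # proton gyromagnetic ratio, rad s^-1 T^-1
B0 = 11.74          # external field strength, T (a common commercial magnet)
sigma = 3.0e-6      # illustrative shielding; chemical shifts are ppm-scale

# Resonance condition: photon angular frequency omega = gamma * B0 * (1 - sigma)
omega = gamma_H * B0 * (1.0 - sigma)
nu = omega / (2.0 * pi)
print(f"proton resonance frequency = {nu/1e6:.2f} MHz")   # ~500 MHz
```

Note how the ppm-sized shielding shifts this roughly 500 MHz line by only a few kHz, which is why NMR spectrometers must resolve frequencies so finely.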
Because the $M_I$ quantum number changes in steps of unity and because each photon possesses one unit of angular momentum, the RF energy $\hbar\omega$ that will be in resonance with the $k^{\rm th}$ nucleus' Zeeman-split levels is given by $\hbar\omega = B_0(1-\sigma_k)\gamma_k$. In most NMR experiments, a fixed RF frequency is employed and the external magnetic field is scanned until the above resonance condition is met. Determining at what $B_0$ value a given nucleus absorbs RF radiation allows one to determine the local shielding $(1-\sigma_k)$ for that nucleus. This, in turn, provides information about the electronic environment local to that nucleus, as illustrated in the above figure. These data tell the chemist a great deal about the molecule's structure because they suggest what kinds of functional groups occur within the molecule. To extract even more geometrical information from NMR experiments, one makes use of another feature of nuclear spin states. In particular, it is known that the energy levels of a given nucleus (e.g., the $k^{\rm th}$ one) are altered by the presence of other nearby nuclear spins. These spin-spin coupling interactions give rise to splittings in the energy levels of the $k^{\rm th}$ nucleus that alter the above energy expression as follows: $E_M = B_0(1-\sigma_k)\gamma_k M + J M M'$ where $M$ is the $z$-component of the $k^{\rm th}$ nuclear spin angular momentum, $M'$ is the corresponding component of a nearby nucleus causing the splitting, and $J$ is called the spin-spin coupling constant between the two nuclei. Examples of how spins on neighboring centers split the NMR absorption lines of a given nucleus are shown in Figs. 5.20-5.22 for three common cases. The first involves a nucleus (labeled A) that is close enough to one other magnetically active nucleus (labeled X); the second involves a nucleus (A) that is close to two equivalent nuclei ($X_2$); and the third describes a nucleus (A) close to three equivalent nuclei ($X_3$). In Figure 5.20 are illustrated the splitting in the X nucleus' absorption due to the presence of a single A neighbor nucleus (right) and the splitting in the A nucleus' absorption (left) caused by the X nucleus. In both of these examples, the X and A nuclei have only two $M_I$ values, so they must be spin-1/2 nuclei. This kind of splitting pattern would, for example, arise for a $^{13}C-H$ group in the benzene molecule where A = $^{13}C$ and X = $^1H$. The ($AX_2$) splitting pattern shown in Figure 5.21 would, for example, arise in the $^{13}C$ spectrum of a $-CH_2-$ group, and illustrates the splitting of the A nucleus' absorption line by the four spin states that the two equivalent X spins can occupy. Again, the lines shown would be consistent with X and A both having spin 1/2 because they each assume only two $M_I$ values. In Figure 5.22 is the kind of splitting pattern ($AX_3$) that would apply to the $^{13}C$ NMR absorptions for a $-CH_3$ group. In this case, the spin-1/2 A line is split by the eight spin states that the three equivalent spin-1/2 H nuclei can occupy. The magnitudes of these $J$ coupling constants depend on the distances $R$ between the two nuclei to the inverse sixth power (i.e., as $R^{-6}$). They also depend on the $g$ values of the two interacting nuclei.
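The $AX_n$ intensity patterns of Figs. 5.20-5.22 follow directly from counting the spin states of the $n$ equivalent spin-1/2 neighbors; here is a minimal sketch of that counting.

```python
from math import comb

def multiplet(n):
    # n equivalent spin-1/2 neighbors split the A line into n+1 components
    # whose intensities are binomial coefficients (Pascal's triangle),
    # because C(n, k) neighbor spin states share each total M' value.
    return [comb(n, k) for k in range(n + 1)]

for n, label in [(1, "AX"), (2, "AX2"), (3, "AX3")]:
    print(f"{label}: {len(multiplet(n))} lines, intensities {multiplet(n)}")
# AX:  2 lines, 1:1      (doublet)
# AX2: 3 lines, 1:2:1    (triplet)
# AX3: 4 lines, 1:3:3:1  (quartet)
```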
In the presence of splitting caused by nearby (usually covalently bonded) nuclei, the NMR spectrum of a molecule consists of sets of absorptions (each belonging to a specific nuclear type in a particular chemical environment and thus having a specific chemical shift) that are split by their couplings to the other nuclei. Because of the spin-spin coupling's strong decay with internuclear distance, the magnitude and pattern of the splitting induced on one nucleus by its neighbors provides a clear signature of what the neighboring nuclei are (i.e., through the number of $M'$ values associated with the peak pattern) and how far away these nuclei are (through the magnitude of the $J$ constant, knowing it is proportional to $R^{-6}$). This near-neighbor data, combined with the chemical shift functional group data, offer powerful information about molecular structure. An example of a full NMR spectrum is given in Figure 5.23, where the $^1H$ spectrum (i.e., only the proton absorptions are shown) of $H_3C-H_2C-OH$ appears along with plots of the integrated intensities under each set of peaks. The latter data indicate the total number of nuclei corresponding to that group of peaks. Notice how the $OH$ proton's absorption, the absorption of the two equivalent protons on the $-CH_2-$ group, and that of the three equivalent protons in the $-CH_3$ group occur at different field strengths (i.e., have different chemical shifts). Also note how the $OH$ peak is split only slightly because this proton is distant from any others, but the $CH_3$ protons' peak is split by the neighboring $-CH_2-$ group's protons in an $AX_2$ pattern. Finally, the $-CH_2-$ protons' peak is split by the neighboring $-CH_3$ group's three protons (in an $AX_3$ pattern). In summary, NMR spectroscopy is a very powerful tool that:

• allows us to extract inter-nuclear distances (or at least tell how many near-neighbor nuclei there are) and thus geometrical information by measuring coupling constants $J$ and subsequently using the theoretical expressions that relate $J$ values to $R^{-6}$ values;
• allows us to probe the local electronic environment of nuclei inside molecules by measuring chemical shifts or shieldings $\sigma_I$ and then using the theoretical equations relating the two quantities. Knowledge about the electronic environment tells one, for example, about the degree of polarity in bonds connected to that nuclear center;
• tells us, through the splitting patterns associated with various nuclei, the number and nature of the neighbor nuclei, again providing a wealth of molecular structure information.

Theoretical Simulation of Structures

We have seen how microwave, infrared, and NMR spectroscopy as well as x-ray diffraction data, when subjected to proper interpretation using the appropriate theoretical equations, can be used to obtain a great deal of structural information about a molecule. As discussed in Part 1 of this text, theory is also used to probe molecular structure in another manner. That is, not only does theory offer the equations that connect the experimental data to the molecular properties, but it also allows one to simulate a molecule. This simulation is done by solving the Schrödinger equation for the motions of the electrons to generate a potential energy surface $E(R)$, after which this energy landscape can be searched for points where the gradients along all directions vanish. An example of such a PES is shown in Figure 5.24 for a simple case in which the energy depends on only two geometrical parameters.
Even in such a case, one can find several local minima and transition state structures connecting them. As we discussed in Chapter 3, among the stationary points on the potential energy surface (PES), those at which all eigenvalues of the second derivative (Hessian) matrix are positive represent geometrically stable isomers of the molecule. Those stationary points on the PES at which all but one Hessian eigenvalue are positive and one is negative represent transition state structures that connect pairs of stable isomers. Once the stable isomers of a molecule lying within some energy interval above the lowest such isomer have been identified, the vibrational motions of the molecule within the neighborhood of each such isomer can be described either by solving the Schrödinger equation for the vibrational wave functions $\chi_v(Q)$ belonging to each normal mode or by solving the classical Newton equations of motion using the gradient $\dfrac{\partial E}{\partial Q}$ of the PES to compute the forces along each molecular distortion direction $Q$: $F_Q = -\dfrac{\partial E}{\partial Q} \tag{5.2.2}$ The decision about whether to use the Schrödinger or Newtonian equations to treat the vibrational motion depends on whether one wishes (needs) to properly include quantum effects (e.g., zero-point motion and wave function nodal patterns) in the simulation.

Once the vibrational motions have been described for a particular isomer, and given knowledge of the geometry of that isomer, one can evaluate the moments of inertia, one can properly vibrationally average all of the $R^{-2}$ quantities that enter into these moments, and, hence, one can simulate the microwave spectrum of the molecule. Also, given the Hessian matrix for this isomer, one can form its mass-weighted variant, whose non-zero eigenvalues give the squares of the normal-mode harmonic frequencies of vibration of that isomer and whose eigenvectors describe the atomic motions that correspond to these vibrations. Moreover, the solution of the electronic Schrödinger equation allows one to compute the NMR shielding $\sigma_I$ values at each nucleus as well as the spin-spin coupling constants $J$ between pairs of nuclei (the treatment of these subjects is beyond the level of this text; you can find it in Molecular Electronic Structure Theory by Helgaker et al.). Again, using the vibrational motion knowledge, one can average the $\sigma$ and $J$ values over this motion to obtain vibrationally averaged $\sigma_I$ and $J_{I,I'}$ values that best simulate the experimental parameters.

One carries out such a theoretical simulation of a molecule for various reasons. Especially in the early days of developing theoretical tools to solve the electronic Schrödinger equation or the vibrational motion problem, one would do so for molecules whose structures and IR and NMR spectra were well known. The purpose in such cases was to calibrate the accuracy of the theoretical methods against established experimental data. Now that theoretical tools have been reasonably well tested and can be trusted (within known limits of accuracy), one often uses theoretically simulated structural and spectroscopic properties to identify spectral features whose molecular origin is not known. That is, one compares the theoretical spectra of a variety of test molecules to the observed spectral features to attempt to identify the molecule that produced the spectra.
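As a small numerical sketch of the mass-weighting step just described (with made-up numbers, not a production tool): one divides each Hessian element by $\sqrt{m_i m_j}$, diagonalizes, and takes square roots of the non-negative eigenvalues to obtain the harmonic frequencies.

```python
import numpy as np

def harmonic_frequencies(hessian, masses):
    # Mass-weight the Hessian, H'_ij = H_ij / sqrt(m_i m_j), then diagonalize;
    # square roots of the non-negative eigenvalues are the harmonic frequencies
    # (in whatever units the Hessian and masses imply), and the eigenvectors
    # are the mass-weighted normal-mode displacement patterns.
    m = np.asarray(masses, dtype=float)
    mw_hessian = np.asarray(hessian, dtype=float) / np.sqrt(np.outer(m, m))
    evals, evecs = np.linalg.eigh(mw_hessian)
    freqs = np.sqrt(np.clip(evals, 0.0, None))  # clip tiny negative round-off
    return freqs, evecs

# toy example: two unit masses coupled by one harmonic spring (k = 1)
k = 1.0
H = np.array([[ k, -k],
              [-k,  k]])
freqs, modes = harmonic_frequencies(H, [1.0, 1.0])
print(freqs)  # ~[0, sqrt(2)]: one free-translation mode plus one vibration
```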
It is also common to use simulations to examine species that are especially difficult to generate in reasonable quantities in the laboratory and species that do not persist for long times. Reactive radicals, cations, and anions are often difficult to generate in the laboratory and may be impossible to retain in sufficient concentrations and for a sufficient duration to permit experimental characterization. In such cases, theoretical simulation of the properties of these molecules may be the most reliable way to access such data. Moreover, one might use simulations to examine the behavior of molecules under extreme conditions such as high pressure, confinement to nanoscopic spaces, high temperature, or very low temperatures for which experiments could be very difficult or expensive to carry out.

Let me tell you about an example of how such theoretical simulation has proven useful, probably even essential, for interpreting experimental data (the data are reported in N. I. Hammer, J-W. Shin, J. M. Headrick, E. G. Diken, J. R. Roscioli, G. H. Weddle, and M. A. Johnson, Science 306, 675 (2004)). In the group of Prof. Mark Johnson at Yale, infrared spectroscopy is carried out on gas-phase ions. In this particular experiment, water cluster anions $Ar_k(H_2O)_n^-$ with one or more Ar atoms attached to them were formed and, using a mass spectrometer, the ions of one specific mass were selected for subsequent study. In the example illustrated here, the cluster $Ar_k(H_2O)_4^-$ containing four water molecules was studied.

When infrared (IR) radiation impinges on the $Ar_k(H_2O)_4^-$ ions, it can be absorbed if its frequency matches the frequency of one of the vibrational modes of this cluster. If, for example, IR radiation in the 1500-1700 cm$^{-1}$ frequency range is absorbed (this range corresponds to frequencies of H-O-H bending vibrations), this excess internal energy can cause one or more of the weakly bound Ar atoms to be ejected from the $Ar_k(H_2O)_4^-$ cluster, thus decreasing the number of intact $Ar_k(H_2O)_4^-$ ions in the mass spectrometer. The decrease in the number of intact ions is then a direct measure of the absorption of the IR light. By monitoring the number of $Ar_k(H_2O)_4^-$ ions (i.e., the strength of the mass spectrometer's signal at this particular mass-to-charge ratio) as the IR radiation is tuned through the 1500-1700 cm$^{-1}$ frequency range, the experimentalists obtain spectral signatures (i.e., the ion intensity loss) of the IR absorption by the $Ar_k(H_2O)_4^-$ cluster ions. When they carried out this kind of experiment using $Ar_5(H_2O)_4^-$ and scanned the IR radiation in the 1500-1700 cm$^{-1}$ frequency range, they obtained the spectrum labeled A in Figure 5.24a. When they performed the same kind of experiment on $Ar_{10}(D_2O)_4^-$ and scanned in the 2400-2800 cm$^{-1}$ frequency range (which is where O-D stretching vibrations are known to occur), they obtained the spectrum labeled B in Figure 5.24a.

What the experimentalists did not know, however, is what the geometrical structure of the underlying $(H_2O)_4^-$ ion was. Nor did they know exactly which H-O-H bending or O-H (or O-D) stretching vibrations were causing the various peaks shown in panels A and B of Figure 5.24a.
By carrying out electronic structure calculations on a large number of geometries for $(H_2O)_4^-$ and searching for local minima on the ground electronic state of this ion (there are a very large number of such local minima) and then using the mass-weighted Hessian matrix at each local minimum to calculate the structure's vibrational energies, the experimentalists were able to figure out what structure for $(H_2O)_4^-$ was most consistent with their observed IR spectrum. For example, for the rather extended structure of $(H_2O)_4^-$, they computed the IR spectrum shown in panel E (and for $(D_2O)_4^-$ in panel F) of Figure 5.24a. Alternatively, for the cyclic structure of $(H_2O)_4^-$, they computed the IR spectrum shown in panel C (and for $(D_2O)_4^-$ in panel D) of Figure 5.24a. Clearly, the spectrum of panels C and D agrees much better with the experimental spectrum in panels A and B than does the spectrum of panels E and F. Based on these comparisons, these scientists concluded that the $(H_2O)_4^-$ ions in their $Ar_5(H_2O)_4^-$ and $Ar_{10}(D_2O)_4^-$ clusters have the cyclic geometry, not the extended quasi-linear geometry. Moreover, by looking at which particular vibrational modes of the cyclic $(H_2O)_4^-$ produced which peaks in panels C and D, they were able to assign each of the IR peaks seen in their data of panels A and B. This is a good example of how theoretical simulation can help interpret experimental data; without the theory, these scientists would not know the geometry of $(H_2O)_4^-$.

Contributors and Attributions
Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry
Integrated by Tomoyuki Hayashi (UC Davis)
Experimental Probes of Chemical Change

Many of the same tools that are used to determine the structures of molecules can also be used to follow the changes that the molecule undergoes as it is involved in a chemical reaction. Specifically, for any reaction in which one kind of molecule $A$ is converted into another kind $B$, one needs to have
1. the ability to identify, via some physical measurement, the experimental signatures of both $A$ and $B$,
2. the ability to relate the magnitude of these experimental signals to the concentrations $[A]$ and $[B]$ of these molecules, and
3. the ability to monitor these signals as functions of time so that these concentrations can be followed as time evolves.
The third requirement is what allows one to determine the rates at which the $A$ and $B$ molecules are reacting.

Many of the experimental tools used to identify molecules (e.g., NMR allows one to identify functional groups and near-neighbor functional groups; IR also allows functional groups to be seen) and to determine their concentrations have restricted time scales over which they can be used. For example, NMR spectra require that the sample be studied for ca. 1 second or more to obtain a usable signal. Likewise, a mass spectrometric analysis of a mixture of reacting species may require many seconds or minutes to carry out. These restrictions, in turn, limit the rates of reactions that can be followed using these experimental tools (e.g., one cannot use NMR or mass spectrometry to follow a reaction that occurs on a time scale of $10^{-12}$ s).

Especially for very fast reactions and for reactions involving unstable species that cannot easily be handled, so-called pump-probe experimental approaches are often used. For example, suppose one were interested in studying the reaction of $Cl$ radicals (e.g., as formed in the decomposition of chlorofluorocarbons (CFCs) by ultraviolet light) with ozone to generate $ClO$ and $O_2$: $Cl + O_3 \rightarrow ClO + O_2 \tag{5.3.1}$ One cannot simply deposit a known amount of $Cl$ radicals from a vessel into a container in which gaseous $O_3$ of a known concentration has been prepared; the $Cl$ radicals will recombine and react with other species, making their concentrations difficult to determine. So, alternatively, one places known concentrations of some $Cl$ radical precursor (e.g., a CFC or some other X-Cl species) and ozone into a reaction vessel. One then uses, for example, a very short light pulse whose photon frequencies are tuned to a transition that will cause the X-Cl precursor to undergo rapid photodissociation: $h\nu + X-Cl \rightarrow X + Cl \tag{5.3.2}$ Because the pump light source used to prepare the $Cl$ radicals is of very short duration ($\Delta{t}$) and because the X-Cl dissociation is prompt, one knows, to within $\Delta{t}$, the time at which the $Cl$ radicals begin to react with the ozone. The initial concentration of the $Cl$ radicals can be known if the quantum yield for the $h\nu + X-Cl \rightarrow X + Cl$ reaction is known. This means that the intensity of photons, the probability of photon absorption by X-Cl, and the fraction of excited X-Cl molecules that dissociate to produce $X + Cl$ must be known. Such information is available (albeit, from rather tedious earlier studies) for a variety of X-Cl precursors. So, knowing the time at which the $Cl$ radicals are formed and their initial concentrations, one then allows the $Cl + O_3 \rightarrow ClO + O_2$ reaction to proceed for some time duration $\Delta{t}$.
One then, at $t =\Delta{t}$, uses a second light source to probe either the concentration of the $ClO$, the $O_2$, or the $O_3$ to determine the extent of progress of the reaction. Which species is so monitored depends on the availability of light sources whose frequencies these species absorb. Such probe experiments are carried out at a series of time delays $\Delta{t}$, the result of which is the determination of the concentrations of some product or reactant species at various times after the initial pump event created the reactive $Cl$ radicals. In this way, one can monitor, for example, the $ClO$ concentration as a function of time after the $Cl$ begins to react with the $O_3$. If one has reason to believe that the reaction occurs in a single bimolecular event as $Cl + O_3 \rightarrow ClO + O_2 \tag{5.3.3}$ one can then extract the rate constant $k$ for the reaction by using the following kinetic scheme: $\dfrac{d[ClO]}{dt} = k [Cl] [O_3].\tag{5.3.4}$ If the initial concentration of $O_3$ is large compared to the amount of $Cl$ that is formed in the pump event, $[O_3]$ can be taken as constant and known. If the initial concentration of $Cl$ is denoted $[Cl]_0$, and the concentration of $ClO$ is called $x$, this kinetic equation reduces to $\dfrac{dx}{dt} = k ( [Cl]_0 -x) [O_3]\tag{5.3.5}$ the solution of which is $[ClO] = x = [Cl]_0 (1 - \exp(-k[O_3]t)).\tag{5.3.6}$ So, knowing the $[ClO]$ concentration as a function of time delay $t$, and knowing the initial ozone concentration $[O_3]$ as well as the initial $Cl$ radical concentration, one can find the rate constant $k$.

Such pump-probe experiments are necessary when one wants to study species that must be generated and allowed to react immediately. This is essentially always the case when one or more of the reactants is a highly reactive species such as a radical. There is another kind of experiment that can be used to probe very fast reactions if the reaction and its reverse reaction can be brought into equilibrium to the extent that reactants and products both exist in measurable concentrations. For example, consider the reaction of an enzyme E and a substrate S to form the enzyme-substrate complex ES: $E + S \rightleftharpoons ES.\tag{5.3.7}$ At equilibrium, the forward rate ${\rm rate}_f = k_f [E]_{eq} [S]_{eq} \tag{5.3.8}$ and the reverse rate ${\rm rate}_r = k_r [ES]_{eq} \tag{5.3.9}$ are equal: $k_f [E]_{eq} [S]_{eq} = k_r [ES]_{eq} \tag{5.3.10}$ The idea behind so-called perturbation techniques is to begin with a reaction that is in such an equilibrium condition and to then use some external means to slightly perturb the equilibrium. Because both the forward and reverse rates are assumed to be very fast, it is essential to use a perturbation that can alter the concentrations very quickly. This usually precludes simply adding a small amount of one or more of the reacting species to the reaction vessel. Instead, one might employ, for example, a fast light source or electric field pulse to perturb the equilibrium to one side or the other. For example, if the reaction thermochemistry is known, the equilibrium constant $K_{eq}$ can be changed by rapidly heating the sample (e.g., with a fast laser pulse that is absorbed and rapidly heats the sample) and using $\dfrac{d \ln{K_{eq}}}{dT} = \dfrac{\Delta{H}}{RT^2} \tag{5.3.11}$ to calculate the change in $K_{eq}$ and thus the changes in concentrations caused by the sudden heating.
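Returning for a moment to the pump-probe example above: a minimal sketch (all numbers hypothetical) of how the rate constant $k$ could be extracted by fitting measured $[ClO]$ vs. delay-time data to Eq. (5.3.6) might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

# Eq. (5.3.6): [ClO](t) = [Cl]0 (1 - exp(-k [O3] t)), with [O3] in large excess
def clo_model(t, k):
    return cl0 * (1.0 - np.exp(-k * o3 * t))

o3  = 1.0e-6   # known, essentially constant ozone concentration (hypothetical units)
cl0 = 1.0e-9   # initial Cl concentration, known from the pump step (hypothetical)

# synthetic "probe" data generated with a known k plus 2% noise
t = np.linspace(0.0, 0.3, 20)                       # delay times (s)
rng = np.random.default_rng(1)
true_k = 1.2e7
clo_data = cl0 * (1 - np.exp(-true_k * o3 * t)) * (1 + 0.02 * rng.normal(size=t.size))

# fit only k; [Cl]0 and [O3] are taken as known from the experiment
k_fit, _ = curve_fit(clo_model, t, clo_data, p0=[1.0e7])
print(f"fitted k = {k_fit[0]:.3e} (true value {true_k:.3e})")
```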
As an alternative to the temperature jump just described, if the polarity of the reactants and products is substantially different, one may use a rapidly applied electric field to quickly change the concentrations of the reactant and product species. In such experiments, the concentrations of the species are shifted by a small amount $\delta$ as a result of the application of the perturbation, so that $[ES] = [ES]_{eq} - \delta \tag{5.3.12}$ $[E] = [E]_{eq} + \delta \tag{5.3.13}$ $[S] = [S]_{eq} + \delta \tag{5.3.14}$ once the perturbation has been applied and then turned off. Subsequently, the following rate law will govern the time evolution of the concentration change $\delta$: $- \dfrac{d\delta}{dt} = - k_r ([ES]_{eq} -\delta) + k_f ([E]_{eq} + \delta) ([S]_{eq} + \delta). \tag{5.3.15}$ Assuming that $\delta$ is very small (so that the term involving $\delta^2$ can be neglected) and using the fact that the forward and reverse rates balance at equilibrium, this equation for the time evolution of $\delta$ can be reduced to: $- \dfrac{d\delta}{dt} = (k_r + k_f [S]_{eq} + k_f [E]_{eq})\, \delta. \tag{5.3.16}$ So, the concentration deviations from equilibrium will return to equilibrium (i.e., $\delta$ will decay to zero) exponentially with an effective rate coefficient that is equal to a sum of terms: $k_{eff} = k_r + k_f [S]_{eq} + k_f [E]_{eq} \tag{5.3.17}$ involving both the forward and reverse rate constants. So, by quickly perturbing an equilibrium reaction mixture for a short period of time and subsequently following the concentrations of the reactants or products as they return to their equilibrium values, one can extract the effective rate coefficient $k_{eff}$. Doing this at a variety of different initial equilibrium concentrations (e.g., $[S]_{eq}$ and $[E]_{eq}$), and seeing how $k_{eff}$ changes, one can then determine both the forward and reverse rate constants.

Both the pump-probe and the perturbation methods require that one be able to quickly create (or perturb) concentrations of reactive species and that one have available an experimental probe that allows one to follow the concentrations of at least some of the species as time evolves. Clearly, for very fast reactions, this means that one must use experimental tools that can respond on a very short time scale. Modern laser technology and molecular beam methods have provided some of the most widely used of such tools. These experimental approaches are discussed in some detail in Chapter 8.

Theoretical Simulation of Chemical Change

The most common theoretical approach to simulating a chemical reaction is to use Newtonian dynamics to follow the motion of the nuclei on a Born-Oppenheimer electronic energy surface. If the molecule of interest contains only a few ($N$) atoms, such a surface could be computed (using the methods discussed in Chapter 6) at a large number of molecular geometries $\{Q_K\}$ and then fit to an analytical function $E(\{q_J\})$ of the $3N-6$ or $3N-5$ variables denoted $\{q_J\}$. Knowing $E$ as a function of these variables, one can then compute the forces $F_J = -\dfrac{\partial{E}}{\partial{q_J}} \tag{5.3.18}$ along each coordinate, and then use the Newton equations $m_J \dfrac{d^2q_J}{dt^2} = F_J \tag{5.3.19}$ to follow the time evolution of these coordinates and hence the progress of the reaction. The values of the coordinates $\{q_J(t_L)\}$ at a series of discrete times $t_L$ constitute what is called a classical trajectory.
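A bare-bones sketch of such a trajectory propagation, anticipating the finite-difference form presented in the next paragraph (the toy potential is an assumption for illustration; in a direct-dynamics calculation the gradient would come from an electronic structure code):

```python
import numpy as np

def propagate(q0, p0, mass, grad_E, dt, nsteps):
    # Simple finite-difference integration of Newton's equations:
    #   q <- q + (p/m) dt,   p <- p - (dE/dq) dt,
    # with the gradient evaluated only at geometries the trajectory visits.
    q, p = float(q0), float(p0)
    history = [(0.0, q, p)]
    for step in range(1, nsteps + 1):
        # both updates use the values at the start of the step
        q, p = q + (p / mass) * dt, p - grad_E(q) * dt
        history.append((step * dt, q, p))
    return history

# toy one-dimensional PES: E(q) = 0.5 q^2, so dE/dq = q
for t, q, p in propagate(q0=1.0, p0=0.0, mass=1.0,
                         grad_E=lambda q: q, dt=0.05, nsteps=5):
    print(f"t={t:.2f}  q={q:+.4f}  p={p:+.4f}")
```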
To simulate a chemical reaction, one begins the trajectory with initial coordinates characteristic of the reactant species (i.e., within one of the valleys on the reactant side of the potential surface) and one follows the trajectory long enough to determine whether the collision results in
1. a non-reactive outcome characterized by final coordinates describing reactant, not product, molecules, or
2. a reactive outcome that is recognized by the final coordinates describing product molecules rather than reactants.
One must do so for a large number of trajectories whose initial coordinates and momenta are representative of the experimental conditions one is attempting to simulate. Then, one has to average the outcomes of these trajectories over this ensemble of initial conditions. More about how one carries out such ensemble averaging is discussed in Chapters 7 and 8.

If the molecule contains more than 3 or 4 atoms, it is more common not to compute the Born-Oppenheimer energy at a set of geometries and then fit these data to an analytical form. Instead, one begins a trajectory at some initial coordinates $\{q_J(0)\}$ and with some initial momenta $\{p_J(0)\}$ and then uses the Newton equations, usually in the finite-difference form: $q_J = q_J(0) + \dfrac{p_J(0)}{m_J} \delta t \tag{5.3.20}$ $p_J = p_J(0) -\dfrac{\partial E}{\partial q_J}(t=0)\, \delta t, \tag{5.3.21}$ to propagate the coordinates and momenta forward in time by a small amount $\delta{t}$. Here, $\dfrac{\partial{E}}{\partial{q_J}}(t=0)$ denotes the gradient of the BO energy computed at the $\{q_J(0)\}$ values of the coordinates. The above propagation procedure is then used again, but with the values of $q_J$ and $p_J$ appropriate to time $t = \delta{t}$ as new initial coordinates and momenta, to generate yet another set of $\{q_J\}$ and $\{p_J\}$ values. In such direct dynamics approaches, the energy gradients, which produce the forces, are computed only at geometries that the classical trajectory encounters along its time propagation. In the earlier procedure, in which the BO energy is fit to an analytical form, one often computes $E$ at geometries that the trajectory never accesses.

In carrying out such a classical trajectory simulation of a chemical reaction, there are other issues that must be addressed. In particular, as mentioned above, one can essentially never use any single trajectory to simulate a reaction carried out in a laboratory setting. One must perform a series of such trajectory calculations with a variety of different initial coordinates and momenta chosen in a manner to represent the experimental conditions of interest. For example, suppose one wished to model a molecular beam experiment in which a beam of species $A$ having a well-defined kinetic energy $E_A$ collides with a beam of species $B$ having kinetic energy $E_B$ as shown in Figure 5.25. Even though the $A$ and $B$ molecules all collide at right angles and with specified kinetic energies (and thus specified initial momenta), not all of these collisions occur head on. Figure 5.26 illustrates this point. Here, we show two collisions between an $A$ and a $B$ molecule, both of which have identical $A$ and $B$ velocities $V_A$ and $V_B$, respectively. What differs in the two events is their distance of closest approach. In the collision shown on the left, the $A$ and $B$ come together closely. However, in the collision shown on the right, the $A$ molecule is moving away from the region where $B$ would strike it before $B$ has reached it.
These two cases can be viewed from a different perspective that helps to clarify their differences. In Figure 5.27, we illustrate these two collisions viewed from a frame of reference located on the $A$ molecule. In this figure, we show the location of the $B$ molecule relative to $A$ at a series of times, showing $B$ moving from right to left. In the figure on the left, the $B$ molecule clearly undergoes a closer collision than is the case on the right. The distance of closest approach in each case is called the impact parameter, and it represents the distance of closest approach the colliding partners would have if they did not experience any attractive or repulsive interactions (as the figures above depict). Of course, when $A$ and $B$ have forces acting between them, the trajectories shown above would be modified to look more like those shown in Figure 5.28. In both of these trajectories, repulsive intermolecular forces cause the trajectory to move away from its initial path, which defines the respective impact parameters.

So, even in this molecular beam example in which both colliding molecules have well-specified velocities, one must carry out a number of classical trajectories, each with a different impact parameter $b$, to simulate the laboratory event. In practice, the impact parameters can be chosen to range from $b = 0$ (i.e., a head-on collision) to some maximum value $b_{Max}$ beyond which the $A$ and $B$ molecules no longer interact (and thus can no longer undergo reaction). Each trajectory is followed long enough to determine whether it leads to geometries characteristic of the product molecules. The fraction of such trajectories, weighted by the volume element $2\pi b\,db$ for trajectories with impact parameters in the range between $b$ and $b + db$, then gives the averaged fraction of trajectories that react (a numerical sketch of this weighting appears below).

In most simulations of chemical reactions, there are other initial conditions that also must be sampled (i.e., trajectories with a variety of initial variables must be followed) and properly weighted. For example,
1. if there is a range of velocities for the reactants $A$ and/or $B$, one must follow trajectories with velocities in this range and weigh the outcomes (i.e., reaction or not) of such trajectories appropriately (e.g., with a Maxwell-Boltzmann weighting factor), and
2. if the reactant molecules have internal bond lengths, angles, and orientations, one must follow trajectories with different initial values of these variables and properly weigh each such trajectory (e.g., using the vibrational state's coordinate probability distribution as a weighting factor for the initial values of that coordinate).
As a result, properly simulating a laboratory experiment on a chemical reaction usually requires one to follow a very large number of classical trajectories. Fortunately, such a task is well suited to distributed parallel computing, so it is currently feasible to do so even for rather complex reactions.

There is a situation in which the above classical trajectory approach can be foolish to pursue, even if there is reason to believe that a classical Newton description of the nuclear motions is adequate. This occurs when one has a rather high barrier to surmount to evolve from reactants to products and when the fraction of trajectories whose initial conditions permit this barrier to be accessed is very small.
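As promised above, here is a minimal Monte-Carlo sketch of the impact-parameter averaging (the opacity function is a made-up assumption; the $2\pi b\,db$ weighting is built directly into the sampling):

```python
import numpy as np

def reaction_cross_section(opacity, b_max, n_samples=100_000, seed=0):
    # sigma = Integral_0^{b_max} P(b) 2*pi*b db.  Sampling b with probability
    # density proportional to b (b = b_max * sqrt(u), u uniform) builds the
    # 2*pi*b db weighting into the ensemble, so sigma = pi b_max^2 <P(b)>.
    rng = np.random.default_rng(seed)
    b = b_max * np.sqrt(rng.random(n_samples))
    return np.pi * b_max**2 * opacity(b).mean()

# hypothetical opacity function: reaction probability decaying with b
P = lambda b: np.exp(-(b / 2.0) ** 2)
print(f"sigma ~ {reaction_cross_section(P, b_max=6.0):.3f} (length^2 units)")
# exact value for this P is 4*pi*(1 - exp(-9)) ~ 12.566, a useful check
```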
In such high-barrier cases, one is faced with the reactive trajectories being very rare among the full ensemble of trajectories needed to properly simulate the laboratory experiment. Certainly, one can apply the trajectory-following technique outlined above, but if one observes, for example, that only one trajectory in $10^6$ produces a reaction, one may not have adequate statistics to determine the reaction probability. One could subsequently run $10^8$ trajectories (chosen again to represent the same experiment), and see whether 100 or 53 or 212 of these trajectories react, thereby increasing the precision of the reaction probability. However, it may be computationally impractical to perform 100 times as many trajectories to achieve better accuracy in the reaction probability.

When faced with such rare-event situations, one is usually better off using an approach that breaks the problem of determining what fraction of the (properly weighted) initial conditions produce reaction into two parts:
1. among all of the (properly weighted) initial conditions, what fraction can access the high-energy barrier? and
2. of those that do access the high barrier, how many react?
This way of formulating the reaction probability question leads to the transition state theory (TST) method that is treated in detail in Chapter 8, along with some of its more common variants. Briefly, the answer to the first question posed above involves computing the quasi-equilibrium fraction of reacting species that reach the barrier region in terms of the partition functions of statistical mechanics. This step becomes practical if the chemical reactants can be assumed to be in some form of thermal equilibrium (which is where these kinds of models are useful). In the simplest form of TST, the answer to the second question posed above is taken to be "all trajectories that reach the barrier react". In more sophisticated variants, other models are introduced to take into consideration that not all trajectories that cross over the barrier indeed proceed onward to products and that some trajectories may tunnel through the barrier near its top. I will leave further discussion of the TST to Chapter 8.

In addition to the classical trajectory and TST approaches to simulating chemical reactions, there are also more quantum-mechanical approaches. These techniques should be used when the nuclei involved in the reaction include hydrogen or deuterium nuclei. A discussion of the details involved in quantum propagation is beyond the level of this Chapter, so I will delay it until Chapter 8.

Contributors and Attributions
Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry
Integrated by Tomoyuki Hayashi (UC Davis)
Learning Objectives

The subjects you should now be familiar with include:
• The Hartree and Hartree-Fock models,
• Koopmans’ theorem,
• Atomic basis functions (Slater and Gaussian) and the notations used to describe them,
• Static and dynamic electron correlation,
• The CI, MPPT, CC, and DFT methods for treating correlation, as well as EOM or Green's function methods,
• The Slater-Condon rules,
• QM-MM methods,
• Experimental tools to probe electronic structures, including methods for metastable states,
• Various contributions to spectroscopic line shapes and line broadening.

Electrons are the “glue” that holds the nuclei together in the chemical bonds of molecules and ions. Of course, it is the nuclei’s positive charges that bind the electrons to the nuclei. The competitions among Coulomb repulsions and attractions as well as the existence of non-zero electronic and nuclear kinetic energies make the treatment of the full electronic-nuclear Schrödinger equation an extremely difficult problem. Electronic structure theory deals with the quantum states of the electrons, usually within the Born-Oppenheimer approximation (i.e., with the nuclei held fixed). It also addresses the forces that the electrons’ presence creates on the nuclei; it is these forces that determine the geometries and energies of various stable structures of the molecule as well as transition states connecting these stable structures. Because there are ground and excited electronic states, each of which has different electronic properties, there are different stable-structure and transition-state geometries for each such electronic state. Electronic structure theory deals with all of these states, their nuclear structures, and the spectroscopies (e.g., electronic, vibrational, rotational) connecting them. In this Chapter, you were introduced to many of the main topics of electronic structure theory.

• 6.1: Theoretical Treatment of Electronic Structure
The Born-Oppenheimer electronic energy $E(R)$, as a function of the $3N$ coordinates of the $N$ atoms in the molecule, plays a central role. It is on this landscape that one searches for stable isomers and transition states, and it is the second derivative (Hessian) matrix of this function that provides the harmonic vibrational frequencies of such isomers. This chapter will introduce the tools used to solve the electronic Schrödinger equation to generate $E(R)$ and the electronic wave function.
• 6.2: Orbitals
• 6.3: The Hartree-Fock Approximation
The Hartree approximation ignores an important property of electronic wave functions: their permutational antisymmetry.
• 6.4: Deficiencies in the Single Determinant Model
• 6.5: Various Approaches to Electron Correlation
• 6.6: The Slater-Condon Rules
To form Hamiltonian matrix elements between any pair of Slater determinants constructed from spin-orbitals that are orthonormal, one uses the so-called Slater-Condon rules. These rules express all non-vanishing matrix elements involving either one- or two-electron operators.
• 6.7: Molecules Embedded in Condensed Media
• 6.8: High-End Methods for Treating Electron Correlation
Although their detailed treatment is beyond the scope of this text, it is important to appreciate that new approaches are always under development in all areas of theoretical chemistry. In this Section, I want to introduce you to two tools that are proving to offer high precision in the treatment of electron correlation energies. These are the so-called Quantum Monte-Carlo and $r_{1,2}$ approaches to this problem.
• 6.9: Experimental Probes of Electronic Structure
Visible and ultraviolet spectroscopies are used to study transitions between states of molecules and ions; these are called electronic transitions. When such transitions occur, the initial and final states generally differ in their electronic, vibrational, and rotational energies because any change to the electrons' orbital occupancy will induce changes in the Born-Oppenheimer energy surface which governs the vibrational and rotational character.
• 6.10: Molecular Orbitals
Before moving on to discuss methods that go beyond the HF model, it is appropriate to examine some of the computational effort that goes into carrying out a HF SCF calculation on a molecule.

06: Electronic Structure

In Chapter 5, I introduced you to the strategies that theory uses to interpret experimental data relating to such matters, and how and why theory can also be used to simulate the behavior of molecules. In carrying out simulations, the Born-Oppenheimer electronic energy $E(R)$ as a function of the $3N$ coordinates of the $N$ atoms in the molecule plays a central role. It is on this landscape that one searches for stable isomers and transition states, and it is the second derivative (Hessian) matrix of this function that provides the harmonic vibrational frequencies of such isomers. In the present Chapter, I want to provide you with an introduction to the tools that we use to solve the electronic Schrödinger equation to generate $E(R)$ and the electronic wave function $\psi(r|R)$. In essence, this treatment will focus on orbitals of atoms and molecules and how we obtain and interpret them.

For an atom, one can approximate the orbitals by using the solutions of the hydrogenic Schrödinger equation discussed in Part 1 of this text. Although such functions are not proper solutions to the actual $N$-electron Schrödinger equation (believe it or not, no one has ever solved exactly any such equation for $N > 1$) of any atom, they can be used as perturbation or variational starting-point approximations when one may be satisfied with qualitatively accurate answers. In particular, the solutions of this one-electron hydrogenic problem form the qualitative basis for much of atomic and molecular orbital theory. As discussed in detail in Part 1, these orbitals are labeled by $n$, $l$, and $m$ quantum numbers for the bound states and by $l$ and $m$ quantum numbers and the energy $E$ for the continuum states.

Much as the particle-in-a-box orbitals are used to qualitatively describe $\pi$-electrons in conjugated polyenes or electronic bands in solids, these so-called hydrogen-like orbitals provide qualitative descriptions of orbitals of atoms with more than a single electron. By introducing the concept of screening as a way to represent the repulsive interactions among the electrons of an atom, an effective nuclear charge $Z_{\rm eff}$ can be used in place of $Z$ in the hydrogenic $\psi_{n,l,m}$ and $E_{n,l}$ formulas to generate approximate atomic orbitals to be filled by electrons in a many-electron atom. For example, in the crudest approximation of a carbon atom, the two $1s$ electrons experience the full nuclear attraction so $Z_{\rm eff} =6$ for them, whereas the $2s$ and $2p$ electrons are screened by the two $1s$ electrons, so $Z_{\rm eff}= 4$ for them.
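A quick numerical sketch of this crude screening model, using the $Z_{\rm eff}$ values just quoted in the standard hydrogenic formula $E_n = -13.6\, Z_{\rm eff}^2/n^2$ eV (a rough illustration only; as stressed below, such numbers should not be trusted quantitatively):

```python
# Screened hydrogenic orbital energies E_n = -13.6 * Z_eff^2 / n^2 (in eV),
# using the crude Z_eff values quoted above for the carbon atom.
def screened_energy_eV(z_eff, n):
    return -13.6 * z_eff**2 / n**2

for label, z_eff, n in [("1s", 6.0, 1), ("2s", 4.0, 2), ("2p", 4.0, 2)]:
    print(f"{label}: Z_eff = {z_eff:.0f},  E ~ {screened_energy_eV(z_eff, n):8.1f} eV")
# as the surrounding text stresses, such energies are only qualitatively useful
```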
Within this approximation, one then occupies two $1s$ orbitals with $Z=6$, two $2s$ orbitals with $Z=4$, and two $2p$ orbitals with $Z=4$ in forming the full six-electron product wave function of the lowest-energy state of carbon: $\psi(1, 2, \ldots, 6) = \psi_{1s}\alpha(1)\, \psi_{1s}\beta(2)\, \psi_{2s}\alpha(3)\, \psi_{2s}\beta(4)\, \psi_{2p}\alpha(5)\, \psi_{2p}\beta(6).$ However, such approximate orbitals are not sufficiently accurate to be of use in quantitative simulations of atomic and molecular structure. In particular, their energies do not properly follow the trends in atomic orbital (AO) energies that are taught in introductory chemistry classes and that are shown pictorially in Figure 6.1. For example, the relative energies of the $3d$ and $4s$ orbitals are not adequately described in a model that treats electron repulsion effects in terms of a simple screening factor. So, now it is time to examine how we can move beyond the screening model and take the electron repulsion effects, which cause the inter-electronic couplings that render the Schrödinger equation insoluble, into account in a more reliable manner.

Atomic Units

The electronic Hamiltonian that appears throughout this text is commonly expressed in the literature and in other texts in so-called atomic units (au). In that form, it is written as follows: $H_e = \sum_j \left[ -\frac{1}{2} \nabla_j^2 - \sum_a \frac{Z_a}{r_{j,a}} \right] + \sum_{j< k} \frac{1}{r_{j,k}} .$ Atomic units are introduced to remove all of the $\hbar$, $e$, and $m_e$ factors from the Schrödinger equation. To effect the unit transformation that results in the Hamiltonian appearing as above, one notes that the kinetic energy operator scales as $r_j^{-2}$ whereas the Coulomb potentials scale as $r_j^{-1}$ and as $r_{j,k}^{-1}$. So, if each of the Cartesian coordinates of the electrons and nuclei were expressed as a unit of length $a_0$ multiplied by a dimensionless length factor, the kinetic energy operator would involve terms of the form $-\dfrac{\hbar^2}{2 m_e a_0^2} \nabla_j^2$, and the Coulomb potentials would appear as $\dfrac{Z_a e^2}{a_0 r_{j,a}}$ and $\dfrac{e^2}{a_0 r_{j,k}}$, with the $r_{j,a}$ and $r_{j,k}$ factors now referring to the dimensionless coordinates. A factor of $e^2/a_0$ (which has units of energy since $a_0$ has units of length) can then be removed from the Coulomb and kinetic energies, after which the kinetic energy terms appear as $-\dfrac{\hbar^2}{2 m_e e^2 a_0} \nabla_j^2$ and the potential energies appear as $Z_a/r_{j,a}$ and $1/r_{j,k}$. Then, choosing $a_0 = \dfrac{\hbar^2}{m_e e^2}$ changes the kinetic energy terms into $-\tfrac{1}{2} \nabla_j^2$; as a result, the entire electronic Hamiltonian takes the form given above in which no $e^2$, $m_e$, or $\hbar^2$ factors appear. The value of the so-called Bohr radius $a_0 = \hbar^2/(m_e e^2)$ turns out to be 0.529 Å, and the so-called Hartree energy unit $e^2/a_0$, which factors out of $H_e$, is 27.21 eV or 627.51 kcal/mol.
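One can verify the quoted atomic-unit values directly from tabulated physical constants; a small sketch using scipy.constants (converting the Gaussian-units definition $a_0 = \hbar^2/(m_e e^2)$ to SI by the usual $4\pi\epsilon_0$ factor):

```python
from scipy import constants as c

# Bohr radius: a0 = hbar^2/(m_e e^2) in Gaussian units, i.e.
# a0 = 4*pi*eps0*hbar^2/(m_e e^2) in SI units
a0 = 4 * c.pi * c.epsilon_0 * c.hbar**2 / (c.m_e * c.e**2)
print(f"a0      = {a0 * 1e10:.4f} Angstrom")              # ~0.5292

# Hartree energy: e^2/a0 (Gaussian), equal to hbar^2/(m_e a0^2)
hartree = c.hbar**2 / (c.m_e * a0**2)
print(f"hartree = {hartree / c.e:.2f} eV")                 # ~27.21
print(f"        = {hartree * c.N_A / 4184:.2f} kcal/mol")  # ~627.5
```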
Hartree Description

The energies and wave functions within the most commonly used theories of atomic structure are assumed to arise as solutions of a Schrödinger equation whose Hamiltonian $h_e(r)$ possesses three kinds of energies:
1. Kinetic energy, whose average value is computed by taking the expectation value of the kinetic energy operator $-\dfrac{\hbar^2}{2m} \nabla^2$ with respect to any particular solution $\phi_j(r)$ to the Schrödinger equation: $KE = \langle\phi_j| -\dfrac{\hbar^2}{2m} \nabla^2 |\phi_j\rangle$
2. Coulombic attraction energy with the nucleus of charge $Z$: $\langle\phi_j| -\dfrac{Ze^2}{r} |\phi_j\rangle$
3. Coulomb repulsion energies with all of the $N-1$ other electrons, which are assumed to occupy other atomic orbitals (AOs) denoted $\phi_K$, with this energy computed as $\sum_K \langle\phi_j(r) \phi_K(r’) |\frac{e^2}{|r-r’|} | \phi_j(r) \phi_K(r’)\rangle.\label{6.1.2}$
The Dirac notation $\langle\phi_j(r) \phi_K(r’) |\dfrac{e^2}{|r-r’|} | \phi_j(r) \phi_K(r’)\rangle$ is used to represent the six-dimensional Coulomb integral $J_{J,K} = \int |\phi_j(r)|^2 |\phi_K(r’)|^2 \dfrac{e^2}{|r-r’|} dr\, dr’ \label{6.1.3}$ that describes the Coulomb repulsion between the charge density $|\phi_j(r)|^2$ for the electron in $\phi_j$ and the charge density $|\phi_K(r’)|^2$ for the electron in $\phi_K$. Of course, the sum over $K$ must be limited to exclude $K=J$ to avoid counting a “self-interaction” of the electron in orbital $\phi_j$ with itself.

The total energy $\epsilon_J$ of the orbital $\phi_j$ is the sum of the above three contributions: $\epsilon_J = \langle\phi_j| - \frac{\hbar^2}{2m} \nabla^2 |\phi_j\rangle + \langle\phi_j| -\frac{Ze^2}{r} |\phi_j\rangle + \sum_K \langle\phi_j(r) \phi_K(r’) |\frac{e^2}{|r-r’|} | \phi_j(r) \phi_K(r’)\rangle.\label{6.1.4}$ This treatment of the electrons and their orbitals is referred to as the Hartree level of theory. As stated above, when screened hydrogenic AOs are used to approximate the $\phi_j$ and $\phi_K$ orbitals, the resultant $\epsilon_J$ values do not produce accurate predictions. For example, the negative of $\epsilon_J$ should approximate the ionization energy for removal of an electron from the AO $\phi_j$. Such ionization potentials (IPs) can be measured, and the measured values do not agree well with the theoretical values when a crude screening approximation is made for the AOs.

LCAO-Expansion

To improve upon the use of screened hydrogenic AOs, it is most common to approximate each of the Hartree AOs {$\phi_K$} as a linear combination of so-called basis AOs {$\chi_\mu$}: $\phi_J = \sum_\mu C_{J,\mu} \chi_\mu\label{6.1.5}$ using what is termed the linear-combination-of-atomic-orbitals (LCAO) expansion. In this equation, the expansion coefficients {$C_{J,\mu}$} are the variables that are to be determined by solving the Schrödinger equation $h_e \phi_J = \epsilon_J \phi_J. \label{6.1.6}$ After substituting the LCAO expansion for $\phi_J$ into this Schrödinger equation, multiplying on the left by one of the basis AOs $\chi_\nu$, and then integrating over the coordinates of the electron in $\phi_J$, one obtains $\sum_\mu \langle\chi_\nu| h_e| \chi_\mu\rangle C_{J,\mu} = \epsilon_J \sum_\mu \langle\chi_\nu| \chi_\mu\rangle C_{J,\mu} . \label{6.1.7}$ This is a matrix eigenvalue equation in which the $\epsilon_J$ and {$C_{J,\mu}$} appear as eigenvalues and eigenvectors. The matrices $\langle\chi_\nu| h_e| \chi_\mu\rangle$ and $\langle\chi_\nu| \chi_\mu\rangle$ are called the Hamiltonian and overlap matrices, respectively.
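Numerically, this is a generalized eigenvalue problem $\mathbf{H}\mathbf{C} = \epsilon\,\mathbf{S}\mathbf{C}$; a toy sketch with made-up $2\times 2$ Hamiltonian and overlap matrices (the numbers are assumptions, not taken from any real system):

```python
import numpy as np
from scipy.linalg import eigh

# made-up 2x2 Hamiltonian and overlap matrices in a non-orthogonal basis
H = np.array([[-1.0, -0.2],
              [-0.2, -0.5]])
S = np.array([[ 1.0,  0.3],
              [ 0.3,  1.0]])

# scipy's eigh solves the generalized symmetric problem H C = eps S C directly
eps, C = eigh(H, S)
print("orbital energies eps_J :", eps)
print("LCAO coefficients (columns are orbitals):\n", C)
```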
An explicit expression for the Hamiltonian matrix is obtained by introducing the earlier definition of $h_e$: $\langle\chi_\nu| h_e| \chi_\mu\rangle = \langle\chi_\nu| - \frac{\hbar^2}{2m} \nabla^2 |\chi_\mu\rangle + \langle\chi_\nu| -\frac{Ze^2}{r} |\chi_\mu\rangle \label{6.1.8}$ $+ \sum_{\eta,\gamma} \sum_K C_{K,\eta} C_{K,\gamma} \langle\chi_\nu(r) \chi_\eta(r’) |\frac{e^2}{|r-r’|} | \chi_\mu(r) \chi_\gamma(r’)\rangle. \label{6.1.9}$ An important thing to notice about the form of the matrix Hartree equations is that to compute the Hamiltonian matrix, one must know the LCAO coefficients {$C_{K,\gamma}$} of the orbitals which the electrons occupy. On the other hand, these LCAO coefficients are supposed to be found by solving the Hartree matrix eigenvalue equations. This paradox leads to the need to solve these equations iteratively in a so-called self-consistent field (SCF) technique. In the SCF process, one inputs an initial approximation to the {$C_{K,\gamma}$} coefficients. This then allows one to form the Hamiltonian matrix defined above. The Hartree matrix equations $\sum_\mu \langle\chi_\nu| h_e| \chi_\mu\rangle C_{J,\mu} = \epsilon_J \sum_\mu \langle\chi_\nu| \chi_\mu\rangle C_{J,\mu} \label{6.1.10}$ are then solved for new {$C_{K,\gamma}$} coefficients and for the orbital energies {$\epsilon_K$}. The new LCAO coefficients of those orbitals that are occupied are then used to form a new Hamiltonian matrix, after which the Hartree equations are again solved for another generation of LCAO coefficients and orbital energies. This process is continued until the orbital energies and LCAO coefficients obtained in successive iterations do not differ appreciably. Upon such convergence, one says that a self-consistent field has been realized because the {$C_{K,\gamma}$} coefficients are used to form a Coulomb field potential that details the electron-electron interactions.

Basis Sets

Slater-type orbitals and Gaussian-type orbitals

As noted above, it is possible to use the screened hydrogenic orbitals as the {$\chi_\mu$}. However, much effort has been expended at developing alternative sets of functions to use as basis orbitals. The result of this effort has been to produce two kinds of functions that currently are widely used. The basis orbitals commonly used in the LCAO process fall into two primary classes:
1. Slater-type orbitals (STOs) $\chi_{n,l,m} (r,\theta,\phi) = N_{n,l,m,z}\, Y_{l,m} (\theta,\phi)\, r^{n-1} e^{-zr}$ are characterized by quantum numbers $n$, $l$, and $m$ and exponents $z$ (which characterize the orbital’s radial size). The symbol $N_{n,l,m,z}$ denotes the normalization constant.
2. Cartesian Gaussian-type orbitals (GTOs) $\chi_{a,b,c} (r,\theta,\phi) = N'_{a,b,c,\alpha}\, x^a y^b z^c e^{-\alpha r^2}$ are characterized by quantum numbers $a$, $b$, and $c$, which detail the angular shape and direction of the orbital, and exponents $\alpha$ which govern the radial size.
For both types of AOs, the coordinates $r$, $\theta$, and $\phi$ refer to the position of the electron relative to a set of axes attached to the nucleus on which the basis orbital is located. Note that Slater-type orbitals (STOs) are similar to hydrogenic orbitals in the region close to the nucleus. Specifically, they have a non-zero slope near the nucleus: $\dfrac{d}{dr}\left(e^{-zr}\right)_{r=0} = -z.$ In contrast, GTOs have zero slope near $r=0$ because $\dfrac{d}{dr}\left(e^{-\alpha r^2}\right)_{r=0} = 0.$ We say that STOs display a cusp at $r=0$ that is characteristic of the hydrogenic solutions, whereas GTOs do not.
Although STOs have the proper cusp behavior near nuclei, they are used primarily for atomic and linear-molecule calculations because the multi-center integrals $\langle\chi_\mu(1) \chi_\kappa(2)|\dfrac{e^2}{|r_1-r_2|}| \chi_\nu(1) \chi_\gamma(2)\rangle\label{6.1.11}$ which arise in polyatomic-molecule calculations (we will discuss these integrals later in this Chapter) cannot efficiently be evaluated when STOs are employed. In contrast, such integrals can routinely be computed when GTOs are used. This fundamental advantage of GTOs has led to the dominance of these functions in molecular quantum chemistry.

To overcome the primary weakness of GTO functions (i.e., their radial derivatives vanish at the nucleus), it is common to combine two, three, or more GTOs, with combination coefficients that are fixed and not treated as LCAO parameters, into new functions called contracted GTOs (CGTOs). Typically, a series of radially tight, medium, and loose GTOs are multiplied by contraction coefficients and summed to produce a CGTO that approximates the proper cusp at the nuclear center (although no such combination of GTOs can exactly produce such a cusp, because each GTO has zero slope at $r = 0$); a small numerical illustration of such a contraction appears at the end of this section.

Although most calculations on molecules are now performed using Gaussian orbitals, it should be noted that other basis sets can be used as long as they span enough of the regions of space (radial and angular) where significant electron density resides. In fact, it is possible to use plane wave orbitals of the form $\chi(r,\theta,\phi) = N\exp[i(k_x r \sin{\theta} \cos{\phi} + k_y r \sin{\theta} \sin{\phi} + k_z r \cos{\theta})],\label{6.1.12}$ where $N$ is a normalization constant and $k_x$, $k_y$, and $k_z$ are quantum numbers detailing the momenta or wavelength of the orbital along the $x$, $y$, and $z$ Cartesian directions. The advantage to using such simple orbitals is that the integrals one must perform are much easier to handle with such functions. The disadvantage is that one must use many such functions to accurately describe sharply peaked charge distributions of, for example, inner-shell core orbitals while still retaining enough flexibility to also describe the much smoother electron density in the valence regions.

Much effort has been devoted to developing and tabulating in widely available locations sets of STO or GTO basis orbitals for main-group elements and transition metals. This ongoing effort is aimed at providing standard basis set libraries which:
1. Yield predictable chemical accuracy in the resultant energies.
2. Are cost effective to use in practical calculations.
3. Are relatively transferable so that a given atom's basis is flexible enough to be used for that atom in various bonding environments (e.g., hybridization and degree of ionization).

Fundamental Core and Valence Basis

In constructing an atomic orbital basis, one can choose from among several classes of functions. First, the size and nature of the primary core and valence basis must be specified. Within this category, the following choices are common:
1. A minimal basis in which the number of CGTO orbitals is equal to the number of core and valence atomic orbitals in the atom.
2. A double-zeta (DZ) basis in which twice as many CGTOs are used as there are core and valence atomic orbitals. The use of more basis functions is motivated by a desire to provide additional variational flexibility so the LCAO process can generate molecular orbitals of variable diffuseness as the local electronegativity of the atom varies.
A valence double-zeta (VDZ) basis has only one CGTO to represent the inner-shell orbitals, but uses two sets of CGTOs to describe the valence orbitals.
3. A triple-zeta (TZ) basis in which three times as many CGTOs are used as the number of core and valence atomic orbitals (of course, there are quadruple-zeta and higher-zeta bases also). Moreover, there are VTZ bases that treat the inner-shell orbitals with one CGTO and the valence orbitals with three CGTOs.

Optimization of the orbital exponents ($z$’s or $\alpha$’s) and the GTO-to-CGTO contraction coefficients for the kind of bases described above has undergone considerable growth in recent years. The theory group at the Pacific Northwest National Labs (PNNL) offers a web site from which one can find (and even download in a form prepared for input to any of several commonly used electronic structure codes) a wide variety of Gaussian atomic basis sets. This site can be accessed here. Professor Kirk Peterson at Washington State University is involved in the PNNL basis set development project, but he also hosts his own basis set site.

Polarization Functions

One usually enhances any core and valence basis set with a set of so-called polarization functions. They are functions of one higher angular momentum than appears in the atom's valence orbital space (e.g., $d$-functions for C, N, and O and $p$-functions for H), and they have exponents ($z$ or $\alpha$) which cause their radial sizes to be similar to the sizes of the valence orbitals (i.e., the polarization $p$ orbitals of the H atom are similar in size to the $1s$ orbital rather than to the $2s$ valence orbital of hydrogen). Thus, they are not orbitals which describe the atom's valence orbital with one higher $l$-value; such higher-$l$ valence orbitals would be radially more diffuse.

A primary purpose of polarization functions is to give additional angular flexibility to the LCAO process in forming bonding orbitals between pairs of valence atomic orbitals. This is illustrated in Figure 6.1.2 where polarization $d_\pi$ orbitals on C and O are seen to contribute to formation of the bonding $\pi$ orbital of a carbonyl group by allowing polarization of the carbon atom's $p_\pi$ orbital toward the right and of the oxygen atom's $p_\pi$ orbital toward the left. Polarization functions are essential in strained ring compounds such as cyclopropane because they provide the angular flexibility needed to direct the electron density into regions between bonded atoms, but they are also important in unstrained compounds when high accuracy is required.

Diffuse Functions

When dealing with anions or Rydberg states, one must further augment the AO basis set by adding so-called diffuse basis orbitals. The valence and polarization functions described above do not provide enough radial flexibility to adequately describe either of these cases. The PNNL web site data base cited above offers a good source for obtaining diffuse functions appropriate to a variety of atoms, as does the site of Prof. Kirk Peterson.

Once one has specified an atomic orbital basis for each atom in the molecule, the LCAO-MO procedure can be used to determine the $C_{\mu,i}$ coefficients that describe the occupied and virtual (i.e., unoccupied) orbitals. It is important to keep in mind that the basis orbitals are not themselves the SCF orbitals of the isolated atoms; even the proper atomic orbitals are combinations (with atomic values for the $C_{\mu,i}$ coefficients) of the basis functions.
The LCAO-MO-SCF process itself determines the magnitudes and signs of the $C_{\nu,i}$ coefficients. In particular, it is alternations in the signs of these coefficients that allow radial nodes to form.
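Before leaving basis sets, here is the promised illustration of the CGTO contraction idea: a least-squares fit of three Gaussians to a $1s$-type STO on a radial grid. The exponents and contraction coefficients below are fitted on the spot for illustration and are not taken from any published basis set.

```python
import numpy as np
from scipy.optimize import curve_fit

r = np.linspace(0.01, 6.0, 400)       # radial grid (atomic units)
sto = np.exp(-r)                       # 1s-type STO with exponent z = 1

def three_gaussians(r, c1, c2, c3, a1, a2, a3):
    return (c1 * np.exp(-a1 * r**2)
            + c2 * np.exp(-a2 * r**2)
            + c3 * np.exp(-a3 * r**2))

p0 = [0.4, 0.4, 0.2, 0.1, 0.5, 2.5]    # rough starting guess
popt, _ = curve_fit(three_gaussians, r, sto, p0=p0, maxfev=20_000)
fit = three_gaussians(r, *popt)
print("contraction coefficients:", np.round(popt[:3], 4))
print("Gaussian exponents      :", np.round(popt[3:], 4))
print("max |STO - CGTO| on grid:", np.max(np.abs(sto - fit)))
# however well the fit does at moderate r, the slope of any Gaussian sum is
# exactly zero at r = 0, so the STO cusp is never reproduced exactly
```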
Unfortunately, the Hartree approximation ignores an important property of electronic wavefunctions: their permutational antisymmetry. The full electronic Hamiltonian $H = \sum_j \left[- \dfrac{\hbar^2}{2m} \nabla^2_j - \dfrac{Ze^2}{r_j}\right] + \dfrac{1}{2} \sum_{j \neq k} \dfrac{e^2}{|r_j-r_k|}\label{6.1.13}$ is invariant (i.e., is left unchanged) under the operation $P_{i,j}$ in which a pair of electrons have their labels ($i$, $j$) permuted. We say that $H$ commutes with the permutation operator $P_{i,j}$. This fact implies that any solution $\psi$ to $H\psi = E\psi$ must also be an eigenfunction of $P_{i,j}$. Because applying a permutation operator twice returns the identity, $PP=1$, it can be seen that the eigenvalues of $P_{i,j}$ must be either $+1$ or $-1$. That is, if $P \psi=c\psi$, then $P P \psi = cc \psi$, but $PP=1$ means that $cc = 1$, so $c = +1$ or $-1$. As a result of $H$ commuting with electron permutation operators and of this property of $P$, the eigenfunctions $\psi$ must either be odd or even under the application of any such permutation. Particles whose wavefunctions are even under $P$ are called Bose particles or Bosons; those for which $\psi$ is odd are called Fermions. Electrons belong to the latter class of particles.

The simple spin-orbital product function used in Hartree theory $\psi= \prod_{k=1}^N \phi_k\label{6.1.14}$ does not have the proper permutational symmetry. For example, the Be atom function $\psi = 1s\alpha(1) 1s\beta(2) 2s\alpha(3) 2s\beta(4) \label{6.1.15}$ is not odd under the interchange of the labels of electrons 3 and 4; instead one obtains $P_{3,4} \psi = 1s\alpha(1) 1s\beta(2) 2s\alpha(4) 2s\beta(3).$ However, such products of spin-orbitals (i.e., orbitals multiplied by $\alpha$ or $\beta$ spin functions) can be made into properly antisymmetric functions by forming the determinant of an $N \times N$ matrix whose row index labels the spin-orbital and whose column index labels the electron. For example, the Be atom function $1s\alpha(1) 1s\beta(2) 2s\alpha(3) 2s\beta(4)$ produces the $4 \times 4$ matrix whose determinant is shown below: $\left|\begin{array}{cccc} 1s\alpha(1) & 1s\alpha(2) & 1s\alpha(3) & 1s\alpha(4)\\ 1s\beta(1) & 1s\beta(2) & 1s\beta(3) & 1s\beta(4)\\ 2s\alpha(1) & 2s\alpha(2) & 2s\alpha(3) & 2s\alpha(4)\\ 2s\beta(1) & 2s\beta(2) & 2s\beta(3) & 2s\beta(4) \end{array}\right|$ Clearly, if one were to interchange any columns of this determinant, one changes the sign of the function. Moreover, if a determinant contains two or more rows that are identical (i.e., if one attempts to form such a function having two or more spin-orbitals equal), it vanishes. This is how such antisymmetric wavefunctions embody the Pauli exclusion principle.

A convenient way to write such a determinant is as follows: $\sum_P (-1)^p \phi_{P1} (1) \phi_{P2}(2) \cdots \phi_{PN}(N),$ where the sum is over all $N!$ permutations of the $N$ spin-orbitals and the notation $(-1)^p$ means that a $-1$ is affixed to any permutation that involves an odd number of pair-wise interchanges of spin-orbitals and a $+1$ sign is given to any that involves an even number. To properly normalize such a determinantal wavefunction, one must multiply it by $\dfrac{1}{\sqrt{N!}}$.
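A small numerical check of these statements, using toy one-dimensional "orbitals" (hypothetical throughout): evaluating the determinant form of $\psi$ shows the sign change under electron interchange and the vanishing that enforces the Pauli principle.

```python
import numpy as np
from math import factorial

def slater_det_value(spin_orbitals, coords):
    # psi = (1/sqrt(N!)) sum_P (-1)^p phi_{P1}(1)...phi_{PN}(N), which equals
    # det(M)/sqrt(N!) with M[i, j] = spin-orbital i evaluated at electron j
    M = np.array([[phi(x) for x in coords] for phi in spin_orbitals])
    return np.linalg.det(M) / np.sqrt(factorial(len(spin_orbitals)))

phi1 = lambda x: np.exp(-x**2)          # toy "spin-orbital" 1
phi2 = lambda x: x * np.exp(-x**2)      # toy "spin-orbital" 2

x1, x2 = 0.3, 1.1
print(slater_det_value([phi1, phi2], [x1, x2]))  # psi(1,2)
print(slater_det_value([phi1, phi2], [x2, x1]))  # = -psi(1,2): antisymmetry
print(slater_det_value([phi1, phi1], [x1, x2]))  # ~0: identical spin-orbitals
```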
So, the final result is that a wavefunction of the form $\psi = \frac{1}{\sqrt{N!}} \sum_P (-1)^p \phi_{P1} (1) \phi_{P2}(2) \cdots \phi_{PN}(N),\label{6.1.16}$ which is often written in the short-hand notation $\psi = |\phi_1 (1) \phi_2(2) \cdots \phi_N(N)|\label{6.1.17}$ has the proper permutational antisymmetry. Note that such functions consist of a sum of $N!$ terms, all of which have exactly the same number of electrons occupying the same spin-orbitals; the only difference among the $N!$ terms involves which electron occupies which spin-orbital. For example, in the $1s\alpha 2s\alpha$ function appropriate to the excited state of He, one has $\psi = \frac{1}{\sqrt{2}} \{1s\alpha(1) 2s\alpha(2) - 2s\alpha(1) 1s\alpha(2)\} \label{6.1.18}$ This function is clearly odd under the interchange of the labels of the two electrons, yet each of its two components has one electron in a $1s\alpha$ spin-orbital and another electron in a $2s\alpha$ spin-orbital.

Although having to make $\psi$ antisymmetric appears to complicate matters significantly, it turns out that the Schrödinger equation appropriate to the spin-orbitals in such an antisymmetrized product wavefunction is nearly the same as the Hartree Schrödinger equation treated earlier. In fact, if one variationally minimizes the expectation value of the $N$-electron Hamiltonian for the above antisymmetric product wavefunction subject to the condition that the spin-orbitals are orthonormal $\langle\phi_J(r)| \phi_K(r)\rangle = \delta_{J,K} \label{6.1.19}$ one obtains the following equation for the optimal {$\phi_J(r)$}: $h_e \phi_J = \left[- \dfrac{\hbar^2}{2m} \nabla^2 -\dfrac{Ze^2}{r} + \sum_K \langle\phi_K(r’) | \dfrac{e^2}{|r-r’|} | \phi_K(r’)\rangle \right] \phi_J(r)\label{6.1.20}$ $- \sum_K \langle\phi_K(r’) |\dfrac{e^2}{|r-r’|} | \phi_J(r’)\rangle \phi_K(r) = \epsilon_J \phi_J(r).\label{6.1.21}$ In this expression, which is known as the Hartree-Fock equation, the same kinetic and nuclear attraction potentials occur as in the Hartree equation. Moreover, the same Coulomb potential $\sum_K \int \phi_K(r’) \dfrac{e^2}{|r-r’|} \phi_K(r’) dr’ = \sum_K \langle\phi_K(r’)|\dfrac{e^2}{|r-r’|} |\phi_K(r’)\rangle = \sum_K J_K (r)\label{6.1.22}$ appears. However, one also finds a so-called exchange contribution to the Hartree-Fock potential that is equal to $\sum_L \langle\phi_L(r’) |\dfrac{e^2}{|r-r’|} | \phi_J(r’)\rangle \phi_L(r)$ and is often written in short-hand notation as $\sum_L K_L \phi_J(r)$. Notice that the Coulomb and exchange terms cancel for the $L=J$ case; this causes the artificial self-interaction term $J_L \phi_L(r)$ that can appear in the Hartree equations (unless one explicitly eliminates it) to automatically cancel with the exchange term $K_L \phi_L(r)$ in the Hartree-Fock equations.
To derive the above Hartree-Fock equations, one must make use of the so-called Slater-Condon rules to express the Hamiltonian expectation value as $\langle|\phi_1(1)\phi_2(2)\cdots \phi_{N-1}(N-1)\phi_N(N)|H|\phi_1(1)\phi_2(2)\cdots \phi_{N-1}(N-1)\phi_N(N)|\rangle =\sum_{j=1}^N\langle\phi_j(r)|-\frac{\hbar^2}{2m}\nabla^2-\frac{Ze^2}{r}|\phi_j(r)\rangle +\frac{1}{2}\sum_{j,k=1}^N\left[ \langle\phi_j(r)\phi_k(r')|\frac{e^2}{|r-r'|}|\phi_j(r)\phi_k(r')\rangle - \langle\phi_j(r)\phi_k(r')|\frac{e^2}{|r-r'|}|\phi_k(r)\phi_j(r')\rangle\right]\nonumber$ This expectation value is a sum of terms (the kinetic energy and electron-nuclear Coulomb potentials) that depend quadratically on the spin-orbitals (i.e., as $\langle \phi| {\rm operator} |\phi\rangle$) plus another sum (the Coulomb and exchange electron-electron interaction terms) that depends on the fourth power of the spin-orbitals (i.e., as $\langle \phi\phi |{\rm operator} |\phi\phi \rangle$). When these terms are differentiated to minimize the expectation value, they generate factors that scale linearly and with the third power of the spin-orbitals. These are the factors $\left[-\dfrac{\hbar^2}{2m} \nabla^2 -\frac{Ze^2}{r} \right] \phi_J(r)\nonumber$ and $\sum_K \langle\phi_K(r') |\frac{e^2}{|r-r'|} | \phi_K(r')\rangle \phi_J(r) - \sum_K \langle \phi_K(r') |\frac{e^2}{|r-r'|} | \phi_J(r')\rangle \phi_K(r) \label{6.1.23}$ appearing in the Hartree-Fock equations shown above. When the LCAO expansion of each Hartree-Fock (HF) spin-orbital is substituted into the above HF Schrödinger equation, a matrix equation is again obtained: $\sum_\mu \langle\chi_\nu |h_e| \chi_\mu\rangle C_{J,\mu} = \epsilon_J \sum_\mu \langle\chi_\nu|\chi_\mu\rangle C_{J,\mu}\label{6.1.24}\nonumber$ where the overlap integral $\langle\chi_\nu|\chi_\mu\rangle$ is as defined earlier, and the $h_e$ matrix element is $\langle\chi_\nu| h_e| \chi_\mu\rangle = \langle\chi_\nu| -\dfrac{\hbar^2}{2m} \nabla^2 |\chi_\mu\rangle + \langle\chi_\nu| -\frac{Ze^2}{r} |\chi_\mu\rangle \label{6.1.25}$ $+ \sum_{K,\eta,\gamma} C_{K,\eta} C_{K,\gamma} [\langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\mu(r) \chi_\gamma(r')\rangle - \langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\gamma(r) \chi_\mu (r')\rangle].\label{6.1.26}$ Clearly, the only difference between this expression and the corresponding result of Hartree theory is the presence of the last term, the exchange integral. The SCF iterative procedure used to solve the Hartree equations is again used to solve the HF equations. It is useful to reflect on the physical meaning of the Coulomb and exchange interactions between pairs of orbitals. For example, the Coulomb integral $J_{1,2} = \int |\phi_1(r)|^2 \frac{e^2}{|r-r'|} |\phi_2(r')|^2 dr dr' \label{6.1.27}\nonumber$ appropriate to the two orbitals shown in Figure 6.1.3 represents the Coulombic repulsion energy $\dfrac{e^2}{|r-r'|}$ of two charge densities, $|\phi_1|^2$ and $|\phi_2|^2$, integrated over all locations $r$ and $r'$ of the two electrons. In contrast, the exchange integral $K_{1,2} = \int \phi_1(r) \phi_2(r') \frac{e^2}{|r-r'|} \phi_2(r) \phi_1(r') dr dr'\label{6.1.28}\nonumber$ can be thought of as the Coulombic repulsion between two electrons whose coordinates $r$ and $r'$ are both distributed throughout the "overlap region" of $\phi_1$ and $\phi_2$. This overlap region is where both $\phi_1$ and $\phi_2$ have appreciable magnitude, so exchange integrals tend to be significant in magnitude only when the two orbitals involved have substantial regions of overlap.
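The matrix form of the HF equations above is a generalized eigenvalue problem. A schematic numpy sketch of a single SCF step is shown below; all arrays (hcore, S, eri, C) are randomly generated placeholders standing in for the real integrals, not data for an actual molecule.

```python
import numpy as np
from scipy.linalg import eigh

# Schematic sketch of one Hartree-Fock SCF step (hypothetical data).
# eri[nu, eta, mu, gamma] stands for <chi_nu chi_eta|e^2/|r-r'||chi_mu chi_gamma>.
M, N_occ = 6, 3
rng = np.random.default_rng(1)
A = rng.normal(size=(M, M)); S = A @ A.T + M * np.eye(M)   # positive-definite overlap
hcore = rng.normal(size=(M, M)); hcore = 0.5 * (hcore + hcore.T)
eri = rng.normal(size=(M, M, M, M))     # placeholder integral array

C = rng.normal(size=(M, N_occ))         # guess for occupied MO coefficients
D = np.einsum('ek,gk->eg', C, C)        # sum_K C_{K,eta} C_{K,gamma}

# Coulomb and exchange contractions of Eq. (6.1.26)
Jmat = np.einsum('eg,nemg->nm', D, eri)
Kmat = np.einsum('eg,negm->nm', D, eri)
F = hcore + Jmat - Kmat
F = 0.5 * (F + F.T)   # symmetrize; automatic when real, properly symmetric integrals are used

eps, C_new = eigh(F, S)   # solves F C = eps S C
# A real SCF would rebuild D from C_new and iterate to self-consistency.
```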
Finally, a few words are in order about one of the most computer time-consuming parts of any Hartree-Fock calculation (or of those discussed later): the task of evaluating and transforming the two-electron integrals $\langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\mu(r) \chi_\gamma(r')\rangle.\label{6.1.29}\nonumber$ When $M$ GTOs are used as basis functions, the evaluation of $\dfrac{M^4}{8}$ of these integrals often poses a major hurdle. For example, with 500 basis orbitals, there will be of the order of $7.8 \times 10^9$ such integrals. With each integral requiring 2 words of disk storage (most integrals need to be evaluated in double precision), this would require at least $1.5 \times 10^4$ Mwords of disk storage. Even in the era of modern computers that possess 500 GByte disks, this is a significant requirement. One of the more important technical advances that is under much current development is the efficient calculation of such integrals when the product functions $\chi_\nu(r) \chi_\mu(r)$ and $\chi_\gamma(r') \chi_\eta(r')$ that display the dependence on the two electrons' coordinates $r$ and $r'$ are spatially distant. In particular, so-called multipole expansions of these product functions are used to obtain more efficient approximations to their integrals when these functions are far apart. Moreover, such expansions offer a reliable way to ignore (i.e., approximate as zero) many integrals whose product functions are sufficiently distant. Such approaches show considerable promise for reducing the $\dfrac{M^4}{8}$ two-electron integral list to one whose size scales much less strongly with the size of the AO basis and form an important component of efforts to achieve CPU and storage needs that scale linearly with the size of the molecule. Koopmans' Theorem The HF-SCF equations $h_e \phi_i = \epsilon_i \phi_i$ imply that the orbital energies $\epsilon_i$ can be written as: $\epsilon_i = \langle \phi_i | h_e | \phi_i \rangle = \langle \phi_i | T + V | \phi_i \rangle + \sum_{j({\rm occupied})} \langle \phi_i | J_j - K_j | \phi_i \rangle \nonumber$ $= \langle \phi_i | T + V | \phi_i \rangle + \sum_{j({\rm occupied})} [ J_{i,j} - K_{i,j} ],\label{6.1.30}\nonumber$ where $T + V$ represents the kinetic ($T$) and nuclear attraction ($V$) energies, respectively. Thus, $\epsilon_i$ is the average value of the kinetic energy plus Coulombic attraction to the nuclei for an electron in $\phi_i$ plus the sum over all of the spin-orbitals occupied in $\psi$ of Coulomb minus exchange interactions of these electrons with the electron in $\phi_i$. If $\phi_i$ is an occupied spin-orbital, the $j = i$ term $[ J_{i,i} - K_{i,i}]$ disappears in the above sum and the remaining terms in the sum represent the Coulomb minus exchange interaction of $\phi_i$ with all of the $N-1$ other occupied spin-orbitals. If $\phi_i$ is a virtual spin-orbital, this cancellation does not occur because the sum over $j$ does not include $j = i$. So, one obtains the Coulomb minus exchange interaction of $\phi_i$ with all $N$ of the occupied spin-orbitals in $\psi$. Hence the energies of occupied orbitals pertain to interactions appropriate to a total of $N$ electrons, while the energies of virtual orbitals pertain to a system with $N+1$ electrons. This difference is very important to understand and to keep in mind. Let us consider the following model of the detachment or attachment of an electron in an $N$-electron system.
1. In this model, both the parent molecule and the species generated by adding or removing an electron are treated at the single-determinant level. 2. The Hartree-Fock orbitals of the parent molecule are used to describe both species. It is said that such a model neglects orbital relaxation (i.e., the re-optimization of the spin-orbitals to allow them to become appropriate to the daughter species). Within this model, the energy difference between the daughter and the parent can be written as follows ($\phi_k$ represents the particular spin-orbital that is added or removed): for electron detachment: $E_{N-1} - E_N = - \epsilon_k \label{6.1.31}\nonumber$ and for electron attachment: $E_N - E_{N+1} = - \epsilon_k .\label{6.1.32}\nonumber$ Let's derive this result for the case in which an electron is added to the $(N+1)^{\rm st}$ spin-orbital. Again, using the Slater-Condon rules from Section 6.1.2 of this Chapter, the energy of the $N$-electron determinant with spin-orbitals $\phi_1$ through $\phi_N$ occupied is $E_N = \sum_{i=1}^N \langle \phi_i | T + V | \phi_i \rangle + \sum_{i>j=1}^{N} [ J_{i,j} - K_{i,j} ],\label{6.1.33}\nonumber$ which can also be written as $E_N = \sum_{i=1}^N \langle \phi_i | T + V | \phi_i \rangle + \frac{1}{2} \sum_{i,j=1}^{N} [ J_{i,j} - K_{i,j} ].\label{6.1.34}\nonumber$ Likewise, the energy of the $N+1$-electron determinant wavefunction is $E_{N+1} = \sum_{i=1}^{N+1} \langle \phi_i | T + V | \phi_i \rangle + \frac{1}{2} \sum_{i,j=1}^{N+1} [ J_{i,j} - K_{i,j} ].\label{6.1.35}\nonumber$ The difference between these two energies is given by $E_{N} - E_{N+1} = - \langle \phi_{N+1} | T + V | \phi_{N+1} \rangle - \frac{1}{2} \sum_{i=1}^{N+1} [ J_{i,N+1} - K_{i,N+1} ]\label{6.1.36}\nonumber$ $- \frac{1}{2} \sum_{j=1}^{N+1} [ J_{N+1,j} - K_{N+1,j} ] = - \langle \phi_{N+1} | T + V | \phi_{N+1} \rangle - \sum_{i=1}^{N+1} [ J_{i,N+1} - K_{i,N+1} ]\label{6.1.37}\nonumber$ $= - \epsilon_{N+1}.\label{6.1.38}\nonumber$ That is, the energy difference is equal to minus the expression for the energy of the $(N+1)^{\rm st}$ spin-orbital, which was given earlier. So, within the limitations of the HF, frozen-orbital model, the ionization potentials (IPs) and electron affinities (EAs) are given as the negative of the occupied and virtual spin-orbital energies, respectively. This statement is referred to as Koopmans' theorem; it is used extensively in quantum chemical calculations as a means of estimating IPs and EAs and often yields results that are qualitatively correct (i.e., ± 0.5 eV). Orbital Energies and the Total Energy The total HF-SCF electronic energy can be written as: $E = \sum_{i({\rm occupied})} \langle \phi_i | T + V | \phi_i \rangle + \sum_{i>j({\rm occupied})} [ J_{i,j} - K_{i,j} ] \label{6.1.39}\nonumber$ and the sum of the orbital energies of the occupied spin-orbitals is given by: $\sum_{i({\rm occupied})} \epsilon_i = \sum_{i({\rm occupied})} \langle \phi_i | T + V | \phi_i \rangle + \sum_{i,j({\rm occupied})} [J_{i,j} - K_{i,j} ]. \label{6.1.40}\nonumber$ These two expressions differ in a very important way; the sum of occupied orbital energies double counts the Coulomb minus exchange interaction energies. Thus, within the Hartree-Fock approximation, the sum of the occupied orbital energies is not equal to the total energy. This finding teaches us that we cannot think of the total electronic energy of a given orbital occupation in terms of the orbital energies alone. We need to also keep track of the inter-electron Coulomb and exchange energies.
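The frozen-orbital bookkeeping behind Koopmans' theorem can be checked numerically. The sketch below uses synthetic (hypothetical) $h$, $J$, and $K$ arrays; the only physical constraint imposed is $J_{i,i} = K_{i,i}$, true for real spin-orbitals, which is what makes the $i = N+1$ self-term drop out of the derivation above.

```python
import numpy as np

# Sketch: verify E_N - E_{N+1} = -eps_{N+1} with synthetic matrices.
rng = np.random.default_rng(2)
n, N = 5, 4                       # 5 spin-orbitals, 4 occupied; attach e- to #5
h = rng.normal(size=n)
J = rng.normal(size=(n, n)); J = 0.5 * (J + J.T)
K = rng.normal(size=(n, n)); K = 0.5 * (K + K.T)
np.fill_diagonal(K, np.diag(J))   # J_ii = K_ii for real spin-orbitals

def E(m):                         # determinant energy with orbitals 0..m-1 occupied
    JK = J[:m, :m] - K[:m, :m]
    return h[:m].sum() + 0.5 * JK.sum()

# eps_{N+1}: virtual-orbital energy computed with the N-electron occupancy
eps = h[n - 1] + (J[:N, n - 1] - K[:N, n - 1]).sum()
print(np.isclose(E(N) - E(N + 1), -eps))   # True: Koopmans' theorem
```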
To achieve reasonable chemical accuracy (e.g., ± 5 kcal/mole in EAs or IPs or bond energies) in electronic structure calculations, one cannot describe the wavefunction $\psi$ in terms of a single determinant. The reason such a wavefunction is inadequate is that the spatial probability density functions are not correlated. This means the probability of finding one electron at position $r$ is independent of where the other electrons are, which is absurd because the electrons' mutual Coulomb repulsion causes them to avoid one another. This mutual avoidance is what we call electron correlation because the electrons' motions, as reflected in their spatial probability densities, are correlated (i.e., inter-related). Let us consider a simple example to illustrate this problem with single determinant functions. The $|1s\alpha(r) 1s\beta(r')|$ determinant, when written as $|1s\alpha(r) 1s\beta(r')| = \frac{1}{\sqrt{2}}\{1s\alpha(r) 1s\beta(r') - 1s\alpha(r') 1s\beta(r)\}$ can be multiplied by itself to produce the 2-electron spin- and spatial-probability density: $P(r, r') = \frac{1}{2}\{[1s\alpha(r) 1s\beta(r')]^2 + [1s\alpha(r') 1s\beta(r)]^2 -1s\alpha(r) 1s\beta(r') 1s\alpha(r') 1s\beta(r) - 1s\alpha(r') 1s\beta(r) 1s\alpha(r) 1s\beta(r')\}.$ If we now integrate over the spins of the two electrons and make use of $\langle \alpha|\alpha \rangle = \langle \beta|\beta \rangle = 1 \label{6.1.1a}$ and $\langle \alpha|\beta \rangle = \langle \beta|\alpha\rangle = 0 \label{6.1.1b}$ we obtain the following spatial (i.e., with spin absent) probability density: $P(r,r') = |1s(r)|^2 |1s(r')|^2.$ This probability, being a product of the probability density for finding one electron at $r$ times the density of finding another electron at $r'$, clearly has no correlation in it. That is, the probability of finding one electron at $r$ does not depend on where ($r'$) the other electron is. This product form for $P(r,r')$ is a direct result of the single-determinant form for $\psi$, so this form must be wrong if electron correlation is to be accounted for. Electron Correlation Now, we need to ask how $\psi$ should be written if electron correlation effects are to be taken into account. As we now demonstrate, it turns out that one can account for electron avoidance by taking $\psi$ to be a combination of two or more determinants that differ by the promotion of two electrons from one orbital to another orbital. For example, in describing the $\pi^2$ bonding electron pair of an olefin or the $ns^2$ electron pair in alkaline earth atoms, one mixes in doubly excited determinants of the form $(\pi^*)^2$ or $np^2$, respectively. Briefly, the physical importance of such doubly-excited determinants can be made clear by using the following identity involving determinants: $C_1 | ..\phi\alpha\, \phi\beta..| - C_2 | ..\phi'\alpha\, \phi'\beta..| = \dfrac{C_1}{2} \left\{ | ..( \phi - x\phi')\alpha ( \phi + x\phi')\beta..| - | ..( \phi - x\phi')\beta ( \phi + x\phi')\alpha..| \right\},$ where $x = \sqrt{\dfrac{C_2}{C_1}} .$ This identity is important to understand, so please make sure you can work through the algebra needed to prove it. It allows one to interpret the combination of two determinants that differ from one another by a double promotion from one orbital $(\phi)$ to another $(\phi')$ as equivalent to a singlet coupling (i.e., having $\alpha\beta-\beta\alpha$ spin function) of two different orbitals $(\phi - x\phi')$ and $(\phi + x\phi')$ that comprise what are called polarized orbital pairs.
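The polarized-orbital-pair identity above can be verified symbolically. The following sympy sketch (not part of the original text) writes out the two-electron determinants explicitly and confirms that the difference of the two sides vanishes when $x = \sqrt{C_2/C_1}$.

```python
import sympy as sp

# Sketch: check the two-determinant identity for the correlated pair.
C1, C2 = sp.symbols('C1 C2', positive=True)
x = sp.sqrt(C2 / C1)
f, fp = sp.Function('phi'), sp.Function('phiprime')   # phi and phi'
al, be = sp.Function('alpha'), sp.Function('beta')    # spin functions

def det(chi1, chi2):
    # unnormalized 2-electron determinant |chi1 chi2|; chi(i) is the
    # spin-orbital evaluated for electron i
    return chi1(1) * chi2(2) - chi2(1) * chi1(2)

lhs = C1 * det(lambda i: f(i) * al(i), lambda i: f(i) * be(i)) \
    - C2 * det(lambda i: fp(i) * al(i), lambda i: fp(i) * be(i))

minus = lambda i: f(i) - x * fp(i)    # phi - x phi'
plus  = lambda i: f(i) + x * fp(i)    # phi + x phi'
rhs = (C1 / 2) * (det(lambda i: minus(i) * al(i), lambda i: plus(i) * be(i))
                - det(lambda i: minus(i) * be(i), lambda i: plus(i) * al(i)))

print(sp.simplify(sp.expand(lhs - rhs)) == 0)   # True
```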
In the simplest embodiment of such a configuration interaction (CI) description of electron correlation, each electron pair in the atom or molecule is correlated by mixing in a configuration state function (CSF) in which that electron pair is doubly excited to a correlating orbital. A CSF is the minimum combination of determinants needed to express the proper spin eigenfunction for a given orbital occupation. In the olefin example mentioned above, the two non-orthogonal polarized orbital pairs involve mixing the $\pi$ and $\pi^*$ orbitals to produce two left-right polarized orbitals as depicted in Figure 6.1.9. In this case, one says that the $\pi^2$ electron pair undergoes left-right correlation when the $(\pi^*)^2$ determinant is mixed into the CI wavefunction. In the alkaline earth atom case, the polarized orbital pairs are formed by mixing the $ns$ and $np$ orbitals (actually, one must mix in equal amounts of $p_x, p_y$, and $p_z$ orbitals to preserve overall $^1S$ symmetry in this case), and give rise to angular correlation of the electron pair. Such a pair of polarized orbitals is shown in Figure 6.1.10. More specifically, the following four determinants are found to have the largest amplitudes in $\psi$ for Be: $\psi \cong C_1 |1s^22s^2 | - C_2 [|1s^22p_x^2 | +|1s^22p_y^2 | +|1s^22p_z^2 |].$ The fact that the latter three terms possess the same amplitude $C_2$ is a result of the requirement that a state of $^1S$ symmetry is desired. It can be shown that this function is equivalent to: $\psi \cong \frac{1}{6} C_1 |1s\alpha 1s\beta \{[(2s-a2p_x)\alpha(2s+a2p_x)\beta - (2s-a2p_x)\beta(2s+a2p_x)\alpha] +[(2s-a2p_y)\alpha(2s+a2p_y)\beta - (2s-a2p_y)\beta(2s+a2p_y)\alpha] +[(2s-a2p_z)\alpha(2s+a2p_z)\beta - (2s-a2p_z)\beta(2s+a2p_z)\alpha]\} |,$ where $a = \sqrt{3C_2/C_1}$. Here two electrons occupy the $1s$ orbital (with opposite, $\alpha$ and $\beta$ spins), and are thus not being treated in a correlated manner, while the other pair resides in $2s$/$2p$ polarized orbitals in a manner that instantaneously correlates their motions. These polarized orbital pairs $(2s \pm a 2p_{x,y,\text{ or }z})$ are formed by combining the $2s$ orbital with the $2p_{x,y,\text{ or }z}$ orbital in a ratio determined by $C_2/C_1$. This ratio $C_2/C_1$ can be shown using perturbation theory to be proportional to the magnitude of the coupling $\langle 1s^22s^2 |H|1s^22p^2 \rangle$ matrix element between the two configurations involved and inversely proportional to the energy difference $[\langle 1s^22s^2|H|1s^22s^2 \rangle - \langle 1s^22p^2|H|1s^22p^2 \rangle]$ between these configurations. In general, configurations that have similar Hamiltonian expectation values and that are coupled strongly give rise to strongly mixed (i.e., with large $|C_2/C_1|$ ratios) polarized orbital pairs. Later in this Chapter, you will learn how to evaluate Hamiltonian matrix elements between pairs of antisymmetric wavefunctions. If you are anxious to learn this now, go to the subsection entitled The Slater-Condon Rules and read that before returning here. In each of the three equivalent terms in the alkaline earth wavefunction, one of the valence electrons moves in a $2s+a2p$ orbital polarized in one direction while the other valence electron moves in the $2s-a2p$ orbital polarized in the opposite direction. For example, the first term $[(2s-a2p_x)\alpha(2s+a2p_x)\beta - (2s-a2p_x)\beta(2s+a2p_x)\alpha]$ describes one electron occupying a $2s-a2p_x$ polarized orbital while the other electron occupies the $2s+a2p_x$ orbital.
The electrons thus reduce their Coulomb repulsion by occupying different regions of space; in the SCF picture $1s^22s^2$, both electrons reside in the same $2s$ region of space. In this particular example, the electrons undergo angular correlation to avoid one another. The use of doubly excited determinants is thus seen as a mechanism by which $\psi$ can place electron pairs, which in the single-configuration picture occupy the same orbital, into different regions of space (i.e., each one into a different member of the polarized orbital pair) thereby lowering their mutual Coulomb repulsion. Such electron correlation effects are extremely important to include if one expects to achieve chemically meaningful accuracy (i.e., ± 5 kcal/mole). Essential Configuration Interaction There are occasions in which the inclusion of two or more determinants in $\psi$ is essential to obtaining even a qualitatively correct description of the molecule's electronic structure. In such cases, we say that we are including essential correlation effects. To illustrate, let us consider the description of the two electrons in a single covalent bond between two atoms or fragments that we label X and Y. The fragment orbitals from which the bonding $\sigma$ and antibonding $\sigma^*$ MOs are formed we will label $s_x$ and $s_y$, respectively. Several spin- and spatial-symmetry adapted 2-electron determinants (i.e., CSFs) can be formed by placing two electrons into the $\sigma$ and $\sigma^*$ orbitals. For example, to describe the singlet state corresponding to the closed-shell $\sigma^2$ orbital occupancy, a single Slater determinant $^1\Sigma (0) = |\sigma\alpha \sigma\beta| = \frac{1}{\sqrt{2}} [\sigma\alpha(1)\sigma\beta(2) - \sigma\beta(1)\sigma\alpha(2) ]$ suffices. An analogous expression for the $(\sigma^*)^2$ occupancy is given by ${}^1\Sigma^{**} (0) = | \sigma^*\alpha \sigma^*\beta | = \frac{1}{\sqrt{2}} [ \sigma^*\alpha(1) \sigma^*\beta(2) - \sigma^*\alpha(2) \sigma^*\beta(1) ]$ Also, the $M_S = 1$ component of the triplet state having $\sigma\sigma^*$ orbital occupancy can be written as a single Slater determinant: ${}^3\Sigma^{*} (1) = |\sigma\alpha \sigma^*\alpha | = \frac{1}{\sqrt{2}} [\sigma\alpha(1) \sigma^*\alpha(2) - \sigma^*\alpha(1)\sigma\alpha(2) ],$ as can the $M_S = -1$ component of the triplet state ${}^3\Sigma^{*}(-1) = |\sigma\beta \sigma^*\beta | = \frac{1}{\sqrt{2}} [\sigma\beta(1) \sigma^*\beta(2) - \sigma^*\beta(1)\sigma\beta(2) ].$ However, to describe the singlet and $M_S = 0$ triplet states belonging to the $\sigma\sigma^*$ occupancy, two determinants are needed: ${}^1\Sigma^{*} (0) = \frac{1}{\sqrt{2}} [\sigma\alpha \sigma^*\beta - \sigma\beta\sigma^*\alpha ]$ is the singlet and ${}^3\Sigma^{*}(0) = \frac{1}{\sqrt{2}} [\sigma\alpha \sigma^*\beta + \sigma\beta\sigma^*\alpha ]$ is the triplet (note, you can obtain this $M_S = 0$ triplet by applying $\textbf{S}_- = \textbf{S}_- (1) + \textbf{S}_- (2)$ to the $M_S = 1$ triplet). In each case, the spin quantum number $S$, its z-axis projection $M_S$, and the $\Lambda$ quantum number are given in the conventional $^{2S+1}\Lambda(M_S)$ term symbol notation. As the distance $R$ between the X and Y fragments is increased from near its equilibrium value $R_e$ toward infinity, the energies of the $\sigma$ and $\sigma^*$ orbitals vary in a manner well known to chemists as depicted in Figure 6.1.11 if X and Y are identical.
If X and Y are not identical, the $s_x$ and $s_y$ orbitals still combine to form a bonding $\sigma$ and an antibonding $\sigma^*$ orbital. The energies of these orbitals, for R values ranging from near $R_e$ to $R\rightarrow \infty$, are depicted in Figure 6.1.12 for the case in which X is more electronegative than Y. The energy variation in these orbital energies gives rise to variations in the energies of the six determinants listed above. As $R \rightarrow \infty$, the determinants' energies are difficult to intuit because the $\sigma$ and $\sigma^*$ orbitals become degenerate (in the homonuclear case) or nearly so (in the $X \ne Y$ case). To pursue this point and arrive at an energy ordering for the determinants that is appropriate to the $R \rightarrow \infty$ region, it is useful to express each such function in terms of the fragment orbitals $s_x$ and $s_y$ that comprise $\sigma$ and $\sigma^*$. To do so, the LCAO-MO expressions for $\sigma$ and $\sigma^*$, $\sigma = C [s_x + z s_y]$ and $\sigma^* = C^* [z s_x - s_y],$ are substituted into the Slater determinant definitions given above. Here $C$ and $C^*$ are the normalization constants. The parameter $z$ is 1.0 in the homonuclear case and deviates from 1.0 in relation to the $s_x$ and $s_y$ orbital energy difference (if $s_x$ lies below $s_y$, then $z < 1.0$; if $s_x$ lies above $s_y$, $z > 1.0$). Let us examine the $X=Y$ case to keep the analysis as simple as possible. The process of substituting the above expressions for $\sigma$ and $\sigma^*$ into the Slater determinants that define the singlet and triplet functions can be illustrated as follows for the $^1\Sigma(0)$ case: ${}^1\Sigma(0) = |\sigma\alpha \sigma\beta| = C^2 | (s_x + s_y) \alpha(s_x + s_y) \beta| = C^2 [|s_x \alpha s_x \beta| + |s_y \alpha s_y \beta| + |s_x \alpha s_y \beta| + |s_y \alpha s_x \beta|]$ The first two of these atomic-orbital-based Slater determinants ($|s_x \alpha s_x \beta|$ and $|s_y \alpha s_y \beta|$) are called ionic because they describe atomic-orbital occupancies that, in the $R \rightarrow \infty$ region, correspond to $X \bullet\bullet + X$ and $X + X \bullet\bullet$ valence bond structures, while $|s_x \alpha s_y \beta|$ and $|s_y \alpha s_x \beta|$ are called "covalent" because they correspond to $X\bullet + X\bullet$ structures. In similar fashion, the remaining five determinant functions may be expressed in terms of fragment-orbital-based Slater determinants. In so doing, use is made of the antisymmetry of the Slater determinants (e.g., $| \phi_1 \phi_2 \phi_3 | = - | \phi_1 \phi_3 \phi_2 |$), which implies that any determinant in which two or more spin-orbitals are identical vanishes: $| \phi_1 \phi_2 \phi_2 | = - | \phi_1 \phi_2 \phi_2 | = 0$.
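The expansion of $^1\Sigma(0)$ into ionic and covalent fragment-orbital determinants can likewise be checked symbolically; the sympy sketch below expands the unnormalized $z = 1$ (homonuclear) determinant and confirms the four-term decomposition quoted above.

```python
import sympy as sp

# Sketch: expand |sigma alpha sigma beta| with sigma ~ s_x + s_y (C factored out).
sx, sy = sp.Function('sx'), sp.Function('sy')
al, be = sp.Function('alpha'), sp.Function('beta')

def det(chi1, chi2):
    return chi1(1) * chi2(2) - chi2(1) * chi1(2)

sigma = lambda i: sx(i) + sy(i)
full = sp.expand(det(lambda i: sigma(i) * al(i), lambda i: sigma(i) * be(i)))

ionic = det(lambda i: sx(i) * al(i), lambda i: sx(i) * be(i)) \
      + det(lambda i: sy(i) * al(i), lambda i: sy(i) * be(i))
covalent = det(lambda i: sx(i) * al(i), lambda i: sy(i) * be(i)) \
         + det(lambda i: sy(i) * al(i), lambda i: sx(i) * be(i))
print(sp.simplify(full - sp.expand(ionic + covalent)) == 0)   # True
```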
The result of decomposing the MO-based determinants into their fragment-orbital components is as follows: ${}^1\Sigma^{**} (0) = |\sigma^*\alpha \sigma^*\beta | = C^{*2} [ |s_x \alpha s_x \beta| + |s_y \alpha s_y \beta| - |s_x \alpha s_y \beta| - |s_y \alpha s_x \beta|]$ ${}^1\Sigma^{*} (0) =\frac{1}{\sqrt{2}}[ |\sigma\alpha \sigma^*\beta | - |\sigma\beta \sigma^*\alpha | ]= CC^* \sqrt{2} [|s_x \alpha s_x \beta| - |s_y \alpha s_y \beta|]$ ${}^3\Sigma^{*} (1) = |\sigma\alpha \sigma^*\alpha | = 2CC^* |s_y \alpha s_x \alpha|$ ${}^3\Sigma^{*} (0) = \frac{1}{\sqrt{2}}[ |\sigma\alpha \sigma^*\beta | + |\sigma\beta \sigma^*\alpha |]=CC^* \sqrt{2} [|s_y \alpha s_x \beta| - |s_x \alpha s_y \beta|]$ ${}^3\Sigma^{*} (-1) = |\sigma\beta \sigma^*\beta | = 2CC^* |s_y \beta s_x \beta|$ These decompositions of the six valence determinants into fragment-orbital or valence bond components allow the $R = \infty$ energies of these states to be specified. For example, the fact that both ${}^1\Sigma$ and ${}^1\Sigma^{**}$ contain 50% ionic and 50% covalent structures implies that, as $R \rightarrow \infty$, both of their energies will approach the average of the covalent and ionic atomic energies $\frac{1}{2} [E (X\bullet ) + E (X\bullet ) + E (X) + E ( X \bullet\bullet) ]$. The ${}^1\Sigma^{*}$ energy approaches the purely ionic value $E (X)+ E (X \bullet\bullet )$ as $R \rightarrow \infty$. The energies of ${}^3\Sigma^{*}(0), {}^3\Sigma^{*}(1)$ and ${}^3\Sigma^{*}(-1)$ all approach the purely covalent value $E (X\bullet ) + E (X\bullet )$ as $R \rightarrow \infty$. The behaviors of the energies of the six valence determinants as $R$ varies are depicted in Figure 6.1.13 for situations in which the homolytic bond cleavage is energetically favored (i.e., for which $E (X\bullet ) + E (X\bullet ) < E (X) +E ( X \bullet\bullet)$). It is essential to realize that the energies of the determinants do not represent the energies of the true electronic states. For $R$-values at which the determinant energies are separated widely, the true state energies are rather well approximated by individual determinant energies; such is the case near $R_e$ for the ${}^1\Sigma$ state. However, at large $R$, the situation is very different, and it is in such cases that what we term essential configuration interaction occurs. Specifically, for the $X=Y$ example, the ${}^1\Sigma$ and ${}^1\Sigma^{**}$ determinants undergo essential CI coupling to form a pair of states of ${}^1\Sigma$ symmetry (the ${}^1\Sigma^{*}$ CSF cannot partake in this CI mixing because it is of ungerade symmetry; the ${}^3\Sigma^{*}$ states cannot mix because they are of triplet spin symmetry). The CI mixing of the ${}^1\Sigma$ and ${}^1\Sigma^{**}$ determinants is described in terms of a $2\times2$ secular problem $\left[\begin{array}{cc} \langle ^1\Sigma | H | ^1\Sigma \rangle & \langle ^1\Sigma | H | ^1\Sigma^{**} \rangle \\ \langle ^1\Sigma^{**} | H | ^1\Sigma \rangle & \langle ^1\Sigma^{**} | H | ^1\Sigma^{**} \rangle \end{array}\right] \left[\begin{array}{c}A\\B\end{array}\right] = E\left[\begin{array}{c}A\\B\end{array}\right]$ The diagonal entries are the determinants' energies depicted in Figure 6.1.13.
The off-diagonal coupling matrix elements can be expressed in terms of an exchange integral between the $\sigma$ and $\sigma^*$ orbitals: $\langle {}^1\Sigma|H|{}^1\Sigma^{**} \rangle = \langle |\sigma\alpha \sigma\beta|H||\sigma^*\alpha \sigma^*\beta |\rangle = \langle \sigma\sigma|| \sigma^*\sigma^* \rangle = K_{\sigma \sigma^*}$ Later in this Chapter, you will learn how to evaluate Hamiltonian matrix elements between pairs of antisymmetric wavefunctions and to express them in terms of one- and two-electron integrals. If you are anxious to learn this now, go to the subsection entitled the Slater-Condon Rules and read that before returning here. At $R \rightarrow \infty$, where the ${}^1\Sigma$ and ${}^1\Sigma^{**}$ determinants are degenerate, the two solutions to the above CI matrix eigenvalue problem are: $E =\frac{1}{2} [ E (X\bullet ) + E (X\bullet ) + E (X)+ E (X \bullet\bullet) ] \mp \langle \sigma\sigma | \frac{1}{r_{12}} | \sigma^* \sigma^*\rangle$ with respective amplitudes for the ${}^1\Sigma$ and ${}^1\Sigma^{**}$ CSFs given by $A_\mp = \pm \frac{1}{\sqrt{2}} ; \hspace{15pt} B_{\mp} = \mp \frac{1}{\sqrt{2}}.$ The first solution thus has $\psi_{-} = \frac{1}{\sqrt{2}} [|\sigma\alpha \sigma\beta| - |\sigma^*\alpha \sigma^*\beta |]$ which, when decomposed into atomic orbital components, yields $\psi_{-} = \frac{1}{\sqrt{2}} [ |s_x\alpha s_y\beta| - |s_x\beta s_y\alpha|].$ The other root has $\psi_{+} = \frac{1}{\sqrt{2}} [|\sigma\alpha \sigma\beta| + |\sigma^*\alpha \sigma^*\beta |] = \frac{1}{\sqrt{2}} [ |s_x\alpha s_x\beta| + |s_y \alpha s_y\beta|].$ So, we see that ${}^1\Sigma$ and ${}^1\Sigma^{**}$, which both contain 50% ionic and 50% covalent parts, combine to produce $\psi_{-}$ which is purely covalent and $\psi_{+}$ which is purely ionic. The above essential CI mixing of ${}^1\Sigma$ and ${}^1\Sigma^{**}$ as $R \rightarrow \infty$ qualitatively alters the energy diagrams shown above. Descriptions of the resulting valence singlet and triplet $\Sigma$ states are given in Figure 6.1.14 for homonuclear situations in which covalent products lie below the ionic fragments. Figure 6.1.14: State Correlation Diagram Showing How the Energies of the States, Comprised of Combinations of Determinants, vary with $R$.
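As a quick numerical illustration of this $R \rightarrow \infty$ limit, the $2\times2$ eigenvalue problem can be solved with numpy; the numbers below are arbitrary placeholders for the degenerate diagonal energy and the exchange coupling.

```python
import numpy as np

# Sketch: the 2x2 essential-CI problem at R -> infinity with both diagonal
# entries equal to the covalent/ionic average E_avg and off-diagonal K.
E_avg, K = -1.0, 0.2          # hypothetical values, arbitrary units
H = np.array([[E_avg, K],
              [K,     E_avg]])
E, V = np.linalg.eigh(H)
print(E)   # [E_avg - K, E_avg + K]
print(V)   # columns ~ (1, -1)/sqrt(2) and (1, 1)/sqrt(2), the psi_- and psi_+ mixes
```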
There are numerous procedures currently in use for determining the best Born-Oppenheimer electronic wave function that is usually expressed in the form: $\psi = \sum_I C_I \Phi_I,$ where $\Phi_I$ is a spin- and space-symmetry-adapted configuration state function (CSF) that consists of one or more determinants $| \phi_{I1}\phi_{I2}\phi_{I3}\cdots \phi_{IN}|$ combined to produce the desired symmetry. In all such wave functions, there are two kinds of parameters that need to be determined: the $C_I$ coefficients and the LCAO-MO coefficients describing the $\phi_{Ik}$ in terms of the AO basis functions. The most commonly employed methods used to determine these parameters include: The CI Method In this approach, the LCAO-MO coefficients are determined first, usually via a single-configuration HF SCF calculation. The CI coefficients are subsequently determined by making the expectation value $\langle \psi | H | \psi \rangle / \langle \psi | \psi \rangle$ variationally stationary with $\psi$ chosen to be of the form $\psi = \sum_I C_I \Phi_I.$ As with all such linear variational problems, this generates a matrix eigenvalue equation $\sum_J \langle\Phi_I|H|\Phi_J\rangle C_J=EC_I$ to be solved for the optimum {$C_I$} coefficients and for the optimal energy $E$. The CI wave function is most commonly constructed from spin- and spatial-symmetry adapted combinations of determinants called configuration state functions (CSFs) $\Phi_J$ that include: 1. The so-called reference CSF that is the SCF wave function used to generate the molecular orbitals $\phi_i$. 2. CSFs generated by carrying out single, double, triple, etc. level excitations (i.e., orbital replacements) relative to the reference CSF. CI wave functions limited to include contributions through various levels of excitation are denoted S (singly), D (doubly), SD (singly and doubly), SDT (singly, doubly, and triply) excited. The orbitals from which electrons are removed can be restricted to focus attention on correlations among certain orbitals. For example, if excitations out of core orbitals are excluded, one computes a total energy that contains no core correlation energy. The number of CSFs included in the CI calculation can be large. CI wave functions including 5,000 to 50,000 CSFs are routine, and functions with one to several billion CSFs are within the realm of practicality. The need for such large CSF expansions can be appreciated by considering (i) that each electron pair requires at least two CSFs to form the polarized orbital pairs discussed earlier in this Chapter, (ii) there are of the order of $\dfrac{N(N-1)}{2} = X$ electron pairs for a molecule containing $N$ electrons, hence (iii) the number of terms in the CI wave function scales as $2^X$. For a molecule containing ten electrons, there could be $2^{45} = 3.5 \times 10^{13}$ terms in the CI expansion. This may be an overestimate of the number of CSFs needed, but it demonstrates how rapidly the number of CSFs can grow with the number of electrons. The Hamiltonian matrix elements $H_{I,J}$ between pairs of CSFs are, in practice, evaluated in terms of one- and two-electron integrals over the molecular orbitals. Prior to forming the $H_{I,J}$ matrix elements, the one- and two-electron integrals, which can be computed only for the atomic (e.g., STO or GTO) basis, must be transformed to the molecular orbital basis.
This transformation step requires computer resources proportional to the fifth power of the number of basis functions, and thus is one of the more troublesome steps in most configuration interaction (and most other correlated) calculations. To transform the two-electron integrals $\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\chi_d(r')\rangle$ from this AO basis to the MO basis, one proceeds as follows: 1. First one utilizes the original AO-based integrals to form a partially transformed set of integrals $\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\phi_l(r')\rangle = \sum_d C_{l,d} \langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\chi_d(r')\rangle.$ This step requires of the order of $M^5$ operations. 2. Next one takes the list $\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\phi_l(r')\rangle$ and carries out another so-called one-index transformation $\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle = \sum_c C_{k,c} \langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\phi_l(r')\rangle.$ 3. This list $\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle$ is then subjected to another one-index transformation to generate $\langle \chi_a(r)\phi_j(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle$, after which 4. $\langle \chi_a(r)\phi_j(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle$ is subjected to the fourth one-index transformation to form the final MO-based integral list $\langle \phi_i(r)\phi_j(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle$. In total, these four transformation steps require $4M^5$ computer operations, as illustrated in the sketch below. A variant of the CI method that is sometimes used is called the multi-configurational self-consistent field (MCSCF) method. To derive the working equations of this approach, one minimizes the expectation value of the Hamiltonian for a trial wave function consisting of a linear combination of CSFs $\psi = \sum_I C_I \Phi_I.$ In carrying out this minimization process, one varies both the linear {$C_I$} expansion coefficients and the LCAO-MO coefficients {$C_{J,\mu}$} describing those spin-orbitals that appear in any of the CSFs {$\Phi_I$}. This produces two sets of equations that need to be solved: 1. A matrix eigenvalue equation $\sum_J \langle\Phi_I|H|\Phi_J\rangle C_J=EC_I$ of the same form as arises in the CI method, and 2. equations that look very much like the HF equations $\sum_\mu \langle\chi_\nu |h_e| \chi_\mu\rangle C_{J,\mu} = \epsilon_J \sum_\mu \langle\chi_\nu|\chi_\mu\rangle C_{J,\mu}$ but in which the $h_e$ matrix element is $\langle\chi_\nu| h_e| \chi_\mu\rangle = \langle\chi_\nu| -\dfrac{\hbar^2}{2m} \nabla^2 |\chi_\mu\rangle + \langle\chi_\nu| -\frac{Ze^2}{r} |\chi_\mu\rangle + \sum_{\eta,\gamma} \Gamma_{\eta,\gamma} [\langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\mu(r) \chi_\gamma(r')\rangle - \langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\gamma(r) \chi_\mu (r')\rangle].$ Here $\Gamma_{\eta,\gamma}$ replaces the sum $\sum_K C_{K,\eta} C_{K,\gamma}$ that appears in the HF equations, with $\Gamma_{\eta,\gamma}$ depending on both the LCAO-MO coefficients {$C_{K,\eta}$} of the spin-orbitals and on the {$C_I$} expansion coefficients.
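A compact way to see the four one-index transformations in action is with numpy.einsum; the sketch below uses a random (hypothetical) AO integral array and verifies that the four $M^5$ steps reproduce the single direct contraction, which would cost on the order of $M^8$ operations if carried out naively.

```python
import numpy as np

# Sketch of the AO -> MO two-electron integral transformation.
# g[a, b, c, d] stands for <chi_a chi_b | 1/|r-r'| | chi_c chi_d>;
# C[ao, mo] holds hypothetical MO coefficients.
M = 8
rng = np.random.default_rng(3)
g = rng.normal(size=(M, M, M, M))
C = rng.normal(size=(M, M))

g1 = np.einsum('abcd,dl->abcl', g, C)   # step 1: ~M^5 operations
g2 = np.einsum('abcl,ck->abkl', g1, C)  # step 2
g3 = np.einsum('abkl,bj->ajkl', g2, C)  # step 3
g4 = np.einsum('ajkl,ai->ijkl', g3, C)  # step 4: full MO-basis list

# Direct single contraction: same answer, ~M^8 work if summed naively.
g_direct = np.einsum('abcd,ai,bj,ck,dl->ijkl', g, C, C, C, C)
print(np.allclose(g4, g_direct))        # True
```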
The two sets of MCSCF equations are solved through a self-consistent process in which initial {$C_{K,\eta}$} coefficients are used to form the Hamiltonian matrix and solve for the {$C_I$} coefficients, after which the $\Gamma_{\eta,\gamma}$ can be determined and the HF-like equations solved for a new set of {$C_{K,\eta}$} coefficients, and so on until convergence is reached. Perturbation Theory This method uses the single-configuration SCF process to determine a set of orbitals {$\phi_i$}. Then, with a zeroth-order Hamiltonian equal to the sum of the $N$ electrons' Fock operators $H_0 = \sum_{i=1}^N h_e(i)$, perturbation theory is used to determine the CI amplitudes for the other CSFs. The Møller-Plesset perturbation (MPPT) procedure is a special case in which the above sum of Fock operators is used to define $H_0$. The amplitude for the reference CSF is taken as unity and the other CSFs' amplitudes are determined by using $H-H_0$ as the perturbation. This perturbation is the difference between the true Coulomb interactions among the electrons and the mean-field approximation to those interactions: $V=H-H^{(0)}=\frac{1}{2}\sum_{i \ne j}^N \frac{1}{r_{i,j}}-\sum_{k=1}^N[J_k(r)-K_k(r)]$ where $J_k$ and $K_k$ are the Coulomb and exchange operators defined earlier in this Chapter and the sum over $k$ runs over the $N$ spin-orbitals that are occupied in the Hartree-Fock wave function that forms the zeroth-order approximation to $\psi$. In the MPPT method, once the reference CSF is chosen and the SCF orbitals belonging to this CSF are determined, the wave function $\psi$ and energy $E$ are determined in an order-by-order manner as is the case in the RSPT discussed in Chapter 3. In fact, MPPT is just RSPT with the above fluctuation potential as the perturbation. The perturbation equations determine what CSFs to include through any particular order. This is one of the primary strengths of this technique; it does not require one to make further choices, in contrast to the CI treatment where one needs to choose which CSFs to include. For example, the first-order wave function correction $\psi_1$ is: $\psi_1 = - \sum_{i < j,m < n} \dfrac{\langle i,j |\dfrac{1}{r_{12}}| m,n \rangle -\langle i,j |\dfrac{1}{r_{12}}| n,m \rangle}{ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j} | \Phi_{i,j}^{m,n} \rangle,$ where the SCF orbital energies are denoted $\varepsilon_k$ and $\Phi_{i,j}^{m,n}$ represents a CSF that is doubly excited ($\phi_i$ and $\phi_j$ are replaced by $\phi_m$ and $\phi_n$) relative to the SCF wave function $\Phi$. The denominators $[ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j]$ arise from $E_0-E_k^0$ because each of these zeroth-order energies is the sum of the orbital energies for all occupied spin-orbitals. The excited CSFs $\Phi_{i,j}^{m,n}$ are the zeroth-order wave functions other than the reference CSF. Only doubly excited CSFs contribute to the first-order wave function; the fact that the contributions from singly excited configurations vanish in $\psi_1$ is known as the Brillouin theorem. The Brillouin theorem can be proven by considering Hamiltonian matrix elements coupling the reference CSF $\Phi$ to singly-excited CSFs $\Phi_i^m$. The rules for evaluating all such matrix elements are called Slater-Condon rules and are given later in this Chapter. If you don't know them, this would be a good time to go read the subsection on these rules before returning here.
From the Slater-Condon rules, we know that the matrix elements in question are given by $\langle \Phi|H|\Phi_i^m\rangle= \langle \phi_i(r)| -\frac{1}{2}\nabla^2 - \sum_a \dfrac{Z_a}{|r-R_a|} |\phi_m(r)\rangle + \sum_{j=1(\ne i,m)}^N \langle \phi_i(r) \phi_j(r')|\dfrac{1-P_{r,r'}}{|r-r'|}| \phi_m(r) \phi_j(r')\rangle.$ Here, the factor $P_{r,r'}$ simply permutes the coordinates $r$ and $r'$ to generate the exchange integral. The sum of two-electron integrals on the right-hand side above can be extended to include the term arising from $j = i$ because that term's Coulomb and exchange contributions are identical and cancel, so it vanishes. As a result, the entire right-hand side can be seen to reduce to the matrix element of the Fock operator $h_{\rm HF}(r)$: $\langle \Phi|H|\Phi_i^m\rangle=\langle \phi_i|h_{\rm HF}(r)|\phi_m(r)\rangle=\varepsilon_m\delta_{i,m}=0.$ The matrix elements vanish because the spin-orbitals are eigenfunctions of $h_{\rm HF}(r)$ and are orthogonal to each other. The MPPT energy $E$ is given through second order as in RSPT by $E = E_{SCF} - \sum_{i < j,m < n} \frac{| \langle i,j | \dfrac{1}{r_{12}} | m,n \rangle -\langle i,j | \dfrac{1}{r_{12}} | n,m \rangle |^2}{ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j }$ and again only contains contributions from the doubly excited CSFs. Both $\psi$ and $E$ are expressed in terms of two-electron integrals $\langle i,j | \frac{1}{r_{12}} | m,n \rangle$ (that are sometimes denoted $\langle i,j|k,l\rangle$) coupling the virtual spin-orbitals $\phi_m$ and $\phi_n$ to the spin-orbitals from which electrons were excited $\phi_i$ and $\phi_j$ as well as the orbital energy differences $[ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j ]$ accompanying such excitations. Clearly, major contributions to the correlation energy are made by double excitations into virtual orbitals $\phi_m \phi_n$ with large $\langle i,j | \frac{1}{r_{12}} | m,n \rangle$ integrals and small orbital energy gaps $[\varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j]$. In higher order corrections, contributions from CSFs that are singly, triply, etc. excited relative to the HF reference function $\Phi$ appear, and additional contributions from the doubly excited CSFs also enter. The various orders of MPPT are usually denoted MPn (e.g., MP2 means second-order MPPT). The Coupled-Cluster Method As noted above, when the Hartree-Fock wave function $\psi_0$ is used as the zeroth-order starting point in a perturbation expansion, the first (and presumably most important) corrections to this function are the doubly-excited determinants. In early studies of CI treatments of electron correlation, it was observed that double excitations had the largest $C_J$ coefficients (after the SCF wave function, which has the very largest $C_J$). Moreover, in CI studies that included single, double, triple, and quadruple level excitations relative to the dominant SCF determinant, it was observed that quadruple excitations had the next largest $C_J$ amplitudes after the double excitations. And, very importantly, it was observed that the amplitudes $C_{abcd}^{mnpq}$ of the quadruply excited CSFs $\Phi_{abcd}^{mnpq}$ could be very closely approximated as products of the amplitudes $C_{ab}^{mn} C_{cd}^{pq}$ of the doubly excited CSFs $\Phi_{ab}^{mn}$ and $\Phi_{cd}^{pq}$.
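Before pursuing the coupled-cluster idea, note that the second-order energy sum given above transcribes almost literally into code. The sketch below uses random (hypothetical) spin-orbital integrals carrying the symmetry $\langle i,j|m,n\rangle = \langle j,i|n,m\rangle$, purely to exercise the formula; it is not an MP2 program for a real molecule.

```python
import numpy as np

# Sketch of the second-order (MP2) energy sum. mo[p, q, r, s] stands for
# <p,q|1/r12|r,s> in the MO basis; eps holds spin-orbital energies.
n_occ, n_vir = 4, 6
n = n_occ + n_vir
rng = np.random.default_rng(4)
mo = rng.normal(size=(n, n, n, n))
mo = 0.5 * (mo + mo.transpose(1, 0, 3, 2))   # impose <ij|mn> = <ji|nm>
eps = np.sort(rng.normal(size=n))            # occupied levels below virtuals

e2 = 0.0
for i in range(n_occ):                       # i < j over occupied
    for j in range(i + 1, n_occ):
        for m in range(n_occ, n):            # m < nn over virtuals
            for nn in range(m + 1, n):
                num = mo[i, j, m, nn] - mo[i, j, nn, m]
                den = eps[m] - eps[i] + eps[nn] - eps[j]
                e2 -= num**2 / den
print("second-order correction:", e2)
```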
The observation that quadruply excited amplitudes are nearly products of doubly excited amplitudes prompted workers to suggest that a more compact and efficient expansion of the correlated wave function might be realized by writing $\psi$ as: $\psi = \exp(T) \Phi,$ where $\Phi$ is the SCF determinant and the operator $T$ appearing in the exponential is taken to be a sum of operators $T = T_1 + T_2 + T_3 + \cdots + T_N$ that create single ($T_1$), double ($T_2$), etc. level excited CSFs when acting on $\Phi$. As I show below, this so-called coupled-cluster (CC) form for $\psi$ then has the characteristic that the dominant contributions from quadruple excitations have coefficients nearly equal to the products of the coefficients of their constituent double excitations. In any practical calculation, this sum of $T_n$ operators would be truncated to keep the calculation practical. For example, if excitation operators higher than $T_3$ were neglected, then one would use $T \approx T_1 + T_2 + T_3$. However, even when $T$ is so truncated, the resultant $\psi$ would contain excitations of higher order. For example, using the truncation just introduced, we would have $\psi = \Big(1 + (T_1 + T_2 + T_3) + \frac{1}{2} (T_1 + T_2 + T_3)^2 + \frac{1}{6} (T_1 + T_2 + T_3)^3 + \cdots\Big) \Phi.$ This function contains single excitations (in $T_1\Phi$), double excitations (in $T_2\Phi$ and in $T_1T_1\Phi$), triple excitations (in $T_3\Phi$, $T_2T_1\Phi$, $T_1T_2\Phi$, and $T_1T_1T_1\Phi$), and quadruple excitations in a variety of terms including $T_3 T_1\Phi$ and $T_2 T_2\Phi$, as well as even higher level excitations. By the design of this wave function, the quadruple excitations $T_2 T_2\Phi$ will have amplitudes given as products of the amplitudes of the double excitations $T_2\Phi$ just as were found by earlier CI workers to be most important. Hence, in CC theory, we say that quadruple excitations include unlinked products of double excitations arising from the $T_2 T_2$ product; the quadruple excitations arising from $T_4\Phi$ would involve linked terms and would have amplitudes that are not products of double-excitation amplitudes. After writing $\psi$ in terms of an exponential operator, one is faced with determining the amplitudes of the various single, double, etc. excitations generated by the $T$ operator acting on $\Phi$. This is done by writing the Schrödinger equation as: $H \exp(T) \Phi = E \exp(T) \Phi,$ and then multiplying on the left by $\exp(-T)$ to obtain: $\exp(-T) H \exp(T) \Phi = E \Phi.$ The CC energy is then calculated by multiplying this equation on the left by $\Phi^*$ and integrating over the coordinates of all the electrons: $\langle\Phi| \exp(-T) H \exp(T) |\Phi\rangle = E.$ In practice, the combination of operators appearing in this expression is rewritten and dealt with as follows: $E = \langle\Phi| H + [H,T] + \frac{1}{2} [[H,T],T] + \frac{1}{6} [[[H,T],T],T] + \frac{1}{24} [[[[H,T],T],T],T] |\Phi\rangle;$ this so-called Baker-Campbell-Hausdorff expansion of the exponential operators can be shown to truncate exactly after the fourth power term shown here. So, once the various operators and their amplitudes that comprise $T$ are known, $E$ is computed using the above expression that involves various powers of the $T$ operators.
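The claim that $\exp(T)$ automatically builds quadruple excitations with product amplitudes can be made concrete with a two-pair toy model. In the sympy sketch below, $X_1$ and $X_2$ stand for double-excitation operators acting on two disjoint orbital pairs, so each squares to zero (the same pair cannot be promoted twice), and the quadruple term emerges with amplitude $t_1 t_2$.

```python
import sympy as sp

# Toy model: exp(t1*X1 + t2*X2) with nilpotent pair-excitation operators.
t1, t2 = sp.symbols('t1 t2')
X1, X2 = sp.symbols('X1 X2')

T2 = t1 * X1 + t2 * X2
wave = sp.expand(1 + T2 + T2**2 / 2)        # exp(T2) through second power
wave = wave.subs([(X1**2, 0), (X2**2, 0)])  # a pair cannot be excited twice
print(wave)   # 1 + t1*X1 + t2*X2 + t1*t2*X1*X2: quadruple amplitude = t1*t2
```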
The equations used to find the amplitudes (e.g., those of the $T_2$ operator $\sum_{a,b,m,n} t_{ab}^{mn}T_{ab}^{mn}$, where the $t_{ab}^{mn}$ are the amplitudes and $T_{ab}^{mn}$ are the excitation operators that promote two electrons from $\phi_a$ and $\phi_b$ into $\phi_m$ and $\phi_n$) of the various excitation levels are obtained by multiplying the above Schrödinger equation on the left by an excited determinant of that level and integrating. For example, the equation for the double excitations is: $0 = \langle\Phi_{ab}^{mn}| H + [H,T] + \frac{1}{2} [[H,T],T] + \frac{1}{6} [[[H,T],T],T] + \frac{1}{24} [[[[H,T],T],T],T] |\Phi\rangle.$ The zero arises from the right-hand side of $\exp(-T) H \exp(T) \Phi = E \Phi$ and the fact that $\langle\Phi_{ab}^{mn}|\Phi\rangle = 0$; that is, the determinants are orthonormal. The number of such equations is equal to the number of doubly excited determinants $\Phi_{ab}^{mn}$, which is equal to the number of unknown $t_{ab}^{mn}$ amplitudes. So, the above quartic equations must be solved to determine the amplitudes appearing in the various $T_J$ operators. Then, as noted above, once these amplitudes are known, the energy $E$ can be computed using the earlier quartic equation. Having to solve many coupled quartic equations is one of the most severe computational challenges of CC theory. Clearly, the CC method contains additional complexity as a result of the exponential expansion form of the wave function $\psi$ and the resulting coupled quartic equations that need to be solved to determine the $t$ amplitudes. However, it is this way of writing $\psi$ that allows us to automatically build in the fact that products of double excitations are the dominant contributors to quadruple excitations (and $T_2 T_2 T_2$ is the dominant component of six-fold excitations, not $T_6$). In fact, the CC method is today one of the most accurate tools we have for calculating molecular electronic energies and wave functions. The Density Functional Method These approaches provide alternatives to the conventional tools of quantum chemistry, which move beyond the single-configuration picture by adding to the wave function more configurations (i.e., excited determinants) whose amplitudes they each determine in their own way. As noted earlier, these conventional approaches can lead to a very large number of CSFs in the correlated wave function, and, as a result, a need for extraordinary computer resources. The density functional approaches are different. Here one solves a set of orbital-level equations $\left[- \frac{\hbar^2}{2m_e} \nabla^2 - \sum_a \frac{Z_ae^2}{|\textbf{r}-\textbf{R}_a|} + \int \rho(\textbf{r}')\frac{e^2}{|\textbf{r}-\textbf{r}'|} d\textbf{r}' + U(\textbf{r})\right] \phi_i = \varepsilon_i \phi_i$ in which the orbitals {$\phi_i$} feel potentials due to the nuclear centers (having charges $Z_a$), Coulombic interaction with the total electron density $\rho(\textbf{r}')$, and a so-called exchange-correlation potential denoted $U(\textbf{r})$. The particular electronic state for which the calculation is being performed is specified by forming a corresponding density $\rho(\textbf{r})$ that, in turn, is often expressed as a sum of squares of occupied orbitals multiplied by orbital occupation numbers. Before going further in describing how DFT calculations are carried out, let us examine the origins underlying this theory.
The so-called Hohenberg-Kohn theorem states that the ground-state electron density $\rho(\textbf{r})$ of the atom or molecule or ion of interest uniquely determines the potential $V(\textbf{r})$ in the molecule's electronic Hamiltonian (i.e., the positions and charges of the system's nuclei) $H = \sum_j \left[-\frac{\hbar^2}{2m_e} \nabla_j^2 + V(r_j) + \frac{e^2}{2} \sum_{k\ne j} \frac{1}{r_{j,k}} \right],$ and, because $H$ determines all of the energies and wave functions of the system, the ground-state density $\rho(\textbf{r})$ therefore determines all properties of the system. One proof of this theorem proceeds as follows: 1. $\rho(\textbf{r})$ determines the number of electrons $N$ because $\int \rho(\textbf{r}) d^3r = N$. 2. Assume that there are two distinct potentials (aside from an additive constant that simply shifts the zero of total energy) $V(\textbf{r})$ and $V'(\textbf{r})$ which, when used in $H$ and $H'$, respectively, to solve for a ground state produce $E_0$, $\psi (r)$ and $E_0'$, $\psi'(r)$ that have the same one-electron density: $\int |\psi|^2 dr_2 dr_3 ... dr_N = \rho(\textbf{r})= \int |\psi'|^2 dr_2 dr_3 ... dr_N$. 3. If we think of $\psi'$ as a trial variational wave function for the Hamiltonian $H$, we know that $E_0 < \langle \psi'|H|\psi'\rangle = \langle \psi'|H'|\psi'\rangle + \int \rho(\textbf{r}) [V(\textbf{r}) - V'(\textbf{r})] d^3r = E_0' + \int \rho(\textbf{r}) [V(\textbf{r}) - V'(\textbf{r})] d^3r$. 4. Similarly, taking $\psi$ as a trial function for the $H'$ Hamiltonian, one finds that $E_0' < E_0 + \int \rho(\textbf{r}) [V'(\textbf{r}) - V(\textbf{r})] d^3r$. 5. Adding the inequalities in steps 3 and 4 gives $E_0 + E_0' < E_0 + E_0',$ a clear contradiction unless the electronic state of interest is degenerate. Hence, there cannot be two distinct potentials $V$ and $V'$ that give the same non-degenerate ground-state $\rho(\textbf{r})$. So, the ground-state density $\rho(\textbf{r})$ uniquely determines $N$ and $V$, and thus $H$. Furthermore, because the eigenfunctions of $H$ determine all properties of the ground state, then $\rho(\textbf{r})$, in principle, determines all such properties. This means that even the kinetic energy and the electron-electron interaction energy of the ground-state are determined by $\rho(\textbf{r})$. It is easy to see that $\int \rho(\textbf{r}) V(r) d^3r = V[\rho]$ gives the average value of the electron-nuclear (plus any additional one-electron additive potential) interaction in terms of the ground-state density $\rho(\textbf{r})$. However, how are the kinetic energy $T[\rho]$ and the electron-electron interaction $V_{ee}[\rho]$ energy expressed in terms of $\rho$? There is another point of view that I find sheds even more light on why it makes sense that the ground-state electron density $\rho(\textbf{r})$ contains all the information needed to determine all properties. It was shown many years ago, by examining the mathematical character of the Schrödinger equation, that the ground-state wave function $\psi_0(r)$ has certain so-called cusps in the neighborhoods of the nuclear centers $R_a$. In particular $\psi_0(r)$ must obey $\frac{\partial \psi_0(r_1,r_2,\cdots,r_N)}{\partial r_k}=-\frac{m_eZ_ae^2}{\hbar^2}\psi_0(r_1,r_2,\cdots,r_N)\text{ as }\textbf{r}_k \rightarrow \textbf{R}_a$ That is, the derivative or slope of the natural logarithm of the true ground-state wave function must equal $-\dfrac{m_eZ_ae^2}{\hbar^2}$ as any of the electrons' positions approach the nucleus of charge $Z_a$ residing at position $\textbf{R}_a$.
Because the ground-state electron density can be expressed in terms of the ground-state wave function as $\rho(\textbf{r}) = N \int |\psi_0(\textbf{r},r_2,\cdots,r_N)|^2 dr_2 \cdots dr_N$, it can be shown that the ground-state density also displays cusps at the nuclear centers as $\textbf{r} \rightarrow \textbf{R}_a$. So, imagine that you knew the true ground-state density at all points in space. You could integrate the density over all space to determine how many electrons the system has. Then, you could explore over all space to find points at which the density had sharp points characterized by non-zero derivatives in the natural logarithm of the density. The positions $R_a$ of such points specify the nuclear centers, and by measuring the slopes in $\ln(\rho(\textbf{r}))$ at each location, one could determine the charges of these nuclei through ${\rm slope}=\left(\dfrac{\partial\ln(\rho(\textbf{r}))}{\partial r}\right)_{\textbf{r}\rightarrow \textbf{R}_a}=-2\frac{m_eZ_ae^2}{\hbar^2},$ where $m_e$ is the electron mass and $e$ is the unit of charge. This demonstrates why the ground-state density is all one needs to fully determine the locations and charges of the nuclei as well as the number of electrons and thus the entire Hamiltonian $H$. The main difficulty with DFT is that the Hohenberg-Kohn theorem shows the values of $T$, $V_{ee}$, $V$, etc. are all unique functionals of the ground-state $\rho$ (i.e., that they can, in principle, be determined once $\rho$ is given), but it does not tell us what these functional relations are. To see how it might make sense that a property such as the kinetic energy, whose operator $-\hbar^2 /2m_e \nabla^2$ involves derivatives, can be related to the electron density, consider a simple system of $N$ non-interacting electrons moving in a three-dimensional cubic box potential. The energy states of such electrons are known to be $E = \frac{h^2}{8m_eL^2} (n_x^2 + n_y^2 +n_z^2 ),$ where $L$ is the length of the box along the three axes, and $n_x$, $n_y$, and $n_z$ are the quantum numbers describing the state. We can view $n_x^2 + n_y^2 +n_z^2 = R^2$ as defining the squared radius of a sphere in three dimensions, and we realize that the density of quantum states in this space is one state per unit volume in the $n_x$, $n_y$, $n_z$ space. Because $n_x$, $n_y$, and $n_z$ must be positive integers, the volume covering all states with energy less than or equal to a specified energy $E = (h^2/8m_eL^2) R^2$ is 1/8 the volume of the sphere of radius $R$: $\Phi(E) = \frac{1}{8} \frac{4\pi}{3} R^3 = \frac{\pi}{6} \left(\frac{8m_eL^2E}{h^2}\right)^{3/2}$ Since there is one state per unit of such volume, $\Phi(E)$ is also the number of states with energy less than or equal to $E$, and is called the integrated density of states.
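The counting argument behind $\Phi(E)$ is easy to test by brute force; the sketch below enumerates cubic-box levels in units of $h^2/(8m_eL^2)$ and compares the explicit count to $\frac{\pi}{6}E^{3/2}$.

```python
import numpy as np

# Sketch: verify the integrated density of states Phi(E) by direct counting.
# Energies are in units of h^2/(8 m_e L^2), so E = nx^2 + ny^2 + nz^2 and
# the formula above reads Phi(E) = (pi/6) E^(3/2).
nmax = 60
n = np.arange(1, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
E_levels = (nx**2 + ny**2 + nz**2).ravel()

for Ecut in [100.0, 400.0, 900.0]:
    counted = np.count_nonzero(E_levels <= Ecut)
    estimate = np.pi / 6.0 * Ecut**1.5
    print(Ecut, counted, round(estimate))   # relative agreement improves as Ecut grows
```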
The number of states $g(E) dE$ with energy between $E$ and $E+dE$, the density of states, is the derivative of $\Phi$: $g(E) = \frac{d\Phi}{dE} = \frac{\pi}{4} \left(\frac{8m_eL^2}{h^2}\right)^{3/2} \sqrt{E} .$ If we calculate the total energy for these non-interacting $N$ electrons that doubly occupy all states having energies up to the so-called Fermi energy $E_F$ (i.e., the energy of the highest occupied molecular orbital, HOMO), we obtain the ground-state energy: $E_0=2\int_0^{E_F} g(E)EdE = \frac{8\pi}{5} \left(\frac{2m_e}{h^2}\right)^{3/2} L^3 E_F^{5/2}.$ The total number of electrons $N$ can be expressed as $N = 2\int_0^{E_F} g(E)dE = \frac{8\pi}{3} \left(\frac{2m_e}{h^2}\right)^{3/2} L^3 E_F^{3/2},$ which can be solved for $E_F$ in terms of $N$ to then express $E_0$ in terms of $N$ instead of in terms of $E_F$: $E_0 = \frac{3h^2}{10m_e} \left(\frac{3}{8\pi}\right)^{2/3} L^3 \left(\frac{N}{L^3}\right)^{5/3} .$ This gives the total energy, which is also the kinetic energy in this case because the potential energy is zero within the box and because the electrons are assumed to have no interactions among themselves, in terms of the electron density $\rho (x,y,z) = \dfrac{N}{L^3}$. It therefore may be plausible to express kinetic energies in terms of electron densities $\rho(\textbf{r})$, but it is still by no means clear how to do so for real atoms and molecules with electron-nuclear and electron-electron interactions operative. In one of the earliest DFT models, the Thomas-Fermi theory, the kinetic energy of an atom or molecule is approximated using the above kind of treatment on a local level. That is, for each volume element in $\textbf{r}$ space, one assumes the expression given above to be valid, and then one integrates over all $\textbf{r}$ to compute the total kinetic energy: $T_{\rm TF}[\rho] = \int \frac{3h^2}{10m_e} \left(\frac{3}{8\pi}\right)^{2/3} [\rho(\textbf{r})]^{5/3} d^3r = C_F \int [\rho(\textbf{r})]^{5/3} d^3r ,$ where the last equality simply defines the $C_F$ constant. Ignoring the correlation and exchange contributions to the total energy, this $T$ is combined with the electron-nuclear $V$ and Coulombic electron-electron potential energies to give the Thomas-Fermi total energy: $E_{\rm 0,TF} [\rho] = C_F \int [\rho(\textbf{r})]^{5/3} d^3r + \int V(r) \rho(\textbf{r}) d^3r + \frac{e^2}{2} \int \frac{\rho(\textbf{r}) \rho(\textbf{r}')}{|r-r'|} d^3r\, d^3r'.$ This expression is an example of how $E_0$ is given in the local density approximation (LDA). The term local means that the energy is given as a functional (i.e., a function of $\rho$) which depends only on $\rho(\textbf{r})$ at single points in space, not on $\rho(\textbf{r})$ at more than one point in space or on spatial derivatives of $\rho(\textbf{r})$. Unfortunately, the Thomas-Fermi energy functional does not produce results that are of sufficiently high accuracy to be of great use in chemistry. What is missing in this theory are the exchange energy and the electronic correlation energy. Moreover, the kinetic energy is treated only in the approximate manner described earlier (i.e., as for non-interacting electrons within a spatially uniform potential). Dirac was able to address the exchange energy for the uniform electron gas ($N$ Coulomb-interacting electrons moving in a uniform positive background charge whose magnitude balances the total charge of the $N$ electrons).
If the exact expression for the exchange energy of the uniform electron gas is applied on a local level, one obtains the commonly used Dirac local density approximation to the exchange energy: $E_{\rm ex,Dirac}[\rho] = - C_x \int [\rho(\textbf{r})]^{4/3} d^3r,$ with $C_x = (3/4) (3/\pi)^{1/3}$. Adding this exchange energy to the Thomas-Fermi total energy $E_{\rm 0,TF} [\rho]$ gives the so-called Thomas-Fermi-Dirac (TFD) energy functional. Because electron densities vary rather strongly spatially near the nuclei, corrections to the above approximations to $T[\rho]$ and $E_{\rm ex,Dirac}$ are needed. One of the more commonly used so-called gradient-corrected approximations is that invented by Becke, and referred to as the Becke88 exchange functional: $E_{\rm ex}({\rm Becke88}) = E_{\rm ex,Dirac}[\rho] -\gamma \int \frac{x^2 \rho^{4/3}}{1+6 \gamma x \sinh^{-1}(x)} dr,$ where $x =\rho^{-4/3} |\nabla\rho|$, and $\gamma$ is a parameter chosen so that the above exchange energy can best reproduce the known exchange energies of specific electronic states of the inert gas atoms (Becke finds $\gamma$ to equal 0.0042). A common gradient correction to the earlier local kinetic energy functional $T[\rho]$ is called the Weizsacker correction and is given by $\delta T_{\rm Weizsacker} = \frac{1}{72} \frac{\hbar^2}{m_e} \int \frac{ | \nabla \rho(\textbf{r})|^2}{\rho(\textbf{r})} dr.$ Although the above discussion suggests how one might compute the ground-state energy once the ground-state density $\rho(\textbf{r})$ is given, one still needs to know how to obtain $\rho$. Kohn and Sham (KS) introduced a set of so-called KS orbitals obeying the following equation: $\left[-\dfrac{\hbar^2}{2m} \nabla^2 + V(r) + e^2 \int \frac{\rho(\textbf{r}')}{|r-r'|} dr' + U_{\rm xc}(r) \right]\phi_j = \varepsilon_j \phi_j ,$ where the so-called exchange-correlation potential $U_{\rm xc} (r) = \delta E_{\rm xc}[\rho]/\delta\rho(\textbf{r})$ could be obtained by functional differentiation if the exchange-correlation energy functional $E_{\rm xc}[\rho]$ were known. KS also showed that the KS orbitals {$\phi_j$} could be used to compute the density $\rho$ by simply adding up the orbital densities multiplied by orbital occupancies $n_j$: $\rho(\textbf{r}) = \sum_j n_j |\phi_j(r)|^2$ (here $n_j =0,1,$ or 2 is the occupation number of the orbital $\phi_j$ in the state being studied) and that the kinetic energy should be calculated as $T = \sum_j n_j \langle \phi_j(r)| -\dfrac{\hbar^2}{2m} \nabla^2 |\phi_j(r)\rangle .$ The same investigations of the idealized uniform electron gas that identified the Dirac exchange functional found that the correlation energy (per electron) could also be written exactly as a function of the electron density $\rho$ of the system for this model system, but only in two limiting cases: the high-density limit (large $\rho$) and the low-density limit. There still exists no exact expression for the correlation energy even for the uniform electron gas that is valid at arbitrary values of $\rho$. Therefore, much work has been devoted to creating efficient and accurate interpolation formulas connecting the low- and high-density limits of the uniform electron gas. One such expression is $E_C[\rho] = \int \rho(\textbf{r}) \varepsilon_c(r) dr,$ where $\varepsilon_c(r) = \dfrac{A}{2}\left[\ln\Big(\dfrac{x^2}{X}\Big) + \dfrac{2b}{Q} \tan^{-1}\dfrac{Q}{2x+b} -\dfrac{bx_0}{X_0} \left(\ln\Big(\dfrac{(x-x_0)^2}{X}\Big) +\dfrac{2(b+2x_0)}{Q} \tan^{-1}\dfrac{Q}{2x+b}\right)\right]$ is the correlation energy per electron.
Here $x = \sqrt{r_s}$, $X=x^2 +bx+c$, $X_0 =x_0^2 +bx_0+c$, and $Q=\sqrt{4c - b^2}$, with $A = 0.0621814$, $x_0= -0.409286$, $b = 13.0720$, and $c = 42.7198$. The parameter $r_s$ is how the density $\rho$ enters, since $\frac{4}{3}\pi r_s^3$ is equal to $1/\rho$; that is, $r_s$ is the radius of a sphere whose volume is the effective volume occupied by one electron. A reasonable approximation to the full $E_{\rm xc}[\rho]$ would contain the Dirac (and perhaps gradient-corrected) exchange functional plus the above $E_C[\rho]$, but there are many alternative approximations to the exchange-correlation energy functional. Currently, many workers are doing their best to cook up functionals for the correlation and exchange energies, but no one has yet invented functionals that are so reliable that most workers agree to use them. To summarize, in implementing any DFT, one usually proceeds as follows:
1. An atomic orbital basis is chosen in terms of which the KS orbitals are to be expanded. Most commonly, this is a Gaussian basis or a plane-wave basis.
2. Some initial guess is made for the LCAO-KS expansion coefficients $C_{j,a}$ of the occupied KS orbitals: $\phi_j = \sum_a C_{j,a} \chi_a$.
3. The density is computed as $\rho(\textbf{r}) = \sum_j n_j |\phi_j(r)|^2$. Often, $\rho(\textbf{r})$ itself is expanded in an atomic orbital basis, which need not be the same as the basis used for the $\phi_j$, and the expansion coefficients of $\rho$ in this new basis are computed. It is also common to use an atomic orbital basis to expand $\rho^{1/3}(r)$, which, together with $\rho$, is needed to evaluate the exchange-correlation functional's contribution to $E_0$.
4. The current iteration's density is used in the KS equations to determine the Hamiltonian $\left[-\dfrac{\hbar^2}{2m} \nabla^2 + V(r) + e^2 \int \frac{\rho(\textbf{r}')}{|r-r'|} dr' + U_{\rm xc}(r) \right]$ whose new eigenfunctions {$\phi_j$} and eigenvalues {$\varepsilon_j$} are found by solving the KS equations.
5. These new $\phi_j$ are used to compute a new density, which, in turn, is used to solve a new set of KS equations. This process is continued until convergence is reached (i.e., until the $\phi_j$ used to determine the current iteration's $\rho$ are the same $\phi_j$ that arise as solutions on the next iteration).
6. Once the converged $\rho(\textbf{r})$ is determined, the energy can be computed using the earlier expression $E [\rho] = \sum_j n_j \langle \phi_j(r)| -\dfrac{\hbar^2}{2m} \nabla^2|\phi_j(r)\rangle + \int V(r) \rho(\textbf{r}) dr + \frac{e^2}{2} \int \frac{\rho(\textbf{r})\rho(\textbf{r}')}{|r-r'|}dr dr'+ E_{\rm xc}[\rho].$
Energy Difference Methods
In addition to the methods discussed above for treating the energies and wave functions as solutions to the electronic Schrödinger equation, there exists a family of tools that allow one to compute energy differences directly rather than by finding the energies of pairs of states and subsequently subtracting them. Various energy differences can be so computed: differences between two electronic states of the same molecule (i.e., electronic excitation energies $\Delta E$), and differences between energy states of a molecule and the cation or anion formed by removing or adding an electron (i.e., ionization potentials (IPs) and electron affinities (EAs)). In the early 1970s, the author developed one such tool for computing EAs (J. Simons and W. D. Smith, Theory of Electron Affinities of Small Molecules, J. Chem.
Phys., 58, 4899-4907 (1973)) and he called this the equations of motion (EOM) method. Throughout much of the 1970s and 1980s, his group advanced and applied this tool to their studies of molecular EAs and electron-molecule interactions. Because of space limitations, we will not be able to elaborate on these methods in great detail. However, it is important to stress that:
1. These so-called EOM or Green's function or propagator methods utilize essentially the same input information (e.g., atomic orbital basis sets) and perform many of the same computational steps (e.g., evaluation of one- and two-electron integrals, formation of a set of mean-field molecular orbitals, transformation of integrals to the MO basis, etc.) as do the other techniques discussed earlier.
2. These methods are now rather routinely used when $\Delta E$, IP, or EA information is sought.
The basic ideas underlying most if not all of the energy-difference methods are:
1. One forms a reference wave function $\psi$ (this can be of the SCF, MPn, CI, CC, DFT, etc. variety); the energy differences are computed relative to the energy of this function.
2. One expresses the final-state wave function $\psi'$ (i.e., that describing the excited, cation, or anion state) in terms of an operator $\Omega$ acting on the reference $\psi$: $\psi' = \Omega \psi$. Clearly, the $\Omega$ operator must be one that removes or adds an electron when one is attempting to compute IPs or EAs, respectively.
3. One writes equations which $\psi$ and $\psi'$ are expected to obey. For example, in the early development of these methods, the Schrödinger equation itself was assumed to be obeyed, so $H\psi = E \psi$ and $H\psi' = E' \psi'$ are the two equations.
4. One combines $\Omega\psi = \psi'$ with the equations that $\psi$ and $\psi'$ obey to obtain an equation that $\Omega$ must obey. In the above example, one (a) uses $\Omega\psi = \psi'$ in the Schrödinger equation for $\psi'$, (b) allows $\Omega$ to act from the left on the Schrödinger equation for $\psi$, and (c) subtracts the resulting two equations to achieve $(H\Omega - \Omega H) \psi = (E' - E) \Omega \psi$, or, in commutator form, $[H,\Omega] \psi = \Delta E \Omega \psi$.
5. One can, for example, express $\psi$ in terms of a superposition of configurations $\psi = \sum_J C_J \phi_J$ whose amplitudes $C_J$ have been determined from a CI or MPn calculation and express $\Omega$ in terms of operators {$O_K$} that cause single-, double-, etc. level excitations (for the IP (EA) cases, $\Omega$ is given in terms of operators that remove (add), remove and singly excite (add and singly excite), etc. electrons): $\Omega = \sum_K D_K O_K$.
6. Substituting the expansions for $\psi$ and for $\Omega$ into the equation of motion (EOM) $[H,\Omega] \psi = \Delta E \Omega \psi$, and then projecting the resulting equation on the left against a set of functions (e.g., {$O_{K'} |\psi\rangle$}) gives a matrix eigenvalue-eigenvector equation $\sum_K \langle O_{K'}\psi| [H,O_K] \psi \rangle D_K = \Delta E \sum_K \langle O_{K'}\psi|O_K\psi\rangle D_K$ to be solved for the $D_K$ operator coefficients and the excitation (or IP or EA) energies $\Delta E$. Such are the working equations of the EOM (or Green's function or propagator) methods. In recent years, these methods have been greatly expanded and have reached a degree of reliability where they now offer some of the most accurate tools for studying excited and ionized states.
In particular, the use of time-dependent variational principles has allowed a much more rigorous development of equations for energy differences and non-linear response properties. In addition, the extension of the EOM theory to include coupled-cluster reference functions now allows one to compute excitation and ionization energies using some of the most accurate ab initio tools.
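Numerically, step 6 of the scheme above ends in a generalized matrix eigenvalue problem. The following schematic Python fragment (all matrix entries are made-up placeholders, not a real quantum-chemistry calculation) shows the linear-algebra step that delivers the $\Delta E$ values and $D_K$ coefficients once the $\langle O_{K'}\psi|[H,O_K]\psi\rangle$ and $\langle O_{K'}\psi|O_K\psi\rangle$ matrices have been assembled:

```python
# Schematic final step of an EOM calculation: solve M D = DeltaE * S D,
# where M[K',K] = <O_K' psi|[H, O_K] psi> and S[K',K] = <O_K' psi|O_K psi>.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 6                                                # number of O_K operators (toy size)
S = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # overlap-like metric matrix
M = rng.standard_normal((n, n))                      # commutator matrix (placeholder)

delta_E, D = eig(M, S)        # generalized eigenproblem; columns of D hold the D_K
print(np.sort(delta_E.real))  # model excitation (or IP/EA) energies
```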
To form Hamiltonian matrix elements $H_{K,L}$ between any pair of Slater determinants constructed from spin-orbitals that are orthonormal, one uses the so-called Slater-Condon rules. These rules express all non-vanishing matrix elements involving either one- or two-electron operators. One-electron operators are additive and appear as $F = \sum_i f(i);$ two-electron operators are pairwise additive and appear as $G = \sum_{i< j}g(i,j) = \frac{1}{2} \sum_{i \ne j} g(i,j).$ The Slater-Condon rules give the matrix elements between two determinants $| \rangle = |\phi_1\phi_2\phi_3... \phi_N|$ and $| '\rangle = |\phi'_1\phi'_2\phi'_3...\phi'_N|$ for any quantum mechanical operator that is a sum of one- and two-electron operators ($F + G$). They express these matrix elements in terms of one- and two-electron integrals involving the spin-orbitals that appear in $| \rangle$ and $| '\rangle$ and the operators $f$ and $g$. As a first step in applying these rules, one must examine $| \rangle$ and $| '\rangle$ and determine by how many (if any) spin-orbitals $| \rangle$ and $| '\rangle$ differ. In so doing, one may have to reorder the spin-orbitals in one of the determinants to achieve maximal coincidence with those in the other determinant; it is essential to keep track of the number of permutations ($N_p$) that one makes in achieving maximal coincidence. The results of the Slater-Condon rules given below are then multiplied by $(-1)^{N_p}$ to obtain the matrix elements between the original $| \rangle$ and $| '\rangle$. The final result does not depend on whether one chooses to permute $| \rangle$ or $| '\rangle$ to determine $N_p$. The Hamiltonian is, of course, a specific example of such an operator that contains both one- and two-electron components; the electric dipole operator $\sum_i e\textbf{r}_i$ and the electronic kinetic energy $- \frac{\hbar^2}{2m_e}\sum_i\nabla_i^2$ are examples of one-electron operators (for which one takes $g = 0$); the electron-electron coulomb interaction $\sum_{i<j} e^2/r_{ij}$ is a two-electron operator (for which one takes $f = 0$). The two Slater determinants whose matrix elements are to be determined can be written as $| \rangle = \frac{1}{\sqrt{N!}} \sum_{P=1}^{N!} (-1)^p P \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots \phi_N(N)$ $| '\rangle = \frac{1}{\sqrt{N!}} \sum_{Q=1}^{N!} (-1)^q Q \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)$ where the spin-orbitals {$\phi_j$} and {$\phi'_j$} appear in the first and second determinants, respectively, and the operators $P$ and $Q$ describe the permutations of the spin-orbitals appearing in these two determinants. The factors $(-1)^p$ and $(-1)^q$ are the signs associated with these permutations as discussed earlier in Section 6.1.1. Any matrix element involving one- and two-electron operators $\langle |F+G|'\rangle =\frac{1}{N!} \sum_{P,Q} (-1)^{p+q} \langle P \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots \phi_N(N)|F+G|Q \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle$ needs to be expressed in terms of integrals involving the spin-orbitals in the two determinants and the one- and two-electron operators. To simplify the above expression, which contains $(N!)^2$ terms in its two summations, one proceeds as follows:
a. Use is made of the identity $\langle P\psi |\psi'\rangle = \langle \psi|P\psi'\rangle$ to move the permutation operator $P$ to just before the ($F+G$): $\langle P \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots \phi_N(N)| F+G |Q \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle =\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots \phi_N(N)| P(F+G) |Q \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle$
b. Because $F$ and $G$ contain sums over all $N$ electrons in a symmetric fashion, any permutation $P$ acting on $F+G$ leaves these sums unchanged. So, $P$ commutes with $F$ and with $G$. This allows the above quantity to be rewritten as $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots \phi_N(N)| F+G |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle$
c. For any permutation operator $Q$, the operator $PQ$ is just another permutation operator. Moreover, for any $Q$, the set of all operators $PQ$ runs over all $N!$ permutations, and the sign associated with the operator $PQ$ is the sign belonging to $P$ times the sign associated with $Q$, $(-1)^{p+q}$. So, the double sum (i.e., over $P$ and over $Q$) appearing in the above expression for the general matrix element of $F+G$ contains $N!$ identical copies of a single sum over the operator $PQ$, each weighted by the sign $(-1)^{p+q}$ of this operator acting on the spin-orbital product on the right-hand side: $\langle |F+G|'\rangle =\frac{1}{N!}N! \sum_{PQ} (-1)^{p+q} \langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots \phi_N(N)| F+G |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle$ By assumption, as explained earlier, the two Slater determinants have been compared and arranged in an order of maximal coincidence and the factor $(-1)^{N_p}$ needed to bring them into maximal coincidence has been determined. So, let us begin by assuming that the two determinants differ by three spin-orbitals and let us first consider the terms arising from the identity permutation $PQ = E$ (i.e., the permutation that alters none of the spin-orbitals' labels). These terms will involve integrals of the form $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots\phi_j(j)\cdots\phi_N(N)| F+G |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_j(j)\cdots\phi'_N(N)\rangle$ where the three spin-orbitals that differ in the two determinants appear in positions $k$, $n$, and $j$. In these $4N$-dimensional (3 spatial and 1 spin coordinate for each of $N$ electrons) integrals: a. Integrals of the form (for all $i\ne k$, $n$, or $j$) $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots\phi_j(j)\cdots\phi_N(N)| f(i) | \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_j(j)\cdots\phi'_N(N)\rangle$ and (for all $i$ and $l \ne k$, $n$, or $j$) $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots\phi_j(j)\cdots\phi_N(N)| g(i,l) | \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_j(j)\cdots\phi'_N(N)\rangle$ vanish because the spin-orbitals appearing in positions $k$, $n$, and $j$ in the two determinants are orthogonal to one another. For the $F$-operator, even integrals with $i = k$, $n$, or $j$ vanish because there are still two spin-orbital mismatches at the other two locations among $k$, $n$, and $j$.
For the $G$-operator, even integrals with $i$ or $l = k$, $n$, or $j$ vanish because two mismatches remain; and even with both $i$ and $l = k$, $n$, or $j$, the integrals vanish because one spin-orbital mismatch remains. The main observation to make is that, even for $PQ = E$, if there are three spin-orbital differences, neither the $F$ nor $G$ operator gives rise to any non-vanishing results. b. If we now consider any other permutation $PQ$, the situation does not improve because no permutation can alter the fact that three spin-orbital mismatches do not generate any non-vanishing results. If there are only two spin-orbital mismatches (say in locations $k$ and $n$), the integrals we need to evaluate are of the form $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots\phi_N(N)| f(i) |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle$ and $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_n(n)\cdots\phi_N(N)| g(i,l) |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_n(n)\cdots\phi'_N(N)\rangle$ c. Again, beginning with $PQ = E$, we can conclude that all of the integrals involving the $F$-operator (i.e., $f(i)$, $f(k)$, and $f(n)$) vanish because the two spin-orbital mismatches are too much even for $f(k)$ or $f(n)$ to overcome; at least one spin-orbital orthogonality integral remains. For the $G$-operator, the only non-vanishing result arises from the $i = k$ and $l = n$ term $\langle \phi_k(k)\phi_n(n)| g(k,n) | \phi'_k(k)\phi'_n(n)\rangle$. d. The only other permutation that generates another non-vanishing result is the permutation that interchanges $k$ and $n$, and it produces $-\langle \phi_k(k)\phi_n(n)| g(k,n) | \phi'_n(k)\phi'_k(n)\rangle$, where the negative sign arises from the $(-1)^{p+q}$ factor. All other permutations would interchange other spin-orbitals and thus generate orthogonality integrals involving other electrons' coordinates. If there is only one spin-orbital mismatch (say in location $k$), the integrals we need to evaluate are of the form $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_N(N)| f(i) |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_N(N)\rangle$ and $\langle \phi_1(1)\phi_2(2)\cdots\phi_k(k)\cdots\phi_N(N)| g(i,l) |PQ \phi'_1(1)\phi'_2(2)\cdots\phi'_k(k)\cdots\phi'_N(N)\rangle.$ e. Again beginning with $PQ = E$, the only non-vanishing contribution from the $F$-operator is $\langle \phi_k(k)|f(k)|\phi'_k(k) \rangle$. For all other permutations, the $F$-operator produces no non-vanishing contributions because these permutations generate orthogonality integrals. For the $G$-operator and $PQ = E$, the only non-vanishing contributions are $\sum_j\langle \phi_k(k)\phi_j(j)| g(k,j) | \phi'_k(k)\phi_j(j)\rangle$ where the sum over $j$ runs over all of the spin-orbitals that are common to both of the two determinants. f. Among all other permutations, the only ones that produce non-vanishing results are those that permute the spin-orbital in the $k$th location with another spin-orbital, and they produce $-\sum_j\langle \phi_k(k)\phi_j(j)| g(k,j) | \phi'_j(k)\phi_k(j)\rangle.$ The minus sign arises from the $(-1)^{p+q}$ factor associated with this pairwise permutation operator. Finally, if there is no mismatch (i.e., the two determinants are identical), then: g. The identity permutation generates $\sum_{k=1}^N\langle \phi_k(k)| f(k) | \phi_k(k)\rangle$ from the $F$-operator and $\frac{1}{2}\sum_{j \ne k=1}^N \langle \phi_j(j)\phi_k(k)| g(k,j) | \phi_j(j)\phi_k(k)\rangle$ from the $G$-operator.
h. The permutation that interchanges spin-orbitals in the $k$th and $j$th locations produces $-\frac{1}{2}\sum_{j \ne k=1}^N \langle \phi_j(j)\phi_k(k)| g(k,j) | \phi_k(j)\phi_j(k)\rangle .$ The summations over $j$ and $k$ appearing above can, alternatively, be written as $\sum_{j < k=1}^N \langle \phi_j(j)\phi_k(k)| g(k,j) | \phi_j(j)\phi_k(k)\rangle$ and $-\sum_{j < k=1}^N \langle \phi_j(j)\phi_k(k)| g(k,j) | \phi_k(j)\phi_j(k)\rangle .$ So, in summary, once maximal coincidence has been achieved, the Slater-Condon (SC) rules provide the following prescriptions for evaluating the matrix elements of any operator $F+G$ containing a one-electron part $F = \sum_i f(i)$ and a two-electron part $G = \sum_{i< j}g(i,j)$:
1. If $| \rangle$ and $| '\rangle$ are identical, then $\langle | F+G | \rangle = \sum_i \langle \phi_i| f | \phi_i\rangle +\sum_{i > j} [\langle \phi_i \phi_j | g | \phi_i \phi_j \rangle - \langle \phi_i \phi_j | g | \phi_j \phi_i \rangle ],$ where the sums over $i$ and $j$ run over all spin-orbitals in $| \rangle$;
2. If $| \rangle$ and $| '\rangle$ differ by a single spin-orbital mismatch ($\phi_p \ne \phi'_p$), $\langle | F+G | '\rangle = (-1)^{N_p} \left[\langle \phi_p | f | \phi'_p \rangle +\sum_j [\langle \phi_p\phi_j | g | \phi'_p\phi_j \rangle - \langle \phi_p\phi_j | g | \phi_j\phi'_p \rangle ]\right],$ where the sum over $j$ runs over all spin-orbitals in $| \rangle$ except $\phi_p$;
3. If $| \rangle$ and $| '\rangle$ differ by two spin-orbitals ($\phi_p \ne \phi'_p$ and $\phi_q \ne \phi'_q$), $\langle | F+G | '\rangle = (-1)^{N_p} [\langle \phi_p \phi_q | g | \phi'_p \phi'_q \rangle - \langle \phi_p \phi_q | g | \phi'_q \phi'_p \rangle ]$ (note that the $F$ contribution vanishes in this case);
4. If $| \rangle$ and $| '\rangle$ differ by three or more spin-orbitals, then $\langle | F+G | '\rangle = 0;$
5. For the identity operator $I$, the matrix elements $\langle | I | '\rangle = 0$ if $| \rangle$ and $| '\rangle$ differ by one or more spin-orbitals (i.e., the Slater determinants are orthonormal if their spin-orbitals are).
In these expressions, $\langle \phi_i| f | \phi_j \rangle$ is used to denote the one-electron integral $\int \phi^*_i(r) f(r) \phi_j(r) dr$ and $\langle \phi_i \phi_j | g | \phi_k\phi_l \rangle$ (or, in shorthand notation, $\langle i j| k l \rangle$) represents the two-electron integral $\int \phi^*_i(r) \phi^*_j(r') g(r,r') \phi_k(r)\phi_l(r') drdr'.$ The notation $\langle i j | k l \rangle$ introduced above gives the two-electron integrals for the $g(r,r')$ operator in the so-called Dirac notation, in which the $i$ and $k$ indices label the spin-orbitals that refer to the coordinates $r$ and the $j$ and $l$ indices label the spin-orbitals referring to coordinates $r'$. The $r$ and $r'$ denote $r,\theta,\phi,\sigma$ and $r',\theta',\phi',\sigma'$ (with $\sigma$ and $\sigma'$ being the $\alpha$ or $\beta$ spin functions). If the operators $f$ and $g$ do not contain any electron spin operators, then the spin integrations implicit in these integrals (all of the $\phi_i$ are spin-orbitals, so each $\phi$ is accompanied by an $\alpha$ or $\beta$ spin function and each $\phi^*$ involves the adjoint of one of the $\alpha$ or $\beta$ spin functions) can be carried out using $\langle \alpha|\alpha\rangle =1$, $\langle \alpha|\beta\rangle =0$, $\langle \beta|\alpha\rangle =0$, $\langle \beta|\beta\rangle =1$, thereby yielding integrals over spatial orbitals.
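The five prescriptions above translate almost line-by-line into code. The sketch below (an illustration of mine, with hypothetical integral arrays `f[p,q]` $= \langle\phi_p|f|\phi_q\rangle$ and `g[p,q,r,s]` $= \langle pq|rs\rangle$ in the Dirac convention) applies the Slater-Condon rules to two determinants given as sequences of spin-orbital indices, including the maximal-coincidence reordering and the $(-1)^{N_p}$ sign:

```python
# A compact sketch of the Slater-Condon rules for <|F+G|'>.
import numpy as np

def align(det1, det2):
    """Reorder det2 into maximal coincidence with det1; return (det2, (-1)^Np)."""
    det2, sign = list(det2), 1
    for i, p in enumerate(det1):
        if p in det2 and det2.index(p) != i:
            j = det2.index(p)
            det2[i], det2[j] = det2[j], det2[i]   # one transposition
            sign = -sign
    return det2, sign

def slater_condon(det1, det2, f, g):
    det2, sign = align(det1, det2)
    diff = [i for i, (p, q) in enumerate(zip(det1, det2)) if p != q]
    if len(diff) == 0:            # rule 1: identical determinants
        val = sum(f[p, p] for p in det1)
        val += sum(g[p, q, p, q] - g[p, q, q, p]
                   for a, p in enumerate(det1) for q in det1[a + 1:])
        return val
    if len(diff) == 1:            # rule 2: one mismatch phi_p -> phi_p'
        i = diff[0]
        p, pp = det1[i], det2[i]
        val = f[p, pp] + sum(g[p, q, pp, q] - g[p, q, q, pp]
                             for j, q in enumerate(det1) if j != i)
        return sign * val
    if len(diff) == 2:            # rule 3: two mismatches, only G survives
        i, j = diff
        p, q, pp, qp = det1[i], det1[j], det2[i], det2[j]
        return sign * (g[p, q, pp, qp] - g[p, q, qp, pp])
    return 0.0                    # rule 4: three or more mismatches
```

For example, one would call `slater_condon((0, 1, 2), (0, 1, 3), f, g)` for a singly substituted determinant; the arrays `f` (shape `(nso, nso)`) and `g` (shape `(nso, nso, nso, nso)`) would have to be supplied by an integral code.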
Often one wants to model the behavior of a molecule or ion that is not isolated as it might be in a gas-phase experiment. When one attempts to describe a system that is embedded, for example, in a crystal lattice, in a liquid, or in a glass, one has to have some way to treat both the effects of the surrounding medium on the molecule of interest and the motions of the medium's constituents. In so-called quantum mechanics-molecular mechanics (QM-MM) approaches to this problem, one treats the molecule or ion of interest using the electronic structure methods outlined earlier in this Chapter, but with one modification. The one-electron component of the Hamiltonian, which contains the electron-nuclei Coulomb potential $\sum_{a,i} (-Z_ae^2/|r_i - R_a|)$, is modified to also contain a term that describes the potential energy of interaction of the electrons and nuclei with the surrounding medium. In the simplest such models, this solvation potential depends only on the dielectric constant of the surroundings. In more sophisticated models, the surroundings are represented by a collection of (fractional) point charges that may also be attributed with local dipole moments and polarizabilities that allow them to respond to changes in the internal charge distribution of the molecule or ion. The locations of such partial charges and the magnitudes of their dipoles and polarizabilities are determined to make the resultant solvation potential reproduce known (from experiment or other simulations) solvation characteristics (e.g., solvation energy, radial distribution functions) in a variety of calibration cases. The book Molecular Modeling, 2nd ed., A. R. Leach, Prentice Hall, Englewood Cliffs (2001) offers a good source of information about how these terms are added into the one-electron component of the Hamiltonian to account for solvation effects. In addition to describing how the surroundings affect the Hamiltonian of the molecule or ion of interest, one needs to describe the motions or spatial distributions of the medium's constituent atoms or molecules. This is usually done within a purely classical treatment of these degrees of freedom. That is, if equilibrium properties of the solvated system are to be simulated, then Monte-Carlo (MC) sampling (this subject is treated in Chapter 7 of this text) of the surrounding medium's coordinates is used. Within such a MC sampling, the potential energy of the entire system is calculated as a sum of two parts:
i. the electronic energy of the solute molecule or ion, which contains the interaction energy of the molecule's electrons and nuclei with the surrounding medium, plus
ii. the intra-medium potential energy, which is taken to be of a simple molecular mechanics (MM) force field character (i.e., to depend on inter-atomic distances and internal angles in an analytical and easily computed manner).
Again, the book Molecular Modeling, 2nd ed., A. R. Leach, Prentice Hall, Englewood Cliffs (2001) offers a good source of information about these matters. If, alternatively, dynamical characteristics of the solvated species are to be simulated, a classical molecular dynamics (MD) treatment is used. In this approach, the solute-medium and internal-medium potential energies are handled in the same way as in the MC case, but the time evolution of the medium's coordinates is computed using the MD techniques discussed in Chapter 7 of this text.
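As a concrete, deliberately simplified illustration of the electrostatic part of such a QM-MM coupling, the sketch below adds the potential of a few MM point charges, whose values and positions are invented for the example, to the one-electron potential felt by the QM electrons; a real implementation would also include the MM force-field terms and, in the more sophisticated models mentioned above, dipoles and polarizabilities:

```python
# Minimal sketch of electrostatic embedding in QM-MM (atomic-unit style):
# MM point charges q_k at positions R_k contribute -sum_k q_k / |r - R_k|
# to the potential experienced by an electron at point r.
import numpy as np

mm_charges = np.array([-0.8, 0.4, 0.4])        # hypothetical 3-site solvent model
mm_coords = np.array([[0.0, 0.0, 3.0],
                      [0.8, 0.0, 3.6],
                      [-0.8, 0.0, 3.6]])       # positions made up for the example

def embedding_potential(r):
    """Solvation term to be added to the one-electron part of the QM Hamiltonian."""
    d = np.linalg.norm(mm_coords - r, axis=1)
    return -np.sum(mm_charges / d)

print(embedding_potential(np.array([0.0, 0.0, 0.0])))
```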
Although their detailed treatment is beyond the scope of this text, it is important to appreciate that new approaches are always under development in all areas of theoretical chemistry. In this Section, I want to introduce you to two tools that are proving to offer high precision in the treatment of electron correlation energies. These are the so-called Quantum Monte-Carlo and $r_{1,2}$ approaches to this problem. Both methods currently are used when one wishes to obtain the absolute highest precision in an electronic structure calculation. The computational requirements of both of these methods are very high, so, at present, they can only be used on species containing fewer than ca. 100 electrons. However, with the power and speed of computers growing as fast as they are, it is likely that these high-end methods will be more and more widely used as time goes by.
Quantum Monte-Carlo
In this method, one first re-writes the time-dependent Schrödinger equation $i \hbar \frac{d\Psi}{dt} = - \frac{\hbar^2}{2m_e} \sum_j \nabla_j^2 \Psi + V \Psi$ for negative imaginary values of the time variable $t$ (i.e., one simply replaces $t$ by $-it$). This gives $\frac{d\Psi}{dt} = \frac{\hbar}{2m_e} \sum_j \nabla_j^2 \Psi - \frac{V}{\hbar} \Psi,$ which is analogous to the well-known diffusion equation $\frac{dC}{dt} = D \nabla^2C + S C.$ The re-written Schrödinger equation can be viewed as a diffusion equation in the $3N$ spatial coordinates of the $N$ electrons with a diffusion coefficient $D$ that is related to the electrons' mass $m_e$ by $D = \frac{\hbar}{2m_e}.$ The so-called source and sink term $S$ in the diffusion equation is related to the electron-nuclear and electron-electron Coulomb potential energies denoted $V$: $S = - \frac{V}{\hbar}.$ In regions of space where $V$ is large and negative (i.e., where the potential is highly attractive), $S$ is large and positive. This causes the concentration $C$ of the diffusing material to accumulate in such regions. Likewise, where $V$ is positive, $C$ will decrease. Clearly, by recognizing $\Psi$ as the concentration variable in this analogy, one understands that $\Psi$ will accumulate where $V$ is negative and will decay where $V$ is positive, as one expects. So far, we see that the trick of taking $t$ to be negative and imaginary causes the electronic Schrödinger equation to look like a $3N$-dimensional diffusion equation. Why is this useful and why does this trick work? It is useful because, as we see in Chapter 7 of this text, Monte-Carlo methods are highly efficient tools for solving certain equations; it turns out that the diffusion equation is one such case. So, the Quantum Monte-Carlo approach can be used to solve the imaginary-time Schrödinger equation even for systems containing many electrons. But, what does this imaginary time mean?
To understand the imaginary time trick, let us recall that any wave function (e.g., the trial wave function with which one begins to use Monte-Carlo methods to propagate the diffusing $\Psi$ function) $\Phi$ can be written in terms of the exact eigenfunctions {$\psi_K$} of the Hamiltonian $H = - \frac{\hbar^2}{2m_e} \sum_j \nabla_j^2 + V$ as follows: $\Phi = \sum_K C_K \psi_K.$ If the Monte-Carlo method can, in fact, be used to propagate such a function forward in time but with $t$ replaced by $-it$, then it will, in principle, generate the following function at such an imaginary time: $\Phi = \sum_K C_K \psi_K \exp(-iE_Kt/\hbar) \rightarrow \sum_K C_K \psi_K \exp(-E_Kt/\hbar).$ As $t$ increases, the relative amplitudes {$C_K \exp(-E_Kt/\hbar)$} of all states but the lowest state (i.e., that with smallest $E_K$) will decay compared to the amplitude $C_0 \exp(-E_0t/\hbar)$ of the lowest state. So, the time-propagated wave function will, at long enough $t$, be dominated by its lowest-energy component. In this way, the quantum Monte-Carlo propagation method can generate a wave function in $3N$ dimensions that approaches the ground-state wave function. It has turned out that this approach, which tackles the $N$-electron correlation problem head-on, has proven to yield highly accurate energies and wave functions that display the proper cusps near nuclei as well as the cusps that occur whenever two electrons' coordinates approach one another. Finally, it turns out that by using a starting function $\Phi$ of a given symmetry and nodal structure, this method can be extended to converge to the lowest-energy state of the chosen symmetry and nodal structure. So, the method can be used on excited states also. In Chapter 7 of this text, you will learn how the Monte-Carlo tools can be used to simulate the behavior of many-body systems (e.g., the $N$-electron system we just discussed) in a highly efficient and easily parallelized manner.
$r_{1,2}$ Method
In this approach to electron correlation, one employs a trial variational wave function that contains components that depend explicitly on the inter-electron distances $r_{i,j}$. By so doing, one does not rely on the polarized orbital pair approach introduced earlier in this Chapter to represent all of the correlations among the electrons. An example of such an explicitly correlated wave function is: $\psi = |\phi_1 \phi_2 \phi_3 \cdots\phi_N| (1 + a \sum_{i<j} r_{i,j}),$ which consists of an antisymmetrized product of $N$ spin-orbitals multiplied by a factor that is symmetric under interchange of any pair of electrons and contains the electron-electron distances in addition to a single variational parameter $a$. Such a trial function is said to contain linear-$r_{i,j}$ correlation factors. Of course, it is possible to write many other forms for such an explicitly correlated trial function. For example, one could use: $\psi = |\phi_1 \phi_2 \phi_3 \cdots\phi_N| \exp(-a \sum_{i<j} r_{i,j})$ as a trial function. Both the linear and the exponential forms have been used in developing this tool of quantum chemistry. Because the integrals that must be evaluated when one computes the Hamiltonian expectation value $\langle \psi|H| \psi \rangle$ are most computationally feasible (albeit still very taxing) when the linear form is used, this particular parameterization is currently the most widely used.
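To see the imaginary-time propagation idea of the Quantum Monte-Carlo subsection in action, here is a toy diffusion Monte-Carlo sketch for a single particle in a one-dimensional harmonic well (my own illustration; units with $\hbar = m = 1$, so $D = 1/2$ and the exact ground-state energy is 0.5). Walkers diffuse, are born or killed according to the source/sink term, and a reference energy is adjusted to hold the population steady; that reference energy settles near the ground-state energy:

```python
# Toy diffusion Monte-Carlo: one particle, V(x) = x^2/2, hbar = m = 1.
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, target = 0.01, 2000, 1000
walkers = rng.standard_normal(target)   # initial walker positions
E_ref = 0.0                             # running reference (trial) energy

for step in range(n_steps):
    # diffusion: Gaussian steps with variance 2*D*dt, where D = 1/2
    walkers = walkers + np.sqrt(dt) * rng.standard_normal(walkers.size)
    # branching: weight exp(-(V - E_ref) dt) sets each walker's progeny count
    V = 0.5 * walkers**2
    w = np.exp(-(V - E_ref) * dt)
    mult = np.floor(w + rng.random(walkers.size)).astype(int)
    walkers = np.repeat(walkers, mult)
    # feedback keeps the walker population near its target size
    E_ref -= 0.1 * np.log(walkers.size / target)

print(E_ref)   # fluctuates about the exact ground-state energy, 0.5
```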
Visible and Ultraviolet Spectroscopy
Visible and ultraviolet spectroscopies are used to study transitions between states of a molecule or ion in which the electrons' orbital occupancy changes. We call these electronic transitions, and they usually require light in the 5000 cm$^{-1}$ to 100,000 cm$^{-1}$ regime. When such transitions occur, the initial and final states generally differ in their electronic, vibrational, and rotational energies because any change to the electrons' orbital occupancy will induce changes in the Born-Oppenheimer energy surface which, in turn, governs the vibrational and rotational character. Excitations of inner-shell and core orbital electrons may require even higher energy photons, as would excitations that eject an electron. The interpretation of all such spectroscopic data relies heavily on theory, as this Section is designed to illustrate.
The Electronic Transition Dipole and Use of Point Group Symmetry
The interaction of electromagnetic radiation with a molecule's electrons and nuclei can be treated using perturbation theory as we discussed in Chapter 4. The result is a standard expression that we derived in Chapter 4 $R_{i,f} = \frac{2\pi}{\hbar^2} f(\omega_{f,i}) | \textbf{E}_0\cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i\rangle |^2$ for the rate of photon absorption between initial $\Phi_i$ and final $\Phi_f$ states. In this equation, $f(\omega)$ is the intensity of the photon source at the frequency $\omega$, $\omega_{f,i}$ is the frequency corresponding to the transition under study, and $\textbf{E}_0$ is the electric field vector of the photon field. The vector $\boldsymbol{\mu}$ is the electric dipole moment of the electrons and nuclei in the molecule. Because each of these wave functions is a product of an electronic $\psi_e$, a vibrational, and a rotational function, we realize that the electronic integral appearing in this rate expression involves $\langle \psi_{ef} | \boldsymbol{\mu} | \psi_{ei}\rangle = \boldsymbol{\mu}_{f,i} (R),$ a transition dipole matrix element between the initial $\psi_{ei}$ and final $\psi_{ef}$ electronic wave functions. This element is a function of the internal vibrational coordinates of the molecule, and is a vector locked to the molecule's internal axis frame. Molecular point-group symmetry can often be used to determine whether a particular transition's dipole matrix element will vanish and, as a result, the electronic transition will be forbidden and thus predicted to have zero intensity. If the direct product of the symmetries of the initial and final electronic states $\psi_{ei}$ and $\psi_{ef}$ does not match the symmetry of the electric dipole operator (which has the symmetry of its $x$, $y$, and $z$ components; these symmetries can be read off the right-most column of the character tables), the matrix element will vanish. For example, the formaldehyde molecule $H_2CO$ has a ground electronic state that has $^1A_1$ symmetry in the $C_{2v}$ point group. Its $\pi \Rightarrow \pi^*$ singlet excited state also has $^1A_1$ symmetry because both the $\pi$ and $\pi^*$ orbitals are of $b_1$ symmetry. In contrast, the lowest $n \Rightarrow \pi^*$ (these orbitals are shown in Figure 6.15) singlet excited state is of $^1A_2$ symmetry because the highest energy oxygen centered non-bonding orbital is of $b_2$ symmetry and the $\pi^*$ orbital is of $b_1$ symmetry, so the Slater determinant in which both the $n$ and $\pi^*$ orbitals are singly occupied has its symmetry dictated by the $b_2 \times b_1$ direct product, which is $A_2$.
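The direct-product bookkeeping used in this example is simple enough to automate. The following snippet (character table typed in by hand; a check of the argument above, not a general group-theory tool) multiplies $C_{2v}$ characters and identifies the product representation:

```python
# Direct products in C2v: multiply characters under (E, C2, sigma_v, sigma_v').
c2v = {
    "A1": (1, 1, 1, 1),
    "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1),
    "B2": (1, -1, -1, 1),
}

def direct_product(a, b):
    chars = tuple(x * y for x, y in zip(c2v[a], c2v[b]))
    return next(name for name, c in c2v.items() if c == chars)

print(direct_product("B1", "B1"))  # A1: the pi -> pi* case
print(direct_product("B2", "B1"))  # A2: the n -> pi* case
```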
The $\pi \Rightarrow \pi^*$ transition thus involves ground ($^1A_1$) and excited ($^1A_1$) states whose direct product ($A_1 \times A_1$) is of $A_1$ symmetry. This transition thus requires that the electric dipole operator possess a component of $A_1$ symmetry. A glance at the $C_{2v}$ point group's character table shows that the molecular $z$-axis is of $A_1$ symmetry. Thus, if the light's electric field has a non-zero component along the $C_2$ symmetry axis (the molecule's $z$-axis), the $\pi \Rightarrow \pi^*$ transition is predicted to be allowed. Light polarized along either of the molecule's other two axes cannot induce this transition. In contrast, the $n \Rightarrow \pi^*$ transition has a ground-excited state direct product of $B_2\times B_1=A_2$ symmetry. The $C_{2v}$ point group's character table shows that the electric dipole operator (i.e., its $x$, $y$, and $z$ components in the molecule-fixed frame) has no component of $A_2$ symmetry; thus, light of no electric field orientation can induce this $n \Rightarrow \pi^*$ transition. We thus say that the $n \Rightarrow \pi^*$ transition is forbidden. The above examples illustrate one of the most important applications of visible-UV spectroscopy. The information gained in such experiments can be used to infer the symmetries of the electronic states and hence of the orbitals occupied in these states. It is in this manner that this kind of experiment probes electronic structures.
The Franck-Condon Factors
Beyond such electronic symmetry analysis, it is also possible to derive vibrational selection rules for electronic transitions that are allowed. It is conventional to expand the transition dipole matrix element $\boldsymbol{\mu}_{f,i} (R)$ in a power series about the equilibrium geometry of the initial electronic state (since this geometry is characteristic of the molecular structure prior to photon absorption and, because the photon absorption takes place quickly, the nuclei don't have time to move far from there): $\boldsymbol{\mu}_{f,i}(R) = \boldsymbol{\mu}_{f,i}(R_e) + \sum_a \frac{\partial \boldsymbol{\mu}_{f,i}}{\partial R_a} (R_a - R_{a,e}) + \cdots.$ The first term in this expansion, when substituted into the integral over the vibrational coordinates, gives $\boldsymbol{\mu}_{f,i}(R_e) \langle \chi_{v_f} | \chi_{v_i}\rangle$, which has the form of the electronic transition dipole multiplied by the overlap integral between the initial and final vibrational wave functions. The $\boldsymbol{\mu}_{f,i}(R_e)$ factor was discussed above; it is the electronic transition integral evaluated at the equilibrium geometry of the absorbing state. Symmetry can often be used to determine whether this integral vanishes, as a result of which the transition will be forbidden. The vibrational overlap integrals $\langle \chi_{v_f} | \chi_{v_i}\rangle$ do not necessarily vanish because $\chi_{v_f}$ and $\chi_{v_i}$ are eigenfunctions of different vibrational Hamiltonians, since they belong to different Born-Oppenheimer energy surfaces: $\chi_{v_f}$ is an eigenfunction whose potential energy is the final electronic state's energy surface, while $\chi_{v_i}$ has the initial electronic state's energy surface as its potential. The squares of these $\langle \chi_{v_f} | \chi_{v_i}\rangle$ integrals, which are what eventually enter into the transition rate expression $R_{i,f} = \frac{2\pi}{\hbar^2} f(\omega_{f,i}) | \textbf{E}_0\cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i\rangle |^2$, are called Franck-Condon factors.
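For the common model of two harmonic potentials that share a frequency but have displaced minima, the Franck-Condon factors out of $v_i = 0$ take a closed Poisson form, $|\langle\chi_0|\chi_{v_f}\rangle|^2 = e^{-S}S^{v_f}/v_f!$, where the dimensionless displacement parameter $S$ grows with the geometry change between the two surfaces. The sketch below (a model illustration, not from the text) shows how a larger $S$ produces the broader, higher-$v_f$-peaked profile described next:

```python
# Franck-Condon factors for two displaced harmonic oscillators of equal
# frequency, starting from v_i = 0: a Poisson distribution in v_f.
import numpy as np
from math import factorial

def fc_factors(S, v_max=10):
    return np.array([np.exp(-S) * S**v / factorial(v) for v in range(v_max + 1)])

for S in (0.5, 4.0):   # small vs. large geometry change between the surfaces
    fc = fc_factors(S)
    print(f"S = {S}: profile peaks at v_f = {fc.argmax()};",
          "factors:", np.round(fc, 3))
```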
Their relative magnitudes play strong roles in determining the relative intensities of various vibrational bands (i.e., series of peaks) within a particular electronic transition's spectrum. In Figure 6.16, I show two potential energy curves and illustrate the kinds of absorption (and emission) transitions that can occur when the two electronic states have significantly different geometries. Whenever an electronic transition causes a large change in the geometry (bond lengths or angles) of the molecule, the Franck-Condon factors tend to display the characteristic broad progression shown in Figure 6.17 when considered for one initial-state vibrational level $v_i$ and various final-state vibrational levels $v_f$. Notice that as one moves to higher $v_f$ values, the energy spacing between the states ($E_{v_f} - E_{v_f-1}$) decreases; this, of course, reflects the anharmonicity in the excited-state vibrational potential. For the above example, the transition to the $v_f = 2$ state has the largest Franck-Condon factor. This means that the overlap of the initial state's vibrational wave function $\chi_{v_i}$ is largest for the final state's $\chi_{v_f}$ function with $v_f = 2$. As a qualitative rule of thumb, the larger the geometry difference between the initial- and final-state potentials, the broader will be the Franck-Condon profile (as shown in Figure 6.17) and the larger the $v_f$ value for which this profile peaks. Differences in harmonic frequencies between the two states can also broaden the Franck-Condon profile. If the initial and final states have very similar geometries and frequencies along the mode that is excited when the particular electronic excitation is realized, the type of Franck-Condon profile shown in Figure 6.18 may result. Another feature that is important to emphasize is the relation between absorption and emission when the two states' energy surfaces have different equilibrium geometries or frequencies. Subsequent to photon absorption to form an excited electronic state but prior to photon emission, the molecule can undergo collisions with other nearby molecules. This, of course, is especially true in condensed-phase experiments. These collisions cause the excited molecule to lose some of its vibrational and rotational energy, thereby relaxing it to lower levels on the excited electronic surface. This relaxation process is illustrated in Figure 6.19. Subsequently, the electronically excited molecule can undergo photon emission (also called fluorescence) to return to its ground electronic state as shown in Figure 6.20. The Franck-Condon principle discussed earlier also governs the relative intensities of the various vibrational transitions arising in such emission processes. Thus, one again observes a set of peaks in the emission spectrum as shown in Figure 6.21. There are two differences between the lines that occur in emission and in absorption. First, the emission lines are shifted to the red (i.e., to lower energy or longer wavelength) because they occur at transition energies connecting the lowest vibrational level of the upper electronic state to various levels of the lower state. In contrast, the absorption lines connect the lowest vibrational level of the ground state to various levels of the upper state. These relationships are shown in Figure 6.22. The second difference relates to the spacings among the vibrational lines.
In emission, these spacings reflect the energy spacings between vibrational levels of the ground state, whereas in absorption they reflect spacings between vibrational levels of the upper state. The above examples illustrate how vibrationally-resolved visible-UV absorption and emission spectra can be used to gain valuable information about
1. the vibrational energy level spacings of the upper and ground electronic states (these spacings, in turn, reflect the strengths of the bonds existing in these states),
2. the change in geometry accompanying the ground-to-excited state electronic transition as reflected in the breadth of the Franck-Condon profiles (these changes also tell us about the bonding changes that occur as the electronic transition occurs).
So, again we see how visible-UV spectroscopy can be used to learn about the electronic structure of molecules in various electronic states.
Time Correlation Function Expressions for Transition Rates
The above so-called golden-rule expression for the rates of photon-induced transitions is written in terms of the initial and final electronic/vibrational/rotational states of the molecule. There are situations in which these states simply cannot be reliably known. For example, the higher vibrational states of a large polyatomic molecule or the states of a molecule that strongly interacts with surrounding solvent molecules are such cases. In such circumstances, it is possible to recast the golden rule formula into a form that is more amenable to introducing specific physical models that lead to additional insights. Specifically, by using so-called equilibrium averaged time correlation functions, it is possible to obtain rate expressions appropriate to a large number of molecules that exist in a distribution of initial states (e.g., for molecules that occupy many possible rotational and perhaps several vibrational levels at room temperature). As we will soon see, taking this route to expressing spectroscopic transition rates also allows us to avoid having to know each vibrational-rotational wave function of the two electronic states involved; as noted above, this is especially useful for large molecules or molecules in condensed media where such knowledge is likely not available. To begin re-expressing the spectroscopic transition rates, the expression obtained earlier $R_{i,f} = \frac{2\pi}{\hbar^2} f(\omega_{f,i}) | \textbf{E}_0\cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i\rangle |^2 ,$ appropriate to transitions between a particular initial state $\Phi_i$ and a specific final state $\Phi_f$, is rewritten as $R_{i,f} = \frac{2\pi}{\hbar^2} \int f(\omega)|\textbf{E}_0 \cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i \rangle |^2 \delta(\omega_{f,i}-\omega)d\omega .$ Here, the $\delta(\omega_{f,i} - \omega)$ function is used to specifically enforce the resonance condition, which states that the photons' frequency $\omega$ must be resonant with the transition frequency $\omega_{f,i}$. The following integral identity can be used to replace the $\delta$-function: $\delta(\omega_{f,i} - \omega) = \frac{1}{2\pi} \int_{-\infty}^\infty \exp[i(\omega_{f,i}-\omega)t] dt$ by a form that is more amenable to further development.
Then, the state-to-state rate of transition becomes: $R_{i,f} = \frac{1}{\hbar^2} \int f(\omega) |\textbf{E}_0 \cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i \rangle |^2 \int_{-\infty}^\infty \exp[i(\omega_{f,i}-\omega)t] dt d\omega .$ If this expression is then multiplied by the equilibrium probability $\rho_i$ that the molecule is found in the state $\Phi_i$ and summed over all such initial states and summed over all final states $\Phi_f$ that can be reached from $\Phi_i$ with photons of energy $\hbar\omega$, the equilibrium averaged rate of photon absorption by the molecular sample is obtained: $R_{\rm eq.ave.} = \frac{1}{\hbar^2}\sum_{f,i} \rho_i \int f(\omega) |\textbf{E}_0 \cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i \rangle |^2 \int_{-\infty}^\infty \exp[i(\omega_{f,i}-\omega)t] dt d\omega.$ This expression is appropriate for an ensemble of molecules that can be in various initial states $\Phi_i$ with probabilities $\rho_i$. The corresponding result for transitions that originate in a particular state ($\Phi_i$) but end up in any of the allowed (by energy and selection rules) final states reads: $R_i = \frac{1}{\hbar^2}\sum_{f} \rho_i \int f(\omega) |\textbf{E}_0 \cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i \rangle |^2 \int_{-\infty}^\infty \exp[i(\omega_{f,i}-\omega)t] dt d\omega.$ As we discuss in Chapter 7, for an ensemble in which the number of molecules, the temperature $T$, and the system volume are specified, $\rho_i$ takes the form: $\rho_i = g_i \exp(-E_i^0/kT)/Q$ where $Q$ is the partition function of the molecules and $g_i$ is the degeneracy of the state $\Phi_i$ whose energy is $E_i^0$. If you are unfamiliar with partition functions and do not want to simply trust me in the analysis of time correlation functions that we are about to undertake, I suggest you interrupt your study of Chapter 6 and read up through Section 7.1.3 of Chapter 7 at this time. In the above expression for $R_{\rm eq.ave.}$, a double sum occurs. Writing out the elements that appear in this sum in detail, one finds: $\sum_{i,f} \rho_i \textbf{E}_0\cdot \langle \Phi_i | \boldsymbol{\mu} | \Phi_f\rangle \textbf{E}_0\cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i\rangle \exp[i(\omega_{f,i})t].$ In situations in which one is interested in developing an expression for the intensity arising from transitions to all allowed final states, the sum over the final states can be carried out explicitly by first writing $\langle \Phi_f | \boldsymbol{\mu} | \Phi_i\rangle \exp[i(\omega_{f,i})t] = \langle \Phi_f |\exp(iHt/\hbar) \boldsymbol{\mu} \exp(-iHt/\hbar)| \Phi_i\rangle$ and then using the fact that the set of states {$\Phi_k$} is complete and hence obeys $\sum_k |\Phi_k\rangle \langle \Phi_k| = 1.$ The result of using these identities as well as the Heisenberg definition of the time-dependence of the dipole operator $\boldsymbol{\mu}(t) = \exp(iHt/\hbar) \boldsymbol{\mu} \exp(-iHt/\hbar),$ is: $\sum_i\rho_i \langle \Phi_i | \textbf{E}_0\cdot \boldsymbol{\mu}\, \textbf{E}_0\cdot \boldsymbol{\mu} (t) | \Phi_i\rangle .$ In this form, one says that the time dependence has been reduced to that of an equilibrium averaged (i.e., as reflected in the $\sum_i \rho_i \langle \Phi_i | \cdots | \Phi_i\rangle$ expression) time correlation function involving the component of the dipole operator along the external electric field at $t = 0$, $(\textbf{E}_0\cdot \boldsymbol{\mu})$, and this component at a different time $t$, $(\textbf{E}_0\cdot \boldsymbol{\mu} (t))$.
If $\omega_{f,i}$ is positive (i.e., in the photon absorption case), the above expression will yield a non-zero contribution when multiplied by $\exp(-i\omega t)$ and integrated over positive $\omega$-values. If $\omega_{f,i}$ is negative (as for stimulated photon emission), this expression will contribute, when multiplied by $\exp(-i\omega t)$, for negative $\omega$-values. In the latter situation, $\rho_i$ is the equilibrium probability of finding the molecule in the (excited) state from which emission will occur; this probability can be related to that of the lower state $\rho_f$ by \begin{align*} \rho_{\rm excited} &= \rho_{\rm lower} \exp[ - (E^0_{\rm excited} - E^0_{\rm lower})/kT ] \\[4pt] &= \rho_{\rm lower} \exp[ - \hbar\omega/kT ]. \end{align*} The absorption and emission cases can be combined into a single expression for the net rate of photon absorption by recognizing that the latter process leads to photon production, and thus must be entered with a negative sign. The resultant expression for the net rate of decrease of photons is: $R_{\rm eq.ave.net} = \frac{1}{\hbar^2} \sum_i \rho_i \int\!\!\int f(\omega) \langle \Phi_i | (\textbf{E}_0\cdot \boldsymbol{\mu})\, \textbf{E}_0\cdot \boldsymbol{\mu} (t) | \Phi_i\rangle (1 - \exp(- \hbar\omega/kT) ) \exp(-i\omega t) d\omega dt.$ It is conventional to introduce the so-called line shape function $I (\omega)$: $I (\omega) = \int_{-\infty}^\infty \sum_i \rho_i \langle \Phi_i | (\textbf{E}_0\cdot \boldsymbol{\mu})\, \textbf{E}_0\cdot \boldsymbol{\mu} (t) | \Phi_i\rangle \exp(-i\omega t)\, dt$ in terms of which the net photon absorption rate is $R_{\rm eq.ave.net} = \frac{1}{\hbar^2} \int f(\omega)\, (1 - \exp(- \hbar\omega/kT) )\, I(\omega)\, d\omega .$ The function $C (t) = \sum_i \rho_i \langle \Phi_i | (\textbf{E}_0\cdot \boldsymbol{\mu} )\, \textbf{E}_0\cdot \boldsymbol{\mu} (t) | \Phi_i\rangle$ is called the equilibrium averaged time correlation function of the component of the electric dipole operator along the direction of the external electric field $\textbf{E}_0$. Its Fourier transform is $I (\omega)$, the spectral line shape function. The convolution of $I(\omega)$ with the light source's $f (\omega)$ function, multiplied by $(1 - \exp(-\hbar\omega/kT))$, the correction for stimulated photon emission, gives the net rate of photon absorption. Although the correlation function expression for the photon absorption rate is equivalent to the state-to-state expression from which it was derived, we notice that
1. $C(t)$ does not contain explicit reference to the final-state wave functions $\Phi_f$; instead,
2. $C(t)$ requires us to describe how the dipole operator changes with time.
That is, in the time correlation framework, one is allowed to use models of the time evolution of the system to describe the spectra. This is especially appealing for large complex molecules and molecules in condensed media because, for such systems, it would be hopeless to attempt to find the final-state wave functions, but it may be reasonable (albeit challenging) to model the system's time evolution. Prof. Eric Heller at Harvard has pioneered the use of time-domain methods for treating molecular spectroscopy; his web site offers access to further information and insight into this subject. It turns out that a very wide variety of spectroscopic and thermodynamic properties (e.g., light scattering intensities, diffusion coefficients, and thermal conductivity) can be expressed in terms of molecular time correlation functions. The text Statistical Mechanics, D. A. McQuarrie, Harper and Row, New York (1977) has a good treatment of many of these cases.
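The relationship between $C(t)$ and $I(\omega)$ is easy to demonstrate numerically. In the sketch below (model parameters invented for illustration), an oscillating, exponentially damped correlation function of the kind produced by a transition at frequency $\omega_0$ with dephasing time $\tau$ is Fourier transformed by direct quadrature; the resulting line shape peaks at $\omega_0$ with a width set by $1/\tau$:

```python
# Fourier transform of a model dipole correlation function:
# C(t) = exp(i*omega0*t) * exp(-t/tau)  ->  a line centered at omega0.
import numpy as np

omega0, tau = 5.0, 2.0                       # model parameters, arbitrary units
t = np.linspace(0.0, 50.0, 4096)
C = np.exp(1j * omega0 * t) * np.exp(-t / tau)

omega = np.linspace(0.0, 10.0, 500)
I = np.array([np.trapz(C * np.exp(-1j * w * t), t) for w in omega]).real

print(omega[I.argmax()])                     # ~= omega0, the line-shape peak
```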
Let's now examine how such time evolution issues are used within the correlation function framework for the specific photon absorption case.
Line Broadening Mechanisms
If the rotational motion of the system's molecules is assumed to be entirely unhindered (e.g., by any environment or by collisions with other molecules), it is appropriate to express the time dependence of each of the dipole time correlation functions listed above in terms of a free rotation model. For example, when dealing with diatomic molecules, the electronic-vibrational-rotational $C(t)$ appropriate to a specific electronic-vibrational transition becomes: $C(t) = (q_r q_v q_e q_t)^{-1} \sum_J (2J+1) \exp\bigg(- \frac{h^2J(J+1)}{8\pi^2IkT}\bigg) \exp\Big(- \frac{h\nu_{\rm vib}v_i}{kT}\Big) g_{ie} \langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t) |\phi_J\rangle |\langle \chi_{iv} | \chi_{fv}\rangle |^2 \exp(i \omega_{fv,iv} t + i\Delta E_{i,f} t/\hbar).$ Here, $q_r = \frac{8\pi^2IkT}{h^2}$ is the rotational partition function ($I$ being the molecule's moment of inertia $I = \mu R_e^2$, with $\mu$ the reduced mass, and $\dfrac{h^2J(J+1)}{8\pi^2I}$ being the molecule's rotational energy for the state with quantum number $J$ and degeneracy $2J+1$), $q_v = \frac{\exp(-h\nu_{\rm vib}/2kT)}{1-\exp(-h\nu_{\rm vib}/kT)}$ is the vibrational partition function ($\nu_{\rm vib}$ being the vibrational frequency), $g_{ie}$ is the degeneracy of the initial electronic state, $q_t = (2\pi mkT/h^2)^{3/2} V$ is the translational partition function for molecules of mass $m$ moving in volume $V$, and $\Delta E_{i,f}$ is the adiabatic electronic energy spacing. The origins of such partition functions are treated in Chapter 7 of this text. The functions $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t) |\phi_J\rangle$ describe the time evolution of the electronic transition dipole vector for the rotational state $J$. In a free-rotation model, this function is taken to be of the form: $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t) |\phi_J\rangle = \langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e) |\phi_J\rangle \cos(\omega_Jt)$ where $\omega_J$ is the rotational frequency (in cycles per second) for rotation of the molecule in the state labeled by $J$. This oscillatory time dependence, combined with the $\exp(i \omega_{fv,iv} t + i\Delta E_{i,f} t/\hbar)$ time dependence arising from the electronic and vibrational factors, produces, when this $C(t)$ function is Fourier transformed to generate $I(\omega)$, a series of $\delta$-function peaks. The intensities of these peaks are governed by the $(q_r q_v q_e q_t)^{-1} \sum_J (2J+1) \exp\bigg(- \frac{h^2J(J+1)}{8\pi^2IkT}\bigg) \exp\Big(- \frac{h\nu_{\rm vib}v_i}{kT}\Big) g_{ie}$ Boltzmann population factors, as well as by the $|\langle \chi_{iv} | \chi_{fv}\rangle |^2$ Franck-Condon factors and the $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,0) |\phi_J\rangle$ terms. This same analysis can be applied to the pure rotation and vibration-rotation $C(t)$ time dependences with analogous results.
In the former, $\delta$-function peaks are predicted to occur at $\omega = \pm \omega_J$ and in the latter at $\omega = \omega_{fv,iv} \pm \omega_J$, with the intensities governed by the time-independent factors in the corresponding expressions for $C(t)$. In experimental measurements, such sharp $\delta$-function peaks are, of course, not observed. Even when very narrow bandwidth laser light sources are used (i.e., for which $f(\omega)$ is an extremely narrowly peaked function), spectral lines are found to possess finite widths. Let us now discuss several sources of line broadening, some of which will relate to deviations from the "unhindered" rotational motion model introduced above. Doppler Broadening In the above expressions for $C(t)$, the averaging over initial rotational, vibrational, and electronic states is explicitly shown. There is also an average over the translational motion implicit in all of these expressions. Its role has not (yet) been emphasized because the molecular energy levels, whose spacings yield the characteristic frequencies at which light can be absorbed or emitted, do not depend on translational motion. However, the frequency of the electromagnetic field experienced by moving molecules does depend on the velocities of the molecules, so this issue must now be addressed. Elementary physics classes express the so-called Doppler shift of a wave's frequency induced by relative movement of the light source and the molecule as follows: $\omega_{\rm observed} = \omega_{\rm nominal} (1 + v_z/c)^{-1} \approx \omega_{\rm nominal} (1 - v_z/c + \cdots).$ Here, $\omega_{\rm nominal}$ is the frequency of the unmoving light source seen by unmoving molecules, $v_z$ is the velocity of relative motion of the light source and molecules, $c$ is the speed of light, and $\omega_{\rm observed}$ is the Doppler-shifted frequency (i.e., the frequency seen by the molecules). The second identity is obtained by expanding, in a power series, the $(1 + v_z/c)^{-1}$ factor, and is valid in truncated form when the molecules are moving with speeds significantly below the speed of light. For all of the cases considered earlier, a $C(t)$ function is subjected to Fourier transformation to obtain a spectral line shape function $I(\omega)$, which then provides the essential ingredient for computing the net rate of photon absorption. In this Fourier transform process, the variable $\omega$ is assumed to be the frequency of the electromagnetic field experienced by the molecules. The above considerations of Doppler shifting then lead one to realize that the correct functional form to use in converting $C(t)$ to $I(\omega)$ is: $I(\omega) = \int C(t) \exp\left(-i t \omega\Big(1-\dfrac{v_z}{c}\Big)\right)dt,$ where $\omega$ is the nominal frequency of the light source. As stated earlier, within $C(t)$ there is also an equilibrium average over translational motion of the molecules. For a gas-phase sample undergoing random collisions and at thermal equilibrium, this average is characterized by the well-known Maxwell-Boltzmann velocity distribution: $\left(\frac{m}{2\pi kT}\right)^{3/2} \exp\bigg(-\frac{m (v_x^2+v_y^2+v_z^2)}{2kT}\bigg) dv_x dv_y dv_z.$ Here $m$ is the mass of the molecules and $v_x$, $v_y$, and $v_z$ label the velocities along the lab-fixed Cartesian coordinates.
Defining the $z$-axis as the direction of propagation of the light's photons and carrying out the averaging of the Doppler factor over such a velocity distribution, one obtains: $\int_{-\infty}^\infty \exp\left(-i t \omega\Big(1-\dfrac{v_z}{c}\Big)\right) \left(\frac{m}{2\pi kT}\right)^{3/2}\exp\bigg(-\frac{m (v_x^2+v_y^2+v_z^2)}{2kT}\bigg) dv_x dv_y dv_z = \exp(-i\omega t)\int_{-\infty}^\infty \left(\frac{m}{2\pi kT}\right)^{1/2} \exp\left(it\omega \dfrac{v_z}{c}\right)\exp\Big(-\dfrac{mv_z^2}{2kT}\Big)dv_z = \exp(-i\omega t) \exp\bigg(- \frac{\omega^2t^2kT}{2mc^2}\bigg).$ This result, when substituted into the expressions for $C(t)$, yields expressions identical to those given for the three cases treated above but with one modification. The translational motion average need no longer be considered in each $C(t)$; instead, the earlier expressions for $C(t)$ must each be multiplied by a factor $\exp(- \omega^2t^2kT/(2mc^2))$ that embodies the translationally averaged Doppler shift. The spectral line shape function $I(\omega)$ can then be obtained for each $C(t)$ by simply Fourier transforming: $I(\omega) = \int_{-\infty}^\infty \exp(-i\omega t)C(t) dt.$ When applied to the rotation, vibration-rotation, or electronic-vibration-rotation cases within the unhindered rotation model treated earlier, the Fourier transform involves integrals of the form: $\int_{-\infty}^\infty \exp(-i\omega t) \exp\bigg(- \frac{\omega^2t^2kT}{2mc^2}\bigg) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt.$ This integral would arise in the electronic-vibration-rotation case; the other two cases would involve integrals of the same form but with the $\Delta E_{i,f}/\hbar$ absent in the vibration-rotation situation and with $\omega_{fv,iv} + \Delta E_{i,f}/\hbar$ missing for pure rotation transitions. All such integrals can be carried out analytically and yield: $\sqrt{\frac{2mc^2\pi}{\omega^2kT}} \exp\left[ - \frac{(\omega-\omega_{fv,iv} - \Delta E_{i,f}/\hbar \pm \omega_J)^2 mc^2}{2\omega^2kT} \right].$ The result is a series of Gaussian peaks in $\omega$-space, centered at: $\omega = \omega_{fv,iv} + \Delta E_{i,f}/\hbar \pm \omega_J$ with widths ($\sigma$) determined by $\sigma^2= \frac{\omega^2kT}{mc^2},$ given the temperature $T$ and the mass of the molecules $m$. The hotter the sample, the faster the molecules are moving on average, and the broader is the distribution of Doppler-shifted frequencies experienced by these molecules. The net result of the Doppler effect is thus to produce a line shape function that is similar to the unhindered rotation model's series of $\delta$-functions, but with each $\delta$-function peak broadened into a Gaussian shape. If spectra can be obtained to accuracy sufficient to determine the Doppler width of the spectral lines, such knowledge can be used to estimate the temperature of the system. This can be useful when dealing with systems that cannot be subjected to alternative temperature measurements. For example, the temperatures of stars can be estimated (if their velocity relative to the earth is known) by determining the Doppler shifts of emission lines from them. Alternatively, the relative speed of a star from the earth may be determined if its temperature is known. As another example, the temperature of hot gases produced in an explosion can be probed by measuring Doppler widths of absorption or emission lines arising from molecules in these gases. Pressure Broadening To include the effects of collisions on the rotational motion part of any of the above $C(t)$ functions, one must introduce a model for how such collisions change the dipole-related vectors that enter into $C(t)$.
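Before taking up the collision model, it is worth putting numbers to the Doppler width formula $\sigma^2 = \omega^2 kT/(mc^2)$ just derived. The sketch below assumes, purely for illustration, an $N_2$ sample at 300 K probed at 500 nm; none of these choices comes from the text.

```python
import numpy as np

# Rough magnitude of Doppler broadening, sigma = w * sqrt(kT/(m c^2)),
# for an assumed example: N2 gas at 300 K probed at 500 nm.
k = 1.380649e-23          # Boltzmann constant, J/K
c = 2.99792458e8          # speed of light, m/s
T = 300.0                 # temperature, K
m = 28.0 * 1.66054e-27    # mass of N2, kg
w = 2.0 * np.pi * c / 500e-9     # angular frequency of 500 nm light, rad/s

frac = np.sqrt(k * T / (m * c**2))   # fractional width sigma/w
print("sigma/w =", frac)                         # ~1e-6
print("sigma   =", w * frac / (2*np.pi) / 1e9, "GHz")  # ~0.6 GHz
```

The takeaway is that room-temperature Doppler widths are parts-per-million of the transition frequency, which is why they matter for sharp rotational lines but are negligible on the scale of electronic band envelopes.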
The most elementary model used to address collisions applies to gaseous samples which are assumed to undergo unhindered rotational motion until struck by another molecule, at which time a kick is applied to the dipole vector, and after which the molecule returns to its unhindered rotational movement. The effects of such infrequent collision-induced kicks are treated within the so-called pressure broadening (sometimes called collisional broadening) model by modifying the free-rotation correlation function through the introduction of an exponential damping factor $\exp(-|t|/\tau)$: $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,0) |\phi_J\rangle \cos \frac{\hbar J(J+1) t}{4\pi I} \rightarrow \langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,0) |\phi_J\rangle \cos \frac{\hbar J(J+1) t}{4\pi I} \exp\bigg(-\dfrac{|t|}{\tau}\bigg).$ This damping function's time scale parameter $\tau$ is assumed to characterize the average time between collisions and thus should be inversely proportional to the collision frequency. Its magnitude is also related to the effectiveness with which collisions cause the dipole function to deviate from its unhindered rotational motion (i.e., related to the collision strength). In effect, the exponential damping causes the time correlation function $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t) |\phi_J\rangle$ to lose its memory and to decay to zero. This memory point of view is based on viewing $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t) |\phi_J\rangle$ as the projection of $\textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t)$ along its $t = 0$ value $\textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,0)$ as a function of time $t$. Introducing this additional $\exp(-|t|/\tau)$ time dependence into $C(t)$ produces, when $C(t)$ is Fourier transformed to generate $I(\omega)$, integrals of the form $\int_{-\infty}^\infty \exp(-i\omega t) \exp\bigg(-\dfrac{|t|}{\tau}\bigg) \exp\bigg(- \frac{\omega^2t^2kT}{2mc^2}\bigg) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt.$ In the limit of very small Doppler broadening, the $\dfrac{\omega^2t^2kT}{2mc^2}$ factor can be ignored (i.e., $\exp\bigg(- \dfrac{\omega^2t^2kT}{2mc^2}\bigg)$ set equal to unity), and $\int_{-\infty}^\infty \exp(-i\omega t) \exp\bigg(-\dfrac{|t|}{\tau}\bigg) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt$ results. This integral can be performed analytically and generates: $\frac{1}{4\pi}\left[\frac{1/\tau}{(1/\tau)^2+(\omega-\omega_{fv,iv} - \Delta E_{i,f}/\hbar \pm \omega_J)^2} + \frac{1/\tau}{(1/\tau)^2+(\omega+\omega_{fv,iv} + \Delta E_{i,f}/\hbar \pm \omega_J)^2} \right],$ a pair of Lorentzian peaks in $\omega$-space centered again at $\omega = \pm [\omega_{fv,iv}+\Delta E_{i,f}/\hbar \pm \omega_J].$ The full width at half height of these Lorentzian peaks is $2/\tau$. One says that the individual peaks have been pressure or collisionally broadened. When the Doppler broadening cannot be neglected relative to the collisional broadening, the above integral $\int_{-\infty}^\infty \exp(-i\omega t) \exp\bigg(-\dfrac{|t|}{\tau}\bigg) \exp\bigg(- \frac{\omega^2t^2kT}{2mc^2}\bigg) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt$ is more difficult to perform.
Nevertheless, it can be carried out and again produces a series of peaks centered at $\omega = \omega_{fv,iv}+\Delta E_{i,f}/\hbar \pm \omega_J$ but whose widths are determined both by Doppler and pressure broadening effects. The resultant line shapes are thus no longer purely Lorentzian nor Gaussian (which are compared in Figure 6.23 for both functions having the same full width at half height and the same integrated area), but have a shape that is called a Voigt shape. Experimental measurements of line widths that allow one to extract widths originating from collisional broadening provide information (through $\tau$) on the frequency of collisions and the strength of these collisions. By determining $\tau$ at a series of gas densities, one can separate the collision-frequency dependence and determine the strength of the individual collisions (meaning how effective each collision is in reorienting the molecule’s dipole vector). Rotational Diffusion Broadening Molecules in liquids and very dense gases undergo such frequent collisions with the other molecules that the mean time between collisions is short compared to the rotational period for their unhindered rotation. As a result, the time dependence of the dipole-related correlation functions can no longer be modeled in terms of free rotation that is interrupted by (infrequent) collisions and Doppler shifted. Instead, a model that describes the incessant buffeting of the molecule's dipole by surrounding molecules becomes appropriate. For liquid samples in which these frequent collisions cause the dipole to undergo angular motions that cover all angles (i.e., in contrast to a frozen glass or solid in which the molecule's dipole would undergo strongly perturbed pendular motion about some favored orientation), the so-called rotational diffusion model is often used. In this picture, the rotation-dependent part of $C(t)$ is expressed as: $\langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t) |\phi_J\rangle = \langle \phi_J | \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e)\, \textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,0) |\phi_J\rangle \exp( -2D_{\rm rot}|t|),$ where $D_{\rm rot}$ is the rotational diffusion constant whose magnitude governs the time decay in the averaged value of $\textbf{E}_0\cdot \boldsymbol{\mu}_{i,f}(R_e,t)$ at time $t$ with respect to its value at time $t = 0$; the larger $D_{\rm rot}$, the faster is this decay. As with pressure broadening, this exponential time dependence, when subjected to Fourier transformation, yields: $\int_{-\infty}^\infty \exp(-i\omega t) \exp( -2D_{\rm rot}|t|) \exp\bigg(- \frac{\omega^2t^2kT}{2mc^2}\bigg) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt.$ Again, in the limit of very small Doppler broadening, the $\dfrac{\omega^2t^2kT}{2mc^2}$ factor can be ignored (i.e., $\exp\bigg(- \dfrac{\omega^2t^2kT}{2mc^2}\bigg)$ set equal to unity), and $\int_{-\infty}^\infty \exp(-i\omega t) \exp( -2D_{\rm rot}|t|) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt$ results.
This integral can be evaluated analytically and generates: $\frac{1}{4\pi}\left[\frac{2D_{\rm rot}}{(2D_{\rm rot})^2+(\omega-\omega_{fv,iv} - \Delta E_{i,f}/\hbar \pm \omega_J)^2} + \frac{2D_{\rm rot}}{(2D_{\rm rot})^2+(\omega+\omega_{fv,iv} + \Delta E_{i,f}/\hbar \pm \omega_J)^2} \right],$ a pair of Lorentzian peaks in $\omega$-space centered again at $\omega = \pm[\omega_{fv,iv}+\Delta E_{i,f}/\hbar \pm \omega_J].$ The full width at half height of these Lorentzian peaks is $4D_{\rm rot}$. In this case, one says that the individual peaks have been broadened via rotational diffusion. In such cases, experimental measurement of line widths yields valuable information about how fast the molecule is rotationally diffusing in its condensed environment. Lifetime or Heisenberg Homogeneous Broadening Whenever the absorbing species undergoes one or more processes that deplete its numbers, we say that it has a finite lifetime. For example, a species that undergoes unimolecular dissociation has a finite lifetime, as does an excited state of a molecule that decays by spontaneous emission of a photon. Any process that depletes the absorbing species contributes another source of time dependence for the dipole time correlation functions $C(t)$ discussed above. This time dependence is usually modeled by appending, in a multiplicative manner, a factor $\exp(-|t|/\tau)$. This, in turn, modifies the line shape function $I(\omega)$ in a manner much like that discussed when treating the rotational diffusion case: $\int_{-\infty}^\infty \exp(-i\omega t) \exp\bigg(-\dfrac{|t|}{\tau}\bigg) \exp\bigg(- \frac{\omega^2t^2kT}{2mc^2}\bigg) \exp\left(i \Big(\omega_{fv,iv} + \dfrac{\Delta E_{i,f}}{\hbar} \pm \omega_J\Big) t \right) dt.$ Not surprisingly, when the Doppler contribution is small, one obtains: $\frac{1}{4\pi}\left[\frac{1/\tau}{(1/\tau)^2+(\omega-\omega_{fv,iv} - \Delta E_{i,f}/\hbar \pm \omega_J)^2} + \frac{1/\tau}{(1/\tau)^2+(\omega+\omega_{fv,iv} + \Delta E_{i,f}/\hbar \pm \omega_J)^2} \right].$ In these Lorentzian lines, the parameter $\tau$ describes the kinetic decay lifetime of the molecule. One says that the spectral lines have been lifetime or Heisenberg broadened by an amount proportional to $1/\tau$. The latter terminology arises because the finite lifetime of the molecular states can be viewed as producing, via the Heisenberg uncertainty relation $\Delta E\, \Delta t \geq \hbar$, states whose energy is uncertain to within an amount $\Delta E$. Site Inhomogeneous Broadening Among the above line broadening mechanisms, the pressure, rotational diffusion, and lifetime broadenings are all of the homogeneous variety. This means that each and every molecule in the sample is affected in exactly the same manner by the broadening process. For example, one does not find some molecules with short lifetimes and others with long lifetimes in the Heisenberg case; the entire ensemble of molecules is characterized by a single lifetime. In contrast, Doppler broadening is inhomogeneous in nature because each molecule experiences a broadening that is characteristic of its particular velocity $v_z$. That is, the fast molecules have their lines broadened more than do the slower molecules. Another important example of inhomogeneous broadening is provided by so-called site broadening. Molecules embedded in a liquid, solid, or glass do not, at the instant of their photon absorption, all experience exactly the same interactions with their surroundings.
The distribution of instantaneous solvation environments may be rather narrow (e.g., in a highly ordered solid matrix) or quite broad (e.g., in a liquid at high temperature or in a supercritical fluid). Different environments produce different energy level splittings $\omega = \omega_{fv,iv}+\Delta E_{i,f}/\hbar \pm \omega_J$ (because the initial and final states are solvated differently by the surroundings) and thus different frequencies at which photon absorption can occur. The distribution of energy level splittings causes the sample to absorb at a range of frequencies as illustrated in Figure 6.24 where homogeneous and inhomogeneous line shapes are compared. The spectral line shape function $I(\omega)$ is therefore further broadened when site inhomogeneity is present and significant. These effects can be modeled by convolving the kind of $I(\omega)$ function that results from Doppler, lifetime, rotational diffusion, and pressure broadening with a Gaussian distribution $P(\Delta E)$ that describes the inhomogeneous distribution of energy level splittings: $I(\omega) = \int I^0(\omega;\Delta E)P(\Delta E)d\Delta E.$ Here $I^0(\omega;\Delta E)$ is a line shape function such as those described earlier, each of which contains a set of frequencies (e.g., $\omega_{fv,iv}+\Delta E_{i,f}/\hbar \pm \omega_J +\Delta E /\hbar = \omega+\Delta E /\hbar$) at which absorption or emission occurs, and $P(\Delta E)$ is a Gaussian probability function describing the inhomogeneous broadening of the energy splitting $\Delta E$. A common experimental test to determine whether inhomogeneous broadening is significant involves hole burning. In such experiments, an intense light source (often a laser) is tuned to a frequency $\omega_{\rm burn}$ that lies within the spectral line being probed for inhomogeneous broadening. Then, with the intense light source constantly turned on, a second tunable light source is used to scan through the profile of the spectral line, and an absorption spectrum is recorded. Given an absorption profile as shown in Figure 6.25 in the absence of the intense burning light source, one expects to see a profile such as that shown in Figure 6.26 if inhomogeneous broadening is operative. The interpretation of the change in the absorption profile caused by the bright light source proceeds as follows: 1. In the ensemble of molecules contained in the sample, some molecules will absorb at or near the frequency of the bright light source $\omega_{\rm burn}$; other molecules (those whose environments do not produce energy level splittings that match $\omega_{\rm burn}$) will not absorb at this frequency. 2. Those molecules that do absorb at $\omega_{\rm burn}$ will have their transition saturated by the intense light source, thereby rendering this frequency region of the line profile transparent to further absorption. 3. When the probe light source is scanned over the line profile, it will induce absorptions for those molecules whose local environments did not allow them to be saturated by the $\omega_{\rm burn}$ light. The absorption profile recorded by this probe light source's detector thus will match that of the original line profile, until 4. The probe light source's frequency matches $\omega_{\rm burn}$, upon which no absorption of the probe source's photons will be recorded because molecules that absorb in this frequency regime have had their transition saturated.
5. Hence, a hole will appear in the absorption spectrum recorded by the probe light source's detector in the region of $\omega_{\rm burn}$. Unfortunately, the technique of hole burning does not provide a fully reliable method for identifying inhomogeneously broadened lines. If a hole is observed in such a burning experiment, this provides ample evidence, but if one is not seen, the result is not definitive. In the latter case, the transition may not be strong enough (i.e., may not have a large enough rate of photon absorption) for the intense light source to saturate the transition to the extent needed to form a hole. Photoelectron Spectroscopy Photoelectron spectroscopy (PES) is a special kind of electronic spectroscopy. It uses visible or UV light to excite a molecule or ion to a final state in which an electron is ejected. In effect, it induces transitions to final states in which an electron has been promoted to an unbound so-called continuum orbital. Most PES experiments are carried out using a fixed-frequency light source (usually a laser). This source’s photons, when absorbed, eject electrons whose intensity and kinetic energies $KE$ are then measured. Subtracting the electrons’ $KE$ from the photon’s energy $h\nu$ gives the binding energy $BE$ of the electron: $BE = h\nu - KE.$ If the sample subjected to the PES experiment has molecules in a variety of initial states (e.g., two electronic states or various vibrational-rotational levels of the ground electronic state) having various binding energies $BE_k$, one will observe a series of peaks corresponding to electrons ejected with a variety of kinetic energies $KE_k$ as Figure 6.27 illustrates and as the energy-balance condition requires: $BE_k = h\nu - KE_k.$ The peak of electrons detected with the highest kinetic energy came from the highest-lying state of the parent, while those with low kinetic energy came from the lowest-energy state of the parent. By examining the spacings between these peaks, one learns about the spacings between the energy levels of the parent species that has been subjected to electron loss. Alternatively, if the parent species exists primarily in its lowest state but the daughter species produced when an electron is removed from the parent has excited (electronic, vibration-rotation) states that can be accessed, one can observe a different progression of peaks. In this case, the electrons with highest kinetic energy arise from transitions leading to the lowest-energy state of the daughter as Figure 6.28 illustrates. In that figure, the lower energy surface belongs to the parent and the upper curve to the daughter. An example of experimental photodetachment data is provided in Figure 6.29 showing the intensity of electrons detected when $Cu_2^-$ anion loses an electron vs. the kinetic energy of the ejected electrons. The peak at a kinetic energy of ca. 1.54 eV, corresponding to a binding energy of 1.0 eV, arises from $Cu_2^-$ in $v=0$ losing an electron to produce $Cu_2$ in $v=0$. The most intense peak corresponds to a $v=0$ to $v=4$ transition. As in the visible-UV spectroscopy case, Franck-Condon factors involving the overlap of the $Cu_2^-$ anion and $Cu_2$ neutral vibrational wave functions govern the relative intensities of the PES peaks. Another example is given in Figure 6.30 where the photodetachment spectrum of $H_2C=C^-$ (the anion of the carbene vinylidene) appears.
In this spectrum, the peaks having electron binding energies near 0.5 eV correspond to transitions in which ground-state $H_2C=C^-$ in $v=0$ is detached to produce ground-state ($^1A_1$) $H_2C=C$ in various v levels. The spacings between this group of peaks relate to the spacings in vibrational states of this $^1A_1$ electronic state. The series of peaks with binding energies near 2.5 eV corresponds to transitions in which $H_2C=C^-$ is detached to produce $H_2C=C$ in its $^3B_2$ excited electronic state. The spacings between peaks in this range relate to spacings in vibrational states of this $^3B_2$ state. The spacing between the peaks near 0.5 eV and those near 2.5 eV relates to the energy difference between the $^3B_2$ and $^1A_1$ electronic states of the neutral $H_2C=C$. Because PES offers a direct way to measure energy differences between anion and neutral or neutral and cation state energies, it is a powerful and widely used means of determining molecular electron affinities (EAs) and ionization potentials (IPs). Because IPs and EAs relate, via Koopmans’ theorem, to orbital energies, PES is thus seen to be a way to measure orbital energies. Its vibrational envelopes also offer a good way to probe vibrational energy level spacings, and hence the bonding strengths. Probing Continuum Orbitals There is another type of spectroscopy that can be used to directly probe the orbitals of a molecule that lie in the continuum (i.e., at energies higher than that of the parent neutral). I ask that you reflect back on our discussion in Chapter 2 of tunneling and of resonance states that can occur when an electron experiences both attractive and repulsive potentials. In such cases, there exists a special energy at which the electron can be trapped by the attractive potential and have to tunnel through the repulsive barrier to eventually escape. It is these kinds of situations that this spectroscopy probes. This experiment is called electron-transmission spectroscopy (ETS). In such an experiment, a beam of electrons having a known intensity $I_0$ and narrowly defined range of kinetic energies $E$ is allowed to pass through a sample (usually gaseous) of thickness $L$. The intensity $I$ of electrons observed to pass through the sample and arrive at a detector lying along the incident beam’s direction is monitored, as are the kinetic energies of these electrons $E'$. Such an experiment is described in qualitative form in Figure 6.31. If the molecules in the sample have a resonance orbital whose energy is close to the kinetic energy $E$ of the colliding electrons, it is possible for an electron from the beam to be captured into such an orbital and to exist in this orbital for a considerable time. Of course, in the absence of any collisions or other processes to carry away excess energy, this anion will re-emit an electron at a later time. Hence, such anions are called metastable and their electronic states are called resonance states.
If the captured electron remains in this orbital for a length of time comparable to or longer than the time it takes for the nascent molecular anion to undergo vibrational or rotational motion, various events can take place before the electron is re-emitted: i. some bond lengths or angles can change (this will happen if the orbital occupied by the beam’s electron has bonding or antibonding character) so, when the electron is subsequently emitted, the neutral molecule is left with a change in vibrational energy; ii. the molecule may rotate, so when the electron is ejected, it is not emitted in the same direction as the incident beam. In the former case, one observes electrons emitted with energies $E'$ that differ from that of the incident beam by amounts related to the internal vibrational energy levels of the anion. In the latter, one sees a reduction in the intensity of the beam that is transmitted directly through the sample and electrons that are scattered away from this direction. Such an ETS spectrum is shown in Figure 6.32 for a gaseous sample of $CO_2$ molecules. In this spectrum, the energy of the transmitted beam’s electrons is plotted on the horizontal axis and the derivative of the intensity of the transmitted beam is plotted on the vertical axis. It is common to plot such derivatives in ETS-type experiments to allow the variation of the signal with energy to be more clearly identified. In this ETS spectrum of $CO_2$, the oscillations that appear within the major spectral feature displayed (whose center is near 3.8 eV) correspond to stretching and bending vibrational levels of the metastable $CO_2^-$ anion. It is the bending vibration that is primarily excited because the beam electron enters the LUMO of $CO_2$, which is an orbital of the form shown in Figure 6.33. Occupancy of this antibonding $\pi^*$ orbital causes both C-O bonds to lengthen and the O-C-O angle to bend away from 180 degrees. The bending allows the antibonding nature of this orbital to be reduced. Other examples of ETS spectra are shown in Figure 6.34. Here, again a derivative spectrum is shown, and the vertical lines have been added to show where the derivative passes through zero, which is where the ETS absorption signal would have a peak. These maxima correspond to electrons entering various virtual $\pi^*$ orbitals of the uracil and DNA base molecules. It is by finding these peaks in the ETS spectrum that one can determine the energies of such continuum orbitals. Before closing this section, it is important to describe how one uses theory to simulate the metastable states that arise in such ETS experiments. Such calculations are not at all straightforward, and require the introduction of special tools designed to properly model the resonant continuum orbital. For metastable anions, it is difficult to approximate the potential experienced by the excess electron. For example, singly charged anions in which the excess electron occupies a molecular orbital $\phi$ that possesses non-zero angular momentum have effective potentials as shown in Figure 6.35, which depend on the angular momentum $L$ value of the orbital. For example, the $\pi^*$ orbital of $N_2^-$ shown in Figure 6.36 produces two counteracting contributions to the effective radial potential $V_{\rm eff}(r)$ experienced by an electron occupying it. First, the two nitrogen centers exert attractive potentials on the electron in this orbital.
These attractions are strongest when the excess electron is near the nuclei but decay rapidly at larger distances because the other electrons’ Coulomb repulsions screen the nuclear attractions. Secondly, because the $\pi^*$ molecular orbital is comprised of atomic basis functions of $p_{\pi}$, $d_{\pi}$, etc. symmetry, it possesses non-zero angular momentum. Because the $\pi^*$ orbital has gerade symmetry, its large-r character is dominated by $L = 2$ angular momentum. As a result, the excess electron has a centrifugal radial potential $\hbar^2L(L+1)/2m_er^2$ derived largely from its $L = 2$ character. The attractive short-range valence potentials $V(r)$ and the centrifugal potential combine to produce a net effective potential as illustrated in Figure 6.35. The energy of an electron experiencing such a potential may or may not lie below the $r \rightarrow \infty$ asymptote. If the attractive potential is sufficiently strong, as it is for $O_2^-$, the electron in the $\pi^*$ orbital will be bound and its energy will lie below this asymptote. On the other hand, if the attractive potential is not as strong, as is the case for the less-electronegative nitrogen atoms in $N_2^-$, the energy of the $\pi^*$ orbital can lie above the asymptote. In the latter cases, we speak of metastable shape-resonance states. They are metastable because their energies lie above the asymptote so they can decay by tunneling through the centrifugal barrier. They are called shape-resonances because their metastability arises from the shape of their repulsive centrifugal barrier. If one had in hand a reasonable approximation to the attractive short-range potential $V(r)$ and if one knew the L-symmetry of the orbital occupied by the excess electron, one could form $V_{\rm eff}(r)$ as above. However, to compute the lifetime of the shape resonance, one has to know the energy $E$ of this state. The most common and powerful tool for studying such metastable states theoretically is the stabilization method (SM) that Prof. Howard Taylor at USC pioneered. This method involves embedding the system of interest (e.g., the $N_2^-$ anion) within a finite radial box in order to convert the continuum of states corresponding, for example, to $N_2 + e^-$, into discrete states that can be handled using more conventional methods. By then varying the size of the box, one can vary the energies of the discrete states that correspond to $N_2 + e^-$ (i.e., one varies the kinetic energy $KE$ of the orbital containing the excess electron). As the box size is varied, one eventually notices (e.g., by plotting the orbitals) that one of the $N_2 + e^-$ states possesses a significant amount of valence (i.e., short-range) character. That is, one such state has significant amplitude not only at large-r but also in the region of the two nitrogen centers. It is this state that corresponds to the metastable shape-resonance state, and it is the energy $E$ where significant valence components develop that provides the stabilization estimate of the state energy. Let us continue using $N_2^-$ as an example for how the SM would be employed, especially how one usually varies the box within which the anion is constrained. One would use a conventional atomic orbital basis set that would likely include s and p functions on each $N$ atom, perhaps some polarization d functions and some conventional diffuse s and p orbitals on each $N$ atom.
These basis orbitals serve primarily to describe the motions of the electrons within the usual valence regions of space. To this basis, one would append extra sets of diffuse $\pi$-symmetry orbitals. These orbitals could be $p_\pi$ (and maybe $d_\pi$) functions centered on each nitrogen atom, or they could be $p_\pi$ (and maybe $d_\pi$) orbitals centered at the midpoint of the N-N bond. One usually would not add just one such function; rather, several such functions, each with an orbital exponent $\alpha_J$ that characterizes its radial extent, would be used. Let us assume, for example, that $K$ such $\pi$ functions have been used. Next, using the conventional atomic orbital basis as well as the $K$ extra $\pi$ basis functions, one carries out a calculation (most often a variational calculation in which one computes many energy levels) on the $N_2^-$ anion. In this calculation, one tabulates the energies of many (say M) of the electronic states of $N_2^-$. Of course, because a finite atomic orbital basis set must be used, one finds a discrete spectrum of orbital energies and thus of electronic state energies. There are occupied orbitals having negative energy that represent, via Koopmans' theorem, the bound states of $N_2^-$. There are also so-called virtual orbitals (i.e., those orbitals that are not occupied) whose energies lie above zero (i.e., that do not describe bound states). The latter orbitals offer a discrete approximation to the continuum within which the resonance state of interest lies. One then scales the orbital exponents {$\alpha_J$} of the $K$ extra $\pi$ basis orbitals by a factor $\eta$: $\alpha_J \rightarrow \eta \alpha_J$ and repeats the calculation of the energies of the M lowest energies of $N_2^-$. This scaling causes the extra $\pi$ basis orbitals to contract radially (if $\eta > 1$) or to expand radially (if $\eta < 1$). It is this basis orbital expansion and contraction that produces expansion and contraction of the box discussed above. That is, one does not employ a box directly; instead, one varies the radial extent of the most diffuse basis orbitals to simulate the box variation. If the conventional orbital basis is adequate, one finds that the extra $\pi$ orbitals, whose exponents are being scaled, do not appreciably affect the energy of the neutral $N_2$ molecule. This can be probed by plotting the $N_2$ energy as a function of the scaling parameter $\eta$; if the energy varies little with $\eta$, the conventional basis is adequate. In contrast to plots of the neutral $N_2$ energy vs. $\eta$, plots of the energies of the M $N_2^-$ states show significant $\eta$-dependence as Figure 6.37 illustrates. What does such a stabilization plot tell us and what do the various branches of the plot mean? First, one should notice that each of the plots of the energy of an anion state (relative to the neutral molecule’s energy, which is independent of $\eta$) grows with increasing $\eta$. This $\eta$-dependence arises from the $\eta$-scaling of the extra diffuse $\pi$ basis orbitals. Because most of the amplitude of such basis orbitals lies outside the valence region, the kinetic energy is the dominant contributor to such orbitals’ energy. Because $\eta$ enters into each orbital as $\exp(-\eta \alpha r^2)$, and because the kinetic energy operator involves the second derivative with respect to $r$, the kinetic energies of orbitals dominated by the diffuse $\pi$ basis functions vary as $\eta^2$.
For small $\eta$, all of the $\pi$ diffuse basis functions have their amplitudes concentrated at large r and have low kinetic energy. This is because, for small $\eta$, all of these orbitals are very diffuse and concentrate electron density at large distances. As $\eta$ grows, these functions become more radially compact and their kinetic energies grow. For example, note the three lowest energies shown above increasing from near zero as $\eta$ grows. As $\eta$ further increases, one reaches a point at which the third and fourth anion-state energies undergo an avoided crossing. At this $\eta$ value, if one examines the nature of the two wave functions whose energies avoid one another, one finds that one of them contains substantial amounts of both valence and extra-diffuse $\pi$ function character. Just to the left of the avoided crossing, the lower-energy state (the third state for small $\eta$) contains predominantly extra diffuse $\pi$ orbital character, while the higher-energy state (the fourth state) contains largely valence $\pi^*$ orbital character. However, at the special value of $\eta$ where these two states nearly cross, the kinetic energy of the third state (as well as its radial size and its de Broglie wavelength) is appropriate to connect properly with the fourth state. By "connect properly" we mean that the two states have wave function amplitudes, phases, and slopes that match. So, at this special $\eta$ value, one can achieve a description of the shape-resonance state that correctly describes this state both in the valence region and in the large-r region. Only by tuning the energy of the large-r states using the $\eta$ scaling can one obtain this proper boundary condition matching. In summary, by carrying out a series of anion-state energy calculations for several states and plotting them vs. $\eta$, one obtains a stabilization graph. By examining this graph and looking for avoided crossings, one can identify the energies at which metastable resonances occur. It is also possible to use the shapes (i.e., the magnitude of the energy splitting between the two states and the slopes of the two avoiding curves) of the avoided crossings in a stabilization graph to compute the lifetimes of the metastable states. Basically, the larger the avoided crossing energy splitting between the two states, the shorter is the lifetime of the resonance state. So, the ETS and PES experiments offer wonderful probes of the bound and continuum states of molecules and ions that tell us a lot about the electronic nature and chemical bonding of these species. The theoretical study of these phenomena is complicated by the need to properly identify and describe any continuum orbitals and states that are involved. The stabilization technique allows us to achieve a good approximation to resonance states that lie in such continua.
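As a closing illustration, here is a minimal one-dimensional caricature of such a stabilization scan, offered as a sketch rather than a production calculation. It replaces the $\eta$-scaled Gaussian basis by an explicit hard-wall box of radius $L$ enclosing an assumed model potential (an attractive well plus a repulsive barrier; the depths, widths, and units are arbitrary choices and are not meant to represent $N_2^-$).

```python
import numpy as np

# Toy 1-D stabilization scan: an attractive well (r < 2) plus a repulsive
# barrier (2 <= r < 3) supports a shape resonance near E ~ 0.3 in these
# model units (hbar^2/2m = 1/2).  A hard-wall box of radius L discretizes
# the continuum; scanning L plays the role of the eta-scaling of the
# diffuse basis functions described above.
def levels(L, n=400, nlev=5):
    r = np.linspace(0.0, L, n + 2)[1:-1]   # interior grid; psi = 0 at r = 0 and r = L
    h = r[1] - r[0]
    V = np.where(r < 2.0, -0.5, 0.0) + np.where((r >= 2.0) & (r < 3.0), 1.5, 0.0)
    # finite-difference kinetic energy plus the diagonal potential
    H = np.diag(V + 1.0 / h**2) - (0.5 / h**2) * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.eigvalsh(H)[:nlev]

for L in np.linspace(8.0, 24.0, 9):        # the "box" scan
    print(f"L = {L:5.1f}   E =", np.round(levels(L), 3))
```

In the printed table, most levels fall steadily (roughly as $1/L^2$) as the box grows, while one energy hovers near the resonance position; where a falling box level meets it, the two repel, which is precisely the avoided-crossing pattern described above.

Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis)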
Before moving on to discuss methods that go beyond the HF model, it is appropriate to examine some of the computational effort that goes into carrying out a HF SCF calculation on a molecule. The primary differences that appear when molecules rather than atoms are considered are 1. The electronic Hamiltonian $H_e$ contains not only one nuclear-attraction Coulomb potential $\sum_j Ze^2/r_j$, but a sum of such terms, one for each nucleus in the molecule: $\sum_a \sum_j \dfrac{Z_a e^2}{|r_j-R_a|}, \label{6.1.41}$ where the locations of the nuclei are denoted $R_a$. 2. One has AO basis functions of the type discussed above located on each nucleus of the molecule. These functions are still denoted $\chi_\mu(r-R_a)$, but their radial and angular dependences involve the distance and orientation of the electron relative to the particular nucleus on which the AO is located. Other than these two changes, performing a SCF calculation on a molecule (or molecular ion) proceeds just as in the atomic case detailed earlier. Let us briefly review how this iterative process occurs. Once atomic basis sets have been chosen for each atom, the one- and two-electron integrals appearing in the $H_e$ and overlap matrices must be evaluated. There are numerous highly efficient computer codes that allow such integrals to be computed for $s$, $p$, $d$, $f$, and even $g$, $h$, and $i$ basis functions. After executing one of these so-called integral packages for a basis with a total of $M$ functions, one has available (usually on the computer's hard disk) of the order of $\dfrac{M^2}{2}$ one-electron ($\langle \chi_\mu | H_e | \chi_\nu \rangle$ and $\langle \chi_\mu | \chi_\nu \rangle$) and $\dfrac{M^4}{8}$ two-electron ($\langle \chi_\mu \chi_\delta | \chi_\nu \chi_\kappa \rangle$) integrals. When treating extremely large atomic orbital basis sets (e.g., 500 or more basis functions), modern computer programs calculate the requisite integrals, but never store them on the disk. Instead, their contributions to the $\langle\chi_\mu |H_e|\chi_\nu\rangle$ matrix elements are accumulated on the fly, after which the integrals are discarded. This is usually referred to as the direct integral-driven approach. Shapes, Sizes, and Energies of Orbitals Each molecular spin-orbital (MO) that results from solving the HF SCF equations for a molecule or molecular ion consists of a sum of components involving all of the basis AOs: $\phi_j = \sum_\mu C_{j,\mu} \chi_\mu.\label{6.1.42}$ In this expression, the $C_{j,\mu}$ are referred to as LCAO-MO coefficients because they tell us how to linearly combine AOs to form the MOs. Because the AOs have various angular shapes (e.g., $s$, $p$, or $d$ shapes) and radial extents (i.e., different orbital exponents), the MOs constructed from them can be of different shapes and radial sizes. Let’s look at a few examples to see what I mean. The first example is rather simple and pertains to two H atoms combining to form the $H_2$ molecule. The valence AOs on each H atom are the $1s$ AOs; they combine to form the two valence MOs ($\sigma$ and $\sigma^*$) depicted in Figure 6.1.4. The bonding MO labeled $\sigma$ has LCAO-MO coefficients of equal sign for the two $1s$ AOs, as a result of which this MO has the same sign near the left H nucleus (A) as near the right H nucleus (B). In contrast, the antibonding MO labeled $\sigma^*$ has LCAO-MO coefficients of different sign for the A and B $1s$ AOs.
As was the case in the Hückel or tight-binding model outlined in Chapter 2, the energy splitting between the two MOs depends on the overlap $\langle \chi_{1sA}|\chi_{1sB} \rangle$ between the two AOs which, in turn, depends on the distance $R$ between the two nuclei. An analogous pair of bonding and antibonding MOs arises when two $p$ orbitals overlap sideways as in ethylene to form $\pi$ and $\pi^*$ MOs which are illustrated in Figure 6.1.5. The shapes of these MOs clearly are dictated by the shapes of the AOs that comprise them and the relative signs of the LCAO-MO coefficients that relate the MOs to AOs. For the $\pi$ MO, these coefficients have the same sign on the left and right atoms; for the $\pi^*$ MO, they have opposite signs. I should stress that the signs and magnitudes of the LCAO-MO coefficients arise as eigenvectors of the HF SCF matrix eigenvalue equation: $\sum_\mu \langle \chi_\nu|h_e| \chi_\mu \rangle C_{j,\mu} = \epsilon_j \sum_\mu \langle \chi_\nu|\chi_\mu \rangle C_{j,\mu}.$ It is a characteristic of such eigenvalue problems for the lower energy eigenfunctions to have fewer nodes than the higher energy solutions, as we learned from several examples that we solved in Part 1 of this text. Another thing to note about the MOs shown above is that they will differ in their quantitative details, but not in their overall shapes, when various functional groups are attached to the ethylene molecule’s C atoms. For example, if electron-withdrawing groups such as Cl, OH or Br are attached to one of the C atoms, the attractive potential experienced by a $\pi$ electron near that C atom will be enhanced relative to the potential near the other C atom. As a result, the bonding MO will have larger LCAO-MO coefficients $C_{j,\mu}$ belonging to tighter basis AOs $\chi_\mu$ on this C atom. This will make the bonding $\pi$ MO more radially compact in this region of space, although its nodal character and gross shape will not change. Alternatively, an electron donating group such as $H_3C-$ or t-butyl attached to one of the C centers will cause the $\pi$ MO to be more diffuse (by making its LCAO-MO coefficients for more diffuse basis AOs larger). In addition to MOs formed primarily of AOs of one type (i.e., for $H_2$ it is primarily s-type orbitals that form the $\sigma$ and $\sigma^*$ MOs; for ethylene’s $\pi$ bond, it is primarily the C $2p$ AOs that contribute), there are bonding and antibonding MOs formed by combining several AOs. For example, the four equivalent C-H bonding MOs in $CH_4$ shown in Figure 6.1.6 each involve C $2s$ and $2p$ as well as H $1s$ basis AOs. The energies of the MOs depend on two primary factors: the energies of the AOs from which the MOs are constructed and the overlap between these AOs. The pattern in energies for valence MOs formed by combining pairs of first-row atoms to form homo-nuclear diatomic molecules is shown in Figure 6.1.7. In this figure, the core MOs formed from the $1s$ AOs are not shown; only those MOs formed from $2s$ and $2p$ AOs appear. The clear trend toward lower orbital energies as one moves from left to right is due primarily to the trends in orbital energies of the constituent AOs. That is, F, being more electronegative than $N$, has a lower-energy $2p$ orbital than does $N$. Bonding, Anti-bonding, Non-bonding, and Rydberg Orbitals As noted above, when valence AOs combine to form MOs, the relative signs of the combination coefficients determine, along with the AO overlap magnitudes, the MO’s energy and nodal properties.
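To make the sign-and-overlap story concrete, here is a small numerical sketch of the two-AO generalized eigenvalue problem written above. The parameter values ($\alpha$, the overlap values $s$, and the Wolfsberg-Helmholz-style proportionality $\beta = K s \alpha$ used to tie the coupling to the overlap) are illustrative assumptions, not computed integrals.

```python
import numpy as np
from scipy.linalg import eigh

# Two-AO model of the sigma/sigma* pair: solve H C = eps S C, where
# alpha = <1sA|h|1sA> and the coupling beta = <1sA|h|1sB> is tied to the
# overlap s via a Wolfsberg-Helmholz-style choice beta = K*s*alpha.
alpha, K = -1.0, 1.75          # arbitrary energy units
for s in (0.2, 0.4, 0.6):      # overlap <1sA|1sB> at three internuclear distances
    beta = K * s * alpha
    H = np.array([[alpha, beta], [beta, alpha]])
    S = np.array([[1.0, s], [s, 1.0]])
    eps, C = eigh(H, S)        # eps[0] = bonding, eps[1] = antibonding
    print(f"s = {s:.1f}:  eps = {np.round(eps, 3)},  bonding C = {np.round(C[:, 0], 2)}")
```

The analytic energies are $(\alpha+\beta)/(1+s)$ for the bonding MO and $(\alpha-\beta)/(1-s)$ for the antibonding MO: the splitting grows with the overlap, the antibonding level is destabilized more than the bonding level is stabilized, and the bonding eigenvector has equal-sign coefficients on the two atoms while the antibonding one has opposite signs.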
In addition to the bonding and antibonding MOs discussed and illustrated earlier, two other kinds of MOs are important to know about. Non-bonding MOs arise, for example, when an orbital on one atom is not directed toward and overlapping with an orbital on a neighboring atom. For example, the lone pair orbitals on $H_2O$ or on the oxygen atom of $H_2C=O$ are non-bonding orbitals. They still are described in the LCAO-MO manner, but their $C_{j,\mu}$ coefficients do not contain dominant contributions from more than one atomic center. Finally, there is a type of orbital that all molecules possess but that is ignored in most elementary discussions of electronic structure. All molecules have so-called Rydberg orbitals. These orbitals can be thought of as large diffuse orbitals that describe the regions of space an electron would occupy if it were in the presence of the corresponding closed-shell molecular cation. Two examples of such Rydberg orbitals are shown in Figure 6.1.8. On the left, we see the Rydberg orbital of $NH_4$ and on the right, that of $H_3N-CH_3$. The former species can be thought of as a closed-shell ammonium cation $NH_4^+$ around which a Rydberg orbital resides. The latter is protonated methyl amine with its Rydberg orbital.
Learning Objectives In this Chapter, you will be introduced to many of the main concepts and methods of statistical mechanics. You will be familiar with the following topics: 1. Microcanonical, canonical, and grand canonical ensembles and their partition functions. 2. Ensemble averages being equal to long-time averages; the equal a priori postulate. 3. Fluctuations. 4. Expressions for thermodynamic properties in terms of partition functions. 5. Monte Carlo methods including Metropolis sampling and umbrella sampling. 6. Molecular dynamics simulations, including molecular mechanics force fields. 7. Coarse graining methods. 8. Time correlation functions. 9. Einstein and Debye models for solids’ phonons. 10. Lattice theories of adsorption, liquids, and phase transitions. 11. Virial expansions of thermodynamic properties. When one is faced with a system containing many molecules at or near thermal equilibrium, it is not necessary or even wise to try to describe it in terms of quantum wave functions or even classical trajectories following the positions and momenta of all of the constituent particles. Instead, the powerful tools of statistical mechanics allow one to focus on quantities that describe the many-molecule system in terms of the behavior it displays most of the time. In this Chapter, you will learn about these tools and see some important examples of their application. 07: Statistical Mechanics As introduced in Chapter 5, the approach one takes in studying a system composed of a very large number of molecules at or near thermal equilibrium can be quite different from how one studies systems containing a few isolated molecules. In principle, it is possible to conceive of computing the quantum energy levels and wave functions of a collection of many molecules (e.g., ten $Na^+$ ions, ten $Cl^-$ ions and 550 $H_2O$ molecules in a volume chosen to simulate a concentration of 1 molar $NaCl_{(aq)}$), but doing so becomes impractical once the number of atoms in the system reaches a few thousand or if the molecules have significant intermolecular interactions as they do in condensed-phase systems. Also, as noted in Chapter 5, following the time evolution of such a large number of molecules can be confusing if one focuses on the short-time behavior of any single molecule (e.g., one sees jerky changes in its energy, momentum, and angular momentum). By examining, instead, the long-time average behavior of each molecule or, alternatively, the average properties of a significantly large number of molecules, one is often better able to understand, interpret, and simulate such condensed-media systems. Moreover, most experiments do not probe such short-time dynamical properties of single molecules; instead, their signals report on the behavior of many molecules lying within the range of their detection device (e.g., laser beam, STM tip, or electrode). It is when one wants to describe the behavior of collections of molecules under such conditions that the power of statistical mechanics comes into play. Distribution of Energy Among Levels One of the most important concepts of statistical mechanics involves how a specified total amount of energy $E$ can be shared among a collection of molecules and within the internal (rotational, vibrational, electronic) and intermolecular (translational) degrees of freedom of these molecules when the molecules have a means for sharing or redistributing this energy (e.g., by collisions).
The primary outcome of asking what is the most probable distribution of energy among a large number $N$ of molecules within a container of volume $V$ that is maintained in equilibrium by such energy-sharing at a specified temperature $T$ is the most important equation in statistical mechanics, the Boltzmann population formula: $P_j = \dfrac{\Omega_j \exp(- E_j /kT)}{Q}.$ This equation expresses the probability $P_j$ of finding the system (which, in the case introduced above, is the whole collection of $N$ interacting molecules) in its $j^{th}$ quantum state, where $E_j$ is the energy of this quantum state, $T$ is the temperature in K, $\Omega_j$ is the degeneracy of the $j^{th}$ state, and the denominator $Q$ is the so-called partition function: $Q = \sum_j \Omega_j \exp\bigg(- \dfrac{E_j}{kT}\bigg).$ The classical mechanical equivalent of the above quantum Boltzmann population formula for a system with a total of $M$ coordinates (collectively denoted $q$; they would be the internal and intermolecular coordinates of the $N$ molecules in the system) and $M$ momenta (denoted $p$) is: $P(q,p) = \dfrac{ h^{-M}\exp \bigg(- \dfrac{H(q, p)}{kT}\bigg)}{Q},$ where $H$ is the classical Hamiltonian, $h$ is Planck's constant, and the classical partition function $Q$ is $Q = h^{-M} \int \exp \bigg(- \dfrac{H(q, p)}{kT}\bigg) dq \;dp .$ This probability density expression, which must integrate to unity, contains the factor of $h^{-M}$ because, as we saw in Chapter 1 when we learned about classical action, the integral of a coordinate-momentum product has units of Planck’s constant. Notice that the Boltzmann formula does not say that only those states of one particular energy can be populated; it gives non-zero probabilities for populating all states from the lowest to the highest. However, it does say that states of higher energy $E_j$ are disfavored by the $\exp (- E_j /kT)$ factor, but, if states of higher energy have larger degeneracies $\Omega_j$ (which they usually do), the overall population of such states may not be low. That is, there is a competition between state degeneracy $\Omega_j$, which tends to grow as the state's energy grows, and $\exp (-E_j /kT)$ which decreases with increasing energy. If the number of particles $N$ is huge, the degeneracy $\Omega$ grows as a high power (let’s denote this power as $K$) of $E$ because the degeneracy is related to the number of ways the energy can be distributed among the $N$ molecules. In fact, $K$ grows at least as fast as $N$. As a result of $\Omega$ growing as $E^K$, the product function $P(E) = E^K \exp(-E/kT)$ has the form shown in Fig. 7.1 (for $K=10$, for illustrative purposes). By taking the derivative of this function $P(E)$ with respect to $E$, and finding the energy at which this derivative vanishes, one can show that this probability function has a peak at $E^* = K kT$, and that at this energy value, $P(E^*) = (KkT)^K \exp(-K).$ By then asking at what energy $E'$ the function $P(E)$ drops to $\exp(-1)$ of this maximum value $P(E^*)$: $P(E') = \exp(-1) P(E^*),$ one finds $E' = K kT \bigg(1+ \sqrt{\dfrac{2}{K}} \bigg).$ So the width of the $P(E)$ graph, measured as the change in energy needed to cause $P(E)$ to drop to $\exp(-1)$ of its maximum value divided by the value of the energy at which $P(E)$ assumes this maximum value, is $\dfrac{E'-E^*}{E^*} = \sqrt{\dfrac{2}{K}}.$ This width gets smaller and smaller as $K$ increases.
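These claims are easy to verify numerically. The sketch below (with $kT$ set to 1 and a few illustrative values of $K$) locates the maximum of $P(E) = E^K e^{-E/kT}$ and the energy at which $P$ falls to $e^{-1}$ of that maximum:

```python
import numpy as np

# Numerical check: the peak of P(E) = E^K exp(-E) (kT = 1) sits at E* = K,
# and the relative width (E' - E*)/E* approaches sqrt(2/K) as K grows.
for K in (10, 100, 10000):
    E = np.linspace(1e-6, 3.0 * K, 200001)
    lnP = K * np.log(E) - E                 # work with ln P to avoid overflow
    Estar = E[np.argmax(lnP)]
    kept = E[lnP >= lnP.max() - 1.0]        # region where P >= P(E*)/e
    width = (kept[-1] - Estar) / Estar
    print(f"K={K:6d}:  E* = {Estar:9.1f}   (E'-E*)/E* = {width:.4f}   sqrt(2/K) = {np.sqrt(2/K):.4f}")
```

For small $K$ the measured width slightly exceeds $\sqrt{2/K}$ (the quadratic expansion used in the derivation is only approximate there), but the agreement tightens rapidly as $K$ grows, illustrating the sharpening of the distribution.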
The primary conclusion is that as the number $N$ of molecules in the sample grows, which, as discussed earlier, causes $K$ to grow, the energy probability function becomes more and more sharply peaked about the most probable energy $E^*$. This, in turn, suggests that we may be able to model, aside from infrequent fluctuations which we may also find a way to take account of, the behavior of systems with many molecules by focusing on the most probable situation (i.e., those having the energy $E^*$) and ignoring or making small corrections for deviations from this case. It is for the reasons just shown that for macroscopic systems near equilibrium, in which $N$ (and hence $K$) is extremely large (e.g., $N$ ~ $10^{10}$ to $10^{24}$), only the most probable distribution of the total energy among the $N$ molecules need be considered. This is the situation in which the equations of statistical mechanics are so useful. Certainly, there are fluctuations (as evidenced by the finite width of the above graph) in the energy content of the $N$-molecule system about its most probable value. However, these fluctuations become less and less important as the system size (i.e., $N$) becomes larger and larger. Basis of the Boltzmann Population Formula To understand how this narrow Boltzmann distribution of energies arises when the number of molecules $N$ in the sample is large, we consider a system composed of $M$ identical containers, each having volume $V$, and each made of a material that allows for efficient heat transfer to its surroundings (e.g., through collisions of the molecules inside the volume with the walls of the container) but that does not allow any of the $N$ molecules in each container to escape. These containers are arranged into a regular lattice as shown in Figure 7.2 in a manner that allows their thermally conducting walls to come into contact. Finally, the entire collection of $M$ such containers is surrounded by a perfectly insulating material that assures that the total energy (of all $N \times M$ molecules) cannot change. So, this collection of $M$ identical containers each containing $N$ molecules constitutes a closed (i.e., with no molecules coming or going) and isolated (i.e., so total energy is constant) system. Equal a priori Probability Assumption One of the fundamental assumptions of statistical mechanics is that, for a closed isolated system at equilibrium, all quantum states of the system having energy equal to the energy $E$ with which the system is prepared are equally likely to be occupied. This is called the assumption of equal a priori probability for such energy-allowed quantum states. The quantum states relevant to this case are not the states of individual molecules, nor are they the states of $N$ of the molecules in one of the containers of volume $V$. They are the quantum states of the entire system comprised of $N\times M$ molecules. Because our system consists of $M$ identical containers, each with $N$ molecules in it, we can describe the quantum states of the entire system in terms of the quantum states of each such container. It may seem foolish to be discussing quantum states of the large system containing $N\times M$ molecules, given what I said earlier about the futility in trying to find such states. However, what I am doing at this stage is to carry out a derivation that is based upon such quantum states but whose final form and final working equations will not actually require one to know or even be able to have these states in hand.
Let’s pretend that we know the quantum states that pertain to $N$ molecules in a container of volume $V$ as shown in Figure 7.2, and let’s label these states by an index $J$. That is $J=1$ labels the lowest-energy state of $N$ molecules in the container of volume $V$, $J=2$ labels the second such state, and so on. As I said above, I understand it may seem daunting to think of how one actually finds these $N$-molecule eigenstates. However, we are just deriving a general framework that gives the probabilities of being in each such state. In so doing, we are allowed to pretend that we know these states. In any actual application, we will, of course, have to use approximate expressions for such energies. Assuming that the walls that divide the $M$ containers play no role except to allow for collisional (i.e., thermal) energy transfer among the containers, an energy-labeling for states of the entire collection of $M$ containers can be realized by giving the number of containers that exist in each single-container $J$-state. This is possible because, under the assumption about the role of the walls just stated, the energy of each $M$-container state is a sum of the energies of the $M$ single-container states that comprise that $M$-container state. For example, if $M= 9$, the label 1, 1, 2, 2, 1, 3, 4, 1, 2 specifies the energy of this 9-container state in terms of the energies {$\varepsilon_j$} of the states of the 9 containers: $E = 4\varepsilon_1 + 3\varepsilon_2 + \varepsilon_3 + \varepsilon_4$. Notice that this 9-container state has the same energy as several other 9-container states; for example, 1, 2, 1, 2, 1, 3, 4, 1, 2 and 4, 1, 3, 1, 2, 2, 1, 1, 2 have the same energy although they are different individual states. What differs among these distinct states is which box occupies which single-box quantum state. The above example illustrates that an energy level of the $M$-container system can have a high degree of degeneracy because its total energy can be achieved by having the various single-container states appear in various orders. That is, which container is in which state can be permuted without altering the total energy $E$. The formula for how many ways the $M$ container states can be permuted such that: 1. there are $n_J$ containers appearing in single-container state $J$, with 2. a total of $M$ containers, is $\Omega(n) = \dfrac{M!}{\prod_Jn_J!}.$ Here $n = \{n_1, n_2, n_3, \cdots n_J, \cdots \}$ denotes the numbers of containers existing in single-container states 1, 2, 3, … $J$, …. This combinatorial formula reflects the permutational degeneracy arising from placing $n_1$ containers into state 1, $n_2$ containers into state 2, etc. If we imagine an extremely large number of containers and we view $M$ as well as the {$n_J$} as being large numbers (n.b., we will soon see that this is the case at least for the most probable distribution that we will eventually focus on), we can ask: for what choices of the variables $\{n_1, n_2, n_3, \cdots n_J, \cdots \}$ is this degeneracy function $\Omega(n)$ a maximum? Moreover, we can examine $\Omega(n)$ at its maximum and compare its value at values of the {$n$} parameters changed only slightly from the values that maximized $\Omega(n)$. As we will see, $\Omega(n)$ is very strongly peaked at its maximum and decreases extremely rapidly for values of {$n$} that differ only slightly from the optimal values. It is this property that gives rise to the very narrow energy distribution discussed earlier in this Chapter.
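The permutational degeneracy formula is easy to evaluate for the 9-container example above. A minimal sketch (in Python; the populations are exactly those of the example state) is:

```python
from math import factorial

# Degeneracy of the 9-container example from the text: four containers in
# state 1, three in state 2, and one each in states 3 and 4.
n = {1: 4, 2: 3, 3: 1, 4: 1}
M = sum(n.values())                      # 9 containers in total

omega = factorial(M)
for n_J in n.values():
    omega //= factorial(n_J)
print(omega)                             # 2520 distinct 9-container states
```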
So, let’s take a closer look at how this energy distribution formula arises. We want to know what values of the variables $\{n_1, n_2, n_3, \cdots n_J, \cdots \}$ make $\Omega = M!/\prod_J n_J!$ a maximum. However, all of the $\{n_1, n_2, n_3, \cdots n_J, \cdots \}$ variables are not independent; they must add up to $M$, the total number of containers, so we have a constraint $\sum_J n_J = M$ that the variables must obey. The {$n_J$} variables are also constrained to give the total energy $E$ of the $M$-container system when summed as $\sum_J n_J\varepsilon_J = E.$ We have two problems: i. how to maximize $\Omega$ and ii. how to impose these constraints. Because $\Omega$ takes on values greater than unity for any choice of the {$n_J$}, $\Omega$ will experience its maximum where $\ln\Omega$ has its maximum, so we can equally well maximize $\ln \Omega$, which is easier to work with. Because the $n_J$ variables are assumed to take on large numbers (when $M$ is large), we can use Stirling’s approximation for the natural logarithm of the factorial of a large number: $\ln X! \approx X \ln X - X$ to approximate $\ln \Omega$ as follows: $\ln \Omega \approx \ln M! - \sum_J (n_J \ln n_J - n_J).$ This expression will prove useful because we can take its derivative with respect to the $n_J$ variables, which we need to do to search for the maximum of $\ln \Omega$. To impose the constraints $\sum_J n_J = M$ and $\sum_J n_J \varepsilon_J = E$ we use the technique of Lagrange multipliers. That is, we seek to find values of {$n_J$} that maximize the following function: $F = \ln M! - \sum_J (n_J \ln n_J - n_J) - \alpha\Big(\sum_Jn_J - M\Big) - \beta\Big(\sum_J n_J \varepsilon_J - E\Big).$ Notice that this function $F$ is exactly equal to the $\ln\Omega$ function we wish to maximize whenever the {$n_J$} variables obey the two constraints. So, the maxima of $F$ and of $\ln\Omega$ are identical if the {$n_J$} have values that obey the constraints. The two Lagrange multipliers $\alpha$ and $\beta$ are introduced to allow the values of {$n_J$} that maximize $F$ to ultimately obey the two constraints. That is, we first find values of the {$n_J$} variables that make $F$ maximum; these values will depend on $\alpha$ and $\beta$ and will not necessarily obey the constraints. However, we will then choose $\alpha$ and $\beta$ to assure that the two constraints are obeyed. This is how the Lagrange multiplier method works. Lagrange Multiplier Method Taking the derivative of $F$ with respect to each independent $n_K$ variable and setting this derivative equal to zero gives: $- \ln n_K - \alpha - \beta \varepsilon_K = 0.$ This equation can be solved to give $n_K = \exp(- \alpha) \exp(- \beta \varepsilon_K)$. Substituting this result into the first constraint equation gives $M = \exp(- \alpha) \sum_J \exp(- \beta \varepsilon_J)$, which allows us to solve for $\exp(- \alpha)$ in terms of $M$. Doing so, and substituting the result into the expression for $n_K$ gives: $n_K = M\dfrac{\exp(- \beta \varepsilon_K)}{Q}$ where $Q = \sum_J \exp(- \beta \varepsilon_J).$ Notice that the $n_K$ are, as we assumed earlier, large numbers if $M$ is large because $n_K$ is proportional to $M$. Notice also that we now see the appearance of the partition function $Q$ and of exponential dependence on the energy of the state that gives the Boltzmann population of that state.
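As a numerical sanity check on this result, the sketch below (assuming Python with NumPy and SciPy; the level energies, $\beta$, and $M$ are illustrative choices) builds $n_K = M\exp(-\beta\varepsilon_K)/Q$ for three equally spaced levels and verifies that a transfer of containers that preserves both constraints lowers $\ln\Omega$:

```python
import numpy as np
from scipy.special import gammaln

# Check that n_K = M exp(-beta*eps_K)/Q maximizes
# ln(Omega) = ln(M!) - sum_J ln(n_J!) under fixed M and E. Equally spaced
# levels are used so that moving containers from the middle level into the
# two outer levels (one each) preserves both constraints.
eps = np.array([0.0, 1.0, 2.0])           # illustrative level energies
beta, M = 1.0, 10**6

Q = np.sum(np.exp(-beta * eps))
n_star = M * np.exp(-beta * eps) / Q      # most probable distribution

def ln_omega(n):
    # gammaln(n+1) = ln(n!) extended to non-integer n
    return gammaln(M + 1) - np.sum(gammaln(n + 1))

# Transfer delta containers: middle level -> outer levels (M and E fixed).
delta = 1000.0
n_pert = n_star + np.array([delta, -2 * delta, delta])

print(ln_omega(n_star) - ln_omega(n_pert))   # positive: n* is the maximum
```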
It is possible to relate the $\beta$ Lagrange multiplier to the total energy $E$ of the $M$ containers by summing the number of containers in the $K^{\rm th}$ quantum state $n_K$ multiplied by the energy of that quantum state $\varepsilon_K$ \begin{align*} E &= \sum_K n_K \varepsilon_K \\[4pt] &= M\sum_K \dfrac{\varepsilon_K\exp(- \beta \varepsilon_K)}{Q} \\[4pt] &= - M\left(\frac{∂\ln Q}{∂\beta} \right)_{N,V}. \end{align*} This shows that the average energy of a container, computed as the total energy $E$ divided by the number $M$ of such containers, can be computed as a derivative of the logarithm of the partition function $Q$. As we show in the following Section of this Chapter, all thermodynamic properties of the $N$ molecules in the container of volume $V$ can be obtained as derivatives of the natural logarithm of this $Q$ function. This is why the partition function plays such a central role in statistical mechanics. To examine the range of energies over which each of the $M$ single-container systems varies with appreciable probability, let us consider not just the degeneracy $\Omega(n^*)$ of that set of variables $\{n^*\} = \{n^*_1, n^*_2, \cdots \}$ that makes $\Omega$ maximum, but also the degeneracy $\Omega(n)$ for values of $\{n_1, n_2, \cdots\}$ differing by small amounts {$\delta n_1, \delta n_2, \cdots$} from the optimal values {$n^*$}. Expanding $\ln \Omega$ as a Taylor series in the parameters $\{n_1, n_2, \cdots\}$ and evaluating the expansion in the neighborhood of the values {$n^*$}, we find: $\ln \Omega = \ln \Omega({n^*_1, n^*_2, \cdots}) + \sum_J \left(\frac{∂\ln\Omega}{∂n_J}\right) \delta n_J + \frac{1}{2} \sum_{J,K} \left(\frac{∂^2\ln\Omega}{∂n_J∂n_K}\right) \delta n_J \delta n_K + \cdots$ We know that all of the first derivative terms ($\dfrac{∂\ln\Omega}{∂n_J}$) vanish because $\ln\Omega$ has been made maximum at {$n^*$}; more precisely, since $-\ln n_J^* = \alpha + \beta\varepsilon_J$, the linear terms sum to zero for any variations {$\delta n_J$} that preserve the two constraints. To evaluate the second derivative terms, we first note that the first derivative of $\ln\Omega$ is $\left(\frac{∂\ln\Omega}{∂n_J}\right) = \frac{∂(\ln M! - \sum_J (n_J \ln n_J - n_J))}{∂n_J} = -\ln(n_J).$ So the second derivatives needed to complete the Taylor series through second order are: $\left(\frac{∂^2\ln\Omega}{∂n_J∂n_K}\right) = - \frac{\delta_{J,K}}{n_J}.$ Using this result, we can expand $\Omega(n)$ in the neighborhood of {$n^*$} in powers of $\delta n_J = n_J-n_J^*$ as follows: $\ln \Omega(n) = \ln \Omega(n^*) - \frac{1}{2} \sum_J \frac{(\delta n_J)^2}{n_J^*},$ or, equivalently, $\Omega(n) = \Omega(n^*) \exp\left[-\frac{1}{2}\sum_J \frac{(\delta n_J)^2}{n_J^*}\right].$ This result clearly shows that the degeneracy, and hence, by the equal a priori probability hypothesis, the probability of the $M$-container system occupying a state having {$n_1, n_2, \cdots$}, falls off exponentially as the variables $n_J$ move away from their most-probable values {$n^*$}. Thermodynamic Limit As we noted earlier, the $n_J^*$ are proportional to $M$ (i.e., $n_J^* = \dfrac{M\exp(-\beta\varepsilon_J)}{Q} = f_J M$), so when considering deviations $\delta n_J$ away from the optimal $n_J^*$, we should consider deviations that are also proportional to $M$: $\delta n_J = M \delta f_J$. In this way, we are treating deviations of specified percentage or fractional amount, which we denote $\delta f_J$.
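A quick finite-difference check of $E = -M(∂\ln Q/∂\beta)$ can be done as follows (a minimal Python sketch; the level energies, $\beta$, and $M$ are arbitrary illustrative choices):

```python
import numpy as np

# Finite-difference check of E = -M (d ln Q / d beta) for a toy set of
# single-container levels.
eps = np.array([0.0, 0.5, 1.3, 2.1])
beta, M, h = 1.0, 100, 1e-6

def lnQ(b):
    return np.log(np.sum(np.exp(-b * eps)))

# Direct sum E = sum_K n_K eps_K with n_K = M exp(-beta eps_K)/Q
n = M * np.exp(-beta * eps) / np.sum(np.exp(-beta * eps))
E_direct = np.sum(n * eps)

E_deriv = -M * (lnQ(beta + h) - lnQ(beta - h)) / (2 * h)
print(E_direct, E_deriv)    # the two values agree closely
```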
Thus, the ratio $\dfrac{(\delta n_J)^2}{n_J^*}$ that appears in the above exponential has an $M$-dependence that allows $\Omega(n)$ to be written as: $\Omega(n) = \Omega(n^*) \exp\left[-\dfrac{M}{2}\sum_J \dfrac{(\delta f_J)^2}{f_J^*}\right],$ where $f_J^*$ and $\delta f_J$ are the fraction and fractional deviation of containers in state $J$: $f_J^* = \dfrac{n_J^*}{M}$ and $\delta f_J = \dfrac{\delta n_J}{M}$. The purpose of writing $\Omega(n)$ in this manner is to explicitly show that, in the so-called thermodynamic limit, when $M$ approaches infinity, only the most probable distribution of energy {$n^*$} needs to be considered because only {$\delta f_J=0$} is important as $M$ approaches infinity. Fluctuations Let’s consider this very narrow distribution issue a bit further by examining fluctuations in the energy of a single container around its average energy $E_{\rm ave} = \dfrac{E}{M}$. We already know that the number of containers in a given state $K$ can be written as $n_K = \dfrac{M\exp(- \beta \varepsilon_K)}{Q}$. Alternatively, we can say that the probability of a container occupying the state $J$ is: $p_J = \dfrac{\exp(- \beta \varepsilon_J)}{Q}.$ Using this probability, we can compute the average energy $E_{\rm ave}$ as: $E_{\rm ave} = \sum_J p_J \varepsilon_J = \dfrac{\sum_J \varepsilon_J \exp(- \beta \varepsilon_J)}{Q} = - \left(\dfrac{∂\ln Q}{∂\beta} \right)_{N,V}.$ To compute the fluctuation in energy, we first note that the fluctuation is defined as the average of the square of the deviation in energy from the average: $(E-E_{\rm ave})^2_{\rm ave} = \sum_J (\varepsilon_J - E_{\rm ave})^2 p_J = \sum_J p_J (\varepsilon_J^2 - 2\varepsilon_J E_{\rm ave} +E_{\rm ave}^2) = \sum_J p_J(\varepsilon_J^2 - E_{\rm ave}^2).$ The following identity is now useful for further re-expressing the fluctuations: $\left(\dfrac{∂^2\ln Q}{∂\beta^2}\right)_{N,V} = \dfrac{ ∂\left( -\sum_J\dfrac{\varepsilon_J \exp(-\beta\varepsilon_J)}{Q} \right) }{∂\beta} = \sum_J \dfrac{\varepsilon_J^2\exp(-\beta\varepsilon_J)}{Q} - \left(\sum_J \dfrac{\varepsilon_J\exp(-\beta\varepsilon_J)}{Q}\right) \left(\sum_L \dfrac{\varepsilon_L\exp(-\beta\varepsilon_L)}{Q}\right)$ Recognizing the first factor immediately above as $\sum_J \varepsilon_J^2 p_J$, and the second factor as $- E_{\rm ave}^2$, and noting that $\sum_J p_J = 1$, allows the fluctuation formula to be rewritten as: $(E-E_{\rm ave})^2_{\rm ave} = \left(\dfrac{∂^2\ln Q}{∂\beta^2}\right )_{N,V} = - \left(\dfrac{∂E_{\rm ave}}{∂\beta}\right)_{N,V}.$ Because the parameter $\beta$ can be shown to be related to the Kelvin temperature $T$ as $\beta =\dfrac{1}{kT}$, the above expression can be re-written as: $(E-E_{\rm ave})^2_{\rm ave} = - \left(\dfrac{∂ E_{\rm ave}}{∂\beta}\right)_{N,V} = kT^2 \left(\dfrac{∂E_{\rm ave}}{∂T}\right)_{N,V}.$ Recognizing the formula for the constant-volume heat capacity $C_V = \left(\dfrac{∂E_{\rm ave}}{∂T}\right)_{N,V}$ allows the fractional fluctuation in the energy around the mean energy $E_{\rm ave} = \dfrac{E}{M}$ to be expressed as: $\dfrac{(E-E_{\rm ave})^2_{\rm ave}}{E_{\rm ave}^2} = \dfrac{kT^2 C_V}{E_{\rm ave}^2}.$ What does this fractional fluctuation formula tell us? On its left-hand side it gives a measure of the fractional spread of energies over which each of the containers ranges about its mean energy $E_{\rm ave}$. On the right side, it contains a ratio of two quantities that are extensive properties, the heat capacity and the mean energy.
That is, both $C_V$ and $E_{\rm ave}$ will be proportional to the number $N$ of molecules in the container as long as $N$ is reasonably large. However, because the right-hand side involves $C_V/E_{\rm ave}^2$, it is proportional to $N^{-1}$ and thus will be very small for large $N$ as long as $C_V$ does not become large. As a result, except near so-called critical points where the heat capacity does indeed become extremely large, the fractional fluctuation in the energy of a given container of $N$ molecules will be very small (i.e., proportional to $N^{-1}$). This finding is related to the narrow distribution in energies that we discussed earlier in this section. Let’s look at the expression $\dfrac{(E-E_{\rm ave})^2_{\rm ave}}{E_{\rm ave}^2} = \frac{kT^2 C_V}{E_{\rm ave}^2}$ in a bit more detail for a system that is small but still contains quite a few particles: a cluster of $N$ Ar atoms at temperature $T$. If we assume that each of the Ar atoms in the cluster has $\dfrac{3}{2} kT$ of kinetic energy and that the potential energy holding the cluster together is small and constant (so it cancels in $E-E_{\rm ave}$), $E_{\rm ave}$ will be $\dfrac{3}{2}NkT$ and $C_V$ will be $\dfrac{3}{2} Nk$. So, $\frac{(E-E_{\rm ave})^2_{\rm ave}}{E_{\rm ave}^2} = \frac{kT^2 C_V}{E_{\rm ave}^2} = kT^2 \dfrac{\dfrac{3}{2}Nk}{\bigg(\dfrac{3}{2} NkT\bigg)^2} = \frac{2}{3 N}.$ In a nano-droplet of radius 100 Å, with each Ar atom occupying a volume of ca. $\frac{4}{3} \pi (3.8\,Å)^3 = 232\, Å^3$, there will be ca. $N = \dfrac{\frac{4}{3} \pi\, 100^3}{\frac{4}{3} \pi\, (3.8)^3} = 1.8 \times10^4$ Ar atoms. So, the average fractional spread in the energy $\sqrt{\frac{(E-E_{\rm ave})^2_{\rm ave}}{E_{\rm ave}^2}} = \sqrt{\frac{2}{3 N}}=0.006.$ That is, even for a very small nano-droplet, the fluctuation in the energy of the system is only a fraction of a percent (assuming $C_V$ is not large as near a critical point). This example shows why it is often possible to use thermodynamic concepts and equations even for very small systems, albeit realizing that fluctuations away from the most probable state are more important than in much larger systems. Partition Functions and Thermodynamic Properties Let us now examine how this idea of the most probable energy distribution being dominant gives rise to equations that offer molecular-level expressions for other thermodynamic properties. The first equation is the fundamental Boltzmann population formula that we already examined: $P_j = \dfrac{\exp(- E_j /kT)}{Q},$ which expresses the probability for finding the $N$-molecule system in its $j^{\rm th}$ quantum state having energy $E_j$. Sometimes, this expression is written as $P_j = \dfrac{\Omega_j \exp(- E_j /kT)}{Q}$ where now the index $j$ is used to label an energy level of the system having energy $E_j$ and degeneracy $\Omega_j$. It is important for the student to become comfortable with either notation; a level is just a collection of those states having identical energy.
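The droplet arithmetic can be reproduced in a few lines (Python; the 3.8 Å per-atom radius and 100 Å droplet radius are the values quoted above):

```python
import math

# Nano-droplet estimate: N Ar atoms, each occupying (4/3) pi (3.8 A)^3,
# fill a sphere of radius 100 A; the 4/3 pi factors cancel in the ratio.
r_droplet, r_atom = 100.0, 3.8           # angstroms
N = (r_droplet / r_atom) ** 3            # ~1.8e4 atoms
frac = math.sqrt(2.0 / (3.0 * N))        # fractional energy spread
print(N, frac)                           # ~18000, ~0.006
```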
System Partition Functions Using this result, it is possible to compute the average energy $E_{\rm ave}$, sometimes written as $\langle E \rangle$, of the system $\langle E \rangle = \sum_j P_j E_j ,$ and, as we saw earlier in this Chapter, to show that this quantity can be recast as $\langle E \rangle = kT^2 \left(\frac{∂\ln Q}{∂T}\right)_{N,V} .$ To review how this proof is carried out, we substitute the expressions for $P_j$ and for $Q$ into the expression for $\langle E \rangle$ (I will use the notation labeling energy levels rather than energy states to allow the student to become accustomed to this) $\langle E \rangle = \frac{\sum_j E_j \Omega_j \exp(-E_j/kT)}{\sum_l \Omega_l\exp(-E_l/kT)}.$ By noting that $\dfrac{∂ (\exp(-E_j/kT))}{∂T} = \dfrac{1}{kT^2} E_j \exp(-E_j/kT)$, we can then rewrite $\langle E \rangle$ as $\langle E \rangle = kT^2 \frac{\sum_j \Omega_j \dfrac{∂ (\exp(-E_j/kT))}{∂T} }{\sum_l \Omega_l\exp(-E_l/kT)}.$ And then recalling that $\dfrac{∂X/∂T}{X} = \dfrac{∂\ln X}{∂T}$, we finally obtain $\langle E \rangle = kT^2 \left(\frac{∂\ln Q}{∂T}\right)_{N,V}.$ All other equilibrium properties can also be expressed in terms of the partition function $Q$. For example, if the average pressure $\langle p \rangle$ is defined as the pressure of each quantum state (defined as how the energy of that state changes if we change the volume of the container by a small amount) $p_j = -\bigg(\frac{∂E_j}{∂V}\bigg)_N$ multiplied by the probability $P_j$ for accessing that quantum state, summed over all such states, one can show, realizing that only $E_j$ (not $T$ or $\Omega_j$) depend on the volume $V$, that $\langle p \rangle = -\sum_j \bigg(\frac{∂E_j}{∂V}\bigg)_N\dfrac{\Omega_j \exp(- E_j /kT)}{Q} = kT\left(\frac{∂\ln Q}{∂V}\right)_{N,T} .$ If you wonder why the energies $E_J$ should depend on the volume $V$, think of the case of $N$ gas-phase molecules occupying the container of volume $V$. You know that the translational energies of each of these $N$ molecules depend on the volume through the particle-in-a-box formula $E_{n_x,n_y,n_z}=\dfrac{h^2}{8mL^2}(n_x^2+n_y^2+n_z^2).$ Changing $V$ can be accomplished by changing the box length $L$. This makes it clear why the energies do indeed depend on the volume $V$. Of course, there are additional sources of the $V$-dependence of the energy levels. For example, as one shrinks $V$, the molecules become more crowded, so their intermolecular energies also change. Without belaboring the point further, it is possible to express all of the usual thermodynamic quantities in terms of the partition function $Q$. The average energy and average pressure are given above, as is the heat capacity. The average entropy is given as $\langle S\rangle = k \ln Q + kT \left(\frac{∂\ln Q}{∂T}\right)_{N,V},$ the Helmholtz free energy $A$ is $A = -kT \ln Q$ and the chemical potential $\mu$ is expressed as follows: $\mu = -kT \left(\frac{∂\ln Q}{∂N}\right)_{T,V}.$ As we saw earlier, it is also possible to express fluctuations in thermodynamic properties in terms of derivatives of partition functions and, thus, as derivatives of other properties. For example, the fluctuation in the energy $\langle (E-\langle E\rangle )^2\rangle$ was shown above to be given by $\langle (E-\langle E\rangle )^2\rangle = kT^2 C_V.$ The text Statistical Mechanics, D. A. McQuarrie, Harper and Row, New York (1977) has an excellent treatment of these topics and shows how all of these expressions are derived.
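To make the volume dependence concrete, the sketch below (Python with NumPy; reduced units with $h^2/8m = 1$ and $kT = 1$, and a truncated level sum, are all illustrative choices) uses the particle-in-a-box levels to confirm that the Boltzmann-weighted average of $p_j = -(∂E_j/∂V)_N$ equals $kT(∂\ln Q/∂V)_{N,T}$:

```python
import numpy as np

# Check <p> = kT (d ln Q / d V) for one particle in a cubic box, whose
# levels E ~ (nx^2+ny^2+nz^2)/V^(2/3) make the V-dependence explicit.
kT, V = 1.0, 1.0
n = np.arange(1, 31)                      # truncated quantum-number sums
n2 = n[:, None, None]**2 + n[None, :, None]**2 + n[None, None, :]**2

def lnQ(vol):
    return np.log(np.sum(np.exp(-(n2 / vol ** (2.0 / 3.0)) / kT)))

# <p> as the Boltzmann-weighted average of p_j = -dE_j/dV = (2/3) E_j / V
E = n2 / V ** (2.0 / 3.0)
w = np.exp(-E / kT)
p_avg = np.sum((2.0 / 3.0) * (E / V) * w) / np.sum(w)

h = 1e-6
p_deriv = kT * (lnQ(V + h) - lnQ(V - h)) / (2 * h)
print(p_avg, p_deriv)      # the two estimates agree
```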
So, if one were able to evaluate the partition function $Q$ for $N$ molecules in a volume $V$ at a temperature $T$, either by summing the quantum-level degeneracy and $\exp(-E_j/kT)$ factors $Q = \sum_j \Omega_j \exp(- E_j /kT),$ or by carrying out the phase-space integral over all $M$ of the coordinates and momenta of the system $Q = h^{-M} \int \exp \bigg(- \dfrac{H(q, p)}{kT}\bigg) dq \; dp ,$ one could then use the above formulas to evaluate any thermodynamic properties and their fluctuations as derivatives of $\ln Q$. The averages discussed above, derived using the probabilities $p_J = \dfrac{\Omega_J \exp(- E_J /kT)}{Q}$ associated with the most probable distribution, are called ensemble averages with the set of states associated with the specified values of $N$, $V$, and $T$ constituting what is called a canonical ensemble. Averages derived using the probabilities $\Pi_J$ = constant for all states associated with specified values of $N$, $V$, and $E$ are called ensemble averages for a microcanonical ensemble. There is another kind of ensemble that is often used in statistical mechanics; it is called the grand canonical ensemble and relates to systems with specified volume $V$, temperature $T$, and chemical potential $\mu$ (rather than particle number $N$). To obtain the partition function (from which all thermodynamic properties are obtained) in this case, one considers maximizing the same function $\Omega(n) = \dfrac{M!}{\prod_Jn_J!}$ introduced earlier, but now considering each quantum state (labeled $J$) as having an energy $\varepsilon_J(N,V)$ that depends on the volume and on how many particles occupy this volume. The variables $n_J(N)$ are now used to specify how many of the containers introduced earlier contain $N$ particles and are in the $J^{\rm th}$ quantum state. These variables have to obey the same two constraints as for the canonical ensemble $\sum_{J,N} n_J(N) = M$ $\sum_{J,N} n_J(N)\, \varepsilon_J(N,V) =E,$ but they also are required to obey $\sum_{J,N} N\, n_J(N) = N_{\rm total},$ which means that the sum adds up to the total number of particles in the isolated system’s large container that was divided into $M$ smaller containers. In this case, the walls separating each small container are assumed to allow for energy transfer (as in the canonical ensemble) and for molecules to move from one container to another (unlike the canonical ensemble). Using Lagrange multipliers as before to maximize $\ln\Omega(n)$ subject to the above three constraints involves maximizing $F = \ln M!-\sum_{J,N} \big(n_J(N) \ln n_J(N) - n_J(N)\big) - \alpha\Big(\sum_{J,N} n_J(N) - M\Big) -\beta\Big(\sum_{J,N} n_J(N)\, \varepsilon_J(N,V) - E\Big) -\gamma\Big(\sum_{J,N} N\, n_J(N) - N_{\rm total}\Big)$ and gives $- \ln n_K(N) - \alpha - \beta \varepsilon_K(N) -\gamma N = 0$ or $n_K(N) = \exp[- \alpha - \beta \varepsilon_K(N) -\gamma N].$ Imposing the first constraint gives $M = \sum_{K,N}\exp[- \alpha - \beta \varepsilon_K(N) -\gamma N],$ or $\exp(-\alpha)=\frac{M}{\sum_{K,N}\exp(-\beta\varepsilon_K(N)-\gamma N)}=\frac{M}{Q(\gamma,V,T)}$ where the partition function $Q$ is defined by the sum in the denominator. So, now the probability of a container holding $N$ particles and being in the $K^{\rm th}$ quantum state is $P_K(N)=\frac{\exp(-\beta\varepsilon_K(N)-\gamma N)}{Q}.$ Very much as was shown earlier for the canonical ensemble, one can then express thermodynamic properties (e.g., $E$, $C_V$, etc.) in terms of derivatives of $\ln Q$. The text Statistical Mechanics, D. A.
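A minimal grand canonical illustration (Python; a single site that can hold $N = 0$ or $1$ particle, with illustrative values of $\beta$, $\varepsilon$, and $\mu$, and using $\gamma = -\mu\beta$ as derived below) shows that the probability $P_K(N)$ and the $\mu$-derivative of $\ln Q$ give the same average particle number:

```python
import numpy as np

# Grand canonical toy model: a container holding N = 0 or N = 1 particle,
# with energies eps(0) = 0 and eps(1) = eps. Here
# Q = sum over (K, N) of exp(-beta*eps_K(N) + mu*beta*N).
beta, eps, mu = 1.0, 0.3, -0.2

def lnQ(m):
    return np.log(1.0 + np.exp(-beta * eps + beta * m))

# Occupation probability from P_K(N), and N_ave from the mu-derivative.
P1 = np.exp(-beta * eps + beta * mu) / np.exp(lnQ(mu))
h = 1e-7
N_ave = (1.0 / beta) * (lnQ(mu + h) - lnQ(mu - h)) / (2 * h)
print(P1, N_ave)            # identical: both give the average occupation
```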
McQuarrie, Harper and Row, New York (1977) goes through these derivations in good detail, so I will not repeat them here because we showed how to do so when treating the canonical ensemble. To summarize them briefly, one again uses $\beta = \dfrac{1}{kT}$, finds that $\gamma$ is related to the chemical potential $\mu$ as $\gamma = - \mu \beta$, and obtains $p=\sum_{N,K} P_K(N)\left(-\frac{∂ \varepsilon_K(N,V)}{∂ V}\right)_N=kT \left(\frac{∂\ln Q}{∂ V}\right)_{\mu,T}$ $N_{\rm ave}=\sum_{N,K} N P_K(N)=kT \left(\frac{∂\ln Q}{∂ \mu}\right)_{V,T}$ $S=kT\left(\frac{∂\ln Q}{∂ T}\right)_{\mu,V}+k\ln Q$ $E=\sum_{N,K} \varepsilon_K(N)P_K(N)=kT^2 \left(\frac{∂\ln Q}{∂ T}\right)_{\mu,V}+\mu N_{\rm ave}$ $Q=\sum_{N,K} \exp(-\beta\varepsilon_K(N)+\mu\beta N).$ The formulas look very much like those of the canonical ensemble, except for the result expressing the average number of molecules in the container $N_{\rm ave}$ in terms of the derivative of the partition function with respect to the chemical potential $\mu$. In addition to the equal a priori probability postulate stated earlier (i.e., that, in the thermodynamic limit (i.e., large $N$), every quantum state of an isolated system in equilibrium having fixed $N$, $V$, and $E$ is equally probable), statistical mechanics makes another assumption. It assumes that, in the thermodynamic limit, the ensemble average (e.g., using equal probabilities $\Pi_J$ for all states of an isolated system having specified $N$, $V$, and $E$ or using $P_j = \dfrac{\exp(- E_j /kT)}{Q}$ for states of a system having specified $N$, $V$, and $T$ or using $P_K(N)=\dfrac{\exp(-\beta\varepsilon_K(N,V)+\mu\beta N)}{Q}$ for the grand canonical case) of any quantity is equal to the long-time average of this quantity (i.e., the value one would obtain by monitoring the dynamical evolution of this quantity over a very long time). This second postulate implies that the dynamics of an isolated system spends equal amounts of time in every quantum state that has the specified $N$, $V$, and $E$; this is known as the ergodic hypothesis. Let’s consider a bit more what the physical meaning or information content of partition functions is. Canonical ensemble partition functions represent the thermal-averaged number of quantum states that are accessible to the system at specified values of $N$, $V$, and $T$. This can be seen best by again noting that, in the quantum expression, $Q = \sum_j \Omega_j \exp\bigg(- \dfrac{E_j}{kT}\bigg)$ the partition function is equal to a sum of the number of quantum states in the $j^{\rm th}$ energy level multiplied by the Boltzmann population factor $\exp(-E_j/kT)$ of that level. So, $Q$ is dimensionless and is a measure of how many states the system can access at temperature $T$. Another way to think of $Q$ is suggested by rewriting the Helmholtz free energy definition given above as $Q = \exp(-A/kT)$. This identity shows that $Q$ can be viewed as the Boltzmann population, not of a given energy $E$, but of a specified amount of free energy $A$. For the microcanonical ensemble, the probability of occupying each state that has the specified values of $N$, $V$, and $E$ is equal: $P_J = \dfrac{1}{\Omega(N,V, E)}$ where $\Omega(N,V, E)$ is the total number of such states. In the microcanonical ensemble case, $\Omega(N,V, E)$ plays the role that $Q$ plays in the canonical ensemble case; it gives the number of quantum states accessible to the system.
Individual-Molecule Partition Functions Keep in mind that the energy levels $E_j$ and degeneracies $\Omega_j$ and $\Omega(N,V, E)$ discussed so far are those of the full $N$-molecule system. In the special case for which the interactions among the molecules can be neglected (i.e., in the dilute ideal-gas limit) at least as far as expressing the state energies, each of the energies $E_j$ can be written as a sum of the energies of each individual molecule: $E_j = \sum_{k=1}^N \varepsilon_j(k)$. In such a case, the above partition function $Q$ reduces to a product of individual-molecule partition functions: $Q = \frac{q^N}{N!}$ where the $N!$ factor arises as a degeneracy factor having to do with the permutational indistinguishability of the $N$ molecules (e.g., one must not count both $\varepsilon_j(3) + \varepsilon_k(7)$ with molecule 3 in state $j$ and molecule 7 in state $k$ and $\varepsilon_j(7) + \varepsilon_k(3)$ with molecule 7 in state $j$ and molecule 3 in state $k$; they are the same state), and $q$ is the partition function of an individual molecule $q = \sum_l \omega_l \exp\bigg(-\dfrac{\varepsilon_l}{kT}\bigg).$ Here, $\varepsilon_l$ is the energy of the $l^{\rm th}$ level of the molecule and $\omega_l$ is its degeneracy. The molecular partition functions $q$, in turn, can be written as products of translational, rotational, vibrational, and electronic partition functions if the molecular energies $\varepsilon_l$ can be approximated as sums of such energies. Of course, these approximations are most appropriate to gas-phase molecules whose vibration and rotation states are being described at the lowest level. The following equations give explicit expressions for these individual contributions to $q$ in the most usual case of a non-linear polyatomic molecule: Translational $q_t = \left(\frac{2\pi mkT}{h^2}\right)^{\frac{3}{2}} V$ where $m$ is the mass of the molecule and $V$ is the volume to which its motion is constrained. For molecules constrained to a surface of area $A$, the corresponding result is $q_t = (2\pi mkT/h^2)^{2/2} A$, and for molecules constrained to move along a single axis over a length $L$, the result is $q_t = (2\pi mkT/h^2)^{1/2} L$. The magnitudes of these partition functions can be computed, using $m$ in amu, $T$ in Kelvin, and $L$, $A$, or $V$ in cm, cm$^2$, or cm$^3$, as $q_t = (3.28 \times10^{13}\, mT)^{1/2}\, L,\quad (3.28 \times10^{13}\, mT)^{2/2}\, A,\quad{\rm or}\quad (3.28 \times10^{13}\, mT)^{3/2}\, V$ in 1, 2, or 3 dimensions, respectively. Clearly, the magnitude of $q_t$ depends strongly on the number of dimensions the molecule can move around in. This is a result of the vast differences in translational state densities in 1, 2, and 3 dimensions; recall that we encountered these state-density issues in Chapter 2. Rotational $q_{\rm rot} = \frac{\sqrt{\pi}}{\sigma} \sqrt{\frac{8\pi^2I_AkT}{h^2} } \sqrt{\frac{8\pi^2I_BkT}{h^2}} \sqrt{\frac{8\pi^2I_CkT}{h^2}},$ where $I_A$, $I_B$, and $I_C$ are the three principal moments of inertia of the molecule (i.e., eigenvalues of the moment of inertia tensor). $\sigma$ is the symmetry number of the molecule, defined as the number of ways the molecule can be rotated into a configuration that is indistinguishable from its original configuration. For example, $\sigma$ is 2 for $H_2$ or $D_2$, 1 for $HD$, 3 for $NH_3$, and 12 for $CH_4$.
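For a feel for the magnitudes, the working formula above can be evaluated in one line (Python; Ar at 300 K in a 1 cm$^3$ volume is an illustrative case):

```python
# Translational partition function from the working formula in the text:
# q_t = (3.28e13 * m * T)^(3/2) * V, with m in amu, T in K, V in cm^3.
m, T, V = 40.0, 300.0, 1.0          # Ar at room temperature, illustrative
q_t = (3.28e13 * m * T) ** 1.5 * V
print(f"{q_t:.2e}")                 # ~2.5e26 accessible translational states
```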
The magnitudes of these partition functions can be computed using bond lengths in Å and masses in amu and $T$ in K, using $\sqrt{\frac{8\pi^2I_AkT}{h^2} } = 9.75 \times10^6 \sqrt{I T}$ Vibrational $q_{\rm vib} = \prod_{j=1}^{3N-6} \left\{\dfrac{\exp(-h\nu_j /2kT)}{1- \exp(-h\nu_j/kT)}\right\},$ where $\nu_j$ is the frequency of the $j^{\rm th}$ harmonic vibration of the molecule, of which there are $3N-6$. If one wants to treat the vibrations at a level higher than harmonic, this expression can be modified by replacing the harmonic energies $h\nu_j$ by higher-level expressions. Electronic: $q_e = \sum_J \omega_J\exp\bigg(-\dfrac{\varepsilon_J}{kT}\bigg),$ where $\varepsilon_J$ and $\omega_J$ are the energies and degeneracies of the $J^{\rm th}$ electronic state; the sum is carried out for those states for which the product $\omega_J\exp\bigg(-\dfrac{\varepsilon_J}{kT}\bigg)$ is numerically significant (i.e., levels that have any significant thermal population). It is conventional to define the energy of a molecule or ion with respect to that of its atoms. So, the first term in the electronic partition function is usually written as $\omega_e \exp(D_e/kT)$, where $\omega_e$ is the degeneracy of the ground electronic state and $D_e$ is the energy required to dissociate the molecule into its constituent atoms, all in their ground electronic states. Notice that the magnitude of the translational partition function is much larger than that of the rotational partition function, which, in turn, is larger than that of the vibrational function. Moreover, note that the 3-dimensional translational partition function is larger than the 2-dimensional, which is larger than the 1-dimensional. These orderings are simply reflections of the average number of quantum states that are accessible to the respective degrees of freedom at the temperature $T$ which, in turn, relates to the energy spacings and degeneracies of these states. The above partition function and thermodynamic equations form the essence of how statistical mechanics provides the tools for connecting molecule-level properties such as energy levels and degeneracies, which ultimately determine the $E_j$ and the $\Omega_j$, to the macroscopic properties such as $\langle E\rangle$, $\langle S\rangle$, $\langle p\rangle$, $\mu$, etc. If one has a system for which the quantum energy levels are not known, it may be possible to express all of the thermodynamic properties in terms of the classical partition function, if the system could be adequately described by classical dynamics. This partition function is computed by evaluating the following classical phase-space integral (phase space is the collection of coordinates $q$ and conjugate momenta $p$ as we discussed in Chapter 1) $Q = \frac{h^{-NM}}{N!} \int \exp \bigg(- \dfrac{H(q, p)}{kT}\bigg) dq\, dp.$ In this integral, one integrates over the internal (e.g., bond lengths and angles), orientational, and translational coordinates and momenta of the $N$ molecules. If each molecule has $K$ internal coordinates, 3 translational coordinates, and 3 orientational coordinates, the total number of such coordinates per molecule is $M = K+6$. One can then compute all thermodynamic properties of the system using this $Q$ in place of the quantum $Q$ in the equations given above for $\langle E\rangle$, $\langle p\rangle$, etc.
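As an illustration of the vibrational factor, the sketch below (Python with NumPy; the three harmonic frequencies are approximate values for H$_2$O used purely as an example, and $kT$ is expressed in cm$^{-1}$ using $k \approx 0.695$ cm$^{-1}$/K) evaluates $q_{\rm vib}$ at 300 K:

```python
import numpy as np

# q_vib = prod_j exp(-h nu_j / 2kT) / (1 - exp(-h nu_j / kT)), with all
# energies expressed in wavenumbers so h*nu_j is just nu_j in cm^-1.
nu = np.array([1595.0, 3657.0, 3756.0])   # cm^-1, approximate H2O values
kT_cm = 0.695 * 300.0                     # kT in cm^-1 at 300 K

x = nu / kT_cm
q_vib = np.prod(np.exp(-x / 2.0) / (1.0 - np.exp(-x)))
print(q_vib)   # << 1: the zero-point factors dominate at room temperature
```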
The classical partition functions discussed above are especially useful when substantial intermolecular interactions are present (and, thus, where knowing the quantum energy levels of the $N$-molecule system is highly unlikely). In such cases, the classical Hamiltonian is often written in terms of $H^0$, which contains all of the kinetic energy factors as well as all of the potential energies other than the intermolecular potentials, and the intermolecular potential $U$, which depends only on a subset of the coordinates: $H = H^0 + U$. For example, let us assume that $U$ depends only on the relative distances between molecules (i.e., on the $3N$ translational degrees of freedom which we denote $r$). Denoting all of the remaining coordinates as $y$, the classical partition function integral can be re-expressed as follows: $Q = \frac{h^{-NM}}{N!} \int \exp \bigg(- \dfrac{H^0(y, p)}{kT}\bigg) dy\, dp \int \exp \bigg(-\dfrac{U(r)}{kT}\bigg) dr.$ The factor $Q_{\rm ideal} = \frac{h^{-NM}}{N!} \left[\int \exp \bigg(- \dfrac{H^0(y, p)}{kT}\bigg) dy\, dp\right] V^N$ would be the partition function if the Hamiltonian $H$ contained no intermolecular interactions $U$. The $V^N$ factor arises from the integration over all of the translational coordinates if $U(r)$ is absent. The other factor $Q_{\rm inter} = \frac{1}{V^N} {\int \exp \bigg(-\dfrac{U(r)}{kT}\bigg) dr}$ contains all the effects of intermolecular interactions and reduces to unity if the potential $U$ vanishes. If, as the example considered here assumes, $U$ only depends on the positions of the centers of mass of the molecules (i.e., not on molecular orientations or internal geometries), the $Q_{\rm ideal}$ partition function can be written in terms of the molecular translational, rotational, vibrational, and electronic partition functions shown earlier: $Q_{\rm ideal} = \frac{1}{N!} \bigg[\left(\frac{2\pi mkT}{h^2}\right)^{\frac{3}{2}} V \frac{\sqrt{\pi}}{\sigma} \sqrt{\frac{8\pi^2I_AkT}{h^2} } \sqrt{\frac{8\pi^2I_BkT}{h^2}} \sqrt{\frac{8\pi^2I_CkT}{h^2}} \prod_{j=1}^{3N-6} \left\{\frac{\exp(-h\nu_j /2kT)}{1- \exp(-h\nu_j/kT)}\right\} \sum_J \omega_J\exp\left(\frac{-\varepsilon_J}{kT}\right)\bigg]^N .$ Because all of the equations that relate thermodynamic properties to partition functions contain $\ln Q$, all such properties will decompose into a sum of two parts, one coming from $\ln Q_{\rm ideal}$ and one coming from $\ln Q_{\rm inter}$. The latter contains all the effects of the intermolecular interactions. This means that, in this classical mechanics case, all the thermodynamic equations can be written as an ideal component plus a part that arises from the intermolecular forces. Again, the Statistical Mechanics text by McQuarrie is a good source for reading more details on these topics. Equilibrium Constants in Terms of Partition Functions One of the most important and useful applications of statistical thermodynamics arises in the relation giving the equilibrium constant of a chemical reaction or for a physical transformation (e.g., adsorption of molecules onto a metal surface or sublimation of molecules from a crystal) in terms of molecular partition functions.
Specifically, for any chemical or physical equilibrium (e.g., the former could be the $HF \rightleftharpoons H^+ + F^-$ equilibrium; the latter could be $H_2O(l) \rightleftharpoons H_2O(g)$), one can relate the equilibrium constant (expressed in terms of numbers of molecules per unit volume or per unit area, depending on whether species undergo translational motion in 3 or 2 dimensions) in terms of the partition functions of these molecules. For example, in the hypothetical chemical equilibrium $A + B \rightleftharpoons C$, the equilibrium constant $K$ can be written, if the species can be treated as having negligibly weak intermolecular potentials, as: $K = \dfrac{(N_C/V)}{(N_A/V) (N_B/V)} = \frac{(q_C/V)}{(q_A/V) (q_B/V)}.$ Here, $q_J$ is the partition function for molecules of type $J$ confined to volume $V$ at temperature $T$. As another example, consider the isomerization reaction involving the normal (N) and zwitterionic (Z) forms of arginine that were discussed in Chapter 5. Here, the pertinent equilibrium constant would be: $K = \frac{(N_Z/V)}{(N_N/V)} = \frac{(q_Z/V)}{(q_N/V)}.$ So, if one can evaluate the partition functions $q$ for reactant and product molecules in terms of the translational, electronic, vibrational, and rotational energy levels of these species, one can express the equilibrium constant in terms of these molecule-level properties. Notice that the above equilibrium constant expressions equate ratios of species concentrations (i.e., numbers of molecules per unit volume) to ratios of corresponding partition functions per unit volume. Because partition functions are a count of the number of quantum states available to the system (i.e., the average density of quantum states), this means that we equate species number densities to quantum state densities when we use the above expressions for the equilibrium constant. In other words, statistical mechanics produces equilibrium constants related to numbers of molecules (i.e., number densities), not molar or molal concentrations.
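Schematically, in code (Python; the $q/V$ values below are placeholders, not computed partition functions), the $A + B \rightleftharpoons C$ expression is just a ratio of per-unit-volume partition functions and therefore carries units of volume:

```python
# Sketch of K = (q_C/V) / ((q_A/V)(q_B/V)) for a hypothetical A + B <=> C
# equilibrium. The q/V values are placeholders standing in for actual
# per-unit-volume molecular partition functions (translational x rotational
# x vibrational x electronic), which would be computed as described above.
q_A_per_V = 1.0e26   # hypothetical, cm^-3
q_B_per_V = 5.0e25   # hypothetical, cm^-3
q_C_per_V = 8.0e24   # hypothetical, cm^-3

K = q_C_per_V / (q_A_per_V * q_B_per_V)
print(f"K = {K:.2e} cm^3")   # a number-density-based equilibrium constant
```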
A tool that has proven extremely powerful in statistical mechanics since computers became fast enough to permit simulations of complex systems is the Monte Carlo (MC) method. This method allows one to evaluate the integrations appearing in the classical partition function described above by generating a sequence of configurations (i.e., locations of all of the molecules in the system as well as of all the internal coordinates of these molecules) and assigning a weighting factor to these configurations. By introducing an especially efficient way to generate configurations that have high weighting, the MC method allows us to simulate extremely complex systems that may contain millions of molecules. To appreciate why it is useful to have a tool such as MC, let’s consider how one might write a computer program to evaluate the classical partition function $Q = \frac{h^{-NM}}{N!} \int \exp (- H(q, p)/kT)\,dq \,dp$ for a system consisting of $N$ Ar atoms in a box of volume $V$ at temperature $T$. The classical Hamiltonian $H(q,p)$ consists of a sum of kinetic and inter-atomic potential energies $H(p,q)=\sum_{i=1}^N\frac{p_i^2}{2m}+V(q)$ The integration over the $3N$ momentum variables can be carried out analytically and allows $Q$ to be written as $Q=\frac{1}{N!}\left(\frac{2\pi mkT}{h^2}\right)^{3N/2}\int \exp\left[-\frac{V(q_1,q_2,\cdots,q_{3N})}{kT}\right]dq_1dq_2\cdots dq_{3N}$ The contribution to $Q$ provided by the integral over the coordinates is often called the configurational partition function $Q_{\rm config}=\int \exp\left[-\frac{V(q_1,q_2,\cdots,q_{3N})}{kT}\right]dq_1dq_2\cdots dq_{3N}$ If the density of the $N$ Ar atoms is high, as in a liquid or solid state, the potential $V$ will depend on the $3N$ coordinates of the Ar atoms in a manner that would not allow substantial further approximations to be made. One would thus be faced with evaluating an integral over $3N$ spatial coordinates of a function that depends on all of these coordinates. If one were to discretize each of the $3N$ coordinate axes using say $K$ points along each axis, the numerical evaluation of this integral as a sum over the $3N$ coordinates would require computational effort scaling as $K^{3N}$. Even for 10 Ar atoms with each axis having $K = 10$ points, this is of the order of $10^{30}$ computer operations. Clearly, such a straightforward evaluation of this classical integral would be foolish to undertake. The MC procedure allows one to evaluate such high-dimensional integrals by 1. not dividing each of the $3N$ axes into $K$ discrete points, but rather 2. selecting values of $q_1, q_2, \cdots q_{3N}$ for which the integrand $\exp(-V/kT)$ is non-negligible, while also 3. avoiding values of $q_1, q_2, \cdots q_{3N}$ for which the integrand $\exp(-V/kT)$ is small enough to neglect. By then summing over only values of $q_1, q_2, \cdots q_{3N}$ that meet these criteria, the MC process can estimate the integral. Of course, the magic lies in how one designs a rigorous and computationally efficient algorithm for selecting those $q_1, q_2, \cdots q_{3N}$ that meet the criteria. To illustrate how the MC process works, let us consider carrying out a MC simulation representative of liquid water at some density $\rho$ and temperature $T$. One begins by placing $N$ water molecules in a box of volume $V$ chosen such that $N/V$ reproduces the specified density.
To effect the MC process, we must assume that the total (intramolecular and intermolecular) potential energy $V$ of these $N$ water molecules can be computed for any arrangement of the $N$ molecules within the box and for any values of the internal bond lengths and angles of the water molecules. Notice that, as we showed above when considering the Ar example, $V$ does not include the kinetic energy of the molecules; it is only the potential energy. Often, this energy $V$ is expressed as a sum of intra-molecular bond-stretching and bending contributions, one for each molecule, plus a pair-wise additive intermolecular potential: $V = \sum_J V^{\rm internal}_J + \sum_{J,K} V^{\rm intermolecular}_{J,K},$ although the MC process does not require that one employ such a decomposition; the energy $V$ could be computed in other ways, if appropriate. For example, $V$ might be evaluated as the Born-Oppenheimer energy if an ab initio electronic structure calculation on the full $N$-molecule system were feasible. The MC process does not depend on how $V$ is computed, but, most commonly, it is evaluated as shown above. Metropolis Monte Carlo In each step of the MC process, this potential energy $V$ is evaluated for the current positions of the $N$ water molecules. In its most common and straightforward implementation known as the Metropolis Monte-Carlo process, a single water molecule is then chosen at random and one of its internal (bond lengths or angle) or external (position or orientation) coordinates is selected at random. This one coordinate ($q$) is then altered by a small amount ($q \rightarrow q+\delta q$) and the potential energy $V$ is evaluated at the new configuration ($q+\delta q$). The amount $\delta{q}$ by which coordinates are varied is usually chosen to make the fraction of MC steps that are accepted (by following the procedure detailed below) approximately 50%. This has been shown to optimize the performance of the MC algorithm. In implementing the MC process, it is usually important to consider carefully how one defines the coordinates $q$ that will be used to generate the MC steps. For example, in the case of $N$ Ar atoms discussed earlier, it might be acceptable to use the $3N$ Cartesian coordinates of the $N$ atoms. However, for the water example, it would be very inefficient to employ the $9N$ Cartesian coordinates of the $N$ water molecules. Displacement of, for example, one of the $H$ atoms along the x-axis while keeping all other coordinates fixed would alter the intramolecular O-H bond energy and the H-O-H bending energy as well as the intermolecular hydrogen bonding energies to neighboring water molecules. The intramolecular energy changes would likely be far in excess of $kT$ unless a very small coordinate change $\delta q$ were employed. Because it is important to the efficiency of the MC process to make displacements $\delta q$ that produce ca. 50% acceptance, it is better, for the water case, to make use of coordinates such as the center of mass and orientation coordinates of the water molecules (for which larger displacements produce energy changes within a few $kT$) and smaller displacements of the O-H stretching and H-O-H bending coordinates (to keep the energy change within a few $kT$).
Another point to make about how the MC process is often used is that, when the inter-molecular energy is pairwise additive, evaluation of the energy change $V(q+\delta q) - V(q) = \delta V$ accompanying the change in $q$ requires computational effort that is proportional to the number $N$ of molecules in the system because only those factors $V^{\rm intermolecular}_{J,K}$, with $J$ or $K$ equal to the single molecule that is displaced, need be computed. This is why pairwise additive forms for $V$ are often employed. Let us now return to how the MC process is implemented. If the energy change $\delta V$ is negative (i.e., if the potential energy is lowered by the coordinate displacement), the change in coordinate $\delta q$ is allowed to occur and the resulting new configuration is counted among the MC-accepted configurations. On the other hand, if $\delta V$ is positive, the move from $q$ to $q + \delta q$ is not simply rejected (to do so would produce an algorithm directed toward finding a minimum on the energy landscape, which is not the goal). Instead, the quantity $P = \exp(-\delta V/kT)$ is used to compute the probability for accepting this energy-increasing move. In particular, a random number between, for example, 0.000 and 1.000 is selected. If the random number is greater than $P$ (expressed in the same decimal format), then the move is rejected. If the random number is less than $P$, the move is accepted and the new location is included among the set of MC-accepted configurations. Then, a new water molecule and its internal or external coordinate are chosen at random and the entire process is repeated. In this manner, one generates a sequence of MC-accepted moves representing a series of configurations for the system of $N$ water molecules. Sometimes this series of configurations is called a Monte Carlo trajectory, but it is important to realize that there is no dynamics or time information in this series. This set of configurations has been shown to be properly representative of the geometries that the system will experience as it moves around at equilibrium at the specified temperature $T$ (n.b., $T$ is the only way that information about the molecules' kinetic energy enters the MC process), but no time or dynamical attributes are contained in it. As the series of accepted steps is generated, one can keep track of various geometrical and energetic data for each accepted configuration. For example, one can monitor the distances $R$ among all pairs of oxygen atoms in the water system being discussed and then average this data over all of the accepted steps to generate an oxygen-oxygen radial distribution function $g(R)$ as shown in Figure 7.3. Alternatively, one might accumulate the intermolecular interaction energies between pairs of water molecules and average this over all accepted configurations to extract the cohesive energy of the liquid water. The MC procedure also allows us to compute the equilibrium average of any property $A(q)$ that depends on the coordinates of the $N$ molecules. Such an average would be written in terms of the normalized coordinate probability distribution function $P(q)$ as: $\langle A \rangle = \int P(q)A(q) dq = \dfrac{\int \exp(-\beta V(q))A(q)dq}{\int \exp(-\beta V(q))dq}$ The denominator in the definition of $P(q)$ is, of course, proportional to the coordinate-contribution to the partition function $Q$.
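The accept/reject logic just described fits in a few lines. Below is a minimal Metropolis sketch (Python with NumPy; a one-dimensional double-well potential in reduced units with $kT = 1$ is an illustrative stand-in for the $N$-molecule problem):

```python
import numpy as np

# Minimal Metropolis Monte Carlo sketch for a single coordinate q with an
# illustrative double-well potential V(q). Following the rule in the text,
# downhill moves are always accepted and uphill moves are accepted with
# probability exp(-dV/kT).
rng = np.random.default_rng(0)
kT, dq_max, n_steps = 1.0, 0.5, 200_000

def V(q):
    return (q**2 - 1.0) ** 2        # two minima, at q = -1 and q = +1

q, accepted, samples = 0.0, 0, []
for _ in range(n_steps):
    q_new = q + rng.uniform(-dq_max, dq_max)
    dV = V(q_new) - V(q)
    if dV <= 0.0 or rng.random() < np.exp(-dV / kT):
        q, accepted = q_new, accepted + 1
    samples.append(q)               # the current configuration counts either way

print(accepted / n_steps)           # tune dq_max toward ~50% acceptance
print(np.mean(np.square(samples)))  # an equilibrium average, here <q^2>
```

Note that when a trial move is rejected, the old configuration is counted again; this is required for the accepted-configuration set to represent the equilibrium distribution.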
In the MC process, this average is computed by forming the following sum over the $M$ MC-accepted configurations $q_J$: $\langle A \rangle =\frac{1}{M}\sum_{J=1}^M A(q_J)$ In most MC simulations, millions of accepted steps contribute to the above averages. At first glance, it may seem that such a large number of steps represent an extreme computational burden. However, recall that straightforward discretization of the $3N$ axes produced a result whose effort scaled as $K^{3N}$, which is unfeasible even for small numbers of molecules. So, why do MC simulations work when the straightforward way fails? That is, how can one handle thousands or millions of coordinates when the above analysis would suggest that performing an integral over so many coordinates would require $K^{3N}$ computations? The main thing to understand is that the $K$-site discretization of the $3N$ coordinates is a stupid way to perform the above integral because there are many (in fact, most) coordinate values where the product of the quantity $A$ whose average one wants and the weighting factor $\exp(-\beta V)$ is negligible. On the other hand, the MC algorithm is designed to select (as accepted steps) those coordinates for which $\exp(-\beta V)$ is non-negligible. So, it avoids configurations that are stupid and focuses on those for which the probability factor is largest. This is why the MC method works! The standard Metropolis variant of the MC procedure was described above where its rules for accepting or rejecting trial coordinate displacements $\delta q$ were given. There are several other ways of defining rules for accepting or rejecting trial MC coordinate displacements, some of which involve using information about the forces acting on the coordinates, all of which can be shown to generate a series of MC-accepted configurations consistent with an equilibrium system. The book Computer Simulation of Liquids, M. P. Allen and D. J. Tildesley, Oxford U. Press, New York (1997) provides good descriptions of these alternatives to the Metropolis MC method, so I will not go further into these approaches here. Umbrella Sampling It turns out that the MC procedure as outlined above is a highly efficient method for computing multidimensional integrals of the form $\int P(q) A(q) dq$ where $P(q)$ is a normalized (positive) probability distribution and $A(q)$ is any property that depends on the multidimensional variable $q$. There are, however, cases where this conventional MC approach needs to be modified by using so-called umbrella sampling. To illustrate how this is done and why it is needed, suppose that one wanted to use the MC process to compute an average, with $\exp(-\beta V(q))$ as the weighting factor, of a function $A(q)$ that is large whenever two or more molecules have high (i.e., repulsive) intermolecular potentials. For example, one could have $A(q) = \sum_{I<J} \frac{a}{|\textbf{R}_I- \textbf{R}_J|^n}.$ Such a function could, for example, be used to monitor when pairs of molecules, with center-of-mass coordinates $\textbf{R}_J$ and $\textbf{R}_I$, approach closely enough to undergo a reaction that requires them to surmount a high inter-molecular barrier. The problem with using conventional MC methods to compute $\langle A \rangle = \int A(q) P(q) dq$ in such cases is that: i. $P(q) = \dfrac{\exp(-\beta V(q))}{\int \exp(-\beta V)dq}$ favors those coordinates for which the total potential energy $V$ is low, so coordinates with high $V(q)$ are very infrequently accepted; ii.
However, $A(q)$ is designed to identify events in which pairs of molecules approach closely and thus have high $V(q)$ values. So, there is a competition between $P(q)$ and $A(q)$ that renders the MC procedure ineffective in such cases because the average one wants to compute involves the product $A(q) P(q)$ which is small for most values of $q$. What is done to overcome this competition is to introduce a so-called umbrella weighting function $U(q)$ that i. attains its largest values where $A(q)$ is large, and ii. is positive and takes on values between 0 and 1, so it can be used as shown below to define a proper probability weighting function. One then replaces $P(q)$ in the MC algorithm by the product $P(q)U(q)$ and uses this as a weighting function. To see how this replacement works, we re-write the average that needs to be computed as follows: $\langle A \rangle = \int P(q)A(q) dq = \dfrac{\int \exp(-\beta V(q))A(q)dq}{\int \exp(-\beta V(q))dq} =\dfrac{\dfrac{ \int U(q)\exp(-\beta V(q))(A(q)/U(q)) dq }{\int U(q)\exp(-\beta V(q)) dq}}{\dfrac{ \int U(q)\exp(-\beta V(q))(1/U(q)) dq}{ \int U(q)\exp(-\beta V(q)) dq}}=\dfrac{\Big\langle \dfrac{A}{U}\Big\rangle_{Ue^{-\beta V}}}{\Big\langle \dfrac{1}{U}\Big\rangle_{Ue^{-\beta V}}}$ The interpretation of the last identity is that $\langle A \rangle$ can be computed by i. using the MC process to evaluate the average of ($A(q)/U(q)$) but with a probability weighting factor of $U(q) \exp(-\beta V(q))$ to accept or reject coordinate changes, and ii. also using the MC process to evaluate the average of ($1/U(q)$) again with $U(q) \exp(-\beta V(q))$ as the weighting factor, and finally iii. taking the average of ($A/U$) divided by the average of ($1/U$) to obtain the final result. The secret to the success of umbrella sampling is that the product $U(q) \exp(-\beta V(q))$ causes the MC process to emphasize in its acceptance and rejection procedure coordinates for which both $\exp(-\beta V)$ and $U$ (and hence $A$) are significant. Of course, the tradeoff is that the quantities ($A/U$ and $1/U$) whose averages one computes using $U(q) \exp(-\beta V(q))$ as the MC weighting function are themselves susceptible to being very small at coordinates $q$ where the weighting function is large. Let’s consider some examples of when and how one might want to use umbrella sampling techniques. Suppose one has one system for which the evaluation of the partition function (and thus all thermodynamic properties) can be carried out with reasonable computational effort and another similar system (i.e., one whose potential does not differ much from the first) for which this task is very difficult. Let’s call the potential function of the first system $V^0$ and that of the second system $V^0 + \Delta V$. The latter system’s partition function can be written as follows $Q=\sum_J \exp(-\beta (V^0+\Delta V))=Q^0 \sum_J \exp(-\beta\Delta V)\exp(-\beta V^0)/Q^0=Q^0\langle \exp(-\beta\Delta V)\rangle^0$ where $Q^0$ is the partition function of the first system and $\langle \exp(-\beta\Delta V)\rangle^0$ is the ensemble average of the quantity $\exp(-\beta\Delta V)$ taken with respect to the ensemble appropriate to the first system. This result suggests that one can form the ratio of the partition functions ($Q/Q^0$) by computing the ensemble average of $\exp(-\beta\Delta V)$ using the first system’s weighting function in the MC process.
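Here is a small self-contained illustration of the reweighting identity (Python with NumPy; the one-dimensional potential, property $A$, and umbrella $U$ are all illustrative choices, with $kT = 1$ in reduced units): plain sampling of $\exp(-V/kT)$ would rarely visit the region where $A$ is large, but sampling with $U\exp(-V/kT)$ and forming $\langle A/U\rangle/\langle 1/U\rangle$ recovers the correct average.

```python
import numpy as np

# Umbrella-sampling reweighting: <A> = <A/U> / <1/U>, with both averages
# taken under the altered weight U(q) exp(-V(q)/kT).
rng = np.random.default_rng(1)
kT = 1.0
V = lambda q: q**2                              # low-V region near q = 0
A = lambda q: np.exp(-5.0 * (q - 2.0)**2)       # property peaked at q = 2
U = lambda q: np.exp(-0.5 * (q - 2.0)**2)       # umbrella: large where A is

# Metropolis sampling with the umbrella-altered weight U(q) exp(-V(q)/kT).
q, qs = 0.0, []
for _ in range(500_000):
    qn = q + rng.uniform(-0.5, 0.5)
    ratio = (U(qn) * np.exp(-V(qn) / kT)) / (U(q) * np.exp(-V(q) / kT))
    if rng.random() < min(1.0, ratio):
        q = qn
    qs.append(q)
qs = np.array(qs)

A_umbrella = np.mean(A(qs) / U(qs)) / np.mean(1.0 / U(qs))

# Quadrature reference for this 1D toy (grid spacing cancels in the ratio).
x = np.linspace(-8.0, 8.0, 200_001)
w = np.exp(-V(x) / kT)
print(A_umbrella, np.sum(A(x) * w) / np.sum(w))   # the two agree, ~0.015
```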
Likewise, to compute, for the second system, the average value of any property $A(q)$ that depends only on the coordinates of the particles, one can proceed as follows $\langle A \rangle=\dfrac{\sum_J A_J\exp(-\beta(V^0+\Delta V)) }{Q}=\frac{Q^0}{Q}\langle A\exp(-\beta\Delta V) \rangle^0$ where $\langle A\exp(-\beta\Delta V)\rangle^0$ is the ensemble average of the quantity $A\exp(-\beta\Delta V)$ taken with respect to the ensemble appropriate to the first system. Using the result derived earlier for the ratio ($Q/Q^0$), this expression for $\langle A \rangle$ can be rewritten as $\langle A \rangle=\frac{Q^0}{Q}\langle A\exp(-\beta\Delta V) \rangle^0=\dfrac{\langle A\exp(-\beta\Delta V) \rangle^0}{\langle \exp(-\beta\Delta V) \rangle^0}.$ In this form, we are instructed to form the average of $A$ for the second system by a. forming the ensemble average of $A\exp(-\beta\Delta V)$ using the weighting function for the first system, b. forming the ensemble average of $\exp(-\beta\Delta V)$ using the weighting function for the first system, and c. taking the ratio of these two averages. This is exactly what the umbrella sampling device tells us to do if we were to choose as the umbrella function $U=\exp(\beta \Delta V).$ In this example, the umbrella is related to the difference in the potential energies of the two systems whose relationship we wish to exploit. Under what circumstances would this kind of approach be useful? Suppose one were interested in performing a MC average of a property for a system whose energy landscape $V(q)$ has many local minima separated by large energy barriers, and suppose it was important to sample configurations characterizing the many local minima in the sampling. A straightforward MC calculation using $\exp(-\beta V)$ as the weighting function would likely fail because a sequence of coordinate displacements from near one local minimum to another local minimum would have very little chance of being accepted in the MC process because the barriers are very high. As a result, the MC average would likely generate configurations representative of only the system’s equilibrium existence near one local minimum rather than representative of its exploration of the full energy landscape. However, if one could identify those regions of coordinate space at which high barriers occur and construct a function $\Delta V$ that is large and positive only in those regions, one could then use $U=\exp(\beta \Delta V)$ as the umbrella function and compute averages for the system having potential $V(q)$ in terms of ensemble averages for a modified system whose potential $V^0$ is $V^0=V-\Delta V$ In Figure 7.3a, I illustrate how the original and modified potential landscapes differ in regions between two local minima. The MC-accepted coordinates generated using the modified potential $V^0$ would sample the various local minima and thus the entire landscape in a much more efficient manner because they would not be trapped by the large energy barriers. By using these MC-accepted coordinates, one can then estimate the average value of a property $A$ appropriate to the potential $V$ having the large barriers by making use of the identity $\langle A \rangle=\frac{Q^0}{Q}\langle A\exp(-\beta\Delta V) \rangle^0=\dfrac{\langle A\exp(-\beta\Delta V) \rangle^0}{\langle \exp(-\beta\Delta V) \rangle^0}.$ The above umbrella strategy could be useful in generating a good sampling of configurations characteristic of the many local minima, which would be especially beneficial if the quantity $A(q)$ emphasized those configurations.
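The ratio identity itself can be checked on a one-dimensional toy problem (Python with NumPy; $V^0 = q^2/2$ and $\Delta V = 0.2\,q^4$ are illustrative choices, and the $0$-ensemble is sampled exactly since $\exp(-\beta V^0)$ is a Gaussian for $\beta = 1$):

```python
import numpy as np

# Check of Q/Q0 = <exp(-beta dV)>_0 for two 1D toy systems.
rng = np.random.default_rng(2)
beta = 1.0
dV = lambda q: 0.2 * q**4

q0 = rng.standard_normal(1_000_000)        # exact samples of the 0-ensemble
ratio_sampled = np.mean(np.exp(-beta * dV(q0)))

# Quadrature reference (grid spacing cancels in the ratio).
x = np.linspace(-8.0, 8.0, 200_001)
ratio_exact = (np.sum(np.exp(-beta * (0.5 * x**2 + dV(x))))
               / np.sum(np.exp(-beta * 0.5 * x**2)))
print(ratio_sampled, ratio_exact)          # the two estimates agree
```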
The above umbrella strategy could be useful in generating a good sampling of configurations characteristic of the many local minima, which would be especially beneficial if the quantity $A(q)$ emphasized those configurations. This would be the case, for example, if $A(q)$ measured the intramolecular and nearest-neighbor oxygen-hydrogen interatomic distances in a MC simulation of liquid water. On the other hand, if one wanted to use as $A(q)$ a measure of the energy needed for a $Cl^-$ ion to undergo, in a 1 M aqueous solution of NaCl, a change in coordination number from 6 to 5 as illustrated in Figure 7.3b, one would need a sampling that is accurate both near the local minima corresponding to the 5- and 6-coordinate structures and near the transition-state structure. Using an umbrella function similar to that discussed earlier to simply lower the barrier connecting the two $Cl^-$ ion structures may not be sufficient. Although this would allow one to sample both local minima, its sampling of structures near the transition state would be questionable if the quantity $\Delta V$ by which the barrier is lowered (to allow MC steps moving over the barrier to be accepted with non-negligible probability) is large. In such cases, it is wise to employ a series of umbrellas to connect the local minima to the transition states. Assuming that one has knowledge of the energies and local solvation geometries characterizing the two local minima and the transition state, as well as a reasonable guess or approximation of the intrinsic reaction path (refer back to Section 3.3 of Chapter 3) connecting these structures, one proceeds as follows to generate a series of so-called windows within each of which the free energy $A$ of the solvated $Cl^-$ ion is evaluated (a minimal sketch of one such window-restricted sampling appears after this list).

1. Using the full potential $V$ of the system to constitute the unaltered weighting function $\exp(-\beta V(q))$, one multiplies this by an umbrella function

$U(q)=\begin{cases} 1, & s_1-\delta/2\le s(q)\le s_1+\delta/2 \\ 0, & {\rm otherwise} \end{cases}$

to form the umbrella-altered weighting function $U(q) \exp(-\beta V(q))$. In $U(q)$, $s(q)$ is the value of the intrinsic reaction coordinate (IRC) evaluated for the current geometry $q$ of the system, $s_1$ is the value of the IRC characterizing the first window, and $\delta$ is the width of this window. The first window could, for example, correspond to geometries near the 6-coordinate local minimum of the solvated $Cl^-$ ion structure. The width $\delta$ of each window should be chosen so that the energy variation within the window is no more than 1-2 $kT$; in this way, the MC process will have a good (i.e., ca. 50%) acceptance fraction and the configurations generated will allow for energy fluctuations uphill toward the TS of about this amount.

2. As the MC process is performed using the above $U(q) \exp(-\beta V(q))$ weighting, one constructs a histogram $P_1(s)$ for how often the system reaches various values $s$ along the IRC. Of course, the severe weighting caused by $U(q)$ will not allow the system to realize any value of $s$ outside of the window.

3. One then creates a second window that connects to the first window (i.e., with $s_1+\delta/2 = s_2 - \delta/2$) and repeats the MC sampling using

$U(q)=\begin{cases} 1, & s_2-\delta/2\le s(q)\le s_2+\delta/2 \\ 0, & {\rm otherwise} \end{cases}$

to generate a second histogram $P_2(s)$ for how often the system reaches various values of $s$ along the IRC within the second window.

4. This process is repeated at a series of connected windows $\{s_k-\delta/2\le s(q)\le s_k+\delta/2\}$ whose centers $s_k$ range from the 6-coordinate $Cl^-$ ion ($k = 1$), through the transition state ($k = TS$), and to the 5-coordinate $Cl^-$ ion ($k = N$).
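Here is the promised sketch of a single window's sampling; the potential, the reaction-coordinate map $s(q)$, and all run parameters are hypothetical stand-ins for a real solvated-ion simulation:

```python
import numpy as np

def window_histogram(V, s_of_q, s_center, delta, beta=1.0, nsteps=100_000,
                     step=0.1, bins=20, rng=None):
    """Metropolis MC restricted to one umbrella window along the reaction
    coordinate s(q); returns bin edges and the raw histogram P_k(s)."""
    rng = rng or np.random.default_rng(0)
    edges = np.linspace(s_center - delta / 2, s_center + delta / 2, bins + 1)
    hist = np.zeros(bins)
    q = s_center                  # assume we can start inside the window
    for _ in range(nsteps):
        q_new = q + rng.normal(scale=step)
        inside = edges[0] <= s_of_q(q_new) <= edges[-1]  # U = 1 in window, 0 outside
        if inside and rng.random() < np.exp(-beta * (V(q_new) - V(q))):
            q = q_new
        idx = np.clip(np.searchsorted(edges, s_of_q(q)) - 1, 0, bins - 1)
        hist[idx] += 1
    return edges, hist

# e.g., a window centered at s_1 = -1.0 of width delta = 0.5 on a double well
edges, P1 = window_histogram(V=lambda q: (q**2 - 1)**2, s_of_q=lambda q: q,
                             s_center=-1.0, delta=0.5)
```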
After performing this series of $N$ umbrella-altered samplings, one has in hand a series of $N$ histograms {$P_k(s); k = 1, 2, \cdots TS, \cdots N$}. Within the $k^{\rm th}$ window, $P_k(s)$ gives the relative probability of the system being at a point $s$ along the IRC. To generate the normalized absolute probability function $P(s)$ expressing the probability of being at a point $s$, one can proceed as follows (a sketch of this scaling-and-joining procedure appears after this list):

1. Because the first and second windows are connected at the point $s_1+\delta/2 = s_2 - \delta/2$, one can scale $P_2(s)$ (i.e., multiply it by a constant) to match $P_1(s)$ at this common point to produce a new $P'_2(s)$ function

$P'_2(s)=P_2(s)\dfrac{P_1(s_1+\delta/2)}{P_2(s_2-\delta/2)}$

This new function describes exactly the same relative probability within the second window, but, unlike $P_2(s)$, it connects smoothly to $P_1(s)$.

2. Because the second and third windows are connected at the point $s_2+\delta/2 = s_3 - \delta/2$, one can scale $P_3(s)$ to match $P'_2(s)$ at this common point to produce a new function

$P'_3(s)=P_3(s)\dfrac{P'_2(s_2+\delta/2)}{P_3(s_3-\delta/2)}$

3. This process of scaling $P_k$ to match $P'_{k-1}$ at $s_k - \delta/2 = s_{k-1} + \delta/2$ is repeated until the final window connecting $k = N-1$ to $k = N$. Upon completing this series of connections, one has in hand a continuous probability function $P(s)$, which can be normalized

$P_{\rm normalized}(s)=\frac{P(s)}{\int_{s=0}^{s_{\rm final}}P(s)ds }$

In this way, one can compute the probability of accessing the TS, $P_{\rm normalized}(s=TS)$, and the free energy profile $A(s)=-kT\ln P_{\rm normalized}(s)$ at any point along the IRC. It is by using a series of connected windows, within each of which the MC process samples structures whose energies can fluctuate by 1-2 $kT$, that one generates a smooth connection from low-energy to high-energy (e.g., TS) geometries.
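A minimal sketch of the window-joining step, assuming each histogram's first and last bins sit at the shared edges so adjacent windows can be matched there:

```python
import numpy as np

def stitch_and_normalize(histograms, ds):
    """Scale each window's histogram P_k(s) to match its predecessor at the
    shared edge, concatenate, and normalize; `ds` is the common bin width
    along the IRC (the edge bins are assumed to be populated)."""
    joined = [np.asarray(histograms[0], float)]
    for P_next in histograms[1:]:
        scale = joined[-1][-1] / P_next[0]   # match P'_{k-1} at the common point
        joined.append(np.asarray(P_next, float) * scale)
    P = np.concatenate(joined)
    return P / (P.sum() * ds)                # A(s) = -kT * ln(P) bin by bin
```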
One thing that the MC process does not address directly is the time evolution of the system. That is, the steps one examines in the MC algorithm are not straightforward to associate with a time-duration, so it is not designed to compute the rates at which events take place. If one is interested in simulating such dynamical processes, even when the N-molecule system is at or near equilibrium, it is more appropriate to carry out a classical molecular dynamics (MD) simulation. In such an MD calculation, one has to assign initial values for each of the internal and external coordinates of each of the $N$ molecules and an initial value of the kinetic energy or momentum for each coordinate, after which a time-propagation algorithm generates values for the coordinates and momenta at later times. For example, the initial coordinates could be chosen close to those of a local minimum on the energy surface and the initial momenta associated with each coordinate could be assigned values chosen from a Maxwell-Boltzmann distribution characteristic of a specified temperature $T$. In such cases, it is common to then allow the MD trajectory to be propagated for a length of time $\Delta t$ long enough to allow further equilibration of the energy among all degrees of freedom before extracting any numerical data to use in evaluating average values or creating inter-particle distance histograms, for example. One usually does not choose just one set of such initial coordinates and momenta to generate a single trajectory. Rather, one creates an ensemble of initial coordinates and momenta designed to represent the experimental conditions the MD calculation is to simulate. The time evolution of the system for each set of initial conditions is then followed using MD and various outcomes (e.g., reactive events, barrier crossings, folding or unfolding events, chemisorption occurrences, etc.) are monitored throughout each MD simulation. An average over the ensemble of trajectories is then used in computing averages and creating histograms for the MD simulation. It is the purpose of this Section to describe how MD is used to follow the time evolution for such simulations.

Trajectory Propagation

With each coordinate having its initial velocity $\left(\dfrac{dq}{dt}\right)_0$ and its initial value $q_0$ specified, one then uses Newton's equations written for a time step of duration $\delta t$ to propagate $q$ and $dq/dt$ forward in time according, for example, to the following first-order propagation formula:

$q(t+\delta t) = q_0 + \left(\dfrac{dq}{dt}\right)_0 \delta t$

$\dfrac{dq}{dt} (t+\delta t) = \left(\dfrac{dq}{dt}\right)_0 - \delta t \left[\left( \dfrac{∂V}{∂q} \right)_0\dfrac{1}{m_q}\right].$

Here $m_q$ is the mass factor connecting the velocity $dq/dt$ and the momentum $p_q$ conjugate to the coordinate $q$:

$p_q = m_q \dfrac{dq}{dt},$

and $-(∂V/∂q)_0$ is the force along the coordinate $q$ at the earlier geometry $q_0$. In most modern MD simulations, more sophisticated numerical methods are used to propagate the coordinates and momenta. For example, the widely used Verlet algorithm is derived as follows.

1. One expands the value of the coordinate $q$ at the $n+1^{\rm st}$ and $n-1^{\rm st}$ time steps in Taylor series in terms of values at the $n^{\rm th}$ time step

$q_{n+1}=q_n+\left(\dfrac{dq}{dt}\right)_n\delta t -\dfrac{\left( \dfrac{∂V}{∂q} \right)_n}{2m_q}\delta t^2+O(\delta t^3)$

$q_{n-1}=q_n-\left(\dfrac{dq}{dt}\right)_n\delta t -\dfrac{\left( \dfrac{∂V}{∂q} \right)_n}{2m_q}\delta t^2-O(\delta t^3)$
2. One adds these two expansions to obtain

$q_{n+1}=2 q_n-q_{n-1}-\dfrac{\left( \dfrac{∂V}{∂q} \right)_n}{m_q}\delta t^2+O(\delta t^4)$

which allows one to compute $q_{n+1}$ in terms of $q_n$ and $q_{n-1}$ and the force at the $n^{\rm th}$ step, while not requiring knowledge of velocities.

3. If the two Taylor expansions are subtracted, one obtains

$\left(\dfrac{dq}{dt}\right)_{n}=\dfrac{q_{n+1}-q_{n-1}}{2\delta t}+O(\delta t^2)$

as the expression for the velocity at the $n^{\rm th}$ time step in terms of the coordinates at the $n+1^{\rm st}$ and $n-1^{\rm st}$ steps.

There are many other such propagation schemes that can be used in MD; each has strengths and weaknesses. In the present Section, I will focus on describing the basic idea of how MD simulations are performed while leaving treatment of details about propagation schemes to more advanced sources such as Computer Simulations of Liquids, M. P. Allen and D. J. Tildesley, Oxford U. Press, New York (1997). The forces $-(∂V/∂q)$ appearing in the MD propagation algorithms can be obtained as gradients of a Born-Oppenheimer electronic energy surface if this is computationally feasible. Following this path involves performing what is called direct-dynamics MD. Alternatively, the forces can be computed from derivatives of an empirical force field. In the latter case, the system's potential energy $V$ is expressed in terms of analytical functions of

1. intramolecular bond lengths, bond angles, and torsional angles, as well as

2. intermolecular distances and orientations.

The parameters appearing in such force fields have usually been determined from electronic structure calculations on molecular fragments, spectroscopic determination of vibrational force constants, and experimental measurements of intermolecular forces.

Force Fields

Let's interrupt our discussion of MD propagation of coordinates and velocities to examine the ingredients that usually appear in the force fields mentioned above. In Figure 7.3c, we see a molecule in which various intramolecular and intermolecular interactions are introduced. The total potential of a system containing one or more such molecules in the presence of a solvent (e.g., water) is typically written as a sum of intramolecular potentials (one for each molecule in the system) and intermolecular potentials. The former are usually decomposed into a sum of covalent interactions describing how the energy varies with bond stretching, bond bending, and dihedral angle distortion, as depicted in Figure 7.3d, and non-covalent interactions describing electrostatic and van der Waals interactions among the atoms in the molecule

$V_{\rm noncovalent}=\sum_{i<j}^{\rm atoms} \left[\dfrac{A_{i,j}}{r_{i,j}^{12}}-\dfrac{B_{i,j}}{r_{i,j}^{6}}+\dfrac{q_iq_j}{\varepsilon r_{i,j}}\right].$

These functional forms would be used to describe how the energy $V(q)$ changes with the bond lengths ($r$) and angles ($\theta,\phi$) within, for example, each of the molecules shown in Figure 7.3c (let's call them solute molecules) as well as for any water molecules that may be present (if these molecules are explicitly included in the MD simulation).
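To make the non-covalent sum concrete, here is a minimal sketch of its evaluation over point sites; the parameter matrices, charges, and dielectric constant are hypothetical placeholders for values taken from a real force field:

```python
import numpy as np

def noncovalent_energy(coords, q, A, B, eps_dielectric=1.0):
    """Sum of Lennard-Jones 12-6 plus Coulomb terms over all site pairs i<j;
    `coords` is an (N, 3) array, `q` the partial charges, and A[i, j], B[i, j]
    the pairwise Lennard-Jones parameters."""
    N = len(coords)
    E = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            r = np.linalg.norm(coords[i] - coords[j])
            E += A[i, j] / r**12 - B[i, j] / r**6 + q[i] * q[j] / (eps_dielectric * r)
    return E
```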
The interactions among the solute and solvent molecules are also often expressed in a form involving electrostatic and van der Waals interactions between pairs of atoms: one on one molecule (solute or solvent) and the other on another molecule (solute or solvent)

$V_{\rm intermolecular} = \sum_{i<j}^{\rm atoms} \left[\dfrac{A_{i,j}}{r_{i,j}^{12}}-\dfrac{B_{i,j}}{r_{i,j}^{6}}+\dfrac{q_iq_j}{\varepsilon r_{i,j}}\right].$

The Cartesian forces on any atom within a solute or solvent molecule are then computed for use in the MD simulation by using the chain rule to relate derivatives with respect to Cartesian coordinates to derivatives of the above intramolecular and intermolecular potentials with respect to the interatomic distances and the angles appearing in them. Because water is such a ubiquitous component in condensed-phase chemistry, much effort has been devoted to generating highly accurate intermolecular potentials to describe the interactions among water molecules. In the popular TIP3P and TIP4P models, the water-water interaction is given by

$V = \dfrac{A}{r_{OO}^{12}}-\dfrac{B}{r_{OO}^{6}}+\sum_{i,j}\dfrac{kq_iq_j}{r_{i,j}},$

where $r_{OO}$ is the distance between the oxygen atoms of the two water molecules in Å, and the indices $i$ and $j$ run over 3 or 4 sites, respectively, for TIP3P or TIP4P, with $i$ labeling sites on one water molecule and $j$ labeling sites on the second water molecule. The parameter $k$ is 332.1 Å kcal mol$^{-1}$. $A$ and $B$ are conventional Lennard-Jones parameters for the oxygen atoms, and $q_i$ is the partial charge on the $i^{\rm th}$ site. In Figure 7.3d, we show how the 3 or 4 sites are defined for these two models. Typical values for the parameters are given in the table below.

| | $r_{OH}$ (Å) | HOH angle (degrees) | $r_{OM}$ (Å) | $A$ (Å$^{12}$ kcal/mol) | $B$ (Å$^{6}$ kcal/mol) | $q_O$ or $q_M$ | $q_H$ |
|---|---|---|---|---|---|---|---|
| TIP3P | 0.9572 | 104.52 | | $582\times 10^3$ | 595 | -0.834 | 0.417 |
| TIP4P | 0.9572 | 104.52 | 0.15 | $600\times 10^3$ | 610 | -1.04 | 0.52 |

In the TIP3P model, the three sites reside on the oxygen and two hydrogen centers. For TIP4P, the fourth site is called the M-site; it resides off the oxygen center, a distance of 0.15 Å along the bisector of the two O-H bonds, as shown in Figure 7.3d. In using either the TIP3P or TIP4P model, the intramolecular bond lengths and angles are often constrained to remain fixed; when doing so, one is said to be using a rigid water model. There are variants of these 3-site and 4-site models that, for example, include van der Waals interactions between $H$ atoms on different water molecules, and there are models including more than 4 sites, as well as models that allow for the polarization of each water molecule induced by the dipole fields (as represented by the partial charges) of the other water molecules and of solute molecules. The more detail and complexity one introduces, the more computational effort is needed to perform MD simulations. In particular, models that allow for polarization are considerably more computationally demanding because they often involve solving self-consistently for the polarization of each molecule by the charge and dipole potentials of all the other molecules, with each dipole potential including both the permanent and induced dipoles of that molecule.
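Returning to the simpler rigid models, here is a minimal sketch of the TIP3P water-water energy defined above; coordinates are assumed to be in Å, charges in units of the electron charge, and the rigid site geometries are taken as given:

```python
import numpy as np

def tip3p_pair_energy(w1_coords, w2_coords,
                      A=582e3, B=595.0, k=332.1,
                      charges=(-0.834, 0.417, 0.417)):
    """Interaction energy (kcal/mol) of two rigid TIP3P waters; each
    w*_coords is a (3, 3) array ordered (O, H, H), distances in Angstroms."""
    r_OO = np.linalg.norm(w1_coords[0] - w2_coords[0])
    e = A / r_OO**12 - B / r_OO**6               # Lennard-Jones on the oxygens
    for qi, ri in zip(charges, w1_coords):       # Coulomb over all site pairs
        for qj, rj in zip(charges, w2_coords):
            e += k * qi * qj / np.linalg.norm(ri - rj)
    return e
```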
Professor John Wampler has created a web page in which the details about molecular mechanics force fields introduced above are summarized. This web page provides links to numerous software packages that use these kinds of force fields to carry out MD simulations. These links also offer more detailed information about the performance of various force fields as well as giving values for the parameters used in those force fields. The parameter values are usually obtained by

1. fitting the intramolecular or intermolecular functional form (e.g., as shown above) to energies obtained in electronic structure calculations at a large number of geometries, or

2. adjusting them to cause MD or MC simulations employing the force field to reproduce certain thermodynamic properties (e.g., radial distribution functions, solvation energies, vaporization energies, diffusion constants),

or some combination of both. It is important to observe that the kinds of force fields discussed above have limitations beyond issues of accuracy. In particular, they are not designed to allow for bond breaking and bond forming, and they represent the Born-Oppenheimer energy of one (most often the ground) electronic state. There are force fields explicitly designed to include chemical bonding changes, but most MD packages do not include them. When one is interested in treating a problem that involves transitions from one electronic state to another (e.g., in spectroscopy or when the system undergoes a surface hop near a conical intersection), it is most common to use a combined QM-MM approach like we talked about in Section 6.1.3 of Chapter 6. A QM treatment of the portion of the system that undergoes the electronic transition is combined with a force-field (MM) treatment of the rest of the system to carry out the MD simulation. Let's now return to the issue of propagating trajectories given a force field and a set of initial conditions appropriate to describing the system to be simulated. By applying one of the time-propagation algorithms to all of the coordinates and momenta of the $N$ molecules at time $t$, one generates a set of new coordinates $q(t+\delta t)$ and new velocities $dq/dt(t+\delta t)$ appropriate to the system at time $t+\delta t$. Using these new coordinates and momenta as $q_0$ and $(dq/dt)_0$ and evaluating the forces $-(∂V/∂q)_0$ at these new coordinates, one can again use the propagation equations to generate another finite-time-step set of new coordinates and velocities. Through the sequential application of this process, one generates a sequence of coordinates and velocities that simulate the system's behavior. By following these coordinates and momenta, one can interrogate any dynamical properties that one is interested in. For example, one could monitor oxygen-oxygen distances throughout an MD simulation of liquid water with initial conditions chosen to represent water at a given temperature ($T$ would determine the initial momenta) to generate a histogram of O-O distances. This would allow one to construct the kind of radial distribution function shown in Figure 7.3 using MD simulation rather than MC. The radial distribution function obtained in such an MD simulation should be identical to that obtained from MC because statistical mechanics assumes the ensemble average (MC) is equal to the long-time average (MD) of any property for a system at equilibrium. Of course, one could also monitor quantities that depend on time, such as how often two oxygen atoms come within a certain distance, throughout the MD simulation. This kind of interrogation could not be achieved using MC because there is no sense of time in MC simulations.
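A skeleton of such a simulation loop is sketched below, combining the position-Verlet step derived earlier with on-the-fly accumulation of a pair-distance (e.g., O-O) histogram; the force routine, masses, and run parameters are hypothetical placeholders:

```python
import numpy as np

def md_with_histogram(q0, v0, force, masses, dt, nsteps, bins, r_max):
    """Position-Verlet propagation of (N, 3) coordinates, accumulating a
    pair-distance histogram every step; `force` returns -dV/dq as (N, 3)."""
    edges = np.linspace(0.0, r_max, bins + 1)
    hist = np.zeros(bins)
    q_prev = q0
    q = q0 + v0 * dt + 0.5 * force(q0) / masses[:, None] * dt**2  # first step
    for _ in range(nsteps):
        q_next = 2.0 * q - q_prev + force(q) / masses[:, None] * dt**2
        q_prev, q = q, q_next
        d = np.linalg.norm(q[:, None, :] - q[None, :, :], axis=-1)
        iu = np.triu_indices(len(q), k=1)        # unique pairs only
        h, _ = np.histogram(d[iu], bins=edges)
        hist += h
    return edges, hist
```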
In Chapter 8, I again discuss using classical molecular dynamics to follow the time evolution of a chemical system. However, there is a fundamental difference between the kind of simulations described above and the case I treat in Chapter 8. In the former, one allows the N-molecule system to reach equilibrium (i.e., either by carefully choosing initial coordinates and momenta or by waiting until the dynamics has randomized the energy) before monitoring the subsequent time evolution. In the problem discussed in Chapter 8, we use MD to follow the time progress of a system representing a single bimolecular collision in two crossed beams of molecules. Each such beam contains molecules whose initial translational velocities are narrowly defined rather than Maxwell-Boltzmann distributed. In this case, we do not allow the system to equilibrate because we are not trying to model an equilibrium system. Instead, we select an ensemble of initial conditions that represent the molecules in the two beams and we then follow the Newton dynamics to monitor the outcome (e.g., reaction or non-reactive collision). Unlike the MC method, which is very amenable to parallel computation, MD simulations are more difficult to carry out in a parallel manner. One can certainly execute many different classical trajectories on many different computer nodes; however, to distribute one trajectory over many nodes is difficult. The primary difficulty is that, for each time step, all $N$ of the molecules undergo moves to new coordinates and momenta. To compute the forces on all $N$ molecules requires of the order of $N^2$ calculations (e.g., when pairwise additive potentials are used). In contrast, each MC step requires that one evaluate the potential energy change accompanying the displacement of only one molecule. This uses only of the order of $N$ computational steps (again, for pairwise additive potentials). Another factor that complicates MD simulations has to do with the wide range of time scales that may be involved. For example, for one to use a time step $\delta t$ short enough to follow high-frequency motions (e.g., O-H stretching) in a simulation of an ion or polymer in water solvent, $\delta t$ must be of the order of $10^{-15}$ s. To then simulate the diffusion of an ion or the folding of a polymer in the liquid state, which might require $10^{-4}$ s or longer, one would have to carry out $10^{11}$ MD steps. This likely would render the simulation not feasible. In the table below we illustrate the wide range of time scales that characterize various events that one might want to simulate using some form of MD, and we give a sense of what is practical using MD simulations in the year 2010. These examples of dynamical processes take place over timescales ranging from $10^{-15}$ s through hundreds of seconds, each of which one may wish to simulate using MD.

| Timescale | Process or simulation capability |
|---|---|
| $10^{-15}-10^{-14}$ s | C-H, N-H, O-H bond vibration |
| $10^{-12}$ s | Rotation of a small molecule |
| $10^{-9}$ s | Routinely accessible time duration for atomistic MD simulation |
| $10^{-6}$ s | Time duration for heroic atomistic MD simulation |
| $10^{-3}$ s | Time duration achievable using coarse-graining techniques$^a$ |
| 1-10 s | Time needed for protein folding |

a. These techniques are discussed in Section 7.3.3.

Because one cannot afford to carry out simulations covering $10^{-3}-10^{2}$ s using the $10^{-15}$ s time steps needed to follow bond vibrations, it is necessary to devise strategies to focus on motions whose time frame is of primary interest while ignoring or approximating faster motions.
For example, when carrying out long-time MD simulations, one can ignore the high-frequency intramolecular motions by simply not including these coordinates and momenta in the Newtonian dynamics (e.g., as one does when using the rigid-water model discussed earlier). In other words, one simply freezes certain bond lengths and angles. Of course, this is an approximation whose consequences must be tested and justified, and it would certainly not be a wise step to take if those coordinates played a key role in the dynamical process being simulated. Another approach, called coarse graining, involves replacing the fully atomistic description of selected components of the system by a much-simplified description involving significantly fewer spatial coordinates and momenta.

Coarse Graining

The goal of coarse graining is to bring the computational cost of a simulation into the realm of reality. This is done by replacing the fully atomistic description of the system, in which one retains coordinates sufficient to specify the positions (and, in MD, the velocities) of every atom, by a description in terms of fewer functional groups often referred to as "beads". The TIP4P and TIP3P models for the water-water interaction potential discussed above are not coarse-grained models because they contain as many (or more) centers as atoms. An example of a coarse-grained model for the water-water interaction is provided by the Stillinger-Weber model (originally introduced to treat tetrahedral Si) of water introduced in V. Molinero and E. B. Moore, J. Phys. Chem. B 2009, 113, 4008-4016. Here, each water molecule is described only by the location of its oxygen nucleus (labeled $r_i$ for the $i^{\rm th}$ water molecule), and the interaction potential is given as a sum of two-body and three-body terms

$V=\sum_{i<j=1}^N A\varepsilon\left[B\left(\dfrac{\sigma}{r_{i,j}}\right)^p-\left(\dfrac{\sigma}{r_{i,j}}\right)^q\right]\exp\left(\dfrac{\sigma}{r_{i,j}-a\sigma}\right)+\sum_{i<j<k=1}^N \lambda \varepsilon[\cos\theta_{i,j,k}-\cos\theta_0]^2\exp\left(\dfrac{\gamma\sigma}{r_{i,j}-a\sigma}\right)\exp\left(\dfrac{\gamma\sigma}{r_{i,k}-a\sigma}\right)$

where $r_{i,j}$ is the distance between the $i^{\rm th}$ and $j^{\rm th}$ oxygen atoms, $\theta_0 = 109.47$ deg, and $\theta_{i,j,k}$ is the angle between the $i^{\rm th}$ (at the center), $j^{\rm th}$, and $k^{\rm th}$ oxygen atoms. The parameters $A$, $B$, $p$, $q$, $\lambda$, $\gamma$, $\varepsilon$, $\sigma$, and $a$ are used to characterize various features of the potential; different values are needed to describe the behavior of Si, Ge, diamond, or water even though they all can adopt tetrahedral coordination. The form of the three-body part of this potential is designed to guide the orientations among oxygen atoms to adopt tetrahedral character. Although the above potential seems more complicated than, for example, the form used in the TIP3P or TIP4P potential, it has three important advantages when it comes to carrying out MD simulations:

1. Because the SW potential contains no terms varying with distance as $r^{-1}$ (i.e., no Coulomb interactions among partial charges), it is of qualitatively shorter range than the other two potentials. This allows spatial cut-offs to be used (i.e., interactions beyond much shorter distances can be ignored) efficiently.
2. For a system containing $N$ water molecules, the TIP3P or TIP4P models require one to evaluate functions of the distances between $(3N)^2/2$ or $(4N)^2/2$ centers, whereas the SW's two-body component involves only $N^2/2$ interactions and the three-body component need only be evaluated for molecules $j$ and $k$ that are near molecule $i$.

3. If, for the atomistic models, one wishes to treat the O-H stretching and H-O-H bending motions, MD time steps of ca. $10^{-15}$ s must be employed. For the SW model, the fastest motions involve relative movements of the oxygen centers, which occur on time scales ca. 10 times longer. This means that one can use longer MD steps.

The net result is that this coarse-grained model of the water-water interaction allows MD simulations to be carried out for qualitatively longer time durations. Of course, this is only an advantage if the simulations provide accurate results. In the Table shown below (taken from the above reference), we see MD simulation results (as well as experimental results) obtained with the above (mW) model, with various TIPnP models, and with two other popular water-water potentials (SPC and SPC/E), from which it is clear that the coarse-grained mW model is capable of yielding reliable results on a range of thermodynamic properties.

A second example of coarse graining is provided by a bead model of the B form of the DNA helix. In the Table shown below, the reference from which this model is taken specifies the locations and masses of the phosphate, sugar, and base beads. The masses need to be chosen so that the coarse-grained dynamical motions of these units replicate, within reasonable tolerances, the center of mass motions of the phosphate, sugar, and base moieties when atomistic MD simulations are carried out on smaller test systems containing these nucleotide units. The potential $V$ used to carry out the coarse-grained MD simulations is given by the equations shown below, taken from the above reference. In addition to the usual bond stretching, bending, and dihedral terms (n.b., now the bonds relate to linkages between beads rather than between atoms) that are similar to what we saw earlier in our discussion of force fields, there are additional terms:

1. $V_{\rm stack}$ describes the interactions among $\pi$-stacked base pairs,

2. $V_{\rm bp}$ describes the hydrogen bonding interactions between bases,

3. $V_{\rm ex}$ describes excluded-volume effects, and

4. $V_{\rm qq}$ describes the screened Coulombic interactions among the phosphate units.

$V_{\rm total}=V_{\rm bond}+V_{\rm angle}+V_{\rm dihedral}+V_{\rm stack}+V_{\rm bp}+V_{\rm ex}+V_{\rm qq},$

where

$V_{\rm bond}=\sum_i^{N_{\rm bond}}[k_1(d_i-d_{0_i})^2+k_2(d_i-d_{0_i})^4],$

$V_{\rm angle}=\sum_i^{N_{\rm angle}}\dfrac{k_\theta}{2}(\theta_i-\theta_{0_i})^2,$

$V_{\rm dihedral}=\sum_i^{N_{\rm dihedral}}k_\phi [1-\cos(\phi_i-\phi_{0_i})],$

$V_{\rm stack}=\sum_{i<j}^{N_{\rm st}} 4\varepsilon\left[\left(\dfrac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\dfrac{\sigma_{ij}}{r_{ij}}\right)^6\right],$

$V_{\rm bp}=\sum_{\rm base\ pairs}^{N_{\rm bp}} 4\varepsilon_{{\rm bp},i}\left[5\left(\dfrac{\sigma_{{\rm bp},i}}{r_{ij}}\right)^{12}-6\left(\dfrac{\sigma_{{\rm bp},i}}{r_{ij}}\right)^{10}\right],$

$V_{\rm ex}=\sum_{i<j}^{N_{\rm ex}} \begin{cases} 4\varepsilon\left[\left(\dfrac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\dfrac{\sigma_{ij}}{r_{ij}}\right)^6\right]+\varepsilon & \text{if } r_{ij}<d_{\rm cut}\\ 0 & \text{if } r_{ij}\ge d_{\rm cut} \end{cases}$

$V_{\rm qq}=\sum_{i<j}^N\dfrac{q_iq_j}{4\pi\varepsilon_0 \varepsilon_k r_{ij}}e^{-r_{ij}\kappa_D}.$

The exponential decay constant $\kappa_D$ appearing in $V_{\rm qq}$ is given in terms of a so-called Debye screening length as detailed in the above reference.
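To make the bead-potential bookkeeping concrete, here is a minimal sketch of two of the terms above; the parameter values are placeholders, not the reference's actual numbers:

```python
import numpy as np

def v_bond(d, d0, k1, k2):
    """Quadratic-plus-quartic bond term between two bonded beads."""
    return k1 * (d - d0)**2 + k2 * (d - d0)**4

def v_bp(r, eps_bp, sigma_bp):
    """12-10 hydrogen-bonding term for one base pair at separation r."""
    return 4.0 * eps_bp * (5.0 * (sigma_bp / r)**12 - 6.0 * (sigma_bp / r)**10)

# e.g., a slightly stretched bond and a base pair near its optimum separation
print(v_bond(d=4.1, d0=4.0, k1=100.0, k2=50.0))
print(v_bp(r=3.0, eps_bp=1.0, sigma_bp=2.9))
```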
The values of the parameters used in this force-field potential, given in the above reference, are reproduced in the two Tables shown below. Although there are numerous parameters in this potential, the key to the success of this coarse graining is that there are only six kinds of sites whose positions and velocities must be propagated in the MD simulation: phosphate sites, sugar sites, and four kinds of base sites. This is far fewer coordinates than would arise in a fully atomistic MD simulation. I will refer the reader to the reference cited above for details about how successful coarse graining was in this case, but I will not go further into it at this time. I think the two examples we discussed in this Section suffice for introducing the subject of coarse graining to the readers of this text.

In summary for this Section, classical MD simulations are not difficult to implement if one has available a proper representation of the intramolecular and intermolecular potential energy $V$. Such calculations are routinely carried out on large bio-molecules or condensed-media systems containing thousands to millions of atomic centers. There are, however, difficulties, primarily connected to the wide range of time scales over which molecular motions occur and over which the process being simulated changes, that limit the success of this method and that often require one to employ reduced representations of the system, such as in coarse graining. In contrast, quantum MD simulations, such as we describe in the following Section, are considerably more difficult to carry out.
One of the most active research areas in statistical mechanics involves the evaluation of so-called equilibrium time correlation functions such as we encountered in Chapter 6. The correlation function $C(t)$ is defined in terms of two physical operators $A$ and $B$, a time dependence that is carried by a Hamiltonian $H$ via $\exp(-iHt/ \hbar)$, and an equilibrium average over a Boltzmann population $\exp(-\beta H)/Q$. The quantum mechanical expression for $C(t)$ is

$C(t) = \sum_j \langle \Phi_j | A\exp(iHt/ \hbar) B\exp(-iHt/ \hbar) |\Phi_j\rangle \dfrac{\exp(-\beta E_j)}{Q}, \label{1}$

while the classical mechanical expression (here, we allow the $h^{-M}$ factor that occurs in the partition function shown in Section 7.1.2 to be canceled out in the numerator and denominator for simplicity) is

$C(t) = \int dq(0) \int dp(0) A(q(0),p(0)) B(q(t),p(t)) \dfrac{\exp(-\beta H(q(0),p(0)))}{Q},\label{2}$

where $q(0)$ and $p(0)$ are the values of all the coordinates and momenta of the system at $t=0$ and $q(t)$ and $p(t)$ are their values, according to Newtonian mechanics, at time $t$. As shown above, an example of a time correlation function that relates to molecular spectroscopy is the dipole-dipole correlation function that we discussed in Chapter 6:

$C(t) = \sum_j \langle \Phi_j | \textbf{e}\cdot\boldsymbol{\mu} \exp(iHt/ \hbar) \textbf{e} \cdot \boldsymbol{\mu} \exp(-iHt/ \hbar) |\Phi_j\rangle \dfrac{\exp(-\beta E_j)}{Q},\label{3}$

for which $A$ and $B$ are both the electric dipole interaction $\textbf{e}\cdot\boldsymbol{\mu}$ between the photon's electric field, whose direction is characterized by the vector $\textbf{e}$, and the molecule's dipole operator $\boldsymbol{\mu}$. The Fourier transform of this particular $C(t)$ relates to the absorption intensity for light of frequency $\omega$:

$I(\omega) = \int dt\, C(t) \exp(i\omega t).$

It turns out that many physical properties (e.g., absorption line shapes, Raman scattering intensities) and transport coefficients (e.g., diffusion coefficients, viscosity) can be expressed in terms of time-correlation functions. It is beyond the scope of this text to go much further in this direction, so I will limit my discussion to the optical spectroscopy case at hand, which requires that we now discuss how the time-evolution aspect of this problem is dealt with. The text Statistical Mechanics, D. A. McQuarrie, Harper and Row, New York (1977) has a nice treatment of such other correlation functions, so the reader is directed to that text for further details. The computation of correlation functions involves propagating either wave functions or classical trajectories to produce the $q(t)$, $p(t)$ values entering into the expression for $C(t)$. In the classical case, one carries out a large number of Newtonian trajectories with initial coordinates $q(0)$ and momenta $p(0)$ chosen to represent the equilibrium condition of the $N$-molecule system. For example, one could use the MC method to select these variables employing $\exp(-\beta H(p(0),q(0)))$ as the probability function for accepting or rejecting initial $q(0)$ and $p(0)$ values. In this case, the weighting function contains not just the potential energy but also the kinetic energy (and thus the total Hamiltonian $H$) because now we need to also select proper initial values for the momenta. So, with many (e.g., $M$) selections of the initial $q$ and $p$ variables of the $N$ molecules being made, one would allow the Newton dynamics of each set of initial conditions to proceed. During each such trajectory, one would monitor the initial value of the $A(q(0), p(0))$ property and the time progress of the $B(q(t),p(t))$ property. One would then compute the MC average to obtain the correlation function:

$C(t) = \dfrac{1}{M} \sum_{J=1}^M A(q_J(0),p_J(0)) B(q_J(t),p_J(t)),\label{4}$

where the index $J$ labels the $M$ accepted configurations and momenta of the MC sampling; because these initial conditions were accepted or rejected using $\exp(-\beta H)$ as the probability function, the Boltzmann weight is already built into the sampling and does not appear again in the average.
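A schematic of this classical procedure is sketched below, assuming one already has Boltzmann-sampled initial conditions and a trajectory propagator in hand (both hypothetical callables here):

```python
import numpy as np

def classical_C(initial_conditions, propagate, A, B, times):
    """C(t) as an unweighted average over M Metropolis-accepted initial
    conditions; `propagate(q0, p0, times)` must return q(t), p(t) arrays on
    the requested time grid, and A, B map (q, p) to property values."""
    C = np.zeros(len(times))
    for q0, p0 in initial_conditions:
        qt, pt = propagate(q0, p0, times)
        C += A(q0, p0) * B(qt, pt)
    return C / len(initial_conditions)
```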
In the quantum case, the time propagation is especially challenging and is somewhat beyond the scope of this text. However, I want to give you some idea of the steps that are involved, realizing that this remains an area of very active research development. As noted in Section 1.3.6, it is possible to time-propagate a wave function $\Phi$ that is known at $t = 0$ if one is able to expand $\Phi$ in terms of the eigenfunctions of the Hamiltonian $H$. However, for systems comprised of many molecules, which are most common in statistical mechanics studies, it is impossible to compute (or realistically approximate) these eigenfunctions. Thus, it is not productive to try to express $C(t)$ in terms of these eigenfunctions. Therefore, an entirely new set of tools has been introduced to handle time-propagation in the quantum case, and it is these new devices that I now attempt to describe in a manner much like we saw in Section 1.3.6's discussion of time propagation of wave functions. To illustrate, consider the time propagation issue contained in the quantum definition of $C(t)$ shown above. One is faced with

1. propagating $|\Phi_j\rangle$ from $t=0$ up to time $t$, using $\exp(-iHt/ \hbar) |\Phi_j\rangle$, and then acting with the operator $B$;

2. acting with the operator $A^+$ on $|\Phi_j\rangle$ and then propagating $A^+ |\Phi_j\rangle$ from $t=0$ up to time $t$, using $\exp(-iHt/ \hbar)A^+ |\Phi_j\rangle$;

3. $C(t)$ then requires that these two time-propagated functions be multiplied together and integrated over the coordinates that $\Phi$ depends on.

The $\exp(-\beta H)$ operator that also appears in the definition of $C(t)$ can be combined, for example, with the first time propagation step and actually handled as part of the time propagation as follows:

$\exp(-iHt/ \hbar) |\Phi_j\rangle \exp(-\beta E_j) = \exp(-iHt/ \hbar) \exp(-\beta H) |\Phi_j\rangle\label{5a}$

$=\exp(-i[t+\beta \hbar /i]H/ \hbar) |\Phi_j\rangle.\label{5b}$

The latter expression can be viewed as involving a propagation in complex time from $t = 0$ to $t = t + \beta\hbar/i$. Although having a complex time may seem unusual, as I will soon point out, it turns out that it can have a stabilizing influence on the success of these tools for computing quantum correlation functions. Much like we saw earlier in Section 1.3.6, so-called Feynman path integral techniques can be used to carry out the above time propagations. One begins by dividing the time interval into $P$ discrete steps (this can be the real time interval or the complex interval)

$\exp\big[-\frac{i Ht}{\hbar}\big] = \Big\{\exp\big[-\frac{i H\delta t}{\hbar}\big]\Big\}^P .\label{6}$

The number $P$ will eventually be taken to be large, so each time step $\delta t = t/P$ has a small magnitude. This fact allows us to use approximations to the exponential operator appearing in the propagator that are valid only for short time steps.
For each of these short time steps one then approximates the propagator in the most commonly used, so-called symmetric split form:

$\exp\big[-\frac{i H\delta t}{\hbar}\big] = \exp\big[-\frac{i V\delta t}{2 \hbar}\big] \exp\big[-\frac{i T\delta t}{\hbar}\big] \exp\big[-\frac{i V\delta t}{2 \hbar}\big].\label{7}$

Here, $V$ and $T$ are the potential and kinetic energy operators that appear in $H = T + V$. It is possible to show that the above approximation is valid up to terms of order $(\delta t)^3$. So, for short times (i.e., small $\delta t$), this symmetric split-operator approximation to the propagator should be accurate. The time-evolved wave function $\Phi(t)$ can then be expressed as

$\Phi(t) = \Big\{ \exp\big[-\frac{i V\delta t}{2 \hbar}\big] \exp\big[-\frac{i T\delta t}{\hbar}\big] \exp\big[-\frac{i V\delta t}{2 \hbar}\big]\Big\}^P \Phi(t=0).\label{8}$

The potential $V$ is (except when external magnetic fields are present) a function only of the coordinates $\{q_j \}$ of the system, while the kinetic term $T$ is a function of the momenta $\{p_j\}$ (assuming Cartesian coordinates are used). By making use of the completeness relations for eigenstates of the coordinate operator

$1 = \int dq | q_j\rangle \langle q_j|\label{9}$

and inserting this identity $P$ times (once between each combination of $\exp[-i V\delta t/2\hbar] \exp[-i T\delta t/\hbar] \exp[-i V\delta t/2\hbar]$ factors), the expression given above for $\Phi(t)$ can be rewritten as follows:

$\Phi(q_P ,t)= \int dq_{P-1} dq_{P-2} . . . dq_1 dq_0 \prod_{j=1}^P \exp\big[ -\frac{i\delta t}{2 \hbar}(V(q_j) + V(q_{j-1}))\big] \langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle \Phi(q_0,0).$

Then, by using the analogous completeness identity for the momentum operator

$1 = \frac{1}{\hbar} \int dp_j| p_j\rangle \langle p_j |$

one can write

$\langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle = \frac{1}{\hbar} \int dp \langle q_j|p \rangle \exp\big(-\frac{ip^2\delta t}{2m \hbar} \big) \langle p|q_{j-1} \rangle .$

Finally, by using the fact (recall this from Section 1.3.6) that the momentum eigenfunctions $|p\rangle$, when expressed as functions of coordinates $q$, are given by

$\langle q_j|p \rangle = \frac{1}{\sqrt{2\pi}} \exp\big(\frac{ipq}{\hbar}\big),$

the above integral becomes

$\langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle = \frac{1}{2\pi \hbar} \int dp \exp\big(-\frac{ip^2 \delta t}{2m \hbar}\big) \exp\big[\frac{ip(q_j - q_{j-1})}{\hbar}\big].$

This integral over $p$ can be carried out analytically to give

$\langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle =\left(\frac{m}{2\pi i\hbar \delta t}\right)^{1/2} \exp\big[\frac{im(q_j - q_{j-1})^2}{2 \hbar \delta t}\big].$

When substituted back into the multidimensional integral for $\Phi(q_P ,t)$, we obtain

$\Phi(q_P ,t)= \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \int dq_{P-1} dq_{P-2} . . . dq_1 dq_0 \prod_{j=1}^P\exp\big[ -\frac{i\delta t}{2 \hbar}(V(q_j) + V(q_{j-1}))\big] \exp\big[\frac{im(q_j - q_{j-1})^2}{2 \hbar \delta t}\big] \Phi(q_0,0)$

or

$\Phi(q_P ,t)= \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \int dq_{P-1} dq_{P-2} . . . dq_1 dq_0 \exp\Big[\sum_{j=1}^P \big[ -\frac{i\delta t}{2 \hbar}(V(q_j) + V(q_{j-1}))+ \frac{i m(q_j - q_{j-1})^2}{2 \hbar \delta t}\big]\Big] \Phi(q_0,0).$
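As an aside, the symmetric split-operator factorization shown above also underlies practical grid-based propagation schemes; below is a minimal one-dimensional sketch in which the kinetic factor is applied in momentum space via fast Fourier transforms, with the grid, potential, and wave packet chosen arbitrarily for illustration:

```python
import numpy as np

# Minimal 1-D split-operator propagation on a grid (illustrative parameters).
hbar, m, dt, nsteps = 1.0, 1.0, 0.01, 500
x = np.linspace(-10, 10, 512, endpoint=False)
dx = x[1] - x[0]
p = 2.0 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)   # momentum grid
V = 0.5 * x**2                                          # model potential
psi = np.exp(-(x - 1.0)**2)                             # initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

half_V = np.exp(-1j * V * dt / (2.0 * hbar))
kinetic = np.exp(-1j * p**2 * dt / (2.0 * m * hbar))
for _ in range(nsteps):
    psi = half_V * psi                            # exp(-i V dt / 2 hbar)
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))  # exp(-i T dt / hbar) via FFT
    psi = half_V * psi
```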
Recall what we said earlier, that the time correlation function was to be computed by:

1. propagating $|\Phi_j\rangle$ from $t=0$ up to time $t$, using $\exp(-iHt/ \hbar) |\Phi_j\rangle$, and then acting with the operator $B$;

2. acting with the operator $A^+$ on $|\Phi_j\rangle$ and then propagating $A^+ |\Phi_j\rangle$ from $t=0$ up to time $t$, using $\exp(-iHt/ \hbar)A^+ |\Phi_j\rangle$;

3. multiplying together these two functions and integrating over the coordinates that $\Phi$ depends on.

So all of the effort described above would have to be expended for $\Phi(q_0,0)$ taken to be $|\Phi_j\rangle$, after which the result would be acted on by the operator $B$, as well as for $\Phi(q_0,0)$ taken to be $A^+|\Phi_j\rangle$, to allow the quantum time correlation function $C(t)$ to be evaluated. These steps can be performed, but they are very difficult to implement, so I will refer the student to Computer Simulations of Liquids, M. P. Allen and D. J. Tildesley, Oxford U. Press, New York (1997) for further discussion on this topic. Why are the multidimensional integrals of the form shown above called path integrals? Because the sequence of positions $q_1 , ... q_{P-1}$ describes a path connecting $q_0$ to $q_P$. By integrating over all of the intermediate positions $q_1 , q_2 ,... q_{P-1}$ for any given $q_0$ and $q_P$, one is integrating over all paths that connect $q_0$ to $q_P$. Further insight into the meaning of the above is gained by first realizing that

$\frac{m}{2\delta t} (q_j - q_{j-1})^2 =\frac{m}{2(\delta t)^2} (q_j - q_{j-1})^2 \delta t = \int T dt$

is the finite-difference representation, within the $P$ discrete time steps of length $\delta t$, of the integral of $T\,dt$ over the $j^{\rm th}$ time step, and that

$\frac{\delta t}{2} (V(q_j) + V(q_{j-1})) = \int V dt$

is the representation of the integral of $V\,dt$ over the $j^{\rm th}$ time step. So, for any particular path (i.e., any specific set of $q_0, q_1, \cdots q_{P-1} , q_P$ values), the sum over all such terms

$\sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big]$

represents the integral over all time from $t=0$ until $t = t$ of the so-called Lagrangian $L = T - V$:

$\sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big] = \int Ldt.$

This time integral of the Lagrangian is called the action $S$ in classical mechanics (recall that in Chapter 1, we used quantization of the action in the particle-in-a-box problem). Hence, the multidimensional integral in terms of which $\Phi(q_P ,t)$ is expressed can be written as

$\Phi (q_P ,t) = \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \sum_{\text{ all paths}} \exp\big[\frac{i}{\hbar} \int dt L \big] \Phi(q_0 ,t=0).$

Here, the notation "all paths" is realized in the earlier version of this equation by dividing the time axis from $t = 0$ to $t = t$ into $P$ equal divisions and denoting the coordinates of the system at the $j^{\rm th}$ time step by $q_j$. By then allowing each $q_j$ to assume all possible values (i.e., integrating over all possible values of $q_j$ using, for example, the Monte-Carlo method discussed earlier), one visits all possible paths that begin at $q_0$ at $t = 0$ and end at $q_P$ at $t = t$. By forming the classical action $S$

$S = \int dtL$

for each path, then summing $\exp(iS/ \hbar) \Phi(q_0,t=0)$ over all paths and multiplying by $\left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2}$, one is able to form $\Phi(q_P ,t)$. The difficult step in implementing this Feynman path integral method in practice involves how one identifies all paths connecting $q_0$, $t = 0$ to $q_P$, $t$.
Each path contributes an additive term involving the complex exponential of the quantity

$\sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big]$

Because the time variable $\delta t = t/P$ appearing in each action component can be complex (recall that, in one of the time evolutions, $t$ is really $t + \beta \hbar /i$), the exponentials of these action components can have both real and imaginary parts. The real parts, which arise from the $\exp(-\beta H)$, cause the exponential terms to be damped (i.e., to undergo exponential decay), but the imaginary parts give rise (in $\exp(iS/ \hbar)$) to oscillations. The sum of many, many (actually, an infinite number of) oscillatory $\exp(iS/ \hbar) = \cos (S/ \hbar) + i \sin(S/ \hbar)$ terms is extremely difficult to evaluate because of the tendency of contributions from one path to cancel those of another path. The practical evaluation of such sums remains a very active research subject. The most commonly employed approximation to this sum involves finding the path(s) for which the action

$S= \sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big]$

is smallest, because such paths produce the lowest-frequency oscillations in $\exp(iS/ \hbar)$ and thus may be less subject to cancellation by contributions from other paths. The path(s) that minimize the action $S$ are, in fact, the classical paths. That is, they are the paths that the system whose quantum wave function is being propagated would follow if the system were undergoing classical Newtonian mechanics subject to the conditions that the system be at $q_0$ at $t=0$ and at $q_P$ at $t=t$. In this so-called semi-classical approximation to the propagation of the initial wave function using Feynman path integrals, one finds all classical paths that connect $q_0$ at $t = 0$ to $q_P$ at $t = t$, and one evaluates the action $S$ for each such path. One then applies the formula

$\Phi(q_P ,t) = \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \sum_{\text{ all paths}} \exp\big[\frac{i}{\hbar} \int dt L \big] \Phi(q_0 ,t=0)$

but includes in the sum only the contribution from the classical path(s). In this way, one obtains an approximate quantum-propagated wave function via a procedure that requires knowledge of only classical propagation paths. Clearly, the quantum propagation of wave functions, even within the semi-classical approximation discussed above, is a rather complicated affair. However, keep in mind the alternative that one would face in evaluating, for example, spectroscopic line shapes if one adopted a time-independent approach. One would have to know the energies and wave functions of a system comprised of many interacting molecules. This knowledge is simply not accessible for any but the simplest molecules. For this reason, the time-dependent framework in which one propagates classical trajectories or uses path-integral techniques to propagate initial wave functions offers the most feasible way to evaluate the correlation functions that ultimately produce spectral line shapes and other time correlation functions for complex molecules in condensed media. Before finishing this Section, it might help if I showed how one obtains the result that classical paths are those that make the action integral $S = \int L\,dt$ minimum.
This provides the student with an introduction to the subject called calculus of variations or functional analysis, which most students reading this text have probably not studied in a class. First, let's clarify what a functional is. A function $f(x)$ depends on one or more variables $x$ that take on scalar values; that is, given a scalar number $x$, $f(x)$ produces the value of the function $f$ at this value of $x$. A functional $F[f]$ is a function of the function $f$ if, given the function $f$, $F$ acts on it to produce a value. In more general functionals, $F[f]$ might depend not only on $f$, but on various derivatives of $f$. Let's consider an example. Suppose one has a functional of the form

$F[f]=\int_{t_0}^{t_f} F\Big(t,f(t),\frac{df(t)}{dt}\Big)dt$

meaning that the functional involves an integral from $t_0$ through $t_f$ of an integrand that may contain (i) the variable $t$ explicitly, (ii) the function $f(t)$, and (iii) the derivative of this function with respect to the variable $t$. This is the kind of integral one encounters when evaluating the action integral

$S=\int_{t_0}^{t_f}[T-V]dt=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}\Big)^2-V(x(t))\Big]dt$

where the function $f(t)$ is the coordinate $x(t)$ that evolves from $x(t_0)$ to $x(t_f)$. The task at hand is to determine that function $x(t)$ for which this integral is a minimum. We solve this problem proceeding much as one would do if one had to minimize a function of a variable; we differentiate with respect to the variable and set the derivative to zero. However, in our case, we have a function of a function, not a function of a variable; so how do we carry out the derivative? We assume that the function $x(t)$ that minimizes $S$ is known, and we express any function that differs a little bit from the correct $x(t)$ as

$x(t)+\varepsilon\eta(t)$

where $\varepsilon$ is a scalar quantity used to suggest that $x(t)+\varepsilon\eta(t)$ and $x(t)$ differ by only a small amount, and $\eta(t)$ is a function that obeys

$\eta(t)= 0\text{ at }t=t_0\text{ and at }t = t_f;$

this is how we guarantee that we are only considering paths that connect to the proper $x_0$ at $t_0$ and $x_f$ at $t_f$. By considering all possible functions $\eta(t)$ that obey these conditions, we have in $x(t)+\varepsilon\eta(t)$ a parameterization of all paths that begin (at $t_0$) and end (at $t_f$) where the exact path $x(t)$ does but differ by a small amount from $x(t)$. Substituting $x(t)+\varepsilon\eta(t)$ into

$S=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}\Big)^2-V(x(t))\Big]dt$

gives

$S=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}+\varepsilon\frac{d\eta(t)}{dt}\Big)^2-V(x(t)+\varepsilon\eta(t))\Big]dt.$

The terms in the integrand are then expanded in powers of the $\varepsilon$ parameter

$\Big(\frac{dx(t)}{dt}+\varepsilon\frac{d\eta(t)}{dt}\Big)^2=\Big(\frac{dx(t)}{dt}\Big)^2+2\varepsilon\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}+\varepsilon^2\left(\frac{d\eta}{dt}\right)^2$

$-V(x(t)+\varepsilon\eta(t))=-V(x(t))-\varepsilon\frac{\partial V(x(t))}{\partial x(t)}\eta(t)-\frac{1}{2}\varepsilon^2\frac{\partial^2 V(x(t))}{\partial x(t)^2}\eta^2(t)-\cdots$

and substituted into the integral for $S$.
Collecting terms of each power of $\varepsilon$ allows this integral to be written as

$S(\varepsilon)=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}\Big)^2-V(x(t))+\varepsilon\Big\{m\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}-\frac{\partial V(x(t))}{\partial x(t)}\eta(t)\Big\}+O(\varepsilon^2) \Big]dt.$

The condition that $S(\varepsilon)$ be stable with respect to variations in $\varepsilon$ can be expressed as

$\frac{dS(\varepsilon)}{d\varepsilon}=0=\lim_{\varepsilon\rightarrow 0}\dfrac{S(\varepsilon)-S(0)}{\varepsilon}$

which is equivalent to requiring that the terms linear in $\varepsilon$ in the above expansion for $S(\varepsilon)$ vanish

$0=\int_{t_0}^{t_f}\Big[m\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}-\frac{\partial V(x(t))}{\partial x(t)}\eta(t)\Big]dt$

Next, we use integration by parts to rewrite the first term, involving $\frac{d\eta(t)}{dt}$, as a term involving $\eta(t)$ instead

$\int_{t_0}^{t_f}m\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}dt=m\left[\frac{dx(t)}{dt}\eta(t)\right]_{t_0}^{t_f}-\int_{t_0}^{t_f} m\frac{d^2x(t)}{dt^2}\eta(t)dt$

Because the function $\eta(t)$ vanishes at $t_0$ and $t_f$, the first term vanishes, so this identity can be used to rewrite the condition that the terms in $S(\varepsilon)$ that are linear in $\varepsilon$ vanish as

$0=\int_{t_0}^{t_f}\Big[-m\frac{d^2x(t)}{dt^2}-\frac{\partial V(x(t))}{\partial x(t)}\Big]\eta(t)dt.$

Because this result is supposed to be valid for any function $\eta(t)$ that vanishes at $t_0$ and $t_f$, the factor multiplying $\eta(t)$ in the above integral must itself vanish

$-m\frac{d^2x(t)}{dt^2}-\frac{\partial V(x(t))}{\partial x(t)}=0.$

This shows that the path $x(t)$ that makes $S$ stationary is the path that obeys Newton's equations: the classical path. I urge the student reader to study this example of the use of functional analysis because this mathematical device is an important tool to master.
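As a quick numerical illustration of this result, one can discretize the action and compare its value on the classical path with its value on slightly distorted paths that share the same endpoints; the harmonic choice of $V$, the endpoints, and the distortion $\eta(t)$ below are all arbitrary illustrations:

```python
import numpy as np

# Discretized action S = sum [ m (q_j - q_{j-1})^2 / (2 dt)
#                              - dt (V(q_j) + V(q_{j-1})) / 2 ]
# for a harmonic oscillator (m = omega = 1), with endpoints held fixed.
m, omega = 1.0, 1.0
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
x_cl = np.sin(omega * t) / np.sin(omega * t[-1])   # classical path, x(0)=0, x(t_f)=1

def action(x):
    V = 0.5 * m * omega**2 * x**2
    kin = m * (x[1:] - x[:-1])**2 / (2.0 * dt)
    pot = dt * (V[1:] + V[:-1]) / 2.0
    return np.sum(kin - pot)

eta = np.sin(np.pi * t / t[-1])                    # vanishes at both endpoints
for eps in (0.0, 0.05, 0.1):
    print(eps, action(x_cl + eps * eta))           # smallest at eps = 0
```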
In this Section, I introduce several applications of statistical mechanics that are important for students to be aware of because they arise frequently when chemists make use of the tools of statistical mechanics. These examples include

1. The basic equations connecting the translational, rotational, vibrational, and electronic properties of isolated (i.e., gas-phase) molecules to their thermodynamics.

2. The most basic descriptions of the vibrations of ions, atoms, or molecules within crystals.

3. The most elementary models for describing cooperative behavior and phase transitions in gas-surface and liquid-liquid systems.

4. The contributions of intermolecular forces to the thermodynamics of gases.

Gas-Molecule Thermodynamics

The equations relating the thermodynamic variables to the molecular partition functions can be employed to obtain the following expressions for the energy $E$, heat capacity $C_V$, Helmholtz free energy $A$, entropy $S$, and chemical potential $\mu$ in the case of a gas (i.e., in the absence of intermolecular interactions) of polyatomic molecules:

$\dfrac{E}{NkT} = \dfrac{3}{2} + \dfrac{3}{2} + \sum_{J=1}^{3N-6} \left[\dfrac{h\nu_J}{2kT} + \dfrac{h\nu_J/kT} {\exp(h\nu_J/kT)-1} \right] - \dfrac{D_e}{kT},$

$\dfrac{C_V}{Nk} = \dfrac{3}{2} + \dfrac{3}{2} + \sum_{J=1}^{3N-6} \left(\dfrac{h\nu_J}{kT}\right)^2 \dfrac{\exp(h\nu_J/kT)}{(\exp(h\nu_J/kT)-1)^2} ,$

$-\dfrac{A}{NkT} = \ln \left(\left[\dfrac{2\pi mkT}{h^2}\right]^{3/2} \dfrac{Ve}{N}\right) + \ln\left(\dfrac{\sqrt{\pi}}{\sigma} \sqrt{\dfrac{8\pi^2 I_AkT}{h^2}} \sqrt{\dfrac{8\pi^2 I_BkT}{h^2}} \sqrt{\dfrac{8\pi^2 I_CkT}{h^2}}\right)$

$- \sum_{J=1}^{3N-6} \left[\dfrac{h\nu_J}{2kT} + \ln\Big(1-\exp\Big(-\dfrac{h\nu_J}{kT}\Big)\Big)\right] + \dfrac{D_e}{kT} + \ln\omega_e$

$\dfrac{S}{Nk} = \ln \left(\left[\dfrac{2\pi mkT}{h^2}\right]^{3/2} \dfrac{Ve^{5/2}}{N}\right) + \ln\left(\dfrac{\sqrt{\pi}}{\sigma} \sqrt{\dfrac{8\pi^2 I_AkT}{h^2}} \sqrt{\dfrac{8\pi^2 I_BkT}{h^2}} \sqrt{\dfrac{8\pi^2 I_CkT}{h^2}}\right)$

$+ \sum_{J=1}^{3N-6} \left[\dfrac{h\nu_J/kT} {\exp(h\nu_J/kT)-1} - \ln\Big(1-\exp\Big(-\dfrac{h\nu_J}{kT}\Big)\Big)\right] + \ln\omega_e$

$\dfrac{\mu}{kT} = - \ln \left(\left[\dfrac{2\pi mkT}{h^2}\right]^{3/2} \dfrac{kT}{p}\right) - \ln\left(\dfrac{\sqrt{\pi}}{\sigma} \sqrt{\dfrac{8\pi^2 I_AkT}{h^2}} \sqrt{\dfrac{8\pi^2 I_BkT}{h^2}} \sqrt{\dfrac{8\pi^2 I_CkT}{h^2}}\right)$

$+ \sum_{J=1}^{3N-6} \left[\dfrac{h\nu_J}{2kT} + \ln\Big(1-\exp\Big(-\dfrac{h\nu_J}{kT}\Big)\Big)\right] - \dfrac{D_e}{kT} - \ln\omega_e.$

Earlier in this Chapter, in Section 7.1.2, we showed how these equations are derived, so I refer the reader back to that treatment for further details. Notice that, except for the chemical potential $\mu$, all of these quantities are extensive properties that depend linearly on the number of molecules in the system $N$. Except for the chemical potential $\mu$ and the pressure $p$, all of the variables appearing in these expressions have been defined earlier when we showed the explicit expressions for the translational, vibrational, rotational, and electronic partition functions. These are the working equations that allow one to compute thermodynamic properties of stable molecules, ions, and even reactive species such as radicals in terms of molecular properties such as geometries, vibrational frequencies, electronic state energies and degeneracies, and the temperature, pressure, and volume.
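To give a sense of how these working equations are used, here is a minimal sketch of the vibrational sums evaluated from a list of harmonic frequencies; the physical constants are standard values, and the sample frequencies are those commonly quoted for water's three modes:

```python
import numpy as np

# Vibrational contributions to E/NkT, Cv/Nk, and S/Nk from frequencies in cm^-1.
h, c, kB = 6.62607e-34, 2.99792458e10, 1.380649e-23   # J s, cm/s, J/K

def vib_thermo(nu_cm, T):
    u = h * c * np.asarray(nu_cm) / (kB * T)          # h*nu/kT for each mode
    occ = 1.0 / np.expm1(u)                           # 1/(exp(u) - 1)
    E = np.sum(u / 2.0 + u * occ)                     # in units of NkT
    Cv = np.sum(u**2 * np.exp(u) / np.expm1(u)**2)    # in units of Nk
    S = np.sum(u * occ - np.log(-np.expm1(-u)))       # in units of Nk
    return E, Cv, S

print(vib_thermo([1595.0, 3657.0, 3756.0], T=298.15))  # water's 3 modes
```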
Einstein and Debye Models of Solids

These two models deal with the vibrations of crystals that involve motions among the neighboring atoms, ions, or molecules that comprise the crystal. These inter-fragment vibrations are called phonons. In the Einstein model of a crystal, one assumes that:

1. Each atom, ion, or molecule from which the crystal is constituted is trapped in a potential well formed by its interactions with neighboring species. This potential is denoted $\phi(V/N)$, with the volume-to-number ratio $\dfrac{V}{N}$ written to keep in mind that it likely depends on the packing density (i.e., the distances among neighbors) within the crystal. Keep in mind that $\phi$ represents the interaction of any specific atom, ion, or molecule with the $N-1$ other such species. So, $\dfrac{N \phi}{2}$, not $N \phi$, is the total interaction energy among all of the species; the factor of $\dfrac{1}{2}$ is necessary to avoid double counting.
2. Each such species is assumed to undergo local harmonic vibrational motions about its equilibrium position ($q_J^0$) within the local well that traps it. If the crystal is isotropic, the force constants $k_J$ that characterize the harmonic potential $\dfrac{1}{2} k_J(q_J-q_J^0)^2$ along the $x$, $y$, and $z$ directions are equal; if not, these $k_J$ parameters may be unequal. It is these force constants, along with the masses $m$ of the atoms, ions, or molecules, that determine the harmonic frequencies $\nu_J = \dfrac{1}{2\pi} \sqrt{\dfrac{k_J}{m}}$ of the crystal.
3. The inter-species phonon vibrational partition function of the crystal is then assumed to be a product of $N$ partition functions, one for each atom, ion, or molecule in the crystal, with each partition function taken to be of the harmonic vibrational form:

$Q = \exp\bigg(-\dfrac{N \phi}{2kT}\bigg) \left\{\prod_{J=1}^3 \dfrac{\exp(-h\nu_J/2kT)}{1-\exp(-h\nu_J/kT)}\right\}^N.$

There is no factor of $N!$ in the denominator because, unlike a gas of $N$ species, each of these $N$ species (atoms, ions, or molecules) is constrained to stay put (i.e., not free to roam independently) in the trap induced by its neighbors. In this sense, the $N$ species are distinguishable rather than indistinguishable as they are in the gas case. The $\dfrac{N\phi}{2kT}$ factor arises when one asks what the total energy of the crystal is, aside from its vibrational energy, relative to $N$ separated species; in other words, what is the total cohesive energy of the crystal. This energy is $N$ times the energy of any single species $\phi$, but, as noted above, divided by 2 to avoid double counting the inter-species interaction energies.

This partition function can be subjected to the thermodynamic equations discussed earlier to compute various thermodynamic properties. One of the most useful to discuss for crystals is the heat capacity $C_V$, which is given by (see the vibrational contribution to $C_V$ expressed in Section 7.5.1):

$C_V = Nk \sum_{J=1}^{3} \left(\dfrac{h\nu_J}{kT}\right)^2 \dfrac{\exp(h\nu_J/kT)}{(\exp(h\nu_J/kT)-1)^2}.$

At very high temperatures, this function can be shown to approach $3Nk$, which agrees with the experimental observation known as the law of Dulong and Petit. However, at very low temperatures, this expression approaches:

$C_V \rightarrow \sum_{J=1}^{3} Nk \left(\dfrac{h\nu_J}{kT}\right)^2 \exp\Big(-\dfrac{h\nu_J}{kT}\Big),$

which goes to zero as $T$ approaches zero, but not in a way that is consistent with experimental observation.
That is, careful experimental data show that all crystal heat capacities approach zero in proportion to $T^3$ at low temperature; the Einstein model's $C_V$ approaches zero, but not in the $T^3$ form found in experiments. So, although the Einstein model offers a very useful picture of how a crystal's stability relates to $N\phi$ and how its $C_V$ depends on the vibrational frequencies of the phonon modes, it does not work well at low temperatures. Nevertheless, it remains a widely used model in which to understand the phonons' contributions to thermodynamic properties as long as one does not attempt to extrapolate its predictions to low $T$.

In the Debye model of phonons in crystals, one abandons the view in which each atom, ion, or molecule vibrates independently about its own equilibrium position and replaces this with a view in which the constituent species vibrate collectively in wave-like motions. Each such wave has a wavelength $\lambda$ and a frequency $\nu$ that are related to the speed $c$ of propagation of such waves in the crystal by

$c = \lambda \nu.$

The speed $c$ is a characteristic of the crystal's inter-species forces; it is large for stiff crystals and small for soft crystals.

In a manner much like we used to determine the density of quantum states $\Omega(E)$ within a three-dimensional box, one can determine how many waves can fit within a cubic crystalline box having frequencies between $\nu$ and $\nu + d\nu$. The approach to this problem is to express the allowed wavelengths and frequencies as

$\lambda_n = \dfrac{2L}{n},$

$\nu_n = \dfrac{nc}{2L},$

where $L$ is the length of the box on each of its sides and $n$ is an integer $1, 2, 3, \cdots$. This prescription forces all wavelengths to match the boundary condition of vanishing at the box boundaries. Then carrying out a count of how many waves $\Omega(\nu)\,d\nu$ have frequencies between $\nu$ and $\nu + d\nu$ for a box whose sides are all equal gives the following expression:

$\Omega(\nu) = \dfrac{12\pi V \nu^2}{c^3}.$

The primary observation to be made is that the density of waves is proportional to $\nu^2$:

$\Omega(\nu) = a \nu^2.$

It is conventional to define the parameter $a$ in terms of the maximum frequency $\nu_m$ that one obtains by requiring that the integral of $\Omega(\nu)$ over all allowed $\nu$ add up to $3N$, the total number of inter-species vibrations that can occur:

$3N = \int \Omega(\nu) d\nu = \dfrac{a \nu_m^3}{3}.$

This gives the constant $a$ in terms of $\nu_m$ and $N$ and allows $\Omega(\nu)$ to be written as

$\Omega(\nu) = \dfrac{9N\nu^2}{\nu_m^3}.$

The Debye model uses this wave picture and computes the total energy $E$ of the crystal much as is done in the Einstein model, but with the sum over $3N$ vibrational modes replaced by a continuous integral over the frequencies $\nu$ weighted by the density of such states $\Omega(\nu)$ (see the vibrational contribution to $E$ expressed in Section 7.5.1):

$E = \dfrac{N\phi}{2} + \dfrac{9NkT}{\nu_m^3} \int_0^{\nu_m} \left[\dfrac{h\nu}{2kT} + \dfrac{h\nu /kT}{\exp(h\nu /kT)-1} \right]\nu^2 d\nu,$

where the integral over $\nu$ ranges from 0 to $\nu_m$. It turns out that the $C_V$ heat capacity obtained by taking the temperature derivative of this expression for $E$ can be written as follows:

$C_V = 3Nk \left[ 4D\Big(\dfrac{h\nu_m}{kT}\Big) - \dfrac{3(h\nu_m/kT)}{\exp(h\nu_m/kT)-1} \right],$

where the so-called Debye function $D(u)$ is defined by

$D(u) = 3u^{-3} \int_0^u \dfrac{x^3}{\exp(x)-1} dx.$
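The Debye function integral has no closed form, but it is straightforward to evaluate numerically. The sketch below (an illustration, not part of the original text) computes $D(u)$ by simple quadrature and uses it in the $C_V$ expression above; the Debye frequency chosen is a hypothetical value.

```python
# A minimal sketch: the Debye function D(u) by trapezoidal quadrature,
# used in C_V = 3Nk[4 D(u) - 3u/(exp(u)-1)] with u = h nu_m / kT.
# The Debye frequency nu_m below is a hypothetical value.
import math

k_B, h = 1.380649e-23, 6.62607015e-34

def debye_D(u, n=2000):
    """D(u) = 3 u^-3 * integral_0^u x^3/(exp(x)-1) dx."""
    if u == 0.0:
        return 1.0
    xs = [u * i / n for i in range(1, n + 1)]   # integrand -> 0 at x = 0
    ys = [x**3 / math.expm1(x) for x in xs]
    integral = (u / n) * (sum(ys) - 0.5 * ys[-1])
    return 3.0 * integral / u**3

def cv_debye_per_Nk(T, nu_m):
    """C_V/(Nk); approaches 3 (Dulong-Petit) at high T and ~T^3 at low T."""
    u = h * nu_m / (k_B * T)
    return 3.0 * (4.0 * debye_D(u) - 3.0 * u / math.expm1(u))

nu_m = 5.0e12   # hypothetical Debye frequency, s^-1
for T in (10.0, 50.0, 300.0, 1000.0):
    print(T, cv_debye_per_Nk(T, nu_m))
```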
The important thing to be noted about the Debye model is that the heat capacity, as defined above, extrapolates to $3Nk$ at high temperatures, thus agreeing with the law of Dulong and Petit, and varies at low temperature as

$C_V \rightarrow \dfrac{12}{5} Nk\pi^4 \left(\dfrac{kT}{h\nu_m}\right)^3.$

So, the Debye heat capacity does indeed vary as $T^3$ at low $T$, as careful experiments indicate. For this reason, it is appropriate to use the Debye model whenever one is interested in properly treating the energy, heat capacity, and other thermodynamic properties of crystals at temperatures for which $\dfrac{kT}{h\nu_m}$ is small. At higher temperatures, it is appropriate to use either the Debye or the Einstein model. The major difference between the two lies in how they treat the spectrum of vibrational frequencies that occur in a crystal. The Einstein model says that only one frequency (or at most three, if three different $k_J$ values are used) occurs, $\nu_J = \dfrac{1}{2\pi} \sqrt{\dfrac{k_J}{m}}$; each species in the crystal is assumed to vibrate at this frequency. In contrast, the Debye model says that the species vibrate collectively and with frequencies ranging from $\nu = 0$ up to $\nu = \nu_m$, the so-called Debye frequency, which is proportional to the speed $c$ at which phonons propagate in the crystal. In turn, this speed depends on the stiffness (i.e., the inter-species potentials) within the crystal.

Lattice Theories of Surfaces and Liquids

This kind of theory can be applied to a wide variety of chemical and physical problems, so it is a very useful model to be aware of. The starting point of the model is to consider a lattice containing $M$ sites, each of which has $c$ nearest-neighbor sites (n.b., clearly, $c$ will depend on the structure of the lattice), and to imagine that each of these sites can exist in either of two states that we label A and B. Before deriving the basic equations of this model, let me explain how the concepts of sites and A and B states are used to apply the model to various problems.

1. The sites can represent binding sites on the surface of a solid, and the two states A and B can represent situations in which the site is either occupied (A) or unoccupied (B) by a molecule that is chemisorbed or physisorbed to the site. This point of view is taken when one applies lattice models to adsorption of gases or liquids to solid surfaces.
2. The sites can represent individual spin-1/2 molecules or ions within a lattice, and the states A and B can denote the $\alpha$ and $\beta$ spin states of these species. This point of view allows the lattice models to be applied to magnetic materials.
3. The sites can represent positions that either of two kinds of molecules A and B might occupy in a liquid or solid, in which case A and B are used to label whether each site contains an A or a B molecule. This is how we apply the lattice theories to liquid mixtures.
4. The sites can represent cis- and trans- conformations in linkages within a polymer, and A and B can be used to label each such linkage as being either cis- or trans-. This is how we use these models to study polymer conformations.

In Figure 7.4, I show a two-dimensional lattice having 25 sites of which 16 are occupied by dark (A) species and 9 are occupied by lighter (B) species. The partition function for such a lattice is written in terms of a degeneracy $\Omega$ and an energy $E$, as usual.
The degeneracy is computed by considering the number of ways a total of $N_A + N_B$ species can be arranged on the lattice:

$\Omega = \dfrac{(N_A+N_B)!}{N_A! N_B!}.$

The interaction energy among the A and B species for any arrangement of the A and B on the lattice is assumed to be expressed in terms of pairwise interaction energies. In particular, if only nearest-neighbor interaction energies are considered, one can write the total interaction energy $E_{\rm int}$ of any arrangement as

$E_{\rm int} = N_{AA} E_{AA} + N_{BB} E_{BB} + N_{AB} E_{AB},$

where $N_{IJ}$ is the number of nearest-neighbor pairs of type I-J and $E_{IJ}$ is the interaction energy of an I-J pair. The example shown in Figure 7.4 has $N_{AA} = 16$, $N_{BB} = 4$ and $N_{AB} = 19$.

The three parameters $N_{IJ}$ that characterize any such arrangement can be re-expressed in terms of the numbers $N_A$ and $N_B$ of A and B species and the number of nearest neighbors per site $c$ as follows:

$2N_{AA} + N_{AB} = cN_A,$

$2N_{BB} + N_{AB} = cN_B.$

Note that the sum of these two equations states the obvious fact that twice the sum of AA, BB, and AB pairs must equal the number of A and B species multiplied by the number of neighbors per species, $c$. Using the above relationships among $N_{AA}$, $N_{BB}$, and $N_{AB}$, we can rewrite the interaction energy as

$E_{\rm int} = E_{AA} \dfrac{cN_A - N_{AB}}{2} + E_{BB} \dfrac{cN_B - N_{AB}}{2} + E_{AB} N_{AB}$

$=\dfrac{(N_A E_{AA} + N_B E_{BB})c}{2} + \dfrac{(2E_{AB} - E_{AA} - E_{BB})N_{AB}}{2}.$

The reason it is helpful to write $E_{\rm int}$ in this manner is that it allows us to express things in terms of two variables over which one has direct experimental control, $N_A$ and $N_B$, and one variable $N_{AB}$ that characterizes the degree of disorder among the A and B species. That is, if $N_{AB}$ is small, the A and B species are arranged on the lattice in a phase-separated manner; whereas, if $N_{AB}$ is large, the A and B are well mixed.

The total partition function of the A and B species arranged on the lattice is written as follows:

$Q = q_A^{N_A} q_B^{N_B} \sum_{N_{AB}} \Omega(N_A, N_B, N_{AB}) \exp(-E_{\rm int}/kT).$

Here, $q_A$ and $q_B$ are the partition functions (electronic, vibrational, etc.) of the A and B species as they sit bound to a lattice site, and $\Omega(N_A, N_B, N_{AB})$ is the number of ways that $N_A$ species of type A and $N_B$ of type B can be arranged on the lattice such that there are $N_{AB}$ A-B type nearest neighbors. Of course, $E_{\rm int}$ is the interaction energy discussed earlier. The sum occurs because a partition function is a sum over all possible states of the system. There are no $1/N_J!$ factors because, as in the Einstein and Debye crystal models, the A and B species are not free to roam but are tied to lattice sites and thus are distinguishable.

This expression for $Q$ can be rewritten in a manner that is more useful by employing the earlier relationships for $N_{AA}$ and $N_{BB}$:

$Q = \Bigl(q_A \exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr)^{N_A} \Bigl(q_B\exp\Bigl(-\dfrac{cE_{BB}}{2kT}\Bigr)\Bigr)^{N_B} \sum_{N_{AB}} \Omega(N_A, N_B, N_{AB}) \exp\Bigl(\dfrac{N_{AB}X}{2kT}\Bigr),$

where

$X = (-2E_{AB} + E_{AA} + E_{BB}).$

The quantity $X$ plays a central role in all lattice theories because it provides a measure of how different the A-B interaction energy is from the average of the A-A and B-B interaction energies.
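To make this bookkeeping concrete, the following toy sketch (not from the text) counts $N_{AA}$, $N_{BB}$, and $N_{AB}$ on a small square lattice. Periodic boundary conditions are imposed so that every site has exactly $c = 4$ neighbors, and the relations $2N_{AA} + N_{AB} = cN_A$ and $2N_{BB} + N_{AB} = cN_B$ then hold exactly; on the finite, non-periodic lattice of Figure 7.4, edge sites have fewer neighbors, so those counts would not satisfy the bulk relations.

```python
# A toy sketch: count nearest-neighbor pair types on a square lattice
# with periodic boundaries (c = 4). The occupancy pattern is illustrative.
def count_pairs(grid):
    n = len(grid)          # n x n lattice of 'A'/'B' labels
    naa = nbb = nab = 0
    for i in range(n):
        for j in range(n):
            # each bond is counted once via its right and down neighbors
            for ni, nj in ((i, (j + 1) % n), ((i + 1) % n, j)):
                pair = {grid[i][j], grid[ni][nj]}
                if pair == {"A"}:
                    naa += 1
                elif pair == {"B"}:
                    nbb += 1
                else:
                    nab += 1
    return naa, nbb, nab

grid = ["AABBA", "ABBBA", "AABAA", "BBAAA", "AAABB"]
naa, nbb, nab = count_pairs(grid)
n_a = sum(row.count("A") for row in grid)
n_b = sum(row.count("B") for row in grid)
print(naa, nbb, nab)
print(2 * naa + nab == 4 * n_a, 2 * nbb + nab == 4 * n_b)  # True True
```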
As we will soon see, if $X$ is large and negative (i.e., if the A-A and B-B interactions are, on average, substantially more attractive than the A-B interactions), phase separation can occur; if $X$ is positive, phase separation will not occur.

The problem with the above expression for the partition function is that no one has yet determined an analytical expression for the degeneracy factor $\Omega(N_A, N_B, N_{AB})$. Therefore, in the most elementary lattice theory, known as the Bragg-Williams approximation, one approximates the sum over $N_{AB}$ by using the following average value of $N_{AB}$

$N_{AB}^* = \dfrac{N_A(cN_B)}{N_A+N_B}$

in the expression for $\Omega$. This average is formed by taking the number of A sites and multiplying by the number of neighbor sites ($c$) and by the fraction of these neighbor sites that would be occupied by a B species if mixing were random. This approximation produces

$Q = \Bigl(q_A \exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr)^{N_A} \Bigl(q_B\exp\Bigl(-\dfrac{cE_{BB}}{2kT}\Bigr)\Bigr)^{N_B} \exp\Bigl(\dfrac{N_{AB}^*X}{2kT}\Bigr) \sum_{N_{AB}} \Omega(N_A, N_B, N_{AB}).$

Finally, we realize that the sum $\sum_{N_{AB}} \Omega(N_A, N_B, N_{AB})$ is equal to the number of ways of arranging $N_A$ A species and $N_B$ B species on the lattice regardless of how many A-B neighbor pairs there are. This number is, of course, $\dfrac{(N_A+N_B)!}{N_A!N_B!}$. So, the Bragg-Williams lattice model partition function reduces to:

$Q = \Bigl(q_A \exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr)^{N_A} \Bigl(q_B\exp\Bigl(-\dfrac{cE_{BB}}{2kT}\Bigr)\Bigr)^{N_B} \dfrac{(N_A+N_B)!}{N_A!N_B!} \exp\Bigl(\dfrac{N_{AB}^*X}{2kT}\Bigr).$

The most common connection one makes to experimental measurements using this partition function arises by computing the chemical potentials of the A and B species on the lattice and equating these to the chemical potentials of the A and B as they exist in the gas phase. In this way, one uses the equilibrium conditions (equal chemical potentials in two phases) to relate the vapor pressures of A and B, which arise through the gas-phase chemical potentials, to the interaction energy $X$. Let me now show you how this is done.

First, we use

$\mu_J = -kT \left(\dfrac{\partial\ln Q}{\partial N_J}\right)_{T,V}$

to compute the A and B chemical potentials on the lattice. This gives

$\mu_A = -kT\left\{ \ln\Bigl(q_A\exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr) - \ln\Bigl(\dfrac{N_A}{N_A+N_B}\Bigr) + \Bigl(1-\dfrac{N_A}{N_A+N_B}\Bigr)^2 \dfrac{cX}{2kT} \right\}$

and an analogous expression for $\mu_B$ with $N_B$ replacing $N_A$. The expression for the gas-phase chemical potentials $\mu_A^g$ and $\mu_B^g$ given earlier in this Chapter has the form:

$\mu = - kT \ln \left(\left[\dfrac{2\pi mkT}{h^2}\right]^{3/2} \dfrac{kT}{p}\right) - kT \ln\left(\dfrac{\sqrt{\pi}}{\sigma} \sqrt{\dfrac{8\pi^2I_AkT}{h^2}} \sqrt{\dfrac{8\pi^2I_BkT}{h^2}} \sqrt{\dfrac{8\pi^2I_CkT}{h^2}}\right)$

$+kT \sum_{J=1}^{3N-6} \left[\dfrac{h\nu_J}{2kT} + \ln\Big(1-\exp\Big(-\dfrac{h\nu_J}{kT}\Big)\Big)\right] - D_e - kT \ln\omega_e,$

within which the vapor pressure appears. The pressure dependence of this gas-phase expression can be factored out to write each $\mu$ as:

$\mu_A^g = \mu_A^0 + kT \ln(p_A),$

where $p_A$ is the vapor pressure of A (in atmosphere units) and $\mu_A^0$ denotes all of the other factors in $\mu_A^g$.
Likewise, the lattice-phase chemical potentials can be written as a term that contains the $N_A$ and $N_B$ dependence and a term that does not:

$\mu_A = -kT\left\{ \ln\Bigl(q_A\exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr) - \ln X_A + (1-X_A)^2 \dfrac{cX}{2kT} \right\},$

where $X_A$ is the mole fraction of A ($\dfrac{N_A}{N_A+N_B}$). Of course, an analogous expression holds for $\mu_B$. We now perform two steps:

1. We equate the gas-phase and lattice-phase chemical potentials of species A in a case where the mole fraction of A is unity. This gives

$\mu_A^0 + kT \ln(p_A^0) = -kT\Bigl\{ \ln\Bigl(q_A\exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr)\Bigr\},$

where $p_A^0$ is the vapor pressure of A that exists over the lattice in which only A species are present.

2. We equate the gas- and lattice-phase chemical potentials of A for an arbitrary mole fraction $X_A$ and obtain:

$\mu_A^0 + kT \ln(p_A) = -kT\left\{ \ln\Bigl(q_A\exp\Bigl(-\dfrac{cE_{AA}}{2kT}\Bigr)\Bigr) - \ln X_A + (1-X_A)^2 \dfrac{cX}{2kT} \right\},$

which contains the vapor pressure $p_A$ of A over the lattice covered by A and B with $X_A$ being the mole fraction of A.

Subtracting these two equations and rearranging, we obtain an expression for how the vapor pressure of A depends on $X_A$:

$p_A = p_A^0 X_A \exp\Bigl(-\dfrac{cX(1-X_A)^2}{2kT}\Bigr).$

Recall that the quantity $X$ is related to the interaction energies among various species as

$X = (-2E_{AB} + E_{AA} + E_{BB}).$

Let us examine the physical meaning of the above result for the vapor pressure. First, if one were to totally ignore the interaction energies (i.e., by taking $X = 0$), one would obtain the well-known Raoult's Law expressions for the vapor pressures of a mixture:

$p_A = p_A^0 X_A,$

$p_B = p_B^0 X_B.$

In Figure 7.5, I plot the A and B vapor pressures vs. $X_A$. The two straight lines are, of course, just the Raoult's Law findings. I also plot the $p_A$ vapor pressure for three values of the $X$ interaction energy parameter. When $X$ is positive, meaning that the A-B interactions are more energetically favorable than the average of the A-A and B-B interactions, the vapor pressure of A is found to deviate negatively from the Raoult's Law prediction. This means that the observed vapor pressure is lower than that expected based solely on Raoult's Law. On the other hand, when $X$ is negative, the vapor pressure deviates positively from Raoult's Law.

An especially important and interesting case arises when the $X$ parameter is negative and has a value that makes $\dfrac{cX}{kT}$ be more negative than $-4$. It turns out that in such cases, the function $p_A$ predicted by this Bragg-Williams model displays behavior suggesting that a phase transition may occur. Hints of this behavior are clear in Figure 7.5, where one of the plots displays both a maximum and a minimum, but the plots for $X > 0$ and for $\dfrac{cX}{kT} > -4$ do not. Let me now explain this further by examining the derivative of $p_A$ with respect to $X_A$:

$\dfrac{dp_A}{dX_A} = p_A^0 \left\{1 + X_A(1-X_A) \dfrac{cX}{kT}\right\} \exp\Bigl(-\dfrac{cX(1-X_A)^2}{2kT}\Bigr).$

Setting this derivative to zero (in search of a maximum or minimum), and solving for the values of $X_A$ that make this possible, one obtains:

$X_A= \dfrac{1 \pm \sqrt{1+4kT/cX}}{2}.$

Because $X_A$ is a mole fraction, it must be less than unity and greater than zero.
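The stationary-point condition just derived is easy to explore numerically. In the sketch below (illustrative values of the dimensionless ratio $cX/kT$, not data from the text), the two interior extrema of $p_A(X_A)$ appear only when $cX/kT$ drops below $-4$, as the next paragraph discusses.

```python
# A minimal sketch: p_A/p_A^0 = X_A exp(-cX(1-X_A)^2 / 2kT) and its
# stationary points X_A = [1 +/- sqrt(1 + 4kT/cX)]/2. The values of
# the dimensionless ratio cX/kT below are illustrative only.
import math

def p_ratio(x_a, cx_over_kT):
    return x_a * math.exp(-0.5 * cx_over_kT * (1.0 - x_a) ** 2)

for cx_over_kT in (2.0, -2.0, -6.0):
    if cx_over_kT < -4.0:
        root = math.sqrt(1.0 + 4.0 / cx_over_kT)
        x_lo, x_hi = (1.0 - root) / 2.0, (1.0 + root) / 2.0
        print(cx_over_kT, "extrema at X_A =", round(x_lo, 3), round(x_hi, 3))
    else:
        print(cx_over_kT, "no interior extrema; p_A rises monotonically")
    # sample the curve at a few mole fractions
    print([round(p_ratio(x / 10.0, cx_over_kT), 3) for x in range(1, 10)])
```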
The above result giving the mole fraction at which $\dfrac{dp_A}{dX_A} = 0$ will not produce a realistic value of $X_A$ unless

$\dfrac{cX}{kT} < -4.$

If $\dfrac{cX}{kT} = -4$, there is only one value of $X_A$ (i.e., $X_A = 1/2$) that produces a zero slope; for $\dfrac{cX}{kT} < -4$, there will be two such values given by $X_A = \dfrac{1 \pm \sqrt{1+4kT/cX}}{2}$, which is what we see in Figure 7.5 where the plot displays both a maximum and a minimum.

What does it mean for $\dfrac{cX}{kT}$ to be less than $-4$, and why is this important? For $X$ to be negative, it means that the average of the A-A and B-B interactions is more energetically favorable than the A-B interactions. It is for this reason that a phase separation may be favored in such cases (i.e., the A species prefer to be near other A species more than to be near B species, and similarly for the B species). However, thermal motion can overcome a slight preference for such separation. That is, if $X$ is not sufficiently negative, $kT$ can overcome this slight preference. This is why $cX$ must be less than $-4kT$, not just less than zero.

So, the bottom line is that if the A-A and B-B interactions are more attractive, on average, than are the A-B interactions, one can experience a phase separation in which the A and B species do not remain mixed on the lattice but instead gather into two distinct kinds of domains. One kind of domain will be rich in the A species, having an $X_A$ value equal to that shown by the right dot in Figure 7.5. The other kind will be rich in B and have an $X_A$ value equal to that shown by the left dot.

As I noted in the introduction to this Section, lattice models can be applied to a variety of problems. We just analyzed how the model is applied, within the Bragg-Williams approximation, to mixtures of two species. In this way, we obtain expressions for how the vapor pressures of the two species in the liquid or solid mixture display behavior that reflects their interaction energies. Let me now briefly show you how the lattice model is applied in some other areas.

In studying adsorption of gases to sites on a solid surface, one imagines a surface containing $M$ sites per unit area $A$ with $N_{ad}$ molecules (that have been adsorbed from the gas phase) bound to these sites. In this case, the interaction energy $E_{\rm int}$ introduced earlier involves only interactions among neighboring adsorbed molecules; there are no lateral interactions among empty surface sites or between empty surface sites and adsorbed molecules. So, we can make the following replacements in our earlier equations:

$N_A \rightarrow N_{ad},$

$N_B \rightarrow M - N_{ad},$

$E_{\rm int} = E_{ad,ad} N_{ad,ad},$

where $N_{ad,ad}$ is the number of nearest-neighbor pairs of adsorbed species and $E_{ad,ad}$ is the pairwise interaction energy between such a pair. The primary result obtained by equating the chemical potentials of the gas-phase and adsorbed molecules is:

$p = kT \dfrac{q_{gas}}{V} \dfrac{1}{q_{ad}} \dfrac{\theta}{1-\theta} \exp\Big(\dfrac{E_{ad}c\theta}{kT}\Big).$

Here $q_{gas}/V$ is the partition function of the gas-phase molecules per unit volume, $q_{ad}$ is the partition function of the adsorbed molecules (which contains the adsorption energy as $\exp(-\phi/kT)$), and $\theta$ is called the coverage (i.e., the fraction of surface sites to which molecules have adsorbed). Clearly, $\theta$ plays the role that the mole fraction $X_A$ played earlier.
This so-called adsorption isotherm equation allows one to connect the pressure of the gas above the solid surface to the coverage. As in our earlier example, something unusual occurs when the quantity $E_{ad}c\theta/kT$ is negative and beyond a critical value. In particular, differentiating the expression for $p$ with respect to $\theta$ and finding for what $\theta$ value(s) $dp/d\theta$ vanishes, one finds:

$\theta = \dfrac{1 \pm \sqrt{1+4kT/cE_{ad}}}{2}.$

Since $\theta$ is a positive fraction, this equation can only produce useful values if

$\dfrac{cE_{ad}}{kT} < -4.$

This means that if the attractions between neighboring adsorbed molecules are strong enough, they can overcome thermal factors and cause phase separation to occur. The kind of phase separation one observes is the formation of islands of adsorbed molecules separated by regions where the surface has few or no adsorbed molecules.

There is another area where this kind of lattice model is widely used. When studying magnetic materials, one often uses the lattice model to describe the interactions among pairs of neighboring spins (e.g., unpaired electrons on neighboring molecules or nuclear spins on neighboring molecules). In this application, one assumes that up or down spin states are distributed among the lattice sites, which represent where the molecules are located. $N_\alpha$ and $N_\beta$ are the total numbers of such spins, so ($N_\alpha - N_\beta$) is a measure of what is called the net magnetization of the sample. The result of applying the Bragg-Williams approximation in this case is that one again observes a critical condition under which strong spin pairings occur. In particular, because the interactions between $\alpha$ and $\alpha$ spins (or $\beta$ and $\beta$ spins), denoted $-J$, and between $\alpha$ and $\beta$ spins, denoted $+J$, are equal and opposite, the $X$ variable characteristic of all lattice models reduces to:

$X = -2E_{\alpha,\beta} + E_{\alpha,\alpha} + E_{\beta,\beta} = -4J.$

The critical condition under which one expects like spins to pair up, and thus to form islands of $\alpha$-rich centers and other islands of $\beta$-rich centers, is

$-\dfrac{4cJ}{kT} < -4$

or

$\dfrac{cJ}{kT} > 1.$

Virial Corrections to Ideal-Gas Behavior

Recall from our earlier treatment of the classical partition function that one can decompose the total partition function into a product of two factors:

$Q = \dfrac{h^{-NM}}{N!}\int \exp \Big(-\dfrac{H^0(y, p)}{kT}\Big) dy\, dp \int \exp \Big(-\dfrac{U(r)}{kT}\Big) dr,$

one of which,

$Q_{\rm ideal} = \dfrac{h^{-NM}}{N!} \int \exp \Big(-\dfrac{H^0(y, p)}{kT}\Big) dy\, dp\, V^N,$

is the result if no intermolecular potentials are operative. The second factor,

$Q_{\rm inter} = \dfrac{1}{V^N} \int \exp \Big(-\dfrac{U(r)}{kT}\Big) dr,$

thus contains all of the effects of intermolecular interactions. Recall also that all of the equations relating partition functions to thermodynamic properties involve taking $\ln Q$ and derivatives of $\ln Q$. So, all such equations can be cast into sums of two parts: that arising from $\ln Q_{\rm ideal}$ and that arising from $\ln Q_{\rm inter}$. In this Section, we will be discussing the contributions of $Q_{\rm inter}$ to such equations.

The first thing that is done to develop the so-called cluster expansion of $Q_{\rm inter}$ is to assume that the total intermolecular potential energy can be expressed as a sum of pairwise additive terms:

$U = \sum_{I<J} U(r_{IJ}),$

where $r_{IJ}$ labels the distance between molecule $I$ and molecule $J$.
This allows the exponential appearing in $Q_{\rm inter}$ to be written as a product of terms, one for each pair of molecules:

$\exp\Big(-\dfrac{U}{kT}\Big) = \exp\Big(-\sum_{I<J} \dfrac{U(r_{IJ})}{kT}\Big) = \prod_{I<J} \exp\Big(-\dfrac{U(r_{IJ})}{kT}\Big).$

Each of the exponentials $\exp\Big(-\dfrac{U(r_{IJ})}{kT}\Big)$ is then expressed as follows:

$\exp\Big(-\dfrac{U(r_{IJ})}{kT}\Big) = 1 + \Big(\exp\Big(-\dfrac{U(r_{IJ})}{kT}\Big) - 1\Big) = 1 + f_{IJ},$

the last equality being what defines $f_{IJ}$. These $f_{IJ}$ functions are introduced because, whenever the molecules $I$ and $J$ are distant from one another and thus not interacting, $U(r_{IJ})$ vanishes, so $\exp\Big(-\dfrac{U(r_{IJ})}{kT}\Big)$ approaches unity, and thus $f_{IJ}$ vanishes. In contrast, whenever molecules $I$ and $J$ are close enough to experience strong repulsive interactions, $U(r_{IJ})$ is large and positive, so $f_{IJ}$ approaches $-1$. These properties make $f_{IJ}$ a useful measure of how molecules are interacting: if they are not, $f = 0$; if they are repelling strongly, $f = -1$; and if they are strongly attracting, $f$ is large and positive.

Inserting the $f_{IJ}$ functions into the product expansion of the exponential, one obtains:

$\exp\Big(-\dfrac{U}{kT}\Big) = \prod_{I<J} (1 + f_{IJ}) = 1 + \sum_{I<J} f_{IJ} + \sum_{I<J} \sum_{K<L} f_{IJ} f_{KL} + \cdots,$

which is called the cluster expansion in terms of the $f_{IJ}$ pair functions. When this expansion is substituted into the expression for $Q_{\rm inter}$, we find:

$Q_{\rm inter} = \dfrac{1}{V^N} \int \Big(1 + \sum_{I<J} f_{IJ} + \sum_{I<J} \sum_{K<L} f_{IJ} f_{KL} + \cdots\Big) dr,$

where the integral is over all $3N$ of the $N$ molecules' center-of-mass coordinates.

The integrals involving only one $f_{IJ}$ function are all equal (i.e., for any pair $I$, $J$, the molecules are identical in their interaction potentials) and reduce to:

$\dfrac{N(N-1)}{2V^2} \int f(r_{1,2}) dr_1 dr_2.$

The integrals over $dr_3 \cdots dr_N$ produce $V^{N-2}$, which combines with $\dfrac{1}{V^N}$ to produce the $V^{-2}$ seen. Finally, because $f(r_{1,2})$ depends only on the relative positions of molecules 1 and 2, the six-dimensional integral over $dr_1 dr_2$ can be replaced by integrals over the relative location $r$ of the two molecules and the position of their center of mass $R$. The integral over $R$ gives one more factor of $V$, and the above cluster integral reduces to

$4\pi \dfrac{N(N-1)}{2V} \int f(r) r^2 dr,$

with the $4\pi$ coming from the angular integral over the relative coordinate $r$. Because the total number of molecules $N$ is very large, it is common to write the $\dfrac{N(N-1)}{2}$ factor as $\dfrac{N^2}{2}$.

The cluster integrals containing two $f_{IJ} f_{KL}$ factors can also be reduced. However, it is important to keep track of different kinds of such factors (depending on whether the indices $I$, $J$, $K$, $L$ are all different or not). For example, terms of the form

$\dfrac{1}{V^N} \int f_{IJ} f_{KL} dr_1 dr_2 \cdots dr_N,$

with $I$, $J$, $K$, and $L$ all unique, reduce (again using the equivalence of the molecules and the fact that $f_{IJ}$ depends only on the relative positions of $I$ and $J$) to:

$\dfrac{N^4}{4} (4\pi)^2 V^{-2} \int f_{12} r_{12}^2 dr_{12} \int f_{34} r_{34}^2 dr_{34},$

where, again, I used the fact that $N$ is very large to replace $\dfrac{N(N-1)}{2} \dfrac{(N-2)(N-3)}{2}$ by $\dfrac{N^4}{4}$.
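The one-dimensional integral $\int f(r) r^2 dr$ to which these cluster terms reduce is readily evaluated numerically once a pair potential is chosen. The sketch below uses a Lennard-Jones form in reduced units as an illustrative stand-in for $U(r)$; the potential form and parameter values are my assumptions, not the text's. The same integral fixes the second virial coefficient $B_2$ discussed at the end of this Section.

```python
# A minimal sketch: the cluster integral \int f(r) r^2 dr for an assumed
# Lennard-Jones pair potential U(r) = 4 eps [(sigma/r)^12 - (sigma/r)^6],
# in reduced units. Note B_2 = -2 pi * (this integral); see below.
import math

eps, sigma, kT = 1.0, 1.0, 1.5      # illustrative reduced-unit values

def f_mayer(r):
    u = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return math.expm1(-u / kT)       # f = exp(-U/kT) - 1

# trapezoidal quadrature; f(r) -> -1 at small r and -> 0 quickly at large r
r_lo, r_hi, n = 0.3, 10.0, 20000
dr = (r_hi - r_lo) / n
rs = [r_lo + i * dr for i in range(n + 1)]
vals = [f_mayer(r) * r * r for r in rs]
integral = dr * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
integral += -r_lo**3 / 3.0           # hard-core region 0..r_lo where f ~ -1
print("integral of f(r) r^2 dr ~", integral)
```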
On the other hand, cluster integrals with, for example, $I=K$ but $J$ and $L$ different reduce as follows:

$\dfrac{1}{V^N} \int f_{12} f_{13} dr_1 dr_2 \cdots dr_N = \dfrac{1}{2} V^{-3} N^3 \int f_{12} f_{13} dr_1 dr_2 dr_3.$

Because $f_{12}$ depends only on the relative positions of molecules 1 and 2 and $f_{13}$ depends on the relative positions of 1 and 3, the nine-dimensional integral over $dr_1 dr_2 dr_3$ can be changed to a six-dimensional integral over $dr_{12} dr_{13}$ and an integral over the location of molecule 1; the latter integral produces a factor of $V$ when carried out. Thus, the above cluster integral reduces to:

$(4\pi)^2 \dfrac{1}{2} V^{-2} N^3 \int f_{12} f_{13} r_{12}^2 r_{13}^2 dr_{12} dr_{13}.$

There is a fundamental difference between cluster integrals of the type $f_{12} f_{34}$ and those involving $f_{12} f_{13}$. The former are called unlinked clusters because they involve the interaction of molecules 1 and 2 and a separate interaction of molecules 3 and 4. The latter are called linked because they involve molecule 1 interacting simultaneously with molecules 2 and 3 (although 2 and 3 need not be close enough to cause $f_{23}$ to be non-zero). The primary differences between unlinked and linked cluster contributions are:

1. The total number of unlinked terms is proportional to $N^4$, while the number of linked terms is proportional to $N^3$. This causes the former to be more important than the latter because they are more numerous.
2. The linked terms only become important at densities where there is a significant probability that three molecules occupy nearby regions of space. The unlinked terms, on the other hand, do not require that molecules 1 and 2 be anywhere near molecules 3 and 4. This also causes the unlinked terms to dominate, especially at low and moderate densities.

I should note that a similar observation was made in Chapter 6 when we discussed the configuration interaction and coupled-cluster expansions of electronic wave functions. That is, we noted that doubly excited configurations (analogous to $f_{IJ}$) are the most important contributions beyond the single determinant, and that quadruple excitations in the form of unlinked products of double excitations were next most important, not triple excitations. The unlinked nature in this case was related to the amplitudes of the quadruple excitations being products of the amplitudes of two double excitations. So, both in electronic structure and in liquid structure, one finds that pair correlations followed by unlinked pair correlations are the most important to consider.

Clearly, the cluster expansion approach to $Q_{\rm inter}$ can be carried to higher- and higher-level clusters (e.g., involving $f_{12} f_{34} f_{56}$ or $f_{12} f_{13} f_{34}$, etc.). Generally, one finds that the unlinked terms (e.g., $f_{12} f_{34} f_{56}$ in this example) are most important (because they are proportional to higher powers of $N$ and because they do not require more than binary collisions). It is most common, however, to employ a severely truncated expansion and to retain only the linked terms.
Doing so for $Q_{\rm inter}$ produces, at the lower levels:

$Q_{\rm inter} = 1 + \dfrac{1}{2} \Big(\dfrac{N}{V}\Big)^2 4\pi V \int f r^2 dr + \dfrac{1}{4} \Big(\dfrac{N}{V}\Big)^4 \Big[4\pi V \int f r^2 dr \Big]^2$

$+ \dfrac{1}{2} \Big(\dfrac{N}{V}\Big)^3 V (4\pi)^2 \int f_{12} f_{13} r_{12}^2 r_{13}^2 dr_{12} dr_{13}.$

One of the most common properties to compute using a partition function that includes molecular interactions in the cluster manner is the pressure, which is calculated as:

$p = kT \left(\dfrac{\partial\ln Q}{\partial V}\right)_{N,T}.$

Using $Q = Q_{\rm ideal} Q_{\rm inter}$ and inserting the above expression for $Q_{\rm inter}$ produces the following result for the pressure:

$\dfrac{pV}{NkT} = 1 + B_2 \Big(\dfrac{N}{V}\Big) + B_3 \Big(\dfrac{N}{V}\Big)^2 + \cdots,$

where the so-called virial coefficients $B_2$ and $B_3$ are defined as the factors proportional to $\Big(\dfrac{N}{V}\Big)$ and $\Big(\dfrac{N}{V}\Big)^2$, respectively. The second virial coefficient's expression in terms of the cluster integrals is:

$B_2 = -2\pi \int f r^2 dr = -2\pi \int \Big[\exp\Big(-\dfrac{U(r)}{kT}\Big) - 1\Big] r^2 dr.$

The third virial coefficient involves higher-order cluster integrals.

The importance of such cluster analyses is that they allow various thermodynamic properties (e.g., the pressure above) to be expressed as one contribution that would occur if the system consisted of non-interacting molecules and a second contribution that arises from the intermolecular forces. They thus allow experimental measurements of the deviation from ideal (i.e., non-interacting) behavior to provide a direct way to determine intermolecular potentials. For example, by measuring pressures at various $N/V$ values and various temperatures, one can determine $B_2$ and thus gain valuable information about the intermolecular potential $U$.

Contributors and Attributions

Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry

Integrated by Tomoyuki Hayashi (UC Davis)
Learning Objectives

In this Chapter, you should have learned about:

• Conventional and variational transition state theory.
• Classical trajectory and reaction-path Hamiltonian simulations of chemical reactions.
• Unimolecular RRKM theory.
• Time correlation function and wave packet propagation approaches.
• Surface hopping and Landau-Zener theories of non-adiabatic processes.
• Spectroscopic, beam, and other experimental approaches to probing chemical reaction rates.

Chemical dynamics is a field in which scientists study the rates and mechanisms of chemical reactions. It also involves the study of how energy is transferred among molecules as they undergo collisions in gas-phase or condensed-phase environments. Therefore, the experimental and theoretical tools used to probe chemical dynamics must be capable of monitoring the chemical identity and energy content (i.e., electronic, vibrational, and rotational state populations) of the reacting species. Moreover, because the rates of chemical reactions and energy transfer are of utmost importance, these tools must be capable of doing so on time scales over which these processes, which are often very fast, take place. Let us begin by examining many of the most commonly employed theoretical models for simulating and understanding the processes of chemical dynamics.

08: Chemical Dynamics

Transition State Theory

The most successful and widely employed theoretical approach for studying rates involving species undergoing reaction at or near thermal-equilibrium conditions is the transition state theory (TST) of the author's late colleague, Henry Eyring. This would not be a good way to model, for example, photochemical reactions in which the reactants do not reach thermal equilibrium before undergoing significant reaction progress. However, for most thermal reactions, it is remarkably successful.

In this theory, one views the reactants as undergoing collisions that act to keep all of their degrees of freedom (translational, rotational, vibrational, electronic) in thermal equilibrium. Among the collection of such reactant molecules, at any instant of time, some will have enough internal energy to access a transition state (TS) on the Born-Oppenheimer potential energy surface upon which the reaction takes place. Within TST, the rate of progress from reactants to products is then expressed in terms of the concentration of species that exist near the TS multiplied by the rate at which these species move through the TS region of the energy surface.

The concentration of species at the TS is, in turn, written in terms of the equilibrium constant expression of statistical mechanics discussed in Chapter 7. For example, for a bimolecular reaction $A+B \rightarrow C$ passing through a TS denoted $AB^*$, one writes the concentration (in molecules per unit volume) of $AB^*$ species in terms of the concentrations of A and of B and the respective partition functions as

$[AB^*] = \dfrac{(q_{AB}^*/V)}{(q_A/V)(q_B/V)} [A][B].$

There is, however, one aspect of the partition function of the TS species that is specific to this theory. The partition function $q_{AB}^*$ contains all of the usual translational, rotational, vibrational, and electronic partition functions that one would write down, as we did in Chapter 7, for a conventional AB molecule except for one modification. It does not contain a $\dfrac{\exp(-h\nu_j /2kT)}{1- \exp(-h\nu_j/kT)}$ vibrational contribution for motion along the one internal coordinate corresponding to the reaction path.
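To see what this modification amounts to in practice, the following minimal sketch (with hypothetical frequencies, not data from the text) evaluates a TS vibrational partition function as a product over only the $3N-7$ real-frequency modes, omitting the reaction-coordinate mode.

```python
# A minimal sketch: the TS vibrational partition function built from
# exp(-h nu/2kT)/(1 - exp(-h nu/kT)) over the 3N-7 real modes only.
# The mode frequencies below are hypothetical values.
import math

k_B, h = 1.380649e-23, 6.62607015e-34

def q_vib_ts(real_freqs_hz, T):
    """Pass only the 3N-7 real-frequency modes; the imaginary-frequency
    reaction-coordinate mode is omitted by construction."""
    q = 1.0
    for nu in real_freqs_hz:
        x = h * nu / (k_B * T)
        q *= math.exp(-x / 2.0) / (1.0 - math.exp(-x))
    return q

print(q_vib_ts([3.0e13, 5.0e13, 9.0e13], 300.0))
```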
As we discussed in Chapter 3, in the vicinity of the TS, the reaction path can be identified as that direction along which the PES has negative curvature; along all other directions, the energy surface is positively curved. For example, in Figure 8.1, a reaction path begins at Transition Structure B and is directed downhill. More specifically, if one knows the gradients $\{(\partial E/\partial q_k)\}$ and Hessian matrix elements

$H_{j,k} = \dfrac{\partial^2E}{\partial q_j\partial q_k}$

of the energy surface at the TS, one can express the variation of the potential energy along the $3N$ Cartesian coordinates $\{q_k\}$ of the molecule as follows:

$E(q_k) = E(0) + \sum_k \dfrac{\partial E}{\partial q_k} q_k + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k + \cdots,$

where $E(0)$ is the energy at the TS, and the $\{q_k\}$ denote displacements away from the TS geometry. Of course, at the TS, the gradients all vanish because this geometry corresponds to a stationary point. As we discussed in Chapter 3, the Hessian matrix $H_{j,k}$ has 6 zero eigenvalues whose eigenvectors correspond to overall translation and rotation of the molecule. This matrix has $3N-7$ positive eigenvalues whose eigenvectors correspond to the vibrations of the TS species, as well as one negative eigenvalue. The latter has an eigenvector whose components $\{q_k\}$ along the $3N$ Cartesian coordinates describe the direction of the reaction path as it begins its journey from the TS backward to reactants (when followed in one direction) and onward to products (when followed in the opposite direction). Once one moves a small amount along the direction of negative curvature, the reaction path is subsequently followed by taking infinitesimal steps downhill along the gradient vector $\textbf{g}$ whose $3N$ components are $\Big(\dfrac{\partial E}{\partial q_k}\Big)$. Note that once one has moved downhill away from the TS by taking the initial step along the negatively curved direction, the gradient no longer vanishes because one is no longer at the stationary point.

Returning to the TST rate calculation, one therefore is able to express the concentration $[AB^*]$ of species at the TS in terms of the reactant concentrations and a ratio of partition functions. The denominator of this ratio contains the conventional partition functions of the reactant molecules and can be evaluated as discussed in Chapter 7. However, the numerator contains the partition function of the TS species but with one vibrational component missing; that is, its vibrational factor is

$q_{\rm vib} = \prod_{j=1}^{3N-7} \left[\dfrac{\exp(-h\nu_j /2kT)}{1- \exp(-h\nu_j/kT)}\right].$

Other than this one missing vibrational factor, the TS's partition function is also evaluated as in Chapter 7. The motion along the reaction path coordinate contributes to the rate expression in terms of the frequency (i.e., how often) with which reacting flux crosses the TS region, given that the system is in near-thermal equilibrium at temperature $T$.

To compute the frequency with which trajectories cross the TS and proceed onward to form products, one imagines the TS as consisting of a narrow region along the reaction coordinate $s$; the width of this region we denote $\delta_s$. We next ask what the classical weighting factor is for a collision to have momentum $p_s$ along the reaction coordinate. Remembering our discussion of such matters in Chapter 7, we know that the momentum factor entering into the classical partition function for translation along the reaction coordinate is $(1/h) \exp(-p_s^2/2\mu kT) dp_s$.
Here, $\mu$ is the mass factor associated with the reaction coordinate $s$. We can express the rate or frequency at which such trajectories pass through the narrow region of width $\delta_s$ as $\dfrac{p_s}{\mu\delta_s}$, with $\dfrac{p_s}{\mu}$ being the speed of passage (cm s$^{-1}$) and $1/\delta_s$ being the inverse of the distance that defines the TS region. So, $\dfrac{p_s}{\mu\delta_s}$ has units of s$^{-1}$. In summary, we expect the rate of trajectories moving through the TS region to be

$\dfrac{1}{h} \exp\Big(-\dfrac{p_s^2}{2\mu kT}\Big) dp_s \dfrac{p_s}{\mu \delta_s}.$

However, we still need to integrate this over all values of $p_s$ that correspond to enough energy $p_s^2/2\mu$ to access the TS's energy (relative to that of the reactants), which we denote $E^*$. Moreover, we have to account for the fact that it may be that not all trajectories with kinetic energy equal to $E^*$ or greater pass on to form product molecules; some trajectories may pass through the TS but later recross the TS and return to produce reactants. Moreover, it may be that some trajectories with kinetic energy along the reaction coordinate less than $E^*$ can react by tunneling through the barrier.

The way we account for the fact that a reactive trajectory must have at least $E^*$ in energy along $s$ is to integrate over only values of $p_s$ greater than $\sqrt{2\mu E^*}$. To account for the fact that some trajectories with energies above $E^*$ may recross, we include a so-called transmission coefficient $\kappa$ whose value is between zero and unity. In the most elementary TST, tunneling is ignored. Putting all of these pieces together, we carry out the integration over $p_s$ just described to obtain:

$\int\int \dfrac{\kappa}{h} \exp\Big(-\dfrac{p_s^2}{2\mu kT}\Big) \dfrac{p_s}{\mu\delta_s} \delta_s\, dp_s,$

where the momentum is integrated from $p_s = \sqrt{2\mu E^*}$ to $\infty$ and the $s$-coordinate is integrated only over the small region $\delta_s$. If the transmission coefficient is factored out of the integral (treating it as a multiplicative factor), the integral over $p_s$ can be evaluated and yields the following:

$\kappa \dfrac{kT}{h} \exp\Big(-\dfrac{E^*}{kT}\Big).$

The exponential energy dependence is usually then combined with the partition function of the TS species that reflects this species' other $3N-7$ vibrational coordinates and momenta, and the reaction rate is then expressed as

$\text{Rate} = \kappa \dfrac{kT}{h} [AB^*] = \kappa \dfrac{kT}{h} \dfrac{q_{AB}^* /V}{(q_A/V)(q_B/V)} [A][B].$

This implies that the rate coefficient $k_{\rm rate}$ for this bimolecular reaction is given in terms of molecular partition functions by:

$k_{\rm rate} = \kappa \dfrac{kT}{h} \dfrac{q_{AB}^*/V}{(q_A/V)(q_B/V)},$

which is the fundamental result of TST. Once again we notice that ratios of partition functions per unit volume can be used to express ratios of species concentrations (in number of molecules per unit volume), just as appeared in earlier expressions for equilibrium constants as in Chapter 7.

The above rate expression undergoes only minor modifications when unimolecular reactions are considered. For example, in the hypothetical reaction $A \rightarrow B$ via the TS ($A^*$), one obtains

$k_{\rm rate} = \kappa \dfrac{kT}{h} \dfrac{q_A^*/V}{q_A/V},$

where again $q_A^*$ is a partition function of $A^*$ with one missing vibrational component.
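As a numerical illustration of this fundamental result, the sketch below assembles $k_{\rm rate}$ from user-supplied partition functions per unit volume. Here the $\exp(-E^*/kT)$ barrier factor is written out explicitly, so the TS partition function passed in should exclude it; all numerical inputs are hypothetical placeholders, not data from the text.

```python
# A minimal sketch of k_rate = kappa (kT/h) (q_AB*/V)/((q_A/V)(q_B/V)),
# with the exp(-E*/kT) barrier factor written out explicitly.
# All numerical inputs below are hypothetical placeholders.
import math

k_B, h = 1.380649e-23, 6.62607015e-34

def tst_bimolecular_rate(T, q_ts_per_V, q_A_per_V, q_B_per_V,
                         E_star, kappa=1.0):
    """k_rate in m^3 s^-1. q_ts_per_V must omit both the reaction-coordinate
    vibration and the exp(-E*/kT) factor, which is applied here."""
    return (kappa * (k_B * T / h)
            * (q_ts_per_V / (q_A_per_V * q_B_per_V))
            * math.exp(-E_star / (k_B * T)))

E_star = 40.0e3 / 6.022e23   # a 40 kJ/mol barrier, per molecule (J)
print(tst_bimolecular_rate(300.0, 1.0e32, 1.0e33, 1.0e33, E_star))
```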
Before bringing this discussion of TST to a close, I need to stress that this theory is not exact. It assumes that the reacting molecules are nearly in thermal equilibrium, so it is less likely to work for reactions in which the reactant species are prepared in highly non-equilibrium conditions. Moreover, it ignores tunneling by requiring all reactions to proceed through the TS geometry. For reactions in which a light atom (i.e., an H or D atom) is transferred, tunneling can be significant, so this conventional form of TST can produce substantial errors in such cases (however, there are straightforward approximations similar to those we discussed in Chapter 2 that can be used to make tunneling corrections to this rate expression). Nevertheless, TST remains the most widely used and successful theory of chemical reaction rates and can be extended to include tunneling and other corrections, as we now illustrate.

Variational Transition State Theory

Within the TST expression for the rate constant of a bimolecular reaction, $k_{\rm rate} = \kappa\dfrac{kT}{h} \dfrac{q_{AB}^*/V}{(q_A/V)(q_B/V)}$, or of a unimolecular reaction, $k_{\rm rate} = \kappa\dfrac{kT}{h} \dfrac{q_A^*/V}{q_A/V}$, the height $E^*$ of the barrier on the potential energy surface appears in the TS species' partition function $q_{AB}^*$ or $q_A^*$, respectively. In particular, the TS partition function contains a factor of the form $\exp(-E^*/kT)$ in which the Born-Oppenheimer electronic energy of the TS relative to that of the reactant species appears. This energy $E^*$ is the value of the potential energy $E(S)$ at the TS geometry, which we denote $S_0$.

It turns out that the conventional TST approximation to $k_{\rm rate}$ over-estimates reaction rates because it assumes all trajectories that cross the TS proceed onward to products, unless the transmission coefficient is included to correct for this. In the variational transition state theory (VTST), one does not evaluate the ratio of partition functions appearing in $k_{\rm rate}$ at $S_0$, but one first determines at what geometry $S^*$ the TS partition function (i.e., $q_{AB}^*$ or $q_A^*$) is smallest. Because this partition function is a product of (i) the $\exp(-E(S)/kT)$ factor as well as (ii) 3 translational, 3 rotational, and $3N-7$ vibrational partition functions (which depend on $S$), the value of $S$ for which this product is smallest need not be the conventional TS value $S_0$. What this means is that the location $S^*$ along the reaction path at which the free energy reaches its maximum is not the same as the location $S_0$ where the Born-Oppenheimer electronic energy $E(S)$ has its saddle point. This interpretation of how $S^*$ and $S_0$ differ can be appreciated by recalling that partition functions are related to the Helmholtz free energy $A$ by $q = \exp(-A/kT)$; so determining the value of $S$ where $q$ reaches a minimum is equivalent to finding that $S$ where the free energy $A$ is at a maximum.

So, in VTST, one adjusts the dividing surface (through the location of the reaction coordinate $S$) to first find that value $S^*$ where $k_{\rm rate}$ has a minimum. One then evaluates both $E(S^*)$ and the other components of the TS species' partition functions at this value $S^*$. Finally, one then uses the $k_{\rm rate}$ expressions given above, but with $S$ taken at $S^*$. This is how VTST computes reaction rates in a somewhat different manner than does the conventional TST. As with TST, the VTST, in the form outlined above, does not treat tunneling and the fact that not all trajectories crossing $S^*$ proceed to products.
These corrections still must be incorporated as an add-on to this theory (i.e., in the $\kappa$ factor for recrossing and through tunneling corrections) to achieve high accuracy for reactions involving light species (recall from Chapter 2 that tunneling probabilities depend exponentially on the mass of the tunneling particle). I refer the reader to the web page of Prof. Don Truhlar, who has been one of the pioneers of VTST, for further details.

Reaction Path Hamiltonian Theory

Let us review what the reaction path is, as defined earlier in Chapter 3. It is a path that

1. begins at a transition state (TS) and evolves along the direction of negative curvature on the potential energy surface (as found by identifying the eigenvector of the Hessian matrix $H_{j,k} = \dfrac{\partial^2E}{\partial q_k\partial q_j}$ that belongs to the negative eigenvalue);
2. moves further downhill along the gradient vector $\textbf{g}$ whose components are $g_k = \dfrac{\partial E}{\partial q_k}$;
3. terminates at the geometry of either the reactants or products (depending on whether one began moving away from the TS forward or backward along the direction of negative curvature).

The individual steps along the reaction coordinate can be labeled $S_0$, $S_1$, $S_2$, $\cdots$, $S_P$ as they evolve from the TS to the products (labeled $S_P$) and $S_{-R}$, $S_{-R+1}$, $\cdots$, $S_0$ as they evolve from the reactants ($S_{-R}$) to the TS. If these steps are taken in very small (infinitesimal) lengths, they form a continuous path and a continuous coordinate that we label $S$.

At any point $S$ along a reaction path, the Born-Oppenheimer potential energy surface $E(S)$, its gradient components $g_k(S) = \dfrac{\partial E(S)}{\partial q_k}$, and its Hessian components $H_{k,j}(S) = \dfrac{\partial^2E(S)}{\partial q_k\partial q_j}$ can be evaluated in terms of derivatives of $E$ with respect to the $3N$ Cartesian coordinates of the molecule. However, when one carries out reaction path dynamics, one uses a different set of coordinates for reasons that are similar to those that arise in the treatment of normal modes of vibration as given in Chapter 3. In particular, one introduces $3N$ mass-weighted coordinates $x_j = q_j \sqrt{m_j}$ that are related to the $3N$ Cartesian coordinates $q_j$ in the same way as we saw in Chapter 3.

The gradient and Hessian matrices along these new coordinates $\{x_j\}$ can be evaluated in terms of the original Cartesian counterparts:

$g_k'(S) = \dfrac{g_k(S)}{\sqrt{m_k}},$

$H_{j,k}' = \dfrac{H_{j,k}}{\sqrt{m_jm_k}}.$

The eigenvalues $\{\omega_k^2\}$ and eigenvectors $\{\textbf{v}_k\}$ of the mass-weighted Hessian $H'$ can then be determined. Upon doing so, one finds

1. 6 zero eigenvalues whose eigenvectors describe overall rotation and translation of the molecule;
2. $3N-7$ positive eigenvalues $\{\omega_K^2\}$ and eigenvectors $\{\textbf{v}_K\}$ along which the gradient $\textbf{g}$ has zero (or nearly so) components;
3. and one eigenvalue $\omega_S^2$ (that may be positive, zero, or negative) along whose eigenvector $\textbf{v}_S$ the gradient $\textbf{g}$ has its largest component.

The one unique direction along $\textbf{v}_S$ gives the direction of evolution of the reaction path (in these mass-weighted coordinates). All other directions (i.e., within the space spanned by the $3N-7$ other vectors $\{\textbf{v}_K\}$) possess (nearly) zero gradient component and positive curvature. This means that at any point $S$ on the reaction path being discussed:

1. one is at or near a local minimum along all $3N-7$ directions $\{\textbf{v}_K\}$ that are transverse to the reaction path direction (i.e., the gradient direction);
2. one can move to a neighboring point on the reaction path by moving a small (infinitesimal) amount along the gradient;
3. in terms of the $3N-6$ mass-weighted Hessian eigenmode directions ($\{\textbf{v}_K\}$ and $\textbf{v}_S$), the potential energy surface can be approximated, in the neighborhood of each such point on the reaction path $S$, by expanding it in powers of displacements away from this point. If these displacements are expressed as components $\delta X_K$ along the $3N-7$ eigenvectors $\textbf{v}_K$ and $\delta S$ along the gradient direction $\textbf{v}_S$, one can write the Born-Oppenheimer potential energy surface locally as:

$E = E(S) + \textbf{g}\cdot \textbf{v}_S \delta S + \dfrac{1}{2} \omega_S^2 \delta S^2 + \sum_{K = 1}^{3N-7} \dfrac{1}{2} \omega_K^2 \delta X_K^2.$

Within this local quadratic approximation, $E$ describes a sum of harmonic potentials along each of the $3N-7$ modes transverse to the reaction path direction. Along the reaction path, $E$ appears with a non-zero gradient $\textbf{g}\cdot\textbf{v}_S$ and a curvature $\dfrac{1}{2} \omega_S^2$ that may be positive, negative, or zero.

The eigenmodes of the local (i.e., in the neighborhood of any point $S$ along the reaction path) mass-weighted Hessian decompose the $3N-6$ internal coordinates into $3N-7$ along which $E$ is harmonic and one ($S$) along which the reaction evolves. In terms of these same coordinates, the kinetic energy $T$ can also be written, and thus the classical Hamiltonian $H = T+V$ can be constructed. Because the coordinates we use are mass-weighted, in Cartesian form the kinetic energy $T$ contains no explicit mass factors:

$T = \dfrac{1}{2} \sum_j m_j \left(\dfrac{dq_j}{dt}\right)^2 = \dfrac{1}{2} \sum_j \left(\dfrac{dx_j}{dt}\right)^2.$

This means that the momenta conjugate to each (mass-weighted) coordinate $x_j$, obtained in the usual way as

$p_j = \dfrac{\partial[T-V]}{\partial(dx_j/dt)} = \dfrac{dx_j}{dt},$

all have identical (unit) mass factors associated with them.

To obtain the working expression for the reaction path Hamiltonian (RPH), one must transform the above equation for the kinetic energy $T$ by replacing the $3N$ Cartesian mass-weighted coordinates $\{x_j\}$ by

1. the $3N-7$ eigenmode displacement coordinates $\delta X_j$,
2. the reaction path displacement coordinate $\delta S$, and
3. 3 translational and 3 rotational coordinates.

The three translational coordinates can be separated and ignored in further consideration (because center-of-mass energy is conserved). The 3 rotational coordinates do not enter into the potential $E$, but they do appear in $T$. However, it is most common to ignore their effects on the dynamics that occurs in the internal coordinates; this amounts to ignoring the effects of overall centrifugal forces on the reaction dynamics. We will proceed with this approximation in mind, although the reader should keep in mind that doing so is an approximation that one might have to revisit in more sophisticated treatments.

Although it is tedious to perform the coordinate transformation of $T$ outlined above, it has been done in the paper W. H. Miller, N. C. Handy and J. E. Adams, Reaction Path Hamiltonian for Polyatomic Molecules, J. Chem. Phys.
72, 99-112 (1980), and results in the following form for the RPH:

$H = \sum_{K=1}^{3N-7} \dfrac{1}{2}\left[p_K^2 + \delta X_K^2 \omega_K^2(S)\right] + E(S) + \dfrac{1}{2} \dfrac{\left[p_S - \sum_{K,K'=1}^{3N-7} p_K \delta X_{K'} B_{K,K'}\right]^2}{1+F}$

where

$(1+F) = \left[1 + \sum_{K=1}^{3N-7} \delta X_K B_{K,S}\right]^2.$

In the absence of the so-called dynamical coupling factors $B_{K,K'}$ and $B_{K,S}$, this expression for $H$ describes

1. $3N-7$ harmonic-oscillator Hamiltonians $\dfrac{1}{2}\left[p_K^2 + \delta X_K^2\omega_K^2(S)\right]$, each of which has a locally defined frequency $\omega_K(S)$ that varies along the reaction path (i.e., is $S$-dependent);
2. a Hamiltonian $\dfrac{1}{2} p_S^2 + E(S)$ for motion along the reaction coordinate $S$ with $E(S)$ serving as the potential.

In this limit (i.e., with the $B$ factors turned off), the reaction dynamics can be simulated in what is termed a vibrationally adiabatic manner by

1. placing each transverse oscillator into a quantum level $v_K$ that characterizes the reactant's population of this mode;
2. assigning an initial momentum $p_S(0)$ to the reaction coordinate that is characteristic of the collision to be simulated (e.g., $p_S(0)$ could be sampled from a Maxwell-Boltzmann distribution if a thermal reaction is of interest, or $p_S(0)$ could be chosen equal to the mean collision energy of a beam-collision experiment);
3. time-evolving the coordinate $S$ and momentum $p_S$ using the above Hamiltonian, assuming that each transverse mode remains in the quantum state $v_K$ that it had when the reaction began.

The assumption that $v_K$ remains fixed, which is why this model is called vibrationally adiabatic, does not mean that the energy content of the $K^{\rm th}$ mode remains fixed because the frequencies $\omega_K(S)$ vary as one moves along the reaction path. As a result, the kinetic energy along the reaction coordinate $\dfrac{1}{2} p_S^2$ will change both because $E(S)$ varies along $S$ and because the total transverse vibrational energy $\sum_{K=1}^{3N-7} \hbar\omega_K(S) \left[v_K + \dfrac{1}{2}\right]$ varies along $S$.

Let's return now to the RPH theory in which the dynamical couplings among motion along the reaction path and the modes transverse to it are included. In the full RPH, the terms $B_{K,K'}(S)$ couple modes $K$ and $K'$, while $B_{K,S}(S)$ couples the reaction path to mode $K$. These couplings express how energy can flow among these various degrees of freedom. Explicit forms for the $B_{K,K'}$ and $B_{K,S}$ factors are given in terms of the eigenvectors {$\textbf{v}_K, \textbf{v}_S$} of the mass-weighted Hessian matrix as follows:

$B_{K,K'} = \langle d\textbf{v}_K/dS| \textbf{v}_{K'}\rangle; \quad B_{K,S} = \langle d\textbf{v}_K/dS | \textbf{v}_S\rangle$

where the derivatives of the eigenvectors {$d\textbf{v}_K/dS$} are usually computed by taking the eigenvectors at two neighboring points $S$ and $S'$ along the reaction path:

$\frac{d\textbf{v}_K}{dS} = \frac{\textbf{v}_K(S') - \textbf{v}_K(S)}{S'-S}.$

In summary, once a reaction path has been mapped out, one can compute, along this path, the mass-weighted Hessian matrix and the potential $E(S)$. Given these quantities, all terms in the RPH

$H = \sum_{K=1}^{3N-7} \dfrac{1}{2}\left[p_K^2 +\delta X_K^2 \omega_K^2(S)\right] + E(S) + \dfrac{1}{2} \dfrac{\left[p_S - \sum_{K,K'=1}^{3N-7} p_K \delta X_{K'} B_{K,K'}\right]^2}{1+F}$

are in hand. This knowledge can, subsequently, be used to perform the propagation of a set of classical coordinates and momenta forward in time.
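To make the finite-difference construction of the $B$ couplings concrete, here is a minimal Python sketch (the function and variable names are mine, not those of any standard package) that builds $B_{K,K'} = \langle d\textbf{v}_K/dS|\textbf{v}_{K'}\rangle$ from mass-weighted Hessian eigenvectors at two neighboring path points. One practical detail deserves a comment: eigenvectors carry an arbitrary overall sign, so the eigenvectors at $S'$ must be phase-matched to those at $S$ before differencing.

```python
import numpy as np

def normal_modes(H_mw):
    """Eigenvalues/eigenvectors of a (symmetric) mass-weighted Hessian;
    the columns of V are the orthonormal eigenvectors v_K."""
    w2, V = np.linalg.eigh(H_mw)
    return w2, V

def match_phases(V_ref, V):
    """Eigenvectors are defined only up to an overall sign; align each
    column of V with the corresponding column of V_ref so that the
    finite-difference derivative below is meaningful."""
    signs = np.sign(np.sum(V_ref * V, axis=0))
    signs[signs == 0] = 1.0
    return V * signs

def B_matrix(V_S, V_Sprime, dS):
    """Finite-difference couplings B[K, Kp] = <d v_K/dS | v_Kp> built
    from eigenvectors at neighboring path points S and S' = S + dS."""
    V_Sprime = match_phases(V_S, V_Sprime)
    dV_dS = (V_Sprime - V_S) / dS
    return dV_dS.T @ V_S

# toy demo with a small symmetric "Hessian" that changes slowly with S
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); H0 = A + A.T
dA = rng.normal(size=(3, 3)); dH = 0.01 * (dA + dA.T)
dS = 0.01
_, V0 = normal_modes(H0)
_, V1 = normal_modes(H0 + dH)
print(B_matrix(V0, V1, dS))
```

In a real RPH calculation one would first project out the translational and rotational directions and identify the reaction-path eigenvector $\textbf{v}_S$ (the one with the largest gradient overlap); the column of $B$ belonging to that mode then supplies the $B_{K,S}$ factors.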
For any initial (i.e., $t = 0$) momenta $p_S$ and $p_K$, one can use the above form for $H$ to propagate the coordinates {$\delta X_K, \delta S$} and momenta {$p_K, p_S$} forward in time. In this manner, one can use the RPH theory to follow the time evolution of a chemical reaction that begins ($t = 0$) with coordinates and momenta characteristic of reactants under specified laboratory conditions and moves through a TS and onward to products. Once time has evolved long enough for product geometries to be realized, one can interrogate the values of $\dfrac{1}{2}\left[p_K^2 + \delta X_K^2 \omega_K^2(S)\right]$ to determine how much energy has been deposited into various product-molecule vibrations and of $\dfrac{1}{2} p_S^2$ to see what the final kinetic energy of the product fragments is. Of course, one also monitors what fraction of the trajectories, whose initial conditions are chosen to represent some experimental situation, progress to product geometries vs. returning to reactant geometries. In this way, one can determine the overall reaction probability.

Classical Dynamics Simulation of Rates

One can also perform classical dynamics simulations of reactive events without using the reaction path Hamiltonian. Following a procedure like that outlined in Chapter 7, where classical condensed-media MD simulations were discussed, one can time-evolve the Newton equations of motion of the molecular reaction species using, for example, the Cartesian coordinates of each atom in the system and with either a Born-Oppenheimer surface or a parameterized functional form (e.g., a force field). Of course, it is essential that whatever function one uses must be able to accurately describe the reactive surface, especially near the transition state (recall that many force fields do not do so because they do not account for bond breaking and forming). With each such coordinate having an initial velocity $(dq/dt)_0$ and an initial value $q_0$, one then uses Newton's equations written for a time step of duration $\delta t$ to propagate $q$ and $dq/dt$ forward in time according, for example, to the following first-order propagation formula:

$q(t+\delta t) = q_0 + (dq/dt)_0\delta t$

$dq/dt (t+\delta t) = (dq/dt)_0-\delta t\left[(∂E/∂q)_0/m_q\right]$

or using the Verlet algorithm described in Chapter 7. Here $m_q$ is the mass factor connecting the velocity $dq/dt$ and the momentum $p_q$ conjugate to the coordinate $q$:

$p_q = m_q \dfrac{dq}{dt},$

and $-(∂E/∂q)_0$ is the force along the coordinate $q$ at the initial geometry $q_0$. By applying the time-propagation process, one generates a set of new coordinates $q(t+\delta t)$ and new velocities $dq/dt(t+\delta t)$ appropriate to the system at time $t+\delta t$. Using these new coordinates and momenta as $q_0$ and $(dq/dt)_0$ and evaluating the forces $-(∂E/∂q)_0$ at these new coordinates, one can again use the Newton equations to generate another finite-time-step set of new coordinates and velocities. Through the sequential application of this process, one generates a sequence of coordinates and velocities that simulate the system's dynamical behavior. In using this kind of classical trajectory approach to study chemical reactions, it is important to choose the initial coordinates and momenta in a way that is representative of the experimental conditions that one is attempting to simulate. The tools of statistical mechanics discussed in Chapter 7 guide us in making these choices and allow us efficient methods (e.g., the Monte Carlo technique) for sampling such initial values.
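As a concrete illustration of the first-order propagation formula quoted above, here is a minimal Python sketch (names are mine; a harmonic potential stands in for the true Born-Oppenheimer surface). In practice one would prefer the Verlet scheme of Chapter 7, which is more stable over long times.

```python
import numpy as np

def euler_step(q, v, grad_E, m, dt):
    """One first-order step of Newton's equations:
       q(t+dt) = q + v*dt
       v(t+dt) = v - dt*(dE/dq)/m
    grad_E(q) must return the gradient of the potential at q."""
    q_new = q + v * dt
    v_new = v - dt * grad_E(q) / m
    return q_new, v_new

# toy example: harmonic surface E = 0.5*k*q^2, so dE/dq = k*q
k, m, dt = 1.0, 1.0, 0.01
grad_E = lambda q: k * q
q, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):          # repeated application generates the trajectory
    q, v = euler_step(q, v, grad_E, m, dt)
print(q, v)
```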
When one attempts, for example, to simulate the reactive collisions of an A atom with a BC molecule to produce AB + C, it is not appropriate to consider a single classical (or quantal) collision between A and BC. Why? Because in any laboratory setting,

1. The A atoms are probably moving toward the BC molecules with a distribution of relative speeds. That is, within the sample of molecules (which likely contains $10^{10}$ or more molecules), some A + BC pairs have low relative kinetic energies when they collide, and others have higher relative kinetic energies. There is a probability distribution $P(E_{KE})$ for this relative kinetic energy that must be properly sampled in choosing the initial conditions.
2. The BC molecules may not all be in the same rotational ($J$) or vibrational ($v$) state. There is a probability distribution function $P(J,v)$ describing the fraction of BC molecules that are in a particular $J$ state and a particular $v$ state. An ensemble of initial values of the BC molecule's internal vibrational coordinate and momentum as well as its orientation and rotational angular momentum must be selected to represent this $P(J,v)$.
3. When the A and BC molecules collide with a relative motion velocity vector $v$, they do not all hit head on. Some collisions have small impact parameter $b$ (the closest distance from A to the center of mass of BC if the collision were to occur with no attractive or repulsive forces), and some have large $b$-values (see Figure 8.2). The probability for impact parameters between $b$ and $b + db$ is $P(b)\,db = 2\pi b\,db$, which is simply a statement of the geometrical fact that larger $b$-values have more geometrical volume element than smaller $b$-values.

So, to simulate the entire ensemble of collisions that occur between A atoms and BC molecules in various $J$, $v$ states and having various relative kinetic energies $E_{KE}$ and impact parameters $b$, one must:

1. run classical trajectories (or quantum propagations) for a large number of $J$, $v$, $E_{KE}$, and $b$ values,
2. with each such trajectory assigned an overall weighting (or importance factor) of

$P_{\rm total} = P(E_{KE})\,P(J,v)\, 2\pi b\,db.$

After such an ensemble of trajectories representative of an experimental condition has been carried out, one has available a great deal of data. This data includes knowledge of what fraction of the trajectories produced final geometries characteristic of products, so the net reaction probability can be calculated. In addition, the kinetic and potential energy content of the internal (vibrational and rotational) modes of the product molecules can be interrogated and used to compute histograms giving probabilities for observing products in these states. This is how classical dynamics simulations allow us to study chemical reactions and/or energy transfer.

RRKM Theory

Another theory that is particularly suited for studying unimolecular decomposition reactions is named after the four scientists who developed it: Rice, Ramsperger, Kassel, and Marcus. To use this theory, one imagines an ensemble of molecules that have been activated to a state in which they possess a specified total amount of internal energy $E$, of which an amount $E^*_{\rm rot}$ exists as rotational energy and the remainder as internal vibrational energy. The mechanism by which the molecules become activated could involve collisions or photochemistry.
It does not matter as long as enough time has passed to permit one to reasonably assume that these molecules have the energy $E-E^*_{\rm rot}$ distributed randomly among all their internal vibrational degrees of freedom. When considering thermally activated unimolecular decomposition of a molecule, the implications of such assumptions are reasonably clear. For photochemically activated unimolecular decomposition processes, one usually also assumes that the molecule has undergone radiationless relaxation and returned to its ground electronic state but in a quite vibrationally hot situation. That is, in this case, the molecule contains excess vibrational energy equal to the energy of the optical photon used to excite it. Finally, when applied to bimolecular reactions, one assumes that collision between the two fragments results in a long-lived complex. The lifetime of this intermediate must be long enough to allow the energy $E-E^*_{\rm rot}$, which is related to the fragments' collision energy, to be randomly distributed among all vibrational modes of the collision complex. For bimolecular reactions that proceed directly (i.e., without forming a long-lived intermediate), one does not employ RRKM-type theories because their primary assumption of energy randomization almost certainly would not be valid in such cases.

The RRKM expression of the unimolecular rate constant for activated molecules A* (i.e., either a long-lived complex formed in a bimolecular collision or a hot molecule) dissociating to products through a transition state, $A^* \rightarrow TS \rightarrow P$, is

$k_{\rm rate} = \dfrac{G(E-E_0 -E'_{\rm rot})}{N(E-E^*_{\rm rot})\,h}.$

Here, the total energy $E$ is related to the energies of the activated molecules by

$E = E^*_{\rm rot} + E^*_{\rm vib}$

where $E^*_{\rm rot}$ is the rotational energy of the activated molecule and $E^*_{\rm vib}$ is the vibrational energy of this molecule. This same energy $E$ must, of course, appear in the transition state where it is decomposed as an amount $E_0$ needed to move from A* to the TS (i.e., the energy needed to reach the barrier) and vibrational ($E'_{\rm vib}$), translational ($E'_{\rm trans}$, along the reaction coordinate), and rotational ($E'_{\rm rot}$) energies:

$E = E_0 + E'_{\rm vib} + E'_{\rm trans} + E'_{\rm rot}.$

In the rate coefficient expression, $G(E-E_0 -E'_{\rm rot})$ is the total sum of internal vibrational quantum states that the transition state possesses having energies up to and including $E-E_0 -E'_{\rm rot}$. This energy is the total energy $E$ but with the activation energy $E_0$ removed and the overall rotational energy $E'_{\rm rot}$ of the TS removed. The quantity $N(E-E^*_{\rm rot})$ is the density of internal vibrational quantum states (excluding the mode describing the reaction coordinate) that the activated molecule possesses having an energy between $E-E^*_{\rm rot}$ and $E-E^*_{\rm rot} + \delta E$. In this expression, the energy $E-E^*_{\rm rot}$ is the total energy $E$ with the rotational energy $E^*_{\rm rot}$ of the activated species removed.
In the most commonly employed version of RRKM theory, the rotational energies of the activated molecules $E^*_{\rm rot}$ and of the TS $E'_{\rm rot}$ are assumed to be related by

$E^*_{\rm rot} - E'_{\rm rot} = J(J+1) \dfrac{h^2}{8\pi^2} \left(\frac{1}{I^*} - \frac{1}{I'}\right) = E^*_{\rm rot} \left(1 -\frac{I^*}{I'}\right).$

Here $I^*$ and $I'$ are the average (taken over the three eigenvalues of the moment of inertia tensors) moments of inertia of the activated molecules and TS species, respectively. The primary assumption embodied in the above relationship is that the rotational angular momenta of the activated and TS species are the same, so their rotational energies can be related, as expressed in the equation, to changes in geometries as reflected in their moments of inertia. Because RRKM theory assumes that the vibrational energy is randomly distributed, its fundamental rate coefficient equation

$k_{\rm rate} = \dfrac{G(E-E_0 -E'_{\rm rot})}{N(E-E^*_{\rm rot})\,h}$

depends on the total energy $E$, the energy $E_0$ required to access the TS, and the amount of energy contained in the rotational degrees of freedom that is thus not available to the vibrations. To implement an RRKM rate coefficient calculation, one must know

1. the total energy $E$ available,
2. the barrier energy $E_0$,
3. the geometries (and hence the moments of inertia $I^*$ and $I'$) of the activated molecules and of the TS, respectively,
4. the rotational energy $E^*_{\rm rot}$ of the activated molecules, as well as
5. all $3N-6$ vibrational energies of the activated molecules and all $3N-7$ vibrational energies of the TS (i.e., excluding the reaction coordinate).

The rotational energy of the TS species can then be related to that of the activated molecules through

$E^*_{\rm rot} - E'_{\rm rot} = E^*_{\rm rot} \left(1 -\frac{I^*}{I'}\right).$

To simulate an experiment in which the activated molecules have a thermal distribution of rotational energies, the RRKM rate constant is computed for a range of $E^*_{\rm rot}$ values and then averaged over $E^*_{\rm rot}$ using the thermal Boltzmann population

$(2J+1) \exp\Big(-J(J+1) \dfrac{h^2}{8\pi^2I^*kT}\Big)$

as a weighting factor. This can be carried out, for example, using the Monte Carlo process for selecting rotational $J$ values. This then produces a rate constant for any specified total energy $E$. Alternatively, to simulate experiments in which the activated species are formed in bimolecular collisions at a specified energy $E$, the RRKM rate coefficient is computed for a range of $E^*_{\rm rot}$ values with each $E^*_{\rm rot}$ related to the collisional impact parameter $b$ that we discussed earlier. In that case, the collisional angular momentum $J$ is given as $J = \mu vb$, where $v$ is the relative collision speed (related to the collision energy) and $\mu$ is the reduced mass of the two colliding fragments. Again using

$E^*_{\rm rot} - E'_{\rm rot} = E^*_{\rm rot} \left(1 -\frac{I^*}{I'}\right)$

the TS rotational energy can be related to that of the activated species. Finally, the RRKM rate coefficient is evaluated by averaging the result over a series of impact parameters $b$ (each of which implies a $J$ value and thus an $E^*_{\rm rot}$) with $2\pi b\,db$ as the weighting factor. The evaluation of the sum of states $G(E-E_0 -E'_{\rm rot})$ and the density of states $N(E-E^*_{\rm rot})$ that appear in the RRKM expression is usually carried out using a state-counting algorithm such as that implemented by Beyer and Swinehart in Commun. Assoc. Comput. Machin. 16, 372 (1973).
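For readers who wish to see the counting made concrete, here is a compact Python transcription of the Beyer-Swinehart direct-count idea (the workings of the algorithm are described in the next paragraph; the function names and the illustrative frequencies are mine, not values for any real molecule).

```python
import numpy as np

def beyer_swinehart(freqs_cm, E_max_cm, dE_cm=1.0):
    """Direct count of harmonic-oscillator vibrational states.
    freqs_cm : harmonic frequencies in cm^-1
    Returns (grid, counts), where counts[i] is the number of states
    whose energy (above the zero point) falls in bin i of width dE_cm."""
    nbins = int(E_max_cm / dE_cm) + 1
    counts = np.zeros(nbins)
    counts[0] = 1.0                       # the zero-quantum state
    for nu in freqs_cm:
        step = max(1, int(round(nu / dE_cm)))
        for i in range(step, nbins):      # fold one more mode into the count
            counts[i] += counts[i - step]
    grid = np.arange(nbins) * dE_cm
    return grid, counts

def G_and_N(freqs_cm, E_cm, dE_cm=1.0):
    """Sum of states G(E) and density of states N(E) = dG/dE at energy E."""
    grid, counts = beyer_swinehart(freqs_cm, E_cm, dE_cm)
    G = np.cumsum(counts)
    N = counts / dE_cm                    # states per cm^-1
    return G[-1], N[-1]

# hypothetical set of TS frequencies (cm^-1) and available energy
freqs = [3000.0, 1500.0, 1200.0]
print(G_and_N(freqs, E_cm=10000.0))
```

With $G$ evaluated at $E-E_0-E'_{\rm rot}$ for the TS frequencies and $N$ at $E-E^*_{\rm rot}$ for the activated molecule's frequencies, the two outputs can be combined as $k_{\rm rate} = G/(N\,h)$ in the RRKM expression given above.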
This algorithm uses knowledge of the $3N-6$ harmonic vibrational frequencies of the activated molecules and the $3N-7$ frequencies of the TS and determines how many ways a given amount of energy can be distributed among these modes. By summing over all such distributions for energy varying from zero to $E$, the algorithm determines $G(E)$. By taking the difference $G(E+\delta E) - G(E)$, it determines $N(E)\delta E$. Professor Bill Hase has been one of the early pioneers involved in applying RRKM theory to chemical processes.

Correlation Function Expressions for Rates

Recall from Chapter 6 that rates of photon absorption can, in certain circumstances, be expressed either in terms of squares of transition dipole matrix elements connecting each initial state $\Phi_i$ to each final state $\Phi_f$,

$| \textbf{E}_0 \cdot \langle \Phi_f | \boldsymbol{\mu} | \Phi_i \rangle |^2$

or in terms of the equilibrium average of the product of a transition dipole vector at time $t=0$ dotted into this same vector at another time $t$

$\sum_i \rho_i \langle \Phi_i | \textbf{E}_0 \cdot \boldsymbol{\mu}(0)\; \textbf{E}_0 \cdot \boldsymbol{\mu}(t) | \Phi_i \rangle .$

That is, these rates can be expressed either in a state-to-state manner or in a time-dependent correlation function framework. In Chapter 7, this same correlation function approach was examined further. In an analogous fashion, it is possible to express chemical reaction rate constants in a time-domain language, again using time correlation functions. The TST (or VTST) and RRKM expressions for the rate constant $k_{\rm rate}$ all involve, through the partition functions or state densities, the reactant and transition-state energy levels and degeneracies. These theories are therefore analogs of the state-to-state photon-absorption rate equations. To make the connection between the state-to-state and time-correlation function expressions, one can begin with a classical expression for the rate constant given below:

$k(T)=\frac{1}{Q_r(2\pi \hbar)^L}\int dp\,dq\, e^{-\beta H(q,p)} F(p,q) \chi(p,q)$

Here

• $Q_r$ is the partition function of the reactant species,
• $L$ is the number of coordinates and momenta upon which the Hamiltonian $H(\textbf{p},\textbf{q})$ depends, and
• $\beta$ is $1/kT$.

The flux factor $F$ and the reaction probability $\chi$ are defined in terms of a dividing surface which could, for example, be a plane perpendicular to the reaction coordinate $S$ and located along the reaction path that was discussed earlier in this Chapter in Section 8.1.3. Points on such a surface can be defined by specifying one condition that the $L$ coordinates {$q_j$} must obey, and we write this condition as

$f(\textbf{q}) = 0.$

Points lying where $f(\textbf{q}) < 0$ are classified as lying in the reactant region of coordinate space, while those lying where $f > 0$ are in the product region. For example, if the dividing surface is defined as being a plane perpendicular to the reaction path, the function $f$ can be written as:

$f(\textbf{q}) = S(\textbf{q}) - S_0.$

Here, $S$ is the reaction coordinate (which, of course, depends on all of the $q$ variables) and $S_0$ is the value of $S$ at the dividing surface. If the dividing surface is placed at the transition state on the energy surface, $S_0$ vanishes because the transition state is then, by convention, the origin of the reaction coordinate. So, now we see how the dividing surface can be defined, but how are the flux $F$ and probability $\chi$ constructed?
The flux factor $F$ is defined in terms of the dividing surface function $f(\textbf{q})$ as follows:

$F(\textbf{p},\textbf{q}) = \frac{d h(f(\textbf{q}))}{dt}$

$= \dfrac{dh}{df} \dfrac{df}{dt}$

$=\dfrac{dh}{df} \sum_j \dfrac{∂f}{∂q_j} \dfrac{dq_j}{dt}$

$= \delta(f(\textbf{q})) \sum_j \dfrac{∂f}{∂q_j} \dfrac{dq_j}{dt}.$

Here, $h(f(\textbf{q}))$ is the Heaviside step function ($h(x) = 1$ if $x>0$; $h(x) = 0$ if $x < 0$), whose derivative $dh(x)/dx$ is the Dirac delta function $\delta(x)$, and the other identities follow by using the chain rule. When the dividing surface is defined in terms of the reaction path coordinate $S$ as introduced earlier (i.e., $f(\textbf{q}) = S - S_0$), the factor $\sum_j \dfrac{∂f}{∂q_j} \dfrac{dq_j}{dt}$ contains only one term when the $L$ coordinates {$q_j$} are chosen, as in the reaction path theory, to be the reaction coordinate $S$ and $L-1$ coordinates {$q'_j$} perpendicular to the reaction path. For such a choice, one obtains

$\sum_j \dfrac{∂f}{∂q_j} \dfrac{dq_j}{dt} = \frac{dS}{dt} = \dfrac{P_S}{m_S}$

where $P_S$ is the momentum along $S$ and $m_S$ is the mass factor associated with $S$ in the reaction path Hamiltonian. So, in this case, the total flux factor $F$ reduces to:

$F(\textbf{p},\textbf{q}) = \delta(S-S_0) \dfrac{P_S}{m_S}.$

We have seen exactly this construct before in Section 8.1.2 where the TST expression for the rate coefficient was developed. The reaction probability factor $\chi(\textbf{p},\textbf{q})$ is defined in terms of those trajectories that evolve, at long time $t \rightarrow \infty$, onto the product side of the dividing surface; such trajectories obey

$\chi(\textbf{p},\textbf{q}) = \lim_{t \rightarrow \infty} h(f(q(t))) = 1.$

This long-time limit can, in turn, be expressed in a form where the flux factor again occurs

$\lim_{t \rightarrow \infty} h(f(q(t)))=\int_0^\infty \frac{dh(f(q(t)))}{dt}dt=\int_0^\infty F\,dt.$

In this expression, the flux $F(t)$ pertains to coordinates $q(t)$ and momenta $p(t)$ at $t > 0$. Because of time reversibility, the integral can be extended to range from $t = - \infty$ until $t = \infty$. Using the expressions for $\chi$ and for $F$ developed above in the equation for the rate coefficient given at the beginning of this Section allows the rate coefficient $k(T)$ to be rewritten as follows:

$k(T)=\frac{1}{Q_r(2\pi \hbar)^L}\int dp\,dq\, e^{-\beta H(q,p)} F(p,q) \chi(p,q)$

$=\frac{1}{Q_r(2\pi \hbar)^L}\int_{-\infty}^\infty dt\int dp\,dq\, e^{-\beta H(q,p)} F(p,q) F(p(t),q(t)).$

In this form, the rate constant $k(T)$ appears as an equilibrium average (represented by the integral over the initial values of the variables $p$ and $q$ with the $Q_r^{-1} (2\pi \hbar)^{-L} \exp(-\beta H)$ weighting factor) of the time correlation function of the flux $F$. To evaluate the rate constant in this time-domain framework for a specific chemical reaction, one would proceed as follows.

1. Run an ensemble of trajectories whose initial coordinates and momenta {$q,p$} are selected (e.g., using Monte Carlo methods discussed in Chapter 7) from a distribution with $\exp(-\beta H)$ as its weighting factor.
2. Make sure that the initial coordinates {$q$} lie on the dividing surface because the flux expression contains the $\delta(f(\textbf{q}))$ factor.
3. Monitor each trajectory to observe when it again crosses the dividing surface (i.e., when {$q(t)$} again obeys $f(q(t)) = 0$), at which time the quantity
4. $F(p(t),q(t))$ can be evaluated as $F(\textbf{p},\textbf{q}) = \delta(f(\textbf{q})) \sum_j \dfrac{∂f}{∂q_j} \dfrac{dq_j}{dt}$, using the coordinates and momenta at time $t$ to compute these quantities.
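The following minimal Python sketch illustrates steps 1-4 for a one-dimensional model with the dividing surface at $S = S_0$ (all names are hypothetical, and the overall normalization by $Q_r(2\pi\hbar)^L$ and the configurational Boltzmann factor of the surface are omitted). Rather than accumulating flux products at each recrossing time, it uses the equivalent time-integrated "flux-side" estimator: the $\delta(S-S_0)$ factor in $F(p(t),q(t))$ converts the time integral into a sum of signed crossings, which totals 1 for trajectories ending on the product side and 0 otherwise, so recrossings cancel automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_estimate(V_grad, m, beta, S0=0.0, n_traj=1000, dt=0.01, n_steps=2000):
    """Schematic (unnormalized) flux-side estimator: launch trajectories
    on the dividing surface S = S0 with Maxwell-Boltzmann momenta and
    average the initial flux P0/m times the long-time side indicator
    h(S(t) - S0).  Recrossings enter as cancelling +/- contributions."""
    acc = 0.0
    for _ in range(n_traj):
        P = rng.normal(0.0, np.sqrt(m / beta))   # Maxwell-Boltzmann momentum
        P0, S = P, S0
        for _ in range(n_steps):                 # velocity-Verlet propagation
            P -= 0.5 * dt * V_grad(S)
            S += dt * P / m
            P -= 0.5 * dt * V_grad(S)
        acc += (P0 / m) * (1.0 if S > S0 else 0.0)
    return acc / n_traj

# toy symmetric barrier V(S) = V0/cosh^2(S/a)
V0, a, m, beta = 1.0, 1.0, 1.0, 2.0
V_grad = lambda S: -2.0 * V0 * np.tanh(S / a) / (a * np.cosh(S / a) ** 2)
print(rate_estimate(V_grad, m, beta))
```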
Using a planar dividing surface attached to the reaction path at $S = S_0$ as noted earlier allows $F(q,p)$ to be calculated in terms of the initial ($t=0$) momentum lying along the reaction path direction as

$F(\textbf{p},\textbf{q}) = \delta(S-S_0) \dfrac{P_S}{m_S}$

and permits $F(p(t),q(t))$ to be computed when the trajectory again crosses this surface at time $t$ as $F(p(t),q(t)) = \delta(S-S_0) P_S(t)/m_S$. So, all that is really needed if the dividing surface is defined in this manner is to start trajectories with $S = S_0$; to keep track of the initial momentum along $S$; to determine at what times $t$ the trajectory returns to $S = S_0$; and to form the product $\dfrac{P_S}{m_S} \dfrac{P_S(t)}{m_S}$ for each such time. It is in this manner that one can compute flux-flux correlation functions and, thus, the rate coefficient.

Notice that trajectories that undergo surface re-crossings contribute negative terms to the flux-flux correlation function computed as discussed above. That is, a trajectory with a positive initial value of $\dfrac{P_S}{m_S}$ can, at some later time $t$, cross the dividing surface with a negative value of $\dfrac{P_S(t)}{m_S}$ (i.e., be directed back toward reactants). This re-crossing will contribute a negative value, via the product $\dfrac{P_S}{m_S}\dfrac{P_S(t)}{m_S}$, to the total correlation function, which integrates over all times. Of course, if this same trajectory later undergoes yet another crossing of the dividing surface at $t'$ with positive $P_S(t')$, it will contribute a positive term to the correlation function via $\dfrac{P_S}{m_S}\dfrac{P_S(t')}{m_S}$. Thus, the correlation function approach to computing the rate coefficient can properly account for surface re-crossings, unlike TST, which requires one to account for such effects in the transmission coefficient $\kappa$.

Wave Packet Propagation

The discussions in Chapters 1 and 7 should have made it clear that it is very difficult to time-propagate wave functions rigorously using quantum mechanics. On the other hand, to propagate a classical trajectory is relatively straightforward. In addition to the semi-classical tools introduced in Chapter 1, there is another powerful tool that allows one to retain much of the computational ease and convenient interpretation of the classical trajectory approach while also incorporating quantum effects that are appropriate under certain circumstances. In this wave packet propagation approach, one begins with a quantum mechanical wave function that is characterized by two parameters specifying the average value of the position and of the momentum along each coordinate. One then propagates not the quantum wave function but the values of these two parameters, which one assumes will evolve according to Newtonian dynamics. Let's see how these steps are taken in more detail and try to understand when such an approach is expected to work or to fail.

First, the form of the so-called wave packet quantum function is written as follows:

$Y(q,Q, P) = \prod_{J=1}^N \frac{1}{\sqrt{2\pi\langle \delta q_J^2\rangle}} \exp\Big[\frac{iP_J q_J}{\hbar} -\frac{(q_J - Q_J)^2}{4\langle \delta q_J^2\rangle} \Big].$

Here, we have a total of $N$ coordinates that we denote {$q_J : J=1,N$}. It is these coordinates that the quantum wave function depends upon.
The total wave function is a product of terms, one for each coordinate. Notice that this wave function has two distinct ways in which the coordinate $q_J$ appears. First, it has a Gaussian spatial dependence ($\exp\Big[- \dfrac{(q_J - Q_J)^2}{4\langle \delta q_J^2\rangle} \Big]$) centered at the values $Q_J$ and having Gaussian width factors related to $\langle \delta q_J^2\rangle$. This dependence tends to make the wave function's amplitude largest when $q_J$ is close to $Q_J$. Secondly, it has a form $\exp\Big[\dfrac{iP_J q_J}{\hbar}\Big]$ that looks like the travelling wave that we encountered in Chapter 1 in which the coordinate $q_J$ moves with momentum $P_J$. So, these wave packet functions have built into them characteristics that allow them to describe motion (via the $P_J$) of an amplitude that is centered at $Q_J$ with a width given by the parameter $\langle \delta q_J^2\rangle$.

In this approach to chemical dynamics, we assume the parameters $P_J$ and $Q_J$ will undergo classical time evolution according to the Newton equations:

$\frac{dQ_J}{dt} = \frac{P_J}{m_J}$

$\frac{dP_J}{dt} = - \frac{∂V}{∂Q_J}$

where $V$ is the potential energy surface (Born-Oppenheimer or force field) upon which we wish to propagate the wave packet, and $m_J$ is the mass associated with coordinate $q_J$. For the form of the wave function given above, the $Q_J$ and $P_J$ parameters can be shown to be the expectation values of the coordinates $q_J$ and momenta $-i\hbar\dfrac{∂}{∂q_J}$:

$Q_J = \int Y^* q_J Y\, dq,$

$P_J = \int Y^* \Big(- i \hbar\dfrac{∂}{∂q_J}\Big) Y\, dq.$

Moreover, the $\langle \delta q_J^2\rangle$ parameter appearing in the Gaussian part of the function can be shown to equal the dispersion or spread of this wave function along the coordinate $q_J$:

$\langle \delta q_J^2\rangle = \int Y^* (q_J - Q_J)^2 Y\, dq.$

There is an important characteristic of the above Gaussian wave packet functions that we need to point out. It turns out that functions of the form:

$Y(q,Q(t), P(t)) = \prod_{J=1}^N \frac{1}{\sqrt{2\pi\langle \delta q_J^2\rangle}} \exp\Big[\frac{iP_J(t) q_J}{\hbar} -\frac{(q_J - Q_J(t))^2}{4\langle \delta q_J^2\rangle} \Big]$

can be shown to have uncertainties in $q_J$ and in $- i \hbar\dfrac{∂}{∂q_J}$ whose product is as small as possible:

$\langle (q_J - Q_J)^2\rangle \langle \Big(- i \hbar\dfrac{∂}{∂q_J} - P_J\Big)^2\rangle = \dfrac{\hbar^2}{4}.$

The proof that the wave packet form of the wave function has the smallest uncertainty product is given in the text book Quantum Mechanics, 3rd ed., L. I. Schiff, McGraw-Hill, New York (1968). The Heisenberg uncertainty relation, which is discussed in many texts dealing with quantum mechanics, says that this product of coordinate and momentum dispersions must be greater than or equal to $\dfrac{\hbar^2}{4}$. In a sense, the Gaussian wave packet function is the most classical function that one can have because its uncertainty product is as small as possible (i.e., equals $\dfrac{\hbar^2}{4}$). We say this is the most classical possible quantum function because in classical mechanics, both the coordinate and the momentum can be known precisely. So, whatever quantum wave function allows these two variables to be least uncertain is the most classical.

To use wave packet propagation to simulate a chemical dynamics event, one begins with a set of initial classical coordinates and momenta {$Q_J(0), P_J(0)$} as well as a width $\langle \delta q_J^2\rangle$ or uncertainty for each coordinate. Each width must be chosen to represent the range of that coordinate in the experiment that is to be simulated.
For example, assume one were to represent the dynamics of a wave function that is prepared by photon absorption from the $v = 0$ vibrational level of the ground $^1\Sigma$ state of the H-Cl molecule onto an excited-state energy surface $V(R)$. Such a situation is described qualitatively in Figure 8.3. In this case, one could choose $\langle \delta R^2\rangle$ to be the half width of the $v = 0$ harmonic (or Morse) oscillator wave function $\chi_0(R)$ of H-Cl, and take $P(0) = 0$ (because this is the average value of the momentum for $\chi_0$) and $R(0) = R_{\rm eq}$, the equilibrium bond length. For such initial conditions, classical Newtonian dynamics would then be used to propagate the $Q_J$ and $P_J$. In the H-Cl example introduced above, this propagation would be performed using the excited-state energy surface for $E$ since, for $t > 0$, the molecule is assumed to be on this surface. The total energy at which the initial wave packet is delivered to the upper surface would be dictated by the energy of the photon used to perform the excitation. In Figure 8.3, two such examples are shown. Once the packet is on the upper surface, its position $Q$ and momentum $P$ begin to change according to the Newton equations. This, in turn, causes the packet to move as shown for several equally spaced time steps in Figure 8.3 for the two different photon energies. At such subsequent times, the quantum wave function is then assumed, within this model, to be given by:

$Y(q,Q(t), P(t)) = \prod_{J=1}^N \frac{1}{\sqrt{2\pi\langle \delta q_J^2\rangle}} \exp\Big[\frac{iP_J(t) q_J}{\hbar} -\frac{(q_J - Q_J(t))^2}{4\langle \delta q_J^2\rangle} \Big].$

That is, it is taken to be of the same form as the initial wave function but to have simply moved its center from $Q(0)$ to $Q(t)$ with a momentum that has changed from $P(0)$ to $P(t)$. It should be noticed that the time evolution of the wave packet shown in Figure 8.3 displays clear classical behavior. For example, as time evolves, it moves to large $R$-values, and its speed (as evidenced by the spacings between neighboring packets for equal time steps) is large when the potential is low and small when the potential is higher.

As we learned in Chapter 6, the time correlation function

$C(t) = \langle Y(q,Q(0),P(0))|Y(q,Q(t),P(t))\rangle$

can be used to extract spectral information by Fourier transformation. For the H-Cl example considered here, this correlation function will be large at $t = 0$ but will decay in magnitude as the wave packet $Y(q,Q(t),P(t))$ moves to the right (at $t_1$, $t_2$, etc.) because its overlap with $Y(q,Q(0),P(0))$ becomes smaller and smaller as time evolves. This decay in $C(t)$ will occur more rapidly for the high-energy photon case because $Y(q,Q(t),P(t))$ moves to the right more quickly because the classical momentum $P(t)$ grows more rapidly. These dynamics will induce exponential decays in $C(t)$ (i.e., $C(t)$ will vary as $\exp(-t/\tau_1)$) at short times. In fact, this short-time decay of $C(t)$ produces, upon Fourier transformation, the primary spectral feature for the higher-energy photon case where dissociation ultimately occurs. In such photo-dissociation spectra, one observes a Lorentzian line shape whose width is characterized by the decay rate $1/\tau_1$, which, in turn, relates to the total energy of the packet and the steepness of the excited-state surface. This steepness determines how fast $P(t)$ grows, which then determines how fast the H-Cl bond fragments.
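A minimal sketch of this procedure is given below (in Python, with toy parameters that are not real H-Cl values). Because the packet keeps its Gaussian form with a fixed width, the overlap $C(t)$ can be done analytically: for two equal-width one-dimensional packets the Gaussian integral gives $|C(t)| = \exp\left[-\dfrac{(Q(t)-Q(0))^2}{8\langle\delta q^2\rangle} - \dfrac{\langle\delta q^2\rangle (P(t)-P(0))^2}{2\hbar^2}\right]$, so only the classical parameters $Q(t)$ and $P(t)$ need to be propagated.

```python
import numpy as np

hbar = 1.0   # convenient units for this sketch

def overlap_mag(Q0, P0, Qt, Pt, sig2):
    """|<Y(0)|Y(t)>| for two equal-width Gaussian packets (normalized
    convention); follows from evaluating the Gaussian overlap integral."""
    dQ, dP = Qt - Q0, Pt - P0
    return np.exp(-dQ**2 / (8.0 * sig2) - sig2 * dP**2 / (2.0 * hbar**2))

# toy repulsive excited-state surface standing in for the dissociative case
A, alpha, m = 5.0, 1.0, 1000.0
V_grad = lambda R: -A * alpha * np.exp(-alpha * R)   # dV/dR for V = A*exp(-alpha*R)

Q, P, sig2, dt = 0.0, 0.0, 0.05, 0.5
for step in range(201):
    if step % 40 == 0:
        print(step * dt, overlap_mag(0.0, 0.0, Q, P, sig2))
    P -= 0.5 * dt * V_grad(Q)    # velocity Verlet on the packet parameters
    Q += dt * P / m
    P -= 0.5 * dt * V_grad(Q)
```

Running this shows the monotonic decay of $|C(t)|$ characteristic of the dissociative (higher-energy photon) case; replacing the repulsive potential by a bound one would instead produce the recurrences discussed next.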
In the lower-energy photon case shown in Figure 8.3, a qualitatively different behavior occurs in $C(t)$ and thus in the spectrum. The packet's movement to larger $R$ causes $C(t)$ to initially undergo $\exp(-t/\tau_1)$ decay. However, as the packet moves to its large-$R$ turning point (shortly after time $t_3$), it strikes the outer wall of the surface, where it is reflected. Subsequently, it undergoes motion to smaller $R$, eventually returning to its initial value of $R$. Such recurrences, which occur on time scales that we denote $\tau_2$, are characteristic of bound motion, in contrast to the directly dissociative motion discussed earlier. This recurrence will cause $C(t)$ to again achieve a large amplitude, but $C(t)$ will subsequently again undergo $\exp(-t/\tau_1)$ decay as the packet once again departs. Clearly, the correlation function will display a series of recurrences followed by exponential decays. The frequency of the recurrences is determined by the frequency with which the packet traverses from its inner to outer turning points and back again, which is proportional to $1/\tau_2$. This, of course, is the vibrational period of the H-Cl bond. So, in such bound-motion cases, the spectrum (i.e., the Fourier transform of $C(t)$) will display a series of peaks spaced by $1/\tau_2$ with the envelope of such peaks having a width determined by $1/\tau_1$.

In more complicated multi-mode cases (e.g., in molecules containing several coordinates), the periodic motion of the wave packet usually shows another feature that we have not yet discussed. Let us, for simplicity, consider a case in which only two coordinates are involved. For the wave packet to return to (or near) its initial location, enough time must pass for both coordinates to have undergone an excursion to their turning points and back. For example, consider the situation in which one coordinate's vibrational frequency is ca. 1000 cm$^{-1}$ and the other's is 300 cm$^{-1}$; these two modes then require ca. 1/30 ps and 1/9 ps, respectively, to undergo one complete oscillation. At $t = 0$, the wave packet, which is a product of two packets, $\prod_{J=1}^2 \dfrac{1}{\sqrt{2\pi\langle \delta q_J^2\rangle}} \exp\Big[\dfrac{iP_J(t) q_J}{\hbar} -\dfrac{(q_J - Q_J(t))^2}{4\langle \delta q_J^2\rangle} \Big]$, one for each mode, produces a large $C(t)$. After 1/30 ps, the first mode's coordinate has returned to its initial location, but the second mode is only 9/30 of the way along in its periodic motion. Moreover, after 1/9 ps, the second mode's coordinate has returned to near where it began, but now the first mode has moved away. So, at both 1/30 ps and 1/9 ps, the correlation function will not be large because one of the two modes' contributions to $C(t) = \langle Y(q,Q(0),P(0)) | Y(q,Q(t),P(t))\rangle$ will be small. However, after 1/3 ps, both modes' coordinates will be in positions to produce a large value of $C(t)$; the high-frequency mode will have undergone 10 oscillations, and the lower-frequency mode will have undergone 3 oscillations. My point in discussing this example is to illustrate that molecules having many coordinates can produce spectra that display rather complicated patterns but which, in principle, can be related to the time evolution of these coordinates using the correlation function's connection to the spectrum. Of course, there are problems that arise in using the wave packet function to describe the time evolution of a molecule (or any system that should be treated using quantum mechanics).
One of the most important limitations of the wave packet approach to be aware of relates to its inability to properly treat wave reflections. It is well known that when a wave strikes a hard wall, it is reflected by the wall. However, when, for example, a water wave moves suddenly from a region of deep water to a much more shallow region, one observes both a reflected and a transmitted wave. In the discussion of tunneling resonances given in Chapter 2, we also encountered reflected and transmitted waves. Furthermore, when a wave strikes a barrier that has two or more holes or openings in it, one observes wave fronts coming out of these openings. The problem with the most elementary form of wave packets presented above is that each packet contains only one piece. It therefore cannot break into two or more pieces as it, for example, reflects from turning points or passes through barriers with holes. Because such wave packets cannot fragment into two or more packets that subsequently undergo independent dynamical evolution, they are not able to describe dynamical processes that require multiple-fragmentation events. It is primarily for this reason that wave packet approaches to simulating dynamics are usually restricted to treating short-time dynamics, where such fragmentation of the wave packet is less likely to occur. Prompt molecular photo-dissociation processes such as we discussed above are a good example of such short-time phenomena. There have been many refinements of the wave packet approach described above, some of which are designed to allow for splitting of the wave function. I refer the reader to the work of one of the pioneers of the time-dependent wave packet approach, Prof. Eric Heller, for more information on this subject.

Surface Hopping Dynamics

There are, of course, chemical reactions and energy-transfer collisions in which two or more Born-Oppenheimer (BO) energy surfaces are involved. Under such circumstances, it is essential to have available the tools needed to describe the coupled electronic and nuclear-motion dynamics appropriate to this situation. The way this problem is addressed is by returning to the Schrödinger equation before the single-surface BO approximation was made and expressing the electronic wave function $\Psi(\textbf{r}|\textbf{R})$, which depends on the electronic coordinates {$\textbf{r}$} and the nuclear coordinates {$\textbf{R}$}, as:

$\Psi(\textbf{r}|\textbf{R})=\sum_J a_J(t)\psi_J(\textbf{r}|\textbf{R})$

Here, $\psi_J(\textbf{r}|\textbf{R})$ can be the BO electronic wave function belonging to the $J^{\rm th}$ electronic state, in which case we say we are using an adiabatic basis of electronic states. The $a_J(t)$ are amplitudes that will relate to the probability that the system is on the $J^{\rm th}$ energy surface. Next, we assume that the coordinates {$\textbf{R}(t)$} of the nuclei undergo classical motion in a manner to be specified in further detail below that allows us to know their locations and velocities (or momenta) at any time $t$.
This assumption implies that the time dependence of the above wave function is carried in the time dependence of the coordinates $\textbf{R}(t)$ as well as in the $a_J(t)$ amplitudes

$\Psi(\textbf{r}|\textbf{R}(t))=\sum_J a_J(t)\psi_J(\textbf{r}|\textbf{R}(t)).$

We next substitute this expansion into the time-dependent Schrödinger equation

$i \hbar \dfrac{\partial \Psi}{\partial t} = H_0 (\textbf{r}|\textbf{R}(t))\Psi$

where $H_0 (\textbf{r}|\textbf{R}(t))$ is the electronic Hamiltonian, which depends on the nuclear coordinates $\textbf{R}(t)$ and thus on the time variable. We then multiply the resultant equation on the left by one of the wave functions $\psi_K^*(\textbf{r}|\textbf{R})$ and integrate over the electronic coordinates {$\textbf{r}$} to obtain an equation for the $a_K(t)$ amplitudes:

$i\hbar\frac{da_K}{dt}=\sum_J \left[V_{K,J}(\textbf{R}(t))-i\hbar\langle \psi_K|\dfrac{d\psi_J}{dt}\rangle \right]a_J .$

Here, $V_{K,J}$ is the electronic Hamiltonian matrix element that couples $\psi_K$ to $\psi_J$. This set of coupled differential equations for the amplitudes can be solved numerically by, for example, starting at $t_i$ with $a_K = 1$ and $a_{J\ne K} = 0$ and propagating the amplitudes' values forward in time. The next step is to express $\langle \psi_K|d\psi_J/dt\rangle$, using the chain rule, in terms of derivatives with respect to the nuclear coordinates {$\textbf{R}$} and the time rate of change of these coordinates:

$\langle \psi_K|\dfrac{d\psi_J}{dt}\rangle =\sum_b\langle \psi_K|\dfrac{d\psi_J}{d\textbf{R}_b}\rangle \dfrac{d\textbf{R}_b}{dt}$

So, now the equations for the $a_K(t)$ read as follows:

$i\hbar\frac{da_K}{dt}=\sum_J \left[V_{K,J}(\textbf{R}(t))-i\hbar\sum_b \langle \psi_K|\dfrac{d\psi_J}{d\textbf{R}_b}\rangle \dfrac{d\textbf{R}_b}{dt} \right]a_J$

The $\langle \psi_K|\dfrac{d\psi_J}{d\textbf{R}_b}\rangle =\textbf{d}_{K,J}(b)$ are called non-adiabatic coupling matrix elements (for each pair of states $K$ and $J$, they are a vector in the space of the nuclear coordinates $\textbf{R}$), and it is their magnitudes that play a central role in determining the efficiency of surface hoppings. Below we will make use of the following symmetry properties of these quantities, which derive from the orthogonality of the {$\psi_J$}:

$\langle \psi_K|\dfrac{d\psi_J}{d\textbf{R}_b}\rangle =\textbf{d}_{K,J}(b)=-\langle \psi_J|\dfrac{d\psi_K}{d\textbf{R}_b}\rangle ^*=-\textbf{d}_{J,K}^*(b)$

$\langle \psi_K|\dfrac{d\psi_K}{d\textbf{R}_b}\rangle =0$

These matrix elements are becoming more commonly available in widely utilized quantum chemistry and dynamics computer packages (although their efficient evaluation remains a challenge that is undergoing significant study). Qualitatively, one can expect a coupling $\langle \psi_K|d\psi_J/d\textbf{R}_b\rangle$ to be large if motion along a coordinate causes an orbital occupied in $\psi_J$ to be distorted in a manner that would produce significant overlap with an orbital in $\psi_K$. If the electronic functions {$\psi_K$} appearing in the equations

$i\hbar\frac{da_K}{dt}=\sum_J \left[V_{K,J}(\textbf{R}(t))-i\hbar\sum_b \textbf{d}_{K,J}(b)\dfrac{d\textbf{R}_b}{dt} \right]a_J$

are BO eigenfunctions, the off-diagonal elements $V_{K,J}$ vanish and the diagonal elements are the BO energy levels. In this case, only the terms involving $\textbf{d}_{K,J}(b)$ generate transitions between surfaces. On the other hand, if one chooses electronic functions {$\psi_K$} that have vanishing $\textbf{d}_{K,J}(b)$ values, only the $V_{K,J}$ terms induce transitions among surfaces.
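A minimal Python sketch of how these coupled amplitude equations can be integrated numerically is given below (all names are mine; the two-state $V$ and velocity-dotted coupling matrix $D$ are illustrative numbers, and in a real calculation both would be re-evaluated at each new geometry along the classical trajectory).

```python
import numpy as np

hbar = 1.0

def amplitude_derivs(a, V, D):
    """da/dt from  i*hbar*(da_K/dt) = sum_J [V_KJ - i*hbar*D_KJ] a_J,
    where D_KJ = sum_b (dR_b/dt) d_KJ(b) at the current geometry."""
    return (V - 1j * hbar * D) @ a / (1j * hbar)

def rk4_step(a, V, D, dt):
    """Fourth-order Runge-Kutta step, holding V and D fixed over dt."""
    k1 = amplitude_derivs(a, V, D)
    k2 = amplitude_derivs(a + 0.5 * dt * k1, V, D)
    k3 = amplitude_derivs(a + 0.5 * dt * k2, V, D)
    k4 = amplitude_derivs(a + dt * k3, V, D)
    return a + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# two-state illustration in an adiabatic basis: V diagonal, D antisymmetric
a = np.array([1.0 + 0j, 0.0 + 0j])          # start entirely on state 0
V = np.array([[0.0, 0.0], [0.0, 0.1]])
D = np.array([[0.0, 0.02], [-0.02, 0.0]])    # obeys d_KJ = -d_JK
for _ in range(1000):
    a = rk4_step(a, V, D, dt=0.1)
print(np.abs(a) ** 2)    # populations |a_K|^2
```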
The case of vanishing $\textbf{d}_{K,J}(b)$ values is said to involve using diabatic wave functions, while the BO case involves adiabatic wave functions. For the remainder of this discussion, I will assume we are making use of adiabatic (i.e., BO) wave functions, but I will carry through the derivation in a manner that will allow either adiabatic or diabatic functions to be used. Because one is eventually interested in the populations for being in various electronic states, it is common to recast the above equations for the amplitudes $a_J(t)$ into equations for so-called density matrix elements

$\gamma_{K,J}(t)=a_K(t)a_J^*(t).$

The diagonal elements of the $\boldsymbol{\gamma}$ matrix are the state probabilities, while the off-diagonal elements contain information about the phases of the complex quantities {$a_J$}. So, in place of the equations for the {$a_J(t)$}, one can use the following equations for the {$\gamma_{K,J}$}:

$i\hbar\frac{d\gamma_{K,J}}{dt}=\sum_L\left\{ \gamma_{L,J}\left[V_{K,L}-i\hbar\sum_b \dfrac{d\textbf{R}_b}{dt} \textbf{d}_{K,L}(b) \right] - \gamma_{K,L}\left[V_{L,J}-i\hbar\sum_b \dfrac{d\textbf{R}_b}{dt} \textbf{d}_{L,J}(b) \right]\right\}$

Setting $K=J$, it is then possible to derive an equation for the time evolution of the diagonal elements of the density matrix

$\frac{d\gamma_{K,K}}{dt}=\sum_{L\ne K}X_{K,L}$

where

$X_{K,L}=\frac{2}{\hbar}\Im\{\gamma_{L,K}V_{K,L}\}-2\Re\{\gamma_{L,K}\sum_b \dfrac{d\textbf{R}_b}{dt} \textbf{d}_{K,L}(b)\}$

In addition to calculating amplitudes (the probabilities then being computed as $|a_J|^2 = \gamma_{J,J}$), one often needs to identify (using, perhaps, the kind of strategy discussed in Chapter 3) the seam at which the surfaces of interest intersect. This helps focus attention on those geometries near which a surface hop is most likely to occur. To utilize the most basic form of surface hopping theory, one proceeds as follows:

1. One begins with initial values of the nuclear coordinates {$\textbf{R}_b$} and their velocities {$d\textbf{R}_b/dt$} that properly characterize the kind of collision or reaction one wishes to simulate. Of course, one has to utilize an ensemble of trajectories with initial conditions chosen to properly describe such an experimental situation. In addition, one specifies which electronic surface (say the $K^{\rm th}$ surface) the system is initially on.
2. For each such set of initial conditions, one propagates a classical trajectory describing the time evolution of the {$\textbf{R}_b$} and {$d\textbf{R}_b/dt$} on this initial ($K^{\rm th}$) surface.
3. As one is propagating the classical trajectory, one also propagates the coupled differential equations for the density matrix elements with the nuclei moving on the $K^{\rm th}$ energy surface.
4. After each time-propagation step of duration $\delta t$, one evaluates the quantities $X_{K,L}$ shown above (these elements give estimates for the rate of change of the population of the $K^{\rm th}$ state due to transitions to other states), from which one computes the hopping probabilities $g_{K,J} = -\dfrac{X_{K,J}\,\delta t}{\gamma_{K,K}}$ (a negative $g_{K,J}$, meaning net population flow from $J$ into $K$, is set to zero). These quantities control the fractional change in the probability of being on the $K^{\rm th}$ surface $\gamma_{K,K}$ due to transitions from state $K$ into state $J$. They are used as follows. A random number $0 < x < 1$ is chosen. If $x < g_{K,J}$, a hop to surface $J$ is allowed to occur; otherwise, no hop occurs and the system continues its time evolution on surface $K$.
5. If a hop occurs, the coordinates and momenta are allowed to now propagate on the $J^{\rm th}$ energy surface, where the forces will, of course, be different, but with one change.
The component of the velocity vector along the non-adiabatic coupling vector is modified to allow for the fact that the system's electronic energy has suddenly changed from $V_K(R)$ to $V_J(R)$, which must be compensated for by a change in the kinetic energy of the nuclei so that total energy is conserved. If $V_K(R) > V_J(R)$, this results in an increase in speed; if $V_K(R) < V_J(R)$, it produces a decrease in speed. In the latter case, if $V_K(R)$ lies considerably below $V_J(R)$, it might turn out that there is just not enough total energy to access the surface $V_J(R)$; in this case, the hop is not allowed to occur.
6. Following the above decision about allowing the hop and adjusting the speed along the direction of the non-adiabatic coupling vector, the trajectory is then continued with the system now propagating on the $J^{\rm th}$ or $K^{\rm th}$ surface, and the differential equations for the density matrix elements continue to be propagated with no changes other than the fact that the nuclei may (or may not) be evolving on a different surface. The entire process is repeated until the trajectory reaches termination (e.g., a reaction or quenching is observed, or some specified time limit is reached) at which time one can probe the properties of the products as reflected in the coordinates and velocities of the nuclei.

Carrying out surface hopping trajectories for an ensemble of trajectories with initial conditions representative of an experiment generates an ensemble of final $\gamma_{J,J}$ values (i.e., at the end of each trajectory) whose averages can be used to estimate the overall probability of ending up in the $J^{\rm th}$ electronic state. The algorithm discussed above is the so-called fewest-switches method (detailed in J. C. Tully, J. Chem. Phys. 93, 1061 (1990)) pioneered by Prof. John Tully. This surface-hopping algorithm remains one of the most widely used approaches to treating such coupled-state dynamics.

Landau-Zener Surface Jumps

There is a simplified version of the surface hopping procedure just discussed that is often used when one has two electronic surfaces that intersect in a region of space that (i) is energetically accessible in the experiment being simulated and (ii) can be located and characterized (e.g., in terms of its coordinates and energy gradients) in a computationally feasible manner. To illustrate, we again consider the case of Al interacting with $H_2$, whose potential energy surfaces are reproduced from Figure 3.1c. With the Landau-Zener model described in this Section, trajectories are propagated on one energy surface until a point on or very near the seam (denoted $\textbf{R}_x(r)$ in Figure 8.3a) is encountered, at which time an equation giving the probability of undergoing a jump to the other surface is invoked. It is the purpose of this Section to derive and explain this Landau-Zener equation.

In Chapter 4, we learned that the rates of transitions from one state (labeled $i$) to another (labeled $f$) can sometimes be expressed in terms of matrix elements of the perturbation connecting the two states as follows:

${\rm Rate}=\delta(\omega-\omega_{f,i})\dfrac{2\pi |\langle \Psi_f^0|v(r)|\Psi_i^0\rangle |^2}{\hbar^2}.$

Because the coupling matrix elements $\langle \Psi_f^0|v(r)|\Psi_i^0\rangle$ have units of energy, and the $\delta(\omega-\omega_{f,i})$ function has units of inverse frequency, the rate expression clearly has units of $s^{-1}$.
In the rate equation, $\hbar\omega_{f,i}$ is the energy of the transition induced by light of energy $\hbar\omega$, and $v(r)$ is the perturbation due to the electric dipole operator. These photon-induced rates can be viewed as relating to transitions between two surfaces that cross: (i) one surface being that of the initial state plus a photon of energy $\hbar\omega$, and (ii) the second being that of the final state with no photon. In this point of view, the photon lifts the lower-energy state upward in energy until it crosses the upper state, and then the dipole operator effects the transition. Making analogy with the photon-absorption case, we consider expressing the rates of transitions between

1. an initial state consisting of an electronic state multiplied by a state describing the initial vibrational (including inter-fragment collisional) and rotational state of the system, and
2. a final state consisting of the product of another electronic and vibration-rotation state

as follows:

${\rm Rate}=\delta(\omega_{f,i})\dfrac{2\pi |\langle \Psi_f^0\chi_f(R)|v(r)|\Psi_i^0\chi_i(R)\rangle |^2}{\hbar^2}.$

That is, we use the same golden rule rate expression but with no photon energy needed to cause the two surfaces to intersect. Next we use the identity

$\delta(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp(ixt)dt$

to write

$\delta(\omega_{f,i})=\delta((\varepsilon_f-\varepsilon_i)/\hbar)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp\Big(\frac{i(\varepsilon_f-\varepsilon_i)t}{\hbar}\Big)dt,$

which can be substituted into our rate expression to obtain

${\rm Rate}=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp\Big(\frac{i(\varepsilon_f-\varepsilon_i)t}{\hbar}\Big) \dfrac{2\pi \langle \Psi_f^0\chi_f(R)|v(r)|\Psi_i^0\chi_i(R)\rangle^* \langle \Psi_f^0\chi_f(R)|v(r)|\Psi_i^0\chi_i(R)\rangle }{\hbar^2}dt.$

Defining two nuclear-motion Hamiltonians, one for each BO surface,

$h_{i,f}=T_R+V_{i,f}(R)$

and assuming that the nuclear-motion wave functions obey

$h_{i,f}\chi_{i,f}(R)=\varepsilon_{i,f}\chi_{i,f}(R),$

this expression becomes

${\rm Rate}=\frac{1}{2\pi}\int_{-\infty}^{\infty}\dfrac{2\pi \langle \Psi_f^0\chi_f(R)|v(r)|\Psi_i^0\chi_i(R)\rangle^* \langle \Psi_f^0\chi_f(R)|\exp\Big(\dfrac{ih_ft}{\hbar}\Big)v(r)\exp\Big(-\dfrac{ih_it}{\hbar}\Big)|\Psi_i^0\chi_i(R)\rangle }{\hbar^2}dt.$

In the expression

$\exp\Big(\dfrac{ih_ft}{\hbar}\Big)v(r)\exp\Big(-\dfrac{ih_it}{\hbar}\Big)$

the $\langle \psi_f^0(r)|v(r)|\psi_i^0(r)\rangle$ elements of $v(r)$ for our surface-jumping problem would involve either the $V_{i,f}$ electronic Hamiltonian couplings (if one uses a diabatic basis) or the non-adiabatic coupling elements (if one uses a BO adiabatic basis). In either case, these elements are functions of the nuclear coordinates and thus do not commute with the differential operators appearing in $T_R$. As a result, the operator combination $\exp\Big(\dfrac{ih_ft}{\hbar}\Big)v(r)\exp\Big(-\dfrac{ih_it}{\hbar}\Big)$ must be handled carefully (e.g., as one does in the coupled-cluster expansion treated in Chapter 6) by expanding the exponential operators and keeping track of the fact that not all terms commute. The lowest-order term in the expansion of this combination of operators is

$\exp\Big(\dfrac{ih_ft}{\hbar}\Big)v(r)\exp\Big(-\dfrac{ih_it}{\hbar}\Big)\approx v(r)\exp\Big(\dfrac{it(V_f(\textbf{R})-V_i(\textbf{R}))}{\hbar}\Big)$

which yields the approximation I now want to pursue.
Using this approximation in our expression for the rate of surface-jumping transitions gives

${\rm Rate}=\frac{1}{2\pi}\int_{-\infty}^{\infty}\dfrac{2\pi \langle \Psi_f^0\chi_f(R)|v(r)|\Psi_i^0\chi_i(R)\rangle^* \langle \chi_f(R)|\exp(iV_ft/\hbar)\langle \Psi_f^0|v(r)|\Psi_i^0\rangle \exp(-iV_it/\hbar)|\chi_i(R)\rangle }{\hbar^2}dt.$

We now use

$\delta(V_f(\textbf{R})-V_i(\textbf{R}))=\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}\exp\Big(\frac{i(V_f(\textbf{R})-V_i(\textbf{R}))t}{\hbar}\Big)dt$

to write the rate as

${\rm Rate}=\frac{2\pi}{\hbar} \langle \chi_f(R)|\langle \Psi_f^0|v(r)|\Psi_i^0\rangle |\chi_i(R)\rangle^* \langle \chi_f(R)|\langle \Psi_f^0|v(r)|\Psi_i^0\rangle \delta(V_f(\textbf{R})-V_i(\textbf{R}))|\chi_i(R)\rangle$

$=\frac{2\pi}{\hbar}\langle \chi_f(R)|v_{f,i}|\chi_i(R)\rangle^* \langle \chi_f(R)|v_{f,i}\,\delta(V_f(\textbf{R})-V_i(\textbf{R}))|\chi_i(R)\rangle$

where we define the electronic transition integrals in shorthand as

$v_{f,i}=\langle \Psi_f^0|v(r)|\Psi_i^0\rangle .$

Because of the energy-conserving $\delta$-function, we can actually simplify this expression even further by summing over the complete set of the final-state's vibration-rotation functions and making use of the completeness relation

$\sum_f\chi_f(R)\chi_f^*(R')=\delta(R-R')$

to obtain

${\rm Rate}=\frac{2\pi}{\hbar}\langle \chi_i(R)|v_{f,i}^*v_{f,i}\,\delta(V_f(\textbf{R})-V_i(\textbf{R}))|\chi_i(R)\rangle .$

This expression can be seen to have units of $s^{-1}$ since the delta function has units of inverse energy and each electronic coupling integral has units of energy. In the above rate expression, we see a $\delta$-function that limits the integration to only those geometries for which $V_i = V_f$; these are the geometries that lie on the intersection seam. Any geometry $\textbf{R}$ can be expressed in terms of the geometry $\textbf{S}$ of the point on the seam closest to $\textbf{R}$ plus a displacement of magnitude $\eta$ along the unit vector normal to the seam at point $\textbf{S}$:

$\textbf{R}=\textbf{S}+\hat{\textbf{n}}\eta.$

If we now expand the energy difference $V_f(\textbf{R}) - V_i(\textbf{R})$ in a Taylor series about the point $\textbf{S}$ lying on the seam, we obtain

$V_f(\textbf{R}) - V_i(\textbf{R})=V_f(\textbf{S}) - V_i(\textbf{S})+\nabla[V_f(\textbf{R}) - V_i(\textbf{R})]_S\bullet\hat{n}(\textbf{S})\,\eta+\text{higher-order terms}.$

The gradient $\nabla[V_f(\textbf{R}) - V_i(\textbf{R})]_S$ of the potential difference has zero components within the subspace of the seam; its only non-vanishing component lies along the normal $\hat{n}(\textbf{S})$ vector. Now using

$\delta(ax)=\frac{1}{|a|}\delta(x),$

the $\delta(V_f(\textbf{R}) - V_i(\textbf{R}))$ function can be expressed as

$\delta(V_f(\textbf{R}) - V_i(\textbf{R}))=\delta\big((V_f(\textbf{S}) - V_i(\textbf{S}))+\eta\nabla(V_f(\textbf{S}) - V_i(\textbf{S}))\bullet\hat{n}(\textbf{S})+\cdots\big)$

$=\delta\big(0+\eta\nabla(V_f(\textbf{S}) - V_i(\textbf{S}))\bullet\hat{n}(\textbf{S})+\cdots\big)=\frac{1}{|\nabla(V_f(\textbf{S}) - V_i(\textbf{S}))\bullet\hat{n}(\textbf{S})|}\delta(\eta)$

with the $\delta(\eta)$ factor constraining the integral to lie on the seam. Inserting this into the rate expression gives

${\rm Rate}=\frac{2\pi}{\hbar}\int\int \chi_i^*(R)\,v_{i,f}^*v_{i,f}\,\frac{\delta(\eta)}{|\nabla(V_f-V_i)|_S}\,\chi_i(R)\,dS\,d\eta$

$=\frac{2\pi}{\hbar}\int \chi_i^*(S,0)\,v_{i,f}^*v_{i,f}\,\frac{1}{|\nabla(V_f-V_i)|_S}\,\chi_i(S,0)\,dS.$

This result can be interpreted as follows:

1. $\chi_i^*(S,0)\chi_i(S,0)$ gives the probability density for being at a point $\textbf{R}=(\textbf{S},0)$ on the seam; this factor has units of $({\rm length})^{-(3N-6)}$.
$\dfrac{2\pi}{\hbar}v_{f,i}^*v_{f,i}\dfrac{1}{|\nabla(V_f-V_i)|_S}$ gives the rate of transitions from one surface to the other at the point $\textbf{S}$ on the seam; this factor has units of length times s$^{-1}$.

3. The $d\textbf{S}$ factor has units of (length)$^{3N-7}$, so the entire expression has units of s$^{-1}$ as it should.

In this form, the rate expression can be used by (i) sampling (e.g., using Monte Carlo) over as much of the seam as is energetically accessible, using the initial-state spatial probability density as a weighting factor, and (ii) forming the sampling average of the rate quantity $\dfrac{2\pi}{\hbar}v_{f,i}^*v_{f,i}\dfrac{1}{|\nabla(V_f-V_i)|_S}$ computed for each accepted geometry.

There is another way to utilize the above rate expression. If we think of a swarm of $N$ trajectories (i.e., an ensemble representative of the experiment of interest) and ask at what rate $r(\textbf{S})$ trajectories pass through a narrow region of thickness $\eta$ at a point $\textbf{S}$ on the seam, we could write

$r(\textbf{S})=N|\chi_i(\textbf{S},\eta)|^2\frac{v_n}{\eta}d\textbf{S}d\eta$

where $|\chi_i(\textbf{S},\eta)|^2$ gives the probability density for a trajectory being at the point $\textbf{S}$ on the seam and lying within a distance $\eta$ along the direction $\hat{n}(\textbf{S})$ normal to the seam. The quantity $\dfrac{v_n}{\eta}$ is the component of the velocity along $\hat{n}(\textbf{S})$ with which the system moves through the seam divided by the thickness $\eta$. This ratio gives the inverse of the time the system spends within the thin $\eta$ region or, equivalently, the frequency of passing through the thin strip of the seam at $\textbf{S}$. The quantity $d\textbf{S}d\eta$ is the volume element $d\textbf{R}$ whose units cancel those of $|\chi_i(\textbf{S},\eta)|^2$. If we multiply this rate at which trajectories pass through $\textbf{S}$, $\eta$ by the probability $P$ that a surface jump will occur and integrate over the entire seam space $d\textbf{S}d\eta$, we can express the rate at which the $N$ trajectories will undergo jumps as

${\rm Rate}=\int NP|\chi_i(\textbf{S},\eta)|^2\frac{v_n}{\eta}d\textbf{S}d\eta = \int NP|\chi_i(\textbf{S},\eta)|^2v_n\, d\textbf{S}.$

If we divide this rate by $N$, the number of trajectories, to produce the average rate per trajectory, and compare this expression to the rate we derived earlier

${\rm Rate}=\frac{2\pi}{\hbar}\int \chi_i^*(S,0)\,v_{f,i}^*v_{f,i}\,\frac{1}{|\nabla(V_f-V_i)|_S}\,\chi_i(S,0)\,dS,$

we see that they would be equivalent if the probability $P$ of a surface jump were given by

$P=\frac{2\pi}{\hbar}v_{f,i}^*v_{f,i}\frac{1}{v_n|\nabla(V_f-V_i)|_S}.$

The above expression for the probability of jumping from $V_i(\textbf{R})$ to $V_f(\textbf{R})$ is known as the Landau-Zener (LZ) formula. The way it is used in most applications is as follows:

1. An ensemble of classical trajectories with initial coordinates and momenta selected to represent an experimental condition are run on the potential energy surface $V_i(\textbf{R})$.

2. Whenever any trajectory passes very close to an intersection seam between $V_i(\textbf{R})$ and another surface $V_f(\textbf{R})$, the seam geometry $\textbf{S}$ nearest to $\textbf{R}$ is determined and the gradient $\nabla(V_f-V_i)$ of the energy difference is evaluated at $\textbf{S}$. In addition, the component of the velocity along the direction of this gradient is computed.

3.
The electronic coupling matrix elements between the two states are evaluated at $\textbf{S}$, and the above formula is used to estimate the probability $P$ of a surface jump.

In most applications of LZ theory, the electronic states {$\psi_{i,f}^0$} in the region of a crossing seam are taken to be diabatic states, because then the coupling matrix elements can be taken from the splitting between the two adiabatic states that undergo an avoided crossing near $\textbf{S}$ rather than by evaluating non-adiabatic coupling matrix elements $\langle \psi_i|\dfrac{d\psi_f}{dR_b}\rangle$ between adiabatic BO states. In summary, the LZ expression for the probability of a surface jump should be viewed as an approximate version of the algorithm provided by the fewest-switches surface hopping approach discussed earlier.

Before closing this Section, it is useful to point out how this formula applies to two distinct cases.

1. If, as suggested in Figure 8.3b, a molecule is prepared (e.g., by photon absorption) in an excited electronic state (the upper blue curve) that undergoes a crossing with a dissociative electronic state (the green curve), one may wish to estimate the rate of the process called predissociation in which the excited molecule jumps to the dissociative surface and falls apart. This rate is often computed by multiplying the frequency $\nu$ at which the excited molecule passes through the curve crossing by the LZ estimate of the surface jumping probability $P$:

${\rm Rate}=\nu P$

with $P$ computed as discussed above and $\nu$ usually being equal to the vibrational frequency of the bond whose stretching generates the curve crossing.

Figure 8.3b Qualitative depiction of predissociation that can occur from an excited (blue) surface onto a dissociative (green) surface.

2. Alternatively, one may be interested in determining the probability that the fragment species (atoms in Figure 8.3b) collide on the green curve and undergo a transition to the upper blue curve as a result of this collision. For example, prompt fluorescence from this upper blue curve might be the experimental signature one wishes to simulate. In this case, the outcome (i.e., generation of the molecule in the upper blue curve's electronic state) can occur in either of two ways:

a. The system collides on the green curve and undergoes a surface jump at the crossing, thus ending up on the blue surface from which it promptly fluoresces; this process has a probability $P$ computed using the LZ formula.

b. The system collides on the green curve and does not jump to the blue curve at the crossing, but remains on the green curve (this has probability $1-P$) until it reaches the turning point. After reflecting off the turning point, the system (still on the green curve) jumps to the blue curve (this has probability $P$) when it again reaches the crossing, after which prompt fluorescence occurs. The overall probability for this path is $P(1-P)$.

So, the total yield of fluorescence would be related to the quantity $P + P(1-P)$. The point of these two examples is that the LZ formula gives an estimate of the jump probability for a given crossing event; one still needs to think about how various crossing events relate to the particular experiment at hand.
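To make the bookkeeping in steps 1-3 concrete, here is a minimal Python sketch of how the LZ jump probability might be evaluated at a seam point. It is not from the original text; it works in atomic units ($\hbar = 1$), and the function and variable names are hypothetical.

```python
import numpy as np

HBAR = 1.0  # atomic units

def lz_jump_probability(v_if, grad_vdiff, velocity, n_hat):
    """Landau-Zener estimate P = (2 pi / hbar) |v_if|^2 / (v_n |grad(V_f - V_i)|)
    evaluated at a point S on the intersection seam.

    v_if       : electronic coupling matrix element at S (hartree)
    grad_vdiff : gradient of V_f - V_i at S (hartree/bohr), length-3N array
    velocity   : nuclear velocity at S (bohr per atomic time unit), length-3N
    n_hat      : unit vector normal to the seam at S (the gradient direction)
    """
    v_n = abs(np.dot(velocity, n_hat))        # speed through the seam
    slope = abs(np.dot(grad_vdiff, n_hat))    # |grad(V_f - V_i)| along n
    p = (2.0 * np.pi / HBAR) * abs(v_if) ** 2 / (v_n * slope)
    return min(p, 1.0)  # the perturbative estimate can exceed 1; cap it
```

In a trajectory code, this number might then be compared with a uniform random number to decide whether the hop in step 3 is accepted.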
Spectroscopic Methods To follow the rate of any chemical reaction, one must have a means of monitoring the concentrations of reactant or product molecules as time evolves. In the majority of current experiments that relate to reaction dynamics, one uses some form of spectroscopic or alternative physical probe (e.g., an electrochemical signature or a mass spectrometric detection of product ions) to monitor these concentrations as functions of time. Of course, in all such measurements, one must know how the intensity of the signal detected relates to the concentration of the molecules that cause the signal. For example, in many absorption experiments, as illustrated in Figure 8.4, light is passed through a sample of thickness $L$ and the intensity of the light beam in the absence of the sample $I_0$ and with the sample present $I$ are measured. The Beer-Lambert law: $\log(I_0/I) = \varepsilon[A]L$ then allows the concentration [A] of the absorbing molecules to be determined, given the path length $L$ over which absorption occurs and given the extinction coefficient $\varepsilon$ of the absorbing molecules. These extinction coefficients, which relate to the electric dipole matrix elements as discussed in Chapter 6, are usually determined empirically by preparing a known concentration of the absorbing molecules and measuring the $I_0/I$ ratio that this concentration produces in a cell of length $L$. For molecules and ions that are extremely reactive, this calibration approach to determining $\varepsilon$ is often not feasible because one cannot prepare a sample with a known concentration that remains constant in time long enough for the experiment to be carried out. In such cases, one often must resort to using the theoretical expressions given in Chapter 6 (and discussed in most textbooks on molecular spectroscopy) to compute $\varepsilon$ in terms of the wave functions of the absorbing species. In any event, one must know how the strength of the signal relates to the concentrations of the species if one wishes to monitor chemical reaction or energy transfer rates. Because modern experimental techniques are capable of detecting molecules in particular electronic and vibration-rotation states, it has become common to use such tools to examine chemical reaction dynamics on a state-to-state level and to follow energy transfer processes, which clearly require such state-specific data. In such experiments, one seeks to learn the rate at which reactants in a specific state $\Phi_i$ react to produce products in some specific state $\Phi_f$. One of the most common ways to monitor such state-specific rates is through a so-called pump-probe experiment in which i. A short-duration light pulse is used to excite reactant molecules to some specified initial state $\Phi_i$. Usually a tunable laser is used because its narrow frequency spread allows specific states to be pumped. The time at which this pump laser thus prepares the excited reactant molecules in state $\Phi_i$ defines $t = 0$. ii. After a delay time of duration t, a second light source is used to probe the product molecules that have been formed in various final states, $\Phi_f$. Often, the frequency of this probe source is scanned so that one can examine populations of many such final states. 
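As a small illustration of the Beer-Lambert bookkeeping described above, the following sketch (not part of the original text; the numbers are hypothetical and the base-10 logarithm is assumed, as is conventional) inverts $\log(I_0/I) = \varepsilon[A]L$ to recover a concentration from measured intensities.

```python
import math

def concentration(I0, I, epsilon, L):
    """Invert the Beer-Lambert law log10(I0/I) = epsilon [A] L for [A].

    epsilon : extinction coefficient, e.g. in L mol^-1 cm^-1
    L       : path length in cm; [A] is then returned in mol L^-1
    """
    return math.log10(I0 / I) / (epsilon * L)

# Hypothetical reading: 12% attenuation in a 1 cm cell with
# epsilon = 4.0e3 L mol^-1 cm^-1 gives [A] of roughly 1.4e-5 mol/L.
print(concentration(1.0, 0.88, 4.0e3, 1.0))
```

The same inversion, applied to the probe signal at each delay time, is what turns a pump-probe intensity trace into the concentration-versus-time data discussed next.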
The concentrations of reactant and product molecules in the initial and final states $\Phi_i$ and $\Phi_f$ are determined by the Beer-Lambert relation, assuming that the extinction coefficients $\varepsilon_i$ and $\varepsilon_f$ for absorption by these species and states are known. In the former case, the extinction coefficient $\varepsilon_i$ relates to absorption of the pump photons to prepare reactant molecules in the specified initial state. In the latter, $\varepsilon_f$ refers to absorption of the product molecules that are created in the state $\Phi_f$. Carrying out a series of such final-state absorption measurements at various delay times $t$ allows one to determine the concentration of these states as a function of time.

This kind of laser pump-probe experiment is used not only to probe specific electronic or vibration/rotation states of the reactants and products but also when the reaction is fast (i.e., complete in 10$^{-4}$ s or less). In these cases, one is not using the high frequency resolution of the laser but its fast time response. Because laser pulses of quite short duration can be generated, these tools are well suited to such fast chemical reaction studies. The reactions can be in the gas phase (e.g., fast radical reactions in the atmosphere or in explosions) or in solution (e.g., photo-induced electron transfer reactions in biological systems).

Beam Methods

Another approach to probing chemical reaction dynamics is to use a beam of reactant molecules A that collides with other reactants B that may also be in a beam or in a bulb in equilibrium at some temperature $T$. Such crossed-beam and beam-bulb experiments are illustrated in Figure 8.5. Almost always, these beam and bulb samples contain molecules, radicals, or ions in the gas phase, so these techniques are most prevalent in gas-phase dynamics studies.

The advantages of the crossed-beam type experiments are that:

1. one can control the velocities, and hence the collision energies, of both reagents,
2. one can examine the product yield as a function of the angle $\theta$ through which the products are scattered,
3. one can probe the velocity of the products and,
4. by using spectroscopic methods, one can determine the fraction of products generated in various internal (electronic/vibrational/rotational) states.

Such measurements allow one to gain very detailed information about how the reaction rate coefficient depends on collisional (kinetic) energy and where the total energy available to the products is deposited (i.e., into product translational energy or product internal energy). The angular distribution of product molecules can also give information about the nature of the reaction process. For example, if the A + B collision forms a long-lived (i.e., on rotational time scales) collision complex, the product C molecules display a very isotropic angular distribution. In contrast, reactions that proceed more impulsively show product angular distributions that are either strongly back-scattered or strongly forward-scattered rather than isotropic.

In beam-bulb experiments, one is not able to gain as much detailed information because one of the reactant molecules B is not constrained to be moving with a known fixed velocity in a specified direction when the $A + B \rightarrow C$ collisions occur. Instead, the B molecules collide with A molecules in a variety of orientations and with a distribution of collision energies whose range depends on the Maxwell-Boltzmann distribution of kinetic energies of the B molecules in the bulb.
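To connect this kinetic-energy distribution to a measured rate coefficient, recall (as noted for the guided-ion beam data below) that the cross-section and bimolecular rate constant are related by $\sigma v = k$. The sketch below is not from the original text; the constant cross-section and all numbers are hypothetical, and it simply averages $v$ over the Maxwell-Boltzmann distribution of the bulb.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def mean_relative_speed(T, mu):
    """Maxwell-Boltzmann mean relative speed <v> = sqrt(8 kB T / (pi mu)),
    with mu the A-B reduced mass in kg and T the bulb temperature in K."""
    return np.sqrt(8.0 * KB * T / (np.pi * mu))

# Hypothetical: an energy-independent cross-section of 1.0e-19 m^2 and a
# reduced mass of 2.0e-26 kg at 300 K give k = sigma * <v>.
sigma = 1.0e-19
k = sigma * mean_relative_speed(300.0, 2.0e-26)
print(k)  # in m^3 molecule^-1 s^-1
```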
The advantage of beam-bulb experiments is that one can achieve much higher collision densities than in crossed-beam experiments because the density of B molecules inside the bulb can be much higher than the densities achievable in a beam of B molecules. There are cases in which the beam-bulb experiments can be used to determine how the reaction rate depends on collision energy even though the molecules in the bulb have a distribution of kinetic energies. That is, if the species in the beam have much higher kinetic energies than most of the B molecules, then the A + B collision energy is primarily determined by the beam energy. An example of this situation is provided by so-called guided-ion beam experiments in which a beam of ions having well-specified kinetic energy $E$ impinges on molecules in a bulb having a temperature $T$ for which $kT \ll E$. Figure 8.6 illustrates data that can be extracted from such an experiment.

In Figure 8.6, we illustrate the cross-section $\sigma$ (related to the bimolecular rate constant $k$ by $\sigma v = k$, where $v$ is the relative collision speed) for production of $Na^+$ ions when a beam of $Na^+$(uracil) complexes having energy $E$ (the horizontal axis) collides with a bulb containing Xe atoms at room temperature. In this case, the reaction is simply the collision-induced dissociation (CID) process in which the complex undergoes unimolecular decomposition after gaining internal energy in collisions with Xe atoms:

$Na^+({\rm uracil}) \rightarrow Na^+ + \rm uracil.$

The primary knowledge gained in this CID experiment is the threshold energy $E^*$; that is, the minimum collision energy needed to effect dissociation of the $Na^+({\rm uracil})$ complex. This kind of data has proven to offer some of the most useful information about bond dissociation energies of a wide variety of species. In addition, the magnitude of the reaction cross-section $\sigma$ as a function of collision energy is a valuable product of such experiments. These kinds of CID beam-bulb experiments offer one of the most powerful and widely used means of determining such bond-rupture energies and reaction rate constants.

Other Methods

Of course, not all chemical reactions occur so quickly that they require the use of fast lasers to follow concentrations of reacting species or pump-probe techniques to generate and probe these molecules. For slower chemical reactions, one can use other methods for monitoring the relevant concentrations. These methods include electrochemistry (where the redox potential is the species' signature) and NMR spectroscopy (where the chemical shifts of functional groups are the signatures), both of whose instrumental response times are too slow for probing fast reactions. In addition, when the reactions under study do not proceed to completion but exist in equilibrium with a back reaction, alternative approaches can be used. The example discussed in Chapter 5 is one such case.
Let us briefly review it here and again consider the reaction of an enzyme E and a substrate S to form the enzyme-substrate complex ES:

$E + S \rightleftharpoons ES.$

In the perturbation-type experiments, the equilibrium concentrations of the species are "shifted" by a small amount $\delta$ by application of the perturbation, so that

$[ES] = [ES]_{\rm eq} -\delta$

$[E] = [E]_{\rm eq} +\delta$

$[S] = [S]_{\rm eq} +\delta.$

Subsequently, the following rate law will govern the time evolution of the concentration change $\delta$:

$-\dfrac{d\delta}{dt} = - k_r ([ES]_{\rm eq} -\delta) + k_f ([E]_{\rm eq} +\delta) ([S]_{\rm eq} +\delta).$

Assuming that $\delta$ is very small (so that the term involving $\delta^2$ can be neglected) and using the fact that the forward and reverse rates balance at equilibrium, this equation for the time evolution of $\delta$ can be reduced to:

$-\dfrac{d\delta}{dt} = (k_r + k_f [S]_{\rm eq} + k_f[E]_{\rm eq}) \delta.$

So, the concentration deviations from equilibrium will return to equilibrium exponentially with an effective rate coefficient that is equal to a sum of terms:

$k_{\rm eff} = k_r + k_f [S]_{\rm eq} + k_f[E]_{\rm eq}.$

So, by following the concentrations of the reactants or products as they return to their equilibrium values, one can extract the effective rate coefficient $k_{\rm eff}$. Doing this at a variety of different initial equilibrium concentrations (e.g., $[S]_{\rm eq}$ and $[E]_{\rm eq}$), and seeing how $k_{\rm eff}$ changes, one can then determine both the forward and reverse rate constants.

Contributors and Attributions

Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry

Integrated by Tomoyuki Hayashi (UC Davis)
The following are some problems that will help you refresh your memory about material you should have learned in undergraduate chemistry classes and that allow you to exercise the material taught in this text.

Chapters of Part 1

1. You should be able to set up and solve the one- and two-dimensional particle in a box Schrödinger equations. I suggest you now try this and make sure you see:

1. How the second order differential equations have two independent solutions, so the most general solution is a sum of these two.
2. How the two boundary conditions reduce the number of acceptable solutions from two to one and limit the values of $E$ that can be "allowed".
3. How the wave function is continuous even at the box boundaries, but $\dfrac{d\Psi}{dx}$ is not. In general $\dfrac{d\Psi}{dx}$, which relates to the momentum because $-i\hbar \dfrac{d}{dx}$ is the momentum operator, is continuous except at points where the potential $V(x)$ undergoes an infinite jump as it does at the box boundaries. The infinite jump in $V$, when viewed classically, means that the particle would undergo an instantaneous reversal in momentum at this point, so its momentum would not be continuous. Of course, in any realistic system, $V$ does not have infinite jumps, so momentum will vary smoothly and thus $\dfrac{d\Psi}{dx}$ will be continuous.
4. How the energy levels grow with quantum number $n$ as $n^2$.
5. What the wave functions look like when plotted.

2. You should go through the various wave functions treated in Part 1 (e.g., particles in boxes, rigid rotor, harmonic oscillator) and make sure you see how the $|\Psi|^2$ probability plots of such functions are not at all like the classical probability distributions except when the quantum number is very large.

3. You should make sure you understand how the time evolution of an eigenstate $\Psi$ produces a simple $\exp(-i tE/ \hbar)$ multiple of $\Psi$ so that $|\Psi|^2$ does not depend on time. However, when $\Psi$ is not an eigenstate (e.g., when it is a combination of such states), its time propagation produces a $\Psi$ whose $|\Psi|^2$ probability distribution changes with time.

4. You should notice that the densities of states appropriate to the 1-, 2-, and 3-dimensional particle in a box problem (which relate to translations in these dimensions) depend on different powers of $E$ for the different dimensions.

5. You should be able to solve 2x2 and 3x3 Hückel matrix eigenvalue problems both to obtain the orbital energies and the normalized eigenvectors. For practice, try to do so for

1. the allyl radical's three $\pi$ orbitals
2. the cyclopropenyl radical's three $\pi$ orbitals.

Do you see that the algebra needed to find the above sets of orbitals is exactly the same as was needed when we treat the linear and triangular sodium trimer?

6. You should be able to follow the derivation of the tunneling probability. Doing this offers a good test of your ability to apply the boundary conditions properly, so I suggest you do this task. You should appreciate how the tunneling probability decays exponentially with the "thickness" of the tunneling barrier and with the "height" of this barrier and that tunneling for heavier particles is less likely than for light particles. This is why tunneling usually is considered only for electrons, protons, and neutrons.

7. I do not expect that you could carry off a full solution to the Schrödinger equation for the hydrogenic atom. However, I think you need to pay attention to

1.
How separation of variables leads to a radial and two angular second order differential equations.
2. How the boundary condition that $\phi$ and $\phi + 2\pi$ are equivalent points in space produces the $m$ quantum number.
3. How the $l$ quantum number arises from the $\theta$ equation.
4. How the condition that the radial wave function not "explode" (i.e., go to infinity) as the coordinate $r$ becomes large gives rise to the equation for the energy $E$.
5. The fact that the angular parts of the wave functions are spherical harmonics, and that these are exactly the same wave functions for the rotational motion of a linear molecule.
6. How the energy $E$ depends on the $n$ quantum number as $n^{-2}$ and on the nuclear charge $Z$ as $Z^2$, and that the bound state energies are negative (do you understand what this means? That is, what is the zero or reference point of energy?).

8. You should make sure that you are familiar with how the rigid-rotor and harmonic oscillator energies vary with quantum numbers ($J$, $M$ in the former case, $v$ in the latter). You should also know how these energies depend on the molecular geometry (in the former) and on the force constant and reduced mass (in the latter). You should note that $E$ depends quadratically on $J$ but linearly on $v$.

9. You should know what the Morse potential is and what its parameters mean. You should understand that the Morse potential displays anharmonicity, but the harmonic potential does not.

10. You should be able to follow how the mass-weighted Hessian matrix can be used to approximate the vibrational motions of a polyatomic molecule. And, you should understand how the eigenvalues of this matrix produce the harmonic vibrational frequencies and the corresponding eigenvectors describe the motions of the molecule associated with these frequencies.

Practice with matrices and operators

1. Find the eigenvalues and corresponding normalized eigenvectors of the following matrices:

$\left[\begin{array}{cc} -1 & 2\\ 2 & 2 \end{array}\right]$

$\left[\begin{array}{ccc} -2 & 0 & 0\\ 0 & -1 & 2 \\ 0 & 2 & 2 \end{array}\right]$

2. Replace the following classical mechanical expressions with their corresponding quantum mechanical operators:

• K.E. = $\dfrac{mv^2}{2}$ in three-dimensional space.
• $\textbf{p} = m\textbf{v}$, a three-dimensional Cartesian vector.
• $y$-component of angular momentum: $L_y = zp_x - xp_z$.

3. Transform the following operators into the specified coordinates:

• $L_x = -i\hbar\Big(y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y}\Big)$ from Cartesian to spherical polar coordinates.
• $L_z = -i\hbar\dfrac{\partial}{\partial \phi}$ from spherical polar to Cartesian coordinates.

4. Match the eigenfunctions in column B to their operators in column A. What is the eigenvalue for each eigenfunction?

$\begin{array}{lll} \phantom{aaaaa} & \text{ Column A } & \text{ Column B } \\ \hline {\rm i.} & (1-x^2)\dfrac{d^2}{dx^2} - x\dfrac{d}{dx} & 4x^4 - 12x^2 + 3 \\ {\rm ii.} & \dfrac{d^2}{dx^2} & 5x^4 \\ {\rm iii.} & x\dfrac{d}{dx} & e^{3x} + e^{-3x} \\ {\rm iv.} & \dfrac{d^2}{dx^2}- 2x\dfrac{d}{dx} & x^2 - 4x + 2 \\ {\rm v.} & x \dfrac{d^2}{dx^2}+ (1-x) \dfrac{d}{dx} & 4x^3 - 3x \end{array}$

Review of shapes of orbitals

5. Draw qualitative shapes of the (1) $s$, (3) $p$ and (5) $d$ atomic orbitals (note that these orbitals represent only the angular portion and do not contain the radial portion of the hydrogen like atomic wave functions). Indicate the relative signs of the wave functions and the position(s) (if any) of any nodes.

6. Plot the radial portions of the $4s$, $4p$, $4d$, and $4f$ hydrogen like atomic wave functions.

7.
Plot the radial portions of the $1s$, $2s$, $2p$, $3s$, and $3p$ hydrogen like atomic wave functions for the Si atom using screening concepts for any inner electrons.

Labeling orbitals using point group symmetry

8. Define the symmetry adapted "core" and "valence" atomic orbitals of the following systems:

• $NH_3$ in the $C_{3v}$ point group,
• $H_2O$ in the $C_{2v}$ point group,
• $H_2O_2$ (cis) in the $C_2$ point group
• $N$ in $D_{\infty h}$, $D_{2h}$, $C_{2v}$, and $C_{s}$ point groups
• $N_2$ in $D_{\infty h}$, $D_{2h}$, $C_{2v}$, and $C_{s}$ point groups.

problem to practice the basic tools of the Schrödinger equation

9. A particle of mass $m$ moves in a one-dimensional box of length $L$, with boundaries at $x = 0$ and $x = L$. Thus, $V(x) = 0$ for $0 \le x \le L$, and $V(x) = \infty$ elsewhere. The normalized eigenfunctions of the Hamiltonian for this system are given by $\Psi_n(x) = \sqrt{\dfrac{2}{L}} \sin\dfrac{n\pi x}{L}$, with $E_n =\dfrac{n^2\pi^2\hbar^2}{2mL^2}$, where the quantum number $n$ can take on the values $n=1,2,3,....$

a. Assuming that the particle is in an eigenstate, $\Psi_n(x)$, calculate the probability that the particle is found somewhere in the region $0 \le x \le \dfrac{L}{4}$. Show how this probability depends on $n$.

b. For what value of $n$ is there the largest probability of finding the particle in $0 \le x \le \dfrac{L}{4}$?

c. Now assume that $\Psi$ is a superposition of two eigenstates, $\Psi = a\Psi_n + b\Psi_m, \text{ at time } t = 0.$ What is $\Psi$ at time $t$? What energy expectation value does $\Psi$ have at time $t$ and how does this relate to its value at $t = 0$?

d. For an experimental measurement which is capable of distinguishing systems in state $\Psi_n$ from those in $\Psi_m$, what fraction of a large number of systems each described by $\Psi$ will be observed to be in $\Psi_n$? What energies will these experimental measurements find and with what probabilities?

e. For those systems originally in $\Psi = a\Psi_n + b\Psi_m$ which were observed to be in $\Psi_n$ at time $t$, what state ($\Psi_n$, $\Psi_m$, or whatever) will they be found in if a second experimental measurement is made at a time $t'$ later than $t$?

f. Suppose by some method (which need not concern us at this time) the system has been prepared in a nonstationary state (that is, it is not an eigenfunction of $H$). At the time of a measurement of the particle's energy, this state is specified by the normalized wave function $\Psi = \sqrt{\dfrac{30}{L^5}}x(L-x)$ for $0 \le x \le L$, and $\Psi = 0$ elsewhere. What is the probability that a measurement of the energy of the particle will give the value $E_n =\dfrac{n^2\pi^2\hbar^2}{2mL^2}$ for any given value of $n$?

g. What is the expectation value of $H$, i.e. the average energy of the system, for the wave function $\Psi$ given in part f?

problem on the properties of non-stationary states

10. Show that for a system in a non-stationary state, $\Psi = \sum_j C_j\Psi_j e^{-iE_jt/\hbar}$, the average value of the energy does not vary with time but the expectation values of other properties do vary with time.

problem about Jahn-Teller distortion

11.
The energy states and wave functions for a particle in a 3-dimensional box whose lengths are $L_1$, $L_2$, and $L_3$ are given by

$E(n_1,n_2,n_3) = \dfrac{h^2}{8m}\left[\Big(\dfrac{n_1}{L_1}\Big)^2+\Big(\dfrac{n_2}{L_2}\Big)^2+\Big(\dfrac{n_3}{L_3}\Big)^2\right]\text{ and}$

$\Psi(n_1,n_2,n_3) = \sqrt{\dfrac{2}{L_1}}\sqrt{\dfrac{2}{L_2}}\sqrt{\dfrac{2}{L_3}} \sin\Big(\dfrac{n_1\pi x}{L_1}\Big) \sin\Big(\dfrac{n_2\pi y}{L_2}\Big) \sin\Big(\dfrac{n_3\pi z}{L_3}\Big).$

These wave functions and energy levels are sometimes used to model the motion of electrons in a central metal atom (or ion) which is surrounded by six ligands in an octahedral manner.

1. Show that the lowest energy level is nondegenerate and the second energy level is triply degenerate if $L_1= L_2= L_3$. What values of $n_1$, $n_2$, and $n_3$ characterize the states belonging to the triply degenerate level?

2. For a box of volume $V = L_1L_2L_3$, show that for three electrons in the box (two in the nondegenerate lowest "orbital", and one in the next), a lower total energy will result if the box undergoes a rectangular distortion ($L_1= L_2\ne L_3$) which preserves the total volume than if the box remains undistorted (hint: if $V$ is fixed and $L_1 = L_2$, then $L_3 =\dfrac{V}{L_1^2}$ and $L_1$ is the only "variable").

3. Show that the degree of distortion (ratio of $L_3$ to $L_1$) which will minimize the total energy is $L_3 = \sqrt{2}L_1$. How does this problem relate to Jahn-Teller distortions? Why (in terms of the property of the central atom or ion) do we do the calculation with fixed volume?

4. By how much (in eV) will distortion lower the energy (from its value for a cube, $L_1= L_2= L_3$) if $V = 8$ Å$^3$ and $\dfrac{h^2}{8m} = 6.01 \times 10^{-27}$ erg cm$^2$? 1 eV = 1.6 x 10$^{-12}$ erg.

particle on a ring model for electrons moving in cyclic compounds

12. The $\pi$-orbitals of benzene, $C_6H_6$, may be modeled very crudely using the wave functions and energies of a particle on a ring. Let's first treat the particle on a ring problem and then extend it to the benzene system.

1. Suppose that a particle of mass $m$ is constrained to move on a circle (of radius $r$) in the $xy$ plane. Further assume that the particle's potential energy is constant (choose zero as this value). Write down the Schrödinger equation in the normal Cartesian coordinate representation. Transform this Schrödinger equation to cylindrical coordinates where $x = r\cos\phi$, $y = r\sin\phi$, and $z = z$ ($z = 0$ in this case). Taking $r$ to be held constant, write down the general solution, $\Phi(\phi)$, to this Schrödinger equation. The "boundary" conditions for this problem require that $\Phi(\phi) = \Phi(\phi+2\pi)$. Apply this boundary condition to the general solution. This results in the quantization of the energy levels of this system. Write down the final expression for the normalized wave functions and quantized energies. What is the physical significance of these quantum numbers that can have both positive and negative values? Draw an energy diagram representing the first five energy levels.

2. Treat the six $\pi$-electrons of benzene as particles free to move on a ring of radius 1.40 Å, and calculate the energy of the lowest electronic transition. Make sure the Pauli principle is satisfied! What wavelength does this transition correspond to? Suggest some reasons why this differs from the wavelength of the lowest observed transition in benzene, which is 2600 Å.

non-stationary state wave function

13.
A diatomic molecule constrained to rotate on a flat surface can be modeled as a planar rigid rotor (with eigenfunctions, $\Phi(\phi)$, analogous to those of the particle on a ring of problem 12) with fixed bond length $r$. At $t = 0$, the rotational (orientational) probability distribution is observed to be described by a wave function $\Psi(\phi,0) = \sqrt{\dfrac{4}{3\pi}}\cos^2\phi$. What values, and with what probabilities, of the rotational angular momentum, $-i\hbar\dfrac{\partial}{\partial \phi}$, could be observed in this system? Explain whether these probabilities would be time dependent as $\Psi(\phi,0)$ evolves into $\Psi(\phi,t)$.

problem about Franck-Condon factors

14. Consider an $N_2$ molecule, in the ground vibrational level of the ground electronic state, which is bombarded by 100 eV electrons. This leads to ionization of the $N_2$ molecule to form $N_2^+$. In this problem we will attempt to calculate the vibrational distribution of the newly-formed $N_2^+$ ions, using a somewhat simplified approach.

a. Calculate (according to classical mechanics) the velocity (in cm/sec) of a 100 eV electron, ignoring any relativistic effects. Also calculate the amount of time required for a 100 eV electron to pass an $N_2$ molecule, which you may estimate as having a length of 2 Å.

b. The radial Schrödinger equation for a diatomic molecule treating vibration as a harmonic oscillator can be written as:

$-\dfrac{\hbar^2}{2\mu r^2}\left(\dfrac{\partial}{\partial r}\left( r^2 \dfrac{\partial \Psi}{\partial r}\right)\right)+ \dfrac{k}{2}(r-r_e)^2 \Psi =E\Psi.$

Substituting $\Psi(r) =\dfrac{F(r)}{r}$, this equation can be rewritten as:

$-\dfrac{\hbar^2}{2\mu}\dfrac{\partial^2}{\partial r^2}F(r) + \dfrac{k}{2}(r-r_e)^2F(r) = EF(r) .$

The vibrational Hamiltonian for the ground electronic state of the $N_2$ molecule within this approximation is given by:

$H({\rm N_2}) = -\dfrac{\hbar^2}{2\mu}\dfrac{\partial^2}{\partial r^2}+ \dfrac{k_{\rm N_2}}{2}(r-r_{\rm N_2})^2,$

where $r_{\rm N_2}$ and $k_{\rm N_2}$ have been measured experimentally to be:

$r_{\rm N_2} = 1.09769\ Å; \hspace{1cm} k_{\rm N_2} = 2.294 \times 10^6 \dfrac{\rm g}{\rm sec^2}.$

The vibrational Hamiltonian for the $N_2^+$ ion, however, is given by:

$H({\rm N_2^+}) = -\dfrac{\hbar^2}{2\mu}\dfrac{\partial^2}{\partial r^2}+ \dfrac{k_{\rm N_2^+}}{2}(r-r_{\rm N_2^+})^2,$

where $r_{\rm N_2^+}$ and $k_{\rm N_2^+}$ have been measured experimentally to be:

$r_{\rm N_2^+}= 1.11642\ Å;\hspace{1cm} k_{\rm N_2^+} = 2.009 \times 10^6 \dfrac{\rm g}{\rm sec^2}.$

In both systems the reduced mass is $\mu = 1.1624 \times 10^{-23}$ g. Use the above information to write out the ground state vibrational wave functions of the $N_2$ and $N_2^+$ molecules, giving explicit values for any constants which appear in them. The $v = 0$ harmonic oscillator function is $\Psi_0 = \Big(\dfrac{\alpha}{\pi}\Big)^{1/4} \exp(-\alpha x^2/2)$.

c. During the time scale of the ionization event (which you calculated in part a), the vibrational wave function of the $N_2$ molecule has effectively no time to change. As a result, the newly-formed $N_2^+$ ion finds itself in a vibrational state which is not an eigenfunction of the new vibrational Hamiltonian, $H({\rm N}_2^+)$. Assuming that the $N_2$ molecule was originally in its $v=0$ vibrational state, calculate the probability that the $N_2^+$ ion will be produced in its $v=0$ vibrational state.

Vibration of a diatomic molecule

15. The force constant, $k$, of the $C-O$ bond in carbon monoxide is $1.87 \times 10^6 \dfrac{\rm g}{\rm sec^2}$.
Assume that the vibrational motion of $CO$ is purely harmonic and use the reduced mass $\mu= 6.857$ amu. Calculate the spacing between vibrational energy levels in this molecule, in units of ergs and cm$^{-1}$. Calculate the uncertainty in the internuclear distance in this molecule, assuming it is in its ground vibrational level. Use the ground state vibrational wave function ($\Psi_{v=0}$; recall that I gave you this function in problem 14), and calculate $\langle x\rangle$, $\langle x^2\rangle$, and $\Delta x = \sqrt{\langle x^2\rangle - \langle x\rangle ^2}$. Under what circumstances (i.e. large or small values of $k$; large or small values of $\mu$) is the uncertainty in internuclear distance large? Can you think of any relationship between this observation and the fact that helium remains a liquid down to absolute zero?

Variational Method Problem

16. A particle of mass $m$ moves in a one-dimensional potential whose Hamiltonian is given by

$H = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}+ a|x| ,$

where the absolute value function is defined by $|x| = x$ if $x \ge 0$ and $|x| = -x$ if $x \le 0$.

1. Use the normalized trial wavefunction $\phi = \Big(\dfrac{2b}{\pi}\Big)^{\frac{1}{4}}e^{-bx^2}$ to estimate the energy of the ground state of this system, using the variational principle to evaluate $W(b)$, the variational expectation value of $H$.

2. Optimize $b$ to obtain the best approximation to the ground state energy of this system, using a trial function of the form of $\phi$, as given above. The numerically calculated exact ground state energy is $0.808616\, \hbar^{\frac{2}{3}}m^{-\frac{1}{3}}a^{\frac{2}{3}}$. What is the percent error in your value?

Another Variational Method Problem

17. The harmonic oscillator is specified by the Hamiltonian:

$H = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}+ \dfrac{1}{2}kx^2 .$

Suppose the ground state solution to this problem were unknown, and that you wish to approximate it using the variational theorem. Choose as your trial wavefunction,

$\begin{array}{ll} \phi = \sqrt{\dfrac{15}{16}}a^{-\frac{5}{2}}(a^2-x^2) & \text{for } -a < x < a\\ \phi = 0 & \text{for } |x|\ge a \end{array}$

where $a$ is an arbitrary parameter which specifies the range of the wavefunction. Note that $\phi$ is properly normalized as given.

a. Calculate $\displaystyle\int_{-\infty}^{\infty}\phi^*H\phi\, dx$ and show it to be given by:

$\int_{-\infty}^{\infty}\phi^*H\phi\, dx=\dfrac{5}{4}\dfrac{\hbar^2}{ma^2} + \dfrac{ka^2}{14}.$

b. Calculate $\displaystyle\int_{-\infty}^{\infty}\phi^*H\phi\, dx$ for $a = b\Big(\dfrac{\hbar^2}{km}\Big)^{\frac{1}{4}}$.

c. To find the best approximation to the true wavefunction and its energy, find the minimum of $\displaystyle\int_{-\infty}^{\infty}\phi^*H\phi\, dx$ by setting $\displaystyle\dfrac{d}{da}\int_{-\infty}^{\infty}\phi^*H\phi\, dx= 0$ and solving for $a$. Substitute this value into the expression for $\displaystyle\int_{-\infty}^{\infty}\phi^*H\phi\, dx$ given in part a. to obtain the best approximation for the energy of the ground state of the harmonic oscillator.

d. What is the percent error in your calculated energy of part c.?

Perturbation Theory Problem

18. Consider an electron constrained to move on the surface of a sphere of radius $r_0$. The Hamiltonian for such motion consists of a kinetic energy term only $H_0 = \dfrac{L^2}{2m_er_0^2}$, where $L$ is the orbital angular momentum operator involving derivatives with respect to the spherical polar coordinates ($\theta,\phi$). $H_0$ has the complete set of eigenfunctions $\psi= Y_{l,m}(\theta,\phi)$.

a.
Compute the zeroth order energy levels of this system.

b. A uniform electric field is applied along the $z$-axis, introducing a perturbation $V = -e\varepsilon z = -e\varepsilon r_0\cos\theta$, where $\varepsilon$ is the strength of the field. Evaluate the correction to the energy of the lowest level through second order in perturbation theory, using the identity

$\cos\theta Y_{l,m}(\theta,\phi) = \sqrt{\dfrac{(l+m+1)(l-m+1)}{(2l+1)(2l+3)}}Y_{l+1,m}(\theta,\phi) + \sqrt{\dfrac{(l+m)(l-m)}{(2l+1)(2l-1)}}Y_{l-1,m}(\theta,\phi) .$

Note that this identity enables you to utilize the orthonormality of the spherical harmonics.

c. The electric polarizability $\alpha$ gives the response of a molecule to an externally applied electric field, and is defined by $\alpha = -\dfrac{\partial^2 E}{\partial \varepsilon^2}$ where $E$ is the energy in the presence of the field and $\varepsilon$ is the strength of the field. Calculate $\alpha$ for this system.

d. Use this problem as a model to estimate the polarizability of a hydrogen atom, where $r_0 = a_0 = 0.529$ Å, and a cesium atom, which has a single 6s electron with $r_0 \approx 2.60$ Å. The corresponding experimental values are $\alpha_H = 0.6668$ Å$^3$ and $\alpha_{Cs} = 59.6$ Å$^3$.

Hartree-Fock problem you can do by hand

19. Given the following orbital energies (in hartrees) for the $N$ atom and the coupling elements between two like atoms (these coupling elements are the Fock matrix elements from standard ab-initio minimum-basis SCF calculations), calculate the molecular orbital energy levels and orbitals. Draw the orbital correlation diagram for formation of the $N_2$ molecule. Indicate the symmetry of each atomic and each molecular orbital. Designate each of the molecular orbitals as bonding, non-bonding, or antibonding.

$N_{1s} = -15.31^*$
$N_{2s} = -0.86^*$
$N_{2p} = -0.48^*$

$N_2$ $\sigma_g$ Fock matrix*

$\left[\begin{array}{ccc} -6.52 & & \\ -6.22 & -7.06 & \\ 3.61 & 4.00 & -3.92 \end{array}\right]$

$N_2$ $\pi_g$ Fock matrix* $[0.28]$

$N_2$ $\sigma_u$ Fock matrix*

$\left[\begin{array}{ccc} 1.02 & & \\ -0.60 & -7.59 & \\ 0.02 & 7.42 & -8.53 \end{array}\right]$

$N_2$ $\pi_u$ Fock matrix* $[-0.58]$

*The Fock matrices (and orbital energies) were generated using standard minimum basis set SCF calculations. The Fock matrices are in the orthogonal basis formed from these orbitals.

orbital correlation diagram problem

20. Given the following valence orbital energies for the $C$ atom and $H_2$ molecule, draw the orbital correlation diagram for formation of the $CH_2$ molecule (via a $C_{2v}$ insertion of $C$ into $H_2$ resulting in bent $CH_2$). Designate the symmetry of each atomic and molecular orbital in both their highest point group symmetry and in that of the reaction path ($C_{2v}$).

$\begin{array}{ll} C_{1s} = -10.91^* & H_2\; \sigma_g = -0.58^*\\ C_{2s} = -0.60^* & H_2\; \sigma_u = 0.67^*\\ C_{2p} = -0.33^* & \end{array}$

*The orbital energies were generated using standard STO3G minimum basis set SCF calculations.

Practice using point group symmetry

21. Qualitatively analyze the electronic structure (orbital energies and orbitals) of $PF_5$. Analyze only the $3s$ and $3p$ electrons of P and the one $2p$ bonding electron of each $F$. Proceed with a $D_{3h}$ analysis in the following manner:

1. Symmetry adapt the top and bottom $F$ atomic orbitals.
2. Symmetry adapt the three (trigonal) $F$ atomic orbitals.
3. Symmetry adapt the P $3s$ and $3p$ atomic orbitals.
4.
Allow these three sets of $D_{3h}$ orbitals to interact and draw the resultant orbital energy diagram.
5. Symmetry label each of these molecular energy levels. Fill this energy diagram with 10 "valence" electrons.

Practice with term symbols and determinental wave functions for atoms and molecules

22. For the given orbital occupations (configurations) of the following systems, determine all possible states (all possible allowed combinations of spin and space states). There is no need to form the determinental wave functions, simply label each state with its proper term symbol.

$\begin{array}{lll} {\rm a.} & CH_2 & 1a_1{}^2\,2a_1{}^2\,1b_2{}^2\,3a_1{}^1\,1b_1{}^1\\ {\rm b.} & B_2 & 1\sigma_g{}^2\,1\sigma_u{}^2\,2\sigma_g{}^2\,2\sigma_u{}^2\,1\pi_u{}^1\,2\pi_u{}^1\\ {\rm c.} & O_2 & 1\sigma_g{}^2\,1\sigma_u{}^2\,2\sigma_g{}^2\,2\sigma_u{}^2\,1\pi_u{}^4\,3\sigma_g{}^2\,1\pi_g{}^2 \\ {\rm d.} & Ti & 1s{}^2\,2s{}^2\,2p{}^6\,3s{}^2\,3p{}^6\,4s{}^2\,3d{}^1\,4d{}^1\\ {\rm e.} & Ti & 1s{}^2\,2s{}^2\,2p{}^6\,3s{}^2\,3p{}^6\,4s{}^2\,3d{}^2 \end{array}$

23. Construct Slater determinant wave functions for each of the following states of $CH_2$:

1. $^1B_1$ ($1a_1{}^2\,2a_1{}^2\,1b_2{}^2\,3a_1{}^1\,1b_1{}^1$)
2. $^3B_1$ ($1a_1{}^2\,2a_1{}^2\,1b_2{}^2\,3a_1{}^1\,1b_1{}^1$)
3. $^1A_1$ ($1a_1{}^2\,2a_1{}^2\,1b_2{}^2\,3a_1{}^2$)

Woodward-Hoffmann rules problem

24. Let us investigate the reactions:

i. $CH_2(^1A_1) \rightarrow H_2 + C$, and
ii. $CH_2(^3B_1) \rightarrow H_2 + C$,

under an assumed $C_{2v}$ reaction pathway utilizing the following information:

$C$ atom: $^3P$, $^1D$, $^1S$

$C(^3P) + H_2 \rightarrow CH_2(^3B_1)$ $\Delta E = -78.8$ kcal/mole

$C(^1D) + H_2 \rightarrow CH_2(^1A_1)$ $\Delta E = -97.0$ kcal/mole

IP ($H_2$) > IP (2s carbon).

a. Write down (first in terms of $2p_{1,0,-1}$ orbitals and then in terms of $2p_{x,y,z}$ orbitals) the:

1. three Slater determinant (SD) wave functions belonging to the $^3P$ state all of which have $M_S = 1$,
2. five $^1D$ SD wave functions, and
3. one $^1S$ SD wave function.

b. Using the coordinate system shown below, label the hydrogen orbitals $\sigma_g$, $\sigma_u$ and the carbon $2s$, $2p_x$, $2p_y$, $2p_z$, orbitals as $a_1$, $b_1(x)$, $b_2(y)$, or $a_2$. Do the same for the $\sigma$, $\sigma$, $\sigma^*$, $\sigma^*$, $n$, and $\pi_p$ orbitals of $CH_2$.

c. Draw an orbital correlation diagram for the $CH_2 \rightarrow H_2 + C$ reactions. Try to represent the relative energy orderings of the orbitals correctly.

d. Draw a configuration correlation diagram for $CH_2(^3B_1) \rightarrow H_2 + C$ showing all configurations which arise from the $C(^3P) + H_2$ products. You can assume that doubly excited configurations lie much (~100 kcal/mole) above their parent configurations.

e. Repeat step d. for $CH_2(^1A_1) \rightarrow H_2 + C$ again showing all configurations which arise from the $C(^1D) + H_2$ products.

f. Do you expect the reaction $C(^3P) + H_2 \rightarrow CH_2$ to have a large activation barrier? About how large? What state of $CH_2$ is produced in this reaction? Would distortions away from $C_{2v}$ symmetry be expected to raise or lower the activation barrier? Show how one could estimate where along the reaction path the barrier top occurs.

g. Would $C(^1D) + H_2 \rightarrow CH_2$ be expected to have a larger or smaller barrier than you found for the $^3P$ $C$ reaction?

Another Woodward-Hoffmann rule problem

25.
The decomposition of the ground-state singlet carbene, to produce acetylene and $^1D$ carbon, is known to occur with an activation energy equal to the reaction endothermicity. However, when the corresponding triplet carbene decomposes to acetylene and ground-state (triplet) carbon, the activation energy exceeds this reaction's endothermicity. Construct orbital, configuration, and state correlation diagrams that permit you to explain the above observations. Indicate whether single configuration or configuration interaction wave functions would be required to describe the above singlet and triplet decomposition processes.

Practice with rotational spectroscopy and its relation to molecular structure

26. Consider the molecules $CCl_4$, $CHCl_3$, and $CH_2Cl_2$.

1. What kind of rotor are they (symmetric top, etc; do not bother with oblate, or near-prolate, etc.)?
2. Will they show pure rotational (i.e., microwave) spectra?

27. Assume that ammonia shows a pure rotational spectrum. If the rotational constants are 9.44 and 6.20 cm$^{-1}$, use the energy expression:

$E = (A - B) K^2 + B J(J + 1),$

to calculate the energies (in cm$^{-1}$) of the first three lines (i.e., those with lowest $K$, $J$ quantum number for the absorbing level) in the absorption spectrum (ignoring higher order terms in the energy expression).

problem on vibration-rotation spectroscopy

28. The molecule $^{11}B^{16}O$ has a vibrational frequency $\omega_e = 1885$ cm$^{-1}$, a rotational constant $B_e = 1.78$ cm$^{-1}$, and a bond energy from the bottom of the potential well of $D = 8.28$ eV. Use integral atomic masses in the following:

In the approximation that the molecule can be represented as a Morse oscillator, calculate the bond length, $R_e$ in angstroms, the centrifugal distortion constant, $D_e$ in cm$^{-1}$, the anharmonicity constant, $\omega_e x_e$ in cm$^{-1}$, the zero-point corrected bond energy, $D_0^0$ in eV, the vibration rotation interaction constant, $\alpha_e$ in cm$^{-1}$, and the vibrational state specific rotation constants, $B_0$ and $B_1$ in cm$^{-1}$. Use the vibration-rotation energy expression for a Morse oscillator:

$E = \hbar\omega_e\Big(v + \dfrac{1}{2}\Big) - \hbar\omega_e x_e\Big(v + \dfrac{1}{2}\Big)^2 + B_vJ(J + 1) - D_eJ^2(J + 1)^2,$

where

$B_v = B_e - \alpha_e\Big(v + \dfrac{1}{2}\Big),\; \alpha_e = \dfrac{-6B_e^2}{\hbar\omega_e}+ \dfrac{6\sqrt{B_e^3\hbar\omega_e x_e}}{\hbar\omega_e}, \; \text{and } D_e = \dfrac{4B_e^3}{(\hbar\omega_e)^2}.$

Will this molecule show a pure rotation spectrum? A vibration-rotation spectrum? Assuming that it does, what are the energies (in cm$^{-1}$) of the first three lines in the P branch ($\Delta v = +1, \Delta J = -1$) of the fundamental absorption?

problem labeling vibrational modes by symmetry

29. Consider trans-$C_2H_2Cl_2$. The vibrational normal modes of this molecule are shown below. What is the symmetry of the molecule? Label each of the modes with the appropriate irreducible representation.

problem in rotational spectroscopy

30. Suppose you are given two molecules (one is $CH_2$ and the other is $CH_2^-$ but you don't know which is which). Both molecules have $C_{2v}$ symmetry. The $CH$ bond length of molecule I is 1.121 Å and for molecule II it is 1.076 Å. The bond angle of molecule I is 104° and for molecule II it is 136°.

a. Using a coordinate system centered on the $C$ nucleus as shown above (the molecule is in the $YZ$ plane), compute the moment of inertia tensors of both species (I and II).
The definitions of the components of the tensor are, for example:

$I_{xx} = \sum_j m_j (y_j^2+z_j^2) - M(Y^2 + Z^2)$

$I_{xy} = -\sum_j m_j x_j y_j + MXY.$

Here, $m_j$ is the mass of the nucleus $j$, $M$ is the mass of the entire molecule, and $X$, $Y$, $Z$ are the coordinates of the center of mass of the molecule. Use Å for distances and amu's for masses.

b. Find the principal moments of inertia $I_a < I_b < I_c$ for both compounds (in amu Å$^2$ units) and convert these values into rotational constants $A$, $B$, and $C$ in cm$^{-1}$ using, for example,

$A = \dfrac{h}{8\pi^2cI_a}.$

c. Both compounds are "nearly prolate tops" whose energy levels can be well approximated using the prolate top formula:

$E = (A - B) K^2 + BJ(J + 1),$

if one uses for the $B$ constant the average of the $B$ and $C$ values determined earlier. Thus, take the $B$ and $C$ values (for each compound) and average them to produce an effective $B$ constant to use in the above energy formula. Write down (in cm$^{-1}$ units) the energy formula for both species. What values are $J$ and $K$ allowed to assume? What is the degeneracy of the level labeled by a given $J$ and $K$?

d. Draw a picture of both compounds and show the directions of the three principal axes (a, b, c). On these pictures, show the kind of rotational motion associated with the quantum number $K$.

e. Suppose you are given the photoelectron spectrum of $CH_2^-$. In this spectrum $J_j = J_i + 1$ transitions are called R-branch absorptions and those obeying $J_j = J_i - 1$ are called P-branch transitions. The spacing between lines can increase or decrease as functions of $J_i$ depending on the changes in the moment of inertia for the transition. If spacings grow closer and closer, we say that the spectrum exhibits a so-called band head formation. In the photoelectron spectrum that you are given, a rotational analysis of the vibrational lines in this spectrum is carried out and it is found that the R-branches show band head formation but the P-branches do not. Based on this information, determine which compound I or II is the $CH_2^-$ anion. Explain your reasoning.

f. At what $J$ value (of the absorbing species) does the band head occur and at what rotational energy difference?

Using point group symmetry in vibrational spectroscopy

31. Let us consider the vibrational motions of benzene. To consider all of the vibrational modes of benzene we should attach a set of displacement vectors in the $x$, $y$, and $z$ directions to each atom in the molecule (giving 36 vectors in all), and evaluate how these transform under the symmetry operations of $D_{6h}$. For this problem, however, let's only inquire about the $C-H$ stretching vibrations.

a. Represent the $C-H$ stretching motion on each $C-H$ bond by an outward-directed vector on each $H$ atom, designated $r_i$.

b. These vectors form the basis for a reducible representation. Evaluate the characters for this reducible representation under the symmetry operations of the $D_{6h}$ group.

c. Decompose the reducible representation you obtained in part b. into its irreducible components. These are the symmetries of the various $C-H$ stretching vibrational modes in benzene.

d. The vibrational state with zero quanta in each of the vibrational modes (the ground vibrational state) of any molecule always belongs to the totally symmetric representation. For benzene, the ground vibrational state is therefore of $A_{1g}$ symmetry.
An excited state which has one quantum of vibrational excitation in a mode which is of a given symmetry species has the same symmetry species as the mode which is excited (because the vibrational wave functions are given as Hermite polynomials in the stretching coordinate). Thus, for example, excitation (by one quantum) of a vibrational mode of $A_{2u}$ symmetry gives a wave function of $A_{2u}$ symmetry. To resolve the question of what vibrational modes may be excited by the absorption of infrared radiation we must examine the $x$, $y$, and $z$ components of the transition dipole integral for initial and final state wave functions $\psi_i$ and $\psi_f$, respectively:

$|\langle\psi_f | x |\psi_i \rangle| ,\;\; |\langle\psi_f | y |\psi_i \rangle| ,\; \; \text{and } |\langle\psi_f | z |\psi_i \rangle| .$

Using the information provided above, which of the $C-H$ vibrational modes of benzene will be infrared-active, and how will the transitions be polarized? How many $C-H$ vibrations will you observe in the infrared spectrum of benzene?

e. A vibrational mode will be active in Raman spectroscopy only if one or more of the following integrals is nonzero:

$|\langle\psi_f | xy |\psi_i \rangle| ,\;\;|\langle\psi_f | xz |\psi_i \rangle| , \;\; |\langle\psi_f | yz |\psi_i \rangle| ,$

$|\langle\psi_f | x^2 |\psi_i \rangle| ,\;\; |\langle\psi_f | y^2 |\psi_i \rangle| ,\;\; \text{and } |\langle\psi_f | z^2 |\psi_i \rangle| .$

Using the fact that the quadratic operators transform according to the irreducible representations:

$(x^2 + y^2, z^2) \Rightarrow A_{1g}$
$(xz, yz) \Rightarrow E_{1g}$
$(x^2 - y^2, xy) \Rightarrow E_{2g}$

determine which of the $C-H$ vibrational modes will be Raman-active.

f. Are there any of the $C-H$ stretching vibrational motions of benzene which cannot be observed in either infrared or Raman spectroscopy? Give the irreducible representation label for these unobservable modes.

problem on electronic spectra and lifetimes

32. Time dependent perturbation theory provides an expression for the radiative lifetime of an excited electronic state, given by $t_R$:

$t_R = \dfrac{3\hbar^4c^3}{4(E_i-E_f)^3|\mu_{fi}|^2},$

where $i$ refers to the excited state, $f$ refers to the lower state, and $\mu_{fi}$ is the transition dipole.

a. Evaluate the $z$-component of the transition dipole for the $2p_z \rightarrow 1s$ transition in a hydrogenic atom of nuclear charge $Z$, given:

$\psi_{1s} =\dfrac{1}{\sqrt{\pi}} \bigg(\dfrac{Z}{a_0}\bigg)^{\frac{3}{2}} \exp\bigg(-\dfrac{Zr}{a_0}\bigg) ,\text{ and } \psi_{2p_z} =\dfrac{1}{4\sqrt{2\pi}} \bigg(\dfrac{Z}{a_0}\bigg)^{\frac{5}{2}} r \cos\theta\exp\bigg(-\dfrac{Zr}{2a_0}\bigg).$

Express your answer in units of $ea_0$.

b. Use symmetry to demonstrate that the $x$- and $y$-components of $\mu_{fi}$ are zero, i.e.

$\langle 2p_z| e x |1s\rangle = \langle 2p_z| e y |1s\rangle = 0.$

c. Calculate the radiative lifetime $t_R$ of a hydrogenlike atom in its $2p_z$ state. Use the relation $e^2 =\dfrac{\hbar^2}{m_ea_0}$ to simplify your results.

difference between slowly and quickly turning on a perturbation

33. Consider a case in which the complete set of states {$\phi_k$} for a Hamiltonian is known.

a. If the system is initially in the state $m$ at time $t=0$ when a constant perturbation $V$ is suddenly turned on, find the probability amplitudes $C_k^{(2)}(t)$ and $C_m^{(2)}(t)$, to second order in $V$, that describe the system being in a different state $k$ or the same state $m$ at time $t$.

b.
If the perturbation is turned on adiabatically (i.e., very slowly), what are $C_k^{(2)}(t)$ and $C_m^{(2)}(t)$? Here, consider that the initial time is $t_0 \rightarrow -\infty$, and the potential is $V e^{\eta t}$, where the positive parameter $\eta$ is allowed to approach zero $\eta\rightarrow 0$ in order to describe the adiabatically turned on perturbation.

c. Compare the results of parts a. and b. and explain any differences.

d. Ignore first order contributions (assume they vanish) and evaluate the transition rates $|C_k^{(2)}(t)|^2$ for the results of part b. by taking the limit $\eta \rightarrow 0^+$, to obtain the adiabatic results.

example of quickly turning on a perturbation - the sudden approximation

34. Consider an interaction or perturbation which is carried out suddenly (instantaneously, e.g., within an interval of time $\Delta t$ which is small compared to the natural period $\omega_{nm}^{-1}$ corresponding to the transition from state $m$ to state $n$), and after that is turned off adiabatically (i.e., extremely slowly as $V e^{\eta t}$). The transition probability in this case is given as:

$T_{nm} \approx \dfrac{|\langle n|V|m \rangle|^2}{\hbar^2\omega_{nm}^2}$

where $V$ corresponds to the maximum value of the interaction when it is turned on. This formula allows one to calculate the transition probabilities under the action of sudden perturbations which are small in absolute value whenever perturbation theory is applicable.

Let's use this "sudden approximation" to calculate the probability of excitation of an electron under a sudden change of the charge of the nucleus. Consider the reaction:

$^3_1H \rightarrow ^3_2He^+ + e^-,$

and assume the tritium atom has its electron initially in a $1s$ orbital.

a. Calculate the transition probability for the transition $1s \rightarrow 2s$ for this reaction using the above formula for the transition probability.

b. Suppose that at time $t = 0$ the system is in a state which corresponds to the wave function $\phi_m$, which is an eigenfunction of the operator $H_0$. At $t = 0$, the sudden change of the Hamiltonian occurs (the new Hamiltonian is denoted $H$ and remains unchanged thereafter). Calculate the same $1s \rightarrow 2s$ transition probability as in part a., only this time as the square of the magnitude of the coefficient, $A_{1s,2s}$, using the expansion:

$\Psi(r,0) =\phi_m(r) =\sum_n A_{nm}\psi_n(r) ,\; \text{where}\; A_{nm} =\int \phi_m(r)\psi_n(r)d^3r.$

Note that the eigenfunctions of $H$ are $\psi_n$ with eigenvalues $E_n$. Compare this value with that obtained by perturbation theory in part a.

symmetric top rotational spectrum problem

35. The methyl iodide molecule is studied using microwave (pure rotational) spectroscopy. The following integral governs the rotational selection rules for transitions labeled $J, M, K \rightarrow J', M', K'$:

$I = \langle D_{M'K'}^{J'}|\boldsymbol{\varepsilon}\bullet\boldsymbol{\mu}|D_{MK}^J\rangle.$

The dipole moment $\boldsymbol{\mu}$ lies along the molecule's $C_3$ symmetry axis. Let the electric field of the light $\boldsymbol{\varepsilon}$ define the lab-fixed Z-direction.

a. Using the fact that $\cos\beta = D_{00}^{1*}$, show that

$I = 8\pi^2\mu\varepsilon(-1)^{(M+K)} \left(\begin{array}{ccc}J' & 1 & J \\ -M & 0 & M \end{array}\right) \left(\begin{array}{ccc}J' & 1 & J \\ -K & 0 & K \end{array}\right) \delta_{M'M}\delta_{K'K}$

b. What restrictions does this result place on $\Delta J = J' - J$? Explain physically why the $K$ quantum number can not change.

problem in electronic and photo-electron spectroscopy

36. Consider the molecule $BO$.

a.
problem in electronic and photo-electron spectroscopy 36. Consider the molecule $BO$. a. What is the total number of possible electronic states that can be formed by combination of ground-state $B$ and $O$ atoms? b. What electron configurations of the molecule are likely to be low in energy? Consider all reasonable orderings of the molecular orbitals. What are the states corresponding to these configurations? c. What are the bond orders in each of these states? d. The true ground state of $BO$ is $^2\Sigma$. Specify the +/- and u/g symmetries for this state. e. Which of the excited states you derived above will radiate to the $^2\Sigma$ ground state? Consider electric dipole radiation only. f. Does ionization of the molecule to form a cation lead to a stronger, weaker, or equivalent bond strength? g. Assuming that the energies of the molecular orbitals do not change upon ionization, what are the ground state, the first excited state, and the second excited state of the positive ion? h. Considering only these states, predict the structure of the photoelectron spectrum you would obtain for ionization of $BO$. problem on vibration-rotation spectroscopy 37. The above figure shows part of the infrared absorption spectrum of $HCN$ gas. The molecule has a $CH$ stretching vibration, a bending vibration, and a $CN$ stretching vibration. a. Are any of the vibrations of linear $HCN$ degenerate? b. To which vibration does the group of peaks between 600 cm$^{-1}$ and 800 cm$^{-1}$ belong? c. To which vibration does the group of peaks between 3200 cm$^{-1}$ and 3400 cm$^{-1}$ belong? d. What are the symmetries ($\sigma$, $\pi$, $\delta$) of the $CH$ stretch, $CN$ stretch, and bending vibrational motions? e. Starting with $HCN$ in its 0,0,0 vibrational level, which fundamental transitions would be infrared active under parallel polarized light (i.e., z-axis polarization): $000 \rightarrow 001?$ $000 \rightarrow 100?$ $000 \rightarrow 010?$ f. Why does the 712 cm$^{-1}$ transition have a Q-branch, whereas that near 3317 cm$^{-1}$ has only P- and R-branches? Problem in Which You Can Practice Deriving Equations. This is Important Because a Theory Scientist Does Derivations as Part of Her/His Job 38. By expanding the molecular orbitals {$\phi_k$} as linear combinations of atomic orbitals {$\chi_\mu$}, $\phi_k =\sum_\mu c_{\mu k}\chi_\mu$ show how the canonical Hartree-Fock (HF) equations: $F \phi_i = \varepsilon_i \phi_i$ reduce to the matrix eigenvalue-type equation of the form: $\sum_\nu F_{\mu\nu}C_{\nu i}= \varepsilon_i\sum_\nu S_{\mu\nu}C_{\nu i}$ where: $F_{\mu\nu} = \langle \chi_\mu|h|\chi_\nu\rangle+\sum_{\delta\kappa} \left[\gamma_{\delta\kappa}\langle\chi_\mu\chi_\delta|g|\chi_\nu\chi_\kappa\rangle - \gamma_{\delta\kappa}^{\rm ex} \langle\chi_\mu\chi_\delta|g|\chi_\kappa\chi_\nu\rangle\right] ,$ $S_{\mu\nu} = \langle\chi_\mu|\chi_\nu\rangle,\; \gamma_{\delta\kappa} =\sum_{i={\rm occ}}C_{\delta i}C_{\kappa i} ,$ and $\gamma_{\delta\kappa}^{\rm ex} =\sum_{i=\text{occ and same spin}} C_{\delta i}C_{\kappa i}.$ Note that the sum over $i$ in $\gamma_{\delta\kappa}$ and $\gamma_{\delta\kappa}^{\rm ex}$ is a sum over spin orbitals. In addition, show that this Fock matrix can be further reduced for the closed shell case to: $F_{\mu\nu} = \langle \chi_\mu|h|\chi_\nu\rangle +\sum_{\delta\kappa} P_{\delta\kappa}\bigg[\langle\chi_\mu\chi_\delta|g|\chi_\nu\chi_\kappa\rangle - \dfrac{1}{2} \langle\chi_\mu\chi_\delta|g|\chi_\kappa\chi_\nu\rangle\bigg] ,$ where the charge bond order matrix, $P$, is defined to be: $P_{\delta\kappa} =\sum_{i={\rm occ}} 2C_{\delta i}C_{\kappa i},$ where the sum over $i$ here is a sum over orbitals not spin orbitals.
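A hint for the closed-shell reduction in problem 38, written as a sketch (assuming real orbitals and a determinant in which each occupied spatial orbital holds one $\alpha$ and one $\beta$ electron): both spins contribute to the Coulomb-like sum, but only same-spin pairs contribute to exchange, so $\gamma_{\delta\kappa} = \sum_{i}^{\text{occ spin orbitals}} C_{\delta i}C_{\kappa i} = 2\sum_{i}^{\text{occ orbitals}} C_{\delta i}C_{\kappa i} = P_{\delta\kappa}$ while $\gamma_{\delta\kappa}^{\rm ex} = \sum_{i}^{\text{occ, same spin}} C_{\delta i}C_{\kappa i} = \dfrac{1}{2}P_{\delta\kappa}$, and substituting these into $F_{\mu\nu}$ produces the factor of $\dfrac{1}{2}$ multiplying the exchange integral.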
Another Derivation Practice Problem 39. Show that the HF total energy for a closed-shell system may be written in terms of integrals over the orthonormal HF orbitals as: $E({\rm SCF}) = 2\sum_k^{\rm occ}\langle\phi_k|h|\phi_k\rangle+\sum_{k,l}^{\rm occ} [2\langle kl|g|kl \rangle - \langle kl|g|lk\rangle] +\sum_{\mu>\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}}.$ More Derivation Problem 40. Show that the HF total energy may alternatively be expressed as: $E({\rm SCF}) =\sum_k^{\rm occ}(\varepsilon_k+\langle\phi_k|h|\phi_k\rangle)+ \sum_{\mu>\nu}\dfrac{Z_\mu Z_\nu}{R_{\mu\nu}},$ where the $\varepsilon_k$ refer to the HF orbital energies. Molecular Hartree-Fock SCF Problem 41. This problem will be concerned with carrying out an SCF calculation for the $HeH^+$ molecule in the $^1\Sigma^+(1\sigma^2)$ ground state. The one- and two-electron integrals (in atomic units) needed to carry out this SCF calculation at $R = 1.4$ a.u. using Slater-type orbitals with orbital exponents of 1.6875 and 1.0 for the $He$ and $H$, respectively, are: S11 = 1.0, S22 = 1.0, S12 = 0.5784; h11 = -2.6442, h22 = -1.7201, h12 = -1.5113; g1111 = 1.0547, g1121 = 0.4744, g1212 = 0.5664, g2211 = 0.2469, g2221 = 0.3504, g2222 = 0.6250, where 1 refers to $1s_{He}$ and 2 to $1s_H$. The two-electron integrals are given in Dirac notation. Parts a.–d. should be done by hand. Any subsequent parts can make use of the QMIC software that can be found at www.emsl.pnl.gov:2080/people/...a_nichols.html. a. Using $\phi_1 \approx 1s_{He}$ for the initial guess of the occupied molecular orbital, form a 2x2 Fock matrix. Use the equation derived above in problem 38 for $F_{\mu\nu}$. b. Solve the Fock matrix eigenvalue equations given above to obtain the orbital energies and an improved occupied molecular orbital. In so doing, note that $\langle\phi_1|\phi_1\rangle= 1 = C_1^TSC_1$ gives the needed normalization condition for the expansion coefficients of the $\phi_1$ in the atomic orbital basis. c. Determine the total SCF energy using the expression of problem 39 at this step of the iterative procedure. When will this energy agree with that obtained by using the alternative expression for $E({\rm SCF})$ given in problem 40? d. Obtain the new molecular orbital, $\phi_1$, from the solution of the matrix eigenvalue problem (part b.). e. A new Fock matrix and related total energy can be obtained with this improved choice of molecular orbital, $\phi_1$. This process can be continued until a convergence criterion has been satisfied. Typical convergence criteria include: no significant change in the molecular orbitals or the total energy (or both) from one iteration to the next. Perform this iterative procedure for the $HeH^+$ system until the difference in total energy between two successive iterations is less than $10^{-5}$ a.u. f. Show, by comparing the difference between the SCF total energy at one iteration and the converged SCF total energy, that the convergence of the above SCF approach is primarily linear (or first order). g. Is the SCF total energy calculated at each iteration of the above SCF procedure as in problem 39 an upper bound to the exact ground-state total energy? h. Does this SCF wave function give rise (at $R\rightarrow\infty$) to proper dissociation products?
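For readers who want to automate the iterations of problem 41, here is a compact sketch in Python/NumPy. It implements the closed-shell Fock matrix of problem 38, an $S^{-1/2}$ symmetric-orthogonalization route to the eigenvalue problem (one standard choice), and the energy expression of problem 39; NumPy is an assumed dependency and the variable names are ours, not QMIC's.

import numpy as np

S = np.array([[1.0, 0.5784], [0.5784, 1.0]])
h = np.array([[-2.6442, -1.5113], [-1.5113, -1.7201]])

# Two-electron integrals <ij|kl> in Dirac notation; expand the six unique
# values using the 8-fold permutational symmetry of real orbitals.
g = np.zeros((2, 2, 2, 2))
def put(i, j, k, l, v):
    for a, b, c, d in {(i,j,k,l), (k,j,i,l), (i,l,k,j), (k,l,i,j),
                       (j,i,l,k), (l,i,j,k), (j,k,l,i), (l,k,j,i)}:
        g[a, b, c, d] = v
put(0,0,0,0, 1.0547); put(0,0,1,0, 0.4744); put(0,1,0,1, 0.5664)
put(1,1,0,0, 0.2469); put(1,1,1,0, 0.3504); put(1,1,1,1, 0.6250)

# S^(-1/2) from the eigendecomposition of the overlap matrix
w, U = np.linalg.eigh(S)
X = U @ np.diag(w**-0.5) @ U.T

E_nuc = 2.0 * 1.0 / 1.4            # Z_He * Z_H / R
C = np.array([[1.0], [0.0]])       # initial guess: phi_1 ~ 1s_He
E_old = 0.0
for it in range(50):
    P = 2.0 * C @ C.T                          # charge/bond-order matrix
    J = np.einsum('dk,mdnk->mn', P, g)         # <mu delta | nu kappa>
    K = np.einsum('dk,mdkn->mn', P, g)         # <mu delta | kappa nu>
    F = h + J - 0.5 * K
    E = 0.5 * np.sum(P * (h + F)) + E_nuc      # equals the problem-39 energy
    eps, Cp = np.linalg.eigh(X @ F @ X)        # F'C' = C'E
    C = (X @ Cp)[:, [0]]                       # back-transform, keep lowest MO
    print(f"iter {it:2d}  E = {E:.8f}")
    if abs(E - E_old) < 1e-5:
        break
    E_old = E

The occupied-orbital coefficients this loop converges to can be compared (up to an overall sign) against the $\phi_1$ vector quoted in problem 42 below.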
Configuration Interaction Problem 42. This problem will continue to address the same $HeH^+$ molecular system as above, extending the analysis to include correlation effects. We will use the one- and two-electron integrals (same geometry) in the converged (to $10^{-5}$ a.u.) SCF molecular orbital basis which we would have obtained after 7 iterations above. The converged MOs you should have obtained in problem 41 are: $\phi_1 = \left[\begin{array}{c}-0.89997792\\ -0.15843012\end{array}\right] \;\; \phi_2 =\left[\begin{array}{c}-0.83233180\\ 1.21558030\end{array}\right]$ a. Carry out a two-configuration CI calculation using the $1\sigma^2$ and $2\sigma^2$ configurations, first by obtaining an expression for the CI matrix elements $H_{I,J}$ ($I,J = 1\sigma^2, 2\sigma^2$) in terms of one- and two-electron integrals, and second by showing that the resultant CI matrix is (ignoring the nuclear repulsion energy): $\left[\begin{array}{cc}-4.2720 & 0.1261 \\ 0.1261 & -2.0149\end{array}\right]$ b. Obtain the two CI energies and eigenvectors for the matrix found in part a. c. Show that the lowest energy CI wave function is equivalent to the following two-determinant (single configuration) wave function: $\dfrac{1}{2}\left\{|(\sqrt{a}\phi_1 + \sqrt{b}\phi_2) \alpha(\sqrt{a}\phi_1 - \sqrt{b}\phi_2)\beta| + |(\sqrt{a}\phi_1 - \sqrt{b}\phi_2)\alpha (\sqrt{a}\phi_1 + \sqrt{b}\phi_2)\beta|\right\}$ involving the polarized orbitals: $\sqrt{a}\phi_1 \pm \sqrt{b}\phi_2$, where $a = 0.9984$ and $b = 0.0556$. d. Expand the CI list to 3 configurations by adding $1\sigma 2\sigma$ to the original $1\sigma^2$ and $2\sigma^2$ configurations of part a above. First, express the proper singlet spin-coupled $1\sigma 2\sigma$ configuration as a combination of Slater determinants and then compute all elements of this 3x3 matrix. e. Obtain all eigenenergies and corresponding normalized eigenvectors for this CI problem. f. Determine the excitation energies and transition moments for $HeH^+$ using the full CI result of part e above. The nonvanishing matrix elements of the dipole operator $\textbf{r}(x,y,z)$ in the atomic basis are: $\langle1s_H|z|1s_{He}\rangle= 0.2854 \text{ and } \langle1s_H|z|1s_H\rangle= 1.4.$ First determine the matrix elements of $\textbf{r}$ in the SCF orbital basis, then determine the excitation energies and transition moments from the ground state to the two excited singlet states of $HeH^+$. g. Now turning to perturbation theory, carry out a perturbation theory calculation of the first-order wave function $|1\sigma^2\rangle^{(1)}$ for the case in which the zeroth-order wave function is taken to be the $1\sigma^2$ Slater determinant. Show that the first-order wave function is given by: $|1\sigma^2\rangle^{(1)} = -0.0442| 2\sigma^2\rangle.$ h. Why does the $|1\sigma2\sigma\rangle$ configuration not enter into the first-order wave function? i. Normalize the resultant wave function that contains zeroth- plus first-order parts and compare it to the wave function obtained in the two-configuration CI study of part b. j. Show that the second-order RSPT correlation energy, $E^{(2)}$, of $HeH^+$ is -0.0056 a.u. How does this compare with the correlation energy obtained from the two-configuration CI study of part b?
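Parts b. and c. of problem 42 can be spot-checked numerically; a minimal sketch (NumPy assumed):

import numpy as np

# Two-configuration CI matrix from part a (nuclear repulsion omitted)
H = np.array([[-4.2720, 0.1261],
              [0.1261, -2.0149]])
E, C = np.linalg.eigh(H)           # eigenvalues in ascending order
print("CI energies:", E)           # approximately -4.2790 and -2.0079 hartree
print("ground-state eigenvector:", C[:, 0])

The ground-state eigenvector comes out as roughly $(0.9984, -0.0556)$ up to an overall sign, i.e. exactly the $a$ and $b$ quoted in part c., since the symmetric two-determinant form there expands to $a|1\sigma^2| - b|2\sigma^2|$.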
Repeating the SCF Problem but With a Computer Program 43. Using either programs that are available to you or the QMIC programs that you can find at the web site www.emsl.pnl.gov:2080/people/...ichols_ja.html calculate the SCF energy of $HeH^+$ using the same geometry as in problem 42 and the STO3G basis set provided in the QMIC basis set library. How does this energy compare to that found in problem 42? Run the calculation again with the 3-21G basis provided. How does this energy compare to the STO3G result and the energy found using STOs in problem 42? Series of SCF Calculations to Produce a Potential Energy Curve 44. Generate SCF potential energy curves for $HeH^+$ and $H_2$ using the QMIC software or your own programs. Use the 3-21G basis set and generate points for geometries of $R = 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5,$ and $10.0\ a_0$. Plot the energies vs. geometry for each system. Which system dissociates properly? Configuration Interaction Potential Curves for Several States 45. Generate CI potential energy curves for the 4 states of $H_2$ resulting from a calculation with 2 electrons occupying the lowest 2 SCF orbitals ($1\sigma_g$ and $1\sigma_u$) in all possible ways. Use the same geometries and basis set as in problem 44. Plot the energies vs. geometry for each system. Properly label and characterize each of the states (e.g., repulsive, dissociates properly, etc.). Problem on Partition Functions and Thermodynamic Properties 46. F atoms have $1s^2\,2s^2\,2p^5$ $^2P$ ground electronic states that are split by spin-orbit coupling into $^2P_{3/2}$ and $^2P_{1/2}$ states that differ by only 0.05 eV in energy. a. Write the electronic partition function (take the energy of the $^2P_{3/2}$ state to be zero and that of the $^2P_{1/2}$ state to be 0.05 eV and ignore all other states) for each F atom. b. Using $\bar{E} = NkT^2\left(\dfrac{\partial \ln q}{\partial T}\right)$, derive an expression for the average electronic energy of $N$ gaseous F atoms. c. Using the fact that $kT=0.03$ eV at $T=300$ K, make a (qualitative) graph of $\bar{E}/N$ vs $T$ for $T$ ranging from 100 K to 3000 K. Problem Using Transition State Theory 47. Suppose that we used transition state theory to study the reaction $NO(g) + Cl_2(g) \rightarrow NOCl(g) + Cl(g)$, assuming it to proceed through a bent transition state, and we obtained an expression for the rate coefficient $k_{\rm bent}=\dfrac{kT}{h}e^{-E^\ddagger/kT}\dfrac{\left(\dfrac{q^\ddagger}{V}\right)}{\left(\dfrac{q_{NO}}{V}\right) \left(\dfrac{q_{Cl_2}}{V}\right)}$ a. Now, let us consider what differences would occur if the transition state structure were linear rather than bent. Assuming that the activation energy $E^\ddagger$ and electronic state degeneracies are not altered, derive an expression for the ratio of the rate coefficients for the linear and bent transition state cases, $\dfrac{k_{\rm linear}}{k_{\rm bent}}$ b. Using the following order of magnitude estimates of translational, rotational, and vibrational partition functions per degree of freedom at 300 K, $q_t \sim 10^8, q_r \sim 10^2, q_v \sim 1,$ what ratio would you expect for $k_{\rm linear}/k_{\rm bent}$?
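A sketch of the counting argument for problem 47 (assuming that the translational factors and all common vibrational factors cancel between the two transition states): the four-atom bent complex has 3 rotations and $3N-6 = 6$ vibrations, while the linear complex has 2 rotations and $3N-5 = 7$ vibrations, so going from bent to linear trades one rotation for one vibration, and $\dfrac{k_{\rm linear}}{k_{\rm bent}} = \dfrac{q_v}{q_r} \approx \dfrac{1}{10^2} = 10^{-2}.$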
Problem With Slater Determinants 48. Show that the configuration (determinant) corresponding to the $Li^+$ $1s(\alpha)1s(\alpha)$ state vanishes. Another Problem With Slater Determinants and Angular Momenta 49. Construct the 3 triplet and 1 singlet wave functions for the $Li^+$ $1s^12s^1$ configuration. Show that each state is a proper eigenfunction of $\textbf{S}^2$ and $S_z$ (use raising and lowering operators for $\textbf{S}^2$). Problem With Slater Determinants for a Linear Molecule 50. Construct determinant wave functions for each state of the $1\sigma^22\sigma^23\sigma^21\pi^2$ configuration of $NH$. Problem With Slater Determinants for an Atom 51. Construct determinant wave functions for each state of the $1s^12s^13s^1$ configuration of $Li$. Problem on Angular Momentum of an Atom 52. Determine all term symbols that arise from the $1s^22s^22p^23d^1$ configuration of the excited $N$ atom. Practice With the Slater Condon Rules 53. Calculate the energy (using Slater Condon rules) associated with the $2p$ valence electrons for the following states of the $C$ atom. i. $^3P(M_L=1,M_S=1),$ ii. $^3P(M_L=0,M_S=0),$ iii. $^1S(M_L=0,M_S=0)$, and iv. $^1D(M_L=0,M_S=0)$. More Practice With the Slater Condon Rules 54. Calculate the energy (using Slater Condon rules) associated with the $\pi$ valence electrons for the following states of the $NH$ molecule. i. $^1\Delta$ $(M_L=2, M_S=0),$ ii. $^1\Sigma$ $(M_L=0, M_S=0),$ and iii. $^3\Sigma$ $(M_L=0, M_S=0).$ Practice With The Equations of Statistical Mechanics 55. Match each of the equations below with the proper phrase A-K: $\renewcommand{\arraystretch}{2.5}\begin{array}{|c|c|}\hline B_2=-2\pi \int_0^\infty r^2 \bigg(\exp\bigg(-\dfrac{u(r)}{kT}\bigg)-1\bigg) dr & \phantom{\Bigg|}\hspace{2cm}\\ \hline \overline{E^2}-(\bar{E})^2=kT^2\Big(\dfrac{\partial \bar{E}}{\partial T}\Big)_{N,V} &\phantom{\Bigg|} \\ \hline \dfrac{2\pi mkT}{h^2} &\phantom{\Bigg|} \\ \hline Q=\exp\bigg(-\dfrac{N\phi}{2kT}\bigg)\bigg(\dfrac{\exp(-\theta/2T)}{1-\exp(-\theta/T)}\bigg)^{3N}&\phantom{\Bigg|} \\ \hline g(\nu)=a\nu^2 & \phantom{\Bigg|}\\ \hline Q=\dfrac{M!}{N!(M-N)!}q^N &\phantom{\Bigg|} \\ \hline \Theta=\dfrac{q \exp\bigg(\dfrac{\mu_0}{kT}\bigg)p}{1+q \exp\bigg(\dfrac{\mu_0}{kT}\bigg)p}& \phantom{\Bigg|}\\ \hline p_A=p_A^0X_A &\phantom{\Bigg|} \\ \hline \dfrac{c\omega}{kT}=-4 & \phantom{\Bigg|}\\ \hline W=W_{AA}N_{AA}+W_{BB}N_{BB}+W_{AB}N_{AB}&\phantom{\Bigg|} \\ \hline N_{AB}\cong \dfrac{N_A c N_B}{N_A+N_B} &\phantom{\Bigg|} \\ \hline\end{array}$ A. Raoult's law B. Debye solid C. Critical Point D. Ideal adsorption E. Langmuir isotherm F. Bragg-Williams G. Partition function for surface translation H. Concentrated solution I. Fluctuation J. Virial coefficient K. Einstein solid Problem Dealing With the Second Virial Coefficient 56. The Van der Waals equation of state is $\left(p+\bigg(\dfrac{N}{V}\bigg)^2a\right)(V-Nb)=NkT.$ Solve this equation for $p$, and then obtain an expression for $\dfrac{pV}{NkT}$. Finally, expand $\dfrac{pV}{NkT}$ in powers of $\bigg(\dfrac{N}{V}\bigg)$ and obtain an expression for the second virial coefficient of this Van der Waals gas in terms of $b$, $a$, and $T$.
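Problem 56 is a short algebraic exercise; as a sketch of the route (not a substitute for carrying it out): $p = \dfrac{NkT}{V-Nb} - a\left(\dfrac{N}{V}\right)^2$, so $\dfrac{pV}{NkT} = \dfrac{1}{1-b(N/V)} - \dfrac{a}{kT}\dfrac{N}{V} = 1 + \left(b - \dfrac{a}{kT}\right)\dfrac{N}{V} + b^2\left(\dfrac{N}{V}\right)^2 + \cdots$, and the coefficient of $(N/V)$ identifies the second virial coefficient.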
Problem to Make You Think About Carrying Out Monte-Carlo and Molecular Dynamics Simulations 57. Briefly answer each of the following: For which of the following would you be wisest to use Monte-Carlo (MC) simulation and for which should you use molecular dynamics (MD)? a. Determining the rate of diffusion of $CH_4$ in liquid $Kr$. b. Determining the equilibrium radial distribution of $Kr$ atoms relative to the $CH_4$ in the above example. c. Determining the mean square end-to-end distance for a floppy hydrocarbon chain in the liquid state. Suppose you are carrying out a Monte-Carlo simulation involving 1000 $Ar$ atoms. Further suppose that the potentials are pairwise additive and that your computer requires approximately 50 floating point operations (FPO's) (e.g. multiply, add, divide, etc.) to compute the interaction potential between any pair of atoms. d. For each M-C trial move, how many FPO's are required? Assuming your computer has a speed of 100 MFlops (i.e., 100 million FPO's per sec), how long will it take you to carry out 1,000,000 M-C moves? e. If the fluctuations observed in the calculation of question d are too large, and you wish to make a longer M-C calculation to reduce the statistical "noise", how long will your new calculation require if you wish to cut the noise in half? f. How long would the calculation of question d require if you were to use 1,000,000 $Ar$ atoms (with the same potential and the same computer)? g. Assuming that the evaluation of the forces between pairs of $Ar$ atoms ($\partial V/\partial r$) requires approximately the same number of FPO's (50) as for computing the pair potential, how long (in sec) would it take to carry out a molecular dynamics simulation involving 1000 $Ar$ atoms using a time step ($\Delta t$) of $10^{-15}$ sec and persisting for a total time duration of one nanosecond ($10^{-9}$ sec) using the 100 MFlop computer? h. How long would a $10^{-6}$ MFlop (i.e., 1 FPO per sec) Ph.D. student take to do the calculation in part d? Problem to Practice Using Partition Functions 58. In this problem, you will compute the pressure-unit equilibrium constant $K_p$ for the equilibrium $2Na \rightleftharpoons Na_2$ in the gas phase at a temperature of 1000 K. Your final answer should be expressed in units of atm$^{-1}$. In doing so, you need to consider the electronic term symbols of $Na$ and of $Na_2$, and you will need to use the following data: i. $Na$ has no excited electronic states that you need to consider. ii. $\dfrac{h^2}{8\pi^2Ik} = 0.221$ K for $Na_2$ iii. $\dfrac{h\nu}{k} = 229$ K for $Na_2$ iv. $1 \text{ atm} = 1.01 \times 10^6 \text{ dynes cm}^{-2}$ v. The dissociation energy of $Na_2$ from $v = 0$ to dissociation is $D_0 = 17.3 \text{ kcal mol}^{-1}$. a. First, write the expressions for the $Na$ and $Na_2$ partition functions showing their translational, rotational, vibrational and electronic contributions. b. Next, substitute the data and compute $K_p$, and change units to atm$^{-1}$.
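A numerical sketch for problem 58 is given below (cgs units). The symmetry number 2, the electronic degeneracies $g_{Na}=2$ and $g_{Na_2}=1$ (a $^2S$ atom and a $^1\Sigma_g^+$ dimer), and the convention that $D_0$ is measured from $v=0$ are our assumptions, standard for this system.

import numpy as np

kB = 1.38065e-16              # erg/K
hP = 6.62607e-27              # erg s
NA = 6.02214e23
T = 1000.0

m = 22.98977 * 1.66054e-24    # mass of Na in g
th_rot, th_vib = 0.221, 229.0 # K, from the data above
D0 = 17.3e3 * 4.184e7 / NA    # dissociation energy per molecule, erg

qt = lambda mass: (2*np.pi*mass*kB*T / hP**2)**1.5   # q_trans/V in cm^-3
q_Na  = 2.0 * qt(m)                                  # 2S atom: g_el = 2
q_Na2 = (qt(2*m) * (T/(2*th_rot))                    # sigma = 2
         / (1.0 - np.exp(-th_vib/T))                 # energy zero at v = 0
         * np.exp(D0/(kB*T)))                        # relative to separated atoms

Kc = q_Na2 / q_Na**2             # cm^3 (concentration units)
Kp = Kc / (kB*T) * 1.01e6        # atm^-1, using p_i = rho_i kT
print("Kp =", Kp, "atm^-1")      # roughly 0.5 atm^-1 with these data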
Problem Using Transition State Theory 59. Looking back at the $NO+Cl_2$ reaction treated using transition state theory in Problem 47, let us assume that this same reaction (via the bent transition state) were to occur while the reagents $NO$ and $Cl_2$ were adsorbed to a surface in the following manner: a. both $NO$ and $Cl_2$ lie flat against the surface with both of their atoms touching the surface. b. both $NO$ and $Cl_2$ move freely along the surface (i.e., they can translate parallel to the surface). c. both $NO$ and $Cl_2$ are tightly bound to the surface in a manner that causes their movements perpendicular to the surface to become high-frequency vibrations. Given this information, and again assuming the following order of magnitude estimates of partition functions $q_t \sim 10^8, q_r \sim 10^2, q_v \sim 1,$ calculate the ratio of the TS rate constants for this reaction occurring in the surface-adsorbed state and in the gas phase. In doing so, you may assume that the activation energy and all properties of the transition state are identical in the gas and adsorbed states, except that the TS species is constrained to lie flat on the surface just as are $NO$ and $Cl_2$. Contributors and Attributions Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry Integrated by Tomoyuki Hayashi (UC Davis) 09: Exercises Solutions 1. a. First determine the eigenvalues: $\det\left(\begin{array}{cc}-1-\lambda & 2 \\ 2 & 2-\lambda \end{array}\right) = 0$ $(-1-\lambda)(2-\lambda) - 2^2 = 0$ $-2 + \lambda - 2\lambda + \lambda^2 - 4 = 0$ $\lambda^2 - \lambda - 6 = 0$ $(\lambda - 3)(\lambda + 2) = 0$ $\lambda = 3$ or $\lambda = -2$. Next, determine the eigenvectors. First, the eigenvector associated with eigenvalue -2: $\left(\begin{array}{cc}-1 & 2 \\ 2 & 2 \end{array}\right)\left(\begin{array}{c}C_{11} \\ C_{21}\end{array}\right) = -2\left(\begin{array}{c}C_{11} \\ C_{21}\end{array}\right)$ $-C_{11} + 2C_{21} = -2C_{11}$ $C_{11} = -2C_{21}$ (Note: the second row offers no new information, e.g. $2C_{11} + 2C_{21} = -2C_{21}$.) $C_{11}^2 + C_{21}^2 = 1$ (from normalization) $(-2C_{21})^2 + C_{21}^2 = 1$ $4C_{21}^2 + C_{21}^2 = 1$ $5C_{21}^2 = 1$ $C_{21}^2 = 0.2$ $C_{21} = \sqrt{0.2}$, and therefore $C_{11} = -2\sqrt{0.2}$. For the eigenvector associated with eigenvalue 3: $\left(\begin{array}{cc}-1 & 2 \\ 2 & 2 \end{array}\right)\left(\begin{array}{c}C_{12} \\ C_{22}\end{array}\right) = 3\left(\begin{array}{c}C_{12} \\ C_{22}\end{array}\right)$ $-C_{12} + 2C_{22} = 3C_{12}$ $-4C_{12} = -2C_{22}$ $C_{12} = 0.5C_{22}$ (again the second row offers no new information) $C_{12}^2 + C_{22}^2 = 1$ (from normalization) $(0.5C_{22})^2 + C_{22}^2 = 1$ $0.25C_{22}^2 + C_{22}^2 = 1$ $1.25C_{22}^2 = 1$ $C_{22}^2 = 0.8$ $C_{22} = \sqrt{0.8} = 2\sqrt{0.2}$, and therefore $C_{12} = \sqrt{0.2}$. Therefore the eigenvector matrix becomes: $\left(\begin{array}{cc}-2\sqrt{0.2} & \sqrt{0.2} \\ \sqrt{0.2} & 2\sqrt{0.2} \end{array}\right)$ b. First determine the eigenvalues: $\det\left(\begin{array}{ccc}-2-\lambda & 0 & 0 \\ 0 & -1-\lambda & 2 \\ 0 & 2 & 2-\lambda \end{array}\right) = 0$ $(-2-\lambda)\det\left(\begin{array}{cc}-1-\lambda & 2 \\ 2 & 2-\lambda \end{array}\right) = 0$ From 1a, the solutions then become -2, -2, and 3. Next, determine the eigenvectors. First the eigenvector associated with eigenvalue 3 (the third root): $-2C_{13} = 3C_{13}$ (row one), so $C_{13} = 0$. $-C_{23} + 2C_{33} = 3C_{23}$ (row two) $2C_{33} = 4C_{23}$ $C_{33} = 2C_{23}$ (again the third row offers no new information) $C_{13}^2 + C_{23}^2 + C_{33}^2 = 1$ (from normalization) $0 + C_{23}^2 + (2C_{23})^2 = 1$ $5C_{23}^2 = 1$ $C_{23} = \sqrt{0.2}$, and therefore $C_{33} = 2\sqrt{0.2}$. Next, find the pair of eigenvectors associated with the degenerate eigenvalue of -2. First, root one eigenvector one: $-2C_{11} = -2C_{11}$ (no new information from row one) $-C_{21} + 2C_{31} = -2C_{21}$ (row two) $C_{21} = -2C_{31}$ (again the third row offers no new information) $C_{11}^2 + C_{21}^2 + C_{31}^2 = 1$ (from normalization) $C_{11}^2 + (-2C_{31})^2 + C_{31}^2 = 1$ $C_{11}^2 + 5C_{31}^2 = 1$ $C_{11} = \sqrt{1 - 5C_{31}^2}$ Second, root two eigenvector two: $-2C_{12} = -2C_{12}$ (no new information from row one) $-C_{22} + 2C_{32} = -2C_{22}$ (row two) $C_{22} = -2C_{32}$ (again the third row offers no new information) $C_{12}^2 + C_{22}^2 + C_{32}^2 = 1$ (from normalization) $C_{12}^2 + (-2C_{32})^2 + C_{32}^2 = 1$ $C_{12}^2 + 5C_{32}^2 = 1$ $C_{12} = (1- 5C_{32}^2)^{1/2}$ (Note: again, two equations in three unknowns.) $C_{11}C_{12} + C_{21}C_{22} + C_{31}C_{32} = 0$ (from orthogonalization) Now there are five equations with six unknowns. Arbitrarily choose $C_{11} = 0$ (whenever there are degenerate eigenvalues, there are not unique eigenvectors because the degenerate eigenvectors span a 2- or more-dimensional space, not two unique directions. One always is then forced to choose one of the coefficients and then determine all the rest; different choices lead to different final eigenvectors but to identical spaces spanned by these eigenvectors). $C_{11} = 0$ gives $5C_{31}^2 = 1$, $C_{31} = \sqrt{0.2}$, and $C_{21} = -2\sqrt{0.2}$. From orthogonalization: $0 + (-2\sqrt{0.2})(-2C_{32}) + \sqrt{0.2}\,C_{32} = 0$ $5\sqrt{0.2}\,C_{32} = 0$, so $C_{32} = 0$, $C_{22} = 0$, and $C_{12} = 1$. Therefore the eigenvector matrix becomes: $\left(\begin{array}{ccc}0 & 1 & 0 \\ -2\sqrt{0.2} & 0 & \sqrt{0.2} \\ \sqrt{0.2} & 0 & 2\sqrt{0.2} \end{array}\right)$ 2. a. $K.E. = \dfrac{mv^2}{2} = \dfrac{(mv)^2}{2m} = \dfrac{p^2}{2m} = \dfrac{1}{2m}\left(p_x^2 + p_y^2 + p_z^2\right) = -\dfrac{\hbar^2}{2m}\left(\dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2}\right)$ b. $\textbf{p} = m\textbf{v} = \textbf{i}p_x + \textbf{j}p_y + \textbf{k}p_z = -i\hbar\left(\textbf{i}\dfrac{\partial}{\partial x} + \textbf{j}\dfrac{\partial}{\partial y} + \textbf{k}\dfrac{\partial}{\partial z}\right)$ where $\textbf{i}$, $\textbf{j}$, and $\textbf{k}$ are unit vectors along the x, y, and z axes. c. $L_y = zp_x - xp_z = -i\hbar\left(z\dfrac{\partial}{\partial x} - x\dfrac{\partial}{\partial z}\right)$ 3. First derive the general formulas for $\dfrac{\partial}{\partial x}$, $\dfrac{\partial}{\partial y}$, $\dfrac{\partial}{\partial z}$ in terms of $r$, $\theta$, and $\phi$, and for $\dfrac{\partial}{\partial r}$, $\dfrac{\partial}{\partial \theta}$, and $\dfrac{\partial}{\partial \phi}$ in terms of $x$, $y$, and $z$. The general relationships are as follows: $x = r\sin\theta\cos\phi$, $y = r\sin\theta\sin\phi$, $z = r\cos\theta$; $r^2 = x^2 + y^2 + z^2$, $\sin\theta = \dfrac{\sqrt{x^2+y^2}}{r}$, $\cos\theta = \dfrac{z}{r}$, $\tan\phi = \dfrac{y}{x}$. First $\dfrac{\partial}{\partial x}$, $\dfrac{\partial}{\partial y}$, and $\dfrac{\partial}{\partial z}$ from the chain rule, e.g. $\dfrac{\partial}{\partial x} = \left(\dfrac{\partial r}{\partial x}\right)_{y,z}\dfrac{\partial}{\partial r} + \left(\dfrac{\partial \theta}{\partial x}\right)_{y,z}\dfrac{\partial}{\partial \theta} + \left(\dfrac{\partial \phi}{\partial x}\right)_{y,z}\dfrac{\partial}{\partial \phi}$, and similarly for $y$ and $z$. Evaluation of the many "coefficients" gives the following: $\left(\dfrac{\partial r}{\partial x}\right)_{y,z} = \sin\theta\cos\phi$, $\left(\dfrac{\partial \theta}{\partial x}\right)_{y,z} = \dfrac{\cos\theta\cos\phi}{r}$, $\left(\dfrac{\partial \phi}{\partial x}\right)_{y,z} = -\dfrac{\sin\phi}{r\sin\theta}$; $\left(\dfrac{\partial r}{\partial y}\right)_{x,z} = \sin\theta\sin\phi$, $\left(\dfrac{\partial \theta}{\partial y}\right)_{x,z} = \dfrac{\cos\theta\sin\phi}{r}$, $\left(\dfrac{\partial \phi}{\partial y}\right)_{x,z} = \dfrac{\cos\phi}{r\sin\theta}$; $\left(\dfrac{\partial r}{\partial z}\right)_{x,y} = \cos\theta$, $\left(\dfrac{\partial \theta}{\partial z}\right)_{x,y} = -\dfrac{\sin\theta}{r}$, and $\left(\dfrac{\partial \phi}{\partial z}\right)_{x,y} = 0$. Upon substitution of these "coefficients": $\dfrac{\partial}{\partial x} = \sin\theta\cos\phi\dfrac{\partial}{\partial r} + \dfrac{\cos\theta\cos\phi}{r}\dfrac{\partial}{\partial \theta} - \dfrac{\sin\phi}{r\sin\theta}\dfrac{\partial}{\partial \phi}$, $\dfrac{\partial}{\partial y} = \sin\theta\sin\phi\dfrac{\partial}{\partial r} + \dfrac{\cos\theta\sin\phi}{r}\dfrac{\partial}{\partial \theta} + \dfrac{\cos\phi}{r\sin\theta}\dfrac{\partial}{\partial \phi}$, and $\dfrac{\partial}{\partial z} = \cos\theta\dfrac{\partial}{\partial r} - \dfrac{\sin\theta}{r}\dfrac{\partial}{\partial \theta} + 0$. Next $\dfrac{\partial}{\partial r}$, $\dfrac{\partial}{\partial \theta}$, and $\dfrac{\partial}{\partial \phi}$ from the chain rule. Again evaluation of the many "coefficients" results in: $\left(\dfrac{\partial x}{\partial r}\right)_{\theta,\phi} = \sin\theta\cos\phi$, $\left(\dfrac{\partial y}{\partial r}\right)_{\theta,\phi} = \sin\theta\sin\phi$, $\left(\dfrac{\partial z}{\partial r}\right)_{\theta,\phi} = \cos\theta$; $\left(\dfrac{\partial x}{\partial \theta}\right)_{r,\phi} = r\cos\theta\cos\phi$, $\left(\dfrac{\partial y}{\partial \theta}\right)_{r,\phi} = r\cos\theta\sin\phi$, $\left(\dfrac{\partial z}{\partial \theta}\right)_{r,\phi} = -r\sin\theta$; $\left(\dfrac{\partial x}{\partial \phi}\right)_{r,\theta} = -y$, $\left(\dfrac{\partial y}{\partial \phi}\right)_{r,\theta} = x$, and $\left(\dfrac{\partial z}{\partial \phi}\right)_{r,\theta} = 0$. Upon substitution of these "coefficients": $\dfrac{\partial}{\partial r} = \sin\theta\cos\phi\dfrac{\partial}{\partial x} + \sin\theta\sin\phi\dfrac{\partial}{\partial y} + \cos\theta\dfrac{\partial}{\partial z}$, $\dfrac{\partial}{\partial \theta} = r\cos\theta\cos\phi\dfrac{\partial}{\partial x} + r\cos\theta\sin\phi\dfrac{\partial}{\partial y} - r\sin\theta\dfrac{\partial}{\partial z}$, and $\dfrac{\partial}{\partial \phi} = -y\dfrac{\partial}{\partial x} + x\dfrac{\partial}{\partial y} + 0$. Note, these many "coefficients" are the elements which make up the Jacobian matrix used whenever one wishes to transform a function from one coordinate representation to another. One very familiar result should be in transforming the volume element $dx\,dy\,dz$ to $r^2\sin\theta\,dr\,d\theta\,d\phi$. a. $L_x = -i\hbar\left(y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y}\right) = i\hbar\left(\sin\phi\dfrac{\partial}{\partial \theta} + \cot\theta\cos\phi\dfrac{\partial}{\partial \phi}\right)$ b. $L_z = \dfrac{\hbar}{i}\dfrac{\partial}{\partial \phi} = -i\hbar\dfrac{\partial}{\partial \phi}$ 4.
$\begin{array}{llll} & B & dB/dx & d^2B/dx^2 \\ \text{i.} & 4x^4 - 12x^2 + 3 & 16x^3 - 24x & 48x^2 - 24 \\ \text{ii.} & 5x^4 & 20x^3 & 60x^2 \\ \text{iii.} & e^{3x} + e^{-3x} & 3(e^{3x} - e^{-3x}) & 9(e^{3x} + e^{-3x}) \\ \text{iv.} & x^2 - 4x + 2 & 2x - 4 & 2 \\ \text{v.} & 4x^3 - 3x & 12x^2 - 3 & 24x \end{array}$ B(v.) is an eigenfunction of A(i.), $(1-x^2)\dfrac{d^2}{dx^2} - x\dfrac{d}{dx}$: $(1-x^2)(24x) - x(12x^2 - 3) = 24x - 24x^3 - 12x^3 + 3x = -36x^3 + 27x = -9(4x^3 - 3x)$ (eigenvalue is -9). B(iii.) is an eigenfunction of A(ii.), $\dfrac{d^2}{dx^2}$: $\dfrac{d^2}{dx^2}B(\text{iii.}) = 9(e^{3x} + e^{-3x})$ (eigenvalue is 9). B(ii.) is an eigenfunction of A(iii.), $x\dfrac{d}{dx}$: $x\dfrac{d}{dx}B(\text{ii.}) = x(20x^3) = 20x^4 = 4(5x^4)$ (eigenvalue is 4). B(i.) is an eigenfunction of A(vi.), $\dfrac{d^2}{dx^2} - 2x\dfrac{d}{dx}$: $(48x^2 - 24) - 2x(16x^3 - 24x) = 48x^2 - 24 - 32x^4 + 48x^2 = -32x^4 + 96x^2 - 24 = -8(4x^4 - 12x^2 + 3)$ (eigenvalue is -8). B(iv.) is an eigenfunction of A(v.), $x\dfrac{d^2}{dx^2} + (1-x)\dfrac{d}{dx}$: $x(2) + (1-x)(2x - 4) = 2x + 2x - 4 - 2x^2 + 4x = -2x^2 + 8x - 4 = -2(x^2 - 4x + 2)$ (eigenvalue is -2). 5. 6. 7. 8. i. In ammonia, the only "core" orbital is the N 1s and this becomes an $a_1$ orbital in $C_{3v}$ symmetry. The N 2s orbitals and 3 H 1s orbitals become 2 $a_1$ and an $e$ set of orbitals. The remaining N 2p orbitals also become 1 $a_1$ and a set of $e$ orbitals. The total valence orbitals in $C_{3v}$ symmetry are 3 $a_1$ and 2 $e$ orbitals. ii. In water, the only core orbital is the O 1s and this becomes an $a_1$ orbital in $C_{2v}$ symmetry. Placing the molecule in the yz plane allows us to further analyze the remaining valence orbitals as: O $2p_z$ = $a_1$, O $2p_y$ as $b_2$, and O $2p_x$ as $b_1$. The (H 1s + H 1s) combination is an $a_1$ whereas the (H 1s - H 1s) combination is a $b_2$. iii. Placing the oxygens of $H_2O_2$ in the yz plane (z bisecting the oxygens) and the (cis) hydrogens distorted slightly in +x and -x directions allows us to analyze the orbitals as follows. The core O 1s + O 1s combination is an $a$ orbital whereas the O 1s - O 1s combination is a $b$ orbital. The valence orbitals are: O 2s + O 2s = $a$, O 2s - O 2s = $b$, O $2p_x$ + O $2p_x$ = $b$, O $2p_x$ - O $2p_x$ = $a$, O $2p_y$ + O $2p_y$ = $a$, O $2p_y$ - O $2p_y$ = $b$, O $2p_z$ + O $2p_z$ = $b$, O $2p_z$ - O $2p_z$ = $a$, H 1s + H 1s = $a$, and finally the H 1s - H 1s = $b$. iv. For the next two problems we will use the convention of choosing the z axis as principal axis for the $D_{\infty h}$, $D_{2h}$, and $C_{2v}$ point groups and the xy plane as the horizontal reflection plane in $C_s$ symmetry. $\begin{array}{lcccc} & D_{\infty h} & D_{2h} & C_{2v} & C_s \\ \text{N } 1s & \sigma_g & a_g & a_1 & a' \\ \text{N } 2s & \sigma_g & a_g & a_1 & a' \\ \text{N } 2p_x & \pi_{xu} & b_{3u} & b_1 & a' \\ \text{N } 2p_y & \pi_{yu} & b_{2u} & b_2 & a' \\ \text{N } 2p_z & \sigma_u & b_{1u} & a_1 & a'' \end{array}$ 9. a. $\Psi_n(x) = \sqrt{\dfrac{2}{L}}\sin\dfrac{n\pi x}{L}$ and $P_n(x)dx = |\Psi_n|^2 dx$. The probability that the particle lies in the interval $0 \le x \le \dfrac{L}{4}$ is given by: $P_n = \int_0^{L/4}|\Psi_n|^2 dx = \dfrac{2}{L}\int_0^{L/4}\sin^2\dfrac{n\pi x}{L}\,dx$ This integral can be integrated to give: $P_n = \dfrac{1}{4} - \dfrac{1}{2\pi n}\sin\dfrac{n\pi}{2}$ b. If n is even, $\sin\dfrac{n\pi}{2} = 0$ and $P_n = \dfrac{1}{4}$. If n is odd and n = 1,5,9,13, ..., $\sin\dfrac{n\pi}{2} = 1$ and $P_n = \dfrac{1}{4} - \dfrac{1}{2\pi n}$. If n is odd and n = 3,7,11,15, ..., $\sin\dfrac{n\pi}{2} = -1$ and $P_n = \dfrac{1}{4} + \dfrac{1}{2\pi n}$. The highest $P_n$ is when n = 3. Then $P_n = \dfrac{1}{4} + \dfrac{1}{6\pi} = 0.303$. c. $\Psi(t) = e^{-iHt/\hbar}\left[a\Psi_n + b\Psi_m\right] = a\Psi_n e^{-iE_nt/\hbar} + b\Psi_m e^{-iE_mt/\hbar}$, so $\langle\Psi|H|\Psi\rangle = |a|^2E_n + |b|^2E_m + a^*b\,e^{i(E_n-E_m)t/\hbar}\langle\Psi_n|H|\Psi_m\rangle + b^*a\,e^{i(E_m-E_n)t/\hbar}\langle\Psi_m|H|\Psi_n\rangle$. Since $\langle\Psi_n|H|\Psi_m\rangle$ and $\langle\Psi_m|H|\Psi_n\rangle$ are zero, $\langle\Psi|H|\Psi\rangle = |a|^2E_n + |b|^2E_m$ (note the time independence). d. The fraction of systems observed in $\Psi_n$ is $|a|^2$. The possible energies measured are $E_n$ and $E_m$. The probabilities of measuring each of these energies are $|a|^2$ and $|b|^2$. e. Once the system is observed in $\Psi_n$, it stays in $\Psi_n$. f. $P(E_n) = |\langle\Psi_n|\Psi\rangle|^2 = |c_n|^2$, with $c_n = \sqrt{\dfrac{2}{L}}\sqrt{\dfrac{30}{L^5}}\int_0^L \sin\dfrac{n\pi x}{L}\,x(L-x)\,dx$ These integrals can be evaluated to give $c_n = 0$ if n is even and $c_n = \dfrac{\sqrt{960}}{(n\pi)^3}$ if n is odd, so $|c_n|^2 = \dfrac{960}{(n\pi)^6}$ for odd n. The probability of making a measurement of the energy and obtaining one of the eigenvalues, given by $E_n = \dfrac{n^2\pi^2\hbar^2}{2mL^2}$, is: $P(E_n) = 0$ if n is even and $P(E_n) = \dfrac{960}{(n\pi)^6}$ if n is odd. g. $\langle H\rangle = \dfrac{30}{L^5}\int_0^L x(L-x)\left(-\dfrac{\hbar^2}{2m}\right)\dfrac{d^2}{dx^2}\left[x(L-x)\right]dx = \dfrac{30\hbar^2}{mL^5}\int_0^L x(L-x)\,dx = \dfrac{30\hbar^2}{mL^5}\cdot\dfrac{L^3}{6} = \dfrac{5\hbar^2}{mL^2}$ 10.
= Ci*eeCj Since = Ejdij = Cj*CjEje = For other properties: = Ci*eeCj but, does not necessarily = ajdij because the Yj are not eigenfunctions of A unless [A,H] = 0. = Ci*Cje Therefore, in general, other properties are time dependent. 11. a. The lowest energy level for a particle in a 3-dimensional box is when n1 = 1, n2 = 1, and n3 = 1. The total energy (with L1 = L2 = L3) will be: Etotal = = Note that n = 0 is not possible. The next lowest energy level is when one of the three quantum numbers equals 2 and the other two equal 1: n1 = 1, n2 = 1, n3 = 2 n1 = 1, n2 = 2, n3 = 1 n1 = 2, n2 = 1, n3 = 1. Each of these three states have the same energy: Etotal = = Note that these three states are only degenerate if L1 = L2 = L3. b. ¾ ¾¾ ¾¾ ¾¾ ¾¾ ¾ L1 = L2 = L3 L3 ¹ L1 = L2 For L1 = L2 = L3, V = L1L2L3 = L13, Etotal(L1) = 2e1 + e2 = + = + = For L3 ¹ L1 = L2, V = L1L2L3 = L12L3, L3 = V/L12 Etotal(L1) = 2e1 + e2 = + = + = = = In comparing the total energy at constant volume of the undistorted box (L1 = L2 = L3) versus the distorted box (L3 ¹ L1 = L2) it can be seen that: £ as long as L3 ³ L1. c. In order to minimize the total energy expression, take the derivative of the energy with respect to L1 and set it equal to zero. = 0 = 0 But since V = L1L2L3 = L12L3, then L3 = V/L12. This substitution gives: = 0 = 0 = 0 = 24L16 = 12V2 L16 = V2 = = L14L32 L12 = L32 L3 = L1 d. Calculate energy upon distortion: cube: V = L13, L1 = L2 = L3 = (V) distorted: V = L12L3 = L12L1 = L13 L3 = ¹ L1 = L2 = DE = Etotal(L1 = L2 = L3) - Etotal(L3 ¹ L1 = L2) = - = = Since V = 8Å3, V2/3 = 4Å2 = 4 x 10-16 cm2 , and = 6.01 x 10-27 erg cm2: DE = 6.01 x 10-27 erg cm2 DE = 6.01 x 10-27 erg cm2 DE = 0.99 x 10-11 erg DE = 0.99 x 10-11 erg DE = 6.19 eV 12. a. H = (Cartesian coordinates) Finding andfrom the chain rule gives: = y + y , = x + x , Evaluation of the "coefficients" gives the following: y = Cosf , y = - , x = Sinf , and x = , Upon substitution of these "coefficients": = Cosf - = - ; at fixed r. = Sinf + = ; at fixed r. = = + ; at fixed r. = = - ; at fixed r. + = + + - = ; at fixed r. So, H = (cylindrical coordinates, fixed r) = The Schrödinger equation for a particle on a ring then becomes: HY = EY = EF = F The general solution to this equation is the now familiar expression: F(f) = C1e-imf + C2eimf , where m = Application of the cyclic boundary condition, F(f) = F(f+2p), results in the quantization of the energy expression: E = where m = 0, ±1, ±2, ±3, ... It can be seen that the ±m values correspond to angular momentum of the same magnitude but opposite directions. Normalization of the wavefunction (over the region 0 to 2p) corresponding to + or - m will result in a value of for the normalization constant. \ F(f) = eimf ¾¾ ¾¾ ¾¾ ¾¾ ¾¾ ¾¾ b. = 6.06 x 10-28 erg cm2 = = 3.09 x 10-12 erg DE = (22 - 12) 3.09 x 10-12 erg = 9.27 x 10-12 erg but DE = hn = hc/l So l = hc/DE l = = 2.14 x 10-5 cm = 2.14 x 103 Å Sources of error in this calculation include: i. The attractive force of the carbon nuclei is not included in the Hamiltonian. ii. The repulsive force of the other p-electrons is not included in the Hamiltonian. iii. Benzene is not a ring. iv. Electrons move in three dimensions not one. 13. Y(f,0) = Cos2f. This wavefunction needs to be expanded in terms of the eigenfunctions of the angular momentum operator, . This is most easily accomplished by an exponential expansion of the Cos function. 
Y(f,0) = = The wavefunction is now written in terms of the eigenfunctions of the angular momentum operator, , but they need to include their normalization constant, . Y(f,0) = = Once the wavefunction is written in this form (in terms of the normalized eigenfunctions of the angular momentum operator having mas eigenvalues) the probabilities for observing angular momentums of 0, 2, and -2can be easily identified as the squares of the coefficients of the corresponding eigenfunctions. P2= = P-2= = P0= = 14. a. mv2 = 100 eV v2 = v = 0.593 x 109 cm/sec The length of the N2 molecule is 2Å = 2 x 10-8 cm. v = t = = = 3.37 x 10-17 sec b. The normalized ground state harmonic oscillator can be written as: Y0 = 1/4e-ax2/2, where a = and x = r - re Calculating constants; aN2 = = 0.48966 x 1019 cm-2 = 489.66 Å-2 For N2: Y0(r) = 3.53333Åe-(244.83Å-2)(r-1.09769Å)2 aN2+ = = 0.45823 x 1019 cm-2 = 458.23 Å-2 For N2+: Y0(r) = 3.47522Åe-(229.113Å-2)(r-1.11642Å)2 c. P(v=0) = Let P(v=0) = I2 where I = integral: I= . (3.53333Åe-(244.830Å-2)(r-1.09769Å)2)dr Let C1 = 3.47522Å, C2 = 3.53333Å, A1 = 229.113Å-2, A2 = 244.830Å-2, r1 = 1.11642Å, r2 = 1.09769Å, I = C1C2dr . Focusing on the exponential: -A1(r-r1)2-A2(r-r2)2 = -A1(r2 - 2r1r + r12) - A2(r2 - 2r2r + r22) = -(A1 + A2)r2 + (2A1r1 + 2A2r2)r - A1r12 - A2r22 Let A = A1 + A2, B = 2A1r1 + 2A2r2, C = C1C2, and D = A1r12 + A2r22 . I = Cdr = Cdr where -A(r-r0)2 + D' = -Ar2 + Br - D -A(r2 - 2rr0 + r02) + D' = -Ar2 + Br - D such that, 2Ar0 = B -Ar02 + D' = -D and, r0 = D' = Ar02 - D = A- D = - D . I = Cdr = CeD'dy = CeD' Now back substituting all of these constants: I = C1C2exp I = (3.47522)(3.53333) . exp . exp I = 0.959 P(v=0) = I2 = 0.92, so there is a 92% probability. 15. a. En = DE = En+1 - En = = = = 4.27 x 10-13 erg DE = l = = = 4.66 x 10-4 cm = 2150 cm-1 b. Y0 = 1/4e-ax2/2 = = = = = 1/2e-ax2 ½= 0 = = = = 21/2 = 21/21/2 = Dx = (<x2> - <x>2)1/2.= = = = 3.38 x 10-10 cm = 0.0338Å c. Dx = The smaller k and m become, the larger the uncertainty in the internuclear distance becomes. Helium has a small m and small attractive force between atoms. This results in a very large Dx. This implies that it is extremely difficult for He atoms to "vibrate" with small displacement as a solid, even as absolute zero is approached. 16. a. W = W = e= = + = + Making this substitution results in the following three integrals: W = + + = + + a = 2 + 2 + a = + + W = + a b. Optimize b by evaluating = 0 = = - b So, b= or, b= = , and, b = . Substituting this value of b into the expression for W gives: W = + a = + a = 2pam+ 2pam = am= am = 0.812889106am-1/3 which is in error by only 0.5284% !!!!! 17. a. H = -+ kx2 f = a for -a < x < a f = 0 for |x| ³ a = = a-5 = a-5 + a-5 = a-5 + a-5 = a-5dx + a-5 = a-5\o(\s\up10(a-a + a-5a4k,3\o(\s\up10(a = a-5+ a-5 = a-5 = a-5 = a-5 = a-5= + b. Substituting a = binto the above expression for E we obtain: E = + = km c. E = + = -+ = -+ = 0 = and 352 = 2mka4 So, a4 = , or a = Therefore fbest = , and Ebest = + = km. d. = = = = 0.1952 = 19.52% 18. a. H0 y= y= Yl,m(q,f) = 2 l(l+1) Yl,m(q,f) E= l(l+1) b. V = -eez = -eer0Cosq E= = = -eer0 Using the given identity this becomes: E= -eer0+ -eer0 The spherical harmonics are orthonormal, thus = = 0, and E= 0. E= = -eer0 Using the given identity this becomes: = -eer0+ -eer0 = - This indicates that the only term contributing to the sum in the expression for Eis when l=1, and m=), otherwise vanishes (from orthonormality). 
In quantum chemistry when using orthonormal functions it is typical to write the term as a delta function, for example dlm,10 , which only has values of 1 or 0; dij = 1 when i = j and 0 when i ¹ j. This delta function when inserted into the sum then eliminates the sum by "picking out" the non-zero component. For example, = -dlm,10 , so E= = E= 0(0+1) = 0 and E= 1(1+1) = Inserting these energy expressions above yields: E= -= - c. E= E+ E+ E+ ... = 0 + 0 - = - a = -= = d. a = a = r04 12598x106cm-1 = r04 1.2598Å-1 aH = 0.0987 Å3 aCs = 57.57 Å3 19. The above diagram indicates how the SALC-AOs are formed from the 1s,2s, and 2p N atomic orbitals. It can be seen that there are 3sg, 3su, 1pux, 1puy, 1pgx, and 1pgy SALC-AOs. The Hamiltonian matrices (Fock matrices) are given. Each of these can be diagonalized to give the following MO energies: 3sg; -15.52, -1.45, and -0.54 (hartrees) 3su; -15.52, -0.72, and 1.13 1pux; -0.58 1puy; -0.58 1pgx; 0.28 1pgy; 0.28 It can be seen that the 3sg orbitals are bonding, the 3su orbitals are antibonding, the 1pux and 1puy orbitals are bonding, and the 1pgx and 1pgy orbitals are antibonding. 20. Using these approximate energies we can draw the following MO diagram: This MO diagram is not an orbital correlation diagram but can be used to help generate one. The energy levels on each side (C and H2) can be "superimposed" to generate the reactant side of the orbital correlation diagram and the center CH2 levels can be used to form the product side. Ignoring the core levels this generates the following orbital correlation diagram. 21. a. The two F p orbitals (top and bottom) generate the following reducible representation: D3h E 2C3 3C2 sh 2S3 3sv Gp 2 2 0 0 0 2 This reducible representation reduces to 1A1' and 1A2'' irreducible representations. Projectors may be used to find the symmetry-adapted AOs for these irreducible representations. fa1' = fa2'' = b. The three trigonal F p orbitals generate the following reducible representation: D3h E 2C3 3C2 sh 2S3 3sv Gp 3 0 1 3 0 1 This reducible representation reduces to 1A1' and 1E' irreducible representations. Projectors may be used to find the symmetry-adapted -AOs for these irreducible representations (but they are exactly analogous to the previous few problems): fa1' = fe' = (1/6)-1/2 (2 f3 – f4 –f5) fe' = . c. The 3 P sp2 orbitals generate the following reducible representation: D3h E 2C3 3C2 sh 2S3 3sv Gsp2 3 0 1 3 0 1 This reducible representation reduces to 1A1' and 1E' irreducible representations. Again, projectors may be used to find the symmetry-adapted -AOs for these irreducible representations: fa1' = fe' = fe' = . The leftover P pz orbital generate the following irreducible representation: D3h E 2C3 3C2 sh 2S3 3sv Gpz 1 1 -1 -1 -1 1 This irreducible representation is A2'' fa2'' = f9. Drawing an energy level diagram using these SALC-AOs would result in the following: 22. a. For non-degenerate point groups, one can simply multiply the representations (since only one representation will be obtained): a1 Ä b1 = b1 Constructing a "box" in this case is unnecessary since it would only contain a single row. Two unpaired electrons will result in a singlet (S=0, MS=0), and three triplets (S=1, MS=1; S=1, MS=0; S=1, MS=-1). The states will be: 3B1(MS=1), 3B1(MS=0), 3B1(MS=-1), and 1B1(MS=0). b. Remember that when coupling non-equivalent linear molecule angular momenta, one simple adds the individual Lz values and vector couples the electron spin. 
So, in this case (1pu12pu1), we have ML values of 1+1, 1-1, -1+1, and -1-1 (2, 0, 0, and -2). The term symbol D is used to denote the spatially doubly degenerate level (ML=±2) and there are two distinct spatially non-degenerate levels denoted by the term symbol S (ML=0) Again, two unpaired electrons will result in a singlet (S=0, MS=0), and three triplets (S=1, MS=1;S=1, MS=0;S=1, MS=-1). The states generated are then: 1D (ML=2); one state (MS=0), 1D (ML=-2); one state (MS=0), 3D (ML=2); three states (MS=1,0, and -1), 3D (ML=-2); three states (MS=1,0, and -1), 1S (ML=0); one state (MS=0), 1S (ML=0); one state (MS=0), 3S (ML=0); three states (MS=1,0, and -1), and 3S (ML=0); three states (MS=1,0, and -1). c. Constructing the "box" for two equivalent p electrons one obtains: ML MS 2 1 0 1 |p1ap-1a| 0 |p1ap1b| |p1ap-1b|, |p-1ap1b| From this "box" one obtains six states: 1D (ML=2); one state (MS=0), 1D (ML=-2); one state (MS=0), 1S (ML=0); one state (MS=0), 3S (ML=0); three states (MS=1,0, and -1). d. It is not necessary to construct a "box" when coupling non-equivalent angular momenta since vector coupling results in a range from the sum of the two individual angular momenta to the absolute value of their difference. In this case, 3d14d1, L=4, 3, 2, 1, 0, and S=1,0. The term symbols are: 3G, 1G, 3F, 1F, 3D, 1D, 3P, 1P, 3S, and 1S. The L and S angular momenta can be vector coupled to produce further splitting into levels: J = L + S ... |L - S|. Denoting J as a term symbol subscript one can identify all the levels and subsequent (2J + 1) states: 3G5 (11 states), 3G4 (9 states), 3G3 (7 states), 1G4 (9 states), 3F4 (9 states), 3F3 (7 states), 3F2 (5 states), 1F3 (7 states), 3D3 (7 states), 3D2 (5 states), 3D1 (3 states), 1D2 (5 states), 3P2 (5 states), 3P1 (3 states), 3P0 (1 state), 1P1 (3 states), 3S1 (3 states), and 1S0 (1 state). e. Construction of a "box" for the two equivalent d electrons generates (note the "box" has been turned side ways for convenience): MS ML 1 0 4 |d2ad2b| 3 |d2ad1a| |d2ad1b|, |d2bd1a| 2 |d2ad0a| |d2ad0b|, |d2bd0a|, |d1ad1b| 1 |d1ad0a|, |d2ad-1a| |d1ad0b|, |d1bd0a|, |d2ad-1b|, |d2bd-1a| 0 |d2ad-2a|, |d1ad-1a| |d2ad-2b|, |d2bd-2a|, |d1ad-1b|, |d1bd-1a|, |d0ad0b| The term symbols are: 1G, 3F, 1D, 3P, and 1S. The L and S angular momenta can be vector coupled to produce further splitting into levels: 1G4 (9 states), 3F4 (9 states), 3F3 (7 states), 3F2 (5 states), 1D2 (5 states), 3P2 (5 states), 3P1 (3 states), 3P0 (1 state), and 1S0 (1 state). 23. a. Once the spatial symmetry has been determined by multiplication of the irreducible representations, the spin coupling gives the result: b. There are three states here : 1.) |3a1a1b1a|, 2.) , and 3.) |3a1b1b1b| c. |3a1a3a1b| 24. a. All the Slater determinants have in common the |1sa1sb2sa2sb| "core" and hence this component will not be written out explicitly for each case. 3P(ML=1,MS=1) = |p1ap0a| = |a(pz)a| = 3P(ML=0,MS=1) = |p1ap-1a| = |aa| = = = = -i|pxapya| 3P(ML=-1,MS=1) = |p-1ap0a| = |a(pz)a| = As you can see, the symmetries of each of these states cannot be labeled with a single irreducible representation of the C2v point group. For example, |pxapza| is xz (B1) and |pyapza| is yz (B2) and hence the 3P(ML=1,MS=1) state is a combination of B1 and B2 symmetries. But, the three 3P(ML,MS=1) functions are degenerate for the C atom and any combination of these three functions would also be degenerate. Therefore, we can choose new combinations that can be labeled with "pure" C2v point group labels. 
3P(xz,MS=1) = |pxapza| = = 3B1 3P(yx,MS=1) = |pyapxa| = = 3A2 3P(yz,MS=1) = |pyapza| = = 3B2 Now, we can do likewise for the five degenerate 1D states: 1D(ML=2,MS=0) = |p1ap1b| = |ab| = 1D(ML=-2,MS=0) = |p-1ap-1b| = |ab| = 1D(ML=1,MS=0) = = = 1D(ML=-1,MS=0) = = = 1D(ML=0,MS=0) = = + |ab|) = + + ) = ) Analogous to the three 3P states, we can also choose combinations of the five degenerate 1D states which can be labeled with "pure" C2v point group labels: 1D(xx-yy,MS=0) = |pxapxb| - |pyapyb| = = 1A1 1D(yx,MS=0) = |pxapyb| + |pyapxb| = = 1A2 1D(zx,MS=0) = |pzapxb| - |pzbpxa| = = 1B1 1D(zy,MS=0) = |pzapyb| - |pzbpya| = = 1B2 1D(2zz+xx+yy,MS=0) = ) = 1D(ML=0,MS=0) = 1A1 The only state left is the 1S: 1S(ML=0,MS=0) = = - |ab|) = - - ) = ) Each of the components of this state are A1 and hence this state has A1 symmetry. b. Forming symmetry-adapted AOs from the C and H atomic orbitals would generate the following: The bonding, nonbonding, and antibonding orbitals of CH2 can be illustrated in the following manner: c. d. - e. It is necessary to determine how the wavefunctions found in part a. correlate with states of the CH2 molecule: 3P(xz,MS=1); 3B1 = sg2s2pxpz ¾¾® s2n2pps* 3P(yx,MS=1); 3A2 = sg2s2pxpy ¾¾® s2n2pps 3P(yz,MS=1); 3B2 = sg2s2pypz ¾¾® s2n2ss* 1D(xx-yy,MS=0); 1A1 ¾¾® s2n2pp2 - s2n2s2 1D(yx,MS=0); 1A2 ¾¾® s2n2spp 1D(zx,MS=0); 1B1 ¾¾® s2n2s*pp 1D(zy,MS=0); 1B2 ¾¾® s2n2s*s 1D(2zz+xx+yy,MS=0); 1A1 ¾¾® 2s2n2s*2 + s2n2pp2 + s2n2s2 Note, the C + H2 state to which the lowest 1A1 (s2n2s2) CH2 state decomposes would be sg2s2py2. This state (sg2s2py2) cannot be obtained by a simple combination of the 1D states. In order to obtain pure sg2s2py2 it is necessary to combine 1S with 1D. For example, sg2s2py2 = - . This indicates that a configuration correlation diagram must be drawn with a barrier near the 1D asymptote to represent the fact that 1A1 CH2 correlates with a mixture of 1D and 1S carbon plus hydrogen. The C + H2 state to which the lowest 3B1 (s2ns2pp) CH2 state decomposes would be sg2spy2px. f. If you follow the 3B1 component of the C(3P) + H2 (since it leads to the ground-state products) to 3B1 CH2 you must go over an approximately 20 Kcal/mole barrier. Of course this path produces 3B1 CH2 product. Distortions away from C2v symmetry, for example to Cs symmetry, would make the a1 and b2 orbitals identical in symmetry (a'). The b1 orbitals would maintain their different symmetry going to a'' symmetry. Thus 3B1 and 3A2 (both 3A'' in Cs symmetry and odd under reflection through the molecular plane) can mix. The system could thus follow the 3A2 component of the C(3P) + H2 surface to the place (marked with a circle on the CCD) where it crosses the 3B1 surface upon which it then moves and continues to products. As a result, the barrier would be lowered. You can estimate when the barrier occurs (late or early) using thermodynamic information for the reaction (i.e. slopes and asymptotic energies). For example, an early barrier would be obtained for a reaction with the characteristics: and a late barrier would be obtained for a reaction with the characteristics: This relation between reaction endothermicity or exothermicity and the character of the transition state is known as the Hammond postulate. Note that the C(3P1) + H2 --> CH2 reaction of interest here has an early barrier. g. The reaction C(1D) + H2 ---> CH2 (1A1) should have no symmetry barrier (this can be recognized by following the 1A1 (C(1D) + H2) reactants down to the 1A1 (CH2) products). 25. 
This problem in many respects is analogous to problem 24. The 3B1 surface certainly requires a two configuration CI wavefunction; the s2s2npx (p2py2spx) and the s2n2pxs* (p2s2pxpz). The 1A1 surface could use the s2s2n2 (p2s2py2) only but once again there is no combination of 1D determinants which gives purely this configuration (p2s2py2). Thus mixing of both 1D and 1S determinants are necessary to yield the required p2s2py2 configuration. Hence even the 1A1 surface would require a multiconfigurational wavefunction for adequate description. Configuration correlation diagram for the reaction C2H2 + C ---> C3H2. 26. a. CCl4 is tetrahedral and therefore is a spherical top. CHCl3 has C3v symmetry and therefore is a symmetric top. CH2Cl2 has C2v symmetry and therefore is an asymmetric top. b. CCl4 has such high symmetry that it will not exhibit pure rotational spectra because it has no permanent dipole moment. CHCl3 and CH2Cl2 will both exhibit pure rotation spectra. 27. NH3 is a symmetric top (oblate). Use the given energy expression, E = (A - B) K2 + B J(J + 1), A = 6.20 cm-1, B = 9.44 cm-1, selection rules DJ = ±1, and the fact that lies along the figure axis such that DK = 0, to give: DE = 2B (J + 1) = 2B, 4B, and 6B (J = 0, 1, and 2). So, lines are at 18.88 cm-1, 37.76 cm-1, and 56.64 cm-1. 28. To convert between cm-1 and energy, multiply by hc = (6.62618x10-34J sec)(2.997925x1010cm sec-1) = 1.9865x1023 J cm. Let all quantities in cm-1 be designated with a bar, e.g. = 1.78 cm-1. a. hc= Re = , m = = x 1.66056x10-27 kg = 1.0824x10-26 kg. hc= hc(1.78 cm-1) = 3.5359x10-23 J Re = Re = 1.205x10-10 m = 1.205 Å De = , = = = 6.35x10-6 cm-1 wexe = , = = = 13.30 cm-1. D= D- + , = - + = 66782.2 - + = 65843.0 cm-1 = 8.16 eV. ae = + = + = + = 0.0175 cm-1. B0 = Be - ae(1/2) , = - = 1.78 - 0.0175/2 = 1.77 cm-1 B1 = Be - ae(3/2) , = - = 1.78 - 0.0175(1.5) = 1.75 cm-1 b. The molecule has a dipole moment and so it should have a pure rotational spectrum. In addition, the dipole moment should change with R and so it should have a vibration-rotation spectrum. The first three lines correspond to J = 1 ® 0, J = 2 ® 1, J = 3 ® 2 E = we(v + 1/2) - wexe(v + 1/2)2 + BvJ(J + 1) - DeJ2(J + 1)2 DE = we - 2wexe - B0J(J + 1) + B1J(J - 1) - 4DeJ3 = - 2- J(J + 1) + J(J - 1) - 4J3 = 1885 - 2(13.3) - 1.77J(J + 1) + 1.75J(J - 1) - 4(6.35x10-6)J3 = 1858.4 - 1.77J(J + 1) + 1.75J(J - 1) - 2.54x10-5J3 = 1854.9 cm-1 = 1851.3 cm-1 = 1847.7 cm-1 29. The C2H2Cl2 molecule has a sh plane of symmetry (plane of molecule), a C2 axis (^ to the molecular plane), and inversion symmetry, this results in C2h symmetry. Using C2h symmetry, the modes can be labeled as follows: n1, n2, n3, n4, and n5 are ag, n6 and n7 are au, n8 is bg, and n9, n10, n11, and n12 are bu. 30. Molecule I Molecule II RCH = 1.121 Å RCH = 1.076 Å ÐHCH = 104° ÐHCH = 136° yH = R Sin (q/2) = ±0.8834 yH = ±0.9976 zH = R Cos (q/2) = -0.6902 zH = -0.4031 Center of Mass(COM): clearly, X = Y = 0, Z = = -0.0986 Z = -0.0576 a. Ixx = - M(Y2 + Z2) Ixy = -- MXY Ixx = 2(1.121)2 - 14(-0.0986)2 Ixx = 2(1.076)2 - 14(-0.0576)2 = 2.377 = 2.269 Iyy = 2(0.6902)2 - 14(-0.0986)2 Iyy = 2(0.4031)2 - 14(-0.0576)2 = 0.8167 = 0.2786 Izz = 2(0.8834)2 Izz = 2(0.9976)2 = 1.561 = 1.990 Ixz = Iyz = Ixy = 0 b. Since the moment of inertia tensor is already diagonal, the principal moments of inertia have already been determined to be (Ia < Ib < Ic): Iyy < Izz < Ixx Iyy < Izz < Ixx 0.8167 < 1.561 < 2.377 0.2786 < 1.990 < 2.269 Using the formula: A = = X A = cm-1 similarly, B = cm-1, and C = cm-1. 
So, Molecule I Molecule II y Þ A = 20.62 y Þ A = 60.45 z Þ B = 10.79 z Þ B = 8.46 x Þ C = 7.08 x Þ C = 7.42 c. Averaging B + C: B = (B + C)/2 = 8.94 B = (B + C)/2 = 7.94 A - B = 11.68 A - B = 52.51 Using the prolate top formula: E = (A - B) K2 + B J(J + 1), Molecule I Molecule II E = 11.68K2 + 8.94J(J + 1) E = 52.51K2 + 7.94J(J + 1) Levels: J = 0,1,2,... and K = 0,1, ... J For a given level defined by J and K, there are MJ degeneracies given by: (2J + 1) x d. Molecule I Molecule II e. Assume molecule I is CH2- and molecule II is CH2. Then, DE = EJj(CH2) - EJi(CH2-), where: E(CH2) = 52.51K2 + 7.94J(J + 1), and E(CH2-) = 11.68K2 + 8.94J(J + 1) For R-branches: Jj = Ji + 1, DK = 0: DER = EJj(CH2) - EJi(CH2-) = 7.94(Ji + 1)(Ji + 1 + 1) - 8.94Ji(Ji + 1) = (Ji + 1){7.94(Ji + 1 + 1) - 8.94Ji} = (Ji + 1){(7.94- 8.94)Ji + 2(7.94)} = (Ji + 1){-Ji + 15.88} For P-branches: Jj = Ji - 1, DK = 0: DEP = EJj(CH2) - EJi(CH2-) = 7.94(Ji - 1)(Ji - 1 + 1) - 8.94Ji(Ji + 1) = Ji{7.94(Ji - 1) - 8.94(Ji + 1)} = Ji{(7.94- 8.94)Ji - 7.94 - 8.94} = Ji{-Ji - 16.88} This indicates that the R branch lines occur at energies which grow closer and closer together as J increases (since the 15.88 - Ji term will cancel). The P branch lines occur at energies which lie more and more negative (i.e. to the left of the origin). So, you can predict that if molecule I is CH2- and molecule II is CH2 then the R-branch has a band head and the P-branch does not. This is observed, therefore our assumption was correct: molecule I is CH2- and molecule II is CH2. f. The band head occurs when = 0. = [(Ji + 1){-Ji + 15.88}] = 0 = = 0 = -2Ji + 14.88 = 0 \ Ji = 7.44, so J = 7 or 8. At J = 7.44: DER = (J + 1){-J + 15.88} DER = (7.44 + 1){-7.44 + 15.88} = (8.44)(8.44) = 71.2 cm-1 above the origin. 31. a. D6h E 2C6 2C3 C2 3C2' 3C2" i 2S3 2S6 sh 3sd 3sv A1g 1 1 1 1 1 1 1 1 1 1 1 1 x2+y2,z2 A2g 1 1 1 1 -1 -1 1 1 1 1 -1 -1 Rz B1g 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 B2g 1 -1 1 -1 -1 1 1 -1 1 -1 -1 1 E1g 2 1 -1 -2 0 0 2 1 -1 -2 0 0 Rx,Ry (xz,yz) E2g 2 -1 -1 2 0 0 2 -1 -1 2 0 0 (x2-y2,xy) A1u 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 A2u 1 1 1 1 -1 -1 -1 -1 -1 -1 1 1 z B1u 1 -1 1 -1 1 -1 -1 1 -1 1 -1 1 B2u 1 -1 1 -1 -1 1 -1 1 -1 1 1 -1 E1u 2 1 -1 -2 0 0 -2 -1 1 2 0 0 (x,y) E2u 2 -1 -1 2 0 0 -2 1 1 -2 0 0 GC-H 6 0 0 0 0 2 0 0 0 6 2 0 b. The number of irreducible representations may be found by using the following formula: nirrep = , where g = the order of the point group (24 for D6h). 
nA1g = = {(1)(6)(1)+(2)(0)(1)+(2)(0)(1)+(1)(0)(1) +(3)(0)(1)+(3)(2)(1)+(1)(0)(1)+(2)(0)(1) +(2)(0)(1)+(1)(6)(1)+(3)(2)(1)+(3)(0)(1)} = 1 nA2g = {(1)(6)(1)+(2)(0)(1)+(2)(0)(1)+(1)(0)(1) +(3)(0)(-1)+(3)(2)(-1)+(1)(0)(1)+(2)(0)(1) +(2)(0)(1)+(1)(6)(1)+(3)(2)(-1)+(3)(0)(-1)} = 0 nB1g = {(1)(6)(1)+(2)(0)(-1)+(2)(0)(1)+(1)(0)(-1) +(3)(0)(1)+(3)(2)(-1)+(1)(0)(1)+(2)(0)(-1) +(2)(0)(1)+(1)(6)(-1)+(3)(2)(1)+(3)(0)(-1)} = 0 nB2g = {(1)(6)(1)+(2)(0)(-1)+(2)(0)(1)+(1)(0)(-1) +(3)(0)(-1)+(3)(2)(1)+(1)(0)(1)+(2)(0)(-1) +(2)(0)(1)+(1)(6)(-1)+(3)(2)(-1)+(3)(0)(1)} = 0 nE1g = {(1)(6)(2)+(2)(0)(1)+(2)(0)(-1)+(1)(0)(-2) +(3)(0)(0)+(3)(2)(0)+(1)(0)(2)+(2)(0)(1) +(2)(0)(-1)+(1)(6)(-2)+(3)(2)(0)+(3)(0)(0)} = 0 nE2g = {(1)(6)(2)+(2)(0)(-1)+(2)(0)(-1)+(1)(0)(2) +(3)(0)(0)+(3)(2)(0)+(1)(0)(2)+(2)(0)(-1) +(2)(0)(-1)+(1)(6)(2)+(3)(2)(0)+(3)(0)(0)} = 1 nA1u = {(1)(6)(1)+(2)(0)(1)+(2)(0)(1)+(1)(0)(1) +(3)(0)(1)+(3)(2)(1)+(1)(0)(-1)+(2)(0)(-1) +(2)(0)(-1)+(1)(6)(-1)+(3)(2)(-1)+(3)(0)(-1)} = 0 nA2u = {(1)(6)(1)+(2)(0)(1)+(2)(0)(1)+(1)(0)(1) +(3)(0)(-1)+(3)(2)(-1)+(1)(0)(-1)+(2)(0)(-1) +(2)(0)(-1)+(1)(6)(-1)+(3)(2)(1)+(3)(0)(1)} = 0 nB1u = {(1)(6)(1)+(2)(0)(-1)+(2)(0)(1)+(1)(0)(-1) +(3)(0)(1)+(3)(2)(-1)+(1)(0)(-1)+(2)(0)(1) +(2)(0)(-1)+(1)(6)(1)+(3)(2)(-1)+(3)(0)(1)} = 0 nB2u = {(1)(6)(1)+(2)(0)(-1)+(2)(0)(1)+(1)(0)(-1) +(3)(0)(-1)+(3)(2)(1)+(1)(0)(-1)+(2)(0)(1) +(2)(0)(-1)+(1)(6)(1)+(3)(2)(1)+(3)(0)(-1)} = 1 nE1u = {(1)(6)(2)+(2)(0)(1)+(2)(0)(-1)+(1)(0)(-2) +(3)(0)(0)+(3)(2)(0)+(1)(0)(-2)+(2)(0)(-1) +(2)(0)(1)+(1)(6)(2)+(3)(2)(0)+(3)(0)(0)} = 1 nE2u = {(1)(6)(2)+(2)(0)(-1)+(2)(0)(-1)+(1)(0)(2) +(3)(0)(0)+(3)(2)(0)+(1)(0)(-2)+(2)(0)(1) +(2)(0)(1)+(1)(6)(-2)+(3)(2)(0)+(3)(0)(0)} = 0 We see that GC-H = A1gÅE2gÅB2uÅE1u c. x and y Þ E1u , z Þ A2u , so, the ground state A1g level can be excited to the degenerate E1u level by coupling through the x or y transition dipoles. Therefore E1u is infrared active and ^ polarized. d. (x2 + y2, z2) Þ A1g, (xz, yz) Þ E1g, (x2 - y2, xy) Þ E2g ,so, the ground state A1g level can be excited to the degenerate E2g level by coupling through the x2 - y2 or xy transitions or be excited to the degenerate A1g level by coupling through the xz or yz transitions. Therefore A1g and E2g are Raman active.. e. The B2u mode is not IR or Raman active. 32. a. Evaluate the z-component of mfi: mfi = <2pz|e r Cosq|1s>, where y1s = e , and y2pz = r Cosq e . mfi = <r Cosq e |e r Cosq|e > = <r Cosq e |e r Cosq|e > = Cos2q = 2p = 2p Cos3q\s\up15(p0 = 2p = = = 0.7449 b. Examine the symmetry of the integrands for <2pz| e x |1s> and <2pz| e y |1s>. Consider reflection in the xy plane: Function Symmetry 2pz -1 x +1 1s +1 y +1 Under this operation, the integrand of <2pz| e x |1s> is (-1)(1)(1) = -1 (it is antisymmetric) and hence <2pz| e x |1s> = 0. Similarly, under this operation the integrand of <2pz| e y |1s> is (-1)(1)(1) = -1 (it is also antisymmetric) and hence <2pz| e y |1s> = 0. c. tR = , Ei = E2pz = -Z2 Ef = E1s = -Z2 Ei - Ef = Z2 Making the substitutions for Ei - Ef and |mfi| in the expression for tR we obtain: tR = , = , = , Inserting e2 = we obtain: tR = = = 25.6289 = 25,6289 x = 1.595x10-9 sec x So, for example: Atom tR H 1.595 ns He+ 99.7 ps Li+2 19.7 ps Be+3 6.23 ps Ne+9 159 fs 33. a. 
H = H0 + lH'(t), H'(t) = Vq(t), H0jk = Ekjk, wk = Ek/ i= Hy let y(r,t) = iand insert into the Schrödinger equation: ie-iwjtjj = ijj e-iwjtjj = 0 e-iwjt = 0 im e-iwmt = e-iwjt So, m = e-i(wjm)t Going back a few equations and multiplying from the left by jk instead of jm we obtain: e-iwjt = 0 ik e-iwkt = e-iwjt So, k = e-i(wjk)t Now, let: cm = cm(0) + cm(1)l + cm(2)l2 + ... ck = ck(0) + ck(1)l + ck(2)l2 + ... and substituting into above we obtain: m(0) + m(1)l + m(2)l2 + ... = lH'mj e-i(wjm)t first order: m(0) = 0 Þ cm(0) = 1 second order: m(1) = (n+1)st order: m(n) = Similarly: first order: k(0) = 0 Þ ck¹m(0) = 0 second order: k(1) = (n+1)st order: k(n) = So, m(1) = cm(0) H'mm e-i(wmm)t = H'mm cm(1)(t) = = and similarly, k(1) = cm(0) H'km e-i(wmk)t = H'km e-i(wmk)t ck(1)(t) = Vkm = m(2) = m(2) = H'mj e-i(wjm)t + H'mm cm(2) = e-i(wjm)t' - = - = - = + t - Similarly, k(2) = = H'kj e-i(wjk)t + H'km e-i(wmk)t ck(2)(t) = e-i(wjk)t' - e-i(wmk)t' = - h,-e-i(wmk = + h,-e-i(wmk = + So, the overall amplitudes cm, and ck, to second order are: cm(t) = 1 + + t + - ck(t) = + + e-i(wmk)t + b. The perturbation equations still hold: m(n) = ; k(n) = So, cm(0) = 1 and ck(0) = 0 m(1) = H'mm cm(1) = Vmm = k(1) = H'km e-i(wmk)t ck(1) = Vkm = = m(2) = e-i(wmj+h)t Vmj eht e-i(wjm)t + Vmm eht cm(2) = - = e2ht - e2ht k(2) = e-i(wmj+h)t H'kj e-i(wjk)t + H'km e-i(wmk)t ck(2) = - = - Therefore, to second order: cm(t) = 1 + + e2ht ck(t) = + c. In part a. the c(2)(t) grow linearly with time (for Vmm = 0) while in part b. they remain finite for h > 0. The result in part a. is due to the sudden turning on of the field. d. |ck(t)|2 = = = |ck(t)|2 = Now, look at the limit as h ® 0+: |ck(t)|2 ¹ 0 when Em = Ek limhÆ0+a d(Em-Ek) So, the final result is the 2nd order golden rule expression: |ck(t)|2 = d(Em-Ek)limhÆ0+ 34. a. Tnm » evaluating <1s|V|2s> (using only the radial portions of the 1s and 2s wavefunctions since the spherical harmonics will integrate to unity) where V = (e2/r), the change in Coulomb potential when tritium becomes He: <1s|V|2s> = e e r2dr <1s|V|2s> = = <1s|V|2s> = <1s|V|2s> = = Now, En = -, E1s = -, E2s = -, E2s - E1s = So, Tnm = = = = 0.312 (for Z = 1) b. jm(r) = j1s = 2e Y00 The orthogonality of the spherical harmonics results in only s-states having non-zero values for Anm. We can then drop the Y00 (integrating this term will only result in unity) in determining the value of A1s,2s. yn(r) = y2s = e Remember for j1s Z = 1 and for y2s Z = 2 Anm = e e r2dr Anm = e r2dr Anm = We obtain: Anm = Anm = Anm = Anm = -2 The transition probability is the square of this amplitude: Tnm = = = 0.25 (for Z = 1). The difference in these two results (parts a. and b.) will become negligible at large values of Z when the perturbation becomes less significant than in the case of Z = 1. 35. is along Z (lab fixed), and is along z (the C-I molecule fixed bond). The angle between Z and z is b: .= emCosb = emD So, I = <D|.|D> = Sinbdbdgda = emSinbdbdgda. Now use: DD= *, to obtain: I = em*Sinbdbdgda. Now use: Sinbdbdgda = dJjdMmdKn, to obtain: I = em*dJjdMmdKn = em<J'M'10|JM><JK|J'K'10>. We use: <JK|J'K'10> = and, <J'M'10|JM> = to give: I = em = em8p2(-i)(J'-1+M+J'-1+K) = em8p2(-i)(M+K) The 3-J symbols vanish unless: K' + 0 = K and M' + 0 = M. So, I = em8p2(-i)(M+K)dM'MdK'K. b. and vanish unless J' = J + 1, J, J - 1 \ DJ = ±1, 0 The K quantum number can not change because the dipole moment lies along the molecule's C3 axis and the light's electric field thus can exert no torque that twists the molecule about this axis. 
36. a. B atom: 1s²2s²2p¹, ²P ground state, L = 1, S = 1/2, giving a degeneracy ((2L+1)(2S+1)) of 6. O atom: 1s²2s²2p⁴, ³P ground state, L = 1, S = 1, giving a degeneracy ((2L+1)(2S+1)) of 9. The total number of states formed is then (6)(9) = 54.
b. We need only consider the p orbitals to find the low-lying molecular states. (The MO diagrams shown at this point in the original are not reproduced here.) The correct orbital ordering is the one that gives a ²Σ⁺ ground state. The only low-lying electron configurations are 1π³5σ² or 1π⁴5σ¹. These lead to ²Π and ²Σ⁺ states, respectively.
c. The bond orders in both states are 2.5.
d. The ²Σ state is Σ⁺, but g/u symmetry cannot be specified since this is a heteronuclear molecule.
e. Only one excited state, the ²Π, is spin-allowed to radiate to the ²Σ⁺. Considering the symmetries of the transition moment operators that arise in the electric dipole contributions to the transition rate, z → Σ⁺ and x,y → Π; therefore the ²Π → ²Σ⁺ transition is electric dipole allowed via a perpendicular band.
f. Since ionization will remove a bonding electron, the BO⁺ bond is weaker than the BO bond.
g. The ground state of BO⁺ is ¹Σ⁺, corresponding to a 1π⁴ electron configuration. An electron configuration of 1π³5σ¹ leads to ³Π and ¹Π states; the ³Π will be lower in energy. A 1π²5σ² configuration leads to the higher-lying states ³Σ⁻, ¹Δ, and ¹Σ⁺.
h. There should be 3 bands, corresponding to formation of BO⁺ in the ¹Σ⁺, ³Π, and ¹Π states. Since each of these involves removing a bonding electron, the Franck-Condon integrals will be appreciable for several vibrational levels, and thus a vibrational progression should be observed.

37. a. The bending (π) vibration is degenerate.
b. H-C≡N bending fundamental (the displacement arrows shown in the original figure are omitted here).
c. H-C≡N stretching fundamental.
d. The CH stretch (ν₃ in the figure) is σ, the CN stretch is σ, and the HCN bend (ν₂ in the figure) is π.
e. Under z (σ) light the CN stretch and the CH stretch can be excited, since ψ₀ = σ, ψ₁ = σ and z = σ provides coupling.
f. Under x,y (π) light the HCN bend can be excited, since ψ₀ = σ, ψ₁ = π and x,y = π provides coupling.
g. The bending vibration is active under (x,y) perpendicular polarized light, with selection rules ΔJ = 0, ±1 for ⊥ transitions. The CH stretching vibration is active under (z) parallel polarized light, with selection rules ΔJ = ±1 for ∥ transitions.

38. Fφᵢ = εᵢφᵢ. Let the closed-shell Fock potential be written as Vᵢⱼ = Σ_k [2⟨ik|jk⟩ − ⟨ik|kj⟩], the one-electron component as hᵢⱼ = ⟨φᵢ| −(1/2)∇² − Σ_A Z_A/r_A |φⱼ⟩, and the delta as δᵢⱼ = ⟨φᵢ|φⱼ⟩, so that hᵢⱼ + Vᵢⱼ = δᵢⱼεⱼ. Using φᵢ = Σ_μ C_μi χ_μ, φⱼ = Σ_ν C_νj χ_ν, and φ_k = Σ_γ C_γk χ_γ, and transforming from the MO to the AO basis, we obtain:
Vᵢⱼ = Σ_{μγνκ} C_μi C_γk C_νj C_κk [2⟨μγ|νκ⟩ − ⟨μγ|κν⟩] = Σ_{μν} C_μi V_μν C_νj,
where V_μν = Σ_{γκ} P_γκ [2⟨μγ|νκ⟩ − ⟨μγ|κν⟩] and P_γκ = Σ_k C_γk C_κk;
hᵢⱼ = Σ_{μν} C_μi h_μν C_νj, where h_μν = ⟨χ_μ| −(1/2)∇² − Σ_A Z_A/r_A |χ_ν⟩;
and δᵢⱼ = Σ_{μν} C_μi S_μν C_νj.
So hᵢⱼ + Vᵢⱼ = δᵢⱼεⱼ becomes Σ_{μν} C_μi [h_μν + V_μν − εⱼ S_μν] C_νj = 0 for all i,j. Therefore Σ_ν [F_μν − εⱼ S_μν] C_νj = 0, which is FC = SCE in the AO basis.

39. The Slater-Condon rule for zero (spin-orbital) differences, with N electrons in N spin-orbitals, is:
E = Σᵢ ⟨φᵢ|h|φᵢ⟩ + Σ_{i>j} [⟨φᵢφⱼ|φᵢφⱼ⟩ − ⟨φᵢφⱼ|φⱼφᵢ⟩] = Σᵢ hᵢᵢ + Σ_{i>j} (g_ijij − g_ijji)
If all orbitals are doubly occupied and we carry out the spin integration, we obtain
E = 2Σᵢ hᵢᵢ + Σ_{ij} (2g_ijij − g_ijji),
where i and j now refer to orbitals (not spin-orbitals).

40. If the occupied orbitals obey Fφ_k = ε_k φ_k, then the expression for E in problem 39 can be rewritten as
E = Σᵢ [hᵢᵢ + Σⱼ (2g_ijij − g_ijji)] + Σᵢ hᵢᵢ
We recognize the closed-shell Fock operator expression in the first sum and rewrite this as
E = Σᵢ εᵢ + Σᵢ hᵢᵢ = Σᵢ (εᵢ + hᵢᵢ)

41. I will use the QMIC software to do this problem. Let's just start from the beginning: get the starting "guess" MO coefficients on disk.
Using the program MOCOEFS, it asks us for the first and second MO vectors. We input 1, 0 for the first MO (this means that the first MO is 1.0 times the He 1s orbital plus 0.0 times the H 1s orbital; this bonding MO is more likely to be heavily weighted on the atom having the higher nuclear charge) and 0, 1 for the second. Our beginning LCAO-MO array is placed on disk in a file we choose to call "mocoefs.dat". We also put the AO integrals on disk using the program RW_INTS. It asks for the unique one- and two-electron integrals and places a canonical list of these on disk in a file we choose to call "ao_integrals.dat". At this point it is useful for us to step back and look at the set of equations which we wish to solve: FC = SCE. The QMIC software does not provide us with a so-called generalized eigenvalue solver (one that contains an overlap matrix, or metric), so in order to use the diagonalization program that is provided we must transform this equation (FC = SCE) into one that looks like F'C' = C'E. We do that in the following manner. Since S is symmetric and positive definite we can find an S^{-1/2} such that S^{-1/2}S^{-1/2} = S^{-1}, S^{-1/2}S^{1/2} = 1, S^{1/2}S^{1/2} = S, etc. Rewrite FC = SCE by inserting unity (S^{-1/2}S^{1/2}) between F and C and multiplying the whole equation on the left by S^{-1/2}. This gives:
S^{-1/2}FS^{-1/2} S^{1/2}C = S^{-1/2}SC E = S^{1/2}C E
Letting F' = S^{-1/2}FS^{-1/2} and C' = S^{1/2}C, and inserting these expressions above, gives F'C' = C'E. Note that to get the next iteration's MO coefficients we must calculate C from C': since C' = S^{1/2}C, multiplying through on the left by S^{-1/2} gives S^{-1/2}C' = S^{-1/2}S^{1/2}C = C. This will be the method we use to solve our Fock equations. Find S^{-1/2} by using the program FUNCT_MAT (this program generates a function of a matrix). This program will ask for the elements of the S array and write to disk a file (name of your choice; a good name might be "shalf") containing the S^{-1/2} array. Now we are ready to begin the iterative Fock procedure.
a. Calculate the Fock matrix, F, using program FOCK, which reads in the MO coefficients from "mocoefs.dat" and the integrals from "ao_integrals.dat" and writes the resulting Fock matrix to a user-specified file (a good filename to use might be something like "fock1").
b. Calculate F' = S^{-1/2}FS^{-1/2} using the program UTMATU, which reads in F and S^{-1/2} from files on the disk and writes F' to a user-specified file (a good filename might be "fock1p"). Diagonalize F' using the program DIAG. This program reads in the matrix to be diagonalized from a user-specified file and writes the resulting eigenvectors to disk using a user-specified filename (a good filename might be "coef1p"). You may wish to choose the option to write the eigenvalues (Fock orbital energies) to disk in order to use them at a later time in program FENERGY. Calculate C using C = S^{-1/2}C'. This is accomplished with the program MATXMAT, which reads in two matrices to be multiplied from user-specified files and writes the product to disk using a user-specified filename (a good filename might be "mocoefs.dat").
c. The QMIC program FENERGY calculates the total energy from Σ_k [2⟨k|h|k⟩ + Σ_l (2⟨kl|kl⟩ − ⟨kl|lk⟩)] plus the nuclear repulsion, and from Σ_k [ε_k + ⟨k|h|k⟩] plus the nuclear repulsion. This is the conclusion of one iteration of the Fock procedure; you may continue by going back to part a and proceeding onward. (A minimal stand-alone version of this orthogonalize-diagonalize-back-transform step is sketched below.)
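For readers without the QMIC programs, here is a minimal sketch (mine, not the QMIC code) of one Fock iteration for a two-orbital closed-shell problem, using the S^{-1/2} transformation just described. The overlap matrix S, one-electron matrix h, and the full four-index array g of AO two-electron integrals are left as inputs, since not all of their printed values survive in this transcription; g is assumed stored in chemist's (ij|kl) order.

```python
import numpy as np

def scf_step(C, S, h, g):
    """One closed-shell SCF iteration with symmetric orthogonalization."""
    P = C[:, :1] @ C[:, :1].T                 # charge bond-order matrix (1 doubly occupied MO)
    J = np.einsum('kl,ijkl->ij', P, g)        # Coulomb:  J_ij = sum_kl P_kl (ij|kl)
    K = np.einsum('kl,ikjl->ij', P, g)        # exchange: K_ij = sum_kl P_kl (ik|jl)
    F = h + 2 * J - K                         # closed-shell Fock matrix
    s, U = np.linalg.eigh(S)                  # S^(-1/2) via eigendecomposition of S
    S_half_inv = U @ np.diag(s**-0.5) @ U.T
    Fp = S_half_inv @ F @ S_half_inv          # F' = S^(-1/2) F S^(-1/2)
    eps, Cp = np.linalg.eigh(Fp)              # solve F'C' = C'E
    return eps, S_half_inv @ Cp               # back-transform: C = S^(-1/2) C'
```

Feeding the returned C back in and repeating reproduces the convergence pattern tabulated in parts d and e below.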
d. and e. Results for the successful convergence of this system using the supplied QMIC software are as follows (this data is provided to give the student assurance that they are on the right track; alternatively one could switch to the QMIC program SCF and allow that program to iteratively converge the Fock equations). [The one-electron AO integrals, the initial MO-AO coefficients, the AO overlap matrix S, and the matrices printed at each iteration (the charge bond-order matrix, the Fock matrix F, the transformed matrix S^{-1/2}FS^{-1/2} with its eigenvalues and eigenvectors C' = S^{1/2}C, and the new coefficients C = S^{-1/2}C') are not reproduced in this transcription; the two-electron integral lists and the energies are.]

The two-electron AO integrals:
1 1 1 1 1.054700
2 1 1 1 0.4744000
2 1 2 1 0.5664000
2 2 1 1 0.2469000
2 2 2 1 0.3504000
2 2 2 2 0.6250000

************** ITERATION 1 **************
The two-electron MO integrals:
1 1 1 1 0.9779331
2 1 1 1 0.1924623
2 1 2 1 0.5972075
2 2 1 1 0.1170838
2 2 2 1 -0.0007945194
2 2 2 2 0.6157323
The closed-shell Fock energy from formula Σ_k[2⟨k|h|k⟩ + Σ_l(2⟨kl|kl⟩ − ⟨kl|lk⟩)] + nuclear repulsion = -2.84219933; from formula Σ_k[ε_k + ⟨k|h|k⟩] + nuclear repulsion = -2.80060530; the difference is -0.04159403.

************** ITERATION 2 **************
The two-electron MO integrals:
1 1 1 1 0.9626070
2 1 1 1 0.1949828
2 1 2 1 0.6048143
2 2 1 1 0.1246907
2 2 2 1 0.003694540
2 2 2 2 0.6158437
The closed-shell Fock energy from the first formula = -2.84349298; from the second = -2.83573675; the difference is -0.00775623.

************** ITERATION 3 **************
The two-electron MO integrals:
1 1 1 1 0.9600707
2 1 1 1 0.1953255
2 1 2 1 0.6060572
2 2 1 1 0.1259332
2 2 2 1 0.004475587
2 2 2 2 0.6158972
The closed-shell Fock energy from the first formula = -2.84353018; from the second = -2.84225941; the difference is -0.00127077.

************** ITERATION 4 **************
The two-electron MO integrals:
1 1 1 1 0.9596615
2 1 1 1 0.1953781
2 1 2 1 0.6062557
2 2 1 1 0.1261321
2 2 2 1 0.004601604
2 2 2 2 0.6159065
The closed-shell Fock energy from the first formula = -2.84352922; from the second = -2.84332418; the difference is -0.00020504.

************** ITERATION 5 **************
The two-electron MO integrals:
1 1 1 1 0.9595956
2 1 1 1 0.1953862
2 1 2 1 0.6062872
2 2 1 1 0.1261639
2 2 2 1 0.004621811
2 2 2 2 0.6159078
The closed-shell Fock energy from the first formula = -2.84352779; from the second = -2.84349489; the difference is -0.00003290.

************** ITERATION 6 **************
The two-electron MO integrals:
1 1 1 1 0.9595859
2 1 1 1 0.1953878
2 1 2 1 0.6062925
2 2 1 1 0.1261690
2 2 2 1 0.004625196
2 2 2 2 0.6159083
The closed-shell Fock energy from the first formula = -2.84352827; from the second = -2.84352398; the difference is -0.00000429.

************** ITERATION 7 **************
The two-electron MO integrals:
1 1 1 1 0.9595849
2 1 1 1 0.1953881
2 1 2 1 0.6062936
2 2 1 1 0.1261697
2 2 2 1 0.004625696
2 2 2 2 0.6159083
The closed-shell Fock energy from the first formula = -2.84352922; from the second = -2.84352827; the difference is -0.00000095.

************** ITERATION 8 **************
The two-electron MO integrals:
1 1 1 1 0.9595841
2 1 1 1 0.1953881
2 1 2 1 0.6062934
2 2 1 1 0.1261700
2 2 2 1 0.004625901
2 2 2 2 0.6159081
The closed-shell Fock energy from the first formula = -2.84352827; from the second = -2.84352827; the difference is 0.00000000.

f. In looking at the energy convergence we see the following:
Iter  Formula 1     Formula 2
1    -2.84219933  -2.80060530
2    -2.84349298  -2.83573675
3    -2.84353018  -2.84225941
4    -2.84352922  -2.84332418
5    -2.84352779  -2.84349489
6    -2.84352827  -2.84352398
7    -2.84352922  -2.84352827
8    -2.84352827  -2.84352827
If you plot the energy differences (SCF at iteration n minus converged SCF) versus iteration number and do a 5th-order polynomial fit, the convergence is seen to be primarily linear, since the coefficient of the linear term is much larger than those of the cubic and higher terms. (The plot itself is not reproduced here.)
g. The converged SCF total energy calculated using the result of problem 40 is an upper bound to the ground-state energy, but during the iterative procedure it is not. Only at convergence does the expectation value of the Hamiltonian for the Hartree-Fock determinant become equal to that given by the expression in problem 40.
h. Yes, the 1s² configuration does dissociate properly, because at R → ∞ the lowest-energy state is He + H⁺, which also has a 1s² orbital occupancy (i.e., 1s² on He and 1s⁰ on H⁺).
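The converged numbers above can be cross-checked by hand. A short sketch (mine, not part of the original solutions): the nuclear repulsion is taken here as Z_He Z_H/R = 2/1.4, an inferred value that reproduces the tabulated totals; the same converged MO-basis integrals reappear in problem 42 below.

```python
import numpy as np

# converged MO-basis integrals from iteration 8 above (atomic units)
h11, h21, h22 = -2.615842, -0.1953882, -1.315354
g1111, g2211, g2222 = 0.9595841, 0.1261700, 0.6159081
Enuc = 2.0 * 1.0 / 1.4            # assumed He-H+ repulsion at R = 1.4 a0

# closed-shell SCF energy, E = 2 h11 + (11|11) + Enuc (problem 39/40 formula)
print(2 * h11 + g1111 + Enuc)     # ~ -2.84353, matching the converged value

# 2x2 CI over |1s^2> and |2s^2>, used in problem 42 (electronic energies only)
H = np.array([[2 * h11 + g1111, g2211],
              [g2211, 2 * h22 + g2222]])
E, C = np.linalg.eigh(H)
print(E)                          # ~ [-4.27913, -2.00777]
```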
42. At convergence the MO coefficients define φ₁ and φ₂ (the numerical vectors are not reproduced here), and the integrals in this MO basis are:
h11 = -2.615842, h21 = -0.1953882, h22 = -1.315354,
g1111 = 0.9595841, g2111 = 0.1953881, g2121 = 0.6062934, g2211 = 0.1261700, g2221 = 0.004625901, g2222 = 0.6159081
a. The 2x2 CI Hamiltonian over the configurations |1s²| and |2s²| has the elements
H11 = ⟨1s²|H|1s²⟩ = 2h11 + g1111 = -4.272100
H22 = ⟨2s²|H|2s²⟩ = 2h22 + g2222 = -2.014800
H12 = H21 = ⟨1s²|H|2s²⟩ = g2211 = 0.126170
b. The eigenvalues are E1 = -4.279131 and E2 = -2.007770. (The corresponding eigenvectors C1 and C2 are not reproduced here.)
c. Expanding the lowest eigenvector, Ψ1 = a|1s²| − b|2s²| with a = 0.99845 and b = 0.05563 (these coefficients reappear in part i below).
d. The third configuration is the singly excited singlet |1s2s| = (1/√2)[|1sα2sβ| − |1sβ2sα|]. Adding this configuration to the previous 2x2 CI results in the 3x3 "full" CI. Evaluating the new matrix elements:
H13 = H31 = √2 (h21 + g2111) = √2 (-0.1953882 + 0.1953881) ≈ 0.0
H23 = H32 = √2 (h21 + g2221) = √2 (-0.1953882 + 0.004626) = -0.269778
H33 = h11 + h22 + g2121 + g2211 = -2.615842 - 1.315354 + 0.606293 + 0.126170 = -3.198733
e. The eigenvalues are E1 = -4.279345, E2 = -3.256612, and E3 = -1.949678. (The corresponding eigenvectors C1, C2, and C3 are not reproduced here.)
f. We need the non-vanishing matrix elements of the dipole operator in the MO basis. These can be obtained by hand, but they are more easily obtained using the TRANS program. Put the one-electron AO integrals on disk by running the program RW_INTS: insert z11 = 0.0, z21 = 0.2854, and z22 = 1.4 (insert 0.0 for all the two-electron integrals), and call the output file "ao_dipole.ints", for example. The converged MO-AO coefficients should be in a file ("mocoefs.dat" is fine). The transformed integrals can be written to a file of your choice, for example "mo_dipole.ints". These matrix elements are:
z11 = 0.11652690, z21 = -0.54420990, z22 = 1.49117320
The excitation energies are E2 − E1 = -3.256612 − (-4.279345) = 1.022733 and E3 − E1 = -1.949678 − (-4.279345) = 2.329667. Using the Slater-Condon rules to obtain the matrix elements of z between configurations (assembled into the matrix Hz) and contracting with the CI eigenvectors,
⟨Ψ1|z|Ψ2⟩ = C1ᵀ Hz C2 = -0.757494 and ⟨Ψ1|z|Ψ3⟩ = C1ᵀ Hz C3 = 0.014322
(this contraction can be accomplished with the program UTMATU).
g. Using the converged coefficients, the orbital energies obtained from solving the Fock equations are ε1 = -1.656258 and ε2 = -0.228938. The resulting expression for the PT first-order wavefunction becomes:
|1s²⟩^(1) = −[g2211/(2ε2 − 2ε1)] |2s²⟩ = −[0.126170/2.854640] |2s²⟩ = −0.0441982 |2s²⟩
h. As you can see from part d, the matrix element ⟨1s²|H|1s2s⟩ ≈ 0 (this is also a result of the Brillouin theorem), and hence this configuration does not enter into the first-order wavefunction.
i. |0⟩ = |1s²⟩ − 0.0441982|2s²⟩. To normalize we divide by √(1 + (0.0441982)²) = 1.0009762:
|0⟩ = 0.999025|1s²⟩ − 0.044155|2s²⟩
In the 2x2 CI we obtained: |0⟩ = 0.99845123|1s²⟩ − 0.05563439|2s²⟩
j. The expression for the 2nd-order RSPT energy is:
E^(2) = −|g2211|²/(2ε2 − 2ε1) = −(0.126170)²/2.854640 = −0.005576 au
Comparing the 2x2 CI energy to the SCF result: -4.279131 − (-4.272102) = -0.007029 au.

43. STO total energy: -2.8435283. STO3G total energy: -2.8340561. 3-21G total energy: -2.8864405. The STO3G orbitals were generated as a best fit of 3 primitive Gaussians (giving 1 CGTO) to the STO, so STO3G can at best reproduce the STO result. The 3-21G orbitals are more flexible, since there are 2 CGTOs per atom. This gives 4 orbitals (more parameters to optimize) and a lower total energy.

44.
R     HeH+ Energy    H2 Energy
1.0   -2.812787056  -1.071953297
1.2   -2.870357513  -1.113775015
1.4   -2.886440516  -1.122933507
1.6   -2.886063576  -1.115567684
1.8   -2.880080938  -1.099872589
2.0   -2.872805595  -1.080269098
2.5   -2.856760263  -1.026927710
10.0  -2.835679293  -0.7361705303
(The total-energy-vs-geometry plots for HeH⁺ and H₂ are not reproduced here.)
For HeH⁺ at R = 10.0 au, the eigenvalues of the converged Fock matrix and the corresponding converged MO-AO coefficients are:
-.1003571E+01 -.4961988E+00 .5864846E+00 .1981702E+01
.4579189E+00 -.8245406E-05 .1532163E-04 .1157140E+01
.6572777E+00 -.4580946E-05 -.6822942E-05 -.1056716E+01
-.1415438E-05 .3734069E+00 .1255539E+01 -.1669342E-04
.1112778E-04 .7173244E+00 -.1096019E+01 .2031348E-04
Notice that this indicates that orbital 1 is a combination of the s functions on He only (dissociating properly to He + H⁺).
For H₂ at R = 10.0 au, the eigenvalues of the converged Fock matrix and the corresponding converged MO-AO coefficients are:
-.2458041E+00 -.1456223E+00 .1137235E+01 .1137825E+01
.1977649E+00 -.1978204E+00 .1006458E+01 -.7903225E+00
.5632566E+00 -.5628273E+00 -.8179120E+00 .6424941E+00
.1976312E+00 .1979216E+00 .7902887E+00 .1006491E+01
.5629326E+00 .5631776E+00 -.6421731E+00 -.8181460E+00
Notice that this indicates that orbital 1 is a combination of the s functions on both H atoms (dissociating improperly: equal probabilities of H₂ dissociating to two neutral atoms or to a proton plus a hydride ion).

45. The H₂ CI result:
R     ¹Σg⁺        ³Σu⁺        ¹Σu⁺        ¹Σg⁺
1.0  -1.074970   -0.5323429  -0.3997412   0.3841676
1.2  -1.118442   -0.6450778  -0.4898805   0.1763018
1.4  -1.129904   -0.7221781  -0.5440346   0.0151913
1.6  -1.125582   -0.7787328  -0.5784428  -0.1140074
1.8  -1.113702   -0.8221166  -0.6013855  -0.2190144
2.0  -1.098676   -0.8562555  -0.6172761  -0.3044956
2.5  -1.060052   -0.9141968  -0.6384557  -0.4530645
5.0  -0.9835886  -0.9790545  -0.5879662  -0.5802447
7.5  -0.9806238  -0.9805795  -0.5247415  -0.5246646
10.0 -0.980598   -0.9805982  -0.4914058  -0.4913532
For H₂ at R = 1.4 au, the eigenvalues of the Hamiltonian matrix and the corresponding determinant amplitudes are:
determinant     -1.129904  -0.722178  -0.544035   0.015191
|1σgα1σgβ|        0.99695    0.00000    0.00000    0.07802
|1σgβ1σuα|        0.00000    0.70711    0.70711    0.00000
|1σgα1σuβ|        0.00000    0.70711   -0.70711    0.00000
|1σuα1σuβ|       -0.07802    0.00000    0.00000    0.99695
This shows, as expected, the mixing of the first ¹Σg⁺ (1σg²) and the second ¹Σg⁺ (1σu²) determinants in the first and fourth states, and the ³Σu⁺ = (1/√2)(|1σgβ1σuα| + |1σgα1σuβ|) and ¹Σu⁺ = (1/√2)(|1σgβ1σuα| − |1σgα1σuβ|) states as the second and third states. Also notice that the first ¹Σg⁺ state has coefficients (0.99695, −0.07802) (note specifically the + − combination), and the second ¹Σg⁺ state has the coefficients reversed in magnitude with the same sign (note specifically the + + combination). The + + combination always gives a higher energy than the + − combination.

46. F atoms have 1s²2s²2p⁵ ²P ground electronic states that are split by spin-orbit coupling into ²P₃/₂ and ²P₁/₂ states that differ by only 0.05 eV in energy.
a. The degeneracy of a state having a given J is 2J+1, and the J = 3/2 state is lower in energy because the 2p orbital shell is more than half filled (I learned this in inorganic chemistry class), so
qel = 4 exp(-0/kT) + 2 exp(-0.05 eV/kT)
0.05 eV is equivalent to k(500 K), so 0.05 eV/kT = 500/T, and hence qel = 4 + 2 exp(-500/T).
b. Q = q^N/N!, so ln Q = N ln q − ln N!, and
E = kT² ∂lnQ/∂T = NkT² ∂lnq/∂T = Nk {1000 exp(-500/T)/[4 + 2 exp(-500/T)]}
c. Using the fact that kT = 0.03 eV at T = 300 K, one can make a qualitative graph of E/N vs T for T ranging from 100 K to 3000 K (evaluated numerically in the sketch following problem 47). At T = 100 K, E/N is small and equal to 1000k exp(-5)/(4 + 2 exp(-5)). At T = 3000 K, E/N has grown to 1000k exp(-1/6)/(4 + 2 exp(-1/6)), which is approximately 1000k/6.

47. a. The difference between a linear and a bent transition state would arise in the vibrational and rotational partition functions. For the linear TS one has 3N-6 vibrations (recall that one vibration is lost as the reaction coordinate), but for the bent TS one has 3N-7 vibrations. The linear TS has 2 rotational axes and the bent TS has 3. So the ratio of rate constants reduces to ratios of vibration and rotation partition functions; in particular,
k_linear/k_bent = (qvib^{3N-6} qrot²)/(qvib^{3N-7} qrot³) = qvib/qrot
b. Using qt ~ 10⁸, qr ~ 10², qv ~ 1, I would expect k_linear/k_bent to be of the order of 1/10² = 10⁻².
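As flagged in problem 46c, here is a quick numerical sketch (mine, not part of the original solutions) of E/N over the requested temperature range, using the formula from part b:

```python
import numpy as np

k = 8.617e-5   # Boltzmann constant in eV/K
for T in (100.0, 300.0, 1000.0, 3000.0):
    # E/N = (1000 K) k exp(-500/T) / [4 + 2 exp(-500/T)], from problem 46b
    E_per_N = 1000 * k * np.exp(-500 / T) / (4 + 2 * np.exp(-500 / T))
    print(T, E_per_N)   # rises from ~1.4e-4 eV at 100 K toward ~1000k/6 at 3000 K
```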
48. Constructing the Slater determinant corresponding to the "state" 1s(α)1s(α), with the rows labeling the orbitals and the columns labeling the electrons, gives:
|1sα1sα| = (1/√2) [1sα(1)1sα(2) − 1sα(1)1sα(2)] = 0
(a determinant with two identical rows vanishes).

49. Starting with the MS=1 ³Σ state (which in a "box" for this ML=0, MS=1 case contains only one product function, |1sα2sα|) and applying S₋ gives:
S₋ ³Σ(S=1,MS=1) = ħ√(1(1+1) − 1(1−1)) ³Σ(S=1,MS=0) = ħ√2 ³Σ(S=1,MS=0)
and, acting on the determinant,
S₋|1sα2sα| = [S₋(1) + S₋(2)]|1sα2sα| = ħ(|1sβ2sα| + |1sα2sβ|)
So, ³Σ(S=1,MS=0) = (1/√2)(|1sβ2sα| + |1sα2sβ|).
The three triplet states are then:
³Σ(S=1,MS=1) = |1sα2sα|
³Σ(S=1,MS=0) = (1/√2)(|1sβ2sα| + |1sα2sβ|)
³Σ(S=1,MS=-1) = |1sβ2sβ|
The singlet state, which must be constructed orthogonal to the three triplet states (and in particular to the ³Σ(S=1,MS=0) state), can be seen to be:
¹Σ(S=0,MS=0) = (1/√2)(|1sβ2sα| − |1sα2sβ|)
Applying S² and Sz to each of these states gives (the eigenvalues are verified numerically in the sketch following problem 51):
Sz|1sα2sα| = [Sz(1) + Sz(2)]|1sα2sα| = (ħ/2)|1sα2sα| + (ħ/2)|1sα2sα| = ħ|1sα2sα|
S²|1sα2sα| = (S₋S₊ + Sz² + ħSz)|1sα2sα| = 0 + ħ²|1sα2sα| + ħ²|1sα2sα| = 2ħ²|1sα2sα|
Sz (1/√2)(|1sβ2sα| + |1sα2sβ|) = (1/√2)[(−ħ/2 + ħ/2)|1sβ2sα| + (ħ/2 − ħ/2)|1sα2sβ|] = 0
S² (1/√2)(|1sβ2sα| + |1sα2sβ|) = (S₋S₊ + Sz² + ħSz)(1/√2)(|1sβ2sα| + |1sα2sβ|) = S₋S₊(1/√2)(|1sβ2sα| + |1sα2sβ|) = (1/√2) S₋ (2ħ|1sα2sα|) = 2ħ² (1/√2)(|1sβ2sα| + |1sα2sβ|)
Sz|1sβ2sβ| = −ħ|1sβ2sβ|
S²|1sβ2sβ| = (S₊S₋ + Sz² − ħSz)|1sβ2sβ| = 0 + ħ²|1sβ2sβ| + ħ²|1sβ2sβ| = 2ħ²|1sβ2sβ|
Sz (1/√2)(|1sβ2sα| − |1sα2sβ|) = 0
S² (1/√2)(|1sβ2sα| − |1sα2sβ|) = S₋S₊(1/√2)(|1sβ2sα| − |1sα2sβ|) = 0
(S₊ annihilates the antisymmetric combination). So the three triplet functions are eigenfunctions of S² with eigenvalue 2ħ², and the singlet with eigenvalue 0.

50. As shown in problem 22c, for two equivalent π electrons one obtains six states: ¹Δ(ML=2), one state (MS=0); ¹Δ(ML=-2), one state (MS=0); ¹Σ(ML=0), one state (MS=0); and ³Σ(ML=0), three states (MS=1, 0, and -1). By inspecting the "box" in problem 22c, it is fairly straightforward to write down the wavefunctions for each of these:
¹Δ(ML=2): |π1απ1β|
¹Δ(ML=-2): |π-1απ-1β|
¹Σ(ML=0): (1/√2)(|π1απ-1β| − |π1βπ-1α|)
³Σ(ML=0, MS=1): |π1απ-1α|
³Σ(ML=0, MS=0): (1/√2)(|π1απ-1β| + |π1βπ-1α|)
³Σ(ML=0, MS=-1): |π1βπ-1β|

51. We can conveniently couple another s electron to the states generated from the 1s¹2s¹ configuration:
³Σ(L=0, S=1) with 3s¹(L=0, S=1/2) gives L=0, S=3/2 and 1/2: ⁴Σ (4 states) and ²Σ (2 states).
¹Σ(L=0, S=0) with 3s¹(L=0, S=1/2) gives L=0, S=1/2: ²Σ (2 states).
Constructing a "box" for this case would yield, for ML = 0:
MS = 3/2: |1sα2sα3sα|
MS = 1/2: |1sα2sα3sβ|, |1sα2sβ3sα|, |1sβ2sα3sα|
One can immediately identify the wavefunctions for two of the quartets (they are single entries):
⁴Σ(S=3/2, MS=3/2): |1sα2sα3sα|
⁴Σ(S=3/2, MS=-3/2): |1sβ2sβ3sβ|
Applying S₋ to ⁴Σ(S=3/2, MS=3/2) yields:
S₋ ⁴Σ(S=3/2, MS=3/2) = ħ√(15/4 − 3/4) ⁴Σ(S=3/2, MS=1/2) = ħ√3 ⁴Σ(S=3/2, MS=1/2), while
S₋|1sα2sα3sα| = ħ(|1sβ2sα3sα| + |1sα2sβ3sα| + |1sα2sα3sβ|)
So, ⁴Σ(S=3/2, MS=1/2) = (1/√3)(|1sβ2sα3sα| + |1sα2sβ3sα| + |1sα2sα3sβ|).
Applying S₊ to ⁴Σ(S=3/2, MS=-3/2) similarly yields ⁴Σ(S=3/2, MS=-1/2) = (1/√3)(|1sα2sβ3sβ| + |1sβ2sα3sβ| + |1sβ2sβ3sα|).
It only remains to construct the doublet states, which must be orthogonal to these quartet states. Recall that the orthogonal combinations for systems having three equal components (for example, when symmetry-adapting the 3 sp² hybrids in C2v or D3h symmetry) give results of + + +, +2 − −, and 0 + −. The quartets are the + + + combinations, and therefore the doublets can be recognized as:
²Σ(S=1/2, MS=1/2) = (1/√6)(2|1sα2sα3sβ| − |1sα2sβ3sα| − |1sβ2sα3sα|)
²Σ(S=1/2, MS=1/2) = (1/√2)(|1sα2sβ3sα| − |1sβ2sα3sα|)
²Σ(S=1/2, MS=-1/2) = (1/√6)(2|1sβ2sβ3sα| − |1sβ2sα3sβ| − |1sα2sβ3sβ|)
²Σ(S=1/2, MS=-1/2) = (1/√2)(|1sβ2sα3sβ| − |1sα2sβ3sβ|)
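The S² algebra used in problems 49 and 50 can be verified with a tiny matrix computation. The sketch below (mine, not part of the original solutions) builds S² = S₋S₊ + Sz² + ħSz for two spin-1/2 particles in the product basis {αα, αβ, βα, ββ}, with ħ = 1:

```python
import numpy as np

# one-particle spin matrices (hbar = 1), basis order (alpha, beta)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S+
sm = sp.T                                  # S-
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

Sp = np.kron(sp, I2) + np.kron(I2, sp)     # total S+ = S+(1) + S+(2)
Sm = np.kron(sm, I2) + np.kron(I2, sm)
Sz = np.kron(sz, I2) + np.kron(I2, sz)

S2 = Sm @ Sp + Sz @ Sz + Sz                # S^2 = S-S+ + Sz^2 + hbar*Sz
print(np.round(np.linalg.eigvalsh(S2), 10))   # [0, 2, 2, 2]: one singlet, three triplets

# the MS=0 combinations: (ab + ba)/sqrt(2) is triplet, (ab - ba)/sqrt(2) is singlet
v_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)
v_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
print(v_plus @ S2 @ v_plus, v_minus @ S2 @ v_minus)   # 2.0 and 0.0
```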
52. As illustrated in problem 24, a p² configuration (two equivalent p electrons) gives rise to the term symbols ³P, ¹D, and ¹S. Coupling an additional electron (3d¹) to this p² configuration gives the desired 1s²2s²2p²3d¹ term symbols:
³P(L=1, S=1) with ²D(L=2, S=1/2) generates L=3,2,1 and S=3/2,1/2, with term symbols ⁴F, ²F, ⁴D, ²D, ⁴P, and ²P;
¹D(L=2, S=0) with ²D(L=2, S=1/2) generates L=4,3,2,1,0 and S=1/2, with term symbols ²G, ²F, ²D, ²P, and ²S;
¹S(L=0, S=0) with ²D(L=2, S=1/2) generates L=2 and S=1/2, with term symbol ²D.

53. The notation used for the Slater-Condon rules will be as follows:
(a) zero (spin-orbital) differences: ⟨|F + G|⟩ = Σᵢ fᵢᵢ + Σ_{i>j} (g_ijij − g_ijji)
(b) one (spin-orbital) difference (φp ≠ φp'): ⟨|F + G|⟩ = f_pp' + Σⱼ (g_pjp'j − g_pjjp')
(c) two (spin-orbital) differences (φp ≠ φp' and φq ≠ φq'): ⟨|F + G|⟩ = g_pqp'q' − g_pqq'p'
(d) three or more (spin-orbital) differences: ⟨|F + G|⟩ = 0
i. ³P(ML=1, MS=1) = |p1αp0α|
⟨|p1αp0α| H |p1αp0α|⟩: using rule (a) above (I will denote these SCa-SCd),
= f11 + f00 + g1010 − g1001
ii. ³P(ML=0, MS=0) = (1/√2)(|p1αp-1β| + |p1βp-1α|)
⟨³P|H|³P⟩ = (1/2)(⟨|p1αp-1β|H|p1αp-1β|⟩ + ⟨|p1αp-1β|H|p1βp-1α|⟩ + ⟨|p1βp-1α|H|p1αp-1β|⟩ + ⟨|p1βp-1α|H|p1βp-1α|⟩)
Evaluating each matrix element gives:
⟨|p1αp-1β|H|p1αp-1β|⟩ = f1α1α + f-1β-1β + g1α-1β1α-1β − g1α-1β-1β1α (SCa) = f11 + f-1-1 + g1-11-1 − 0
⟨|p1αp-1β|H|p1βp-1α|⟩ = g1α-1β1β-1α − g1α-1β-1α1β (SCc) = 0 − g1-1-11
⟨|p1βp-1α|H|p1αp-1β|⟩ = g1β-1α1α-1β − g1β-1α-1β1α (SCc) = 0 − g1-1-11
⟨|p1βp-1α|H|p1βp-1α|⟩ = f1β1β + f-1α-1α + g1β-1α1β-1α − g1β-1α-1α1β (SCa) = f11 + f-1-1 + g1-11-1 − 0
Substitution of these expressions gives:
⟨³P|H|³P⟩ = f11 + f-1-1 + g1-11-1 − g1-1-11
iii. With the sign conventions used here, ¹S(ML=0, MS=0) = (1/√3)(|p0αp0β| − |p1αp-1β| − |p-1αp1β|), so
⟨¹S|H|¹S⟩ = (1/3)(⟨|p0αp0β|H|p0αp0β|⟩ + ⟨|p1αp-1β|H|p1αp-1β|⟩ + ⟨|p-1αp1β|H|p-1αp1β|⟩ − 2⟨|p0αp0β|H|p1αp-1β|⟩ − 2⟨|p0αp0β|H|p-1αp1β|⟩ + 2⟨|p1αp-1β|H|p-1αp1β|⟩)
Evaluating each matrix element gives:
⟨|p0αp0β|H|p0αp0β|⟩ = f0α0α + f0β0β + g0α0β0α0β − g0α0β0β0α (SCa) = 2f00 + g0000 − 0
⟨|p0αp0β|H|p1αp-1β|⟩ = g0α0β1α-1β − g0α0β-1β1α (SCc) = g001-1 − 0
⟨|p0αp0β|H|p-1αp1β|⟩ = g0α0β-1α1β − g0α0β1β-1α (SCc) = g00-11 − 0
⟨|p1αp-1β|H|p1αp-1β|⟩ = f11 + f-1-1 + g1-11-1 − 0 (SCa)
⟨|p1αp-1β|H|p-1αp1β|⟩ = g1α-1β-1α1β − g1α-1β1β-1α (SCc) = g1-1-11 − 0
⟨|p-1αp1β|H|p-1αp1β|⟩ = f-1-1 + f11 + g-11-11 − 0 (SCa)
Substitution (using g001-1 = g00-11 and g1-11-1 = g-11-11) gives:
⟨¹S|H|¹S⟩ = (1/3)(2f00 + 2f11 + 2f-1-1 + g0000 + 2g1-11-1 + 2g1-1-11 − 4g001-1)
iv. ¹D(ML=0, MS=0) = (1/√6)(2|p0αp0β| + |p1αp-1β| + |p-1αp1β|). Evaluating, we note that all the Slater-Condon matrix elements generated are the same as those evaluated in part iii (the signs of the wavefunction components, and the factor of two on one of the components, however, are different):
⟨¹D|H|¹D⟩ = (1/6)(8f00 + 2f11 + 2f-1-1 + 4g0000 + 2g1-11-1 + 2g1-1-11 + 8g001-1)

54. i. ¹Δ(ML=2, MS=0) = |p1αp1β|
⟨|p1αp1β|H|p1αp1β|⟩ = f1α1α + f1β1β + g1α1β1α1β − g1α1β1β1α (SCa) = 2f11 + g1111
ii. ¹Σ(ML=0, MS=0) = (1/√2)(|p1αp-1β| − |p1βp-1α|)
Evaluating each matrix element (they are the same ones evaluated in problem 53 ii):
⟨|p1αp-1β|H|p1αp-1β|⟩ = f11 + f-1-1 + g1-11-1 − 0 (SCa)
⟨|p1αp-1β|H|p1βp-1α|⟩ = 0 − g1-1-11 (SCc)
⟨|p1βp-1α|H|p1αp-1β|⟩ = 0 − g1-1-11 (SCc)
⟨|p1βp-1α|H|p1βp-1α|⟩ = f11 + f-1-1 + g1-11-1 − 0 (SCa)
Substitution of these expressions gives:
⟨¹Σ|H|¹Σ⟩ = f11 + f-1-1 + g1-11-1 + g1-1-11
iii. ³Σ(ML=0, MS=0) = (1/√2)(|p1αp-1β| + |p1βp-1α|); the same matrix elements give
⟨³Σ|H|³Σ⟩ = f11 + f-1-1 + g1-11-1 − g1-1-11

55. The order of the answers is J, I, G, K, B, D, E, A, C, H, F.

56. p = NkT/(V − Nb) − N²a/V², so p/kT = N/(V − Nb) − N²a/(kTV²). But p/kT = (∂lnQ/∂V)_{T,N}, so we can integrate to obtain ln Q:
lnQ = ∫(p/kT) dV = ∫[N/(V − Nb) − N²a/(kTV²)] dV = N ln(V − Nb) + (N²a/kT)(1/V)
So, Q = {(V − Nb) exp[(a/kT)(N/V)]}^N.

57. a. MD, because you need to keep track of how far the molecule moves as a function of time, and MC does not deal with time.
b. MC is capable of doing this, although MD is also. However, MC requires fewer computational steps, so I would prefer to use it.
c. MC can do this, as could MD.
Again, because MC needs fewer computational steps, I'd use it.
Suppose you are carrying out a Monte-Carlo simulation involving 1000 Ar atoms. Further suppose that the potentials are pairwise additive and that your computer requires approximately 50 floating point operations (FPO's) (e.g., multiply, add, divide, etc.) to compute the interaction potential between any pair of atoms.
d. For each MC move, we must compute only the change in potential energy. To do this, we need to compute only the change in the pair energies that involve the atom that was moved. This will require 999×50 FPO's (the 999 being the number of atoms other than the one that moved). So, for a million MC steps, I would need 10⁶ × 999 × 50 FPO's. At 100×10⁶ FPO's per second, this will require 495 seconds, or a little over eight minutes.
e. Because the statistical fluctuations in MC calculations are proportional to N^{-1/2}, where N is the number of steps taken, I will have to take 4 times as many steps to cut the statistical errors in half. So this will require 4 × 495 seconds, or 1980 seconds.
f. If we have one million rather than one thousand atoms, the 495-second calculation of part d would require 999,999/999 times as much time. This ratio arises because the time to compute the change in potential energy accompanying an MC move is proportional to the number of other atoms. So the calculation would take 495 × (999,999/999) seconds, or about 500,000 seconds, or about 140 hours.
g. We would be taking 10⁻⁹ s/(10⁻¹⁵ s per step) = 10⁶ MD steps. Each step requires that we compute all forces (−∂V/∂R_{I,J}) between all pairs of atoms. There are 1000×999/2 such pairs, so computing all the forces requires (1000×999/2)×50 FPO's = 2.5×10⁷ FPO's. So we will need 2.5×10⁷ FPO's/step × 10⁶ steps/(100×10⁶ FPO's per second) = 2.5×10⁵ seconds, or about 70 hours.
h. The graduate student is 10⁸ times slower than the 100 Mflop computer, so it will take her/him 10⁸ times as long: 495×10⁸ seconds, or about 1570 years.

58. First, Na has a ²S ground-state term symbol, whose degeneracy is 2S + 1 = 2. Na₂ has a ¹Σ ground state, whose degeneracy is 1. The symmetry number for Na₂ is σ = 2. The D₀ value given is 17.3 kcal mol⁻¹. The equilibrium constant for 2 Na ⇌ Na₂, in terms of partial pressures (and then using pV = NkT), is
K_p = p_{Na₂}/p_{Na}² = (kT)⁻¹ (q_{Na₂}/V)/(q_{Na}/V)²
in terms of the partition functions.
a. q_Na = (2πmkT/h²)^{3/2} V q_el
q_{Na₂} = (2πm'kT/h²)^{3/2} V (8π²IkT/h²)(1/σ) [exp(−hν/2kT)(1 − exp(−hν/kT))⁻¹] exp(D_e/kT)
We can combine the D_e and the −hν/2kT to obtain the D₀, which is what we were given.
b. For Na (I will use cgs units in all cases):
q/V = (2π · 23 · 1.66×10⁻²⁴ · 1.38×10⁻¹⁶ · 1000/h²)^{3/2} × 2 = (6.54×10²⁶) × 2 = 1.31×10²⁷
For Na₂:
q/V = 2^{3/2} × (6.54×10²⁶) × (1000/0.221) × (1/2) × (1 − exp(−229/1000))⁻¹ × exp(D₀/kT)
= 1.85×10²⁷ × (2.26×10³) × (4.88) × (5.96×10³) = 1.22×10³⁵
So, K_p = [1.22×10³⁵]/[(1.38×10⁻¹⁶)(1000)(1.72×10⁵⁴)] = 0.50×10⁻⁶ (dynes cm⁻²)⁻¹ = 0.50 atm⁻¹.
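The problem 58b arithmetic can be replayed numerically. A sketch (mine, not part of the original solutions), in cgs units, with θ_rot = 0.221 K and θ_vib = 229 K read off the factors above and D₀/kT = 17300/(1.987 × 1000) from D₀ = 17.3 kcal/mol:

```python
import numpy as np

h, k, amu = 6.626e-27, 1.381e-16, 1.661e-24   # cgs constants
T = 1000.0
m_Na = 23 * amu

def q_trans_per_V(m):                          # (2 pi m k T / h^2)^(3/2)
    return (2 * np.pi * m * k * T / h**2)**1.5

q_Na = q_trans_per_V(m_Na) * 2                          # x2 electronic degeneracy
q_Na2 = (q_trans_per_V(2 * m_Na) * (T / 0.221) / 2      # rotation, sigma = 2
         / (1 - np.exp(-229.0 / T))                     # vibration
         * np.exp(17300.0 / (1.987 * T)))               # exp(D0/kT)

Kp = (1 / (k * T)) * q_Na2 / q_Na**2          # 2 Na <=> Na2, per (dyn/cm^2)
print(q_Na, q_Na2, Kp, Kp * 1.013e6)          # ~1.3e27, ~1.2e35, ~5e-7, ~0.5 atm^-1
```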
59. The differences in k_rate will arise from differences in the numbers of translational, rotational, and vibrational partition functions arising in the adsorbed and gas-phase species. Recall that
k_rate = (kT/h) exp(−E*/kT) [q_TS/V]/[(q_NO/V)(q_Cl₂/V)]
In the gas phase:
NO has 3 translations, two rotations, and one vibration;
Cl₂ has 3 translations, two rotations, and one vibration;
the NOCl₂ TS, which is bent, has 3 translations, three rotations, and five vibrations (recall that one vibration is missing and is the reaction coordinate).
In the adsorbed state:
NO has 2 translations, one rotation, and three vibrations;
Cl₂ has 2 translations, one rotation, and three vibrations;
the NOCl₂ TS, which is bent, has 2 translations, one rotation, and eight vibrations (again, one vibration is missing and is the reaction coordinate).
So, in computing the partition-function ratio [q_TS/V]/[(q_NO/V)(q_Cl₂/V)] for the adsorbed and gas-phase cases, one does not obtain the same numbers of translational, rotational, and vibrational factors. In particular, the ratio of these factors for the adsorbed and gas-phase cases gives the ratio of rate constants as follows:
k_ad/k_gas = (q_trans/V)/q_vib,
which should be of the order of 10⁸ (using the ratio of partition functions as given). Notice that this result suggests that reaction rates can be altered by constraining the reacting species to move freely in lower dimensions, even if one does not alter the energetics (e.g., activation energy or thermochemistry).

Contributions
Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry
Integrated by Tomoyuki Hayashi (UC Davis)
The understanding and prediction of the properties of matter at the atomic level represents one of the great achievements of twentieth-century science. The theory developed to describe the behavior of electrons, atoms and molecules differs radically from familiar Newtonian physics, the physics governing the motions of macroscopic bodies and the physical events of our everyday experiences. The discovery and formulation of the fundamental concepts of atomic physics in the period 1901 to 1926 by such men as Planck, Einstein, de Broglie and Heisenberg caused what can only be described as a revolution in the then-accepted basic concepts of physics. The new theory is called quantum theory or quantum mechanics. As far as we now know this theory is able to account for all observable behaviour of matter and, with suitable extensions, for the interaction of matter with light. The proper formulation of quantum mechanics and its application to a specific problem requires a rather elaborate mathematical framework, as do proper statements and applications of Newtonian physics. We may, however, in this introductory account acquaint ourselves with the critical experiments which led to the formulation of quantum mechanics and apply the basic concepts of this new mechanics to the study of electrons. Specifically the problem we set ourselves is to discover the physical laws governing the behaviour of electrons and then apply these laws to determine how the electrons are arranged when bound to nuclei to form atoms and molecules. This arrangement of electrons is termed the electronic structure of the atom or molecule. Furthermore, we shall discuss the relationship between the electronic structure of an atom and its physical properties, and how the electronic structure is changed during a chemical reaction.

Rutherford's nuclear model for the atom set the stage for the understanding of the structure of atoms and the forces holding them together. From Rutherford's alpha-scattering experiments it was clear that the atom consisted of a positively-charged nucleus with negatively-charged electrons arranged in some fashion around it, the electrons occupying a volume of space many times larger than that occupied by the nucleus. (The diameters of nuclei fall in the range of 1×10⁻¹² to 1×10⁻¹³ cm, while the diameter of an atom is typically of the order of magnitude of 1×10⁻⁸ cm.) The forces responsible for binding the atom, and in fact all matter (aside from the nuclei themselves), are electrostatic in origin: the positively-charged nucleus attracts the negatively-charged electrons. There are attendant magnetic forces which arise from the motions of the charged particles. These magnetic forces give rise to many important physical phenomena, but they are smaller in magnitude than are the electrostatic forces and they are not responsible for the binding found in matter.

During a chemical reaction only the number and arrangement of the electrons are changed, the nucleus remaining unaltered. The unchanging charge of the atomic nucleus is responsible for retaining the atom's chemical identity through any chemical reaction. Thus for the purpose of understanding the chemical properties and behaviour of atoms, the nucleus may be regarded as simply a point charge of constant magnitude for a given element, giving rise to a central field of force which binds the electrons to the atom. Rutherford proposed his nuclear model of the atom in 1911, some fifteen years before the formulation of quantum mechanics.
Consequently his model, when first proposed, posed a dilemma for classical physics. The nuclear model, based as it was on experimental observations, had to be essentially correct, yet all attempts to account for the stability of such a system using Newtonian mechanics ended in failure. According to Newtonian mechanics we should be able to obtain a complete solution to the problem of the electronic structure of atoms once the nature of the force between the nucleus and the electron is known. The electrostatic force operative in the atom is well understood and is described by Coulomb's law, which states that the force between two particles with charges $e_1$ and $e_2$ separated by a distance $R$ is given by: $F \propto \dfrac{e_1e_2}{R^2} \nonumber$ There is a theorem of electrostatics which states that no stationary arrangement of charged particles can ever be in electrostatic equilibrium, i.e., be stable to any further change in their position. This means that all the particles in a collection of positively and negatively charged species will always have resultant forces of attraction or repulsion acting on them no matter how they are arranged in space. Thus no model of the atom which invokes some stationary arrangement of the electrons around the nucleus is possible. The electrons must be in motion if electrostatic stability is to be preserved. However, an electron moving in the field of a nucleus experiences a force and, according to Newton's second law of motion, would be accelerated. The laws of electrodynamics state that an accelerated charged particle should emit light and thus continuously lose energy. In this dynamical model of the atom, all of the electrons would spiral into the nucleus with the emission of light and all matter would collapse to a much smaller volume, the volume occupied by the nuclei. No one was able to devise a theoretical model based on Newtonian, or what is now called classical mechanics, which would explain the electrostatic stability of atoms. The inescapable conclusion was that the classical equations of motion did not apply to the electron. Indeed, by the early 1900's a number of physical phenomena dealing with light and with events on the atomic level were found to be inexplicable in terms of classical mechanics. It became increasingly clear that Newtonian mechanics, while predicting with precision the motions of masses ranging in size from stars to microscopic particles, could not predict the behavior of particles of the extremely small masses encountered in the atomic domain. The need for a new set of laws was indicated.
Certainly the early experiments on the properties of electrons did not suggest that any unusual behaviour was to be expected. Everything pointed to the electron being a particle of very small mass. The trajectory of the electron can be followed in a device such as a Wilson cloud chamber. Similarly, a beam of electrons generated by passing a current between two electrodes in a glass tube from which the air has been partially evacuated will cast the shadow of an obstacle placed in the path of the beam. Finally, the particle nature of the electron was further evidenced by the determination of its mass and charge. Just as classical considerations placed electrons in the realm of particles, the same classical considerations placed light in the realm of waves with equal certainty. How can one explain diffraction effects without invoking wave motion? In the years from 1905 to 1928 a number of experiments were performed which could be interpreted by classical mechanics only if it was assumed that electrons possessed a wave motion, and light was composed of a stream of particles! Such dualistic descriptions, ascribing both wave and particle characteristics to electrons or light, are impossible in a physical sense. The electron must behave either as a particle or a wave, but not both (assuming it is either). "Particle" and "wave" are both concepts used by ordinary or classical mechanics, and we see the paradox which results when classical concepts are used in an attempt to describe events on an atomic scale. We shall consider just a few of the important experiments which gave rise to the classical explanation of dual behaviour for the description of electrons and light, a description which must ultimately be abandoned.

The Photoelectric Effect

Certain metals emit electrons when they are exposed to a source of light. This is called the photoelectric effect. The pertinent results of this experiment are:

1. The number of electrons released from the surface increases as the intensity of the light is increased, but the energies of the emitted electrons are independent of the intensity of the light.
2. No electrons are emitted from the surface of the metal unless the frequency of the light is greater than a certain minimum value. When electrons are ejected from the surface they exhibit a range of velocities, from zero up to some maximum value. The energy of the electrons with the maximum velocity is found to increase linearly with an increase in the frequency of the incident light.

The first result shows that light cannot be a wave motion in the classical sense. As an analogy, consider waves of water striking a beach and hitting a ball (in place of an electron) at the water's edge. The intensity of a wave is proportional to the square of the amplitude (or height) of the wave. Certainly, even when the frequency with which the waves strike the beach remains constant, an increase in the amplitude of the waves will cause much more energy to be released when they strike the beach and hit the ball. Yet when light "waves" strike a substance only the number of emitted electrons increases as the intensity is increased; the energy of the most energetic electrons remains constant. This can be explained only if it is assumed that the energy in a beam of light is not transmitted in the manner characteristic of a wave, but rather that the energy comes in bundles or packets and that the size of the packet is determined by the frequency of the light.
This explanation, put forward by Einstein in 1905, relates the energy to the frequency (and not to the intensity) of the light, as required by the experimental results. A packet of light energy is called a photon. The results of the photoelectric experiment show that the energy ε of a photon is directly proportional to the frequency ν of the light, or, calling the constant of proportionality h, we have:

$\varepsilon = h\nu \label{1}$

Since the electron is bound to the surface of the metal, the photon must possess a certain minimum amount of energy, i.e., possess a certain minimum frequency ν₀, just sufficient to free the electron from the metal. When an electron is ejected from the surface by a photon with a frequency greater than this minimum value, the energy of the photon in excess of the minimum amount appears as kinetic energy of the electron. Thus:

$kinetic\ energy\ of \ electron = h\nu - h\nu_{0} \label{2}$

where hν is the energy of the photon with frequency ν, and hν₀ is the energy of the photon which is just sufficient to free the electron from the metal. Experimentally we can measure the kinetic energy of the electrons as a function of the frequency ν. A plot of the kinetic energy versus the frequency gives a straight line whose slope is equal to the value of h, the proportionality constant. The value of h is found to be 6.6×10⁻²⁷ erg sec.

Equation \ref{1} is revolutionary. It states that the energy of a given frequency of light cannot be varied continuously, as would be the case classically, but rather that it is fixed and comes in packets of a discrete size. The energy of light is said to be quantized and a photon is one quantum (or bundle) of energy. It is tempting at this point, if we desire a classical picture of what is happening, to consider each bundle of light energy, that is, each photon, to be an actual particle. Then one photon, on striking an individual electron, scatters the electron from the surface of the metal. The energy originally in the photon is converted into the kinetic energy of the electron (minus the energy required for the electron to escape from the surface). This picture must not be taken literally, for then the diffraction of light is inexplicable. Nor, however, can the wave picture for diffraction be taken literally, for then the photoelectric effect is left unexplained. In other words, light behaves in a different way from ordinary particles and waves and requires a special description.

The constant h determines the size of the light quantum. It is termed Planck's constant in honour of the man who first postulated that energy is not a continuously variable quantity, but occurs only in packets of a discrete size. Planck proposed this postulate in 1901 as a result of a study of the manner in which energy is distributed as a function of the frequency of the light emitted by an incandescent body. Planck was forced to assume that the energies of the oscillations of the electrons in the incandescent matter, which are responsible for the emission of the light, were quantized. Only in this way could he provide a theoretical explanation of the experimental results. There was a great reluctance on the part of scientists at that time to believe that Planck's revolutionary postulate was anything more than a mathematical device, or that it represented a result of general applicability in atomic physics.
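Equation \ref{2} is easy to make concrete with numbers. A small illustration (mine, not from the text; the 2.0 eV threshold energy hν₀ is an assumed value, typical of alkali metals):

```python
h = 6.626e-34          # Planck's constant, J s
eV = 1.602e-19         # joules per electron volt
nu0 = 2.0 * eV / h     # threshold frequency for an assumed 2.0 eV binding

for lam_nm in (700, 500, 300):            # red, green, ultraviolet light
    nu = 3.0e8 / (lam_nm * 1e-9)          # frequency from wavelength
    ke = h * (nu - nu0) / eV              # Eq. (2): KE of the fastest electrons, in eV
    print(lam_nm, "nm ->", "no emission" if ke < 0 else f"{ke:.2f} eV")
```

Red light, however intense, ejects nothing; weak ultraviolet light ejects energetic electrons, exactly the behaviour the wave picture cannot explain.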
Einstein's discovery that Planck's hypothesis provided an explanation of the photoelectric effect as well indicated that the quantization of energy was indeed a concept of great physical significance. Further examples of the quantization of energy were soon forthcoming, some of which are discussed below.

The Diffraction of Electrons

Just as we have found dualistic properties for light when its properties are considered in terms of classical mechanics, so we find the same dualism for electrons. From the early experiments on electrons it was concluded that they were particles. However, a beam of electrons, when passed through a suitable grating, gives a diffraction pattern entirely analogous to that obtained in diffraction experiments with light. In other words, not only do electrons and light both appear to behave in completely different and strange ways when considered in terms of our everyday physics, they both appear to behave in the same way! Actually, the same strange behaviour can be observed for protons and neutrons. All the fundamental particles and light exhibit behaviour which leads to conflicting conclusions when classical mechanics is used to interpret the experimental findings.

The diffraction experiment with electrons was carried out at the suggestion of de Broglie. In 1923 de Broglie reasoned that a relationship should exist between the "particle" and "wave" properties for light. If light is a stream of particles, they must possess momentum. He applied to the energy of the photon Einstein's equation for the equivalence between mass and energy:

$\varepsilon = mc^2 \nonumber$

where c is the velocity of light and m is the mass of the photon. Thus the momentum of the photon is mc and:

$\varepsilon = momentum \times c \nonumber$

If light is a wave motion, then of course it possesses a characteristic frequency ν and wavelength λ which are related by the equation:

$\nu = \frac{c}{\lambda } \nonumber$

The frequency and wavelength may be related to the energy of the photon by using Einstein's famous relationship:

$\varepsilon = h\nu = \frac{hc}{\lambda } \nonumber$

By equating the two expressions for the energy:

$\frac{hc}{\lambda } = momentum \times c \nonumber$

de Broglie obtained the following relationship which bears his name:

$\lambda = \frac{h}{momentum} \label{3}$

However, de Broglie did not stop here. It was he who reasoned that light and electrons might behave in the same way. Thus a beam of electrons, each of mass m and with a velocity v (and hence a momentum mv), should exhibit diffraction effects with an apparent wavelength:

$\lambda = \frac{h}{mv} \nonumber$

Using de Broglie's relationship, we can calculate that an electron with a velocity of 1×10⁹ cm/sec should have a wavelength of approximately 1×10⁻⁸ cm. This is just the order of magnitude of the spacings between atoms in a crystal lattice. Thus a crystal can be used as a diffraction grating for electrons. In 1927 Davisson and Germer carried out this very experiment and verified de Broglie's prediction. (See Problem 1 at the end of this section.)
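The wavelength just quoted is easy to reproduce; a one-line sketch (mine, in the cgs units used in this chapter):

```python
h = 6.625e-27          # Planck's constant, erg s
m = 9.108e-28          # electron mass, g
v = 1.0e9              # electron velocity, cm/s (the value used in the text)
print(h / (m * v))     # ~7e-9 cm, of order 1e-8 cm: comparable to lattice spacings
```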
Line Spectra

A gas will emit light when an electrical discharge is passed through it. The light may be produced by applying a large voltage across a glass tube containing a gas at a low pressure and fitted with electrodes at each end. A neon sign is an example of such a "discharge tube." The electrons flowing through the tube transfer some of their energy to the electrons of the gaseous atoms. When the atomic electrons lose this extra energy and return to their normal state in the atom, the excess energy is emitted in the form of light. Thus the gaseous atoms serve to transform electrical energy into the energy of light. The puzzling feature of the emitted light is that when it is passed through a diffraction grating (or a prism) to separate the light according to its wavelength, only certain wavelengths appear in the spectrum. Each wavelength appears in the spectrum as a single narrow line of coloured light, the line resulting from the fact that the emitted light is passed through a narrow slit (thus producing a thin "line" of light) before striking the grating or the prism and being diffracted. Thus a "line" spectrum rather than a continuous spectrum is obtained when atomic electrons are excited by an electrical discharge. An example of such a spectrum is given in Fig. 1-1, which illustrates the visible spectrum observed for the hydrogen atom. This spectrum should be contrasted with the more usual continuous spectrum obtained from a source of white light, which consists of a continuous band of colours ranging from red at the long wavelength end to violet at short wavelengths.

Fig. 1-1. The visible spectrum for hydrogen atoms (1 Å = 1 Ångstrom = 1×10⁻⁸ cm)

The energy lost by an electron as it is attracted by the nucleus appears in the form of light. If all energies were possible for an electron when bound to an atom, all wavelengths or frequencies should appear in its emission spectrum, i.e., a continuous spectrum should be observed. The fact that only certain lines appear implies that only certain values for the energy of the electron are possible or allowed. We could describe this by assuming that the energy of an electron bound to an atom is quantized. The electron can then lose energy only in fixed amounts, corresponding to the difference in value between two of the allowed or quantized energy values of the atom. Since the energy of a photon is given by $\varepsilon = h\nu$ and ε must correspond to the difference between two of the allowed energy values for the electron, say E and E' (E' > E), then the value of the corresponding frequency for the photon will be given by

$\frac{E' - E}{h} = \nu = \frac{\varepsilon}{h} \label{4}$

Obviously, if only certain values of E are allowed, only certain values of ε or ν will be observed, and a line spectrum rather than a continuous spectrum (which contains all values of ν) will be observed. Equation \ref{4} was put forward by Bohr in 1913 and is known as Bohr's frequency condition. It was Bohr who first suggested that atomic line spectra could be accounted for if we assume that the energy of the electron bound to an atom is quantized. Thus the parallelism between the properties of light and electrons is complete. Both exhibit the wave-particle dualism and the energies of both are quantized.

The Compton Effect

The results of one more experiment will play an important role in our discussions of the nature of electrons bound to an atom. The experiment concerns the direct interaction of a photon and an electron. In order to determine the position of an object we must somehow "see" it. This is done by reflecting or scattering light from the object to the observer's eyes. However, when observing an object as small as the electron we must consider the interaction of an individual photon with an individual electron.
It is found experimentally (this is the Compton effect) that when a photon is scattered by an electron, the frequency of the emergent photon is lower than it was before the scattering. Since ε = hν, and ν is observed to decrease, some of the photon's energy has been transmitted to the electron. If the electron was initially free, the loss in the energy of the photon would appear as kinetic energy of the electron. From the law of conservation of energy,

$h\nu - h\nu' = \frac{1}{2}mv^2 = kinetic\ energy\ of\ electron$

where ν' is the frequency of the photon after collision with the electron. This experiment brings forth a very important effect in the making of observations on the atomic level. We cannot make an observation on an object without at the same time disturbing the object. Obviously, the electron receives a kick from the photon during the observation. While it is possible to determine the amount of energy given to the electron by measuring ν and ν', we cannot, however, predict in advance the final momentum of the electron. A knowledge of the momentum requires a knowledge of the direction in which the electron is scattered after the collision, and while this can be measured experimentally, one cannot predict the outcome of any given encounter. We shall illustrate later, with the aid of a definite example, that information regarding both the position and the momentum of an electron cannot be obtained with unlimited accuracy. For the moment, all we wish to draw from this experiment is that we must be prepared to accept a degree of uncertainty in the events we observe on the atomic level. The interaction of the observer with the system he is observing can be ignored in classical mechanics, where the masses are relatively large. This is not true on the atomic level, as here the "tools" employed to make the observation necessarily have masses and energies comparable to those of the system we are observing.

In 1926 Schrödinger, inspired by the concept of de Broglie's "matter waves," formulated an equation whose role in solving problems in atomic physics corresponds to that played by Newton's equation of motion in classical physics. This single equation will correctly predict all physical behaviour, including, for example, the experiments with electrons and light discussed above. Quantization follows automatically from this equation, now called Schrödinger's equation, and its solution yields all of the physical information which can be known about a given system. Schrödinger's equation forms the basis of quantum mechanics, and as far as is known today the solutions to all of the problems of chemistry are contained within the framework of this new mechanics. We shall in the remainder of this site concern ourselves with the behaviour of electrons in atoms and molecules as predicted and interpreted by quantum mechanics.
The energies of electrons are commonly measured and expressed in terms of a unit called an electron volt. An electron volt (ev) is defined as the energy acquired by an electron when it is accelerated through a potential difference of one volt. Imagine an evacuated tube which contains two parallel separate metal plates connected externally to a battery supplying a voltage V. The cathode in this apparatus, the negatively-charged plate, is assumed to be a photoelectric emitter. Photons from an external light source with a frequency ν₀, upon striking the cathode, will supply the electrons with enough energy to just free them from the surface of the cathode. Once free, the electrons will be attracted by and accelerated towards the positively-charged anode. The electrons, which initially have zero velocity at the cathode surface, will be accelerated to some velocity v when they reach the anode. Thus the electron acquires a kinetic energy equal to ½mv² in falling through a potential of V volts. If the charge on the electron is denoted by e, this same energy change in ev is given by the charge multiplied by the voltage V:

(5) $\frac{1}{2}mv^2 = eV \nonumber$

For a given velocity v in cm/sec, equation (5) also provides a relationship between the energy unit in the cgs (centimetre, gram, second) system, the erg, and the electron volt. This relationship is:

$1\ ev = 1.602 \times 10^{-12} \ erg \nonumber$

The regular cgs system of units is inconvenient to use on the atomic level, as the sizes of the standard units in this system are too large. Instead, a system of units called atomic units, based on atomic values for energy, length, etc., is employed. Atomic units are defined in terms of Planck's constant and the mass and charge of the electron:

$Planck's\ constant = h = 6.625 \times 10^{-27}\ erg\ sec \nonumber$
$mass\ of\ electron = m = 9.108 \times 10^{-28}\ g \nonumber$
$charge\ of\ electron = e = 4.8029 \times 10^{-10}\ esu \nonumber$

Length. $1\ au = a_{0} = \frac{h^{2}}{4\pi^{2}me^{2}} = 0.52917 \times 10^{-8}\ cm \nonumber$

Force. Force has the dimensions of charge squared divided by distance squared, or $1\ au = \frac{e^{2}}{a_{0}^{2}} = 8.2377 \times 10^{-3}\ dynes \nonumber$

Energy. Energy is force acting through a distance, or $1\ au = \frac{e^{2}}{a_{0}} = 4.3592 \times 10^{-11}\ erg = 2.7210 \times 10^{1}\ ev \nonumber$

1.04: Further Reading

Any elementary introductory book on modern physics will describe the details of the experiments discussed in this section as well as other experiments, such as the Franck-Hertz experiment, which illustrate the quantum behaviour of atoms.

1.E: Exercises

Q1.1

Atoms or ions in a crystal are arranged in regular arrays, as typified by the simple lattice structure shown in Fig. 1-2.

Fig. 1-2. A two-dimensional display of a simple crystal lattice showing an incoming and a reflected beam of X-rays.

This structure is repeated in the third dimension. X-rays are a form of light with a very short wavelength. Since the spacings between the planes of atoms in a crystal, denoted by d, are of the same order of magnitude as the wavelength of X-rays (~10⁻⁸ cm), a beam of X-rays reflected from the crystal will exhibit interference effects. That is, the layers of atoms in the crystal act as a diffraction grating. The reflected beam of X-rays will be in phase if the difference in the path length followed by waves which strike succeeding layers in the crystal is an integral number of wavelengths.
When this occurs the reflected X-rays reinforce one another and produce a beam of high intensity for that particular glancing angle θ. For some other values of the angle θ, the difference in path lengths will not be equal to an integral number of wavelengths. The reflected waves will then be out of phase and the resulting interference will greatly decrease the intensity of the reflected beam. The difference in path length traversed by waves reflected by adjacent layers is 2d sin θ, as indicated in the diagram. Therefore, (6) $n\lambda = 2d\sin \theta \nonumber$ with $n= 1,2,3...$ which states that the reflected beam will be intense at those angles for which the difference in path length is equal to a whole number of wavelengths. Thus a diffraction pattern is produced, the intensity of the reflected X-ray beam varying with the glancing angle θ. (a) By using X-rays with a known wavelength and observing the angles of maximum intensity for the reflected beam, the spacings between the atoms in a crystal, the quantity d in equation (6), may be determined. For example, X-rays with a wavelength of 1.5420 Å produce an intense first-order (n = 1 in equation (6)) reflection at an angle of 21.01° when scattered from a crystal of nickel. Determine the spacings between the planes of nickel atoms. (b) Remarkably, electrons exhibit the same kind of diffraction pattern as do X-rays when reflected from a crystal; this provides a verification of de Broglie's prediction. The experiment performed by Davisson and Germer employed low energy electrons which do not penetrate the crystal. (High energy electrons do.) In their experiment the diffraction of the electrons was caused by the nickel atoms in the surface of the crystal. A beam of electrons with an energy of 54 ev was directed at right angles to a surface of a nickel crystal with d = 2.15 Å. Many electrons are reflected back, but an intense sharp reflected beam was observed at an angle of 50° with respect to the incident beam. Fig. 1-3. The classic experiment of Davisson and Germer: the scattering of low energy electrons from the surface of a nickel crystal. As indicated in Fig. 1-3, the condition for reinforcement using a plane reflection grating is (7) $n\lambda = d \sin\theta \nonumber$ $n = 1, 2, 3, ...$ Using equation (7) with $n=1$ for the intense first-order peak observed at 50°, calculate the wavelength of the electrons. Compare this experimental value for λ with that calculated using de Broglie's relationship. (8) $\lambda = \frac{h}{mv} \nonumber$ The momentum mv may be calculated from the kinetic energy of the electrons using equation (5). (c) Even neutrons and atoms will exhibit diffraction effects when scattered from a crystal. In 1994 Professor Brockhouse of McMaster University shared the Nobel prize in physics with Professor Shull of MIT for their work on the scattering of neutrons by solids and liquids. Professor Brockhouse demonstrated how the inelastic scattering of neutrons can be used to gain information about the motions of atoms in solids and liquids. Calculate the velocity of neutrons which will produce a first-order reflection for θ = 30° for a crystal with d = 1.5 × 10⁻⁸ cm. Neutrons penetrate a crystal and hence equation (6) should be used to determine λ. The mass of the neutron is 1.66 × 10⁻²⁴ g. (d) The neutrons obtained from an atomic reactor have high velocities. They may be slowed down by allowing them to come into thermal equilibrium with a cold material.
This is usually done by passing them through a block of carbon. The kinetic theory relationship between average kinetic energy and the absolute temperature, $\frac{1}{2}m\bar{v}^{2} = \frac{3}{2}kT \nonumber$ may be applied to the neutrons. Calculate the temperature of the carbon block which will produce an abundant supply of neutrons with velocities in the range required for the experiment described in (c).
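The conversion factors quoted in this section can be verified numerically. The short Python sketch below recomputes a₀ and the atomic unit of energy from the cgs constants listed above, and also sketches the comparison asked for in part (b) of Q1.1, converting the 54 ev kinetic energy to ergs with equation (5); the constants are the text's (older) cgs values, not modern ones.

```python
import math

# Numerical check of the atomic-unit values quoted above,
# using the text's cgs constants.
h = 6.625e-27    # Planck's constant, erg sec
m = 9.108e-28    # electron mass, g
e = 4.8029e-10   # electron charge, esu

a0 = h**2 / (4 * math.pi**2 * m * e**2)  # au of length, cm
E_au = e**2 / a0                         # au of energy, erg
print(f"a0   = {a0:.5e} cm")                                  # ~0.52917e-8 cm
print(f"1 au = {E_au:.4e} erg = {E_au / 1.602e-12:.2f} ev")   # ~27.21 ev

# Part (b): de Broglie wavelength of a 54 ev electron,
# with the kinetic energy converted to ergs via equation (5).
E = 54 * 1.602e-12               # erg
p = math.sqrt(2 * m * E)         # momentum from E = p^2 / 2m
lam = h / p                      # cm
print(f"de Broglie lambda = {lam * 1e8:.2f} A")                          # ~1.67 A
print(f"d sin(50 deg)     = {2.15 * math.sin(math.radians(50)):.2f} A")  # ~1.65 A
```

The close agreement between the last two numbers is the quantitative content of the Davisson-Germer verification of de Broglie's prediction.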
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/01%3A_The_Nature_of_the_Problem/1.03%3A_Units_of_Measurement_used_in_Atomic_Physics.txt
Now that we have studied some of the properties of electrons and light and have seen that their behavior cannot be described by classical mechanics, we shall introduce some of the important concepts of the new physics, quantum mechanics, which does predict their behavior. For the study of chemistry, we are most interested in what the new mechanics has to say about the properties of electrons whose motions are in some manner confined, for example, the motion of an electron which is bound to an atom by the attractive force of the nucleus. An atom, even the hydrogen atom, is a relatively complicated system because it involves motion in three dimensions. We shall consider first an idealized problem in just one dimension, that of an electron whose motion is confined to a line of finite length. We shall state the results given by quantum mechanics for this simple system and contrast them with the results given by classical mechanics for a similar system, but for a particle of much larger mass. Later, we shall indicate the manner in which the quantum mechanical predictions are obtained for a system. 02: The New Physics Consider an electron of mass m = 9 × 10⁻²⁸ g which is confined to move on a line L cm in length. L is set equal to the approximate diameter of an atom, 1 × 10⁻⁸ cm = 1 Å. Consider as well a system composed of a mass of 1 g confined to move on a line, say 1 metre in length. We shall apply quantum mechanics to the first of these systems and classical mechanics to the second. Energy As either mass moves from one end of its line to the other, the potential energy (the energy which depends on the position of the mass) remains constant. We may set the potential energy equal to zero, and all the energy is then kinetic energy (energy of motion). When the electron reaches the end of the line, we shall assume that it is reflected by some force. Thus at the ends of the line the potential energy rises abruptly to a very large value, so large that the electron can never "break through." We can plot the potential energy versus the position x along the line (Fig. 2-1). We refer to the electron (or the particle of m = 1 g) as being in a potential well and we can imagine the abruptly rising potential at x = 0 and x = L to be the result of placing a "wall" at each end of the line. First, what are the predictions of classical mechanics regarding the energy of the mass of 1 g? The total energy is kinetic energy and is simply: $E = KE = \frac{1}{2}mv^{2}$ We know from experience that v, the velocity, can have any possible value from zero up to very large values. Since all values for v are allowed, all values for E are allowed. We conclude that the energy of a classical system may have any one of a continuous range of values and may be changed by any arbitrary amount. Let us contrast with this conclusion the prediction which quantum mechanics makes regarding the energy of an electron in a corresponding situation. The quantum mechanical results are remarkable indeed, although they should not be surprising when we recall Bohr's explanation of the line spectra which are observed for atoms. Quantum mechanics predicts that there are only certain values of the energy which the electron confined to move on the line can possess. The energy of the electron is quantized. If this result could be observed for a massive particle, it would mean that only certain velocities were possible, say v = 1 cm/sec or 10 cm/sec, but with no intermediate values! But then an electron is not really a particle.
The expression for the allowed energies as given by quantum mechanics for this simple system is: (1) $E_{n} = \frac{h^{2} n^{2}}{8mL^{2}} \nonumber$ $n = 1,2,3,4,....$ where again h is Planck's constant and n is an integer which may assume any value from one to infinity. Since only discrete values for E are possible, the appearance of the index n in equation (1) is necessary. A number such as n which appears in the expression for the energy is called a quantum number. Each value of the quantum number n fixes a value of Eₙ, one of the allowed energy values. We can indicate the possible values for the energy on an energy diagram. It is clear from equation (1) that for given values of m and L, Eₙ equals a constant (K = h²/8mL²) multiplied by n²: (2) $E_{n} = Kn^{2} \nonumber$ $n = 1,2,3,4,....$ Thus we can express the value of Eₙ in terms of so many units of K. Each line, called an energy level, in Fig. 2-2 denotes an allowed energy for the system, and the figure is called an energy level diagram. Each level is identified by its value of n as a subscript. A corresponding diagram for the case of the classical particle would consist of an infinite number of lines with infinitesimally small spacings between them, indicating that the energy in a classical system may vary in a continuous manner and may assume any value. The energy continuum of classical mechanics is replaced by a discrete set of energy levels in quantum mechanics. Suppose we could give the electron sufficient energy to place it in one of the higher (excited) energy levels. Then when it "fell" back down to the lowest value of E (called the ground level, E₁), a photon would be emitted. The energy ε of the photon would be given by the difference in the values of Eₙ and E₁ and, since ε = hν, the frequency of the photon would be given by the relationship: $\nu = \frac{E_{n} - E_{1}}{h} \nonumber$ $n = 2,3,4,5,...$ which is Bohr's frequency condition (I-4). Thus only certain frequencies would be emitted and the spectrum would consist of a series of lines. We can illustrate the change in energy when the electron falls to the lowest energy level by connecting the upper level and the n = 1 level by an arrow in an energy level diagram. The frequency of the photon emitted during the indicated drop in energy is proportional to the length of the arrow, i.e., to the change in energy (Fig. 2-3). The line directly beneath each arrow represents the value of the frequency for that drop in energy. Since the differences in the lengths of the arrows increase as n increases, the separations between the observed frequencies show a corresponding increase. The spectrum, therefore, consists of a series of lines, with the spacings between the lines increasing as n increases. If the energy were not quantized and all values were possible, all jumps in energy would be possible and all frequencies would appear. Thus a continuum of possible energy values will produce a continuous spectrum of frequencies. A line spectrum, on the other hand, is a direct manifestation of the quantization of energy. Fig. 2-3. The origin of a line spectrum. In the quantum case, as in the classical case, all of the energy will be in the form of kinetic energy. We may obtain an expression for the momentum of the electron by equating the total value of the energy Eₙ to p²/2m, where p is the momentum (= mv) of the electron. (p²/2m is another way of expressing ½mv².)
$E_{n} = \frac{n^{2}h^{2}}{8mL^{2}} = \frac{1}{2}mv^{2} = \frac{p^{2}}{2m}$ This gives: $p = \pm \frac{nh}{2L}$ $n = 1,2,3,4,...$ A plus and a minus sign must be placed in front of the number which gives the magnitude of the momentum to indicate that we do not know and cannot determine the direction of the motion. If the electron moves from left to right the sign will be positive. If it moves from right to left the sign will be negative. The most we can know about the momentum itself is its average value. This value will clearly be zero because of the equal probability for motion in either direction. The average value of p², however, is finite. Since the lowest allowed value of the quantum number n in the quantum mechanical expression for the energy is unity, it is evident that the energy can never equal zero. A confined electron can never be motionless. The expression for Eₙ also indicates that the kinetic energy and the momentum increase as the length of the line L is decreased. Thus the kinetic energy and momentum of the electron increase as its motion becomes more confined. This is both an important and a general result and will be referred to again. Position The concept of a trajectory is fundamental to classical mechanics. Given a particular mass with a given initial velocity and a knowledge of the forces acting on it, we may use classical mechanics to predict the exact position and velocity of the particle at any future time. Thus we speak of the trajectory of the particle and we may calculate it to any desired degree of accuracy. It is also possible, within the framework of classical mechanics, to measure the position and velocity of a particle at any given instant of time. Thus classical mechanics correctly predicts what one can experimentally measure for massive particles. We have previously mentioned the difficulties which are encountered when we attempt to determine the position of an electron. The results of the Compton effect indicate that part of the energy of the photon used in making the observation is transferred to the electron, and we invariably disturb the electron when we attempt to measure its position. Thus it is not surprising to find that quantum mechanics does not predict the position of an electron exactly. Rather, it provides only a probability as to where the electron will be found. When we consider the experiments which attempt to define the position of the electron, we shall find that this is the maximum information that can indeed be obtained even experimentally. The new mechanics again predicts only what can indeed be measured experimentally. We shall illustrate the probability aspect in terms of the system of an electron confined to motion along a line of length L. Quantum mechanical probabilities are expressed in terms of a distribution function which in this particular case we shall label Pₙ(x). Consider the line of length L to be divided into a large number of very small segments, each of length Δx. Then the probability that the electron is in one particular small segment Δx of the line is given by the product of Δx and the value of the probability distribution function Pₙ(x) for that interval. For example, the probability distribution function for the electron when it is in the lowest energy level, n = 1, is given by P₁(x) (Fig. 2-4). The probability that the electron will be in the particular small interval Δx indicated in Fig. 2-4 is equal to the shaded area, an area which in turn is equal to the product of Δx and the average value of P₁(x) throughout the interval Δx, called P₁(x′), $probability\ that\ electron\ is\ in\ segment\ \Delta x = P_{1}(x') \Delta x$ The curve P₁(x) may be determined in the following manner. We design an experiment able to determine whether or not the electron is in one particular segment Δx of the line when it is known to be in the quantum level n = 1. (One way in which this might be done is described below.) We perform the experiment a large number of times, say one hundred, for each segment and record the ratio of the number of times the electron is found in a particular segment to the total number of observations made for that segment. For example, an electron is found to be in the segment marked Δx (of length 0.1L) in the figure for P₁(x) in 18 out of 100 observations, or 18% of the time. In the other 82 observations the electron was in one of the other segments. Thus the average value of P₁(x) for this segment, called P₁(x′), must be 1.8/L since P₁(x′)Δx = (1.8/L)(0.1L) = 0.18 or 18%. A similar set of experiments is made for each of the segments Δx and in each case a rectangle is constructed with Δx as base and with a height equal to P₁(x) such that the product P₁(x)Δx equals the fractional number of times the electron is found in the segment Δx. The limiting case in which the total length L is divided into a very large number of very small segments (Δx → dx) would result in the smooth curve shown in the figure for P₁(x). There is a different probability distribution for each value of Eₙ, or each quantum level, as shown, for example, by the probability distributions for the energy levels with n = 2, 3, 4, 5 and 6 (Fig. 2-4). The probability of finding the electron at the positions where the curve touches the x-axis is zero. Such a zero is termed a node. The number of nodes is always n − 1 if we do not count the nodes at the ends of each Pₙ(x) curve. Let us first contrast these results, particularly that for P₁(x), with the corresponding classical case. Since a classical analysis allows us to determine the position of a particle uniquely at any instant, either theoretically or experimentally, the idea of a probability distribution is foreign to a classical mechanical analysis. However, we still can determine the classical probability distribution for the particle confined to motion on a line. Since there are no forces acting on the particle as it traverses the line, it will be equally likely to be found at any point on the line (Fig. 2-5). This probability will be the same regardless of the energy. There is again a striking difference between the classical and the quantum mechanical results. For the first quantum level, the graph of P₁(x) indicates the electron will most likely be found at the midpoint of the line. Furthermore, the form of Pₙ(x) changes with every change in energy. Every allowed value of the energy has associated with it a distinct probability distribution for the electron. These are the predictions of quantum mechanics regarding the position of a bound electron. Now let us investigate the experimental aspect of the problem to gain some physical reason for these predictions. Fig. 2-5. The classical probability distribution for motion on a line. This is the result obtained when the particle is located a large number of times at random time intervals. The classical probability function Pc(x) is the same for all values of x and equals 1/L, i.e., the particle is equally likely to be found at any value of x between 0 and L.
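The segment-by-segment bookkeeping just described can be sketched numerically. The Python fragment below assumes the explicit form P₁(x) = (2/L) sin²(πx/L), which is derived later in the discussion of probability amplitudes, and integrates it over ten segments of width 0.1L; the segment probabilities sum to unity and peak near the centre of the line, as in Fig. 2-4.

```python
import math

# Segment probabilities for the n = 1 distribution, assuming
# P1(x) = (2/L) sin^2(pi x / L) (derived in the next section).
def segment_probability(a, b, L=1.0, steps=10_000):
    """Integrate P1(x) from a to b with a simple midpoint rule."""
    dx = (b - a) / steps
    return sum((2.0 / L) * math.sin(math.pi * (a + (i + 0.5) * dx) / L) ** 2 * dx
               for i in range(steps))

L = 1.0
total = 0.0
for i in range(10):                      # ten segments of width 0.1 L
    a = i * 0.1 * L
    p = segment_probability(a, a + 0.1 * L, L)
    total += p
    print(f"segment [{i/10:.1f}L, {(i+1)/10:.1f}L]: {100 * p:4.1f}%")
print(f"sum over all segments: {total:.4f}")  # 1.0000: the electron is certainly on the line
```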
Let us design an experiment in which we attempt to pinpoint the position of an electron within a segment Δx. The experiment is a hypothetical one in that we imagine that we are to observe the electron through a microscope by reflecting or scattering light from it. Imagine the lens of a microscope being placed above the line L with the light entering from the side (Fig. 2-6 (a)). The electron, when illuminated with light, will act as a small source of light and will produce at A an image in the form of a bright disc surrounded by a group of rings of decreasing intensity. Because of this effect, which is entirely analogous to the diffraction effect observed for a pinhole source of light, the centre of the image will appear bright even if the electron is not precisely located at the point marked x. It could equally well have been at any value of x between the points x′ and x″ and produced an image visible to the eye at A if the difference in the path lengths Bx′ and Cx′ (or Bx″ and Cx″) is less than one half of a wavelength. In other words the resolving power of a microscope is not unlimited but is instead determined by the wavelength of the light used in making the observation. The use of the microscope imposes an inherent uncertainty in our observation of the position of the electron. With the condition that the difference in the path lengths to the outside rim of the lens must be no greater than one half a wavelength and with the use of some geometry, the magnitude of the uncertainty in the position of the electron, x″ − x′ = Δx, is found to be given approximately by: (3) $\Delta x \sim \frac{\lambda}{2 \sin \theta} \nonumber$ where θ is the angle indicated in the diagram. Remembering the Compton effect and bearing in mind that we wish to disturb the electron as little as possible during the observation, we shall inquire as to the results obtained when a single photon is scattered from the electron. A single photon will not yield the complete diffraction pattern at A, but will instead produce a single flash of light. A diffraction pattern is the result of many photons passing through the microscope and represents the probability distribution for the emergent photons when they have been scattered by an electron lying between x′ and x″. A single photon, when scattered from an electron within the length Δx, is however still diffracted and will produce a flash of light somewhere in one of the areas defined by the probability distribution produced by many photons passing through the system. Thus even when we use but a single photon in our apparatus the uncertainty Δx in our experimentally determined position of the electron will still be given by equation (3). Obviously, if we want to locate an electron which is confined to move on a line to within a length that is small compared to the length of the line, we must use light which has a wavelength much less than L. This is exactly what equation (3) states: the shorter the wavelength of the light which is used to observe the electron, the smaller will be the uncertainty Δx. That being the case, why not do the experiment with light of very short wavelength compared to the length L, say λ = (1/100)L? Then we can hope to find the electron on one small segment of the line, each segment being approximately (1/100)L in length.
Let us calculate the frequency and energy of a photon which has the required wavelength of λ = (1/100)L. As before, we set L equal to a typical atomic dimension of 1 × 10⁻⁸ cm. $\varepsilon = h\nu = \frac{hc}{\lambda} = \frac{6.6 \times 10^{-27} \times 3 \times 10^{10}}{10^{-10}} = 2.0 \times 10^{-6}\ ergs$ We are immediately in difficulty, because the energy of the electron in the first quantum level is easily found to be: $E_{1} = \frac{(6.6 \times 10^{-27})^{2}}{9.1 \times 10^{-28} \times 8 \times 10^{-16}} = 6.0 \times 10^{-11}\ ergs = K$ The energy of the photon is roughly 3 × 10⁴ times greater than the energy of the electron! We know from the Compton effect that the collision of a photon with an electron imparts energy to the electron. Thus the electron after the collision will certainly not be in the state n = 1. It will be excited to one (we don't know which) of the excited levels with n = 2 (E = 4K) or n = 3 (E = 9K), etc. The result is clear. If we demand an intimate knowledge of what the position of the electron is in a given state, we can obtain this information only at the expense of imparting to the electron an unknown amount of energy which destroys the system, i.e., the electron is no longer in the n = 1 level but in one of the other excited levels. If this experiment were repeated a large number of times and a record kept of the number of times an electron was located in each segment of the line (roughly (1/100)L), a probability plot similar to Fig. 2-4 would be obtained. We can ask another kind of question regarding the position of the electron: "How much information can be obtained about the position of the electron in a given quantum level without at the same time destroying that level?" The electron cannot accept energy in an amount less than that necessary to excite it to the next quantum level, n = 2. The difference in energy between E₂ and E₁ is 3K. Thus if we are to leave the electron in a state of known energy and momentum we must use light whose photons possess an energy less than 3K. Let us calculate the wavelength of the light with ε = 2K and compare this value with the length L. $\lambda = \frac{hc}{\varepsilon} = \frac{6.6 \times 10^{-27} \times 3.0 \times 10^{10}}{12 \times 10^{-11}} = 1.7 \times 10^{-6}\ cm$ The wavelength is greater than the length of the line L. From equation (3) it is clear that the uncertainty in the position of the particle will be of the order of magnitude of, or greater than, L itself. The electron will appear to be blurred over the complete length of the line in a single experiment! Thus there are two interpretations which can be given to the probability distributions, depending on the experiment which is performed. The first is that of a true probability of finding the electron in a given small segment of the line using light of very short λ relative to L. This experiment excites the electron, changes the system and leaves the electron with an unknown amount of energy and momentum. We have destroyed the object of our investigation. We now know where it was in a given experiment but not where it will be, in terms of energy or position. Alternatively, we could use light with a λ approximately equal to L. This does not excite the electron and leaves it in a known energy level. However, now the knowledge of the position is very uncertain. The photons are scattered from the system and give us directly the smeared distribution P₁ pictured in Fig. 2-4.
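A few lines of Python reproduce the order-of-magnitude argument just given: the λ = L/100 photon carries some 10⁴ times the energy of the n = 1 level, while a photon gentle enough to leave the level intact (ε = 2K) has a wavelength far larger than L. The constants are the rounded cgs values used in the text.

```python
# Order-of-magnitude check of the photon energies discussed above (cgs units).
h, c = 6.6e-27, 3.0e10      # erg sec, cm/sec
m, L = 9.1e-28, 1.0e-8      # g, cm

K = h**2 / (8 * m * L**2)          # E1 for the electron on the line
eps = h * c / (L / 100)            # photon with lambda = L/100
print(f"E1 = K         = {K:.1e} erg")    # ~6.0e-11 erg
print(f"photon, L/100  = {eps:.1e} erg")  # ~2.0e-6 erg
print(f"ratio          = {eps / K:.0f}")  # ~3e4: the observation destroys the state

lam = h * c / (2 * K)              # gentler photon with energy 2K < 3K
print(f"lambda for 2K  = {lam:.1e} cm = {lam / L:.0f} L")  # far larger than L itself
```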
In a real sense we must accept the fact that when the electron remains in a given state it is "smeared out" and "looks like" the pictures given for Pₙ. Thus we can interpret the Pₙ's as instantaneous pictures of the electron when it is bound in a known state, and forget their probability aspect. This "smeared out" distribution is given a special name; it is called the electron density distribution. There will be a certain fraction of the total electronic charge at each point on the line, and when we consider a system in three dimensions, there will be a certain fraction of the total electronic charge in every small volume of space. Hence it is given the name electron density, the amount of charge per unit volume of space. The Pₙ's represent a charge density distribution which is considered static as long as the electron remains in the nth quantum level. Thus the Pₙ functions tell us either (a) the fraction of time the electron is at each point on the line for observations employing light of short wavelength, or (b) the fraction of the total charge found at each point on the line (the whole of the charge being spread out) when the observations are made with light of relatively long wavelength. The electron density distributions of atoms, molecules or ions in a crystal can be determined experimentally by X-ray scattering experiments since X-rays can be generated with wavelengths of the same order of magnitude as atomic diameters (1 × 10⁻⁸ cm). In X-ray scattering the intensity of the scattered beam and the angle through which it is scattered are measured. The distribution of negative charge within the crystal scatters the X-rays and determines the intensity and angle of scattering. Thus these experimental quantities can be used to calculate the form of the electron density distribution. There is a definite quantum mechanical relationship governing the magnitudes of the uncertainties encountered in measurements on the atomic level. We can illustrate this relationship for the one-dimensional system. Let us consider the minimum uncertainty in our observations of the position and the momentum of the electron moving on a line obtained in an experiment which leaves the particle bound in a given quantum level, say n = 1. This will require the use of light with λ ~ L. We have seen that the use of light of this wavelength limits us to stating that the electron is somewhere on the line of length L. We can say no more than this with certainty unless we use light of much shorter λ, and then we will change the quantum number of the electron. The uncertainty in the value of the position coordinate, which we shall call Δx, is just L, the length of the line: $\Delta x = L$ We have previously shown that the momentum of the electron in the nth quantum level is given by: $p_{n} = \pm \frac{nh}{2L}$ $n = 1,2,3, ...$ the plus and minus signs denoting the fact that while we know the magnitude of the momentum we cannot determine whether the electron is moving from left to right (+nh/2L) or from right to left (−nh/2L). The minimum uncertainty in our knowledge of the momentum is the difference between these two possibilities, or for n = 1: $\Delta p = + \frac{h}{2L} - \left(-\frac{h}{2L}\right) = \frac{h}{L}$ The product of the uncertainties in the position and the momentum is: $\Delta p \Delta x = \frac{h}{L} L = h$ This result is a particular example of a general relationship governing the product of the uncertainties in the momentum and position known as Heisenberg's uncertainty principle.
In the general case, the equality sign in the above equation is replaced by the symbol "≥", which denotes that the product of the uncertainties Δp Δx equals or exceeds the value of Planck's constant h; that is, the general statement is given by Δp Δx ≥ h. If we endeavour to decrease the uncertainty in the position coordinate (i.e., make Δx small) there will be a corresponding increase in the uncertainty of the momentum of the electron along the same coordinate, such that the product of the two uncertainties is always at least equal to Planck's constant. We saw this effect in our experiments wherein we employed light of short λ to locate the position of the electron more precisely. When we did this we excited the electron to one of the other available quantum states, thus making a knowledge of the energy and hence the momentum uncertain. We might also try to defeat Heisenberg's uncertainty principle by decreasing the length of the line L. By shortening L, we would decrease the uncertainty as to where the electron is. However, as was noted previously, the momentum increases as L is decreased and the uncertainty in p is always of the same order of magnitude as p itself; in this case twice the magnitude of p. Thus the decrease in Δx obtained by decreasing L is offset by the increase in Δp which accompanies the increased confinement of the electron; the product Δx Δp remains unchanged in value. We can illustrate the operation of Heisenberg's uncertainty principle for a free particle by referring again to our hypothetical experiment in which we attempted to locate the position of an electron by using a microscope. We imagine the electron to be free and travelling with a known momentum in the direction of the x-axis with a photon entering from below along the y-axis. When the photon is scattered by the electron it may transfer momentum to the electron and continue on a line which makes an angle θ′ to the y-axis (Fig. 2-6). The photon, in doing so, will acquire momentum in the direction of the x-axis, a direction in which it initially had none. Since momentum must be conserved, the electron will receive a recoil momentum, a momentum equal in magnitude but opposite in direction to that gained by the photon. This is the Compton effect. Thus our act of observing the electron will lead to an uncertainty in its momentum as the amount of momentum transferred during the collision is uncontrollable. We may, however, set limits on the amount transferred and in this way determine the uncertainty introduced into the value of the momentum of the electron. The momentum of the photon before the collision is all directed along the y-axis and has a magnitude equal to h/λ. After colliding with the electron the photon may be scattered to the left or to the right of the y-axis through any angle θ′ lying between 0 and θ and still be collected by the lens of the microscope and seen by the observer at A. Thus every photon which passes through the microscope will have an uncertainty of 2(h/λ) sin θ in its component of momentum along the x-axis, since it may have been scattered by the maximum amount to the left and acquired a component of −(h/λ) sin θ or, on the other hand, it may have been scattered by the maximum amount to the right and acquired a momentum component of +(h/λ) sin θ. Any x-component of momentum acquired by the photon must have been lost by the electron and the uncertainty introduced into the momentum of the electron by the observation is also equal to 2(h/λ) sin θ.
In addition to the uncertainty induced in the momentum of the electron by the act of measurement, there is also an inherent uncertainty in its position (equation (3)) because of the limited resolving power of the microscope. The product of the two uncertainties at the instant of measurement or immediately following it is: $\Delta p \Delta x \sim 2 \frac{h}{\lambda} \sin \theta \times \frac{\lambda}{2 \sin \theta} = h$ Heisenberg's uncertainty relationship is again fulfilled. Our experiment employs only a single photon which, since light itself is quantized, represents the smallest packet of energy and momentum which we can use in making the observation. Even in this idealized experiment the act of observation creates an unavoidable disturbance in the system. Degeneracy We may use an extension of our simple system to illustrate another important quantum mechanical result regarding energy levels. Suppose we allow the electron to move on the x-y plane rather than just along the x-axis. The motions along the x and y directions will be independent of one another and the total energy of the system will be given by the sum of the energy quantum for the motion along the x-axis plus the energy quantum for motion along the y-axis. Two quantum numbers will now be necessary, one to indicate the amount of energy along each coordinate. We shall label these as nx and ny. Let us assume that the motion is confined to a length L along each axis, then: $E_{n_{x},{n_{y}}} = \frac{h^{2}}{8mL^{2}} n_{x}^{2} \ + \ \frac{h^{2}}{8mL^{2}} n_{y}^{2}$ $= \frac{h^{2}}{8mL^{2}}(n_{x}^{2} + n_{y}^{2})$ $n_{x}, n_{y} = 1,2,3, ...$ Nothing new is encountered when the electron is in the lowest quantum level for which nx = ny = 1. The energy E1,1 simply equals 2h²/8mL². Since two dimensions (x and y) are now required to specify the position of the electron, the probability distribution P1,1(x,y) must be plotted in the third dimension. We may, however, still display P1,1(x,y) in a two-dimensional diagram in the form of a contour map (Fig. 2-7). All points in the x-y plane having the same value for the probability distribution P1,1(x,y) are joined by a line, a contour line. The values of the contours increase from the outermost to the innermost, and the electron, when in the level nx = ny = 1, is therefore most likely to be found in the central region of the x-y plane. Fig. 2-7. Contour maps of the probability distributions Pnx,ny(x,y) for an electron moving on the x-y plane. The dashed lines represent the positions of nodes, lines along which the probability is zero. P1,2(x,y) and P2,1(x,y) are distributions for one doubly-degenerate level; P2,3(x,y) and P3,2(x,y) are examples of distributions for another degenerate level of still higher energy. The same contours are shown in each diagram and their values (in units of 4/L²) are indicated in the diagram for P1,1(x,y). A plot of P1,1(x,y) along either of the axes indicated in Fig. 2-7 (one parallel to the x-axis at y = L/2 and the other parallel to the y-axis at x = L/2) is similar in appearance to that for P₁(x) shown in Fig. 2-4. That is, for a fixed value of y, the contribution to P1,1(x,y) from the motion along the y-axis is constant and $P_{1,1}(x,L/2) = constant \times P_{1}(x)$ Thus, aside from the constant factor, P₁(x) provides a profile, or if P1,1(x,y) were displayed in three dimensions, a cross section of the contour map of P1,1(x,y).
A contour map is a display of the probability or density distribution in a plane; a profile is a display of the density distribution along a line. Now consider the possibility of nx = 1 and ny = 2. Then $E_{1,2} = \frac{5h^{2}}{8mL^2}$ We could also have the situation in which nx = 2 and ny = 1. This does not change the value of the total energy, $E_{2,1} = E_{1,2} = \frac{5h^{2}}{8mL^{2}}$ but the probability distributions (Fig. 2-7) are different: P1,2(x,y) ≠ P2,1(x,y). When nx = 1 and ny = 2, there must be a node in the y direction, i.e., a zero probability of finding the electron anywhere on the line y = L/2. Thus a slice through P1,2(x,y) at x = L/2 parallel to the y-axis must be similar to the figure for P₂(x), while a slice parallel to the x-axis will still be similar to P₁(x). Just the reverse is true for the case nx = 2 and ny = 1. In this case, whether or not we can distinguish experimentally between the x- and y-axes, there are two different arrangements for the distribution of the electron, both of which have the same energy. The energy level is said to be degenerate. The degeneracy of an energy level is equal to the number of distinct probability distributions for the system, all of which belong to this same energy level. The concept of degeneracy in an energy level has important consequences in our study of the electronic structure of atoms.
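Degenerate levels of this kind are easy to enumerate. The short Python sketch below tabulates nx² + ny² (the energy in units of h²/8mL²) for the first few quantum-number pairs and groups the pairs that share the same energy; the pairs (1,2)/(2,1) and (2,3)/(3,2) of Fig. 2-7 show up as doubly-degenerate levels.

```python
from collections import defaultdict

# Group quantum-number pairs (nx, ny) by their energy
# E = (h^2 / 8mL^2)(nx^2 + ny^2), in units of h^2 / 8mL^2.
levels = defaultdict(list)
for nx in range(1, 6):
    for ny in range(1, 6):
        levels[nx**2 + ny**2].append((nx, ny))

for E in sorted(levels):
    pairs = levels[E]
    print(f"E = {E:2d}: {pairs}  (degeneracy {len(pairs)})")
# E = 5 collects (1, 2) and (2, 1); E = 13 collects (2, 3) and (3, 2):
# the doubly-degenerate levels whose contour maps are contrasted in Fig. 2-7.
```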
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/02%3A_The_New_Physics/2.01%3A_A_Contrast_of_the_Old_and_New_Physics.txt
In quantum mechanics, Newton's familiar equations of motion are replaced by Schrödinger's equation. We shall not discuss this equation in any detail, nor indeed even write it down, but one important aspect of it must be mentioned. When Newton's laws of motion are applied to a system, we obtain both the energy and an equation of motion. The equation of motion allows us to calculate the position or coordinates of the system at any instant of time. However, when Schrödinger's equation is solved for a given system we obtain the energy directly, but not the probability distribution function, the function which contains the information regarding the position of the particle. Instead, the solution of Schrödinger's equation gives only the amplitude of the probability distribution function along with the energy. The probability distribution itself is obtained by squaring the probability amplitude. Thus for every allowed value of the energy, we obtain one or more (the energy value may be degenerate) probability amplitudes. The probability amplitudes are functions only of the positional coordinates of the system and are generally denoted by the Greek letter ψ (psi). For a bound system the amplitudes as well as the energies are determined by one or more quantum numbers. Thus for every Eₙ we have one or more ψₙ's, and by squaring the ψₙ's we may obtain the corresponding Pₙ's. Let us look at the forms of the amplitude functions for the simple system of an electron confined to motion on a line. For any system, ψ is simply some mathematical function of the positional coordinates. In the present problem, which involves only a single coordinate x, the amplitude functions may be plotted versus the x-coordinate in the form of a graph. The functions ψₙ are particularly simple in this case as they are sine functions. The first few ψₙ's are shown plotted in Fig. 2-8. Fig. 2-8. The first six probability amplitudes ψₙ(x) for an electron moving on a line of length L. Note that ψₙ(x) may be negative in sign for certain values of x. The ψₙ(x) are squared to obtain the probability distribution functions Pₙ(x), which are, therefore, positive for all values of x. Wherever ψₙ(x) crosses the x-axis and changes sign, a node appears in the corresponding Pₙ(x). Each of these graphs, when squared, yields the corresponding Pₙ curves shown previously. When n = 1, $\psi_{1}(x) = \sqrt{\frac{2}{L}} \sin \frac{\pi x}{L} \nonumber$ When x = 0, sin(0) = 0; when x = L, sin(π) = 0; and when x = L/2, sin(π/2) = 1. Thus ψ₁ equals zero at x = 0 and x = L and is a maximum when x = L/2. When this function is squared, we obtain: $P_{1}(x) = \psi_{1}^{2}(x) = \frac{2}{L} \sin^{2} \frac{\pi x}{L} \nonumber$ and the graph (Fig. 2-4) previously given for P₁(x). As illustrated previously in Fig. 2-4, the value of ψₙ²(x) or Pₙ(x) multiplied by Δx, ψₙ²(x)Δx or Pₙ(x)Δx, is the probability that the electron will be found in some particular small segment of the line Δx. The constant factor of $\sqrt{2/L}$ which appears in every ψₙ(x) is to ensure that when the value of ψₙ²(x)Δx is summed over each of the small segments Δx, the final value will equal unity. This implies that the probability that the electron is somewhere on the line is unity, i.e., a certainty. Thus the probability that the electron is in any one of the small segments Δx (the value of ψₙ²(x)Δx or Pₙ(x)Δx evaluated at a value of x between 0 and L) is a fraction of unity, i.e., a probability less than one.
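The role of the normalizing factor √(2/L) can be checked numerically; the following Python sketch sums ψₙ²(x)Δx over a fine division of the line and recovers unity for each n.

```python
import math

# Check that the factor sqrt(2/L) normalizes psi_n: summing
# psi_n(x)^2 dx over the whole line should give exactly 1.
L, steps = 1.0, 100_000
dx = L / steps
for n in (1, 2, 3):
    total = sum((math.sqrt(2 / L) * math.sin(n * math.pi * (i + 0.5) * dx / L)) ** 2 * dx
                for i in range(steps))
    print(f"n = {n}: sum of psi_n^2 dx = {total:.6f}")  # ~1.000000
```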
Each ψₙ must necessarily go to zero at each end of the line, since the probability of the electron not being on the line is zero. This is a physical condition which places a mathematical restraint on the ψₙ. Thus the only acceptable ψₙ's are those which go to zero at each end of the line. A solution of the form shown in Fig. 2-9 is, therefore, not an acceptable one. Since there is but a single value of the energy for each of the possible ψₙ functions, it is clear that only certain discrete values of the energy will be allowed. The physical restraint of confining the motion to a finite length of line results in the quantization of the energy. Indeed, if the line is made infinitely long (the electron is then free and no longer bound), solutions for any value of n, integer or non-integer, are possible; correspondingly, all energies are permissible. Thus only the energies of bound systems are quantized. Fig. 2-9. An unacceptable form for ψₙ(x). The ψₙ's have the appearance of a wave in that a given value of ψₙ(x) is repeated as x is increased. They are periodic functions of x. We may, if we wish, refer to the wavelength of ψₙ. The wavelength of ψ₁ is 2L, since only one half of a wave fits on the length L. The wavelength for ψ₂ is L, since one complete wave fits in the length L. Similarly λ₃ = (2/3)L and λ₄ = (2/4)L. In general: $\lambda_{n} = \frac{2L}{n} \nonumber$ Because of the wave-like nature of the ψₙ's, the new physics is sometimes referred to as wave mechanics, and the ψₙ functions are called wave functions. However, it must be stressed that a wave function itself has no physical reality. All physical properties are determined by the product of the wave function with itself. It is the product ψₙ(x)ψₙ(x) which yields the physically measurable probability distribution. Thus ψₙ² may be observed but not ψₙ itself. A ψₙ does not represent the trajectory or path followed by an electron in space. We have seen that the most we can say about the position of an electron is given by the probability function ψₙ². We do, however, refer to the wavelengths of electrons, neutrons, etc. But we must remember that the wavelengths refer only to a property of the amplitude functions and not to the motion of the particle itself. A number of interesting properties can be related to the idea of the wavelengths associated with the wave functions or probability amplitude functions. The wavelengths for our simple system are given by λ = 2L/n. Can we identify these wavelengths with the wavelengths which de Broglie postulated for matter waves and which obeyed the relationship: $\lambda = \frac{h}{p} \nonumber$ The absolute value for the momentum (the magnitude of the momentum independent of its direction) of an electron on the line is nh/2L. Substituting this into de Broglie's relationship gives: $\lambda = \frac{h}{nh/2L} = \frac{2L}{n} \nonumber$ So indeed the wavelengths postulated by de Broglie to be associated with the motions of particles are in reality the wavelengths of the probability amplitudes or wave functions. There is no need to postulate "matter waves," and the results of the electron diffraction experiment of Davisson and Germer, for example, can be interpreted entirely in terms of probabilities rather than in terms of "matter waves" with a wavelength λ = h/p. It is clear that as n increases, λ becomes much less than L. For n = 100, ψ₁₀₀ and P₁₀₀ would appear as in Fig. 2-10. When L ≫ λₙ, the nodes in Pₙ are so close together that the function appears to be a continuous function of x. No experiment could in fact detect nodes which are so closely spaced, and any observation of the position of the electron would yield a result for P₁₀₀ similar to that obtained in the classical case. This is a general result.
When λ is smaller than the important physical dimensions of the system, quantum effects disappear and the system behaves in a classical fashion. This will always be true when the system possesses a large amount of energy, i.e., a high n value. When, however, λ is comparable to the physical dimensions of the system, quantum effects predominate. Fig. 2-10. The wave function and probability distribution for n = 100. Let us check to see whether or not quantum effects will be evident for electrons bound to nuclei to form atoms. A typical velocity of an electron bound to an atom is of the order of magnitude of 10⁹ cm/sec. Thus: $\lambda = \frac{h}{mv} = \frac{6.6 \times 10^{-27}}{9.1 \times 10^{-28} \times 10^{9}} \approx 7 \times 10^{-9}\ cm \nonumber$ This is a short wavelength, but it is of the same order of magnitude as an atomic diameter. Electrons bound to atoms will definitely exhibit quantum effects because the wavelength which determines their probability amplitude is of the same size as the important physical dimension, the diameter of the atom. We can also determine the wavelength associated with the motion of the mass of 1 g moving on a line 1 m in length with a velocity of, say, 1 cm/sec: $\lambda = \frac{h}{mv} = \frac{6.6 \times 10^{-27}}{1 \times 1} = 6.6 \times 10^{-27}\ cm \nonumber$ This is an incredibly short wavelength, not only relative to the length of the line but absolutely as well. No experiment could detect the physical implications of such a short wavelength. It is indeed many, many times smaller than the diameter of the mass itself. For example, to observe a diffraction effect for such particles the spacings in the grating must be of the order of magnitude of 1 × 10⁻²⁷ cm. Such a grating cannot be made from ordinary matter since atoms themselves are about 10¹⁹ times larger than this. Even if such a grating could be found, it certainly wouldn't affect the motion of a mass of 1 g as the size of the mass is approximately 10²⁸ times larger than the spacings in the grating! Clearly, quantum effects will not be observed for massive particles. It is also clear that the factor which determines when quantum effects will be observed and when they will be absent is the magnitude of Planck's constant h. The very small magnitude of h restricts the observation of quantum effects to the realm of small masses. 2.03: Further Reading W. Heisenberg, The Physical Principles of the Quantum Theory, University of Chicago Press, Chicago, Illinois, 1930. This reference contains interesting discussions of the basic concepts of quantum mechanics written by a man who participated in the birth of the new physics. 2.E: Exercises Q.1 One of the more recent experimental methods of studying the nucleus of an atom is to probe the nucleus with very high energy electrons. Calculate the order of magnitude of the energy of an electron when it is bound inside a nucleus with a diameter of 1 × 10⁻¹² cm. Compare this value with the order of magnitude of the energy of an electron bound to an atom of diameter 1 × 10⁻⁸ cm. Q.2 Nuclear particles, protons or neutrons, have masses approximately 2 × 10³ times the mass of an electron. Estimate the average energy of a nuclear particle bound in a nucleus and compare it with the order of magnitude energy for an electron bound to an atom. This result should indicate that chemical changes, which involve changes in the electronic energies of the system, do not affect the nucleus of an atom.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/02%3A_The_New_Physics/2.02%3A_Probability_Amplitudes.txt
The study of the hydrogen atom is more complicated than our previous example of an electron confined to move on a line. Not only does the motion of the electron occur in three dimensions but there is also a force acting on the electron. This force, the electrostatic force of attraction, is responsible for holding the atom together. 03: The Hydrogen Atom The magnitude of this force is given by the product of the nuclear and electronic charges divided by the square of the distance between them. In the previous example of an electron confined to move on a line, the total energy was entirely kinetic in origin since there were no forces acting on the electron. In the hydrogen atom, however, the energy of the electron, because of the force exerted on it by the nucleus, will consist of a potential energy (one which depends on the position of the electron relative to the nucleus) as well as a kinetic energy. The potential energy $V(r)$ arising from the force of attraction between the nucleus and the electron is: $V(r) = \dfrac{-e^2}{r} \nonumber$ Let us imagine for the moment that the proton and the electron behave classically. Then, if the nucleus is held fixed at the origin and the electron allowed to move relative to it, the potential energy would vary in the manner indicated in Fig. 3-1. The potential energy is independent of the direction in space and depends only on the distance r between the electron and the nucleus. Thus Fig. 3-1 refers to any line directed from the nucleus to the electron. The r-axis in the figure may be taken literally as a line through the nucleus. Whether the electron moves to the right or to the left the potential energy varies in the same manner. Fig. 3-1. The potential energy of interaction between a nucleus (at the origin) and an electron as a function of the distance r between them. The potential energy is zero when the two particles are very far apart (r = ∞), and equals minus infinity when r equals zero. We shall take the energy for r = ∞ as our zero of energy. Every energy will be measured relative to this value. When a stable atom is formed, the electron is attracted to the nucleus, r is less than infinity, and the energy will be negative. A negative value for the energy implies that energy must be supplied to the system if the electron is to overcome the attractive force of the nucleus and escape from the atom. The electron has again "fallen into a potential well." However, the shape of the well is no longer a simple square one as previously considered for an electron confined to move on a line, but has the shape shown in Fig. 3-1. This shape is a consequence of there being a force acting on the electron and hence a potential energy contribution which depends on the distance between the two particles. This is the nature of the problem. Now let us see what quantum mechanics predicts for the motion of the electron in such a situation.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/03%3A_The_Hydrogen_Atom/3.01%3A_Introduction.txt
The motion of the electron in the hydrogen atom is not free. The electron is bound to the atom by the attractive force of the nucleus and consequently quantum mechanics predicts that the total energy of the electron is quantized. The expression for the energy is: $E_n =\dfrac{-2 \pi^2 m e^4Z^2}{n^2h^2} \label{1}$ with $n = 1,2,3,4...$ where $m$ is the mass of the electron, $e$ is the magnitude of the electronic charge, $n$ is a quantum number, $h$ is Planck's constant and $Z$ is the atomic number (the number of positive charges in the nucleus). Equation $\ref{1}$ applies to any one-electron atom or ion. For example, He⁺ is a one-electron system for which Z = 2. We can again construct an energy level diagram listing the allowed energy values (Figure $1$). These are obtained by substituting all possible values of n into Equation $\ref{1}$. As in our previous example, we shall represent all the constants which appear in the expression for $E_n$ by a constant $K$ and we shall set $Z = 1$, i.e., consider only the hydrogen atom. $E_n = \dfrac{-K}{n^2} \label{2}$ with $n = 1,2,3,4...$ Since the motion of the electron occurs in three dimensions we might correctly anticipate three quantum numbers for the hydrogen atom. But the energy depends only on the quantum number $n$ and for this reason it is called the principal quantum number. In this case, the energy is inversely dependent upon $n^2$, and as n is increased the energy becomes less negative with the spacings between the energy levels decreasing in size. As $n \rightarrow \infty$, $E \rightarrow 0$ and the electron is free of the attractive force of the nucleus. The average distance between the nucleus and the electron (the average value of r) increases as the energy or the value of n increases. Thus energy must be supplied to pull the electron away from the nucleus. The parallelism between increasing energy and increasing average value of $r$ is a useful one. In fact, when an electron loses energy, we refer to it as "falling" from one energy level to a lower one on the energy level diagram. Since the average distance between the nucleus and the electron also decreases with a decrease in n, then the electron literally does fall in closer to the nucleus when it "falls" from level to level on the energy level diagram. The energy difference between $E_{\infty}$ and $E_1$: $E_{n= \infty} - E_{n=1} = 0 -(-K) = K = \dfrac{2 \pi^2 m e^4}{h^2} = \dfrac{e^2}{2a_o}\label{3}$ is called the ionization energy and is the energy required to pull the electron completely away from the nucleus. It is, therefore, the energy of the reaction: $H \rightarrow H^+ + e^- \nonumber$ with $\Delta E = 13.60 \,eV = 0.5 \,a.u.$ This amount of energy is sufficient to separate the electron from the attractive influence of the nucleus and leave both particles at rest. If an amount of energy greater than K is supplied to the electron, it will not only escape from the atom but the energy in excess of K will appear as kinetic energy of the electron. Once the electron is free it may have any energy because all velocities are then possible. An electron which possesses an energy in this region of the diagram is a free electron and has kinetic energy of motion only.
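Equation $\ref{2}$ is easily tabulated. The short Python sketch below lists the first few levels with K = 13.60 ev; the spacings shrink as n grows, reproducing the converging ladder of levels in Figure $1$.

```python
# The hydrogen-atom levels of Equation (2), E_n = -K / n^2, with K = 13.60 ev.
K = 13.60
for n in range(1, 7):
    print(f"n = {n}: E = {-K / n**2:8.3f} ev")
print("n -> infinity: E -> 0 (ionization; the electron is free)")
```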
The Hydrogen Atom Spectrum As mentioned earlier, hydrogen gas emits colored light when a high voltage is applied across a sample of the gas contained in a glass tube fitted with electrodes. The electrical energy transmitted to the gas causes many of the hydrogen molecules to dissociate into atoms: $H_2 \rightarrow H + H \nonumber$ The electrons in the molecules and in the atoms absorb energy and are excited to high energy levels. Ionization of the gas also occurs. When the electron is in a quantum level other than the lowest level (with n = 1) the electron is said to be excited, or to be in an excited level. The lifetime of such an excited level is very brief, being of the order of magnitude of only 10⁻⁸ sec. The electron loses the energy of excitation by falling to a lower energy level and at the same time emitting a photon to carry off the excess energy. We can easily calculate the frequencies which should appear in the emitted light by calculating the difference in energy between the two levels and making use of Bohr's frequency condition ($E =h\nu$): $\nu = \dfrac{E_{n'} - E_n}{h} \nonumber$ with $n' > n$ Suppose we consider all those frequencies which appear when the electron falls to the lowest level, $n = 1$: $\nu = \dfrac{E_n - E_1}{h} = \dfrac{K}{h} \left( \dfrac{1}{1^2} - \dfrac{1}{n^2} \right) \label{4}$ with $n= 2, 3, 4 ...$ Every value of $n$ substituted into Equation $\ref{4}$ gives a distinct value for ν. In Figure $1$ we illustrate the changes in energy which result when the electron emits a photon by an arrow connecting the excited level (of energy Eₙ) with the ground level (of energy E₁). The frequency resulting from each drop in energy will be directly proportional to the length of the arrow. Just as the arrows increase in length as n is increased, so ν increases. However, the spacings between the lines decrease as n is increased, and the spectrum will appear as shown directly below the energy level diagram in Figure $1$. Each line in the spectrum is placed beneath the arrow which represents the change in energy giving rise to that particular line. Free electrons with varying amounts of kinetic energy (½mv²) can also fall to the $n = 1$ level. The energy released in the reversed ionization reaction (electron affinity): $H^+ + e^- \rightarrow H \nonumber$ will equal K, the difference between E∞ and E₁, plus ½mv², the kinetic energy originally possessed by the electron. Since this latter energy is not quantized, every energy value greater than K should be possible and every frequency greater than that corresponding to $\nu = \dfrac{K}{h} \nonumber$ should be observed. The line spectrum should, therefore, collapse into a continuous spectrum at its high frequency end. Thus the energy continuum above E∞ gives rise to a continuum of frequencies in the emission spectrum. The beginning of the continuum should be the frequency corresponding to the jump from E∞ to E₁, and thus we can determine K, the ionization energy of the hydrogen atom, from the observation of this frequency. Indeed, the spectroscopic method is one of the most accurate methods of determining ionization energies. The hydrogen atom does possess a spectrum identical to that predicted by Equation $\ref{4}$, and the observed value for K agrees with the theoretical value. This particular series of lines, called the Lyman series, falls in the ultraviolet region of the spectrum because of the large energy changes involved in the transitions from the excited levels to the lowest level.
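The Lyman frequencies of Equation $\ref{4}$, and the onset of the continuum at ν = K/h, follow directly from K. A short Python sketch, using the cgs conversion quoted earlier for the electron volt:

```python
# Lyman-series frequencies from Equation (4), nu = (K/h)(1/1^2 - 1/n^2), in cgs.
K = 13.60 * 1.602e-12   # ionization energy of H, erg
h = 6.625e-27           # erg sec
for n in range(2, 7):
    print(f"n = {n} -> 1: nu = {(K / h) * (1 - 1 / n**2):.3e} sec^-1")
print(f"series limit:  nu = {K / h:.3e} sec^-1")
# The lines crowd toward the limit K/h, where the continuum begins;
# observing this onset is what fixes the ionization energy K.
```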
The first few members of a second series of lines, a second line spectrum, fall in the visible portion of the spectrum. It is called the Balmer series and arises from electrons in excited levels falling to the second quantum level. Since $E_2$ equals only one quarter of $E_1$, the energy jumps are smaller and the frequencies are correspondingly lower than those observed in the Lyman series. Four lines can be readily seen in this series: red, green, blue, and violet. Each color results from the electrons falling from a specific level to the n = 2 level: red, $E_3 \rightarrow E_2$; green, $E_4 \rightarrow E_2$; blue, $E_5 \rightarrow E_2$; and violet, $E_6 \rightarrow E_2$. Other series, arising from electrons falling to the n = 3 and n = 4 levels, can be found in the infrared (frequencies preceding the red end, or long wavelength end, of the visible spectrum). The fact that the hydrogen atom exhibits a line spectrum is visible proof of the quantization of energy on the atomic level.
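A rough calculation along the same lines (the analogue of Equation $\ref{4}$ with the final level $n = 2$) reproduces the four visible Balmer wavelengths; the color assignments in the comments follow the text:

```python
# Balmer-series wavelengths from nu = (K/h)(1/4 - 1/n**2).
# A sketch; K = 13.6 eV as above.
h_eV = 4.136e-15   # Planck's constant, eV s
c = 2.998e8        # speed of light, m/s
K = 13.6           # eV

colors = {3: "red", 4: "green", 5: "blue", 6: "violet"}
for n in range(3, 7):
    nu = (K / h_eV) * (0.25 - 1 / n**2)
    lam_angstrom = c / nu * 1e10
    print(f"E_{n} -> E_2 ({colors[n]:6s}): lambda = {lam_angstrom:6.0f} Å")
# Reproduces the four visible lines near 6563, 4861, 4341 and 4102 Å.
```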
Crude Approximation of Electron Position in a Hydrogen Atom To what extent will quantum mechanics permit us to pinpoint the position of an electron when it is bound to an atom? We can obtain an order of magnitude answer to this question by applying the uncertainty principle $\Delta x \Delta p \ge \dfrac{\hbar}{2} \nonumber$ or, in order-of-magnitude form, $\Delta x \Delta p \approx h \nonumber$ to estimate $\Delta x$, which represents the minimum uncertainty in our knowledge of the position of the electron. The momentum of an electron in an atom is of the order of magnitude of $9 \times 10^{-19}\; g\, cm/sec$. The uncertainty in the momentum $\Delta p$ must necessarily be of the same order of magnitude. Thus $\Delta x =\dfrac{ 7 \times 10^{-27}}{9 \times 10^{-19}} \approx 10^{-8} \,cm \nonumber$ The uncertainty in the position of the electron is of the same order of magnitude as the diameter of the atom itself. As long as the electron is bound to the atom, we will not be able to say much more about its position than that it is in the atom. Certainly all models of the atom which describe the electron as a particle following a definite trajectory or orbit must be discarded. We can obtain an energy and one or more wavefunctions for every value of $n$, the principal quantum number, by solving Schrödinger's equation for the hydrogen atom. A knowledge of the wavefunctions, or probability amplitudes $\psi_n$, allows us to calculate the probability distributions for the electron in any given quantum level. When n = 1, the wave function and the derived probability function are independent of direction and depend only on the distance r between the electron and the nucleus. In Figure $1$, we plot both $\psi_1$ and $P_1$ versus $r$, showing the variation in these functions as the electron is moved further and further from the nucleus in any one direction. (These and all succeeding graphs are plotted in terms of the atomic unit of length, $a_0 = 0.529 \times 10^{-8}$ cm.) Figure $1$: The wave function and probability distribution as functions of r for the n = 1 level of the H atom. The functions and the radius r are in atomic units in this and succeeding figures. Two interpretations can again be given to the $P_1$ curve. An experiment designed to detect the position of the electron with an uncertainty much less than the diameter of the atom itself (using light of short wavelength) will, if repeated a large number of times, result in Figure $1$ for $P_1$. That is, the electron will be detected close to the nucleus most frequently and the probability of observing it at some distance from the nucleus will decrease rapidly with increasing $r$. The atom will be ionized in making each of these observations because the energy of the photons with a wavelength much less than $10^{-8}$ cm will be greater than $K$, the amount of energy required to ionize the hydrogen atom. If light with a wavelength comparable to the diameter of the atom is employed in the experiment, then the electron will not be excited but our knowledge of its position will be correspondingly less precise. In these experiments, in which the electron's energy is not changed, the electron will appear to be "smeared out" and we may interpret $P_1$ as giving the fraction of the total electronic charge to be found in every small volume element of space. (Recall that the addition of the value of $P_n$ for every small volume element over all space adds up to unity, i.e., one electron and one electronic charge.)
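The arithmetic of the order-of-magnitude estimate made at the beginning of this passage, in the same cgs units:

```python
# Arithmetic of the uncertainty estimate above, in cgs units.
# The text rounds Planck's constant to 7e-27 erg s; either value gives ~1e-8 cm.
h = 6.6e-27        # Planck's constant, erg s
delta_p = 9e-19    # g cm/s, order of magnitude of an atomic electron momentum

delta_x = h / delta_p
print(f"delta_x ≈ {delta_x:.1e} cm")   # about 1e-8 cm, the atomic diameter
```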
When the electron is in a definite energy level we shall refer to the $P_n$ distributions as electron density distributions, since they describe the manner in which the total electronic charge is distributed in space. The electron density is expressed in terms of the number of electronic charges per unit volume of space, $e^-/V$. The volume V is usually expressed in atomic units of length cubed, and one atomic unit of electron density is then $e^-/a_0^3$. To give an idea of the order of magnitude of an atomic density unit, 1 au of charge density $e^-/a_0^3$ = 6.7 electronic charges per cubic Ångstrom. That is, a cube with a length of $0.52917 \times 10^{-8}\; cm$, if uniformly filled with an electronic charge density of 1 au, would contain 6.7 electronic charges. $P_1$ may be represented in another manner. Rather than considering the amount of electronic charge in one particular small element of space, we may determine the total amount of charge lying within a thin spherical shell of space. Since the distribution is independent of direction, consider adding up all the charge density which lies within a volume of space bounded by an inner sphere of radius r and an outer concentric sphere with a radius only infinitesimally greater, say $r + \Delta r$. The area of the inner sphere is $4\pi r^2$ and the thickness of the shell is $\Delta r$. Thus the volume of the shell is $4\pi r^2 \Delta r$ and the product of this volume and the charge density $P_1(r)$, which is the charge or number of electrons per unit volume, is therefore the total amount of electronic charge lying between the spheres of radius $r$ and $r + \Delta r$. The product $4\pi r^2 P_n$ is given a special name, the radial distribution function, which we shall label $Q_n(r)$. Volume Element for a Shell in Spherical Coordinates The reader may wonder why the volume of the shell is not taken as: $\dfrac{4}{3} \pi \left[ (r + \Delta r)^3 -r^3 \right] \nonumber$ the difference in volume between two concentric spheres. When this expression for the volume is expanded, we obtain $\dfrac{4}{3} \pi \left(3r^2 \Delta r + 3r \Delta r^2 + \Delta r^3\right) \nonumber$ and for very small values of $\Delta r$ the $3r \Delta r^2$ and $\Delta r^3$ terms are negligible in comparison with $3r^2\Delta r$. Thus for small values of $\Delta r$, the two expressions for the volume of the shell approach one another in value and when $\Delta r$ represents an infinitesimally small increment in $r$ they are identical. The radial distribution function is plotted in Figure $2$ for the ground state of the hydrogen atom. The curve passes through zero at r = 0 since the surface area of a sphere of zero radius is zero. As the radius of the sphere is increased, the volume of space defined by $4\pi r^2 \Delta r$ increases. However, as shown in Figure $1$, the absolute value of the electron density at a given point decreases with r and the resulting curve must pass through a maximum. This maximum occurs at $r_{max} = a_0$. Thus more of the electronic charge is present at a distance $a_0$ out from the nucleus than at any other value of r. Since the curve is unsymmetrical, the average value of r, denoted by $\langle r \rangle$, is not equal to $r_{max}$. The average value of r is indicated on the figure by a dashed line. A "picture" of the electron density distribution for the electron in the $n = 1$ level of the hydrogen atom would be a spherical ball of charge, dense around the nucleus and becoming increasingly diffuse as the value of r is increased.
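Taking the standard 1s density in atomic units, $P_1(r) = e^{-2r}/\pi$, gives $Q_1(r) = 4r^2e^{-2r}$. A short numerical sketch locates the maximum at $a_0$ and the larger average value $\langle r \rangle = 1.5\,a_0$ discussed above:

```python
# Radial distribution function for the 1s level in atomic units:
# Q1(r) = 4 * r**2 * exp(-2r), the charge lying between r and r + dr.
# A numerical sketch (no plotting) locating r_max and <r>.
import numpy as np

r = np.linspace(0.0, 20.0, 200001)
Q = 4 * r**2 * np.exp(-2 * r)

dr = r[1] - r[0]
norm = np.sum(Q) * dr            # should integrate to ~1 electron
r_avg = np.sum(r * Q) * dr / norm
r_max = r[np.argmax(Q)]

print(f"r_max = {r_max:.3f} a0")   # maximum at r = a0
print(f"<r>   = {r_avg:.3f} a0")   # the average is larger: 1.5 a0
```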
We could also represent the distribution of negative charge in the hydrogen atom in the manner used previously for the electron confined to move on a plane (Figure $1$), by displaying the charge density in a plane by means of a contour map. Imagine a plane through the atom including the nucleus. The density is calculated at every point in this plane. All points having the same value for the electron density in this plane are joined by a contour line (Figure $3$). Since the electron density depends only on r, the distance from the nucleus, and not on the direction in space, the contours will be circular. A contour map is useful as it indicates the "shape" of the density distribution. This completes the description of the most stable state of the hydrogen atom, the state for which $n = 1$. Before proceeding with a discussion of the excited states of the hydrogen atom we must introduce a new term. When the energy of the electron is increased to another of the allowed values, corresponding to a new value for $n$, $\psi_n$ and $P_n$ change as well. The wavefunctions $\psi_n$ for the hydrogen atom are given a special name, atomic orbitals, because they play such an important role in all of our future discussions of the electronic structure of atoms. In general the word orbital is the name given to a wavefunction which determines the motion of a single electron. If the one-electron wave function is for an atomic system, it is called an atomic orbital. Do not confuse the word orbital with the classical word and notion of an orbit. First, an orbit implies the knowledge of a definite trajectory or path for a particle through space which in itself is not possible for an electron. Secondly, an orbital, like the wave function, has no physical reality but is a mathematical function which when squared gives the physically measurable electron density distribution. For every value of the energy En, for the hydrogen atom, there is a degeneracy equal to $n^2$. Therefore, for n = 1, there is but one atomic orbital and one electron density distribution. However, for n = 2, there are four different atomic orbitals and four different electron density distributions, all of which possess the same value for the energy, E2. Thus for all values of the principal quantum number n there are n2 different ways in which the electronic charge may be distributed in three-dimensional space and still possess the same value for the energy. For every value of the principal quantum number, one of the possible atomic orbitals is independent of direction and gives a spherical electron density distribution which can be represented by circular contours as has been exemplified above for the case of n = 1. The other atomic orbitals for a given value of n exhibit a directional dependence and predict density distributions which are not spherical but are concentrated in planes or along certain axes. The angular dependence of the atomic orbitals for the hydrogen atom and the shapes of the contours of the corresponding electron density distributions are intimately connected with the angular momentum possessed by the electron. Angular Momentum The physical quantity known as angular momentum plays a dominant role in the understanding of the electronic structure of atoms. To gain a physical picture and feeling for the angular momentum it is necessary to consider a model system from the classical point of view.
The simplest classical model of the hydrogen atom is one in which the electron moves in a circular orbit with a constant speed or angular velocity (Figure $4$). Just as the ordinary momentum $m\vec{v}$ plays a dominant role in the analysis of straight line or linear motion, so angular momentum plays the central role in the analysis of a system with circular motion as found in the model of the hydrogen atom. Figure $4$: The angular momentum vector for a classical model of the atom. In Figure $4$, m is the mass of the electron, v is the linear velocity (the velocity the electron would possess if it continued moving at a tangent to the orbit as indicated in the figure) and r is the radius of the orbit. The linear velocity v is a vector since it possesses at any instant both a magnitude and a direction in space. Obviously, as the electron rotates in the orbit the direction of $\vec{v}$ is constantly changing, and thus the linear momentum $m\vec{v}$ is not constant for the circular motion. This is so even though the speed of the electron (i.e., the magnitude of $\vec{v}$, which is denoted by $|\vec{v}|$) remains unchanged. According to Newton's second law, a force must be acting on the electron if its momentum changes with time. This is the force which prevents the electron from flying off on a tangent to its orbit. In an atom the attractive force which contains the electron is the electrostatic force of attraction between the nucleus and the electron, directed along the radius r at right angles to the direction of the electron's motion. The angular momentum, like the linear momentum, is a vector and is defined as follows: $|\vec{M}| = mvr \nonumber$ The angular momentum vector $\vec{M}$ is directed along the axis of rotation. From the definition it is evident that the angular momentum vector will remain constant as long as the speed of the electron in the orbit is constant ($|\vec{v}|$ remains unchanged) and the plane and radius of the orbit remain unchanged. Thus for a given orbit, the angular momentum is constant as long as the angular velocity of the particle in the orbit is constant. In an atom the only force on the electron in the orbit is directed along r; it has no component in the direction of the motion. The force acts in such a way as to change only the linear momentum. Therefore, while the linear momentum is not constant during the circular motion, the angular momentum is. A force exerted on the particle in the direction of the vector $\vec{v}$ would change the angular velocity and the angular momentum. When a force is applied which does change $\vec{M}$, a torque is said to be acting on the system. Thus angular momentum and torque are related in the same way as are linear momentum and force. The important point of the above discussion is that both the angular momentum and the energy of an atom remain constant if the atom is left undisturbed. Any physical quantity which is constant in a classical system is both conserved and quantized in a quantum mechanical system. Thus both the energy and the angular momentum are quantized for an atom. There is a quantum number, denoted by $l$, which governs the magnitude of the angular momentum, just as the quantum number $n$ determines the energy. The magnitude of the angular momentum may assume only those values given by: $M = \sqrt{l(l+1)} \hbar \nonumber$ with $l = 0, 1, 2, 3, \ldots, n-1$
Furthermore, the value of n limits the maximum value of the angular momentum, as the value of l cannot be greater than n - 1. For the state n = 1 discussed above, l may have the value of zero only. When n = 2, l may equal 0 or 1, and for n = 3, l = 0, 1 or 2, etc. When l = 0, it is evident from the above expression for $M$ that the angular momentum of the electron is zero. The atomic orbitals which describe these states of zero angular momentum are called s orbitals. The s orbitals are distinguished from one another by stating the value of n, the principal quantum number. They are referred to as the 1s, 2s, 3s, etc., atomic orbitals. The preceding discussion referred to the 1s orbital since for the ground state of the hydrogen atom n = 1 and l = 0. This orbital, and all s orbitals in general, predict spherical density distributions for the electron as exemplified by Figure $2$ for the 1s density. Figure $5$ shows the radial distribution functions $Q(r)$ which apply when the electron is in a 2s or 3s orbital to illustrate how the character of the density distributions changes as the value of $n$ is increased. It is common usage to refer to an electron as being "in" an orbital even though an orbital is but a mathematical function with no physical reality. To say an electron is in a particular orbital is meant to imply that the electron is in the quantum state which is described by that orbital. For example, when the electron is in the 2s orbital the hydrogen atom is in a state for which n = 2 and l = 0. Figure $5$: Radial distribution functions for the 1s, 2s, and 2p density distributions. Notice the number of nodes in each distribution. Comparing these results with those for the 1s orbital in Figure $2$ we see that as $n$ increases the average value of $r$ increases. This agrees with the fact that the energy of the electron also increases as $n$ increases. The increased energy results in the electron being on the average pulled further away from the attractive force of the nucleus. As in the simple example of an electron moving on a line, nodes (values of $r$ for which the electron density is zero) appear in the probability distributions. The number of nodes increases with increasing energy and equals $n - 1$. When the electron possesses angular momentum the density distributions are no longer spherical. In fact for each value of l, the electron density distribution assumes a characteristic shape (Figure $6$). Figure $6$: Contour maps of the 2s, 2p, 3d and 4f atomic orbitals and their charge density distributions for the H atom. The zero contours shown in the maps for the orbitals define the positions of the nodes. Negative values for the contours of the orbitals are indicated by dashed lines, positive values by solid lines. When l = 1, the orbitals are called p orbitals. In this case the orbital and its electron density are concentrated along a line (axis) in space. The 2p orbital or wave function is positive in value on one side and negative in value on the other side of a plane which is perpendicular to the axis of the orbital and passes through the nucleus. The orbital has a node in this plane, and consequently an electron in a 2p orbital does not place any electronic charge density at the nucleus. The electron density of a 1s orbital, on the other hand, is a maximum at the nucleus. The same diagram for the 2p density distribution is obtained for any plane which contains this axis.
Thus in three dimensions the electron density would appear to be concentrated in two lobes, one on each side of the nucleus, each lobe being circular in cross section (Figure $7$). Figure $7$: The appearance of the 2p electron density distribution in three-dimensional space. When l = 2, the orbitals are called d orbitals, and Figure $6$ shows the contours in a plane for a 3d orbital and its density distribution. Notice that the density is again zero at the nucleus and that there are now two nodes in the orbital and in its density distribution. As a final example, Figure $6$ also shows the contours of the orbital and electron density distribution obtained for a 4f atomic orbital which occurs when n = 4 and l = 3. The point to notice is that as the angular momentum of the electron increases, the density distribution becomes increasingly concentrated along an axis or in a plane in space. Only electrons in s orbitals with zero angular momentum give spherical density distributions and in addition place charge density at the position of the nucleus. There seems to be neither rhyme nor reason for the naming of the states corresponding to the different values of $l$ (s, p, d, f for l = 0, 1, 2, 3). This set of labels had its origin in the early work of experimental atomic spectroscopy. The letter s stood for sharp, p for principal, d for diffuse and f for fundamental in characterizing spectral lines. From the letter f onwards the naming of the orbitals is alphabetical: $l = 4,5,6 \rightarrow g,h,i, \ldots$. We have not as yet accounted for the full degeneracy of the hydrogen atom orbitals which we stated earlier to be $n^2$ for every value of n. For example, when n = 2, there are four distinct atomic orbitals. The remaining degeneracy is again determined by the angular momentum of the system. Since angular momentum like linear momentum is a vector quantity, we may refer to the component of the angular momentum vector which lies along some chosen axis. For reasons we shall investigate, the number of values a particular component can assume for a given value of l is (2l + 1). Thus when l = 0, there is no angular momentum and there is but a single orbital, an s orbital. When l = 1, there are three possible values for the component ($2 \times 1 + 1$) of the total angular momentum which are physically distinguishable from one another. There are, therefore, three p orbitals. Similarly there are five d orbitals ($2 \times 2 + 1$), seven f orbitals ($2 \times 3 + 1$), etc. All of the orbitals with the same value of n and l, the three 2p orbitals for example, are similar but differ in their spatial orientations. To gain a better understanding of this final element of degeneracy, we must consider in more detail what quantum mechanics predicts concerning the angular momentum of an electron in an atom.
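Both of the countings just described are easy to verify numerically; a short sketch tabulates $M = \sqrt{l(l+1)}\hbar$ for the allowed $l$ values of each level and confirms that summing $(2l + 1)$ over $l = 0, 1, \ldots, n-1$ recovers the $n^2$ degeneracy:

```python
# Angular momentum magnitudes M = sqrt(l(l+1)) (in units of hbar) for the
# allowed l values of each level, plus the (2l+1) orbital count per l,
# which sums to the n**2 degeneracy quoted in the text.
from math import sqrt

for n in range(1, 5):
    counts = []
    for l in range(n):                     # l = 0, 1, ..., n-1
        name = "spdf"[l]
        counts.append(2 * l + 1)
        print(f"  n={n}, l={l} ({n}{name}): M = {sqrt(l*(l+1)):.3f} hbar,"
              f" {2*l+1} orbital(s)")
    print(f"n = {n}: total {sum(counts)} orbitals = n^2\n")
```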
The simplest classical model of the hydrogen atom is one in which the electron moves in a circular planar orbit about the nucleus as previously discussed and as illustrated in Figure 3-7. The angular momentum vector $\vec{M}$ in this figure is shown at an angle $\theta$ with respect to some arbitrary axis in space. Assuming for the moment that we can somehow physically define such an axis, then in the classical model of the atom there should be an infinite number of values possible for the component of the angular momentum vector along this axis. As the angle between the axis and the vector $\vec{M}$ varies continuously from 0°, through 90° to 180°, the component of $\vec{M}$ along the axis would vary correspondingly from $+M$ to zero to $-M$. Thus the quantum mechanical statements regarding the angular momentum of an electron in an atom differ from the classical predictions in two startling ways. First, the magnitude of the angular momentum (the length of the vector $\vec{M}$) is restricted to only certain values given by: $M = \sqrt{l(l+1)} \hbar \nonumber$ with $l= 0,1,2,...$ The magnitude of the angular momentum is quantized. Secondly, quantum mechanics states that the component of $\vec{M}$ along a given axis can assume only ($2l + 1$) values, rather than the infinite number allowed in the classical model. In terms of the classical model this would imply that when the magnitude of $\vec{M}$ is $\sqrt{2}\hbar$ (the value when $l = 1$), there are only three allowed values for $\theta$, the angle of inclination of $\vec{M}$ with respect to a chosen axis. The angle $\theta$ is another example of a physical quantity which in a classical system may assume any value, but which in a quantum system may take on only certain discrete values. You need not accept this result on faith. There is a simple, elegant experiment which illustrates the "quantization" of $\theta$, just as a line spectrum illustrates the quantization of the energy. If we wish to measure the number of possible values which the component of the angular momentum may exhibit with respect to some axis we must first find some way in which we can physically define a direction or axis in space. To do this we make use of the magnetism exhibited by an electron in an atom. The flow of electrons through a loop of wire (an electric current) produces a magnetic field (Figure $1$). At a distance from the ring of wire, large compared to the diameter of the ring, the magnetic field produced by the current appears to be the same as that obtained from a small bar magnet with a north pole and a south pole. Such a small magnet is called a magnetic dipole, i.e., two poles separated by a small distance. Figure $1$: The magnetic field produced by a current in a loop of wire. The electron is charged and the motion of the electron in an atom could be thought of as generating a small electric current. Associated with this current there should be a small magnetic field. The magnitude of this magnetic field is related to the angular momentum of the electron's motion in roughly the same way that the magnetic field produced by a current in a loop of wire is proportional to the strength of the current flowing in the wire. The strength of the atomic magnetic dipole is given by $\mu$ where: $\mu = \sqrt{l(l+1)} \beta_m \label{5}$ Just as there is a fundamental unit of negative charge denoted by $e^-$, so there is a fundamental unit of magnetism at the atomic level denoted by $\beta_m$ and called the Bohr magneton. From Equation $\ref{5}$ we can see that the strength of the magnetic dipole will increase as the angular momentum of the electron increases.
This is analogous to increasing the magnetic field by increasing the strength of the current through a circular loop of wire. The magnetic dipole, since it has a north and a south pole, will define some direction in space (the magnetic dipole is a vector quantity). The axis of the magnetic dipole in fact coincides with the direction of the angular momentum vector. Experimentally, a collection of atoms behave as though they were a collection of small bar magnets if the electrons in these atoms possess angular momentum. In addition, the axis of the magnet lies along the axis of rotation, i.e., along the angular momentum vector. Thus the magnetism exhibited by the atoms provides an experimental means by which we may study the direction of the angular momentum vector. If we place the atoms in a magnetic field they will be attracted or repelled by this field, depending on whether or not the atomic magnets are aligned against or with the applied field. The applied magnetic field will determine a direction in space. By measuring the deflection of the atoms in this field we can determine the directions of their magnetic moments and hence of their angular momentum vectors with respect to this applied field. Consider an evacuated tube with a tiny opening at one end through which a stream of atoms may enter (Figure 3-12). By placing a second small hole in front of the first, inside the tube, we will obtain a narrow beam of atoms which will pass the length of the tube and strike the opposite end. If the atoms possess magnetic moments the path of the beam can be deflected by placing a magnetic field across the tube, perpendicular to the path of the atoms. The magnetic field must be one in which the lines of force diverge, thereby exerting an unbalanced force on any magnetic material lying inside the field. This inhomogeneous magnetic field could be obtained through the use of N and S poles of the kind illustrated in Figure 3-12. The direction of the magnetic field will be taken as the direction of the z-axis. Figure 3-12. The atomic beam apparatus. Let us suppose the beam consists of neutral atoms which possess electronic angular momentum (the angular momentum quantum number l = 1). When no magnetic field is present, the beam of atoms strikes the end wall at a single point in the middle of the detector. What happens when the magnetic field is present? We must assume that before the beam enters the magnetic field, the axes of the atomic magnets are randomly oriented with respect to the z-axis. According to the concepts of classical mechanics, the beam should spread out along the direction of the magnetic field and produce a line rather than a point at the end of the tube (Figure 3-13a). Actually, the beam is split into three distinct component beams each of equal intensity, producing three spots at the end of the tube (Figure 3-13b). Figure $3$: (a) The result of the atomic beam experiment as predicted by classical mechanics, (b) The observed result of the atomic beam experiment. The startling results of this experiment can be explained only if we assume that while in the magnetic field each atomic magnet could assume only one of three possible orientations with respect to the applied magnetic field (Figure 3-14). Figure $4$: The three possible orientations for the total magnetic moment with respect to an external magnetic field for an atom with l =1.
The atomic magnets which are aligned perpendicular to the direction of the field are not deflected and will follow a straight path through the tube. The atoms which are attracted upwards must have their magnetic moments oriented as shown. From the known strength of the applied inhomogeneous magnetic field and the measured distance through which the beam has been deflected upwards, we can determine that the component of the magnetic moment lying along the z-axis is only $\beta_m$ in magnitude, rather than the value of $\sqrt{2}\beta_m$. This latter value would result if the axis of the atomic magnet were parallel to the z-axis, i.e., the angle $\theta = 0°$. Instead $\theta$ assumes a value such that the component of the total moment lying along the z-axis is just $\beta_m$. Similarly the beam which is deflected downwards possesses a magnetic moment along the z-axis of $-\beta_m$. The classical prediction for this experiment assumes that $\theta$ may equal all values from 0° to 180°, and thus all values (from a maximum of $+\sqrt{2}\beta_m$ ($\theta = 0°$) through 0 ($\theta = 90°$) to $-\sqrt{2}\beta_m$ ($\theta = 180°$)) for the component of the atomic moment along the z-axis would be possible. Instead, $\theta$ is found to equal only those values such that the magnetic moment along the z-axis equals $+\beta_m$, 0 and $-\beta_m$. The angular momentum of the electron determines the magnitude and the direction of the magnetic dipole. (Recall that the vectors for both these quantities lie along the same axis.) Thus the number of possible values which the component of the angular momentum vector may assume along a given axis must equal the number of values observed for the component of the magnetic dipole along the same axis. In the present example the values of the angular momentum component are $+\hbar$, 0 and $-\hbar$, or since $l = 1$ in this case, $+l\hbar$, 0 and $-l\hbar$. In general, it is found that the number of observed values is always $(2l + 1)$, the values being: $-l \hbar, (-l+1) \hbar, \ldots, 0, \ldots, (l-1)\hbar, l \hbar \nonumber$ for the angular momentum and $-l \beta_m, (-l+1)\beta_m, \ldots, 0, \ldots, (l-1)\beta_m, l \beta_m \nonumber$ for the magnetic dipole. The number governing the magnitude of the component of $\vec{M}$ and $\vec{\mu}$ ranges from a maximum value of $l$ and decreases in steps of unity to a minimum value of $-l$. This number is the third and final quantum number which determines the motion of an electron in a hydrogen atom. It is given the symbol $m$ and is called the magnetic quantum number. In summary, the angular momentum of an electron in the hydrogen atom is quantized and may assume only those values given by: $M = \sqrt{l(l+1)}\, \hbar \nonumber$ with $l = 0, 1, 2, \ldots, n-1$. Furthermore, it is an experimental fact that the component of the angular momentum vector along a given axis is limited to $(2l + 1)$ different values, and that the magnitude of this component is quantized and governed by the quantum number $m$ which may assume the values $l, l-1, \ldots, 0, \ldots, -l$. These facts are illustrated in Figure 3-15 for an electron in a d orbital in which $l = 2$. Figure 3-15. Pictorial representation of the quantum mechanical properties of the angular momentum of a d electron for which l = 2. The z-axis can be along any arbitrary direction in space. Figure (a) shows the possible components which the angular momentum vector (of length $\sqrt{6}\hbar$) may exhibit along an arbitrary axis in space. A d electron may possess any one of these components. There are therefore five states for a d electron, all of which are physically different. Notice that the maximum magnitude allowed for the component is less than the magnitude of the total angular momentum. Therefore, the angular momentum vector can never coincide with the axis with respect to which the observations are made, and the x and y components of the angular momentum are not simultaneously zero. This is illustrated in Figure (b), which shows how the angular momentum vector may be oriented with respect to the z-axis for the case m = l = 2. When the atom is in a magnetic field, the field exerts a torque on the magnetic dipole of the atom. This torque causes the magnetic dipole and hence the angular momentum vector to precess or rotate about the direction of the magnetic field. This effect is analogous to the precession of a child's top which is spinning with its axis (and hence its angular momentum vector) at an angle to the earth's gravitational field. In this case the gravitational field exerts the torque and the axis of the top slowly revolves around the perpendicular direction as indicated in the figure. The angle of inclination of $\vec{M}$ with respect to the field direction remains constant during the precession. The z-component of $\vec{M}$ is therefore constant but the x and y components are continuously changing. Because of the precession, only one component of the electronic angular momentum of an atom can be determined in a given experiment. The quantum number m determines the magnitude of the component of the angular momentum along a given axis in space. Therefore, it is not surprising that this same quantum number determines the axis along which the electron density is concentrated. When m = 0 for a p electron (regardless of the n value: 2p, 3p, 4p, etc.) the electron density distribution is concentrated along the z-axis (see Figure 3-10), implying that the classical axis of rotation must lie in the x-y plane. Thus a p electron with m = 0 is most likely to be found along one axis and has a zero probability of being on the remaining two axes. The effect of the angular momentum possessed by the electron is to concentrate density along one axis. When m = 1 or -1 the density distribution of a p electron is concentrated in the x-y plane with doughnut-shaped circular contours. The m = 1 and -1 density distributions are identical in appearance. Classically they differ only in the direction of rotation of the electron around the z-axis; counter-clockwise for m = +1 and clockwise for m = -1. This explains why they have magnetic moments with their north poles in opposite directions. We can obtain density diagrams for the m = +1 and -1 cases similar to the m = 0 case by removing the resultant angular momentum component along the z-axis. We can take combinations of the m = +1 and -1 functions such that one combination is concentrated along the x-axis and the other along the y-axis, and both are identical to the m = 0 function in their appearance. Thus these functions are often labelled as $p_x$, $p_y$ and $p_z$ functions rather than by their m values. The m value is, however, the true quantum number and we are cheating physically by labelling them $p_x$, $p_y$ and $p_z$. This would correspond to applying the field first in the z direction, then in the x direction and finally in the y direction and trying to save up the information each time. In reality when the direction of the field is changed, all the information regarding the previous direction is lost and every atom will again align itself with one chance out of three of being in one of the possible component states with respect to the new direction.
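Since the component along the field axis is $m\hbar$ while the magnitude is $\sqrt{l(l+1)}\hbar$, the allowed tilt angles follow from $\cos\theta = m/\sqrt{l(l+1)}$. A sketch for $l = 1$ and for the d-electron case of Figure 3-15:

```python
# Quantized orientations: the component of M along the field axis is m*hbar,
# so the allowed tilt angles satisfy cos(theta) = m / sqrt(l(l+1)).
from math import sqrt, degrees, acos

for l in (1, 2):
    M = sqrt(l * (l + 1))
    for m in range(l, -l - 1, -1):         # m = l, l-1, ..., -l
        theta = degrees(acos(m / M))
        print(f"l = {l}, m = {m:+d}:  theta = {theta:6.1f} deg")
    print()
# Note theta never reaches 0 deg: the maximum component (m = l) is less
# than the magnitude sqrt(l(l+1)), so M never lies along the axis.
```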
We should note that the r dependence of the orbitals changes with changes in n or l, but the directional component changes with l and m only. Thus all s orbitals possess spherical charge distributions and all p orbitals possess dumb-bell shaped charge distributions, regardless of the value of n.

Table 3-1: The Atomic Orbitals for the Hydrogen Atom

$E_n$    $n$   $l$   $m$    Symbol for orbital
$-K$     1     0     0      1s
$-K/4$   2     0     0      2s
         2     1     +1     2p+1  \
         2     1     0      2p0    } px, py, pz
         2     1     -1     2p-1  /
$-K/9$   3     0     0      3s
         3     1     +1     3p+1  \
         3     1     0      3p0    } px, py, pz
         3     1     -1     3p-1  /
         3     2     +2     3d+2  \
         3     2     +1     3d+1  |
         3     2     0      3d0    } the five 3d orbitals
         3     2     -1     3d-1  |
         3     2     -2     3d-2  /

Table 3-1 summarizes the allowed combinations of quantum numbers for an electron in a hydrogen atom for the first few values of n; the corresponding name (symbol) is given for each orbital. Notice that there are $n^2$ orbitals for each value of n, all of which belong to the same quantum level and have the same energy. There are $n$ values of $l$ (from 0 to $n - 1$) for each value of $n$ and there are $(2l + 1)$ values of m for each value of l. Notice also that for every increase in the value of n, orbitals of the same l value (same directional dependence) as found for the preceding value of n are repeated. In addition, a new value of l and a new shape are introduced. Thus there is a repetition in the shapes of the density distributions along with an increase in their number. We can see evidence of a periodicity in these functions (a periodic re-occurrence of a given density distribution) which we might hope to relate to the periodicity observed in the chemical and physical properties of the elements. We might store this idea in the back of our minds until later. We can summarize what we have found so far regarding the energy and distribution of an electron in a hydrogen atom thus:
1. The energy increases as n increases, and depends only on n, the principal quantum number.
2. The average value of the distance between the electron and the nucleus increases as n increases.
3. The number of nodes in the probability distribution increases as n increases.
4. The electron density becomes concentrated along certain lines (or in planes) as l is increased.
Some words of caution about energies and angular momentum should be added. In passing from the domain of classical mechanics to that of quantum mechanics we retain as many of the familiar words as possible. Examples are kinetic and potential energies, momentum, and angular momentum. We must, however, be on guard when we use these familiar concepts in the atomic domain. All have an altered meaning. Let us make this clear by considering these concepts for the hydrogen atom. Perhaps the most surprising point about the quantum mechanical expression for the energy is that it does not involve r, the distance between the nucleus and the electron. If the system were a classical one, then we would expect to be able to write the total energy $E_n$ as: $E_n = KE + PE = \dfrac{1}{2} mv^2 - \dfrac{e^2}{r} \label{6}$ Both the KE and PE would be functions of r, i.e., both would change in value as r was changed (corresponding to the motion of the electron). Furthermore, the sum of the PE and KE must always yield the same value of $E_n$, which is to remain constant. Figure 3-16. The potential energy diagram for an H atom with one of the allowed energy values superimposed on it. Figure 3-16 is the potential energy diagram for the hydrogen atom and we have superimposed on it one of the possible energy levels for the atom, $E_n$. Consider a classical value for r at the point A".
Classically, when the electron is at the point A", its PE is given by the value of the PE curve at A'. The KE is thus equal to the length of the line A - A' in energy units. Thus the sum of PE + KE adds up to $E_n$. When the electron is at the point B", its PE would equal $E_n$ and its KE would be zero. The electron would be motionless. Classically, for this value of $E_n$ the electron could not increase its value of r beyond the point represented by B". If it did, it would be inside the "potential wall." For example, consider the point C". At this value of r, the PE is given by the value at C' which is now greater than $E_n$ and hence the KE must be equal to the length of the line C - C'. But the KE must now be negative in sign so that the sum of PE and KE will still add up to $E_n$. What does a negative KE mean? It doesn't mean anything as it never occurs in a classical system. Nor does it occur in a quantum mechanical system. It is true that quantum mechanics does predict a finite probability for the electron being inside the potential curve and indeed for all values of r out to infinity. However, the quantum mechanical expression for $E_n$ does not allow us to determine the instantaneous values for the PE and KE. Instead, we can determine only their average values. Thus quantum mechanics does not give Equation $\ref{6}$ but instead states only that the average potential and kinetic energies may be known: $E_n = \langle PE \rangle + \langle KE \rangle \label{7}$ The brackets denote the fact that the energy quantity has been averaged over the complete motion (all values of r) of the electron. Why can r not appear in the quantum mechanical expression for $E_n$, and why can we obtain only average values for the KE and PE? When the electron is in a given energy level its energy is precisely known; it is $E_n$. The uncertainty in the value of the momentum of the electron is thus at a minimum. Under these conditions we have seen that our knowledge of the position of the electron is very uncertain and for an electron in a given energy level we can say no more about its position than that it is bound to the atom. Thus if the energy is to remain fixed and known with certainty, we cannot, because of the uncertainty principle, refer to (or measure) the electron as being at some particular distance r from the nucleus with some instantaneous values for its PE and KE. Instead, we may have knowledge of these quantities only when they are averaged over all possible positions of the electron. This discussion again illustrates the pitfalls (e.g., a negative kinetic energy) which arise when a classical picture of an electron as a particle with a definite instantaneous position is taken literally. It is important to point out that the classical expressions which we write for the dependence of the potential energy on distance, $-e^2/r$ for the hydrogen atom for example, are the expressions employed in the quantum mechanical calculation. However, only the average value of the PE may be calculated and this is done by calculating the value of $-e^2/r$ at every point in space, taking into account the fraction of the total electronic charge at each point in space. The amount of charge at a given point in three-dimensional space is, of course, determined by the electron density distribution. Thus the value of $\langle PE \rangle$ for the ground state of the hydrogen atom is the electrostatic energy of interaction between a nucleus of charge +1e and the surrounding spherical distribution of negative charge.
Tunneling The penetration of a potential wall by the electron, into regions of negative kinetic energy, is known as "tunnelling." Classically a particle must have sufficient energy to surmount a potential barrier. In quantum mechanics, an electron may tunnel into the barrier (or through it, if it is of finite width). Tunnelling will not occur unless the barrier is of finite height. In the example of the H atom, the potential well is infinitely deep, but the energy of the electron is such that it lies only a finite distance, $|E_n|$, below the top of the well. In the example of the electron moving on a line we assumed the potential well to be infinitely deep regardless of the energy of the electron. In this case $\psi_n$ and hence $P_n$ must equal zero at the ends of the line and no tunnelling is possible as the potential wall is infinitely high. We can say more about the $\langle KE \rangle$ and $\langle PE \rangle$ for an electron in an atom. Not only are these values constant for a given value of $n$, but for any value of $n$, $\langle KE \rangle = -\tfrac{1}{2}\langle PE \rangle$. Thus the $\langle KE \rangle$ is always positive and equal to minus one half of the $\langle PE \rangle$. Since the total energy $E_n$ is negative when the electron is bound to the atom, we can interpret the stability of atoms as being due to the decrease in the $\langle PE \rangle$ when the electron is attracted by the nucleus. The question now arises as to why the electron doesn't "fall all the way" and sit right on the nucleus. When $r = 0$, the PE would be equal to minus infinity, and the KE, which is positive and thus destabilizing, would be zero. Classically this would certainly be the situation of lowest energy and thus the most stable one. The reason for the electron not collapsing onto the nucleus is a quantum mechanical one. If the electron were bound directly to the nucleus with no kinetic energy, its position and momentum would be known with certainty. This would violate Heisenberg's uncertainty principle. The uncertainty principle always operates through the kinetic energy, causing it to become large and positive as the electron is confined to a smaller region of space. (Recall that in the example of an electron moving on a line, the $\langle KE \rangle$ increased as the length of the line decreased.) The smaller the region to which the electron is confined, the smaller is the uncertainty in its position. There must be a corresponding increase in the uncertainty of its momentum. This is brought about by the increase in the kinetic energy which increases the magnitude of the momentum and thus the uncertainty in its value. In other words the bound electron must always possess kinetic energy as a consequence of quantum mechanics. The $\langle PE \rangle$ and $\langle KE \rangle$ have opposite dependences on $\langle r \rangle$. The $\langle PE \rangle$ decreases (becomes more negative) as $\langle r \rangle$ decreases but the $\langle KE \rangle$ increases (making the atom less stable) as $\langle r \rangle$ decreases. A compromise is reached to make the energy as negative as possible (the atom as stable as possible) and the compromise always occurs when $\langle KE \rangle = -\tfrac{1}{2}\langle PE \rangle$. A further decrease in $\langle r \rangle$ would decrease the $\langle PE \rangle$ but only at the expense of a larger increase in the $\langle KE \rangle$. The reverse is true for an increase in $\langle r \rangle$. Thus the reason the electron doesn't fall onto the nucleus may be summed up by stating that "the electron obeys quantum mechanics, and not classical mechanics."
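The compromise can be made semi-quantitative with a rough estimate that is not part of the text: approximate $\langle KE \rangle$ by $\hbar^2/2m\langle r \rangle^2$ (from $\Delta p \approx \hbar/\langle r \rangle$) and $\langle PE \rangle$ by $-e^2/\langle r \rangle$. Then

$E(\langle r \rangle) \approx \dfrac{\hbar^2}{2m\langle r \rangle^2} - \dfrac{e^2}{\langle r \rangle}, \qquad \dfrac{dE}{d\langle r \rangle} = 0 \quad\Rightarrow\quad \langle r \rangle = \dfrac{\hbar^2}{me^2} = a_0, \qquad E_{min} = -\dfrac{e^2}{2a_0} \nonumber$

At the minimum the kinetic term equals $e^2/2a_0$ and the potential term equals $-e^2/a_0$, reproducing both the ratio $\langle KE \rangle = -\tfrac{1}{2}\langle PE \rangle$ and the ionization energy $K = e^2/2a_0$ of Equation $\ref{3}$.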
Listed below are a number of equations which give the dependence of $\langle r \rangle$ and related average values on the quantum numbers n, l and m. They refer not only to the hydrogen atom but also to any one-electron ion in general with a nuclear charge of Z. Thus He+ is a one-electron ion with Z = 2, Li$^{+2}$ another example with Z = 3. The average distance between the electron and the nucleus expressed in atomic units of length is: $\langle r \rangle = \dfrac{n^2}{Z}\left[1 + \dfrac{1}{2}\left(1 - \dfrac{l(l+1)}{n^2}\right)\right] \nonumber$ Note that $\langle r \rangle$ is proportional to $n^2$ for $l = 0$ orbitals, and deviates only slightly from this for $l \neq 0$. The value of $\langle r \rangle$ decreases as Z increases because the nuclear attractive force is greater. Thus $\langle r \rangle$ for He+ would be only one half as large as for H. 3.E: Exercises Q1 In 1913 Niels Bohr proposed a model for the hydrogen atom which gives the correct expression for the energy levels $E_n$. His model was based on an awkward marriage of classical mechanics and, at that time, the new idea of quantization. In the Bohr model of the hydrogen atom the electron is assumed to move in a circular orbit around the nucleus, as illustrated in Fig. 3-7. The energy of the electron in such an orbit is: $E = \dfrac{1}{2}mu^2 - \dfrac{e^2}{r} \tag{1}$ where $u$ is the tangential velocity of the electron in the orbit. Since the circular orbit is to be a stable one, the attractive coulomb force exerted on the electron by the nucleus must be balanced by the centrifugal force, or: $\dfrac{e^2}{r^2} = \dfrac{mu^2}{r} \tag{2}$ Up until this point the model is completely classical in concept. However, Bohr now postulated that only those orbits are allowed for which the angular momentum is an integral multiple of $(h/2\pi)$. In other words, Bohr postulated that the angular momentum of the electron in the hydrogen atom is quantized. This postulate gives a further equation: $mur = n\dfrac{h}{2\pi}, \quad n = 1, 2, 3, \ldots \tag{3}$ 1. Show that by eliminating r and u from these three equations you can obtain the correct expression for $E_n$. 2. Show that Bohr's model correctly predicts that the KE = -½PE. 3. Show that the radius of the first Bohr orbit is identical to the maximum value of r for the n = 1 level of the hydrogen atom, $r_{max}$, as calculated by quantum mechanics. 4. Criticize the Bohr model in the light of the quantum mechanical results for the hydrogen atom. Q2 The part of the hydrogen atom spectrum which occurs in the visible region arises from electrons in excited levels falling to the n = 2 level. The quantum mechanical expression for the frequencies in this case, corresponding to Equation $\ref{4}$ of the text for the Lyman series, is: $\nu = \dfrac{K}{h}\left(\dfrac{1}{4} - \dfrac{1}{n^2}\right) \tag{4}$ The energy of an emitted photon for a jump from level n to level 2 is: $\epsilon = h\nu = K\left(\dfrac{1}{4} - \dfrac{1}{n^2}\right) \tag{5}$ Equation (5) predicts that a plot of the photon energies versus $(1/n^2)$ should be a straight line. Furthermore, it predicts that the intercept of this line with the energy axis, corresponding to the value of $1/n^2 = 0$, i.e., $n = \infty$, should equal $\tfrac{1}{4}K$ where: $K = \dfrac{2\pi^2 me^4}{h^2} \tag{6}$ The point of this problem is to test these theoretical predictions against the experimental results. Experimentally we measure the wavelength of the emitted light by means of a diffraction grating. A grating for the diffraction of visible light may be made by marking a glass plate with parallel, equally spaced lines. There are about 10,000 lines per cm. The spacing between the lines in the grating, $d$, is thus about $1 \times 10^{-4}$ cm, which is the order of magnitude of the wavelength of visible light. The diffraction equation is: $n\lambda = d\sin\theta \tag{7}$ as previously discussed in Problem I-1. We measure the angle $\theta$ for different orders n = 1, 2, 3, ... of the diffracted light beam. Since $d$ is known, $\lambda$ may be calculated.
The experimentally measured values for the first four lines in the Balmer series are given below.

Balmer Series
$\lambda$ (Å)   $n$
6563            3
4861            4
4341            5
4102            6

The value of the principal quantum number n which appears in equation (5) is given for each value of $\lambda$. (This n is totally unrelated to the n of equation (7) for the experimental determination of $\lambda$.) Calculate the energy of each photon from the value of its wavelength. Plot the photon energies versus the appropriate value of $1/n^2$. Let the $1/n^2$ axis run from 0 to 0.25 and the energy axis run from 0 to 3.6 eV. Include as a point on your graph $\epsilon = 0$ for $1/n^2 = 0.25$, i.e., when n = 2, the excited level and the level to which the electron falls coincide. 1. Do the points fall on a straight line as predicted? 2. Determine the value of K by extending the line to intercept the energy axis. This intercept should equal K/4. Read off this value from your graph. 3. Compare the experimental value for K with that predicted theoretically by equation (6). Use $e = 4.803 \times 10^{-10}$ esu, express m and h in cgs units and the value of K will be in ergs (1 erg = $6.2420 \times 10^{11}$ eV). Recall that K is the ionization potential for the hydrogen atom. An electron falling from the $n = \infty$ level to the n = 2 level will fall only $\tfrac{1}{4}K$ in energy as is evident from the energy level diagram shown in Fig. 3-2. Q3 A beam of atoms with l = 1 is passed through an atomic beam apparatus with the magnetic field directed along an axis perpendicular to the direction of the beam. The undeflected beam from this experiment enters a second beam apparatus in which the magnetic field is directed along an axis which is perpendicular to both the path of the beam and the direction of the field in the first experiment. Will this one component of the original beam be split in the second applied magnetic field? Explain why you think it will be, if this is indeed your answer.
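A sketch of the numerical analysis asked for in Q2, using the tabulated wavelengths (the straight-line fit is done with numpy.polyfit rather than graphically):

```python
# Q2 analysis sketch: convert the four Balmer wavelengths to photon
# energies, fit energy against 1/n**2, and read K off slope and intercept.
import numpy as np

lam_angstrom = np.array([6563.0, 4861.0, 4341.0, 4102.0])
n = np.array([3, 4, 5, 6])

hc = 12398.0                      # h*c in eV * Angstrom (rounded)
E_photon = hc / lam_angstrom      # photon energies, eV
x = 1.0 / n**2

# epsilon = (K/4) - K*(1/n**2): slope = -K, intercept = K/4
slope, intercept = np.polyfit(x, E_photon, 1)
print(f"K from slope:     {-slope:.2f} eV")
print(f"K from intercept: {4 * intercept:.2f} eV")   # both near 13.6 eV
```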
The hydrogen atom is the only atom for which exact solutions of the Schrödinger equation exist. For any atom that contains two or more electrons, no exact solution has yet been discovered (thus no exact solution exists even for the helium atom) and we need to introduce approximation schemes. 04: Many-Electron Atoms The helium atom is a good example of a many-electron atom (that is, an atom which contains more than one electron). No fundamentally new problems are encountered whether we consider two or ten electrons, but a very important problem arises in passing from the one-electron to the two-electron case. To see what this problem is, consider all the potential interactions found in a helium atom. Again, consider the electrons to be point charges and "freeze" them at some instantaneous positions in space (Figure $1$). Figure $1$: The potential interactions in an He atom. The electrons are labelled by their charge -e, and the nucleus by its charge Z = +2e. The potential energy, the average value of which is to be determined by quantum mechanics, is $V(r_1,r_2) = \dfrac{(+Ze)(-e)}{r_1} + \dfrac{(+Ze)(-e)}{r_2} + \dfrac{(-e)(-e)}{|r_1 - r_2|} \label{1}$ For the helium atom, with $Z=2$, Equation $\ref{1}$ simplifies to $V_{He}(r_1,r_2) = \dfrac{-2e^2}{r_1} - \dfrac{2e^2}{r_2} +\dfrac{e^2}{|r_1 - r_2|} \nonumber$ The first and second terms in Equation $\ref{1}$ represent the attraction of the helium nucleus for electrons 1 and 2 respectively. The last term represents the repulsion between the two electrons. It is this last term which makes the problem of the helium atom, and of all many-electron atoms, difficult to solve. No direct solution to the problem exists, the reason being that there are too many interactions to consider simultaneously. We must make some approximation in our approach to this problem. 4.02: The Atomic Orbital Concept Since the nuclear charge is twice the electronic charge, the electrostatic energy of repulsion between the electrons will be the smallest of the three terms in the potential energy expression of the helium atom when the interactions are averaged over all possible positions of the electrons. We could obtain an approximation to the electronic energy of the He atom by neglecting this small term. This is a good idea for another reason. If we ignore the repulsion between the electrons, then physically we are supposing that neither electron "realizes" the other electron is present. The problem is thus identical to that of the hydrogen atom, a problem which can be solved exactly, except that the nuclear charge is now +2 rather than +1. The energy of each electron in the field of a nucleus of charge +2 is determined separately and the total energy is then simply the sum of the two energies. This is the first approximation to the energy. We will also obtain an approximation to the manner in which the electrons are distributed in space. With this latter knowledge, we can estimate the energy of repulsion between the electrons. That is, with the knowledge of how the electrons are distributed relative to one another we can pick up the term in the potential energy we originally neglected, the term $e^2/r_{12}$, and calculate its contribution to the energy.
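For concreteness, Equation $\ref{1}$ can be evaluated for a pair of "frozen" electron positions; the positions in the sketch below are arbitrary illustrative values, and atomic units are used (e = 1) so that energies come out in hartrees:

```python
# Evaluating Equation 1 for frozen electron positions in He (Z = 2),
# in atomic units.  The positions are arbitrary illustrative numbers.
import numpy as np

Z = 2.0
r1 = np.array([0.5, 0.0, 0.0])   # electron 1, in units of a0 (hypothetical)
r2 = np.array([0.0, 0.6, 0.0])   # electron 2 (hypothetical)

V = (-Z / np.linalg.norm(r1)           # nucleus attracts electron 1
     - Z / np.linalg.norm(r2)          # nucleus attracts electron 2
     + 1.0 / np.linalg.norm(r1 - r2))  # electron-electron repulsion
print(f"V(r1, r2) = {V:.3f} hartree")
```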
This method of calculating the electronic energy of atoms reduces a many-electron problem to many one-electron problems. Each electron (moving in the attractive field of the nucleus) is treated independently of the others. In addition, since the problem is now a set of one-electron problems, we may carry over and use all of the results obtained for the hydrogen atom. We pointed out in our discussion of the hydrogen atom that the results we obtained could be applied to any one-electron system by setting $Z$ equal to the appropriate value in all of the formulae. The one-electron energies are easily calculated and are given by $E_n = -\dfrac{Z^2 R}{n^2} \nonumber$ with the Rydberg constant $R$ determined from basic constants: $R=\dfrac {m_e e^4}{8 \varepsilon_0^2 h^2}= 13.6 \,eV \nonumber$ More important, the concept of atomic orbitals, the one-electron wave functions for the hydrogen atom, may be employed in the many-electron case. When each electron is considered in turn, its motion and distribution in space will again be determined by an atomic orbital. The atomic orbitals will differ from the case of the hydrogen atom in that they will generally be more contracted. In the previous chapter we pointed out that the average value of the distance between the nucleus and the electron, $\langle r \rangle$, decreased as the nuclear charge, and hence the attractive force exerted by the nucleus, was increased. However, the orbitals will still be determined by the three quantum numbers n, l and m. Increasing $Z$ contracts the orbital, but the symmetry of the problem is left unchanged, i.e., the attraction of the electron by the nucleus is still determined only by the distance between them and does not depend on the direction. The l and m dependences of the orbitals, which determine the directional properties of the orbital and of the electron density distribution, remain unchanged. Thus we may still refer to "hydrogen-like" 1s, 2s, 2p orbitals. For example, the 1s orbital is the most stable orbital (most negative $E$ value) for any value of $Z$, and we naturally assume that the most stable form of the helium atom will be obtained when both electrons are placed in the 1s orbital. This information, telling us in which atomic orbital the electrons have been placed, is called an electron configuration. An abbreviated notation is used to denote the electron configuration. For example, the lowest energy state of helium, in which two electrons are placed in the 1s orbital, is written as $1s^2$. (This is to be read as one-s-two and not as one-s-squared.) When one of the electrons is placed in an orbital of higher energy, an "excited" configuration is obtained. An example might be $1s^12p^1$, in which one electron is in the 1s orbital and one electron is in the 2p orbital. It should be emphasized that the concept of assigning an electron to an atomic orbital is a rigorous and exact concept only for the hydrogen atom; for the many-electron case it is an approximation.
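A sketch of these one-electron energies for the first few one-electron systems, using $E_n = -Z^2R/n^2$ with $R = 13.6$ eV:

```python
# Hydrogen-like ground-state energies E_1 = -Z**2 * R for H, He+ and Li+2.
# R = 13.6 eV; a rough numerical sketch only.
R = 13.6  # eV

for symbol, Z in (("H", 1), ("He+", 2), ("Li+2", 3)):
    E1 = -Z**2 * R
    print(f"{symbol:5s} (Z = {Z}):  E_1 = {E1:7.1f} eV")
```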
We have seen that the energy of a single electron moving in the attractive field of a nucleus of charge $+Ze$ is $E_n = \dfrac{-2\pi^2 m e^4 Z^2}{n^2h^2}= - \dfrac{Z^2}{n^2}R \label{2}$ with $n = 1, 2, 3, 4...$ The energy of the two electrons in the helium atom, each considered to be independent of the other (this is a gross approximation, by the way), is simply $E_1 = -\left( \dfrac{2^2}{1} \right)R - \left( \dfrac{2^2}{1} \right)R = -8R \nonumber$ for the 1s² electron configuration. To this energy value must be added the energy of repulsion between the two electrons. Since both electrons have been placed in a 1s atomic orbital, we know that the charge distribution for each electron must be spherical and centered on the helium nucleus. The two charge distributions will be completely intermingled and we must calculate the energy of repulsion between every small element of charge density of the one distribution with every small charge element of the second distribution. This calculation can be readily done by the methods of integral calculus and the value of the average energy of repulsion is found to be $E_c = \dfrac{5}{4}ZR = \dfrac{5}{2} R \label{3a}$ with $Z_{He}=2$. We label this energy $E_c$ as it is a correction to our first approximation to the energy. Notice from Equation $\ref{3a}$ that in general $E_c$ depends directly on the value of $Z$. This makes physical sense, for the greater the value of $Z$, the more contracted and superimposed are the two charge distributions, and the greater is the energy of repulsion between them. Note as well that the correction $E_c$ is indeed smaller in magnitude than $E_1$, an assumption we made in developing this method of approximating the electronic energy. The estimate of the total electronic energy of the helium atom is $E_{He} = E_1 + E_c = -8R + \dfrac{5}{2}R = -\dfrac{11}{2}R \nonumber$ This total energy is called the electronic binding energy as it is the energy released when two initially free electrons are bound to the helium nucleus. Recall that $-R$ represents the binding energy of the most stable state of the hydrogen atom; thus the helium atom is five and one half times more stable than the hydrogen atom. This is not really a fair comparison because the value $(11/2)R$ is the energy required to remove both electrons from the helium atom (a double ionization) $He \rightarrow He^{+2} + 2e^- \nonumber$ with $\Delta E =\dfrac{11}{2}R \nonumber$ It is more interesting to compare the energy required to remove a single electron from helium with the energy required to remove the single electron in hydrogen. The energy of this reaction (i.e., the energy required to ionize an atom once) will be denoted by the letter $I_1$. $He \rightarrow He^{+} + e^- \nonumber$ with $\Delta E =I_1 \nonumber$ This energy is easily calculated: $I_1 = E_{He^+} + E_{e^-} - E_{He} \nonumber$ The energy of the $He^+$ ion, using Equation $\ref{2}$ with n = 1 and Z = 2, is $E_{He^+} = -4R \nonumber$ as there is but a single electron left. The energy of the ionized electron $E_{e^-}$ is set equal to zero as it is assumed to be at rest infinitely far away from the ion, and $E_{He} = -\left(\dfrac{11}{2}\right)R. \nonumber$ Thus $I_1$, the first ionization potential of helium, is equal to $1.5 R$, or one and one half times larger than the energy required to ionize a hydrogen atom. How well do our calculated values for the ionization potential and total energy agree with the experimental results? The energy required for the removal of both electrons is 78.98 eV.
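The arithmetic of this section is compact enough to transcribe directly. The following Python sketch simply restates the steps above (with R = 13.606 eV); the values it prints are the ones compared with experiment in the next paragraph:

```python
R = 13.606  # eV
Z = 2

E1 = 2 * (-Z**2 * R)        # two independent 1s electrons: -8R
Ec = (5.0 / 4.0) * Z * R    # average 1s-1s repulsion, (5/4)ZR = (5/2)R
E_He = E1 + Ec              # -(11/2)R, about -74.9 eV

E_He_plus = -Z**2 * R       # He+ keeps a single 1s electron: -4R
I1 = E_He_plus - E_He       # (3/2)R, about 20.4 eV
print(f"E(He) = {E_He:.2f} eV, I1 = {I1:.2f} eV")
```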
Since R = 13.61 eV, the calculated value is (11/2)(13.61) = 74.86 eV. This is an encouraging result as the error is only about 5%. The experimental value for $I_1$ is 24.58 eV and our calculated value is (1.5)(13.61) = 20.42 eV. The percentage error is larger for the latter case because the actual error is the same in both calculations but $I_1$ is smaller than $\Delta E$. However, the method seems promising. We have indeed predicted that it requires almost twice as much energy to remove an electron from helium as it does to remove one from hydrogen.

Effective Nuclear Charge Diagram. (Public Domain)

Effective Nuclear Charge

The calculations outlined above may be improved by introducing the concept of an effective nuclear charge. Since there are two electrons present in the helium atom, neither electron experiences the full attractive force of the two positive charges on the helium nucleus. Each electron partially screens the nuclear charge from the other. We saw previously that the average value of the distance between an electron and the nucleus for a strictly hydrogen-like orbital varied as (1/Z). Thus if we assume that each electron moves in the field of the full nuclear charge of helium, we consider it to be in a 1s orbital with exactly one half the value of $\langle r \rangle$ found for a hydrogen 1s orbital. Since the electron on the average experiences a reduced nuclear charge (i.e., the effective nuclear charge) because of the screening effect of the second electron, we should instead place it in a 1s orbital which possesses an $\langle r \rangle$ value somewhere between that found for an orbital for the cases $Z = 1$ and $Z = 2$. In other words, the size of the orbital should be determined by an effective nuclear charge, rather than by the actual nuclear charge. This lowered value for $Z$ will obviously decrease the value of the average repulsion energy between the electrons, as the two charge clouds will be more expanded and the average distance between the charge points in each distribution will increase. An increased value of $\langle r \rangle$ will also decrease the average kinetic energy of the electrons and thus again lead to an increase in the stability of the atom. On the other hand, an increase in $\langle r \rangle$ will lead to a less negative potential energy as the electrons will on the average be further away from the nucleus. Thus there is some best value for the effective nuclear charge and for $\langle r \rangle$, the value which gives the most stable description of the atom. For helium this "best" value for the effective nuclear charge is found to be 1.687 and the total energy of He is now calculated to be 77.48 eV. The error has been reduced to approximately 2%. The effective nuclear charge value cannot be inserted into Equation $\ref{2}$ to determine the energy of the electron. The $Z$ in Equation $\ref{2}$ refers to the actual nuclear charge, while the effective nuclear charge is a number, always less than the actual $Z$, which determines the optimum size of the orbital when other electrons are present. The value of $Z$ appearing in the equation for $E_c$ will be the effective nuclear charge value. The value of $E_c$ is indeed determined solely by the degree to which the two electron distributions are contracted and this is governed by the effective nuclear charge. It should be pointed out that the concept of an effective nuclear charge will be paramount in our future discussions concerning the electronic structures and properties of many-electron atoms.
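For the curious, the "best" value of 1.687 quoted above is what the standard variational treatment gives. A minimal sketch, assuming the standard trial-energy expression $E(\zeta) = 2\zeta^2 - 4Z\zeta + (5/4)\zeta$ (in units of R) for a 1s² configuration built from orbitals of effective charge $\zeta$; this expression is not derived in the text:

```python
R = 13.606  # eV

def helium_variational(Z=2):
    """Minimize E(zeta) = 2 zeta^2 - 4 Z zeta + (5/4) zeta (units of R)."""
    zeta = Z - 5.0 / 16.0   # setting dE/dzeta = 0 gives the optimum screening
    E = (2 * zeta**2 - 4 * Z * zeta + 1.25 * zeta) * R
    return zeta, E

zeta, E = helium_variational()
print(zeta)   # 1.6875, the "effective nuclear charge" quoted above
print(E)      # about -77.5 eV, i.e., 77.5 eV to remove both electrons
```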
Excited States of the Helium Atom

Just as the single electron in the hydrogen atom can be excited to higher quantum levels, so it should be possible to excite one of the electrons in the He atom to energy levels with quantum numbers greater than one. This will change the electron configuration from 1s² to, say, 1s¹2s¹ or 1s¹2p¹, etc. The excited electron may again lose the excitation energy in the form of light and fall back to the 1s level, giving the ground electronic configuration 1s² $1s^12p^1 \rightarrow 1s^2 + h\nu \nonumber$ Thus the helium atom should emit a line spectrum when it is excited in an electrical discharge tube. Since only a single electron is excited at a given time (although it is possible, with the use of a laser, to excite two electrons simultaneously), the spectrum for helium should be formally the same as that observed for hydrogen. However, since the nuclear charge experienced by the electron will always be greater than one, the lines in the helium spectrum should be observed at higher frequencies (shorter wavelengths) than those for hydrogen.

Table $1$: The wavelengths for the Balmer series in H and the wavelengths for the corresponding one-electron transitions in He.

n    H λ (Å)    He λ (Å)
3    6563       5016
4    4861       3965
5    4340       3614
6    4101       3449

In Table $1$ we compare the two corresponding line spectra, one for hydrogen and one for helium. In both cases the excited electron falls from an upper p energy level to the 2s energy level. In hydrogen the frequencies of the lines in the spectrum are determined by the energy differences between the configurations $np^1 \rightarrow 2s^1 \nonumber$ with $n=3,4,5...$ This series of jumps from $E_n$ (n = 3, 4, 5, 6, ...) to the $E_2$ level generates the Balmer series of lines which we discussed earlier. We are now being more specific in stating that in this particular example the excited electron is in an np orbital. The helium spectrum we wish to compare with this one arises from the transitions between the configurations $1s^1np^1 \rightarrow 1s^12s^1 \nonumber$ with $n=3,4,5...$ Qualitatively the two spectra are the same, as our model predicted. In addition, the helium lines occur at shorter wavelengths (higher energies) than for hydrogen. In fact, for every series of lines (Lyman, Balmer, etc.) found for hydrogen, there is a corresponding series found at shorter wavelengths for helium. Our model, which uses hydrogen-like atomic orbitals to describe many-electron atoms, looks promising indeed. However, it is in the study of the spectrum of helium that we encounter the first shortcoming of this simple approach; there are two series of lines observed for helium for every single series of lines observed for hydrogen. Not only does helium possess the "Balmer" series, it has a second "Balmer" series starting at λ = 3889 Å. That is, the whole series is repeated at shorter wavelengths. Rather than abandon the atomic orbital approach for the many-electron atom, let us keep the above failure of the method in mind and proceed with an application to the lithium atom.

The Lithium Atom

There are three electrons in the lithium atom (Z = 3) but the total repulsion energy between the electrons is still determined by considering the repulsions between a pair of electrons at a time. For this reason, three electrons are fundamentally no more difficult to treat than two electrons. There are simply more possible pairs and hence more repulsive interactions to consider than in the two-electron case.
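The hydrogen column of Table 1 can be checked against the Rydberg formula; in hydrogen the np → 2s transition energy depends only on n, so these are just the Balmer wavelengths. A quick sketch, assuming only a value for the Rydberg constant of hydrogen (the helium column cannot be generated this simply, which is the whole point of the comparison):

```python
R_H = 1.0968e7  # Rydberg constant for hydrogen, in m^-1

for n in range(3, 7):
    # 1/lambda = R_H * (1/2^2 - 1/n^2) for a transition down to n = 2
    lam = 1.0 / (R_H * (0.25 - 1.0 / n**2))
    print(f"n = {n}: {lam * 1e10:.0f} Angstrom")
# prints 6565, 4863, 4342, 4103 -- within a few angstroms of Table 1
```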
The dependence of the potential energy on the distances between the electrons and between the electrons and the nucleus is $V(r_1, r_2,r_3) = -\dfrac{3e^2}{r_1} - \dfrac{3e^2}{r_2} -\dfrac{3e^2}{r_3} +\dfrac{e^2}{|r_1-r_2|} + \dfrac{e^2}{|r_1-r_3|} + \dfrac{e^2}{|r_2-r_3|} \nonumber$ where $|r_1-r_2|$ is the distance between electrons 1 and 2, $|r_1-r_3|$ the distance between electrons 1 and 3, and $|r_2-r_3|$ the distance between electrons 2 and 3. It is natural to assume that, as in the case of hydrogen or helium, the most stable energy of the lithium atom will be obtained when all three electrons are placed in the 1s atomic orbital, giving the electronic configuration 1s³. Proceeding as in the case of helium we calculate the first approximation to the energy to be (using Equation $\ref{2}$) $E_1 = -3 (3^2) R = -27R \nonumber$ This represents the sum of the energies obtained when each electron is considered to move independently in the field of the nucleus of charge +3 in an orbital with n = 1. To this must be added the energy of repulsion between the electrons. The average repulsion energy between a pair of electrons is again given by $(5/4)ZR$. In lithium we must consider the repulsion between electrons 1 and 2, between electrons 1 and 3, and between electrons 2 and 3. Therefore the total repulsion energy, which represents the correction to $E_1$, is estimated (from Equation $\ref{3a}$) to be $E_c = 3 \left( \dfrac{5}{4} Z R \right) = \dfrac{45}{4}R \nonumber$ and the total electronic energy of the lithium atom is predicted to be $E = E_1 + E_c = -27R + \dfrac{45}{4}R = -15.75 R = -214.4\,eV \nonumber$ Thus it should require 214.4 eV to remove all three electrons from the lithium atom. We can also calculate the energy required to remove a single electron from a lithium atom (the first ionization potential) $Li \rightarrow Li^+ + e^- \nonumber$ with $\Delta E =I_1 = E(Li^+) - E(Li) \nonumber$ $= \underbrace{- 2(3^2)R + \dfrac{15}{4}R}_{E(Li^+)} - \underbrace{(-15.75 \,R)}_{E(Li)} = +1.50\,R = +20.4 \,eV \nonumber$ When the predicted values for lithium are compared with the corresponding experimental values, they are found to be in serious error. The lithium atom is not as stable as the calculations would suggest. Experimentally it requires 202.5 eV to remove all three electrons from lithium, and only 5.4 eV to remove one electron. Experimentally it requires less energy to ionize lithium than it does to ionize hydrogen, yet our calculation predicts an ionization energy one and one half times larger. The error in $I_1$ is almost 300%! We should expect a realistic model to do better than this. More serious than this, however, is that the kind of calculation we are doing should never predict the system to be more stable than it actually is. The method should always predict an energy less negative than is actually observed. If this is not found to be the case, then it means that an incorrect assumption has been made or that some physical principle has been ignored. It is also clear that if we were to continue this scheme of placing each succeeding electron in the 1s orbital as we increased the nuclear charge by unity, we would never predict the most striking property of the elements: the property of periodicity. We might recall at this point that there is a periodicity in the types of atomic orbitals found for the hydrogen atom. With every increase in n, all the preceding values of $l$ are repeated, and a new $l$ value is introduced.
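Transcribing this estimate into Python makes the failure explicit; as noted above, the predicted total energy falls below the experimental value of -202.5 eV, something a calculation of this kind should never do:

```python
R = 13.606  # eV
Z = 3

# the hypothetical 1s^3 configuration of lithium
E1 = -3 * Z**2 * R             # three n = 1 electrons: -27R
Ec = 3 * (5.0 / 4.0) * Z * R   # three electron pairs at (5/4)ZR each: (45/4)R
E_Li = E1 + Ec                 # -15.75R, about -214.4 eV (expt: -202.5 eV)

E_Li_plus = -2 * Z**2 * R + (5.0 / 4.0) * Z * R  # 1s^2 ion: one pair remains
I1 = E_Li_plus - E_Li          # +1.5R, about +20.4 eV (expt: 5.4 eV)
print(E_Li, I1)
```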
If we could discover a physical reason for not placing all of the electrons in the 1s orbital, but instead place succeeding electrons in the orbitals with higher n values, we could expect to obtain a periodicity in our predicted electronic structures for the atoms. This periodicity in electronic structure would then explain the observed periodicity in their properties. There must be another factor governing the behavior of electrons and this factor must be one which determines the number of electrons that may be accommodated in a given orbital. To discover what this missing factor is and to find the physical basis for it, we must investigate further the magnetic properties of electrons.
4.03: The Magnetic Properties of the Electron

So far, the only motion we have considered for the electron is a motion in three-dimensional space. Since this motion is ultimately described in terms of an orbital wave function, we term this the orbital motion of the electron. However, the electron may possess an internal motion of some kind, one which is independent of its motion through space. Since the electron bears a charge, such an internal motion, if it does exist, might be expected to generate a magnetic moment. We have previously pointed out that when an electron is in an atomic orbital for which l is not equal to zero, the resultant angular motion of the electron gives rise to a magnetic moment. We would anticipate then that an electron in an s orbital (l = 0) should not exhibit any magnetic effects as its angular momentum is zero. If an electron in these circumstances did exhibit a magnetic effect, it would indicate that another type of motion was possible, presumably an internal one. Whether or not an electron in an s orbital does possess a magnetic moment may be determined by means of an atomic beam experiment described below.

Figure: Ground-state hydrogen atoms travel through an inhomogeneous magnetic field and are deflected up or down depending on their spin (which is based on the electron spin). (1) the hydrogen atom source, (2) collimated atomic beam, (3) inhomogeneous magnetic field, (4) the observed bifurcation of the beam, (5) the predicted spread for a classical atom with no intrinsic electron spin. (CC SA-BY 4.0; Tatoute)

In the present experiment a beam of hydrogen atoms is passed through the apparatus. All of the hydrogen atoms in the beam will be in their ground state with l = 0 and hence they will not possess an orbital magnetic moment. However, when the magnetic field is applied, something does happen to the beam of atoms. It is split into two distinct beams, one of which is deflected to the N pole of the magnet and the other to the S pole. Thus even when atoms possess no magnetic moment because of the orbital motion of the electrons, they may still exhibit magnetic effects! As striking as the behaviour of the atoms as small magnets is the splitting of the beam into two distinct components. Let us consider first the origin of the magnetic effect, and second, the splitting of the beam into two distinct beams. The observed magnetism of the hydrogen atoms must be due to some motion of the electrons. The nucleus of a hydrogen atom does possess a magnetic moment but its magnitude is too small, by a factor of roughly a thousand, to account for the deflections observed in this experiment. A magnetic moment will be observed only when the charged particle possesses angular momentum. Since the orbital angular momentum for an electron in the ground state of hydrogen is zero, we are forced to assume that the electron possesses some internal motion which has associated with it an angular momentum. A classical analogue of the internal angular momentum would be a spinning motion of the electron about its own axis. For this reason it is referred to as a spin angular momentum and the associated magnetic effect as a spin magnetic moment. These effects are separate from, and in addition to, the orbital angular momentum of the electron (classically, the rotation of the electron around the nucleus) and its associated magnetic effects. We are familiar enough with the predictions of quantum mechanics to anticipate that the spin angular momentum and its component along some axis will be quantized.
As in the case of orbital angular momentum, the effect of the quantization will be to limit the number of values which the component of the spin magnetic moment may have along any given axis. The magnitude of the spin angular momentum will determine the number of possible values its component may have along a given axis. Each of the possible values will in turn cause some fraction of the total spin magnetic moment to be aligned along the same axis. In the case of the electron's orbital motion, we found that as l and hence the orbital angular momentum was increased, the number of possible values for the component of the orbital magnetic moment along a given axis was increased, the number being equal to (2l + 1). We can use a magnetic field to inquire into the nature of the spin angular momentum as well. In fact, we have already discussed the pertinent experiment. The beam of hydrogen atoms was split into just two components in the atomic beam experiment. This means that the component of the electron's spin magnetic moment (and spin angular momentum) along a given axis may have only one of two possible values; the component may be aligned with the field and hence be attracted, or it may be opposed to the field and be repelled. The electron's spin magnetic moment has been detected in many different kinds of experiments and the results are remarkable in that only two components of constant magnitude are ever observed. The electron is always either repelled by the field or attracted to it. This implies that the magnitude of the spin angular momentum for a single electron may have only one possible value. Since the number of possible values for the component of a given amount of angular momentum of any type in quantum mechanics is (2l + 1), l must equal ½ and only ½ for the spin angular momentum, and the values of m for the electron spin, which assume values from a maximum of l to a minimum of -l in steps of unity, must equal +½ and -½. In this respect the spin angular momentum of the electron is quite different from its orbital angular momentum, which may have many possible values, as the value of l for the orbital motion is restricted only in that it must equal zero or an integer. It should be stressed that the splitting of the beam of hydrogen atoms into only two components is again evidence of quantization. If the atomic magnets (the hydrogen atoms) behaved according to classical mechanics, then the effect of the magnetic field would be simply to broaden the beam. The orientations of the atomic magnets would be random when they first entered the field of the magnet and classically the individual atomic magnets could be aligned at any and all angles with respect to the field, giving all possible components of the spin magnetic moment along the direction of the field. The inhomogeneous field would then exert a force proportional to the magnitude of the component, and the beam would broaden but not split. Since the spin magnetic moment is an intrinsic property of the electron, even a beam of free electrons should be split into two components in a magnetic field. However, the charge possessed by the free electron also interacts with the magnetic field and the much smaller magnetic-magnetic interaction is masked by the usual deflection of a charged species in a magnetic field. By employing a neutral atom, the complications of the electronic charge may be avoided. The original experiment was performed on a beam of silver atoms by Stern and Gerlach in 1921.
(We shall see shortly that the electrons in a silver atom do not possess any orbital angular momentum.) Let us summarize what we have learned about this new property of the electron. Since an electron may exhibit a magnetic moment even when it does not possess orbital angular momentum, it must possess some internal motion. We call this motion the electron spin and treat it quantum mechanically as another kind of angular momentum. Experimentally, however, all we know is that the electron possesses an intrinsic magnetic moment. The remarkable feature of this intrinsic magnetic moment is that its magnitude and the number of components along a given axis are fixed. A given electron may exhibit only one of two possible components; it may be aligned with the field or against it. Experimentally, or theoretically, this is all we can know about the spin magnetic moment and the spin angular momentum. Hence only one quantum number is required to describe completely the spin properties of a single electron. We shall denote the value of this quantum number by ↑ or ↓, the upwards-pointing arrow signifying that the component of the magnetic moment is aligned with the field and the downwards-pointing arrow that this component is opposed to the field. A total of four quantum numbers is required to specify completely the state of an electron when it is bound to an atom. The quantum numbers n, l and m determine its energy, orbital angular momentum and its component of orbital angular momentum. The fourth quantum number, the spin quantum number, summarizes all that can be known about the spin angular momentum of the electron. This final quantum number may have only one of two possible values corresponding to the magnetic moment component being (a) aligned with the field or (b) opposed to it.

The Pauli Exclusion Principle

The consequences of the spin quantum number, when applied to the problem of the electronic structure of atoms, are not immediately obvious. The small magnitude of the electron's magnetic moment does not directly affect the energy of the electron to any significant degree. To see just how the spin of the electron does influence the problem, let us reconsider our atomic orbital model in the light of this new degree of freedom for the electron. In particular let us reconsider those instances in which our model failed to account for the observations. If a beam of helium atoms is passed through a magnetic field, no splitting and no deflection is observed. The helium atom, unlike the hydrogen atom, is not magnetic. We could account for the absence of a magnetic moment for helium if we assumed that of the two electrons in the helium 1s orbital, one had its magnetic moment component up (↑) and the other down (↓). The two components would then cancel and there would be no resultant magnetic effect. Our complete description of the electronic configuration of the helium atom would be 1s²(↑↓), i.e., both electrons have n = 1, l = 0, m = 0 and one has a spin (↑) and the other a spin (↓). You may wonder why the states of helium corresponding to the configurations 1s²(↑↑) or 1s²(↓↓) are not observed. These states should exhibit twice the magnetism possessed by a hydrogen atom. They are, however, not found to occur. What about the excited states of the helium atom? An excited state results when one electron is raised in energy to an orbital with a higher n value. The electrons are thus in different orbitals.
The spin assignments for an excited configuration can be made in more than one way and are such as to predict the occurrence of both magnetic and non-magnetic helium. For example, the configuration 1s¹2s¹ could be 1s¹(↑)2s¹(↓) and be nonmagnetic, or it could equally well be 1s¹(↓)2s¹(↓) or 1s¹(↑)2s¹(↑) and be magnetic. Care must be exercised in the use of the abbreviated notation 1s¹(↑)2s¹(↓) to indicate the configuration and spin of a many-electron atom. In the present example, all we mean to imply is that the total component of the spin is zero. We do not imply that the electron in the 1s orbital necessarily has a spin "up" and that in the 2s orbital a spin "down." The situation could equally well be described by the notation 1s¹(↓)2s¹(↑). There is no experimental method by which we can distinguish between electrons in an atom, or, for that matter, determine any property of an individual electron in a many-electron system. Only the total magnetic moment, or total angular momentum, may be determined experimentally. Both the magnetic and non-magnetic forms are indeed found to occur for helium in an excited state. There are in fact two kinds of excited helium atoms, those which are non-magnetic and those which are magnetic. If the two forms of helium possess different energies even though they have the same orbital configuration (we shall see why this should be so later) then we have an explanation for the previously noted discrepancy that helium exhibits twice the number of line spectra as does hydrogen. For every set of lines in the spectrum which arises from the transition of the electron from the configurations 1s¹(↑)np¹(↓) to the configuration 1s¹(↑)2s¹(↓), for example, there will be another set of lines due to transitions from 1s¹(↑)np¹(↑) to 1s¹(↑)2s¹(↑). The study of the magnetic properties of the ground and excited states of helium is sufficient to point out a general principle. For the ground state of helium, in which both electrons are in the same atomic orbital, only the non-magnetic form exists. This would imply that when two electrons are in the same atomic orbital their spins must be paired, that is, one up (↑) and one down (↓). This is an experimental fact because helium is never found to be magnetic when it is in its electronic ground state. When the electrons are in different orbitals, then it is again an experimental fact that their spins may now be either paired (↑↓) or unpaired, e.g., (↑↑). Thus when two electrons are in the same orbital (i.e., they possess the same n, l and m values) their spins must be paired. When they are in different orbitals (one or more of their n, l and m values are different) then their spins may be paired or unpaired. We could generalize these observations by stating that "no two electrons in the same atom may have all four quantum numbers the same." Stated in this way we see immediately that any given orbital may hold no more than two electrons. Since two electrons in the same orbital have the same values of n, l and m, they can differ only through their spin quantum number. However, the spin quantum number may have only one of two possible values, and these possibilities are given by (n, l, m, ↑) or (n, l, m, ↓). We have indeed found the principle we were seeking, one which limits the occupation of an atomic orbital. This principle is known as the Pauli exclusion principle. One form of it, suitable for use within the framework of the orbital approximation, is the statement given in quotation marks above.
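The counting implicit in this statement of the Pauli principle is easy to mechanize. A small sketch that enumerates the allowed (n, l, m, spin) combinations and recovers the familiar shell capacities:

```python
def shell_capacity(n):
    """Count the states (n, l, m, spin) allowed by the Pauli principle."""
    states = [(n, l, m, spin)
              for l in range(n)              # l = 0, 1, ..., n - 1
              for m in range(-l, l + 1)      # m = -l, ..., +l
              for spin in ('up', 'down')]    # two spin components
    return len(states)

print([shell_capacity(n) for n in (1, 2, 3, 4)])   # [2, 8, 18, 32]
```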
The Pauli principle cannot be derived from, nor is it predicted by, quantum mechanics. It is a law of nature which must be taken into account along with quantum mechanics if the properties of matter are to be correctly described. The concept of atomic orbitals, as derived from quantum mechanics, together with the Pauli exclusion principle which limits the occupation of a given orbital, provides an understanding of the electronic structure of many-electron atoms. We shall demonstrate this by "predicting" the existence of the periodic table.
4.04: The Electronic Basis of the Periodic Table

The hydrogen-like orbitals for a many-electron atom are listed in order of increasing energy in Fig. 4-2. This energy level diagram differs from the corresponding diagram for the hydrogen atom, a one-electron system. In the many-electron atom, orbitals with the same value of the principal quantum number n do not all have the same energy as they do in the case of hydrogen. For the many-electron atoms, the energy of an orbital depends on both n and l, the energy increasing as l increases even when n is constant. For example, from Fig. 4-2 it is evident that the 3d orbital possesses a higher energy than does the 3p orbital, which in turn has a higher energy than does the 3s orbital. The reason for this difference between the one- and the many-electron case will be discussed below. The energy of the orbital is still independent of the magnetic quantum number m. Thus when l = 1, there are three p orbitals which are still degenerate (all possess the same energy) and this is indicated by the three open circles which are superimposed on each of the p levels. The open circles thus represent the number of available orbitals or the degeneracy of each orbital energy level.

Fig. 4-2. An orbital energy level diagram for a many-electron atom.

With the aid of this energy level scheme and the Pauli principle we may proceed to build up the electronic structures of all the atoms. We do this by assigning electrons one at a time to the vacant orbital which possesses the lowest energy. An orbital is "filled" when it contains two electrons with their spins paired.

Hydrogen. The nuclear charge is 1 and the single electron is placed in the 1s orbital. The electronic configuration is 1s¹.

Helium. The nuclear charge is increased by one unit to 2 and the extra electron is again placed in the 1s orbital, with its spin opposed to that of the electron already present. The electronic configuration is 1s².

Lithium. The nuclear charge is 3 and the third electron, because of the Pauli principle, must be placed in the 2s orbital as the 1s orbital is doubly occupied. The electronic configuration of lithium is therefore 1s²2s¹.

We can now answer the question as to why the 2s orbital is more stable than the 2p orbital, i.e., why Li is described as 1s²2s¹ and not as 1s²2p¹. The two inner electrons of lithium (those in the 1s orbital) partially shield the nuclear charge from the outer electron. Recall that as n increases, the average distance between the electron and the nucleus increases. Thus most of the electron density of the electron with n = 2 will lie outside of the charge density of the two inner electrons which have n = 1. When the outer electron is at large distances from the nucleus, and thus essentially outside of the inner shell of electron density, it will experience a force from only one of the three positive charges on the lithium nucleus. However, as the outer electron does have a small but finite probability of being close to the nucleus, it will penetrate to some extent the tightly bound electron density of the two 1s electrons. In doing so it will "see" much more of the total nuclear charge and be more tightly bound. The closer the outer electron can get to the nucleus, that is, the more it can penetrate the density distribution of the inner shell electrons, the more tightly bound it will be. An electron in an s orbital has a finite probability of being found right at the nucleus. An electron in a p orbital, on the other hand, has a node in its density distribution at the nucleus.
Thus an s electron penetrates the inner shell density more effectively than does a p electron and is consequently more tightly bound to the atom. In a hydrogen atom, there are no inner electrons and both a 2s and 2p electron always experience the full nuclear charge and have the same energy. The crux of this penetration effect on the energy is that the inner shell electron density does possess a finite extension in space. Thus an outer electron can penetrate inner shell density and the screening effect is reduced. If the inner shell density was contracted right onto the nucleus, then no matter how close the outer electron came to the lithium nucleus, it would always experience only a charge of +1. This dependence of the orbital energies on their l value is aptly called the penetration effect. The electron density of a d electron is concentrated even further away from the nucleus than is that of a p electron. Consequently, the orbital energy of a d electron is even less stable than that of a p electron. In some atoms the penetration of the inner shell density by a d electron is so slight that its energy is raised even over that of the s electron with the next highest n value. For example, a 3d electron possesses a higher energy than does a 4s electron in the atoms Sc to Zn, with the exceptions of Cr and Cu. The penetration effect in these elements overrides the principal quantum number for d electrons in determining their relative energies. Notice that the configuration 1s²2s¹ for lithium overcomes the difficulties of our earlier attempts to describe the electronic structure of this atom. The Pauli principle, of which we were ignorant in our previous attempt, forces the third electron to occupy the 2s orbital and forces in turn the beginning of a new quantum shell, that is, a new value of n. Thus lithium, like hydrogen, possesses one outer electron in an s orbital. Since it is only the outer electron density which in general is involved in a chemical change, lithium and hydrogen should have some chemical properties in common, as indeed they do. Hydrogen is the beginning of the first period (n = 1) and lithium marks the beginning of the second period (n = 2).

Beryllium. The nuclear charge is 4 and the electronic configuration is 1s²2s².

Boron. Z = 5 and the electron configuration is 1s²2s²2p¹.

Carbon. Z = 6. The placing of the sixth electron of carbon requires some comment. It will obviously go into a 2p orbital. But in which of the three should it be placed? Should it be placed in the 2p orbital which already possesses one electron, or should it be placed in one of the vacant 2p orbitals? If it is placed in the occupied 2p orbital its spin must be paired with that of the electron already present and the result would be a nonmagnetic carbon atom. If, however, it is placed in one of the vacant 2p orbitals it may be assigned a spin parallel to the first electron. The question is decided on the grounds of which situation gives the lowest energy. As a result of the Pauli principle, two electrons with parallel spins (both up or both down) have only a very small probability of being close to one another. In fact the wave function which describes the two-electron case for parallel spins vanishes when both electrons approach one another. When the wave function vanishes, the corresponding probability distribution goes to zero. On the average, then, electrons with parallel spins tend to keep out of each other's way.
Two electrons with paired spins, whether in the same or different orbitals, are not prevented by the Pauli principle from being close to one another. The wave function for this situation is finite even when they are on top of one another! Obviously, two electrons with parallel spins will have a smaller value for the electrostatic energy of repulsion between them than will two electrons with paired spins. This is a general result which holds almost without exception in the orbital approximation. It is known as one of Hund's rules, as he was the first to state it. Thus the most stable electronic configuration of the carbon atom is 1s²2s²2p²(↑↑), where we have emphasized the fact that the two 2p electrons have parallel spins and hence must be in different 2p orbitals.

Nitrogen. Z = 7. Because of Hund's rule the electronic configuration is 1s²2s²2p³(↑↑↑), i.e., one electron in each of the 2p orbitals. The configuration with the largest possible component of the spin magnetic moment will be the most stable.

Oxygen. Z = 8. One of the 2p electrons must now be paired with the added electron, but the other 2p electrons will be left unpaired: 1s²2s²2p⁴(↑↑). (Only the number of unpaired electrons is indicated by the arrows.)

Fluorine. Z = 9. The configuration will be 1s²2s²2p⁵(↑).

Neon. Z = 10. The tenth electron will occupy the last remaining vacancy in the second quantum shell (the set of orbitals with n = 2): 1s²2s²2p⁶.

Thus neon represents the end of the second period and all the electrons have paired spins. When all the orbitals in a given shell are doubly occupied, the resulting configuration is called a "closed shell." Helium and neon are similar in that they both possess closed shell configurations. Because neither of these elements possesses a vacancy in its outer shell of orbitals, both are endowed with similar chemical properties. When the orbitals belonging to a given l value contain either one electron each (are half-filled) or are completely filled, the resulting density distribution is spherical in shape. Thus the electron density distributions of nitrogen and neon, for example, will be spherical. Reference to Fig. 4-2 indicates that the next element, sodium, will have its outer electron placed in the 3s orbital and it will be the first element in the third period. Since its outer electronic structure is similar to that of the preceding elements, lithium and hydrogen, it is placed beneath them in the periodic table. It is obvious that in passing from sodium to argon, all of the preceding outer electronic configurations found in the second period (n = 2) will be duplicated by the elements of the third period by filling the 3s and 3p orbitals. For example, the electronic structure of phosphorus (Z = 15) will be 1s²2s²2p⁶3s²3p³(↑↑↑) and thus resemble nitrogen.

Argon. Z = 18. Argon will have filled 3s and 3p orbitals and will represent the end of a period. Argon, like helium and neon, possesses a closed shell structure and is placed beneath these two elements in the periodic table.

The Transition Elements

The beginning of the fourth period will be marked by the single and double occupation of the 4s orbital to give potassium and calcium respectively. However, reference to the orbital energy level diagram indicates that the 3d orbital is more stable than the 4p orbital. Since there are five d orbitals, they may hold a total of ten electrons.
Thus the ten elements beginning with scandium (Z = 21) will possess electronic structures which differ from any preceding them, as they are the first elements to fill the d orbitals. A typical electronic configuration of one of these elements is that of manganese: [Ar]4s²3d⁵. The symbol [Ar] is an abbreviated way of indicating that the inner shells of electrons on manganese possess the same orbital configuration as those of argon. In addition, the symbol 3d⁵ indicates that there are five electrons in the 3d orbitals, no distinction being made between the five different d orbitals. This series of elements in which the 3d orbitals are filled is called the first transition series. The element zinc with a configuration [Ar]4s²3d¹⁰ marks the end of this series. The six elements from gallium to krypton mark the filling of the 4p orbitals and the closing, with krypton, of the fourth quantum shell and the fourth period of the table. While the 3d orbitals are less stable than the 4s orbitals in the neutral atom (with the exceptions of Cr and Cu) and are filled only after the 4s orbitals are filled, the relative stability of the 4s and 3d orbitals is reversed in the ionic forms of the transition metals. For example, the configuration of the ion which results when the manganese atom loses two electrons is Mn²⁺ [Ar]3d⁵ and not [Ar]4s²3d³. This is a general result. The d orbitals of quantum number n are filled only after the s orbital of quantum number (n + 1) is filled in the neutral atom, but the nd orbital is more stable than the (n + 1)s orbital in the corresponding ion. The fifth period begins with the filling of the 5s orbital, followed by the filling of the 4d orbitals, which generates the second transition series of elements. The period closes with the filling of the 5p orbitals and ends with xenon.

The lanthanide and actinide elements

The sixth period is started by the filling of the 6s orbital. The next element, lanthanum, has the electronic configuration [Xe]6s²5d¹. However, the next fourteen elements represent the beginning of another new series as the filling of the 4f orbitals is now energetically favoured over a continued increase in the population of the 5d orbitals. Note that the very small penetration effect possessed by the 4f orbitals (n = 4) delays their appearance until the sixth quantum shell has been partially filled. There are fourteen elements in this series, called the lanthanide series, since there are seven 4f orbitals (l = 3 and 2 × 3 + 1 = 7). The third transition series follows the lanthanide elements as the occupation of the 5d orbitals is completed. This in turn is followed by the filling of the 6p orbitals. The final period begins with the filling of the 7s orbital and continues with the filling of the 5f orbitals. This second series of elements with electrons in f orbitals is called the actinide series. The concept of atomic orbitals in conjunction with the Pauli principle has indeed predicted a periodicity in the electronic structures of the elements. The form of this periodicity duplicates exactly that found in the periodic table of the elements, in which the periodicity is founded on the observed chemical and physical properties of the elements. Our next task will be to determine whether or not our proposed electronic structures will properly predict and explain the observed variations in the chemical and physical properties of the elements.
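As a check on the filling order just described, here is a minimal aufbau sketch. It fills hydrogen-like orbital sets in order of increasing (n + l), taking smaller n first at a tie, with 2(2l + 1) electrons per set; the exceptional configurations noted above (e.g. Cr, Cu, and the transition-metal ions) are deliberately not reproduced:

```python
def configuration(Z):
    """Ground-state configuration of a neutral atom by the (n + l, n) rule."""
    letters = 'spdf'
    orbitals = sorted(((n, l) for n in range(1, 8) for l in range(min(n, 4))),
                      key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, left = [], Z
    for n, l in orbitals:
        if left == 0:
            break
        occ = min(left, 2 * (2 * l + 1))   # a set of 2l+1 orbitals, 2 each
        parts.append(f"{n}{letters[l]}{occ}")
        left -= occ
    return ' '.join(parts)

print(configuration(15))   # phosphorus: 1s2 2s2 2p6 3s2 3p3
print(configuration(25))   # manganese:  1s2 2s2 2p6 3s2 3p6 4s2 3d5
```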
4.05: Further Reading

R. M. Hochstrasser, Behaviour of Electrons in Atoms, W. A. Benjamin Inc., New York, N.Y., 1964.

The magnitude of the total angular momentum in a many-electron atom is governed by the same rules of quantization as apply to the motions of the individual electrons. Because of this, the addition of the angular momentum vectors of the individual electrons in an atom to give the total angular momentum quantum number denoted by J is not arbitrary, but must be carried out in such a way that the magnitude of the resultant vector is expressible in the form $\sqrt{J(J+1)}\,(h/2\pi)$, with J = 0, 1, 2, 3 ... An elementary discussion of the manner in which the total angular momentum of an atom is determined by quantum mechanics is given in the above reference.

4.E: Exercises

Problems

1. Would you expect the spectrum of magnesium (Z = 12) to resemble that of He? Explain your answer.

2. The boron atom has the electronic configuration 1s²2s²2p¹. The single unpaired electron in the 2p orbital will possess both orbital and spin angular momentum. Into how many distinct beams will a beam of boron atoms be split when it is passed through an atomic beam apparatus with an inhomogeneous magnetic field directed perpendicular to the direction of travel of the atoms?

3. When a test tube containing an aqueous solution of Fe³⁺ ions is placed near the poles of a strong magnet, the test tube is attracted and pulled into the magnetic field. When a test tube containing a solution of Zn²⁺ ions is placed near the magnetic field, it is not attracted into the field. Use the atomic orbital theory to account for the fact that the Fe³⁺ solution is magnetic while the Zn²⁺ solution is not. The atomic number of Fe is 26 and of Zn is 30. (Recall that the 3d orbitals are more stable than are the 4s orbitals in the ionic forms of the transition elements.)

4. Suppose you lived in a universe where all of the laws of quantum mechanics applied as they do in ours, but where the spin angular momentum quantum number of the electron had increased from ½ to some larger value. The new value must also be half-integer if the Pauli principle is to apply. Rather than use the general symbol "l" to denote an angular momentum quantum number, we shall reserve this symbol for orbital angular momentum and introduce a new symbol "s" to denote the spin angular momentum quantum number. In our universe, a beam of hydrogen atoms in their ground state (with l = 0) is split in two in an atomic beam apparatus when a magnetic field is applied. The number of quantized components of angular momentum is related to the angular momentum quantum number by the expression (2l + 1) for orbital momentum or (2s + 1) for spin momentum. Thus, since two components are observed, the value of the spin quantum number s in our universe is ½. Recall that the magnetic quantum number m governing the components of angular momentum assumes values from l to -l in steps of unity, or from s to -s in the case of spin angular momentum. If we use $m_l$ and $m_s$ to denote the orbital and spin magnetic quantum numbers respectively, then the values of $m_s$ are +½ and -½ in our universe.

(a) When a beam of hydrogen atoms with l = 0 is passed through an atomic beam apparatus in the new universe, the magnetic field causes the beam to split into four (4) separate beams. What is the value of the spin quantum number s in the new universe and what are the possible values for the spin magnetic quantum number $m_s$?
Since only the spin quantum number has undergone a change in the new universe, the atomic orbital model of electronic structure should still apply and each electron will be assigned to an orbital with some value of n, l and $m_l$ and a given spin quantum number $m_s$. The statement of the Pauli principle as it applies to the orbital model is "no two electrons in the same atom may have all four quantum numbers the same." How many electrons may occupy an orbital with given values of n, l and $m_l$ in the new universe?

(b) Clearly, the periodic table of the elements in the new universe will have a different structure from that in ours. State how many elements would appear in the first, second, third and fourth rows of the new table. What would be the ground state configurations of the elements with atomic numbers Z = 7 and 10 and what would their valencies be? Which element would be the first of the noble gases in the new universe?

5. When a transition metal ion is placed in solution, its magnetic moment generally changes from the value it had in the gas phase, indicating that the number of unpaired electron spins is different in the gas and solution phases. Transition metal ions M²⁺ form a six-coordinated octahedral complex with CN⁻ ions when placed in a solution containing this ligand. The formation of the complex perturbs the d orbitals, changes their energy and partially removes their degeneracy. That is, the d-level which is five-fold degenerate in the gas phase is split into two or more levels with different energies. The new sets of levels can be degenerate, but their degeneracies will necessarily be less than five. By measuring the magnetic moments of solutions of the complexes $M(CN)_6^{-4}$ for various metal ions M²⁺, one can determine the number of unpaired d electrons in the complex. With this information, use the orbital model to determine the number of levels into which the d-level is split and the degeneracy of each of the new levels. The solution of the Fe²⁺ ion showed that no permanent magnetic moment was present - the solution was diamagnetic. The V²⁺ and Ni²⁺ solutions gave moments indicating the presence of three and two unpaired electrons respectively. The atomic numbers of the metal atoms are V: 23, Fe: 26 and Ni: 28. You must show how your final answer is arrived at.
05: Electronic Basis for the Properties of the Elements

We shall now present an interpretation of the physical and chemical properties of the elements based on the atomic orbital description of their electronic structures. Our discussion of the properties of the atoms will be a qualitative one, but it should be pointed out that many of the properties of atoms can now be accurately predicted by quantum mechanical calculations employing a very extended version of the atomic orbital concept.

5.02: Horizontal Variations

The experimental values of the atomic radii and the first and second ionization potentials of the elements (labelled as $I_1$ and $I_2$ respectively) in the third row of the periodic table are listed in Table 5-1. A study of these values will indicate the basic trends observed as the number of electrons is increased one at a time until all the orbitals with a given value of n are fully occupied.

Table 5-1: The atomic radii and ionization potentials* of third row elements

Element      Na     Mg     Al     Si     P      S      Cl     Ar
Radius (Å)   1.86   1.60   1.48   1.17   1.0    1.06   0.97   -
I1 (eV)      5.14   7.64   5.98   8.15   11.0   10.4   13.0   15.8
I2 (eV)      47.3   15.0   18.8   16.3   19.7   23.4   23.8   27.6

*The values for $I_1$ and $I_2$ are taken from C. E. Moore, Atomic Energy Levels, Vol. 1, N.B.S. Circular 467, Washington, D.C. (1949). $I_2$ is the energy required to remove an electron from the singly-charged ion, i.e., the energy required to ionize a second electron.

Atomic radii

The diameter of an atom is difficult to define precisely as the density distribution tails off at large distances. However, there is a limit as to how close two atoms can be pushed together in a solid material. We shall take one half of the distance between the nuclei of two atoms in an elemental solid as a rough measure of the atomic radius. Any consistent method of defining the radius leads to the same trend we see in Table 5-1. The size of the atom in general decreases as the number of electrons in the quantum shell is increased. This observation, which at first sight might appear surprising, finds a ready explanation through the concept of an effective nuclear charge. The electric field, and hence the attractive force, exerted by the nucleus on an electron in the outer quantum shell is reduced because of the screening effect of the other electrons which are present in the atom. An outer electron does not penetrate to any great extent the tightly bound density distribution of the inner shell electrons. Consequently each inner electron (an electron with an n value less than the n value of the electron in question) reduces the value of the nuclear charge experienced by the outer electron by almost one unit. The remaining outer electrons, on the other hand, are on the average all at the same distance away from the nucleus as is the electron under consideration. Consequently each outer electron screens considerably less than one nuclear charge from the other outer electrons. Thus the higher the ratio of outer shell to inner shell electrons, the larger will be the "effective nuclear charge" which is experienced by an electron in the outer shell.
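A minimal sketch of this screening argument in its limiting form for the third-row atoms, assuming each of the ten inner-shell electrons screens exactly one nuclear charge and the outer electrons screen nothing (real outer-shell screening would lower these numbers somewhat):

```python
third_row = {'Na': 11, 'Mg': 12, 'Al': 13, 'Si': 14,
             'P': 15, 'S': 16, 'Cl': 17, 'Ar': 18}
core = 10  # the 1s2 2s2 2p6 inner shells of every third-row atom

for symbol, Z in third_row.items():
    print(f"{symbol}: Z_eff ~ {Z - core}")   # rises from 1 (Na) to 8 (Ar)
```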
All of the elements in a given row of the periodic table possess the same number of inner shell electrons. For example, the elements in the third row have the inner shell configuration of 1s²2s²2p⁶. As we move across the periodic table from left to right the nuclear charge increases, and each added electron is placed in the outer shell until a total of eight is reached and the quantum shell is full. The number of outer shell electrons increases along a given period, but the number of inner shell electrons remains fixed. Thus the effective nuclear charge increases from a minimum value for sodium, where the ratio of outer shell to inner shell electrons is 1:10, to a maximum value for argon where the same ratio is 8:10. The atomic radius undergoes a gradual decrease since the outer electrons become more tightly bound as the effective nuclear charge increases. These features of the atomic density distributions are clearly evident in a graph of the radial distribution function, Q(r). This function, it will be recalled, gives the number of electronic charges within a thin shell of space lying between two concentric spheres, one of radius r and the other with a radius only slightly larger. The radial distribution functions for atoms may be determined experimentally by X-ray or electron diffraction techniques. Plots of Q(r) versus r for sodium and argon (Fig. 5-1), the first and last members of the third row of the periodic table, clearly reveal the persistence of a "shell structure" in the many-electron atoms.

Fig. 5-1. The radial distribution functions Q(r) for the Na and Ar atoms.

There are three peaks in the density distribution corresponding to the presence of three principal quantum shells in the orbital model of the electronic structure of sodium and argon. The peak closest to the nucleus may be identified with the charge density in the 1s orbital, the middle peak with that in the 2s and 2p orbitals and the outer peak with the charge density in the 3s orbital in sodium and in the 3s and 3p orbitals in argon. The maxima in Q(r) occur at smaller values of r for argon than for sodium as expected on the basis of a larger effective nuclear charge for argon than for sodium. Most of the 1s charge density is found within a very thin shell close to the nucleus in both cases as the inner shell density experiences the field of the full nuclear charge, $Z_{Na} = 11$ and $Z_{Ar} = 18$. The charge density in the n = 2 orbitals is confined to a shell which is narrower and closer to the nucleus in argon than in sodium. The electrons in this second shell experience a nuclear charge of approximately sixteen in argon but of only nine in sodium. The most dramatic effect of the difference in the effective nuclear charges of argon and sodium is evidenced by the appearance of the electron density in the valence shell. In sodium this shell is broad and diffuse as there are ten inner electrons shielding eleven nuclear charges. In argon, where there are ten inner electrons to shield eighteen nuclear charges, the valence shell is more contracted and it peaks at roughly one third of the corresponding distance in sodium. The valence shell density is clearly more tightly bound in argon than in sodium. Figure 5-2 shows the effect of an increase in the nuclear charge on the individual atomic orbital densities for elements in the same row of the periodic table, in this case sodium and chlorine. The total density distribution for the atom is obtained by summing the individual orbital densities.
The summation of just the 1s, 2s and three 2p densities yields the spherical inner shell densities indicated on the diagram as "core densities." It is the core density which shields the nuclear charge from the valence electrons. The outer density contour indicated for the inner shell or core densities defines a volume in space containing over 99% of the electronic charge of the inner shell electrons. Thus the effective nuclear charge experienced by the valence density beyond the indicated radii of the core densities is $Z_{Na} - 10 = 1$ for sodium and $Z_{Cl} - 10 = 7$ for chlorine. Notice that the radius of the core density is smaller for chlorine than it is for sodium, and thus the attractive force exerted on the valence electrons by each of the unscreened nuclear charges will be greater in chlorine than in sodium.

Fig. 5-2. Atomic orbital charge densities for the Na and Cl atoms. Only one member of a 2p or 3p set of orbitals is shown. The nodes are indicated by dashed lines. The inner node of the 3s orbital is too close to the nucleus to be indicated in the diagram. When two neighbouring contours have the same value, as for example the two outermost contours in the 3s density of Na, the charge density passes through some maximum value between the two contours, decreasing to zero at the nodal line. In terms of the outermost contour shown in the total density plots (0.002 au) the Cl atom appears to be larger than the Na atom. The outer charge density of Na is, however, very diffuse (as shown by the plot of Q(r) in Fig. 5-1) and in terms of density contours of value less than 0.002 au the Na atom is indeed larger than the Cl atom. The values of contours not indicated in the figure may be obtained by referring to the Table of Contour Values.

There is one exception to the trend of a decrease in diameter across a given row in that phosphorus has an atomic radius slightly smaller than that of sulphur which follows it in the table. The configuration of the outer electrons in phosphorus is 3s²3p³(↑↑↑). Each of the p orbitals contains a single electron and according to Hund's rule all will have the same spin quantum number. Electrons with identical spins have smaller electron-electron repulsion energies than do electrons with paired spins, for reasons we have previously mentioned. Therefore, the larger the number of parallel spins in an atom, the smaller will be the average energy of repulsion between the electrons. Three is the maximum number of unpaired spins possible in any of the short periods as this corresponds to a half-filled set of p orbitals. The stabilizing effect of the decreased energy of repulsion between the electrons is comparable to the effect obtained by increasing the effective nuclear charge by approximately one. This can be seen by comparing phosphorus with sulphur. Sulphur has an increased nuclear charge but the added electron must be paired up with one of the electrons in the p orbitals. The number of unpaired electrons with parallel spins is thus reduced to two, the average energy of repulsion between the electrons is increased, and the sulphur atom is slightly larger than the phosphorus atom. The decrease in energy which is obtained by maximizing the number of parallel spins is not sufficient to change the most stable outer configuration actually found for silicon, 3s²(↑↓)3p²(↑↑), to that in which all four outer electrons have parallel spins, 3s¹(↑)3p³(↑↑↑). This latter configuration could be obtained only by promoting an electron from a 3s orbital to a 3p orbital.
The 3s orbital is more stable than a 3p orbital because of the penetration effect, and the energy increase caused by the promotion of an electron from the 3s to a 3p orbital would not be offset by the energy decrease obtained by maximizing the number of parallel spins. It is interesting to note, however, that the reverse of this is true for some of the elements in the transition series. In these elements the 4s and 3d (or in general the ns and (n - 1)d) orbitals are the outer orbitals. The energy difference between an ns and an (n - 1)d orbital is much less than that between an ns and an np orbital. Thus the effect of maximizing the number of parallel spins can be overriding in these cases. The outer electronic structure of vanadium is 4s²3d³. (Recall that there are five d orbitals and hence the configuration d⁵ will represent five electrons with parallel spins.) We would expect the outer electronic configuration of the next element, chromium, to be 4s²3d⁴ with four parallel spins. Instead, the configuration is actually 4s¹3d⁵, resulting in a total of six parallel spins and a reduction in the energy of repulsion between the electrons.

The Ionization Potentials

Reference to Table 5-1 indicates that in general the amount of energy required to remove one of the outer electrons increases as the effective nuclear charge increases. The increase in I1 from approximately 5 ev for sodium to approximately 16 ev for argon dramatically illustrates the increase in the force which the nucleus exerts on the outer electrons as the nuclear charge and the number of outer electrons is increased. The effect of the half-filled set of p orbitals is again evident as I1 is slightly larger for phosphorus than for sulphur. There is an apparent discrepancy in the value for I1 observed for magnesium. The outer electronic configuration of magnesium is 3s² and for aluminum is 3s²3p¹. The value of 7.64 ev observed for magnesium is the energy required to remove a 3s electron, while the value quoted for aluminum is the energy required to remove a 3p electron. An s orbital is more stable than a p orbital because of its greater penetration of the inner core of electron density. Thus the penetration effect overrides the increase in the effective nuclear charge. We can test the validity of this explanation by comparing the energies required to remove a second electron (I2) from the magnesium and aluminum atoms. The outer electronic configurations of the singly-charged magnesium and aluminum ions are 3s¹ and 3s². Thus a comparison of the second ionization potentials (I2) will be free of the complication due to the penetration effect because we will be comparing the amount of energy required to remove an s electron in each case. The values in Table 5-1 indicate that the removal of an s electron requires more energy in aluminum than in magnesium, a result which is consistent with the greater effective nuclear charge for aluminum than for magnesium. What explanation can be given for the second ionization potential of sulphur being almost equal to that of chlorine? It is worthwhile noting the large value of the second ionization potential observed for sodium. The sodium ion has the electron configuration 1s²2s²2p⁶, i.e., there are no remaining outer electrons. The second ionization potential for sodium is, therefore, a measure of the amount of energy required to remove one of what were initially inner shell electrons in the neutral atom.
The effective nuclear charge experienced by a 2p electron in the sodium ion will be very large indeed, because the number of inner shell electrons for an n = 2 electron is only two. That is, only the two electrons in the 1s orbital exert a large screening effect. Thus, in addition to the fact that the ion bears a net positive charge, the ratio of outer to inner shell electrons is 8:2, which is even more favourable than that obtained for argon. (Recall that in the neutral sodium atom the ratio is 1:10.) The value of I2 for sodium again emphasizes the electronic stability of a closed shell, a stability which is a direct reflection of the large value of the effective nuclear charge operative in such cases.
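The counting rule behind these statements can be collected into a few lines. The sketch below is only the bookkeeping device used in the text (inner shell electrons screen fully, outer ones not at all), not a quantitative calculation of screening:

```python
# Effective nuclear charge by the simple counting rule of the text:
# Z_eff = Z - (number of inner shell electrons).
cases = {
    "Na atom, 3s valence electron": (11, 10),  # ten n = 1 and n = 2 electrons screen
    "Na+ ion, 2p electron":         (11, 2),   # only the two 1s electrons screen
    "Ar atom, 3p valence electron": (18, 10),
}

for label, (Z, n_inner) in cases.items():
    print(f"{label}: Z_eff ~ {Z - n_inner}")
```

The jump from Z_eff ~ 1 for the valence electron of the neutral atom to Z_eff ~ 9 for a 2p electron of the ion is the origin of the very large value of I2 for sodium.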
Table 5-2 lists the atomic radii and the ionization potentials of the elements found in the first column of the periodic table, the group I elements.

Table 5-2: Atomic Radii and Ionization Potentials of Group I Elements

Element      Li     Na     K      Rb     Cs
Radius (Å)   1.50   1.86   2.27   2.43   2.62
I1 (ev)      5.4    5.1    4.3    4.2    3.9

The average value of the distance between the electron and the nucleus increases as the value of the principal quantum number is increased. The increase in the atomic diameters down a given group in the periodic table is thus understandable. Each of the group I elements represents the beginning of a new quantum shell. There will be a very sharp decrease in the effective nuclear charge on passing from the preceding closed shell element to a member of group I, as the number of the inner shell electrons is increased by eight. This large sudden reduction in the effective nuclear charge and the fact that the electron, because of the Pauli exclusion principle, must enter a new quantum shell, causes the group I elements to be larger in size and much more readily ionized than the preceding noble gas elements. The decrease in the effective nuclear charge and the increase in the principal quantum number down a given family bring about a steady decrease in the observed ionization potentials. Thus the outer 6s electron in cesium is, on the average, further from the nucleus than is the outer 2s electron in lithium. It is also more readily removed. So far we have considered the periodic variations in the energy required to remove an electron from an atom: A → A+ + e-. In some favourable cases it is possible to determine the energy released when an electron is added to an atom: A + e- → A-. The magnitude of the energy released when an atom captures an extra electron is a measure of the atom's electron affinity. It might at first seem surprising that a neutral atom may attract an extra electron. Indeed many elements do not have a detectable electron affinity. However, consider the outer electronic configuration of the group VII elements, the halogens: ns²np⁵. There is a single vacancy in the outer set of orbitals and the effective nuclear charge experienced by the valence electrons in a halogen atom is almost the maximum value possible for any given row. Because of the incomplete screening of the nuclear charge by the outer electrons, the remaining vacancy in the outer shell will, in effect, exert an attractive force on a free electron large enough to bind it to the atom. The electron affinities for the rare gas atoms will be effectively zero. Even though the effective nuclear charge is a maximum for this group of elements, there are no vacancies in the outer set of orbitals in a rare gas atom and, as a result of the Pauli principle, an extra electron would have to enter an orbital in the next quantum shell. The electron in this orbital will experience only a very small effective nuclear charge as all of the electrons originally present in the atom will be in inner shells with respect to it. Elements to the left of the periodic table, the alkali metals for example, do have vacancies in their outer quantum shell but their effective nuclear charges are very small in magnitude. Thus these elements do not exhibit a measurable electron affinity. The electron affinity increases across a given row of the periodic table and reaches a maximum value with the group VII elements. This is a direct reflection of the variation in the effective nuclear charge.
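As an aside, the inverse relation between radius and ionization potential recorded in Table 5-2 can be checked with a deliberately crude estimate: treat the outer electron as bound by a single unscreened nuclear charge (Z_eff = 1) at the tabulated radius, so that I ≈ e²/2r. The numbers below (a rough sketch, not part of the original data) underestimate the observed potentials but reproduce the steady decrease down the group:

```python
# Data from Table 5-2.
radius_angstrom = {"Li": 1.50, "Na": 1.86, "K": 2.27, "Rb": 2.43, "Cs": 2.62}
I1_ev = {"Li": 5.4, "Na": 5.1, "K": 4.3, "Rb": 4.2, "Cs": 3.9}

E2 = 14.40   # e^2/(4*pi*eps0) in ev*Angstrom
for el, r in radius_angstrom.items():
    estimate = E2 / (2.0 * r)   # crude one-electron binding energy, I ~ e^2/2r
    print(f"{el}: estimated I1 = {estimate:.1f} ev, observed = {I1_ev[el]} ev")
```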
The orbital vacancy in which the extra electron is placed is found at larger distances from the nucleus when the principal quantum number is increased. Thus the electron affinity should decrease down any given family of elements in the periodic table. For example, the electron affinities for the halogens should decrease in the order F > Cl > Br > I. The variation in the ionization potentials across a given row is reflected in the values shown in the atomic orbital energy level diagram for the elements from hydrogen through to neon (Fig. 5-3). Fig. 5-3. An orbital energy level diagram for the elements H to Ne. (Note that the energy scale used for the 1s orbital differs by a factor of ten from that for the 2s and 2p orbitals.) The orbital energies show a uniform decrease when the nuclear charge is increased, reflecting an increase in the binding of the electrons. The total energy of a many-electron atom is not simply the sum of the orbital energies. Summing the orbital energies does not take proper account of the repulsions between the electrons. The orbital energies do, however, provide approximate estimates of the ionization potentials. The ionization potential is the energy required to remove one electron from an atom, and an orbital energy is a measure of the binding of a single electron in a given orbital. Thus the ionization potential should be approximately equal to minus the orbital energy. For example, the ionization potential of lithium is 5.39 ev and the 2s orbital energy is -5.34 ev. Similarly, I1 for neon is 21.56 ev and the 2p orbital energy is -23.14 ev. Shell structure is also evident in the ionization potentials and orbital energies of atoms. By exposing the atom to light of very short wavelength (in the X-ray region of the spectrum), it is possible to ionize one of the inner shell electrons, rather than a valence electron. That is, the energy of an X-ray photon is comparable to the binding energy of an inner shell electron. The resulting ion is in a very unstable configuration and after a very brief period of time an electron from the outer shell "falls" into the vacancy in the inner shell. In falling from an outer to an inner shell the binding of the electron is greatly increased and a photon is emitted. The energy of this photon should be approximately equal to the difference in energies of the outer shell and inner shell orbitals. For example, the photon emitted when neon loses an inner shell electron has an energy of 849 ev. The difference in energy between the 2p and 1s orbitals of neon is 869 ev. Photons with energies in this range occur in the X-ray region of the spectrum. It is apparent from the variation in the 1s orbital energies shown in Fig. 5-3 that the energies and hence the frequencies of the X-ray photons will increase as the nuclear charge is increased. It was from a study of the X-ray photons emitted by the elements that Moseley was first able to determine the nuclear charges (the atomic numbers) of the elements.
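The photon energies quoted above are converted to wavelengths by λ = hc/E, and with hc ≈ 12398 ev·Å a one-line check (an added illustration) confirms that the neon emission indeed falls in the X-ray region:

```python
# Photon energy (ev) to wavelength (Angstroms), using hc = 12398.4 ev*Angstrom.
def wavelength_angstrom(energy_ev):
    return 12398.4 / energy_ev

# The photon emitted when neon loses an inner shell electron, 849 ev:
print(f"lambda = {wavelength_angstrom(849.0):.1f} Angstroms")   # ~14.6, X-ray region
```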
A detailed study of the chemical implications of the orbital theory of electronic structure must await our discussion of the chemical bond. However, we can at this point correlate the gross chemical behaviour of the elements with the general results of the orbital theory. The effective nuclear charge is a minimum for the group I elements in any given row of the periodic table. Therefore, it requires less energy to remove an outer electron from one of these elements than from any other element in the periodic table. The strong reducing ability of these elements is readily accounted for. The variation in the relative reducing power of the elements across a given period or within a given group will be determined by the variation in the effective nuclear charge. The ability of the elements in a given row of the periodic table to act as reducing agents should undergo a continuous decrease from group I to group VII, since the effective nuclear charge increases across a given row. Similarly, the reducing ability should increase down a given column (group) in the table since the effective nuclear charge decreases as the principal quantum number is increased. Anticipating the fact that electrons can be transferred from one atom (the reducing agent) to another (the oxidizing agent) during a chemical reaction, we expect the elements to the left of the periodic table to exhibit a strong tendency to form positively charged ions. The ability of the elements to act as oxidizing agents should parallel directly the variations in the effective nuclear charge. Thus the oxidizing ability should increase across a given row (from group I to group VII) and decrease down a given family. These trends are, of course, just the opposite of those noted for the reducing ability. We can also relate the chemical terms "reducing ability" and "oxidizing ability" to the experimentally determined energy quantities, "ionization potential" and "electron affinity." The reducing ability should vary inversely with the ionization potential, and the oxidizing ability should vary directly with the electron affinity. The elements in groups VI and VII should exhibit a strong tendency for accepting electrons in chemical reactions to form negatively charged ions. Francium, which possesses a single outer electron in the 7s orbital, should be the strongest chemical reducing agent and fluorine, with an orbital vacancy in the 2p subshell, should be the strongest oxidizing agent. A great deal of chemistry can now be directly related to the electronic structure of the elements. For example, the reaction

Cl2 + 2Br- → 2Cl- + Br2

is explained chemically by stating that Cl2 is a stronger oxidizing agent than Br2. The electronic interpretation is that the orbital vacancy in Cl is in a 3p orbital and closer to the nucleus than the 4p orbital vacancy in Br. Thus the effective nuclear charge which attracts the extra electron is larger for the Cl atom than for the Br atom. We could of course interpret this same reaction by stating that the Br- ion is a stronger reducing agent than is the Cl- ion. In other words the extra electron in the Br- ion is less tightly held than is the extra electron in the Cl- ion. The explanation in terms of the relative effective nuclear charges is the same as that given above. The decrease in the effective nuclear charge down the halogen family of elements leads to some interesting differences in their chemistry.
For example, hydrogen chloride may be prepared from sodium chloride and sulphuric acid:

NaCl + H2SO4 → NaHSO4 + HCl (1)

However, the same method cannot be employed in the preparation of hydrogen bromide or hydrogen iodide. In the preparation of hydrogen bromide from sodium bromide,

NaBr + H2SO4 → NaHSO4 + HBr (2)

some of the HBr reacts further,

2HBr + H2SO4 → Br2 + SO2 + 2H2O (3)

and the HBr is thus contaminated. In the preparation of hydrogen iodide a further reaction again occurs:

8HI + H2SO4 → 4I2 + H2S + 4H2O (4)

Reactions (3) and (4) are clearly redox reactions in which the halide ions reduce the sulphur in the SO4-2 anion to a lower oxidation state. Since Cl has the highest effective nuclear charge, the Cl- ion should be the weakest reducing agent of the three halide ions. Indeed, the Cl- ion is not a strong enough reducing agent to change the oxidation state of S in SO4-2. The Br- ion possesses an intermediate value for the effective nuclear charge and thus it is a stronger reducing agent than the Cl- ion. The Br- ion reduces the oxidation number of sulphur from (+6) to (+4). Since the I- ion binds the extra electron least of all (the electron is in an n = 5 orbital and the effective nuclear charge of iodine is the smallest of the three), it should be the strongest reducing agent of the three halide ions. The I- ion in fact reduces the sulphur from (+6) to (-2). A word about oxidation numbers and electron density distributions is appropriate at this point. An oxidation number does not, in general, represent the formal charge present on a species. Thus S is not S+6 in the SO4-2 ion, nor is it S-2 in the H2S molecule. However, the average electron density in the direct vicinity of the sulphur atom does increase on passing from SO4-2 to H2S. From their relative positions in the periodic table it is clear that oxygen will have a greater affinity for electrons than sulphur. Thus when sulphur is chemically bonded to oxygen the electron density in the vicinity of the sulphur atom is decreased over what it was in the free atom and increased in the region of each oxygen atom. Again it is clear from the relative positions of H and S in the periodic table that sulphur has a greater affinity for electrons than does hydrogen. Thus in the molecule H2S, the electron density in the vicinity of the sulphur atom is increased over that found in the free atom. In changing the immediate chemical environment of the sulphur atom from that of four oxygen atoms to two hydrogen atoms, the electron density (i.e., the average number of electrons) in the vicinity of the sulphur atom has increased. The assignment of actual oxidation numbers is simply a bookkeeping device to keep track of the number of electrons, but the sign of the oxidation number does indicate the direction of the flow of electron density. Thus sulphur has a positive oxidation number when combined with oxygen (the sulphur atom has lost electron density) and a negative one when combined with hydrogen (the electron density around sulphur is now greater than in the sulphur atom). The above are only a few examples of how a knowledge of the electronic structure of atoms may be used to understand and correlate a large amount of chemical information. It should be remembered, however, that chemistry is a study of very complex interactions and the few simple concepts advanced here cannot begin to account for the incredible variety of phenomena actually observed. Our discussion has been based solely on energy, and energy alone never determines completely the course of a reaction on a macroscopic level, i.e., when many molecules undergo the reaction.
There are statistical factors, determined by the changes in the number of molecules and in the molecular dimensions, which must also be considered. Even so, the energy effect can often be overriding. In the long form of the periodic table, families are labelled by both a number and by the letter A or B. Thus there is a IA family and a IB family. It will be noted that the elements in a B family all occur in the series of transition elements in which the d orbitals are being filled. In the A families, however, the d orbitals are either absent or are present as closed inner shells. For example, consider the electronic configurations of K (IA) and Cu (IB):

K: [Ar]4s¹    Cu: [Ar]3d¹⁰4s¹

Note that the most stable configuration for Cu is not [Ar]3d⁹4s² as expected. By transferring one of the 4s electrons to the 3d vacancy, the d subshell is filled and the electronic energy is lowered. The electron density distribution of the Cu atom is therefore a spherical one. Both K and Cu have one outer electron with a spherical charge distribution. They should have some properties in common, such as a tendency to lose one electron and form a positive ion. For this reason both families are labelled I. However, the shell underlying the outer electron in the K atom possesses a rare gas configuration, while in the Cu atom it is a set of filled d orbitals. This difference in electronic structure is sufficient to cause considerable differences in their chemistry, hence the further labels A and B. A rare gas configuration is always one of great stability, particularly when it occurs in a positive ion. (Recall that I2 = 47.3 ev for sodium.) The species K+2 is never observed in solution chemistry, and could be produced in the gas phase only by an expenditure of energy far in excess of that observed in ordinary chemical reactions. The Cu+ ion, on the other hand, very readily loses a second electron to form the Cu+2 ion. Indeed, Cu+2 is the more common ionic form of copper. Thus the d¹⁰ closed shell structure is more easily broken than a rare gas configuration, giving to Cu a variable valency of one or two.

5.E: Exercises

1. Estimate the wavelength of the photon which is emitted when a 3p electron falls to a vacancy in the 1s orbital in a chlorine ion. The energies of the 1s and 3p orbitals in chlorine are -2.854 × 10³ ev and -13.77 ev respectively.

2. In his investigation of the X-ray spectra of the elements, Moseley found that the frequencies of the lines of shortest wavelength could be expressed as a function of the atomic number Z as √ν = a(Z - s), where a and s are constants. Account for the general form of the relationship. What is the significance of the factor s?

3. (a) On the basis of your knowledge of the electronic structure of the elements arrange the following substances in the order of their increasing ability to act as oxidizing agents: He+, Cl, P, Na, F-. (b) Arrange the following substances in the order of their increasing ability to act as reducing agents: Cs, Li, C, S, Cl.

4. Rationalize the following observations on the basis of the electronic structures of the halogen atoms and their ions. Iodide ions can be oxidized to elemental iodine by molecular oxygen, 4HI + O2 → 2I2 + 2H2O, but the corresponding reaction does not occur with HCl: HCl + O2 → no reaction.

5. Account for the fact that the second ionization potential for oxygen is greater than that for fluorine. (I2 for O is 35.15 ev and I2 for F is 34.98 ev.)

6. Which atom or ion in the following pairs has the highest ionization potential? (a) N, P (b) Mg, Sr (c) Ge, As (d) Ar, K+

7.
Of the following substances: F2, F-, I2, I- (a) Which is the best oxidizing agent? (b) Which is the best reducing agent? (c) Write one chemical equation for a reaction which will illustrate your answers to parts (a) and (b).
With our knowledge of the electronic structure of atoms we are now in a position to understand the existence of molecules. Clearly, the force which binds the atoms together to form a molecule will, as in the atomic case, be the electrostatic force of attraction between the nuclei and electrons. In a molecule, however, we encounter a force of repulsion between the nuclei in addition to that between the electrons. To account for the existence of molecules we must account for the predominance of the attractive interactions. We shall give general arguments to show that this is so, first in terms of the energy of a molecule, relative to the energies of the constituent atoms, and secondly, in terms of the forces acting on the nuclei in a molecule. In order to determine what attractive and repulsive interactions are possible in a molecule, consider an instantaneous configuration of the nuclei and electrons in a hydrogen molecule (Figure \(1\)). When the two atoms are initially far apart (the distance R is very large) the only potential interactions are the attraction of nucleus A for electron number (1) and the attraction of nucleus B for electron number (2). When R is comparable to the diameter of an atom (A and B are close enough to form a molecule) then new interactions appear. Nucleus A will now attract electron (2) as well as (1) and similarly nucleus B will attract electron (1) as well as (2). These interactions are indicated by the four solid lines in Figure \(1\) connecting pairs of particles which attract one another. Figure \(1\): One possible set of the instantaneous relative positions of the electrons and nuclei in an \(H_2\), molecule. The dashed lines represent the repulsive interactions between like charges and the solid lines indicate the attractive interactions between opposite charges. The number of attractive interactions has been doubled from what it was when the atoms were far apart. However, the reduction in R introduces two repulsive interactions as well, indicated by the dashed lines joining charges of like sign in Figure \(1\). The two electrons now repel one another as do the two nuclei. If the two atoms are to remain together to form a molecule, the attractive interactions must exceed the repulsive ones. It is clear from Figure \(1\) that the new attractive interactions, nucleus A attracting electron (2) and nucleus B attracting electron (1), will be large only if there is a high probability of both electrons being found in the region between the nuclei. When in this region, both electrons are strongly attracted by both nuclei, rather than by just one nucleus as is the case when the atoms are far apart. When the average potential energy is calculated by quantum mechanics, the attractive interactions are found to predominate over the repulsive ones because quantum mechanics does indeed predict a high probability for each electron being in the region between the nuclei. This general consideration of the energy demonstrates that electron density must be concentrated between the nuclei if a stable molecule is to be formed, for only in this way can the attractive interaction be maximized. We can be much more specific in our analysis of this problem if we discuss a molecule from the point of view of the forces acting on the nuclei. However, we must first state some general conclusions of quantum mechanics regarding molecular systems. In the atomic case we could fix the position of the nucleus in space and consider only the motion of the electrons relative to the nucleus. 
In molecules, the nuclei may also change positions relative to one another. This complication can, however, be removed. The nuclei are very massive compared to the electrons and their average velocities are consequently much smaller than those possessed by the electrons. In a classical picture of the molecule we would see a slow, lumbering motion of the nuclei accompanied by a very rapid motion of the electrons. The physical implication of this large disparity in the two sets of velocities is that the electrons can immediately adjust to any change in the position of the nuclei. The positions of the nuclei determine the potential field in which the electrons move. However, as the nuclei change their positions and hence the potential field, the electrons can immediately adjust to the new positions. Thus the motion of the electrons is determined by where the nuclei are but not by how fast the nuclei are moving. We may, because of this fact, discuss the motions of the electrons and of the nuclei separately. For a given distance between the nuclei we obtain the energy, the wave function and the electron density distribution of the electrons, the nuclei being held in fixed positions. Then the distance between the nuclei is changed to a new value, and the calculation of the energy, wave function and electron density distribution of the electrons is performed again. This process, repeated for every possible internuclear distance, allows us to determine how the energy of the electrons changes as the distance between the nuclei is changed. More important for our present discussion, we may concern ourselves only with the motion of the electrons and hold the nuclei stationary at some particular value for the internuclear distance R. The energy of the electrons in a molecule is quantized, as it is in atoms. When the nuclei are held stationary at some fixed value of R, there are a number of allowed energy levels for the electrons. There are, however, no simple expressions for the energy levels of a molecule in terms of a set of quantum numbers such as we found for the hydrogen atom. In any event we shall be concerned here only with the first or lowest of the energy levels for a molecule. As in the case of atoms, there is a wave function which governs the motion of all the electrons for each of the allowed energy levels. Each wave function again determines the manner in which the electronic charge is distributed in three-dimensional space. The electron density distribution for a molecule is best illustrated by means of a contour map, of the kind introduced earlier in the discussion of the hydrogen atom. Figure \(2\) shows a contour map of the charge distribution for the lowest, or most stable state of the hydrogen molecule. Imagine a hydrogen molecule to be cut in half by a plane which contains the nuclei. The amount of electronic charge at every point in space is determined, and all points having the same value for the electron density in the plane are joined by a line, a contour line. Also shown is a profile of the contour map along the internuclear axis. A profile illustrates the variation in the charge density along a single axis. Figure \(2\): A contour map of the electron density distribution (or the molecular charge distribution) for H2 in a plane containing the nuclei. Also shown is a profile of the density distribution along the internuclear axis.
The internuclear separation is 1.4 au. The values of the contours increase in magnitude from the outermost one inwards towards the nuclei. The values of the contours in this and all succeeding diagrams are given in au; 1 au = e/a₀³ = 6.749 e/Å³. The electron density contours of highest value are in the region of each nucleus. Thus the negative charge is concentrated in the region of the nuclei in a molecule as well as in an atom. The next highest concentration of negative charge is found in the region between the nuclei. It is the negative charge in this region which is strongly attracted by both nuclei and which results in the attractive interactions exceeding the repulsive ones in the formation of the molecule from the atoms. Most of the density contours envelop both nuclei. The density distributions of the two atoms have been merged together in the formation of the molecule. The same contour map would be obtained for any plane through the nuclei. Therefore, in three-dimensional space the hydrogen molecule would appear to be an ellipsoidal distribution of negative charge. Most of the electronic charge is concentrated along the internuclear axis and becomes progressively more diffuse at large distances from the centre of the molecule. Recall that the addition of all the charge in every small volume element of space equals the total number of electrons, which in the case of the hydrogen molecule is two. The volume of space enclosed by the outer contour in Figure \(2\) contains over 99% of the total electronic charge of the hydrogen molecule.
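The statement that summing the charge in every small volume element returns the number of electrons can be checked numerically. The sketch below uses a model density, the simple sum of two undistorted hydrogen 1s densities placed at the H2 bond length; this is not the true molecular density, but it integrates to two electrons just as the true density does:

```python
import numpy as np

# Model density: two undistorted 1s densities, rho_1s(r) = exp(-2r)/pi (au),
# centred on nuclei at x = -0.7 and +0.7 au (R = 1.4 au).
n, L = 101, 12.0
x = np.linspace(-L/2, L/2, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

rA = np.sqrt((X + 0.7)**2 + Y**2 + Z**2)
rB = np.sqrt((X - 0.7)**2 + Y**2 + Z**2)
rho = (np.exp(-2.0 * rA) + np.exp(-2.0 * rB)) / np.pi

dV = (x[1] - x[0])**3        # volume of one small element of space
total = rho.sum() * dV       # add up the charge in every volume element
print(f"total charge = {total:.2f} electrons")   # ~2.0, within the grid error
```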
An Electrostatic Interpretation of the Chemical Bond

In the light of the above discussion of a molecular electron density distribution, we may regard a molecule as two or more nuclei imbedded in a rigid three-dimensional distribution of negative charge. There is a theorem of quantum mechanics which allows us to make direct use of this picture of a molecule. This theorem states that the force acting on a nucleus in a molecule may be determined by the methods of classical electrostatics. The nuclei in a molecule repel one another, since they are of like charge. This force of repulsion, if unbalanced, would push the nuclei apart and the molecule would separate into atoms. In a stable molecule, however, the nuclear force of repulsion is balanced by an attractive force exerted by the negatively-charged electron density distribution. The usefulness of this approach lies in the fact that we may account for and discuss the stability of molecules in terms of the classical concept of a balance between the electrostatic forces of attraction and repulsion. We can illustrate this method and arrive at some results of a general nature by considering in detail the forces acting on the nuclei in the hydrogen molecule. The charge on a hydrogen nucleus is +e and the force of repulsion acting on either nucleus is F = e²/R², where R is the internuclear distance. This force obviously acts to push the two nuclei apart (Fig. 6-3). Fig. 6-3. The forces acting on the nuclei in H2. Only one outer contour of the electron density distribution is shown. Over 99% of the total electronic charge is contained within this contour. The attractive force which balances this force of repulsion and draws the nuclei together is exerted by the negatively-charged electron density distribution. The density distribution is treated as a rigid distribution of negative charge in space. Each small element of this charge distribution exerts a force on the nuclei, illustrated in Fig. 6-3 for one such small charge point. The forces it exerts on the nuclei are labelled FeA and FeB. The total amount of negative charge in the electron density distribution must correspond to some integral number of electrons. However, the amount of negative charge in each small region of space will in general correspond to some fraction of one electronic charge. The electronic force of attraction FeA or FeB may be resolved into two components, one along the bond and one perpendicular to it. The density distribution is symmetric with respect to the internuclear axis, i.e., for every charge point above the axis there must, by symmetry, be another point of equal charge at the corresponding place beneath the internuclear axis. The symmetrically related charge point will exert the same force along the bond, but the component perpendicular to the bond will be in the opposite direction. Thus the perpendicular forces of attraction exerted on the nuclei are zero (Fig. 6-4) and we may confine our attention to the components of the attractive force along the bond. Fig. 6-4. The two components of force along the bond add together while the two perpendicular components cancel at both A and B. It is obvious that all of the charge elements which are in the general region between the two nuclei will exert forces which draw the two nuclei together. The force exerted by the density in this region acts in opposition to the force of nuclear repulsion and binds the two nuclei together.
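The symmetry argument just given is easily verified numerically. In the sketch below a charge element and its mirror image across the internuclear axis act on nucleus A; the positions and the amount of charge are arbitrary illustrative values:

```python
import numpy as np

def force_on_nucleus(nucleus, element_pos, q):
    # Coulomb force (au) on a +1 nucleus from a charge element q; for a
    # negative q the force on the nucleus points toward the element.
    rvec = nucleus - element_pos
    return q * rvec / np.linalg.norm(rvec)**3

A = np.array([0.0, 0.0])          # nucleus A; the bond lies along the x axis
element = np.array([0.7, 0.5])    # a charge element above the axis
mirror = np.array([0.7, -0.5])    # its symmetric partner below the axis
q = -0.1                          # a fraction of one electronic charge

total = force_on_nucleus(A, element, q) + force_on_nucleus(A, mirror, q)
print(total)   # ~[0.22, 0.0]: the perpendicular components cancel exactly,
               # leaving only an attraction along the bond.
```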
It is also clear that a charge element in the region behind either nucleus will exert a force which tends to increase the distance between the nuclei (Fig. 6-5). Fig. 6-5. The component of FeA along the bond is greater than the corresponding component of FeB. Since the charge element is closer to nucleus A than it is to nucleus B, the component of the force on A along the bond will be greater than the component of the force on B along the bond. Thus the effect of density in this region will be to separate the molecule into atoms. Since the charge density in one region draws the nuclei together and in another draws them apart, there must also be a line on which the density exerts the same force along the bond on both nuclei, and thus neither increases nor decreases R. The charge element shown in Fig. 6-6 exerts the same force along the bond on both A and B even though it is closer to B than it is to A. Although the total force FeB is much larger than FeA, FeB is directed almost perpendicular to the bond axis and thus its component along the bond is quite small and equal to the component of FeA along the bond. Charge density on either of the two curves shown in Fig. 6-6 exerts equal forces on both of the nuclei along the bond, and such charge density will not tend to increase or decrease the distance between the nuclei. Thus these two curves (surfaces in three dimensions) divide the space in a molecule into a binding region and an antibinding region. Any charge density between the two boundary curves, in the binding region, draws the two nuclei together while any charge density in the hatched region behind either curve, the antibinding region, exerts unequal forces on the nuclei and separates the molecule into atoms. Fig. 6-6. The boundary curves which separate the binding from the antibinding regions in a homonuclear diatomic molecule. A chemical bond is thus the result of the accumulation of negative charge density in the region between the nuclei to an extent sufficient to balance the nuclear forces of repulsion. This corresponds to a state of electrostatic equilibrium as the net force acting on each nucleus is zero for this one particular value of the internuclear distance. If the distance between the nuclei is increased from the equilibrium value, the nuclear force of repulsion is decreased. At the same time the force of attraction exerted by the electron density distribution is increased as the binding region is increased in size. Thus when R is increased from its equilibrium value there are net forces of attraction acting on the nuclei which pull the two nuclei together again. A definite force would have to be applied to overcome the force of attraction exerted by the electron density distribution and separate the molecule into atoms. Similarly, if the value of R is decreased from its equilibrium value, the force of nuclear repulsion is increased over its equilibrium value. At the same time, the attractive force exerted by the electron density is decreased, because the binding region is decreased in size. In this case there will be a net force of repulsion pushing the two nuclei apart and back to their equilibrium separation. There is thus one value of R for which the forces on the nuclei are zero and the whole molecule is in a state of electrostatic equilibrium. The division of the space around a molecule into a binding and an antibinding region shows where charge density must be concentrated in order to obtain a stable chemical bond.
The next question which must be answered is, "How much charge must be placed in the binding region to achieve electrostatic equilibrium?" For example, we might consider the possibility of forming a molecule by bringing together two atoms, each with its own atomic distribution of charge, and simply allow the two atomic charge distributions to overlap without deforming in any way. This would result in the accumulation of approximately twice as much charge density in the binding region as in either of the antibinding regions behind the nuclei. Would this doubling of the charge density in the region between the nuclei be sufficient to balance the nuclear forces of repulsion? Let us answer this question for the simple case of two hydrogen atoms forming molecular hydrogen, but again the result will be general. The most stable state of the hydrogen molecule is obtained when two hydrogen atoms, each in its most stable atomic state, approach one another. The ground state of a hydrogen atom is obtained when the electron is in the 1s orbital. The density distribution around each hydrogen nucleus is the spherical one which we discussed previously in some detail. We shall first calculate the force on one of the hydrogen nuclei resulting when the two atoms are very far apart. The situation is represented in Fig. 6-7 where each atomic charge distribution is represented by a single outer circular contour. This contour is to define a sphere which in three dimensions contains essentially all of the electronic charge of each atom. Fig. 6-7. The forces acting on nucleus A at a large internuclear distance R. Consider the forces exerted on nucleus A. The force of nuclear repulsion is just F = e²/R². The atomic charge density centred on nucleus A exerts no net force on this nucleus as it pulls equally in every direction because of its spherical symmetry. There is, however, a net force of attraction due to the single electronic charge dispersed in the atomic distribution of B. A theorem of classical electrostatics states that the force exerted by a spherical charge distribution on a point charge lying outside of the charge distribution is equal to the force which would be obtained if all the charge in the distribution were concentrated at its centre. Nucleus A is a point charge which lies outside of the spherical charge distribution centred on B. Thus the force exerted on nucleus A by this charge distribution is just F = e²/R², as the total amount of charge contained in the distribution is that of one electron. The total force acting on nucleus A is F = e²/R² - e²/R² = 0. A zero force is the expected answer when the two atoms are very far apart. Can we again balance the forces for a value of R which is of the order of magnitude of an atomic diameter, i.e., typical of the values of R found in molecules? At this value of R, each nucleus will have penetrated the charge density surrounding the other nucleus. Recall that in this calculation we insist upon the atomic charge densities remaining spherical and our molecular charge density is obtained by allowing the two rigid atomic charge distributions to overlap one another (Fig. 6-8). Fig. 6-8. The forces exerted on nucleus A for the overlap of rigid atomic charge distributions. Only the charge density on B which is contained in the sphere of radius R exerts a force on nucleus A. The force of nuclear repulsion in this case is still given by F = e²/R², where the value of R is much less than in the previous calculation.
Since the charge distribution on A is still spherical in shape, it exerts no net force on nucleus A. The force exerted on nucleus A by the charge density on B can again be calculated by the theorem referred to previously. However, nucleus A no longer lies outside of all the charge density on B. The value of R is significantly less than the radius of the charge distribution on B. All the charge density on B which lies within the sphere defined by the bond length R again exerts a force on nucleus A, equal to that obtained if all this density were situated at the B nucleus. The theorem referred to previously shows that the density on B which lies outside of this sphere defined by R exerts no net force on nucleus A. Since R is less than the diameter of the charge distribution, the amount of negative charge contained in a sphere of radius R will be less than that of one electron. The observed value of R for the hydrogen molecule is 1.4 au and reference to the data given previously for the 1s orbital density for the hydrogen atom shows that a sphere of radius 1.4 au would contain approximately one half of an electronic charge. The electrostatic force of attraction exerted on nucleus A is, therefore, only F = (e/2)e/R² = e²/2R². The net force on nucleus A is F = e²/R² - e²/2R² = +e²/2R². There is a net force of repulsion exerted on nucleus A under these conditions. If R were decreased still further, nucleus A would penetrate the charge density around B to an even greater extent and "see" even more of the nuclear charge on B. The force on the nuclei will thus be repulsive for all finite values of R. This is an important result as it shows that the density distribution in a molecule cannot be considered as the simple sum of the two atomic charge densities. The overlap of rigid atomic densities does not place sufficient charge density in the binding region to overcome the nuclear force of repulsion. We conclude that the original atomic charge distributions must be distorted in the formation of a molecule, and the distortion is such that charge density is concentrated in the binding region between the nuclei. A quantum mechanical calculation predicts this very result. The calculation shows that there is a continuous distortion of the original atomic density distributions, a distortion which increases as the internuclear distance decreases. This is illustrated in Fig. 6-9 for the approach of two hydrogen atoms to form the hydrogen molecule. Fig. 6-9. A series of electron density contour maps illustrating the changes in the electron charge distribution during the approach of two H atoms to form H2. The internuclear distance R in units of au is indicated beneath each map. At R = 8 the atomic densities appear to be undistorted. At R = 6 the densities are distorted but still essentially separate. As R is further decreased, charge density contours of increasing value envelop both nuclei, and charge density is accumulated at the positions of the nuclei and in the internuclear region. The values of the contours in au increase from the outermost to the innermost one in the order 2 × 10⁻ⁿ, 4 × 10⁻ⁿ, 8 × 10⁻ⁿ, for decreasing values of n beginning with n = 3. Thus the outermost contour in each case is 0.002 au and the value of the innermost contour for R = 1.0 au, for example, is 0.4. The changes in the original atomic density distributions caused by the formation of the chemical bond may be isolated and studied directly by the construction of a density difference distribution.
Such a distribution is obtained by subtracting the density obtained from the overlap of the undistorted atomic densities separated by a distance R, from the molecular charge distribution evaluated at the same value of R. Wherever this density difference is positive in value it means that the electron density in the molecule is greater than that obtained from the simple overlap of the original atomic densities. Where the density difference is negative, it means that there is less density at this point in space in the molecule than in the distribution obtained from the overlap of the original atomic distributions. Such a density difference map thus provides a detailed picture of the net reorganization of the charge density of the separated atoms accompanying the formation of a molecule. We have just proven that the density distribution resulting from the overlap of the undistorted atomic densities does not place sufficient charge density in the binding region to balance the forces of nuclear repulsion. The regions of charge increase in the density difference maps are, therefore, the regions to which charge is transferred relative to the separated atoms to obtain a state of electrostatic equilibrium and hence a chemical bond. From this point of view a density difference map provides us with a picture of the "bond density." Figure 6-10 shows a set of density difference or bond density maps for the approach of two hydrogen atoms to form the hydrogen molecule. At very large separations, for example at 8 au, the density distribution on each atom is polarized in the direction of the approaching atom. Charge density has been transferred from the antibinding region behind each nucleus to the binding region immediately in front of each nucleus. Thus even at large separations the atomic density distributions are no longer spherical. We noted in our discussion of the approach of two rigid hydrogen atoms that a spherical charge distribution does not exert a net force on the nucleus on which it is centred. Each polarized atomic charge distribution does, however, exert an attractive force on its nucleus. The polarized densities place more charge on the binding side of each nucleus than on the antibinding side. These long-range attractive forces, called van der Waals' or dispersion forces, could be aptly described as a "bootstrap effect," since each nucleus is pulled by its own charge density. All pairs of neutral molecules undergo this type of polarization as a result of the long-range interactions between them, and there are attractive forces operative between all pairs of molecules out to very large distances. Although the long-range polarizations and the resulting forces of attraction are very weak, they are of extreme importance. They are commonly referred to as van der Waals forces and are solely responsible for the binding observed in certain kinds of solids, solid helium for example. This will be discussed more fully later. Fig. 6-10. Density difference distribution (molecular minus atomic) for the approach of two H atoms. These maps indicate the changes in the atomic densities caused by the formation of a molecule. The solid contours represent an increase in charge density over the atomic case, while the dashed contours denote a decrease in the charge density relative to the atomic densities. Since the changes in the charge density are much smaller for large values of R than for small values of R, two different scales are used.
The solid and dashed contours increase (+) or decrease (-) respectively from the zero contour in the order ±2 × 10⁻ⁿ, ±4 × 10⁻ⁿ, ±8 × 10⁻ⁿ au for decreasing values of n. The maps for R = 8.0, 6.0 and 4.0 au begin with n = 5 and those for R = 2.0, 1.4 and 1.0 au begin with n = 3. The zero contour and the value of the innermost positive contour are indicated in each case. Note the continuous increase in charge density in the region between the nuclei as R is decreased. At 6.0 au the density increase in the binding region is common to both nuclei, and for distances less than 6.0 au the system can no longer be described as two polarized hydrogen atoms. The distortions of the original densities caused by the transfer of charge to the binding region are so great that the individual character of the atomic densities is no longer discernible. The magnitude of the attractive force (which is negative in sign) exerted on the nuclei by this accumulation of charge density in the binding region increases rapidly for distances less than 4.5 au (Fig. 6-11). Fig. 6-11. The force on an H nucleus in H2 as a function of the internuclear separation. An attractive force is negative in sign; a repulsive one, positive. The attractive force reaches a maximum at 2.1 au. The density difference diagrams indicate that for distances as small as 2.0 au, the density increase is confined to the region between the nuclei. For separations smaller than 2.0 au an increasing amount of charge density is transferred to the antibinding regions behind each nucleus. Because of this, the attractive force on the nuclei decreases rapidly with a further decrease in R until at R = 1.4 au, the net attractive force exerted by the charge density just balances the force of nuclear repulsion (Fig. 6-11). A state of electrostatic equilibrium is reached, and a chemical bond is formed. A further decrease in R leads to a force of repulsion. More charge density is transferred to the antibinding regions, and the force exerted by this charge density, acting in concert with the increase in the force of nuclear repulsion, outweighs the attractive force exerted by the charge density in the binding region. The same changes in density are depicted in Fig. 6-12, which is a series of profiles along the internuclear axis of the density difference maps shown in Fig. 6-10. The profile maps illustrate in a striking fashion the build-up of charge density in the region between the nuclei. Fig. 6-12. Profiles of the density difference along the internuclear axis for H2 at a series of internuclear separations. One nucleus is held fixed, and the other is moved relative to it. The separations are indicated on the diagram. The formation of any chemical bond is qualitatively similar to the changes in the charge distribution and in the forces exerted on the nuclei as found for the hydrogen molecule. We must now inquire into the conditions which determine whether or not sufficient charge density can be accumulated in the binding region to yield a stable molecule. Since not all atoms form chemical bonds, clearly such conditions must exist.
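The numbers in the rigid-overlap argument of this section can be checked directly. The fraction of a 1s electron's charge lying within a sphere of radius R has a closed form (a standard integral of the 1s density, quoted here without derivation), and from it follow both the "one half of an electronic charge" figure and the net repulsive force found for the overlapped rigid atoms:

```python
import math

def enclosed_1s_charge(R, Z=1.0):
    # Charge of a 1s electron inside a sphere of radius R (au):
    # P(R) = 1 - exp(-2ZR) * (1 + 2ZR + 2(ZR)^2)
    x = 2.0 * Z * R
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

R = 1.4   # the observed internuclear distance of H2 in au
q = enclosed_1s_charge(R)
print(f"1s charge within {R} au: {q:.2f} electrons")    # ~0.53, about one half

# Force balance on nucleus A for rigidly overlapped densities (au):
# repulsion e^2/R^2 from nucleus B against attraction q*e^2/R^2.
net = (1.0 - q) / R**2
print(f"net force on A: +{net:.2f} au (repulsive)")
```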
The Pauli exclusion principle plays as important a role in the understanding of the electronic structure of molecules as it does in the case of atoms. The end result of the Pauli principle is to limit the amount of electronic charge density that can be placed at any one point in space. For example, the Pauli principle prevents the 1s orbital in an atom from containing more than two electrons. Since the 1s orbital places most of its charge density in regions close to the nucleus, the Pauli principle, by limiting the occupation of the 1s orbital, limits the amount of density close to the nucleus. Any remaining electrons must be placed in orbitals which concentrate their charge density further from the nucleus. In an earlier discussion we pointed out that the reason the electron doesn't fall onto the nucleus is because it must possess kinetic energy if Heisenberg's uncertainty principle is not to be violated. This is one reason why matter doesn't collapse. The Pauli principle is equally important in this regard. The electron density of the outer electrons in an atom cannot collapse and move closer to the nucleus since it can do so only if the electrons occupy an orbital with a lower n value. If, however, the inner orbital contains two electrons, then the Pauli principle states that the collapse cannot occur. We must be careful in our interpretation of this aspect of the Pauli principle. The density from a 2s orbital has a small but finite probability of being found well within the density of the 1s orbital. Do not interpret the Pauli principle as implying that the density from an occupied orbital has a clearly defined and distinct region in real space all to its own. This is not the case. The operation of the Pauli principle is more subtle than this. In some simple cases, such as the ones we wish to discuss below, the limiting effect of the Pauli principle on the density distribution can, however, be calculated and pictured in a very direct manner. The Pauli principle demands that when two electrons are placed in the same orbital their spins must be paired. What restriction is placed on the spins of the electrons during the formation of a molecule, when two orbitals, each on a different atom, overlap one another? For example, consider the approach of two hydrogen atoms to form a hydrogen molecule. Consider atom A to have the configuration $1s^1 \alpha$ and atom B the configuration $1s^1 \beta$. Even when the atoms approach very close to one another the Pauli principle would be satisfied as the spins of the two electrons are opposed. This is the situation we have tacitly assumed in our previous discussion of the hydrogen molecule. However, what would occur if two hydrogen atoms approached one another and both had the same configuration and spin, say $1s^1 \alpha$? When two atoms are relatively close together the electrons become indistinguishable. It is no longer possible to say which electron is associated with which atom as both electrons move in the vicinity of both nuclei. Indeed this is the effect which gives rise to the chemical bond. In so far as we can still regard the region around each atom to be governed by its own atomic orbital, distorted as it may be, two electrons with the same spin will not be able to concentrate their density in the binding region. This region is common to the orbitals on both atoms, and since the electrons possess the same spin they cannot both be there simultaneously. 
In the region of greatest overlap of the orbitals, the binding region, the presence of one electron will tend to exclude the presence of the other if their spins are parallel. Instead of density accumulating in the binding region as two atoms approach, electron density is removed from this region and placed in the antibinding region behind each nucleus where the overlap of the orbitals is much smaller. Thus the approach of two hydrogen atoms with parallel spins does not result in the formation of a stable molecule. This repulsive state of the hydrogen molecule, in which both electrons have the same spin and atomic orbital quantum numbers, can be detected spectroscopically. We can now give the general requirements for the formation of a chemical bond. Electron density must be accumulated in the region between the nuclei to an extent greater than that obtained by allowing the original atomic density distributions to overlap. In general, the increase in charge density necessary to balance the nuclear force of repulsion requires the presence of two electrons. There are a few examples of "one-electron" bonds. An example is the $H_2^+$ molecule-ion. This ion contains only one electron and is indeed a stable entity in the gas phase. It cannot, however, be isolated or stored in any way. In the atomic orbital approximation we picture the bond as resulting from the overlap of two distorted atomic orbitals, one centred on each nucleus. When the orbitals overlap, both electrons may move in the field of either nuclear charge as the electrons may now exchange orbitals. Finally, the pair of electrons must possess opposed spins. When their spins are parallel, the charge density from each electron is accumulated in the antibinding rather than in the binding region. We shall now apply these principles to a number of examples and in doing so obtain a quantum mechanical definition of valency.
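The key step in this argument, that two electrons of parallel spin cannot simultaneously concentrate in the region common to the two orbitals, can be made concrete with the simplest two-electron spatial functions built from 1s orbitals on the two nuclei: a symmetric combination for paired spins and an antisymmetric one for parallel spins. The sketch below (an illustrative construction, not a full calculation) evaluates both with the two electrons at the same point; the parallel-spin amplitude vanishes identically there:

```python
import numpy as np

# 1s orbitals on nuclei at x = -0.7 and +0.7 au (R = 1.4 au), evaluated
# along the bond axis for simplicity.
def phi(x, centre):
    return np.exp(-np.abs(x - centre)) / np.sqrt(np.pi)

x = np.linspace(-4.0, 4.0, 401)
pA, pB = phi(x, -0.7), phi(x, +0.7)

# Two-electron spatial functions with both electrons at the same point x
# (unnormalized): symmetric for paired spins, antisymmetric for parallel spins.
sym = pA * pB + pB * pA
anti = pA * pB - pB * pA

i = np.argmin(np.abs(x))   # the bond midpoint, x = 0
print(f"paired spins, both electrons at the midpoint:   {sym[i]:.4f}")
print(f"parallel spins, both electrons at the midpoint: {anti[i]:.4f}")  # exactly 0
```

The antisymmetric amplitude is zero wherever the two electrons coincide, so electrons of parallel spin avoid one another and cannot pile up together in the binding region.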
Helium atoms in their ground state do not form a stable diatomic molecule. In fact, helium does not combine with any neutral atom. Its valency, that is, its ability to form chemical bonds with other atoms, is zero. The electronic configuration of the helium atom is $1s^2(\uparrow\downarrow)$, a closed shell configuration. When two helium atoms are in contact, each electron on one atom encounters an electron on the other atom with a parallel spin. Because of the Pauli principle, neither electron on either atom can concentrate its density in the region they have in common, the region between the nuclei. Instead, the density is transferred to the antibinding regions behind each nucleus where the overlap of the two atomic density distributions is least. This is the same effect noted earlier for the approach of two hydrogen atoms with parallel spins. Comparison of a series of density difference maps for the approach of two helium atoms (Figure $1$) with those given previously for H2 (Fig. 6-10) reveals that one set is the opposite of the other. The regions of charge build-up and charge depletion are reversed in the two cases. The density difference diagrams are obtained by subtracting the distribution obtained by the overlap of the atomic charge densities from the molecular charge distribution. The former distribution, it will be recalled, does not place sufficient charge density in the binding region to balance the force of nuclear repulsion. Thus it is clear from Figure $1$ that He2 will be unstable because the molecular distribution places less charge density in the binding region than does the one obtained from the overlap of the atomic densities. The charge density in He2 is transferred to the antibinding region where it exerts a force which, acting in the same direction as the nuclear force of repulsion, pulls the two nuclei apart. Repulsive forces will dominate in He2 and no stable molecule is possible. Figure $1$: Contour maps of the total molecular charge density and of the density difference for two He atoms at internuclear separations of 4.0 au and 2.0 au. The scale of contour values for the total density maps is the same as that used in Fig. 6-9 for H2. The outermost contour is 0.002 au and the innermost one is 2.0 au for R = 4.0 and R = 2.0 au. The scale used in the density difference plots is the same as that given in Fig. 6-10 beginning with n = 5 for R = 4.0 au and with n = 3 for R = 2.0 au. Note the increase in the amount of charge density transferred from the binding to the antibinding regions as the separation between the two atoms is decreased. A comparison of the density difference profiles for He2 (Figure $2$) and H2 (Fig. 6-12) provides a striking contrast of the difference between the charge redistributions which result in the formation of unstable and stable molecules. Figure $2$: Profiles of the density difference maps along the internuclear axis for the approach of two He atoms. One nucleus is held stationary. This figure should be contrasted with Fig. 6-12, the corresponding one for H2. The force on a helium nucleus in He2 as a function of the internuclear separation is repulsive for the range of R values indicated in Figure $3$. Unlike the force curve for H2, there is no deep minimum in the curve which represents a range of $R$ values for which the force is attractive. The force curve for $He_2$ does cross the $R$ axis at approximately 6 au (not indicated in Figure $3$) and becomes very slightly attractive for values of R greater than this value.
This weak attractive force has its origin in the long-range mutual polarization of the atomic density distributions which was discussed in detail for the approach of two hydrogen atoms. For large internuclear separations, where there is no significant overlap of the atomic orbitals and hence no need to invoke the Pauli exclusion principle, the atomic charge distributions of two approaching helium atoms are polarized in the same way as are the charge distributions for two approaching hydrogen atoms, and the force is attractive. At smaller internuclear separations, however, where the overlap of the orbitals is significant and the Pauli exclusion principle is operative, the direction of the charge transfer in He2 is reversed and the force is rapidly transformed into one of repulsion. Were it not for the weak long-range attractive forces - the van der Waals forces - gaseous helium could not be condensed into a liquid or a solid phase. As it is, the force of attraction between two helium atoms is so weak that at a temperature of only 4.2 K they have sufficient kinetic energy to overcome the forces of attraction between them and escape into the gas phase. If it was not necessary to satisfy the demands of the Pauli principle, electron density would accumulate in the binding region of He2, even for small values of R, as this region is of lower potential energy than is the antibinding region. However, when each electron detects another of like spin (when the orbitals overlap) they cannot concentrate their charge density in the region they have in common, the binding region. That it is indeed the Pauli principle which prevents the formation of He2 is evident from the fact that He2+, which possesses one less electron, is stable! When a helium atom approaches a helium ion, an orbital vacancy is present and the density from one pair of electrons (those with opposed spins) can be concentrated in the binding region. All the rare gas atoms possess a closed shell structure and this accounts for their inertness in chemical reactions. No homonuclear diatomic molecules are found in this group of elements; all occur naturally in the atomic state. Compounds of Kr and Xe have been formed with fluorine, for the same reason that the formation of He2+ is possible. Fluorine has a very high electron affinity and a single vacancy in its outer quantum shell. Thus one of the electrons in the closed shell structure of Xe can be pulled into the orbital vacancy of the fluorine atom and density concentrated in the region between the nuclei. Only an atom with a very high affinity for electrons will bond with a rare gas atom. The only species found with sufficient electron affinity to bind a helium atom (which holds its electrons the most tightly of all atoms) is a He+ ion. Since the helium atom has the highest ionization potential of all the elements, the singly-charged He+ ion possesses the highest electron affinity of all the neutral or singly-charged atoms.

Second Row Elements

Let us now attempt to explain the variation in the valency exhibited by the elements in the second row of the periodic table. The hydrides of these elements are LiH, BeH2, BH3, CH4, NH3, OH2 and FH. The valency of the hydrogen atom is unity as it possesses one unpaired electron and one orbital vacancy. It can form one electron pair bond.
Therefore, the valencies exhibited in the above hydrides must be 1, 2, 3, 4, 3, 2, 1, as this is the number of hydrogens bound in each case. (By the way, while the BH3 molecule is predicted to be stable with respect to the separated boron and hydrogen atoms it cannot be isolated as such, but only in the form of its dimer, B2H6.) We will consider HF first.

Fluorine: The electron configuration of F is $1s^2 2s^2 2p^5(\uparrow)$. Only one of the electrons in the 2p orbitals is unpaired. The 2p atomic orbital with the vacancy may overlap with the 1s atomic orbital of hydrogen, and if the spin of the electron on H is paired with the spin of the electron on F, all the requirements for the formation of a stable chemical bond will be met. The valency of F will be one as it possesses one unpaired electron and can form one electron pair bond.

Oxygen: The electronic configuration of oxygen is $1s^2 2s^2 2p^4(\uparrow\uparrow)$. Oxygen has two unpaired electrons, both of which may pair up with an electron on a hydrogen atom. The valency of oxygen should be two as is observed. It is obvious that all the requirements for a chemical bond can be met for every unpaired electron present in the outer or "valency" shell of an atom. Thus valency may be defined as being equal to the number of unpaired electrons present in the atom.

Nitrogen: The configuration of nitrogen is $1s^2 2s^2 2p^3(\uparrow\uparrow\uparrow)$, and its hydride should be NH3 as is indeed the case.

Carbon: Since the most stable electron configuration of carbon is $1s^2 2s^2 2p^2(\uparrow\uparrow)$ we predict its valency to be two. The molecule CH2 (called methylene) is indeed known. However, CH2 is very reactive and its products are not stable until four chemical bonds are formed to carbon as in the case of CH4. Four, not two, is the common valency for carbon. How can our theory account for this fact? The energy of a 2p orbital is not much greater than that of a 2s orbital. Because of this, relatively little energy is required to promote an electron from the 2s orbital on carbon to the vacant 2p orbital: $C: 1s^2 2s^2 2p^2(\uparrow\uparrow) \rightarrow C^*: 1s^2 2s^1(\uparrow) 2p^3(\uparrow\uparrow\uparrow) \nonumber$ Carbon in the promoted state possesses four unpaired electrons and can now combine with four hydrogen atoms. Every bond to a hydrogen atom releases a large amount of energy. The energy required to unpair the 2s electrons and promote one of them to a 2p orbital is more than compensated for by the fact that two new C-H bonds are obtained.

Boron: Boron has the electronic configuration $1s^2 2s^2 2p^1(\uparrow)$. Its valency should be one and BH is known to exist. However, again through the mechanism of promotion, the valency of boron can be increased to three: $B^*: 1s^2 2s^1(\uparrow) 2p^2(\uparrow\uparrow) \nonumber$ We might wonder why, with a 2p orbital still vacant, one of the 1s electrons is not promoted, giving boron a valency of five. This does not happen because of the large difference in energy between the 1s and 2p orbitals as shown in the orbital energy level diagram (Fig. 5-3).

Beryllium: Beryllium has the configuration $1s^2 2s^2$ and should exhibit a valency of zero. The outer electron configuration of Be is similar to that of He, a closed shell of s electrons. Indeed, the molecule Be2 exists only as a weakly bound van der Waals molecule. However, Be differs from He in that there are vacant orbitals available in its valency shell. The observed valency of two in the molecule BeH2 can be explained by a promotion to the configuration $1s^2 2s^1(\uparrow) 2p^1(\uparrow)$.

Lithium: Lithium, with the configuration $1s^2 2s^1(\uparrow)$, should exhibit only a valency of one.
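The promotion argument for carbon can be made roughly quantitative. The numbers below are illustrative values commonly quoted for the carbon 2s to 2p promotion energy and for a typical C-H bond energy; they are assumptions for the sketch, not values given in this text:

$\Delta E \;\approx\; \underbrace{E_{2s\rightarrow 2p}}_{\approx\,+4\ \text{eV}} \;-\; \underbrace{2\,D(\text{C-H})}_{\approx\,2\,(4\ \text{eV})} \;\approx\; -4\ \text{eV} \nonumber$

On this estimate the two extra C-H bonds gained on promotion more than repay the energy spent unpairing and promoting the 2s electron, which is why four, not two, is the common valency of carbon.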
Lewis Structures

The concept of an electron pair bond is not restricted to bonds with hydrogen. The only requirements are an unpaired electron on each atom (which is another way of saying there is an orbital vacancy on each atom) with their spins opposed. Thus two fluorine atoms may combine to form the fluorine molecule F2 through the overlap of the singly-occupied 2p orbital on one atom with a similar orbital on the other. This will result in F2 being described as F-F where the single line denotes that one pair of electrons forms the bond between the two atoms. Similarly, the three singly-occupied 2p orbitals on one nitrogen atom may overlap with those on another to form the N2 molecule. Since three pairs of electrons are shared between the nuclei in this case, we represent the molecule by the symbol N≡N. The electrons in the valence shell of an atom which are not involved in the formation of a chemical bond (as they are already paired in an orbital on the atom) may also be indicated and the resulting symbols are called Lewis structures. Thus the three pairs of valency electrons on each F ($2s^2 2p^4$) not involved in the bonding are often indicated by dots. (Lithium has only one outer electron and it is shared in the bond.) In compounds with nitrogen we may indicate the 2s pair of electrons in the same way. Recall that each line, since it denotes a bond in these diagrams, represents a pair of electrons shared between the two atoms joined by the line. If we add up the lines joined to each atom, multiply by two (to obtain the number of electrons) and add to this the number of dots which represents the remaining valence electrons, the number eight is obtained in many cases, particularly for the second-row elements (n = 2 valence orbitals). For example, for N in N≡N the count is 2 × 3 + 2 = 8, and for each F in F-F it is 2 × 1 + 6 = 8. This so-called octet rule results from many elements having four outer orbitals ($ns\,np_x\,np_y\,np_z$) which together may contain a total of eight electrons. Not all eight electrons belong to either atom in general as the electrons in a bond are shared (not necessarily equally as we shall see) between two atoms. Each bond contains two electrons with paired spins. Thus the orbital from one atom used to form the bond is, in a sense, filled as both spin possibilities are now accounted for. The presence of an unshared pair of electrons in the valency shell of an atom can lead to the formation of another chemical bond. For example, the unshared pair of electrons in the 2s orbital on nitrogen in ammonia may attract and bind to the molecule another proton: $\ce{H_3N + H^+ \rightarrow NH_4^+} \nonumber$ A similar reaction occurs for the water molecule, which possesses two unshared pairs of electrons: $\ce{H_2O + H^+ \rightarrow H_3O^+} \nonumber$ We must modify our previous rule regarding the requirements for the formation of an electron pair bond. Rather than both orbitals being half-filled, an orbital on one of the atoms may be filled if the orbital on the other atom is completely vacant. Molecules possessing an unshared pair of electrons, which may be used to bond another atom, are called Lewis bases. Only elements in groups V, VI and VII will exhibit this property. The elements in groups I to IV do not possess unshared pairs. Instead, the chemistry of the elements in groups II and III is largely characterized by the orbital vacancies which they possess in their valency shell. The compound boron trifluoride represents the pairing of the three valence electrons of boron with the unpaired electrons on three F atoms.
The boron is considered to be in the promoted configuration $1s^2 2s^1(\uparrow) 2p^2(\uparrow\uparrow)$, and BF3 is represented with three B-F electron pair bonds. A 2p orbital on boron is vacant. It is not surprising to find that BF3 may form another bond with a species which has an unshared pair of electrons, i.e., a Lewis base. For example, $\ce{BF_3 + : NH_3 \rightarrow H_3N-BF_3} \nonumber$ Since BF3 accepts the electron pair it is termed a Lewis acid. Further examples from group III are $\ce{BF_3 + F^- \rightarrow BF_4^-} \nonumber$ $\ce{AlCl_3 + Cl^- \rightarrow AlCl_4^-} \nonumber$ $\ce{BH_3 + H^- \rightarrow BH_4^-} \nonumber$ and from group II (whose members have two orbital vacancies): $\ce{BeCl_2 + 2Cl^- \rightarrow BeCl_4^{-2}} \nonumber$
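A quick electron-pair count makes the group II example concrete (the bookkeeping below is our illustration, not a calculation from the text): each added anion donates one unshared pair into one orbital vacancy on the central atom.

$\text{pairs about Be in } \ce{BeCl_4^{-2}}: \quad \underbrace{2}_{\ce{Be-Cl}\ \text{bonds in } \ce{BeCl_2}} + \underbrace{2}_{\text{pairs donated by } 2\ce{Cl^-}} = 4 \nonumber$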
The theory of valency which we have been developing is known as valence bond theory. One further feature of this theory is that it may be used to predict (or in some cases, rationalize) the observed geometries of molecules. By the geometry of a molecule we mean the relative arrangement of the nuclei in three-dimensional space. For example, assuming the two O-H bonds in the water molecule to be similar and hence of the same length, the angle formed by the two O-H bonds (the HOH angle) could conceivably possess any value from 180° to some relatively small value. All we demand of our simple theory is that it correctly predict whether the water molecule is linear (bond angle = 180°) or bent (bond angle less than 180°). Or as another example, it should predict whether the ammonia molecule is planar (a) or pyramidal (b). The observed geometry of a molecule is that which makes the energy of the system a minimum. Thus those geometries will be favoured which (i) concentrate the largest amount of charge density in the binding region and thus give the strongest individual bonds, and (ii) keep the nuclei as far apart as possible (consistent with (i)), and hence reduce the nuclear repulsions. Consider again the two possibilities for the water molecule. It is clear that the linear form (a) will have a smaller energy of nuclear repulsion from the hydrogens than will the bent form (b). If the amount of electron density which could be concentrated in the regions between the nuclei in each \(O-H\) bond (i.e., the strength of each \(O-H\) bond) was independent of the bond angle, then clearly the linear form of the water molecule would be the most stable. This would be the situation if all the atomic orbitals which describe the motions of the electrons were rigidly spherical and centred on the nuclei. But this is not the case. As was stressed earlier in our discussion of atomic orbitals, the motion of electrons possessing angular momentum because they occupy orbitals with $l \neq 0$ is concentrated along certain axes or planes in space. In particular the three p orbitals have their maxima along the three perpendicular axes in space. The valence bond theory of the water molecule describes the two O-H bonds as resulting from the overlap of the H 1s orbitals with the two half-filled 2p orbitals of the oxygen atom. Since the two 2p orbitals are at right angles to one another, valence bond theory predicts a bent geometry for the water molecule with a bond angle of 90°. The overlap of the orbitals is shown schematically in Fig. 6-16. Fig. 6-16. A pictorial representation of the overlap of \(2p_x\) and \(2p_y\) orbitals on the oxygen with the 1s orbitals of two H atoms. The actual bond angle in the water molecule is 104.5°. The opening of the angle to a value greater than the predicted one of 90° can be accounted for in terms of a lessening of the repulsion between the hydrogen nuclei. The assumption we have made is that the maximum amount of electron density will be transferred to the binding region, and hence yield the strongest possible bond, when the hydrogen and oxygen nuclei lie on the axis which is defined by the direction of the 2p orbital. For a given internuclear separation, this will result in the maximum overlap of the orbitals. Because an orbital with $l \neq 0$ restricts the motion of the electron to certain preferred directions in space, bond angles and molecular geometry will be determined, to a first rough approximation, by the inter-orbital angles.
In the valence bond description of ammonia, each N-H bond results from the overlap of an H 1s orbital with a 2p orbital on N. All three 2p orbitals on N have a vacancy and thus three bonds should be formed and each HNH angle should be 90°, i.e., ammonia should be a pyramidal and not a planar molecule. The NH3 molecule is indeed pyramidal and the observed HNH angle is 107.3°. The actual bond angle is again larger than that predicted by the theory. It might be argued that since the N atom possesses a half-filled 2p shell (its electronic configuration is $1s^2 2s^2 2p^3$), its density distribution is spherical and hence the N atom should not exhibit any directional preferences in its bonding. This argument is incorrect for the following reason. The density distribution is obtained by squaring the wave function. The wave function which properly describes the system must be obtained first, then squared to obtain the density. The wave function which describes the ammonia molecule consists of products of hydrogen 1s orbital functions with the nitrogen 2p orbital functions. (A product of orbitals is the mathematical statement of the phrase "overlap of orbitals" in valence bond theory.) The density distribution obtained by squaring the product of two orbitals is not the same as that obtained from the sum of the squares of the individual orbitals. Thus in the valence bond theory of molecular electronic structure the directional properties of the valence orbitals play an important role. By assuming that the most stable bond results when the two nuclei joined by the bond lie along the axis defined by the orbitals and considering the bonds to a first approximation to be independent of one another, we can predict the geometries of molecules.

Hybridization

The BeH2 molecule is linear and the two Be-H bonds are equivalent. The valence bond description of BeH2 accounted for the two-fold valency of Be (which has the ground state configuration $1s^2 2s^2$) by assuming the bonding to occur with a promoted configuration of Be: $Be^*: 1s^2 2s^1(\uparrow) 2p^1(\uparrow) \nonumber$ At first sight this suggests that the two Be-H bonds should be dissimilar and not necessarily 180° apart because one bond results from the overlap with a 2s orbital and the other with a 2p orbital on Be. We can, however, account for the equivalence of the two Be-H bonds and for the linearity of the molecule within the framework of the theory. There is no a priori reason for assuming that the one bond will result from the overlap with a 2s orbital and the other from the overlap with a 2p orbital. In the most general treatment of the problem, each bond to a hydrogen could involve both the 2s and the 2p orbitals. That is, we can "mix" or hybridize the valence orbitals on the Be atom. In fact, by taking each valence orbital on Be to be an equal part of 2s and 2p, we can obtain two equivalent hybrid orbitals which are directed 180° apart. The two hybrid orbitals will form two equivalent bonds with the H 1s orbitals whose total bond strength will be larger than that obtained by forming one bond with a 2p and the other with a 2s orbital on Be. The construction of the hybrid orbitals is accomplished by taking the sum and the difference of the 2s orbital and one of the 2p orbitals, say the 2px orbital, both orbitals being centred on the Be nucleus. This is illustrated in Fig. 6-17. Fig. 6-17. The construction of sp hybrid orbitals from a 2s and 2p atomic orbital on Be.
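Written out in normalized form, the sum and difference just described are (a standard construction; the phase and axis conventions here are our choice of illustration)

$h_{\pm} = \frac{1}{\sqrt{2}}\left(2s \pm 2p_x\right) \nonumber$

The sp2 and sp3 schemes introduced in the following paragraphs are built in exactly the same way. One conventional choice is

$sp^2: \quad h_1 = \frac{1}{\sqrt{3}}\,2s + \sqrt{\frac{2}{3}}\,2p_x, \qquad h_{2,3} = \frac{1}{\sqrt{3}}\,2s - \frac{1}{\sqrt{6}}\,2p_x \pm \frac{1}{\sqrt{2}}\,2p_y \nonumber$

$sp^3: \quad h = \frac{1}{2}\left(2s \pm 2p_x \pm 2p_y \pm 2p_z\right) \quad \text{(the four sign choices with an even number of minus signs)} \nonumber$

Each set is orthonormal, and the directions of maximum amplitude are 180°, 120° and 109°28' apart respectively.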
Since the 2p orbital has a node at the nucleus the 2p orbital wave function has opposite signs on each side of the nodal plane indicated in the figure. Both orbitals are positive on one side and the orbital functions add at each point in space. On the other side of the nodal plane, the orbitals are of opposite sign and their sum yields the difference between the two functions at every point in space. The addition of a 2s and 2px orbital concentrates the wave function and hence the charge density on the positive side of the x-axis. Obviously the combination (2s - 2px) will be similar in appearance but concentrated on the negative side of the x-axis. These combinations of the 2s and 2p orbitals yield two hybrid orbitals which are equivalent and oppositely directed. Since each of the hybrid orbitals is constructed from equal amounts of the 2s and 2p orbitals they are termed "sp hybrid" orbitals. The linear nature of BeH2 can be explained if it is assumed (as is true) that the best overlap with both H 1s orbitals will result when the valence orbitals on the Be are sp hybrids (Fig. 6-18). Fig. 6-18. A pictorial representation of the overlap of two sp hybrid orbitals on Be with H 1s orbitals to form BeH2. The three B-H bonds in BH3 are equivalent and the molecule is planar and symmetrical: The promoted configuration of boron with three unpaired electrons is $B^*: 1s^2 2s^1(\uparrow) 2p_x^1(\uparrow) 2p_y^1(\uparrow) \nonumber$ In this case we must construct three equivalent hybrid orbitals from the three atomic orbitals 2s, 2px, and 2py, on boron. The 2px and 2py orbitals define a plane in space and the three hybrid orbitals constructed from them will be projected in this same plane. Since the hybrid orbitals are to be equivalent, each must contain one part 2s and two parts 2p. They will be called "sp2" hybrid orbitals. The three orbital combinations which have the above properties are indeed directed at 120° to one another. The planar, symmetrical geometry of BH3 can be accounted for in terms of sp2 hybridization of the orbitals on boron. The four C-H bonds in CH4 are equivalent and the molecule possesses a tetrahedral geometry: The promoted configuration of carbon with four unpaired spins is $C^*: 1s^2 2s^1(\uparrow) 2p_x^1(\uparrow) 2p_y^1(\uparrow) 2p_z^1(\uparrow) \nonumber$ Four equivalent hybrid orbitals can be constructed from the 2s and the three 2p orbitals on carbon. Each orbital will contain one part 2s and three parts 2p, and the hybrids are termed sp3 hybrids. Only one such set of orbitals is possible and the angle between the orbitals is 109°28', the tetrahedral angle. The tetrahedral geometry of CH4 is described as resulting from the sp3 hybridization of the valence orbitals on the carbon atom. The three hybridization schemes which have been presented are sufficient to account for the geometries of all the compounds formed from elements of the first two rows of the periodic table (those with n = 1 or n = 2 valence orbitals). Consider, for example, the unsaturated hydrocarbons. The ethylene molecule, C2H4, possesses the planar geometry indicated here, where the bond angles around each carbon nucleus are approximately 120°. Three bonds in a plane with 120° bond angles suggests sp2 hybridization for the carbon atoms. Two of the sp2 hybrids from each carbon may overlap with H 1s orbitals forming the four C-H bonds. The remaining sp2 hybrids on each carbon may overlap with one another to form a bond between the carbons: The sp2 hybrids are denoted by arrows in the above diagram to indicate their directional dependence.
If these bonds are formed in the x-y plane, using the 2px and 2py orbitals of the carbon atoms, a singly-occupied 2pz orbital will remain on each carbon. They will be directed in a plane perpendicular to the plane of the molecule: The overlap of the two 2pz orbitals above and below the plane of the molecule will result in a second electron pair bond between the carbon atoms. The bonds formed in the plane of the molecule are called σ (sigma) bonds, while those perpendicular to the plane are called π (pi) bonds. Since the overlap of the orbitals to form a π bond is not as great as the overlap obtained from σ bonds (which are directed along the bond axis), π bonds in general are weaker than σ bonds. The Lewis structure for the C2H4 molecule is written with a double line joining the carbons, H2C=CH2, indicating that there is a double bond between the carbon atoms, i.e., the density from two pairs of electrons binds the carbon atoms. The energy required to break the carbon-carbon double bond in ethylene is indeed greater than that required to break the carbon-carbon single bond in the ethane molecule, H3C-CH3. Furthermore, the chemical behaviour of ethylene is readily accounted for in terms of a model which places a large concentration of negative charge density in the region between the carbon atoms. The physical evidence thus verifies the valence bond description of the bonding between the carbons in ethylene. Our final example concerns another important possible hybridization for the carbon atom. The acetylene molecule, C2H2, is a linear symmetric molecule: H-C-C-H. The linear structure suggests we try sp hybridization for each carbon, one hybrid overlapping with a hydrogen and the other with a similar hybrid from the second carbon atom. This will produce a linear σ bond framework for the molecule: The sp hybrids are denoted by arrows in the above diagram. If the sp hybrids are assumed to be directed along the x-axis, then the remaining singly-occupied 2py and 2pz orbitals on each carbon may form π bonds. The 2py orbitals on each carbon may overlap to form a π bond whose density is concentrated in the x-y plane, with a node in the x-z plane. Similarly the 2pz orbitals may form a second π bond concentrated in the x-z plane, with a node in the x-y plane. Acetylene will possess a triple bond, one involving three pairs of electrons, between the carbon atoms. The Lewis structure is drawn as H-C≡C-H, where it is understood that one of the C-C bonds is a σ bond while the other two are of the π type. The chemistry and properties of acetylene are consistent with a model which places a large amount of charge density in the region of the C-C bond. Hybridization schemes involving d orbitals are also possible. They are important for elements in the third and succeeding rows of the periodic table. Although the elements of the third row do not possess occupied 3d orbitals in their ground electronic configurations, the 3d orbitals of phosphorus, sulphur and chlorine are low enough in energy that promoted configurations involving the 3d orbitals may be reasonably postulated to account for the binding in compounds of these elements. One consequence of the "availability" of the 3d orbitals is that there are many exceptions to the octet rule in compounds of the third row elements. For example, in PCl5 there are ten valence electrons involved in the bonding of the five chlorines to the phosphorus.
A hybridization scheme based on the promotion of one 3s electron of phosphorus to a 3d orbital to yield five "dsp3" hybrid orbitals correctly predicts the trigonal bipyramidal structure of PCl5: As a final example consider the molecule SF6 in which all six S-F bonds are equivalent and the geometry is that of a regular octahedron (one F atom centred in each face of a regular cube): This geometry and number of bonds can be accounted for by assuming the promotion of one 3s and one 3p electron to two of the 3d orbitals on the sulphur atom. This hybridization yields six equivalent "d2sp3" hybrid bonds which are indeed directed as indicated in the structure for SF6.
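Written out explicitly (following the promotion arguments above; the orbital-by-orbital bookkeeping is our illustration), the promoted configurations assumed for these two molecules are

$P^*: 1s^2 2s^2 2p^6\, 3s^1(\uparrow)\, 3p^3(\uparrow\uparrow\uparrow)\, 3d^1(\uparrow) \quad \text{(five unpaired electrons, five dsp}^3 \text{ bonds in } PCl_5\text{)} \nonumber$

$S^*: 1s^2 2s^2 2p^6\, 3s^1(\uparrow)\, 3p^3(\uparrow\uparrow\uparrow)\, 3d^2(\uparrow\uparrow) \quad \text{(six unpaired electrons, six d}^2\text{sp}^3 \text{ bonds in } SF_6\text{)} \nonumber$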
Literature references

1. The electrostatic method used in this book for the interpretation of chemical binding is based on the Hellmann-Feynman theorem. The theorem was proposed independently by both H. Hellmann and R. P. Feynman. Feynman's account of the theorem anticipates many of the applications to chemistry including the electrostatic interpretation of van der Waals forces. R. P. Feynman, Phys. Rev. 56, 340 (1939).
2. The wave functions used in the calculation of the density distributions for H2 were determined by G. Das and A. C. Wahl, J. Chem. Phys. 44, 87 (1966). These wave functions include configuration interaction and hence provide suitable descriptions for the H2 systems for large values of the internuclear separation. The wave functions for He2 are from N. R. Kestner, J. Chem. Phys. 48, 252 (1968).

6.E: Exercises

Q6.1 The element beryllium has an atomic number of four. Rationalize the following observations in terms of the valence bond theory of molecular structure.
1. $Be_2$ does not exist except as a weakly bound van der Waals molecule.
2. $Be$ can exhibit a valency of two in combination with a halogen, for example, BeF2.
3. $BeF_2$ can undergo a further reaction with an excess of F- ions to give $BeF_2 + 2F^- \rightarrow BeF_4^{-2} \nonumber$ In addition to explaining why this reaction occurs, predict the geometrical shape of the $BeF_4^{-2}$ ion.

Q6.2
1. Use valence bond theory to predict the molecular formula and geometrical structure of the most stable electrically neutral hydride of phosphorus.
2. The hydride of phosphorus can react with HI to form an ionic crystal which contains the I- ion. Explain why this reaction can occur and give the formula and geometrical structure of the positive ion which contains phosphorus and hydrogen.

Q6.3 The atomic number of silicon is fourteen. What is the electronic structure of Si in its ground state? Predict the molecular formula and geometrical shape of the most stable silicon-hydrogen compound using valence bond theory.

Q6.4 The element vanadium (Z = 23) forms the compound VCl4. Would a beam of VCl4 molecules be deflected in an inhomogeneous magnetic field? Explain the reasoning behind your answer.

Q6.5 The CH2 molecule may exist in two distinct forms. In the one case all the electrons are paired and the molecule does not possess a magnetic moment. In the second form the molecule exhibits a magnetism which can be shown to arise from the presence of two unpaired electrons. One of the forms of CH2 is linear. Use valence bond theory to describe the electronic structures and geometries of both forms of CH2. Which of the two will possess the lower electronic energy?

Q6.6
1. Write Lewis structures (structures in which each electron pair bond is designated by a line joining the nuclei and dots are used to designate unshared electrons in the valency shell) for H2O, CH4, CO2, HF, NH4+, H2O2.
2. Give a discussion of the bonding of the molecules listed in part (a) in terms of valence bond theory. Denote the use of hybrid orbitals by arrows and a label as to whether they are sp, sp2, or sp3 hybrids. You should predict that H2O and H2O2 are bent molecules, that CH4 and NH4+ are tetrahedral and that CO2 is linear.

Q6.7 Sometimes it is possible to write a number of equivalent Lewis structures for a single species. For example, the bonding in the NO3- ion can be described by: Each atom in these structures is surrounded by four pairs of electrons, the first cardinal rule in writing a Lewis structure.
On the average, one electron of the pair in each bond belongs to one atom. Since there are only four bonds to N and no unshared valence pairs, N on the average has but four valence electrons in these three structures. The N atom initially possessed five electrons, and a plus sign is placed at N to denote that it has, on the average, one less electron in the NO3- ion. The two singly-bonded oxygens have on the average seven electrons in each structure, one more than a neutral oxygen atom. This is denoted by a minus sign. The doubly-bonded O has on the average six electrons. (In general, the formal charge on an atom is its number of valence electrons, less its unshared electrons, less half of its shared electrons; for N here, 5 - 0 - 8/2 = +1.) Notice that the sum of these formal charges is minus one, the correct charge for the NO3- ion. The structure of the NO3- ion is in reality planar and symmetrical, all of the NO bonds being of equal length. This could be indicated in a single Lewis structure by indicating that the final pair of electrons in the π bond between N and one O is actually spread over all three NO bonds simultaneously: When one or more pairs of electrons are delocalized over more than two atoms, the Lewis method or the valence bond method of writing valence structures with bonds between pairs of atoms runs into difficulties. The compromise structure above correctly indicates that each NO bond in NO3- is stronger and shorter than a N-O single bond, but not as strong as an N=O double bond.
1. Use the valence bond theory to account for the bonding and planar structure of the NO3- ion.
2. Write Lewis structures and the corresponding valence bond structures for the CO3-2 ion and SO2. Are there full S=O or C=O double bonds in either of these molecules?

Q6.8 Draw valence bond structures for benzene, C6H6. This molecule has a planar hexagonal geometry: Are there any delocalized electron pairs in the benzene molecule?

Q6.9 The carbon monoxide molecule forms stable complexes with many transition metal elements. Examples are (from the first transition metal series) Cr(CO)6, (CO)5Mn-Mn(CO)5, Fe(CO)5, Ni(CO)4. In each case the bond is formed between the metal and the unshared pair of electrons on the carbon end of carbon monoxide. The metal atom in these complexes obviously violates the octet rule, but can the electronic structures for the carbon monoxide complexes be rationalized on the basis of an expanded valency shell for the metal?
Ionic and Covalent Binding

The distribution of negative charge in a molecule will exhibit varying degrees of asymmetry depending on the relative abilities of the nuclei in the molecule to attract and bind the electronic charge density. The symmetry or asymmetry of the charge distribution plays a fundamental role in determining the chemical properties of the molecule and consequently this property of the charge distribution is used as a basis for the classification of chemical bonds. We can envisage two extremes for the distribution of the valence charge density. An example of one of the extremes is obtained when a bond is formed between two identical atoms. The charge density of the valence electrons will in this case necessarily be delocalized equally over corresponding regions of each nucleus since both nuclei will attract the electron density with equal force. Such an equal sharing of the charge density is an example of covalent binding and is exemplified by the molecular charge distribution of N2 (Fig. 7-1). Fig. 7-1. Contour maps of the molecular charge distributions of N2 and LiF at their equilibrium internuclear separations. The space to the right of the dashed line through the Li nucleus denotes the region of nonbonded charge density. The values of the contours increase from the outermost one to the innermost one. The specific values of the contours appearing in this and the following contour maps can be obtained by referring to the Table of Contour Values. The charge distribution of LiF (Fig. 7-1) provides an example of the other extreme, termed ionic binding, obtained when a bond is formed between two atoms with very different affinities for the electronic charge density. The very unsymmetrical distribution of charge in LiF is not simply a reflection of the fluorine atom possessing seven valence electrons to lithium's one. Instead the formation of the bond in LiF corresponds to the nearly complete transfer of the valence charge density of lithium to fluorine resulting in a molecule best described as Li+F-. We need only recall that initially a lithium atom is considerably larger than a fluorine atom to realize that a considerable transfer of charge has occurred in the formation of the LiF molecule. In N2 the valence charge density is delocalized over the whole molecule. The electronic charge is heavily concentrated in the internuclear region where it forms a bridge of high density between the two nuclei. Only the density of the 1s inner shell or "core" orbitals is strongly localized in the regions of the nuclei. In contrast to this, practically all of the charge density in the lithium fluoride molecule is localized in nearly spherical contours on the two nuclei in the manner characteristic of two separate closed-shell distributions. Only contours of very small value encompass both nuclei and the bridge of charge density joining the two spherical distributions is very low in value, being approximately one tenth of the value observed for N2. We may determine the total amount of electronic charge in an arbitrary region of space by summing the density in each small volume element within the region of interest (i.e., integrating the charge distribution over some particular volume of space): $N(\Omega) = \int_{\Omega} \rho \, d\tau \nonumber$ A useful measure of the extent of charge transfer occurring on bond formation is obtained by determining the nonbonded charge on each nucleus.
The nonbonded charge for a nucleus in a molecule is defined as occupying the volume of space on the nonbonded side of a plane perpendicular to the bond axis and through the nucleus in question. This is indicated by a dashed line for the Li nucleus in LiF (Fig. 7-1). The nonbonded charge density of the lithium nucleus in LiF is 1.07 e- compared to 1.5 e- in the Li atom (i.e., one half of the total number of electronic charges in a Li atom). The nonbonded charge density of the F nucleus, on the other hand, is increased above its atomic value, being 5.0 e- as compared to 4.5 e- in the fluorine atom. Since the distributions centred on the nuclei in LiF are nearly spherical, the total charge contained in each distribution will be approximately twice the value of the corresponding nonbonded charge. (Doubling the values quoted above gives roughly 2 × 1.07 ≈ 2 e- on lithium and 2 × 5.0 = 10 e- on fluorine, just the electron counts expected for Li+ and F-.) The total distribution of charge in LiF is, therefore, consistent with the ionic model Li+F- corresponding to the transfer of the single 2s valence electron of lithium to the fluorine. The radii of the nonbonded charge distributions (the distance measured along the bond axis from a nucleus to the outermost contour of its nonbonded density) are also consistent with the ionic model. The radius of the nonbonded charge density on lithium is 1.7 ao, a value almost identical to the radius of a Li+ ion (1.8 ao) but much less than the radius of a Li atom (3.3 ao). The value of 3.0 ao for the radius of the nonbonded charge density on fluorine is consistent with that of a fluoride ion distribution as it represents a slight increase over the atomic value for fluorine of 2.8 ao. By way of comparison, the nonbonded charge on the nitrogen nuclei in N2 is increased above the atomic value of 3.5 e- to 3.68 e-. This transfer of charge density to the nonbonded regions on bond formation is somewhat surprising when it is recalled that charge density must be accumulated in the binding region, the region between the nuclei, to achieve a chemical bond. We require a more detailed picture of the charge reorganization accompanying the formation of a bond to understand fully the distribution of the charge density in a molecule. In addition, many chemical bonds possess charge distributions which lie between the extreme of the perfect sharing of the valence charge density found in N2 and its complete localization on one nucleus in LiF. We consider next a method of classification of bonding in molecules, a classification which provides at the same time an understanding of the mechanism of the two binding situations in terms of the forces exerted on the nuclei.
To make a quantitative assessment of the type of binding present in a particular molecule it is necessary to have a measure of the extent of charge transfer present in the molecule relative to the charge distributions of the separated atoms. This information is contained in the density difference or bond density distribution, the distribution obtained by subtracting the atomic densities from the molecular charge distribution. Such a distribution provides a detailed measure of the net reorganization of the charge densities of the separated atoms accompanying the formation of the molecule. The density distribution resulting from the overlap of the undistorted atomic densities (the distribution which is subtracted from the molecular distribution) does not place sufficient charge density in the binding region to balance the nuclear forces of repulsion. The regions of charge increase in a bond density map are, therefore, the regions to which charge is transferred relative to the separated atoms to obtain a state of electrostatic equilibrium and hence a chemical bond. Thus we may use the location of this charge increase relative to the positions of the nuclei to characterize the bond and to obtain an explanation for its electrostatic stability. In covalent binding we shall find that the forces binding the nuclei are exerted by an increase in the charge density which is shared mutually between them. In ionic binding both nuclei are bound by a charge increase which is localized in the region of a single nucleus.

Covalent Binding

The bond density map of the nitrogen molecule (Fig. 7-2) is illustrative of the characteristics of covalent binding. Fig. 7-2. Bond density (or density difference) maps and their profiles along the internuclear axis for N2 and LiF. The solid and dashed lines represent an increase and a decrease respectively in the molecular charge density relative to the overlapped atomic distributions. These maps contrast the two possible extremes of the manner in which the original atomic charge densities may be redistributed to obtain a chemical bond. The principal feature of this map is a large accumulation of charge density in the binding region, corresponding in this case to a total increase of one quarter of an electronic charge. As noted in the study of the total charge distribution, charge density is also transferred to the antibinding regions of the nuclei but the amount transferred to either region, 0.13 e-, is less than is accumulated in the binding region. The charge density of the original atoms is decreased in regions perpendicular to the bond at the positions of the nuclei. In three dimensions, the regions of charge deficit correspond to two continuous rings or roughly doughnut-shaped regions encircling the bond axis. The increase in charge density in the antibinding regions and the removal of charge density from the immediate regions of the nuclei result in an increase in the forces of repulsion exerted on the nuclei, forces resulting from the close approach of the two atoms and from the partial overlap of their density distributions. The repulsive forces are obviously balanced by the forces exerted on the nuclei by the shared increase in charge density located in the binding region. A bond is classified as covalent when the bond density distribution indicates that the charge increase responsible for the binding of the nuclei is shared by both nuclei.
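In symbols (our notation, summarizing the verbal definition above), the bond density map plots

$\Delta\rho(\mathbf{r}) \;=\; \rho_{\text{molecule}}(\mathbf{r}) \;-\; \sum_{A}\rho_{A}(\mathbf{r}) \nonumber$

where the sum runs over the undistorted atomic densities placed at the positions the atoms occupy in the molecule; the solid contours mark regions where $\Delta\rho > 0$ and the dashed contours regions where $\Delta\rho < 0$.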
It is not necessary for covalent binding that the density increase in the binding region be shared equally as in the completely symmetrical case of N2. We shall encounter heteronuclear molecules (molecules with different nuclei) in which the net force binding the nuclei is exerted by a density increase which, while shared, is not shared equally between the two nuclei. The pattern of charge rearrangement in the bond density map for N2 is, aside from the accumulation of charge density in the binding region, quite distinct from that found for H2 (Fig. 6-10), another but simpler example of covalent binding. The pattern observed for nitrogen, a charge increase concentrated along the bond axis in both the binding and antibinding regions and a removal of charge density from a region perpendicular to the axis, is characteristic of atoms which in the orbital model of bonding employ p atomic orbitals in forming the bond. Since a p orbital concentrates charge density on opposite sides of a nucleus, the large buildup of charge density in the antibinding regions is to be expected. In the orbital theory of the hydrogen molecule, the bond is the result of the overlap of s orbitals. The bond density map in this case is characterized by a simple transfer of charge from the antibinding to the binding region since s orbitals do not possess the strong directional or nodal properties of p orbitals. Further examples of both types of charge rearrangements or polarizations will be illustrated below.

Ionic Binding

We shall preface our discussion of the bond density map for ionic binding with a calculation of the change in energy associated with the formation of the bond in LiF. While the calculation will be relatively crude and based on a very simple model, it will illustrate that the complete transfer of valence charge density from one atom to another in forming a molecule is in certain cases energetically possible. Lithium possesses the electronic configuration $1s^2 2s^1$ and is from group IA of the periodic table. It possesses a very low ionization potential and an electron affinity which is zero for all practical purposes. Fluorine is from group VIIA and has a configuration $1s^2 2s^2 2p^5$. It possesses a high ionization potential and a high electron affinity. The following calculation will illustrate that the 2s electron of Li could conceivably be transferred completely to the 2p shell of orbitals on F in which there is a single vacancy. This would result in the formation of a molecule best described as Li+F-, and in the electron configurations $1s^2$ for Li+ and $1s^2 2s^2 2p^6$ for F-. We can calculate the energy change for the reaction $Li + F \rightarrow Li^+F^- \nonumber$ in stages. The energy which must be supplied to ionize the 2s electron on the Li atom is: $Li \rightarrow Li^+ + e^- \label{1}$ with $E_1=I_1=5.4\,eV$. The energy released when an electron combines with an F atom is given by the electron affinity of F: $F + e^- \rightarrow F^- \label{2}$ with $E_2=-3.7\,eV$. The two ions are oppositely charged and will attract one another. The energy released when the two ions approach one another from infinity to form the LiF molecule is easily estimated. To a first approximation it is simply $-e^2/R$ where $R$ is the final equilibrium distance between the two ions in the molecule: $\underbrace{Li^+ + F^-}_{\text{large distance apart}} \rightarrow \underbrace{Li^+F^-}_{\text{at R}} \nonumber$ with $E_3 \approx -4 \,eV$.
The sum of these three reactions gives $Li + F \rightarrow Li^+F^- \nonumber$ and the overall change in energy is the sum of the three energy changes, $\Delta E = E_1 + E_2 + E_3 \approx (5.4 - 3.7 - 4)\ \text{eV} \approx -2\ \text{eV}$. The species Li+F- possesses a lower energy than the separated Li and F atoms and should therefore be a stable molecule. The transfer of charge density from lithium to fluorine is very evident in the bond density map for LiF (Fig. 7-2). The charge density of the 2s electron on the lithium atom is a very diffuse distribution and consequently the negative contours in the bond density map denoting its removal are of large spatial extent but small in magnitude. The principal charge increase is nearly symmetrically arranged about the fluorine nucleus and is completely encompassed by a single nodal surface. The total charge increase on fluorine amounts to approximately one electronic charge. The charge increase in the antibinding region of the lithium nucleus corresponds to only 0.01 electronic charges. (The great disparity in the magnitudes of the charge increases on lithium and fluorine is most strikingly portrayed in the profile of the bond density map, also shown in Fig. 7-2.) It is equally important to realize that the charge increase on lithium occurs within the region of the 1s inner shell or core density and not in the region of the valence density. Thus the slight charge increase on lithium is primarily a result of a polarization of its core density and not of an accumulation of valence density. The pattern of charge increase and charge removal in the region of the fluorine, while similar to that for a nitrogen nucleus in N2, is much more symmetrical, and the charge density corresponds very closely to the distribution obtained from a single 2pσ electron. Thus the simple orbital model of the bond in LiF which describes the bond as a transfer of the 2s electron on lithium to the single 2pσ vacancy on fluorine is a remarkably good one. While the bond density map for LiF substantiates the concept of charge transfer and the formation of Li+ and F- ions it also indicates that the charge distributions of both ions are polarized. The charge increase in the binding region of fluorine exceeds slightly that in its antibinding region (the F- ion is polarized towards the Li+ ion) and the charge distribution of the Li+ ion is polarized away from the fluorine. A consideration of the forces exerted on the nuclei in this case will demonstrate that these polarizations are a necessary requirement for the attainment of electrostatic equilibrium in the face of a complete charge transfer from lithium to fluorine. Consider first the forces acting on the nuclei in the simple model of the ionic bond, the model which ignores the polarizations of the ions and pictures the molecule as two closed-shell spherical ions in mutual contact. If the charge density of the Li+ ion is spherical it will exert no net force on the lithium nucleus. The F- ion possesses ten electrons and, since the charge density on the F- ion is also considered to be spherical, the attractive force this density exerts on the Li nucleus is the same as that obtained for all ten electrons concentrated at the fluorine nucleus. Nine of these electrons will screen the nine positive nuclear charges on fluorine from the lithium nucleus. The net force on the lithium nucleus is, therefore, one of attraction because of the one excess negative charge on F. For the molecule to be stable, the final force on the lithium nucleus must be zero.
This can be achieved by a distortion of the spherical charge distribution of the Li+ ion. If a small amount of the 1s charge density on lithium is removed from the region adjacent to fluorine and placed on the side of the lithium nucleus away from the fluorine, i.e., the charge distribution is polarized away from the fluorine, it will exert a force on the lithium nucleus in a direction away from the fluorine. Thus the force on the lithium nucleus in an ionic bond can be zero only if the charge density of the Li+ ion is polarized away from the negative end of the molecule. A similar consideration of the forces exerted on the fluorine nucleus demonstrates that the F- ion density must also be polarized. The fluorine nucleus experiences a net force of repulsion because of the presence of the lithium ion. The two negative charges centred on lithium screen only two of its three nuclear charges. Therefore, the charge density of the F- ion must be polarized towards the lithium in order to exert an attractive force on the fluorine nucleus which will balance the repulsive force arising from the presence of the Li+ ion. Thus both nuclei in the LiF molecule are bound by the increase in charge density localized in the region of the fluorine. The charge distribution of a molecule with an ionic bond will necessarily be characterized not only by the transfer of electronic charge from one atom to another, but also by a polarization of each of the resulting ions in a direction counter to the transfer of charge, as indicated in the bond density map for LiF. In a covalent bond the increase in charge density which binds both nuclei is shared between them. In an ionic bond both nuclei are bound by the forces exerted by the charge density localized on a single nucleus. The bond density maps for N2 and LiF are shown side by side to provide a contrast of the changes in the atomic charge densities responsible for the two extremes of chemical binding. It must be stressed that there is no fundamental difference between the forces responsible for a covalent or an ionic bond. They are electrostatic in each case.
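The point-charge bookkeeping used in the preceding two paragraphs can be summarized compactly (the notation is ours; the screening argument itself is the text's):

$\text{charge of } F^- \text{ seen by the Li nucleus}: \; 9 - 10 = -1 \;\; (\text{attraction}), \qquad \text{charge of } Li^+ \text{ seen by the F nucleus}: \; 3 - 2 = +1 \;\; (\text{repulsion}) \nonumber$

Hence the Li+ density must polarize away from fluorine, and the F- density towards lithium, for the net force on each nucleus to vanish.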
Contour maps of the charge distributions for the stable homonuclear diatomic molecules formed from the second-row atoms (Figure \(1\)) provide further examples of covalent binding. The maps illustrate the relative tightness of binding of the density distributions, the density in Li2 for example being much more diffuse than that in N2. Two important physical dimensions for a molecule are the bond length and the molecular size. The bond length of a molecule may be directly determined (by X-ray diffraction techniques or by spectroscopic methods) but the size of a molecule cannot be as precisely defined or measured. However, molecular diameters may be inferred from measurements of the viscosity of gas phase molecules and from X-ray crystallographic studies on the structures of molecular crystals such as solid N2 and O2. Figure \(1\): Contour maps of the molecular charge distribution for the stable homonuclear diatomic molecules \(Li_2\) to \(F_2\). In general over 95% of the molecular charge lies within the 0.002 contour (the outermost contour illustrated in the density maps) and it has been found that the dimensions of this contour agree well with the experimental estimates of molecular sizes. The length and width of each molecule, defined respectively as the distance between the intercepts of the 0.002 contour on the molecular axis and on a line perpendicular to the axis and passing through its mid-point, are given in Table \(1\) along with the experimental bond lengths Re.

Table \(1\): Properties of the Total Charge Distributions*

A2      Re       Length   Width   Nonbonded radius (molecule)   Nonbonded radius (atom)
Li2     5.051    8.7      7.8     1.8                           3.3
B2      3.005    9.8      7.2     3.4                           3.4
C2      2.3481   8.5      7.0     3.1                           3.2
N2      2.068    8.2      6.4     3.1                           3.0
O2      2.282    7.9      6.0     2.8                           2.9
F2      2.68     7.9      5.4     2.6                           2.8

*All distances are given in units of ao = 0.52917 Å.

There is only a rough correlation between the bond length and the overall length of the molecule. Thus the lengths of N2 and O2 are in the reverse order of their bond lengths, as is also roughly true experimentally. The lithium molecule has the largest bond length but a molecular length only slightly larger than that of C2. There are two factors which must be considered in understanding the length of a molecule, the bond length and the rate at which the density falls off from the nucleus on the side away from the bond. Table \(1\) lists the distance from the nucleus to the 0.002 contour in the molecule, i.e., the radius of the nonbonded charge density, and the radius of the same contour in the isolated atom. With the exception of Li2, this distance in the molecule is almost identical to the value in the isolated atom. Thus the contribution of the two end lengths, beyond the nuclear separation, to the overall length of a molecule is largely determined by how tightly the density is bound in the unperturbed atom. The binding of the atomic densities increases from Li across to F, so that Li and Be are large and diffuse and N, O, and F progressively tighter and more compact. Therefore F2 is smaller in size than N2 or C2 even though it possesses a greater bond length because the density in the F atom is more tightly bound than that in the C or N atoms. The Li molecule differs from the others in that its length is considerably less than expected considering the diffuse nature of its atomic density. In this case the molecular length is not approximately equal to the sum of Re and twice the "atomic" radius.
The Li molecule differs from the others in that its length is considerably less than expected considering the diffuse nature of its atomic density; in this case the molecular length is not approximately equal to the sum of Re and twice the "atomic" radius. This is, however, easily understood, since in the Li atom only one valence-shell electron is present, and in the molecule the charge density of this electron is concentrated almost exclusively in the binding region. This is further illustrated by using, instead of the 0.002 contour of the Li atom, the 0.002 contour of the 1s2 shell of Li+, which is in fact equal to the value listed in Table \(1\) for the Li2 molecule. An estimate of the size of a peripheral atom in a molecule can thus be obtained by taking the sum of ½Re from a suitable source and the atomic radius as defined by the 0.002 contour of the atom (except for Li, Na, etc., where the core radius should be used).

The bond density maps for the second-row homonuclear diatomic molecules (Figure \(2\)) indicate that the original atomic densities are distorted so as to place charge in the antibinding as well as in the binding regions.

Figure \(2\): Bond density maps for the homonuclear diatomic molecules. Click here for contour values.

Apart from Li2, the pattern of charge increase and charge removal in these molecules is similar to that discussed previously for N2, a pattern ascribed to the participation of 2pσ orbitals in the formation of the bond. Only Li2 approximates the simple picture found for H2 of removal of charge from the antibinding region and a buildup in the binding region. For the remaining molecules charge density is increasingly accumulated along the bond axis in both the binding and antibinding regions. The total accumulation of electronic charge represented by the regions of positive contours in the binding and antibinding regions of the bond density maps is listed in Table \(2\).

Table \(2\): Charge Contained in the Regions of Increase in Bond Density Maps

A2 | Binding Region | Antibinding Region
Li2 | 0.41 | 0.01
B2 | 0.30 | 0.05
C2 | 0.50 | 0.06
N2 | 0.25 | 0.13
O2 | 0.10 | 0.14
F2 | 0.08 | 0.10

These figures show that in O2 and F2 a greater amount of charge is transferred to the antibinding region of a single nucleus than to the binding region. It is evident, however, from the shapes of the contours that the charge increase in the binding region is concentrated along the bond axis, where it exerts a maximum force of attraction on the nuclei, while the buildup in the antibinding region is more diffuse. The net forces on the nuclei are zero for each molecule. Therefore, the force exerted by the charge density in the binding region balances not only the force of nuclear repulsion but also the force exerted by the charge buildup in the antibinding region. The nuclei are in each case bound by the charge increase, which is shared equally by both nuclei.

An important physical property of a molecule is its bond energy, the amount of energy required to break the bond or bonds in a molecule and change it back into its constituent atoms. The bond energies of the second-row homonuclear diatomic molecules increase from either Li2 or F2 to a maximum value for the central member of the series, N2 (Table \(3\)).

Table \(3\): Bond Energies for Homonuclear Diatomic Molecules

Molecule | Bond Energy (eV) | Number of electron pair bonds
Li2 | 1.12 | 1
B2 | 3.0 | 1
C2 | 6.36 | 2
N2 | 9.90 | 3
O2 | 5.21 | 2
F2 | 1.65 | 1

We may rationalize the variation in the bond energies and the differences in the bond density maps in terms of the orbital theory of bonding. The simple bonding theory proposed in the preceding chapter equated the valency of an atom to its number of unpaired electrons.
Thus the number of electron pair bonds formed between atoms in this series of molecules is predicted to be one for Li2, B2 and F2, two for C2 and O2, and three for N2. Reference to Table \(3\) reveals a parallelism between the bond energy and the number of electron pair bonds present in each molecule. The detailed variation in bond energy through the series can be accounted for in terms of the type of bond present in each molecule (whether it is formed from s or p orbitals), a feature which is clearly reflected in the bond density maps, and even more strikingly portrayed in their profiles (Figure \(4\)).

Figure \(4\): Profiles of the bond density maps for the homonuclear diatomic molecules.

The bond in Li2 is formed primarily from the overlap of 2s atomic orbitals on each lithium atom. The 2s atomic density of lithium is a diffuse spherical distribution. These same characteristics are evident in the total charge distribution for Li2 and particularly in its bond density map. The charge increase in the binding region, while large in amount (Table \(2\)), is very diffuse, and the bond density profile shows that, relative to the other molecules, the charge increase is not concentrated along the bond axis. These are the very features expected for a bond resulting from the overlap of distorted, nondirectional s orbitals.

B2 and F2 also have but a single pair bond. However, the bonds in these two molecules are formed primarily from the overlap of 2pσ orbitals. Since a pσ orbital is directed along the bond axis, it is more effective than an s orbital at concentrating charge density along this same axis. This is particularly evident when we compare the profiles of the bond densities for F2 and B2 with the profile for Li2. Similarly, the presence of two electron pair bonds and the still larger bond energies found for C2 and O2 are reflected in the larger increases in the charge densities along the internuclear axis in the binding region. Notice that while B2 concentrates three times as much charge as O2 in the binding region, that charge is not concentrated along the bond axis to as great an extent as in O2, and consequently its bond energy is the smaller of the two.

The nitrogen molecule possesses three electron pair bonds and the largest bond energy of the molecules in this series. The charge increase in the binding region is concentrated along the bond axis to a far greater extent in this molecule than in any of the other molecules in the series. This concentration of the charge density gives N2 a stronger bond than C2, even though the total charge increase in its binding region is only one half as great as that for C2. The comparison of the bond energies in this series of molecules clearly illustrates that the strength of a bond is not simply related to the number of electronic charges in the binding region. As important as the amount of charge is the exact disposition of the charge density in the molecule, whether it is diffuse or concentrated.
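The parallelism between bond energy and bond number is easiest to see on a per-bond basis. Dividing the energies of Table \(3\) by the number of electron pair bonds (a simple manipulation, not a calculation made in the text) gives an energy per pair that rises from Li2 to a maximum at N2 and falls again, mirroring the increasing concentration of the bond density along the axis:

```python
# Bond energy per electron-pair bond, from the data of Table 3.
bond_data = {
    # molecule: (bond energy in eV, number of electron pair bonds)
    "Li2": (1.12, 1),
    "B2":  (3.0, 1),
    "C2":  (6.36, 2),
    "N2":  (9.90, 3),
    "O2":  (5.21, 2),
    "F2":  (1.65, 1),
}

for molecule, (energy, pairs) in bond_data.items():
    print(f"{molecule}: {energy / pairs:.2f} eV per electron-pair bond")
# Output rises to 3.30 eV/pair at N2, where the charge increase is most
# strongly concentrated along the bond axis, and falls off to F2.
```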
7.04: Dipole Moments and Polar Bonds
Any chemical bond results from the accumulation of charge density in the binding region to an extent sufficient to balance the forces of repulsion. Ionic and covalent binding represent the two possible extremes of reaching this state of electrostatic equilibrium, and there is a complete spectrum of bond densities lying between these two extremes. Since covalent and ionic charge distributions exhibit radically different chemical and physical properties, it is important, if we are to understand and predict the bulk properties of matter, to know which of the two extremes of binding a given molecule most closely approximates.

We can obtain an experimental measure of the extent to which the charge density is unequally shared by the nuclei in a molecule. The physical property which determines the asymmetry of a charge distribution is called the dipole moment. To illustrate the definition of the dipole moment we shall determine this property for the LiF molecule, assuming that one electron is transferred from Li to F and that the charge distributions of the resulting ions are spherical. The dipole moment is defined as the product of the total amount of positive or negative charge and the distance between their centroids. The centroids of the positive and negative charges in a molecule are determined in a manner similar to that used to determine the center of mass of a system.

Figure \(1\): Diagram for the calculation of the centroids of positive and negative charge in LiF.

With reference to Figure \(1\), the "center of gravity" of the positive charge in LiF is easily found from the following equations, where a and b are the distances of the centroid from the F and Li nuclei respectively (the nuclear charges are +9 on F and +3 on Li):

$9a = 3b \nonumber$

$a + b = R \nonumber$

Eliminating b from these equations and solving for a we find that

$a = \dfrac{R}{4} \nonumber$

Thus all the positive charge in the LiF molecule can be considered to be at a point one quarter of the bond length away from the fluorine nucleus. Similarly the centroid of negative charge, remembering that one electron has been transferred from Li to F (leaving two electrons on Li+ and placing ten on F-), is found to lie at a point one sixth of the bond length away from the F nucleus. The centroids of positive and negative charge do not coincide, the negative centroid being closer to the F nucleus than the positive centroid. While the molecule is electrically neutral, there is a separation of charge within the molecule. Let us denote the distance between the centroids of charge by l:

$l = \dfrac{R}{4} - \dfrac{R}{6} = \dfrac{R}{12} \nonumber$

and since there are twelve electrons in LiF, the dipole moment, denoted by μ, is

$\mu = 12e \times l = 12e \times \dfrac{R}{12} = eR \nonumber$

Thus, not surprisingly, the dipole moment in this case is numerically equal to one excess positive charge at the Li nucleus and one excess negative charge at the F nucleus, i.e., one pair of opposite charges separated by the bond length. We can easily calculate the value of the dipole moment. The value of R for LiF is 1.53 × 10-8 cm and the charge on the electron is 4.80 × 10-10 esu. Thus

$\mu = eR = (4.80 \times 10^{-10} \text{ esu})(1.53 \times 10^{-8} \text{ cm}) = 7.34 \times 10^{-18} \text{ esu cm} \nonumber$

or

$\mu = 7.34 \text{ debyes} \nonumber$

where 1 debye = 1 × 10-18 esu cm. (The fundamental unit for dipole moments is called a debye in honour of P. Debye, who was responsible for formulating the theory and method of measurement of this important physical quantity.)
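The point-charge arithmetic above is easily automated. The short sketch below places F at the origin and Li at x = R and recomputes the centroids and the model dipole moment; the value it prints should be compared with the experimental figure discussed next:

```python
# Point-charge model of the LiF dipole moment: one electron transferred,
# spherical Li+ and F- ions.  F is at x = 0, Li at x = R.
E_ESU = 4.80e-10     # electronic charge in esu
R_CM = 1.53e-8       # LiF bond length in cm
DEBYE = 1.0e-18      # 1 debye = 1e-18 esu cm

# Centroid of positive charge: +9 at F, +3 at Li.
pos_centroid = (9 * 0.0 + 3 * R_CM) / 12      # = R/4, measured from F
# Centroid of negative charge: 10 electrons on F-, 2 on Li+.
neg_centroid = (10 * 0.0 + 2 * R_CM) / 12     # = R/6, measured from F

l = pos_centroid - neg_centroid               # = R/12
mu = 12 * E_ESU * l                           # = eR

print(f"positive centroid: {pos_centroid / R_CM:.4f} R from F")   # 0.2500
print(f"negative centroid: {neg_centroid / R_CM:.4f} R from F")   # 0.1667
print(f"model dipole moment: {mu / DEBYE:.2f} debyes")            # 7.34
```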
The experimental value of μ for LiF is slightly smaller than the calculated value, being 6.28 debyes. The reason for the discrepancy is easily traced to the assumption made in the calculation that the charge distributions of the Li+ and F- ions are spherical. We have previously indicated that the charge distributions of both the F- and Li+ ions are polarized in a direction counter to the direction of transfer of the electron, in order to balance the forces on the nuclei. The centroid of the ten negative charges on F- is not at the F nucleus, but is shifted slightly towards the Li, and the centroid of the charge density on Li+ is correspondingly shifted slightly off the Li nucleus, away from the F. Thus the centroid of negative charge for the whole molecule is not as close to the F nucleus as our simple calculation indicated, and the dipole moment is correspondingly less.

Obviously from this discussion the dipole moment of a molecule with a covalent bond will be zero, since the symmetry of the charge distribution will dictate that the positive and negative charge centroids coincide. Thus dipole moments can conceivably possess values which lie between the covalent limit of zero and the ionic extreme, which approaches \(neR\) in value (n being the number of electrons transferred in the formation of the ionic bond). The series of diatomic molecules formed by the union of a single hydrogen atom with each of the elements in the second row of the periodic table exemplifies both the extreme and intermediate types of binding, and hence of dipole moments. Table \(2\) lists the dipole moments and the values of eR for the ionic extreme (assuming spherical ions) for the second-row diatomic hydride molecules.

Table \(2\): Dipole Moments and Bond Lengths of Second-row Hydrides

AH | μ* (debyes) | eR (debyes) | R (Å)
LiH | -6.002 | -7.661 | 1.595
BeH | -0.282 | -6.450 | 1.343
BH | 1.733 | 5.936 | 1.236
CH | 1.570 | 5.398 | 1.124
NH | 1.627 | 4.985 | 1.038
OH | 1.780 | 4.661 | 0.9705
FH | 1.942 | 4.405 | 0.9171

*A negative or positive sign for μ implies that H is the negative or positive end of the dipole respectively.

All of these molecules exist as stable, independent species in the gas phase at low pressures and may be studied by spectroscopic methods or by molecular beam techniques. Only LiH and HF, however, are stable under normal conditions; LiH is a solid and HF a gas at room temperature. The remaining diatomic hydrides are very reactive since they are all capable of forming one or more additional bonds. The variation of the dipole moment in this series of molecules provides a measure of the relative abilities of H and of each of the second-row elements to attract electrons. For example, the dipole moment for LiH illustrates that electron density is transferred from Li to H in the formation of this molecule. In HF, on the other hand, charge density is transferred from H to F. With the exception of BH, there is a steady increase in μ from -6.0 debyes for LiH to +1.9 debyes for HF. Only LiH approaches the ionic limit of Li+H-. BeH appears to possess a close to equal sharing of the valence electrons. The remaining molecules, while exhibiting some degree of charge removal from H, are all far removed from the ionic extreme. They represent cases of molecular binding which lie between the two extremes, ionic and covalent; they are referred to as polar molecules.

We can best illustrate the variation in the chemical binding in this series of molecules by considering the properties of the molecular charge and bond density distributions (Figures \(1\) and \(2\)). In LiH almost all of the molecular charge density is centered on the two nuclei in nearly spherical distributions.

Figure \(1\): Contour maps of the molecular charge distribution of the diatomic hydride molecules LiH to HF. The proton is the nucleus on the right-hand side in each case. Click here for contour values.

Figure \(2\): Bond density maps for the diatomic hydride molecules LiH to HF. The proton is on the right-hand side in each case. Click here for contour values.
The nonbonded charge and radius for lithium, 1.09 e- and 1.7 ao respectively, are characteristic of the 1s2 inner shell distribution of Li+. Thus the molecular charge distribution for LiH indicates that the single valence electron of lithium is transferred to hydrogen and that the bond is ionic. (Recall that initially the Li atom is much larger than an H atom. The density map for LiH should be compared to that given previously for LiF, Figure 7-1.) In BeH, the valence density has the appearance of being equally shared by the two nuclei. From BH through to HF a decreasing amount of density is centered on the proton, to the extent that the charge distribution of HF could be approximately described as an F- ion distribution polarized by an imbedded proton.

The increase of the effective nuclear charge across a row of the periodic table is reflected not only in the amount of charge transferred to or from the hydrogen, but also in the relative sizes of the molecules. In BeH the density is diffuse and the molecule is correspondingly large. In HF the density is more compact and the molecule is the smallest in the series. The decrease in the size of the molecule from BH to HF parallels the decrease in the size of the atoms B to F. The intermediate size of LiH is a consequence of the one and only valence electron of lithium being transferred to hydrogen; thus the size of LiH is a reflection of the size of the Li+ ion and not of the Li atom.

In general terms, the bond density maps provide a striking confirmation of the transfer of charge predicted by the relative electron affinities or by the relative effective nuclear charges of hydrogen and the second-row elements Li → F. We may again employ the position of the charge increase in the bond density map to characterize the type of binding present in the molecule. The map for LiH exhibits the same characteristics as does the one for LiF (Figure 7-2), the contours in the region of the Li nucleus being remarkably similar in the two cases. The valence density is clearly localized about the proton, just as it is about the fluorine nucleus in LiF. The 1s core density remaining on lithium is clearly polarized away from the proton, and the density increase localized on the proton is polarized towards the lithium, as required in ionic binding. The one principal difference between the LiH and LiF bond density maps concerns the shape of the contours representing the density increase on the proton and fluorine nucleus. In LiF the contours on fluorine are similar in shape to those obtained for a 2pσ orbital density. In LiH the contours on the proton are nearly spherical. In terms of a simple orbital model we imagine the 2s electron of Li to be transferred to the 1s orbital of hydrogen in LiH and to the 2pσ orbital of fluorine in LiF. The spherical and double-lobed appearance of the density increases found for hydrogen and fluorine respectively show these orbital models of the binding to be reasonable ones.

From BeH through the rest of the series, the bond density maps show an increase in the amount of charge removed from the proton and transferred to the region of the other nucleus. This is evident from the increase in the number and diameter of the dashed contours in the nonbonded region of the proton.
The pattern of charge increase and charge removal in the regions of the Be, B, C, N, O and F nuclei is similar to that found for these nuclei in their homonuclear diatomic molecules, and is characteristic of the participation of a pσ orbital in the formation of the bond. The polarization of the density in the region of the hydrogen is of the simple dipolar type characteristic of a dominant s orbital contribution. As previously discussed, the double-lobed appearance of the density increase in the region of fluorine in the bond density map for LiF can be viewed as characteristic of the ionic case when a 2pσ orbital vacancy is filled in forming the bond. This limiting pattern is most closely approached in the hydride series by HF, the molecule exhibiting the largest degree of charge transfer from hydrogen. HF, of all the hydrides, is most likely to approach the limiting ionic extreme of H+F-. However, the charge increase in the region of fluorine in HF is not as symmetrical as that found for F in the LiF molecule. The proton in HF, unlike the Li+ ion in LiF, is imbedded in one lobe of the density increase on F and distorts it. Thus, unlike the ionic extreme of LiF, the charge increase on F in HF is shared by both nuclei in the molecule.

Another important difference between the charge distributions of HF and LiF concerns the polarizations of the charge density in the immediate vicinities of the nuclei. In LiF (or LiH) the localized charge distributions are both polarized in a direction opposite to the direction of charge transfer Li → F (or Li → H). These polarizations are a consequence of the extreme charge transfer from lithium to fluorine, a transfer resulting in a force of attraction on the lithium nucleus and one of repulsion on the fluorine nucleus. In HF the charge density in the regions of the proton and the fluorine nucleus is polarized in the same direction as the direction of charge transfer H → F. Thus the amount of charge transferred to the vicinity of the fluorine in HF is not, unlike the situation in LiF, sufficient to screen the nuclear charge of fluorine and hence exert a net attractive force on the proton. Instead, the fluorine nucleus and its associated charge density exert a net repulsive force on the proton, one which is balanced by the inwards polarization of the charge density in the region of the proton. The polarization of the charge density on the proton adds to and is contiguous with the charge increase in the binding region. Thus in HF, and in the molecules BeH to OH for which the charge transfer is less extreme, the nuclei are bound by a shared density increase and the binding is covalent. From BeH through the series of molecules the sharing of the charge increase in the binding region becomes increasingly unequal and favours the heavy nucleus over the proton. The latter molecules in the series, NH, OH and HF, provide examples of polar binding which are intermediate between the extremes of perfect covalent and ionic binding as exhibited by the homonuclear diatomics and LiF respectively.

In general, chemical bonds between identical atoms or between atoms from the same family in the periodic table will exhibit equal or close to equal sharing of the bond density and be covalent in character. Compounds formed by the union of elements in columns I or II with elements in columns VIA or VIIA will be ionic, as exemplified by LiF or BeO.
We find a continuous change from covalent to ionic binding as the atoms joined by a chemical bond come from columns in the periodic table which are progressively further removed from one another. This is illustrated by the variation in the molecular charge distributions through the series of molecules shown in Figure \(1\).

Figure \(1\): Molecular charge distributions for the 12-electron isoelectronic series \(C_2\), BN, BeO and LiF. Click here for contour values.

This series of molecules is formed (in an imaginary process) by the successive transfer of one nuclear charge from the nucleus on the left to the nucleus on the right, starting with the central symmetrical molecule C2. The molecules are said to form an isoelectronic series since they all contain the same number of electrons, twelve. The molecular charge distributions in this series illustrate how the charge distribution and binding for a constant number of electrons change as the nuclear potential field in which the electrons move is made increasingly unsymmetrical. In C2 the nuclear charges are, of course, equal and the charge distribution is symmetrically shared by both nuclei in the manner characteristic of covalent binding. In the remaining molecules the valence charge density is increasingly localized in the region of the left-hand nucleus. This is particularly evident in the bond density maps and their profiles (Figure \(2\)), which show the increasing extent to which charge density is transferred from the region of the nucleus on the right (B, Be, Li) to its partner on the left (N, O, F).

Figure \(2\): Bond density maps and profiles along the internuclear axes for the 12-electron sequence of molecules \(C_2\), BN, BeO and LiF. Click here for contour values.

The charge distribution of BN (with nuclear charges of five for boron and seven for nitrogen) is similar to that for C2 in that charge is accumulated in the nonbonded regions of both nuclei as well as in the region between the nuclei. However, the buildup of charge behind the boron nucleus is smaller than that behind the nitrogen nucleus, and the charge density shared between the nuclei is heavily shifted towards the nitrogen nucleus. Thus the binding in BN is predominantly covalent, but the bond density is polarized towards the nitrogen.

The charge transfer in BeO and LiF is much more extreme, and the bond density maps show a considerable loss of charge from the nonbonded regions of both the Be and Li nuclei. Notice that, except for contours of very low value, the charge density in BeO, as in LiF, is localized in nearly spherical distributions on the nuclei, distributions which are characteristic of Be+2 and O-2 ions. A count of the number of electronic charges contained within the spherical or nearly spherical contours centered on the nuclei in BeO and LiF indicates that the charge distributions correspond to the formulae Be+1.5O-1.5 and Li+1F-1. That is, the binding is ionic and corresponds to the transfer of approximately one charge from Li to F and of one and one half charges from Be to O. Thus while the binding in LiF is close to the simple orbital model of Li+(1s2)F-(1s22s22p6), as noted before, the binding in BeO falls somewhat short of the description Be+2(1s2)O-2(1s22s22p6). Notice that the density contours on oxygen in BeO are more distorted towards the Be than the contours on F are towards Li in LiF. This illustrates that the oxygen anion is more polarizable than is the fluoride anion.
The radius of the charge distribution on the nonbonded side of the Be nucleus, as measured along the bond axis, is identical to that found for an isolated Be+2 ion. (Recall that the radius of an atomic or orbital density decreases as the nuclear charge increases. Thus the Li+1 ion is larger than the Be+2 ion, as indicated in Figure 7-9.) However, the radius of the Be charge density perpendicular to the bond axis is much greater than that for a Be+2 ion. This shows, as does the actual electron count given above, that the two valence electrons of beryllium are not completely transferred to oxygen in the formation of the BeO molecule.

Hydrogen is an exception to the above set of generalizations regarding an element's position in the periodic table and the ionic-covalent nature of the bond it forms with other elements. It does not behave in a manner typical of family IA. The bond formed between hydrogen and another member of group IA, as exemplified by LiH, is ionic and not covalent. Here hydrogen accepts a single electron to fill the vacancy in its 1s shell and thus resembles the members of family VIIA, the halogens. The bond in HF, however, is more polar than would be expected for the union of two adjacent members of the same family, and hydrogen is therefore not a typical member of family VIIA. This intermediate behaviour for H is understandable in that the values of its ionization potential and electron affinity are considerably greater than those observed for the alkali metals (IA) but considerably less than those found for the halogens (VIIA).
7.05: Electronegativity
It is important that we be able to predict the extent to which electronic charge will be transferred from one atom to another in the formation of a chemical bond, that is, to predict its polarity. The very detailed results given previously for the charge distributions of the diatomic hydrides are not generally available, and there is a need for an empirical method which will allow us to estimate the polarity of any chemical bond. It is possible to define for an element a property known as its electronegativity, which provides a qualitative estimate of the degree of polarity of a bond. Electronegativity is defined as the ability of an atom in a molecule to attract electrons to itself. The concept of an electronegativity scale for the elements was proposed by Pauling.

The electron affinity of an atom provides a direct measure of the ability of an atom to attract and bind an electron:

$X + e^- \rightarrow X^- \nonumber$

with $\Delta E = A_X$. Here AX denotes the electron affinity of atom X; note that with this convention AX is the energy change accompanying the attachment, so it is negative when X binds the extra electron (its magnitude is the electron affinity as commonly tabulated). For the reactions of two elements, X and Y, with free electrons, the relative values of the electron affinities AX and AY provide a measure of the relative independent tendencies of X and Y to change into $X^-$ and $Y^-$. However, we are interested in the reaction of X with Y and in being able to predict whether the X-Y bond will be polar in the sense X+Y- or X-Y+. The electron which is to be partially or wholly gained by X or Y is not a free electron, but is bound to the atom Y or X respectively. Consequently we are interested in the relative energies of the following two processes:

$X + Y \rightarrow X^+ + Y^- \label{1}$

with $\Delta E_1 = I_X + A_Y \nonumber$

and

$X + Y \rightarrow X^- + Y^+ \label{2}$

with $\Delta E_2 = I_Y + A_X \nonumber$

For reaction $\ref{1}$ to be favoured over reaction $\ref{2}$, not only must Y have a high electron affinity, it is also necessary that X have a low ionization potential. We would expect the bonding electrons to be approximately equally shared in the X-Y bond if $\Delta E_1 = \Delta E_2$, as neither extreme structure is favoured over the other. Thus the condition for a non-polar covalent bond is

$I_X + A_Y = I_Y + A_X \label{3}$

or, collecting quantities for a given atom on one side of the equation,

$I_X - A_X = I_Y - A_Y \label{4}$

Equation $\ref{4}$ states that a non-polar bond will result when the difference between the ionization potential and the electron affinity is the same for both atoms joined by the bond. If the quantity IX - AX is greater than IY - AY, then the product X-Y+ will be energetically favoured over X+Y-. Thus the quantity (I - A) provides a measure of the ability of an atom to attract electrons (or electronic charge density) to itself relative to some other atom. The electronegativity, denoted by the symbol $\chi$, is defined to be proportional to this quantity:

$\chi_X \propto I_X - A_X \nonumber$

The electronegativities of the elements in the first few rows of the periodic table are given in Table $1$.

Table $1$: Some Electronegativity Values

H 2.1
Li 1.0 | Be 1.5 | B 2.0 | C 2.5 | N 3.0 | O 3.5 | F 4.0
Na 0.9 | Mg 1.2 | Al 1.5 | Si 1.8 | P 2.1 | S 2.5 | Cl 3.0
K 0.8 | Ca 1.0 | Br 2.8

As expected, the electronegativity increases from left to right across a given row of the periodic table and decreases down a given column.
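To see how equation \ref{4} works with real numbers, the fragment below ranks Li, H and F by the quantity I − A. Because A here is the (negative) energy change on attachment, I − A equals I + EA when EA is the tabulated, energy-released electron affinity; the I and EA values used are standard literature values and are not taken from this text:

```python
# Rank atoms by the electron-attracting measure of equation (4).
# With A defined as the energy change on attachment (negative), the
# quantity I - A equals I + EA for the tabulated electron affinity EA.
# I and EA below are standard literature values, in eV.
atoms = {
    # atom: (ionization potential I, tabulated electron affinity EA)
    "Li": (5.39, 0.62),
    "H":  (13.60, 0.75),
    "F":  (17.42, 3.40),
}

ranked = sorted(atoms.items(), key=lambda kv: kv[1][0] + kv[1][1])
for atom, (i_pot, ea) in ranked:
    print(f"{atom}: I + EA = {i_pot + ea:5.2f} eV")
# Li < H < F, predicting the polarities Li(+)H(-) and H(+)F(-), in
# agreement with the signs of the LiH and HF dipole moments.
```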
The greater the difference in the electronegativity values for two atoms, the greater should be the disparity in the extent to which the bond density is shared between the two atoms. Pauling has given empirical expressions which relate the electronegativity difference between two elements to the dipole moment and to the strength of the bond.
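One commonly quoted form of Pauling's relation (supplementary here, since the text does not write it out) estimates the fractional ionic character as 1 − exp[−(χA − χB)²/4]. With the electronegativities of Table \(1\) it reproduces the 26% (LiH) and 59% (HF) figures quoted in the exercises at the end of this chapter:

```python
import math

# Pauling's empirical estimate of percent ionic character from the
# electronegativity difference: 100 * (1 - exp(-(dchi**2) / 4)).
def percent_ionic(chi_a: float, chi_b: float) -> float:
    dchi = chi_a - chi_b
    return 100.0 * (1.0 - math.exp(-dchi * dchi / 4.0))

# Electronegativities from Table 1: Li 1.0, H 2.1, F 4.0.
print(f"LiH: {percent_ionic(2.1, 1.0):.0f}% ionic")   # ~26%
print(f"HF:  {percent_ionic(4.0, 2.1):.0f}% ionic")   # ~59%
```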
7.06: Interaction Between Molecules
The properties observed for matter on the macroscopic level are determined by the properties of the constituent molecules and the interactions between them. The polar or non-polar character of a molecule will clearly be important in determining the nature of its interactions with other molecules. There will be relatively strong forces of attraction acting between molecules with large dipole moments. To a first approximation, the energy of interaction between dipolar molecules can be considered as completely electrostatic in origin, the negative end of one molecule attracting the positive end of another.

The presence of intermolecular forces accounts for the existence of solids and liquids. A molecule in a condensed phase is in a region of low potential energy, a potential well, as a result of the attractive forces which the neighboring molecules exert on it. By supplying energy in the form of heat, a molecule in a solid or liquid phase can acquire sufficient kinetic energy to overcome the potential energy of attraction and escape into the vapor phase. The vapor pressure (the pressure of the vapor in equilibrium with a solid or liquid at a given temperature) provides a measure of the tendency of a molecule in a condensed phase to escape into the vapor; the larger the vapor pressure, the greater the escaping tendency. The average kinetic energy of the molecules in the vapor is directly proportional to the absolute temperature. Thus the observation of a large vapor pressure at a low temperature implies that relatively little kinetic energy is required to overcome the potential interactions between the molecules in the condensed phase.

The only potential interactions possible between non-polar, covalently bonded molecules are of the van der Waals' type, as previously discussed for the interaction between two helium atoms at large internuclear separations. Molecules such as H2 and N2 have closed shell electronic structures in the same sense that helium does; all of the valence electrons are paired and no further chemical bonding may occur. The small polarizations of the charge densities induced by the long-range interactions of closed shell atoms or molecules result in only weak forces of attraction. The low boiling points (the temperature at which the vapor pressure above the liquid phase equals one atmosphere) observed for substances composed of molecules which can interact only through a van der Waals' type force are, therefore, understandable. Table \(1\) lists the normal boiling points for a number of representative compounds.

Table \(1\): Normal Boiling Points (°K)

Substance | BP
He | 4.2
H2 | 20.4
N2 | 77.4
Ar | 87.4
NH3 | 240
HF | 292
H2O | 373
NaCl | 1686
LiF | 1949
BeO | 4100

An argon atom is larger than a helium atom and its outer charge density is not bound as tightly as that in helium. (Recall that the ionization potential for argon is less than that for helium.) Consequently, the charge density of argon is more polarizable than that of helium, and the forces of attraction between argon atoms, and hence its normal boiling point, are correspondingly greater. These same forces do, of course, operate in the gas phase as well and are the cause of the observed deviations from ideal gas behaviour.
The interactions between polar molecules such as HF and H2O will be much larger, and their normal boiling points greater, than those observed for the non-polar molecules. When hydrogen is present at the positive end of a polar bond, the dipolar interactions are particularly strong and are given a special name, hydrogen-bonded interactions. The hydrogen bond increases in strength as the electronegativity of the atom to which the H is chemically bonded increases. (We noted previously that the dipole moment in the HA molecules increased as A was made more electronegative.) Liquid hydrogen fluoride consists of chains of molecules joined end to end; each hydrogen of one molecule is attracted to the fluorine of the next. In liquid water, each water molecule is hydrogen bonded to four other water molecules. This accounts for what appears to be an anomalously high boiling point for water when compared with the values observed for the neighboring hydride molecules NH3 and HF.

The condensed phases so far considered are called molecular solids or molecular liquids because the identity of the individual molecule is largely retained. As the forces between the molecules become larger, the point of view of regarding a solid as a collection of individual, interacting molecules becomes less satisfactory. In the limiting case of the strong interactions which exist between the ions in an ionic crystal, the concept of a discrete molecule in the solid phase ceases to exist. In solid KCl, for example, the potassium and chloride ions exist as separate entities; each potassium ion is in contact with six chloride ions, which in turn are each in contact with six potassium ions. Each ion attracts its six neighboring ions equally, and thus the structure is symmetrical and cubic; six ions of one sign occupy the centers of the faces of a regular cube with an ion of opposite sign at its center. The number of nearest neighbors a given ion has in an ionic crystal is determined by the relative sizes of the positive and negative species. The Be+2 ion is considerably smaller than O-2, and the basic structure of BeO is tetrahedral, each ion being surrounded by four ions of opposite charge. The strong electrostatic forces between the ions in a crystal are reflected in the high boiling points recorded in Table \(1\) for the ionic compounds.

7.07: Literature References

More detailed discussions of the molecular charge distributions and the forces exerted on the nuclei will be found in the references given below. The sources of the wave functions used in the calculation of the density distributions are also given in these references.

1. R. F. W. Bader, W. H. Henneker and P. E. Cade, J. Chem. Phys. 46, 3341 (1967). (Homonuclear diatomic molecules.)
2. R. F. W. Bader, I. Keaveny and P. E. Cade, J. Chem. Phys. 47, 3381 (1967). (The second-row diatomic hydrides, LiH → HF.)
3. R. F. W. Bader and A. D. Bandrauk, J. Chem. Phys. 49, 1653 (1968). (The 12- and 14-electron isoelectronic series: C2, BN, BeO, LiF and N2, CO, BF.)
4. P. E. Cade, R. F. W. Bader, I. Keaveny and W. H. Henneker, J. Chem. Phys. 50, 5313 (1969). (The third-row diatomic hydrides, NaH → HCl.)

7.08: Further Reading

L. Pauling, The Nature of the Chemical Bond, third edition, Cornell University Press, Ithaca, N.Y., 1960.
7.A: Appendix
A QUANTITATIVE DEFINITION OF ATOMIC CHARGE

Since the publication of this book in 1970, a theory of atoms in molecules (AIM) has been developed that enables one to define an atom in a molecule and all of its properties. The theoretical definition of an atom is based on the properties of the experimentally observable charge density. This theory is described elsewhere in this web site, but we present here a brief introduction to enable a quantitative definition of the charge on an atom in a molecule. The theory is fully described and developed in the book "Atoms in Molecules: A Quantum Theory", published by Oxford University Press, 1990.

A molecule, or a crystal, is partitioned into individual atoms in terms of surfaces that satisfy a particular condition on the electron density. The partitioning is a direct result of the density exhibiting a maximum at the position of each nucleus. As a consequence, the density passes through some minimum value before again attaining some maximum value at a neighbouring nucleus, as illustrated in the displays of the molecular density distributions shown in this book. This fundamental property of a charge distribution is illustrated in the accompanying figure, Fig. A1, which displays the electron density in a plane of the nuclei for the LiF molecule. The distribution is dominated by the two nuclear maxima. The point of minimum density along the line linking the two nuclei (the saddle point in the density) is the starting point for two paths of steepest descent away from this point, paths that define the boundary separating the atoms in this plane. The collection of such paths for all planes obtained by rotation about the axis defines the interatomic surface. The interatomic surfaces defined in this manner yield a "natural partitioning" of a molecule into atoms, as illustrated in Fig. A1 for LiF and in Figs. A3a and A3b for the second- and third-row diatomic hydrides. The properties of the atoms defined by these "natural" surfaces are determined by quantum mechanics. Thus, not only does quantum mechanics predict the properties of a molecule, it also predicts the properties of its individual constituent atoms. The theory of atoms in molecules (AIM) equates every property of a molecule to the sum of its atomic contributions.

Among the properties of immediate interest is the charge on an atom. The electron population of an atom A (its average number of electrons), a quantity denoted by N(A), is obtained by integrating the electron density ρ(r), where r is the position vector of a point, over the space of the atom up to its atomic boundaries, as given in equation A-1:

$N(A) = \int_A \rho(\mathbf{r})\, d\mathbf{r} \tag{A-1}$

The subscript A on the integral means that the integration is carried out over the space of the atom A up to its atomic boundary, that is, its interatomic surface. The charge on the atom, q(A), is the difference between the nuclear charge ZA and its electron population, as given in equation A-2:

$q(A) = Z_A - N(A) \tag{A-2}$
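Equations A-1 and A-2 can be illustrated with a deliberately crude one-dimensional toy model: two overlapping exponential "atomic" densities partitioned at the density minimum between the nuclei, the 1-D analogue of the interatomic surface. Every number in the sketch is invented for illustration; a real AIM calculation integrates the full three-dimensional density up to the zero-flux surface.

```python
import numpy as np

# Toy 1-D analogue of the AIM partitioning: two exponential "atomic"
# densities, split at the density minimum between the nuclei, then
# integrated on each side to give "atomic populations" (equation A-1).
# All parameters are invented for illustration only.
x = np.linspace(-8.0, 12.0, 200001)
R = 3.0                                        # "bond length"
rho = 3.0 * np.exp(-2.5 * np.abs(x)) \
    + 1.0 * np.exp(-0.8 * np.abs(x - R))       # tight atom A + diffuse atom B

# Locate the density minimum between the two nuclear maxima.
between = (x > 0.0) & (x < R)
x_surface = x[between][np.argmin(rho[between])]

# Integrate the total density over each "basin".
n_a = np.trapz(rho[x <= x_surface], x[x <= x_surface])
n_b = np.trapz(rho[x > x_surface], x[x > x_surface])

print(f"1-D 'interatomic surface' at x = {x_surface:.2f}")
print(f"population of A = {n_a:.3f}, population of B = {n_b:.3f}")
```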
Another useful property determined by the electron density is the atomic polarization. In a free atom the centroid of negative charge coincides with the nucleus and there is no dipole moment, that is, no separation of charge. When an atom is in chemical combination, its density becomes polarized, either towards or away from a neighbouring atom, as discussed and illustrated in this chapter. One can easily determine the displacement of the centroid of negative charge from the position of the nucleus by determining the average (vector) distance of every element of the electron density from the nucleus of the atom. This corresponds to integrating the product of the density at each position r with the vector distance rA of this point from the nucleus, to obtain the atomic moment M(A). Its definition is given in equation A-3:

$\mathbf{M}(A) = \int_A \mathbf{r}_A\, \rho(\mathbf{r})\, d\mathbf{r} \tag{A-3}$

The atomic moment is a vector because it has a direction as well as a magnitude. The dipole moment μ of a diatomic molecule A-B, when expressed in terms of atomic contributions, is given by a charge transfer term, equal to the charge on atom B multiplied by R, the vector distance from the nucleus of A to that of B, and a polarization term given by the sum of the two atomic moments. The dipole moment of A-B is therefore given as in equation A-4:

$\mathbf{\mu} = q(B)\mathbf{R} + \mathbf{M}(A) + \mathbf{M}(B) \tag{A-4}$

where all three terms are directed along the internuclear axis. The sign of the charge transfer term q(B)R is determined by the sign of q(B) = -q(A). The model expression for μ given for LiF in the text assumes the complete transfer of one electron from Li to F and equates the dipole moment to the charge transfer term alone, that is, to the first term of equation A-4.

We can use the molecular density distribution for LiF displayed in Fig. 7-9 or Fig. A1 to calculate the charges on the atoms and determine the atomic polarizations, to illustrate how the atomic contributions determine the molecular dipole. The charge on F is q(F) = -0.938e, close to the value of -1e assumed in the model calculation. As already discussed, the atoms are in general polarized in a direction counter to the direction of the charge transfer. In LiF the charge transfer is from Li → F, and hence the fluorine atom is polarized towards the lithium atom, which in turn is polarized away from the fluorine atom. The length of the vector R in LiF is 1.564 Å and, expressing the charge on the fluorine atom in esu, the charge transfer contribution to the dipole moment is -7.05 debyes. The atomic polarizations are M(F) = +0.714 and M(Li) = +0.043 debyes, to yield a dipole moment with a magnitude of 6.29 debyes, the experimental value being 6.28 debyes.

Fig. A1. The top diagram is a contour map of the electron density for the LiF molecule in a plane containing the nuclei. The outer contour has the value 0.001 au and the remaining contours increase in value as previously given. The boundary separating the basin of the F atom on the left from that of the Li atom on the right is also indicated; it is defined by the path of steepest descent from the point of minimum density along the Li-F internuclear axis, indicated in the relief map of the density shown in the lower diagram.
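Equation A-4 and the LiF values just quoted combine in a few lines; the only number below not taken from the text is the standard conversion factor of 4.803 debyes per electron-angstrom:

```python
# Dipole moment of LiF assembled from its atomic contributions (eq. A-4).
E_ANGSTROM_TO_DEBYE = 4.803   # 1 e*angstrom in debyes (standard factor)

q_F = -0.938    # charge on F, in units of e (from the text)
R = 1.564       # Li-F internuclear distance in angstroms
M_F = 0.714     # atomic polarization of F, in debyes
M_Li = 0.043    # atomic polarization of Li, in debyes

charge_transfer = q_F * R * E_ANGSTROM_TO_DEBYE   # about -7.05 debyes
mu = charge_transfer + M_F + M_Li                 # about -6.29 debyes

print(f"charge-transfer term: {charge_transfer:+.2f} debyes")
print(f"total dipole moment:  {mu:+.2f} debyes (experiment: 6.28)")
```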
Properties Determined by the Electronic Charge Density

Pauling (1960) defined electronegativity to be 'the power of an atom in a molecule to attract electrons to itself.' This concept has proved to be extremely useful, and it is reflected in the net charges on the atoms found in diatomic molecules, as determined by the theory of atoms in molecules. The atomic charges in a diatomic molecule are a direct measure of the relative abilities of the two atoms to attract and bind electronic charge within their basins. The variation in the charge on atom A in AB, where A and B vary across the second row of the periodic table, Li → F, and including hydrogen, is displayed in Fig. A2.

Fig. A2. (a) Bar graphs of the charges on the atoms in the ground states of the diatomic molecules AB, where both A and B = Li, Be, B, C, H, N, O, F. This is the ordering of increasing electronegativity as determined by theory: all charges to the left of the position of the reference atom are negative, all those to its right are positive. (b) Bar graphs of the charge on hydrogen, q(H), for the second- and third-row diatomic hydrides.

Each atom withdraws charge from elements to the left of it and donates charge to those on its right, with H appearing between C and N. The orderings are as anticipated, with C and H possessing almost equal electronegativities. The electronegativity of C relative to H increases with the degree of unsaturation and with the extent of geometric strain. This result is anticipated on the basis of the orbital model, which predicts the electronegativity of C to increase as the s character of its hybrid bonds to H increases. Most of the secondary variations in charges across the table are explicable in terms of the extent of charge transfer being limited by either the number of valence electrons on the donor or the number of vacancies on the acceptor. The charges on the third-row elements Na → Cl are also given relative to H in their hydrides and, as anticipated, H advances towards the electronegative end of the scale in this row relative to its position vis-à-vis the second-row elements. Unexpectedly, sodium and magnesium are slightly less electropositive than their second-row congeners. The charge distributions of the second- and third-row hydrides are illustrated in Figs. A3a and A3b in the form of contour maps.

Fig. A3. Contour maps of the ground-state electronic charge distributions for the second- and third-row diatomic hydrides showing the positions of the interatomic surfaces. The first set of diagrams (a) also includes a plot for the ground state of the H2 molecule. The outer density contour in these plots is 0.001 au; the remaining contours increase in value according to the scale given in Table A1. (a) Left-hand side: H2, LiH, BeH, BH; right-hand side: CH, NH, OH, HF. (b) Left-hand side: NaH, MgH, AlH, SiH; right-hand side: PH, SH, HCl.

The extent and direction of charge transfer and its effect on the charge distribution are reflected in the behaviour of the interatomic surface, which is indicated for each molecule. In LiH the surface envelops what is essentially a Li ion, while in HF the total charge distribution is dominated by the forces exerted by the F nucleus. The charges q(H) of AH are also characteristic of the stable polyatomic species AHn, the two values usually differing by less than 0.05 e, and they reflect the chemical behaviour of the hydrides. The hydrides of Li, Be, and B, for example, are all hydridic, expelling molecular hydrogen from water, and for all of these q(H) < 0. There is a sharp break in the value of q(H) for methane, for which q(H) ≈ 0, and this is a faithful reflection of the non-polar nature of this molecule. It has a low solubility in water and does not dissociate. The remaining hydrides, NH3, H2O, and HF, are all increasingly polar with q(H) > 0, and the ordering of the charges accounts for the aqueous solution of ammonia being basic and that of HF being acidic. The ability to determine the charge on an atom in a molecule removes the necessity of defining a numerical electronegativity scale.
The concept, however, remains useful, and one may use the atomic populations to demonstrate that they recover the basic idea underlying electronegativity: the prediction of the degree of charge transfer between two atoms. Since hydrogen can either donate or accept but a single electron, the electron population of hydrogen in AH may be used to define an electronegativity per electron of A relative to hydrogen. This electronegativity is given by X(A) = 1 - N(H)AH, where N(H)AH is the population of H in AH. A positive or negative value for X(A) implies that A has a greater or lesser bonding electron affinity than does hydrogen, respectively. If the X(A) are meaningful, then the difference |X(A) - X(B)| should determine the charge transfer per valence electron in AB. Using this concept, the population of A in AB is predicted to be

$N(A) = N(A)_a - v|X(A) - X(B)| \nonumber$

where v is the number of valence electrons on the donor A or the number of vacancies on the acceptor B, whichever value is limiting, and N(A)a is the electron population of the isolated A atom. Examples of predicted and actual atomic populations for A are: NF, 6.56 (6.56); NO, 6.46 (6.50); CF, 5.21 (5.22); CO, 4.78 (4.65); CN, 4.94 (4.88); LiC, 2.12 (2.12). With the added stipulation that the charge transferred per vacancy, |X(A) - X(B)|, cannot exceed v, all diatomic fluoride populations are predicted to within a maximum error of 0.08 electrons. This method of predicting the charge transfer between atoms yields significantly larger errors only for some compounds of the elements Be and B.
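The population-prediction scheme is mechanical enough to script. In the sketch below the |X(A) − X(B)| values are illustrative inputs back-calculated from the predicted populations quoted above (Table A2 itself is not reproduced here), so the point is the bookkeeping of the formula, not the input values:

```python
# Predicted populations N(A) = N(A)_atom - v * |X(A) - X(B)|, where v is
# the number of valence electrons on the donor A or of vacancies on the
# acceptor B, whichever is limiting.  The |X(A) - X(B)| entries below are
# illustrative, back-calculated from the predictions quoted in the text.
cases = {
    # molecule: (donor, electrons in free donor atom, v, |X(A)-X(B)|)
    "NF": ("N", 7, 1, 0.44),   # one vacancy on F limits the transfer
    "CF": ("C", 6, 1, 0.79),
    "CN": ("C", 6, 3, 0.35),   # three vacancies on N are limiting
}

for molecule, (donor, n_atom, v, dx) in cases.items():
    n_predicted = n_atom - v * dx
    print(f"{molecule}: predicted N({donor}) = {n_predicted:.2f}")
# Prints 6.56, 5.21 and 4.95, matching the quoted predictions of 6.56,
# 5.21 and 4.94 to within rounding of the illustrative inputs.
```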
The charge distributions of the diatomic hydrides illustrate a general phenomenon: a significant degree of interatomic charge transfer is accompanied by a polarization of the valence densities of the atoms in a direction counter to that of the charge transfer. The polarizations arise in response to the electric field created by the charge transfer, the acceptor atom polarizing towards the positively charged donor atom, which is itself polarized away from the negatively charged acceptor. This polarization of the donor atom is particularly pronounced when it possesses a greater number of valence electrons than there are vacancies on the acceptor atom, as illustrated by the data for the diatomic hydrides given in Table A2. This table lists the atomic quantities that determine the molecular dipole moment, as given in equation A-4: the charges on the atoms (equation A-2 for q(A)), the atomic polarizations (equation A-3 for M(A)), the charge transfer contribution to the molecular dipole moment under the heading μ(CT) = q(H)R = -q(A)R, the dipole moment μ(AH) of each molecule, and the nonbonded radius of the A atom, rn(A). In general, the magnitude of the molecular dipole is less than that of μ(CT), the charge transfer contribution, because of the opposing atomic polarizations. The polarizations of Li and Na in their hydrides are quite small, as they correspond to tightly bound core densities. However, in some instances, such as BH, the atomic polarizations determine the direction of the molecular moment. For the second row, the atomic polarizations are largest for the diffuse valence density on Be and B. They are larger still for their third-row congeners and for Si where, because of the larger, 10-electron K-L core, the valence density is less tightly bound and more polarizable.

Attempts to assign atomic charges on the basis of measured dipole moments are unrealistic, as such a procedure ignores the polarizations of the atomic densities. Such an attempt corresponds to assuming the molecular charge distribution to be composed of a set of spherically symmetric atomic densities, each centred on its own nucleus, a physically unacceptable model even in the limit of an ionic system. It should be evident from a comparison of the charge distributions in the non-bonded regions of the A atoms that the reduction in magnitude or reversal in sign of the dipole moment, which occurs after LiH in the second row and after NaH in the third row, is a consequence of an atomic polarization and is not indicative of a sudden increase in the electronegativity of the A atom. The extent of the physical distortion of those atoms for which the atomic polarizations are greatest is reflected in the values of their non-bonded radii. A non-bonded radius, rn(Ω), is defined as the axial distance from a nucleus to an outer contour of the charge density on its non-bonded side. The 0.001 au contour is chosen since the corresponding density envelopes provide good approximations to the experimentally determined van der Waals sizes, and the corresponding radii, for molecules in the gas phase. The non-bonded radii for Li and Na are close to the values for the corresponding singly-charged ions, while those for the strongly back-polarized atoms are all considerably greater than their values in the free atomic state. The presence of such a large and diffuse (weakly bound) charge distribution has important chemical consequences, imparting to the molecule the characteristics of a strong Lewis base.

A classic example of this behaviour is the carbon atom in the CO molecule. This molecule has a near-zero dipole moment because of very pronounced polarizations of the atomic densities, particularly that of carbon, which oppose the considerable charge transfer moment. The charge on oxygen is -1.33 e, and the magnitudes of the opposing atomic dipoles are |M(O)| = 0.98 au and |M(C)| = 1.72 au, with the non-bonded radius on carbon exceeding its free atomic value by 0.15 au. The physical importance of the atomic polarization of carbon is reflected in the ability of CO to act as a Lewis base, particularly in the formation of metal carbonyls. The considerable difference in the electronegativities of C and O is reflected in the relatively large dipole moment, |μ| = 1.11 au, of the formaldehyde molecule, H2C=O. The charge transfer from C to O in formaldehyde, where q(O) = -1.24 e, is only slightly less than it is in CO. Unlike in CO, however, the charge transfer contribution dominates the final moment in formaldehyde because of the close to halving of the atomic dipole on carbon, which results from the use of its non-bonded density in the formation of non-polar bonds to the hydrogen atoms.

Table of Contour Values

This table lists the values of the contours appearing in molecular density maps and bond density maps for those cases where the values are not given in the figure. In charge density maps the contours increase in value from the outermost one to the innermost one in the order indicated below. As an example, the reader may refer to Fig. 6-2, a contour map of the charge density for H2, with the contours labelled in the order indicated by the table. In the bond density difference maps the contour values increase (solid lines) or decrease (dashed lines) from the zero lines indicated on each contour map.
Key to Charge Density Maps

Contour number (beginning with the outermost one) | Value of contour in au
1 | 0.002
2 | 0.004
3 | 0.008
4 | 0.02
5 | 0.04
6 | 0.08
7 | 0.2
8 | 0.4
9 | 0.8
10 | 2
11 | 4
12 | 8
13 | 20

Key to Density Difference or Bond Density Maps

Contour number (from zero contour) | Increase (solid contour) | Decrease (dashed contour)
1 | +0.002 | -0.002
2 | +0.004 | -0.004
3 | +0.008 | -0.008
4 | +0.02 | -0.02
5 | +0.04 | -0.04
6 | +0.08 | -0.08
7 | +0.2 | -0.2
8 | +0.4 | -0.4
9 | +0.8 | -0.8

7.E: Exercises

1. Arrange the following compounds in order of the increasing polarity of their bonds: CO, HF, NaCl, O2.

2. Pauling introduced the idea of defining the percent ionic character possessed by a chemical bond. A covalent bond with equal sharing of the charge density has 0% ionic character, and a perfect ionic bond would of course have 100% ionic character. One method of estimating the percent ionic character is to set it equal to the ratio of the observed dipole moment to the value of eR, all multiplied by 100. The value of eR is, it will be recalled, the value of the dipole moment when one charge is completely transferred in the formation of the bond and the resulting ions are spherical. Use this method to determine the percent ionic character of the bonds in the diatomic hydrides, LiH to HF. Could any real molecule ever exhibit 100% ionic character according to this definition?

3. Pauling has proposed an empirical relationship which relates the percent ionic character in a bond to the electronegativity difference. From the electronegativity values given in Table 7-2, it is seen that the difference (χF − χH) is greater than the value (χH − χLi). Using the above relationship, we can calculate that the bond in HF should be 59% ionic while that in LiH should be only 26% ionic. Does the estimate of the relative ionic character in HF and LiH based on the electronegativity difference agree with that obtained by a comparison of the molecular charge density and density difference maps for these two molecules?
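As a starting point for problem 2, the ratio method can be applied directly to the dipole moment data of Table \(2\) in the section on dipole moments and polar bonds; a short sketch:

```python
# Percent ionic character = 100 * |mu_observed| / |eR| (problem 2),
# using the second-row hydride data of Table 2 (both columns in debyes).
hydrides = {
    "LiH": (-6.002, -7.661),
    "BeH": (-0.282, -6.450),
    "BH":  (1.733, 5.936),
    "CH":  (1.570, 5.398),
    "NH":  (1.627, 4.985),
    "OH":  (1.780, 4.661),
    "FH":  (1.942, 4.405),
}

for molecule, (mu, e_r) in hydrides.items():
    print(f"{molecule}: {100.0 * abs(mu) / abs(e_r):5.1f}% ionic")
# Even LiH reaches only ~78%: real ions are polarized, never the
# spherical, fully transferred charges the 100% limit assumes.
```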
8: Molecular Orbitals

8.01: Introduction
There is a second major theory of chemical bonding whose basic ideas are distinct from those employed in valence bond theory. This alternative approach to the study of the electronic structure of molecules is called molecular orbital theory. The theory applies the orbital concept, which was found to provide the key to the understanding of the electronic structure of atoms, to molecular systems. The concept of an orbital, whether it is applied to the study of electrons in atoms or molecules, reduces a many-body problem to the same number of one-body problems. In essence an orbital is the quantum mechanical description (wave function) of the motion of a single electron moving in the average potential field of the nuclei and of the other electrons which are present in the system. An orbital theory is an approximation because it replaces the instantaneous repulsions between the electrons by some average value. The difficulty in obtaining an accurate description of an orbital is the difficulty in determining the average potential field of the other electrons. For example, the 2s orbital in the lithium atom is a function which determines the motion of an electron in the potential field of the nucleus and in the average field of the two electrons in the 1s orbital. However, the 1s orbital is itself determined by the nuclear potential field and by the average potential field exerted by the electron in the 2s orbital. Each orbital is dependent upon and determined by all the other orbitals of the system. To know the form of one orbital we must know the forms of all of them. This problem has a mathematical solution; the exploitation of this solution has proved to be one of the most powerful and widely used methods to obtain information on the electronic structure of matter. A molecular orbital differs from the atomic case only in that the orbital must describe the motion of an electron in the field of more than one nucleus, as well as in the average field of the other electrons. A molecular orbital will in general, therefore, encompass all the nuclei in the molecule, rather than being centred on a single nucleus as in the atomic case. Once the forms and properties of the molecular orbitals are known, the electronic configuration and properties of the molecule are again determined by assigning electrons to the molecular orbitals in the order of increasing energy and in accordance with the Pauli exclusion principle. In valence bond theory, a single electron pair bond between two atoms is described in terms of the overlap of atomic orbitals (or in the mathematical formulation of the theory, the product of atomic orbitals) which are centred on the nuclei joined by the bond. In molecular orbital theory the bond is described in terms of a single orbital which is determined by the field of both nuclei. The two theories provide only a first approximation to the chemical bond. We shall begin our discussion of molecular orbital theory by applying the theory to the discussion of the bonding in the homonuclear diatomic molecules.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.01%3A_Introduction.txt
The spatial symmetries of atomic orbitals and the number of each symmetry type are determined by the angular momentum of the electron. The orbitals are in fact labelled by the angular momentum quantum numbers, l and m, which along with the principal quantum number n, completely specify the orbital. Angular momentum plays a similar role in determining the symmetries and number of orbitals of each symmetry species in the molecular case. In an atom all of the angular momentum is electronic in origin. In the molecular case, the molecule as a whole rotates in space and the nuclei contribute to the total angular momentum of the system. The nuclei and the electrons of a diatomic molecule can rotate about both of the axes which are perpendicular to the bond axis (Fig. 8-1). Fig. 8-1. Two rotational axes for a diatomic molecule. In a classical analogue the electrons and the nuclei exchange angular momentum during these rotations and the angular momentum of the electrons is not separately conserved. Thus the magnitude of the total electronic angular momentum in a diatomic molecule, unlike the atomic case, is not quantized. Instead, the magnitude of the total angular momentum, nuclei and electrons, is quantized. Only the electrons, however, may rotate about the internuclear axis and this component of the angular momentum is entirely electronic in origin. As long as the molecule is left undisturbed, this one component of the angular momentum remains fixed in value and its magnitude is, therefore, quantized. The angular momentum vector for rotation about the bond will lie along the bond axis. This vector represents the component of the total angular momentum vector along the internuclear axis. As in the atomic case, quantum mechanics restricts the values of the component of the total angular momentum vector along a given axis to integral multiples of (h/2π). The quantum number in this case is denoted by the Greek letter λ (lambda). It is analogous to the quantum number m in the atomic case. The possible values for λ are $\lambda = 0,1,2,3,... \nonumber$ Since the rotation may occur in the clockwise or anticlockwise sense about the axis, the angular momentum vector component may be pointed in either direction along the bond (Fig. 8-2). Fig. 8-2. The two directions for the orbital angular momentum vector for the rotation of an electron about the internuclear axis of a diatomic molecule. Correspondingly, the allowed values of the angular momentum about the internuclear axis are 0, ±1(h/2π), ±2(h/2π), etc., or in general, ±λ(h/2π). Thus when λ is different from zero, each energy level is doubly degenerate, corresponding to the two possible directions for the component λ along the bond axis. The molecular orbitals are labelled according to the values of the quantum number λ. When λ = 0, they are called σ orbitals; when λ = 1, π orbitals; when λ = 2, δ orbitals, etc. This is analogous to the labelling of the atomic orbitals as s, p, d, . . . , as determined by their l value. We know less about the angular momentum of an electron in a diatomic molecule than in an atom. In the atomic case it is possible to determine the magnitude of the total angular momentum, as given by the quantum number l, and the magnitude of one of its components, as given by the quantum number m. In a linear molecule our knowledge is more restricted and we are limited to a single quantum number λ, which determines only the component of angular momentum about the bond axis.
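The labelling rule is mechanical enough to state as code. A small sketch mapping the quantum number λ to its orbital label, the allowed components of angular momentum about the bond axis (in units of h/2π), and the resulting degeneracy:

```python
GREEK_LABELS = ["sigma", "pi", "delta", "phi"]  # standard sequence for lambda = 0, 1, 2, 3

def axis_angular_momentum(lmbda):
    """Return the orbital label, allowed axis components (units of h/2pi),
    and degeneracy for a given value of the quantum number lambda."""
    label = GREEK_LABELS[lmbda] if lmbda < len(GREEK_LABELS) else f"lambda={lmbda}"
    components = [0] if lmbda == 0 else [+lmbda, -lmbda]
    return label, components, len(components)

for lmbda in range(4):
    label, components, degeneracy = axis_angular_momentum(lmbda)
    print(f"lambda = {lmbda}: {label} orbital, components = {components}, degeneracy {degeneracy}")
```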
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.02%3A_Angular_Momentum_in_Diatomic_Molecules.txt
The potential field of a nucleus in an atom is spherically symmetric, depending only on the distance between the nucleus and the electron. Consequently the spatial symmetries of atomic orbitals are completely determined by the angular momentum quantum numbers l and m. When spherical polar coordinates rather than cartesian coordinates are used to describe the orbitals (Fig. 8-3) the dependence of the orbitals on the angles θ and φ is determined by their angular momentum quantum numbers. Fig. 8-3. The relationships of spherical polar and cylindrical polar coordinate systems to the Cartesian axes x, y and z. The inversion operation transforms the point (x,y,z) into the point (-x,-y,-z). Only the radial dependence (the dependence of the orbital on the coordinate r, the distance between the nucleus and the electron) differs between orbitals with the same l and m values but different values of n. The potential field of the nuclei in a linear molecule possesses cylindrical symmetry. In terms of a cylindrical coordinate system (Fig. 8-3) the single angular momentum quantum number λ determines the dependence of the molecular orbitals on the angle φ, a dependence determining the symmetry of the orbital for a rotation about the internuclear axis. The dependence of the molecular orbitals on r and z is left undetermined. The forms of the orbitals are not as fully determined by the angular momentum quantum numbers in a molecule as in an atom. However, we may further characterize and label the orbitals for a molecular system by taking advantage of the symmetry possessed by the molecule. The symmetry of the potential field in which an electron moves places very severe restrictions on the possible forms of the orbitals. This is a very general and powerful result. Indeed, the angular dependence of orbitals and wave functions and their angular momentum quantum numbers may be completely determined solely by a consideration of the rotational symmetry of a system. We may illustrate the role which symmetry plays in determining the form of an orbital by considering the symmetry properties of the orbitals obtained in Section 2 for the case of an electron restrained to move on a line of fixed length. Let us shift the origin of the x-axis in the plots of the orbitals (Fig. 2-8) to the mid-point of the line, thereby changing the values of the coordinates of the two end points from 0 and L to -L/2 and +L/2 respectively. Next let us denote by the symbol $\hat{R}$ the operation of reflection through the origin, an operation which replaces each value of x by -x. For example, the end points x = -L/2 and x = +L/2 are interchanged by the reflection operator $\hat{R}$. The first point to note about the operation of reflection is that its application leaves the physical system itself unchanged. The potential in which the electron moves is assumed to be of constant value along the x-axis. The reflection operator simply interchanges the two halves of the line leaving the system unchanged. The potential is said to be invariant to the operation of reflection through the origin. What is the effect of $\hat{R}$ on the wave functions or orbitals? When $\hat{R}$ operates on ψ1(x) (that is, when ψ1(x) is reflected through the origin) the result is obviously to change ψ1(x) into itself: $\hat{R}\, \psi_1(x) = \psi_1(-x) = (+1)\, \psi_1 (x) \nonumber$ The reflected function ψ1(-x) is indistinguishable from ψ1(x). The result of operating on ψ1(x) with the operator $\hat{R}$ is to leave the function unchanged.
ψ1(x) is said to be symmetric with respect to a reflection through the origin. The operation of $\hat{R}$ on ψ2(x) yields a different result: $\hat{R}\, \psi_2(x) = \psi_2(-x) = (-1)\, \psi_2 (x) \nonumber$ It is obvious from Fig. 2-8 that the reflection of ψ2(x) through the mid-point changes its sign; the reflected function ψ2(-x) is the negative of the unreflected function ψ2(x). Such a function is said to be antisymmetric with respect to a reflection at the origin. Every orbital for this system is either symmetric (those with odd n values) or antisymmetric (those with even n values) with respect to the symmetry operation of reflection. Any orbital which was neither symmetric nor antisymmetric but was instead simply unsymmetrical with respect to reflection would when squared yield an unsymmetrical probability distribution. An unsymmetrical probability distribution implies that the electron is more likely to be found on one half of the x-axis than on the other. This is a physically unacceptable result since there are no forces acting on the electron which would favour one end of the line over the other. Only orbitals which are either symmetric or antisymmetric yield density distributions which properly reflect the symmetry of the system (Fig. 2-4), that is, density distributions which are themselves symmetrical with respect to reflection at the mid-point of the line. Thus we conclude that the only wave functions resulting in physically acceptable probability distributions are those which are either symmetrical or antisymmetrical with respect to any symmetry operation which changes the physical system into itself. This statement is always true for non-degenerate wave functions, but must be amended somewhat for the action of some symmetry operations on a degenerate set of wave functions. We shall use only one of the many symmetry elements possessed by a homonuclear diatomic molecule to further characterize and classify the molecular orbitals. A homonuclear diatomic molecule possesses a centre of symmetry and the corresponding operator is called the inversion operator. The action of this operator, denoted by the symbol $\hat{i}$, is to replace the x, y, z coordinates of every point in space by their negatives -x, -y, -z. This corresponds to an inversion (or reflection) of every point through the origin or centre of symmetry of the molecule (Fig. 8-3). The action of the inversion operator on the nuclear coordinates simply interchanges one nucleus for the other. Since the nuclei possess identical charges, the nuclear framework is left unchanged and the potential exerted by the nuclei is invariant to the operation of inversion. Thus every molecular orbital for a homonuclear molecule must be either symmetric or antisymmetric with respect to the inversion operator. Orbitals which are left unchanged by the operation of inversion (are symmetric) are labelled with a subscript g, while those which undergo a change in sign (are antisymmetric) are labelled u. The symbols g and u come from the German words "gerade" and "ungerade" meaning "even" and "odd" respectively.
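The alternation between symmetric and antisymmetric orbitals can be checked numerically. With the origin moved to the mid-point of the line, the n-th orbital is ψn(x) = (2/L)^½ sin[nπ(x + L/2)/L]; the sketch below confirms that reflection through the origin returns (+1) times the orbital for odd n and (−1) times it for even n.

```python
import numpy as np

L = 1.0
x = np.linspace(-L / 2, L / 2, 201)

def psi(n, x):
    # Particle-on-a-line orbital with the origin at the mid-point of the line.
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * (x + L / 2) / L)

for n in range(1, 5):
    forward = psi(n, x)
    reflected = psi(n, -x)
    if np.allclose(reflected, forward):
        parity = "symmetric (+1)"
    elif np.allclose(reflected, -forward):
        parity = "antisymmetric (-1)"
    else:
        parity = "neither (should not happen for this system)"
    print(f"n = {n}: {parity}")
```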
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.03%3A_Symmetry_Considerations.txt
While the specific forms of the molecular orbitals (their dependence on r and z in a cylindrical coordinate system) are different for each molecule, their dependence on the angle φ as denoted by the quantum number λ and their g or u behaviour with respect to inversion are completely determined by the symmetry of the system. These properties are common to all of the molecular orbitals for homonuclear diatomic molecules. In addition, the relative ordering of the orbital energies is the same for nearly all of the homonuclear diatomic molecules. Thus we may construct a molecular orbital energy level diagram, similar to the one used to build up the electronic configurations of the atoms in the periodic table. The molecular orbital energy level diagram (Figure $1$) is as fundamental to the understanding of the electronic structure of diatomic molecules as the corresponding atomic orbital diagram is to the understanding of atoms. Molecular orbitals exhibit the same general properties as atomic orbitals, including a nodal structure. The nodal properties of the orbitals are indicated in Figure $1$. Notice that the nodal properties correctly reflect the g and u character of the orbitals. Inversion of a g orbital interchanges regions of like sign and the orbital is left unchanged. Inversion of a u orbital interchanges the positive regions with the negative regions and the orbital is changed in sign. An orbital of a particular symmetry may appear more than once. When this occurs a number is added as a prefix to the symbol. Thus there are 1σg, 2σg, 3σg, etc. molecular orbitals just as there are 1s, 2s, 3s, etc. atomic orbitals. The numerical prefix is similar to the principal quantum number n in the atomic case. As n increases through a given symmetry set, for example, 1σg, 2σg, 3σg, the orbital energy increases, the orbital increases in size and consequently concentrates charge density further from the nuclei, and finally the number of nodes increases as n increases. All these properties are common to atomic orbitals as well. We may obtain a qualitative understanding of the molecular orbital energy level diagram by considering the behaviour of the orbitals under certain limiting conditions. The molecular orbital must describe the motion of the electron for all values of the internuclear separation; from R = ∞ for the separated atoms, through R = Re, the equilibrium state of the molecule, to R = 0, the united atom obtained when the two nuclei in the molecule coalesce (in a hypothetical reaction) to give a single nucleus. Hence a molecular orbital must undergo a continuous change in form. At the limit of large R it must reduce to some combination of atomic orbitals giving the proper orbital description of the separated atoms and for R = 0 it must reduce to a single atomic orbital on the united nucleus. Consider, for example, the limiting behaviour of the 1σg orbital in the case of the hydrogen molecule. The most stable state of H2 is obtained when both electrons are placed in this orbital with paired spins giving the electronic configuration 1σg². For large values of the internuclear separation, the hydrogen molecule dissociates into two hydrogen atoms. Thus the limiting form of the 1σg molecular orbital for an infinite separation between the nuclei should be a sum of 1s orbitals, one centred on each of the nuclei.
If we label the two nuclei as A and B we can express the limiting form of the 1σg orbital as $1\sigma_g \rightarrow (1s_A + 1s_B) \nonumber$ where 1sA is a 1s orbital centred on nucleus A, and 1sB is a 1s orbital centred on nucleus B. This form for the 1σg orbital predicts the correct density distribution for the system at large values of R. Squaring the function (1sA + 1sB) we obtain for the density $1s_A^2 + 1s_B^2 + 2 \times 1s_A \times 1s_B \nonumber$ The first two terms denote that one electron is on atom A and one on atom B, both with 1s atomic density distributions. The cross term 2 × 1sA × 1sB obtained in the product is zero since the distance between the two nuclei is so great that the overlap of the orbitals vanishes. Notice as well that the function (1sA + 1sB) has the same symmetry properties as does the 1σg molecular orbital; it is symmetric with respect to both a rotation about the line joining the nuclei and to an inversion of the coordinates at the mid-point between the nuclei. The 1σg orbital for the molecule is said to correlate with the sum of 1s orbitals, one on each nucleus, for the separated atom case. Consider next the limiting case of the separated atoms for the helium molecule. Of the four electrons present in He2 two are placed in the 1σg orbital and the remaining two must, by the Pauli exclusion principle, be placed in the next vacant orbital of lowest energy, the 1σu orbital. The electronic configuration of He2 is thus 1σg²1σu². The 1σg orbital will correlate with the sum of the 1s orbitals for the separated helium atoms. Of the two electrons in the 1σg molecular orbital one will correlate with the 1s orbital on atom A and the other with the 1s orbital on atom B. Since each helium atom possesses two 1s electrons, the 1σu orbital must also correlate its electrons with 1s atomic functions on A and B. In addition, the correlated function in this case must be of u symmetry. A function with these properties is $1\sigma_u \rightarrow (1s_A - 1s_B) \nonumber$ The limiting density distribution obtained by squaring this function places one electron in a 1s atomic distribution on A, the other in a 1s atomic distribution on B. The sum of the limiting charge densities for the 1σg and 1σu molecular orbitals places two electrons in 1s atomic charge distributions on each atom, the proper description of two isolated helium atoms. Every diatomic homonuclear molecular orbital may be correlated with either the sum (for σg and πu orbitals) or the difference (for σu and πg orbitals) of like orbitals on both separated atoms. By carrying out this correlation procedure for every orbital we may construct a molecular orbital correlation diagram (Figure $2$) which relates each of the orbital energy levels in the molecule with the correlated energy levels in the separated atoms. It is important to note that the symmetry of each orbital is preserved in the construction of this diagram. Consider, for example the molecular orbitals which correlate with the 2p atomic orbitals. The direction of approach of the two atoms defines a new axis of quantization for the atomic orbitals. The 2p orbital which lies along this axis is of σ symmetry while the remaining two 2p orbitals form a degenerate set of π symmetry with respect to this axis. The sum and difference of the 2pσ orbitals on each centre correlate with the 3σg and 3σu orbitals respectively, while the sum and the difference of the 2pπ orbitals correlate with the πu and πg orbitals (Figure $2$).
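The vanishing of the cross term can be made quantitative. For two hydrogen-like 1s orbitals with unit exponent, the overlap integral has the standard closed form S(R) = e^(−R)(1 + R + R²/3) in atomic units. The sketch below evaluates S(R) together with the normalized σg and σu densities at the bond mid-point, showing the interference term dying away at large separation and the exact node of the u combination:

```python
import numpy as np

def overlap(R):
    # Closed-form overlap of two hydrogen 1s orbitals separated by R (au).
    return np.exp(-R) * (1.0 + R + R**2 / 3.0)

def h_1s(r):
    return np.exp(-r) / np.sqrt(np.pi)  # hydrogen 1s orbital, atomic units

for R in (2.0, 4.0, 8.0, 16.0):
    S = overlap(R)
    phi = h_1s(R / 2.0)  # value of each 1s orbital at the bond mid-point
    sigma_g_mid = (phi + phi) ** 2 / (2.0 * (1.0 + S))  # normalized (1sA + 1sB)^2
    sigma_u_mid = (phi - phi) ** 2 / (2.0 * (1.0 - S))  # identically zero: nodal plane
    print(f"R = {R:5.1f}  S = {S:.4f}  "
          f"sigma_g mid-point density = {sigma_g_mid:.5f}  sigma_u = {sigma_u_mid:.1f}")
```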
For large values of the internuclear distance, each molecular orbital is thus represented by a sum or a difference of atomic orbitals centred on the two interacting atoms. As the atoms approach one another the orbitals on each atom are distorted by polarization and overlap effects. In general, the limiting correlated forms of the molecular orbitals are not suitable descriptions of the molecular orbitals for finite internuclear separations. We are now in a position to build up and determine the electronic configurations of the homonuclear diatomic molecules by adding electrons two at a time to the molecular orbitals with the spins of the electrons paired, always filling the orbitals of lowest energy first. We shall, at the same time, discuss the effectiveness of each orbital in binding the nuclei and make qualitative predictions regarding the stability of each molecular configuration. Hydrogen. The two electrons in the hydrogen molecule may both be accommodated in the 1σg orbital if their spins are paired and the molecular orbital configuration for H2 is 1σg². Since the 1σg orbital is the only occupied orbital in the ground state of H2, the density distribution shown previously in Figure $2$ for H2 is also the density distribution for the 1σg orbital when occupied by two electrons. The remarks made previously regarding the binding of the nuclei in H2 by the molecular charge distribution apply directly to the properties of the 1σg charge density. Because it concentrates charge in the binding region and exerts an attractive force on the nuclei the 1σg orbital is classified as a bonding orbital. Excited electronic configurations for molecules may be described and predicted with the same ease within the framework of molecular orbital theory as are the excited configurations of atoms in the corresponding atomic orbital theory. For example, an electron in H2 may be excited to any of the vacant orbitals of higher energy indicated in the energy level diagram. The excited molecule may return to its ground configuration with the emission of a photon. The energy of the photon will be given approximately by the difference in the energies of the excited orbital and the 1σg ground state orbital. Thus molecules as well as atoms will exhibit a line spectrum. The electronic line spectrum obtained from a molecule is, however, complicated by the appearance of many accompanying side bands. These have their origin in changes in the vibrational energy of the molecule which accompany the change in electronic energy. Helium. The electronic configuration of He2 is 1σg²1σu². A σu orbital, unlike a σg orbital, possesses a node in the plane midway between the nuclei and perpendicular to the bond axis. The 1σu orbital and all σu orbitals in general, because of this nodal property, cannot concentrate charge density in the binding region. It is instead concentrated in the antibinding region behind each nucleus (Figure $3$). Figure $3$: Contour maps of the doubly-occupied $1\sigma _g$ and $1\sigma _u$ molecular orbital charge densities and of the total molecular charge distribution of $He_2$ at R = 2.0 au. A profile of the total charge distribution along the internuclear axis is also shown. Click here for contour values. The σu orbitals are therefore classified as antibonding. It is evident from the form of the density distribution for the 1σu orbital that the charge density in this orbital pulls the nuclei apart rather than drawing them together.
Generally, the occupation of an equal number of σg and σu orbitals results in an unstable molecule. The attractive force exerted on the nuclei by the charge density in the σg orbitals is not sufficient to balance both the nuclear force of repulsion and the antibinding force exerted by the density in the σu orbitals. Thus molecular orbital theory ascribes the instability of He2 to the equal occupation of bonding and antibonding orbitals. Notice that the Pauli exclusion principle is still the basic cause of the instability. If it were not for the Pauli principle, all four electrons could occupy a σg-type orbital and concentrate their charge density in the region of low potential energy between the nuclei. It is the Pauli principle, and not a question of energetics, which forces the occupation of the 1σu antibonding orbital. The total molecular charge distribution is obtained by summing the individual molecular orbital densities for single or double occupation numbers as determined by the electronic configuration of the molecule. Thus the total charge distribution for He2 (Figure $3$) is given by the sum of the 1σg and 1σu orbital densities for double occupation of both orbitals. The adverse effect which the nodal property of the 1σu orbital has on the stability of He2 is very evident in the total charge distribution. Very little charge density is accumulated in the central portion of the binding region. The value of the charge density at the mid-point of the bond in He2 is only 0.164 au compared to a value of 0.268 au for H2. We should reconsider in the light of molecular orbital theory the stability of He2+ and the instability of the hydrogen molecule with parallel spins, cases discussed previously in terms of valence bond theory. He2+ will have the configuration 1σg²1σu¹. Since the 1σu orbital is only singly occupied in He2+, less charge density is accumulated in the antibinding regions than is accumulated in these same regions in the neutral molecule. Thus the binding forces of the doubly-occupied 1σg density predominate and He2+ is stable. The electron configuration of H2 is 1σg¹(↑)1σu¹(↑) when the electronic spins are parallel. The electrons must occupy separate orbitals because of the Pauli exclusion principle. With equal occupation of bonding and antibonding orbitals, the H2(↑↑) species is predicted to be unstable. Lithium. The Li2 molecule with the configuration 1σg²1σu²2σg² marks the beginning of what can be called the second quantum shell in analogy with the atomic case. Since the 1σu antibonding orbital approximately cancels the binding obtained from the 1σg bonding orbital, the bonding in Li2 can be described as arising from the single pair of electrons in the 2σg orbital. Valence bond theory, or a Lewis model for Li2, also describes the bonding in Li2 as resulting from a single electron pair bond. This is a general result. The number of bonds predicted in a simple Lewis structure is often found to equal the difference between the number of occupied bonding and antibonding orbitals of molecular orbital theory. The forms of the orbital density distributions for Li2 (Figure $4$) bear out the prediction that a single electron pair bond is responsible for the binding in this molecule. Figure $4$: Contour maps of the doubly-occupied 1σg, 1σu and 2σg molecular orbital charge densities for Li2 at R = 5.051 au, the equilibrium internuclear separation. Click here for contour values. The total molecular charge distribution for Li2 is shown in Fig. 7-3.
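The counting rule just mentioned, that the Lewis bond number equals the excess of occupied bonding over antibonding orbital pairs, is easy to encode. In the sketch below the ASCII tags 1sg, 1su, 2sg, ... stand for the orbitals 1σg, 1σu, 2σg, ...:

```python
# Bond number = (bonding electrons - antibonding electrons) / 2,
# using the bonding/antibonding classification given in this chapter.
BONDING = {"1sg", "2sg", "3sg", "1pu"}
ANTIBONDING = {"1su", "2su", "1pg"}

configurations = {
    "H2": {"1sg": 2},
    "He2": {"1sg": 2, "1su": 2},
    "Li2": {"1sg": 2, "1su": 2, "2sg": 2},
    "Be2": {"1sg": 2, "1su": 2, "2sg": 2, "2su": 2},
}

for molecule, occupancy in configurations.items():
    n_bond = sum(n for orb, n in occupancy.items() if orb in BONDING)
    n_anti = sum(n for orb, n in occupancy.items() if orb in ANTIBONDING)
    print(f"{molecule}: bond number = {(n_bond - n_anti) // 2}")
```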
The 1σg and 1σu density distributions are both strongly localized in the regions of the nuclei with spherical contours characteristic of 1s atomic distributions. The addition of just the doubly-occupied 1σg and 1σu orbital densities in Li2 will yield a distribution which resembles very closely and may be identified with the doubly-occupied 1s or inner shell atomic densities on each lithium nucleus. Only the charge density of the pair of valence electrons in the 2σg orbital is delocalized over the whole of the molecule and accumulated to any extent in the binding region. Thus there are cases where the molecular orbitals even at the equilibrium bond length resemble closely their limiting atomic forms. This occurs for inner shell molecular orbitals which correlate with the inner shell atomic orbitals on the separated atoms. Inner shell 1s electrons are bound very tightly to the nucleus as they experience almost the full nuclear charge and the effective radii of the 1s density distributions are less than the molecular bond lengths. Because of their tight binding and restricted extension in space, the inner electrons do not participate to any large extent in the binding of a molecule. Thus with the exception of H2 and He2 and their molecular ions, the 1σg and 1σu molecular orbitals degenerate into non-overlapping atomic-like orbitals centred on the two nuclei and are classed as nonbonding orbitals. Beryllium. The configuration of Be2 is 1σg²1σu²2σg²2σu² and the molecule is predicted to form a weakly bound van der Waals molecule like the helium dimer. Oxygen. Since the method of determining electronic configurations is clear from the above examples, we shall consider just one more molecule in detail, the oxygen molecule. Filling the orbitals in order of increasing energy, the sixteen electrons of O2 are described by the configuration 1σg²1σu²2σg²2σu²3σg²1πu⁴1πg². The orbital densities are illustrated in Figure $5$. Figure $5$: Contour maps of the molecular orbital charge densities for $O_2$ at the equilibrium internuclear distance of 2.282 au. Only one component of the 1πg and 1πu orbitals is shown. All the maps are for doubly-occupied orbitals with the exception of that for 1πg, for which each component of the doubly-degenerate orbital contains a single electron. The nodes are indicated by dashed lines. Click here for contour values. The molecular orbitals of π symmetry are doubly degenerate and a filled set of π orbitals will contain four electrons. The node in a πu orbital is in the plane which contains the internuclear axis and is not perpendicular to this axis as is the node in a σu orbital. (The nodal properties of the orbitals are indicated in Figure $5$.) The πu orbital is therefore bonding. A πg orbital, on the other hand, is antibonding because it has, in addition to the node in the plane of the bond axis, another at the bond mid-point perpendicular to the axis. The bonding and antibonding characters of the π orbitals have just the opposite relationship to their g and u dependence as have the σ orbitals. The 1σg and 1σu orbital densities have, as in the case of Li2, degenerated into localized atomic distributions with the characteristics of 1s core densities and are nonbonding. The valence electrons of O2 are contained in the remaining orbitals, a feature reflected in the extent to which their density distributions are delocalized over the entire molecule.
Aside from the inner nodes encircling the nuclei, the 2σg and 2σu orbital densities resemble the 1σg and 1σu valence density distributions of H2 and He2. A quantitative discussion of the relative binding abilities of the 2σg, 3σg and 1π orbital densities is presented in the following section. One interesting feature of the electronic configuration of O2 is that its outer orbital is not fully occupied. The two 1πg electrons could both occupy one of the πg orbitals with paired spins or they could be assigned one to each of the πg orbitals and have parallel spins. Hund's principle applies to molecules as well as to atoms and the configuration with single occupation of both πg orbitals with parallel spins is thus predicted to be the most stable. This prediction of molecular orbital theory regarding the electronic structure of O2 has an interesting consequence. The oxygen molecule should be magnetic because of the resultant spin angular momentum possessed by the electrons. The magnetism of O2 can be demonstrated experimentally in many ways, one of the simplest being the observation that liquid oxygen is attracted to the poles of a strong magnet.
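Hund's assignment of the two 1πg electrons can be written as a small filling routine: singly occupy each degenerate component with parallel spins before pairing, then count the unpaired spins. For two electrons in a doubly degenerate level it returns two unpaired electrons, which is the source of the paramagnetism of O2:

```python
def hund_fill(n_electrons, n_degenerate_orbitals):
    """Distribute electrons over a degenerate level: singly occupy each
    component with parallel spins before pairing (Hund's principle)."""
    occupancies = [0] * n_degenerate_orbitals
    for i in range(n_electrons):
        occupancies[i % n_degenerate_orbitals] += 1
    unpaired = sum(1 for n in occupancies if n == 1)
    return occupancies, unpaired

# Two electrons in the doubly degenerate 1pi_g level of O2:
occupancies, unpaired = hund_fill(2, 2)
print(f"1pi_g occupancies: {occupancies}, unpaired electrons: {unpaired}")
# -> [1, 1] with 2 unpaired electrons: O2 is predicted to be paramagnetic.
```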
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.04%3A_Molecular_Orbitals_for_Homonuclear_Diatomics.txt
We may determine the relative importance of each orbital density in the overall binding of the nuclei in a molecule through a comparison of the forces which the various molecular orbital charge distributions exert on the nuclei. In molecular orbital theory, the total charge density is given by the sum of the orbital charge densities. Thus the total force exerted on the nuclei by the electronic charge distribution will be equal to the sum of the forces exerted by the charge density in each of the molecular orbitals. It is of interest to compare the effectiveness of each orbital charge density in binding the nuclei with some standard case in which they all exhibit the same ability. The limiting forms of the molecular orbitals for the case of the separated atoms have this desired property. In addition, the properties of the separated atoms form a useful basis for the discussion of any molecular property from the point of view of determining the changes which have been brought about by the formation of the chemical bond. Suppose we label the two nuclei of a homonuclear diatomic molecule as A and B and consider the forces exerted on the A nucleus by the pair of electrons in a molecular orbital when the orbital has assumed its limiting form for the separated atoms. At this limit, one electron correlates with an atomic orbital on nucleus A and the other with an identical orbital on nucleus B. The discussion of the forces exerted on the nuclei by such a limiting charge distribution is similar to the discussion given previously in Section 6 for the case of two separated hydrogen atoms. The charge density in the orbital on the A nucleus will not exert a force on that nucleus since an undistorted atomic orbital is centrosymmetric with respect to its nucleus. The charge density of the electron which correlates with the B nucleus will exert a force on the A nucleus equivalent to that obtained by concentrating the charge density to a point at the position of the B nucleus. The electron which correlates with the B nucleus will screen one of the nuclear charges of B from the A nucleus. Thus the force exerted on one of the nuclei by the pair of electrons in a molecular orbital for the limiting state of the separated atoms is equivalent to that obtained by placing one negative charge at the position of the second nucleus. Of the pair of electrons in a given homonuclear molecular orbital, only one is effective in binding either nucleus in the limit of the separated atoms. If there are a total of N electrons in the molecule, there will be N/2 occupied molecular orbitals since each molecular orbital contains a pair of electrons. Therefore, a total of N/2 electrons will correlate with each nucleus. The molecule dissociates into two neutral atoms each with a nuclear charge Z = N/2. Thus the N/2 electrons which correlate with each nucleus will exactly cancel the nuclear charge of both nuclei; the final force on the nuclei will be zero. The limiting force exerted on the A nucleus by the pair of electrons in a molecular orbital is $(Z_Ae^2/R^2)(-1)$, that is, the force is equivalent to placing one negative charge at the position of the B nucleus.
Thus we may express the total limiting force on nucleus A as the product of $(Z_Ae^2/R^2)$ with the difference between the number of positive charges on the B nucleus (ZB) and the number of electronic charges which are effective in exerting a force on the A nucleus, (N/2): $F_A = \frac{Z_Ae^2}{R^2}\left(Z_B - \frac{N}{2}\right) \nonumber$ The quantity N/2 is the charge equivalent of the electronic force, the number of charges which when placed at the position of one nucleus exerts the same force on the second nucleus as does the actual charge distribution. The zero force between the separated atoms may be viewed as a result of each electron screening one nuclear charge on one nucleus from the nuclear charge of the other atom. As the atoms approach one another to form a chemical bond, the atomic distributions on each atom become increasingly distorted and charge density is transferred to the binding region between the nuclei. There is a net force of attraction on the nuclei. We may again express the electronic force on the A nucleus in terms of its charge equivalent by multiplying the electronic force of attraction by $R^2/Z_Ae^2$. Because of the distortion of the atomic orbital densities and the formation of molecular orbitals concentrating charge density in the region between the nuclei, the charge density of more than just one electron in each molecular orbital is effective in binding the nuclei. Thus at intermediate R values the charge equivalent of the electronic force exceeds its limiting value of N/2 required to screen the nuclear charge and the result is a force of attraction drawing the nuclei together. When the distance between the nuclei is further decreased to its equilibrium value the force on the nuclei is again equal to zero. At this point, as when R equals infinity, the charge equivalent of the electronic force equals the nuclear charge. However, the state of electrostatic equilibrium in the molecule does not correspond to the charge density in each molecular orbital effectively screening one nuclear charge as it did in the separated atoms. Instead the charge equivalent of the density in each molecular orbital may be less than, equal to, or greater than the limiting value of unity observed for the separated atoms. An orbital which concentrates charge density in the binding region will exert a force on the nuclei with a charge equivalent greater than unity. Such an orbital is called binding as it does more than simply screen one unit of positive charge on each nucleus. The charge equivalent of an orbital which concentrates density in the antibinding regions will be less than the separated atom value of unity. Such an orbital is termed antibinding as the charge density does not screen one unit of positive charge on each nucleus. When the charge equivalent of the force equals unity, it implies that the orbital charge density plays the same role in the molecule as in the separated atoms, that of screening one nuclear charge on B from nucleus A. An orbital with this property is termed nonbinding. Thus, by comparing the charge equivalent of the force exerted by the density in each molecular orbital with its separated atom value of unity, we may classify the orbitals as binding, antibinding or nonbinding:

- Binding orbital: charge equivalent > unity
- Nonbinding orbital: charge equivalent ≈ unity
- Antibinding orbital: charge equivalent < unity
Table 8-1. Charge Equivalents of the Orbital Forces in Homonuclear Diatomic Molecules

| Molecule | 1σg | 1σu | 2σg | 2σu | 1πu | 3σg | 1πg | Sum = ZB | Re (au) |
|---|---|---|---|---|---|---|---|---|---|
| He2 | 1.78 | −0.42 | | | | | | 1.36* | (1.750)* |
| Li2 | 0.70 | 0.68 | 1.62 | | | | | 3.00 | 5.051 |
| Be2 | 1.05 | 1.08 | 2.00 | −0.40 | | | | 3.68* | (3.500)* |
| B2 | 0.98 | 0.98 | 2.32 | −0.48 | 1.20 | | | 5.00 | 3.005 |
| C2 | 0.97 | 0.95 | 2.25 | −0.43 | 1.13† | | | 6.00 | 2.348 |
| N2 | 1.15 | 1.08 | 2.67 | −0.47 | 1.21† | 0.15 | | 7.00 | 2.068 |
| O2 | 1.23 | 1.14 | 2.94 | −0.52 | 1.30† | 0.18 | 0.43 | 8.00 | 2.282 |
| F2 | 1.24 | 1.12 | 2.45 | −0.16 | 1.24† | 0.52 | 0.67† | 9.00 | 2.680 |

*He2 and Be2 form only weakly bound van der Waals molecules at relatively large internuclear separations. The values of R quoted for these molecules are the internuclear distances used in the calculation of the charge equivalents listed in the table.

†All of the values are quoted for double occupation of the orbitals for comparative purposes. The values marked by † are to be doubled to obtain the total electronic force, as they refer to filled π orbitals.

The charge equivalents of the orbital forces for some homonuclear diatomic molecules are given in Table 8-1. Except for He2 and Be2 the sum of the charge equivalents equals the nuclear charge in each case as required for electrostatic equilibrium and the formation of a stable molecule. The charge equivalents of the orbital forces provide a quantitative measure of the role each orbital density plays in the binding of the nuclei in the molecule. The 1σg orbital in He2 is binding. Of the two electronic charges in the 1σg orbital, 1.78 of them are effective in binding the nuclei when R = 1.75 a0 as opposed to the one electronic charge which exerts a force when R = ∞. The 1σu charge density, however, is strongly antibinding. The transfer of charge density to the antibinding regions in the formation of the 1σu orbital in He2 is so great that the charge equivalent is negative in sign. The antibinding nature of this orbital is very evident in the form of its charge distribution (Fig. 8-6). Not only does the charge density in this orbital no longer screen a positive charge on one nucleus from the other, it actually exerts a repulsive force on the nuclei, one which pulls the nuclei further apart from one another. The total electronic force exerted on a nucleus in He2 at R = 1.75 a0 is equivalent to placing (1.78 − 0.42) = 1.36 negative charges at the position of the second nucleus. Since the nuclear charge on helium is 2.00, a total of (2.00 − 1.36) = 0.64 positive charges on the second nucleus are left unscreened by the charge density. The net force on the nuclei is thus a repulsive one. The 1σg and 1σu molecular orbitals are inner shell orbitals in the remaining molecules, Li2 to F2. An idealized inner shell molecular orbital has a charge equivalent of unity, the same as the separated atom value. Each electron should be localized in an atomic-like distribution and screen one nuclear charge. This is illustrated by the 1σg and 1σu charge density maps for the O2 molecule (Fig. 8-8). The charge equivalents of the 1σg and 1σu orbital densities for Li2 (Fig. 8-7) are significantly less than unity. While these orbitals are not as contracted around the nuclei in Li2 as they are in O2 (the nuclear charge for lithium is three compared to eight for oxygen), they are still atomic-like with no effective overlap between the two centres. The charge equivalents are less than the screening value of unity because each of the atomic-like distributions is polarized into the antibinding region and exerts an antibinding force on the nucleus on which it is centred.
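As a consistency check on Table 8-1, the charge equivalents for the molecules at equilibrium should sum to the nuclear charge once the dagger-marked entries (filled π levels) are doubled. A short verification sketch (small rounding residues against Z are expected):

```python
# Verify that the Table 8-1 charge equivalents sum to Z_B at equilibrium.
# Each entry is (value, doubled); doubled=True marks the dagger entries,
# whose tabulated per-pair value must be counted twice (filled pi levels).
table = {
    "Li2": (3, [(0.70, False), (0.68, False), (1.62, False)]),
    "B2":  (5, [(0.98, False), (0.98, False), (2.32, False), (-0.48, False), (1.20, False)]),
    "C2":  (6, [(0.97, False), (0.95, False), (2.25, False), (-0.43, False), (1.13, True)]),
    "N2":  (7, [(1.15, False), (1.08, False), (2.67, False), (-0.47, False), (1.21, True), (0.15, False)]),
    "O2":  (8, [(1.23, False), (1.14, False), (2.94, False), (-0.52, False), (1.30, True), (0.18, False), (0.43, False)]),
    "F2":  (9, [(1.24, False), (1.12, False), (2.45, False), (-0.16, False), (1.24, True), (0.52, False), (0.67, True)]),
}

for molecule, (z_nuclear, entries) in table.items():
    total = sum(value * (2 if doubled else 1) for value, doubled in entries)
    print(f"{molecule}: sum of charge equivalents = {total:.2f} (Z = {z_nuclear})")
```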
The charge equivalents for the 1σg and 1σu density distributions in the remaining molecules are close to unity, indicating that they are essentially nonbinding inner shell orbitals. The slight binding character of the 1σg charge density in O2 and F2 is the result of small inward polarizations of the atomic-like distributions. The 2σg molecular charge density is binding in every molecule. A comparison of the charge equivalents shows that the 2σg charge density is the most binding of all the molecular orbitals in this series of molecules. The charge equivalent of the force exerted by the 2σg density in O2 is almost three times greater than it is for the separated oxygen atoms. This is a result of the large amount of charge density accumulated in the binding region by this orbital (Fig. 8-8). The 2σu orbital is uniformly strongly antibinding. The extreme concentration of charge density in the antibinding regions observed for the 2σu orbital is typified by the 2σu density plot for O2 (Fig. 8-8). It is obvious that the density in this orbital, as that in the 1σu orbital of He2, will pull the nuclei away from one another rather than bind them together. Notice that Be2 is analogous to He2 except that the 2σg and 2σu orbitals rather than the 1σg and 1σu orbital densities are involved. In Be2 the 1σg and 1σu densities are nonbinding and together simply screen two nuclear charges on each atom. The 2σg density exerts a binding force equivalent to one electronic charge in excess of the simple screening effect. The 2σu orbital density, however, leaves a single nuclear charge unscreened which cancels the net attractive force of the 2σg density and in addition exerts an antibinding force equivalent to increasing the nuclear charge by 0.40 units. The beryllium molecule is therefore unstable at this value of R. The 1πu orbital density is binding in each case, but only weakly so. The charge density of a πu molecular orbital is concentrated around the internuclear axis rather than along it as in a σg molecular orbital. Consequently the 1πu density distributions exert only weak binding forces on the nuclei. In fact, the inner shell 1σg charge density in F2 exerts as large a binding force on the nuclei as does a pair of electrons in the 1πu orbital. The charge equivalent of the 3σg orbital density is less than unity in the three cases where it is occupied. Thus it is an antibinding orbital even though it is of σg symmetry. The charge density contours for this orbital in O2 (Fig. 8-8) show that charge density is accumulated in the region between the nuclei as expected for an orbital of σg symmetry. However, the 3σg orbital correlates with a 2pσ atomic orbital on each nucleus. The strong participation of the 2pσ orbitals in the molecular orbital is evidenced by the node at each nucleus and by the concentration of charge density on both sides of each nucleus. The concentration of charge in the antibinding regions nullifies the binding effect arising from the accumulation of charge density in the region between the nuclei. The net result is an attractive force considerably less than that required to screen one positive charge on each nucleus. The 1πg orbital density is only weakly antibinding just as the 1πu density is only weakly binding. The formation of the 1πg orbital results in the removal of charge density from the binding region, not from along the internuclear axis but instead from regions around the axis.
Notice that unlike the 2σu orbital densities, the 1πg charge density is antibinding only in the sense that it does not screen its share of nuclear charge, not because it exerts a force which draws the nuclei apart.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.05%3A_The_Relative_Binding_Ability_of_Molecular_Orbitals.txt
The molecular orbitals which describe the motion of a single electron in a molecule containing two unequal nuclear charges will not exhibit the g and u symmetry properties of the homonuclear diatomic case. The molecular orbitals in the heteronuclear case will in general be concentrated more around one nucleus than the other. The orbitals may still be classified as σ, π, δ, etc. because the molecular axis is still an axis of symmetry. In simple numerical calculations the molecular orbitals are sometimes approximated by the sum and difference of single atomic orbitals on each centre, their limiting form. The molecular orbital is said to be approximated mathematically by a linear combination of atomic orbitals and the technique is known as the LCAO-MO method. It must be understood that the LCAO-MO method using a limited number of atomic orbitals provides only an approximation to the true molecular orbital. The concept of a molecular orbital is completely independent of the additional concept of approximating it in terms of atomic orbitals, except for the case of the separated atoms. However, by using a large number of atomic orbitals centred on each nucleus in the construction of a single molecular orbital, sufficient mathematical flexibility can be achieved to approximate the exact form of the molecular orbital very closely. While the LCAO approximation using a limited number of atomic orbitals is in general a poor one for quantitative purposes, it does provide a useful guide for the prediction of the qualitative features of the molecular orbital. There are two simple conditions which must be met if atomic orbitals on different centres are to interact significantly and form a molecular orbital which is delocalized over the whole molecule. Both atomic orbitals must have approximately the same orbital energy and they must possess the same symmetry characteristics with respect to the internuclear axis. We shall consider the molecular orbitals in LiH, CH and HF to illustrate how molecular orbital theory describes the bonding in heteronuclear molecules, and to see how well the forms of the orbitals in these molecules can be rationalized in terms of the symmetry and energy criteria set out above. The 1s and 2s atomic orbitals and the 2p orbital which is directed along the bond axis are all left unchanged by a rotation about the symmetry axis. They may therefore form molecular orbitals of σ symmetry in the diatomic hydride molecules. The 2p orbitals which are perpendicular to the bond axis will be of π symmetry and may form molecular orbitals with this same symmetry. The energies and symmetry properties of the relevant atomic orbitals and the electronic configurations of the atoms and molecules are given in Table 8-2.

Table 8-2. Atomic Orbital Energies and Symmetry Properties

| Orbital | H (au) | Li (au) | C (au) | F (au) | Symmetry |
|---|---|---|---|---|---|
| 1s | −0.5 | −2.48 | −11.33 | −26.38 | σ |
| 2s | | −0.20 | −0.71 | −1.57 | σ |
| 2p | | | −0.43 | −0.73 | σ and π |

| Atomic Configurations | Molecular Configurations |
|---|---|
| Li 1s²2s¹ | LiH 1σ²2σ² |
| C 1s²2s²2p² | CH 1σ²2σ²3σ²1π¹ |
| F 1s²2s²2p⁵ | HF 1σ²2σ²3σ²1π⁴ |

Density diagrams of the molecular orbitals for the LiH, CH, and HF molecules are illustrated in Fig. 8-9. Fig. 8-9. Contour maps of the molecular orbital charge densities of the LiH, CH, HF diatomic hydrides. The nodes are indicated by dashed lines. Click here to see contour values. The 1s orbital energies of Li, C and F all lie well below that of the H 1s orbital. The charge densities of these inner shell orbitals are tightly bound to their respective nuclei.
They should not, therefore, be much affected by the field of the proton or interact significantly with the H 1s orbital. The molecular orbital of lowest energy in these molecules, the 1σ molecular orbital, should be essentially nonbinding and resemble the doubly-occupied 1s atomic orbital on Li, C and F respectively. These predictions are borne out by the 1σ orbital density distributions (Fig. 8-9). They consist of nearly spherical contours centred on the Li, C and F nuclei, the radius of the outer contour being less than the bond length in each case. The forces exerted on the proton by the 1σ charge distributions are equivalent to placing two negative charges at the position of the heavy nucleus in each case. The charge density in the 1σ molecular orbital simply screens two of the nuclear charges on the heavy nucleus from the proton. This same screening effect is obtained for the 1s² charge distribution when the molecules dissociate into atoms. Thus the 1s atomic orbitals of Li, C and F are not much affected by the formation of the molecule and the 1σ charge density is nonbinding as far as the proton is concerned. The 1σ atomic-like distributions are slightly polarized. In LiH the 1σ density is polarized away from the proton to a significant extent while in CH and HF it is slightly polarized towards the proton. Thus the 1σ charge density exerts an antibinding force on the Li nucleus and a small binding force on the C and F nuclei. The energies of the 2s atomic orbitals decrease (the electron is more tightly bound) from Li to F as expected on the basis of the increase in the effective nuclear charge from Li to F. The 2s orbital on Li is large and diffuse and will overlap extensively with the 1s orbital on H. However, the 2s electron on Li is considerably less tightly bound than is the 1s electron on H. Thus the charge density of the 2σ molecular orbital in LiH will be localized in the region of the proton, corresponding to the transfer of the 2s electron on Li to the region of lower potential energy offered by the 1s orbital on H. This is approximately correct as shown by the almost complete concentration of the charge density in the region of the proton in the 2σ orbital density map for LiH. The small amount of density which does remain around the Li nucleus is polarized away from the proton. The 1σ and 2σ densities are polarized in a direction counter to the direction of charge transfer as required in ionic binding. The inwardly polarized accumulation of 2σ charge density centred on the proton binds both nuclei. The 1σ molecular orbital in LiH is to a good approximation a polarized doubly-occupied 1s orbital on Li, and the 2σ molecular orbital is, to a somewhat poorer approximation, a doubly-occupied and polarized 1s orbital on H. Our previous discussion of the bonding in LiH indicated that the binding is ionic, corresponding to the description Li+(1s²)H−(1s²). The molecular orbital description of an ionic bond is similar in that the molecular orbitals in the ionic extreme are localized in the regions of the individual nuclei, rather than being delocalized over both nuclei as they are for a covalent bond. The matching of the 2s orbital energy with the H 1s orbital energy is closer in the case of C than it is for Li. Correspondingly, the 2σ charge density in CH is delocalized over both nuclei rather than concentrated in the region of just one nucleus as it is in the LiH molecule. There is a considerable buildup of charge density in the binding region which is shared by both nuclei.
The 2σ charge density exerts a large binding force on both the H and C nuclei. This is the molecular orbital description of an interaction which is essentially covalent in character. The 2s orbital energy of F is considerably lower than that of the H 1s orbital. The 2σ orbital charge density in HF, therefore, approximately resembles a localized 2s orbital on F. It is strongly polarized and distorted by the proton, but the amount of charge transferred to the region between the nuclei is not as large as in CH. The 2σ orbital in HF plays a less important role in binding the proton than it does in CH. The 3σ molecular orbital in CH and HF will result primarily from the overlap of the 2pσ orbital on C and F, with the 1s orbital on H. The 2p-like character of the 3σ molecular orbital in both CH and HF is evident in the density diagrams (Fig. 8-9). In CH the 1s orbital of H interacts strongly with both the 2s and 2pσ orbitals on C. In terms of the forces exerted on the nuclei, the 2σ charge density is strongly binding for both C and H, while the 3σ charge density is only very weakly binding for H and is actually antibinding for the C. This antibinding effect is a result of the large accumulation of charge density in the region behind the C nucleus. In HF, the H 1s orbital interacts only slightly with the 2s orbital on F, but it interacts very strongly with the 2pσ orbital in the formation of the 3σ molecular orbital. The 3σ charge density exerts a large binding force on the proton. Thus the proton is bound primarily by the 2σ charge density in CH and by the 3σ charge density in HF. The 3σ charge density in HF is primarily centred on the F nucleus and roughly resembles a 2pσ orbital. Although no density contours are actually centred on the proton, the proton is embedded well within the orbital density distribution. This is a molecular orbital description of a highly polar bond. The 3σ orbital charge density exerts a force on the F and C nuclei in a direction away from the proton. The molecular orbitals which involve pσ orbitals are characteristically strongly polarized in a direction away from the bond in the region of the nucleus on which the p orbital is centred. Compare, for example, the 3σ orbitals of CH and HF with the 3σg molecular orbital of the homonuclear diatomic molecules. When the C and H atoms are widely separated, we can consider the carbon atom to have one 2p electron in the 2pσ orbital which lies along the bond axis, and the second 2p electron in one of the 2pπ orbitals which are perpendicular to the bond. The F atom has five 2p electrons and of these one may be placed in the 2pσ orbital; the remaining four 2p electrons will then completely occupy the 2pπ orbitals. The singly-occupied 2pσ orbitals on F and C eventually interact with the singly-occupied 1s orbital on H to form the doubly-occupied 3σ molecular orbital in HF and CH. The remaining 2p electrons, those of π symmetry, will occupy the 1π molecular orbital. The H atom does not possess an orbital of π symmetry in its valence shell and the vacant 2pπ orbital on H is too high in energy (−0.125 au) to interact significantly with the 2pπ orbitals on C and F. Thus the 1π molecular orbital is atomic-like, centred on the F and C nuclei, and is essentially nonbinding (Fig. 8-9). The 1π molecular orbital resembles a 2pπ atomic orbital in each case, but one which is polarized in the direction of the proton. The 1π orbitals of CH and HF illustrate an interesting and general result.
In the formation of a bond between different atoms, the charge density in the σ orbitals is transferred from the least to the most electronegative atom. However, the charge density of π symmetry, if any is present, is invariably transferred, or at least polarized, in the opposite direction, towards the least electronegative atom. Although the amount of charge density transferred is less in the formation of the π orbitals than in the σ orbitals, one effect increases with the other. Thus the polarization is more pronounced in HF than in CH. The three examples considered above demonstrate the essential points of a molecular orbital description of the complete range of chemical bonding. In the ionic extreme of LiH the charge density of the bonding molecular orbital is localized around the proton. In CH the valence charge density is more evenly shared by both nuclei and the bond is covalent. The motions of the electrons in HF are governed largely by the potential field of the F nucleus. This is evidenced by the appearance of the molecular orbital charge distributions. The proton is, however, encompassed by the valence charge density and the result is a polar bond.
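The two mixing conditions stated at the beginning of this section, matched symmetry and comparable orbital energy, can be turned into a crude screening rule. The sketch below pairs the H 1s orbital with each heavy-atom valence orbital from Table 8-2 and flags a strong interaction when the symmetries match and the energy gap is small; the 0.5 au cutoff is an arbitrary illustrative choice, not a value fixed by the theory:

```python
H_1S_ENERGY = -0.5  # au, from Table 8-2

# (atom, orbital): (orbital energy in au, symmetry about the bond axis)
heavy_atom_orbitals = {
    ("Li", "2s"): (-0.20, "sigma"),
    ("C", "2s"): (-0.71, "sigma"),
    ("C", "2p_sigma"): (-0.43, "sigma"),
    ("C", "2p_pi"): (-0.43, "pi"),
    ("F", "2s"): (-1.57, "sigma"),
    ("F", "2p_sigma"): (-0.73, "sigma"),
    ("F", "2p_pi"): (-0.73, "pi"),
}

GAP_THRESHOLD = 0.5  # au; illustrative cutoff only

for (atom, orbital), (energy, symmetry) in heavy_atom_orbitals.items():
    gap = abs(energy - H_1S_ENERGY)
    strong = (symmetry == "sigma") and (gap < GAP_THRESHOLD)
    verdict = "strong mixing expected" if strong else "weak or no mixing"
    # Note: for LiH the near-degeneracy leads to charge transfer (an ionic
    # bond) rather than covalent sharing, as discussed in the text.
    print(f"H 1s with {atom} {orbital}: gap = {gap:.2f} au, {symmetry} -> {verdict}")
```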
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.06%3A_Molecular_Orbitals_for_Heteronuclear_Molecules.txt
The concept of a molecular orbital is readily extended to provide a description of the electronic structure of a polyatomic molecule. Indeed molecular orbital theory forms the basis for most of the quantitative theoretical investigations of the properties of large molecules. In general a molecular orbital in a polyatomic system extends over all the nuclei in a molecule and it is essential, if we are to understand and predict the spatial properties of the orbitals, that we make use of the symmetry properties possessed by the nuclear framework. An analysis of the molecular orbitals for the water molecule provides a good introduction to the way in which the symmetry of a molecule determines the forms of the molecular orbitals in a polyatomic system. There are three symmetry operations which transform the nuclear framework of the water molecule into itself and hence leave the nuclear potential field in which the electrons move unchanged (Fig. 8-10). Fig. 8-10. Symmetry elements for H2O. The bottom two diagrams illustrate the transformations of the 2py orbital on oxygen under the C2 and σ2 symmetry operations. For each symmetry operation there is a corresponding symmetry element. The symmetry elements for the water molecule are a two-fold axis of rotation C2 and two planes of symmetry σ1 and σ2 (Fig. 8-10). A rotation of 180° about the C2 axis leaves the oxygen nucleus unchanged and interchanges the two hydrogen nuclei. A reflection through the plane labelled σ1 leaves all the nuclear positions unchanged while a reflection through σ2 interchanges the two protons. The symmetry operations associated with the three symmetry elements either leave the nuclear positions unchanged or interchange symmetrically equivalent (and hence indistinguishable) nuclei. Every molecular orbital for the water molecule must, under the same symmetry operations, be left unchanged or undergo a change in sign. Similarly we may use the symmetry transformation properties of the atomic orbitals on oxygen and hydrogen together with their relative orbital energy values to determine the primary atomic components of each molecular orbital in a simple LCAO approximation to the exact molecular orbitals. Only atomic orbitals which transform in the same way under the symmetry operations may be combined to form a molecular orbital of a given symmetry. The symmetry transformation properties of the atomic orbitals on oxygen and hydrogen are given in Table 8-3. A value of +1 or −1 opposite a given orbital in the table indicates that the orbital is unchanged or changed in sign respectively by a particular symmetry operation.

Table 8-3. Symmetry Properties and Orbital Energies for the Water Molecule

| Atomic orbitals on oxygen | C2 | σ1 | σ2 | Symmetry classification | Orbital energy (au) |
|---|---|---|---|---|---|
| 1s | +1 | +1 | +1 | a1 | −20.669 |
| 2s | +1 | +1 | +1 | a1 | −1.244 |
| 2pz | +1 | +1 | +1 | a1 | −0.632 |
| 2px | −1 | +1 | −1 | b2 | −0.632 |
| 2py | −1 | −1 | +1 | b1 | −0.632 |

| Atomic orbitals on hydrogen | C2 | σ1 | σ2 | Symmetry classification | Orbital energy (au) |
|---|---|---|---|---|---|
| (1s1 + 1s2) | +1 | +1 | +1 | a1 | −0.500 |
| (1s1 − 1s2) | −1 | +1 | −1 | b2 | −0.500 |

Molecular orbital energies for H2O (au):

| 1a1 | 2a1 | 1b2 | 3a1 | 1b1 |
|---|---|---|---|---|
| −20.565 | −1.339 | −0.728 | −0.595 | −0.521 |

The 1s, 2s and 2pz orbitals of oxygen are symmetric (i.e., unchanged) with respect to all three symmetry operations. They are given the symmetry classification a1. The 2px orbital, since it possesses a node in the σ2 plane (and hence is of different sign on each side of the plane), changes sign when reflected through the σ2 plane or when rotated by 180° about the C2 axis. It is classified as a b2 orbital.
The 2py orbital is antisymmetric with respect to the rotation operator and to a reflection through the σ1 plane. It is labelled b1. The hydrogen 1s orbitals when considered separately are neither unchanged nor changed in sign by the rotation operator or by a reflection through the σ2 plane. Instead both these operations interchange these orbitals. The hydrogen orbitals are said to be symmetrically equivalent and when considered individually they do not reflect the symmetry properties of the molecule. However, the two linear combinations (1s1 + 1s2) and (1s1 - 1s2) do behave in the required manner. The former is symmetric under all three operations and is of a1 symmetry while the latter is antisymmetric with respect to the rotation operator and to a reflection through the plane σ2 and is of b2 symmetry. The molecular orbitals in the water molecule are classified as a1, b1 or b2 orbitals, as determined by their symmetry properties. This labelling of the orbitals is analogous to the use of the σ-π and g-u classification in linear molecules. In addition to the symmetry properties of the atomic orbitals we must consider their relative energies to determine which orbitals will overlap significantly and form delocalized molecular orbitals. The 1s atomic orbital on oxygen possesses a much lower energy than any of the other orbitals of a1 symmetry and should not interact significantly with them. The molecular orbital of lowest energy in H2O should therefore correspond to an inner shell 1s atomic-like orbital centred on the oxygen. This is the first orbital of a1 symmetry and it is labelled 1a1. Reference to the forms of the charge density contours for the 1a1 molecular orbital (Fig. 8-11) substantiates the above remarks regarding the properties of this orbital. Fig. 8-11. Contour maps of the molecular orbital charge densities for H2O. The maps for the 1a1, 2a1, 3a1 and 1b2 orbitals (all doubly-occupied) are shown in the plane of the nuclei. The 1b1 orbital has a node in this plane and hence the contour map for the 1b1 orbital is shown in the plane perpendicular to the molecular plane. The total molecular charge density for H2O is also illustrated. The density distributions were calculated from the wave function determined by R. M. Pitzer, S. Aung and S. I. Chan, J. Chem. Phys. 49, 2071 (1968). Notice that the orbital energy of the 1a1 molecular orbital is very similar to that for the 1s atomic orbital on oxygen. The 1a1 orbital in H2O is, therefore, similar to the 1σ inner shell molecular orbitals of the diatomic hydrides. The atomic orbital of next lowest energy in this system is the 2s orbital of a1 symmetry on oxygen. We might anticipate that the extent to which this orbital will overlap with the (1s1 + 1s2) combination of orbitals on the hydrogen atoms to form the 2a1 molecular orbital will be intermediate between that found for the 2σ molecular orbitals in the diatomic hydrides CH and HF (Fig. 8-9). The 2σ orbital in CH results from a strong mixing of the 2s orbital on carbon and the hydrogen 1s orbital. In HF the participation of the hydrogen orbital in the 2σ orbital is greatly reduced, a result of the lower energy of the 2s atomic orbital on fluorine as compared to that of the 2s orbital on carbon. Aside from the presence of the second proton, the general form and nodal structure of the 2a1 density distribution in the water molecule is remarkably similar to the 2σ distributions in CH and HF, and particularly to the latter.
The charge density accumulated on the bonded side of the oxygen nucleus in the 2a1 orbital is localized near this nucleus as the corresponding charge increase in the 2σ orbital of HF is localized near the fluorine. The charge density of the 2a1 molecular orbital accumulated in the region between the three nuclei will exert a force drawing all three nuclei together. The 2a1 orbital is a binding orbital. Although the three 2p atomic orbitals are degenerate in the oxygen atom the presence of the two protons results in each 2p orbital experiencing a different potential field in the water molecule. The nonequivalence of the 2p orbitals in the water molecule is evidenced by all three possessing different symmetry properties. The three 2p orbitals will interact to different extents with the protons and their energies will differ. The 2px orbital interacts most strongly with the protons and forms an orbital of b2 symmetry by overlapping with the (1s1 - 1s2) combination of 1s orbitals on the hydrogens. The charge density contours for the 1b2 orbital indicate that this simple LCAO description accounts for the principal features of this molecular orbital. The 1b2 orbital concentrates charge density along each O-H bond axis and draws the nuclei together. The charge density of the 1b2 orbital binds all three nuclei. In terms of the forces exerted on the nuclei the 2a1 and 1b2 molecular orbitals are about equally effective in binding the protons in the water molecule. The 2pz orbital may also overlap with the hydrogen 1s orbitals, the (1s1 + 1s2) a1 combination, and the result is the 3a1 molecular orbital. This orbital is concentrated along the z-axis and charge density is accumulated on both the bonded and nonbonded sides of the oxygen nucleus. It exerts a binding force on the protons and an antibinding force on the oxygen nucleus, a behaviour similar to that noted before for the 3σ orbitals in CH and HF. The 2py orbital is not of the correct symmetry to overlap with the hydrogen 1s orbitals. To a first approximation the 1b1 molecular orbital will be simply a 2py atomic orbital on the oxygen, perpendicular to the plane of the molecule. Reference to Fig. 8-11 indicates that the 1b1 orbital does resemble a 2p atomic orbital on oxygen but one which is polarized into the molecule by the field of the protons. The 1b1 molecular orbital of H2O thus resembles a single component of the 1π molecular orbitals of the diatomic hydrides. The 1b1 and the 1π orbitals are essentially nonbinding. They exert a small binding force on the heavy nuclei because of the slight polarization. The force exerted on the protons by the pair of electrons in the 1b1 orbital is slightly less than that required to balance the force of repulsion exerted by two of the nuclear charges on the oxygen nucleus. The 1b1 and 1π electrons basically do no more than partially screen nuclear charge on the heavy nuclei from the protons. In summary, the electronic configuration of the water molecule as determined by molecular orbital theory is 1a1² 2a1² 1b2² 3a1² 1b1². The 1a1 orbital is a nonbinding inner shell orbital. The pair of electrons in the 1a1 orbital simply screen two of the nuclear charges on the oxygen from the protons. The 2a1, 1b2 and 3a1 orbitals accumulate charge density in the region between the nuclei and the charge densities in these orbitals are responsible for binding the protons in the water molecule.
Aside from being polarized by the presence of the protons, the 1b1 orbital is a non-interacting 2py orbital on the oxygen and is essentially nonbinding. Before closing this introductory account of molecular orbital theory, brief mention should be made of the very particular success which the application of this theory has had in the understanding of the chemistry of a class of organic molecules called conjugated systems. Conjugated molecules are planar organic molecules consisting of a framework of carbon atoms joined in chains or rings by alternating single and double bonds. Some examples are shown as structural formulae in the original figure. In the structural formulae for the cyclic molecules, e.g., benzene and naphthalene, it is usual not to label the positions of the carbon and hydrogen atoms by their symbols. A carbon atom joined to just two other carbon atoms is in addition bonded to a hydrogen atom, the C–H bond axis being projected out of the ring in the plane of the carbon framework and bisecting the CCC bond angle. The notion of these molecules possessing alternating single and double bonds is a result of our attempt to describe the bonding in terms of conventional chemical structures. In reality all six C–C bonds in benzene are identical and the C–C bonds in the other two examples possess properties intermediate between those for single and double bonds. In other words, the pairs of electrons forming the second or π bonds are not localized between specific carbon atoms but are delocalized over the whole network of carbon atoms, a situation ideally suited for a molecular orbital description. We may consider each carbon atom in a conjugated molecule to be sp2 hybridized and bonded through these hybrid orbitals to three other atoms in the plane. This accounts for the bonding of the hydrogens and for the formation of the singly-bonded carbon network. The electrons forming these bonds are called σ electrons. The axis of the remaining 2p orbital on each carbon atom is directed perpendicular to the plane of the molecule and contains a single electron, called a π electron. A simple adaptation of molecular orbital theory, called Hückel theory, which takes the σ bonds for granted and approximates the molecular orbitals of the π electrons in terms of linear combinations of the 2pπ atomic orbitals on each carbon atom, provides a remarkably good explanation of the properties of conjugated molecules. Hückel molecular orbital theory and its applications are treated in a number of books, some of which are referred to at the end of this chapter.
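To make the Hückel procedure sketched in the last paragraph concrete, the following example (an addition to the text, not part of the original) builds and diagonalizes the Hückel matrix for benzene. In the usual parametrization the diagonal elements are the Coulomb integral α and the off-diagonal elements are β for bonded neighbours and zero otherwise; working in units where α = 0 and β = -1 gives the familiar π orbital energies α + 2β, α + β (doubly degenerate), α - β (doubly degenerate) and α - 2β.

```python
import numpy as np

# Hückel matrix for benzene: one 2p-pi orbital per carbon, ring connectivity.
# Units: alpha = 0, beta = -1 (beta is a negative energy).
n = 6
H = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n           # carbon i is bonded to carbon i+1 around the ring
    H[i, j] = H[j, i] = -1.0  # beta

energies = np.linalg.eigvalsh(H)
print(energies)  # [-2. -1. -1.  1.  1.  2.] -> alpha+2b, alpha+b (x2), alpha-b (x2), alpha-2b
```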
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.07%3A_Molecular_Orbitals_for_Polyatomic_Molecules.txt
1. C. A. Coulson, Valence, Second Edition, Oxford University Press, 1961.
2. J. N. Murrell, S. F. A. Kettle and J. M. Tedder, Valence Theory, John Wiley and Sons Ltd., 1965.
3. L. Salem, The Molecular Orbital Theory of Conjugated Systems, W. A. Benjamin Inc., 1966, Chapter 1.

8.E: Exercises

1. (a) Give the molecular orbital electronic configurations of the N2 and Ne2 molecules. (b) Does the difference in the number of occupied bonding and anti-bonding orbitals agree with the number of electron pair bonds which a Lewis structure would predict for these two molecules?

2. Complete the correlation diagram (Fig. 8-4) for the homonuclear diatomic molecular orbitals by correlating each molecular orbital with an atomic orbital of the united atom. The symmetry and nodal property of each orbital must be conserved in the correlation. Starting with the molecular orbital of lowest energy each molecular orbital will in turn correlate with the atomic orbital of lowest energy which possesses the same symmetry. All atomic orbitals with even l values are of g symmetry and those with odd l values are of u symmetry.

3. The total and molecular orbital charge distributions of the bifluoride ion (FHF)- are shown in Fig. 8-12. Fig. 8-12. Contour maps of the total molecular charge distribution and the molecular orbital densities for the (FHF)- ion, which has the electronic configuration 1σg² 1σu² 2σg² 2σu² 3σg² 1πu⁴ 1πg⁴ 3σu². Note that this electron configuration is formally identical to that for the unstable Ne2 molecule. The binding properties of the orbitals in (FHF)- are considerably altered from the homonuclear diatomic case by the presence of the proton, and the ion is a stable species. (The 1σg and 1σu densities are not shown.) This negatively-charged molecule results from the reaction of a fluoride ion with a hydrogen fluoride molecule. The molecule has a linear, symmetric structure with the proton forming a bond between the fluorines. The molecular orbitals thus have the same symmetry classification (σ or π and g or u) as do the orbitals for the homonuclear diatomic molecules. (a) Give a qualitative comparison of the forms and binding properties of the molecular orbitals for (FHF)- with those for the homonuclear diatomic molecule F2. (The molecular orbitals for F2 are very similar to those shown in Fig. 8-8 for O2. The 3σu orbital is not occupied in the ground state of F2.) The 1σg and 1σu molecular orbital densities for (FHF)- are not illustrated since they, like the corresponding orbitals in the homonuclear diatomics, are simply inner shell 1s atomic-like distributions centred on the fluorine nuclei. (b) Account for the general forms and the primary atomic orbital components of the molecular orbitals in (FHF)- in terms of the simple LCAO approximation using symmetry properties and the relative energies of the orbitals on H and F.

4. The CO2 molecule is another linear symmetric triatomic molecule possessing the same symmetry properties as do the homonuclear diatomic molecules. The molecular orbitals will be of σ or π and g or u symmetry. From a knowledge of the symmetries of the 1s, 2s and 2p atomic orbitals and their relative energies as given for C and O in Fig. 5-3 predict the electronic configuration of the CO2 molecule in terms of molecular orbitals.

5. The CO molecule is isoelectronic with the N2 molecule and can be thought of as being derived from N2 by transferring one proton from one N nucleus to the other.
The molecular orbitals of CO will be of σ or π symmetry but will not exhibit any g or u dependence since the centre of symmetry has been lost. Derive the electronic configuration of CO by considering how each molecular orbital of N2 will be changed as one N nuclear charge is increased by one unit and the other is decreased by one unit. As a hint, the 1σg orbital of N2 will become the 1σ orbital of CO. Reference to Fig. 5-3 shows the 1s orbital of O to be considerably more stable than the 1s orbital of C. Thus the 1σg orbital of N2, which is concentrated equally in 1s-like atomic orbitals on both N nuclei, becomes a 1s-like atomic orbital on O. Similarly the 1σu orbital of N2 becomes a 1s-like orbital on C.

6. Using the 1s, 2s, 2pσ and 2pπ atomic orbitals on C and the 1s orbital on H discuss the simple LCAO forms expected for the molecular orbitals of the linear form of methylene, CH2. One can consider this problem from the point of view of how the molecular orbitals of CH given in the text would change if a second proton were brought up to the nonbonded side of the C atom.

7. Construct a correlation diagram for the HF molecule which relates the molecular orbitals with the orbitals of the separated atoms. Arrange the atomic orbitals of H and F on the right hand side of the diagram in order of increasing energy. The energies of the 1σ, 2σ, 3σ, and 1π molecular orbitals in the HF molecule are -26.29 au, -1.60 au, -0.77 au and -0.65 au respectively. Is the energy of the 1s orbital on F much affected by the formation of the chemical bond with H?

8. Construct a correlation diagram for the CO molecule which relates the molecular orbitals with those of the separated atoms. Arrange the atomic orbitals of both C and O on the right hand side of the diagram in the order of increasing energy. Only atomic orbitals of the same symmetry can interact to form a molecular orbital and the resulting molecular orbital will have this same symmetry. The energies of the molecular orbitals in CO in au are 1σ(-20.67), 2σ(-11.37), 3σ(-1.53), 4σ(-0.81), 5σ(-0.56), 1π(-0.65). Recall that the 2p atomic orbitals on C and O may form molecular orbitals of both σ and π symmetry.

9. The correlation diagram in Problem 7 correlates the separated atom orbitals for R = ∞ with the molecular orbitals at Re, the equilibrium internuclear distance in the molecule. Continue the correlation of the orbitals to the limiting case of R = 0, the united atom. When the distance between the F nucleus and the proton is decreased to zero the result is a neon nucleus, and hence a neon atom. The electronic energy of each molecular orbital should correlate smoothly with an atomic energy level of the united atom, the symmetry again being conserved. For example, the 1σ molecular orbital will correlate with the 1s atomic orbital of the Ne atom. Do the spacings between the energy levels for HF resemble those for the united or separated atoms more closely? That is, is the electronic structure of HF best compared to that of the Ne atom or to that of perturbed energy levels of the F and H atoms?
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/08%3A_Molecular_Orbitals/8.08%3A_Further_Reading.txt
The beginning student of chemistry must have a knowledge of the theory which forms the basis for our understanding of chemistry and he must acquire this knowledge before he has the mathematical background required for a rigorous course of study in quantum mechanics. The present approach is designed to meet this need by stressing the physical or observable aspects of the theory through an extensive use of the electronic charge density. The manner in which the negative charge of an atom or a molecule is arranged in three-dimensional space is determined by the electronic charge density distribution. Thus, it determines directly the sizes and shapes of molecules, their electrical moments and, indeed, all of their chemical and physical properties. Since the charge density describes the distribution of negative charge in real space, it is a physically measurable quantity. Consequently, when used as a basis for the discussion of chemistry, the charge density allows for a direct physical picture and interpretation. In particular, the forces exerted on a nucleus in a molecule by the other nuclei and by the electronic charge density may be rigorously calculated and interpreted in terms of classical electrostatics. Thus, given the molecular charge distribution, the stability of a chemical bond may be discussed in terms of the electrostatic requirement of achieving a zero force on the nuclei in the molecule. A chemical bond is the result of the accumulation of negative charge density in the region between any pair of nuclei to an extent sufficient to balance the forces of repulsion. This is true of any chemical bond, ionic or covalent, and even of the shallow minimum in the potential curves arising from van der Waals' forces. In this treatment, the classifications of bonding, ionic or covalent, are retained, but they are given physical definitions in terms of the actual distribution of charge within the molecule. In covalent bonding the valence charge density is distributed over the whole molecule and the attractive forces responsible for binding the nuclei are exerted by the charge density equally shared between them in the internuclear region. In ionic bonding, the valence charge density is localized in the region of a single nucleus and in this extreme of binding the charge density localized on a single nucleus exerts the attractive force which binds both nuclei. This web page begins with a discussion of the need for a new mechanics to describe the events at the atomic level. This is illustrated through a discussion of experiments with electrons and light, which are found to be inexplicable in terms of the mechanics of Newton. The basic concepts of the quantum description of a bound electron, such as quantization, degeneracy and its probabilistic aspect, are introduced by contrasting the quantum and classical results for similar one-dimensional systems. The atomic orbital description of the many-electron atom and the Pauli exclusion principle are considered in some detail, and the experimental consequences of their predictions regarding the energy, angular momentum and magnetic properties of atoms are illustrated. The alternative interpretation of the probability distribution (for a stationary state of an atom) as a representation of a static distribution of electronic charge in real space is stressed, in preparation for the discussion of the chemical bond. 
Chemical binding is discussed in terms of the molecular charge distribution and the forces which it exerts on the nuclei, an approach which may be rigorously presented using electrostatic concepts. The discussion is enhanced through the extensive use of diagrams to illustrate both the molecular charge distributions and the changes in the atomic charge distributions accompanying the formation of a chemical bond. The above topics are covered in the first seven sections of this web page. The final section is for the reader interested in the extension of the orbital concept to the molecular case. An elementary account of the use of symmetry in predicting the electronic structure of molecules is given in this section.

Hamilton, 1970

Acknowledgement

The physical picture of chemical binding afforded by the study of molecular charge distributions has been forced to await the availability of molecular wave functions of considerable quality. The author, together with the reader who finds the approach presented in this volume helpful in his understanding of chemistry, is indebted to the people who overcame the formidable mathematical obstacles encountered in obtaining these wave functions. This webpage is dedicated to Pamela, Carolyn, Kimberly and Suzanne.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/An_Introduction_to_the_Electronic_Structure_of_Atoms_and_Molecules_(Bader)/09%3A_Preface_to_Calculations.txt
Spectroscopy generally is defined as the area of science concerned with the absorption, emission, and scattering of electromagnetic radiation by atoms and molecules, which may be in the gas, liquid, or solid phase. Visible electromagnetic radiation is called light, although the terms light, radiation, and electromagnetic radiation can be used interchangeably. You will discover some properties of electromagnetic radiation in Activities 1 and 2. Spectroscopy played a key role in the development of quantum mechanics and is essential to understanding molecular properties and the results of spectroscopic experiments. It is used as a “stepping stone” to take us to the concepts of quantum mechanics and the quantum mechanical description of molecular properties in order to make the discussion more concrete and less abstract and mathematical. A spectrum is a graph that shows the intensity of radiation at different wavelengths or the response of the atomic or molecular system to different wavelengths of the radiation. Examples of absorption and fluorescence spectra are shown in Figures $1$ and $2$. An absorption spectrum shows how much light is absorbed by a sample at each wavelength of the radiation. Absorption spectra generally are displayed in one of three different ways: as a plot of either the transmission (T), absorbance (A), or the absorption coefficient (ε) on the y-axis with the wavelength on the x-axis. Sometimes the absorbance is called the optical density (OD). If we define I0 as the intensity of light incident on a sample, I as the intensity of the light transmitted by the sample, d as the thickness of the sample, and c as the concentration of the absorbing species in the sample, then $T = \dfrac {I}{I_0} \label {1.1}$ $A = log \dfrac {I_0}{I} \label {1.2}$ $\epsilon = \dfrac {1}{dc} log_{10} \left (\dfrac {I_0}{I} \right ) \label {1.3}$ Equation $\ref{1.3}$ is a rearranged form of Beer’s law, as developed in a Problem at the end of this chapter. Each of the quantities I, $I_0$, and ε are functions of the wavelength of the light being used. Three different ways of plotting absorption spectra are used because each has particular advantages. The transmission function is simple. The absorbance condenses large variations by using a logarithm so reasonably-sized graphs show both large and small variations in light intensity. Also, the absorbance is proportional to a fundamental property, which is the absorption coefficient. The absorption coefficient is of interest because it can be calculated from the transition moment, which is a quantum mechanical quantity. In Chapter 4, we will use quantum mechanics to calculate transition moments for some molecules. Energy often is released from atoms, molecules, and solids as light. This light is called luminescence in general and fluorescence and phosphorescence in particular situations that are identified by the decay time of the luminescence and the nature of the excited state. The decay time is the characteristic time it takes for the luminescence to disappear after the source of energy is removed or turned off. Fluorescence decays quickly (in microseconds or faster), and phosphorescence decays slowly (milliseconds to minutes). The concepts of angular momentum and a transition moment that are developed in other chapters will help you understand why these decay times are so different and depend on the nature of the excited state. The fluorescence spectrum in Figure $2$. 
shows how the intensity of the light emitted by fluorescein varies with wavelength. This spectrum is an example of a distribution function. It shows how the fluorescence intensity is distributed over a range of wavelengths. The idea of a distribution function is an important one that you may have encountered previously (e.g. the Maxwell-Boltzmann velocity distribution) and will encounter again. The term spectroscopy also is used in electron spectroscopy and mass spectroscopy where the energy distribution of electrons and the mass distribution of ions are the quantities of interest. These distributions give the absolute or relative number of particles with a given energy or mass. In general, any function that shows how some property is distributed (i.e. a distribution function) can be called a spectrum. In scattering, light incident on an atomic or molecular system is deflected to some other direction, and in the process the wavelength of the light may or may not change. When the wavelength does not change, the scattering is called elastic or Rayleigh scattering, and when the wavelength does change, it is called inelastic scattering or Raman scattering. Scattering spectra show the intensity of radiation that is scattered in some direction as a function of the wavelength of the scattered radiation. Rather than plotting the absolute wavelength on the x-axis, it is common to plot the change in wavenumber value for the radiation, because this quantity is proportional to the energy left behind in the molecule during the scattering process. The spectra in Figures $1$ and $2$ are characterized by intense features, which are called spectral bands or lines, at some points on the x-axis. The peaks of spectral bands are indicated by a star in Figures $1$ and $2$. The spectral bands are characterized by three quantities: their location on the x-axis, their intensity or height, and their width or shape. Quantum mechanics is needed to understand and explain these characteristics. From this book, you will learn how to interpret and calculate the positions of the bands on the x-axis in terms of the energy level structure of molecules and the intensities in terms of the transition moments. The band widths and shapes are due to dynamical effects that are unfortunately beyond the scope of this book. The above discussion of spectroscopy brings us to the question: What is electromagnetic radiation? During the nineteenth century, research in the areas of optics, electricity, and magnetism and the unification of the resulting concepts by Maxwell provided convincing evidence that electromagnetic radiation consists of two sinusoidally oscillating fields or waves, an electric field and a magnetic field. In the simplest situation, which is radiation in a vacuum, these fields oscillate perpendicular to each other and perpendicular to the direction of propagation of the wave. Various units are used in discussing electromagnetic radiation, and you must be familiar with conversions between them. Tables 3-5 provide the most frequently used units and their relationships. These units include hertz, joules, electron volts, wavenumbers, Angstroms, and nanometers. Any of these units, not just wavelength, can be used when plotting a spectrum. The electromagnetic spectrum commonly is viewed as split into different regions. These regions are classified by the nature of the instrumentation (sources, wavelength selectors and detectors) that are used in the different frequency ranges. 
The different radiation frequencies correspond to different kinds of motions or degrees of freedom within a molecule, e.g. rotational motion (microwave region), vibrational motion (infrared region), electronic motion (generally visible through soft x-ray regions) and nuclear and electron spin motion (radio and microwave regions). After a description of the historical development of quantum mechanics and the introduction of some key concepts associated with it, this book uses quantum mechanics to account for the spectra associated with these motions and identify what can be learned about these degrees of freedom from the spectra. Table 2 lists the parameters that characterize electromagnetic radiation. As you can see from this table, Greek letters often are used to represent physical quantities. The use of symbols makes writing equations and derivations and showing relationships much shorter and quicker than using words, but we pay a price for this convenience. We have to remember what the symbols mean, and since there are more quantities than there are symbols, even with the use of both Latin and Greek letters, some symbols mean more than one thing. Consequently, we must deduce their meaning from the context. Tables at the end of this chapter provide you with information about Greek letters and other items such as units and physical constants that will prove useful to you. While spectra often are plotted with the wavelength, and sometimes with the wavenumber values or the frequency, on the x-axis, usually the energy associated with a photon at a particular wavelength is needed in order to relate spectra to the energy level structure of molecules. The following relationships convert wavelength λ, wavenumbers $\bar {\nu}$, and frequency ν to photon energy E. $E = \dfrac {hc}{\lambda} \label {1.4}$ $E = hc \bar {\nu} \label {1.5}$ $E = h \nu \label {1.6}$ where $c$ is the speed of light in a vacuum. Since wavenumbers and frequency are proportional to energy, sometimes spectroscopists measure energy in these units for convenience.
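Because the conversions in Equations $\ref{1.4}$–$\ref{1.6}$ recur throughout the book, a small numerical sketch may be helpful. The code below is an addition to the text, not part of the original; the constants are the standard values of $h$ and $c$.

```python
h = 6.62607e-34   # Planck's constant, J s
c = 2.99792458e8  # speed of light in a vacuum, m/s

def energy_from_wavelength(lam):     # Eq. 1.4: E = h c / lambda, lambda in m
    return h * c / lam

def energy_from_wavenumber(nu_bar):  # Eq. 1.5: E = h c nu_bar, nu_bar in m^-1
    return h * c * nu_bar

def energy_from_frequency(nu):       # Eq. 1.6: E = h nu, nu in Hz
    return h * nu

# A 500 nm (green) photon, expressed in joules and electron volts:
E = energy_from_wavelength(500e-9)
print(E, "J =", E / 1.602177e-19, "eV")  # ~3.97e-19 J, ~2.48 eV
```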
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/01%3A_Spectroscopy.txt
The concepts of quantum mechanics were invented to explain experimental observations that otherwise were totally inexplicable. This period of invention extended from 1900 when Max Planck introduced the revolutionary concept of quantization to 1925 when Erwin Schrödinger and Werner Heisenberg independently introduced two mathematically different but equivalent formulations of a general quantum mechanical theory. The Heisenberg method uses properties of matrices, while the Schrödinger method involves partial differential equations. We will develop and utilize Schrödinger’s approach because students usually are more familiar with elementary calculus than with matrix algebra, and because this approach provides direct insight into charge distributions in molecules, which are of prime interest in chemistry. • 2.1: Prelude to the Foundations of Quantum Mechanics Heisenberg and Schrödinger were inspired by four key experimental observations: the spectral distribution of black-body radiation, the characteristics of the photoelectric effect, the Compton effect, and the luminescence spectrum of the hydrogen atom. Explanation of these phenomena required the introduction of two revolutionary concepts: physical quantities previously thought to be continuously variable, such as energy and momentum, are quantized, and momentum and wavelength are related. • 2.2: Black-Body Radiation One experimental phenomenon that could not be adequately explained by classical physics was black-body radiation. Hot objects emit electromagnetic radiation. The burners on most electric stoves glow red at their highest setting. If we take a piece of metal and heat it in a flame, it begins to glow, dark red at first, then perhaps white or even blue if the temperature is high enough. • 2.3: Photoelectric Effect In the photoelectric effect, light incident on the surface of a metal causes electrons to be ejected. The number of emitted electrons and their kinetic energy can be measured as a function of the intensity and frequency of the light. • 2.4: The Compton Effect The Compton effect concerns the inelastic scattering of x‑rays by electrons. Scattering means dispersing in different directions, and inelastic means that energy is lost by the scattered object in the process. The intensity of the scattered x‑ray is measured as a function of the wavelength shift. • 2.5: Hydrogen Luminescence The luminescence spectrum of the hydrogen atom reveals light being emitted at discrete frequencies. These spectral features appear so sharp that they are called lines. These lines, occurring in groups, are found in different regions of the spectrum; some are in the visible, some in the infrared, and some in the vacuum ultraviolet. The occurrence of these lines was very puzzling in the late 1800’s. Spectroscopists approach this type of problem by looking for patterns in the observations. • 2.6: Early Models of the Hydrogen Atom Ernest Rutherford had proposed a model of atoms based on the α-particle scattering experiments of Hans Geiger and Ernest Marsden. There are some basic problems with the Rutherford model. The Coulomb force that exists between oppositely charged particles means that a positive nucleus and negative electrons should attract each other, and the atom should collapse. Niels Bohr approached this problem by proposing that we simply must invent new physical laws. • 2.7: Derivation of the Rydberg Equation from Bohr's Model Bohr postulated that electrons existed in orbits or states that had discrete energies.
We therefore want to calculate the energy of these states and then take the differences in these energies to obtain the energy that is released as light when an electron makes a transition from one state to a lower energy one. • 2.8: Summary of Bohr's Contribution Bohr’s proposal explained the hydrogen atom spectrum, the origin of the Rydberg formula, and the value of the Rydberg constant. Specifically it demonstrated that the integers in the Rydberg formula are a manifestation of quantization. The energy, the angular momentum, and the radius of the orbiting electron all are quantized. This quantization also parallels the concept of stable orbits in the Bohr model. • 2.9: The Wave Properties of Matter De Broglie’s proposal can be applied to Bohr’s view of the hydrogen atom to show why angular momentum is quantized in units of $ħ$. If the electron in the hydrogen atom is orbiting the nucleus in a stable orbit, then it should be described by a stable or stationary wave. Such a wave is called a standing wave. In a standing wave, the maximum and minimum amplitudes (crests and troughs) of the wave and the nodes (points where the amplitude is zero) are always found at the same position. • 2.E: Foundations of Quantum Mechanics (Exercises) Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinksi et al. • 2.S: Foundations of Quantum Mechanics (Summary) Around 1900 several experimental observations were made that could not be explained, not even qualitatively, by existing physical laws. It therefore was necessary to invent (create) new concepts: quantization of energy and momentum, and a momentum-wavelength relation. 02: Foundations of Quantum Mechanics Heisenberg and Schrödinger were inspired by four key experimental observations: the spectral distribution of black-body radiation, the characteristics of the photoelectric effect, the Compton effect, and the luminescence spectrum of the hydrogen atom. Explanation of these phenomena required the introduction of two revolutionary concepts: 1. physical quantities previously thought to be continuously variable, such as energy and momentum, are quantized, and 2. momentum, p, and wavelength, λ, are related, p = $\frac {h}{λ}$, where h is a fundamental constant. We will use a quasi‑historical approach in this chapter to emphasize that individuals created knowledge by inventing new ideas or concepts. What is not apparent here is that these new ideas initially were greeted with considerable skepticism, and acceptance was slow because of counter proposals that were not so revolutionary. Only after some time did inconsistencies in the counter proposals become apparent.
By “quantized,” we mean that only certain values are possible or allowed. For example, money is quantized. Money does not come in continuous denominations. In the United States the smallest unit of money is a penny, and everything costs some integer multiple of a penny. From high school and freshman physics, as well as from everyday experience, we learn that particles have momentum, which is mass times velocity. Although more abstract, the wave properties of light are clearly demonstrated by interference, diffraction and refraction effects. That a relationship between momentum (a particle property) and wavelength (a wave property) applies to both particles and light was, and remains, somewhat amazing and revolutionary. This relationship, called the “wave-particle duality,” means that particles have wave-like properties and light waves have particle-like properties.
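As a numerical illustration of the momentum-wavelength relation p = $\frac {h}{λ}$, the sketch below (an addition to the text; the masses and speeds are chosen only for illustration) compares the de Broglie wavelength of an electron with that of a baseball:

```python
h = 6.62607e-34  # Planck's constant, J s

def de_broglie_wavelength(mass, speed):
    """lambda = h / p = h / (m v), with mass in kg and speed in m/s."""
    return h / (mass * speed)

# An electron moving at 1% of the speed of light: ~2.4e-10 m, atomic dimensions.
print(de_broglie_wavelength(9.10938e-31, 3.0e6))

# A 0.145 kg baseball at 40 m/s: ~1.1e-34 m, far too small to ever observe.
print(de_broglie_wavelength(0.145, 40.0))
```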
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/02%3A_Foundations_of_Quantum_Mechanics/2.01%3A_Prelude_to_the_Foundations_of_Quantum_Mechanics.txt
One experimental phenomenon that could not be adequately explained by classical physics was black-body radiation. Hot objects emit electromagnetic radiation. The burners on most electric stoves glow red at their highest setting. If we take a piece of metal and heat it in a flame, it begins to glow, dark red at first, then perhaps white or even blue if the temperature is high enough. A very hot object would emit a significant amount of energy in the ultraviolet region of the spectrum, and people are emitters of energy on the other end of the spectrum. We can see this infrared energy by using night vision goggles. The exact spectrum depends upon properties of the material and the temperature. A black-body is an ideal object that emits all frequencies of radiation with a spectral distribution that depends only on the temperature and not on its composition. The radiation emitted by such an object is called black-body radiation. Black-body radiation can be obtained experimentally from a pinhole in a hollow cavity that is held at a constant temperature. It was found that the observed intensity of black-body radiation as a function of wavelength varies with temperature. Attempts to explain or calculate this spectral distribution from classical theory were complete failures. A theory developed by Rayleigh and Jeans predicted that the intensity should go to infinity at short wavelengths. Since the intensity actually drops to zero at short wavelengths, the Rayleigh-Jeans result was called the “ultraviolet catastrophe.” There was no agreement between theory and experiment in the ultraviolet region of the black-body spectrum. This is shown in Figure $1$. Max Planck was the first to successfully explain the spectral distribution of black-body radiation. He said that the radiation resulted from oscillations of electrons. Similarly, oscillations of electrons in an antenna produce radio waves. With revolutionary insight and creativity, Planck realized that in order to explain the spectral distribution, he needed to assume that the energy E of the oscillating electrons was quantized and proportional to integer multiples of the frequency ν $E = nh \nu \label{2-1}$ where n is an integer and h is a proportionality constant. He then was able to derive an equation (Equation $\ref{2-2}$) that gave excellent agreement with the experimental observations for all temperatures provided that the value of $6.62618 \times 10^{-34}$ J·s was used for h. This new fundamental constant, which is an essential component of Quantum Mechanics, now is called Planck’s constant. The Boltzmann constant, $k_B$, and the speed of light (c), also appear in the equation. $\rho (\lambda, T) d \lambda = \frac {8 \pi hc}{\lambda ^5} \frac {d \lambda}{ e^{\frac {hc}{\lambda k_B T}} - 1} \label{2-2}$ Example $1$ Use Equation $\ref{2-2}$ to show that the units of ρ(λ,T)dλ are $J/m^3$ as expected for an energy density. Equation $\ref{2-2}$ gives ρ(λ,T)dλ, the radiation density ($J/m^3$) between λ and λ + dλ inside the cavity from which the black-body radiation is emitted. The parameters in the equation are Planck’s constant, the speed of light, Boltzmann’s constant, the temperature, and the wavelength. The agreement between Planck’s theory and the experimental observation provided strong evidence that the energy of electron motion in matter is quantized. In the next two sections, we will see that the energy carried by light also is quantized in units of $h\nu$.
These packets of energy are called “photons.” Example $2$ Use Planck’s equation to prepare computer-generated graphs showing how ρ(λ,T), which is the black-body radiation density per nm, varies with wavelength at various temperatures. Use these graphs to explain why white hot is hotter than red hot. A Mathcad file link is provided as a head start for this exercise. Example $3$ Use the results from Example $2$ to prepare a computer-generated graph of $λ_{max}$, which is the peak (or maximum) of the functions plotted in Example $2$, as a function of T. Describe how the color of the light emitted from the black-body varies with temperature. Example $4$ Use the results from Example $3$ to estimate the color temperature of sunlight (that has a maximum at 480 nm) and the temperature of a tungsten light bulb (that has a maximum at 1035 nm.)
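For readers without access to the Mathcad worksheet, the sketch below (an addition to the text) evaluates Equation $\ref{2-2}$ and locates $λ_{max}$ numerically for a few illustrative temperatures; the shift of the peak to shorter wavelength with rising temperature is why white hot is hotter than red hot.

```python
import numpy as np

h  = 6.62607e-34   # Planck's constant, J s
c  = 2.99792458e8  # speed of light, m/s
kB = 1.380649e-23  # Boltzmann constant, J/K

def planck_density(lam, T):
    """Radiation density rho(lambda, T) of Eq. 2-2, SI units (J/m^4)."""
    return (8 * np.pi * h * c / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(50e-9, 5e-6, 20000)  # 50 nm to 5 micrometres
for T in (1000, 3000, 6000):           # temperatures in kelvin
    rho = planck_density(lam, T)
    print(T, "K: lambda_max =", round(lam[np.argmax(rho)] * 1e9), "nm")
# The peak moves from the infrared (~2900 nm at 1000 K) into the visible
# (~480 nm at 6000 K), consistent with Wien's displacement law.
```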
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/02%3A_Foundations_of_Quantum_Mechanics/2.02%3A_Black-Body_Radiation.txt
In the photoelectric effect, light incident on the surface of a metal causes electrons to be ejected. The number of emitted electrons and their kinetic energy can be measured as a function of the intensity and frequency of the light. One might expect, as did the physicists at the beginning of the Twentieth Century, that the energy in the light wave (its intensity in $J/m^2s$) should be transferred to the kinetic energy of the emitted electrons. Also, the number of electrons that break away from the metal should change with the frequency of the light wave. This dependence on frequency was expected because the oscillating electric field of the light wave causes the electrons in the metal to oscillate back and forth, and the electrons in the metal respond at different frequencies. In other words, it was expected that the number of emitted electrons should depend upon the frequency, and their kinetic energy should depend upon the intensity of the light wave (at fixed wavelength). As shown in Figure $1$, just the opposite behavior is observed in the photoelectric effect. The intensity affects the number of electrons, and the frequency affects the kinetic energy of the emitted electrons. From these sketches, we see that: • the kinetic energy of the electrons is linearly proportional to the frequency of the incident radiation above a threshold value of $ν_0$ (no current is observed below $ν_0$), and the kinetic energy is independent of the intensity of the radiation. • the number of electrons (i.e. the electric current) is proportional to the light intensity (at a fixed wavelength) and independent of the frequency of the incident radiation above the threshold value of $ν_0$ (no current is observed below $ν_0$). In 1905, Albert Einstein explained the observations shown in Figure $1$ with the bold hypothesis that energy carried by light existed in packets of an amount $h\nu$. Each packet or photon could cause one electron to be ejected, which is like having a moving particle collide with and transfer energy to a stationary particle. The number of electrons ejected therefore depends upon the number of photons, i.e. the intensity of the light. Some of the energy in the packet is used to overcome the binding energy of the electron in the metal. This binding energy is called the work function, $\Phi$. The remaining energy appears as the kinetic energy, $\frac {1}{2} mv^2$, of the emitted electron. Equations $\ref{2-3}$ and $\ref{2-4}$ express the conservation of energy for the photoelectric process $E_{photon} = KE_{electron} + \Phi \label {2-3}$ $h \nu = \frac {1}{2} mv^2 + \Phi \label {2-4}$ Rearranging this equation reveals the linear dependence of kinetic energy on frequency as shown in Figure $1$. $\frac {1}{2} mv^2 = h \nu - \Phi \label {2-5}$ The slope of the straight line obtained by plotting the kinetic energy as a function of frequency above the threshold frequency is just Planck’s constant, and the x-intercept, where $\frac {1}{2} mv^2 = 0$, is just the work function of the metal, $\Phi = hν_0$. Example $1$ Sodium metal has a threshold frequency of $4.40 × 10^{14}$ Hz. What is the kinetic energy of a photoelectron ejected from the surface of a piece of sodium when the ejecting photon is $6.20 × 10^{14}$ Hz? What is the velocity of this photoelectron?
From which region of the electromagnetic spectrum is this photon? With such an analysis Einstein obtained a value for $h$ in agreement with the value Planck deduced from the spectral distribution of black-body radiation. The fact that the same quantization constant could be derived from two very different experimental observations was very impressive and made the concept of energy quantization for both matter and light credible. In the next sections we will see that wavelength and momentum are properties that also are related for both matter and light.
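The arithmetic of Example $1$ follows directly from Equation $\ref{2-5}$ with $\Phi = hν_0$. The sketch below is an addition to the text:

```python
h  = 6.62607e-34  # Planck's constant, J s
me = 9.10938e-31  # electron mass, kg

nu0 = 4.40e14     # threshold frequency of sodium, Hz
nu  = 6.20e14     # frequency of the ejecting photon, Hz

KE = h * (nu - nu0)         # Eq. 2-5: (1/2) m v^2 = h nu - h nu0
v  = (2 * KE / me) ** 0.5   # solve (1/2) m v^2 = KE for the speed

print(KE, "J")   # ~1.19e-19 J
print(v, "m/s")  # ~5.1e5 m/s
```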
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/02%3A_Foundations_of_Quantum_Mechanics/2.03%3A_Photoelectric_Effect.txt
The Compton effect concerns the inelastic scattering of x‑rays by electrons. Scattering means dispersing in different directions, and inelastic means that energy is lost by the scattered object in the process. The intensity of the scattered x‑ray is measured as a function of the wavelength shift $\Delta \lambda$, where $\lambda ' = \lambda + \Delta \lambda \label {2-6}$ and the scattering angle $\theta$. To explain the experimental observations, it is necessary to describe the situation just as one would when discussing two particles, e.g. marbles, colliding and scattering from each other. The x‑ray scatters (changes direction) and causes an electron with mass $m_e$ to be ejected from the object with a direction that conserves the momentum of the system. Momentum and energy conservation equations then explain the scattering angles and the observed wavelength shift of the x-ray when the momentum of the x-rays is taken to be equal to $h/\lambda$ and the energy is $h\nu$. These considerations lead to Equation $\ref{2-7}$, which describes the experimental data for the variation of $\Delta \lambda$ with $\theta$. The success of using energy and momentum conservation for two colliding particles to explain the experimental data for the Compton effect is powerful evidence that electromagnetic radiation has momentum just like a particle and that the momentum and energy are given by $h/\lambda$ and $h\nu$, respectively. $\Delta \lambda = \frac {h}{m_ec} (1 - \cos \theta ) \label {2-7}$ Example $1$ For Compton scattering, determine the wavelength shift at a scattering angle of $90^o$, and identify the scattering angles where the wavelength shift is the smallest and the largest. 2.05: Hydrogen Luminescence The luminescence spectrum of the hydrogen atom reveals light being emitted at discrete frequencies. These spectral features appear so sharp that they are called lines. These lines, occurring in groups, are found in different regions of the spectrum; some are in the visible, some in the infrared, and some in the vacuum ultraviolet. The occurrence of these lines was very puzzling in the late 1800’s. Spectroscopists approach this type of problem by looking for some regularity or pattern in the observations. Johannes Rydberg recognized a pattern and expressed it in terms of the following formula, $\bar {\nu} = R_H \left ( \frac {1}{f^2} - \frac {1}{i^2} \right ) \label {2-8}$ Here $\bar {\nu}$ is the “frequency” of the line in wavenumber units $\bar {\nu} = \dfrac {\nu}{c} \label {2-9}$ $R_H$ is a constant equal to 109,677.581 $cm^{-1}$, now called the Rydberg constant, and $f$ and $i$ are positive integers with $i > f$. Different groups of lines, called Rydberg series, are obtained for different values of f. The lines in each series arise from a range of values for $i$. This analysis by Rydberg was pretty amazing. It pictured the hydrogen atom as some sort of counting machine that utilized integer numbers for some unknown reason. Example $1$ Calculate the wavelength of a line in the hydrogen atom luminescence spectrum corresponding to f = 7 and i = 8. In which region of the electromagnetic spectrum will this line appear?
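Equation $\ref{2-8}$ makes Example $1$ a one-line computation. The sketch below is an addition to the text:

```python
R_H = 109677.581  # Rydberg constant, cm^-1

def rydberg_wavenumber(f, i):
    """Wavenumber (cm^-1) of the hydrogen line for the transition i -> f, Eq. 2-8."""
    return R_H * (1.0 / f**2 - 1.0 / i**2)

nu_bar = rydberg_wavenumber(7, 8)
print(nu_bar, "cm^-1")           # ~525 cm^-1
print(1.0 / nu_bar * 1e4, "um")  # wavelength ~19 micrometres, in the infrared
```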
Since the Rydberg equation was derived empirically (i.e., invented to describe experimental data), the next question was, “Can the Rydberg equation and the origin of the integer values for $f$ and $i$ be obtained from theoretical considerations?” This question was enormously difficult for scientists at the time because the nature of the spectrum, discrete lines rather than a continuous frequency distribution, and the very existence of atoms, was not consistent with existing physical theories. Example $2$ Explain what it means to say a constant or an equation is empirical. Give an example of a value that is determined empirically.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/02%3A_Foundations_of_Quantum_Mechanics/2.04%3A_The_Compton_Effect.txt
Ernest Rutherford had proposed a model of atoms based on the $\alpha$-particle scattering experiments of Hans Geiger and Ernest Marsden. In these experiments helium nuclei ($α$-particles) were shot at thin gold metal foils. Most of the particles were not scattered; they passed unchanged through the thin metal foil. Some of the few that were scattered were scattered in the backward direction; i.e. they recoiled. This backward scattering requires that the foil contain heavy particles. When an α-particle hits one of these heavy particles it simply recoils backward, just like a ball thrown at a brick wall. Since most of the α-particles don’t get scattered, the heavy particles (the nuclei of the atoms) must occupy only a very small region of the total space of the atom. Most of the space must be empty or occupied by very low-mass particles. These low-mass particles are the electrons that surround the nucleus. There are some basic problems with the Rutherford model. The Coulomb force that exists between oppositely charged particles means that a positive nucleus and negative electrons should attract each other, and the atom should collapse. To prevent the collapse, the electron was postulated to be orbiting the positive nucleus. The Coulomb force is used to change the direction of the velocity, just as a string pulls a ball in a circular orbit around your head or the gravitational force holds the moon in orbit around the Earth. But this analogy has a problem too. An electron going around in a circle is constantly being accelerated because its velocity vector is changing. A charged particle that is being accelerated emits radiation. This property is essentially how a radio transmitter works. A power supply drives electrons up and down a wire and thus transmits energy (electromagnetic radiation) that your radio receiver picks up. The radio then plays the music for you that is encoded in the waveform of the radiated energy. If the orbiting electron is generating radiation, it is losing energy. If an orbiting particle loses energy, the radius of the orbit decreases. To conserve angular momentum, the frequency of the orbiting electron increases. The frequency increases continuously as the electron collapses toward the nucleus. Since the frequency of the rotating electron and the frequency of the radiation that is emitted are the same, both change continuously to produce a continuous spectrum and not the observed discrete lines. Furthermore, if one calculates how long it takes for this collapse to occur, one finds that it takes about $10^{‑11}$ seconds. This means that nothing in the world based on the structure of atoms could exist for longer than about $10^{-11}$ seconds. Clearly something is terribly wrong with this classical picture, which means that something was missing at that time from the known laws of physics. Niels Bohr approached this problem by proposing that we simply must invent new physical laws since experimental observations are inconsistent with the known physical laws. Bohr therefore proposed in 1913 that

1. The electron could orbit the nucleus in a stationary state without collapsing.

2. These orbits have discrete energies and radiation is emitted at a discrete frequency when the electron makes a transition from one orbit to another.

3. The energy difference between the orbits is proportional to the frequency of radiation emitted $E_f − E_i = \Delta E_{fi} = h \nu$ where the constant of proportionality, h, is Planck’s constant.
Note that $E_f − E_i$ is the difference between energy levels and $h\nu$ is the energy of the emitted photon.

4. The angular momentum, $M$, of the orbiting electron is a positive integer multiple of $h/2\pi$, which often is written as $\hbar$ and called h-bar. $M = n \hbar \label{2-11}$ where $n = 1,2,3, \dots$

Bohr’s revolutionary proposal was taken seriously because with these ideas, he could derive Rydberg’s formula and calculate a value for the Rydberg constant, which up to this point had only been obtained empirically by fitting the Rydberg equation to the luminescence data. Example $1$ Make four sketches to illustrate Bohr’s four propositions.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/02%3A_Foundations_of_Quantum_Mechanics/2.06%3A_Early_Models_of_the_Hydrogen_Atom.txt
Bohr postulated that electrons existed in orbits or states that had discrete energies. We therefore want to calculate the energy of these states and then take the differences in these energies to obtain the energy that is released as light when an electron makes a transition from one state to a lower energy one. Because the proton is so much more massive than the electron, we can consider the proton to be fixed and the electron to be rotating around it. For the general case, two particles rotate about their center of mass, and this rotation can be described as the rotation of a single particle with a reduced mass. To explain the hydrogen luminescence spectrum, we write the energy, E, of an orbit or state of the hydrogen atom as the sum of the kinetic energy, $T$, and potential energy, $V$, of the rotating electron. The potential energy is just the Coulomb energy for two particles with charges $q_1$ and $q_2$. $E = T + V \label {2-12}$ $T = \frac {1}{2} m_e v^2 \label {2-13}$ and $V = \frac {q_1q_2}{4\pi \epsilon _0 r} = \frac {(Ze) (-e)}{4 \pi \epsilon _0 r} = \frac {-Ze^2}{4 \pi \epsilon _0 r} \label {2-14}$ In general the charge on an atomic nucleus is $Ze$, where $Z$ is the number of protons in the nucleus. The charge on a single proton is simply the fundamental constant for the unit charge, $e$, and the charge on an electron is $–e$. The factor $4πε_0$ is due to the use of SI units, and $\epsilon_0$ is the permittivity of free space ($8.85419 × 10^{-12} C^2N^{-1}m^{-2}$). Even though $Z = 1$ for the hydrogen atom, $Z$ is retained in Equation $\ref{2-14}$ and subsequent equations so the results apply to any one-electron ion as well (e.g., $He^+, Li^{2+}$, etc.). Example $1$ Show that 1 $C^2N^{-1}m^{-2}$ is equivalent to 1 F/m. By invoking the Virial Theorem for electrostatic forces, we can determine the radii of the orbit and the energy of the rotating electron, derive the Rydberg equation, and calculate a value for the Rydberg constant. This theorem says that the total energy of the system is equal to half of its potential energy and also equal to the negative of its kinetic energy. $E = \frac {V}{2} = -T \label {2-15}$ The Virial Theorem has fundamental importance in both classical mechanics and quantum mechanics. It has many applications in chemistry beyond its use here. The word virial comes from the Latin word for force, vires, and the Virial Theorem results from an analysis of the forces acting on a system of charged particles. A proof of the validity of this theorem for the hydrogen atom is available. The Virial Theorem makes it possible to obtain the total energy from the potential energy if we have the radius, r, of the orbit in Equation $\ref{2-14}$. We can obtain the radius of the orbit by first expressing the kinetic energy $T$ in terms of the angular momentum $M$, $T = \frac {M^2}{2m_er^2} \label {2-16}$ where $M = m_evr$. Using the Virial Theorem, Equation $\ref{2-15}$, to equate the expressions for $V/2$ and $-T$ (Equations $\ref{2-14}$ and $\ref{2-16}$), introducing Bohr’s proposal that angular momentum $M$ is quantized, $M = n\hbar$, and solving for r gives $r_n = \frac {4 \pi \epsilon _0 \hbar ^2 n^2}{m_e Z e^2} \label {2-17}$ Notice in equation $\ref{2-17}$ how the quantization of angular momentum results in the quantization of the radii of the orbits. The smallest radius, for the orbit with $n = 1$, is called the Bohr radius and is denoted by $a_0$.
$a_0 = 52.92 \, pm = 0.5292\, Å \label {2-18}$ Substituting Equations $\ref{2-14}$ and $\ref{2-17}$ into Equation $\ref{2-15}$ for the total energy gives $E_n = \frac {-m_e Z^2 e^4}{8 \epsilon ^2_0 h^2 n^2} \label {2-19}$ which shows that the energy of the electron also is quantized. Equation $\ref{2-19}$ gives the energies of the electronic states of the hydrogen atom. It is very useful in analyzing spectra to represent these energies graphically in an energy-level diagram. An energy-level diagram has energy plotted on the vertical axis with a horizontal line drawn to locate each energy level. Example $2$ Calculate the potential energy, the kinetic energy, and the total energy for hydrogen when r = 52.92 pm. Example $3$ Sketch an energy level diagram for the hydrogen atom. Label each energy level with the quantum number n and the radius of the corresponding orbit. Example $4$ Calculate a value for the Bohr radius using Equation $\ref{2-17}$ to check that this equation is consistent with the value 52.9 pm. What would the radius be for n = 1 in the $Li^{2+}$ ion? Example $5$ How do the radii of the hydrogen orbits vary with n? Prepare a graph showing r as a function of n. To which family of curves does this plot belong? States of hydrogen atoms with n = 200 have been prepared. What is the diameter of the atoms in these states? Identify something else this size. To explain the hydrogen atom luminescence spectrum, Bohr said that light of frequency $\nu_{if}$ is produced when an electron goes from an orbit with n = i ("i" represents "initial") to a lower energy orbit with n = f ("f" represents "final"), with i > f. In other words, the energy of the photon is equal to the difference in energies of the two orbits or hydrogen atom states associated with the transition. $E_{photon} = h \nu _{if} = E_i - E_f = \Delta E_{if} = \frac {m_e e^4}{8 \epsilon ^2_0 h^2} \left ( \frac {1}{f^2} - \frac {1}{i^2} \right ) \label {2-20}$ Using $\nu _{if} = c \bar {\nu} _{if}$ converts Equation $\ref{2-20}$ from frequency to wavenumber units, $\bar {\nu} _{if} = \frac {m_e e^4}{8 \epsilon ^2_0 h^3 c} \left ( \frac {1}{f^2} - \frac {1}{i^2} \right ) \label {2-21}$ When we identify $R_H$ with the ratio of constants on the right-hand side of Equation $\ref{2-21}$, we obtain the Rydberg equation with the Rydberg constant as in Equation $\ref{2-22}$. $R_H = \frac {m_e e^4}{8 \epsilon ^2_0 h^3 c} \label {2-22}$ Evaluating $R_H$ from the fundamental constants in this formula gives a value within 0.5% of that obtained experimentally from the hydrogen atom spectrum. Example $6$ Calculate the energy of a photon that is produced when an electron in a hydrogen atom goes from an orbit with n = 4 to an orbit with n = 1. What happens to the energy of the photon as the initial value of n approaches infinity?
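The energies in Equation $\ref{2-19}$ and the Rydberg constant in Equation $\ref{2-22}$ can be evaluated the same way. The sketch below is my addition (Python assumed) and works Example $6$ numerically; the printed $R_H$ should agree with the experimental value to well within the 0.5% quoted above.

```python
import math

epsilon_0 = 8.85419e-12; h = 6.62607e-34; m_e = 9.10938e-31
e = 1.60218e-19; c = 2.99792458e8

def E_n(n, Z=1):
    """Bohr energy of level n (joules), Equation 2-19."""
    return -m_e * Z**2 * e**4 / (8 * epsilon_0**2 * h**2 * n**2)

# Example 6: photon energy for the n = 4 -> n = 1 transition
dE = E_n(4) - E_n(1)
print(dE, "J =", dE / e, "eV")          # ~12.75 eV

# Rydberg constant in wavenumber units, Equation 2-22
R_H = m_e * e**4 / (8 * epsilon_0**2 * h**3 * c)
print(R_H, "m^-1")                      # ~1.097e7 m^-1
```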
Bohr's proposal explained the hydrogen atom spectrum, the origin of the Rydberg formula, and the value of the Rydberg constant. Specifically it demonstrated that the integers in the Rydberg formula are a manifestation of quantization. The energy, the angular momentum, and the radius of the orbiting electron all are quantized. This quantization also parallels the concept of stable orbits in the Bohr model. Only certain values of $E$, $M$, and $r$ are possible, and therefore the electron cannot collapse onto the nucleus by continuously radiating energy because it can only have certain energies, and it cannot be in certain regions of space. The electron can only jump from one orbit (quantum state) to another. The quantization means that the orbits are stable, and the electron cannot spiral into the nucleus in spite of the attractive Coulomb force. How might one have been so clever as to propose that angular momentum is quantized in units of $\hbar$? Possibly, by using unit analysis. Example $1$ Show that the units of Planck's constant (J s) are the same as those of angular momentum ($mvr = kg \, m^2/s$). Example $2$ Why do you suppose Bohr did not include the possibility of no angular momentum for the electron, i.e. n = 0? The factor of $\frac{1}{2\pi}$ is needed to obtain the experimental value for $R_H$ from the theory. Without this factor Bohr would have calculated a value for $R_H$ that was smaller than the experimental value by a factor of $(2\pi)^2$. Example $3$ Calculate a value for $R_H$ using fundamental constants. Repeat the calculation assuming that angular momentum is quantized in units of $h$ rather than $\hbar$. Show that the value you calculate differs from the value obtained by Rydberg from the hydrogen atom data by a factor of $(2\pi)^2$. Although Bohr's ideas successfully explained the hydrogen spectrum, they failed when applied to the spectra of other atoms. In addition a profound question remained. Why is angular momentum quantized in units of $\hbar$? As we shall see, de Broglie had an answer to this question, and this answer led Schrödinger to a general postulate that produces the quantization of angular momentum as a consequence. This quantization is not quite as simple as proposed by Bohr, and we will see that it is not possible to determine the distance of the electron from the nucleus as precisely as Bohr thought. In fact, since the position of the electron in the hydrogen atom is not at all as well defined as a classical orbit (such as the moon orbiting the earth), it is called an orbital. An electron orbital represents or describes the position of the electron around the nucleus in terms of a mathematical function called a wavefunction that yields the probability of positions of the electron. Bohr's idea that the absorption or emission of light corresponds to an electron jumping from one orbit to another also is not entirely accurate. When light is absorbed or emitted by an atom or molecule, the atom or molecule makes a transition from one energy state to another. If there is more than one state associated with each energy level, the energy level is said to be degenerate. Bohr's analysis produced the correct energy level spacing for the hydrogen atom and therefore could explain the luminescence spectrum, but the analysis did not identify all the possible electronic states of the hydrogen atom, as we shall see later in Chapter 8 with the Zeeman effect.
The Bohr model also did not predict the electronic states of multielectron atoms. This deficiency spurred a long period of development that culminated in the quantum mechanics we study in this text.
The fact that light (electromagnetic radiation) exhibited properties of particles became clear from the Compton scattering experiments, where a momentum of $p = h/\lambda$ had to be associated with the x-rays to explain the experimental observations. In 1924 Louis de Broglie proposed that if light waves exhibited properties of particles, then matter particles should exhibit properties of waves, and the wavelength of these waves should be given by the same equation, $\lambda = \dfrac {h}{p} \label {2-23}$ Since the wave vector $k$ is defined as $k = \frac {2\pi}{ \lambda}$, we can rewrite this equation as $p = \hbar k \label {2-24}$ Example $1$ Calculate the de Broglie wavelength for an electron with a kinetic energy of 1000 eV. Could such electrons be used to obtain diffraction patterns of molecules? Example $2$ Calculate the de Broglie wavelength for a fastball thrown at 100 miles per hour and weighing 4 ounces. Comment on whether the wave properties of baseballs could be observed. The validity of de Broglie's proposal was confirmed by the electron diffraction experiments of G.P. Thomson in 1926 and of C. Davisson and L. H. Germer in 1927. In these experiments it was found that electrons were scattered from atoms in a crystal and that these scattered electrons produced an interference pattern. The interference pattern was just like that produced when water waves pass through two holes in a barrier to generate separate wave fronts that combine and interfere with each other. These diffraction patterns are characteristic of wave-like behavior and are exhibited by both electrons and electromagnetic radiation. Diffraction patterns are obtained if the wavelength is comparable to the spacing between scattering centers. You can find examples on the Internet by searching for x-ray diffraction patterns and electron diffraction patterns to see how electron and x-ray diffraction are being used in modern research. De Broglie's proposal can be applied to Bohr's view of the hydrogen atom to show why angular momentum is quantized in units of $\hbar$. If the electron in the hydrogen atom is orbiting the nucleus in a stable orbit, then it should be described by a stable or stationary wave. Such a wave is called a standing wave. In a standing wave, the maximum and minimum amplitudes (crests and troughs) of the wave and the nodes (points where the amplitude is zero) are always found at the same positions. In a traveling wave the crests, troughs, and nodes move from one position to another as a function of time. To fit a standing wave around a circular orbit, the circumference $2\pi r$ must be an integer multiple of the wavelength, i.e. $2\pi r = n \lambda \label {2-25}$ Now using the wavelength-momentum relationship $\lambda = \frac {h}{p}$ to replace $\lambda$ we get $rp = \dfrac {nh}{2\pi} \label {2-26}$ Since $rp$ equals the angular momentum, we have $M = n\hbar \label {2-27}$ By saying the electron has the property of a standing wave around the orbit, we are led to the conclusion that the angular momentum of the electron is quantized in units of $\hbar$. The assumption of quantization thereby is replaced by the postulate that particles have wave properties characterized by a wavelength $\lambda = \frac {h}{p}$, and quantization is a consequence of this new postulate.
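Returning to Examples $1$ and $2$ above, each reduces to one line of arithmetic once the momentum is known. A hedged numerical sketch (my addition; Python assumed, with $p = \sqrt{2mT}$ for the electron):

```python
import math

h = 6.626e-34          # Planck constant, J s
m_e = 9.109e-31        # electron mass, kg
eV = 1.602e-19         # joules per electron volt

# Example 1: electron with 1000 eV of kinetic energy
p = math.sqrt(2 * m_e * 1000 * eV)   # momentum from T = p^2 / 2m
print(h / p)                         # ~3.9e-11 m, comparable to bond lengths

# Example 2: a 4 oz (~0.113 kg) fastball at 100 mph (~44.7 m/s)
print(h / (0.113 * 44.7))            # ~1.3e-34 m, far too small to observe
```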
This insight regarding the wave properties of particles led Erwin Schrödinger to build on the mathematical description of waves and develop a general theory of Quantum Mechanics, as we will see in the next chapter. Example $3$ Draw standing waves with 2, 4, and 6 nodes on a Bohr electron orbit.
Q2.1 Construct graphs from the following data to illustrate the key features of the photoelectric effect. Determine the work function for Ni and a value for Planck's constant from the data (a worked numerical sketch appears after the chapter summary below).

Photoelectric Effect for a Nickel Metal Film
wavelength (nm) | relative light intensity | relative electron current | electron kinetic energy (eV)
400 | 1 | 0 | 0.00
350 | 1 | 0 | 0.00
300 | 1 | 0 | 0.00
250 | 1 | 0 | 0.00
200 | 1 | 1 | 0.98
150 | 1 | 1 | 3.05
100 | 1 | 1 | 7.19
50 | 1 | 1 | 19.60
150 | 1 | 1 | 3.05
150 | 2 | 2 | 3.05
150 | 3 | 3 | 3.05
150 | 4 | 4 | 3.05
150 | 5 | 5 | 3.05
150 | 6 | 6 | 3.05
150 | 7 | 7 | 3.05
150 | 8 | 8 | 3.05
150 | 9 | 9 | 3.05

Q2.2 Suppose you need to take an absorption spectrum of a naphthalene sample in the near-UV region, around 320 nm. How much intensity is gained in this region by using an expensive tungsten filament lamp (\$75) with a color temperature of 3400 K compared to an inexpensive lamp (\$7.50) with a color temperature of 2800 K? Which lamp would you purchase and why? List all of the assumptions you made in formulating your answer. Q2.3 Calculate the de Broglie wavelength for an electron in the first Bohr orbit of the hydrogen atom and compare this wavelength with the circumference of the orbit. What insight do you gain from this comparison? Q2.4 Neutrons as well as electrons and x-rays are used to obtain information about molecular structure through diffraction patterns. What must the velocity of a neutron be for its de Broglie wavelength to be about five times smaller than a bond length? Do you consider this velocity to be large or small? Two typical bond lengths are: C-C = 1.54 Å and C-H = 1.08 Å. 2.S: Foundations of Quantum Mechanics (Summary) Around 1900 several experimental observations were made that could not be explained, not even qualitatively, by existing physical laws. It therefore was necessary to invent (create) new concepts: quantization of energy and momentum, and a momentum-wavelength relation. In 1900 Planck proposed that electron oscillations in matter were quantized, and their energy was related by $E = h\nu$ to the frequency of radiation emitted by a hot object. In 1905 Einstein proposed that electromagnetic radiation, light, also was quantized, consisting of photons, each with energy $E = h\nu$. In 1913 Bohr used this energy-frequency relationship together with the quantization of angular momentum, $M = n \hbar$, to construct a model of the hydrogen atom that was consistent with its luminescence spectrum. In 1922 Compton explained the inelastic scattering of x-rays by matter by treating the x-rays as particles with momentum $p = h/\lambda$. In 1924 de Broglie argued that particles should then have the properties of waves with a wavelength $\lambda = h/p$. This suggestion led Schrödinger to develop the general underlying theory of Quantum Wave Mechanics in 1925. The wave-like properties of electrons and the validity of the de Broglie relationship were demonstrated directly by Thomson's and Davisson and Germer's diffraction experiments in 1926 and 1927.
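For Q2.1, one hedged way to extract both quantities at once is a linear fit of the kinetic energy against $1/\lambda$, since $KE = hc/\lambda - \phi$. The sketch below is my addition (numpy is assumed); only the rows of the table with nonzero photocurrent are used.

```python
import numpy as np

# Rows of the Q2.1 table that show photoemission
lam_nm = np.array([200.0, 150.0, 100.0, 50.0])
ke_eV = np.array([0.98, 3.05, 7.19, 19.60])

# Fit KE (eV) versus 1/lambda (1/m); slope = hc/e in eV m, intercept = -phi
slope, intercept = np.polyfit(1.0 / (lam_nm * 1e-9), ke_eV, 1)

c, e = 2.998e8, 1.602e-19
print("h =", slope * e / c, "J s")            # ~6.6e-34 J s
print("work function =", -intercept, "eV")    # ~5.2 eV for this Ni film
```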
The discussion in this chapter constructs the ideas that lead to the postulates of quantum mechanics, which are given at the end of the chapter. The overall picture is that quantum mechanical systems such as atoms and molecules are described by mathematical functions that are solutions of a differential equation called the Schrödinger equation. In this chapter we want to make the Schrödinger equation and other postulates of Quantum Mechanics seem plausible. We follow a train-of-thought that could resemble Schrödinger's original thinking. The discussion is not a derivation; it is a plausibility argument. In the end we accept and use the Schrödinger equation and associated concepts because they explain the properties of microscopic objects like electrons and atoms and molecules. • 3.1: Introduction to the Schrödinger Equation The Schrödinger equation is the fundamental postulate of Quantum Mechanics. If electrons, atoms, and molecules have wave-like properties, then there must be a mathematical function that is the solution to a differential equation that describes electrons, atoms, and molecules. This differential equation is called the wave equation, and the solution is called the wavefunction. Such thoughts may have motivated Erwin Schrödinger to argue that the wave equation is a key component of Quantum Mechanics. • 3.2: A Classical Wave Equation The easiest way to find a differential equation that will provide wavefunctions as solutions is to start with a wavefunction and work backwards. We will consider a sine wave, take its first and second derivatives, and then examine the results. • 3.3: Invention of the Schrödinger Equation Our goal as chemists is to seek a method for finding the wavefunctions that are appropriate for describing electrons, atoms, and molecules. In order to reach this objective, we need the appropriate wave equation. • 3.4: Operators, Eigenfunctions, Eigenvalues, and Eigenstates The Laplacian operator is called an operator because it does something to the function that follows: namely, it produces or generates the sum of the three second-derivatives of the function. Of course, this is not done automatically; you must do the work, or remember to use this operator properly in algebraic manipulations. Symbols for operators are often (although not always) denoted by a hat ^ over the symbol, unless the symbol is used exclusively for an operator. • 3.5: Momentum Operators • 3.6: The Time-Dependent Schrödinger Equation The time-dependent Schrödinger equation is used to find the time dependence of the wavefunction. This equation relates the energy to the first time derivative; the classical wave equation, by contrast, involves the second time derivative. • 3.7: Meaning of the Wavefunction • 3.8: Expectation Values These expectation value integrals are very important in Quantum Mechanics. They provide us with the average values of physical properties because in many cases precise values cannot, even in principle, be determined. If we know the average of some quantity, it also is important to know whether the distribution is narrow, i.e. all values are close to the average, or broad, i.e. many values differ considerably from the average. The width of a distribution is characterized by its variance. • 3.9: Postulates of Quantum Mechanics We now summarize the postulates of Quantum Mechanics that have been introduced. The application of these postulates will be illustrated in subsequent chapters.
• 3.E: The Schrödinger Equation (Exercises) Exercises for the "Quantum States of Atoms and Molecules" TextMap by Zielinski et al. 03: The Schrödinger Equation A scientific postulate is a statement that is generally accepted because it is consistent with experimental observation and serves to predict or explain a variety of observations. These postulates also are known as physical laws. Postulates cannot be derived from any other fundamental considerations. Newton's second law, $F = ma$, is an example of a postulate that is accepted and used because it explains the motion of objects like baseballs, bicycles, rockets, and cars. One goal of science is to find the smallest and most general set of postulates that can explain all observations. A whole new set of postulates was added with the invention of Quantum Mechanics. The Schrödinger equation is the fundamental postulate of Quantum Mechanics. In the previous chapter we saw that many individual quantum postulates were introduced to explain otherwise inexplicable phenomena. We will see that quantization and the relations $E = h\nu$ and $p = \frac {h}{\lambda}$, discussed in the last chapter, are consequences of the Schrödinger equation. In other words the Schrödinger equation is a more general and fundamental postulate. A differential equation is a mathematical equation involving one or more derivatives. The analytical solution to a differential equation is the expression or function for the dependent variable that gives an identity when substituted into the differential equation. A mathematical function is a rule that assigns a value to one quantity using the values of other quantities. Any mathematical function can be expressed not only by a mathematical formula, but also in words, as a table of data, or by a graph. Numerical solutions to differential equations also can be obtained. In numerical solutions, the behavior of the dependent variable is expressed as a table of data or by a graph; no explicit function is provided. Example $1$ The differential equation $\frac {dy(x)}{dx} = 2$ has the solution $y(x) = 2 x + b$, where $b$ is a constant. This function $y(x)$ defines the family of straight lines on a graph with a slope of 2. Show that this function is a solution to the differential equation by substituting for $y(x)$ in the differential equation. How many solutions are there to this differential equation? For one of these solutions, construct a table of data showing pairs of $x$ and $y$ values, and use the data to sketch a graph of the function. Describe this function in words. Some differential equations have the property that the derivative of the function gives the function back multiplied by a constant. The differential equation for a first-order chemical reaction is one example. This differential equation and the solution for the concentration of the reactant are given below. $\frac {dC (t)}{dt} = -k C (t)$ $C (t) = C_0 e^{-kt} \label {3-1}$ Example $2$ Show that $C(t)$ is a solution to the differential equation. Another kind of differential equation has the property that the second derivative of the function yields the function multiplied by a constant. Both of these types of differential equations are found in Quantum Mechanics. $\frac {d^2 \psi (x)}{dx^2} = k \psi (x) \label {3-2}$ Example $3$ What is the value of the constant in the above differential equation when $\psi(x) = \cos(3x)$?
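Examples $2$ and $3$ can be checked symbolically. The sketch below is my addition (it assumes the sympy library, which the text itself does not use):

```python
import sympy as sp

t, x, k = sp.symbols('t x k', positive=True)
C0 = sp.Symbol('C0')

# Example 2: C(t) = C0*exp(-k*t) satisfies dC/dt = -k*C
C = C0 * sp.exp(-k * t)
print(sp.simplify(sp.diff(C, t) + k * C))      # prints 0

# Example 3: the second derivative of cos(3x) returns the function times a constant
psi = sp.cos(3 * x)
print(sp.simplify(sp.diff(psi, x, 2) / psi))   # prints -9
```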
Example $4$ What other functions, in addition to the cosine, have the property that the second derivative of the function yields the function multiplied by a constant? Since some mathematical functions, such as the sine and cosine, go through repeating periodic maxima and minima, they produce graphs that look like waves. Such functions can themselves be thought of as waves and can be called wavefunctions. We now make a mathematically intuitive leap. If electrons, atoms, and molecules have wave-like properties, then there must be a mathematical function that is the solution to a differential equation that describes electrons, atoms, and molecules. This differential equation is called the wave equation, and the solution is called the wavefunction. Such thoughts may have motivated Erwin Schrödinger to argue that the wave equation is a key component of Quantum Mechanics.
The easiest way to find a differential equation that will provide wavefunctions as solutions is to start with a wavefunction and work backwards. We will consider a sine wave, take its first and second derivatives, and then examine the results. The amplitude of a sine wave can depend upon position, $x$, in space, $A (x) = A_0 \sin \left ( \frac {2 \pi x}{\lambda} \right ) \label{1}$ or upon time, $t$, $A(t) = A_0\sin(2\pi \nu t) \label{2}$ or upon both space and time, $A (x, t) = A_0 \sin \left ( \frac {2 \pi x}{\lambda} - 2\pi \nu t \right ) \label {3}$ We can simplify the notation by using the definitions of a wave vector, $k = \frac {2\pi}{\lambda}$, and the angular frequency, $\omega = 2\pi \nu$, to get $A(x,t) = A_0\sin(kx - \omega t) \label {4}$ When we take partial derivatives of $A(x,t)$ with respect to both $x$ and $t$, we find that the second derivatives are remarkably simple and similar. $\frac {\partial ^2 A (x, t)}{\partial x^2} = -k^2 A_0 \sin (kx -\omega t ) = -k^2 A (x, t) \label {5}$ $\frac {\partial ^2 A (x, t)}{\partial t^2} = -\omega ^2 A_0 \sin (kx -\omega t ) = -\omega ^2 A (x, t) \label {6}$ By looking for relationships between the second derivatives, we find that both involve $A(x,t)$; consequently an equality is revealed. $k^{-2} \frac {\partial ^2 A (x, t)}{ \partial x^2} = - A (x, t) = \omega^{-2} \frac {\partial ^2 A (x, t)}{\partial t^2} \label {7}$ Recall that $\nu$ and $\lambda$ are related; their product gives the velocity of the wave, $\nu \lambda = v$. Be careful to distinguish between the similar but different symbols for the frequency $\nu$ and the velocity $v$. If in $\omega = 2\pi \nu$ we replace $\nu$ with $v/\lambda$, then $\omega = \frac {2 \pi v}{\lambda} = v k \label {8}$ and Equation $\ref{7}$ can be rewritten to give what is known as the classical wave equation in one dimension. This equation is very important. It is a differential equation whose solution describes all waves in one dimension that move with a constant velocity (e.g. the vibrations of strings in musical instruments), and it can be generalized to three dimensions. The classical wave equation in one dimension is $\frac {\partial ^2 A (x, t)}{\partial x^2} = v ^{-2} \frac {\partial ^2 A (x, t)}{\partial t^2} \label {9}$ Example $1$ Complete the steps leading from Equation $\ref{3}$ to Equations $\ref{5}$ and $\ref{6}$ and then to Equation $\ref{9}$. 3.03: Invention of the Schrödinger Equation In the previous section, the classical wave equation in one dimension was discussed: $\frac {\partial ^2 A (x, t)}{\partial x^2} = v ^{-2} \frac {\partial ^2 A (x, t)}{\partial t^2} \label {3-11}$ Although we used a sine function to obtain the classical wave equation, functions other than the sine function can be substituted for $A$ in Equation \ref{3-11}. Our goal as chemists is to seek a method for finding the wavefunctions that are appropriate for describing electrons, atoms, and molecules. In order to reach this objective, we need the appropriate wave equation. Exercise $1$ Show that the functions $e^{i(k x + \omega t)}$ and $\cos(k\,x - \omega\, t)$ also satisfy the classical wave equation (Equation \ref{3-11}). Note that $i$ is a constant equal to $\sqrt {-1}$. A general method for finding solutions to differential equations that depend on more than one variable ($x$ and $t$ in this case) is to separate the variables into different terms. This separation makes it possible to write the solution as a product of two functions, one that depends on $x$ and one that depends on $t$.
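Before moving on, Exercise $1$ can be checked symbolically. The sketch below is my addition (sympy assumed); it verifies that both functions satisfy Equation \ref{3-11} once $v$ is written as $\omega/k$.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, w = sp.symbols('k omega', positive=True)
v = w / k   # wave velocity

for A in (sp.sin(k * x - w * t), sp.exp(sp.I * (k * x + w * t))):
    residual = sp.diff(A, x, 2) - sp.diff(A, t, 2) / v**2
    print(sp.simplify(residual))   # prints 0 for both functions
```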
This important technique is called the Method of Separation of Variables. This technique is used in most of the applications that we will be considering. For the classical wave equation, Equation \ref{3-11}, separating variables is very easy because $x$ and $t$ do not appear together in the same term in the differential equation. In fact, they are on opposite sides of the equation. The variables already have been separated, and we only have to see what happens when we substitute a product function into this equation. It is common in Quantum Mechanics to symbolize the functions that are solutions to Schrödinger's equation as $\Psi$, $\psi$, or $\phi$, so we use $\psi (x)$ as the $x$-function, and examine the consequences of using $\cos (\omega t)$ as one possibility for the $t$-function. $\Psi(x,t) = \psi(x) \cos(\omega t) \label{3-12}$ After substituting Equation \ref{3-12} into the classical wave Equation \ref{3-11} and differentiating, we obtain $\cos (\omega t ) \dfrac {\partial ^2 \psi (x)}{\partial x^2} = -\dfrac {\omega ^2}{v ^2} \psi (x) \cos (\omega t) \label{3-13}$ which yields, after simplifying and rearranging, $\dfrac {\partial ^2 \psi (x)}{\partial x^2} + \dfrac {\omega ^2}{v ^2} \psi (x) = 0 \label{3-14}$ We now include the idea that we are trying to find a wave equation for a particle. We introduce the particle momentum by using de Broglie's relation to replace $\dfrac {\omega^2}{v^2}$ with $\dfrac {p^2}{\hbar^2}$, where $\hbar = \dfrac {h}{2\pi}$ (called h-bar). $\dfrac {\partial ^2 \psi (x)}{\partial x^2} + \dfrac {p^2}{\hbar ^2} \psi (x) = 0 \label{3-15}$ Exercise $2$ Show that $\dfrac {\omega^2}{v^2} = \dfrac {p^2}{\hbar^2}$. Next we will use the total energy of a particle as the sum of the kinetic energy and potential energy to replace the momentum in Equation \ref{3-15}. $E = T + V (x) = \dfrac {p^2}{2m} + V (x) \label{3-16}$ Note that we have included the idea that the potential energy is a function of position. Each atomic or molecular system we will consider in the following chapters will have different potential energy functions. Solving Equation $\ref{3-16}$ for $p^2$ and substituting it into Equation $\ref{3-15}$ gives us the Schrödinger Equation, $\dfrac {\partial ^2 \psi (x)}{\partial x^2} + \dfrac {2m}{\hbar ^2} (E - V (x)) \psi (x) = 0 \label{3-17}$ which usually is written in rearranged form, $-\dfrac {\hbar ^2}{2m} \dfrac {\partial ^2 \psi (x)}{\partial x^2} + V (x) \psi (x) = E \psi (x) \label{3-18}$ Notice that the left side of Equation $\ref{3-18}$ consists of the two terms corresponding to the kinetic energy and the potential energy. When we look at the left side of Equation $\ref{3-18}$, we can deduce a method of extracting the total energy from a known wavefunction, or we can use Equation $\ref{3-18}$ to find the wavefunction. Finding wavefunctions for models of interesting chemical phenomena will be one of the tasks we will accomplish in this text. Exercise $3$ Show the steps that lead from Equations $\ref{3-11}$ and \ref{3-12} to Equation \ref{3-18}. More precisely, Equation \ref{3-18} is the Schrödinger equation for a particle of mass $m$ moving in one dimension ($x$) in a potential field specified by $V(x)$. Since this equation does not contain time, it often is called the Time-Independent Schrödinger Equation. As mentioned previously, functions like $\psi(x)$ are called wavefunctions because they are solutions to this wave equation. The term, wave, simply denotes oscillatory behavior or properties. The significance of the wavefunction will become clear as we proceed.
For now, $\psi(x)$ is the wavefunction that accounts for or describes the wave-like properties of particles. The Schrödinger equation for a particle moving in three dimensions ($x$, $y$, $z$) is obtained simply by adding the other second derivative terms and by including the three-dimensional potential energy function. The wavefunction $\psi$ then depends on the three variables $x$, $y$, and $z$. $\dfrac {-\hbar ^2}{2m} \left ( \dfrac {\partial ^2}{\partial x^2} + \dfrac {\partial ^2}{\partial y^2} + \dfrac {\partial ^2}{\partial z^2} \right ) \psi (x , y , z ) + V (x , y , z) \psi (x , y , z ) = E \psi (x , y , z ) \label{3-19}$ Exercise $4$ Write the Schrödinger equation for a particle of mass $m$ moving in a 2-dimensional space with the potential energy given by $V(x, y) = -\dfrac {(x^2 + y^2)}{2}.$ The three second derivatives in parentheses together are called the Laplacian operator, or del-squared, $\nabla^2 = \left ( \dfrac {\partial ^2}{\partial x^2} + \dfrac {\partial ^2}{\partial y^2} + \dfrac {\partial ^2}{\partial z^2} \right ) \label {3-20}$ The del operator, $\nabla = \left ( \vec {x} \dfrac {\partial}{\partial x} + \vec {y} \dfrac {\partial}{\partial y} + \vec {z} \dfrac {\partial }{\partial z} \right ) \label{3-21}$ also is used in Quantum Mechanics. Remember, symbols with arrows over them are unit vectors. Exercise $5$ Write the del-operator and the Laplacian operator for two dimensions and rewrite your answer to Exercise $4$ in terms of the Laplacian operator.
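For the free-particle case ($V = 0$), the function $e^{ikx}$ solves Equation \ref{3-18} with $E = \hbar^2 k^2/2m$, which is easy to confirm symbolically. A minimal sketch, assuming sympy (my addition, not part of the original text):

```python
import sympy as sp

x = sp.Symbol('x', real=True)
k, m, hbar = sp.symbols('k m hbar', positive=True)

psi = sp.exp(sp.I * k * x)
E = hbar**2 * k**2 / (2 * m)

lhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)   # kinetic term; V(x) = 0 here
print(sp.simplify(lhs - E * psi))                # prints 0, so psi solves Eq. 3-18
```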
The Laplacian operator is called an operator because it does something to the function that follows: namely, it produces or generates the sum of the three second-derivatives of the function. Of course, this is not done automatically; you must do the work, or remember to use this operator properly in algebraic manipulations. Symbols for operators are often (although not always) denoted by a hat ^ over the symbol, unless the symbol is used exclusively for an operator, e.g. $\nabla$ (del/nabla), or does not involve differentiation, e.g. $r$ for position. Recall that we can identify the total energy operator, which is called the Hamiltonian operator, $\hat{H}$, as consisting of the kinetic energy operator plus the potential energy operator. $\hat {H} = - \frac {\hbar ^2}{2m} \nabla ^2 + \hat {V} (x, y , z ) \label{3-22}$ Using this notation we write the Schrödinger Equation as $\hat {H} \psi (x , y , z ) = E \psi ( x , y , z ) \label{3-23}$ The Hamiltonian The term Hamiltonian, named after the Irish mathematician Hamilton, comes from the formulation of Classical Mechanics that is based on the total energy, $H = T + V$, rather than Newton's second law, $\vec{F} = m\vec{a}$. Equation $\ref{3-23}$ says that the Hamiltonian operator operates on the wavefunction to produce the energy, which is a number (a quantity of Joules), times the wavefunction. Such an equation, where the operator, operating on a function, produces a constant times the function, is called an eigenvalue equation. The function is called an eigenfunction, and the resulting numerical value is called the eigenvalue. Eigen here is the German word meaning self or own. It is a general principle of Quantum Mechanics that there is an operator for every physical observable. A physical observable is anything that can be measured. If the wavefunction that describes a system is an eigenfunction of an operator, then the value of the associated observable is extracted from the eigenfunction by operating on the eigenfunction with the appropriate operator. The value of the observable for the system is the eigenvalue, and the system is said to be in an eigenstate. Equation $\ref{3-23}$ states this principle mathematically for the case of energy as the observable. 3.05: Momentum Operators One of the tasks we must be able to do as we develop the quantum mechanical representation of a physical system is to replace the classical variables in mathematical expressions with the corresponding quantum mechanical operators. In the preceding section, operators were identified for the total energy and the kinetic energy. Potential energy operators will be introduced case by case in the following chapters. In the remaining paragraphs, we will focus on the momentum operator. Momentum operators now can be obtained from the kinetic energy operator. Since the classical expression for the kinetic energy of a particle moving in one dimension, along the x-axis, is $T_x = \frac {P^2_x}{2m}$ we expect that $\hat {T} _x = \frac {\hat{P}^2_x}{2m} = - \frac {\hbar ^2}{2m} \frac {\partial ^2}{ \partial x^2}$ so we can identify the operator for the square of the x-momentum as $\hat {P} ^2_x = - \hbar ^2 \frac {\partial ^2}{\partial x^2}$ Since $\hat {P}^2_x$ can be interpreted to mean $\hat {P} _x \cdot \hat {P} _x$, there are two possibilities for $\hat {P} _x$, namely $\hat {P} _x = i\hbar \frac {\partial}{\partial x}$ or $\hat {P} _x = -i\hbar \frac {\partial}{\partial x}$ where $i = \sqrt {-1}$.
The second possibility is the best choice, as explained below. In making this choice, consider the function $e^{ikx}$. This function is an eigenfunction of both possible forms of the momentum operator, and this fact can be used to decide which form to use. Problems Exercise $11$ Demonstrate that the function $e^{ikx}$ is an eigenfunction of either momentum operator. Plan: Start with $\hat {P} _x \psi (x) = P_x \psi (x)$ where $\psi (x) = e^{ikx}$. Operate on $\psi(x) = e^{ikx}$ with $\pm i\hbar \frac {\partial}{\partial x}$ to show that $P_x = \mp \hbar k$. Which do you prefer, $p_x = +\hbar k$ or $p_x = -\hbar k$? If we use the momentum operator that has the minus sign, we get the momentum and the wave vector pointing in the same direction, $p_x = +\hbar k$, which is the preferred result corresponding to the de Broglie relation. A review of vectors and scalar products may help you with the following exercises. Exercise $12$ Show graphically, using a unit vector diagram, that $\vec {x} \cdot \vec {x} = 1$ and $\vec {x} \cdot \vec {y} = 0$. Exercise $13$ Consider a particle moving in three dimensions. The total momentum, which is a vector, is $p = \vec {x} P_x + \vec {y} P_y + \vec {z} P_z$ where $\vec {x}$, $\vec {y}$, and $\vec {z}$ are unit vectors pointing in the x, y, and z directions, respectively. Write the operators for the momentum of this particle in the x, y, and z directions, and show that the total momentum operator is $-i \hbar \nabla = - i \hbar \left ( \vec {x} \frac {\partial}{\partial x} + \vec {y} \frac {\partial}{\partial y} + \vec {z} \frac {\partial}{\partial z} \right )$ where $\nabla$ is the vector operator called del (nabla). Show that the scalar product $\nabla \cdot \nabla$ produces the Laplacian operator. Exercise $14$ Following Exercise $11$, show that the de Broglie relation $p = \frac {h}{\lambda}$ follows from the definition of the momentum operator and the momentum eigenfunction for a one-dimensional space. Exercise $15$ Write the wavefunction for an electron moving in the z-direction with an energy of 100 eV. The form of the wavefunction is $e^{ikz}$. You need to find the value for $k$. Obtain the electron's momentum by operating on the wavefunction with the momentum operator.
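Exercises $11$ and $15$ can be previewed numerically. The sketch below is my addition (sympy and Python's math module assumed); it applies $\hat{P}_x = -i\hbar\,\partial/\partial x$ to $e^{ikx}$ and then finds $k$ for a 100 eV electron.

```python
import math
import sympy as sp

x = sp.Symbol('x', real=True)
k, hbar = sp.symbols('k hbar', positive=True)
psi = sp.exp(sp.I * k * x)

# Exercise 11: -i*hbar*d/dx acting on exp(ikx) returns +hbar*k times the function
print(sp.simplify((-sp.I * hbar * sp.diff(psi, x)) / psi))   # prints hbar*k

# Exercise 15: wave vector for a 100 eV electron, k = p/hbar with p = sqrt(2mT)
m_e, eV, hb = 9.109e-31, 1.602e-19, 1.055e-34
p = math.sqrt(2 * m_e * 100 * eV)
print(p / hb)    # k ~ 5.1e10 m^-1
```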
The time-dependent Schrödinger equation is used to find the time dependence of the wavefunction. This equation relates the energy to the first time derivative; the classical wave equation, by contrast, involved the second time derivative. This equation, $\hat {H} (r , t) \Psi (r, t) = i \hbar \frac {\partial}{\partial t} \Psi (r , t) \label {3-28}$ where $r$ represents the spatial coordinates ($x$, $y$, $z$), must be used when the Hamiltonian operator depends on time, e.g. when a time-dependent external field causes the potential energy to change with time. Even if the Hamiltonian does not depend on time, we can use this equation to find the time dependence $\varphi(t)$ of the eigenfunctions of $\hat{H}(r)$. First we write the wavefunction $\Psi(r,t)$ as a product of a space function $\psi(r)$ and a time function $\varphi (t)$ and substitute into Equation \ref{3-28}. $\Psi (r , t ) = \psi (r) \varphi (t) \label {3-29}$ We use a product function because the space and time variables are separated in Equation \ref{3-28} when the Hamiltonian operator does not depend on time. Since $\psi(r)$ is an eigenfunction of $\hat{H}(r)$ with eigenvalue $E$, this substitution leads to Equation \ref{3-30}, $\hat {H} (r) \psi (r) \varphi (t) = i \hbar \frac {\partial}{\partial t} \psi (r) \varphi (t)$ $E \psi (r) \varphi (t) = i\hbar \psi (r) \frac {d \varphi (t)}{dt} \label {3-30}$ which rearranges to $\frac {d \varphi (t)}{\varphi (t)} = \frac {-iE}{\hbar} dt \label {3-31}$ Integration gives $\varphi (t) = e^{-i \omega t} \label {3-32}$ by setting the integration constant to 0 and using the definition $\omega = \frac {E}{\hbar}$. Thus, we see the time-dependent Schrödinger equation contains the condition $E = \hbar \omega$ proposed by Planck and Einstein. The eigenfunctions of a time-independent Hamiltonian therefore have an oscillatory time dependence given by a complex function, i.e. a function that involves $\sqrt {-1}$. $\Psi(r,t) = \psi(r)e^{-i\omega t} \label {3-33}$ When molecules are described by such an eigenfunction, they are said to be in an eigenstate of the time-independent Hamiltonian operator. We will see that all observable properties of a molecule in an eigenstate are constant or independent of time because the calculation of the properties from the eigenfunction is not affected by the time dependence of the eigenfunction. A wavefunction with this oscillatory time dependence $e^{-i\omega t}$ therefore is called a stationary-state function. When a system is not in a stationary state, the wavefunction can be represented by a sum of eigenfunctions like those above. In this situation, the oscillatory time dependence does not cancel out in calculations, but rather accounts for the time dependence of physical observables. Examples are provided in Chapter 4, Activity 2, and Chapter 5, Activity 1. Example $16$ Complete the steps leading from Equation \ref{3-28} to Equation \ref{3-33}. Example $17$ Show that Equation \ref{3-33} is a solution to Equation \ref{3-28} when the Hamiltonian operator does not depend on time and $\psi(r)$ is an eigenfunction of the Hamiltonian operator. This might be a good time to review complex numbers. 3.07: Meaning of the Wavefunction Since wavefunctions can in general be complex functions, the physical significance cannot be found from the function itself because the $\sqrt {-1}$ is not a property of the physical world. Rather, the physical significance is found in the product of the wavefunction and its complex conjugate, i.e. the absolute square of the wavefunction, which also is called the square of the modulus.
$\psi ^* (r , t ) \psi (r , t ) = {|\psi (r , t ) |}^2 \label {3-34}$ where $r$ is a vector $(x, y, z)$ specifying a point in three-dimensional space. The square is used, rather than the modulus itself, just like the intensity of a light wave depends on the square of the electric field. At one time it was thought that for an electron described by the wavefunction $\psi(r)$, the quantity $e\psi^*(r_i)\psi(r_i)d\tau$ was the amount of charge to be found in the volume $d\tau$ located at $r_i$. However, Max Born found this interpretation to be inconsistent with the results of scattering experiments. The Born interpretation, which generally is accepted today, is that $\psi^*(r_i)\psi(r_i)\, d\tau$ is the probability that the electron is in the volume $d\tau$ located at $r_i$. The Born interpretation therefore calls the wavefunction the probability amplitude, the absolute square of the wavefunction is called the probability density, and the probability density times a volume element in three-dimensional space ($d\tau$) is the probability. The idea that we can understand the world of atoms and molecules only in terms of probabilities is disturbing to some, who are seeking more satisfying descriptions through ongoing research. Example $1$ Show that the square of the modulus of $\Psi(r,t) = \psi(r) e^{-i\omega t}$ is time independent. What insight regarding stationary states do you gain from this proof? Example $2$ According to the Born interpretation, what is the physical significance of $e\psi^*(r_0)\psi(r_0)d\tau$?
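Example $1$ can be confirmed in one line of symbolic algebra. A minimal sketch, assuming sympy (my addition); the symbol psi_r stands in for the spatial factor $\psi(r)$:

```python
import sympy as sp

t, w = sp.symbols('t omega', real=True)
psi_r = sp.Symbol('psi_r')   # placeholder for the spatial function psi(r)

Psi = psi_r * sp.exp(-sp.I * w * t)
density = sp.simplify(Psi * sp.conjugate(Psi))
print(density)   # psi_r*conjugate(psi_r): the time dependence has dropped out
```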
An important deduction can be made if we multiply the left-hand side of the Schrödinger equation by $\psi^*(x)$, integrate over all values of $x$, and examine the potential energy term that arises. We can deduce that the potential energy integral provides the average value for the potential energy. Likewise we can deduce that the kinetic energy integral provides the average value for the kinetic energy. This is shown in Equation $\ref{3-35}$. If we generalize this conclusion, such integrals give the average value for any physical quantity by using the operator corresponding to that physical observable in the integral. In the equation below, the symbol $\left \langle H \right \rangle$ is used to denote the average value for the total energy. $\left \langle H \right \rangle = \int \limits ^{\infty}_{-\infty} \psi ^* (x) \hat {H} \psi (x) dx = \underset{\text {kinetic energy term} }{ \int \limits ^{\infty}_{-\infty} \psi ^* (x) \left ( \frac {-\hbar ^2}{2m} \right ) \frac {\partial ^2 }{ \partial x^2} \psi (x) dx} + \underset{ \text {potential energy term} }{\int \limits ^{\infty}_{-\infty} \psi ^* (x) V (x) \psi (x) dx} \label{3-35}$ Example $1$ Evaluate the two integrals in Equation $\ref{3-35}$ for the wavefunction $\psi(x) = \sin(kx)$ and the potential function $V(x) = x$. The Hamiltonian operator consists of a kinetic energy term and a potential energy term. The kinetic energy operator involves differentiation of the wavefunction to the right of it. This step must be completed before multiplying by the complex conjugate of the wavefunction. The potential energy, however, usually depends only on position and not momentum. The potential energy operator therefore only involves the coordinates of a particle and does not involve differentiation. For this reason we do not need to use a caret over $V$ in Equation $\ref{3-35}$. For example, the harmonic potential in one dimension is $\frac{1}{2}kx^2$. (Note: here $k$ is the force constant and not the wave vector. Unfortunately, just like words, a symbol can have more than one meaning, and the meaning must be gotten from the context.) The potential energy integral then involves only products of functions, and the order of multiplication does not affect the result, e.g. $6 \times 4 = 4 \times 6 = 24$. This property is called the commutative property. The potential energy integral therefore can be written as $\left \langle V \right \rangle = \int \limits ^{\infty}_{-\infty} V (x) \psi ^* (x) \psi (x) dx \label{3-36}$ This integral is telling us to take the probability that the particle is in the interval $dx$ at $x$, which is $\psi^*(x)\psi(x)dx$, multiply this probability by the potential energy at $x$, and sum (i.e. integrate) over all possible values of $x$. This procedure is just the way to calculate the average potential energy $\left \langle V \right \rangle$ of the particle. This integral therefore is called the average-value integral or the expectation-value integral because it gives the average result of a large number of measurements of the particle's potential energy. When an operator involves differentiation, it does not commute with the wavefunctions, e.g. $\psi ^* (x) \frac {\partial ^2}{\partial x^2} \psi (x) \ne \psi ^* (x) \psi (x) \frac {\partial ^2}{\partial x^2} \ne \frac {\partial ^2}{\partial x^2}( \psi ^*(x) \psi (x) ) \label{3-37}$ but the interpretation of the kinetic energy integral in Equation $\ref{3-35}$ is the same as for the potential energy. This integral gives the average kinetic energy of the particle.
These expectation value integrals are very important in Quantum Mechanics. They provide us with the average values of physical properties (e.g. energy, momentum, or position) because in many cases precise values cannot, even in principle, be determined. If we know the average of some quantity, it also is important to know whether the distribution is narrow, i.e. all values are close to the average, or broad, i.e. many values differ considerably from the average. The width of a distribution is characterized by its variance.
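Because $\psi(x) = \sin(kx)$ is not normalizable on an infinite interval, a hedged way to work something like Example $1$ is to evaluate the integrals of Equation \ref{3-35} for a normalized sine function on a finite interval $0 \le x \le L$. This is my adaptation, not the text's procedure; sympy is assumed.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
L, m, hbar = sp.symbols('L m hbar', positive=True)

psi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)   # normalized sine with k = pi/L

V_avg = sp.integrate(psi * x * psi, (x, 0, L))                                   # V(x) = x
T_avg = sp.integrate(psi * (-hbar**2 / (2 * m)) * sp.diff(psi, x, 2), (x, 0, L))

print(sp.simplify(V_avg))   # L/2: on average the particle sits mid-interval
print(sp.simplify(T_avg))   # pi**2*hbar**2/(2*m*L**2)
```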
We now summarize the postulates of Quantum Mechanics that have been introduced. The application of these postulates will be illustrated in subsequent chapters. Postulate 1 The properties of a quantum mechanical system are determined by a wavefunction $\Psi(r,t)$ that depends upon the spatial coordinates of the system and time, $r$ and $t$. For a single-particle system, $r$ is the set of coordinates of that particle $r = (x_1, y_1, z_1)$. For more than one particle, $r$ is used to represent the complete set of coordinates $r = (x_1, y_1, z_1, x_2, y_2, z_2,\dots x_n, y_n, z_n)$. Since the state of a system is defined by its properties, $\Psi$ specifies or identifies the state and sometimes is called the state function rather than the wavefunction. Postulate 2 The wavefunction is interpreted as a probability amplitude, with the absolute square of the wavefunction, $\Psi^*(r,t)\Psi(r,t)$, interpreted as the probability density at time $t$. A probability density times a volume is a probability, so for one particle $\psi^*(x_1,y_1,z_1,t)\psi(x_1,y_1,z_1,t)dx_1dy_1dz_1$ is the probability that the particle is in the volume $dx_1\,dy_1\,dz_1$ located at $x_1, y_1, z_1$ at time $t$. For a many-particle system, we write the volume element as $d\tau = dx_1dy_1dz_1\dots dx_ndy_ndz_n$; and $\Psi^*(r,t)\Psi(r,t)d\tau$ is the probability that particle 1 is in the volume $dx_1dy_1dz_1$ at $x_1y_1z_1$ and particle 2 is in the volume $dx_2dy_2dz_2$ at $x_2y_2z_2$, etc. Because of this probabilistic interpretation, the wavefunction must be normalized. $\int \limits _{all space} \Psi ^* (r, t) \Psi (r , t) d \tau = 1 \label {3-38}$ The integral sign here represents a multi-dimensional integral involving all coordinates: $x_1 \dots z_n$. For example, integration in three-dimensional space will be an integration over $dV$, which can be expanded as: • $dV=dx\,dy\,dz$ in Cartesian coordinates or • $dV=r^2\sin{\theta}\, dr\,d\theta \,d\phi$ in spherical coordinates or • $dV=r\, dr\,d\theta\,dz$ in cylindrical coordinates. Postulate 3 For every observable property of a system there is a quantum mechanical operator. The operator for the position of a particle in three dimensions is just the set of coordinates $x$, $y$, and $z$, which is written as a vector $r = (x , y , z ) = x \vec {i} + y \vec {j} + z \vec {k} \label {3-39}$ The operator for a component of momentum is $\hat {P} _x = -i \hbar \dfrac {\partial}{\partial x} \label {3-40}$ and the operator for kinetic energy in one dimension is $\hat {T} _x = -\left (\dfrac {\hbar ^2}{2m} \right ) \dfrac {\partial ^2}{\partial x^2} \label {3-41}$ In three dimensions the momentum and kinetic energy operators are $\hat {p} = -i \hbar \nabla \label {3-42}$ and $\hat {T} = \left ( -\dfrac {\hbar ^2}{2m} \right ) \nabla ^2 \label {3-43}$ The Hamiltonian operator $\hat{H}$ is the operator for the total energy. In many cases only the kinetic energy of the particles and the electrostatic or Coulomb potential energy due to their charges are considered, but in general all terms that contribute to the energy appear in the Hamiltonian. These additional terms account for such things as external electric and magnetic fields and magnetic interactions due to magnetic moments of the particles and their motion. Postulate 4 The time-independent wavefunctions of a time-independent Hamiltonian are found by solving the time-independent Schrödinger equation. $\hat {H} (r) \psi (r) = E \psi (r) \label {3-44}$ These wavefunctions are called stationary-state functions because the properties of a system in such a state, i.e.
a system described by the function $\psi(r)$, are time independent. Postulate 5 The time evolution or time dependence of a state is found by solving the time-dependent Schrödinger equation. $\hat {H} (r , t) \Psi (r , t) = i \hbar \frac {\partial}{\partial t} \Psi (r , t ) \label {3-45}$ For the case where $\hat{H}$ is independent of time, the time-dependent part of the wavefunction is $e^{-i\omega t}$, where $\omega = \frac {E}{\hbar}$ or equivalently $\nu = \frac {E}{h}$, which shows that the energy-frequency relation used by Planck, Einstein, and Bohr results from the time-dependent Schrödinger equation. This oscillatory time dependence of the probability amplitude does not affect the probability density or the observable properties because the oscillatory factors cancel when these quantities are calculated by multiplication with the complex conjugate. Postulate 6 If a system is described by the eigenfunction $\psi$ of an operator $\hat{A}$, then the value measured for the observable property corresponding to $\hat{A}$ will always be the eigenvalue $a$, which can be calculated from the eigenvalue equation. $\hat {A} \psi = a \psi \label {3-46}$ Postulate 7 If a system is described by a wavefunction $\psi$, which is not an eigenfunction of an operator $\hat{A}$, then a distribution of measured values will be obtained, and the average value of the observable property is given by the expectation value integral, $\left \langle A \right \rangle = \dfrac {\int \psi ^* \hat {A} \psi d \tau}{\int \psi ^* \psi d \tau} \label {3-47}$ where the integration is over all coordinates involved in the problem. The average value $\left \langle A \right \rangle$, also called the expectation value, is the average of many measurements. If the wavefunction is normalized, then the normalization integral in the denominator of Equation $\ref{3-47}$ equals 1. Problems • Exercise $21$ What does it mean to say a wavefunction is normalized? Why must wavefunctions be normalized? • Exercise $22$ Rewrite Equations $\ref{3-42}$ and $\ref{3-43}$ using the definitions of $\hbar$, $\nabla$, and $\nabla^2$. • Exercise $23$ Write a definition for a stationary state. What is the time dependence of the wavefunction for a stationary state? • Exercise $24$ Show how the energy-frequency relation used by Planck, Einstein, and Bohr results from the time-dependent Schrödinger equation. • Exercise $25$ Show how the de Broglie relation follows from the postulates of Quantum Mechanics using the definition of the momentum operator. • Exercise $26$ What quantity in Quantum Mechanics gives you the probability density for finding a particle at some specified position in space? How do you calculate the average position of the particle and the uncertainty in the position of the particle from the wavefunction? 3.E: The Schrödinger Equation (Exercises) Q3.1 Prove Euler's formula is correct by expanding $e^{\pm i\theta}$, $\cos\theta$, and $\sin \theta$ each in terms of a Maclaurin series and showing that corresponding terms are identical. Q3.2 The following table gives the results of many measurements of the length of a laser cavity. Complete the table by calculating the probability for each value. Use the probabilities that you calculated to compute the average value for the length, the average of the length squared, the variance, and the standard deviation in the measurements.
length (cm) | number of times the value was obtained | probability
100.05 | 4 |
100.04 | 3 |
100.03 | 6 |
100.02 | 9 |
100.01 | 8 |
100.00 | 9 |
99.99 | 9 |
99.98 | 8 |
99.97 | 2 |
99.96 | 3 |

Q3.3 Consider an electron trapped by a positively charged point defect in a one-dimensional world. The following wavefunction, with $\alpha$ = 20/nm, describes the electron in terms of its distance x from the point defect located at x = 0. Note that in 1, 2, and 3 dimensions, r = |x|, ${(x^2+y^2)}^{1/2}$, and ${(x^2+y^2+z^2)}^{1/2}$, respectively. $\psi (r) = N e^{-\alpha |x|} \label{3-48}$ 1. Evaluate the normalization constant N. 2. Graph the probability density for this electron. 3. Calculate the expectation value for x and |x|. 4. If the electron were in a two- or three-dimensional world, such as on the surface of a crystal or in a free atom, would the average distance of the electron from the origin $\langle r \rangle$ be less, the same, or larger than the value you found for one dimension? 5. Determine whether the expectation value for r depends upon the dimensionality of the world (1, 2, or 3) in which the atom lives.
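The bookkeeping asked for in Q3.2 is short enough to show in full. A worked sketch (my addition; numpy assumed), which you can compare against your hand calculation:

```python
import numpy as np

lengths = np.array([100.05, 100.04, 100.03, 100.02, 100.01,
                    100.00, 99.99, 99.98, 99.97, 99.96])
counts = np.array([4, 3, 6, 9, 8, 9, 9, 8, 2, 3])

prob = counts / counts.sum()            # probability for each value
avg = np.sum(prob * lengths)            # average length
avg_sq = np.sum(prob * lengths**2)      # average of the length squared
variance = avg_sq - avg**2
print(prob)
print(avg, variance, np.sqrt(variance))   # mean, variance, standard deviation
```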
Our first chemical application of Quantum Mechanics is directed at obtaining a description of the electronic spectra of a class of molecules called cyanine dyes. We start with this set of molecules because we can use a particularly simple model, the particle-in-a-box model, to describe their electronic structure. This simple model applied to a real molecular system will further develop our "sense of Quantum Mechanics." We also will discover rules, called selection rules, that are used to tell whether a transition between two energy levels will occur in an absorption or emission spectrum. Later we will learn about more sophisticated and general methods for describing the electronic states of atoms and molecules. 04: Electronic Spectroscopy of Cyanine Dyes Overview of key concepts and equations for the particle in a box:
Potential energy: $V = 0$ inside the box ($0 < x < L$); $V = \infty$ outside the box
Hamiltonian: $\hat {H} = -\frac {\hbar ^2}{2m} \frac {d^2}{dx^2}$
Wavefunctions: ${(\frac {2}{L})}^{1/2} \sin (\frac {n\pi}{L}x)$
Quantum numbers: $n = 1, 2, 3, \dots$
Energies: $E = n^2 \left(\frac {h^2}{8mL^2}\right)$
Spectroscopic selection rules: $\Delta n$ = odd integer
Angular momentum properties: none
4.02: Cyanine Dyes Cyanine dye molecules, which have the general structure shown in Figure $1$, are planar cations. The number of carbon atoms in the chain can vary, as can the nature of the end groups containing the nitrogen atoms. The R groups in the diagram represent $H$, $CH_3$, $CH_3CH_2$, or many other moieties including ring structures. Since these dyes are cations, they can be paired with many anions, e.g. $I^-$, iodide. The position (wavelength) and strength (absorption coefficient) of the absorption band depend upon the length of the carbon chain between the nitrogen atoms but are not affected very much by the nature of the end groups beyond the nitrogen atoms. These molecules are called dye molecules because they have very intense absorption bands in the visible region of the spectrum, as shown in Figure $2$. This strong absorption of light at particular wavelengths makes solutions of these molecules brightly colored. A solution of a dye shows the color of the light that is not absorbed. The strong absorption leads to many applications in technology. For example, dyes are used to color plastics, fabrics, and hair. They also can be used as filters to produce colored light and as a laser medium in medical applications. Example $1$ Draw the Lewis electron dot structure of dye I that produced the spectrum shown in Figure $2$ with the maximum absorption at 309 nm. Examine the resonance structures and determine whether all the carbon-carbon bonds are identical or whether some are single bonds and some are double bonds. Example $2$ Use Figure $2$ to describe what happens to the maximum absorption coefficient and the wavelength of the peak absorption as the length of a cyanine dye molecule increases. Example $3$ The decadic molar absorption coefficient for dye III at $\lambda$ = 512 nm is almost 200,000 in units of $1000\, cm^2/mol$ (equivalent to $L\, mol^{-1}\, cm^{-1}$). If 0.1 gram of dye III (very small) were dissolved in 10 liters of water (very large), what fraction of light at 512 nm would be absorbed in a path length of 1 mm (very small)? What is the concentration of this solution? What insight do you gain from your results?
The electrons and bonds in the cyanine dyes can be classified as sigma or pi. The probability densities for the sigma electrons are large along the lines connecting the nuclei, while the probability densities for the pi electrons are large above and below the plane containing the nuclei. In molecular orbital theory, the $\pi$ electrons can be described by wavefunctions composed from $p_z$ atomic orbitals, shown in Figure $3$.

Example $4$

Determine the number of pi electrons in each of the three molecules described in Figure $2$.

The pi electrons in these molecules, one from each carbon atom and three from the two nitrogen atoms, are delocalized over the length of the molecule between the nitrogen atoms. When ultraviolet and visible light is absorbed by the cyanine dyes, the energy is used to cause transitions of the pi electrons from one energy level to another, as sketched in Figure $4$. The longest-wavelength transition occurs from the highest-energy occupied level to the lowest-energy unoccupied level.

We will use Quantum Mechanics and a simple model, called the particle-in-a-box model, to explain why the longer molecules absorb at longer wavelengths and have larger absorption coefficients. This analysis will demonstrate that Quantum Mechanics is a quantitative theory: it provides both a qualitative understanding of chemical systems and numerical values for the properties of interest. Both are important for understanding molecules and their chemistry.

You can visualize the absorption of energy and the promotion of an electron from a lower energy level to a higher one like throwing a shirt from your closet floor to a shelf. There is an important difference, however. You can see the shirt on the way from the floor to the shelf; you can tell when it left the floor and when it arrived on the shelf. Such precise information cannot be obtained for the electron. We only know the probability that the electron is in the lower level and the probability that it is in the higher level as a function of time. We do not know exactly when the electron makes the transition from one energy level to the other.

We can imagine that the potential energy experienced by the pi electron varies along the chain as shown in Figure $5$, effectively trapping the electron in the pi region of the molecule, i.e. in a one-dimensional box. At the ends of the chain the potential energy rises to a large value. The particle-in-a-box model essentially consists of three approximations to the actual potential energy:

1. The chain of carbon atoms forms a one-dimensional space of some length L for the pi electrons.
2. The potential energy is constant along the chain; i.e. the dips at the individual atoms are ignored.
3. The potential energy becomes infinite at some point slightly past the nitrogen atoms.

Since only changes in energy are meaningful, and an absolute zero of energy does not exist, the constant potential energy of the electron along the chain between the nitrogen atoms can be defined as zero. The particle-in-a-box potential energy also is shown in Figure $5$. The particle-in-a-box model allows us to relate the transition energy, obtained from the position of the absorption maximum, to the length of the conjugated part of the molecule, i.e. the distance L between the infinite potential barriers at the ends of the molecule.
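The three approximations above amount to a very simple potential energy function. A minimal sketch of it in code (the function name and the sample numbers are mine, not from the text):

```python
import numpy as np

def box_potential(x, L):
    """Idealized particle-in-a-box potential: V = 0 for 0 < x < L, infinite outside."""
    return np.where((x > 0) & (x < L), 0.0, np.inf)

L = 0.85e-9                              # box length in meters, roughly a short dye chain
x = np.linspace(-0.2 * L, 1.2 * L, 8)    # sample points inside and outside the box
print(box_potential(x, L))               # inf outside (0, L), 0.0 inside
```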
From this distance, determined for different series of dyes, we can obtain the average bond length β and the distance δ that the box extends beyond a nitrogen atom for each series. If this model is reasonable, we expect the average bond lengths to be similar for each series and δ to vary from one series to another due to differences in the end groups attached to the nitrogen atoms. Some end groups might, due to their polarizability or electronegativity, allow the electrons to penetrate further past the nitrogen atoms than others. Analyzing the data in this way, rather than using estimated bond lengths to predict transition energies, was suggested by R. S. Moog (J. Chem. Educ. 1991, 68, 506-508); a numerical sketch of the fitting procedure follows Example $5$ below.

Example $5$

In Figure $5$, why does a realistic potential energy dip at each atom? Why is the dip larger for nitrogen than for carbon? Why does the potential energy increase sharply at the ends of the molecule?
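To make the fitting procedure concrete: anticipating the analysis in Section 4.04, a dye with $N$ pi electrons has its HOMO at $n = N/2$, so the lowest transition energy is $\Delta E = (N+1)h^2/(8mL^2) = hc/\lambda_{max}$, and each measured $\lambda_{max}$ implies a box length $L = \sqrt{(N+1)h\lambda_{max}/(8mc)}$. Fitting $L$ against the number of bonds between the nitrogen atoms then gives the average bond length as the slope and $2\delta$ as the intercept. In the sketch below, the 309 nm and 512 nm maxima are quoted in the text, but the middle value, the carbon counts (3, 5, 7), and the bond counts are illustrative assumptions of mine, not data from Figure $2$ or from Moog's paper:

```python
import numpy as np

h = 6.626e-34    # Planck's constant, J s
m = 9.109e-31    # electron mass, kg
c = 2.998e8      # speed of light, m/s

def box_length(lam_max, n_pi):
    """Box length implied by a measured lambda_max for a dye with n_pi pi electrons."""
    return np.sqrt((n_pi + 1) * h * lam_max / (8 * m * c))

# Illustrative inputs only (see caveats above):
lam_max = np.array([309e-9, 409e-9, 512e-9])   # absorption maxima, m
n_pi    = np.array([6, 8, 10])                 # pi electrons for 3-, 5-, 7-carbon chains
n_bonds = np.array([4, 6, 8])                  # bonds between the two nitrogen atoms

L = box_length(lam_max, n_pi)
beta, intercept = np.polyfit(n_bonds, L, 1)    # model: L = n_bonds * beta + 2 * delta
print(f"average bond length beta = {beta * 1e9:.3f} nm")
print(f"end extension delta      = {intercept / 2 * 1e9:.3f} nm")
```

With these inputs the fit gives a bond length of roughly 0.12 nm and δ of roughly 0.16 nm; the point of the exercise is the procedure, not these particular numbers.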
4.03: The Particle-in-a-Box Model
The particle-in-a-box model is used to approximate the Hamiltonian operator for the $\pi$ electrons because the full Hamiltonian is quite complex. The full Hamiltonian operator for each electron consists of the kinetic energy term $\dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2}$ and the sum of the Coulomb potential energy terms $\dfrac {q_1q_2}{4\pi \epsilon _0 r_{12}}$ for the interaction of each electron with all the other electrons and with the nuclei ($q$ is the charge on each particle and $r$ is the distance between them). Considering these interactions, the Hamiltonian for electron $i$ is given below.

$\hat {H} _i = \dfrac {- \hbar ^2}{2m} \dfrac {d^2}{dx^2} + \underset{\text{sum over electrons}}{ \sum _{j } \dfrac {e^2}{4 \pi \epsilon _0 r_{i, j}}} - \underset{\text{sum over nuclei}}{ \sum _{n} \dfrac {e^2 Z_n}{4 \pi \epsilon _0 r_{i,n}} } \label {4-1}$

The Schrödinger equation obtained with this Hamiltonian cannot be solved analytically because of the electron-electron interaction terms; some approximations for the potential energy must be made. We want a model for the dye molecules that has a particularly simple potential energy function, so that the corresponding Schrödinger equation can be solved easily. The particle-in-a-box model has the necessary simple form. It also permits us to get directly at understanding the most interesting feature of these molecules, their absorption spectra.

As mentioned in the previous section, we assume that the π-electron motion is restricted to one dimension, left and right along the chain. The average potential energy due to the interaction with the other electrons and with the nuclei is taken to be a constant, except at the ends of the molecule, where the potential energy increases abruptly to a large value. This increase in the potential energy keeps the electrons bound within the conjugated part of the molecule. Figure $1$ shows the idealized particle-in-a-box potential function and the more realistic potential energy function.

We have defined the constant potential energy for the electrons within the molecule as the zero of energy. One end of the molecule is set at $x = 0$, the other at $x = L$, and the potential energy goes to infinity at these points. For one electron located within the box, i.e. between $x = 0$ and $x = L$, the Hamiltonian is $\hat {H} = \dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2}$ because $V = 0$, and the (time-independent) Schrödinger equation that needs to be solved is then

$\dfrac {- \hbar ^2}{2m} \dfrac {d^2}{dx^2} \psi (x) = E \psi (x) \label {4-2}$

We need to solve this differential equation to find the wavefunction and the energy. In general, differential equations have multiple solutions (families of functions that satisfy them), so by solving this equation we will find all the wavefunctions and all the energies for the particle-in-a-box. There are many ways of solving differential equations, and you will see some of them illustrated here and in subsequent chapters. One way is to recognize functions that might satisfy the equation. This equation says that differentiating the function twice produces the function times a constant. What kinds of functions have you seen that regenerate themselves after being differentiated twice? Exponential functions and sine and cosine functions come to mind.
Example $1$

Use $\sin(kx)$, $\cos(kx)$, and $e^{ikx}$ as trial wavefunctions in Equation $\ref{4-2}$ and differentiate twice to demonstrate that each of these functions satisfies the Schrödinger equation for the particle-in-a-box.

Example $1$ leads you to the following three equations.

$\dfrac {\hbar ^2 k^2}{2m} \sin (kx) = E \sin (kx) \label {4-3}$

$\dfrac {\hbar ^2 k^2}{2m} \cos (kx) = E \cos (kx) \label {4-4}$

$\dfrac {\hbar ^2 k^2}{2m} e^{ikx} = E e^{ikx} \label {4-5}$

For the equalities expressed by these equations to hold, $E$ must be given by

$E = \dfrac {\hbar ^2 k^2}{2m} \label {4-6}$

Kinetic energy is the momentum squared divided by twice the mass, $p^2/2m$, so we conclude from Equation $\ref{4-6}$ that $\hbar^2k^2 = p^2$.

Solutions to differential equations that describe the real world also must satisfy conditions imposed by the physical situation. These conditions are called boundary conditions. For the particle-in-a-box, the particle is restricted to the region of space occupied by the conjugated portion of the molecule, between $x = 0$ and $x = L$. If we make the large potential energy at the ends of the molecule infinite, then the wavefunctions must be zero at $x = 0$ and $x = L$ because the probability of finding a particle with an infinite energy should be zero. Otherwise, the world would not have an energy resource problem. This boundary condition therefore requires that $ψ(0) = ψ(L) = 0$.

Example $2$

Which of the functions $\sin(kx)$, $\cos(kx)$, or $e^{ikx}$ is 0 when $x = 0$?

As you discovered in Example $2$, of these three functions only $\sin(kx)$ is zero when $x = 0$. Consequently, only $\sin(kx)$ is a physically acceptable solution to the Schrödinger equation. The boundary condition described above also requires us to set $ψ(L) = 0$:

$ψ(L) = \sin(kL) = 0 \label {4-7}$

The sine function will be zero if $kL = nπ$ with $n = 1, 2, 3, \cdots$. In other words,

$k = \dfrac {n \pi}{L} \label {4-8}$

with $n = 1, 2, 3 \cdots$. Note that $n = 0$ is not acceptable here because it makes the wave vector zero, $k = 0$, so $\sin(kx) = 0$ and thus $ψ(x)$ is zero everywhere. If the wavefunction were zero everywhere, the probability of finding the electron would be zero everywhere. This clearly is not acceptable, because it means there is no electron.

Example $3$

Show that $\sin(kx) = 0$ at $x = L$ if $k = nπ/L$ and $n$ is an integer.

Negative Quantum Numbers

It appears that a negative integer also would work for $n$ because

$\sin \left ( \dfrac {-n \pi}{L} x \right ) = - \sin \left ( \dfrac {n \pi}{L} x \right ) \label {4-9}$

which also satisfies the boundary condition at $x = L$. The reason negative integers are not used is a bit subtle. Changing $n$ to $-n$ just changes the sign (also called the phase) of the wavefunction from + to -, and does not produce a function describing a new state of the particle. Note that the probability density for the particle is the absolute square of the wavefunction, and the energies are the same for $n$ and $-n$. Also, since the wave vector $k$ is associated with the momentum ($p = ħk$), $n > 0$ means $k > 0$, corresponding to momentum in the positive direction, and $n < 0$ means $k < 0$, corresponding to momentum in the negative direction. By using Euler's formula one can show that the sine function incorporates both $k$ and $-k$, since

$\sin (kx) = \dfrac {1}{2i} ( e^{ikx} - e^{-ikx} ) \label {4-10}$

so changing $n$ to $-n$ and $k$ to $-k$ does not produce a function describing a new state, because both momentum states already are included in the sine function.
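A quick symbolic check of Example $1$ and the boundary conditions above, a minimal sketch using sympy (the variable names are mine):

```python
import sympy as sp

x, k, m, hbar, L = sp.symbols('x k m hbar L', positive=True)

# Apply the box Hamiltonian, -(hbar^2/2m) d^2/dx^2, to each trial function.
for psi in (sp.sin(k*x), sp.cos(k*x), sp.exp(sp.I*k*x)):
    Hpsi = -(hbar**2 / (2*m)) * sp.diff(psi, x, 2)
    # Each trial function is regenerated times the constant E = hbar^2 k^2 / (2m).
    print(psi, '->', sp.simplify(Hpsi / psi))      # prints hbar**2*k**2/(2*m)

# Boundary conditions: only sin(kx) vanishes at x = 0,
# and sin(kL) = 0 forces k = n*pi/L for integer n.
n = sp.symbols('n', integer=True, positive=True)
print(sp.sin(k*x).subs(x, 0))                      # 0
print(sp.sin(k*x).subs({k: n*sp.pi/L, x: L}))      # 0 for integer n
```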
The set of wavefunctions that satisfies both boundary conditions is given by

$\psi _n (x) = N \sin \left ( \dfrac {n \pi}{L} x \right ) \text {with } n = 1, 2, 3, \cdots \label {4-11}$

The normalization constant $N$ is introduced and evaluated to satisfy the normalization requirement.

$\int \limits _0^L \psi ^* (x) \psi (x) dx = 1 \label {4-12}$

$N^2 \int \limits _0^L \sin ^2 \left ( \dfrac {n \pi x}{L} \right ) dx = 1 \label {4-13}$

$N = \sqrt{ \dfrac{1}{\int \limits _0^L \sin ^2 \dfrac {n\pi x}{L} dx} } \label {4-14}$

$N = \sqrt{ \dfrac {2}{L}} \label {4-15}$

Finally we write the wavefunction:

$\psi _n (x) = \sqrt{ \dfrac {2}{L} } \sin \left ( \dfrac {n \pi}{L} x \right ) \label {4-16}$

Example $4$

Evaluate the integral in Equation $\ref{4-13}$ and show that $N = (2/L)^{1/2}$.

By finding the solutions to the Schrödinger equation and imposing boundary conditions, we have found a whole set of wavefunctions and corresponding energies for the particle-in-a-box. The wavefunctions and energies depend upon the number $n$, which is called a quantum number. In fact, there are an infinite number of wavefunctions and energy levels, corresponding to the infinite number of values for $n$ ($n = 1 \rightarrow \infty$). The wavefunctions are given by Equation $\ref{4-16}$ and the energies by Equation $\ref{4-6}$. If we substitute the expression for $k$ from Equation $\ref{4-8}$ into Equation $\ref{4-6}$, we obtain the equation for the energies $E_n$:

$E_n = \dfrac {n^2 \pi ^2 \hbar ^2}{2mL^2} = n^2 \left (\dfrac {h^2}{8mL^2} \right ) \label {4-17}$

Example $5$

Substitute the wavefunction, Equation $\ref{4-16}$, into Equation $\ref{4-2}$ and differentiate twice to obtain the expression for the energy given by Equation $\ref{4-17}$.

From Equation $\ref{4-17}$ we see that the energy is quantized in units of $\dfrac {h^2}{8mL^2}$; i.e. only certain values for the energy of the particle are possible. This quantization, the dependence of the energy on integer values for $n$, results from the boundary conditions requiring that the wavefunction be zero at certain points. We will see in other chapters that quantization generally is produced by boundary conditions and the presence of Planck's constant in the equations. The lowest-energy state of a system is called the ground state. Note that the ground state ($n = 1$) energy of the particle-in-a-box is not zero. This energy is called the zero-point energy.

Example $6$

Here is a neat way to deduce or remember the expression for the particle-in-a-box energies. The momentum of a particle has been shown to be equal to $ħk$. Show that this momentum, with $k$ constrained to be equal to $nπ/L$, combined with the classical expression for the kinetic energy in terms of the momentum ($p^2/2m$), produces Equation $\ref{4-17}$. Determine the units for $\dfrac {h^2}{8mL^2}$ from the units for $h$, $m$, and $L$.

Example $7$

Why must the wavefunction for the particle-in-a-box be normalized? Show that $\psi(x)$ in Equation $\ref{4-16}$ is normalized.

Example $8$

Use a spreadsheet program, Mathcad, or other suitable software to construct an accurate energy level diagram and to plot the wavefunctions and probability densities for a particle-in-a-box with $n = 1$ to $6$. You can make your graphs universal, i.e. apply to any particle in any box, by using the quantity $(h^2/8mL^2)$ as your unit of energy and $L$ as your unit of length.
To make these universal graphs, plot $n^2$ on the y-axis of the energy-level diagram, and plot $x/L$ from $0$ to $1$ on the x-axis of your wavefunction and probability density graphs.

Example $9$

How does the energy of the electron depend on the size of the box and the quantum number $n$? What is the significance of these variations with respect to the spectra of cyanine dye molecules with different numbers of carbon atoms and pi electrons? Plot $E(n^2)$, $E(L^2)$, and $E(n)$ on the same figure and comment on the shape of each curve.

The quantum number serves as an index to specify the energy and wavefunction or state. Note that $E_n$ for the particle-in-a-box varies as $n^2$ and as $1/L^2$, which means that as $n$ increases the energies of the states get further apart, and as $L$ increases the energies get closer together. How the energy varies with increasing quantum number depends on the nature of the particular system being studied; be sure to take note of this relationship for each case discussed in subsequent chapters.
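For readers who prefer code to a spreadsheet, here is a minimal numpy/matplotlib sketch of the universal graphs (all names are my own); the trapezoid-rule printout doubles as a numerical check that each $\psi_n$ is normalized:

```python
import numpy as np
import matplotlib.pyplot as plt

xi = np.linspace(0, 1, 500)            # xi = x / L, so the plots apply to any box

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for n in range(1, 7):
    psi = np.sqrt(2) * np.sin(n * np.pi * xi)   # sqrt(L) * psi_n(x), dimensionless
    axes[0].hlines(n**2, 0, 1)                  # energy in units of h^2 / (8 m L^2)
    axes[1].plot(xi, psi, label=f"n={n}")
    axes[2].plot(xi, psi**2, label=f"n={n}")
    # Normalization check: the integral of |psi|^2 d(xi) should be 1 for every n.
    print(n, np.trapz(psi**2, xi))

axes[0].set(title="Energy levels", ylabel="$E / (h^2/8mL^2)$")
axes[1].set(title="Wavefunctions", xlabel="$x/L$")
axes[2].set(title="Probability densities", xlabel="$x/L$")
axes[1].legend(fontsize=8)
plt.tight_layout()
plt.show()
```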
4.04: Spectroscopy of the Particle-in-a-Box Model
The wavefunctions that describe electrons in atoms and molecules are called orbitals. An orbital is a wavefunction for a single electron. When we say an electron is in orbital $n$, we mean that it is described by a particular wavefunction $Ψ_n$ and has energy $E_n$. All the properties of this electron can be calculated from $Ψ_n$ as described in Chapter 3. We will now use the particle-in-a-box model to explain the absorption spectra of the cyanine dyes.

When an atom or molecule absorbs a photon, the atom or molecule goes from one energy level, designated by quantum number $n_i$, to a higher energy level, designated by $n_f$. We can also say that the molecule goes from one electronic state to another. This change is called a transition. Sometimes it is said that an electron goes from one orbital to another in a transition, but this statement is not general. It is valid for a particle-in-a-box, but not for real atoms and molecules, which are more complicated than the simple particle-in-a-box model.

The energy of the photon absorbed ($E_{photon} = h\nu$) matches the difference in energy between the two states involved in the transition ($ΔE_{states}$). In general, the observed frequency or wavelength for a transition is calculated from the change in energy using the following equalities,

$\Delta E_{states} = E_f - E_i = E_{photon} = h \nu = hc \bar {\nu} \label {4-18}$

Then, for the specific case of the particle-in-a-box,

$E_{photon} = \Delta E_{states} = E_f - E_i = \frac {(n_{f}^2 - n_{i}^2) h^2}{8mL^2} \label {4-19}$

where $n_f$ is the quantum number associated with the final state and $n_i$ is the quantum number for the initial state. A negative value for $E_{photon}$ means the photon is emitted as a result of the transition between states; a positive value means the photon is absorbed.

Generally the transition energy, $E_{photon}$ or $ΔE_{states}$, is taken to correspond to the peak in the absorption spectrum. When high accuracy is needed for the electronic transition energy, the spectral line shape must be analyzed to account for rotational and vibrational motion as well as effects due to the solvent or environment. Contributions of rotational and vibrational motion to an absorption spectrum will be discussed in later chapters.

In a cyanine dye molecule that has three carbon atoms in the chain, there are six $π$-electrons. When light is absorbed, one of these electrons increases its energy by an amount $hν$ and jumps to a higher energy level. In order to use Equation \ref{4-18}, we need to know which energy levels are involved. We assign the electrons to the lowest energy levels to create the ground-state, lowest-energy electron configuration. We could put all six electrons in the $n = 1$ level, or we could put one electron in each of $n = 1$ through $n = 6$, or we could pair the electrons in $n = 1$ through $n = 3$, etc.

The Pauli Exclusion Principle says that each spatial wavefunction can describe, at most, two electrons; in other words, each energy level can have only two electrons assigned to it. Spatial refers to our 3-dimensional space, and a spatial wavefunction depends upon the spatial coordinates x, y, or z. We will discuss the Pauli Exclusion Principle more fully later, but you probably have encountered it in other courses. Rather than appeal to the Pauli Exclusion Principle to assign the electrons to the energy levels, let's try an empirical approach and discover the Pauli Exclusion Principle as a result.
Assign the electrons to the energy levels in different ways until you find an assignment that agrees with experiment. When there is an even number of electrons, the lowest-energy transition is the one between the highest occupied level (HOMO) and the lowest unoccupied level (LUMO); all other transitions have a higher energy. HOMO designates the highest-energy occupied molecular orbital, and LUMO designates the lowest-energy unoccupied molecular orbital. The term orbital refers to the wavefunction or energy level for one electron. For the case with all the electrons in the first energy level, the lowest transition energy would be $hν = E_2 - E_1$. With one electron in each of the first six levels, $hν = E_7 - E_6$, and with the electrons paired, $hν = E_4 - E_3$.

Example $1$

Draw energy level diagrams indicating the HOMO, the LUMO, the electrons, and the lowest-energy transition for each of the three cases mentioned in the preceding paragraph.

Example $2$

For the three ways of assigning the 6 electrons to the energy levels in Example $1$, calculate the peak absorption wavelength λ for a cyanine dye molecule with 3 carbon atoms in the chain, using a value for L of 0.849 nm, which is obtained by estimating bond lengths. Which wavelength agrees most closely with the experimental value of 309 nm for this molecule? (A numerical sketch of this calculation appears at the end of this section.)

It turns out that the assignment that gives a reasonable wavelength for the absorption of a cyanine dye with 6 $\pi$ electrons is $hν = E_4 - E_3$, as you concluded from Example $2$. In this way we have "discovered" the Pauli Exclusion Principle: electrons should be paired in the same energy level whenever possible. We accept it for now because it agrees with the experimental observations of the cyanine dye spectra. In molecules with an odd number of electrons, it is possible to have transitions between the doubly occupied molecular orbitals and the singly occupied molecular orbital, as well as from the singly occupied orbital to an unoccupied orbital.
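Returning to Example $2$: a minimal numerical sketch comparing the three assignments via Equation \ref{4-19} (constants in SI units; the variable names are mine):

```python
# Compare the three electron assignments for a 6-pi-electron dye (Example 2).
h = 6.626e-34      # Planck's constant, J s
m = 9.109e-31      # electron mass, kg
c = 2.998e8        # speed of light, m/s
L = 0.849e-9       # box length, m (estimated from bond lengths)

E_unit = h**2 / (8 * m * L**2)     # energy quantum h^2 / (8 m L^2)

def wavelength_nm(n_i, n_f):
    """Absorption wavelength for the particle-in-a-box transition n_i -> n_f."""
    dE = (n_f**2 - n_i**2) * E_unit
    return h * c / dE * 1e9

for n_i, n_f, label in [(1, 2, "all six in n=1"),
                        (6, 7, "one per level, n=1..6"),
                        (3, 4, "paired, n=1..3")]:
    print(f"{label:25s} lambda = {wavelength_nm(n_i, n_f):6.0f} nm")
# The paired assignment gives ~340 nm; the others give ~792 nm and ~183 nm.
```

Only the paired assignment lands in the right region of the spectrum, consistent with the conclusion drawn in the text from the 309 nm experimental value.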