The Application of the Schrödinger Equation to the Motions of Electrons and Nuclei in a Molecule Leads to the Chemists' Picture of Electronic Energy Surfaces on Which Vibration and Rotation Occur and Among Which Transitions Take Place.
• 3.1: The Born-Oppenheimer Separation of Electronic and Nuclear Motions
Many elements of chemists' pictures of molecular structure hinge on the point of view that separates the electronic motions from the vibrational/rotational motions and treats couplings between these (approximately) separated motions as 'perturbations'. It is essential to understand the origins and limitations of this separated-motions picture.
• 3.2: Time Scale Separation
The physical parameters that determine under what circumstances the BO approximation is accurate relate to the motional time scales of the electronic and vibrational/rotational coordinates.
• 3.3: Vibration/Rotation States for Each Electronic Surface
The BO picture is what gives rise to the concept of a manifold of potential energy surfaces on which vibrational/rotational motions occur.
• 3.4: Rotation and Vibration of Diatomic Molecules
For a diatomic species, the vibration-rotation kinetic energy operator can be expressed in terms of the bond length and the angles that describe the orientation of the bond axis relative to a laboratory-fixed coordinate system. Solutions with nonzero rotational angular momentum require the vibrational wavefunction and energy to respond to the presence of the 'centrifugal potential' and to obey the full coupled V/R equations.
• 3.5: Separation of Vibration and Rotation
It is common, in developing the working equations of diatomic-molecule rotational/vibrational spectroscopy, to treat the coupling between the two degrees of freedom using perturbation theory.
• 3.6: The Rigid Rotor and Harmonic Oscillator
Treatment of the rotational motion at the zeroth-order level described above introduces the so-called 'rigid rotor' energy levels and wavefunctions that arise when the diatomic molecule is treated as a rigid rotor. The corresponding harmonic-oscillator treatment of the vibration predicts evenly spaced energy levels. Of course, molecular vibrations display anharmonicity (i.e., the energy levels move closer together at higher energies), and quantized vibrational motion ceases once the bond dissociation energy is reached.
• 3.7: The Morse Oscillator
The Morse oscillator model is often used to go beyond the harmonic oscillator approximation. The solutions of the Morse oscillator display the anharmonicity characteristic of true bonds.
• 3.8: Rotation of Polyatomic Molecules
For a non-linear polyatomic molecule, again with the centrifugal couplings to the vibrations evaluated at the equilibrium geometry, the following terms form the rotational part of the nuclear-motion kinetic energy. Molecules for which all three principal moments of inertia are equal are called 'spherical tops'; those for which two of the three principal moments of inertia are equal are called symmetric top molecules.
• 3.9: Rotation of Linear Molecules
To describe the orientation of a diatomic or linear polyatomic molecule requires only two angles. For any non-linear molecule, three angles are needed. Hence the rotational Schrödinger equation for a non-linear molecule is a differential equation in three dimensions. There are 3M-6 vibrations of a non-linear molecule containing M atoms; a linear molecule has 3M-5 vibrations.
• 3.E: Exercises
• 3.10: Rotation of Non-Linear Molecules
The proper rotational eigenfunctions for the polyatomic molecules are known as 'rotation matrices' and depend on three angles (the three Euler angles needed to describe the orientation of the molecule in space) and three quantum numbers: J, M, and K.
• 3.11: Chapter Summary
The solution of the Schrödinger equation governing the motions and interparticle potential energies of the nuclei and electrons of an atom or molecule (or ion) can be decomposed into two distinct problems: solution of the electronic Schrödinger equation for the electronic wavefunctions and energies, both of which depend on the nuclear geometry and solution of the vibration/rotation Schrödinger equation for the motion of the nuclei on any one of the electronic energy surfaces.
03: Nuclear Motion
Many elements of chemists' pictures of molecular structure hinge on the point of view that separates the electronic motions from the vibrational/rotational motions and treats couplings between these (approximately) separated motions as 'perturbations'. It is essential to understand the origins and limitations of this separated-motions picture.
To develop a framework in terms of which to understand when such separability is valid, one thinks of an atom or molecule as consisting of a collection of N electrons and M nuclei, each of which possesses kinetic energy and among which coulombic potential energies of interaction arise. To properly describe the motions of all these particles, one needs to consider the full Schrödinger equation $H\Psi = E\Psi$, in which the Hamiltonian H contains the sum (denoted $H_e$ ) of the kinetic energies of all N electrons and the coulomb potential energies among the N electrons and the M nuclei, as well as the kinetic energy T of the M nuclei:
$T = \sum\limits_{a=1, M}\left( \dfrac{-\hbar^2}{2m_a} \right)\nabla_a^2, \nonumber$
$H = H_e + T \nonumber$
$H_e = \sum\limits_j \left[ \dfrac{-\hbar^2}{2m_e} \nabla_j^2 - \sum\limits_a\dfrac{Z_ae^2}{r_{j,a}} \right] + \sum\limits_{j < k} \dfrac{e^2}{r_{j,k}} + \sum\limits_{a < b} Z_aZ_b \dfrac{e^2}{R_{a,b}}. \nonumber$
Here, $m_a$ is the mass of the nucleus a, $Z_ae^2$ is its charge, and $\nabla_a^2$ is the Laplacian with respect to the three cartesian coordinates of this nucleus (this operator $\nabla_a^2$ is given in spherical polar coordinates in Appendix A); $r_{j,a}$ is the distance between the $j^{th}$ electron and the $a^{th}$ nucleus, $r_{j,k}$ is the distance between the $j^{th}$ and $k^{th}$ electrons, $m_e$ is the electron's mass, and $R_{a,b}$ is the distance from nucleus a to nucleus b.
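As a small numerical illustration of the last term in $H_e$, the nuclear-nuclear repulsion $\sum_{a<b} Z_aZ_b e^2/R_{a,b}$ can be evaluated directly once a geometry is chosen. The sketch below works in atomic units (where $e^2 \rightarrow 1$) and uses an assumed H$_2$ geometry; both choices are illustrative, not values taken from the text.

```python
import numpy as np

def nuclear_repulsion(Z, R):
    """Sum of Z_a Z_b / R_ab over nuclear pairs a < b (atomic units, e^2 -> 1)."""
    E = 0.0
    for a in range(len(Z)):
        for b in range(a + 1, len(Z)):
            E += Z[a] * Z[b] / np.linalg.norm(np.asarray(R[a]) - np.asarray(R[b]))
    return E

# Hypothetical geometry: H2 with an assumed 1.4 bohr bond length
Z = [1, 1]
R = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.4)]
E_rep = nuclear_repulsion(Z, R)   # 1*1/1.4 hartree
```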
The full Hamiltonian H thus contains differential operators over the 3N electronic coordinates (denoted r as a shorthand) and the 3M nuclear coordinates (denoted R as a shorthand). In contrast, the electronic Hamiltonian $H_e$ is a Hermitian differential operator in r-space but not in R-space. Although $H_e$ is indeed a function of the R-variables, it is not a differential operator involving them.
Because $H_e$ is a Hermitian operator in r-space, its eigenfunctions $\Psi_i(r|R) \text{ obey } H_e \Psi_i(r|R) = E_i(R)\Psi_i(r|R)$
for any values of the R-variables, and form a complete set of functions of r for any values of R. These eigenfunctions and their eigenvalues $E_i(R)$ depend on R only because the potentials appearing in $H_e$ depend on R. The $\Psi_i \text{ and } E_i$ are the electronic wavefunctions and electronic energies whose evaluations are treated in the next three Chapters.
The fact that the set of {$\Psi_i$} is, in principle, complete in r-space allows the full (electronic and nuclear) wavefunction $\Psi$ to have its r-dependence expanded in terms of the $\Psi_i$ :
$\Psi(r,R) = \sum\limits_i \Psi_i(r|R)\Xi_i(R). \nonumber$
The $\Xi_i(R)$ functions carry the remaining R-dependence of $\Psi$ and are determined by insisting that $\Psi$ as expressed here obey the full Schrödinger equation:
$(H_e +T - E)\sum\limits_i \Psi_i(r|R) \Xi_i(R) = 0. \nonumber$
Projecting this equation against $\langle \Psi_j (r|R)|$ (integrating only over the electronic coordinates because the $\Psi_j$ are orthonormal only when so integrated) gives:
$[(E_j(R) - E) \Xi_j(R) + T\Xi_j(R)] = - \sum\limits_i \left[ \langle \Psi_j | T | \Psi_i \rangle (R)\Xi_i(R) + \sum\limits_{a=1, M} \dfrac{- \hbar^2}{m_a} \langle \Psi_j | \nabla_a | \Psi_i \rangle (R) \cdot \nabla_a \Xi_i(R) \right], \nonumber$
where the (R) notation in $\langle \Psi_j |T| \Psi_i \rangle(R) \text{ and } \langle \Psi_j | \nabla_a | \Psi_i \rangle (R)$ has been used to remind one that the integrals < ...> are carried out only over the r coordinates and, as a result, still depend on the R coordinates.
In the Born-Oppenheimer (BO) approximation, one neglects the so-called nonadiabatic or non-BO couplings on the right-hand side of the above equation. Doing so yields the following equations for the $\Xi_i (R)$ functions:
$[(E_j(R) - E)\Xi_j^0(R) + T\Xi_j^0(R)] = 0, \nonumber$
where the superscript in $\Xi_i^0(R)$ is used to indicate that these functions are solutions within the BO approximation only.
These BO equations can be recognized as the equations for the translational, rotational, and vibrational motion of the nuclei on the 'potential energy surface' $E_j(R).$ That is, within the BO picture, the electronic energies $E_j(R),$ considered as functions of the nuclear positions R, provide the potentials on which the nuclei move. The electronic and nuclear-motion aspects of the Schrödinger equation are thereby separated.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/03%3A_Nuclear_Motion/3.01%3A_The_Born-Oppenheimer_Separation_of_Electronic_and_Nuclear_Motions.txt
The physical parameters that determine under what circumstances the BO approximation is accurate relate to the motional time scales of the electronic and vibrational/rotational coordinates.
The range of accuracy of this separation can be understood by considering the differences in time scales that relate to electronic motions and nuclear motions under ordinary circumstances. In most atoms and molecules, the electrons orbit the nuclei at speeds much in excess of even the fastest nuclear motions (the vibrations). As a result, the electrons can adjust 'quickly' to the slow motions of the nuclei. This means it should be possible to develop a model in which the electrons 'follow' smoothly as the nuclei vibrate and rotate.
This picture is that described by the BO approximation. Of course, one should expect large corrections to such a model for electronic states in which 'loosely held' electrons exist. For example, in molecular Rydberg states and in anions, where the outer valence electrons are bound by a fraction of an electron volt, the natural orbit frequencies of these electrons are not much faster (if at all) than vibrational frequencies. In such cases, significant breakdown of the BO picture is to be expected.
3.03: Vibration/Rotation States for Each Electronic Surface
The BO picture is what gives rise to the concept of a manifold of potential energy surfaces on which vibrational/rotational motions occur.
Even within the BO approximation, motion of the nuclei on the various electronic energy surfaces is different because the nature of the chemical bonding differs from surface to surface. That is, the vibrational/rotational motion on the ground-state surface is certainly not the same as on one of the excited-state surfaces. However, there is a complete set of wavefunctions $\Xi_{j,m}^0(R)$ and energy levels $E^0_{j,m}$ for each surface $E_j(R)$, because $T + E_j(R)$ is a Hermitian operator in R-space for each surface (labelled j):
$[T + E_j(R)]\Xi^0_{j,m}(R) = E^0_{j,m}\Xi^0_{j,m}. \nonumber$
The eigenvalues $E^0_{j,m}$ must be labelled by the electronic surface (j) on which the motion occurs as well as by the particular state (m) on that surface.
3.04: Rotation and Vibration of Diatomic Molecules
For a diatomic species, the vibration-rotation $\left(\dfrac{V}{R}\right)$ kinetic energy operator can be expressed as follows in terms of the bond length R and the angles $\theta \text{ and } \phi$ that describe the orientation of the bond axis relative to a laboratory-fixed coordinate system:
$T_{V/R} = \dfrac{-\hbar^2}{2\mu}\left[ \dfrac{1}{R^2}\dfrac{\partial}{\partial R}\left( R^2\dfrac{\partial}{\partial R} \right) - \left( \dfrac{L}{R\hbar}\right)^2 \right], \nonumber$
where the square of the rotational angular momentum of the diatomic species is
$L^2 = -\hbar^2 \left[ \dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial}{\partial \theta} \right) + \dfrac{1}{\sin^2 \theta} \dfrac{\partial^2}{\partial \phi^2} \right]. \nonumber$
Because the potential $E_j (R)$ depends on R but not on $\theta$ or $\phi$, the V/R function $\Xi^0_{j,n}$ can be written as a product of an angular part and an R-dependent part; moreover, because $L^2$ contains the full angle-dependence of $T_{V/R}$, $\Xi^0_{j,n}$ can be written as
$\Xi^0_{j,n} = Y_{J,M}(\theta,\phi)F_{j,J,v}(R). \nonumber$
The general subscript n, which had represented the state in the full set of 3M-3 R-space coordinates, is replaced by the three quantum numbers J,M, and v (i.e., once one focuses on the three specific coordinates $R,\theta, \text{ and } \phi$, a total of three quantum numbers arise in place of the symbol n).
Substituting this product form for $\Xi^0_{j,n}$ into the $\dfrac{V}{R}$ equation gives:
$\dfrac{-\hbar^2}{2\mu}\left[ \dfrac{1}{R^2} \dfrac{\partial}{\partial R}\left( R^2\dfrac{\partial}{\partial R} \right) - \dfrac{J(J+1)}{R^2} \right] F_{j,J,v}(R) + E_j(R) F_{j,J,v}(R) = E^0_{j,J,v} F_{j,J,v} \nonumber$
as the equation for the vibrational (i.e., R-dependent) wavefunction within electronic state j and with the species rotating with $J(J+1) \hbar^2$ as the square of the total angular momentum and a projection along the laboratory-fixed Z-axis of $M\hbar.$ The fact that the $F_{j,J,v}$ functions do not depend on the M quantum number derives from the fact that the $T_{V/R}$ kinetic energy operator does not explicitly contain $J_Z$; only $J^2$ appears in $T_{V/R}.$
The solutions for which J=0 correspond to vibrational states in which the species has no rotational energy; they obey
$\dfrac{-\hbar^2}{2\mu} \left[ \dfrac{1}{R^2} \dfrac{\partial}{\partial R}\left( R^2 \dfrac{\partial}{\partial R} \right) \right] F_{j,0,v}(R) + E_j(R)F_{j,0,v}(R) = E^0_{j,0,v}F_{j,0,v}. \nonumber$
The differential-operator parts of this equation can be simplified somewhat by substituting $F= \dfrac{\chi}{R}$ and thus obtaining the following equation for the new function $\chi:$
$\dfrac{-\hbar^2}{2\mu} \dfrac{\partial^2}{\partial R^2} \chi_{j,0,v}(R) + E_j(R) \chi_{j,0,v}(R) = E^0_{j,0,v}\chi_{j,0,v}. \nonumber$
Solutions for which $J\neq 0$ require the vibrational wavefunction and energy to respond to the presence of the 'centrifugal potential' given by $\frac{\hbar^2 J(J+1)}{2\mu R^2}$; these solutions obey the full coupled V/R equations given above.
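These radial equations can also be attacked numerically. The sketch below is a minimal finite-difference solution of the equation for $\chi = RF$, assuming dimensionless units ($\hbar = \mu = 1$) and an illustrative harmonic $E_j(R)$; none of the parameter values come from the text. It shows the $J \neq 0$ levels raised relative to the $J = 0$ levels by roughly the centrifugal potential evaluated near $R_e$.

```python
import numpy as np

hbar = mu = 1.0          # assumed dimensionless units
k, Re = 1.0, 5.0         # illustrative force constant and equilibrium bond length

def radial_levels(J, N=1200, Rmax=10.0, nlev=3):
    """Lowest eigenvalues of -hbar^2/(2 mu) chi'' + [E_j(R) + centrifugal] chi = E chi,
    discretized on a uniform grid with chi(0) = chi(Rmax) = 0."""
    R = np.linspace(Rmax / N, Rmax, N)
    h = R[1] - R[0]
    V = 0.5 * k * (R - Re) ** 2 + hbar**2 * J * (J + 1) / (2 * mu * R**2)
    main = hbar**2 / (mu * h**2) + V                     # diagonal of -chi''/2 + V
    off = np.full(N - 1, -hbar**2 / (2 * mu * h**2))     # nearest-neighbor coupling
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nlev]

E0 = radial_levels(J=0)   # near (v + 1/2)*sqrt(k/mu): 0.5, 1.5, 2.5
E1 = radial_levels(J=1)   # raised by roughly hbar^2 J(J+1)/(2 mu Re^2)
```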
It is common, in developing the working equations of diatomic-molecule rotational/vibrational spectroscopy, to treat the coupling between the two degrees of freedom using perturbation theory as developed later in this chapter. In particular, one can expand the centrifugal coupling $\hbar^2 \frac{J(J+1)}{2\mu R^2}$ (which depends, of course, on $J$) around the equilibrium geometry $R_e$:
$\hbar^2 \dfrac{J(J+1)}{2\mu R^2} = \hbar^2 \dfrac{J(J+1)}{2\mu R_e^2 (1+\Delta R)^2} \approx \hbar^2 \dfrac{J(J+1)}{2\mu R_e^2}[1 - 2\Delta R + ...], \nonumber$
where $\Delta R = (R - R_e)/R_e$ is the dimensionless bond-length displacement, and treat the terms containing powers $\Delta R^k$ of this displacement as perturbations. The zeroth-order equations read:
$\dfrac{-\hbar^2}{2\mu}\left[ \dfrac{1}{R^2}\dfrac{\partial}{\partial R}\left( R^2\dfrac{\partial}{\partial R} \right) \right] F_{j,J,v}^0 (R) + E_j(R) F^0_{j,J,v}(R) + \hbar^2 \dfrac{J(J+1)}{2\mu R_e^2}F^0_{j,J,v}(R) = E^0_{j,J,v} F^0_{j,J,v}(R), \nonumber$
and have solutions whose energies separate into additive rotational and vibrational contributions
$E^0_{j,J,v} = \hbar^2\dfrac{J(J+1)}{2\mu R^2_e} + E_{j,v} \nonumber$
and whose wavefunctions are independent of $J$ (because the coupling is not R-dependent in zeroth order)
$F^0_{j,J,v}(R) = F_{j,v}(R). \nonumber$
Perturbation theory is then used to express the corrections to these zeroth order solutions as indicated in Appendix D.
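The quality of the truncated expansion of $(1+\Delta R)^{-2}$ is easy to check numerically. The few lines below (plain Python, with illustrative displacement values) confirm that the error of the first-order truncation falls off quadratically with $\Delta R$, which is what justifies treating the higher powers as small perturbations.

```python
# Assumed dimensionless displacement: DeltaR = (R - Re)/Re
def exact(dR):
    """Full centrifugal factor 1/(1 + DeltaR)^2."""
    return 1.0 / (1.0 + dR) ** 2

def first_order(dR):
    """Expansion truncated after the linear term: 1 - 2*DeltaR."""
    return 1.0 - 2.0 * dR

# The truncation error shrinks quadratically as the displacement shrinks
err_1 = abs(exact(1e-3) - first_order(1e-3))
err_2 = abs(exact(1e-4) - first_order(1e-4))
```

Shrinking the displacement tenfold reduces the error about a hundredfold, the signature of a neglected quadratic term ($+3\Delta R^2$ in the expansion).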
3.06: The Rigid Rotor and Harmonic Oscillator
Treatment of the rotational motion at the zeroth-order level described above introduces the so-called 'rigid rotor' energy levels and wavefunctions: $E_J = \hbar^2 \frac{J(J+1)}{2\mu R_e^2} \text{ and } Y_{J,M} (\theta, \phi)$; these same quantities arise when the diatomic molecule is treated as a rigid rod of length $R_e.$ The spacings between successive rotational levels within this approximation are
$\Delta E_{J+1,J} = 2hcB(J+1), \nonumber$
where the so-called rotational constant B is given in $cm^{-1}$ as
$B = \dfrac{h}{8\pi^2 c\mu R_e^2}. \nonumber$
The rotational level J is (2J+1)-fold degenerate because the energy $E_J$ is independent of the M quantum number of which there are (2J+1) values for each J: M= -J, -J+1, -J+2, ... J-2, J-1, J.
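For a concrete feel for these formulas, the rigid-rotor constant and level spacings can be computed directly. The sketch below uses approximate CO parameters ($\mu$ from the atomic masses, $R_e \approx 1.128$ Å) as assumed inputs.

```python
import math

h = 6.62607015e-34        # Planck constant (J s)
c_cm = 2.99792458e10      # speed of light (cm/s), so B comes out in cm^-1
amu = 1.66053906660e-27   # atomic mass unit (kg)

# Approximate CO parameters (assumed): reduced mass and equilibrium bond length
mu = 12.0 * 15.995 / (12.0 + 15.995) * amu
Re = 1.128e-10            # m

B = h / (8 * math.pi**2 * c_cm * mu * Re**2)    # rotational constant, cm^-1
spacings = [2 * B * (J + 1) for J in range(4)]  # Delta E_{J+1,J} = 2B(J+1), cm^-1
degeneracy = [2 * J + 1 for J in range(4)]      # (2J+1) M-values per level
```

The computed B is close to the familiar value of roughly 1.9 cm⁻¹ for CO, and the spacings grow linearly with J, the fingerprint of a rigid-rotor spectrum.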
The explicit form of the zeroth-order vibrational wavefunctions and energy levels, $F^0_{j,v} \text{ and } E^0_{ j,v},$ depends on the description used for the electronic potential energy surface $E_j(R).$ In the crudest useful approximation, $E_j(R)$ is taken to be a so-called harmonic potential
$E_j(R) \approx \dfrac{1}{2}k_j (R-R_e)^2 ; \nonumber$
as a consequence, the wavefunctions and energy levels reduce to
$E^0_{j,v} = E_j(R_e) + \hbar \sqrt{\dfrac{k_j}{\mu}}\left( v + \dfrac{1}{2}\right), \text{ and } \nonumber$
$F^0_{j,v}(R) = \dfrac{1}{\sqrt{2^v v!}} \sqrt[4]{\dfrac{\alpha}{\pi}}e^{\dfrac{-\alpha (R-R_e)^2}{2}}H_v\left( \sqrt{\alpha}(R-R_e) \right), \nonumber$
where $\alpha = \dfrac{\sqrt{k_j \mu}}{\hbar} \text{ and } H_v(y)$ denotes the Hermite polynomial defined by:
$H_v(y) = (-1)^v e^{y^2}\dfrac{d^v}{dy^v}e^{-y^2}. \nonumber$
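The Hermite polynomials obey the recursion $H_{v+1}(y) = 2yH_v(y) - 2vH_{v-1}(y)$, which gives a convenient way to evaluate the $F^0_{j,v}$ numerically. The sketch below (with illustrative $\alpha = 1$ and $R_e = 0$, not values from the text) builds the wavefunctions this way and checks their normalization and orthogonality by quadrature.

```python
import numpy as np
from math import factorial, pi

def hermite(v, y):
    """H_v(y) via the recursion H_{v+1} = 2 y H_v - 2 v H_{v-1}."""
    Hm, H = np.zeros_like(y), np.ones_like(y)   # H_{-1} = 0, H_0 = 1
    for n in range(v):
        Hm, H = H, 2 * y * H - 2 * n * Hm
    return H

def F(v, R, alpha=1.0, Re=0.0):
    """Zeroth-order vibrational wavefunction F^0_v(R) (illustrative alpha, Re)."""
    y = np.sqrt(alpha) * (R - Re)
    norm = (alpha / pi) ** 0.25 / np.sqrt(2.0**v * factorial(v))
    return norm * np.exp(-(y**2) / 2) * hermite(v, y)

R = np.linspace(-10.0, 10.0, 4001)
dR = R[1] - R[0]
norms = [np.sum(F(v, R) ** 2) * dR for v in (0, 1, 2)]   # each integral is ~1
```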
The solution of the vibrational differential equation
$\dfrac{-\hbar^2}{2\mu}\left[ \dfrac{1}{R^2}\dfrac{\partial}{\partial R}\left( R^2\dfrac{\partial}{\partial R} \right) \right] F_{j,v}(R) + E_j(R) F_{j,v}(R) = E_{j,v} F_{j,v} \nonumber$
is treated in EWK, Atkins, and McQuarrie.
These harmonic-oscillator solutions predict evenly spaced energy levels (i.e., no anharmonicity) that persist for all $v$. It is, of course, known that molecular vibrations display anharmonicity (i.e., the energy levels move closer together as one moves to higher $v$) and that quantized vibrational motion ceases once the bond dissociation energy is reached.
3.07: The Morse Oscillator
The Morse oscillator model is often used to go beyond the harmonic oscillator approximation. In this model, the potential $E_j(R)$ is expressed in terms of the bond dissociation energy $D_e$ and a parameter $a$ related to the second derivative $k$ of $E_j(R)$ at $R_e$, $k = \left( \dfrac{d^2E_j}{dR^2} \right)_{R_e} = 2a^2D_e$, as follows:
$E_j(R) - E_j(R_e) = D_e \left[ 1 - e^{-a(R-R_e)} \right]^2. \nonumber$
The Morse oscillator energy levels are given by
$E^0_{j,v} = E_j(R_e) + \hbar \sqrt{\dfrac{k}{\mu}}\left( v+\dfrac{1}{2} \right) - \dfrac{\hbar^2}{4}\left( \dfrac{k}{\mu D_e} \right) \left( v + \dfrac{1}{2} \right)^2; \nonumber$
the corresponding eigenfunctions are also known analytically in terms of hypergeometric functions (see, for example, M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York (1964)). Clearly, the Morse solutions display anharmonicity as reflected in the negative term proportional to $\left( v+\dfrac{1}{2} \right)^2.$
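The shrinking level spacings are easy to display. The following sketch evaluates the Morse energy formula with assumed dimensionless parameters ($\hbar = \mu = 1$, illustrative $D_e$ and $a$) and shows that successive spacings decrease by the constant amount $\hbar^2 k/(2\mu D_e)$.

```python
import math

# Assumed illustrative parameters in dimensionless units (hbar = mu = 1)
De = 10.0           # bond dissociation energy
a = 1.0             # Morse range parameter
k = 2 * a**2 * De   # curvature at Re: k = 2 a^2 De

def morse_level(v):
    """Morse energy E^0_v measured from E_j(Re)."""
    return math.sqrt(k) * (v + 0.5) - 0.25 * (k / De) * (v + 0.5) ** 2

# Successive spacings shrink linearly with v (anharmonicity)
spacings = [morse_level(v + 1) - morse_level(v) for v in range(4)]
```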
For a non-linear polyatomic molecule, again with the centrifugal couplings to the vibrations evaluated at the equilibrium geometry, the following terms form the rotational part of the nuclear-motion kinetic energy:
$T_{rot} = \sum\limits_{i=a,b,c} \left( \dfrac{J_i^2}{2I_i} \right). \nonumber$
Here, $I_i$ is the eigenvalue of the moment of inertia tensor:
$I_{x,x} = \sum\limits_a m_a [ (R_a-R_{CofM})^2 - (x_a - x_{CofM})^2 ] \nonumber$
$I_{x,y} = -\sum\limits_a m_a [ (x_a - x_{CofM})(y_a - y_{CofM}) ] \nonumber$
expressed originally in terms of the cartesian coordinates of the nuclei (a) and of the center of mass in an arbitrary molecule-fixed coordinate system (and similarly for $I_{z,z} , I_{y,y} , I_{x,z} \text{ and } I_{y,z}$). The operator $J_i$ corresponds to the component of the total rotational angular momentum J along the direction belonging to the $i^{th}$ eigenvector of the moment of inertia tensor.
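Obtaining the principal moments amounts to diagonalizing this tensor. The sketch below assembles $I$ from the cartesian formula above for an assumed bent, water-like geometry (masses in amu, coordinates in Ångstrom; all values illustrative) and returns the sorted principal moments $I_a \le I_b \le I_c$.

```python
import numpy as np

def principal_moments(masses, coords):
    """Principal moments of inertia from the cartesian inertia tensor."""
    m = np.asarray(masses, float)
    # shift coordinates to the center of mass
    r = np.asarray(coords, float) - np.average(coords, axis=0, weights=masses)
    I = np.zeros((3, 3))
    for a in range(len(m)):
        x = r[a]
        I += m[a] * (np.dot(x, x) * np.eye(3) - np.outer(x, x))
    return np.sort(np.linalg.eigvalsh(I))   # Ia <= Ib <= Ic

# Assumed bent geometry (amu, Angstrom): an asymmetric top
masses = [15.995, 1.008, 1.008]
coords = [(0.0, 0.0, 0.0), (0.7586, 0.0, 0.5043), (-0.7586, 0.0, 0.5043)]
Ia, Ib, Ic = principal_moments(masses, coords)
```

Because the assumed molecule is planar, the moments satisfy $I_a + I_b = I_c$, and all three are distinct, identifying an asymmetric top.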
Molecules for which all three principal moments of inertia (the $I_i's$) are equal are called 'spherical tops'. For these species, the rotational Hamiltonian can be expressed in terms of the square of the total rotational angular momentum $J^2$ :
$T_{rot} = \dfrac{J^2}{2I}, \nonumber$
as a consequence of which the rotational energies once again become
$E_J = \hbar^2\dfrac{J(J+1)}{2I}. \nonumber$
However, the $Y_{J,M}$ are not the corresponding eigenfunctions because the operator $J^2$ now contains contributions from rotations about three (no longer two) axes (i.e., the three principal axes). The proper rotational eigenfunctions are the $D^J_{M,K} (\alpha,\beta,\gamma)$ functions known as 'rotation matrices' (see Sections 3.5 and 3.6 of Zare's book on angular momentum); these functions depend on three angles (the three Euler angles needed to describe the orientation of the molecule in space) and three quantum numbers: J, M, and K. The quantum number M labels the projection of the total angular momentum (as $M\hbar$) along the laboratory-fixed z-axis; $K\hbar$ is the projection along one of the internal principal axes (in a spherical top molecule, all three axes are equivalent, so it does not matter which axis is chosen).
The energy levels of spherical top molecules are $(2J+1)^2$ -fold degenerate. Both the M and K quantum numbers run from -J, in steps of unity, to J; because the energy is independent of M and of K, the degeneracy is $(2J+1)^2$.
Molecules for which two of the three principal moments of inertia are equal are called symmetric top molecules. Prolate symmetric tops have $I_a < I_b = I_c$ ; oblate symmetric tops have $I_a = I_b < I_c$ (it is conventional to order the moments of inertia as $I_a \leq I_b \leq I_c$). The rotational Hamiltonian can now be written in terms of $J^2$ and the component of J along the unique moment of inertia's axis as:
$T_{rot} = J_a^2 \left( \dfrac{1}{2I_a} - \dfrac{1}{2I_b} \right) + \dfrac{J^2}{2I_b} \nonumber$
for prolate tops, and
$T_{rot} = J_c^2 \left( \dfrac{1}{2I_c} - \dfrac{1}{2I_b} \right) + \dfrac{J^2}{2I_b} \nonumber$
for oblate tops. Again, the $D^J_{M,K} (\alpha,\beta,\gamma)$ are the eigenfunctions, where the quantum number K describes the component of the rotational angular momentum J along the unique molecule-fixed axis (i.e., the axis of the unique moment of inertia). The energy levels are now given in terms of J and K as follows:
$E_{J,K} = \hbar^2 \dfrac{J(J+1)}{2I_b} + \hbar^2 K^2 \left( \dfrac{1}{2I_a} - \dfrac{1}{2I_b} \right) \nonumber$
for prolate tops, and
$E_{J,K} = \hbar^2 \dfrac{J(J+1)}{2I_b} + \hbar^2 K^2 \left( \dfrac{1}{2I_c} - \dfrac{1}{2I_b} \right) \nonumber$
for oblate tops.
Because the rotational energies now depend on K (as well as on J), the degeneracies are lower than for spherical tops. In particular, because the energies do not depend on M and depend on the square of K, the degeneracies are (2J+1) for states with K=0 and 2(2J+1) for states with |K| > 0; the extra factor of 2 arises for |K| > 0 states because pairs of states with K = +|K| and K = -|K| are degenerate.
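This degeneracy counting can be verified directly from the prolate-top energy formula. The sketch below uses assumed moments of inertia in dimensionless units (illustrative values only) and tallies, for J = 2, how many (M, K) states share each energy.

```python
hbar = 1.0            # assumed dimensionless units
Ia, Ib = 1.0, 2.0     # illustrative prolate-top moments: Ia < Ib = Ic

def E_prolate(J, K):
    """Prolate symmetric-top energy E_{J,K}."""
    return (hbar**2 * J * (J + 1) / (2 * Ib)
            + hbar**2 * K**2 * (1.0 / (2 * Ia) - 1.0 / (2 * Ib)))

# Tally degeneracies for J = 2: each K contributes (2J+1) M-states
J = 2
levels = {}
for K in range(-J, J + 1):
    E = round(E_prolate(J, K), 12)
    levels[E] = levels.get(E, 0) + (2 * J + 1)
# K = 0 gives one (2J+1)-fold level; each |K| > 0 pair gives a 2(2J+1)-fold level
```

The tally reproduces the rule stated above: one 5-fold level (K = 0), two 10-fold levels (|K| = 1, 2), and a total of $(2J+1)^2 = 25$ states, the spherical-top count.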
3.09: Rotation of Linear Molecules
The rotational motion of a linear polyatomic molecule can be treated as an extension of the diatomic molecule case. One obtains the $Y_{J,M} (\theta,\phi)$ as rotational wavefunctions and, within the approximation in which the centrifugal potential is approximated at the equilibrium geometry of the molecule $(R_e)$, the energy levels are:
$E^0_J = \hbar^2 \dfrac{J(J+1)}{2I}. \nonumber$
Here the total moment of inertia I of the molecule takes the place of $\mu R_e^2$ in the diatomic molecule case
$I = \sum\limits_a m_a (R_a - R_{CofM})^2; \nonumber$
$m_a$ is the mass of atom a whose distance from the center of mass of the molecule is $(R_a - R_{CofM}).$ The rotational level with quantum number J is again (2J+1)-fold degenerate because there are (2J+1) M-values.
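A quick sketch, assuming an illustrative symmetric O=C=O geometry (masses in amu, positions in Ångstrom along the molecular axis; values not taken from the text), evaluates this moment of inertia from the formula above.

```python
# Assumed linear O=C=O geometry: masses in amu, axial positions in Angstrom
masses = [15.995, 12.0, 15.995]
z = [-1.16, 0.0, 1.16]

# Center of mass along the axis, then I = sum_a m_a (z_a - z_com)^2
z_com = sum(m * zi for m, zi in zip(masses, z)) / sum(masses)
I = sum(m * (zi - z_com) ** 2 for m, zi in zip(masses, z))   # amu Angstrom^2
# By symmetry the center of mass sits on the central atom, so only the two
# outer atoms contribute to I
```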
3.11: Chapter Summary
This Chapter has shown how the solution of the Schrödinger equation governing the motions and interparticle potential energies of the nuclei and electrons of an atom or molecule (or ion) can be decomposed into two distinct problems:
1. solution of the electronic Schrödinger equation for the electronic wavefunctions and energies, both of which depend on the nuclear geometry and
2. solution of the vibration/rotation Schrödinger equation for the motion of the nuclei on any one of the electronic energy surfaces.
This decomposition into approximately separable electronic and nuclear-motion problems remains an important point of view in chemistry. It forms the basis of many of our models of molecular structure and our interpretation of molecular spectroscopy. It also establishes how we approach the computational simulation of the energy levels of atoms and molecules; we first compute electronic energy levels at a 'grid' of different positions of the nuclei, and we then solve for the motion of the nuclei on a particular energy surface using this grid of data.
Electronic motion is treated in detail in Sections 2, 3, and 6, where molecular orbitals and configurations and their computer evaluation are covered. The vibration/rotation motion of molecules on BO surfaces is introduced above, but should be treated in more detail in a subsequent course in molecular spectroscopy.
Valence atomic orbitals on neighboring atoms combine to form bonding, non-bonding and antibonding molecular orbitals. In Section 1 the Schrödinger equation for the motion of a single electron moving about a nucleus of charge Z was explicitly solved. The energies of these orbitals relative to an electron infinitely far from the nucleus with zero kinetic energy were found to depend strongly on Z and on the principal quantum number n, as were the radial "sizes" of these hydrogenic orbitals. Closed analytical expressions for the \(r\),\(θ\), and \(ϕ\) dependence of these orbitals are given in Appendix B. The reader is advised to also review this material before undertaking study of this section.
• 4.1: Shapes of Atomic Orbitals
Shapes of atomic orbitals play central roles in governing the types of directional bonds an atom can form.
• 4.2: Directions of Atomic Orbitals
Atomic orbital directions also determine what directional bonds an atom will form.
• 4.3: Sizes and Energies
Orbital energies and sizes go hand-in-hand; small 'tight' orbitals have large electron binding energies (i.e., low energies relative to a detached electron). For orbitals on neighboring atoms to have large (and hence favorable to bond formation) overlap, the two orbitals should be of comparable size and hence of similar electron binding energy.
04: Atomic Orbitals
Shapes of atomic orbitals play central roles in governing the types of directional bonds an atom can form.
All atoms have sets of bound and continuum s, p, d, f, g, etc. orbitals. Some of these orbitals may be unoccupied in the atom's low energy states, but they are still present and able to accept electron density if some physical process (e.g., photon absorption, electron attachment, or Lewis-base donation) causes this to occur. For example, the Hydrogen atom has 1s, 2s, 2p, 3s, 3p, 3d, etc. orbitals. Its negative ion \(H^-\) has states that involve \(1s2s\), \(2p^2\), \(3s^2\), \(3p^2\), etc. orbital occupancy. Moreover, when an \(H\) atom is placed in an external electric field, its charge density polarizes in the direction of the field. This polarization can be described in terms of the orbitals of the isolated atom being combined to yield distorted orbitals (e.g., the 1s and 2p orbitals can "mix" or combine to yield sp hybrid orbitals, one directed toward increasing field and the other directed in the opposite direction). Thus in many situations it is important to keep in mind that each atom has a full set of orbitals available to it even if some of these orbitals are not occupied in the lowest energy state of the atom.
4.02: Directions of Atomic Orbitals
Atomic orbital directions also determine what directional bonds an atom will form.
Each set of p orbitals has three distinct directions or three different angular momentum m-quantum numbers as discussed in Appendix G. Each set of d orbitals has five distinct directions or m-quantum numbers, etc.; s orbitals are unidirectional in that they are spherically symmetric and have only m = 0. Note that the degeneracy 2l+1 of an orbital, which is the number of distinct spatial orientations or the number of m-values, grows without bound with the angular momentum quantum number l of the orbital.
It is because of the energy degeneracy within a set of orbitals, that these distinct directional orbitals (e.g., x, y, z for p orbitals) may be combined to give new orbitals which no longer possess specific spatial directions but which have specified angular momentum characteristics. The act of combining these degenerate orbitals does not change their energies. For example, the $\frac{1}{\sqrt{2}}(p_x + ip_y)$ and $\frac{1}{\sqrt{2}}(p_x - ip_y)$ combinations no longer point along the x and y axes, but instead correspond to specific angular momenta $(+1\hbar \text{ and } -1\hbar)$ about the z axis. The fact that they are angular momentum eigenfunctions can be seen by noting that the x and y orbitals contain $\phi$ dependences of cos($\phi$) and sin($\phi$), respectively. Thus the above combinations contain $e^{i\phi} \text{ and } e^{-i\phi},$ respectively. The sizes, shapes, and directions of a few s, p, and d orbitals are illustrated below (the light and dark areas represent positive and negative values, respectively).
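The angular-momentum character of these combinations can be checked symbolically. The sketch below (Python/SymPy; the common $\sin\theta$ factor and normalization constants are dropped, and $\hbar$ is set to 1) verifies that the two combinations are eigenfunctions of $L_z = -i\hbar\,\partial/\partial\phi$ with eigenvalues +1 and -1:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
# phi-dependent parts of the p_x and p_y orbitals (common factors dropped)
px, py = sp.cos(phi), sp.sin(phi)
Lz = lambda f: -sp.I * sp.diff(f, phi)   # L_z = -i hbar d/dphi, with hbar = 1

plus = (px + sp.I * py) / sp.sqrt(2)     # proportional to e^{+i phi}
minus = (px - sp.I * py) / sp.sqrt(2)    # proportional to e^{-i phi}

assert sp.simplify(Lz(plus) - plus) == 0    # eigenvalue +1 (in units of hbar)
assert sp.simplify(Lz(minus) + minus) == 0  # eigenvalue -1
```

The same check applied to px or py alone fails, since cos(φ) and sin(φ) mix e^{+iφ} and e^{-iφ} and hence are not Lz eigenfunctions.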
4.03: Sizes and Energies
The size (e.g., average value or expectation value of the distance from the atomic nucleus to the electron) of an atomic orbital is determined primarily by its principal quantum number n and by the strength of the potential attracting an electron in this orbital to the atomic center (which has some l-dependence too). The energy (with negative energies corresponding to bound states in which the electron is attached to the atom with positive binding energy and positive energies corresponding to unbound scattering states) is also determined by n and by the electrostatic potential produced by the nucleus and by the other electrons. Each atom has an infinite set of orbitals of each l quantum number ranging from those with low energy and small size to those with higher energy and larger size.
Atomic orbitals are solutions to an orbital-level Schrödinger equation in which an electron moves in a potential energy field provided by the nucleus and all the other electrons. Such one-electron Schrödinger equations are discussed, as they pertain to qualitative and semi-empirical models of electronic structure in Appendix F. The spherical symmetry of the one-electron potential appropriate to atoms and atomic ions is what makes sets of the atomic orbitals degenerate. Such degeneracies arise in molecules too, but the extent of degeneracy is lower because the molecule's nuclear coulomb and electrostatic potential energy has lower symmetry than in the atomic case. As will be seen, it is the symmetry of the potential experienced by an electron moving in the orbital that determines the kind and degree of orbital degeneracy which arises.
Symmetry operators leave the electronic Hamiltonian H invariant because the potential and kinetic energies are not changed if one applies such an operator R to the coordinates and momenta of all the electrons in the system. Because symmetry operations involve reflections through planes, rotations about axes, or inversions through points, the application of such an operation to a product such as $\textbf{H}\psi$ gives the product of the operation applied to each term in the original product. Hence, one can write:
$R(\textbf{H} \psi) = (R\textbf{H})(R\psi). \nonumber$
Now using the fact that H is invariant to R, which means that (RH) = H, this result reduces to:
$R(\textbf{H} \psi) = \textbf{H}(R\psi), \nonumber$
which says that R commutes with H:
$[R,\textbf{H}] = 0 \nonumber$
Because symmetry operators commute with the electronic Hamiltonian, the wavefunctions that are eigenstates of H can be labeled by the symmetry of the point group of the molecule (i.e., those operators that leave H invariant). It is for this reason that one constructs symmetry-adapted atomic basis orbitals to use in forming molecular orbitals.
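This commutation can be illustrated with a small numerical model. In the sketch below (Python/NumPy; the three-site Hückel-type Hamiltonian and its parameter values are illustrative assumptions, not taken from the text), R is the permutation matrix for the reflection that swaps the two terminal sites of an allyl-like framework, and $[R,\textbf{H}] = 0$ is verified directly; the eigenvectors of H then carry R's symmetry labels:

```python
import numpy as np

alpha, beta = 0.0, -1.0   # illustrative Hückel parameters (assumed values)
# three-site (allyl-like) Hamiltonian: terminal atoms couple only to the center
H = np.array([[alpha, beta,  0.0 ],
              [beta,  alpha, beta],
              [0.0,   beta,  alpha]])
# reflection symmetry operator: swaps the two terminal sites, fixes the center
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])

assert np.allclose(R @ H - H @ R, 0.0)   # [R, H] = 0
# consequently each eigenvector of H is symmetric or antisymmetric under R
evals, evecs = np.linalg.eigh(H)
for v in evecs.T:
    assert np.allclose(R @ v, v) or np.allclose(R @ v, -v)
```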
Molecular orbitals possess specific topology, symmetry, and energy-level patterns.
• 5.1: Orbital Interaction Topology
The pattern of molecular orbital energies can often be 'guessed' by using qualitative information about the energies, overlaps, directions, and shapes of the atomic orbitals that comprise the molecular orbitals.
• 5.2: Orbital Symmetry
Symmetry provides additional quantum numbers or labels to use in describing the molecular orbitals. Each such quantum number further sub-divides the collection of all molecular orbitals into sets that have vanishing Hamiltonian matrix elements among members belonging to different sets.
• 5.3: Linear Molecules
Linear molecules belong to the axial rotation group. Their symmetry is intermediate in complexity between nonlinear molecules and atoms.
• 5.4: Atoms
Atoms belong to the full rotation symmetry group; this makes their symmetry analysis the most complex to treat.
05: Molecular Orbitals
The orbital interactions determine how many and which molecular orbitals will have low (bonding), intermediate (non-bonding), and higher (antibonding) energies, with all energies viewed relative to those of the constituent atomic orbitals. The general patterns that are observed in most compounds can be summarized as follows:
1. If the energy splittings among a given atom's atomic orbitals with the same principal quantum number are small, hybridization can easily occur to produce hybrid orbitals that are directed toward (and perhaps away from) the other atoms in the molecule. In the first-row elements (Li, Be, B, C, N, O, and F), the 2s-2p splitting is small, so hybridization is common. In contrast, for Ca, Ga, Ge, As, and Br it is less common, because the 4s-4p splitting is larger. Orbitals directed toward other atoms can form bonding and antibonding mos; those directed toward no other atoms will form nonbonding molecular orbitals.
2. In attempting to gain a qualitative picture of the electronic structure of any given molecule, it is advantageous to begin by hybridizing the atomic orbitals of those atoms which contain more than one ao in their valence shell. Only those atomic orbitals that are not involved in $\pi$-orbital interactions should be so hybridized.
3. Atomic or hybrid orbitals that are not directed in a $\sigma$-interaction manner toward other atomic orbitals or hybrids on neighboring atoms can be involved in $\pi$-interactions or in nonbonding interactions.
4. Pairs of atomic orbitals or hybrid orbitals on neighboring atoms directed toward one another interact to produce bonding and antibonding orbitals. The more the bonding orbital lies below the lower-energy ao or hybrid orbital involved in its formation, the higher the antibonding orbital lies above the higher-energy ao or hybrid orbital.
For example, in formaldehyde, $H_2CO$, one forms $sp^2$ hybrids on the C atom; on the O atom, either sp hybrids (with one p orbital "reserved" for use in forming the $\pi \text{ and } \pi^{\text{*}}$ orbitals and another p orbital to be used as a non-bonding orbital lying in the plane of the molecule) or $sp^2$ hybrids (with the remaining p orbital reserved for the $\pi \text{ and } \pi^{\text{*}}$ orbitals) can be used. The H atoms use their 1s orbitals since hybridization is not feasible for them. The C atom clearly uses its $sp^2$ hybrids to form two CH and one CO $\sigma$ bonding - antibonding orbital pairs.
The O atom uses one of its sp or $sp^2$ hybrids to form the CO $\sigma$ bond and antibond. When sp hybrids are used in conceptualizing the bonding, the other sp hybrid forms a lone pair orbital directed away from the CO bond axis; one of the atomic p orbitals is involved in the CO $\pi \text{ and } \pi^{\text{*}}$ orbitals, while the other forms an in-plane non-bonding orbital. Alternatively, when $sp^2$ hybrids are used, the two $sp^2$ hybrids that do not interact with the C-atom $sp^2$ orbital form the two non-bonding orbitals. Hence, the final picture of bonding, non-bonding, and antibonding orbitals does not depend on which hybrids one uses as intermediates.
As another example, the 2s and 2p orbitals on the two N atoms of $N_2$ can be formed into pairs of sp hybrids on each N atom plus a pair of $p_{\pi}$ atomic orbitals on each N atom. The sp hybrids directed toward the other N atom give rise to bonding $\sigma \text{ and antibonding } \sigma^{\text{*}}$ orbitals, and the sp hybrids directed away from the other N atom yield nonbonding $\sigma$ orbitals. The p$_\pi$ orbitals, which consist of 2p orbitals on the N atoms directed perpendicular to the N-N bond axis, produce bonding $\pi \text{ and antibonding } \pi^{\text{*}}$ orbitals.
5. In general, $\sigma$ interactions for a given pair of atoms interacting are stronger than $\pi$ interactions (which, in turn, are stronger than $\delta$ interactions, etc.) for any given sets (i.e., principal quantum number) of atomic orbitals that interact. Hence, $\sigma$ bonding orbitals (originating from a given set of aos) lie below $\pi$ bonding orbitals, and $\sigma^{\text{*}}$ orbitals lie above $\pi^{\text{*}}$ orbitals that arise from the same sets of aos. In the $N_2$ example, the $\sigma$ bonding orbital formed from the two sp hybrids lies below the $\pi$ bonding orbital, but the $\pi^{\text{*}}$ orbital lies below the $\sigma^{\text{*}}$ orbital. In the $H_2CO$ example, the two CH and the one CO bonding orbitals have low energy; the CO $\pi$ bonding orbital has the next lowest energy; the two O-atom non-bonding orbitals have intermediate energy; the CO $\pi^{\text{*}}$ orbital has somewhat higher energy; and the two CH and one CO antibonding orbitals have the highest energies.
6. If a given ao or hybrid orbital interacts with or is coupled to orbitals on more than a single neighboring atom, multicenter bonding can occur. For example, in the allyl radical the central carbon atom's $p_\pi$ orbital is coupled to the $p_\pi$ orbitals on both neighboring atoms; in linear $Li_3$, the central Li atom's 2s orbital interacts with the 2s orbitals on both terminal Li atoms; in triangular $Cu_3$, the 4s orbital on each Cu atom couples to the 4s orbitals on the other two atoms.
7. Multicenter bonding that involves "linear" chains containing N atoms (e.g., as in conjugated polyenes or in chains of Cu or Na atoms for which the valence orbitals on one atom interact with those of its neighbors on both sides) gives rise to mo energy patterns in which there are N/2 (if N is even) or (N-1)/2 (if N is odd) non-degenerate bonding orbitals and the same number of antibonding orbitals (if N is odd, there is also a single non-bonding orbital).
8. Multicenter bonding that involves "cyclic" chains of N atoms (e.g., as in cyclic conjugated polyenes or in rings of Cu or Na atoms for which the valence orbitals on one atom interact with those of its neighbors on both sides and the entire net forms a closed cycle) gives rise to mo energy patterns in which there is a lowest non-degenerate orbital and then a progression of doubly degenerate orbitals. If N is odd, this progression includes (N- 1)/2 levels; if N is even, there are (N-2)/2 doubly degenerate levels and a final nondegenerate highest orbital. These patterns and those that appear in linear multicenter bonding are summarized in the Figures shown below.
Figure 5.1.1: Molecular orbital energy patterns arising from linear and cyclic multicenter bonding.
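The counting rules in points 7 and 8 can be reproduced with a simple Hückel (tight-binding) model. In the sketch below (Python/NumPy; the $\alpha$ and $\beta$ values are arbitrary illustrative choices), a nearest-neighbor chain gives the single nonbonding level predicted for an odd linear chain, and a six-membered ring gives the nondegenerate bottom level, two doubly degenerate pairs, and nondegenerate top level:

```python
import numpy as np

def huckel_energies(n, cyclic=False, alpha=0.0, beta=-1.0):
    """Sorted eigenvalues of a nearest-neighbor Hückel matrix for an n-atom chain or ring."""
    H = np.diag(np.full(n, alpha))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = beta
    if cyclic:                      # close the chain into a ring
        H[0, n - 1] = H[n - 1, 0] = beta
    return np.sort(np.linalg.eigvalsh(H))

# odd linear chain (N = 5): a single nonbonding level at alpha
E_lin = huckel_energies(5)
assert np.isclose(E_lin[2], 0.0)

# ring (N = 6): nondegenerate bottom, two doubly degenerate pairs, nondegenerate top
E_cyc = huckel_energies(6, cyclic=True)
assert np.isclose(E_cyc[1], E_cyc[2]) and np.isclose(E_cyc[3], E_cyc[4])
assert not np.isclose(E_cyc[0], E_cyc[1]) and not np.isclose(E_cyc[4], E_cyc[5])
```

The ring energies follow $\alpha + 2\beta\cos(2\pi k/N)$, which makes the pairwise degeneracies (k and N-k) explicit.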
9. In extended systems such as solids, atom-based orbitals combine as above to form so-called 'bands' of molecular orbitals. These bands are continuous rather than discrete as in the above cases involving small polyenes. The energy 'spread' within a band depends on the overlap among the atom-based orbitals that form the band; large overlap gives rise to a large band width, while small overlap produces a narrow band. As one moves from the bottom (i.e., the lower energy part) of a band to the top, the number of nodes in the corresponding band orbital increases, as a result of which its bonding nature decreases. In the figure shown below, the bands of a metal such as Ni (with 3d, 4s, and 4p orbitals) are illustrated. The d-orbital band is narrow because the 3d orbitals are small and hence do not overlap appreciably; the 4s and 4p bands are wider because the larger 4s and 4p orbitals overlap to a greater extent. The d-band is split into $\sigma$, $\pi$, and $\delta$ components corresponding to the nature of the overlap interactions among the constituent atomic d orbitals. Likewise, the p-band is split into $\sigma \text{ and } \pi$ components. The widths of the $\sigma$ components of each band are larger than those of the $\pi$ components because the corresponding $\sigma$ overlap interactions are stronger. The intensities of the bands at energy E measure the densities of states at that E. The total integrated intensity under a given band is a measure of the total number of atomic orbitals that form the band.
Figure 5.1.2: The 3d, 4s, and 4p bands of a metal such as Ni.
Symmetry provides additional quantum numbers or labels to use in describing the molecular orbitals. Each such quantum number further sub-divides the collection of all molecular orbitals into sets that have vanishing Hamiltonian matrix elements among members belonging to different sets.
Orbital interaction "topology" as discussed above plays a most important role in determining the orbital energy level patterns of a molecule. Symmetry also comes into play, but in a different manner. Symmetry can be used to characterize the core, bonding, nonbonding, and antibonding molecular orbitals. Much of this chapter is devoted to how this can be carried out in a systematic manner. Once the various molecular orbitals have been labeled according to symmetry, it may be possible to recognize additional degeneracies that may not have been apparent on the basis of orbital-interaction considerations alone. Thus, topology provides the basic energy ordering pattern and then symmetry enters to identify additional degeneracies.
For example, the three NH bonding and three NH antibonding orbitals in $NH_3$, when symmetry adapted within the $C_{3v}$ point group, cluster into $a_1$ and e molecular orbitals as shown in the Figure below. The N-atom localized non-bonding lone pair orbital and the N-atom 1s core orbital also belong to $a_1$ symmetry.
In a second example, the three CH bonds, three CH antibonds, CO bond and antibond, and three O-atom non-bonding orbitals of the methoxy radical $H_3C-O$ also cluster into $a_1$ and e orbitals as shown below. In these cases, point group symmetry allows one to identify degeneracies that may not have been apparent from the structure of the orbital interactions alone.
The three resultant molecular orbital energies are, of course, identical to those obtained without symmetry above. The three LCAO-MO coefficients, now expressing the molecular orbitals in terms of the symmetry adapted orbitals, are $C_{is}$ = (0.707, 0.707, 0.0) for the bonding orbital, (0.0, 0.0, 1.00) for the nonbonding orbital, and (0.707, -0.707, 0.0) for the antibonding orbital. These coefficients, when combined with the symmetry adaptation coefficients $C_{sa}$ given earlier, express the three molecular orbitals in terms of the three atomic orbitals as
$\phi_i= \sum \limits_{sa} C_{is} C_{sa} \chi_a \nonumber$
The sum
$\sum \limits_s C_{is}C_{sa} \nonumber$
gives the LCAO-MO coefficients $C_{ia}$ which, for example, for the bonding orbital, are $( 0.707^2, 0.707, 0.707^2)$, in agreement with what was found earlier without using symmetry.
The low energy orbitals of the $H_2O$ molecule can be used to illustrate the use of symmetry within the primitive ao basis as well as in terms of hybrid orbitals. The 1s orbital on the Oxygen atom is clearly a nonbonding core orbital. The Oxygen 2s orbital and its three 2p orbitals are of valence type, as are the two Hydrogen 1s orbitals. In the absence of symmetry, these six valence orbitals would give rise to a 6x6 secular problem. By combining the two Hydrogen 1s orbitals into 0.707($1s_L + 1s_R$) and 0.707($1s_L - 1s_R$) symmetry adapted orbitals (labeled $a_1 \text{ and } b_2 \text{ within the } C_{2v}$ point group; see the Figure below), and recognizing that the Oxygen 2s and $2p_z$ orbitals belong to $a_1$ symmetry (the z axis is taken as the $C_2$ rotation axis and the x axis is taken to be perpendicular to the plane in which the three nuclei lie) while the $2p_x$ orbital is $b_1$ and the $2p_y$ orbital is $b_2$, allows the 6x6 problem to be decomposed into a 3x3 ( $a_1$) secular problem, a 2x2 ( $b_2$) secular problem and a 1x1 ( $b_1$ ) problem. These decompositions allow one to conclude that there is one nonbonding $b_1$ orbital (the Oxygen $2p_x$ orbital), bonding and antibonding $b_2$ orbitals ( the O-H bond and antibond formed by the Oxygen $2p_y$ orbital interacting with 0.707($1s_L - 1s_R$)), and, finally, a set of bonding, nonbonding, and antibonding $a_1$ orbitals (the O-H bond and antibond formed by the Oxygen 2s and $2p_z$ orbitals interacting with 0.707($1s_L + 1s_R$) and the nonbonding orbital formed by the Oxygen 2s and $2p_z$ orbitals combining to form the "lone pair" orbital directed along the z-axis away from the two Hydrogen atoms).
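The block decomposition of the valence secular problem can be sketched numerically. In the model below (Python/NumPy), all on-site energies and O-H couplings are hypothetical illustrative numbers, and the noninteracting $b_1$ orbital ($2p_x$) is omitted for brevity; transforming the two Hydrogen 1s orbitals to their 0.707($1s_L \pm 1s_R$) combinations makes every matrix element between the $a_1$ set and the $b_2$ set vanish:

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
# hypothetical on-site energies and O-H couplings (illustrative numbers only)
e2s, e2p, e1s = -1.2, -0.6, -0.5
t1, t2, t3 = -0.4, -0.3, -0.35
# basis order: O 2s, O 2pz, O 2py, H_L 1s, H_R 1s
H = np.array([[e2s, 0.0, 0.0, t1,  t1 ],
              [0.0, e2p, 0.0, t2,  t2 ],
              [0.0, 0.0, e2p, t3, -t3 ],   # 2py couples with opposite signs to H_L, H_R
              [t1,  t2,  t3,  e1s, 0.0],
              [t1,  t2, -t3,  0.0, e1s]])

# symmetry adaptation: replace H_L, H_R by 0.707(1sL + 1sR) and 0.707(1sL - 1sR)
U = np.eye(5)
U[3:, 3:] = [[s, s], [s, -s]]
Hs = U.T @ H @ U

a1, b2 = [0, 1, 3], [2, 4]   # {O 2s, O 2pz, h+} and {O 2py, h-}
assert np.allclose(Hs[np.ix_(a1, b2)], 0.0)   # 3x3 a1 and 2x2 b2 blocks decouple
```

Diagonalizing the 3x3 and 2x2 blocks separately then yields the same eigenvalues as the full 5x5 problem, which is the computational payoff of symmetry adaptation.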
Alternatively, to analyze the $H_2O$ molecule in terms of hybrid orbitals, one first combines the Oxygen 2s, $2p_z, 2p_x and 2p_y$ orbitals to form four $sp^3$ hybrid orbitals. The valence-shell electron-pair repulsion (VSEPR) model of chemical bonding (see R. J. Gillespie and R. S. Nyholm, Quart. Rev. 11, 339 (1957) and R. J. Gillespie, J. Chem. Educ. 40, 295 (1963)) directs one to involve all of the Oxygen valence orbitals in the hybridization because four $\sigma$-bond or nonbonding electron pairs need to be accommodated about the Oxygen center; no $\pi$ orbital interactions are involved, of course. Having formed the four $sp^3$ hybrid orbitals, one proceeds as with the primitive atomic orbitals; one forms symmetry adapted orbitals. In this case, the two Hydrogen 1s orbitals are combined exactly as above to form
$0.707(1s_L + 1s_R) \nonumber$
and
$0.707(1s_L - 1s_R). \nonumber$
The two $sp^3$ hybrids which lie in the plane of the H and O nuclei (label them L and R) are combined to give symmetry adapted hybrids: 0.707(L + R) and 0.707(L - R), which are of $a_1 \text{ and } b_2$ symmetry, respectively (Figure 5.2.3). The two $sp^3$ hybrids that lie above and below the plane of the three nuclei (label them T and B) are also symmetry adapted to form 0.707(T + B) and 0.707(T - B), which are of $a_1 \text{ and } b_1$ symmetry, respectively. Once again, one has broken the 6x6 secular problem into a 3x3 $a_1$ block, a 2x2 $b_2$ block and a 1x1 $b_1$ block. Although the resulting bonding, nonbonding and antibonding $a_1$ orbitals, the bonding and antibonding $b_2$ orbitals and the nonbonding $b_1$ orbital are now viewed as formed from symmetry adapted Hydrogen orbitals and four Oxygen $sp^3$ orbitals, they are, of course, exactly the same molecular orbitals as were obtained earlier in terms of the symmetry adapted primitive aos. The formation of hybrid orbitals was an intermediate step which could not alter the final outcome.
Figure 5.2.3: Symmetry-adapted orbital combinations for $H_2O$.
That no degenerate molecular orbitals arose in the above examples is a result of the fact that the $C_{2v}$ point group to which $H_2O$ and the allyl system belong (and certainly the $C_s$ subgroup which was used above in the allyl case) has no degenerate representations. Molecules with higher symmetry such as $NH_3 , CH_4$, and benzene have energetically degenerate orbitals because their molecular point groups have degenerate representations.
Linear molecules belong to the axial rotation group. Their symmetry is intermediate in complexity between nonlinear molecules and atoms.
For linear molecules, the symmetry of the electrostatic potential provided by the nuclei and the other electrons is described by either the $C_{\infty v} \text{ or } D_{\infty h}$ group. The essential difference between these symmetry groups and the finite point groups which characterize the non-linear molecules lies in the fact that the electrostatic potential which an electron feels is invariant to rotations of any amount about the molecular axis (i.e., $V(\gamma + \delta\gamma) = V(\gamma)$ for any angle increment $\delta\gamma$). This means that the operator $C_{\delta\gamma}$ which generates a rotation of the electron's azimuthal angle $\gamma$ by an amount $\delta\gamma$ about the molecular axis commutes with the Hamiltonian: $[\textbf{H}, C_{\delta\gamma}] = 0$. $C_{\delta\gamma}$ can be written in terms of the quantum mechanical operator $L_z = -i\hbar \frac{\partial}{\partial \gamma}$ describing the orbital angular momentum of the electron about the molecular (z) axis:
$C_{\delta\gamma}= e^{i\delta\gamma\dfrac{L_z}{\hbar}}. \nonumber$
Because $C_{\delta\gamma}$ commutes with the Hamiltonian and $C_{\delta\gamma}$ can be written in terms of $L_z , L_z$ must commute with the Hamiltonian. As a result, the molecular orbitals $\phi$ of a linear molecule must be eigenfunctions of the z-component of angular momentum $L_z$:
$-i\hbar\dfrac{\partial}{\partial \gamma}| \phi \rangle = m\hbar | \phi \rangle. \nonumber$
The electrostatic potential is not invariant under rotations of the electron about the x or y axes (those perpendicular to the molecular axis), so $L_x \text{ and } L_y$ do not commute with the Hamiltonian. Therefore, only $L_z$ provides a "good quantum number" in the sense that the operator $L_z$ commutes with the Hamiltonian.
In summary, the molecular orbitals of a linear molecule can be labeled by their m quantum number, which plays the same role as the point group labels did for non-linear polyatomic molecules, and which gives the eigenvalue of the angular momentum of the orbital about the molecule's symmetry axis. Because the kinetic energy part of the Hamiltonian contains $-\frac{\hbar^2}{2m_er^2}\frac{\partial^2}{\partial \gamma^2}$, whereas the potential energy part is independent of $\gamma$, the energies of the molecular orbitals depend on the square of the m quantum number. Thus, pairs of orbitals with m= ± 1 are energetically degenerate; pairs with m= ± 2 are degenerate, and so on. The absolute value of m, which is what the energy depends on, is called the $\lambda$ quantum number. Molecular orbitals with $\lambda = 0 \text{ are called } \sigma$ orbitals; those with $\lambda = 1 \text{ are } \pi$ orbitals; and those with $\lambda$ = 2 are $\delta$ orbitals.
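The $m^2$ dependence and the resulting ±m degeneracy can be checked with a finite-difference model of an electron on a ring. The sketch below (Python/NumPy; the grid size is an arbitrary numerical choice, with $\hbar = m_e = r = 1$) diagonalizes the periodic kinetic-energy operator $-\frac{1}{2}\frac{d^2}{d\gamma^2}$ and recovers a nondegenerate m = 0 level followed by degenerate pairs at energies close to $m^2/2$:

```python
import numpy as np

# discretize -(1/2) d^2/dgamma^2 on N grid points with periodic boundary conditions
N = 400
dg = 2.0 * np.pi / N
T = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1))
T[0, -1] = T[-1, 0] = -1.0          # wrap-around coupling closes the ring
E = np.sort(np.linalg.eigvalsh(T)) / (2.0 * dg**2)

assert np.isclose(E[0], 0.0, atol=1e-8)                              # m = 0, nondegenerate
assert np.isclose(E[1], E[2]) and np.isclose(E[1], 0.5, atol=1e-2)   # m = +1, -1 pair
assert np.isclose(E[3], E[4]) and np.isclose(E[3], 2.0, atol=1e-2)   # m = +2, -2 pair
```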
Just as in the non-linear polyatomic-molecule case, the atomic orbitals which constitute a given molecular orbital must have the same symmetry as that of the molecular orbital. This means that $\sigma,\pi, \text{ and } \delta$ molecular orbitals are formed, via LCAO-MO, from m=0, m= ± 1, and m= ± 2 atomic orbitals, respectively. In the diatomic $N_2$ molecule, for example, the core orbitals are of $\sigma$ symmetry as are the molecular orbitals formed from the 2s and $2p_z$ atomic orbitals (or their hybrids) on each Nitrogen atom. The molecular orbitals formed from the atomic $2p_{-1} =(2p_x- i2p_y) \text{ and the } 2p_{+1} = (2p_x + i2p_y)$ orbitals are of $\pi$ symmetry and have m = -1 and +1.
For homonuclear diatomic molecules and other linear molecules which have a center of symmetry, the inversion operation (in which an electron's coordinates are inverted through the center of symmetry of the molecule) is also a symmetry operation. Each resultant molecular orbital can then also be labeled by a quantum number denoting its parity with respect to inversion. The symbols g (for gerade or even) and u (for ungerade or odd) are used for this label. Again for $N_2$, the core orbitals are of $\sigma_g \text{ and } \sigma_u$ symmetry, and the bonding and antibonding $\sigma$ orbitals formed from the 2s and $2p_\sigma$ orbitals on the two Nitrogen atoms are of $\sigma_g \text{ and } \sigma_u$ symmetry.
The bonding $\pi$ molecular orbital pair (with m = +1 and -1) is of $\pi_u$ symmetry whereas the corresponding antibonding orbital is of $\pi_g$ symmetry. Examples of such molecular orbital symmetries are shown above.
The use of hybrid orbitals can be illustrated in the linear-molecule case by considering the $N_2$ molecule. Because two $\pi$ bonding and antibonding molecular orbital pairs are involved in $N_2$ (one with m = +1, one with m = -1), VSEPR theory guides one to form sp hybrid orbitals from each of the Nitrogen atom's 2s and $2p_z$ (which is also the 2p orbital with m = 0) orbitals. Ignoring the core orbitals, which are of $\sigma_g \text{ and } \sigma_u$ symmetry as noted above, one then symmetry adapts the four sp hybrids (two from each atom) to build one $\sigma_g$ orbital involving a bonding interaction between two sp hybrids pointed toward one another, an antibonding $\sigma_u$ orbital involving the same pair of sp orbitals but coupled with opposite signs, a nonbonding $\sigma_g$ orbital composed of two sp hybrids pointed away from the interatomic region combined with like sign, and a nonbonding $\sigma_u$ orbital made of the latter two sp hybrids combined with opposite signs. The two $2p_m$ orbitals (m= +1 and -1) on each Nitrogen atom are then symmetry adapted to produce a pair of bonding $\pi_u$ orbitals (with m = +1 and -1) and a pair of antibonding $\pi_g$ orbitals (with m = +1 and -1). This hybridization and symmetry adaptation thereby reduces the 8x8 secular problem (which would be 10x10 if the core orbitals were included) into a 2x2 $\sigma_g$ problem (one bonding and one nonbonding), a 2x2 $\sigma_u$ problem (one bonding and one nonbonding), an identical pair of 1x1 $\pi_u$ problems (bonding), and an identical pair of 1x1 $\pi_g$ problems (antibonding).
Another example of the equivalence among various hybrid and atomic orbital points of view is provided by the CO molecule. Using, for example, sp hybrid orbitals on C and O, one obtains a picture in which there are: two core $\sigma$ orbitals corresponding to the O-atom 1s and C-atom 1s orbitals; one CO bonding, two non-bonding, and one CO antibonding orbitals arising from the four sp hybrids; a pair of bonding and a pair of antibonding $\pi$ orbitals formed from the two p orbitals on O and the two p orbitals on C. Alternatively, using $sp^2$ hybrids on both C and O, one obtains: the two core $\sigma$ orbitals as above; a CO bonding and antibonding orbital pair formed from the $sp^2$ hybrids that are directed along the CO bond; and a single $\pi$ bonding and antibonding $\pi^{\text{*}}$ orbital set. The remaining two $sp^2$ orbitals on C and the two on O can then be symmetry adapted by forming ± combinations within each pair to yield: an $a_1$ non-bonding orbital (from the + combination) on each of C and O directed away from the CO bond axis; and a $p_\pi$ orbital on each of C and O that can subsequently overlap to form the second $\pi$ bonding and $\pi^{\text{*}}$ antibonding orbital pair.
It should be clear from the above examples, that no matter what particular hybrid orbitals one chooses to utilize in conceptualizing a molecule's orbital interactions, symmetry ultimately returns to force one to form proper symmetry adapted combinations which, in turn, renders the various points of view equivalent. In the above examples and in several earlier examples, symmetry adaptation of, for example, $sp^2$ orbital pairs (e.g., $sp_L^2 ± sp_R^2$) generated orbitals of pure spatial symmetry. In fact, symmetry combining hybrid orbitals in this manner amounts to forming other hybrid orbitals. For example, the above ± combinations of $sp^2$ hybrids directed to the left (L) and right (R) of some bond axis generate a new sp hybrid directed along the bond axis but opposite to the $sp^2$ hybrid used to form the bond and a non-hybridized p orbital directed along the L-to-R direction. In the CO example, these combinations of $sp^2$ hybrids on O and C produce sp hybrids on O and C and $p_\pi$ orbitals on O and C.
Atoms belong to the full rotation symmetry group; this makes their symmetry analysis the most complex to treat.
In moving from linear molecules to atoms, additional symmetry elements arise. In particular, the potential field experienced by an electron in an orbital becomes invariant to rotations of arbitrary amounts about the x, y, and z axes; in the linear-molecule case, it is invariant only to rotations of the electron's position about the molecule's symmetry axis (the z axis). These invariances are, of course, caused by the spherical symmetry of the potential of any atom. This additional symmetry of the potential causes the Hamiltonian to commute with all three components of the electron's angular momentum:
• $[L_x, H] =0$
• $[L_y, H] =0$
• $[L_z, H] =0$
It is straightforward to show that H also commutes with the operator $L^2$, defined as the sum of the squares of the three individual components of the angular momentum
$L^2 = L_x^2 + L_y^2 + L_z^2 \nonumber$
Because $L_x$, $L_y$, and $L_z$ do not commute with one another, orbitals which are eigenfunctions of H cannot be simultaneous eigenfunctions of all three angular momentum operators. However, because $L_x$, $L_y$, and $L_z$ do commute with $L^2$, orbitals can be found which are eigenfunctions of H, of $L^2$ and of any one component of L; it is convention to select $L_z$ as the operator which, along with H and $L^2$, form a mutually commutative operator set of which the orbitals are simultaneous eigenfunctions.
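These commutation properties can be verified concretely with the l = 1 angular momentum matrices. The sketch below (Python/NumPy; standard matrix representations in the |m = +1, 0, -1⟩ basis, with $\hbar$ = 1) confirms that the components fail to commute with one another while each commutes with $L^2$, whose eigenvalue l(l+1) = 2 appears on the diagonal:

```python
import numpy as np

# l = 1 angular momentum matrices in the |m = +1, 0, -1> basis (hbar = 1)
s = 1.0 / np.sqrt(2.0)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]], dtype=complex)
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(Lx, Ly), 1j * Lz)   # components do not commute with each other
assert np.allclose(comm(Lz, Lx), 1j * Ly)
assert np.allclose(comm(L2, Lz), 0.0)       # but L^2 commutes with each component
assert np.allclose(L2, 2.0 * np.eye(3))     # l(l+1) = 2 for l = 1
```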
So, for any atom, the orbitals can be labeled by both l and m quantum numbers, which play the role that point group labels did for non-linear molecules and $\lambda$ did for linear molecules. Because (i) the kinetic energy operator in the electronic Hamiltonian explicitly contains $\frac{L^2}{2m_er^2}$, (ii) the Hamiltonian does not contain additional $L_z , L_x, \text{ or } L_y$ factors, and (iii) the potential energy part of the Hamiltonian is spherically symmetric (and commutes with $L^2 \text{ and } L_z$), the energies of atomic orbitals depend upon the l quantum number and are independent of the m quantum number. This is the source of the 2l+1- fold degeneracy of atomic orbitals.
The angular part of the atomic orbitals is described in terms of the spherical harmonics $Y_{l,m}$; that is, each atomic orbital $\phi$ can be expressed as
$\phi_{n,l,m} = Y_{l,m}(\theta,\varphi)R_{n,l}(r). \nonumber$
The explicit solutions for the $Y_{l,m}$ and for the radial wavefunctions $R_{n,l}$ are given in Appendix B. The variables $r,\theta,\varphi$ give the position of the electron in the orbital in spherical coordinates. These angular functions are, as discussed earlier, related to the cartesian (i.e., spatially oriented) orbitals by simple transformations; for example, the orbitals with l=2 and m=2,1,0,-1,-2 can be expressed in terms of the $d_{xy}, d_{xz}, d_{yz}, d_{x^2-y^2} , \text{ and } d_{z^2}$ orbitals. Either set of orbitals is acceptable in the sense that each orbital is an eigenfunction of H; transformations within a degenerate set of orbitals do not destroy the Hamiltonian- eigenfunction feature. The orbital set labeled with l and m quantum numbers is most useful when one is dealing with isolated atoms (which have spherical symmetry), because m is then a valid symmetry label, or with an atom in a local environment which is axially symmetric (e.g., in a linear molecule) where the m quantum number remains a useful symmetry label. The cartesian orbitals are preferred for describing an atom in a local environment which displays lower than axial symmetry (e.g., an atom interacting with a diatomic molecule in $C_{2v}$ symmetry).
The radial part of the orbital $R_{n,l}(r)$ as well as the orbital energy $\varepsilon_{n,l}$ depend on l because the Hamiltonian itself contains $\frac{l(l+1)\hbar^2}{2m_er^2}$; they are independent of m because the Hamiltonian has no m-dependence. For bound orbitals, $R_{n,l}(r)$ decays exponentially for large r (as $e^{-r\sqrt{2|\varepsilon_{n,l}|}}$ in atomic units), and for unbound (scattering) orbitals, it is oscillatory at large r with an oscillation period related to the de Broglie wavelength of the electron. In $R_{n,l}(r)$ there are (n-l-1) radial nodes lying between r=0 and $r=\infty$. These nodes provide differential stabilization of low-l orbitals over high-l orbitals of the same principal quantum number n: penetration of outer shells is greater for low-l orbitals because they have more radial nodes; as a result, they have larger amplitude near the atomic nucleus and thus experience enhanced attraction to the positive nuclear charge.
The average size of an orbital, measured by the expectation value

$\langle r \rangle = \int R^2_{n,l}(r)\, r\, r^2\, dr, \nonumber$

depends strongly on $n$, weakly on $l$, and is independent of $m$. It also depends strongly on the nuclear charge and on the potential produced by the other electrons. This potential is often characterized qualitatively in terms of an effective nuclear charge $Z_{eff}$, which is the true nuclear charge of the atom, Z, minus a screening component $Z_{sc}$ that describes the repulsive effect of the electron density lying radially inside the electron under study. Because, for a given n, low-l orbitals penetrate closer to the nucleus than do high-l orbitals, they have higher $Z_{eff}$ values (i.e., smaller $Z_{sc}$ values) and correspondingly smaller average sizes and larger binding energies.
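These trends can be checked numerically for a hydrogen-like atom, where closed forms for $R_{n,l}$ are available. The sketch below (Python with NumPy, Z = 1, atomic units; the grid and helper function are our own choices) verifies the hydrogenic result $\langle r \rangle = \frac{1}{2Z}\left[3n^2 - l(l+1)\right]$ and the $(n-l-1)$ radial node count:

```python
# Numerical sketch (hydrogen-like atom, atomic units, Z assumed = 1) checking
# <r> = (1/2Z)[3n^2 - l(l+1)] and the (n-l-1) radial-node count
# using the closed-form hydrogenic radial functions R_{n,l}.
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Z = 1.0
r = np.linspace(1e-6, 60.0, 200_000)

R = {  # normalized closed-form hydrogenic radial functions
    (1, 0): 2.0 * Z**1.5 * np.exp(-Z * r),
    (2, 0): (Z**1.5 / (2.0 * np.sqrt(2.0))) * (2.0 - Z * r) * np.exp(-Z * r / 2.0),
    (2, 1): (Z**1.5 / (2.0 * np.sqrt(6.0))) * Z * r * np.exp(-Z * r / 2.0),
}

for (n, l), Rnl in R.items():
    r_avg = trapz(Rnl**2 * r**3, r)              # <r> = integral of R^2 r * r^2 dr
    nodes = int(np.sum(Rnl[1:] * Rnl[:-1] < 0))  # interior sign changes of R_{n,l}
    print((n, l), round(r_avg, 3), nodes)        # e.g. (2, 0) gives 6.0 and 1 node
```

The 2s orbital ($\langle r \rangle = 6$) is on average larger than the 2p ($\langle r \rangle = 5$), yet its radial node gives it more amplitude near the nucleus, which is the penetration effect discussed above.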
Along "reaction paths", orbitals can be connected one-to-one according to their symmetries and energies. This is the origin of the Woodward-Hoffmann rules.
• 6.1: Reduction in Symmetry Along Reaction Paths
As fragments are brought together to form a larger molecule, the symmetry of the nuclear framework (recall the symmetry of the coulombic potential experienced by electrons depends on the locations of the nuclei) changes. However, in some cases, certain symmetry elements persist throughout the path connecting the fragments and the product molecule. These preserved symmetry elements can be used to label the orbitals throughout the 'reaction'.
• 6.2: Orbital Correlation Diagrams - Origins of the Woodward-Hoffmann Rules
Connecting the energy-ordered orbitals of reactants to those of products according to symmetry elements that are preserved throughout the reaction produces an orbital correlation diagram.
06: Quantum Mechanics in Reactions
As fragments are brought together to form a larger molecule, the symmetry of the nuclear framework (recall the symmetry of the Coulombic potential experienced by electrons depends on the locations of the nuclei) changes. However, in some cases, certain symmetry elements persist throughout the path connecting the fragments and the product molecule. These preserved symmetry elements can be used to label the orbitals throughout the 'reaction'.
The point-group, axial- and full-rotation group symmetries which arise in nonlinear molecules, linear molecules, and atoms, respectively, are seen to provide quantum numbers or symmetry labels which can be used to characterize the orbitals appropriate for each such species. In a physical event such as interaction with an external electric or magnetic field or a chemical process such as collision or reaction with another species, the atom or molecule can experience a change in environment which causes the electrostatic potential which its orbitals experience to be of lower symmetry than that of the isolated atom or molecule. For example, when an atom interacts with another atom to form a diatomic molecule or simply to exchange energy during a collision, each atom's environment changes from being spherically symmetric to being axially symmetric. When the formaldehyde molecule undergoes unimolecular decomposition to produce $CO + H_2$ along a path that preserves $C_{2v}$ symmetry, the orbitals of the CO moiety evolve from $C_{2v}$ symmetry to axial symmetry.
It is important, therefore, to be able to label the orbitals of atoms, linear, and nonlinear molecules in terms of their full symmetries as well as in terms of the groups appropriate to lower-symmetry situations. This can be done by knowing how the representations of a higher symmetry group decompose into representations of a lower group. For example, the $Y_{l,m}$ functions appropriate for spherical symmetry, which belong to a $(2l+1)$-fold degenerate set in this higher symmetry, decompose into doubly degenerate pairs of functions $Y_{l,l} , Y_{l,-l} ; Y_{l,l-1} , Y_{l,-l+1}$; etc., plus a single non-degenerate function $Y_{l,0}$, in axial symmetry. Moreover, because $L^2$ no longer commutes with the Hamiltonian whereas $L_z$ does, orbitals with different l-values but the same m-values can be coupled. As the $N_2$ molecule is formed from two N atoms, the 2s and $2p_z$ orbitals, both of which belong to the same $(\sigma)$ symmetry in the axial rotation group but which are of different symmetry in the isolated-atom spherical symmetry, can mix to form the $\sigma_g$ bonding orbital, the $\sigma_u$ antibonding orbital, as well as the $\sigma_g$ and $\sigma_u$ nonbonding lone-pair orbitals. The fact that 2s and 2p have different l-values no longer uncouples these orbitals as it did for the isolated atoms, because l is no longer a "good" quantum number.
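The bookkeeping of this spherical-to-axial decomposition can be sketched in a few lines (a schematic illustration only; the function name and label strings are our own):

```python
# Sketch: how a (2l+1)-fold degenerate set of Y_{l,m} splits in axial symmetry
# into one nondegenerate m=0 function plus l doubly degenerate (+m, -m) pairs.
AXIAL = {0: "sigma", 1: "pi", 2: "delta", 3: "phi"}  # lambda = |m| labels

def axial_split(l):
    """Return the axial-symmetry labels arising from Y_{l,m}, m = -l, ..., +l."""
    labels = [AXIAL[0]]                         # the lone Y_{l,0}
    for m in range(1, l + 1):                   # each (+m, -m) degenerate pair
        labels += 2 * [AXIAL.get(m, f"|m|={m}")]
    return labels

for l in (0, 1, 2):
    print(l, axial_split(l))   # l=2: one sigma, a pi pair, a delta pair
```

For a d level (l = 2) this reproduces the familiar splitting into one $\sigma$, one $\pi$ pair, and one $\delta$ pair, five functions in all.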
Another example of reduced symmetry is provided by the changes that occur as $H_2O$ fragments into OH and H. The $\sigma$ bonding orbitals $(a_1 \text{ and } b_2)$ and in-plane lone pair $(a_1)$ and the $\sigma$* antibonding $(a_1 \text{ and } b_2) \text{ of } H_2O$ become a' orbitals (see the Figure below); the out-of-plane $b_1$ lone pair orbital becomes a'' (in Appendix IV of Electronic Spectra and Electronic Structure of Polyatomic Molecules , G. Herzberg, Van Nostrand Reinhold Co., New York, N.Y. (1966) tables are given which allow one to determine how particular symmetries of a higher group evolve into symmetries of a lower group).
Figure 6.1.1
To further illustrate these points dealing with orbital symmetry, consider the insertion of CO into $H_2$ along a path which preserves $C_{2v}$ symmetry. As the insertion occurs, the degenerate $\pi$ bonding orbitals of CO become $b_1 \text{ and } b_2$ orbitals. The antibonding $\pi$* orbitals of CO also become $b_1 \text{ and } b_2$. The $\sigma_g$ bonding orbital of $H_2 \text{ becomes } a_1$, and the $\sigma_u \text{ antibonding } H_2 \text{ orbital becomes } b_2.$ The orbitals of the reactant $H_2CO$ are energy-ordered and labeled according to $C_{2v}$ symmetry in the Figure shown below, as are the orbitals of the product $H_2$ + CO.
Figure 6.1.2
When these orbitals are connected according to their symmetries as shown above, one reactant orbital to one product orbital starting with the low-energy orbitals and working to increasing energy, an orbital correlation diagram (OCD) is formed. These diagrams play essential roles in analyzing whether reactions will have symmetry-imposed energy barriers on their potential energy surfaces along the reaction path considered in the symmetry analysis. The essence of this analysis, which is covered in detail in Chapter 12, can be understood by noticing that the sixteen electrons of ground-state $H_2CO$ do not occupy their orbitals with the same occupancy pattern, symmetry-by-symmetry, as do the sixteen electrons of ground-state $H_2$ + CO. In particular, $H_2CO$ places a pair of electrons in the second $b_2 \text{ orbital while } H_2 + CO$ does not; on the other hand, $H_2$ + CO places two electrons in the sixth $a_1 \text{ orbital while } H_2CO$ does not. The mismatch of the orbitals near the $5a_1, 6a_1, \text{ and } 2b_2$ orbitals is the source of the mismatch in the electronic configurations of the ground-states of $H_2CO \text{ and } H_2$ + CO. These mismatches give rise, as shown in Chapter 12, to symmetry-caused energy barriers on the $H_2CO \rightarrow H_2 + CO$ reaction potential energy surface.
Connecting the energy-ordered orbitals of reactants to those of products according to symmetry elements that are preserved throughout the reaction produces an orbital correlation diagram.
In each of the examples cited above, symmetry reduction occurred as a molecule or atom approached and interacted with another species. The "path" along which this approach was thought to occur was characterized by symmetry in the sense that it preserved certain symmetry elements while destroying others. For example, the collision of two Nitrogen atoms to produce $N_2$ clearly occurs in a way which destroys spherical symmetry but preserves axial symmetry. In the other example used above, the formaldehyde molecule was postulated to decompose along a path which preserves $C_{2v}$ symmetry while destroying the axial symmetries of CO and $H_2.$ The actual decomposition of formaldehyde may occur along some other path, but if it were to occur along the proposed path, then the symmetry analysis presented above would be useful.
The symmetry reduction analysis outlined above allows one to see new orbital interactions that arise (e.g., the 2s and $2p_z$ interactions in the $N + N \rightarrow N_2$ example) as the interaction increases. It also allows one to construct orbital correlation diagrams (OCD's) in which the orbitals of the "reactants" and "products" are energy ordered and labeled by the symmetries which are preserved throughout the "path", and the orbitals are then correlated by drawing lines connecting the orbitals of a given symmetry, one-by-one in increasing energy, from the reactants side of the diagram to the products side. As noted above, such orbital correlation diagrams play a central role in using symmetry to predict whether photochemical and thermal chemical reactions will experience activation barriers along proposed reaction paths (this subject is treated in Chapter 12).
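The connection rule just stated, group the orbitals by preserved symmetry and then pair them in order of increasing energy, can be sketched algorithmically. In the sketch below the orbital names mimic the butadiene/cyclobutene example treated next, but the energies are placeholder orderings, not computed values:

```python
# Sketch of the OCD correlation step: within each preserved symmetry block,
# connect energy-ordered reactant orbitals to energy-ordered product orbitals.
from collections import defaultdict

def correlate(reactants, products):
    """Orbitals are (name, symmetry, energy) tuples; returns correlated pairs."""
    groups = defaultdict(lambda: ([], []))
    for name, sym, e in reactants:
        groups[sym][0].append((e, name))
    for name, sym, e in products:
        groups[sym][1].append((e, name))
    links = []
    for sym, (left, right) in sorted(groups.items()):
        for (_, rname), (_, pname) in zip(sorted(left), sorted(right)):
            links.append((rname, pname, sym))
    return links

# Illustrative input: butadiene pi orbitals vs. cyclobutene active orbitals,
# labeled even (e) / odd (o) under the preserved mirror plane; the integer
# energies are placeholders encoding only the energy ordering.
reactants = [("pi1", "e", 1), ("pi2", "o", 2), ("pi3", "e", 3), ("pi4", "o", 4)]
products = [("sigma", "e", 1), ("pi", "e", 2), ("pi*", "o", 3), ("sigma*", "o", 4)]
print(correlate(reactants, products))
```

Note how the occupied reactant orbital pi2 correlates with the unoccupied product orbital pi*; this kind of crossing is exactly what signals a symmetry-imposed barrier in the analysis below.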
To again illustrate the construction of an OCD, consider the $\pi$ orbitals of 1,3-butadiene as the molecule undergoes disrotatory closing (notice that this is where a particular path is postulated; the actual reaction may or may not occur along such a path) to form cyclobutene. Along this path, the plane of symmetry which bisects and is perpendicular to the $C_2-C_3$ bond is preserved, so the orbitals of the reactant and product are labeled as being even (e) or odd (o) under reflection through this plane. It is not proper to label the orbitals with respect to their symmetry under the plane containing the four C atoms; although this plane is indeed a symmetry operation for the reactants and products, it does not remain a valid symmetry throughout the reaction path.
Figure 6.2.1
The four $\pi$ orbitals of 1,3-butadiene are of the following symmetries under the preserved plane (see the orbitals in the Figure above): $\pi_1 = e, \pi_2 = o, \pi_3 = e, \pi_4 = o.$ The $\pi$ and $\pi^{\text{*}}$ and $\sigma$ and $\sigma^{\text{*}}$ orbitals of cyclobutene which evolve from the four active orbitals of the 1,3-butadiene are of the following symmetry and energy order: $\sigma = e, \pi = e, \pi^{\text{*}} = o, \sigma^{\text{*}} = o.$ Connecting these orbitals by symmetry, starting with the lowest energy orbital and going through the highest energy orbital, gives the following OCD:
The fact that the lowest two orbitals of the reactants, which are those occupied by the four $\pi$ electrons of the reactant, do not correlate to the lowest two orbitals of the products, which are the orbitals occupied by the two $\sigma \text{ and two } \pi$ electrons of the products, will be shown later in Chapter 12 to be the origin of the activation barrier for the thermal disrotatory rearrangement (in which the four active electrons occupy these lowest two orbitals) of 1,3-butadiene to produce cyclobutene.
If the reactants could be prepared, for example by photolysis, in an excited state having orbital occupancy $\pi_1^2 \pi_2^1 \pi_3^1,$ then reaction along the path considered would not have any symmetry-imposed barrier because this singly excited configuration correlates to a singly-excited configuration $\sigma^2\pi^1\pi^{\text{*}1}$ of the products. The fact that the reactant and product configurations are of equivalent excitation level causes there to be no symmetry constraints on the photochemically induced reaction of 1,3-butadiene to produce cyclobutene. In contrast, the thermal reaction considered first above has a symmetry-imposed barrier because the orbital occupancy must rearrange (two electrons must change orbitals) for the ground-state wavefunction of the reactant to evolve smoothly into that of the product.
It should be stressed that although these symmetry considerations may allow one to anticipate barriers on reaction potential energy surfaces, they have nothing to do with the thermodynamic energy differences of such reactions. Symmetry says whether there will be symmetry-imposed barriers above and beyond any thermodynamic energy differences. The enthalpies of formation of reactants and products contain the information about the reaction's overall energy balance.
As another example of an OCD, consider the $N + N \rightarrow N_2$ recombination reaction mentioned above. The orbitals of the atoms must first be labeled according to the axial rotation group (including the inversion operation because this is a homonuclear molecule). The core 1s orbitals are symmetry adapted to produce $1\sigma_g \text{ and } 1\sigma_u$ orbitals (the number 1 is used to indicate that these are the lowest energy orbitals of their respective symmetries); the 2s orbitals generate $2\sigma_g \text{ and } 2\sigma_u$ orbitals; the 2p orbitals combine to yield $3\sigma_g$, a pair of $1\pi_u$ orbitals, a pair of $1\pi_g$ orbitals, and the $3\sigma_u$ orbital, whose bonding, nonbonding, and antibonding nature was detailed earlier. In the two separated Nitrogen atoms, the two orbitals derived from the 2s atomic orbitals are degenerate, and the six orbitals derived from the Nitrogen atoms' 2p orbitals are degenerate. At the equilibrium geometry of the $N_2$ molecule, these degeneracies are lifted; only the degeneracies of the $1\pi_u \text{ and } 1\pi_g$ orbitals, which are dictated by the degeneracy of +m and -m orbitals within the axial rotation group, remain.
As one proceeds inward past the equilibrium bond length of $N_2$, toward the united-atom limit in which the two Nitrogen nuclei are fused to produce a Silicon nucleus, the energy ordering of the orbitals changes. Labeling the orbitals of the Silicon atom according to the axial rotation group, one finds the 1s is $\sigma_g$, the 2s is $\sigma_g$, the 2p orbitals are $\sigma_u$ and $\pi_u$, the 3s orbital is $\sigma_g$, the 3p orbitals are $\sigma_u$ and $\pi_u$, and the 3d orbitals are $\sigma_g$, $\pi_g$, and $\delta_g$. The following OCD is obtained when one connects the orbitals of the two separated Nitrogen atoms (properly symmetry adapted) to those of the $N_2$ molecule and eventually to those of the Silicon atom.
Figure 6.2.3
The fact that the separated-atom and united-atom limits involve several crossings in the OCD can be used to explain barriers in the potential energy curves of such diatomic molecules which occur at short internuclear distances. It should be noted that the Silicon atom's 3p orbitals of $\pi_u$ symmetry and its 3d orbitals of $\sigma_g \text{ and } \delta_g$ symmetry correlate with higher energy orbitals of $N_2$, not with the valence orbitals of this molecule, and that the $3\sigma_u$ antibonding orbital of $N_2$ correlates with a higher energy orbital of Silicon (in particular, its 4p orbital).
The most elementary molecular orbital models contain symmetry, nodal pattern, and approximate energy information
• 7.1: The LCAO-MO Expansion and the Orbital-Level Schrödinger Equation
In the simplest picture of chemical bonding, the valence molecular orbitals are constructed as linear combinations of valence atomic orbitals according to the LCAO-MO formula.
• 7.2: Determining the Effective Potential
In the most elementary models of orbital structure, the quantities that explicitly define the potential V are not computed from first principles as they are in so-called ab initio methods. Rather, either experimental data or results of ab initio calculations are used to determine the parameters in terms of which V is expressed. The resulting empirical or semi-empirical methods discussed below differ in the sophistication used to include electron-electron interactions.
• 7.3: The Hückel Parameterization
The Hückel model is the most simplified orbital-level model. Its implementation requires knowledge of the atomic $\alpha_\mu$ and $\beta^0_{\mu,\nu}$ values, which are eventually expressed in terms of experimental data, as well as a means of calculating the geometry dependence of the $\beta_{\mu,\nu}$'s (e.g., some method for computing overlap matrices $S_{\mu,\nu}$).
• 7.4: The Extended Hückel Method
It is well known that bonding and antibonding orbitals are formed when a pair of atomic orbitals from neighboring atoms interact. The energy splitting between the bonding and antibonding orbitals depends on the overlap between the pair of atomic orbitals.
07: Further Characterization of Molecular Orbitals
In the simplest picture of chemical bonding, the valence molecular orbitals $\phi_i$ are constructed as linear combinations of valence atomic orbitals $\chi_\mu$ according to the LCAO-MO formula:
$\phi_i=\sum\limits_\mu C_{i\mu}\chi_\mu. \nonumber$
The core electrons are not explicitly included in such a treatment, although their effects are felt through an electrostatic potential V that has the following properties:
1. V contains contributions from all of the nuclei in the molecule exerting coulombic attractions on the electron, as well as coulombic repulsions and exchange interactions exerted by the other electrons on this electron;
2. As a result of the (assumed) cancellation of attractions from distant nuclei and repulsions from the electron clouds (i.e., the core, lone-pair, and valence orbitals) that surround these distant nuclei, the effect of V on any particular mo $\phi_i$ depends primarily on the atomic charges and local bond polarities of the atoms over which $\phi_i$ is delocalized.
As a result of these assumptions, qualitative molecular orbital models can be developed in which one assumes that each mo $\phi_i$ obeys a one-electron Schrödinger equation
$h\phi_i = \varepsilon_i \phi_i. \nonumber$
Here the orbital-level hamiltonian h contains the kinetic energy of motion of the electron and the potential V mentioned above:
$\left[ \dfrac{-\hbar^2}{2m_e}\nabla^2 + V \right] \phi_i = \varepsilon_i \phi_i. \nonumber$
Expanding the mo $\phi_i$ in the LCAO-MO manner, substituting this expansion into the above Schrödinger equation, multiplying on the left by $\chi^{\text{*}}_\nu$, and integrating over the coordinates of the electron generates the following orbital-level eigenvalue problem:
$\sum\limits_\mu \langle\chi_\nu | \dfrac{-\hbar^2}{2m_e}\nabla^2 + V|\chi_\mu \rangle C_{i\mu} = \varepsilon_i \sum\limits_\mu \langle \chi_\nu | \chi_\mu \rangle C_{i\mu}. \nonumber$
If the constituent atomic orbitals {$\chi_\mu$} have been orthonormalized as discussed earlier in this chapter, the overlap integrals $\langle \chi_\nu | \chi_\mu \rangle$ reduce to $\delta_{\mu,\nu}$.
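For a small basis, this generalized eigenvalue problem can be solved directly. The sketch below (Python with NumPy/SciPy) uses a hypothetical two-orbital model; the $\alpha$, $\beta$, and overlap values are illustrative, not fitted parameters:

```python
# Numerical sketch of the orbital-level eigenvalue problem  h C = eps S C
# for a hypothetical two-orbital (H2-like) model with non-orthogonal basis.
import numpy as np
from scipy.linalg import eigh

alpha, beta, S12 = -1.0, -0.5, 0.4           # assumed model parameters (energy units)
h = np.array([[alpha, beta], [beta, alpha]]) # orbital-level Hamiltonian matrix
S = np.array([[1.0, S12], [S12, 1.0]])       # overlap matrix

eps, C = eigh(h, S)                          # generalized eigenproblem, ascending eps
print(eps)  # bonding: (alpha+beta)/(1+S), antibonding: (alpha-beta)/(1-S)
```

The closed-form 2x2 eigenvalues $(\alpha+\beta)/(1+S)$ and $(\alpha-\beta)/(1-S)$ already display the overlap effect discussed in Section 7.4: the antibonding level is pushed above $\alpha$ by more than the bonding level is pushed below it.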
7.02: Determining the Effective Potential
In the most elementary models of orbital structure, the quantities that explicitly define the potential \(V\) are not computed from first principles as they are in so-called ab initio methods. Rather, either experimental data or results of ab initio calculations are used to determine the parameters in terms of which \(V\) is expressed. The resulting empirical or semi-empirical methods discussed below differ in the sophistication used to include electron-electron interactions as well as in the manner experimental data or ab initio computational results are used to specify \(V\).
If experimental data is used to parameterize a semi-empirical model, then the model should not be extended beyond the level at which it has been parameterized. For example, experimental bond energies, excitation energies, and ionization energies may be used to determine molecular orbital energies which, in turn, are summed to compute total energies. In such a parameterization it would be incorrect to subsequently use these molecular orbitals to form a wavefunction, as in Sections 3 and 6, that goes beyond the simple 'product of orbitals' description. To do so would be inconsistent, because the more sophisticated wavefunction would double-count correlation effects that are already built into the parameters through the experimental data (which contain mother nature's electronic correlations).
Alternatively, if results of ab initio theory at the single-configuration orbital-product wavefunction level are used to define the parameters of a semi-empirical model, it would then be proper to use the semi-empirical orbitals in a subsequent higher-level treatment of electronic structure as done in Section 6.
In the most simplified embodiment of the above orbital-level model, the following additional approximations are introduced.
Approximation 1: Diagonal Component
The diagonal values $\langle \chi_\mu | \frac{-\hbar^2}{2m_e}\nabla^2 + V|\chi_\mu \rangle$, which are usually denoted $\alpha_\mu$, are taken to be equal to the energy of an electron in the atomic orbital $\chi_\mu$ and, as such, are evaluated in terms of atomic ionization energies (IP's) and electron affinities (EA's):
$\langle\chi_\mu|\dfrac{-\hbar^2}{2m_e}\nabla^2 + V|\chi_\mu\rangle = -IP_\mu, \nonumber$
for atomic orbitals that are occupied in the atom, and
$\langle\chi_\mu|\dfrac{-\hbar^2}{2m_e}\nabla^2 + V|\chi_\mu\rangle = -EA_\mu, \nonumber$
for atomic orbitals that are not occupied in the atom.
These approximations assume that contributions in V arising from coulombic attraction to nuclei other than the one on which $\chi_\mu$ is located, and repulsions from the core, lone-pair, and valence electron clouds surrounding these other nuclei cancel to an extent that $\langle\chi_\mu|V|\chi_\mu\rangle$ contains only potentials from the atom on which $\chi_\mu$ sits.
It should be noted that the IP's and EA's of valence-state orbitals are not identical to the experimentally measured IP's and EA's of the corresponding atom, but can be obtained from such information. For example, the 2p valence-state IP (VSIP) for a Carbon atom is the energy difference associated with the hypothetical process

$C(1s^22s2p_x2p_y2p_z) \rightarrow C^+(1s^22s2p_x2p_y). \nonumber$

If the energy difference for the "promotion" of C,

$C(1s^22s^22p_x2p_y) \rightarrow C(1s^22s2p_x2p_y2p_z); \Delta E_C \nonumber$

and that for the promotion of $C^+$,

$C^+(1s^22s^22p_x) \rightarrow C^+(1s^22s2p_x2p_y); \Delta E_{C^+} \nonumber$

are known, the desired VSIP is given by:

$IP_{2p_z} = IP_C + \Delta E_{C^+} - \Delta E_C. \nonumber$

The EA of the 2p orbital is obtained from the

$C(1s^22s^22p_x2p_y) \rightarrow C^-(1s^22s^22p_x2p_y2p_z) \nonumber$

energy gap, which means that $EA_{2p_z} = EA_C$. Some common IP's of valence 2p orbitals in eV are as follows: C (11.16), N (14.12), $N^+$ (28.71), O (17.70), $O^+$ (31.42), $F^+$ (37.28).
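This thermodynamic cycle amounts to simple bookkeeping: ionize the ground-state atom, promote the cation to its valence state, and subtract the promotion energy of the neutral. A hedged numerical sketch follows; the carbon IP of 11.26 eV is the experimental ground-state value, but the two promotion energies below are hypothetical placeholders, not tabulated data:

```python
# VSIP bookkeeping sketch:  IP_2pz = IP_C + dE(C+) - dE(C)
IP_C = 11.26       # eV, experimental first ionization energy of ground-state C
dE_C = 4.18        # eV, promotion energy of C  (hypothetical placeholder value)
dE_Cplus = 5.33    # eV, promotion energy of C+ (hypothetical placeholder value)

# Cycle: ground C -> ground C+ (IP_C), ground C+ -> valence C+ (dE_Cplus),
# then subtract the promotion of neutral C (dE_C).
VSIP_2pz = IP_C + dE_Cplus - dE_C
print(VSIP_2pz)
```

With these placeholder inputs the cycle returns a VSIP somewhat larger than the ground-state IP, which is the qualitative pattern seen in the tabulated valence-state values above.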
Approximation 2: Nearest Neighbors Approximation
The off-diagonal elements $\langle\chi_\nu | \frac{-\hbar^2}{2m_e}\nabla^2 + V|\chi_\mu \rangle$ are taken as zero if $\chi_\mu \text{ and } \chi_\nu$ belong to the same atom because the atomic orbitals are assumed to have been constructed to diagonalize the one-electron hamiltonian appropriate to an electron moving in that atom. They are set equal to a parameter denoted $\beta_{\mu,\nu} \text{ if } \chi_\mu \text{ and } \chi_\nu$ reside on neighboring atoms that are chemically bonded. If $\chi_\mu$ and $\chi_\nu$ reside on atoms that are not bonded neighbors, then the off-diagonal matrix element is set equal to zero.
Approximation 3: Off-Diagonal Component
The geometry dependence of the $\beta_{\mu,\nu}$ parameters is often approximated by assuming that $\beta_{\mu,\nu}$ is proportional to the overlap $S_{\mu,\nu}$ between the corresponding atomic orbitals:
$\beta_{\mu,\nu} = \beta^o_{\mu,\nu}S_{\mu,\nu}. \nonumber$
Here $\beta^o_{\mu,\nu}$ is a constant (having energy units) characteristic of the bonding interaction between $\chi_\mu \text{ and } \chi_\nu$; its value is usually determined by forcing the molecular orbital energies obtained from such a qualitative orbital treatment to yield experimentally correct ionization potentials, bond dissociation energies, or electronic transition energies.
It is sometimes assumed that the overlap matrix $S$ is the identity matrix, so that overlap between the atomic orbitals is neglected when solving the orbital-level eigenvalue problem.
The three approximations above form the basis of the so-called Hückel model. Its implementation requires knowledge of the atomic $\alpha_\mu$ and $\beta^0_{\mu,\nu}$ values, which are eventually expressed in terms of experimental data, as well as a means of calculating the geometry dependence of the $\beta_{\mu,\nu}$'s (e.g., some method for computing overlap matrices $S_{\mu,\nu}$ ).
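As a concrete illustration of these three approximations at work (not part of the original parameterization discussion), the Hückel matrix for the $\pi$ system of 1,3-butadiene can be diagonalized in a few lines. Here $\alpha$ is set to 0 and $\beta$ to $-1$, so the eigenvalues read directly as $\alpha + x\beta$:

```python
# Hückel sketch for the pi system of 1,3-butadiene: alpha on the diagonal
# (set to 0), beta (set to -1) only between bonded nearest neighbors,
# and S taken as the identity matrix.
import numpy as np

n = 4
H = np.zeros((n, n))
for i in range(n - 1):                  # nearest-neighbor couplings only
    H[i, i + 1] = H[i + 1, i] = -1.0    # beta; diagonal alpha = 0

eps = np.linalg.eigvalsh(H)             # orbital energies, ascending
print(np.round(eps, 3))                 # alpha ± 1.618|beta| and alpha ± 0.618|beta|
```

The familiar result $\varepsilon = \alpha \pm 1.618\beta,\ \alpha \pm 0.618\beta$ emerges, with the four $\pi$ electrons filling the two levels below $\alpha$.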
7.04: The Extended Hückel Method
It is well known that bonding and antibonding orbitals are formed when a pair of atomic orbitals from neighboring atoms interact. The energy splitting between the bonding and antibonding orbitals depends on the overlap between the pair of atomic orbitals. Also, the energy of the antibonding orbital lies further above the arithmetic mean $E_{ave} = \frac{1}{2}(E_A + E_B)$ of the energies of the constituent atomic orbitals $(E_A \text{ and } E_B)$ than the bonding orbital lies below $E_{ave}$. If overlap is ignored, as in conventional Hückel theory (except in parameterizing the geometry dependence of $\beta_{\mu,\nu}$), the differential destabilization of antibonding orbitals compared to stabilization of bonding orbitals cannot be accounted for.
By parameterizing the off-diagonal Hamiltonian matrix elements in the following overlap-dependent manner:
$h_{\nu,\mu} = \langle\chi_\nu|\frac{-\hbar^2}{2m_e}\nabla^2 + V |\chi_\mu\rangle = 0.5K(h_{\mu,\mu} + h_{\nu,\nu})S_{\mu,\nu}, \nonumber$
and explicitly treating the overlaps among the constituent atomic orbitals $\{\chi_\mu\}$ in solving the orbital-level Schrödinger equation
$\sum\limits_\mu \langle\chi_\nu |\frac{-\hbar^2}{2m_e}\nabla^2 + V|\chi_\mu \rangle C_{i\mu} = \varepsilon_i \sum\limits_\mu \langle\chi_\nu | \chi_\mu\rangle C_{i\mu}, \nonumber$
Hoffmann introduced the so-called extended Hückel method. He found that a value of K = 1.75 gave optimal results when using Slater-type orbitals as a basis (and for calculating the $S_{\mu,\nu}$). The diagonal $h_{\mu,\mu}$ elements are given, as in the conventional Hückel method, in terms of valence-state IP's and EA's. Cusachs later proposed a variant of this parameterization of the off-diagonal elements:
$h_{\nu,\mu} = 0.5K(h_{\mu,\mu} + h_{\nu,\nu})S_{\mu,\nu}(2-|S_{\mu,\nu}|). \nonumber$
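A minimal sketch of the extended Hückel construction, using the K-weighted off-diagonal parameterization above for a hypothetical two-orbital problem, shows the differential destabilization that the overlap-free Hückel model misses. The diagonal value of $-13.6$ eV plays the role of a valence-state IP, and the overlap of 0.6 is assumed for illustration, not computed from Slater orbitals:

```python
# Extended Hückel sketch for two identical orbitals:
# off-diagonal h = 0.5 * K * (h_mm + h_nn) * S, solved with overlap retained.
import numpy as np
from scipy.linalg import eigh

K, S12 = 1.75, 0.6                     # Hoffmann's K; assumed overlap value
hdiag = np.array([-13.6, -13.6])       # eV, stand-in valence-state IP's
h = np.diag(hdiag)
h[0, 1] = h[1, 0] = 0.5 * K * (hdiag[0] + hdiag[1]) * S12

S = np.array([[1.0, S12], [S12, 1.0]]) # overlap is NOT set to the identity
eps, _ = eigh(h, S)

E_ave = hdiag.mean()
# The antibonding level rises above E_ave by more than the bonding level drops:
print(eps, E_ave - eps[0], eps[1] - E_ave)
```

With overlap retained, the antibonding orbital is destabilized far more than the bonding orbital is stabilized, exactly the asymmetry described at the start of this section.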
For first- and second-row atoms, the 1s or (2s, 2p) or (3s, 3p, 3d) valence-state ionization energies (the $\alpha_\mu$'s), the number of valence electrons (#Elec.), and the orbital exponents ($e_s$, $e_p$, and $e_d$) of the Slater-type orbitals used to calculate the overlap matrix elements $S_{\mu,\nu}$ are given below.
Table 7.4.1
In the Hückel or extended Hückel methods no explicit reference is made to electron-electron interactions, although such contributions are absorbed into the V potential, and hence into the $\alpha_\mu \text{ and } \beta_{\mu,\nu}$ parameters of Hückel theory or the $h_{\mu,\mu} \text{ and } h_{\mu,\nu}$ parameters of extended Hückel theory. As electron density flows from one atom to another (due to electronegativity differences), the electron-electron repulsions in various atomic orbitals change. To account for such charge-density-dependent coulombic energies, one must use an approach that includes explicit reference to inter-orbital coulomb and exchange interactions. There exists a large family of semi-empirical methods that permit explicit treatment of electronic interactions; some of the more commonly used approaches are discussed in Appendix F.
One of the goals of quantum chemistry is to allow practicing chemists to use knowledge of the electronic states of fragments (atoms, radicals, ions, or molecules) to predict and understand the behavior (i.e., electronic energy levels, geometries, and reactivities) of larger molecules. In the preceding Section, orbital correlation diagrams were introduced to connect the orbitals of the fragments along a 'reaction path' leading to the orbitals of the products. In this Section, analogous connections are made among the fragment and product electronic states, again labeled by appropriate symmetries. To realize such connections, one must first write down N-electron wavefunctions that possess the appropriate symmetry; this task requires combining symmetries of the occupied orbitals to obtain the symmetries of the resulting states.
08: Electronic Configurations
Knowing the orbitals of a particular species provides information about the sizes, shapes, directions, symmetries, and energies of those regions of space that are available to the electrons (i.e., the complete set of orbitals that are available). This knowledge does not determine into which orbitals the electrons are placed. It is by describing the electronic configurations (i.e., orbital occupancies such as $1s^22s^22p^2$ or $1s^22s^22p^13s^1$) appropriate to the energy range under study that one focuses on how the electrons occupy the orbitals. Moreover, a given configuration may give rise to several energy levels whose energies differ by chemically important amounts. For example, the $1s^22s^22p^2$ configuration of the Carbon atom produces nine degenerate $^3P$ states, five degenerate $^1D$ states, and a single $^1S$ state. Successive levels in this sequence differ in energy by 1.5 eV and 1.2 eV, respectively.
8.02: Even N-Electron Configurations are Not Mother Nature's True Energy States
Moreover, even single-configuration descriptions of atomic and molecular structure (e.g., $1s^22s^22p^4$ for the Oxygen atom) do not provide fully correct or highly accurate representations of the respective electronic wavefunctions. As will be shown in this Section and in more detail in Section 6, the picture of N electrons occupying orbitals to form a configuration is based on a so-called "mean field" description of the coulomb interactions among electrons. In such models, an electron at r is viewed as interacting with an "averaged" charge density arising from the N-1 remaining electrons:
$V_{\text{mean field}} = \int \rho_{N-1}({\textbf{r'}})\dfrac{e^2}{|\textbf{r}-\textbf{r'}|} \textbf{dr'}. \nonumber$
Here $\rho_{N-1}(\textbf{r'})$ represents the probability density for finding electrons at $\textbf{r'}$, and $\dfrac{e^2}{|\textbf{r}-\textbf{r'}|}$ is the mutual Coulomb repulsion between electron density at $\textbf{r}$ and $\textbf{r'}$. Analogous mean-field models arise in many areas of chemistry and physics, including electrolyte theory (e.g., the Debye-Hückel theory), statistical mechanics of dense gases (e.g., where the Mayer-Mayer cluster expansion is used to improve the ideal-gas mean-field model), and chemical dynamics (e.g., the vibrationally averaged potential of interaction).
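For a spherically symmetric density, this three-dimensional integral collapses (by Gauss's law) into two one-dimensional shell integrals, which makes it easy to evaluate numerically. The sketch below is our own illustration, not part of the text: it uses the hydrogen-like 1s density $\rho(r) = (Z^3/\pi)e^{-2Zr}$ in atomic units and compares the numerical result to the known closed form $V(r) = 1/r - e^{-2Zr}(1/r + Z)$.

```python
# Illustrative sketch (not from the text): the mean-field Coulomb potential
# V(r) = ∫ ρ(r') / |r - r'| d³r' for a spherically symmetric density.
# Shells inside radius r act like a point charge at the origin; shells
# outside r each contribute 4π r' ρ(r') dr'.
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule (written out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mean_field_potential(r, Z=1.0, rmax=30.0, n=20001):
    rp = np.linspace(1e-8, rmax, n)                # radial grid for r'
    rho = (Z**3 / np.pi) * np.exp(-2.0 * Z * rp)   # hydrogen-like 1s density
    inner = rp <= r
    q_in = trapezoid(4*np.pi*rp[inner]**2 * rho[inner], rp[inner])
    v_out = trapezoid(4*np.pi*rp[~inner] * rho[~inner], rp[~inner])
    return q_in / r + v_out

r = 1.5
exact = 1/r - np.exp(-2*r) * (1/r + 1)   # analytic result for Z = 1
print(mean_field_potential(r), exact)
```

The shell decomposition is the standard trick for spherically averaged densities; for the non-spherical densities of open-shell configurations the angular integrations must be carried along as well.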
In each case, the mean-field model forms only a starting point from which one attempts to build a fully correct theory by effecting systematic corrections (e.g., using perturbation theory) to the mean-field model. The ultimate value of any particular mean-field model is related to its accuracy in describing experimental phenomena. If predictions of the mean-field model are far from the experimental observations, then higher-order corrections (which are usually difficult to implement) must be employed to improve its predictions. In such a case, one is motivated to search for a better model to use as a starting point so that lower-order perturbative (or other) corrections can be used to achieve chemical accuracy (e.g., ± 1 kcal/mole).
In electronic structure theory, the single-configuration picture (e.g., the $1s^22s^22p^4$ description of the oxygen atom) forms the mean-field starting point; the configuration interaction (CI) or perturbation theory techniques are then used to systematically improve this level of description.
The single-configuration mean-field theories of electronic structure neglect correlations among the electrons. That is, in expressing the interaction of an electron at r with the N-1 other electrons, they use a probability density $\rho_{N-1}(\textbf{r'})$ that is independent of the fact that another electron resides at r.
In fact, the so-called conditional probability density for finding one of the N-1 electrons at $\textbf{r'}$, given that an electron is at $\textbf{r}$, certainly depends on $\textbf{r}$. As a result, the mean-field Coulomb potential felt by a $2p_x$ orbital's electron in the $1s^22s^22p_x2p_y$ single-configuration description of the Carbon atom is:
$V_{\text{mean field}} = 2\int |1s(\textbf{r'})|^2 \dfrac{e^2}{|\textbf{r}-\textbf{r'}|} \textbf{dr'} + 2\int |2s(\textbf{r'})|^2 \dfrac{e^2}{|\textbf{r}-\textbf{r'}|} \textbf{dr'} + \int |2p_y(\textbf{r'})|^2 \dfrac{e^2}{|\textbf{r}-\textbf{r'}|} \textbf{dr'}. \nonumber$
In this example, the density $\rho_{N-1}(\textbf{r'})$ is the sum of the charge densities of the orbitals occupied by the five other electrons, $2 |1s(\textbf{r'})|^2 + 2 |2s(\textbf{r'})|^2 + |2p_y(\textbf{r'})|^2$, and is not dependent on the fact that an electron resides at $\textbf{r}$.
The Mean-Field Model, Which Forms the Basis of Chemists' Pictures of Electronic Structure of Molecules, Is Not Very Accurate
The magnitude and "shape" of such a mean-field potential are shown below for the Beryllium atom. In this figure, the nucleus is at the origin, and one electron is placed at a distance from the nucleus equal to the maximum of the 1s orbital's radial probability density (near 0.13 Å). The radial coordinate of the second electron is plotted as the abscissa; this second electron is arbitrarily constrained to lie on the line connecting the nucleus and the first electron (along this direction, the inter-electronic interactions are largest). On the ordinate, two quantities are plotted: (i) the Self-Consistent Field (SCF) mean-field potential $\int |1s(\textbf{r'})|^2 \frac{e^2}{|\textbf{r}-\textbf{r'}|} \textbf{dr'}$, and (ii) the so-called Fluctuation potential (F), which is the true coulombic $\frac{e^2}{|\textbf{r}-\textbf{r'}|}$ interaction potential minus the SCF potential.
Figure 8.3.1: The SCF mean-field potential and the fluctuation potential F for the Beryllium atom, plotted as functions of the radial coordinate of the second electron.
As a function of the inter-electron distance, the fluctuation potential decays to zero more rapidly than does the SCF potential. For this reason, approaches in which F is treated as a perturbation and corrections to the mean-field picture are computed perturbatively might be expected to be rapidly convergent (whenever perturbations describing long-range interactions arise, convergence of perturbation theory is expected to be slow or not successful). However, the magnitude of F is quite large and remains so over an appreciable range of inter-electron distances.
The resultant corrections to the SCF picture are therefore quite large when measured in kcal/mole. For example, the differences $\Delta E$ between the true (state-of-the-art quantum chemical calculation) energies of interaction among the four electrons in Be and the SCF mean-field estimates of these interactions are given in the table shown below in eV (recall that 1 eV = 23.06 kcal/mole).
Table 8.3.1: Differences $\Delta E$ between the true inter-electron interaction energies for Be and the SCF mean-field estimates (in eV).
To provide further insight into why the SCF mean-field model in electronic structure theory is of limited accuracy, it can be noted that the average value of the kinetic energy plus the attraction to the Be nucleus plus the SCF interaction potential for one of the 2s orbitals of Be with the three remaining electrons in the $1s^22s^2$ configuration is:
$\langle 2s|\dfrac{-\hbar^2}{2m_e}\nabla^2 - \dfrac{4e^2}{r} + V_{SCF} | 2s \rangle = -15.4 eV; \nonumber$
the analogous quantity for the 2p orbital in the $1s^22s2p$ configuration is:
$\langle 2p| \dfrac{-\hbar^2}{2m_e}\nabla^2 - \dfrac{4e^2}{r} + V_{SCF}|2p\rangle = -12.28 eV; \nonumber$
the corresponding value for the 1s orbital is (negative and) of even larger magnitude. The SCF average coulomb interaction between the two 2s orbitals of $1s^22s^2$ Be is:
$\int |2s(\textbf{r})|^2 |2s(\textbf{r'})|^2 \dfrac{e^2}{|\textbf{r}-\textbf{r'}|} \textbf{dr dr'} = 5.95 eV. \nonumber$
These data clearly show that corrections to the SCF model (see the above table) represent significant fractions of the inter-electron interaction energies (e.g., 1.234 eV compared to 5.95 - 1.234 = 4.72 eV for the two 2s electrons of Be), and that the inter-electron interaction energies, in turn, constitute significant fractions of the total energy of each orbital (e.g., 5.95 - 1.234 = 4.72 eV out of -15.4 eV for a 2s orbital of Be).
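To place these numbers on the chemical-accuracy scale invoked earlier, a quick arithmetic check (ours, not the text's) using the conversion quoted above, 1 eV = 23.06 kcal/mole:

```python
# Illustrative arithmetic: the correlation correction for the two 2s
# electrons of Be, expressed in kcal/mole (values taken from the text).
EV_TO_KCAL = 23.06          # 1 eV = 23.06 kcal/mole

scf_repulsion = 5.95        # SCF 2s-2s Coulomb repulsion, eV
correction = 1.234          # correction from the table, eV

true_repulsion = scf_repulsion - correction
print(round(true_repulsion, 2))           # 4.72 eV
print(round(correction * EV_TO_KCAL, 1))  # 28.5 kcal/mole
```

A correction of roughly 28 kcal/mole dwarfs the ± 1 kcal/mole target of chemical accuracy, which is exactly why the fluctuation potential cannot simply be ignored.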
The task of describing the electronic states of atoms and molecules from first principles and in a chemically accurate manner (± 1 kcal/mole) is clearly quite formidable. The orbital picture and its accompanying SCF potential take care of "most" of the interactions among the N electrons (which interact via long-range coulomb forces and whose dynamics requires the application of quantum physics and permutational symmetry). However, the residual fluctuation potential, although of shorter range than the bare coulomb potential, is large enough to cause significant corrections to the mean-field picture. This, in turn, necessitates the use of more sophisticated and computationally taxing techniques (e.g., high order perturbation theory or large variational expansion spaces) to reach the desired chemical accuracy.
Mean-field models are obviously approximations whose accuracy must be determined so scientists can know to what degree they can be "trusted". For electronic structures of atoms and molecules, they require quite substantial corrections to bring them into line with experimental fact. Electrons in atoms and molecules undergo dynamical motions in which their coulomb repulsions cause them to "avoid" one another at every instant of time, not only in the average-repulsion manner that the mean-field models embody. The inclusion of instantaneous spatial correlations among electrons is necessary to achieve a more accurate description of atomic and molecular electronic structure.
The most commonly employed tool for introducing such spatial correlations into electronic wavefunctions is called configuration interaction (CI); this approach is described briefly later in this Section and in considerable detail in Section 6. Briefly, one employs the (in principle, complete as shown by P. O. Löwdin, Rev. Mod. Phys. 32 , 328 (1960)) set of N-electron configurations that
1. can be formed by placing the N electrons into orbitals of the atom or molecule under study, and that
2. possess the spatial, spin, and angular momentum symmetry of the electronic state of interest.
This set of functions is then used, in a linear variational function, to achieve, via the CI technique, a more accurate and dynamically correct description of the electronic structure of that state. For example, to describe the ground $^1S$ state of the Be atom, the $1s^22s^2$ configuration (which yields the mean-field description) is augmented by including other configurations such as $1s^23s^2$, $1s^22p^2$, $1s^23p^2$, $1s^22s3s$, $3s^22s^2$, $2p^22s^2$, etc., all of which have overall $^1S$ spin and angular momentum symmetry. The excited $^1S$ states are also combinations of all such configurations. Of course, the ground-state wavefunction is dominated by the $|1s^22s^2|$ configuration, and excited states contain dominant contributions from $|1s^22s3s|$, etc. The resultant CI wavefunctions are formed as shown in Section 6 as linear combinations of all such configurations.
To clarify the physical significance of mixing such configurations, it is useful to consider what are found to be the two most important such configurations for the ground $^1S$ state of the Be atom:
$\Psi \cong C_1 |1s^22s^2| - C_2 \left[ |1s^22p_x^2| + |1s^22p_y^2| + |1s^22p_z^2| \right] . \nonumber$
As proven in Chapter 13.III, this two-configuration description of Be's electronic structure is equivalent to a description in which two electrons reside in the 1s orbital (with opposite, $\alpha$ and $\beta$, spins) while the other pair resides in 2s-2p hybrid orbitals (more correctly, polarized orbitals) in a manner that instantaneously correlates their motions:
\begin{align} \Psi \cong \dfrac{1}{6} C_1 |1s^2& \{ \left[ (2s-a2p_x)\alpha (2s + a2p_x)\beta - (2s - a2p_x)\beta (2s + a2p_x)\alpha \right] \ &+ \left[ (2s - a2p_y)\alpha (2s + a2p_y) \beta - (2s - a2p_y) \beta (2s + a2p_y)\alpha \right] \ &+ \left[ (2s - a2p_z)\alpha (2s + a2p_z)\beta - (2s - a2p_z )\beta (2s + a2p_z )\alpha \right] \}|, \end{align}
where $a = \sqrt{3\dfrac{C_2}{C_1}}.$
The so-called polarized orbital pairs $(2s \pm a2p_{x,y,\text{ or }z})$ are formed by mixing into the 2s orbital an amount of the $2p_{x,y,\text{ or }z}$ orbital, with the mixing amplitude determined by the ratio of $C_2$ to $C_1$. As will be detailed in Section 6, this ratio is proportional to the magnitude of the coupling $\langle |1s^22s^2 |H| 1s^22p^2|\rangle$ between the two configurations and inversely proportional to the energy difference $\left[ \langle |1s^22s^2 |H| 1s^22s^2| \rangle - \langle |1s^22p^2 |H| 1s^22p^2 |\rangle \right]$ for these configurations. So, in general, configurations that have similar energies (Hamiltonian expectation values) and couple strongly give rise to strongly mixed polarized orbital pairs. The results of forming such polarized orbital pairs are described pictorially below.
In each of the three equivalent terms in this wavefunction, one of the valence electrons moves in a 2s+a2p orbital polarized in one direction while the other valence electron moves in the 2s-a2p orbital polarized in the opposite direction. For example, the first term $\left[ (2s - a2p_x)\alpha (2s + a2p_x)\beta - (2s-a2p_x)\beta (2s + a2p_x)\alpha \right]$ describes one electron occupying a $2s-a2p_x$ polarized orbital while the other electron occupies the $2s+a2p_x$ orbital. In this picture, the electrons reduce their mutual coulomb repulsion by occupying different regions of space; in the SCF mean-field picture, both electrons reside in the same 2s region of space. In this particular example, the electrons undergo angular correlation to "avoid" one another. The fact that equal amounts of x, y, and z orbital polarization appear in $\Psi$ is what preserves the $^1S$ symmetry of the wavefunction.
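The equivalence between the two-configuration wavefunction and the polarized-pair form (for the spatial factors, ignoring the common 1s core and the spin bookkeeping) can be verified symbolically. The sympy sketch below is our own; the symbol names are ours, and the orbital amplitudes at the two electron coordinates are treated as independent commuting variables:

```python
# Symbolic check that (1/6) C1 Σ_p [(2s - a p)(1)(2s + a p)(2) + (p <-> -p)]
# reproduces C1 2s(1)2s(2) - C2 Σ_p p(1)p(2) when a = sqrt(3 C2/C1).
import sympy as sp

C1, C2 = sp.symbols('C1 C2', positive=True)
s1, s2 = sp.symbols('s1 s2')            # 2s amplitude at r1 and r2
p1 = sp.symbols('px1 py1 pz1')          # 2p_x,y,z at r1
p2 = sp.symbols('px2 py2 pz2')          # 2p_x,y,z at r2

a = sp.sqrt(3 * C2 / C1)                # mixing amplitude from the text
# symmetric spatial part of each polarized pair, summed over x, y, z
pairs = sum((s1 - a*q1)*(s2 + a*q2) + (s1 + a*q1)*(s2 - a*q2)
            for q1, q2 in zip(p1, p2))
lhs = sp.Rational(1, 6) * C1 * pairs
rhs = C1*s1*s2 - C2*sum(q1*q2 for q1, q2 in zip(p1, p2))
print(sp.simplify(lhs - rhs))  # 0
```

The cross terms $\pm a\,2s\,2p$ cancel between the two orderings of each pair, leaving exactly the $2s^2$ and $2p^2$ pieces with the coefficients $C_1$ and $C_2$; this is the algebra behind $a = \sqrt{3C_2/C_1}$.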
The fact that the CI wavefunction
$\Psi \cong C_1 |1s^22s^2| - C_2 \left[ |1s^22p_x^2| + |1s^22p_y^2| + |1s^22p_z^2 | \right] \nonumber$
mixes its two configurations with opposite sign is of significance. As will be seen later in Section 6, solution of the Schrödinger equation using the CI method in which two configurations (e.g., $|1s^22s^2| \text{ and } |1s^22p^2|)$ are employed gives rise to two solutions. One approximates the ground state wave function; the other approximates an excited state. The former is the one that mixes the two configurations with opposite sign.
To understand why the latter is of higher energy, it suffices to analyze a function of the form
$\Psi ' \cong C_1 |1s^22s^2| + C_2 \left[ |1s^22p_x^2| + |1s^22p_y^2| + |1s^22p_z^2| \right] \nonumber$
in a manner analogous to above. In this case, it can be shown that
\begin{align} \Psi ' \cong \dfrac{1}{6} C_1 |1s^2&\{ \left[ (2s - ia2p_x)\alpha (2s + ia2p_x)\beta - (2s - ia2p_x)\beta (2s + ia2p_x)\alpha \right] \ &+\left[ (2s - ia2p_y)\alpha (2s + ia2p_y)\beta - (2s - ia2p_y)\beta (2s + ia2p_y)\alpha \right] \ &+\left[ (2s - ia2p_z)\alpha (2s + ia2p_z)\beta - (2s - ia2p_z)\beta (2s + ia2p_z)\alpha \right] \}|.\end{align}
There is a fundamental difference, however, between the polarized orbital pairs introduced earlier $\phi_{\pm} = (2s \pm a2p_{x,y, \text{ or } z})$ and the corresponding functions $\phi_{\pm}' = ( 2s \pm ia2p_{x, y \text{ or }z} )$ appearing here. The probability densities embodied in the former
$|\phi_{\pm}|^2 = |2s|^2 + a^2 |2p_{x,y \text{ or } z}|^2 \pm 2a(2s\, 2p_{x,y \text{ or } z}) \nonumber$
describe constructive (for the + case) and destructive (for the - case) superposition of the probabilities of the 2s and 2p orbitals. The probability densities of $\phi_{\pm}'$ are
\begin{align} |\phi_{\pm}'|^2 &= \left( 2s \pm ia2p_{x,y \text{ or }z} \right) *\left( 2s \pm ia2p_{x,y \text{ or } z} \right) \ &= |2s|^2 + a^2|2p_{x,y \text{ or } z}|^2 .\end{align}
These densities are identical to one another and do not describe polarized orbital densities. Therefore, the CI wavefunction which mixes the two configurations with like sign, when analyzed in terms of orbital pairs, places the electrons into orbitals $\phi_{\pm}' = \left( 2s \pm ia2p_{x,y, \text{ or }z} \right)$ whose densities do not permit the electrons to avoid one another. Rather, both orbitals have the same spatial density $|2s|^2 + a^2 |2p_{x,y, \text{ or } z}|^2$, which gives rise to higher coulombic interaction energy for this state.
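This cross-term bookkeeping can be confirmed symbolically; the short sympy sketch below (our own, with real symbols standing in for the 2s and 2p amplitudes) shows that the real combinations interfere while the complex combinations do not:

```python
# Densities of the real pair 2s ± a 2p versus the complex pair 2s ± i a 2p.
import sympy as sp

s, p, a = sp.symbols('s p a', real=True)

real_pair = sp.expand((s + a*p)**2)   # contains the interference term 2 a s p
complex_pair = sp.expand((s + sp.I*a*p) * sp.conjugate(s + sp.I*a*p))

print(real_pair)                  # has the cross term 2*a*p*s
print(sp.simplify(complex_pair))  # only s**2 + a**2*p**2: no cross term
```

The absence of the cross term is exactly why the like-sign CI combination fails to produce spatially polarized densities and hence lies higher in energy.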
8.05: Summary
In summary, the dynamical interactions among electrons give rise to instantaneous spatial correlations that must be handled to arrive at an accurate picture of atomic and molecular structure. The simple, single-configuration picture provided by the mean-field model is a useful starting point, but improvements are often needed. In Section 6, methods for treating electron correlation will be discussed in greater detail.
For the remainder of this Section, the primary focus is placed on forming proper N-electron wavefunctions by occupying the orbitals available to the system in a manner that guarantees that the resultant N-electron function is an eigenfunction of those operators that commute with the N-electron Hamiltonian.
For polyatomic molecules, these operators include point-group symmetry operators (which act on all N electrons) and the spin angular momentum ($S^2 \text{ and } S_z$) of all of the electrons taken as a whole (this is true in the absence of spin-orbit coupling which is treated later as a perturbation). For linear molecules, the point group symmetry operations involve rotations $R_z$ of all N electrons about the principal axis, as a result of which the total angular momentum $L_z$ of the N electrons (taken as a whole) about this axis commutes with the Hamiltonian, H. Rotation of all N electrons about the x and y axes does not leave the total coulombic potential energy unchanged, so $L_x \text{ and } L_y$ do not commute with H. Hence for a linear molecule, $L_z , S^2, \text{ and } S_z$ are the operators that commute with H. For atoms, the corresponding operators are $L^2, L_z, S^2, \text{ and } S_z$ (again, in the absence of spin-orbit coupling) where each operator pertains to the total orbital or spin angular momentum of the N electrons.
To construct N-electron functions that are eigenfunctions of the spatial symmetry or orbital angular momentum operators as well as the spin angular momentum operators, one has to "couple" the symmetry or angular momentum properties of the individual spinorbitals used to construct the N-electrons functions. This coupling involves forming direct product symmetries in the case of polyatomic molecules that belong to finite point groups, it involves vector coupling orbital and spin angular momenta in the case of atoms, and it involves vector coupling spin angular momenta and axis coupling orbital angular momenta when treating linear molecules. Much of this Section is devoted to developing the tools needed to carry out these couplings.
Electronic wavefunctions must be constructed to have permutational antisymmetry because the N electrons are indistinguishable Fermions
• 9.1: Electronic Configurations
Atoms, linear molecules, and non-linear molecules have orbitals which can be labeled either according to the symmetry appropriate for that isolated species or for the species in an environment which produces lower symmetry.
• 9.2: Antisymmetric Wavefunctions
A general introduction to the anti-symmetric nature of multi-electron wavefunctions (as fermions) and the consequence this has on construction and application of these wavefunctions.
09: Symmetry of Electronic Wavefunctions
Atoms, linear molecules, and non-linear molecules have orbitals which can be labeled either according to the symmetry appropriate for that isolated species or for the species in an environment which produces lower symmetry. These orbitals should be viewed as regions of space in which electrons can move, with, of course, at most two electrons (of opposite spin) in each orbital. Specification of a particular occupancy of the set of orbitals available to the system gives an electronic configuration. For example, $1s^22s^22p^4$ is an electronic configuration for the Oxygen atom (and for the F$^{+1}$ ion and the N$^{-1}$ ion); $1s^22s^22p^33p^1$ is another configuration for O, F$^{+1}$, or N$^{-1}$. These configurations represent situations in which the electrons occupy low-energy orbitals of the system and, as such, are likely to contribute strongly to the true ground and low-lying excited states and to the low-energy states of molecules formed from these atoms or ions.
Specification of an electronic configuration does not, however, specify a particular electronic state of the system. In the above $1s^22s^22p^4$ example, there are many ways (fifteen, to be precise) in which the 2p orbitals can be occupied by the four electrons. As a result, there are a total of fifteen states which cluster into three energetically distinct levels, lying within this single configuration. The $1s^22s^22p^33p^1$ configuration contains thirty-six states which group into six distinct energy levels (the word level is used to denote one or more states with the same energy). Not all states which arise from a given electronic configuration have the same energy because various states occupy the degenerate (e.g., 2p and 3p in the above examples) orbitals differently. That is, some states have orbital occupancies of the form
$2p^2_12p^1_02p^1_{-1} \nonumber$
while others have
$2p^2_12p^2_02p^0_{-1} \nonumber$
As a result, the states can have quite different coulombic repulsions among the electrons (the state with two doubly occupied orbitals would lie higher in energy than that with two singly occupied orbitals). Later in this Section and in Appendix G, techniques for constructing wavefunctions for each state contained within a particular configuration are given in detail. Mastering these tools is an important aspect of learning the material in this text.
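The count of fifteen states quoted above follows from elementary combinatorics: the four 2p electrons must occupy four of the six available 2p spin-orbitals. A short illustrative check using Python's standard library:

```python
# Microstates of the 2p^4 shell: choose 4 of the 6 spin-orbitals
# (m = 1, 0, -1, each with alpha or beta spin).
from itertools import combinations
from math import comb

spin_orbitals = [(m, s) for m in (1, 0, -1) for s in ('alpha', 'beta')]
states = list(combinations(spin_orbitals, 4))

print(comb(6, 4), len(states))  # both 15
```

Sorting these fifteen microstates by their total $M_L$ and $M_S$ values is precisely how one extracts the $^3P$, $^1D$, and $^1S$ levels of this configuration, as developed later in this Section.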
In summary, an atom or molecule has many orbitals (core, bonding, non-bonding, Rydberg, and antibonding) available to it; occupancy of these orbitals in a particular manner gives rise to a configuration. If some orbitals are partially occupied in this configuration, more than one state will arise; these states can differ in energy due to differences in how the orbitals are occupied. In particular, if degenerate orbitals are partially occupied, many states can arise and have energies which differ substantially because of differences in electron repulsions arising in these states. Systematic procedures for extracting all states from a given configuration, for labeling the states according to the appropriate symmetry group, for writing the wavefunctions corresponding to each state and for evaluating the energies corresponding to these wavefunctions are needed. Much of Chapters 10 and 11 are devoted to developing and illustrating these tools.
General Concepts
The total electronic Hamiltonian
$\hat{H} = \sum_i \left( -\dfrac{\hbar^2}{2m_e} \nabla_i^2 - \sum_a \dfrac{Z_a e^2}{r_{ia}} \right) + \sum_{i>j} \dfrac{e^2}{r_{ij}} + \sum_{a>b} \dfrac{Z_a Z_b e^2}{r_{ab}} \nonumber$
where $i$ and $j$ label electrons and a and b label the nuclei (whose charges are denoted $Z_a$), commutes with the operators $P_{ij}$ which permute the names of the electrons $i$ and $j$. This, in turn, requires eigenfunctions of $\hat{H}$ to be eigenfunctions of $\hat{P}_{ij}$. In fact, the set of such permutation operators form a group called the symmetric group. In the present text, we will not exploit the full group theoretical nature of these operators; we will focus on the simple fact that all wavefunctions must be eigenfunctions of the $\hat{P}_{ij}$ operator.
Because $\hat{P}_{ij}$ obeys
$\hat{P}_{ij} \hat{P}_{ij} = 1 \nonumber$
the eigenvalues of the $\hat{P}_{ij}$ operators must be +1 or - 1. Electrons are Fermions (i.e., they have half-integral spin) and they must have wavefunctions which are odd under permutation of any pair:
$\hat{P}_{ij} Ψ = - Ψ \nonumber$
Bosons such as photons or deuterium nuclei (i.e., species with integral spin quantum numbers) have wavefunctions which are even under permutation and obey
$\hat{P}_{ij} Ψ = + Ψ \nonumber$
These permutational symmetries are not only characteristics of the exact eigenfunctions of $H$ belonging to any atom or molecule containing more than a single electron, but they are also conditions which must be placed on any acceptable model or trial wavefunction (e.g., in a variational sense) which one constructs.
In particular, within the orbital model of electronic structure (discussed in Section 6), one can not construct trial wavefunctions which are simple spin-orbital products (i.e., an orbital multiplied by an α or β spin function for each electron) such as
$1sα\, 1sβ\, 2sα\, 2sβ\, 2p_1α\, 2p_0α. \nonumber$
Such spin-orbital product functions must be made permutationally antisymmetric if the N-electron trial function is to be properly antisymmetric. This can be accomplished for any such product wavefunction by applying the following antisymmetrizer operator:
$A = \dfrac{1}{\sqrt{N!}} \sum_P s_P\, P \nonumber$
where $N$ is the number of electrons, $P$ runs over all N! permutations, and $s_P$ is +1 or -1 depending on whether the permutation $P$ contains an even or odd number of pairwise permutations (e.g., 231 can be reached from 123 by two pairwise permutations:
$123 \rightarrow 213 \rightarrow 231 \nonumber$
so 231 would have $s_P = 1$). The permutation operator $P$ in $A$ acts on a product wavefunction and permutes the ordering of the spin-orbitals.
For example,
$A φ_1 φ_2 φ_3 = \dfrac{1}{\sqrt{6}} \left[ φ_1 φ_2 φ_3 - φ_1 φ_3 φ_2 - φ_3 φ_2 φ_1 - φ_2 φ_1 φ_3 + φ_3 φ_1 φ_2 + φ_2 φ_3 φ_1 \right] \nonumber$
where the convention is that electronic coordinates $r_1, r_2$, and $r_3$ correspond to the orbitals as they appear in the product (e.g., the term $φ_3 φ_2 φ_1$ represents $φ_3(r_1) φ_2(r_2) φ_1(r_3)$).
It turns out that the permutations $\hat{P}$ can be allowed either to act on the "names" or labels of the electrons, keeping the order of the spin-orbitals fixed, or to act on the spin-orbitals, keeping the order and identity of the electrons' labels fixed. The resultant wavefunction, which contains N! terms, is exactly the same regardless of how one allows the permutations to act. Because we wish to use the above convention in which the order of the electronic labels remains fixed as 1, 2, 3, ... N, we choose to think of the permutations acting on the names of the spin-orbitals. It should be noted that the effect of A on any spin-orbital product is to produce a function that is a sum of N! terms. In each of these terms the same spin-orbitals appear, but the order in which they appear differs from term to term.
Antisymmetrization does not alter the overall orbital occupancy; it simply "scrambles" any knowledge of which electron is in which spin-orbital.
The antisymmetrized orbital product $A φ_1 φ_2 φ_3$ is represented by the shorthand $| φ_1 φ_2 φ_3 |$ and is referred to as a Slater determinant. The origin of this notation can be made clear by noting that $\frac{1}{\sqrt{N!}}$ times the determinant of a matrix whose rows are labeled by the index i of the spin-orbital $φ_i$ and whose columns are labeled by the index j of the electron at $r_j$ is equal to the above function:
$A φ_1 φ_2 φ_3 = \dfrac{1}{\sqrt{3!}} \det( φ_i(r_j)). \nonumber$
The general structure of such Slater determinants is illustrated below:
$\dfrac{1}{\sqrt{N!}} \det\{ φ_j(r_i) \} = \dfrac{1}{\sqrt{N!}} \begin{vmatrix} φ_1(1) & φ_2(1) & φ_3(1) & \cdots & φ_k(1) & \cdots & φ_N(1) \\ φ_1(2) & φ_2(2) & φ_3(2) & \cdots & φ_k(2) & \cdots & φ_N(2) \\ \vdots & & & & & & \vdots \\ φ_1(N) & φ_2(N) & φ_3(N) & \cdots & φ_k(N) & \cdots & φ_N(N) \end{vmatrix} \nonumber$
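As a numerical sketch of these ideas (our own construction, not from the text), one can represent three spin-orbitals as orthonormal vectors, apply the antisymmetrizer term by term, and confirm both the antisymmetry of the result and its equivalence to the determinant formula:

```python
# Build Ψ = (1/√3!) Σ_P s_P φ_P(1) ⊗ φ_P(2) ⊗ φ_P(3) and compare it to the
# Slater-determinant form of the same three-electron wavefunction.
import numpy as np
from itertools import permutations

def parity(perm):
    """Sign s_P of a permutation, counted by pairwise transpositions."""
    perm, sign = list(perm), 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

assert parity((1, 2, 0)) == 1        # the "231" example: two swaps, s_P = +1

phi = np.eye(4)[:3]                  # three orthonormal "spin-orbitals"

# antisymmetrized product as a rank-3 tensor Ψ[i, j, k]
psi = sum(parity(p) * np.einsum('i,j,k->ijk', *phi[list(p)])
          for p in permutations(range(3))) / np.sqrt(6)

# antisymmetry under swapping electrons 1 and 2
assert np.allclose(psi, -psi.transpose(1, 0, 2))

# determinant form: Ψ[i,j,k] = (1/√3!) det of φ_a evaluated at points i, j, k
det_form = np.empty_like(psi)
for i in range(4):
    for j in range(4):
        for k in range(4):
            M = np.array([[phi[a][x] for x in (i, j, k)] for a in range(3)])
            det_form[i, j, k] = np.linalg.det(M) / np.sqrt(6)
assert np.allclose(psi, det_form)
print("antisymmetry and determinant form verified")
```

The discrete index here plays the role of the electron coordinate; the same bookkeeping applies when the $φ_i$ are continuous functions.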
The antisymmetry of many-electron spin-orbital products places constraints on any acceptable model wavefunction, which give rise to important physical consequences. For example, it is antisymmetry that makes a function of the form $| 1sα 1sα |$ vanish (thereby enforcing the Pauli exclusion principle) while $| 1sα 2sα |$ does not vanish, except at points $r_1$ and $r_2$ where $1s(r_1) = 2s(r_2)$, and hence is acceptable. The Pauli principle is embodied in the fact that if any two or more columns (or rows) of a determinant are identical, the determinant vanishes. Antisymmetry also enforces indistinguishability of the electrons in that

$|1sα\, 1sβ\, 2sα\, 2sβ| = -|1sα\, 1sβ\, 2sβ\, 2sα|. \nonumber$

That is, two wavefunctions which differ simply by the ordering of their spin-orbitals are equal to within a sign (+/- 1); such an overall sign difference in a wavefunction has no physical consequence because all physical properties depend on the product $Ψ^*Ψ$, which appears in any expectation value expression.
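Both determinant facts invoked here can be checked directly on a generic matrix; the following is a small illustrative sketch of ours:

```python
# Identical columns make a determinant vanish (Pauli exclusion), and
# swapping two columns flips its sign (spin-orbital reordering).
import numpy as np

rng = np.random.default_rng(0)
cols = rng.standard_normal((3, 3))

repeated = np.column_stack([cols[:, 0], cols[:, 0], cols[:, 2]])
print(np.linalg.det(repeated))                      # ~0

swapped = cols[:, [1, 0, 2]]
print(np.linalg.det(cols), np.linalg.det(swapped))  # equal magnitude, opposite sign
```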
Physical Consequences of Antisymmetry
Once the rules for evaluating energies of determinantal wavefunctions and for forming functions which have proper spin and spatial symmetries have been put forth (in Chapter 11), it will be clear that antisymmetry and electron spin considerations, in addition to orbital occupancies, play substantial roles in determining energies and that it is precisely these aspects that are responsible for energy splittings among states arising from one configuration. A single example may help illustrate this point. Consider the $π^1 π^{*1}$ configuration of ethylene (ignore the other orbitals and focus on the properties of these two). As will be shown below when spin angular momentum is treated in full, the triplet spin states (e.g., $S=1$) of this configuration are:
$|S=1, M_S =1 \rangle = | παπ^*α | \nonumber$
$|S=1, M_S =-1 \rangle = | πβπ^* β |, \nonumber$
and
$|S=1, M_S = 0 \rangle = \dfrac{1}{\sqrt{2}} [ | παπ^* β | + | πβπ^* α |]. \nonumber$
The singlet spin state is:
$|S=0, M_S = 0 \rangle = \dfrac{1}{\sqrt{2}} [ | παπ^* β | - | πβπ^* α |]. \nonumber$
To understand how the three triplet states have the same energy and why the singlet state has a different energy, and an energy different than the $M_S = 0$ triplet even though these two states are composed of the same two determinants, we proceed as follows:
• Step 1. We express the bonding $π$ and antibonding $π^*$ orbitals in terms of the atomic p-orbitals from which they are formed: $π = \dfrac{1}{\sqrt{2}} [ L + R ] \nonumber$ and $π^* = \dfrac{1}{\sqrt{2}} [ L - R ], \nonumber$ where L and R denote the p-orbitals on the left and right carbon atoms, respectively.
• Step 2. We substitute these expressions into the Slater determinants that form the singlet and triplet states and collect terms and throw out terms for which the determinants vanish.
• Step 3. This then gives the singlet and triplet states in terms of atomic-orbital occupancies where it is easier to see the energy equivalences and differences. Let us begin with the triplet states:
\begin{align} |\pi\alpha\pi^*\alpha| &= \dfrac{1}{2} \left[ |L\alpha L\alpha | - |R\alpha R\alpha | + |R\alpha L\alpha | - |L\alpha R\alpha | \right] \ &= |R\alpha L\alpha|; \ \dfrac{1}{\sqrt{2}}\left[ |\pi\alpha\pi^*\beta | + |\pi\beta\pi^*\alpha | \right] &= \dfrac{1}{\sqrt{2}}\dfrac{1}{2} \left[ |L\alpha L\beta | - |R\alpha R\beta | + |R\alpha L\beta | - |L\alpha R\beta | + |L\beta L\alpha | - |R\beta R\alpha | + |R\beta L\alpha | - |L\beta R\alpha | \right] \ &= \dfrac{1}{\sqrt{2}} \left[ |R\alpha L\beta | + |R\beta L\alpha | \right] ; \ |\pi\beta\pi^*\beta | &= \dfrac{1}{2} \left[ |L\beta L\beta | - |R\beta R\beta | + |R\beta L\beta | - |L\beta R\beta | \right] \ &= |R\beta L\beta |. \end{align}
The singlet state can be reduced in like fashion:
\begin{align} \dfrac{1}{\sqrt{2}} \left[ |\pi\alpha\pi^*\beta | - |\pi\beta\pi^*\alpha | \right] &= \dfrac{1}{\sqrt{2}}\dfrac{1}{2} \left[ |L\alpha L\beta | - |R\alpha R\beta | + |R\alpha L\beta| - |L\alpha R\beta | - |L\beta L\alpha | + |R\beta R\alpha | - |R\beta L\alpha | + |L\beta R\alpha | \right] \ &= \dfrac{1}{\sqrt{2}} \left[ |L\alpha L\beta | - |R\alpha R\beta | \right] . \end{align}
Notice that all three triplet states involve atomic orbital occupancy in which one electron is on one atom while the other is on the second carbon atom. In contrast, the singlet state places both electrons on one carbon (it contains two terms; one with the two electrons on the left carbon and the other with both electrons on the right carbon). In a "valence bond" analysis of the physical content of the singlet and triplet $π^1 π^{*1}$ states, it is clear that the energy of the triplet states will lie below that of the singlet because the singlet contains "zwitterion" components that can be denoted $C^+ C^- \text{ and } C^- C^+$, while the three triplet states are purely "covalent". This case provides an excellent example of how the spin and permutational symmetries of a state "conspire" to qualitatively affect its energy and even electronic character as represented in its atomic orbital occupancies. Understanding this should provide ample motivation for learning how to form proper antisymmetric spin (and orbital) angular momentum eigenfunctions for atoms and molecules.
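These reductions can be verified numerically by representing each spin-orbital as a vector in the four-dimensional (L, R) ⊗ (α, β) space and each two-electron Slater determinant $|uv|$ as the antisymmetric tensor $(u \otimes v - v \otimes u)/\sqrt{2}$. The construction below is our own sketch, not part of the text:

```python
# Check the singlet (ionic) and Ms=0 triplet (covalent) reductions of the
# ethylene pi^1 pi*^1 configuration in a discrete spin-orbital basis.
import numpy as np

L, R = np.eye(2)                 # left/right carbon p orbitals
alpha, beta = np.eye(2)          # spin functions

def so(space, spin):
    return np.kron(space, spin)  # spin-orbital as a vector in R^4

def det2(u, v):
    # two-electron Slater determinant |uv| as an antisymmetric tensor
    return (np.outer(u, v) - np.outer(v, u)) / np.sqrt(2)

pi = (L + R) / np.sqrt(2)        # bonding pi
pis = (L - R) / np.sqrt(2)       # antibonding pi*

# singlet: (|pi a pi* b| - |pi b pi* a|)/sqrt(2) -> (|La Lb| - |Ra Rb|)/sqrt(2)
singlet = (det2(so(pi, alpha), so(pis, beta))
           - det2(so(pi, beta), so(pis, alpha))) / np.sqrt(2)
ionic = (det2(so(L, alpha), so(L, beta))
         - det2(so(R, alpha), so(R, beta))) / np.sqrt(2)
assert np.allclose(singlet, ionic)

# Ms=0 triplet: (|pi a pi* b| + |pi b pi* a|)/sqrt(2) -> (|Ra Lb| + |Rb La|)/sqrt(2)
triplet0 = (det2(so(pi, alpha), so(pis, beta))
            + det2(so(pi, beta), so(pis, alpha))) / np.sqrt(2)
covalent = (det2(so(R, alpha), so(L, beta))
            + det2(so(R, beta), so(L, alpha))) / np.sqrt(2)
assert np.allclose(triplet0, covalent)
print("singlet is ionic, Ms=0 triplet is covalent")
```

Because `det2` is bilinear and antisymmetric in its arguments, this finite-dimensional check exercises exactly the same algebra as the expansion of the determinants over L and R above.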
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/09%3A_Symmetry_of_Electronic_Wavefunctions/9.02%3A_Antisymmetric_Wavefunctions.txt
|
Electronic wavefunctions must also possess proper symmetry. These include angular momentum and point group symmetries
• 10.1: Angular Momentum Symmetry and Strategies for Angular Momentum Coupling
Any acceptable model or trial wavefunction for a multi-electron system should be constrained to also be an eigenfunction of the symmetry operators that commute with H. If the atom or molecule has additional symmetries (e.g., full rotation symmetry for atoms, axial rotation symmetry for linear molecules and point group symmetry for nonlinear polyatomics), the trial wavefunctions should also conform to these spatial symmetries.
• 10.2: Electron Spin Angular Momentum
Individual electrons possess intrinsic spin characterized by angular momentum quantum numbers $s$ and $m_s$. The proper spin eigenfunctions must be constructed from antisymmetric (i.e., determinental) wavefunctions.
• 10.3: Coupling of Angular Momenta
The simple "vector coupling" method applies to any angular momenta. If the angular momenta are "equivalent" in the sense that they involve indistinguishable particles that occupy the same orbital shell, the Pauli principle eliminates some of the expected term symbols. For linear molecules, the orbital angular momenta of electrons are not vector coupled, but the electrons' spin angular momenta are vector coupled. For non-linear polyatomic molecules, on spin angular momenta is vector coupled.
• 10.4: Atomic Term Symbols and Wavefunctions
When coupling non-equivalent angular momenta (e.g., a spin and an orbital angular momenta or two orbital angular momenta of non-equivalent electrons), one vector couples using the fact that the coupled angular momenta range from the sum of the two individual angular momenta to the absolute value of their difference.
• 10.5: Atomic Configuration Wavefunctions
To express, in terms of Slater determinants, the wavefunctions corresponding to each of the states in each of the levels, one proceeds as follows in this section.
• 10.6: Inversion Symmetry
One more quantum number, that relating to the inversion (i) symmetry operator can be used in atomic cases because the total potential energy V is unchanged when all of the electrons have their position vectors subjected to inversion (i.e., $i\textbf{r} = -\textbf{r}$). This quantum number is straightforward to determine.
• 10.7: Review of Atomic Cases
10: Angular Momentum and Group Symmetries of Electronic Wavefunctions
Because the total Hamiltonian of a many-electron atom or molecule forms a mutually commutative set of operators with $S^2$, $S_z$, and the antisymmetrizer $A = \sqrt{\dfrac{1}{N!}}\sum\limits_p s_p P$, the exact eigenfunctions of $H$ must be eigenfunctions of these operators. Being an eigenfunction of $A$ forces the eigenstates to be odd under all $P_{ij}$. Any acceptable model or trial wavefunction should be constrained to also be an eigenfunction of these symmetry operators.
If the atom or molecule has additional symmetries (e.g., full rotation symmetry for atoms, axial rotation symmetry for linear molecules and point group symmetry for nonlinear polyatomics), the trial wavefunctions should also conform to these spatial symmetries. This Chapter addresses those operators that commute with $H$, $P_{ij}$, $S^2$, and $S_z$ and among one another for atoms, linear, and non-linear molecules. As treated in detail in Appendix G, the full non-relativistic N-electron Hamiltonian of an atom or molecule
$H = \sum\limits_j \left( \dfrac{-\hbar^2}{2m} \nabla_j^2 - \sum\limits_a \dfrac{Z_ae^2}{r_{j,a}} \right) + \sum\limits_{j<k}\dfrac{e^2}{r_{j,k}} \nonumber$
commutes with the following operators:
1. The inversion operator $i$ and the three components of the total orbital angular momentum $L_z = \sum\limits_jL_z(j), L_y, L_x,$ as well as the components of the total spin angular momentum $S_z, S_x, \text{ and } S_y$ for atoms (but not the individual electrons' $L_z(j), S_z(j)$, etc.). Hence, $L^2, L_z, S^2, S_z$ are the operators we need to form eigenfunctions of, and $L, M_L, S, \text{ and } M_S$ are the "good" quantum numbers.
2. $L_z = \sum\limits_j L_z(j),$ as well as the N-electron $S_x, S_y, \text{ and } S_z$ for linear molecules (also i, if the molecule has a center of symmetry). Hence, $L_z, S^2, \text{ and } S_z$ are the operators we need to form eigenfunctions of, and $M_L, S, \text{ and } M_S$ are the "good" quantum numbers; L no longer is!
3. $S_x, S_y, \text{ and } S_z$ as well as all point group operations for non-linear polyatomic molecules. Hence $S^2, S_z,$ and the point group operations are used to characterize the functions we need to form. When we include spin-orbit coupling into H (this adds another term to the potential that involves the spin and orbital angular momenta of the electrons), $L^2, L_z, S^2, S_z$ no longer commute with H. However, $J_z = S_z + L_z \text{ and } J^2 = (\textbf{L+S})^2$ now do commute with H.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/10%3A_Angular_Momentum_and_Group_Symmetries_of_Electronic_Wavefunctions/10.01%3A_Angular_Momentum_Symmetry_and_Strategie.txt
|
Individual electrons possess intrinsic spin characterized by angular momentum quantum numbers $s$ and $m_s$; for electrons, $s = \frac{1}{2}$ and $m_s = \frac{1}{2} \text{ or } -\frac{1}{2}$. The $m_s =\frac{1}{2}$ spin state of the electron is represented by the symbol $\alpha$ and the $m_s = -\frac{1}{2}$ state is represented by $\beta$. These spin functions obey:
$S^2 \alpha = \dfrac{1}{2} \left(\dfrac{1}{2} + 1\right)\hbar^2 \alpha, \nonumber$
$S_z \alpha = \dfrac{1}{2}\hbar \alpha \nonumber$
$S^2 \beta = \dfrac{1}{2} \left(\dfrac{1}{2} + 1\right) \hbar^2 \beta \nonumber$
and
$S_z \beta = -\dfrac{1}{2}\hbar \beta. \nonumber$
The $\alpha$ and $\beta$ spin functions are connected via lowering $S_-$ and raising $S_+$ operators, which are defined in terms of the x and y components of $\textbf{S}$ as follows:
$S_+ = S_x + iS_y \nonumber$
and
$S_- = S_x -iS_y. \nonumber$
In particular $S_+\beta = \hbar\alpha, S_{+}\alpha =0, S_-\alpha = \hbar\beta, \text{ and } S_-\beta =0$. These expressions are examples of the more general relations (these relations are developed in detail in Appendix G) which all angular momentum operators and their eigenstates obey:
$J^2 |j,m \rangle = j(j+1)\hbar^2 |j,m\rangle \nonumber$
$J_z |j,m \rangle = m\hbar |j,m \rangle \nonumber$
$J_+ |j,m \rangle = \hbar \sqrt{j(j+1)-m(m+1)} |j,m+1 \rangle \nonumber$
and
$J_- |j,m \rangle = \hbar \sqrt{j(j+1)-m(m-1)} |j,m-1 \rangle \nonumber$
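The general ladder relations above can be checked numerically. The sketch below (function names are my own, working with $\hbar = 1$) returns the coefficient and new $m$ value produced by $J_\pm$ acting on $|j,m\rangle$:

```python
import math

# Sketch (not from the text): the general |j,m> ladder relations with hbar = 1.
# Each function returns (coefficient, new_m); a zero coefficient means the
# operator annihilates the state.
def j_plus(j, m):
    return (math.sqrt(j * (j + 1) - m * (m + 1)), m + 1)

def j_minus(j, m):
    return (math.sqrt(j * (j + 1) - m * (m - 1)), m - 1)

# A single electron spin (j = 1/2) reproduces S+ beta = alpha and S+ alpha = 0:
print(j_plus(0.5, -0.5))   # (1.0, 0.5)
print(j_plus(0.5, 0.5))    # (0.0, 1.5)
```

The same two functions reproduce $S_-\alpha = \hbar\beta$ and $S_-\beta = 0$ for $j = \frac{1}{2}$.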
In a many-electron system, one must combine the spin functions of the individual electrons to generate eigenfunctions of the total $S_z =\sum\limits_i S_z(i) \text{ (expressions for } S_x = \sum\limits_i S_x(i) \text{ and } S_y =\sum\limits_i S_y(i)$ also follow from the fact that the total angular momentum of a collection of particles is the sum of the angular momenta, component-by-component, of the individual angular momenta) and total $S^2$ operators because only these operators commute with the full Hamiltonian, H, and with the permutation operators $P_{ij}$. No longer are the individual $S^2(i) \text{ and } S_z(i)$ good quantum numbers; these operators do not commute with $P_{ij}$.
Spin states which are eigenfunctions of the total $S^2 \text{ and } S_z$ can be formed by using angular momentum coupling methods or the explicit construction methods detailed in Appendix (G). In the latter approach, one forms, consistent with the given electronic configuration, the spin state having maximum $S_z$ eigenvalue (which is easy to identify as shown below and which corresponds to a state with S equal to this maximum $S_z$ eigenvalue) and then generates states of lower $S_z$ values and lower S values using the angular momentum raising and lowering operators
$S_- = \sum\limits_i S_-(i) \text{ and } S_+ = \sum\limits_i S_+(i). \nonumber$
To illustrate, consider a three-electron example with the configuration $1s2s3s$. Starting with the determinant $| 1s\alpha 2s\alpha 3s\alpha |$, which has the maximum $M_S = \frac{3}{2}$ and hence has $S = \frac{3}{2}$ (this function is denoted $\vert \frac{3}{2}, \frac{3}{2} \rangle$), apply $S_-$ in the additive form $S_- = \sum\limits_i S_-(i)$ to generate the following combination of three determinants:
$\hbar \left[| 1s\beta 2s\alpha 3s\alpha | + | 1s\alpha 2s\beta 3s\alpha | + | 1s\alpha 2s\alpha 3s\beta | \right], \nonumber$
which, according to the above identities, must equal
$\hbar \sqrt{\dfrac{3}{2}\left( \dfrac{3}{2}+1\right) - \dfrac{3}{2}\left( \dfrac{3}{2}-1 \right)}\, \left\vert \dfrac{3}{2}, \dfrac{1}{2}\right\rangle. \nonumber$
So the state $\vert \frac{3}{2}, \frac{1}{2}\rangle \text{ with } S=\frac{3}{2} \text{ and } M_s = \frac{1}{2}$ can be solved for in terms of the three determinants to give
$\vert \dfrac{3}{2}, \dfrac{1}{2}\rangle = \dfrac{1}{\sqrt{3}}\left[ | 1s\beta 2s\alpha 3s\alpha | + | 1s\alpha 2s\beta 3s\alpha | + | 1s\alpha 2s \alpha 3s \beta | \right]. \nonumber$
The states with $S=\frac{3}{2} \text{ and } M_S = -\frac{1}{2} \text{ and } -\frac{3}{2}$ can be obtained by further application of $S_-$ to $\vert \frac{3}{2}, \frac{1}{2}\rangle$ (actually, the $M_S = -\frac{3}{2}$ state can be identified as the "spin flipped" image of the state with $M_S = \frac{3}{2}$, and the one with $M_S = -\frac{1}{2}$ can be formed by interchanging all $\alpha$'s and $\beta$'s in the $M_S = \frac{1}{2}$ state).
Of the eight total spin states (each electron can take on either $\alpha \text{ or } \beta$ spin and there are three electrons, so the number of states is $2^3$), the above process has identified proper combinations which yield the four states with $S= \frac{3}{2}.$ Doing so consumed the determinants with $M_S = \frac{3}{2} \text{ and } -\frac{3}{2},$ one combination of the three determinants with $M_S = \frac{1}{2},$ and one combination of the three determinants with $M_S = -\frac{1}{2}$. There still remain two combinations of the $M_S = \frac{1}{2}$ and two combinations of the $M_S = -\frac{1}{2}$ determinants to deal with. These functions correspond to two sets of $S = \frac{1}{2}$ eigenfunctions having $M_S = \frac{1}{2} \text{ and } -\frac{1}{2}$. Combinations of the determinants must be used in forming the $S = \frac{1}{2}$ functions to keep the $S = \frac{1}{2}$ eigenfunctions orthogonal to the above $S = \frac{3}{2}$ functions (which is required because $S^2$ is a hermitian operator whose eigenfunctions belonging to different eigenvalues must be orthogonal). The two independent $S = \frac{1}{2}, M_S = \frac{1}{2}$ states can be formed by simply constructing combinations of the above three determinants with $M_S = \frac{1}{2}$ which are orthogonal to the $S = \frac{3}{2}$ combination given above and orthogonal to each other. For example,
$\vert \dfrac{1}{2}, \dfrac{1}{2} \rangle = \dfrac{1}{\sqrt{2}}\left[ \vert 1s\beta 2s\alpha 3s\alpha \vert - \vert 1s\alpha 2s\beta 3s\alpha \vert + 0\times \vert 1s\alpha 2s\alpha 3s\beta \vert \right], \nonumber$
$\vert \dfrac{1}{2}, \dfrac{1}{2} \rangle = \dfrac{1}{\sqrt{6}}\left[ \vert 1s\beta 2s\alpha 3s\alpha \vert + \vert 1s\alpha 2s\beta 3s\alpha \vert - 2\times \vert 1s\alpha 2s\alpha 3s\beta \vert \right], \nonumber$
are acceptable (as is any combination of these two functions generated by a unitary transformation). A pair of independent orthonormal states with $S = \frac{1}{2} \text{ and } M_S = -\frac{1}{2}$ can be generated by applying $S_-$ to each of these two functions (or by constructing a pair of orthonormal functions which are combinations of the three determinants with $M_S = -\frac{1}{2}$ and which are orthogonal to the $S = \frac{3}{2}, M_S = -\frac{1}{2}$ function obtained as detailed above).
The above treatment of a three-electron case shows how to generate quartet (spin states are named in terms of their spin degeneracies 2S+1) and doublet states for a configuration of the form 1s2s3s. Not all three-electron configurations have both quartet and doublet states; for example, the $1s^22s$ configuration only supports one doublet state. The methods used above to generate $S = \frac{3}{2}$ and $S = \frac{1}{2}$ states are valid for any three-electron situation; however, some of the determinental functions vanish if doubly occupied orbitals occur as for $1s^22s.$ In particular, the $M_S = \frac{3}{2}, -\frac{3}{2}$ determinants $| 1s\alpha 1s\alpha 2s\alpha |$ and $| 1s\beta 1s\beta 2s\beta |$ and the $M_S = \frac{1}{2}, -\frac{1}{2}$ determinants $| 1s\alpha 1s\alpha 2s\beta |$ and $| 1s\beta 1s\beta 2s\alpha |$ vanish because they violate the Pauli principle; only $| 1s\alpha 1s\beta 2s\alpha |$ and $| 1s\alpha 1s\beta 2s\beta |$ do not vanish. These two remaining determinants form the $S = \frac{1}{2}$, $M_S = \frac{1}{2}, -\frac{1}{2}$ doublet spin functions which pertain to the $1s^22s$ configuration. It should be noted that all closed-shell components of a configuration (e.g., the $1s^2$ part of $1s^22s$ or the $1s^22s^2 2p^6$ part of $1s^22s^2 2p^63s^13p^1$) must involve $\alpha \text{ and } \beta$ spin functions for each doubly occupied orbital and, as such, can contribute nothing to the total $M_S$ value; only the open-shell components need to be treated with the angular momentum operator tools to arrive at proper total-spin eigenstates.
In summary, proper spin eigenfunctions must be constructed from antisymmetric (i.e., determinental) wavefunctions as demonstrated above because the total $S^2$ and total $S_z$ remain valid symmetry operators for many-electron systems. Doing so results in the spin-adapted wavefunctions being expressed as combinations of determinants with coefficients determined via spin angular momentum techniques as demonstrated above. In configurations with closed-shell components, not all spin functions are possible because of the antisymmetry of the wavefunction; in particular, any closed-shell parts must involve $\alpha \beta$ spin pairings for each of the doubly occupied orbitals and, as such, contribute zero to the total $M_S$.
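The $1s2s3s$ construction above can be sketched numerically. The code below (an illustration of mine, not from the text) encodes a three-electron spin product as a dict from strings like 'aba' ($\alpha,\beta,\alpha$ on electrons 1..3) to coefficients, implements $S_\pm = \sum_i S_\pm(i)$ and $S_z$, and evaluates $S^2$ through the identity $S^2 = S_-S_+ + S_z^2 + \hbar S_z$ (with $\hbar = 1$); applying it to $S_-$ acting on the maximum-$M_S$ function confirms the $S = \frac{3}{2}$ eigenvalue:

```python
# Sketch (my own encoding): three-electron spin functions as dicts mapping
# strings like 'aba' to coefficients, with hbar = 1.
def s_plus(state):
    out = {}
    for s, c in state.items():
        for i, spin in enumerate(s):
            if spin == 'b':                      # S+(i) turns beta into alpha
                t = s[:i] + 'a' + s[i + 1:]
                out[t] = out.get(t, 0.0) + c
    return out

def s_minus(state):
    out = {}
    for s, c in state.items():
        for i, spin in enumerate(s):
            if spin == 'a':                      # S-(i) turns alpha into beta
                t = s[:i] + 'b' + s[i + 1:]
                out[t] = out.get(t, 0.0) + c
    return out

def s_z(state):
    return {s: c * (s.count('a') - s.count('b')) / 2 for s, c in state.items()}

def s_squared(state):                            # S^2 = S-S+ + Sz^2 + Sz
    out = {}
    for part in (s_z(s_z(state)), s_z(state), s_minus(s_plus(state))):
        for s, c in part.items():
            out[s] = out.get(s, 0.0) + c
    return {s: c for s, c in out.items() if abs(c) > 1e-12}

top = {'aaa': 1.0}                               # |3/2, 3/2>
low = s_minus(top)                               # proportional to |3/2, 1/2>
print(s_squared(low))  # {'baa': 3.75, 'aba': 3.75, 'aab': 3.75} = (3/2)(3/2+1)
```

The doublet combination $|\beta\alpha\alpha\rangle - |\alpha\beta\alpha\rangle$, the analog of the first $S=\frac{1}{2}$ function above, returns the eigenvalue $\frac{1}{2}(\frac{1}{2}+1) = 0.75$ under the same $S^2$.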
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/10%3A_Angular_Momentum_and_Group_Symmetries_of_Electronic_Wavefunctions/10.02%3A_Electron_Spin_Angular_Momentum.txt
|
Vector Coupling
Given two angular momenta (of any kind) $\textbf{L}_1 \text{ and } \textbf{L}_2,$ when one generates states that are eigenstates of their vector sum $L= L_1+L_2$, one can obtain $L$ values of
$L_1+L_2, L_1+L_2-1, ...|L_1-L_2|. \nonumber$
This can apply to two electrons for which the total spin S can be 1 or 0 as illustrated in detail above, or to a p and a d orbital for which the total orbital angular momentum L can be 3, 2, or 1. Thus for a $p^1d^1$ electronic configuration, $^3F, ^1F, ^3D, ^1D, ^3P, \text{ and } ^1P$ energy levels (and corresponding wavefunctions) arise. Here the term symbols are specified as the spin degeneracy (2S+1) and the letter that is associated with the L-value. If spin-orbit coupling is present, the $^3F$ level further splits into J = 4, 3, and 2 levels which are denoted $^3F_4, ^3F_3, \text{ and } ^3F_2.$
This simple "vector coupling" method applies to any angular momenta. However, if the angular momenta are "equivalent" in the sense that they involve indistinguishable particles that occupy the same orbital shell (e.g., $2p^3 \text{ involves 3 equivalent electrons; } 2p^13p^14p^1$ involves 3 non-equivalent electrons; $2p^23p^1$ involves 2 equivalent electrons and one non-equivalent electron), the Pauli principle eliminates some of the expected term symbols (i.e., when the corresponding wavefunctions are formed, some vanish because their Slater determinants vanish). Later in this section, techniques for dealing with the equivalent-angular momenta case are introduced. These techniques involve using the above tools to obtain a list of candidate term symbols after which Pauli-violating term symbols are eliminated.
Non-Vector Coupling
For linear molecules, one does not vector couple the orbital angular momenta of the individual electrons (because only $L_z \text{ not } L^2$ commutes with H), but one does vector couple the electrons' spin angular momenta. Coupling of the electrons' orbital angular momenta involves simply considering the various $L_z$ eigenvalues that can arise from adding the $L_z$ values of the individual electrons. For example, coupling two p orbitals (each of which can have m = ±1) can give
$M_L$ = 1+1, 1-1, -1+1, and -1-1, or 2, 0, 0, and -2.
The level with $M_L = \pm 2$ is called a $\Delta$ state (much like an orbital with $m = \pm 2$ is called a $\delta$ orbital), and the two states with $M_L = 0$ are called $\Sigma$ states. States with $L_z$ eigenvalues of $M_L \text{ and } - M_L$ are degenerate because the total energy is independent of which direction the electrons are moving about the linear molecule's axis (just as the $\pi_{+1} \text{ and } \pi_{-1}$ orbitals are degenerate). Again, if the two electrons are non-equivalent, all possible couplings arise (e.g., a $\pi^1 \pi '^1$ configuration yields $^3\Delta, ^3\Sigma^+, ^3\Sigma^-, ^1\Delta, ^1\Sigma^+, \text{ and } ^1\Sigma^-$ states). In contrast, if the two electrons are equivalent, certain of the term symbols are Pauli forbidden. Again, techniques for dealing with such cases are treated later in this Chapter.
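The $M_L$ bookkeeping above amounts to a one-line enumeration; a minimal sketch:

```python
from itertools import product

# Sketch: M_L values from adding the individual lz eigenvalues (m = +1 or -1)
# of two non-equivalent pi electrons.
mls = sorted(m1 + m2 for m1, m2 in product((+1, -1), repeat=2))
print(mls)   # [-2, 0, 0, 2]: one Delta pair (M_L = +-2) and two Sigma states
```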
Direct Products for Non-Linear Molecules
For non-linear polyatomic molecules, one vector couples the electrons' spin angular momenta but their orbital angular momenta are not even considered. Instead, their point group symmetries must be combined, by forming direct products, to determine the symmetries of the resultant spin-orbital product states. For example, the $b_1^1b_2^1$ configuration in $C_{2v}$ symmetry gives rise to $^3A_2 \text{ and } ^1A_2$ term symbols. The $e^1e^{'1}$ configuration in $C_{3v}$ symmetry gives $^3E, ^3A_2, ^3A_1, ^1E, ^1A_2, \text{ and } ^1A_1$ term symbols. For two equivalent electrons such as in the $e^2$ configuration, certain of the $^3E, ^3A_2, ^3A_1, ^1E, ^1A_2, \text{ and } ^1A_1$ term symbols are Pauli forbidden. Once again, the methods needed to identify which term symbols arise in the equivalent-electron case are treated later.
One needs to learn how to tell which term symbols will be Pauli excluded, how to write the spin-orbital product wavefunctions corresponding to each term symbol, and how to evaluate the corresponding term symbols' energies.
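The direct-product reduction for non-linear molecules can be sketched with the standard character-orthogonality formula. The $C_{3v}$ character table below is hard-coded by me (classes $E$, $2C_3$, $3\sigma_v$, all characters real), and the helper name is an assumption:

```python
# Sketch: reduce a direct product of C3v irreps by character orthogonality.
CLASS_SIZES = [1, 2, 3]                        # E, 2C3, 3sigma_v
CHARS = {"A1": [1, 1, 1], "A2": [1, 1, -1], "E": [2, -1, 0]}
GROUP_ORDER = sum(CLASS_SIZES)                 # 6

def direct_product(g1, g2):
    # characters of the product representation
    prod = [a * b for a, b in zip(CHARS[g1], CHARS[g2])]
    # n(irrep) = (1/h) * sum over classes of (class size) * chi_prod * chi_irrep
    return {irrep: sum(n * p * x for n, p, x in zip(CLASS_SIZES, prod, chi)) // GROUP_ORDER
            for irrep, chi in CHARS.items()}

print(direct_product("E", "E"))   # {'A1': 1, 'A2': 1, 'E': 1}
```

Reducing $E \otimes E$ recovers the $A_1 + A_2 + E$ orbital content behind the $^3E, ^3A_2, ^3A_1, ^1E, ^1A_2, ^1A_1$ term symbols quoted above for the $e^1e'^1$ configuration.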
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/10%3A_Angular_Momentum_and_Group_Symmetries_of_Electronic_Wavefunctions/10.03%3A__Coupling_of_Angular_Momenta.txt
|
Non-Equivalent Orbital Term Symbols
When coupling non-equivalent angular momenta (e.g., a spin and an orbital angular momenta or two orbital angular momenta of non-equivalent electrons), one vector couples using the fact that the coupled angular momenta range from the sum of the two individual angular momenta to the absolute value of their difference. For example, when coupling the spins of two electrons, the total spin S can be 1 or 0; when coupling a p and a d orbital, the total orbital angular momentum can be 3, 2, or 1. Thus for a $p^1d^1$ electronic configuration, $^3F, ^1F, ^3D, ^1D, ^3P, \text{ and } ^1P$ energy levels (and corresponding wavefunctions) arise. The energy differences among these levels has to do with the different electron-electron repulsions that occur in these levels; that is, their wavefunctions involve different occupancy of the p and d orbitals and hence different repulsion energies. If spin-orbit coupling is present, the L and S angular momenta are further vector coupled. For example, the $^3F$ level splits into J= 4, 3, and 2 levels which are denoted $^3F_4, ^3F_3, \text{ and } ^3F_2.$ The energy differences among these J-levels are caused by spin-orbit interactions.
Equivalent Orbital Term Symbols
If equivalent angular momenta are coupled (e.g., to couple the orbital angular momenta of a $p^2 \text{ or } d^3$ configuration), one must use the "box" method to determine which of the term symbols, that would be expected to arise if the angular momenta were nonequivalent, violate the Pauli principle. To carry out this step, one forms all possible unique (determinental) product states with non-negative $M_L \text{ and } M_S$ values and arranges them into groups according to their $M_L \text{ and } M_S$ values. For example, the boxes appropriate to the $p^2$ orbital occupancy, listing only the entries with non-negative $M_L \text{ and } M_S$ values, are:

$M_S = 1$: $\quad M_L = 1$: $|p_1\alpha p_0\alpha|$; $\quad M_L = 0$: $|p_1\alpha p_{-1}\alpha|$

$M_S = 0$: $\quad M_L = 2$: $|p_1\alpha p_1\beta|$; $\quad M_L = 1$: $|p_1\alpha p_0\beta|, |p_0\alpha p_1\beta|$; $\quad M_L = 0$: $|p_0\alpha p_0\beta|, |p_1\alpha p_{-1}\beta|, |p_{-1}\alpha p_1\beta|$
There is no need to form the corresponding states with negative $M_L \text{ or negative } M_S$ values because they are simply "mirror images" of those listed above. For example, the state with $M_L= -1 \text{ and } M_S = -1 \text{ is } |p_{-1}\beta p_0\beta|,$ which can be obtained from the $M_L = 1, M_S = 1$ state $|p_1\alpha p_0\alpha|$ by replacing $\alpha \text{ by } \beta$ and replacing $p_1 \text{ by } p_{-1}$.
Given the box entries, one can identify those term symbols that arise by applying the following procedure over and over until all entries have been accounted for:
1. One identifies the highest $M_S$ value (this gives a value of the total spin quantum number that arises, S) in the box. For the above example, the answer is S = 1.
2. For all product states of this $M_S$ value, one identifies the highest $M_L$ value (this gives a value of the total orbital angular momentum, L, that can arise for this S ). For the above example, the highest $M_L \text{ within the } M_S =1 \text{ states is } M_L = 1 \text{ (not } M_L = 2),\text{ hence }L=1.$
3. Knowing an S, L combination, one knows the first term symbol that arises from this configuration. In the $p^2$ example, this is $^3P.$
4. Because the level with these L and S quantum numbers contains (2L+1)(2S+1) states with $M_L \text{ and } M_S$ quantum numbers running from -L to L and from -S to S, respectively, one must remove from the original box this number of product states. To do so, one simply erases from the box one entry with each such $M_L \text{ and } M_S$ value. Actually, since the box need only show those entries with non-negative $M_L \text{ and } M_S$ values, only these entries need be explicitly deleted. In the $^3P$ example, this amounts to deleting nine product states with $M_L, M_S$ values of 1,1; 1,0; 1,-1; 0,1; 0,0; 0,-1; -1,1; -1,0; -1,-1.
5. After deleting these entries, one returns to step 1 and carries out the process again. For the $p^2$ example, deleting the nine $^3P$ product states empties the $M_S = 1$ row and leaves, in the $M_S = 0$ row, the $M_L = 2$ entry $|p_1\alpha p_1\beta|$, one of the two $M_L = 1$ entries, and two of the three $M_L = 0$ entries.
It should be emphasized that the process of deleting or crossing off entries in various $M_L, M_S$ boxes involves only counting how many states there are; by no means do we identify the particular $L,S,M_L,M_S$ wavefunctions when we cross out any particular entry in a box. For example, when the $\lvert p_1\alpha p_0\beta \rvert$ product is deleted from the $M_L= 1, M_S=0$ box in accounting for the states in the $^3P$ level, we do not claim that $|p_1\alpha p_0\beta|$ itself is a member of the $^3P$ level; the $|p_0\alpha p_1\beta|$ product state could just as well been eliminated when accounting for the $^3P$ states. As will be shown later, the $^3P$ state with $M_L= 1, M_S=0$ will be a combination of $|p_1\alpha p_0\beta| \text{ and } |p_0\alpha p_1\beta|.$
Returning to the $p^2$ example at hand, after the $^3P$ term symbol's states have been accounted for, the highest $M_S$ value is 0 (hence there is an S=0 state), and within this $M_S$ value, the highest $M_L$ value is 2 (hence there is an L=2 state). This means there is a $^1D$ level with five states having $M_L$ = 2,1,0,-1,-2. Deleting five appropriate entries from the box leaves just one product state.
The only remaining entry, which thus has the highest $M_S \text{ and } M_L$ values, has $M_S = 0 \text{ and } M_L = 0.$ Thus there is also a $^1S$ level in the $p^2$ configuration.
Thus, unlike the non-equivalent $2p^13p^1$ case, in which $^3P, ^1P, ^3D, ^1D, ^3S, \text{ and } ^1S$ levels arise, only the $^3P, ^1D, \text{ and } ^1S$ levels arise in the $p^2$ situation. This "box method" must be carried out whenever one is dealing with equivalent angular momenta.
If one has mixed equivalent and non-equivalent angular momenta, one can determine all possible couplings of the equivalent angular momenta using this method and then use the simpler vector coupling method to add the non-equivalent angular momenta to each of these coupled angular momenta. For example, the $p^2d^1$ configuration can be handled by vector coupling (using the straightforward non-equivalent procedure) L=2 (the d orbital) and $S= \frac{1}{2}$ (the third electron's spin) to each of $^3P, ^1D, \text{ and } ^1S.$ The result is $^4F, ^4D, ^4P, ^2F, ^2D, ^2P, ^2G, ^2F, ^2D, ^2P, ^2S, \text{ and } ^2D.$
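The box-method counting itself is mechanical enough to sketch in code. The helper below is my own encoding, not the text's notation (spins are stored doubled, as $2m_s$, to keep everything in integers); it builds the $(M_L, M_S)$ boxes for $n$ equivalent electrons in an $l$ shell and strips off term symbols exactly as in steps 1-5 above:

```python
from itertools import combinations

# Sketch of the box-method bookkeeping for n equivalent electrons in an l shell.
L_LETTERS = "SPDFGHIK"

def equivalent_terms(l, n):
    # spin-orbitals are (ml, 2*ms) pairs; Pauli-allowed determinants are
    # n-combinations of DISTINCT spin-orbitals
    spinorbs = [(ml, ms2) for ml in range(-l, l + 1) for ms2 in (+1, -1)]
    boxes = {}
    for det in combinations(spinorbs, n):
        ML = sum(ml for ml, _ in det)
        MS2 = sum(ms2 for _, ms2 in det)
        boxes[(ML, MS2)] = boxes.get((ML, MS2), 0) + 1
    terms = []
    while any(boxes.values()):
        # highest remaining M_S, then highest M_L within it, fixes S and L
        MS2 = max(m for (_, m), c in boxes.items() if c > 0)
        ML = max(m for (m, s), c in boxes.items() if c > 0 and s == MS2)
        terms.append(f"{MS2 + 1}{L_LETTERS[ML]}")
        for mL in range(-ML, ML + 1):            # remove (2L+1)(2S+1) entries
            for mS2 in range(-MS2, MS2 + 1, 2):
                boxes[(mL, mS2)] = boxes.get((mL, mS2), 0) - 1
    return terms

print(equivalent_terms(1, 2))   # p^2: ['3P', '1D', '1S']
```

The same call with $n = 3$ returns the $^4S$, $^2D$, and $^2P$ terms of $p^3$; the non-equivalent vector coupling of the remaining electrons can then be layered on top, as described above for $p^2d^1$.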
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/10%3A_Angular_Momentum_and_Group_Symmetries_of_Electronic_Wavefunctions/10.04%3A_Atomic_Term_Symbols_and_Wavefunctions.txt
|
To express, in terms of Slater determinants, the wavefunctions corresponding to each of the states in each of the levels, one proceeds as follows:
1. For each $M_S, M_L$ combination for which one can write down only one product function (i.e., in the non-equivalent angular momentum situation, for each case where only one product function sits at a given box row and column point), that product function itself is one of the desired states. For the $p^2$ example, the $|p_1\alpha p_0\alpha| \text{ and } |p_1\alpha p_{-1}\alpha|$ (as well as their four other $M_L \text{ and } M_S$ "mirror images") are members of the $^3P$ level (since they have $M_S$ = ±1) and $|p_1\alpha p_1\beta|$ and its $M_L$ mirror image are members of the $^1D$ level (since they have $M_L$ = ±2).
2. After identifying as many such states as possible by inspection, one uses $L_± \text{ and } S_±$ to generate states that belong to the same term symbols as those already identified but which have higher or lower $M_L \text{ and/or } M_S$ values.
3. If, after applying the above process, there are term symbols for which states have not yet been formed, one may have to construct such states by forming linear combinations that are orthogonal to all those states that have thus far been found.
To illustrate the use of raising and lowering operators to find the states that can not be identified by inspection, let us again focus on the $p^2$ case. Beginning with three of the $^3P$ states that are easy to recognize, $|p_1\alpha p_0\alpha|, |p_1\alpha p_{-1}\alpha|, \text{ and } |p_{-1}\alpha p_0\alpha|$, we apply $S_-$ to obtain the $M_S=0$ functions:
$S_- ^3P(M_L=1, M_S=1) = [S_- (1) + S_- (2)] |p_1\alpha p_0\alpha| \nonumber$
$= \hbar\sqrt{1(2)-1(0)} ^3P(M_L = 1, M_S = 0) \nonumber$
$= \hbar\sqrt{\dfrac{1}{2}\left( \dfrac{3}{2}\right) - \dfrac{1}{2}\left( -\dfrac{1}{2} \right)}\, |p_1\beta p_0\alpha| + \hbar \sqrt{1}\, |p_1 \alpha p_0\beta|, \nonumber$
so,
$^3P(M_L = 1, M_S = 0) = \dfrac{1}{\sqrt{2}}[|p_1\beta p_0\alpha|+|p_1\alpha p_0\beta|]. \nonumber$
The same process applied to $|p_1\alpha p_{-1}\alpha| \text{ and } |p_{-1}\alpha p_0\alpha|$ gives
$\dfrac{1}{\sqrt{2}}\left[ |p_1\alpha p_{-1}\beta|+|p_1 \beta p_{-1}\alpha| \right] \nonumber$
and
$\dfrac{1}{\sqrt{2}}\left[ |p_{-1}\beta p_0\alpha| + |p_{-1}\alpha p_0\beta| \right], \nonumber$
respectively.
The $^3P(M_L=1, M_S=0) = \dfrac{1}{\sqrt{2}} [|p_1\beta p_0\alpha| + |p_1\alpha p_0\beta|]$ function can be acted on with $L_-$ to generate $^3P(M_L=0, M_S=0):$
$L_- ^3P(M_L=1, M_S=0) = [L_-(1) + L_-(2)] \dfrac{1}{\sqrt{2}} [|p_1\beta p_0\alpha| + |p_1\alpha p_0\beta|] \nonumber$
$= \hbar\sqrt{1(2)-1(0)} ^3P(M_L=0, M_S=0) \nonumber$
$=\hbar\sqrt{\dfrac{1(2)-1(0)}{2}} [|p_0\beta p_0\alpha| + |p_0\alpha p_0\beta|] + \hbar \sqrt{\dfrac{1(2)-0(-1)}{2}} [|p_1\beta p_{-1}\alpha| + |p_1\alpha p_{-1}\beta|], \nonumber$
so, because the first bracket vanishes ($|p_0\beta p_0\alpha| = -|p_0\alpha p_0\beta|$),
$^3P(M_L=0, M_S=0) = \dfrac{1}{\sqrt{2}} [|p_1\beta p_{-1}\alpha| + |p_1\alpha p_{-1}\beta|]. \nonumber$
The $^1D$ term symbol is handled in like fashion. Beginning with the $M_L = 2 \text{ state } |p_1\alpha p_1\beta|$, one applies $L_-$ to generate the $M_L$ = 1 state:
$L_- ^1D(M_L=2, M_S=0) = [L_-(1) + L_-(2)]|p_1\alpha p_1 \beta| \nonumber$
$=\hbar\sqrt{2(3)-2(1)} ^1D(M_L = 1, M_S = 0) \nonumber$
$=\hbar \sqrt{1(2)-1(0)}[|p_0\alpha p_1\beta|+|p_1\alpha p_0\beta|], \nonumber$
so,
$^1D(M_L = 1, M_S = 0) = \dfrac{1}{\sqrt{2}}[|p_0\alpha p_1\beta|+|p_1\alpha p_0 \beta|] \nonumber$
Applying $L_-$ once more generates the $^1D(M_L=0, M_S=0)$ state:
$L_- ^1D(M_L=1, M_S=0) = \dfrac{[L_-(1) + L_-(2)]}{\sqrt{2}} [|p_0\alpha p_1\beta|+|p_1\alpha p_0\beta|] \nonumber$
$= \hbar\sqrt{2(3)-1(0)}\;^1D(M_L=0, M_S=0) \nonumber$
$= \hbar\sqrt{\dfrac{1(2)-0(-1)}{2}}[|p_{-1}\alpha p_1\beta|+|p_1\alpha p_{-1}\beta|] + \hbar\sqrt{\dfrac{1(2)-1(0)}{2}}[|p_0\alpha p_0\beta|+|p_0\alpha p_0\beta|] \nonumber$
so,
$^1D(M_L=0, M_S=0) = \dfrac{1}{\sqrt{6}}[2|p_0\alpha p_0\beta|+|p_{-1}\alpha p_1\beta|+|p_1\alpha p_{-1}\beta|]. \nonumber$
Notice that the $M_L=0, M_S=0$ states of $^3P$ and of $^1D$ are given in terms of the three determinants that appear in the "center" of the $p^2$ box diagram:
$^1D(M_L=0, M_S=0) = \dfrac{1}{\sqrt{6}}[2|p_0\alpha p_0\beta|+|p_{-1}\alpha p_1\beta|+|p_1\alpha p_{-1}\beta|], \nonumber$
$^3P(M_L=0, M_S=0) = \dfrac{1}{\sqrt{2}}[|p_1\beta p_{-1}\alpha|+|p_1\alpha p_{-1}\beta|] = \dfrac{1}{\sqrt{2}}[-|p_{-1}\alpha p_1\beta|+|p_1 \alpha p_{-1}\beta|]. \nonumber$
The only state that has eluded us thus far is the $^1S$ state, which also has $M_L=0 \text{ and } M_S=0$. To construct this state, which must also be some combination of the three determinants with $M_L=0 \text{ and } M_S=0$, we use the fact that the $^1S$ wavefunction must be orthogonal to the $^3P \text{ and } ^1D \text{ functions because } ^1S, ^3P, \text{ and } ^1D$ are eigenfunctions of the hermitian operator $L^2$ having different eigenvalues. The normalized combination of $|p_0\alpha p_0\beta|, |p_{-1}\alpha p_1\beta|, \text{ and } |p_1\alpha p_{-1}\beta|$ that is orthogonal to both of these functions is given as follows:
$^1S = \dfrac{1}{\sqrt{3}}\left[|p_0\alpha p_0\beta|-|p_{-1}\alpha p_1\beta|-|p_1\alpha p_{-1}\beta|\right]. \nonumber$
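The orthogonality argument can be checked directly. The sketch below expands the three $M_L=0, M_S=0$ functions in the determinant basis $(|p_0\alpha p_0\beta|, |p_{-1}\alpha p_1\beta|, |p_1\alpha p_{-1}\beta|)$, with coefficients taken from the expressions above (the basis ordering is my choice):

```python
import math

# Sketch: verify that the three M_L = 0, M_S = 0 functions are orthonormal
# in the determinant basis (|p0 a p0 b|, |p-1 a p1 b|, |p1 a p-1 b|).
D1 = [2 / math.sqrt(6), 1 / math.sqrt(6), 1 / math.sqrt(6)]     # 1D(0,0)
P3 = [0.0, -1 / math.sqrt(2), 1 / math.sqrt(2)]                 # 3P(0,0)
S1 = [1 / math.sqrt(3), -1 / math.sqrt(3), -1 / math.sqrt(3)]   # 1S

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

gram = [[dot(u, v) for v in (D1, P3, S1)] for u in (D1, P3, S1)]
print([[round(x, 10) for x in row] for row in gram])   # the 3x3 identity
```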
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/10%3A_Angular_Momentum_and_Group_Symmetries_of_Electronic_Wavefunctions/10.05%3A_Atomic_Configuration_Wavefunctions.txt
|
One more quantum number, that relating to the inversion ($i$) symmetry operator, can be used in atomic cases because the total potential energy $V$ is unchanged when all of the electrons have their position vectors subjected to inversion (i.e., $i\textbf{r} = \textbf{-r}$). This quantum number is straightforward to determine. Because each $L, S, M_L, M_S, H$ state discussed previously consists of a few (or, in the case of configuration interaction, several) symmetry adapted combinations of Slater determinant functions, the effect of the inversion operator on such a wavefunction $\Psi$ can be determined by:
1. applying i to each orbital occupied in $\Psi$ thereby generating a $\pm 1$ factor for each orbital (+1 for s, d, g, i, etc. orbitals; -1 for p, f, h, j, etc. orbitals),
2. multiplying these $\pm 1$ factors to produce an overall sign for the character of $\Psi$ under $\hat{i}$.
When this overall sign is positive, the function $\Psi$ is termed "even" and its term symbol is appended with an "e" superscript (e.g., the $^3P$ level of the $O$ atom, which has $1s^22s^22p^4$ occupancy, is labeled $^3P^e$); if the sign is negative, $\Psi$ is called "odd" and the term symbol is so amended (e.g., the $^3P$ level of the $1s^22s^12p^1$ $B^+$ ion is labeled $^3P^o$).
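The parity determination reduces to counting $l$ values; a minimal sketch (the configuration encoding is my own):

```python
# Sketch: the inversion parity of an atomic configuration is (-1) raised to
# the sum of the occupied electrons' l values.
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3, "g": 4}

def parity(config):
    """config is a list of (shell, occupancy) pairs, e.g. [('1s', 2), ('2p', 4)]."""
    total_l = sum(L_OF[shell[-1]] * occ for shell, occ in config)
    return "e" if total_l % 2 == 0 else "o"

print(parity([("1s", 2), ("2s", 2), ("2p", 4)]))   # 'e' (the O-atom example)
print(parity([("1s", 2), ("2s", 1), ("2p", 1)]))   # 'o' (the B+ example)
```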
10.07: Review of Atomic Cases
The orbitals of an atom are labeled by $l$ and $m_l$ quantum numbers; the orbitals belonging to a given energy and $l$ value are $2l+1$-fold degenerate. The many-electron Hamiltonian, H, of an atom and the antisymmetrizer operator $A = \left( \sqrt{\frac{1}{N!}} \right) \sum\limits_p s_pP$ commute with the total $L_z = \sum\limits_i L_z(i)$, as in the linear-molecule case. The additional symmetry present in the spherical atom reflects itself in the fact that $L_x$ and $L_y$ now also commute with $H$ and $A$. However, since $L_z$ does not commute with $L_x$ or $L_y$, new quantum numbers cannot be introduced as symmetry labels for these other components of $L$. A new symmetry label does arise when
$L^2 = L_z^2 + L_x^2 + L_y^2 \nonumber$
is introduced; $L^2$ commutes with H, A, and $L_z$, so proper eigenstates (and trial wavefunctions) can be labeled with $L, M_L, S, M_S$, and H quantum numbers.
To identify the states which arise from a given atomic configuration and to construct properly symmetry-adapted determinental wave functions corresponding to these symmetries, one must employ L and $M_L \text{ and S and } M_S$ angular momentum tools. One first identifies those determinants with maximum $M_S$ (this then defines the maximum S value that occurs); within that set of determinants, one then identifies the determinant(s) with maximum $M_L$ (this identifies the highest L value). This determinant has S and L equal to its $M_S \text{ and } M_L$ values (this can be verified, for example for L, by acting on this determinant with $L^2$ in the form
$L^2 = L_-L_+ + L_z^2 + \hbar L_z \nonumber$
and realizing that $L_+$ acting on the state must vanish); other members of this L,S energy level can be constructed by sequential application of $S_- \text{ and } L_- = \sum\limits_i L_-(i)$. Having exhausted a set of (2L+1)(2S+1) combinations of the determinants belonging to the given configuration, one proceeds to apply the same procedure to the remaining determinants (or combinations thereof). One identifies the maximum $M_S \text{ and, within it, the maximum } M_L$ which thereby specifies another S, L label and a new "maximum" state. The determinental functions corresponding to these L,S (and various $M_L, M_S$ ) values can be constructed by applying $S_- \text{ and } L_-$ to this "maximum" state. This process is continued until all of the states and their determinental wave functions are obtained.
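The peel-off procedure described above is easy to automate. The sketch below (function names are illustrative) enumerates all determinants of an $l^n$ configuration, tallies their $(M_L, M_S)$ values, and repeatedly strips off the term with maximum $M_S$ and then maximum $M_L$; spins are stored as $2m_s$ to keep the arithmetic in integers:

```python
from collections import Counter
from itertools import combinations

def terms(l, n):
    """(L, 2S) values arising from n equivalent electrons in a shell of
    angular momentum l, found by the max-MS / max-ML peel-off procedure."""
    spin_orbitals = [(m, s) for m in range(-l, l + 1) for s in (1, -1)]  # s = 2*ms
    counts = Counter()
    for det in combinations(spin_orbitals, n):   # all Pauli-allowed determinants
        ML = sum(m for m, s in det)
        MS2 = sum(s for m, s in det)             # twice MS
        counts[(ML, MS2)] += 1
    found = []
    while +counts:                               # while positive counts remain
        S2 = max(ms2 for (ml, ms2), c in counts.items() if c > 0)
        L = max(ml for (ml, ms2), c in counts.items() if c > 0 and ms2 == S2)
        found.append((L, S2))
        for ML in range(-L, L + 1):              # remove the (2L+1)(2S+1) block
            for MS2 in range(-S2, S2 + 1, 2):
                counts[(ML, MS2)] -= 1
    return found

print(terms(1, 2))  # p^2 -> [(1, 2), (2, 0), (0, 0)]
```

For $p^2$ this yields $(L, 2S) = (1,2), (2,0), (0,0)$, i.e. the $^3P$, $^1D$, and $^1S$ terms.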
As illustrated above, any $p^2$ configuration gives rise to $^3P^e, ^1D^e, \text{ and } ^1S^e$ levels which contain nine, five, and one state, respectively. The use of L and S angular momentum algebra tools allows one to identify the wavefunctions corresponding to these states. As shown in detail in Appendix G, in the event that spin-orbit coupling causes the Hamiltonian, H, not to commute with L or with S but only with their vector sum J = L + S, then these $L^2, S^2, L_z, S_z$ eigenfunctions must be coupled (i.e., recombined) to generate $J^2, J_z$ eigenstates. The steps needed to effect this coupling are developed and illustrated for the above $p^2$ configuration case in Appendix G.
In the case of a pair of non-equivalent p orbitals (e.g., in a $2p^13p^1$ configuration), even more states would arise. They can also be found using the tools provided above. Their symmetry labels can be obtained by vector coupling (see Appendix G) the spin and orbital angular momenta of the two subsystems. The orbital angular momentum coupling with l = 1 and l = 1 gives L = 2, 1, and 0 or D, P, and S states. The spin angular momentum coupling with s =1/2 and s = 1/2 gives S = 1 and 0, or triplet and singlet states. So, vector coupling leads to the prediction that $^3D^e, ^1D^e, ^3P^e, ^1P^e, ^3S^e, \text{ and } ^1S^e$ states can be formed from a pair of non-equivalent p orbitals. It is seen that more states arise when non-equivalent orbitals are involved; for equivalent orbitals, some determinants vanish, thereby decreasing the total number of states that arise.
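The vector-coupling count for two non-equivalent electrons can be sketched in a few lines (a hypothetical helper, listing $L = l_1+l_2, \ldots, |l_1-l_2|$ for each total spin):

```python
def coupled_terms(l1, l2):
    """Term symbols from two non-equivalent electrons: L runs from l1+l2
    down to |l1-l2|; two spin-1/2 electrons couple to S = 1 (triplet) or 0."""
    letters = 'SPDFGHI'
    return ['%d%s' % (2 * S + 1, letters[L])
            for S in (1, 0)
            for L in range(l1 + l2, abs(l1 - l2) - 1, -1)]

print(coupled_terms(1, 1))  # 2p^1 3p^1 -> ['3D', '3P', '3S', '1D', '1P', '1S']
```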
One must be able to evaluate the matrix elements among properly symmetry-adapted N-electron configuration functions for any operator, the electronic Hamiltonian in particular. The Slater-Condon rules provide this capability.
• 11.1: Configuration State Functions can Express the Full N-Electron Wavefunction
A given electronic configuration can yield several space- and spin- adapted determinental wavefunctions referred to as configuration state functions (CSFs). These CSF wavefunctions are not the exact eigenfunctions of the many-electron Hamiltonian; they are simply functions which possess the space, spin, and permutational symmetry of the exact eigenstates. As such, they comprise an acceptable set of functions to use in, for example, a linear variational treatment of the true states.
• 11.2: The Slater-Condon Rules Give Expressions for the Operator Matrix Elements Among the CSFs
To form the Hamiltonian matrix elements, one uses the so-called Slater-Condon rules which express all non-vanishing determinental matrix elements involving either one- or two- electron operators (one-electron operators are additive).
• 11.3: The Slater-Condon Rules
The Slater–Condon rules express integrals of one- and two-body operators over wavefunctions constructed as Slater determinants of orthonormal orbitals in terms of the individual orbitals. In doing so, the original integrals involving N-electron wavefunctions are reduced to sums over integrals involving at most two molecular orbitals, or in other words, the original 3N dimensional integral is expressed in terms of many three- and six-dimensional integrals.
• 11.4: Examples of Applying the Slater-Condon Rules
It is wise to gain some experience using the Slater-Condon rules, so let us consider a few illustrative example problems.
• 11.S: Evaluating the Matrix Elements of N-electron Wavefunctions (Summary)
In all of the examples in Chapter 11, the Slater-Condon rules were used to reduce matrix elements of one- or two- electron operators between determinental functions to one- or two- electron integrals over the orbitals which appear in the determinants. In any ab initio electronic structure computer program there must exist the capability to form symmetry-adapted CSFs and to evaluate, using these rules.
11: Evaluating the Matrix Elements of N-electron Wavefunctions
It has been demonstrated that a given electronic configuration can yield several space- and spin- adapted determinental wavefunctions; such functions are referred to as configuration state functions (CSFs). These CSF wavefunctions are not the exact eigenfunctions of the many-electron Hamiltonian, H; they are simply functions which possess the space, spin, and permutational symmetry of the exact eigenstates. As such, they comprise an acceptable set of functions to use in, for example, a linear variational treatment of the true states.
In such variational treatments of electronic structure, the N-electron wavefunction $\Psi$ is expanded as a sum over all CSFs that possess the desired spatial and spin symmetry:
$\Psi = \sum\limits_J C_J \Phi_J. \nonumber$
Here, the $\Phi_J$ represent the CSFs that are of the correct symmetry, and the $C_J$ are their expansion coefficients to be determined in the variational calculation. If the spin-orbitals used to form the determinants, that in turn form the CSFs {$\Phi_J$}, are orthonormal one electron functions (i.e., $\langle \phi_k | \phi_j \rangle = \delta_{k,j}$), then the CSFs can be shown to be orthonormal functions of N electrons
$\langle \Phi_J | \Phi_K \rangle = \delta_{J,K}. \nonumber$
In fact, the Slater determinants themselves also are orthonormal functions of N electrons whenever orthonormal spin-orbitals are used to form the determinants.
The above expansion of the full N-electron wavefunction is termed a "configuration-interaction" (CI) expansion. It is, in principle, a mathematically rigorous approach to expressing $\Psi$ because the set of all determinants that can be formed from a complete set of spin-orbitals can be shown to be complete. In practice, one is limited to the number of orbitals that can be used and in the number of CSFs that can be included in the CI expansion. Nevertheless, the CI expansion method forms the basis of the most commonly used techniques in quantum chemistry.
In general, the optimal variational (or perturbative) wavefunction for any (i.e., the ground or excited) state will include contributions from spin-and space-symmetry adapted determinants derived from all possible configurations. For example, although the determinant with L =1, S = 1, $M_L =1, M_S =1$ arising from the $1s^22s^22p^2$ configuration may contribute strongly to the true ground electronic state of the Carbon atom, there will be contributions from all configurations which can provide these L, S, $M_L, \text{ and } M_S$ values (e.g., the $1s^22s^22p^13p^1 \text{ and } 2s^22p^4$ configurations will also contribute, although the $1s^22s^22p^13s^1 \text{ and } 1s^22s^12p^23p^1$ will not because the latter two configurations are odd under inversion symmetry whereas the state under study is even).
The mixing of CSFs from many configurations to produce an optimal description of the true electronic states is referred to as configuration interaction (CI). Strong CI (i.e., mixing of CSFs with large amplitudes appearing for more than one dominant CSF) can occur, for example, when two CSFs from different electronic configurations have nearly the same Hamiltonian expectation value. For example, the $1s^2 2s^2$ and $1s^22p^2$ $^1S$ configurations of Be and the analogous $ns^2 \text{ and } np^2$ configurations of all alkaline earth atoms are close in energy because the ns-np orbital energy splitting is small for these elements; the $\pi^2 \text{ and } \pi^{2*}$ configurations of ethylene become equal in energy, and thus undergo strong CI mixing, as the CC $\pi$ bond is twisted by 90° in which case the $\pi \text{ and } \pi^{\text{*}}$ orbitals become degenerate.
Within a variational treatment, the relative contributions of the spin-and space symmetry adapted CSFs are determined by solving a secular problem for the eigenvalues ($E_i$ ) and eigenvectors ( $C_i$ ) of the matrix representation H of the full many-electron Hamiltonian H within this CSF basis:
$\sum\limits_L H_{K,L} C_{i,L} = E_i C_{i,K}. \nonumber$
The eigenvalue $E_i$ gives the variational estimate for the energy of the $i^{th}$ state, and the entries in the corresponding eigenvector $C_{i,K}$ give the contribution of the $K^{th}$ CSF to the $i^{th}$ wavefunction $\Psi_i$ in the sense that
$\Psi_i = \sum\limits_K C_{i,K}\Phi_K, \nonumber$
where $\Phi_K$ is the $K^{th}$ CSF.
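In practice, the secular problem above is a symmetric (or Hermitian) matrix eigenvalue problem. A minimal sketch using NumPy, with purely hypothetical matrix elements standing in for a real CSF-basis Hamiltonian:

```python
import numpy as np

# Hypothetical CSF-basis Hamiltonian matrix elements (hartree), symmetric
H = np.array([[-14.60, -0.06,  0.02],
              [-0.06, -14.35, -0.04],
              [ 0.02,  -0.04, -14.10]])

E, C = np.linalg.eigh(H)   # eigenvalues ascending; columns C[:, i] are eigenvectors
for i in range(3):
    print('state %d: E_i = %.4f, C_iK = %s' % (i, E[i], np.round(C[:, i], 3)))
```

`E[0]` is the variational estimate of the lowest state of this symmetry, and the column `C[:, 0]` supplies its CI coefficients $C_{0,K}$.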
To form the $H_{K,L}$ matrix, one uses the so-called Slater-Condon rules, which express all non-vanishing determinental matrix elements involving either one- or two- electron operators. One-electron operators are additive and appear as
$F = \sum\limits_i f(i); \nonumber$
two-electron operators are pairwise additive and appear as
$G = \sum\limits_{ij}g(i,j). \nonumber$
Because the CSFs are simple linear combinations of determinants with coefficients determined by space and spin symmetry, the $H_{I,J}$ matrix in terms of determinants can be used to generate the $H_{K,L}$ matrix over CSFs.
The Slater-Condon rules give the matrix elements between two determinants
$|> = |\phi_1\phi_2\phi_3 ... \phi_N| \nonumber$
and
$|'>=|\phi_1' \phi_2' \phi_3' ...\phi_N'| \nonumber$
for any quantum mechanical operator that is a sum of one- and two- electron operators (F + G). It expresses these matrix elements in terms of one-and two-electron integrals involving the spin-orbitals that appear in | > and | '> and the operators f and g.
As a first step in applying these rules, one must examine | > and | '> and determine by how many (if any) spin-orbitals | > and | '> differ. In so doing, one may have to reorder the spin-orbitals in one of the determinants to achieve maximal coincidence with those in the other determinant; it is essential to keep track of the number of permutations ($N_p$) that one makes in achieving maximal coincidence. The results of the Slater-Condon rules given below are then multiplied by $(-1)^{N_p}$ to obtain the matrix elements between the original | > and | '>. The final result does not depend on whether one chooses to permute | > or | '>.
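Bringing two determinants into maximal coincidence while counting transpositions can be sketched as follows (each swap contributes one factor of $-1$; the helper name is illustrative):

```python
def to_max_coincidence(bra, ket):
    """Reorder ket's spin-orbitals to line up with bra's, tracking the
    (-1)^Np sign from the transpositions (orbitals absent from bra stay put)."""
    ket = list(ket)
    sign = 1
    for i, so in enumerate(bra):
        if so in ket:
            j = ket.index(so)
            if j != i:
                ket[i], ket[j] = ket[j], ket[i]   # one transposition, one sign flip
                sign = -sign
    return sign, ket

sign, reordered = to_max_coincidence(['1sa', '1sb', '2sa'], ['2sa', '1sb', '1sa'])
print(sign, reordered)  # -1 ['1sa', '1sb', '2sa']
```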
Once maximal coincidence has been achieved, the Slater-Condon (SC) rules provide the following prescriptions for evaluating the matrix elements of any operator F + G containing a one-electron part $F = \sum\limits_i f(i)$ and a two-electron part $G = \sum\limits_{ij} g(i,j)$ (the Hamiltonian is, of course, a specific example of such an operator; the electric dipole operator $\sum\limits_i e\textbf{r}_i$ and the electronic kinetic energy $\frac{-\hbar^2}{2m_e}\sum\limits_i \nabla_i^2$ are examples of one-electron operators (for which one takes g = 0); the electron-electron coulomb interaction $\sum\limits_{i>j} \frac{e^2}{r_{ij}}$ is a two-electron operator (for which one takes f = 0)):
11.03: The Slater-Condon Rules
The Slater–Condon rules express integrals of one- and two-body operators over wavefunctions constructed as Slater determinants of orthonormal orbitals in terms of the individual orbitals. In doing so, the original integrals involving N-electron wavefunctions are reduced to sums over integrals involving at most two molecular orbitals.
i. If |> and |'> are identical, then
$\langle|F + G|\rangle = \sum \limits_i \langle\phi_i| f |\phi_i\rangle + \sum\limits_{i>j}\left[ \langle\phi_i \phi_j | g | \phi_i \phi_j \rangle - \langle \phi_i \phi_j | g | \phi_j \phi_i \rangle \right], \nonumber$
where the sums over i and j run over all spin-orbitals in | >;
ii. If | > and | '> differ by a single spin-orbital mismatch ($\phi_p \neq \phi_p'$),
$\langle | F + G |' \rangle = \langle \phi_p | f | \phi_p' \rangle + \sum\limits_j \left[ \langle \phi_p\phi_j | g | \phi_p'\phi_j \rangle - \langle \phi_p\phi_j | g | \phi_j\phi_p' \rangle \right], \nonumber$
where the sum over j runs over all spin-orbitals in | > except $\phi_p$ ;
iii. If | > and | '> differ by two spin-orbitals ($\phi_p \neq \phi_p'$ and $\phi_q \neq \phi_q'$),
$\langle | F + G | '\rangle = \langle \phi_p \phi_q | g | \phi_p' \phi_q' \rangle - \langle \phi_p \phi_q | g | \phi_q' \phi_p' \rangle \nonumber$
(note that the F contribution vanishes in this case);
iv. If | > and | '> differ by three or more spin-orbitals, then $< | F + G | ' > = 0$;
v. For the identity operator I, the matrix elements < | I | '> = 0 if | > and | '> differ by one or more spin-orbitals (i.e., the Slater determinants are orthonormal if their spin-orbitals are).
Recall that each of these results is subject to multiplication by a factor of $(-1)^{N_p}$ to account for possible ordering differences in the spin-orbitals in | > and | '>.
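Rules i-iv (including the $(-1)^{N_p}$ prefactor) can be collected into one routine. The sketch below treats determinants as ordered tuples of spin-orbital labels and takes the one- and two-electron integrals as user-supplied callables `f(p, q)` for $<p|f|q>$ and `g(p, q, r, s)` for the Dirac integral $<pq|g|rs>$ (all names are illustrative):

```python
def _align(bra, ket):
    """Permute ket into maximal coincidence with bra, returning (-1)^Np and
    the reordered list (orbitals not shared with bra are left in place)."""
    ket, sign = list(ket), 1
    for i, so in enumerate(bra):
        if so in ket and ket.index(so) != i:
            j = ket.index(so)
            ket[i], ket[j] = ket[j], ket[i]
            sign = -sign
    return sign, ket

def sc_element(bra, ket, f, g):
    """<bra|F+G|ket> between Slater determinants of orthonormal spin-orbitals.
    f(p, q) -> <p|f|q>; g(p, q, r, s) -> Dirac integral <pq|g|rs>."""
    sign, ket = _align(list(bra), list(ket))
    diff = [i for i in range(len(bra)) if bra[i] != ket[i]]
    if len(diff) == 0:                                   # rule i
        val = sum(f(p, p) for p in bra)
        val += sum(g(bra[i], bra[j], bra[i], bra[j])
                   - g(bra[i], bra[j], bra[j], bra[i])
                   for i in range(len(bra)) for j in range(i + 1, len(bra)))
    elif len(diff) == 1:                                 # rule ii
        p, q = bra[diff[0]], ket[diff[0]]
        val = f(p, q) + sum(g(p, r, q, r) - g(p, r, r, q)
                            for r in bra if r != p)
    elif len(diff) == 2:                                 # rule iii: F part vanishes
        p1, p2 = bra[diff[0]], bra[diff[1]]
        q1, q2 = ket[diff[0]], ket[diff[1]]
        val = g(p1, p2, q1, q2) - g(p1, p2, q2, q1)
    else:                                                # rule iv
        val = 0.0
    return sign * val

# Demo: a <|1sa 1sb 2sa 2sb| H |1sa 1sb 3sa 3sb|>-type element with a model g
# that keeps only spin-allowed integrals; the exchange piece vanishes by spin.
spin = lambda so: so[-1]
f = lambda p, q: 0.0
g = lambda p, q, r, s: 1.0 if (spin(p) == spin(r) and spin(q) == spin(s)) else 0.0
print(sc_element(('1sa', '1sb', '2sa', '2sb'), ('1sa', '1sb', '3sa', '3sb'), f, g))
```

In the demo, two determinants differing by a doubly occupied orbital pair reduce to a single two-electron integral, with the spin-forbidden exchange piece vanishing.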
In these expressions, $< \phi_i | f | \phi_j >$ is used to denote the one-electron integral $\int \phi_i^{\text{*}}(r)f(r)\phi_j(r)dr$, and $< \phi_i\phi_j | g | \phi_k\phi_l >$ (or, in shorthand notation, $< i j | k l >$) represents the two-electron integral $\int \phi_i^{\text{*}}(r)\phi_j^{\text{*}}(r')g(r,r')\phi_k(r)\phi_l(r')drdr'$.
The notation < i j | k l> introduced above gives the two-electron integrals for the g(r,r') operator in the so-called Dirac notation, in which the i and k indices label the spin-orbitals that refer to the coordinates r and the j and l indices label the spin-orbitals referring to coordinates r'. The r and r' denote $r, \theta, \phi, \sigma \text{ and } r', \theta ', \phi ', \sigma '$ (with $\sigma \text{ and } \sigma ' \text{ being the } \alpha \text{ or } \beta$ spin functions). The fact that r and r' are integrated and hence represent 'dummy' variables introduces index permutational symmetry into this list of integrals. For example,
$< i j | k l > = < j i | l k > = < k l | i j >^{\text{*}} = < l k | j i >^{\text{*}}; \nonumber$
the final two equivalences are results of the Hermitian nature of g(r,r').
It is also common to represent these same two-electron integrals in a notation referred to as Mulliken notation in which:
$\int\phi_i^{\text{*}}(r)\phi_j^{\text{*}}(r') g(r,r')\phi_k(r)\phi_l(r')drdr' = (i k | j l). \nonumber$
Here, the indices i and k, which label the spin-orbital having variables r are grouped together, and j and l, which label spin-orbitals referring to the r' variables appear together. The above permutational symmetries, when expressed in terms of the Mulliken integral list read:
$(i k | j l) = (j l | i k) = (k i | l j)^{\text{*}} = (l j | k i)^{\text{*}}. \nonumber$
If the operators f and g do not contain any electron spin operators, then the spin integrations implicit in these integrals (all of the $\phi_i$ are spin-orbitals, so each $\phi$ is accompanied by an $\alpha$ or $\beta$ spin function and each $\phi^{\text{*}}$ involves the adjoint of one of the $\alpha$ or $\beta$ spin functions) can be carried out as $<\alpha|\alpha>=1, <\alpha |\beta>=0, <\beta |\alpha >=0, <\beta |\beta>=1$, thereby yielding integrals over spatial orbitals. These spin integration results follow immediately from the general properties of angular momentum eigenfunctions detailed in Appendix G; in particular, because $\alpha \text{ and } \beta$ are eigenfunctions of $S_Z$ with different eigenvalues, they must be orthogonal $<\alpha |\beta > = <\beta | \alpha > = 0$.
The essential results of the Slater-Condon rules are:
1. The full N! terms that arise in the N-electron Slater determinants do not have to be treated explicitly, nor do the N!(N! + 1)/2 Hamiltonian matrix elements among the N! terms of one Slater determinant and the N! terms of the same or another Slater determinant.
2. All such matrix elements, for any one- and/or two-electron operator can be expressed in terms of one- or two-electron integrals over the spin-orbitals that appear in the determinants.
3. The integrals over orbitals are three or six dimensional integrals, regardless of how many electrons N there are.
4. These integrals over mo's can, through the LCAO-MO expansion, ultimately be expressed in terms of one- and two-electron integrals over the primitive atomic orbitals. It is only these ao-based integrals that can be evaluated explicitly (on high speed computers for all but the smallest systems).
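Point 4, the ao-to-mo integral transformation, is usually done one index at a time so the cost scales as $n^5$ per quarter transformation rather than $n^8$ for a brute-force four-index sum. A NumPy sketch with random stand-in data (no real integrals are implied):

```python
import numpy as np

n = 2                                      # toy basis size
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))            # stand-in LCAO-MO coefficient matrix
ao = rng.standard_normal((n, n, n, n))     # stand-in AO two-electron integrals

# Four quarter-transformations, each O(n^5):
mo = np.einsum('pi,pqrs->iqrs', C, ao)
mo = np.einsum('qj,iqrs->ijrs', C, mo)
mo = np.einsum('rk,ijrs->ijks', C, mo)
mo = np.einsum('sl,ijks->ijkl', C, mo)     # mo[i,j,k,l] over molecular orbitals
```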
It is wise to gain some experience using the Slater-Condon rules, so let us consider a few illustrative example problems.
1. What is the contribution to the total energy of the $^3P$ level of Carbon made by the two 2p orbitals alone? Of course, the two 1s and two 2s spin-orbitals contribute to the total energy, but we artificially ignore all such contributions in this example to simplify the problem.
Because all nine of the $^3P$ states have the same energy, we can calculate the energy of any one of them; it is therefore prudent to choose an "easy" one
$^3P(M_L=1, M_S=1) = |p_1\alpha p_0\alpha|. \nonumber$
The energy of this state is $< |p_1\alpha p_0\alpha| H |p_1\alpha p_0\alpha| >$. The SC rules tell us this equals:
$I_{2p_1} + I_{2p_0} + < 2p_12p_0 | 2p_12p_0 > - < 2p_12p_0 | 2p_02p_1 >, \nonumber$
where the shorthand notation $I_j = \langle j | f | j \rangle$ is introduced. If the contributions from the two 1s and two 2s spin-orbitals are now taken into account, one obtains a total energy that also contains $2I_{1s} + 2I_{2s} + <1s1s|1s1s> + 4<1s2s|1s2s> - 2<1s2s|2s1s> + <2s2s|2s2s> + 2<1s2p_1|1s2p_1> - <1s2p_1|2p_11s> + 2<1s2p_0|1s2p_0> - <1s2p_0|2p_01s> + 2<2s2p_1|2s2p_1> - <2s2p_1|2p_12s> + 2<2s2p_0|2s2p_0> - <2s2p_0|2p_02s>$.
2. Is the energy of another $^3P$ state equal to the above state's energy? Of course, but it may prove informative to verify this.
Consider the $M_S=0, M_L=1$ state whose energy is:
$\left\langle \dfrac{1}{\sqrt{2}}\left[|p_1\alpha p_0\beta| + |p_1\beta p_0 \alpha|\right] \right| H \left| \dfrac{1}{\sqrt{2}}\left[|p_1\alpha p_0\beta | + |p_1\beta p_0\alpha |\right] \right\rangle \nonumber$
$= \dfrac{1}{2}\left[ I_{2p_1} + I_{2p_0} + <2p_12p_0|2p_12p_0> + I_{2p_1} + I_{2p_0} + <2p_12p_0|2p_12p_0> \right] \nonumber$
$+ \dfrac{1}{2}\left[ -<2p_12p_0|2p_02p_1> - <2p_12p_0|2p_02p_1> \right] \nonumber$
$= I_{2p_1} + I_{2p_0} + <2p_12p_0|2p_12p_0> - <2p_12p_0|2p_02p_1>. \nonumber$
This is, indeed, the same as the other $^3P$ energy obtained above.
3. What energy would the singlet state $\frac{1}{\sqrt{2}}\left[|p_1\alpha p_0\beta| - |p_1\beta p_0\alpha|\right]$ have?
The $^3P M_S=0$ example can be used (changing the sign on the two determinants) to give
$E = I_{2p_1} + I_{2p_0} + <2p_12p_0|2p_12p_0> + <2p_12p_0|2p_02p_1>. \nonumber$
Note, this is the $M_L=1$ component of the $^1D$ state; it is, of course, not a $^1P$ state because no such state exists for two equivalent p electrons.
4. What is the CI matrix element coupling $|1s^22s^2| \text{ and } |1s^23s^2|$?
These two determinants differ by two spin-orbitals, so
$<|1s\alpha 1s\beta 2s\alpha 2s\beta| H |1s\alpha 1s\beta 3s\alpha 3s\beta|> = <2s2s|3s3s> = <2s3s|3s2s> \nonumber$
(note, this is an exchange-type integral).
5. What is the CI matrix element coupling $|\pi\alpha\pi\beta|$ and $|\pi^{\text{*}}\alpha\pi^{\text{*}}\beta|$?
These two determinants differ by two spin-orbitals, so
$<|\pi\alpha\pi\beta| H |\pi^{\text{*}}\alpha\pi^{\text{*}}\beta|> = <\pi\pi|\pi^{\text{*}}\pi^{\text{*}}> = <\pi\pi^{\text{*}}|\pi^{\text{*}}\pi> \nonumber$
(note, again this is an exchange-type integral).
6. What is the Hamiltonian matrix element coupling $|\pi\alpha\pi\beta|$ and $\frac{1}{\sqrt{2}}[|\pi\alpha\pi^{\text{*}}\beta| - |\pi\beta\pi^{\text{*}}\alpha|]$?
The first determinant differs from the $\pi^2$ determinant by one spin-orbital, as does the second (after it is placed into maximal coincidence by making one permutation), so
$<|\pi\alpha\pi\beta| H |\dfrac{1}{\sqrt{2}}[|\pi\alpha\pi^{\text{*}}\beta|-|\pi\beta\pi^{\text{*}}\alpha|]> \nonumber$
$= \dfrac{1}{\sqrt{2}}\left[ <\pi |f|\pi^{\text{*}}> + <\pi\pi |\pi^{\text{*}}\pi>\right] -(-1)\dfrac{1}{\sqrt{2}}\left[ <\pi |f|\pi^{\text{*}}> + <\pi\pi |\pi^{\text{*}}\pi >\right] \nonumber$
$= \sqrt{2}[<\pi| f |\pi^{\text{*}}> + <\pi\pi|\pi^{\text{*}}\pi>]. \nonumber$
7. What is the element coupling $|\pi\alpha\pi\beta|$ and $\frac{1}{\sqrt{2}} \left[ |\pi\alpha\pi^{\text{*}}\beta | + |\pi\beta\pi^{\text{*}}\alpha| \right]$?
$<| \pi\alpha\pi\beta |H| \dfrac{1}{\sqrt{2}} \left[ | \pi\alpha\pi^{\text{*}}\beta | + | \pi\beta\pi^{\text{*}}\alpha | \right]> = \dfrac{1}{\sqrt{2}} \left[ < \pi |f| \pi^{\text{*}} > + <\pi\pi | \pi^{\text{*}}\pi > \right] + (-1)\dfrac{1}{\sqrt{2}}\left[ <\pi |f| \pi^{\text{*}} > + < \pi\pi | \pi^{\text{*}}\pi > \right] =0. \nonumber$
This result should not surprise you because $| \pi\alpha\pi\beta |$ is an S=0 singlet state while $\frac{1}{\sqrt{2}}\left[ |\pi\alpha\pi^{\text{*}}\beta | + | \pi\beta\pi^{\text{*}}\alpha| \right]$ is the $M_S=0$ component of the S=1 triplet state.
8. What is the $\textbf{r} = \sum\limits_j e\textbf{r}_j$ electric dipole matrix element between $| p_1\alpha p_1\beta| \text{ and } \frac{1}{\sqrt{2}}[|p_1\alpha p_0\beta | + |p_0\alpha p_1\beta |]$? Is the second function a singlet or triplet? It is a singlet in disguise; by interchanging the $p_0\alpha \text{ and } p_1\beta$ spin-orbitals and thus introducing a (-1), this function is clearly identified as $\frac{1}{\sqrt{2}}[| p_1\alpha p_0\beta | - | p_1\beta p_0\alpha |]$, which is a singlet.
The first determinant differs from the latter two by one spin orbital in each case, so
$<|p_1\alpha p_1\beta|\textbf{r}|\dfrac{1}{\sqrt{2}}[|p_1\alpha p_0\beta| + |p_0\alpha p_1\beta|]> = \dfrac{1}{\sqrt{2}}[ < p_1|\textbf{r}|p_0 > + <p_1| \textbf{r} |p_0> ] = \sqrt{2}<p_1|\textbf{r}|p_0>. \nonumber$
9. What is the electric dipole matrix element between the $^1\Delta = | \pi_1\alpha\pi_1\beta |$ state and the $^1\Sigma = \frac{1}{\sqrt{2}}\left[ |\pi_1\alpha\pi_{-1}\beta| + |\pi_{-1}\alpha\pi_1\beta| \right]$ state?
$< \dfrac{1}{\sqrt{2}}\left[ | \pi_1\alpha\pi_{-1}\beta | + |\pi_{-1}\alpha\pi_1\beta | \right] |\textbf{r}| |\pi_1\alpha\pi_1\beta |> \nonumber$
$= \dfrac{1}{\sqrt{2}}\left[ < \pi_{-1} |\textbf{r}| \pi_1 > + < \pi_{-1} |\textbf{r}| \pi_1 > \right] \nonumber$
$= \sqrt{2} < \pi_{-1} |\textbf{r}| \pi_1 >. \nonumber$
10. As another example of the use of the SC rules, consider the configuration interaction which occurs between the $1s^22s^2$ and $1s^22p^2$ $^1S$ CSFs in the Be atom.
The CSFs corresponding to these two configurations are as follows:
$\Phi_1 = |1s\alpha 1s\beta 2s\alpha 2s\beta| \nonumber$
and
$\Phi_2 = \dfrac{1}{\sqrt{3}}\left[ | 1s\alpha 1s\beta 2p_0\alpha 2p_0\beta | - | 1s\alpha 1s\beta 2p_1\alpha 2p_{-1}\beta | - | 1s\alpha 1s\beta 2p_{-1}\alpha 2p_1\beta | \right]. \nonumber$
The determinental Hamiltonian matrix elements needed to evaluate the 2x2 $H_{K,L}$ matrix appropriate to these two CSFs are evaluated via the SC rules. The first such matrix element is:
$< | 1s\alpha 1s\beta 2s\alpha 2s\beta | H | 1s\alpha 1s\beta 2s\alpha 2s\beta | > = 2h_{1s} + 2h_{2s} + J_{1s,1s} + 4J_{1s,2s} + J_{2s,2s} - 2K_{1s,2s}, \nonumber$
where
$h_i = < \phi_i | \dfrac{-\hbar^2}{2m_e}\nabla^2 - \dfrac{4e^2}{r} | \phi_i >, \nonumber$
$J_{i,j} = < \phi_i\phi_j | \dfrac{e^2}{r_{12}} | \phi_i \phi_j >, \nonumber$
and
$K_{ij} = < \phi_i \phi_j | \dfrac{e^2}{r_{12}} | \phi_j \phi_i > \nonumber$
are the orbital-level one-electron, coulomb, and exchange integrals, respectively.
Coulomb integrals $J_{ij}$ describe the coulombic interaction of one charge density ( $\phi_i^2$ above) with another charge density ($\phi_j^2$ above); exchange integrals $K_{ij}$ describe the interaction of an overlap charge density (i.e., a density of the form $\phi_i\phi_j$ ) with itself ( $\phi_i\phi_j \text{ with } \phi_i\phi_j$ in the above).
The spin functions $\alpha \text{ and } \beta$ which accompany each orbital in $|1s\alpha 1s\beta 2s\alpha 2s\beta |$ have been eliminated by carrying out the spin integrations as discussed above. Because H contains no spin operators, this is straightforward and amounts to keeping integrals $< \phi_i | f |\phi_j >$ only if $\phi_i \text{ and } \phi_j$ are of the same spin and integrals $< \phi_i\phi_j | g | \phi_k\phi_l >$ only if $\phi_i \text{ and } \phi_k \text{ are of the same spin } \textbf{and } \phi_j \text{ and } \phi_l$ are of the same spin. The physical content of the above energy (i.e., Hamiltonian expectation value) of the $|1s\alpha 1s\beta 2s\alpha 2s\beta |$ determinant is clear: $2h_{1s} + 2h_{2s}$ is the sum of the expectation values of the one-electron (i.e., kinetic energy and electron-nuclear coulomb interaction) part of the Hamiltonian for the four occupied spin-orbitals; $J_{1s,1s} + 4J_{1s,2s} + J_{2s,2s} - 2K_{1s,2s}$ contains the coulombic repulsions among all pairs of occupied spin-orbitals minus the exchange interactions among pairs of spin-orbitals with like spin.
The determinental matrix elements linking $\Phi_1 \text{ and } \Phi_2$ are as follows:
$<| 1s\alpha 1s\beta 2s\alpha 2s\beta | H | 1s\alpha 1s\beta 2p_0\alpha 2p_0\beta | > = < 2s2s | 2p_02p_0 >, \nonumber$
$<| 1s\alpha 1s\beta 2s\alpha 2s\beta | H | 1s\alpha 1s\beta 2p_1\alpha 2p_{-1}\beta | > = < 2s2s | 2p_12p_{-1} >, \nonumber$
$<| 1s\alpha 1s\beta 2s\alpha 2s\beta | H | 1s\alpha 1s\beta 2p_{-1}\alpha 2p_1\beta | > = < 2s2s | 2p_{-1}2p_1 >, \nonumber$
where the Dirac convention has been introduced as a shorthand notation for the two electron integrals (e.g., $< 2s2s | 2p_02p_0>$ represents $\int 2s^{\text{*}}(r_1)2s^{\text{*}}(r_2)\frac{e^2}{r_{12}}2p_0(r_1)2p_0(r_2)dr_1dr_2 )$.
The three integrals shown above can be seen to be equal and to be of the exchange integral form by expressing the integrals in terms of integrals over cartesian functions and recognizing identities due to the equivalence of the $2p_x, 2p_y, \text{ and } 2p_z$ orbitals. For example,
$<2s2s | 2p_12p_{-1}> = \left(\dfrac{1}{\sqrt{2}}\right)^2 \left[ <2s2s | [2p_x +i2p_y] [2p_x -i2p_y]> \right] = \nonumber$
$\dfrac{1}{2}\left[ <2s2s | xx> + < 2s2s | yy > + i< 2s2s | yx > -i< 2s2s | xy> \right] = <2s2s | xx> = K_{2s,x} \nonumber$
(here the two imaginary terms cancel and the two remaining real integrals are equal);
$< 2s2s | 2p_02p_0 > = <2s2s | zz> = < 2s2s | xx > = K_{2s,x} \nonumber$
this is because $K_{2s,z} = K_{2s,x} = K_{2s,y};$
$< 2s2s | 2p_{-1}2p_1 > = \dfrac{1}{2}\left[ < 2s2s | [2p_x-i2p_y][2p_x + i2p_y]>\right] = \nonumber$
$< 2s2s | xx > = \int 2s^{\text{*}}(r_1)2s^{\text{*}}(r_2)\dfrac{e^2}{r_{12}}2p_x(r_1)2p_x(r_2)dr_1dr_2 = K_{2s,x}. \nonumber$
These integrals are clearly of the exchange type because they involve the coulombic interaction of the $2s 2p_{x,y,or z}$ overlap charge density with itself.
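The cancellation of the imaginary cross terms can be checked by expanding the ket orbitals by linearity. A short numerical sketch (the cartesian integral values 0.73 and 0.21 are hypothetical placeholders; only the symmetry $<2s2s|xy> = <2s2s|yx>$ matters):

```python
import math

s = 1.0 / math.sqrt(2.0)
p_plus  = {'x': s, 'y': 1j * s}    # 2p_{+1} = (2p_x + i 2p_y)/sqrt(2)
p_minus = {'x': s, 'y': -1j * s}   # 2p_{-1} = (2p_x - i 2p_y)/sqrt(2)

# Hypothetical values for the cartesian integrals; by symmetry
# <2s2s|xx> = <2s2s|yy> and <2s2s|xy> = <2s2s|yx>.
I = {('x', 'x'): 0.73, ('y', 'y'): 0.73, ('x', 'y'): 0.21, ('y', 'x'): 0.21}

def ket_integral(a, b):
    """Expand <2s2s|ab> by linearity in the two ket orbitals a and b."""
    return sum(a[u] * b[v] * I[(u, v)] for u in a for v in b)

print(ket_integral(p_plus, p_minus))  # ~ 0.73 + 0j: the i<yx> and -i<xy> terms cancel
```

The result equals the placeholder value of $<2s2s|xx>$, mirroring the identity $<2s2s|2p_12p_{-1}> = K_{2s,x}$ derived above.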
Moving on, the matrix elements among the three determinants in $\Phi_2$ are given as follows:
$<|1s\alpha 1s\beta 2p_0\alpha 2p_0\beta| H |1s\alpha 1s\beta 2p_0\alpha 2p_0\beta|> = 2h_{1s} + 2h_{2p} + J_{1s,1s} + 4J_{1s,2p} - 2K_{1s,2p} + J_{2p_0,2p_0} \nonumber$
($J_{1s,2p} \text{ and } K_{1s,2p} \text{ are independent of whether the 2p orbital is } 2p_x, 2p_y, \text{ or } 2p_z$);
$<| 1s\alpha 1s\beta 2p_1\alpha 2p_{-1}\beta | H | 1s\alpha 1s\beta 2p_1\alpha 2p_{-1}\beta |> = 2h_{1s} + 2h_{2p} + J_{1s,1s} + 4J_{1s,2p} - 2K_{1s,2p} + < 2p_12p_{-1}|2p_12p_{-1} >; \nonumber$
$<| 1s\alpha 1s\beta 2p_{-1}\alpha 2p_{1}\beta | H | 1s\alpha 1s\beta 2p_{-1}\alpha 2p_{1}\beta |> = 2h_{1s} + 2h_{2p} + J_{1s,1s} + 4J_{1s,2p} - 2K_{1s,2p} + < 2p_{-1}2p_{1}|2p_{-1}2p_{1} >; \nonumber$
$<|1s\alpha 1s\beta 2p_0\alpha 2p_0\beta | H | 1s\alpha 1s\beta 2p_{1}\alpha 2p_{-1}\beta |> = < 2p_02p_0 | 2p_{1}2p_{-1} > \nonumber$
$<|1s\alpha 1s\beta 2p_0\alpha 2p_0\beta | H | 1s\alpha 1s\beta 2p_{-1}\alpha 2p_{1}\beta |> = < 2p_02p_0 | 2p_{-1}2p_{1} > \nonumber$
$<|1s\alpha 1s\beta 2p_1\alpha 2p_{-1}\beta | H | 1s\alpha 1s\beta 2p_{-1}\alpha 2p_{1}\beta |> = < 2p_12p_{-1} | 2p_{-1}2p_{1} > \nonumber$
Certain of these integrals can be recast in terms of cartesian integrals for which equivalences are easier to identify as follows:
$< 2p_02p_0 | 2p_12p_{-1} > = <2p_02p_0 | 2p_{-1}2p_1 > = < zz|xx > = K_{z,x}; \nonumber$
$< 2p_12p_{-1}|2p_{-1}2p_1 > = < xx|yy > +\dfrac{1}{2}\left[ < xx|xx > - < xy|xy > \right] \nonumber$
$= K_{x,y} + \dfrac{1}{2}\left[ J_{x,x} - J_{x,y} \right]; \nonumber$
$< 2p_12p_{-1}|2p_12p_{-1} > = < 2p_{-1}2p_1|2p_{-1}2p_1 > = \dfrac{1}{2}\left[ J_{x,x} + J_{x,y} \right]. \nonumber$
Finally, the 2x2 CI matrix corresponding to the CSFs $\Phi_1$ and $\Phi_2$ can be formed from the above determinental matrix elements; this results in:
$H_{11} = 2h_{1s} + 2h_{2s} + J_{1s,1s} + 4J_{1s,2s} + J_{2s,2s} - 2K_{1s,2s}; \nonumber$
$H_{12} = \frac{-K_{2s,x}}{\sqrt{3}}; \nonumber$
$H_{22} = 2h_{1s} + 2h_{2p} + J_{1s,1s} + 4J_{1s,2p} - 2K_{1s,2p} + J_{z,z} - \frac{2}{3}K_{z,x}. \nonumber$
The lowest eigenvalue of this matrix provides this CI calculation's estimate of the ground-state $^1S$ energy of Be; its eigenvector provides the CI amplitudes for $\Phi_1 \text{ and } \Phi_2$ in this ground-state wavefunction. The other root of the 2x2 secular problem gives an approximation to another $^1S$ state of higher energy, in particular, a state dominated by the $\frac{1}{\sqrt{3}}[|1s\alpha 1s\beta 2p_0 \alpha 2p_0\beta|-|1s\alpha 1s\beta 2p_1\alpha 2p_{-1}\beta|-|1s\alpha 1s\beta 2p_{-1}\alpha 2p_1\beta |]$ CSF.
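Numerically, this final step is a 2x2 diagonalization. A sketch with hypothetical hartree values standing in for $H_{11}$, $H_{22}$, and $H_{12} = -K_{2s,x}/\sqrt{3}$:

```python
import numpy as np

# Hypothetical hartree values standing in for the Be 1S 2x2 CI problem
H11, H22, H12 = -14.60, -14.35, -0.06      # H12 plays the role of -K_2s,x/sqrt(3)
H = np.array([[H11, H12],
              [H12, H22]])

E, C = np.linalg.eigh(H)                   # ascending eigenvalues
print('ground-state 1S estimate: %.4f' % E[0])
print('CI amplitudes (Phi_1, Phi_2): %s' % np.round(C[:, 0], 4))
print('higher 1S root: %.4f' % E[1])
```

The CI mixing pushes the lower root below $H_{11}$, and the ground-state eigenvector is dominated by the $1s^22s^2$ CSF, as expected for weak coupling.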
11. As another example, consider the matrix elements which arise in electric dipole transitions between two singlet electronic states:
$< \Psi_1 | \textbf{E} \cdot \sum\limits_i e\textbf{r}_i |\Psi_2 >.$ Here $\textbf{E} \cdot \sum\limits_i e\textbf{r}_i$ is the one-electron operator describing the interaction of an electric field of magnitude and polarization $\textbf{E}$ with the instantaneous dipole moment of the electrons. The contribution to the dipole operator arising from the nuclear charges, $-\sum\limits_a Z_ae\textbf{R}_a$, plays no role here: when placed between $\Psi_1 \text{ and } \Psi_2$, this zero-electron operator simply multiplies the overlap integral $< \Psi_1 | \Psi_2 >$, which vanishes because $\Psi_1 \text{ and } \Psi_2$ are orthogonal.
When the states $\Psi_1 \text{ and } \Psi_2$ are described as linear combinations of CSFs as introduced earlier ($\Psi_i = \sum\limits_K C_{iK}\Psi_K$), these matrix elements can be expressed in terms of CSF-based matrix elements $< \Psi_K | \sum\limits_i e\textbf{r}_i |\Psi_L >$. The fact that the electric dipole operator is a one-electron operator, in combination with the SC rules, guarantees that only states whose dominant determinants differ by at most a single spin-orbital (i.e., those which are "singly excited") can be connected via electric dipole transitions through first order (i.e., in a one-photon transition to which the $< \Psi_1 |\sum\limits_i e\textbf{r}_i |\Psi_2 >$ matrix elements pertain). It is for this reason that light with energy adequate to ionize or excite deep core electrons in atoms or molecules usually causes such ionization or excitation rather than double ionization or excitation of valence-level electrons; the latter are two-electron events.
In, for example, the $\pi \rightarrow \pi^{\text{*}}$ excitation of an olefin, the ground and excited states are dominated by CSFs of the form (where only the "active" $\pi \text{ and } \pi^{\text{*}}$ orbitals are written explicitly):
$\Psi_1 = | ... \pi\alpha\pi\beta | \nonumber$
and
$\Phi_2 = \frac{1}{\sqrt{2}}[ | ...\pi\alpha\pi^{\text{*}}\beta| - | ...\pi\beta\pi^{\text{*}}\alpha| ]. \nonumber$
The electric dipole matrix element between these two CSFs can be found, using the SC rules, to be
$\dfrac{e}{\sqrt{2}}\left[ < \pi | \textbf{r} | \pi^{\text{*}} > + < \pi | \textbf{r} | \pi^{\text{*}} > \right] = \sqrt{2}e< \pi | \textbf{r} | \pi^{\text{*}} >. \nonumber$
Notice that in evaluating the second determinantal integral $< | ... \pi\alpha\pi\beta| e\textbf{r} | ...\pi\beta\pi^{\text{*}}\alpha| >$, a sign change occurs when one puts the two determinants into maximum coincidence; this sign change then makes the minus sign in $\Phi_2$ yield a positive sign in the final result.
11.S: Evaluating the Matrix Elements of N-electron Wavefunctions (Summary)
In all of the examples in Chapter 11, the Slater-Condon rules were used to reduce matrix elements of one- or two-electron operators between determinantal functions to one- or two-electron integrals over the orbitals which appear in the determinants. In any ab initio electronic structure computer program there must exist the capability to form symmetry-adapted CSFs and to evaluate, using these SC rules, the Hamiltonian and other operators' matrix elements among these CSFs in terms of integrals over the molecular orbitals that appear in the CSFs. The Slater-Condon rules provide not only the tools to compute quantitative matrix elements; they allow one to understand in qualitative terms the strengths of interactions among CSFs. In the following section, the SC rules are used to explain why chemical reactions in which the reactants and products have dominant CSFs that differ by two spin-orbital occupancies often display activation energies that exceed the reaction endoergicity.
Along "reaction paths", configurations can be connected one-to-one according to their symmetries and energies. This is another part of the Woodward-Hoffmann rules
• 12.1: Concepts of Configuration and State Energies
The energy of a particular electronic state of an atom or molecule has been expressed in terms of Hamiltonian matrix elements, using the Slater-Condon rules, over the various spin-and spatially adapted determinants or CSFs. Plots of the diagonal CSF energies, connected according to symmetry, constitute a configuration correlation diagram (CCD).
• 12.2: Mixing of Covalent and Ionic Configurations
The detailed examination of the H2 molecule via the valence bond and molecular orbital approaches forms the basis of our thinking about bonding when confronted with new systems. Let us examine this model system in further detail to explore the electronic states that arise by occupying two orbitals (derived from the two 1s orbitals on the two hydrogen atoms) with two electrons.
• 12.3: Various Types of Configuration Mixing
As motion occurs along the proposed reaction path, geometries may be encountered at which it is essential to describe the electronic wavefunction in terms of a linear combination of more than one CSF. Such essential configuration mixing is often referred to as treating "essential CI". To achieve reasonable chemical accuracy in electronic structure calculations it is necessary to use a multiconfigurational wavefunction even in situations where no obvious strong configuration mixing is present.
12: Quantum Mechanical Picture of Bond Making and Breaking Reactions
Plots of CSF Energies Give Configuration Correlation Diagrams
The energy of a particular electronic state of an atom or molecule has been expressed in terms of Hamiltonian matrix elements, using the Slater-Condon rules, over the various spin-and spatially adapted determinants or CSFs which enter into the state wavefunction.
$E = \sum\limits_{I,J} \langle \Phi_I | H | \Phi_J \rangle C_IC_J. \nonumber$
The diagonal matrix elements of H in the CSF basis multiplied by the appropriate CI amplitudes $\langle \Phi_I | H | \Phi_I \rangle C_I C_I$ represent the energy of the $I^{th}$ CSF weighted by the strength ( $C_I^2$ ) of that CSF in the wavefunction. The off-diagonal elements represent the effects of mixing among the CSFs; mixing is strongest whenever two or more CSFs have nearly the same energy (i.e., $< \Phi_I | H | \Phi_I > \approx < \Phi_J | H | \Phi_J >$) and there is strong coupling (i.e., $< \Phi_I | H | \Phi_J >$ is large). Whenever the CSFs are widely separated in energy, each wavefunction is dominated by a single CSF.
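The near-degeneracy criterion can be made quantitative with a two-CSF model: for a fixed coupling, the ground-state CI weights approach 50/50 as the diagonal energies approach one another, and collapse onto a single CSF as they separate. The numbers below are hypothetical, chosen only to illustrate the trend.

```python
import math

def ground_state_weights(H11, H22, H12):
    """Squared CI amplitudes (C1^2, C2^2) of the lower eigenvector of
    the symmetric 2x2 matrix [[H11, H12], [H12, H22]]."""
    E = 0.5 * (H11 + H22) - math.hypot(0.5 * (H11 - H22), H12)
    v1, v2 = H12, E - H11            # unnormalized eigenvector components
    n2 = v1 * v1 + v2 * v2
    return v1 * v1 / n2, v2 * v2 / n2

# Same coupling, different diagonal separations (hypothetical values, eV)
near = ground_state_weights(-10.0, -10.01, 0.5)  # nearly degenerate -> ~50/50 mixing
far = ground_state_weights(-10.0, -2.0, 0.5)     # widely separated -> one CSF dominates
```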
CSFs Interact and Couple to Produce States and State Correlation Diagrams
Just as orbital energies connected according to their symmetries and plotted as functions of geometry constitute an orbital correlation diagram, plots of the diagonal CSF energies, connected according to symmetry, constitute a configuration correlation diagram (CCD). If, near regions where energies of CSFs of the same symmetry cross (according to the direct product rule of group theory discussed in Appendix E, only CSFs of the same symmetry mix because only they have non-vanishing $< \Phi_I | H | \Phi_J >$ matrix elements), CI mixing is allowed to couple the CSFs to give rise to "avoided crossings", then the CCD is converted into a so-called state correlation diagram ( SCD ).
CSFs that Differ by Two Spin-Orbitals Interact Less Strongly than CSFs that Differ by One Spin-Orbital
The strengths of the couplings between pairs of CSFs whose energies cross are evaluated through the Slater-Condon rules. CSFs that differ by more than two spin-orbital occupancies do not couple; the Slater-Condon rules give vanishing Hamiltonian matrix elements for such pairs. Pairs that differ by two spin-orbitals $(e.g. |.. \phi_a... \phi_b...| \text{ vs } |.. \phi_{a'}... \phi_{b'}...| )$ have interaction strengths determined by the two-electron integrals < ab | a'b' > - < ab | b'a'>. Pairs that differ by a single spin-orbital $(e.g. |.. \phi_a... ...| \text{ vs } |.. \phi_{a'}... ...| )$ are coupled by the one- and two-electron parts of $H: < a | f | a'> + \sum\limits_j [< aj | a'j> - < aj | ja' > ].$ Usually, couplings among CSFs that differ by two spin-orbitals are much weaker than those among CSFs that differ by one spin-orbital. In the latter case, the full strength of H is brought to bear, whereas in the former, only the electron-electron coulomb potential is operative.
State Correlation Diagrams
In the SCD, the energies are connected by symmetry but the configurational nature as reflected in the $C_I$ coefficients changes as one passes through geometries where crossings in the CCD occur. The SCD is the ultimate product of an orbital and configuration symmetry and energy analysis and gives one the most useful information about whether reactions will or will not encounter barriers on the ground and excited state surfaces.
As an example of the application of CCD's and SCD's, consider the disrotatory closing of 1,3-butadiene to produce cyclobutene. The OCD given earlier for this proposed reaction path is reproduced below.
Recall that the symmetry labels e and o refer to the symmetries of the orbitals under reflection through the one mirror plane that is preserved throughout the proposed disrotatory closing. Low-energy configurations (assuming one is interested in the thermal or low-lying photochemically excited-state reactivity of this system) for the reactant molecule and their overall space and spin symmetry are as follows:
• $\pi_1^2\pi_2^2 = 1e^21o^2, ^1Even$
• $\pi_1^2\pi_2^1\pi_3^1 = 1e^21o^12e^1, ^3\text{Odd and }^1\text{Odd.}$
For the product molecule, on the other hand, the low-lying states are
• $\sigma^2\pi^2 = 1e^22e^2, ^1\text{Even}$
• $\sigma^2\pi^1\pi^{\text{*}1} = 1e^22e^11o^1, ^3\text{Odd and } ^1\text{Odd.}$
Notice that although the lowest energy configuration at the reactant geometry $\pi_1^2\pi_2^2 = 1e^21o^2$ and the lowest energy configuration at the product geometry $\sigma^2\pi^2 = 1e^22e^2$ are both of $^1$Even symmetry, they are not the same configurations; they involve occupancy of different symmetry orbitals.
In constructing the CCD, one must trace the energies of all four of the above CSFs (actually there are more because the singlet and triplet excited CSFs must be treated independently) along the proposed reaction path. In doing so, one must realize that the $1e^21o^2$ CSF has low energy on the reactant side of the CCD because it corresponds to $\pi_1^2\pi_2^2$ orbital occupancy, but on the product side, it corresponds to $\sigma^2\pi^{\text{*}2}$ orbital occupancy and is thus of very high energy. Likewise, the $1e^22e^2$ CSF has low energy on the product side where it is $\sigma^2\pi^2$ but high energy on the reactant side where it corresponds to $\pi_1^2\pi_3^2.$ The low-lying singly excited CSFs are $1e^22e^11o^1$ at both reactant and product geometries; in the former case, they correspond to $\pi_1^2\pi_2^1\pi_3^1$ occupancy and at the latter to $\sigma^2\pi^1\pi^{*1}$ occupancy. Plotting the energies of these CSFs along the disrotatory reaction path results in the CCD shown below.
Figure 12.1.2: Configuration correlation diagram (CCD) for the disrotatory closing of 1,3-butadiene to cyclobutene.
If the two $^1$Even CSFs which cross are allowed to interact (the Slater-Condon rules give their interaction strength in terms of the exchange integral $< |1e^21o^2 | H |1e^22e^2 | > = < 1o1o | 2e2e > = K_{1o,2e}$ ) to produce states which are combinations of the two $^1$Even CSFs, the following SCD results:
Figure 12.1.3: State correlation diagram (SCD) that results when the two $^1$Even CSFs interact and their crossing is avoided.
This SCD predicts that the thermal (i.e., on the ground electronic surface) disrotatory rearrangement of 1,3-butadiene to produce cyclobutene will experience a symmetry-imposed barrier which arises because of the avoided crossing of the two $^1$Even configurations; this avoidance occurs because the orbital occupancy pattern (i.e., the configuration) which is best for the ground state of the reactant is not identical to that of the product molecule. The SCD also predicts that there should be no symmetry-imposed barrier for the singlet or triplet excited-state rearrangement, although the reaction leading from excited 1,3-butadiene to excited cyclobutene may be endothermic on the grounds of bond strengths alone.
It is also possible to infer from the SCD that excitation of the lowest singlet $\pi\pi^{\text{*}}$ state of 1,3-butadiene would involve a low quantum yield for producing cyclobutene and would, in fact, produce ground-state butadiene. As the reaction proceeds along the singlet $\pi\pi^{\text{*}}$ surface, this $^1$Odd state intersects the ground $^1$Even surface on the reactant side of the diagram; internal conversion (i.e., quenching from the $^1$Odd to the $^1$Even surface, induced by using a vibration of odd symmetry to "digest" the excess energy, much like vibronic borrowing in spectroscopy) can lead to production of ground-state reactant molecules. Some fraction of such events will lead to the system remaining on the $^1$Odd surface until, further along the reaction path, the $^1$Odd surface again intersects the $^1$Even surface on the product side, at which time quenching to produce ground-state products can occur. Although, in principle, it is possible for some fraction of the events to follow the $^1$Odd surface beyond this second intersection and thus lead to $^1$Odd product molecules that might fluoresce, quenching is known to be rapid in most polyatomic molecules; as a result, reactions which are chemiluminescent are rare. An appropriate introduction to the use of OCD's, CCD's, and SCD's, as well as the radiationless processes that can occur in thermal and photochemical reactions, is given in the text Energetic Principles of Chemical Reactions, J. Simons, Jones and Bartlett, Boston (1983).
As chemists, much of our intuition concerning chemical bonds is built on simple models introduced in undergraduate chemistry courses. The detailed examination of the $H_2$ molecule via the valence bond and molecular orbital approaches forms the basis of our thinking about bonding when confronted with new systems. Let us examine this model system in further detail to explore the electronic states that arise by occupying two orbitals (derived from the two 1s orbitals on the two hydrogen atoms) with two electrons.
In total, there exist six electronic states for all such two-orbital, two-electron systems. The heterolytic fragments X + Y: and X: + Y produce two singlet states; the homolytic fragments X$\cdot$ + Y$\cdot$ produce one singlet state and a set of three triplet states having $M_S$ = 1, 0, and -1. Understanding the relative energies of these six states, their bonding and antibonding characters, and which molecular state dissociates to which asymptote is important.
Before proceeding, it is important to clarify the notation (e.g., X$\cdot$, Y$\cdot$, X:, Y, etc.), which is designed to be applicable to neutral as well as charged species. In all cases considered here, only two electrons play active roles in the bond formation. These electrons are represented by the dots. The symbols X$\cdot$ and Y$\cdot$ are used to denote species in which a single electron is attached to the respective fragment. By X:, we mean that both electrons are attached to the X fragment; Y means that neither electron resides on the Y fragment. Let us now examine the various bonding situations that can occur; these examples will help illustrate and further clarify this notation.
The $H_2$ Case in Which Homolytic Bond Cleavage is Favored
To consider why the two-orbital, two-electron single-bond formation case can be more complex than often thought, let us consider the $H_2$ system in more detail. In the molecular orbital description of $H_2$, both the bonding $\sigma_g$ and antibonding $\sigma_u$ MOs appear. There are two electrons that can both occupy the $\sigma_g$ MO to yield the ground electronic state $H_2(^1\Sigma_g^+, \sigma_g^2)$; however, they can also occupy both orbitals to yield $^3\Sigma_u^+(\sigma_g^1\sigma_u^1)$ and $^1\Sigma_u^+(\sigma_g^1\sigma_u^1)$, or both can occupy the $\sigma_u$ MO to give the $^1\Sigma_g^+(\sigma_u^2)$ state. As demonstrated explicitly below, these latter two states dissociate heterolytically to X + Y: = $H^+ + H^-$, and are sufficiently high in energy relative to X$\cdot$ + Y$\cdot$ = H + H that we ordinarily can ignore them. However, their presence and character are important in the development of a full treatment of the molecular orbital model for $H_2$ and are essential to a proper treatment of cases in which heterolytic bond cleavage is favored.
Cases in Which Heterolytic Bond Cleavage is Favored
For some systems one or both of the heterolytic bond dissociation asymptotes (e.g., X + Y: or X: + Y) may be lower in energy than the homolytic bond dissociation asymptote. Thus, the states that are analogues of the $^1\Sigma_u^+(\sigma_g^1\sigma_u^1)$ and $^1\Sigma_g^+(\sigma_u^2)$ states of $H_2$ can no longer be ignored in understanding the valence states of the XY molecules. This situation arises quite naturally in systems involving transition metals, where interactions between empty metal or metal ion orbitals and 2-electron donor ligands are ubiquitous.
Two classes of systems illustrate cases for which heterolytic bond dissociation lies lower than the homolytic products. The first involves transition metal dimer cations, $M_2^+$. Especially for metals to the right side of the periodic table, such cations can be considered to have ground-state electron configurations with $\sigma^2d^nd^{n+1}$ character, where the d electrons are not heavily involved in the bonding and the $\sigma$ bond is formed primarily from the metal atom s orbitals. If the $\sigma$ bond is homolytically broken, one forms $X\cdot + Y\cdot = M(s^1d^{n+1}) + M^+ (s^1d^n).$ For most metals, this dissociation asymptote lies higher in energy than the heterolytic products X: + Y = M $(s^2d^n)$ + $M^+ (s^0d^{n+1})$, since the latter electron configurations correspond to the ground states for the neutrals and ions, respectively. A prototypical species which fits this bonding picture is $Ni_2^+.$
The second type of system in which heterolytic cleavage is favored arises with a metal-ligand complex having an atomic metal ion (with an $s^0d^{n+1}$ configuration) and a two-electron donor, L:. A prototype is $(Ag C_6H_6)^+$, which was observed to photodissociate to form $X \cdot + Y \cdot = Ag(^2S, s^1d^{10}) + C_6H_6^+( ^2B_1)$ rather than the lower-energy (heterolytically cleaved) dissociation limit Y + X: = $Ag^+( ^1S, s^0d^{10}) + C_6H_6 (^1A_1).$
Analysis of Two-Electron, Two-Orbital, Single-Bond Formation
The resultant family of six electronic states can be described in terms of the six configuration state functions (CSFs) that arise when one occupies the pair of bonding $\sigma \text{ and antibonding } \sigma^{\text{*}}$ molecular orbitals with two electrons. The CSFs are combinations of Slater determinants formed to generate proper spin- and spatial symmetry- functions.
The spin- and spatial- symmetry adapted N-electron functions referred to as CSFs can be formed from one or more Slater determinants. For example, to describe the singlet CSF corresponding to the closed-shell $\sigma^2$ orbital occupancy, a single Slater determinant
$^1\Sigma (0) = | \sigma\alpha \sigma\beta | = \dfrac{1}{\sqrt{2}}\left[ \sigma\alpha(1)\sigma\beta(2) - \sigma\beta(1)\sigma\alpha(2) \right] \nonumber$
suffices. An analogous expression for the $(\sigma^{\text{*}})^2$ CSF is given by
$^1\Sigma^{\text{**}}(0) = | \sigma^{\text{*}}\alpha\sigma^{\text{*}}\beta | = \dfrac{1}{\sqrt{2}} \left[ \sigma^{\text{*}}\alpha(1) \sigma^{\text{*}}\beta(2) - \sigma^{\text{*}}\alpha(2)\sigma^{\text{*}}\beta(1) \right]. \nonumber$
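The defining property of these determinants, antisymmetry under exchange of the two electrons, can be checked numerically. In the sketch below the spatial orbital is a hypothetical 1-D stand-in (a Gaussian); only the permutational structure of the determinant matters.

```python
import math

# Check the antisymmetry of the 2x2 Slater determinant |sigma alpha, sigma beta|.
def sigma(x):
    return math.exp(-x * x)            # placeholder spatial orbital (not a real MO)

def alpha(s):
    return 1.0 if s == +1 else 0.0     # spin-up function

def beta(s):
    return 1.0 if s == -1 else 0.0     # spin-down function

def spin_orb(chi_space, chi_spin, electron):
    x, s = electron                    # electron = (spatial coordinate, spin label)
    return chi_space(x) * chi_spin(s)

def slater_det(e1, e2):
    """(1/sqrt 2)[sigma alpha(1) sigma beta(2) - sigma beta(1) sigma alpha(2)]"""
    return (spin_orb(sigma, alpha, e1) * spin_orb(sigma, beta, e2)
            - spin_orb(sigma, beta, e1) * spin_orb(sigma, alpha, e2)) / math.sqrt(2)

e1, e2 = (0.3, +1), (-0.7, -1)         # two electrons: (x, spin)
```

Swapping the two electron labels flips the sign of the wavefunction, and placing both electrons at identical coordinates (same position and spin) makes it vanish, which is the Pauli principle in miniature.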
Essential CI
The above examples of the use of CCD's show that, as motion takes place along the proposed reaction path, geometries may be encountered at which it is essential to describe the electronic wavefunction in terms of a linear combination of more than one CSF:
$\Psi = \sum\limits_I C_I \Phi_I \nonumber$
where the $\Phi_I$ are the CSFs which are undergoing the avoided crossing. Such essential configuration mixing is often referred to as treating "essential CI".
Dynamical CI
To achieve reasonable chemical accuracy (e.g., ± 5 kcal/mole) in electronic structure calculations it is necessary to use a multiconfigurational $\Psi$ even in situations where no obvious strong configuration mixing (e.g., crossings of CSF energies) is present. For example, in describing the $\pi^2$ bonding electron pair of an olefin or the $ns^2$ electron pair in alkaline earth atoms, it is important to mix in doubly excited CSFs of the form $(\pi^{\text{*}})^2$ and $np^2$, respectively. The reasons for introducing such a CI-level treatment were treated for an alkaline earth atom earlier in this chapter.
Briefly, the physical importance of such doubly-excited CSFs can be made clear by using the identity:
$C_1 |..\phi\alpha \phi \beta ..| - C_2 |..\phi ' \alpha \phi ' \beta ..| \nonumber$

$= \frac{C_1}{2}\left[ | ..(\phi - x\phi ')\alpha(\phi + x\phi ')\beta ..| - | ..(\phi - x\phi ')\beta (\phi + x\phi ')\alpha ..| \right], \nonumber$

where

$x = \sqrt{\frac{C_2}{C_1}}. \nonumber$
This allows one to interpret the combination of two CSFs which differ from one another by a double excitation from one orbital $(\phi)$ to another $(\phi ')$ as equivalent to a singlet coupling of two different (non-orthogonal) orbitals $(\phi - x\phi ') \text{ and } (\phi + x\phi ')$. This picture is closely related to the so-called generalized valence bond (GVB) model that W. A. Goddard and his co-workers have developed (see W. A. Goddard and L. B. Harding, Annu. Rev. Phys. Chem. 29 , 363 (1978)). In the simplest embodiment of the GVB model, each electron pair in the atom or molecule is correlated by mixing in a CSF in which that electron pair is "doubly excited" to a correlating orbital. The direct product of all such pair correlations generates the GVB-type wavefunction. In the GVB approach, these electron correlations are not specified in terms of double excitations involving CSFs formed from orthonormal spin orbitals; instead, explicitly non-orthogonal GVB orbitals are used as described above, but the result is the same as one would obtain using the direct product of doubly excited CSFs.
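The identity above can be verified numerically at the level of the two-electron spatial factor (the singlet spin function is common to both sides, and the cross terms in $\phi\phi'$ cancel exactly when $x^2 = C_2/C_1$). The orbital amplitudes below are arbitrary test numbers, not output of any real calculation.

```python
import math

# Numerical check of the polarized-orbital-pair identity:
#   C1 phi(1)phi(2) - C2 phi'(1)phi'(2)
#     = (C1/2)[(phi - x phi')(1)(phi + x phi')(2) + (phi + x phi')(1)(phi - x phi')(2)]
# with x = sqrt(C2/C1).
C1, C2 = 0.95, 0.18
x = math.sqrt(C2 / C1)

phi_1, phi_2 = 0.62, -0.11       # phi  evaluated at electrons 1 and 2 (arbitrary)
phip_1, phip_2 = 0.27, 0.83      # phi' evaluated at electrons 1 and 2 (arbitrary)

# Left side: the two closed-shell spatial products
lhs = C1 * phi_1 * phi_2 - C2 * phip_1 * phip_2

# Right side: singlet coupling of the polarized pair (phi - x phi'), (phi + x phi')
minus_1, minus_2 = phi_1 - x * phip_1, phi_2 - x * phip_2
plus_1, plus_2 = phi_1 + x * phip_1, phi_2 + x * phip_2
rhs = 0.5 * C1 * (minus_1 * plus_2 + plus_1 * minus_2)
```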
In the olefin example mentioned above, the two non-orthogonal "polarized orbital pairs" involve mixing the $\pi$ and $\pi^{\text{*}}$ orbitals to produce two left-right polarized orbitals as depicted below:
In this case, one says that the $\pi^2$ electron pair undergoes left-right correlation when the $(\pi^{\text{*}})^2$ CSF is mixed into the CI wavefunction.
In the alkaline earth atom case, the polarized orbital pairs are formed by mixing the ns and np orbitals (actually, one must mix in equal amounts of $p_1, p_{-1} , \text{ and }p_0 \text{ orbitals to preserve overall } ^1S$ symmetry in this case), and give rise to angular correlation of the electron pair. Use of an $(n+1)s^2$ CSF for the alkaline earth calculation would contribute in-out or radial correlation because, in this case, the polarized orbital pair formed from the ns and (n+1)s orbitals would be radially polarized.
The use of doubly excited CSFs is thus seen as a mechanism by which $\Psi$ can place electron pairs, which in the single-configuration picture occupy the same orbital, into different regions of space (i.e., one into a member of the polarized orbital pair), thereby lowering their mutual coulombic repulsions. Such electron correlation effects are referred to as "dynamical electron correlation"; they are extremely important to include if one expects to achieve chemically meaningful accuracy (i.e., ± 5 kcal/mole).
Treating the full internal nuclear-motion dynamics of a polyatomic molecule is complicated. It is conventional to examine the rotational movement of a hypothetical "rigid" molecule as well as the vibrational motion of a non-rotating molecule, and to then treat the rotation-vibration couplings using perturbation theory.
• 13.1: Rotational Motions of Rigid Molecules
In Chapter 3 and Appendix G the energy levels and wavefunctions that describe the rotation of rigid molecules are described. Therefore, in this Chapter these results will be summarized briefly and emphasis will be placed on detailing how the corresponding rotational Schrödinger equations are obtained and the assumptions and limitations underlying them.
• 13.2: Vibrational Motion Within the Harmonic Approximation
The simple harmonic motion of a diatomic molecule will not be repeated here. Instead, emphasis is placed on polyatomic molecules whose electronic energy's dependence on the 3N Cartesian coordinates of its N atoms can be written (approximately) in terms of a Taylor series expansion about a stable local minimum. We therefore assume that the molecule of interest exists in an electronic state for which the geometry being considered is stable (i.e., not subject to spontaneous geometrical distortion).
• 13.3: Anharmonicity
The electronic energy of a molecule, ion, or radical at geometries near a stable structure can be expanded in a Taylor series in powers of displacement coordinates as was done in the preceding section of this Chapter. This expansion leads to a picture of uncoupled harmonic vibrational energy levels.
Thumbnail: A model visualizing molecular vibrations. Two atoms are connected by a spring to account for the flexibility of the bond. (CC SA-BY 3.0; Tby11)
13: Molecular Rotation and Vibration
In Chapter 3 and Appendix G the energy levels and wavefunctions that describe the rotation of rigid molecules are described. Therefore, in this Chapter these results will be summarized briefly and emphasis will be placed on detailing how the corresponding rotational Schrödinger equations are obtained and the assumptions and limitations underlying them.
Linear Molecules
As given in Chapter 3, the Schrödinger equation for the angular motion of a rigid (i.e., having fixed bond length R) diatomic molecule is
$-\dfrac{\hbar^2}{2\mu}\left[ \dfrac{1}{R^2 \sin \theta}\dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial}{\partial \theta} \right) + \left( \dfrac{1}{R^2\sin^2\theta} \right) \dfrac{\partial^2}{\partial \phi^2} \right] \psi = E \psi \nonumber$
or more succinctly in terms of the angular momentum operator as
$\dfrac{L^2\psi}{2\mu R^2} = E \psi \nonumber$
The Hamiltonian in this problem contains only the kinetic energy of rotation; no potential energy is present because the molecule is undergoing unhindered "free rotation". The angles $\theta \text{ and } \phi$ describe the orientation of the diatomic molecule's axis relative to a laboratory-fixed coordinate system, and $\mu$ is the reduced mass of the diatomic molecule
$\mu=\dfrac{m_1m_2}{m_1+m_2}. \nonumber$
The Eigenfunctions and Eigenvalues
The eigenvalues corresponding to each eigenfunction are straightforward to find because $H_{rot}$ is proportional to the $L^2$ operator whose eigenvalues have already been determined. The resultant rotational energies are given as:
$E_J = \hbar ^2\dfrac{J(J+1)}{(2\mu R^2)} = BJ(J+1) \nonumber$
and are independent of $M$. Thus each energy level is labeled by $J$ and is $(2J+1)$-fold degenerate (because $M$ ranges from $-J$ to $J$). The rotational constant $B$ (defined as $\hbar^2/2\mu R^2$) depends on the molecule's bond length and reduced mass. Spacings between successive rotational levels (which are of spectroscopic relevance because angular momentum selection rules often restrict $\Delta J$ to 1, 0, and -1) are given by
$\Delta E = B(J+1)(J+2) - BJ(J+1) = 2B(J+1). \nonumber$
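Because the predicted line positions depend only on $B$, the even $2B$ spacing can be checked in units of $B$, with no particular molecule assumed:

```python
# Rigid-rotor J -> J+1 transition energies Delta E = 2B(J+1), in units of B.
def E(J, B=1.0):
    """Rigid-rotor level energy B J(J+1)."""
    return B * J * (J + 1)

lines = [E(J + 1) - E(J) for J in range(6)]
# successive lines are separated by a constant 2B, so the line positions
# themselves grow linearly with J
```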
Within this "rigid rotor" model, the absorption spectrum of a rigid diatomic molecule should display a series of peaks, each of which corresponds to a specific $J \rightarrow J + 1$ transition. The energies at which these peaks occur should grow linearly with J. An example of such a progression of rotational lines is shown in the figure below.
The energies at which the rotational transitions occur appear to fit the $\Delta E = 2B(J+1)$ formula rather well. The intensities of transitions from level $J$ to level $J+1$ vary strongly with $J$ primarily because the population of molecules in the absorbing level varies with $J$. These populations $P_J$ are given, when the system is at equilibrium at temperature $T$, in terms of the degeneracy $(2J+1)$ of the $J^{th}$ level and the energy of this level $BJ(J+1)$:
$P_J = Q^{-1}(2J+1) e^{\dfrac{-BJ(J+1)}{kT}}, \nonumber$
where $Q$ is the rotational partition function:
$Q = \sum\limits_J (2J+1)e^{\dfrac{-BJ(J+1)}{kT}}. \nonumber$
For low values of $J$, the degeneracy is low and the $e^{\frac{-BJ(J+1)}{kT}}$ factor is near unity. As J increases, the degeneracy grows linearly but the $e^{\frac{-BJ(J+1)}{kT}}$ factor decreases more rapidly. As a result, there is a value of J, given by taking the derivative of $(2J+1)e^{- \frac{BJ(J+1)}{kT}}$ with respect to J and setting it equal to zero,
$2J_{\text{max}} + 1 = \sqrt{\dfrac{2kT}{B}} \nonumber$
at which the intensity of the rotational transition is expected to reach its maximum.
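The interplay between the growing degeneracy and the decaying Boltzmann factor is easy to explore numerically. In the sketch below the ratio $B/kT = 0.005$ is an arbitrary illustrative choice, not data for any specific molecule:

```python
import math

# Rigid-rotor Boltzmann populations P_J = (2J+1) exp(-B J(J+1)/kT) / Q,
# and a check of the most-populated level against 2 J_max + 1 = sqrt(2kT/B).
B_over_kT = 0.005                      # dimensionless ratio, illustrative only

def weight(J):
    return (2 * J + 1) * math.exp(-B_over_kT * J * (J + 1))

N = 500                                # enough terms for the sum to converge here
Q = sum(weight(J) for J in range(N))   # rotational partition function
P = [weight(J) / Q for J in range(N)]

J_most = max(range(N), key=lambda J: P[J])            # discrete maximum
J_max_formula = 0.5 * (math.sqrt(2.0 / B_over_kT) - 1.0)  # continuous estimate
```

The discrete maximum agrees with the derivative formula to within one unit of $J$, as expected for a smooth distribution sampled at integer $J$.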
The eigenfunctions belonging to these energy levels are the spherical harmonics $Y_{L,M}(\theta ,\phi)$ which are normalized according to
$\int\limits^\pi_0 \int\limits^{2\pi}_0 Y^{\text{*}}_{L,M}(\theta ,\phi) Y_{L',M'}(\theta ,\phi) \sin\theta \,d\theta d\phi = \delta_{L,L'}\delta_{M,M'}. \nonumber$
These functions are identical to those that appear in the solution of the angular part of Hydrogen-like atoms. The above energy levels and eigenfunctions also apply to the rotation of rigid linear polyatomic molecules; the only difference is that the moment of inertia $I$ entering into the rotational energy expression is given by
$I = \sum \limits_am_a R_a^2 \nonumber$
where $m_a$ is the mass of the $a^{th} \text{ atom and } R_a$ is its distance from the center of mass of the molecule. This moment of inertia replaces $mR^2$ in the earlier rotational energy level expressions.
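The moment-of-inertia formula is simple to evaluate once the center of mass is located. The masses and positions below are illustrative placeholders for a generic symmetric linear triatomic (an O-C-O-like mass pattern), not fitted spectroscopic data:

```python
# I = sum_a m_a R_a^2 about the center of mass for a linear molecule.
masses = [16.0, 12.0, 16.0]      # amu, hypothetical symmetric triatomic
z = [-1.16, 0.0, 1.16]           # positions along the molecular axis, angstrom

M = sum(masses)
z_com = sum(m * zi for m, zi in zip(masses, z)) / M   # center of mass
I = sum(m * (zi - z_com) ** 2 for m, zi in zip(masses, z))
# By symmetry the center of mass sits on the central atom, so the central
# atom contributes nothing and I = 2 * m_end * d^2.
```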
Non-Linear Molecules
The rotational kinetic energy operator for a rigid polyatomic molecule is shown in Appendix G to be
$H_{rot} = \dfrac{J_a^2}{2I_a} + \dfrac{J_b^2}{2I_b} + \dfrac{J_c^2}{2I_c} \nonumber$
where the $I_k$ (k = a, b, c) are the three principal moments of inertia of the molecule (the eigenvalues of the moment of inertia tensor). The elements of this tensor, in a Cartesian coordinate system (K, K' = X, Y, Z) whose origin is located at the center of mass of the molecule, are computed as:
$I_{K,K} = \sum\limits_j m_j (R_j^2 - R^2_{K,j}) (\text{for K = K'}) \nonumber$
$I_{K,K'} = -\sum\limits_jm_jR_{K,j}R_{K',j} (\text{ for K }\neq\text{ K'}). \nonumber$
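These two formulas can be evaluated directly. The geometry below is an illustrative water-like placeholder, deliberately placed in the y-z plane and oriented so that the Cartesian axes are already principal axes (all off-diagonal elements vanish); it is not meant as an accurate structure.

```python
# Build the inertia tensor I_{K,K'} from the formulas above.
atoms = [
    (16.0, (0.0, 0.0, 0.0)),      # heavy central atom (hypothetical geometry)
    (1.0, (0.0, 0.757, 0.587)),   # two light atoms, symmetric about the z axis
    (1.0, (0.0, -0.757, 0.587)),
]

# Shift to the center of mass first: the formulas assume a COM origin.
M = sum(m for m, _ in atoms)
com = tuple(sum(m * r[k] for m, r in atoms) / M for k in range(3))
shifted = [(m, tuple(r[k] - com[k] for k in range(3))) for m, r in atoms]

def inertia(K, Kp):
    if K == Kp:                   # diagonal: sum_j m_j (R_j^2 - R_{K,j}^2)
        return sum(m * (sum(c * c for c in r) - r[K] ** 2) for m, r in shifted)
    return -sum(m * r[K] * r[Kp] for m, r in shifted)   # off-diagonal

I_tensor = [[inertia(K, Kp) for Kp in range(3)] for K in range(3)]
```

Since all atoms lie in the x = 0 plane, the diagonal elements also obey the perpendicular-axis relation $I_{xx} = I_{yy} + I_{zz}$, a handy consistency check.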
The components of the quantum mechanical angular momentum operators along the three principal axes are:
$J_a = -i\hbar \cos \chi \left[ \cot \theta \dfrac{\partial}{\partial \chi} - \dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \phi} \right] - i\hbar \sin\chi \dfrac{\partial}{\partial \theta} \nonumber$
$J_b = i\hbar \sin \chi \left[ \cot \theta \dfrac{\partial}{\partial \chi}- \dfrac{1}{\sin\theta}\dfrac{\partial}{\partial \phi} \right] - i\hbar \cos\chi \dfrac{\partial}{\partial \theta} \nonumber$
$J_c = -i\hbar\dfrac{\partial}{\partial \chi}. \nonumber$
The angles $\theta, \phi, \text{ and } \chi$ are the Euler angles needed to specify the orientation of the rigid molecule relative to a laboratory-fixed coordinate system. The corresponding square of the total angular momentum operator $J^2$ can be obtained as
$J^2 = J_a^2 + J_b^2 + J_c^2 = -\hbar^2\left[ \dfrac{\partial^2}{\partial\theta^2} + \cot\theta \dfrac{\partial}{\partial \theta} + \dfrac{1}{\sin^2\theta}\left( \dfrac{\partial^2}{\partial\phi^2} + \dfrac{\partial^2}{\partial\chi^2} - 2\cos\theta \dfrac{\partial^2}{\partial\phi\partial\chi}\right)\right], \nonumber$
and the component along the lab-fixed Z axis $J_Z \text{ is } -i\hbar\frac{\partial}{\partial \phi}.$
The Eigenfunctions and Eigenvalues for Special Cases
Spherical Tops
When the three principal moment of inertia values are identical, the molecule is termed a spherical top. In this case, the total rotational energy can be expressed in terms of the total angular momentum operator $J^2$
$H_{rot} = \dfrac{J^2}{2I} \nonumber$
As a result, the eigenfunctions of $H_{rot}$ are those of $J^2$ (and of $J_a$ as well as $J_Z$, both of which commute with $J^2$ and with one another; $J_Z$ is the component of $J$ along the lab-fixed Z-axis and commutes with $J_a$ because $J_Z = - i\hbar\frac{\partial}{\partial \phi} \text{ and } J_a = -i\hbar\frac{\partial}{\partial\chi}$ act on different angles). The energies associated with such eigenfunctions are
$E(J,M,K) = \hbar^2\dfrac{J(J+1)}{2I}, \nonumber$
for all K (i.e., $J_a$ quantum numbers) ranging from -J to J in unit steps and for all M (i.e., $J_Z$ quantum numbers) ranging from -J to J. Each energy level is therefore $(2J + 1)^2$ degenerate because there are 2J + 1 possible K values and 2J + 1 possible M values for each J.
The eigenfunctions of $J^2, J_Z \text{ and } J_a , |J,M,K \rangle$ are given in terms of the set of rotation matrices $D_{J,M,K}:$
$|J,M,K \rangle = \sqrt{\dfrac{2J + 1}{8\pi^2}}D^{\text{*}}_{J,M,K}(\theta ,\phi ,\chi) \nonumber$
which obey
$J^2|J,M,K \rangle = \hbar^2 J(J+1) |J,M,K\rangle, \nonumber$
$J_a|J,M,K\rangle = \hbar K|J,M,K\rangle, \nonumber$
$J_Z|J,M,K\rangle = \hbar M|J,M,K\rangle. \nonumber$
Symmetric Tops
Molecules for which two of the three principal moments of inertia are equal are called symmetric tops. Those for which the unique moment of inertia is smaller than the other two are termed prolate symmetric tops; if the unique moment of inertia is larger than the others, the molecule is an oblate symmetric top. Again, the rotational kinetic energy, which is the full rotational Hamiltonian, can be written in terms of the total rotational angular momentum operator $J^2$ and the component of angular momentum along the axis with the unique principal moment of inertia:
$H_{rot} = \dfrac{J^2}{2I} + J_a^2\left[ \dfrac{1}{2I_a} - \dfrac{1}{2I} \right] \nonumber$
for prolate tops and
$H_{rot} = \dfrac{J^2}{2I} + J_c^2\left[ \dfrac{1}{2I_c} - \dfrac{1}{2I}\right] \nonumber$
for oblate tops.
As a result, the eigenfunctions of $H_{rot}$ are those of $J^2$ and $J_a$ or $J_c$ (and of $J_Z$), and the corresponding energy levels are:
$E(J,K,M) = \hbar^2\dfrac{J(J+1)}{2I} + \hbar^2K^2\left[ \dfrac{1}{2I_a} - \dfrac{1}{2I} \right], \nonumber$
for prolate tops, and
$E(J,K,M) = \hbar^2 \dfrac{J(J+1)}{2I} + \hbar^2K^2 \left[ \dfrac{1}{2I_c} - \dfrac{1}{2I}\right], \nonumber$
for oblate tops, again for K and M (i.e., $J_a \text{ or } J_c \text{ and } J_Z$ quantum numbers, respectively) ranging from -J to J in unit steps. Since the energy now depends on K, these levels are only 2J + 1 degenerate due to the 2J + 1 different M values that arise for each J value. The eigenfunctions $|J, M,K \rangle$ are the same rotation matrix functions as arise for the spherical-top case.
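The prolate-top energy formula can be sketched in a few lines. In spectroscopic practice it is usually written with rotational constants $B = \hbar^2/2I$ and $A = \hbar^2/2I_a$ (divided by $hc$, in cm⁻¹); the numerical values below are assumed for illustration only.

```python
# Assumed rotational constants (cm^-1) for an illustrative prolate top, A > B
A, B = 5.24, 0.94

def E_prolate(J, K):
    """E(J,K)/hc in cm^-1 = B J(J+1) + (A - B) K^2, per the formula above."""
    return B * J * (J + 1) + (A - B) * K**2

for J in range(3):
    for K in range(-J, J + 1):
        print(J, K, E_prolate(J, K))
```

Note that levels with $+K$ and $-K$ coincide, and each $(J,K)$ level carries the additional $(2J+1)$-fold degeneracy in $M$ noted in the text.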
Asymmetric Tops
The rotational eigenfunctions and energy levels of a molecule for which all three principal moments of inertia are distinct (a so-called asymmetric top) can not easily be expressed in terms of the angular momentum eigenstates and the J, M, and K quantum numbers. However, given the three principal moments of inertia $I_a, I_b, \text{ and } I_c$, a matrix representation of each of the three contributions to the rotational Hamiltonian
$H_{rot} = \dfrac{J_a^2}{2I_a} + \dfrac{J_b^2}{2I_b} + \dfrac{J_c^2}{2I_c} \nonumber$
can be formed within a basis set of the {|J, M, K>} rotation matrix functions. This matrix will not be diagonal because the |J, M, K> functions are not eigenfunctions of the asymmetric top $H_{rot}$. However, the matrix can be formed in this basis and subsequently brought to diagonal form by finding its eigenvectors {$C_{n,J,M,K}$} and its eigenvalues $\{E_n\}$. The vector coefficients express the asymmetric top eigenstates as
$\Psi_n(\theta , \phi , \chi ) = \sum\limits_{J, M, K} C_{n, J, M, K}|J, M, K\rangle. \nonumber$
Because the total angular momentum $J^2$ still commutes with $H_{rot}$, each such eigenstate will contain only one J-value, and hence $\Psi_n$ can also be labeled by a J quantum number:
$\Psi_{n,J}(\theta , \phi, \chi) = \sum\limits_{M,K} C_{n, J, M, K} |J, M, K\rangle. \nonumber$
To form the only non-zero matrix elements of $H_{rot}$ within the $|J, M, K \rangle$ basis, one can use the following properties of the rotation-matrix functions:
$\langle J, M, K| J_a^2 | J, M, K \rangle =\langle J, M, K| J_b^2 |J, M, K \rangle = \dfrac{1}{2} \langle J,M, K| J^2 - J_c^2 | J, M, K \rangle = \dfrac{\hbar^2}{2} [J(J+1) - K^2], \nonumber$
$\langle J, M, K | J_c^2 |J, M, K \rangle = \hbar^2 K^2, \nonumber$
$\langle J, M, K| J_a^2 | J, M, K \pm 2 \rangle = -\langle J, M, K | J_b^2 | J, M, K \pm 2 \rangle = \dfrac{\hbar^2}{4} \sqrt{J(J+1) - K(K\pm 1)}\sqrt{J(J+1) - (K \pm 1)(K \pm 2)} \nonumber$
$\langle J, M, K| J_c^2 | J, M, K \pm 2 \rangle = 0. \nonumber$
Each of the matrix elements of $J_a^2, J_b^2, \text{ and }J_c^2$ must, of course, be multiplied, respectively, by $\frac{1}{2I_a}, \frac{1}{2I_b}, \text{ and } \frac{1}{2I_c}$, and summed together to form the matrix representation of $H_{rot}.$ The diagonalization of this matrix then provides the asymmetric top energies and wavefunctions.
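The procedure just described can be sketched numerically. Since M plays no role, it suffices to build one J-block in the |J,K⟩ basis. The block below uses rotational constants $A = \hbar^2/2I_a$, $B = \hbar^2/2I_b$, $C = \hbar^2/2I_c$; the diagonal elements are then $\tfrac{1}{2}(A+B)[J(J+1)-K^2] + CK^2$ and the $K \leftrightarrow K\pm2$ elements are $\tfrac{1}{4}(A-B)$ times the square-root factors. The water-like constants are assumed values in cm⁻¹.

```python
import numpy as np

def asymmetric_top_energies(J, A, B, C):
    """Diagonalize one J-block of H_rot in the |J,K> symmetric-top basis."""
    n = 2 * J + 1
    Ks = np.arange(-J, J + 1)
    H = np.zeros((n, n))
    f = lambda K: (np.sqrt(J*(J+1) - K*(K+1)) *
                   np.sqrt(J*(J+1) - (K+1)*(K+2)))
    for i, K in enumerate(Ks):
        # diagonal: (A+B)/2 [J(J+1) - K^2] + C K^2
        H[i, i] = 0.5 * (A + B) * (J*(J+1) - K**2) + C * K**2
        # off-diagonal K <-> K+2: (A-B)/4 * sqrt factors
        if K + 2 <= J:
            H[i, i + 2] = H[i + 2, i] = 0.25 * (A - B) * f(K)
    return np.linalg.eigvalsh(H)

# Assumed water-like constants (cm^-1); J = 1 has three levels
E = asymmetric_top_energies(J=1, A=27.9, B=14.5, C=9.3)
print(E)
```

A useful analytic check: for J = 1 the three asymmetric-top levels are exactly B+C, A+C, and A+B, which the diagonalization reproduces.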
Vibrational Motion Within the Harmonic Approximation
The simple harmonic motion of a diatomic molecule was treated in Chapter 1, and will not be repeated here. Instead, emphasis is placed on polyatomic molecules whose electronic energy's dependence on the 3N Cartesian coordinates of its N atoms can be written (approximately) in terms of a Taylor series expansion about a stable local minimum. We therefore assume that the molecule of interest exists in an electronic state for which the geometry being considered is stable (i.e., not subject to spontaneous geometrical distortion).
The Taylor series expansion of the electronic energy is written as:
$V(q_k) = V(0) + \sum\limits_k \left( \dfrac{\partial V}{\partial q_k} \right) q_k + \dfrac{1}{2}\sum\limits_{j,k}q_j H_{j,k}q_k + ... , \nonumber$
where V(0) is the value of the electronic energy at the stable geometry under study, $q_k$ is the displacement of the $k^{th}$ Cartesian coordinate away from this starting position, $\left( \frac{\partial V}{\partial q_k} \right)$ is the gradient of the electronic energy along this direction, and the $H_{j,k}$ are the second derivative or Hessian matrix elements along these directions $H_{j,k} = \left( \frac{\partial^2 V}{\partial q_j\partial q_k} \right).$ If the starting geometry corresponds to a stable species, the gradient terms will all vanish (as they do at any minimum, maximum, or saddle point), and the Hessian matrix will possess 3N - 5 (for linear species) or 3N - 6 (for non-linear molecules) positive eigenvalues and 5 or 6 zero eigenvalues (corresponding to 3 translational and 2 or 3 rotational motions of the molecule). If the Hessian has one negative eigenvalue, the geometry corresponds to a transition state (these situations are discussed in detail in Chapter 20).
From now on, we assume that the geometry under study corresponds to that of a stable minimum about which vibrational motion occurs. The treatment of unstable geometries is of great importance to chemistry, but this Chapter deals with vibrations of stable species. For a good treatment of situations under which geometrical instability is expected to occur, see Chapter 2 of the text Energetic Principles of Chemical Reactions by J. Simons. A discussion of how local minima and transition states are located on electronic energy surfaces is provided in Chapter 20 of the present text.
The Newton Equations of Motion for Vibration
The Kinetic and Potential Energy Matrices
Truncating the Taylor series at the quadratic terms (assuming these terms dominate because only small displacements from the equilibrium geometry are of interest), one has the so-called harmonic potential:
$V(q_k) = V(0) + \dfrac{1}{2}\sum\limits_{j,k}q_j H_{j,k}q_k. \nonumber$
The classical mechanical equations of motion for the 3N {$q_k$} coordinates can be written in terms of the above potential energy and the following kinetic energy function:
$T = \dfrac{1}{2}\sum\limits_jm_j\dot{q_j}^2, \nonumber$
where $\dot{q_j}$ denotes the time rate of change of the coordinate $q_j \text{ and } m_j$ is the mass of the atom on which the $j^{th}$ Cartesian coordinate resides. The Newton equations thus obtained are:
$m_j \ddot{q_j} = - \sum\limits_k H_{j,k}q_k \nonumber$
where the force along the $j^{th}$ coordinate is given by minus the derivative of the potential V along this coordinate $\frac{\partial V}{\partial q_j}= \sum\limits_k H_{j,k}q_k$ within the harmonic approximation.
These classical equations can more compactly be expressed in terms of the time evolution of a set of so-called mass weighted Cartesian coordinates defined as:
$x_j = q_j\sqrt{m_j}, \nonumber$
in terms of which the Newton equations become
$\ddot{x_j} = -\sum\limits_k H'_{j,k}x_k \nonumber$
and the mass-weighted Hessian matrix elements are
$H'_{j,k} = H_{j,k}\dfrac{1}{\sqrt{m_jm_k}}. \nonumber$
The Harmonic Vibrational Energies and Normal Mode Eigenvectors
Assuming that the $x_j$ undergo some form of sinusoidal time evolution:
$x_j(t) = x_j(0)cos(\omega t), \nonumber$
and substituting this into the Newton equations produces a matrix eigenvalue equation:
$\omega^2x_j = \sum\limits_k H'_{j,k}x_k \nonumber$
in which the eigenvalues are the squares of the so-called normal mode vibrational frequencies and the eigenvectors give the amplitudes of motion along each of the 3N mass weighted Cartesian coordinates that belong to each mode.
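This eigenvalue problem is easy to demonstrate numerically. The sketch below uses a deliberately tiny system, two masses joined by one spring in one dimension, as a stand-in for a real 3N-dimensional Hessian; the force constant and masses are assumed values. It mass-weights the Hessian and takes square roots of the eigenvalues to get the normal-mode frequencies.

```python
import numpy as np

k = 1.0                  # assumed force constant
m1, m2 = 1.0, 2.0        # assumed masses

# Cartesian Hessian for V = (k/2)(q2 - q1)^2
H = np.array([[ k, -k],
              [-k,  k]], dtype=float)

# Mass-weighting: H'_{jk} = H_{jk} / sqrt(m_j m_k)
m = np.array([m1, m2])
Hp = H / np.sqrt(np.outer(m, m))

w2, modes = np.linalg.eigh(Hp)          # eigenvalues are omega^2
freqs = np.sqrt(np.clip(w2, 0, None))   # clip guards tiny negative round-off
print(freqs)
```

One eigenvalue is zero (the free translation of the pair), and the other is $\omega^2 = k(1/m_1 + 1/m_2) = k/\mu$, the familiar reduced-mass result, so the columns of `modes` give the translation and the stretching mode.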
Within this harmonic treatment of vibrational motion, the total vibrational energy of the molecule is given as
$E(v_1, v_2, ... v_{3N-5 \text{ or }6}) = \sum\limits_{j=1}^{3N-5 \text{ or } 6} \hbar\omega_j \left( v_j + \dfrac{1}{2} \right) \nonumber$
and the corresponding total vibrational wavefunction is a product of 3N-5 or 3N-6 harmonic oscillator functions $\Psi_{v_j}(x_j)$, one for each normal mode. Within this picture, the energy gap between one vibrational level and another in which one of the $v_j$ quantum numbers is increased by unity (the origin of this "selection rule" is discussed in Chapter 15) is
$\Delta E_{v_j \rightarrow v_j + 1} = \hbar\omega_j \nonumber$
The harmonic model thus predicts that the "fundamental" $(v=0 \rightarrow v = 1)$ and "hot band" $(v=1 \rightarrow v = 2)$ transitions should occur at the same energy, and the overtone $(v=0 \rightarrow v=2)$ transitions should occur at exactly twice this energy.
The Use of Symmetry
Symmetry Adapted Modes
It is often possible to simplify the calculation of the normal mode frequencies and eigenvectors by exploiting molecular point group symmetry. For molecules that possess symmetry, the electronic potential $V(q_j)$ displays symmetry with respect to displacements of symmetry equivalent Cartesian coordinates. For example, consider the water molecule at its $C_{2v}$ equilibrium geometry as illustrated in the figure below. A very small movement of the $H_2O$ molecule's left H atom in the positive x direction $(\Delta x_L)$ produces the same change in V as a correspondingly small displacement of the right H atom in the negative x direction $(-\Delta x_R).$ Similarly, movement of the left H in the positive y direction $(\Delta y_L)$ produces an energy change identical to movement of the right H in the positive y direction $(\Delta y_R).$
The equivalence of the pairs of Cartesian coordinate displacements is a result of the fact that the displacement vectors are connected by the point group operations of the $C_{2v}$ group. In particular, reflection of $\Delta x_L$ through the yz plane produces $-\Delta x_R$, and reflection of $\Delta y_L$ through this same plane yields $\Delta y_R.$
More generally, it is possible to combine sets of Cartesian displacement coordinates {$q_k$} into so-called symmetry adapted coordinates {$Q_{\Gamma,j}$}, where the index $\Gamma$ labels the irreducible representation and j labels the particular combination of that symmetry. These symmetry adapted coordinates can be formed by applying the point group projection operators to the individual Cartesian displacement coordinates.
To illustrate, again consider the $H_2O$ molecule in the coordinate system described above. The 3N = 9 mass weighted Cartesian displacement coordinates $(X_L, Y_L, Z_L, X_O, Y_O, Z_O, X_R, Y_R, Z_R)$ can be symmetry adapted by applying the following four projection operators:
$P_{a_1} = 1 + \sigma_{yz} + \sigma_{xy} + C_2 \nonumber$
$P_{b_1} = 1 + \sigma_{yz} - \sigma_{xy} - C_2 \nonumber$
$P_{b_2} = 1 - \sigma_{yz} + \sigma_{xy} - C_2 \nonumber$
$P_{a_2} = 1 - \sigma_{yz} - \sigma_{xy} + C_2 \nonumber$
to each of the 9 original coordinates. Of course, one will not obtain 9 x 4 = 36 independent symmetry adapted coordinates in this manner; many identical combinations will arise, and only 9 will be independent.
The independent combinations of $a_1$ symmetry (normalized to produce vectors of unit length) are
$Q_{a_1,1} = \dfrac{1}{\sqrt{2}}[X_L - X_R] \nonumber$
$Q_{a_1,2} = \dfrac{1}{\sqrt{2}}[Y_L + Y_R] \nonumber$
$Q_{a_1,3} = [Y_O] \nonumber$
Those of $b_2$ symmetry are
$Q_{b_2,1} = \dfrac{1}{\sqrt{2}}[X_L + X_R] \nonumber$
$Q_{b_2,2} = \dfrac{1}{\sqrt{2}}[Y_L - Y_R] \nonumber$
$Q_{b_2,3} = [X_O] \nonumber$
and the combinations
$Q_{b_1,1} = \dfrac{1}{\sqrt{2}}[Z_L + Z_R] \nonumber$
$Q_{b_1,2} = [Z_O] \nonumber$
are of $b_1$ symmetry, whereas
$Q_{a_2,1} = \dfrac{1}{\sqrt{2}}[Z_L - Z_R] \nonumber$
is of $a_2$ symmetry.
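The projection just carried out by hand can be verified numerically. The sketch below represents each $C_{2v}$ operation as an explicit 9x9 matrix acting on the ordered displacements $(X_L, Y_L, Z_L, X_O, Y_O, Z_O, X_R, Y_R, Z_R)$, in the axis convention of the text (the $yz$ mirror exchanges the two hydrogens and flips $x$; the molecular $xy$ plane flips $z$). The rank of each projector counts the independent symmetry-adapted combinations.

```python
import numpy as np

n = 9  # (X_L, Y_L, Z_L, X_O, Y_O, Z_O, X_R, Y_R, Z_R)

def op_matrix(swap_LR, sx, sy, sz):
    """Matrix of an operation that optionally swaps the two H atoms and
    multiplies the x, y, z displacement components by sx, sy, sz."""
    M = np.zeros((n, n))
    atom_map = {0: 2, 1: 1, 2: 0} if swap_LR else {0: 0, 1: 1, 2: 2}
    signs = [sx, sy, sz]
    for atom in range(3):
        for comp in range(3):
            M[3 * atom_map[atom] + comp, 3 * atom + comp] = signs[comp]
    return M

E      = op_matrix(False,  1, 1,  1)
C2     = op_matrix(True,  -1, 1, -1)   # C2 about the figure's Y axis
sig_yz = op_matrix(True,  -1, 1,  1)   # mirror exchanging the H atoms
sig_xy = op_matrix(False,  1, 1, -1)   # molecular plane

# Characters (E, C2, sigma_yz, sigma_xy) for each irrep of C2v
chars = {"a1": (1, 1, 1, 1), "a2": (1, 1, -1, -1),
         "b1": (1, -1, 1, -1), "b2": (1, -1, -1, 1)}

ranks = {}
for irrep, (cE, cC2, cyz, cxy) in chars.items():
    P = cE * E + cC2 * C2 + cyz * sig_yz + cxy * sig_xy
    ranks[irrep] = int(np.linalg.matrix_rank(P))
print(ranks)
```

The ranks come out 3, 3, 2, 1 for $a_1, b_2, b_1, a_2$, matching the nine independent combinations listed above.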
Point Group Symmetry of the Harmonic Potential
These nine $Q_{\Gamma,j}$ are expressed as unitary transformations of the original mass-weighted Cartesian coordinates:
$Q_{\Gamma,j} = \sum\limits_k C_{\Gamma, j, k}X_k \nonumber$
These transformation coefficients {$C_{\Gamma, j, k}$} can be used to carry out a unitary transformation of the 9x9 mass-weighted Hessian matrix. In so doing, we need only form blocks
$H^{\Gamma}_{j,l} = \sum\limits_{k, k'}C_{\Gamma , j, k}H_{k, k'}\dfrac{1}{\sqrt{m_k m_{k'}}}C_{\Gamma , l, k'} \nonumber$
within which the symmetries of the two modes are identical. The off-diagonal elements
$H_{j, l}^{\Gamma \Gamma '} = \sum\limits_{k, k'}C_{\Gamma , j, k}H_{k, k'}\dfrac{1}{\sqrt{m_km_{k'}}}C_{\Gamma ' , l, k'} \nonumber$
vanish because the potential $V(q_j) \text{ (and the full vibrational Hamiltonian H = T + V) commutes with the }C_{2V}$ point group symmetry operations.
As a result, the 9x9 mass-weighted Hessian eigenvalue problem can be subdivided into two 3x3 matrix problems (of $a_1$ and $b_2$ symmetry), one 2x2 matrix problem of $b_1$ symmetry, and one 1x1 matrix problem of $a_2$ symmetry. For example, the $a_1$ symmetry block $H^{a_1}_{jl}$ is formed from the three $a_1$ symmetry-adapted coordinates given above; the $b_2, b_1, \text{ and } a_2$ blocks are formed in a similar manner. The eigenvalues of each of these blocks provide the squares of the harmonic vibrational frequencies, and the eigenvectors provide the normal mode displacements as linear combinations of the symmetry adapted {$Q_{\Gamma, j}$}.
Regardless of whether symmetry is used to block diagonalize the mass-weighted Hessian, six (for non-linear molecules) or five (for linear species) of the eigenvalues will equal zero. The eigenvectors belonging to these zero eigenvalues describe the 3 translations and 2 or 3 rotations of the molecule. For example,
$\dfrac{1}{\sqrt{3}}[X_L + X_R + X_O] \nonumber$
$\dfrac{1}{\sqrt{3}}[Y_L + Y_R + Y_O] \nonumber$
$\dfrac{1}{\sqrt{3}}[Z_L + Z_R + Z_O] \nonumber$
are three translation eigenvectors of $b_2, a_1 \text{ and } b_1$ symmetry, and
$\dfrac{1}{\sqrt{2}}(Z_L - Z_R) \nonumber$
is a rotation (about the Y-axis in the figure shown above) of $a_2$ symmetry. This rotation vector can be generated by applying the $a_2 \text{ projection operator to } Z_L \text{ or to } Z_R.$ The fact that rotation about the Y-axis is of $a_2 \text{ symmetry is indicated in the right-hand column of the } C_{2v} \text{ character table of Appendix E via the symbol } R_Z$ (n.b., care must be taken to realize that the axis convention used in the above figure is different than that implied in the character table; the latter has the Z-axis out of the molecular plane, while the figure calls this the X-axis). The other two rotations are of $b_1 \text{ and } b_2 \text{ symmetry (see the } C_{2v}$ character table of Appendix E) and involve spinning of the molecule about the X- and Z- axes of the figure drawn above, respectively.
So, of the 9 Cartesian displacements, 3 are of $a_1 \text{ symmetry, 3 of } b_2 ,\text{ 2 of } b_1,\text{ and 1 of } a_2.$ Of these, there are three translations $(a_1, b_2,\text{ and } b_1)$ and three rotations $(b_2, b_1,\text{ and } a_2).$ This leaves two vibrations of $a_1 \text{ and one of } b_2$ symmetry. For the $H_2O$ example treated here, the three non-zero eigenvalues of the mass-weighted Hessian are therefore of $a_1, b_2,\text{ and } a_1$ symmetry. They describe the symmetric stretch, the asymmetric stretch, and the bending mode, respectively, as illustrated below.
The method of vibrational analysis presented here can work for any polyatomic molecule. One knows the mass-weighted Hessian and then computes the non-zero eigenvalues which then provide the squares of the normal mode vibrational frequencies. Point group symmetry can be used to block diagonalize this Hessian and to label the vibrational modes according to symmetry.
Anharmonicity
The electronic energy of a molecule, ion, or radical at geometries near a stable structure can be expanded in a Taylor series in powers of displacement coordinates as was done in the preceding section of this Chapter. This expansion leads to a picture of uncoupled harmonic vibrational energy levels
$E(v_1 ... v_{3N - 5 \text{ or }6}) = \sum\limits_{j=1}^{3N-5 \text{ or }6}\hbar\omega _j\left( v_j + \frac{1}{2} \right) \nonumber$
and wavefunctions
$\Psi(x_1 ... x_{3N-5\text{ or }6}) = \prod\limits_{j=1}^{3N-5 \text{ or }6} \Psi_{v_j}(x_j). \nonumber$
The spacing between energy levels in which one of the normal-mode quantum numbers increases by unity
$\Delta E_{vj} = E(... v_j + 1 ... ) - E(...v_j ... ) = \hbar\omega_j \nonumber$
is predicted to be independent of the quantum number $v_j$. This picture of evenly spaced energy levels
$\Delta E_0 = \Delta E_1 = \Delta E_2 = ... \nonumber$
is an incorrect aspect of the harmonic model of vibrational motion, and is a result of the quadratic model for the potential energy surface $V(x_j).$
The Expansion of E(v) in Powers of $\left(v+\dfrac{1}{2}\right).$
Experimental evidence clearly indicates that significant deviations from the harmonic oscillator energy expression occur as the quantum number $v_j$ grows. In Chapter 1 these deviations were explained in terms of the diatomic molecule's true potential V(R) deviating strongly from the harmonic $\frac{1}{2}k(R-R_e)^2$ potential at higher energy (and hence larger $|R-R_e|$) as shown in the following figure.
At larger bond lengths, the true potential is "softer" than the harmonic potential, and eventually reaches its asymptote, which lies at the dissociation energy $D_e$ above its minimum. This negative deviation of the true V(R) from $\frac{1}{2}k(R-R_e)^2$ causes the true vibrational energy levels to lie below the harmonic predictions.
It is convention to express the experimentally observed vibrational energy levels, along each of the 3N-5 or 6 independent modes, as follows:
$E(v_j) = \hbar\left[ \omega_j\left(v_j + \frac{1}{2}\right) - (\omega x)_j \left(v_j + \frac{1}{2}\right)^2 + (\omega y)_j \left(v_j + \frac{1}{2}\right)^3 + (\omega z)_j \left( v_j + \frac{1}{2} \right)^4 + ... \right] \nonumber$
The first term is the harmonic expression. The next is termed the first anharmonicity; it (usually) produces a negative contribution to E$(v_j)$ that varies as $\left( v_j + \frac{1}{2} \right)^2$. The spacings between successive $v_j \rightarrow v_j + 1$ energy levels is then given by:
$\Delta E_{v_j} = E(v_j + 1) - E(v_j) \nonumber$
$= \hbar [\omega_j - 2(\omega x)_j (v_j + 1) + ...] \nonumber$
A plot of the spacing between neighboring energy levels versus $v_j$ should be linear for values of $v_j$ where the harmonic and first anharmonicity terms dominate. The slope of such a plot is expected to be $-2\hbar(\omega x)_j$ and the small-$v_j$ intercept should be $\hbar[\omega_j - 2(\omega x)_j ].$ Such a plot of experimental data, which clearly can be used to determine the $\omega_j \text{ and } (\omega x)_j$ parameters of the vibrational mode under study, is shown in the figure below.
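A minimal sketch of this fitting procedure, using synthetic spacings generated from assumed (illustrative, not measured) constants, recovers $\omega$ and $(\omega x)$ from the slope and intercept exactly as described:

```python
import numpy as np

# Assumed constants (cm^-1): hbar*omega and hbar*(omega x)
w, wx = 2990.0, 52.8

# Synthetic level spacings Delta E(v -> v+1) = w - 2 wx (v + 1)
v = np.arange(0, 10)
spacings = w - 2 * wx * (v + 1)

# Linear fit: slope = -2 (omega x), intercept = omega - 2 (omega x)
slope, intercept = np.polyfit(v, spacings, 1)
wx_fit = -slope / 2
w_fit = intercept + 2 * wx_fit
print(w_fit, wx_fit)
```

With real data the points scatter about the line, and the quality of the linear fit indicates how well the two-parameter anharmonic expansion describes the mode.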
The Birge-Sponer Extrapolation
These so-called Birge-Sponer plots can also be used to determine dissociation energies of molecules. By linearly extrapolating the plot of experimental $\Delta E_{v_j}$ values to large $v_j$ values, one can find the value of $v_j$ at which the spacing between neighboring vibrational levels goes to zero. This value, $v_{j,\text{max}}$, specifies the quantum number of the last bound vibrational level for the particular potential energy function $V(x_j)$ of interest. The dissociation energy $D_e$ can then be computed by adding to $\frac{1}{2}\hbar\omega _j$ (the zero point energy along this mode) the sum of the spacings between neighboring vibrational energy levels from $v_j = 0 \text{ to } v_j = v_{j,\text{max}}$:
$D_e = \frac{1}{2}\hbar \omega_j + \sum\limits_{v_j = 0}^{v_{j,\text{max}}}\Delta E_{v_j}. \nonumber$
Since experimental data are not usually available for the entire range of $v_j$ values (from 0 to $v_j$,max), this sum must be computed using the anharmonic expression for $\Delta E_{v_j}$ :
$\Delta E_{v_j} = \hbar \left[ \omega_j - 2(\omega x)_j \left( v_j + 1 \right) + ... \right]. \nonumber$
Alternatively, the sum can be computed from the Birge-Sponer graph by measuring the area under the straight-line fit to the graph of $\Delta E_{v_j}$ versus $v_j$ from $v_j = 0 \text{ to } v_j = v_{j,\text{max}}.$
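The extrapolation can be sketched numerically. Within the linear (first-anharmonicity) model, the spacing vanishes where $\omega - 2(\omega x)(v+1) = 0$, i.e. at $v_{\text{max}} \approx \omega/2(\omega x) - 1$, and $D_e$ is the zero-point energy plus the summed spacings. The constants are assumed illustrative values in cm⁻¹:

```python
import numpy as np

w, wx = 2990.0, 52.8          # assumed hbar*omega and hbar*(omega x), cm^-1

# Delta E(v) = w - 2 wx (v + 1) = 0  gives  v_max = w/(2 wx) - 1
v_max = int(np.floor(w / (2 * wx) - 1))

# D_e = zero-point energy + sum of spacings from v = 0 to v_max
v = np.arange(0, v_max + 1)
D_e = 0.5 * w + np.sum(w - 2 * wx * (v + 1))
print(v_max, D_e)
```

Because real spacings fall off faster than linearly near dissociation, a Birge-Sponer estimate of $D_e$ obtained this way typically overestimates the true dissociation energy somewhat.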
This completes our introduction to the subject of rotational and vibrational motions of molecules (which applies equally well to ions and radicals). The information contained in this Section is used again in Section 5 where photon-induced transitions between pairs of molecular electronic, vibrational, and rotational eigenstates are examined. More advanced treatments of the subject matter of this Section can be found in the text by Wilson, Decius, and Cross, as well as in Zare's text on angular momentum.
The interaction of a molecular species with electromagnetic fields can cause transitions to occur among the available molecular energy levels (electronic, vibrational, rotational, and nuclear spin). Collisions among molecular species likewise can cause transitions to occur. Time-dependent perturbation theory and the methods of molecular dynamics can be employed to treat such transitions.
• 14.1: Time-Dependent Vector Potentials
Modifying the nuclear and electronic momenta of the full electronic and nuclear-motion Hamiltonian to address interactions with light results in "interaction perturbations" that induce transitions among the various electronic/vibrational/rotational states of a molecule. The one-electron additive nature of these perturbations plays an important role in determining the kind of transitions that can be induced.
• 14.2: Time-Dependent Perturbation Theory
The mathematical machinery needed to compute the rates of transitions among molecular states induced by such a time-dependent perturbation is contained in time dependent perturbation theory (TDPT).
• 14.3: Application to Electromagnetic Perturbations
Light-induced transitions between states can be treated with first-order time-dependent perturbation theory. One result of this treatment is the first-order Fermi-Wentzel "golden rule" expression, which gives the rate as the square of the transition matrix element of the first-order perturbation between the two states involved, multiplied by the light source function evaluated at the transition frequency.
• 14.4: The "Long-Wavelength" Approximation
To make progress in further analyzing the first-order results obtained above, it is useful to consider the wavelength $\lambda$ of the light used in most visible/ultraviolet, infrared, or microwave spectroscopic experiments. Even the shortest such wavelengths (ultraviolet) are considerably longer than the spatial extent of all but the largest molecules (i.e., polymers and biomolecules, for which the approximations we introduce next are not appropriate).
• 14.5: The Kinetics of Photon Absorption and Emission
Thumbnail: Energy diagram illustrating the Franck–Condon principle applied to the solvation of chromophores. The parabolic potential curves symbolize the interaction energy between the chromophores and the solvent. The Gaussian curves represent the distribution of this interaction energy. (CC -BY-SA 3.0; Mark Somoza)
14: Time-dependent Quantum Dynamics
The full N-electron non-relativistic Hamiltonian H discussed earlier in this text involves the kinetic energies of the electrons and of the nuclei and the mutual Coulombic interactions among these particles
$H = \sum\limits_{a=1,M} -\left(\dfrac{\hbar^2}{2m_a}\right) \nabla_a^2 + \sum\limits_j \left[ \left(-\dfrac{\hbar^2}{2m_e}\right) \nabla_j^2 - \sum\limits_a Z_a \dfrac{e^2}{r_{j,a}} \right] + \sum\limits_{j<k}\dfrac{e^2}{r_{j,k}} + \sum\limits_{a<b}Z_aZ_b \dfrac{e^2}{R_{a,b}}. \nonumber$
When an electromagnetic field is present, this is not the correct Hamiltonian, but it can be modified straightforwardly to obtain the proper H.
The Time-Dependent Vector $\textbf{A}(\textbf{r},t)$ Potential
The only changes required to achieve the Hamiltonian that describes the same system in the presence of an electromagnetic field are to replace the momentum operators $\textbf{P}_a$ and $\textbf{p}_j$ for the nuclei and electrons, respectively, by $\left(\textbf{P}_a - \frac{Z_ae}{c}\textbf{A}(\textbf{R}_a,t)\right)$ and $\left(\textbf{p}_j - \frac{e}{c}\textbf{A}(\textbf{r}_j,t)\right)$. Here $Z_ae$ is the charge on the $a^{th}$ nucleus, $-e$ is the charge of the electron, and $c$ is the speed of light.
The vector potential A depends on time t and on the spatial location r of the particle in the following manner:
$\textbf{A}(\textbf{r},t) = 2 \textbf{A}_0 cos(\omega t - \textbf{k}\cdot{\textbf{r}}). \nonumber$
The circular frequency of the radiation $\omega$ (radians per second) and the wave vector k (the magnitude of k is |k| = $\frac{2\pi}{\lambda}$, where $\lambda$ is the wavelength of the light) control the temporal and spatial oscillations of the photons. The vector $\textbf{A}_o$ characterizes the strength (through the magnitude of $\textbf{A}_o$) of the field as well as the direction of the A potential; the direction of propagation of the photons is given by the unit vector k/|k|. The factor of 2 in the definition of A allows one to think of $\textbf{A}_0$ as measuring the strength of both the $e^{i(\omega t - \textbf{k}\cdot{\textbf{r}})}$ and $e^{-i(\omega t - \textbf{k}\cdot{\textbf{r}})}$ components of the $\cos(\omega t - \textbf{k}\cdot{\textbf{r}})$ function.
The Electric $\textbf{E}(\textbf{r},t) \text{ and Magnetic } \textbf{H}(\textbf{r},t) \text{ Fields }$
The electric $\textbf{E}(\textbf{r},t) \text{ and magnetic } \textbf{H}(\textbf{r}$,t) fields of the photons are expressed in terms of the vector potential A as
$\textbf{E}(\textbf{r},t) = -\dfrac{1}{c}\dfrac{\partial \textbf{A}}{\partial t} = \dfrac{2\omega}{c}\textbf{A}_0 \sin( \omega t - \textbf{k}\cdot{\textbf{r}} ) \nonumber$
$\textbf{H}(\textbf{r},t) = \nabla \times \textbf{A} = 2\textbf{ k} \times \textbf{A}_o \sin(\omega t - \textbf{k}\cdot{\textbf{r}}). \nonumber$
The E field lies parallel to the $\textbf{A}_o$ vector, and the H field is perpendicular to $\textbf{A}_o$; both are perpendicular to the direction of propagation of the light k/|k|. E and H have the same phase because they both vary with time and spatial location as $\sin (\omega t - \textbf{k}\cdot{\textbf{r}}).$ The relative orientations of these vectors are shown below.
The Resulting Hamiltonian
Replacing the nuclear and electronic momenta by the modifications shown above in the kinetic energy terms of the full electronic and nuclear-motion hamiltonian results in the following additional factors appearing in H:
$H_{int} = \sum\limits_j \left[ \dfrac{ie\hbar}{m_ec}\textbf{A}(r_j,t)\cdot{\nabla_j} + \left( \dfrac{e^2}{2m_ec^2} \right)|\textbf{A}(r_j,t)|^2 \right] + \sum\limits_a \left[ \left( iZ_a\dfrac{e\hbar}{m_ac} \right)\textbf{A}(R_a,t)\cdot{\nabla_a} + \left( \dfrac{Z_a^2e^2}{2m_ac^2} \right)|\textbf{A}(R_a,t)|^2 \right]. \nonumber$
These so-called interaction perturbations $H_{int}$ are what induce transitions among the various electronic/vibrational/rotational states of a molecule. The one-electron additive nature of $H_{int}$ plays an important role in determining the kind of transitions that $H_{int}$ can induce. For example, it causes the most intense electronic transitions to involve excitation of a single electron from one orbital to another (e.g., the Slater-Condon rules).
14.02: Time-Dependent Perturbation Theory
The mathematical machinery needed to compute the rates of transitions among molecular states induced by such a time-dependent perturbation is contained in time dependent perturbation theory (TDPT). The development of this theory proceeds as follows. One first assumes that one has in-hand all of the eigenfunctions {$\Phi_k$} and eigenvalues {$E_k^0$} that characterize the Hamiltonian $H^0$ of the molecule in the absence of the external perturbation:
$H^0 \Phi_k = E_k^0 \Phi_k. \nonumber$
One then writes the time-dependent Schrödinger equation
$i\hbar\dfrac{\partial \Psi}{\partial t} = (H^0 + H_{int}) \Psi \nonumber$
in which the full Hamiltonian is explicitly divided into a part that governs the system in the absence of the radiation field and $H_{int}$ which describes the interaction with the field.
Perturbative Solution
By treating $H^0$ as of zeroth order (in the field strength |$\textbf{A}_0$|), expanding $\Psi$ order-by-order in the field-strength parameter:
$\Psi = \Psi^0 + \Psi^1 + \Psi^2 + \Psi^3 + ..., \nonumber$
realizing that $H_{int}$ contains terms that are both first- and second-order in $|\textbf{A}_0|$
$H^1_{int} = \sum\limits_j \left[ \left(\dfrac{ie\hbar}{m_ec}\right) \textbf{A}(r_j,t)\cdot{\nabla_j} \right] + \sum\limits_a \left[ \left(\dfrac{iZ_ae\hbar}{m_ac}\right) \textbf{A}(R_a,t)\cdot{\nabla_a} \right], \nonumber$
$H^2_{int} = \sum\limits_j \left[ \left(\dfrac{e^2}{2m_ec^2}\right) |\textbf{A}(r_j,t)|^2\right] + \sum\limits_a \left[ \left( \dfrac{Z_a^2e^2}{2m_ac^2}\right) |\textbf{A}(R_a,t)|^2 \right], \nonumber$
and then collecting together all terms of like power of $|\textbf{A}_0|$, one obtains the set of time dependent perturbation theory equations. The lowest order such equations read:
$i\hbar \dfrac{\partial \Psi^0}{\partial t} = H^0 \Psi^0 \nonumber$
$i\hbar\dfrac{\partial \Psi^1}{\partial t} = (H^0 \Psi^1 + H^1_{int} \Psi^0) \nonumber$
$i\hbar\dfrac{\partial \Psi^2}{\partial t} = (H^0 \Psi^2 + H^2_{int}\Psi^0 + H^1_{int}\Psi^1). \nonumber$
The zeroth order equations can easily be solved because $H^0$ is independent of time. Assuming that at $t = - \infty$, $\Psi = \Phi_i$ (we use the index i to denote the initial state), this solution is:
$\Psi^0 = \Phi_i e^{\dfrac{-iE_i^0t}{\hbar}}. \nonumber$
The first-order correction to $\Psi^0, \Psi^1$ can be found by (i) expanding $\Psi^1$ in the complete set of zeroth-order states {$\Phi_f$}:
$\Psi^1 = \sum\limits_f\Phi_f<\Phi_f|\Psi^1> = \sum\limits_f\Phi_fC_f^1, \nonumber$
(ii) using the fact that
$H^0\Phi_f = E_f^0 \Phi_f \nonumber$,
and (iii) substituting all of this into the equation that $\Psi^1$ obeys. The resultant equation for the coefficients that appear in the first-order equation can be written as
$i\hbar \dfrac{\partial C_f^1}{\partial t} = \sum\limits_k [E_k^0 C_k^1 \delta_{f,k}] + <\Phi_f| H^1_{int}|\Phi_i> e^{\dfrac{-iE_i^0t}{\hbar}}, \nonumber$
or
$i\hbar\dfrac{\partial C_f^1}{\partial t} = E_f^0C_f^1 + <\Phi_f|H^1_{int}|\Phi_i> e^{\dfrac{-iE_i^0t}{\hbar}}. \nonumber$
Defining

$C_f^1(t) = D_f^1(t)e^{\dfrac{-iE_f^0t}{\hbar}}, \nonumber$

this equation can be cast in terms of an easy-to-solve equation for the $D_f^1$ coefficients:
$i\hbar\dfrac{\partial D_f^1}{\partial t} = <\Phi_f|H^1_{int}|\Phi_i> e^{\dfrac{i[E_f^0-E_i^0]t}{\hbar}}. \nonumber$
Assuming that the electromagnetic field $\textbf{A}(\textbf{r},t)$ is turned on at t=0, and remains on until t = T, this equation for $D_f^1$ can be integrated to yield:
$D_f^1(T) = \dfrac{1}{i\hbar}\int\limits_0^T \langle\Phi_f|H^1_{int}|\Phi_i\rangle e^{\dfrac{i[E_f^0-E_i^0]t'}{\hbar}}dt'. \nonumber$
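As a quick numerical sanity check on this time integration, one can compare a direct quadrature of $D_f^1(T)$ with its closed form $\frac{V_{fi}}{i\hbar}\,\frac{e^{i\Delta T}-1}{i\Delta}$, where $\Delta = (E_f^0 - E_i^0)/\hbar$. This is only a sketch: it uses $\hbar = 1$ and an assumed constant coupling $V_{fi}$ standing in for the matrix element $\langle\Phi_f|H^1_{int}|\Phi_i\rangle$.

```python
import numpy as np

# Sketch: integrate D_f^1(T) = (1/iħ) ∫_0^T V_fi exp(iΔt') dt' numerically and
# compare with the closed form (V_fi/iħ)(e^{iΔT} - 1)/(iΔ).
# ħ = 1; V_fi, Δ, and T are illustrative values (assumptions, not from the text).
hbar = 1.0
V_fi = 0.1          # assumed constant coupling matrix element
delta = 2.5         # Δ = (E_f^0 - E_i^0)/ħ
T = 4.0

t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]
integrand = V_fi * np.exp(1j * delta * t)
# composite trapezoidal rule
D_numeric = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt / (1j * hbar)
D_closed = (V_fi / (1j * hbar)) * (np.exp(1j * delta * T) - 1.0) / (1j * delta)
assert abs(D_numeric - D_closed) < 1e-6
```

The agreement confirms that the closed form used in the next section is just this elementary integral of $e^{i\Delta t'}$.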
First-Order Fermi-Wentzel "Golden Rule"
Using the earlier expressions for $H^1_{int}$ and for A(r,t)
$H^1_{int} = \sum\limits_j \left[ \left( \dfrac{ie\hbar}{m_ec} \right) \textbf{A}(r_j,t) \cdot{\nabla_j} \right] + \sum\limits_a \left[ \left( \dfrac{iZ_ae\hbar}{m_ac} \right) \textbf{A}(R_a,t) \cdot{\nabla_a} \right] \nonumber$
and
$2\textbf{A}_o cos(\omega t - \textbf{k}\cdot{\textbf{r}}) = \textbf{A}_0 \left[ e^{i(\omega t - \textbf{k}\cdot{\textbf{r}})} + e^{-i(\omega t - \textbf{k}\cdot{\textbf{r}})} \right], \nonumber$
it is relatively straightforward to carry out the above time integration to achieve a final expression for $D_f^1(t)$, which can then be substituted into $C_f^1(t) = D_f^1(t) e^{\frac{-iE_f^0t}{\hbar}}$ to obtain the final expression for the first-order estimate of the probability amplitude for the molecule appearing in the state $\Phi_f e^{\frac{-iE_f^0t}{\hbar}}$ after being subjected to electromagnetic radiation from t = 0 until t = T. This final expression reads:
$C_f^1(T) = \dfrac{1}{i\hbar} e^{\dfrac{-iE_f^0T}{\hbar}} \langle\Phi_f|\sum\limits_j \left( \dfrac{ie\hbar}{m_ec} \right) e^{-i\textbf{k}\cdot{\textbf{r}_j}}\textbf{A}_0\cdot{\nabla_j} + \sum\limits_a \left( \dfrac{iZ_ae\hbar}{m_ac} \right) e^{-i\textbf{k}\cdot{\textbf{R}_a}}\textbf{A}_0\cdot{\nabla_a}|\Phi_i \rangle \dfrac{e^{i(\omega + \omega_{f,i})T}-1}{i(\omega + \omega_{f,i})} \nonumber$

$+ \dfrac{1}{i\hbar} e^{\dfrac{-iE_f^0T}{\hbar}}\langle\Phi_f|\sum\limits_j \left( \dfrac{ie\hbar}{m_ec} \right) e^{i\textbf{k}\cdot{\textbf{r}_j}}\textbf{A}_0\cdot{\nabla_j} + \sum\limits_a \left( \dfrac{iZ_ae\hbar}{m_ac} \right) e^{i\textbf{k}\cdot{\textbf{R}_a}}\textbf{A}_0\cdot{\nabla_a}|\Phi_i \rangle \dfrac{e^{i(-\omega + \omega_{f,i})T}-1}{i(-\omega + \omega_{f,i})} \nonumber$
where
$\omega_{f,i} = \dfrac{[E_f^0 - E_i^0]}{\hbar} \nonumber$
is the resonance frequency for the transition between "initial" state $\Phi_i \text{ and "final" state } \Phi_f$
Defining the time-independent parts of the above expression as
$\alpha_{f,i} = \langle \Phi_f |\sum\limits_j \left( \dfrac{e}{m_ec} \right) e^{-i\textbf{k}\cdot{\textbf{r}_j}}\textbf{A}_0\cdot{\nabla_j} + \sum\limits_a \left( \dfrac{Z_ae}{m_ac} \right) e^{-i\textbf{k}\cdot{\textbf{R}_a}}\textbf{A}_0\cdot{\nabla_a}|\Phi_i \rangle, \nonumber$
this result can be written as
$C_f^1(T) = e^{\dfrac{-iE_f^0T}{\hbar}}\left[ \alpha_{f,i}\dfrac{e^{i(\omega+\omega_{f,i})T}-1}{i(\omega+\omega_{f,i})} + \alpha^{\text{*}}_{f,i}\dfrac{e^{-i(\omega - \omega_{f,i})T}-1}{-i(\omega-\omega_{f,i})} \right]. \nonumber$
The modulus squared $|C_f^1(T)|^2$ gives the probability of finding the molecule in the final state $\Phi_f$ at time T, given that it was in $\Phi_i$ at time t = 0. If the light's frequency $\omega$ is tuned close to the transition frequency $\omega_{f,i}$ of a particular transition, the term whose denominator contains $(\omega - \omega_{f,i})$ will dominate the term with $(\omega + \omega_{f,i})$ in its denominator. Within this "near-resonance" condition, the above probability reduces to:
$|C_f^1|^2 = 2|\alpha_{f,i}|^2 \dfrac{1-\cos\left((\omega - \omega_{f,i})T\right)}{(\omega - \omega_{f,i})^2} \nonumber$

$= 4|\alpha_{f,i}|^2\dfrac{\sin^2\left(\frac{1}{2}(\omega - \omega_{f,i})T\right)}{(\omega - \omega_{f,i})^2}. \nonumber$
This is the final result of the first-order time-dependent perturbation theory treatment of light-induced transitions between states $\Phi_i \text{ and } \Phi_f$.
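As a sketch of how accurate the near-resonance reduction is, the probability built from both terms of the full first-order amplitude can be compared with the one-term formula when $\omega$ is tuned slightly off $\omega_{f,i}$. The values of $\alpha_{f,i}$, $\omega_{f,i}$, and $T$ below are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

# Compare |C_f^1(T)|^2 built from both terms of the full first-order amplitude
# with the near-resonance formula 4|α|^2 sin^2(ΔT/2)/Δ^2, Δ = ω - ω_fi.
alpha = 0.01 + 0.005j      # assumed transition matrix element α_{f,i}
w_fi = 50.0                # transition frequency (arbitrary units)
T = 20.0

def full_prob(w):
    # the overall phase e^{-i E_f^0 T/ħ} has unit modulus and is dropped
    t1 = alpha * (np.exp(1j * (w + w_fi) * T) - 1.0) / (1j * (w + w_fi))
    t2 = np.conj(alpha) * (np.exp(-1j * (w - w_fi) * T) - 1.0) / (-1j * (w - w_fi))
    return abs(t1 + t2) ** 2

def near_resonance_prob(w):
    d = w - w_fi
    return 4.0 * abs(alpha) ** 2 * np.sin(0.5 * d * T) ** 2 / d ** 2

w = w_fi + 0.05            # light tuned close to resonance
assert abs(full_prob(w) - near_resonance_prob(w)) / near_resonance_prob(w) < 0.01
```

Near resonance the discarded $(\omega + \omega_{f,i})$ term contributes at only the per-mille level here, which is why the "near-resonance" reduction is safe.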
The so-called sinc function

$\dfrac{\sin^2\left(\frac{1}{2}(\omega - \omega_{f,i})T\right)}{(\omega - \omega_{f,i})^2} \nonumber$

is strongly peaked near $\omega = \omega_{f,i}$, where it reaches its maximum value $\frac{T^2}{4}$, and displays secondary maxima (of decreasing amplitudes) between its zeros, which fall at $\omega = \omega_{f,i} \pm \frac{2n\pi}{T}$, n = 1, 2, ... . In the $T \rightarrow \infty$ limit, this function becomes narrower and narrower, and the area under it
$\int\limits_{-\infty}^{\infty} \dfrac{\sin^2\left(\frac{1}{2}(\omega - \omega_{f,i})T\right)}{(\omega - \omega_{f,i})^2}d\omega = \dfrac{T}{2}\int\limits_{-\infty}^{\infty} \dfrac{\sin^2 x}{x^2}dx = \dfrac{\pi T}{2} \nonumber$
grows with T. Physically, this means that when the molecules are exposed to the light source for long times (large T), the sinc function emphasizes $\omega$ values near $\omega_{f,i}$ (i.e., the on-resonance $\omega$ values). These properties of the sinc function will play important roles in what follows.
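Both properties — area $\pi T/2$ and a central peak whose width shrinks as $1/T$ — are easy to confirm numerically. The grid half-width and the two $T$ values below are arbitrary choices for the sketch.

```python
import numpy as np

# Check numerically that ∫ sin^2(ΔT/2)/Δ^2 dΔ = πT/2, with Δ = ω - ω_fi,
# using a wide uniform grid and the composite trapezoidal rule.
def lineshape_area(T, half_width=500.0, n=2_000_001):
    d = np.linspace(-half_width, half_width, n)
    d[np.abs(d) < 1e-12] = 1e-9          # avoid 0/0; the limit at Δ = 0 is T²/4
    f = np.sin(0.5 * d * T) ** 2 / d ** 2
    dd = d[1] - d[0]
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dd

for T in (1.0, 10.0):
    exact = np.pi * T / 2.0
    assert abs(lineshape_area(T) - exact) / exact < 0.01
```

The area grows linearly in T while the peak height grows as $T^2/4$, which is exactly the combination that produces a constant transition *rate* below.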
In most experiments, light sources have a "spread" of frequencies associated with them; that is, they provide photons of various frequencies. To characterize such sources, it is common to introduce the spectral source function g($\omega$) d$\omega$ which gives the probability that the photons from this source have frequency somewhere between $\omega \text{ and } \omega+d\omega$. For narrow-band lasers, g($\omega)$ is a sharply peaked function about some "nominal" frequency $\omega_o$; broader band light sources have much broader g($\omega$) functions.
When such non-monochromatic light sources are used, it is necessary to average the above formula for $|C_f^1(T)|^2$ over the g($\omega$) d$\omega$ probability function in computing the probability of finding the molecule in state $\Phi_f$ after time T, given that it was in $\Phi_i$ up until t = 0, when the light source was turned on. In particular, the proper expression becomes:
$|C_f^1(T)|^2_{ave} = 4|\alpha_{f,i}|^2 \int g(\omega) \dfrac{\sin^2\left(\frac{1}{2}(\omega - \omega_{f,i})T\right)}{(\omega - \omega_{f,i})^2}d\omega \nonumber$

$= 2|\alpha_{f,i}|^2 T \int\limits_{-\infty}^{\infty} g(\omega) \dfrac{\sin^2 x}{x^2}dx, \text{ where } x = \frac{1}{2}(\omega - \omega_{f,i})T. \nonumber$
If the light-source function is "tuned" to peak near $\omega = \omega_{f,i}$ and if $g(\omega)$ is much broader (in $\omega$-space) than the $\dfrac{sin^2(1/2(\omega - \omega_{f,i})T)}{(\omega - \omega_{f,i})^2}$ function, g($\omega$) can be replaced by its value at the peak of the $\dfrac{sin^2(1/2(\omega - \omega_{f,i})T)}{(\omega - \omega_{f,i})^2}$ function, yielding:
$|C_f^1(T)|^2_{ave} = 2g(\omega_{f,i})|\alpha_{f,i}|^2 T\int\limits_{-\infty}^{\infty} \dfrac{\sin^2 x}{x^2}dx = 2\pi g(\omega_{f,i})|\alpha_{f,i}|^2T. \nonumber$
The fact that the probability of excitation from $\Phi_i \text{ to } \Phi_f$ grows linearly with the time T over which the light source is turned on implies that the rate of transitions between these two states is constant and given by:
$\textbf{R}_{i,f} = 2\pi g(\omega_{f,i})|\alpha_{f,i}|^2; \nonumber$
this is the so-called first-order Fermi-Wentzel "golden rule" expression for such transition rates. It gives the rate as the square of the transition matrix element of the first-order perturbation between the two states involved, multiplied by the light source function $g(\omega)$ evaluated at the transition frequency $\omega_{f,i}$.
Higher Order Results
Solution of the second-order time-dependent perturbation equations,
$i\hbar\dfrac{\partial \Psi^2}{\partial t} = (H^0\Psi^2 + H^2_{int}\Psi^0 + H^1_{int}\Psi^1) \nonumber$
which will not be treated in detail here, gives rise to two distinct types of contributions to the transition probabilities between $\Phi_i \text{ and } \Phi_f$:
There will be matrix elements of the form
$\langle \Phi_f | \sum\limits_j \left[ \left( \dfrac{e^2}{2m_ec^2} \right)| \textbf{A}(\textbf{r}_j,t)|^2 \right] + \sum\limits_a\left[ \left( \dfrac{Z_a^2e^2}{2m_ac^2} \right)|\textbf{A}(R_a,t)|^2 \right]|\Phi_i \rangle \nonumber$
arising when $H^2_{int} \text{ couples } \Phi_i \text{ to } \Phi_f$.
There will be matrix elements of the form
$\sum\limits_k \langle\Phi_f |\sum\limits_j \left[ \left( \dfrac{ie\hbar}{m_ec} \right)\textbf{A}(r_j,t)\cdot{\nabla_j} \right] + \sum\limits_a \left[ \left( \dfrac{iZ_ae\hbar}{m_ac} \right)\textbf{A}(R_a,t)\cdot{\nabla_a} \right]| \Phi_k \rangle \nonumber$
$\langle\Phi_k |\sum\limits_j \left[ \left( \dfrac{ie\hbar}{m_ec} \right)\textbf{A}(r_j,t)\cdot{\nabla_j} \right] + \sum\limits_a \left[ \left( \dfrac{iZ_ae\hbar}{m_ac} \right)\textbf{A}(R_a,t)\cdot{\nabla_a} \right]| \Phi_i \rangle \nonumber$
arising from expanding $H^1_{int}\Psi^1 = \sum\limits_kC_k^1H^1_{int}|\Phi_k \rangle$ and using the earlier result for the first-order amplitudes $C_k^1$. Because both types of second-order terms vary quadratically with the A(r,t) potential, and because A has time dependence of the form $cos(\omega t - \textbf{k}\cdot{\textbf{r}})$, these terms contain portions that vary with time as $cos(2\omega t).$ As a result, transitions between initial and final states $\Phi_i \text{ and } \Phi_f$ whose transition frequency is $\omega_{f,i}$ can be induced when $2\omega = \omega_{f,i}$; in this case, one speaks of coherent two-photon induced transitions in which the electromagnetic field produces a perturbation that has twice the frequency of the "nominal" light source frequency $\omega$.
To make progress in further analyzing the first-order results obtained above, it is useful to consider the wavelength $\lambda$ of the light used in most visible/ultraviolet, infrared, or microwave spectroscopic experiments. Even the shortest such wavelengths (ultraviolet) are considerably longer than the spatial extent of all but the largest molecules (i.e., polymers and biomolecules, for which the approximations we introduce next are not appropriate).
In the definition of the essential coupling matrix element $\alpha_{f,i}$
$\alpha_{f,i} = \langle\Phi_f |\sum\limits_j \left( \dfrac{e}{m_ec} \right) e^{-i\textbf{k}\cdot{\textbf{r}_j}}\textbf{A}_0\cdot{\nabla_j} + \sum_a\left( \dfrac{Z_ae}{m_ac} \right) e^{-i\textbf{k}\cdot{\textbf{R}_a}}\textbf{A}_0\cdot{\nabla_a}| \Phi_i \rangle, \nonumber$
the factors $e^{ -i\textbf{k}\cdot{\textbf{r}}_j}$ and $e^{ -i\textbf{k}\cdot{\textbf{R}}_a }$ can be expanded as:
$e^{-i\textbf{k}\cdot{\textbf{r}}_j} = 1 + (-i\textbf{k}\cdot{\textbf{r}_j}) + \dfrac{1}{2}(-i\textbf{k}\cdot{\textbf{r}_j})^2 + ... \nonumber$
$e^{-i\textbf{k}\cdot{\textbf{R}}_a} = 1 + (-i\textbf{k}\cdot{\textbf{R}_a}) + \dfrac{1}{2}(-i\textbf{k}\cdot{\textbf{R}_a})^2 + ... \nonumber$
Because $|\textbf{k}| = 2\pi/\lambda$, and the scales of $\textbf{r}_j \text{ and } \textbf{R}_a$ are of the dimension of the molecule, $\textbf{k}\cdot{\textbf{r}_j} \text{ and } \textbf{k}\cdot{\textbf{R}_a}$ are much less than unity in magnitude; truncating the above expansions at low order constitutes the so-called "long-wavelength" approximation.
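A quick order-of-magnitude check, using a representative (assumed) molecular dimension of 0.5 nm and typical spectroscopic wavelengths, shows just how small $\textbf{k}\cdot\textbf{r}$ actually is:

```python
import numpy as np

# |k|·r = (2π/λ)·r for a molecule of size ~0.5 nm (illustrative value)
# at representative UV, IR, and microwave wavelengths.
r = 0.5e-9                         # typical molecular dimension, in meters
wavelengths = {"UV": 200e-9, "IR": 3e-6, "microwave": 1e-2}

for name, lam in wavelengths.items():
    kr = 2.0 * np.pi * r / lam
    print(f"{name:9s} lambda = {lam:.1e} m  ->  k.r = {kr:.2e}")
    assert kr < 1.0                # justifies truncating the exponential expansions
```

Even in the ultraviolet, $\textbf{k}\cdot\textbf{r} \approx 10^{-2}$, so each successive term in the expansion is suppressed by roughly two orders of magnitude.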
Electric Dipole Transitions
Introducing these expansions into the expression for $\alpha_{f,i}$ gives rise to terms of various powers in $1/\lambda$. The lowest-order terms are:
$\alpha_{f,i}(E1) = \langle\Phi_f|\sum\limits_j\left( \dfrac{e}{m_ec}\right)\textbf{A}_0\cdot{\nabla_j} + \sum\limits_a\left( \dfrac{Z_ae}{m_ac}\right)\textbf{A}_0\cdot{\nabla_a}|\Phi_i\rangle \nonumber$
and are called "electric dipole" terms, denoted E1. To see why these matrix elements are termed E1, we use the following identity (see Chapter 1) between the momentum operator $-i\hbar\nabla$ and the corresponding position operator r:
$\nabla_j = -\left( \dfrac{m_e}{\hbar^2} \right)[ H,\textbf{r}_j ] \nonumber$
$\nabla_a = -\left( \dfrac{m_a}{\hbar^2} \right)[ H,\textbf{R}_a ] \nonumber$
This derives from the fact that H contains $\nabla_j \text{ and } \nabla_a \text{ in its kinetic energy operators (as } \nabla_a^2 \text{ and } \nabla^2_j$ ). Substituting these expressions into the above $\alpha_{f,i}(E1) \text{ equation and using H} \Phi_{ \text{ i or f } } = E^0_{\text{ i or f }} \Phi_{ \text{ i or f }}$, one obtains:
$\alpha_{f,i}(E1) = (E^0_f - E^0_i)\textbf{A}_0\cdot{\langle}\Phi_f|\sum\limits_j \left( \dfrac{e}{\hbar^2 c} \right) \textbf{r}_j + \sum\limits_a \left( \dfrac{Z_ae}{\hbar^2c} \right) \textbf{R}_a | \Phi_i\rangle \nonumber$
$= \omega_{f,i} \textbf{A}_0\cdot{\langle}\Phi_f|\sum\limits_j \left( \dfrac{e}{\hbar c} \right)\textbf{r}_j + \sum\limits_a\left( \dfrac{Z_ae}{\hbar c} \right) \textbf{R}_a | \Phi_i \rangle \nonumber$
$= \left( \dfrac{\omega_{f,i}}{\hbar c} \right) \textbf{A}_0\cdot{\langle}\Phi_f|\mu|\Phi_i\rangle , \nonumber$
where $\mu$ is the electric dipole moment operator for the electrons and nuclei:
$\mu = \sum\limits_j e\textbf{r}_j + \sum\limits_a Z_ae\textbf{R}_a. \nonumber$
The fact that the E1 approximation to $\alpha_{f,i}$ contains matrix elements of the electric dipole operator between the initial and final states makes it clear why this is called the electric dipole contribution to $\alpha_{f,i}$; within the E1 notation, the E stands for electric moment and the 1 stands for the first such moment (i.e., the dipole moment).
Within this approximation, the overall rate of transitions is given by:
$R_{i,f} = 2\pi g(\omega_{f,i}) | \alpha_{f,i}|^2 \nonumber$
$= 2\pi g(\omega_{f,i})\left( \dfrac{\omega_{f,i}}{\hbar c} \right)^2 |\textbf{A}_0 \cdot{\langle}\Phi_f|\mu |\Phi_i\rangle |^2. \nonumber$
Recalling that $\textbf{E}(\textbf{r},t) = -\dfrac{1}{c}\dfrac{\partial \textbf{A}}{\partial t} = \dfrac{\omega}{c} \textbf{A}_0 \text{ sin } (\omega t - \textbf{k}\cdot{\textbf{r}})$,
the magnitude of $\textbf{A}_0$ can be replaced by that of E, and this rate expression becomes
$R_{i,f} = \left( \dfrac{2\pi}{\hbar^2} \right) g(\omega_{f,i}) | \textbf{E}_0\cdot{\langle}\Phi_f|\mu |\Phi_i\rangle |^2. \nonumber$
This expresses the widely used E1 approximation to the Fermi-Wentzel golden rule.
Magnetic Dipole and Electric Quadrupole Transitions
When E1 predictions for the rates of transitions between states vanish (e.g., for symmetry reasons as discussed below), it is essential to examine higher order contributions to $\alpha_{f,i}$. The next terms in the above long-wavelength expansion vary as $\frac{1}{\lambda}$ and have the form:
$\alpha_{f,i}(E2 + M1) = \langle\Phi_f | \sum\limits_j \left( \dfrac{e}{m_ec} \right)[-i\textbf{k}\cdot{\textbf{r}}_j]\textbf{A}_0\cdot{\nabla}_j + \sum\limits_a \left( \dfrac{Z_ae}{m_ac} \right)[-i\textbf{k}\cdot{\textbf{R}}_a] \textbf{A}_0\cdot{\nabla}_a | \Phi_i\rangle. \nonumber$
For reasons soon to be shown, they are called electric quadrupole (E2) and magnetic dipole (M1) terms. Clearly, higher and higher order terms can be so generated. Within the long-wavelength regime, however, successive terms should decrease in magnitude because of the successively higher powers of $\frac{1}{\lambda}$ that they contain.
To further analyze the above E2 + M1 factors, let us label the propagation direction of the light as the z-axis (the axis along which k lies) and the direction of $\textbf{A}_0$ as the x-axis. These axes are so-called "lab-fixed" axes because their orientation is determined by the direction of the light source and the direction of polarization of the light source's E field, both of which are specified by laboratory conditions. The molecule being subjected to this light can be oriented at arbitrary angles relative to these lab axes. With the x, y, and z axes so defined, the above expression for $\alpha_{f,i}$ (E2+M1) becomes
$\alpha_{f,i}(E2 + M1) = -i\left( \dfrac{A_0 2\pi}{\lambda} \right) \langle\Phi_f |\sum\limits_j \left( \dfrac{e}{m_ec} \right)z_j \dfrac{\partial}{\partial x_j} + \sum\limits_a \left( \dfrac{Z_ae}{m_ac} \right) z_a \dfrac{\partial}{\partial x_a} | \Phi_i\rangle . \nonumber$
Now writing (for both $z_j \text{ and } z_a$)
$z\dfrac{\partial}{\partial x} = \dfrac{1}{2}\left( z \dfrac{\partial}{\partial x} - x \dfrac{\partial}{\partial z} + z\dfrac{\partial}{\partial x} + x\dfrac{\partial}{\partial z} \right), \nonumber$
and using
$\nabla_j = -\left( \dfrac{m_e}{\hbar^2} \right)[ H, \textbf{r}_j ] \nonumber$
$\nabla_a = -\left( \dfrac{m_a}{\hbar^2} \right)[ H, \textbf{R}_a ], \nonumber$
the contributions of $\frac{1}{2}\left( z \frac{\partial}{\partial x} + x \frac{\partial}{\partial z} \right)$ to $\alpha_{f,i}$(E2+M1) can be rewritten as
$\alpha_{f,i}(E2) = -i\dfrac{(A_0 e2\pi \omega_{f,i})}{c\lambda \hbar} \langle \Phi_f | \sum\limits_j z_j x_j + \sum\limits_a Z_a z_a x_a | \Phi_i \rangle . \nonumber$
The operator $\sum\limits_j z_j x_j + \sum\limits_aZ_az_ax_a$ that appears above is the z,x element of the electric quadrupole moment operator $Q_{z,x}$; it is for this reason that this particular component is labeled E2 and denoted the electric quadrupole contribution.
The remaining $\dfrac{1}{2}\left( z \dfrac{\partial}{\partial x} - x\dfrac{\partial}{\partial z} \right)$ contribution to $\alpha_{f,i}$ (E2+M1) can be rewritten in a form that makes its content more clear by first noting that
$\dfrac{1}{2}\left( z\dfrac{\partial}{\partial x} - x\dfrac{\partial}{\partial z}\right) = \left( \dfrac{i}{2\hbar}\right) (zp_x - xp_z) = \left( \dfrac{i}{2\hbar}\right) L_y \nonumber$
contains the y-component of the angular momentum operator. Hence, the following contribution to $\alpha_{f,i}$ (E2+M1) arises:
$\alpha_{f,i}(M1) = \dfrac{A_02\pi e}{2\lambda c\hbar}\langle \Phi_f | \sum\limits_j \dfrac{L_{y_j}}{m_e} + \sum\limits_a Z_a \dfrac{L_{y_a}}{m_a} | \Phi_i\rangle . \nonumber$
The magnetic dipole moment of the electrons about the y axis is
$\mu_{\text{ y, electrons}} = \sum\limits_j \left( \dfrac{e}{2m_ec} \right) L_{y_j} ; \nonumber$
that of the nuclei is
$\mu_{\text{y, nuclei}} = \sum\limits_a \left( \dfrac{Z_ae}{2m_ac} \right) L_{y_a} . \nonumber$
The $\alpha_{f,i}$ (M1) term thus describes the interaction of the magnetic dipole moments of the electrons and nuclei with the magnetic field (of strength |H| = $A_0$ k) of the light (which lies along the y axis):
$\alpha_{f,i}(M1) = \dfrac{|H|}{\hbar} \langle \Phi_f | \mu_{\text{y, electrons}} + \mu_{\text{y, nuclei}} |\Phi_i \rangle . \nonumber$
The total rate of transitions from $\Phi_i \text{ to } \Phi_f$ is given, through first-order in perturbation theory, by
$R_{i,f} = 2\pi g(\omega_{f,i}) |\alpha_{f,i}|^2, \nonumber$
where $\alpha_{f,i}$ is a sum of its E1, E2, M1, etc. pieces. In the next chapter, molecular symmetry will be shown to be of use in analyzing these various pieces. It should be kept in mind that the contributions caused by E1 terms will dominate, within the long-wavelength approximation, unless symmetry causes these terms to vanish. It is primarily under such circumstances that consideration of M1 and E2 transitions is needed.
The Phenomenological Rate Laws
Before closing this chapter, it is important to emphasize the context in which the transition rate expressions obtained here are most commonly used. The perturbative approach used in the above development gives rise to various contributions to the overall rate coefficient for transitions from an initial state $\Phi_i \text{ to a final state } \Phi_f$; these contributions include the electric dipole, magnetic dipole, and electric quadrupole first-order terms as well as contributions arising from second (and higher) order terms in the perturbation solution.
In principle, once the rate expression
$R_{i,f} = 2\pi g(\omega_{f,i}) |\alpha_{f,i}|^2 \nonumber$
has been evaluated through some order in perturbation theory and including the dominant electromagnetic interactions, one can make use of these state-to-state rates, which are computed on a per-molecule basis, to describe the time evolution of the populations of the various energy levels of the molecule under the influence of the light source's electromagnetic fields.
For example, given two states, denoted i and f, between which transitions can be induced by photons of frequency $\omega_{f,i}$, the following kinetic model is often used to describe the time evolution of the numbers of molecules $n_i$ and $n_f$ in the respective states:
$\dfrac{dn_i}{dt} = -R_{i,f}n_i + R_{f,i}n_f \nonumber$
$\dfrac{dn_f}{dt} = - R_{f,i}n_f + R_{i,f}n_i . \nonumber$
Here, $R_{i,f} \text{ and } R_{f,i}$ are the rates (per molecule) of transitions for the $\text{ i } \rightarrow \text{ f and f } \rightarrow \text{ i }$ transitions respectively. As noted above, these rates are proportional to the intensity of the light source (i.e., the photon intensity) at the resonant frequency and to the square of a matrix element connecting the respective states. This matrix element square is $|\alpha_{i,f}|^2 \text{ in the former case and } |\alpha_{f,i}|^2$ in the latter. Because the perturbation operator whose matrix elements are $\alpha_{i,f} \text{ and } \alpha_{f,i}$ is Hermitian (this is true through all orders of perturbation theory and for all terms in the long-wavelength expansion), these two quantities are complex conjugates of one another, and, hence $|\alpha_{i,f}|^2 = |\alpha_{f,i}|^2$, from which it follows that $R_{i,f} = R_{f,i}.$ This means that the state-to-state absorption and stimulated emission rate coefficients (i.e., the rate per molecule undergoing the transition) are identical. This result is referred to as the principle of microscopic reversibility.
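The content of these coupled rate equations, together with the microscopic-reversibility condition $R_{i,f} = R_{f,i}$, can be illustrated with a simple forward-Euler integration (the rate, time step, and initial populations below are arbitrary choices): starting with all molecules in state i, the two populations relax toward equality while their sum is conserved.

```python
# Forward-Euler integration of dn_i/dt = -R n_i + R n_f and
# dn_f/dt = -R n_f + R n_i with equal state-to-state rates R_{i,f} = R_{f,i} = R.
R = 0.5                 # per-molecule rate (arbitrary units)
dt, steps = 1e-3, 20000
ni, nf = 1.0, 0.0       # all molecules start in the initial state i

for _ in range(steps):
    flow = (-R * ni + R * nf) * dt
    ni += flow
    nf -= flow

assert abs(ni + nf - 1.0) < 1e-9       # total population conserved (up to rounding)
assert abs(ni - 0.5) < 1e-4 and abs(nf - 0.5) < 1e-4   # populations equalize
```

The exact solution is $n_i(t) = \frac{1}{2} + \frac{1}{2}e^{-2Rt}$, so equal state-to-state rates drive the two *states* toward equal populations; the level-degeneracy factors introduced next modify this for degenerate levels.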
Quite often, the states between which transitions occur are members of levels that contain more than a single state. For example, in rotational spectroscopy a transition between a state in the J = 3 level of a diatomic molecule and a state in the J = 4 level involves such states; these levels are 2J+1 = 7-fold and 2J+1 = 9-fold degenerate, respectively.
To extend the above kinetic model to this more general case in which degenerate levels occur, one uses the number of molecules in each level ($N_i \text{ and }N_f$ for the two levels in the above example) as the time dependent variables. The kinetic equations then governing their time evolution can be obtained by summing the state-to-state equations over all states in each level
$\sum\limits_{\text{i in initial level}} \left( \dfrac{dn_i}{dt} \right) = \dfrac{dN_i}{dt} \nonumber$

$\sum\limits_{\text{f in final level}} \left( \dfrac{dn_f}{dt} \right) = \dfrac{dN_f}{dt} \nonumber$
and realizing that each state within a given level can undergo transitions to all states within the other level (hence the total rates of production and consumption must be summed over all states to or from which transitions can occur). This generalization results in a set of rate laws for the populations of the respective levels:
$\dfrac{dN_i}{dt} = -g_fR_{i,f}N_i + g_iR_{f,i}N_f \nonumber$
$\dfrac{dN_f}{dt} = -g_iR_{f,i}N_f + g_f R_{i,f}N_i . \nonumber$
Here, $g_i \text{ and } g_f$ are the degeneracies of the two levels (i.e., the number of states in each level) and the $R_{i,f} \text{ and } R_{f,i}$, which are equal as described above, are the state-to-state rate coefficients introduced earlier.
Spontaneous and Stimulated Emission
It turns out (the development of this concept is beyond the scope of this text) that the rate at which an excited level can emit photons and decay to a lower energy level is dependent on two factors:
1. the rate of stimulated photon emission as covered above and
2. the rate of spontaneous photon emission.
The former rate $g_f R_{i,f} \text{ (per molecule) is proportional to the light intensity } g(\omega_{f,i})$ at the resonance frequency. It is conventional to separate out this intensity factor by defining an intensity independent rate coefficient $B_{i,f}$ for this process as:
$g_f R_{i,f} = g(\omega_{f,i}) B_{i,f}. \nonumber$
Clearly, $B_{i,f}$ embodies the final-level degeneracy factor $g_f$, the perturbation matrix elements, and the $2\pi$ factor in the earlier expression for $R_{i,f}$. The spontaneous rate of transition from the excited to the lower level is found to be independent of photon intensity, because it deals with a process that does not require collision with a photon to occur, and is usually denoted $A_{f,i}$. The rate of photon-stimulated upward transitions from state f to state i ($g_i R_{f,i} = g_i R_{i,f}$ in the present case) is also proportional to $g(\omega_{f,i})$, so it is written by convention as:
$g_i R_{f,i} = g(\omega_{f,i}) B_{f,i}. \nonumber$
An important relation between the $B_{i,f} \text{ and } B_{f,i}$ parameters exists and is based on the identity $R_{i,f} = R_{f,i}$ that connects the state-to-state rate coefficients:
$\dfrac{B_{i,f}}{B_{f,i}} = \dfrac{g_fR_{i,f}}{g_iR_{f,i}} = \dfrac{g_f}{g_i}. \nonumber$
This relationship will prove useful in the following sections.
Saturated Transitions and Transparency
Returning to the kinetic equations that govern the time evolution of the populations of two levels connected by photon absorption and emission, and adding in the term needed for spontaneous emission, one finds (with the initial level being of the lower energy):
$\dfrac{dN_i}{dt} = -gB_{i,f}N_i + (A_{f,i} + gB_{f,i})N_f \nonumber$
$\dfrac{dN_f}{dt} = -(A_{f,i} + gB_{f,i})N_f + gB_{i,f}N_i \nonumber$
where g = g($\omega$) denotes the light intensity at the resonance frequency. At steady state, the populations of these two levels are given by setting
$\dfrac{dN_i}{dt} = \dfrac{dN_f}{dt} = 0: \nonumber$
$\dfrac{N_f}{N_i} = \dfrac{(gB_{i,f})}{(A_{f,i} + gB_{f,i})}. \nonumber$
When the light source's intensity is so large as to render $gB_{f,i} \gg A_{f,i}$ (i.e., when the rate of spontaneous emission is small compared to the stimulated rate), this population ratio reaches $B_{i,f}/B_{f,i}$, which was shown earlier to equal $g_f/g_i$. In this case, one says that the populations have been saturated by the intense light source. Any further increase in light intensity will result in zero increase in the rate at which photons are being absorbed. Transitions whose populations have been saturated by the application of intense light sources are said to display optical transparency because they are unable to absorb (or emit) any further photons.
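A small numerical sketch of this saturation behavior, using illustrative A and B values together with the $g_i = 7$ and $g_f = 9$ degeneracies of the J = 3 and J = 4 rotational levels used earlier:

```python
# Steady-state ratio N_f/N_i = g B_if / (A_fi + g B_fi) as the light
# intensity g grows; it saturates at B_if/B_fi = g_f/g_i.
g_i, g_f = 7, 9                 # 2J+1 degeneracies for J = 3 and J = 4
B_fi = 1.0                      # illustrative stimulated-emission coefficient
B_if = (g_f / g_i) * B_fi       # required by B_if/B_fi = g_f/g_i
A_fi = 1.0                      # illustrative spontaneous-emission coefficient

def steady_ratio(g):
    return g * B_if / (A_fi + g * B_fi)

for g in (0.1, 10.0, 1e4, 1e8):      # increasing light intensity g(ω)
    print(f"g = {g:g}: N_f/N_i = {steady_ratio(g):.6f}")

# intense-light (saturation) limit:
assert abs(steady_ratio(1e8) - g_f / g_i) < 1e-6
```

At low intensity the ratio is small (most molecules remain in the lower level); at high intensity it approaches $g_f/g_i$ and further intensity buys no further net absorption.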
Equilibrium and Relations Between A and B Coefficients
When the molecules in the two levels being discussed reach equilibrium (at which time $\frac{dN_i}{dt} = \frac{dN_f}{dt} = 0$ also holds) with a photon source that itself is in equilibrium characterized by a temperature T, we must have:
$\dfrac{N_f}{N_i} = \dfrac{g_f}{g_i}e^{\dfrac{-(E_f - E_i)}{kT}} = \dfrac{g_f}{g_i}e^{\dfrac{-\hbar \omega}{kT}} \nonumber$
where $g_f \text{ and } g_i$ are the degeneracies of the states labeled f and i. The photon source that is characterized by an equilibrium temperature T is known as a black body radiator, whose intensity profile $g(\omega)$ (in erg $cm^{-3}$ sec) is known to be of the form:
$g\left( \omega \right) = \dfrac{2(\hbar \omega)^3}{\pi c^3 \hbar^2} \left(e^{\dfrac{\hbar \omega}{kT}} - 1\right)^{-1}. \nonumber$
Equating the kinetic result that must hold at equilibrium:
$\dfrac{N_f}{N_i} = \dfrac{(gB_{i,f})}{( A_{f,i}+ gB_{f,i} )} \nonumber$
to the thermodynamic result:
$\dfrac{N_f}{N_i} = \dfrac{g_f}{g_i}e^{ \dfrac{-\hbar\omega}{kT} }, \nonumber$
and using the above black body g($\omega$) expression and the identity
$\dfrac{B_{i,f}}{B_{f,i}} = \dfrac{g_f}{g_i}, \nonumber$
one can solve for the $A_{f,i} \text{ rate coefficient in terms of the } B_{f,i}$ coefficient. Doing so yields:
$A_{f,i} = B_{f,i}\dfrac{2( \hbar\omega )^3}{\pi c^3\hbar^2} \nonumber$
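Combining the relation $A_{f,i} = \frac{2(\hbar\omega)^3}{\pi c^3\hbar^2}B_{f,i}$ with the black-body profile above gives a compact check: the ratio of spontaneous to stimulated emission rates is $A_{f,i}/[g(\omega)B_{f,i}] = e^{\hbar\omega/kT} - 1$, so spontaneous emission dominates at optical frequencies ($\hbar\omega \gg kT$) and is negligible at microwave frequencies ($\hbar\omega \ll kT$). A sketch with SI constants and two illustrative frequencies:

```python
import numpy as np

# Ratio of spontaneous (A) to stimulated (g(ω)·B) emission at thermal
# equilibrium: A/(gB) = e^{ħω/kT} - 1, using the black-body g(ω) above
# and the A-B relation; the B coefficient cancels out of the ratio.
hbar = 1.054571817e-34      # J·s
kB = 1.380649e-23           # J/K
c = 2.99792458e8            # m/s

def spont_over_stim(omega, T):
    prefactor = 2.0 * (hbar * omega) ** 3 / (np.pi * c ** 3 * hbar ** 2)
    g = prefactor / (np.exp(hbar * omega / (kB * T)) - 1.0)   # black-body g(ω)
    return prefactor / g            # = A/(g B)

T = 300.0
optical = spont_over_stim(3e15, T)      # visible light, ħω >> kT
microwave = spont_over_stim(1e11, T)    # microwave, ħω << kT
assert np.isclose(optical, np.exp(hbar * 3e15 / (kB * T)) - 1.0)
assert optical > 1e30 and microwave < 1e-2
```

This is why visible and UV emission from thermally populated samples is overwhelmingly spontaneous, while masers and NMR rely on stimulated processes.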
Summary
In summary, the so-called Einstein A and B rate coefficients connecting a lower-energy initial state $i$ and a final state $f$ are related by the following conditions:
$B_{i,f} = \dfrac{g_f}{g_i} B_{f,i} \nonumber$
and
$A_{f,i} = \dfrac{2(\hbar\omega)^3}{\pi c^3\hbar^2} B_{f,i}. \nonumber$
These phenomenological level-to-level rate coefficients are related to the state-to-state $R_{i,f}$ coefficients derived by applying perturbation theory to the electromagnetic perturbation through
$g_f R_{i,f} = g(\omega_{f,i}) B_{i,f}. \nonumber$
The A and B coefficients can be used in a kinetic equation model to follow the time evolution of the populations of the corresponding levels:
$\dfrac{dN_i}{dt} = -gB_{i,f}N_i + (A_{f,i} + gB_{f,i})N_f \nonumber$
$\dfrac{dN_f}{dt} = -(A_{f,i} + gB_{f,i})N_f + gB_{i,f}N_i. \nonumber$
These equations possess steady state solutions
$\dfrac{N_f}{N_i} = \dfrac{(gB_{i,f})}{(A_{f,i} + gB_{f,i})} \nonumber$
which, for large $g(\omega)$, produce saturation conditions:
$\dfrac{N_f}{N_i} = \dfrac{(B_{i,f})}{(B_{f,i})} = \dfrac{g_f}{g_i}. \nonumber$
The tools of time-dependent perturbation theory can be applied to transitions among electronic, vibrational, and rotational states of molecules.
• 15.1: Rotational Transitions
In microwave spectroscopy, the energy of the radiation lies in the range of fractions of a $cm^{-1} \text{ through several } cm^{-1}$; such energies are adequate to excite rotational motions of molecules but are not high enough to excite any but the weakest vibrations (e.g., those of weakly bound Van der Waals complexes). In rotational transitions, the electronic and vibrational states are thus left unchanged by the excitation process.
• 15.2: Vibration-Rotation Transitions
When the initial and final electronic states are identical, but the respective vibrational and rotational states are not, one is dealing with transitions between vibration-rotation states of the molecule. These transitions are studied in infrared (IR) spectroscopy using light of energy in the 30 cm$^{-1} \text{ (far IR) to 5000 cm}^{-1}$ range. The electric dipole matrix element analysis still begins with the electronic dipole moment integral.
• 15.3: Electronic-Vibration-Rotation Transitions
Molecular point-group symmetry can often be used to determine whether a particular transition's dipole matrix element will vanish and, as a result, the electronic transition will be "forbidden" and thus predicted to have zero intensity. If the direct product of the symmetries of the initial and final electronic states does not match the symmetry of the electric dipole operator (which has the symmetry of its x, y, and z components), the matrix element will vanish.
• 15.4: Time Correlation Function Expressions for Transition Rates
The first-order "golden-rule" expression for the rates of photon-induced transitions can be recast into a form in which certain specific physical models are easily introduced. By using so-called equilibrium-averaged time correlation functions, it is possible to obtain rate expressions appropriate to a large number of molecules that exist in a distribution of initial states (e.g., for molecules that occupy many possible rotational and perhaps several vibrational levels at room temperature).
Thumbnail: Analysis of white light by dispersing it with a prism is an example of spectroscopy. (CC BY-SA 3.0; D-Kuru).
15: Spectroscopy
Within the approximation that the electronic, vibrational, and rotational states of a molecule can be treated as independent, the total molecular wavefunction of the "initial" state is a product
$\Phi_i = \psi_{ei} \chi_{vi} \phi_{ri} \nonumber$
of an electronic function $\psi_{ei}, \text{ a vibrational function } \chi_{vi}, \text{ and a rotational function } \phi_{ri}. \text{ A similar product expression holds for the "final" wavefunction } \Phi_f.$
In microwave spectroscopy, the energy of the radiation lies in the range of fractions of a $cm^{-1} \text{ through several } cm^{-1}$; such energies are adequate to excite rotational motions of molecules but are not high enough to excite any but the weakest vibrations (e.g., those of weakly bound Van der Waals complexes). In rotational transitions, the electronic and vibrational states are thus left unchanged by the excitation process; hence $\psi_{ei} = \psi_{ef} \text{ and } \chi_{vi} = \chi_{vf}.$
Applying the first-order electric dipole transition rate expressions
$R_{i,f} = 2\pi g(\omega_{f,i}) |\alpha_{f,i}|^2 \nonumber$
obtained in Chapter 14 to this case requires that the E1 approximation
$R_{i,f} = \left( \dfrac{2\pi}{\hbar^2} \right) g(\omega_{f,i}) | \textbf{E}_0\cdot{\langle}\Phi_f |\mu | \Phi_i \rangle |^2 \nonumber$
be examined in further detail. Specifically, the electric dipole matrix elements $\langle \Phi_f | \mu | \Phi_i \rangle \text{ with } \mu = \sum\limits_j e \textbf{r}_j + \sum\limits_a Z_a e \textbf{R}_a$ must be analyzed for $\Phi_i \text{ and } \Phi_f$ being of the product form shown above.
The integrations over the electronic coordinates contained in $\langle \Phi_f |\mu |\Phi_i \rangle ,$ as well as the integrations over vibrational degrees of freedom yield "expectation values" of the electric dipole moment operator because the electronic and vibrational components of $\Phi_i \text{ and } \Phi_f$ are identical:
$\langle \psi_{ei}| \mu | \psi_{ei} \rangle = \mu(\textbf{R}) \nonumber$
is the dipole moment of the initial electronic state (which is a function of the internal geometrical degrees of freedom of the molecule, denoted R); and
$\langle \chi_{vi} | \mu (\textbf{R} ) |\chi_{vi} \rangle = \mu_{ave} \nonumber$
is the vibrationally averaged dipole moment for the particular vibrational state labeled $\chi_{vi}$. The vector $\mu_{ave}$ has components along various directions and can be viewed as a vector "locked" to the molecule's internal coordinate axes (labeled a, b, c).
The rotational part of the $\langle \Phi_f | \mu | \Phi_i \rangle$integral is not of the expectation value form because the initial rotational function $\phi_{ir} \text{ is not the same as the final } \phi_{fr}$. This integral has the form:
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr} \rangle = \int Y^{\text{*}}_{\text{L,M}}(\theta,\phi)\, \mu_{ave}\, Y_{\text{L', M'}}(\theta, \phi) \sin\theta \,d\theta \,d\phi \nonumber$
for linear molecules whose initial and final rotational wavefunctions are $Y_{\text{L,M}} \text{ and } Y_{\text{L',M'}}$, respectively, and
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr} \rangle = \sqrt{\dfrac{2L + 1}{8\pi^2}}\sqrt{\dfrac{2L' + 1}{8\pi^2}} \nonumber$
$\int \left( D_{\text{L, M, K}}(\theta , \phi , \chi) \mu_{ave} D^{\text{*}}_{\text{L', M', K'}}(\theta , \phi , \chi) \text{ sin}\theta \text{ d}\theta \text{ d}\phi \text{ d}\chi \right) \nonumber$
for spherical or symmetric top molecules (here, $\sqrt{\dfrac{2L + 1}{8\pi^2}}D^{\text{*}}_{\text{L, M, K}}(\theta , \phi , \chi)$ are the normalized rotational wavefunctions described in Chapter 13 and in Appendix G). The angles $\theta , \phi , \text{ and } \chi$ refer to how the molecule-fixed coordinate system is oriented with respect to the space-fixed X, Y, Z axis system.
Linear Molecules
For linear molecules, the vibrationally averaged dipole moment $\mu_{ave}$ lies along the molecular axis; hence its orientation in the lab-fixed coordinate system can be specified in terms of the same angles $(\theta \text{ and } \phi)$ that are used to describe the rotational functions $Y_{\text{L,M}} (\theta ,\phi ).$ Therefore, the three components of the $\langle \phi_{ir} | \mu_{ave} | \phi_{fr} \rangle$ integral can be written as:
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr} \rangle_x = \mu \int Y^{\text{*}}_{\text{L, M}}(\theta ,\phi) \sin\theta \cos\phi \; Y_{\text{L', M'}}(\theta , \phi ) \sin\theta \,d\theta \,d\phi \nonumber$
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr} \rangle_y = \mu \int Y^{\text{*}}_{\text{L, M}}(\theta ,\phi) \sin\theta \sin\phi \; Y_{\text{L', M'}}(\theta , \phi ) \sin\theta \,d\theta \,d\phi \nonumber$
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr} \rangle_z = \mu \int Y^{\text{*}}_{\text{L, M}}(\theta ,\phi) \cos\theta \; Y_{\text{L', M'}}(\theta , \phi ) \sin\theta \,d\theta \,d\phi \nonumber$
where $\mu$ is the magnitude of the averaged dipole moment. If the molecule has no dipole moment, all of the above electric dipole integrals vanish and the intensity of E1 rotational transitions is zero.
The three E1 integrals can be further analyzed by noting that cos$\theta \propto \text{ Y}_{\text{1,0}}; \text{ sin}\theta \text{ cos}\phi \propto \text{ Y}_{\text{1,1}} + \text{ Y}_{\text{1,-1}}; \text{ and sin}\theta \text{ sin}\phi \propto \text{ Y}_{\text{1,1}} - \text{ Y}_{\text{1,-1}}$ and using the angular momentum coupling methods illustrated in Appendix G. In particular, the result given in that appendix:
$D_{\text{j, m, m'}}D_{\text{l, n, n'}} = \sum\limits_{\text{J, M, M'}}\langle J,M|j,m; l,n\rangle \langle j,m'; l,n'|J,M' \rangle D_{\text{J, M, M'}} \nonumber$
when multiplied by D$^{\text{*}}_{\text{J, M, M'}} \text{ and integrated over sin}\theta \text{ d}\theta \text{ d}\phi \text{ d}\chi$, yields:
$\int D^{\text{*}}_{\text{J, M, M'}} D_{\text{j, m, m'}}D_{\text{l, n, n'}} \text{ sin}\theta\text{ d}\theta \text{ d}\phi \text{ d}\chi \nonumber$
$= \dfrac{8\pi^2}{2J+1}\langle \text{ J,M| j,m ; l,n }\rangle \langle \text{ j,m' ; l,n'|J, M' } \rangle \nonumber$
$= 8\pi^2 \begin{pmatrix} j & l & J \\ m & n & -M \end{pmatrix} \begin{pmatrix} j & l & J \\ m' & n' & -M' \end{pmatrix} (-1)^{M+M'}. \nonumber$
To use this result in the present linear-molecule case, we note that the $D_{\text{J,M,K}} \text{ functions and the Y}_{\text{J,M}}$ functions are related by:
$\text{Y}_{\text{J,M}}(\theta ,\phi ) = \sqrt{\dfrac{2J + 1}{4\pi}} D^{\text{*}}_{\text{J, M, 0}}(\theta ,\phi ,\chi). \nonumber$
The normalization factor is now $\sqrt{\frac{2J + 1}{4\pi}} \text{ rather than } \sqrt{\frac{2J + 1}{8\pi^2}}$ because the $\text{Y}_{\text{J,M}}$ are no longer functions of $\chi$, and thus the need to integrate over $0 \leq \chi \leq 2\pi$ disappears. Likewise, the $\chi$-dependence of $\text{D}^{\text{*}}_{\text{J,M,K}}$ disappears for K = 0.
We now use these identities in the three E1 integrals of the form
$\mu \int Y^{\text{*}}_{\text{L,M}}(\theta ,\phi) Y_{\text{1,m}}(\theta , \phi) Y_{\text{L',M'}}(\theta ,\phi)\text{ sin}\theta \text{ d}\theta \text{ d}\phi , \nonumber$
with m = 0 being the Z- axis integral, and the Y- and X- axis integrals being combinations of the m = 1 and m = -1 results. Doing so yields:
$\mu \int Y^{\text{*}}_{\text{L,M}}(\theta , \phi) Y_{\text{1,m}}(\theta ,\phi ) Y_{\text{L', M'}}(\theta ,\phi) \sin\theta \,d\theta \,d\phi \nonumber$
$=\mu \sqrt{\dfrac{2L + 1}{4\pi} \dfrac{2L' + 1}{4\pi} \dfrac{3}{4\pi}}\int D_{\text{L, M, 0}}\, D^{\text{*}}_{\text{1, m, 0}}\, D^{\text{*}}_{\text{L', M', 0}} \sin\theta \,d\theta \,d\phi \,\dfrac{d\chi}{2\pi}. \nonumber$
The last factor of $1/2\pi$ is inserted to cancel out the integration over $d\chi$ that, because all K-factors in the rotation matrices equal zero, trivially yields $2\pi$. Now, using the result shown above expressing the integral over three rotation matrices, these E1 integrals for the linear-molecule case reduce to:
$\mu \int\text{ Y}^{\text{*}}_{\text{L,M}}(\theta , \phi)\text{ Y}_{\text{1,m}}(\theta ,\phi)\text{ Y}_{\text{L',M'}}(\theta ,\phi)\text{ sin}\theta \text{ d}\theta \text{ d}\phi \nonumber$
$= \mu \sqrt{\dfrac{2L + 1}{4\pi}\dfrac{2L' + 1}{4\pi}\dfrac{3}{4\pi}}\dfrac{8\pi^2}{2\pi}\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix}\begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix}(-1)^M \nonumber$
$= \mu\sqrt{(2L + 1)(2L' + 1)\dfrac{3}{4\pi}}\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix} (-1)^M. \nonumber$
Applied to the z-axis integral (identifying m = 0), this result therefore vanishes unless:
$\text{M} = \text{M'} \nonumber$
and
$\text{L} = \text{L'} + 1 \text{ or L' }- 1. \nonumber$
Even though angular momentum coupling considerations would allow L = L' (because coupling two angular momenta with j = 1 and j = L' can give L' + 1, L', and L' - 1), the 3-j symbol $\begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix}$ vanishes for the L = L' case since 3-j symbols have the following symmetry
$\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} = (-1)^{L + L' + 1} \begin{pmatrix} L' & 1 & L \\ -M' & -m & M \end{pmatrix} \nonumber$
with respect to the M, M', and m indices. Applied to the $\begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix}$ 3-j symbol, this means that this particular 3-j element vanishes for L = L' since L + L' + 1 is then odd and hence $(-1)^{L + L' + 1}$ is -1.
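As a cross-check, this parity argument can be verified directly. The following is an illustrative sketch (not part of the original text) using SymPy's `wigner_3j`, which implements the standard Wigner 3-j symbols used above:

```python
# Sketch: verify that the 3-j symbol (L' 1 L; 0 0 0) vanishes for L = L'
# and is nonzero only for L = L' +/- 1, as the parity argument predicts.
from sympy.physics.wigner import wigner_3j

for Lp in range(4):
    for L in range(4):
        val = wigner_3j(Lp, 1, L, 0, 0, 0)
        allowed = abs(L - Lp) == 1  # triangle rule plus the parity restriction
        assert (val != 0) == allowed, (Lp, L, val)

print("z-axis rule confirmed: nonzero only for L = L' + 1 or L' - 1")
```

Running the loop over a larger range of L and L' gives the same pattern, which is exactly the $\Delta L = \pm 1$ selection rule derived above.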
Applied to the x- and y- axis integrals, which contain m = ± 1 components, this same analysis yields:
$\mu \sqrt{(2L + 1)(2L' + 1)\dfrac{3}{4\pi}}\begin{pmatrix} L' & 1 & L \\ M' & \pm 1 & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix} (-1)^M \nonumber$
which then requires that
$\text{M} = \text{M'} \pm 1 \nonumber$
and
$\text{L} = \text{ L' + 1, L' -1} \nonumber$
with L = L' again being forbidden because of the second 3-j symbol.
These results provide so-called "selection rules" because they limit the L and M values of the final rotational state, given the L', M' values of the initial rotational state. In the figure shown below, the L = L' + 1 absorption spectrum of NO at 120 K is given. The intensities of the various peaks are related to the populations of the lower-energy rotational states which are, in turn, proportional to $(2L' + 1) e^{-L'(L'+1)h^2/(8\pi^2 I k T)}.$ Also included in the intensities are so-called line strength factors that are proportional to the squares of the quantities:
$\mu \sqrt{ (2L + 1)(2L' + 1)\dfrac{3}{4\pi} }\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix}(-1)^M \nonumber$
which appear in the E1 integrals analyzed above (recall that the rate of photon absorption $R_{i,f} = \left( \frac{2\pi}{\hbar^2} \right)g(\omega_{f,i}) | \textbf{E}_0\cdot{\langle}\Phi_f | \mu | \Phi_i \rangle |^2$ involves the squares of these matrix elements). The book by Zare gives an excellent treatment of line strength factors' contributions to rotation, vibration, and electronic line intensities.
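The Boltzmann weighting just described can be made concrete with a short numerical sketch. The constants below are assumed illustrative values (a rotational constant of roughly 1.7 cm$^{-1}$ for NO and T = 120 K), not numbers taken from the text's figure:

```python
# Sketch: rotational-level populations N_L' ~ (2L'+1) exp(-B L'(L'+1)/kT),
# which set the relative intensities of the microwave absorption lines.
import math

B = 1.70             # assumed rotational constant of NO, in cm^-1
kT = 0.695 * 120.0   # kT in cm^-1 at 120 K (k/hc ~ 0.695 cm^-1 per kelvin)

pops = [(2 * L + 1) * math.exp(-B * L * (L + 1) / kT) for L in range(30)]
total = sum(pops)
L_max = max(range(30), key=lambda L: pops[L])
print("most populated rotational level: L' =", L_max)
print("fractional populations for L' = 0..5:",
      [round(p / total, 3) for p in pops[:6]])
```

The competition between the (2L' + 1) degeneracy factor and the decaying Boltzmann exponential produces the characteristic intensity maximum at intermediate L' seen in rotational band envelopes.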
Non-Linear Molecules
For molecules that are non-linear and whose rotational wavefunctions are given in terms of the spherical or symmetric top functions $\textbf{D}^{\text{*}}_{\text{L,M,K}}$, the dipole moment $\mu_{ave}$ can have components along any or all three of the molecule's internal coordinates (e.g., the three molecule-fixed coordinates that describe the orientation of the principal axes of the moment of inertia tensor). For a spherical top molecule, $| \mu_{ave} |$ vanishes, so E1 transitions do not occur.
For symmetric top species, $\mu_{ave}$ lies along the symmetry axis of the molecule, so the orientation of $\mu_{ave}$ can again be described in terms of $\theta \text{ and } \phi$, the angles used to locate the orientation of the molecule's symmetry axis relative to the lab-fixed coordinate system. As a result, the E1 integral again can be decomposed into three pieces:
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr}\rangle_x = \mu\int D_{\text{L, M, K}}(\theta ,\phi ,\chi) \sin\theta \cos\phi \; D^{\text{*}}_{\text{L', M', K'}}(\theta , \phi , \chi) \sin\theta \,d\theta \,d\phi \,d\chi \nonumber$
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr}\rangle_y = \mu\int D_{\text{L, M, K}}(\theta ,\phi ,\chi) \sin\theta \sin\phi \; D^{\text{*}}_{\text{L', M', K'}}(\theta , \phi , \chi) \sin\theta \,d\theta \,d\phi \,d\chi \nonumber$
$\langle \phi_{ir} | \mu_{ave} | \phi_{fr}\rangle_z = \mu\int D_{\text{L, M, K}}(\theta ,\phi ,\chi) \cos\theta \; D^{\text{*}}_{\text{L', M', K'}}(\theta , \phi , \chi) \sin\theta \,d\theta \,d\phi \,d\chi \nonumber$
Using the fact that $\text{ cos}\theta \propto \text{D}^{\text{*}}_{\text{1, 0,0}}; \text{ sin}\theta \text{ cos}\phi \propto \text{D}^{\text{*}}_{\text{1,1,0}} + \text{D}^{\text{*}}_{\text{1,-1,0}}; \text{ and sin}\theta \text{ sin}\phi \propto \text{D}^{\text{*}}_{\text{1,1,0}} - \text{D}^{\text{*}}_{\text{1,-1,0}}$ and the tools of angular momentum coupling allows these integrals to be expressed, as above, in terms of products of the following 3-j symbols:
$\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ K' & 0 & -K \end{pmatrix}, \nonumber$
from which the following selection rules are derived:
$\text{L} = \text{L' + 1, L', L' - 1} \text{ (but not L = L' = 0)} \nonumber$
$\text{K = K'} \nonumber$
$\text{M = M' + m,} \nonumber$
with m = 0 for the Z-axis integral and m = ± 1 for the X- and Y- axis integrals. In addition, if K = K' = 0, the L = L' transitions are also forbidden by the second 3-j symbol vanishing.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/15%3A_Spectroscopy/15.01%3A_Rotational_Transitions.txt
When the initial and final electronic states are identical, but the respective vibrational and rotational states are not, one is dealing with transitions between vibration-rotation states of the molecule. These transitions are studied in infrared (IR) spectroscopy using light of energy in the 30 cm$^{-1} \text{ (far IR) to 5000 cm}^{-1}$ range. The electric dipole matrix element analysis still begins with the electronic dipole moment integral $\langle \psi_{ei} | \mu | \psi_{ei} \rangle = \mu (\textbf{R}),$ but the integration over internal vibrational coordinates no longer produces the vibrationally averaged dipole moment. Instead one forms the vibrational transition dipole integral:
$\langle \chi_{vf} | \mu (\textbf{R} ) | \chi_{vi} \rangle = \mu_{f,i} \nonumber$
between the initial $\chi_i \text{ and final } \chi_f$ vibrational states.
The Dipole Moment Derivatives
Expressing $\mu ( \textbf{R} )$ in a power series expansion about the equilibrium bond length position (denoted $\textbf{R}_e$ collectively and $R_{a,e}$ individually):
$\mu ( \textbf{R} ) = \mu( \textbf{R}_e ) + \sum\limits_a \dfrac{\partial \mu}{\partial R_a}(R_a - R_{a,e}) + ..., \nonumber$
substituting into the $\langle\chi_{vf} | \mu ( \textbf{R} ) | \chi_{vi} \rangle$ integral, and using the fact that $\chi_i \text{ and } \chi_f$ are orthogonal (because they are eigenfunctions of vibrational motion on the same electronic surface and hence of the same vibrational Hamiltonian), one obtains:
$\langle \chi_{vf} | \mu ( \textbf{R} ) | \chi_{vi} \rangle = \mu ( \textbf{R}_e ) \langle \chi_{vf} | \chi_{vi} \rangle + \sum\limits_a \dfrac{\partial \mu}{\partial \textbf{R}_a} \langle \chi_{vf} | (R_a - R_{a,e}) | \chi_{vi} \rangle + ... \nonumber$
$= \sum\limits_a \left( \dfrac{\partial \mu}{\partial \textbf{R}_a}\right) \langle\chi_{vf} | (\textbf{R}_a - \textbf{R}_{a,e}) | \chi_{vi} \rangle + ... . \nonumber$
This result can be interpreted as follows:
1. Each independent vibrational mode of the molecule contributes to the $\mu_{f,i}$ vector an amount equal to $\left( \dfrac{\partial \mu}{\partial \textbf{R}_a} \right) \langle \chi_{vf} | (\textbf{R}_a - \textbf{R}_{a,e}) | \chi_{vi} \rangle + ... .$
2. Each such contribution contains one part $\left( \dfrac{\partial \mu}{\partial \textbf{R}_a} \right)$ that depends on how the molecule's dipole moment function varies with vibration along that particular mode (labeled a),
3. and a second part $\langle\chi_{vf} | ( \textbf{R}_a - \textbf{R}_{a,e} ) | \chi_{vi} \rangle$ that depends on the character of the initial and final vibrational wavefunctions.
If the vibration does not produce a modulation of the dipole moment (e.g., as with the symmetric stretch vibration of the $CO_2$ molecule), its infrared intensity vanishes because $\left( \dfrac{\partial \mu}{\partial \textbf{R}_a} \right) = 0.$ One says that such transitions are infrared "inactive".
Selection Rules on v in the Harmonic Approximation
If the vibrational functions are described within the harmonic oscillator approximation, it can be shown that the $\langle\chi_{vf} | ( \textbf{R}_a - \textbf{R}_{a,e} ) | \chi_{vi}\rangle$ integrals vanish unless $v_f = v_i + 1 \text{ or } v_i - 1$ (and that these integrals are proportional to $\sqrt{v_i + 1} \text{ and } \sqrt{v_i}$ in the respective cases). Even when $\chi_{vf} \text{ and } \chi_{vi}$ are rather non-harmonic, it turns out that such $\Delta v = \pm 1$ transitions have the largest $\langle\chi_{vf} | (\textbf{R}_a - \textbf{R}_{a,e}) | \chi_{vi} \rangle$ integrals and therefore the highest infrared intensities. For these reasons, transitions that correspond to $\Delta v = \pm 1$ are called "fundamental"; those with $\Delta v = \pm 2$ are called "first overtone" transitions.
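These harmonic-oscillator matrix elements can be checked with a small numerical sketch. In dimensionless units (m = $\omega$ = $\hbar$ = 1, an assumption made here purely for illustration), the position operator is $x = (a + a^\dagger)/\sqrt{2}$, so $\langle v+1|x|v\rangle = \sqrt{(v+1)/2}$, which carries the $\sqrt{v_i + 1}$ proportionality quoted above:

```python
# Sketch: build the dimensionless position operator in the harmonic-oscillator
# basis and confirm that only Delta v = +/- 1 matrix elements are nonzero.
import numpy as np

n = 8
x = np.zeros((n, n))
for v in range(n - 1):
    # <v+1| x |v> = sqrt((v+1)/2) in units where m = omega = hbar = 1
    x[v + 1, v] = x[v, v + 1] = np.sqrt((v + 1) / 2.0)

for vi in range(n):
    for vf in range(n):
        if abs(vf - vi) != 1:
            assert x[vf, vi] == 0.0  # every Delta v != +/-1 element vanishes

print("fundamental element <1|x|0> =", round(x[1, 0], 4))
```

Because the overtone elements are exactly zero in this model, any $\Delta v = \pm 2$ intensity in a real spectrum reflects anharmonicity of the potential or nonlinearity of $\mu(\textbf{R})$.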
In summary then, vibrations for which the molecule's dipole moment is modulated as the vibration occurs (i.e., for which $\left( \frac{\partial \mu}{\partial \textbf{R}_a}\right)$ is non-zero) and for which $\Delta v = \pm 1$ tend to have large infrared intensities; overtones of such vibrations tend to have smaller intensities, and those for which $\left( \frac{\partial \mu}{\partial \textbf{R}_a} \right) = 0$ have no intensity.
Rotational Selection Rules
The result of all of the vibrational modes' contributions to
$\sum\limits_a \left( \frac{\partial \mu}{\partial \textbf{R}_a} \right) \langle \chi_{vf} | (\textbf{R}_a - \textbf{R}_{a,e}) | \chi_{vi} \rangle \nonumber$
is a vector $\mu_{trans}$ that is termed the vibrational "transition dipole" moment. This is a vector with components along, in principle, all three of the internal axes of the molecule. For each particular vibrational transition (i.e., each particular $\chi_i \text{ and } \chi_f$ ) its orientation in space depends only on the orientation of the molecule; it is thus said to be locked to the molecule's coordinate frame. As such, its orientation relative to the lab-fixed coordinates (which is needed to effect a derivation of rotational selection rules as was done earlier in this Chapter) can be described much as was done above for the vibrationally averaged dipole moment that arises in purely rotational transitions. There are, however, important differences in detail. In particular,
1. For a linear molecule $\mu_{trans}$ can have components either along (e.g., when stretching vibrations are excited; these cases are denoted $\sigma$-cases) or perpendicular to (e.g., when bending vibrations are excited; they are denoted $\pi$ cases) the molecule's axis.
2. For symmetric top species, $\mu_{trans}$ need not lie along the molecule's symmetry axis; it can have components either along or perpendicular to this axis.
3. For spherical tops, $\mu_{trans}$ will vanish whenever the vibration does not induce a dipole moment in the molecule. Vibrations such as the totally symmetric $a_1$ C-H stretching motion in CH$_4$ do not induce a dipole moment, and are thus infrared inactive; non-totally-symmetric vibrations can also be inactive if they induce no dipole moment.
As a result of the above considerations, the angular integrals
$\langle \phi_{ir} | \mu_{trans} | \phi_{fr} \rangle = \int \text{D}_{\text{L, M, K}}(\theta ,\phi , \chi) \mu_{trans} \textbf{D}^{\text{*}}_{\text{L', M', K'}}(\theta , \phi , \chi)\text{ sin}\theta \text{ d}\theta \text{ d}\phi \text{ d}\chi \nonumber$
that determine the rotational selection rules appropriate to vibrational transitions produce similar, but not identical, results as in the purely rotational transition case.
The derivation of these selection rules proceeds as before, with the following additional considerations. The transition dipole moment's $\mu_{trans}$ components along the lab-fixed axes must be related to its molecule-fixed components (which are determined by the nature of the vibrational transition as discussed above). This transformation, as given in Zare's text, reads as follows:
$(\mu_{trans})_m = \sum\limits_k \textbf{D}^{\text{*}}_{\text{l, m, k}}(\theta , \phi , \chi )(\mu_{trans})_k \nonumber$
where $(\mu_{trans})_m$ with m = 1, 0, -1 refer to the components along the lab-fixed (X, Y, Z) axes and $(\mu_{trans})_k$ with k = 1, 0, -1 refer to the components along the molecule- fixed (a, b, c) axes.
This relationship, when used, for example, in the symmetric or spherical top E1 integral:
$\langle \phi_{ir} | \mu_{trans} | \phi_{fr} \rangle = \int \text{D}_{\text{L,M,K}}(\theta , \phi , \chi )\mu_{trans} \textbf{D}^{\text{*}}_{\text{L', M', K'}}(\theta , \phi , \chi )\text{ sin}\theta \text{ d}\theta \text{ d}\phi \text{ d}\chi \nonumber$
gives rise to products of 3-j symbols of the form:
$\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ K' & k & -K \end{pmatrix}. \nonumber$
The product of these 3-j symbols is nonvanishing only under certain conditions that provide the rotational selection rules applicable to vibrational lines of symmetric and spherical top molecules.
Both 3-j symbols will vanish unless
L = L' +1, L', or L'-1.
In the special case in which L = L' =0 (and hence with M = M' =0 = K = K', which means that m = 0 = k), these 3-j symbols again vanish. Therefore, transitions with
L = L' =0
are again forbidden. As usual, the fact that the lab-fixed quantum number m can range over m = 1, 0, -1, requires that
M = M' + 1, M', M'-1.
The selection rules for $\Delta \textbf{K}$ depend on the nature of the vibrational transition, in particular, on the component of $\mu_{trans}$ along the molecule-fixed axes. For the second 3-j symbol to not vanish, one must have
K = K' + k,
where k = 0, 1, and -1 refer to these molecule-fixed components of the transition dipole. Depending on the nature of the transition, various k values contribute.
Symmetric Tops
In a symmetric top molecule such as $NH_3$, if the transition dipole lies along the molecule's symmetry axis, only k = 0 contributes. Such vibrations preserve the molecule's symmetry relative to this symmetry axis (e.g. the totally symmetric N-H stretching mode in $NH_3$). The additional selection rule $\Delta K = 0$ is thus obtained. Moreover, for K = K' = 0, all transitions with $\Delta L = 0$ vanish because the second 3-j symbol vanishes. In summary, one has:
$\Delta K = 0; \Delta M = \pm 1, 0; \Delta L = \pm 1,0$ (but L = L' = 0 is forbidden and all $\Delta L = 0$ are forbidden for K = K' = 0)
for symmetric tops with vibrations whose transition dipole lies along the symmetry axis.
If the transition dipole lies perpendicular to the symmetry axis, only k = ±1 contribute. In this case, one finds
$\Delta K = \pm 1; \Delta M = \pm 1, 0; \Delta L = \pm 1, 0$ (neither L = L' = 0 nor K = K' = 0 can occur for such transitions, so there are no additional constraints).
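The K and L rules above can be spot-checked directly against the 3-j symbols themselves. This is a hedged sketch using SymPy's `wigner_3j`; the particular quantum numbers are illustrative choices, not values from the text:

```python
# Sketch: check the symmetric-top selection rules via the 3-j symbols
# (L' 1 L; M' m -M)(L' 1 L; K' k -K) for a few illustrative quantum numbers.
from sympy.physics.wigner import wigner_3j

# Parallel band (k = 0) with K' = 2: Delta K = 0 and L = L' is allowed.
assert wigner_3j(3, 1, 3, 2, 0, -2) != 0
# Parallel band with K' = 0: L = L' is killed by parity; L = L' + 1 survives.
assert wigner_3j(3, 1, 3, 0, 0, 0) == 0
assert wigner_3j(3, 1, 4, 0, 0, 0) != 0
# Perpendicular band (k = +1): only K = K' + 1 gives a nonzero symbol.
assert wigner_3j(3, 1, 3, 2, 1, -3) != 0
assert wigner_3j(3, 1, 3, 2, 1, -2) == 0  # K = K' violates the m-sum rule

print("K and L selection rules verified for the sampled cases")
```

The same checks, run over ranges of L', K', m, and k, reproduce every rule stated above.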
Linear Molecules
When the above analysis is applied to a diatomic species such as HCl, only k = 0 is present since the only vibration present in such a molecule is the bond stretching vibration, which has $\sigma$ symmetry. Moreover, the rotational functions are spherical harmonics (which can be viewed as $\textbf{D}^{\text{*}}_{\text{L',M',K'}}(\theta , \phi ,\chi )$ functions with K' = 0), so the K and K' quantum numbers are identically zero. As a result, the product of 3-j symbols
$\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ K' & k & -K \end{pmatrix} \nonumber$
reduces to
$\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ 0 & 0 & 0 \end{pmatrix} \nonumber$
which will vanish unless
$\text{ L = L' + 1, L' - 1, } \nonumber$
but not L = L' (since parity then causes the second 3-j symbol to vanish), and
$\text{ M = M' + 1, M', M' - 1. } \nonumber$
The L = L' +1 transitions are termed R-branch absorptions and those obeying L = L' -1 are called P-branch transitions. Hence, the selection rules
$\Delta M = \pm 1, 0; \Delta L = \pm 1 \nonumber$
are identical to those for purely rotational transitions.
When applied to linear polyatomic molecules, these same selection rules result if the vibration is of $\sigma$ symmetry (i.e., has k = 0). If, on the other hand, the transition is of $\pi$ symmetry (i.e., has k = ±1), so the transition dipole lies perpendicular to the molecule's axis, one obtains:
$\Delta M = \pm 1,0; \Delta L = \pm 1,0. \nonumber$
These selection rules are derived by realizing that in addition to k = ±1, one has: (i) a linear-molecule rotational wavefunction that in the v = 0 vibrational level is described in terms of a rotation matrix $\text{D}_{\text{L',M',0}} (\theta , \phi ,\chi )$ with no angular momentum along the molecular axis, K' = 0 ; (ii) a v = 1 molecule whose rotational wavefunction must be given by a rotation matrix $\text{D}_{\text{L,M,1}} (\theta ,\phi , \chi )$ with one unit of angular momentum about the molecule's axis, K = 1. In the latter case, the angular momentum is produced by the degenerate $\pi$ vibration itself. As a result, the selection rules above derive from the following product of 3-j symbols:
$\begin{pmatrix} L' & 1 & L \\ M' & m & -M \end{pmatrix} \begin{pmatrix} L' & 1 & L \\ 0 & 1 & -1 \end{pmatrix}. \nonumber$
Because $\Delta L = 0$ transitions are allowed for $\pi \text{ vibrations, one says that } \pi$ vibrations possess Q- branches in addition to their R- and P- branches (with $\Delta L = 1 \text{ and } -1,$ respectively).
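This Q-branch result can be spot-checked numerically (an illustrative sketch, not from the text, again using SymPy's `wigner_3j`):

```python
# Sketch: the second 3-j symbol for a pi vibration, (L' 1 L; 0 1 -1), is
# nonzero for L = L', so Q-branch (Delta L = 0) lines are allowed, whereas
# the sigma-case symbol (L' 1 L; 0 0 0) vanishes for L = L'.
from sympy.physics.wigner import wigner_3j

for Lp in range(1, 5):
    assert wigner_3j(Lp, 1, Lp, 0, 1, -1) != 0  # pi case: Q branch allowed
    assert wigner_3j(Lp, 1, Lp, 0, 0, 0) == 0   # sigma case: Q branch forbidden

print("pi vibrations possess Q branches; sigma vibrations do not")
```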
In the figure shown below, the v = 0 $\rightarrow$ v = 1 (fundamental) vibrational absorption spectrum of HCl is shown. Here the peaks at lower energy (to the right of the figure) belong to P-branch transitions and occur at energies given approximately by:
$E = \hbar \omega_{\text{stretch}} + \left( \dfrac{h^2}{8\pi^2I} \right)((L - 1)L - L(L + 1)) = \hbar \omega_{\text{stretch}} -2\left( \dfrac{h^2}{8\pi^2 I} \right)L. \nonumber$
The R-branch transitions occur at higher energies given approximately by:
$E = \hbar \omega_{\text{stretch}} + \left( \dfrac{h^2}{8\pi^2I} \right)((L+1)(L+2) - L(L + 1)) = \hbar \omega_{\text{stretch}} + 2\left( \dfrac{h^2}{8\pi^2I} \right)(L + 1). \nonumber$
The absorption that is "missing" from the figure below lying slightly below 2900 cm$^{-1}$ is the Q-branch transition for which L = L'; it is absent because the selection rules forbid it.
It should be noted that the spacings between the experimentally observed peaks in HCl are not constant as would be expected based on the above P- and R- branch formulas. This is because the moment of inertia appropriate for the v = 1 vibrational level is different than that of the v = 0 level. These effects of vibration-rotation coupling can be modeled by allowing the v = 0 and v = 1 levels to have rotational energies written as
$E = \hbar \omega_{\text{stretch}} \left( v + \dfrac{1}{2} \right) + \left( \dfrac{h^2}{8\pi^2I_v} \right) (L(L + 1)) \nonumber$
where v and L are the vibrational and rotational quantum numbers. The P- and R- branch transition energies that pertain to these energy levels can then be written as:
$E_P = \hbar \omega_{\text{stretch}} - \left[ \left( \dfrac{h^2}{8\pi^2I_1} \right) + \left( \dfrac{h^2}{8\pi^2I_0}\right) \right] L + \left[ \left( \dfrac{h^2}{8\pi^2I_1} \right) - \left( \dfrac{h^2}{8\pi^2I_0} \right) \right] L^2 \nonumber$
$E_R = \hbar\omega_{\text{stretch}} + 2\left( \dfrac{h^2}{8\pi^2I_1} \right) + \left[ 3\left( \dfrac{h^2}{8\pi^2I_1}\right) - \left( \dfrac{h^2}{8\pi^2I_0} \right) \right]L + \left[ \left( \dfrac{h^2}{8\pi^2I_1}\right) - \left( \dfrac{h^2}{8\pi^2I_0}\right) \right]L^2. \nonumber$
Clearly, these formulas reduce to those shown earlier in the $I_1 = I_0$ limit.
If the vibrationally averaged bond length is longer in the v = 1 state than in the v = 0 state, which is to be expected, $I_1$ will be larger than $I_0$, and therefore $\left[ \left( \dfrac{h^2}{8\pi^2I_1} \right) - \left( \dfrac{h^2}{8\pi^2I_0} \right) \right]$ will be negative. In this case, the spacing between neighboring P-branch lines will increase as shown above for HCl. In contrast, the fact that $\left[ \left( \dfrac{h^2}{8\pi^2I_1} \right) - \left( \dfrac{h^2}{8\pi^2I_0} \right) \right]$ is negative causes the spacing between neighboring R- branch lines to decrease, again as shown for HCl.
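These opposite spacing trends can be reproduced with a short numerical sketch. The constants below are assumed HCl-like values chosen for illustration ($\hbar\omega \approx 2886$ cm$^{-1}$, $B_0 = h^2/8\pi^2 I_0 \approx 10.44$ cm$^{-1}$, $B_1 \approx 10.14$ cm$^{-1}$), not numbers quoted in the text:

```python
# Sketch: P- and R-branch line positions from E(v, L) = w(v + 1/2) + B_v L(L+1),
# using assumed HCl-like constants in cm^-1.
w, B0, B1 = 2886.0, 10.44, 10.14   # illustrative values, not from the text

def E(v, L):                        # term value of level (v, L)
    B = B1 if v == 1 else B0
    return w * (v + 0.5) + B * L * (L + 1)

E_P = lambda L: E(1, L - 1) - E(0, L)   # P branch: (0, L) -> (1, L - 1)
E_R = lambda L: E(1, L + 1) - E(0, L)   # R branch: (0, L) -> (1, L + 1)

# Because B1 < B0, P-branch spacings grow and R-branch spacings shrink with L.
p_gaps = [E_P(L) - E_P(L + 1) for L in range(1, 5)]
r_gaps = [E_R(L + 1) - E_R(L) for L in range(0, 4)]
print("P-branch spacings:", [round(g, 2) for g in p_gaps])
print("R-branch spacings:", [round(g, 2) for g in r_gaps])
```

Building the line positions from the level energies, rather than from the expanded branch formulas, makes the origin of the L and L$^2$ terms easy to audit.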
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/15%3A_Spectroscopy/15.02%3A_Vibration-Rotation_Transitions.txt
The Electronic Transition Dipole and Use of Point Group Symmetry
Returning to the expression
$R_{i,f} = \left( \dfrac{2\pi}{\hbar^2} \right) g(\omega_{f,i}) |\textbf{E}_0\cdot{\langle}\Phi_f |\mu | \Phi_i \rangle |^2 \nonumber$
for the rate of photon absorption, we realize that the electronic integral now involves
$\langle \psi_{ef} | \mu | \psi_{ei} \rangle = \mu_{f,i} (\textbf{R}), \nonumber$
a transition dipole matrix element between the initial $\psi_{ei}$ and final $\psi_{ef}$ electronic wavefunctions. This element is a function of the internal vibrational coordinates of the molecule, and again is a vector locked to the molecule's internal axis frame.
Molecular point-group symmetry can often be used to determine whether a particular transition's dipole matrix element will vanish and, as a result, the electronic transition will be "forbidden" and thus predicted to have zero intensity. If the direct product of the symmetries of the initial and final electronic states $\psi_{ei} \text{ and } \psi_{ef}$ do not match the symmetry of the electric dipole operator (which has the symmetry of its x, y, and z components; these symmetries can be read off the right most column of the character tables given in Appendix E), the matrix element will vanish.
For example, the formaldehyde molecule $H_2CO$ has a ground electronic state (see Chapter 11) that has $^1\text{A}_1$ symmetry in the $\text{C}_{2v} \text{point group. Its } \pi \rightarrow \pi^{\text{*}}$ singlet excited state also has $^1\text{A}_1$ symmetry because both the $\pi \text{ and } \pi^{\text{*}}$ orbitals are of $b_1$ symmetry. In contrast, the lowest n $\rightarrow \pi^{\text{*}} \text{ singlet excited state is of } ^1\text{A}_2$ symmetry because the highest energy oxygen centered n orbital is of $b_2$ symmetry and the $\pi^{\text{*}} \text{ orbital is of } b_1$ symmetry, so the Slater determinant in which both the n and $\pi^{\text{*}}$ orbitals are singly occupied has its symmetry dictated by the $b_2 x b_1 \text{ direct product, which is A}_2.$
The $\pi \rightarrow \pi^{\text{*}}$ transition thus involves ground ($^1\text{A}_1$) and excited ($^1\text{A}_1$) states whose direct product ($A_1 \text{ x } A_1) \text{ is of A}_1$ symmetry. This transition thus requires that the electric dipole operator possess a component of $\text{A}_1 \text{ symmetry. A glance at the C}_{2v}$ point group's character table shows that the molecular z-axis is of $\text{A}_1$ symmetry. Thus, if the light's electric field has a non-zero component along the $\text{C}_2$ symmetry axis (the molecule's z-axis), the $\pi \rightarrow \pi^{\text{*}}$ transition is predicted to be allowed. Light polarized along either of the molecule's other two axes cannot induce this transition.
In contrast, the $n \rightarrow \pi^*$ transition has a ground-excited state direct product of $B_2 \times B_1 = A_2$ symmetry. The $C_{2v}$ point group's character table clearly shows that the electric dipole operator (i.e., its x, y, and z components in the molecule-fixed frame) has no component of $A_2$ symmetry; thus, light of no electric field orientation can induce this $n \rightarrow \pi^*$ transition. We thus say that the $n \rightarrow \pi^*$ transition is E1 forbidden (although it is M1 allowed).
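This direct-product bookkeeping is easy to mechanize. The sketch below (an illustration, not part of the text) encodes the $C_{2v}$ character table and reports which light polarizations can make $\langle \psi_{ef} | \mu | \psi_{ei} \rangle$ non-zero; the function names are hypothetical:

```python
# Character table of C2v; operations ordered E, C2, sigma_v(xz), sigma_v'(yz).
C2V = {
    "A1": (1, 1, 1, 1),
    "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1),
    "B2": (1, -1, -1, 1),
}
DIPOLE = {"x": "B1", "y": "B2", "z": "A1"}  # symmetries of mu's components in C2v

def direct_product(*irreps):
    """Multiply characters element-wise and identify the resulting irrep."""
    chars = (1, 1, 1, 1)
    for irrep in irreps:
        chars = tuple(c * x for c, x in zip(chars, C2V[irrep]))
    for name, row in C2V.items():
        if chars == row:
            return name
    raise ValueError("product is not a single irrep")

def e1_allowed(initial, final):
    """Polarizations for which <final| mu |initial> can be non-zero."""
    return [axis for axis, sym in DIPOLE.items()
            if direct_product(final, sym, initial) == "A1"]
```

Applied to the two formaldehyde transitions above, this reports that the $^1A_1 \rightarrow \, ^1A_1$ ($\pi \rightarrow \pi^*$) transition is allowed only for z-polarized light, while the $^1A_1 \rightarrow \, ^1A_2$ ($n \rightarrow \pi^*$) transition is allowed for no polarization.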
Beyond such electronic symmetry analysis, it is also possible to derive vibrational and rotational selection rules for electronic transitions that are E1 allowed. As was done in the vibrational spectroscopy case, it is conventional to expand $\mu_{f,i} (\textbf{R})$ in a power series about the equilibrium geometry of the initial electronic state (since this geometry is more characteristic of the molecular structure prior to photon absorption):
$\mu_{f,i}(\textbf{R}) = \mu_{f,i}(\textbf{R}_e) + \sum\limits_a \dfrac{\partial \mu_{f,i}}{\partial R_a} (R_a - R_{a,e}) + .... \nonumber$
The Franck-Condon Factors
The first term in this expansion, when substituted into the integral over the vibrational coordinates, gives $\mu_{f,i}(\textbf{R}_e)\langle \chi_{vf} | \chi_{vi} \rangle$, which has the form of the electronic transition dipole multiplied by the "overlap integral" between the initial and final vibrational wavefunctions. The $\mu_{f,i}(\textbf{R}_e)$ factor was discussed above; it is the electronic E1 transition integral evaluated at the equilibrium geometry of the absorbing state. Symmetry can often be used to determine whether this integral vanishes, as a result of which the E1 transition will be "forbidden".
Unlike the vibration-rotation case, the vibrational overlap integrals $\langle \chi_{vf} | \chi_{vi} \rangle$ do not necessarily vanish because $\chi_{vf} \text{ and } \chi_{vi}$ are no longer eigenfunctions of the same vibrational Hamiltonian. $\chi_{vf}$ is an eigenfunction whose potential energy is the final electronic state's energy surface; $\chi_{vi}$ has the initial electronic state's energy surface as its potential. The squares of these $\langle \chi_{vf} | \chi_{vi} \rangle$ integrals, which are what eventually enter into the transition rate expression $R_{i,f} = \left( \dfrac{2\pi}{\hbar^2} \right) g( \omega_{f,i}) | \textbf{E}_0 \cdot{\langle}\phi_f | \mu | \phi_i \rangle |^2,$ are called "Franck-Condon factors". Their relative magnitudes play strong roles in determining the relative intensities of various vibrational "bands" (i.e., peaks) within a particular electronic transition's spectrum.
Whenever an electronic transition causes a large change in the geometry (bond lengths or angles) of the molecule, the Franck-Condon factors tend to display the characteristic "broad progression" shown below when considered for one initial-state vibrational level vi and various final-state vibrational levels vf:
Notice that as one moves to higher vf values, the energy spacing between the states $(E_{vf} - E_{vf-1})$ decreases; this, of course, reflects the anharmonicity in the excited state vibrational potential. For the above example, the transition to the vf = 2 state has the largest Franck-Condon factor. This means that the overlap of the initial state's vibrational wavefunction $\chi_{vi}$ is largest for the final state's $\chi_{vf}$ function with vf = 2.
As a qualitative rule of thumb, the larger the geometry difference between the initial and final state potentials, the broader will be the Franck-Condon profile (as shown above) and the larger the vf value for which this profile peaks. Differences in harmonic frequencies between the two states can also broaden the Franck-Condon profile, although not as significantly as do geometry differences.
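This rule of thumb can be checked numerically. The sketch below (illustrative, using harmonic-oscillator units $\hbar = m = \omega = 1$) evaluates $|\langle \chi_{vf} | \chi_{vi} \rangle|^2$ for two identical harmonic potentials whose minima are displaced by d, a crude model of a geometry change upon electronic excitation:

```python
import math
import numpy as np

def ho_wavefunction(v, x, x0=0.0):
    """Harmonic-oscillator eigenfunction (hbar = m = omega = 1), centered at x0."""
    c = np.zeros(v + 1)
    c[v] = 1.0  # select the v-th physicists' Hermite polynomial
    norm = 1.0 / math.sqrt(2.0**v * math.factorial(v) * math.sqrt(math.pi))
    y = x - x0
    return norm * np.polynomial.hermite.hermval(y, c) * np.exp(-y**2 / 2.0)

def franck_condon(vi, vf, d, x=np.linspace(-12.0, 12.0, 4001)):
    """|<chi_vf | chi_vi>|^2 for two identical oscillators displaced by d."""
    overlap = np.sum(ho_wavefunction(vi, x, 0.0) *
                     ho_wavefunction(vf, x, d)) * (x[1] - x[0])
    return overlap**2
```

For identical displaced oscillators these factors follow a Poisson distribution in vf with mean $S = d^2/2$, so a larger displacement indeed pushes the profile's peak to higher vf and broadens the envelope, as described above.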
For example, if the initial and final states have very similar geometries and frequencies along the mode that is excited when the particular electronic excitation is realized, the following type of Franck-Condon profile may result:
In contrast, if the initial and final electronic states have very different geometries and/or vibrational frequencies along some mode, a very broad Franck-Condon envelope peaked at high-vf will result as shown below:
Vibronic Effects
The second term in the above expansion of the transition dipole matrix element $\sum\limits_a \dfrac{\partial \mu_{f,i}}{\partial R_a}(R_a - R_{a,e})$ can become important to analyze when the first term $\mu_{f,i}(\textbf{R}_e)$ vanishes (e.g., for reasons of symmetry). This dipole derivative term, when substituted into the integral over vibrational coordinates, gives $\sum\limits_a \dfrac{\partial \mu_{f,i}}{\partial R_a} \langle \chi_{vf} | (R_a - R_{a,e}) | \chi_{vi} \rangle$. Transitions for which $\mu_{f,i}(\textbf{R}_e)$ vanishes but for which $\dfrac{\partial \mu_{f,i}}{\partial R_a}$ does not for the $\text{a}^{th}$ vibrational mode are said to derive intensity through "vibronic coupling" with that mode. The intensities of such modes are dependent on how strongly the electronic dipole integral varies along the mode (i.e., on $\dfrac{\partial \mu_{f,i}}{\partial R_a}$) as well as on the magnitude of the vibrational integral $\langle \chi_{vf} | ( R_a - R_{a,e}) | \chi_{vi} \rangle .$
An example of an E1 forbidden but "vibronically allowed" transition is provided by the singlet $n \rightarrow \pi^*$ transition of $H_2CO$ that was discussed earlier in this section. As detailed there, the ground electronic state has $^1A_1$ symmetry, and the $n \rightarrow \pi^*$ state is of $^1A_2$ symmetry, so the E1 transition integral $\langle \psi_{ef} | \mu | \psi_{ei} \rangle$ vanishes for all three (x, y, z) components of the electric dipole operator $\mu$. However, vibrations that are of $b_2$ symmetry (e.g., the H-C-H asymmetric stretch vibration) can induce intensity in the $n \rightarrow \pi^*$ transition as follows: (i) For such vibrations, the $b_2$ mode's vi = 0 to vf = 1 vibronic integral $\langle \chi_{vf} | (R_a - R_{a,e}) | \chi_{vi} \rangle$ will be non-zero and probably quite substantial (because, for harmonic oscillator functions, these "fundamental" transition integrals are dominant; see earlier); (ii) Along these same $b_2$ modes, the electronic transition dipole integral derivative $\frac{\partial \mu_{f,i}}{\partial R_a}$ will be non-zero, even though the integral itself $\mu_{f,i} (\textbf{R}_e)$ vanishes when evaluated at the initial state's equilibrium geometry.
To understand why the derivative $\frac{\partial \mu_{f,i}}{\partial R_a}$ can be non-zero for distortions (denoted $R_a$) of $b_2$ symmetry, consider this quantity in greater detail:
$\dfrac{\partial \mu_{f,i}}{\partial R_a} = \dfrac{\partial}{\partial R_a} \langle \psi_{ef} | \mu | \psi_{ei} \rangle = \langle \dfrac{\partial \psi_{ef}}{\partial R_a} | \mu | \psi_{ei} \rangle + \langle \psi_{ef} | \mu | \dfrac{\partial \psi_{ei}}{\partial R_a} \rangle + \langle \psi_{ef} | \dfrac{\partial \mu}{\partial R_a} | \psi_{ei} \rangle . \nonumber$
The third integral vanishes because the derivative of the dipole operator itself $\mu = \sum\limits_j e r_j + \sum\limits_a Z_a e \textbf{R}_a$ with respect to the coordinates of atomic centers yields an operator that contains only a sum of scalar quantities (the elementary charge e and the magnitudes of various atomic charges $Z_a$); as a result, and because the integral over the electronic wavefunctions $\langle \psi_{ef} | \psi_{ei} \rangle$ vanishes, this contribution yields zero. The first and second integrals need not vanish by symmetry because the wavefunction derivatives $\frac{\partial \psi_{ef}}{\partial R_a}$ and $\frac{\partial \psi_{ei}}{\partial R_a}$ do not possess the same symmetry as their respective wavefunctions $\psi_{ef}$ and $\psi_{ei}$. In fact, it can be shown that the symmetry of such a derivative is given by the direct product of the symmetry of its wavefunction and the symmetry of the vibrational mode that gives rise to the $\frac{\partial }{\partial R_a}$. For the $H_2CO$ case at hand, the $b_2$ mode vibration can induce in the excited $^1A_2$ state a derivative component (i.e., $\frac{\partial \psi_{ef}}{\partial R_a}$) that is of $^1B_1$ symmetry, and this same vibration can induce in the $^1A_1$ ground state a derivative component of $^1B_2$ symmetry.
As a result, the contribution $\langle \frac{\partial \psi_{ef}}{\partial R_a} | \mu | \psi_{ei} \rangle$ to $\frac{\partial \mu_{f,i}}{\partial R_a}$ arising from vibronic coupling within the excited electronic state can be expected to be non-zero for components of the dipole operator $\mu$ that are of $\left( \frac{\partial \psi_{ef}}{\partial R_a} \times \psi_{ei}\right) = (B_1 \times A_1) = B_1$ symmetry. Light polarized along the molecule's x-axis gives such a $b_1$ component to $\mu$ (see the $C_{2v}$ character table in Appendix E). The second contribution $\langle \psi_{ef} | \mu | \frac{\partial \psi_{ei}}{\partial R_a} \rangle$ can be non-zero for components of $\mu$ that are of $\left( \psi_{ef} \times \frac{\partial \psi_{ei}}{\partial R_a}\right) = (A_2 \times B_2) = B_1$ symmetry; again, light of x-axis polarization can induce such a transition.
In summary, electronic transitions that are E1 forbidden by symmetry can derive significant intensity through vibronic coupling (e.g., in $H_2CO$ the singlet $n \rightarrow \pi^*$ transition is rather intense). In such coupling, one or more vibrations (either in the initial or the final state) cause the respective electronic wavefunction to acquire (through $\frac{\partial \psi}{\partial R_a}$) a symmetry component that is different than that of $\psi$ itself. The symmetry of $\frac{\partial \psi}{\partial R_a}$, which is given as the direct product of the symmetry of $\psi$ and that of the vibration, can then cause the electric dipole integral $\langle \psi ' | \mu | \frac{\partial \psi}{\partial R_a} \rangle$ to be non-zero even when $\langle \psi ' | \mu | \psi \rangle$ is zero. Such vibronically allowed transitions are said to derive their intensity through vibronic borrowing.
Rotational Selection Rules for Electronic Transitions
Each vibrational peak within an electronic transition can also display rotational structure (depending on the spacing of the rotational lines, the resolution of the spectrometer, and the presence or absence of substantial line broadening effects such as those discussed later in this Chapter). The selection rules for such transitions are derived in a fashion that parallels that given above for the vibration-rotation case. The major difference between this electronic case and the earlier situation is that the vibrational transition dipole moment $\mu_{\text{trans}}$ appropriate to the former is replaced by $\mu_{f,i}(\textbf{R}_e)$ for conventional (i.e., nonvibronic) transitions or $\frac{\partial \mu_{f,i}}{\partial R_a}$ (for vibronic transitions).
As before, when $\mu_{f,i}(\textbf{R}_e) \text{( or } \frac{\partial \mu_{f,i}}{\partial \textbf{R}_a})$ lies along the molecular axis of a linear molecule, the transition is denoted $\sigma$ and k = 0 applies; when this vector lies perpendicular to the axis it is called $\pi$ and k = ±1 pertains. The resultant linear-molecule rotational selection rules are the same as in the vibration-rotation case:
$\Delta L = \pm 1, \text{ and } \Delta M = \pm 1,0 \text{ (for } \sigma \text{ transitions).} \nonumber$
$\Delta L = \pm 1, 0, \text{ and } \Delta M = \pm 1,0 \text{ (for } \pi \text{ transitions).} \nonumber$
In the latter case, the L = L' = 0 situation does not arise because a $\pi$ transition has one unit of angular momentum along the molecular axis, which would preclude both L and L' vanishing. For molecules of symmetric-top symmetry, the rotational selection rules are:
$\Delta L = \pm 1,0; \Delta M = \pm 1,0; \text{ and } \Delta K = 0 \text{ (L = L' = 0 is not allowed and all } \Delta L = 0 \text{ are forbidden when K = K' = 0) } \nonumber$
which applies when $\mu_{f,i}(\textbf{R}_e) \text{ or } \frac{\partial \mu_{f,i}}{\partial R_a}$ lies along the symmetry axis, and
$\Delta L = \pm 1,0; \Delta M = \pm 1,0; \text{ and } \Delta K = \pm 1 \text{ (L = L' = 0 is not allowed)} \nonumber$
which applies when $\mu_{f,i}(\textbf{R}_e) \text{ or } \frac{\partial \mu_{f,i}}{\partial R_a}$ lies perpendicular to the symmetry axis.
Time Correlation Function Expressions for Transition Rates
The first-order E1 "golden-rule" expression for the rates of photon-induced transitions can be recast into a form in which certain specific physical models are easily introduced and insights are easily gained. Moreover, by using so-called equilibrium averaged time correlation functions, it is possible to obtain rate expressions appropriate to a large number of molecules that exist in a distribution of initial states (e.g., for molecules that occupy many possible rotational and perhaps several vibrational levels at room temperature).
State-to-State Rate of Energy Absorption or Emission
To begin, the expression obtained earlier
$R_{i,f} = \left( \dfrac{2\pi}{\hbar^2}\right) g(\omega_{f,i}) | \textbf{E}_0 \cdot \langle \Phi_f | \mu | \Phi_i \rangle |^2, \nonumber$
that is appropriate to transitions between a particular initial state $\Phi_i$ and a specific final state $\Phi_f$, is rewritten as
$R_{i,f} = \left( \dfrac{2\pi}{\hbar^2}\right) \int g( \omega ) | \textbf{E}_0 \cdot \langle \Phi_f | \mu | \Phi_i \rangle |^2 \delta (\omega_{f,i} - \omega )\text{ d}\omega . \nonumber$
Here, the $\delta (\omega_{f,i} - \omega )$ function is used to specifically enforce the "resonance condition" that resulted from the time-dependent perturbation treatment given in Chapter 14; it states that the photons' frequency $\omega$ must be resonant with the transition frequency $\omega_{f,i}$. It should be noted that by allowing $\omega$ to run over positive and negative values, the photon absorption (with $\omega_{f,i}$ positive and hence $\omega$ positive) and the stimulated emission case (with $\omega_{f,i}$ negative and hence $\omega$ negative) are both included in this expression (as long as $g(\omega)$ is defined as g(|$\omega$|) so that the negative-$\omega$ contributions are multiplied by the light source intensity at the corresponding positive $\omega$ value).
The following integral identity can be used to replace the $\delta$-function:
$\delta (\omega_{f,i} - \omega ) = \dfrac{1}{2\pi} \int\limits_{-\infty}^{\infty} e^{i(\omega_{f,i} - \omega )t} \text{ dt} \nonumber$
by a form that is more amenable to further development. Then, the state-to-state rate of transition becomes:
$R_{i,f} = \left(\dfrac{1}{\hbar^2}\right) \int g(\omega ) | \textbf{E}_0 \cdot \langle \Phi_f | \mu | \Phi_i \rangle |^2 \int\limits_{-\infty}^{\infty}e^{i(\omega_{f,i} - \omega)t} \text{ dt d}\omega . \nonumber$
Averaging Over Equilibrium Boltzmann Population of Initial States
If this expression is then multiplied by the equilibrium probability $\rho_i$ that the molecule is found in the state $\Phi_i$ and summed over all such initial states and summed over all final states $\Phi_f$ that can be reached from $\Phi_i$ with photons of energy $\hbar \omega$, the equilibrium averaged rate of photon absorption by the molecular sample is obtained:
$R_{\text{eq.ave.}} = \left( \dfrac{1}{\hbar^2} \right) \sum\limits_{i,f} \rho_i \int g(\omega ) | \textbf{E}_0 \cdot \langle \Phi_f | \mu | \Phi_i \rangle |^2 \int\limits_{-\infty}^{\infty} e^{i(\omega_{f,i} - \omega )t} \text{dt d}\omega . \nonumber$
This expression is appropriate for an ensemble of molecules that can be in various initial states $\Phi_i$ with probabilities $\rho_i$. The corresponding result for transitions that originate in a particular state ($\Phi_i$) but end up in any of the "allowed" (by energy and selection rules) final states reads:
$R_{\text{state i.}} = \left( \dfrac{1}{\hbar^2} \right) \sum\limits_f \int g(\omega ) | \textbf{E}_0 \cdot \langle \Phi_f | \mu | \Phi_i \rangle |^2 \int\limits_{-\infty}^{\infty} e^{i(\omega_{f,i} - \omega )t}\text{ dt d}\omega . \nonumber$
For a canonical ensemble, in which the number of molecules, the temperature, and the system volume are specified, $\rho_i$ takes the form:
$\rho_i = \dfrac{g_i}{Q}e^{-\dfrac{E_i^0}{kT}} \nonumber$
where Q is the canonical partition function of the molecules and $g_i$ is the degeneracy of the state $\Phi_i$ whose energy is $\text{E}_i^0.$
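As a concrete illustration (not part of the text's derivation), the sketch below evaluates $\rho_i = g_i e^{-E_i^0/kT}/Q$ for the rotational levels of a rigid-rotor diatomic, for which $g_J = 2J+1$ and $E_J = BJ(J+1)$; the rotational constant and temperature used are hypothetical inputs:

```python
import math

# Equilibrium populations rho_J = g_J exp(-E_J/kT)/Q for a rigid-rotor diatomic.
# B is the rotational constant in cm^-1; k ~ 0.695 cm^-1/K (approximate value).
def rotor_populations(B, T, J_max=200):
    kT = 0.695 * T
    weights = [(2 * J + 1) * math.exp(-B * J * (J + 1) / kT)
               for J in range(J_max + 1)]
    Q = sum(weights)  # the rotational partition function
    return [w / Q for w in weights]
```

For B of about 1.93 cm$^{-1}$ (a CO-like value) at room temperature the population peaks near J = 7 and is spread over dozens of levels, which is why the equilibrium average over initial states matters in practice.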
In the above expression for $\text{R}_{\text{eq.ave.}}$, a double sum occurs. Writing out the elements that appear in this sum in detail, one finds:
$\sum\limits_{i,f} \rho_i \textbf{E}_0 \cdot \langle \Phi_i | \mu | \Phi_f \rangle \textbf{E}_0 \cdot \langle\Phi_f | \mu | \Phi_i \rangle e^{i(\omega_{f,i})t}. \nonumber$
In situations in which one is interested in developing an expression for the intensity arising from transitions to all allowed final states, the sum over these final states can be carried out explicitly by first writing
$\langle \Phi_f | \mu | \Phi_i \rangle e^{i(\omega_{f,i})t} = \langle \Phi_f | e^{\dfrac{iHt}{\hbar}} \mu e^{-\dfrac{iHt}{\hbar}} | \Phi_i \rangle \nonumber$
and then using the fact that the set of states {$\Phi_k$} are complete and hence obey
$\sum\limits_k | \Phi_k \rangle \langle \Phi_k | = 1. \nonumber$
The result of using these identities as well as the Heisenberg definition of the time dependence of the dipole operator
$\mu (t) = e^{\dfrac{iHt}{\hbar}} \mu e^{-\dfrac{iHt}{\hbar}}, \nonumber$
is:
$\sum\limits_i \rho_i \langle \Phi_i | \textbf{E}_0 \cdot \mu \, \textbf{E}_0\cdot \mu(t) | \Phi_i \rangle . \nonumber$
In this form, one says that the time dependence has been reduced to that of an equilibrium averaged (n.b., the $\sum\limits_i \rho_i \langle \Phi_i | \cdots | \Phi_i \rangle$) time correlation function involving the component of the dipole operator along the external electric field at t = 0 ($\textbf{E}_0\cdot \mu$) and this component at a different time t ($\textbf{E}_0\cdot \mu(t)$).
Photon Emission and Absorption
If $\omega_{f,i}$ is positive (i.e., in the photon absorption case), the above expression will yield a non-zero contribution when multiplied by $e^{-i \omega t}$ and integrated over positive $\omega$ values. If $\omega_{f,i}$ is negative (as for stimulated photon emission), this expression will contribute, again when multiplied by $e^{-i\omega t}$, for negative $\omega$-values. In the latter situation, $\rho_i$ is the equilibrium probability of finding the molecule in the (excited) state from which emission will occur; this probability can be related to that of the lower state $\rho_f$ by
$\rho_{\text{excited}} = \rho_{\text{lower}} e^{-\dfrac{(\text{E}^0_{\text{excited}} - \text{E}^0_{\text{lower}})}{kT}} \nonumber$
$= \rho_{\text{lower}}e^{-\dfrac{\hbar \omega}{kT}}. \nonumber$
In this form, it is important to realize that the excited and lower states are treated as individual states, not as levels that might contain a degenerate set of states.
The absorption and emission cases can be combined into a single net expression for the rate of photon absorption by recognizing that the latter process leads to photon production, and thus must be entered with a negative sign. The resultant expression for the net rate of decrease of photons is:
$\text{R}_{\text{eq.ave.net}} = \left( \dfrac{1}{\hbar^2} \right) \sum\limits_i \rho_i \left( 1 - e^{-\dfrac{\hbar \omega}{kT}} \right) \iint g(\omega ) \langle \Phi_i | (\textbf{E}_0 \cdot \mu ) \textbf{E}_0 \cdot \mu(t) | \Phi_i \rangle e^{-i\omega t} \,d\omega \,dt. \nonumber$
The Line Shape and Time Correlation Functions
Now, it is conventional to introduce the so-called "line shape" function $I(\omega)$:
$I(\omega ) = \sum\limits_i \rho_i \int \langle \Phi_i | (\textbf{E}_0 \cdot{\mu}) \textbf{E}_0\cdot{\mu} (t) | \Phi_i \rangle e^{-i\omega t}\, dt \nonumber$
in terms of which the net photon absorption rate is
$\text{R}_{\text{eq.ave.net}} = \left( \dfrac{1}{\hbar^2} \right) \left( 1 -e^{-\hbar \omega/kT}\right) \int g(\omega )I(\omega ) \text{ d}\omega . \nonumber$
As stated above, the function
$\text{C}(t) = \sum\limits_i \rho_i \langle \Phi_i | (\textbf{E}_0 \cdot \mu ) \textbf{E}_0\cdot \mu(t) | \Phi_i \rangle \nonumber$
is called the equilibrium averaged time correlation function of the component of the electric dipole operator along the direction of the external electric field $\textbf{E}_0$. Its Fourier transform is $I(\omega)$, the spectral line shape function. The convolution of $I(\omega)$ with the light source's $g(\omega)$ function, multiplied by $\left(1 - e^{-\frac{\hbar \omega}{kT}} \right)$, the correction for stimulated photon emission, gives the net rate of photon absorption.
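To make the $C(t) \rightarrow I(\omega)$ relationship concrete, the sketch below Fourier transforms a model correlation function, a damped cosine with hypothetical frequency and dephasing time (not taken from the text), and recovers a line shape peaked at the oscillation frequency:

```python
import numpy as np

# Model correlation function: an oscillating dipole with a phenomenological
# dephasing time tau (hypothetical parameters chosen for illustration).
omega0, tau = 5.0, 4.0
t = np.linspace(0.0, 200.0, 2**14)
C = np.cos(omega0 * t) * np.exp(-t / tau)

# I(omega) ~ integral of C(t) exp(-i omega t) dt, approximated by an FFT;
# the real part of this one-sided transform is Lorentzian near omega0.
dt = t[1] - t[0]
omega = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)
I = np.real(np.fft.fft(C)) * dt

peak = abs(omega[np.argmax(I)])  # location of the recovered line
```

The recovered line sits at $\omega \approx \omega_0$ with a Lorentzian half-width of $1/\tau$, the standard connection between a dephasing time in $C(t)$ and a line width in $I(\omega)$.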
Collisions among molecules can also be viewed as a problem in time-dependent quantum mechanics. The perturbation is the "interaction potential" and the time dependence arises from the movement of the nuclear positions.
• 16.1: One Dimensional Scattering
Atom-atom scattering on a single Born-Oppenheimer energy surface can be reduced to a one-dimensional Schrödinger equation by separating the radial and angular parts of the three-dimensional Schrödinger equation in the same fashion as used for the Hydrogen atom.
• 16.2: Multichannel Problems
When excited electronic states are involved, couplings between two or more electronic surfaces may arise. Dynamics occuring on an excited-state surface may evolve in a way that produces flux on another surface.
• 16.3: Classical Treatment of Nuclear Motion
For potentials of typical chemical bonding and for all but low-energy motions of light particles such as Hydrogen and Deuterium nuclei or electrons, the local de Broglie wavelengths are often short enough to treat the nuclear-motion dynamics of molecules in a purely classical manner, and to apply so-called semi-classical corrections near classical turning points. The motions of H and D atomic centers usually require quantal treatment except when their kinetic energies are quite high.
• 16.4: Wavepackets
In an attempt to combine the attributes and strengths of classical trajectories, which allow us to "watch" the motions that molecules undergo, and quantum mechanical wavefunctions, which are needed if interference phenomena are to be treated, a hybrid approach is sometimes used. A popular and rather successful such point of view is provided by so-called coherent state wavepackets.
Thumbnail: A simple schematic diagram of a two-particle scattering process. One particle is scattered on a single scattering center. The impact parameter and the differential cross section element and the solid angle element in the exit direction are marked. Their quotient is the differential cross section. (CC BY-SA 3.0; Vswitchs)
16: Collisions and Scattering
Atom-atom scattering on a single Born-Oppenheimer energy surface can be reduced to a one-dimensional Schrödinger equation by separating the radial and angular parts of the three-dimensional Schrödinger equation in the same fashion as used for the Hydrogen atom in Chapter 1. The resultant equation for the radial part $\psi$(R) of the wavefunction can be written as:
$-\left( \dfrac{\hbar^2}{2\mu} \right) \dfrac{1}{R^2} \dfrac{\partial }{\partial R} \left( R^2\dfrac{\partial \psi}{\partial R} \right) + L(L+1)\dfrac{\hbar^2}{(2\mu R^2)} \psi + V(R)\psi = E\psi , \nonumber$
where L is the quantum number that labels the angular momentum of the colliding particles whose reduced mass is $\mu$.
Defining $\Psi(R) = R \psi(R)$ and substituting into the above equation gives the following equation for $\Psi$:
$-\left( \dfrac{\hbar^2}{2\mu} \right) \dfrac{\partial^2\Psi}{\partial R^2} + L(L+1)\dfrac{\hbar^2}{2\mu R^2}\Psi + V(R)\Psi = E\Psi . \nonumber$
The combination of the "centrifugal potential" $L(L+1)\frac{\hbar^2}{2\mu R^2}$ and the electronic potential V(R) thus produce a total "effective potential" for describing the radial motion of the system.
The simplest reasonable model for such an effective potential is provided by the "square well" potential illustrated below. This model V(R) could, for example, be applied to the L = 0 scattering of two atoms whose bond dissociation energy is $D_e$ and whose equilibrium bond length for this electronic surface lies somewhere between R = 0 and R = $\text{R}_{\text{max}}$.
The piecewise constant nature of this particular V(R) allows exact solutions to be written both for bound and scattering states because the Schrödinger equation
$-\left( \dfrac{\hbar^2}{2\mu} \right) \dfrac{\text{d}^2\Psi}{\text{d}R^2} = E\Psi \text{ ( for 0 } \leq \text{ R } \leq \text{R}_{\text{max}}) \nonumber$
$-\left( \dfrac{\hbar^2}{2\mu} \right) \dfrac{\text{d}^2\Psi}{\text{d}R^2} + D_e\Psi = E \Psi \text{ (for R}_{\text{max}} \leq \text{ R } < \infty ) \nonumber$
admits simple sinusoidal solutions.
Bound States
The bound states are characterized by having E < D$_e$. For the inner region, the two solutions to the above equation are
$\Psi_1(R) = A sin(kR) \nonumber$
and
$\Psi_2(R) = B cos(kR) \nonumber$
where
$k = \sqrt{\dfrac{2\mu E}{\hbar^2}} \nonumber$
is termed the "local wave number" because it is related to the momentum values for the $e^{\pm ikR}$ components of such a function:
$-i\hbar \dfrac{\partial e^{\pm ikR}}{\partial R} = \pm\hbar k e^{\pm ikR}. \nonumber$
The cos(kR) solution must be excluded (i.e., its amplitude B in the general solution of the Schrödinger equation must be chosen equal to 0.0) because this function does not vanish at R = 0, where the potential moves to infinity and thus the wavefunction must vanish. This means that only the
$\Psi = A sin(kR) \nonumber$
term remains for this inner region.
Within the asymptotic region (R > R$_{max}$) there are also two solutions to the Schrödinger equation:
$\Psi_3 = Ce^{-\kappa R} \nonumber$
and
$\Psi_4 = D e^{\kappa R} \nonumber$
where
$\kappa = \sqrt{\dfrac{2\mu (D_e-E)}{\hbar^2}}. \nonumber$
Clearly, one of these functions is a decaying function of R for large R and the other $\Psi_4$ grows exponentially for large R. The latter's amplitude D must be set to zero because this function generates a probability density that grows larger and larger as R penetrates deeper and deeper into the classically forbidden region (where E < V(R)).
To connect $\Psi_1$ in the inner region to $\Psi_3$ in the outer region, we use the fact that $\Psi$ and $\frac{\text{d} \Psi}{\text{dR}}$ must be continuous except at points R where V(R) undergoes an infinite discontinuity (see Chapter 1). Continuity of $\Psi \text{at R}_{max}$ gives:
$\text{Asin}(\text{kR}_{max}) = Ce^{-\kappa\text{R}_{\text{max}}}, \nonumber$
and continuity of $\frac{\text{d}\Psi}{\text{dR}} \text{at R}_{\text{max}}$ yields
$Akcos(kR_{max})= - \kappa C e^{-\kappa R_{max}} \nonumber$
These two equations allow the ratio C/A as well as the energy E (which appears in $\kappa$ and in k) to be determined:
$\dfrac{A}{C} = \dfrac{-\kappa}{k}\dfrac{e^{-\kappa R_{max}}}{cos(kR_{max})}. \nonumber$
The condition that determines E is based on the well known requirement that the determinant of coefficients must vanish for homogeneous linear equations to have non-trivial solutions (i.e., not A = C = 0):
$\det\left( \matrix{sin(kR_{max}) & -e^{-\kappa R_{max}} \cr kcos(kR_{max}) & \kappa e^{-\kappa R_{max}}}\right) = 0 \nonumber$
The vanishing of this determinant can be rewritten as
$\kappa sin(kR_{max}) e^{-\kappa R_{max}} + kcos(kR_{max})e^{-\kappa R_{max}} = 0 \nonumber$
or
$\tan(kR_{max}) = \dfrac{-k}{\kappa}. \nonumber$
When employed in the expression for A/C, this result gives
$\dfrac{A}{C} = \dfrac{e^{-\kappa R_{max}}}{sin(kR_{max})} \nonumber$
For very large D$_e$ compared to E, the above equation for E reduces to the familiar "particle in a box" energy level result since $\frac{k}{\kappa}$ vanishes in this limit, and thus $\tan(kR_{max}) = 0$, which is equivalent to $\sin(kR_{max}) = 0$, which yields the familiar $E = \frac{n^2h^2}{8\mu R^2_{\text{max}}}$ and $\frac{C}{A} = 0$, so $\Psi = A\sin(kR)$.
When D$_e$ is not large compared to E, the full transcendental equation $\text{tan(kR}_{\text{max}}) = -\frac{k}{\kappa}$ must be solved numerically or graphically for the eigenvalues $\text{E}_n$, n = 1, 2, 3, ... . These energy levels, when substituted into the definitions for k and $\kappa$ give the wavefunctions:
$\Psi = \text{A sin(kR) (for 0} \leq \text{ R } \leq \text{ R}_{\text{max}}) \nonumber$
$\Psi = \text{A sin(kR}_{\text{max}})e^{\kappa \text{R}_{\text{max}}}e^{-\kappa R}\text{ (for R}_{\text{max}} \leq \text{R < }\infty). \nonumber$
The one remaining unknown A can be determined by requiring that the modulus squared of the wavefunction describe a probability density that is normalized to unity when integrated over all space:
$\int\limits_{0}^{\infty}|\Psi |^2 \text{dR} = 1. \nonumber$
Note that this condition is equivalent to
$\int\limits_{0}^{\infty} |\psi |^2 \text{R}^2\text{dR} = 1 \nonumber$
which would pertain to the original radial wavefunction. In the case of an infinitely deep potential well, this normalization condition reduces to
$\int\limits_{0}^{\text{R}_{\text{max}}}\text{A}^2\text{sin}^2\text{(kR)}\text{dR} = 1 \nonumber$
which produces
$\text{A} = \sqrt{\dfrac{2}{\text{R}_{\text{max}}} }. \nonumber$
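When the transcendental equation must be solved numerically, a simple scan-and-bisect scheme suffices. The sketch below (ħ = μ = 1, hypothetical well parameters) locates sign changes of the determinant function $\kappa\sin(kR_{max}) + k\cos(kR_{max})$ and bisects each bracket:

```python
import math

# Bound states of the square well (hbar = mu = 1): roots of the determinant
# condition  kappa*sin(k R) + k*cos(k R) = 0, i.e. tan(k R) = -k/kappa,
# with k = sqrt(2E) and kappa = sqrt(2(De - E)).
def bound_state_energies(De, R_max, n_scan=2000):
    def f(E):
        k, kappa = math.sqrt(2.0 * E), math.sqrt(2.0 * (De - E))
        return kappa * math.sin(k * R_max) + k * math.cos(k * R_max)

    grid = [De * (i + 0.5) / n_scan for i in range(n_scan)]
    roots = []
    for a, b in zip(grid, grid[1:]):
        if f(a) * f(b) < 0.0:            # a sign change brackets one root
            for _ in range(60):          # bisect the bracket down
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots
```

With De = 500 and R_max = 1 this yields ten bound levels, the lowest lying slightly below the particle-in-a-box value $\pi^2/2 \approx 4.93$, consistent with the limiting argument above.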
Scattering States
The scattering states are treated in much the same manner. The functions $\Psi_1$ and $\Psi_2$ arise as above, and the amplitude of $\Psi_2$ must again be chosen to vanish because $\Psi$ must vanish at R = 0 where the potential moves to infinity. However, in the exterior region (R > R$_{\text{max}}$), the two solutions are now written as:
$\Psi_3 = Ce^{\text{ik'R}} \nonumber$
$\Psi_4 = De^{-\text{ik'R}} \nonumber$
where the large-R local wavenumber
$k' = \sqrt{\dfrac{2\mu (E-D_e)}{\hbar^2}} \nonumber$
arises because E > $\text{D}_e$ for scattering states.
The conditions that $\Psi \text{ and } \frac{\text{d}\Psi}{\text{dR}} \text{ be continuous at R}_{\text{max}}$ still apply:
$A\sin(kR_{\text{max}}) = Ce^{ik'R_{\text{max}}} + De^{-ik'R_{\text{max}}}. \nonumber$
and
$kA\cos(kR_{\text{max}}) = ik'Ce^{ik'R_{\text{max}}} - ik'De^{-ik'R_{\text{max}}}. \nonumber$
However, these two equations (in three unknowns A, C, and D) can no longer be solved to generate eigenvalues E and amplitude ratios. There are now three amplitudes as well as the E value but only these two equations plus a normalization condition to be used. The result is that the energy no longer is specified by a boundary condition; it can take on any value. We thus speak of scattering states as being "in the continuum" because the allowed values of E form a continuum beginning at E = $\text{D}_e$ (since the zero of energy is defined in this example as at the bottom of the potential well).
The R > R$_\text{max}$ components of $\Psi$ are commonly referred to as "incoming"
$\Psi_{\text{in}} = \text{D}e^{\text{-ik'R}} \nonumber$
and "outgoing"
$\Psi_{\text{out}} = Ce^{\text{ik'R}} \nonumber$
because their radial momentum eigenvalues are $-\hbar k'$ and $+\hbar k'$, respectively. It is a common convention to define the amplitude D so that the flux of incoming particles is unity.
Choosing
$\text{D} = \sqrt{\dfrac{\mu}{\hbar k'}} \nonumber$
produces an incoming wavefunction whose current density is:
$\text{S(R) = } \dfrac{-i\hbar}{2\mu}\left[ \Psi^{\text{*}}_{\text{in}}\left( \dfrac{\text{d}\Psi_{\text{in}}}{\text{dR}} \right) - \left( \dfrac{\text{d}\Psi_{\text{in}}}{\text{dR}} \right)^{\text{*}}\Psi_{\text{in}} \right] \nonumber$
$= |\text{D}|^2 \left( \dfrac{-i\hbar}{2\mu} \right) [-2ik'] \nonumber$
$= -1. \nonumber$
This means that there is one unit of current density moving inward (this produces the minus sign) for all values of R at which $\Psi_{\text{in}}$ is an appropriate wavefunction (i.e., R > $\text{R}_{\text{max}}$). This condition takes the place of the probability normalization condition specified in the bound-state case, where the modulus squared of the total wavefunction is required to be normalized to unity over all space. Scattering wavefunctions cannot be so normalized because they do not decay at large R; for this reason, the flux normalization condition is usually employed. The magnitudes of the outgoing (C) and short-range (A) wavefunctions relative to that of the incoming function (D) then provide information about the scattering and "trapping" of incident flux by the interaction potential.
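This flux normalization is easy to verify numerically. The sketch below (assuming $\hbar = \mu$ = 1 and an arbitrarily chosen k'; all numbers are illustrative, not from the text) evaluates S(R) for the choice D = $\sqrt{\mu / \hbar k'}$ and recovers one unit of inward current density at every R:

```python
import cmath
import math

# Illustrative values (hbar = mu = 1; k' chosen arbitrarily)
hbar, mu, kp = 1.0, 1.0, 0.7

# Flux-normalized incoming amplitude D = sqrt(mu/(hbar k'))
D = math.sqrt(mu / (hbar * kp))

def S(R):
    """Current density of Psi_in = D exp(-i k' R) at radius R."""
    psi = D * cmath.exp(-1j * kp * R)
    dpsi = -1j * kp * psi                      # d(Psi_in)/dR
    val = (-1j * hbar / (2 * mu)) * (psi.conjugate() * dpsi
                                     - dpsi.conjugate() * psi)
    return val.real

print(S(3.0), S(10.0))   # both equal -1: one unit of inward flux at every R
```

Because the R-dependence cancels in $\Psi^*_{\text{in}}\Psi_{\text{in}}$, the same value is obtained at any R, which is exactly what "flux normalization" requires.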
Once D is so specified, the above two boundary matching equations are written as a set of two inhomogeneous linear equations in two unknowns (A and C):
$\text{Asin(kR}_{\text{max}}) - Ce^{\text{ik'R}_{\text{max}}} = De^{\text{-ik'R}_{\text{max}}} \nonumber$
and
$\text{kAcos(kR}_{\text{max}}) - \text{ik'C}e^{\text{ik'R}_{\text{max}}} = \text{-ik'D}e^{\text{-ik'R}_{\text{max}}} \nonumber$
or
$\left( \matrix{ \text{sin(kR}_{\text{max}}) & -e^{\text{ik'R}_{\text{max}}} \cr \text{kcos(kR}_{\text{max}}) & \text{-ik'}e^{\text{ik'R}_{\text{max}}} } \right) \left( \matrix{A \cr C} \right) = \left( \matrix{ \text{D}e^{\text{-ik'R}_{\text{max}}} \cr \text{-ik'D}e^{\text{-ik'R}_{\text{max}}} } \right). \nonumber$
Unique solutions for A and C will exist except when the determinant of the matrix on the left side vanishes:
$\text{-ik'sin(kR}_{\text{max}}) + \text{kcos(kR}_{\text{max}}) = 0, \nonumber$
which can be true only if
$\text{tan(kR}_{\text{max}}) = \dfrac{\text{k}}{\text{ik'}}. \nonumber$
This equation is not obeyed for any (real) value of the energy E, so solutions for A and C in terms of the specified D can always be found.
In summary, specification of unit incident flux is made by choosing D as indicated above. For any collision energy E > D$_e$, the 2x1 array on the right hand side of the set of linear equations written above can be formed, as can the 2x2 matrix on the left side. These linear equations can then be solved for A and C. The overall wavefunction for this E is then given by:
$\matrix{\Psi = \text{Asin(kR)} & \text{(for 0 } \leq \text{ R } \leq \text{ R}_{\text{max}}) \cr \Psi =Ce^{\text{ik'R}} + De^{\text{-ik'R}} & \text{(for R}_{\text{max}} \leq \text{R} < \infty). } \nonumber$
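For a concrete numerical illustration, the two matching equations can be solved directly. The sketch below (with $\hbar = \mu$ = 1 and invented values of $D_e$, $R_{\text{max}}$, and E, none taken from the text) computes A and C from the flux-normalized D, then checks that $\Psi$ and d$\Psi$/dR are continuous at $R_{\text{max}}$ and that all incoming flux is reflected (|C| = |D|), as it must be for this real potential:

```python
import cmath
import math

# Illustrative parameters (hbar = mu = 1; De, Rmax, E invented for the example)
hbar, mu = 1.0, 1.0
De, Rmax, E = 1.0, 1.0, 1.5                    # scattering requires E > De

k = math.sqrt(2 * mu * E) / hbar               # interior wavenumber
kp = math.sqrt(2 * mu * (E - De)) / hbar       # exterior wavenumber k'
D = math.sqrt(mu / (hbar * kp))                # flux-normalized incoming amplitude

# Matching Psi and dPsi/dR at Rmax gives two linear equations in A and C:
#   A sin(k Rmax)   - C e^{ik'Rmax}      = D e^{-ik'Rmax}
#   A k cos(k Rmax) - i k' C e^{ik'Rmax} = -i k' D e^{-ik'Rmax}
e_p = cmath.exp(1j * kp * Rmax)
e_m = cmath.exp(-1j * kp * Rmax)
a11, a12 = math.sin(k * Rmax), -e_p
a21, a22 = k * math.cos(k * Rmax), -1j * kp * e_p
b1, b2 = D * e_m, -1j * kp * D * e_m

det = a11 * a22 - a12 * a21    # non-zero for every real E (see text)
A = (b1 * a22 - a12 * b2) / det
C = (a11 * b2 - a21 * b1) / det

# Continuity of Psi and dPsi/dR at Rmax, and flux conservation |C| = |D|
psi_in, psi_out = A * math.sin(k * Rmax), C * e_p + D * e_m
dpsi_in = A * k * math.cos(k * Rmax)
dpsi_out = 1j * kp * (C * e_p - D * e_m)
print(abs(psi_in - psi_out), abs(dpsi_in - dpsi_out), abs(C) - abs(D))
```

Because the potential is real and the wavefunction vanishes at R = 0, every unit of incoming flux must eventually return outward, which is why |C| = |D| holds for any E > $D_e$.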
Shape Resonance States
If the angular momentum quantum number L in the effective potential introduced earlier is non-zero, this potential has a repulsive component at large R. This repulsion can combine with short-range attractive interactions due, for example, to chemical bond forces, to produce an effective potential that one can model in terms of simple piecewise functions shown below.
Again, the piecewise nature of the potential allows the one-dimensional Schrödinger equation to be solved analytically. For energies below D$_e$, one again finds bound states in much the same way as illustrated above (but with the exponentially decaying function $e^{-\kappa '\text{R}}$ used in the region $\text{R}_\text{max} \leq \text{R} \leq \text{R}_{\text{max}} + \delta$, with $\kappa ' = \sqrt{2\mu(D_e + \delta V - E)/\hbar^2}$).
For energies lying above D$_e$ + $\delta$V, scattering states occur and the four amplitudes of the functions sin(kR), $e^{\pm \text{ik'''R}}$ (with $\text{k''' = } \sqrt{2\mu (E - D_e - \delta V)/\hbar^2}$), and $e^{\text{ik'R}}$ appropriate to each R-region are determined in terms of the amplitude of the incoming asymptotic function $\text{D}e^{\text{-ik'R}}$ from the four equations obtained by matching $\Psi \text{ and } \frac{\text{d}\Psi}{\text{dR}} \text{ at R}_{\text{max}} \text{ and at } \text{R}_{\text{max}} + \delta$.
For energies lying in the range D$_e$ < E < D$_e \text{ + }\delta V$, a qualitatively different class of scattering function exists. These so-called shape resonance states occur at energies that are determined by the condition that the amplitude of the wavefunction within the barrier (i.e., for 0 $\leq \text{ R } \leq \text{R}_{\text{max}}$ ) be large so that incident flux successfully tunnels through the barrier and builds up, through constructive interference, large probability amplitude there. Let us now turn our attention to this specific energy regime.
The piecewise solutions to the Schrödinger equation appropriate to the shape-resonance case are easily written down:
$\matrix{ & \Psi = \text{Asin(kR)} & \text{(for 0 } \leq \text{ R } \leq \text{R}_{\text{max}}) \cr & \Psi = \text{B}_+e^{\kappa '\text{R}} + \text{B}_-e^{-\kappa '\text{R}} & \text{(for R}_{max} \leq \text{ R } \leq \text{R}_{\text{max}} + \delta ) \cr & \Psi = \text{C}e^{\text{ik'R}} + \text{D}e^{\text{-ik'R}} & \text{(for R}_{\text{max}} + \delta \leq \text{ R } < \infty) } \nonumber$
Note that both exponentially growing and decaying functions are acceptable in the $R_{max} \leq \text{ R } \leq \text{R}_{\text{max}} +\delta$ region because this region does not extend to $\text{R} = \infty$. There are four amplitudes (A, B+, B- , and C) that must be expressed in terms of the specified amplitude D of the incoming flux. Four equations that can be used to achieve this goal result when $\Psi \text{ and } \frac{\text{d}\Psi}{\text{dR}} \text{ are matched at R}_{max} \text{ and at R}_{max} + \delta$:
$\matrix{ & Asin(kR_{max}) = B_+e^{\kappa 'R_{max}} + B_-e^{-\kappa 'R_{max}}, \cr & Akcos(kR_{max}) = \kappa 'B_+e^{\kappa 'R_{max}} -\kappa 'B_-e^{-\kappa 'R_{max}}, \cr & B_+e^{\kappa '(R_{max}+\delta)} +B_-e^{-\kappa '(R_{max}+\delta)} = Ce^{ik'(R_{max} + \delta)} + De^{-ik'(R_{max} + \delta)}, \cr & \kappa 'B_+e^{\kappa '(R_{max} + \delta)} -\kappa 'B_-e^{-\kappa '(R_{max} + \delta)} \cr & = ik'Ce^{ik'(R_{max} + \delta)} -ik'De^{-ik'(R_{max}+\delta)}. } \nonumber$
It is especially instructive to consider the value of A/D that results from solving this set of four equations in four unknowns because the modulus of this ratio provides information about the relative amount of amplitude that exists inside the centrifugal barrier in the attractive region of the potential compared to that existing in the asymptotic region as incoming flux.
The result of solving for A/D is:
$\dfrac{A}{D} = \dfrac{4\kappa 'e^{-ik'(R_{max}+\delta)}}{\left[ \dfrac{e^{\kappa '\delta}(ik'-\kappa ')(\kappa 'sin(kR_{max})+kcos(kR_{max}))}{ik'} + \dfrac{e^{- \kappa '\delta}(ik'+\kappa ')(\kappa 'sin(kR_{max})-kcos(kR_{max}))}{ik'} \right]} \nonumber$
Further, it is instructive to consider this result under conditions of a high (large $D_e + \delta V - E$) and thick (large $\delta$) barrier. In such a case, the "tunnelling factor" $e^{-\kappa '\delta}$ will be very small compared to its counterpart $e^{\kappa '\delta}$, and so
$\dfrac{A}{D} = 4\dfrac{ik'\kappa '}{ik'-\kappa '}e^{-ik'(R_{max}+\delta)}\dfrac{e^{-\kappa '\delta}}{\kappa 'sin(kR_{max})+kcos(kR_{max})}. \nonumber$
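This limiting form can be checked numerically against the full A/D expression. In the sketch below ($\hbar = \mu$ = 1; all parameter values are invented for the example, not taken from the text), the thick-barrier approximation reproduces the exact ratio whenever $e^{-\kappa '\delta}$ is negligible next to $e^{\kappa '\delta}$:

```python
import cmath
import math

# Numerical check (hbar = mu = 1; all parameters illustrative) that the
# thick-barrier form of A/D agrees with the full expression.
De, dV, Rmax, delta, E = 2.0, 100.0, 1.0, 0.5, 5.0

k = math.sqrt(2 * E)                # inner-region wavenumber
kp = math.sqrt(2 * (E - De))        # asymptotic wavenumber k'
kap = math.sqrt(2 * (De + dV - E))  # barrier decay constant kappa'

s, c = math.sin(k * Rmax), math.cos(k * Rmax)
phase = cmath.exp(-1j * kp * (Rmax + delta))

# Full expression for A/D
exact = 4 * kap * phase / (
    cmath.exp(kap * delta) * (1j * kp - kap) * (kap * s + k * c) / (1j * kp)
    + cmath.exp(-kap * delta) * (1j * kp + kap) * (kap * s - k * c) / (1j * kp))

# Thick-barrier approximation (drop the exp(-kappa' delta) term)
approx = (4 * 1j * kp * kap / (1j * kp - kap)) * phase \
    * cmath.exp(-kap * delta) / (kap * s + k * c)

print(abs(exact), abs(approx))   # both small: flux must tunnel to reach R < Rmax
```

At this off-resonance energy the two forms agree to better than one part in $10^5$, and |A/D| is small because the incident flux must tunnel through the barrier to build amplitude inside.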
The $e^{-\kappa '\delta}$ factor in A/D causes the magnitude of the wavefunction inside the barrier to be small in most circumstances; we say that incident flux must tunnel through the barrier to reach the inner region and that $e^{-\kappa '\delta}$ gives the probability of this tunnelling. The magnitude of the A/D factor could become large if the collision energy E is such that
$\kappa 'sin(kR_{max})+kcos(kR_{max}) \nonumber$
is small. In fact, if
$tan(kR_{max}) = \dfrac{-k}{\kappa '} \nonumber$
this denominator factor in A/D will vanish and A/D will become infinite. Note that the above condition is similar to the energy quantization condition
$tan(kR_{max}) = \dfrac{-k}{\kappa} \nonumber$
that arose when bound states of a finite potential well were examined earlier in this Chapter. There is, however, an important difference. In the bound-state situation
$k = \sqrt{\dfrac{2\mu E}{\hbar^2}} \nonumber$
and
$\kappa = \sqrt{\dfrac{2\mu (D_e - E)}{\hbar^2}}; \nonumber$
in this shape-resonance case, k is the same, but
$\kappa ' = \sqrt{\dfrac{2\mu (D_e + \delta V - E)}{\hbar^2}} \nonumber$
rather than $\kappa$ occurs, so the two tan(k$R_{max}$) equations are not identical.
However, in the case of a very high barrier (so that $\kappa '$ is much larger than k), the denominator
$\kappa ' sin(kR_{max}) + kcos(kR_{max}) \approx \kappa 'sin(kR_{max}) \nonumber$
in A/D can become small if
$sin(kR_{max}) \approx 0. \nonumber$
This condition is nothing but the energy quantization condition that would occur for the particle-in-a-box potential shown below.
This potential is identical to the true effective potential for $0 \leq R \leq R_{max}$, but extends to infinity beyond $R_{max}$ ; the barrier and the dissociation asymptote displayed by the true potential are absent.
In summary, when a barrier is present on a potential energy surface, at energies above the dissociation asymptote $D_e$ but below the top of the barrier $(D_e + \delta V$ here), one can expect shape-resonance states to occur at "special" scattering energies E. These so-called resonance energies can often be approximated by the bound-state energies of a potential that is identical to the potential of interest in the inner region $(0 \leq R \leq R_{max}$ here) but that extends to infinity beyond the top of the barrier (i.e., beyond the barrier, it does not fall back to values below E).
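As a numerical illustration of this approximation ($\hbar = \mu$ = 1; the well and barrier parameters below are invented for the example), one can locate the lowest resonance from the condition $\kappa '\text{sin(kR}_{max}) + \text{kcos(kR}_{max}) = 0$ by bisection and compare it with the corresponding particle-in-a-box level, which lies somewhat above it:

```python
import math

# Illustrative shape-resonance model (hbar = mu = 1; all values invented):
# inner well of radius Rmax = 1, dissociation asymptote De = 2, and a barrier
# of height De + dV = 102 in the region Rmax <= R <= Rmax + delta.
Rmax, De, dV = 1.0, 2.0, 100.0

def k(E):      # wavenumber inside the well (energy zero at the well bottom)
    return math.sqrt(2 * E)

def kappa(E):  # barrier decay constant kappa', valid for E < De + dV
    return math.sqrt(2 * (De + dV - E))

def f(E):      # resonance condition: kappa' sin(k Rmax) + k cos(k Rmax) = 0
    return kappa(E) * math.sin(k(E) * Rmax) + k(E) * math.cos(k(E) * Rmax)

# Bisection for the lowest resonance energy in De < E < De + dV
lo, hi = 2.5, 4.93          # f changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
E_res = 0.5 * (lo + hi)

# Particle-in-a-box estimate: sin(k Rmax) = 0 -> E_n = n^2 pi^2 / (2 mu Rmax^2)
E_box = math.pi ** 2 / 2    # n = 1 level, which falls inside the barrier window
print(E_res, E_box)         # the box level lies somewhat above the true resonance
```

For this very high barrier ($\kappa '$ roughly four times k at the resonance) the box estimate is within about 15% of the exact resonance energy; the box level always lies above it because the true wavefunction leaks slightly into the barrier.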
The chemical significance of shape resonances is great. Highly rotationally excited molecules may have more than enough total energy to dissociate $(D_e)$, but this energy may be "stored" in the rotational motion, and the vibrational energy may be less than $D_e$. In terms of the above model, high angular momentum may produce a significant barrier in the effective potential, but the system's vibrational energy may lie significantly below $D_e$. In such a case, and when viewed in terms of motion on an angular momentum modified effective potential, the lifetime of the molecule with respect to dissociation is determined by the rate of tunnelling through the barrier.
For the case at hand, one speaks of "rotational predissociation" of the molecule. The lifetime $\tau$ can be estimated by computing the frequency $\nu$ at which flux existing inside $R_{max}$ strikes the barrier at $R_{max}$
$\nu = \dfrac{\hbar k}{2\mu R_{max}} \text{ (sec}^{-1}) \nonumber$
and then multiplying by the probability P that flux tunnels through the barrier from R$_{max}$ to R$_{max} + \delta$:
$\text{P = }e^{-2\kappa '\delta}. \nonumber$
The result is that
$\dfrac{1}{\tau} = \dfrac{\hbar k}{2\mu R_{max}}e^{-2\kappa '\delta} \nonumber$
with the energy E entering into k and $\kappa '$ being determined by the resonance condition: $\kappa '\text{sin(kR}_{max}) + \text{kcos(kR}_{max}) = \text{ minimum}$.
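Putting numbers into this estimate is straightforward. In the sketch below ($\hbar = \mu$ = 1; k, $\kappa '$, $\delta$, and $R_{max}$ are invented, illustrative values), the predissociation lifetime is the reciprocal of the barrier-striking frequency times the tunnelling probability:

```python
import math

# Hypothetical model values (hbar = mu = 1; k, kappa', delta, Rmax invented):
hbar, mu = 1.0, 1.0
k, kappa_p, delta, Rmax = 2.94, 13.9, 0.3, 1.0

nu = hbar * k / (2 * mu * Rmax)      # frequency of striking the barrier
P = math.exp(-2 * kappa_p * delta)   # tunnelling probability per strike
tau = 1.0 / (nu * P)                 # lifetime from 1/tau = nu * P

print(nu, P, tau)
```

Even for this modest barrier the tunnelling probability is of order $10^{-4}$, so the molecule survives for thousands of inner-well traversals before dissociating.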
Although the examples treated above involved piecewise constant potentials (so the Schrödinger equation and the boundary matching conditions could be solved exactly), many of the characteristics observed carry over to more chemically realistic situations. As discussed, for example, in Energetic Principles of Chemical Reactions , J. Simons, Jones and Bartlett, Portola Valley, Calif. (1983), one can often model chemical reaction processes in terms of:
1. motion along a "reaction coordinate" (s) from a region characteristic of reactant materials where the potential surface is positively curved in all directions and all forces (i.e., gradients of the potential along all internal coordinates) vanish,
2. to a transition state at which the potential surface's curvature along s is negative while all other curvatures are positive and all forces vanish,
3. onward to product materials where again all curvatures are positive and all forces vanish.
Within such a "reaction path" point of view, motion transverse to the reaction coordinate s is often modelled in terms of local harmonic motion, although more sophisticated treatments of the dynamics are possible. In any event, this picture leads one to consider motion along a single degree of freedom (s), with respect to which much of the above treatment can be carried over, coupled to transverse motion along all other internal degrees of freedom taking place under an entirely positively curved potential (which therefore produces restoring forces to movement away from the "streambed" traced out by the reaction path s).
When excited electronic states are involved, couplings between two or more electronic surfaces may arise. Dynamics occurring on an excited-state surface may evolve in a way that produces flux on another surface. For example, collisions between an electronically excited 1s2s($^3$S) He atom and a ground-state 1s$^2(^1$S) He atom occur on a potential energy surface that is repulsive at large R (due to the repulsive interaction between the closed-shell 1s$^2$ He and the large 2s orbital) but attractive at smaller R (due to the $\sigma^2\sigma^{\text{*1}}$ orbital occupancy arising from the three 1s-derived electrons). The ground-state potential energy surface for this system (pertaining to two 1s$^2(^1$S) He atoms) is repulsive at small R values (because of the $\sigma^2\sigma^{\text{*2}}$ nature of the electronic state). In this case, there are two Born-Oppenheimer electronic-nuclear motion states that are degenerate and thus need to be combined to achieve a proper description of the dynamics:
$\Psi_1 = \big| \sigma^2\sigma^{\text{*2}} \big| \Psi_{\text{ground.}}(R,\theta ,\phi ) \nonumber$
pertaining to the ground electronic state and the scattering state $\Psi_{\text{ground.}}$ on this energy surface, and
$\Psi_2 = \big| \sigma^2\sigma^{\text{*1}}2\sigma^1 \big| \Psi_{\text{ex.}}(R,\theta ,\phi ) \nonumber$
pertaining to the excited electronic state and the nuclear-motion state $\Psi_{ex.}$ on this energy surface. Both of these wavefunctions can have the same energy E; the former has high nuclear-motion energy and low electronic energy, while the latter has higher electronic energy and lower nuclear-motion energy.
A simple model that can be used to illustrate the two-state couplings that arise in such cases is introduced through the two one-dimensional piecewise potential surfaces shown below.
The dashed energy surface
$V(R) = - \Delta \text{ for 0 } \leq R < \infty \nonumber$
provides a simple representation of a repulsive lower-energy surface, and the solid-line plot represents the excited-state surface that has a well of depth $D_e$ and whose well lies $\Delta$ above the ground-state surface.
In this case, for energies lying above zero yet below $D_e$ (for E < 0, only nuclear motion on the lower-energy dashed surface is "open", i.e., accessible), the nuclear motion wavefunction can have amplitudes belonging to both surfaces. That is, the total (electronic and nuclear) wavefunction consists of two portions that can be written as:
$\Psi = A \phi sin(kR) + \phi '' A'' sin(k''R) \text{ ( for } 0 \leq R \leq \text{R}_{\text{max}}) \nonumber$
and
$\Psi = A \phi sin(kR_{\text{max}})e^{\kappa R_{\text{max}}}e^{-\kappa R} + \phi ''A'' sin(k''R) \text{ (for R}_{\text{max}} \leq R < \infty ), \nonumber$
where $\phi$ and $\phi ''$ denote the electronic functions belonging to the upper and lower energy surfaces, respectively. The wavenumbers k and k'' are defined as:
$k = \sqrt{\dfrac{2\mu E}{\hbar^2}} \nonumber$
$k'' = \sqrt{\dfrac{2\mu (E + \Delta)}{\hbar^2}} \nonumber$
and $\kappa$ is as before
$\kappa = \sqrt{\dfrac{2\mu (D_e - E)}{\hbar^2}}. \nonumber$
For the lower-energy surface, only the sin(k''R) function is allowed because the cos(k''R) function does not vanish at R=0.
The Coupled Channel Equations
In such cases, the relative amplitudes (A and A'') of the nuclear motion wavefunctions on each surface must be determined by substituting the above "two-channel" wavefunction (the word "channel" is used to denote separate asymptotic states of the system; in this case, the $\phi \text{ and } \phi ''$ electronic states) into the full Schrödinger equation. In Chapter 3, the couplings among Born-Oppenheimer states were so treated and resulted in the following equation:
$[(E_j(R) - E)\Xi_j(R) + T \Xi_j(R)] = -\sum\limits_i \left[ \langle \Psi_j \big| T \big| \Psi_i \rangle (R) \Xi_i(R) + \sum\limits_{a=1,M}\left( \dfrac{-\hbar^2}{m_a}\right) \langle \Psi_j \big| \nabla_a \big| \Psi_i \rangle (R)\cdot{\nabla_a}\Xi_i(R) \right] \nonumber$
where $\text{E}_j \text{(R) and } \Xi_j\text{(R) }$ denote the electronic energy surfaces and nuclear-motion wavefunctions, $\Psi_j$ denote the corresponding electronic wavefunctions, and the $\nabla_a$ represent derivatives with respect to the various coordinates of the nuclei. Changing to the notation used in the one-dimensional model problem introduced above, these so-called coupled-channel equations read:
$\left[ (-\Delta - E) -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right]\text{A''sin(k''R)} = -\left[ \langle\phi '' \big|-\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2}|\phi ''\rangle \text{A''sin(k''R)} + \left( -\dfrac{\hbar^2}{\mu} \right) \langle \phi '' \big| \dfrac{\text{d}}{\text{dR}} \big| \phi\rangle \dfrac{\text{d}}{\text{dR}}\text{A sin(kR)} \right] \nonumber$
$\text{(for 0 } \leq R \leq \text{R}_{\text{max}}),$
$\left[ (-\Delta - E) -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right]\text{A''sin(k''R)} = -\left[ \langle\phi '' \big| -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2}\big|\phi ''\rangle \text{A''sin(k''R)} + \left( -\dfrac{\hbar^2}{\mu} \right) \langle \phi '' \big| \dfrac{\text{d}}{\text{dR}} \big| \phi\rangle \dfrac{\text{d}}{\text{dR}}\text{A sin(kR}_{\text{max}})e^{\kappa R_{\text{max}}}e^{-\kappa R} \right] \nonumber$
$\text{(for R}_{\text{max}} \leq R < \infty ),$
when the index j refers to the ground-state surface (V(R) = -$\Delta$ for 0 < R < $\infty$), and
$\left[ (0 - E) -\dfrac{\hbar^2}{2\mu} \dfrac{\text{d}^2}{\text{dR}^2} \right]\text{Asin(kR)} = -\left[ \langle\phi \big| -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2}\big|\phi\rangle \text{Asin(kR)} + \left( -\dfrac{\hbar^2}{\mu} \right) \langle \phi \big| \dfrac{\text{d}}{\text{dR}}\big|\phi ''\rangle \dfrac{\text{d}}{\text{dR}}\text{A'' sin(k''R)} \right] \nonumber$
$(\text{for 0} \leq \text{R} \leq \text{R}_{\text{max}})$
$\left[ (D_e - E) - \dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right]\text{Asin(kR}_{\text{max}})e^{\kappa R_{\text{max}}}e^{-\kappa R} = -\left[ \langle\phi \big|-\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2}\big|\phi\rangle \text{Asin(kR}_{\text{max}})e^{\kappa \text{R}_{\text{max}}}e^{-\kappa\text{R}} - \left( \dfrac{\hbar^2}{\mu} \right)\langle\phi \big| \dfrac{\text{d}}{\text{dR}}\big|\phi ''\rangle \dfrac{\text{d}}{\text{dR}}\text{A''sin(k''R)} \right] \nonumber$
$\text{( for R}_{\text{max}} \leq \text{ R } < \infty )$
when the index j refers to the excited-state surface (where V(R) = 0 for 0 < R $\leq \text{ R}_{\text{max}}$ and V(R) = D$_e$ for R$_{\text{max}} \leq R < \infty$ ).
Clearly, if the right-hand sides of the above equations are ignored, one simply recaptures the Schrödinger equations describing motion on the separate potential energy surfaces:
$\left[ (-\Delta - E) -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right] \text{A''sin(k''R)} = 0 \nonumber$
$\text{ ( for 0} \leq \text{R} \leq \text{R}_{\text{max}}),$
$\left[ (-\Delta - E) -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right] \text{A''sin(k''R)} = 0 \nonumber$
$\text{( for R}_{\text{max}} \leq \text{R} < \infty );$
that describe motion on the lower-energy surface, and
$\left[ (0 - E) -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right]\text{Asin(kR)} = 0 \nonumber$
$( \text{ for 0} \leq \text{R} \leq \text{R}_{\text{max}}),$
$\left[ (D_e - E) -\dfrac{\hbar^2}{2\mu}\dfrac{\text{d}^2}{\text{dR}^2} \right] \text{Asin(kR}_{\text{max}})e^{\kappa \text{R}_{\text{max}}}e^{-\kappa\text{R}} = 0 \nonumber$
$(\text{ for R}_{\text{max}} \leq \text{R < } \infty )$
describing motion on the upper surface on which the bonding interaction occurs. The terms on the right-hand sides provide the couplings that cause the true solutions to the Schrödinger equation to be combinations of solutions for the two separate surfaces.
In applications of the coupled-channel approach illustrated above, coupled sets of second-order differential equations (two in the above example) are solved by starting with a specified flux in one of the channels and a chosen energy E. For example, one might specify the amplitude A to be unity to represent preparation of the system in a bound vibrational level (with E < D$_e$) of the excited electronic-state potential. One would then choose E to be one of the eigenenergies of that potential. Propagation methods could be used to solve the coupled differential equations subject to these choices of E and A. The result would be the determination of the amplitude A'' of the wavefunction on the ground-state surface. The ratio A''/A provides a measure of the strength of coupling between the two Born-Oppenheimer states.
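A minimal propagation scheme of the kind alluded to here can be sketched in a few lines. The model below is assumed, not taken from the text: $\hbar = \mu$ = 1, constant potentials V$_1$ = 0 and V$_2$ = $-\Delta$, and a constant coupling strength `lam` standing in for the $\langle \phi \big| \frac{\text{d}}{\text{dR}} \big| \phi '' \rangle$ matrix elements. It integrates the two coupled second-order equations with a fourth-order Runge-Kutta step; by starting along an eigenvector of the (constant) coupling matrix, the numerical solution can be checked against the exact sinusoidal eigenchannel:

```python
import math

# Assumed two-channel model (hbar = mu = 1): psi'' = W psi, with a constant,
# symmetric matrix W standing in for the coupled-channel operators.
E, Delta, lam = 1.0, 0.5, 0.2
W = [[2 * (0.0 - E), 2 * lam],          # channel 1: V1 = 0
     [2 * lam, 2 * (-Delta - E)]]       # channel 2: V2 = -Delta

def deriv(y):
    # y = (psi1, psi2, dpsi1, dpsi2)
    p1, p2, d1, d2 = y
    return (d1, d2,
            W[0][0] * p1 + W[0][1] * p2,
            W[1][0] * p1 + W[1][1] * p2)

def rk4_step(y, h):
    k1 = deriv(y)
    k2 = deriv(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
    k3 = deriv(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
    k4 = deriv(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + (h / 6.0) * (c1 + 2 * c2 + 2 * c3 + c4)
                 for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4))

# Lower eigenvalue w and eigenvector (vx, vy) of the symmetric 2x2 matrix W;
# along an eigenvector the channels decouple and psi(R) = v sin(q R), q = sqrt(-w).
a, b, c = W[0][0], W[0][1], W[1][1]
w = 0.5 * (a + c) - math.sqrt((0.5 * (a - c)) ** 2 + b * b)
vx, vy = b, w - a
n = math.hypot(vx, vy)
vx, vy = vx / n, vy / n
q = math.sqrt(-w)

# Propagate from R = 0 (psi = 0 there, as the text requires) outward.
h, steps = 1e-3, 5000
y = (0.0, 0.0, q * vx, q * vy)
for _ in range(steps):
    y = rk4_step(y, h)
Rf = steps * h

print(y[0], vx * math.sin(q * Rf))   # numeric vs exact psi1 at Rf
```

In a realistic calculation W would depend on R through the potentials and the non-adiabatic matrix elements, and the propagated amplitudes would be matched to the asymptotic incoming/outgoing forms, but the marching structure is the same.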
Perturbative Treatment
Alternatively, one can treat the coupling between the two states via time dependent perturbation theory. For example, by taking A = 1.0 and choosing E to be one of the eigenenergies of the excited-state potential, one is specifying that the system is initially (just prior to t = 0) prepared in a state whose wavefunction is:
$\Psi^0_{\text{ex}} = \phi \text{sin(kR)} \text{ (for 0 } \leq \text{ R } \leq \text{ R}_{\text{max}} ) \nonumber$
$\Psi^0_{\text{ex}} = \phi \text{sin(kR}_{\text{max}})e^{\kappa R_{\text{max}}}e^{-\kappa R} \text{ ( for R}_{\text{max}} \leq \text{ R } \leq \infty ). \nonumber$
From t = 0 on, the coupling to the other state
$\Psi^0_{\text{ground}} = \phi '' \text{sin(k''R) ( for 0 } \leq \text{ R } < \infty ) \nonumber$
is induced by the "perturbation" embodied in the terms on the right-hand side of the coupled-channel equations.
Within this time dependent perturbation theory framework, the rate of transition of probability amplitude from the initially prepared state (on the excited state surface) to the ground-state surface is proportional to the square of the perturbation matrix elements between these two states:
$\text{Rate } \propto \big| \int\limits_{0}^{R_{\text{max}}}\text{sin(kR)} \langle \phi \big| \dfrac{\text{d}}{\text{dR}} \big| \phi '' \rangle \left( \dfrac{\text{d}}{\text{dR}}\text{sin(k''R)} \right) \text{dR} + \int\limits_{\text{R}_{\text{max}}}^{\infty}\text{sin(kR}_{\text{max}})e^{\kappa \text{R}_{\text{max}}}e^{-\kappa \text{R}} \langle \phi \big| \dfrac{\text{d}}{\text{dR}} \big| \phi '' \rangle \left( \dfrac{\text{d}}{\text{dR}}\text{sin(k''R)} \right)\text{dR} \big| ^2 \nonumber$
The matrix elements occurring here contain two distinct parts:
$\langle \phi \big| \dfrac{\text{d}}{\text{dR}} \big| \phi '' \rangle \nonumber$
has to do with the electronic-state couplings that are induced by radial movement of the nuclei, and factors such as
$\text{sin(kR) } \dfrac{\text{d}}{\text{dR}}\text{sin(k''R)} \nonumber$
relate to couplings between the two nuclear-motion wavefunctions induced by these same radial motions. For a transition to occur, both the electronic and nuclear-motion states must undergo changes. The initially prepared state (the bound state on the upper electronic surface) has high electronic and low nuclear-motion energy, while the state to which transitions may occur (the scattering state on the lower electronic surface) has low electronic energy and higher nuclear-motion energy.
Of course, in the above example, the integrals over R can be carried out if the electronic matrix elements $\langle \phi \big| \dfrac{\text{d}}{\text{dR}} \big| \phi '' \rangle$ can be handled. In practical chemical applications (for an introductory treatment see Energetic Principles of Chemical Reactions, J. Simons, Jones and Bartlett, Portola Valley, Calif. (1983)), the evaluation of these electronic matrix elements is a formidable task that often requires computation-intensive techniques such as those discussed in Section 6.
Even when the electronic coupling elements are available (or are modelled or parameterized in some reasonable manner), the solution of the coupled-channel equations that govern the nuclear motion is a demanding task. For the purposes of this text, it suffices to note that:
1. couplings between motion on two or more electronic states can and do occur;
2. these couplings are essential to treat whenever the electronic energy difference (i.e., the spacing between pairs of Born-Oppenheimer potential surfaces) is small (i.e., comparable to vibrational or rotational energy level spacings);
3. there exists a rigorous theoretical framework in terms of which one can evaluate the rates of so-called radiationless transitions between pairs of such electronic, vibrational, rotational states. Expressions for such transitions involve (a) electronic matrix elements that depend on how strongly the electronic states are modulated by movement $\left( \text{ hence the } \frac{\text{d}}{\text{dR}} \right)$ of the nuclei, and (b) nuclear-motion integrals connecting the initial and final nuclear-motion wavefunctions, which also contain $\dfrac{\text{d}}{\text{dR}}$ because they describe the "recoil" of the nuclei induced by the electronic transition.
Chemical Relevance
As presented above, the most obvious situation of multichannel dynamics arises when electronically excited molecules undergo radiationless relaxation (e.g., internal conversion when the spin symmetry of the two states is the same or intersystem crossing when the two states differ in spin symmetry). These subjects are treated in some detail in the text Energetic Principles of Chemical Reactions , J. Simons, Jones and Bartlett, Portola Valley, Calif. (1983)) where radiationless transitions arising in photochemistry and polyatomic molecule reactivity are discussed.
Let us consider an example involving the chemical reactivity of electronically excited alkaline earth or $\text{d}^{10}s^2$ transition metal atoms with H$_2$ molecules. The particular case of $\text{Cd* + H}_2 \rightarrow \text{CdH + H }$ has been studied experimentally and theoretically. In such systems, the potential energy surface connecting to ground-state Cd ($^1$S) + H$_2$ becomes highly repulsive as the collision partners approach (see the depiction provided in the Figure shown below). The three surfaces that correlate with the Cd ($^1$P) + H$_2$ species prepared by photo-excitation of Cd ($^1$S) behave quite differently as functions of the Cd-to-H$_2$ distance because in each the singly occupied 5p orbital assumes a different orientation relative to the H$_2$ molecule's bond axis. For (near) C$_{2v}$ orientations, these states are labeled $^1B_2$, $^1B_1$, and $^1A_1$; they have the 5p orbital directed as shown in the second Figure, respectively. The corresponding triplet surfaces that derive from Cd ($^3$P) + H$_2$ behave in a similar manner as functions of the Cd-to-H$_2$ distance (R), except that they are shifted to lower energy because Cd ($^3$P) lies below Cd ($^1$P) by ca. 37 kcal/mol.
Collisions between Cd ($^1\text{P) and H}_2$ can occur on any of the three surfaces mentioned above. Flux on the $^1A_1$ surface is primarily reflected (at low collision energies characteristic of the thermal experiments) because this surface is quite repulsive at large R. Flux on the $^1\text{B}_1$ surface can proceed inward to quite small R (ca. 2.4 Å) before repulsive forces on this surface reflect it. At geometries near R = 2.0 Å and $\text{r}_{\text{HH}}$ = 0.88 Å, the highly repulsive $^3\text{A}_1$ surface intersects this $^1\text{B}_1$ surface from below. At and near this intersection, a combination of spin-orbit coupling (which is large for Cd) and non-adiabatic coupling may induce flux to evolve onto the $^3\text{A}_1$ surface, after which fragmentation to Cd ($^3\text{P) + H}_2$ could occur.
In contrast, flux on the $^1\text{B}_2$ surface propagates inward under attractive forces to R = 2.25 Å and $\text{r}_{\text{HH}}$ = 0.79 Å, where the $^3\text{A}_1$ surface intersects from below; here too, spin-orbit and non-adiabatic couplings may transfer flux onto the $^3\text{A}_1$ surface, after which fragmentation to Cd ($^3\text{P) + H}_2$ could occur. Flux that continues to propagate inward to smaller R values experiences even stronger attractive forces that lead, near R = 1.69 Å and $\text{r}_{\text{HH}}$ = 1.54 Å, to an intersection with the $^1\text{A}_1$ surface that connects to Cd ($^1\text{S) + H}_2$. Here, non-adiabatic couplings may cause flux to evolve onto the $^1\text{A}_1$ surface, which may then lead to formation of ground-state Cd ($^1\text{S) + H}_2 \text{ or Cd (}^1\text{S) + H + H, }$ both of which are energetically possible. Processes in which electronically excited atoms produce ground-state atoms through such collisions and surface hopping are termed "electronic quenching".
The nature of the non-adiabatic couplings that arise in the two examples given above is quite different. In the former case, when the $^1\text{B}_1 \text{ and } ^3\text{A}_1$ surfaces are in close proximity to one another, the first-order coupling element:
$\langle \Psi (^1\text{B}_1) \big| \nabla_j \big| \Psi (^3\text{A}_1 )\rangle \nonumber$
is non-zero only for nuclear motions (i.e., $\nabla_{\text{j}}$ ) of $b_1 \times a_1 = b_1$ symmetry. For the $\text{CdH}_2$ collision complex being considered in (or near) $\text{ C}_{2\text{v}}$ symmetry, such a motion corresponds to rotational motion of the nuclei about an axis lying parallel to the H-H bond axis. In contrast, to couple the $^3\text{A}_1 \text{ and } ^1\text{B}_2$ electronic states through an element of the form
$\langle \Psi ( ^1\text{B}_2 ) \big| \nabla_{\text{j}} \big| \Psi (^3\text{A}_1 ) \rangle , \nonumber$
the motion must be of $b_2 \times a_1 = b_2$ symmetry. This movement corresponds to asymmetric vibrational motion of the two Cd-H interatomic coordinates.
The implications of these observations are clear. For example, in so-called half-collision experiments in which a van der Waals $\text{ CdH}_2$ complex is probed, internal rotational motion would be expected to enhance $^1\text{B}_1 \rightarrow ^3\text{A}_1$ quenching, whereas asymmetric vibrational motion should enhance the $^1\text{B}_2 \rightarrow ^3\text{A}_1$ process.
Moreover, the production of ground-state Cd ($^1\text{S) +H}_2\text{ via } ^1\text{B}_2 \rightarrow ^1\text{A}_1$ surface hopping (near R = 1.69 Å and $\text{r}_{\text{HH}}$ = 1.54 Å) should also be enhanced by asymmetric vibrational excitation. The $^1\text{B}_2\text{ and } ^1\text{A}_1$ surfaces also provide, through their non-adiabatic couplings, a "gateway" to formation of the asymmetric bond cleavage products $\text{CdH(}^2\Sigma\text{) + H. }$ It can be shown that the curvature (i.e., second energy derivative) of a potential energy surface consists of two parts: (i) one part that is always positive, and (ii) a second that can be represented in terms of the non-adiabatic coupling elements between the two surfaces and the energy gap $\Delta$E between the two surfaces. Applied to the two states at hand, this second contributor to the curvature of the $^1\text{B}_2$ surface is:
$\dfrac{\big| \langle \Psi (^1\text{B}_2) \big| \nabla_j \big| \Psi (^1\text{A}_1 )\rangle \big|^2}{\text{E(}^1\text{B}_2 ) - \text{E(}^1\text{A}_1)}. \nonumber$
Clearly, when the $^1\text{A}_1$ state is higher in energy but strongly non-adiabatically coupled to the $^1\text{B}_2$ state, negative curvature along the asymmetric $\text{b}_2$ vibrational mode is expected for the $^1\text{B}_2$ state. When the $^1\text{A}_1$ state is lower in energy, negative curvature along the $\text{b}_2$ vibrational mode is expected for the $^1\text{A}_1$ state (because the above expression also expresses the curvature of the $^1\text{A}_1$ state).
Therefore, in the region of close-approach of these two states, state-to-state surface hopping can be facile. Moreover, one of the two states (the lower lying at each geometry) will likely possess negative curvature along the $\text{b}_2$ vibrational mode. It is this negative curvature that causes movement away from $\text{C}_{\text{2v}}$ symmetry to occur spontaneously, thus leading to the $\text{CdH (}^2\Sigma\text{) + H }$ reaction products.
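The sign argument above can be checked on a minimal two-state model (an illustrative sketch, not the actual CdH$_2$ surfaces): two diabatic states separated by a gap $\Delta$ and coupled linearly, with strength $\lambda$, along a b$_2$-like asymmetric coordinate x. Diagonalizing the 2x2 Hamiltonian gives adiabatic surfaces whose curvatures at the crossing geometry are $\mp 2\lambda^2/\Delta$: negative for the lower state, positive for the upper, as claimed. The values of `DELTA` and `LAM` below are arbitrary assumptions.

```python
import math

# Minimal two-state model: diabatic energies +Delta/2 and -Delta/2,
# coupled linearly (strength lam) along a b2-like asymmetric coordinate x.
DELTA = 1.0   # diabatic energy gap (arbitrary units; assumed)
LAM = 0.5     # linear coupling strength (assumed)

def adiabatic(x):
    """Lower and upper adiabatic surfaces of the 2x2 model Hamiltonian."""
    w = math.sqrt((DELTA / 2.0) ** 2 + (LAM * x) ** 2)
    return -w, +w

def curvature(surface, x=0.0, h=1.0e-3):
    """Second derivative by central finite difference."""
    f = lambda y: adiabatic(y)[surface]
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

c_lower = curvature(0)   # lower surface: negative curvature along x
c_upper = curvature(1)   # upper surface: positive curvature
# Analytically these equal -2*lam**2/Delta and +2*lam**2/Delta at x = 0.
```

The larger the coupling and the smaller the gap, the more negative the lower surface's curvature, i.e., the stronger the spontaneous drive away from $\text{C}_{\text{2v}}$ symmetry.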
Coupled-state dynamics can also be used to describe situations in which vibrational rather than electronic-state transitions occur. For example, when van der Waals complexes such as HCl...Ar undergo so-called vibrational predissociation, one thinks in terms of movement of the Ar atom relative to the center of mass of the HCl molecule playing the role of the R coordinate above, and the vibrational state of HCl as playing the role of the quantized (electronic) state in the above example.
In such cases, a vibrationally excited HCl molecule (e.g., in v = 1) to which an Ar atom is attached via weak van der Waals attraction transfers its vibrational energy to the Ar atom, subsequently dropping to a lower (e.g., v = 0) vibrational level. Within the two-coupled-state model introduced above, the upper energy surface pertains to Ar bound to HCl (in a van der Waals well of depth $\text{D}_{\text{e}}$) with HCl in an excited vibrational state ($\Delta$ being the v = 0 to v = 1 vibrational energy gap), and the lower surface describes an Ar atom that is free of the HCl molecule, which is itself in its v = 0 vibrational state. In this case, the coordinate R is the Ar-to-HCl distance.
In analogy with the electronic-nuclear coupling example discussed earlier, the rate of transition from HCl (v=1) bound to Ar to HCl(v=0) plus a free Ar atom depends on the strength of coupling between the Ar...HCl relative motion coordinate (R) and the HCl internal vibrational coordinate. The $\langle \phi \big| \frac{\text{d}}{\text{dR}} \big| \phi '' \rangle$ coupling elements in this case are integrals over the HCl vibrational coordinate x involving the v = 0 ($\phi$) and v = 1 ($\phi ''$) vibrational functions. The integrals over the R coordinate in the earlier expression for the rate of radiationless transitions now involve integration over the distance R between the Ar atom and the center of mass of the HCl molecule.
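The $\langle \phi \big| \frac{\text{d}}{\text{dR}} \big| \phi '' \rangle$ integrals can be illustrated on a model pair of vibrational states. The sketch below (a toy calculation, not an HCl computation) evaluates $\langle \phi_0 | d/dx | \phi_1 \rangle$ for the two lowest harmonic-oscillator functions in dimensionless units ($\hbar = m = \omega = 1$, an assumption), for which the analytic value is $\sqrt{1/2}$.

```python
import math

# Dimensionless harmonic-oscillator model (hbar = m = omega = 1, assumed)
# for the <phi_0| d/dx |phi_1> derivative coupling between v=0 and v=1.
N = 20001
L = 10.0
dx = 2.0 * L / (N - 1)
xs = [-L + i * dx for i in range(N)]

phi0 = [math.pi ** -0.25 * math.exp(-x * x / 2.0) for x in xs]
# d(phi_1)/dx, using phi_1 = sqrt(2) * pi**-0.25 * x * exp(-x**2/2)
dphi1 = [math.sqrt(2.0) * math.pi ** -0.25 * (1.0 - x * x) * math.exp(-x * x / 2.0)
         for x in xs]

coupling = sum(a * b for a, b in zip(phi0, dphi1)) * dx
# Analytic value is sqrt(1/2) ~ 0.7071 in these units.
```

For a real Ar...HCl calculation the quadrature would of course run over the true HCl vibrational functions, but the structure of the integral is the same.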
This completes our discussion of dynamical processes in which more than one Born-Oppenheimer state is involved. There are many situations in molecular spectroscopy and chemical dynamics where consideration of such coupled-state dynamics is essential. These cases are characterized by
1. total energies E which may be partitioned in two or more ways among the internal degrees of freedom (e.g., electronic and nuclear motion, or HCl vibration and Ar-atom motion in the above examples),
2. Born-Oppenheimer potentials that differ in energy by a small amount (so that energy transfer from the other degree(s) of freedom is facile).
16.03: Classical Treatment of Nuclear Motion
For all but very elementary chemical reactions (e.g., D + HH $\rightarrow$ HD + H or F + HH $\rightarrow$ FH + H) or scattering processes (e.g., CO (v,J) + He $\rightarrow$ CO (v',J') + He), the above fully quantal coupled equations simply cannot be solved even when modern supercomputers are employed. Fortunately, the Schrödinger equation can be replaced by a simple classical mechanics treatment of nuclear motions under certain circumstances.
For motion of a particle of mass $\mu$ along a direction R, the primary condition under which a classical treatment of nuclear motion is valid
$\dfrac{\lambda}{4\pi}\dfrac{1}{\text{p}}\bigg|\dfrac{\text{dp}}{\text{dR}}\bigg| \ll 1 \nonumber$
relates to the fractional change in the local momentum defined as:
$\text{p} = \sqrt{2\mu (\text{E - E}_j\text{(R)})} \nonumber$
along R within the 3N - 5 or 3N - 6 dimensional internal coordinate space of the molecule, as well as to the local de Broglie wavelength
$\lambda = \dfrac{2\pi\hbar}{|\text{p}|}. \nonumber$
The inverse of the quantity $\dfrac{1}{\text{p}}\bigg| \dfrac{\text{dp}}{\text{dR}} \bigg|$ can be thought of as the length over which the momentum changes by 100%. The above condition then states that the local de Broglie wavelength must be short with respect to the distance over which the momentum (and hence the potential) changes appreciably. Clearly, whenever one is dealing with heavy nuclei that are moving fast (so |p| is large), one should anticipate that the local de Broglie wavelength of those particles may be short enough to meet the above criterion for a classical treatment.
It has been determined that for potentials characteristic of typical chemical bonding (whose depths and dynamic range of interatomic distances are well known), and for all but low-energy motions (e.g., zero-point vibrations) of light particles such as Hydrogen and Deuterium nuclei or electrons, the local de Broglie wavelengths are often short enough for the above condition to be met (because of the large masses $\mu$ of non-Hydrogenic species) except when their velocities approach zero (e.g., near classical turning points). It is therefore common to treat the nuclear-motion dynamics of molecules that do not contain H or D atoms in a purely classical manner, and to apply so-called semi-classical corrections near classical turning points. The motions of H and D atomic centers usually require quantal treatment except when their kinetic energies are quite high.
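This can be made quantitative by evaluating $\lambda = 2\pi\hbar/|p|$ for light and heavy atoms at the same kinetic energy; the 0.026 eV used below (roughly kT at 300 K) is an assumed illustrative value.

```python
import math

HBAR = 1.054571817e-34      # J s
AMU = 1.66053906660e-27     # kg per atomic mass unit
EV = 1.602176634e-19        # J per eV

def de_broglie_m(mass_amu, kinetic_energy_ev):
    """Local de Broglie wavelength lambda = 2*pi*hbar / |p|, in meters."""
    p = math.sqrt(2.0 * mass_amu * AMU * kinetic_energy_ev * EV)
    return 2.0 * math.pi * HBAR / p

E_KIN = 0.026  # eV, roughly kT at 300 K (illustrative choice)
lam_H = de_broglie_m(1.008, E_KIN)    # roughly 1.8 Angstrom
lam_Ar = de_broglie_m(39.948, E_KIN)  # roughly 0.3 Angstrom
```

The hydrogen wavelength is comparable to bond lengths, so H motion is quantal at thermal energies, while the Ar wavelength is several times shorter, consistent with treating heavy-atom motion classically.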
Classical Trajectories
To apply classical mechanics to the treatment of nuclear-motion dynamics, one solves Newtonian equations
$\text{m}_{\text{k}}\dfrac{\text{d}^2\text{X}_{\text{k}}}{\text{dt}^2} = - \dfrac{\text{dE}_{\text{j}}}{\text{dX}_{\text{k}}} \nonumber$
where $\text{X}_{\text{k}}$ denotes one of the 3N cartesian coordinates of the atomic centers in the molecule, m$_{\text{k}}$ is the mass of the atom associated with this coordinate, and $\frac{\text{dE}_{\text{j}}}{\text{dX}_{\text{k}}}$ is the derivative of the potential, which is the electronic energy $\text{E}_{\text{j}}$(R), along the $\text{k}^{\text{th}}$ coordinate's direction. Starting with coordinates {$\text{X}_{\text{k}}$(0)} and corresponding momenta {$\text{P}_{\text{k}}$(0)} at some initial time t = 0, and given the ability to compute the force - $\frac{\text{dE}_{\text{j}}}{\text{dX}_{\text{k}}}$ at any location of the nuclei, the Newton equations can be solved (usually on a computer) using finite-difference methods:
$\text{X}_{\text{k}}(t+\delta t) = \text{X}_{\text{k}}(t) + \text{P}_{\text{k}}(t) \dfrac{\delta t}{\text{m}_{\text{k}}} \nonumber$
$\text{P}_{\text{k}}(t+\delta t) = \text{P}_{\text{k}}(t) - \dfrac{\text{dE}_j}{\text{dX}_k}(t) \delta t. \nonumber$
In so doing, one generates a sequence of coordinates {$\text{X}_k(\text{t}_n)$} and momenta {$\text{P}_k(\text{t}_n)$}, one for each "time step" $t_n$. The histories of these coordinates and momenta as functions of time are called "classical trajectories". Following them from early times, characteristic of the molecule(s) at "reactant" geometries, through to late times, perhaps characteristic of "product" geometries, allows one to monitor and predict the fate of the time evolution of the nuclear dynamics. Even for large molecules with many atomic centers, propagation of such classical trajectories is feasible on modern computers if the forces - $\frac{\text{dE}_j}{\text{dX}_k}$ can be computed in a manner that does not consume inordinate amounts of computer time.
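The update equations above can be propagated directly. The sketch below applies them to a single harmonic coordinate with unit mass and force constant (assumed illustrative values, not a real bond); as the comments note, this simplest explicit scheme slowly gains energy, which is one reason production codes prefer integrators such as velocity Verlet.

```python
# Finite-difference propagation of the text's update equations for a
# one-dimensional harmonic potential E_j(X) = 0.5*k*X**2 (m = k = 1, assumed).
m, k = 1.0, 1.0
X, P = 1.0, 0.0          # initial coordinate and momentum
dt, nsteps = 0.01, 628   # ~one vibrational period (2*pi)

def force(x):
    return -k * x        # -dE_j/dX

E0 = P * P / (2 * m) + 0.5 * k * X * X
for _ in range(nsteps):
    # Both updates use the time-t values, exactly as in the text.
    X, P = X + P * dt / m, P + force(X) * dt

E_final = P * P / (2 * m) + 0.5 * k * X * X
# For this harmonic case the energy grows by exactly (1 + dt**2) per step,
# i.e., a few percent per period here; velocity-Verlet-type integrators
# remove most of this drift at no extra cost.
```

Shrinking $\delta t$ reduces the drift quadratically, which is why step sizes are chosen as a small fraction of the fastest vibrational period.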
In Section 6, methods by which such force calculations are performed using first-principles quantum mechanical methods (i.e., so-called ab initio methods) are discussed. Suffice it to say that these calculations are often the rate-limiting step in carrying out classical trajectory simulations of molecular dynamics. The large effort involved in the ab initio determination of electronic energies and their gradients - $\frac{\text{dE}}{\text{dX}_k}$ motivates one to consider using empirical "force field" functions $V_j(R)$ in place of the ab initio electronic energy $E_j(R)$. Such model potentials $V_j(R)$ are usually constructed in terms of easy-to-compute, easy-to-differentiate functions of the interatomic distances and valence angles that appear in the molecule. The parameters that appear in the attractive and repulsive parts of these potentials are usually chosen so the potential is consistent with certain experimental data (e.g., bond dissociation energies, bond lengths, vibrational energies, torsion energy barriers).
For a large polyatomic molecule, the potential function V usually contains several distinct contributions:
$V = V_{\text{bond}} + V_{\text{bend}} + V_{\text{vanderWaals}} + V_{\text{torsion}} + V_{\text{electrostatic}}. \nonumber$
Here $\text{V}_{\text{bond}}$ gives the dependence of V on stretching displacements of the bonds (i.e., interatomic distances between pairs of bonded atoms) and is usually modeled as a harmonic or Morse function for each bond in the molecule:
$\text{V}_{\text{bond}} = \sum\limits_J \dfrac{1}{2}k_J (R_J - R_{\text{eq,J}})^2 \nonumber$
or
$\text{V}_{\text{bond}} = \sum\limits_J \text{D}_{\text{e,J}} \left( 1-e^{-a_J(\text{R}_J - \text{R}_{\text{eq,J}})} \right)^2 \nonumber$
where the index J labels the bonds and $k_J$, $a_J$, and $R_{eq,J}$ are the force constant, the Morse range parameter, and the equilibrium bond length for the $J^{th}$ bond.
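Expanding the Morse form about $R_{eq}$ shows that it reduces to the harmonic form with $k_J = 2D_{e,J}a_J^2$, so the two choices agree for small stretches but differ strongly near dissociation. A quick check, with assumed arbitrary bond parameters (not fitted to any real molecule):

```python
import math

# Assumed illustrative bond parameters (not fitted to any real molecule)
D_e, a, R_eq = 0.18, 1.2, 1.4

def v_harmonic(R, k):
    return 0.5 * k * (R - R_eq) ** 2

def v_morse(R):
    return D_e * (1.0 - math.exp(-a * (R - R_eq))) ** 2

# Expanding the Morse function about R_eq gives k = 2 * D_e * a**2,
# so the two forms agree for small stretching displacements.
k_eff = 2.0 * D_e * a ** 2
small, large = R_eq + 0.01, R_eq + 1.5
```

At `small` the two energies agree to high accuracy; at `large` the harmonic form keeps rising while the Morse form levels off toward its dissociation limit $D_e$.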
$\text{V}_{\text{bend}}$ describes the bending potentials for each triplet of atoms (ABC) that are bonded in a A-B-C manner; it is usually modeled in terms of a harmonic potential for each such bend:
$\text{V}_{\text{bend}} = \sum\limits_J \dfrac{1}{2}k^{\theta}_{J}\left( \theta_J - \theta_{\text{eq,J}} \right)^2. \nonumber$
The $\theta_{\text{eq,J}} \text{ and } k^{\theta}_J$ are the equilibrium angles and force constants for the J$^{th}$ angle.
$\text{V}_{\text{vanderWaals}}$ represents the van der Waals interactions between all pairs of atoms that are not bonded to one another. It is usually written as a sum over all pairs of such atoms (labeled J and K) of a Lennard-Jones 6-12 potential:
$\text{V}_{\text{vanderWaals}} = \sum\limits_{J<K} \left[ a_{J,K}(R_{J,K})^{-12} - b_{J,K}(R_{J,K})^{-6} \right] \nonumber$
where $a_{J,K} \text{ and } b_{J,K}$ are parameters relating to the repulsive and dispersion attraction forces, respectively for the $\text{J}^{\text{th}} \text{ and } \text{K}^{\text{th}}$ atoms.
$\text{V}_{\text{torsion}}$ contributions describe the dependence of V on angles of rotation about single bonds. For example, rotation of a CH$_3$ group around the single bond connecting the carbon atom to another group may have an angle dependence of the form:
$\text{V}_{\text{torsion}} = V_0(1 - \cos(3\theta)) \nonumber$
where $\theta$ is the torsion rotation angle, and $V_0$ is the magnitude of the interaction between the C-H bonds and the group on the atom bonded to carbon.
$\text{V}_{\text{electrostatic}}$ contains the interactions among polar bonds or other polar groups (including any charged groups). It is usually written as a sum over pairs of atomic centers (J and K) of Coulombic interactions between fractional charges {Q$_J$} (chosen to represent the bond polarity) on these atoms:
$\text{V}_{\text{electrostatic}} = \sum\limits_{J<K}\dfrac{\text{Q}_J\text{Q}_K}{\text{R}_{J,K}} \nonumber$
Although the total potential V as written above contains many components, each is a relatively simple function of the Cartesian positions of the atomic centers. Therefore, it is relatively straightforward to evaluate V and its gradient along all 3N Cartesian directions in a computationally efficient manner. For this reason, the use of such empirical force fields in so-called molecular mechanics simulations of classical dynamics is widely used for treating large organic and biological molecules.
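That the gradient is straightforward to evaluate can be verified term by term. The sketch below checks the analytic derivative of one Lennard-Jones pair term against a central finite difference; the `a_JK` and `b_JK` values are arbitrary assumptions.

```python
# One non-bonded pair term and its analytic derivative, checked against a
# finite difference (a_JK, b_JK are assumed placeholder parameters).
a_JK, b_JK = 1.0e5, 40.0

def v_pair(R):
    """Lennard-Jones 6-12 term for one atom pair."""
    return a_JK * R ** -12 - b_JK * R ** -6

def dv_pair(R):
    """Analytic derivative dV/dR (the force along R is its negative)."""
    return -12.0 * a_JK * R ** -13 + 6.0 * b_JK * R ** -7

R, h = 3.0, 1.0e-6
numeric = (v_pair(R + h) - v_pair(R - h)) / (2.0 * h)
```

The same pattern (simple closed-form term, simple closed-form derivative) holds for every contribution listed above, which is what makes molecular-mechanics force evaluations so cheap.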
Initial Conditions
No single trajectory can be used to simulate chemical reactions or collisions that relate to realistic experiments. To generate classical trajectories that are characteristic of particular experiments, one must choose many initial conditions (coordinates and momenta), the collection of which is representative of the experiment. For example, to use an ensemble of trajectories to simulate a molecular beam collision between H and Cl atoms at a collision energy E, one must follow many classical trajectories that have a range of "impact parameters" (b) from zero up to some maximum value $b_{max}$ beyond which the H···Cl interaction potential vanishes. The impact parameter is the distance of closest approach that a trajectory would have if no attractive or repulsive forces were operative.
Moreover, if the energy resolution of the experiment makes it impossible to fix the collision energy closer than an amount $\delta$E, one must run collections of trajectories for values of E lying within this range.
If, in contrast, one wishes to simulate thermal reaction rates, one needs to follow trajectories with various E values and various impact parameters b from initiation at t = 0 to their conclusion (at which time the chemical outcome is interrogated). Each of these trajectories must have their outcome weighted by an amount proportional to a Boltzmann factor $e^{\frac{-E}{RT}}$, where R is the ideal gas constant and T is the temperature because this factor specifies the probability that a collision occurs with kinetic energy E.
As the complexity of the molecule under study increases, the number of parameters needed to specify the initial conditions also grows. For example, classical trajectories that relate to $\text{F + H}_2 \rightarrow \text{ HF + H }$ need to be specified by providing (i) an impact parameter for the F to the center of mass of $\text{H}_2$, (ii) the relative translational energy of the F and $\text{H}_2$, (iii) the radial momentum and coordinate of the $\text{H}_2$ molecule's bond length, and (iv) the angular momentum of the $\text{H}_2$ molecule as well as the angle of the H-H bond axis relative to the line connecting the F atom to the center of mass of the $\text{H}_2$ molecule. Many such sets of initial conditions must be chosen and the resultant classical trajectories followed to generate an ensemble of trajectories pertinent to an experimental situation.
It should be clear that even the classical mechanical simulation of chemical experiments involves considerable effort because no single trajectory can represent the experimental situation. Many trajectories, each with different initial conditions selected so they represent, as an ensemble, the experimental conditions, must be followed and the outcome of all such trajectories must be averaged over the probability of realizing each specific initial condition.
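A minimal sampler for such an ensemble might look as follows. Impact parameters are drawn with probability proportional to b (the standard annular-area weighting for beam geometries, an assumption beyond the text's wording) and collision energies from the Boltzmann distribution; `B_MAX` and `KT` are arbitrary assumed values.

```python
import math
import random

random.seed(1)
B_MAX = 5.0        # maximum impact parameter (assumed, arbitrary units)
KT = 1.0           # thermal energy k_B*T (assumed, arbitrary units)

def sample_initial_condition():
    # b drawn with probability density 2b/b_max**2 (annular-area weighting)
    b = B_MAX * math.sqrt(random.random())
    # E drawn from the Boltzmann distribution exp(-E/kT)/kT
    E = -KT * math.log(random.random())
    return b, E

samples = [sample_initial_condition() for _ in range(200000)]
mean_b = sum(b for b, _ in samples) / len(samples)
mean_E = sum(E for _, E in samples) / len(samples)
# For these densities <b> = (2/3)*b_max and <E> = kT.
```

Each (b, E) pair would seed one trajectory; outcome averages over the ensemble then carry the correct statistical weights automatically.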
Analyzing Final Conditions
Even after classical trajectories have been followed from t = 0 until the outcomes of the collisions are clear, one needs to properly relate the fate of each trajectory to the experimental situation. For the $\text{F + H}_2 \rightarrow \text{ HF + H}$ example used above, one needs to examine each trajectory to determine, for example, (i) whether HF + H products are formed or non-reactive collision to produce F + $\text{H}_2$ has occurred, (ii) the amount of rotational energy and angular momentum that is contained in the HF product molecule, (iii) the amount of relative translational energy that remains in the H + FH products, and (iv) the amount of vibrational energy that ends up in the HF product molecule.
Because classical rather than quantum mechanical equations are used to follow the time evolution of the molecular system, there is no guarantee that the amount of energy or angular momentum found in degrees of freedom for which these quantities should be quantized will be so. For example, $\text{ F + H}_2 \rightarrow \text{ HF + H }$ trajectories may produce HF molecules with internal vibrational energy that is not a half-integer multiple of the fundamental vibrational quantum $\hbar\omega$ of the HF bond. Also, the rotational energy of the HF molecule may not fit the formula $J(J+1)\frac{h^2}{8\pi^2I}$, where I is HF's moment of inertia.
To connect such purely classical mechanical results more closely to the world of quantized energy levels, a method known as "binning" is often used. In this technique, one assigns the outcome of a classical trajectory to the particular quantum state (e.g., to a vibrational state v or a rotational state J of the HF molecule in the above example) whose quantum energy is closest to the classically determined energy. For the HF example at hand, the classical vibrational energy $\text{E}_{\text{cl,vib}}$ is simply used to define, as the closest integer, a vibrational quantum number v according to:

$v = \dfrac{\text{E}_{\text{cl,vib}}}{\hbar\omega}-\dfrac{1}{2}. \nonumber$

Likewise, a rotational quantum number J can be assigned as the closest integer to that determined by using the classical rotational energy $\text{E}_{\text{cl,rot}}$ in the formula:
$J = \dfrac{1}{2}\left[ \sqrt{1+\dfrac{32\pi^2IE_{\text{cl,rot}}}{h^2}} -1 \right] \nonumber$
which is the solution of the quadratic equation $J(J+1)\frac{h^2}{8\pi^2I} = \text{E}_{\text{cl,rot}}$.

By following many trajectories and assigning vibrational and rotational quantum numbers to the product molecules formed in each trajectory, one can generate histograms giving the frequency with which each product molecule quantum state is observed for the ensemble of trajectories used to simulate the experiment of interest. In this way, one can approximately extract product-channel quantum state distributions from classical trajectory simulations.
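The two binning formulas can be packaged directly. The sketch below works in dimensionless units with $\hbar = 1$ (an assumption), using the identity $32\pi^2 I E/h^2 = 8IE/\hbar^2$.

```python
import math

HBAR = 1.0  # dimensionless units (assumed): hbar*omega and hbar**2/(2I) set the scales

def bin_vibration(E_cl_vib, omega):
    """Nearest vibrational quantum number from E = (v + 1/2)*hbar*omega."""
    return round(E_cl_vib / (HBAR * omega) - 0.5)

def bin_rotation(E_cl_rot, I):
    """Nearest J from E = J(J+1)*hbar**2/(2I), via the quadratic formula."""
    J = 0.5 * (math.sqrt(1.0 + 8.0 * I * E_cl_rot / HBAR ** 2) - 1.0)
    return round(J)

# A trajectory that deposits slightly non-quantized energies is still
# assigned to the nearest quantum state:
v = bin_vibration(2.47, omega=1.0)   # close to (2 + 1/2)*hbar*omega
J = bin_rotation(6.30, I=1.0)        # close to J(J+1)/2 = 6 for J = 3
```

Histogramming these (v, J) assignments over the trajectory ensemble yields the approximate product-state distributions described above.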
16.04: Wavepackets
In an attempt to combine the attributes and strengths of classical trajectories, which allow us to "watch" the motions that molecules undergo, and quantum mechanical wavefunctions, which are needed if interference phenomena are to be treated, a hybrid approach is sometimes used. A popular and rather successful such point of view is provided by so-called coherent state wavepackets.
A quantum mechanical wavefunction $\psi (\textbf{x} \big| \textbf{X}, \textbf{P})$ that is a function of all pertinent degrees of freedom (denoted collectively by x) and that depends on two sets of parameters (denoted X and P, respectively) is defined as follows:
$\psi ( \textbf{x} \big| \textbf{X} , \textbf{P} ) = \prod\limits_{k=1}^N \dfrac{e^{ \left[ \dfrac{iP_kx_k}{\hbar} - \dfrac{\left( x_k - X_k \right)^2}{4\langle\Delta x_k\rangle^2} \right]}}{ \sqrt[4]{2\pi\langle\Delta x_k \rangle^2}} \nonumber$
Here, $\langle\Delta x_k\rangle^2$ is the uncertainty
$\langle\Delta x_k\rangle^2 = \int \big|\psi\big|^2 \left( x_k-X_k \right)^2d\textbf{x} \nonumber$
along the $k^{th}$ degree of freedom for this wavefunction, defined as the mean squared displacement away from the average coordinate
$\int \big| \psi \big|^2 x_k d\textbf{x} = X_k. \nonumber$
So, the parameter $X_k$ specifies the average value of the coordinate $x_k$. In like fashion, it can be shown that the parameter $P_k$ is equal to the average value of the momentum along the $k^{th}$ coordinate:
$\int \psi^{\text{*}}\left( -i\hbar\dfrac{\partial}{\partial x_k} \right) \psi d\textbf{x} = P_k \nonumber$
The uncertainty in the momentum along each coordinate:
$\langle\Delta p_k \rangle^2 = \int \psi^{\text{*}}\left( -i\hbar\dfrac{\partial}{\partial x_k} - P_k \right)^2 \psi d\textbf{x} \nonumber$
is given, for functions of the coherent state form, in terms of the coordinate uncertainty as
$\langle\Delta p_k\rangle^2\langle\Delta x_k\rangle^2 = \dfrac{\hbar^2}{4} \nonumber$
Of course, the general Heisenberg uncertainty condition
$\langle\Delta p_k\rangle^2\langle\Delta x_k\rangle^2 \geq \dfrac{\hbar^2}{4} \nonumber$
limits the coordinate and momentum uncertainty products for arbitrary wavefunctions. The coherent state wavepacket functions are those for which this uncertainty product is minimal. In this sense, coherent state wavepackets are seen to be as close to classical as possible, since in classical mechanics there are no limits placed on the resolution with which one can observe coordinates and momenta.
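These relations are easy to verify numerically. The sketch below builds a one-dimensional coherent-state wavepacket on a grid ($\hbar = 1$ and the particular X, P, and width s are assumed illustrative choices) and confirms that $\langle\Delta x\rangle^2\langle\Delta p\rangle^2 = \hbar^2/4$.

```python
import numpy as np

# One-dimensional coherent-state wavepacket (hbar = 1, assumed), centered at
# X with mean momentum P and coordinate width s (so <dx^2> = s**2).
HBAR = 1.0
X, P, s = 0.0, 2.0, 0.7

x = np.linspace(-12.0, 12.0, 8192)
dx = x[1] - x[0]
psi = np.exp(1j * P * x / HBAR - (x - X) ** 2 / (4.0 * s ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize on the grid

var_x = np.sum(np.abs(psi) ** 2 * (x - X) ** 2) * dx
phi = -1j * HBAR * np.gradient(psi, dx) - P * psi  # (p - P) acting on psi
var_p = np.sum(np.abs(phi) ** 2) * dx
product = var_x * var_p   # should equal hbar**2/4 for a coherent state
```

Any non-Gaussian deformation of `psi` would push `product` above $\hbar^2/4$, which is one way to see that the coherent state saturates the Heisenberg bound.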
These wavepacket functions are employed as follows in the most straightforward treatments of combined quantal/classical mechanics:
1. Classical trajectories are used, as described in greater detail above, to generate a series of coordinates $X_k(t_n)$ and momenta $P_k(t_n)$ at a sequence of times denoted {$t_n$}.
2. These classical coordinates and momenta are used to define a wavepacket function as written above, whose $X_k \text{ and } P_k$ parameters are taken to be the coordinates and momenta of the classical trajectory. In effect, the wavepacket moves around "riding" the classical trajectory's coordinates and momenta as time evolves.
3. At any time $t_n$, the quantum mechanical properties of the system are computed by forming the expectation values of the corresponding quantum operators for a wavepacket wavefunction of the form given above with $X_k \text{ and } P_k$ given by the classical coordinates and momenta at that time $t_n$.
Such wavepackets are, of course, simple approximations to the true quantum mechanical functions of the system because they do not obey the Schrödinger equation appropriate to the system. They should be expected to provide accurate representations of the true wavefunctions for systems that are more classical in nature (i.e., when the local de Broglie wavelengths are short compared to the range over which the potentials vary appreciably). For species containing light particles (e.g., electrons or H atoms) or for low kinetic energies, the local de Broglie wavelengths will not satisfy such criteria, and these approaches can be expected to be less reliable. For further information about the use of coherent state wavepackets in molecular dynamics and molecular spectroscopy, see E. J. Heller, Acc. Chem. Res. 14, 368 (1981).
This completes our treatment of the subjects of molecular dynamics and molecular collisions. Neither its depth nor its coverage was at the research level; rather, we intended to provide the reader with an introduction to many of the theoretical concepts and methods that arise when applying either the quantum Schrödinger equation or classical Newtonian mechanics to chemical reaction dynamics. Essentially none of the experimental aspects of this subject (e.g., molecular beam methods for preparing "cold" molecules, laser pump/probe methods for preparing reagents in specified quantum states and observing products in such states) have been discussed. An excellent introduction to both the experimental and theoretical foundations of modern chemical and collision dynamics is provided by the text Molecular Reaction Dynamics and Chemical Reactivity by R. D. Levine and R. B. Bernstein, Oxford Univ. Press (1987).
Electrons interact via pairwise Coulomb forces; within the "orbital picture" these interactions are modelled by more tractable "averaged" potentials. The difference between the true Coulombic interactions and the averaged potential is not small, so to achieve reasonable (ca. 1 kcal/mol) chemical accuracy, high-order corrections to the orbital picture are needed.
• 17.1: Orbitals, Configurations, and the Mean-Field Potential
Ab initio quantum chemistry methods typically use, as a starting point from which improvements are made, a picture in which the electrons interact via a one-electron additive potential. The atomic and molecular structures predicted from these "mean-field" potentials are approximate and must be improved to achieve accurate solutions to the true electronic Schrödinger equation. To do so, three constructs are typically employed: orbitals, configurations, and electron correlation.
• 17.2: Electron Correlation Requires Moving Beyond a Mean-Field Model
To improve upon the mean-field picture of electronic structure, one must move beyond the single-configuration approximation. It is essential to do so to achieve higher accuracy, but it is also important to do so to achieve a conceptually correct view of chemical electronic structure. The inclusion of instantaneous spatial correlations among electrons is necessary to achieve a more accurate description. No single spin-orbital product wavefunction is capable of treating electron correlation.
• 17.3: Moving from Qualitative to Quantitative Models
Section 6 addresses the quantitative and computational implementation of many of the above ideas. It is not designed to address all of the state-of-the-art methods which have been, and are still being, developed to calculate orbitals and state wavefunctions.
• 17.4: Atomic Units
Atomic units are used to remove all $\hbar \text{, e, and m}_e$ factors from the equations.
Thumbnail: A mean-field approximation with a single configuration accounts for roughly 99% of the ground-state energy; the rest can be computed or approximated by including other "excited-state" configurations with electrons in virtual (unoccupied) orbitals beyond those of the ground-state configuration alone.
17: Higher Order Corrections to Electronic Structure
The discipline of computational ab initio quantum chemistry is aimed at determining the electronic energies and wavefunctions of atoms, molecules, radicals, ions, solids, and all other chemical species. The phrase ab initio implies that one attempts to solve the Schrödinger equation from first principles, treating the molecule as a collection of positive nuclei and negative electrons moving under the influence of coulombic potentials, and not using any prior knowledge about this species' chemical behavior.
To make practical use of such a point of view requires that approximations be introduced; the full Schrödinger equation is too difficult to solve exactly for any but simple model problems. These approximations take the form of physical concepts (e.g., orbitals, configurations, quantum numbers, term symbols, energy surfaces, selection rules, etc.) that provide useful means of organizing and interpreting experimental data and computational methods that allow quantitative predictions to be made.
Essentially all ab initio quantum chemistry methods use, as a starting point from which improvements are made, a picture in which the electrons interact via a one-electron additive potential. These so-called mean-field potentials
$V_{mf}(\textbf{r}) = \sum\limits_j V_{mf}(\textbf{r}_j) \nonumber$
provide descriptions of atomic and molecular structure that are approximate. Their predictions must be improved to achieve reasonably accurate solutions to the true electronic Schrödinger equation. In so doing, three constructs that characterize essentially all ab initio quantum chemical methods are employed: orbitals, configurations, and electron correlation.
Since the electronic kinetic energy operator is one-electron additive
$T = \sum\limits_j T_j \nonumber$
the mean-field Hamiltonian
$H^0 = T + V_{mf} \nonumber$
is also of this form. The additivity of $H^0$ implies that the mean-field wavefunctions {$\Psi^0_k$} can be formed in terms of products of functions {$\phi_j$} of the coordinates of the individual electrons, and that the corresponding energies {$E^0_k$} are additive. Thus, it is the ansatz that $V_{mf}$ is separable that leads to the concept of orbitals, which are the one-electron functions {$\phi_j$}. These orbitals are found by solving the one-electron Schrödinger equations:
$(T_1 + V_{mf}(\textbf{r}_1)) \phi_j(\textbf{r}_1) = \epsilon_j\phi_j(\textbf{r}_1); \nonumber$
the eigenvalues {$\epsilon_j$} are called orbital energies.
Because each of the electrons also possesses intrinsic spin, the one-electron functions {$\phi_j$} used in this construction are taken to be eigenfunctions of ($T_1 + V_{mf}(\textbf{r}_1$)) multiplied by either $\alpha \text{ or } \beta$. This set of functions is called the set of mean-field spin-orbitals.
Given the complete set of solutions to this one-electron equation, a complete set of N-electron mean-field wavefunctions can be written down. Each $\Psi^0_k$ is constructed by forming an antisymmetrized product of N spin-orbitals chosen from the set of {$\phi_j$}, allowing each spin-orbital in the list to be a function of the coordinates of one of the N electrons (e.g.,
$\Psi^0_k = \big|\phi_{k1}(\textbf{r}_1)\phi_{k2}(\textbf{r}_2)\phi_{k3}(\textbf{r}_3) ... \phi_{k(N-1)}(\textbf{r}_{N-1})\phi_{kN}(\textbf{r}_N)\big| , \nonumber$
as above). The corresponding mean field energy is evaluated as the sum over those spin-orbitals that appear in $\Psi^0_k$:
$E^0_k = \sum\limits_{j=1}^{N}\epsilon_{kj}. \nonumber$
By choosing to place N electrons into specific spin-orbitals, one has specified a configuration. By making other choices of which N $\phi_j$ to occupy, one describes other configurations. Just as the one-electron mean-field Schrödinger equation has a complete set of spin-orbital solutions {$\phi_j \text{ and } \epsilon_j$}, the N-electron mean-field Schrödinger equation has a complete set of N-electron configuration state functions (CSFs) $\Psi^0_k$ and energies $E^0_k$.
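These constructions can be illustrated with a small numerical model (an assumed harmonic mean-field potential, not a real atom): solving the one-electron equation on a grid yields orbital energies $\epsilon_j$, and the mean-field energy of a chosen configuration is just the sum of its occupied spin-orbital energies.

```python
import numpy as np

# Model mean-field problem (assumed): one electron in a harmonic mean-field
# potential V_mf(x) = 0.5*x**2, in dimensionless units (hbar = m = omega = 1).
n, L = 1200, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# (T + V_mf) on a grid: three-point kinetic-energy stencil plus diagonal potential
H = (np.diag(np.full(n, 1.0 / dx ** 2) + 0.5 * x ** 2)
     + np.diag(np.full(n - 1, -0.5 / dx ** 2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx ** 2), -1))
eps = np.linalg.eigvalsh(H)[:3]   # lowest orbital energies, ~0.5, 1.5, 2.5

# Mean-field energy of a 4-electron configuration with alpha and beta
# spin-orbitals built on each of the two lowest spatial orbitals:
E0 = 2 * eps[0] + 2 * eps[1]
```

Choosing a different set of occupied orbitals (e.g., promoting one electron to `eps[2]`) defines a different configuration with its own additive $E^0_k$, exactly as described above.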
17.02: Electron Correlation Requires Moving Beyond a Mean-Field Model
To improve upon the mean-field picture of electronic structure, one must move beyond the single-configuration approximation. It is essential to do so to achieve higher accuracy, but it is also important to do so to achieve a conceptually correct view of chemical electronic structure. However, it is very disconcerting to be told that the familiar \(1s^22s^22p^2\) configuration description of the carbon atom is inadequate and that instead one must think of the \(^3P\) ground state of this atom as a 'mixture' of multiple (often considered "excited-state") configurations:
• \(1s^22s^22p^2\)
• \(1s^22s^23p^2\)
• \(1s^22s^23d^2\)
• \(2s^23s^22p^2\)
and any other configurations whose angular momenta can be coupled to produce \(L=1\) and \(S=1\).
Although the picture of configurations in which \(N\) electrons occupy \(N\) spin-orbitals may be very familiar and useful for systematizing electronic states of atoms and molecules, these constructs are approximations to the true states of the system. They were introduced when the mean-field approximation was made, and neither orbitals nor configurations describe the proper eigenstates {\(\Psi_k, E_k\)}.
The inclusion of instantaneous spatial correlations among electrons is necessary to achieve a more accurate description of atomic and molecular electronic structure. No single spin-orbital product wavefunction is capable of treating electron correlation to any extent; its product nature renders it incapable of doing so.
17.03: Moving from Qualitative to Quantitative Models
The preceding Chapters introduced, in a qualitative manner, many of the concepts which are used in applying quantum mechanics to electronic structures of atoms and molecules. Atomic, bonding, non-bonding, antibonding, Rydberg, hybrid, and delocalized orbitals and the configurations formed by occupying these orbitals were discussed. Spin and spatial symmetry as well as permutational symmetry were treated, and properly symmetry-adapted configuration state functions were formed. The Slater-Condon rules were shown to provide expressions for Hamiltonian matrix elements (and those involving any one- or two-electron operator) over such CSFs in terms of integrals over the orbitals occupied in the CSFs. Orbital, configuration, and state correlation diagrams were introduced to allow one to follow the evolution of electronic structures throughout a 'reaction path'.
Section 6 addresses the quantitative and computational implementation of many of the above ideas. It is not designed to address all of the state-of-the-art methods which have been, and are still being, developed to calculate orbitals and state wavefunctions. The rapid growth in computer hardware and software power and the evolution of new computer architectures make it difficult, if not impossible, to present an up-to-date overview of the techniques that are presently at the cutting edge in computational chemistry. Nevertheless, this Section attempts to describe the essential elements of several of the more powerful and commonly used methods; it is likely that many of these elements will persist in the next generation of computational chemistry techniques although the details of their implementation will evolve considerably. The text by Szabo and Ostlund provides excellent insights into many of the theoretical methods treated in this Section.
17.04: Atomic Units
The electronic Hamiltonian is expressed, in this Section, in so-called atomic units (a.u.):
$H_e = \sum\limits_j \left[ -\dfrac{1}{2}\nabla_j^2 - \sum\limits_a\dfrac{Z_a}{r_{j,a}} \right] + \sum\limits_{j<k}\dfrac{1}{r_{j,k}}. \nonumber$
These units are introduced to remove all $\hbar$, $e$, and $m_e$ factors from the equations.
To effect this unit transformation, one notes that the kinetic energy operator scales as $r_j^{-2}$ whereas the coulombic potentials scale as $r_j^{-1}$ and as $r_{j,k}^{-1}$. So, if each of the distances appearing in the cartesian coordinates of the electrons and nuclei were expressed as a unit of length $a_0$ multiplied by a dimensionless length factor, the kinetic energy operator would involve terms of the form $\left(-\frac{\hbar^2}{2(a_0)^2m_e} \right)\nabla_j^2$, and the coulombic potentials would appear as $\frac{Z_ae^2}{(a_0)r_{j,a}} \text{ and } \frac{e^2}{(a_0)r_{j,k}}$. A factor of $\frac{e^2}{a_0}$ (which has units of energy since $a_0$ has units of length) can then be removed from the coulombic and kinetic energies, after which the kinetic energy terms appear as $-\frac{\hbar^2}{2(e^2a_0)m_e}\nabla_j^2$ and the potential energies appear as $\frac{Z_a}{r_{j,a}} \text{ and } \frac{1}{r_{j,k}}$. Then, choosing $a_0 = \frac{\hbar^2}{e^2m_e}$ changes the kinetic energy terms into $-\frac{1}{2}\nabla_j^2$; as a result, the entire electronic Hamiltonian takes the form given above in which no $e^2$, $m_e$, or $\hbar$ factors appear. The value of the so-called Bohr radius $a_0 = \frac{\hbar^2}{e^2m_e}$ is 0.529 Å, and the so-called Hartree energy unit $\frac{e^2}{a_0}$, which factors out of $H_e$, is 27.21 eV or 627.51 kcal/mol.
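These definitions can be checked numerically. The short sketch below uses CODATA SI constant values; the Gaussian-unit $e^2$ of the text corresponds to $e^2/4\pi\epsilon_0$ in SI. It recovers the quoted values of $a_0$ and the Hartree:

```python
import math

# CODATA 2018 SI values
hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m

# the "e^2" of the Gaussian-unit Hamiltonian, in SI (J m)
e2 = e**2 / (4 * math.pi * eps0)

a0 = hbar**2 / (e2 * m_e)   # Bohr radius (m)
hartree = e2 / a0           # Hartree energy (J)

print(a0 * 1e10)                          # -> 0.529... Angstrom
print(hartree / e)                        # -> 27.21... eV
print(hartree * 6.02214076e23 / 4184)     # -> 627.5... kcal/mol
```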
The single Slater determinant wavefunction (properly spin- and symmetry-adapted) is the starting point of the most common mean-field potential. It is also the origin of the molecular orbital concept.
• 18.1: Optimization of the Energy for a Multiconfiguration Wavefunction
For a multiconfigurational wavefunction, the Slater-Condon rules can be used to decompose the energy in terms of one- and two-electron integrals. The resulting sets of equations are called the CI secular equations and the Fock equations.
• 18.2: The Single-Determinant Wavefunction
The simplest trial function of the form given above is the single Slater determinant, for which the configuration-interaction part of the energy minimization is absent. Within the orbital approximation, the resulting Schrödinger equations reduce to the canonical Hartree-Fock equations, which explicitly include exchange-energy terms.
• 18.3: The Unrestricted Hartree-Fock Spin Impurity Problem
As formulated above in terms of spin-orbitals, the Hartree-Fock (HF) equations yield orbitals that do not guarantee that Ψ possesses proper spin symmetry.
• 18.4: Atomic Orbital Basis Sets
The basis orbitals commonly used in the LCAO-MO-SCF process fall into two classes: Slater-type orbitals and Cartesian Gaussian-type orbitals. STOs are used primarily for atomic and linear-molecule calculations because the multi-center integrals cannot efficiently be performed when STOs are employed in polyatomic molecule calculations. Such integrals can routinely be done when GTOs are used, which is the fundamental advantage that led to the dominance of these functions in quantum chemistry.
• 18.5: The LCAO-MO Expansion
The new F operator then gives new φ_i and ε_i via solution of the new Fφ_i = ε_iφ_i equations. This iterative process is continued until the φ_i and ε_i do not vary significantly from one iteration to the next, at which time one says that the process has converged. This iterative procedure is referred to as the Hartree-Fock self-consistent field (SCF) procedure because iteration eventually leads to coulomb and exchange potential fields that no longer change from one iteration to the next.
• 18.6: The Roothaan Matrix SCF Process
The Roothaan SCF process is carried out in a fully ab initio manner in that all one- and two-electron integrals are computed in terms of the specified basis set; no experimental data or other input is employed. It is possible to introduce approximations to the coulomb and exchange integrals entering into the Fock matrix elements that permit many of the requisite elements to be evaluated in terms of experimental data or in terms of a small set of 'fundamental' orbital-level coulomb interaction integrals.
• 18.7: Observations on Orbitals and Orbital Energies
A so-called generalized Brillouin theorem (GBT) arises when one deals with energy optimization for a multiconfigurational variational trial wavefunction for which the orbitals and CI mixing coefficients are simultaneously optimized. This GBT causes certain Hamiltonian matrix elements to vanish, which, in turn, simplifies the treatment of electron correlation for such wavefunctions. This matter is treated in more detail later in this text.
18: Multiconfiguration Wavefunctions
The Energy Expression
The most straightforward way to introduce the concept of optimal molecular orbitals is to consider a trial wavefunction of the form which was introduced earlier in Chapter 9.2. Consider a multi-electron wavefunction of the multiconfigurational form
$\Psi = \sum _I^M C_I\Phi_I \nonumber$
where $\Phi_I$ is a space- and spin-adapted configuration state function (CSF) consisting of determinental wavefunctions of spin-orbitals ($\phi_i$):
$\Phi_I\ = \big| \phi_{I1}\phi_{I2}\phi_{I3} ... \phi_{IN} \big| . \nonumber$
The expectation value of the Hamiltonian with this wavefunction
$E = \langle \Psi | \hat{H} | \Psi \rangle \nonumber$
can be expanded as:
$E = \sum _{I=1}^M \sum _{J=1}^M C_IC_J \langle \Phi_I \big| \hat{H} \big| \Phi_J \rangle . \nonumber$
The spin- and space-symmetry of the $\Phi_I$ CSFs determine the symmetry of the state $\Psi$ whose energy is to be optimized. In this form, it is clear that $E$ is a quadratic function of the CI amplitudes $C_J$; it is a quartic functional of the spin-orbitals because the Slater-Condon rules express each $\langle \Phi_I \big| H \big| \Phi_J \rangle$ CI matrix element in terms of one- and two-electron integrals $\langle \phi_i \big| f \big| \phi_j \rangle \text{ and } \langle \phi_i\phi_j \big| g \big| \phi_k\phi_l \rangle$ over these spin-orbitals.
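The quadratic dependence of $E$ on the CI amplitudes can be illustrated numerically. In the toy sketch below the 2×2 CI matrix is invented; minimizing $E = \sum_{I,J} C_I C_J H_{I,J}$ over normalized amplitudes recovers the lowest eigenvalue of the CI matrix, as the variational principle requires:

```python
import numpy as np

# model <Phi_I|H|Phi_J> CI matrix (values invented, hartrees)
H = np.array([[-1.00, 0.15],
              [ 0.15, -0.30]])

def energy(C):
    # E = sum_IJ C_I C_J H_IJ for normalized amplitudes
    C = C / np.linalg.norm(C)
    return C @ H @ C

# scan all normalized amplitude vectors C = (cos t, sin t)
thetas = np.linspace(0.0, np.pi, 2001)
E_scan = [energy(np.array([np.cos(t), np.sin(t)])) for t in thetas]
E_min = min(E_scan)

E_exact = np.linalg.eigvalsh(H)[0]   # lowest eigenvalue of H
print(E_min, E_exact)                # the two agree
```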
The Fock and Secular Equations
The variational method can be used to optimize the above expectation value expression for the electronic energy (i.e., to make the functional stationary) as a function of the CI coefficients $C_J$ and the LCAO-MO coefficients {$C_{\nu,i}$} that characterize the spin-orbitals. However, in doing so the set of {$C_{\nu,i}$} can not be treated as entirely independent variables. The fact that the spin-orbitals {$\phi_i$} are assumed to be orthonormal imposes a set of constraints on the {$C_{\nu,i}$}:
$\langle \phi_i \big| \phi_j \rangle = \delta_{i,j} = \sum\limits_{\mu ,\nu}C^{\text{*}}_{\mu ,i} \langle \chi_\mu \big| \chi_\nu \rangle C_{\nu ,j} \nonumber$
These constraints can be enforced within the variational optimization of the energy function mentioned above by introducing a set of Lagrange multipliers {$\epsilon_{i,j}$} , one for each constraint condition, and subsequently differentiating
$E - \sum\limits_{i,j} \epsilon_{i,j}\left[ \delta_{i,j} - \sum\limits_{\mu ,\nu}C^{\text{*}}_{\mu ,i} \langle \chi_{\mu} \big| \chi_{\nu} \rangle C_{\nu ,j} \right] \nonumber$
with respect to each of the $C_{\nu ,i}$ variables.
Upon doing so, the following set of equations is obtained (early references to the derivation of such equations include A. C. Wahl, J. Chem. Phys. 41, 2600 (1964); F. Grein and T. C. Chang, Chem. Phys. Lett. 12, 44 (1971); and R. Shepard, p. 63, in Adv. in Chem. Phys. LXIX, K. P. Lawley, Ed., Wiley-Interscience, New York (1987); the subject is also treated in the textbook Second Quantization Based Methods in Quantum Chemistry, P. Jørgensen and J. Simons, Academic Press, New York (1981)):
$\sum\limits_{J=1}^M H_{I,J} C_J = E C_I \label{Sec}$
with $I = 1, 2, ... M$ and
$F \phi_i = \sum\limits_j \epsilon_{i, j}\phi_j , \label{Fock}$
where the $\epsilon_{i,j}$ are Lagrange multipliers.
The set of equations in Equation $\ref{Sec}$ that govern the {$C_J$} amplitudes are called the CI-secular equations. The set of equations in Equation $\ref{Fock}$ that determine the LCAO-MO coefficients of the spin-orbitals {$\phi_j$} are called the Fock equations. The Fock operator F is given in terms of the one- and two-electron operators in H itself as well as the so-called one- and two-electron density matrices $\gamma_{i,j} \text{ and } \Gamma_{i,j,k,l}$ which are defined below. These density matrices reflect the averaged occupancies of the various spin orbitals in the CSFs of $\Psi$. The resultant expression for $F$ is:
$F \phi_i = \sum\limits_j \gamma_{i,j}h \phi_j + \sum\limits_{j, k, l}\Gamma_{i, j, k, l}J_{j, l}\phi_k , \nonumber$
where h is the one-electron component of the Hamiltonian (i.e., the kinetic energy operator and the sum of coulombic attractions to the nuclei). The operator $J_{j,l}$ is defined by:
$J_{j,l}\phi_k(r) = \int \phi^{\text{*}}_j(r') \phi_l(r')\frac{1}{\big| r-r' \big|}d\tau ' \phi_k(r), \nonumber$
where the integration denoted $d\tau '$ is over the spatial and spin coordinates. The so-called spin integration simply means that the $\alpha \text{ or } \beta$ spin function associated with $\phi_l$ must be the same as the $\alpha \text{ or } \beta$ spin function associated with $\phi_j$ or the integral will vanish. This is a consequence of the orthonormality conditions $\langle \alpha \big| \alpha \rangle = \langle \beta \big| \beta \rangle = 1, \langle \alpha \big| \beta \rangle = \langle \beta \big| \alpha \rangle = 0.$
One- and Two- Electron Density Matrices
The density matrices introduced above can most straightforwardly be expressed in terms of the CI amplitudes and the nature of the orbital occupancies in the CSFs of $\Psi$ as follows:
1. $\gamma_{i,i}$ is the sum over all CSFs, in which $\phi_i$ is occupied, of the square of the CI coefficient of that CSF:
$\gamma_{i,i} = \sum\limits_I \text{ (with }\phi_i \text{ occupied) }C^2_I. \nonumber$
2. $\gamma_{i,j}$ is the sum over pairs of CSFs which differ by a single spin-orbital occupancy (i.e., one having $\phi_i$ occupied where the other has $\phi_j$ occupied) after the two are placed into maximal coincidence; the sign factor (sign) arising from bringing the two into maximal coincidence is attached to the final density matrix element:
$\gamma_{i,j} = \sum\limits_{I,J} \text{( sign )( with }\phi_i\text{ occupied in I where }\phi_j\text{ is in J) C}_I\text{ C}_J. \nonumber$
The two-electron density matrix elements are given in similar fashion:
3. $\Gamma_{i, j, i, j} = \sum\limits_I \text{( with both }\phi_i\text{ and }\phi_j\text{ occupied)}C_IC_I$
4. $\Gamma_{i, j, j, i} = -\sum\limits_I\text{( with both }\phi_i\text{ and }\phi_j\text{ occupied)}C_IC_I = -\Gamma_{i, j, i, j}$ (it can be shown, in general, that $\Gamma_{i,j,k,l}$ is odd under exchange of i and j, odd under exchange of k and l, and even under (i,j)$\leftrightarrow$(k,l) exchange; this implies that $\Gamma_{i,j,k,l}$ vanishes if i = j or k = l);
5. $\Gamma_{i, j, k, j} = \sum\limits_{I,J} \text{( sign )( with }\phi_j\text{ in both I and J and }\phi_i\text{ in I where }\phi_k\text{ is in J)}C_IC_J$
$= \Gamma_{j,i,j,k} = -\Gamma_{i,j,j,k} = -\Gamma_{j,i,k,j}; \nonumber$
6. $\Gamma_{i,j,k,l} = \sum\limits_{I,J}\text{( sign )( with }\phi_i\text{ in I where }\phi_k\text{ is in J and }\phi_j\text{ in I where }\phi_l\text{ is in J) C}_I\text{C}_J$
$= \Gamma_{j,i,l,k} = -\Gamma_{j,i,k,l} = -\Gamma_{i,j,l,k}. \nonumber$
These density matrices are themselves quadratic functions of the CI coefficients and they reflect all of the permutational symmetry of the determinental functions used in constructing $\Psi$; they are a compact representation of all of the Slater-Condon rules as applied to the particular CSFs which appear in $\Psi$. They contain all information about the spin-orbital occupancy of the CSFs in $\Psi$. The one- and two- electron integrals $\langle \phi_i \big| f \big| \phi_j \rangle \text{ and } \langle \phi_i\phi_j \big| g \big| \phi_k\phi_l \rangle$ contain all of the information about the magnitudes of the kinetic and Coulombic interaction energies.
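Rule 1 and rule 2 above can be made concrete with a small numerical sketch. The two-determinant wavefunction below, $\Psi = C_1|\phi_1\phi_2| + C_2|\phi_1\phi_3|$, and its coefficients are invented for illustration; the determinants are already in maximal coincidence, so the sign factor is +1. The trace of the resulting one-electron density matrix equals the number of electrons:

```python
import math

C1, C2 = math.sqrt(0.9), math.sqrt(0.1)   # normalized: C1^2 + C2^2 = 1

# spin-orbital occupancies of the two determinants (toy example)
occ = [(1, 2), (1, 3)]
C = [C1, C2]

n_orb = 3
gamma = [[0.0] * n_orb for _ in range(n_orb)]

# rule 1: gamma_ii = sum over CSFs occupying phi_i of C_I^2
for I, orbs in enumerate(occ):
    for i in orbs:
        gamma[i - 1][i - 1] += C[I] ** 2

# rule 2: the CSF pair differs by phi_2 -> phi_3, sign = +1
gamma[1][2] = gamma[2][1] = C1 * C2

trace = sum(gamma[i][i] for i in range(n_orb))
print(trace)   # -> 2.0, the number of electrons
```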
The simplest trial function of the form given above is the single Slater determinant function:
$| \Psi \rangle = \big| \phi_1\phi_2\phi_3 ... \phi_N \big|. \nonumber$
For such a function, the CI part of the energy minimization is absent (the classic papers in which the SCF equations for closed- and open-shell systems are treated are C. C. J. Roothaan, Rev. Mod. Phys. 23 , 69 (1951); 32 , 179 (1960)) and the density matrices simplify greatly because only one spin-orbital occupancy is operative. In this case, the orbital optimization conditions reduce to:
$\hat{F} \phi_i = \sum\limits_j \epsilon_{i,j} \phi_j , \nonumber$
where the so-called Fock operator $\hat{F}$ is given by
$\hat{F} \phi_i = h \phi_i + \sum \limits_{j(occupied)}\left[ \hat{J}_j - \hat{K}_j \right]\phi_i . \nonumber$
The coulomb ($\hat{J}_j$) and exchange ($\hat{K}_j$) operators are defined by the relations:
$\hat{J}_j \phi_i = \int\phi^{\text{*}}_j(r')\phi_j(r') \dfrac{1}{\big| r-r' \big|}d\tau ' \phi_i(r) \nonumber$
and
$\hat{K}_j \phi_i = \int\phi^{\text{*}}_j(r')\phi_i(r') \dfrac{1}{\big| r-r' \big|}d\tau ' \phi_j(r) . \nonumber$
Again, the integration implies integration over the spin variables associated with the $\phi_j$ (and, for the exchange operator, $\phi_i$ ), as a result of which the exchange integral vanishes unless the spin function of $\phi_j$ is the same as that of $\phi_i$ ; the coulomb integral is non-vanishing no matter what the spin functions of $\phi_j \text{ and } \phi_i$.
The sum over coulomb and exchange interactions in the Fock operator runs only over those spin-orbitals that are occupied in the trial $\Psi$. Because a unitary transformation among the orbitals that appear in $| \Psi \rangle$ leaves the determinant unchanged (this is a property of determinants- det (UA) = det (U) det (A) = 1 det (A), if U is a unitary matrix), it is possible to choose such a unitary transformation to make the $\epsilon_{i,j}$ matrix diagonal. Upon so doing, one is left with the so-called canonical Hartree-Fock equations:
$\hat{F} \phi_i = \epsilon_i\phi_i , \nonumber$
where $\epsilon_i$ is the diagonal value of the $\epsilon_{i,j}$ matrix after the unitary transformation has been applied; that is, $\epsilon_i$ is an eigenvalue of the $\epsilon_{i,j}$ matrix. These equations are of the eigenvalue-eigenfunction form with the Fock operator playing the role of an effective one-electron Hamiltonian and the $\phi_i$ playing the role of the one-electron eigenfunctions.
It should be noted that the Hartree-Fock equations $\hat{F} \phi_i = \epsilon_i \phi_i$ possess solutions for the spin-orbitals which appear in $\Psi$ (the so-called occupied spin-orbitals) as well as for orbitals which are not occupied in $\Psi$ (the so-called virtual spin-orbitals). In fact, the F operator is hermitian, so it possesses a complete set of orthonormal eigenfunctions; only those which appear in $\Psi$ appear in the coulomb and exchange potentials of the Fock operator. The physical meaning of the occupied and virtual orbitals will be clarified later in this Chapter (Section VII.A).
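The logic of solving $\hat{F}\phi_i = \epsilon_i\phi_i$ self-consistently can be sketched with a minimal closed-shell SCF loop. The two-orbital model below is a sketch only: the basis is assumed orthonormal and the one- and two-electron integrals are invented, not those of any real molecule. Because the Fock operator depends on the occupied orbitals through its coulomb and exchange terms, the equations are solved by repeated diagonalization until the density no longer changes:

```python
import numpy as np

h = np.array([[-1.5, -0.4],
              [-0.4, -0.9]])          # one-electron integrals (invented)

# two-electron integrals (pq|rs), invented but permutationally symmetric
g = np.zeros((2, 2, 2, 2))
g[0,0,0,0] = 0.70; g[1,1,1,1] = 0.65
g[0,0,1,1] = g[1,1,0,0] = 0.45
g[0,1,0,1] = g[1,0,1,0] = g[0,1,1,0] = g[1,0,0,1] = 0.15

n_occ = 1                             # 2 electrons, closed shell
D = np.diag([1.0, 0.0])               # initial density-matrix guess
for it in range(200):
    J = np.einsum('pqrs,rs->pq', g, D)   # coulomb term
    K = np.einsum('prqs,rs->pq', g, D)   # exchange term
    F = h + 2 * J - K                    # closed-shell Fock matrix
    eps, C = np.linalg.eigh(F)           # F phi_i = eps_i phi_i
    D_new = C[:, :n_occ] @ C[:, :n_occ].T
    if np.max(np.abs(D_new - D)) < 1e-10:
        break                            # fields are self-consistent
    D = D_new

E_elec = np.sum(D * (h + F))             # closed-shell electronic energy
print(it, E_elec)
```

At convergence the density matrix is a projector onto the occupied orbital (D² = D), which is the matrix statement of self-consistency for this toy model.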
18.03: The Unrestricted Hartree-Fock Spin Impurity Problem
As formulated above in terms of spin-orbitals, the Hartree-Fock (HF) equations yield orbitals that do not guarantee that $\Psi$ possesses proper spin symmetry. To illustrate the point, consider the form of the equations for an open-shell system such as the Lithium atom Li. If $1s\alpha$, $1s\beta \text{, and } 2s\alpha$ spin-orbitals are chosen to appear in the trial function $\Psi$, then the Fock operator will contain the following terms:
$F = h + J_{1s\alpha} + J_{1s\beta} + J_{2s\alpha} - \left[ K_{1s\alpha} + K_{1s\beta} + K_{2s\alpha} \right]. \nonumber$
Acting on an $\alpha$ spin-orbital $\phi_{k\alpha}$ with $F$ and carrying out the spin integrations, one obtains
$F\phi_{k\alpha} = h \phi_{k\alpha} + (2J_{1s} + J_{2s})\phi_{k\alpha} - (K_{1s} + K_{2s})\phi_{k\alpha}. \nonumber$
In contrast, when acting on a $\beta$ spin-orbital, one obtains
$F\phi_{k\beta} = h\phi_{k\beta} + (2J_{1s} + J_{2s})\phi_{k\beta} - (K_{1s})\phi_{k\beta}. \nonumber$
Spin-orbitals of $\alpha \text{ and } \beta$ type do not experience the same exchange potential in this model, which is clearly due to the fact that $\Psi$ contains two $\alpha$ spin-orbitals and only one $\beta$ spin-orbital.
One consequence of the spin-polarized nature of the effective potential in F is that the optimal $1s\alpha \text{ and } 1s\beta$ spin-orbitals, which are themselves solutions of $F \phi_i = \epsilon_i \phi_i$, do not have identical orbital energies (i.e., $\epsilon_{1s\alpha} \ne \epsilon_{1s\beta}$ ) and are not spatially identical to one another ( i.e., $\phi_{1s\alpha} \text{ and } \phi_{1s\beta}$ do not have identical LCAO-MO expansion coefficients). This resultant spin polarization of the orbitals in $\Psi \text{ gives rise to spin impurities in } \Psi$. That is, the determinant $\big| 1s\alpha 1s'\beta 2s\alpha \big|$ is not a pure doublet spin eigenfunction although it is an $S_z$ eigenfunction with $M_s = \frac{1}{2}$; it contains both $S = \frac{1}{2} \text{ and S = } \frac{3}{2}$ components. If the $1s\alpha \text{ and } 1s'\beta$ spin-orbitals were spatially identical, then $\big| 1s\alpha \text{ }1s'\beta \text{ } 2s\alpha \big|$ would be a pure spin eigenfunction with $S = \frac{1}{2}$.
The above single-determinant wavefunction is commonly referred to as being of the unrestricted Hartree-Fock (UHF) type because no restrictions are placed on the spatial nature of the orbitals which appear in $\Psi$. In general, UHF wavefunctions are not of pure spin symmetry for any open-shell system. Such a UHF treatment forms the starting point of early versions of the widely used and highly successful Gaussian 70 through Gaussian-8X series of electronic structure computer codes which derive from J. A. Pople and coworkers (see, for example, M. J. Frisch, J. S. Binkley, H. B. Schlegel, K. Raghavachari, C. F. Melius, R. L. Martin, J. J. P. Stewart, F. W. Bobrowicz, C. M. Rohling, L. R. Kahn, D. J. Defrees, R. Seeger, R. A. Whitehead, D. J. Fox, E. M. Fleuder, and J. A. Pople, Gaussian 86, Carnegie-Mellon Quantum Chemistry Publishing Unit, Pittsburgh, PA (1984)).
The inherent spin-impurity problem is sometimes 'fixed' by using the orbitals which are obtained in the UHF calculation to subsequently form a properly spin-adapted wavefunction. For the above Li atom example, this amounts to forming a new wavefunction (after the orbitals are obtained via the UHF process) using the techniques detailed in Section 3 and Appendix G:
$\Psi = \dfrac{1}{\sqrt{2}}\left[ \big| 1s\alpha \text{ } 1s'\beta \text{ } 2s\alpha \big| \text{ - } \big| 1s\beta \text{ } 1s'\alpha \text{ } 2s\alpha \big| \right]. \nonumber$
This wavefunction is a pure $S = \frac{1}{2}$ state. This prescription for avoiding spin contamination (i.e., carrying out the UHF calculation and then forming a new spin-pure $\Psi$) is referred to as spin-projection.
It is, of course, possible to first form the above spin-pure $\Psi$ as a trial wavefunction and to then determine the orbitals 1s, 1s', and 2s which minimize its energy; in so doing, one is dealing with a spin-pure function from the start. The problem with carrying out this process, which is referred to as a spin-adapted Hartree-Fock calculation, is that the resultant 1s and 1s' orbitals still do not have identical spatial attributes. Having a set of orbitals (1s, 1s', 2s, and the virtual orbitals) that form a non-orthogonal set (1s and 1s' are neither identical nor orthogonal) makes it difficult to progress beyond the single-configuration wavefunction, as one often wishes to do. That is, it is difficult to use a spin-adapted wavefunction as a starting point for a correlated-level treatment of electronic motions.
Before addressing head-on the problem of how to best treat orbital optimization for open-shell species, it is useful to examine how the HF equations are solved in practice in terms of the LCAO-MO process.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/18%3A_Multiconfiguration_Wavefunctions/18.02%3A_The_Single-Determinant_Wavefunction.txt
|
The basis orbitals commonly used in the LCAO-MO-SCF process fall into two classes: Slater-type orbitals and Cartesian Gaussian-type orbitals.
Slater-type orbitals (STOs) are characterized by quantum numbers $n$, $l$, and $m$ and exponents (which characterize the 'size' of the basis function) $\zeta$:
$\chi_{n,l,m}(r,\theta ,\phi )= N_{n,l,m,\zeta} Y_{l,m}(\theta ,\phi )r^{n-1}e^{-\zeta r}, \nonumber$
where $N_{n,l,m,\zeta}$ denotes the normalization constant.
Cartesian Gaussian-type orbitals (GTOs) are characterized by quantum numbers $a$, $b$, and $c$, which detail the angular shape and direction of the orbital, and exponents $\alpha$, which govern the radial 'size' of the basis function akin to $\zeta$ in STOs:
$\chi_{a,b,c}(r, \theta ,\phi )= N'_{a,b,c,\alpha}x^ay^bz^ce^{-\alpha r^2}. \nonumber$
For example, orbitals with $a$, $b$, and $c$ values of 1,0,0 or 0,1,0 or 0,0,1 are $p_x$, $p_y$, and $p_z$ orbitals; those with $a$, $b$, and $c$ values of 2, 0, 0 or 0, 2, 0 or 0, 0, 2 and 1, 1, 0 or 0, 1, 1 or 1, 0, 1 span the space of five d orbitals and one s orbital (the sum of the 2, 0, 0 and 0, 2, 0 and 0, 0, 2 orbitals is an s orbital because $x^2 + y^2 + z^2 = r^2$ is independent of $\theta \text{ and } \phi$).
For both types of orbitals, the coordinates $r$, $\theta$, and $\phi$ refer to the position of the electron relative to a set of axes attached to the center on which the basis orbital is located.
The Utility of Gaussians
Although STOs are preferred on fundamental grounds (e.g., as demonstrated in Appendices A and B, the hydrogen atom orbitals are of this form and the exact solution of the many-electron Schrödinger equation can be shown to be of this form (in each of its coordinates) near the nuclear centers), STOs are used primarily for atomic and linear-molecule calculations because the multi-center integrals $\langle \chi_a \chi_b \big| g \big| \chi_e \chi_d \rangle$ (each basis orbital can be on a separate atomic center) which arise in polyatomic-molecule calculations cannot efficiently be performed when STOs are employed. In contrast, such integrals can routinely be done when GTOs are used. This fundamental advantage of GTOs has led to the dominance of these functions in molecular quantum chemistry.
To understand why integrals over GTOs can be carried out when analogous STO based integrals are much more difficult, one must only consider the orbital products ( $\chi_a \chi_c (r_1) \text{ and } \chi_b \chi_d (r_2)$ ) which arise in such integrals. For orbitals of the GTO form, such products involve $e^{-\alpha_a (\textbf{r}-\textbf{R}_a)^2}e^{-\alpha_c( \textbf{r}-\textbf{R}_c )^2}$. By completing the square in the exponent, this product can be rewritten as follows:
$e^{-\alpha_a (\textbf{r}-\textbf{R}_a)^2} e^{-\alpha_c (\textbf{r}-\textbf{R}_c)^2} = e^{-(\alpha_a + \alpha_c )(\textbf{r}-\textbf{R'})^2}e^{-\alpha ' (\textbf{R}_a-\textbf{R}_c)^2}, \nonumber$
where
$\textbf{R}' = \dfrac{\alpha_a\textbf{R}_a + \alpha_c\textbf{R}_c}{\alpha_a + \alpha_c} \nonumber$
and
$\alpha ' = \dfrac{\alpha_a \alpha_c}{\alpha_a + \alpha_c}. \nonumber$
Thus, the product of two GTOs on different centers is equal to a single other GTO at a center $\textbf{R}'$ between the two original centers. As a result, even a four-center two-electron integral over GTOs can be written as, at most, a two-center two-electron integral; it turns out that this reduction in centers is enough to allow all such integrals to be carried out. A similar reduction does not arise for STOs because the product of two STOs can not be rewritten as a new STO at a new center.
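The Gaussian product rule above can be verified numerically. The one-dimensional sketch below (exponents and centers chosen arbitrarily) checks the identity at several random points:

```python
import math, random

random.seed(1)
aa, ac = 1.3, 0.7     # exponents alpha_a, alpha_c (arbitrary)
Ra, Rc = -0.4, 1.1    # centers (arbitrary)

Rp = (aa * Ra + ac * Rc) / (aa + ac)   # combined center R'
ap = aa * ac / (aa + ac)               # prefactor exponent alpha'

for _ in range(5):
    r = random.uniform(-3.0, 3.0)
    lhs = math.exp(-aa * (r - Ra)**2) * math.exp(-ac * (r - Rc)**2)
    rhs = math.exp(-(aa + ac) * (r - Rp)**2) * math.exp(-ap * (Ra - Rc)**2)
    assert abs(lhs - rhs) < 1e-12      # the two products agree
print("product rule verified")
```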
To overcome the primary weakness of GTO functions (i.e., their radial derivatives vanish at the nucleus whereas the derivatives of STOs are non-zero there), it is common to combine two, three, or more GTOs, with combination coefficients which are fixed and not treated as LCAO-MO parameters, into new functions called contracted GTOs or CGTOs. Typically, a series of tight, medium, and loose GTOs (i.e., GTOs with large, medium, and small $\alpha$ values, respectively) are multiplied by so-called contraction coefficients and summed to produce a CGTO that mimics the proper 'cusp' (i.e., non-zero slope) at the nuclear center, although no such combination can reproduce it exactly because each GTO has zero slope at the nucleus.
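The cusp argument can be checked numerically: an STO $e^{-\zeta r}$ has radial slope $-\zeta$ at $r = 0$, while any contraction of GTOs has zero slope there. The contraction coefficients and exponents below are invented purely for illustration:

```python
import math

def sto(r, zeta=1.0):
    # Slater-type radial function e^{-zeta r}
    return math.exp(-zeta * r)

def cgto(r, params=((0.5, 10.0), (0.4, 1.5), (0.2, 0.3))):
    # contracted GTO: fixed (coefficient, exponent) primitives summed
    return sum(c * math.exp(-a * r * r) for c, a in params)

dr = 1e-6   # forward-difference step for the slope at r = 0
slope_sto  = (sto(dr)  - sto(0.0))  / dr   # approx -zeta: the cusp
slope_cgto = (cgto(dr) - cgto(0.0)) / dr   # approx 0: no cusp
print(slope_sto, slope_cgto)
```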
Basis Set Libraries
Much effort has been devoted to developing sets of STO or GTO basis orbitals for main-group elements and the lighter transition metals. This ongoing effort is aimed at providing standard basis set libraries which:
1. Yield reasonable chemical accuracy in the resultant wavefunctions and energies.
2. Are cost effective in that their use in practical calculations is feasible.
3. Are relatively transferable in the sense that the basis for a given atom is flexible enough to be used for that atom in a variety of bonding environments (where the atom's hybridization and local polarity may vary).
The Fundamental Core and Valence Basis
In constructing an atomic orbital basis to use in a particular calculation, one must choose from among several classes of functions. First, the size and nature of the primary core and valence basis must be specified. Within this category, the following choices are common:
1. A minimal basis in which the number of STO or CGTO orbitals is equal to the number of core and valence atomic orbitals in the atom.
2. A double-zeta (DZ) basis in which twice as many STOs or CGTOs are used as there are core and valence atomic orbitals. The use of more basis functions is motivated by a desire to provide additional variational flexibility to the LCAO-MO process. This flexibility allows the LCAO-MO process to generate molecular orbitals of variable diffuseness as the local electronegativity of the atom varies. Typically, double-zeta bases include pairs of functions with one member of each pair having a smaller exponent ($\zeta \text{ or } \alpha$ value) than in the minimal basis and the other member having a larger exponent.
3. A triple-zeta (TZ) basis in which three times as many STOs or CGTOs are used as the number of core and valence atomic orbitals.
4. Dunning has developed CGTO bases which range from approximately DZ to substantially beyond TZ quality (T. H. Dunning, J. Chem. Phys. 53 , 2823 (1970); T. H. Dunning and P. J. Hay in Methods of Electronic Structure Theory, H. F. Schaefer, III Ed., Plenum Press, New York (1977))). These bases involve contractions of primitive GTO bases which Huzinaga had earlier optimized (S. Huzinaga, J. Chem. Phys. 42 , 1293 (1965)) for use as uncontracted functions (i.e., for which Huzinaga varied the $\alpha$ values to minimize the energies of several electronic states of the corresponding atom). These Dunning bases are commonly denoted, for example, as follows for first-row atoms: (10s,6p/5s,4p), which means that 10 s-type primitive GTOs have been contracted to produce 5 separate s-type CGTOs and that 6 primitive p-type GTOs were contracted to generate 4 separate p-type CGTOs. More recent basis sets from the Dunning group are given in T. Dunning, J. Chem. Phys. 90 , 1007 (1990).
5. Even-tempered basis sets (M. W. Schmidt and K. Ruedenberg, J. Chem. Phys. 71, 3951 (1979)) consist of GTOs in which the orbital exponents $\alpha_k$ belonging to a series of orbitals form geometric progressions: $\alpha_k = \alpha\beta^k$, where $\alpha$ and $\beta$ characterize the particular set of GTOs.
6. STO-3G bases were employed some years ago (W. J. Hehre, R. F. Stewart, and J. A. Pople, J. Chem. Phys. 51, 2657 (1969)) but are less popular today. These bases are constructed by least-squares fitting GTOs to STOs which have been optimized for various electronic states of the atom. When three GTOs are employed to fit each STO, an STO-3G basis is formed.
7. 4-31G, 5-31G, and 6-31G bases (R. Ditchfield, W. J. Hehre, and J. A. Pople, J. Chem. Phys. 54 , 724 (1971); W. J. Hehre, R. Ditchfield, and J. A. Pople, J. Chem. Phys. 56 , 2257 (1972); P. C. Hariharan and J. A. Pople, Theoret. Chim. Acta. (Berl.) 28 , 213 (1973); R. Krishnan, J. S. Binkley, R. Seeger, and J. A. Pople, J. Chem. Phys. 72 , 650 (1980)) employ a single CGTO of contraction length 4, 5, or 6 to describe the core orbital. The valence space is described at the DZ level with the first CGTO constructed from 3 primitive GTOs and the second CGTO built from a single primitive GTO.
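The even-tempered progression of item 5 is trivial to generate; the $\alpha$ and $\beta$ values below are invented for illustration, and the check confirms that consecutive exponents share the same ratio $\beta$:

```python
# even-tempered exponents: alpha_k = alpha * beta**k (geometric progression)
alpha, beta = 0.02, 3.5       # illustrative values, not a published set
exponents = [alpha * beta**k for k in range(1, 7)]
ratios = [exponents[k + 1] / exponents[k] for k in range(5)]
print(exponents)
print(ratios)                 # every ratio equals beta
```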
The values of the orbital exponents ($\zeta s \text{ or } \alpha s$) and the GTO-to-CGTO contraction coefficients needed to implement a particular basis of the kind described above have been tabulated in several journal articles and in computer data bases (in particular, in the data base contained in the book Handbook of Gaussian Basis Sets: A. Compendium for Ab initio Molecular Orbital Calculations , R. Poirer, R. Kari, and I. G. Csizmadia, Elsevier Science Publishing Co., Inc., New York, New York (1985)).
Several other sources of basis sets for particular atoms are listed in the Table shown below (here JCP and JACS are abbreviations for the Journal of Chemical Physics and the Journal of The American Chemical Society, respectively).
Literature Reference | Basis Type | Atoms

Hehre, W.J.; Stewart, R.F.; Pople, J.A. JCP 51, 2657 (1969); Hehre, W.J.; Ditchfield, R.; Stewart, R.F.; Pople, J.A. JCP 52, 2769 (1970). | STO-3G | H-Ar

Binkley, J.S.; Pople, J.A.; Hehre, W.J. JACS 102, 939 (1980). | 3-21G | H-Ne

Gordon, M.S.; Binkley, J.S.; Pople, J.A.; Pietro, W.J.; Hehre, W.J. JACS 104, 2797 (1982). | 3-21G | Na-Ar

Dobbs, K.D.; Hehre, W.J. J. Comput. Chem. 7, 359 (1986). | 3-21G | K, Ca, Ga

Dobbs, K.D.; Hehre, W.J. J. Comput. Chem. 8, 880 (1987). | 3-21G | Sc-Zn

Ditchfield, R.; Hehre, W.J.; Pople, J.A. JCP 54, 724 (1971). | 6-31G | H

Dill, J.D.; Pople, J.A. JCP 62, 2921 (1975). | 6-31G | Li, B

Binkley, J.S.; Pople, J.A. JCP 66, 879 (1977). | 6-31G | Be

Hehre, W.J.; Ditchfield, R.; Pople, J.A. JCP 56, 2257 (1972). | 6-31G | C-F

Francl, M.M.; Pietro, W.J.; Hehre, W.J.; Binkley, J.S.; Gordon, M.S.; DeFrees, D.J.; Pople, J.A. JCP 77, 3654 (1982). | 6-31G | Na-Ar

Dunning, T. JCP 53, 2823 (1970). | (4s/2s), (4s/3s) | H
| (9s5p/3s2p), (9s5p/4s2p), (9s5p/5s3p) | B-F

Dunning, T. JCP 55, 716 (1971). | (5s/3s) | H
| (10s/4s) | Li
| (10s/5s) | Be
| (10s6p/5s3p), (10s6p/5s4p) | B-Ne

Krishnan, R.; Binkley, J.S.; Seeger, R.; Pople, J.A. JCP 72, 650 (1980). | 6-311G | H-Ne

Dunning, unpublished VDZ. | (4s/2s) | H
| (9s5p/3s2p) | Li, Be, C-Ne
| (12s8p/4s3p) | Na-Ar

Dunning, unpublished VTZ. | (5s/3s), (6s/3s) | H
| (12s6p/4s3p) | Li, Be, C-Ne
| (17s10p/5s4p) | Mg-Ar

Dunning, unpublished VQZ. | (7s/4s), (8s/4s) | H
| (16s7p/5s4p) | B-Ne

Dunning, T. JCP 90, 1007 (1989) (pVDZ, pVTZ, pVQZ correlation-consistent). | (4s1p/2s1p), (5s2p1d/3s2p1d), (6s3p1d1f/4s3p2d1f) | H
| (9s4p1d/3s2p1d), (10s5p2d1f/4s3p2d1f), (12s6p3d2f1g/5s4p3d2f1g) | B-Ne

Huzinaga, S.; Klobukowski, M.; Tatewaki, H. Can. J. Chem. 63, 1812 (1985). | (14s/2s) | Li, Be
| (14s9p/2s1p) | B-Ne
| (16s9p/3s1p) | Na-Mg
| (16s11p/3s2p) | Al-Ar

Huzinaga, S.; Klobukowski, M. THEOCHEM. 44, 1 (1988). | (14s10p/2s1p) | B-Ne
| (17s10p/3s1p) | Na-Mg
| (17s13p/3s2p) | Al-Ar
| (20s13p/4s2p) | K-Ca
| (20s13p10d/4s2p1d) | Sc-Zn
| (20s14p9d/4s3p1d) | Ga

McLean, A.D.; Chandler, G.S. JCP 72, 5639 (1980). | (12s8p/4s2p), (12s8p/5s2p), (12s8p/6s4p), (12s9p/6s4p), (12s9p/6s5p) | Na-Ar, P$^-$, S$^-$, Cl$^-$

Dunning, T.H. Jr.; Hay, P.J. Chapter 1 in 'Methods of Electronic Structure Theory', Schaefer, H.F. III, Ed., Plenum Press, N.Y. (1977). | (11s7p/6s4p) | Al-Cl

Hood, D.M.; Pitzer, R.M.; Schaefer, H.F. III JCP 71, 705 (1979). | (14s11p6d/10s8p3d) | Sc-Zn

Schmidt, M.W.; Ruedenberg, K. JCP 71, 3951 (1979) (regular even-tempered). | ([N]s), N=3-10 | H
| ([2N]s), N=3-10 | He
| ([2N]s), N=3-14 | Li, Be
| ([2N]s[N]p), N=3-11 | B, N-Ne
| ([2N]s[N]p), N=3-13 | C
| ([2N]s[N]p), N=4-12 | Na, Mg
| ([2N-6]s[N]p), N=7-15 | Al-Ar
Polarization Functions
In addition to the fundamental core and valence basis described above, one usually adds a set of so-called polarization functions to the basis. Polarization functions are functions of one higher angular momentum than appears in the atom's valence orbital space (e.g., d-functions for C, N, and O and p-functions for H). These polarization functions have exponents ($\zeta \text{ or } \alpha$) which cause their radial sizes to be similar to the sizes of the primary valence orbitals (i.e., the polarization p orbitals of the H atom are similar in size to the 1s orbital). Thus, they are not orbitals which provide a description of the atom's valence orbital with one higher l-value; such higher-l valence orbitals would be radially more diffuse and would therefore require the use of STOs or GTOs with smaller exponents.
The primary purpose of polarization functions is to give additional angular flexibility to the LCAO-MO process in forming the valence molecular orbitals. This is illustrated below where polarization d$_\pi$ orbitals are seen to contribute to formation of the bonding $\pi$ orbital of a carbonyl group by allowing polarization of the Carbon atom's p$_\pi$ orbital toward the right and of the Oxygen atom's p$_\pi$ orbital toward the left.
Polarization functions are essential in strained ring compounds because they provide the angular flexibility needed to direct the electron density into regions between bonded atoms.
Functions with higher l-values and with 'sizes' more in line with those of the lower-l orbitals are also used to introduce additional angular correlation into the calculation by permitting polarized orbital pairs (see Chapter 10) involving higher angular correlations to be formed. Optimal polarization functions for first and second row atoms have been tabulated (B. Roos and P. Siegbahn, Theoret. Chim. Acta (Berl.) 17 , 199 (1970); M. J. Frisch, J. A. Pople, and J. S. Binkley, J. Chem. Phys. 80, 3265 (1984)).
Diffuse Functions
When dealing with anions or Rydberg states, one must augment the above basis sets by adding so-called diffuse basis orbitals. The conventional valence and polarization functions described above do not provide enough radial flexibility to adequately describe either of these cases. Energy-optimized diffuse functions appropriate to anions of most lighter main group elements have been tabulated in the literature (an excellent source of Gaussian basis set information is provided in Handbook of Gaussian Basis Sets, R. Poirier, R. Kari, and I. G. Csizmadia, Elsevier, Amsterdam (1985)) and in data bases. Rydberg diffuse basis sets are usually created by adding, to conventional valence-plus-polarization bases, sequences of primitive GTOs whose exponents are smaller than that (call it $\alpha_{\text{diff}}$) of the most diffuse GTO which contributes strongly to the valence CGTOs. As a 'rule of thumb', one can generate a series of such diffuse orbitals which are linearly independent yet span considerably different regions of radial space by introducing primitive GTOs whose exponents are $\frac{1}{3}\alpha_{\text{diff}}, \frac{1}{9}\alpha_{\text{diff}}, \frac{1}{27}\alpha_{\text{diff}},$ etc.
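This rule of thumb is easy to automate. The sketch below is illustrative only; the starting exponent and the number of added shells are arbitrary choices, not values from any tabulated basis:

```python
def diffuse_exponents(alpha_diff, n=3, ratio=3.0):
    """Generate n diffuse-GTO exponents below alpha_diff.

    Each successive exponent is smaller by 'ratio' (1/3, 1/9, 1/27, ...
    of alpha_diff for the default ratio of 3), so the resulting GTOs
    remain linearly independent while spanning progressively larger
    regions of radial space.
    """
    return [alpha_diff / ratio**k for k in range(1, n + 1)]

# Example with a made-up most-diffuse valence exponent of 0.27
exps = diffuse_exponents(0.27, n=3)   # [0.09, 0.03, 0.01]
```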
Once one has specified an atomic orbital basis for each atom in the molecule, the LCAO-MO procedure can be used to determine the $C_{\nu ,i}$ coefficients that describe the occupied and virtual orbitals in terms of the chosen basis set. It is important to keep in mind that the basis orbitals are not themselves the true orbitals of the isolated atoms; even the proper atomic orbitals are combinations (with atomic values for the $C_{\nu ,i}$ coefficients) of the basis functions. For example, in a minimal-basis-level treatment of the Carbon atom, the 2s atomic orbital is formed by combining, with opposite sign to achieve the radial node, the two CGTOs (or STOs); the more diffuse s-type basis function will have a larger $C_{\nu ,i}$ coefficient in the 2s atomic orbital. The 1s atomic orbital is formed by combining the same two CGTOs but with the same sign and with the less diffuse basis function having a larger $C_{\nu ,i}$ coefficient. The LCAO-MO-SCF process itself determines the magnitudes and signs of the $C_{\nu ,i}$.
The HF equations $F \phi_i = \epsilon_i \phi_i$ comprise a set of integro-differential equations; their differential nature arises from the kinetic energy operator in h, while the coulomb and exchange operators give them their integral character. The solutions of these equations must be achieved iteratively because the $J_i \text{ and } K_i$ operators in F depend on the orbitals $\phi_i$ which are to be solved for. Typical iterative schemes begin with a 'guess' for those $\phi_i \text{ which appear in } \Psi$, which then allows F to be formed. Solutions to $F \phi_i = \epsilon_i \phi_i$ are then found, and those $\phi_i$ which possess the space and spin symmetry of the occupied orbitals of $\Psi$ and which have the proper energies and nodal character are used to generate a new F operator (i.e., new $J_i \text{ and } K_i$ operators). The new $\hat{F}$ operator then gives new $\phi_i \text{ and } \epsilon_i$ via solution of the new equations:
$\hat{F} \phi_i = \epsilon_i \phi_i. \label{EQ1}$
This iterative process is continued until the $\phi_i \text{ and } \epsilon_i$ do not vary significantly from one iteration to the next, at which time one says that the process has converged. This iterative procedure is referred to as the Hartree-Fock self-consistent field (SCF) procedure because iteration eventually leads to coulomb and exchange potential fields that are consistent from iteration to iteration.
In practice, solution of Equation $\ref{EQ1}$ as an integro-differential equation can be carried out only for atoms (C. Froese-Fischer, Comp. Phys. Commun. 1, 152 (1970)) and linear molecules (P. A. Christiansen and E. A. McCullough, J. Chem. Phys. 67 , 1877 (1977)) for which the angular parts of the $\phi_i$ can be exactly separated from the radial parts because of the axial- or full-rotation group symmetry (e.g., $\phi_i = Y_{l,m}R_{n,l}(r)$ for an atom and $\phi_i = e^{im\phi}R_{n,l,m}(r,\theta )$ for a linear molecule). In such special cases, $F \phi_i = \epsilon_i \phi_i$ gives rise to a set of coupled equations for the $R_{n,l}(r) \text{ or } R_{n,l,m}(r,\theta )$ which can and have been solved. However, for non-linear molecules, the HF equations have not yet been solved in such a manner because of the three-dimensional nature of the $\phi_i$ and of the potential terms in F.
In the most commonly employed procedures used to solve the HF equations for non-linear molecules, the $\phi_i$ are expanded in a basis of functions $\chi_m$ according to the LCAO-MO procedure:
$\phi_i = \sum\limits_\mu C_{\mu ,i}\chi_\mu . \nonumber$
Doing so then reduces F $\phi_i = \epsilon_i \phi_i$ to a matrix eigenvalue-type equation of the form:
$\sum\limits_\nu F_{\mu ,\nu}C_{\nu ,i} = \epsilon_i \sum\limits_\nu \textbf{S}_{\mu ,\nu}C_{\nu ,i}, \nonumber$
where $\textbf{S}_{\mu ,\nu} = \langle \chi_\mu \big| \chi_\nu \rangle$ is the overlap matrix among the atomic orbitals (aos) and
$F_{\mu ,\nu} = \langle \chi_\mu \big| h \big| \chi_\nu \rangle + \sum\limits_{\delta ,\kappa} \left[ \gamma_{\delta ,\kappa} \langle \chi_\mu \chi_\delta \big| g \big| \chi_\nu \chi_\kappa \rangle -\gamma_{\delta ,\kappa}^{ex} \langle \chi_\mu \chi_\delta \big| g \big| \chi_\kappa \chi_\nu \rangle \right] \nonumber$
is the matrix representation of the Fock operator in the ao basis. The coulomb and exchange-density matrix elements in the ao basis are:
$\gamma_{\delta ,\kappa} = \sum\limits_{i\text{(occupied)}}C_{\delta ,i}C_{\kappa ,i} \text{, and} \nonumber$
$\gamma_{\delta ,\kappa}^{ex} = \sum\limits_{i\text{(occ., and same spin)}}C_{\delta ,i}C_{\kappa ,i}, \nonumber$
where the sum in $\gamma_{\delta ,\kappa}^{ex}$ runs over those occupied spin-orbitals whose $m_s$ value is equal to that for which the Fock matrix is being formed (for a closed-shell species, $\gamma_{\delta ,\kappa}^{ex} = \frac{1}{2} \gamma_{\delta ,\kappa}$).
It should be noted that by moving to a matrix problem, one does not remove the need for an iterative solution; the $F_{\mu ,\nu}$ matrix elements depend on the $C_{\nu ,i}$ LCAO-MO coefficients which are, in turn, solutions of the so-called Roothaan matrix Hartree-Fock equations:
$\sum\limits_{\nu} F_{\mu ,\nu}C_{\nu ,i} = \epsilon_i \sum\limits_\nu \textbf{S}_{\mu ,\nu}C_{\nu ,i}. \label{Roothan}$
One should also note that, just as $F \phi_i = \epsilon_i \phi_i$ possesses a complete set of eigenfunctions, the matrix $F_{\mu ,\nu}$, whose dimension M is equal to the number of atomic basis orbitals used in the LCAO-MO expansion, has M eigenvalues $\epsilon_i$ and M eigenvectors whose elements are the $C_{\nu ,i}.$ Thus, there are occupied and virtual molecular orbitals (mos) each of which is described in the LCAO-MO form with $C_{\nu ,i}$ coefficients obtained via solution of Equations $\ref{Roothan}$.
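The generalized eigenproblem $\sum_\nu F_{\mu ,\nu}C_{\nu ,i} = \epsilon_i \sum_\nu S_{\mu ,\nu}C_{\nu ,i}$ can be reduced to an ordinary eigenproblem by symmetric (Löwdin) orthogonalization, $F' = S^{-1/2} F S^{-1/2}$. A minimal numpy sketch, using small made-up symmetric F and S matrices rather than real integrals:

```python
import numpy as np

def solve_roothaan(F, S):
    """Solve F C = S C eps via symmetric (Lowdin) orthogonalization."""
    s_vals, s_vecs = np.linalg.eigh(S)              # S = U s U^T
    S_inv_half = s_vecs @ np.diag(s_vals**-0.5) @ s_vecs.T
    Fp = S_inv_half @ F @ S_inv_half                # transformed Fock matrix
    eps, Cp = np.linalg.eigh(Fp)                    # ordinary eigenproblem
    C = S_inv_half @ Cp                             # back-transform; C^T S C = 1
    return eps, C

# Toy 2x2 example (made-up numbers, not real integrals)
F = np.array([[-1.0, -0.3], [-0.3, -0.6]])
S = np.array([[1.0, 0.2], [0.2, 1.0]])
eps, C = solve_roothaan(F, S)
```

The returned columns of C are S-orthonormal, which is exactly the $\sum_{\nu ,\mu}C_{\nu ,i}S_{\nu ,\mu}C_{\mu ,j} = \delta_{i,j}$ condition obeyed by real molecular orbitals.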
The matrix SCF equations introduced earlier
$\sum\limits_\nu F_{\mu ,\nu}C_{\nu ,i} = \epsilon_i \sum\limits_\nu \textbf{S}_{\mu ,\nu} C_{\nu ,i} \nonumber$
must be solved both for the occupied and virtual orbitals' energies $\epsilon_i \text{ and } C_{\nu ,i}$ values. Only the occupied orbitals' $C_{\nu ,i}$ coefficients enter into the Fock operator
$F_{\mu ,\nu} = \langle \chi_\mu \big| h \big| \chi_\nu \rangle + \sum\limits_{\delta ,\kappa} \left[ \gamma_{\delta ,\kappa} \langle \chi_\mu \chi_\delta \big| g \big| \chi_\nu \chi_\kappa \rangle - \gamma_{\delta ,\kappa}^{ex} \langle \chi_\mu \chi_\delta \big| g \big| \chi_\kappa \chi_\nu \rangle \right] , \nonumber$
but both the occupied and virtual orbitals are solutions of the SCF equations. Once atomic basis sets have been chosen for each atom, the one- and two-electron integrals appearing in $F_{\mu ,\nu}$ must be evaluated. Doing so is a time consuming process, but there are presently several highly efficient computer codes which allow such integrals to be computed for s, p, d, f, and even g, h, and i basis functions. After executing one of these 'integral packages' for a basis with a total of N functions, one has available (usually on the computer's hard disk) of the order of $\frac{\textbf{N}^2}{2} \text{ one-electron and } \frac{\textbf{N}^4}{8}$ two-electron integrals over these atomic basis orbitals (the factors of $\frac{1}{2}\text{ and } \frac{1}{8}$ arise from permutational symmetries of the integrals). When treating extremely large atomic orbital basis sets (e.g., 200 or more basis functions), modern computer programs calculate the requisite integrals but never store them on the disk. Instead, their contributions to $F_{\mu ,\nu}$ are accumulated 'on the fly' after which the integrals are discarded.
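The integral counts quoted above follow directly from the permutational symmetries: there are N(N+1)/2 unique one-electron integrals, and with M = N(N+1)/2 unique index pairs there are M(M+1)/2 unique two-electron integrals, which behaves as $\frac{N^4}{8}$ for large N. A quick check:

```python
def n_one_electron(N):
    """Unique <mu|h|nu> integrals: symmetric in (mu, nu)."""
    return N * (N + 1) // 2

def n_two_electron(N):
    """Unique two-electron integrals: symmetric within each index pair
    and under exchange of the two pairs."""
    M = N * (N + 1) // 2
    return M * (M + 1) // 2

# For 100 basis functions: close to N^2/2 = 5000 and N^4/8 = 12.5 million
one = n_one_electron(100)   # 5050
two = n_two_electron(100)   # 12753775
```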
To begin the SCF process, one must input to the computer routine which computes $F_{\mu ,\nu}$ initial 'guesses' for the $C_{\nu ,i}$ values corresponding to the occupied orbitals. These initial guesses are typically made in one of the following ways:
1. If one has available $C_{\nu ,i}$ values for the system from an SCF calculation performed earlier at a nearby molecular geometry, one can use these $C_{\nu ,i}$ values to begin the SCF process.
2. If one has $C_{\nu ,i}$ values appropriate to fragments of the system (e.g., for C and O atoms if the CO molecule is under study or for CH$_2$ and O if H$_2$CO is being studied), one can use these.
3. If one has no other information available, one can carry out one iteration of the SCF process in which the two-electron contributions to $F_{\mu ,\nu}$ are ignored ( i.e., take $F_{\mu ,\nu} = \langle \chi_\mu \big| h \big| \chi_\nu \rangle$) and use the resultant solutions to $\sum\limits_\nu F_{\mu ,\nu}C_{\nu ,i} = \epsilon_i \sum\limits_\nu \textbf{S}_{\mu ,\nu}C_{\nu ,i}$ as initial guesses for the $C_{\nu ,i}$. Using only the one-electron part of the Hamiltonian to determine initial values for the LCAO-MO coefficients may seem like a rather severe step; it is, and the resultant $C_{\nu ,i}$ values are usually far from the converged values which the SCF process eventually produces. However, the initial $C_{\nu ,i}$ obtained in this manner have proper symmetries and nodal patterns because the one-electron part of the Hamiltonian has the same symmetry as the full Hamiltonian.
Once initial guesses are made for the $C_{\nu ,i}$ of the occupied orbitals, the full $F_{\mu ,\nu}$ matrix is formed and new $\epsilon_i \text{ and } C_{\nu ,i}$ values are obtained by solving $\sum\limits_\nu F_{\mu ,\nu}C_{\nu ,i} = \epsilon_i \sum\limits_\nu \textbf{S}_{\mu ,\nu}C_{\nu ,i}$. These new orbitals are then used to form a new $F_{\mu ,\nu} \text{ matrix from which new } \epsilon_i \text{ and } C_{\nu ,i}$ are obtained. This iterative process is carried on until the $\epsilon_i \text{ and } C_{\nu ,i}$ do not vary (within specified tolerances) from iteration to iteration, at which time one says that the SCF process has converged and reached self-consistency.
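The iterative flow just described can be sketched schematically. The model below is not a real electronic-structure calculation — the two-electron contribution is replaced by a made-up density-dependent coupling on a two-function orthonormal basis — but the loop structure (guess the density, build F, diagonalize, rebuild, test self-consistency) is the one actually used:

```python
import numpy as np

h = np.array([[-1.0, -0.2], [-0.2, -0.5]])   # one-electron part (made-up numbers)
g = 0.1                                       # strength of the mock two-electron term
n_occ = 1                                     # one doubly occupied orbital

P = np.zeros((2, 2))                          # initial guess: ignore two-electron terms
for iteration in range(100):
    F = h + g * P                             # 'Fock' build (mock J-K contribution)
    eps, C = np.linalg.eigh(F)                # solve F C = eps C (S = 1 here)
    P_new = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T   # density from occupied orbitals
    if np.max(np.abs(P_new - P)) < 1e-10:     # self-consistency reached?
        break
    P = P_new                                 # otherwise iterate with the new density
```

Note that the first pass through the loop, with P = 0, is exactly the bare one-electron initial guess of option 3 above.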
As presented, the Roothaan SCF process is carried out in a fully ab initio manner in that all one- and two-electron integrals are computed in terms of the specified basis set; no experimental data or other input is employed. As described in Appendix F, it is possible to introduce approximations to the coulomb and exchange integrals entering into the Fock matrix elements that permit many of the requisite $F_{\mu ,\nu}$ elements to be evaluated in terms of experimental data or in terms of a small set of 'fundamental' orbital-level coulomb interaction integrals that can be computed in an ab initio manner. This approach forms the basis of so-called 'semi-empirical' methods. Appendix F provides the reader with a brief introduction to such approaches to the electronic structure problem and deals in some detail with the well known Hückel and CNDO-level approximations.
The Meaning of Orbital Energies
The physical content of the Hartree-Fock orbital energies can be seen by observing that $\hat{F} \phi_i = \epsilon_i\phi_i$ implies that $\epsilon_i$ can be written as:
$\epsilon_i = \langle \phi_i \big| \hat{F} \big| \phi_i \rangle = \langle \phi_i \big| h \big| \phi_i \rangle + \sum\limits_{\text{j(occupied)}}\langle \phi_i \big| J_j - K_j \big| \phi_i \rangle = \langle \phi_i \big| h \big| \phi_i \rangle + \sum\limits_{\text{j(occupied)}}\left[ \textbf{J}_{i,j} - \textbf{K}_{i,j} \right]. \nonumber$
In this form, it is clear that $\epsilon_i$ is equal to the average value of the kinetic energy plus Coulombic attraction to the nuclei for an electron in $\phi_i$ plus the sum over all of the spin-orbitals occupied in $\Psi$ of coulomb minus exchange interactions between $\phi_i$ and these occupied spin-orbitals.
• If $\phi_i$ itself is an occupied spin-orbital, the term $\left[ \textbf{J}_{i,i} - \textbf{K}_{i,i} \right]$ vanishes (because $J_{i,i} = K_{i,i}$), and the latter sum represents the coulomb minus exchange interaction of $\phi_i$ with all of the N-1 other occupied spin-orbitals.
• If $\phi_i$ is a virtual spin-orbital, this cancellation does not occur, and one obtains the coulomb minus exchange interaction of $\phi_i$ with all N of the occupied spin-orbitals.
In this sense, the orbital energies for occupied orbitals pertain to interactions which are appropriate to a total of N electrons, while the orbital energies of virtual orbitals pertain to a system with N+1 electrons. It is this fact that makes SCF virtual orbitals not optimal (in fact, not usually very good) for use in subsequent correlation calculations where, for instance, they are used, in combination with the occupied orbitals, to form polarized orbital pairs as discussed in Chapter 12. To correlate a pair of electrons that occupy a valence orbital requires double excitations into a virtual orbital that is not too dissimilar in size. Although the virtual SCF orbitals themselves suffer these drawbacks, the space they span can indeed be used for treating electron correlation. To do so, it is useful to recombine (in a unitary manner to preserve orthonormality) the virtual orbitals to 'focus' the correlating power into as few orbitals as possible so that the multiconfigurational wavefunction can be formed with as few CSFs as possible. Techniques for effecting such reoptimization or improvement of the virtual orbitals are treated later in this text.
Koopmans' Theorem
Further insight into the meaning of the energies of occupied and virtual orbitals can be gained by considering the following model of the vertical (i.e., at fixed molecular geometry) detachment or attachment of an electron to the original N-electron molecule:
1. In this model, both the parent molecule and the species generated by adding or removing an electron are treated at the single-determinant level.
2. In this model, the Hartree-Fock orbitals of the parent molecule are used to describe both the parent and the species generated by electron addition or removal. It is said that such a model neglects 'orbital relaxation' which would accompany the electron addition or removal (i.e., the reoptimization of the spin-orbitals to allow them to become appropriate to the daughter species).
Within this simplified model, the energy difference between the daughter and the parent species can be written as follows ($\phi_k$ represents the particular spin-orbital that is added or removed):
1. For electron detachment:
$E^{N-1} - E^N = \langle \big| \phi_1\phi_2 ... \phi_{k-1}\phi_{k+1} ... \phi_N \big| \text{H} \big| \phi_1\phi_2 ... \phi_{k-1}\phi_{k+1} ... \phi_N \big| \rangle - \langle \big| \phi_1 \phi_2 ... \phi_{k-1}\phi_k ...\phi_N \big| \text{H} \big| \phi_1 \phi_2 ... \phi_{k-1}\phi_k ... \phi_N \big| \rangle \nonumber$ $= -\langle \phi_k \big| h \big| \phi_k \rangle - \sum\limits_{j=(1,k-1,k+1,N)}[J_{k,j} - K_{k,j}] = - \epsilon_k ; \nonumber$
2. For electron attachment:
$E^N - E^{N+1} = \langle \big| \phi_1\phi_2 ... \phi_N \big| \text{H} \big| \phi_1\phi_2 ... \phi_N \big| \rangle - \langle \big| \phi_1 \phi_2 ... \phi_N \phi_k \big| \text{H} \big| \phi_1 \phi_2 ... \phi_N \phi_k \big| \rangle \nonumber$ $= -\langle \phi_k \big| h \big| \phi_k \rangle - \sum\limits_{j=(1,N)}[J_{k,j} - K_{k,j}] = - \epsilon_k ; \nonumber$
So, within the limitations of the single-determinant, frozen-orbital model set forth, the ionization potentials (IPs) and electron affinities (EAs) are given as the negative of the occupied and virtual spin-orbital energies, respectively. This statement is referred to as Koopmans' theorem (T. Koopmans, Physica 1, 104 (1933)); it is used extensively in quantum chemical calculations as a means for estimating IPs and EAs and often yields results that are at least qualitatively correct (i.e., ± 0.5 eV).
Koopmans' theorem argues that the ionization potentials and electron affinities are given as the negative of the occupied and virtual spin-orbital energies, respectively.
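As a numerical illustration, suppose an SCF calculation had produced the orbital energies below (hypothetical numbers chosen only for this example, in hartrees); the Koopmans estimates are then simple sign flips of the HOMO and LUMO energies:

```python
# Hypothetical SCF spin-orbital energies in hartrees (illustrative only)
occupied = [-11.23, -1.02, -0.54, -0.38]   # eps_i for occupied spin-orbitals
virtual = [0.12, 0.29]                     # eps_i for virtual spin-orbitals

HARTREE_TO_EV = 27.2114

# Koopmans: IP = -eps(HOMO), EA = -eps(LUMO), at frozen orbitals and geometry
ip_ev = -max(occupied) * HARTREE_TO_EV     # ionization potential estimate
ea_ev = -min(virtual) * HARTREE_TO_EV      # electron affinity estimate
```

Here the estimated IP is about 10.3 eV, and the negative EA estimate signals that, in this frozen-orbital model, the anion would be unbound.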
Orbital Energies and the Total Energy
For the N-electron species whose Hartree-Fock orbitals and orbital energies have been determined, the total SCF electronic energy can be written, by using the Slater-Condon rules, as:
$E = \sum\limits_{i(occupied)} \langle \phi_i \big| h \big| \phi_i \rangle + \sum\limits_{i>j(occupied)}[J_{i,j} - K_{i,j}]. \nonumber$
For this same system, the sum of the orbital energies of the occupied spin-orbitals is given by:
$\sum\limits_{\text{i(occupied)}}\epsilon_i = \sum\limits_{\text{i(occupied)}} \langle \phi_i \big| h \big| \phi_i \rangle + \sum\limits_{\text{i,j(occupied)}}[J_{i,j} - K_{i,j}]. \nonumber$
These two seemingly very similar expressions differ in a very important way; the sum of occupied orbital energies, when compared to the total energy, double counts the coulomb minus exchange interaction energies. Thus, within the Hartree-Fock approximation, the sum of the occupied orbital energies is not equal to the total energy. The total SCF energy can be computed in terms of the sum of occupied orbital energies by taking one-half of $\sum\limits_{\text{i(occupied)}}\epsilon_i$ and then adding to this one-half of $\sum\limits_{\text{i(occupied)}} \langle \phi_i \big| h \big| \phi_i \rangle :$
$E = \dfrac{1}{2}\left[ \sum\limits_{\text{i(occupied)}}\langle \phi_i \big| h \big| \phi_i \rangle + \sum\limits_{\text{i(occupied)}}\epsilon_i \right]. \nonumber$
The fact that the sum of orbital energies is not the total SCF energy also means that as one attempts to develop a qualitative picture of the energies of CSFs along a reaction path, as when orbital and configuration correlation diagrams are constructed, one must be careful not to equate the sum of orbital energies with the total configurational energy; the former is higher than the latter by an amount equal to the sum of the coulomb minus exchange interactions.
The sum of orbital energies is not the total SCF energy.
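The bookkeeping above can be verified on arbitrary numbers. Build orbital energies from made-up $\langle \phi_i | h | \phi_i \rangle$ values and a symmetric matrix of $J_{i,j} - K_{i,j}$ differences (zero on the diagonal, since $J_{i,i} = K_{i,i}$ for spin-orbitals), and check that the half-sum formula reproduces the total energy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                         # number of occupied spin-orbitals

h_ii = rng.normal(size=n)                     # made-up <phi_i|h|phi_i> values
G = rng.normal(size=(n, n))
G = 0.5 * (G + G.T)                           # symmetric J_ij - K_ij differences
np.fill_diagonal(G, 0.0)                      # J_ii = K_ii for spin-orbitals

eps = h_ii + G.sum(axis=1)                    # eps_i = h_ii + sum_j (J_ij - K_ij)
E_total = h_ii.sum() + np.triu(G, 1).sum()    # E = sum_i h_ii + sum_{i>j} (J - K)
E_from_eps = 0.5 * (h_ii.sum() + eps.sum())   # the half-sum formula
```

The two energies agree exactly, while the bare sum of eps double counts the off-diagonal interactions.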
The Brillouin Theorem
The condition that the SCF energy be stationary with respect to variations $\delta \phi_i$ in the occupied spin-orbitals (that preserve orthonormality) can be written
$\langle \big| \phi_1 ... \delta \phi_i ... \phi_N \big| \text{H} \big| \phi_1 ... \phi_i ...\phi_N \big| \rangle = 0. \nonumber$
The infinitesimal variation of $\phi_i$ can be expressed in terms of its (small) components along the other occupied $\phi_j$ and along the virtual $\phi_m$ as follows:
$\delta \phi_i = \sum\limits_{\text{j(occupied)}}U_{ij}\phi_j + \sum\limits_{\text{m(virtual)}} U_{im} \phi_m . \nonumber$
When substituted into $\big| \phi_1 ... \delta \phi_i ... \phi_N \big|$, the terms $\sum\limits_{\text{j(occupied)}} U_{ij} \big| \phi_1 ...\phi_j ... \phi_N \big|$ vanish because $\phi_j$ already appears in the original Slater determinant $\big| \phi_1 ... \phi_N \big|$, so $\big| \phi_1 ... \phi_j ... \phi_N \big|$ contains $\phi_j$ twice. Only the sum over virtual orbitals remains, and the stationary property written above becomes
$\sum\limits_m U_{\text{im}} \langle \big| \phi_1 ... \phi_m ... \phi_N \big| H \big| \phi_1 ... \phi_i ... \phi_N \big| \rangle = 0. \nonumber$
The Slater-Condon rules allow one to express the Hamiltonian matrix elements appearing here as
$\langle \big|\phi_1 ... \phi_m ... \phi_N \big| \text{H} \big| \phi_1 ... \phi_i ... \phi_N \big| \rangle = \langle \phi_m \big| h \big| \phi_i \rangle + \sum\limits_{\text{j=occ,} \ne i} \langle \phi_m \big| [J_j - K_j] \big| \phi_i \rangle , \nonumber$
which (because the term with j=i can be included, since it vanishes) is equal to the following element of the Fock operator: $\langle \phi_m \big| F \big| \phi_i \rangle = \epsilon_i \delta_{im} = 0$. This result proves that Hamiltonian matrix elements between the SCF determinant and those that are singly excited relative to the SCF determinant vanish because they reduce to Fock-operator integrals connecting the pair of orbitals involved in the 'excitation'. This stability property of the SCF energy is known as the Brillouin theorem (i.e., that $\big| \phi_1 ... \phi_i ... \phi_N \big| \text{ and } \big| \phi_1 ... \phi_m ... \phi_N \big|$ have zero Hamiltonian matrix elements if the $\phi$s are SCF orbitals). It is exploited in quantum chemical calculations in two manners:
1. When multiconfiguration wavefunctions are formed from SCF spin-orbitals, it allows one to neglect Hamiltonian matrix elements between the SCF configuration and those that are 'singly excited' in constructing the secular matrix.
2. A so-called generalized Brillouin theorem (GBT) arises when one deals with energy optimization for a multiconfigurational variational trial wavefunction for which the orbitals and CI mixing coefficients are simultaneously optimized. This GBT causes certain Hamiltonian matrix elements to vanish, which, in turn, simplifies the treatment of electron correlation for such wavefunctions. This matter is treated in more detail later in this text.
Corrections to the mean-field model are needed to describe the instantaneous Coulombic interactions among the electrons. This is achieved by including more than one Slater determinant in the wavefunction.
• 19.1: Introduction to Multi-Determinant Wavefunctions
To allow for a properly spin- and space- symmetry adapted trial wavefunction and to permit $\Psi$ to contain more than a single CSF, methods which are more flexible than the single-determinant HF procedure are needed. In particular, it may be necessary to use a combination of determinants to describe such a proper symmetry function.
• 19.2: Different Methods
There are numerous procedures currently in use for determining the 'best' wavefunction that involve two fundamentally different kinds of parameters to be determined: the CI coefficients $C_I$ and the LCAO-MO coefficients $C_{\nu ,i}$. The most commonly employed methods used to determine these parameters include: (1) the multiconfigurational self-consistent field (MCSCF) method, (2) the configuration interaction (CI) method, (3) the Møller-Plesset perturbation method (MPPT), and (4) the Coupled-Cluster method.
• 19.3: Strengths and Weaknesses of Various Methods
Methods that are based on making the energy functional stationary (i.e., variational methods) yield upper bounds to the lowest energy of the symmetry which characterizes the CSFs. All variational techniques suffer from at least one serious drawback; they are not size-extensive, which means that the energy computed using these tools can not be trusted to scale with the size of the system. This precludes these methods from use in extended systems (e.g., solids).
• 19.4: Further Details on Implementing Multiconfigurational Methods
In the CI method, one usually attempts to realize a high-level treatment of electron correlation. A set of orthonormal molecular orbitals are first obtained from an SCF or MCSCF calculation (usually involving a small to moderate list of CSFs). The LCAO-MO coefficients of these orbitals are no longer considered as variational parameters in the subsequent CI calculation; only the CI coefficients are to be further optimized.
Thumbnail: Schematic representation of the cluster-expansion-based classification. The full correlation is composed of singlets, doublets, triplets, and higher-order correlations, all uniquely defined by the cluster-expansion approach. Each blue sphere corresponds to one particle operator and yellow circles/ellipses to correlations. The number of spheres within a correlation identifies the cluster number. (CC SA-BY-3.0; Christoph N. Böttge, "Phonon-assistierte Lasertätigkeit in Mikroresonatoren").
19: Multi-Determinant Wavefunctions
Much of the development of the previous chapter pertains to the use of a single Slater determinant trial wavefunction. As presented, it relates to what has been called the unrestricted Hartree-Fock (UHF) theory in which each spin-orbital $\phi_i$ has its own orbital energy $\epsilon_i$ and LCAO-MO coefficients $C_{\nu ,i}$; there may be different $C_{\nu ,i}$ for $\alpha$ spin-orbitals than for $\beta$ spin-orbitals. Such a wavefunction suffers from the spin contamination difficulty detailed earlier.
To allow for a properly spin- and space- symmetry adapted trial wavefunction and to permit $\Psi$ to contain more than a single CSF, methods which are more flexible than the single-determinant HF procedure are needed. In particular, it may be necessary to use a combination of determinants to describe such a proper symmetry function. Moreover, as emphasized earlier, whenever two or more CSFs have similar energies (i.e., Hamiltonian expectation values) and can couple strongly through the Hamiltonian (e.g., at avoided crossings in configuration correlation diagrams), the wavefunction must be described in a multiconfigurational manner to permit the wavefunction to evolve smoothly from reactants to products. Also, whenever dynamical electron correlation effects are to be treated, a multiconfigurational $\Psi$ must be used; in this case, CSFs that are doubly excited relative to one or more of the essential CSFs (i.e., the dominant CSFs that are included in the so-called reference wavefunction) are included to permit polarized-orbital-pair formation.
Multiconfigurational functions are needed not only to account for electron correlation but also to permit orbital readjustments to occur. For example, if a set of SCF orbitals is employed in forming a multi-CSF wavefunction, the variational condition that the energy is stationary with respect to variations in the LCAO-MO coefficients is no longer obeyed (i.e., the SCF energy functional is stationary when SCF orbitals are employed, but the MC-energy functional is generally not stationary if SCF orbitals are employed). For such reasons, it is important to include CSFs that are singly excited relative to the dominant CSFs in the reference wavefunction.
That singly excited CSFs allow for orbital relaxation can be seen as follows. Consider a wavefunction consisting of one CSF $\big|\phi_1 ... \phi_i ... \phi_N\big|$ to which singly excited CSFs of the form $\big|\phi_1 ... \phi_m ... \phi_N\big|$ have been added with coefficients $C_{i,m}$:
$\Psi = \sum\limits_m C_{i,m} \big| \phi_1 ... \phi_m ... \phi_N \big| + \big|\phi_1 ... \phi_i ... \phi_N \big| . \nonumber$
All of these determinants have all of their columns equal except the i$^{th}$ column; therefore, they can be combined into a single new determinant:
$\Psi = \big|\phi_1 ... \phi_i ' ... \phi_N\big| , \nonumber$
where the relaxed orbital $\phi_i '$ is given by
$\phi_i ' = \phi_i + \sum\limits_m C_{i,m}\phi_m . \nonumber$
The sum of CSFs that are singly excited in the $i^{th}$ spin-orbital with respect to $\big|\phi_1 ... \phi_i ... \phi_N\big|$ is therefore seen to allow the spin-orbital $\phi_i$ to relax into the new spin-orbital $\phi_i '$. It is in this sense that singly excited CSFs allow for orbital reoptimization.
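The column-linearity of determinants that underlies this argument is easy to verify numerically. The sketch below is illustrative only: random $3\times 3$ arrays stand in for the spin-orbital columns, and the coefficients $C_{i,m}$ are arbitrary. It confirms that the determinant built from the relaxed column $\phi_i' = \phi_i + \sum_m C_{i,m}\phi_m$ equals the reference determinant plus the coefficient-weighted singly 'excited' determinants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns play the role of spin-orbitals in a small 3x3 "determinant".
phi = rng.standard_normal((3, 3))      # reference orbitals phi_1, phi_2, phi_3
virt = rng.standard_normal((3, 2))     # two "virtual" orbitals phi_m
C = np.array([0.3, -0.7])              # arbitrary mixing coefficients C_{i,m}

# Reference determinant and the singly "excited" determinants in which
# column i = 0 is replaced by each virtual orbital in turn.
D_ref = np.linalg.det(phi)
D_sing = []
for m in range(2):
    mat = phi.copy()
    mat[:, 0] = virt[:, m]
    D_sing.append(np.linalg.det(mat))

# Determinant built from the relaxed orbital phi_0' = phi_0 + sum_m C_m phi_m.
relaxed = phi.copy()
relaxed[:, 0] = phi[:, 0] + virt @ C
D_relaxed = np.linalg.det(relaxed)

# Linearity of det in a single column: the two routes agree.
assert np.isclose(D_relaxed, D_ref + sum(c * d for c, d in zip(C, D_sing)))
```

Because a determinant is linear in each of its columns, the sum of the reference and the weighted singles collapses into the single determinant containing $\phi_0'$, which is exactly the orbital-relaxation argument made above.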
In summary, doubly excited CSFs are often employed to permit polarized orbital pair formation and hence to allow for electron correlations. Singly excited CSFs are included to permit orbital relaxation (i.e., orbital reoptimization) to occur.
There are numerous procedures currently in use for determining the 'best' wavefunction of the form:
$\Psi = \sum\limits_IC_I\Phi_I , \nonumber$
where $\Phi_I$ is a spin- and space-symmetry adapted CSF consisting of determinants of the form $\big| \phi_{I1} \phi_{I2} \phi_{I3} ... \phi_{IN} \big|$. Excellent overviews of many of these methods are included in Modern Theoretical Chemistry, Vols. 3 and 4, H. F. Schaefer, III, Ed., Plenum Press, New York (1977) and in Advances in Chemical Physics, Vols. LXVII and LXIX, K. P. Lawley, Ed., Wiley-Interscience, New York (1987). Within the present Chapter, these two key references will be denoted MTC, Vols. 3 and 4, and ACP, Vols. 67 and 69, respectively.
In all such trial wavefunctions, there are two fundamentally different kinds of parameters that need to be determined: the CI coefficients $C_I$ and the LCAO-MO coefficients $C_{\nu ,i}$ describing the $\phi_{Ik}$. The most commonly employed methods used to determine these parameters include:
1. The multiconfigurational self-consistent field (MCSCF) method, in which the expectation value $\frac{\langle \Psi \big| \text{ H } \big| \Psi \rangle}{\langle \Psi \big| \Psi \rangle}$ is treated variationally and simultaneously made stationary with respect to variations in the $C_I$ and $C_{\nu ,i}$ coefficients, subject to the constraints that the spin-orbitals and the full N-electron wavefunction remain normalized: $\langle \phi_i \big| \phi_j \rangle = \delta_{i,j} = \sum\limits_{\nu ,\mu} C_{\nu ,i}S_{\nu ,\mu}C_{\mu ,j} , \nonumber$ and $\sum\limits_I C_I^2 = 1. \nonumber$ The articles by H.-J. Werner and by R. Shepard in ACP Vol. 69 provide up-to-date reviews of the status of this approach. The article by A. C. Wahl and G. Das in MTC Vol. 3 covers the 'earlier' history of this topic. F. W. Bobrowicz and W. A. Goddard, III provide, in MTC Vol. 3, an overview of the GVB approach, which, as discussed in Chapter 12, can be viewed as a specific kind of MCSCF calculation.
2. The configuration interaction (CI) method, in which the LCAO-MO coefficients are determined first (and independently) via either a single-configuration SCF calculation or an MCSCF calculation using a small number of CSFs. The CI coefficients are subsequently determined by making the expectation value $\frac{\langle \Psi \big| \text{ H }\big| \Psi \rangle}{\langle \Psi \big| \Psi \rangle}$ stationary with respect to variations in the $C_I$ only. In this process, the optimizations of the orbitals and of the CSF amplitudes are done in separate steps. The articles by I. Shavitt and by B. O. Roos and P. E. M. Siegbahn in MTC, Vol. 3 give excellent early overviews of the CI method.
3. The Møller-Plesset perturbation method (MPPT) uses the single-configuration SCF process (usually the UHF implementation) to first determine a set of LCAO-MO coefficients and, hence, a set of orbitals that obey $F\phi_i = \epsilon_i \phi_i$. Then, using an unperturbed Hamiltonian equal to the sum of these Fock operators for each of the N electrons $H^0 = \sum\limits_{i=1,N} F(i)$, perturbation theory (see Appendix D for an introduction to time-independent perturbation theory) is used to determine the CI amplitudes for the CSFs. The MPPT procedure is also referred to as the many-body perturbation theory (MBPT) method. The two names arose because two different schools of physics and chemistry developed them for somewhat different applications. Later, workers realized that they were identical in their working equations when the UHF $H^0$ is employed as the unperturbed Hamiltonian. In this text, we will therefore refer to this approach as MPPT/MBPT.
The amplitude for the so-called reference CSF used in the SCF process is taken as unity and the other CSFs' amplitudes are determined, relative to this one, by Rayleigh-Schrödinger perturbation theory using the full N-electron Hamiltonian minus the sum of Fock operators, $\text{H - H}^0$, as the perturbation. The Slater-Condon rules are used for evaluating matrix elements of (H - H$^0$) among these CSFs. The essential features of the MPPT/MBPT approach are described in the following articles: J. A. Pople, R. Krishnan, H. B. Schlegel, and J. S. Binkley, Int. J. Quantum Chem. 14, 545 (1978); R. J. Bartlett and D. M. Silver, J. Chem. Phys. 62, 3258 (1975); R. Krishnan and J. A. Pople, Int. J. Quantum Chem. 14, 91 (1978).
4. The Coupled-Cluster method expresses the CI part of the wavefunction in a somewhat different manner (the early work in chemistry on this method is described in J. Cizek, J. Chem. Phys. 45 , 4256 (1966); J. Paldus, J. Cizek, and I. Shavitt, Phys. Rev. A5 , 50 (1972); R. J. Bartlett and G. D. Purvis, Int. J. Quantum Chem. 14 , 561 (1978); G. D. Purvis and R. J. Bartlett, J. Chem. Phys. 76 , 1910 (1982)): $\Psi = e^T\Phi , \nonumber$ where $\Phi$ is a single CSF (usually the UHF single determinant) which has been used to independently determine a set of spin-orbitals and LCAO-MO coefficients via the SCF process. The operator T generates, when acting on $\Phi$, single, double, etc. 'excitations' (i.e., CSFs in which one, two, etc. of the occupied spin-orbitals in $\Phi$ have been replaced by virtual spin-orbitals). T is commonly expressed in terms of operators that effect such spin-orbital removals and additions as follows: $\text{T} = \sum\limits_{i,m}t_i^m m^+ i + \sum\limits_{i,j,m,n} t_{i,j}^{m,n}m^+n^+ j i + ..., \nonumber$ where the operator $m^+$ is used to denote creation of an electron in virtual spin-orbital $\phi_m$ and the operator j is used to denote removal of an electron from occupied spin-orbital $\phi_j$.
The $t_i^m \text{ , } t_{i,j}^{m,n}$, etc. amplitudes, which play the role of the CI coefficients in CC theory, are determined through the set of equations generated by projecting the Schrödinger equation in the form $e^{-\text{T}}\text{H}e^{\text{T}}\Phi = E \Phi \nonumber$ against CSFs which are single, double, etc. excitations relative to $\Phi$. For example, for double excitations $\Phi_{i,j}^{m,n}$ the equations read: $\langle \Phi_{i,j}^{m,n} \big| e^{-\text{T}}\text{H}e^{\text{T}} \big| \Phi \rangle = \text{E} \langle \Phi_{i,j}^{m,n} \big| \Phi \rangle = 0; \nonumber$ zero is obtained on the right hand side because the excited CSFs $\big| \Phi_{i,j}^{m,n} \rangle$ are orthogonal to the reference function $\big| \Phi \rangle$. The elements on the left hand side of the CC equations can be expressed, as described below, in terms of one- and two-electron integrals over the spin-orbitals used in forming the reference and excited CSFs.
Integral Transformations
All of the above methods require the evaluation of one- and two-electron integrals over the N atomic-orbital basis functions: $\langle \chi_a \big| f \big| \chi_b \rangle \text{ and } \langle \chi_a\chi_b\big| g\big|\chi_c\chi_d\rangle .$ Eventually, all of these methods provide their working equations and energy expressions in terms of one- and two-electron integrals over the N final molecular orbitals: $\langle \phi_i \big| f \big| \phi_j \rangle$ and $\langle \phi_i\phi_j\big| g\big|\phi_k\phi_l\rangle$. The MO-based integrals can only be evaluated by transforming the AO-based integrals as follows:
$\langle \phi_i\phi_j \big| g \big| \phi_k\phi_l \rangle = \sum\limits_{a,b,c,d}C_{a,i}C_{b,j}C_{c,k}C_{d,l}\langle \chi_a \chi_b \big| g \big| \chi_c \chi_d \rangle , \nonumber$
and
$\langle \phi_i \big| f \big| \phi_j \rangle = \sum\limits_{a,b}C_{a,i}C_{b,j} \langle \chi_a \big| f \big| \chi_b \rangle . \nonumber$
It would seem that the process of evaluating all $N^4 \text{ of the } \langle \phi_i \phi_j \big| g \big| \phi_k\phi_l \rangle$, each of which requires $N^4$ additions and multiplications, would require computer time proportional to $N^8$. However, it is possible to perform the full transformation of the two-electron integral list in a time that scales as $N^5$. This is done by first performing a transformation of the $\langle \chi_a \chi_b \big| g \big| \chi_c \chi_d \rangle$ to an intermediate array $\langle \chi_a \chi_b \big| g \big| \chi_c \phi_l \rangle$ labeled as follows:

$\langle \chi_a \chi_b \big| g \big| \chi_c \phi_l \rangle = \sum\limits_d C_{d,l}\langle \chi_a \chi_b \big| g \big| \chi_c \chi_d \rangle . \nonumber$

This partial transformation requires $N^5$ multiplications and additions. The list $\langle \chi_a \chi_b \big| g \big| \chi_c \phi_l \rangle$ is then transformed to a second-level transformed array $\langle \chi_a \chi_b \big| g \big| \phi_k \phi_l \rangle$:

$\langle \chi_a \chi_b \big| g \big| \phi_k \phi_l \rangle = \sum\limits_c C_{c,k} \langle \chi_a \chi_b \big| g \big| \chi_c \phi_l \rangle , \nonumber$

which requires another $N^5$ operations. This sequential, one-index-at-a-time transformation is repeated four times until the final $\langle \phi_i \phi_j \big| g \big| \phi_k \phi_l \rangle$ array is in hand. The entire transformation done this way requires $4N^5$ multiplications and additions.
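The one-index-at-a-time strategy can be sketched with NumPy's einsum on a toy basis. The integrals and LCAO-MO coefficients below are random stand-ins, not data for any real molecule; the point is only that four sequential quarter-transformations reproduce the brute-force $N^8$ result:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                        # tiny AO basis size
g_ao = rng.standard_normal((N, N, N, N))     # stand-in AO integrals <ab|g|cd>
C = rng.standard_normal((N, N))              # stand-in LCAO-MO coefficients C[a, i]

# One index at a time: four quarter-transformations, each costing N^5.
t1 = np.einsum('abcd,dl->abcl', g_ao, C)     # <ab|g|c l>
t2 = np.einsum('abcl,ck->abkl', t1, C)       # <ab|g|k l>
t3 = np.einsum('abkl,bj->ajkl', t2, C)       # <a j|g|k l>
g_mo = np.einsum('ajkl,ai->ijkl', t3, C)     # <i j|g|k l>

# Brute-force N^8 transformation for comparison.
g_ref = np.einsum('abcd,ai,bj,ck,dl->ijkl', g_ao, C, C, C, C)
assert np.allclose(g_mo, g_ref)
```

Each quarter-transformation contracts one AO index against the coefficient matrix, which is exactly why the total cost is $4N^5$ rather than $N^8$.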
Once the requisite one- and two-electron integrals are available in the molecular orbital basis, the multiconfigurational wavefunction and energy calculation can begin. These transformations consume a large fraction of the computer time used in most such calculations, and represent a severe bottleneck to progress in applying ab initio electronic structure methods to larger systems.
Configuration List Choices
Each of these methods has its own approach to describing the configurations {$\Phi_J$} included in the calculation and to determining how the {$C_J$} amplitudes and the total energy E are obtained.
The number of configurations ($N_C$) varies greatly among the methods and is an important factor to keep in mind when planning to carry out an ab initio calculation. Under certain circumstances (e.g., when studying Woodward-Hoffmann forbidden reactions where an avoided crossing of two configurations produces an activation barrier), it may be essential to use more than one electronic configuration. Sometimes, one configuration (e.g., the SCF model) is adequate to capture the qualitative essence of the electronic structure. In all cases, many configurations will be needed if a highly accurate treatment of electron-electron correlation is desired.
The value of $N_C$ determines how much computer time and memory is needed to solve the $N_C\text{-dimensional } \sum\limits_J \text{H}_{I,J}C_J = E C_I$ secular problem in the CI and MCSCF methods. Solution of these matrix eigenvalue equations requires computer time that scales as $N_C^2 \text{ (if few eigenvalues are computed) to } N_C^3$ (if most eigenvalues are obtained).
So-called complete-active-space (CAS) methods form all CSFs that can be created by distributing N valence electrons among P valence orbitals. For example, the eight non-core electrons of $H_2O$ might be distributed, in a manner that gives $M_S = 0$, among six valence orbitals (e.g., two lone-pair orbitals, two OH $\sigma$ bonding orbitals, and two OH $\sigma^{\text{*}}$ antibonding orbitals). The number of configurations thereby created is 225. If the same eight electrons were distributed among ten valence orbitals, 44,100 configurations result; for twenty and thirty valence orbitals, 23,474,025 and 751,034,025 configurations arise, respectively. Clearly, practical considerations dictate that CAS-based approaches be limited to situations in which a few electrons are to be correlated using a few valence orbitals. The primary advantage of CAS configurations is discussed below in Sec. II. C.
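The counts quoted above can be reproduced by noting that, for $M_S = 0$, the $\alpha$ and $\beta$ occupations are chosen independently, giving $\binom{P}{N/2}^2$ determinants for N electrons in P orbitals. A quick check (the helper name is ours, not standard nomenclature):

```python
from math import comb

def cas_determinants(n_elec, n_orb):
    # Number of M_S = 0 determinants: choose the alpha and the beta
    # occupations independently among the active orbitals.
    n_alpha = n_elec // 2
    return comb(n_orb, n_alpha) ** 2

for p in (6, 10, 20, 30):
    print(p, cas_determinants(8, p))
# 6 -> 225, 10 -> 44100, 20 -> 23474025, 30 -> 751034025
```

The quadratic dependence on a binomial coefficient is what makes the CAS CSF list explode so quickly as the active space grows.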
Variational Methods Such as MCSCF, SCF, and CI Produce Energies that are Upper Bounds, but These Energies are not Size-Extensive
Methods that are based on making the energy functional $\frac{\langle \Psi \big| \text{ H } \big| \Psi \rangle}{\langle \Psi \big| \Psi \rangle}$ stationary (i.e., variational methods) yield upper bounds to the lowest energy of the symmetry which characterizes the CSFs which comprise $\Psi$. These methods also can provide approximate excited-state energies and wavefunctions (e.g., in the form of other solutions of the secular equation $\sum\limits_J \text{H}_{I,J}C_J = \text{E }C_I$ that arises in the CI and MCSCF methods). Excited-state energies obtained in this manner can be shown to 'bracket' the true energies of the given symmetry in that between any two approximate energies obtained in the variational calculation, there exists at least one true eigenvalue. This characteristic is commonly referred to as the 'bracketing theorem' (E. A. Hylleraas and B. Undheim, Z. Phys. 65, 759 (1930); J. K. L. MacDonald, Phys. Rev. 43, 830 (1933)). These are strong attributes of the variational methods, as is the long and rich history of developments of analytical and computational tools for efficiently implementing such methods (see the discussions of the CI and MCSCF methods in MTC and ACP).
However, all variational techniques suffer from at least one serious drawback: they are not size-extensive (J. A. Pople, pg. 51 in Energy, Structure, and Reactivity, D. W. Smith and W. B. McRae, Eds., Wiley, New York (1973)). This means that the energy computed using these tools cannot be trusted to scale with the size of the system. For example, a calculation performed on two $CH_3$ species at large separation may not yield an energy equal to twice the energy obtained by performing the same kind of calculation on a single $CH_3$ species. Lack of size-extensivity precludes these methods from use in extended systems (e.g., solids) where errors due to improper scaling of the energy with the number of molecules produce nonsensical results.
By carefully adjusting the kind of variational wavefunction used, it is possible to circumvent size-extensivity problems for selected species. For example, a CI calculation on $Be_2$ using all $^1\Sigma_g$ CSFs that can be formed by placing the four valence electrons into the orbitals $2\sigma_g \text{, } 2\sigma_u \text{, } 3\sigma_g \text{, } 3\sigma_u \text{, } 1\pi_u \text{, and } 1\pi_g$ can yield an energy equal to twice that of the Be atom described by CSFs in which the two valence electrons of the Be atom are placed into the 2s and 2p orbitals in all ways consistent with a $^1$S symmetry. Such special choices of configurations give rise to what are called complete-active-space (CAS) MCSCF or CI calculations (see the article by B. O. Roos in ACP for an overview of this approach).
Let us consider an example to understand why the CAS choice of configurations works. The $^1$S ground state of the Be atom is known to form a wavefunction that is a strong mixture of CSFs that arise from the $2s^2 \text{ and } 2p^2$ configurations:
$\Psi_{\text{Be}} = C_1 \big| 1s^2 2s^2 \big| + C_2 \big| 1s^2 2p^2 \big| , \nonumber$
where the latter CSF is a short-hand representation for the proper spin- and space-symmetry adapted CSF
$\big| 1s^2 2p^2 \big| = \dfrac{1}{\sqrt{3}} \left[ \big| 1s\alpha 1s\beta 2p_0 \alpha 2p_0 \beta \big| \text{ - } \big| 1s\alpha 1s\beta 2p_1 \alpha 2p_{-1}\beta \big| \text{ - } \big| 1s\alpha 1s\beta 2p_{-1}\alpha 2p_1\beta \big| \right] . \nonumber$
The reason the CAS process works is that the Be$_2$ CAS wavefunction has the flexibility to dissociate into the product of two CAS Be wavefunctions:
$\Psi = \Psi_{\text{Bea}} \Psi_{\text{Beb}} = \left[ C_1 \big| 1s^2 2s^2 \big| + C_2 \big| 1s^2 2p^2\big| \right]_a \left[ C_1 \big| 1s^2 2s^2 \big| + C_2 \big| 1s^2 2p^2 \big| \right]_b , \nonumber$
where the subscripts a and b label the two Be atoms, because the four-electron CAS function distributes the four electrons in all ways among the $2s_a \text{, } 2p_a \text{, } 2s_b \text{, and } 2p_b$ orbitals. In contrast, if the Be$_2$ calculation had been carried out using only the following CSFs: $\big| 1\sigma^2_g \text{ } 1\sigma^2_u \text{ } 2\sigma^2_g \text{ } 2\sigma^2_u \big|$ and all single and double excitations relative to this (dominant) CSF, which is a very common type of CI procedure to follow, the Be$_2$ wavefunction would not have contained the particular CSFs $\big| 1s^2 2p^2 \big|_a \big| 1s^2 2p^2 \big|_b$ because these CSFs are four-fold excited relative to the $\big| 1\sigma^2_g 1\sigma^2_u 2\sigma^2_g 2\sigma^2_u \big|$ 'reference' CSF.
In general, one finds that if the 'monomer' uses CSFs that are K-fold excited relative to its dominant CSF to achieve an accurate description of its electron correlation, a size-extensive variational calculation on the 'dimer' will require the inclusion of CSFs that are 2K-fold excited relative to the dimer's dominant CSF. To perform a size-extensive variational calculation on a species containing M monomers therefore requires the inclusion of CSFs that are MxK-fold excited relative to the M-mer's dominant CSF.
Non-Variational Methods Such as MPPT/MBPT and CC do not Produce Upper Bounds, but Yield Size-Extensive Energies
In contrast to variational methods, perturbation theory and coupled-cluster methods achieve their energies from a 'transition formula' $\langle \Phi \big| \text{ H } \big| \Psi \rangle$ rather than from an expectation value $\langle \Psi \big| \text{ H } \big| \Psi \rangle$. It can be shown (H. P. Kelly, Phys. Rev. 131, 684 (1963)) that this difference allows non-variational techniques to yield size-extensive energies. This can be seen in the MPPT/MBPT case by considering the energy of two non-interacting Be atoms. The reference CSF is $\Phi = \big| 1s_a^2 2s_a^2 1s_b^2 2s_b^2 \big|$; the Slater-Condon rules limit the CSFs in $\Psi$ which can contribute to
$\text{E} = \langle \Phi \big| \text{ H } \big| \Psi \rangle = \langle \Phi \big| \text{ H } \sum\limits_J C_J \Phi_J \rangle \nonumber$
to be $\Phi$ itself and those CSFs that are singly or doubly excited relative to $\Phi$. These 'excitations' can involve atom a, atom b, or both atoms. However, any CSFs that involve excitations on both atoms (e.g., $\big| 1s_a^2 2s_a 2p_a 1s_b^2 2s_b 2p_b \big|$) give rise, via the SC rules, to one- and two-electron integrals over orbitals on both atoms; these integrals (e.g., $\langle 2s_a 2p_a \big| \text{ g } \big| 2s_b 2p_b \rangle$) vanish if the atoms are far apart. As a result, the contributions due to such CSFs vanish in our consideration of size-extensivity. Thus, only CSFs that are excited on one or the other atom contribute to the energy:
$\text{E} = \langle \Phi_a \Phi_b \big| \text{ H } \sum\limits_{Ja} C_{Ja} \Phi_{Ja}^{\text{*}}\Phi_b + \sum\limits_{Jb}C_{Jb}\Phi_a \Phi^{\text{*}}_{Jb} \rangle , \nonumber$
where $\Phi_a \text{ and } \Phi_b \text{ as well as } \Phi^{\text{*}}_{Ja} \text{ and } \Phi^{\text{*}}_{Jb}$ are used to denote the a and b parts of the reference and excited CSFs, respectively.
This expression, once the SC rules are used to reduce it to one- and two- electron integrals, is of the additive form required of any size-extensive method:
$\text{E} = \langle \Phi_a \big| \text{ H } \sum\limits_{Ja}C_{Ja}\Phi_{Ja} \rangle + \langle \Phi_b \big| \text{ H } \big| \sum\limits_{Jb} C_{Jb} \Phi_{Jb} \rangle , \nonumber$
and will yield a size-extensive energy if the equations used to determine the $C_{Ja}$ and $C_{Jb}$ amplitudes are themselves separable. In MPPT/MBPT, these amplitudes are expressed, in first order, as:
$C_{Ja} = \dfrac{\langle \Phi_a \Phi_b \big| \text{ H } \big| \Phi_{Ja}^{\text{*}} \Phi_b \rangle}{\text{E}^0_a + \text{E}^0_b - \text{E}^{\text{*}}_{Ja} - \text{E}^0_b } \nonumber$
(and analogously for C$_{Jb}$). Again using the SC rules, this expression reduces to one that involves only atom a:
$C_{Ja} = \dfrac{\langle \Phi_a \big| \text{ H } \big| \Phi_{Ja}^{\text{*}} \rangle}{\text{E}_a^0 - \text{E}_{Ja}^{\text{*}}} . \nonumber$
The additivity of E and the separability of the equations determining the C$_J$ coefficients make the MPPT/MBPT energy size-extensive. This property can also be demonstrated for the Coupled-Cluster energy (see the references given above in Chapter 19. I.4). However, size-extensive methods have at least one serious weakness; their energies do not provide upper bounds to the true energies of the system (because their energy functional is not of the expectation-value form for which the upper bound property has been proven).
Which Method is Best?
At this time, it may not be possible to say which method is preferred for applications where all are practical. Nor is it possible to assess, in a way that is applicable to most chemical species, the accuracies with which various methods predict bond lengths and energies or other properties. However, there are reasons to recommend some methods over others in specific cases. For example, certain applications require a size-extensive energy (e.g., extended systems that consist of a large or macroscopic number of units, or studies of weak intermolecular interactions), so MBPT/MPPT or CC or CAS-based MCSCF are preferred. Moreover, certain chemical reactions (e.g., Woodward-Hoffmann forbidden reactions) and certain bond-breaking events require two or more 'essential' electronic configurations. For them, single-configuration-based methods such as conventional CC and MBPT/MPPT should not be used; MCSCF or CI calculations would be better. Very large molecules, in which thousands of atomic orbital basis functions are required, may be impossible to treat by methods whose effort scales as $N^4$ or higher; density functional methods would be better to use then.
For all calculations, the choice of atomic orbital basis set must be made carefully, keeping in mind the N$^4$ scaling of the one- and two-electron integral evaluation step and the $N^5$ scaling of the two-electron integral transformation step. Of course, basis functions that describe the essence of the states to be studied are essential (e.g., Rydberg or anion states require diffuse functions, and strained rings require polarization functions).
As larger atomic basis sets are employed, the size of the CSF list used to treat dynamic correlation increases rapidly. For example, most of the above methods use singly and doubly excited CSFs for this purpose. For large basis sets, the number of such CSFs, $N_C$, scales as the number of electrons squared, $n_e^2$, times the number of basis functions squared, N$^2$. Since the effort needed to solve the CI secular problem varies as $N_C^2 \text{ or } N_C^3$, a dependence as strong as N$^4$ to N$^6$ can result. To handle such large CSF spaces, all of the multiconfigurational techniques mentioned in this chapter have been developed to the extent that calculations involving on the order of 100 to 5,000 CSFs are routinely performed, and calculations using 10,000, 100,000, and even several million CSFs are practical.
Other methods, most of which can be viewed as derivatives of the techniques introduced above, have been and are still being developed. This ongoing process has been, in large part, stimulated by the explosive growth in computer power and change in computer architecture that has been realized in recent years. All indications are that this growth pattern will continue, so ab initio quantum chemistry will likely have an even larger impact on future chemistry research and education (through new insights and concepts).
The MCSCF Method
The simultaneous optimization of the LCAO-MO and CI coefficients performed within an MCSCF calculation is a quite formidable task. The variational energy functional is a quadratic function of the CI coefficients, and so one can express the stationary conditions for these variables in the secular form:
$\sum\limits_J\text{H}_{I,J}C_J = \text{E C}_I . \nonumber$
However, E is a quartic function of the $C_{\nu ,i}$ coefficients because each matrix element $\langle \Phi_I \big| \text{ H } \big| \Phi_J \rangle$ involves one- and two-electron integrals over the MOs $\phi_i$, and the two-electron integrals depend quartically on the $C_{\nu ,i}$ coefficients. The stationary conditions with respect to these $C_{\nu ,i}$ parameters must be solved iteratively because of this quartic dependence.
It is well known that minimization of a function (E) of several non-linear parameters (the $C_{\nu ,i}$) is a difficult task that can suffer from poor convergence and may locate local rather than global minima. In an MCSCF wavefunction containing many CSFs, the energy is only weakly dependent on the orbitals that are weakly occupied (i.e., those that appear in CSFs with small $C_I$ values); in contrast, E is strongly dependent on the $C_{\nu ,i}$ coefficients of those orbitals that appear in the CSFs with larger $C_I$ values. One is therefore faced with minimizing a function of many variables (there may be as many $C_{\nu ,i}$ as the square of the number of orbital basis functions) that depends strongly on several of the variables and weakly on many others. This is a very difficult job.
For these reasons, in the MCSCF method, the number of CSFs is usually kept to a small to moderate number (e.g., a few to several hundred) chosen to describe essential correlations (i.e., configuration crossings, proper dissociation) and important dynamical correlations (those electron-pair correlations of angular, radial, left-right, etc. nature that arise when low-lying 'virtual' orbitals are present). In such a compact wavefunction, only spin-orbitals with reasonably large occupations (e.g., as characterized by the diagonal elements of the one-particle density matrix $\gamma_{i,j}$) appear. As a result, the energy functional is expressed in terms of variables on which it is strongly dependent, in which case the nonlinear optimization process is less likely to be pathological.
Such a compact MCSCF wavefunction is designed to provide a good description of the set of strongly occupied spin-orbitals and of the CI amplitudes for CSFs in which only these spin-orbitals appear. It, of course, provides no information about the spin-orbitals that are not used to form the CSFs on which the MCSCF calculation is based. As a result, the MCSCF energy is invariant to a unitary transformation among these 'virtual' orbitals.
In addition to the references mentioned earlier in ACP and MTC, the following papers describe several of the advances that have been made in the MCSCF method, especially with respect to enhancing its rate and range of convergence: E. Dalgaard and P. Jørgensen, J. Chem. Phys. 69, 3833 (1978); H. J. Aa. Jensen, P. Jørgensen, and H. Ågren, J. Chem. Phys. 87, 457 (1987); B. H. Lengsfield, III and B. Liu, J. Chem. Phys. 75, 478 (1981).
The Configuration Interaction Method
In the Configuration Interaction (CI) method, one usually attempts to realize a high-level treatment of electron correlation. A set of orthonormal molecular orbitals is first obtained from an SCF or MCSCF calculation (usually involving a small to moderate list of CSFs). The LCAO-MO coefficients of these orbitals are no longer considered as variational parameters in the subsequent CI calculation; only the $C_I$ coefficients are to be further optimized.
The CI wavefunction
$\psi = \sum\limits_J C_J \Phi_J \nonumber$
is most commonly constructed from CSFs $\Phi_J$ that include:
1. All of the CSFs in the SCF (in which case only a single CSF is included) or MCSCF wavefunction that was used to generate the molecular orbitals $\phi_i$. This set of CSFs is referred to as spanning the 'reference space' of the subsequent CI calculation, and the particular combination of these CSFs used in this orbital optimization (i.e., the SCF or MCSCF wavefunction) is called the reference function.
2. CSFs that are generated by carrying out single, double, triple, etc. level 'excitations' (i.e., orbital replacements) relative to reference CSFs. CI wavefunctions limited to include contributions through various levels of excitation (e.g., single, double, etc.) are denoted S (singly excited), D (doubly), SD (singly and doubly), SDT (singly, doubly, and triply), and so on.
The orbitals from which electrons are removed and those into which electrons are excited can be restricted to focus attention on correlations among certain orbitals. For example, if excitations out of core electrons are excluded, one computes a total energy that contains no correlation corrections for these core orbitals. Often it is possible to so limit the nature of the orbital excitations to focus on the energetic quantities of interest (e.g., the CC bond breaking in ethane requires correlation of the $\sigma_{CC}$ orbital but the 1s Carbon core orbitals and the CH bond orbitals may be treated in a non-correlated manner).
Clearly, the number of CSFs included in the CI calculation can be far in excess of the number considered in typical MCSCF calculations; CI wavefunctions including 5,000 to 50,000 CSFs are routinely used, and functions with one to several million CSFs are within the realm of practicality (see, for example, J. Olsen, B. Roos, Poul Jørgensen, and H. J. Aa. Jensen, J. Chem. Phys. 89 , 2185 (1988) and J. Olsen, P. Jørgensen, and J. Simons, Chem. Phys. Letters 169 , 463 (1990)).
The need for such large CSF expansions should not come as a surprise once one considers that (i) each electron pair requires at least two CSFs (let us say it requires P of them, on average: a dominant one and P-1 others which are doubly excited) to form polarized orbital pairs, (ii) there are of the order of N(N-1)/2 = X electron pairs in an atom or molecule containing N electrons, and (iii) the number of terms in the CI wavefunction scales as $P^X$. So, for an H$_2$O molecule containing ten electrons, there would be P$^{45}$ terms in the CI expansion. This is 3.5 x 10$^{13}$ terms if P=2 and 3.0 x 10$^{21}$ terms if P=3. Undoubtedly, this is an overestimate of the number of CSFs needed to describe electron correlation in H$_2$O, but it demonstrates how rapidly the number of CSFs can grow with the number of electrons in the system.
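The combinatorics here are easy to check directly; note that $\binom{10}{2} = 45$ electron pairs for ten electrons:

```python
from math import comb

n_electrons = 10                     # e.g., H2O
X = comb(n_electrons, 2)             # N(N-1)/2 electron pairs
print(X)                             # 45

for P in (2, 3):                     # average number of CSFs per pair
    print(P, f"{float(P**X):.1e}")
# 2 -> 3.5e+13, 3 -> 3.0e+21
```

Even with P as small as 2 or 3, the exponential dependence on the number of electron pairs dominates everything else in the estimate.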
The $H_{I,J}$ matrices that arise in CI calculations are evaluated in terms of one- and two-electron integrals over the molecular orbitals using the equivalent of the Slater-Condon rules. For large CI calculations, the full $H_{I,J}$ matrix is not actually evaluated and stored in the computer's memory (or on its disk); rather, so-called 'direct CI' methods (see the article by Roos and Siegbahn in MTC) are used to compute and immediately sum contributions to the sum $\sum\limits_J \text{H}_{I,J}C_J$ in terms of integrals, density matrix elements, and approximate values of the $C_J$ amplitudes. Iterative methods (see, for example, E. R. Davidson, J. Comput. Phys. 17, 87 (1975)), in which approximate values for the $C_J$ coefficients and energy E are refined through sequential application of $\sum\limits_J\text{H}_{I,J}$ to the preceding estimate of the $C_J$ vector, are employed to solve these large CI matrix eigenvalue problems.
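A minimal sketch of such a Davidson-type iteration is shown below, assuming a symmetric, diagonally dominant $H_{I,J}$ stored as a dense NumPy array. (Real direct-CI codes never form H explicitly and use more careful subspace management; this only illustrates the subspace-expansion and diagonal-preconditioning ideas.)

```python
import numpy as np

def davidson_lowest(H, tol=1e-8, max_iter=50):
    """Sketch of a Davidson iteration for the lowest eigenpair of symmetric H."""
    n = H.shape[0]
    diag = np.diag(H)
    # Start from the unit vector on the smallest diagonal element.
    b = np.zeros(n)
    b[np.argmin(diag)] = 1.0
    V = b[:, None]                           # subspace basis, one column per iteration
    for _ in range(max_iter):
        Hs = V.T @ H @ V                     # small subspace Hamiltonian
        theta, s = np.linalg.eigh(Hs)
        E, c = theta[0], V @ s[:, 0]         # current estimates of E and the C vector
        r = H @ c - E * c                    # residual of H C = E C
        if np.linalg.norm(r) < tol:
            return E, c
        # Davidson preconditioner: scale the residual by (E - H_II),
        # then orthogonalize against the current subspace and normalize.
        delta = r / (E - diag + 1e-12)
        delta -= V @ (V.T @ delta)
        delta /= np.linalg.norm(delta)
        V = np.hstack([V, delta[:, None]])
    return E, c

# Test on a random diagonally dominant symmetric matrix (a CI-like H).
rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)) * 0.1
H = (A + A.T) / 2 + np.diag(np.arange(n, dtype=float))
E, c = davidson_lowest(H)
assert np.isclose(E, np.linalg.eigvalsh(H)[0])
```

The key point is that each iteration needs only the action of H on a vector, which is exactly what a direct-CI code supplies from integrals and density-matrix elements.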
The MPPT/MBPT Method
In the MPPT/MBPT method, once the reference CSF is chosen and the SCF orbitals belonging to this CSF are determined, the wavefunction $\psi$ and energy E are determined in an order-by-order manner. This is one of the primary strengths of the MPPT/MBPT technique; it does not require one to make further (potentially arbitrary) choices once the basis set and dominant (SCF) configuration are specified. In contrast to the MCSCF and CI treatments, one need not make choices of CSFs to include in or exclude from $\psi$. The MPPT/MBPT perturbation equations determine what CSFs must be included through any particular order.
For example, the first-order wavefunction correction $\psi^1 \text {(i.e., } \psi = \Phi + \psi^1$ through first order) is given by:
\begin{align} \psi^1 &= -\sum\limits_{i<j,m<n} \dfrac{\langle \Phi_{i,j}^{m,n}\big| \text{ H - H}^0 \big| \Phi \rangle}{\epsilon_m - \epsilon_i + \epsilon_n - \epsilon_j} \bigg| \Phi_{i,j}^{m,n} \rangle \ &= - \sum\limits_{i<j,m<n} \dfrac{\langle i,j \big| g \big| m,n \rangle - \langle i,j \big| g \big| n,m \rangle}{\epsilon_m - \epsilon_i + \epsilon_n - \epsilon_j}\bigg| \Phi_{i,j}^{m,n} \rangle \end{align} \nonumber
where the SCF orbital energies are denoted $\epsilon_k \text{ and } \Phi_{i,j}^{m,n}$ represents a CSF that is doubly excited relative to $\Phi$. Thus, only doubly excited CSFs contribute to the first-order wavefunction; as a result, the energy E is given through second order as:
\begin{align} E &= \langle \Phi \big| \text{H}^0 \big| \Phi \rangle + \langle \Phi \Big| \text{ H - H}^0 \big| \Phi \rangle + \langle \Phi \big| \text{ H - H}^0 \big| \psi^1 \rangle \ &= \langle \Phi \big| \text{ H } \big| \Phi \rangle - \sum\limits_{i<j,m<n} \dfrac{\big| \langle \Phi_{i,j}^{m,n} \big| \text{ H - H}^0 \big| \Phi \rangle \big|^2}{\epsilon_m - \epsilon_i + \epsilon_n -\epsilon_j} \ &= \text{E}_{\text{SCF}} - \sum\limits_{i<j,m<n} \dfrac{\big| \langle i,j \big| g \big| m,n \rangle - \langle i,j \big| g \big| n,m \rangle \big|^2}{\epsilon_m - \epsilon_i + \epsilon_n - \epsilon_j} \ &= \text{E}^0 + \text{E}^1 + \text{E}^2. \end{align} \nonumber
These contributions have been expressed, using the SC rules, in terms of the two-electron integrals $\langle \text{i,j} \big| g \big|\text{m,n} \rangle$ coupling the excited spin-orbitals to the spin-orbitals from which electrons were excited as well as the orbital energy differences $[ \epsilon_m - \epsilon_i + \epsilon_n - \epsilon_j ]$ accompanying such excitations. In this form, it becomes clear that major contributions to the correlation energy of the pair of occupied orbitals $\phi_i \phi_j$ are made by double excitations into virtual orbitals $\phi_m \phi_n$ that have large coupling (i.e., large $\langle i,j \big| \text{ g }\big| m,n \rangle$ integrals) and small orbital energy gaps, $[ \epsilon_m - \epsilon_i + \epsilon_n - \epsilon_j ]$.
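The structure of the $E^2$ sum — loops over occupied pairs i<j and virtual pairs m<n, antisymmetrized integrals in the numerator, orbital-energy gaps in the denominator — can be transcribed directly. In this sketch the 'integrals' are random placeholders; a real code would obtain them from an AO-to-MO integral transformation:

```python
import itertools
import numpy as np

# Transcription of the second-order (MP2-like) energy sum over spin-orbitals.
# Orbital energies and integrals are invented stand-ins chosen so that all
# denominators are positive, as they are for a closed-shell SCF reference.
rng = np.random.default_rng(1)
n_occ, n_virt = 4, 6
nso = n_occ + n_virt
eps = np.concatenate([np.sort(rng.uniform(-2.0, -0.5, n_occ)),
                      np.sort(rng.uniform(0.5, 3.0, n_virt))])
g = rng.uniform(-0.1, 0.1, (nso, nso, nso, nso))  # placeholder <p,q|g|r,s>

e2 = 0.0
for i, j in itertools.combinations(range(n_occ), 2):
    for m, n in itertools.combinations(range(n_occ, nso), 2):
        num = g[i, j, m, n] - g[i, j, n, m]        # antisymmetrized integral
        denom = eps[m] - eps[i] + eps[n] - eps[j]  # positive energy gap
        e2 -= num ** 2 / denom
```

Because every denominator is positive, each term lowers the energy, so E$^2$ is necessarily negative for a ground-state SCF reference.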
In higher order corrections to the wavefunction and to the energy, contributions from CSFs that are singly, triply, etc. excited relative to $\Phi$ appear, and additional contributions from the doubly excited CSFs also enter. It is relatively common to carry MPPT/MBPT calculations (see the references given above in Chapter 19.I.3 where the contributions of the Pople and Bartlett groups to the development of MPPT/MBPT are documented) through to third order in the energy (whose evaluation can be shown to require only $\psi^0 \text{ and } \psi^1$). The entire GAUSSIAN-8X series of programs, which have been used in thousands of important chemical studies, calculate E through third order in this manner.
In addition to being size-extensive and not requiring one to specify input beyond the basis set and the dominant CSF, the MPPT/MBPT approach is able to include the effect of all CSFs (that contribute to any given order) without having to find any eigenvalues of a matrix. This is an important advantage because matrix eigenvalue determination, which is necessary in MCSCF and CI calculations, requires computer time in proportion to the third power of the dimension of the $\text{H}_{I,J}$ matrix. Despite all of these advantages, it is important to remember the primary disadvantages of the MPPT/MBPT approach; its energy is not an upper bound to the true energy and it may not be able to treat cases for which two or more CSFs have equal or nearly equal amplitudes because it obtains the amplitudes of all but the dominant CSF from perturbation theory formulas that assume the perturbation is 'small'.
The Coupled-Cluster Method
The implementation of the Coupled-Cluster method begins much as in the MPPT/MBPT case; one selects a reference CSF that is used in the SCF process to generate a set of spin-orbitals to be used in the subsequent correlated calculation. The set of working equations of the Coupled-Cluster technique can be written explicitly by introducing the form of the so-called cluster operator T,
$\text{T} = \sum\limits_{i,m}t_i^m m^+ i + \sum\limits_{i,j,m,n}t_{i,j}^{m,n}m^+ n^+ j\, i + ..., \nonumber$
where the combination of operators $m^+ i$ denotes creation of an electron in virtual spin-orbital $\phi_m$ and removal of an electron from occupied spin-orbital $\phi_i$ to generate a single excitation. The operation $m^+ n^+ j\, i$ therefore represents a double excitation from $\phi_i \phi_j$ to $\phi_m \phi_n$. Expressing the cluster operator T in terms of the amplitudes $t_i^m \text{, } t_{i,j}^{m,n} \text{, }$ etc. for singly, doubly, etc. excited CSFs, and expanding the exponential operators in
$e^{-T} \text{ H } e^{T} \nonumber$
one obtains:
$\langle \Phi_i^m \big| \text{ H + [H,T] + }\dfrac{1}{2} \text{[[H,T],T] + }\dfrac{1}{6}\text{[[[H,T],T],T] + }\dfrac{1}{24} \text{[[[[H,T],T],T],T] }\big| \Phi \rangle = 0; \nonumber$
$\langle \Phi_{i,j}^{m,n} \big| \text{ H + [H,T] + }\dfrac{1}{2} \text{[[H,T],T] + }\dfrac{1}{6}\text{[[[H,T],T],T] + }\dfrac{1}{24} \text{[[[[H,T],T],T],T] }\big| \Phi \rangle = 0; \nonumber$
$\langle \Phi_{i,j,k}^{m,n,p} \big| \text{ H + [H,T] + }\dfrac{1}{2} \text{[[H,T],T] + }\dfrac{1}{6}\text{[[[H,T],T],T] + }\dfrac{1}{24} \text{[[[[H,T],T],T],T] }\big| \Phi \rangle = 0; \nonumber$
and so on for higher order excited CSFs. It can be shown, because of the one- and two-electron operator nature of H, that the expansion of the exponential operators truncates exactly at the fourth power; that is, terms such as [[[[[H,T],T],T],T],T] and higher commutators vanish identically (this is demonstrated in Chapter 4 of Second Quantization Based Methods in Quantum Chemistry , P. Jørgensen and J. Simons, Academic Press, New York (1981)).
As a result, the exact Coupled-Cluster equations are quartic equations for the $t_i^m \text{, } t_{i,j}^{m,n} \text{, }$ etc. amplitudes. Although it is a rather formidable task to evaluate all of the commutator matrix elements appearing in the above Coupled-Cluster equations, it can be and has been done (the references given above to Purvis and Bartlett are especially relevant in this context). The result is to express each such matrix element, via the Slater-Condon rules, in terms of one- and two-electron integrals over the spin-orbitals used in determining $\Phi$, including those in $\Phi$ itself and the 'virtual' orbitals not in $\Phi$.
In general, these quartic equations must then be solved in an iterative manner and are susceptible to convergence difficulties that are similar to those that arise in MCSCF-type calculations. In any such iterative process, it is important to start with an approximation (to the t amplitudes, in this case) which is reasonably close to the final converged result. Such an approximation is often achieved, for example, by neglecting all of the terms that are nonlinear in the t amplitudes (because these amplitudes are assumed to be less than unity in magnitude). This leads, for the Coupled-Cluster working equations obtained by projecting onto the doubly excited CSFs, to:
$\langle \text{ i,j } \big| \text{ g } \big| \text{ m,n } \rangle ' + \left[ \epsilon_m -\epsilon_i +\epsilon_n -\epsilon_j \right] t_{i,j}^{m,n} + \sum\limits_{i',j',m',n'} \langle \Phi_{i,j}^{m,n} \big| \text{ H - H}^0 \big| \Phi_{i',j'}^{m',n'} \rangle t_{i',j'}^{m',n'} = 0, \nonumber$
where the notation $\langle \text{ i,j }\big| \text{ g }\big| \text{ m,n }\rangle '$ is used to denote the two-electron integral difference $\langle \text{ i,j } \big| \text{ g } \big| \text{ m,n } \rangle \text{ - } \langle \text{ i,j } \big| \text{ g } \big| \text{ n,m } \rangle$. If, in addition, the factors that couple different doubly excited CSFs are ignored (i.e., the sum over i',j',m',n'), the equations for the t amplitudes reduce to the equations for the CSF amplitudes of the first-order MPPT/MBPT wavefunction:
$t_{i,j}^{m,n} = - \dfrac{\langle \text{ i,j }\big| \text{ g } \big| \text{ m,n } \rangle '}{\epsilon_m -\epsilon_i +\epsilon_n -\epsilon_j} \nonumber$
As Bartlett and Pople have both demonstrated, there is, in fact, a close relationship between the MPPT/MBPT and Coupled-Cluster methods when the Coupled-Cluster equations are solved iteratively starting with such an MPPT/MBPT-like initial 'guess' for these double-excitation amplitudes.
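The iterate-from-the-MP2-guess idea can be illustrated with a deliberately oversimplified one-variable caricature of a quartic amplitude equation; all coefficients below are invented, and only the iteration pattern mirrors the real procedure:

```python
# One-variable caricature of solving a quartic amplitude equation
#   v + D*t + a*t**2 + b*t**3 + c*t**4 = 0
# by fixed-point iteration from the MPPT/MBPT-like guess t = -v/D
# (v plays the role of the antisymmetrized integral, D the orbital-energy gap).
v, D, a, b, c = 0.05, 1.0, 0.3, 0.1, 0.05

t = -v / D                                    # first-order starting guess
for _ in range(100):
    t = -(v + a * t**2 + b * t**3 + c * t**4) / D   # move nonlinear terms to RHS

residual = v + D * t + a * t**2 + b * t**3 + c * t**4
```

Convergence here is rapid precisely because the nonlinear terms are small compared with D·t, which is the same 'small amplitude' condition under which the MPPT/MBPT guess is good; when two CSFs have comparable weight, D is effectively small and such iterations degrade.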
The Coupled-Cluster method, as presented here, suffers from the same drawbacks as the MPPT/MBPT approach; its energy is not an upper bound and it may not be able to accurately describe wavefunctions which have two or more CSFs with approximately equal amplitude. Moreover, solution of the non-linear Coupled-Cluster equations may be difficult and slowly (if at all) convergent. It has the same advantages as the MPPT/MBPT method; its energy is size-extensive, it requires no large matrix eigenvalue solution, and its energy and wavefunction are determined once one specifies the basis and the dominant CSF.
Density Functional Methods
These approaches provide alternatives to the conventional tools of quantum chemistry. The CI, MCSCF, MPPT/MBPT, and CC methods move beyond the single-configuration picture by adding to the wave function more configurations whose amplitudes they each determine in their own way. This can lead to a very large number of CSFs in the correlated wave function, and, as a result, a need for extraordinary computer resources.
The density functional approaches are different. Here one solves a set of orbital-level equations
$\left[ -\dfrac{\hbar^2}{2m_e}\nabla^2 -\sum\limits_A \dfrac{Z_Ae^2}{|\textbf{r} - \textbf{R}_A|} + \int\dfrac{\rho (\textbf{r}')e^2}{|\textbf{r}-\textbf{r}'|}\textbf{dr}' + \text{U}(\textbf{r}) \right] \phi_i = \epsilon_i\phi_i \nonumber$
in which the orbitals {$\phi_i$} 'feel' potentials due to the nuclear centers (having charges Z$_{\text{A}}$), Coulombic interaction with the total electron density $\rho$(r'), and a so-called exchange-correlation potential denoted U(r). The particular electronic state for which the calculation is being performed is specified by forming a corresponding density $\rho$(r'). Before going further in describing how DFT calculations are carried out, let us examine the origins underlying this theory.
The so-called Hohenberg-Kohn theorem states that the ground-state electron density $\rho$(r) describing an N-electron system uniquely determines the potential V(r) in the Hamiltonian
$\text{H } = \sum\limits_j \left[ -\dfrac{\hbar^2}{2m_e}\nabla_j^2 + \text{V(}\textbf{r}_j\text{)} \right] + \dfrac{e^2}{2} \sum\limits_{j \ne k} \dfrac{1}{r_{j,k}} , \nonumber$
and, because H determines the ground-state energy and wave function of the system, the ground-state density $\rho (\textbf{r})$ determines the ground-state properties of the system. The proof of this theorem proceeds as follows:
• $\rho (\textbf{r}) \text{ determines N because } \int \rho (\textbf{r}) d^3r = \text{N} .$
• Assume that there are two distinct potentials (aside from an additive constant that simply shifts the zero of total energy) V(r) and V'(r) which, when used in H and H', respectively, to solve for a ground state produce $\text{E}_0 \text{, } \psi \text{(}\textbf{r}\text{) and E}_0 ' \text{, } \psi \text{'(}\textbf{r}\text{) } \nonumber$ that have the same one-electron density: $\int \big| \psi \big|^2 \text{dr}_2 \text{ dr}_3 \text{ ... dr}_{\text{N}} = \rho (\textbf{r}) = \int \big|\psi '\big|^2 \text{dr}_2 \text{ dr}_3 \text{ ... dr}_{\text{N}} \nonumber$
• If we think of $\psi '$ as a trial variational wavefunction for the Hamiltonian H, we know that $\text{E}_0 < \langle \psi ' \big| \text{H}\big| \psi '\rangle = \langle \psi '\big| \text{H'} \big| \psi ' \rangle + \int \rho(\textbf{r})\left[V(\textbf{r}) - V'(\textbf{r}) \right]\text{d}^3\textbf{r} = \text{E}_0 ' + \int \rho (\textbf{r}) \left[ V(\textbf{r}) - V'(\textbf{r}) \right]\text{d}^3\textbf{r} \label{B3}$
• Similarly, taking $\psi$ as a trial function for the Hamiltonian H', one finds that $\text{E}_0 ' < \text{E}_0 + \int \rho(\textbf{r}) \left[ V'(\textbf{r}) - V(\textbf{r}) \right] \text{d}^3\textbf{r}. \label{B4}$
• Adding Equations \ref{B3} and \ref{B4} gives $\text{E}_0 + \text{E}_0 ' < \text{E}_0 ' + \text{E}_0 , \nonumber$ a clear contradiction (the two integral terms cancel because they differ only in sign).
Hence, there cannot be two distinct potentials V and V' that give the same ground-state $\rho (\textbf{r})$. So, the ground-state density $\rho (\textbf{r})$ uniquely determines N and V, and thus H, and therefore $\psi \text{ and E}_0$. Furthermore, because $\psi$ determines all properties of the ground state, $\rho (\textbf{r})$, in principle, determines all such properties. This means that even the kinetic energy and the electron-electron interaction energy of the ground state are determined by $\rho (\textbf{r})$. It is easy to see that $\int \rho (\textbf{r}) \text{V(}\textbf{r}\text{)d}^3\textbf{r} = \text{V[}\rho ]$ gives the average value of the electron-nuclear (plus any additional one-electron additive potential) interaction in terms of the ground-state density $\rho (\textbf{r})$, but how are the kinetic energy T[$\rho$] and the electron-electron interaction energy V$_{ee}[\rho ]$ expressed in terms of $\rho$?
The main difficulty with DFT is that the Hohenberg-Kohn theorem shows that the ground-state values of T, V$_{ee}$, V, etc. are all unique functionals of the ground-state $\rho$ (i.e., that they can, in principle, be determined once $\rho$ is given), but it does not tell us what these functional relations are.
To see how it might make sense that a property such as the kinetic energy, whose operator $-\frac{\hbar^2}{2m_e}\nabla^2$ involves derivatives, can be related to the electron density, consider a simple system of N non-interacting electrons moving in a three-dimensional cubic “box” potential. The energy states of such electrons are known to be
$E = \left( \dfrac{h^2}{8m_e L^2} \right) \left( n_x^2 + n_y^2 + n_z^2 \right), \nonumber$
where L is the length of the box along the three axes, and $n_x \text{, } n_y \text{, and } n_z$ are the quantum numbers describing the state. We can view $n_x^2 \text{ + } n_y^2\text{ + } n_z^2 = R^2$ as defining the squared radius of a sphere in three dimensions, and we realize that the density of quantum states in this space is one state per unit volume in the $n_x \text{, } n_y \text{, } n_z$ space. Because $n_x \text{, } n_y \text{, and } n_z$ must be positive integers, the volume covering all states with energy less than or equal to a specified energy E = $\left(\frac{h^2}{8m_eL^2}\right)R^2 \text{ is } \frac{1}{8}$ the volume of the sphere of radius R:
$\Phi (\text{E}) = \dfrac{1}{8}\left( \dfrac{4\pi}{3}\right)R^3 = \left( \dfrac{\pi}{6}\right) \sqrt{8m_e E\dfrac{L^2}{h^2}}^3 . \nonumber$
Since there is one state per unit of such volume, $\Phi$(E) is also the number of states with energy less than or equal to E, and is called the integrated density of states. The number of states g(E) dE with energy between E and E+dE, the density of states, is the derivative of $\Phi$:
$g(E) = \dfrac{d\Phi}{dE} = \dfrac{\pi}{4}\sqrt{8m_e\dfrac{L^2}{h^2}}^3 \sqrt{E}. \nonumber$
If we calculate the total energy for N electrons, with the states having energies up to the so-called Fermi energy (i.e., the energy of the highest occupied molecular orbital HOMO) doubly occupied, we obtain the ground-state energy:
$E_0 = 2\int\limits_0^{E_F}g(E)EdE = \dfrac{8\pi}{5}\sqrt{\dfrac{2m_e}{h^2}}^3L^3\sqrt{E_F}^5 . \nonumber$
The total number of electrons N can be expressed as
$N = 2\int\limits_0^{E_F}g(E)dE = \dfrac{8\pi}{3}\sqrt{\dfrac{2m_e}{h^2}}^3 L^3 \sqrt{E_F}^3 , \nonumber$
which can be solved for $E_F$ in terms of N to then express $E_0$ in terms of N instead of $E_F$:
$E_0 = \dfrac{3h^2}{10m_e}\sqrt[3]{\dfrac{3}{8\pi}}^2L^3 \sqrt[3]{\dfrac{N}{L^3}}^5 . \nonumber$
This gives the total energy, which is also the kinetic energy in this case because the potential energy is zero within the “box”, in terms of the electron density $\rho$(x,y,z) = ($\frac{N}{L^3}$ ). It therefore may be plausible to express kinetic energies in terms of electron densities $\rho(\textbf{r}$), but it is by no means clear how to do so for “real” atoms and molecules with electron-nuclear and electron-electron interactions operative.
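The counting argument behind $\Phi$(E) can be checked numerically by comparing a brute-force count of lattice points against the octant-of-a-sphere estimate; the surface corrections neglected in the continuum formula show up as a few-percent discrepancy at modest R:

```python
import math

# Numerical check of the integrated density of states: count box states with
# nx^2 + ny^2 + nz^2 <= R^2 (positive integers only) and compare with the
# continuum estimate (pi/6) R^3 derived in the text.
def count_states(R):
    nmax = int(R) + 1
    return sum(1
               for nx in range(1, nmax + 1)
               for ny in range(1, nmax + 1)
               for nz in range(1, nmax + 1)
               if nx * nx + ny * ny + nz * nz <= R * R)

R = 40.0
exact_count = count_states(R)
continuum = math.pi / 6.0 * R ** 3
```

The brute-force count falls a few percent below $(\pi/6)R^3$ because the exclusion of the coordinate planes ($n \ge 1$) removes a surface layer of states, a correction that vanishes relative to $R^3$ as R grows.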
In one of the earliest DFT models, the Thomas-Fermi theory, the kinetic energy of an atom or molecule is approximated using the above kind of treatment on a “local” level. That is, for each volume element in r space, one assumes the expression given above to be valid, and then one integrates over all r to compute the total kinetic energy:
$T_{TF}[\rho ] = \int \dfrac{3h^2}{10m_e}\sqrt[3]{\dfrac{3}{8\pi}}^2 \sqrt[3]{\rho(\textbf{r})}^5 \text{d}^3\textbf{r} = C_F\int \sqrt[3]{\rho (\textbf{r})}^5 \text{d}^3\textbf{r} , \nonumber$
where the last equality simply defines the C$_F$ constant (which is 2.8712 in atomic units). Ignoring the correlation and exchange contributions to the total energy, this T is combined with the electron-nuclear V and Coulombic electron-electron potential energies to give the Thomas-Fermi total energy:
$E_{0,TF}[\rho ] = C_F \int \sqrt[3]{\rho (\textbf{r})}^5d^3\textbf{r} + \int V(\textbf{r})\rho (\textbf{r}) d^3\textbf{r} + \dfrac{e^2}{2}\int \rho (\textbf{r})\dfrac{\rho (\textbf{r}')}{|\textbf{r}-\textbf{r}' |}d^3\textbf{r} d^3\textbf{r}' . \nonumber$
This expression is an example of how E$_0$ is given as a local density functional approximation (LDA). The term local means that the energy is given as a functional (i.e., a function of $\rho$) which depends on $\rho(\textbf{r})$ at each point in space separately, not on the values of $\rho$ at two or more points simultaneously (nor on derivatives of $\rho$).
Unfortunately, the Thomas-Fermi energy functional does not produce results that are of sufficiently high accuracy to be of great use in chemistry. What is missing in this theory are a. the exchange energy and b. the correlation energy; moreover, the kinetic energy is treated only in the approximate manner described.
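A concrete illustration of how rough the Thomas-Fermi kinetic energy is: evaluating $T_{TF}[\rho]$ for the hydrogen-atom ground-state density, whose exact kinetic energy is 0.5 hartree. This is a sketch in atomic units; the radial grid parameters are arbitrary choices:

```python
import math

# Thomas-Fermi kinetic energy T_TF = C_F * integral rho^(5/3) d^3r on a
# radial grid for the hydrogen-atom ground-state density rho(r) = exp(-2r)/pi
# (atomic units).  Exact kinetic energy: 0.5 hartree.
C_F = 0.3 * (3.0 * math.pi ** 2) ** (2.0 / 3.0)   # 2.8712... a.u.

dr, r_max = 1.0e-4, 30.0
T_TF, r = 0.0, dr
while r < r_max:
    rho = math.exp(-2.0 * r) / math.pi
    T_TF += C_F * rho ** (5.0 / 3.0) * 4.0 * math.pi * r * r * dr
    r += dr
```

The local estimate comes out near 0.29 hartree, well short of 0.5, which is one quantitative face of the "not of sufficiently high accuracy" verdict above.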
In the book by Parr and Yang, it is shown how Dirac was able to address the exchange energy for the 'uniform electron gas' (N Coulomb interacting electrons moving in a uniform positive background charge whose magnitude balances the charge of the N electrons). If the exact expression for the exchange energy of the uniform electron gas is applied on a local level, one obtains the commonly used Dirac local density approximation to the exchange energy:
$\text{E}_{\text{ex, Dirac}}[\rho ] = - C_x \int \sqrt[3]{\rho (\textbf{r})}^4 d^3\textbf{r} , \nonumber$
with $C_x = \frac{3}{4}\sqrt[3]{\frac{3}{\pi}} = 0.7386$ in atomic units. Adding this exchange energy to the Thomas-Fermi total energy E$_{0,TF} [\rho]$ gives the so-called Thomas-Fermi-Dirac (TFD) energy functional.
Because electron densities vary rather strongly spatially near the nuclei, corrections to the above approximations to T[$\rho$] and E$_{ex,Dirac}$ are needed. One of the more commonly used so-called gradient-corrected approximations is that invented by Becke, and referred to as the Becke88 exchange functional:
$E_{ex}(\text{Becke88}) = E_{ex,\text{Dirac}}[\rho ] -\gamma \int x^2\sqrt[3]{\rho}^4\left[ \dfrac{1}{1 + 6\gamma x \sinh^{-1}(x)} \right] \textbf{dr,} \nonumber$
where x = $\frac{1}{\sqrt[3]{\rho}^4} \big|\nabla \rho \big| \text{, and } \gamma$ is a parameter chosen so that the above exchange energy can best reproduce the known exchange energies of specific electronic states of the inert gas atoms (Becke finds $\gamma$ to equal 0.0042). A common gradient correction to the earlier T[$\rho$] is called the Weizsacker correction and is given by
$\delta\text{T}_{\text{Weizsacker}} = \dfrac{1}{72}\dfrac{\hbar^2}{m_e}\int \dfrac{\big| \nabla \rho (\textbf{r}) \big|^2}{\rho (\textbf{r})} d\textbf{r}. \nonumber$
Although the above discussion suggests how one might compute the ground-state energy once the ground-state density $\rho (\textbf{r})$ is given, one still needs to know how to obtain $\rho$. Kohn and Sham (KS) introduced a set of so-called KS orbitals obeying the following equation:
$\left[ -\dfrac{1}{2}\nabla^2 + V(\textbf{r}) + e^2\int \dfrac{\rho(\textbf{r}') }{\big|\textbf{r}-\textbf{r}'\big|} \textbf{dr}' + U_{xc}(\textbf{r}) \right] \phi_j = \epsilon_j \phi_j , \nonumber$
where the so-called exchange-correlation potential $U_{xc} (\textbf{r}) = \dfrac{\delta\text{E}_{xc}[\rho ]}{\delta\rho (\textbf{r})}$ could be obtained by functional differentiation if the exchange-correlation energy functional $\text{E}_{xc}[\rho ]$ were known. KS also showed that the KS orbitals {$\phi_j$} could be used to compute the density $\rho$ by simply adding up the orbital densities multiplied by orbital occupancies n$_j$ :
$\rho (\textbf{r} ) = \sum\limits_j n_j \big| \phi_j (\textbf{r})\big|^2 . \nonumber$
(here $n_j$ = 0, 1, or 2 is the occupation number of the orbital $\phi_j$ in the state being studied), and that the kinetic energy should be calculated as
$\text{T} = \sum\limits_j n_j \langle \phi_j (\textbf{r}) \big| -\dfrac{1}{2} \nabla^2 \big| \phi_j (\textbf{r}) \rangle . \nonumber$
The same investigations of the idealized 'uniform electron gas' that identified the Dirac exchange functional found that the correlation energy (per electron) could also be written exactly as a function of the electron density $\rho$ of the system, but only in two limiting cases: the high-density limit (large $\rho$) and the low-density limit. There still exists no exact expression for the correlation energy, even for the uniform electron gas, that is valid at arbitrary values of $\rho$. Therefore, much work has been devoted to creating efficient and accurate interpolation formulas connecting the low- and high-density uniform electron gas expressions. One such expression is
$\text{E}_c [\rho ] = \int \rho (\textbf{r})\epsilon_c(\rho )\text{d}\textbf{r} , \nonumber$
where
$\epsilon_c(\rho ) = \dfrac{A}{2}\left[ \ln\left( \dfrac{x^2}{X} \right) + \dfrac{2b}{Q} \tan^{-1}\left( \dfrac{Q}{2x + b} \right) -\dfrac{bx_0}{X_0} \left[ \ln\left( \dfrac{(x-x_0)^2}{X} \right) + \dfrac{2(b+2x_0)}{Q} \tan^{-1}\left( \dfrac{Q}{2x+b} \right) \right] \right] \nonumber$
is the correlation energy per electron. Here $x = \sqrt{r_s} \text{, } X=x^2 +bx+c \text{, } X_0 = x_0^2 + bx_0 + c \text{, and } Q = \sqrt{4c - b^2}$, with A = 0.0621814, $x_0$ = -0.409286, b = 13.0720, and c = 42.7198. The parameter $r_s$ is how the density $\rho$ enters, since $\frac{4}{3} \pi r_s^3$ is equal to $\frac{1}{\rho}$; that is, $r_s$ is the radius of a sphere whose volume is the effective volume occupied by one electron. A reasonable approximation to the full $\text{E}_{xc}[\rho ]$ would contain the Dirac (and perhaps gradient-corrected) exchange functional plus the above $\text{E}_c[\rho ]$, but there are many alternative approximations to the exchange-correlation energy functional. Currently, many workers are doing their best to "cook up" functionals for the correlation and exchange energies, but no one has yet invented functionals that are so reliable that most workers agree to use them.
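The interpolation formula can be coded directly. Since the printed equation had to be reconstructed here (it matches the Vosko-Wilk-Nusair parametrization, from which the quoted parameter values come), treat this as a sketch to be checked against a trusted implementation before relying on it:

```python
import math

# Correlation energy per electron of the uniform electron gas, coded from the
# interpolation formula above (reconstructed; assumed to be the VWN form).
# Atomic units throughout; eps_c(rs) is a small negative number whose
# magnitude shrinks as the density is lowered (larger rs).
A, x0, b, c = 0.0621814, -0.409286, 13.0720, 42.7198

def eps_c(rs):
    x = math.sqrt(rs)
    X = x * x + b * x + c
    X0 = x0 * x0 + b * x0 + c
    Q = math.sqrt(4.0 * c - b * b)
    atan_q = math.atan(Q / (2.0 * x + b))
    return (A / 2.0) * (
        math.log(x * x / X)
        + (2.0 * b / Q) * atan_q
        - (b * x0 / X0) * (
            math.log((x - x0) ** 2 / X)
            + (2.0 * (b + 2.0 * x0) / Q) * atan_q
        )
    )
```

Evaluating it at a few $r_s$ values reproduces the expected qualitative behavior: a few hundredths of a hartree of correlation per electron at metallic densities, decaying smoothly toward zero in the low-density limit.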
To summarize, in implementing any DFT, one usually proceeds as follows:
1. An atomic orbital basis is chosen in terms of which the KS orbitals are to be expanded.
2. Some initial guess is made for the LCAO-KS expansion coefficients $\text{C}_{j,a}: \phi_j = \sum\limits_a \text{C}_{j,a}\chi_a$.
3. The density is computed as $\rho (\textbf{r}) = \sum\limits_j n_j \big|\phi_j (\textbf{r})\big|^2$. Often, $\rho (\textbf{r})$ is expanded in an atomic orbital basis, which need not be the same as the basis used for the $\phi_j$, and the expansion coefficients of $\rho$ are computed in terms of those of the $\phi_j$. It is also common to use an atomic orbital basis to expand $\sqrt[3]{\rho (\textbf{r})} \text{ which, together with } \rho$, is needed to evaluate the exchange-correlation functional’s contribution to E$_0$.
4. The current iteration's density is used in the KS equations to determine the Hamiltonian $\left[-\frac{1}{2}\nabla^2 \text{ + V}(\textbf{r}) + e^2 \int \frac{\rho (\textbf{r}')}{|\textbf{r}-\textbf{r}'|} \text{ d}\textbf{r}' + \text{ U}_{xc}(\textbf{r}) \right]$ whose "new" eigenfunctions {$\phi_j$} and eigenvalues {$\epsilon_j$} are found by solving the KS equations.
5. These new $\phi_j$ are used to compute a new density, which, in turn, is used to solve a new set of KS equations. This process is continued until convergence is reached (i.e., until the $\phi_j$ used to determine the current iteration's $\rho$ are the same $\phi_j$ that arise as solutions on the next iteration).
6. Once the converged $\rho (\textbf{r})$ is determined, the energy can be computed using the earlier expression
$\text{E} [\rho ] = \sum\limits_j n_j \langle \phi_j (\textbf{r})\big| -\dfrac{1}{2} \nabla^2 \big| \phi_j (\textbf{r})\rangle + \int V(\textbf{r}) \rho (\textbf{r}) \text{d}\textbf{r} + \dfrac{e^2}{2} \int \rho (\textbf{r})\dfrac{ \rho (\textbf{r}')}{\big|\textbf{r}-\textbf{r}'\big|}\text{d}\textbf{r }\text{d}\textbf{r}' + \text{E}_{xc}[\rho ]. \nonumber$
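The cycle in steps 2-5 can be sketched structurally. The tight-binding model and density-dependent potential below are pure inventions standing in for a real KS operator, but the solve-orbitals / rebuild-density / re-solve loop is the same:

```python
import numpy as np

# Structural sketch of the KS self-consistency cycle on a tiny tight-binding
# chain whose on-site potential U*rho stands in for the Hartree plus
# exchange-correlation terms.  The model and coupling U are invented purely
# to show the iteration pattern of steps 2-5.
n_sites, n_elec, U = 6, 4, 1.0
h0 = -(np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))  # fixed one-body part

rho = np.full(n_sites, n_elec / n_sites)              # step 2: initial guess
for iteration in range(500):
    h_ks = h0 + np.diag(U * rho)                      # step 4: build KS operator
    eps_ks, phi = np.linalg.eigh(h_ks)                # ... and solve it
    occ = phi[:, : n_elec // 2]                       # doubly occupy lowest orbitals
    rho_new = 2.0 * (occ ** 2).sum(axis=1)            # step 3: rho = sum n_j |phi_j|^2
    if np.abs(rho_new - rho).max() < 1e-12:           # step 5: converged?
        break
    rho = 0.5 * rho + 0.5 * rho_new                   # damped density update
```

The damped update (mixing old and new densities) is the same device used in production SCF/KS codes to tame the oscillations that a naive replacement of $\rho$ often produces.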
In closing this section, it should once again be emphasized that this area is currently undergoing explosive growth and much scrutiny. As a result, it is nearly certain that many of the specific functionals discussed above will be replaced in the near future by improved and more rigorously justified versions. It is also likely that extensions of DFT to excited states (many workers are actively pursuing this) will be placed on more solid ground and made applicable to molecular systems. Because the computational effort involved in these approaches scales much less strongly with basis set size than for conventional (SCF, MCSCF, CI, etc.) methods, density functional methods offer great promise and are likely to contribute much to quantum chemistry in the next decade.
Many physical properties of a molecule can be calculated as expectation values of a corresponding quantum mechanical operator. The evaluation of other properties can be formulated in terms of the "response" (i.e., derivative) of the electronic energy with respect to the application of an external field perturbation.
• 20.1: Calculations of Properties Other Than the Energy
Essentially all experimentally measured properties can be thought of as arising through the response of the system to some externally applied perturbation or disturbance. In turn, the calculation of such properties can be formulated in terms of the response of the energy E or wavefunction Ψ to a perturbation.
• 20.2: Ab Initio, Semi-Empirical, and Empirical Force Field Methods
Semi-empirical methods use experimental data or the results of ab initio calculations to determine some of the matrix elements or integrals needed to carry out their procedures. Each of these tools has advantages and limitations. Ab initio methods involve intensive computation and therefore tend to be limited to smaller atoms, molecules, radicals, and ions. Semi-empirical or empirical methods would be of little use on systems whose electronic properties have not been included in the data base.
20: Response Theory
There are, of course, properties other than the energy that are of interest to the practicing chemist. Dipole moments, polarizabilities, transition probabilities among states, and vibrational frequencies all come to mind. Other properties that are of importance involve operators whose quantum numbers or symmetry indices label the state of interest. Angular momentum and point group symmetries are examples of the latter properties; for these quantities the properties are precisely specified once the quantum number or symmetry label is given (e.g., for a $^3P$ state, the average value of $L^2$ is $\langle ^3 P \big| L^2 \big| ^3P \rangle = \hbar^2 1(1+1) = 2\hbar^2$).
Although it may be straightforward to specify what property is to be evaluated, often computational difficulties arise in carrying out the calculation. For some ab initio methods, these difficulties are less severe than for others. For example, to compute the electric dipole transition matrix element $\langle \Psi_2 \big| \textbf{ r }\big| \Psi_1 \rangle$ between two states $\Psi_1 \text{ and } \Psi_2$, one must evaluate the integral involving the one-electron dipole operator $\textbf{r} = \sum\limits_j e\textbf{r}_j - \sum\limits_a eZ_a\textbf{R}_a$; here the first sum runs over the N electrons and the second sum runs over the nuclei whose charges are denoted $Z_a$. To evaluate such transition matrix elements in terms of the Slater-Condon rules is relatively straightforward as long as $\Psi_1 \text{ and } \Psi_2$ are expressed in terms of Slater determinants involving a single set of orthonormal spin-orbitals. If $\Psi_1 \text{ and } \Psi_2$ have been obtained, for example, by carrying out separate MCSCF calculations on the two states in question, the energy-optimized spin-orbitals for one state will not be the same as the optimal spin-orbitals for the second state. As a result, the determinants in $\Psi_1 \text{ and those in } \Psi_2$ will involve spin-orbitals that are not orthonormal to one another. Thus, the SC rules cannot immediately be applied. Instead, a transformation of the spin-orbitals of $\Psi_1 \text{ and } \Psi_2$ to a single set of orthonormal functions must be carried out. This then expresses $\Psi_1 \text{ and } \Psi_2$ in terms of new Slater determinants over this new set of orthonormal spin-orbitals, after which the SC rules can be exploited.
In contrast, if $\Psi_1 \text{ and } \Psi_2$ are obtained by carrying out a CI calculation using a single set of orthonormal spin-orbitals (e.g., with $\Psi_1 \text{ and } \Psi_2$ formed from two different eigenvectors of the resulting secular matrix), the SC rules can immediately be used to evaluate the transition dipole integral.
Formulation of Property Calculations as Responses
Essentially all experimentally measured properties can be thought of as arising through the response of the system to some externally applied perturbation or disturbance. In turn, the calculation of such properties can be formulated in terms of the response of the energy E or wavefunction $\Psi$ to a perturbation. For example, molecular dipole moments $\mu$ are measured, via electric-field deflection, in terms of the change in energy
$\Delta \text{E} = \mu\cdot{\textbf{E}} + \dfrac{1}{2}\textbf{E}\cdot{\alpha}\cdot{\textbf{E}} + \dfrac{1}{6}\beta\cdot{\textbf{E}}\cdot{\textbf{E}}\cdot{\textbf{E}} + ... \nonumber$
caused by the application of an external electric field E which is spatially inhomogeneous, and thus exerts a force
$\textbf{F} = -\nabla \Delta E \nonumber$
on the molecule proportional to the dipole moment (good treatments of response properties for a wide variety of wavefunction types (i.e., SCF, MCSCF, MPPT/MBPT, etc.) are given in Second Quantization Based Methods in Quantum Chemistry , P. Jørgensen and J. Simons, Academic Press, New York (1981) and in Geometrical Derivatives of Energy Surfaces and Molecular Properties , P. Jørgensen and J. Simons, Eds., NATO ASI Series, Vol. 166, D. Reidel, Dordrecht (1985)).
To obtain expressions that permit properties other than the energy to be evaluated in terms of the state wavefunction $\Psi$, the following strategy is used:
1. The perturbation V = H-H$^0$ appropriate to the particular property is identified. For dipole moments ($\mu$), polarizabilities ($\alpha$), and hyperpolarizabilities ($\beta$), V is the interaction of the nuclei and electrons with the external electric field $V = \sum\limits_a Z_ae\textbf{R}_a\cdot{\textbf{E}} - \sum\limits_je\textbf{r}_j\cdot{\textbf{E}}. \nonumber$For vibrational frequencies, one needs the derivatives of the energy E with respect to deformation of the bond lengths and angles of the molecule, so V is the sum of all changes in the electronic Hamiltonian that arise from displacements $\delta\textbf{R}_a$ of the atomic centers $V = \sum_a (\nabla \textbf{R}_a H)\cdot{\delta \textbf{R}_a}. \nonumber$
2. A power series expansion of the state energy E, computed in a manner consistent with how $\Psi$ is determined (i.e., as an expectation value for SCF, MCSCF, and CI wavefunctions or as $\langle \Phi \big| \text{ H } \big| \Psi \rangle$ for MPPT/MBPT or as $\langle \Phi \big| e^{-T}\text{ H }e^{T}\big| \Phi \rangle$ for CC wavefunctions), is carried out in powers of the perturbation V: $\text{E = E}^0 + \text{E}^{(1)} + \text{E}^{(2)} + \text{E}^{(3)} + ... \nonumber$ In evaluating the terms in this expansion, the dependence of H = H$^0$+V and of $\Psi$ (which is expressed as a solution of the SCF, MCSCF, ..., or CC equations for H, not for H$^0$) must be included.
3. The desired physical property must be extracted from the power series expansion of $\Delta$ E in powers of V.
The MCSCF Response Case
The Dipole Moment
To illustrate how the above developments are carried out and to demonstrate how the results express the desired quantities in terms of the original wavefunction, let us consider, for an MCSCF wavefunction, the response to an external electric field. In this case, the Hamiltonian is given as the conventional one- and two-electron operators H$^0$ to which the above one-electron electric dipole perturbation V is added. The MCSCF wavefunction $\Psi$ and energy E are assumed to have been obtained via the MCSCF procedure with H=H$^0+\lambda \text{V, where } \lambda$ can be thought of as a measure of the strength of the applied electric field. The terms in the expansion of E($\lambda$) in powers of $\lambda$:
$\text{E} = \text{E(}\lambda = 0) + \lambda\left( \dfrac{\text{dE}}{\text{d}\lambda} \right)_0 + \dfrac{1}{2}\lambda^2 \left( \dfrac{\text{d}^2\text{E}}{\text{d}\lambda^2} \right)_0 + ... \nonumber$
are obtained by writing the total derivatives of the MCSCF energy functional with respect to $\lambda$ and evaluating these derivatives at $\lambda = 0$ (which is indicated by the subscript (..)0 on the above derivatives):
$\text{E}(\lambda = 0) = \langle \Psi (\lambda = 0)\big| \text{ H}^0\big|\Psi (\lambda = 0) \rangle = \text{E}^0 , \nonumber$
$\left( \dfrac{\text{dE}}{\text{d}\lambda} \right)_0 = \langle \Psi (\lambda = 0)\big| V\big| \Psi (\lambda = 0)\rangle + 2\sum\limits_J \left( \dfrac{\partial \text{C}_J}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0 \big| \Psi (\lambda = 0)\rangle + 2 \sum\limits_{i,a}\left( \dfrac{\partial \text{C}_{a,i}}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big| \Psi (\lambda = 0) \rangle + ... \nonumber$
$... + 2 \sum\limits_{\nu} \left( \dfrac{\partial \chi_{\nu}}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}} \big| \text{ H}^0 \big| \Psi (\lambda = 0) \rangle , \nonumber$
and so on for higher order terms. The factors of 2 in the last three terms come through using the hermiticity of H$^0$ to combine terms in which derivatives of $\Psi$ occur.
The first-order correction can be thought of as arising from the response of the wavefunction (as contained in its LCAO-MO and CI amplitudes and basis functions $\chi_{\nu}$) plus the response of the Hamiltonian to the external field. Because the MCSCF energy functional has been made stationary with respect to variations in the C$_J$ and C$_{i,a}$ amplitudes, the second and third terms above vanish:
$\dfrac{\partial \text{E}}{\partial \text{C}_J} = 2 \langle \dfrac{\partial \Psi}{\partial \text{C}_J}\big|\text{ H}^0\big|\Psi (\lambda = 0) \rangle = 0, \nonumber$
$\dfrac{\partial \text{E}}{\partial \text{C}_{a,i}} = 2 \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}}\big|\text{ H}^0\big|\Psi (\lambda = 0) \rangle = 0. \nonumber$
If, as is common, the atomic orbital bases used to carry out the MCSCF energy optimization are not explicitly dependent on the external field, the final term above also vanishes because $\left( \frac{\partial \chi_{\nu}}{\partial \lambda} \right)_0 = 0.$ Thus for the MCSCF case, the first-order response is given as the average value of the perturbation over the wavefunction with $\lambda = 0$:
$\left( \dfrac{\text{dE}}{\text{d} \lambda} \right)_0 = \langle \Psi (\lambda = 0)\big| V \big| \Psi (\lambda = 0) \rangle . \nonumber$
For the external electric field case at hand, this result says that the field-dependence of the state energy will have a linear term equal to
$\langle \Psi (\lambda = 0) \big| V \big| \Psi (\lambda = 0) \rangle = \langle \Psi \big| \sum\limits_a Z_ae\textbf{R}_a\cdot{e} - \sum\limits_je\textbf{r}_j\cdot{e}\big| \Psi \rangle , \nonumber$
where e is a unit vector in the direction of the applied electric field (the magnitude of the field $\lambda$ having already been removed in the power series expansion). Since the dipole moment is determined experimentally as the energy's slope with respect to field strength, this means that the dipole moment is given as:
$\mu = \langle \Psi \big| \sum\limits_a Z_a e \textbf{R}_a - \sum\limits_j e\textbf{r}_j \big| \Psi \rangle . \nonumber$
The Geometrical Force
These same techniques can be used to determine the response of the energy to displacements $\delta \text{R}_a$ of the atomic centers. In such a case, the perturbation is
$V = \sum\limits_a \delta \textbf{R}_a\cdot{\nabla_{\textbf{R}_a}}\left( -\sum\limits_i \dfrac{Z_a e^2}{\big| r_i - R_a \big|} \right) = -\sum\limits_a Z_a e^2\delta \textbf{R}_a\cdot{\sum\limits_i} \dfrac{( \textbf{r}_i - \textbf{R}_a )}{\big| r_i - R_a \big|^3} . \nonumber$
Here, the one-electron operator $\sum\limits_i \dfrac{( \textbf{r}_i - \textbf{R}_a )}{\big| r_i - R_a \big|^3}$ is referred to as the 'Hellmann-Feynman' force operator; it is the derivative of the Hamiltonian with respect to displacement of center-a in the x, y, or z direction. The expressions given above for E($\lambda$=0) and $\left( \frac{\text{dE}}{\text{d}\lambda} \right)_0$ can once again be used, but with the Hellmann-Feynman form for V. Once again, for the MCSCF wavefunction, the variational optimization of the energy gives
$\langle \dfrac{\partial \Psi}{\partial \text{C}_J}\big| \text{ H}^0\big| \Psi (\lambda = 0)\rangle = \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}}\big| \text{ H}^0\big| \Psi (\lambda = 0) \rangle = 0. \nonumber$
However, because the atomic basis orbitals are attached to the centers, and because these centers are displaced in forming V, it is no longer true that $\left( \frac{\partial \chi_{\nu}}{\partial \lambda} \right)_0 = 0;$ the variation in the wavefunction caused by movement of the basis functions now contributes to the first-order energy response. As a result, one obtains
$\left( \dfrac{\text{dE}}{\text{d}\lambda} \right)_0 = -\sum\limits_a Z_a e^2 \delta \textbf{R}_a\cdot{\langle} \Psi \big| \sum\limits_i \dfrac{(\textbf{r}_i - \textbf{R}_a)}{|r_i - R_a|^3} \big| \Psi \rangle + 2 \sum\limits_a \delta \textbf{R}_a\cdot{\sum\limits_{\nu}} (\nabla_{\textbf{R}_a}\chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big| \text{ H}^0\big| \Psi (\lambda = 0)\rangle . \nonumber$
The first contribution to the force
$\textbf{F}_a = - Z_a e^2 \langle \Psi \big| \sum\limits_i \dfrac{(\textbf{r}_i - \textbf{R}_a)}{|r_i - R_a|^3} \big| \Psi \rangle + 2\sum\limits_{\nu} (\nabla_{\textbf{R}_a}\chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}} \big| \text{ H}^0\big| \Psi (\lambda = 0)\rangle \nonumber$
along the x, y, and z directions for center-a involves the expectation value, with respect to the MCSCF wavefunction with $\lambda$ = 0, of the Hellmann-Feynman force operator. The second contribution gives the forces due to infinitesimal displacements of the basis functions on center-a. The evaluation of the latter contributions can be carried out by first realizing that
$\Psi = \sum\limits_J \text{C}_J \big| \phi_{J1} \phi_{J2} \phi_{J3} ... \phi_{Jn} ... \phi_{JN} \big| \nonumber$
with
$\phi_j = \sum\limits_{\mu}\text{C}_{\mu ,j}\chi_{\mu} \nonumber$
involves the basis orbitals through the LCAO-MO expansion of the $\phi_j$s. So the derivatives of the basis orbitals contribute as follows:
$\sum\limits_{\nu} (\nabla_{\textbf{R}_a} \chi_{\nu})\langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big| = \sum\limits_J \sum\limits_{j,\nu}\text{C}_J\text{C}_{\nu ,j} \langle \big| \phi_{J1} \phi_{J2} \phi_{J3} ... \nabla_{\textbf{R}_a}\chi_{\nu} ... \phi_{JN} \big|. \nonumber$
Each of these factors can be viewed as combinations of CSFs with the same $\text{C}_J \text{ and } \text{C}_{\nu ,j}$ coefficients as in $\Psi \text{ but with the } j^{th}$ spin-orbital involving basis functions that have been differentiated with respect to displacement of center-a. It turns out that such derivatives of Gaussian basis orbitals can be carried out analytically (giving rise to new Gaussians with one higher and one lower l-quantum number). When substituted into $\sum\limits_{\nu} (\nabla_{\textbf{R}_a} \chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big| \text{ H}^0\big| \Psi (\lambda =0) \rangle$, these basis derivative terms yield $\sum\limits_{\nu} (\nabla_{\textbf{R}_a}\chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big|\text{ H}^0\big| \Psi (\lambda = 0)\rangle = \sum\limits_J \sum\limits_{j,\nu} \text{C}_J\text{C}_{\nu ,j} \langle \big| \phi_{J1} \phi_{J2} \phi_{J3} ... \nabla_{\textbf{R}_a}\chi_{\nu} ... \phi_{JN}\big| \text{ H}^0\big| \Psi \rangle , \nonumber$ whose evaluation via the Slater-Condon rules is straightforward.
It is simply the expectation value of H$^0$ with respect to $\Psi$ (with the same density matrix elements that arise in the evaluation of $\Psi$'s energy) but with the one- and two-electron integrals over the atomic basis orbitals involving one of these differentiated functions: $\langle \chi_{\mu}\chi_{\nu}\big|\text{ g }\big|\chi_{\gamma}\chi_{\delta} \rangle \Longrightarrow \nabla_{\textbf{R}_a}\langle \chi_{\mu}\chi_{\nu}\big| \text{ g } \big| \chi_{\gamma}\chi_{\delta} \rangle = \nonumber$ $= \langle \nabla_{\textbf{R}_a}\chi_{\mu}\chi_{\nu}\big|\text{ g }\big|\chi_{\gamma}\chi_{\delta} \rangle + \langle \chi_{\mu} \nabla_{\textbf{R}_a}\chi_{\nu}\big|\text{ g }\big|\chi_{\gamma}\chi_{\delta} \rangle + \langle \chi_{\mu} \chi_{\nu}\big|\text{ g }\big| \nabla_{\textbf{R}_a} \chi_{\gamma}\chi_{\delta} \rangle + \langle \chi_{\mu} \chi_{\nu}\big|\text{ g }\big| \chi_{\gamma} \nabla_{\textbf{R}_a} \chi_{\delta} \rangle . \nonumber$ In summary, the force F$_a$ felt by the nuclear framework due to a displacement of center-a along the x, y, or z axis is given as $\textbf{F}_a = -Z_a e^2 \langle \Psi \big| \sum\limits_i \dfrac{(\textbf{r}_i - \textbf{R}_a)}{|r_i - R_a|^3}\big| \Psi \rangle + (\nabla_{\textbf{R}_a}\langle \Psi \big| \text{ H}^0 \big| \Psi \rangle ), \nonumber$ where the second term is the energy of $\Psi$ but with all atomic integrals replaced by integral derivatives: $\langle \chi_{\mu}\chi_{\nu}\big| \text{ g } \big| \chi_{\gamma} \chi_{\delta} \rangle \Longrightarrow \nabla_{\textbf{R}_a} \langle \chi_{\mu}\chi_{\nu}\big| \text{ g } \big| \chi_{\gamma} \chi_{\delta} \rangle .$
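The Hellmann-Feynman result can be illustrated on a tiny model. For a fully variational state whose basis does not depend on the perturbation parameter, dE/d$\lambda$ equals the expectation value of dH/d$\lambda$ with no wavefunction-response terms; the 2x2 parameterized 'Hamiltonian' below is invented purely for the sketch (with atom-centered basis sets and geometric displacements, the extra basis-derivative terms above would have to be added):

```python
import numpy as np

# Model illustration of the Hellmann-Feynman theorem: for an exact
# (variational) eigenvector of a parameterized matrix, dE/dlam equals
# <v| dH/dlam |v>. The matrix elements are invented for the sketch.
def H(lam):
    return np.array([[1.0 + lam, 0.3],
                     [0.3, 2.0 - 2.0 * lam]])

def E0(lam):
    return np.linalg.eigvalsh(H(lam))[0]      # ground-state eigenvalue

lam, h = 0.1, 1.0e-5
dE_numeric = (E0(lam + h) - E0(lam - h)) / (2 * h)

w, V = np.linalg.eigh(H(lam))
v = V[:, 0]                                   # exact variational state
dH = np.array([[1.0, 0.0],
               [0.0, -2.0]])                  # dH/dlam
dE_HF = v @ dH @ v                            # Hellmann-Feynman value
print(dE_numeric, dE_HF)                      # the two values agree closely
```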
Responses for Other Types of Wavefunctions
It should be stressed that the MCSCF wavefunction yields especially compact expressions for responses of E with respect to an external perturbation because of the variational conditions
$\langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0 \big|\Psi (\lambda = 0)\rangle = \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle = 0 \nonumber$
that apply. The SCF case, which can be viewed as a special case of the MCSCF situation, also admits these simplifications. However, the CI, CC, and MPPT/MBPT cases involve additional factors that arise because the above variational conditions do not apply (in the CI case, $\langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle = 0$ still applies, but the orbital condition $\langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle = 0$ does not because the orbitals are not varied to make the CI energy functional stationary).
Within the CC, CI, and MPPT/MBPT methods, one must evaluate the so-called responses of the C$_J$ and C$_{a,i}$ coefficients $\left( \frac{\partial \text{C}_J}{\partial \lambda} \right)_0$ and $\left( \frac{\partial \text{C}_{a,i}}{\partial \lambda} \right)_0$ that appear in the full energy response as (see above) $2 \sum\limits_J \left( \frac{\partial \text{C}_J}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle + 2 \sum\limits_{i,a} \left( \frac{\partial \text{C}_{a,i}}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle$. To do so requires solving a set of response equations that are obtained by differentiating whatever equations govern the $C_J \text{ and } C_{a,i}$ coefficients in the particular method (e.g., CI, CC, or MPPT/MBPT) with respect to the external perturbation. In the geometrical derivative case, this amounts to differentiating with respect to x, y, and z displacements of the atomic centers. These response equations are discussed in Geometrical Derivatives of Energy Surfaces and Molecular Properties , P. Jørgensen and J. Simons, Eds., NATO ASI Series, Vol. 166, D. Reidel, Dordrecht (1985). Their treatment is somewhat beyond the scope of this text, so they will not be dealt with further here.
The Use of Geometrical Energy Derivatives
1. Gradients as Newtonian Forces

The first energy derivative is called the gradient g and is the negative of the force F (with components along the $a^{th}$ center denoted $\textbf{F}_{\textbf{a}}$) experienced by the atomic centers: F = -g. These forces, as discussed in Chapter 16, can be used to carry out classical trajectory simulations of molecular collisions or other motions of large organic and biological molecules for which a quantum treatment of the nuclear motion is prohibitive. The second energy derivatives with respect to the x, y, and z directions of centers a and b (for example, the x, y component for centers a and b is $\text{H}_{ax,by} = \left( \dfrac{\partial ^2E}{\partial x_a \partial y_b} \right)$) form the Hessian matrix H. The elements of H give the local curvatures of the energy surface along the 3N cartesian directions. The gradient and Hessian can be used to systematically locate local minima (i.e., stable geometries) and transition states that connect one local minimum to another. At each of these stationary points, all forces and thus all elements of the gradient g vanish. At a local minimum, the H matrix has 5 or 6 zero eigenvalues corresponding to translational and rotational displacements of the molecule (5 for linear molecules; 6 for non-linear species) and 3N-5 or 3N-6 positive eigenvalues. At a transition state, H has one negative eigenvalue, 5 or 6 zero eigenvalues, and 3N-6 or 3N-7 positive eigenvalues.
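The eigenvalue-counting characterization above translates directly into a short routine; a tolerance plays the role of deciding which eigenvalues are the 5 or 6 translation-rotation zeros. The toy Hessian is hypothetical:

```python
import numpy as np

# Sketch: classify a stationary point by counting negative and
# near-zero Hessian eigenvalues; |eig| < tol stands in for the 5/6
# translation-rotation zeros. The toy Hessian below is hypothetical.
def classify(hessian, tol=1e-6):
    eigs = np.linalg.eigvalsh(hessian)
    n_neg  = int(np.sum(eigs < -tol))
    n_zero = int(np.sum(np.abs(eigs) <= tol))
    if n_neg == 0:
        kind = "local minimum"
    elif n_neg == 1:
        kind = "transition state"
    else:
        kind = f"order-{n_neg} saddle"
    return kind, n_neg, n_zero

Hm = np.diag([-0.5, 0.0, 1.2, 3.4])   # one negative, one zero eigenvalue
print(classify(Hm))                   # ('transition state', 1, 1)
```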
2. Transition State Rate Coefficients

The transition state theory of Eyring or its extensions due to Truhlar and coworkers (see, for example, D. G. Truhlar and B. C. Garrett, Ann. Rev. Phys. Chem. 35 , 159 (1984)) allow knowledge of the Hessian matrix at a transition state to be used to compute a rate coefficient k$_{\text{rate}}$ appropriate to the chemical reaction for which the transition state applies. More specifically, the geometry of the molecule at the transition state is used to compute a rotational partition function Q$^†_{\text{rot}}$ in which the principal moments of inertia $I_a \text{, } I_b \text{, and } I_c$ (see Chapter 13) are those of the transition state (the $^†$ symbol is, by convention, used to label the transition state): $Q_{\text{rot}}^{\dagger} = \prod\limits_{\text{n = a,b,c}}\sqrt{\dfrac{8\pi^2 I_nkT}{h^2}}, \nonumber$ where k is the Boltzmann constant and T is the temperature in K. The eigenvalues $\{ \omega_{\alpha}^2 \}$ of the mass-weighted Hessian matrix (see below) are used to compute, for each of the 3N-7 vibrations with real and positive $\omega_{\alpha}$ values, a vibrational partition function; these are combined to produce the transition-state vibrational partition function: $\text{Q}^{\dagger}_{\text{vib}} = \prod\limits_{\alpha = 1,3 N-7} \dfrac{e^{-\dfrac{\hbar \omega_{\alpha}}{2kT}}}{1-e^{-\dfrac{\hbar \omega_{\alpha}}{kT}}} . \nonumber$ The electronic partition function of the transition state is expressed in terms of the activation energy (the energy of the transition state relative to the electronic energy of the reactants) E$^{\dagger}$ as: $\text{Q}^{\dagger}_{\text{electronic}} = \omega^{\dagger} e^{ -\dfrac{ \text{E}^{\dagger} }{ kT } } \nonumber$ where $\omega^{\dagger}$ is the degeneracy of the electronic state at the transition state geometry.
In the original Eyring version of transition state theory (TST), the rate coefficient k$_{\text{rate}}$ is then given by: $k_{\text{rate}} = \dfrac{\text{kT}}{h}\omega^{\dagger}e^{ -\dfrac{E^{\dagger}}{kT} } \dfrac{ \text{Q}_{\text{rot}}^{\dagger} \text{Q}_{\text{vib}}^{\dagger} }{\text{Q}_{\text{reactants}}} \nonumber$ where $\text{Q}_{\text{reactants}}$ is the conventional partition function for the reactant materials. For example, in a bimolecular reaction such as: $\text{F + H}_2 \rightarrow \text{ FH + H}, \nonumber$ the reactant partition function $\text{Q}_{\text{reactants}} = \text{Q}_F \text{Q}_{H_2} \nonumber$ is written in terms of the translational and electronic (the degeneracy of the $^2$P state produces the overall degeneracy factor) partition functions of the F atom $\text{Q}_F = 2\sqrt{\dfrac{2\pi m_F kT}{h^2}}^3 \nonumber$ and the translational, electronic, rotational, and vibrational partition functions of the H$_2$ molecule $\text{Q}_{\text{H}_2} = \sqrt{\dfrac{2\pi m_{H_2}kT}{h^2}}^3 \dfrac{8\pi^2 I_{H_2} kT}{2h^2} \dfrac{e^{-\dfrac{\hbar \omega_{H_2}}{2kT}}}{1-e^{-\dfrac{\hbar \omega_{H_2}}{kT}}}. \nonumber$ The factor of 2 in the denominator of the H$_2$ molecule's rotational partition function is the "symmetry number" that must be inserted because of the identity of the two H nuclei. The overall rate coefficient k$_{\text{rate}}$ (with units sec$^{-1}$ because this is a rate per collision pair) can thus be expressed entirely in terms of energetic, geometrical, and vibrational information about the reactants and the transition state. Even within the extensions to Eyring's original model, such is the case. The primary difference in the more modern theories is that the transition state is not identified as the point on the potential energy surface at which the gradient vanishes and there is one negative Hessian eigenvalue. Instead, a so-called variational transition state (see the above reference by Truhlar and Garrett) is identified.
The geometry, energy, and local vibrational frequencies of this variational transition state are then used to compute, much as outlined above, k$_{\text{rate}}$.
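The Eyring expression can be assembled numerically once a barrier and harmonic frequencies are in hand. In the sketch below, the frequencies, barrier height, and the reduction to a two-mode vibrational partition-function ratio are all invented for illustration, and the rotational and translational factors are omitted for brevity:

```python
import numpy as np

# Eyring-style sketch with invented frequencies and barrier height;
# rotational/translational partition functions are omitted for brevity.
kB = 1.380649e-23        # J/K
h  = 6.62607015e-34      # J*s
c  = 2.99792458e10       # cm/s
T  = 300.0

def q_vib(freqs_cm):
    # product of harmonic partition functions, zero of energy at the
    # well bottom (matching the exp(-h*nu/2kT) numerator in the text)
    x = h * c * np.asarray(freqs_cm) / (kB * T)
    return float(np.prod(np.exp(-x / 2) / (1 - np.exp(-x))))

Ea = 30.0e3 / 6.02214076e23          # 30 kJ/mol barrier, per molecule (J)
prefactor = kB * T / h               # ~6.25e12 s^-1 at 300 K
k_rate = (prefactor * q_vib([500.0, 1200.0]) / q_vib([600.0, 1500.0])
          * np.exp(-Ea / (kB * T)))
print(k_rate)
```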
3. Harmonic Vibrational Frequencies

It is possible (see, for example, J. Nichols, H. L. Taylor, P. Schmidt, and J. Simons, J. Chem. Phys. 92 , 340 (1990) and references therein) to remove from H the zero eigenvalues that correspond to rotation and translation and to thereby produce a Hessian matrix whose eigenvalues correspond only to internal motions of the system. After doing so, the number of negative eigenvalues of H can be used to characterize the nature of the stationary point (local minimum or transition state), and H can be used to evaluate the local harmonic vibrational frequencies of the system. The relationship between H and vibrational frequencies can be made clear by recalling the classical equations of motion in the Lagrangian formulation: $\dfrac{d}{dt}\left( \dfrac{\partial L}{\partial \dot{q}_j} \right) - \left( \dfrac{\partial L}{\partial q_j} \right) = 0, \nonumber$ where $q_j$ denotes, in our case, the 3N cartesian coordinates of the N atoms, and $\dot{q}_j$ is the velocity of the corresponding coordinate. Expressing the Lagrangian L as kinetic energy minus potential energy and writing the potential energy as a local quadratic expansion about a point where g vanishes gives $L = \dfrac{1}{2} \sum\limits_j m_j \dot{q}_j^2 - E(0) -\dfrac{1}{2}\sum\limits_{j,k} q_j H_{j,k}q_k . \nonumber$ Here, E(0) is the energy at the stationary point, $m_j$ is the mass of the atom to which $q_j \text{ applies, and the } H_{j,k}$ are the elements of H along the x, y, and z directions of the various atomic centers. Applying the Lagrangian equations to this form for L gives the equations of motion of the $q_j$ coordinates: $m_j\ddot{q}_j = -\sum\limits_k H_{j,k}q_k . \nonumber$ To find solutions that correspond to local harmonic motion, one assumes that the coordinates $q_j$ oscillate in time according to $q_j(t) = q_j \cos (\omega t) . \nonumber$ Substituting this form for $q_j(t)$ into the equations of motion gives $m_j \omega^2 q_j = \sum\limits_k H_{j,k} q_k . \nonumber$ Defining $q_j' = q_j\sqrt{m_j} \nonumber$ and introducing this into the above equation of motion yields $\omega^2 q_j' = \sum\limits_k H_{j,k}'q_k' , \nonumber$ where $H_{j,k}' = H_{j,k}\dfrac{1}{\sqrt{m_j m_k}} \nonumber$ is the so-called mass-weighted Hessian matrix. The squares of the desired harmonic vibrational frequencies $\omega_{\alpha}^2$ are thus given as eigenvalues of the mass-weighted Hessian H': $\textbf{H'q'}_{\alpha} = \omega^2_{\alpha}\textbf{q'}_{\alpha} . \nonumber$ The corresponding eigenvector $\{ q'_{\alpha ,j} \}$ gives, when multiplied by $\frac{1}{\sqrt{m_j}}$, the atomic displacements that accompany that particular harmonic vibration. At a transition state, one of the $\omega^2_{\alpha}$ will be negative and 3N-6 or 3N-7 will be positive.
4. Reaction Path Following

The Hessian and gradient can also be used to trace out 'streambeds' connecting local minima to transition states. In doing so, one utilizes a local harmonic description of the potential energy surface $E(\textbf{x}) = E(0) + \textbf{x}\cdot{\textbf{g}} + \dfrac{1}{2}\textbf{x}\cdot{\textbf{H}}\cdot{\textbf{x}} + ..., \nonumber$ where x represents the (small) step away from the point x = 0 at which the gradient g and Hessian H have been evaluated. By expressing x and g in terms of the eigenvectors v$_{\alpha}$ of H $\textbf{Hv}_{\alpha} = \lambda_{\alpha} \textbf{v}_{\alpha} , \nonumber$ $\textbf{x} = \sum\limits_{\alpha} \langle \textbf{v}_{\alpha} \big| \textbf{x} \rangle \textbf{v}_{\alpha} = \sum\limits_{\alpha} \textbf{x}_{\alpha} \textbf{v}_{\alpha} , \nonumber$ $\textbf{g} = \sum\limits_{\alpha} \langle \textbf{v}_{\alpha} \big| \textbf{g} \rangle \textbf{v}_{\alpha} = \sum\limits_{\alpha} \textbf{g}_{\alpha} \textbf{v}_{\alpha} , \nonumber$ the energy change E(x) - E(0) can be expressed in terms of a sum of independent changes along the eigendirections: $\text{E(}\textbf{x}) - \text{E(0)} = \sum\limits_{\alpha} \left[ x_{\alpha} g_{\alpha} + \dfrac{1}{2}x^2_{\alpha} \lambda_{\alpha} \right] + ... \nonumber$ Depending on the signs of g$_{\alpha} \text{ and of } \lambda_{\alpha}$, various choices for the displacements $x_{\alpha}$ will produce increases or decreases in energy:
1. If $\lambda_{\alpha}$ is positive, then a step $x_{\alpha}$ 'along' $g_{\alpha}$ (i.e., one with $x_{\alpha}g_{\alpha}$ positive) will generate an energy increase. A step 'opposed to' $g_{\alpha}$ will generate an energy decrease if it is short enough that $x_{\alpha}g_{\alpha}$ is larger in magnitude than $\frac{1}{2}x^2_{\alpha} \lambda_{\alpha}$; otherwise the energy will increase.
2. If $\lambda_{\alpha}$ is negative, a step opposed to $g_{\alpha}$ will generate an energy decrease. A step along $g_{\alpha}$ will give an energy increase if it is short enough for $x_{\alpha}g_{\alpha}$ to be larger in magnitude than $\frac{1}{2}x^2_{\alpha}\lambda_{\alpha}$; otherwise the energy will decrease.

Thus, to proceed downhill in all directions (such as one wants to do when searching for local minima), one chooses each $x_{\alpha}$ in opposition to $g_{\alpha}$ and of small enough length to guarantee that the magnitude of $x_{\alpha}g_{\alpha}$ exceeds that of $\frac{1}{2}x^2_{\alpha} \lambda_{\alpha}$ for those modes with $\lambda_{\alpha} > 0$. To proceed uphill along a mode with $\lambda_{\alpha '} < 0$ and downhill along all other modes with $\lambda_{\alpha} > 0$, one chooses $x_{\alpha '}$ along $g_{\alpha '}$ with $x_{\alpha '}$ short enough to guarantee that $x_{\alpha '}g_{\alpha '}$ is larger in magnitude than $\frac{1}{2}x^2_{\alpha '} \lambda_{\alpha '}$, and one chooses the other $x_{\alpha}$ opposed to $g_{\alpha}$ and short enough that $x_{\alpha}g_{\alpha}$ is larger in magnitude than $\frac{1}{2}x^2_{\alpha} \lambda_{\alpha}$. Such considerations have allowed the development of highly efficient potential energy surface 'walking' algorithms (see, for example, J. Nichols, H. L. Taylor, P. Schmidt, and J. Simons, J. Chem. Phys. 92 , 340 (1990) and references therein) designed to trace out streambeds and to locate and characterize, via the local harmonic frequencies, minima and transition states. These algorithms form essential components of most modern ab initio , semi-empirical, and empirical computational chemistry software packages.
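One step of such a 'walker' can be sketched as follows. This is a deliberately simplified caricature of the cited algorithms: a hypothetical trust radius plays the role of the 'short enough' step-length conditions, steps oppose $g_{\alpha}$ on positive-curvature modes, and one chosen mode may optionally be followed uphill:

```python
import numpy as np

# Caricature of one surface-walking step: downhill (opposed to g_alpha)
# on positive-curvature modes, optionally uphill along one chosen mode;
# a trust radius stands in for the 'short enough' conditions above.
def walk_step(g, H, uphill_mode=None, trust=0.3):
    lam, V = np.linalg.eigh(H)          # eigendirections of the Hessian
    g_a = V.T @ g                       # gradient in the eigenbasis
    x_a = np.zeros_like(g_a)
    for a, (ga, la) in enumerate(zip(g_a, lam)):
        sign = 1.0 if a == uphill_mode else -1.0
        step = sign * ga / max(abs(la), 1e-8)   # Newton-like magnitude
        x_a[a] = np.clip(step, -trust, trust)
    return V @ x_a                      # back to Cartesian displacements

# on a quadratic bowl, the unrestricted downhill step is Newton's step
H = np.diag([2.0, 0.5])
g = np.array([1.0, -0.2])
x = walk_step(g, H, trust=10.0)
print(x)                                # equals -H^{-1} g here
```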
Ab Initio Methods
Most of the techniques described in this Chapter are of the ab initio type. This means that they attempt to compute electronic state energies and other physical properties, as functions of the positions of the nuclei, from first principles without the use or knowledge of experimental input. Although perturbation theory or the variational method may be used to generate the working equations of a particular method, and although finite atomic orbital basis sets are nearly always utilized, these approximations do not involve 'fitting' to known experimental data. They represent approximations that can be systematically improved as the level of treatment is enhanced.
Semi-Empirical and Fully Empirical Methods
Semi-empirical methods, such as those outlined in Appendix F, use experimental data or the results of ab initio calculations to determine some of the matrix elements or integrals needed to carry out their procedures. Totally empirical methods attempt to describe the internal electronic energy of a system as a function of geometrical degrees of freedom (e.g., bond lengths and angles) in terms of analytical 'force fields' whose parameters have been determined to 'fit' known experimental data on some class of compounds. Examples of such parameterized force fields were presented in Section III. A of Chapter 16.
Strengths and Weaknesses
Each of these tools has advantages and limitations. Ab initio methods involve intensive computation and therefore tend to be limited, for practical reasons of computer time, to smaller atoms, molecules, radicals, and ions. Their CPU time needs usually vary with basis set size (M) as at least M$^4$; correlated methods require time proportional to at least M$^5$ because they involve transformation of the atomic-orbital-based two-electron integrals to the molecular orbital basis. As computers continue to advance in power and memory size, and as theoretical methods and algorithms continue to improve, ab initio techniques will be applied to larger and more complex species. When dealing with systems in which qualitatively new electronic environments and/or new bonding types arise, or excited electronic states that are unusual, ab initio methods are essential. Semi-empirical or empirical methods would be of little use on systems whose electronic properties have not been included in the data base used to construct the parameters of such models.
On the other hand, to determine the stable geometries of large molecules that are made of conventional chemical units (e.g., CC, CH, CO, etc. bonds and steric and torsional interactions among same), fully empirical force-field methods are usually quite reliable and computationally very fast. Stable geometries and the relative energetic stabilities of various conformers of large macromolecules and biopolymers can routinely be predicted using such tools if the system contains only conventional bonding and common chemical building blocks. These empirical potentials usually do not contain sufficient flexibility (i.e., their parameters and input data do not include enough knowledge) to address processes that involve rearrangement of the electronic configurations. For example, they can not treat:
1. Electronic transitions, because knowledge of the optical oscillator strengths and of the energies of excited states is absent in most such methods;
2. Concerted chemical reactions involving simultaneous bond breaking and forming, because to do so would require the force-field parameters to evolve from those of the reactant bonding to those for the product bonding as the reaction proceeds;
3. Molecular properties such as dipole moment and polarizability, although in certain fully empirical models, bond dipoles and lone-pair contributions have been incorporated (although again only for conventional chemical bonding situations).
Semi-empirical techniques share some of the strengths and weaknesses of ab initio and of fully empirical methods. They treat at least the valence electrons explicitly, so they are able to address questions that are inherently electronic such as electronic transitions, dipole moments, polarizability, and bond breaking and forming. Some of the integrals involving the Hamiltonian operator and the atomic basis orbitals are performed ab initio ; others are obtained by fitting to experimental data. The computational needs of semiempirical methods lie between those of the ab initio methods and the force-field techniques. As with the empirical methods, they should never be employed when qualitatively new electronic bonding situations are encountered because the data base upon which their parameters were determined contain, by assumption, no similar bonding cases.
Q1
Transform (using the coordinate system provided below) the following functions accordingly:
a. from Cartesian to spherical polar coordinates
$3x + y -4z =12 \nonumber$
b. from Cartesian to cylindrical coordinates
$y^2 + z^2 = 9 \nonumber$
c. from spherical polar to Cartesian coordinates
$r = 2\sin \theta \cos \phi \nonumber$
Q2
Perform a separation of variables and indicate the general solution for the following expressions:
a. $9x + 16y \dfrac{\partial y}{\partial x} = 0$
b. $2y + \dfrac{\partial y}{\partial x} + 6 = 0$
Q3
Find the eigenvalues and corresponding eigenvectors of the following matrices:
1. $\begin{bmatrix} -1 & 2 \\ 2 & 2 \end{bmatrix}$
2. $\begin{bmatrix} -2 & 0 & 0 \\ 0 & -1 & 2 \\ 0 & 2 & 2 \end{bmatrix}$
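These eigenvalue problems can be checked numerically (an added sketch, not part of the original problem set); `numpy.linalg.eigh` is used because both matrices are real and symmetric:

```python
import numpy as np

# Review exercise 3 (1): 2x2 symmetric matrix; characteristic equation l^2 - l - 6 = 0
A = np.array([[-1.0, 2.0],
              [ 2.0, 2.0]])
vals_a, vecs_a = np.linalg.eigh(A)   # eigh is the right routine for Hermitian matrices

# Review exercise 3 (2): block-diagonal 3x3 matrix (-2 block plus the 2x2 above)
B = np.array([[-2.0, 0.0, 0.0],
              [ 0.0,-1.0, 2.0],
              [ 0.0, 2.0, 2.0]])
vals_b, vecs_b = np.linalg.eigh(B)

print(np.round(vals_a, 6))   # eigenvalues -2 and 3
print(np.round(vals_b, 6))   # eigenvalues -2, -2, 3 (a degenerate pair at -2)
# eigh returns orthonormal eigenvectors, which anticipates review exercises 4 and 5:
print(np.allclose(vecs_b.T @ vecs_b, np.eye(3)))
```

The degenerate pair at -2 in the second matrix is exactly the situation review exercise 5 asks about: the returned eigenvectors are already orthonormal.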
Q4
For the hermitian matrix in review exercise 3a show that the eigenfunctions can be normalized and that they are orthogonal.
Q5
For the hermitian matrix in review exercise 3b show that the pair of degenerate eigenvalues can be made to have orthonormal eigenfunctions.
Q6
Solve the following second order linear differential equation subject to the specified "boundary conditions":
$\dfrac{d^2 x}{dt^2} + k^2x(t) = 0 \nonumber$
where $x(t=0) = L$ and $\dfrac{dx}{dt} \bigg|_{t=0} = 0.$
22.1.02: ii. Exercises
Q1
Replace the following classical mechanical expressions with their corresponding quantum mechanical operators.
a. K.E. = $\frac{mv^2}{2}$ in three-dimensional space.
b. $\textbf{p} = m\textbf{v}$, a three-dimensional Cartesian vector.
c. y-component of angular momentum: $L_y = zp_x - xp_z.$
Q2
Transform the following operators into the specified coordinates:
a. $\textbf{L}_x = \dfrac{\hbar}{i}\left( y \dfrac{\partial}{\partial z} - z \dfrac{\partial}{\partial y} \right)$ from Cartesian to spherical polar coordinates.
b. $\textbf{L}_z = \dfrac{\hbar}{i} \dfrac{\partial}{\partial \phi}$ from spherical polar to Cartesian coordinates.
Q3
Match the eigenfunctions in column B to their operators in column A. What is the eigenvalue for each eigenfunction?
$\begin{matrix} & \textbf{Column A} & \textbf{Column B} \\ \text{i.} & (1-x^2)\dfrac{d^2}{dx^2} - x \dfrac{d}{dx} & 4x^4 - 12x^2 + 3 \\ \text{ii.} & \dfrac{d^2}{dx^2} & 5x^4 \\ \text{iii.} & x\dfrac{d}{dx} & e^{3x} + e^{-3x} \\ \text{iv.} & \dfrac{d^2}{dx^2} - 2x\dfrac{d}{dx} & x^2 - 4x + 2 \\ \text{v.} & x\dfrac{d^2}{dx^2} + (1-x) \dfrac{d}{dx} & 4x^3 - 3x \end{matrix} \nonumber$
Q4
Show that the following operators are Hermitian.
1. $\textbf{P}_x$
2. $\textbf{L}_x$
Q5
For the following basis of functions $(\Psi_{2p_{-1'}}, \Psi_{2p_{0'}} \text{, and }\Psi_{2p_{+1}})$, construct the matrix representation of the $\textbf{L}_x$ operator (use the ladder operator representation of $\textbf{L}_x$). Verify that the matrix is Hermitian. Find the eigenvalues and corresponding eigenvectors. Normalize the eigenfunctions and verify that they are orthogonal.
$\Psi_{2p_{-1}} = \dfrac{1}{8\sqrt{\pi}}\left( \dfrac{Z}{a} \right)^{5/2} re^{-\dfrac{Zr}{2a}}\sin\theta e^{-i\phi} \nonumber$
$\Psi_{2p_{0}} = \dfrac{1}{\sqrt{\pi}}\left( \dfrac{Z}{2a} \right)^{5/2} re^{-\dfrac{Zr}{2a}}\cos\theta \nonumber$
$\Psi_{2p_{1}} = \dfrac{1}{8\sqrt{\pi}}\left( \dfrac{Z}{a} \right)^{5/2} re^{-\dfrac{Zr}{2a}}\sin\theta e^{i\phi} \nonumber$
Q6
Using the set of eigenstates (with corresponding eigenvalues) from the preceding problem, determine the probability for observing a z-component of angular momentum equal to 1$\hbar$ if the state is given by the $\textbf{L}_x \text{ eigenstate with 0}\hbar \textbf{ L}_x \text{ eigenvalue.}$
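A numerical sketch of this measurement problem (an addition, not part of the original text). The $\textbf{L}_x$ matrix below is built from the l = 1 ladder-operator matrix elements in the $(\Psi_{2p_{-1}}, \Psi_{2p_0}, \Psi_{2p_{+1}})$ basis:

```python
import numpy as np

hbar = 1.0   # work in units of hbar
# Lx = (L+ + L-)/2; for l = 1 the only nonzero elements are hbar/sqrt(2)
Lx = (hbar/np.sqrt(2.0)) * np.array([[0.0, 1.0, 0.0],
                                     [1.0, 0.0, 1.0],
                                     [0.0, 1.0, 0.0]])
vals, vecs = np.linalg.eigh(Lx)            # eigenvalues -hbar, 0, +hbar
psi = vecs[:, np.argmin(np.abs(vals))]     # the Lx eigenstate with eigenvalue 0*hbar
p_plus1 = abs(psi[2])**2                   # squared |2p_+1> amplitude = P(Lz = +hbar)
print(np.round(vals, 6))                   # -hbar, 0, +hbar
print(p_plus1)                             # -> 0.5 (to numerical precision)
```

The zero-eigenvalue eigenvector is $(1, 0, -1)/\sqrt{2}$ in this basis, so the probability of observing $L_z = +1\hbar$ is 1/2.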
Q7
Use the following definitions of the angular momentum operators:
$\textbf{L}_x = \dfrac{\hbar}{i}\left( y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y} \right), \textbf{L}_y = \dfrac{\hbar}{i}\left( z\dfrac{\partial}{\partial x} - x\dfrac{\partial}{\partial z} \right), \textbf{L}_z = \dfrac{\hbar}{i} \left( x\dfrac{\partial}{\partial y} - y\dfrac{\partial}{\partial x} \right) \nonumber$
$\text{ and } \textbf{L}^2 = \textbf{L}_x^2 + \textbf{L}_y^2 + \textbf{L}_z^2 \text{, and the relationships:} \nonumber$
$[\textbf{x}, \textbf{p}_x] = i\hbar \text{, } [\textbf{y}, \textbf{p}_y] = i\hbar \text{, and } [\textbf{z} , \textbf{p}_z]=i\hbar$, to demonstrate the following operator identities:
1. $[\textbf{L}_x, \textbf{L}_y] = i\hbar \textbf{L}_z$,
2. $[\textbf{L}_y, \textbf{L}_z] = i\hbar \textbf{L}_x$,
3. $[\textbf{L}_z, \textbf{L}_x] = i\hbar \textbf{L}_y$,
4. $[\textbf{L}_x, \textbf{L}^2] = 0$,
5. $[\textbf{L}_y, \textbf{L}^2] = 0$,
6. $[\textbf{L}_z, \textbf{L}^2] = 0$.
Q8
In exercise 7 above you determined whether or not many of the angular momentum operators commute. Now, examine the operators below along with an appropriate given function. Determine if the given function is simultaneously an eigenfunction of both operators. Is this what you expected?
a. $\textbf{L}_z, \textbf{L}^2 \text{, with function: Y}^0_0(\theta ,\phi )= \dfrac{1}{\sqrt{4\pi}}.$
b. $\textbf{L}_x, \textbf{L}_z \text{, with function: Y}^0_0(\theta ,\phi )= \dfrac{1}{\sqrt{4\pi}}.$
c. $\textbf{L}_z, \textbf{L}^2 \text{, with function: Y}^0_1(\theta ,\phi )= \sqrt{\dfrac{3}{4\pi}}\cos \theta .$
d. $\textbf{L}_x, \textbf{L}_z \text{, with function: Y}^0_1(\theta ,\phi )= \sqrt{\dfrac{3}{4\pi}}\cos \theta .$
Q9
For a "particle in a box" constrained along two axes, the wavefunction $\Psi$(x,y) as given in the text was:
$\Psi (x,y) = \sqrt{\dfrac{1}{2L_x}}\sqrt{\dfrac{1}{2L_y}}\left[ e^{\left( \dfrac{in_x\pi x}{L_x} \right)} - e^{\left(\dfrac{-in_x\pi x}{L_x}\right)} \right]\left[ e^{\left( \dfrac{in_y\pi y}{L_y} \right)} - e^{\left(\dfrac{-in_y\pi y}{L_y}\right)} \right] , \nonumber$
with n$_x \text{ and n}_y$ = 1,2,3, ... Show that this wavefunction is normalized.
Q10
Using the same wavefunction, $\Psi$(x,y), given in exercise 9 show that the expectation value of $\textbf{p}_x$ vanishes.
Q11
Calculate the expectation value of the x$^2$ operator for the first two states of the harmonic oscillator. Use the v=0 and v=1 harmonic oscillator wavefunctions given below which are normalized such that $\int\limits_{-\infty}^{\infty} \Psi (x)^2dx = 1.$ Remember that $\Psi_0 = \sqrt[4]{\dfrac{\alpha}{\pi}} \text{e}^{\left( -\dfrac{\alpha x^2}{2}\right) } \text{ and } \Psi_1 = \sqrt[4]{\dfrac{4\alpha^3}{\pi}} x\text{e}^{\left( -\dfrac{\alpha x^2}{2}\right) } .$
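As a cross-check (added here, not part of the original exercise), the Gaussian integrals give $\langle x^2 \rangle = 1/(2\alpha)$ for v = 0 and $3/(2\alpha)$ for v = 1; a direct quadrature confirms this for an arbitrary $\alpha$:

```python
import numpy as np

alpha = 1.7    # arbitrary positive value; <x^2> scales as 1/alpha
x = np.linspace(-12.0/np.sqrt(alpha), 12.0/np.sqrt(alpha), 200001)
dx = x[1] - x[0]

psi0 = (alpha/np.pi)**0.25 * np.exp(-alpha*x**2/2)
psi1 = (4*alpha**3/np.pi)**0.25 * x * np.exp(-alpha*x**2/2)

x2_v0 = np.sum(psi0**2 * x**2) * dx    # analytic result: 1/(2 alpha)
x2_v1 = np.sum(psi1**2 * x**2) * dx    # analytic result: 3/(2 alpha)
print(x2_v0 * 2*alpha, x2_v1 * 2*alpha/3)   # both ratios -> 1.0
```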
Q12
For each of the one-dimensional potential energy graphs shown below, determine:
1. whether you expect symmetry to lead to a separation into odd and even solutions,
2. whether you expect the energy will be quantized, continuous, or both, and
3. the boundary conditions that apply at each boundary (merely stating that $\Psi \text{ and/or } \dfrac{\partial \Psi}{\partial x}$ is continuous is all that is necessary).
Q13
Consider a particle of mass m moving in the potential:
$\begin{matrix} V(x) = \infty & \text{ for } & x < 0 & \text{ Region I } \\ V(x) = 0 & \text{ for } & 0 \leq x \leq L & \text{ Region II } \\ V(x) = V\ (V>0) & \text{ for } & x > L & \text{ Region III }\end{matrix} \nonumber$
1. Write the general solution to the Schrödinger equation for the regions I, II, III, assuming a solution with energy E < V (i.e. a bound state).
2. Write down the wavefunction matching conditions at the interface between regions I and II and between II and III.
3. Write down the boundary conditions on $\Psi\text{ for x } \rightarrow ± \infty$.
4. Use your answers to a. - c. to obtain an algebraic equation which must be satisfied for the bound state energies, E.
5. Demonstrate that in the limit V $\rightarrow \infty$, the equation you obtained for the bound state energies in d. gives the energies of a particle in an infinite box; $E_n = \frac{n^2\hbar^2\pi^2}{2mL^2}$ ; n = 1,2,3,...
22.1.03: iii. Problems 1-10
Q1
A particle of mass m moves in a one-dimensional box of length L, with boundaries at x = 0 and x = L. Thus, V(x) = 0 for $0 \leq x \leq L$, and V(x) = $\infty$ elsewhere. The normalized eigenfunctions of the Hamiltonian for this system are given by $\Psi_n(x) = \sqrt{\frac{2}{L}}\sin\frac{n\pi x}{L} \text{, with } E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$, where the quantum number n can take on the values n=1,2,3,....
1. Assuming that the particle is in an eigenstate, $\Psi_n(x)$, calculate the probability that the particle is found somewhere in the region $0 \leq x \leq \frac{L}{4}$. Show how this probability depends on n.
2. For what value of n is there the largest probability of finding the particle in $0 \leq x \leq \frac{L}{4}$?
3. Now assume that $\Psi$ is a superposition of two eigenstates, $\Psi = a\Psi_n + b\Psi_m\text{, at time t = 0. What is } \Psi$ at time t? What energy expectation value does $\Psi$ have at time t and how does this relate to its value at t = 0?
4. For an experimental measurement which is capable of distinguishing systems in state $\Psi_n$ from those in $\Psi_m$, what fraction of a large number of systems each described by $\Psi$ will be observed to be in $\Psi_n$? What energies will these experimental measurements find and with what probabilities?
5. For those systems originally in $\Psi = a\Psi_n + b\Psi_m$ which were observed to be in $\Psi_n$ at time t, what state ($\Psi_n, \Psi_m$, or whatever) will they be found in if a second experimental measurement is made at a time t' later than t?
6. Suppose by some method (which need not concern us at this time) the system has been prepared in a nonstationary state (that is, it is not an eigenfunction of H). At the time of a measurement of the particle's energy, this state is specified by the normalized wavefunction $\Psi = \sqrt{\frac{30}{L^5}}x(L-x) \text{ for } 0 \leq x \leq L\text{, and } \Psi = 0$ elsewhere. What is the probability that a measurement of the energy of the particle will give the value $E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$ for any given value of n?
7. What is the expectation value of H, i.e. the average energy of the system, for the wavefunction $\Psi$ given in part f?
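Parts f and g can be checked numerically (an added sketch, not part of the original problem). The overlaps $c_n = \int \phi_n^* \Psi\, dx$ are evaluated by quadrature; analytically $|c_n|^2 = 960/(n\pi)^6$ for odd n and 0 for even n:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 200001)
dx = x[1] - x[0]
psi = np.sqrt(30.0/L**5) * x * (L - x)        # the nonstationary state of part f

def c(n):
    phi_n = np.sqrt(2.0/L) * np.sin(n*np.pi*x/L)
    return np.sum(phi_n * psi) * dx           # overlap <phi_n | psi>

probs = np.array([c(n)**2 for n in range(1, 40)])
print(probs[0], 960.0/np.pi**6)   # P(n=1) ~ 0.9986 from both
print(probs[1])                   # even n never appear (symmetry about L/2)
print(probs.sum())                # ~1, so the c_n exhaust the state

# part g, in units hbar = m = L = 1 where E_n = n^2 pi^2 / 2:
e_avg = sum(p * (n*np.pi)**2 / 2.0 for n, p in enumerate(probs, start=1))
print(e_avg)                      # ~5.0, i.e. <H> = 5 hbar^2/(m L^2)
```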
Q2
Show that for a system in a non-stationary state, $\Psi = \sum\limits_j C_j\Psi_j\text{e}^{-\frac{iE_jt}{\hbar}}$, the average value of the energy does not vary with time but the expectation values of other properties do vary with time.
Q3
A particle is confined to a one-dimensional box of length L having infinitely high walls and is in its lowest quantum state. Calculate: $\langle x \rangle , \langle x^2 \rangle , \langle p \rangle \text{, and } \langle p^2 \rangle .$ Using the definition $\Delta A = \sqrt{\langle A^2 \rangle - \langle A \rangle^2}$, to define the uncertainty, $\Delta A \text{, calculate } \Delta x \text{ and } \Delta p.$ Verify the Heisenberg uncertainty principle that $\Delta x \Delta p \geq \frac{\hbar}{2}.$
Q4
It has been claimed that as the quantum number n increases, the motion of a particle in a box becomes more classical. In this problem you will have an opportunity to convince yourself of this fact.
1. For a particle of mass m moving in a one-dimensional box of length L, with ends of the box located at x = 0 and x = L, the classical probability density can be shown to be independent of x and given by P(x)dx = $\dfrac{dx}{L}$ regardless of the energy of the particle. Using this probability density, evaluate the probability that the particle will be found within the interval from x = 0 to x = $\dfrac{L}{4}.$
2. Now consider the quantum mechanical particle-in-a-box system. Evaluate the probability of finding the particle in the interval from x = 0 to x = $\dfrac{L}{4}$ for the system in its nth quantum state.
3. Take the limit of the result you obtained in part b as $n \rightarrow \infty$. How does your result compare to the classical result you obtained in part a?
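A quick numerical check of this classical limit (added; not in the original problem set): the integral in part b has the closed form $P_n = 1/4 - \sin(n\pi/2)/(2n\pi)$, which tends to the classical 1/4 as n grows:

```python
import numpy as np

def P_quantum(n, N=200001):
    """P(0 <= x <= L/4) for the n-th particle-in-a-box state, with L = 1."""
    x = np.linspace(0.0, 0.25, N)
    dx = x[1] - x[0]
    return np.sum(2.0*np.sin(n*np.pi*x)**2) * dx

P_closed = lambda n: 0.25 - np.sin(n*np.pi/2.0)/(2.0*n*np.pi)

for n in (1, 2, 3, 10, 100):
    print(n, P_quantum(n), P_closed(n))
# n = 1 gives ~0.0909, well below 1/4; by n = 100 the result is essentially classical
```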
Q5
According to the rules of quantum mechanics as we have developed them, if $\Psi$ is the state function, and $\phi_n$ are the eigenfunctions of a linear, Hermitian operator, A, with eigenvalues a$_n$, A$\phi_n = a_n \phi_n$, then we can expand $\Psi$ in terms of the complete set of eigenfunctions of A according to $\Psi = \sum\limits_{n}c_n \phi_n , \text{ where } c_n = \int \phi_n^{\text{*}} \Psi d\tau .$ Furthermore, the probability of making a measurement of the property corresponding to A and obtaining a value $a_n \text{ is given by } \big| c_n \big|^2$, provided both $\Psi \text{ and } \phi_n$ are properly normalized. Thus, P($a_n ) = \big| c_n \big|^2$. These rules are perfectly valid for operators which take on a discrete set of eigenvalues, but must be generalized for operators which can have a continuum of eigenvalues. An example of this latter type of operator is the momentum operator, p$_x$, which has eigenfunctions given by $\phi_p(x) = Ae^{\dfrac{ipx}{\hbar}}$ where p is the eigenvalue of the p$_x$ operator and A is a normalization constant. Here p can take on any value, so we have a continuous spectrum of eigenvalues of p$_x$. The obvious generalization to the equation for $\Psi$ is to convert the sum over discrete states to an integral over the continuous spectrum of states:
$\Psi(x) = \int\limits_{- \infty}^{+ \infty}C(p) \phi_p(x) dp = \int\limits_{-\infty}^{+\infty}C(p)Ae^{\dfrac{ipx}{\hbar}} dp \nonumber$
The interpretation of C(p) is now the desired generalization of the equation for the probability $P(p)dp = \big| C(p) \big|^2 dp.$ This equation states that the probability of measuring the momentum and finding it in the range from p to p+dp is given by $\big| C(p) \big|^2 dp.$ Accordingly, the probability of measuring p and finding it in the range from $p_1 \text{ to } p_2$ is given by $\int\limits_{p_1}^{p_2}P(p)dp = \int\limits_{p_1}^{p_2}C(p)^{\text{*}}C(p) dp$. C(p) is thus the probability amplitude for finding the particle with momentum between p and p+dp. This is the momentum representation of the wavefunction. Clearly we must require C(p) to be normalized, so that $\int\limits_{-\infty}^{+\infty}C(p)^{\text{*}}C(p) dp = 1.$ With this restriction we can derive the normalization constant A = $\dfrac{1}{\sqrt{2\pi\hbar}}$, giving a direct relationship between the wavefunction in coordinate space, $\Psi$(x), and the wavefunction in momentum space, C(p):
$\Psi(x) = \dfrac{1}{\sqrt{2\pi\hbar}} \int\limits_{-\infty}^{+\infty}C(p)e^{\dfrac{ipx}{\hbar}} dp, \nonumber$
and by the Fourier integral theorem:
$C(p) = \dfrac{1}{\sqrt{2\pi\hbar}} \int\limits_{-\infty}^{+\infty} \Psi (x)e^{-\dfrac{ipx}{\hbar}} dx. \nonumber$
Let's use these ideas to solve some problems, focusing our attention on the harmonic oscillator: a particle of mass m moving in a one-dimensional potential described by $V(x) = \dfrac{kx^2}{2}.$
a. Write down the Schrödinger equation in the coordinate representation.
b. Now let's proceed by attempting to write the Schrödinger equation in the momentum representation. Identifying the kinetic energy operator T in the momentum representation is quite straightforward: $\textbf{T} = \dfrac{\textbf{p}^2}{2m}$, a simple multiplicative operator. Writing the potential, V(x), in the momentum representation is not quite as straightforward. The relationship between position and momentum is realized in their commutation relation $[\textbf{x},\textbf{p}] = i\hbar$, or $(\textbf{xp} - \textbf{px}) = i\hbar$. This commutation relation is easily verified in the coordinate representation, leaving x untouched (x = x·) and using the above definition for p. In the momentum representation we want to leave p untouched (p = p·) and define the operator x in such a manner that the commutation relation is still satisfied. Write the operator x in the momentum representation. Write the full Hamiltonian in the momentum representation and hence the Schrödinger equation in the momentum representation.
c. Verify that $\Psi$ as given below is an eigenfunction of the Hamiltonian in the coordinate representation. What is the energy of the system when it is in this state? Determine the normalization constant C, and write down the normalized ground state wavefunction in coordinate space.
$\Psi(x) = Ce^{ \left( -\sqrt{mk}\dfrac{x^2}{2\hbar} \right)}. \nonumber$
d. Now consider $\Psi$ in the momentum representation. Assuming that an eigenfunction of the Hamiltonian may be found of the form $\Psi(p) = Ce^{- \alpha p^2}$, substitute this form of $\Psi$ into the Schrödinger equation in the momentum representation to find the value of $\alpha$ which makes this an eigenfunction of H having the same energy as $\Psi (x)$ had. Show that this $\Psi (p)$ is the proper Fourier transform of $\Psi (x)$. The following integral may be useful:
$\int\limits_{-\infty}^{+\infty}e^{- \beta x^2}\cos(bx)dx = \sqrt{\dfrac{\pi}{\beta}}e^{-\dfrac{b^2}{4 \beta}}. \nonumber$
Since this Hamiltonian has no degenerate states, you may conclude that $\Psi (x) \text{ and } \Psi (p)$ represent the same state of the system if they have the same energy.
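A numerical companion to parts c and d (added; not in the original problem): with $\hbar = m = k = 1$ the coordinate-space ground state is a Gaussian, and its transform C(p) should again be a Gaussian, $\Psi(p) \propto e^{-\alpha p^2}$ with $\alpha = 1/(2\hbar\sqrt{mk}) = 1/2$ in these units:

```python
import numpy as np

hbar = 1.0                     # units with hbar = m = k = 1, so sqrt(mk)/hbar = 1
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
psi_x = (1.0/np.pi)**0.25 * np.exp(-x**2/2.0)   # normalized ground state

def C(p):
    """C(p) = (2 pi hbar)^(-1/2) Int psi(x) exp(-i p x / hbar) dx"""
    return (np.sum(psi_x*np.exp(-1j*p*x/hbar)) * dx).real / np.sqrt(2.0*np.pi*hbar)

for p in (0.0, 0.5, 1.0, 2.0):
    print(p, C(p)/C(0.0), np.exp(-0.5*p**2))    # the last two columns agree
```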
Q6
The energy states and wavefunctions for a particle in a 3-dimensional box whose lengths are $L_1, L_2, \text{ and } L_3$ are given by
$E(n_1,n_2,n_3) = \dfrac{h^2}{8m} \left[ \left( \dfrac{n_1}{L_1} \right)^2 + \left( \dfrac{n_2}{L_2} \right)^2 + \left( \dfrac{n_3}{L_3} \right)^2 \right] \text{ and} \nonumber$
$\Psi (n_1,n_2,n_3) = \sqrt{\dfrac{2}{L_1}}\sqrt{\dfrac{2}{L_2}}\sqrt{\dfrac{2}{L_3}} \sin \left( \dfrac{n_1\pi x}{L_1} \right) \sin \left( \dfrac{n_2\pi y}{L_2} \right) \sin \left( \dfrac{n_3 \pi z}{L_3} \right) . \nonumber$
These wavefunctions and energy levels are sometimes used to model the motion of electrons in a central metal atom (or ion) which is surrounded by six ligands.
1. Show that the lowest energy level is nondegenerate and the second energy level is triply degenerate if $L_1 = L_2 = L_3$. What values of $n_1, n_2, \text{ and } n_3$ characterize the states belonging to the triply degenerate level?
2. For a box of volume V =$L_1L_2L_3$, show that for three electrons in the box (two in the nondegenerate lowest "orbital", and one in the next), a lower total energy will result if the box undergoes a rectangular distortion ($L_1 = L_2 \neq L_3$) which preserves the total volume than if the box remains undistorted (hint: if V is fixed and $L_1 = L_2\text{, then } L_3 = \dfrac{V}{L_1^2} \text{ and } L_1$ is the only "variable").
3. Show that the degree of distortion (ratio of $L_3 \text{ to } L_1$) which will minimize the total energy is $L_3 = \sqrt{2} L_1$. How does this problem relate to Jahn-Teller distortions? Why (in terms of the property of the central atom or ion) do we do the calculation with fixed volume?
4. By how much (in eV) will distortion lower the energy (from its value for a cube, $L_1 = L_2 = L_3$) if V = 8 Å$^3$ and $\dfrac{h^2}{8m} = 6.01 \times 10^{-27} \text{ erg cm}^2$? (1 eV = 1.6 $\times 10^{-12}$ erg.)
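Parts b-d can be checked numerically (an added sketch, not part of the original problem; the electron filling assumed below is two electrons in (1,1,1) and one in (1,1,2), which is the lowest second-level state once $L_3 > L_1$):

```python
import numpy as np

c = 6.01e-27 / 1.0e-16     # h^2/8m, converted to erg * Angstrom^2
V = 8.0                    # fixed volume, Angstrom^3

def E_total(L1):
    """2 electrons in (1,1,1) plus 1 in (1,1,2), with L1 = L2 and L3 = V/L1^2."""
    L3 = V/L1**2
    return 2*c*(2.0/L1**2 + 1.0/L3**2) + c*(2.0/L1**2 + 4.0/L3**2)

L1 = np.linspace(1.2, 2.2, 500001)       # grid search instead of calculus
E = E_total(L1)
L1_opt = L1[np.argmin(E)]
L3_opt = V/L1_opt**2

print(L3_opt/L1_opt, np.sqrt(2.0))              # distortion ratio -> sqrt(2)
print((E_total(2.0) - E.min())/1.6e-12, "eV")   # lowering relative to the 2 A cube
```

The minimizing geometry reproduces $L_3 = \sqrt{2}\,L_1$ and gives an energy lowering of roughly 6 eV for this model.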
Q7
The wavefunction $\Psi = Ae^{-a|x|}$ is an exact eigenfunction of some one-dimensional Schrödinger equation in which x varies from $-\infty \text{ to } +\infty$. The value of a is: a = $\left( 2Å \right)^{-1}.$ For now, the potential V(x) in the Hamiltonian $\left( \textbf{H} = -\dfrac{\hbar^2}{2m}\dfrac{d^2 }{dx^2} + V(x) \right) \text{ for which } \Psi (x)$ is an eigenfunction is unknown.
1. Find a value of A which makes $\Psi$(x) normalized. Is this value unique? What units does $\Psi$(x) have?
2. Sketch the wavefunction for positive and negative values of x, being careful to show the behavior of its slope near x = 0. Recall that |x| is defined as: $|x| = \begin{matrix} x & \text{ if } x > 0 \\ -x & \text{ if } x < 0 \end{matrix} \nonumber$
3. Show that the derivative of $\Psi (x)$ undergoes a discontinuity of magnitude $2\sqrt{a^3}$ as x goes through x = 0. What does this fact tell you about the potential V(x)?
4. Calculate the expectation value of |x| for the above normalized wavefunction (obtain a numerical value and give its units). What does this expectation value give a measure of?
5. The potential V(x) appearing in the Schrödinger equation for which $\Psi = Ae^{-a|x|}$ is an exact solution is given by V(x) = $-\dfrac{\hbar^2 a}{m}\delta (x).$ Using this potential, compute the expectation value of the Hamiltonian $\left( \textbf{H} = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2} + V(x) \right)$ for your normalized wavefunction. Is V(x) an attractive or repulsive potential? Does your wavefunction correspond to a bound state? Is $\langle H \rangle$ negative or positive? (Use $\dfrac{\hbar^2}{2m} = 6.06 \times 10^{-28}$ erg cm$^2$ and 1 eV = 1.6 $\times 10^{-12}$ erg.)
6. Transform the wavefunction, $\Psi = Ae^{-a|x|}$, from coordinate space to momentum space.
7. What is the ratio of the probability of observing a momentum equal to 2a$\hbar$ to the probability of observing a momentum equal to $-a\hbar$ ?
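Parts f and g can be verified by a direct numerical Fourier transform (an added sketch, not part of the original problem). The transform of $e^{-a|x|}$ is a Lorentzian, $C(p) \propto 1/(a^2 + (p/\hbar)^2)$, so the requested ratio is $(2/5)^2$:

```python
import numpy as np

a = 0.5          # a = (2 Angstrom)^-1, in inverse Angstrom; hbar = 1
hbar = 1.0
x = np.linspace(-100.0, 100.0, 1000001)
dx = x[1] - x[0]
psi = np.sqrt(a) * np.exp(-a*np.abs(x))     # normalized: A = sqrt(a)

def C(p):
    return (np.sum(psi*np.exp(-1j*p*x/hbar)) * dx).real / np.sqrt(2.0*np.pi*hbar)

ratio = C(2*a*hbar)**2 / C(-a*hbar)**2
print(ratio)     # Lorentzian C(p) gives (2 a^2 / 5 a^2)^2 = 0.16
```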
Q8
The $\pi$-orbitals of benzene, $C_6H_6$, may be modeled very crudely using the wavefunctions and energies of a particle on a ring. Let's first treat the particle on a ring problem and then extend it to the benzene system.
1. Suppose that a particle of mass m is constrained to move on a circle (of radius r) in the xy plane. Further assume that the particle's potential energy is constant (zero is a good choice). Write down the Schrödinger equation in the normal cartesian coordinate representation. Transform this Schrödinger equation to cylindrical coordinates where $\text{x = rcos(} \phi )\text{, y = rsin(} \phi ),$ and z = z (z = 0 in this case). Taking r to be held constant, write down the general solution, $\Phi ( \phi )$, to this Schrödinger equation. The "boundary" conditions for this problem require that $\Phi ( \phi ) = \Phi ( \phi + 2\pi ).$ Apply this boundary condition to the general solution. This results in the quantization of the energy levels of this system. Write down the final expression for the normalized wavefunction and quantized energies. What is the physical significance of these quantum numbers which can have both positive and negative values? Draw an energy diagram representing the first five energy levels.
2. Treat the six $\pi$-electrons of benzene as particles free to move on a ring of radius 1.40 Å, and calculate the energy of the lowest electronic transition. Make sure the Pauli principle is satisfied! What wavelength does this transition correspond to? Suggest some reasons why this differs from the wavelength of the lowest observed transition in benzene, which is 2600 Å.
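A numerical sketch of part b (added; not in the original problem). The ring levels are $E_m = \hbar^2 m^2/(2 m_e r^2)$; six electrons fill m = 0 and m = ±1, so the lowest transition is |m| = 1 → |m| = 2:

```python
import numpy as np

hbar = 1.0546e-34    # J s
h    = 6.626e-34     # J s
me   = 9.109e-31     # kg
c    = 2.998e8       # m/s
r    = 1.40e-10      # ring radius, m

E = lambda m: hbar**2 * m**2 / (2.0*me*r**2)   # particle-on-a-ring levels
dE = E(2) - E(1)                               # lowest Pauli-allowed excitation
lam = h*c/dE * 1e10                            # wavelength in Angstrom
print(lam)                                     # ~2100 A, vs the observed 2600 A
```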
Q9
A diatomic molecule constrained to rotate on a flat surface can be modeled as a planar rigid rotor (with eigenfunctions, $\Phi ( \phi )$, analogous to those of the particle on a ring) with fixed bond length r. At t = 0, the rotational (orientational) probability distribution is observed to be described by a wavefunction $\Psi ( \phi ,0) = \sqrt{\dfrac{4}{3\pi}}\cos^2 \phi .$ What values, and with what probabilities, of the rotational angular momentum, $\left( -i\hbar\frac{\partial }{\partial \phi} \right) ,$ could be observed in this system? Explain whether these probabilities would be time dependent as $\Psi ( \phi , 0) \text{ evolves into } \Psi ( \phi ,t).$
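The projections onto the ring eigenfunctions $\Phi_m = e^{im\phi}/\sqrt{2\pi}$ can be evaluated numerically (an added check, not part of the original problem); since $\cos^2\phi = \tfrac{1}{2} + \tfrac{1}{4}(e^{2i\phi} + e^{-2i\phi})$, only m = 0 and m = ±2 appear:

```python
import numpy as np

phi = np.linspace(0.0, 2.0*np.pi, 400001)
dphi = phi[1] - phi[0]
psi = np.sqrt(4.0/(3.0*np.pi)) * np.cos(phi)**2    # the t = 0 state

def prob(m):
    """|<Phi_m|Psi>|^2 with Phi_m = exp(i m phi)/sqrt(2 pi); L eigenvalue m*hbar."""
    c_m = np.sum(np.exp(-1j*m*phi)*psi) * dphi / np.sqrt(2.0*np.pi)
    return abs(c_m)**2

for m in (-2, -1, 0, 1, 2):
    print(m, prob(m))
# P(0) = 2/3 and P(+-2) = 1/6 each; all other m give zero
```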
Q10
A particle of mass m moves in a potential given by $V(x,y,z) = \dfrac{k}{2} \left( x^2 + y^2 + z^2 \right) = \dfrac{kr^2}{2}.$
1. Write down the time-independent Schrödinger equation for this system.
2. Make the substitution $\Psi$(x,y,z) = X(x)Y(y)Z(z) and separate the variables for this system.
3. What are the solutions to the resulting equations for X(x), Y(y), and Z(z)?
4. What is the general expression for the quantized energy levels of this system, in terms of the quantum numbers $n_x, n_y\text{, and } n_z$, which correspond to X(x), Y(y), and Z(z)?
5. What is the degree of degeneracy of a state of energy $E = 5.5\hbar \sqrt{\dfrac{k}{m}} \text{ for this system?} \nonumber$
6. An alternative solution may be found by making the substitution $\Psi (r, \theta , \phi ) = F(r)G( \theta , \phi ). \text{ In this substitution, what are the solutions for } G( \theta , \phi )?$
7. Write down the differential equation for F(r) which is obtained when the substitution $\Psi (r, \theta , \phi ) = F(r)G( \theta , \phi)$ is made. Do not solve this equation.
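The degeneracy count of part 5 can be verified by brute force (an added check, not part of the original problem): $E = \hbar\sqrt{k/m}\,(n_x + n_y + n_z + 3/2) = 5.5\,\hbar\sqrt{k/m}$ corresponds to $n_x + n_y + n_z = 4$:

```python
# count ordered triples of non-negative integers summing to 4;
# the closed form is (n+1)(n+2)/2 = 15 for n = 4
states = [(nx, ny, nz)
          for nx in range(5) for ny in range(5) for nz in range(5)
          if nx + ny + nz == 4]
print(len(states))    # 15
```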
Q11
Consider an $N_2$ molecule, in the ground vibrational level of the ground electronic state, which is bombarded by 100 eV electrons. This leads to the ionization of the $N_2$ molecule to form $N_2^+$. In this problem we will attempt to calculate the vibrational distribution of the newly-formed $N_2^+$ ions, using a somewhat simplified approach.
a. Calculate (according to classical mechanics) the velocity (in cm/sec) of a 100 eV electron, ignoring any relativistic effects. Also calculate the amount of time required for a 100 eV electron to pass an N$_2$ molecule, which you may estimate as having a length of 2Å.
b. The radial Schrödinger equation for a diatomic molecule treating vibration as a harmonic oscillator can be written as:
$-\dfrac{\hbar^2}{2\mu r^2} \left( \frac{\partial }{\partial r} \left( r^2 \frac{\partial \Psi}{\partial r} \right) \right) + \dfrac{k}{2} \left( r - r_e \right)^2 \Psi = E \Psi , \nonumber$
Substituting $\Psi (r) = \dfrac{F(r)}{r},$ this equation can be rewritten as:
$-\dfrac{\hbar^2}{2\mu}\frac{\partial^2 }{\partial r^2}F(r) + \dfrac{k}{2} \left( r - r_e \right)^2 F(r) = E F(r) . \nonumber$
The vibrational Hamiltonian for the ground electronic state of the $N_2$ molecule within this approximation is given by:
$\textbf{H}(N_2) = -\dfrac{\hbar^2}{2\mu}\dfrac{d^2 }{dr^2 } + \dfrac{k_{N_2}}{2} \left( r - r_{N_2} \right)^2 , \nonumber$
where $r_{N_2} \text{ and } k_{N_2}$ have been measured experimentally to be:
$r_{N_2} = 1.09796 Å; k_{N_2} = 2.294 x 10^{6}\dfrac{\text{g}}{\text{sec}^2}. \nonumber$
The vibrational Hamiltonian for the $N_2^+$ ion, however, is given by:
$\textbf{H}(N_2^+) = -\dfrac{\hbar^2}{2\mu}\dfrac{d^2}{dr^2} + \dfrac{k_{N_2}^+}{2} \left( r - r_{N_2}^+ \right)^2 , \nonumber$
where $r_{N_2}^+ \text{ and } k_{N_2}^+$ have been measured experimentally to be:
$r_{N_2}^+ = 1.11642 Å; k_{N_2}^+ = 2.009 x 10^6 \dfrac{\text{g}}{\text{sec}^2} \nonumber$
In both systems the reduced mass is $\mu = 1.1624 \times 10^{-23}$ g. Use the above information to write out the ground state vibrational wavefunctions of the $N_2 \text{ and } N_2^+$ molecules, giving explicit values for any constants which appear in them. Note: For this problem use the "normal" expression for the ground state wavefunction of a harmonic oscillator. You need not solve the differential equation for this system.
c. During the time scale of the ionization event (which you calculated in part a), the vibrational wavefunction of the $N_2$ molecule has effectively no time to change. As a result, the newly-formed $N_2^+$ ion finds itself in a vibrational state which is not an eigenfunction of the new vibrational Hamiltonian, $\textbf{H}(N_2^+)$. Assuming that the $N_2$ molecule was originally in its v=0 vibrational state, calculate the probability that the $N_2^+$ ion will be produced in its v=0 vibrational state.
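The sudden-approximation overlap of part c can be evaluated numerically (an added sketch, not part of the original problem), using the harmonic-oscillator ground states $F(r) = (\alpha/\pi)^{1/4} e^{-\alpha(r-r_e)^2/2}$ with $\alpha = \sqrt{k\mu}/\hbar$:

```python
import numpy as np

hbar = 1.0546e-27               # erg s
mu   = 1.1624e-23               # g
k0, r0 = 2.294e6, 1.09796e-8    # N2 : k (g/s^2), r_e (cm)
k1, r1 = 2.009e6, 1.11642e-8    # N2+

a0 = np.sqrt(k0*mu)/hbar        # Gaussian exponents alpha = sqrt(k mu)/hbar, cm^-2
a1 = np.sqrt(k1*mu)/hbar

r = np.linspace(0.85e-8, 1.35e-8, 500001)
dr = r[1] - r[0]
g0 = (a0/np.pi)**0.25 * np.exp(-a0*(r - r0)**2/2.0)   # v=0 of N2
g1 = (a1/np.pi)**0.25 * np.exp(-a1*(r - r1)**2/2.0)   # v=0 of N2+

P00 = (np.sum(g0*g1)*dr)**2     # probability of ending in v=0 of N2+
print(P00)                      # ~0.92
```

Roughly 92% of the suddenly ionized molecules end up in v = 0 of $N_2^+$ in this model.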
Q12
The force constant, k, of the C-O bond in carbon monoxide is $1.87 x 10^6 \dfrac{\text{g}}{\text{sec}^2}.$ Assume that the vibrational motion of CO is purely harmonic and use the reduced mass $\mu$ = 6.857 amu.
a. Calculate the spacing between vibrational energy levels in this molecule, in units of ergs and cm$^{-1}$.
b. Calculate the uncertainty in the internuclear distance in this molecule, assuming it is in its ground vibrational level. Use the ground state vibrational wavefunction $( \Psi_{v=0}), \text{and calculate } \langle x \rangle , \langle x^2 \rangle \text{, and } \Delta x = \sqrt{ \langle x^2 \rangle - \langle x \rangle ^2}.$
c. Under what circumstances (i.e. large or small values of k; large or small values of $\mu$) is the uncertainty in internuclear distance large? Can you think of any relationship between this observation and the fact that helium remains a liquid down to absolute zero?
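A numerical sketch for parts a and b (added; not in the original problem), using $\Delta x = \sqrt{\hbar/(2\mu\omega)}$ for v = 0 (since $\langle x \rangle = 0$ and $\langle x^2 \rangle = \hbar/(2\mu\omega)$):

```python
import numpy as np

hbar = 1.0546e-27            # erg s
h, c = 6.626e-27, 2.998e10   # erg s, cm/s
k  = 1.87e6                  # g/s^2
mu = 6.857 * 1.6605e-24      # reduced mass, g

w = np.sqrt(k/mu)                      # angular frequency, rad/s
spacing = hbar*w                       # level spacing hbar*omega, erg
wavenumber = spacing/(h*c)             # the same spacing in cm^-1
dx_cm = np.sqrt(hbar/(2.0*mu*w))       # Delta x for v = 0, cm
print(spacing, wavenumber, dx_cm*1e8)  # ~4.3e-13 erg, ~2150 cm^-1, ~0.034 Angstrom
```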
Q13
Suppose you are given a trial wavefunction of the form:
$\phi = \dfrac{Z_e^3}{\pi a_0^3}e^{ \left( \dfrac{-Z_er_1}{a_0} \right) }e^{ \left( \dfrac{-Z_er_2}{a_0} \right) } \nonumber$
to represent the electronic structure of a two-electron ion of nuclear charge Z and suppose that you were also lucky enough to be given the variational integral, W, (instead of asking you to derive it!):
$W = \left( Z_e^2 - 2ZZ_e + \dfrac{5}{8}Z_e \right)\dfrac{e^2}{a_0}. \nonumber$
a. Find the optimum value of the variational parameter Z$_e$ for an arbitrary nuclear charge Z by setting $\dfrac{dW}{dZ_e} = 0.$ Find both the optimal value of Z$_e$ and the resulting value of W.
b. The total energies of some two-electron atoms and ions have been experimentally determined to be:
$\begin{matrix} Z = 1 & H^- & -14.35 \text{ eV} \\ Z = 2 & He & -78.98 \text{ eV} \\ Z = 3 & Li^+ & -198.02 \text{ eV} \\ Z = 4 & Be^{+2} & -371.5 \text{ eV} \\ Z = 5 & B^{+3} & -599.3 \text{ eV} \\ Z = 6 & C^{+4} & -881.6 \text{ eV} \\ Z = 7 & N^{+5} & -1218.3 \text{ eV} \\ Z = 8 & O^{+6} & -1609.5 \text{ eV} \end{matrix} \nonumber$
Using your optimized expression for W, calculate the estimated total energy of each of these atoms and ions. Also calculate the percent error in your estimate for each ion. What physical reason explains the decrease in percentage error as Z increases?
c. In 1928, when quantum mechanics was quite young, it was not known whether the isolated, gas-phase hydride ion, H$^-$, was stable with respect to dissociation into a hydrogen atom and an electron. Compare your estimated total energy for H$^-$ to the ground state energy of a hydrogen atom and an isolated electron (system energy = -13.60 eV), and show that this simple variational calculation erroneously predicts H$^-$ to be unstable. (More complicated variational treatments give a ground state energy of H$^-$ of -14.35 eV, in agreement with experiment.)
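The optimization and the comparison of part b can be tabulated numerically (an added sketch, not part of the original problem; $e^2/a_0 = 27.2114$ eV, one hartree, is assumed):

```python
# dW/dZe = (2 Ze - 2 Z + 5/8) e^2/a0 = 0 gives Ze = Z - 5/16,
# and substituting back gives W = -(Z - 5/16)^2 e^2/a0
hartree = 27.2114
expt = {1: -14.35, 2: -78.98, 3: -198.02, 4: -371.5,
        5: -599.3, 6: -881.6, 7: -1218.3, 8: -1609.5}

errors = {}
for Z, E_expt in expt.items():
    W = -(Z - 5.0/16.0)**2 * hartree
    errors[Z] = 100.0*abs(W - E_expt)/abs(E_expt)
    print(Z, round(W, 2), round(errors[Z], 2))
# the percent error falls from ~10% for H- to ~0.1% for O^6+: the neglected
# electron-electron correlation energy becomes relatively less important as Z grows
```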
Q14
A particle of mass m moves in a one-dimensional potential given by $\textbf{H} = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2} + a|x|$, where the absolute value function is defined by |x| = x if x $\geq$ 0 and |x| = -x if x $\leq$ 0.
a. Use the normalized trial wavefunction $\phi = \sqrt[4]{\dfrac{2b}{\pi}}e^{-bx^2}$ to estimate the energy of the ground state of this system, using the variational principle to evaluate W(b).
b. Optimize b to obtain the best approximation to the ground state energy of this system, using a trial function of the form of $\phi$, as given above. The numerically calculated exact ground state energy is $0.808616\, \hbar^{2/3} m^{-1/3} a^{2/3}$. What is the percent error in your value?
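A numerical sketch of this optimization (added; not in the original problem). For the Gaussian trial function, $\langle T \rangle = \hbar^2 b/2m$ and $\langle a|x| \rangle = a/\sqrt{2\pi b}$, so in units $\hbar = m = a = 1$:

```python
import numpy as np

# W(b) = b/2 + 1/sqrt(2 pi b), minimized by grid search instead of calculus
b = np.linspace(0.05, 5.0, 500001)
W = b/2.0 + 1.0/np.sqrt(2.0*np.pi*b)
i = np.argmin(W)
b_opt, W_opt = b[i], W[i]

exact = 0.808616      # quoted exact ground state, units (hbar^2 a^2 / m)^(1/3)
error_pct = 100.0*(W_opt - exact)/exact
print(b_opt, W_opt, error_pct)    # b ~ 0.54, W ~ 0.813, error ~ 0.5%
```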
Q15
The harmonic oscillator is specified by the Hamiltonian:
$\textbf{H} = -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2} + \dfrac{1}{2}kx^2 . \nonumber$
Suppose the ground state solution to this problem were unknown, and that you wish to approximate it using the variational theorem. Choose as your trial wavefunction,
$\phi = \sqrt{\dfrac{15}{16}}\, a^{-5/2} \left( a^2 - x^2 \right) \text{ for -a < x < a} \nonumber$
$\phi = 0 \text{ for |x| } \geq \text{ a } \nonumber$
where a is an arbitrary parameter which specifies the range of the wavefunction. Note that $\phi$ is properly normalized as given.
a. Calculate $\int\limits_{-\infty}^{+\infty} \phi^{\text{*}} \textbf{H} \phi dx$ and show it to be given by:
$\int\limits_{-\infty}^{+\infty} \phi^{\text{*}}\textbf{H} \phi dx = \dfrac{5}{4}\dfrac{\hbar^2}{ma^2} + \dfrac{ka^2}{14}. \nonumber$
b. Calculate $\int\limits_{-\infty}^{+\infty} \phi^{\text{*}}\textbf{H} \phi dx$ for $a = b \sqrt[4]{\dfrac{\hbar^2}{km}}$ with b = 0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, and 5.0, and plot the result.
c. To find the best approximation to the true wavefunction and its energy, find the minimum of $\int\limits_{-\infty}^{+\infty} \phi^{\text{*}}\textbf{H} \phi dx \text{ by setting } \dfrac{d}{da} \int\limits_{-\infty}^{+\infty} \phi^{\text{*}}\textbf{H} \phi dx = 0$ and solving for a. Substitute this value into the expression for $\int\limits_{-\infty}^{+\infty} \phi^{\text{*}}\textbf{H} \phi dx$ given in part a. to obtain the best approximation for the energy of the ground state of the harmonic oscillator.
d. What is the percent error in your calculated energy of part c. ?
Q16
Einstein told us that the (relativistic) expression for the energy of a particle having rest mass m and momentum p is $E^2 = m^2c^4 + p^2c^2$.
a. Derive an expression for the relativistic kinetic energy operator which contains terms correct through one higher order than the "ordinary" $E = mc^2 + \dfrac{p^2}{2m}$.
b. Using the first order correction as a perturbation, compute the first-order perturbation theory estimate of the energy for the 1s level of a hydrogen-like atom (general Z). Show the Z dependence of the result.
$\text{Note: } \Psi (r)_{1s} = \left( \dfrac{Z}{a} \right)^{3/2} \sqrt{\dfrac{1}{\pi}}e^{-\dfrac{Zr}{a}} \text{ and } E_{1s} = -\dfrac{Z^2me^4}{2\hbar^2} \nonumber$
c. For what value of Z does this first-order relativistic correction amount to 10% of the unperturbed (non-relativistic) 1s energy?
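As a numeric check on parts b and c: using $\langle p^4 \rangle = 4m^2 \langle (H_0 - V)^2 \rangle$ for the 1s state, the first-order correction has relative magnitude $E^{(1)}/E_{1s} = \frac{5}{4}(Z\alpha)^2$, with α the fine-structure constant. The sketch below assumes that ratio:

```python
alpha_fs = 1/137.035999  # fine-structure constant

def ratio(Z):
    """relative size of the first-order relativistic correction,
    assuming E(1)/E_1s = (5/4)(Z*alpha)^2"""
    return 1.25*(Z*alpha_fs)**2

# part c: set ratio(Z) = 0.10 and solve for Z
Z_10pct = (0.10/1.25)**0.5/alpha_fs
print(ratio(1))   # tiny for hydrogen (~7e-5)
print(Z_10pct)    # ≈ 38.8, i.e. Z ≈ 39
```

The Z⁴ growth of the correction against the Z² growth of the unperturbed energy is what pushes relativistic effects into prominence for heavy atoms.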
Q17
Consider an electron constrained to move on the surface of a sphere of radius $r_0$. The Hamiltonian for such motion consists of a kinetic energy term only, $\textbf{H}_0 = \dfrac{\textbf{L}^2}{2m_er_0^2},$ where L is the orbital angular momentum operator involving derivatives with respect to the spherical polar coordinates $( \theta , \phi )$. $\textbf{H}_0$ has the complete set of eigenfunctions $\psi_{lm}^{(0)} = Y_{l,m}( \theta , \phi )$.
a. Compute the zeroth order energy levels of this system.
b. A uniform electric field is applied along the z-axis, introducing a perturbation $V = -e \epsilon z = -e \epsilon r_0\text{Cos}\theta \text{, where } \epsilon$ is the strength of the field. Evaluate the correction to the energy of the lowest level through second order in perturbation theory, using the identity
$\text{Cos} \theta Y_{l,m} \left( \theta , \phi \right) = \sqrt{\dfrac{(l+m+1)(l-m+1)}{(2l+1)(2l+3)}}Y_{l+1,m} \left( \theta , \phi \right) + \sqrt{\dfrac{(l+m)(l-m)}{(2l+1)(2l-1)}}Y_{l-1,m} \left( \theta , \phi \right) . \nonumber$
Note that this identity enables you to utilize the orthonormality of the spherical harmonics.
c. The electric polarizability $\alpha$ gives the response of a molecule to an externally applied electric field, and is defined by $\alpha = -\dfrac{\partial^2 E }{\partial \epsilon^2}\bigg|_{ \epsilon = 0}$ where E is the energy in the presence of the field and $\epsilon$ is the strength of the field. Calculate $\alpha$ for this system.
d. Use this problem as a model to estimate the polarizability of a hydrogen atom, where r$_0 = a_0$ = 0.529 Å, and a cesium atom, which has a single 6s electron with r$_0 \approx$ 2.60 Å. The corresponding experimental values are $\alpha_H = 0.6668$ Å$^3$ and $\alpha_{Cs} = 59.6$ Å$^3$.
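Part d reduces to a two-line calculation if one assumes the result that follows from parts b and c, namely $E^{(2)} = -m_e e^2 \epsilon^2 r_0^4/3\hbar^2$ and hence $\alpha = 2m_e e^2 r_0^4/3\hbar^2 = 2r_0^4/(3a_0)$. A sketch in ångström units:

```python
# assumes alpha = 2 r0^4 / (3 a0), the sum-over-states result for this model
a0 = 0.529177  # Bohr radius in angstrom

def alpha_model(r0):
    """polarizability in A^3 for r0 in angstrom"""
    return 2*r0**4/(3*a0)

print(alpha_model(0.529))  # ~0.099 A^3 (experiment: 0.6668 A^3 for H)
print(alpha_model(2.60))   # ~57.6  A^3 (experiment: 59.6   A^3 for Cs)
```

The rigid-sphere model badly underestimates α for hydrogen, whose polarization is largely radial, but does remarkably well for the diffuse Cs 6s electron, where angular polarization dominates.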
Q18
An electron moving in a conjugated bond framework can be viewed as a particle in a box. An externally applied electric field of strength $\epsilon$ interacts with the electron in a fashion described by the perturbation $V = e \epsilon \left( x - \dfrac{L}{2} \right)$, where x is the position of the electron in the box, e is the electron's charge, and L is the length of the box.
a. Compute the first order correction to the energy of the n=1 state and the first order wavefunction for the n=1 state. In the wavefunction calculation, you need only compute the contribution to $\Psi_1^{(1)} \text{ made by } \Psi_2^{(0)}$. Make a rough (no calculation needed) sketch of $\Psi_1^{(0)} + \Psi_1^{(1)}$ as a function of x and physically interpret the graph.
b. Using your answer to part a. compute the induced dipole moment caused by the polarization of the electron density due to the electric field effect $\mu_{\text{induced}} = -e \int \Psi^{\text{*}} \left( x - \dfrac{L}{2} \right) \Psi dx.$ You may neglect the term proportional to $\epsilon^2$ ; merely obtain the term linear in $\epsilon$.
c. Compute the polarizability, $\alpha$, of the electron in the n=1 state of the box, and explain physically why $\alpha$ should depend as it does upon the length of the box L. Remember that $\alpha = \frac{\partial \mu }{\partial \epsilon}\bigg|_{ \epsilon = 0}.$
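For part c, the answer can be cross-checked against the full sum-over-states polarizability. The sketch below assumes ħ = m = e = L = 1 and the analytic box matrix elements $\langle n | x - L/2 | m \rangle$; the n = 2 term reproduces almost all of α, consistent with part a keeping only $\Psi_2^{(0)}$:

```python
import numpy as np

L = 1.0  # hbar = m = e = 1, box length 1 (illustrative units)

def xme(n, m):
    """<n| x - L/2 |m> for particle-in-a-box eigenstates, n != m;
    vanishes automatically when n + m is even"""
    k1, k2 = n - m, n + m
    return (L/np.pi**2)*(((-1)**k1 - 1)/k1**2 - ((-1)**k2 - 1)/k2**2)

E = lambda n: n**2*np.pi**2/(2*L**2)

# alpha = 2 sum_n |<1| x - L/2 |n>|^2 / (E_n - E_1)
alpha = 2*sum(xme(1, n)**2/(E(n) - E(1)) for n in range(2, 200))
print(alpha)                 # ≈ 0.00439 (units of m e^2 L^4 / hbar^2)
print(1024/(243*np.pi**6))   # ≈ 0.00438, the n = 2 term alone
```

The α ∝ L⁴ scaling is explicit here: a longer box lets the field shift more charge through a larger distance, which is the physical point asked for in part c.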
Q1
The general relationships are as follows:
$\begin{matrix} x &= \text{rSin} \theta \text{ Cos} \phi &r^2 = x^2 + y^2 + z^2 \ y &= \text{rSin} \theta \text{ Sin} \phi &\text{Sin} \theta = \dfrac{\sqrt{x^2 + y^2}}{\sqrt{x^2 + y^2 + z^2}} \ z &= \text{rCos} \theta & \text{Cos} \theta = \dfrac{z}{\sqrt{x^2 + y^2 + z^2}} \ & &\text{Tan} \phi = \dfrac{y}{x} \ \end{matrix} \nonumber$
$\begin{matrix} a. & \ & 3x + y - 4z = 12 \ & 3\text{(rSin} \theta \text{Cos} \phi ) + \text{rSin} \theta \text{Sin} \phi - 4\text{(rCos} \theta ) = 12 \ & r\text{(3Sin} \theta \text{Cos} \phi + \text{Sin} \theta \text{Sin} \phi - 4\text{Cos} \theta ) = 12 \end{matrix} \nonumber$
$\begin{matrix} b. & \ & x = \text{rCos} \phi & r^2 = x^2 + y^2 \ & y = \text{rSin} \phi & \text{Tan} \phi = \dfrac{y}{x} \ & z = z & y^2 + z^2 = 9 \ & &r^2\text{Sin}^2 \phi + z^2 = 9 \end{matrix} \nonumber$
$\begin{matrix} c. \ & r = 2\text{Sin} \theta \text{Cos} \phi \ & r = 2 \left( \dfrac{x}{r} \right) \ & r^2 = 2x \ & x^2 + y^2 + z^2 = 2x \ & x^2 -2x + y^2 + z^2 = 0 \ & x^2 -2x + 1 + y^2 + z^2 = 1 \ & \left( x -1 \right)^2 + y^2 + z^2 = 1 \end{matrix} \nonumber$
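A quick numerical spot-check of part c (a sketch): build points from r = 2Sinθ Cosφ for arbitrary angles and confirm each lands on the unit sphere centered at (1, 0, 0):

```python
import numpy as np

# random angles; the identity holds for any (theta, phi)
rng = np.random.default_rng(0)
for theta, phi in rng.uniform(0.1, 3.0, size=(5, 2)):
    r = 2*np.sin(theta)*np.cos(phi)       # the surface r = 2 sin(theta) cos(phi)
    x = r*np.sin(theta)*np.cos(phi)
    y = r*np.sin(theta)*np.sin(phi)
    z = r*np.cos(theta)
    print((x - 1)**2 + y**2 + z**2)       # 1.0 to rounding error, every time
```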
Q2
a. $\begin{matrix} & 9x + 16y\frac{\partial y}{\partial x} = 0 \ & 16ydy = -9xdx \ & \dfrac{16}{2}y^2 = -\dfrac{9}{2}x^2 + c \ & 16y^2 = -9x^2 +c' \ & \dfrac{y^2}{9} + \dfrac{x^2}{16} = c'' \text{(general equation for an ellipse)} \end{matrix} \nonumber$
b. $\begin{matrix} & 2y + \frac{\partial y}{\partial x} + 6 = 0 \ & \dfrac{dy}{dx} = -2(y + 3) \ & \dfrac{dy}{y+3} = -2dx \ & ln(y + 3) = -2x + c \ & y + 3 = c'e^{-2x} \ & y= c'e^{-2x} - 3 \end{matrix} \nonumber$
Q3
a. First determine the eigenvalues:
$\begin{matrix} & \text{det} \begin{bmatrix} &-1 - \lambda & 2 \ &2 & 2 - \lambda \end{bmatrix} = 0 \ & (-1 - \lambda )(2 - \lambda ) -2^2 = 0 \ & -2 + \lambda -2 \lambda + \lambda^2 -4 = 0 \ & \lambda^2 - \lambda -6 = 0 \ & ( \lambda -3)( \lambda +2) = 0 \ & \lambda = 3 \text{ or } \lambda = -2. \end{matrix} \nonumber$
Next, determine the eigenvectors. First, the eigenvector associated with eigenvalue -2:
$\begin{matrix} & \begin{bmatrix} -1 & 2 \ 2 & 2 \end{bmatrix} \begin{bmatrix} C_{11} \ C_{21} \end{bmatrix} = -2 \begin{bmatrix} C_{11} \ C_{21} \end{bmatrix} \ & -C_{11} + 2C_{21} = -2C_{11} \ & C_{11} = -2C_{21} \end{matrix} \nonumber$
(Note: The second row offers no new information, e.g. 2$C_{11} + 2C_{21} = -2C_{21} )$
$\begin{matrix} & C_{11}^2 + C_{21}^2 = 1\text{ (from normalization)} \ & (-2C_{21})^2 + C_{21}^2 = 1 \ & 4C_{21}^2 + C_{21}^2 = 1 \ & 5C_{21}^2 = 1 \ &C_{21}^2 = 0.2 \ & C_{21} = \sqrt{0.2} \text{, and therefore } C_{11} = -2\sqrt{0.2}. \ & \text{Next, the eigenvector associated with eigenvalue 3:} \ & \begin{bmatrix} -1 & 2 \ 2 & 2 \end{bmatrix} \begin{bmatrix} C_{12} \ C_{22} \end{bmatrix} = 3 \begin{bmatrix} C_{12} \ C_{22} \end{bmatrix} \ & -C_{12} + 2C_{22} = 3C_{12} \ & C_{12} = \dfrac{1}{2}C_{22} \ & \text{ (again the second row offers no new information)} \ & C_{12}^2 + C_{22}^2 = 1 \ & 0.25C_{22}^2 + C_{22}^2 = 1 \ & 1.25C_{22}^2 = 1 \ & C_{22}^2 = 0.8 \ &C_{22} = \sqrt{0.8} = 2\sqrt{0.2} \text{, and therefore } C_{12} = \sqrt{0.2}. \ & \text{Therefore the eigenvector matrix becomes:} \ & \begin{bmatrix} -2\sqrt{0.2} & \sqrt{0.2} \ \sqrt{0.2} & 2\sqrt{0.2} \end{bmatrix} \end{matrix} \nonumber$
b. First determine the eigenvalues:
$\begin{matrix} & \text{det}\begin{bmatrix} -2 - \lambda & 0 & 0 \ 0 & -1 - \lambda & 2 \ 0 & 2 & 2 - \lambda \end{bmatrix} = 0 \ & \text{det} \left[ -2 - \lambda \right] \text{det} \begin{bmatrix} -1 - \lambda & 2 \ 2 & 2 - \lambda \end{bmatrix} = 0 \end{matrix} \nonumber$
From 3a, the solutions then become -2, -2, and 3. Next, determine the eigenvectors. First the eigenvector associated with eigenvalue 3 (the third root):
$\begin{bmatrix} -2 & 0 & 0 \ 0 & -1 & 2 \ 0 & 2 & 2 \end{bmatrix} \begin{bmatrix} C_{13} \ C_{23} \ C_{33} \end{bmatrix} = 3 \begin{bmatrix} C_{13} \ C_{23} \ C_{33} \end{bmatrix} \nonumber$
$-2C_{13} = 3C_{13} \text{ (row one)} \ C_{13} = 0 \ -C_{23} + 2C_{33} = 3C_{23} \text{ (row two)} \ 2C_{33} = 4C_{23} \ C_{33} = 2C_{23} \text{ (again the third row offers no new information)} \ C_{13}^2 + C_{23}^2 + C_{33}^2 = 1\text{ (from normalization)} \ 0 + C_{23}^2 + (2C_{23})^2 = 1 \ 5C_{23}^2 = 1 \ C_{23} = \sqrt{0.2} \text{, and therefore } C_{33} = 2\sqrt{0.2} . \nonumber$
Next, find the pair of eigenvectors associated with the degenerate eigenvalue of -2. First, root one eigenvector one:
$-2C_{11} = -2C_{11} \text{ (no new information from row one)} \ -C_{21} + 2C_{31} = -2C_{21} \text{ (row two)} \ C_{21} = -2C_{31} \text{ (again the third row offers no new information)} \ C_{11}^2 + C_{21}^2 + C_{31}^2 = 1 \text{ (from normalization)} \ C_{11}^2 + (-2C_{31})^2 + C_{31}^2 = 1 \ C_{11}^2 + 5C_{31}^2 = 1 \ C_{11} = \sqrt{1 - 5C_{31}^2} \nonumber$
Note: There are now two equations with three unknowns. Second, root two eigenvector two:
$-2C_{12} = -2C_{12} \text{ (no new information from row one)} \ -C_{22} + 2C_{32} = -2C_{22} \text{ (row two)} \ C_{22} = -2C_{32} \text{ (again the third row offers no new information)} \ C_{12}^2 + C_{22}^2 + C_{32}^2 = 1 \text{ (from normalization)} \ C_{12}^2 + (-2C_{32})^2 + C_{32}^2 = 1 \ C_{12}^2 + 5C_{32}^2 = 1 \ C_{12} = \sqrt{1-5C_{32}^2} \nonumber$
Note: Again there are now two equations with three unknowns.
$C_{11}C_{12} + C_{21}C_{22} + C_{31}C_{32} = 0 \text{ (from orthogonalization)} \nonumber$
Now there are five equations with six unknowns.
$\text{Arbitrarily choose } C_{11} = 0 \ C_{11} = 0 = \sqrt{1 - 5C_{31}^2} \ 5C_{31}^2 = 1 \ C_{31} = \sqrt{0.2} \ C_{21} = -2\sqrt{0.2} \ C_{11}C_{12} + C_{21}C_{22} + C_{31}C_{32} = 0 \text{ (from orthogonalization)} \ 0 + -2\sqrt{0.2}(-2C_{32}) + \sqrt{0.2} C_{32} = 0 \ 5\sqrt{0.2}C_{32} = 0 \ C_{32} = 0, C_{22} = 0 \text{, and } C_{12} = 1 \nonumber$
Therefore the eigenvector matrix becomes:
$\begin{bmatrix} 0 & 1 & 0 \ -2\sqrt{0.2} & 0 & \sqrt{0.2} \ \sqrt{0.2} & 0 & 2\sqrt{0.2} \end{bmatrix} \nonumber$
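The hand calculation can be checked against a direct numerical diagonalization (a sketch using numpy; numerical eigenvectors may differ from the hand result by an overall sign, or by mixing within the degenerate λ = -2 pair):

```python
import numpy as np

M = np.array([[-2.0, 0.0, 0.0],
              [ 0.0,-1.0, 2.0],
              [ 0.0, 2.0, 2.0]])
vals, vecs = np.linalg.eigh(M)     # eigh returns eigenvalues in ascending order
print(np.round(vals, 6))           # [-2. -2.  3.]

# the lambda = 3 eigenvector should match (0, sqrt(0.2), 2 sqrt(0.2)) up to sign
v3 = vecs[:, np.argmax(vals)]
print(np.round(v3/np.sign(v3[2]), 6))  # ≈ (0, 0.447214, 0.894427)
```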
Q4
Show: $\langle \phi_1 \big| \phi_1 \rangle = 1, \langle \phi_2 \big| \phi_2 \rangle = 1 \text{, and } \langle \phi_1 \big| \phi_2 \rangle = 0$
$\langle \phi_1 | \phi_1 \rangle \overset{?}{=} 1 \ \left( -2 \sqrt{0.2} \right)^2 + \left( \sqrt{0.2} \right)^2 \overset{?}{=} 1 \ 4(0.2) + 0.2 \overset{?}{=} 1 \ 0.8 + 0.2 \overset{?}{=} 1 \ 1 = 1 \ \langle \phi_2 \big| \phi_2 \rangle \overset{?}{=} 1 \ \left( \sqrt{0.2} \right)^2 + \left( 2\sqrt{0.2} \right)^2 \overset{?}{=} 1 \ 0.2 + 4(0.2) \overset{?}{=} 1 \ 0.2 + 0.8 \overset{?}{=} 1 \ 1 = 1 \ \langle \phi_1 \big| \phi_2 \rangle = \langle \phi_2 \big| \phi_1 \rangle \overset{?}{=} 0 \ -2\sqrt{0.2}\sqrt{0.2} + \sqrt{0.2} \cdot 2\sqrt{0.2} \overset{?}{=} 0 \ -2(0.2) + 2(0.2) \overset{?}{=} 0 \ -0.4 + 0.4 \overset{?}{=} 0 \ 0 = 0 \nonumber$
Q5
Show (for the degenerate eigenvalue; $\lambda$ = -2): $\langle \phi_1 \big| \phi_1 \rangle = 1, \langle \phi_2 \big| \phi_2 \rangle = 1\text{, and } \langle \phi_1 \big| \phi_2 \rangle = 0$
$\langle \phi_1 \big| \phi_1 \rangle \overset{?}{=} 1 \ 0 + \left( -2\sqrt{0.2} \right)^2 + \left( \sqrt{0.2} \right)^2 \overset{?}{=} 1 \ 4(0.2) + 0.2 \overset{?}{=} 1 \ 0.8 + 0.2 \overset{?}{=} 1 \ 1 = 1 \ \langle \phi_2 \big| \phi_2 \rangle \overset{?}{=} 1 \ 1^2 + 0 + 0 \overset{?}{=} 1 \ 1 = 1 \ \langle \phi_1 \big| \phi_2 \rangle = \langle \phi_2 \big| \phi_1 \rangle \overset{?}{=} 0 \ (0)(1) + (-2\sqrt{0.2})(0) + (\sqrt{0.2})(0) \overset{?}{=} 0 \ 0 = 0 \nonumber$
Q6
Suppose the solution is of the form $x(t) = e^{\alpha t}$, with $\alpha$ unknown. Inserting this trial solution into the differential equation results in the following:
$\dfrac{d^2}{dt^2}e^{ \alpha t} + k^2 e^{ \alpha t} = 0 \ \alpha^2e^{ \alpha t} + k^2e^{ \alpha t} = 0 \ \left( \alpha^2 + k^2 \right) x(t) = 0 \ \left( \alpha^2 + k^2 \right) = 0 \ \alpha^2 = -k^2 \ \alpha = \sqrt{-k^2} \ \alpha = \pm ik \nonumber$
$\therefore$ Solutions are of the form $e^{-ikt}, e^{ikt},$ or a combination of both: $x(t) = C_1e^{ikt} + C_2e^{-ikt}.$
Euler's formula also states that: $e^{\pm i\theta} = \text{Cos}\theta \pm i\text{Sin}\theta$, so the previous equation for x(t) can also be written as:
$x(t) = C_1 \left[ Cos(kt) + iSin(kt) \right] + C_2 \left[ Cos(kt) - iSin(kt) \right] \ x(t) = \left( C_1 + C_2 \right) Cos(kt) + \left( C_1 - C_2 \right) iSin(kt)\text{, or alternatively} \ x(t) = C_3Cos(kt) + C_4Sin(kt). \nonumber$
We can determine these coefficients by making use of the "boundary conditions".
$\text{at t } = 0, x(0) = L \ x(0) = C_3Cos(0) + C_4Sin(0) = L \ C_3 = L \ \text{at t } = 0, \dfrac{dx(0)}{dt}=0 \ \dfrac{d}{dt}x(t) = \dfrac{d}{dt} \left( C_3Cos(kt) + C_4Sin(kt) \right) \ \dfrac{d}{dt}x(t) = -C_3kSin(kt) + C_4kCos(kt) \ \dfrac{d}{dt}x(0) = 0 = -C_3kSin(0) + C_4kCos(0) \ C_4k = 0 \ C_4 = 0 \nonumber$
$\therefore$ The solution is of the form: $x(t) = LCos(kt)$
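The result is easy to verify by direct numerical integration of x'' + k²x = 0 with x(0) = L and dx/dt(0) = 0 (a sketch; the k and L values are arbitrary illustrative choices):

```python
import numpy as np

# velocity-Verlet integration of x'' = -k^2 x
k, L = 2.0, 1.5
dt, steps = 1.0e-4, 20000
x, v = L, 0.0                 # x(0) = L, dx/dt(0) = 0
for _ in range(steps):
    v += -k**2*x*dt/2         # half-kick
    x += v*dt                 # drift
    v += -k**2*x*dt/2         # half-kick
t = steps*dt
print(x, L*np.cos(k*t))       # the two agree closely
```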
Q1
1. \begin{align} K.E. &= \dfrac{mv^2}{2} = \left( \dfrac{m}{m} \right) \dfrac{mv^2}{2} = \dfrac{(mv)^2}{2m} = \dfrac{p^2}{2m} \ K.E. &= \dfrac{1}{2m} \left( p_x^2 + p_y^2 + p_z^2 \right) \ K.E. &= \dfrac{1}{2m} \left[ \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial x} \right)^2 + \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial y} \right)^2 + \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial z} \right)^2 \right] \ K.E. &= \dfrac{-\hbar^2}{2m} \left[ \dfrac{\partial^2 }{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2 }{\partial z^2} \right] \end{align}
2. \begin{align} \textbf{p} &= m\textbf{v} = \textbf{i}p_x + \textbf{j}p_y + \textbf{k}p_z \ \textbf{p} &= \textbf{i} \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial x} \right) + \textbf{j} \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial y} \right) + \textbf{k} \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial z} \right) \end{align} where i, j, and k are unit vectors along the x, y, and z axes.
3. \begin{align} L_y &= zp_x - xp_z \ L_y &= z \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial x} \right) - x \left( \dfrac{\hbar}{i}\dfrac{\partial}{\partial z} \right) \end{align}
Q2
First derive the general formulas for $\dfrac{\partial }{\partial x}, \dfrac{\partial }{\partial y}, \dfrac{\partial }{\partial z}$ in terms of r, $\theta \text{, and } \phi \text{, and } \dfrac{\partial }{\partial r} , \dfrac{\partial }{\partial \theta } \text{, and } \dfrac{\partial }{\partial \phi}$ in terms of x,y, and z. The general relationships are as follows:
\begin{align} x &= r\sin \theta \cos \phi & r^2 &= x^2 + y^2 + z^2 \ y &= r\sin \theta \sin \phi & \sin \theta &=\dfrac{\sqrt{x^2 + y^2}}{\sqrt{x^2 + y^2 + z^2}} \ z &= r\cos \theta & \cos \theta &= \dfrac{z}{\sqrt{x^2 + y^2 +z^2}} \ & & \tan \phi &= \dfrac{y}{x} \end{align} \nonumber
First $\dfrac{\partial }{\partial x}, \dfrac{\partial }{\partial y}\text{, and } \dfrac{\partial }{\partial z}$ from the chain rule:
\begin{align} \dfrac{\partial }{\partial x} &= \left( \dfrac{\partial r}{\partial x} \right)_{y,z} \dfrac{\partial }{\partial r} + \left( \dfrac{\partial \theta }{\partial x} \right)_{y,z} \dfrac{\partial }{\partial \theta } + \left( \dfrac{\partial \phi}{\partial x} \right)_{y,z}\dfrac{\partial }{\partial \phi}, \ \dfrac{\partial }{\partial y} &= \left( \dfrac{\partial r}{\partial y} \right)_{x,z} \dfrac{\partial }{\partial r} + \left( \dfrac{\partial \theta }{\partial y} \right)_{x,z} \dfrac{\partial }{\partial \theta } + \left( \dfrac{\partial \phi}{\partial y} \right)_{x,z}\dfrac{\partial }{\partial \phi}, \ \dfrac{\partial }{\partial z} &= \left( \dfrac{\partial r}{\partial z} \right)_{x,y} \dfrac{\partial }{\partial r} + \left( \dfrac{\partial \theta }{\partial z} \right)_{x,y} \dfrac{\partial }{\partial \theta } + \left( \dfrac{\partial \phi}{\partial z} \right)_{x,y}\dfrac{\partial }{\partial \phi}, \ \end{align} \nonumber
Evaluation of the many "coefficients" gives the following:
\begin{align} \left( \dfrac{\partial r}{\partial x} \right)_{y,z} &= \sin \theta \cos \phi , & \left( \dfrac{\partial \theta }{\partial x} \right)_{y,z} &= \dfrac{\cos \theta \cos \phi}{r}, & \left( \dfrac{\partial \phi}{\partial x} \right)_{y,z} =& -\dfrac{\sin \phi}{r\sin \theta } \ \left( \dfrac{\partial r}{\partial y} \right)_{x,z} &= \sin \theta \sin \phi , & \left( \dfrac{\partial \theta }{\partial y} \right)_{x,z} &= \dfrac{\cos \theta \sin \phi}{r}, & \left( \dfrac{\partial \phi}{\partial y} \right)_{x,z} =& \dfrac{\cos \phi}{r\sin \theta } \ \left( \dfrac{\partial r}{\partial z} \right)_{x,y} &= \cos \theta , & \left( \dfrac{\partial \theta }{\partial z} \right)_{x,y} &= -\dfrac{\sin \theta}{r}\text{, and } & \left( \dfrac{\partial \phi}{\partial z} \right)_{x,y} =& 0. \end{align} \nonumber
Upon substitution of these "coefficients":
\begin{align} \dfrac{\partial }{\partial x} &= \sin \theta \cos \phi\dfrac{\partial }{\partial r} + \dfrac{\cos \theta \cos \phi }{r}\dfrac{\partial }{\partial \theta } - \dfrac{\sin \phi}{r\sin \theta }\dfrac{\partial }{\partial \phi}, \ \dfrac{\partial }{\partial y} &= \sin \theta \sin \phi\dfrac{\partial }{\partial r} + \dfrac{\cos \theta \sin \phi }{r}\dfrac{\partial }{\partial \theta } + \dfrac{\cos \phi}{r\sin \theta }\dfrac{\partial }{\partial \phi} \text{, and } \ \dfrac{\partial }{\partial z} &= \cos \theta \dfrac{\partial }{\partial r} - \dfrac{\sin \theta }{r}\dfrac{\partial }{\partial \theta} + 0\dfrac{\partial}{\partial \phi}. \end{align} \nonumber
Next $\dfrac{\partial }{\partial r}, \dfrac{\partial }{\partial \theta }\text{, and }\dfrac{\partial }{\partial \phi}$ from the chain rule:
\begin{align} \dfrac{\partial }{\partial r} &= \left( \dfrac{\partial x}{\partial r} \right)_{ \theta , \phi}\dfrac{\partial }{\partial x} + \left( \dfrac{\partial y}{\partial r} \right)_{ \theta , \phi}\dfrac{\partial }{\partial y} + \left( \dfrac{\partial z}{\partial r} \right)_{ \theta , \phi}\dfrac{\partial }{\partial z}, \ \dfrac{\partial }{\partial \theta } &= \left( \dfrac{\partial x}{\partial \theta } \right)_{r, \phi}\dfrac{\partial }{\partial x} + \left( \dfrac{\partial y}{\partial \theta } \right)_{r, \phi}\dfrac{\partial }{\partial y} + \left( \dfrac{\partial z}{\partial \theta } \right)_{r, \phi}\dfrac{\partial }{\partial z}\text{, and } \ \dfrac{\partial }{\partial \phi } &= \left( \dfrac{\partial x}{\partial \phi} \right)_{r, \theta }\dfrac{\partial }{\partial x} + \left( \dfrac{\partial y}{\partial \phi} \right)_{r, \theta}\dfrac{\partial }{\partial y} + \left( \dfrac{\partial z}{\partial \phi} \right)_{r, \theta}\dfrac{\partial }{\partial z}. \end{align} \nonumber
Again evaluation of the the many "coefficients" results in:
\begin{align} \left( \dfrac{\partial x}{\partial r} \right)_{ \theta , \phi} &= \dfrac{x}{\sqrt{x^2 + y^2 + z^2}}, & \left( \dfrac{\partial y}{\partial r} \right)_{ \theta , \phi} &= \dfrac{y}{\sqrt{x^2 + y^2 + z^2}}, & \left( \dfrac{\partial z}{\partial r} \right)_{ \theta , \phi} &= \dfrac{z}{\sqrt{x^2 + y^2 + z^2}} \ \left( \dfrac{\partial x}{\partial \theta} \right)_{r, \phi} &= \dfrac{x z}{\sqrt{x^2 + y^2}}, & \left( \dfrac{\partial y}{\partial \theta } \right)_{r, \phi} &= \dfrac{y z}{\sqrt{x^2 + y^2}}, & \left( \dfrac{\partial z}{\partial \theta} \right)_{r, \phi} &= -\sqrt{x^2 + y^2} \ \left( \dfrac{\partial x}{\partial \phi} \right)_{r, \theta} &= -y, & \left( \dfrac{\partial y}{\partial \phi} \right)_{r, \theta} &= x, & \text{ and } \left( \dfrac{\partial z}{\partial \phi} \right)_{r, \theta} = 0 \end{align} \nonumber
Upon substitution of these "coefficients":
\begin{align} \dfrac{\partial }{\partial r} &= \dfrac{x}{\sqrt{x^2 + y^2 + z^2}}\dfrac{\partial }{\partial x} + \dfrac{y}{\sqrt{x^2 + y^2 + z^2}}\dfrac{\partial }{\partial y} + \dfrac{z}{\sqrt{x^2 + y^2 + z^2}}\dfrac{\partial }{\partial z} \ \dfrac{\partial }{\partial \theta} &= \dfrac{x z}{\sqrt{x^2 + y^2}}\dfrac{\partial }{\partial x} + \dfrac{y z}{\sqrt{x^2 + y^2}}\dfrac{\partial }{\partial y} - \sqrt{x^2 + y^2}\dfrac{\partial }{\partial z} \ \dfrac{\partial }{\partial \phi} &= -y\dfrac{\partial }{\partial x} + x\dfrac{\partial }{\partial y} + 0\dfrac{\partial }{\partial z}. \end{align} \nonumber
Note, these many "coefficients" are the elements which make up the Jacobian matrix used whenever one wishes to transform a function from one coordinate representation to another. One very familiar result is the transformation of the volume element $dx\,dy\,dz$ to $r^2\sin \theta \,dr\,d \theta \,d \phi.$ For example:
$\int f(x, y, z) dxdydz = \int f(x(r, \theta , \phi),y(r, \theta , \phi),z(r, \theta , \phi)) \begin{vmatrix} \left( \dfrac{\partial x}{\partial r} \right)_{\theta \phi} & \left( \dfrac{\partial x}{\partial \theta } \right)_{r \phi} & \left( \dfrac{\partial x}{\partial \phi} \right)_{r \theta} \ \left( \dfrac{\partial y}{\partial r} \right)_{ \theta \phi} & \left( \dfrac{\partial y}{\partial \theta} \right)_{r \phi} & \left( \dfrac{\partial y}{\partial \phi} \right)_{r \theta} \ \left( \dfrac{\partial z}{\partial r} \right)_{ \theta \phi} & \left( \dfrac{\partial z}{\partial \theta} \right)_{r \phi} & \left( \dfrac{\partial z}{\partial \phi} \right)_{r \theta } \end{vmatrix} drd \theta d \phi \nonumber$
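In particular, the determinant of this Jacobian is $r^2\sin\theta$, which can be confirmed symbolically (a sketch using sympy):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)

# Jacobian of (x, y, z) with respect to (r, theta, phi)
J = sp.Matrix([x, y, z]).jacobian([r, th, ph])
print(sp.simplify(J.det()))  # r**2*sin(theta)
```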
a. \begin{align} L_x &= \dfrac{\hbar}{i} \left[ y\dfrac{\partial }{\partial z} - z\dfrac{\partial }{\partial y} \right] \ L_x &= \dfrac{\hbar}{i} \left[ r\sin \theta \sin \phi \left( \cos \theta \dfrac{\partial }{\partial r} - \dfrac{\sin \theta }{r}\dfrac{\partial }{\partial \theta } \right) \right] - \dfrac{\hbar}{i} \left[ r\cos \theta \left( \sin \theta \sin \phi\dfrac{\partial }{\partial r} + \dfrac{\cos \theta \sin \phi}{r}\dfrac{\partial }{\partial \theta }+ \dfrac{\cos \phi}{r\sin \theta }\dfrac{\partial }{\partial \phi} \right) \right] \ L_x &= -\dfrac{\hbar}{i} \left( \sin \phi\dfrac{\partial }{\partial \theta } + \cot \theta \cos \phi\dfrac{\partial }{\partial \phi} \right) \end{align}
b. \begin{align} L_z &= \dfrac{\hbar}{i}\dfrac{\partial }{\partial \phi} = -i\hbar\dfrac{\partial }{\partial \phi} \ L_z &= \dfrac{\hbar}{i} \left( -y\dfrac{\partial }{\partial x} + x\dfrac{\partial }{\partial y} \right) \end{align}
Q3
\begin{align} {}& & B & & B' & & B'' & \ i. & &4x^4 -12x^2 +3 & & 16x^3 -24x & & 48x^2 - 24 & \ ii. & & 5x^4 & & 20x^3 & & 60x^2 & \ iii. & & e^{3x} + e^{-3x} & & 3 \left( e^{3x} - e^{-3x} \right) & & 9 \left( e^{3x} + e^{-3x} \right) & \ iv. & & x^2 - 4x + 2 & & 2x - 4 & & 2 & \ v. & & 4x^3 -3x & & 12x^2 - 3 & & 24x & \end{align}
B(v.) is an eigenfunction of A(i.):
\begin{align} &\left( 1 - x^2 \right) \dfrac{d^2}{dx^2} - x\dfrac{d }{dx}\text{B(v.)} &= \ & &= & \left( 1 - x^2 \right) (24x) - x \left( 12x^2 - 3 \right) \ & &= & 24x - 24x^3 - 12x^3 + 3x \ & &= & -36x^3 + 27x \ & &= & -9 \left( 4x^3 - 3x \right) \text{ (eigenvalue is -9)} \end{align}
B(iii.) is an eigenfunction of A(ii.):
\begin{align} &\dfrac{d^2 }{dx^2}\text{B(iii.)} &= \ & &= & 9\left( e^{3x} + e^{-3x} \right) \text{ (eigenvalue is 9)} \end{align}
B(ii.) is an eigenfunction of A(iii.):
\begin{align} & x\dfrac{d}{dx}\text{B(ii.)} &= \ & &= & x\left( 20x^3 \right) \ & &= & 20x^4 \ & &= & 4 \left( 5x^4 \right) \text{ (eigenvalue is 4)} \end{align}
B(i.) is an eigenfunction of A(vi.):
\begin{align} &\dfrac{d^2 }{dx^2} - 2x\dfrac{d }{dx}\text{B(i.)} &= \ & &= & \left( 48x^2 - 24 \right) - 2x \left( 16x^3 - 24x \right) \ & &= & 48x^2 - 24 -32x^4 + 48x^2 \ & &= &-32x^4 + 96x^2 - 24 \ & & =& -8 \left( 4x^4 - 12x^2 + 3 \right) \text{ (eigenvalue is -8)} \end{align}
B(iv.) is an eigenfunction of A(v.):
\begin{align} &x\dfrac{d^2 }{dx^2} +(1-x)\dfrac{d }{dx}\text{B(iv.)} &= \ & &= & x(2) + (1-x)(2x - 4) \ & &= & 2x + 2x - 4 -2x^2 + 4x \ & &= &-2x^2 + 8x - 4 \ & & =& -2 \left( x^2 - 4x + 2 \right) \text{ (eigenvalue is -2)} \end{align}
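Each of the operator and eigenfunction pairings above is easy to confirm symbolically; a sketch for two of them with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# B(v.) = 4x^3 - 3x under (1 - x^2) d^2/dx^2 - x d/dx
B = 4*x**3 - 3*x
res = sp.expand((1 - x**2)*sp.diff(B, x, 2) - x*sp.diff(B, x))
print(sp.simplify(res/B))   # -9

# B(i.) = 4x^4 - 12x^2 + 3 under d^2/dx^2 - 2x d/dx
B2 = 4*x**4 - 12*x**2 + 3
res2 = sp.expand(sp.diff(B2, x, 2) - 2*x*sp.diff(B2, x))
print(sp.simplify(res2/B2)) # -8
```

The same three lines confirm any of the other pairings; the constant ratio of the result to the original function is the eigenvalue.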
Q4
Show that:
$\int f^{\text{*}}\textbf{A}gd\tau = \int g(\textbf{A}f)^{\text{*}}d\tau \nonumber$
a. Suppose f and g are functions of x and evaluate the integral on the left hand side by "integration by parts":
$\int f(x)^{\text{*}} \left( -i\hbar\dfrac{\partial }{\partial x} \right) g(x)dx \nonumber$
let dv = $\dfrac{\partial }{\partial x}g(x)dx$ and $u = -i\hbar f(x)^{\text{*}}$
$v = g(x) \text{ and } du = -i\hbar\dfrac{\partial }{\partial x}f(x)^{\text{*}}dx$
Now, $\int udv = uv - \int vdu ,$
so:
$\int f(x)^{\text{*}} \left( -i\hbar\dfrac{\partial }{\partial x} \right)g(x)dx = -i\hbar f(x)^{\text{*}}g(x)\bigg|_{-\infty}^{+\infty} + i\hbar\int g(x)\dfrac{\partial }{\partial x}f(x)^{\text{*}}dx. \nonumber$
Note that, in principle, it is impossible to prove hermiticity unless you are given knowledge of the type of function on which the operator is acting. Hermiticity requires (as can be seen in this example) that the term $-i\hbar f(x)^{\text{*}}g(x)$ vanish when evaluated at the integral limits. This, in general, will occur for "well behaved" functions (e.g., in bound state quantum chemistry, the wavefunctions vanish as the distances among particles approach infinity). So, in proving the hermiticity of an operator, one must be careful to specify the behavior of the functions on which the operator is considered to act. This means that an operator may be hermitian for one class of functions and non-hermitian for another class of functions. If we assume that f and g vanish at the boundaries, then we have
$\int f(x)^{\text{*}} \left( -i\hbar \dfrac{\partial }{\partial x}\right) g(x)dx = \int g(x) \left( -i\hbar\dfrac{\partial }{\partial x}f(x) \right)^{\text{*}} dx \nonumber$
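The boundary-term caveat can be made concrete on a grid. For functions that vanish at the ends of the interval the two sides agree; the Gaussians below are arbitrary illustrative choices (ħ = 1):

```python
import numpy as np

# grid check of hermiticity of -i d/dx for functions vanishing at the boundaries
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f = np.exp(-(x - 1)**2)*(1 + 0.3j*x)   # arbitrary well-behaved test functions
g = np.exp(-(x + 1)**2)

lhs = np.sum(np.conj(f)*(-1j)*np.gradient(g, dx))*dx   # <f| -i d/dx |g>
rhs = np.sum(g*np.conj(-1j*np.gradient(f, dx)))*dx     # <(-i d/dx f)| g>
print(abs(lhs - rhs))  # essentially zero
```

Replacing the Gaussians with functions that do not decay at the grid edges breaks the equality, which is exactly the boundary-term point made above.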
b. Suppose f and g are functions of y and z and evaluate the integral on the left hand side by "integration by parts" as in the previous exercise:
$\int f(y,z)^{\text{*}} \left[ -i\hbar \left( y \dfrac{\partial }{\partial z} - z\dfrac{\partial }{\partial y} \right) \right] g(y,z)dydz = \int f(y,z)^{\text{*}} \left[ -i\hbar \left( y \dfrac{\partial }{\partial z}\right) \right] g(y,z)dydz - \int f(y,z)^{\text{*}} \left[ -i\hbar \left( z\dfrac{\partial }{\partial y} \right) \right] g(y,z)dydz \nonumber$
For the first integral, $\int f(z)^{\text{*}} \left( -i\hbar y\dfrac{\partial }{\partial z} \right) g(z)dz,$
\begin{align} \text{let dv } &= \dfrac{\partial }{\partial z}g(z)dz & u &= -i\hbar yf(z)^{\text{*}} \ v &= g(z) & du &= -i\hbar y\dfrac{\partial }{\partial z}f(z)^{\text{*}}dz \end{align} \nonumber
so:
$\int f(z)^{\text{*}} \left( -i\hbar y\dfrac{\partial }{\partial z} \right) g(z)dz = -i\hbar yf(z)^{\text{*}}g(z) + i\hbar y\int g(z)\dfrac{\partial }{\partial z}f(z)^{\text{*}}dz = \int g(z) \left( -i\hbar y\dfrac{\partial }{\partial z}f(z) \right)^{\text{*}}dz . \nonumber$
For the second integral, $\int f(y)^{\text{*}} \left( -i\hbar z\dfrac{\partial }{\partial y} \right) g(y)dy,$
\begin{align} \text{let dv } &= \dfrac{\partial }{\partial y}g(y)dy & u &= -i\hbar zf(y)^{\text{*}} \ v &= g(y) & du &= -i\hbar z\dfrac{\partial }{\partial y}f(y)^{\text{*}}dy \end{align}. \nonumber
so:
$\int f(y)^{\text{*}} \left( -i\hbar z\dfrac{\partial }{\partial y} \right) g(y)dy = -i\hbar zf(y)^{\text{*}}g(y) + i\hbar z\int g(y)\dfrac{\partial }{\partial y}f(y)^{\text{*}}dy = \int g(y) \left( -i\hbar z\dfrac{\partial }{\partial y}f(y) \right)^{\text{*}}dy. \nonumber$
$\int f(y,z)^{\text{*}} \left[ -i\hbar \left( y \dfrac{\partial }{\partial z} - z\dfrac{\partial }{\partial y} \right) \right] g(y,z)dydz = \int g(z) \left( -i\hbar y\dfrac{\partial }{\partial z}f(z) \right)^{\text{*}}dz - \int g(y) \left( -i\hbar z\dfrac{\partial }{\partial y}f(y) \right)^{\text{*}}dy \nonumber$
$= \int g(y,z) \left( -i\hbar \left( y\dfrac{\partial }{\partial z} - z\dfrac{\partial }{\partial y} \right) f(y,z) \right)^{\text{*}}dydz. \nonumber$
Again we have had to assume that the functions f and g vanish at the boundary.
Q5
$L_+ = \textbf{L}_x + i\textbf{L}_y \nonumber$
$L_- = \textbf{L}_x - i\textbf{L}_y \text{, so} \nonumber$
$\textbf{L}_+ + \textbf{L}_- = 2\textbf{L}_x, \text{ or } \textbf{L}_x = \dfrac{1}{2}(\textbf{L}_+ + \textbf{L}_-) \nonumber$
$\textbf{L}_+ Y_{l,m} = \sqrt{ l(l + 1) - m(m + 1)}\hbar Y_{l,m+1} \nonumber$
$\textbf{L}_- Y_{l,m} = \sqrt{ l(l + 1) - m(m - 1)}\hbar Y_{l,m-1} \nonumber$
Using these relationships:
$\textbf{L}_- \Psi_{2p_{-1}} = 0 , \textbf{L}_- \Psi_{2p_0} = \sqrt{2}\hbar \Psi_{2p_{-1}}, \textbf{L}_- \Psi_{2p_{+1}} = \sqrt{2}\hbar \Psi_{2p_0} \nonumber$
$\textbf{L}_+ \Psi_{2p_{-1}} = \sqrt{2}\hbar \Psi_{2p_0} , \textbf{L}_+ \Psi_{2p_0} = \sqrt{2}\hbar \Psi_{2p_{+1}} , \textbf{L}_+ \Psi_{2p_{+1}} = 0 \text{ , and the following L}_x \text{ matrix elements can be evaluated: } \nonumber$
$L_x (1,1) = \langle \Psi_{2p_{-1}} \big| \dfrac{1}{2} \left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{-1}} \rangle = 0 \nonumber$
$L_x(1,2) = \langle \Psi_{2p_{-1}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{0}} \rangle = \dfrac{ \sqrt{2}}{2} \hbar \nonumber$
$L_x(1,3) = \langle \Psi_{2p_{-1}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{+1}} \rangle = 0 \nonumber$
$L_x(2,1) = \langle \Psi_{2p_{0}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{-1}} \rangle = \dfrac{ \sqrt{2}}{2} \hbar \nonumber$
$L_x(2,2) = \langle \Psi_{2p_{0}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{0}} \rangle = 0 \nonumber$
$L_x(2,3) = \langle \Psi_{2p_{0}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{+1}} \rangle = \dfrac{ \sqrt{2}}{2} \hbar \nonumber$
$L_x(3,1) = \langle \Psi_{2p_{+1}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{-1}} \rangle = 0 \nonumber$
$L_x(3,2) = \langle \Psi_{2p_{+1}} \big| \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \big| \Psi_{2p_{0}} \rangle = \dfrac{ \sqrt{2}}{2} \hbar \nonumber$
$L_x(3,3) = 0 \nonumber$
This matrix: \begin{align} \begin{bmatrix} 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \ \dfrac{\sqrt{2}}{2}\hbar & 0 & \dfrac{\sqrt{2}}{2}\hbar \ 0 & \dfrac{ \sqrt{2}}{2} \hbar & 0 \end{bmatrix} \end{align}, can now be diagonalized:
\begin{align} \begin{vmatrix} 0 - \lambda & \dfrac{\sqrt{2}}{2}\hbar & 0 \ \dfrac{\sqrt{2}}{2}\hbar & 0 - \lambda & \dfrac{\sqrt{2}}{2}\hbar \ 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 - \lambda \end{vmatrix} \end{align} = 0 \nonumber
\begin{align} \begin{vmatrix} 0 - \lambda & \dfrac{\sqrt{2}}{2}\hbar \ \dfrac{\sqrt{2}}{2}\hbar & 0 -\lambda \end{vmatrix} \end{align} ( -\lambda ) -\begin{align} \begin{vmatrix} \dfrac{\sqrt{2}}{2}\hbar & \dfrac{\sqrt{2}}{2}\hbar \ 0 & 0 - \lambda \end{vmatrix} \end{align} \left( \dfrac{\sqrt{2}}{2}\hbar \right) = 0
Expanding these determinants yields:
$\left( \lambda^2 - \dfrac{\hbar^2}{2} \right) (-\lambda ) - \dfrac{\sqrt{2}\hbar}{2}( -\lambda ) \left( \dfrac{\sqrt{2}\hbar}{2} \right) = 0 \nonumber$
$-\lambda \left( \lambda^2 - \hbar^2 \right) = 0 \nonumber$
$-\lambda \left( \lambda - \hbar \right) \left( \lambda + \hbar \right) = 0 \nonumber$
with roots: 0,$\hbar \text{, and } -\hbar$
Next, determine the corresponding eigenvectors:
For $\lambda = 0$
\begin{align} \begin{bmatrix} 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \ \dfrac{\sqrt{2}}{2}\hbar & 0 & \dfrac{\sqrt{2}}{2}\hbar \ 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \end{bmatrix} \end{align} \begin{bmatrix} C_{11} \ C_{21} \ C_{31} \end{bmatrix} = 0 \begin{bmatrix} C_{11} \ C_{21} \ C_{31} \end{bmatrix} \nonumber
$\dfrac{\sqrt{2}}{2}\hbar C_{21} = 0 \text{ (row one) } \nonumber$
$C_{21} = 0 \nonumber$
$\dfrac{\sqrt{2}}{2}\hbar C_{11} + \dfrac{\sqrt{2}}{2}\hbar C_{31} = 0 \text { (row two)} \nonumber$
$C_{11} + C_{31} = 0 \nonumber$
$C_{11} = -C_{31} \nonumber$
$C_{11}^2 + C_{21}^2 + C_{31}^2 = 1 \text{ (normalization)} \nonumber$
$C_{11}^2 + (-C_{11})^2 = 1 \nonumber$
$2C_{11}^2 = 1 \nonumber$
$C_{11} = \dfrac{1}{\sqrt{2}}, C_{21} = 0\text{, and } C_{31} = -\dfrac{1}{\sqrt{2}} \nonumber$
For $\lambda = 1\hbar :$
\begin{align} \begin{bmatrix} 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \ \dfrac{\sqrt{2}}{2}\hbar & 0 & \dfrac{\sqrt{2}}{2}\hbar \ 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \end{bmatrix} \end{align} \begin{bmatrix} C_{12} \ C_{22} \ C_{32} \end{bmatrix} = \hbar \begin{bmatrix} C_{12} \ C_{22} \ C_{32} \end{bmatrix} \nonumber
$\dfrac{\sqrt{2}}{2}\hbar C_{22} = \hbar C_{12} \text{ (row one)} \nonumber$
$C_{12} = \dfrac{\sqrt{2}}{2}C_{22} \nonumber$
$\dfrac{\sqrt{2}}{2}\hbar C_{12} + \dfrac{\sqrt{2}}{2}\hbar C_{32} = \hbar C_{22} \text{ (row two)} \nonumber$
$\dfrac{\sqrt{2}}{2}\dfrac{\sqrt{2}}{2} C_{22} + \dfrac{\sqrt{2}}{2} C_{32} = C_{22} \nonumber$
$\dfrac{1}{2} C_{22} + \dfrac{\sqrt{2}}{2} C_{32} = C_{22} \nonumber$
$\dfrac{\sqrt{2}}{2} C_{32} = \dfrac{1}{2} C_{22} \nonumber$
$C_{32} = \dfrac{\sqrt{2}}{2} C_{22} \nonumber$
$C_{12}^2 + C_{22}^2 + C_{32}^2 = 1\text{ (normalization)} \nonumber$
$\left( \dfrac{\sqrt{2}}{2}C_{22} \right)^2 + C_{22}^2 + \left( \dfrac{\sqrt{2}}{2}C_{22}\right)^2 = 1 \nonumber$
$\dfrac{1}{2}C_{22}^2 + C_{22}^2 + \dfrac{1}{2}C_{22}^2 = 1 \nonumber$
$2C_{22}^2 = 1 \nonumber$
$C_{22} = \dfrac{\sqrt{2}}{2} \nonumber$
$C_{12} = \dfrac{1}{2}, C_{22} = \dfrac{\sqrt{2}}{2}\text{, and } C_{32} = \dfrac{1}{2} \nonumber$
For $\lambda = -1 \hbar$
\begin{align} \begin{bmatrix} 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \ \dfrac{\sqrt{2}}{2}\hbar & 0 & \dfrac{\sqrt{2}}{2}\hbar \ 0 & \dfrac{\sqrt{2}}{2}\hbar & 0 \end{bmatrix} \end{align} \begin{bmatrix} C_{13} \ C_{23} \ C_{33} \end{bmatrix} = -1\hbar \begin{bmatrix} C_{13} \ C_{23} \ C_{33} \end{bmatrix} \nonumber
$\dfrac{\sqrt{2}}{2}\hbar C_{23} = - \hbar C_{13} \text{ (row one) } \nonumber$
$C_{13} = - \dfrac{\sqrt{2}}{2} C_{23} \nonumber$
$\dfrac{\sqrt{2}}{2}\hbar C_{13} + \dfrac{\sqrt{2}}{2}\hbar C_{33} = -\hbar C_{23} \text{ (row two)} \nonumber$
$\dfrac{\sqrt{2}}{2} \left( -\dfrac{\sqrt{2}}{2} C_{23} \right) + \dfrac{\sqrt{2}}{2} C_{33} = -C_{23} \nonumber$
$-\dfrac{1}{2} C_{23} + \dfrac{\sqrt{2}}{2} C_{33} = -C_{23} \nonumber$
$\dfrac{\sqrt{2}}{2} C_{33} = -\dfrac{1}{2} C_{23} \nonumber$
$C_{33} = -\dfrac{\sqrt{2}}{2} C_{23} \nonumber$
$C_{13}^2 + C_{23}^2 + C_{33}^2 = 1 \text{ (normalization)} \nonumber$
$\left( -\dfrac{\sqrt{2}}{2} C_{23} \right)^2 + C_{23}^2 + \left( -\dfrac{\sqrt{2}}{2}C_{23} \right)^2 = 1 \nonumber$
$\dfrac{1}{2}C_{23}^2 + C_{23}^2 + \dfrac{1}{2}C_{23}^2 = 1 \nonumber$
$2C_{23}^2 = 1 \nonumber$
$C_{23} = \dfrac{\sqrt{2}}{2} \nonumber$
$C_{13} = -\dfrac{1}{2}, C_{23} = \dfrac{\sqrt{2}}{2} \text{, and } C_{33} = -\dfrac{1}{2} \nonumber$
Show: $\langle \phi_1 | \phi_1 \rangle = 1, \langle \phi_2 | \phi_2 \rangle = 1, \langle \phi_3 | \phi_3 \rangle = 1, \langle \phi_1 | \phi_2 \rangle = 0, \langle \phi_1 | \phi_3 \rangle = 0 \text{, and} \langle \phi_2 | \phi_3 \rangle = 0.$
$\langle \phi_1 | \phi_1 \rangle \stackrel{?}{=} 1 \nonumber$
$\left( \dfrac{\sqrt{2}}{2} \right)^2 + 0 + \left( -\dfrac{\sqrt{2}}{2} \right)^2 \stackrel{?}{=} 1 \nonumber$
$\dfrac{1}{2} + \dfrac{1}{2} \stackrel{?}{=} 1 \nonumber$
$1 = 1 \nonumber$
$\langle \phi_2 | \phi_2 \rangle \stackrel{?}{=} 1 \nonumber$
$\left( \dfrac{1}{2} \right)^2 + \left( \dfrac{\sqrt{2}}{2} \right)^2 + \left( \dfrac{1}{2} \right)^2 \stackrel{?}{=} 1 \nonumber$
$\dfrac{1}{4} + \dfrac{1}{2} + \dfrac{1}{4} \stackrel{?}{=} 1 \nonumber$
$1 = 1 \nonumber$
$\langle \phi_3 | \phi_3 \rangle \stackrel{?}{=} 1 \nonumber$
$\left( -\dfrac{1}{2}\right)^2 + \left( \dfrac{\sqrt{2}}{2} \right)^2 + \left(-\dfrac{1}{2}\right)^2 \stackrel{?}{=} 1 \nonumber$
$\dfrac{1}{4} + \dfrac{1}{2} + \dfrac{1}{4} \stackrel{?}{=} 1 \nonumber$
$1 = 1 \nonumber$
$\langle \phi_1 | \phi_2 \rangle = \langle \phi_2 | \phi_1 \rangle \stackrel{?}{=} 0 \nonumber$
$\left( \dfrac{\sqrt{2}}{2} \right) \left( \dfrac{1}{2} \right) + (0)\left( \dfrac{\sqrt{2}}{2} \right) + \left( -\dfrac{\sqrt{2}}{2} \right)\left( \dfrac{1}{2} \right) \stackrel{?}{=} 0 \nonumber$
$\left( \dfrac{\sqrt{2}}{4} \right) - \left( \dfrac{\sqrt{2}}{4} \right) \stackrel{?}{=} 0 \nonumber$
$0 = 0 \nonumber$
$\langle \phi_1 | \phi_3 \rangle = \langle \phi_3 | \phi_1 \rangle \stackrel{?}{=} 0 \nonumber$
$\left( \dfrac{\sqrt{2}}{2} \right) \left( -\dfrac{1}{2} \right) + (0)\left( \dfrac{\sqrt{2}}{2} \right) + \left( -\dfrac{\sqrt{2}}{2} \right)\left( -\dfrac{1}{2} \right) \stackrel{?}{=} 0 \nonumber$
$\left( -\dfrac{\sqrt{2}}{4} \right) + \left( \dfrac{\sqrt{2}}{4}\right) \stackrel{?}{=} 0 \nonumber$
$0 = 0 \nonumber$
$\langle \phi_2 | \phi_3 \rangle = \langle \phi_3 | \phi_2 \rangle \stackrel{?}{=} 0 \nonumber$
$\left( \dfrac{1}{2}\right) \left( -\dfrac{1}{2}\right) + \left( \dfrac{\sqrt{2}}{2} \right) \left( \dfrac{\sqrt{2}}{2} \right) + \left( \dfrac{1}{2} \right) \left( -\dfrac{1}{2} \right) \stackrel{?}{=} 0 \nonumber$
$\left(-\dfrac{1}{4}\right) + \left(\dfrac{1}{2}\right) + \left(-\dfrac{1}{4}\right) \stackrel{?}{=} 0 \nonumber$
$0 = 0 \nonumber$
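These by-hand eigenvalues and eigenvectors are easy to cross-check numerically. The sketch below (not part of the original solution) diagonalizes the same $L_x$ matrix with NumPy, in units where $\hbar = 1$:

```python
import numpy as np

s = np.sqrt(2.0) / 2.0

# L_x for l = 1 in the |1,1>, |1,0>, |1,-1> basis, in units of hbar
Lx = np.array([[0.0, s, 0.0],
               [s, 0.0, s],
               [0.0, s, 0.0]])

# eigh is the right tool for a Hermitian matrix; eigenvalues return sorted
evals, evecs = np.linalg.eigh(Lx)

# eigenvalues should be -1, 0, +1 (i.e., -hbar, 0, +hbar)
print(np.round(evals, 6))

# the eigenvector columns come back orthonormal: V^T V = 1
orthonormal = np.allclose(evecs.T @ evecs, np.eye(3))
```

Each column of `evecs` matches the hand-derived $(C_{1k}, C_{2k}, C_{3k})$ up to an overall sign, which is all that normalization can fix.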
Q6
$P_{2p_{+1}} = \big| \langle \phi_{2p_{+1}} \big| \Psi_{L_x}^{0\hbar} \rangle \big|^2 \nonumber$
$\Psi_{L_x}^{0\hbar} = \dfrac{1}{\sqrt{2}} \phi_{2p_{-1}} - \dfrac{1}{\sqrt{2}}\phi_{2p_{+1}} \nonumber$
$P_{2p_{+1}} = \big| -\dfrac{1}{\sqrt{2}} \langle \phi_{2p_{+1}} \big| \phi_{2p_{+1}} \rangle \big|^2 = \dfrac{1}{2} \text{ (or 50%)} \nonumber$
Q7
It is useful here to use some of the general commutator relations found in Appendix C.V.
a. $\left[ L_x,L_y \right] = \left[ yp_z - zp_y, zp_x - xp_z \right] \nonumber$
$= \left[ yp_z, zp_x \right] - \left[ yp_z, xp_z \right] - \left[ zp_y, zp_x \right] + \left[ zp_y, xp_z \right] \nonumber$
$= [y,z]p_xp_z + z[y,p_x]p_z + y[p_z,z]p_x + yz[p_z,p_x] \nonumber$
$- [y,x]p_zp_z - x[y,p_z]p_z - y[p_z,x]p_z - yx[p_z,p_z] \nonumber$
$- [z,z]p_xp_y - z[z,p_x]p_y - z[p_y,z]p_x - zz[p_y,p_x] \nonumber$
$+ [z,x]p_zp_y + x[z,p_z]p_y + z[p_y,x]p_z + zx[p_y,p_z] \nonumber$
As can be easily ascertained, the only non-zero terms are:
$\left[ L_x, L_y \right] = y\left[ p_z,z \right] p_x + x\left[ z,p_z \right] p_y \nonumber$
$= y( i\hbar )p_x + x( i\hbar )p_y \nonumber$
$= i\hbar \left( -yp_x + xp_y \right) \nonumber$
$= i\hbar L_z \nonumber$
b. $\left[ L_y,L_z \right] = \left[ zp_x - xp_z, xp_y - yp_x \right] \nonumber$
$= \left[ zp_x, xp_y \right] - \left[ zp_x, yp_x \right] - \left[ xp_z, xp_y \right] + \left[ xp_z, yp_x \right] \nonumber$
$= [z,x]p_yp_x + x\left[ z, p_y\right]p_x + z\left[ p_x,x\right] p_y + zx\left[ p_x,p_y \right] \nonumber$
$- [z,y]p_xp_x - y\left[z,p_x\right] p_x - z\left[ p_x,y \right] p_x - zy\left[ p_x, p_x \right] \nonumber$
$- [x,x]p_yp_z - x\left[x,p_y\right] p_z - x\left[ p_z,x \right] p_y - xx\left[ p_z, p_y \right] \nonumber$
$+[x,y]p_xp_z + y\left[ x,p_x \right] p_z + x\left[ p_z, y\right] p_x + xy\left[ p_z ,p_x \right] \nonumber$
Again, as can be easily ascertained, the only non-zero terms are:
$\left[ L_y, L_z \right] = z \left[ p_x,x \right] p_y + y\left[ x,p_x \right] p_z \nonumber$
$= z( -i\hbar )p_y + y(i\hbar )p_z \nonumber$
$= i\hbar (-zp_y + yp_z) \nonumber$
$=i\hbar L_x \nonumber$
c. $\left[ L_z, L_x \right] = \left[ xp_y - yp_x, yp_z - zp_y \right] \nonumber$
$= \left[ xp_y, yp_z \right] - \left[ xp_y, zp_y \right] - \left[ yp_x, yp_z \right] + \left[ yp_x, zp_y \right] \nonumber$
$= \left[ x,y \right] p_zp_y + y\left[ x,p_z \right] p_y + x\left[ p_y, y \right] p_z + xy\left[ p_y, p_z \right] \nonumber$
$- \left[ x,z \right] p_yp_y - z\left[ x,p_y \right] p_y - x\left[ p_y, z \right] p_y - xz\left[ p_y, p_y \right] \nonumber$
$- \left[ y, y \right] p_zp_x - y\left[ y,p_z \right] p_x - y\left[ p_x, y \right] p_z - yy\left[ p_x, p_z \right] \nonumber$
$+ \left[ y, z \right] p_yp_x + z\left[ y,p_y \right] p_x + y\left[ p_x, z \right] p_y + yz\left[ p_x, p_y \right] \nonumber$
Again, as can be easily ascertained, the only non-zero terms are:
$\left[ L_z, L_x \right] = x\left[ p_y, y\right] p_z + z\left[ y,p_y \right] p_x \nonumber$
$= x(-i\hbar)p_z + z(i\hbar )p_x \nonumber$
$= i\hbar (-xp_z + zp_x) \nonumber$
$= i\hbar L_y \nonumber$
d. $\left[ L_x, L^2 \right] = \left[ L_x, L_x^2 + L_y^2 + L_z^2 \right] \nonumber$
$= \left[ L_x, L_x^2 \right] + \left[ L_x, L_y^2 \right] + \left[ L_x, L_z^2 \right] \nonumber$
$= \left[ L_x, L_y^2 \right] + \left[ L_x, L_z^2 \right] \nonumber$
$= \left[ L_x, L_y \right] L_y + L_y\left[ L_x, L_y \right] + \left[ L_x, L_z \right] L_z + L_z\left[ L_x, L_z \right] \nonumber$
$= \left( i\hbar L_z \right) L_y + L_y\left( i\hbar L_z \right) + \left( -i\hbar L_y \right) L_z + L_z\left( -i\hbar L_y \right) \nonumber$
$= (i\hbar )\left( L_zL_y + L_yL_z - L_yL_z - L_zL_y \right) \nonumber$
$= (i\hbar )\left( \left[ L_z, L_y \right] + \left[ L_y, L_z \right] \right) = 0 \nonumber$
e. $\left[ L_y, L^2 \right] = \left[ L_y, L_x^2 + L_y^2 + L_z^2 \right] \nonumber$
$= \left[ L_y, L_x^2 \right] + \left[ L_y, L_y^2 \right] + \left[ L_y, L_z^2 \right] \nonumber$
$= \left[ L_y, L_x^2 \right] + \left[ L_y, L_z^2 \right] \nonumber$
$= \left[ L_y, L_x \right] L_x + L_x\left[ L_y, L_x \right] + \left[ L_y, L_z \right] L_z + L_z\left[ L_y, L_z \right] \nonumber$
$= \left( -i \hbar L_z \right) L_x + L_x \left( -i\hbar L_z \right) + \left( i\hbar L_x \right) L_z + L_z \left( i\hbar L_x\right) \nonumber$
$= \left( i\hbar \right) \left( -L_z L_x - L_x L_z + L_x L_z + L_z L_x \right) \nonumber$
$= \left( i\hbar \right) \left( \left[ L_x, L_z \right] + \left[ L_z, L_x \right] \right) = 0 \nonumber$
f. $\left[ L_z, L^2 \right] = \left[ L_z, L_x^2 + L_y^2 + L_z^2 \right] \nonumber$
$= \left[ L_z, L_x^2 \right] + \left[ L_z, L_y^2 \right] + \left[ L_z, L_z^2 \right] \nonumber$
$= \left[ L_z, L_x^2 \right] + \left[ L_z, L_y^2 \right] \nonumber$
$= \left[ L_z, L_x \right] L_x + L_x\left[ L_z, L_x \right] + \left[ L_z, L_y \right] L_y + L_y \left[ L_z, L_y \right] \nonumber$
$=\left( i\hbar L_y \right) L_x + L_x \left( i\hbar L_y \right) + \left( -i\hbar L_x\right) L_y + L_y \left( -i\hbar L_x \right) \nonumber$
$= \left( i\hbar\right) \left( L_yL_x + L_xL_y - L_xL_y - L_yL_x \right) \nonumber$
$= \left( i\hbar\right) \left( \left[ L_y, L_x \right] + \left[ L_x, L_y \right] \right) = 0 \nonumber$
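All six commutators worked out above can also be verified with explicit matrices. A numerical sketch, assuming the standard $l = 1$ representations in the $|1,1\rangle, |1,0\rangle, |1,-1\rangle$ basis with $\hbar = 1$:

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)

# standard l = 1 matrix representations (units of hbar)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# parts a-c: the cyclic commutation relations
cyclic_ok = (np.allclose(comm(Lx, Ly), 1j * Lz) and
             np.allclose(comm(Ly, Lz), 1j * Lx) and
             np.allclose(comm(Lz, Lx), 1j * Ly))

# parts d-f: each component commutes with L^2
casimir_ok = all(np.allclose(comm(L, L2), 0.0) for L in (Lx, Ly, Lz))
```

For $l = 1$, `L2` is simply $2\hbar^2$ times the identity, so parts d-f are immediate in this representation; the symbolic proof above is what establishes the result for arbitrary $l$.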
Q8
Use the general angular momentum relationships:
$J^2 \big| j,m \rangle = \hbar^2 \left( j \left( j+1 \right)\right) \big| j,m \rangle \nonumber$
$J_z \big| j,m \rangle = \hbar m \big| j,m \rangle , \nonumber$
and the information used in exercise 5, namely that:
$\textbf{L}_x = \dfrac{1}{2}\left( \textbf{L}_+ + \textbf{L}_- \right) \nonumber$
$\textbf{L}_+ Y_{l,m} = \sqrt{l(l + 1) - m(m + 1)}\hbar Y_{l,m+1} \nonumber$
$\textbf{L}_- Y_{l,m} = \sqrt{l(l + 1) - m(m - 1)}\hbar Y_{l,m-1} \nonumber$
Given that:
$Y_{0,0} (\theta, \phi) = \dfrac{1}{\sqrt{4\pi}} = \big| 0,0\rangle \nonumber$
$Y_{1,0} (\theta ,\phi ) = \sqrt{\dfrac{3}{4\pi}} \cos \theta = \big| 1,0 \rangle . \nonumber$
a. $\textbf{L}_z \big| 0,0\rangle = 0 \nonumber$
$\textbf{L}^2 \big| 0,0\rangle = 0 \nonumber$
Since $L^2$ and $L_z$ commute, you would expect |0,0> to be a simultaneous eigenfunction of both.
b. $\textbf{L}_x \big| 0,0\rangle = 0 \nonumber$
$\textbf{L}_z \big| 0,0\rangle = 0 \nonumber$
$L_x$ and $L_z$ do not commute, so finding a simultaneous eigenfunction (|0,0>) of both may seem surprising. What noncommutation rules out is a complete common set of eigenfunctions, not the occasional shared eigenfunction.
c. $\textbf{L}_z \big| 1,0\rangle = 0 \nonumber$
$\textbf{L}^2 \big| 1,0\rangle = 2 \hbar^2 \big| 1,0 \rangle \nonumber$
Again, since $L^2$ and $L_z$ commute, you would expect |1,0> to be a simultaneous eigenfunction of both.
d. $\textbf{L}_x \big| 1,0\rangle = \dfrac{\sqrt{2}}{2} \hbar \big| 1,-1 \rangle + \dfrac{\sqrt{2}}{2} \hbar \big| 1, 1 \rangle \nonumber$
$\textbf{L}_z \big| 1, 0\rangle = 0 \nonumber$
Again, $L_x$ and $L_z$ do not commute, so there is no requirement that |1,0> be an eigenfunction of $L_x$ — and indeed it is not.
Q9
For
$\Psi (x,y) = \sqrt{\left( \dfrac{1}{2L_x} \right) \left( \dfrac{1}{2L_y} \right)} \left[ e^{\left( \dfrac{in_x\pi x}{L_x} \right)} - e^{\left( \dfrac{-in_x\pi x}{L_x} \right)} \right] \left[ e^{\left( \dfrac{in_y\pi y}{L_y} \right)} - e^{\left( \dfrac{-in_y\pi y}{L_y} \right)} \right] \nonumber$
$\langle \Psi (x,y) \big| \Psi (x,y) \rangle \stackrel{?}{=} 1 \nonumber$
Let $a_x = \dfrac{n_x\pi}{L_x}$ and $a_y = \dfrac{n_y\pi}{L_y}$, and use Euler's formula to expand the exponentials into sine and cosine terms.
$\Psi (x,y) = \sqrt{ \left( \dfrac{1}{2L_x} \right) \left( \dfrac{1}{2L_y} \right) } \left[ \cos (a_xx) + i\sin (a_x x) - \cos (a_x x) + i\sin (a_x x) \right] \left[ \cos (a_y y) + i\sin (a_y y) - \cos (a_y y) + i\sin (a_y y) \right] \nonumber$
$\Psi (x,y) = \sqrt{ \left( \dfrac{1}{2L_x} \right) \left( \dfrac{1}{2L_y} \right) } 2i\sin (a_x x) 2i\sin (a_y y) \nonumber$
$\Psi (x,y) = -\sqrt{ \left( \dfrac{2}{L_x} \right) \left( \dfrac{2}{L_y} \right) } \sin (a_x x) \sin (a_y y) \nonumber$
$\langle \Psi (x,y) \big| \Psi (x,y) \rangle = \int \left( -\sqrt{ \left( \dfrac{2}{L_x} \right) \left( \dfrac{2}{L_y} \right) } \sin (a_x x) \sin (a_y y) \right)^2 dxdy \nonumber$
$= \left( \dfrac{2}{L_x} \right) \left( \dfrac{2}{L_y} \right) \int \sin^2 (a_x x) \sin^2 (a_y y) dxdy \nonumber$
Using the integral:
$\int\limits^L_0 \sin^2 \dfrac{n\pi x}{L} dx = \dfrac{L}{2}, \nonumber$
$\langle \Psi (x,y) \big| \Psi (x,y) \rangle = \left( \dfrac{2}{L_x} \right) \left( \dfrac{2}{L_y} \right) \left( \dfrac{L_x}{2} \right) \left( \dfrac{L_y}{2} \right) = 1 \nonumber$
Q10
$\langle \Psi (x,y) | p_x | \Psi (x,y) \rangle = \left( \dfrac{2}{L_y} \right) \int\limits_0^{L_y} \sin^2 (a_y y)dy \left( \dfrac{2}{L_x} \right) \int\limits_0^{L_x} \sin (a_x x)\left( -i\hbar \dfrac{\partial}{\partial x}\right) \sin (a_x x) dx = \left( \dfrac{-i\hbar 2a_x}{L_x} \right) \int\limits_0^{L_x} \sin (a_x x)\cos (a_x x) dx \nonumber$
But the integral:
$\int\limits_0^{L_x} \cos (a_x x)\sin (a_x x) dx = 0, \nonumber$
$\therefore \langle \Psi (x,y) \big| p_x \big| \Psi (x,y) \rangle = 0 \nonumber$
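Both results — unit normalization and a vanishing $\langle p_x \rangle$ — are easy to confirm by numerical quadrature. A sketch with illustrative (assumed) box dimensions and quantum numbers:

```python
import numpy as np

def trapz(f, t):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

Lx_box, Ly_box, nx, ny = 1.0, 2.0, 2, 3   # example values, not from the text

x = np.linspace(0.0, Lx_box, 4001)
y = np.linspace(0.0, Ly_box, 4001)
psi_x = np.sqrt(2.0 / Lx_box) * np.sin(nx * np.pi * x / Lx_box)
psi_y = np.sqrt(2.0 / Ly_box) * np.sin(ny * np.pi * y / Ly_box)

# the 2-D integrals factor into 1-D pieces for this separable wavefunction
norm = trapz(psi_x**2, x) * trapz(psi_y**2, y)

# <p_x> is proportional to the integral of sin*cos over the box
ax = nx * np.pi / Lx_box
sin_cos = trapz(np.sin(ax * x) * np.cos(ax * x), x)
```

`norm` comes out 1 and `sin_cos` comes out 0 to numerical precision, matching the analytic results above.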
Q11
$\langle \Psi_0 \big| x^2 \big| \Psi_0 \rangle = \sqrt{\dfrac{\alpha}{\pi}}\int\limits_{-\infty}^{\infty}\left( e^{-\dfrac{1}{2}\alpha x^2} \right) \left( \textbf{x}^2 \right) \left( e^{-\dfrac{1}{2}\alpha x^2} \right) dx = 2\sqrt{\dfrac{\alpha}{\pi}}\int\limits_0^{\infty}x^2 e^{-\alpha x^2}dx \nonumber$
Using the integral:
$\int\limits_0^{\infty}x^{2n} e^{-\beta x^2}dx = \dfrac{1\cdot{3}\cdot{\cdot{\cdot{(2n-1)}}}}{2^{n+1}}\sqrt{ \left( \dfrac{\pi}{\beta ^{2n + 1}} \right) } \nonumber$
$\langle \Psi_0 | \textbf{x}^2 | \Psi_0 \rangle = 2\sqrt{\dfrac{\alpha}{\pi}} \left( \dfrac{1}{2^2} \right) \sqrt{\dfrac{\pi}{\alpha^3}} \nonumber$
$\langle \Psi_0 | \textbf{x}^2 | \Psi_0 \rangle = \left( \dfrac{1}{2\alpha} \right) \nonumber$
$\langle \Psi_1 | \textbf{x}^2 | \Psi_1 \rangle = \sqrt{\dfrac{4\alpha^3}{\pi}}\int^{\infty}_{-\infty}\left( xe^{-\dfrac{1}{2}\alpha x^2} \right) \left( \textbf{x}^2 \right) \left( xe^{-\dfrac{1}{2}\alpha x^2} \right) dx \nonumber$
$= 2\sqrt{\dfrac{4\alpha^3}{\pi}}\int\limits_0^{\infty} x^4 e^{-\alpha x^2}dx \nonumber$
Using the previously defined integral:
$\langle \Psi_1 | \textbf{x}^2 | \Psi_1 \rangle = 2\sqrt{\dfrac{4\alpha^3}{\pi}}\left( \dfrac{3}{2^3} \right) \sqrt{\dfrac{\pi}{\alpha^5}} \nonumber$
$\langle \Psi_1 | \textbf{x}^2 | \Psi_1 \rangle = \left( \dfrac{3}{2\alpha} \right) \nonumber$
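Both expectation values can be cross-checked by quadrature. A sketch, assuming an arbitrary illustrative value of $\alpha$ and the normalized harmonic-oscillator states used above:

```python
import numpy as np

def trapz(f, t):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

alpha = 1.3                          # arbitrary oscillator parameter
x = np.linspace(-12.0, 12.0, 200001)

# normalized ground and first excited harmonic-oscillator states
psi0 = (alpha / np.pi)**0.25 * np.exp(-0.5 * alpha * x**2)
psi1 = (4.0 * alpha**3 / np.pi)**0.25 * x * np.exp(-0.5 * alpha * x**2)

x2_0 = trapz(psi0**2 * x**2, x)      # expect 1/(2 alpha)
x2_1 = trapz(psi1**2 * x**2, x)      # expect 3/(2 alpha)
```

The two integrals reproduce $1/(2\alpha)$ and $3/(2\alpha)$ to high accuracy; the Gaussian tails beyond the grid endpoints are negligible.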
Q13
a. $\Psi_I (x) = 0 \nonumber$
$\Psi_{II} (x) = Ae^{\left( i\sqrt{\dfrac{2mE}{\hbar^2}}x \right)} + Be^{\left( -i\sqrt{\dfrac{2mE}{\hbar^2}}x \right)} \nonumber$
$\Psi_{III} (x) = A'e^{\left( \sqrt{\dfrac{2m(V-E)}{\hbar^2}}x \right)} + B'e^{\left( -\sqrt{\dfrac{2m(V-E)}{\hbar^2}}x \right)} \nonumber$
b. $I \leftrightarrow II \nonumber$
$\Psi_I(0) = \Psi_{II}(0) \nonumber$
$\Psi_I (0) = 0 = \Psi_{II} (0) = Ae^{\left( i\sqrt{\dfrac{2mE}{\hbar^2}}(0) \right)} + Be^{\left( -i\sqrt{\dfrac{2mE}{\hbar^2}}(0) \right)} \nonumber$
$0 = A + B \nonumber$
$B = -A \nonumber$
$\Psi_I '(0) = \Psi_{II} '(0) \nonumber$
( this gives no useful information since $\Psi_I '(x)$ does not exist at x = 0 )
$II \leftrightarrow III \nonumber$
$\Psi_{II} (L) = \Psi_{III} (L) \nonumber$
$Ae^{\left( i\sqrt{\dfrac{2mE}{\hbar^2}}L \right)} + Be^{\left( -i\sqrt{\dfrac{2mE}{\hbar^2}}L \right)} = A'e^{\left( \sqrt{\dfrac{2m(V-E)}{\hbar^2}}L \right)} + B'e^{\left( -\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L \right)} \nonumber$
$\Psi_{II}'(L) = \Psi_{III}'(L) \nonumber$
$A \left( i\sqrt{ \dfrac{2mE}{\hbar^2}} \right) e^{i\sqrt{\dfrac{2mE}{\hbar^2}}L} - B \left( i\sqrt{ \dfrac{2mE}{\hbar^2}} \right) e^{-i\sqrt{\dfrac{2mE}{\hbar^2}}L} \nonumber$
$= A' \left( \sqrt{ \dfrac{2m(V-E)}{\hbar^2}} \right) e^{\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} - B' \left( \sqrt{ \dfrac{2m(V-E)}{\hbar^2}} \right) e^{-\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \nonumber$
c. $\text{as } x \rightarrow -\infty , \Psi_I (x) = 0 \nonumber$
$\text{as } x \rightarrow \infty , \Psi_{III}(x) \rightarrow 0, \therefore A' = 0 \nonumber$
d. Rewrite the equations for $\Psi_I(0), \Psi_{II}(0), \Psi_{II}(L) = \Psi_{III}(L) \text{, and } \Psi_{II}'(L) = \Psi_{III}'(L)$ using the information in 13c:
$B = -A \text{ (eqn. 1)} \nonumber$
$A e^{i\sqrt{\dfrac{2mE}{\hbar^2}}L} + B e^{-i\sqrt{\dfrac{2mE}{\hbar^2}}L} = B'e^{-\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \text{ (eqn. 2)} \nonumber$
$A\left( i \sqrt{\dfrac{2mE}{\hbar^2}}\right) e^{i\sqrt{\dfrac{2mE}{\hbar^2}}L} - B\left( i\sqrt{\dfrac{2mE}{\hbar^2}} \right) e^{-i\sqrt{\dfrac{2mE}{\hbar^2}}L} = -B' \left( \sqrt{\dfrac{2m(V-E)}{\hbar^2}} \right) e^{ -\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \text{ (eqn. 3)} \nonumber$
substitution (eqn. 1) into (eqn. 2)
$Ae^{i\sqrt{\dfrac{2mE}{\hbar^2}}L} - Ae^{-i\sqrt{\dfrac{2mE}{\hbar^2}}L} = B'e^{-\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \nonumber$
$A \left[ \cos \left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) + i\sin\left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) \right] - A \left[ \cos \left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) - i\sin\left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) \right] = B'e^{-\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \nonumber$
$2Ai\sin \left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) = B'e^{-\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \nonumber$
$\sin \left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) = \dfrac{B'}{2Ai}e^{-\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L} \text{ (eqn. 4)} \nonumber$
substituting (eqn. 1) into (eqn. 3):
$A\left( i\sqrt{\dfrac{2mE}{\hbar^2}} \right)e^{\left( i\sqrt{\dfrac{2mE}{\hbar^2}}L \right)} + A\left( i\sqrt{\dfrac{2mE}{\hbar^2}} \right)e^{\left( -i\sqrt{\dfrac{2mE}{\hbar^2}}L \right)} = - B' \left( \sqrt{\dfrac{2m(V-E)}{\hbar^2}} \right)e^{\left( -\sqrt{\dfrac{2m(V-E)}{\hbar^2}}L \right) } \nonumber$
$A\left( i\sqrt{\dfrac{2mE}{\hbar^2}} \right) \left( \cos \left( \sqrt{\dfrac{2mE}{\hbar^2}}L \right) + i\sin \left( \sqrt{ \dfrac{2mE}{\hbar^2} } L \right) \right) + A\left( i \sqrt{\dfrac{2mE}{\hbar^2}}\right) \left( \cos \left( \sqrt{\dfrac{2mE}{\hbar^2} }L \right) -i\sin \left( \sqrt{\dfrac{2mE}{\hbar^2}}L\right) \right) \nonumber$
Q1
Draw qualitative shapes of the (1) s, (3) p, and (5) d "tangent sphere" atomic orbitals (note that these orbitals represent only the angular portion and do not contain the radial portion of the hydrogen-like atomic wavefunctions). Indicate with ± the relative signs of the wavefunctions and the position(s) (if any) of any nodes.
Q2
Define the symmetry adapted "core" and "valence" orbitals of the following systems:
i. $NH_3$ in the $C_{3v}$ point group,
ii. $H_2O$ in the $C_{2v}$ point group,
iii. $H_2O_2$ (cis) in the $C_2$ point group,
iv. N in $D_{\infty h}, D_{2h}, C_{2v}$, and $C_s$ point groups,
v. $N_2$ in $D_{\infty h}$, $D_{2h}$, $C_{2v}$, and $C_s$ point groups.
Q3
Plot the radial portions of the 4s, 4p, 4d, and 4f hydrogen like atomic wavefunctions.
Q4
Plot the radial portions of the 1s, 2s, 2p, 3s, and 3p hydrogen like atomic wavefunctions for the Si atom using screening concepts for any inner electrons.
22.2.02: ii. Exercises
1. In quantum chemistry it is quite common to use combinations of more familiar and easy-to-handle "basis functions" to approximate atomic orbitals. Two common types of basis functions are the Slater type orbitals (STO's) and Gaussian type orbitals (GTO's). STO's have the normalized form:
$\left( \dfrac{2\xi}{a_0} \right)^{n+\dfrac{1}{2}} \left( \dfrac{1}{(2n)!} \right)^{\dfrac{1}{2}} r^{n-1} e^{\left( \dfrac{-\xi r}{a_0} \right)} Y_{l,m}\left( \theta , \phi \right), \nonumber$
whereas GTO's have the form:
$N r^l e^{\left( -\xi r^2 \right)} Y_{l,m} ( \theta ,\phi ). \nonumber$
Orthogonalize (using Löwdin (symmetric) orthogonalization) the following 1s (core), 2s (valence), and 3s (Rydberg) STO's for the Li atom given:
$Li_{1s} \xi = 2.6906 \nonumber$
$Li_{2s} \xi = 0.6396 \nonumber$
$Li_{3s} \xi = 0.1503. \nonumber$
Express the three resultant orthonormal orbitals as linear combinations of these three normalized STO's.
2. Calculate the expectation value of r for each of the orthogonalized 1s, 2s, and 3s Li orbitals found in Exercise 1.
3. Draw a plot of the radial probability density (e.g., $r^2 [R_{nl}(r)]^2$ with R referring to the radial portion of the STO) versus r for each of the orthonormal Li s orbitals found in Exercise 1.
22.2.03: iii. Problems
Q1
Given the following orbital energies (in hartrees) for the N atom and the coupling elements between two like atoms (these coupling elements are the Fock matrix elements from standard ab-initio minimum-basis SCF calculations), calculate the molecular orbital energy levels and 1-electron wavefunctions. Draw the orbital correlation diagram for formation of the $N_2$ molecule. Indicate the symmetry of each atomic and molecular orbital. Designate each of the molecular orbitals as bonding, non-bonding, or antibonding.
$N_{1s} = -15.31^{\text{*}} \nonumber$ $N_{2s} = -0.86^{\text{*}} \nonumber$
$N_{2p} = -0.48^{\text{*}} \nonumber$
$N_2 \sigma_g \text{Fock matrix}^{\text{*}} \nonumber$
\begin{bmatrix} -6.52 & & \ -6.22 & -7.06 & \ 3.61 & 4.00 & -3.92 \end{bmatrix}
$N_2 \pi_g \text{ Fock matrix}^{\text{*}} \nonumber$
$[0.28] \nonumber$
$N_2 \sigma_u \text{ Fock matrix}^{\text{*}} \nonumber$
\begin{bmatrix} 1.02 & & \ -0.60 & -7.59 & \ 0.02 & 7.42 & -8.53 \end{bmatrix}
$N_2 \pi_u \text{ Fock matrix}^{\text{*}} \nonumber$
$[-0.58] \nonumber$
*The Fock matrices (and orbital energies) were generated using standard STO3G minimum basis set SCF calculations. The Fock matrices are in the orthogonal basis formed from these orbitals.
Q2
Given the following valence orbital energies for the C atom and $H_2$ molecule draw the orbital correlation diagram for formation of the $CH_{2}$ molecule (via a $C_{2v}$ insertion of C into $H_2$ resulting in bent $CH_2$). Designate the symmetry of each atomic and molecular orbital in both their highest point group symmetry and in that of the reaction path ($C_{2v}$).
\begin{align} C_{1s}=-10.91^{\text{*}} & & H_2 \sigma_g = -0.58^{\text{*}} \ C_{2s}=-0.60^{\text{*}} & & H_2 \sigma_u = 0.67^{\text{*}} \ C_{2p}=-0.33^{\text{*}} & & \end{align}
*The orbital energies were generated using standard STO3G minimum basis set SCF calculations.
Q3
Using the empirical parameters given below for C and H (taken from Appendix F and "The HMO Model and its Applications" by E. Heilbronner and H. Bock, Wiley- Interscience, NY, 1976), apply the Hückel model to ethylene in order to determine the valence electronic structure of this system. Note that you will be obtaining the 1-electron energies and wavefunctions by solving the secular equation (as you always will when the energy is dependent upon a set of linear parameters like the MO coefficients in the LCAO- MO approach) using the definitions for the matrix elements found in Appendix F.
$C\alpha_{2p\pi} = -11.4 eV \nonumber$
$C \alpha_{sp^2} = -14.7 eV \nonumber$
$H \alpha_s = -13.6 eV \nonumber$
$C-C\beta_{2p\pi -2\pi} = -1.2 eV \nonumber$
$C-C\beta_{sp^2-sp^2} = -5.0 eV \nonumber$
$C-H\beta_{sp^2-s} = -4.0 eV \nonumber$
1. Determine the C=C $(2\pi )$ 1-electron molecular orbital energies and wavefunctions. Calculate the $\pi \rightarrow \pi^{\text{*}}$ transition energy for ethylene within this model.
2. Determine the C-C ($sp^2$) 1-electron molecular orbital energies and wavefunctions.
3. Determine the C-H ($sp^2$-s) 1-electron molecular orbital energies and wavefunctions (note that appropriate choice of symmetry will reduce this 8x8 matrix down to 4 2x2 matrices; that is, you are encouraged to symmetry adapt the atomic orbitals before starting the Hückel calculation). Draw a qualitative orbital energy diagram using the HMO energies you have calculated.
Q4
Using the empirical parameters given below for B and H (taken from Appendix F and "The HMO Model and its Applications" by E. Heilbronner and H. Bock, Wiley- Interscience, NY, 1976), apply the Hückel model to borane ($BH_3$) in order to determine the valence electronic structure of this system.
$B \alpha_{2p\pi} = -8.5 eV \nonumber$
$B \alpha_{sp^2} = -10.7 eV \nonumber$
$H\alpha_s = -13.6 eV \nonumber$
$B-H \beta_{sp^2-s} = -3.5 eV \nonumber$
Determine the symmetries of the resultant molecular orbitals in the $D_{3h}$ point group. Draw a qualitative orbital energy diagram using the HMO energies you have calculated.
Q5
Qualitatively analyze the electronic structure (orbital energies and 1-electron wavefunctions) of $PF_5$. Analyze only the 3s and 3p electrons of P and the one 2p bonding electron of each F. Proceed with a $D_{3h}$ analysis in the following manner:
1. Symmetry adapt the top and bottom F atomic orbitals.
2. Symmetry adapt the three (trigonal) F atomic orbitals.
3. Symmetry adapt the P 3s and 3p atomic orbitals.
4. Allow these three sets of $D_{3h}$ orbitals to interact and draw the resultant orbital energy diagram. Symmetry label each of these molecular energy levels. Fill this energy diagram with 10 "valence" electrons.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Mechanics__in_Chemistry_(Simons_and_Nichols)/22%3A_Problems/22.02%3A_Simple_Molecular_Orbital_Theory/22.2.01%3A_i._Review_Exercises.txt
|
1. i. In ammonia the only "core" orbital is the N 1s, which becomes an $a_1$ orbital in $C_{3v}$ symmetry. The N 2s orbital and the 3 H 1s orbitals combine to give 2 $a_1$ orbitals and an e set of orbitals. The remaining N 2p orbitals also give 1 $a_1$ and a set of e orbitals. The total valence orbitals in $C_{3v}$ symmetry are thus 3 $a_1$ and 2 e sets.
ii. In water the only core orbital is the O 1s and this becomes an a1 orbital in $C_{2v}$ symmetry. Placing the molecule in the yz plane allows us to further analyze the remaining valence orbitals as: O $2p_z = a_1$, O $2p_y\text{ as } b_2\text{ , and O } 2p_x\text{ as } b_1$. The H 1s + H 1s combination is an $a_1$ whereas the H 1s - H 1s combination is a $b_2$.
iii. Placing the oxygens of $H_2O_2$ in the yz plane (z bisecting the oxygens) and the (cis) hydrogens distorted slightly in the +x and -x directions allows us to analyze the orbitals as follows. The core O 1s + O 1s combination is an a orbital whereas the O 1s - O 1s combination is a b orbital. The valence orbitals are: O 2s + O 2s = a, O 2s - O 2s = b, O $2p_x$ + O $2p_x$ = b, O $2p_x$ - O $2p_x$ = a, O $2p_y$ + O $2p_y$ = a, O $2p_y$ - O $2p_y$ = b, O $2p_z$ + O $2p_z$ = b, O $2p_z$ - O $2p_z$ = a, H 1s + H 1s = a, and finally H 1s - H 1s = b.
iv. For the next two problems we will use the convention of choosing the z axis as principal axis for the $D_{\infty h}\text{, }D_{2h}\text{, and } C_{2v}$ point groups and the xy plane as the horizontal reflection plane in $C_s$ symmetry.
\begin{align} & & D_{\infty h} & & D_{2h} & & C_{2v} & & C_s & \ & N 1s & & \sigma_g & & a_g & & a_1 & & a^{\prime} & \ & N 2s & & \sigma_g & & a_g & & a_1 & & a^{\prime} & \ & N 2p_x & & \pi_{xu} & & b_{3u} & & b_1 & & a^{\prime} & \ & N 2p_y & & \pi_{yu} & & b_{2u} & & b_2 & & a^{\prime} & \ & N 2p_z & & \sigma_{u} & & b_{1u} & & a_1 & & a^{\prime\prime} & \end{align}
v. The Nitrogen molecule is in the yz plane for all point groups except the Cs in which case it is placed in the xy plane.
\begin{align} & & D_{\infty h} & & D_{2h} & & C_{2v} & & C_s & \ & N 1s + N 1s & & \sigma_g & & a_g & & a_1 & & a^{\prime} & \ & N 1s - N 1s & & \sigma_u & & b_{1u} & & b_2 & & a^{\prime} & \ & N 2s + N 2s & & \sigma_g & & a_g & & a_1 & & a^{\prime} & \ & N 2s - N 2s & & \sigma_u & & b_{1u} & & b_2 & & a^{\prime} & \ & N 2p_x + N 2p_x & & \pi_{xu} & & b_{3u} & & b_1 & & a^{\prime} & \ & N 2p_x - N 2p_x & & \pi_{xg} & & b_{2g} & & a_2 & & a^{\prime} & \ & N 2p_y + N 2p_y & & \pi_{yu} & & b_{2u} & & a_1 & & a^{\prime} & \ & N 2p_y - N 2p_y & & \pi_{yg} & & b_{3g} & & b_2 & & a^{\prime} & \ & N 2p_z + N 2p_z & & \sigma_{u} & & b_{1u} & & b_2 & & a^{\prime\prime} & \ & N 2p_z - N 2p_z & & \sigma_{g} & & a_{g} & & a_1 & & a^{\prime\prime} & \end{align}
1. Two Slater type orbitals, i and j, centered on the same point results in the following overlap integrals:
$S_{ij} = \int\limits^{2\pi}_{0}\int\limits^{\pi}_0\int\limits^{\infty}_0 \left( \dfrac{2\xi_i}{a_0} \right)^{n_i+\dfrac{1}{2}}\sqrt{\dfrac{1}{(2n_i)!}} r^{(n_i-1)}e^{\left( \dfrac{-\xi_i r}{a_0} \right)} Y_{l_i,m_i}(\theta,\phi ) \left( \dfrac{2\xi_j}{a_0} \right)^{n_j+\dfrac{1}{2}} \sqrt{ \dfrac{1}{(2n_j)!}} r^{(n_j-1)}e^{\left( \dfrac{-\xi_j r}{a_0} \right)} Y_{l_j,m_j}(\theta,\phi ) r^2sin \theta drd\theta d\phi . \nonumber$
For these s orbitals l = m = 0 and $Y_{0,0}(\theta ,\phi ) = \dfrac{1}{\sqrt{4\pi}}.$ Performing the integrations over $\theta$ and $\phi$ yields $4\pi$, which then cancels with these Y terms. The integral then reduces to:
$S_{ij} = \left( \dfrac{2\xi_i}{a_0} \right)^{n_i+\dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_i)!}} \left( \dfrac{2\xi_j}{a_0} \right)^{n_j + \dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_j)!}} \int\limits^{\infty}_{0} r^{(n_i-1+n_j-1)}e^{\left( \dfrac{-(\xi_i+\xi_j)r}{a_0} \right)}r^2dr \nonumber$
Using integral equation (4) the integral then reduces to:
$S_{ij} = \left( \dfrac{2\xi_i}{a_0} \right)^{n_i + \dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_i)!}} \left( \dfrac{2\xi_j}{a_0} \right)^{n_j+\dfrac{1}{2}} \sqrt{ \dfrac{1}{(2n_j)!}}(n_i+n_j)! \left( \dfrac{a_0}{\xi_i+\xi_j} \right)^{n_i+n_j+1}. \nonumber$
We then substitute in the values for each of these constants:
$\text{for i=1; n=1, l=m=0, and } \xi = 2.6906 \nonumber$
$\text{for i=2; n=2, l=m=0, and } \xi = 0.6396 \nonumber$
$\text{for i=3; n=3, l=m=0, and } \xi = 0.1503. \nonumber$
Evaluating each of these matrix elements we obtain:
$S_{11} = (12.482992)(0.707107)(12.482992)(0.707107)(2.00)(0.006417) = 1.000000 \nonumber$
$S_{21}=S_{12} = (1.850743)(0.204124)(12.482992)(0.707107)(6.00)(0.008131) = 0.162673 \nonumber$
$S_{22} = (1.850743)(0.204124)(1.850743)(0.204124)(24.00)(0.291950) = 1.00 \nonumber$
$S_{31} = S_{13} = (0.014892)(0.037268)(12.482992)(0.707107)(24.00)(0.005404) = 0.000635 \nonumber$
$S_{32} = S_{23} = (0.014892)(0.037268)(1.850743)(0.204124)(120.00)(4.116872) = 0.103582 \nonumber$
$S_{33} = (0.014892)(0.037268)(0.014892)(0.037268)(720.00)(4508.968136) = 1.00 \nonumber$
$S = \begin{bmatrix} & 1.000000 & & & \ & 0.162673 & 1.000000 & & \ & 0.000635 & 0.103582 & 1.000000 & \end{bmatrix} \nonumber$
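The closed-form matrix elements above are easy to script. A sketch (assuming $a_0 = 1$, i.e., atomic units) that rebuilds the overlap matrix directly from the three $(n, \zeta)$ pairs:

```python
from math import factorial, sqrt

import numpy as np

# (n, zeta) for the normalized Li 1s, 2s, 3s STOs, in units of a0 = 1
params = [(1, 2.6906), (2, 0.6396), (3, 0.1503)]

def sto_norm(n, zeta):
    """Normalization constant of an s-type STO: (2 zeta)^(n+1/2) / sqrt((2n)!)."""
    return (2.0 * zeta)**(n + 0.5) * sqrt(1.0 / factorial(2 * n))

def overlap(ni, zi, nj, zj):
    """Radial overlap of two s-type STOs sharing a center."""
    return (sto_norm(ni, zi) * sto_norm(nj, zj) *
            factorial(ni + nj) / (zi + zj)**(ni + nj + 1))

S = np.array([[overlap(ni, zi, nj, zj) for (nj, zj) in params]
              for (ni, zi) in params])
```

`np.round(S, 6)` reproduces the overlap matrix quoted above.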
We now solve the matrix eigenvalue problem S U = $\lambda$ U.
The eigenvalues, $\lambda$, of this overlap matrix are:
[ 0.807436 0.999424 1.193139 ],
and the corresponding eigenvectors, U, are:
\begin{bmatrix} & 0.596540 & -0.537104 & -0.596372 & \ & -0.707634 & -0.001394 & 0.706578 & \ & 0.378675 & 0.843515 & -0.380905 & \end{bmatrix}
The $\lambda^{-\dfrac{1}{2}}$ matrix becomes:
$\lambda^{-\dfrac{1}{2}} = \begin{bmatrix} & 1.112874 & 0.000000 & 0.000000 & \ & 0.000000 & 1.000288 & 0.000000 & \ & 0.000000 & 0.000000 & 0.915492 & \end{bmatrix}. \nonumber$
Back transforming into the original eigenbasis gives $S^{-\dfrac{1}{2}}$, e.g.
$S^{-\dfrac{1}{2}} = U\lambda^{-\dfrac{1}{2}} U^T \nonumber$
$S^{-\dfrac{1}{2}} = \begin{bmatrix} & 1.010194 & & & \ & -0.083258 & 1.014330 & & \ & 0.006170 & -0.052991 & 1.004129 & \end{bmatrix} \nonumber$
The old ao matrix can be written as:
$C = \begin{bmatrix} & 1.000000 & 0.000000 & 0.000000 & \ & 0.000000 & 1.000000 & 0.000000 & \ & 0.000000 & 0.000000 & 1.000000 & \end{bmatrix} \nonumber$
The new ao matrix (which now gives each ao as a linear combination of the original aos) then becomes:
$C^{\prime} = S^{-\dfrac{1}{2}}C = \begin{bmatrix} & 1.010194 & -0.083258 & 0.006170 & \ & -0.083258 & 1.014330 & -0.052991 & \ & 0.006170 & -0.052991 & 1.004129 & \end{bmatrix} \nonumber$
These new aos have been constructed to meet the orthonormalization requirement $C^{\prime T}SC^{\prime} = 1$ since:
$\left( S^{-\dfrac{1}{2}}C \right)^T SS^{-\dfrac{1}{2}} C = C^TS^{-\dfrac{1}{2}} SS^{-\dfrac{1}{2}} C=C^TC = 1. \nonumber$
But, it is always good to check our result and indeed:
$C^{\prime T}SC^{\prime} = \begin{bmatrix} & 1.000000 & 0.000000 & 0.000000 & \ & 0.000000 & 1.000000 & 0.000000 & \ & 0.000000 & 0.000000 & 1.000000 & \end{bmatrix} \nonumber$
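As a numerical check, the symmetric (Löwdin) orthogonalization above can be reproduced in a few lines of numpy; the matrix values are the ones quoted above, and the variable names are illustrative only.

```python
import numpy as np

# Overlap matrix S in the original STO basis (values computed above)
S = np.array([[1.000000, 0.162673, 0.000635],
              [0.162673, 1.000000, 0.103582],
              [0.000635, 0.103582, 1.000000]])

# Diagonalize S = U diag(lam) U^T, then back-transform lam^(-1/2)
lam, U = np.linalg.eigh(S)
S_inv_half = U @ np.diag(lam ** -0.5) @ U.T

# The old C is the identity matrix, so C' = S^(-1/2) C = S^(-1/2)
C_prime = S_inv_half

# Orthonormalization requirement: C'^T S C' = 1
identity_check = C_prime.T @ S @ C_prime
```

Because `np.linalg.eigh` returns the eigenvalues in ascending order, they appear exactly as listed in the text, and `identity_check` reproduces the unit matrix shown above.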
2. The least time consuming route here is to evaluate each of the needed integrals first. These are evaluated analogously to exercise 1, letting $\chi_i$ denote each of the individual Slater Type Orbitals.
\begin{align} \int\limits^{\infty}_{0} \chi_i r \chi_j r^2 dr & = & \langle r \rangle_{ij} \ & = & \left( \dfrac{2\xi_i}{a_0} \right)^{n_i + \dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_i)!}} \left( \dfrac{2\xi_j}{a_0} \right)^{n_j + \dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_j)!}}\int\limits_{0}^{\infty}r^{(n_i+n_j+1)}e^{\left( -\dfrac{(\xi_i+\xi_j)r}{a_0} \right)}dr \end{align} \nonumber
Once again using integral equation (4), the integral reduces to:
$= \left( \dfrac{2\xi_i}{a_0} \right)^{n_i + \dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_i)!}} \left( \dfrac{2\xi_j}{a_0} \right)^{n_j+\dfrac{1}{2}} \sqrt{\dfrac{1}{(2n_j)!}} (n_i+n_j+1)! \left( \dfrac{a_0}{\xi_i + \xi_j} \right)^{n_i+n_j+2} \nonumber$
Again, upon substituting in the values for each of these constants, evaluation of these expectation values yields:
$\langle r \rangle_{11} = (12.482992)(0.707107)(12.482992)(0.707107)(6.00)(0.001193) = 0.557496 \nonumber$
$\langle r \rangle_{21} = \langle r \rangle_{12} = (1.850743)(0.204124)(12.482992)(0.707107)(24.00)(0.002441) = 0.195391 \nonumber$
$\langle r \rangle_{22} = (1.850743)(0.204124)(1.850743)(0.204124)(120.00)(0.228228) = 3.908693 \nonumber$
$\langle r \rangle_{31} = \langle r \rangle_{13} = (0.014892)(0.037268)(12.482992)(0.707107)(120.00)(0.001902) = 0.001118 \nonumber$
$\langle r \rangle_{32} = \langle r \rangle_{23} = (0.014892)(0.037268)(1.850743)(0.204124)(720.00)(5.211889) = 0.786798 \nonumber$
$\langle r \rangle_{33} = (0.014892)(0.037268)(0.014892)(0.037268)(5040.00)(14999.893999) = 23.286760 \nonumber$
$\int\limits_{0}^{\infty}\chi_i r\chi_j r^2 dr = \langle r \rangle_{ij} = \begin{bmatrix} & 0.557496 & & & \ & 0.195391 & 3.908693 & & \ & 0.001118 & 0.786798 & 23.286760 & \end{bmatrix} \nonumber$
Using these integrals one then proceeds to evaluate the expectation values of each of the orthogonalized aos, $\chi^{\prime}_n$, as:
$\int\limits^{\infty}_{0} \chi^{\prime}_n r \chi^{\prime}_n r^2 dr = \sum\limits_{i=1}^3 \sum\limits_{j=1}^3 C^{\prime}_{ni}C_{nj}^{\prime} \langle r \rangle_{ij}. \nonumber$
This results in the following expectation values (in atomic units):
\begin{align} \int\limits_{0}^{\infty} \chi^{\prime}_{1s} r \chi^{\prime}_{1s} r^2 dr & = & 0.563240 \text{ bohr} \ \int\limits_{0}^{\infty} \chi^{\prime}_{2s} r \chi^{\prime}_{2s} r^2 dr & = & 3.973199 \text{ bohr} \ \int\limits_{0}^{\infty} \chi^{\prime}_{3s} r \chi^{\prime}_{3s} r^2 dr & = & 23.406622 \text{ bohr} \end{align} \nonumber
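The double sum over the orthogonalized coefficients is simply a congruence transformation of the $\langle r \rangle_{ij}$ matrix; a numpy sketch using the values tabulated above (variable names are illustrative):

```python
import numpy as np

# <r>_ij matrix over the original STOs (atomic units, values computed above)
r_ao = np.array([[0.557496, 0.195391, 0.001118],
                 [0.195391, 3.908693, 0.786798],
                 [0.001118, 0.786798, 23.286760]])

# Rows of C' (the orthogonalized-AO coefficients S^(-1/2) from part 1)
C_prime = np.array([[ 1.010194, -0.083258,  0.006170],
                    [-0.083258,  1.014330, -0.052991],
                    [ 0.006170, -0.052991,  1.004129]])

# <r>'_nn = sum_ij C'_ni C'_nj <r>_ij, i.e. the diagonal of C' <r> C'^T
r_ortho = np.diag(C_prime @ r_ao @ C_prime.T)
```

The three diagonal entries reproduce the 0.563240, 3.973199, and 23.406622 bohr expectation values quoted above (to the rounding of the tabulated coefficients).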
3. The radial density for each orthogonalized orbital, $\chi^{\prime}_n$, assuming integrations over $\theta$ and $\phi$ have already been performed, can be written as:
$\int\limits_{0}^{\infty} \chi^{\prime}_n\chi^{\prime}_n r^2 dr = \sum\limits_{i=1}^3\sum\limits_{j=1}^3 C^{\prime}_{ni}C^{\prime}_{nj} \int\limits_{0}^{\infty} R_iR_j r^2 dr , \nonumber$
where $R_i$ and $R_j$ are the radial portions of the individual Slater Type Orbitals, e.g.,
$R_iR_jr^2 = \left( \dfrac{2\xi_i}{a_0} \right)^{n_i + \dfrac{1}{2}}\sqrt{\dfrac{1}{(2n_i)!}}\left( \dfrac{2\xi_j}{a_0} \right)^{n_j+\dfrac{1}{2}}\sqrt{\dfrac{1}{(2n_j)!}}r^{(n_i+n_j)}e^{\left(-\dfrac{(\xi_i+\xi_j)r}{a_0} \right)} \nonumber$
Therefore a plot of the radial probability for a given orthogonalized atomic orbital, n, will be: $\sum\limits_{i=1}^3\sum\limits_{j=1}^3 C^{\prime}_{ni}C^{\prime}_{nj}R_iR_jr^2 \text{ vs. r.}$
Plot the orthogonalized 1s orbital probability density vs r; note there are no nodes.
The above diagram indicates how the SALC-AOs are formed from the 1s, 2s, and 2p N atomic orbitals. It can be seen that there are $3\sigma_g$, $3\sigma_u$, $1\pi_{ux}$, $1\pi_{uy}$, $1\pi_{gx}$, and $1\pi_{gy}$ SALC - AOs. The Hamiltonian matrices (Fock matrices) are given. Each of these can be diagonalized to give the following MO energies:
3$\sigma_g$; -15.52, -1.45, and -0.54 (hartrees)
3$\sigma_u$; -15.52, -0.72, and 1.13
$1\pi_{ux};$ -0.58
$1\pi_{uy};$ -0.58
$1\pi_{gx};$ 0.28
$1\pi_{gy};$ 0.28
It can be seen that the 3$\sigma_g$ orbitals are bonding, the 3$\sigma_u$ orbitals are antibonding, the $1\pi_{ux}$ and $1\pi_{uy}$ orbitals are bonding, and the $1\pi_{gx}$ and $1\pi_{gy}$ orbitals are antibonding. The eigenvectors one obtains are expressed in the orthogonalized basis and are therefore not directly interpretable; back transformation into the original basis will generate the expected results for the 1e$^-$ MOs (expected combinations of SALC-AOs).
2. Using these approximate energies we can draw the following MO diagram:
This MO diagram is not an orbital correlation diagram but can be used to help generate one. The energy levels on each side (C and $H_2$) can be "superimposed" to generate the left side of the orbital correlation diagram and the center $CH_2$ levels can be used to form the right side. Ignoring the core levels this generates the following orbital correlation diagram.
3.
Using $D_{2h}$ symmetry and labeling the orbitals $(f_1-f_{12})$ as shown above proceed by using the orbitals to define a reducible representation which may be subsequently reduced to its irreducible components. Use projectors to find the SALC-AOs for these irreps.
3. a. The $2P_x$ orbitals on each carbon form the following reducible representation:
\begin{align} & D_{2h} & \text{ } & E & \text{ } & C_2(z) & \text{ } & C_2(y) & \text{ } & C_2(x) & \text{ } & i & \text{ } & \sigma (xy) & \text{ } & \sigma (xz) & \text{ } & \sigma (yz) \ & \Gamma_{2p_x} & \text{ } & 2 & \text{ } & -2 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 2 & \text{ } & -2 \end{align} \nonumber
The number of irreducible representations may be found by using the following formula:
$n_{irrep} = \dfrac{1}{g} \sum\limits_{R} \chi_{red}(R)\chi_{irrep}(R) , \nonumber$
where g = the order of the point group (8 for $D_{2h}$).
\begin{align} n_{A_g} &=& & \dfrac{1}{8}\sum\limits_R \Gamma_{2p_x} (R)A_g(R) \ &=& & \dfrac{1}{8}\left[ (2)(1)+(-2)(1)+(0)(1)+(0)(1)+(0)(1)+(0)(1)+(2)(1)+(-2)(1) \right] = 0 \end{align}
Similarly,
\begin{align} &n_{B_{1g}} &=& &0& \ &n_{B_{2g}} &=& &1& \ &n_{B_{3g}} &=& &0& \ &n_{A_u} &=& &0& \ &n_{B_{1u}} &=& &0& \ &n_{B_{2u}} &=& &0& \ &n_{B_{3u}} &=& &1& \end{align}
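The reduction formula can be applied mechanically once the $D_{2h}$ character table is in hand; the following sketch hard-codes the table (an assumption of this example) in the class order used above.

```python
# D2h character table (class order: E, C2(z), C2(y), C2(x), i, s(xy), s(xz), s(yz))
d2h = {
    "Ag":  [1,  1,  1,  1,  1,  1,  1,  1],
    "B1g": [1,  1, -1, -1,  1,  1, -1, -1],
    "B2g": [1, -1,  1, -1,  1, -1,  1, -1],
    "B3g": [1, -1, -1,  1,  1, -1, -1,  1],
    "Au":  [1,  1,  1,  1, -1, -1, -1, -1],
    "B1u": [1,  1, -1, -1, -1, -1,  1,  1],
    "B2u": [1, -1,  1, -1, -1,  1, -1,  1],
    "B3u": [1, -1, -1,  1, -1,  1,  1, -1],
}

gamma_2px = [2, -2, 0, 0, 0, 0, 2, -2]  # reducible rep of the two C 2p_x orbitals
g = 8                                   # order of the D2h point group

# n_irrep = (1/g) * sum_R chi_red(R) * chi_irrep(R)
n = {name: sum(cr * ci for cr, ci in zip(gamma_2px, chars)) // g
     for name, chars in d2h.items()}
```

Running this reproduces the counts listed above: only $B_{2g}$ and $B_{3u}$ occur once each; all other irreps give zero.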
Projectors using the formula:
$P_{irrep} = \sum\limits_R \chi_{irrep}(R)R , \nonumber$
may be used to find the SALC-AOs for these irreducible representations.
$P_{B_{2g}} = \sum\limits_R \chi_{B_{2g}}(R)R , \nonumber$
\begin{align} P_{B_{2g}}f_1 &=& &(1)Ef_1 + (-1)C_2(z)f_1 + (1)C_2(y)f_1 + (-1)C_2(x)f_1 + (1)if_1 + (-1)\sigma (xy)f_1 + (1)\sigma (xz)f_1 + (-1)\sigma (yz)f_1 \ &=& &(1)f_1 + (-1)(-f_1) + (1)(-f_2) + (-1)f_2 + (1)(-f_2) + (-1)f_2 + (1)f_1 + (-1)(-f_1) \ &=& &f_1 + f_1 - f_2 - f_2 - f_2 - f_2 + f_1 + f_1 \ &=& &4f_1 - 4f_2 \end{align}
Normalization of this SALC-AO (and representing the SALC-AOs with $\phi$) yields:
$\int N(f_1 - f_2)N(f_1 - f_2)d\tau = 1 \nonumber$ $N^2 \left( \int f_1f_1d\tau - \int f_1f_2d\tau - \int f_2f_1d\tau + \int f_2f_2d\tau \right) = 1 \nonumber$ $N^2(1 + 1) = 1 \nonumber$ $2N^2 = 1 \nonumber$ $N = \dfrac{1}{\sqrt{2}} \nonumber$ $\phi_{1b_{2g}} = \dfrac{1}{\sqrt{2}}(f_1 - f_2). \nonumber$
The $B_{3u}$ SALC-AO may be found in a similar fashion:
\begin{align} P_{B_{3u}}f_1 &=& &(1)f_1 + (-1)-f_1 + (-1)-f_2 + (1)f_2 + (-1)-f_2 + (1)f_2 + (1)f_1 + (-1)-f_1 \ &=& &f_1 + f_1 + f_2 + f_2 + f_2 + f_2 + f_1 + f_1 \ &=& &4f_1 + 4f_2 \end{align}
Normalization of this SALC-AO yields:
$\phi_{1b_{3u}} = \dfrac{1}{\sqrt{2}} (f_1 + f_2). \nonumber$
Since there are only two SALC-AOs and both are of different symmetry types, these SALC-AOs are themselves the MOs, and the 2x2 Hamiltonian matrix reduces to two 1x1 matrices.
\begin{align} H_{1b_{2g},1b_{2g}} &=& &\int \dfrac{1}{\sqrt{2}}(f_1 - f_2)H\dfrac{1}{\sqrt{2}}(f_1 - f_2)d\tau \ &=& &\dfrac{1}{2}\left( \int f_1Hf_1d\tau - 2\int f_1Hf_2d\tau + \int f_2Hf_2d\tau \right) \ &=& &\dfrac{1}{2}\left( \alpha_{2p\pi} - 2\beta_{2p\pi - 2p\pi} + \alpha_{2p\pi} \right) \ &=& &\alpha_{2p\pi} - \beta_{2p\pi - 2p\pi} \ &=& &-11.4 - (-1.2) = -10.2 \end{align}
\begin{align} H_{1b_{3u},1b_{3u}} &=& &\int \dfrac{1}{\sqrt{2}} (f_1 + f_2 )H\dfrac{1}{\sqrt{2}}(f_1 + f_2)d\tau \ &=& &\dfrac{1}{2}\left( \int f_1 H f_1d\tau + 2\int f_1Hf_2d\tau + \int f_2Hf_2 d\tau \right) \ &=& &\dfrac{1}{2}\left( \alpha_{2p\pi} + 2\beta_{2p\pi -2p\pi} + \alpha_{2p\pi} \right) \ &=& &\alpha_{2p\pi} + \beta_{2p\pi-2p\pi} \ &=& &-11.4 + (-1.2) = -12.6 \end{align}
This results in a $\pi \rightarrow \pi^{\text{*}}$ splitting of 2.4 eV.
3. b. The $sp^2$ orbitals forming the C-C bond generate the following reducible representation:
\begin{align} &D_{2h}& &E& &C_2(z)& &C_2(y)& &C_2(x)& &i& &\sigma (xy)& &\sigma (xz)& &\sigma (yz)& \ &\Gamma_{sp^2}& &2& &2& &0& &0& &0& &0& &2& &2& \end{align}
This reducible representation reduces to $1A_g \text{ and } 1B_{1u}$ irreducible representations.
Projectors are used to find the SALC-AOs for these irreducible representations.
\begin{align} P_{A_g}f_3 &=& &(1)Ef_3 + (1)C_2(z)f_3 + (1)C_2(y)f_3 + (1)C_2(x)f_3 + (1)if_3 + (1)\sigma (xy)f_3 + (1)\sigma (xz)f_3 + (1)\sigma (yz)f_3 \ &=& &(1)f_3 + (1)f_3 + (1)f_4 + (1)f_4 + (1)f_4 + (1)f_4 + (1)f_3 + (1)f_3 \ &=& &4f_3 + 4f_4 \end{align}
Normalization of this SALC-AO yields:
$\phi_{1a_{g}} = \dfrac{1}{\sqrt{2}}(f_3 + f_4). \nonumber$
The $B_{1u}$ SALC-AO may be found in a similar fashion:
\begin{align} P_{B_{1u}}f_3 &=& &(1)f_3 + (1)f_3 + (-1)f_4 + (-1)f_4 + (-1)f_4 + (-1)f_4 + (1)f_3 + (1)f_3 \ &=& &4f_3 - 4f_4\end{align}
Normalization of this SALC-AOs yields:
$\phi_{1b_{1u}} = \dfrac{1}{\sqrt{2}} (f_3 - f_4). \nonumber$
Again, since there are only two SALC-AOs and both are of different symmetry types, these SALC-AOs are MOs and the 2x2 Hamiltonian matrix reduces to two 1x1 matrices.
\begin{align} H_{1a_g,1a_g} &=& &\int \dfrac{1}{\sqrt{2}}(f_3 + f_4)H\dfrac{1}{\sqrt{2}}(f_3 + f_4)d\tau \ &=& &\dfrac{1}{2}\left( \int f_3Hf_3d\tau + 2\int f_3Hf_4d\tau + \int f_4Hf_4d\tau \right) \ &=& &\dfrac{1}{2} \left( \alpha_{sp^2} + 2\beta_{sp^2-sp^2} + \alpha_{sp^2} \right) \ &=& &\alpha_{sp^2} + \beta_{sp^2-sp^2} \ &=& &-14.7 + (-5.0) = -19.7 \ H_{1b_{1u},1b_{1u}} &=& &\int \dfrac{1}{\sqrt{2}}(f_3 - f_4)H\dfrac{1}{\sqrt{2}}(f_3 - f_4)d\tau \ &=& &\dfrac{1}{2}\left( \int f_3Hf_3d\tau - 2\int f_3Hf_4d\tau + \int f_4Hf_4d\tau \right) \ &=& &\dfrac{1}{2}\left( \alpha_{sp^2} - 2\beta_{sp^2-sp^2} + \alpha_{sp^2} \right) \ &=& &\alpha_{sp^2} - \beta_{sp^2-sp^2} \ &=& &-14.7 - (-5.0) = -9.7 \end{align}
3. c. The C $sp^2$ orbitals and the H s orbitals forming the C-H bonds generate the following reducible representation:
\begin{align} &D_{2h}& &E& &C_2(z)& &C_2(y)& &C_2(x)& &i& &\sigma (xy)& &\sigma (xz)& &\sigma (yz)& \ &\Gamma_{sp^2-s}& &8& &0& &0& &0& &0& &0& &0& &8& \end{align}
This reducible representation reduces to $2A_g, 2B_{3g}, 2B_{1u}, \text{ and } 2B_{2u}$ irreducible representations.
Projectors are used to find the SALC-AOs for these irreducible representations.
\begin{align} P_{A_g} f_6 &=& &(1)Ef_6 + (1)C_2(z)f_6 + (1)C_2(y)f_6 + (1)C_2(x)f_6 + (1)if_6 + (1)\sigma (xy)f_6 + (1)\sigma (xz)f_6 + (1)\sigma (yz)f_6 \ &=& &(1)f_6 + (1)f_5 + (1)f_7 + (1)f_8 + (1)f_8 + (1)f_7 + (1)f_5 + (1)f_6 \ &=& &2f_5 + 2f_6 + 2f_7 + 2f_8 \end{align}
Normalization yields: $\phi_{2a_g} = \dfrac{1}{2}( f_5 + f_6 + f_7 + f_8 ).$
\begin{align} P_{A_g}f_{10} &=& &(1)Ef_{10} + (1)C_2(z)f_{10} + (1)C_2(y)f_{10} + (1)C_2(x)f_{10} + (1)if_{10} + (1)\sigma (xy)f_{10} + (1)\sigma (xz)f_{10} + (1)\sigma (yz)f_{10} \ &=& &(1)f_{10} + (1)f_9 + (1)f_{11} + (1)f_{12} + (1)f_{12} + (1)f_{11} + (1)f_9 + (1)f_{10} \ &=& &2f_9 + 2f_{10} + 2f_{11} + 2f_{12} \end{align}
Normalization yields: $\phi_{3a_g} = \dfrac{1}{2}( f_9 + f_{10} + f_{11} + f_{12} ).$
\begin{align} P_{B_{3g}}f_6 &=& &(1)f_6 + (-1)f_5 + (-1)f_7 + (1)f_8 + (1)f_8 +(-1)f_7 + (-1)f_5 + (1)f_6 \ &=& &-2f_5 + 2f_6 - 2f_7 + 2f_8 \end{align}
Normalization yields: $\phi_{1b_{3g}} = \dfrac{1}{2}(-f_5 + f_6 - f_7 + f_8).$
\begin{align} P_{B_{3g}}f_{10} &=& &(1)f_{10} + (-1)f_9 + (-1)f_{11} + (1)f_{12} + (1)f_{12} + (-1)f_{11} + (-1)f_9 + (1)f_{10} \ &=& &-2f_9 + 2f_{10} - 2f_{11} + 2f_{12} \end{align}
Normalization yields: $\phi_{2b_{3g}} = \dfrac{1}{2}(-f_9 + f_{10} - f_{11} + f_{12}).$
\begin{align} P_{B_{1u}}f_6 &=& &(1)f_6 + (1)f_5 + (-1)f_7 + (-1)f_8 + (-1)f_8 + (-1)f_7 + (1)f_5 + (1)f_6 \ &=& &2f_5 + 2f_6 - 2f_7 - 2f_8 \end{align}
Normalization yields: $\phi_{2b_{1u}} = \dfrac{1}{2}(f_5 + f_6 - f_7 - f_8).$
\begin{align} P_{B_{1u}}f_{10} &=& &(1)f_{10} + (1)f_9 + (-1)f_{11} + (-1)f_{12} + (-1)f_{12} + (-1)f_{11} + (1)f_9 + (1)f_{10} \ &=& &2f_9 + 2f_{10} - 2f_{11} - 2f_{12} \end{align}
Normalization yields: $\phi_{3b_{1u}} = \dfrac{1}{2}(f_9 + f_{10} - f_{11} - f_{12}).$
\begin{align} P_{B_{2u}}f_6 = (1)f_6 + (-1)f_5 + (1)f_7 + (-1)f_8 + (-1)f_8 + (1)f_7 + (-1)f_5 + (1)f_6 &=& &-2f_5 + 2f_6 + 2f_7 - 2f_8 \end{align}
Normalization yields: $\phi_{1b_{2u}} = \dfrac{1}{2}(-f_5 + f_{6} + f_{7} - f_{8}).$
\begin{align} P_{B_{2u}}f_{10} = (1)f_{10} + (-1)f_9 + (1)f_{11} + (-1)f_{12} + (-1)f_{12} + (1)f_{11} + (-1)f_9 + (1)f_{10} &=& &-2f_9 + 2f_{10} + 2f_{11} - 2f_{12} \end{align}
Normalization yields: $\phi_{2b_{2u}} = \dfrac{1}{2}(-f_9 + f_{10} + f_{11} - f_{12}).$
Each of these four 2x2 symmetry blocks generates an identical Hamiltonian matrix. This will be demonstrated for the $B_{3g}$ symmetry; the others proceed analogously:
\begin{align} H_{1b_{3g},1b_{3g}} &=& &\int \dfrac{1}{2}(-f_5 + f_6 - f_7 + f_8)H\dfrac{1}{2}(-f_5 + f_6 - f_7 + f_8)d\tau \ &=& &\dfrac{1}{4}\bigg[ \int f_5Hf_5d\tau - \int f_5Hf_6d\tau + \int f_5Hf_7d\tau - \int f_5Hf_8d\tau \ & &-&\int f_6Hf_5d\tau + \int f_6Hf_6d\tau - \int f_6Hf_7d\tau + \int f_6Hf_8d\tau \ & & +&\int f_7Hf_5d\tau - \int f_7Hf_6d\tau + \int f_7Hf_7d\tau - \int f_7Hf_8d\tau \ & & -& \int f_8Hf_5d\tau + \int f_8Hf_6d\tau - \int f_8Hf_7d\tau + \int f_8Hf_8d\tau \bigg] \ &=& &\dfrac{1}{4}\left[ \alpha_{sp^2} - 0 + 0 - 0 - 0 + \alpha_{sp^2} - 0 + 0 + 0 - 0 + \alpha_{sp^2} - 0 - 0 + 0 - 0 + \alpha_{sp^2} \right] = \alpha_{sp^2} \end{align}
\begin{align} H_{1b_{3g},2b_{3g}} &=& &\int \dfrac{1}{2}(-f_5 + f_6 - f_7 + f_8)H\dfrac{1}{2}(-f_9 + f_{10} - f_{11} + f_{12})d\tau \ &=& &\dfrac{1}{4}\bigg[ \int f_5Hf_9d\tau - \int f_5Hf_{10}d\tau + \int f_5Hf_{11}d\tau - \int f_5Hf_{12}d\tau \ & &-&\int f_6Hf_{9}d\tau + \int f_6Hf_{10}d\tau - \int f_6Hf_{11}d\tau + \int f_6Hf_{12}d\tau \ & & +&\int f_7Hf_9d\tau - \int f_7Hf_{10}d\tau + \int f_7Hf_{11}d\tau - \int f_7Hf_{12}d\tau \ & & -& \int f_8Hf_9d\tau + \int f_8Hf_{10}d\tau - \int f_8Hf_{11}d\tau + \int f_8Hf_{12}d\tau \bigg] \ &=& &\dfrac{1}{4}\left[ \beta_{sp^2-s} - 0 + 0 - 0 - 0 + \beta_{sp^2-s} - 0 + 0 + 0 - 0 + \beta_{sp^2-s} - 0 - 0 + 0 - 0 + \beta_{sp^2-s} \right] = \beta_{sp^2-s} \end{align}
\begin{align} H_{2b_{3g},2b_{3g}} &=& &\int \dfrac{1}{2}(-f_9 + f_{10} - f_{11} + f_{12})H\dfrac{1}{2}(-f_9 + f_{10} - f_{11} + f_{12})d\tau \ &=& &\dfrac{1}{4}\bigg[ \int f_9Hf_9d\tau - \int f_9Hf_{10}d\tau + \int f_9Hf_{11}d\tau - \int f_9Hf_{12}d\tau \ & &-&\int f_{10}Hf_{9}d\tau + \int f_{10}Hf_{10}d\tau - \int f_{10}Hf_{11}d\tau + \int f_{10}Hf_{12}d\tau \ & & +&\int f_{11}Hf_9d\tau - \int f_{11}Hf_{10}d\tau + \int f_{11}Hf_{11}d\tau - \int f_{11}Hf_{12}d\tau \ & & -& \int f_{12}Hf_9d\tau + \int f_{12}Hf_{10}d\tau - \int f_{12}Hf_{11}d\tau + \int f_{12}Hf_{12}d\tau \bigg] \ &=& &\dfrac{1}{4}\left[ \alpha_s - 0 + 0 - 0 - 0 + \alpha_s - 0 + 0 + 0 - 0 + \alpha_s - 0 - 0 + 0 - 0 + \alpha_s \right] = \alpha_s \end{align}
This matrix eigenvalue problem then becomes:
$\begin{vmatrix} & \alpha_{sp^2} -\epsilon & \beta_{sp^2-s} & \ & \beta_{sp^2-s} & \alpha_s-\epsilon & \end{vmatrix} = 0 \nonumber$
$\begin{vmatrix} & -14.7 -\epsilon & -4.0 & \ & -4.0 & -13.6 -\epsilon & \end{vmatrix} = 0 \nonumber$
Solving this yields eigenvalues of:
\begin{vmatrix} & -18.19 & -10.11 & \end{vmatrix}
and corresponding eigenvectors:
\begin{vmatrix} & -0.7537 & -0.6572 & \ & -0.6572 & 0.7537 & \end{vmatrix}
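This 2x2 secular problem is easy to check numerically; a small numpy sketch using the parameter values above ($\alpha_{sp^2} = -14.7$, $\alpha_s = -13.6$, $\beta_{sp^2-s} = -4.0$; variable names are illustrative):

```python
import numpy as np

# 2x2 C-H symmetry-block Hamiltonian (eV): [[alpha_sp2, beta], [beta, alpha_s]]
H = np.array([[-14.7, -4.0],
              [-4.0, -13.6]])

# eigh returns eigenvalues in ascending order: bonding level first
eps, C = np.linalg.eigh(H)
```

The eigenvalues reproduce the -18.19 and -10.11 quoted above, and the bonding eigenvector's components match the 0.7537/0.6572 coefficients up to an overall sign.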
This results in an orbital energy diagram:
For the ground state of ethylene you would fill the bottom 3 levels (the C-C, C-H, and $\pi$ bonding orbitals), with 12 electrons.
4.
Using the hybrid atomic orbitals as labeled above (functions $f_1-f_7$) and the $D_{3h}$ point group symmetry, it is easiest to construct three sets of reducible representations:
i. the B $2p_z$ orbital (labeled function 1)
ii. the 3 B $sp^2$ hybrids (labeled functions 2 - 4)
iii. the 3 H 1s orbitals (labeled functions 5 - 7).
i. The B $2p_z$ orbital generates the following irreducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v &\ & \Gamma_{2p_{z}} & & 1 & & 1 & & -1& & -1 & & -1 & & 1 & \end{align}
This irreducible representation is $A_2''$ and is its own SALC-AO.
ii. The B $sp^2$ orbitals generate the following reducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v &\ & \Gamma_{sp^2} & & 3 & & 0 & & 1& & 3 & & 0 & & 1 & \end{align}
This reducible representation reduces to $1A_1'$ and 1E'
irreducible representations.
Projectors are used to find the SALC-AOs for these irreducible representations.
Define: $C_3$ = 120 degree rotation, $C_3'$ = 240 degree rotation, $C_2$ = rotation around $f_4$, $C_2'$ = rotation around $f_2$, and $C_2''$ = rotation around $f_3$. $S_3$ and $S_3'$ are defined analogously to $C_3$ and $C_3'$, with an accompanying horizontal reflection. $\sigma_v$ = a reflection plane through $f_4$, $\sigma_v'$ = a reflection plane through $f_2$, and $\sigma_v''$ = a reflection plane through $f_3$.
$P_{A_1'} f_2 = (1)E f_2 + (1)C_3f_2 + (1)C_3'f_2 + (1)C_2f_2 + (1)C_2'f_2 + (1)C_2''f_2 + (1)\sigma_hf_2 + (1)S_3f_2 +(1)S_3'f_2 + (1)\sigma_vf_2 + (1)\sigma_v'f_2 + (1)\sigma_v''f_2$
$= (1)f_2 + (1)f_3 + (1)f_4 + (1)f_3 + (1)f_2 + (1)f_4 + (1)f_2 + (1) f_3 + (1)f_4 + (1)f_3 + (1)f_2 + (1)f_4$
$= 4f_2 + 4f_3 + 4f_4$
Normalization yields: $\phi_{1a_1'} = \dfrac{1}{\sqrt{3}}\left( f_2 + f_3 + f_4 \right).$
Applying the E' projector to $f_2$ yields a function proportional to $(2f_2 - f_3 - f_4)$; normalization yields: $\phi_{1e'} = \dfrac{1}{\sqrt{6}}\left( 2f_2 - f_3 -f_4 \right) .$
To find the second e' (orthogonal to the first), projection of $f_3$ yields $(2f_3 - f_2 - f_4)$ and projection of $f_4$ yields $(2f_4 - f_2 - f_3)$. Neither of these functions is orthogonal to the first, but the combination $(2f_3 - f_2 - f_4) - (2f_4 - f_2 - f_3) = 3f_3 - 3f_4$ yields a function which is orthogonal to the first.
Normalization yields: $\phi_{2e'} = \dfrac{1}{\sqrt{2}}\left( f_3 - f_4\right).$
iii. The H 1s orbitals generate the following reducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v & \ & \Gamma_{sp^2} & & 3 & & 0 & & 1 & & 3 & & 0 & & 1 & \end{align} \nonumber
This reducible representation reduces to $1A_1'$ and 1E' irreducible representations exactly as in part ii., and the projectors used to find the SALC-AOs for these irreducible representations are exactly analogous to those of part ii.
$\phi_{2a_1'} = \dfrac{1}{\sqrt{3}} (f_5 + f_6 + f_7) \nonumber$
$\phi_{3e'} = \dfrac{1}{\sqrt{6}}(2f_5 - f_6 - f_7). \nonumber$
$\phi_{4e'} = \dfrac{1}{\sqrt{2}}(f_6 - f_7). \nonumber$
So, there are $1A_2'', 2A_1'$ and 2E' orbitals. Solving the Hamiltonian matrix for each symmetry block yields:
$A_2''$ Block:
\begin{align} H_{1a_2'',1a_2''} & = & \int f_1Hf_1d\tau \ & = & \alpha_{2p\pi} \ & = & -8.5\end{align}
$A_1'$ Block:
\begin{align} H_{1a_1',1a_1'} & = & \int \dfrac{1}{\sqrt{3}}(f_2 +f_3+f_4)H\dfrac{1}{\sqrt{3}}(f_2 + f_3 + f_4)d\tau & = & \dfrac{1}{3} [ \int f_2Hf_2d\tau + \int f_2Hf_3d\tau + \int f_2Hf_4d\tau + \ & + & \int f_3 Hf_2d\tau +\int f_3 Hf_3d\tau + \int f_3Hf_4d\tau + \ & + & \int f_4Hf_2d\tau + \int f_4Hf_3d\tau + \int f_4Hf_4d\tau ] \ & = & \dfrac{1}{3}\left[ \alpha_{sp^2} + 0 + 0 + 0 + \alpha_{sp^2} + 0 + 0 + 0 + \alpha_{sp^2} \right] = \alpha_{sp^2} \end{align}
\begin{align} H_{1_{a1'},2_{a1'}} & = & \int \dfrac{1}{\sqrt{3}}(f_2 +f_3+f_4)H\dfrac{1}{\sqrt{3}}(f_5 + f_6 + f_7)d\tau & = & \dfrac{1}{3} [ \int f_2Hf_5d\tau + \int f_2Hf_6d\tau + \int f_2Hf_7d\tau + \ & + & \int f_3 Hf_5d\tau +\int f_3 Hf_6d\tau + \int f_3Hf_7d\tau + \ & + & \int f_4Hf_5d\tau + \int f_4Hf_6d\tau + \int f_4Hf_7d\tau ] \ & = & \dfrac{1}{3}\left[ \beta_{sp^2-s} + 0 + 0 + 0 + \beta_{sp^2-s} + 0 + 0 + 0 + \beta_{sp^2-s} \right] = \beta_{sp^2-s} \end{align}
\begin{align} H_{2_{a1'},2_{a1'}} & = & \int \dfrac{1}{\sqrt{3}}(f_5 +f_6+f_7)H\dfrac{1}{\sqrt{3}}(f_5 + f_6 + f_7)d\tau & = & \dfrac{1}{3} [ \int f_5Hf_5d\tau + \int f_5Hf_6d\tau + \int f_5Hf_7d\tau + \ & + & \int f_6 Hf_5d\tau +\int f_6 Hf_6d\tau + \int f_6Hf_7d\tau + \ & + & \int f_7Hf_5d\tau + \int f_7Hf_6d\tau + \int f_7Hf_7d\tau ] \ & = & \dfrac{1}{3}\left[ \alpha_{s} + 0 + 0 + 0 + \alpha_{s} + 0 + 0 + 0 + \alpha_{s} \right] = \alpha_{s} \end{align}
This matrix eigenvalue problem then becomes:
\begin{align} \begin{vmatrix} & \alpha_{sp^2} -\epsilon & \beta_{sp^2-s} & \ & \beta_{sp^2-s} & \alpha_s-\epsilon & \end{vmatrix} & = & 0 \ \begin{vmatrix} & -10.7 - \epsilon & -3.5 & \ & -3.5 & -13.6 - \epsilon & \end{vmatrix} & = & 0 \end{align}
Solving this yields eigenvalues of:
\begin{vmatrix} & -15.94 & -8.36 & \end{vmatrix}
and corresponding eigenvectors:
\begin{vmatrix} & -0.5555 & -0.8315 & \ & -0.8315 & 0.5555 & \end{vmatrix}
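As with the C-H block of the previous problem, this 2x2 block can be verified numerically with the parameter values above ($\alpha_{sp^2} = -10.7$, $\alpha_s = -13.6$, $\beta_{sp^2-s} = -3.5$; variable names are illustrative):

```python
import numpy as np

# 2x2 A1' block of the BH3 Hamiltonian (eV)
H = np.array([[-10.7, -3.5],
              [-3.5, -13.6]])

eps, C = np.linalg.eigh(H)   # ascending order: bonding level first
```

The two eigenvalues reproduce the -15.94 and -8.36 quoted above, and the bonding eigenvector matches the 0.5555/0.8315 coefficients up to an overall sign.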
E' Block:
This 4x4 symmetry block factors to two 2x2 blocks: where one 2x2 block includes the SALC-AOs
\begin{align} \phi_{e'} & = & \dfrac{1}{\sqrt{6}}(2f_2 - f_3 - f_4) \ \phi_{e'} & = & \dfrac{1}{\sqrt{6}}(2f_5 - f_6 - f_7),\end{align}
and the other includes the SALC-AOs
\begin{align} \phi_{e'} & = & \dfrac{1}{\sqrt{2}}(f_3 - f_4) \ \phi_{e'} & = & \dfrac{1}{\sqrt{2}}(f_6 - f_7). \end{align}
Both of these 2x2 matrices are identical to the $A_1'$ 2x2 array and therefore yield identical energies and MO coefficients.
This results in an orbital energy diagram:
For the ground state of $BH_3$ you would fill the bottom level (B-H bonding), $a_1'$ and e' orbitals, with 6 electrons.
5.
5. a. The two F p orbitals (top and bottom) generate the following reducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v & \ & \Gamma_p & & 2 & & 2 & & 0 & & 0 & & 0 & & 2 & \end{align}
This reducible representation reduces to $1A_1'$ and $1A_2''$ irreducible representations.
Projectors may be used to find the SALC-AOs for these irreducible representations.
\begin{align} \phi_{a_1'} & = & \dfrac{1}{\sqrt{2}}(f_1 - f_2) \ \phi_{a_2''} & = & \dfrac{1}{\sqrt{2}}(f_1 + f_2) \end{align}
5. b. The three trigonal F p orbitals generate the following reducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v & \ & \Gamma_p & & 3 & & 0 & & 1 & & 3 & & 0 & & 1 & \end{align}
This reducible representation reduces to $1A_1'$ and 1E' irreducible representations.
Projectors may be used to find the SALC-AOs for these irreducible representations (but they are exactly analogous to the previous few problems):
\begin{align} \phi_{a_1'} & = & \dfrac{1}{\sqrt{3}}(f_3 + f_4 + f_5) \ \phi_{e'} & = & \dfrac{1}{\sqrt{6}}(2f_3 - f_4 - f_5) \ \phi_{e'} & = & \dfrac{1}{\sqrt{2}}(f_4 - f_5). \end{align}
5. c. The 3 P $sp^2$ orbitals generate the following reducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v & \ & \Gamma_p & & 3 & & 0 & & 1 & & 3 & & 0 & & 1 & \end{align}
This reducible representation reduces to $1A_1'$ and 1E' irreducible representations. Again, projectors may be used to find the SALC-AOs for these irreducible representations (they are exactly analogous to the previous few problems):
\begin{align} \phi_{a_1'} & = & \dfrac{1}{\sqrt{3}}(f_6 + f_7 + f_8) \ \phi_{e'} & = & \dfrac{1}{\sqrt{6}}(2f_6 - f_7 - f_8) \ \phi_{e'} & = & \dfrac{1}{\sqrt{2}}(f_7 - f_8). \end{align}
The leftover P $p_z$ orbital generates the following irreducible representation:
\begin{align} & D_{3h} & & E & & 2C_3 & & 3C_2 & & \sigma_h & & 2S_3 & & 3\sigma_v & \ & \Gamma_p & & 1 & & 1 & & -1 & & -1 & & -1 & & 1 & \end{align}
This irreducible representation is $A_2''$:
$\phi_{a_2''} = f_9. \nonumber$
Drawing an energy level diagram using these SALC-AOs would result in the following:
Q1
For the given orbital occupations (configurations) of the following systems, determine all possible states (all possible allowed combinations of spin and space states). There is no need to form the determinantal wavefunctions; simply label each state with its proper term symbol. One method commonly used is Harry Gray's "box method" found in Electrons and Chemical Bonding.
\begin{align} &a.)& CH_2& &1a_1^22a_1^21b_2^23a_1^11b_1^1& \ &b.)& B_2& &1\sigma_g^21\sigma_u^22\sigma_u^21\pi_u^12\pi_u^1 & \ &c.)& O_2& &1\sigma_g^21\sigma_u^22\sigma_g^22\sigma_u^21\pi_u^43\sigma_g^21\pi_g^2 & \ &d.)& Ti& &1s^22s^22p^63s^23p^64s^23d^14d^1& \ &e.)& Ti& &1s^22s^22p^63s^23p^64s^23d^2& \end{align}
22.3.02: ii. Exercises
Q1
Show that the configuration (determinant) corresponding to the $Li^+ 1s(\alpha )1s(\alpha )$ state vanishes.
Q2
Construct the 3 triplet and 1 singlet wavefunctions for the $Li^+ 1s^12s^1$ configuration. Show that each state is a proper eigenfunction of $S^2$ and $S_z$ (use raising and lowering operators for $S^2$).
Q3
Construct wavefunctions for each of the following states of $CH_2:$
\begin{align} &a.) & ^1B_1(1_{a1}^22_{a1}^21_{b2}^23_{a1}^11_{b1}^1) \ &b.) & ^3B_1(1_{a1}^22_{a1}^21_{b2}^23_{a1}^11_{b1}^1) \ & c.) & ^1A_1(1_{a1}^22_{a1}^21_{b2}^23_{a1}^2) \end{align}
Q4
Construct wavefunctions for each state of the $1\sigma^22\sigma^23\sigma^21\pi^2$ configuration of NH.
Q5
Construct wavefunctions for each state of the $1s^12s^13s^1$ configuration of Li.
Q6
Determine all term symbols that arise from the $1s^22s^22p^23d^1$ configuration of the excited N atom.
Q7
Calculate the energy (using Slater Condon rules) associated with the 2p valence electrons for the following states of the C atom.
i. $^3P(M_L=1,M_S=1),$
ii. $^3P(M_L=0,M_S=0),$
iii. $^1S(M_L=0, M_S=0), \text{ and }$
iv. $^1D(M_L=0, M_S=0)$.
Q8
Calculate the energy (using Slater Condon rules) associated with the $\pi$ valence electrons for the following states of the NH molecule.
i. $^1\Delta (M_L=2, M_S=0),$
ii. $^1\Sigma (M_L=0, M_S=0), \text{ and }$
iii. $^3\Sigma (M_L=0, M_S=0).$
22.3.03: iii. Problems
Q1
Let us investigate the reactions:
\begin{align} i.& &CH_2(^1A_1) &\rightarrow & & H_2 + C\text{, and} \ ii.& &CH_2(^3B_1) & \rightarrow & & H_2 + C, \end{align}
under an assumed $C_{2v}$ reaction pathway utilizing the following information:
a. Write down (first in terms of $2p_{1,0,-1}$ orbitals and then in terms of $2p_{x,y,z}$ orbitals) the:
i. three Slater determinant (SD) wavefunctions belonging to the $^3P$ state all of which have $M_S = 1$,
ii. five $^1$D SD wavefunctions, and
iii. one $^1$S SD wavefunction.
b. Using the coordinate system shown below, label the hydrogen orbitals $\sigma_g, \sigma_u$ and the carbon 2s, $2p_x$, $2p_y$, $2p_z$, orbitals as $a_1\text{, }b_1(x)\text{, }b_2(y)\text{, or }a_2$. Do the same for the $\sigma \text{, } \sigma \text{, } \sigma^{\text{*}}\text{, } \sigma^{\text{*}} \text{, n, and } p_{\pi} \text{ orbitals of } CH_2.$
c. Draw an orbital correlation diagram for the $CH_2 \rightarrow H_2 + C$ reactions. Try to represent the relative energy orderings of the orbitals correctly.
d. Draw (on graph paper) a configuration correlation diagram for $CH_2(^3B_1) \rightarrow H_2 + C$ showing all configurations which arise from the $C(^3P) + H_2$ products. You can assume that doubly excited configurations lie much (~100 kcal/mole) above their parent configurations.
e. Repeat step d. for $CH_2(^1A_1) \rightarrow H_2 + C$ again showing all configurations which arise from the $C(^1D) + H_2$ products.
f. Do you expect the reaction $C(^3P) + H_2 \rightarrow CH_2$ to have a large activation barrier? About how large? What state of $CH_2$ is produced in this reaction? Would distortions away from $C_{2v}$ symmetry be expected to raise or lower the activation barrier? Show how one could estimate where along the reaction path the barrier top occurs.
g. Would $C(^1D) + H_2 \rightarrow CH_2$ be expected to have a larger or smaller barrier than you found for the $^3P C$ reaction?
Q2
The decomposition of the ground-state singlet carbene to produce acetylene and $^1$D carbon is known to occur with an activation energy equal to the reaction endothermicity. However, when triplet carbene decomposes to acetylene and ground-state (triplet) carbon, the activation energy exceeds this reaction's endothermicity. Construct orbital, configuration, and state correlation diagrams which permit you to explain the above observations. Indicate whether single-configuration or configuration-interaction wavefunctions would be required to describe the above singlet and triplet decomposition processes.
Q3
We want to carry out a configuration interaction calculation on $H_2$ at $R = 1.40$ au. A minimal basis consisting of normalized 1s Slater orbitals with $\xi = 1.0$ gives rise to the following overlap (S), one-electron (h), and two-electron atomic integrals:
$\langle 1s_A | 1s_B \rangle = 0.753 \equiv S, \nonumber$
$\langle 1s_A |h| 1s_A \rangle = -1.110, \text{ } \langle 1s_B |h| 1s_A \rangle = -0.968, \nonumber$
$\langle 1s_A1s_A | 1s_A1s_A \rangle = 0.625 \equiv \langle AA|AA\rangle , \nonumber$
$\langle AA|BB \rangle = 0.323 \text{, } \langle AB|AB \rangle = 0.504 \text{, and} \nonumber$
$\langle AA|AB \rangle = 0.426. \nonumber$
1. The normalized and orthogonal molecular orbitals we will use for this minimal basis will be determined purely by symmetry: $\sigma_g = \dfrac{(1s_A + 1s_B)}{\sqrt{2+2S}} \text{, and } \nonumber$ $\sigma_u = \dfrac{(1s_A - 1s_B)}{\sqrt{2-2S}} \nonumber$ Show that these orbitals are indeed orthogonal.
2. Evaluate (using the one- and two-electron atomic integrals given above) the unique one- and two-electron integrals over this molecular orbital basis (this is called a transformation from the ao to the mo basis). For example, evaluate $\langle u|h|u \rangle$, $\langle uu|uu \rangle$, $\langle gu|gu \rangle$, etc.
3. Using the two $^1\sum\limits_g^+$ configurations $\sigma_g^2 \text{ and } \sigma_u^2$, show that the elements of the 2x2 configuration interaction Hamiltonian matrix are -1.805, 0.140, and -0.568.
4. Using this configuration interaction matrix, find the configuration interaction (CI) approximation to the ground and excited state energies and wavefunctions.
5. Evaluate and make a rough sketch of the polarized orbitals which result from the above ground state $\sigma_g^2 \text{ and } \sigma_u^2$ CI wavefunction.
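The 2x2 diagonalization asked for in part 4 can be sketched numerically; this is not the worked solution, just a quick check assuming the matrix elements stated in part 3 (atomic units) and illustrative variable names:

```python
import numpy as np

# 2x2 CI Hamiltonian over the sigma_g^2 and sigma_u^2 configurations,
# built from the matrix elements quoted in part 3 (hartrees)
H_CI = np.array([[-1.805, 0.140],
                 [0.140, -0.568]])

# E[0] is the CI ground-state energy, E[1] the excited-state energy;
# columns of C are the corresponding CI mixing coefficients
E, C = np.linalg.eigh(H_CI)
```

Note that mixing in the $\sigma_u^2$ configuration lowers the ground-state energy below the single-configuration value of -1.805, as the variational principle requires.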
Q1
a. For non-degenerate point groups one can simply multiply the representations (since only one representation will be obtained):
$a_1 \otimes b_1 = b_1 \nonumber$
Constructing a "box" in this case is unnecessary since it would only contain a single row. Two unpaired electrons will result in a singlet (S=0, $M_S$=0) and three triplet states (S=1, $M_S$=1; S=1, $M_S$=0; S=1, $M_S$=-1). The states will be: $^3B_1(M_S=1)$, $^3B_1(M_S=0)$, $^3B_1(M_S=-1)$, and $^1B_1(M_S=0)$.
b. Remember that when coupling non-equivalent linear molecule angular momenta, one simply adds the individual $L_z$ values and vector couples the electron spins. So, in this case $(1\pi_u^12\pi_u^1)$, we have $M_L$ values of 1+1, 1-1, -1+1, and -1-1 (2, 0, 0, and -2). The term symbol $\Delta$ is used to denote the spatially doubly degenerate level ($M_L = \pm 2$) and there are two distinct spatially non-degenerate levels denoted by the term symbol $\Sigma$ ($M_L=0$). Again, two unpaired electrons will result in a singlet ($S=0$, $M_S=0$) and three triplet states ($S=1$, $M_S=1$; $S=1$, $M_S=0$; $S=1$, $M_S=-1$). The states generated are then:
$^1\Delta \text{ }(M_L=2) \text{; one state } (M_S=0), \nonumber$
$^1\Delta \text{ }(M_L=-2) \text{; one state } (M_S=0), \nonumber$
$^3\Delta \text{ }(M_L=2) \text{; three states } (M_S=\text{1, 0, and -1),} \nonumber$
$^3\Delta \text{ }(M_L=-2) \text{; three states } (M_S=\text{1, 0, and -1),} \nonumber$
$^1\sum \text{ }(M_L=0) \text{; one state } (M_S=0), \nonumber$
$^1\sum \text{ }(M_L=0) \text{; one state } (M_S=0), \nonumber$
$^3\sum \text{ }(M_L=0) \text{; three states } (M_S=\text{1, 0, and -1), and} \nonumber$
$^3\sum \text{ }(M_L=0) \text{; three states } (M_S=\text{1, 0, and -1)} \nonumber$
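The counting in part b can be verified by brute-force enumeration. The following sketch is an illustrative aside (assuming nothing beyond the quantum numbers named above): it lists all 16 microstates of two non-equivalent $\pi$ electrons and tallies them by $M_L$:

```python
from itertools import product

# Each pi electron carries m_l = +1 or -1 and m_s = +1/2 or -1/2.
# Because the electrons sit in different shells (1pi_u vs 2pi_u),
# there is no Pauli restriction between them: 4 x 4 = 16 microstates.
states = [((ml1, ms1), (ml2, ms2))
          for ml1, ms1 in product((1, -1), (0.5, -0.5))
          for ml2, ms2 in product((1, -1), (0.5, -0.5))]

# Tally by total M_L = ml1 + ml2.
ML_counts = {}
for (ml1, ms1), (ml2, ms2) in states:
    ML = ml1 + ml2
    ML_counts[ML] = ML_counts.get(ML, 0) + 1

print(len(states), ML_counts)
```

The tally comes out to 4 microstates at each of $M_L = \pm 2$ (the singlet and triplet $\Delta$ levels) and 8 at $M_L = 0$ (the two $\Sigma$ singlets plus the two $\Sigma$ triplets), 16 in total.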
c. Constructing the "box" for two equivalent $\pi$ electrons one obtains:
From this "box" one obtains six states:
$^1\Delta (M_L=2)\text{; one state }(M_S=0), \nonumber$
$^1\Delta (M_L=-2)\text{; one state }(M_S=0), \nonumber$
$^1\sum (M_L=0)\text{; one state }(M_S=0)\text{, and} \nonumber$
$^3\sum (M_L=0)\text{; three states }(M_S=\text{1, 0, and -1}). \nonumber$
d. It is not necessary to construct a "box" when coupling non-equivalent angular momenta since the vector coupling results in a range from the sum of the two individual angular momenta to the absolute value of their difference. In this case, $3d^14d^1$, L=4, 3, 2, 1, 0, and S=1, 0. The term symbols are: $^3G, ^1G, ^3F, ^1F, ^3D, ^1D, ^3P, ^1P, ^3S\text{, and } ^1S.$ The L and S angular momenta can be vector coupled to produce further splitting into levels:
$\text{ J = L + S ... |L - S|. } \nonumber$
Denoting J as a term symbol subscript one can identify all the levels and the subsequent (2J + 1) states:
$^3G_5 \text{ (11 states),} \nonumber$
$^3G_4 \text{ (9 states),} \nonumber$
$^3G_3 \text{ (7 states),} \nonumber$
$^1G_4 \text{ (9 states),} \nonumber$
$^3F_4 \text{ (9 states),} \nonumber$
$^3F_3 \text{ (7 states),} \nonumber$
$^3F_2 \text{ (5 states),} \nonumber$
$^1F_3 \text{ (7 states),} \nonumber$
$^3D_3 \text{ (7 states),} \nonumber$
$^3D_2 \text{ (5 states),} \nonumber$
$^3D_1 \text{ (3 states),} \nonumber$
$^1D_2 \text{ (5 states),} \nonumber$
$^3P_2 \text{ (5 states),} \nonumber$
$^3P_1 \text{ (3 states),} \nonumber$
$^3P_0 \text{ (1 state),} \nonumber$
$^1P_1 \text{ (3 states),} \nonumber$
$^3S_1 \text{ (3 states), and} \nonumber$
$^1S_0 \text{ (1 state).} \nonumber$
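A quick script can confirm that the (2J + 1) counts listed above exhaust all 100 microstates of two non-equivalent d electrons. This is a hedged cross-check, not part of the original solution:

```python
# For 3d^1 4d^1, L runs over 0..4 (S, P, D, F, G) and S is 0 or 1.
# Each (L, S) term splits into levels J = |L - S|, ..., L + S,
# and each level J holds 2J + 1 states.
total = 0
for L in range(5):
    for S in (0, 1):
        # Loop over 2J to stay in integers (J itself is an integer here).
        for twoJ in range(2*abs(L - S), 2*(L + S) + 1, 2):
            total += twoJ + 1

print(total)   # 100 = (2 x 5) x (2 x 5) spin-orbitals per electron, no Pauli restriction
```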
e. Construction of a "box" for the two equivalent d electrons generates (note the "box" has been turned sideways for convenience):
The term symbols are: $^1G \text{, } ^3F \text{, } ^1D \text{, } ^3P \text{, and } ^1S.$ The L and S angular momenta can be vector coupled to produce further splitting into levels:
$^1G_4 \text{ (9 states),} \nonumber$
$^3F_4 \text{ (9 states),} \nonumber$
$^3F_3 \text{ (7 states),} \nonumber$
$^3F_2 \text{ (5 states),} \nonumber$
$^1D_2 \text{ (5 states),} \nonumber$
$^3P_2 \text{ (5 states),} \nonumber$
$^3P_1 \text{ (3 states),} \nonumber$
$^3P_0 \text{ (1 state), and} \nonumber$
$^1S_0 \text{ (1 state).} \nonumber$
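The same kind of check works for the equivalent-$d^2$ case: the Pauli principle leaves C(10, 2) = 45 microstates, which must equal the total state count carried by the five term symbols. A hedged enumeration sketch:

```python
from itertools import combinations

# A d spin-orbital is a pair (m_l, m_s) with m_l in -2..2 and m_s = +/-1/2.
# Two EQUIVALENT d electrons must occupy two DISTINCT spin-orbitals,
# so the microstates are unordered pairs: C(10, 2) = 45.
spin_orbitals = [(ml, ms) for ml in range(-2, 3) for ms in (0.5, -0.5)]
microstates = list(combinations(spin_orbitals, 2))
print(len(microstates))   # 45

# Cross-check against the term-symbol count (2S+1)(2L+1):
# 1G(9) + 3F(21) + 1D(5) + 3P(9) + 1S(1) = 45.
terms = {'1G': (0, 4), '3F': (1, 3), '1D': (0, 2), '3P': (1, 1), '1S': (0, 0)}
term_total = sum((2*S + 1)*(2*L + 1) for S, L in terms.values())
print(term_total)   # 45
```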
Q1
Constructing the Slater determinant corresponding to the "state" $1s(\alpha )1s(\alpha )$ with the rows labeling the orbitals and the columns labeling the electrons gives:
\begin{align} |1s\alpha 1s\alpha| &=& &\dfrac{1}{\sqrt{2}} \begin{vmatrix} 1s\alpha (1) & 1s\alpha (2) \ 1s\alpha (1) & 1s\alpha (2) \end{vmatrix} \ &=& &\dfrac{1}{\sqrt{2}} \left( 1s\alpha (1)1s\alpha (2) - 1s\alpha (1) 1s\alpha (2) \right) \ &=& &0\end{align}
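The vanishing here is not special to 1s orbitals: any determinant with two identical rows (two identical spin-orbitals) is zero, which is the Pauli principle in determinant form. A small numerical illustration (hedged, using arbitrary random amplitudes):

```python
import numpy as np

# Evaluate the same spin-orbital (row) at two electron coordinates (columns);
# a Slater determinant with a doubly occupied spin-orbital has two equal rows.
rng = np.random.default_rng(0)
row = rng.standard_normal(2)            # 1s-alpha "amplitudes" at electrons 1 and 2
D = np.vstack([row, row]) / np.sqrt(2)  # both rows are the same spin-orbital

det = np.linalg.det(D)
print(det)   # 0.0 up to floating-point round-off, for ANY choice of amplitudes
```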
Q2
Starting with the $M_S=1 \text{ }^3S$ state, which in a "box" for this $M_L=0 \text{, } M_S=1$ case would contain only one product function, $|1s\alpha 2s\alpha |$, and applying $S_-$ gives:
\begin{align} S_-^3S(S=1, M_S=1) &=& &\sqrt{1(1 + 1) - 1(1 - 1)}\hbar^3S(S=1, M_S=0) \ &=& &\hbar \sqrt{2} ^3S(S=1, M_S=0) \ &=& &\left( S_-(1) + S_-(2)\right) |1s\alpha 2s\alpha | \ &=& &S_-(1) |1s\alpha 2s\alpha | + S_-(2)|1s\alpha 2s\alpha | \ &=& &\hbar \sqrt{\dfrac{1}{2}\left( \dfrac{1}{2} + 1\right) - \dfrac{1}{2}\left(\dfrac{1}{2} - 1\right)}\left[ |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right] \ &=& &\hbar \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right) \ \text{So, } \hbar\sqrt{2} ^3S(S=1, M_S=0) &=& &\hbar (|1s\beta 2s\alpha | + |1s\alpha 2s\beta | ) \ ^3S(S=1, M_S=0) &=& &\dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right)\end{align}
The three triplet states are then:
\begin{align} &^3S(S=1, M_S=1) = |1s\alpha 2s\alpha |, \ &^3S(S=1, M_S=0) = \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta| \right)\text{, and} \ &^3S(S=1, M_S=-1) = |1s\beta 2s\beta |. \end{align}
The singlet state, which must be constructed orthogonal to the three triplet states (and in particular to the $^3S(S=1, M_S=0)$ state), can be seen to be:
$^1S(S=0, M_S=0) = \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta | \right) . \nonumber$
Applying $S^2 \text{ and } S_z$ to each of these states gives:
\begin{align} S_Z |1s\alpha 2s\alpha | &=& &\left( S_Z(1) + S_Z(2) \right) |1s\alpha 2s\alpha | \ &=& &S_Z(1)|1s\alpha 2s\alpha | + S_Z(2)|1s\alpha 2s\alpha | \ &=& &\hbar \left(\dfrac{1}{2}\right) |1s\alpha 2s\alpha | + \hbar \left(\dfrac{1}{2}\right) |1s\alpha 2s\alpha| \ &=& &\hbar |1s\alpha 2s\alpha| \ S^2 |1s\alpha 2s\alpha| &=& &\left( S_-S_+ + S_Z^2 + \hbar S_Z \right) |1s\alpha 2s\alpha | \ &=& &S_-S_+ |1s\alpha 2s\alpha | + S_Z^2 |1s\alpha 2s\alpha | + \hbar S_Z |1s\alpha 2s\alpha | \ &=& &0 + \hbar^2 |1s\alpha 2s\alpha | + \hbar^2| 1s\alpha 2s\alpha | \ &=& &2\hbar^2 |1s\alpha 2s\alpha| \end{align}
\begin{align} S_Z\dfrac{1}{\sqrt{2}}\left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right) &=& &\left( S_Z(1) + S_Z(2) \right) \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + | 1s\alpha 2s\beta | \right) \ &=& &\dfrac{1}{\sqrt{2}} \left( S_Z(1) + S_Z(2) \right) |1s\beta 2s\alpha | + \dfrac{1}{\sqrt{2}} \left( S_Z(1) + S_Z(2) \right) |1s\alpha 2s\beta | \ &=& &\dfrac{1}{\sqrt{2}} \left( \hbar \left( -\dfrac{1}{2} \right) + \hbar \left( \dfrac{1}{2} \right) \right) |1s\beta 2s\alpha | + \dfrac{1}{\sqrt{2}} \left( \hbar \left( \dfrac{1}{2} \right) + \hbar \left( -\dfrac{1}{2} \right) \right) |1s\alpha 2s\beta | \ &=& &0\hbar \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta |\right) \end{align}
\begin{align} S^2\dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right) &=& &\left( S_-S_+ + S_Z^2 + \hbar S_Z \right)\dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right) \ &=& &S_-S_+ \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | + |1s\alpha 2s\beta | \right) \ &=& & \dfrac{1}{\sqrt{2}} \left( S_-\left( S_+(1) + S_+(2)\right) | 1s\beta 2s\alpha | + S_-\left( S_+(1) + S_+(2)\right) |1s\alpha 2s\beta |\right) \ &=& &\dfrac{1}{\sqrt{2}} \left( S_- \hbar |1s\alpha 2s\alpha | + S_- \hbar |1s\alpha 2s\alpha | \right) \ &=& &2\hbar \dfrac{1}{\sqrt{2}} \left( \left( S_-(1) + S_-(2)\right) | 1s\alpha 2s\alpha | \right) \ &=& &2\hbar \dfrac{1}{\sqrt{2}} \left( \hbar |1s\beta 2s\alpha | + \hbar |1s\alpha 2s\beta |\right) \ &=& &2\hbar^2 \dfrac{1}{\sqrt{2}}\left( |1s\beta 2s\alpha | + |1s\alpha 2s \beta | \right)\end{align}
\begin{align} S^2 | 1s\beta 2s\beta | &=& &\left( S_+S_- + S_Z^2 - \hbar S_Z \right) |1s\beta 2s\beta | \ &=& &S_+S_- |1s\beta 2s\beta | + S_Z^2 |1s\beta 2s\beta | - \hbar S_Z |1s\beta 2s\beta | \ &=& &0 + \hbar^2 | 1s\beta 2s \beta | + \hbar ^2|1s\beta 2s\beta | \ &=& &2\hbar^2 |1s\beta 2s\beta | \end{align}
\begin{align} S_Z \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta |\right) &=& &\left( S_Z(1) + S_Z(2) \right) \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta | \right) \ &=& &\dfrac{1}{\sqrt{2}} \left(S_Z(1) + S_Z(2)\right) |1s\beta 2s\alpha| - \dfrac{1}{\sqrt{2}}\left( S_Z(1) + S_Z(2)\right) |1s\alpha 2s\beta | \ &=& & \dfrac{1}{\sqrt{2}} \left( \hbar \left( -\dfrac{1}{2} \right) + \hbar \left( \dfrac{1}{2} \right) \right) |1s\beta 2s\alpha | - \dfrac{1}{\sqrt{2}} \left( \hbar \left( \dfrac{1}{2} \right) + \hbar \left( -\dfrac{1}{2} \right) \right) |1s\alpha 2s\beta | \ &=& &0\hbar \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta | \right) \end{align}
\begin{align} S^2\dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta |\right) &=& &\left( S_-S_+ + S_Z^2 + \hbar S_Z \right) \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - | 1s\alpha 2s\beta |\right) \ &=& &S_-S_+\dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta | \right) \ &=& &\dfrac{1}{\sqrt{2}}\left( S_- \left( S_+(1) + S_+(2)\right) |1s\beta 2s \alpha | - S_- \left(S_+(1) + S_+(2)\right) |1s\alpha 2s\beta |\right) \ &=& &\dfrac{1}{\sqrt{2}} \left( S_-\hbar |1s\alpha 2s\alpha | - S_-\hbar |1s\alpha 2s\alpha |\right) \ &=& &0\hbar \dfrac{1}{\sqrt{2}}\left( \left( S_-(1) + S_-(2)\right) |1s\alpha 2s\alpha | \right) \ &=& &0\hbar \dfrac{1}{\sqrt{2}} \left( \hbar |1s\beta 2s\alpha | - \hbar |1s\alpha 2s\beta | \right) \ &=& &0\hbar^2 \dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha | - |1s\alpha 2s\beta | \right) = 0\end{align}
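The operator algebra above can be double-checked by building $S^2$ as a matrix in the four-dimensional two-spin product space. The sketch below is an illustrative aside (in units where $\hbar = 1$); it recovers the triplet eigenvalue S(S+1) = 2 three times and the singlet eigenvalue 0 once:

```python
import numpy as np

# One-electron spin operators for spin-1/2, from the Pauli matrices / 2.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def total(op):
    # op acting on electron 1 plus op acting on electron 2 (tensor products)
    return np.kron(op, I2) + np.kron(I2, op)

Sx, Sy, Sz = total(sx), total(sy), total(sz)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz   # total S^2 in the 4-dim product basis

eigs = np.sort(np.linalg.eigvalsh(S2))
print(eigs)   # approximately [0, 2, 2, 2]: one singlet, three triplet states
```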
Q3
a. Once the spatial symmetry has been determined by multiplication of the irreducible representations, the spin coupling is identical to exercise 2 and gives the result:
$\dfrac{1}{\sqrt{2}} \left( |3a_1\alpha 1b_1\beta | - |3a_1\beta 1b_1\alpha |\right) \nonumber$
b. There are three states here (again analogous to exercise 2):
\begin{align} & 1.) |3a_1\alpha 1b_1\alpha |, \ & 2.) \dfrac{1}{\sqrt{2}} \left( |3a_1\alpha 1b_1\beta | + |3a_1\beta 1b_1\alpha | \right) \text{, and} \ & 3.) |3a_1\beta 1b_1\beta | \end{align}
c. $|3a_1\alpha 3a_1\beta | \nonumber$
Q4
As shown in review exercise 1c, for two equivalent $\pi$ electrons one obtains six states:
\begin{align} &^1\Delta (M_L=2)\text{; one state }(M_S=0), \ &^1\Delta (M_L=-2)\text{; one state }(M_S=0), \ &^1\sum (M_L=0)\text{; one state }(M_S=0)\text{, and } \ &^3\sum (M_L=0)\text{; three states }(M_S=1,0\text{, and -1}). \end{align}
By inspecting the "box" in review exercise 1c, it should be fairly straightforward to write down the wavefunctions for each of these:
\begin{align} &^1\Delta (M_L=2); |\pi_1\alpha\pi_1\beta| \ &^1\Delta (M_L=-2); |\pi_{-1}\alpha\pi_{-1}\beta| \ &^1\sum (M_L=0); \dfrac{1}{\sqrt{2}}\left( |\pi_1\beta\pi_{-1}\alpha| - |\pi_1\alpha \pi_{-1}\beta | \right) \ &^3\sum (M_L=0, M_S=1); |\pi_1\alpha\pi_{-1}\alpha| \ &^3\sum (M_L=0, M_S=0); \dfrac{1}{\sqrt{2}} \left( |\pi_1\beta\pi_{-1}\alpha | + |\pi_1\alpha\pi_{-1}\beta |\right) \ &^3\sum (M_L=0, M_S=-1); |\pi_1\beta\pi_{-1}\beta| \end{align}
Q5
We can conveniently couple another s electron to the states generated from the $1s^12s^1$ configuration in exercise 2:
\begin{align} &^3S(L=0, S=1) \text{ with } 3s^1 \left(L=0, S=\dfrac{1}{2}\right) \text{ giving: } \ &L=0, S=\dfrac{3}{2}, \dfrac{1}{2}; \text{ }^4S \text{ (4 states) and } ^2S \text{ (2 states)}. \ &^1S(L=0, S=0) \text{ with } 3s^1 \left(L=0, S=\dfrac{1}{2}\right) \text{ giving: } \ &L=0, S=\dfrac{1}{2}; \text{ }^2S \text{ (2 states)}. \end{align}
Constructing a "box" for this case would yield:
One can immediately identify the wavefunctions for two of the quartets (they are single entries):
\begin{align} &^4S(S=\dfrac{3}{2}, M_S=\dfrac{3}{2}): |1s\alpha 2s\alpha 3s\alpha | \ &^4S(S=\dfrac{3}{2}, M_S=-\dfrac{3}{2}): |1s\beta 2s\beta 3s\beta | \end{align}
Applying $S_- \text{ to } ^4S(S=\dfrac{3}{2}, M_S=\dfrac{3}{2})$ yields:
\begin{align} S_-^4S(S=\dfrac{3}{2}, M_S=\dfrac{3}{2}) &=& &\hbar \sqrt{\dfrac{3}{2}\left( \dfrac{3}{2} + 1\right) - \dfrac{3}{2}\left(\dfrac{3}{2} - 1\right)} ^4S(S=\dfrac{3}{2}, M_S=\dfrac{1}{2}) \ &=& &\hbar\sqrt{3} ^4S(S=\dfrac{3}{2}, M_S=\dfrac{1}{2}) \ S_- |1s\alpha 2s\alpha 3s\alpha | &=& &\hbar \left( |1s\beta 2s\alpha 3s\alpha | + | 1s\alpha 2s\beta 3s \alpha | + |1s\alpha 2s\alpha 3s\beta | \right) \ \text{So, } ^4S(S=\dfrac{3}{2}, M_S=\dfrac{1}{2}) &=& &\dfrac{1}{\sqrt{3}} \left( |1s\beta 2s\alpha 3s\alpha | + |1s\alpha 2s\beta 3s\alpha| + |1s\alpha 2s\alpha 3s\beta |\right) \end{align}
Applying $S_+ \text{ to } ^4S(S=\dfrac{3}{2}, M_S=-\dfrac{3}{2})$ yields:
\begin{align} S_+^4S(S=\dfrac{3}{2}, M_S=-\dfrac{3}{2}) &=& &\hbar \sqrt{\dfrac{3}{2}\left( \dfrac{3}{2} + 1\right) - \left(-\dfrac{3}{2}\right)\left(-\dfrac{3}{2} + 1\right)} ^4S(S=\dfrac{3}{2}, M_S=-\dfrac{1}{2}) \ &=& &\hbar\sqrt{3} ^4S(S=\dfrac{3}{2}, M_S=-\dfrac{1}{2}) \ S_+ |1s\beta 2s\beta 3s\beta | &=& &\hbar \left( |1s\alpha 2s\beta 3s\beta | + | 1s\beta 2s\alpha 3s \beta | + |1s\beta 2s\beta 3s\alpha | \right) \ \text{So, } ^4S(S=\dfrac{3}{2}, M_S=-\dfrac{1}{2}) &=& &\dfrac{1}{\sqrt{3}} \left( |1s\alpha 2s\beta 3s\beta | + |1s\beta 2s\alpha 3s\beta| + |1s\beta 2s\beta 3s\alpha |\right) \end{align}
It only remains to construct the doublet states which are orthogonal to these quartet states. Recall that the orthogonal combinations for systems having three equal components (for example, when symmetry adapting the three $sp^2 \text{ hybrids in } C_{2v} \text{ or } D_{3h}$ symmetry) give results of + + +, +2 - -, and 0 + -. Notice that the quartets are the + + + combinations and therefore the doublets can be recognized as:
\begin{align} ^2S(S=\dfrac{1}{2}, M_S = \dfrac{1}{2}) &=& &\dfrac{1}{\sqrt{6}} \left( |1s\beta 2s\alpha 3s\alpha | + |1s\alpha 2s\beta 3s\alpha | - 2|1s\alpha 2s\alpha 3s\beta |\right) \ ^2S(S=\dfrac{1}{2}, M_S = \dfrac{1}{2}) &=& &\dfrac{1}{\sqrt{2}} \left( |1s\beta 2s\alpha 3s\alpha | - |1s\alpha 2s\beta 3s\alpha | + 0|1s\alpha 2s\alpha 3s\beta |\right) \ ^2S(S=\dfrac{1}{2}, M_S = -\dfrac{1}{2}) &=& &\dfrac{1}{\sqrt{6}} \left( |1s\alpha 2s\beta 3s\beta | + |1s\beta 2s\alpha 3s\beta | - 2|1s\beta 2s\beta 3s\alpha |\right) \ ^2S(S=\dfrac{1}{2}, M_S = -\dfrac{1}{2}) &=& &\dfrac{1}{\sqrt{2}} \left( |1s\alpha 2s\beta 3s\beta | - |1s\beta 2s\alpha 3s\beta | + 0|1s\beta 2s\beta 3s\alpha |\right) \end{align}
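The orthonormality of the + + +, +2 - -, and 0 + - combinations is easy to confirm numerically. A hedged sketch, treating the three determinants of a given $M_S$ block as orthonormal basis vectors:

```python
import numpy as np

# Coefficient vectors over the three determinants, with the normalizations
# used above: quartet (+ + +)/sqrt(3), doublets (+ + -2)/sqrt(6) and (+ - 0)/sqrt(2).
quartet   = np.array([1, 1, 1]) / np.sqrt(3)
doublet_a = np.array([1, 1, -2]) / np.sqrt(6)
doublet_b = np.array([1, -1, 0]) / np.sqrt(2)

# Gram matrix of all pairwise overlaps; an identity matrix means the
# three combinations form an orthonormal set.
vecs = (quartet, doublet_a, doublet_b)
G = np.array([[v @ w for w in vecs] for v in vecs])
print(np.round(G, 12))   # identity matrix
```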
Q6
As illustrated in this chapter, a $p^2$ configuration (two equivalent p electrons) gives rise to the term symbols: $^3P, ^1D\text{, and } ^1S.$ Coupling an additional electron $(3d^1)$ to this $p^2$ configuration will give the desired $1s^22s^22p^23d^1$ term symbols:
\begin{align} &^3P(L=1, S=1) \text{ with } ^2D\left(L=2, S=\dfrac{1}{2}\right) \text{ generates: } \ &L=3,2,1 \text{, and } S=\dfrac{3}{2}, \dfrac{1}{2} \text{ with term symbols } ^4F, ^2F, ^4D, ^2D, ^4P\text{, and } ^2P, \ &^1D(L=2, S=0) \text{ with } ^2D\left(L=2, S=\dfrac{1}{2}\right) \text{ generates: } \ &L=4,3,2,1,0 \text{, and } S=\dfrac{1}{2} \text{ with term symbols } ^2G, ^2F, ^2D, ^2P\text{, and } ^2S, \ &^1S(L=0, S=0) \text{ with } ^2D\left(L=2,S=\dfrac{1}{2}\right) \text{ generates: } \ &L=2\text{ and } S=\dfrac{1}{2} \text{ with term symbol } ^2D. \end{align}
Q7
The notation used for the Slater-Condon rules will be the same as used in the text:
(a.) zero (spin orbital) difference;
\begin{align} \langle |F + G| \rangle &=& &\sum\limits_i \langle \phi_i |f| \phi_i \rangle + \sum\limits_{i>j} \left( \langle \phi_i \phi_j |g| \phi_i\phi_j \rangle - \langle \phi_i\phi_j |g|\phi_j\phi_i \rangle \right) \ &=& &\sum\limits_i f_{ii} + \sum\limits_{i>j} \left( g_{ijij} - g_{ijji} \right) \end{align}
(b.) one (spin orbital) difference $(\phi_p \neq \phi_{p^{\prime}})$;
\begin{align} \langle |F + G | \rangle &=& &\langle \phi_{p} | f | \phi_{p^{\prime}} \rangle + \sum\limits_{j\ne p;p^{\prime}} \left( \langle \phi_p\phi_j |g| \phi_{p^{\prime}}\phi_j \rangle - \langle\phi_p\phi_j |g| \phi_j \phi_{p^{\prime}} \rangle \right) \ &=& &f_{pp^{\prime}} + \sum\limits_{j\ne p; p^{\prime}} \left( g_{pjp^{\prime}j} - g_{pjjp^{\prime}} \right) \end{align}
(c.) two (spin orbital) differences $(\phi_p \ne \phi_{p^{\prime}} \text{ and } \phi_q \ne \phi_{q^{\prime}});$
\begin{align} \langle |F + G |\rangle &=& &\langle \phi_p\phi_q |g| \phi_{p^{\prime}} \phi_{q^{\prime}}\rangle - \langle \phi_p \phi_q |g| \phi_{q^{\prime}} \phi_{p^{\prime}} \rangle \ &=& &g_{pqp^{\prime}q^{\prime}} - g_{pqq^{\prime}p^{\prime}} \end{align}
(d.) three or more (spin orbital) differences;
$\langle |F + G|\rangle = 0 \nonumber$
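The case analysis above reduces to counting how many spin-orbitals the two determinants do not share. A hedged sketch of that bookkeeping (the spin-orbital labels such as "1sa" are illustrative, not notation from the text):

```python
# Match a determinant pair to Slater-Condon case (a)-(d) by counting
# how many spin-orbitals of det1 are absent from det2.
def slater_condon_case(det1, det2):
    """Return which Slater-Condon rule applies to <det1|F+G|det2>."""
    ndiff = len(set(det1) - set(det2))
    if ndiff == 0:
        return "a: sum_i f_ii + sum_{i>j} (g_ijij - g_ijji)"
    if ndiff == 1:
        return "b: f_pp' + sum_j (g_pjp'j - g_pjjp')"
    if ndiff == 2:
        return "c: g_pqp'q' - g_pqq'p'"
    return "d: zero"

print(slater_condon_case(("1sa", "2sa"), ("1sa", "2sa"))[0])  # a: identical determinants
print(slater_condon_case(("1sa", "2sa"), ("1sa", "3sa"))[0])  # b: one difference
print(slater_condon_case(("1sa", "2sa"), ("3sa", "4sa"))[0])  # c: two differences
```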
i. $^3P(M_L=1, M_S=1) = |p_1\alpha p_0\alpha |$
$\langle |p_1\alpha p_0\alpha |H|p_1\alpha p_0\alpha|\rangle =$
Using the Slater-Condon rule (a.) above (SCa):
$\langle |10|H|10|\rangle = f_{11} + f_{00} + g_{1010} - g_{1001} \nonumber$
ii. $^3P(M_L=0, M_S=0) = \dfrac{1}{\sqrt{2}} \left( |p_1\alpha p_{-1}\beta | + |p_1\beta p_{-1}\alpha | \right)$
\begin{align} \langle ^3P(M_L=0, M_S=0)|H|^3P(M_L=0,M_S=0)\rangle \ &=& &\dfrac{1}{2} \left( \langle |p_1 \alpha p_{-1}\beta|H|p_1\alpha p_{-1}\beta |\rangle + \langle |p_1 \alpha p_{-1}\beta|H|p_1\beta p_{-1}\alpha |\rangle + \langle |p_1 \beta p_{-1}\alpha|H|p_1\alpha p_{-1}\beta |\rangle + \langle |p_1 \beta p_{-1}\alpha|H|p_1\beta p_{-1}\alpha |\rangle \right) \end{align}
Evaluating each matrix element gives:
\begin{align} \langle |p_1\alpha p_{-1}\beta |H|p_1\alpha p_{-1}\beta |\rangle &=& &f_{1\alpha 1\alpha} + f_{-1\beta-1\beta} + g_{1\alpha -1\beta 1\alpha - 1\beta} - g_{1\alpha - 1\beta - 1\beta 1\alpha} (SCa) \ &=& &f_{11} + f_{-1-1} + g_{1-11-1} - 0 \ \langle |p_1\alpha p_{-1}\beta |H|p_1\beta p_{-1}\alpha |\rangle &=& &g_{1\alpha -1\beta 1\beta -1\alpha} - g_{1\alpha - 1\beta -1\alpha 1\beta} (SCc) \ &=& & 0 -g_{1-1-11} \ \langle | p_1\beta p_{-1}\alpha |H| p_1\alpha p_{-1}\beta | \rangle &=& & g_{1\beta -1\alpha 1\alpha -1\beta} - g_{1\beta -1\alpha -1\beta 1\alpha} (SCc) \ &=& & 0 - g_{1-1-11} \ \langle | p_1\beta p_{-1} \alpha |H| p_1\beta p_{-1} \alpha | \rangle &=& & f_{1\beta 1\beta} + f_{-1\alpha -1\alpha} + g_{1\beta -1\alpha 1\beta - 1\alpha} - g_{1\beta - 1\alpha - 1\alpha 1\beta} (SCa) \ &=& & f_{11} + f_{-1-1} + g_{1-11-1} - 0 \end{align}
Substitution of these expressions gives:
\begin{align} \langle ^3P(M_L=0, M_S=0)|H|^3P(M_L=0,M_S=0) \rangle &=& &\dfrac{1}{2} \left( f_{11} + f_{-1-1} + g_{1-11-1} - g_{1-1-11} - g_{1-1-11} + f_{11} + f_{-1-1} + g_{1-11-1} \right) \ &=& & f_{11} + f_{-1-1} + g_{1-11-1} - g_{1-1-11} \end{align}
iii. $^1S (M_L=0, M_S=0); \dfrac{1}{\sqrt{3}}\left( |p_0\alpha p_0\beta | - |p_1\alpha p_{-1}\beta | - |p_{-1}\alpha p_1\beta | \right)$
\begin{align} \langle ^1S(M_L=0, M_S=0)|H|^1 S(M_L=0, M_S=0) \rangle &=& &\dfrac{1}{3} \bigg( \langle |p_0\alpha p_0\beta |H| p_0\alpha p_0\beta | \rangle - \langle |p_0\alpha p_0\beta |H| p_1\alpha p_{-1}\beta | \rangle \ &-& &\langle |p_0\alpha p_0 \beta |H| p_{-1}\alpha p_1\beta | \rangle - \langle |p_1\alpha p_{-1}\beta |H| p_0\alpha p_0 \beta | \rangle \ &+& & \langle |p_1\alpha p_{-1}\beta |H|p_1\alpha p_{-1}\beta | \rangle + \langle |p_1\alpha p_{-1}\beta |H| p_{-1}\alpha p_1\beta | \rangle \ &-& &\langle |p_{-1}\alpha p_1\beta |H|p_0\alpha p_0\beta | \rangle + \langle | p_{-1}\alpha p_1\beta|H|p_1\alpha p_{-1}\beta | \rangle \ &+& &\langle | p_{-1}\alpha p_1\beta|H|p_{-1}\alpha p_1\beta | \rangle \bigg) \end{align}
Evaluating each matrix element gives:
\begin{align} \langle | p_0\alpha p_0\beta |H|p_0\alpha p_0\beta | \rangle &=& &f_{0\alpha 0\alpha} + f_{0\beta 0 \beta} + g_{0\alpha 0\beta 0\alpha 0 \beta} - g_{0\alpha 0 \beta 0 \beta 0 \alpha} (SCa) \ &=& & f_{00} + f_{00} + g_{0000} - 0 \ \langle | p_0 \alpha p_0\beta |H|p_1\alpha p_{-1}\beta | \rangle &=& & \langle | p_1\alpha p_{-1}\beta |H| p_0\alpha p_0\beta | \rangle \ &=& &g_{0\alpha 0\beta 1\alpha -1\beta} - g_{0\alpha 0 \beta -1\beta 1\alpha} (SCc) \ &=& &g_{001-1} -0 \ \langle | p_0\alpha p_0\beta |H|p_{-1}\alpha p_1\beta | \rangle &=& &\langle | p_{-1}\alpha p_1\beta |H|p_0 \alpha p_0\beta | \rangle \ &=& &g_{0\alpha 0\beta - 1\alpha 1\beta} -g_{0\alpha 0\beta 1\beta -1\alpha} (SCc) \ &=& &g_{00-11} - 0 \ \langle | p_1\alpha p_{-1}\beta |H|p_1 \alpha p_{-1}\beta | \rangle &=& & f_{1\alpha 1\alpha} + f_{-1\beta -1\beta} + g_{1\alpha -1\beta 1\alpha -1\beta} - g_{1\alpha - 1\beta -1\beta 1\alpha} (SCa) \ &=& &f_{11} + f_{-1-1} + g_{1-11-1} - 0 \ \langle | p_1\alpha p_{-1}\beta |H|p_{-1}\alpha p_1\beta | \rangle &=& & \langle |p_{-1}\alpha p_1\beta |H|p_1 \alpha p_{-1}\beta | \rangle \ &=& & g_{1\alpha -1\beta -1\alpha 1\beta} - g_{1\alpha -1\beta 1\beta -1\alpha} (SCc) \ &=& &g_{1-1-11} - 0 \ \langle |p_{-1}\alpha p_1\beta |H|p_{-1}\alpha p_1\beta | \rangle &=& &f_{-1\alpha -1\alpha} + f_{1\beta 1\beta} + g_{-1\alpha 1\beta - 1\alpha 1\beta} - g_{-1\alpha 1\beta 1\beta - 1\alpha} (SCa) \ &=& &f_{-1-1} + f_{11} + g_{-11-11} - 0 \end{align}
Substitution of these expressions gives:
\begin{align} \langle ^1S(M_L=0,M_S=0) & & &|H| ^1S(M_L=0,M_S=0)\rangle \ & = & & \dfrac{1}{3} \left( f_{00} + f_{00} + g_{0000} - g_{001-1} - g_{00-11} - g_{001-1} + f_{11} + f_{-1-1} + g_{1-11-1} + g_{1-1-11} - g_{00-11} + g_{1-1-11} + f_{-1-1} + f_{11} + g_{-11-11} \right) \ &=& &\dfrac{1}{3} \left( 2f_{00} + 2f_{11} + 2f_{-1-1} + g_{0000} - 4g_{001-1} + 2g_{1-11-1} + 2g_{1-1-11} \right) \end{align}
iv. $^1D(M_L=0, M_S=0) = \dfrac{1}{\sqrt{6}} \left( 2|p_0\alpha p_0\beta | + |p_1\alpha p_{-1}\beta | + |p_{-1}\alpha p_1\beta | \right)$
Evaluating $\langle ^1D(M_L=0, M_S=0)|H|^1D(M_L=0, M_S=0)\rangle$ we note that all the Slater Condon matrix elements generated are the same as those evaluated in part iii. (the sign for the wavefunction components and the multiplicative factor of two for one of the components, however, are different).
\begin{align} &\langle ^1D \left( M_L=0, M_S=0 \right) |H|^1D \left(M_L=0, M_S=0\right) \rangle \ &= \dfrac{1}{6} \left( 4f_{00} + 4f_{00} + 4g_{0000} + 2g_{001-1} + 2g_{00-11} + 2g_{001-1} + f_{11} + f_{-1-1} + g_{1-11-1} + g_{1-1-11} + 2g_{00-11} + g_{1-1-11} + f_{-1-1} + f_{11} + g_{-11-11} \right) \ & = \dfrac{1}{6} \left( 8f_{00} + 2f_{11} + 2f_{-1-1} + 4g_{0000} + 8g_{001-1} + 2g_{1-11-1} + 2g_{1-1-11} \right) \end{align}
Q8
i. $^1\Delta \left( M_L=2, M_S=0 \right) = |\pi_1\alpha\pi_1\beta |$
\begin{align} \langle ^1\Delta \left( M_L=2, M_S=0 \right) |H|^1\Delta \left( M_L=2, M_S=0\right) \rangle & & \ & = & \langle |\pi_1\alpha\pi_1\beta |H| \pi_1\alpha\pi_1\beta |\rangle \ & = & f_{1\alpha 1\alpha} + f_{1\beta 1\beta} + g_{1\alpha 1\beta 1\alpha 1\beta} - g_{1\alpha 1\beta 1\beta 1\alpha} (SCa) \ & = & f_{11} + f_{11} + g_{1111} - 0 \ & = & 2f_{11} + g_{1111} \end{align}
ii. $^1\sum \left(M_L=0, M_S=0 \right) = \dfrac{1}{\sqrt{2}} \left( |\pi_1\alpha\pi_{-1}\beta | - |\pi_1\beta \pi_{-1}\alpha |\right)$
\begin{align} \langle ^1\sum \left( M_L=0, M_S=0 \right) |H| ^1\sum \left(M_L=0, M_S=0 \right) \rangle & & \ & = & \dfrac{1}{2} \left( \langle |\pi_1\alpha \pi_{-1}\beta |H| \pi_1\alpha \pi_{-1}\beta | \rangle - \langle |\pi_1 \alpha \pi_{-1}\beta |H| \pi_1\beta \pi_{-1}\alpha | \rangle - \langle |\pi_1\beta\pi_{-1}\alpha |H| \pi_1\alpha\pi_{-1}\beta |\rangle + \langle |\pi_1\beta\pi_{-1}\alpha |H|\pi_1\beta \pi_{-1}\alpha |\rangle \right) \end{align}
Evaluating each matrix element gives:
\begin{align} \langle |\pi_1\alpha \pi_{-1} \beta |H| \pi_1\alpha\pi_{-1}\beta |\rangle &=& &f_{1\alpha 1\alpha} + f_{-1\beta -1\beta} + g_{1\alpha -1\beta 1\alpha -1\beta} - g_{1\alpha -1\beta -1\beta 1\alpha} (SCa) \ &=& &f_{11} + f_{-1-1} + g_{1-11-1} - 0 \ \langle |\pi_1\alpha \pi_{-1} \beta |H| \pi_1\beta\pi_{-1}\alpha |\rangle &=& &g_{1\alpha -1\beta 1\beta -1\alpha} - g_{1\alpha -1\beta -1\alpha 1\beta} (SCc) \ &=& &0 -g_{1-1-11} \ \langle |\pi_1\beta \pi_{-1} \alpha |H| \pi_1\alpha\pi_{-1}\beta |\rangle &=& &0 - g_{1-1-11} \ \langle |\pi_1\beta \pi_{-1}\alpha |H| \pi_1\beta\pi_{-1}\alpha |\rangle &=& &f_{1\beta 1\beta} + f_{-1\alpha -1\alpha} + g_{1\beta -1\alpha 1\beta -1\alpha} - g_{1\beta -1\alpha -1\alpha 1\beta} (SCa) \ &=& &f_{11} + f_{-1-1} + g_{1-11-1} - 0 \end{align}
Substitution of these expressions gives:
\begin{align} \langle ^1\sum \left( M_L=0,M_S=0\right) |H|^1\sum \left(M_L=0,M_S=0\right) \rangle &=& &\dfrac{1}{2} \left( f_{11} + f_{-1-1} + g_{1-11-1} + g_{1-1-11} + g_{1-1-11} + f_{11} + f_{-1-1} + g_{1-11-1} \right) \ &=& & f_{11} + f_{-1-1} + g_{1-11-1} + g_{1-1-11} \end{align}
iii. $^3\sum \left(M_L=0, M_S=0 \right) = \dfrac{1}{\sqrt{2}} \left( |\pi_1\alpha\pi_{-1}\beta | + |\pi_1\beta\pi_{-1}\alpha | \right)$
\begin{align} \langle ^3\sum \left( M_L=0,M_S=0 \right) |H| ^3\sum\left( M_L=0, M_S=0\right) \rangle \ &=& & \dfrac{1}{2}\left( \langle |\pi_1\alpha\pi_{-1}\beta |H|\pi_1\alpha\pi_{-1}\beta |\rangle + \langle |\pi_1\alpha\pi_{-1}\beta |H| \pi_1\beta\pi_{-1}\alpha |\rangle + \langle |\pi_1\beta\pi_{-1}\alpha |H|\pi_1\alpha\pi_{-1}\beta |\rangle + \langle |\pi_1\beta \pi_{-1}\alpha |H|\pi_1\beta\pi_{-1}\alpha |\rangle \right)\end{align}
Evaluating each matrix element gives:
\begin{align} \langle |\pi_1\alpha\pi_{-1}\beta |H|\pi_1\alpha\pi_{-1}\beta |\rangle &=& & f_{1\alpha 1\alpha} + f_{-1\beta -1\beta} + g_{1\alpha -1\beta 1\alpha -1\beta} - g_{1\alpha -1\beta -1\beta 1\alpha} (SCa) \ &=& &f_{11} + f_{-1-1} + g_{1-11-1} - 0 \ \langle |\pi_1\alpha \pi_{-1}\beta |H| \pi_1\beta\pi_{-1}\alpha |\rangle &=& & g_{1\alpha -1\beta 1\beta -1\alpha} - g_{1\alpha -1\beta -1\alpha 1\beta} (SCc) \ &=& &0 - g_{1-1-11} \ \langle |\pi_1\beta\pi_{-1}\alpha |H|\pi_1\beta\pi_{-1}\alpha |\rangle &=& &f_{1\beta 1\beta} + f_{-1\alpha -1\alpha} + g_{1\beta -1\alpha 1\beta -1\alpha} - g_{1\beta -1\alpha -1\alpha 1\beta} (SCa) \ &=& & f_{11} + f_{-1-1} + g_{1-11-1} - 0 \end{align}
Substitution of these expressions gives:
\begin{align} \langle ^3\sum \left( M_L=0, M_S=0 \right) |H|^3 \sum \left( M_L=0, M_S=0 \right) \rangle \ &=& & \dfrac{1}{2} \left( f_{11} + f_{-1-1} + g_{1-11-1} - g_{1-1-11} - g_{1-1-11} + f_{11} + f_{-1-1} + g_{1-11-1} \right) \ &=& & f_{11} + f_{-1-1} + g_{1-11-1} - g_{1-1-11} \end{align}
Q1
a. All the Slater determinants have in common the $|1s\alpha 1s\beta 2s\alpha 2s\beta |$ "core" and hence this component will not be written out explicitly for each case.
\begin{align} ^3P(M_L=1,M_S=1) &=& &|p_1\alpha p_0\alpha | \ &=& &|\dfrac{1}{\sqrt{2}}(p_x + ip_y) \alpha (p_z)\alpha | \ &=& &\dfrac{1}{\sqrt{2}}(| p_x\alpha p_z\alpha | + i|p_y\alpha p_z\alpha |) \ ^3P(M_L=0,M_S=1) &=& &|p_1\alpha p_{-1}\alpha | \ &=& &|\dfrac{1}{\sqrt{2}}(p_x + ip_y)\alpha \dfrac{1}{\sqrt{2}} (p_x - ip_y)\alpha | \ &=& & \dfrac{1}{2}( |p_x\alpha p_x\alpha | - i|p_x\alpha p_y\alpha | + i|p_y\alpha p_x\alpha | + | p_y\alpha p_y\alpha |) \ &=& & \dfrac{1}{2}(0 - i|p_x\alpha p_y\alpha | - i|p_x\alpha p_y\alpha | + 0) \ &=& &-i|p_x\alpha p_y\alpha | \ ^3P(M_L=-1,M_S=1) &=& &|p_{-1}\alpha p_0\alpha | \ &=& &| \dfrac{1}{\sqrt{2}}(p_x - ip_y)\alpha (p_z)\alpha | \ &=& &\dfrac{1}{\sqrt{2}} (| p_x\alpha p_z \alpha | - i|p_y\alpha p_z\alpha |) \end{align}
As you can see, the symmetries of each of these states cannot be labeled with a single irreducible representation of the $C_{2v}$ point group. For example, $|p_x\alpha p_z\alpha |$ is xz $(B_1)$ and $|p_y\alpha p_z\alpha |$ is yz $(B_2)$; the $^3P(M_L,M_S=1)$ functions are degenerate for the C atom, and any combination of these functions would also be degenerate. Therefore we can choose new combinations which can be labeled with "pure" $C_{2v}$ point group labels.
\begin{align} ^3P(xz,M_S=1) &=& & |p_x\alpha p_z\alpha | \ &=& &\dfrac{1}{\sqrt{2}} (^3P(M_L=1,M_S=1) + ^3P(M_L=-1,M_S=1)) = ^3B_1 \ ^3P(yx,M_S=1) &=& & |p_y\alpha p_x\alpha | \ &=& &\dfrac{1}{i}(^3P(M_L=0,M_S=1)) = ^3A_2 \ ^3P(yz,M_S=1) &=& & |p_y\alpha p_z\alpha |\ &=& & \dfrac{1}{i\sqrt{2}} (^3P(M_L=1,M_S=1) - ^3P(M_L=-1,M_S=1)) = ^3B_2 \end{align}
Now we can do likewise for the five degenerate $^1D$ states:
\begin{align} ^1D(M_L=2,M_S=0) &=& & |p_1\alpha p_1\beta | \ &=& &| \dfrac{1}{\sqrt{2}} (p_x + ip_y)\alpha \dfrac{1}{\sqrt{2}} (p_x + ip_y) \beta | \ &=& &\dfrac{1}{2} (|p_x\alpha p_x\beta | + i|p_x\alpha p_y\beta | + i|p_y\alpha p_x\beta | - |p_y\alpha p_y\beta | ) \ ^1D(M_L=-2,M_S=0) &=& &|p_{-1}\alpha p_{-1}\beta | \ &=& &| \dfrac{1}{\sqrt{2}} (p_x - ip_y)\alpha \dfrac{1}{\sqrt{2}}(p_x - ip_y)\beta | \ &=& &\dfrac{1}{2}( |p_x\alpha p_x\beta | - i|p_x\alpha p_y\beta | - i|p_y\alpha p_x\beta | - |p_y\alpha p_y\beta |) \ ^1D(M_L=1,M_S=0) &=& &\dfrac{1}{\sqrt{2}} ( |p_0\alpha p_1\beta | - |p_0\beta p_1\alpha |) \ &=& &\dfrac{1}{\sqrt{2}}\left( |(p_z)\alpha\dfrac{1}{\sqrt{2}} (p_x + ip_y)\beta | - |(p_z)\beta \dfrac{1}{\sqrt{2}} (p_x + ip_y) \alpha |\right) \ &=& &\dfrac{1}{2} (|p_z\alpha p_x\beta | + i|p_z\alpha p_y\beta | - |p_z\beta p_x\alpha | - i|p_z\beta p_y\alpha |) \ ^1D(M_L=-1,M_S=0) &=& &\dfrac{1}{\sqrt{2}} (|p_0\alpha p_{-1}\beta | - |p_0\beta p_{-1}\alpha |) \ &=& &\dfrac{1}{\sqrt{2}} \left( |(p_z)\alpha \dfrac{1}{\sqrt{2}} (p_x - ip_y) \beta | - |(p_z)\beta \dfrac{1}{\sqrt{2}}(p_x - ip_y)\alpha |\right) \ &=& &\dfrac{1}{2} (| p_z\alpha p_x \beta | - i|p_z\alpha p_y\beta | - |p_z\beta p_x\alpha | + i|p_z\beta p_y\alpha |) \ ^1D(M_L=0,M_S=0) &=& &\dfrac{1}{\sqrt{6}} (2| p_0\alpha p_0\beta | + |p_1\alpha p_{-1}\beta | + |p_{-1}\alpha p_1\beta |) \ &=& &\dfrac{1}{\sqrt{6}} \left( 2|p_z\alpha p_z\beta | + |\dfrac{1}{\sqrt{2}} (p_x + ip_y)\alpha \dfrac{1}{\sqrt{2}}(p_x - ip_y)\beta | + |\dfrac{1}{\sqrt{2}}(p_x - ip_y)\alpha\dfrac{1}{\sqrt{2}}(p_x + ip_y)\beta |\right) \ &=& &\dfrac{1}{\sqrt{6}} \left( 2|p_z\alpha p_z\beta | + \dfrac{1}{2}(|p_x\alpha p_x\beta | - i|p_x\alpha p_y\beta | + i|p_y\alpha p_x\beta | + |p_y\alpha p_y \beta |) + \dfrac{1}{2} (|p_x\alpha p_x\beta | + i|p_x\alpha p_y\beta | - i|p_y\alpha p_x\beta | + |p_y\alpha p_y\beta |)\right) \ &=& &\dfrac{1}{\sqrt{6}} (2|p_z\alpha p_z\beta | + |p_x\alpha p_x\beta | + |p_y\alpha p_y\beta |) \end{align}
Analogous to the three $^3P$ states, we can also choose combinations of the five degenerate $^1D$ states which can be labeled with "pure" $C_{2v}$ point group labels:
\begin{align} ^1D(xx-yy,M_S=0) &=& &|p_x\alpha p_x\beta | - |p_y\alpha p_y \beta | \ &=& &(^1D(M_L=2,M_S=0) + ^1D(M_L=-2,M_S=0)) = ^1A_1 \ ^1D(yx,M_S=0) &=& &|p_x\alpha p_y\beta | + |p_y\alpha p_x\beta | \ &=& &\dfrac{1}{i} (^1D(M_L=2,M_S=0) - ^1D(M_L=-2,M_S=0)) = ^1A_2 \ ^1D(zx,M_S=0) &=& &|p_z\alpha p_x\beta | - |p_z\beta p_x\alpha | \ &=& &(^1D(M_L=1,M_S=0) + ^1D(M_L=-1,M_S=0)) = ^1B_1 \ ^1D(zy,M_S=0) &=& &|p_z\alpha p_y\beta | - |p_z\beta p_y\alpha | \ &=& &\dfrac{1}{i}(^1D(M_L=1,M_S=0) - ^1D(M_L=-1,M_S=0)) = ^1B_2 \ ^1D(2zz + xx + yy,M_S=0) &=& &\dfrac{1}{\sqrt{6}} (2|p_z\alpha p_z\beta | + |p_x\alpha p_x\beta | + |p_y\alpha p_y\beta |) \ &=& & ^1D(M_L=0,M_S=0) = ^1A_1 \end{align}
The only state left is the $^1S$:
\begin{align} ^1S(M_L=0,M_S=0) &=& &\dfrac{1}{\sqrt{3}} (| p_0\alpha p_0\beta | - |p_1\alpha p_{-1}\beta | - |p_{-1}\alpha p_1\beta | ) \ &=& &\dfrac{1}{\sqrt{3}} \left( |p_z\alpha p_z\beta | - |\dfrac{1}{\sqrt{2}} (p_x + ip_y)\alpha \dfrac{1}{\sqrt{2}} (p_x - ip_y)\beta | - |\dfrac{1}{\sqrt{2}} (p_x - ip_y) \alpha \dfrac{1}{\sqrt{2}} (p_x + ip_y) \beta |\right) \ &=& &\dfrac{1}{\sqrt{3}}\left( |p_z\alpha p_z\beta | - \dfrac{1}{2}(|p_x\alpha p_x\beta | - i|p_x\alpha p_y\beta | + i|p_y\alpha p_x\beta | + |p_y\alpha p_y\beta |) - \dfrac{1}{2}(|p_x\alpha p_x\beta | + i|p_x\alpha p_y\beta | - i|p_y\alpha p_x\beta | + |p_y\alpha p_y\beta |) \right) \ &=& & \dfrac{1}{\sqrt{3}} ( |p_z\alpha p_z\beta | - |p_x\alpha p_x\beta | - |p_y\alpha p_y\beta | ) \end{align}
Each of the components of this state is $A_1$ and hence this state has $A_1$ symmetry.
b. Forming SALC-AOs from the C and H atomic orbitals would generate the following:
c.
d. - e. It is necessary to determine how the wave functions found in part a. correlate with states of the $CH_2$ molecule:
\begin{align} ^3P(xz,M_S=1); ^3B_1 &=& &\sigma_g^2s^2p_xp_z \longrightarrow \sigma^2n^2p_\pi\sigma^{\text{*}} \ ^3P(yx,M_S=1); ^3A_2 &=& &\sigma_g^2s^2p_xp_y \longrightarrow \sigma^2n^2p_\pi\sigma \ ^3P(yz,M_S=1); ^3B_2 &=& &\sigma_g^2s^2p_yp_z \longrightarrow \sigma^2n^2\sigma\sigma^{\text{*}} \ & & &^1D(xx-yy,M_S=0); ^1A_1 \longrightarrow \sigma^2n^2p_\pi^2 - \sigma^2n^2\sigma^2 \ & & &^1D(yx,M_S=0); ^1A_2 \longrightarrow \sigma^2n^2\sigma p_\pi \ & & &^1D(zx,M_S=0); ^1B_1 \longrightarrow \sigma^2n^2\sigma^{\text{*}}p_\pi \ & & &^1D(zy,M_S=0); ^1B_2 \longrightarrow \sigma^2n^2\sigma^{\text{*}}\sigma \ & & &^1D(2zz+xx+yy,M_S=0); ^1A_1 \longrightarrow 2\sigma^2n^2\sigma^{\text{*}2} + \sigma^2n^2p_{\pi}^2 + \sigma^2n^2\sigma^2 \end{align}
Note, the C + $H_2$ state to which the lowest $^1A_1 (\sigma^2n^2\sigma^2) CH_2$ state decomposes would be $\sigma_g^2s^2p_y^2.$ This state $(\sigma_g^2s^2p_y^2)$ cannot be obtained by a simple combination of the $^1D$ states. In order to obtain pure $\sigma_g^2s^2p_y^2$ it is necessary to combine $^1S \text{ with } ^1D.$ For example,
$\sigma_g^2s^2p_y^2 = \dfrac{1}{6} \left( \sqrt{6} ^1D(0,0) - 2\sqrt{3} ^1S(0,0)\right) - \dfrac{1}{2} \left( ^1D(2,0) + ^1D(-2,0) \right). \nonumber$
This indicates that a CCD must be drawn with a barrier near the $^1D$ asymptote to represent the fact that $^1A_1 \text{ CH}_2$ correlates with a mixture of $^1D \text{ and } ^1S$ carbon plus hydrogen. The C + $H_2$ state to which the lowest $^3B_1 (\sigma^2n\sigma^2p_\pi )CH_2$ state decomposes would be $\sigma_g^2sp_y^2p_x.$
f. If you follow the $^3B_1$ component of the $C(^3P) + H_2$ surface (since it leads to the ground-state products) to $^3B_1 \text{ CH}_2$ you must go over an approximately 20 kcal/mol barrier. Of course this path produces $^3B_1 \text{ CH}_2$ product. Distortions away from $C_{2v}$ symmetry, for example to $C_s$ symmetry, would make the $a_1 \text{ and } b_2$ orbitals identical in symmetry (a'). The $b_1$ orbitals would maintain their identity, going to a'' symmetry. Thus $^3B_1 \text{ and } ^3A_2$ (both $^3A'' \text{ in } C_s$ symmetry and odd under reflection through the molecular plane) can mix. The system could thus follow the $^3A_2$ component of the $C(^3P) + H_2$ surface to the place (marked with a circle on the CCD) where it crosses the $^3B_1$ surface, upon which it then moves and continues to products. As a result, the barrier would be lowered.
You can estimate when the barrier occurs (late or early) using thermodynamic information for the reaction (i.e. slopes and asymptotic energies). For example, an early barrier would be obtained for a reaction with the characteristics:
This relation between the position of the barrier (early or late) and the reaction endothermicity or exothermicity is known as the Hammond postulate. Note that the $C(^3P_1) + H_2 \rightarrow CH_2$ reaction of interest here (see the CCD) has an early barrier.
g. The reaction $C(^1D) + H_2 \rightarrow CH_2 (^1A_1)$ should have no symmetry barrier (this can be recognized by following the $^1A_1 (C(^1D) + H_2)$ reactants down to the $^1A_1 (CH_2)$ products on the CCD).
Q2
This problem in many respects is analogous to problem 1.
The $^3B_1$ surface certainly requires a two-configuration CI wavefunction; the $\sigma^2\sigma^2np_x (\pi^2p_y^2sp_x)$ and the $\sigma^2n^2p_x\sigma^{\text{*}} (\pi^2s^2p_xp_z).$ The $^1A_1$ surface could use the $\sigma^2\sigma^2n^2 (\pi^2s^2p_y^2)$ only, but once again there is no combination of $^1D$ determinants which gives purely this configuration $(\pi^2s^2p_y^2)$. Thus mixing of both $^1D \text{ and } ^1S$ determinants is necessary to yield the required $\pi^2s^2p_y^2$ configuration. Hence even the $^1A_1$ surface would require a multiconfigurational wavefunction for adequate description.
Q3
a.
\begin{align} \langle \sigma_g | \sigma_g \rangle &=& & \langle \dfrac{1s_A + 1s_B}{\sqrt{2+2S}} \bigg| \dfrac{1s_A + 1s_B}{\sqrt{2+2S}} \rangle \ &=& & \dfrac{1}{2+2S} \left( \langle 1s_A|1s_A\rangle + \langle 1s_A|1s_B \rangle + \langle 1s_B|1s_A \rangle + \langle 1s_B|1s_B \rangle \right) \ &=& & (0.285)((1.000) + (0.753) + (0.753) + (1.000)) \ &=& &0.999 \approx 1 \end{align}
\begin{align} \langle \sigma_g | \sigma_u \rangle &=& &\langle \dfrac{1s_A + 1s_B}{\sqrt{2+2S}} \bigg| \dfrac{1s_A - 1s_B}{\sqrt{2-2S}} \rangle \ &=& &\dfrac{\langle 1s_A|1s_A \rangle - \langle 1s_A|1s_B \rangle + \langle 1s_B|1s_A \rangle - \langle 1s_B|1s_B \rangle}{\sqrt{2+2S}\sqrt{2-2S}} \ &=& & (1.434)(0.534)((1.000) - (0.753) + (0.753) - (1.000)) \ &=& &0 \ \langle \sigma_u | \sigma_u \rangle &=& & \langle \dfrac{1s_A-1s_B}{\sqrt{2-2S}} \bigg| \dfrac{1s_A - 1s_B}{\sqrt{2-2S}}\rangle \ &=& & \dfrac{\langle 1s_A|1s_A \rangle - \langle 1s_A|1s_B \rangle - \langle 1s_B|1s_A \rangle + \langle 1s_B|1s_B \rangle}{2-2S} \ &=& &(2.024)((1.000) - (0.753) - (0.753) + (1.000)) \ &=& & 1.000 \end{align}
b.
\begin{align} \langle \sigma_g|h|\sigma_g \rangle &=& &\langle \dfrac{1s_A + 1s_B}{\sqrt{2+2S}} \bigg| h \bigg| \dfrac{1s_A + 1s_B}{\sqrt{2+2S}} \rangle \ &=& & \dfrac{\langle 1s_A |h|1s_A \rangle + \langle 1s_A|h|1s_B \rangle + \langle 1s_B|h|1s_A \rangle + \langle 1s_B|h|1s_B \rangle}{2+2S} \ &=& & (0.285)((-1.110) + (-0.968) + (-0.968) + (-1.110)) \ &=& & -1.184 \ \langle \sigma_u |h| \sigma_u \rangle &=& & \langle \dfrac{1s_A - 1s_B}{\sqrt{2-2S}} \bigg| h \bigg| \dfrac{1s_A - 1s_B}{\sqrt{2-2S}} \rangle \ &=& &\dfrac{\langle 1s_A |h|1s_A \rangle - \langle 1s_A|h|1s_B \rangle - \langle 1s_B|h|1s_A \rangle + \langle 1s_B|h|1s_B \rangle}{2-2S} \ &=& &(2.024)((-1.110) + (0.968) + (0.968) + (-1.110)) \ &=& &-0.575 \ \langle \sigma_g\sigma_g |\sigma_g\sigma_g \rangle &\equiv & & \langle gg|gg\rangle = \dfrac{1}{2+2S}\dfrac{1}{2+2S} \ & & & \langle (1s_A + 1s_B) (1s_A + 1s_B) | (1s_A + 1s_B) (1s_A + 1s_B) \rangle \ &=& & \dfrac{1}{(2+2S)^2} ( \langle AA|AA \rangle + \langle AA|AB \rangle + \langle AA|BA \rangle + \langle AA|BB \rangle + \langle AB|AA \rangle + \langle AB|AB \rangle + \langle AB|BA \rangle + \langle AB|BB \rangle + \ & & & \langle BA|AA \rangle + \langle BA|AB \rangle + \langle BA|BA \rangle + \langle BA|BB \rangle + \langle BB|AA \rangle + \langle BB|AB \rangle + \langle BB|BA \rangle + \langle BB|BB \rangle ) \ &=& & (0.081)((0.625) + (0.426) + (0.426) + (0.323) + (0.426) + (0.504) + (0.323) + (0.426) +\ & & & (0.426) + (0.323) + (0.504) + (0.426) + (0.323) + (0.426) + (0.426) + (0.625)) \ &=& &0.564 \ \langle uu|uu \rangle &=& & \dfrac{1}{2-2S}\dfrac{1}{2-2S} \ & & &\langle (1s_A - 1s_B) (1s_A - 1s_B)|(1s_A - 1s_B)(1s_A - 1s_B) \rangle \ &=& &\dfrac{1}{(2-2S)^2} ( \langle AA|AA \rangle - \langle AA|AB \rangle - \langle AA|BA \rangle + \langle AA|BB \rangle - \langle AB|AA \rangle + \langle AB|AB \rangle + \langle AB|BA \rangle - \langle AB|BB \rangle - \ & & & \langle BA|AA \rangle + \langle BA|AB \rangle + \langle BA|BA \rangle - \langle BA|BB \rangle + \langle BB|AA \rangle - \langle BB|AB \rangle - \langle BB|BA \rangle + \langle BB|BB \rangle ) \ &=& &(4.100)((0.625) - (0.426) - (0.426) + (0.323) - (0.426) + (0.504) + (0.323) - (0.426) - \ & & &(0.426) + (0.323) + (0.504) - (0.426) + (0.323) - (0.426) - (0.426) + (0.625)) \ &=& & 0.582 \end{align}
\begin{align} \langle gg|uu \rangle &=& & \dfrac{1}{2+2S}\dfrac{1}{2-2S} \ & & & \langle (1s_A + 1s_B)(1s_A + 1s_B)|(1s_A - 1s_B)(1s_A-1s_B) \rangle \ &=& & \dfrac{1}{2+2S}\dfrac{1}{2-2S} \ & & & ( \langle AA|AA \rangle - \langle AA|AB \rangle - \langle AA|BA \rangle + \langle AA|BB \rangle + \langle AB|AA \rangle - \langle AB|AB \rangle - \langle AB|BA \rangle + \langle AB|BB \rangle + \ & & &+ \langle BA|AA \rangle - \langle BA|AB \rangle - \langle BA|BA \rangle + \langle BA|BB \rangle + \langle BB|AA \rangle - \langle BB|AB \rangle - \langle BB|BA \rangle + \langle BB|BB \rangle )\ &=& & (0.285)(2.024)((0.625) - (0.426) - (0.426) + (0.323) + (0.426) - (0.504) - (0.323) + (0.426) + \ & & &+ (0.426) - (0.323) - (0.504) + (0.426) + (0.323) - (0.426) - (0.426) + (0.625)) \ &=& &0.140 \ \langle gu|gu \rangle &=& & \dfrac{1}{2+2S}\dfrac{1}{2-2S} \ & & &\langle (1s_A + 1s_B)(1s_A - 1s_B)|(1s_A + 1s_B)(1s_A - 1s_B) \rangle \ &=& & \dfrac{1}{2+2S}\dfrac{1}{2-2S} \ & & &( \langle AA|AA \rangle - \langle AA|AB \rangle + \langle AA|BA \rangle - \langle AA|BB \rangle - \langle AB|AA \rangle + \langle AB|AB \rangle - \langle AB|BA \rangle + \langle AB|BB \rangle \ & & &+ \langle BA|AA \rangle - \langle BA|AB \rangle + \langle BA|BA \rangle - \langle BA|BB \rangle - \langle BB|AA \rangle + \langle BB|AB \rangle - \langle BB|BA \rangle + \langle BB|BB \rangle ) \ &=& & (0.285)(2.024)((0.625) - (0.426) + (0.426) - (0.323) - (0.426) + (0.504) - (0.323) + (0.426) + \ & & & (0.426) - (0.323) + (0.504) - (0.426) - (0.323) + (0.426) - (0.426) + (0.625)) \ &=& & 0.557 \end{align}
Note, that $\langle gg|gu \rangle = \langle uu|ug \rangle = 0$ from symmetry considerations, but this can be easily verified. For example,
\begin{align} \langle gg|gu \rangle &=& & \dfrac{1}{\sqrt{2+2S}}\dfrac{1}{\sqrt{(2-2S)^3}} \ & & &\langle (1s_A + 1s_B)(1s_A + 1s_B) \bigg| (1s_A + 1s_B)(1s_A - 1s_B) \rangle \ &=& & \dfrac{1}{\sqrt{2+2S}}\dfrac{1}{\sqrt{(2-2S)^3}} \ & & & ( \langle AA | AA \rangle - \langle AA | AB \rangle + \langle AA | BA \rangle - \langle AA | BB \rangle + \ & & & \langle AB | AA \rangle - \langle AB | AB \rangle + \langle AB | BA \rangle - \langle AB | BB \rangle + \ & & & \langle BA | AA \rangle - \langle BA | AB \rangle + \langle BA | BA \rangle - \langle BA | BB \rangle + \ & & & \langle BB | AA \rangle - \langle BB | AB \rangle + \langle BB | BA \rangle - \langle BB | BB \rangle ) \ &=& & (0.534)(2.880)((0.625) - (0.426) + (0.426) - (0.323) + (0.426) - (0.504) + (0.323) - (0.426) + \ & & & (0.426) - (0.323) + (0.504) - (0.426) + (0.323) - (0.426) + (0.426) - (0.625)) \ &=& & 0.000 \end{align}
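These sixteen-term expansions are tedious by hand; a short Python sketch (helper names are illustrative; the AO two-electron integral values $\langle AA|AA\rangle = 0.625$, $\langle AA|AB\rangle = 0.426$, $\langle AA|BB\rangle = \langle AB|BA\rangle = 0.323$, and $\langle AB|AB\rangle = 0.504$ are those used above) reproduces the MO integrals, including the vanishing $\langle gg|gu\rangle$:

```python
# Sketch: brute-force expansion of the sigma_g/sigma_u two-electron integrals
# over the 16 AO products (AO integral values taken from the worked numbers).
from itertools import product

S = 0.753  # <1s_A|1s_B> overlap

def ao_eri(p, q, r, t):
    """AO two-electron integral <pq|rt>, classified by its index pattern."""
    n_b = (p + q + r + t).count("B")
    if n_b in (0, 4):
        return 0.625   # <AA|AA> = <BB|BB>
    if n_b in (1, 3):
        return 0.426   # e.g. <AA|AB>
    # two B's: <AB|AB>-type (same orbital pair on each side) vs <AA|BB>-type
    return 0.504 if (p == r and q == t) else 0.323

cg = {"A": 1.0, "B": 1.0}    # sigma_g AO coefficients (unnormalized)
cu = {"A": 1.0, "B": -1.0}   # sigma_u AO coefficients
ng = 1.0 / (2 + 2 * S)       # |N_g|^2
nu = 1.0 / (2 - 2 * S)       # |N_u|^2

def mo_eri(b1, b2, k1, k2, norm):
    """Expand an MO integral <b1 b2 | k1 k2> over the 16 AO terms."""
    total = 0.0
    for p, q, r, t in product("AB", repeat=4):
        total += b1[p] * b2[q] * k1[r] * k2[t] * ao_eri(p, q, r, t)
    return norm * total

gggg = mo_eri(cg, cg, cg, cg, ng * ng)
uuuu = mo_eri(cu, cu, cu, cu, nu * nu)
gguu = mo_eri(cg, cg, cu, cu, ng * nu)
gugu = mo_eri(cg, cu, cg, cu, ng * nu)
gggu = mo_eri(cg, cg, cg, cu, ng * (ng * nu) ** 0.5)  # zero by symmetry
```

The results agree with the hand values above to within the round-off carried in the text.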
c. We can now set up the configuration interaction Hamiltonian matrix. The elements are evaluated by using the Slater-Condon rules as shown in the text.
\begin{align} H_{11} &=& & \langle \sigma_g\alpha\sigma_g\beta |H| \sigma_g\alpha\sigma_g\beta \rangle \ &=& &2f_{\sigma_g\sigma_g} + g_{\sigma_g\sigma_g\sigma_g\sigma_g} \ &=& &2(-1.184) + 0.564 = -1.804 \ H_{21} &=& &H_{12} = \langle \sigma_g\alpha\sigma_g\beta |H| \sigma_u\alpha\sigma_u\beta\rangle \ &=& & g_{\sigma_g\sigma_g\sigma_u\sigma_u} \ &=& & 0.140 \ H_{22} &=& &\langle \sigma_u\alpha\sigma_u\beta |H| \sigma_u\alpha\sigma_u\beta\rangle \ &=& & 2f_{\sigma_u\sigma_u} + g_{\sigma_u\sigma_u\sigma_u\sigma_u} \ &=& & 2(-0.575) + 0.582 = -0.568 \end{align}
d. Solving this eigenvalue problem:
\begin{align} \begin{vmatrix} -1.804 - \varepsilon & 0.140 \ 0.140 & -0.568 - \varepsilon \end{vmatrix} &=& & 0 \ (-1.804 -\varepsilon)(-0.568 - \varepsilon) - (0.140)^2 &=& & 0 \ 1.025 + 1.804\varepsilon + 0.568\varepsilon + \varepsilon^2 - 0.0196 &=& &0 \ \varepsilon^2 + 2.372\varepsilon + 1.005 &=& &0 \ \varepsilon &=& & \dfrac{-2.372 \pm \sqrt{(2.372)^2 - 4(1)(1.005)}}{(2)(1)}\ &=& &-1.186 \pm 0.634 \ &=& & -1.820, \text{ and } -0.552. \end{align}
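A quick numerical check of these two roots (a stdlib-only sketch; the matrix elements are the ones assembled in part c.):

```python
# Sketch: solve the 2x2 secular problem via the quadratic formula.
import math

H11, H12, H22 = -1.804, 0.140, -0.568
tr = H11 + H22                 # -2.372
det = H11 * H22 - H12 ** 2     # ~1.005
disc = math.sqrt(tr ** 2 - 4 * det)
eps_low = (tr - disc) / 2      # lower root, ~ -1.820
eps_high = (tr + disc) / 2     # upper root, ~ -0.552
```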
Solving for the coefficients:
\begin{align}\begin{bmatrix} -1.804 - \varepsilon & 0.140 \ 0.140 & -0.568 - \varepsilon \end{bmatrix}\begin{bmatrix} C_1 \ C_2 \end{bmatrix} = \begin{bmatrix} 0 \ 0 \end{bmatrix} \end{align}
For the first eigenvalue this becomes:
\begin{align} & & &\begin{bmatrix} -1.804 + 1.820 & 0.140 \ 0.140 & -0.568 + 1.820 \end{bmatrix}\begin{bmatrix} C_1\C_2 \end{bmatrix} = \begin{bmatrix}0\0 \end{bmatrix} \ & & & \begin{bmatrix} 0.016 & 0.140 \ 0.140 & 1.252 \end{bmatrix}\begin{bmatrix} C_1 \ C_2 \end{bmatrix} = \begin{bmatrix} 0 \ 0 \end{bmatrix} \& & & (0.140)(C_1) + (1.252)(C_2) = 0 \ & & & C_1 = -8.943 C_2 \ & & & C_1^2 + C_2^2 = 1 \text{(from normalization)} \ & & & (-8.943 C_2)^2 + C_2^2 = 1 \ & & & 80.975 C_2^2 = 1 \ & & & C_2 = 0.111, C_1 = -0.994 \end{align}
For the second eigenvalue this becomes:
\begin{align} & & &\begin{bmatrix} -1.804 + 0.552 & 0.140 \ 0.140 & -0.568 + 0.552 \end{bmatrix} \begin{bmatrix} C_1 \ C_2 \end{bmatrix} = \begin{bmatrix} 0 \ 0 \end{bmatrix} \ & & & \begin{bmatrix} -1.252 & 0.140 \ 0.140 & -0.016 \end{bmatrix}\begin{bmatrix} C_1 \ C_2 \end{bmatrix} = \begin{bmatrix} 0 \ 0 \end{bmatrix} \ & & & (-1.252)(C_1) + (0.140)(C_2) = 0 \ & & & C_1 = 0.112 C_2 \ & & & C_1^2 + C_2^2 = 1\text{ (from normalization) } \ & & & (0.112 C_2)^2 + C_2^2 = 1 \ & & & 1.0125 C_2^2 = 1 \ & & & C_2 = 0.994, C_1 = 0.111 \end{align}
e. The polarized orbitals, $R_{\pm}$, are given by:
\begin{align} R_{\pm} &=& & \sigma_g \pm \sqrt{\dfrac{C_2}{C_1}}\sigma_u \ R_{\pm} &=& & \sigma_g \pm \sqrt{\dfrac{0.111}{0.994}}\sigma_u \ R_{\pm} &=& & \sigma_g \pm 0.334 \sigma_u \ R_{+} &=& & \sigma_g + 0.334 \sigma_u \text{ (left polarized) } \ R_{-} &=& & \sigma_g - 0.334 \sigma_u \text{ (right polarized) }\end{align}
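The coefficient and mixing arithmetic can be verified in a few lines (a sketch with illustrative names; the overall sign of an eigenvector is arbitrary, so absolute values are compared):

```python
# Sketch: ground-state CI coefficients and the polarized-orbital mixing
# parameter sqrt(C2/C1), from the lower root of the 2x2 problem above.
import math

H11, H12, H22 = -1.804, 0.140, -0.568
tr = H11 + H22
disc = math.sqrt(tr ** 2 - 4 * (H11 * H22 - H12 ** 2))
eps = (tr - disc) / 2                  # lower root, ~ -1.820
ratio = -(H11 - eps) / H12             # C2/C1 from (H11 - eps)C1 + H12 C2 = 0
c1 = 1.0 / math.sqrt(1 + ratio ** 2)   # normalized coefficients
c2 = ratio * c1
mix = math.sqrt(abs(c2 / c1))          # polarization mixing, ~0.334
```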
22.4.01: i. Exercises
Q1
Consider the molecules $CCl_4, CHCl_3, \text{ and } CH_2Cl_2.$
a. What kind of rotor are they (symmetric top, etc.; do not bother with oblate, near-prolate, etc.)?
b. Will they show pure rotational spectra?
c. Assume that ammonia shows a pure rotational spectrum. If the rotational constants are 9.44 and 6.20 $cm^{-1}$, use the energy expression:
$E = (A - B) K^2 + B J(J + 1), \nonumber$
to calculate the energies (in $cm^{-1}$) of the first three lines (i.e., those with lowest K, J quantum numbers for the absorbing level) in the absorption spectrum (ignoring higher order terms in the energy expression).
Q2
The molecule $^{11}B ^{16}O$ has a vibrational frequency $\omega_e = 1885 cm^{-1}$, a rotational constant $B_e = 1.78 cm^{-1}$, and a bond energy from the bottom of the potential well of $D_e^0 = 8.28 eV$.
Use integral atomic masses in the following:
a. In the approximation that the molecule can be represented as a Morse oscillator, calculate the bond length, $R_e$ in angstroms, the centrifugal distortion constant, $D_e \text{ in } cm^{-1}$, the anharmonicity constant, $\omega_eX_e \text{ in cm}^{-1}$, the zero-point corrected bond energy, $D_0^0$ in eV, the vibration-rotation interaction constant, $\alpha_e \text{ in cm}^{-1}$, and the vibrational-state-specific rotation constants, $B_0 \text{ and } B_1 \text{ in cm}^{-1}$. Use the vibration-rotation energy expression for a Morse oscillator:
\begin{align} E &=& & \hbar\omega_e \left(v + \dfrac{1}{2}\right) - \hbar \omega_eX_e\left( v + \dfrac{1}{2}\right)^2 + B_vJ(J + 1) - D_eJ^2(J + 1)^2 \text{, where} \ B_v &=& & B_e - \alpha_e\left(v + \dfrac{1}{2}\right), \alpha_e = \dfrac{-6B_e^2}{\hbar\omega_e} + \dfrac{6\sqrt{B_e^3\hbar\omega_eX_e}}{\hbar\omega_e}\text{, and } D_e = \dfrac{4B_e^3}{\hbar\omega_e^2} \end{align}
b. Will this molecule show a pure rotation spectrum? A vibration-rotation spectrum? Assuming that it does, what are the energies (in $cm^{-1}$) of the first three lines in the P branch $(\Delta v = +1, \Delta J = -1)$ of the fundamental absorption?
Q3
Consider trans-$C_2H_2Cl_2$. The vibrational normal modes of this molecule are shown below. What is the symmetry of the molecule? Label each of the modes with the appropriate irreducible representation.
22.4.02: ii. Problems
Q1
Suppose you are given two molecules (one is $CH_2$ and the other is $CH_2^-$ but you don't know which is which). Both molecules have $C_{2v}$ symmetry. The CH bond length of molecule I is 1.121 Å and for molecule II it is 1.076 Å. The bond angle of molecule I is 104$^\circ$ and for molecule II it is 136$^\circ$.
a. Using a coordinate system centered on the C nucleus as shown above (the molecule is in the YZ plane), compute the moment of inertia tensors of both species (I and II). The definitions of the components of the tensor are, for example:
\begin{align} I_{xx} &=& & \sum\limits_j m_j (y_j^2 + z_j^2) - M(Y^2 + Z^2) \ I_{xy} &=& & -\sum\limits_j m_jx_jy_j - MXY \end{align}
Here, $m_j$ is the mass of the nucleus j, M is the mass of the entire molecule, and X, Y, Z are the coordinates of the center of mass of the molecule. Use Å for distances and amu's for masses.
b. Find the principal moments of inertia $I_a < I_b < I_c$ for both compounds (in amu Å$^2$ units) and convert these values into rotational constants A, B, and C in $cm^{-1}$ using, for example,
$A = \dfrac{h}{8\pi^2cI_a}. \nonumber$
c. Both compounds are "nearly prolate tops" whose energy levels can be well approximated using the prolate top formula:
$E = (A - B) K^2 + B J(J + 1), \nonumber$
if one uses for the B constant the average of the B and C valued determined earlier. Thus, take B and C values (for each compound) and average them to produce an effective B constant to use in the above energy formula. Write down ( in $cm^{-1}$ units) the energy formula for both species. What values are J and K allowed to assume? What is the degeneracy of the level labeled by a given J and K?
d. Draw a picture of both compounds and show the directions of the three principal axes (a,b,c). On these pictures show the kind of rotational motion associated with the quantum number K.
e. Given that the electrical transition moment vector $\vec{\mu}$ connecting species I and II is directed along the Y axis, what are the selection rules for J and K?
f. Suppose you are given the photoelectron spectrum of $CH_2^-$. In this spectrum $J_j = J_i + 1$ transitions are called R-branch absorptions and those obeying $J_j = J_i - 1$ are called P-branch transitions. The spacing between lines can increase or decrease as functions of $J_i$ depending on the changes in the moment of inertia for the transition. If spacings grow closer and closer, we say that the spectrum exhibits a so-called band head formation. In the photoelectron spectrum that you are given, a rotational analysis of the vibrational lines in this spectrum is carried out and it is found that the R-branches show band head formation but the P-branches do not. Based on this information, determine which compound I or II is the $CH_2^-$ anion. Explain your reasoning.
g. At what J value (of the absorbing species) does the band head occur and at what rotational energy difference?
Q2
Let us consider the vibrational motions of benzene. To consider all of the vibrational modes of benzene we should attach a set of displacement vectors in the x, y, and z directions to each atom in the molecule (giving 36 vectors in all), and evaluate how these transform under the symmetry operations of $D_{6h}$. For this problem, however, let's only inquire about the C-H stretching vibrations.
a. Represent the C-H stretching motion on each C-H bond by an outward-directed vector on each H atom, designated $r_i$:
These vectors form the basis for a reducible representation. Evaluate the characters for this reducible representation under the symmetry operations of the $D_{6h}$ group.
b. Decompose the reducible representation you obtained in part a. into its irreducible components. These are the symmetries of the various C-H stretching vibrational modes in benzene.
c. The vibrational state with zero quanta in each of the vibrational modes (the ground vibrational state) of any molecule always belongs to the totally symmetric representation. For benzene the ground vibrational state is therefore of $A_{1g}$ symmetry. An excited state which has one quantum of vibrational excitation in a mode which is of a given symmetry species has the same symmetry species as the mode which is excited (because the vibrational wave functions are given as Hermite polynomials in the stretching coordinate). Thus, for example, excitation (by one quantum) of a vibrational mode of $A_{2u}$ symmetry gives a wavefunction of $A_{2u}$ symmetry. To resolve the question of what vibrational modes may be excited by the absorption of infrared radiation we must examine the x, y, and z components of the transition dipole integral for initial and final state wave functions $\psi_i \text{ and } \psi_f$, respectively:
$|\langle \psi_f |x| \psi_i \rangle |, |\langle \psi_f |y| \psi_i \rangle |, \text{ and } |\langle \psi_f |z| \psi_i \rangle |. \nonumber$
Using the information provided above, which of the C-H vibrational modes of benzene will be infrared-active, and how will the transitions be polarized? How many C-H vibrations will you observe in the infrared spectrum of benzene?
d. A vibrational mode will be active in Raman spectroscopy only if one of the following integrals is nonzero:
\begin{align} & & &| \langle \psi_f |xy| \psi_i \rangle |, | \langle \psi_f |xz| \psi_i \rangle |, | \langle \psi_f |yz| \psi_i \rangle |, \ & & &| \langle \psi_f |x^2| \psi_i \rangle |, | \langle \psi_f |y^2| \psi_i \rangle |, \text{ and } | \langle \psi_f |z^2| \psi_i \rangle | . \end{align}
Using the fact that the quadratic operators transform according to the irreducible representations:
\begin{align} \left( x^2 + y^2, z^2 \right) &\Rightarrow && A_{1g} \ \left( xz, yz \right) &\Rightarrow && E_{1g} \ \left( x^2 - y^2 ,xy \right) & \Rightarrow && E_{2g} \end{align}
Determine which of the C-H vibrational modes will be Raman-active.
e. Are there any of the C-H stretching vibrational motions of benzene which cannot be observed in either infrared or Raman spectroscopy? Give the irreducible representation label for these unobservable modes.
Q3
In treating the vibrational and rotational motion of a diatomic molecule having reduced mass $\mu$, equilibrium bond length $r_e$ and harmonic force constant k, we are faced with the following radial Schrödinger equation:
$\dfrac{-\hbar^2}{2\mu r^2}\dfrac{d}{dr} \left( r^2 \dfrac{dR}{dr} \right) + \dfrac{J(J + 1)\hbar^2}{2\mu r^2}R + \dfrac{1}{2}k(r-r_e)^2R = ER \nonumber$
a. Show that the substitution $R = \dfrac{F}{r}$ leads to:
$\dfrac{-\hbar^2}{2\mu}F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r^2}F + \dfrac{1}{2}k(r - r_e)^2F = EF \nonumber$
b. Taking $r = r_e + \Delta r\text{ and expanding } \dfrac{1}{(1 + x)^2} = 1 - 2x + 3x^2 + ...,$
show that the so-called vibration-rotation coupling term $\dfrac{J(J + 1)\hbar^2}{2\mu r^2}$ can be approximated (for small $\Delta r$) by $\dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \left( 1 - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right).$
Keep terms only through order $\Delta r^2.$
c. Show that, through terms of order $\Delta r^2$, the above equation for F can be rearranged to yield a new equation of the form:
$\dfrac{-\hbar^2}{2\mu} F'' + \dfrac{1}{2}\bar{k}( r - \bar{r}_e)^2 F = \left( E - \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} + \Delta \right) F \nonumber$
Give explicit expressions for how the modified force constant $\bar{k} \text{, bond length } \bar{r}_e$, and energy shift $\Delta$ depend on J, k, $r_e \text{ and } \mu .$
d. Given the above modified vibrational problem, we can now conclude that the modified energy levels are:
$E = \hbar \sqrt{\dfrac{\bar{k}}{\mu}}\left( v + \dfrac{1}{2}\right) + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} - \Delta . \nonumber$
Explain how the conclusion is "obvious", how for J = 0, k = $\bar{k} \text{, and }\Delta$ = 0, we obtain the usual harmonic oscillator energy levels. Describe how the energy levels would be expected to vary as J increases from zero and explain how these changes arise from changes in k and $r_e$. Explain in terms of physical forces involved in the rotating-vibrating molecule why $r_e$ and k are changed by rotation.
22.4.03: iii. Exercise Solutions
Q1
a. $CCl_4$ is tetrahedral and therefore is a spherical top. $CHCl_3 \text{ has } C_{3v}$ symmetry and therefore is a symmetric top. $CH_2Cl_2 \text{ has } C_{2v}$ symmetry and therefore is an asymmetric top.
b. $CCl_4$ has such high symmetry that it will not exhibit pure rotational spectra. $CHCl_3 \text{ and } CH_2Cl_2$ will both exhibit pure rotation spectra.
c. $NH_3$ is a symmetric top (oblate). Use the given energy expression,
$E = (A - B) K^2 + B J(J + 1), \nonumber$
A = 6.20 $cm^{-1}$, B = 9.44$cm^{-1}$, selection rules $\Delta J = \pm 1$, and the fact that $\vec{\mu_0}$ lies along the figure axis such that $\Delta K = 0$, to give:
$\Delta E = 2B (J + 1) = 2B, 4B\text{, and } 6B (J = 0, 1 \text{, and } 2). \nonumber$
So, lines are at 18.88 $cm^{-1}$, 37.76 $cm^{-1}$, and 56.64 $cm^{-1}$.
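These line positions follow directly from $\Delta E = 2B(J+1)$ with $\Delta K = 0$; a one-line check (B = 9.44 $cm^{-1}$ as above):

```python
# Sketch: pure-rotation lines of the oblate top, Delta J = +1, Delta K = 0,
# so Delta E = 2B(J + 1) with B = 9.44 cm^-1.
B = 9.44
lines = [2 * B * (J + 1) for J in (0, 1, 2)]  # first three absorptions
```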
Q2
To convert between $cm^{-1}$ and energy, multiply by hc = $(6.62618x10^{-34}\text{ J sec })(2.997925x10^{10}\text{ cm sec}^{-1}) = 1.9865x10^{-23} \text{ J cm }$.
Let all quantities in $cm^{-1}$ be designated with a bar,
e.g. $\bar{B_e}$ = 1.78$cm^{-1}$.
a. \begin{align} hc\bar{B_e} &=& &\dfrac{\hbar^2}{2\mu R_e^2} \ R_e &=& &\dfrac{\hbar}{\sqrt{2\mu hc\bar{B_e}}}, \ \mu &=& &\dfrac{m_Bm_O}{m_B + m_O} = \dfrac{(11)(16)}{(11 + 16)}x1.66056x10^{-27} \text{kg} \ &=& &1.0824x10^{-26} \text{kg} \ hc\bar{B_e} &=& &hc(1.78 \text{cm}^{-1}) = 3.5359x10^{-23} \text{J} \ R_e &=& & \dfrac{1.05459x10^{-34}\text{J sec}}{\sqrt{(2)(1.0824x10^{-26}\text{ kg})(3.5359x10^{-23}\text{ J})}} \ R_e &=& &1.205x10^{-10}\text{m} = 1.205 Å \ D_e &=& &\dfrac{4B_e^3}{\hbar \omega_e^2}, \bar{D_e} = \dfrac{4\bar{B_e}^3}{\bar{\omega_e}^2} = \dfrac{(4)(1.78 \text{cm}^{-1})^3}{(1885 \text{cm}^{-1})^2} = 6.35x10^{-6}\text{cm}^{-1} \ \omega_ex_e &=& &\dfrac{\hbar \omega_e^2}{4D_e^0}, \bar{\omega_ex_e} = \dfrac{\bar{\omega_e^2}}{4\bar{D_e^0}} = \dfrac{(1885 \text{cm}^{-1})^2}{(4)(66782.2 \text{cm}^{-1})} = 13.30 \text{cm}^{-1}.\ D_0^0 &=& &D_e^0 - \dfrac{\hbar\omega_e}{2} + \dfrac{\hbar\omega_e x_e}{4}, \bar{D_0^0} = \bar{D_e^0} - \dfrac{\omega_e}{2} + \dfrac{\bar{\omega_ex_e}}{4} \ &=& &66782.2 - \dfrac{1885}{2} + \dfrac{13.3}{4} \ &=& &65843.0 \text{cm}^{-1}= 8.16 eV. \ \alpha_e &=& & \dfrac{-6B_e^2}{\hbar\omega_e} + \dfrac{6\sqrt{B_e^3\hbar\omega_e x_e}}{\hbar\omega_e} \ \bar{\alpha_e} &=& & \dfrac{-6\bar{B_e^2}}{\bar{\omega_e}} + \dfrac{6\sqrt{\bar{B_e^3}\bar{\omega_e x_e}}}{\bar{\omega_e}} \ \bar{\alpha_e} &=& &\dfrac{(-6)(1.78)^2}{(1885)} + \dfrac{6\sqrt{(1.78)^3(13.3)}}{(1885)} = 0.0175 \text{cm}^{-1}. \ B_0 &=& & B_e - \alpha_e \left( \dfrac{1}{2} \right), \bar{B_0} = \bar{B_e} - \bar{\alpha_e}\left( \dfrac{1}{2} \right) = 1.78 - \dfrac{0.0175}{2} \ &=& & 1.77 \text{cm}^{-1} \ B_1 &=& & B_e - \alpha_e\left( \dfrac{3}{2} \right), \bar{B_1} = \bar{B_e} - \bar{\alpha_e} \left( \dfrac{3}{2} \right) = 1.78 - 0.0175(1.5) \ &=& & 1.75 \text{cm}^{-1} \end{align}
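All of the constants in part a. can be regenerated from $\omega_e$, $B_e$, and $D_e^0$ alone. The sketch below (conversion factors as used above; variable names are illustrative) reproduces the quoted values:

```python
# Sketch: spectroscopic constants of 11B16O from w_e, B_e and D_e^0
# (integral atomic masses, as the problem instructs).
import math

hbar = 1.05459e-34        # J sec
hc = 1.9865e-23           # J per cm^-1
amu = 1.66056e-27         # kg
ev_to_cm = 8065.5         # cm^-1 per eV

we, Be, De0_ev = 1885.0, 1.78, 8.28
De0 = De0_ev * ev_to_cm                          # ~66782 cm^-1

mu = (11 * 16) / (11 + 16) * amu                 # reduced mass, ~1.0824e-26 kg
Re = hbar / math.sqrt(2 * mu * hc * Be) * 1e10   # bond length, Angstroms
Dcent = 4 * Be ** 3 / we ** 2                    # centrifugal distortion, cm^-1
wexe = we ** 2 / (4 * De0)                       # anharmonicity, cm^-1
D00 = (De0 - we / 2 + wexe / 4) / ev_to_cm       # zero-point corrected D_0^0, eV
alpha = (-6 * Be ** 2 + 6 * math.sqrt(Be ** 3 * wexe)) / we  # alpha_e, cm^-1
B0 = Be - alpha / 2                              # ~1.77 cm^-1
B1 = Be - 1.5 * alpha                            # ~1.75 cm^-1
```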
b. The molecule has a dipole moment and so it should have a pure rotational spectrum. In addition, the dipole moment should change with R and so it should have a vibration rotation spectrum.
The first three lines correspond to J $=1 \rightarrow 0, J=2 \rightarrow 1, J=3 \rightarrow 2$
\begin{align} E &=& & \hbar \omega_e\left( v + \dfrac{1}{2}\right) - \hbar \omega_ex_e \left( v + \dfrac{1}{2}\right)^2 + B_vJ(J + 1) - D_eJ^2(J + 1)^2 \ \Delta E &=& &\hbar \omega_e - 2\hbar \omega_ex_e - B_0J(J + 1) + B_1J(J - 1) + 4D_eJ^3 \ \bar{\Delta E} &=& & \bar{\omega_e} - 2\bar{\omega_e x_e} - \bar{B_0}J(J + 1) + \bar{B_1}J(J - 1) + 4\bar{D_e}J^3 \ \bar{\Delta E} &=& & 1885 - 2(13.3) - 1.77J(J + 1) + 1.75J(J - 1) + 4(6.35x10^{-6})J^3 \ &=& & 1858.4 - 1.77J(J + 1) + 1.75J(J - 1) + 2.54x10^{-5}J^3 \ \bar{\Delta E}(J = 1) &=& & 1854.9 \text{cm}^{-1} \ \bar{\Delta E}(J = 2) &=& & 1851.3 \text{cm}^{-1} \ \bar{\Delta E}(J = 3) &=& & 1847.7 \text{cm}^{-1} \end{align}
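A short check of the three P-branch lines (the centrifugal term, ~$10^{-5}$ $cm^{-1}$, is negligible and dropped; constants from part a.):

```python
# Sketch: P-branch fundamental lines, v=0,J -> v=1,J-1, using the
# constants derived in part a. (B0 = 1.77, B1 = 1.75, all in cm^-1).
we, wexe, B0, B1 = 1885.0, 13.3, 1.77, 1.75

def p_line(J):
    # Delta E = w_e - 2 w_e x_e + B1 J(J-1) - B0 J(J+1)
    return we - 2 * wexe - B0 * J * (J + 1) + B1 * J * (J - 1)

lines = [round(p_line(J), 1) for J in (1, 2, 3)]
```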
Q3
The $C_2H_2Cl_2$ molecule has a $\sigma_h$ plane of symmetry (the plane of the molecule), a $C_2$ axis ($\bot$ to the plane), and inversion symmetry; these give $C_{2h}$ symmetry. Using $C_{2h}$ symmetry labels the modes can be labeled as follows: $\nu_1, \nu_2, \nu_3, \nu_4 \text{, and } \nu_5 \text{ are } a_g, \nu_6 \text{ and } \nu_7 \text{ are } a_u, \nu_8 \text{ is } b_g \text{, and } \nu_9, \nu_{10}, \nu_{11} \text{, and } \nu_{12} \text{ are } b_u.$
Q1
\begin{align} &\text{Molecule I} & & \text{Molecule II}& \ &R_{CH} = 1.121 Å & & R_{CH} = 1.076 Å & \ &\angle_{HCH} = 104^{\circ}& & \angle_{HCH} = 136^{\circ} & \ &y_H = R\text{ Sin}\left( \dfrac{\theta}{2} \right) = \pm 0.8834 & & y_H = \pm 0.9976 & \ &z_H = -R\text{ Cos}\left( \dfrac{\theta}{2} \right) = -0.6902 & & z_H = -0.4031 & \ &\text{Center of Mass (COM): clearly, X = Y = 0,} & & & \ & Z = \dfrac{12(0) - 2R\text{ Cos}\left(\dfrac{\theta}{2}\right)}{14} = -0.0986 & & Z = -0.0576 & \end{align}
a. \begin{align} I_{xx} &=& & \sum\limits_j m_j(y_j^2 + z_j^2) - M(Y^2 + Z^2) \ I_{xy} &=& & -\sum\limits_j m_jx_jy_j - MXY\end{align}
\begin{align} & I_{xx} = 2(1.121)^2 - 14(-0.0986)^2 & & I_{xx} = 2(1.076)^2 - 14(-0.0576)^2 & \ & = 2.377 & & = 2.269 & \ & I_{yy} = 2(0.6902)^2 - 14(-0.0986)^2 & & I_{yy} = 2(0.4031)^2 - 14(-0.0576)^2 & \ & = 0.8167 & & = 0.2786 & \ & I_{zz} = 2(0.8834)^2 & & I_{zz} = 2(0.9976)^2 & \ & =1.561 & & = 1.990 & \ & I_{xz} = I_{yz} = I_{xy} = 0 & & & \end{align}
b. Since the moment of inertia tensor is already diagonal, the principal moments of inertia have already been determined to be
\begin{align} & \left( I_a < I_b < I_c \right): & & &\ & I_{yy} < I_{zz} < I_{xx} & & I_{yy} < I_{zz} < I_{xx} & \ & 0.8167 < 1.561 < 2.377 & & 0.2786 < 1.990 < 2.269 & \end{align}
Using the formula: $A = \dfrac{h}{8\pi^2cI_a} = \dfrac{6.62x10^{-27}}{8\pi^2(3x10^{10})I_a}$
$A = \dfrac{16.84}{I_a} \text{cm}^{-1} \nonumber$
similarly, $B = \dfrac{16.84}{I_b} \text{ cm}^{-1} \text{, and C }= \dfrac{16.84}{I_c} \text{ cm}^{-1}.$
So,
\begin{align} & \text{Molecule I} & & \text{Molecule II} & \ & y \Rightarrow A = 20.62 & & y \Rightarrow A = 60.45 & \ & z \Rightarrow B = 10.79 & & z \Rightarrow B = 8.46 & \ & x \Rightarrow C = 7.08 & & x \Rightarrow C = 7.42 & \end{align}
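The whole chain from geometry to rotational constants can be verified numerically (integral masses $m_H = 1$, $m_C = 12$, and the 16.84 amu Å$^2$ $cm^{-1}$ conversion used above; the function name is illustrative):

```python
# Sketch: moments of inertia and A, B, C for the two C2v geometries.
import math

def rot_constants(r_ch, theta_deg):
    half = math.radians(theta_deg / 2)
    y, z = r_ch * math.sin(half), -r_ch * math.cos(half)  # H positions
    Z = 2 * z / 14                       # center of mass (C at the origin)
    Ixx = 2 * (y * y + z * z) - 14 * Z * Z
    Iyy = 2 * z * z - 14 * Z * Z
    Izz = 2 * y * y
    Ia, Ib, Ic = sorted([Ixx, Iyy, Izz])           # principal moments, amu A^2
    return tuple(16.84 / I for I in (Ia, Ib, Ic))  # A, B, C in cm^-1

A1, B1, C1 = rot_constants(1.121, 104.0)   # molecule I
A2, B2, C2 = rot_constants(1.076, 136.0)   # molecule II
```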
c. Averaging B + C:
\begin{align} & B = \dfrac{B + C}{2} = 8.94 & & B = \dfrac{B + C}{2} = 7.94 & \ & A - B = 11.68 & & A - B = 52.51 & \end{align}
Using the Prolate top formula
$E = (A - B)K^2 + B J(J + 1), \nonumber$
\begin{align} & \text{Molecule I} & & \text{Molecule II} & \ & E = 11.68K^2 + 8.94J(J + 1) & & E = 52.51K^2 + 7.94J(J + 1) & \end{align}
Levels: J = 0,1,2,... and K = 0,1, ... J
For a given level defined by J and K, the degeneracy (the number of $M_J$ states) is given by: (2J + 1)x \begin{Bmatrix} \text{1 for K = 0}\ \text{2 for K }\ne 0 \end{Bmatrix}
d.
Molecule I Molecule II
e. Since $\vec{\mu}$ is along Y, $\Delta$K = 0 since K describes rotation about the y axis.
Therefore $\Delta J = \pm 1$
f. Assume molecule I is $CH_2^-$ and molecule II is $CH_2$. Then, $\Delta E = E_{J_j}(CH_2) - E_{J_i}(CH_2^-)$, where:
$E(CH_2) = 52.51K^2 + 7.94J(J + 1)\text{, and } E(CH_2^-) = 11.68K^2 + 8.94J(J + 1)$
For R-branches: $J_j = J_i + 1, \Delta K = 0;$
\begin{align} \Delta E_R &=& &E_{J_j}(CH_2) - E_{J_i}(CH_2) \ &=& & 7.94(J_i + 1)(J_i + 1 + 1) - 8.94J_i(J_i + 1) \ &=& & (J_i + 1)\{7.94(J_i + 1 + 1) - 8.94J_i\} \ &=& &(J_i + 1)\{(7.94 - 8.94)J_i + 2(7.94)\} \ &=& &(J_i + 1)\{-J_i + 15.88 \} \end{align}
For P-branches: $J_j = J_i - 1, \Delta K = 0;$
\begin{align} \Delta E_P &=& &E_{J_j}(CH_2) - E_{J_i}(CH_2) \ &=& & 7.94(J_i - 1)(J_i - 1 + 1) - 8.94J_i(J_i + 1) \ &=& & J_i \{7.94(J_i - 1) - 8.94(J_i + 1)\} \ &=& &J_i \{(7.94 - 8.94)J_i - 7.94 - 8.94\} \ &=& & J_i\{-J_i - 16.88 \} \end{align}
This indicates that the R-branch lines occur at energies which grow closer and closer together as J increases (since the 15.88 - $J_i$ term will eventually cancel). The P-branch lines occur at energies which lie more and more negative (i.e. to the left of the origin). So, you can predict that if molecule I is $CH_2^-$ and molecule II is $CH_2$ then the R-branch has a band head and the P-branch does not. This is observed; therefore, our assumption was correct:
molecule I is $CH_2^-$ and molecule II is $CH_2$.
g. The band head occurs when $\dfrac{d(\Delta E_R )}{dJ} = 0$.
\begin{align} \dfrac{d(\Delta E_R)}{dJ} &=& \dfrac{d}{dJ} \left[ (J_i + 1) \{-J_i + 15.88 \} \right] = 0 \ &=& \dfrac{d}{dJ} \left( -J_i^2 - J_i + 15.88J_i + 15.88 \right) = 0 \ &=& -2J_i + 14.88 = 0 \ &\therefore & J_i = 7.44 \text{, so } J = 7 \text{ or } 8. \end{align}
At J = 7.44:
\begin{align} \Delta E_R &=& &(J + 1)\{-J + 15.88\} \ \Delta E_R &=& &(7.44 + 1)\{ -7.44 + 15.88 \} = (8.44)(8.44) = 71.2 \text{cm}^{-1} \text{above the origin.} \end{align}
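The band-head location can also be found by scanning $\Delta E_R (J) = (J+1)(15.88 - J)$, the expression derived above, directly:

```python
# Sketch: locate the R-branch band head from the derived line-position formula.
def de_r(j):
    return (j + 1) * (15.88 - j)   # Delta E_R(J_i), cm^-1

j_head = 14.88 / 2                 # vertex of -J^2 + 14.88 J + 15.88 -> J = 7.44
e_head = de_r(j_head)              # ~71.2 cm^-1 above the origin
spacings = [de_r(j + 1) - de_r(j) for j in range(10)]  # shrinking line gaps
```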
Q2
a.
b. The number of irreducible representations may be found by using the following formula:
\begin{align} n_{irrep} &=& &\dfrac{1}{g}\sum\limits_R \xi_{red}(R)\xi_{irrep}(R), \ \text{where g } &=& & \text{the order of the point group (24 for } D_{6h}). \ n_{A_{1g}} &=& &\dfrac{1}{24}\sum\limits_R\Gamma_{C-H}(R)\cdot A_{1g}(R) \ &=& & \dfrac{1}{24}\{(1)(6)(1) + (2)(0)(1) + (2)(0)(1) + (1)(0)(1) + ... \ & & & + (3)(0)(1) + (3)(2)(1) + (1)(0)(1) + (2)(0)(1) + ... \ & & & + (2)(0)(1) + (1)(6)(1) + (3)(2)(1) + (3)(0)(1) \} \ &=& &1 \ n_{A_{2g}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(1) + (2)(0)(1) + (1)(0)(1) +... \ & & & +(3)(0)(-1) + (3)(2)(-1) + (1)(0)(1) + (2)(0)(1) +...\ & & &+(2)(0)(1) + (1)(6)(1) + (3)(2)(-1) + (3)(0)(-1)\} \ &=& & 0 \ n_{B_{1g}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(-1) + (2)(0)(1) + (1)(0)(-1) +... \ & & & +(3)(0)(1) + (3)(2)(-1) + (1)(0)(1) + (2)(0)(-1) +...\ & & &+(2)(0)(1) + (1)(6)(-1) + (3)(2)(1) + (3)(0)(-1)\} \ &=& & 0 \ n_{B_{2g}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(-1) + (2)(0)(1) + (1)(0)(-1) +... \ & & & +(3)(0)(-1) + (3)(2)(1) + (1)(0)(1) + (2)(0)(-1) +...\ & & & + (2)(0)(1) + (1)(6)(-1) + (3)(2)(-1) + (3)(0)(1)\} \ &=& & 0 \ n_{E_{1g}} &=& & \dfrac{1}{24} \{(1)(6)(2) + (2)(0)(1) + (2)(0)(-1) + (1)(0)(-2) +... \ & & & +(3)(0)(0) + (3)(2)(0) + (1)(0)(2) + (2)(0)(1) +...\ & & &+(2)(0)(-1) + (1)(6)(-2) + (3)(2)(0) + (3)(0)(0)\} \ &=& & 0 \ n_{E_{2g}} &=& & \dfrac{1}{24} \{(1)(6)(2) + (2)(0)(-1) + (2)(0)(-1) + (1)(0)(2) +... \ & & & +(3)(0)(0) + (3)(2)(0) + (1)(0)(2) + (2)(0)(-1) +...\ & & &+(2)(0)(-1) + (1)(6)(2) + (3)(2)(0) + (3)(0)(0)\} \ &=& & 1 \ n_{A_{1u}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(1) + (2)(0)(1) + (1)(0)(1) +... \ & & & +(3)(0)(1) + (3)(2)(1) + (1)(0)(-1) + (2)(0)(-1) +...\ & & & + (2)(0)(-1) + (1)(6)(-1) + (3)(2)(-1) + (3)(0)(-1)\} \ &=& & 0 \ n_{A_{2u}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(1) + (2)(0)(1) + (1)(0)(1) +... \ & & & + (3)(0)(-1) + (3)(2)(-1) + (1)(0)(-1) + (2)(0)(-1) +...\ & & &+(2)(0)(-1) + (1)(6)(-1) + (3)(2)(1) + (3)(0)(1)\} \ &=& & 0 \ n_{B_{1u}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(-1) + (2)(0)(1) + (1)(0)(-1) +... \ & & & +(3)(0)(1) + (3)(2)(-1) + (1)(0)(-1) + (2)(0)(1) +...\ & & & + (2)(0)(-1) + (1)(6)(1) + (3)(2)(-1) + (3)(0)(1)\} \ &=& & 0 \ n_{B_{2u}} &=& & \dfrac{1}{24} \{(1)(6)(1) + (2)(0)(-1) + (2)(0)(1) + (1)(0)(-1) +... \ & & & +(3)(0)(-1) + (3)(2)(1) + (1)(0)(-1) + (2)(0)(1) +...\ & & &+(2)(0)(-1) + (1)(6)(1) + (3)(2)(1) + (3)(0)(-1)\} \ &=& & 1 \ n_{E_{1u}} &=& & \dfrac{1}{24} \{(1)(6)(2) + (2)(0)(1) + (2)(0)(-1) + (1)(0)(-2) +... \ & & & +(3)(0)(0) + (3)(2)(0) + (1)(0)(-2) + (2)(0)(-1) +...\ & & &+(2)(0)(1) + (1)(6)(2) + (3)(2)(0) + (3)(0)(0)\} \ &=& & 1 \ n_{E_{2u}} &=& & \dfrac{1}{24} \{(1)(6)(2) + (2)(0)(-1) + (2)(0)(-1) + (1)(0)(2) +... \ & & & +(3)(0)(0) + (3)(2)(0) + (1)(0)(-2) + (2)(0)(1) +...\ & & & + (2)(0)(1) + (1)(6)(-2) + (3)(2)(0) + (3)(0)(0)\} \ &=& & 0 \end{align}
We see that $\Gamma_{C-H} = A_{1g}\oplus E_{2g}\oplus B_{2u}\oplus E_{1u}$
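The reduction worked through above can be checked numerically. The sketch below assumes a standard $D_{6h}$ character table in the class ordering used in the text (E, 2C$_6$, 2C$_3$, C$_2$, 3C$_2'$, 3C$_2''$, i, 2S$_3$, 2S$_6$, $\sigma_h$, 3$\sigma_v$, 3$\sigma_d$); the class sizes and characters are supplied by hand and are an assumption of this example, not part of the original solution.

```python
# Reduction of the benzene C-H stretch representation in D6h.
# Assumed class ordering: E, 2C6, 2C3, C2, 3C2', 3C2'', i, 2S3, 2S6, sh, 3sv, 3sd.

N = [1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3]   # class sizes; group order g = 24
g = sum(N)

chars = {  # chi_irrep(R) for every irrep of D6h (standard table, assumed)
    "A1g": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "A2g": [1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1],
    "B1g": [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
    "B2g": [1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1],
    "E1g": [2, 1, -1, -2, 0, 0, 2, 1, -1, -2, 0, 0],
    "E2g": [2, -1, -1, 2, 0, 0, 2, -1, -1, 2, 0, 0],
    "A1u": [1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1],
    "A2u": [1, 1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1],
    "B1u": [1, -1, 1, -1, 1, -1, -1, 1, -1, 1, -1, 1],
    "B2u": [1, -1, 1, -1, -1, 1, -1, 1, -1, 1, 1, -1],
    "E1u": [2, 1, -1, -2, 0, 0, -2, -1, 1, 2, 0, 0],
    "E2u": [2, -1, -1, 2, 0, 0, -2, 1, -1, -2, 0, 0],
}

gamma_CH = [6, 0, 0, 0, 0, 2, 0, 0, 0, 6, 2, 0]  # reducible rep from the text

def n_irrep(label):
    """n = (1/g) * sum over classes of N(R) * chi_red(R) * chi_irrep(R)."""
    return sum(n * red * irr for n, red, irr in zip(N, gamma_CH, chars[label])) // g

print({lab: n_irrep(lab) for lab in chars if n_irrep(lab)})
# -> {'A1g': 1, 'E2g': 1, 'B2u': 1, 'E1u': 1}
```

The four surviving irreps reproduce $\Gamma_{C-H} = A_{1g}\oplus E_{2g}\oplus B_{2u}\oplus E_{1u}$, and their dimensions add up to the six C-H stretches (1 + 2 + 1 + 2 = 6).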
c. x and y $\Rightarrow$ $E_{1u} \text{, z } \Rightarrow A_{2u}$, so, the ground state $A_{1g}$ level can be excited to the degenerate $E_{1u}$ level by coupling through the x or y transition dipoles. Therefore $E_{1u}$ is infrared active and $\perp$ polarized.
d. $(x^2 + y^2, z^2 ) \Rightarrow A_{1g}, (xz, yz) \Rightarrow E_{1g}, (x^2 - y^2, xy) \Rightarrow E_{2g}$, so the ground state $A_{1g}$ level can be excited to the degenerate $E_{2g}$ level by coupling through the $x^2 - y^2$ or $xy$ components of the polarizability, or to the $A_{1g}$ level through the $x^2 + y^2$ or $z^2$ components. Therefore $A_{1g} \text{ and } E_{2g}$ are Raman active.
Q3
a.
\begin{align} \dfrac{d}{dr}\left( \dfrac{F}{r} \right) &= \dfrac{F'}{r} - \dfrac{F}{r^2} \\ r^2 \dfrac{d}{dr} \left( \dfrac{F}{r} \right) &= rF' - F \\ \dfrac{d}{dr} \left( r^2 \dfrac{d}{dr} \left( \dfrac{F}{r} \right) \right) &= F' - F' + rF'' = rF'' \end{align}
So,
$\dfrac{-\hbar^2}{2\mu r^2}\dfrac{d}{dr}\left( r^2 \dfrac{d}{dr}\left( \dfrac{F}{r} \right) \right) = \dfrac{-\hbar^2}{2\mu} \dfrac{F''}{r} \nonumber$
Rewriting the radial Schrödinger equation with the substitution: $R = \dfrac{F}{r}$ gives:
$\dfrac{-\hbar^2}{2\mu r^2}\dfrac{d}{dr} \left( r^2 \dfrac{d(Fr^{-1})}{dr} \right) + \dfrac{J(J + 1)\hbar^2}{2\mu r^2} \left(\dfrac{F}{r}\right) + \dfrac{1}{2} k(r - r_e)^2 \left( \dfrac{F}{r} \right) = E \left( \dfrac{F}{r} \right) \nonumber$
Using the above derived identity gives:
$\dfrac{-\hbar^2}{2\mu}\dfrac{F''}{r} + \dfrac{J(J + 1)\hbar^2}{2\mu r^2} \left( \dfrac{F}{r} \right) + \dfrac{1}{2}k(r - r_e)^2 \left( \dfrac{F}{r} \right) = E \left( \dfrac{F}{r} \right) \nonumber$
Cancelling out an $r^{-1}$:
$\dfrac{-\hbar^2}{2\mu} F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r^2}F + \dfrac{1}{2} k(r - r_e)^2 F = EF \nonumber$
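The operator identity used in part a., $(1/r^2)\,d/dr\left(r^2\, d/dr(F/r)\right) = F''/r$, can be spot-checked numerically for any smooth test function. The test function and finite-difference step sizes below are arbitrary choices made only for this illustration.

```python
import math

def F(r):
    # arbitrary smooth test function (chosen only for this check)
    return r**3 * math.exp(-r)

def ddr(f, r, h=1e-5):
    # central-difference first derivative
    return (f(r + h) - f(r - h)) / (2.0 * h)

def lhs(r):
    # (1/r^2) d/dr ( r^2 d/dr (F/r) )
    inner = lambda s: s**2 * ddr(lambda u: F(u) / u, s)
    return ddr(inner, r) / r**2

def rhs(r):
    # F''(r) / r, via a second central difference
    h = 1e-4
    Fpp = (F(r + h) - 2.0 * F(r) + F(r - h)) / h**2
    return Fpp / r

print(lhs(1.7), rhs(1.7))  # the two sides agree closely
```

Because the identity is exact, any disagreement seen here is purely finite-difference error.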
b. $\dfrac{1}{r^2} = \dfrac{1}{(r_e + \Delta r)^2} = \dfrac{1}{r_e^2 \left( 1 + \dfrac{\Delta r}{r_e} \right)^2} \approx \dfrac{1}{r_e^2}\left( 1 - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right) \nonumber$
So,
$\dfrac{J(J + 1)\hbar^2}{2\mu r^2} \approx \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\left( 1 - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right) \nonumber$
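The quality of the truncated expansion $1/(1+x)^2 \approx 1 - 2x + 3x^2$ can be gauged numerically; the displacements $x = \Delta r/r_e$ below are arbitrary illustrative values.

```python
def exact(x):
    return 1.0 / (1.0 + x) ** 2

def approx(x):
    # truncated expansion 1 - 2x + 3x^2
    return 1.0 - 2.0 * x + 3.0 * x ** 2

for x in (0.01, 0.05, 0.1):
    print(x, exact(x), approx(x), abs(exact(x) - approx(x)))
# the error falls off like ~4x^3, so the expansion is accurate for vibrational
# amplitudes that are small compared to the equilibrium bond length
```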
c. Using this substitution we now have:
$\dfrac{-\hbar^2}{2\mu} F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \left( 1 - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right) F + \dfrac{1}{2} k(r - r_e)^2 F = EF \nonumber$
Now, regroup the terms which are linear and quadratic in $\Delta r = r - r_e$:
$\dfrac{1}{2} k\Delta r^2 + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{3}{r_e^2}\Delta r^2 - \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{2}{r_e}\Delta r \nonumber$
$= \left( \dfrac{1}{2}k + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{3}{r_e^2} \right) \Delta r^2 - \left( \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \dfrac{2}{r_e} \right)\Delta r \nonumber$
Now, we must complete the square:
$a\Delta r^2 - b\Delta r = a \left( \Delta r - \dfrac{b}{2a} \right)^2 - \dfrac{b^2}{4a}. \nonumber$
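This is the ordinary completing-the-square identity; a quick numerical sanity check (with arbitrary values of $a$, $b$, and $\Delta r$) confirms it before it is applied to the J-dependent coefficients below.

```python
def lhs(a, b, x):
    # a*x^2 - b*x
    return a * x**2 - b * x

def rhs(a, b, x):
    # a*(x - b/(2a))^2 - b^2/(4a)
    return a * (x - b / (2.0 * a)) ** 2 - b**2 / (4.0 * a)

# arbitrary (a, b, x) triples for the check
for a, b, x in [(0.5, 0.2, 0.3), (7.0, -1.5, 0.01), (2.5, 4.0, -1.2)]:
    print(lhs(a, b, x), rhs(a, b, x))
```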
So,
$\left( \dfrac{1}{2}k + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{3}{r_e^2} \right)\left( \Delta r - \dfrac{\dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{1}{r_e}}{\dfrac{1}{2}k + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{3}{r_e^2}} \right)^2 - \dfrac{\left( \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \dfrac{1}{r_e} \right)^2}{\dfrac{1}{2}k + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}\dfrac{3}{r_e^2}} \nonumber$
Now, redefine the first factor as $\dfrac{1}{2}\bar{k}$, the second factor as $\left(r - \bar{r}_e\right)^2$, and the third term as $-\Delta$, giving:
$\dfrac{1}{2}\bar{k} \left(r - \bar{r_e}\right)^2 - \Delta \nonumber$
From:
\begin{align} &\dfrac{-\hbar^2}{2\mu}F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \left( 1 - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right) F + \dfrac{1}{2} k(r - r_e)^2 F = EF, \\ &\dfrac{-\hbar^2}{2\mu} F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} F + \left( \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} \left( - \dfrac{2\Delta r}{r_e} + \dfrac{3\Delta r^2}{r_e^2} \right) + \dfrac{1}{2}k\Delta r^2 \right) F = EF \end{align}
and making the above substitutions results in:
$\dfrac{-\hbar^2}{2\mu}F'' + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}F + \left( \dfrac{1}{2}\bar{k} \left( r - \bar{r_e}\right)^2 - \Delta \right) F = EF, \nonumber$
or,
$\dfrac{-\hbar^2}{2\mu}F'' + \dfrac{1}{2}\bar{k} (r - \bar{r}_e )^2 F = \left( E - \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} + \Delta \right) F. \nonumber$
d. Since the above is nothing but a harmonic oscillator differential equation in the displacement $r - \bar{r}_e$, with force constant $\bar{k}$ and equilibrium bond length $\bar{r}_e$, we know that:
\begin{align} &\dfrac{-\hbar^2}{2\mu}F'' + \dfrac{1}{2} \bar{k} (r - \bar{r}_e)^2 F = \varepsilon F \text{ has energy levels:} \\ &\varepsilon = \hbar \sqrt{\dfrac{\bar{k}}{\mu}}\left(v + \dfrac{1}{2}\right) \text{, v = 0, 1, 2, ...} \end{align}
So,
$E + \Delta - \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} = \varepsilon \nonumber$
tells us that:
$E = \hbar\sqrt{\dfrac{\bar{k}}{\mu}}\left( v + \dfrac{1}{2} \right) + \dfrac{J(J + 1)\hbar^2}{2\mu r_e^2} - \Delta . \nonumber$
As J increases, $\bar{r}_e$ increases because of the centrifugal force pushing the two atoms apart. On the other hand, $\bar{k}$ also increases, which indicates that the molecule finds it more difficult to stretch against the combined centrifugal and Hooke's law (spring) harmonic force field. The total energy level (labeled by J and v) will equal a rigid-rotor component $\dfrac{J(J + 1)\hbar^2}{2\mu r_e^2}$ plus a harmonic oscillator part $\hbar \sqrt{\dfrac{\bar{k}}{\mu}}\left( v + \dfrac{1}{2}\right)$ (whose force constant $\bar{k}$ increases with J).
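These trends can be made concrete with numbers. The sketch below uses assumed HCl-like parameters ($\mu$, $k$, and $r_e$ are illustrative values, not taken from the text) and evaluates $\bar{k} = 2a$, the bond-length shift $\bar{r}_e - r_e = b/2a$, and $\Delta = b^2/4a$ for a rotating level.

```python
hbar = 1.0546e-34   # J s
mu = 1.627e-27      # kg   (assumed, ~HCl reduced mass)
k = 516.0           # N/m  (assumed, ~HCl force constant)
re = 1.275e-10      # m    (assumed, ~HCl bond length)

def effective_parameters(J):
    A = J * (J + 1) * hbar**2 / (2.0 * mu * re**2)  # rotational energy scale
    a = 0.5 * k + 3.0 * A / re**2                    # coefficient of dr^2
    b = 2.0 * A / re                                 # coefficient of dr
    kbar = 2.0 * a              # effective force constant
    shift = b / (2.0 * a)       # r_e(bar) - r_e
    Delta = b**2 / (4.0 * a)    # constant energy lowering
    return kbar, shift, Delta

kbar, shift, Delta = effective_parameters(10)
print(f"J=10: kbar = {kbar:.1f} N/m (k = {k}), shift = {shift:.2e} m, Delta = {Delta:.2e} J")
# kbar > k (the well stiffens) and shift > 0 (the bond lengthens), exactly the
# trends described above; both effects vanish at J = 0
```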
Q1
Time dependent perturbation theory provides an expression for the radiative lifetime of an excited electronic state, given by $\tau_R$:
$\tau_R = \dfrac{3\hbar^4c^3}{4(E_i - E_f)^3|\mu_{fi}|^2}, \nonumber$
where i refers to the excited state, f refers to the lower state, and $\mu_{fi}$ is the transition dipole.
a. Evaluate the z-component of the transition dipole for the $2p_z \rightarrow$ 1s transition in a hydrogenic atom of nuclear charge Z, given:
$\psi_{1s} = \dfrac{1}{\sqrt{\pi}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}}e^{\dfrac{-Zr}{a_0}}\text{, and } \psi_{2p_z} = \dfrac{1}{4\sqrt{2\pi}}\left( \dfrac{Z}{a_0}\right)^{\dfrac{5}{2}} rCos\theta e^{\dfrac{-Zr}{2a_0}} \nonumber$
Express your answer in units of $ea_0.$
b. Use symmetry to demonstrate that the x- and y-components of $\mu_{fi}$ are zero, i.e.
$\langle 2p_z |ex|1s \rangle = \langle 2p_z |ey|1s\rangle = 0. \nonumber$
c. Calculate the radiative lifetime $\tau_R$ of a hydrogenlike atom in its $2p_z$ state. Use the relation $e^2 = \dfrac{\hbar^2}{m_ea_0}$ to simplify your results.
Q2
Consider a case in which the complete set of states $\{\phi_k\}$ for a Hamiltonian is known.
a. If the system is initially in the state m at time t=0 when a constant perturbation V is suddenly turned on, find the probability amplitudes $C_k^{(2)}(t) \text{ and } C_m^{(2)}(t)$, to second order in V, that describe the system being in a different state k or the same state m at time t.
b. If the perturbation is turned on adiabatically, what are $C_k^{(2)}(t) \text{ and } C_m^{(2)}(t)$?
Here, consider that the initial time is $t_0 \rightarrow -\infty$, and the potential is $V e^{\eta t}$, where the positive parameter $\eta$ is allowed to approach zero ($\eta \rightarrow 0$) in order to describe the adiabatically (i.e., slowly) turned on perturbation.
c. Compare the results of parts a. and b. and explain any differences.
d. Ignore first order contributions (assume they vanish) and evaluate the transition rates $\dfrac{d}{dt} |C_k^{(2)}(t)|^2$ for the results of part b. by taking the limits $\eta \rightarrow 0^+$, to obtain the adiabatic results.
Q3
If a system is initially in a state m, conservation of probability requires that the total probability of transitions out of state m be obtainable from the decrease in the probability of being in state m. Prove this to the lowest order by using the results of exercise 2, i.e. show that: $|C_m|^2 = 1 - \sum\limits_{k \ne m} |C_k|^2.$
22.5.02: ii. Problems
Q1
Consider an interaction or perturbation which is carried out suddenly (instantaneously, e.g., within an interval of time ∆t which is small compared to the natural period $\omega_{nm}^{-1}$ corresponding to the transition from state m to state n), and after that is turned off adiabatically (i.e., extremely slowly as $V e^{-\eta t}$). The transition probability in this case is given as:
$T_{nm} \approx \dfrac{|\langle n|V|m \rangle |^2}{\hbar^2\omega_{nm}^2} \nonumber$
where V corresponds to the maximum value of the interaction when it is turned on. This formula allows one to calculate the transition probabilities under the action of sudden perturbations which are small in absolute value whenever perturbation theory is applicable. Let's use this "sudden approximation" to calculate the probability of excitation of an electron under a sudden change of the charge of the nucleus. Consider the reaction:
$^3_1H \rightarrow ^3_2He^+ + e^-, \nonumber$
and assume the tritium atom has its electron initially in a 1s orbital.
a. Calculate the transition probability for the transition 1s → 2s for this reaction using the above formula for the transition probability.
b. Suppose that at time t = 0 the system is in a state which corresponds to the wavefunction $\varphi_m$, which is an eigenfunction of the operator $H_0$. At t = 0, the sudden change of the Hamiltonian occurs (the Hamiltonian is now denoted H and subsequently remains unchanged). Calculate the same 1s → 2s transition probability as in part a., only this time as the square of the magnitude of the coefficient, $A_{1s,2s}$ using the expansion:
$\Psi (r,0) = \varphi_m(r) = \sum\limits_n A_{nm}\psi_n(r) \text{, where } A_{nm} = \int \varphi_m(r)\psi_n(r)d^3r \nonumber$
Note, that the eigenfunctions of H are $\psi_n$ with eigenvalues $E_n$. Compare this "exact" value with that obtained by perturbation theory in part a.
Q2
The methyl iodide molecule is studied using microwave (pure rotational) spectroscopy. The following integral governs the rotational selection rules for transitions labeled J, M, K → J', M', K':
$I = \langle D^{J'}_{M'K'} |\vec{\epsilon}\cdot{\vec{\mu}}|D^{J}_{MK} \rangle . \nonumber$
The dipole moment $\vec{\mu}$ lies along the molecule's $C_3$ symmetry axis. Let the electric field of the light $\vec{\varepsilon}$ define the lab-fixed Z-direction.
a. Using the fact that Cos$\beta = D^{1*}_{00}$, show that
$I = 8\pi^2\mu\epsilon(-1)^{(M+K)} \begin{pmatrix} J' & 1 & J \\ M' & 0 & M \end{pmatrix}\begin{pmatrix} J' & 1 & J \\ K' & 0 & K \end{pmatrix}\delta_{M'M}\delta_{K'K} \nonumber$
b. What restrictions does this result place on $\Delta J = J' - J?$ Explain physically why the K quantum number can not change.
Q3
Consider the molecule BO.
a. What are the total number of possible electronic states which can be formed by combination of ground state B and O atoms?
b. What electron configurations of the molecule are likely to be low in energy? Consider all reasonable orderings of the molecular orbitals. What are the states corresponding to these configurations?
c. What are the bond orders in each of these states?
d. The true ground state of BO is $^2\Sigma$. Specify the +/- and u/g symmetries for this state.
e. Which of the excited states you derived above will radiate to the $^2\Sigma$ ground state? Consider electric dipole, magnetic dipole, and electric quadrupole radiation.
f. Does ionization of the molecule to form a cation lead to a stronger, weaker, or equivalent bond strength?
g. Assuming that the energies of the molecular orbitals do not change upon ionization, what are the ground state, the first excited state, and the second excited state of the positive ion?
h. Considering only these states, predict the structure of the photoelectron spectrum you would obtain for ionization of BO.
Q4
The above figure shows part of the infrared absorption spectrum of HCN gas. The molecule has a CH stretching vibration, a bending vibration, and a CN stretching vibration.
a. Are any of the vibrations of linear HCN degenerate?
b. To which vibration does the group of peaks between 600 $\text{cm}^{-1}$ and 800 $\text{cm}^{-1}$ belong?
c. To which vibration does the group of peaks between 3200 $\text{cm}^{-1}$ and 3400 $\text{cm}^{-1}$ belong?
d. What are the symmetries $(\sigma, \pi, \delta)$ of the CH stretch, CN stretch, and bending vibrational motions?
e. Starting with HCN in its 0,0,0 vibrational level, which fundamental transitions would be infrared active under parallel polarized light (i.e., z-axis polarization):
i. 000 → 001?
ii. 000 → 100?
iii. 000 → 010?
f. Which transitions would be active when perpendicular polarized light is used?
g. Why does the 712 $\text{cm}^{-1}$ transition have a Q-branch, whereas that near 3317 $\text{cm}^{-1}$ has only P- and R-branches?
Q1
a. Evaluate the z-component of $\mu_{fi}$:
\begin{align} \mu_{fi} &= \langle 2p_z |erCos\theta |1s\rangle\text{, where }\psi_{1s} = \dfrac{1}{\sqrt{\pi}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}}e^{\dfrac{-Zr}{a_0}}\text{, and } \psi_{2p_z} = \dfrac{1}{4\sqrt{2\pi}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{5}{2}}r Cos\theta e^{\dfrac{-Zr}{2a_0}} \\ \mu_{fi} &= \dfrac{1}{4\sqrt{2\pi}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{5}{2}}\dfrac{1}{\sqrt{\pi}}\left( \dfrac{Z}{a_0}\right)^{\dfrac{3}{2}}\langle r Cos\theta e^{\dfrac{-Zr}{2a_0}}| erCos \theta |e^{\dfrac{-Zr}{a_0}}\rangle \\ &= \dfrac{e}{4\pi\sqrt{2}} \left( \dfrac{Z}{a_0} \right)^4 \int\limits_0^{\infty} r^2dr \int\limits_0^{\pi}Sin \theta d\theta \int\limits_0^{2\pi}d\varphi \left( r^2e^{\dfrac{-Zr}{2a_0}}e^{\dfrac{-Zr}{a_0}} \right) Cos^2\theta \\ &= \dfrac{e}{4\pi\sqrt{2}}2\pi \left( \dfrac{Z}{a_0} \right)^4\int\limits_0^{\infty} \left( r^4 e^{\dfrac{-3Zr}{2a_0}} \right) dr \int\limits^\pi_0 Sin\theta Cos^2\theta d\theta \end{align}
Using integral equation 4 to integrate over r and equation 17 to integrate over $\theta$ we obtain:
\begin{align} &= \dfrac{e}{4\pi\sqrt{2}}2\pi \left( \dfrac{Z}{a_0} \right)^4 \dfrac{4!}{\left( \dfrac{3Z}{2a_0} \right)^5}\left( \dfrac{-1}{3}\right) Cos^3\theta \bigg|^\pi_0 \\ &= \dfrac{e}{4\pi\sqrt{2}}2\pi \left( \dfrac{Z}{a_0} \right)^4 \dfrac{2^5a_0^5\, 4!}{3^5Z^5} \left( \dfrac{-1}{3}\right) ((-1)^3 - (1)^3) \\ &= \dfrac{e}{\sqrt{2}}\dfrac{2^8a_0}{3^5Z} = \dfrac{ea_0}{Z}\dfrac{2^8}{\sqrt{2}\,3^5} = 0.7449 \dfrac{ea_0}{Z} \end{align}
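The closed-form value $\dfrac{2^8}{\sqrt{2}\,3^5} \approx 0.7449$ can be verified by carrying out the radial and angular integrals numerically (atomic units, Z = 1; the composite Simpson rule and the radial cutoff of 50 $a_0$ are assumptions of this check).

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# radial part: integral of r^4 exp(-3r/2) from 0 to infinity (truncated at 50 a0)
I_r = simpson(lambda r: r**4 * math.exp(-1.5 * r), 0.0, 50.0)
# angular part: integral of sin(theta) cos^2(theta) from 0 to pi (= 2/3)
I_theta = simpson(lambda t: math.sin(t) * math.cos(t) ** 2, 0.0, math.pi)

mu_z = (1.0 / (4.0 * math.pi * math.sqrt(2.0))) * 2.0 * math.pi * I_r * I_theta
print(mu_z)                           # numeric value, in units of e a0
print(2**8 / (math.sqrt(2) * 3**5))   # closed form from the derivation above
```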
b. Examine the symmetry of the integrands for $\langle 2p_z|ex|1s\rangle \text{ and } \langle 2p_z|ey|1s\rangle$ under reflection through the xy plane (the operation $z \rightarrow -z$):
Function Symmetry
$2p_z$ -1
x +1
1s +1
y +1
Under this operation the integrand of $\langle 2p_z |ex|1s\rangle$ transforms as (-1)(1)(1) = -1 (it is antisymmetric) and hence $\langle 2p_z |ex|1s\rangle = 0$.
Similarly, the integrand of $\langle 2p_z |ey| 1s\rangle$ transforms as (-1)(1)(1) = -1 (it is antisymmetric) and hence $\langle 2p_z |ey|1s \rangle = 0$.
c. \begin{align} \tau_R &= \dfrac{3\hbar^4 c^3}{4\left( \dfrac{3}{8}\left( \dfrac{e^2}{a_0} \right) Z^2 \right)^3 \left( \left( \dfrac{ea_0}{Z} \right) \dfrac{2^8}{\sqrt{2}3^5} \right)^2} \\ &= \dfrac{3\hbar^4c^3}{4\dfrac{3^3}{8^3}\left( \dfrac{e^6}{a_0^3}\right) Z^6 \left( \dfrac{e^2a_0^2}{Z^2} \right) \dfrac{2^{16}}{(2)3^{10}} } \\ &= \dfrac{\hbar^4 c^3 3^8 a_0}{e^8Z^42^8} \end{align}
Inserting $e^2 = \dfrac{\hbar^2}{m_ea_0}$ we obtain:
\begin{align} \tau_R &= \dfrac{\hbar^4 c^3 3^8 a_0 m_e^4 a_0^4}{\hbar^8 Z^4 2^8} = \dfrac{3^8}{2^8}\dfrac{c^3 a_0^5 m_e^4}{\hbar^4Z^4} \\ &= 25.6289\, \dfrac{c^3 a_0^5 m_e^4}{\hbar^4Z^4} \\ &= 25.6289 \left( \dfrac{1}{Z^4}\right) \times \dfrac{(2.998\times 10^{10}\text{ cm sec}^{-1})^3 (0.529177\times 10^{-8}\text{ cm})^5 (9.109\times 10^{-28}\text{ g})^4}{(1.0546\times 10^{-27} \text{ g cm}^2 \text{ sec}^{-1})^4} \\ &= 1.595\times 10^{-9} \text{ sec }\times\left( \dfrac{1}{Z^4} \right) \end{align}
So, for example:
Atom $\tau_R$
H 1.595 ns
He$^+$ 99.7 ps
Li$^{+2}$ 19.7 ps
Be$^{+3}$ 6.23 ps
Ne$^{+9}$ 159 fs
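Plugging the cgs constants into the final formula reproduces the tabulated lifetimes; the constant values used below are the ones quoted in the solution.

```python
c = 2.998e10        # cm / s
a0 = 0.529177e-8    # cm
me = 9.109e-28      # g
hbar = 1.0546e-27   # erg s

def tau_R(Z):
    # tau_R = (3^8 / 2^8) c^3 a0^5 me^4 / (hbar^4 Z^4)
    return (3**8 / 2**8) * c**3 * a0**5 * me**4 / (hbar**4 * Z**4)

for Z, label in [(1, "H"), (2, "He+"), (3, "Li2+"), (4, "Be3+"), (10, "Ne9+")]:
    print(f"{label:5s} tau_R = {tau_R(Z):.3e} s")
# H comes out near 1.6 ns, and the 1/Z^4 scaling gives the ps and fs
# lifetimes listed in the table above
```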
Q2
a. $H = H_0 + \lambda H'(t), H'(t) = V\theta (t), H_0 \varphi_k = E_k\varphi_k, \omega_k = \dfrac{E_k}{\hbar}, i\hbar\dfrac{\partial \psi}{\partial t} = H\psi$
let $\psi (r,t) = \sum\limits_j c_j(t)\varphi_j e^{-i\omega_jt}$ and insert into the above expression:
\begin{align} & i\hbar\sum\limits_j \left[ \dot{c}_j - i\omega_j c_j \right] e^{-i\omega_j t} \varphi_j = \sum\limits_j c_j (t) e^{-i\omega_j t}(H_0 + \lambda H'(t))\varphi_j \\ & \sum\limits_j \left[ i\hbar \dot{c}_j + E_jc_j - c_jE_j - c_j\lambda H' \right] e^{-i\omega_j t}\varphi_j = 0 \end{align}
Multiplying from the left by $\varphi_m^*$ and integrating gives:
\begin{align} & \sum\limits_j \left[ i\hbar\dot{c}_j \langle m|j\rangle - c_j\lambda\langle m|H'|j\rangle \right] e^{-i\omega_j t} = 0 \\ & i\hbar\dot{c}_m e^{-i\omega_mt} = \sum\limits_j c_j\lambda H'_{mj}e^{-i\omega_jt} \end{align}
So,
$\dot{c}_m = \dfrac{1}{i\hbar}\sum\limits_j c_j\lambda H'_{mj}e^{-i\omega_{jm}t} \nonumber$
Going back a few equations and multiplying from the left by $\varphi_k$ instead of $\varphi_m$ we obtain:
\begin{align} & & & \sum\limits_j \left[ i\hbar \dot{c}_j \langle k|j\rangle - c_j\lambda \langle k|H'|j\rangle \right] e^{-i\omega_jt} = 0 \ & & & i\hbar\dot{c}_ke^{-i\omega_kt} = \sum\limits_jc_j\lambda H'_{kj}e^{-i\omega_jt} \ So, & & & \ & & & \dot{c}_k = \dfrac{1}{i\hbar}\sum\limits_j c_j\lambda H'_{kj} e^{-i\omega_j t} \ \text{Now, let: } & & &\ & & &c_m = c_m^{(0)} + c_m^{(1)}\lambda + c_m^{(2)}\lambda^2 + ... \ & & & c_k = c_k^{(0)} + c_k^{(1)}\lambda + c_k^{(2)}\lambda^2 + ... \ \text{and substituting into above we obtain: } & & & \ & & & \dot{c}_m^{(0)} + \dot{c}_m^{(1)}\lambda + \dot{c}_m^{(2)}\lambda^2 + ... = \dfrac{1}{i\hbar}\sum\limits_j \left[ c_j^{(0)} + c_j^{(1)}\lambda + c_j^{(2)}\lambda^2 + ... \right] \lambda H'_{mj} e^{-i(\omega_{jm})t} \end{align}
\begin{align} \text{first order: } & \ & \dot{c}_m^{(0)} = 0 \Rightarrow c_m^{(0)} = 1 \ \text{second order: } & \dot{c}_m^{(1)} = \dfrac{1}{i\hbar}\sum\limits_jc_j^{(0)} H'_{mj} e^{-i(\omega_{jm})t} \ (n + 1)^{st} \text{ order: } & \ & \dot{c}_m^{(n)} = \dfrac{1}{i\hbar}\sum\limits_j c_j^{(n-1)}H'_{mj} e^{-i(\omega_{jm})t} \ \text{Similarly: } & \ \text{first order: } & \ & \dot{c}_k^{(0)} = 0 \Rightarrow c_{k\ne m}^{(0)} = 0 \ \text{second order: } & \ & \dot{c}_k^{(1)} = \dfrac{1}{i\hbar}\sum\limits_j c_j^{(0)}H'_{kj} e^{-i(\omega_{jk})} \ \text{(n + 1)}^{st}\text{ order: } & \ & \dot{c}_k^{(n)} = \dfrac{1}{i\hbar} \sum\limits_j c_j^{(n-1)} H'_{kj} e^{-i(\omega_{jk})t} \ \text{So, } & \ & \dot{c}_m^{(1)} = \dfrac{1}{i\hbar} c_m^{(0)}H'_{mm} e^{-i(\omega_{mm})t} = \dfrac{1}{i\hbar}H'_{mm} \& c_m^{(1)}(t) = \dfrac{1}{i\hbar}\int\limits_0^t dt' V_{mm} = \dfrac{V_{mm}t}{i\hbar} \ \text{ and similarly, } & \ & \dot{c}_k^{(1)} = \dfrac{1}{i\hbar}c_m^{(0)} H'_{km} e^{-i(\omega_{mk})t} = \dfrac{1}{i\hbar}H'_{km}e^{-i(\omega_{mk})t} \ & c_k^{(1)}(t) = \dfrac{1}{i\hbar}V_{km} \int\limits_0^t dt' e^{-i(\omega_{mk})t'} = \dfrac{V_{km}}{\hbar\omega_{mk}}\left[ e^{-i(\omega_{mk})t} - 1 \right] \ & \dot{c}_m^{(2)} = \dfrac{1}{i\hbar}\sum\limits_j c_j^{(1)}H'_{mj} e^{-i(\omega_{jm})t} \ & \dot{c}_m^{(2)} = \sum\limits_{j\neq m} \dfrac{1}{i\hbar}\dfrac{V_{jm}}{\hbar\omega_{mj}} \left[ e^{-i(\omega_{mj})t} - 1 \right] H'_{mj} e^{-i(\omega_{jm})t} + \dfrac{1}{i\hbar}\dfrac{V_{mm}t}{i\hbar} H'_{mm} \ & c_m^{(2)} = \sum\limits_{j\neq m} \dfrac{1}{i\hbar}\dfrac{V_{jm}V_{mj}}{\hbar\omega_{mj}}\int\limits_0^t dt' e^{-i(\omega_{jm})t'} \left[ e^{-i(\omega_{mj})t'} - 1 \right] - \dfrac{V_{mm}V_{mm}}{\hbar^2} \int\limits_0^t t'dt' \ &= \sum\limits_{j\ne m} \dfrac{V_{jm}V_{mj}}{i\hbar^2\omega_{mj}} \int\limits_o^t dt' \left[ 1 - e^{-i(\omega_{jm})t'} \right] - \dfrac{|V_{mm}|^2}{\hbar^2}\dfrac{t^2}{2} \ &= \sum\limits_{j\ne m} \dfrac{V_{jm}V_{mj}}{i\hbar^2\omega_{mj}} \left( t - 
\dfrac{e^{-i(\omega_{jm})t}-1}{-i\omega_{jm}} \right) - \dfrac{|V_{mm}|^2}{\hbar^2}\dfrac{t^2}{2} \ &= \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{mj}}{\hbar^2\omega_{mj}^2}\left( e^{-i(\omega_{jm})t} - 1 \right) + \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{mj}}{i\hbar^2 \omega_{mj}}t - \dfrac{|V_{mm}|^2 t^2}{2\hbar^2} \ \text{Similarly, } & \ \dot{c}_k^{(2)} &= \dfrac{1}{i\hbar}\sum\limits_j c_j^{(1)} H'_{kj} e^{-i(\omega_{jk})t} \ &= \sum\limits_{j\ne m} \dfrac{1}{i\hbar} \dfrac{V_{jm}}{\hbar \omega_{mj}}\left[ e^{-i(\omega_{mj})t} - 1 \right] H'_{kj} e^{-i(\omega_{jk})t} + \dfrac{1}{i\hbar}\dfrac{V_{mm}t}{i\hbar} H'_{km} e^{-i(\omega_{mk})} \ c_k^{(2)}(t) &= \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{kj}}{i\hbar^2\omega_{mj}} \int\limits_0^t dt' e^{-i(\omega_{jk})t'} \left[ e^{-i(\omega_{mj})t'} - 1 \right] - \dfrac{V_{mm}V_{km}}{\hbar^2} \int\limits_0^t t' dt' e^{-i(\omega_{mk})t'} \ &= \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{kj}}{i\hbar^2\omega_{mj}} \left( \dfrac{e^{-i(\omega_{mj} + \omega_{jm})t} - 1}{-i\omega_{mk}} - \dfrac{e^{-i(\omega_{jk})t} - 1}{-\omega_{jk}} \right) - \dfrac{V_{mm}V_{km}}{\hbar^2} \left[ e^{-i(\omega_{mk})t'} \left( \dfrac{t'}{-i\omega_{mk}} - \dfrac{1}{-(i\omega_{mk})^2} \right) \right]^t_0 \ &= \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{kj}}{\hbar^2\omega_{mj}} \left( \dfrac{e^{-i(\omega_{mk})t} - 1}{\omega_{mk}} - \dfrac{e^{-i(\omega_{jk})t} - 1}{\omega_{jk}}\right) + \dfrac{V_{mm}V_{km}}{\hbar^2\omega_{mk}} \left[ e^{-i(\omega_{mk})t'} \left( \dfrac{t'}{i} - \dfrac{1}{\omega_{mk}} \right) \right]^t_0 \ &= \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{kj}}{E_m - E_j} \left( \dfrac{e^{-i(\omega_{mk})t} - 1}{E_m - E_k} - \dfrac{e^{-i(\omega_{jk})t}}{E_j - E_k} \right) + \dfrac{V_{mm}V_{km}}{\hbar (E_m - E_k)} \left[ e^{-i(\omega_{mk})t} \left( \dfrac{t}{i} - \dfrac{1}{\omega_{mk}}\right) + \dfrac{1}{\omega_{mk}} \right] \end{align}
So, the overall amplitudes $c_m \text{, and } c_k$, to second order are:
\begin{align} c_m(t) &= 1 + \dfrac{V_{mm}t}{i\hbar} + \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{mj}}{i\hbar (E_m - E_j)}t + \sum\limits_{j\ne m}'\dfrac{V_{jm}V_{mj}}{\hbar^2 (E_m - E_j )^2}( e^{-i(\omega_{mk})t} - 1) - \dfrac{|V_{mm}|^2t^2}{2\hbar^2} \ c_k(t) &= \dfrac{V_{km}}{(E_m - E_k)}\left[ e^{-i(\omega_{mk})t} - 1 \right] + \dfrac{V_{mm}V_{km}}{(E_m - E_k)^2}\left[ 1 - e^{-i(\omega_{mk})t} \right] + \dfrac{V_{mm}V_{km}}{(E_m - E_k)}\dfrac{t}{\hbar i}e^{-i(\omega_{mk})t} + \sum\limits_{j\ne m}'\dfrac{V_{jm}V_{kj}}{E_m - E_j}\left( \dfrac{e^{-i(\omega_{mk})t} - 1}{E_m - E_k} - \dfrac{e^{-i(\omega_{jk})t} - 1}{E_j - E_k} \right)\end{align}
b. The perturbation equations still hold:
\begin{align} \dot{c}_m^{(n)} &= \dfrac{1}{i\hbar}\sum\limits_j c_j^{(n-1)}H_{mj}' e^{-i(\omega_{jm})t}\text{ ; } \dot{c}_k^{(n)} = \dfrac{1}{i\hbar} \sum\limits_j c_j^{(n-1)} H_{kj}' e^{-i(\omega_{jk})t} \end{align}
So, $c_m^{(0)} = 1$ and $c_k^{(0)} = 0$. With $H'(t) = Ve^{\eta t}$:
\begin{align} \dot{c}_m^{(1)} &= \dfrac{1}{i\hbar} H_{mm}' = \dfrac{1}{i\hbar}V_{mm}e^{\eta t} \\ c_m^{(1)} &= \dfrac{1}{i\hbar}V_{mm} \int\limits_{-\infty}^t dt' e^{\eta t'} = \dfrac{V_{mm}e^{\eta t}}{i\hbar\eta} \\ \dot{c}_k^{(1)} &= \dfrac{1}{i\hbar}V_{km}e^{\eta t}e^{-i(\omega_{mk})t} \\ c_k^{(1)} &= \dfrac{1}{i\hbar}V_{km}\int\limits_{-\infty}^t dt' e^{(-i\omega_{mk} + \eta )t'} = \dfrac{V_{km}}{i\hbar (-i\omega_{mk} + \eta )} e^{(-i\omega_{mk} + \eta )t} = \dfrac{V_{km}e^{(-i\omega_{mk} + \eta )t}}{E_m - E_k + i\hbar\eta} \\ \dot{c}_m^{(2)} &= \sum\limits_{j\ne m}' \dfrac{1}{i\hbar}\dfrac{V_{jm}}{E_m - E_j + i\hbar\eta}e^{(-i\omega_{mj} + \eta )t}V_{mj}e^{\eta t}e^{-i(\omega_{jm} )t} + \dfrac{1}{i\hbar}\dfrac{V_{mm}e^{\eta t}}{i\hbar \eta}V_{mm} e^{\eta t} \\ c_m^{(2)} &= \sum\limits_{j\ne m}' \dfrac{1}{i\hbar}\dfrac{V_{jm}V_{mj}}{E_m - E_j + i\hbar\eta}\int\limits_{-\infty}^t e^{2\eta t'} dt' - \dfrac{|V_{mm}|^2}{\hbar^2\eta}\int\limits_{-\infty}^t e^{2\eta t'}dt' \\ &= \sum\limits_{j\ne m}' \dfrac{V_{jm}V_{mj}}{i\hbar 2\eta (E_m - E_j + i\hbar\eta )}e^{2\eta t} - \dfrac{|V_{mm}|^2}{2\hbar^2\eta^2}e^{2\eta t} \\ \dot{c}_k^{(2)} &= \sum\limits_{j\ne m}' \dfrac{1}{i\hbar}\dfrac{V_{jm}}{E_m - E_j + i\hbar\eta}e^{(-i\omega_{mj} + \eta )t}V_{kj}e^{\eta t} e^{-i(\omega_{jk})t} \end{align}
c. In part a. the $c^{(2)}(t)$ grow linearly with time (for $V_{mm}$ = 0) while in part b. they remain finite for $\eta > 0$. The result in part a. is due to the sudden turning on of the field.
d.
\begin{align} |c_k(t)|^2 &= \bigg| \sum\limits_j \dfrac{V_{jm}V_{kj}e^{(-i\omega_{mk} + 2\eta )t}}{(E_m - E_j + i\hbar\eta )(E_m - E_k + 2i\hbar\eta )} \bigg|^2 \\ &= \sum\limits_{jj'} \dfrac{V_{kj}V_{kj'}V_{jm}V_{j'm}e^{4\eta t}}{\left[ (E_m-E_j)(E_m-E_{j'}) + i\hbar\eta (E_j-E_{j'}) + \hbar^2\eta^2 \right] \left( (E_m - E_k)^2 + 4\hbar^2\eta^2 \right)} \end{align}
Now, look at the limit as $\eta \rightarrow 0^+$: $\dfrac{d}{dt} |c_k(t)|^2 \ne 0$ only when $E_m = E_k$, since
$\lim_{\eta \to 0^+} \dfrac{4\eta}{(E_m - E_k)^2 + 4\hbar^2\eta^2} \propto \delta (E_m - E_k). \nonumber$
So, the final result is the $2^{\text{nd}}$ order golden rule expression:
$\dfrac{d}{dt}|c_k(t)|^2 = \dfrac{2\pi}{\hbar}\delta (E_m - E_k) \lim_{\eta\to 0^+} \bigg| \sum\limits_j \dfrac{V_{jm}V_{kj}}{E_j - E_m - i\hbar\eta } \bigg|^2 \nonumber$
Q3
For the sudden perturbation case:
\begin{align} |c_m(t)|^2 &= 1 + \sum\limits_j' \dfrac{V_{jm}V_{mj}}{(E_m - E_j)^2}\left[ e^{-i(\omega_{jm})t} - 1 + e^{i(\omega_{jm})t} - 1 \right] + O(V^3) \ |c_m(t)|^2 &= 1 + \sum\limits_j' \dfrac{V_{jm}V_{mj}}{(E_m - E_j)^2} \left[ e^{-i(\omega_{jm}t)} + e^{i(\omega_{jm})t} - 2\right] + O(V^3) \ |c_k(t)|^2 &= \dfrac{V_{km}V_{mk}}{(E_m - E_k)^2}\left[ -e^{-i(\omega_{mk})t} - e^{i(\omega_{mk})t} + 2 \right] + O(V^3) \ 1 - \sum\limits_{k\ne m}' |c_k(t)|^2 &= 1 - \sum\limits_k' \dfrac{V_{km}V_{mk}}{(E_m - E_k)^2}\left[ -e^{-i(\omega_{mk})t} - e^{i(\omega_{mk})t} + 2 \right] + O(V^3) \ &= 1 + \sum\limits_k' \dfrac{V_{km}V_{mk}}{(E_m - E_k)^2}\left[ e^{-i(\omega_{mk})t} + e^{i(\omega_{mk})t} - 2 \right] + O(V^3) \end{align}
$\therefore$ to order $V^2, |c_m(t)|^2 = 1 - \sum\limits_k' |c_k(t)|^2$, with no assumptions made regarding $V_{mm}.$
For the adiabatic perturbation case:
\begin{align} |c_m(t)|^2 &= 1 + \sum\limits_{j\ne m}' \left[ \dfrac{V_{jm}V_{mj}e^{2\eta t}}{i\hbar 2\eta (E_m - E_j + i\hbar\eta )} + \dfrac{V_{jm}V_{mj}e^{2\eta t}}{-i\hbar 2\eta (E_m - E_j + i\hbar\eta )} \right] + O(V^3) \ &= 1 + \sum\limits_{j\ne m}' \dfrac{1}{i\hbar 2\eta}\left[ \dfrac{1}{(E_m - E_j + i\hbar\eta)} - \dfrac{1}{( E_m - E_j - i\hbar\eta )} \right] V_{jm}V_{mj} e^{2\eta t} + O(V^3) \ &= 1 + \sum\limits_{j\ne m}' \dfrac{1}{i\hbar 2\eta}\left[ \dfrac{-2i\hbar\eta}{(E_m - E_j)^2 + \hbar^2\eta^2} \right] V_{jm}V_{mj}e^{2\eta t} + O(V^3) \ &= 1 - \sum\limits_{j\ne m}' \left[ \dfrac{V_{jm}V_{mj}e^{2\eta t}}{(E_m - E_j)^2 + \hbar^2\eta^2} \right] + O(V^3) \ |c_k(t)|^2 &= \dfrac{V_{km}V_{mk}}{(E_m - E_k)^2 + \hbar^2\eta^2}e^{2\eta t} + O(V^3) \end{align}
$\therefore \text{ to order } V^2, |c_m(t)|^2 = 1 - \sum\limits_{k}' |c_k(t)|^2$, with no assumptions made regarding $V_{mm}$ for this case as well.
Q1
a. $T_{nm} \approx \dfrac{|\langle n|V|m \rangle |^2}{\hbar^2\omega_{nm}^2}$
evaluating $\langle 1s|V|2s \rangle$ (using only the radial portions of the 1s and 2s wavefunctions, since the spherical harmonics integrate to unity), where $V = \dfrac{e^2}{r}$:
\begin{align} \langle 1s|V|2s \rangle &= \int 2 \left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}} e^{\dfrac{-Zr}{a_0}}\dfrac{e^2}{r}\dfrac{1}{\sqrt{2}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}} \left( 1 - \dfrac{Zr}{2a_0}\right) e^{\dfrac{-Zr}{2a_0}}r^2dr \\ &= \dfrac{2e^2}{\sqrt{2}} \left( \dfrac{Z}{a_0} \right)^{3} \left[ \int re^{\dfrac{-3Zr}{2a_0}}dr - \int \dfrac{Zr^2}{2a_0} e^{\dfrac{-3Zr}{2a_0}}dr \right] \end{align}
Using integral equation 4 for the two integrations we obtain:
\begin{align} \langle 1s|V|2s \rangle &= \dfrac{2e^2}{\sqrt{2}} \left( \dfrac{Z}{a_0} \right)^3 \left[ \dfrac{1}{\left( \dfrac{3Z}{2a_0} \right)^2} - \left( \dfrac{Z}{2a_0} \right) \dfrac{2}{\left( \dfrac{3Z}{2a_0} \right)^3} \right] \\ &= \dfrac{2e^2}{\sqrt{2}} \left( \dfrac{Z}{a_0} \right)^3 \left[ \dfrac{2^2a_0^2}{3^2Z^2} - \dfrac{2^3a_0^2}{3^3Z^2} \right] \\ &= \dfrac{2e^2}{\sqrt{2}} \left( \dfrac{Z}{a_0} \right)^3 \left[ \dfrac{(3)2^2a_0^2 - 2^3a_0^2}{3^3Z^2} \right] = \dfrac{8Ze^2}{\sqrt{2}\,27a_0} \end{align}
Now,
$E_n = -\dfrac{Z^2e^2}{n^2 2a_0} \text{, so } E_{1s} = -\dfrac{Z^2e^2}{2a_0}\text{, } E_{2s} = -\dfrac{Z^2e^2}{8a_0} \text{, } E_{2s} - E_{1s} = \dfrac{3Z^2e^2}{8a_0} \nonumber$
So,
$T_{nm} = \dfrac{\left( \dfrac{8Ze^2}{\sqrt{2}\,27a_0} \right)^2}{\left( \dfrac{3Z^2e^2}{8a_0} \right)^2} = \dfrac{2^6Z^2e^4a_0^2 2^6}{(2)3^6a_0^2 3^2Z^4e^4} = \dfrac{2^{11}}{3^8Z^2} = 0.312 \text{ (for Z = 1)} \nonumber$
b. $\varphi_m (r) = \varphi_{1s} = 2\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}} e^{\dfrac{-Zr}{a_0}} \text{Y}_{00}$
The orthogonality of the spherical harmonics results in only s-states having non-zero values for $A_{nm}$. We can then drop the $Y_{00}$ (integrating this term will only result in unity) in determining the value of $A_{1s,2s}$.
\begin{align} \psi_n (r) &= \psi_{2s} = \dfrac{1}{\sqrt{2}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}} \left( 1 - \dfrac{Zr}{2a_0} \right) e^{\dfrac{-Zr}{2a_0}} \end{align}
Remember that for $\varphi_{1s}$ the nuclear charge is Z (= 1 for tritium), while for $\psi_{2s}$ it is Z + 1 (= 2 for the helium ion):
\begin{align} A_{nm} &= \int 2 \left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}}e^{\dfrac{-Zr}{a_0}}\dfrac{1}{\sqrt{2}} \left( \dfrac{Z + 1}{a_0} \right)^{\dfrac{3}{2}} \left( 1 - \dfrac{( Z + 1)r}{2a_0} \right) e^{\dfrac{-(Z + 1)r}{2a_0}}r^2dr \\ &= \dfrac{2}{\sqrt{2}} \left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}} \left( \dfrac{Z + 1}{a_0} \right)^{\dfrac{3}{2}} \int e^{\dfrac{-(3Z + 1)r}{2a_0}} \left(1 - \dfrac{(Z + 1)r}{2a_0} \right) r^2dr \\ &= \dfrac{2}{\sqrt{2}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}}\left( \dfrac{Z + 1}{a_0} \right)^{\dfrac{3}{2}} \left[ \int r^2 e^{\dfrac{-(3Z + 1)r}{2a_0}}dr - \int \dfrac{(Z + 1)r^3}{2a_0}e^{\dfrac{-(3Z + 1)r}{2a_0}} dr \right] \end{align}
Evaluating these integrals using integral equation 4 we obtain:
\begin{align} A_{nm} &= \dfrac{2}{\sqrt{2}}\left( \dfrac{Z}{a_0}\right)^{\dfrac{3}{2}}\left( \dfrac{Z + 1}{a_0} \right)^{\dfrac{3}{2}} \left[ \dfrac{2}{\left( \dfrac{3Z + 1}{2a_0} \right)^3} - \left( \dfrac{Z + 1}{2a_0} \right) \dfrac{(3)(2)}{\left( \dfrac{3Z + 1}{2a_0}\right)^4} \right] \\ &= \dfrac{2}{\sqrt{2}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}}\left( \dfrac{Z + 1}{a_0} \right)^{\dfrac{3}{2}}\left[ \dfrac{2^4a_0^3}{(3Z + 1)^3} - (Z + 1)\dfrac{(3)2^4a_0^3}{(3Z + 1)^4} \right] \\ &= \dfrac{2}{\sqrt{2}}\left( \dfrac{Z}{a_0} \right)^{\dfrac{3}{2}}\left( \dfrac{Z + 1}{a_0}\right)^{\dfrac{3}{2}}\left[ \dfrac{-2^5a_0^3}{(3Z + 1)^4} \right] \\ &= -2 \dfrac{[2^3Z(Z + 1)]^{\dfrac{3}{2}}}{(3Z + 1)^4} \end{align}
The transition probability is the square of this amplitude:
$T_{nm} = \left( -2\dfrac{[2^3Z(Z + 1)]^{\dfrac{3}{2}}}{(3Z + 1)^4} \right)^2 = \dfrac{2^{11}Z^3(Z + 1)^3}{(3Z + 1)^8} = 0.25 \text{ (for Z = 1).} \nonumber$
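As a quick numerical check of this closed form, the probability can be evaluated for a few values of Z (a minimal Python sketch; the function name is ours):

```python
def transition_probability(Z):
    """T_nm = 2^11 Z^3 (Z+1)^3 / (3Z+1)^8, from the amplitude above."""
    return 2**11 * Z**3 * (Z + 1)**3 / (3 * Z + 1)**8

print(transition_probability(1))    # 0.25, as quoted for Z = 1
print(transition_probability(100))  # the 1s -> 2s probability vanishes at large Z
```

The Z = 1 value reproduces the 0.25 quoted above, and the probability falls toward zero at large Z, where the change of one unit of nuclear charge is relatively insignificant.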
The difference between these two results (parts a. and b.) becomes negligible at large values of Z, where the perturbation (a change of one unit of nuclear charge) is relatively insignificant; it is largest for small Z, as in the case Z = 1.
Q2
$\vec{\varepsilon}$ is along Z (lab fixed), and $\vec{\mu}$ is along z (the C-I molecule-fixed bond axis). The angle between Z and z is $\beta$:
$\vec{\varepsilon}\cdot\vec{\mu} = \varepsilon\mu \cos \beta = \varepsilon\mu D^{1*}_{00} (\alpha\beta\gamma ) \nonumber$
So,
\begin{align} I = \langle D_{M'K'}^{J'} | \vec{\varepsilon}\cdot\vec{\mu} | D_{MK}^J \rangle &= \int D_{M'K'}^{J'*} \vec{\varepsilon}\cdot\vec{\mu}D_{MK}^J \sin \beta d\beta d\gamma d\alpha \ &= \varepsilon\mu \int D_{M'K'}^{J'*} D_{00}^{1*}D_{MK}^J \sin \beta d\beta d\gamma d\alpha . \ \text{Now use: } & \ & D_{M'K'}^{J'*}D_{00}^{1*} = \sum\limits_{jmn} \langle J'M'10 | jm\rangle^*D_{mn}^{j*} \langle jn|J'K' 10\rangle^{*}, \ \text{to obtain: } & \ I &= \varepsilon\mu \sum\limits_{jmn} \langle J'M' 10 |jm\rangle ^*\langle jn |J'K'10 \rangle ^* \int D_{mn}^{j*}D_{MK}^J \sin \beta d\beta d\gamma d\alpha .\ \text{ Now use: } & \ & \int D_{mn}^{j*} D_{MK}^J \sin \beta d\beta d\gamma d\alpha = \dfrac{8\pi^2}{2J + 1} \delta_{Jj}\delta_{Mm}\delta_{Kn}, \ \text{to obtain: } & \ I &= \varepsilon\mu \dfrac{8\pi^2}{2J + 1} \sum\limits_{jmn} \langle J'M' 10 |jm \rangle ^* \langle jn|J'K' 10\rangle ^*\delta_{Jj}\delta_{Mm}\delta_{Kn} \ &= \varepsilon\mu \dfrac{8\pi^2}{2J + 1} \langle J'M'10 |JM \rangle \langle JK|J'K'10 \rangle . \ \text{We use: } & \ & \langle JK|J'K' 10\rangle = \sqrt{2J + 1}(-1)^{(J' - 1 + K)} \begin{pmatrix} J' & 1 & J \ K' & 0 & K \end{pmatrix} \ \text{and, } &\ & \langle J'M'10|JM \rangle = \sqrt{2J + 1}(-1)^{(J' - 1 + M)}\begin{pmatrix} J' & 1 & J \ M' & 0 & M \end{pmatrix} \ \text{to give: } & \ & I = \varepsilon\mu \dfrac{8\pi^2}{2J + 1}\sqrt{2J + 1}(-1)^{(J' - 1 + M)}\begin{pmatrix} J' & 1 & J \ M' & 0 & M \end{pmatrix}\sqrt{2J + 1}(-1)^{(J' - 1 + K)}\begin{pmatrix} J' & 1 & J \ K' & 0 & K \end{pmatrix} \ & = \varepsilon\mu 8\pi^2 (-1)^{(J' - 1 + M + J' - 1 + K)}\begin{pmatrix} J' & 1 & J \ M' & 0 & M \end{pmatrix} \begin{pmatrix} J' & 1 & J \ K' & 0 & K \end{pmatrix} \ & = \varepsilon\mu 8\pi^2 (-1)^{(M + K)}\begin{pmatrix} J' & 1 & J \ M' & 0 & M \end{pmatrix} \begin{pmatrix} J' & 1 & J \ K' & 0 & K \end{pmatrix} \end{align}
The 3-J symbols vanish unless: K' + 0 = K and M' + 0 = M.
So,
$I = \varepsilon\mu 8 \pi^2 (-1)^{(M + K)}\begin{pmatrix} J' & 1 & J \ M & 0 & M \end{pmatrix}\begin{pmatrix} J' & 1 & J \ K & 0 & K \end{pmatrix} \delta_{M'M}\delta_{K'K} \nonumber$
b. $\begin{pmatrix} J' & 1 & J \ M & 0 & M \end{pmatrix}$ and $\begin{pmatrix} J' & 1 & J \ K & 0 & K \end{pmatrix}$ vanish unless J' = J + 1, J, J - 1; $\therefore \Delta J = \pm 1, 0$.
The K quantum number can not change because the dipole moment lies along the molecule's $C_3$ axis and the light's electric field thus can exert no torque that twists the molecule about this axis. As a result, the light can not induce transitions that excite the molecule's spinning motion about this axis.
Q3
a. B atom: $1s^22s^22p^1 \text{, }^2P$ ground state; L = 1, S = $\dfrac{1}{2}$, gives a degeneracy ((2L + 1)(2S + 1)) of 6.
O atom: $1s^22s^22p^4$, $^3P$ ground state; L = 1, S = 1, gives a degeneracy ((2L + 1)(2S + 1)) of 9.
The total number of states formed is then (6)(9) = 54.
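The (2L + 1)(2S + 1) counting used here is simple enough to mechanize (a small Python sketch; the term symbols are those identified above):

```python
def term_degeneracy(L, S):
    """Number of microstates in a term: (2L+1)(2S+1)."""
    return int((2 * L + 1) * (2 * S + 1))

g_B = term_degeneracy(L=1, S=0.5)  # boron 2P term -> 6
g_O = term_degeneracy(L=1, S=1.0)  # oxygen 3P term -> 9
print(g_B * g_O)                   # 54 molecular states from B + O
```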
b. We need only consider the p orbitals to find the low lying molecular states:
This is the correct ordering to give a $^2\sum^+$ ground state. The only low-lying electron configurations are $1\pi^35\sigma^2 \text{ or } 1\pi^45\sigma^1$. These lead to $^2\Pi \text{ and } ^2\sum^+$ states, respectively.
c. The bond orders in both states are $2\dfrac{1}{2}$.
d. The $^2\sum$ state is +; g/u cannot be specified since this is a heteronuclear molecule.
e. Only one excited state, the $^2\Pi$, is spin-allowed to radiate to the $^2\sum^+$. Consider the symmetries of the transition moment operators that arise in the E1, E2 and M1 contributions to the transition rate:
Electric dipole allowed: $z \rightarrow \sum^+, x, y \rightarrow \Pi , \therefore \text{ the } ^2\Pi \rightarrow ^2\sum^+$ is electric dipole allowed via a perpendicular band.
Magnetic dipole allowed: $R_z \rightarrow \sum^-, R_{x,y} \rightarrow \Pi , \therefore \text{ the } ^2\Pi \rightarrow \text{ }^2\sum^+$ is magnetic dipole allowed.
Electric quadrupole allowed: $x^2 + y^2, z^2 \rightarrow \sum^+, xz, yz \rightarrow \Pi, x^2 - y^2, xy \rightarrow \Delta \therefore \text{ the } ^2\Pi \rightarrow ^2\sum^+$ is electric quadrupole allowed as well.
f. Since ionization will remove a bonding electron, the BO$^+$ bond is weaker than the BO bond.
g. The ground state of BO$^+$ is $^1\sum^+$, corresponding to a $1\pi^4$ electron configuration. An electron configuration of $1\pi^3 5\sigma^1$ leads to a $^3\Pi$ and a $^1\Pi$ state. The $^3\Pi$ will be lower in energy. A $1\pi^2 5\sigma^2$ configuration will lead to higher lying states of $^3\sum^- , ^1\Delta, \text{ and } ^1\sum^+$.
h. There should be 3 bands corresponding to formation of BO$^+$ in the $^1\sum^+, ^3\Pi \text{, and } ^1\Pi$ states. Since each of these involves removing a bonding electron, the Franck-Condon integrals will be appreciable for several vibrational levels, and thus a vibrational progression should be observed.
Q4
a. The bending $(\pi )$ vibration is degenerate.
b. \begin{align} H--&-C\equiv N \ \Uparrow & \ & \text{bending fundamental} \end{align}
c. \begin{align} H--&-C\equiv N \ \Uparrow & \ & \text{stretching fundamental} \end{align}
d. The CH stretch ($\nu_3$ in the figure) is $\sigma$, the CN stretch is $\sigma$, and the HCN bend ($\nu_2$ in the figure) is $\pi$.
e. Under z $(\sigma )$ light the CN stretch and the CH stretch can be excited, since $\psi_0 = \sigma, \psi_1 = \sigma \text{ and } z = \sigma$ provides coupling.
f. Under x,y ($\pi$) light the HCN bend can be excited, since $\psi_0 = \sigma, \psi_1 = \pi$ and x,y = $\pi$ provides coupling.
g. The bending vibration is active under (x,y) perpendicular polarized light. $\Delta J = 0, \pm 1$ are the selection rules for $\perp$ transitions. The CH stretching vibration is active under (z) $\|$ polarized light. $\Delta J = \pm 1$ are the selection rules for $\|$ transitions.
Q1
Contrast Slater type orbitals (STOs) with Gaussian type orbitals (GTOs).
22.6.03: ii. Exercises
Q1
By expanding the molecular orbitals $\{\phi\kappa\}$ as linear combinations of atomic orbitals $\{\chi_{\mu}\}$,
$\phi_k = \sum\limits_\mu c_{\mu k}\chi_\mu \nonumber$
show how the canonical Hartree-Fock (HF) equations:
$F \phi_i = \epsilon_i\phi_i \nonumber$
reduce to the matrix eigenvalue-type equation of the form given in the text:
$\sum\limits_\nu F_{\mu\nu} C_{\nu i} = \epsilon_i\sum\limits_{\nu} S_{\mu\nu}C_{\nu i} \nonumber$
where:
\begin{align} F_{\mu\nu} &= \langle \chi_\mu |h|\chi_\nu \rangle + \sum\limits_{\delta \kappa} \left[ \gamma_{\delta \kappa} \langle \chi_\mu \chi_\delta |g| \chi_\nu \chi_\kappa \rangle - \gamma_{\delta \kappa}^{ex} \langle \chi_\mu \chi_\delta |g| \chi_\kappa \chi_\nu \rangle \right], \ S_{\mu\nu} &= \langle \chi_\mu | \chi_\nu \rangle, \gamma_{\delta \kappa} = \sum\limits_{i=occ} C_{\delta i}C_{\kappa i}, \ \text{and } \gamma_{\delta \kappa}^{ex} &= \sum\limits_{\substack{\text{occ and} \\ \text{same spin}}}C_{\delta i}C_{\kappa i}. \end{align}
Note that the sum over i in $\gamma_{\delta\kappa} \text{ and } \gamma_{\delta\kappa}^{ex}$ is a sum over spin orbitals. In addition, show
that this Fock matrix can be further reduced for the closed shell case to:
$F_{\mu\nu} = \langle \chi_\mu |h| \chi_\nu \rangle + \sum\limits_{\delta\kappa} P_{\delta\kappa} \left[ \langle \chi_\mu \chi_\delta |g| \chi_\nu \chi_\kappa \rangle - \dfrac{1}{2}\langle \chi_\mu \chi_\delta |g| \chi_\kappa \chi_\nu \rangle \right] , \nonumber$
where the charge bond order matrix, P, is defined to be:
$P_{\delta \kappa} = \sum\limits_{i=occ} 2C_{\delta i}C_{\kappa i}, \nonumber$
where the sum over i here is a sum over orbitals not spin orbitals.
Q2
Show that the HF total energy for a closed-shell system may be written in terms of integrals over the orthonormal HF orbitals as:
$\text{E(SCF) } = 2\sum\limits_{k}^{occ} \langle \phi_k |h| \phi_k \rangle + \sum\limits_{kl}^{occ}\left[ 2\langle kl|g|kl \rangle - \langle kl |g| lk \rangle \right] + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}}. \nonumber$
Q3
Show that the HF total energy may alternatively be expressed as:
$\text{E(SCF)} = \sum\limits_k^{occ} \left( \epsilon_k + \langle \phi_k |h| \phi_k \rangle \right) + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} \nonumber$
where the $\epsilon_k$ refer to the HF orbital energies.
22.6.04: iii. Problems
Q1
This problem will be concerned with carrying out an SCF calculation for the HeH$^+$ molecule in the $^1\sum^+ (1\sigma^2)$ ground state. The one- and two-electron integrals (in atomic units) needed to carry out this SCF calculation at R = 1.4 a.u. using Slater type orbitals with orbital exponents of 1.6875 and 1.0 for the He and H, respectively are:
\begin{align} & S_{11} = 1.0, & & S_{22} = 1.0, & & S_{12} = 0.5784 & \ & h_{11} = -2.6442, & & h_{22} = -1.7201, & & h_{12} = -1.5113, & \ & g_{1111} = 1.0547, & & g_{1121} = 0.4744, & & g_{1212} = 0.5664, & \ & g_{2211} = 0.2469, & & g_{2221} = 0.3504, & & g_{2222} = 0.6250, & \end{align}
where 1 refers to $1s_{He} \text{ and 2 to } 1s_H.$ Note that the two-electron integrals are given in Dirac notation. Parts a. - d. should be done by hand. Any subsequent parts can make use of the QMIC software provided.
a. Using $\phi_1 \approx 1s_{He}$ for the initial guess of the occupied molecular orbital, form a 2x2 Fock matrix. Use the equation derived above in question 1 for $F_{\mu\nu}$.
b. Solve the Fock matrix eigenvalue equations given above to obtain the orbital energies and an improved occupied molecular orbital. In so doing, note that $\langle \phi_1 |\phi_1 \rangle = 1 = C_1^T SC_1$ gives the needed normalization condition for the expansion coefficients of the $\phi_1$ in the atomic orbital basis.
c. Determine the total SCF energy using the result of exercise 3 above at this step of the iterative procedure. When will this energy agree with that obtained by using the alternative expression for E(SCF) given in exercise 2?
d. Obtain the new molecular orbital, $\phi_1$, from the solution of the matrix eigenvalue problem (part b).
e. A new Fock matrix and related total energy can be obtained with this improved choice of molecular orbital, $\phi_1$. This process can be continued until a convergence criterion has been satisfied. Typical convergence criteria include: no significant change in the molecular orbitals or the total energy (or both) from one iteration to the next. Perform this iterative procedure for the HeH$^+$ system until the difference in total energy between two successive iterations is less than $10^{-5}$ a.u.
f. Show, by comparing the difference between the SCF total energy at one iteration and the converged SCF total energy, that the convergence of the above SCF approach is primarily linear (or first order).
g. Is the SCF total energy calculated at each iteration of the above SCF procedure (via exercise 3) an upper bound to the exact ground-state total energy?
h. Using the converged self-consistent set of molecular orbitals, $\phi_1 \text{ and } \phi_2$, calculate the one- and two-electron integrals in the molecular orbital basis. Using the equations for E(SCF) in exercises 2 and 3, calculate the converged values of the orbital energies making use of these integrals in the mo basis.
i. Does this SCF wavefunction give rise (at R $\rightarrow \infty$) to proper dissociation products?
Q2
This problem will continue to address the same HeH$^+$ molecular system as above, extending the analysis to include "correlation effects." We will use the one- and two-electron integrals (same geometry) in the converged (to 10$^{-5}$ au) SCF molecular orbital basis which we would have obtained after 7 iterations above. The converged mos you would have obtained in problem 1 are:
$\phi_1 = \begin{bmatrix} & -0.89997792 \ &-0.15843012 \end{bmatrix} \phi_2 = \begin{bmatrix} & -0.83233180 \ & 1.21558030 \end{bmatrix} \nonumber$
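Before using these vectors it is worth confirming that they are orthonormal with respect to the AO overlap matrix of problem 1 ($S_{12} = 0.5784$); a short Python check (the helper name is ours):

```python
# AO overlap and the converged MO coefficient vectors quoted above.
S = [[1.0, 0.5784], [0.5784, 1.0]]
phi1 = [-0.89997792, -0.15843012]
phi2 = [-0.83233180, 1.21558030]

def s_dot(a, b, S):
    """Generalized inner product a^T S b in the nonorthogonal AO basis."""
    return sum(a[i] * S[i][j] * b[j] for i in range(2) for j in range(2))

print(round(s_dot(phi1, phi1, S), 5))  # ~1: normalized
print(round(s_dot(phi2, phi2, S), 5))  # ~1: normalized
print(round(s_dot(phi1, phi2, S), 5))  # ~0: S-orthogonal
```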
a. Carry out a two configuration CI calculation using the $1\sigma^2 \text{ and } 2\sigma^2$ configurations first by obtaining an expression for the CI matrix elements $H_{ij} (i,j = 1\sigma^2, 2\sigma^2)$ in terms of one- and two-electron integrals, and secondly by showing that the resultant CI matrix is (ignoring the nuclear repulsion term):
$\begin{bmatrix} -4.2720 & 0.1261 \ 0.1261 & -2.0149 \end{bmatrix} \nonumber$
b. Obtain the two CI energies and eigenvectors for the matrix found in part a.
c. Show that the lowest energy CI wavefunction is equivalent to the following two-determinant (single configuration) wavefunction:
$\dfrac{1}{2} \left[ \big| \left( a^{\dfrac{1}{2}}\phi_1 + b^{\dfrac{1}{2}}\phi_2 \right)\alpha \left( a^{\dfrac{1}{2}}\phi_1 - b^{\dfrac{1}{2}}\phi_2 \right)\beta \big| + \big| \left( a^{\dfrac{1}{2}}\phi_1 - b^{\dfrac{1}{2}}\phi_2 \right)\alpha \left( a^{\dfrac{1}{2}}\phi_1 + b^{\dfrac{1}{2}}\phi_2 \right)\beta \big| \right] \nonumber$
involving the polarized orbitals: $a^{\dfrac{1}{2}}\phi_1 \pm b^{\dfrac{1}{2}}\phi_2$, where a = 0.9984 and b = 0.0556.
d. Expand the CI list to 3 configurations by adding the $1\sigma 2\sigma$ configuration to the original $1\sigma^2$ and $2\sigma^2$ configurations of part a above. First, express the proper singlet spin-coupled $1\sigma 2\sigma$ configuration as a combination of Slater determinants, and then compute all elements of this 3x3 matrix.
e. Obtain all eigenenergies and corresponding normalized eigenvectors for this CI problem.
f. Determine the excitation energies and transition moments for HeH$^+$ using the full CI result of part e above. The nonvanishing matrix elements of the dipole operator r(x,y,z) in the atomic basis are:
$\langle 1s_H | z | 1s_{He} \rangle = 0.2854 \text{ and } \langle 1s_H |z| 1s_H \rangle = 1.4 \nonumber$
First determine the matrix elements of r in the SCF orbital basis then determine the excitation energies and transition moments from the ground state to the two excited singlet states of HeH$^+$.
g. Now turning to perturbation theory, carry out a RSPT calculation of the first-order wavefunction $|1\sigma^2\rangle^{(1)}$ for the case in which the zeroth-order wavefunction is taken to be the $1\sigma^2$ Slater determinant. Show that the first-order wavefunction is given by:
$|1\sigma^2 \rangle^{(1)} = -0.0442|2\sigma^2 \rangle . \nonumber$
h. Why does the $|1\sigma 2\sigma \rangle$ configuration not enter into the first-order wavefunction?
i. Normalize the resultant wavefunction that contains zeroth- plus first-order parts and compare it to the wavefunction obtained in the two-configuration CI study of part b.
j. Show that the second-order RSPT correlation energy, $E^{(2)}\text{, of HeH}^+$ is -0.0056 a.u. How does this compare with the correlation energy obtained from the two- configuration CI study of part b?
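Parts g and j can be verified with a few lines of arithmetic: for this closed-shell double excitation, the first-order coefficient is $-H_{12}/[2(\varepsilon_{2\sigma} - \varepsilon_{1\sigma})]$ and $E^{(2)} = -H_{12}^2/[2(\varepsilon_{2\sigma} - \varepsilon_{1\sigma})]$. A sketch using the CI coupling element from part a and the converged orbital energies from the SCF of problem 1 (the numerical values in the comments are taken from that solution):

```python
H12  = 0.1261       # <1sigma^2|H|2sigma^2> coupling from the CI matrix of part a
eps1 = -1.656015    # converged 1-sigma orbital energy (problem 1)
eps2 = -0.2289308   # converged 2-sigma orbital energy (problem 1)

denom = 2 * (eps2 - eps1)      # double-excitation energy denominator
c_first_order = -H12 / denom   # coefficient of |2sigma^2> in the first-order wavefunction
E2 = -H12**2 / denom           # second-order RSPT correlation energy

print(round(c_first_order, 4))  # -0.0442
print(round(E2, 4))             # -0.0056
```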
Q3
Using the QMIC programs, calculate the SCF energy of $HeH^+$ using the same geometry as in problem 1 and the STO3G basis set provided in the QMIC basis set library. How does this energy compare to that found in problem 1? Run the calculation again with the 3-21G basis provided. How does this energy compare to the STO3G and the energy found using STOs in problem 1?
Q4
Generate SCF potential energy surfaces for $HeH^+ \text{ and } H_2$ using the QMIC software provided. Use the 3-21G basis set and generate points for geometries of R = 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5, and 10.0. Plot the energies vs. geometry for each system. Which system dissociates properly?
Q5
Generate CI potential energy surfaces for the 4 states of $H_2$ resulting from a CAS calculation with 2 electrons in the lowest 2 SCF orbitals $(1\sigma_g \text{ and } 1\sigma_u)$. Use the same geometries and basis set as in problem 4. Plot the energies vs. geometry for each system. Properly label and characterize each of the states (e.g., repulsive, dissociate properly, etc.).
Q1
Slater type orbitals (STOs) are "hydrogen-like" in that they have a normalized form of: $\left( \dfrac{2\xi}{a_0} \right)^{n+\dfrac{1}{2}}\left( \dfrac{1}{(2n)!}\right)^{\dfrac{1}{2}}r^{n-1}e^{\left(\dfrac{-\xi r}{a_0}\right)} Y_{l,m} (\theta, \phi), \nonumber$ whereas gaussian type orbitals (GTOs) have the form: $N' x^ay^bz^c e^{(-\alpha r^2)}, \nonumber$ where a, b, and c are quantum numbers, each ranging from zero upward in unit steps. STOs give "better" overall energies and properties that depend on the shape of the wavefunction near the nuclei (e.g., Fermi contact ESR hyperfine constants), but they are more difficult to use (two-electron integrals are more difficult to evaluate, especially the 4-center variety, which have to be integrated numerically). GTOs, on the other hand, are easier to use (more easily integrable) but improperly describe the wavefunction near the nuclear centers because they fail the so-called cusp condition (they have zero slope at r = 0, whereas 1s STOs have non-zero slope there).
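The cusp distinction is easy to see numerically: near $r = 0$ a 1s STO $e^{-\xi r}$ has slope $-\xi$, while a GTO $e^{-\alpha r^2}$ has zero slope. A finite-difference sketch (the exponents are arbitrary illustrative choices):

```python
import math

xi, alpha = 1.0, 1.0   # illustrative STO and GTO exponents
h = 1e-6               # small step for a one-sided finite difference

sto = lambda r: math.exp(-xi * r)
gto = lambda r: math.exp(-alpha * r**2)

slope_sto = (sto(h) - sto(0.0)) / h   # ~ -xi: nonzero slope (cusp) at the nucleus
slope_gto = (gto(h) - gto(0.0)) / h   # ~ 0: no cusp
print(slope_sto, slope_gto)
```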
22.6.06: v. Exercise Solutions
Q1
$F \phi_i = \varepsilon_i\phi_i = h \phi_i + \sum_j \left[ J_j - K_j \right] \phi_i \nonumber$
Let the closed shell Fock potential be written as:
\begin{align}V_{ij} &= \sum_k \left( 2\langle ik | jk \rangle - \langle ik | kj \rangle \right) \text{, and the 1e}^- \text{ component as:} \ h_{ij} &= \langle \phi_i | - \dfrac{1}{2} \nabla^2 - \sum\limits_A \dfrac{Z_A}{|r - R_A|}| \phi_j \rangle \text{, and the delta as:} \ \delta_{ij} &= \langle i|j \rangle \text{, so that: } h_{ij} + V_{ij} = \delta_{ij}\varepsilon_{j} \ \text{using: } \phi_i &= \sum\limits_\mu C_{\mu i}\chi_{\mu}, \phi_j = \sum\limits_\nu C_{\nu j}\chi_\nu \text{, and } \phi_k = \sum\limits_\gamma C_{\gamma k} \chi_\gamma \end{align}
, and transforming from the mo to ao basis we obtain:
\begin{align} V_{ij} &= \sum\limits_{k\mu\gamma\nu\kappa} C_{\mu i} C_{\gamma k}C_{\nu j} C_{\kappa k} \left( 2\langle \mu\gamma |\nu\kappa \rangle - \langle \mu\gamma | \kappa \nu \rangle \right) \ &= \sum\limits_{k \mu \gamma \nu\kappa} \left(C_{\gamma k} C_{\kappa k}\right)\left(C_{\mu i}C_{\nu j}\right) \left( 2\langle \mu\gamma | \nu\kappa \rangle - \langle \mu\gamma | \kappa\nu \rangle \right) \ &= \sum\limits_{\mu\nu} \left( C_{\mu i} C_{\nu j} \right) V_{\mu \nu} \text{ where, } \ V_{\mu\nu} &= \sum\limits_{\gamma\kappa} P_{\gamma\kappa} \left( 2\langle \mu\gamma | \nu\kappa \rangle - \langle \mu\gamma | \kappa\nu \rangle \right) \text{, and } P_{\gamma\kappa} = \sum\limits_k \left( C_{\gamma k}C_{\kappa k} \right) , \ h_{ij} &= \sum\limits_{\mu\nu} \left( C_{\mu i}C_{\nu j} \right) h_{\mu\nu} \text{ , where } \ h_{\mu\nu} &= \langle \chi_\mu | - \dfrac{1}{2} \nabla^2 - \sum\limits_A \dfrac{Z_A}{|r - R_A|} | \chi_\nu \rangle \text{ , and } \ \delta_{ij} &= \langle i|j \rangle = \sum\limits_{\mu\nu} \left( C_{\mu i} S_{\mu\nu} C_{\nu j} \right). \end{align}
So, $h_{ij} + V_{ij} = \delta_{ij}\varepsilon_j$ becomes:
\begin{align} & \sum\limits_{\mu\nu} \left( C_{\mu i}C_{\nu j} \right) h_{\mu\nu} + \sum\limits_{\mu\nu} \left( C_{\mu i}C_{\nu j}\right) V_{\mu\nu} = \sum\limits_{\mu\nu} \left( C_{\mu i}S_{\mu\nu}C_{\nu j} \right) \varepsilon_j, \ & \sum\limits_{\mu\nu} \left( C_{\mu i}S_{\mu\nu} C_{\nu j} \right) \varepsilon_j - \sum\limits_{\mu\nu} \left( C_{\mu i}C_{\nu j}\right) h_{\mu\nu} - \sum\limits_{\mu\nu} \left( C_{\mu i}C_{\nu j} \right) V_{\mu\nu} = 0 \text{ for all i,j } \ & \sum\limits_{\mu\nu} C_{\mu i} \left[ \varepsilon_jS_{\mu\nu} - h_{\mu\nu} - V_{\mu\nu} \right] C_{\nu j} = 0 \text{ for all i,j} \ \text{Therefore, } & \ \sum\limits_\nu \left[ h_{\mu\nu} + V_{\mu\nu} - \varepsilon_jS_{\mu\nu} \right] C_{\nu j} = 0 \end{align}
This is FC = SCE.
Q2
The Slater Condon rule for zero (spin orbital) difference with N electrons in N spin orbitals is:
\begin{align} E &= \langle |H + G|\rangle = \sum\limits_i^N \langle \phi_i |h|\phi_i \rangle + \sum\limits_{i>j}^N \left( \langle \phi_i\phi_j |g| \phi_i\phi_j \rangle - \langle \phi_i\phi_j |g| \phi_j\phi_i \rangle \right) \ &= \sum\limits_i h_{ii} + \sum\limits_{i>j} \left( g_{ijij} - g_{ijji} \right) \ &= \sum\limits_i h_{ii} + \dfrac{1}{2}\sum\limits_{ij} \left( g_{ijij} - g_{ijji} \right) \end{align}
If all orbitals are doubly occupied and we carry out the spin integration we obtain:
$E = 2\sum\limits_i^{occ} h_{ii} + \sum\limits_{ij}^{occ} \left( 2g_{ijij} - g_{ijji} \right), \nonumber$
where i and j now refer to orbitals (not spin-orbitals).
Q3
If the occupied orbitals obey $F\phi_k = \varepsilon_k\phi_k$, then the expression for E in problem 2 above can be rewritten as:
$E = \sum\limits_{i}^{occ} \left( h_{ii} + \sum\limits_j^{occ} \left( 2g_{ijij} - g_{ijji} \right) \right) + \sum\limits_{i}^{occ} h_{ii} \nonumber$
We recognize the closed shell Fock operator expression and rewrite this as
$E = \sum\limits_i^{occ} F_{ii} + \sum\limits_i^{occ} h_{ii} = \sum\limits_i^{occ} \left( \varepsilon_i + h_{ii} \right) \nonumber$
1. We will use the QMIC software to do this problem. Let's start from the beginning. Get the starting "guess" mo coefficients on disk. Using the program MOCOEFS it asks us for the first and second mo vectors. We input 1, 0 for the first mo (this means that the first mo is 1.0 times the He 1s orbital plus 0.0 times the H 1s orbital; this bonding mo is more likely to be heavily weighted on the atom having the higher nuclear charge) and 0, 1 for the second. Our beginning mo-ao array looks like: \begin{bmatrix} 1.0 & 0.0 \ 0.0 & 1.0 \end{bmatrix} and is placed on disk in a file we choose to call "mocoefs.dat". We also put the ao integrals on disk using the program RW_INTS. It asks for the unique one- and two-electron integrals and places a canonical list of these on disk in a file we choose to call "ao_integrals.dat". At this point it is useful for us to step back and look at the set of equations which we wish to solve: FC = SCE. The QMIC software does not provide us with a so-called generalized eigenvalue solver (one that contains an overlap matrix, or metric), so in order to use the diagonalization program that is provided we must transform this equation (FC = SCE) to one that looks like (F'C' = C'E). We do that in the following manner:
Since S is symmetric and positive definite we can find an $S^{-\dfrac{1}{2}} \text{ such that } S^{-\dfrac{1}{2}} S^{+\dfrac{1}{2}} = 1, S^{-\dfrac{1}{2}} S = S^{+\dfrac{1}{2}}$, etc. Rewrite FC = SCE by inserting unity between F and C and multiplying the whole equation on the left by $S^{-\dfrac{1}{2}}$. This gives:
$S^{-\dfrac{1}{2}} FS^{-\dfrac{1}{2}}S^{+\dfrac{1}{2}} C = S^{-\dfrac{1}{2}} SCE = S^{+\dfrac{1}{2}}CE \nonumber$
Letting: \begin{align} F' &= S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}} \ C' &= S^{+\dfrac{1}{2}}C \text{, and inserting the expressions above gives: } \ F'C' &= C'E \end{align}
Note that to get the next iteration's mo coefficients we must calculate C from C':
C' = $S^{+\dfrac{1}{2}}$C, so, multiplying through on the left by $S^{-\dfrac{1}{2}}$ gives:
$S^{-\dfrac{1}{2}}C' = S^{-\dfrac{1}{2}}S^{+\dfrac{1}{2}}C = C \nonumber$
This will be the method we will use to solve our Fock equations.
Find $S^{-\dfrac{1}{2}}$ by using the program FUNCT_MAT (this program generates a function of a matrix). This program will ask for the elements of the S array and write to disk a file (name of your choice ... a good name might be "shalf") containing the $S^{-\dfrac{1}{2}}$ array. Now we are ready to begin the iterative Fock procedure.
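For this 2x2 case the FUNCT_MAT step can also be reproduced by hand: the eigenvectors of a symmetric matrix with equal diagonal elements are $(1, \pm 1)/\sqrt{2}$, so $S^{-\dfrac{1}{2}}$ follows directly from the eigenvalues $1 \pm S_{12}$. A Python sketch using the overlap value from the integral list:

```python
s12 = 0.5784                            # AO overlap S_12 for HeH+ at R = 1.4 a.u.
lam_plus, lam_minus = 1 + s12, 1 - s12  # eigenvalues of [[1, s], [s, 1]]

# S^{-1/2} = V diag(lam^{-1/2}) V^T with V columns (1,1)/sqrt(2), (1,-1)/sqrt(2):
a = 0.5 * (lam_plus**-0.5 + lam_minus**-0.5)  # diagonal element
b = 0.5 * (lam_plus**-0.5 - lam_minus**-0.5)  # off-diagonal element
S_half_inv = [[a, b], [b, a]]
print(S_half_inv)  # ~ [[1.168032, -0.372071], [-0.372071, 1.168032]]
```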
a. Calculate the Fock matrix, F, using program FOCK which reads in the mo coefficients from "mocoefs.dat" and the integrals from "ao_integrals.dat" and writes the resulting Fock matrix to a user specified file (a good filename to use might be something like "fock1").
b. Calculate F' = $S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$ using the program UTMATU which reads in F and $S^{-\dfrac{1}{2}}$ from files on the disk and writes F' to a user specified file (a good filename to use might be something like "fock1p"). Diagonalize F' using the program DIAG. This program reads in the matrix to be diagonalized from a user specified filename and writes the resulting eigenvectors to disk using a user specified filename (a good filename to use might be something like "coef1p"). You may wish to choose the option to write the eigenvalues (Fock orbital energies) to disk in order to use them at a later time in program FENERGY. Calculate C by back transforming e.g. C = $S^{-\dfrac{1}{2}}$ C'. This is accomplished by using the program MATXMAT which reads in two matrices to be multiplied from user specified files and writes the product to disk using a user specified filename (a good filename to use might be something like "mocoefs.dat").
c. The QMIC program FENERGY calculates the total energy, using the result of exercises 2 and 3;
$\sum\limits_{k} 2\langle k|h|k\rangle + \sum\limits_{kl}\left( 2\langle kl|kl \rangle - \langle kl|lk \rangle \right) + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} \text{, and } \sum\limits_k \left( \varepsilon_k + \langle k|h|k \rangle \right) + \sum\limits_{\mu > \nu }\dfrac{Z_\mu Z_\nu}{R_{\mu\nu}}. \nonumber$
This is the conclusion of one iteration of the Fock procedure ... you may continue by going back to part a. and proceeding onward.
d. and e. Results for the successful convergence of this system using the supplied QMIC software are as follows (this is a lot of detail but will give the user assurance that they are on the right track; alternatively one could switch to the QMIC program SCF and allow that program to iteratively converge the Fock equations):
The one-electron AO integrals:
$\begin{bmatrix} -2.644200 & -1.511300 \ -1.511300 & -1.720100 \end{bmatrix} \nonumber$
The two-electron AO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 1.054700 & \ & 2 & & 1 & & 1 & & 1 & & 0.4744000 & \ & 2 & & 1 & & 2 & & 1 & & 0.5664000 & \ & 2 & & 2 & & 1 & & 1 & & 0.2469000 & \ & 2 & & 2 & & 2 & & 1 & & 0.3504000 & \ & 2 & & 2 & & 2 & & 2 & & 0.6250000 & \end{align} \nonumber
The "initial" MO-AO coeffficients:
$\begin{bmatrix} 1.000000 & 0.000000 \ 0.000000 & 1.000000 \end{bmatrix} \nonumber$
AO overlap matrix (S):
$\begin{bmatrix} 1.000000 & 0.578400 \ 0.578400 & 1.000000 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}} \begin{bmatrix} 1.168032 & -0.3720709 \ -0.3720709 & 1.168031 \end{bmatrix} \nonumber$
************
ITERATION 1
************
The charge bond order matrix
$\begin{bmatrix} 1.000000 & 0.000000 \ 0.000000 & 0.000000 \end{bmatrix} \nonumber$
The Fock matrix (F):
$\begin{bmatrix} -1.589500 & -1.036900 \ -1.036900 & -0.8342001 \end{bmatrix} \nonumber$
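This iteration-1 Fock matrix can be rebuilt from the AO integrals and the charge bond order matrix above. The sketch below stores the two-electron integrals in chemist notation, obtained from the Dirac values via $(pr|qs) = \langle pq|rs\rangle$; the integral-convention bookkeeping is the main assumption to check:

```python
import itertools

# One-electron AO integrals h_{mu nu}, and the unique two-electron integrals
# converted from the problem's Dirac values <pq|rs> to chemist notation (pq|rs):
h = {(1, 1): -2.6442, (2, 2): -1.7201, (1, 2): -1.5113, (2, 1): -1.5113}
chem = {(1, 1, 1, 1): 1.0547, (2, 1, 1, 1): 0.4744, (2, 2, 1, 1): 0.5664,
        (2, 1, 2, 1): 0.2469, (2, 2, 2, 1): 0.3504, (2, 2, 2, 2): 0.6250}

def g(p, q, r, s):
    """(pq|rs), using the 8-fold permutational symmetry of real orbitals."""
    for key in ((p, q, r, s), (q, p, r, s), (p, q, s, r), (q, p, s, r),
                (r, s, p, q), (s, r, p, q), (r, s, q, p), (s, r, q, p)):
        if key in chem:
            return chem[key]
    raise KeyError((p, q, r, s))

# Initial guess phi_1 = 1s_He, with the factor of 2 for double occupancy:
C = {1: 1.0, 2: 0.0}
P = {(u, v): 2.0 * C[u] * C[v] for u in (1, 2) for v in (1, 2)}

def fock(mu, nu):
    """Closed-shell F: h + sum_{ga ka} P [(mu nu|ga ka) - (mu ka|ga nu)/2]."""
    val = h[(mu, nu)]
    for ga, ka in itertools.product((1, 2), repeat=2):
        val += P[(ga, ka)] * (g(mu, nu, ga, ka) - 0.5 * g(mu, ka, ga, nu))
    return val

F = [[fock(m, n) for n in (1, 2)] for m in (1, 2)]
print(F)  # ~ [[-1.5895, -1.0369], [-1.0369, -0.8342]]
```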
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.382781 & -0.5048679 \ -0.5048678 & -0.4568883 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.604825 -0.2348450 ] \nonumber$
Their corresponding eigenvectors (C' = $S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9153809 & -0.4025888 \ -0.4025888 & 0.9153810 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C = $S^{-\dfrac{1}{2}}$*C')
$\begin{bmatrix} -0.9194022 & -0.8108231 \ -0.1296498 & 1.218985 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.624352 & -0.1644336 \ -0.1644336 & -1.306845 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9779331 & \ & 2 & & 1 & & 1 & &1 & & 0.1924623 & \ & 2 & & 1 & & 2 & & 1 & & 0.5972075 & \ & 2 & & 2 & & 1 & & 1 & & 0.1170838 & \ & 2 & & 2 & & 2 & & 1 & & -0.0007945194 & \ & 2 & & 2 & & 2 & & 2 & & 0.6157323 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2 \langle k|h|k \rangle + 2\langle k1|k1 \rangle - \langle k1|1k \rangle + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84219933 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k \rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.80060530 \nonumber$
the difference is:
$-0.04159403 \nonumber$
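These three numbers can be reproduced from the iteration-1 MO-basis quantities listed above ($h_{11}$, $\langle 11|11\rangle$, $\varepsilon_1$) plus the nuclear repulsion $Z_{He}Z_H/R = 2/1.4$ a.u.; for a single doubly occupied orbital the $2J - K$ term reduces to $J$. A short check:

```python
h11   = -2.624352        # one-electron MO integral <phi1|h|phi1> (iteration 1)
g1111 = 0.9779331        # two-electron MO integral <11|11> (iteration 1)
eps1  = -1.604825        # lowest Fock orbital energy (iteration 1)
Vnn   = 2.0 * 1.0 / 1.4  # He (Z=2) - H (Z=1) repulsion at R = 1.4 a.u.

E_full = 2 * h11 + g1111 + Vnn  # exercise 2 form; 2J - K = J for one orbital
E_eps  = eps1 + h11 + Vnn       # exercise 3 form: eps_k + h_kk
print(E_full, E_eps, E_full - E_eps)
```

The two values agree only once the orbitals are self-consistent, which is why the printed difference shrinks from iteration to iteration.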
************
ITERATION 2
************
The charge bond order matrix:
$\begin{bmatrix} 0.8453005 & 0.1192003 \ 0.1192003 & 0.01680906 \end{bmatrix} \nonumber$
The Fock matrix:
$\begin{bmatrix} -1.624673 & -1.083623 \ -1.083623 & -0.8772071 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}F S^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.396111 & -0.5411037 \ -0.5411037 & -0.4798213 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.646972 -0.2289599 ] \nonumber$
Their corresponding eigenvectors (C' = $S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9072427 & -0.4206074 \ -0.4206074 & 0.9072427 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C = $S^{-\dfrac{1}{2}}$*C'):
$\begin{bmatrix} -0.9031923 & -0.8288413 \ -0.1537240 & 1.216184 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.617336 & -0.1903475 \ -0.1903475 & -1.313861 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9626070 & \ & 2 & & 1 & & 1 & & 1 & & 0.1949828 & \ & 2 & & 1 & & 2 & & 1 & & 0.6048143 & \ & 2 & & 2 & & 1 & & 1 & & 0.1246907 & \ & 2 & & 2 & & 2 & & 1 & & 0.003694540 & \ & 2 & & 2 & & 2 & & 2 & & 0.6158437 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2\langle k|h|k\rangle + 2\langle kl|kl \rangle - \langle kl|lk \rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84349298 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k \rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.83573675 \nonumber$
the difference is:
$-0.00775623 \nonumber$
************
ITERATION 3
************
The charge bond order matrix:

$\begin{bmatrix} 0.8157563 & 0.1388423 \ 0.1388423 & 0.0236311 \end{bmatrix} \nonumber$

The Fock matrix:
$\begin{bmatrix} -1.631153 & -1.091825 \ -1.091825 & -0.8853514 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.398951 & -0.5470731 \ -0.5470730 & -0.4847007 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.654745 -0.2289078 ] \nonumber$
Their corresponding eigenvectors (C' = $S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9058709 & -0.4235546 \ -0.4235545 & 0.9058706 \end{bmatrix} \nonumber$
The "new" MM-AO coefficients (C=S$^{-\dfrac{1}{2}}$*C'):
$\begin{bmatrix} -0.9004935 & -0.8317733 \ -0.1576767 & 1.215678 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.616086 & -0.1945811 \ -0.1945811 & -1.315112 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & &1 & & 0.9600707 & \ & 2 & & 1 & & 1 & & 1 & & 0.1953255 & \ & 2 & & 1 & & 2 & & 1 & & 0.6060572 & \ & 2 & & 2 & & 1 & & 1 & & 0.1259332 & \ & 2 & & 2 & & 2 & & 1 & & 0.004475587 & \ & 2 & & 2 & & 2 & & 2 & & 0.6158972 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2\langle k|h|k\rangle + 2\langle kl|kl\rangle - \langle kl|lk\rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu }} = -2.84353018 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k \rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84225941 \nonumber$
the difference is:
$-0.00127077 \nonumber$
************
ITERATION 4
************
The charge bond order matrix:
$\begin{bmatrix} 0.8108885 & -1.093155 \ -1.093155 & -0.8866909 \end{bmatrix} \nonumber$
The Fock matrix:
$\begin{bmatrix} -1.632213 & -1.093155 \ -1.093155 & -0.8866909 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.399426 & -0.5480287 \ -0.5480287 & -0.4855191 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.656015 -0.2289308 ] \nonumber$
Their corresponding eigenvectors (C' = $S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9056494 & -0.4240271 \ -0.4240271 & 0.9056495 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C = $S^{-\dfrac{1}{2}}$*C'):
$\begin{bmatrix} -0.9000589 & -0.8322428 \ -0.1583111 & 1.215595 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.615881 & -0.1952594 \ -0.1952594 & -1.315315 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9596615 & \ & 2 & & 1 & & 1 & & 1 & & 0.1953781 & \ & 2 & & 1 & & 2 & & 1 & & 0.6062557 & \ & 2 & & 2 & & 1 & & 1 & & 0.1261321 \ & 2 & & 2 & & 2 & & 1 & & 0.004601604 & \ & 2 & & 2 & & 2 & & 2 & & 0.6159065 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2\langle k|h|k\rangle + 2\langle kl|kl\rangle - \langle kl|lk\rangle + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352922 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k\rangle + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84332418 \nonumber$
the difference is:
$-0.00020504 \nonumber$
************
ITERATION 5
************
The charge bond order matrix:
$\begin{bmatrix} 0.8101060 & 0.1424893 \ 0.1424893 & 0.02506241 \end{bmatrix} \nonumber$
The Fock matrix:
$\begin{bmatrix} -1.632385 & -1.093368 \ -1.093368 & -0.8869066 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.399504 & -0.5481812 \ -0.5481813 & -0.4856516 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.656219 -0.2289360 ] \nonumber$
Their corresponding eigenvectors (C' = S$^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9056138 & -0.4241026 \ -0.4241028 & 0.9056141 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C = $S^{-\dfrac{1}{2}}$*C'):
$\begin{bmatrix} -0.8999892 & -0.8323179 \ -0.1584127 & 1.215582 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.615847 & -0.1953674 \ -0.1953674 & -1.315348 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9595956 & \ & 2 & & 1 & & 1 & & 1 & & 0.1953862 & \ & 2 & & 1 & & 2 & & 1 & & 0.6062872 & \ & 2 & & 2 & & 1 & & 1 & & 0.1261639 & \ & 2 & & 2 & & 2 & & 1 & & 0.004621811 & \ & 2 & & 2 & & 2 & & 2 & & 0.6159078 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2\langle k|h|k\rangle + 2\langle kl|kl\rangle - \langle kl|lk \rangle + \sum\limits_{\mu > \nu} \dfrac{Z_{\mu}Z_\nu}{R_{\mu\nu}} = -2.84352779 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k\rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84349489 \nonumber$
the difference is:
$-0.00003290 \nonumber$
************
ITERATION 6
************
The charge bond order matrix:
$\begin{bmatrix} 0.8099805 & 0.1425698 \ 0.1425698 & 0.02509460 \end{bmatrix} \nonumber$
The Fock matrix:
$\begin{bmatrix} -1.632412 & -1.093402 \ -1.093402 & -0.8869413 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.399517 & -0.5482056 \ -0.5482056 & -0.4856730 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.656253 -0.2289375 ] \nonumber$
Their corresponding eigenvectors (C' = $S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9056085 & -0.4241144 \ -0.4241144 & 0.9056086 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C = $S^{-\dfrac{1}{2}}$*C'):
$\begin{bmatrix} -0.8999786 & -0.8323296 \ -0.1584283 & 1.215580 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.615843 & -0.1953846 \ -0.1953846 & -1.315353 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9595859 & \ & 2 & & 1 & & 1 & & 1 & & 0.1953878 & \ & 2 & & 1 & & 2 & & 1 & & 0.6062925 & \ & 2 & & 2 & & 2 & & 1 & & 0.004625196 & \ & 2 & & 2 & & 2 & & 2 & & 0.6159083 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2\langle k|h|k\rangle + 2\langle kl|kl\rangle - \langle kl|lk\rangle + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352827 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k\rangle + \sum\limits_{\mu > \nu } \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352398 \nonumber$
the difference is:
$-0.00000429 \nonumber$
************
ITERATION 7
************
The charge bond order matrix:
$\begin{bmatrix} 0.8099616 & 0.1425821 \ 0.1425821 & 0.02509952 \end{bmatrix} \nonumber$
The Fock matrix:
$\begin{bmatrix} -1.632416 & -1.093407 \ -1.093407 & -0.8869464 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.399519 & -0.5482093 \ -0.5482092 & -0.4856761 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.656257 -0.2289374 ] \nonumber$
Their corresponding eigenvectors (C' = $S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9056076 & -0.4241164 \ -0.4241164 & 0.9056077 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C=$S^{-\dfrac{1}{2}}$*C'):
$\begin{bmatrix} -0.8999770 & -0.8323317 \ -0.1584310 & 1.215580 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.615843 & -0.1953876 \ -0.1953876 & -1.315354 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9595849 & \ & 2 & & 1 & & 1 & & 1 & & 0.1953881 & \ & 2 & & 1 & & 2 & & 1 & & 0.6062936 & \ & 2 & & 2 & & 1 & & 1 & & 0.1261697 & \ & 2 & & 2 & & 2 & & 1 & & 0.004625696 & \ & 2 & & 2 & & 2 & & 2 & & 0.6159083 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2 \langle k|h|k\rangle + 2\langle kl|kl \rangle - \langle kl|lk \rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352922 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k\rangle + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352827 \nonumber$
the difference is:
$-0.00000095 \nonumber$
************
ITERATION 8
************
The charge bond order matrix:
$\begin{bmatrix} 0.8099686 & 0.1425842 \ 0.1425842 & 0.02510037 \end{bmatrix} \nonumber$
The Fock matrix:
$\begin{bmatrix} -1.632416 & -1.093408 \ -1.093408 & -0.8869470 \end{bmatrix} \nonumber$
$S^{-\dfrac{1}{2}}FS^{-\dfrac{1}{2}}$
$\begin{bmatrix} -1.399518 & -0.5482103 \ -0.5482102 & -0.4856761 \end{bmatrix} \nonumber$
The eigenvalues of this matrix (Fock orbital energies) are:
$[ -1.656258 -0.2289368 ] \nonumber$
Their corresponding eigenvectors (C'=$S^{+\dfrac{1}{2}}$*C) are:
$\begin{bmatrix} -0.9056074 & -0.4241168 \ -0.4241168 & 0.9056075 \end{bmatrix} \nonumber$
The "new" MO-AO coefficients (C=$S^{-\dfrac{1}{2}}$ *C'):
$\begin{bmatrix} -0.8999765 & -0.8323320 \ -0.1584315 & 1.215579 \end{bmatrix} \nonumber$
The one-electron MO integrals:
$\begin{bmatrix} -2.615842 & -0.1953882 \ -0.1953882 & -1.315354 \end{bmatrix} \nonumber$
The two-electron MO integrals:
\begin{align} & 1 & & 1 & & 1 & & 1 & & 0.9595841 & \ & 2 & & 1 & & 1 & & 1 & & 0.1953881 & \ & 2 & & 1 & & 2 & & 1 & & 0.6062934 & \ & 2 & & 2 & & 1 & & 1 & & 0.1261700 & \ & 2 & & 2 & & 2 & & 1 & & 0.004625901 & \ & 2 & & 2 & & 2 & & 2 & & 0.6159081 & \end{align} \nonumber
The closed shell Fock energy from formula:
$\sum\limits_{kl} 2\langle k|h|k\rangle + 2 \langle kl|kl\rangle - \langle kl|lk\rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352827 \nonumber$
from formula:
$\sum\limits_k \varepsilon_k + \langle k|h|k\rangle + \sum\limits_{\mu >\nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} = -2.84352827 \nonumber$
the difference is:
$0.00000000 \nonumber$
f. In looking at the energy convergence we see the following:
Iteration Formula 1 Formula 2
1 -2.84219933 -2.80060530
2 -2.84349298 -2.83573675
3 -2.84353018 -2.84225941
4 -2.84352922 -2.84332418
5 -2.84352779 -2.84349489
6 -2.84352827 -2.84352827
7 -2.84352922 -2.84352827
8 -2.84352827 -2.84352827
If you plot the energy difference (SCF energy at iteration $n$ minus the converged SCF energy) versus iteration number and fit the data with a 5th-order polynomial, we see the following:
In looking at the polynomial fit we see that the convergence is primarily linear since the coefficient of the linear term is much larger than those of the cubic and higher terms.
g. The converged SCF total energy calculated using the result of exercise 3 is an upper bound to the ground state energy, but during the iterative procedure it is not. At convergence, the expectation value of the Hamiltonian for the Hartree-Fock determinant is given by the equation in exercise 3.
h. The one- and two-electron integrals in the MO basis are given above (see part e, iteration 8). The orbital energies are found using the results of exercises 2 and 3 to be:
\begin{align} E(SCF) &= \sum\limits_k \varepsilon_k + \langle k|h|k\rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} \ E(SCF) &= \sum\limits_{kl} 2\langle k|h|k \rangle + 2\langle kl|kl\rangle - \langle kl|lk\rangle + \sum\limits_{\mu > \nu} \dfrac{Z_\mu Z_\nu}{R_{\mu\nu}} \ \text{so, } \varepsilon_k &= \langle k|h|k\rangle + \sum\limits_l^{occ} (2\langle kl|kl\rangle - \langle kl|lk\rangle ) \ \varepsilon_1 &= h_{11} + 2\langle 11|11 \rangle - \langle 11|11\rangle \ &= -2.615842 + 0.9595841 \ &= -1.656258 \ \varepsilon_2 &= h_{22} + 2\langle 21|21\rangle - \langle 21|12\rangle \ &= -1.315354 + 2*0.6062934 - 0.1261700 \ &= -0.2289372 \end{align}
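These orbital energies can be verified with a few lines of Python (an illustrative sketch, not part of the QMIC programs used in this problem), using the converged integrals quoted above:

```python
# Converged MO integrals from iteration 8 above
h11, h22 = -2.615842, -1.315354
g1111, g2121, g2211 = 0.9595841, 0.6062934, 0.1261700

# epsilon_k = <k|h|k> + sum over occupied l of (2<kl|kl> - <kl|lk>); only l = 1 is occupied
eps1 = h11 + 2 * g1111 - g1111   # Coulomb and exchange both equal <11|11>
eps2 = h22 + 2 * g2121 - g2211

print(round(eps1, 6), round(eps2, 7))  # -1.656258 -0.2289372
```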
i. Yes, the 1$\sigma^2$ configuration does dissociate properly because at $R\rightarrow \infty$ the lowest energy state is He + $H^+$, which also has a $1\sigma^2$ orbital occupancy (i.e., $1s^2$ on He and $1s^0 \text{ on } H^+$).
1. At convergence the mo coefficients are:
$\phi_1 = \begin{bmatrix} & -0.8999765 \ & -0.1584315 \end{bmatrix} \phi_2 = \begin{bmatrix} & -0.8323320 \ & 1.215579 \end{bmatrix} \nonumber$
and the integrals in this MO basis are:
\begin{align} & h_{11} = -2.615842 & & h_{21} = -0.1953882 & & h_{22} = -1.315354 & \ & g_{1111} = 0.9595841 & & g_{2111} = 0.1953881 & & g_{2121} = 0.6062934 & \ & g_{2211} = 0.1261700 & & g_{2221} = 0.004625901 & & g_{2222} = 0.6159081 & \end{align}
a. \begin{align} H &= \begin{bmatrix} \langle 1\sigma^2|H|1\sigma^2\rangle & \langle 1\sigma^2|H|2\sigma^2\rangle \ \langle 2\sigma^2|H|1\sigma^2\rangle & \langle 2\sigma^2|H|2\sigma^2\rangle \end{bmatrix} = \begin{bmatrix} 2h_{11} + g_{1111} & g_{1122} \ g_{1122} & 2h_{22} + g_{2222} \end{bmatrix} \ &= \begin{bmatrix} 2*-2.615842 + 0.9595841 & 0.1261700 \ 0.126170 & 2*-1.315354 + 0.6159081 \end{bmatrix} \ & = \begin{bmatrix} -4.272100 & 0.126170 \ 0.126170 & -2.014800 \end{bmatrix} \end{align}
b. The eigenvalues are $E_1 = -4.279131 \text{ and } E_2 = -2.007770.$ The corresponding eigenvectors are:
$C_1 = \begin{bmatrix} & -0.99845123 \ & 0.05563439 \end{bmatrix} \text{, } C_2 = \begin{bmatrix} & 0.05563438 \ & 0.99845140 \end{bmatrix} \nonumber$
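As a check, this 2x2 matrix can be rebuilt from the converged integrals and diagonalized numerically; the following sketch assumes NumPy is available (it is not part of the original problem set):

```python
import numpy as np

# Rebuild the 2x2 CI Hamiltonian from the converged MO integrals (part a)
h11, h22 = -2.615842, -1.315354
g1111, g2222, g1122 = 0.9595841, 0.6159081, 0.1261700

H = np.array([[2 * h11 + g1111, g1122          ],
              [g1122,           2 * h22 + g2222]])

E, C = np.linalg.eigh(H)  # eigenvalues in ascending order, eigenvectors as columns
print(E)                  # approximately [-4.279131, -2.007770]
```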
c. \begin{align} \dfrac{1}{2}&\left[ \bigg| \left(\sqrt{a}\phi_1 + \sqrt{b}\phi_2 \right) \alpha \left( \sqrt{a}\phi_1 - \sqrt{b}\phi_2\right)\beta\bigg| + \bigg|\left(\sqrt{a}\phi_1 - \sqrt{b}\phi_2\right)\alpha\left(\sqrt{a}\phi_1 + \sqrt{b}\phi_2\right)\beta\bigg| \right] \ &= \dfrac{1}{2\sqrt{2}}\left[ \left( \sqrt{a}\phi_1 + \sqrt{b}\phi_2 \right)\left( \sqrt{a}\phi_1 - \sqrt{b}\phi_2 \right) + \left( \sqrt{a}\phi_1 - \sqrt{b}\phi_2 \right)\left( \sqrt{a}\phi_1 + \sqrt{b}\phi_2 \right) \right](\alpha\beta - \beta\alpha ) \ &= \dfrac{1}{\sqrt{2}}(a\phi_1\phi_1 - b\phi_2\phi_2)(\alpha\beta - \beta\alpha) \ &= a|\phi_1\alpha\phi_1\beta | - b|\phi_2\alpha\phi_2\beta |. & \text{(note from part b. a = 0.9984 and b = 0.0556)}\end{align}
d. The third configuration $|1\sigma 2\sigma | = \dfrac{1}{\sqrt{2}} \left[ |1\alpha 2\beta | - | 1 \beta 2\alpha | \right]$,
Adding this configuration to the previous 2x2 CI results in the following 3x3 'full' CI:
\begin{align} H &= \begin{bmatrix} \langle 1\sigma^2 |H|1\sigma^2\rangle & \langle 1\sigma^2 |H|2\sigma^2\rangle & \langle 1\sigma^2 |H|1\sigma 2\sigma\rangle \ \langle 2\sigma^2 |H|1\sigma^2\rangle & \langle 2\sigma^2 |H|2\sigma^2\rangle & \langle 2\sigma^2 |H|1\sigma 2\sigma\rangle \ \langle 1\sigma 2\sigma |H|1\sigma^2\rangle & \langle 1\sigma 2\sigma |H|2\sigma^2 \rangle & \langle 1\sigma 2\sigma |H|1\sigma 2\sigma\rangle \end{bmatrix} \ &= \begin{bmatrix} 2h_{11} + g_{1111} & g_{1122} & \dfrac{1}{\sqrt{2}}\left[ 2h_{12} + 2g_{2111}\right] \ g_{1122} & 2h_{22} + g_{2222} & \dfrac{1}{\sqrt{2}}\left[ 2h_{12} + 2g_{2221}\right] \ \dfrac{1}{\sqrt{2}}\left[ 2h_{12} + 2g_{2111}\right] & \dfrac{1}{\sqrt{2}}\left[ 2h_{12} + 2g_{2221}\right] & h_{11} + h_{22} + g_{2121} + g_{2211} \end{bmatrix} \end{align}
Evaluating the new matrix elements:
\begin{align} H_{13} &= H_{31} = \sqrt{2}*(-0.1953882 + 0.1953881) = 0.0 \ H_{23} &= H_{32} = \sqrt{2}*(-0.1953882 + 0.004626) = -0.269778 \ H_{33} &= -2.615842 - 1.315354 + 0.606293 + 0.126170 \ &= -3.198733 \ H &= \begin{bmatrix} -4.272100 & 0.126170 & 0.0 \ 0.126170 & -2.014800 & -0.269778 \ 0.0 & -0.269778 & -3.198733 \end{bmatrix}\end{align}
e. The eigenvalues are $E_1 = -4.279345 \text{, } E_2 = -3.256612 \text{ and } E_3 = -1.949678.$ The corresponding eigenvectors are:
$C_1 = \begin{bmatrix} & -0.99825280 \ & 0.05732290 \ & 0.01431085 \end{bmatrix} \text{ , }C_2 = \begin{bmatrix} & -0.05302767 \ & -0.20969283 \ & -0.9774200 \end{bmatrix} \text{ , }C_3 = \begin{bmatrix} & -0.05302767 \ & -0.97608540 \ & 0.21082004 \end{bmatrix} \nonumber$
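The 3x3 diagonalization can be checked the same way (a NumPy sketch, with the matrix elements taken from part d above):

```python
import numpy as np

# The 3x3 'full' CI Hamiltonian assembled in part d
H = np.array([[-4.272100,  0.126170,  0.0     ],
              [ 0.126170, -2.014800, -0.269778],
              [ 0.0,      -0.269778, -3.198733]])

E, C = np.linalg.eigh(H)
print(E)  # approximately [-4.279345, -3.256612, -1.949678]
```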
f.
We need the non-vanishing matrix elements of the dipole operator in the MO basis. These can be obtained by hand, but they are more easily obtained using the TRANS program. Put the $1e^-$ AO integrals on disk by running the program RW_INTS. In this case you are inserting $z_{11} = 0.0 \text{, } z_{21} = 0.2854 \text{, and } z_{22} = 1.4$ (insert 0.0 for all the $2e^-$ integrals) ... call the output file "ao_dipole.ints" for example. The converged MO-AO coefficients should be in a file ("mocoefs.dat" is fine). The transformed integrals can be written to a file (name of your choice), for example "mo_dipole.ints". These matrix elements are:
$z_{11} = 0.11652690 \text{, } z_{21} = -0.54420990 \text{, } z_{22} = 1.49117320 \nonumber$
The excitation energies are $E_2 - E_1 = -3.256612 - (-4.279345) = 1.022733 \text{, and } E_3 - E_1 = -1.949678 - (-4.279345) = 2.329667.$
Using the Slater-Condon rules to obtain the matrix elements between configurations we get:
\begin{align} H_z &= \begin{bmatrix} \langle 1\sigma^2 |z| 1\sigma^2 \rangle & \langle 1\sigma^2 |z| 2\sigma^2 \rangle & \langle 1\sigma^2 |z| 1\sigma 2\sigma \rangle \ \langle 2\sigma^2 |z| 1\sigma^2 \rangle & \langle 2\sigma^2 |z| 2\sigma^2 \rangle & \langle 2\sigma^2 |z| 1\sigma 2\sigma \rangle \ \langle 1\sigma 2\sigma |z| 1\sigma^2 \rangle & \langle 1\sigma 2\sigma |z| 2\sigma^2 \rangle & \langle 1\sigma 2\sigma |z| 1\sigma 2\sigma \rangle \end{bmatrix} \ &=\begin{bmatrix} 2z_{11} & 0 & \dfrac{1}{\sqrt{2}}\left[ 2z_{12}\right] \ 0 & 2z_{22} & \dfrac{1}{\sqrt{2}}\left[ 2z_{12}\right] \ \dfrac{1}{\sqrt{2}}\left[ 2z_{12}\right] & \dfrac{1}{\sqrt{2}}\left[ 2z_{12}\right] & z_{11} + z_{22} \end{bmatrix} \ &= \begin{bmatrix} 0.233054 & 0 & -0.769629 \ 0 & 2.982346 & -0.769629 \ -0.769629 & -0.769629 & 1.607700 \end{bmatrix} \end{align}
Now, $\langle \Psi_1 |z| \Psi_2 \rangle = C_1^TH_zC_2$, (this can be accomplished with the program UTMATU)
\begin{align} &= \begin{bmatrix} &-0.99825280 \ & 0.05732290 \ & 0.01431085 \end{bmatrix}^T \begin{bmatrix} 0.233054 & 0 & -0.769629 \ 0 & 2.982346 & -0.769629 \ -0.769629 & -0.769629 & 1.607700 \end{bmatrix} \begin{bmatrix} & -0.02605343 \ & -0.20969283 \ & -0.9774200 \end{bmatrix} \ &= -0.757494 \end{align}
and, $\langle \Psi_1 |z| \Psi_3 \rangle = C_1^T H_zC_3$
\begin{align} &= \begin{bmatrix} & -0.99825280 \ & 0.05732290 \ & 0.01431085 \end{bmatrix}^T \begin{bmatrix} 0.233054 & 0 & -0.769629 \ 0 & 2.982346 & -0.769629 \ -0.769629 & -0.769629 & 1.607700 \end{bmatrix} \begin{bmatrix} & -0.05302767 \ & -0.97698540 \ & 0.21082004 \end{bmatrix} \end{align}
g.
Using the converged coefficients the orbital energies obtained from solving the Fock equations are $\varepsilon_1 = -1.656258 \text{ and } \varepsilon_2 = -0.228938.$ The resulting expression for the RSPT first-order wavefunction becomes:
\begin{align} \big| 1\sigma^2 \rangle^{(1)} &= -\dfrac{g_{2211}}{2(\varepsilon_2 - \varepsilon_1 )}\big| 2\sigma^2\rangle \ \big| 1\sigma^2\rangle^{(1)} &= -\dfrac{0.126170}{2( -0.228938 + 1.656258 )} \big|2\sigma^2\rangle \ \big| 1\sigma^2\rangle^{(1)} &= -0.0441982\big| 2\sigma^2\rangle \end{align}
h. As you can see from part d., the matrix element $\langle 1\sigma^2 |H| 1\sigma 2\sigma\rangle = 0$ (this is also a result of the Brillouin theorem) and hence this configuration does not enter into the first-order wavefunction.
i. \begin{align}& \big| 0\rangle = \big| 1\sigma^2\rangle - 0.0441982 \big| 2\sigma^2 \rangle . \text{To normalize we divide by:} \ & \sqrt{\left[ 1 + (0.0441982)^2 \right]} = 1.0009762 \ &\big|0\rangle = 0.999025\big| 1\sigma^2\rangle - 0.044155\big| 2\sigma^2\rangle \end{align}
In the 2x2 CI we obtained:
\begin{align} & \big|0\rangle = 0.99845123 \big| 1\sigma^2 \rangle - 0.05563439\big| 2\sigma^2\rangle \end{align}
j. The expression for the $2^{nd}$ order RSPT is:
\begin{align} E^{(2)} &= -\dfrac{|g_{2211}|^2}{2(\varepsilon_2 - \varepsilon_1 )} = -\dfrac{0.126170^2}{2(-0.228938 + 1.656258)} \ &= -0.005576 \text{ au } \end{align}
Comparing the 2x2 CI energy obtained to the SCF result we have:
-4.279131 - (-4.272102) = -0.007029 au
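The RSPT quantities of parts g, i, and j reduce to simple arithmetic on the converged quantities; a short illustrative sketch (not part of the original programs):

```python
# Converged quantities from parts g-j above
g2211 = 0.126170
eps1, eps2 = -1.656258, -0.228938

c = -g2211 / (2 * (eps2 - eps1))       # first-order coefficient of |2 sigma^2>
E2 = -g2211**2 / (2 * (eps2 - eps1))   # second-order energy correction
norm = (1 + c**2) ** 0.5               # normalization of |1 sigma^2> + c |2 sigma^2>

print(round(c, 7))         # -0.0441982
print(round(E2, 6))        # -0.005576
print(round(c / norm, 6))  # -0.044155, vs -0.05563439 from the 2x2 CI
```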
2. \begin{align} & \text{STO total energy: } & -2.8435283 \ & \text{STO3G total energy: } & -2.8340561 \ & \text{3-21G total energy: } & -2.8864405 \end{align}
The STO3G orbitals were generated as a best fit of 3 primitive gaussians (giving 1 CGTO) to the STO. So, STO3G can at best reproduce the STO result. The 3-21G orbitals are more flexible since there are 2 CGTOs per atom. This gives 4 orbitals (more parameters to optimize) and a lower total energy.
3. R $HeH^+$ Energy $H_2$ Energy
1.0 -2.812787056 -1.071953297
1.2 -2.870357513 -1.113775015
1.4 -2.886440516 -1.122933507
1.6 -2.886063576 -1.115567684
1.8 -2.880080938 -1.099872589
2.0 -2.872805595 -1.080269098
2.5 -2.856760263 -1.026927710
10.0 -2.835679293 -0.7361705303
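The table brackets the $HeH^+$ minimum between R = 1.4 and 1.6 au. A quadratic fit through the three points surrounding the minimum (an illustrative estimate, not part of the assigned calculation) locates the SCF equilibrium bond length:

```python
# Points bracketing the HeH+ minimum (R in au, E in hartree, from the table above)
R = [1.2, 1.4, 1.6]
E = [-2.870357513, -2.886440516, -2.886063576]

# Vertex of the parabola through three equally spaced points
h = R[1] - R[0]
Re = R[1] - h * (E[2] - E[0]) / (2 * (E[0] - 2 * E[1] + E[2]))
print(round(Re, 3))  # approximately 1.495 au
```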
Plotting total energy vs. geometry for $HeH^+:$
Plotting total energy vs. geometry for $H_2:$
For $HeH^+$ at R = 10.0 au, the eigenvalues of the converged Fock matrix and the corresponding converged MO-AO coefficients are:
\begin{align} -0.1003571E+01 & -0.4961988E+00 & 0.5864846E+00 & 0.1981702E+01 \ 0.4579189E+00 & -0.8245406E-05 & 0.1532163E-04 & 0.1157140E+01 \ 0.6572777E+00 & -0.4580946E-05 & -0.6822942E-05 & -0.1056716E+01 \ -0.1415438E-05 & 0.3734069E+00 & 0.1255539E+01 & -0.1669342E-04 \ 0.1112778E+00 & 0.71732444E+00 & -0.1096019E+01 & 0.2031348E-04 \end{align}
Notice that this indicates that orbital 1 is a combination of the s functions on He only (dissociating properly to $He + H^+$).
For $H_2 \text{ at } R = 10.0$ au, the eigenvalues of the converged Fock matrix and the corresponding converged MO-AO coefficients are:
\begin{align} -0.2458041E+00 & -0.1456223E+00 & 0.1137235E+01 & 0.1137825E+01 \ 0.1977649E+00 & -0.1978204E+00 & 0.1006458E+01 & -0.7903225E+00 \ 0.56325666E+00 & -0.5628273E+00 & -0.8179120E+00 & 0.6424941E+00 \ 0.1976312E+00 & 0.1979216E+00 & 0.7902887E+00 & 0.1006491E+01 \ 0.5629326E+00 & 0.5631776E+00 & -0.6421731E+00 & -0.8181460E+00 \end{align}
Notice that this indicates that orbital 1 is a combination of the s functions on both H atoms (dissociating improperly; equal probabilities of $H_2$ dissociating to two neutral atoms or to a proton plus hydride ion).
1. The $H_2$ CI results:
R $^1\Sigma_g^+$ $^3\Sigma_u^+$ $^1\Sigma_u^+$ $^1\Sigma_g^+$
1.0 -1.074970 -0.5323429 -0.3997412 0.3841676
1.2 -1.118442 -0.6450778 -0.4898805 0.1763018
1.4 -1.129904 -0.7221781 -0.5440346 0.0151913
1.6 -1.125582 -0.7787328 -0.5784428 -0.1140074
1.8 -1.113702 -0.8221166 -0.6013855 -0.2190144
2.0 -1.098676 -0.8562555 -0.6172761 -0.3044956
2.5 -1.060052 -0.9141968 -0.6384557 -0.4530645
5.0 -0.9835886 -0.9790545 -0.5879662 -0.5802447
7.5 -0.9806238 -0.9805795 -0.5247415 -0.5246646
10.0 -0.989598 -0.9805982 -0.4914058 -0.4913532
For $H_2$ at R = 1.4 au, the eigenvalues of the Hamiltonian matrix and the corresponding determinant amplitudes are:
determinant -1.129904 -0.722178 -0.544035 0.015191
$\big| 1\sigma_g\alpha 1\sigma_g \beta\big|$ 0.99695 0.00000 0.00000 0.07802
$\big| 1\sigma_g\beta 1\sigma_u\alpha\big|$ 0.00000 0.70711 0.70711 0.00000
$\big| 1\sigma_g\alpha 1\sigma_u\beta \big|$ 0.00000 0.70711 -0.70711 0.00000
$\big| 1\sigma_u\alpha 1\sigma_u\beta\big|$ -0.07802 0.00000 0.00000 0.99695
This shows, as expected, the mixing of the first $^1\Sigma_g^+ (1\sigma_g^2)$ and the second $^1\Sigma_g^+ (1\sigma_u^2)$ determinants, the
$^3\Sigma_u^+ = \dfrac{1}{\sqrt{2}}\left( \big| 1\sigma_g\beta 1\sigma_u\alpha \big| + \big| 1\sigma_g\alpha 1\sigma_u\beta \big| \right) , \nonumber$
$\text{and the } ^1\Sigma_u^+ = \dfrac{1}{\sqrt{2}}\left( \big| 1\sigma_g\beta 1\sigma_u\alpha\big| - \big| 1\sigma_g\alpha 1\sigma_u\beta\big|\right) . \nonumber$
Also notice that the $^1\Sigma_g^+$ state is the bonding (0.99695 - 0.07802) combination (note specifically the + - combination) and the second $^1\Sigma_g^+$ state is the antibonding combination (note specifically the + + combination). The + + combination always gives a higher energy than the + - combination. Also notice that the 3rd and 4th states $(^1\Sigma_u^+ \text{ and the second } ^1\Sigma_g^+)$ are dissociating to proton/anion combinations. The difference in these energies is the ionization potential of H minus the electron affinity of H.
The purpose of this tutorial is to introduce the basics of quantum mechanics using Dirac bracket notation while working in one dimension. Dirac notation is a succinct and powerful language for expressing quantum mechanical principles; restricting attention to one-dimensional examples reduces the possibility that mathematical complexity will stand in the way of understanding. A number of texts make extensive use of Dirac notation (1-5).
Wave-particle duality is the essential concept of quantum mechanics. De Broglie expressed this idea mathematically as $\lambda = h/(mv) = h/p$. On the left is the wave property $\lambda$, and on the right the particle property $mv$, momentum. The most general coordinate-space wavefunction for a free particle with wavelength $\lambda$ is the complex Euler function shown below.
$\langle x | \lambda \rangle = \exp \left(i2 \pi \frac{x}{\lambda}\right) = \cos \left(2 \pi \frac{x}{\lambda}\right)+i \sin \left(2 \pi \frac{x}{\lambda}\right) \label{Equation 1}$
Feynman called this equation "the most remarkable formula in mathematics." He referred to it as "our jewel." And indeed it is, because when it is enriched with de Broglie's relation it serves as the foundation of quantum mechanics.
According to de Broglie's hypothesis, a particle with a well-defined wavelength also has a well-defined momentum. Therefore, we can obtain the momentum wavefunction (unnormalized) of the particle in coordinate space by substituting the de Broglie relation into Equation \ref{Equation 1}.
$\langle x | p \rangle = \exp \left(\dfrac{ipx}{\hbar}\right) \label{Equation 2}$
When Equation \ref{Equation 2} is graphed it creates a helix about the axis of propagation (X-axis). Z is the imaginary axis and Y is the real axis. It is the simplest example of a Fourier transform, translating momentum into coordinate language. It also has in it the heart of the uncertainty principle.
Everyday examples of this important mathematical formula include telephone cords, spiral notebooks and slinkies.
Quantum mechanics teaches that the wavefunction contains all the physical information about a system that can be known, and that one extracts information from the wavefunction using quantum mechanical operators. There is, therefore, an operator for each observable property.
For example, in momentum space if a particle has a well-defined momentum we write its state as $|p \rangle$. If we operate on this state with the momentum operator $\hat{p}$, the following eigenvalue equation is satisfied.
$\hat{p} |p \rangle = p|p \rangle \label{Equation 3}$
We say the system is in a state which is an eigenfunction of the momentum operator with eigenvalue p. In other words, operating on the momentum eigenfunction with the momentum operator, in momentum space, returns the momentum eigenvalue times the original momentum eigenfunction. From
$\langle p | \hat{p} |p \rangle = p \langle p|p \rangle \label{Equation 4}$
it follows that,
$\langle p | \hat{p} = p \langle p| \label{Equation 5}$
Equations (3) and (5) show that in its own space the momentum operator is a multiplicative operator, and can operate either to the right on a ket, or to the left on a bra. The same is true of the position operator in coordinate space.
To obtain the momentum operator in coordinate space, equation (3) is projected onto coordinate space by operating on the left with $\langle x |$. After inserting equation (2) we have,
$\langle x | \hat{p} |p \rangle = p \langle x|p \rangle = p \exp \left(\dfrac{ipx}{\hbar}\right) = \frac{\hbar}{i} \frac{d}{dx} \exp \left(\dfrac{ipx}{\hbar}\right) = \frac{\hbar}{i} \frac{d}{dx} \langle x | p \rangle \label{Equation 6}$
Comparing the first and last terms reveals that
$\langle x | \hat{p} = \frac{\hbar}{i} \frac{d}{dx} \langle x | \label{Equation 7}$
and that $\frac{\hbar}{i} \frac{d}{dx} \langle x |$ is the momentum operator in coordinate space.
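This multiplicative action can be confirmed numerically; the following sketch (assuming NumPy, in units where $\hbar$ = 1) applies a central-difference derivative to the plane wave of Equation (2):

```python
import numpy as np

hbar = 1.0                         # work in units with hbar = 1
p = 2.5                            # an arbitrary momentum eigenvalue
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
psi = np.exp(1j * p * x / hbar)    # <x|p>, Equation (2)

dpsi = (psi[2:] - psi[:-2]) / (2 * dx)   # central difference on interior points
p_action = (hbar / 1j) * dpsi            # (hbar/i) d/dx acting on <x|p>

# The operator simply multiplies the wavefunction by the eigenvalue p
err = np.max(np.abs(p_action - p * psi[1:-1]))
print(err < 1e-5)  # True
```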
The position wavefunction in momentum space is the complex conjugate of the momentum wavefunction in coordinate space.
$\langle p|x \rangle = \langle x|p \rangle^{*} = \exp \left(\dfrac{-ipx}{\hbar}\right) \label{Equation 8}$
Starting with the coordinate-space eigenvalue equation
$\hat{x} |x \rangle = x|x \rangle \label{Equation 9}$
and using the same approach as with momentum, it is easy to show that
$\langle x| \hat{x} = x \langle x| \label{Equation 10}$
$\langle p| \hat{x} = - \frac{\hbar}{i} \frac{d}{dp} \langle p| \label{Equation 11}$
In summary, the two fundamental dynamical operators are position and momentum, and the two primary representations are coordinate space and momentum space. The results achieved thus far are shown in the following table.
Coordinate Space Momentum Space
position operator: $\hat{x}$ $x \langle x|$ $- \frac{\hbar}{i} \frac{d}{dp} \langle p |$
momentum operator: $\hat{p}$ $\frac{\hbar}{i} \frac{d}{dx} \langle x |$ $p \langle p|$
Other quantum mechanical operators can be constructed from $\hat{x}$ and $\hat{p}$ in the appropriate representation, position or momentum. To illustrate this, Schrödinger's equation for the one-dimensional harmonic oscillator will be set up in both coordinate and momentum space using the information in the table. Schrödinger's equation is the quantum mechanical energy eigenvalue equation, and for the harmonic oscillator it looks like this initially,
$\bigg[ \frac{\hat{p}^{2}}{2m} + \frac{1}{2} k \hat{x}^{2} \bigg] | \Psi \rangle = E | \Psi \rangle \label{Equation 12}$
The term in brackets on the left is the classical energy written as an operator without a commitment to a representation (position or momentum) for the calculation.
Most often, chemists solve Schrödinger's equation in coordinate space. Therefore, to prepare Schrödinger's equation for solving, equation (12) is projected onto coordinate space by operating on the left with $\langle x |$.
$\langle x | \bigg[ \frac{\hat{p}^{2}}{2m} + \frac{1}{2} k \hat{x}^{2} \bigg] | \Psi \rangle = \langle x | E | \Psi \rangle \label{Equation 13}$
Using the information in the table this yields,
$\bigg[ - \frac{\hbar^{2}}{2m} \frac{d^{2}}{dx^{2}} + \frac{1}{2} kx^{2} \bigg] \langle x| \Psi \rangle = E \langle x| \Psi \rangle \label{Equation 14}$
The square bracket on the left contains the quantum mechanical energy operator in coordinate space. Before proceeding we illustrate how the kinetic energy operator emerges as a differential operator in coordinate space using equation (7).
$\frac{1}{2m} \langle x| \hat{p} \hat{p} | \Psi \rangle = \frac{1}{2m} \frac{\hbar}{i} \frac{d}{dx} \langle x| \hat{p} | \Psi \rangle = \frac{1}{2m} \frac{\hbar}{i} \frac{d}{dx} \frac{\hbar}{i} \frac{d}{dx} \langle x| \Psi \rangle = - \frac{\hbar^{2}}{2m} \frac{d^{2}}{dx^{2}} \langle x | \Psi \rangle \label{Equation 15}$
Equation (10) is used in a similar fashion to show that potential energy is a multiplicative operator in coordinate space.
$\frac{1}{2} k \langle x| \hat{x} \hat{x} | \Psi \rangle = \frac{1}{2} kx \langle x| \hat{x} | \Psi \rangle = \frac{1}{2} kx^{2} \langle x| \Psi \rangle \label{Equation 16}$
Obviously the calculation could also have been set up in momentum space. It is easy to show that in the momentum representation Schrödinger's equation is
$\bigg[ \frac{p^{2}}{2m} - \frac{\hbar^{2} k}{2} \frac{d^{2}}{dp^{2}} \bigg] \langle p | \Psi \rangle = E \langle p | \Psi \rangle \label{Equation 17}$
In momentum space the kinetic energy operator is multiplicative and the potential energy operator is differential. The one-dimensional simple harmonic oscillator problem is exactly soluble in both coordinate and momentum space. The solution can be found in any chemistry and physics text dealing with quantum mechanics, and will not be dealt with further here, other than to say that equations (14) and (17) reveal an appealing symmetry.
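For the coordinate-space form, Equation (14), a finite-difference sketch (assuming NumPy, with $\hbar$ = m = k = 1 so that the exact eigenvalues are n + 1/2) shows the expected spectrum:

```python
import numpy as np

n, L = 1000, 10.0                       # grid points and box length (atomic units)
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy: -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix
T = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1)
                      - 2 * np.eye(n)
                      + np.diag(np.ones(n - 1), -1))
V = np.diag(0.5 * x**2)                 # the potential is multiplicative here

E = np.linalg.eigvalsh(T + V)
print(E[:3])  # approximately [0.5, 1.5, 2.5]
```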
Unfortunately, for most applications the potential energy term in momentum space presents more of a mathematical challenge than it does for the harmonic oscillator problem. A general expression for the potential energy in the momentum representation when its form in the coordinate representation is specified is given below.
$\langle p | \hat{V} | \Psi \rangle = \int \int \exp \bigg[ \frac{i(p'-p)x}{\hbar} \bigg] V(x) \langle p' | \Psi \rangle dp' dx \label{Equation 18}$
To see how this integral is handled for a specific case see reference (10).
If a system is in a state which is an eigenfunction of an operator, we say the system has a well-defined value for the observable associated with that operator, for example, position, momentum, energy, etc. Every time we measure we get the same result. However, if the system is in a state that is not an eigenfunction of the operator, for example, if $\hat{o} | \Psi \rangle = | \Phi \rangle$, the system does not have a well-defined value for the observable. Then the measurement results have a statistical character and each measurement gives an unpredictable result in spite of the fact that the system is in a well-defined state $| \Psi \rangle$. Under these circumstances, all we can do is calculate a mean value for the observable. This is unheard of in classical physics where, if a system is in a well-defined state, all its physical properties are precisely determined. In quantum mechanics a system can be in a state which has a well-defined energy, but its position and momentum are undetermined.
The quantum mechanical recipe for calculating the mean value of an observable is now derived. Consider a system in the state $| \Psi \rangle$, which is not an eigenfunction of the energy operator, $\hat{H}$, and suppose a statistically meaningful number of identically prepared systems are available for the purpose of measuring the energy. Quantum mechanical principles require that an energy measurement must yield one of the energy eigenvalues, $\epsilon_{i}$, of the energy operator. Therefore, the average value of the energy measurements is calculated as,
$\langle E \rangle = \frac{\sum_{i} n_{i} \epsilon_{i}}{N} \label{Equation 19}$
where $n_{i}$ is the number of times $\epsilon_{i}$ is observed, and N is the total number of measurements. Therefore, $p_{i} = n_{i}/N$ is the probability that $\epsilon_{i}$ is observed. Equation (19) becomes
$\langle E \rangle = \sum_{i} p_{i} \epsilon_{i} \label{Equation 20}$
According to quantum mechanics, for a system in the state $| \Psi \rangle$, $p_{i} = \langle \Psi | i \rangle \langle i | \Psi \rangle$, where the $| i\rangle$ are the eigenfunctions of the energy operator. Equation (20) can now be re-written as,
$\langle E \rangle = \sum_{i} \langle \Psi | i \rangle \langle i | \Psi \rangle \epsilon_{i} = \sum_{i} \langle \Psi | i \rangle \epsilon_{i} \langle i | \Psi \rangle \label{Equation 21}$
However, it is also true that
$\hat{H} | i \rangle = \epsilon_{i} | i \rangle = | i \rangle \epsilon_{i} \label{Equation 22}$
Substitution of equation (22) into (21) yields
$\langle E \rangle = \sum_{i} \langle \Psi | \hat{H} | i \rangle \langle i | \Psi \rangle \label{Equation 23}$
As eigenfunctions of the energy operator, the $| i \rangle$ form a complete basis set, making available the discrete completeness condition, $\sum_{i} | i \rangle \langle i | = 1$, the use of which in equation (23) yields
$\langle E \rangle = \langle \Psi | \hat{H} | \Psi \rangle \label{Equation 24}$
This formalism is general and applies to any operator-observable pair. The average value for the observed property may always be calculated as,
$\langle o \rangle = \langle \Psi | \hat{o} | \Psi \rangle \label{Equation 25}$
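The recipe in equation (25) is easy to check numerically. The sketch below uses a hypothetical two-level system with invented eigenvalues (1 and 3, arbitrary units) and invented amplitudes, and confirms that the weighted sum of equation (20) and the bracket of equation (24) give the same mean energy:

```python
import numpy as np

# Hypothetical two-level system: invented energy eigenvalues (arbitrary units)
eps = np.array([1.0, 3.0])
H = np.diag(eps)  # the Hamiltonian is diagonal in its own eigenbasis

# A normalized state that is NOT an energy eigenstate
psi = np.array([np.sqrt(0.25), np.sqrt(0.75)])

# p_i = |<i|Psi>|^2, then <E> as the weighted sum of equation (20)
p = np.abs(psi)**2
E_avg_sum = float(np.sum(p*eps))

# The same number from the bracket <Psi|H|Psi> of equation (24)
E_avg_bracket = float(psi @ H @ psi)

assert abs(E_avg_sum - 2.5) < 1e-12
assert abs(E_avg_bracket - 2.5) < 1e-12
```

Either route gives $\langle E \rangle = 0.25 \cdot 1 + 0.75 \cdot 3 = 2.5$, although no single measurement ever yields that value.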
These principles are now applied to a very simple problem: the particle in a box. Schrödinger's equation in coordinate space,
$- \frac{\hbar^{2}}{2m} \frac{d^{2}}{dx^{2}} \langle x | \Psi \rangle = E \langle x | \Psi \rangle \label{Equation 26}$
can be solved exactly, yielding the following eigenvalues and eigenfunctions,
$E_{n} = \frac{n^{2} h^{2}}{8ma^{2}} \label{Equation 27}$
$\langle x | \Psi_{n} \rangle = \sqrt{\frac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right) \label{Equation 28}$
where a is the box dimension, m is the particle mass, and n is a quantum number restricted to integer values starting with 1.
Substitution of equation (28) into (26) confirms that it is an eigenfunction with the manifold of allowed eigenvalues given by equation (27). However, equation (28) is not an eigenfunction of either the position or momentum operators, as is shown below.
$\langle x | \hat{x} | \Psi_{n} \rangle = x \langle x | \Psi_{n} \rangle = x \sqrt{\frac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right) \label{Equation 29}$
$\langle x | \hat{p} | \Psi_{n} \rangle = \frac{\hbar}{i} \frac{d}{dx} \langle x | \Psi_{n} \rangle = \frac{\hbar}{i} \frac{n \pi}{a} \sqrt{\frac{2}{a}} \cos \left(\dfrac{n \pi x}{a}\right) \label{Equation 30}$
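The claims about equation (28), that it solves equation (26) with the eigenvalues of equation (27) and that it is normalized, can be verified with a short SymPy sketch (using $h = 2\pi\hbar$):

```python
import sympy as sp

x, a, m, hbar = sp.symbols('x a m hbar', positive=True)
n = sp.symbols('n', positive=True, integer=True)

psi = sp.sqrt(2/a)*sp.sin(n*sp.pi*x/a)   # equation (28)

# Apply the coordinate-space Hamiltonian (V = 0 inside the box), equation (26)
H_psi = -hbar**2/(2*m)*sp.diff(psi, x, 2)

# The ratio H_psi/psi is a constant: the eigenvalue of equation (27), with h = 2*pi*hbar
E = sp.simplify(H_psi/psi)
h = 2*sp.pi*hbar
assert sp.simplify(E - n**2*h**2/(8*m*a**2)) == 0

# Normalization over the box for the first few states
for k in (1, 2, 3):
    assert sp.integrate(psi.subs(n, k)**2, (x, 0, a)) == 1
```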
To summarize, the particle in a box has a well-defined energy, but the same is not true for its position or momentum. In other words, it is not buzzing around the box executing a classical trajectory. The outcome of an energy measurement is certain, but position and momentum measurements are uncertain. All we can do is calculate the expectation value for these observables and compare the calculations to the mean values found through a statistically meaningful number of measurements.
Next we set up the calculation for the expectation value for position utilizing the recipe expressed in equation (25).
$\langle x \rangle_{n} = \langle \Psi_{n} | \hat{x} | \Psi_{n} \rangle \label{Equation 31}$
Evaluation of (31) in coordinate space requires the continuous completeness condition.
$\int_{0}^{a} | x \rangle \langle x | dx = 1 \label{Equation 32}$
Substitution of (32) into (31) gives
$\langle x \rangle_{n} = \int_{0}^{a} \langle \Psi_{n} | x \rangle \langle x | \hat{x} | \Psi_{n} \rangle dx = \int_{0}^{a} \langle \Psi_{n} | x \rangle x \langle x | \Psi_{n} \rangle dx = \frac{a}{2} \label{Equation 33}$
The expectation value for momentum is calculated in a similar fashion,
$\langle p \rangle_{n} = \int_{0}^{a} \langle \Psi_{n} | x \rangle \langle x | \hat{p} | \Psi_{n} \rangle dx = \int_{0}^{a} \langle \Psi_{n} | x \rangle \frac{\hbar}{i} \frac{d}{dx} \langle x | \Psi_{n} \rangle dx = 0 \label{Equation 34}$
In other words, the expectation values for position and momentum are the same for all the allowed quantum states of the particle in a box.
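The results of equations (33) and (34) can be verified symbolically; the sketch below checks that $\langle x \rangle = a/2$ and $\langle p \rangle = 0$ for the first three states:

```python
import sympy as sp

x, a, hbar = sp.symbols('x a hbar', positive=True)

# Check <x> = a/2 and <p> = 0 (equations 33 and 34) for the first few states
for n in (1, 2, 3):
    psi = sp.sqrt(2/a)*sp.sin(n*sp.pi*x/a)
    x_avg = sp.integrate(psi*x*psi, (x, 0, a))
    p_avg = sp.integrate(psi*(hbar/sp.I)*sp.diff(psi, x), (x, 0, a))
    assert sp.simplify(x_avg - a/2) == 0
    assert sp.simplify(p_avg) == 0
```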
It is now necessary to explore the meaning of $\langle x | \Psi \rangle$. It is the probability amplitude that a system in the state $| \Psi \rangle$, will be found at position x. $| \langle x | \Psi \rangle |^{2}$ or $\langle \Psi | x \rangle \langle x | \Psi \rangle$, is the probability density that a system in the state $| \Psi \rangle$ will be found at position x. Thus equation (28) is an algorithm for calculating probability amplitudes and probability densities for the position of the particle in a one-dimensional box. This, of course, is true only if $| \Psi \rangle$ is normalized.
$\langle \Psi | \Psi \rangle = \int_{0}^{a} \langle \Psi | x \rangle \langle x | \Psi \rangle dx = 1 \label{Equation 35}$
There are two ways to arrive at the integral in equation (35). One can insert the continuous completeness relation (32) at the | on the left side, or, equivalently, one can express $| \Psi \rangle$ as a linear superposition in the continuous basis set x,
$| \Psi \rangle = \int_{0}^{a} | x \rangle \langle x | \Psi \rangle dx \label{Equation 36}$
and then project this expression onto $\langle \Psi |$.
A quantum particle is described by its wavefunction rather than by its instantaneous position and velocity; a confined quantum particle, such as a particle in a box, is not moving in any classical sense, and must be considered to be present at all points in space, properly weighted by $| \langle x | \Psi \rangle |^{2}$.
Thus, $\langle x | \Psi \rangle$ allows us to examine the coordinate space probability distribution and to calculate expectation values for observables such as was done in equations (33) and (34). Plots of $| \langle x | \Psi_{n} \rangle |^{2}$ show that the particle is distributed symmetrically in the box, and $\langle x | \Psi_{n} \rangle$, allows us to calculate the probability of finding the particle anywhere inside the box.
The coordinate-space wavefunction does not say much about momentum, other than its average value is zero (see equation 34). However, a momentum-space wavefunction, $\langle p | \Psi \rangle$, can be generated by a Fourier transform of $\langle x | \Psi \rangle$. This is accomplished by projecting equation (36) onto momentum space by multiplication on the left by $\langle p |$.
$\langle p | \Psi_{n} \rangle = \int_{0}^{a} \langle p|x \rangle \langle x| \Psi_{n} \rangle dx = \frac{1}{\sqrt{2 \pi \hbar}} \int_{0}^{a} \exp \left(- \dfrac{ipx}{\hbar}\right) \sqrt{\frac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right) dx \label{Equation 37}$
The term preceding the integral is the normalization constant (previously ignored) for the momentum wavefunction. Evaluation of the integral on the right side yields,
$\langle p | \Psi_{n} \rangle = n \sqrt{a \pi \hbar^{3}} \bigg[ \frac{1 - \cos (n \pi) \exp (- \frac{ipa}{\hbar})}{n^{2} \pi^{2} \hbar^{2} - a^{2} p^{2}} \bigg] \label{Equation 38}$
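Equation (38) can be spot-checked numerically. In the sketch below (units with $\hbar = a = 1$ and the $n = 1$ state, an arbitrary illustrative choice), the closed form agrees with a direct numerical evaluation of the Fourier transform in equation (37), and $| \langle p | \Psi_{n} \rangle |^{2}$ integrates to 1 over all momenta:

```python
import numpy as np

hbar, a, n = 1.0, 1.0, 1   # illustrative units and ground state

def trap(y, x):
    # simple trapezoid rule (avoids NumPy version differences)
    return np.sum((y[1:] + y[:-1])/2*np.diff(x))

# Closed-form momentum wavefunction, equation (38)
def phi(p):
    return (n*np.sqrt(a*np.pi*hbar**3)
            * (1 - np.cos(n*np.pi)*np.exp(-1j*p*a/hbar))
            / (n**2*np.pi**2*hbar**2 - a**2*p**2))

# Direct evaluation of the Fourier transform, equation (37), at one momentum value
xg = np.linspace(0, a, 200001)
psi = np.sqrt(2/a)*np.sin(n*np.pi*xg/a)
p0 = 0.7
direct = trap(np.exp(-1j*p0*xg/hbar)*psi, xg)/np.sqrt(2*np.pi*hbar)
assert abs(direct - phi(p0)) < 1e-8

# |<p|Psi_n>|^2 integrates to 1 over all momenta (Parseval)
pg = np.linspace(-200.0, 200.0, 400001)
norm = trap(np.abs(phi(pg))**2, pg)
assert abs(norm - 1) < 1e-3
```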
Now with the continuous completeness relationship for momentum,
$\int_{- \infty}^{\infty} | p \rangle \langle p | dp = 1 \label{Equation 39}$
one can re-calculate $\langle x \rangle_{n}$ and $\langle p \rangle_{n}$ in momentum space.
$\langle x \rangle_{n} = \int_{- \infty}^{\infty} \langle \Psi_{n} | p \rangle \langle p | \hat{x} | \Psi_{n} \rangle dp = \int_{- \infty}^{\infty} \langle \Psi_{n} | p \rangle \frac{- \hbar}{i} \frac{d}{dp} \langle p | \Psi_{n} \rangle dp = \frac{a}{2} \label{Equation 40}$
$\langle p \rangle_{n} = \int_{- \infty}^{\infty} \langle \Psi_{n} | \hat{p} | p \rangle \langle p | \Psi_{n} \rangle dp = \int_{- \infty}^{\infty} \langle \Psi_{n} | p \rangle p \langle p | \Psi_{n} \rangle dp = 0 \label{Equation 41}$
It is clear that $\langle x | \Psi_{n} \rangle$ and $\langle p | \Psi_{n} \rangle$ contain the same information; they just present it in different languages (representations). The coordinate-space distribution functions for the particle in a box shown above are familiar to anyone who has studied quantum theory; however, because chemists work mainly in coordinate space, the momentum distributions are not so well known. A graphical representation of $| \langle p | \Psi_{n} \rangle |^{2}$ for the first five momentum states is shown below. The distribution functions are offset by small increments for clarity of presentation.
As just shown, the particle in a box can be used to illustrate many fundamental quantum mechanical concepts. To demonstrate that some systems can be analyzed without solving Schrödinger’s equation we will briefly consider the particle on a ring. This model has been used to study the behavior of the $\pi$-electrons of benzene.
In order to satisfy the requirement of being single-valued, the momentum wavefunction in coordinate space for a particle on a ring of radius R must satisfy the following condition,
$\langle x + 2 \pi R | p \rangle = \langle x | p \rangle \label{Equation 42}$
This leads to,
$\exp \left(\dfrac{i2 \pi Rp}{\hbar}\right) \exp \left(\dfrac{ipx}{\hbar}\right) = \exp \left(\dfrac{ipx}{\hbar}\right) \label{Equation 43}$
This equation can be written as,
$\exp \left(\dfrac{i2 \pi Rp}{\hbar}\right) = 1 = \exp(i2 \pi m) \quad \text{where}\; m = 0, \pm 1, \pm 2, \ldots \label{Equation 44}$
Comparison of the left with the far right of this equation reveals that,
$\frac{Rp}{\hbar} = m \label{Equation 45}$
It is easy to show that the energy manifold associated with this quantum restriction is,
$E_{m} = m^{2} \left(\dfrac{\hbar^{2}}{2m_{e} R^{2}} \right) \label{Equation 46}$
The corresponding wavefunctions can be found in the widely used textbook authored by Atkins and de Paula (11).
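A minimal numerical sketch of equation (46) in atomic units ($\hbar = m_{e} = 1$; the radius R = 2 bohr is an arbitrary illustrative choice) highlights a feature of the ring spectrum: m and -m give the same energy, so every level above the ground state is doubly degenerate.

```python
# Particle-on-a-ring levels from equation (46), in atomic units (hbar = m_e = 1).
# R = 2 bohr is an arbitrary illustrative radius.
R = 2.0

def E(m):
    return m**2/(2*R**2)

# The ground state (m = 0) has zero energy, and m and -m are degenerate,
# so every excited level is doubly degenerate.
assert E(0) == 0.0
assert E(1) == E(-1)
assert E(2) == 4*E(1)
```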
There are, of course, many formulations of quantum mechanics, and all of them develop quantum mechanical principles in different ways from diverse starting points, but they are all formally equivalent. In the present approach the key concepts are de Broglie’s hypothesis as stated in equation (2), and the eigenvalue equations (3) and (9) expressed in the momentum and coordinate representations, respectively.
Another formulation (Heisenberg’s approach) identifies the commutation relation of equation (47) as the basis of quantum theory, and adopts operators for position and momentum that satisfy the equation.
$[ \hat{p}, \hat{x}] = \hat{p} \hat{x} - \hat{x} \hat{p} = \frac{\hbar}{i} \label{Equation 47}$
Equation (47) can be confirmed in both coordinate and momentum space for any state function $| \Psi \rangle$ using the operators in the table above.
$\langle x | (\hat{p} \hat{x} - \hat{x} \hat{p}) | \Psi \rangle = \frac{\hbar}{i} \left(\dfrac{d}{dx} x - x \dfrac{d}{dx}\right) \langle x | \Psi \rangle = \frac{\hbar}{i} \langle x | \Psi \rangle \label{Equation 48}$
$\langle p | (\hat{p} \hat{x} - \hat{x} \hat{p}) | \Psi \rangle = i \hbar \left(p \dfrac{d}{dp} - \dfrac{d}{dp} p \right) \langle p | \Psi \rangle = \frac{\hbar}{i} \langle p | \Psi \rangle \label{Equation 49}$
The meaning associated with equations (48) and (49) is that the observables associated with non-commuting operators cannot simultaneously have well-defined values. This, of course, is just another statement of the uncertainty principle.
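Equation (48) can be verified symbolically by letting the coordinate-space operators act on an arbitrary function f(x) standing in for $\langle x | \Psi \rangle$:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)   # an arbitrary coordinate-space state <x|Psi>

# Coordinate-space operators: x is multiplicative, p = (hbar/i) d/dx
p_x_f = (hbar/sp.I)*sp.diff(x*f, x)   # p-hat x-hat acting on f
x_p_f = x*(hbar/sp.I)*sp.diff(f, x)   # x-hat p-hat acting on f

# The difference is (hbar/i) f for any f, confirming equation (48)
commutator = sp.expand(p_x_f - x_p_f)
assert sp.expand(commutator - (hbar/sp.I)*f) == 0
```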
The famous double-slit experiment illustrates the uncertainty principle in a striking way. To illustrate this it is mathematically expedient to begin with infinitesimally thin slits. Later this restriction will be relaxed.
A screen with infinitesimally thin slits (6) at x1 and x2 projects the incident beam into a linear superposition of position eigenstates.
$| \Psi \rangle = \frac{1}{\sqrt{2}} [ | x_{1} \rangle + | x_{2} \rangle ] \label{Equation 50}$
Expressing this state in the coordinate representation yields the following superposition of Dirac delta functions.
$\langle x | \Psi \rangle = \frac{1}{\sqrt{2}} [ \langle x | x_{1} \rangle + \langle x | x_{2} \rangle ] = \frac{1}{\sqrt{2}} [ \delta (x-x_{1}) + \delta(x-x_{2})] \label{Equation 51}$
According to the uncertainty principle this localization of the incident beam in coordinate space is accompanied by a delocalization of the x-component of the momentum, px. This can be seen by projecting $| \Psi \rangle$ onto momentum space.
$\langle p_{x} | \Psi \rangle = \frac{1}{\sqrt{2}} [ \langle p_{x} | x_{1} \rangle + \langle p_{x} | x_{2} \rangle ] = \frac{1}{2 \sqrt{\pi \hbar}} \bigg[ \exp \left(- \dfrac{ip_{x}x_{1}}{\hbar}\right) + \exp \left(- \dfrac{ip_{x} x_{2}}{\hbar}\right) \bigg] \label{Equation 52}$
The momentum probability distribution in the x-direction, $P(p_{x})=| \langle p_{x} | \Psi \rangle |^{2}$, reveals the required spread in momentum, plus the interesting interference pattern in the momentum distribution that will ultimately be projected onto the detection screen. As Marcella (6) points out, the detection screen is actually measuring the x-component of the momentum.
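The interference pattern implicit in equation (52) is easy to make explicit: $| \langle p_{x} | \Psi \rangle |^{2}$ reduces to cosine fringes with period $2 \pi \hbar / (x_{2} - x_{1})$. The sketch below checks this using assumed slit positions ($x_{1} = -1$, $x_{2} = 1$, arbitrary units):

```python
import numpy as np

hbar = 1.0
x1, x2 = -1.0, 1.0   # assumed slit positions (arbitrary units)

p = np.linspace(-10, 10, 2001)

# Momentum amplitude from equation (52)
amp = (np.exp(-1j*p*x1/hbar) + np.exp(-1j*p*x2/hbar))/(2*np.sqrt(np.pi*hbar))
P = np.abs(amp)**2

# Cosine fringes with period 2*pi*hbar/(x2 - x1)
P_analytic = (1 + np.cos(p*(x2 - x1)/hbar))/(2*np.pi*hbar)
assert np.allclose(P, P_analytic)
```

The fringe spacing shrinks as the slit separation grows, which is the momentum-space statement of the uncertainty principle.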
Of course, in the actual experiment the slits are not infinitesimally thin and the diffraction pattern takes on the more familiar appearance reported in the literature (7) and textbooks (8). For example, a linear superposition of Gaussian functions can be used to represent the coordinate-space wavefunction at a screen with two slits of finite width.
$\langle x | \Psi \rangle = \exp [-(x-x_{1})^{2}]+ \exp [-(x-x_{2})^{2}] \label{Equation 53}$
The Fourier transform of this state into momentum space leads to the momentum distribution shown in the figure below (9).
The double-slit experiment reveals the three essential steps in a quantum mechanical experiment:
1. state preparation (interaction of incident beam with the slit-screen)
2. measurement of observable (arrival of scattered beam at the detection screen)
3. calculation of expected results of the measurement step
The Dirac delta function appeared in equation (51). It expresses the fact that the position eigenstates form a continuous orthogonal basis. The same, of course, is true for the momentum eigenstates.
The bracket $\langle x | x' \rangle$ is zero unless x = x'. This expresses the condition that an object at x' is not at x. It is instructive to expand this bracket in the momentum representation.
$\langle x | x' \rangle = \int_{- \infty}^{\infty} \langle x|p \rangle \langle p|x' \rangle dp = \frac{1}{2 \pi \hbar} \int_{- \infty}^{\infty} \exp \left(\dfrac{ip(x-x')}{\hbar}\right)dp = \delta (x-x') \label{Equation 54}$
The same approach for momentum yields,
$\langle p|p' \rangle = \int_{- \infty}^{\infty} \langle p|x \rangle \langle x|p' \rangle dx = \frac{1}{2 \pi \hbar} \int_{- \infty}^{\infty} \exp \left(\dfrac{-i(p-p')x}{\hbar}\right) dx = \delta (p-p') \label{Equation 55}$
The Dirac delta function has great utility in quantum mechanics, so it is important to be able to recognize it in its several guises.
The time-dependent energy operator can be obtained by adding time dependence to equation (1) so that it represents a classical one-dimensional plane wave moving in the positive x-direction.
$\langle x | \lambda \rangle \langle t | \nu \rangle = \exp \left( i2 \pi \frac{x}{\lambda}\right) \exp (-i2 \pi \nu t) \label{Equation 56}$
This classical wave equation is transformed into a quantum mechanical wavefunction by using (as earlier) the de Broglie relation and E = h$\nu$.
$\langle x|p \rangle \langle t|E \rangle = \exp \left(\dfrac{ipx}{\hbar}\right) \exp \left(- \dfrac{iEt}{\hbar}\right) \label{Equation 57}$
From this equation we obtain the important Dirac bracket relating energy and time.
$\langle t|E \rangle = \exp \left(- \dfrac{iEt}{\hbar}\right) \label{Equation 58}$
The time-dependent energy operator is found by projecting the energy eigenvalue equation,
$\hat{H} |E \rangle = E|E \rangle \label{Equation 59}$
into the time domain.
$\langle t| \hat{H} |E \rangle = E \langle t|E \rangle = E \exp \left(- \dfrac{iEt}{\hbar}\right) = i \hbar \frac{d \langle t|E \rangle}{dt} \label{Equation 60}$
Comparison of the first and last terms reveals that the time-dependent energy operator is
$\langle t | \hat{H} = i \hbar \frac{d}{dt} \langle t | \label{Equation 61}$
We see also from equation (60) that
$i \hbar \frac{d}{dt} \langle t | = E \langle t | \label{Equation 62}$
So that in general,
$i \hbar \frac{d}{dt} \langle t | \Psi \rangle = E \langle t | \Psi \rangle \label{Equation 63}$
Integration of equation (63) yields a general expression for the time-dependence of the wavefunction.
$\langle t | \Psi \rangle = \exp \left(- \dfrac{iE(t-t_{0})}{\hbar}\right) \langle t_{0} | \Psi \rangle \label{Equation 64}$
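Equation (64) can be checked by direct substitution: it satisfies equation (63) and reduces to the initial amplitude $\langle t_{0} | \Psi \rangle$ at $t = t_{0}$. A symbolic sketch:

```python
import sympy as sp

t, t0, E, hbar = sp.symbols('t t0 E hbar', real=True)
Psi0 = sp.Symbol('Psi0')   # the initial amplitude <t0|Psi>

# Equation (64)
Psi_t = sp.exp(-sp.I*E*(t - t0)/hbar)*Psi0

# It satisfies equation (63): i*hbar d/dt <t|Psi> = E <t|Psi> ...
assert sp.simplify(sp.I*hbar*sp.diff(Psi_t, t) - E*Psi_t) == 0
# ... and reduces to the initial amplitude at t = t0
assert Psi_t.subs(t, t0) == Psi0
```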
Klaus Ruedenberg once wrote that there are no ground states in classical mechanics. In other words, quantum principles are required to understand the stability of atoms and molecules, and the nature of the chemical bond. Quantum mechanics is not mathematically more difficult than classical physics; the real challenge it offers is conceptual. That's because it says that understanding the stability of matter requires that matter possess both particle and wave properties. The conceptual challenge is that particle and wave are contradictory concepts.
Wave-Particle Duality for Matter and Light
A wave is spatially delocalized
A particle is spatially localized
These incompatible concepts are united by the deBroglie wave equation with the wave property (wavelength) on the left and the particle property (momentum) on the right in a reciprocal relationship mediated by the ubiquitous Planck’s constant.
$\lambda = \frac{h}{mv} \nonumber$
Investing matter with wave behavior subjects it to interference effects and leads immediately to a redefinition of kinetic energy. In quantum mechanics the concept of kinetic energy should be replaced with confinement energy, because the former implies classical motion. According to quantum mechanical principles a confined particle, a quon, because of its wave-like character, is described by a wave function which is a weighted superposition of all possible positions. Quons do not execute trajectories in the classical sense. They are not here and later there; they are here and there simultaneously. As Werner Heisenberg said, “There is no space-time inside the atom.” In other words, there is also no space-time for confined quons.
“A quon is any entity (electron, proton, neutron, atom, molecule, C60 etc.), no matter how immense, that exhibits both wave and particle aspects in the peculiar quantum manner.” Nick Herbert, Quantum Reality, page 64. The following tutorial outlines the origin of de Broglie’s wave hypothesis for matter and how it transforms classical kinetic energy into quantum mechanical confinement energy.
Wave-Particle Duality
Wave-particle duality as expressed in the de Broglie equation is the foundational concept of quantum theory. The mathematical form of de Broglie's equation is derived for the photon. It is subsequently assumed that it applies also to matter.
The momentum of a photon is inversely proportional to its wavelength. The derivation begins with Einstein's mass-energy equation which applies in general.
$E = mc^{2} \xrightarrow[E=h \nu]{p=mc} pc=h \nu \xrightarrow{c= \nu \lambda} p = \frac{h}{\lambda} \nonumber$
De Broglie postulated that this expression involving the particle property momentum and the wave property wavelength also applies to matter. Therefore the wavelength of a particle is inversely proportional to its momentum. This is the origin of wave-particle duality.
$\lambda = \frac{h}{mv} = \frac{h}{p} \nonumber$
A wave is spatially delocalized. A particle is spatially localized. These incompatible concepts are united by the de Broglie equation with the wave property (wavelength) on the left and the particle property (momentum) on the right in a reciprocal relationship mediated by the ubiquitous Planck's constant.
The de Broglie equation is a dictionary for translating classical kinetic energy into quantum language. The quantum equivalent of kinetic energy should be called confinement energy because there is no motion; a particle confined to a circular orbit, as in the Bohr model of the hydrogen atom, is in a stationary state. Restrictions are placed on the particle's wavelength ($n \lambda = 2 \pi R$) in order to avoid self-interference.
$T = \frac{p^{2}}{2m} \xrightarrow{p= \frac{h}{\lambda}} \frac{h^{2}}{2m \lambda^{2}} \xrightarrow[n \lambda = 2 \pi R]{confinement} \frac{n^{2} h^{2}}{8 \pi^{2} mR^{2}} \nonumber$
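The substitution chain above can be replayed symbolically, confirming that the de Broglie relation plus the ring confinement condition turn $p^{2}/2m$ into $n^{2}h^{2}/(8 \pi^{2} m R^{2})$:

```python
import sympy as sp

h, m, R, p = sp.symbols('h m R p', positive=True)
lam = sp.Symbol('lambda', positive=True)
n = sp.symbols('n', positive=True, integer=True)

T = p**2/(2*m)                   # classical kinetic energy
T = T.subs(p, h/lam)             # de Broglie: p = h/lambda
T = T.subs(lam, 2*sp.pi*R/n)     # ring confinement: n*lambda = 2*pi*R
assert sp.simplify(T - n**2*h**2/(8*sp.pi**2*m*R**2)) == 0
```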
We always measure particles (detectors click, photographic film is darkened, etc.), but we interpret what happened or predict what will happen by assuming wavelike behavior. In other words, quantum particles exhibit both wave and particle properties in every experiment. To paraphrase Nick Herbert (Quantum Reality), particles are always detected, but the experimental results observed are the result of wavelike behavior.
In The Character of Physical Law Feynman described wave-particle duality as follows, "I will summarize, then, by saying that electrons arrive in lumps, like particles, but the probability of arrival of these lumps is determined as the intensity of waves would be. It is in this sense that the electron behaves sometimes like a particle and sometimes like a wave. It behaves in two different ways at the same time." Lawrence Bragg summarized wave-particle behavior saying, "Everything in the future is a wave, everything in the past is a particle."
In his 1951 treatise Quantum Theory, David Bohm described wave-particle duality as follows: "One of the characteristic features of quantum theory is wave-particle duality, i.e. the ability of matter or light quanta to demonstrate the wave-like property of interference, and yet to appear subsequently in the form of localizable particles, even after such interference has taken place." In other words, to explain interference phenomena wave properties must be assigned to matter and light quanta prior to detection as particles.
Whenever it is measured the world seems particle-like, but the pattern formed by these particles leads to the conclusion that between measurements the world acts like a wave.
Nothing but particles are ever detected, but the pattern of these (detected) particles must have been caused by some sort of wave - the form light seems to take when it is not being measured.
The following three-slide PowerPoint presentation attempts to provide a concrete illustration of the ideas presented above by Feynman, Bohm, Bragg and Herbert by examining single-slit diffraction through the lens of wave-particle duality.
Wave-particle Duality Illustrated
• Source emits light
• When viewed through a narrow slit interference fringes are observed, the signature of wave-like behavior.
• However, the detector (eye) registers particles (retinal absorbs a photon and changes shape ultimately causing a signal to be sent to the brain via the optic nerve)
• We detect particles, but we predict what will happen, or interpret what happened, by assuming wave-like behavior
• “Everything in the future is a wave, everything in the past is a particle.” Lawrence Bragg
Single-slit Diffraction
The double-slit diffraction experiment provides an even more dramatic illustration of wave-particle duality in action. Richard Feynman once said that if anyone asks you a question about quantum mechanics, you can always reply "You remember the experiment with the two slits? It's the same thing."
Regarding the double-slit experiment, Feynman also wrote "We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by 'explaining' how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics."
The stabilities of hydrogen and helium, the most abundant elements in the universe, are now explained using rudimentary quantum principles.
Atomic Stability
Niels Bohr once observed that from the perspective of classical physics the stability of matter was a pure miracle. The problem, of course, is that two of the basic building blocks of matter are oppositely charged - the electron and the proton. Given Coulomb's Law, the troubling question is what keeps them from coalescing?
Quantum mechanics is considered by many to be an abstract and esoteric science that doesn't have much to do with everyday life. Yet it provides an explanation for atomic and molecular stability, and classical physics fails at that task. Thus, to achieve some understanding of one of the basic facts about the macro-world requires quantum mechanical concepts and tools.
The issue of atomic stability will be explored with a quantum mechanical analysis of the two simplest elements in the periodic table - hydrogen and helium. They are also the two most abundant elements in the universe. Schrödinger's equation can be solved exactly for the hydrogen atom, but approximate methods are required for the helium atom. In this study, the variational method will be used for both hydrogen and helium.
Variational Method for the Hydrogen Atom
Normalized trial wave function:
$\Psi (\alpha, r) \colon =\sqrt{\frac{\alpha^{3}}{\pi}} \cdot \exp (- \alpha \cdot r) \nonumber$
where $\alpha$ is a scale-factor that controls the size of the wave function.
Integral:
$\int_{0}^{\infty} \Box \cdot 4 \pi r^{2}\; dr \nonumber$
Kinetic energy operator:
$- \frac{1}{2r} \frac{d^{2}}{dr^{2}} (r \cdot \Box) \nonumber$
Potential energy operator:
$\frac{-Z}{r} \cdot \Box \nonumber$
Demonstrate that the trial function is normalized:
$\int_{0}^{\infty} \Psi (\alpha, r)^{2} \cdot 4 \pi r^{2}\; dr\; \text{assume,}\; \alpha > 0 \rightarrow 1 \nonumber$
Plot trial wave function for several values of $\alpha$, the variational parameter:
Calculate the average value of the kinetic energy of the electron:
$T(\alpha) :=\int_{0}^{\infty} \Psi (\alpha, r) \cdot - \frac{1}{2r} \frac{d^{2}}{dr^{2}} (r \cdot \Psi (\alpha, r)) \cdot 4 \pi r^{2}\; dr\; \text{assume,}\; \alpha > 0 \rightarrow \frac{\alpha^{2}}{2} \nonumber$
Calculate the average value of the potential energy of the electron:
$V(\alpha, Z) :=\int_{0}^{\infty} \Psi (\alpha, r) \cdot - \frac{Z}{r} \cdot \Psi (\alpha, r) \cdot 4 \pi r^{2}\; dr\; \text{assume,}\; \alpha > 0 \rightarrow -Z \cdot \alpha \nonumber$
Calculate R, the average distance of the electron from the nucleus:
$R(\alpha) :=\int_{0}^{\infty} \Psi (\alpha, r) \cdot r \cdot \Psi (\alpha, r) \cdot 4 \pi r^{2}\; dr\; \text{assume,}\; \alpha > 0 \rightarrow \frac{3}{2 \alpha} \nonumber$
From this we find that:
$E(\alpha)=T(\alpha)+V(\alpha, Z) = \frac{\alpha^{2}}{2} - \alpha \cdot Z \nonumber$
But from above we know:
$\alpha = \frac{3}{2R} \nonumber$
This allows us to express the total energy and its components in terms of R the average distance of the electron from the nucleus.
Total energy:
$E(R,Z) \colon =\frac{\alpha^{2}}{2} - \alpha \cdot Z |_{\text{expand}}^{\text{substitute,}\; \alpha = \frac{3}{2R}} \rightarrow \frac{9}{8R^{2}} - \frac{3Z}{2R} \nonumber$
Electron kinetic energy:
$T_{E}(R) \colon =\frac{9}{8R^{2}} \nonumber$
Electron-nucleus potential energy:
$V_{NE}(R,Z) \colon =\frac{-3Z}{2R} \nonumber$
E, TE and VNE are graphed versus R for the hydrogen atom (Z=1):
$R \colon =0, 0.01, \ldots 8 \nonumber$
The hydrogen atom ground-state energy is determined by minimizing its energy with respect to R:
$R \colon =R \qquad R \colon =\frac{d}{dR} E(R,1) = 0\; \text{solve,}\; R \rightarrow \frac{3}{2} \qquad E(R,1)=-0.5 \nonumber$
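The variational minimization can be reproduced with a few lines of SymPy: minimizing $E(\alpha) = \alpha^{2}/2 - Z\alpha$ for Z = 1 gives $\alpha = 1$ and E = -0.5 hartree, and the equivalent minimization of E(R,1) from the expression above gives R = 3/2:

```python
import sympy as sp

alpha, R = sp.symbols('alpha R', positive=True)

# Total energy for hydrogen (Z = 1): E = T + V = alpha^2/2 - alpha
E = alpha**2/2 - alpha
alpha_min = sp.solve(sp.diff(E, alpha), alpha)[0]
assert alpha_min == 1
assert E.subs(alpha, alpha_min) == sp.Rational(-1, 2)

# Same minimization in terms of R, using E(R,1) = 9/(8 R^2) - 3/(2 R)
ER = sp.Rational(9, 8)/R**2 - sp.Rational(3, 2)/R
Rmin = sp.solve(sp.diff(ER, R), R)[0]
assert Rmin == sp.Rational(3, 2)
assert ER.subs(R, Rmin) == sp.Rational(-1, 2)
```

Both routes recover the exact hydrogen-atom ground-state energy because the trial function has the exact functional form.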
Imagine a hydrogen atom forming as an electron approaches a proton from a great distance. The electron is drawn toward the proton by the Coulombic attractive interaction between the two opposite charges and the potential energy decreases like $-1/R$. The attractive potential energy interaction confines the electron to a smaller volume and the kinetic energy increases as $1/R^{2}$. Thus the kinetic energy goes to positive infinity faster than the potential energy goes to negative infinity and a total energy minimum (ground state) is achieved at R = 1.5, as shown in the figure above.
The electron does not collapse into (coalesce with) the proton under the influence of the attractive Coulombic interaction because of the repulsive effect of the confinement energy - that is, kinetic energy. Kinetic energy, therefore, is at the heart of understanding atomic stability.
Variational Method for the Helium Atom
Now we will proceed to the He atom. There are five contributions to the total electronic energy: the kinetic energy of each electron, the interaction of each electron with the nucleus, and the electron-electron interaction. The only new term is the last, the electron-electron potential energy. It is evaluated as follows for two electrons in 1s orbitals.
The electrostatic potential at r due to electron 1 is:
$\Phi (\alpha, r) \colon = \frac{1}{r} \int_{0}^{r} \Psi (\alpha, x)^{2} 4 \pi x^{2}\; dx + \int_{r}^{\infty} \frac{\Psi (\alpha, x)^{2} 4 \pi x^{2}}{x} dx \Bigg|_{\text{simplify}}^{\text{assume,}\; \alpha > 0} \rightarrow \frac{e^{-2 \alpha r} + \alpha re^{-2 \alpha r} - 1}{r} \nonumber$
The electrostatic interaction between the two electrons is:
$V_{EE} \colon = \int_{0}^{\infty} \Phi (\alpha, r) \Psi (\alpha, r)^{2} 4 \pi r^{2}\; dr \Bigg|_{\text{simplify}}^{\text{assume,}\; \alpha > 0} \rightarrow \frac{5 \alpha}{8} \nonumber$
In terms of R, the electron-electron potential energy is:
$V_{EE}(R) \colon = \frac{15}{16R} \nonumber$
$Z \colon = 2 \qquad E_{He}(R) \colon = 2 T_{E} (R) + 2 V_{NE}(R,Z) + V_{EE}(R) \qquad V(R) \colon = 2 V_{NE}(R,Z) + V_{EE}(R) \nonumber$
The various contributions to the total electronic energy of the helium atom are plotted below.
$R \colon = 0, 0.01 \ldots 4 \nonumber$
The helium atom ground-state energy is determined by minimizing its energy with respect to R:
$R \colon =R \qquad R \colon =\frac{d}{dR} E_{He}(R) = 0\; \text{solve,}\; R \rightarrow \frac{8}{9} \qquad R = 0.889 \qquad E_{He}(R)=-2.848 \nonumber$
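The helium minimization can be replayed the same way, using the $T_{E}$, $V_{NE}$ and $V_{EE}$ expressions defined above; the minimum falls at R = 8/9 with E = -729/256 ≈ -2.848 hartree, matching the result quoted above:

```python
import sympy as sp

R = sp.symbols('R', positive=True)
Z = 2

# Per the text: E_He(R) = 2*T_E(R) + 2*V_NE(R,Z) + V_EE(R)
T_E = sp.Rational(9, 8)/R**2
V_NE = -sp.Rational(3, 2)*Z/R
V_EE = sp.Rational(15, 16)/R
E_He = 2*T_E + 2*V_NE + V_EE

Rmin = sp.solve(sp.diff(E_He, R), R)[0]
assert Rmin == sp.Rational(8, 9)
Emin = E_He.subs(R, Rmin)
assert Emin == sp.Rational(-729, 256)   # about -2.848 hartree
assert abs(float(Emin) + 2.848) < 0.001
```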
Graphing $E_{He}$ vs. R reveals again that kinetic energy (confinement energy) is the key to atomic stability. Several things should be noted in the graph shown above. First, when the total energy minimum is achieved, $V_{NE}$ and V ($V_{NE}$ + $V_{EE}$) are still in a steep decline. This is a strong indication that $V_{EE}$ is really a rather feeble contribution to the total energy, increasing significantly only long after the energy minimum has been attained. Thus electron-electron repulsion cannot be used to explain atomic stability. The graph above clearly shows that on the basis of classical electrostatic interactions, the electron should collapse into the nucleus. This is prevented by the kinetic energy term for the same reasons as were given for the hydrogen atom.
Unfortunately, chemists give too much significance to electron-electron repulsion (VSEPR, for example) when it is really the least important term in the Hamiltonian energy operator. And to make matters worse, they completely ignore kinetic energy as an important factor in atomic and molecular phenomena. It is becoming increasingly clear in the current literature that many traditional explanations for chemical phenomena based exclusively on electrostatic arguments are in need of critical re-examination.
Scientists work almost exclusively in coordinate space when they do quantum calculations. There are two other options, momentum space and phase space, as the following tutorial shows. Calculations on the hydrogen atom are provided for all three computational venues in the following box.
Quantum Calculations on the Hydrogen Atom in Coordinate, Momentum and Phase Space
Coordinate Space Operators
Position operator $x \cdot \Box$
Momentum operator $p = \frac{1}{i} \frac{d}{dx} \Box$
Integral $\int_{0}^{\infty} \Box \; dx$
Kinetic energy operator $KE = - \frac{1}{2} \frac{d^{2}}{dx^{2}} \Box$
Potential energy operator $PE= \frac{-1}{x} \cdot \Box$
The energy operator for the one‐dimensional hydrogen atom in atomic units is:
$\frac{-1}{2} \frac{d^{2}}{dx^{2}} \Box - \frac{1}{x} \Box \nonumber$
The ground state wave function in coordinate space is:
$\Psi (x) \colon = 2 \cdot x \cdot \exp (-x) \nonumber$
Display the coordinate‐space distribution function:
The ground state energy is ‐0.5 Eh.
$\frac{-1}{2} \frac{d^{2}}{dx^{2}} \Psi (x) - \frac{1}{x} \Psi (x) = E \cdot \Psi (x)\; \text{solve,} \; E \rightarrow \frac{-1}{2} \nonumber$
The coordinate wave function is normalized $\int_{0}^{\infty} \Psi (x)^{2} \; dx = 1$
The expectation value for position $\int_{0}^{\infty} x \cdot \Psi (x)^{2} \; dx = 1.5$
The expectation value for momentum $\int_{0}^{\infty} \Psi (x) \cdot \frac{1}{i} \frac{d}{dx} \Psi (x) \; dx = 0$
The expectation value for kinetic energy $\int_{0}^{\infty} \Psi (x) \cdot \frac{-1}{2} \frac{d^{2}}{dx^{2}} \Psi (x) \; dx = 0.5$
The expectation value for potential energy $\int_{0}^{\infty} \frac{-1}{x} \cdot \Psi (x)^{2} \; dx = -1$
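These coordinate-space integrals are easy to verify numerically. The following Python sketch (our addition, not part of the worksheet) evaluates them with Simpson's rule for $\Psi(x) = 2x\,e^{-x}$:

```python
from math import exp

def psi(x):   return 2 * x * exp(-x)         # 1-D hydrogen-atom ground state
def d2psi(x): return 2 * (x - 2) * exp(-x)   # its second derivative

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

norm  = simpson(lambda x: psi(x)**2,                0, 40)   # -> 1
x_avg = simpson(lambda x: x * psi(x)**2,            0, 40)   # -> 1.5
T_avg = simpson(lambda x: -0.5 * psi(x) * d2psi(x), 0, 40)   # -> 0.5
V_avg = simpson(lambda x: -psi(x)**2 / x,        1e-9, 40)   # -> -1
```

The numerical values reproduce the worksheet results: $\langle x \rangle = 1.5$, $\langle T \rangle = 0.5$, $\langle V \rangle = -1$, and $E = \langle T \rangle + \langle V \rangle = -0.5\; E_h$.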
The momentum wave function is generated by the following Fourier transform of the coordinate space wave function.
$\Phi (p) \colon = \frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} \exp(-ipx) \cdot \Psi (x)\; dx \rightarrow \frac{2^{1/2}}{\pi^{1/2} \cdot (ip + 1)^{2}} \nonumber$
Momentum Space Operators
Momentum space integral $\int_{- \infty}^{\infty} \Box \; dp$
Momentum operator $p \cdot \Box$
Kinetic energy operator $\frac{p^{2}}{2}$
Position operator $i \cdot \frac{d}{dp} \Box$
The same calculations made with the momentum space wave function:
The momentum wave function is normalized $\int_{- \infty}^{\infty} (| \Phi (p)|)^{2}\; dp = 1$
The expectation value for position $\int_{- \infty}^{\infty} \overline{\Phi (p)} \cdot i \cdot \frac{d}{dp} \Phi (p)\; dp = 1.5$
The expectation value for momentum $\int_{- \infty}^{\infty} p \cdot (| \Phi (p)|)^{2}\; dp = 0$
The expectation value for kinetic energy $\int_{- \infty}^{\infty} \frac{p^{2}}{2} \cdot (| \Phi (p)|)^{2}\; dp = 0.5$
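The momentum-space results can be confirmed numerically from $|\Phi(p)|^{2} = 2/[\pi (1+p^{2})^{2}]$. In this Python sketch (our addition), the substitution $p = \tan u$ maps the infinite integration range onto $(-\pi/2, \pi/2)$:

```python
from math import pi, tan, cos

def phi2(p):
    # |Phi(p)|^2 for Phi(p) = sqrt(2/pi)/(1 + ip)^2
    return 2 / (pi * (1 + p * p)**2)

def integrate_line(f, n=2000):
    # integral over (-inf, inf) via the substitution p = tan(u)
    a, b = -pi / 2 + 1e-9, pi / 2 - 1e-9
    h = (b - a) / n
    g = lambda u: f(tan(u)) / cos(u)**2
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

norm_p = integrate_line(phi2)                            # -> 1
p_avg  = integrate_line(lambda p: p * phi2(p))           # -> 0
T_avg  = integrate_line(lambda p: 0.5 * p * p * phi2(p)) # -> 0.5
```

As required, the momentum-space kinetic energy ($0.5\;E_h$) agrees with the coordinate-space value computed above.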
The expectation value for potential energy is not evaluated directly here: in the momentum representation the Coulomb operator $-1/x$ becomes a non-local (integral) operator, so $\langle V \rangle$ is most conveniently obtained in coordinate space, where it was shown above to be $-1\; E_h$.
The Wigner function for the hydrogen atom ground state is generated using the momentum wave function.
$W(x,p) \colon = \frac{1}{2 \pi} \int_{- \infty}^{\infty} \overline{\Phi (p + \frac{s}{2})} \cdot \exp (-isx) \cdot \Phi (p- \frac{s}{2})\; ds \nonumber$
The Wigner distribution is displayed graphically.
$N \colon = 60 \quad i \colon = 0 \ldots N \quad x_{i} \colon = \frac{6i}{N} \quad j \colon = 0 \ldots N \quad p_{j} \colon = -5 + \frac{10 \cdot j}{N} \quad Wigner_{i,j} \colon = W(x_{i}, p_{j}) \nonumber$
One of the interesting features of doing quantum mechanics with the Wigner distribution is that the position and momentum operators retain their classical forms; they are both multiplicative operators. By comparison, in the coordinate representation position is multiplicative and momentum is differential; in the momentum representation it's the reverse. This is illustrated by the following calculations.
Phase space integral $\int_{- \infty}^{\infty} \int_{0}^{\infty} \Box \; dx \; dp$
Position operator $x \cdot \Box$
Momentum operator $p \cdot \Box$
Kinetic energy operator $KE = \frac{p^{2}}{2} \cdot \Box$
Potential energy operator $PE = \frac{-1}{x} \cdot \Box$
Phase space calculations using the Wigner distribution:
The Wigner distribution is normalized $\int_{- \infty}^{\infty} \int_{0}^{\infty} W(x,p) \; dx \; dp = 1$
The expectation value for position $\int_{- \infty}^{\infty} \int_{0}^{\infty} x \cdot W(x,p) \; dx \; dp = 1.5$
The expectation value for momentum $\int_{- \infty}^{\infty} \int_{0}^{\infty} p \cdot W(x,p) \; dx \; dp = 0$
The expectation value for kinetic energy $\int_{- \infty}^{\infty} \int_{0}^{\infty} \frac{p^{2}}{2} \cdot W(x,p) \; dx \; dp = 0.5$
The expectation value for potential energy $\int_{- \infty}^{\infty} \int_{0}^{\infty} \frac{-1}{x} \cdot W(x,p) \; dx \; dp = -1$
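As a numerical check on the Wigner approach, the sketch below (our addition) builds $W(x,p)$ from a mathematically equivalent coordinate-space form of the Wigner transform, $W(x,p) = \frac{1}{2\pi}\int \Psi(x+\tfrac{s}{2})\,\Psi(x-\tfrac{s}{2})\cos(ps)\, ds$ (valid for real $\Psi$), and verifies that integrating $W$ over momentum recovers the coordinate distribution $\Psi(x)^2$:

```python
from math import exp, cos, pi

def psi(x):
    # coordinate-space ground state (zero for x <= 0)
    return 2 * x * exp(-x) if x > 0 else 0.0

def W(x, p, n=400):
    # Wigner function from the (real) coordinate wave function; the integrand
    # vanishes for |s| > 2x because psi is zero for negative arguments
    a, b = -2 * x, 2 * x
    h = (b - a) / n
    f = lambda s: psi(x + s / 2) * psi(x - s / 2) * cos(p * s)
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / (3 * 2 * pi)

def p_marginal(x, pmax=40.0, n=2000):
    # integrating W(x,p) over p should recover the coordinate density psi(x)^2
    h = 2 * pmax / n
    s = W(x, -pmax) + W(x, pmax) + sum(
        (4 if k % 2 else 2) * W(x, -pmax + k * h) for k in range(1, n))
    return s * h / 3
```

For example, `p_marginal(1.0)` returns a value close to $\Psi(1)^2 = 4e^{-2} \approx 0.541$.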
The phase space calculations require the Wigner distribution function. This link provides further information about the Wigner distribution and how it repackages quantum weirdness.
Sometimes practitioners of quantum mechanics misinterpret energy contributions when studying the details of atomic behavior. This can happen when classical concepts are allowed to intrude on the quantum realm where they are not valid.
Another Critique of the Centrifugal Effect in the Hydrogen Atom
On page 174 of Quantum Chemistry & Spectroscopy, 3rd ed. Thomas Engel derives equation 9.5 which is presented in equivalent form in atomic units ($e = m_{e} = \frac{h}{2 \pi} = 4 \pi \epsilon_{0} = 1$) here.
$- \frac{1}{2r^{2}} \frac{d}{dr} (r^{2} \frac{d}{dr} R(r)) + \Bigg[ \frac{L(L+1)}{2r^{2}} - \frac{1}{r} \Bigg] \cdot R(r) = E \cdot R(r) \label{eq1}$
At the bottom of the page he writes,
Note that the second term (in brackets) on the left-hand side of Equation 9.5 can be viewed as an effective potential, $V_{eff}(r)$. It is made up of the centrifugal potential, which varies as $+1/r^{2}$, and the Coulomb potential, which varies as $-1/r$.

$V_{eff} (r) = \frac{L(L+1)}{2r^{2}} - \frac{1}{r} \nonumber$
Engel notes that because of its positive mathematical sign, the centrifugal potential is repulsive, and goes on to say,
The net result of this repulsive centrifugal potential is to force the electrons in orbitals with L > 0 ( p, d, and f electrons) on average farther from the nucleus than s electrons for which L = 0.
This statement is contradicted by the radial distribution functions shown in Figure 9.10 on page 187, which clearly show the opposite effect: as L increases, the electron is on average closer to the nucleus. It is further refuted by calculations of the average distance of the electron from the nucleus as a function of the n and L quantum numbers. For a given n, the larger L is, the closer on average the electron is to the nucleus. In other words, these calculations support the graphical representation in Figure 9.10.
$r(n,L) \colon = \frac{3n^{2} - L(L+1)}{2} \quad \begin{pmatrix} L & 0 & 1 & 2 & 3 \\ n=1 & 1.5 & ' & ' & ' \\ n=2 & 6 & 5 & ' & ' \\ n=3 & 13.5 & 12.5 & 10.5 & ' \\ n=4 & 24 & 23 & 21 & 18 \end{pmatrix} \nonumber$
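The table of $\langle r \rangle$ values can be regenerated with a few lines of Python (our addition, mirroring the Mathcad function above):

```python
def r_avg(n, L):
    # <r> in units of a0 for the hydrogen-atom state (n, L)
    return (3 * n**2 - L * (L + 1)) / 2

for n in range(1, 5):
    print(n, [r_avg(n, L) for L in range(n)])
# 1 [1.5]
# 2 [6.0, 5.0]
# 3 [13.5, 12.5, 10.5]
# 4 [24.0, 23.0, 21.0, 18.0]
```

For every n, $\langle r \rangle$ decreases monotonically with L, confirming the claim made in the text.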
On page 180 in Example Problem 9.2, Engel introduces the virial theorem. For systems with a Coulombic potential energy, such as the hydrogen atom, it is
$\langle V \rangle = 2 \langle E \rangle = -2 \langle T \rangle. \nonumber$
We will work with the version
$\dfrac{ \langle E \rangle}{\langle V \rangle} = 0.5. \nonumber$
The values of the energy, the so-called centrifugal potential energy, and the Coulombic potential energy are shown below as functions of the appropriate quantum numbers.
$E(n) \colon = \frac{-1}{2n^{2}} \qquad V_{centrifugal} (n,L) \colon = \frac{L(L+1)}{2n^{3} (L+ \frac{1}{2})} \qquad V_{coulomb} (n) \colon = - \frac{1}{n^{2}} \nonumber$
The calculations below show that the virial theorem is violated for any state for which L > 0.
$1s \quad n \colon = 1 \quad L \colon = 0 \quad \frac{E(n)}{V_{centrifugal} (n,L) + V_{coulomb} (n)} = 0.5 \nonumber$
$2s \quad n \colon = 2 \quad L \colon = 0 \quad \frac{E(n)}{V_{centrifugal} (n,L) + V_{coulomb} (n)} = 0.5 \nonumber$
$2p \quad n \colon = 2 \quad L \colon = 1 \quad \frac{E(n)}{V_{centrifugal} (n,L) + V_{coulomb} (n)} = 0.75 \nonumber$
$3s \quad n \colon = 3 \quad L \colon = 0 \quad \frac{E(n)}{V_{centrifugal} (n,L) + V_{coulomb} (n)} = 0.5 \nonumber$
$3p \quad n \colon = 3 \quad L \colon = 1 \quad \frac{E(n)}{V_{centrifugal} (n,L) + V_{coulomb} (n)} = 0.643 \nonumber$
$3d \quad n \colon = 3 \quad L \colon = 2 \quad \frac{E(n)}{V_{centrifugal} (n,L) + V_{coulomb} (n)} = 0.833 \nonumber$
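The six ratios above follow directly from the formulas for $E(n)$, $V_{centrifugal}(n,L)$, and $V_{coulomb}(n)$; a short Python check (our addition) reproduces them:

```python
def E(n):         return -1 / (2 * n**2)
def V_cent(n, L): return L * (L + 1) / (2 * n**3 * (L + 0.5))
def V_coul(n):    return -1 / n**2

def ratio(n, L):
    # E / (V_centrifugal + V_coulomb); the virial theorem demands 0.5
    return E(n) / (V_cent(n, L) + V_coul(n))

for label, n, L in [("1s", 1, 0), ("2s", 2, 0), ("2p", 2, 1),
                    ("3s", 3, 0), ("3p", 3, 1), ("3d", 3, 2)]:
    print(label, round(ratio(n, L), 3))
# 1s 0.5, 2s 0.5, 2p 0.75, 3s 0.5, 3p 0.643, 3d 0.833
```

As the text argues, the ratio deviates from 0.5 exactly when the "centrifugal potential" is counted as potential energy, i.e. for every state with L > 0.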
These calculations are now repeated eliminating the centrifugal term, showing that the virial theorem is satisfied and supporting the claim that the ʺcentrifugal potentialʺ is actually a kinetic energy term.
$1s \quad n \colon = 1 \quad L \colon = 0 \quad \frac{E(n)}{V_{coulomb} (n)} = 0.5 \nonumber$
$2s \quad n \colon = 2 \quad L \colon = 0 \quad \frac{E(n)}{V_{coulomb} (n)} = 0.5 \nonumber$
$2p \quad n \colon = 2 \quad L \colon = 1 \quad \frac{E(n)}{V_{coulomb} (n)} = 0.5 \nonumber$
$3s \quad n \colon = 3 \quad L \colon = 0 \quad \frac{E(n)}{V_{coulomb} (n)} = 0.5 \nonumber$
$3p \quad n \colon = 3 \quad L \colon = 1 \quad \frac{E(n)}{V_{coulomb} (n)} = 0.5 \nonumber$
$3d \quad n \colon = 3 \quad L \colon = 2 \quad \frac{E(n)}{V_{coulomb} (n)} = 0.5 \nonumber$
We finish by rewriting Equation \ref{eq1} with brackets showing that the first two terms are quantum kinetic energy and that the Coulombic term is the only potential energy term.
$\Bigg[ - \frac{1}{2r^{2}} \frac{d}{dr} (r^{2} \frac{d}{dr} R(r)) + \frac{L(L+1)}{2r^{2}} \cdot R(r) \Bigg] - \frac{1}{r} \cdot R(r) = E \cdot R(r) \nonumber$
In summary, the ʺcentrifugal potentialʺ and the concept of ʺeffective potential energyʺ are good examples of the danger in thinking classically about a quantum mechanical system. Furthermore, itʹs bad pedagogy to create fictitious forces and to mislabel energy contributions in a misguided effort to provide conceptual simplicity.
Further evidence that confinement energy is not kinetic energy is seen in the following analysis of the effect of lepton mass in the hydrogen atom. In what follows the electron is replaced by the muon and the tauon in the hydrogen atom. Positronium, in which the proton is replaced by the positron, is also examined. These analyses also provide a graphical illustration of the uncertainty principle.
Exploring the Role of Lepton Mass in the Hydrogen Atom
Under normal circumstances the the hydrogen atom consists of a proton and an electron. However, electrons are leptons and there are two other leptons which could temporarily replace the electron in the hydrogen atom. The other leptons are the muon and tauon, and their fundamental properties, along with those of the electron, are given in the following table.
$\begin{pmatrix} \text{Property} & e & \mu & \tau \\ \frac{Mass}{m_{e}} & 1 & 206.8 & 3491 \\ \frac{Effective\; Mass}{m_{e}} & 1 & 185.86 & 1203 \\ \frac{Life\; Time}{s} & Stable & 2.2 \times 10^{-6} & 3.0 \times 10^{-13} \end{pmatrix} \nonumber$
The purpose of this exercise is to demonstrate the importance of mass in atomic systems, and therefore also kinetic energy. Substitution of the deBroglie relation ($\lambda = \frac{h}{m \nu}$) into the classical expression for kinetic energy yields a quantum mechanical expression for kinetic energy. It is of utmost importance that in quantum mechanics, kinetic energy is inversely proportional to mass.
$T = \frac{1}{2} mv^{2} = \frac{h^{2}}{2m \lambda^{2}} \nonumber$
A more general and versatile quantum mechanical expression for kinetic energy is the differential operator shown below, where again mass appears in the denominator. An Approach to Quantum Mechanics outlines the origin of the kinetic energy operator.
Atomic units ($e = m_{e} = \frac{h}{2 \pi} = 4 \pi \epsilon_{0} = 1$) will be used in the calculations that follow. Please note that $\mu$ in the equations below is the effective mass and not a symbol for the muon.
Kinetic energy operator: $T= - \frac{1}{2 \mu r} \frac{d^{2}}{dr^{2}} (r \cdot \Box)$ Potential energy operator: $V = - \frac{1}{r} \cdot \Box$
Variational trial wave function with variational parameter $\beta$:
$\Psi (r, \beta) \colon = \left(\dfrac{\beta^{3}}{\pi}\right)^{1/2} \cdot \exp (- \beta \cdot r) \nonumber$
Evaluation of the variational energy integral:
$E(\beta , \mu) \colon = \int_{0}^{\infty} \Psi (r, \beta) \Bigg[ - \frac{1}{2 \mu r} \frac{d^{2}}{dr^{2}} (r \cdot \Psi (r, \beta)) \Bigg] 4 \pi r^{2} \; dr + \int_{0}^{\infty} \Psi (r, \beta) \cdot \left( - \frac{1}{r} \right) \cdot \Psi (r, \beta) \cdot 4 \pi r^{2}\; dr \bigg|_{\text{simplify}}^{\text{assume,}\; \beta > 0} \rightarrow \frac{\beta^{2}}{2 \mu} - \beta \nonumber$
Minimize the energy with respect to the variational parameter $\beta$.
$\frac{d}{d \beta} E( \beta, \mu) = 0\; \text{solve,}\; \beta \rightarrow \mu \nonumber$
Express the energy in terms of the effective mass:
$E( \beta , \mu)\; \text{substitute,} \; \beta = \mu \rightarrow - \frac{\mu}{2} \nonumber$
Using the virial theorem the kinetic and potential energy contributions are:
$T = \frac{\mu}{2} \qquad V = - \mu \nonumber$
Express the trial wave function in terms of the effective mass.
$\Psi (r, \beta)\; \text{substitute,}\; \beta = \mu \rightarrow \frac{e^{- \mu r} \sqrt{\mu^{3}}}{\sqrt{\pi}} \nonumber$
Demonstrate the effect of mass on the radial distribution function with plots of mass equal to 0.5, 1 and 2.
Calculate the expectation value for position to show that it is consistent with the graphical representation above. The more massive the lepton the closer it is on average to the proton.
$\int_{0}^{\infty} \Psi (r, \mu) \cdot r \cdot \Psi (r, \mu) \cdot 4 \pi r^{2}\; dr\; \text{assume,}\; \mu > 0 \rightarrow \frac{3}{2 \mu} \nonumber$
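The variational algebra above can be verified numerically. In this Python sketch (our addition), the energy integral is evaluated by quadrature and compared with the closed form $E(\beta,\mu) = \beta^{2}/(2\mu) - \beta$, whose minimum lies at $\beta = \mu$ with $E = -\mu/2$:

```python
from math import exp, pi, sqrt

def psi(r, b):
    # trial function sqrt(b^3/pi) exp(-b r)
    return sqrt(b**3 / pi) * exp(-b * r)

def E_numeric(b, mu, n=4000):
    # variational energy <T> + <V>, with T = -(1/(2 mu r)) d^2/dr^2 (r psi);
    # note (r psi)'' = sqrt(b^3/pi) exp(-b r) (b^2 r - 2 b)
    rmax, a = 30 / b, 1e-12          # a avoids r = 0 in the 1/r factors
    h = (rmax - a) / n
    def f(r):
        d2 = sqrt(b**3 / pi) * exp(-b * r) * (b * b * r - 2 * b)
        T = psi(r, b) * (-1 / (2 * mu * r)) * d2 * 4 * pi * r**2
        V = psi(r, b) * (-1 / r) * psi(r, b) * 4 * pi * r**2
        return T + V
    s = f(a) + f(rmax) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# closed form returned by the symbolic evaluation above
E_closed = lambda b, mu: b**2 / (2 * mu) - b

# scanning beta recovers beta = mu and E = -mu/2 (shown here for mu = 1/2)
b_best = min((k / 1000 for k in range(1, 3000)), key=lambda b: E_closed(b, 0.5))
```

For $\mu = 185.86$ and $\mu = 1203$ the same formulas give $E = -92.93$ and $-601.5\; E_h$, the values tabulated below.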
Summarize the calculated values for the physical properties of H$_{e}$, H$_{\mu}$ and H$_{\tau}$.
$\begin{pmatrix} \text{Species} & \frac{E}{E_{h}} & \frac{T}{E_{h}} & \frac{V}{E_{h}} & \frac{r_{avg}}{a_{0}} \\ H_{e} & \frac{-1}{2} & \frac{1}{2} & -1 & \frac{3}{2} \\ H_{\mu} & -92.93 & 92.93 & -185.86 & 8.07 \times 10^{-3} \\ H_{\tau} & -601.5 & 601.5 & -1203 & 1.25 \times 10^{-3} \end{pmatrix} \nonumber$
Now imagine that you have a regular hydrogen atom in its ground state and the electron is suddenly, by some mechanism, replaced by a muon. Nothing has changed from an electrostatic perspective, but the changes in energy and in the average distance of the lepton from the proton are very large. The ground state energy and the average distance from the nucleus both decrease by a factor of 185.86, the ratio of the effective masses of the muon and the electron.
This mass effect provides a challenge for those who think all atomic physical phenomena can be explained in terms of electrostatic potential energy effects. Of course, there is an even bigger problem for the potential energy aficionados, and that is the fundamental issue of atomic and molecular stability. Quantum mechanical kinetic energy effects are required to explain the stability of matter.
A Fourier transform of the coordinate wave function yields the corresponding momentum distribution and the opportunity to create a visualization of the uncertainty principle.
$\Phi (p, \mu) \colon = \frac{1}{4 \sqrt{\pi^{3}}} \int_{0}^{\infty} \exp(-ipr) \cdot \Psi (r, \mu) \cdot 4 \pi r^{2} \; dr \bigg|_{\text{simplify}}^{\text{assume,}\; \mu > 0} \rightarrow \frac{2 \mu^{3/2}}{\pi (\mu + p \cdot i)^{3}} \nonumber$
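The momentum distribution $|\Phi(p,\mu)|^{2} = 4\mu^{3}/[\pi^{2}(\mu^{2}+p^{2})^{3}]$ should integrate to unity for every lepton mass. The Python sketch below (our addition) confirms this with the substitution $p = \mu \tan u$, which tames the infinite integration range:

```python
from math import pi, tan, cos

def momentum_density(p, mu):
    # |Phi(p, mu)|^2 = 4 mu^3 / (pi^2 (mu^2 + p^2)^3)
    return 4 * mu**3 / (pi**2 * (mu**2 + p**2)**3)

def norm(mu, n=4000):
    # Int_0^inf |Phi|^2 4 pi p^2 dp, evaluated via p = mu tan(u)
    a, b = 0.0, pi / 2 - 1e-9
    h = (b - a) / n
    def g(u):
        p = mu * tan(u)
        return 4 * pi * p**2 * momentum_density(p, mu) * mu / cos(u)**2
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3
```

Heavier leptons give spatial distributions that contract (toward the nucleus) while their momentum distributions broaden; normalization holds for $\mu$ = 1, 1/2, or 185.86 alike, a neat numerical illustration of the uncertainty principle.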
Replacing the proton with a positron, the electron's anti-particle, creates another exotic atom, positronium (Ps). In its singlet ground state electron-positron annihilation occurs in 125 ps creating two $\gamma$ rays. Positronium's ($\mu$ = 1/2) spatial and momentum distributions are shown in Figures 1 and 2. A revised table including positronium is provided below.
$\begin{pmatrix} \text{Species} & \frac{E}{E_{h}} & \frac{T}{E_{h}} & \frac{V}{E_{h}} & \frac{r_{avg}}{a_{0}} \\ H_{e} & - \frac{1}{2} & \frac{1}{2} & -1 & \frac{3}{2} \\ H_{\mu} & -92.93 & 92.93 & -185.86 & 8.07 \times 10^{-3} \\ H_{\tau} & -601.5 & 601.5 & -1203 & 1.25 \times 10^{-3} \\ Ps & - \frac{1}{4} & \frac{1}{4} & - \frac{1}{2} & 3 \end{pmatrix} \nonumber$
Many in the chemical education community teach chemical bonding as simply an electrostatic phenomenon. I and many others have argued against this incorrect, simplistic view on many occasions. In an effort to get a better understanding let’s look at an overview of the nature of the chemical bond written by Frank E. Harris many years ago.
The Chemical Bond and Quantum Mechanics*
The behavior of electrons in molecules and atoms is described by quantum mechanics; classical (Newtonian) mechanics cannot be used because the de Broglie wavelengths ($\lambda= \frac{h}{m \nu}$) of the electrons are comparable with molecular (and atomic) dimensions. The relevant quantum-mechanical ideas are as follows:
• Electrons are characterized by their entire distributions (called wave functions or orbitals) rather than by instantaneous positions and velocities: an electron may be considered always to be (with appropriate probability) at all points of its distribution (which does not vary with time).
• The kinetic energy of an electron decreases as the volume occupied by the bulk of its distribution increases, so delocalization lowers its kinetic energy. $KE= \frac{p^{2}}{2m} = \frac{h^{2}}{2m \lambda^{2}} \approx \frac{A}{D^{2}} \approx \frac{A}{V^{2/3}}$
• The potential energy of interaction between an electron and other charges is as calculated by classical physics, using the appropriate distribution (wave function) for the electron: an electron distribution is therefore attracted by nuclei and its potential energy decreases as the average electron-nuclear distance decreases. $PE \approx - \frac{B}{D} \approx - \frac{B}{V^{1/3}}$
• A minimum-energy electron distribution represents the best compromise between concentration near the nuclei (to reduce potential energy) and delocalization (to reduce kinetic energy). $E = KE + PE \approx \frac{A}{V^{2/3}} - \frac{B}{V^{1/3}}$
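Harris's schematic energy $E(V) \approx A/V^{2/3} - B/V^{1/3}$ has exactly the "best compromise" minimum described above: setting $dE/dV = 0$ gives $V_{min} = (2A/B)^{3}$ and $E_{min} = -B^{2}/(4A)$. A quick Python check (our addition; the unit constants $A = B = 1$ are arbitrary) locates it numerically:

```python
def E(V, A=1.0, B=1.0):
    # Harris's schematic energy: confinement (kinetic) term A/V^(2/3)
    # balanced against a Coulombic (potential) term -B/V^(1/3)
    return A / V**(2 / 3) - B / V**(1 / 3)

# scan for the compromise volume; analytically V_min = (2A/B)^3 = 8
# and E_min = -B^2/(4A) = -0.25 for A = B = 1
V_min = min((k / 1000 for k in range(1, 100001)), key=E)
print(round(V_min, 2), round(E(V_min), 4))  # 8.0 -0.25
```

Too small a volume and the kinetic term dominates (E rises); too large and the Coulombic attraction is squandered; the minimum sits between.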
A bond will form between two atoms when the electron distribution of the combined atoms (molecular orbital) yields a significantly lower energy than the separate-atom distributions (atomic orbitals). An example is a covalent bond, in which two electrons, one originally on each atom, change their distributions so that each extends over both atoms.
* Taken from "Molecules" in The Encyclopedia of Physics by Frank E. Harris (with some additions and modifications by Frank Rioux)
We go deeper with the following treatment of the covalent bond in the hydrogen molecule ion using the virial theorem (John C. Slater) and ab initio quantum mechanics (Klaus Ruedenberg). In my opinion, Slater and Ruedenberg are the true pioneers in understanding the covalent chemical bond. Many books have been written about the chemical bond, but few are as insightful as the papers published by Slater and Ruedenberg.
Two Analyses of the Covalent Bond Using the Virial Theorem
Atomic and molecular stability and spectroscopy, and the nature of the chemical bond cannot be explained using classical physics: quantum mechanical principles are required. Bohr was a pioneer in an effort to apply an early version of quantum theory to these important issues with his models of the hydrogen atom and hydrogen molecule. Of course, Bohr's approach became "old" quantum mechanics and was abandoned in the 1920s with the creation of a "new" quantum mechanics by Heisenberg and Schrödinger and their collaborators. This tutorial deals exclusively with the chemical bond and summarizes Slater's contribution to our current understanding of the energetics of its formation using the virial theorem. Since the acceptance of the "new" quantum mechanics many others have contributed to the interpretation of chemical bond formation, but like Bohr, Slater was an insightful pioneer.
In a seminal paper, J.Chem. Phys. 1933, 1, 687-691, John C. Slater used the virial theorem to analyze chemical bond formation and was the first (in my opinion) to recognize the importance of kinetic energy in covalent bond formation. Regarding the universally valid virial theorem he wrote,
...this theorem gives a means of finding kinetic and potential energy separately for all configurations of the nuclei, as soon as the total energy is known, from experiment or theory.
The purpose of this tutorial is to demonstrate the validity of this assertion. The experimental method employs a Morse function for the energy of the hydrogen molecule ion parametrized using spectroscopic data. The theoretical approach is based on an ab initio LCAO-MO calculation of the energy using a molecular orbital consisting of a superposition of scaled hydrogen atomic orbitals.
Experimental Method
The kinetic energy, potential energy and total energy of the hydrogen molecule ion are calculated as a function of R using the virial theorem and a Morse function for the total energy based on experimental parameters.
Total energy: $E(R) = T(R) + V(R) \tag{1}$
Virial theorem: $2 \cdot T(R) + V(R) = - R \frac{d}{dR} E(R) \tag{2}$
Morse parameters: $D_{e} : = 0.103 \cdot E_{h} \qquad R_{e} : = 2.003 \cdot a_{0} \qquad \beta : = 0.708 a_{0}^{-1}$
These parameters are from Levine, I. N. Quantum Chemistry, 6th ed., p. 400.
Morse Function: $E(R) : = D_{e} (1- \exp [ - \beta (R - R_{e})])^{2} - D_{e} \tag{3}$

Using equation (1) to eliminate V(R) in equation (2) yields an equation for kinetic energy as a function of the internuclear separation. Using equation (1) to eliminate T(R) in equation (2) yields an equation for potential energy as a function of the internuclear separation.

$T(R) : = -E(R) - R \frac{d}{dR} E(R) \tag{4}$
$V(R) : = 2 E(R) + R \frac{d}{dR} E(R) \tag{5}$
These important equations determine the mean kinetic and potential energies as functions of R, one might almost say experimentally, directly from the curves of E as a function of R which can be found from band spectra. The theory is so simple and direct that one can accept the results without question....
$R : = 0.6 a_{0}, 0.65 a_{0} \ldots 8a_{0} \nonumber$
While the Morse calculation has an empirical flavor to it, its results are interpreted in terms of rudimentary quantum theory concepts. This energy profile shows that as the protons approach, the potential energy rises because molecular orbital formation (constructive interference) draws electron density away from the nuclei into the internuclear region. The kinetic energy decreases because molecular orbital formation brings about electron delocalization.
At about 4.1a0 this trend reverses as atomic orbital contraction begins. The kinetic energy rises because the volume occupied by the electron decreases and the potential energy decreases for the same reason: a smaller electronic volume brings the electron closer to the nuclei on average. Only at R ~ 1a0 does nuclear repulsion cause the total potential energy to increase and contribute to the repulsion energy of the molecule. In other words, the kinetic energy increase is the immediate cause of the energy minimum or ground state.
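Slater's virial construction is easy to check numerically. The Python sketch below (our addition; the finite-difference derivative and scan grid are ours) builds $T(R)$ and $V(R)$ from the Morse parameters quoted above and locates the kinetic-energy reversal:

```python
from math import exp

De, Re, beta = 0.103, 2.003, 0.708      # Morse parameters for H2+ (atomic units)

def E(R):  return De * (1 - exp(-beta * (R - Re)))**2 - De

def dE(R, h=1e-6):                       # central-difference derivative
    return (E(R + h) - E(R - h)) / (2 * h)

def T(R):  return -E(R) - R * dE(R)      # eq. (4): kinetic energy via the virial theorem
def V(R):  return 2 * E(R) + R * dE(R)   # eq. (5): potential energy

# At Re the force vanishes, so T(Re) = De and V(Re) = -2 De.
print(round(T(Re), 3), round(V(Re), 3))  # 0.103 -0.206
# The kinetic-energy decline reverses where T(R) is minimal, near R ~ 4 a0.
R_rev = min((k / 100 for k in range(100, 801)), key=T)
print(R_rev)
```

The scan places the turning point near 4 $a_0$, consistent with the reversal at about 4.1 $a_0$ described in the text.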
Theoretical Method
The following LCAO-MO calculation uses a superposition of scaled 1s atomic orbitals and yields the following result for the energy of the hydrogen molecule ion as a function of the internuclear distance and the orbital scale factor.
$1s_{a} = \sqrt{\frac{\alpha^{3}}{\pi}} \exp (- \alpha r_{a}) \nonumber$
$1s_{b} = \sqrt{\frac{\alpha^{3}}{\pi}} \exp (- \alpha r_{b}) \nonumber$
$S_{ab} = \int 1s_{a} \cdot 1s_{b} d \tau \nonumber$
$\Psi_{mo} = \frac{1s_{a} + 1s_{b}}{\sqrt{2+2S_{ab}}} \nonumber$
$E(\alpha , R) : = \frac{- \alpha^{2}}{2} + \frac{[\alpha^{2} - \alpha - \frac{1}{R} + \frac{1 + \alpha R}{R} \exp (-2 \alpha R) + \alpha (\alpha - 2) \cdot (1 + \alpha R) \exp (- \alpha R)]}{[1+ \exp(- \alpha R) \cdot (1 + \alpha R + \frac{\alpha^{2} R^{2}}{3})]} + \frac{1}{R} \nonumber$
$\alpha$ := 1 Energy := -2
$\text{Given}\; Energy = E (\alpha ,R) \quad \frac{d}{d \alpha} E( \alpha ,R) = 0 \quad Energy(R) := Find(\alpha, Energy) \nonumber$
As noted above, using $E=T+V$ with the virial theorem $2T+V=-R \frac{d}{dR} E$ leads to the following expressions for T and V.
$T(R) := - Energy(R)_{1} - R \frac{d}{dR} Energy (R)_{1} \qquad V(R) := 2 \cdot Energy(R)_{1} + R \frac{d}{dR} Energy(R)_{1} \qquad R := 0.2, 0.25 \ldots 10 \nonumber$
It is clear that both methods lead to very similar energy profiles. The only significant difference is that the Morse calculation leads to a lower energy minimum, -0.1026 Eh versus -0.0865 Eh for the molecular orbital calculation. Therefore, the interpretation provided for the Morse energy profile is valid for the LCAO-MO profile.
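The Mathcad Given/Find block can be emulated with a brute-force double scan. The following Python sketch (our addition; the grid search replaces the symbolic solver) minimizes the LCAO-MO energy expression over both $\alpha$ and R:

```python
from math import exp

def E(a, R):
    # scaled-LCAO energy of H2+ (a = orbital exponent, R = bond length, a.u.)
    num = (a * a - a - 1 / R + (1 + a * R) * exp(-2 * a * R) / R
           + a * (a - 2) * (1 + a * R) * exp(-a * R))
    den = 1 + exp(-a * R) * (1 + a * R + (a * R)**2 / 3)
    return -a * a / 2 + num / den + 1 / R

def Emin(R):
    # minimize over the scale factor alpha by a simple scan
    return min(E(k / 1000, R) for k in range(800, 2001))

R_best = min((k / 100 for k in range(100, 401)), key=Emin)
E_best = Emin(R_best)
print(round(R_best, 1), round(E_best, 4))  # 2.0 -0.5865
```

The total energy minimum, about $-0.5865\; E_h$ near R = 2.0 $a_0$, corresponds to the binding energy of $-0.0865\; E_h$ (relative to H + H$^+$ at $-0.5\; E_h$) quoted above.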
Conversion factors: $a_{0} = 5.29177 \cdot 10^{-11}\; m \qquad E_{h} = 4.359748 \cdot 10^{-18}\; joule$
The following link provides graphical displays of the electron density in the hydrogen molecule ion.
The following calculation shows that the lepton mass effect in molecules is the same as it is in atoms. This mass effect provides a challenge for those who think atomic and molecular stability can be explained solely in terms of electrostatic potential energy effects. The mass effect is important because quantum mechanical kinetic energy (confinement energy) is inversely proportional to mass, whereas classical kinetic energy is directly proportional to mass.
A Molecular Orbital Calculation for H2+
An ab initio molecular orbital calculation yields the following result for the energy of the hydrogen molecule ion as a function of the internuclear separation, lepton mass (highlighted below) and the orbital decay constant.
$1s_{a}=\sqrt{\frac{\alpha^{3}}{\pi}} \cdot \exp \left(-\alpha \cdot r_{a}\right) \qquad 1s_{b}=\sqrt{\frac{\alpha^{3}}{\pi}} \cdot \exp \left(-\alpha \cdot r_{b}\right) \qquad S_{ab}=\int 1s_{a} \cdot 1s_{b}\; d \tau \qquad \Psi_{mo}=\frac{1s_{a}+1s_{b}}{\sqrt{2+2 \cdot S_{ab}}} \nonumber$
$m : = 0.5 \quad \mathrm{E}(\alpha, \mathrm{R}) :=\frac{-\alpha^{2}}{2 \cdot \mathrm{m}}+\frac{\frac{\alpha^{2}}{\mathrm{m}}-\alpha-\frac{1}{\mathrm{R}}+\frac{1}{\mathrm{R}} \cdot(1+\alpha \cdot \mathrm{R}) \cdot \exp (-2 \cdot \alpha \cdot R)+\alpha \cdot\left(\frac{\alpha}{m}-2\right) \cdot(1+\alpha \cdot R) \cdot \exp (-\alpha \cdot R)}{1+\exp (-\alpha \cdot R) \cdot\left(1+\alpha \cdot R+\frac{\alpha^{2} \cdot R^{2}}{3}\right)}+\frac{1}{R} \nonumber$
$\alpha : = 1 \quad \text{Energy} : = -2 \qquad \text{Given} \quad \text{Energy} = E(\alpha, R) \quad \frac{d}{d \alpha} E( \alpha ,R) = 0 \quad \text{Energy(R)} : = \text{Find} (\alpha , \text{Energy}) \nonumber$
Using $E = T + V$ in the virial theorem $2 T + V = -R \cdot \frac{d}{dR} E$ yields expressions for T and V.
$R : = 0.2, 0.3 \ldots 20 \quad \mathrm{T}(\mathrm{R}) :=-\text { Energy }(\mathrm{R})_{1}-\mathrm{R} \cdot \frac{\mathrm{d}}{\mathrm{dR}} \text { Energy }(\mathrm{R})_{1} \quad \mathrm{V}(\mathrm{R}) :=2 \cdot \text { Energy }(\mathrm{R})_{1}+\mathrm{R} \cdot \frac{\mathrm{d}}{\mathrm{dR}} \text { Energy( R } )_{1} \nonumber$
Note that halving the lepton mass reduces the ground state energy by half and doubles the bond length. The same mass effect was found earlier for the hydrogen atom.
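The mass-scaling claim can be tested directly. In this Python sketch (our addition; the nested grid search stands in for Mathcad's solver), the mass-dependent energy expression above is minimized for m = 1 and m = 0.5 and the ratios compared:

```python
from math import exp

def E(a, R, m):
    # LCAO-MO energy of H2+ with lepton mass m (from the expression above)
    num = (a * a / m - a - 1 / R + (1 + a * R) * exp(-2 * a * R) / R
           + a * (a / m - 2) * (1 + a * R) * exp(-a * R))
    den = 1 + exp(-a * R) * (1 + a * R + (a * R)**2 / 3)
    return -a * a / (2 * m) + num / den + 1 / R

def minimize(m):
    # crude nested scan over bond length R and orbital exponent a
    def Emin(R):
        return min(E(k / 500, R, m) for k in range(150, 1200))
    R_best = min((k / 50 for k in range(25, 451)), key=Emin)
    return R_best, Emin(R_best)

R1, E1 = minimize(1.0)   # electron mass
R2, E2 = minimize(0.5)   # half the lepton mass
print(round(R2 / R1, 1), round(E2 / E1, 1))  # 2.0 0.5
```

The scan confirms the scaling relations $E(m) = m\,E(1)$ and $R(m) = R(1)/m$: halving the mass halves the (negative) ground state energy and doubles the bond length.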
Some might feel uncomfortable relying on a one-electron molecule to gain an understanding of the chemical bond. After all, don't chemical bonds consist of electron pairs? So we move to the hydrogen molecule for a quantum analysis of the more traditional two-electron bond. The only new contribution to the total energy is electron-electron potential energy, and the significance of confinement energy in understanding molecular stability survives.
I apologize for the considerable overlap in the following, but it does provide some additional interpretive graphics. It also contains references to the publications of Slater and Ruedenberg.
The Chemical Bond According to Slater and Ruedenberg
John Slater pioneered the use of the virial theorem in interpreting the chemical bond in a benchmark paper published in the inaugural volume of the Journal of Chemical Physics (1). This early study indicated that electron kinetic energy played an important role in bond formation. Thirty years later Klaus Ruedenberg and his collaborators published a series of papers (2,3,4) detailing the crucial role that kinetic energy plays in chemical bonding, thereby completing the project that Slater started. This Mathcad worksheet recapitulates Slater's use of the virial theorem in studying chemical bond formation and summarizes Ruedenberg's final analysis.
Equation \ref{1} gives the virial theorem for a diatomic molecule as a function of internuclear separation, while Equation \ref{2} is valid at the energy minimum.

$2 \cdot T(R) + V(R) = -R \frac{d}{dR} E(R) \label{1}$

$E(R_{e}) = \frac{V(R_{e})}{2} = -T(R_{e}) \label{2}$
At non-equilibrium values for the internuclear separation the virial equation can be used, with $E = T + V$, to obtain equations for the kinetic and potential energy as a function of the inter-nuclear separation.
$T(R) : = -E(R)-R \frac{d}{dR} E(R) \label{3}$
$V(R) : = 2 E(R) + R \frac{d}{dR} E(R) \label{4}$
Thus, if $E(R)$ is known one can calculate $T(R)$ and $V(R)$ and provide a detailed energy profile for the formation of a chemical bond. $E(R)$ can be provided from spectroscopic data or from ab initio quantum mechanics. In this examination of the chemical bond we employ the empirical approach and use spectroscopic data for the hydrogen molecule to obtain the parameters (highlighted below) for a model of the chemical bond based on the Morse function (5). However, it should be noted that quantum mechanics tells the same story and yields an energy profile just like that shown in the following figure.
$R \equiv 30, 30.2 \ldots 400 \quad D_{e} \equiv 0.761 \quad \beta \equiv 0.0193 \quad R_{e} \equiv 74.1$
$E(R) \equiv [D_{e} (1- \exp [- \beta (R - R_{e})])^{2} - D_{e}] \nonumber$
The dissociation energy, $D_e$, is given in atto ($10^{-18}$) joules, the internuclear separation in picometers, and the constant $\beta$ in inverse picometers.
This energy profile shows that as the internuclear separation decreases, the potential energy rises, falls, and then rises again. The kinetic energy first decreases and then increases at about the same internuclear distance that the potential energy begins to decrease.
As the molecular orbital is formed at large $R$, constructive interference between the two overlapping atomic orbitals draws electron density away from the nuclear centers into the internuclear region. The potential energy rises as electron density is drawn away from the nuclei, but the total energy decreases because of a larger decrease in kinetic energy due to charge delocalization. Thus a decrease in kinetic energy funds the initial build-up of charge between the nuclei that we normally associate with chemical bond formation.
Following this initial phase, at an internuclear separation of about 180 pm the potential energy begins to decrease and the kinetic energy increases, both sharply (eventually), while the total energy continues to decrease gradually. This is an atomic effect, not a molecular one as Ruedenberg so clearly showed. The initial transfer of charge away from the nuclei and into the bond region allows the atomic orbitals to contract significantly ($\alpha$ increases) causing a large decrease in potential energy because the electron density has moved, on average, closer to the nuclei. The kinetic energy increases because the orbitals are smaller and kinetic energy increases inversely with the square of the orbital radius.
An energy minimum is reached while the potential energy is still in a significant decline (6), indicating that kinetic energy is the immediate cause of a stable bond and the molecular ground state in H2. The final increase in potential energy, which is due mainly to nuclear-nuclear repulsion and not electron-electron repulsion, doesn't begin until the internuclear separation is less than 50 pm, while the equilibrium bond length is 74 pm. Thus the common explanation that an energy minimum is reached because of nuclear-nuclear repulsion does not have merit.
The H2 ground state, E = -0.761 aJ, is reached at an internuclear separation of 74 pm (1.384 a0). In light of the previous arguments it is instructive to partition the total H2 electron density into atomic and molecular contributions. Each electron is in a molecular orbital which is a linear combination of hydrogenic 1s orbitals, as is shown below.
$\Psi_{mo} = \frac{\Psi_{1sa} + \Psi_{1sb}}{\sqrt{2 + 2S_{ab}}} \quad where \quad \Psi_{1sa} = \sqrt{\frac{\alpha^{3}}{\pi}} \exp (- \alpha r_{a}) \quad \Psi_{1sb} = \sqrt{\frac{\alpha^{3}}{\pi}} \exp (- \alpha r_{b}) \nonumber$
The total electron density is therefore 2$\Psi_{MO}^{2}$. The atomic, or non-bonding, electron density is given by the following equation, which represents the electron density associated with two non-interacting atomic orbitals.
$\rho_{n} = 2 \left( \Bigg| \dfrac{\Psi_{1sa} + i \Psi_{1sb}}{\sqrt{2}} \Bigg| \right)^{2} = \Psi_{1sa}^{2} + \Psi_{1sb}^{2} \nonumber$
Clearly, the bonding electron density must be the difference between the total electron density and the non-bonding, or atomic, electron density.
$\rho_{b} = 2 \Psi_{MO}^{2} - \rho_{n} = \rho_{t} - \rho_{n} \nonumber$
These three terms are plotted along the bond axis in the figure below. Alpha is the optimum orbital scale factor, Sab is the overlap integral, and R is the equilibrium internuclear distance in atomic units.
$\alpha : = 1.197 \quad S_{ab} : = 0.681 \quad R : = 1.384 \quad y : = 0 \quad z : = 0 \quad x : = -3, -2.99 \ldots 5$
$\rho_{t} (\alpha, x, y, z) : = \frac{\frac{\alpha^{3}}{\pi} \bigg[ \exp (- \alpha \sqrt{x^{2} + y^{2} + z^{2}}) + \exp (- \alpha \sqrt{(x - R)^{2} + y^{2} + z^{2}}) \bigg]^{2}}{1 + S_{ab}} \nonumber$
$\rho_{n} (\alpha, x, y, z) : = \frac{\alpha^{3}}{\pi} \bigg[ \exp (-2 \alpha \sqrt{x^{2} + y^{2} + z^{2}}) + \exp (-2 \alpha \sqrt{(x-R)^{2} + y^{2} + z^{2}}) \bigg] \nonumber$
$\rho_{b} (\alpha, x, y, z) : = \rho_{t} (\alpha, x, y, z) - \rho_{n} (\alpha, x, y, z) \nonumber$
The bonding electron density illustrates that constructive interference accompanying atomic orbital overlap transfers charge from the nuclei into the internuclear region, while the non-bonding density clearly shows the subsequent atomic orbital contraction which draws some electron density back toward the nuclei.
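The Mathcad density functions above translate line-for-line into Python. The following sketch (an added check, not part of the original worksheet) evaluates the three densities at the bond midpoint and at a nucleus, confirming numerically the charge transfer just described: the bonding density is positive between the nuclei and negative at them.

```python
import math

alpha, Sab, R = 1.197, 0.681, 1.384  # optimum scale factor, overlap integral, bond length (a0)

def rho_t(x, y=0.0, z=0.0):
    # total electron density, 2*Psi_mo^2, for a point (x, y, z) on/near the bond axis
    ra = math.sqrt(x**2 + y**2 + z**2)
    rb = math.sqrt((x - R)**2 + y**2 + z**2)
    return (alpha**3 / math.pi) * (math.exp(-alpha * ra) + math.exp(-alpha * rb))**2 / (1 + Sab)

def rho_n(x, y=0.0, z=0.0):
    # non-bonding (atomic) density: two non-interacting 1s orbitals
    ra = math.sqrt(x**2 + y**2 + z**2)
    rb = math.sqrt((x - R)**2 + y**2 + z**2)
    return (alpha**3 / math.pi) * (math.exp(-2 * alpha * ra) + math.exp(-2 * alpha * rb))

def rho_b(x):
    # bonding density = total density minus atomic density
    return rho_t(x) - rho_n(x)

mid = rho_b(R / 2)   # bond midpoint: charge build-up, positive
nuc = rho_b(0.0)     # at a nucleus: charge depletion, negative
```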
Literature cited:
1. Slater, J. C. J. Chem. Phys. 1933, 1, 687-691.
2. Ruedenberg, K. Rev. Mod. Phys. 1962, 34, 326-352.
3. Feinberg, M. J.; Ruedenberg, K. J. Chem. Phys. 1971, 54, 1495-1511; 1971, 55, 5804-5818.
4. Feinberg, M. J.; Ruedenberg, K.; Mehler, E. L. Adv. Quantum Chem. 1970, 5, 27-98.
5. McQuarrie, D. A.; Simon, J. D. Physical Chemistry: A Molecular Approach, University Science Books, Sausalito, CA, 1997, p. 165.
6. Slater, J. C. Quantum Theory of Matter, Krieger Publishing, Huntington, N.Y., 1977, pp. 405-408.
7. Rioux, F. The Chemical Educator, 1997, 2, No. 6.
I close with a rather mystical description of the chemical bond by Charles A. Coulson, the author of Valence, an influential monograph on the chemical bond published in 1952.
Sometimes it seems to me that a bond between two atoms has become so real, so tangible, so friendly, that I can almost see it. Then I awake with a little shock, for a chemical bond is not a real thing. It does not exist. No one has ever seen one. No one ever will. It is a figment of our own imagination.... Here is a strange situation. The tangible, the real, the solid, is explained by the intangible, the unreal, the purely mental.
Addendum: The Bohr Model of Atomic and Molecular Stability
The purpose of this study was to outline the success of quantum mechanics in explaining the stability and structure of matter. But quantum mechanics didn’t emerge out of a vacuum; it had a precursor – the Bohr model. So I thought it appropriate to take a brief look at that precursor and present its explanation of atomic and molecular stability.
My starting point (for the development of the Bohr model) was not at all the idea that an atom is a small-scale planetary system and as such governed by the laws of astronomy. I never took things as literally as that. My starting point was rather the stability of matter, a pure miracle when considered from the standpoint of classical physics. -Niels Bohr
All matter consists of elements that are made up of electrons, protons and neutrons. The electron and the proton carry opposite charges and are therefore attracted to one another; this attraction is precisely the problem classical physics faces in explaining the stability of matter. What keeps these oppositely charged building blocks apart? There is no more fundamental question in science, and Bohr was the first to attempt an explanation of atomic stability by creating, by fiat, the necessary rudimentary quantum mechanical concepts.
We begin with the Bohr model for the simplest element, the hydrogen atom. In the following tutorial the Bohr model is enriched with de Broglie’s hypothesis of wave properties for the electron. This allows the reinterpretation of some of the apparently arbitrary features of Bohr’s initial atomic model.
A deBroglie‐Bohr Model for the Hydrogen Atom
The 1913 Bohr model of the hydrogen atom was replaced by Schrödingerʹs wave mechanical model in 1926. However, Bohrʹs model is still profitably taught today because of its conceptual and mathematical simplicity, and because it introduced a number of key quantum mechanical ideas such as the quantum number, quantization of observable properties, quantum jump and stationary state.
Bohr calculated the manifold of allowed electron energies by balancing the mechanical forces (centripetal and electron‐nucleus) on an electron executing a circular orbit of radius R about the nucleus, and then arbitrarily quantizing its angular momentum. Finally, by fiat he declared that the electron was in a non‐radiating stationary state, because an orbiting (accelerating) charge radiates energy and would collapse into the oppositely charged nucleus. In 1924 de Broglie postulated wave‐particle duality for the electron and other massive particles, thereby providing the opportunity to remove some of the arbitrariness from Bohrʹs model. For example, an electron possessing wave properties is subject to constructive and destructive interference. As will be shown this leads naturally to quantization of electron momentum and kinetic energy, and consequently a manifold of allowed energy states for the electron relative to the nucleus.
The de Broglie‐Bohr model of the hydrogen atom presented here treats the electron as a particle on a ring with wave‐like properties.
$\lambda = \frac{h}{m_{e} \nu}$ de Broglieʹs hypothesis that matter has wave‐like properties
$n \lambda = 2 \pi r$ The consequence of de Broglieʹs hypothesis; an integral number of wavelengths must fit within the circumference of the orbit. This introduces the quantum number which can have values 1,2,3,... The n = 4 electron state is shown below.
$m_{e} \nu = \frac{n \cdot h}{2 \pi r}$ Substitution of the first equation into the second equation reveals that momentum is quantized.
$T = \frac{1}{2} m_{e} \nu^{2} = \frac{n^{2} h^{2}}{8 \pi^{2} m_{e} r^{2}}$ If momentum is quantized, so is kinetic energy.
$E = T + V = \frac{n^{2} h^{2}}{8 \pi^{2} m_{e} r^{2}} - \frac{e^{2}}{4 \pi \epsilon_{0} r}$ Which means that total energy is quantized. The second term is the electron‐proton electrostatic potential energy.
The quantum mechanical interpretation of these ʺBohr orbitsʺ is that they are stationary states. In spite of the fact that we use the expression kinetic energy, which implies electron motion, there is no motion. The electron occupies the orbit as a particle‐wave; it is not orbiting the nucleus. If it were orbiting in a classical sense it would radiate energy and quickly collapse into the nucleus. Clearly the stability of matter requires the quantum mechanical version of kinetic energy.
The ground state energy and orbit radius of the electron in the hydrogen atom is found by plotting the energy as a function of the orbital radius. The ground state is the minimum in the total energy curve. Naturally calculus can be used to obtain the same information by minimizing the energy with respect to the orbit radius. However, the graphical method has the virtue of illuminating the issue of atomic stability.
Fundamental constants: electron charge, electron mass, Planckʹs constant, vacuum permittivity. $e : = 1.6021777 \times 10^{-19}\; coul$ $m_{e} : = 9.10939 \times 10^{-31}\; kg$
$h : = 6.62608 \times 10^{-34}\; joule \cdot sec$ $\epsilon_{0} : = 8.85419 \times 10^{-12}\; \frac{coul^{2}}{joule \cdot m}$
Quantum number and conversion factor between meters and picometers and joules and attojoules. $n : = 1 \qquad pm : = 10^{-12}\; m \qquad ajoule : = 10^{-18}\; joule$
$r : = 20, 20.5 \ldots 500\; pm \quad T(r) : = \frac{n^{2} h^{2}}{8 \pi^{2} m_{e} r^{2}} \quad V(r) : = - \frac{e^{2}}{4 \pi \epsilon_{0} r} \quad E(r) : = T(r) + V(r) \nonumber$
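The graphical search for the energy minimum can be reproduced in Python (a sketch added here, not part of the original worksheet); like the Mathcad range above, it scans r from 20 to 500 pm and locates the minimum of the total energy.

```python
import math

# Fundamental constants (SI), as listed in the worksheet above
e = 1.6021777e-19      # C
me = 9.10939e-31       # kg
h = 6.62608e-34        # J s
eps0 = 8.85419e-12     # C^2 / (J m)
n = 1

def E(r):
    # total energy: quantized kinetic energy plus Coulomb potential energy
    T = n**2 * h**2 / (8 * math.pi**2 * me * r**2)
    V = -e**2 / (4 * math.pi * eps0 * r)
    return T + V

pm = 1e-12
radii = [k * 0.1 * pm for k in range(200, 5001)]   # 20 pm to 500 pm in 0.1 pm steps
r_min = min(radii, key=E)                          # radius of lowest total energy
E_min_eV = E(r_min) / e                            # ground-state energy in eV
```

The grid minimum lands at the Bohr radius, about 52.9 pm, with an energy of about -13.6 eV, matching the values quoted later in the chapter.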
This figure shows that atomic stability involves a balance between potential and kinetic energy. The electron is drawn toward the nucleus by the attractive potential energy interaction ($\sim -1/R$), but is prevented from collapsing into the nucleus by the extremely large kinetic energy ($\sim 1/R^{2}$) associated with small orbits.
As shown below, the graphical approach can also be used to find the electronic excited states.
$n : = 2 \quad T(r) : = \frac{n^{2} h^{2}}{8 \pi^{2} m_{e} r^{2}} \quad V(r) : = - \frac{e^{2}}{4 \pi \epsilon_{0} r} \quad E(r) : = T(r) + V(r) \nonumber$
As mentioned earlier the manifold of allowed electron energies can also be obtained by minimizing the energy with respect to the orbit radius. This procedure yields,
$E_{n} = - \frac{m_{e} e^{4}}{2(4 \pi \epsilon_{0})^{2} \hbar^{2}} \frac{1}{n^{2}} \quad and \quad r_{n} = \frac{4 \pi \epsilon_{0} \hbar^{2}}{m_{e} e^{2}} n^{2} \nonumber$
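Evaluating these closed-form expressions with the constants used earlier reproduces the familiar Bohr values (a quick numerical check added here, not part of the original text); note $\hbar = h/2\pi$.

```python
import math

e = 1.6021777e-19; me = 9.10939e-31      # C, kg
h = 6.62608e-34; eps0 = 8.85419e-12      # J s, C^2/(J m)
hbar = h / (2 * math.pi)

def E_n(n):
    # allowed energies in joules: E_n = -me*e^4 / (2*(4*pi*eps0)^2*hbar^2*n^2)
    return -me * e**4 / (2 * (4 * math.pi * eps0)**2 * hbar**2 * n**2)

def r_n(n):
    # allowed orbit radii in meters: r_n = 4*pi*eps0*hbar^2*n^2 / (me*e^2)
    return 4 * math.pi * eps0 * hbar**2 * n**2 / (me * e**2)

E1_eV = E_n(1) / e        # ground-state energy in eV (about -13.6)
r1_pm = r_n(1) / 1e-12    # Bohr radius in pm (about 52.9)
```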
The Bohr model gives correct results for any one-electron atom or ion. The following tutorial shows that it is also accurate when applied to the rather esoteric and short-lived hydrogen atom analog positronium, in which the proton is replaced by the positron, the electron’s antiparticle.
A deBroglie-Bohr Model for Positronium
Positronium is a metastable bound state consisting of an electron and its positron antiparticle. In other words it might be thought of as a hydrogen atom in which the proton is replaced by a positron. Naturally it decays quickly after formation due to electron-positron annihilation. However, it exists long enough for its ground state energy, -0.25 Eh, to be determined. The purpose of this tutorial is to calculate this value using the Bohr model for positronium shown below.
The electron occupies a circular orbit of radius R which has a positron at its center. Likewise the positron occupies a circular orbit of radius R which has an electron at its center. Occupies has been emphasized to stress that there is no motion, no orbiting. Both particles are behaving as waves (this is the meaning of wave-particle duality) occupying the orbit. As waves they are subject to interference, and to avoid destructive interference the wavelength for the ground state is one orbit circumference.
$\lambda = 2 \pi R \nonumber$
Introducing the de Broglie relationship between wavelength and momentum, $\lambda = \frac{h}{p}$, yields the following expression for momentum in atomic units (h = 2$\pi$).
$p = \frac{h}{2 \pi R} = \frac{1}{R} \nonumber$
In atomic units me = mp = 1. Therefore, the kinetic energy of each particle is,
$T = \frac{p^{2}}{2m} = \frac{1}{2R^{2}} \nonumber$
The total energy of positronium is the sum of electron and positron kinetic energies and their coulombic potential energy.
$E = T_{e} + T_{p} + V_{ep} = \frac{1}{2R^{2}} + \frac{1}{2R^{2}} - \frac{1}{R} = \frac{1}{R^{2}} - \frac{1}{R} \nonumber$
Energy minimization with respect to the electron-positron distance R yields the following result.
$\frac{d}{dR} \left(\dfrac{1}{R^{2}} - \dfrac{1}{R}\right) = 0 \quad \text{solve,}\; R \rightarrow 2 \nonumber$
The optimum R value yields a ground state energy of -0.25 Eh, in agreement with experiment.
$E = \frac{1}{R^{2}} - \frac{1}{R} \quad \text{substitute,}\; R = 2 \rightarrow E = - \frac{1}{4} \nonumber$
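The same minimization can be carried out numerically in atomic units (a sketch added for illustration; the original uses symbolic algebra):

```python
def E_ps(R):
    # positronium total energy in atomic units:
    # two kinetic terms (1/2R^2 each) plus the Coulomb attraction (-1/R)
    return 1.0 / R**2 - 1.0 / R

radii = [0.5 + 0.001 * k for k in range(5000)]   # scan R from 0.5 to 5.5 a0
R_min = min(radii, key=E_ps)                     # optimum electron-positron distance
```

The scan locates the minimum at R = 2 a0 with E = -0.25 Eh, as found symbolically above.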
Including the symbols for mass in the kinetic energy contributions facilitates the introduction of the concept of effective mass of the composite system.
$T = \frac{1}{2 m_{e} R^{2}} + \frac{1}{2m_{p} R^{2}} = \frac{1}{2R^{2}} \left(\dfrac{1}{m_{e}} + \dfrac{1}{m_{p}}\right) = \frac{1}{2R^{2}} \left(\dfrac{m_{e} + m_{p}}{m_{e} \cdot m_{p}} \right) = \frac{1}{2 \mu_{ep} R^{2}} \nonumber$
$\mu_{ep} = \frac{m_{e} \cdot m_{p}}{m_{e} + m_{p}} = \frac{1 \cdot 1}{1 + 1} = \frac{1}{2} \nonumber$
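Carrying the algebra one step further, minimizing $E(R) = \frac{1}{2 \mu R^{2}} - \frac{1}{R}$ gives $R = 1/\mu$ and $E = -\mu/2$ in atomic units. The short sketch below (an addition, not in the original) checks this for both positronium ($\mu = 1/2$) and hydrogen with an infinitely heavy nucleus ($\mu = 1$):

```python
def ground_state(mu):
    # dE/dR = -1/(mu*R^3) + 1/R^2 = 0 gives R = 1/mu;
    # substituting back gives E = -mu/2 in atomic units
    R = 1.0 / mu
    E = 1.0 / (2 * mu * R**2) - 1.0 / R
    return R, E

R_ps, E_ps = ground_state(0.5)   # positronium: R = 2, E = -0.25 Eh
R_h, E_h = ground_state(1.0)     # hydrogen (fixed nucleus): R = 1, E = -0.5 Eh
```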
The positronium energy minimum can also be located graphically.
Plotting kinetic and potential energy along with total energy reveals that a ground state is achieved because, as R decreases below 2, the kinetic energy approaches positive infinity more quickly than the potential energy approaches negative infinity.
After reviewing the Bohr model for the hydrogen atom, the following tutorial outlines Bohr’s model for the hydrogen molecule. It shows that it yields plausible values for bond energy and bond length.
Extracting Atomic and Molecular Parameters From the de Broglie‐Bohr Model of the Atom
The 1913 Bohr model of the hydrogen atom was replaced by Schrödingerʹs wave mechanical model in 1926. However, Bohrʹs model is still profitably taught today because of its conceptual and mathematical simplicity, and because it introduced a number of key quantum mechanical ideas such as the quantum number, quantization of observable properties, quantum jump and stationary state. In addition it provided realistic values for such parameters as atomic and molecular size, electron ionization energy, and molecular bond energy.
In his ʺplanetaryʺ model of the hydrogen atom Bohr began with a Newtonian analysis of the electron executing a circular orbit of radius R about a stationary nucleus, and then arbitrarily quantized the electronʹs angular momentum. Finally, by fiat he declared that the electron was in a non‐radiating stationary state, because an orbiting (accelerating) charge radiates energy and would collapse into the oppositely charged nucleus.
In 1924 de Broglie postulated wave‐particle duality for the electron and other massive particles, thereby providing the opportunity to remove some of the arbitrariness from Bohrʹs model. For example, an electron possessing wave properties is subject to constructive and destructive interference. As will be shown this leads naturally to quantization of electron momentum and kinetic energy, and consequently to a stable ground state for the hydrogen atom.
The de Broglie‐Bohr model of the hydrogen atom presented here treats the electron as a particle on a ring with wave‐like properties. The key equation is wave‐particle duality as expressed by the de Broglie equation. The particle concept momentum and the wave concept $\lambda$ are joined in a reciprocal relationship mediated by the ubiquitous Planckʹs constant.
$p = \frac{h}{\lambda} \nonumber$
This equation will be used with the Bohr model of the hydrogen atom to explain atomic stability and to generate estimates of atomic size and electron binding energy in the atom.
In the de Broglie version of the Bohr hydrogen atom we say that the electron occupies a ring of radius R. It is not orbiting the nucleus, it is behaving as a stationary wave. In order to avoid self‐interference the following wavelength restriction must be obeyed for the ground state of the hydrogen atom.
$\lambda = 2 \pi R \nonumber$
When combined with the de Broglie equation it reveals the following restriction on the electronʹs particle property, linear momentum.
$p = \frac{h}{2 \pi R} \nonumber$
This means there is also a restriction on the electronʹs kinetic energy. Use of this equation in the classical expression for kinetic energy yields the quantum mechanical kinetic energy, or more accurately the electron confinement energy.
$T = \frac{p^{2}}{2m} = \frac{h^{2}}{8 \pi^{2} mR^{2}} = \frac{1}{2R^{2}} \nonumber$
In this equation we have moved from the classical definition of kinetic energy to the quantum mechanical version expressed on the right in atomic units.
$\frac{h}{2 \pi} = m = e = 4 \pi \epsilon_{0} = 1 \nonumber$
The electrostatic potential energy retains its classical definition in quantum mechanics.
$V = \frac{-e^{2}}{4 \pi \epsilon_{0} R} = \frac{-1}{R} \nonumber$
The total electron energy, EH(R) = T(R) + V(R), is now minimized with respect to the ring or orbit radius, the only variational parameter in the model. The total energy, and kinetic and potential energy are also displayed as a function of ring radius.
$R : = 0.5 \quad E_{H} (R) : = \frac{1}{2R^{2}} - \frac{1}{R} \quad R : = \text{Minimize} (E_{H}, R) \quad R = 1.000 \quad E_{H}(R) = -0.500 \nonumber$
From this simple model we learn that it is the wave nature of the electron that explains atomic stability. The electronʹs ring does not collapse into the nucleus because kinetic (confinement) energy goes to positive infinity ($\sim R^{-2}$) faster than potential energy goes to negative infinity ($\sim -R^{-1}$). This is seen very clearly in the graph. The ground state is due to the sharp increase in kinetic energy as the ring radius decreases. This is a quantum effect, a consequence of de Broglieʹs hypothesis that electrons have wave‐like properties. As Klaus Ruedenberg has written, ʺThere are no ground states in classical mechanics.ʺ
The minimization process above the figure provides the ground state ring radius and electron energy in atomic units, a0 and Eh, respectively. R = 1 a0 = 52.9 pm gives us the benchmark for atomic size. Tables of atomic and ionic radii carry entries ranging from approximately half this value to roughly five or six times it. The ground state (binding) energy, E = ‐0.5 Eh = ‐13.6 eV = ‐1312 kJ/mol, is the negative of the ionization energy. This value serves as a benchmark for how tightly electrons are held in atoms and molecules.
A more comprehensive treatment of the Bohr atom utilizing the restriction that an integral number of wavelengths must fit within the ring, n$\lambda$ = 2$\pi$R , where n = 1, 2, 3, ... reveals a manifold of allowed energy states (‐0.5 Eh/n2) and the basis for Bohrʹs concept of the quantum jump which ʺexplainedʺ the hydrogen atom emission spectrum. Here for example is the n = 4 Bohr atom excited state.
Rudimentary estimates of some molecular parameters, the most important being bond energy and bond length, can be obtained using the following Bohr model for H2. The distance between the protons is D, the electron ring radius is R, and the bond axis is perpendicular to the plane of the ring.
There are eight contributions to the total molecular energy based on this model: electron kinetic energy (2), electron‐proton potential energy (4), proton‐proton potential energy (1) and electron‐electron potential energy (1).
$E_{H2} (R,D) : = \frac{1}{R^{2}} - \frac{4}{\sqrt{R^{2} + (\frac{D}{2})^{2}}} + \frac{1}{D} + \frac{1}{2R} \nonumber$
Minimization of the energy with respect to ring radius and proton‐proton distance yields the following results.
$D : = 2 \quad \begin{pmatrix} R \ D \end{pmatrix} : = \text{Minimize} (E_{H2}, R,D) \quad \begin{pmatrix} R \ D \end{pmatrix} = \begin{pmatrix} 0.953 \ 1.101 \end{pmatrix} \quad E_{H2} (R,D) = -1.100 \nonumber$
The H‐H bond energy is the key parameter provided by this analysis. We see that it predicts a stable molecule and that the energy released on the formation of H2 is 0.1 Eh or 263 kJ/mol, compared with the experimental value of 458 kJ/mol. The model predicts an H‐H bond length of 58 pm (D × 52.9 pm), compared to the literature value of 74 pm. These results are acceptable given the primitive character of the model.
$H + H = H_{2} \nonumber$
$\Delta E_{bond} : = E_{H2} (R,D) - 2 E_{H}(1) \quad \Delta E_{bond} = -0.100 \nonumber$
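Mathcadʹs two-parameter Minimize can be mimicked with a simple grid search (a sketch of the calculation, not the original worksheetʹs method; the hartree-to-kJ/mol factor 2625.5 is the standard conversion):

```python
import math

def E_H2(R, D):
    # electron kinetic (2), electron-proton (4), proton-proton (1),
    # electron-electron (1) contributions, all in atomic units
    return 1/R**2 - 4/math.sqrt(R**2 + (D/2)**2) + 1/D + 1/(2*R)

# grid search over ring radius R and proton separation D (steps of 0.002 a0)
best = min(((E_H2(r/1000, d/1000), r/1000, d/1000)
            for r in range(500, 1500, 2)
            for d in range(500, 2000, 2)),
           key=lambda t: t[0])
E_min, R_opt, D_opt = best

hartree_to_kJmol = 2625.5                              # standard conversion factor
bond_energy = (E_min - 2*(-0.5)) * hartree_to_kJmol    # relative to two Bohr H atoms
```

The search recovers R = 0.953, D = 1.101 and E = -1.100 Eh, reproducing the Minimize result above, and converts the 0.100 Eh bond energy to about 263 kJ/mol.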
In addition to these estimates of molecular parameters, the model clearly shows that molecular stability depends on a balancing act between electron‐proton attraction and the ʺrepulsiveʺ character of electron kinetic energy. Just as in the atomic case, it is the 1/R2 dependence of kinetic (confinement) energy on ring radius that prevents molecular collapse under electron‐proton attraction. As the energy profile provided in the Appendix shows, the immediate cause of the molecular ground state is a rise in kinetic energy. Potential energy is still declining at this point and does not begin to rise until 0.55 a0, well after the ground state is reached at 1.10 a0.
Although the model is a relic from the early days of quantum theory it still has pedagogical value. Its mathematical simplicity clearly reveals the importance of the wave nature of matter, the foundational concept of quantum theory.
Two relatively recent appraisals of Bohrʹs models of atomic and molecular structure have appeared in Physics Today:
• ʺNiels Bohr between physics and chemistry,ʺ by Helge Kragh, May 2013, 36‐41.
• ʺBohrʹs molecular model, a century later,ʺ by Anatoly Svidzinsky, Marlan Scully, and Dudley Herschbach, January 2014, 33‐39.
Appendix:
$R : = 0.1 \quad \text{Energy} : = -1 \quad Given\; \text{Energy} = E_{H2} (R,D) \quad \frac{d}{dR} E_{H2} (R,D) = 0 \quad \text{Energy(D) : = Find(R, Energy)} \nonumber$
$D : = 0.15, 0.16 \ldots 4 \quad T(D) : = \frac{1}{[Energy(D)_{0}]^{2}} \quad V(D) : = - \frac{4}{\sqrt{[Energy(D)_{0}]^{2} + (\frac{D}{2})^{2}}} + \frac{1}{D} + \frac{1}{2 \cdot \text{Energy}(D)_{0}} \nonumber$
The examples presented in this addendum are based on classical pictures of the hydrogen atom, positronium, and the hydrogen molecule that have been moved in the quantum direction with de Broglie’s hypothesis of wave-particle duality for matter. Bohr and de Broglie are the early quantum theorists who cut a path for those who created modern quantum theory.
The reason I was keen to include at least some mathematical descriptions was simply that in my own study of quantum computation the only time I really felt that I understood what was happening in a quantum program was when I examined some typical quantum circuits and followed through the equations. Julian Brown, The Quest for the Quantum Computer, page 6.
My reason for beginning with Julian Brown’s statement is that I accept it wholeheartedly. I learn the same way. So in what follows I will present mathematical analyses of some relatively simple and representative quantum circuits that are designed to carry out important contemporary processes such as parallel computation, teleportation, database searches, prime factorization, quantum encryption and quantum simulation. I will conclude with a foray into the related area of Bell’s theorem and the battle between local realism and quantum mechanics.
Quantum computers use superpositions, entanglement and interference to carry out calculations that are impossible with a classical computer. Click here for insightful descriptions of the non-classical character of superpositions and entangled superpositions.
To illuminate the difference between classical and quantum computation we begin with a review of the fundamental principles of quantum theory using the computational methods of matrix mechanics.
Rudimentary Matrix Mechanics
A quon (an entity that exhibits both wave and particle aspects in the peculiar quantum manner - Nick Herbert, Quantum Reality, page 64) has a variety of properties each of which can take on two values. For example, it has the property of hardness and can be either hard or soft. It also has the property of color and can be either black or white, and the property of taste and be sweet or sour. The treatment that follows draws on material from Chapter 3 of David Z Albert's book, Quantum Mechanics and Experience.
The basic principles of matrix and vector math are provided in Appendix A. An examination of this material will demonstrate that most of the calculations presented in this tutorial can easily be performed without the aid of Mathcad or any other computer algebra program. In other words, they can be done by hand.
In the matrix formulation of quantum mechanics the hardness and color states are represented by the following vectors.
$Hard : = \begin{pmatrix} 1 \ 0 \end{pmatrix} \quad Soft : = \begin{pmatrix} 0 \ 1 \end{pmatrix} \quad Black : = \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} \quad White : = \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}}\end{pmatrix} \nonumber$
Hard and Soft represent an orthonormal basis in the two-dimensional Hardness vector space.
$\begin{array} hHard^{T} \cdot Hard = 1 \qquad & Soft^{T} \cdot Soft = 1 \qquad & Hard^{T} \cdot Soft = 0 \ \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = 1 & \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = 1 & \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = 0 \end{array} \nonumber$
Likewise Black and White are an orthonormal basis in the two-dimensional Color vector space.
$\begin{array} bBlack^{T} \cdot Black= 1 & White^{T} \cdot White= 1 \quad & Black^{T} \cdot White= 0 \ \begin{pmatrix} \frac{1}{\sqrt{2}} \quad & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} = 1 & \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} = 1 & \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} = 0 \end{array} \nonumber$
The relationship between the two bases is reflected in the following projection calculations. Note: $\frac{1}{\sqrt{2}} = 0.707$
$\begin{array} hHard^{T} \cdot Black= 0.707 \qquad & Hard^{T} \cdot White= 0.707 \qquad & Soft^{T} \cdot Black = 0.707 \qquad & Soft^{T} \cdot White = -0.707 \ \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} = 0.707 & \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} = 0.707 & \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} = 0.707 & \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} = -0.707 \end{array} \nonumber$
The values calculated above are probability amplitudes. The absolute square of those values is the probability. In other words, the probability that a black quon will be found to be hard is 0.5. The probability that a white quon will be found to be soft is also 0.5.
$\begin{array} a(|Hard^{T} \cdot Black|)^{2} = 0.5 \qquad & (|Hard^{T} \cdot White|)^{2} = 0.5 \qquad & (|Soft^{T} \cdot Black|)^{2} = 0.5 \qquad & (|Soft^{T} \cdot White|)^{2} = 0.5 \ \Bigg[ \Bigg| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} \Bigg| \Bigg]^{2} = 0.5 & \Bigg[ \Bigg| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} \Bigg| \Bigg]^{2} = 0.5 & \Bigg[ \Bigg| \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} \Bigg| \Bigg]^{2} = 0.5 & \Bigg[ \Bigg| \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} \Bigg| \Bigg]^{2} = 0.5 \end{array} \nonumber$
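Because every one of these calculations is a dot product, they can be checked with a few lines of plain Python (an illustrative sketch added here, not part of Albert's treatment):

```python
import math

s = 1 / math.sqrt(2)
Hard, Soft = [1.0, 0.0], [0.0, 1.0]     # hardness basis vectors
Black, White = [s, s], [s, -s]          # color basis vectors

def dot(a, b):
    # inner product <a|b>: for these real vectors, a probability amplitude
    return sum(x * y for x, y in zip(a, b))

amp = dot(Hard, Black)   # amplitude for a black quon to be found hard
prob = abs(amp)**2       # Born rule: the probability is the squared amplitude
```

The amplitude is 0.707 and the probability 0.5, exactly the values tabulated above, and the orthonormality relations within each basis also check out.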
Clearly Black and White can be written as superpositions of Hard and Soft, and vice versa.
$\begin{array} a \dfrac{1}{\sqrt{2}} (Hard + Soft) = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} & \dfrac{1}{\sqrt{2}} \Bigg[ \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \Bigg] = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} \ \dfrac{1}{\sqrt{2}} (Hard - Soft) = \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} & \dfrac{1}{\sqrt{2}} \Bigg[ \begin{pmatrix} 1 \ 0 \end{pmatrix} - \begin{pmatrix} 0 \ 1 \end{pmatrix} \Bigg] = \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} \ \dfrac{1}{\sqrt{2}} (Black+ White) = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \dfrac{1}{\sqrt{2}} \Bigg[ \begin{pmatrix} \frac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{pmatrix} + \begin{pmatrix} \frac{1}{\sqrt{2}} \ \dfrac{-1}{\sqrt{2}} \end{pmatrix} \Bigg] = \begin{pmatrix} 1 \ 0 \end{pmatrix} \ \dfrac{1}{\sqrt{2}} (Black - White) = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \dfrac{1}{\sqrt{2}} \Bigg[ \begin{pmatrix} \frac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{pmatrix} - \begin{pmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{-1}{\sqrt{2}} \end{pmatrix} \Bigg] = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{array} \nonumber$
Hard, Soft, Black and White are measurable properties and the vectors representing them are eigenstates of the Hardness and Color operators with eigenvalues $\pm$1. The Identity operator is also given and will be discussed later. Of course, the Hardness and Color operators are just the Pauli spin operators in the z- and x-directions. Later the Taste operator will be introduced; it is the y-direction Pauli spin operator.
Operators
$Hardness : = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \qquad Color : = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \qquad I : = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \nonumber$
Eigenvalue +1 Eigenvalue -1
$Hardness \cdot Hard = \begin{pmatrix} 1 \ 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \end{pmatrix}$ $Hardness \cdot Soft = \begin{pmatrix} 0 \ -1 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ -1 \end{pmatrix}$
$Color \cdot Black= \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} \qquad \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix}$ $Color \cdot White = \begin{pmatrix} -0.707 \ 0.707 \end{pmatrix} \qquad \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \ \frac{-1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} -0.707 \ 0.707 \end{pmatrix}$
Another way of showing this is by calculating the expectation (or average) value. Every time the hardness of a hard quon is measured the result is +1. Every time the hardness of a soft quon is measured the result is -1.
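The eigenvalue equations and the expectation values just mentioned can likewise be verified numerically (again a sketch added for illustration, not part of the original tutorial):

```python
import math

s = 1 / math.sqrt(2)
Hard, Soft, Black = [1.0, 0.0], [0.0, 1.0], [s, s]
Hardness = [[1, 0], [0, -1]]   # Pauli sigma_z
Color = [[0, 1], [1, 0]]       # Pauli sigma_x

def matvec(M, v):
    # apply a 2x2 operator to a state vector
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def expectation(op, state):
    # <state|op|state>: the average of many measurements on identical quons
    return sum(a * b for a, b in zip(state, matvec(op, state)))

h_hard = expectation(Hardness, Hard)    # every measurement gives +1
h_soft = expectation(Hardness, Soft)    # every measurement gives -1
h_black = expectation(Hardness, Black)  # 0: +1 and -1 equally likely
```

Note that the average hardness of a black quon is zero, consistent with the 0.5/0.5 probabilities computed earlier, and that Black is indeed an eigenvector of the Color operator with eigenvalue +1.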
The characteristic feature of a quantum computer is its ability to calculate in parallel. How this is accomplished is illustrated in the following one-page tutorial.
The Quantum Computer
A quantum computer exploits quantum mechanical effects such as superpositions, entanglement and interference to perform new types of calculations that are impossible on a classical computer. Quantum computation is therefore nothing less than a distinctly new way of harnessing nature. (Adapted from David Deutsch, The Fabric of Reality, page 195.)
Whereas classical computers perform operations on classical bits, which can be in one of two discrete states, 0 or 1, quantum computers perform operations on quantum bits, or qubits, which can be put into any superposition of two quantum states, |0> and |1>. Peter Pfeifer, McGraw-Hill Encyclopedia of Science and Industry.
The following example demonstrates how a quantum circuit can function as an algorithm for the evaluation of a mathematical function f(x), and how the same algorithm is capable of parallel evaluations of that function.
$\left( \begin{array}{ll}{x} & {f(x)} \ {0} & {1} \ {1} & {0}\end{array}\right) \quad \begin{array} | x \rangle & \cdots & \bullet & \ldots & \ldots & | x \rangle \ \; & \; & | & \; & \; & \; \ | 0 \rangle & \cdots & \oplus & \text{NOT} & \cdots & | f(x) \rangle \end{array} \quad \hat{U}_{f} | x \rangle | 0 \rangle=| x \rangle | f(x) \rangle \nonumber$
As shown below, when |x> is |0> or |1> the circuit behaves like a classical computer yielding the value of f(x). When |x> is a superposition of |0> and |1> the circuit is a quantum computer, operating on both input values simultaneously in a single pass through the circuit, yielding both values of f(x). Note that in the latter case, the intermediate and final states are entangled Bell superpositions. The Bell states are an essential resource in many quantum information applications.
$\begin{array} a \text { Input } & \text{Operation} & \text{Intermediate} & \text{Operation} & \text{Output} \ |0 \rangle | 0 \rangle & \; & |0 \rangle | 0 \rangle & \; & |0 \rangle | 1 \rangle \ |1 \rangle | 0 \rangle & \xrightarrow{CNOT} & |1 \rangle | 1 \rangle & \xrightarrow{I \otimes NOT} & |1 \rangle | 0 \rangle \ \frac{1}{\sqrt{2}}[ |0 \rangle+| 1 \rangle ] | 0 \rangle=\frac{1}{\sqrt{2}}[ |0 \rangle | 0 \rangle+| 1 \rangle | 0 \rangle] & \; & \frac{1}{\sqrt{2}}[ |0 \rangle | 0 \rangle+| 1 \rangle | 1 \rangle] & \; & \frac{1}{\sqrt{2}}[ |0 \rangle | 1 \rangle+| 1 \rangle | 0 \rangle] \end{array} \nonumber$
Haroche and Raimond (pages 94-95 of Exploring the Quantum) describe the latter process as follows: "By superposing the inputs of a computation, one operates the machine 'in parallel', making it compute simultaneously all the values of a function and keeping its state, before any final bit detection is performed, suspended in a coherent superposition of all the possible outcomes." However, as Haroche and Raimond note, on a practical level only one result can be realized for each operation of the circuit because on measurement the superposition created by the circuit collapses to one of the states forming the superposition. Therefore, the exploitation of quantum parallelism for practical purposes such as searches and factorization requires more elaborate quantum circuits than the one presented here.
Truth tables for the quantum circuit:
$\mathrm{NOT} \left( \begin{array}{lll}{0} & {\text { to }} & {1} \ {1} & {\text { to }} & {0}\end{array}\right) \qquad \mathrm{CNOT} \left( \begin{array} a \text{Decimal} & \text{Binary} & \text{to} & \text{Binary} & \text{Decimal} \ 0 & 00 & \text{to}& 00 & 0 \ 1 & 01 & \text{to}& 01 & 1 \ 2 & 10 & \text{to}& 11 & 3 \ 3 & 11 & \text{to}& 10 & 2 \end{array} \right) \nonumber$
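These truth tables translate directly into matrix arithmetic. Below is a minimal NumPy sketch (the tutorial itself uses Mathcad; the variable names here are mine) that builds the circuit $U_f = (I \otimes NOT) \cdot CNOT$ and reproduces all three rows of the input/output table above.

```python
import numpy as np

# Gates taken from the truth tables above
I = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# U_f: controlled-NOT followed by NOT on the lower wire
Uf = np.kron(I, NOT) @ CNOT

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Classical inputs |x>|0> -> |x>|f(x)>, with f(0) = 1 and f(1) = 0
print(Uf @ np.kron(ket0, ket0))   # (0, 1, 0, 0) = |0>|1>
print(Uf @ np.kron(ket1, ket0))   # (0, 0, 1, 0) = |1>|0>

# Superposed input: both values of f(x) appear in a single pass
plus = (ket0 + ket1) / np.sqrt(2)
print(Uf @ np.kron(plus, ket0))   # (0, 1, 1, 0)/sqrt(2), a Bell state
```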
The following tutorial adds a matrix analysis to the previous example of parallel calculation.
A Very Simple Example of Parallel Quantum Computation
This tutorial deals with quantum function evaluation and parallel computation. The example is taken from pages 94-95 of Exploring the Quantum by Haroche and Raimond. A certain function of x yields the following table of results.
$\left( \begin{array}{ccc}{x} & {0} & {1} \ {f(x)} & {1} & {0}\end{array}\right) \nonumber$
First we establish that the circuit shown below yields the results given in the table, and then demonstrate that it also carries out a parallel calculation in one step using both input values of x.
$\begin{array} | x \rangle & \cdots & \bullet & \ldots & \ldots & | x \rangle \ \; & \; & | & \; & \; & \; \ | 0 \rangle & \cdots & \oplus & \text{NOT} & \cdots & | f(x) \rangle \end{array} \qquad \text{where, for example}\; | 0 \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad | 1 \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
The top wire carries the value of x and the bottom wire is initially set to |0 >. After operation of the controlled-NOT and NOT gates, x remains on the top wire while the bottom wire carries the value of the function, f(x). In other words,
$\hat{U}_{f} | x \rangle | 0 \rangle=| x \rangle | f(x) \rangle \nonumber$
The quantum gates in matrix form are:
$\mathrm{I} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \quad \mathrm{NOT} :=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \mathrm{CNOT} :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right) \nonumber$
Uf (controlled-NOT, followed by a NOT operation on the lower wire) is a reversible operator. Doing it twice in succession on the initial two-qubit state is equivalent to the identity operation.
Kronecker is Mathcad's command for carrying out matrix tensor multiplication. Note that the identity operator is required when a wire is not involved in an operation. In what follows the quantum circuit is constructed, displayed and its reversibility demonstrated. In other words, repeating the circuit is equivalent to the identity operation. Reversibility is a crucial property in quantum computer circuitry.
$\text{QuantumCircuit} : = \text{kronecker (I, NOT)} \cdot \text{CNOT} \nonumber$
$\text{QuantumCircuit} = \left( \begin{array}{cccc}{0} & {1} & {0} & {0} \ {1} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \qquad \text{QuantumCircuit}^{2} = \left( \begin{array}{llll}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
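As a cross-check, the same construction and reversibility test can be sketched in NumPy (variable names mine):

```python
import numpy as np

I = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# np.kron plays the role of Mathcad's kronecker command
QuantumCircuit = np.kron(I, NOT) @ CNOT

print(QuantumCircuit)
# Squaring the circuit matrix gives the 4x4 identity: the circuit is reversible
print(QuantumCircuit @ QuantumCircuit)
```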
Given the simplicity of the matrix representing the circuit, the following calculations can easily be done by hand.
Input Calculation Output
f(0) = 1 $\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right)$ $\text{QuantumCircuit} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)$ $\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)$
f(1) = 0 $\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)$
$\text{QuantumCircuit} \cdot \left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)$ $\left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)$
These calculations demonstrate that the quantum circuit is a valid algorithm for the calculation of f(x). We now demonstrate parallel computation by putting |x> in a balanced superposition of |0> and |1>. As shown below, the operation of the circuit yields a superposition of the previous results. The function has been evaluated for both values of x in a single pass through the circuit.
$\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {0} \ {1} \ {0}\end{array}\right) \nonumber$
$\text{QuantumCircuit} \cdot \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0.707} \ {0.707} \ {0}\end{array}\right) \nonumber$
$\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {1} \ {1} \ {0}\end{array}\right)=\frac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right] \nonumber$
Haroche and Raimond describe this process as follows: "By superposing the inputs of a computation, one operates the machine 'in parallel', making it compute simultaneously all the values of a function and keeping its state, before any final bit detection is performed, suspended in a coherent superposition of all the possible outcomes."
In summary, simple calculations have demonstrated how a quantum circuit can function as an algorithm for the evaluation of a mathematical function, and how the same circuit is capable of parallel evaluations of that function.
$\begin{array} a \text { Input } & \text{Operation} & \text{Intermediate} & \text{Operation} & \text{Output} \ |00 \rangle & \; & |00 \rangle & \; & |01 \rangle \ |10 \rangle & \xrightarrow{CNOT} & |11 \rangle & \xrightarrow{I \otimes NOT} & |10 \rangle \ \frac{1}{\sqrt{2}}[ |0 \rangle+| 1 \rangle ] | 0 \rangle=\frac{1}{\sqrt{2}}[ |0 0 \rangle+| 1 0 \rangle] & \; & \frac{1}{\sqrt{2}}[ |0 0 \rangle+| 1 1 \rangle] & \; & \frac{1}{\sqrt{2}}[ |01 \rangle+| 1 0 \rangle] \end{array} \nonumber$
However, as Haroche and Raimond note, on a practical level only one result can be realized for each operation of the circuit because on measurement the superposition created by the circuit collapses to one of the states forming the superposition. This is simulated with projection operators (|0><0| and |1><1|) on both registers for the four possible measurement outcomes for each value of x.
$f(0) = 0 \qquad \Bigg[ \Bigg| \text{kronecker} \left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)^{\mathrm{T}}\right] \cdot \text{QuantumCircuit} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} \Bigg| \Bigg]^{2} = 0 \nonumber$
$f(0) = 1 \qquad \Bigg[ \Bigg| \text{kronecker} \left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)^{\mathrm{T}}\right] \cdot \text{QuantumCircuit} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} \Bigg| \Bigg]^{2} = 0.5 \nonumber$
$f(1) = 0 \qquad \Bigg[ \Bigg| \text{kronecker} \left[\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)^{\mathrm{T}}\right] \cdot \text{QuantumCircuit} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} \Bigg| \Bigg]^{2} = 0.5 \nonumber$
$f(1) = 1 \qquad \Bigg[ \Bigg| \text{kronecker} \left[\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)^{\mathrm{T}}\right] \cdot \text{QuantumCircuit} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} \Bigg| \Bigg]^{2} = 0 \nonumber$
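The four projective measurements above can be sketched the same way in NumPy (variable names mine); the probabilities 0, 0.5, 0.5, 0 show that a single run returns x = 0 with f(0) = 1, or x = 1 with f(1) = 0, each half the time.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
P0 = np.outer(ket0, ket0)   # projector |0><0|
P1 = np.outer(ket1, ket1)   # projector |1><1|

QuantumCircuit = np.array([[0, 1, 0, 0],
                           [1, 0, 0, 0],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]])

# Output state for the superposed input (|0> + |1>)|0>/sqrt(2)
psi_out = QuantumCircuit @ (np.kron(ket0 + ket1, ket0) / np.sqrt(2))

# Probability of each joint outcome (x, f(x))
for x, Px in ((0, P0), (1, P1)):
    for fx, Pfx in ((0, P0), (1, P1)):
        prob = np.linalg.norm(np.kron(Px, Pfx) @ psi_out) ** 2
        print(f"P(x={x}, f(x)={fx}) = {prob:.1f}")
```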
As Haroche and Raimond write, "It is, however, one thing to compute potentially at once all the values of f(x) and quite another to be able to exploit this quantum parallelism and extract from it more information than from a mundane classical computation. The final stage of information acquisition must always be a measurement." Therefore, the exploitation of quantum parallelism for practical purposes such as searches and factorization requires more elaborate quantum circuits than the one presented here.
Truth tables for quantum circuit elements:
$\mathrm{Identity} \left( \begin{array}{lll}{0} & {\text { to }} & {0} \ {1} & {\text { to }} & {1}\end{array}\right) \qquad \mathrm{NOT} \left( \begin{array}{lll}{0} & {\text { to }} & {1} \ {1} & {\text { to }} & {0}\end{array}\right) \qquad \mathrm{CNOT} \left( \begin{array} a \text{Decimal} & \text{Binary} & \text{to} & \text{Binary} & \text{Decimal} \ 0 & 00 & \text{to}& 00 & 0 \ 1 & 01 & \text{to}& 01 & 1 \ 2 & 10 & \text{to}& 11 & 3 \ 3 & 11 & \text{to}& 10 & 2 \end{array} \right) \nonumber$
Solving systems of equations is a relatively routine task for a quantum circuit.
Solving Equations Using a Quantum Circuit
This tutorial demonstrates the solution of two simultaneous linear equations using a quantum circuit. The circuit is taken from arXiv:1302.1210. See this reference for details on the experimental implementation of the circuit and also for a discussion of the potential of quantum solutions for systems of equations. Two other sources (arXiv:1302.1946 and arXiv:1302.4310) provide alternative quantum circuits and methods of implementation.
First we consider the conventional method of solving systems of linear equations for a particular matrix A and three different |b> vectors.
$A | x \rangle=| b \rangle \qquad | x \rangle=A^{-1} | b \rangle \nonumber$
$A :=\left( \begin{array}{cc}{1.5} & {0.5} \ {0.5} & {1.5}\end{array}\right) \qquad b_{1} : =\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right) \qquad b_{2} : =\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {-1}\end{array}\right) \qquad b_{3} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
$A^{-1} \cdot b_{1}=\left( \begin{array}{c}{0.354} \ {0.354}\end{array}\right) \quad A^{-1} \cdot b_{2}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right) \quad A^{-1} \cdot b_{3}=\left( \begin{array}{c}{0.75} \ {-0.25}\end{array}\right) \nonumber$
The following quantum circuit (see arXiv:1302.1210) generates the same solutions.
$\begin{array} a |b \rangle & \rhd & R & \bullet & R^{T} & \rhd & |x \rangle \ \; & \; & \; & | & \; & \; & \; \ |1 \rangle & \rhd & \cdots & Ry( \theta ) & M_{1} & \rhd & |1 \rangle \end{array} \nonumber$
In this circuit, R is the matrix of eigenvectors of matrix A and $R^{T}$ its transpose. The last step on the bottom wire is the measurement of |1>, which is represented by the projection operator M1. The identity operator is required for cases in which a quantum gate operation is occurring on one wire and no operation is occurring on the other wire.
$R : = \text{eigenvecs(A)} \quad \mathrm{R}=\left( \begin{array}{ll}{0.707} & {-0.707} \ {0.707} & {0.707}\end{array}\right) \quad \mathrm{R}^{\mathrm{T}}=\left( \begin{array}{cc}{0.707} & {0.707} \ {-0.707} & {0.707}\end{array}\right) \quad \mathrm{M}_{1} :=\left( \begin{array}{ll}{0} & {0} \ {0} & {1}\end{array}\right) \longleftarrow \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{ll}{0} & {1}\end{array}\right) \quad \mathrm{I}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
The controlled rotation, CR($\theta$), is the only two-qubit gate in the circuit. The rotation angle required is determined by the ratio of the eigenvalues of A as shown below.
$\mathrm{CR}(\theta) :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {\cos \left(\frac{\theta}{2}\right)} & {-\sin \left(\frac{\theta}{2}\right)} \ {0} & {0} & {\sin \left(\frac{\theta}{2}\right)} & {\cos \left(\frac{\theta}{2}\right)}\end{array}\right) \qquad \text{eigenvals(A)} = \begin{pmatrix} 2 \ 1 \end{pmatrix} \qquad \theta : = -2 \cdot \text{acos} \left(\dfrac{1}{2}\right) \nonumber$
The input (|b>|1>) and output (|x>|1>) states are expressed in tensor format. Kronecker is Mathcad's command for the tensor product of matrices.
Input |b>|1> Quantum Circuit Output |x>|1>
$\begin{array} a \dfrac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\dfrac{1}{\sqrt{2}} \left( \begin{array}{l}{0} \ {1} \ {0} \ {1}\end{array}\right) \quad & \text{kronecker} \left( R^{T}, M_{1} \right) \cdot \mathrm{CR} (\theta) \cdot \text{kronecker} (R,I) \cdot \dfrac{1}{\sqrt{2}} \cdot \left(\begin{array}{l}{0} \ {1} \ {0} \ {1}\end{array}\right)= \left( \begin{array}{c}{0} \ {0.354} \ {0} \ {0.354}\end{array}\right) \quad & \left( \begin{array}{c}{0.354} \ {0.354}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \ \dfrac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\dfrac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {0} \ {-1}\end{array}\right) \quad & \text{kronecker} \left( R^{T}, M_{1} \right) \cdot \mathrm{CR} (\theta) \cdot \text{kronecker} (R,I) \cdot \dfrac{1}{\sqrt{2}} \cdot \left(\begin{array}{l}{0} \ {1} \ {0} \ {-1}\end{array}\right)= \left( \begin{array}{c}{0} \ {0.707} \ {0} \ {-0.707}\end{array}\right) \quad & \left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \ \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right) \quad & \text{kronecker} \left( R^{T}, M_{1} \right) \cdot \mathrm{CR} (\theta) \cdot \text{kronecker} (R,I) \cdot \left(\begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)= \left( \begin{array}{c}{0} \ {0.75} \ {0} \ {-0.25}\end{array}\right) \quad & \left( \begin{array}{c}{0.75} \ {-0.25}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \end{array}$
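The same pipeline can be sketched in NumPy (variable names mine). Since NumPy's eigenvector routine may order or sign eigenvectors differently than Mathcad's eigenvecs, R is entered explicitly as the matrix quoted above; the solution |x> is read off the amplitudes paired with |1> on the bottom wire.

```python
import numpy as np

A = np.array([[1.5, 0.5], [0.5, 1.5]])
R = np.array([[1, -1], [1, 1]]) / np.sqrt(2)   # eigenvector matrix of A, as above
M1 = np.array([[0, 0], [0, 1]])                # projector |1><1|
I = np.eye(2)
ket1 = np.array([0.0, 1.0])

theta = -2 * np.arccos(1 / 2)                  # set by the eigenvalue ratio 1/2
c, s = np.cos(theta / 2), np.sin(theta / 2)
CR = np.eye(4)
CR[2:, 2:] = [[c, -s], [s, c]]                 # controlled rotation CR(theta)

def solve(b):
    out = np.kron(R.T, M1) @ CR @ np.kron(R, I) @ np.kron(b, ket1)
    return out[[1, 3]]                         # amplitudes on |0>|1> and |1>|1>

b3 = np.array([1.0, 0.0])
print(solve(b3))              # [ 0.75 -0.25]
print(np.linalg.inv(A) @ b3)  # classical answer, same result
```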
One of the most intriguing applications of entanglement is quantum teleportation, which uses entanglement and a classical communication channel to transfer a quantum state from one location to another. However, to truly understand teleportation it is necessary to distinguish it from cloning. So first we look at the quantum no-cloning principle followed by a one-page snapshot of teleportation.
Quantum Restrictions on Cloning
Suppose a quantum copier exists which is able to carry out the following cloning operation.
$\left( \begin{array}{l}{0} \ {1}\end{array}\right) \stackrel{\mathrm{Clone}}{\longrightarrow} \left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
Next the cloning operation (using the same copier) is carried out on the general qubit shown below.
$\left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right) \stackrel{\mathrm{Clone}}{\longrightarrow} \left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right) \otimes \left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right)=\left( \begin{array}{l}{\cos^{2} (\theta)} \ {\cos (\theta)\sin (\theta)} \ {\cos (\theta)\sin (\theta)} \ {\sin^{2} (\theta)}\end{array}\right) \nonumber$
Quantum transformations are unitary, meaning probability is preserved. This requires that the scalar product of the two initial states equal the scalar product of the two final (cloned) states.
Initial state:
$(\cos (\theta) \quad \sin (\theta)) \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\sin (\theta) \nonumber$
Final state:
$\left(\cos ^{2}(\theta) \quad \cos (\theta) \sin (\theta) \quad \sin (\theta) \cos (\theta) \quad \sin ^{2}(\theta)\right) \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)=\sin ^{2}(\theta) \nonumber$
It is clear from this analysis that quantum theory puts a significant restriction on copying. Only states for which sin($\theta$) = 0 or 1 (0 and 90 degrees) can be copied by the original cloner.
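A few lines of Python make the restriction concrete (a sketch; the list of test angles is mine): the scalar product before cloning, sin(θ), matches the scalar product after cloning, sin²(θ), only at 0 and 90 degrees.

```python
import math

# Compare the scalar products before and after the hypothetical cloning step
for deg in (0, 30, 45, 60, 90):
    theta = math.radians(deg)
    before = math.sin(theta)        # overlap of the two initial states
    after = math.sin(theta) ** 2    # overlap of the two cloned states
    ok = math.isclose(before, after, abs_tol=1e-12)
    print(f"{deg:3d} deg: before = {before:.3f}  after = {after:.3f}  clonable = {ok}")
```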
In conclusion, two quotes from Wootters and Zurek, Physics Today, February 2009, page 76.
Perfect copying can be achieved only when the two states are orthogonal, and even then one can copy those two states (...) only with a copier specifically built for that set of states.
In sum, one cannot make a perfect copy of an unknown quantum state, since, without prior knowledge, it is impossible to select the right copier for the job. That formulation is one common way of stating the no-cloning theorem.
An equivalent way to look at this (see arXiv:1701.00989v1) is to assume that a cloner exists for the V-H polarization states.
$\hat{C} | V \rangle | X \rangle=| V \rangle | V \rangle \quad \hat{C} | H \rangle | X \rangle=| H \rangle | H \rangle \nonumber$
A diagonally polarized photon is a superposition of the V-H polarization states.
$| D \rangle=\frac{1}{\sqrt{2}}( |V\rangle+| H \rangle ) \nonumber$
However, due to the linearity of quantum mechanics the V-H cloner cannot clone a diagonally polarized photon.
$\hat{C} | D \rangle | X \rangle=\hat{C} \frac{1}{\sqrt{2}}( |V\rangle+| H \rangle ) | X \rangle=\frac{1}{\sqrt{2}} \hat{C}( |V\rangle | X \rangle+| H \rangle | X \rangle )=\frac{1}{\sqrt{2}}( |V\rangle | V \rangle+| H \rangle | H \rangle ) \nonumber$
$\hat{C} | D \rangle | X \rangle \neq | D \rangle | D \rangle=\frac{1}{2}( |V\rangle | V \rangle+| V \rangle | H \rangle+| H \rangle | V \rangle+| H \rangle | H \rangle ) \nonumber$
Quantum Teleportation
As shown in the graphic below (Nature, December 11, 1997, page 576), quantum teleportation is a form of information transfer that requires pre-existing entanglement and a classical communication channel to send information from one location to another. Alice has the photon to be teleported and a photon of an entangled pair ($\beta_{00}$) that she shares with Bob. She performs a measurement on her photons that projects them into one of the four Bell states and Bob's photon, via the entangled quantum channel, into a state that has a unique relationship to the state of the teleportee. Bob carries out one of four unitary operations on his photon depending on the results of Alice's measurement, which she sends him through a classical communication channel.
The teleportee and the Bell states indexed in binary notation:
$\text{Teleportee:} \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \quad \text{Bell states}: \beta_{00} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \quad \beta_{01} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {1} \ {1} \ {0}\end{array}\right) \quad \beta_{10} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {-1}\end{array}\right) \quad \beta_{11} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
The three-qubit initial state is rewritten as a linear superposition of the four possible Bell states that Alice can find on measurement.
$| \Psi \rangle = \begin{pmatrix} \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \end{pmatrix} \otimes \beta_{00} = \dfrac{1}{2} \Bigg[ \beta_{00} \otimes \begin{pmatrix} \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \end{pmatrix} + \beta_{01} \otimes \begin{pmatrix} \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} + \beta_{10} \otimes \begin{pmatrix} \sqrt{\frac{1}{3}} \ - \sqrt{\frac{2}{3}} \end{pmatrix} + \beta_{11} \otimes \begin{pmatrix} - \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} \Bigg] \nonumber$
Alice's Bell state measurement result ($\beta_{00}, \beta_{01}, \beta_{10}\; or\; \beta_{11}$) determines the operation (I, X, Z or ZX) that Bob performs on his photon. The matrices for these operations are as follows.
$\mathrm{I} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \quad \mathrm{X} :=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \mathrm{Z} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \mathrm{Z} \cdot \mathrm{X}=\left( \begin{array}{cc}{0} & {1} \ {-1} & {0}\end{array}\right) \nonumber$
Tabular summary of teleportation experiment:
$\begin{pmatrix} \text{Alice Measurement Result} & \beta_{00} & \beta_{01} & \beta_{10} & \beta_{11} \ \text{Bob's Action} & I & X & Z & Z \cdot X \end{pmatrix} \nonumber$
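The table can be checked directly. Below is a NumPy sketch (variable names mine) confirming that each of Bob's four actions maps his conditional state, read off the Bell-state expansion above, back to the teleportee.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

teleportee = np.array([np.sqrt(1/3), np.sqrt(2/3)])

# (Bob's conditional state, Bob's corrective action) for each of Alice's results
cases = [
    (np.array([np.sqrt(1/3),  np.sqrt(2/3)]), I),      # Alice measured beta_00
    (np.array([np.sqrt(2/3),  np.sqrt(1/3)]), X),      # beta_01
    (np.array([np.sqrt(1/3), -np.sqrt(2/3)]), Z),      # beta_10
    (np.array([-np.sqrt(2/3), np.sqrt(1/3)]), Z @ X),  # beta_11
]

for state, op in cases:
    print(op @ state)   # each line reproduces the teleportee
```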
Summary of the quantum teleportation protocol:
"Quantum teleportation provides a 'disembodied' way to transfer quantum states from one object to another at a distant location, assisted by previously shared entangled states and a classical communication channel." Nature 518, 516 (2015)
The following tutorial provides four in-depth perspectives on teleportation.
Quantum Teleportation: Four Perspectives
The science fiction dream of "beaming" objects from place to place is now a reality - at least for particles of light. Anton Zeilinger, Scientific American, April 2000, page 50.
Quantum teleportation is a way of transferring the state of one particle to a second, effectively teleporting the initial particle. (Tony Sudbery, Nature, December 11, 1997, page 551.)
As shown in the graphic below, quantum teleportation is a form of information transfer that requires pre-existing entanglement and a classical communication channel to send information from one location to another.
Alice has the photon to be teleported and a photon of an entangled pair that she shares with Bob. She performs a measurement on her photons that projects them into one of the four Bell states and Bob's photon, via the entangled quantum channel, into a state that has a unique relationship to the state of the teleportee. Bob carries out one of four unitary operations on his photon depending on the results of Alice's measurement, which she sends him through a classical communication channel.
The figure (Nature, December 11, 1997, page 576) on the left provides a graphic summary of the first successful teleportation experiment. The quantum circuit on the right shows a method of implementation.
The teleportee and the Bell states indexed in binary notation:
$\text{Teleportee:} \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \quad \text{Bell states}: \beta_{00}=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \quad \beta_{01} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {1} \ {1} \ {0}\end{array}\right) \quad \beta_{10} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {-1}\end{array}\right) \quad 3_{11}=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
Qubit basis states, truth tables, and matrix operators for the gates used in the teleportation circuit:
$| 0 \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad | 1 \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \quad \mathrm{X}=\left( \begin{array}{ccc}{0} & {\text { to }} & {1} \ {1} & {\text { to }} & {0}\end{array}\right) \quad \mathrm{Z}=\left( \begin{array}{ccc}{0} & {\text { to }} & {0} \ {1} & {\text { to }} & {-1}\end{array}\right) \ H=\left[ \begin{array}{ccc}{0} & {\text { to }} & {\dfrac{(0+1)}{\sqrt{2}}} \ {1} & {\text { to }} & {\dfrac{(0-1)}{\sqrt{2}}}\end{array}\right] \quad \text { CNOT }=\left( \begin{array}{ccc}{00} & {\text { to }} & {00} \ {01} & {\text { to }} & {01} \ {10} & {\text { to }} & {11} \ {11} & {\text { to }} & {10}\end{array}\right) \nonumber$
$\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \quad \mathrm{X} :=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \quad \mathrm{Z} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \mathrm{H} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \quad \text { CNOT } :=\left( \begin{array}{llll}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right) \nonumber$
Perspective I.
Using the truth tables, the operation of the teleportation circuit is expressed in Dirac notation.
$\left( \sqrt{\frac{1}{3}} | 0 \rangle + \sqrt{\frac{2}{3}} | 1 \rangle \right) \frac{1}{\sqrt{2}} (| 00 \rangle + | 11 \rangle ) = \frac{1}{\sqrt{2}} \bigg[ \sqrt{\frac{1}{3}} ( | 00 \rangle | 0 \rangle + | 01 \rangle | 1 \rangle ) + \sqrt{\frac{2}{3}} ( | 10 \rangle | 0 \rangle + | 11 \rangle | 1 \rangle ) \bigg] \nonumber$
$\text{CNOT} \otimes \text{I} \nonumber$
$\frac{1}{\sqrt{2}} \bigg[ \sqrt{\frac{1}{3}} ( | 00 \rangle | 0 \rangle + | 01 \rangle | 1 \rangle ) + \sqrt{\frac{2}{3}} ( | 11 \rangle | 0 \rangle + | 10 \rangle | 1 \rangle ) \bigg] = \frac{1}{\sqrt{2}} \bigg[ \sqrt{\frac{1}{3}} | 0 \rangle ( | 00 \rangle + | 11 \rangle ) + \sqrt{\frac{2}{3}} | 1 \rangle ( | 10 \rangle + | 01 \rangle ) \bigg] \nonumber$
$\text{H} \otimes \text{I} \otimes \text{I} \nonumber$
$\frac{1}{2} \bigg[ \sqrt{\frac{1}{3}} ( | 0 \rangle + | 1 \rangle ) ( | 00 \rangle + | 11 \rangle ) + \sqrt{\frac{2}{3}} ( | 0 \rangle - | 1 \rangle ) ( | 10 \rangle + | 01 \rangle ) \bigg] \ \downarrow \nonumber$
$\frac{1}{2} \Bigg[ | 00 \rangle \begin{pmatrix} \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \end{pmatrix} + | 01 \rangle \begin{pmatrix} \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} + | 10 \rangle \begin{pmatrix} \sqrt{\frac{1}{3}} \ - \sqrt{\frac{2}{3}} \end{pmatrix} + | 11 \rangle \begin{pmatrix} - \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} \Bigg] \xrightarrow{Action} \frac{1}{2} \Bigg[ \text{I} \cdot \begin{pmatrix} \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \end{pmatrix} + \text{X} \cdot \begin{pmatrix} \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} + \text{Z} \cdot \begin{pmatrix} \sqrt{\frac{1}{3}} \ - \sqrt{\frac{2}{3}} \end{pmatrix} + \text{Z} \cdot \text{X} \cdot \begin{pmatrix} - \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} \Bigg] \nonumber$
Alice's Bell state measurement result (|00>, |01>, |10> or |11>, see indexed Bell states above) determines the operation (I, X, Z or ZX) that Bob performs on his photon.
Perspective II.
The three-qubit initial state is rewritten as a linear superposition of the four possible Bell states that Alice can find on measurement. Note that this is equivalent to the expression on the left side of the equation immediately above if the Bell states are replaced by their binary indices, as they would be after the Bell state measurement.
$\begin{split} | \Psi \rangle &= \begin{pmatrix} \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \end{pmatrix} \otimes \sqrt{\frac{1}{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = \sqrt{\frac{1}{2}} \begin{pmatrix} \sqrt{\frac{1}{3}} \ 0 \ 0 \ \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \ 0 \ 0 \ \sqrt{\frac{2}{3}} \end{pmatrix} \ &= \frac{1}{2} \Bigg[ \sqrt{\frac{1}{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} \sqrt{\frac{1}{3}} \ \sqrt{\frac{2}{3}} \end{pmatrix} + \sqrt{\frac{1}{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} + \sqrt{\frac{1}{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} \otimes \begin{pmatrix} \sqrt{\frac{1}{3}} \ - \sqrt{\frac{2}{3}} \end{pmatrix} + \sqrt{\frac{1}{2}} \begin{pmatrix} 0 \ 1 \ -1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} - \sqrt{\frac{2}{3}} \ \sqrt{\frac{1}{3}} \end{pmatrix} \Bigg] \end{split} \nonumber$
Condensed version of the equation:
$\begin{split} | \Psi \rangle & =\left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \otimes \beta_{00} \ &=\frac{1}{2}\left[\beta_{00}\otimes \left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) +\beta_{01} \otimes \left( \begin{array}{l}{\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right) +\beta_{10} \otimes \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {-\sqrt{\frac{2}{3}}}\end{array}\right) +\beta_{11} \otimes \left( \begin{array}{c}{-\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right) \right] \end{split} \nonumber$
Another way to write this equation:
$\begin{split} | \Psi \rangle & =\left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \otimes \beta_{00} \ &=\frac{1}{2}\left[\beta_{00}\otimes I \left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) +\beta_{01} \otimes X \left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) +\beta_{10} \otimes Z \left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) +\beta_{11} \otimes X \cdot Z \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \right] \end{split} \nonumber$
Perspective III.
The teleportation circuit (TC) is written as a composite matrix operator which then operates on the initial three-qubit state.
$\Psi :=\frac{1}{\sqrt{2}} \cdot \begin{pmatrix} \sqrt{\dfrac{1}{3}} &0&0& \sqrt{\dfrac{1}{3}}& \sqrt{\dfrac{2}{3}}& 0&0& \sqrt{\dfrac{2}{3}}\end{pmatrix}^{\mathrm{T}} \nonumber$
$\mathrm{TC} :=\text { kronecker }(\mathrm{H}, \text { kronecker }(\mathrm{I}, \mathrm{I})) \cdot \text { kronecker }(\mathrm{CNOT}, \mathrm{I}) \nonumber$
After operation of the circuit the system is in a superposition state involving the Bell state indices on the top two registers. The third register contains a state that can easily be transformed into the teleportee once Alice tells Bob which Bell state she observed.
$\begin{split} \mathrm{TC} \cdot \Psi &= \left( \begin{array}{c}{0.289} \ {0.408} \ {0.408} \ {0.289} \ {0.289} \ {-0.408} \ {-0.408} \ {0.289}\end{array}\right) \ &= \frac{1}{2}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right)+\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {-\sqrt{\frac{2}{3}}}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{c}{-\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right) \right] \ &= \frac{1}{2}\left[ \left(\begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) +\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right) \left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right) \otimes \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {-\sqrt{\frac{2}{3}}}\end{array}\right) +\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{-\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}} \end{array}\right) \right] \end{split} \nonumber$
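The circuit algebra above is easy to check numerically. The following NumPy sketch (variable names are ours, with `np.kron` standing in for Mathcad's `kronecker`) builds TC and applies it to the initial three-qubit state:

```python
import numpy as np

# One-qubit gates and the two-qubit CNOT used by the teleportation circuit (TC)
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Teleportee state (qubit 1) and the shared Bell state beta_00 (qubits 2 and 3)
teleportee = np.array([np.sqrt(1/3), np.sqrt(2/3)])
b00 = np.array([1, 0, 0, 1]) / np.sqrt(2)
Psi = np.kron(teleportee, b00)

# TC = (H x I x I) . (CNOT x I), as in the worksheet
TC = np.kron(H, np.kron(I, I)) @ np.kron(CNOT, I)

# Amplitudes (0.289, 0.408, 0.408, 0.289, 0.289, -0.408, -0.408, 0.289)
print(np.round(TC @ Psi, 3))
```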
Tabular summary of teleportation experiment:
$\begin{pmatrix} \text{Alice Measurement Result} & \beta_{00} & \beta_{01} & \beta_{10} & \beta_{11} \ \text{Bob's Action} & I & X & Z & Z \cdot X \end{pmatrix} \nonumber$
Bell state indices:
$\mathrm{kronecker} (\mathrm{H}, \mathrm{I}) \cdot \mathrm{CNOT} \cdot \beta_{00}=\left( \begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right)$
$\mathrm{kronecker} (\mathrm{H}, \mathrm{I}) \cdot \mathrm{CNOT} \cdot \beta_{01}=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)$
$\mathrm{kronecker} (\mathrm{H}, \mathrm{I}) \cdot \mathrm{CNOT} \cdot \beta_{10}=\left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)$
$\mathrm{kronecker} (\mathrm{H}, \mathrm{I}) \cdot \mathrm{CNOT} \cdot \beta_{11}=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)$
A similar approach is to use projection operators on the top two qubits to simulate the four measurement outcomes. See the Appendix for more on this method.
$\begin{array}{lll} \; & \; & \text{Bob's action} \ \text{Measure}\; |00 \rangle & 2 \cdot \text{kronecker}\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \text{kronecker}\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0.577} \ {0.816} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0}\end{array}\right) & \text{No action required.} \ \text{Measure}\; |01 \rangle & 2 \cdot \text{kronecker}\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \text{kronecker}\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0} \ {0} \ {0.816} \ {0.577} \ {0} \ {0} \ {0} \ {0}\end{array}\right) & \text{Operate with X} \ \text{Measure}\; |10 \rangle & 2 \cdot \text{kronecker}\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \text{kronecker}\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0} \ {0} \ {0} \ {0} \ {0.577} \ {-0.816} \ {0} \ {0}\end{array}\right) & \text{Operate with Z} \ \text{Measure}\; |11 \rangle & 2 \cdot \text{kronecker}\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \text{kronecker}\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {-0.816} \ {0.577}\end{array}\right) & \text{Operate with ZX} \end{array} \nonumber$
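The projector table can be replayed in code. This NumPy sketch (names are ours; the factor of 2 renormalizes each probability-1/4 outcome, as in the worksheet) checks that every outcome, after Bob's correction, leaves qubit 3 in the teleportee state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
P0 = np.outer([1, 0], [1, 0])   # |0><0|
P1 = np.outer([0, 1], [0, 1])   # |1><1|

# Amplitudes of TC.Psi from the worksheet: (1, s, s, 1, 1, -s, -s, 1)/sqrt(12), s = sqrt(2)
s = np.sqrt(2)
psi = np.array([1, s, s, 1, 1, -s, -s, 1]) / np.sqrt(12)
teleportee = np.array([np.sqrt(1/3), np.sqrt(2/3)])

# One (projector pair, correction) per measurement outcome 00, 01, 10, 11
for Pa, Pb, fix in [(P0, P0, I2), (P0, P1, X), (P1, P0, Z), (P1, P1, Z @ X)]:
    projected = 2 * np.kron(Pa, np.kron(Pb, I2)) @ psi   # 2x renormalizes the 1/4-probability outcome
    corrected = np.kron(np.eye(4), fix) @ projected      # Bob acts on qubit 3 only
    blocks = corrected.reshape(4, 2)                     # one 2-vector per outcome of qubits 1,2
    qubit3 = blocks[np.abs(blocks).sum(axis=1).argmax()]
    assert np.allclose(qubit3, teleportee)
print("all four corrections recover the teleportee")
```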
Perspective IV.
Projecting the teleportee photon 1 (green) onto the result of Alice's Bell state measurement (blue) yields the state of photon 2, which was initially entangled with Bob's photon 3. Projection of this state onto the original 2-3 entangled state (red) transforms Bob's photon to the teleportee state 25% of the time. As is now shown, this happens when Alice's Bell state measurement yields $\beta_{00}$.
$\begin{split} _{12}\left\langle\beta_{00} | \Psi\right\rangle_{1} | \beta_{00} \rangle_{23} &=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2}^{\mathrm{T}}+\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{2}^{\mathrm{T}}\right] \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right)_{1} \frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{3}+\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{3}\right] \ &=\frac{1}{2} \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right)_{3} \end{split} \nonumber$
An algebraic analysis of this example is as follows.
$\frac{1}{\sqrt{2}}\left[\,_{1}\langle 0 |\,_{2}\langle 0 |+\,_{1}\langle 1 |\,_{2}\langle 1 |\right] \underbrace{\left[\alpha | 0\rangle_{1}+\beta | 1\rangle_{1}\right]}_{\text{teleportee}} \frac{1}{\sqrt{2}}\left[ |0\rangle_{2} | 0\rangle_{3}+| 1\rangle_{2} | 1 \rangle_{3} \right] \ \downarrow \ \frac{1}{\sqrt{2}}\left[\alpha\,_{2}\langle 0|+\beta\,_{2}\langle 1|\right] \frac{1}{\sqrt{2}}\left[ |0\rangle_{2}| 0\rangle_{3}+| 1\rangle_{2} | 1 \rangle_{3} \right] \ \downarrow \ \frac{1}{2}\left[\alpha | 0\rangle_{3}+\beta | 1 \rangle_{3} \right] \nonumber$
Naturally this approach yields the same results as the previous perspectives when Alice's Bell state measurement is $\beta_{01}, \beta_{10}$ and $\beta_{11}$. As demonstrated previously for these results Bob's action is X, Z and ZX, respectively.
$\begin{split} _{12}\left\langle\beta_{01} | \Psi_{1}\right\rangle | \beta_{00} \rangle_{23} &= \frac{1}{\sqrt{2}}\left[\left( \begin{array}{cc}{1} \ {0}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{2}^{\mathrm{T}}+\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2}^{\mathrm{T}}\right] \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right)_{1} \frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{3}+\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{3}\right] \ &= \frac{1}{2} \left( \begin{array}{c}{\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right)_{3} \end{split} \nonumber$
$\begin{split} _{12}\left\langle\beta_{10} | \Psi_{1}\right\rangle | \beta_{00} \rangle_{23} &= \frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2}^{\mathrm{T}}-\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{2}^{\mathrm{T}}\right] \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right)_{1} \frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{3}+\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{3}\right] \ &= \frac{1}{2} \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {- \sqrt{\frac{2}{3}}}\end{array}\right)_{3} \end{split} \nonumber$
$\begin{split} _{12}\left\langle\beta_{11} | \Psi_{1}\right\rangle | \beta_{00} \rangle_{23} &= \frac{1}{\sqrt{2}}\left[\left( \begin{array}{cc}{1} \ {0}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{2}^{\mathrm{T}}-\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{1}^{\mathrm{T}} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2}^{\mathrm{T}}\right] \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right)_{1} \frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)_{3}+\left( \begin{array}{c}{0} \ {1}\end{array}\right)_{2} \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)_{3}\right] \ &= \frac{1}{2} \left( \begin{array}{c}{- \sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}}\end{array}\right)_{3} \end{split} \nonumber$
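Each of the four partial inner products above amounts to contracting a Bell-state bra with qubits 1 and 2 of the initial state, leaving a 2-vector on qubit 3. A NumPy sketch (names are ours) reproduces all four qubit-3 results at once:

```python
import numpy as np

teleportee = np.array([np.sqrt(1/3), np.sqrt(2/3)])
b00 = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(teleportee, b00)    # qubit 1 = teleportee, qubits 2,3 = Bell pair

bells = {"b00": np.array([1, 0, 0, 1]) / np.sqrt(2),
         "b01": np.array([0, 1, 1, 0]) / np.sqrt(2),
         "b10": np.array([1, 0, 0, -1]) / np.sqrt(2),
         "b11": np.array([0, 1, -1, 0]) / np.sqrt(2)}

# <bell|_{12} |state>: contract the bra with qubits 1 and 2, leaving qubit 3.
# Each outcome carries amplitude 1/2, so multiply by 2 to display the bare qubit-3 state.
for name, b in bells.items():
    qubit3 = b @ state.reshape(4, 2)
    print(name, np.round(2 * qubit3, 3))
```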
Summary of the quantum teleportation protocol:
"Quantum teleportation provides a 'disembodied' way to transfer quantum states from one object to another at a distant location, assisted by previously shared entangled states and a classical communication channel." Nature 518, 516 (2015)
The paper cited above reported the first successful teleportation of two degrees of freedom of a single photon. The analysis is somewhat more complicated than that provided in this tutorial, but the general principle is the same. The quantum channel is a hyper-entangled state shared by Alice and Bob, rather than one of the simple entangled Bell states.
Appendix:
Addendum to Perspective III.
In these calculations the required operations by Bob are actually carried out on the third qubit.
Measure |00> do nothing. $\text{kronecker}(\mathrm{I}, \text { kronecker }(\mathrm{I}, \mathrm{I})) \cdot 2 \cdot \text { kronecker }\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \text { kronecker }\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0.577} \ {0.816} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0}\end{array}\right)$
Measure |01> operate with X. $\text{kronecker}(\mathrm{I}, \text { kronecker }(\mathrm{I}, \mathrm{X})) \cdot 2 \cdot \text { kronecker }\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \text { kronecker }\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0} \ {0} \ {0.577} \ {0.816} \ {0} \ {0} \ {0} \ {0}\end{array}\right)$
Measure |10> operate with Z. $\text{kronecker}(\mathrm{I}, \text { kronecker }(\mathrm{I}, \mathrm{Z})) \cdot 2 \cdot \text { kronecker }\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \text { kronecker }\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{1} \ {0}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0} \ {0} \ {0} \ {0} \ {0.577} \ {0.816} \ {0} \ {0}\end{array}\right)$
Measure |11> operate with ZX. $\text{kronecker}(\mathrm{I}, \text { kronecker }(\mathrm{I}, \mathrm{Z}\cdot\mathrm{X})) \cdot 2 \cdot \text { kronecker }\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \text { kronecker }\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right)^{\mathrm{T}}, \mathrm{I}\right]\right] \cdot \mathrm{TC} \cdot \Psi =\left( \begin{array}{c}{0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0.577} \ {0.816}\end{array}\right)$
As can be seen in the previous tutorials, teleportation involves entanglement transfer. Alice projects her photons onto one of the entangled Bell states, and Bob receives a photon state which, using information provided via the classical communication channel, can be transformed into the teleportee state. Given the importance of entanglement in quantum computing, a more elaborate example of entanglement transfer is provided in the following tutorial.
An Entanglement Swapping Protocol
In the field of quantum information, interference, superpositions and entangled states are essential resources. Entanglement, a non-factorable superposition, is routinely achieved when two photons are emitted from the same source, say a parametric down converter (PDC). Entanglement swapping involves the transfer of entanglement to two photons that were produced independently and never previously interacted. The four Bell states are the maximally entangled two-qubit states that form a basis for the four-dimensional two-qubit Hilbert space; they play an essential role in quantum information theory and technology, including teleportation and entanglement swapping. The Bell states are shown below.
$\begin{array} a \Phi_{\mathrm{p}}=\dfrac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \quad & \Phi_{\mathrm{p}} :=\dfrac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \quad & \Phi_{\mathrm{m}}=\dfrac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \quad & \Phi_{\mathrm{m}} :=\dfrac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {-1}\end{array}\right) \ \Psi_{\mathrm{p}}=\dfrac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right] \quad & \Psi_{\mathrm{p}} :=\dfrac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {1} \ {1} \ {0}\end{array}\right) & \Psi_{\mathrm{m}}=\dfrac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right] \quad & \Psi_{\mathrm{m}} :=\dfrac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \end{array} \nonumber$
A four-qubit state is prepared in which photons 1 and 2 are entangled in Bell state $\Phi_{p}$, and photons 3 and 4 are entangled in Bell state $\Psi_{m}$. The state multiplication below is understood to be tensor vector multiplication.
$\Psi=\Phi_{\mathrm{p}} \cdot \Psi_{\mathrm{m}}=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \cdot \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \quad \Psi :=\frac{1}{2} \cdot \left( \begin{array}{ccccccccccccccc}{0} & {1} & {-1} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {1} & {-1} & {0}\end{array}\right)^{\mathrm{T}} \quad \mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
A Bell state measurement, with four possible outcomes, is now made on photons 2 and 3; this entangles photons 1 and 4. Projection of photons 2 and 3 onto $\Phi_{p}$ projects photons 1 and 4 onto $\Psi_{m}$.
$\left(\text { kronecker }\left(\mathrm{I}, \text { kronecker }\left(\Phi_{\mathrm{p}} \cdot \Phi_{\mathrm{p}}^{\mathrm{T}}, \mathrm{I}\right)\right) \cdot \Psi\right)^{\mathrm{T}}=\left( \begin{array}{llllllllllllll}{0} & {0.25} & {0} & {0} & {0} & {0} & {0} & {0.25} & {-0.25} & {0} & {0} & {0} & {0} & {0} & {-0.25} & {0}\end{array}\right) \nonumber$
$\frac{1}{2 \sqrt{2}} \cdot\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) -\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \frac{1}{\sqrt{2}}\cdot\left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right]^{T} \ =\frac{1}{4} \cdot \left( \begin{array}{llllllllllll}{0} & {1} & {0} & {0} & {0} & {0} & {0 }& {1} & {-1} & {0} & {0} & {0} & {0} & {0} & {-1} & {0}\end{array}\right) \nonumber$
Projection of photons 2 and 3 onto $\Phi_{m}$ projects photons 1 and 4 onto $\Psi_{p}$.
$\left(\text { kronecker }\left(\mathrm{I}, \text { kronecker }\left(\Phi_{\mathrm{m}} \cdot \Phi_{\mathrm{m}}^{\mathrm{T}}, \mathrm{I}\right)\right) \cdot \Psi\right)^{\mathrm{T}}=\left( \begin{array}{llllllllllllll}{0} & {0.25} & {0} & {0} & {0} & {0} & {0} & {-0.25} & {0.25} & {0} & {0} & {0} & {0} & {0} & {-0.25} & {0}\end{array}\right) \nonumber$
$\frac{1}{2 \sqrt{2}} \cdot\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) +\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \frac{1}{\sqrt{2}}\cdot\left( \begin{array}{l}{1} \ {0} \ {0} \ {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right]^{T} \ =\frac{1}{4} \cdot \left( \begin{array}{llllllllllll}{0} & {1} & {0} & {0} & {0} & {0} & {0 }& {-1} & {1} & {0} & {0} & {0} & {0} & {0} & {-1} & {0}\end{array}\right) \nonumber$
Projection of photons 2 and 3 onto $\Psi_{p}$ projects photons 1 and 4 onto $\Phi_{m}$.
$\left(\text { kronecker }\left(\mathrm{I}, \text { kronecker }\left(\Psi_{\mathrm{p}} \cdot \Psi_{\mathrm{p}}^{\mathrm{T}}, \mathrm{I}\right)\right) \cdot \Psi\right)^{\mathrm{T}}=\left( \begin{array}{llllllllllllll}{0} & {0} & {-0.25} & {0} & {-0.25} & {0} & {0} & {0} & {0} & {0} & {0} & {0.25} & {0} & {0.25} & {0} & {0}\end{array}\right) \nonumber$
$\frac{1}{2 \sqrt{2}} \cdot\left[\left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {1} \ {1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) -\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \frac{1}{\sqrt{2}}\cdot\left( \begin{array}{l}{0} \ {1} \ {1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right]^{T} \ =\frac{1}{4} \cdot \left( \begin{array}{llllllllllll}{0} & {0} & {-1} & {0} & {-1} & {0} & {0}& {0} & {0} & {0} & {0} & {1} & {0} & {1} & {0} & {0}\end{array}\right) \nonumber$
Finally, projection of photons 2 and 3 onto $\Psi_{m}$ projects photons 1 and 4 onto $\Phi_{p}$.
$\left(\text { kronecker }\left(\mathrm{I}, \text { kronecker }\left(\Psi_{\mathrm{m}} \cdot \Psi_{\mathrm{m}}^{\mathrm{T}}, \mathrm{I}\right)\right) \cdot \Psi\right)^{\mathrm{T}}=\left( \begin{array}{llllllllllllll}{0} & {0} & {-0.25} & {0} & {0.25} & {0} & {0} & {0} & {0} & {0} & {0} & {-0.25} & {0} & {0.25} & {0} & {0}\end{array}\right) \nonumber$
$\frac{-1}{2 \sqrt{2}} \cdot\left[\left( \begin{array}{c}{1} \ {0}\end{array}\right) \cdot \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {1} \ {-1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) +\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \frac{1}{\sqrt{2}}\cdot\left( \begin{array}{l}{0} \ {1} \ {-1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right]^{T} \ =\frac{1}{4} \cdot \left( \begin{array}{llllllllllll}{0} & {0} & {-1} & {0} & {1} & {0} & {0}& {0} & {0} & {0} & {0} & {-1} & {0} & {1} & {0} & {0}\end{array}\right) \nonumber$
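All four projections can be generated in one loop. This NumPy sketch (names are ours) builds the four-photon state and applies the Bell-state projector to photons 2 and 3 for each outcome, reproducing the projected 16-vectors:

```python
import numpy as np

I2 = np.eye(2)
bell = {"Phi_p": np.array([1, 0, 0, 1]) / np.sqrt(2),
        "Phi_m": np.array([1, 0, 0, -1]) / np.sqrt(2),
        "Psi_p": np.array([0, 1, 1, 0]) / np.sqrt(2),
        "Psi_m": np.array([0, 1, -1, 0]) / np.sqrt(2)}

# Photons 1-2 share Phi_p, photons 3-4 share Psi_m
Psi = np.kron(bell["Phi_p"], bell["Psi_m"])

# Bell-state projector on photons 2 and 3: I (x) |b><b| (x) I
for name, b in bell.items():
    proj = np.kron(I2, np.kron(np.outer(b, b), I2))
    print(name, np.round(proj @ Psi, 2))
```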
In our earlier examination of the quantum computer we saw that a quantum circuit can calculate all the values of f(x) simultaneously, but we can only retrieve one value due to the collapse of the superposition of answers on observation. To achieve a quantum advantage in computation requires more subtle programming techniques which exploit the effects of quantum interference. The following tutorials reveal how the quantum advantage can be achieved in several areas of practical importance.
While quantum mechanics could spell disaster for public-key cryptography, it may also offer salvation. This is because the resources of the quantum world appear to offer the ultimate form of secret code, one that is guaranteed by the laws of physics to be unbreakable. Julian Brown, The Quest for the Quantum Computer, page 189.
In other words, “The quantum taketh away and the quantum giveth back!” Asher Peres
We begin with Shor’s algorithm which demonstrates how quantum entanglement and interference effects can facilitate the factorization of large integers into their prime factors. The inability of conventional computers to do this is essential to the integrity of public-key cryptography.
Factoring Using Shor's Quantum Algorithm
This tutorial presents a toy calculation dealing with quantum factorization using Shor's algorithm. Before beginning that task, traditional classical factorization is reviewed with the example of finding the prime factors of 15. As shown below, the key is to find the period of $a^{x}$ modulo 15, where a is chosen randomly.
$\mathrm{a} :=4 \qquad \mathrm{N} :=15 \qquad f(x) :=\bmod \left(a^{x}, N\right) \qquad \mathrm{Q} :=8 \qquad \mathrm{x} :=0 \ldots \mathrm{Q}-1 \nonumber$
$\begin{array}{cc} \mathrm{x} & \mathrm{f}(\mathrm{x}) \ 0 & 1 \ 1 & 4 \ 2 & 1 \ 3 & 4 \ 4 & 1 \ 5 & 4 \ 6 & 1 \ 7 & 4 \end{array} \nonumber$
Seeing that the period of f(x) is two, the next step is to use the Euclidean algorithm, calculating the greatest common divisor of N with each of the two quantities $a^{\text{period}/2} \mp 1$, which involve the period and a.
$\text{period} : = 2 \qquad \operatorname{gcd}\left(a^{\frac{\text { period }}{2}}-1, \mathrm{N}\right)=3 \qquad \operatorname{gcd}\left(a^{\frac{\text { period }}{2}}+1, \mathrm{N}\right)=5 \nonumber$
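The classical period search and gcd step translate into a few lines of Python (a toy direct search, not the quantum part of the algorithm; names are ours):

```python
from math import gcd

# Classical post-processing in Shor's algorithm for N = 15, a = 4
a, N = 4, 15

# Find the period r of f(x) = a^x mod N by direct search
r = 1
while pow(a, r, N) != 1:
    r += 1
print("period =", r)   # period = 2

# Euclidean algorithm on a^(r/2) -+ 1 yields the prime factors of N
print(gcd(a**(r // 2) - 1, N), gcd(a**(r // 2) + 1, N))   # 3 5
```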
We proceed by ignoring the fact that we already know that the period of f(x) is 2 and demonstrate how it is determined using a quantum (discrete) Fourier transform. After the registers are loaded with x and f(x) using a quantum computer, they exist in the following superposition.
$\frac{1}{\sqrt{Q}} \sum_{x=0}^{Q-1} | x \rangle | f(x) \rangle=\frac{1}{2}[ |0\rangle | 1 \rangle+| 1 \rangle | 4 \rangle+| 2 \rangle | 1 \rangle+| 3 \rangle | 4 \rangle+\cdots ] \nonumber$
The next step is to find the period of f(x) by performing a quantum Fourier transform (QFT) on the input register |x>.
$\mathrm{Q} :=4 \qquad \mathrm{m} :=0 \ldots \mathrm{Q}-1 \qquad \mathrm{n} :=0 \ldots \mathrm{Q}-1 \qquad \mathrm{QFT}_{\mathrm{m}, \mathrm{n}} :=\frac{1}{\sqrt{\mathrm{Q}}} \cdot \exp \left(\mathrm{i} \cdot \frac{2 \cdot \pi \cdot \mathrm{m} \cdot \mathrm{n}}{\mathrm{Q}}\right) \qquad \mathrm{QFT}=\frac{1}{2} \cdot \left( \begin{array}{cccc}{1} & {1} & {1} & {1} \ {1} & {\mathrm{i}} & {-1} & {-\mathrm{i}} \ {1} & {-1} & {1} & {-1} \ {1} & {-\mathrm{i}} & {-1} & {\mathrm{i}}\end{array}\right) \nonumber$
$\mathrm{x}=0 \quad \mathrm{QFT} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0.5} \ {0.5} \ {0.5} \ {0.5}\end{array}\right) \qquad \mathrm{x}=1 \quad \mathrm{QFT} \cdot \left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0.5} \ {0.5 \mathrm{i}} \ {-0.5} \ {-0.5 \mathrm{i}}\end{array}\right) \ \mathrm{x}=2 \quad \mathrm{QFT} \cdot \left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0.5} \ {-0.5} \ {0.5} \ {-0.5}\end{array}\right) \qquad \mathrm{x}=3 \quad \mathrm{QFT} \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0.5} \ {-0.5 \mathrm{i}} \ {-0.5} \ {0.5 \mathrm{i}}\end{array}\right) \nonumber$
The operation of the QFT on the x-register is expressed algebraically in the middle term below. Quantum interference in this term yields the result on the right which shows a period of 2 on the x-register.
$\begin{split} QFT(x) \frac{1}{2}[ |0\rangle | 1 \rangle+| 1 \rangle | 4 \rangle+| 2 \rangle | 1 \rangle+| 3 \rangle | 4 \rangle ] &= \frac{1}{4}[ |0\rangle+| 1 \rangle+| 2 \rangle+| 3 \rangle ] | 1 \rangle + \frac{1}{4}[ |0\rangle+ i | 1 \rangle-| 2 \rangle-i | 3 \rangle ] | 4 \rangle \ & \quad + \frac{1}{4}[ |0\rangle-| 1 \rangle+| 2 \rangle-| 3 \rangle ] | 1 \rangle + \frac{1}{4}[ |0\rangle- i | 1 \rangle-| 2 \rangle+i | 3 \rangle ] | 4 \rangle \ &= \frac{1}{2}[ |0\rangle( |1\rangle+| 4 \rangle )+| 2 \rangle( |1\rangle-| 4 \rangle ) ] \end{split} \nonumber$
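The interference argument can be checked numerically. This sketch (names are ours) builds the 4x4 QFT with the same sign convention as the worksheet and transforms the piece of the x-register correlated with f(x) = 1:

```python
import numpy as np

# 4x4 discrete Fourier transform matrix, exp(+2*pi*i*m*n/Q)/sqrt(Q)
Q = 4
m, n = np.meshgrid(np.arange(Q), np.arange(Q), indexing="ij")
QFT = np.exp(2j * np.pi * m * n / Q) / np.sqrt(Q)

# The part of the x-register correlated with f(x) = 1 lives on |0> and |2>
x_for_f1 = np.array([1, 0, 1, 0]) / np.sqrt(2)

# Nonzero amplitude only at |0> and |2>, revealing the period 2
print(np.round(QFT @ x_for_f1, 3))
```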
Figure 5 in "Quantum Computation," by David P. DiVincenzo, Science 270, 258 (1995) provides a graphical illustration of the steps of Shor's factorization algorithm.
How quantum theory gives back is demonstrated by an examination of Ekert’s quantum secret key proposal.
The Quantum Math Behind Ekert's Key Distribution Scheme
Alice and Bob share an entangled photon (EPR) pair in the following state.
$\begin{split} | \Psi \rangle &=\frac{1}{\sqrt{2}}[ |\mathrm{R}\rangle_{A} | \mathrm{R} \rangle_{\mathrm{B}}+| \mathrm{L} \rangle_{\mathrm{A}} | \mathrm{L} \rangle_{\mathrm{B}} ] =\frac{1}{2 \sqrt{2}}\left[\left( \begin{array}{c}{1} \ {i}\end{array}\right)_{A} \otimes \left( \begin{array}{l}{1} \ {i}\end{array}\right)_{B}+\left( \begin{array}{c}{1} \ {-i}\end{array}\right)_{A} \otimes \left( \begin{array}{c}{1} \ {-i}\end{array}\right)_{B}\right] \ &=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {0} \ {0} \ {-1}\end{array}\right)=\frac{1}{\sqrt{2}}[ |\mathrm{V}\rangle_{\mathrm{A}} | \mathrm{V} \rangle_{\mathrm{B}}-| \mathrm{H} \rangle_{\mathrm{A}} | \mathrm{H} \rangle_{\mathrm{B}} ] \end{split} \nonumber$
They agree to make random polarization measurements in the rectilinear and circular polarization bases. When a measurement is made on a quantum system the result is always an eigenstate of the measurement operator. The eigenstates in the circular and rectilinear bases are:
$\mathrm{R} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {\mathrm{i}}\end{array}\right) \quad \mathrm{L} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {-\mathrm{i}}\end{array}\right) \quad \mathrm{V} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad \mathrm{H} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
Pertinent superpositions:
$\mathrm{V}=\frac{1}{\sqrt{2}} \cdot(\mathrm{R}+\mathrm{L}) \quad \mathrm{H}=\frac{\mathrm{i}}{\sqrt{2}} \cdot(\mathrm{L}-\mathrm{R}) \quad \mathrm{R}=\frac{1}{\sqrt{2}} \cdot(\mathrm{V}+\mathrm{i} \cdot \mathrm{H}) \quad \mathrm{L}=\frac{1}{\sqrt{2}} \cdot(\mathrm{V}-\mathrm{i} \cdot \mathrm{H}) \nonumber$
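These four superpositions are easy to verify numerically; a small NumPy check (variable names are ours):

```python
import numpy as np

# Circular and rectilinear polarization basis states
R = np.array([1, 1j]) / np.sqrt(2)
L = np.array([1, -1j]) / np.sqrt(2)
V = np.array([1, 0])
H = np.array([0, 1])

# Verify the superpositions used in the key-distribution argument
print(np.allclose(V, (R + L) / np.sqrt(2)))        # True
print(np.allclose(H, 1j * (L - R) / np.sqrt(2)))   # True
print(np.allclose(R, (V + 1j * H) / np.sqrt(2)))   # True
print(np.allclose(L, (V - 1j * H) / np.sqrt(2)))   # True
```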
Alice's random measurement effectively sends a random photon to Bob due to the correlations built into the entangled state of their shared photon pair. Alice's four measurement possibilities and their consequences for Bob are now examined.
Alice's photon is found to be right circularly polarized, |R>. If Bob measures circular polarization he is certain to find his photon to be |R>. But if he chooses to measure in the rectilinear basis the probability he will observe |V> is 0.5 and the probability he will observe |H> is 0.5.
$\frac{1}{\sqrt{2}} \cdot \mathrm{R} \cdot \mathrm{R}=\frac{1}{\sqrt{2}} \cdot \mathrm{R} \cdot\left[\frac{1}{\sqrt{2}} \cdot(\mathrm{V}+\mathrm{i} \cdot \mathrm{H})\right] \nonumber$
If Alice observes |L>, Bob will also if he measures circular polarization. But if he measures in the rectilinear basis the probability he will observe |V> is 0.5 and the probability he will observe |H> is 0.5.
$\frac{1}{\sqrt{2}} \cdot \mathrm{L} \cdot \mathrm{L}=\frac{1}{\sqrt{2}} \cdot \mathrm{L} \cdot\left[\frac{1}{\sqrt{2}} \cdot(\mathrm{V}-\mathrm{i} \cdot \mathrm{H})\right] \nonumber$
The same kind of reasoning applies to measurements Alice makes in the rectilinear basis.
$\frac{1}{\sqrt{2}} \cdot \mathrm{V} \cdot \mathrm{V}=\frac{1}{\sqrt{2}} \cdot \mathrm{V} \cdot\left[\frac{1}{\sqrt{2}} \cdot(\mathrm{R}+\mathrm{L})\right] \qquad -\frac{1}{\sqrt{2}} \cdot \mathrm{H} \cdot \mathrm{H}=-\frac{1}{\sqrt{2}} \cdot \mathrm{H} \cdot\left[\frac{\mathrm{i}}{\sqrt{2}} \cdot(\mathrm{L}-\mathrm{R})\right] \nonumber$
Alice and Bob keep the results for the experiments for which they measured in the same basis (blue in the table below), and make the following bit value assignments: |V> = |R> = 0 and |H> = |L> = 1. This leads to the secret key on the bottom line.
$\begin{array}{llllllllll}{\text {Alice }} & {\mathrm{R}} & {\mathrm{V}} & {\mathrm{V}} & {\mathrm{L}} & {\mathrm{H}} & {\mathrm{L}} & {\mathrm{H}} & {\mathrm{R}} & {\mathrm{V}} & {\mathrm{L}} & {\mathrm{H}} \ {\mathrm{Bob}} & {\mathrm{R}} & {\mathrm{L}} & {\mathrm{V}} & {\mathrm{L}} & {\mathrm{R}} & {\mathrm{H}} & {\mathrm{H}} & {\mathrm{R}} & {\mathrm{V}} & {\mathrm{V}} & {\mathrm{H}} \ {\mathrm{Key}} & {0} & \; &{0} & {1} & \; & \; & {1} & {0} & {0} & \; & {1}\end{array} \nonumber$
The following demonstrates how a binary message is coded and subsequently decoded using a shared binary secret key and modulo 2 arithmetic.
Message Key Coded Message Decoded Message
$\text{Mes} : = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \end{pmatrix}$
$\text{Key} : = \begin{pmatrix} 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \end{pmatrix}$
$\mathrm{CMes} :=\bmod (\mathrm{Mes}+\mathrm{Key}, 2)=\begin{pmatrix} 0 \ 1 \ 1 \ 1 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \end{pmatrix}$
$\mathrm{DMes} :=\bmod (\mathrm{CMes}+\mathrm{Key}, 2)=\begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \end{pmatrix}$
It is clear by inspection that the message has been accurately decoded. This is confirmed by calculating the difference between the message and the decoded message.
$(\mathrm{Mes}-\mathrm{DMes})^{\mathrm{T}}=\left( \begin{array}{lllllllllllll}{0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
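The mod-2 coding and decoding steps translate directly into Python (array names are ours; adding the key twice modulo 2 is the identity, which is why decoding works):

```python
import numpy as np

# Alice's binary message and the shared secret key
mes = np.array([0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1])
key = np.array([0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1])

coded = (mes + key) % 2      # Alice encrypts with the shared key
decoded = (coded + key) % 2  # Bob decrypts with the same key

print(coded)
print(np.array_equal(decoded, mes))   # True
```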
Coding and Decoding Venus
In 2000 Anton Zeilinger and his research team sent an encrypted photo of the fertility goddess Venus of Willendorf from Alice to Bob, two computers in two buildings about 400 meters apart. The figure summarizing this achievement first appeared in Physical Review Letters and later in a review article in Nature.
It is easy to produce a rudimentary simulation of the experiment. Bitwise XOR is nothing more than addition modulo 2. The original Venus and the shared key are represented by the following matrices, where the matrix elements are pixels that are either off (0) or on (1).
$\mathrm{i} := 1 .. 7 \qquad \mathrm{j} := 1 .. 6 \qquad \mathrm{Key}_{\mathrm{i}, \mathrm{j}} := \operatorname{trunc}(\operatorname{rnd}(2)) \nonumber$
$\text{Key }=\left( \begin{array}{cccccc}{0} & {0} & {1} & {0} & {1} & {0} \\ {1} & {0} & {0} & {0} & {1} & {0} \\ {0} & {1} & {1} & {0} & {0} & {0} \\ {1} & {1} & {1} & {1} & {1} & {0} \\ {1} & {1} & {1} & {1} & {0} & {1} \\ {0} & {1} & {0} & {0} & {1} & {1} \\ {0} & {1} & {0} & {1} & {1} & {1}\end{array}\right) \nonumber$
$\text{Venus } :=\left( \begin{array}{cccccc}{1} & {1} & {1} & {1} & {1} & {1} \\ {0} & {1} & {1} & {1} & {1} & {0} \\ {0} & {0} & {1} & {1} & {0} & {0} \\ {0} & {0} & {1} & {1} & {0} & {0} \\ {0} & {0} & {1} & {1} & {0} & {0} \\ {0} & {1} & {1} & {1} & {1} & {0} \\ {1} & {1} & {1} & {1} & {1} & {1}\end{array}\right) \nonumber$
A coded version of Venus is prepared by adding Venus and the Key modulo 2 and sent to Bob.
$\text{CVenus}_{i, j} :=\text { Venus }_{i, j} \oplus \text{Key}_{i, j} \qquad \text{CVenus}=\left( \begin{array}{cccccc}{1} & {1} & {0} & {1} & {0} & {1} \\ {1} & {1} & {1} & {1} & {0} & {0} \\ {0} & {1} & {0} & {1} & {0} & {0} \\ {1} & {1} & {0} & {0} & {1} & {0} \\ {1} & {1} & {0} & {0} & {0} & {1} \\ {0} & {0} & {1} & {1} & {0} & {1} \\ {1} & {0} & {1} & {0} & {0} & {0}\end{array}\right) \nonumber$
Bob adds the key to CVenus modulo 2 and sends the result to his printer.
$\mathrm{DVenus}_{\mathrm{i}, \mathrm{j}} :=\mathrm{CVenus}_{\mathrm{i}, \mathrm{j}} \oplus \mathrm{Key}_{\mathrm{i}, \mathrm{j}} \qquad \text{DVenus}=\left( \begin{array}{cccccc}{1} & {1} & {1} & {1} & {1} & {1} \\ {0} & {1} & {1} & {1} & {1} & {0} \\ {0} & {0} & {1} & {1} & {0} & {0} \\ {0} & {0} & {1} & {1} & {0} & {0} \\ {0} & {0} & {1} & {1} & {0} & {0} \\ {0} & {1} & {1} & {1} & {1} & {0} \\ {1} & {1} & {1} & {1} & {1} & {1}\end{array}\right) \nonumber$
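The same double-XOR round trip applies pixel by pixel. A brief sketch (plain NumPy standing in for the Mathcad worksheet) with the Key and Venus matrices quoted above:

```python
import numpy as np

# Key and Venus pixel matrices from the text (0 = off, 1 = on).
key = np.array([[0,0,1,0,1,0],
                [1,0,0,0,1,0],
                [0,1,1,0,0,0],
                [1,1,1,1,1,0],
                [1,1,1,1,0,1],
                [0,1,0,0,1,1],
                [0,1,0,1,1,1]])
venus = np.array([[1,1,1,1,1,1],
                  [0,1,1,1,1,0],
                  [0,0,1,1,0,0],
                  [0,0,1,1,0,0],
                  [0,0,1,1,0,0],
                  [0,1,1,1,1,0],
                  [1,1,1,1,1,1]])

cvenus = venus ^ key   # bitwise XOR = addition mod 2, pixel by pixel
dvenus = cvenus ^ key  # Bob applies the shared key again to decode

assert np.array_equal(dvenus, venus)  # Venus is recovered exactly
```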
A graphic summary of the simulation:
If you are asked if two pieces of glass have the same thickness, the conventional thing to do is to measure the thickness of each piece – two measurements. As shown in the first two tutorials a double-slit apparatus and a Mach-Zehnder interferometer can answer the question with a single measurement. The third tutorial is a version of the first which shows how path information destroys interference and how the interference can be restored. The fourth tutorial summarizes David Deutsch’s solution to an equivalent mathematical problem using a function introduced earlier. The double-slit apparatus, the Mach-Zehnder interferometer and Deutsch’s circuit are quantum computers which use superpositions and interference effects to cut the effort of answering the question by a factor of two.
Simulating the Deutsch‐Jozsa Algorithm with a Double‐Slit Apparatus
The Deutsch‐Jozsa algorithm determines whether $2^N$ numbers are [1] all the same, either all 0 or all 1 (a constant function), or [2] half 0 and half 1 (a balanced function), in one step instead of up to $2^{N-1} + 1$ steps. For N = 1, the Deutsch‐Jozsa algorithm can be visualized as putting two pieces of glass, which may be thin (0) or thick (1), behind the apertures of a double‐slit apparatus and measuring the interference pattern of a light source illuminating the slits. If the pattern is unchanged compared to the empty apparatus, the glass pieces have the same thickness (constant function); otherwise they have different thicknesses (balanced function). Peter Pfeifer, "Quantum Computation," Access Science, Vol. 17, p. 678, slightly modified by F.R.
This is demonstrated by calculating the diffraction pattern without glass present, and with glass present of the same thickness and different thickness. The diffraction pattern is the momentum distribution, which is the Fourier transform of the slit geometry.
Slit positions:
$\mathrm{x}_{\mathrm{L}} := 1 \qquad \mathrm{x}_{\mathrm{R}} := 2 \nonumber$
Slit width:
$\delta := 0.2 \nonumber$
The momentum wave function with possible phase shifts $\theta$ and $\phi$ at the two slits is represented by the following superposition. The phase shifts are directly proportional to the thickness of the glass.
$\Psi(\mathrm{p}, \theta, \varphi) = \frac{1}{\sqrt{2}}\left[\int_{x_{L}-\frac{\delta}{2}}^{x_{L}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}}\, d x \cdot \exp (i \cdot \theta)+\int_{x_{R}-\frac{\delta}{2}}^{x_{R}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}}\, d x \cdot \exp (i \cdot \varphi)\right] \nonumber$
The diffraction patterns (momentum distributions) for the empty apparatus ($\theta = \phi = 0$), for an apparatus with glass pieces of the same thickness ($\theta = \phi = \frac{\pi}{8}$) and for one that has glass of different thickness ($\theta = \frac{\pi}{8}\; \phi = \frac{\pi}{2}$) behind the slits are displayed below.
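The three cases can be compared numerically. In this sketch (an illustration, not the original worksheet) each slit integral is evaluated in closed form, $\int \exp(-ipx)\,dx = \exp(-ip x_s)\cdot 2\sin(p\delta/2)/p$, giving a common sinc envelope times a slit-position phase factor:

```python
import numpy as np

# Double-slit momentum distributions with glass phase shifts theta, phi.
xL, xR, delta = 1.0, 2.0, 0.2
p = np.linspace(-30.0, 30.0, 2001)
env = delta * np.sinc(p * delta / (2 * np.pi))   # = 2*sin(p*delta/2)/p

def prob(theta, phi):
    """Diffraction pattern |Psi(p, theta, phi)|^2."""
    pref = env / np.sqrt(2 * np.pi * delta) / np.sqrt(2)
    psi = pref * (np.exp(-1j * p * xL) * np.exp(1j * theta)
                  + np.exp(-1j * p * xR) * np.exp(1j * phi))
    return np.abs(psi) ** 2

P_empty = prob(0, 0)
P_same = prob(np.pi / 8, np.pi / 8)   # constant: equal thickness
P_diff = prob(np.pi / 8, np.pi / 2)   # balanced: different thickness

# Equal phase shifts only multiply Psi by a global phase: pattern unchanged.
assert np.allclose(P_empty, P_same)
assert not np.allclose(P_empty, P_diff)
```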
Another look at the calculations with graphical support on the left is provided below.
Simulating a Quantum Computer with a Mach-Zehnder Interferometer
Suppose you are asked if two pieces of glass are the same thickness. The conventional thing to do is to measure the thickness of each piece of glass and then compare the results. As David Deutsch pointed out, this is overkill: you were asked only whether they were the same thickness, yet you made two measurements to answer a question that can be settled with one.
Quantum mechanics provides two ways to answer the question; using a double-slit apparatus or a Mach-Zehnder interferometer. They both operate on the same quantum principles. Using the double-slit apparatus you put a piece of glass behind each of the slits and shine light on the slits. If the resulting diffraction pattern is symmetrical about the center of the slits, the glasses are the same thickness. See the previous tutorial: Simulating the Deutsch-Jozsa Algorithm with a Double-Slit Apparatus.
Alternatively you could put a piece of glass in each arm of an equal-arm Mach-Zehnder interferometer (MZI). How this approach works is the subject of this tutorial. First we need to get acquainted with a MZI.
As shown in the following figure a MZI consists of a photon source, two 50-50 beam splitters, two mirrors and two detectors. The Appendix contains the mathematical information necessary to carry out a matrix mechanics analysis of the operation of the interferometer. The motional states of the photon are represented by vectors, while the beam splitters and mirrors are represented by matrices and operate on the vectors.
Yogi Berra has famously said "When you come to a fork in the road, take it." This is exactly what the photon does at a beam splitter. After the first beam splitter the photon, which was moving in the x-direction being emitted by the source, is now in a superposition of moving in both the x- and y-directions. It has been transmitted and reflected at the beam splitter. By convention a 90 degree ($\frac{\pi}{2}, i$) phase shift is assigned to reflection.
The following calculations illustrate the formation of the superposition state created by the photon's interaction with the first beam splitter.
$\mathrm{BS} \cdot \mathrm{x} \rightarrow \left( \begin{array}{c}{\frac{\sqrt{2}}{2}} \\ {\frac{\sqrt{2} \cdot \mathrm{i}}{2}}\end{array}\right) \qquad \mathrm{S}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{iR}) \qquad \frac{1}{\sqrt{2}} \cdot x+\frac{i}{\sqrt{2}} \cdot y \rightarrow \left( \begin{array}{c}{\frac{\sqrt{2}}{2}} \\ {\frac{\sqrt{2} \cdot i}{2}}\end{array}\right) \nonumber$
After the initial beam splitter, the mirrors direct the transmitted and reflected photon states to a second beam splitter where they are recombined. The consequence of this in an equal-arm MZI is that the photon is always registered at D1. There are two paths (histories) to each detector, and the amplitudes for these paths interfere. To reach D1 both paths experience one reflection and so arrive in phase with each other, each phase-shifted by 90 degrees. The paths to D2, however, are 180 degrees out of phase and destructively interfere. The photon is never detected at D2.
A photon entering the MZI in the x-direction exits in the x-direction phase-shifted by 90 degrees:
$\mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x} \rightarrow \left( \begin{array}{c}{\mathrm{i}} \\ {0}\end{array}\right) \qquad \mathrm{i} \cdot \mathrm{x} \rightarrow \left( \begin{array}{l}{\mathrm{i}} \\ {0}\end{array}\right) \nonumber$
The highlighted areas above next to the detectors show the matrix mechanics calculations for the probability of the photon being registered at D1 and D2. The equations are read from the right: a photon moving in the x-direction interacts with a beam splitter, a mirror and another beam splitter. This state is then projected onto x- and y-direction motion to determine which detector will register the photon. The projection is a probability amplitude; its absolute square is the probability.
Now we place the pieces of glass in the arms of the interferometer as shown below. The speed of light in glass is different from that in air, so glass causes a phase shift that depends on its thickness, as shown below. If the pieces of glass have the same thickness, $\delta$, they cause the same phase shift and the photon is detected at D1. However, if they have different thicknesses, the phase shifts will be different in the two arms of the interferometer. For example, if $\phi_x$ is $\frac{\pi}{2}$ and $\phi_y$ is $\frac{\pi}{4}$, then D2 will fire almost 15% of the time, indicating that the glasses are not the same thickness.
$\phi_{\mathrm{x}}=2 \cdot \pi \cdot \frac{\delta_{\mathrm{x}}}{\lambda} \qquad \phi_{\mathrm{x}} =\frac{\pi}{2} \qquad \phi_{\mathrm{y}}=2 \cdot \pi \cdot \frac{\delta_{\mathrm{y}}}{\lambda} \qquad \phi_{\mathrm{y}} =\frac{\pi}{4} \nonumber$
Appendix:
State Vectors
Photon moving horizontally:
$\mathbf{x} \equiv \left( \begin{array}{l}{1} \\ {0}\end{array}\right) \nonumber$
Photon moving vertically:
$\mathrm{y} \equiv \left( \begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
Operators
Operator representing a beam splitter:
$\mathrm{BS} \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {\mathrm{i}} \\ {\mathrm{i}} & {1}\end{array}\right) \nonumber$
Operator representing interaction with glass:
$A\left(\phi_{x}, \phi_{y}\right) \equiv \begin{pmatrix} e^{i \cdot \phi_{x}} & 0 \\ 0 & e^{i \cdot \phi_{y}} \end{pmatrix} \nonumber$
Operator representing a mirror:
$M \equiv \left( \begin{array}{ll}{0} & {1} \\ {1} & {0}\end{array}\right) \nonumber$
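With the operators above, the glass experiment can be checked numerically. The sketch below (NumPy in place of the Mathcad worksheet; the diagonal glass operator gives the same detector probabilities whether it is written before or after the mirror operator) reproduces the roughly 15% D2 firing rate quoted for $\phi_x = \frac{\pi}{2}$, $\phi_y = \frac{\pi}{4}$:

```python
import numpy as np

# Matrix mechanics for the MZI with glass in both arms.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50-50 beam splitter
M = np.array([[0, 1], [1, 0]])                   # mirror: swaps x and y motion
x = np.array([1, 0])
y = np.array([0, 1])

def A(phi_x, phi_y):
    # glass phase shifts accumulated in the two arms
    return np.diag([np.exp(1j * phi_x), np.exp(1j * phi_y)])

out = BS @ M @ A(np.pi / 2, np.pi / 4) @ BS @ x
p_D1 = abs(x @ out) ** 2
p_D2 = abs(y @ out) ** 2
# p_D2 = sin^2((phi_x - phi_y)/2) = sin^2(pi/8), about 0.146:
# D2 fires almost 15% of the time, so the glasses differ in thickness.
```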
Which Path Information and the Quantum Eraser
This tutorial examines the real reason which‐path information destroys the double‐slit diffraction pattern and how the so‐called "quantum eraser" restores it. The wave function for a photon illuminating the slit screen is written as a superposition of the photon being present at both slits simultaneously. The double‐slit diffraction pattern is calculated by projecting this superposition into momentum space. This is a Fourier transform for which the mathematical details can be found in the Appendix.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \qquad \Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle] \nonumber$
Attaching polarizers to the slits creates an entangled superposition of the photon being at slit 1 with vertical polarization and at slit 2 with horizontal polarization. This leads to the following momentum distribution at the detection screen. The interference fringes have disappeared leaving a single‐slit diffraction pattern.
$| \Psi^{\prime} \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle | V \rangle+| x_{2} \rangle | H \rangle ] \qquad \Psi^{\prime}(p)=\langle p | \Psi^{\prime}\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle | V\rangle+\langle p | x_{2}\rangle | H \rangle ] \nonumber$
The usual explanation for this effect is that it is now possible to know which slit the photons went through, and that such knowledge destroys the interference fringes because the photons are no longer in a superposition of passing through both slits, but rather a mixture of passing through one slit or the other.
However, a better explanation is that the superposition persists with orthogonal polarization tags, and because of this the interference (cross) terms in the momentum distribution, $|\Psi'(p)|^{2}$, vanish, leaving a pattern at the detection screen which is the sum of two single‐slit diffraction patterns, one from the upper slit and the other from the lower slit.
That this is a reasonable interpretation is confirmed when a so‐called quantum eraser, a polarizer (D) rotated clockwise by 45 degrees relative to the vertical, is placed before the detection screen.
$\Psi^{\prime \prime}(p)=\langle D | \Psi^{\prime}(p)\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle\langle D | V\rangle+\langle p | x_{2}\rangle\langle D | H\rangle]=\frac{1}{2}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle] \nonumber$
The diagonal polarizer is called a quantum eraser because it appears to restore the interference pattern lost because of the which‐path information provided by the V/H polarizers. However, it is clear from this analysis that the diagonal polarizer doesn't actually erase; it simply passes the diagonal component of $| \Psi' \rangle$, which then shows an attenuated (by half) version of the original interference pattern produced by $| \Psi \rangle$.
Placing an anti‐diagonal polarizer (rotated counterclockwise by 45 degrees relative to the vertical) before the detection screen causes a 180 degree phase shift in the restored interference pattern.
$\Psi^{\prime \prime}(p)=\langle A | \Psi^{\prime}(p)\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle\langle A | V\rangle+\langle p | x_{2}\rangle\langle A | H\rangle]=\frac{1}{2}[\langle p | x_{1}\rangle-\langle p | x_{2}\rangle] \nonumber$
This phase shift is inconsistent with any straightforward explanation based on the concept of erasure of which‐path information. Erasure implies removal of which‐path information. If which‐path information has been removed, shouldn't the original interference pattern be restored without a phase shift?
Appendix:
The V/H polarization which‐path tags and the D/A polarization "erasers" in vector format:
$| \mathrm{V} \rangle=\left( \begin{array}{l}{1} \\ {0}\end{array}\right) \qquad | \mathrm{H} \rangle=\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \qquad | \mathrm{D} \rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \\ {1}\end{array}\right) \qquad | \mathrm{A} \rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \\ {-1}\end{array}\right) \nonumber$

$\langle\mathrm{D} | \mathrm{V}\rangle=\langle\mathrm{D} | \mathrm{H}\rangle=\langle\mathrm{A} | \mathrm{V}\rangle=\frac{1}{\sqrt{2}} \qquad \langle\mathrm{A} | \mathrm{H}\rangle=-\frac{1}{\sqrt{2}} \nonumber$
For infinitesimally thin slits the momentum‐space wave function is
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)+\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{2}\right)\right] \nonumber$
Assuming a slit width $\delta$, the calculations of $\Psi(p)$, $\Psi'(p)$, $\Psi''(p)$ and $\Psi'''(p)$ are carried out as follows:
Position of first slit: $x_{1} \equiv 0$
Position of second slit: $x_{2} \equiv 1$
Slit width: $\delta \equiv 0.2$
$\Psi(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \left[ \int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x +\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \right] \nonumber$
For $\Psi'(p)$ the V/H polarization which‐path tags are added to the two terms of $\Psi(p)$:
$\Psi^{\prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \left[ \int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}}\, d x \cdot \left( \begin{array}{l}{1} \\ {0}\end{array}\right) +\int_{x_{2}-\frac{\delta}{2}}^{x_{2} +\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}}\, d x \cdot \left( \begin{array}{l}{0} \\ {1}\end{array}\right) \right] \nonumber$
$\Psi''(p)$ is the projection of $\Psi'(p)$ onto the diagonal polarizer $\langle D |$.
$\Psi^{\prime \prime}(p) \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \\ {1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
$\Psi'''(p)$ is the projection of $\Psi'(p)$ onto the anti‐diagonal polarizer $\langle A |$.
$\Psi^{\prime \prime \prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \\ {-1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
Rewriting $\Psi'(p)$ in terms of $|D\rangle$ and $|A\rangle$ clearly shows the origin of the phase difference between the $|\Psi''(p)|^{2}$ and $|\Psi'''(p)|^{2}$ interference patterns.
$\Psi^{\prime}(p)=\langle p | \Psi^{\prime}\rangle=\frac{1}{2}[(\langle p | x_{1}\rangle+\langle p | x_{2}\rangle) | D\rangle+(\langle p | x_{1}\rangle-\langle p | x_{2}\rangle) | A \rangle ] \nonumber$
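The D/A decomposition above can be checked numerically. This sketch (NumPy in place of the original worksheet, with the slit integrals evaluated in closed form as a common sinc envelope times a phase factor) shows that the D and A fringe patterns are complementary and together rebuild the fringeless tagged pattern:

```python
import numpy as np

# Quantum-eraser momentum distributions for slits at x1, x2 with width delta.
x1, x2, delta = 0.0, 1.0, 0.2
p = np.linspace(-20.0, 20.0, 1601)
env = delta * np.sinc(p * delta / (2 * np.pi)) / np.sqrt(2 * np.pi * delta)
a1 = env * np.exp(-1j * p * x1)   # <p|x1>, slit 1 (V tag)
a2 = env * np.exp(-1j * p * x2)   # <p|x2>, slit 2 (H tag)

P_tagged = (np.abs(a1)**2 + np.abs(a2)**2) / 2   # orthogonal tags: no cross term
P_D = np.abs((a1 + a2) / 2)**2    # <D|V> = <D|H> = 1/sqrt(2)
P_A = np.abs((a1 - a2) / 2)**2    # <A|V> = 1/sqrt(2), <A|H> = -1/sqrt(2)

# The D and A fringes are 180 degrees out of phase; summed, they
# reconstruct the sum of two single-slit patterns.
assert np.allclose(P_D + P_A, P_tagged)
```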
A Brief Analysis of Deutsch's Problem
A certain function of x maps {0,1} to {0,1}. The four possible outcomes of the evaluation of f(x) are given in tabular form.
$\begin{array}{c|cc|cc|cc|cc} x & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ \hline f(x) & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \end{array} \nonumber$
The circuit shown below yields the result given in the rightmost section of the table. In other words, f(x) is a balanced function, because $f(0) \neq f(1)$, as is the result immediately to its left. The results in the first two sections are labelled constant because $f(0) = f(1)$.
where
$| 0 \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad | 1 \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
From the classical perspective, if the question (as asked by Deutsch) is whether f(x) is constant or balanced then one must calculate both f(0) and f(1) to answer the question. Deutsch pointed out that quantum superpositions and the interference effects between them allow the answer to be given with one pass through the following modified version of the circuit.
$\begin{array}{ccccccccc} | 0 \rangle & \rhd & \mathrm{H} & \cdots & \bullet & \cdots & \mathrm{H} & \rhd & \text{measure: } | 0 \rangle \text{ constant, } | 1 \rangle \text{ balanced} \\ \; & \; & \; & \; & | & \; & \; & \; & \; \\ | 1 \rangle & \rhd & \mathrm{H} & \cdots & \oplus & \mathrm{NOT} & \cdots & \; & \; \end{array} \nonumber$
An algebraic analysis of the operation of Deutsch's algorithm is now provided. Truth tables for H, CNOT, NOT and the identity are provided in the Appendix.
$\begin{array}{c} | 0 \rangle | 1 \rangle=| 01 \rangle \\ \downarrow \; \mathrm{H} \otimes \mathrm{H} \\ \frac{1}{\sqrt{2}}[ |0\rangle+| 1 \rangle ] \frac{1}{\sqrt{2}}[ |0\rangle-| 1 \rangle ]=\frac{1}{2}[ |00\rangle-| 01 \rangle+| 10 \rangle-| 11 \rangle ] \\ \downarrow \; \mathrm{CNOT} \\ \frac{1}{2}[ |00\rangle-| 01 \rangle+| 11 \rangle-| 10 \rangle ]=\frac{1}{2}( |0\rangle-| 1 \rangle )( |0\rangle-| 1 \rangle ) \\ \downarrow \; \mathrm{I} \otimes \mathrm{NOT} \\ \frac{1}{2}[( |0\rangle-| 1\rangle )( |1\rangle-| 0 \rangle ) ] \\ \downarrow \; \mathrm{H} \otimes \mathrm{I} \\ | 1 \rangle \frac{1}{\sqrt{2}}( |1\rangle-| 0 \rangle ) \end{array} \nonumber$
The top wire contains $| 1 \rangle$, indicating the function is balanced.
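The algebraic steps can be verified with a small matrix simulation (a sketch, not the author's worksheet; the balanced function f(0) = 0, f(1) = 1 is realized by CNOT followed by NOT on the bottom wire, as in the circuit):

```python
import numpy as np

# Deutsch's circuit: (H x I)(I x NOT)(CNOT)(H x H) acting on |0>|1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])                         # NOT
CNOT = np.array([[1, 0, 0, 0],                         # basis |00>,|01>,|10>,|11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
inp = np.kron(ket0, ket1)                              # |0>|1>

out = np.kron(H, I) @ np.kron(I, X) @ CNOT @ np.kron(H, H) @ inp

# Probability that the top wire reads |1> (i.e. amplitude on |10> and |11>):
p_top1 = abs(np.kron(ket1, ket0) @ out)**2 + abs(np.kron(ket1, ket1) @ out)**2
assert abs(p_top1 - 1) < 1e-9   # top wire is |1> with certainty: balanced
```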
Appendix:
Identity
$\left( \begin{array}{lll}{0} & {\text { to }} & {0} \\ {1} & {\text { to }} & {1}\end{array}\right) \nonumber$
NOT
$\left( \begin{array}{lll}{0} & {\text { to }} & {1} \\ {1} & {\text { to }} & {0}\end{array}\right) \nonumber$
CNOT
$\left( \begin{array}{ccc}{00} & {\text { to }} & {00} \\ {01} & {\text { to }} & {01} \\ {10} & {\text { to }} & {11} \\ {11} & {\text { to }} & {10}\end{array}\right) \nonumber$
Hadamard operation
$\begin{array}{ccccc} 0 & \xrightarrow{\mathrm{H}} & \frac{1}{\sqrt{2}} \cdot (0+1) & \xrightarrow{\mathrm{H}} & 0 \\ 1 & \xrightarrow{\mathrm{H}} & \frac{1}{\sqrt{2}} \cdot (0-1) & \xrightarrow{\mathrm{H}} & 1 \end{array} \nonumber$
The Hadamard operation is a simple example of a discrete Fourier transform. In other words, the final step of Deutsch's algorithm is to carry out a Fourier transform on the input wire. This also occurs on the input wires in Grover's search algorithm, Simon's query algorithm and Shor's factorization algorithm. These are other types of calculations in which quantum superpositions and interference effects enable a quantum computer to outperform a classical computer.
The Mach-Zehnder interferometer introduced above can be used to illuminate several contentious issues in quantum mechanics. The sub-microscopic building blocks of the natural world (electrons, protons, neutrons and photons) do not behave like the macroscopic objects we encounter in daily life because they have both wave and particle characteristics. Nick Herbert (Quantum Reality, p. 64) called them quons. “A quon is any entity … that exhibits both wave and particle aspects in the peculiar quantum manner.” The ‘peculiar quantum manner’ is that while we always observe particles, prior to measurement or observation quons behave like waves. This peculiar behavior is illustrated in the following tutorial.
Using a Mach-Zehnder Interferometer to Illustrate Feynman's Sum Over Histories Approach to Quantum Mechanics
Thirty-one years ago Dick Feynman told me about his 'sum over histories' version of quantum mechanics. "The electron does anything it likes," he said. "It just goes in any direction, at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave function." I said to him "You're crazy." But he isn't. Freeman Dyson, 1980.
In Volume 3 of the celebrated Feynman Lectures on Physics, Feynman uses the double-slit experiment as the paradigm for his 'sum over histories' approach to quantum mechanics. He said that any question in quantum mechanics could be answered by responding, "You remember the experiment with the two holes? It's the same thing." And, of course, he's right.
A 'sum over histories' is a superposition of probability amplitudes for the possible experimental outcomes, which in quantum mechanics carry phase and therefore interfere constructively and destructively with one another. The square of the magnitude of the superposition of histories yields the probabilities that the various experimental possibilities will be observed.
Obviously it takes a minimum of two 'histories' to demonstrate the interference inherent in the quantum mechanical superposition, and that's why Feynman chose the double-slit experiment as the paradigm for quantum mechanical behavior. The two slits provide two paths, or 'histories', to any destination on the detection screen. In this tutorial a close cousin of the double-slit experiment, single particle interference in a Mach-Zehnder interferometer, will be used to illustrate Feynman's 'sum over histories' approach to quantum mechanics.
A Beam Splitter Creates a Quantum Mechanical Superposition
Single photons emitted by a source (S) illuminate a 50-50 beam splitter (BS). Mirrors (M) direct the photons to detectors D1 and D2. The probability amplitudes for transmission and reflection are given below. By convention a 90 degree phase shift (i) is assigned to reflection.
Probability amplitude for photon transmission at a 50-50 beam splitter:
$\langle T | S\rangle=\frac{1}{\sqrt{2}} \nonumber$
Probability amplitude for photon reflection at a 50-50 beam splitter:
$\langle R | S\rangle=\frac{i}{\sqrt{2}} \nonumber$
After the beam splitter the photon is in a superposition state of being transmitted and reflected.
$| S \rangle \rightarrow \frac{1}{\sqrt{2}} \left[ |T\rangle+ i | R \rangle \right] \nonumber$
As shown in the diagram below, mirrors reflect the transmitted photon path to D2 and the reflected path to D1. The source photon is expressed in the basis of the detectors as follows.
$| S \rangle \rightarrow \frac{1}{\sqrt{2}}[ |T\rangle+ i | R \rangle ] \xrightarrow[R \rightarrow D_{1}]{T \rightarrow D_{2}} \frac{1}{\sqrt{2}}[ |D_{2}\rangle+ i | D_{1} \rangle ] \nonumber$
The square of the magnitude of the coefficients of D1 and D2 give the probabilities that the photon will be detected at D1 or D2. Each detector registers photons 50% of the time. In other words, in the quantum view the superposition collapses randomly to one of the two possible measurement outcomes it represents.
The classical view that detection at D1 means the photon was reflected at BS1 and that detection at D2 means it was transmitted at BS1 is not tenable as will be shown using a Mach-Zehnder interferometer which has a second beam splitter at the path intersection before the detectors.
A Second Beam Splitter Provides Two Paths to Each Detector
If a second beam splitter is inserted before the detectors the photons always arrive at D1. In the first experiment there was only one path to each detector. The construction of a Mach-Zehnder interferometer by the insertion of a second beam splitter creates a second path to each detector and the opportunity for constructive and destructive interference on the paths to the detectors.
Given the superposition state after BS1, the probability amplitudes after BS2 interfere constructively at D1 and destructively at D2.
$\begin{matrix} \text{After BS}_{1} & \text{After BS}_{2} & \text{Final State} \\ \; & | T \rangle \rightarrow \frac{1}{\sqrt{2}}[i | D_{1}\rangle+| D_{2} \rangle ] & \; \\ | S \rangle \rightarrow \frac{1}{\sqrt{2}}[ |T\rangle+ i | R \rangle ] \rightarrow & + & \rightarrow \quad i | D_{1} \rangle \\ \; & i | R \rangle \rightarrow \frac{i}{\sqrt{2}}[ |D_{1}\rangle+ i | D_{2} \rangle ] & \; \end{matrix} \nonumber$
Adopting the classical view that the photon is either transmitted or reflected at BS1 does not produce this result. If the photon was transmitted at BS1 it would have equal probability of arriving at either detector after BS2. If the photon was reflected at BS1 it would also have equal probability of arriving at either detector after BS2. The predicted experimental results would be the same as those of the single beam splitter experiment. In summary, the quantum view that the photon is in a superposition of being transmitted and reflected after BS1 is consistent with both experimental results described above; the classical view that it is either transmitted or reflected is not.
Some disagree with this analysis saying the two experiments demonstrate the dual, complementary, behavior of photons. In the first experiment particle-like behavior is observed because both detectors register photons indicating the individual photons took one path or the other. The second experiment reveals wave-like behavior because interference occurs - only D1 registers photons. According to this view the experimental design determines whether wave or particle behavior will occur and somehow the photon is aware of how it should behave. Suppose in the second experiment that immediately after the photon has interacted with BS1, BS2 is removed. Does what happens at the detectors require the phenomenon of retrocausality or delayed choice? Only if you reason classically about quantum experiments.
We always measure particles (detectors click, photographic film is darkened, etc.) but we interpret what happened or predict what will happen by assuming wavelike behavior, in this case the superposition created by the initial beam splitter that delocalizes the position of the photon. Quantum particles (quons) exhibit both wave and particle properties in every experiment. To paraphrase Nick Herbert (Quantum Reality), particles are always detected, but the experimental results observed are the result of wavelike behavior. Richard Feynman put it this way (The Character of Physical Law), "I will summarize, then, by saying that electrons arrive in lumps, like particles, but the probability of arrival of these lumps is determined as the intensity of waves would be. It is in this sense that the electron behaves sometimes like a particle and sometimes like a wave. It behaves in two different ways at the same time (in the same experiment)." Bragg said, "Everything in the future is a wave, everything in the past is a particle."
In 1951 in his treatise Quantum Theory, David Bohm described wave-particle duality as follows: "One of the most characteristic features of the quantum theory is the wave-particle duality, i.e. the ability of matter or light quanta to demonstrate the wave-like property of interference, and yet to appear subsequently in the form of localizable particles, even after such interference has taken place." In other words, to explain interference phenomena wave properties must be assigned to matter and light quanta prior to detection as particles.
Matrix Mechanics Approach
As a companion analysis, the matrix mechanics approach to single-photon interference in a Mach-Zehnder interferometer is outlined next.
State Vectors
Photon moving horizontally:
$\mathrm{x} :=\left( \begin{array}{l}{1} \\ {0}\end{array}\right) \nonumber$
Photon moving vertically:
$\mathrm{y} :=\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
Operators
Operator representing a beam splitter:
$\mathrm{BS} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {\mathrm{i}} \\ {\mathrm{i}} & {1}\end{array}\right) \nonumber$
Operator representing a mirror:
$\mathrm{M} :=\left( \begin{array}{ll}{0} & {1} \\ {1} & {0}\end{array}\right) \nonumber$
Single beam splitter example:
Reading from right to left.
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter and a mirror will be detected at D1. $\left(\left|x^{\mathrm{T}} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}\right|\right)^{2}=0.5$
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter and a mirror will be detected at D2. $\left(\left|y^{\mathrm{T}} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}\right|\right)^{2}=0.5$
Two beam splitter example (MZI):
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter, a mirror and another beam splitter will be detected at D1. $\left(\left|\mathrm{x}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}\right|\right)^{2}=1$
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter, a mirror and another beam splitter will be detected at D2. $\left(\left|y^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}\right|\right)^{2}=0$
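The four quoted probabilities can be reproduced with a few lines of NumPy (a sketch standing in for the Mathcad calculations above):

```python
import numpy as np

# Matrix mechanics for one vs. two beam splitters, read right to left.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
M = np.array([[0, 1], [1, 0]])
x = np.array([1, 0])
y = np.array([0, 1])

one_bs = M @ BS @ x        # single beam splitter, then mirror
two_bs = BS @ M @ BS @ x   # full Mach-Zehnder interferometer

# One beam splitter: each detector fires half the time.
# Two beam splitters: D1 always fires, D2 never does.
p = (abs(x @ one_bs)**2, abs(y @ one_bs)**2,
     abs(x @ two_bs)**2, abs(y @ two_bs)**2)
```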
Some quantum physicists believe they can isolate wave and particle behavior in a properly designed experiment. They also invoke the concept of delayed choice. I disagree with both in the next tutorial, but first some remarks by John Wheeler.
John Wheeler, designer of several delayed-choice experiments (both terrestrial and cosmological), had the following to say about the interpretation of such experiments.
... in a loose way of speaking, we decide what the photon shall have done after it has already done it. In actuality it is wrong to talk of the 'route' of the photon. For a proper way of speaking we recall once more that it makes no sense to talk of a phenomenon until it has been brought to a close by an irreversible act of amplification. 'No elementary phenomenon is a phenomenon until it is a registered (observed) phenomenon.'
A Quantum Delayed-Choice Experiment?
This note presents a critique of "Entanglement-Enabled Delayed-Choice Experiment," F. Kaiser, et al. Science 338, 637 (2012). This experiment was also summarized in section 6.3 of Quantum Weirdness, by William Mullin, Oxford University Press, 2017.
A source, S, emits two photons in opposite directions on the x-axis in the following polarization state, where v and h represent vertical and horizontal polarization, respectively.
$| \Psi \rangle_{A B}=\frac{1}{\sqrt{2}}[ |x v \rangle_{A} | x v \rangle_{B}+| x h \rangle_{A} | x h \rangle_{B} ] \nonumber$
Photon B travels to the left to a polarizing beam splitter, PBS. Photon A travels to the right entering an interferometer whose elements are a beam splitter (BS), two mirrors (M), and a polarization-dependent beam splitter (PDBS).
The implementation of the PDBS is shown here.
The optical elements operate as follows. A mirror simply reflects the photon's direction of motion.
$M=| y \rangle\langle x|+| x\rangle\langle y| \nonumber$
A 50-50 BS splits the photon beam into a superposition of motion in the x- and y-directions. By convention the reflected beam collects a pi/2 (i) phase shift relative to the transmitted beam.
$B S=\frac{ |x \rangle+i | y\rangle}{\sqrt{2}}\langle x|+\frac{i | x \rangle+| y\rangle}{\sqrt{2}}\langle y|=\frac{1}{\sqrt{2}}( |x\rangle\langle x|+i| y\rangle\langle x|+i| x\rangle\langle y|+| y\rangle\langle y|) \nonumber$
A PBS transmits vertically polarized photons and reflects horizontally polarized photons.
$P B S=| x v \rangle\langle x v|+| y h\rangle\langle x h|+| y v\rangle\langle y v|+| x h\rangle\langle y h| \nonumber$
The PDBS uses an initial PBS to reflect horizontally polarized photons to a second PBS which reflects them to the detectors. Vertically polarized photons are transmitted by the first PBS to a BS which has the action shown above, after which they are transmitted to the detectors by the second PBS. PBS/PDBS blue/red color coding highlights the action of the central BS.
$P D B S=\frac{ |x v \rangle+i | y v\rangle}{\sqrt{2}}\langle x v|+| y h\rangle\langle x h|+\frac{i | x v \rangle+| y v\rangle}{\sqrt{2}}\langle y v|+| x h\rangle\langle y h| \nonumber$
Given the state produced by the source, half the time a horizontal photon will enter the interferometer and half the time a vertical photon will enter. The following algebraic analysis shows the progress of the h- and v-polarized photons entering the interferometer. It is clear from this analysis that D1 will fire 25% of the time and D2 75% of the time.
$\begin{matrix} | xh \rangle & | x v \rangle \ BS & BS \ \frac{ |x h \rangle+i | y h\rangle}{\sqrt{2}} & \frac{ |x v \rangle+i | y v\rangle}{\sqrt{2}} \ M & M \ \frac{ |y h \rangle+i | x h\rangle}{\sqrt{2}} & \frac{ |y v \rangle+i | x v\rangle}{\sqrt{2}} \ PDBS & PDBS \ \frac{ |D 2 \rangle | h \rangle+i | D 1 \rangle | h\rangle}{\sqrt{2}} & \frac{1}{\sqrt{2}}\left[\frac{ |D 1 \rangle | v \rangle+i | D 2 \rangle | v\rangle}{\sqrt{2}}+\frac{i(i | D 1\rangle | v \rangle+| D 2 \rangle | v\rangle )}{\sqrt{2}}\right] \ \; & \downarrow \ \; & i | D 2 \rangle | v \rangle \end{matrix} \nonumber$
This analysis suggests to some that inside the interferometer h-photons behave like particles and v-photons behave like waves. The argument for this view is that interference occurs at the PDBS for v-photons, but not for h-photons. However, in both cases the state illuminating the PDBS, highlighted in blue, is a superposition of the photon being in both arms of the interferometer. In my opinion this superposition implies delocalization which implies wavelike behavior. At the PDBS the h-photon superposition is reflected away from the central BS to D1 and D2, leading to the final superposition in the left-hand column above, which collapses on observation to either D1 or D2. The v-photon superposition is transmitted at the PDBS to the central BS allowing for destructive interference at D1 and constructive interference at D2 as is shown at the bottom of the right-hand column.
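The detector statistics quoted above (D1 fires 25% of the time, D2 75%) can be verified by building the optical elements in the direction⊗polarization product space. The Python/NumPy sketch below constructs the PDBS as a polarization-controlled beam splitter, which is an equivalent form of the operator defined earlier; the variable names are my own:

```python
import numpy as np

kron = np.kron
xd, yd = np.eye(2, dtype=complex)   # direction basis: x, y
v, h = np.eye(2, dtype=complex)     # polarization basis: vertical, horizontal

I2 = np.eye(2, dtype=complex)
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50-50 beam splitter
M = np.array([[0, 1], [1, 0]], dtype=complex)    # mirror: x <-> y

# PDBS: v-polarized light sees a 50-50 BS, h-polarized light is simply reflected
PDBS = kron(BS, np.outer(v, v)) + kron(M, np.outer(h, h))

interferometer = PDBS @ kron(M, I2) @ kron(BS, I2)

out_h = interferometer @ kron(xd, h)   # h photon entering along x
out_v = interferometer @ kron(xd, v)   # v photon entering along x

# D1 sits on the y output port, D2 on the x output port
def prob(direction, state):
    return abs(kron(direction, v) @ state)**2 + abs(kron(direction, h) @ state)**2

p_D1 = 0.5 * (prob(yd, out_h) + prob(yd, out_v))   # 0.25
p_D2 = 0.5 * (prob(xd, out_h) + prob(xd, out_v))   # 0.75
```

The v photon exits entirely at D2 (amplitude i), reproducing the interference shown in the right-hand column of the algebraic analysis.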
Those who interpret this experiment in terms of particle or wave behavior also invoke the concept of delayed-choice, claiming that if D2 fires we don't know for sure which behavior has occurred because both h and v photons can arrive there. They argue that until photon B has been observed at Dv or Dh, which by design can be long after photon A has exited the interferometer, the polarization of the photon detected at D2 is unknown and therefore so is whether particle or wave behavior has occurred. These analysts write the final two photon wavefunction as the following entangled superposition, where particle behavior is highlighted in red and wave behavior in blue.
$\begin{split}| \Psi \rangle_{\text {final}}&=\frac{1}{\sqrt{2}} \left(\left(\frac{ |D 2, h \rangle+i | D 1, h\rangle}{\sqrt{2}}\right)_{A} | D h\rangle_{B}+i | D 2, v \rangle_{A} | D v \rangle_{B}\right) \\ &=\frac{1}{\sqrt{2}}( |\text{Particle}\rangle_{A} | D h \rangle_{B}+| \text{Wave} \rangle_{A} | D v \rangle_{B} ) \end{split} \nonumber$
For the reasons expressed above I do not find this interpretation convincing. We always observe particles (detectors click, photographic film is darkened, etc.), but we interpret what happened or predict what will happen by assuming wavelike behavior. In other words, objects governed by quantum mechanical principles (quons) exhibit both wave and particle properties in every experiment. To paraphrase Nick Herbert (Quantum Reality), particles are always detected, but the experimental results observed are the result of wavelike behavior. Bragg summarized wave-particle duality saying, "Everything in the future is a wave, everything in the past is a particle."
In summary, I accept the Copenhagen interpretation of quantum mechanics for the reasons so cogently stated by David Lindley on page 164 of Where Does The Weirdness Go?
And since none of the other “interpretations” of quantum mechanics that we have looked at has brought us any real peace of mind, they simply push the weirdness around, from one place to another, but cannot make it go away – let us stick with the Copenhagen interpretation, which has the virtues of simplicity and necessity. It takes quantum mechanics seriously, takes its weird aspects at face value, and provides an economical, austere, perhaps even antiseptic, account of them.
We now turn to some true quantum weirdness. The following tutorial uses a Mach-Zehnder interferometer (MZI) and Feynman’s sum over histories approach to demonstrate interaction-free measurement, or how to see in the dark.
Interaction Free Measurement: Seeing in the Dark
The illustration of the concept of interaction-free measurement requires the use of an interferometer. A simple illustration employs a Mach-Zehnder interferometer (MZI) like the one shown here.
This equal-arm MZI consists of two 50-50 beam splitters (BS1, BS2), two mirrors (M) and two detectors (D1, D2). A source emits a photon which interacts with BS1 producing the following superposition. (By convention a 90 degree (i) phase shift is assigned to reflection.)
$\mathrm{S}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{i} \cdot \mathrm{R}) \tag{1} \nonumber$
The transmitted and reflected branches are united at BS2 by the mirrors, where they evolve into the following superpositions in the basis of the detectors.
$\mathrm{T}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{i} \cdot \mathrm{D}_{1}+\mathrm{D}_{2}\right) \tag{2} \nonumber$
$\mathrm{R}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{D}_{1}+\mathrm{i} \cdot \mathrm{D}_{2}\right) \tag{3} \nonumber$
Substitution of 2 and 3 into 1 reveals that the output photon is always registered at D1. There are two paths to each detector and constructive interference occurs at D1 and destructive interference at D2.
$\mathrm{S}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{i} \cdot \mathrm{R}) \Bigg|^{\text { substitute, } \mathrm{T}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{i} \cdot \mathrm{D}_{1}+\mathrm{D}_{2}\right)}_{\text { substitute, } \mathrm{R}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{D}_{1}+\mathrm{i} \cdot \mathrm{D}_{2}\right)}\rightarrow \mathrm{S}=\mathrm{D}_{1} \cdot \mathrm{i} \nonumber$
Probability at D1:
$(|i|)^{2}=1 \nonumber$
The MZI provides a rudimentary method of determining whether an obstruction is present in its upper arm without actually interacting with it. As we shall see, it is not an efficient method, but it does clearly illustrate the principle involved which then can be used in a more elaborate and sophisticated interferometer to yield better results.
In the presence of the obstruction equation 2 becomes $T = \gamma_{Absorbed}$. This leads to the following result at the detectors.
$\mathrm{s}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{i} \cdot \mathrm{R}) \Bigg|^{\text { substitute, } \mathrm{T}=\gamma_{\mathrm{Absorbed}}}_{\text { substitute, } \mathrm{R}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{D}_{1}+\mathrm{i} \cdot \mathrm{D}_{2}\right)} \rightarrow \mathrm{S}=\frac{\sqrt{2} \cdot \gamma_{\mathrm{Absorbed}}}{2}-\frac{\mathrm{D}_{2}}{2}+\frac{\mathrm{D}_{1} \mathrm{i}}{2} \nonumber$
Quantum mechanics predicts that for a large number of experiments 50% of the photons will be absorbed by the obstruction, 25% will be detected at D1 and 25% will be detected at D2. This latter result is the signature of interaction-free measurement. Even if the photon is not absorbed, the mere presence of the obstruction causes the probability of detection at D2 to go from zero to 25%. The photon's arrival at D2 signals the presence of an obstruction in the upper arm of the MZI, and the obstruction is detected without an interaction.
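These three probabilities follow from simple amplitude arithmetic, sketched here in Python (T and R are the transmission and reflection amplitudes of a 50-50 beam splitter; the names are mine):

```python
import numpy as np

T, R = 1/np.sqrt(2), 1j/np.sqrt(2)   # amplitudes at each 50-50 beam splitter

p_absorbed = abs(T)**2     # 0.50: transmitted at BS1, absorbed by the obstruction
p_D1 = abs(R * T)**2       # 0.25: reflected at BS1, transmitted at BS2
p_D2 = abs(R * R)**2       # 0.25: reflected at BS1 and BS2, the IFM signature
```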
Of course, 25% is not great efficiency, so this is "a proof of principle" example. However, with a little ingenuity the probability of interaction-free detection can be increased dramatically. To see how this can be accomplished read "Quantum Seeing in the Dark" by Kwiat, Weinfurter and Zeilinger in the November 1996 issue of Scientific American.
However, it is possible to improve performance significantly by using a system of nested MZIs which is only slightly more complicated than the simple MZI used earlier. To simplify analysis Feynman's "sum over histories" approach to quantum mechanics will be used. The probability amplitudes for transmission and reflection at the beam splitters are required.
$\mathrm{T} :=\frac{1}{\sqrt{2}} \qquad \mathrm{R} :=\frac{\mathrm{i}}{\sqrt{2}} \nonumber$
Placing an additional BS before the original MZI, placing another BS in front of the original D2 (which is renamed D3), and adding a mirror and a new detector D2 yields a nested interferometer configuration that significantly increases the probability of interaction-free detection of the obstruction.
To interact with the obstruction a photon must be transmitted at the first and second beam splitters. In this case there is only one history and the probability of the interaction occurring is the square of its absolute magnitude.
$(|\mathrm{T} \cdot \mathrm{T}|)^{2} \rightarrow \frac{1}{4}=25 \% \nonumber$
The probabilities of detectors 1, 2 and 3 firing are calculated using the same methodology.
The probability D1 registers a photon:
$(|\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{T}|)^{2} \rightarrow \frac{1}{8}=12.5 \% \nonumber$
The probability D2 registers a photon:
$(|\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{R}+\mathrm{R} \cdot \mathrm{T}|)^{2} \rightarrow \frac{1}{16}=6.25 \% \nonumber$
The probability D3 registers a photon:
$(|\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{T}+\mathrm{R} \cdot \mathrm{R}|)^{2} \rightarrow \frac{9}{16}=56.25 \% \nonumber$
With the modified interferometer, the probability of detecting the presence of the obstruction without interacting with it increases from 25% to 56.25%.
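The four sum-over-histories calculations can be collected in a few lines of Python; this is a direct transcription of the amplitudes above, with variable names of my choosing:

```python
import numpy as np

T, R = 1/np.sqrt(2), 1j/np.sqrt(2)   # transmission and reflection amplitudes

p_absorbed = abs(T * T)**2                 # 1/4: transmitted at BS1 and BS2
p_D1 = abs(T * R * T)**2                   # 1/8
p_D2 = abs(T * R * R * R + R * T)**2       # 1/16: two interfering histories
p_D3 = abs(T * R * R * T + R * R)**2       # 9/16: interaction-free detection
total = p_absorbed + p_D1 + p_D2 + p_D3    # 1.0, probabilities are exhaustive
```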
Perhaps weirder is the quantum Cheshire cat. The following tutorial shows how a MZI can be engineered to create a situation in which a photon is separated from a property, in this case its angular momentum.
A Quantum Optical Cheshire Cat
The following is a summary of "Quantum Cheshire Cats" by Aharonov, Popescu, Rohrlich and Skrzypczyk which was published in the New Journal of Physics 15, 113015 (2013) and can also be accessed at: arXiv:1202.0631v2.
In the absence of the half-wave plate (HWP) and the phase shifter (PS) a horizontally polarized photon entering the interferometer from the lower left (propagating to the upper right) arrives at D2 with a 90 degree ($\frac{\pi}{2}, i$) phase shift. (By convention reflection at a beam splitter introduces a 90 degree phase shift.)
$| R \rangle | H \rangle \stackrel{BS_{1}}{\longrightarrow} \frac{1}{\sqrt{2}}[i | L\rangle+| R \rangle ] | H \rangle \stackrel{BS_{2}}{\longrightarrow} i | D_{2} \rangle | H \rangle \nonumber$
The state immediately after the first beam splitter is the pre-selected state.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[i | L\rangle+| R \rangle ] | H \rangle \nonumber$
The post-selected state is,
$| \Phi \rangle=\frac{1}{\sqrt{2}}[ |L\rangle | H \rangle+| R \rangle | V \rangle ] \nonumber$
The HWP (converts |V> to |H> in the R-branch) and PS transform this state to,
$| \Phi \rangle \xrightarrow[P S]{H W P}\frac{1}{\sqrt{2}}[ |L\rangle+ i | R \rangle ] | H \rangle \nonumber$
which exits the second beam splitter through the left port to encounter a polarizing beam splitter which transmits horizontal polarization and reflects vertical polarization. Thus, the post-selected state is detected at D1. The evolution of the post-selected state is summarized as follows:
$| \Phi \rangle=\frac{1}{\sqrt{2}}[ |L\rangle | H \rangle+| R \rangle | V \rangle ] \xrightarrow[P S]{H W P} \frac{1}{\sqrt{2}}[ |L\rangle+ i | R \rangle ] | H \rangle \xrightarrow{B S_{2}} i | L \rangle | H \rangle \xrightarrow{P B S} i | D_{1} \rangle | H \rangle \nonumber$
The last term on the right side below is the weak value of A multiplied by the probability of its occurrence for the preselected state $\Psi$ and the post-selected state $\Phi$.
$\langle\Psi|\widehat{\mathrm{A}}| \Psi\rangle=\sum_{j}\langle\Psi | \Phi_{j}\rangle\left\langle\Phi_{j}|\hat{\mathrm{A}}| \Psi\right\rangle=\sum_{j}\langle\Psi | \Phi_{j}\rangle\left\langle\Phi_{j} | \Psi\right\rangle \frac{\left\langle\Phi_{j}|\hat{\mathbf{A}}| \Psi\right\rangle}{\left\langle\Phi_{j} | \Psi\right\rangle}=\sum_{j} p_{j} \frac{\left\langle\Phi_{j}|\hat{\mathbf{A}}| \Psi\right\rangle}{\left\langle\Phi_{j} | \Psi\right\rangle} \nonumber$
The weak value calculations are carried out in a 4-dimensional Hilbert space created by the tensor product of the photon's direction of propagation and polarization vectors.
Direction of propagation vectors:
$\mathrm{L} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \mathrm{R} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
Polarization state vectors:
$\mathrm{H} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \mathrm{V} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
Pre-selected state:
$\Psi=\frac{1}{\sqrt{2}} \cdot(i \cdot L+R) \cdot H=\frac{1}{\sqrt{2}} \cdot\left[i \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{i} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{i} \ {0} \ {1} \ {0}\end{array}\right) \nonumber$
Post-selected state:
$\Phi :=\frac{1}{\sqrt{2}} \cdot(\mathrm{L} \cdot \mathrm{H}+\mathrm{R} \cdot \mathrm{V})=\frac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \qquad \Phi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
Direction of propagation operators:
$\text{Left}:=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{ll}{1} & {0}\end{array}\right) \rightarrow \left( \begin{array}{ll}{1} & {0} \ {0} & {0}\end{array}\right) \qquad \text{Right}:=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{cc}{0} & {1}\end{array}\right) \rightarrow \left( \begin{array}{ll}{0} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Photon angular momentum operator:
$\operatorname{Pang} :=\left( \begin{array}{cc}{0} & {-i} \ {i} & {0}\end{array}\right) \nonumber$
Identity operator:
$\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
The following weak value calculations show that for the pre- and post-selection ensemble of observations the photon is in the left arm of the interferometer while its angular momentum is in the right arm. Like the case of the Cheshire cat, a photon property has been separated from the photon.
$\begin{pmatrix} \; & \text{“Left Arm''} & \text{“Right Arm''} \ \text{ “Arm''} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Left}, \mathrm{I}) \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Right}, \mathrm{I}) \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} \ \text{ “Pang''} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker (Left, Pang) } \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker (Right, Pang) } \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} \end{pmatrix}= \begin{pmatrix} \; & \text{ “Left Arm''} & \text{ “Right Arm''} \ \text{ “Arm''} & 1 & 0 \ \text{ “Pang''} & 0 & 1 \end{pmatrix} \nonumber$
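The weak-value table can be reproduced in Python with NumPy. The sketch below follows the Mathcad layout; the function name `weak` and the variable names are mine:

```python
import numpy as np

kron = np.kron
L, R = np.eye(2, dtype=complex)     # path basis: left arm, right arm
H, V = np.eye(2, dtype=complex)     # polarization basis

I = np.eye(2, dtype=complex)
Left, Right = np.outer(L, L), np.outer(R, R)   # arm projectors
Pang = np.array([[0, -1j], [1j, 0]])           # photon angular momentum operator

psi = kron(1j * L + R, H) / np.sqrt(2)         # pre-selected state
phi = (kron(L, H) + kron(R, V)) / np.sqrt(2)   # post-selected state (real)

def weak(op):
    # weak value <phi|op|psi>/<phi|psi>; phi is real so the transpose suffices
    return (phi @ op @ psi) / (phi @ psi)

arm_left  = weak(kron(Left, I))      # 1: the photon is in the left arm
arm_right = weak(kron(Right, I))     # 0
ang_left  = weak(kron(Left, Pang))   # 0
ang_right = weak(kron(Right, Pang))  # 1: angular momentum is in the right arm
```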
The following shows the evolution of the pre-selected state to the final state at the detectors. The intermediate is the state illuminating BS2. The polarization state at the detectors is ignored.
$| \Psi \rangle \rightarrow \frac{i}{\sqrt{2}}[ |L\rangle | H \rangle+| R \rangle | V \rangle ] \rightarrow-\frac{1}{2} | D_{1} \rangle+\frac{i}{2} | D_{3} \rangle+\frac{(i-1)}{2} | D_{2} \rangle \nonumber$
Squaring the magnitude of the probability amplitudes shows that the probabilities that D1, D3 and D2 will fire are 1/4, 1/4 and 1/2, respectively. The probability at D1 is consistent with the probability that the post-selected state is contained in the pre-selected state. A photon in the post-selected state has a probability of 1 of reaching D1 and it represents a 25% contribution to the pre-selected state.
$\left(\left|\Phi^{\mathrm{T}} \cdot \Psi\right|\right)^{2} \rightarrow \frac{1}{4} \nonumber$
Note that the expectation values for the pre-selected state show no path-polarization separation.
$\left[ \begin{matrix} \; & \text{ “Left Arm''} & \text{ “Right Arm''} \ \text{ “Arm''} & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker (Left, } \mathrm{I} ) \cdot \Psi & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker (Right, } \mathrm{I} ) \cdot \Psi \ \text{ “Pang''} & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker (Left, } \mathrm{Pang} ) \cdot \Psi & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker (Right, } \mathrm{Pang} ) \cdot \Psi \ \text{ “Hop''} & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker }\left(\text { Left }, \mathrm{H} \cdot \mathrm{H}^{\mathrm{T}}\right) \cdot \Psi & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker }\left(\text { Right }, \mathrm{H} \cdot \mathrm{H}^{\mathrm{T}}\right) \cdot \Psi \ \text{ “Vop''} & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker }\left(\text { Left }, \mathrm{V} \cdot \mathrm{V}^{\mathrm{T}}\right) \cdot \Psi & \left(\overline{\Psi}\right)^{\mathrm{T}} \cdot \text { kronecker }\left(\text { Right }, \mathrm{V} \cdot \mathrm{V}^{\mathrm{T}}\right) \cdot \Psi \end{matrix}\right] = \begin{pmatrix} \; & \text{ “Left Arm''} & \text{ “Right Arm''} \ \text{ “Arm''} & 0.5 & 0.5 \ \text{ “Pang''} & 0 & 0 \ \text{ “Hop''} & 0.5 & 0.5 \ \text{ “Vop''} & 0 & 0 \end{pmatrix} \nonumber$
In addition the following table shows that linear polarization (HV) is not separated from the photon's path.
$\mathrm{HV} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \ \begin{pmatrix} \; & \text{ “Left Arm''} & \text{ “Right Arm''} \ \text{ “Arm''} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Left}, \mathrm{I}) \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Right}, \mathrm{I}) \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} \ \text{ “HV''} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Left}, \mathrm{HV}) \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} & \frac{\Phi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Right}, \mathrm{HV}) \cdot \Psi}{\Phi^{\mathrm{T}} \cdot \Psi} \end{pmatrix} = \begin{pmatrix} \; & \text{ “Left Arm''} & \text{ “Right Arm''} \ \text{ “Arm''} & 1 & 0 \ \text{ “HV''} & 1 & 0 \end{pmatrix} \nonumber$
The "Complete Quantum Cheshire Cat" by Guryanova, Brunner and Popescu (arXiv 1203.4215) provides an optical set-up which achieves complete path-polarization separation for the photon.
The last several tutorials were a bit off the theme of quantum computation. We get back on track with a look at database searching the quantum way, or the best way to find a needle in a haystack. This is followed by a demonstration of Simon's algorithm, an illustration of quantum dense coding and an example of quantum error correction.
Grover Search Algorithm Implementation for Two Items
Chris Monroe's research group recently published an experimental implementation of the Grover search algorithm in Nature Communications 8, 1918 (2017). In this report the Grover search is implemented for N = 3 using the three qubit quantum circuit shown below. In this particular example it is demonstrated that the search algorithm successfully searches for two items in one operation of the circuit. The lead sentence in the paper states "The Grover search algorithm has four stages: initialization (red), oracle (green), amplification (blue) and measurement (black)."
$\begin{matrix} |0 \rangle & \rhd & \text{H} & \lceil & \; & \rceil & \text{H} & \text{X} & \cdot & \text{X} & \text{H} & \rhd & \text{Measure} \ \; & \; & \; & | & \; & | & \; & \; & | & \; & \; & \; & \; \ |0 \rangle & \rhd & \text{H} & | & \text{Oracle} & | & \text{H} & \text{X} & | & \text{X} & \text{H} & \rhd & \text{Measure} \ \; & \; & \; & | & \; & | & \; & \; & | & \; & \; & \; & \; \ |0 \rangle & \rhd & \text{H} & \lfloor & \; & \rfloor & \text{H} & \text{X} & \text{Z} & \text{X} & \text{H} & \rhd & \text{Measure} \end{matrix} \nonumber$
The oracle, highlighted below, contains 3 (|011>) and 5 (|101>).
$\mathrm{H} :=\frac{1}{\sqrt{2}} \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \qquad \mathrm{X} :=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \ \text{Oracle}=\left( \begin{array}{llllllll}{1} & {0} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {-1} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {-1} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {0} & {1}\end{array}\right) \ \mathrm{CCZ} :=\left( \begin{array}{ccccccc}{1} & {0} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {1} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {0} & {-1}\end{array}\right) \nonumber$
$\mathrm{HHH} :=\mathrm{kronecker}(\mathrm{H}, \mathrm{kronecker}(\mathrm{H}, \mathrm{H})) \qquad \mathrm{XXX} :=\text { kronecker }(\mathrm{X}, \text { kronecker }(\mathrm{X}, \mathrm{X})) \nonumber$
The initial Hadamard gates on the three circuit wires feed the Grover search algorithm (in blue) a superposition of all possible queries yielding a superposition of the correct answers.
$\text{GroverSearch} : = \text{HHH} \cdot \text{XXX} \cdot \text{CCZ} \cdot \text{XXX} \cdot \text{HHH} \cdot \text{Oracle} \cdot \text{HHH} \qquad \text{GroverSearch}\cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {-0.707} \ {0} \ {-0.707} \ {0} \ {0}\end{array}\right)=-\frac{1}{\sqrt{2}}[ |011\rangle+| 101 \rangle ] \nonumber$
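A Python/NumPy transcription of the GroverSearch operator confirms the output amplitudes; the matrix names follow the worksheet where possible:

```python
import numpy as np

kron = np.kron
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
HHH = kron(H, kron(H, H))
XXX = kron(X, kron(X, X))

Oracle = np.eye(8)
Oracle[3, 3] = Oracle[5, 5] = -1    # phase-flips the marked items |011> and |101>

CCZ = np.eye(8)
CCZ[7, 7] = -1                      # phase-flips |111>

GroverSearch = HHH @ XXX @ CCZ @ XXX @ HHH @ Oracle @ HHH
out = GroverSearch @ np.eye(8)[0]   # circuit applied to |000>
# amplitude -1/sqrt(2) at indices 3 and 5, zero elsewhere
```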
Aspects of Simon's Algorithm
A concealed quantum algorithm calculates f(x) from an input register containing a superposition of all x-values. Pairs of x-values (x,x') generate the same output. Simon's algorithm is an efficient method for finding the relationship between the pairs: $f(x) = f(x^{'}) = f(x \oplus s)$, where s is a secret string and the addition on the right is bitwise modulo 2. In a classical calculation one could compute f(x) until some pattern emerged and find the pairs by inspection. This approach is illustrated below.
$\begin{matrix} \text{Decimal} & \text{Binary} & \; & \text{Binary} & \text{Decimal} \ |0 \rangle & | 000 \rangle & \xrightarrow{f(0)} & | 011 \rangle & | 3 \rangle \ |1 \rangle & | 001 \rangle & \xrightarrow{f(1)} & | 001 \rangle & | 1 \rangle \ |2 \rangle & | 010 \rangle & \xrightarrow{f(2)} & | 010 \rangle & | 2 \rangle \ |3 \rangle & | 011 \rangle & \xrightarrow{f(3)} & | 000 \rangle & | 0 \rangle \ |4 \rangle & | 100 \rangle & \xrightarrow{f(4)} & | 001 \rangle & | 1 \rangle \ |5 \rangle & | 101 \rangle & \xrightarrow{f(5)} & | 011 \rangle & | 3 \rangle \ |6 \rangle & | 110 \rangle & \xrightarrow{f(6)} & | 000 \rangle & | 0 \rangle \ |7 \rangle & | 111 \rangle & \xrightarrow{f(7)} & | 010 \rangle & | 2 \rangle \end{matrix} \nonumber$
The table of results reveals the pairs {(0,5), (1,4), (2,7), (3,6)} and that |s> = |101>. Adding |s> bitwise modulo 2 to any |x> reveals its partner |x'>.
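The pairing rule $x' = x \oplus s$ is easy to check with bitwise arithmetic. This short Python fragment (my own illustration, not part of the worksheet) regenerates the pairs from the secret string:

```python
s = 0b101   # the secret string |s> = |101>, decimal 5

# x and x XOR s give the same f value, so together they form a pair
pairs = sorted({tuple(sorted((x, x ^ s))) for x in range(8)})
# [(0, 5), (1, 4), (2, 7), (3, 6)]
```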
The following quantum circuit is a rudimentary implementation of Simon's algorithm. The section in blue is the concealed algorithm. It has been discussed in two other tutorials: Quantum Parallel Calculation and An Illustration of the Deutsch-Jozsa Algorithm. Its operation yields the results shown in the following table.
$\left( \begin{array}{ccccc}{x} & {0} & {1} & {2} & {3} \ {f(x)} & {1} & {0} & {0} & {1}\end{array}\right) \nonumber$
$\begin{matrix} \text{Initial} & \; & 1 & \; & 2 & \; & 3 & \; & 4 & \; & 5 & \; & \text{Final} \ | 0 \rangle & \rhd & H & \cdots & \cdot & \cdots & \cdots & \cdots & \cdots & \cdots & H & \rhd & \; \ \; & \; & \; & \; & | & \; & \; & \; & \; & \; & \; \ | 0 \rangle & \rhd & H & \cdots & | & \cdots & \cdot & \cdots & \cdots & \cdots & H & \rhd & \; \ \; & \; & \; & \; & | & \; & | & \; & \; & \; & \; \ | 0 \rangle & \rhd & \cdots & \cdots & \oplus & \cdots & \oplus & \cdots & \text{NOT} & \cdots & \cdots & \rhd & \text{Measure, 0 or 1} \end{matrix} \nonumber$
Next we prepare a table showing the results of a classical calculation. It is clear that the pairs are (0,3) and (1,2), and that |s> = |11>.
$\begin{matrix} \text{Decimal} & \text{Binary} & \; & \text{Binary} & \text{Decimal} \ |0 \rangle & | 000 \rangle & \xrightarrow{f(0)} & | 011 \rangle & | 3 \rangle \ |1 \rangle & | 001 \rangle & \xrightarrow{f(1)} & | 001 \rangle & | 1 \rangle \ |2 \rangle & | 010 \rangle & \xrightarrow{f(2)} & | 010 \rangle & | 2 \rangle \ |3 \rangle & | 011 \rangle & \xrightarrow{f(3)} & | 000 \rangle & | 0 \rangle \end{matrix} \nonumber$
Now we examine the operation of the quantum circuit that implements Simon's algorithm by two different, but equivalent methods. The matrices representing the quantum gates in the circuit are:
$\mathrm{I} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \mathrm{NOT} :=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \qquad \mathrm{H} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \ \mathrm{CNOT} :=\left( \begin{array}{llll}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right) \ \mathrm{CnNOT} :=\left( \begin{array}{cccccccc}{1} & {0} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {1} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {0} & {1} \ {0} & {0} & {0} & {0} & {0} & {0} & {1} & {0}\end{array}\right) \nonumber$
The three qubit input state is:
$\Psi_{\mathrm{in}} :=\left( \begin{array}{llllllll}{1} & {0} & {0} & {0} & {0} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}} \nonumber$
The concealed algorithm:
$U_{f} : = \text{kronecker}(\text{I, kronecker(I, NOT))} \cdot \text{kronecker(I, CNOT)} \cdot \text{CnNOT} \nonumber$
The complete quantum circuit:
$\text{QuantumCircuit} : = \text{kronecker(H, kronecker(H, I))} \cdot U_{f} \cdot \text{kronecker(H, kronecker(H, I))} \nonumber$
The operation of the quantum circuit on the input state yields the following result:
$\begin{split} \text{QuantumCircuit}\cdot\Psi_{\text { in }}&=\left( \begin{array}{c}{0.5} \ {0.5} \ {0} \ {0} \ {0} \ {0} \ {-0.5} \ {0.5}\end{array}\right) \ &= \frac{1}{2}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\frac{1}{2}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \left( \begin{array}{l}{0} \ {1}\end{array}\right) \ &= \frac{1}{2}[ |00\rangle-| 11 \rangle ] | 0 \rangle+\frac{1}{2}[ |00\rangle+| 11 \rangle ] | 1 \rangle \end{split} \nonumber$
The terms in brackets are superpositions of the x-values which are related by $x^{'} = x \oplus s$. Thus we see by inspection that |s> = |11>. The actual implementation of Simon's algorithm involves multiple measurements in order to determine the secret string. The Appendix modifies the quantum circuit to include the effect of measurement on the bottom wire.
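The circuit algebra above can be replayed in Python with NumPy; the CnNOT construction below (a pair of row swaps) is my shorthand for the 8×8 matrix given earlier:

```python
import numpy as np

kron = np.kron
I = np.eye(2)
NOT = np.array([[0., 1.], [1., 0.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# CnNOT flips the bottom qubit whenever the top qubit is 1:
# it swaps |100><->|101> and |110><->|111>
CnNOT = np.eye(8)
CnNOT[[4, 5]] = CnNOT[[5, 4]]
CnNOT[[6, 7]] = CnNOT[[7, 6]]

Uf = kron(I, kron(I, NOT)) @ kron(I, CNOT) @ CnNOT
HHI = kron(H, kron(H, I))
QuantumCircuit = HHI @ Uf @ HHI

out = QuantumCircuit @ np.eye(8)[0]   # input |000>
# [0.5, 0.5, 0, 0, 0, 0, -0.5, 0.5] = (|00>-|11>)|0>/2 + (|00>+|11>)|1>/2
```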
The second method of analysis uses the following truth tables for the quantum gates and the operation of the Hadamard gate to trace the evolution of the input qubits through the quantum circuit.
$\text { NOT } \; \begin{pmatrix} 0 & ' & 1 \ 1 & ' & 0 \end{pmatrix} \ \text{CNOT} \; \begin{pmatrix} \text{Decimal} & \text{Binary} & ' & \text{Binary} & \text{Decimal} \ 0 & 00 & ' & 00 & 0 \ 1 & 01 & ' & 01 & 1 \ 2 & 10 & ' & 11 & 3 \ 3 & 11 & ' & 10 & 2 \end{pmatrix} \ \text{CnNOT} \; \begin{pmatrix} \text{Decimal} & \text{Binary} & ' & \text{Binary} & \text{Decimal} \ 0 & 000 & ' & 000 & 0 \ 1 & 001 & ' & 001 & 1 \ 2 & 010 & ' & 010 & 2 \ 3 & 011 & ' & 011 & 3 \ 4 & 100 & ' & 101 & 5 \ 5 & 101 & ' & 100 & 4 \ 6 & 110 & ' & 111 & 7 \ 7 & 111 & ' & 110 & 6 \end{pmatrix} \ \text{Hadamard operation:}\; \left[ \begin{matrix} 0 & ' & H & ' & \frac{1}{\sqrt{2}} \cdot (0+1) & ' & H & ' & 0 \ 1 & ' & H & ' & \frac{1}{\sqrt{2}} \cdot (0-1) & ' & H & ' & 1 \end{matrix} \right] \nonumber$
$\begin{matrix} | 000 \rangle \ \mathrm{H} \otimes \mathrm{H} \otimes \mathrm{I} \ \frac{1}{\sqrt{2}}[ |0\rangle+| 1 \rangle ] \frac{1}{\sqrt{2}}[ |0\rangle+| 1 \rangle ] | 0 \rangle=\frac{1}{2}[ |000\rangle+| 010 \rangle+| 100 \rangle+| 110 \rangle ] \ \mathrm{CnNOT} \ \frac{1}{2}[ |000\rangle+| 010 \rangle+| 101 \rangle+| 111 \rangle ] \ \mathrm{I} \otimes \mathrm{CNOT} \ \frac{1}{2}[ |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ] \ \mathrm{I} \otimes \mathrm{I} \otimes \mathrm{NOT} \ \frac{1}{2}[ |001\rangle+| 010 \rangle+| 100 \rangle+| 111 \rangle ] \ \mathrm{H} \otimes \mathrm{H} \otimes \mathrm{I} \ \frac{1}{2}[( |00\rangle-| 11\rangle ) | 0 \rangle+( |00\rangle+| 11 \rangle ) | 1 \rangle ] \end{matrix} \nonumber$
Appendix:
The circuit modification shown below includes the effect of measurement on the bottom wire.
Measure |0> on the bottom wire:
$\text{QuantumCircuit} : = \text{kronecker} \left( \text{H, kronecker} \left[ \text{H,} \begin{pmatrix} 1 \ 0 \end{pmatrix} \cdot \begin{pmatrix} 1 \ 0 \end{pmatrix}^{T} \right] \right) \cdot U_{f} \cdot \text{kronecker(H, kronecker(H, I))} \ \text{QuantumCircuit}\cdot\Psi_{\text { in }}=\left( \begin{array}{c}{0.5} \ {0} \ {0} \ {0} \ {0} \ {0} \ {-0.5} \ {0}\end{array}\right) \ \left( \begin{array}{c}{0.5} \ {0} \ {0} \ {0} \ {0} \ {0} \ {-0.5} \ {0}\end{array}\right)= \frac{1}{2} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
Measure |1> on the bottom wire:
$\text{QuantumCircuit} : = \text{kronecker} \left( \text{H, kronecker} \left[ \text{H,} \begin{pmatrix} 0 \ 1 \end{pmatrix} \cdot \begin{pmatrix} 0 \ 1 \end{pmatrix}^{T} \right] \right) \cdot U_{f} \cdot \text{kronecker(H, kronecker(H, I))} \ \text{QuantumCircuit}\cdot\Psi_{\text { in }}=\left( \begin{array}{c}{0} \ {0.5} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0.5}\end{array}\right) \ \left( \begin{array}{c}{0} \ {0.5} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0.5}\end{array}\right)= \frac{1}{2} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
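These two projected circuits can be reproduced with NumPy. In the sketch below the measurement projectors on the bottom wire are built with `np.diag`, and `Uf` collects the three middle gates of the circuit (this decomposition of the text's $U_f$ is my reading of the diagram):

```python
import numpy as np

kron = np.kron
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]
CnNOT = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]

Uf = kron(I, kron(I, X)) @ kron(I, CNOT) @ CnNOT   # oracle block for s = 11

P0 = np.diag([1., 0.])    # projector |0><0| on the bottom wire
P1 = np.diag([0., 1.])    # projector |1><1| on the bottom wire

psi_in = np.zeros(8)
psi_in[0] = 1.0           # |000>
pre = kron(H, kron(H, I))

for P in (P0, P1):
    out = kron(H, kron(H, P)) @ Uf @ pre @ psi_in
    print(np.round(out, 2))
```

The two printed vectors reproduce the Mathcad results: measuring |0> leaves (1/2)(|000> - |110>), measuring |1> leaves (1/2)(|001> + |111>).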
Quantum Dense Coding
Quantum superdense coding reliably transmits two classical bits through an entangled pair of particles, even though only one member of the pair is handled by the sender. Charles Bennett, Physics Today, October 1995, p. 27
This tutorial is based on Brad Rubin's "Superdense Coding" at the Wolfram Demonstrations Project: http://demonstrations.wolfram.com/SuperdenseCoding/. The quantum circuit shown below implements quantum dense coding. Alice and Bob share the entangled pair of photons in the Bell basis shown at the left. Alice encodes two classical bits of information (four possible messages) on her photon, and Bob subsequently reads her message by performing a Bell-state measurement on the modified entangled photon pair. In other words, although Alice encodes two bits on her photon, Bob's readout requires a measurement involving both photons. In this example Alice sends |11> to Bob.
$\begin{matrix} \cdot & \text{X}^{1} & \cdot & \text{Z}^{1} & \cdot & \cdot & H \; \ \left( \begin{array}{c}{1 / \sqrt{2}} \ {0} \ {0} \ {1 / \sqrt{2}}\end{array}\right) & \; & \left( \begin{array}{c}{0} \ {1 / \sqrt{2}} \ {1 / \sqrt{2}} \ {0}\end{array}\right) & \; & \left( \begin{array}{c}{0} \ {1 / \sqrt{2}} \ {-1 / \sqrt{2}} \ {0}\end{array}\right) & \begin{array}{c}{|} \ {|} \ {|} \ {|}\end{array} & \; & \left( \begin{array}{c}{0} \ {0} \ {0} \ {1}\end{array}\right) = \left( \begin{array}{l}{0} \ {1}\end{array}\right) \left( \begin{array}{l}{0} \ {1}\end{array}\right) \ \cdot & \mathrm{I} & \cdot & \mathrm{I} & \cdot & \oplus & \mathrm{I} & \; \end{matrix} \nonumber$
The operation of the circuit is outlined in both matrix and algebraic format. The necessary truth tables and matrix operators are provided in the Appendix.
Matrix Method
$\mathrm{H} \otimes \mathrm{I} \cdot \mathrm{CNOT} \cdot \mathrm{Z} \otimes \mathrm{I} \cdot \mathrm{X} \otimes \mathrm{I} \cdot \frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {0} \ {0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right) =| 11 \rangle \nonumber$
Algebraic Method
$\frac{ |00 \rangle+| 11\rangle}{\sqrt{2}} \xrightarrow{\mathrm{X}^{1} \otimes \mathrm{I}} \frac{ |10 \rangle+| 01\rangle}{\sqrt{2}} \xrightarrow{\mathrm{Z}^{1} \otimes \mathrm{I}} \frac{-| 10 \rangle+| 01\rangle}{\sqrt{2}} \xrightarrow{CNOT} \frac{-| 11 \rangle+| 01\rangle}{\sqrt{2}} \xrightarrow{\mathrm{H} \otimes \mathrm{I}} \frac{-( |0\rangle-| 1 \rangle ) | 1 \rangle+( |0\rangle+| 1 \rangle ) | 1\rangle}{2}=| 11 \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
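The matrix method extends to all four of Alice's messages. In the NumPy sketch below (the `encode`/`decode` helpers are my naming, not the text's), Alice applies $X$ then $Z$ according to her two classical bits, and Bob's Bell-state measurement, a CNOT followed by H on the top qubit, returns $|b_1 b_2\rangle$:

```python
import numpy as np

kron = np.kron
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def encode(b1, b2):
    """Alice acts on her qubit only: X first, then Z, as in the circuit."""
    op = np.eye(2)
    if b2:
        op = X @ op
    if b1:
        op = Z @ op
    return kron(op, I) @ bell

def decode(state):
    """Bob's Bell-state measurement: CNOT, then H on the top qubit."""
    return kron(H, I) @ CNOT @ state

print(np.round(decode(encode(1, 1)), 3))   # |11> = (0, 0, 0, 1)
```

Running `decode(encode(b1, b2))` for all four bit pairs returns the four computational basis states, confirming that two classical bits travel on one qubit.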
Appendix:
Truth tables for the quantum circuit:
$\mathrm{X}=\mathrm{NOT} \left( \begin{array}{ccc}{0} & {\mathrm{to}} & {1} \ {1} & {\mathrm{to}} & {0}\end{array}\right) \quad Z \left( \begin{array}{ccc}{0} & {\text { to }} & {0} \ {1} & {\text { to }} & {-1}\end{array}\right) \quad \mathrm{H}=\text { Hadamard } \left[ \begin{array}{cc}{0} & {\text { to } \frac{1}{\sqrt{2}} \cdot(0+1)} \ {1} & {\text { to }} {\frac{1}{\sqrt{2}} \cdot(0-1)}\end{array}\right] \quad \mathrm{CNOT} \left( \begin{array}{ccc}{00} & {\text { to }} & {00} \ {01} & {\text { to }} & {01} \ {10} & {\text { to }} & {11} \ {11} & {\text { to }} & {10}\end{array}\right) \nonumber$
Circuit elements in matrix format:
$\mathrm{I} \equiv \left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \quad \mathrm{X} \equiv \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \quad \mathrm{Z} \equiv \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad H \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \quad \mathrm{CNOT} \equiv \left( \begin{array}{llll}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right) \nonumber$
Quantum Error Correction
This tutorial deals with quantum error correction as presented by Julian Brown on pages 274-278 in The Quest for the Quantum Computer. Brown's three-qubit example includes an input qubit and two ancillary qubits in the initial state $|\Psi 00\rangle$. This state is encoded and subsequently decoded, with the possibility that in between it acquires an error. The quantum circuit demonstrates how correction occurs if a qubit is flipped due to decoherence at the intermediate state.
The quantum gates required for the error correction algorithm are:
$\mathrm{I} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \text { CNOT } :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right) \ \mathrm{CnNOT} :=\left( \begin{array}{cccccccc}{1} & {0} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {1} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {0} & {1} \ {0} & {0} & {0} & {0} & {0} & {0} & {1} & {0}\end{array}\right) \ \mathrm{IToffoli}:=\left( \begin{array}{cccccccc}{1} & {0} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} & {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {0} & {1} \ {0} & {0} & {0} & {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {0} & {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1} & {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
The encoding and decoding elements of the circuit in terms of these gates are shown in the appropriate place above the circuit diagram.
$\text{Encode} : = \text{CnNOT} \cdot \text{kronecker(CNOT, I)} \qquad \text{Decode} : = \text{IToffoli} \cdot \text{Encode} \nonumber$
$\begin{matrix} \sqrt{\frac{1}{3}} | 0 \rangle+\sqrt{\frac{2}{3}} | 1 \rangle & \cdots & \cdot & \cdots & \cdot & \cdots & \mathrm{E} & \cdots & \cdot & \cdots & \cdot & \cdots & \oplus & \rhd & \sqrt{\frac{1}{3}} | 0 \rangle+\sqrt{\frac{2}{3}} | 1 \rangle \ \; & \; & | & \; & | & \; & \mathrm{R} & \; & | & \; & | & \; & | & \; & \; \ |0 \rangle & \cdots & \oplus & \cdots & | & \cdots & \mathrm{R} & \cdots & \oplus & \cdots & | & \cdots & \cdot & \rhd & |0 \rangle\; \text{or}\; | 1 \rangle \ \; & \; & \; & \; & | & \; & \mathrm{O} & \; & \; & \; & | & \; & | & \; & \; \ |0 \rangle & \cdots & \cdots & \cdots & \oplus & \cdots & \mathrm{R} & \cdots & \cdots & \cdots & | & \cdots & \cdot & \rhd & |0 \rangle\; \text{or}\; | 1 \rangle \end{matrix} \nonumber$
Given an initial state, the encoding step creates an entangled three-qubit state, as demonstrated below.
Initial State
$\left[\sqrt{\frac{1}{3}} \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\sqrt{\frac{2}{3}} \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0} \ {0} \ {0}\end{array}\right) \nonumber$
Encode
$\text{Encode} \cdot \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0.577} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0.816}\end{array}\right) \nonumber$
Encoded state
$\sqrt{\frac{1}{3}} \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\sqrt{\frac{2}{3}} \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \nonumber$
As an initial example, it is assumed that no errors are introduced between encoding and decoding. This case demonstrates that decoding simply returns the initial state. Subsequently, the operation of the circuit when an error occurs on each of the wires is examined. These cases demonstrate that the original state appears on the top wire at the completion of the error correction circuit.
No errors:
$\sqrt{\frac{1}{3}} \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\sqrt{\frac{2}{3}} \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \ \text{Decode}\cdot \left( \begin{array}{c}{0.577} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0.816}\end{array}\right)=\left( \begin{array}{c}{0.577} \ {0} \ {0} \ {0} \ {0.816} \ {0} \ {0} \ {0}\end{array}\right) \quad \left[\sqrt{\frac{1}{3}} \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\sqrt{\frac{2}{3}} \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right] \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) =\left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0} \ {0} \ {0}\end{array}\right) \nonumber$
Next it is shown that if the encoded state is corrupted, the decoder corrects the error and returns the original $| \Psi \rangle$ to the top wire of the circuit. In the example shown below the top qubit is flipped.
Top qubit flipped:
$\left( \begin{array}{c}{0} \ {\sqrt{\frac{1}{3}}}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{c}{\sqrt{\frac{2}{3}}} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0}\end{array}\right) \qquad \text{Decode} \cdot\left( \begin{array}{c}{0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {0.577} \ {0} \ {0} \ {0} \ {0.816}\end{array}\right) \quad \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \nonumber$
The circuit can also be expressed using Dirac notation. Truth tables for the gates are provided in the Appendix.
$\left(\sqrt{\frac{1}{3}} | 0\rangle+\sqrt{\frac{2}{3}} | 1 \rangle \right) | 00 \rangle \stackrel{\text { encode }}{\longrightarrow} \sqrt{\frac{1}{3}} | 000 \rangle+\sqrt{\frac{2}{3}} | 111 \rangle \xrightarrow[\text{qubit}]{\text{flip top}}\sqrt{\frac{1}{3}} | 100 \rangle+\sqrt{\frac{2}{3}} | 011 \rangle \xrightarrow{\mathrm{CNOT}, \mathrm{I}} \sqrt{\frac{1}{3}} | 110 \rangle +\sqrt{\frac{2}{3}} | 011 \rangle \xrightarrow{\mathrm{CnNOT}} \sqrt{\frac{1}{3}} | 111 \rangle+\sqrt{\frac{2}{3}} | 011 \rangle \xrightarrow{\text{IToffoli}} \sqrt{\frac{1}{3}} | 011 \rangle+\sqrt{\frac{2}{3}} | 111 \rangle=\left(\sqrt{\frac{1}{3}} | 0\rangle+\sqrt{\frac{2}{3}} | 1 \rangle \right) | 11 \rangle \nonumber$
Naturally the ancillary qubits are also susceptible to errors. The following examples show that if a qubit flip occurs on the middle or bottom wire, the circuit still functions properly.
Middle qubit flipped:
$\left( \begin{array}{l}{\sqrt{\frac{1}{3}}} \ {0}\end{array}\right) \cdot \left( \begin{array}{c}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{c}{0} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0} \ {0}\end{array}\right) \qquad \text{Decode}\cdot \left( \begin{array}{c}{0} \ {0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0.577} \ {0} \ {0} \ {0} \ {0.816} \ {0}\end{array}\right) \quad \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0}\end{array}\right) \nonumber$
$\left(\sqrt{\frac{1}{3}} | 0\rangle+\sqrt{\frac{2}{3}} | 1 \rangle\right ) | 00 \rangle \xrightarrow{\text { encode }}\sqrt{\frac{1}{3}} | 000 \rangle+\sqrt{\frac{2}{3}} | 111 \rangle \xrightarrow[\text{qubit}]{\text { flip middle }}\sqrt{\frac{1}{3}} | 010 \rangle+\sqrt{\frac{2}{3}} | 101 \rangle\xrightarrow{\mathrm{CNOT}, \mathrm{I}} \sqrt{\frac{1}{3}} | 010 \rangle+\sqrt{\frac{2}{3}} | 111 \rangle \xrightarrow{\mathrm{CnNOT}} \sqrt{\frac{1}{3}} | 010 \rangle+\sqrt{\frac{2}{3}} | 110 \rangle \xrightarrow{\text{IToffoli}} \sqrt{\frac{1}{3}} | 010 \rangle+\sqrt{\frac{2}{3}} | 110 \rangle=\left(\sqrt{\frac{1}{3}} | 0\rangle+\sqrt{\frac{2}{3}} | 1 \rangle \right) | 10 \rangle \nonumber$
Bottom qubit flipped:
$\left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)+\left( \begin{array}{c}{0} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0}\end{array}\right) \quad \text{Decode}\cdot \left( \begin{array}{c}{0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0.577} \ {0} \ {0} \ {0} \ {0.816} \ {0}\ {0}\end{array}\right)\quad \left( \begin{array}{c}{\sqrt{\frac{1}{3}}} \ {\sqrt{\frac{2}{3}}}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {\sqrt{\frac{1}{3}}} \ {0} \ {0} \ {0} \ {\sqrt{\frac{2}{3}}} \ {0} \ {0}\end{array}\right) \nonumber$
$\left(\sqrt{\frac{1}{3}} | 0\rangle+\sqrt{\frac{2}{3}} | 1 \rangle \right)| 00 \rangle \xrightarrow{\text{encode}} \sqrt{\frac{1}{3}} | 000 \rangle+\sqrt{\frac{2}{3}} | 111 \rangle \xrightarrow[\text{qubit}]{\text{flip bottom}}\sqrt{\frac{1}{3}} | 001 \rangle+\sqrt{\frac{2}{3}} | 110 \rangle \xrightarrow{\text{CNOT, I}} \sqrt{\frac{1}{3}} | 001 \rangle+\sqrt{\frac{2}{3}} | 100 \rangle\xrightarrow{\text{CnNOT}}\sqrt{\frac{1}{3}} | 001 \rangle+\sqrt{\frac{2}{3}} | 101 \rangle\xrightarrow{\text{IToffoli}}\sqrt{\frac{1}{3}} | 001 \rangle+\sqrt{\frac{2}{3}} | 101 \rangle=\left(\sqrt{\frac{1}{3}} | 0\rangle+\sqrt{\frac{2}{3}} | 1 \rangle \right) | 01 \rangle \nonumber$
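All three single-flip cases, and the no-error case, can be run through the Encode/Decode matrices numerically. The NumPy sketch below builds the gates as permutation matrices and checks that the top wire always ends up carrying the original $\sqrt{1/3}\,|0\rangle + \sqrt{2/3}\,|1\rangle$ amplitudes; the `reshape` bookkeeping and the dictionary of error operators are my additions, not the text's Mathcad code:

```python
import numpy as np

kron = np.kron
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
CNOT = np.eye(4)[[0, 1, 3, 2]]
CnNOT = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]      # flip qubit 3 when qubit 1 is set
IToffoli = np.eye(8)[[0, 1, 2, 7, 4, 5, 6, 3]]   # flip qubit 1 when qubits 2 and 3 are set

Encode = CnNOT @ kron(CNOT, I)
Decode = IToffoli @ Encode

psi = np.array([np.sqrt(1/3), np.sqrt(2/3)])     # state to protect
encoded = Encode @ kron(psi, [1., 0., 0., 0.])   # Encode |psi>|00>

errors = {'none':   kron(I, kron(I, I)),
          'top':    kron(X, kron(I, I)),
          'middle': kron(I, kron(X, I)),
          'bottom': kron(I, kron(I, X))}

for name, E in errors.items():
    out = (Decode @ E @ encoded).reshape(2, 4)   # rows: top qubit; columns: ancilla state
    col = int(np.argmax(np.abs(out).sum(axis=0)))  # the one ancilla basis state present
    print(name, np.round(out[:, col], 3))        # top wire always carries (0.577, 0.816)
```

The ancilla column that survives differs from case to case (the syndrome), but the top-wire amplitudes are (0.577, 0.816) in every case, exactly as the four Dirac-notation traces show.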
Appendix:
CNOT
$\left( \begin{array}{ccccc}{\text { Decimal }} & {\text { Binary }} & {\text { to}} & {\text{ Binary }} & {\text { Decimal }} \ {0} & {00} & {\text { to }} & {00} & {0} \ {1} & {01} & {\text { to }} & {01} & {1} \ {2} & {10} & {\text { to }} & {11} & {3} \ {3} & {11} & {\text { to }} & {10} & {2}\end{array}\right) \nonumber$
CnNOT
$\left( \begin{array}{ccccc}{\text{Decimal}} & {\text{Binary}} & {\text { to }} & {\text { Binary }} & {\text { Decimal }} \ {0} & {000} & {\text { to }} & {000} & {0} \ {1} & {001} & {\text { to }} & {001} & {1} \ {2} & {010} & {\text { to }} & {010} & {2} \ {3} & {011} & {\text { to }} & {011} & {3} \ {4} & {100} & {\text { to }} & {101} & {5} \ {5} & {101} & {\text{to}} & {100} & {4} \ {6} & {110} & {\text { to }} & {111} & {7} \ {7} & {111} & {\text { to }} & {110} & {6}\end{array}\right) \nonumber$
IToffoli
$\left( \begin{array}{ccccc}{\text { Decimal }} & {\text { Binary }} & {\text { to}} & {\text{ Binary }} & {\text { Decimal }} \ {0} & {000} & {\text { to }} & {000} & {0} \ {1} & {001} & {\text { to }} & {001} & {1} \ {2} & {010} & {\text { to }} & {010} & {2} \ {3} & {011} & {\text { to }} & {111} & {7} \ {4} & {100} & {\text { to }} & {100} & {4} \ {5} & {101} & {\text { to }} & {101} & {5} \ {6} & {110} & {\text { to }} & {110} & {6} \ {7} & {111} & {\text { to }} & {011} & {3}\end{array}\right) \nonumber$
And now we move to Bell’s theorem and the battle between local realism and quantum mechanics.
Quantum theory is both stupendously successful as an account of the small-scale structure of the world and it is also the subject of unresolved debate and dispute about its interpretation. J. C. Polkinghorne, The Quantum World.
So we end this short course with the conflict between quantum mechanics and the classical view of reality held by Einstein and others known as local realism. Until the work of John Bell this battle was regarded by most scientists as a distracting philosophical debate. According to David Mermin quantum physicists were admonished to ignore the debate and “Shut up and calculate!”
In 1964 Bell demonstrated that experiments were possible for which quantum theory and local realism gave conflicting predictions, thus moving the disagreement from the realm of philosophical debate to the jurisdiction of the laboratory. In Where Does the Weirdness Go? David Lindley summarized Bell’s achievement.
The hallmark of Bell’s work was his success in locating precisely the point at which classical views of reality ran into trouble with quantum mechanics, and in devising a means by which the two viewpoints could be empirically compared. Bell’s sympathies seem often to have lain with Bohm and the effort to establish a hidden variables version of quantum mechanics. His greatest achievement, though, was to demonstrate incontrovertibly the price that must be paid to make such a theory work.
Bell said that if a theory of reality was local it would not agree with quantum mechanics, and if it agreed with quantum mechanics it would contain non-local interactions, in other words “spooky interactions at a distance” to use Einstein’s phrase. Here’s the best description of this spookiness that I have read: “A non-local interaction links up one location with another without crossing space, without decay, and without delay. A non-local event is, in short, unmediated, unmitigated and immediate.” Nick Herbert, Quantum Reality, page 214.
In 1981 Richard Feynman gave a lecture titled “Simulating Nature with Computers.” His thesis was that the simulation of nature at the nanoscopic level required a quantum computer. “I'm not happy with all the analyses that go with just the classical theory, because nature isn't classical, dammit. And if you want to make a simulation of nature, you'd better make it quantum mechanical …”
The purpose of this tutorial is to analyze the Stern-Gerlach experiment using matrix mechanics. The figure below is taken (and modified) from Thomas Engel's text, Quantum Chemistry & Spectroscopy.
Silver atoms are deflected by an inhomogeneous magnetic field because of the two-valued magnetic moment associated with their unpaired 5s electron ($[\mathrm{Kr}]5s^{1}4d^{10}$). The beam of silver atoms entering the Stern-Gerlach magnet oriented in the z-direction (SGZ) on the left is unpolarized. This means it is a mixture of randomly spin-polarized Ag atoms. As such, it is impossible to write a quantum mechanical wavefunction for this initial state.
This situation is exactly analogous to the three-polarizer demonstration described in a previous tutorial. Light emerging from an incandescent light bulb is unpolarized, a mixture of all possible polarization angles, so we can't write a wave function for it. The first Stern-Gerlach magnet plays the same role as the first polarizer: it forces the Ag atoms into one of the measurement eigenstates, spin-up or spin-down in the z-direction. The only difference is that in the three-polarizer demonstration only one state was created, vertical polarization. Both demonstrations illustrate an important quantum mechanical postulate: the only values that are observed in a measurement are the eigenvalues of the measurement operator.
To continue with the analysis of the Stern-Gerlach demonstration we need vectors to represent the various spin states of the Ag atoms. We will restrict our attention to the x- and z- spin directions, although the spin states for the y-direction are also available.
Spin Eigenfunctions
Spin-up in the z-direction: $\alpha_{Z} :=\left(\begin{array}{l}{1} \ {0}\end{array}\right)$
Spin-down in the z-direction: $\beta_{\mathrm{z}} :=\left(\begin{array}{l}{0} \ {1}\end{array}\right)$
Spin-up in the x-direction: $\alpha_{\mathrm{x}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{l}{1} \ {1}\end{array}\right)$
Spin-down in the x-direction: $\beta_{\mathrm{x}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {-1}\end{array}\right)$
After the SGZ magnet, the spin-up beam (deflected toward the magnet's north pole) enters a magnet oriented in the x-direction, SGX. The $\alpha_{z}$ beam splits into $\alpha_{x}$ and $\beta_{x}$ beams of equal intensity. This is because it is a superposition of the x-direction spin eigenstates as shown below.
$\frac{1}{\sqrt{2}} \cdot\left[\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {1}\end{array}\right)+\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {-1}\end{array}\right)\right] \rightarrow\left(\begin{array}{c}{1} \ {0}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot\left(\alpha_{\mathrm{x}}+\beta_{\mathrm{x}}\right) \rightarrow\left(\begin{array}{c}{1} \ {0}\end{array}\right) \nonumber$
Next the $\alpha_{x}$ beam is directed toward a second SGZ magnet and splits into two equal $\alpha_{z}$ and $\beta_{z}$ beams. This happens because $\alpha_{x}$ is a superposition of the $\alpha_{z}$ and $\beta_{z}$ spin states.
$\frac{1}{\sqrt{2}} \cdot\left[\left(\begin{array}{l}{1} \ {0}\end{array}\right)+\left(\begin{array}{l}{0} \ {1}\end{array}\right)\right]=\left(\begin{array}{l}{0.707} \ {0.707}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot\left(\alpha_{\mathrm{z}}+\beta_{\mathrm{z}}\right)=\left(\begin{array}{c}{0.707} \ {0.707}\end{array}\right) \nonumber$
Operators
We can also use the Pauli operators (in units of $h/4\pi$) to analyze this experiment. The matrix operators associated with the two Stern-Gerlach magnets are shown below.
SGZ operator:
$\mathrm{SGZ} :=\left(\begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \nonumber$
SGX operator:
$\operatorname{SGX} :=\left(\begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \nonumber$
The spin states $\alpha_{z}$ and $\beta_{z}$ are eigenfunctions of the SGZ operator with eigenvalues +1 and -1, respectively:
$S G Z \cdot \alpha_{z}=\alpha_{z} \qquad \operatorname{SGZ} \cdot \alpha_{z}=\left(\begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
$S G Z \cdot \beta_{z}=-\beta_{z} \qquad \operatorname{SGZ} \cdot \beta_{\mathrm{z}}=\left(\begin{array}{c}{0} \ {-1}\end{array}\right) \nonumber$
The spin states $\alpha_{x}$ and $\beta_{x}$ are eigenfunctions of the SGX operator with eigenvalues +1 and -1, respectively:
$S G X \cdot \alpha_{x}=\alpha_{x} \qquad \mathrm{SGX} \cdot \alpha_{\mathrm{x}}=\left(\begin{array}{c}{0.707} \ {0.707}\end{array}\right) \nonumber$
$S G X \cdot \beta_{x}=-\beta_{x} \qquad \mathrm{SGX} \cdot \beta_{\mathrm{x}}=\left(\begin{array}{c}{-0.707} \ {0.707}\end{array}\right) \nonumber$
The spin states $\alpha_{x}$ and $\beta_{x}$ are not eigenfunctions of the SGZ operator as is shown below.
$S G Z \cdot \alpha_{x}=\beta_{x} \qquad \mathrm{SGZ} \cdot \alpha_{\mathrm{x}}=\left(\begin{array}{c}{0.707} \ {-0.707}\end{array}\right) \nonumber$
$S G Z \cdot \beta_{x}=\alpha_{x} \qquad \mathrm{SGZ} \cdot \beta_{\mathrm{x}}=\left(\begin{array}{c}{0.707} \ {0.707}\end{array}\right) \nonumber$
And, of course, the spin states $\alpha_{z}$ and $\beta_{z}$ are not eigenfunctions of the SGX operator as is shown below.
$S G X \cdot \alpha_{z}=\beta_{z} \qquad \mathrm{SGX} \cdot \alpha_{\mathrm{z}}=\left(\begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
$S G X \cdot \beta_{z}=\alpha_{z} \qquad \operatorname{SGX} \cdot \beta_{\mathrm{z}}=\left(\begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
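The four eigenvalue relations and the basis-exchange relations above can be checked in a few lines of NumPy (a sketch; the variable names `az`, `bz`, `ax`, `bx` are my shorthand for the spin states):

```python
import numpy as np

SGZ = np.array([[1., 0.], [0., -1.]])   # Pauli sigma_z
SGX = np.array([[0., 1.], [1., 0.]])    # Pauli sigma_x

az, bz = np.array([1., 0.]), np.array([0., 1.])       # z-basis spin states
ax = np.array([1., 1.]) / np.sqrt(2)                  # x-basis spin states
bx = np.array([1., -1.]) / np.sqrt(2)

# z-states are eigenvectors of SGZ with eigenvalues +1 and -1 ...
print(np.allclose(SGZ @ az, az), np.allclose(SGZ @ bz, -bz))
# ... x-states are eigenvectors of SGX with eigenvalues +1 and -1 ...
print(np.allclose(SGX @ ax, ax), np.allclose(SGX @ bx, -bx))
# ... and each operator merely interchanges the other operator's eigenstates
print(np.allclose(SGZ @ ax, bx), np.allclose(SGX @ az, bz))
```

All three lines print `True True`, confirming the six relations above.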
The Predicted Results After the SGX Magnet
The probability that an $\alpha_{z}$ Ag atom will emerge spin-up after passing through a SGX magnet:
$\left|\left\langle\alpha_{x}|S G X| \alpha_{z}\right\rangle\right|^{2}=1 / 2 \qquad\left(|\alpha_{x}^{\mathrm{T}} \cdot \operatorname{SGX} \cdot \alpha_{z}|\right)^{2} \rightarrow \frac{1}{2} \nonumber$
The probability that an $\alpha_{z}$ Ag atom will emerge spin-down after passing through a SGX magnet:
$\left|\left\langle\beta_{x}|\operatorname{SGX}| \alpha_{z}\right\rangle\right|^{2}=1 / 2 \qquad\left(|\beta_{\mathrm{x}}^{\mathrm{T}} \cdot \operatorname{SGX} \cdot \alpha_{z}|\right)^{2} \rightarrow \frac{1}{2} \nonumber$
The Predicted Results After the Final SGZ Magnet
The probability that an $\alpha_{x}$ Ag atom will emerge spin-up after passing through a SGZ magnet:
$\left|\left\langle\alpha_{z}|\operatorname{SGZ}| \alpha_{x}\right\rangle\right|^{2}=1 / 2 \qquad\left(|\alpha_{z}^{\mathrm{T}} \cdot \operatorname{SGZ} \cdot \alpha_{\mathrm{x}}|\right)^{2} \rightarrow \frac{1}{2} \nonumber$
The probability that an $\alpha_{x}$ Ag atom will emerge spin-down after passing through a SGZ magnet:
$\left|\left\langle\beta_{z}|\operatorname{SGZ}| \alpha_{x}\right\rangle\right|^{2}=1 / 2 \qquad\left(|\beta_{z}^{\mathrm{T}} \cdot \operatorname{SGZ} \cdot \alpha_{\mathrm{x}}|\right)^{2} \rightarrow \frac{1}{2} \nonumber$
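Both sets of transition probabilities reduce to the same one-line computation. A NumPy sketch (the `prob` helper is my shorthand for $|\langle \text{final}|\,\hat{O}\,|\text{initial}\rangle|^{2}$; with real vectors the transpose is implicit in the `@` products):

```python
import numpy as np

SGZ = np.array([[1., 0.], [0., -1.]])
SGX = np.array([[0., 1.], [1., 0.]])

az, bz = np.array([1., 0.]), np.array([0., 1.])
ax = np.array([1., 1.]) / np.sqrt(2)
bx = np.array([1., -1.]) / np.sqrt(2)

# |<final| operator |initial>|^2
prob = lambda final, op, initial: abs(final @ op @ initial) ** 2

# an alpha_z beam splits 50-50 at a SGX magnet
print(round(prob(ax, SGX, az), 3), round(prob(bx, SGX, az), 3))
# an alpha_x beam splits 50-50 at a SGZ magnet
print(round(prob(az, SGZ, ax), 3), round(prob(bz, SGZ, ax), 3))
```

Each printed probability is 0.5, matching the four bracket calculations above.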
The Predicted Results for the First SGZ Magnet
Now we deal with the most difficult part of the analysis: how does quantum mechanics predict what will happen when an unpolarized spin beam encounters the initial SGZ magnet? As mentioned earlier, an unpolarized spin beam is a mixture of all possible spin polarizations. We proceed by introducing the density operator, a more general quantum mechanical construct that can represent both pure states and mixtures, as shown below.
$\hat{\rho}_{\text {pure}}=|\Psi\rangle\langle\Psi| \qquad \hat{\rho}_{m i x e d}=\sum p_{i}\left|\Psi_{i}\right\rangle\left\langle\Psi_{i}\right| \nonumber$
In the equation on the right, $p_i$ is the fraction of the mixture in the state $\Psi_{i}$. It is not difficult to elucidate the origin of the density operator and its utility in quantum mechanical calculations. The expectation value of the measurement operator A for a pure state $\Psi$ is traditionally written as follows.
$\langle A\rangle=\langle\Psi|\hat{A}| \Psi\rangle \nonumber$
Expansion of $\Psi$ in the eigenfunctions of the measurement operator, followed by rearrangement of the brackets yields the calculation of the expectation value of A in terms of the product of density operator and the measurement operator, A.
$\langle A\rangle=\langle\Psi|\hat{A}| \Psi\rangle=\sum_{a}\langle\Psi|\hat{A}| a\rangle\langle a | \Psi\rangle=\sum_{a}\langle a | \Psi\rangle\langle\Psi|\hat{A}| a\rangle=\sum_{a}\langle a|\hat{\rho} \hat{A}| a\rangle=\operatorname{Trace}(\hat{\rho} \hat{A}) \nonumber$
We now show that the traditional method and the method using the trace function give the same result for the z-direction spin eigenfunctions.
$\begin{matrix} \alpha_{\mathrm{z}}^{\mathrm{T}} \cdot \mathrm{SGZ} \cdot \alpha_{\mathrm{z}}=1 & \operatorname{tr}\left(\alpha_{\mathrm{z}} \cdot \alpha_{\mathrm{z}}^{\mathrm{T}} \cdot \operatorname{SGZ}\right)=1 \ \beta_{\mathrm{Z}}^{\mathrm{T}} \cdot \mathrm{SGZ} \cdot \beta_{\mathrm{z}}=-1 & \operatorname{tr}\left(\beta_{\mathrm{Z}} \cdot \beta_{\mathrm{Z}}^{\mathrm{T}} \cdot \mathrm{SGZ}\right)=-1 \end{matrix} \nonumber$
An unpolarized beam can be written as a 50-50 mixture of any of the orthogonal spin eigenfunctions - $\alpha_{z}$ and $\beta_{z}$, or $\alpha_{x}$ and $\beta_{x}$, or $\alpha_{y}$ and $\beta_{y}$. Thus, according to the previous definition the density operator for an unpolarized spin beam is as follows.
$\hat{\rho}_{\operatorname{mix}}=\frac{1}{2}|\alpha_{z}\rangle\langle\alpha_{z}|+\frac{1}{2}| \beta_{z}\rangle\langle\beta_{z}|=\frac{1}{2}| \alpha_{x}\rangle\langle\alpha_{x}|+\frac{1}{2}| \beta_{x}\rangle\langle\beta_{x}|=\frac{1}{2}| \alpha_{y}\rangle\langle\alpha_{y}|+\frac{1}{2}| \beta_{y}\rangle\langle\beta_{y}| \nonumber$
Fifty percent of the silver atoms are deflected toward the north pole ($\alpha_{z}$, eigenvalue +1) and fifty percent toward the south pole ($\beta_{z}$, eigenvalue -1). Therefore, the expectation value should be zero as is calculated below using both z- and x- spin directions.
$\operatorname{tr}\left[\left(\frac{1}{2} \cdot \alpha_{z} \cdot \alpha_{z}^{T}+\frac{1}{2} \cdot \beta_{z} \cdot \beta_{z}^{T}\right) \cdot \operatorname{SGZ}\right]=0 \qquad \operatorname{tr}\left[\left(\frac{1}{2} \cdot \alpha_{x} \cdot \alpha_{x}^{T}+\frac{1}{2} \cdot \beta_{x} \cdot \beta_{x}^{T}\right) \cdot \operatorname{SGZ}\right]=0 \nonumber$
An equivalent method of obtaining the same result is shown below.
$\frac{1}{2} \cdot \alpha_{z}^{T} \cdot \operatorname{SGZ} \cdot \alpha_{z}+\frac{1}{2} \cdot \beta_{z}^{T} \cdot \operatorname{SGZ} \cdot \beta_{z}=0 \qquad \frac{1}{2} \cdot \alpha_{x}^{T} \cdot \operatorname{SGZ} \cdot \alpha_{x}+\frac{1}{2} \cdot \beta_{x}^{T} \cdot \operatorname{SGZ} \cdot \beta_{x}=0 \nonumber$
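The density-matrix bookkeeping translates directly to NumPy. The sketch below also verifies the earlier claim that the 50-50 z-mixture and the 50-50 x-mixture give the same density operator (both equal $I/2$), so the zero expectation value follows from either representation:

```python
import numpy as np

SGZ = np.array([[1., 0.], [0., -1.]])
az, bz = np.array([1., 0.]), np.array([0., 1.])
ax = np.array([1., 1.]) / np.sqrt(2)
bx = np.array([1., -1.]) / np.sqrt(2)

# 50-50 mixtures built from projectors |i><i| via np.outer
rho_z = 0.5 * np.outer(az, az) + 0.5 * np.outer(bz, bz)
rho_x = 0.5 * np.outer(ax, ax) + 0.5 * np.outer(bx, bx)

# both mixtures give the same (maximally mixed) density operator, I/2
print(np.allclose(rho_z, np.eye(2) / 2), np.allclose(rho_x, np.eye(2) / 2))
# <SGZ> = tr(rho * SGZ) = 0 for the unpolarized beam
print(np.trace(rho_z @ SGZ))
```

The zero trace reflects the equal 50-50 split of the unpolarized beam between the +1 and -1 eigenvalues at the first SGZ magnet.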
Silver atoms are deflected by an inhomogeneous magnetic field because of the two-valued magnetic moment associated with their unpaired 5s electron ($[\mathrm{Kr}]5s^{1}4d^{10}$). The beam of silver atoms entering the Stern-Gerlach magnet oriented in the z-direction (SGZ) on the left is unpolarized. This means it is a mixture of randomly polarized Ag atoms. A mixture cannot be represented by a wave function; it requires a density matrix, as will be shown later.
This situation is exactly analogous to the three-polarizer demonstration. Light emerging from an incandescent light bulb is unpolarized, a mixture of all possible polarization angles, so we can't write a wave function for it. The first Stern-Gerlach magnet plays the same role as the first polarizer: it forces the Ag atoms into one of the measurement eigenstates, spin-up or spin-down in the z-direction. The only difference is that in the three-polarizer demonstration only one state was created, vertical polarization. Both demonstrations illustrate that the only values observed in an experiment are the eigenvalues of the measurement operator.
To continue with the analysis of the Stern-Gerlach demonstration we need vectors to represent the various spin states of the Ag atoms.
Spin Eigenfunctions
Spin-up in the z-direction: $\alpha_{\mathrm{Z}} :=\left(\begin{array}{l}{1} \ {0}\end{array}\right)$
Spin-down in the z-direction: $\beta_{\mathrm{z}} :=\left(\begin{array}{l}{0} \ {1}\end{array}\right)$
Spin-up in the x-direction: $\alpha_{\mathrm{x}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{l}{1} \ {1}\end{array}\right)$
Spin-down in the x-direction: $\beta_{\mathrm{x}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {-1}\end{array}\right)$
In the next step, the spin-up beam (deflected toward the magnet's north pole) enters a magnet oriented in the x-direction, SGX. The $\alpha_{z}$ beam splits into $\alpha_{x}$ and $\beta_{x}$ beams of equal intensity because $\alpha_{z}$ is a superposition of the x-direction spin eigenstates, as shown below.
$\frac{1}{\sqrt{2}} \cdot\left[\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{l}{1} \ {1}\end{array}\right)+\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {-1}\end{array}\right)\right] \rightarrow\left(\begin{array}{c}{1} \ {0}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot\left(\alpha_{\mathrm{x}}+\beta_{\mathrm{x}}\right) \rightarrow\left(\begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
Next the $\alpha_{x}$ beam is directed toward a second SGZ magnet and splits into two equal $\alpha_{z}$ and $\beta_{z}$ beams. This happens because $\alpha_{x}$ is a superposition of the $\alpha_{z}$ and $\beta_{z}$ spin states.
$\frac{1}{\sqrt{2}} \cdot\left[\left(\begin{array}{l}{1} \ {0}\end{array}\right)+\left(\begin{array}{l}{0} \ {1}\end{array}\right)\right]=\left(\begin{array}{c}{0.707} \ {0.707}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot\left(\alpha_{\mathrm{z}}+\beta_{\mathrm{z}}\right)=\left(\begin{array}{c}{0.707} \ {0.707}\end{array}\right) \nonumber$
Operators
We can also use the Pauli operators (in units of h/4$\pi$) to analyze this experiment.
SGZ operator:
$\mathrm{SGZ} :=\left(\begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \nonumber$
SGX operator:
$\operatorname{SGX} :=\left(\begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \nonumber$
The probability that an $\alpha_{z}$ Ag atom will emerge spin-up after passing through a SGX magnet:
Probability amplitude:
$\alpha_{\mathrm{x}}^{\mathrm{T}} \cdot \mathrm{SGX} \cdot \mathrm{\alpha}_{\mathrm{Z}}=0.707 \nonumber$
Probability:
$\left(\alpha_{x}^{T} \cdot \operatorname{SGX} \cdot \alpha_{z}\right)^{2}=0.5 \nonumber$
The probability that an $\alpha_{z}$ Ag atom will emerge spin-down after passing through a SGX magnet:
Probability amplitude:
$\beta_{\mathrm{x}}^{\mathrm{T}} \cdot \mathrm{SGX} \cdot \alpha_{\mathrm{z}}=-0.707 \nonumber$
Probability:
$\left(\beta_{\mathrm{x}}^{\mathrm{T}} \cdot \mathrm{SGX} \cdot \alpha_{\mathrm{z}}\right)^{2}=0.5 \nonumber$
The probability that an $\alpha_{x}$ Ag atom will emerge spin-up after passing through a SGZ magnet:
Probability amplitude:
$\alpha_{z}^{T} \cdot \operatorname{SGZ} \cdot \alpha_{x}=0.707 \nonumber$
Probability:
$\left(\alpha_{z}^{T} \cdot \operatorname{SGZ} \cdot \alpha_{x}\right)^{2}=0.5 \nonumber$
The probability that an $\alpha_{x}$ Ag atom will emerge spin-down after passing through a SGZ magnet:
Probability amplitude:
$\beta_{\mathrm{z}}^{\mathrm{T}} \cdot \mathrm{SGZ} \cdot \alpha_{\mathrm{x}}=-0.707 \nonumber$
Probability:
$\left(\beta_{\mathrm{z}}^{\mathrm{T}} \cdot \mathrm{SGZ} \cdot \alpha_{\mathrm{x}}\right)^{2}=0.5 \nonumber$
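These matrix elements are easy to reproduce numerically. Here is a minimal NumPy sketch (not part of the original Mathcad worksheet; variable names are ours) for an $\alpha_{z}$ atom entering an SGX magnet:

```python
import numpy as np

alpha_z = np.array([1.0, 0.0])
alpha_x = np.array([1.0, 1.0]) / np.sqrt(2)
beta_x = np.array([1.0, -1.0]) / np.sqrt(2)
SGX = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli sigma_x

# Amplitudes <out|SGX|in> for an alpha_z atom entering an SGX magnet
amp_up = alpha_x @ SGX @ alpha_z   # ~0.707
amp_dn = beta_x @ SGX @ alpha_z    # ~-0.707

# Squaring either amplitude gives the 50% beam-splitting probability
print(amp_up**2, amp_dn**2)  # both ~0.5
```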
In examining the figure above we note that the SGX magnet destroys the entering $\alpha_{z}$ state, creating a superposition of spin-up and spin-down in the x-direction. Again, measurement forces the system into one of the eigenstates of the measurement operator.
Density Operator (Matrix) Approach
A more general analysis is based on the density operator (matrix), given for a pure state by the outer product $|\Psi\rangle\langle\Psi|$. It is especially important because it can also represent mixtures, which, as noted above, cannot be represented by wave functions.
For example, the probability that an $\alpha_{z}$ spin system will emerge in the $\alpha_{x}$ channel of a SGX magnet is equal to the trace of the product of the density matrices representing the $\alpha_{z}$ and $\alpha_{x}$ states as shown below.
$\begin{split} \left|\left\langle\alpha_{x} | \alpha_{z}\right\rangle\right|^{2}&=\left\langle\alpha_{z} | \alpha_{x}\right\rangle\left\langle\alpha_{x} | \alpha_{z}\right\rangle\&=\sum_{i}\left\langle\alpha_{z} | i\right\rangle\left\langle i | \alpha_{x}\right\rangle\left\langle\alpha_{x} | \alpha_{z}\right\rangle\&=\sum_{i}\left\langle i | \alpha_{x}\right\rangle\left\langle\alpha_{x} | \alpha_{z}\right\rangle\left\langle\alpha_{z} | i\right\rangle \&=\operatorname{Tr}\left(\left|\alpha_{x}\right\rangle\left\langle\alpha_{x} | \alpha_{z}\right\rangle\left\langle\alpha_{z}\right|\right)=\operatorname{Tr}\left(\widehat{\rho_{\alpha_{x}}} \widehat{\rho_{\alpha_{z}}}\right)\end{split} \nonumber$
where the completeness relation $\sum_{i}|i\rangle\langle i|=1$ has been employed.
Density matrices for spin-up and spin-down in the z-direction:
$\rho_{\alpha z} :=\alpha_{z} \cdot \alpha_{z}^{T} \qquad \rho_{\beta z} :=\beta_{z} \cdot \beta_{z}^{T} \nonumber$
Density matrices for spin-up and spin-down in the x-direction:
$\rho_{\alpha \mathrm{x}} :=\alpha_{\mathrm{x}} \cdot \alpha_{\mathrm{x}}^{\mathrm{T}} \qquad \rho_{\beta \mathrm{x}} :=\beta_{\mathrm{x}} \cdot \beta_{\mathrm{x}}^{\mathrm{T}} \nonumber$
An unpolarized spin system can be represented by a 50-50 mixture of any two orthogonal spin density matrices. Below it is shown that the z-direction and x-direction mixtures give the same answer.
$\rho_{\operatorname{mix}} :=\frac{1}{2} \cdot \rho_{\alpha z}+\frac{1}{2} \cdot \rho_{\beta z}=\frac{1}{2} \cdot \rho_{\alpha \mathrm{x}}+\frac{1}{2} \cdot \rho_{\beta \mathrm{x}}=\left(\begin{array}{cc}{0.5} & {0} \ {0} & {0.5}\end{array}\right) \nonumber$
Now we re-analyze the Stern-Gerlach experiment using the density operator (matrix) approach.
The probability that an unpolarized spin system will emerge in the $\alpha_{z}$ channel of a SGZ magnet is 0.5:
$\operatorname{tr}\left(\rho_{\alpha z} \cdot \rho_{\operatorname{mix}}\right)=0.5 \nonumber$
The probability that the $\alpha_{z}$ beam will emerge in the $\alpha_{x}$ channel of a SGX magnet is 0.5:
$\operatorname{tr}\left(\rho_{\mathrm{\alpha x}} \cdot \rho_{\mathrm{\alpha z}}\right)=0.5 \nonumber$
The probability that the $\alpha_{x}$ beam will emerge in the $\alpha_{z}$ channel of the final SGZ magnet is 0.5:
$\operatorname{tr}\left(\rho_{\alpha z} \cdot \rho_{\alpha x}\right)=0.5 \nonumber$
The probability that the $\alpha_{x}$ beam will emerge in the $\beta_{z}$ channel of the final SGZ magnet is 0.5:
$\operatorname{tr}\left(\rho_{\beta z} \cdot \rho_{\mathrm{\alpha x}}\right)=0.5 \nonumber$
After the final SGZ magnet, 1/8 of the original Ag atoms emerge in the $\alpha_{z}$ channel and 1/8 in the $\beta_{z}$ channel.
$\operatorname{tr}\left(\rho_{\alpha z} \cdot \rho_{\alpha x}\right) \cdot \operatorname{tr}\left(\rho_{\alpha x} \cdot\rho_{\alpha z}\right) \cdot \operatorname{tr}\left(\rho_{\alpha z} \cdot \rho_{\operatorname{mix}}\right)=0.125 \ \operatorname{tr}\left(\rho_{\beta z} \cdot \rho_{\alpha x}\right) \cdot \operatorname{tr}\left(\rho_{\alpha x} \cdot \rho_{\alpha z}\right) \cdot \operatorname{tr}\left(\rho_{\alpha z} \cdot \rho_{\operatorname{mix}}\right)=0.125 \nonumber$
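The chain of 0.5 transmission probabilities above can be checked with a few lines of NumPy (a sketch in Python rather than the worksheet's Mathcad; the names are ours):

```python
import numpy as np

alpha_z = np.array([1.0, 0.0])
beta_z = np.array([0.0, 1.0])
alpha_x = np.array([1.0, 1.0]) / np.sqrt(2)

# Pure-state density matrices and the unpolarized mixture
rho_az = np.outer(alpha_z, alpha_z)
rho_bz = np.outer(beta_z, beta_z)
rho_ax = np.outer(alpha_x, alpha_x)
rho_mix = 0.5 * rho_az + 0.5 * rho_bz

# SGZ -> SGX -> SGZ: each stage transmits half the beam
p_az = np.trace(rho_az @ rho_ax) * np.trace(rho_ax @ rho_az) * np.trace(rho_az @ rho_mix)
p_bz = np.trace(rho_bz @ rho_ax) * np.trace(rho_ax @ rho_az) * np.trace(rho_az @ rho_mix)
print(p_az, p_bz)  # ~0.125 each
```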
1.103: Bloch Sphere
The eigenfunctions of the Pauli spin matrices
$\sigma_{\mathrm{Z}} :=\left(\begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \qquad \sigma_{\mathrm{x}} :=\left(\begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \qquad \sigma_{\mathrm{y}} :=\left(\begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \nonumber$
are presented mathematically and shown on the Bloch sphere below. The Xu state is highlighted.
$Z_{u} :=\left(\begin{array}{l}{1} \ {0}\end{array}\right) \qquad Z_{d} :=\left(\begin{array}{l}{0} \ {1}\end{array}\right) \ \mathrm{X}_{\mathrm{u}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{l}{1} \ {1}\end{array}\right) \qquad \mathrm{X}_{\mathrm{d}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {-1}\end{array}\right) \ \mathrm{Y}_{\mathrm{u}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{l}{1} \ {\mathrm{i}}\end{array}\right) \qquad \mathrm{Y}_{\mathrm{d}} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{c}{1} \ {-\mathrm{i}}\end{array}\right) \nonumber$
This figure was taken from demonstrations.wolfram.com/QubitsOnThePoincareBlochSphere/, a contribution by Rudolf Muradian.
The Bloch sphere is prepared in Cartesian coordinates using Mathcad graphics.
$\text{numpts}:=100 \qquad \mathrm{i} :=0 \ldots \text { numpts } \qquad \mathrm{j} :=0 \ldots \text{numpts} \ \theta_{\mathrm{i}} :=\frac{\pi \cdot \mathrm{i}}{\text { numpts }} \quad \phi_{\mathrm{j}} :=\frac{2 \cdot \pi \cdot \mathrm{j}}{\text { numpts }} \ \mathrm{X}_{\mathrm{i}, \mathrm{j}} :=\sin \left(\theta_{\mathrm{i}}\right) \cdot \cos \left(\phi_{\mathrm{j}}\right) \qquad \mathrm{Y}_{\mathrm{i}, \mathrm{j}} :=\sin \left(\theta_{\mathrm{i}}\right) \cdot \sin \left(\phi_{\mathrm{j}}\right) \qquad \mathrm{Z}_{\mathrm{i}, \mathrm{j}} :=\cos \left(\theta_{\mathrm{i}}\right) \nonumber$
Next, the coordinates of a quantum qubit are calculated and displayed on the Bloch sphere as a white dot. As the polar and azimuthal angles are changed, you will need to rotate the figure to see where the white dot is on the surface of the Bloch sphere.
$\theta 1 :=\frac{\pi}{2} \qquad \phi 1 :=0 \ \Psi(\theta 1, \phi 1) :=\cos \left(\frac{\theta 1}{2}\right) \cdot\left(\begin{array}{c}{1} \ {0}\end{array}\right)+\exp (\mathrm{i} \cdot \phi 1) \cdot \sin \left(\frac{\theta 1}{2}\right) \cdot\left(\begin{array}{l}{0} \ {1}\end{array}\right) \quad \Psi(\theta 1, \phi 1)=\left(\begin{array}{l}{0.707} \ {0.707}\end{array}\right) \ \mathrm{XX}_{\mathrm{i}, \mathrm{j}}=\sin (\theta 1) \cdot \cos (\phi 1) \qquad \mathrm{YY}_{\mathrm{i}, \mathrm{j}} :=\sin (\theta 1) \cdot \sin (\phi 1) \qquad \mathrm{ZZ}_{\mathrm{i}, \mathrm{j}} :=\cos (\theta 1) \nonumber$
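The same qubit coordinates can be reproduced outside Mathcad. The following NumPy sketch (the function name qubit_point is ours) maps the polar and azimuthal angles to the state vector and its Cartesian Bloch-sphere point:

```python
import numpy as np

def qubit_point(theta, phi):
    """Return the qubit state vector and its Cartesian Bloch-sphere point."""
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    xyz = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)])
    return psi, xyz

# theta = pi/2, phi = 0 is the X_u state on the sphere's equator
psi, xyz = qubit_point(np.pi / 2, 0.0)
print(np.round(psi, 3))  # [0.707+0.j 0.707+0.j]
print(np.round(xyz, 3))  # [1. 0. 0.]
```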
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.101%3A_Related_Analysis_of_the_Stern-Gerlach_Experiment.txt
|
Silver atoms are deflected by an inhomogeneous magnetic field because of the two-valued magnetic moment associated with their unpaired 5s electron ([Kr]$4d^{10}5s^1$). The beam of silver atoms entering the Stern-Gerlach magnet oriented in the z-direction (SGZ) on the left is unpolarized, meaning it is a mixture of randomly polarized Ag atoms. A mixture cannot be represented by a wave function; it requires a density matrix, as will be shown later.
This situation is exactly analogous to the three-polarizer demonstration. Light emerging from an incandescent light bulb is unpolarized, a mixture of all possible polarization angles, so we can't write a wave function for it. The first Stern-Gerlach magnet plays the same role as the first polarizer: it forces the Ag atoms into one of the measurement eigenstates, spin-up or spin-down in the z-direction. The only difference is that in the three-polarizer demonstration only one state was created, vertical polarization. Both demonstrations illustrate that the only values observed in an experiment are the eigenvalues of the measurement operator.
To continue with the analysis of the Stern-Gerlach demonstration we need vectors to represent the various spin states of the Ag atoms.
Spin Eigenfunctions
Spin-up in the z-direction:
$\alpha_z = \begin{pmatrix} 1 \ 0 \end{pmatrix} \nonumber$
Spin-down in the z-direction:
$\beta_z = \begin{pmatrix} 0 \ 1 \end{pmatrix} \nonumber$
Spin-up in the x-direction:
$\alpha_x = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \nonumber$
Spin-down in the x-direction:
$\beta_x = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \nonumber$
In the next step, the spin-up beam (deflected toward the magnet's north pole) enters a magnet oriented in the x-direction, SGX. The αz beam splits into αx and βx beams of equal intensity because αz is a superposition of the x-direction spin eigenstates, as shown below.
$\begin{matrix} \frac{1}{ \sqrt{2}} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} + \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \right] \rightarrow \begin{pmatrix} 1 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left( \alpha_x + \beta_x \right) \rightarrow \begin{pmatrix} 1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
Next the αx beam is directed toward a second SGZ magnet and splits into two equal αz and βz beams. This happens because αx is a superposition of the αz and βz spin states.
$\begin{matrix} \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left( \alpha_z + \beta_z \right) = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} \end{matrix} \nonumber$
Operators
We can also use the Pauli operators (in units of h/4π) to analyze this experiment.
SGZ operator:
$\text{SGZ} = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \nonumber$
SGX operator:
$\text{SGX} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \nonumber$
The probability that an αz Ag atom will emerge spin-up after passing through a SGX magnet:
Probability amplitude:
$\alpha_x^T \text{SGX} \alpha_z = 0.707 \nonumber$
Probability:
$\left( \alpha_x^T \text{SGX} \alpha_z \right)^2 = 0.5 \nonumber$
The probability that an αz Ag atom will emerge spin-down after passing through a SGX magnet:
Probability amplitude:
$\beta_x^T \text{SGX} \alpha_z = -0.707 \nonumber$
Probability:
$\left( \beta_x^T \text{SGX} \alpha_z \right)^2 = 0.5 \nonumber$
The probability that an αx Ag atom will emerge spin-up after passing through a SGZ magnet:
Probability amplitude:
$\alpha_z^T \text{SGZ} \alpha_x = 0.707 \nonumber$
Probability:
$\left( \alpha_z^T \text{SGZ} \alpha_x \right)^2 = 0.5 \nonumber$
The probability that an αx Ag atom will emerge spin-down after passing through a SGZ magnet:
Probability amplitude:
$\beta_z^T \text{SGZ} \alpha_x = -0.707 \nonumber$
Probability:
$\left( \beta_z^T \text{SGZ} \alpha_x \right)^2 = 0.5 \nonumber$
In examining the figure above we note that the SGX magnet destroys the entering αz state, creating a superposition of spin-up and spin-down in the x-direction. Again, measurement forces the system into one of the eigenstates of the measurement operator.
Density Operator (Matrix) Approach
A more general analysis is based on the density operator (matrix), given for a pure state by the outer product $|\Psi\rangle\langle\Psi|$. It is especially important because it can also represent mixtures, which, as noted above, cannot be represented by wave functions.
For example, the probability that an αz spin system will emerge in the αx channel of a SGX magnet is equal to the trace of the product of the density matrices representing the αz and αx states as shown below.
$\left| \left \langle \alpha_x | \alpha_z \right \rangle \right|^2 = \left \langle \alpha_z | \alpha_x \right \rangle\left \langle \alpha_x | \alpha_z \right \rangle = \sum_i \left \langle \alpha_z | i \right \rangle \left \langle i | \alpha_x \right \rangle \left \langle \alpha_x | \alpha_z \right \rangle = \sum_i \left \langle i | \alpha_x \right \rangle \left \langle \alpha_x | \alpha_z \right \rangle \left \langle \alpha_z | i \right \rangle = Tr \left( | \alpha_x \rangle \langle \alpha_x | \alpha_z \rangle \langle \alpha_z | \right) = Tr \left( \widehat{ \rho_{ \alpha x}} \widehat{ \rho_{ \alpha z}} \right) \nonumber$
where the completeness relation $\sum_{i} |i \rangle \langle i | =1$ has been employed.
Density matrices for spin-up and spin-down in the z-direction:
$\begin{matrix} \rho_{ \alpha z} = \alpha_z \alpha_z^T & \rho_{ \beta z} = \beta_z \beta_z^T \end{matrix} \nonumber$
Density matrices for spin-up and spin-down in the x-direction:
$\begin{matrix} \rho_{ \alpha x} = \alpha_x \alpha_x^T & \rho_{ \beta x} = \beta_x \beta_x^T \end{matrix} \nonumber$
An unpolarized spin system can be represented by a 50-50 mixture of any two orthogonal spin density matrices. Below it is shown that the z-direction and x-direction mixtures give the same answer.
$\rho_{mix} = \frac{1}{2} \rho_{ \alpha z} + \frac{1}{2} \rho_{ \beta z} = \frac{1}{2} \rho_{ \alpha x} + \frac{1}{2} \rho_{ \beta x} = \begin{pmatrix} 0.5 & 0 \ 0 & 0.5 \end{pmatrix} \nonumber$
Now we re-analyze the Stern-Gerlach experiment using the density operator (matrix) approach.
The probability that an unpolarized spin system will emerge in the αz channel of a SGZ magnet is 0.5:
$\text{tr} \left( \rho_{ \alpha z} \rho_{ \text{mix}} \right) = 0.5 \nonumber$
The probability that the αz beam will emerge in the αx channel of a SGX magnet is 0.5:
$\text{tr} \left( \rho_{ \alpha x} \rho_{ \alpha z} \right) = 0.5 \nonumber$
The probability that the αx beam will emerge in the αz channel of the final SGZ magnet is 0.5:
$\text{tr} \left( \rho_{ \alpha z} \rho_{ \alpha x} \right) = 0.5 \nonumber$
The probability that the αx beam will emerge in the βz channel of the final SGZ magnet is 0.5:
$\text{tr} \left( \rho_{ \beta z} \rho_{ \alpha x} \right) = 0.5 \nonumber$
After the final SGZ magnet, 1/8 of the original Ag atoms emerge in the αz channel and 1/8 in the βz channel.
$\begin{matrix} \text{tr} \left( \rho_{ \alpha z} \rho_{ \alpha x} \right) \text{tr} \left( \rho_{ \alpha x} \rho_{ \alpha z} \right) \text{tr} \left( \rho_{ \alpha z} \rho_{ \text{mix}} \right) = 0.125 & \text{tr} \left( \rho_{ \beta z} \rho_{ \alpha x} \right) \text{tr} \left( \rho_{ \alpha x} \rho_{ \alpha z} \right) \text{tr} \left( \rho_{ \alpha z} \rho_{ \text{mix}} \right) = 0.125 \end{matrix} \nonumber$
1.107: The Bloch Sphere
The eigenfunctions of the Pauli spin matrices
$\begin{matrix} \sigma_z = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & \sigma_x = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \sigma_y = \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} \end{matrix} \nonumber$
are presented mathematically and shown on the Bloch sphere below. The Xu state is highlighted.
$\begin{matrix} Z_u = \begin{pmatrix} 1 \ 0 \end{pmatrix} & Z_d = \begin{pmatrix} 0 \ 1 \end{pmatrix} & X_u = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & X_d = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & Y_u = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} & Y_d = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} \end{matrix} \nonumber$
This figure was taken from demonstrations.wolfram.com/QubitsOnThePoincareBlochSphere/, a contribution by Rudolf Muradian. The Bloch sphere is prepared in Cartesian coordinates using Mathcad graphics.
$\begin{matrix} \text{numpts} = 100 & i = 0 .. \text{numpts} & j = 0 .. \text{numpts} & \theta_i = \frac{ \pi i}{ \text{numpts}} & \phi_j = \frac{2 \pi j}{ \text{numpts}} \end{matrix} \nonumber$
$\begin{matrix} X_{i,~j} = \sin \left( \theta_i \right) \cos \left( \phi_j \right) & Y_{i,~j} = \sin \left( \theta_i \right) \sin \left( \phi_j \right) & Z_{i,~j} = \cos \left( \theta_i \right) \end{matrix} \nonumber$
Next, the coordinates of a quantum qubit are calculated and displayed on the Bloch sphere as a white dot. As the polar and azimuthal angles are changed, you will need to rotate the figure to see where the white dot is on the surface of the Bloch sphere.
$\begin{matrix} \theta 1 = \frac{ \pi}{2} & \phi 1 = 0 & \Psi ( \theta 1,~ \phi 1 ) = \cos \left( \frac{ \theta 1}{2} \right) \begin{pmatrix} 1 \ 0 \end{pmatrix} + \text{exp} ( i \phi 1) \sin \left( \frac{ \theta 1}{2} \right) \begin{pmatrix} 0 \ 1 \end{pmatrix} & \Psi ( \theta 1,~ \phi 1) = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} XX_{i,~j} = \sin ( \theta 1) \cos ( \phi 1) & YY_{i,~j} = \sin ( \theta 1) \sin ( \phi 1) & ZZ_{i,~j} = \cos ( \theta 1) \end{matrix} \nonumber$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.104%3A_88._Related_Analysis_of_the_Stern-Gerlach_Experiment.txt
|
A spin‐1/2 state is represented by the following density matrix.
$\rho = \begin{pmatrix} \frac{2}{3} & \frac{1}{6} - \frac{i}{3} \ \frac{1}{6} + \frac{i}{3} & \frac{1}{3} \end{pmatrix} \nonumber$
Show that this is a mixed state.
$\begin{matrix} \text{tr} ( \rho ) \rightarrow 1 & \text{tr} \left( \rho^2 \right) \rightarrow \frac{5}{6} \end{matrix} \nonumber$
A value of less than unity for the trace of the square of the density matrix indicates a mixed state.
Given the Pauli matrices,
$\begin{matrix} \sigma_x = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \sigma_y = \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} & \sigma_z = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
calculate the Bloch vector and its magnitude.
$\begin{matrix} R_x = \text{tr} \left( \rho \sigma_x \right) \rightarrow \frac{1}{3} & R_y = \text{tr} \left( \rho \sigma_y \right) \rightarrow \frac{2}{3} & R_z = \text{tr} \left( \rho \sigma_z \right) \rightarrow \frac{1}{3} & R = \sqrt{R_x^2 + R_y^2 + R_z^2} = 0.816 \end{matrix} \nonumber$
A value of less than unity for the magnitude of the Bloch vector also indicates a mixed state.
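Both diagnostics can be confirmed numerically. Here is a NumPy sketch (Python rather than the Mathcad of the worksheet; variable names are ours) computing the Bloch vector from the trace formulas above:

```python
import numpy as np

rho = np.array([[2/3, 1/6 - 1j/3],
                [1/6 + 1j/3, 1/3]])
sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# Bloch vector components R_k = tr(rho sigma_k); the imaginary parts vanish
R = np.array([np.trace(rho @ s).real for s in (sigma_x, sigma_y, sigma_z)])
print(np.round(R, 3))               # [0.333 0.667 0.333]
print(round(np.linalg.norm(R), 3))  # 0.816 < 1, so the state is mixed
```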
Calculate the eigenvalues, λ, of ρ and use them to calculate the entropy of the state represented by ρ.
$\begin{matrix} \lambda = \text{eigenvals} ( \rho ) = \begin{pmatrix} 0.908 \ 0.092 \end{pmatrix} & - \sum_{i = 1}^{2} \left( \lambda_i \text{log} \left( \lambda_i,~ 2 \right) \right) = 0.442 \end{matrix} \nonumber$
Use the magnitude of the Bloch vector to calculate the entropy (Haroche and Raimond, p. 168).
$- \frac{1 + R}{2} \text{log} \left( \frac{1 + R}{2},~2 \right) - \frac{1 - R}{2} \text{log} \left( \frac{1 - R}{2},~2 \right) = 0.442 \nonumber$
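The agreement of the two entropy routes is easy to check in NumPy (a sketch, not the original worksheet; it recovers $|R|$ from the qubit purity relation $\text{tr}\rho^2 = (1 + R^2)/2$):

```python
import numpy as np

rho = np.array([[2/3, 1/6 - 1j/3],
                [1/6 + 1j/3, 1/3]])

# Route 1: von Neumann entropy from the eigenvalues of rho
lam = np.linalg.eigvalsh(rho)
S_eig = -sum(l * np.log2(l) for l in lam)

# Route 2: the same entropy from the Bloch-vector magnitude,
# recovered via the purity relation tr(rho^2) = (1 + R^2)/2
R = np.sqrt(2 * np.trace(rho @ rho).real - 1)
p = (1 + R) / 2
S_bloch = -p * np.log2(p) - (1 - p) * np.log2(1 - p)

print(round(S_eig, 3), round(S_bloch, 3))  # 0.442 0.442
```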
Calculate the expectation values for the measurements of spin in the x‐ and z‐directions.
$\begin{matrix} \text{tr} \left( \rho \sigma_x \right) \rightarrow \frac{1}{3} & \text{tr} \left( \rho \sigma_z \right) \rightarrow \frac{1}{3} \end{matrix} \nonumber$
$\begin{matrix} \text{Sxu} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & \text{Sxd} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & \text{Szu} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{Szd} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Form projection operators for spin‐up and spin‐down in the x‐ and z‐directions using the spin states provided above, and calculate the probabilities for spin‐up and spin‐down measurements in each direction. The eigenvalue for Sxu and Szu is +1 and the eigenvalue for Sxd and Szd is ‐1. It is easy to see that the results below are consistent with the expectation value calculations.
$\begin{matrix} \text{Sxu Sxu}^T \rightarrow \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \ \frac{1}{2} & \frac{1}{2} \end{pmatrix} & \text{Sxd Sxd}^T \rightarrow \begin{pmatrix} \frac{1}{2} & - \frac{1}{2} \ - \frac{1}{2} & \frac{1}{2} \end{pmatrix} & \text{Szu Szu}^T \rightarrow \begin{pmatrix} 1 & 0 \ 0 & 0 \end{pmatrix} & \text{Szd Szd}^T \rightarrow \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{tr} \left( \text{Sxd Sxd}^T \rho \right) \rightarrow \frac{1}{3} & \text{tr} \left( \text{Sxu Sxu}^T \rho \right) \rightarrow \frac{2}{3} & \text{tr} \left( \text{Szd Szd}^T \rho \right) \rightarrow \frac{1}{3} & \text{tr} \left( \text{Szu Szu}^T \rho \right) \rightarrow \frac{2}{3} \ \text{Sxd}^T \rho \text{Sxd} \rightarrow \frac{1}{3} & \text{Sxu}^T \rho \text{Sxu} \rightarrow \frac{2}{3} & \text{Szd}^T \rho \text{Szd} \rightarrow \frac{1}{3} & \text{Szu}^T \rho \text{Szu} \rightarrow \frac{2}{3} \end{matrix} \nonumber$
Calculate the eigenvectors of the density matrix and use them to diagonalize it, showing that the diagonal elements are the eigenvalues as calculated above.
$\begin{matrix} \text{Vecs = eigenvecs} ( \rho ) & \text{Vecs} = \begin{pmatrix} 0.839 & -0.243 + 0.487i \ 0.243 + 0.487i & 0.839 \end{pmatrix} \ \left( \overline{ \text{Vecs}} \right)^T \rho \text{Vecs} = \begin{pmatrix} 0.908 & 0 \ 0 & 0.092 \end{pmatrix} & \text{Vecs}^{-1} \rho \text{Vecs} = \begin{pmatrix} 0.908 & 0 \ 0 & 0.092 \end{pmatrix} \end{matrix} \nonumber$
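The same diagonalization can be reproduced with NumPy's Hermitian eigensolver (a sketch; note that eigh returns the eigenvalues in ascending order, the reverse of the Mathcad output):

```python
import numpy as np

rho = np.array([[2/3, 1/6 - 1j/3],
                [1/6 + 1j/3, 1/3]])

lam, V = np.linalg.eigh(rho)   # columns of V are orthonormal eigenvectors
D = V.conj().T @ rho @ V       # unitary similarity transform diagonalizes rho

print(np.round(lam, 3))        # [0.092 0.908]
print(np.round(D.real, 3))     # diag(0.092, 0.908); off-diagonals vanish
```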
Using the density matrix for the pure state Sxu, show that its entropy is 0.
$\rho = \frac{1}{2} \begin{pmatrix} 1 & 1 \ 1 & 1 \end{pmatrix} \nonumber$
Show that this is a pure state.
$\begin{matrix} \text{tr} ( \rho ) \rightarrow 1 & \text{tr} \left( \rho^2 \right) \rightarrow 1 \end{matrix} \nonumber$
Calculate the Bloch vector and its magnitude.
$\begin{matrix} R_x = \text{tr} \left( \rho \sigma_x \right) \rightarrow 1 & R_y = \text{tr} \left( \rho \sigma_y \right) \rightarrow 0 & R_z = \text{tr} \left( \rho \sigma_z \right) \rightarrow 0 & R = \sqrt{ R_x^2 + R_y^2 + R_z^2} = 1 \end{matrix} \nonumber$
A value of unity for the magnitude of the Bloch vector indicates a pure state.
Calculate the eigenvalues of ρ and use them to calculate the entropy of the state represented by ρ.
$\begin{matrix} \lambda = \text{eigenvals} ( \rho ) = \begin{pmatrix} 1 \ 0 \end{pmatrix} & - \sum_{i = 1}^{2} \left( \lambda_i \text{log} \left( \lambda_i,~ 2 \right) \right) = 0 \end{matrix} \nonumber$
Two‐particle, entangled Bell states are also pure states. This is demonstrated for the anti‐symmetric singlet state, Ψm.
$\begin{matrix} | \Psi_m \rangle = \frac{1}{ \sqrt{2}} \left[ | \uparrow_1 \rangle | \downarrow_2 \rangle - | \downarrow_1 \rangle | \uparrow_2 \rangle \right] = \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} - \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ -1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \Psi_m = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ -1 \ 0 \end{pmatrix} & \rho = \Psi_m \Psi_m^T & \rho = \begin{pmatrix} 0 & 0 & 0 & 0 \ 0 & 0.5 & -0.5 & 0 \ 0 & -0.5 & 0.5 & 0 \ 0 & 0 & 0 & 0 \end{pmatrix} & \text{tr} ( \rho ) = 1 & \text{tr} \left( \rho^2 \right) = 1 \end{matrix} \nonumber$
One of the eigenvalues of ρ is 1 and the rest are 0, so the entropy is 0.
$\begin{matrix} \lambda = \text{eigenvals} ( \rho ) & \lambda = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
When a two‐spin system is in an entangled superposition such as Ψm, the individual spins states are not definite. Tracing (see below) the composite density matrix over spin 2 yields the following spin density matrix for spin 1 showing that it is a mixed state with one unit of entropy.
$\begin{matrix} \rho_1 = \frac{1}{2} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix}^T + \frac{1}{2} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix}^T \rightarrow \begin{pmatrix} \frac{1}{2} & 0 \ 0 & \frac{1}{2} \end{pmatrix} & \text{tr} \left( \rho_1 \right) \rightarrow 1 & \text{tr} \left( \rho_1^2 \right) \rightarrow \frac{1}{2} \ \lambda = \text{eigenvals} \left( \rho_1 \right) = \begin{pmatrix} 0.5 \ 0.5 \end{pmatrix} & - \sum_{i=1}^{2} \left( \lambda_i \text{log} \left( \lambda_i,~2 \right) \right) = 1 \end{matrix} \nonumber$
The same holds for spin 2.
In this entangled state we know the state of the composite system (zero entropy), but do not know the states of the individual spins (two units of entropy).
The trace operation is outlined below:
$\begin{matrix} | \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ \textcolor{red}{ | \uparrow_1 \rangle} \textcolor{blue}{ | \downarrow_2 \rangle} - \textcolor{red}{ | \downarrow_1 \rangle} \textcolor{blue}{ | \uparrow_2 \rangle} \right] \ \widehat{ \rho_{12}} = | \Psi \rangle \langle \Psi | = \frac{1}{2} \left[ \textcolor{red}{ | \uparrow_1 \rangle} \textcolor{blue}{ | \downarrow_2 \rangle} - \textcolor{red}{ | \downarrow_1 \rangle} \textcolor{blue}{ | \uparrow_2 \rangle} \right] \left[ \textcolor{red}{ \langle \uparrow_1 |} \textcolor{blue}{ \langle \downarrow_2 |} - \textcolor{red}{ \langle \downarrow_1 |} \textcolor{blue}{ \langle \uparrow_2 |} \right] = \frac{1}{2} \begin{pmatrix} 0 & 0 & 0 & 0 \ 0 & 1 & -1 & 0 \ 0 & -1 & 1 & 0 \ 0 & 0 & 0 & 0 \end{pmatrix} \ \widehat{ \rho_{1}} = \langle \textcolor{blue}{ \uparrow_2} | \Psi \rangle \langle \Psi | \textcolor{blue}{ \uparrow_2} \rangle + \langle \textcolor{blue}{ \downarrow_2} | \Psi \rangle \langle \Psi | \textcolor{blue}{ \downarrow_2} \rangle = \frac{1}{2} \left[ | \textcolor{red}{ \downarrow_1} \rangle \langle \textcolor{red}{ \downarrow_1} | + | \textcolor{red}{ \uparrow_1} \rangle \langle \textcolor{red}{ \uparrow_1} | \right] = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \ \widehat{ \rho_{2}} = \langle \textcolor{red}{ \uparrow_1} | \Psi \rangle \langle \Psi | \textcolor{red}{ \uparrow_1} \rangle + \langle \textcolor{red}{ \downarrow_1} | \Psi \rangle \langle \Psi | \textcolor{red}{ \downarrow_1} \rangle = \frac{1}{2} \left[ | \textcolor{blue}{ \downarrow_2} \rangle \langle \textcolor{blue}{ \downarrow_2} | + | \textcolor{blue}{ \uparrow_2} \rangle \langle \textcolor{blue}{ \uparrow_2} | \right] = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
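The partial trace outlined above can be verified numerically. In this NumPy sketch (the reshape-and-trace idiom is ours, not from the worksheet), tracing the singlet's density matrix over spin 2 yields the maximally mixed state with one bit of entropy:

```python
import numpy as np

up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])

# Singlet state of two spins and its 4x4 density matrix
psi = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
rho12 = np.outer(psi, psi)

# Trace out spin 2: view rho12 with indices (i1, i2, j1, j2),
# then sum over the diagonal of the spin-2 indices (i2 = j2)
rho1 = rho12.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho1)  # 0.5 * identity: spin 1 alone is maximally mixed

lam = np.linalg.eigvalsh(rho1)
S1 = -sum(l * np.log2(l) for l in lam)
print(S1)  # 1.0 bit of entropy for the reduced state
```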
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.108%3A_Density_Matrix_Bloch_Vector_and_Entropy.txt
|
This tutorial will use the concepts of state vector and state operator to examine superpositions and mixed states using the matrix formulation of quantum mechanics. The state vectors required are given immediately below. An unsubscripted spin arrow refers to spin in the z‐direction.
$\begin{matrix} | \uparrow \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \langle \uparrow | = \begin{pmatrix} 1 & 0 \end{pmatrix} & | \downarrow \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \langle \downarrow | = \begin{pmatrix} 0 & 1 \end{pmatrix} & | \uparrow_x \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & | \downarrow_x \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & \langle \downarrow_x | = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & -1 \end{pmatrix} \end{matrix} \nonumber$
These same states are now defined using Mathcad syntax.
$\begin{matrix} \text{Szu} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{Szd} = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \text{Sxu} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & \text{Sxd} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \ \text{Szu}^T = \begin{pmatrix} 1 & 0 \end{pmatrix} & \text{Szd}^T = \begin{pmatrix} 0 & 1 \end{pmatrix} & \text{Sxu}^T = \begin{pmatrix} 0.707 & 0.707 \end{pmatrix} & \text{Sxd}^T = \begin{pmatrix} 0.707 & -0.707 \end{pmatrix} \end{matrix} \nonumber$
The identity and z‐ and x‐direction spin operators (in units of h/4π) will be used.
$\begin{matrix} I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \sigma_z = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & \sigma_x = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
Dirac has said that the superposition principle is at the heart of quantum mechanics. He elaborated in The Physical Principles of Quantum Mechanics.
The nature of the relationships which the superposition principle requires to exist between states of any system is of a kind that cannot be explained in terms of familiar physical concepts. One cannot in the classical sense picture a system being partly in each of two states and see the equivalence of this to the system being completely in some other state. There is an entirely new idea involved, to which one must get accustomed and in terms of which one must proceed to build up an exact mathematical theory, without having any detailed classical picture.
N. David Mermin [American Journal of Physics 71, 29, (2003)] succinctly summarized Diracʹs statement when he wrote, ʺSuperpositions have no classical interpretation. They are sui generis, an intrinsically quantum‐mechanical construct...ʺ
We now illustrate the properties of a superposition with a specific example. A spin system with spin up in the x‐direction is prepared using a Stern‐Gerlach apparatus. We see from the mathematics below that this system can also be considered to be an even superposition of spin up and spin down in the z‐direction.
$\begin{matrix} | \uparrow_x \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \left[ | \uparrow \rangle + | \downarrow \rangle \right] = | \Psi \rangle & \Psi = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Subsequent spin measurements confirm what is expressed mathematically above. If the spin system is passed through a Stern‐Gerlach magnet oriented in the x‐direction, the beam bends in one direction, the direction associated with spin up in the x‐direction. However, if the system is passed through a Stern‐Gerlach magnet oriented in the z‐direction, it is split into two beams, one associated with spin up in the z‐direction and one associated with spin down in the z‐direction. The expectation values calculated below are consistent with these experimental results.
$\begin{matrix} \Psi^T \sigma_x \Psi = 1 & \Psi^T \sigma_z \Psi = 0 \end{matrix} \nonumber$
The state operator is a more versatile quantum construct than the mathematically simpler state vector. It is formed as the outer product of the state vector with itself.
$\begin{matrix} \hat{ \rho} = | \uparrow_x \rangle \langle \uparrow_x | = \frac{1}{2} \begin{pmatrix} 1 \ 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 1 \ 1 & 1 \end{pmatrix} = | \Psi \rangle \langle \Psi | & \rho = \frac{1}{2} \begin{pmatrix} 1 & 1 \ 1 & 1 \end{pmatrix} \end{matrix} \nonumber$
Expectation values can also be calculated using the state operator.
$\langle \Psi | \hat{A} | \Psi \rangle = \sum_{i} \langle \Psi | \hat{A} | i \rangle \langle i | \Psi \rangle = \sum_{i} \langle i | \Psi \rangle \langle \Psi | \hat{A} | i \rangle = Tr \left( \hat{ \rho}_{ \Psi} \hat{A} \right) ~ \text{where} ~ \sum_{i} | i \rangle \langle i | = Identity \nonumber$
$\begin{matrix} \text{tr} \left( \rho \sigma_x \right) = 1 & \text{tr} \left( \rho \sigma_z \right) = 0 \end{matrix} \nonumber$
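Readers without Mathcad can check both routes to these expectation values with a short NumPy sketch (the variable names here are my own, not from the tutorial):

```python
import numpy as np

# Spin state and operators, translated from the Mathcad definitions above
psi = np.array([1.0, 1.0]) / np.sqrt(2)        # |up_x>, a superposition of z-states
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Expectation values from the state vector: <psi|A|psi>
ev_x = psi @ sigma_x @ psi
ev_z = psi @ sigma_z @ psi

# The same expectation values from the state operator: Tr(rho A)
rho = np.outer(psi, psi)
tr_x = np.trace(rho @ sigma_x)
tr_z = np.trace(rho @ sigma_z)

print(ev_x, ev_z, tr_x, tr_z)   # 1.0 0.0 1.0 0.0
```

Both methods agree, as the trace identity above requires.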
It is not uncommon for people to look at the superposition
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | \uparrow \rangle + | \downarrow \rangle \right] \nonumber$
we are considering and think of it as a 50‐50 mixture of spin up and spin down in the z‐direction.
However, a mixture cannot be represented by a state vector. A mixture must be represented by a state operator which is a weighted sum of the outer products of the vectors contributing to the mixture, as shown below.
$\begin{matrix} \hat{ \rho}_{Mix} = \frac{1}{2} | \uparrow \rangle \langle \uparrow | + \frac{1}{2} | \downarrow \rangle \langle \downarrow | = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} \right] = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \rho_{ \text{mix}} = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Measurement of the z‐direction spin gives the same result as above and might lead one to conclude that a superposition and a mixture cannot be distinguished experimentally. However, the x‐direction calculation shows that a mixture is not equivalent to a superposition: the mixture predicts an expectation value of zero, in disagreement with the experimental result for the x‐direction presented above.
$\begin{matrix} \text{tr} \left( \rho_{ \text{mix}} \sigma_z \right) = 0 & \text{tr} \left( \rho_{ \text{mix}} \sigma_x \right) = 0 \end{matrix} \nonumber$
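A NumPy sketch of this comparison (variable names are mine) shows that the two state operators agree for z‐direction measurements but disagree for x‐direction measurements:

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

rho_sup = 0.5 * np.ones((2, 2))   # |up_x><up_x|, a pure superposition
rho_mix = 0.5 * np.eye(2)         # 50-50 classical mixture of z-states

# z-direction measurements cannot tell the two apart ...
z_sup = np.trace(rho_sup @ sigma_z)   # 0
z_mix = np.trace(rho_mix @ sigma_z)   # 0
# ... but x-direction measurements can
x_sup = np.trace(rho_sup @ sigma_x)   # 1
x_mix = np.trace(rho_mix @ sigma_x)   # 0
print(z_sup, z_mix, x_sup, x_mix)
```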
Before moving on to composite systems and entanglement, a few additional features of the state operators for superpositions (pure states) and mixtures are illustrated.
$\begin{matrix} \rho_{ \text{sup}} = \frac{1}{2} \begin{pmatrix} 1 & 1 \ 1 & 1 \end{pmatrix} & \rho_{ \text{mix}} = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
One rather obvious difference between these state operators is that the one representing a classical mixture is diagonal, while the one representing the superposition has off‐diagonal elements. The presence of off‐diagonal elements in a state operator is the signature of a quantum mechanical system.
All valid state operators have unit traces; the sum of the diagonal elements equals unity.
$\begin{matrix} \text{tr} \left( \rho_{ \text{sup}} \right) = 1 & \text{tr} \left( \rho_{ \text{mix}} \right) = 1 \end{matrix} \nonumber$
However, the trace of the square of the state operator distinguishes between a pure state and a mixture as shown below.
$\begin{matrix} \text{tr} \left( \rho_{ \text{sup}}^2 \right) = 1 & \text{tr} \left( \rho_{ \text{mix}}^2 \right) = 0.5 \end{matrix} \nonumber$
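A quick numerical check of the purity criterion (the trace of the squared state operator equals 1 for a pure state and is less than 1 for a mixture):

```python
import numpy as np

rho_sup = 0.5 * np.ones((2, 2))   # pure state |up_x><up_x|
rho_mix = 0.5 * np.eye(2)         # 50-50 mixture

purity_sup = np.trace(rho_sup @ rho_sup)   # 1.0 -> pure
purity_mix = np.trace(rho_mix @ rho_mix)   # 0.5 -> mixed
print(purity_sup, purity_mix)
```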
Finally we show that ρmix cannot represent a superposition.
$\frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \neq \begin{pmatrix} a \ b \end{pmatrix} \begin{pmatrix} a & b \end{pmatrix} = \begin{pmatrix} a^2 & ab \ ab & b^2 \end{pmatrix} \nonumber$
Comparing the left and right sides we conclude that: a^2 = b^2 = 1/2 and ab = 0. These constraints are contradictory.
Composite two‐particle entangled states are also superpositions. Below one of the Bell states is constructed. Bell states are maximally entangled superpositions of two‐particle states.
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | \uparrow_1 \rangle \otimes | \uparrow_2 \rangle + | \downarrow_1 \rangle \otimes | \downarrow_2 \rangle \right] = \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} \nonumber$
This is an entangled state because it cannot be factored as is shown below.
$\frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} \neq \begin{pmatrix} a_1 \ a_2 \end{pmatrix} \otimes \begin{pmatrix} b_1 \ b_2 \end{pmatrix} = \begin{pmatrix} a_1 b_1 \ a_1 b_2 \ a_2 b_1 \ a_2 b_2 \end{pmatrix} \nonumber$
Comparing the left and right sides we conclude that: a1b1 = a2b2 = 1/sqrt(2) and a1b2 = a2b1 = 0. There are no values of a1, a2, b1 and b2 that satisfy these constraints.
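This separability test can be automated: reshaping a two‐particle state vector into a 2x2 matrix and counting its nonzero singular values (the Schmidt coefficients) gives 1 for a product state and 2 for an entangled state. A sketch, with function and variable names of my own choosing:

```python
import numpy as np

def schmidt_rank(v):
    """Number of nonzero Schmidt coefficients of a two-qubit state vector."""
    s = np.linalg.svd(np.asarray(v, dtype=float).reshape(2, 2),
                      compute_uv=False)
    return int(np.sum(s > 1e-12))

bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)                  # the Bell state above
product = np.kron(np.array([1.0, 0.0]), np.array([0.0, 1.0]))   # |up>|down>

print(schmidt_rank(bell))     # 2 -> entangled, no factorization exists
print(schmidt_rank(product))  # 1 -> separable
```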
Because entangled wave functions are not separable the entangled particles represented by such wave functions do not have separate identities or individual properties. However, if the spin orientation of particle 1 is learned through measurement, the spin orientation of particle 2 is also immediately known no matter how far away it may be. Entanglement suggests nonlocal phenomena which in the words of Nick Herbert are ʺunmediated, unmitigated and immediate.ʺ
Next we form the state operator of the two‐particle entangled state.
$| \Psi \rangle \langle \Psi | = \frac{1}{2} \left[ | \uparrow_1 \rangle \otimes | \uparrow_2 \rangle + | \downarrow_1 \rangle \otimes | \downarrow_2 \rangle \right] \left[ \langle \uparrow_2 | \otimes \langle \uparrow_1 | + \langle \downarrow_2 | \otimes \langle \downarrow_1 | \right] \nonumber$
$\begin{matrix} | \Psi \rangle \langle \Psi | = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 1 \end{pmatrix} & \rho_{ \text{ent}} = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
The two‐spin entangled state is a pure state:
$\begin{matrix} \text{tr} \left( \rho_{ \text{ent}} \right) = 1 & \text{tr} \left( \rho_{ \text{ent}}^2 \right) = 1 \end{matrix} \nonumber$
Next we demonstrate the correlation inherent in this entangled state. Spin measurements in the z‐ and x‐directions on the two spins always yield the same result (highlighted below). Kronecker is Mathcadʹs command for tensor multiplication of matrices.
$\begin{matrix} \text{SzSz = kronecker} \left( \sigma_z, ~ \sigma_z \right) & \text{tr} \left( \rho_{ \text{ent}} \text{SzSz} \right) = 1 & \text{SxSx = kronecker} \left( \sigma_x,~ \sigma_x \right) & \text{tr} \left( \rho_{ \text{ent}} \text{SxSx} \right) = 1 \end{matrix} \nonumber$
Initially the x‐direction measurement result may seem strange. However, it is easy to show that writing the entangled wave function in the x‐basis is identical to the z‐basis version.
$\begin{matrix} | \uparrow_x \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & \langle \uparrow_x | = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \end{pmatrix} & | \downarrow_x \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & \langle \downarrow_x | = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & -1 \end{pmatrix} \end{matrix} \nonumber$
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | \uparrow_{x1} \rangle \otimes | \uparrow_{x2} \rangle + | \downarrow_{x1} \rangle \otimes | \downarrow_{x2} \rangle \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 1 \end{pmatrix} + \begin{pmatrix} 1 \ -1 \end{pmatrix} \otimes \begin{pmatrix} 1 \ -1 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} \nonumber$
In spite of the strong correlation observed when the spins of both particles are measured in the same direction, spin measurements on the individual particles are completely random.
$\begin{matrix} \text{SzI = kronecker} \left( \sigma_z,~ I \right) & \text{SxI = kronecker} \left( \sigma_x,~ I \right) & \text{ISz = kronecker} \left( I,~ \sigma_z \right) & \text{ISx = kronecker} \left( I,~ \sigma_x \right) \ \text{tr} \left( \rho_{ \text{ent}} \text{SzI} \right) = 0 & \text{tr} \left( \rho_{ \text{ent}} \text{SxI} \right) = 0 & \text{tr} \left( \rho_{ \text{ent}} \text{ISz} \right) = 0 & \text{tr} \left( \rho_{ \text{ent}} \text{ISx} \right) = 0 \end{matrix} \nonumber$
This can be confirmed by expanding the state operator in the fashion shown below, and then ʺtracingʺ over the spin states of particle 1. The terms highlighted in blue survive the trace operation.
$\begin{matrix} | \Psi \rangle \langle \Psi | = \frac{1}{2} \left[ \textcolor{blue}{ | \uparrow_1 \rangle | \uparrow_2 \rangle \langle \uparrow_2 | \langle \uparrow_1 | } + | \uparrow_1 \rangle | \uparrow_2 \rangle \langle \downarrow_2 | \langle \downarrow_1 | + | \downarrow_1 \rangle | \downarrow_2 \rangle \langle \uparrow_2 | \langle \uparrow_1 | + \textcolor{blue}{| \downarrow_1 \rangle | \downarrow_2 \rangle \langle \downarrow_2 | \langle \downarrow_1 |} \right] \ \rho_2 = \langle \textcolor{red}{ \uparrow_1} | \Psi \rangle \langle \Psi | \textcolor{red}{ \uparrow_1} \rangle + \langle \textcolor{red}{ \downarrow_1} | \Psi \rangle \langle \Psi | \textcolor{red}{ \downarrow_1} \rangle = \frac{1}{2} \left[ | \textcolor{blue}{ \uparrow_2} \rangle \langle \textcolor{blue}{ \uparrow_2} | + | \textcolor{blue}{ \downarrow_2} \rangle \langle \textcolor{blue}{ \downarrow_2} | \right] \end{matrix} \nonumber$
This yields the partial state operator of particle 2 and shows that it behaves like a classical mixture, consistent with the previous results.
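The same partial trace is easy to carry out numerically: index the 4x4 state operator as rho[i1, i2, j1, j2] and contract the particle‐1 indices. A sketch with my own variable names:

```python
import numpy as np

bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)
rho = np.outer(bell, bell)                 # 4x4 entangled state operator

# Trace over particle 1: reshape to [i1, i2, j1, j2] and contract i1 = j1
rho2 = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print(rho2)   # 0.5 * identity: particle 2 alone behaves as a maximal mixture
```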
Now the calculations on the entangled state using the state operator will be repeated using the state vector.
$\begin{matrix} \Psi_{ \text{ent}} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}^T \end{matrix} \nonumber$
$\begin{matrix} & \Psi_{ \text{ent}}^T \text{SzSz} \Psi_{ \text{ent}} = 1 & \Psi_{ \text{ent}}^T \text{SxSx} \Psi_{ \text{ent}} = 1 & \ \Psi_{ \text{ent}}^T \text{SzI} \Psi_{ \text{ent}} = 0 & \Psi_{ \text{ent}}^T \text{ISz} \Psi_{ \text{ent}} = 0 & \Psi_{ \text{ent}}^T \text{SxI} \Psi_{ \text{ent}} = 0 & \Psi_{ \text{ent}}^T \text{ISx} \Psi_{ \text{ent}} = 0 \end{matrix} \nonumber$
Finally we calculate the expectation values when the spins are measured in different directions, using both computational methods.
$\begin{matrix} \text{SzSx = kronecker} \left( \sigma_z,~ \sigma_x \right) & \text{SxSz = kronecker} \left( \sigma_x,~ \sigma_z \right) \ \text{tr} \left( \rho_{ \text{ent}} \text{SzSx} \right) = 0 & \text{tr} \left( \rho_{ \text{ent}} \text{SxSz} \right) = 0 \ \Psi_{ \text{ent}}^T \text{SzSx} \Psi_{ \text{ent}} = 0 & \Psi_{ \text{ent}}^T \text{SxSz} \Psi_{ \text{ent}} = 0 \end{matrix} \nonumber$
Say the first spin is measured in the z‐direction and found to be spin‐up. This means the second spin is also spin‐up in the z‐direction, which is an even superposition of spin‐up and spin‐down in the x‐direction, giving an overall expectation value of zero.
Ψent is one of the Bell states. For extensive calculations on it and the other three Bell states see ʺBell State Exercises.ʺ
The four remaining tutorials deal with this clash between quantum mechanics and local realism, and the simulation of physical phenomena. The first two examine entangled spin systems and the third entangled photon systems. The fourth provides a terse mathematical summary for both spin and photon systems. They all clearly show the disagreement between the predictions of quantum theory and those of local hidden-variable models.
Another Look at Mermin's EPR Gedanken Experiment
Quantum theory is both stupendously successful as an account of the small-scale structure of the world and it is also the subject of an unresolved debate and dispute about its interpretation. J. C. Polkinghorne, The Quantum World, p. 1.
In Bohm's EPR thought experiment (Quantum Theory, 1951, pp. 611-623), both local realism and quantum mechanics were shown to be consistent with the experimental data. However, the local realistic explanation used composite spin states that were invalid according to quantum theory. The local realists countered that this was an indication that quantum mechanics was incomplete because it couldn't assign well-defined values to all observable properties prior to or independent of observation. In the 1980s N. David Mermin presented a related thought experiment [American Journal of Physics (October 1981, pp 941-943) and Physics Today (April 1985, pp 38-47)] in which the predictions of local realism and quantum mechanics disagree. As such Mermin's thought experiment represents a specific illustration of Bell's theorem.
A spin-1/2 pair is prepared in a singlet state and the individual particles travel in opposite directions to detectors which are set up to measure spin in three directions in the x-z plane: along the z-axis, and at angles of 120 and 240 degrees with respect to the z-axis. The detector settings are labeled 1, 2 and 3, respectively.
The switches on the detectors are set randomly so that all nine possible settings of the two detectors occur with equal frequency.
Local realism holds that objects have properties independent of measurement and that measurements at one location on a particle cannot influence measurements of another particle at a distant location even if the particles were created in the same event. Local realism maintains that the spin-1/2 particles carry instruction sets (hidden variables) which dictate the results of subsequent measurements. Prior to measurement the particles are in an unknown but well-defined state.
The following table presents the experimental results expected on the basis of local realism. Singlet spin states have opposite spin values for each of the three measurement directions. If A's spin state is (+-+), then B's spin state is (-+-). A '+' indicates spin-up and a measurement eigenvalue of +1. A '-' indicates spin-down and a measurement eigenvalue of -1. If A's detector is set to spin direction "1" and B's detector is set to spin direction "3" the measured result will be recorded as +-, with an eigenvalue of -1.
There are eight spin states and nine possible detector settings, giving 72 possible measurement outcomes all of which are equally probable. The next to bottom line of the table shows the average (expectation) value for the nine possible detector settings given the local realist spin states. When the detector settings are the same there is perfect anti-correlation between the detectors at A and B. When the detectors are set at different spin directions there is no correlation.
As will now be shown, quantum mechanics (bottom row of the table) disagrees with this local realistic analysis. The singlet state produced by the source is the following entangled superposition, where the arrows indicate the spin orientation for any direction in the x-z plane. As noted above the directions used are 0, 120 and 240 degrees, relative to the z-axis.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |\uparrow\rangle_{1} | \downarrow \rangle_{2}-| \downarrow \rangle_{1} | \uparrow \rangle_{2} ]=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right) \otimes \left( \begin{array}{c}{-\sin \left(\frac{\varphi}{2}\right)} \ {\cos \left(\frac{\varphi}{2}\right)}\end{array}\right)-\left( \begin{array}{c}{-\sin \left(\frac{\varphi}{2}\right)} \ {\cos \left(\frac{\varphi}{2}\right)}\end{array}\right) \otimes \left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \quad \Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
The single particle spin operator in the x-z plane is constructed from the Pauli spin operators in the x- and z-directions. $\varphi$ is the angle of orientation of the measurement magnet with the z-axis. Note that the Pauli operators measure spin in units of $\frac{h}{4 \pi}$. This provides for some mathematical clarity in the forthcoming analysis.
$\sigma_{\mathrm{Z}} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \qquad \sigma_{\mathrm{x}} :=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \ \mathrm{S}(\varphi) :=\cos (\varphi) \cdot \sigma_{\mathrm{Z}}+\sin (\varphi) \cdot \sigma_{\mathrm{x}} \rightarrow \left( \begin{array}{cc}{\cos (\varphi)} & {\sin (\varphi)} \ {\sin (\varphi)} & {-\cos (\varphi)}\end{array}\right) \nonumber$
The joint spin operator for the two-spin system in tensor format is,
$\left( \begin{array}{cc}{\cos \varphi_{A}} & {\sin \varphi_{A}} \ {\sin \varphi_{A}} & {-\cos \varphi_{A}}\end{array}\right) \otimes \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right)= \begin{pmatrix} \cos \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) & \sin \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) \ \sin \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) & -\cos \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) \end{pmatrix} \nonumber$
In Mathcad syntax this operator is:
$\mathrm{kronecker}\left(\mathrm{S}\left(\varphi_{\mathrm{A}}\right), \mathrm{S}\left(\varphi_{\mathrm{B}}\right)\right) \nonumber$
When the detector settings are the same quantum theory predicts an expectation value of -1, in agreement with the analysis based on local realism.
$\Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \mathrm{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi=-1 \quad \Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \mathrm{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi=-1 \quad \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \text { deg }), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi=-1 \nonumber$
However, when the detector settings are different quantum theory predicts an expectation value of 0.5, in disagreement with the local realistic value of 0.
$\Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \mathrm{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi=0.5 \quad \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi=0.5 \quad \Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \mathrm{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi=0.5 \nonumber$
Considering all detector settings local realism predicts an expectation value of -1/3 [2/3(0) + 1/3(-1)], while quantum theory predicts an expectation value of 0 [2/3(1/2) + 1/3(-1)]. (See the two bottom rows in the table above.)
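The nine quantum expectation values and their average can be reproduced outside Mathcad with a short NumPy sketch (function and variable names are my own):

```python
import numpy as np

def S(phi):
    """Spin operator along angle phi (radians) in the x-z plane."""
    return np.array([[np.cos(phi), np.sin(phi)],
                     [np.sin(phi), -np.cos(phi)]])

singlet = np.array([0, 1.0, -1.0, 0]) / np.sqrt(2)
angles = np.deg2rad([0.0, 120.0, 240.0])

# Expectation value for each of the nine detector-setting pairs
E = np.array([[singlet @ np.kron(S(a), S(b)) @ singlet for b in angles]
              for a in angles])
print(np.round(E, 3))   # -1 on the diagonal, 0.5 off the diagonal
print(E.mean())         # overall expectation value: approximately 0
```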
Furthermore, the following calculations demonstrate that the various spin operators do not commute and therefore represent incompatible observables. In other words, they are observables that cannot simultaneously be in well-defined states. Thus, quantum theory also rejects the realist's spin states used in the table.
$\mathrm{S}(0 \cdot \operatorname{deg}) \cdot \mathrm{S}(120 \cdot \mathrm{deg})-\mathrm{S}(120 \cdot \mathrm{deg}) \cdot \mathrm{S}(0 \cdot \mathrm{deg})=\left( \begin{array}{cc}{0} & {1.732} \ {-1.732} & {0}\end{array}\right) \nonumber$
$\mathrm{S}(0 \cdot \mathrm{deg}) \cdot \mathrm{S}(240 \cdot \mathrm{deg})-\mathrm{S}(240 \cdot \mathrm{deg}) \cdot \mathrm{S}(0 \cdot \mathrm{deg})=\left( \begin{array}{cc}{0} & {-1.732} \ {1.732} & {0}\end{array}\right) \nonumber$
$\mathrm{S}(120 \cdot \mathrm{deg}) \cdot \mathrm{S}(240 \cdot \mathrm{deg})-\mathrm{S}(240 \cdot \mathrm{deg}) \cdot \mathrm{S}(120 \cdot \mathrm{deg})=\left( \begin{array}{cc}{0} & {1.732} \ {-1.732} & {0}\end{array}\right) \nonumber$
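The non-vanishing commutators can be confirmed numerically as well; a sketch (my own names), using the same spin-operator definition:

```python
import numpy as np

def S(phi):
    """Spin operator along angle phi (radians) in the x-z plane."""
    return np.array([[np.cos(phi), np.sin(phi)],
                     [np.sin(phi), -np.cos(phi)]])

a, b = np.deg2rad(0.0), np.deg2rad(120.0)
comm = S(a) @ S(b) - S(b) @ S(a)
print(np.round(comm, 3))   # off-diagonal entries +/-1.732, i.e. sqrt(3)
```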
The local realist is undeterred by this argument and the disagreement with the quantum mechanical predictions, asserting that quantum theory's inability to assign well-defined states to all elements of reality independent of observation indicates that it provides an incomplete description of reality.
However, results available for experiments of this type with photons support the quantum mechanical predictions and contradict the local realists' analysis shown in the table above. Thus, there appears to be a non-local interaction between the two spins at their measurement sites. Nick Herbert provides a memorable and succinct description of such non-local influences on page 214 of Quantum Reality.
A non-local interaction links up one location with another without crossing space, without decay, and without delay. A non-local interaction is, in short, unmediated, unmitigated, and immediate.
Jim Baggott puts it this way (The Meaning of Quantum Theory, page 135):
The predictions of quantum theory (in this experiment) are based on the properties of a two-particle state vector which ... is 'delocalized' over the whole experimental arrangement. The two particles are, in effect, always in 'contact' prior to measurement and can therefore exhibit a degree of correlation that is impossible for two Einstein separable particles.
"...if [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local. This is what the theorem says." -John S. Bell
Further Information
The eigenvectors of the single particle spin operator, $S(\varphi)$, in the x-z plane are given below along with their eigenvalues.
$\varphi_{u}(\varphi) :=\left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right) \qquad \varphi_{\mathrm{d}}(\varphi) :=\left( \begin{array}{c}{-\sin \left(\frac{\varphi}{2}\right)} \ {\cos \left(\frac{\varphi}{2}\right)}\end{array}\right) \nonumber$
$\varphi_{\mathrm{u}}(\varphi)^{\mathrm{T}} \cdot \varphi_{\mathrm{u}}(\varphi) \text { simplify } \rightarrow 1 \quad \varphi_{\mathrm{d}}(\varphi)^{\mathrm{T}} \cdot \varphi_{\mathrm{d}}(\varphi) \text { simplify } \rightarrow 1 \quad \varphi_{\mathrm{d}}(\varphi)^{\mathrm{T}} \cdot \varphi_{\mathrm{u}}(\varphi) \text { simplify } \rightarrow 0 \nonumber$
Eigenvalue +1 Eigenvalue -1
$\mathrm{S}(\varphi) \cdot \varphi_{\mathrm{u}}(\varphi) \text { simplify } \rightarrow \left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right)$ $\mathrm{S}(\varphi) \cdot \varphi_{\mathrm{d}}(\varphi) \text { simplify } \rightarrow \left( \begin{array}{c}{\sin \left(\frac{\varphi}{2}\right)} \ {-\cos \left(\frac{1}{2} \cdot \varphi\right)}\end{array}\right)$
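These eigenvector relations can be spot-checked numerically for any angle; a sketch with my own function names:

```python
import numpy as np

def S(phi):
    """Spin operator along angle phi (radians) in the x-z plane."""
    return np.array([[np.cos(phi), np.sin(phi)],
                     [np.sin(phi), -np.cos(phi)]])

def phi_u(phi):   # eigenvector with eigenvalue +1
    return np.array([np.cos(phi / 2), np.sin(phi / 2)])

def phi_d(phi):   # eigenvector with eigenvalue -1
    return np.array([-np.sin(phi / 2), np.cos(phi / 2)])

phi = np.deg2rad(120.0)
ok_u = np.allclose(S(phi) @ phi_u(phi), phi_u(phi))
ok_d = np.allclose(S(phi) @ phi_d(phi), -phi_d(phi))
print(ok_u, ok_d)   # True True
```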
A summary of the quantum mechanical calculations:
$\begin{pmatrix} \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \operatorname{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \operatorname{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \operatorname{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \operatorname{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi \ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi \end{pmatrix}^{T}=\left( \begin{array}{cccccccc}{0.5} & {0.5} & {0.5} & {0.5} & {0.5} & {0.5} & {-1} & {-1} & {-1}\end{array}\right) \nonumber$
Calculation of the overall spin expectation value:
$\sum_{i=0}^{2} \sum_{j=0}^{2} \left[\Psi^{\mathrm{T}} \cdot \text { kronecker }[\mathrm{S}[\mathrm{i} \cdot(120 \cdot \mathrm{deg})], \mathrm{S}[j \cdot(120 \cdot \mathrm{deg})]] \Psi\right]=0 \nonumber$
The expectation value as a function of the relative orientation of the detectors reveals the level of correlation between the two spin measurements. For $\theta$ = 0° there is perfect anti-correlation; for $\theta$ = 180° perfect correlation; for $\theta$ = 90° no correlation; for $\theta$ = 60° intermediate anti-correlation (-0.5) and for $\theta$ = 120° intermediate correlation (0.5).
A Quantum Simulation
This thought experiment is simulated using the following quantum circuit. As shown below the results are in agreement with the previous theoretical quantum calculations. The initial Hadamard and CNOT gates create the singlet state from the |11> input. Rz($\theta$) rotates spin B. The final Hadamard gates prepare the system for measurement. See arXiv:1712.05642v2 for further detail.
$\begin{matrix} \text{Spin A} & | 1 \rangle & \rhd & H & \cdot & \cdots & H & \rhd & \text{Measure 0 or 1: Eigenvalue 1 or -1} \ \; & \; & \; & \; & | & \; & \; & \; & \; \ \text{Spin B} & | 1 \rangle & \rhd & \cdots & \oplus & R_{Z} (\theta) & H & \rhd & \text{Measure 0 or 1: Eigenvalue 1 or -1} \end{matrix} \nonumber$
The quantum gates required to execute this circuit:
Identity Hadamard gate Rz rotation Controlled NOT
$\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)$ $\mathrm{H} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right)$ $\mathrm{R}_{\mathrm{Z}}(\theta) :=\left( \begin{array}{cc}{1} & {0} \ {0} & {\mathrm{e}^{\mathrm{i} \cdot \theta}}\end{array}\right)$ $\mathrm{CNOT} :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right)$
The operator representing the circuit is constructed from the matrix operators provided above.
$\mathrm{Op}(\theta) :=\text { kronecker }(\mathrm{H}, \mathrm{H}) \cdot \text { kronecker }\left(\mathrm{I}, \mathrm{R}_{\mathrm{Z}}(\theta)\right) \text { CNOT kronecker }(\mathrm{H}, \mathrm{I}) \nonumber$
There are four equally likely measurement outcomes with the eigenvalues and overall expectation values shown below for relative measurement angles 0 and 120 deg ($\frac{2 \pi}{3}$).
|00>
eigenvalue +1
$\left[\left|\left( \begin{array}{c}{1} \ {0} \ {0} \ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}(0 \cdot \mathrm{deg}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0$
|01>
eigenvalue -1
$\left[\left|\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}(0 \cdot \operatorname{deg}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0.5$
|10>
eigenvalue -1
$\left[\left|\left( \begin{array}{c}{0} \ {0} \ {1} \ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}(0 \cdot \mathrm{deg}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0.5$
|11>
eigenvalue +1
$\left[\left|\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}(0 \cdot \operatorname{deg}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0$
Expectation value: 0 - 0.5 - 0.5 + 0 = -1
|00>
eigenvalue +1
$\left[\left|\left( \begin{array}{c}{1} \ {0} \ {0} \ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}(\frac{2 \pi}{3}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0.375$
|01>
eigenvalue -1
$\left[\left|\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}(\frac{2 \pi}{3}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0.125$
|10>
eigenvalue -1
$\left[\left|\left( \begin{array}{c}{0} \ {0} \ {1} \ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}(\frac{2 \pi}{3}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0.125$
|11>
eigenvalue +1
$\left[\left|\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}(\frac{2 \pi}{3}) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right)\right|\right]^{2}=0.375$
Expectation value: 0.375 - 0.125 - 0.125 + 0.375 = 0.5
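The whole circuit simulation fits in a few lines of NumPy; this sketch (my own names) reproduces the expectation values for relative measurement angles 0 and 2π/3:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CNOT = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def Rz(theta):
    return np.array([[1.0, 0.0], [0.0, np.exp(1j * theta)]])

def expectation(theta):
    # Gate order matches the circuit: H on A, CNOT, Rz on B, final Hadamards
    op = np.kron(H, H) @ np.kron(I2, Rz(theta)) @ CNOT @ np.kron(H, I2)
    amp = op @ np.array([0.0, 0, 0, 1])      # the circuit acts on |11>
    probs = np.abs(amp) ** 2                 # P(00), P(01), P(10), P(11)
    return probs @ np.array([1, -1, -1, 1])  # parity eigenvalues

print(round(expectation(0.0), 3))            # -1.0
print(round(expectation(2 * np.pi / 3), 3))  # 0.5
```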
Another Simulation of a GHZ Gedanken Experiment
Many years ago N. David Mermin published two articles (Physics Today, June 1990; American Journal of Physics, August 1990) in the general physics literature on a Greenberger-Horne-Zeilinger (American Journal of Physics, December 1990; Nature, 3 February 2000) thought experiment involving spins that sharply revealed the clash between local realism and the quantum view of reality.
Three spin-1/2 particles are created in a single event and move apart in the horizontal y-z plane. Subsequent spin measurements will be carried out in units of $\frac{h}{4 \pi}$ with spin operators in the x- and y-directions.
The z-basis eigenfunctions are:
$\mathrm{Sz}_{\mathrm{up}} :=\left( \begin{array}{c}{1} \ {0}\end{array}\right) \qquad \mathrm{Sz}_{\mathrm{down}} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
The x- and y-direction spin operators:
$\sigma_{\mathrm{x}} :=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \text { eigenvals }\left(\sigma_{\mathrm{x}}\right)=\left( \begin{array}{c}{1} \ {-1}\end{array}\right) \quad \sigma_{\mathrm{y}} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \quad \text { eigenvals }\left(\sigma_{\mathrm{y}}\right)=\left( \begin{array}{c}{1} \ {-1}\end{array}\right) \nonumber$
The initial entangled spin state for the three spin-1/2 particles in tensor notation is:
$| \Psi \rangle=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0}\ {-1}\end{array}\right) \quad \Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {0} \ {0} \ {0} \ {0} \ {-1}\end{array}\right) \nonumber$
The following operators represent the measurements to be carried out on spins 1, 2 and 3, in that order.
$\sigma_{x}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{y}^{3} \quad \sigma_{y}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{y}^{3} \quad \sigma_{y}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{x}^{3} \quad \sigma_{x}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{x}^{3} \nonumber$
The matrix tensor product is also known as the Kronecker product, which is available in Mathcad. The four operators in tensor format are formed as follows.
$\sigma_{\mathrm{xyy}} :=\text { kronecker }\left(\sigma_{\mathrm{x}}, \text { kronecker }\left(\sigma_{\mathrm{y}}, \sigma_{\mathrm{y}}\right)\right)$ $\sigma_{\mathrm{yxy}} :=\text { kronecker }\left(\sigma_{\mathrm{y}}, \text { kronecker }\left(\sigma_{\mathrm{x}}, \sigma_{\mathrm{y}}\right)\right)$
$\sigma_{\mathrm{yyx}} :=\mathrm{kronecker}\left(\sigma_{\mathrm{y}}, \mathrm{kronecker}\left(\sigma_{\mathrm{y}}, \sigma_{\mathrm{x}}\right)\right)$ $\sigma_{\mathrm{xxx}} :=\text { kronecker }\left(\sigma_{\mathrm{x}}, \text { kronecker }\left(\sigma_{\mathrm{x}}, \sigma_{\mathrm{x}}\right)\right)$
These composite operators are Hermitian and mutually commute, which means they can possess simultaneous eigenvalues.
$\sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{yxy}}-\sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{xyy}} \rightarrow 0$ $\sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{yyx}}-\sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{xyy}} \rightarrow 0$ $\sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{xxx}}-\sigma_{\mathrm{xxx}} \cdot \sigma_{\mathrm{xyy}} \rightarrow 0$
$\sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{yyx}}-\sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{yxy}} \rightarrow 0$ $\sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{xxx}}-\sigma_{\mathrm{xxx}} \cdot \sigma_{\mathrm{yxy}} \rightarrow 0$ $\sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{xxx}}-\sigma_{\mathrm{xxx}} \cdot \sigma_{\mathrm{yyx}} \rightarrow 0$
The expectation values of the operators are now calculated.
$\Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{xyy}} \cdot \Psi=1 \qquad \Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{yxy}} \cdot \Psi=1 \qquad \Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{yyx}} \cdot \Psi=1 \qquad \Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{xxx}} \cdot \Psi=-1 \nonumber$
Consequently the product of the four operators has the expectation value of -1.
$\Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{xxx}} \cdot \Psi=-1 \nonumber$
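As a cross-check, the four composite operators and their expectation values can be reproduced with NumPy's `kron`. This is a minimal sketch paralleling the Mathcad calculation above; the variable names are my own.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    """Three-fold Kronecker (tensor) product."""
    return np.kron(a, np.kron(b, c))

# GHZ state (|000> - |111>)/sqrt(2)
psi = np.zeros(8, dtype=complex)
psi[0], psi[7] = 1, -1
psi /= np.sqrt(2)

ops = {"xyy": kron3(sx, sy, sy), "yxy": kron3(sy, sx, sy),
       "yyx": kron3(sy, sy, sx), "xxx": kron3(sx, sx, sx)}
for name, op in ops.items():
    print(name, round(np.real(psi.conj() @ op @ psi), 3))
# xyy, yxy, yyx -> 1.0; xxx -> -1.0
```

The product of the four expectation values is (1)(1)(1)(-1) = -1, as in the Mathcad result.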
Local realism assumes that objects have definite properties independent of measurement; in this example it assumes that the x- and y-components of each spin have definite values prior to measurement. This position leads to a contradiction with the above result, as demonstrated by Mermin (Physics Today, June 1990). Looking again at the measurement operators, notice that there is a $\sigma_x$ measurement on the first spin in the first and fourth experiments. If the spin state is well-defined before measurement, those results have to be the same, either both +1 or both -1, so that the product of the two measurements is +1.
$\left(\sigma_{x}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{y}^{3}\right)\left(\sigma_{y}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{y}^{3}\right)\left(\sigma_{y}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{x}^{3}\right)\left(\sigma_{x}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{x}^{3}\right) \nonumber$
Likewise there is a $\sigma_y$ measurement on the second spin in experiments one and three. By a similar argument those results also lead to a product of +1. Continuing with all such pairs in the total operator, local realistic reasoning unambiguously shows that its expectation value should be +1, in sharp disagreement with the quantum mechanical result of -1. This result should cause all mathematically literate local realists to renounce and recant their heresy. However, they may resist, objecting that this is just a thought experiment that has never actually been performed. Yet if you believe in quantum simulation, it has been performed.
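The local-realist claim can also be checked by brute force: assign definite values of ±1 to all six quantities and multiply. The short enumeration below is my own illustration, not part of the original worksheet; it shows that every variable appears an even number of times in the product, so the product is always +1.

```python
from itertools import product

# Under local realism each spin i carries definite values x_i, y_i = +/-1.
products = set()
for x1, y1, x2, y2, x3, y3 in product([1, -1], repeat=6):
    p = (x1 * y2 * y3) * (y1 * x2 * y3) * (y1 * y2 * x3) * (x1 * x2 * x3)
    products.add(p)

print(products)  # {1}: local realism forces the product to be +1
```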
Quantum Simulation
"Quantum simulation is a process in which a quantum computer simulates another quantum system. Because of the various types of quantum weirdness, classical computers can simulate quantum systems only in a clunky, inefficient way. But because a quantum computer is itself a quantum system, capable of exhibiting the full repertoire of quantum weirdness, it can efficiently simulate other quantum systems. The resulting simulation can be so accurate that the behavior of the computer will be indistinguishable from the behavior of the simulated system itself." (Seth Lloyd, Programming the Universe, page 149.) The thought experiment can be simulated using the quantum circuit shown below, which is an adaptation of one found at arXiv:1712.06542v2.
$\begin{matrix} | 1 \rangle & \rhd & H & \cdot & \cdots & \cdots & H & \rhd & \text{Measure, 0 or 1} \ \; & \; & \; & | & \; & \; & \; & \; & \; \ | 0 \rangle & \rhd & \cdots & \oplus & \cdot & S & H & \rhd & \text{Measure, 0 or 1} \ \; & \; & \; & \; & | & \; & \; & \; & \; \ | 0 \rangle & \rhd & \cdots & \cdots & \oplus & S & H & \rhd & \text{Measure, 0 or 1} \ \; & \; & \; & | & \; & \; & \; & \; & \; \end{matrix} \nonumber$
The matrix operators required for the implementation of the quantum circuit:
$\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \quad \mathrm{H} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \quad \mathrm{S} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-\mathrm{i}}\end{array}\right) \quad \mathrm{CNOT} :=\left( \begin{array}{llll}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0}\end{array}\right) \nonumber$
$\mathrm{HII} :=\text { kronecker }(\mathrm{H}, \text { kronecker }(\mathrm{I}, \mathrm{I}))$ $\mathrm{CNOTI} :=\text { kronecker }(\mathrm{CNOT}, \mathrm{I})$ $\mathrm{ICNOT} :=\text { kronecker }(\mathrm{I}, \mathrm{CNOT})$
$\mathrm{ISS} :=\text { kronecker }(\mathrm{I}, \text { kronecker }(\mathrm{S}, \mathrm{S}))$ $\mathrm{SIS} :=\text { kronecker }(\mathrm{S}, \text { kronecker }(\mathrm{I}, \mathrm{S}))$ $\mathrm{SSI} :=\text { kronecker }(\mathrm{S}, \text { kronecker }(\mathrm{S}, \mathrm{I}))$
$\mathrm{HHH} :=\text{kronecker}(\mathrm{H}, \text{kronecker}(\mathrm{H}, \mathrm{H}))$
First it is demonstrated that the first three steps of the circuit create the initial state.
$[ \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left( \begin{array}{llllllll}{0.707} & {0} & {0} & {0} & {0} & {0} & {0} & {-0.707}\end{array}\right) \nonumber$
The complete circuit shown above simulates the expectation value of the $\sigma_{x}\sigma_{y}\sigma_{y}$ operator. The presence of an S gate on a line before the final H gate indicates a measurement of $\sigma_{y}$; its absence indicates a measurement of $\sigma_{x}$. The subsequent simulations omit S on the middle line, then on the last line, and finally on all three lines for the simulation of the expectation value of $\sigma_{x}\sigma_{x}\sigma_{x}$.
Eigenvalue |0> = +1; eigenvalue |1> = -1
$[ \text{HHH} \cdot \text{ISS} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0.5} & {0} & {0} & {0.5} & {0} & {0.5} & {0.5} & {0}\end{array}\right) \nonumber$
$\frac{1}{2}( |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ) \Rightarrow\left\langle\sigma_{x} \sigma_{y} \sigma_{y}\right\rangle= 1 \nonumber$
Given the eigenvalue assignments above the expectation value associated with this measurement outcome is 1/4[(1)(1)(1)+(1)(-1)(-1)+(-1)(1)(-1)+(-1)(-1)(1)] = 1. Note that 1/2 is the probability amplitude for the product state. Therefore the probability of each member of the superposition being observed is 1/4. The same reasoning is used for the remaining simulations.
$[ \text{HHH} \cdot \text{SIS} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0.5} & {0} & {0} & {0.5} & {0} & {0.5} & {0.5} & {0}\end{array}\right) \nonumber$
$\frac{1}{2}( |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ) \Rightarrow\left\langle\sigma_{y} \sigma_{x} \sigma_{y}\right\rangle= 1 \nonumber$
$[ \text{HHH} \cdot \text{SSI} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0.5} & {0} & {0} & {0.5} & {0} & {0.5} & {0.5} & {0}\end{array}\right) \nonumber$
$\frac{1}{2}( |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ) \Rightarrow\left\langle\sigma_{y} \sigma_{y} \sigma_{x}\right\rangle= 1 \nonumber$
$[ \text{HHH} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0} & {0.5} & {0.5} & {0} & {0.5} & {0} & {0} & {0.5}\end{array}\right) \nonumber$
$\frac{1}{2}( |001\rangle+| 010 \rangle+| 100 \rangle+| 111 \rangle ) \Rightarrow\left\langle\sigma_{x} \sigma_{x} \sigma_{x}\right\rangle=- 1 \nonumber$
Individually and in product form the simulated results are in agreement with the previous quantum mechanical calculations.
$\left\langle\sigma_{x} \sigma_{x} \sigma_{x}\right\rangle\left\langle\sigma_{x} \sigma_{y} \sigma_{y}\right\rangle\left\langle\sigma_{y} \sigma_{x} \sigma_{y}\right\rangle\left\langle\sigma_{y} \sigma_{y} \sigma_{x}\right\rangle=- 1 \nonumber$
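The circuit simulations above can be mirrored in NumPy. The sketch below uses the same gate definitions as the worksheet; the parity of each measured bit string (|0> = +1, |1> = -1) converts the output probabilities into an expectation value.

```python
import numpy as np

I = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, -1j]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def kron(*ms):
    """Kronecker product of an arbitrary list of matrices."""
    out = np.eye(1, dtype=complex)
    for m in ms:
        out = np.kron(out, m)
    return out

# First three circuit steps turn |100> into the GHZ state
psi0 = np.zeros(8, dtype=complex); psi0[4] = 1
ghz = kron(I, CNOT) @ kron(CNOT, I) @ kron(H, I, I) @ psi0

# eigenvalue of |i> is (-1)^(number of 1 bits)
parity = np.array([(-1) ** bin(i).count("1") for i in range(8)])

def expectation(s_mask):
    """S on the qubits flagged in s_mask, e.g. (0, 1, 1) simulates <xyy>."""
    gates = [S if flag else I for flag in s_mask]
    out = kron(H, H, H) @ kron(*gates) @ ghz
    return float(np.abs(out) ** 2 @ parity)

for mask, label in [((0, 1, 1), "xyy"), ((1, 0, 1), "yxy"),
                    ((1, 1, 0), "yyx"), ((0, 0, 0), "xxx")]:
    print(label, round(expectation(mask), 3))
# xyy, yxy, yyx -> 1.0; xxx -> -1.0
```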
The appendix provides algebraic calculations of $<\sigma_{x} \sigma_{y} \sigma_{y}>$ and $<\sigma_{x} \sigma_{x} \sigma_{x}>$.
Appendix:
Truth tables for the operation of the circuit elements:
$\mathrm{I}=\left( \begin{array}{ccc}{0} & {\text { to }} & {0} \ {1} & {\text { to }} & {1}\end{array}\right) \quad H=\left[ \begin{array}{ccc}{0} & {\text { to }} & {\frac{(0+1)}{\sqrt{2}}} \ {1} & {\text { to }} & {\frac{(0-1)}{\sqrt{2}}}\end{array}\right] \quad \mathrm{CNOT}=\left( \begin{array}{lll}{00} & {\text { to }} & {00} \ {01} & {\text { to }} & {01} \ {10} & {\text { to }} & {11} \ {11} & {\text { to }} & {10}\end{array}\right) \quad \mathrm{S}=\left( \begin{array}{ccc}{0} & {\text { to }} & {0} \ {1} & {\text { to }} & {-\mathrm{i}}\end{array}\right) \nonumber$
$\begin{array}{c}{|100 \rangle} \ {H \otimes I \otimes I} \ {\frac{1}{\sqrt{2}}[ |000\rangle-| 100\rangle]} \ {CNOT \otimes I} \ {\frac{1}{\sqrt{2}}[ |000\rangle-| 110\rangle]} \ {I \otimes CNOT} \ {\frac{1}{\sqrt{2}}[ |000\rangle-| 111 \rangle ]} \ {I \otimes S \otimes S} \ {\frac{1}{\sqrt{2}}[ |000\rangle-(-i)^{2}| 111\rangle]=\frac{1}{\sqrt{2}}[ |000\rangle+| 111\rangle]} \ {H \otimes H \otimes H} \ {\frac{1}{2}[ |000\rangle+| 011 \rangle+| 101 \rangle+| 110\rangle]} \ {\left\langle\sigma_{x} \sigma_{y} \sigma_{y}\right\rangle= 1}\end{array}$
$\begin{array}{c}{|100\rangle} \ {H \otimes I \otimes I} \ {\frac{1}{\sqrt{2}}[ |000\rangle-| 100\rangle]} \ {CNOT \otimes I} \ {\frac{1}{\sqrt{2}}[ |000\rangle-| 110\rangle]} \ {I \otimes CNOT} \ {\frac{1}{\sqrt{2}}[ |000\rangle-| 111\rangle]} \ {H \otimes H \otimes H} \ {\frac{1}{2}[ |001\rangle+| 010 \rangle+| 100 \rangle+| 111\rangle]} \ {\left\langle\sigma_{x} \sigma_{x} \sigma_{x}\right\rangle=- 1}\end{array}$
Quantum Correlations Illustrated With Photons
A two-stage atomic cascade emits entangled photons (A and B) in opposite directions with the same circular polarization according to observers in their path. The experiment involves the measurement of photon polarization states in the vertical/horizontal measurement basis, and allows for the rotation of the right-hand detector through an angle $\theta$, in order to explore the consequences of quantum mechanical entanglement. PA stands for polarization analyzer and could simply be a calcite crystal.
$\begin{matrix} V & \lhd & \lceil & \; & \rceil & \; & \; & \; & \lceil & \; & \rceil & \rhd & V \ \; & \; & | & 0 & | & \xleftarrow{A} & \xleftrightarrow{Source} & \xrightarrow{B} & | & \theta & | & \; & \; \ H & \lhd & \lfloor & \; & \rfloor & \; & \; & \; & \lfloor & \; & \rfloor & \rhd & H \ \; & \; & PA_{A} & \; & \; & \; & PA_{B} & \; & \; \end{matrix} \nonumber$
The entangled two-photon polarization state is written in the circular and linear polarization bases,
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |L\rangle_{A} | L \rangle_{B}+| R \rangle_{A} | R \rangle_{B} ]=\frac{1}{\sqrt{2}}[ |V\rangle_{A} | V \rangle_{B}-| H \rangle_{A} | H \rangle_{B} ] \text{using} \quad | L \rangle=\frac{1}{\sqrt{2}}[ |V\rangle+ i | H \rangle ] \quad | R \rangle=\frac{1}{\sqrt{2}}[ |V\rangle- i | H \rangle ] \nonumber$
The vertical (eigenvalue +1) and horizontal (eigenvalue -1) polarization states for the photons in the measurement plane are given below. $\theta$ is the angle of the measuring PA.
$\mathrm{V}(\theta) :=\left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right)\quad \mathrm{H}(\theta) :=\left( \begin{array}{l}{-\sin (\theta)} \ {\cos (\theta)}\end{array}\right) \quad \mathrm{V}(0)=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad \mathrm{H}(0)=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
If photon A is found to have vertical polarization, photon B also has vertical polarization. The probability that photon B registers vertical polarization when measured at an angle $\theta$, giving a composite eigenvalue of +1, is
$\left(\mathrm{V}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2} \rightarrow \cos (\theta)^{2} \nonumber$
If photon A is found to have vertical polarization, photon B also has vertical polarization. The probability that photon B registers horizontal polarization when measured at an angle $\theta$, giving a composite eigenvalue of -1, is
$\left(\mathrm{H}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2} \rightarrow \sin (\theta)^{2} \nonumber$
Therefore the overall quantum correlation coefficient or expectation value is:
$\mathrm{E}(\theta) :=\left(\mathrm{V}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2}-\left(\mathrm{H}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2} \text { simplify } \rightarrow \cos (2 \cdot \theta) \quad \mathrm{E}(0 \cdot \mathrm{deg})=1 \quad \mathrm{E}(30 \cdot \mathrm{deg})=0.5 \quad \mathrm{E}(90 \cdot \mathrm{deg})=-1 \nonumber$
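The same correlation function is easy to verify numerically. In the sketch below, `Hpol` stands in for the worksheet's H(θ) to avoid clashing with the Hadamard symbol used elsewhere in this section.

```python
import numpy as np

def V(t):
    """Vertical polarization state at analyzer angle t (radians)."""
    return np.array([np.cos(t), np.sin(t)])

def Hpol(t):
    """Horizontal polarization state at analyzer angle t."""
    return np.array([-np.sin(t), np.cos(t)])

def E(t):
    """Correlation coefficient P(+1) - P(-1) = cos(2t)."""
    return (V(t) @ V(0)) ** 2 - (Hpol(t) @ V(0)) ** 2

for deg in (0, 30, 90):
    print(deg, round(E(np.radians(deg)), 3))
# 0 -> 1.0, 30 -> 0.5, 90 -> -1.0
```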
Now it will be shown that a local-realistic, hidden-variable model can be constructed which is in agreement with the quantum calculations for 0 and 90 degrees, but not for 30 degrees.
If objects have well-defined properties independent of measurement, the results for $\theta$ = 0 degrees and $\theta$ = 90 degrees require that the photons carry the following instruction sets, where the hexagonal vertices refer to $\theta$ values of 0, 30, 60, 90, 120, and 150 degrees.
There are eight possible instruction sets, six of the type on the left and two of the type on the right. The white circles represent vertical polarization with eigenvalue +1 and the black circles represent horizontal polarization with eigenvalue -1. In any given measurement, according to local realism, both photons (A and B) carry identical instruction sets, in other words the same one of the eight possible sets.
The problem is that while these instruction sets are in agreement with the 0 and 90 degree quantum calculations, with expectation values of +1 and -1 respectively, they can't explain the 30 degree predictions of quantum mechanics. The figure on the left shows that the same result should be obtained 4 times with joint eigenvalue +1, and the opposite result twice with joint eigenvalue of -1. For the figure on the right the opposite polarization is always observed giving a joint eigenvalue of -1. Thus, local realism predicts an expectation value of 0 in disagreement with the quantum result of 0.5.
$\frac{6 \cdot(1-1+1+1-1+1)+2 \cdot(-1-1-1-1-1-1)}{8}=0 \nonumber$
This analysis is based on "Simulating Physics with Computers" by Richard Feynman, published in the International Journal of Theoretical Physics (volume 21, pages 481-485), and Julian Brown's Quest for the Quantum Computer (pages 91-100). Feynman used the experiment outlined above to establish that a local classical computer could not simulate quantum physics.
A local classical computer manipulates bits which are in well-defined states, 0s and 1s, shown above graphically in white and black. However, these classical states are incompatible with the quantum mechanical analysis which is consistent with experimental results. This two-photon experiment demonstrates that simulation of quantum physics requires a computer that can manipulate 0s and 1s, superpositions of 0 and 1, and entangled superpositions of 0s and 1s.
Simulation of quantum physics requires a quantum computer. The following quantum circuit simulates this experiment exactly. The Hadamard and CNOT gates transform the input, |10>, into the required entangled Bell state. R($\theta$) rotates the polarization of photon B clockwise through an angle $\theta$. Finally measurement yields one of the four possible output states: |00>, |01>, |10> or |11>.
$\begin{matrix} | 1 \rangle & \rhd & H & \cdot & \cdots & \rhd & \text{Measure 0 or 1} \ \; & \; & \; & | & \; & \; & \; \ | 0 \rangle & \rhd & \cdots & \oplus & R(\theta) & \rhd & \text{Measure 0 or 1} \end{matrix} \nonumber$
The following algebraic analysis of the quantum circuit shows that it yields the correct expectation value for all values of $\theta$. This analysis requires the truth tables for the matrix operators. Recall from above that |0> = |V> with eigenvalue +1, and |1> = |H> with eigenvalue -1.
$H=\left[ \begin{array}{ccc}{0} & {\text { to }} & {\frac{1}{\sqrt{2}} \cdot(0+1)} \ {1} & {\text { to }} & {\frac{1}{\sqrt{2}} \cdot(0-1)}\end{array}\right] \quad \mathrm{CNOT}=\left( \begin{array}{ccc}{00} & {\mathrm{to}} & {00} \ {01} & {\mathrm{to}} & {01} \ {10} & {\mathrm{to}} & {11} \ {11} & {\mathrm{to}} & {10}\end{array}\right) \quad \begin{matrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} & \xrightarrow{R(\theta)} & \begin{pmatrix} \cos \theta \ \sin \theta \end{pmatrix} \ \begin{pmatrix} 0 \ 1 \end{pmatrix} & \xrightarrow{R(\theta)} & \begin{pmatrix} - \sin \theta \ \cos \theta \end{pmatrix} \end{matrix} \nonumber$
$\begin{array}{c}{|1 \rangle | 0 \rangle=| 10\rangle} \ {\mathrm{H} \otimes \mathrm{I}} \ {\frac{1}{\sqrt{2}}[ |0\rangle-| 1 \rangle ] | 0 \rangle=\frac{1}{\sqrt{2}}[ |00\rangle-| 10\rangle]} \ {\text { CNOT }} \ {\frac{1}{\sqrt{2}}[ |00\rangle-| 11\rangle]} \ {\mathrm{I} \otimes \mathrm{R}(\theta)} \ {\frac{1}{\sqrt{2}}[ |0\rangle(\cos \theta | 0\rangle+\sin \theta | 1 \rangle )-| 1 \rangle(-\sin \theta | 0\rangle+\cos \theta | 1 \rangle ) ]} \ {\Downarrow} \ {\frac{1}{\sqrt{2}}[\cos \theta | 00\rangle+\sin \theta | 01 \rangle+\sin \theta | 10 \rangle-\cos \theta | 11 \rangle ]} \ {\text{Probabilities}} \ {\Downarrow} \ {\frac{\cos ^{2} \theta}{2} | 00 \rangle+\frac{\sin ^{2} \theta}{2} | 01 \rangle+\frac{\sin ^{2} \theta}{2} | 10 \rangle+\frac{\cos ^{2} \theta}{2} | 11 \rangle} \end{array} \nonumber$
|00> = |VV> and |11> = |HH> have a composite eigenvalue of +1. |01> = |VH> and |10> = |HV> have a composite eigenvalue of -1. Therefore, the expectation value is $\cos^{2} \theta-\sin ^{2} \theta=\cos (2 \theta)$, in agreement with the earlier quantum calculation.
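A NumPy sketch of this two-qubit circuit confirms that the expectation value is cos 2θ for arbitrary angles (gate matrices as defined earlier in this section):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])

def R(t):
    """Rotate photon B's polarization through angle t (radians)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def E(t):
    psi0 = np.zeros(4); psi0[2] = 1          # input state |10>
    out = np.kron(I, R(t)) @ CNOT @ np.kron(H, I) @ psi0
    parity = np.array([1, -1, -1, 1])        # |00>,|11> -> +1; |01>,|10> -> -1
    return float(out ** 2 @ parity)

print(round(E(np.radians(30)), 3))  # 0.5 = cos(60 deg)
```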
1.110: The Gram-Schmidt Procedure
In this exercise the Gram-Schmidt method will be used to create an orthonormal basis set from the following vectors which are neither normalized nor orthogonal.
$\begin{matrix} u1 = \begin{pmatrix} 1 + i \ 1 \ i \end{pmatrix} & u2 = \begin{pmatrix} i \ 3 \ 1 \end{pmatrix} & u3 = \begin{pmatrix} 0 \ 28 \ 0 \end{pmatrix} \end{matrix} \nonumber$
Demonstrate that the vectors are not normalized and are not orthogonal.
$\begin{matrix} \left( \overline{u1} \right)^T u1 = 4 & \left( \overline{u2} \right)^T u2 = 11 & \left( \overline{u3} \right)^T u3 = 784 \ \left( \overline{u1} \right)^T u2 = 4 & \left( \overline{u1} \right)^T u3 = 28 & \left( \overline{u2} \right)^T u3 = 84 \end{matrix} \nonumber$
Using the first vector make u2 orthogonal to it by subtracting its projection on u1.
$u2 = u2 - \frac{ \left( \overline{u1} \right)^T u2}{ \left( \overline{u1} \right)^T u1} u1 \nonumber$
Make u3 orthogonal to u1 and u2 by subtracting its projection on u1 and u2.
$u3 = u3 - \frac{ \left( \overline{u1} \right)^T u3}{ \left( \overline{u1} \right)^T u1} u1 - \frac{ \left( \overline{u2} \right)^T u3}{ \left( \overline{u2} \right)^T u2} u2 \nonumber$
Finally, normalize the new orthogonal vectors.
$\begin{matrix} u1 = \frac{u1}{ \sqrt{ \left( \overline{u1} \right)^T u1}} & u2 = \frac{u2}{ \sqrt{ \left( \overline{u2} \right)^T u2}} & u3 = \frac{u3}{ \sqrt{ \left( \overline{u3} \right)^T u3}} \end{matrix} \nonumber$
Demonstrate that an orthonormal basis set has been created.
$\begin{matrix} \left( \overline{u1} \right)^T u1 = 1 & \left( \overline{u2} \right)^T u2 = 1 & \left( \overline{u3} \right)^T u3 = 1 \ \left( \overline{u1} \right)^T u2 = 0 & \left( \overline{u1} \right)^T u3 = 0 & \left( \overline{u2} \right)^T u3 = 0 \end{matrix} \nonumber$
Display the orthonormal basis set.
$\begin{matrix} u1 = \begin{pmatrix} 0.5 + 0.5i \ 0.5 \ 0.5i \end{pmatrix} & u2 = \begin{pmatrix} -0.378 \ 0.756 \ 0.378 - 0.378i \end{pmatrix} & u3 = \begin{pmatrix} 0.085 - 0.592i \ 0.423 \ -0.676 + 0.085i \end{pmatrix} \end{matrix} \nonumber$
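The same orthonormalization can be scripted. The sketch below reproduces the Mathcad steps with NumPy, using the classical Gram-Schmidt procedure and the complex inner product $\left( \overline{u} \right)^T v$; the function name is my own.

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis spanning the given complex vectors."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for b in basis:
            w = w - (b.conj() @ v) * b   # subtract projection of v onto b
        basis.append(w / np.sqrt((w.conj() @ w).real))
    return basis

u1 = np.array([1 + 1j, 1, 1j])
u2 = np.array([1j, 3, 1])
u3 = np.array([0, 28, 0])
e1, e2, e3 = gram_schmidt([u1, u2, u3])
print(np.round(e1, 3))  # (0.5+0.5i, 0.5, 0.5i), matching the worksheet
```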
1.12: Quantum Computation- A Short Course
Quantum Correlations Illustrated With Spin-1/2 Particles
A spin-1/2 pair is prepared in an entangled singlet state and the individual particles travel in opposite directions on the y-axis to a pair of Stern-Gerlach detectors which are set up to measure spin in the x-z plane. Particle A's spin is measured along the z-axis, and particle B's spin is measured at an angle $\theta$ with respect to the z-axis.
Spin-up eigenstate Eigenvalue +1
$\varphi_{\mathrm{u}}(\theta) :=\left( \begin{array}{c}{\cos \left(\frac{\theta}{2}\right)} \ {\sin \left(\frac{\theta}{2}\right)}\end{array}\right) \nonumber$
Spin-down eigenstate Eigenvalue -1
$\varphi_{\mathrm{d}}(\theta) :=\left( \begin{array}{c}{-\sin \left(\frac{\theta}{2}\right)} \ {\cos \left(\frac{\theta}{2}\right)}\end{array}\right) \nonumber$
The singlet state wave function in the xz-plane:
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |\uparrow\rangle_{\mathrm{A}} | \downarrow \rangle_{\mathrm{B}}-| \downarrow \rangle_{\mathrm{A}} | \uparrow \rangle_{\mathrm{B}} ] =\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\cos \left(\frac{\theta}{2}\right)} \ {\sin \left(\frac{\theta}{2}\right)}\end{array}\right)_{A} \otimes \left( \begin{array}{c}{-\sin \left(\frac{\theta}{2}\right)} \ {\cos \left(\frac{\theta}{2}\right)}\end{array}\right)_{B}-\left( \begin{array}{c}{-\sin \left(\frac{\theta}{2}\right)} \ {\cos \left(\frac{\theta}{2}\right)}\end{array}\right)_{A} \otimes \left( \begin{array}{c}{\cos \left(\frac{\theta}{2}\right)} \ {\sin \left(\frac{\theta}{2}\right)}\end{array}\right)_{B}\right] =\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
The spin operator in the $\theta$ direction:
$\mathrm{S}(\theta) :=\varphi_{\mathrm{u}}(\theta) \cdot \varphi_{\mathrm{u}}(\theta)^{\mathrm{T}}-\varphi_{\mathrm{d}}(\theta) \cdot \varphi_{\mathrm{d}}(\theta)^{\mathrm{T}} \text { simplify } \rightarrow \left( \begin{array}{cc}{\cos (\theta)} & {\sin (\theta)} \ {\sin (\theta)} & {-\cos (\theta)}\end{array}\right) \nonumber$
$\sigma_{\mathrm{Z}} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \sigma_{\mathrm{X}}=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \quad \mathrm{S}(\theta) :=\cos (\theta) \cdot \sigma_{\mathrm{Z}}+\sin (\theta) \cdot \sigma_{\mathrm{x}} \rightarrow \left( \begin{array}{cc}{\cos (\theta)} & {\sin (\theta)} \ {\sin (\theta)} & {-\cos (\theta)}\end{array}\right) \nonumber$
The composite spin operator in the z-direction for A and at an angle θ in the xz-plane for B.
$S(0) \otimes S(\theta)=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \otimes \left( \begin{array}{cc}{\cos \theta} & {\sin \theta} \ {\sin \theta} & {-\cos \theta}\end{array}\right)=\left( \begin{array}{cccc}{\cos \theta} & {\sin \theta} & {0} & {0} \ {\sin \theta} & {-\cos \theta} & {0} & {0} \ {0} & {0} & {-\cos \theta} & {-\sin \theta} \ {0} & {0} & {-\sin \theta} & {\cos \theta}\end{array}\right) \nonumber$
Calculate and display the expectation value or correlation coefficient.
$\Psi_{m} :=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
$E(\theta) :=\Psi_{m}^{T} \left( \begin{array}{cccc}{\cos (\theta)} & {\sin (\theta)} & {0} & {0} \ {\sin (\theta)} & {-\cos (\theta)} & {0} & {0} \ {0} & {0} & {-\cos (\theta)} & {-\sin (\theta)} \ {0} & {0} & {-\sin (\theta)} & {\cos (\theta)}\end{array}\right) \cdot \Psi_{m} \rightarrow-\cos (\theta) \nonumber$
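A quick numerical check of E(θ) = -cos θ, with the spin operator built from the Pauli matrices exactly as above (a sketch; names are my own):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]])
sx = np.array([[0, 1], [1, 0]])

def S(t):
    """Spin operator at angle t (radians) in the xz-plane."""
    return np.cos(t) * sz + np.sin(t) * sx

psi = np.array([0, 1, -1, 0]) / np.sqrt(2)   # singlet state

def E(t):
    """Correlation coefficient <S(0) x S(t)> = -cos(t)."""
    return float(psi @ np.kron(S(0.0), S(t)) @ psi)

print(round(E(0), 3), round(E(np.pi / 4), 3))  # -1.0 -0.707
```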
Demonstrate that a hidden-value model that assigns spin orientations in non-orthogonal directions to A and B disagrees with the quantum calculation at 45 degrees.
$\left[ \begin{matrix} \text{Particle A} & \text{Particle B} & \hat{S}_{0}(\mathrm{A}) \cdot \hat{S}_{0}(\mathrm{B}) & \hat{S}_{45}(\mathrm{A}) \cdot \hat{S}_{45}(\mathrm{B}) & \hat{S}_{0}(\mathrm{A}) \cdot \hat{S}_{45}(\mathrm{B}) \ | \uparrow \rangle | \nearrow \rangle & | \downarrow \rangle | \swarrow \rangle & -1 & -1 & -1 \ | \uparrow \rangle | \swarrow \rangle & | \downarrow \rangle | \nearrow \rangle & -1 & -1 & 1 \ | \downarrow \rangle | \nearrow \rangle & | \uparrow \rangle | \swarrow \rangle & -1 & -1 & 1 \ | \downarrow \rangle | \swarrow \rangle & | \uparrow \rangle | \nearrow \rangle & -1 & -1 & -1 \ \text{Realist} & \text{Value} & -1 & -1 & 0 \ \text{Quantum} & \text{Value} & -1 & -1 & -0.707 \end{matrix} \right] \nonumber$
Quantum Correlations Illustrated With Photons
A two-stage atomic cascade emits entangled photons (A and B) in opposite directions with the same circular polarization according to observers in their path. The experiment involves the measurement of photon polarization states in the vertical/horizontal measurement basis, and allows for the rotation of the right-hand detector through an angle of $\theta$, in order to explore the consequences of quantum mechanical entanglement. PA stands for polarization analyzer and could simply be a calcite crystal.
$\begin{matrix} V & \lhd & \lceil & \; & \rceil & \; & \; & \; & \lceil & \; & \rceil & \rhd & V \ \; & \; & | & 0 & | & \xleftarrow{A} & \xleftrightarrow{Source} & \xrightarrow{B} & | & \theta & | & \; & \; \ H & \lhd & \lfloor & \; & \rfloor & \; & \; & \; & \lfloor & \; & \rfloor & \rhd & H \ \; & \; & \; & PA_{A} & \; & \; & \; & \; & \; & PA_{B} & \; & \; & \; \end{matrix} \nonumber$
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |L\rangle_{A} | L \rangle_{B}+| R \rangle_{A} | R \rangle_{B} ]=\frac{1}{\sqrt{2}}[ |V\rangle_{A} | V \rangle_{B}-| H \rangle_{A} | H \rangle_{B} ]=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {0} \ {0} \ {-1}\end{array}\right) \nonumber$
using
$| L \rangle=\frac{1}{\sqrt{2}}[ |V\rangle+ i | H \rangle ] \quad | R \rangle=\frac{1}{\sqrt{2}}[ |V\rangle- i | H \rangle ] \nonumber$
Calculate the composite polarization measurement operator for 0 degrees (A) and $\theta$ degrees (B):
$\mathrm{V}(\theta) :=\left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right) \quad H(\theta) :=\left( \begin{array}{c}{-\sin (\theta)} \ {\cos (\theta)}\end{array}\right) \ \mathrm{V}(\theta) \cdot \mathrm{V}(\theta)^{\mathrm{T}}-\mathrm{H}(\theta) \cdot \mathrm{H}(\theta)^{\mathrm{T}} \text { simplify } \rightarrow \left( \begin{array}{cc}{\cos (2 \cdot \theta)} & {\sin (2 \cdot \theta)} \ {\sin (2 \cdot \theta)} & {2 \cdot \sin (\theta)^{2}-1}\end{array}\right) \nonumber$
$1-2 \cdot \sin (\theta)^{2} \text { simplify } \rightarrow \cos (2 \cdot \theta) \nonumber$
$\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \otimes \left( \begin{array}{cc}{\cos (2 \theta)} & {\sin (2 \theta)} \ {\sin (2 \theta)} & {-\cos (2 \theta)}\end{array}\right)=\left( \begin{array}{cccc}{\cos (2 \theta)} & {\sin (2 \theta)} & {0} & {0} \ {\sin (2 \theta)} & {-\cos (2 \theta)} & {0} & {0} \ {0} & {0} & {-\cos (2 \theta)} & {-\sin (2 \theta)} \ {0} & {0} & {-\sin (2 \theta)} & {\cos (2 \theta)}\end{array}\right) \nonumber$
Calculate and display the expectation value or correlation coefficient.
$\Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {-1}\end{array}\right) \quad \mathrm{E}(\theta)=\Psi^{\mathrm{T}} \cdot \left( \begin{array}{cccc}{\cos (2 \cdot \theta)} & {\sin (2 \cdot \theta)} & {0} & {0} \ {\sin (2 \cdot \theta)} & {-\cos (2 \cdot \theta)} & {0} & {0} \ {0} & {0} & {-\cos (2 \cdot \theta)} & -\sin (2 \cdot \theta) \ {0} & {0} & {-\sin (2 \cdot \theta)} & {\cos (2 \cdot \theta)}\end{array}\right) \cdot \Psi \text { simplify } \rightarrow \cos (2 \cdot \theta) \nonumber$
Quantum expectation value for 30 degrees: $\mathrm{E}(30 \cdot \mathrm{deg})=0.5$
Demonstrate that a hidden-value model that agrees with the 0 and 90 degree quantum expectation values, disagrees with the 30 degree result.
Hidden variable expectation value for 30 degrees:
$\frac{6 \cdot(1-1+1+1-1+1)+2 \cdot(-1-1-1-1-1-1)}{8}=0 \nonumber$
I turn to David Lindley again (Where Does the Weirdness Go? page 15) for a concluding comment on the issues dealt with in this section.
We are, through long familiarity, grounded in the assumption of an external, objective, and definite reality, regardless of how much or how little we actually know about it. It is hard to find the language or the concepts to deal with a “reality” that only becomes real when it is measured. There is no easy way to grasp this change of perspective, but persistence and patience allow a certain new familiarity to supplant the old.
Bibliography
• The Quest for the Quantum Computer, Julian Brown
• The Meaning of Quantum Theory, Jim Baggott
• Quantum Reality, Nick Herbert
• Where Does the Weirdness Go?, David Lindley
• The Cosmic Code, Heinz Pagels
• Quantum Mechanics and Experience, David Z Albert
• Quantum Mechanics, Alastair Rae
• Quantum Physics: Illusion or Reality, Alastair Rae
• The Quantum Divide, Gerry and Bruno
• Quantum Weirdness, William J. Mullin
• Through Two Doors at Once, Anil Ananthaswamy
• Beyond Weird, Philip Ball
• Quantum Physics: What Everyone Needs to Know, Michael G. Raymer
• Quantum Theory: A Very Short Introduction, John Polkinghorne
• The Quantum Challenge, Greenstein and Zajonc
• Quantum Enigma, Rosenblum and Kuttner
• Programming the Universe, Seth Lloyd
Wave-particle duality as expressed by the de Broglie wave equation
$\lambda=\frac{h}{m v}=\frac{h}{p} \nonumber$
is the seminal concept of quantum mechanics. On the left side we have the wave property, wavelength, and on the right in a reciprocal relationship mediated by the ubiquitous Planck’s constant, we have the particle property, momentum.
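As a numerical illustration of the de Broglie relation (my own example, not from the text): an electron moving at 1% of the speed of light has a wavelength of a few ångströms, comparable to atomic dimensions, which is why electrons diffract from crystals.

```python
h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron rest mass, kg
v = 0.01 * 2.99792458e8  # 1% of the speed of light, m/s

wavelength = h / (m_e * v)   # lambda = h / (m v)
print(f"{wavelength:.3e} m") # about 2.4e-10 m, i.e. roughly 2.4 angstroms
```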
Wave and particle are physically incompatible concepts because waves are spatially delocalized, while particles are spatially localized. In spite of this incongruity we find in quantum theory that they are necessary companions in the analysis of atomic and molecular phenomena. Both concepts are required for a complete examination of experiments at the nanoscale level.
This view can be summarized by saying that in quantum-level experiments we always detect particles, but we predict or interpret the experimental outcome by assuming wavelike behavior prior to particle detection. As Bragg once said, “Everything in the future is a wave; everything in the past is a particle.” It has also been said that between release and detection particles behave like waves.
Wave-particle duality, a strange dichotomous co-dependency, was first recognized as a permanent feature of modern nanoscience when Niels Bohr proclaimed the complementarity principle as the cornerstone of the Copenhagen interpretation of quantum theory. This scientific dogma states, among other things, that there will be no future resolution of the cognitive dissonance that results from analyses that require, at root level, the use of irreconcilable concepts such as wave and particle.
In what follows it will be shown that wave-particle duality leads naturally to other conjugate relationships between traditional physical variables such as position and momentum, and energy and time. The vehicle for this extension will turn out to be the Fourier transform.
To reason mathematically about wave behavior requires a wave function. The one-dimensional, time-independent plane wave expression for a free particle is suitable for this purpose.
$\exp \left(i 2 \pi \frac{x}{\lambda}\right) \nonumber$
We see that this expression contains the basics of wave-particle duality; x represents position, a particle characteristic, and $\lambda$ represents wave behavior. Substitution of the de Broglie equation for $\lambda$, with $\hbar = h / 2\pi$, yields one of the most important mathematical functions in quantum mechanics.
$\exp \left(\frac{i p x}{\hbar}\right) \nonumber$
By convention this function is called the momentum eigenfunction in the coordinate representation. We express this in Dirac notation as follows (the normalization constant is ignored for the time being).
$\langle x | p\rangle=\exp \left(\frac{i p x}{\hbar}\right) \nonumber$
Its complex conjugate is the position eigenfunction in the momentum representation.
$\langle p | x\rangle=\langle x | p\rangle^{*}=\exp \left(\frac{-i p x}{\hbar}\right) \nonumber$
Both expressions are also simple examples of Fourier transforms. They are dictionaries for translating between two different languages or representations. A rudimentary graphical illustration of this ability to translate also provides a concise illustration of the uncertainty principle. (See The Emperor’s New Mind, by Roger Penrose, page 246.)
A quon (“A quon is any entity, no matter how immense, that exhibits both wave and particle aspects in the peculiar quantum manner.” Quantum Reality by Nick Herbert page 64.) with a precise position is represented by a Dirac delta function in coordinate space and a helix in momentum space. If the position is known exactly, the momentum is completely unknown because $|\langle p | x_{1}\rangle|^{2}$ is a constant for all values of the momentum. All momentum values have the same probability of being observed.
A quon has position $x_{1} : | x_{1} \rangle$
$\begin{matrix} \text{Coordinate space} & \Leftrightarrow & \text{Momentum space} \ \langle x | x_{1}\rangle=\delta\left(x-x_{1}\right) & \; & \langle p | x_{1}\rangle=\exp \left(-\frac{i p x_{1}}{\hbar}\right) \end{matrix} \nonumber$
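The tutorial's calculations are carried out in Mathcad; the flat momentum distribution for a position eigenstate can also be verified with a minimal NumPy sketch (atomic units, $\hbar = 1$; the position $x_1 = 0.5$ is an arbitrary, hypothetical choice):

```python
import numpy as np

# A quon with definite position x1 has momentum-space amplitude
# <p|x1> = exp(-i*p*x1) in atomic units (hbar = 1).  The modulus
# squared is the same for every p: all momenta are equally likely.
x1 = 0.5                              # arbitrary, hypothetical position
p = np.linspace(-50.0, 50.0, 1001)    # a grid of momentum values
amplitude = np.exp(-1j * p * x1)      # <p|x1> on the grid
probability = np.abs(amplitude) ** 2  # |<p|x1>|^2

print(probability.min(), probability.max())  # both ~1.0: a flat distribution
```

Exact knowledge of position thus implies complete ignorance of momentum, as the uncertainty principle requires.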
These rudimentary examples of the use of the Fourier transform in quantum mechanics involve infinitesimal points in coordinate and momentum space. To employ the Fourier transform for objects of finite dimensions requires integration over the spatial or momentum dimensions.
For example, suppose we ask what the pattern of diffracted light on a distant screen would look like if a light source illuminated a mask with a single small circular aperture. This, of course, yields the well-known Airy diffraction pattern, which is nothing more than the Fourier transform of the coordinate wave function (the circular aperture) into momentum space. The Airy pattern calculation is given in the following tutorial, along with an illustration of how varying the radius of the hole demonstrates the uncertainty principle.
Calculating the Airy Diffraction Pattern
The Airy diffraction pattern is created by illuminating a screen containing a circular hole with photons. The experiment can be performed with weak sources such that there is only one photon interacting with the screen at a time. This photon-screen interaction constitutes a position measurement.
The position wave function has a constant amplitude within the area of the hole and is shown to be normalized.
$\Psi(x, y) :=\frac{1}{\sqrt{\pi \cdot R^{2}}} \text{ for } x^{2}+y^{2} \leq R^{2} \quad \int_{-R}^{R} \int_{-\sqrt{R^{2}-x^{2}}}^{\sqrt{R^{2}-x^{2}}} \Psi(x, y)^{2} \mathrm{d} y \, \mathrm{d} x=1 \nonumber$
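The normalization integral above can be checked numerically. This is a Python/SciPy sketch of the same double integral, not part of the original Mathcad worksheet:

```python
import numpy as np
from scipy.integrate import dblquad

# Check that the constant hole wave function Psi = 1/sqrt(pi R^2)
# is normalized over the circular aperture of radius R.
R = 0.2  # hole radius used in the tutorial

# Integrate Psi^2 = 1/(pi R^2) over the disk x^2 + y^2 <= R^2.
norm, _ = dblquad(
    lambda y, x: 1.0 / (np.pi * R**2),    # Psi(x, y)^2, constant on the disk
    -R, R,                                # outer limits on x
    lambda x: -np.sqrt(R**2 - x**2),      # inner lower limit on y
    lambda x: np.sqrt(R**2 - x**2),       # inner upper limit on y
)
print(norm)  # ~1.0
```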
The Airy diffraction pattern is the Fourier transform of the position wave function into the momentum representation. In other words, the interference pattern at the detection screen actually represents a momentum measurement. The following calculations are carried out in atomic units using a hole radius of 0.2.
Hole radius: $R :=0.2$
Calculate the Airy diffraction pattern:
$\Delta :=100 \quad \mathrm{N} :=80 \quad \mathrm{j} :=0 \ldots \mathrm{N} \quad \mathrm{p}_{\mathrm{x}_{\mathrm{j}}} :=-\Delta+\frac{2 \cdot \Delta \cdot \mathrm{j}}{\mathrm{N}} \quad \mathrm{k} :=0 \ldots \mathrm{N} \quad \mathrm{p}_{\mathrm{y}_{k}} :=-\Delta+\frac{2 \cdot \Delta \cdot \mathrm{k}}{\mathrm{N}} \nonumber$
$\Phi\left(\mathrm{p}_{\mathrm{x}}, \mathrm{p}_{\mathrm{y}}\right) :=\frac{1}{\pi} \cdot \int_{-\mathrm{R}}^{\mathrm{R}} \int_{-\sqrt{\mathrm{R}^{2}-\mathrm{x}^{2}}}^{\sqrt{\mathrm{R}^{2}-\mathrm{x}^{2}}} \frac{1}{\sqrt{\pi \cdot \mathrm{R}^{2}}} \cdot \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{x}} \cdot \mathrm{x}\right) \cdot \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{y}} \cdot \mathrm{y}\right) dy\;dx \quad \mathrm{P}_{\mathrm{j}, \mathrm{k}} :=\left(\left|\Phi\left(\mathrm{p}_{\mathrm{x}_{\mathrm{j}}}, \mathrm{p}_{\mathrm{y}_{k}}\right)\right|\right)^{2} \nonumber$
Display the Airy diffraction pattern.
Truncating the high intensity central disk provides a better picture of the outer maxima and minima.
Examining a radial slice of the Airy diffraction pattern provides a simple illustration of the uncertainty principle. Assume that the position uncertainty is given by the diameter of the hole and that the momentum uncertainty is given by the momentum range of the central disk.
$\mathrm{py} :=0 \quad \mathrm{px} :=-100,-99 \ldots 100 \quad \Phi(\mathrm{px}, \mathrm{py}, \mathrm{R}) :=\frac{1}{\pi} \cdot \int_{-\mathrm{R}}^{\mathrm{R}} \int_{-\sqrt{\mathrm{R}^{2}-\mathrm{x}^{2}}}^{\sqrt{\mathrm{R}^{2}-\mathrm{x}^{2}}} \frac{1}{\sqrt{\pi \cdot \mathrm{R}^{2}}} \cdot \exp (-\mathrm{i} \cdot \mathrm{px} \cdot \mathrm{x}) \cdot \exp (-\mathrm{i} \cdot \mathrm{py} \cdot \mathrm{y}) dy\;dx \nonumber$
For a diameter of 0.4 the position-momentum uncertainty product is:
$0.4 \cdot 38.5 = 15.4 \nonumber$
For a diameter of 0.2 the position-momentum uncertainty product is:
$0.2 \cdot 77.0 = 15.4 \nonumber$
The reciprocal relationship between the uncertainty in position and momentum is clearly revealed in this example.
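The constancy of this product can be confirmed analytically. For a circular hole of radius R the Airy amplitude is proportional to $J_1(pR)/(pR)$, so the central disk extends to the first zero of the Bessel function $J_1$. The following Python sketch (atomic units; not part of the original Mathcad worksheet) uses that fact to show the product is independent of the hole size, consistent with the value of 15.4 read off the plotted slices:

```python
from scipy.special import jn_zeros

# The central Airy disk ends at the first zero of J1(p*R),
# so its momentum half-width is (first zero of J1) / R.
first_zero = jn_zeros(1, 1)[0]        # ~3.8317

for R in (0.2, 0.1):                  # hole radii (diameters 0.4 and 0.2)
    delta_x = 2 * R                   # position uncertainty: hole diameter
    delta_p = 2 * first_zero / R      # momentum width of the central disk
    print(R, delta_x * delta_p)       # ~15.3 in both cases (cf. 15.4 from
                                      # the plotted radial slices)
```

Because delta_x is proportional to R and delta_p to 1/R, the product is the fixed number $4 \times 3.8317 \approx 15.3$ for any hole radius.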
The Double-Slit Experiment
Coordinate wave function:
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \nonumber$
Momentum wave function for infinitesimally thin slits:
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle]=\frac{1}{2 \sqrt{\pi}}\left[\exp \left(-\frac{i p x_{1}}{\hbar}\right)+\exp \left(-\frac{i p x_{2}}{\hbar}\right)\right] \nonumber$
Position of first slit:
$x_{1} : = 0 \nonumber$
Position of second slit:
$x_{2} : = 1 \nonumber$
Momentum wave function for finite slits:
$\Psi (p) : = \frac{\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} +\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}}{\sqrt{2}} \nonumber$
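The finite-slit momentum wave function above can be evaluated in closed form, since each slit integral is $\int_{x_0-\delta/2}^{x_0+\delta/2} e^{-ipx}\,dx = e^{-ipx_0}\,2\sin(p\delta/2)/p$. This NumPy sketch (atomic units; not part of the original Mathcad worksheet) uses that form to compute the fringe pattern:

```python
import numpy as np

# Two-slit momentum wave function with finite slit width delta.
x1, x2, delta = 0.0, 1.0, 0.2   # slit positions and width from the text

def slit(p, x0):
    # Closed form of one slit integral; np.sinc(u) = sin(pi*u)/(pi*u),
    # so delta*sinc(p*delta/(2*pi)) = 2*sin(p*delta/2)/p.
    return (np.exp(-1j * p * x0) * delta * np.sinc(p * delta / (2 * np.pi))
            / np.sqrt(2 * np.pi * delta))

def psi(p):
    return (slit(p, x1) + slit(p, x2)) / np.sqrt(2)

p = np.linspace(-30, 30, 2001)
intensity = np.abs(psi(p)) ** 2   # fringes of spacing 2*pi/(x2 - x1)
                                  # under a single-slit sinc envelope
```

With slit separation $x_2 - x_1 = 1$, maxima occur at $p = 2\pi n$ and zeros at $p = (2n+1)\pi$, the familiar double-slit fringe spacing.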
According to the Encyclopedia Britannica, Fresnel and Arago “using an apparatus based on Young’s experiment” observed that “two beams polarized in mutually parallel planes never yield fringes.” In the following tutorial this phenomenon is examined from the quantum mechanical perspective and a critique of the concept of the quantum eraser is provided.
Which Path Information and the Quantum Eraser
This tutorial examines the real reason which‐path information destroys the double‐slit diffraction pattern and how the so‐called "quantum eraser" restores it. The wave function for a photon illuminating the slit screen is written as a superposition of the photon being present at both slits simultaneously. The double‐slit diffraction pattern is calculated by projecting this superposition into momentum space. This is a Fourier transform for which the mathematical details can be found in the Appendix.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \qquad \Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle] \nonumber$
Attaching polarizers to the slits creates an entangled superposition of the photon being at slit 1 with vertical polarization and at slit 2 with horizontal polarization. This leads to the following momentum distribution at the detection screen. The interference fringes have disappeared leaving a single‐slit diffraction pattern.
$| \Psi^{\prime} \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle | V \rangle+| x_{2} \rangle | H \rangle ] \qquad \Psi^{\prime}(p)=\langle p | \Psi^{\prime}\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle | V\rangle+\langle p | x_{2}\rangle | H \rangle ] \nonumber$
The usual explanation for this effect is that it is now possible to know which slit the photons went through, and that such knowledge destroys the interference fringes because the photons are no longer in a superposition of passing through both slits, but rather a mixture of passing through one slit or the other.
However, a better explanation is that the superposition persists with orthogonal polarization tags, and because of this the interference (cross) terms in the momentum distribution, $\left|\Psi^{\prime}(p)\right|^{2}$, vanish leaving a pattern at the detection screen which is the sum of two single‐slit diffraction patterns, one from the upper slit and the other from the lower slit.
That this is a reasonable interpretation is confirmed when a so‐called quantum eraser, a polarizer (D) rotated clockwise by 45 degrees relative to the vertical, is placed before the detection screen.
$\Psi^{\prime \prime}(p)=\langle D | \Psi^{\prime}(p)\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle\langle D | V\rangle+\langle p | x_{2}\rangle\langle D | H\rangle]=\frac{1}{2}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle] \nonumber$
The diagonal polarizer is called a quantum eraser because it appears to restore the interference pattern lost because of the which‐path information provided by the V/H polarizers. However, it is clear from this analysis that the diagonal polarizer doesn't actually erase; it simply passes the diagonal component of $| \Psi^{\prime} \rangle$, which then shows an attenuated (by half) version of the original interference pattern produced by $| \Psi \rangle$.
Placing an anti‐diagonal polarizer (rotated counterclockwise by 45 degrees relative to the vertical) before the detection screen causes a 180 degree phase shift in the restored interference pattern.
$\Psi^{\prime \prime \prime}(p)=\langle A | \Psi^{\prime}(p)\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle\langle A | V\rangle+\langle p | x_{2}\rangle\langle A | H\rangle]=\frac{1}{2}[\langle p | x_{1}\rangle-\langle p | x_{2}\rangle] \nonumber$
This phase shift is inconsistent with any straightforward explanation based on the concept of erasure of which‐path information. Erasure implies removal of which‐path information. If which‐path information has been removed, shouldn't the original interference pattern be restored without a phase shift?
Appendix:
The V/H polarization which‐path tags and the D/A polarization ʺerasersʺ in vector format:
$| \mathrm{V} \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad | \mathrm{H} \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \quad | \mathrm{D} \rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right) \quad | \mathrm{A} \rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-1}\end{array}\right) \ \langle\mathrm{D} | \mathrm{V}\rangle=\langle\mathrm{D} | \mathrm{H}\rangle=\langle\mathrm{A} | \mathrm{V}\rangle=\frac{1}{\sqrt{2}}\quad \langle\mathrm{A} | \mathrm{H}\rangle=-\frac{1}{\sqrt{2}} \nonumber$
For infinitesimally thin slits the momentum‐space wave function is,
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)+\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{2}\right)\right] \nonumber$
Assuming a slit width $\delta$, the calculations of $\Psi(p)$, $\Psi^{\prime}(p)$, $\Psi^{\prime\prime}(p)$ and $\Psi^{\prime\prime\prime}(p)$ are carried out as follows:
Position of first slit: $x_{1} \equiv 0$ Position of second slit: $x_{2} \equiv 1$ Slit width: $\delta \equiv 0.2$
$\Psi(p)\equiv\frac{1}{\sqrt{2}} \cdot\left(\int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x+\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x\right) \nonumber$
For $\Psi^{\prime}(p)$ the V/H polarization which‐path tags are added to the two terms of $\Psi(p)$.
$\Psi^{\prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot\left[\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right) \right] \nonumber$
$\Psi^{\prime\prime}(p)$ is the projection of $\Psi^{\prime}(p)$ onto a diagonal polarizer $\langle D |$.
$\Psi^{\prime \prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
$\Psi^{\prime\prime\prime}(p)$ is the projection of $\Psi^{\prime}(p)$ onto an anti‐diagonal polarizer $\langle A |$.
$\Psi^{\prime \prime \prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {-1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
Rewriting $\Psi^{ʹ}(p)$ in terms of $| D \rangle$ and $| A \rangle$ clearly shows the origin of the phase difference between the $\left(|\Psi^{\prime \prime}(\mathrm{p})|\right)^{2}$ and $\left(|\Psi^{\prime \prime \prime}(\mathrm{p})|\right)^{2}$ interference patterns.
$\Psi^{\prime}(p)=\langle p | \Psi^{\prime}\rangle=\frac{1}{2}[(\langle p | x_{1}\rangle+\langle p | x_{2}\rangle) | D \rangle+(\langle p | x_{1}\rangle-\langle p | x_{2}\rangle) | A \rangle ] \nonumber$
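The Appendix calculations can be sketched numerically. The following Python/NumPy code (atomic units; a sketch of the Mathcad worksheet, not a transcription of it) builds the tagged superposition $\Psi^{\prime}(p)$, projects it onto the D and A polarizers, and checks that the two restored fringe patterns are complementary:

```python
import numpy as np

x1, x2, delta = 0.0, 1.0, 0.2   # slit positions and width

def slit(p, x0):
    # Closed form of the slit integral (np.sinc(u) = sin(pi*u)/(pi*u)).
    return (np.exp(-1j * p * x0) * delta * np.sinc(p * delta / (2 * np.pi))
            / np.sqrt(2 * np.pi * delta))

# Polarization vectors from the Appendix.
V = np.array([1.0, 0.0]); H = np.array([0.0, 1.0])
D = np.array([1.0, 1.0]) / np.sqrt(2); A = np.array([1.0, -1.0]) / np.sqrt(2)

p = np.linspace(-30, 30, 2001)

# Psi'(p): entangled superposition, shape (len(p), 2) in the V/H basis.
psi_tagged = (np.outer(slit(p, x1), V) + np.outer(slit(p, x2), H)) / np.sqrt(2)

fringes_D = np.abs(psi_tagged @ D) ** 2   # "eraser": fringes restored
fringes_A = np.abs(psi_tagged @ A) ** 2   # anti-diagonal: shifted 180 deg
no_fringes = np.abs(psi_tagged[:, 0])**2 + np.abs(psi_tagged[:, 1])**2
```

Because D and A form an orthonormal basis, fringes_D + fringes_A reproduces the fringe-free V/H pattern at every p; the D pattern has a maximum at p = 0 exactly where the A pattern has a zero, which is the 180 degree phase shift discussed above.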