Objectives • Understand the concept of area and volume elements in cartesian, polar and spherical coordinates. • Be able to integrate functions expressed in polar or spherical coordinates. • Understand how to normalize orbitals expressed in spherical coordinates, and perform calculations involving triple integrals. Coordinate Systems The simplest coordinate system consists of coordinate axes oriented perpendicularly to each other. These coordinates are known as cartesian coordinates or rectangular coordinates, and you are already familiar with their two-dimensional and three-dimensional representation. In the plane, any point $P$ can be represented by two signed numbers, usually written as $(x,y)$, where the coordinate $x$ is the perpendicular distance from the $y$ axis, and the coordinate $y$ is the perpendicular distance from the $x$ axis (Figure $1$, left). In space, a point is represented by three signed numbers, usually written as $(x,y,z)$ (Figure $1$, right). Often, positions are represented by a vector, $\vec{r}$, shown in red in Figure $1$. In three dimensions, this vector can be expressed in terms of the coordinate values as $\vec{r}=x\hat{i}+y\hat{j}+z\hat{k}$, where $\hat{i}=(1,0,0)$, $\hat{j}=(0,1,0)$ and $\hat{k}=(0,0,1)$ are the so-called unit vectors. We already know that often the symmetry of a problem makes it natural (and easier!) to use other coordinate systems. In two dimensions, the polar coordinate system defines a point in the plane by two numbers: the distance $r$ to the origin, and the angle $\theta$ that the position vector forms with the $x$-axis. Notice the difference between $\vec{r}$, a vector, and $r$, the distance to the origin (and therefore the modulus of the vector). Vectors are often denoted in bold face (e.g. r) without the arrow on top, so be careful not to confuse them with $r$, which is a scalar.
While in cartesian coordinates $x$, $y$ (and $z$ in three dimensions) can take values from $-\infty$ to $\infty$, in polar coordinates $r$ is a non-negative value (consistent with a distance), and $\theta$ can take values in the range $[0,2\pi]$. The relationship between the cartesian and polar coordinates in two dimensions can be summarized as: $\label{eq:coordinates_1} x=r\cos\theta$ $\label{eq:coordinates_2} y=r\sin\theta$ $\label{eq:coordinates_3} r^2=x^2+y^2$ $\label{eq:coordinates_4} \tan \theta=y/x$ In three dimensions, the spherical coordinate system defines a point in space by three numbers: the distance $r$ to the origin, an azimuthal angle $\phi$ that measures the angle between the positive $x$-axis and the projection onto the $xy$-plane of the line from the origin to the point $P$, and a polar angle $\theta$ defined as the angle between the $z$-axis and the line from the origin to the point $P$: Before we move on, it is important to mention that, depending on the field, you may see the Greek letter $\theta$ (instead of $\phi$) used for the angle between the positive $x$-axis and the line from the origin to the point $P$ projected onto the $xy$-plane. That is, $\theta$ and $\phi$ may appear interchanged. This can be very confusing, so you will have to be careful. When using spherical coordinates, it is important that you check how these two angles are defined so you can identify which is which. Spherical coordinates are useful in analyzing systems that are symmetric about a point. For example, a sphere that has the cartesian equation $x^2+y^2+z^2=R^2$ has the very simple equation $r = R$ in spherical coordinates. Spherical coordinates are the natural coordinates for physical situations where there is spherical symmetry (e.g. atoms).
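The two-dimensional relations above can be sketched in a few lines of plain Python (a sketch, not part of the text; the function names are ours). Note that `math.atan2` is used instead of $\tan\theta = y/x$ directly, because $y/x$ alone cannot distinguish between opposite quadrants:

```python
import math

def polar_to_cartesian(r, theta):
    """Return (x, y) from a distance r and an angle theta (Eqs. 1-2)."""
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    """Return (r, theta) with r >= 0 and theta in [0, 2*pi) (Eqs. 3-4)."""
    r = math.hypot(x, y)                      # r^2 = x^2 + y^2
    theta = math.atan2(y, x) % (2 * math.pi)  # quadrant-aware arctangent
    return r, theta

# The point (x, y) = (0, 2) lies on the positive y-axis: r = 2, theta = pi/2.
r, theta = cartesian_to_polar(0.0, 2.0)
```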
The relationship between the cartesian coordinates and the spherical coordinates can be summarized as: $\label{eq:coordinates_5} x=r\sin\theta\cos\phi$ $\label{eq:coordinates_6} y=r\sin\theta\sin\phi$ $\label{eq:coordinates_7} z=r\cos\theta$ These relationships are not hard to derive if one considers the triangles shown in Figure $4$: Area and Volume Elements In any coordinate system it is useful to define a differential area and a differential volume element. In cartesian coordinates the differential area element is simply $dA=dx\;dy$ (Figure $1$), and the volume element is simply $dV=dx\;dy\;dz$. We have already performed double and triple integrals in cartesian coordinates, and used the area and volume elements without paying any special attention. For example, in example [c2v:c2vex1], we were required to integrate the function ${\left | \psi (x,y,z) \right |}^2$ over all space, and without thinking too much we used the volume element $dx\;dy\;dz$. We also knew that “all space” meant $-\infty\leq x\leq \infty$, $-\infty\leq y\leq \infty$ and $-\infty\leq z\leq \infty$, and therefore we wrote: $\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }{\left | \psi (x,y,z) \right |}^2\; dx \;dy \;dz=1 \nonumber$ But what if we had to integrate a function that is expressed in spherical coordinates? Would we just replace $dx\;dy\;dz$ by $dr\; d\theta\; d\phi$? The answer is no, because the volume element in spherical coordinates depends also on the actual position of the point. This will make more sense in a minute. Coming back to coordinates in two dimensions, it is intuitive to understand why the area element in cartesian coordinates is $dA=dx\;dy$ independently of the values of $x$ and $y$. This is shown in the left side of Figure $2$. However, in polar coordinates, we see that the areas of the gray sections, which are both constructed by increasing $r$ by $dr$ and by increasing $\theta$ by $d\theta$, depend on the actual value of $r$.
Notice that the area highlighted in gray increases as we move away from the origin. The area shown in gray can be calculated from geometrical arguments as $dA=\left[\pi (r+dr)^2- \pi r^2\right]\dfrac{d\theta}{2\pi}=\left[2r\,dr+(dr)^2\right]\dfrac{d\theta}{2}.$ Because $dr$ is infinitesimally small, we can neglect the term $(dr)^2$, and $dA= r\; dr\;d\theta$ (see Figure $10.2.3$). Let’s see how this affects a double integral with an example from quantum mechanics. The wave function of the ground state of a two-dimensional harmonic oscillator is: $\psi(x,y)=A e^{-a(x^2+y^2)}$. We know that the quantity $|\psi|^2$ represents a probability density, and as such, needs to be normalized: $\int\limits_{all\;space} |\psi|^2\;dA=1 \nonumber$ This statement is true regardless of whether the function is expressed in polar or cartesian coordinates. However, the limits of integration, and the expression used for $dA$, will depend on the coordinate system used in the integration. In cartesian coordinates, “all space” means $-\infty<x<\infty$ and $-\infty<y<\infty$. The differential of area is $dA=dxdy$: $\int\limits_{all\;space} |\psi|^2\;dA=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} A^2e^{-2a(x^2+y^2)}\;dxdy=1 \nonumber$ In polar coordinates, “all space” means $0<r<\infty$ and $0<\theta<2\pi$. The differential of area is $dA=r\;drd\theta$. The function $\psi(x,y)=A e^{-a(x^2+y^2)}$ can be expressed in polar coordinates as: $\psi(r,\theta)=A e^{-ar^2}$ $\int\limits_{all\;space} |\psi|^2\;dA=\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi} A^2 e^{-2ar^2}r\;d\theta dr=1 \nonumber$ Both versions of the double integral are equivalent, and both can be solved to find the value of the normalization constant ($A$) that makes the double integral equal to 1. In polar coordinates: $\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi} A^2 e^{-2ar^2}r\;d\theta dr=A^2\int\limits_{0}^{\infty}e^{-2ar^2}r\;dr\int\limits_{0}^{2\pi}\;d\theta =A^2\times\dfrac{1}{4a}\times2\pi=1 \nonumber$ Therefore, $A=\sqrt{2a/\pi}$.
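The result $A=\sqrt{2a/\pi}$ can be spot-checked numerically. The sketch below (not from the text; the value $a=1$ is an arbitrary assumption) approximates the radial integral with a midpoint rule and confirms that $|\psi|^2\,r\,dr\,d\theta$ integrates to 1:

```python
import math

a = 1.0                       # arbitrary positive constant (assumption)
A = math.sqrt(2 * a / math.pi)

# Midpoint-rule approximation of the radial integral  int_0^inf e^{-2ar^2} r dr,
# truncated at r = 10, where the integrand is negligible.
n, R = 100_000, 10.0
h = R / n
radial = 0.0
for i in range(n):
    r = (i + 0.5) * h
    radial += math.exp(-2 * a * r * r) * r
radial *= h                   # should approach 1/(4a)

total = A**2 * radial * 2 * math.pi   # the angular integral contributes 2*pi
```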
The same value is of course obtained by integrating in cartesian coordinates. It is now time to turn our attention to triple integrals in spherical coordinates. In cartesian coordinates, the differential volume element is simply $dV= dx\,dy\,dz$, regardless of the values of $x, y$ and $z$. Using the same arguments we used for polar coordinates in the plane, we will see that the differential of volume in spherical coordinates is not $dV=dr\,d\theta\,d\phi$. The geometrical derivation of the volume is a little bit more complicated, but from Figure $4$ you should be able to see that $dV$ depends on $r$ and $\theta$, but not on $\phi$. The volume of the shaded region is $\label{eq:dv} dV=r^2\sin\theta\,d\theta\,d\phi\,dr$ We will exemplify the use of triple integrals in spherical coordinates with some problems from quantum mechanics. We already introduced the Schrödinger equation, and even solved it for a simple system in Section 5.4. We also mentioned that spherical coordinates are the obvious choice when writing this and other equations for systems such as atoms, which are symmetric around a point. As we saw in the case of the particle in the box (Section 5.4), the solution of the Schrödinger equation has an arbitrary multiplicative constant. Because of the probabilistic interpretation of wave functions, we determine this constant by normalization. The same situation arises in three dimensions when we solve the Schrödinger equation to obtain the expressions that describe the possible states of the electron in the hydrogen atom (i.e. the orbitals of the atom). The Schrödinger equation is a partial differential equation in three dimensions, and the solutions will be wave functions that are functions of $r, \theta$ and $\phi$. 
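The volume element of Equation \ref{eq:dv} can also be checked numerically: integrating $r^2\sin\theta\,dr\,d\theta\,d\phi$ over a ball of radius $R$ must reproduce the familiar volume $\frac{4}{3}\pi R^3$. This is a sketch under our own choice of $R$, not a calculation from the text:

```python
import math

R, n = 2.0, 2000

def midpoint(f, lo, hi, n):
    """Midpoint-rule approximation of the integral of f on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# The triple integral separates into three one-variable integrals.
radial = midpoint(lambda r: r**2, 0.0, R, n)   # R^3 / 3
polar = midpoint(math.sin, 0.0, math.pi, n)    # 2
azimuthal = 2.0 * math.pi                      # the phi integral is trivial
volume = radial * polar * azimuthal
```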
The lowest energy state, which in chemistry we call the 1s orbital, turns out to be: $\psi_{1s}=Ae^{-r/a_0} \nonumber$ This particular orbital depends on $r$ only, which should not surprise a chemist given that the electron density in all $s$-orbitals is spherically symmetric. We will see that $p$ and $d$ orbitals depend on the angles as well. Regardless of the orbital, and the coordinate system, the normalization condition states that: $\int\limits_{all\;space} |\psi|^2\;dV=1 \nonumber$ For a wave function expressed in cartesian coordinates, $\int\limits_{all\;space} |\psi|^2\;dV=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\psi^*(x,y,z)\psi(x,y,z)\,dxdydz \nonumber$ where we used the fact that $|\psi|^2=\psi^* \psi$. In spherical coordinates, “all space” means $0\leq r\leq \infty$, $0\leq \phi\leq 2\pi$ and $0\leq \theta\leq \pi$. The differential $dV$ is $dV=r^2\sin\theta\,d\theta\,d\phi\,dr$, so $\int\limits_{all\;space} |\psi|^2\;dV=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ Let’s see how we can normalize orbitals using triple integrals in spherical coordinates. Example $1$ When solving the Schrödinger equation for the hydrogen atom, we obtain $\psi_{1s}=Ae^{-r/a_0}$, where $A$ is an arbitrary constant that needs to be determined by normalization. Find $A$. Solution In spherical coordinates, $\int\limits_{all\; space} |\psi|^2\;dV=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ because this orbital is a real function, $\psi^*(r,\theta,\phi)\psi(r,\theta,\phi)=\psi^2(r,\theta,\phi)$. In this case, $\psi^2(r,\theta,\phi)=A^2e^{-2r/a_0}$. 
Therefore, $\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}\psi^*(r,\theta,\phi)\psi(r,\theta,\phi) \, r^2 \sin\theta \, dr d\theta d\phi=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}A^2e^{-2r/a_0}\,r^2\sin\theta\,dr d\theta d\phi=1 \nonumber$ $\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits_{0}^{\infty}A^2e^{-2r/a_0}\,r^2\sin\theta\,dr d\theta d\phi=A^2\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin\theta \;d\theta\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr \nonumber$ The result is a product of three integrals in one variable: $\int\limits_{0}^{2\pi}d\phi=2\pi \nonumber$ $\int\limits_{0}^{\pi}\sin\theta \;d\theta=-\cos\theta|_{0}^{\pi}=2 \nonumber$ $\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr=? \nonumber$ From the formula sheet: $\int_{0}^{\infty}x^ne^{-ax}dx=\dfrac{n!}{a^{n+1}}, \nonumber$ where $a>0$ and $n$ is a positive integer. In this case, $n=2$ and $a=2/a_0$, so: $\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr=\dfrac{2!}{(2/a_0)^3}=\dfrac{2}{8/a_0^3}=\dfrac{a_0^3}{4} \nonumber$ Putting the three pieces together: $A^2\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin\theta \;d\theta\int\limits_{0}^{\infty}e^{-2r/a_0}\,r^2\;dr=A^2\times2\pi\times2\times \dfrac{a_0^3}{4}=1 \nonumber$ $A^2\times \pi \times a_0^3=1\rightarrow A=\dfrac{1}{\sqrt{\pi a_0^3}} \nonumber$ The normalized 1s orbital is, therefore: $\displaystyle{\color{Maroon}\dfrac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0}} \nonumber$
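Example 1 can be verified numerically as well. The sketch below sets $a_0=1$ (atomic units, an assumption not made in the text) and checks both the radial integral and the final normalization:

```python
import math

a0 = 1.0                        # Bohr radius in atomic units (assumption)
A = 1 / math.sqrt(math.pi * a0**3)

# Radial integral  int_0^inf r^2 e^{-2r/a0} dr  by the midpoint rule,
# truncated at r = 40 a0, where the integrand is negligible.
n = 200_000
h = 40.0 * a0 / n
radial = 0.0
for i in range(n):
    r = (i + 0.5) * h
    radial += r * r * math.exp(-2 * r / a0)
radial *= h                     # should approach a0^3 / 4

norm = A**2 * (2 * math.pi) * 2 * radial   # phi gives 2*pi, theta gives 2
```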
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/32%3A_Math_Chapters/32.04%3A_Spherical_Coordinates.txt
The determinant is a useful value that can be computed from the elements of a square matrix. Consider row reducing a general $2 \times 2$ matrix. Suppose that $a$ is nonzero. $\begin{pmatrix} a &b \\ c &d \end{pmatrix} \nonumber$ $\frac{1}{a} R_1 \rightarrow R_1, \;\;\; R_2-cR_1 \rightarrow R_2 \nonumber$ $\begin{pmatrix} 1 &\frac{b}{a} \\ c &d \end{pmatrix} \nonumber$ $\begin{pmatrix} 1 & \frac{b}{a} \\ 0 & d-\frac{cb}{a}\end{pmatrix} \nonumber$ Now notice that we cannot make the lower right corner a 1 if $d - \frac{cb}{a} = 0 \nonumber$ or $ad - bc = 0. \nonumber$ Definition: The Determinant We call $ad - bc$ the determinant of the $2 \times 2$ matrix $\begin{pmatrix} a &b \\ c &d \end{pmatrix} \nonumber$ It tells us when it is possible to row reduce the matrix and find a solution to the linear system. Example 32.5.1 : The determinant of the matrix $\begin{pmatrix} 3 & 1\\ 5 & 2 \end{pmatrix} \nonumber$ is $3(2) - 1(5) = 6 - 5 = 1. \nonumber$ Determinants of $3 \times 3$ Matrices We define the determinant of a triangular matrix $\begin{pmatrix} a &d &e \\ 0 &b &f \\ 0 &0 &c \end{pmatrix} \nonumber$ by $\text{det} = abc. \nonumber$ Notice that if we multiply a row by a constant $k$ then the new determinant is $k$ times the old one. We list the effect of all three row operations below. Theorem The effect of the three basic row operations on the determinant is as follows: 1. Multiplication of a row by a constant multiplies the determinant by that constant. 2. Switching two rows changes the sign of the determinant. 3. Replacing one row by that row plus a multiple of another row has no effect on the determinant. To find the determinant of a matrix we use the operations to make the matrix triangular and then work backwards. Example 32.5.2 : Find the determinant of $\begin{pmatrix} 2 & 6 &10 \\ 2 &4 &-3 \\ 0 &4 &2 \end{pmatrix} \nonumber$ We use row operations until the matrix is triangular.
$\dfrac{1}{2}R_1 \rightarrow R_1 \text{(Multiplies the determinant by } \dfrac{1}{2}) \nonumber$ $\begin{pmatrix} 1 & 3 &5 \\ 2 &4 &-3 \\ 0 &4 &2 \end{pmatrix} \nonumber$ $R_2 - 2R_1 \rightarrow R_2 \text{ (No effect on the determinant)} \nonumber$ $\begin{pmatrix} 1 & 3 &5 \\ 0 &-2 &-13 \\ 0 &4 &2 \end{pmatrix} \nonumber$ Note that we do not need to zero out the upper middle number. We only need to zero out the bottom left numbers. $R_3 + 2R_2 \rightarrow R_3 \text{ (No effect on the determinant)}. \nonumber$ $\begin{pmatrix} 1 & 3 &5 \\ 0 &-2 &-13 \\ 0 &0 &-24 \end{pmatrix} \nonumber$ Note that we do not need to make the middle number a 1. The determinant of this matrix is 48. Since this matrix has $\frac{1}{2}$ the determinant of the original matrix, the original matrix has $\text{determinant} = 48(2) = 96. \nonumber$ Inverses We call the square matrix $I$ with all 1's down the diagonal and zeros everywhere else the identity matrix. It has the unique property that if $A$ is a square matrix with the same dimensions then $AI = IA = A. \nonumber$ Definition If $A$ is a square matrix then the inverse $A^{-1}$ of $A$ is the unique matrix such that $AA^{-1}=A^{-1}A=I. \nonumber$ Example 32.5.3 : Let $A=\begin{pmatrix} 2 &5 \\ 1 &3 \end{pmatrix} \nonumber$ then $A^{-1}= \begin{pmatrix} 3 &-5 \\ -1 &2 \end{pmatrix} \nonumber$ Verify this! Theorem: Existence The inverse of a matrix exists if and only if the determinant is nonzero. To find the inverse of a matrix, we write a new extended matrix with the identity on the right. Then we completely row reduce; the resulting matrix on the right will be the inverse matrix. Example 32.5.4 : $\begin{pmatrix} 2 &-1 \\ 1 &-1 \end{pmatrix} \nonumber$ First note that the determinant of this matrix is $-2 + 1 = -1 \nonumber$ hence the inverse exists.
Now we set up the augmented matrix as $\begin{pmatrix}\begin{array}{cc|cc}2&-1&1&0 \\ 1&-1&0&1\end{array}\end{pmatrix} \nonumber$ $R_1 {\leftrightarrow} R_2 \nonumber$ $\begin{pmatrix}\begin{array}{cc|cc}1&-1&0&1 \\ 2&-1&1&0\end{array}\end{pmatrix} \nonumber$ $R_2 - 2R_1 {\rightarrow} R_2 \nonumber$ $\begin{pmatrix}\begin{array}{cc|cc}1&-1&0&1 \\ 0&1&1&-2\end{array}\end{pmatrix} \nonumber$ $R_1 + R_2 {\rightarrow} R_1 \nonumber$ $\begin{pmatrix}\begin{array}{cc|cc}1&0&1&-1 \\ 0&1&1&-2\end{array}\end{pmatrix} \nonumber$ Notice that the left hand part is now the identity. The right hand side is the inverse. Hence $A^{-1}= \begin{pmatrix} 1&-1 \\ 1&-2 \end{pmatrix} \nonumber$ Solving Equations Using Matrices Example 32.5.5 : Suppose we have the system $2x - y = 3 \nonumber$ $x - y = 4 \nonumber$ Then we can write this in matrix form $Ax = b \nonumber$ where $A=\begin{pmatrix} 2&-1 \\ 1&-1 \end{pmatrix}, \;\;\; x= \begin{pmatrix} x \\ y \end{pmatrix}, \;\;\; \text{and} \; b=\begin{pmatrix} 3\\4 \end{pmatrix} \nonumber$ We can multiply both sides by $A^{-1}$: $A^{-1}A x = A^{-1}b \nonumber$ or $x = A^{-1}b \nonumber$ From before, $A^{-1}=\begin{pmatrix} 1&-1 \\ 1&-2 \end{pmatrix} \nonumber$ Hence our solution is $\begin{pmatrix} -1\\-5 \end{pmatrix} \nonumber$ or $x = -1 \text{ and } y = -5 \nonumber$
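The two procedures of this chapter, computing a determinant by row reduction and inverting a matrix via the augmented matrix, can be sketched in plain Python (function names are ours; a sketch, not a library implementation):

```python
def det(M):
    """Determinant by row reduction: a row swap flips the sign,
    row replacement has no effect, and the triangular result is
    the product of the pivots."""
    A = [row[:] for row in M]          # work on a copy
    n, d = len(A), 1.0
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0                 # a zero column: singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            d = -d
        d *= A[col][col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return d

def inverse(M):
    """Gauss-Jordan reduction of the augmented matrix [M | I]."""
    n = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# Examples 32.5.4 and 32.5.5: det(A) = -1, so the inverse exists,
# and the system 2x - y = 3, x - y = 4 is solved by x = A^{-1} b.
A = [[2.0, -1.0], [1.0, -1.0]]
Ainv = inverse(A)
b = [3.0, 4.0]
x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]
```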
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/32%3A_Math_Chapters/32.05%3A_Determinants.txt
Chapter Objectives • Learn the nomenclature used in linear algebra to describe matrices (rows, columns, triangular matrices, diagonal matrices, trace, transpose, singularity, etc.). • Learn how to add, subtract and multiply matrices. • Learn the concept of inverse. • Understand the use of matrices as symmetry operators. • Understand the concept of orthogonality. • Understand how to calculate the eigenvalues and normalized eigenvectors of a $2 \times 2$ matrix. • Understand the concept of a Hermitian matrix. Definitions An $n \times m$ matrix is a two-dimensional array of numbers with $n$ rows and $m$ columns. The integers $n$ and $m$ are called the dimensions of the matrix. If $n = m$ then the matrix is square. The numbers in the matrix are known as matrix elements (or just elements) and are usually given subscripts to signify their position in the matrix, e.g. an element $a_{ij}$ would occupy the $i^{th}$ row and $j^{th}$ column of the matrix. For example: $M = \left(\begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array}\right) \label{8.1}$ is a $3 \times 3$ matrix with $a_{11}=1$, $a_{12}=2$, $a_{13}=3$, $a_{21}=4$, etc. In a square matrix, diagonal elements are those for which $i$=$j$ (the numbers $1$, $5$, and $9$ in the above example). Off-diagonal elements are those for which $i \neq j$ ($2$, $3$, $4$, $6$, $7$, and $8$ in the above example). If all the off-diagonal elements are equal to zero then we have a diagonal matrix. We will see later that diagonal matrices are of considerable importance in group theory. A unit matrix or identity matrix (usually given the symbol $I$) is a diagonal matrix in which all the diagonal elements are equal to $1$. A unit matrix acting on another matrix has no effect – it is the same as the identity operation in group theory and is analogous to multiplying a number by $1$ in everyday arithmetic. The transpose $A^T$ of a matrix $A$ is the matrix that results from interchanging all the rows and columns.
A symmetric matrix is the same as its transpose ($A^T=A$, i.e. $a_{ij} = a_{ji}$ for all values of $i$ and $j$). The transpose of matrix $M$ above (which is not symmetric) is $M^{T} = \left(\begin{array}{ccc} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{array}\right) \label{8.2}$ The sum of the diagonal elements in a square matrix is called the trace (or character) of the matrix (for the above matrix, the trace is $\chi = 1 + 5 + 9 = 15$). The traces of matrices representing symmetry operations will turn out to be of great importance in group theory. A vector is just a special case of a matrix in which one of the dimensions is equal to $1$. An $n \times 1$ matrix is a column vector; a $1 \times m$ matrix is a row vector. The components of a vector are usually only labeled with one index. A unit vector has one element equal to $1$ and the others equal to zero (it is the same as one row or column of an identity matrix). We can extend the idea further to say that a single number is a matrix (or vector) of dimension $1 \times 1$. Matrix Algebra 1. Two matrices with the same dimensions may be added or subtracted by adding or subtracting the elements occupying the same position in each matrix, e.g. $A = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 2 &1 \\ 3 & 2 & 0 \end{pmatrix} \label{8.3}$ $B = \begin{pmatrix} 2 & 0 & -2 \\ 1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix} \label{8.4}$ $A + B = \begin{pmatrix} 3 & 0 & 0 \\ 3 & 2 & 2 \\ 4 & 1 & 0 \end{pmatrix} \label{8.5}$ $A - B = \begin{pmatrix} -1 & 0 & 4 \\ 1 & 2 & 0 \\ 2 & 3 & 0 \end{pmatrix} \label{8.6}$ 2. A matrix may be multiplied by a constant by multiplying each element by the constant. $4B = \left(\begin{array}{ccc} 8 & 0 & -8 \\ 4 & 0 & 4 \\ 4 & -4 & 0 \end{array}\right) \label{8.7}$ $3A = \left(\begin{array}{ccc} 3 & 0 & 6 \\ 6 & 6 & 3 \\ 9 & 6 & 0 \end{array}\right) \label{8.8}$ 3. Two matrices may be multiplied together provided that the number of columns of the first matrix is the same as the number of rows of the second matrix, i.e.
an $n \times m$ matrix may be multiplied by an $m \times l$ matrix. The resulting matrix will have dimensions $n \times l$. To find the element $c_{ij}$ in the product matrix, we take the dot product of row $i$ of the first matrix and column $j$ of the second matrix (i.e. we multiply consecutive elements together from row $i$ of the first matrix and column $j$ of the second matrix and add them together), i.e. $c_{ij} = \Sigma_k \, a_{ik}b_{kj}$. For example, in the $3 \times 3$ matrices $A$ and $B$ used in the above examples, the first element in the product matrix $C = AB$ is $c_{11} = a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31}$ $AB = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 2 & 1 \\ 3 & 2 & 0 \end{pmatrix} \begin{pmatrix} 2 & 0 & -2 \\ 1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 4 & -2 & -2 \\ 7 & -1 & -2 \\ 8 & 0 & -4 \end{pmatrix} \label{8.9}$ An example of a matrix multiplying a vector is $A\textbf{v} = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 2 & 1 \\ 3 & 2 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 7 \\ 9 \\ 7 \end{pmatrix} \label{8.10}$ Matrix multiplication is not generally commutative, a property that mirrors the behavior found earlier for symmetry operations within a point group. Direct Products The direct product of two matrices (given the symbol $\otimes$) is a special type of matrix product that generates a matrix of higher dimensionality if both matrices have dimension greater than one.
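The multiplication rule $c_{ij} = \Sigma_k\, a_{ik}b_{kj}$ translates directly into a few lines of plain Python (a sketch, no libraries assumed):

```python
def matmul(A, B):
    """Multiply an n x m matrix A by an m x l matrix B, element by element:
    c_ij is the dot product of row i of A with column j of B."""
    n, m, l = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "cols(A) must equal rows(B)"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(l)]
            for i in range(n)]

# The matrices A and B used throughout this chapter.
A = [[1, 0, 2], [2, 2, 1], [3, 2, 0]]
B = [[2, 0, -2], [1, 0, 1], [1, -1, 0]]
AB = matmul(A, B)                # the product C = AB
Av = matmul(A, [[1], [2], [3]])  # a matrix multiplying a column vector
```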
The easiest way to demonstrate how to construct a direct product of two matrices $A$ and $B$ is by an example: \begin{align} A \otimes B &= \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \otimes \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \\[4pt] &= \begin{pmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{pmatrix} \\[4pt] &= \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22} \end{pmatrix} \label{8.11} \end{align} Though this may seem like a somewhat strange operation to carry out, direct products crop up a great deal in group theory. Inverse Matrices and Determinants If two square matrices $A$ and $B$ multiply together to give the identity matrix $I$ (i.e. $AB = I$) then $B$ is said to be the inverse of $A$ (written $A^{-1}$). If $B$ is the inverse of $A$ then $A$ is also the inverse of $B$. Recall that one of the conditions imposed upon the symmetry operations in a group is that each operation must have an inverse. It follows by analogy that any matrices we use to represent symmetry elements must also have inverses. It turns out that a square matrix only has an inverse if its determinant is non-zero. For this reason (and others which will become apparent later on when we need to solve equations involving matrices) we need to learn a little about matrix determinants and their properties. For every square matrix, there is a unique function of all the elements that yields a single number called the determinant. Initially it probably won’t be particularly obvious why this number should be useful, but matrix determinants are of great importance both in pure mathematics and in a number of areas of science. Historically, determinants were actually around before matrices.
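The block structure of Equation \ref{8.11}, where block $(i,j)$ of the result is $a_{ij}$ times the whole matrix $B$, can be sketched as follows (our own helper, not from the text):

```python
def direct_product(A, B):
    """Direct (Kronecker) product: rows are indexed by the pair (i, p)
    and columns by the pair (j, q), so entry ((i,p),(j,q)) is a_ij * b_pq."""
    rows_b, cols_b = len(B), len(B[0])
    return [[A[i][j] * B[p][q]
             for j in range(len(A[0])) for q in range(cols_b)]
            for i in range(len(A)) for p in range(rows_b)]

# Two 2 x 2 matrices produce a 4 x 4 direct product, laid out in
# blocks [[1*B, 2*B], [3*B, 4*B]] for the A below.
A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
AxB = direct_product(A, B)
```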
They arose originally as a property of a system of linear equations that ‘determined’ whether the system had a unique solution. As we shall see later, when such a system of equations is recast as a matrix equation this property carries over into the matrix determinant. There are two different definitions of a determinant, one geometric and one algebraic. In the geometric interpretation, we consider the numbers across each row of an $n \times n$ matrix as coordinates in $n$-dimensional space. In a one-dimensional matrix (i.e. a number), there is only one coordinate, and the determinant can be interpreted as the (signed) length of a vector from the origin to this point. For a $2 \times 2$ matrix we have two coordinates in a plane, and the determinant is the (signed) area of the parallelogram that includes these two points and the origin. For a $3 \times 3$ matrix the determinant is the (signed) volume of the parallelepiped that includes the three points (in three-dimensional space) defined by the matrix and the origin. This is illustrated below. The idea extends up to higher dimensions in a similar way. In some sense, then, the determinant is related to the size of a matrix. The algebraic definition of the determinant of an $n \times n$ matrix is a sum over all the possible products (permutations) of $n$ elements taken from different rows and columns. The number of terms in the sum is $n!$, the number of possible permutations of $n$ values (i.e. $2$ for a $2 \times 2$ matrix, $6$ for a $3 \times 3$ matrix etc). Each term in the sum is given a positive or a negative sign depending on whether the number of permutation inversions in the product is even or odd. A permutation inversion is just a pair of elements that are out of order when described by their indices. For example, for a set of four elements $\begin{pmatrix} a_1, a_2, a_3, a_4 \end{pmatrix}$, the permutation $a_1 a_2 a_3 a_4$ has all the elements in their correct order (i.e. in order of increasing index).
However, the permutation $a_2 a_4 a_1 a_3$ contains the permutation inversions $a_2 a_1$, $a_4 a_1$, $a_4 a_3$. For example, for a two-dimensional matrix $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \label{8.12}$ where the subscripts label the row and column positions of the elements, there are $2$ possible products/permutations involving elements from different rows and columns, $a_{11}a_{22}$ and $a_{12}a_{21}$. In the second term, there is a permutation inversion involving the column indices $2$ and $1$ (permutation inversions involving the row and column indices should be looked for separately), so this term takes a negative sign, and the determinant is $a_{11}a_{22} - a_{12}a_{21}$. For a $3 \times 3$ matrix $\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \label{8.13}$ the possible combinations of elements from different rows and columns, together with the sign from the number of permutations required to put their indices in numerical order, are: $\begin{array}{rl}a_{11} a_{22} a_{33} & (0 \: \text{inversions}) \\ -a_{11} a_{23} a_{32} & (1 \: \text{inversion -} \: 3>2 \: \text{in the column indices}) \\ -a_{12} a_{21} a_{33} & (1 \: \text{inversion -} \: 2>1 \: \text{in the column indices}) \\ a_{12} a_{23} a_{31} & (2 \: \text{inversions -} \: 2>1 \: \text{and} \: 3>1 \: \text{in the column indices}) \\ a_{13} a_{21} a_{32} & (2 \: \text{inversions -} \: 3>1 \: \text{and} \: 3>2 \: \text{in the column indices}) \\ -a_{13} a_{22} a_{31} & (3 \: \text{inversions -} \: 3>2, 3>1, \: \text{and} \: 2>1 \: \text{in the column indices}) \end{array} \label{8.14}$ and the determinant is simply the sum of these terms. This may all seem a little complicated, but in practice there is a fairly systematic procedure for calculating determinants. The determinant of a matrix $A$ is usually written det($A$) or $|A|$.
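The algebraic definition just described, a signed sum over all $n!$ column-index permutations, can be written out directly (a sketch for small matrices; the factorial growth makes it impractical beyond that):

```python
from itertools import permutations

def inversions(perm):
    """Count permutation inversions: pairs that are out of order."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def det_leibniz(A):
    """Sum over all n! products of elements taken from distinct rows and
    columns, each signed by the parity of its column-index permutation."""
    n = len(A)
    total = 0
    for cols in permutations(range(n)):
        term = 1
        for row in range(n):
            term *= A[row][cols[row]]
        total += (-1) ** inversions(cols) * term
    return total
```

For a $2 \times 2$ matrix this reproduces $a_{11}a_{22} - a_{12}a_{21}$, and for a $3 \times 3$ matrix the six signed terms of Equation \ref{8.14}.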
For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}; \; det(A) = |A| = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \label{8.15}$ For a $3 \times 3$ matrix $B = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}; \; det(B) = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} \label{8.16}$ For a $4 \times 4$ matrix $C = \begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{pmatrix}; \; det(C) = a\begin{vmatrix} f & g & h \\ j & k & l \\ n & o & p \end{vmatrix} - b\begin{vmatrix} e & g & h \\ i & k & l \\ m & o & p \end{vmatrix} + c\begin{vmatrix} e & f & h \\ i & j & l \\ m & n & p \end{vmatrix} - d\begin{vmatrix} e & f & g \\ i & j & k \\ m & n & o \end{vmatrix} \label{8.17}$ and so on in higher dimensions. Note that the submatrices in the $3 \times 3$ example above are just the matrices formed from the original matrix $B$ that don’t include any elements from the same row or column as the premultiplying factors from the first row. Matrix determinants have a number of important properties: 1. The determinant of the identity matrix is $1$, e.g. $\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = 1 \label{8.18}$ 2. The determinant of a matrix is the same as the determinant of its transpose, i.e. det($A$) = det($A^{T}$), e.g. $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \begin{vmatrix} a & c \\ b & d \end{vmatrix} \label{8.19}$ 3. The determinant changes sign when any two rows or any two columns are interchanged, e.g. $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = -\begin{vmatrix} b & a \\ d & c \end{vmatrix} = -\begin{vmatrix} c & d \\ a & b \end{vmatrix} = \begin{vmatrix} d & c \\ b & a \end{vmatrix} \label{8.20}$ 4. The determinant is zero if any row or column is entirely zero, or if any two rows or columns are equal or a multiple of one another, e.g. $
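The first-row expansions of Equations \ref{8.15}-\ref{8.17} follow one recursive pattern, which can be sketched as (our own helper, not from the text):

```python
def det_cofactor(A):
    """Expand along the first row: each entry A[0][j] multiplies the
    determinant of the submatrix that drops row 0 and column j, with
    alternating signs (+, -, +, ...)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total
```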
\begin{vmatrix} 1 & 2 \\ 0 & 0 \end{vmatrix} = 0, \begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix} = 0 \label{8.21}$ 5. The determinant is unchanged by adding any linear combination of rows (or columns) to another row (or column). 6. The determinant of the product of two matrices is the same as the product of the determinants of the two matrices, i.e. det($AB$) = det($A$)det($B$). The requirement that in order for a matrix to have an inverse it must have a non-zero determinant follows from property 6. As mentioned previously, the product of a matrix and its inverse yields the identity matrix $I$. We therefore have: $\begin{array}{rcl} det(A^{-1} A) = det(A^{-1}) det(A) & = & det(I) \\ det(A^{-1}) & = & det(I)/det(A) = 1/det(A) \end{array} \label{8.22}$ It follows that a matrix $A$ can only have an inverse if its determinant is non-zero; otherwise the determinant of its inverse would be undefined.
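Property 6 is easy to spot-check numerically; the sketch below (not a proof) uses two $2 \times 2$ matrices and the $ad - bc$ formula:

```python
def det2(M):
    """ad - bc for a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 5], [1, 3]]     # det(A) = 6 - 5 = 1
B = [[2, -1], [1, -1]]   # det(B) = -2 + 1 = -1
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
# det(AB) should equal det(A) * det(B); likewise det(A^-1) = 1/det(A),
# e.g. the inverse of A, [[3, -5], [-1, 2]], has determinant 6 - 5 = 1.
```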
The development of thermodynamics would have been unthinkable without calculus in more than one dimension (multivariate calculus), and partial differentiation is essential to the theory. 'Active' Variables When applying partial differentiation it is very important to keep in mind which symbol is the variable and which ones are the constants. Mathematicians usually write the variable as x or y and the constants as a, b or c, but in Physical Chemistry the symbols are different. It sometimes helps to replace the symbols in your mind. For example the van der Waals equation can be written as: $P= \dfrac{RT}{\overline{V} -b} - \dfrac{a}{\overline{V}^2} \label{eq1}$ Suppose we must compute the partial differential $\left( \dfrac{\partial P}{\partial \overline{V}} \right)_T \nonumber$ In this case the molar volume is the variable 'x' and the pressure is the function $f(x)$; the rest is just constants, so Equation \ref{eq1} can be rewritten in the form $f(x)= \dfrac{c}{x-b} - \dfrac{a}{x^2} \label{eq4}$ When calculating $\left( \dfrac{\partial P}{\partial T} \right)_{\overline{V}} \nonumber$ we should look at Equation \ref{eq1} as: $f(x) = cx -d \nonumber$ The active variable 'x' is now the temperature T and all the rest is just constants. It is useful to train your eye to pick out the active variable from all the inactive ones. Use highlighters, underline, rewrite, do whatever helps you best. Cross Derivatives As shown in Equations H.5 and H.6 there are also higher order partial derivatives versus $T$ and versus $V$. A very interesting derivative of second order, and one that is used extensively in thermodynamics, is the mixed second order derivative: $\left( \dfrac{\partial^2 P}{\partial T\, \partial \overline{V} } \right) = \left( \dfrac{\partial^2 P}{ \partial \overline{V} \,\partial T} \right) \label{Cross1}$ Of course here the 'active' variable is first $T$, then $V$.
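These partial derivatives can be checked with a computer algebra system. The sketch below uses sympy; the symbol names and the checks themselves are our own illustration:

```python
# Symbolic check of the van der Waals partial derivatives discussed above.
# A sketch using sympy; the symbol names are our own choice.
import sympy as sp

T, V, R, a, b = sp.symbols('T V R a b', positive=True)
P = R*T/(V - b) - a/V**2      # van der Waals equation (V = molar volume)

# (dP/dV)_T : differentiate treating T as a constant
dP_dV = sp.diff(P, V)
assert sp.simplify(dP_dV - (-R*T/(V - b)**2 + 2*a/V**3)) == 0

# (dP/dT)_V : P is linear in T, of the form c*T - d with c = R/(V - b)
dP_dT = sp.diff(P, T)
assert sp.simplify(dP_dT - R/(V - b)) == 0

# Mixed second derivative: the order of differentiation does not matter
assert sp.simplify(sp.diff(P, T, V) - sp.diff(P, V, T)) == 0
```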
The interesting thing about it is that it does not matter whether you first take $T$ and then $V$ or the other way around. Example H-2 shows an example of how mixed derivatives can be used to translate one quantity into another. This trick is used over and over again in thermodynamics because it allows you to replace a quantity that is really hard to measure by one (or more) that are much easier to get good experimental values for. For example: $\left( \dfrac{\partial S}{\partial V } \right)_T = \left( \dfrac{\partial P}{\partial T} \right)_V \nonumber$ This expression is not obvious at all. It tells you that if you study the pressure $P$ while heating up and keeping the volume the same (which is doable), you are measuring how the entropy changes with volume under isothermal conditions. Entropy will be discussed later; suffice it to say that nobody has ever constructed a working 'entropometer'! So that is an impossible quantity to measure directly. The Decomposition of Changes A very important result of multivariate calculus is that if a quantity $Q$ is a function of more than one variable, say $A$ and $B$, then we can decompose any infinitesimal change $dQ$ into infinitesimal changes in $A$ and $B$ in a very simple linear way: $dQ = \alpha \,dA + \beta \,dB \label{Total}$ $dQ$ is sometimes referred to as the total differential. The coefficients $\alpha$ and $\beta$ are the partial derivatives of first order versus $A$ and $B$. This mathematical fact is something we will be using over and over. Exact and Inexact differentials: State and path functions The car trip Suppose you drive your car up and down a mountain. You perform two measurements: you have a barometer that measures the air pressure and you keep an eye on your gas gage. Even though the barometer will show lower values on top of the mountain, its value will return to its initial value when you return home (barring weather changes).
You might wish the same would hold for your gas gage, particularly at current gas prices! Pressure is a good example of a state function (it returns to its old value if you go back to a previous state). The other (the gas gage) is a path function. (Make a detour and your bank account will tell you the difference!) The difference between state and path functions has its roots deep in mathematics, and it comes in as soon as a function has two or more variables. The gas law is a good example. The pressure depends on both temperature T and (molar) volume V. When changing the pressure a little bit, say by dP, we can show that we can write that out in the two possible components dT and dV as: \begin{align} dP &= p\, dT + q\, dV \label{eq14} \\[4pt] &= \left( \dfrac{\partial P}{\partial T } \right)_V dT + \left( \dfrac{\partial P}{\partial V } \right)_T dV \label{eq5} \end{align} At first, I wrote arbitrary coefficients p and q in Equation \ref{eq14}, but as you can see they are really partial derivatives (Equation \ref{eq5}). This is another way that thermodynamics exploits multivariate calculus: it shows how total changes can be built up of various contributions. The interesting thing is that if the function P is a state function (and your barometer will testify to that) then Equation \ref{Cross1} must hold. However, if the function is a path function, then this equality does not hold. Thermodynamics is largely based upon exploiting the above facts: • It tries to define state functions to describe energy changes • It tries to decompose changes into well-defined contributions • It uses partial differentials to link known quantities to unknown ones
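To see the total differential in action numerically, here is a minimal sketch using the ideal gas law $P=RT/V$; the numbers are arbitrary illustrative choices:

```python
# Numerical illustration of the total differential
#   dP = (dP/dT)_V dT + (dP/dV)_T dV
# for the ideal gas law P = RT/V; the values are arbitrary choices.
R = 8.314                  # gas constant, J/(mol K)
T, V = 300.0, 0.025        # K, m^3/mol
dT, dV = 0.01, 1e-6        # small changes in T and V

P = lambda T, V: R * T / V

# Partial derivatives of P = RT/V
dP_dT = R / V              # at constant V
dP_dV = -R * T / V**2      # at constant T

exact  = P(T + dT, V + dV) - P(T, V)   # actual change in pressure
linear = dP_dT * dT + dP_dV * dV       # linear decomposition

# For small dT and dV the two agree to high accuracy
assert abs(exact - linear) < 1e-3 * abs(exact)
```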
Maclaurin Series A function $f(x)$ can be expressed as a series in powers of $x$ as long as $f(x)$ and all its derivatives are finite at $x=0$. For example, we will prove shortly that the function $f(x) = \dfrac{1}{1-x}$ can be expressed as the following infinite sum: $\label{eq1}\dfrac{1}{1-x}=1+x+x^2+x^3+x^4 + \ldots$ We can write this statement in this more elegant way: $\label{eq2}\dfrac{1}{1-x}=\displaystyle\sum_{n=0}^{\infty} x^{n}$ If you are not familiar with this notation, the right side of the equation reads “sum from $n=0$ to $n=\infty$ of $x^n.$” When $n=0$, $x^n = 1$; when $n=1$, $x^n = x$; when $n=2$, $x^n = x^2$, etc. (compare with Equation \ref{eq1}). The term “series in powers of $x$” means a sum in which each summand is a power of the variable $x$. Note that the number 1 is a power of $x$ as well ($x^0=1$). Also, note that both Equations \ref{eq1} and \ref{eq2} are exact; they are not approximations. Similarly, we will see shortly that the function $e^x$ can be expressed as another infinite sum in powers of $x$ (i.e. a Maclaurin series) as: $\label{expfunction}e^x=1+x+\dfrac{1}{2} x^2+\dfrac{1}{6}x^3+\dfrac{1}{24}x^4 + \ldots$ Or, more elegantly: $\label{expfunction2}e^x=\displaystyle\sum_{n=0}^{\infty}\dfrac{1}{n!} x^{n}$ where $n!$ is read “n factorial” and represents the product $1\times 2\times 3 \ldots \times n$. If you are not familiar with factorials, be sure you understand why $4! = 24$. Also, remember that by definition $0! = 1$, not zero. At this point you should have two questions: 1) how do I construct the Maclaurin series of a given function, and 2) why on earth would I want to do this if $\dfrac{1}{1-x}$ and $e^x$ are fine-looking functions as they are? The answer to the first question is easy, and although you should know this from your calculus classes we will review it again in a moment. The answer to the second question is trickier, and it is what most students find confusing about this topic.
We will discuss different examples that aim to show a variety of situations in which expressing functions in this way is helpful. How to obtain the Maclaurin Series of a Function In general, a well-behaved function ($f(x)$ and all its derivatives are finite at $x=0$) will be expressed as an infinite sum of powers of $x$ like this: $\label{eq3}f(x)=\displaystyle\sum_{n=0}^{\infty}a_n x^{n}=a_0+a_1 x + a_2 x^2 + \ldots + a_n x^n$ Be sure you understand why the two expressions in Equation \ref{eq3} are identical ways of expressing an infinite sum. The terms $a_n$ are called the coefficients, and are constants (that is, they are NOT functions of $x$). If you end up with the variable $x$ in one of your coefficients, go back and check what you did wrong! For example, in the case of $e^x$ (Equation \ref{expfunction}), $a_0 =1, a_1=1, a_2 = 1/2, a_3=1/6$, etc. In the example of Equation \ref{eq1}, all the coefficients equal 1. We just saw that two very different functions can be expressed using the same set of functions (the powers of $x$). What makes $\dfrac{1}{1-x}$ different from $e^x$ are the coefficients $a_n$. As we will see shortly, the coefficients can be negative, positive, or zero. How do we calculate the coefficients? Each coefficient is calculated as: $\label{series:coefficients}a_n=\dfrac{1}{n!} \left( \dfrac{d^n f(x)}{dx^n} \right)_0$ That is, the $n$-th coefficient equals one over the factorial of $n$ multiplied by the $n$-th derivative of the function $f(x)$ evaluated at zero. For example, if we want to calculate $a_2$ for the function $f(x)=\dfrac{1}{1-x}$, we need to get the second derivative of $f(x)$, evaluate it at $x=0$, and divide the result by $2!$. Do it yourself and verify that $a_2=1$. In the case of $a_0$ we need the zeroth-order derivative, which equals the function itself (that is, $a_0 = f(0)$, because $\dfrac{1}{0!}=1$).
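This recipe is easy to automate. The following sketch (using sympy; our own illustration) generates the first few coefficients of $\dfrac{1}{1-x}$ directly from the formula above:

```python
# Computing Maclaurin coefficients a_n = (1/n!) f^{(n)}(0)
# for f(x) = 1/(1-x); a sketch using sympy.
import sympy as sp
from math import factorial

x = sp.symbols('x')
f = 1 / (1 - x)

coeffs = []
for n in range(6):
    deriv_at_0 = sp.diff(f, x, n).subs(x, 0)   # n-th derivative at x = 0
    coeffs.append(deriv_at_0 / factorial(n))

# Every coefficient of 1/(1-x) equals 1, as stated in the text
assert coeffs == [1, 1, 1, 1, 1, 1]
```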
It is important to stress that although the derivatives are usually functions of $x$, the coefficients are constants because they are expressed in terms of the derivatives evaluated at $x=0$. Note that in order to obtain a Maclaurin series we evaluate the function and its derivatives at $x=0$. This procedure is also called the expansion of the function around (or about) zero. We can expand functions around other numbers, and these series are called Taylor series (see Section 3). Example $1$ Obtain the Maclaurin series of $\sin(x)$. Solution We need to obtain all the coefficients ($a_0, a_1$...etc). Because there are infinitely many coefficients, we will calculate a few and we will find a general pattern to express the rest. We will need several derivatives of $\sin(x)$, so let’s make a table:

| $n$ | $\dfrac{d^n f(x)}{dx^n}$ | $\left( \dfrac{d^n f(x)}{dx^n} \right)_0$ |
|---|---|---|
| 0 | $\sin (x)$ | 0 |
| 1 | $\cos (x)$ | 1 |
| 2 | $-\sin (x)$ | 0 |
| 3 | $-\cos (x)$ | $-1$ |
| 4 | $\sin (x)$ | 0 |
| 5 | $\cos (x)$ | 1 |

Remember that each coefficient equals $\left( \dfrac{d^n f(x)}{dx^n} \right)_0$ divided by $n!$, therefore:

| $n$ | $n!$ | $a_n$ |
|---|---|---|
| 0 | 1 | 0 |
| 1 | 1 | 1 |
| 2 | 2 | 0 |
| 3 | $6$ | $-\dfrac{1}{6}$ |
| 4 | $24$ | 0 |
| 5 | $120$ | $\dfrac{1}{120}$ |

This is enough information to see the pattern (you can go to higher values of $n$ if you don’t see it yet): 1. the coefficients for even values of $n$ equal zero. 2. the coefficients for $n = 1, 5, 9, 13,...$ equal $1/n!$ 3. the coefficients for $n = 3, 7, 11, 15,...$ equal $-1/n!$. Recall that the general expression for a Maclaurin series is $a_0+a_1 x + a_2 x^2 + \ldots + a_n x^n$, and replace $a_0 \ldots a_n$ by the coefficients we just found: $\displaystyle{\color{Maroon}\sin (x) = x - \dfrac{1}{3!} x^3+ \dfrac{1}{5!} x^5 -\dfrac{1}{7!} x^7...} \nonumber$ This is a correct way of writing the series, but in the next example we will see how to write it more elegantly as a sum. Example $2$ Express the Maclaurin series of $\sin (x)$ as a sum.
Solution In the previous example we found that: $\label{series:sin}\sin (x) = x - \dfrac{1}{3!} x^3+ \dfrac{1}{5!} x^5 -\dfrac{1}{7!} x^7...$ We want to express this as a sum: $\displaystyle\sum_{n=0}^{\infty}a_n x^{n} \nonumber$ The key here is to express the coefficients $a_n$ in terms of $n$. We just concluded that 1) the coefficients for even values of $n$ equal zero, 2) the coefficients for $n = 1, 5, 9, 13,...$ equal $1/n!$ and 3) the coefficients for $n = 3, 7, 11,...$ equal $-1/n!$. How do we put all this information together in a unique expression? Here are three possible (and equally good) answers: • $\displaystyle{\color{Maroon}\sin (x)=\displaystyle\sum_{n=0}^{\infty} \left( -1 \right) ^n \dfrac{1}{(2n+1)!} x^{2n+1}}$ • $\displaystyle{\color{Maroon}\sin (x)=\displaystyle\sum_{n=1}^{\infty} \left( -1 \right) ^{(n+1)} \dfrac{1}{(2n-1)!} x^{2n-1}}$ • $\displaystyle{\color{Maroon}\sin (x)=\displaystyle\sum_{n=0}^{\infty} \cos(n \pi) \dfrac{1}{(2n+1)!} x^{2n+1}}$ This may look impossibly hard to figure out, but let me share a few tricks with you. First, we notice that the sign in Equation \ref{series:sin} alternates, starting with a “+”. A mathematical way of doing this is with a term $(-1)^n$ if your sum starts with $n=0$, or $(-1)^{(n+1)}$ if your sum starts with $n=1$. Note that $\cos (n \pi)$ does the same trick.

| $n$ | $(-1)^n$ | $(-1)^{n+1}$ | $\cos (n \pi)$ |
|---|---|---|---|
| 0 | 1 | $-1$ | 1 |
| 1 | $-1$ | 1 | $-1$ |
| 2 | 1 | $-1$ | 1 |
| 3 | $-1$ | 1 | $-1$ |

We have the correct sign for each term, but we need to generate the numbers $1, \dfrac{1}{3!}, \dfrac{1}{5!}, \dfrac{1}{7!},...$ Notice that the number “1” can be expressed as $\dfrac{1}{1!}$. To do this, we introduce the second trick of the day: we will use the expression $2n+1$ to generate odd numbers (if you start your sum with $n=0$) or $2n-1$ (if you start at $n=1$). Therefore, the expression $\dfrac{1}{(2n+1)!}$ gives $1, \dfrac{1}{3!}, \dfrac{1}{5!}, \dfrac{1}{7!},...$, which is what we need in the first and third examples (when the sum starts at zero).
Lastly, we need to use only odd powers of $x$. The expression $x^{(2n+1)}$ generates the terms $x, x^3, x^5...$ when you start at $n=0$, and $x^{(2n-1)}$ achieves the same when you start your series at $n=1$. Confused about writing sums using the sum operator $(\sum)$? This video will help: http://tinyurl.com/lvwd36q Need help? The links below contain solved examples. External links: Finding the Maclaurin series of a function I: http://patrickjmt.com/taylor-and-maclaurin-series-example-1/ Finding the Maclaurin series of a function II: http://www.youtube.com/watch?v=dp2ovDuWhro Finding the Maclaurin series of a function III: http://www.youtube.com/watch?v=WWe7pZjc4s8 Graphical Representation From Equation $\ref{eq3}$ and the examples we discussed above, it should be clear at this point that any function whose derivatives are finite at $x=0$ can be expressed by using the same set of functions: the powers of $x$. We will call these functions the basis set. A basis set is a collection of linearly independent functions that can represent other functions when used in a linear combination. Figure $1$ is a graphic representation of the first four functions of this basis set. To be fair, the first function of the set is $x^0=1$, so these would be the second, third, fourth and fifth. The full basis set is of course infinite in length. If we mix all the functions of the set with equal weights (we put the same amount of $x^2$ as we put $x^{245}$ or $x^{0}$), we obtain $(1-x)^{-1}$ (Equation \ref{eq1}). If we use only the odd terms, alternate the sign starting with a ‘+’, and weigh each term less and less using the expression $1/(2n-1)!$ for the $n$-th term, we obtain $\sin{x}$ (Equation \ref{series:sin}). This is illustrated in Figure $2$, where we multiply the even powers of $x$ by zero, and use different weights for the rest. Note that the ‘etcetera’ is crucial, as we would need to include an infinite number of functions to obtain the function $\sin{x}$ exactly.
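You can also watch the partial sums approach $\sin{x}$ numerically. The sketch below (our own illustration) keeps the first $N$ terms of the first sum given above:

```python
# Partial sums of the Maclaurin series of sin(x), compared with math.sin;
# N is the number of terms kept (an arbitrary illustrative choice).
import math

def sin_series(x, N):
    """Sum of the first N terms of sin(x) = sum (-1)^n x^(2n+1) / (2n+1)!"""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(N))

x = 0.5
for N in (1, 2, 3):
    print(N, sin_series(x, N))

# With only three terms the partial sum is already very close to sin(0.5)
assert abs(sin_series(0.5, 3) - math.sin(0.5)) < 1e-5
```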
Although we need an infinite number of terms to express a function exactly (unless the function is a polynomial, of course), in many cases we will observe that the weight (the coefficient) of each power of $x$ gets smaller and smaller as we increase the power. For example, in the case of $\sin{x}$, the contribution of $x^3$ is $1/6$th of the contribution of $x$ (in absolute terms), and the contribution of $x^5$ is $1/120$th. This tells you that the first terms are much more important than the rest, although all are needed if we want the sum to represent $\sin{x}$ exactly. What if we are happy with a ‘pretty good’ approximation of $\sin{x}$? Let’s see what happens if we use up to $x^3$ and drop the higher terms. The result is plotted in blue in Figure $3$ together with $\sin{x}$ in red. We can see that the function $x-\frac{1}{6} x^3$ is a very good approximation of $\sin{x}$ as long as we stay close to $x=0$. As we move further away from the origin the approximation gets worse and worse, and we would need to include higher powers of $x$ to get it better. This should be clear from Equation \ref{series:sin}, since the terms $x^n$ get smaller and smaller with increasing $n$ if $x$ is a small number. Therefore, if $x$ is small, we could write $\sin (x) \approx x - \dfrac{1}{3!} x^3$, where the symbol $\approx$ means approximately equal. But why stop at $n=3$ and not $n=1$ or 5? The above argument suggests that the function $x$ might be a good approximation of $\sin{x}$ around $x=0$, when the term $x^3$ is much smaller than the term $x$. This is in fact the case, as shown in Figure $4$. We have seen that we can get good approximations of a function by truncating the series (i.e. not using the infinitely many terms). Students usually get frustrated and want to know how many terms are ‘correct’. It takes a little bit of practice to realize there is no universal answer to this question. We would need some context to decide how good an approximation we are happy with.
For example, are we satisfied with the small error we see at $x= 0.5$ in Figure $4$? It all depends on the context. Maybe we are performing experiments where we have other sources of error that are much worse than this, so using an extra term will not improve the overall situation anyway. Maybe we are performing very precise experiments where this difference is significant. As you see, discussing how many terms are needed in an approximation out of context is not very useful. We will discuss this particular approximation when we learn about second order differential equations and analyze the problem of the pendulum, so hopefully things will make more sense then. Linear Approximations If you take a look at Equation $3.1.5$ you will see that we can always approximate a function as $a_0+a_1x$ as long as $x$ is small. When we say ‘any function’ we of course imply that the function and all its derivatives need to be finite at $x=0$. Looking at the definitions of the coefficients, we can write: $\label{eq1} f (x) \approx f(0) +f'(0)x$ We call this a linear approximation because Equation \ref{eq1} is the equation of a straight line. The slope of this line is $f'(0)$ and the $y$-intercept is $f(0)$. A fair question at this point is ‘why are we even talking about approximations?’ What is so complicated about the functions $\sin{x}$, $e^x$ or $\ln{(x+1)}$ that we need to look for an approximation? Are we getting too lazy? To illustrate this issue, let’s consider the problem of the pendulum, which we will solve in detail in the chapter devoted to differential equations. The problem is illustrated in Figure $1$, and those of you who took a physics course will recognize the equation below, which represents the law of motion of a simple pendulum. The second derivative refers to the acceleration, and the $\sin \theta$ term is due to the component of the net force along the direction of motion. 
We will discuss this in more detail later in this semester, so for now just accept the fact that, for this system, Newton’s law can be written as: $\frac{d^2\theta(t)}{dt^2}+\frac{g}{l} \sin{\theta(t)}=0 \nonumber$ This equation should be easy to solve, right? It has only a few terms, nothing too fancy other than an innocent sine function...How difficult can it be to obtain $\theta(t)$? Unfortunately, this differential equation does not have an analytical solution! An analytical solution means that the solution can be expressed in terms of a finite number of elementary functions (such as sine, cosine, exponentials, etc). Differential equations are sometimes deceiving in this way: they look simple, but they might be incredibly hard to solve, or even impossible! The fact that we cannot write down an analytical solution does not mean there is no solution to the problem. You can swing a pendulum and measure $\theta(t)$ and create a table of numbers, and in principle you can be as precise as you want to be. Yet, you will not be able to create a function that reflects your numeric results. We will see that we can solve equations like this numerically, but not analytically. Disappointing, isn’t it? Well... don’t be. A lot of what we know about molecules and chemical reactions came from the work of physical chemists, who know how to solve problems using numerical methods. The fact that we cannot obtain an analytical expression that describes a particular physical or chemical system does not mean we cannot solve the problem numerically and learn a lot anyway! But what if we are interested in small displacements only (that is, the pendulum swings close to the vertical axis at all times)? In this case, $\theta<<1$, and as we saw $\sin{\theta}\approx\theta$ (see Figure $3.1.4$). 
If this is the case, we have now: $\frac{d^2\theta(t)}{dt^2}+\frac{g}{l} \theta(t)=0 \nonumber$ As it turns out, and as we will see in Chapter 2, in this case it is very easy to obtain the solution we are looking for: $\theta(t)=\theta(t=0)\cos \left(\left(\frac{g}{l}\right)^{1/2}t \right) \nonumber$ This is the familiar ‘back and forth’ oscillatory motion of the pendulum. What you might not have known until today is that this solution assumes $\sin{\theta}\approx\theta$ and is therefore valid only if $\theta<<1$! There are lots of ‘hidden’ linear approximations in the equations you have learned in your physics and chemistry courses. You may recall your teachers telling you that a given equation is valid only at low concentrations, or low pressures, or low... you hopefully get the point. A pendulum is of course not particularly interesting when it comes to chemistry, but as we will see through many examples during the semester, oscillations, generally speaking, are. The example below illustrates the use of series in a problem involving diatomic molecules, but before discussing it we need to provide some background. The vibrations of a diatomic molecule are often modeled in terms of the so-called Morse potential. This equation does not provide an exact description of the vibrations of the molecule under any condition, but it does a pretty good job for many purposes. $\label{morse}V(R)=D_e\left(1-e^{-k(R-R_e)}\right)^2$ Here, $R$ is the distance between the nuclei of the two atoms, $R_e$ is the distance at equilibrium (i.e. the equilibrium bond length), $D_e$ is the dissociation energy of the molecule, $k$ is a constant that measures the strength of the bond, and $V$ is the potential energy. Note that $R_e$ is the distance at which the potential energy is a minimum, and that is why we call this the equilibrium distance. We would need to apply energy to separate the atoms even more, or to push them closer (Figure $2$).
At room temperature, there is enough thermal energy to induce small vibrations that displace the atoms from their equilibrium positions, but for stable molecules, the displacement is very small: $R-R_e\rightarrow0$. In the next example we will prove that under these conditions, the potential looks like a parabola, or in mathematical terms, $V(R)$ is proportional to the square of the displacement. This type of potential is called a ‘harmonic potential’. A vibration is said to be simple harmonic if the potential is proportional to the square of the displacement (as in the simple spring problems you may have studied in physics). Example $1$ Expand the Morse potential as a power series and prove that the vibrations of the molecule are approximately simple harmonic if the displacement $R-R_e$ is small. Solution The relevant variable in this problem is the displacement $R-R_e$, not the actual distance $R$. Let’s call the displacement $R-R_e=x$, and let’s rewrite Equation \ref{morse} as $\label{morse2}V(R)=D_e\left(1-e^{-kx}\right)^2$ The goal is to prove that $V(R) =cx^2$ (i.e. the potential is proportional to the square of the displacement) when $x\rightarrow0$. The constant $c$ is the proportionality constant. We can approach this in two different ways. One option is to expand the function shown in Equation \ref{morse2} around zero. This would be correct, but it would involve some unnecessary work. The variable $x$ appears only in the exponential term, so a simpler option is to expand the exponential function, and plug the result of this expansion back into Equation \ref{morse2}. Let’s see how this works: We want to expand $e^{-kx}$ as $a_0+a_1 x + a_2 x^2 + \ldots + a_n x^n$, and we know that the coefficients are $a_n=\frac{1}{n!} \left( \frac{d^n f(x)}{dx^n} \right)_0.$ The coefficient $a_0$ is $f(0)=1$.
The first three derivatives of $f(x)=e^{-kx}$ are • $f'(x)=-ke^{-kx}$ • $f''(x)=k^2e^{-kx}$ • $f'''(x)=-k^3e^{-kx}$ When evaluated at $x=0$ we obtain $-k, k^2, -k^3...$, and therefore $a_n=\frac{(-1)^n k^n}{n!}$ for $n=0, 1, 2...$. Therefore, $e^{-kx}=1-kx+k^2x^2/2!-k^3x^3/3!+k^4x^4/4!...$ and $1-e^{-kx}=kx-k^2x^2/2!+k^3x^3/3!-k^4x^4/4!...$ From the last result, when $x<<1$, we know that the terms in $x^2, x^3...$ will be increasingly small, so $1-e^{-kx}\approx kx$ and $(1-e^{-kx})^2\approx k^2x^2$. Plugging this result into Equation \ref{morse2} we obtain $V(R) \approx D_e k^2 x^2$, so we have demonstrated that the potential is proportional to the square of the displacement when the displacement is small (the proportionality constant is $D_e k^2$). Therefore, stable diatomic molecules at room temperature behave pretty much like springs! (Don’t take this too literally. As we will discuss later, microscopic springs do not behave like macroscopic springs at all). Taylor Series Before discussing more applications of Maclaurin series, let’s expand our discussion to the more general case where we expand a function around values different from zero. Let’s say that we want to expand a function around the number $h$. If $h=0$, we call the series a Maclaurin series, and if $h\neq0$ we call the series a Taylor series. Because Maclaurin series are a special case of the more general case, we can call all the series Taylor series and omit the distinction. The following is true for a function $f(x)$ as long as the function and all its derivatives are finite at $h$: $\label{taylor} f(x)=a_0 + a_1(x-h)+a_2(x-h)^2+...+a_n(x-h)^n = \displaystyle\sum_{n=0}^{\infty}a_n(x-h)^n$ The coefficients are calculated as $\label{taylorcoeff} a_n=\frac{1}{n!}\left( \frac{d^n f}{dx^n}\right)_h$ Notice that instead of evaluating the function and its derivatives at $x=0$ we now evaluate them at $x=h$, and that the basis set is now $1, (x-h), (x-h)^2,...,(x-h)^n$ instead of $1, x, x^2,...,x^n$.
A Taylor series will be a good approximation of the function at values of $x$ close to $h$, in the same way Maclaurin series provide good approximations close to zero. To see how this works let’s go back to the exponential function. Recall that the Maclaurin expansion of $e^x$ is shown in Equation $3.1.3$. We know what happens if we expand around zero, so to practice, let’s expand around $h=1$. The coefficient $a_0$ is $f(1)= e^1=e$. All the derivatives are $e^x$, so $f'(1)=f''(1)=f'''(1)...=e.$ Therefore, $a_n=\frac{e}{n!}$ and the series is therefore $\label{taylorexp} e\left[ 1+(x-1)+\frac{1}{2}(x-1)^2+\frac{1}{6}(x-1)^3+... \right]=\displaystyle\sum_{n=0}^{\infty}\frac{e}{n!}(x-1)^n$ We can use the same arguments we used before to conclude that $e^x\approx ex$ if $x\approx 1$. If $x\approx 1$, $(x-1)\approx 0$, and the terms $(x-1)^2, (x-1)^3$ will be smaller and smaller and will contribute less and less to the sum. Therefore, $e^x \approx e \left[ 1+(x-1) \right]=ex.$ This is the equation of a straight line with slope $e$ and $y$-intercept 0. In fact, from Equation $3.1.7$ we can see that all functions will look linear at values close to $h$. This is illustrated in Figure $1$, which shows the exponential function (red) together with the functions $1+x$ (magenta) and $ex$ (blue). Not surprisingly, the function $1+x$ provides a good approximation of $e^x$ at values close to zero (see Equation $3.1.3$) and the function $ex$ provides a good approximation around $x=1$ (Equation \ref{taylorexp}). Example $1$: Expand $f(x)=\ln{x}$ about $x=1$ Solution $f(x)=a_0 + a_1(x-h)+a_2(x-h)^2+...+a_n(x-h)^n, a_n=\frac{1}{n!}\left( \frac{d^n f}{dx^n}\right)_h \nonumber$ $a_0=f(1)=\ln(1)=0 \nonumber$ The derivatives of $\ln{x}$ are: $f'(x) = 1/x, f''(x)=-1/x^2, f'''(x) = 2/x^3, f^{(4)}(x)=-6/x^4, f^{(5)}(x)=24/x^5... \nonumber$ and therefore, $f'(1) = 1, f''(1)=-1, f'''(1) = 2, f^{(4)}(1)=-6, f^{(5)}(1)=24.... 
\nonumber$ To calculate the coefficients, we need to divide by $n!$: • $a_1=f'(1)/1!=1$ • $a_2=f''(1)/2!=-1/2$ • $a_3=f'''(1)/3!=2/3!=1/3$ • $a_4=f^{(4)}(1)/4!=-6/4!=-1/4$ • $a_n=(-1)^{n+1}/n$ The series is therefore: $f(x)=0 + 1(x-1)-1/2 (x-1)^2+1/3 (x-1)^3...=\displaystyle{\color{Maroon}\displaystyle\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}(x-1)^{n}} \nonumber$ Note that we start the sum at $n=1$ because $a_0=0$, so the term for $n=0$ does not have any contribution. Need help? The links below contain solved examples. External links: Finding the Taylor series of a function I: http://patrickjmt.com/taylor-and-maclaurin-series-example-2/ Other Applications of Maclaurin and Taylor series So far we have discussed how we can use power series to approximate more complex functions around a particular value. This is very common in physical chemistry, and you will apply it frequently in future courses. There are other useful applications of Taylor series in the physical sciences. Sometimes, we may use series to derive equations or prove relationships. Example $1$ illustrates this last point. Example $1$ Calculate the following sum ($\lambda$ is a positive constant) $\displaystyle\sum_{k=0}^{\infty}\frac{\lambda^k e^{-\lambda}}{k!} \nonumber$ Solution Let’s ‘spell out’ the sum: $\displaystyle\sum_{k=0}^{\infty}\frac{\lambda^k e^{-\lambda}}{k!}=e^{-\lambda} \left[1+\frac{\lambda^1}{1!}+\frac{\lambda^2}{2!}+\frac{\lambda^3}{3!}...\right] \nonumber$ The sum within the brackets is exactly $e^\lambda$. This is exact, and not an approximation, because we are including all infinitely many terms. Therefore, $\sum_{k=0}^{\infty}\frac{\lambda^k e^{-\lambda}}{k!}=e^{-\lambda}e^\lambda=1 \nonumber$ This requires that you recognize the term within brackets as the Maclaurin series of the exponential function. One simpler version of the problem would be to ask you to prove that the sum equals 1. There are more ways we can use Taylor series in the physical sciences.
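As a quick numerical sanity check of the sum in the example above (the value of $\lambda$ and the truncation point below are arbitrary choices):

```python
# Numerical check that sum_k lambda^k e^{-lambda} / k! = 1;
# lam is an arbitrary positive constant, truncated at 50 terms,
# which is far more than enough for convergence at this lam.
import math

lam = 3.7
total = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(50))

# The truncated sum is already equal to 1 to machine-level accuracy
assert abs(total - 1.0) < 1e-12
```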
We will see another type of application when we study differential equations. In fact, power series are extremely important in finding the solutions of a large number of equations that arise in quantum mechanics. The description of atomic orbitals, for example, requires that we solve differential equations that involve expressing functions as power series.
The Fourier transform takes a function of continuous (or discrete) time and maps it into a function of continuous (or discrete) frequency. Hence, the transform converts time-domain data into frequency-domain data (and vice versa). This decomposition of a function into sinusoids of different frequencies is a powerful approach to many experimental and theoretical problems. Fourier transform spectroscopy is an approach whereby spectra are collected based on time-domain or space-domain measurements of electromagnetic or other types of radiation. It can be applied to a variety of types of spectroscopy including optical spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), Fourier transform nuclear magnetic resonance (NMR), mass spectrometry and electron spin resonance spectroscopy. Introduction Fourier analysis is a subject area which grew out of the study of Fourier series. The subject began with trying to understand when it was possible to represent general functions by sums of simpler trigonometric functions. The attempt to understand functions (or other objects) by breaking them into basic pieces that are easier to understand is one of the central themes in Fourier analysis. Fourier analysis is named after Joseph Fourier, who showed that representing a function by a trigonometric series greatly simplified the study of heat propagation. Today the subject of Fourier analysis encompasses a vast spectrum of mathematics with parts that, at first glance, may appear quite different. In the sciences and engineering the process of decomposing a function into simpler pieces is often called an analysis. In Fourier analysis, the term Fourier transform often refers to the process that decomposes a given function into its basic pieces. This process results in another function that describes how much of each basic piece is in the original function.
However, the transform is often given a more specific name depending upon the domain and other properties of the function being transformed, as elaborated below. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Fourier Series When the function ƒ is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function F at frequency ω represents the amplitude of the frequency component whose initial phase is given by the phase of F. However, it is important to realize that Fourier transforms are not limited to functions of time and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed to nearly any function domain. Continuous Fourier Transform (CFT) Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, such as time (t). In this case the Fourier transform describes a function ƒ(t) in terms of basic complex exponentials of various frequencies. In terms of ordinary frequency ν, the Fourier transform is given by the complex number: $F(\nu) = \int_{-\infty}^{\infty} f(t) \cdot e^{- 2\pi \cdot i \cdot \nu \cdot t} dt.$ Evaluating this quantity for all values of ν produces the frequency-domain function. Discrete Fourier Transform (DFT) Experimentally, we collect data that are not continuous but are sampled at specific points. Hence we have to deal with the discrete version of the Fourier transform. Fast Fourier Transform (FFT) The discrete version of the Fourier transform can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms. These algorithms typically require the number of samples to be a power of two (2^n). 
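The discrete transform can be made concrete with a short sketch. The following is a minimal, illustrative Python implementation of the DFT evaluated directly from its defining sum (the function name and test signal are ours; in practice an FFT routine would be used, which produces the same values far faster):

```python
import math
import cmath

def dft(x):
    """Discrete Fourier transform evaluated directly from its definition:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N).  This costs O(N^2) operations;
    an FFT algorithm produces the same values in O(N log N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A unit cosine with exactly 4 cycles in N = 16 samples puts all of its
# energy in bins k = 4 and k = N - 4, each with magnitude N/2.
N = 16
x = [math.cos(2 * math.pi * 4 * n / N) for n in range(N)]
mags = [abs(X_k) for X_k in dft(x)]
print(round(mags[4], 6))   # 8.0
```

Because this input contains a whole number of cycles, the spectrum is two clean spikes; the Matlab examples later in this section show what happens when it does not.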
Hence you will notice that data sets often have such dimensions (Conte & de Boor 1980). Contributors and Attributions • Wikipedia entry (to begin with) • mmrc.caltech.edu/FTIR/FTIRintro.pdf 32.10: Fourier Analysis Fourier analysis encompasses a vast spectrum of mathematics with parts that, at first glance, may appear quite different. In the sciences and engineering the process of decomposing a function into simpler pieces is often called an analysis. The corresponding operation of rebuilding the function from these pieces is known as synthesis. In this context the term Fourier synthesis describes the act of rebuilding and the term Fourier analysis describes the process of breaking the function into a sum of simpler pieces. In mathematics, the term Fourier analysis often refers to the study of both operations. Introduction In Fourier analysis, the term Fourier transform often refers to the process that decomposes a given function into the basic pieces. This process results in another function that describes how much of each basic piece is present in the original function. It is common practice to also use the term Fourier transform to refer to this function. However, the transform is often given a more specific name depending upon the domain and other properties of the function being transformed, as elaborated below. 
Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. (Continuous) Fourier transform Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, such as time (t). In this case the Fourier transform describes a function ƒ(t) in terms of basic complex exponentials of various frequencies. In terms of ordinary frequency ν, the Fourier transform is given by the complex number $F(\nu) = \int_{-\infty}^{\infty} f(t) \cdot e^{- 2\pi \cdot i \cdot \nu \cdot t} dt.$ Evaluating this quantity for all values of ν produces the frequency-domain function. Matlab and the FFT Matlab's FFT function is an effective tool for computing the discrete Fourier transform of a signal. The following code examples will help you to understand the details of using the FFT function. Example 1 The typical syntax for computing the FFT of a signal is FFT(x,N), where x is the signal, x[n], you wish to transform, and N is the number of points in the FFT. N must be at least as large as the number of samples in x[n]. To demonstrate the effect of changing the value of N, synthesize a cosine with 30 samples at 10 samples per period. • n = [0:29]; • x = cos(2*pi*n/10); Define 3 different values for N. Then take the transform of x[n] for each of the 3 values that were defined. The abs function finds the magnitude of the transform, as we are not concerned with distinguishing between real and imaginary components. • N1 = 64; • N2 = 128; • N3 = 256; • X1 = abs(fft(x,N1)); • X2 = abs(fft(x,N2)); • X3 = abs(fft(x,N3)); The frequency scale begins at 0 and extends to N - 1 for an N-point FFT. 
We then normalize the scale so that it extends from 0 to 1 - 1/N. • F1 = [0 : N1 - 1]/N1; • F2 = [0 : N2 - 1]/N2; • F3 = [0 : N3 - 1]/N3; Plot each of the transforms one above the other. • subplot(3,1,1) • plot(F1,X1,'-x'),title('N = 64'),axis([0 1 0 20]) • subplot(3,1,2) • plot(F2,X2,'-x'),title('N = 128'),axis([0 1 0 20]) • subplot(3,1,3) • plot(F3,X3,'-x'),title('N = 256'),axis([0 1 0 20]) Upon examining the plots one can see that each of the transforms adheres to the same shape, differing only in the number of samples used to approximate that shape. What happens if N is the same as the number of samples in x[n]? To find out, set N1 = 30. What does the resulting plot look like? Why does it look like this? Example 2 In the last example, x[n] was limited to 3 periods in length. Now, let's choose a large value for N (for a transform with many points), and vary the number of repetitions of the fundamental period. • n = [0:29]; • x1 = cos(2*pi*n/10); % 3 periods • x2 = [x1 x1]; % 6 periods • x3 = [x1 x1 x1]; % 9 periods • N = 2048; • X1 = abs(fft(x1,N)); • X2 = abs(fft(x2,N)); • X3 = abs(fft(x3,N)); • F = [0:N-1]/N; • subplot(3,1,1) • plot(F,X1),title('3 periods'),axis([0 1 0 50]) • subplot(3,1,2) • plot(F,X2),title('6 periods'),axis([0 1 0 50]) • subplot(3,1,3) • plot(F,X3),title('9 periods'),axis([0 1 0 50]) The previous code will produce three plots. The first plot, the transform of 3 periods of a cosine, looks like the magnitude of 2 sincs, with the center of the first sinc at 0.1fs and the second at 0.9fs. The second plot also has a sinc-like appearance, but its frequency is higher and it has a larger magnitude at 0.1fs and 0.9fs. Similarly, the third plot has a larger sinc frequency and magnitude. As x[n] is extended to a large number of periods, the sincs will begin to look more and more like impulses. But if a sinusoid transforms to an impulse, why do we have sincs in the frequency domain? 
When the FFT is computed with an N larger than the number of samples in x[n], it fills in the samples after x[n] with zeros. Example 2 had an x[n] that was 30 samples long, but the FFT had an N = 2048. When Matlab computes the FFT, it automatically fills the spaces from n = 30 to n = 2047 with zeros. This is like taking a sinusoid and multiplying it with a rectangular box of length 30. A multiplication of a box and a sinusoid in the time domain should result in the convolution of a sinc with impulses in the frequency domain. Furthermore, increasing the width of the box in the time domain should increase the frequency of the sinc in the frequency domain. The previous Matlab experiment supports this conclusion. Identifying Signals in noise with FFT A common use of Fourier transforms is to find the frequency components of a signal buried in a noisy time domain signal. Consider data sampled at 1000 Hz. Form a signal containing a 50 Hz sinusoid of amplitude 0.7 and 120 Hz sinusoid of amplitude 1 and corrupt it with some zero-mean random noise: • Fs = 1000; % Sampling frequency • T = 1/Fs; % Sample time • L = 1000; % Length of signal • t = (0:L-1)*T; % Time vector • x = 0.7*sin(2*pi*50*t) + sin(2*pi*120*t); % Sum of a 50 Hz sinusoid and a 120 Hz sinusoid • y = x + 2*randn(size(t)); % Sinusoids plus noise • plot(Fs*t(1:50),y(1:50)) • title('Signal Corrupted with Zero-Mean Random Noise') • xlabel('time (milliseconds)') It is difficult to identify the frequency components by looking at the original signal. Converting to the frequency domain, the discrete Fourier transform of the noisy signal y is found by taking the fast Fourier transform (FFT): • NFFT = 2^nextpow2(L); % Next power of 2 from length of y • Y = fft(y,NFFT)/L; • f = Fs/2*linspace(0,1,NFFT/2+1); % Plot single-sided amplitude spectrum. 
• plot(f,2*abs(Y(1:NFFT/2+1))) • title('Single-Sided Amplitude Spectrum of y(t)') • xlabel('Frequency (Hz)') • ylabel('|Y(f)|') The main reason the amplitudes are not exactly 0.7 and 1 is the noise. Several executions of this code (including recomputation of y) will produce different approximations to 0.7 and 1. The other reason is that the signal has a finite length. Increasing L from 1000 to 10000 in the example above will produce much better approximations on average. 32.10.02: Fourier Synthesis of Periodic Waveforms (Java applet) Developed by Dr. Constantinos E. Efstathiou, Laboratory of Analytical Chemistry, National and Kapodistrian University of Athens 32.11: The Binomial Distribution and Stirling's Approximation Stirling's approximation is named after the Scottish mathematician James Stirling (1692-1770). In confronting statistical problems we often encounter factorials of very large numbers. The factorial $N!$ is a product $N(N-1)(N-2)...(2)(1)$. Therefore, $\ln \,N!$ is a sum $\left.\ln N!\right. = \ln 1 + \ln 2 + \ln 3 + ... + \ln N = \sum_{k=1}^N \ln k. \label{1}$ where we have used the property of logarithms that $\log(abc) =\log(a) + \log(b) +\log(c)$. The sum is shown in the figure below. Using the Euler-Maclaurin formula one has $\sum_{k=1}^N \ln k=\int_1^N \ln x\,dx+\sum_{k=1}^p\frac{B_{2k}}{2k(2k-1)}\left(\frac{1}{N^{2k-1}}-1\right)+R , \label{2}$ where $B_1$ = −1/2, $B_2$ = 1/6, $B_3$ = 0, $B_4$ = −1/30, $B_5$ = 0, $B_6$ = 1/42, $B_7$ = 0, $B_8$ = −1/30, ... are the Bernoulli numbers, and $R$ is an error term which is normally small for suitable values of $p$. Then, for large $N$, $\ln N! \sim \int_1^N \ln x\,dx \approx N \ln N -N . \label{3}$ After some further manipulation one arrives at (apparently Stirling's contribution was the prefactor of $\sqrt{2\pi}$) $N! = \sqrt{2 \pi N} \; N^{N} e^{-N} e^{\lambda_N} \label{4}$ where $\dfrac{1}{12N+1} < \lambda_N < \frac{1}{12N}. \label{5}$ The sum of the areas of the blue rectangles shown below, up to $N$, is $\ln N!$. 
As you can see, the rectangles begin to closely approximate the red curve as $m$ gets larger. The area under the curve is given by the integral of $\ln x$. $\ln N! = \sum_{m=1}^N \ln m \approx \int_1^N \ln x\, dx \label{6}$ To solve the integral, use integration by parts $\int u\,dv=uv-\int v\,du \label{7A}$ Here we let $u = \ln x$ and $dv = dx$. Then $v = x$ and $du = \frac{dx}{x}$. $\int_0^N \ln x \, dx = x \ln x|_0^N - \int_0^N x \dfrac{dx}{x} \label{7B}$ Notice that $x/x = 1$ in the last integral and that $x \ln x$ goes to 0 in the limit as $x$ goes to zero, so we have $\int_0^N \ln x \, dx = N \ln N - \int_0^N dx \label{8}$ which gives us Stirling's approximation: $\ln N! \approx N \ln N - N$. As is clear from the figure above, Stirling's approximation gets better as the number $N$ gets larger (Table $1$). Table $1$: Evaluation of the approximation and its relative error
N     N!              ln N!    N ln N - N    Error
10    3.63 x 10^6     15.1     13.02         13.8%
50    3.04 x 10^64    148.4    145.6         1.88%
100   9.33 x 10^157   363.7    360.5         0.88%
150   5.71 x 10^262   605.0    601.6         0.56%
Calculators often overflow at 200!, which is all right since the results are clearly converging. In thermodynamics, we are often dealing with very large $N$ (i.e., of the order of Avogadro's number), and for these values Stirling's approximation is excellent.
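As a quick numerical check (a Python sketch of ours, not part of the original text), the trend in Table 1 can be regenerated with math.lgamma, which returns ln Γ(N+1) = ln N! directly:

```python
import math

# Compare the exact ln N! (via lgamma(N + 1) = ln N!) with the simple
# Stirling approximation N ln N - N, as in the table above.
for N in (10, 50, 100, 150):
    exact = math.lgamma(N + 1)
    approx = N * math.log(N) - N
    rel_err = 100 * (exact - approx) / exact
    print(f"N = {N:3d}   ln N! = {exact:7.1f}   N ln N - N = {approx:7.1f}   error = {rel_err:.2f}%")
```

Note that lgamma sidesteps the overflow problem mentioned above, since it never forms N! itself.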
• A1: Deriving Planck's Distribution Law Albert Einstein developed a simple but effective analysis of induced emission and absorption of radiation, along with spontaneous emission, that can be used to derive the Planck formula for thermal radiation. Appendices Consider two energy levels for the molecules in a material. The lower of the two is denoted as $E_1$ and the higher as $E_2$. The probability of a transition from level 1 up to level 2 through induced absorption is assumed to be proportional to the energy density per unit frequency interval, ($du/d \nu$). Likewise the probability of an induced transition from level 2 down to level 1 is also assumed to be proportional to ($du/d\nu$). These two probabilities are taken to be $B_{12}(du/d\nu)$ and $B_{21}(du/d\nu)$, respectively, where $B_{12}$ and $B_{21}$ are constants. The probability of a spontaneous emission is assumed to be a constant $A_{21}$. Let $N_1$ and $N_2$ be the number of molecules in energy states 1 and 2, respectively. For equilibrium, the number of transitions from 1 to 2 has to be equal to the number from 2 to 1; i.e., $\underbrace{N_1\left[B_{12}\left(\dfrac{du}{d\nu}\right)\right]}_{\text{flow up}} = \underbrace{N_2\left[B_{21}\left(\dfrac{du}{d\nu} \right)+ A_{21} \right]}_{\text{flow down}} \nonumber$ This means that the ratio of the occupancies of the energy levels must be $\dfrac{N_2}{N_1} = \dfrac{B_{12}\left(\dfrac{du}{d\nu}\right)}{ B_{21}\left(\dfrac{du}{d\nu}\right) + A_{21}} \label{einstein2}$ But the occupancies are given by the Boltzmann distribution as $N_1 = N_0 \exp \left(− \dfrac{E_1}{kT} \right) \nonumber$ and $N_2 = N_0 \exp \left(−\dfrac{E_2}{kT} \right) \nonumber$ where $k$ is Boltzmann's constant and $T$ is absolute temperature. 
$N_0$ is just a constant that is irrelevant for the rest of the analysis. Thus, according to the Boltzmann distribution, $\dfrac{N_2}{N_1} = \exp \left(−\dfrac{E_2−E_1}{kT} \right) \label{boltz2}$ Therefore, for radiative equilibrium, Equations \ref{boltz2} and \ref{einstein2} can be set equal to each other: $\exp\left(−\dfrac{E_2−E_1}{kT} \right) = \dfrac{B_{12}\left(\dfrac{du}{d\nu}\right)}{ B_{21}\left(\dfrac{du}{d\nu}\right) + A_{21}} \nonumber$ This condition can be solved for $(du/d\nu)$; i.e., $\dfrac{du}{d\nu} = \dfrac{A_{21}}{B_{12}\exp \left( \dfrac{E_2−E_1}{kT} \right)−B_{21}} \nonumber$ Consider what happens to the above expression as $T \rightarrow \infty$. It goes to $\lim _ {T \rightarrow \infty} \dfrac{du}{d\nu} = \dfrac{A_{21}}{B_{12}−B_{21}} \nonumber$ Einstein maintained that $(du/d\nu)$ must go to infinity as $T$ goes to infinity. This requires that $B_{12}$ be equal to $B_{21}$. Thus $\dfrac{du}{d\nu} = \dfrac{A_{21}/B_{21}}{\exp \left(\dfrac{E_2−E_1}{kT}\right)−1} \label{eq10}$ Now Planck's assumption is introduced: $E_2−E_1 = h\nu \nonumber$ Thus Equation \ref{eq10} becomes $\dfrac{du}{d\nu} = \dfrac{A_{21}/B_{21}}{\exp \left(\dfrac{h\nu}{kT}\right)−1} \label{eq11}$ The Rayleigh-Jeans Radiation Law says $\dfrac{du}{d\nu}= \dfrac{8\pi kT\nu^2}{c^3} \label{RJ}$ The Planck formula must coincide with the Rayleigh-Jeans Law for sufficiently small $\nu$. Note that the exponential in the denominator of Equation \ref{eq11} can be expanded via a Taylor expansion: $\exp\left(\dfrac{h\nu}{kT}\right) \approx 1 + \dfrac{h\nu}{kT} \nonumber$ for sufficiently small $\nu$. 
This means that Equation \ref{eq11} simplifies to $\dfrac{du}{d\nu} = \dfrac{A_{21}/B_{21}}{1 + (h\nu/kT) −1} = \dfrac{A_{21}/B_{21}}{h\nu/kT} \nonumber$ and hence $\dfrac{du}{d\nu}= \left(\dfrac{A_{21}}{B_{21}} \right) \left( \dfrac{kT}{h\nu}\right) \label{eq20}$ Equating Equations \ref{RJ} and \ref{eq20} for $(du/d\nu)$ gives $\left(\dfrac{A_{21}}{B_{21}}\right) \left(\dfrac{kT}{h\nu}\right) = \dfrac{8\pi kT\nu^2}{c^3} \nonumber$ which reduces to $\dfrac{A_{21}}{B_{21}} = \dfrac{8\pi h\nu^3}{c^3} \nonumber$ Thus $\dfrac{du}{d\nu} = \dfrac{8\pi h\nu^3}{c^3} \dfrac{1}{\exp(h\nu/kT)−1} \nonumber$ This is Planck's formula in terms of frequency. Reference 1. K.D. Möller, Optics, University Science Books, Mill Valley, California, 1988.
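A quick numerical check (our own Python sketch, with SI constants hard-coded) confirms that the Planck formula above collapses to the Rayleigh-Jeans law when hν ≪ kT:

```python
import math

h = 6.626e-34   # Planck's constant, J s
k = 1.381e-23   # Boltzmann's constant, J/K
c = 2.998e8     # speed of light, m/s

def planck(nu, T):
    """du/dnu = (8 pi h nu^3 / c^3) / (exp(h nu / kT) - 1)."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    """du/dnu = 8 pi k T nu^2 / c^3."""
    return 8 * math.pi * k * T * nu**2 / c**3

# At T = 300 K and nu = 1 GHz, h*nu/(k*T) is about 1.6e-4, so the two
# expressions should agree to better than 0.1%.
T, nu = 300.0, 1e9
ratio = planck(nu, T) / rayleigh_jeans(nu, T)
print(ratio)   # close to 1
```

At high frequency the ratio instead falls toward zero, which is exactly the failure of the Rayleigh-Jeans law that Planck's formula repairs.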
This section presents a basic overview of the theory of modern NMR. Readers interested in more in-depth treatments of this subject are encouraged to utilize the resources listed in the reference page at the end of this section. The embedded animations in the web book http://www.cis.rit.edu/htbooks/nmr/ authored by Professor Hornak make this site especially useful for students learning about NMR. This section will help you answer the following questions: 01: Basic NMR Theory The fundamentals of NMR begin with the understanding that a nucleus belonging to an element with an odd atomic or mass number has a nuclear spin that can be observed. Examples of nuclei with spin include 1H, 3H, 13C, 15N, 19F, 31P and 29Si. All of these nuclei have a spin of ½. Other nuclei like 2H or 14N have a spin of 1. Nuclei with even atomic and mass numbers like 12C and 16O have a spin of 0 and cannot be studied by NMR. The following introductory discussion of NMR is limited to spin ½ nuclei. Nuclei that possess spin have angular momentum, ρ. The number of values the angular momentum of a nucleus can take is determined by the spin quantum number, Ι. The possible spin states vary from +Ι to –Ι in integer steps; therefore, there are 2Ι + 1 possible values of ρ. Exercise \(1\) How many spin states would you predict for 2H? For spin ½ nuclei, the angular momentum can have two possible values: +½ or –½. Since spin is a quantum mechanical property, it can be difficult to visualize. One way to imagine spin is by thinking of spin ½ nuclei as tiny bar magnets that can have two possible orientations with respect to a larger external magnetic field. It is important to note that in the absence of an external magnetic field, these discrete spin states have random orientations and identical energies. 
1.02: How does absorption of energy generate an NMR spectrum In the absence of an external magnetic field the two spins in the previous figure would be randomly oriented and their energies degenerate; in other words, they would have identical energies. However, in the presence of an applied magnetic field, the energies of the two spin states diverge and the spins orient themselves with respect to the applied field. The larger the magnetic field, the greater the difference in energy between the spin states. For most spin ½ nuclei, the +½ (α) spin state is of lower energy and corresponds to having the spin aligned with the applied field, while the -½ (β) spin state can be thought of as having the spin opposed to the applied field. The difference in energy between the states, ∆E, depends on the strength of the applied magnetic field, Bo, according to Eq. $\ref{E1}$. In this equation γ is the gyromagnetic ratio, a fundamental property of each type of nucleus, and h is Planck's constant. Table 1 shows values of the gyromagnetic ratio for several common NMR nuclei. $∆E = \dfrac{γhB_o}{2π} \label{E1}$ Table 1. Properties of Nuclei Commonly Studied by NMR1
Element      Atomic Number   Mass Number   Spin   Natural Abundance   Gyromagnetic Ratio γ (10^7 rad·s^-1·T^-1)   Reference Compound
Hydrogen     1               1             ½      99.985%             26.7522128                                  Me4Si
Deuterium    1               2             1      0.0115%             4.10662791                                  (CD3)4Si
Carbon       6               13            ½      1.07%               6.728284                                    Me4Si
Nitrogen     7               15            ½      0.368%              -2.71261804                                 MeNO2
Fluorine     9               19            ½      100%                25.18148                                    CCl3F
Silicon      14              29            ½      4.6832%             -5.3190                                     Me4Si
Phosphorus   15              31            ½      100%                10.8394                                     H3PO4
Selenium     34              77            ½      7.63%               5.1253857                                   Me2Se
Cadmium      48              113           ½      12.22%              -5.9609155                                  Me2Cd
1. R.K. Harris, E. D. Becker, S. M. C. De Menezes, R. Goodfellow, P. Granger, Pure Appl. Chem. 73:1795-1818 (2001). http://www.iupac.org/publications/pa.../7311x1795.pdf The signal in NMR is produced by absorption of electromagnetic radiation of the appropriate frequency. 
Energy absorption causes the nuclei to undergo transitions from the lower energy (α) to the higher energy (β) spin states. If we think about the spins as bar magnets, absorption of energy at the right frequency causes the spins to flip with respect to the applied field. As is the case with other spectroscopic methods, the difference in population of these two quantized states can be expressed by the Boltzmann equation, Eq. $\ref{E2}$, where k is Boltzmann's constant, 1.38066 x 10^-23 J·K^-1, and T is the temperature in kelvin. $\dfrac{N_{upper}}{N_{lower}} = e^{-\Large\frac{ΔE}{kT}} \label{E2}$ Equation $\ref{E2}$ relates the ratio of the number of nuclei in the upper (higher energy) spin state and the lower energy spin state to the energy difference between the spin states, ∆E, and therefore the magnitude of the applied magnetic field, Bo (Eq. $\ref{E1}$). In NMR the difference in energy between the two spin states is very small; therefore the population difference is also small (about 1 in 10,000 for 1H in an 11.74 T magnetic field). Because this population difference is the source of our signal, NMR is inherently a less sensitive technique than many other spectroscopic methods. Exercise $2$ Given the same magnetic field and temperature, how would the difference in population for 1H and 31P compare?
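The "1 in 10,000" figure quoted above can be verified with a short calculation; the following Python sketch (ours, using the constants from the text and Table 1) evaluates Eqs. E1 and E2 for 1H at 11.74 T and 298 K:

```python
import math

h = 6.626e-34          # Planck's constant, J s
k = 1.38066e-23        # Boltzmann's constant, J/K
gamma = 26.7522128e7   # 1H gyromagnetic ratio, rad s^-1 T^-1 (Table 1)
B0, T = 11.74, 298.0   # field (T) and temperature (K)

dE = gamma * h * B0 / (2 * math.pi)   # Eq. E1: energy gap between spin states
ratio = math.exp(-dE / (k * T))       # Eq. E2: N_upper / N_lower
print(ratio)   # about 0.99992, i.e. a population excess of ~1 in 10,000
```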
Up to this point in our discussion, the theory of NMR seems similar to that for other common spectroscopic methods. However there are some differences that should be considered. For example in UV-visible absorption spectroscopy, which occurs as a result of electronic transitions, at room temperature essentially all of the molecules will be in the ground electronic state because the energy difference between the ground and excited states is large. However, in NMR the difference in energy between the two spin states is very small, therefore the population difference is also small (about 1 in 10,000 for 1H in an 11.74 T magnetic field). Because this population difference is the source of our signal, NMR is inherently a less sensitive technique than many other spectroscopic methods. Let's think now about the energy difference between the nuclear spin states in NMR. Do you recall the relationship between energy and frequency? Say we are interested in a compound with an absorption maximum at a wavelength, λ, of 600 nm. What would be the frequency, ν, of the light absorbed? The frequency of light absorbed is inversely proportional to the wavelength as shown in the equation below, where c is the speed of light, 3.0 x 10^8 m/s. $ν = \dfrac{c}{λ} \label{E3}$ Therefore, light with a wavelength of 600 nm has a frequency of 5 x 10^14 Hz (cycles per second). The energy, E, of this light is directly proportional to the product of its frequency and Planck's constant (h), 6.626 x 10^-34 J·s. $E = hν \label{E4}$ Our 600 nm light has an energy of 3.31 x 10^-19 J. The energy of the light absorbed by our molecule roughly corresponds to the energy difference between the ground and excited electronic states of our molecule. How does the energy absorbed in NMR compare with this value? We already indicated that we expect the energy difference between the ground and excited spin states in NMR to be much less than for absorption of visible light. 
We can calculate the energy of the NMR transition using Equation $2.1$ for a particular nucleus in a given magnetic field strength. Let's do this calculation for the protons (hydrogen nuclei) in a sample placed in an 11.74 T magnet, using the value of γ for hydrogen (normally referred to as proton) in Table 1. We can now calculate the energy difference of the spin states, as in Equation $\ref{E5}$. $∆E = \dfrac{26.7522128 × 10^7\: rad ⋅ s^{-1} T^{-1} × 6.626 × 10^{-34}\: J ⋅ s × 11.74\: T}{2π} = 3.31 × 10^{−25}\: J \label{E5}$ This energy may not seem like it is that much less than the energy of our visible absorption transition at 600 nm; after all, the numbers only differ by a factor of 10^6. However, if we think about the thermal energy of our sample in terms of kT (1.38066 x 10^-23 J·K^-1 x 298 K = 4.11 x 10^-21 J) we can see that the thermal energy of our sample is about 100-fold less than the energy of the visible absorption of 600 nm light, but is about 10,000 times greater than the energy of our proton NMR transition. This is why there is only a very small difference in population between the ground and excited states in NMR. Having compared the energies of these two spectroscopic methods, we might now ask how the frequency and wavelength in NMR compare with our 600 nm light. We can calculate the NMR frequency, known as the Larmor frequency, using Equation $\ref{E6}$ $υ = \dfrac{∆E}{h} = \dfrac{γB_o}{2π} \label{E6}$ For our example of protons in an 11.74 T magnetic field, ν is 500 x 10^6 Hz or 500 MHz. This is in the radio frequency range of the electromagnetic spectrum. It is common to refer to NMR instruments by the frequency of protons in the magnetic field associated with a given spectrometer; therefore a spectrometer with an 11.74 T magnet is referred to as a 500 MHz instrument. Exercise $3$ Calculate the wavelength of electromagnetic radiation corresponding to a frequency of 500 MHz. 
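The 500 MHz figure follows directly from Eq. E6; as a sanity check, this short Python sketch (ours) evaluates it with the Table 1 value of γ:

```python
import math

gamma = 26.7522128e7   # 1H gyromagnetic ratio, rad s^-1 T^-1 (Table 1)
B0 = 11.74             # applied field, T

nu = gamma * B0 / (2 * math.pi)   # Eq. E6: Larmor frequency in Hz
print(round(nu / 1e6))            # 500 (MHz)
```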
1.04: What is chemical shift and how does it relate to resonance frequency If each type of nucleus (e.g. all protons) gave a single resonance frequency, as implied by Equation $3.4$, NMR would not be of much use to chemists. The actual nuclear resonance frequency is highly dependent on the local chemical environment. The effective magnetic field, Beff, felt by a nucleus differs from the applied magnetic field, $B_o$, due to shielding by the motion of the electron clouds surrounding the nucleus. The greater the electron density around the nucleus, the larger this shielding effect. The amount of shielding is expressed by the magnetic shielding constant $σ$, where $B_{eff} = (1- σ)B_o$. Therefore, the resonance frequency of each nucleus differs depending on the value of $B_{eff}$. $ν = \dfrac{γ(1 - σ)B_o}{2π} = \dfrac{γB_{eff}}{2π} \label{Eq. 7}$ The chemical shift of a nucleus reveals much about the structure of a molecule, as shielding constants are well correlated with the local chemical environment. For example, one can tell whether a molecule contains a methyl group or an aromatic ring from the chemical shifts of the protons in its NMR spectrum. Early NMR spectrometers were scanning instruments in which the radio frequency was scanned through the proton chemical shift range until a frequency was reached at which energy was absorbed by the sample; this is the resonance condition. Modern instruments irradiate the sample with a broad band, or range, of frequencies and excite all of the protons at the same time. 1.05: What is precession A spinning charged particle creates a magnetic field; the strength and direction of this field are described by the magnetic moment, µ. This magnetic moment is a vector quantity that is proportional to the angular momentum: \(µ = γρ\). Because our nucleus has angular momentum, the magnetic moment, depicted as the red vector in the figure below, will appear to precess (or rotate) about the applied magnetic field \(B_o\). 
This precession is analogous to the motion of a spinning top. The frequency of precession is dependent only on the type of nucleus (defined by the gyromagnetic ratio, \(γ\)) and the value of \(B_{eff}\), as defined in Equation \(4.1\). The precession of a single nucleus, depicted as a blue sphere spinning about its axis, is shown here.
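To see how shielding translates into a measurable frequency difference, here is an illustrative Python sketch of Eq. 7 (the shielding constants below are invented for illustration only; real values come from tables or experiment):

```python
import math

gamma = 26.7522128e7   # 1H gyromagnetic ratio, rad s^-1 T^-1
B0 = 11.74             # applied field, T (a 500 MHz spectrometer)

# Hypothetical shielding constants for two proton environments,
# differing by 5 ppm -- chosen only to illustrate Eq. 7.
sigma_a = 10e-6
sigma_b = 5e-6

nu_a = gamma * (1 - sigma_a) * B0 / (2 * math.pi)   # more shielded, lower frequency
nu_b = gamma * (1 - sigma_b) * B0 / (2 * math.pi)   # less shielded, higher frequency
print(round(nu_b - nu_a))   # ~2500 Hz: a 5 ppm separation at 500 MHz
```

A parts-per-million difference in σ thus shifts the resonance by only a few kHz out of 500 MHz, which is why chemical shifts are reported in ppm.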
If we now examine what we would expect for an ensemble of nuclei, the magnetic moments of the +½ spins will be aligned with the applied magnetic field, while the moments of the higher energy –½ spin state will be opposed to Bo. However, all the spins in our sample will be precessing about Bo at their Larmor frequency with random phase, as illustrated in the figure below. Because slightly more of our nuclei are in the lower energy +½ spin state, if we take the vector sum of all the magnetic moments we obtain a single vector pointing in the direction of the applied magnetic field, called the macroscopic magnetization, Mo. The macroscopic magnetization then provides a way to visualize the population difference of our spins. It is this macroscopic magnetization vector that is manipulated in the NMR experiment. 1.07: How can the nuclear spins be manipulated to generate the NMR spectrum The previous figure shows the system at equilibrium. In order to generate an NMR signal, we must do something to perturb the populations of our spin states. As in other spectroscopic measurements, this is done through the absorption of radiant energy (light) of the appropriate frequency. In NMR, this transition is in the radio frequency (rf) range, corresponding to the Larmor frequency of the nucleus we are interested in. We cause this transition by irradiating our sample at a single radio frequency. An AC current oscillating at the desired rf frequency is applied to a coil wound around our sample. This oscillating current creates an additional magnetic field (called the B1 field) that acts upon our macroscopic magnetization vector and tips it away from its equilibrium position aligned with Bo. This B1 pulse creates the signal that we detect in NMR. In order to excite all of the different environments of a given nucleus in our sample (e.g. all of the different types of protons or carbons), this pulse of rf radiation is kept short (typically ~10 µs). 
By the Heisenberg uncertainty principle, a short pulse will excite a broad range of frequencies: ∆f = 1/∆t.

Exercise \(4\)

What range of frequencies would be excited by a 10 µs rf pulse?

1.08: What is the tip angle?

The angle through which the B1 pulse tips the magnetization depends on the power of the pulse and its length. For a given power setting, a tip angle, θ (in radians), can be defined as θ = γB1τ, where γ is the magnetogyric ratio and τ is the length of time the pulse is on. What we detect in the NMR experiment is the projection of the macroscopic magnetization vector, Mxy, into the xy plane of the NMR coordinate system. A 90° pulse will produce the greatest signal in the xy plane. The figure below shows the effect of a 90° pulse on the spins.

1.09: What is the Free Induction Decay?

The signal we detect is called a Free Induction Decay (FID). The FID is produced by the macroscopic magnetization after the pulse. The magnetization will undergo several processes as it returns to equilibrium. First, immediately after the pulse, the transverse component of the macroscopic magnetization, Mxy, will begin to precess at its Larmor frequency. This precessing magnetization will induce an alternating current in a coil (the same one used to generate the rf pulse) wound around the sample. This induced AC current is our FID, such as the one shown below. The FID contains all of the information in the NMR spectrum, but it is difficult for us to discern the information in this format. Fourier transformation of the FID, a time-domain signal, produces the frequency-domain NMR spectrum. The resonance frequencies of the signals in the transformed spectrum correspond to the frequencies of the oscillations in the FID. In this FID measured for isopropanol, the 0.16 s modulation of the FID is due to the 6.18 Hz difference in frequency of the resonances of the intense methyl doublet. The intensity information of each component is contained in the intensity of the first point of the FID.
The signals that comprise the FID decay exponentially with time due to relaxation processes discussed in the next section. The decay time constant for each component of the FID is inversely proportional to the width of the corresponding NMR resonance.
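To make the time-domain/frequency-domain relationship concrete, here is a minimal numerical sketch (not the textbook's actual data) of a two-line FID such as the isopropanol methyl doublet: two frequencies an assumed 6.18 Hz apart produce a beat in the FID envelope with a period of 1/6.18 Hz ≈ 0.16 s, and Fourier transformation recovers the two resonance frequencies. The offsets, dwell time and T2* value are all hypothetical.

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration.
dt = 0.001                       # dwell time, s
t = np.arange(0, 4.0, dt)        # 4 s acquisition time
f1, f2 = 100.0, 106.18           # two resonance offsets; difference = 6.18 Hz
T2_star = 1.0                    # apparent spin-spin relaxation time, s

# Two decaying complex oscillations summed together: a doublet's FID.
fid = (np.exp(2j * np.pi * f1 * t)
       + np.exp(2j * np.pi * f2 * t)) * np.exp(-t / T2_star)

# The beat (modulation) period of the FID envelope is 1/Δf ≈ 0.16 s.
beat_period = 1.0 / (f2 - f1)

# Fourier transformation converts the time-domain FID into the spectrum;
# the two largest magnitude peaks fall at the two resonance offsets.
spectrum = np.abs(np.fft.fft(fid))
freqs = np.fft.fftfreq(len(t), d=dt)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(round(beat_period, 3), [round(float(p), 2) for p in peaks])
```

With a 4 s acquisition the frequency grid is 0.25 Hz wide, so the recovered peak positions agree with the input offsets only to within that digital resolution.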
The decay of the FID corresponds to the loss of intensity of the macroscopic magnetization vector in the xy plane (called the transverse plane) by a process called spin-spin (or transverse, or T2) relaxation. T2 relaxation occurs when a nucleus in a –½ spin state transfers its spin to a nearby nucleus in a +½ spin state, and vice versa. Since T2 relaxation occurs through mutual spin flips, the energy of the system is unaffected; it is an entropic process. In terms of our vector model, T2 relaxation corresponds to a loss of coherence, or dephasing, of the magnetization vector. The recovery of magnetization along the z (longitudinal) axis (aligned with Bo) to its equilibrium position occurs by a process called spin-lattice (or longitudinal, or T1) relaxation. T1 relaxation occurs through interactions of the nuclei with the lattice (the molecular surroundings of the nuclei). Lattice motions at the same frequency as the Larmor frequency stimulate the spins in the higher-energy –½ state to lose their excess energy by transferring it to the lattice via a process called radiationless decay. Since T1 relaxation involves a loss of energy by the system as the spins return to their equilibrium populations, it is an enthalpic process. These relaxation processes are first-order processes characterized by the relaxation time constants T1 and T2. The width at half-height of a resonance is inversely related to the T2 relaxation time of the nucleus, w1/2 = 1/(πT2). Because the magnets we use are not perfectly homogeneous, there is a secondary contribution to the line width that comes from magnetic field inhomogeneity. Therefore, the apparent spin-spin relaxation time constant, T2*, observed in the FID includes both the natural T2 relaxation time of the nucleus and the effect of magnetic field inhomogeneity, w1/2 = 1/(πT2*). If you want to know the real T2 value for a nucleus, a special experiment, called the spin echo, can be used.
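The reciprocal relationship between relaxation time and line width can be captured in a one-line helper (a sketch; the function name and example value are ours, not from the text):

```python
import math

def half_height_width(t2: float) -> float:
    """Width at half-height (Hz) from a T2 (or apparent T2*) value in
    seconds: w_1/2 = 1/(pi * T2)."""
    return 1.0 / (math.pi * t2)

# A hypothetical T2* of 0.5 s gives a width of about 0.64 Hz,
# consistent with typical 1H line widths.
print(round(half_height_width(0.5), 2))
```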
Exercise \(5\)

What are the resonance line widths of nuclei that have apparent T2 relaxation times (i.e. T2* values) of 1 and 2 s?

The effects of T1 relaxation are more difficult to observe directly, because T1 relaxation corresponds to the return to equilibrium populations following the pulse. However, if several FIDs are coadded, as is usually the case in NMR, and if the time between successive pulse-and-acquire steps is insufficient for complete T1 relaxation, the resonances in the resulting NMR spectrum will be less intense than they would otherwise appear. Because quantitative NMR measurements rely on resonance intensity, understanding the effects of T1 relaxation is very important for obtaining accurate qNMR results. Therefore this subject is treated in greater depth in the Practical Aspects section of this module.

1.11: Where should I look to learn more about NMR?

It is hoped that this brief tutorial has provided sufficient background for you to understand the next section, focusing on practical aspects of quantitative NMR measurements. For further insights into NMR, the following websites and books are recommended. Many students find the e-book written by Professor Joseph Hornak at RIT to be especially useful since it contains embedded animations that illustrate many of the concepts introduced here.

• “The Basics of NMR” by Joseph P. Hornak, http://www.cis.rit.edu/htbooks/nmr/
• “Georgetown Graduate Course on NMR Spectroscopy” by Angel de Dios, http://bouman.chem.georgetown.edu/nmr/syllabus.htm
• “2D NMR Spectroscopy” by Marc Bria, Pierre Watkin and Yves Plancke, http://rmn2d.univ-lille1.fr/rmn2d_en..._RMN2D_en.html
• “Understanding NMR Spectroscopy” by James Keeler, John Wiley & Sons, 2005.
• “High-Resolution NMR Techniques in Organic Chemistry” by Timothy D. W. Claridge, Pergamon, Oxford, 1999.
• “Spin Choreography: Basic Steps in High Resolution NMR” by Ray Freeman, Oxford University Press, 1999.
• “Modern NMR Spectroscopy: A Guide for Chemists”, 2nd Edition, by Jeremy K. M. Sanders and Brian K. Hunter, Oxford University Press, 1993.
• “200 and More NMR Experiments: A Practical Course” by Stephan Berger and Siegmar Braun, Wiley-VCH, 2004.
• “Basic One- and Two-Dimensional NMR Spectroscopy” by Horst Friebolin, Wiley-VCH, 2004.
• “Experimental Pulse NMR: A Nuts and Bolts Approach” by Eiichi Fukushima and Stephen B. W. Roeder, Perseus Publishing, 1993.
This discussion presumes that you already have an understanding of the basic theory of NMR. There are a number of issues that should be considered when measuring NMR spectra for quantitative analysis. Many of these issues pertain to the way that the NMR signal is acquired and processed. It is usually necessary to perform Q-NMR measurements with care to obtain accurate and precise quantitative results. This section is designed to help you answer the following questions:

02: Practical Aspects of Q-NMR

With NMR, we need only have available a pure standard compound (which can be structurally unrelated to our analyte) that contains the nucleus of interest and has a resonance that does not overlap those of our analyte. The analyte concentration can then be determined relative to this standard compound. The requirement for lack of overlap means that most standards have simple NMR spectra, often producing only singlet resonances. Additional requirements for standards to be used for quantitative analysis are that they:

• are chemically inert
• have low volatility
• have similar solubility characteristics as the analyte
• have reasonable T1 relaxation times

The structures of several common NMR chemical shift and quantitation standards are shown in the figure below. TMS and dioxane are chemical shift reference compounds commonly used in organic solvents. However, they do not make good quantitation standards because they suffer from high volatility. Therefore it is difficult to prepare a standard solution for which the concentration is known with high accuracy. TMSP is a water-soluble chemical shift reference.
While it has improved performance as a quantitation standard compared with TMS or dioxane, it has been shown to adsorb onto glass, so stock solutions may have stability problems.1 In addition to the criteria listed above, it is helpful for quantitation purposes if the compound selected as the standard also has the properties of a primary analytical standard, for example potassium hydrogen phthalate (KHP), which is available in pure form, is a crystalline solid at room temperature and can be dried to remove waters of hydration.

2.02: How is the internal standard used to quantify the concentration of my analyte?

If an NMR spectrum is measured with care, the integrated intensity of a resonance due to the analyte nuclei is directly proportional to its molar concentration and to the number of nuclei that give rise to that resonance.

$\mathrm{\dfrac{Integral\: Area}{Number\: of\: Nuclei} ∝ Concentration} \label{E1}$

For example, the 1H NMR resonance of a methyl group would have 3 times the intensity of a peak resulting from a single proton. In the spectrum below for isopropanol, the 2 methyl groups give rise to a resonance at 1.45 ppm whose integrated intensity is 6 times greater than that of the CH resonance at 3.99 ppm. Since this spectrum was measured in D2O solution, only the resonances of the carbon-bound protons were detected. The OH proton of isopropanol is in fast exchange with the residual water (HOD) resonance at 4.78 ppm; therefore a separate resonance is not observed for this proton. In this example we compared the relative integrals of the proton resonances of isopropanol. This information can be very useful for structure elucidation. If we instead compare the integral of an analyte resonance to that of a standard compound of known concentration, we can determine the analyte concentration.
$\mathrm{Analyte\: Concentration = \dfrac{Normalized\: Area\: Analyte \times Standard\: Concentration}{Normalized\: Area\: Standard}} \label{E2}$

The direct proportionality of the analytical response and molar concentration is a major advantage of NMR over other spectroscopic measurements for quantitative analysis. For example, with UV-visible spectroscopy measurements based on the Beer-Lambert law, absorbance can be related to concentration only if a response factor can be determined for the analyte. The response factor, called the molar absorptivity in UV-visible spectroscopy, is different for each molecule; therefore, we must be able to look up the absorptivity or have access to a pure standard of each compound of interest so that a calibration curve can be prepared. With NMR we have a wide choice of standard compounds, and a single standard can be used to quantify many components of the same solution.

Exercise $1$

A quantitative NMR experiment is performed to quantify the amount of isopropyl alcohol in a D2O solution. Sodium maleate (0.01021 M) is used as an internal standard. The integral obtained for the maleate resonance is 46.978. The isopropanol doublet at 1.45 ppm produces an integral of 104.43. What would you predict for the integral of the isopropanol CH resonance at 3.99 ppm? What is the concentration of isopropanol in this solution?

2.03: What sample considerations are important?

What nucleus should I detect?

Just as you might make a choice between measuring a UV or an IR spectrum, in NMR we often have a choice in the nucleus we can use for the measurement. A wide range of nuclei can be measured, with the spin-½ nuclei 1H, 13C, 15N, 19F, 29Si, and 31P among the most common. However, most quantitative NMR experiments make use of 1H, because of the inherent sensitivity of this nucleus and its high relative abundance (nearly 100%).
In addition, as we will see in the next section, the relaxation properties of nuclei are also important to consider in quantitative NMR experiments, and compared with many other nuclei like 13C, 1H nuclei have more favorable T1 relaxation times. The choice of the observe nucleus can depend on whether one seeks universal detection (for organic compounds 1H and 13C fall into this category) or selective detection. For example, fluoride ions can be easily detected in fluorinated water at the sub-ppm level, in large part because of the selectivity of the measurement – one expects to find very few other sources of fluorine in water. Similarly, phosphorus-containing compounds like ATP, ADP, and inorganic phosphate can be detected and even quantified in live cells, tissue or organisms.

How concentrated is my sample?

In the Beer-Lambert law you are probably familiar with from UV-visible spectroscopy, absorbance is directly related to the concentration of the analyte. Similarly, in NMR the signal we detect scales linearly with concentration. Since NMR is not a very sensitive method, you would ideally like to work with reasonably concentrated samples; for protons this means analyte concentrations typically in the millimolar to molar range, depending on the instrument you will be using. Other nuclei are less sensitive than protons. The sensitivity issue has two components: the inherent sensitivity, which depends on the magnetogyric ratio (γ), and the relative abundance of the nucleus (for example, 19F is 100% abundant, but 13C makes up only 1.1% of all carbon atoms).

What other practical issues do I need to consider?

The sensitivity of an NMR experiment can also be affected by the homogeneity of the magnetic field that the sample experiences. It is normal to adjust the field homogeneity through a process known as shimming. NMR samples should be free of particulate matter, because particles can make it difficult to achieve good line shape by shimming.
You will also have better luck with shimming if you have a sample volume sufficient to meet or exceed the minimum volume recommended by your instrument manufacturer.
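Before moving on, the internal-standard arithmetic of Equation E2 above can be sketched in a few lines. The function name and all numbers here are hypothetical, chosen for illustration (they are not the exercise values):

```python
def analyte_concentration(area_analyte: float, n_analyte: int,
                          area_standard: float, n_standard: int,
                          conc_standard: float) -> float:
    """Analyte molarity from integrals normalized by the number of
    nuclei contributing to each resonance (Equation E2)."""
    normalized_analyte = area_analyte / n_analyte
    normalized_standard = area_standard / n_standard
    return normalized_analyte * conc_standard / normalized_standard

# Hypothetical numbers: a 6-proton analyte resonance integrating to
# 120.0, compared against a 2-proton standard resonance integrating
# to 50.0 for a 0.0100 M standard solution.
print(analyte_concentration(120.0, 6, 50.0, 2, 0.0100))
```

Normalizing each integral per contributing nucleus first is what lets a single standard quantify several different resonances in the same spectrum.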
This may not be a big consideration in measuring a UV-visible or IR spectrum; you generally just walk up to the instrument, place your sample in a sample holder and make a measurement. However, with NMR there are several parameters, summarized below, that can have a huge impact on the quality of your results and on whether or not your results can be interpreted quantitatively.

Number of Scans

An important consideration is the number of FIDs that are coadded. Especially for quantitative measurements it is important to generate spectra that have a high signal-to-noise ratio to improve the precision of the determination. Because the primary noise source in NMR is thermal noise in the detection circuits, the signal-to-noise ratio (S/N) scales as the square root of the number of scans coadded. To be 99% certain that the measured integral falls within ±1% of the true value, a signal-to-noise ratio of 250 is required. Acquisition of high-quality spectra for dilute solutions can be very time consuming. However, even when solutions have a sufficiently high concentration that signal averaging is not necessary to improve the S/N, a minimum number of FIDs (typically 8) is coadded to reduce spectral artifacts arising from pulse imperfections or receiver mismatch.

Exercise $1$

A spectrum acquired by coaddition of 8 FIDs for a solution prepared for quantitative NMR analysis has an S/N of 62.5 for the analyte signals. How many FIDs would have to be coadded to produce a spectrum with an S/N of 250?

Acquisition Time

The acquisition time (AT) is the time after the pulse for which the signal is detected. Because the FID is a decaying signal, there is not much point in acquiring the FID for longer than 3 × T2, because at that point 95% of the signal will have decayed away into noise. Typical acquisition times in 1H NMR experiments are 1–5 s.
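The square-root dependence of S/N on the number of coadded FIDs, described under Number of Scans above, can be sketched as follows (function name and numbers are hypothetical, not the exercise values):

```python
import math

def scans_needed(current_sn: float, current_scans: int,
                 target_sn: float) -> int:
    """Because S/N grows as the square root of the number of coadded
    FIDs, reaching target_sn requires scaling the scan count by
    (target_sn / current_sn) squared."""
    return math.ceil(current_scans * (target_sn / current_sn) ** 2)

# Hypothetical example: doubling S/N requires 4x as many scans.
print(scans_needed(50.0, 8, 100.0))  # 32
```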
An interesting feature in choosing an acquisition time is the relationship between the number of data points collected and the spectral width, or the range of frequencies detected. Although the initial FID detected in the coil is an analog signal, it must be digitized for computer storage and Fourier transformation. According to Nyquist theory, the minimum sampling frequency must be at least twice the highest frequency detected. The dwell time (DW), or time between data point sampling, is a parameter that is not typically set by the user, but is determined by the spectral width (SW) and the number of data points (NP).

$DW = \dfrac{1}{2SW} \label{E4}$

$AT = DW × NP \label{E5}$

Another feature of the acquisition parameters that is important for quantitative measurements is the digital resolution (DR).

$DR = \dfrac{SW}{NP(real)} \label{E6}$

Almost all spectrometers are designed with quadrature phase detection, which in effect splits the data points into real and imaginary datasets that serve as inputs for a complex Fourier transform. It is important to have sufficient digital resolution to accurately define the peak. Since a typical 1H NMR resonance has a width at half-height (w1/2) of 0.5 to 1.0 Hz, 8–10 data points are required to accurately define the peak. The total number of data points used in the Fourier transformation and contributing to the digital resolution can be increased by zero-filling, as described in the section on data processing.

Exercise $2$

A 1H NMR spectrum was measured using a 400 MHz instrument by acquisition of 16,384 total data points (8192 real points) and a spectral width of 12 ppm. What was the acquisition time? What is the digital resolution of the resulting spectrum? Is this digital resolution sufficient to accurately define a peak with a width at half-height of 0.5 Hz?

Receiver Gain

The receiver includes the coil and amplifier circuitry that detects and amplifies the signal prior to digitization by the analog-to-digital converter (ADC).
It is important to set the receiver gain properly so that the ADC is mostly filled, without overflowing. ADCs used in NMR typically have a limited dynamic range of 16–18 bits. If the receiver gain is set too low, only a few bits of the ADC are filled and digitization error can contribute to poor S/N. If the receiver gain is set too high, the initial portions of the FID will overflow the ADC (called clipping the FID) and will not be properly digitized. In this case, resonance intensity can no longer be interpreted in a quantitative manner. In addition, many spurious signals will appear in the spectrum. For most experiments the autogain routine supplied by the NMR manufacturer will work well for the initial setup of the experiment.

Repetition Time

The repetition time is the total time between the start of acquisition of the first FID and the start of acquisition of the second FID. The repetition time is the sum of the acquisition time and any additional relaxation delay inserted prior to the rf pulse. Recall that there are two relaxation times in NMR, T1 and T2 (with T1 ≥ T2). If a pulse width of 90° is used to signal average multiple FIDs to improve S/N or reduce artifacts, we generally need to wait 5 × T1 between acquisitions so that the magnetization can relax essentially completely (by at least 99%) to its equilibrium state. If the repetition time is less than 5 × T1, the resonances in the spectrum cannot be simply interpreted in a quantitative manner, and resonance intensity is scaled according to T1. The inversion-recovery pulse sequence can be used to measure T1 relaxation times. In this pulse sequence, diagrammed below, the magnetization is inverted by a 180° pulse. The relaxation delay at the start of the experiment is selected to assure complete relaxation between acquisitions. During the variable delay, magnetization relaxes by spin-lattice (T1) relaxation and is then tipped into the transverse plane by the 90° read pulse.
The intensity of the resonances is measured and then fit to an exponential function to determine the T1 relaxation time. The figure below shows selected spectra measured for the KHP protons using the inversion-recovery experiment and the fit of the integral of one of the resonances to determine the T1 relaxation time of the corresponding proton.

Pulse Width

As described in the Basic Theory section, the NMR signal is detected as a result of a radio frequency (rf) pulse that excites the nuclei in the sample. The pulse width is a calibrated parameter for each instrument and sample that is typically expressed in µs. The pulse width can also be thought of in terms of the tip angle, θ, of the pulse

$\mathrm{θ = γB_1τ} \label{E7}$

where γ is the magnetogyric ratio, B1 is the strength of the magnetic field produced by the pulse and $τ$ is the length of the pulse. For quantitative NMR spectra, 90° pulses with a repetition time ≥ 5 × T1 are typically used, since this pulse produces the greatest S/N in a single scan, although other pulse widths can also be used. For spectra where qualitative, rather than quantitative, analysis is desired, significant time savings can be obtained by using shorter pulses (i.e. 30°), since the magnetization takes less time to recover to its equilibrium state after the pulse. For a more detailed analysis of the effects of tip angle in quantitative NMR experiments, visit the following page.
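The inversion-recovery analysis described above can be sketched numerically. Assuming an ideal 180° pulse, the recovering signal follows Mz(t) = M0(1 − 2e^(−t/T1)), and a quick estimate of T1 uses the null point, the delay at which the signal passes through zero: t_null = T1 ln 2. All values below are hypothetical; a real analysis would fit the full exponential rather than use the null point alone.

```python
import math

T1_true = 1.25                                   # hypothetical T1, s
delays = [0.05 * i for i in range(1, 101)]       # variable delay list, s
# Ideal inversion-recovery signal (normalized so M0 = 1).
signal = [1.0 - 2.0 * math.exp(-t / T1_true) for t in delays]

# The delay whose signal is closest to zero approximates the null point,
# from which T1 is estimated as t_null / ln(2).
i_null = min(range(len(delays)), key=lambda i: abs(signal[i]))
T1_est = delays[i_null] / math.log(2.0)
print(round(T1_est, 2))   # close to 1.25, within the 0.05 s delay grid
```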
In experiments where the goal is qualitative analysis, it is not necessary to acquire the spectra in a manner that produces fully relaxed spectra. In this case, one can use the Ernst angle relationship to calculate the tip angle that will maximize S/N for a given repetition time.1 In such experiments, it is typically most efficient to eliminate the relaxation delay and use a repetition time equivalent to the acquisition period. Typically the acquisition time is set to allow the FID to decay to noise. This is often approximated by setting the acquisition time to about 3 × T2*, which will allow the magnetization to decay by 95% of its initial value.

$\textrm{Ernst Angle Relationship:}\:\: \cos θ = e^{\Large\frac{− T_r}{T_1}}$

In the Ernst angle equation, θ is the tip angle, Tr is the repetition time and T1 is the spin-lattice relaxation time of the resonance of interest. For example, with a repetition time of 3 s and a T1 relaxation time of 5 s, we calculate an Ernst angle of 56.7°. While this repetition time and tip angle maximize S/N, they will not give us integrals that can be interpreted quantitatively in a straightforward way. It is possible to correct for incomplete T1 relaxation; however, this introduces greater error in the result and is not always practical. Where quantitative integrals are desired, there is generally not a great time savings from using a tip angle of less than 90°. To see why, let’s look at a concrete example. Assume that for an analyte resonance with a T1 relaxation time of 5 s, we have to coadd 100 FIDs to achieve a S/N of 250:1 using a 90° pulse. In this experiment, the shortest repetition time we should use is 5 × T1, or 25 s, which will allow the magnetization to relax to 99% of its initial value. This means that it would take 2500 s (or 0.694 hr) to complete this experiment. What would happen if we used a shorter tip angle?
By using a smaller tip angle, the magnetization will take less time to recover following the pulse; however, we will also detect less intensity following the pulse. The intensity of the magnetization following a pulse, My, can be described by the equation below, where Mo is the intensity of the fully relaxed magnetization, θ is the tip angle, t is the time following the pulse, and T2* is the apparent spin-spin relaxation time.

$M_y = M_0 \sin θ e^{\Large\frac{− t}{T_2^*}}$

Immediately following the pulse, t = 0. For a 56.7° tip angle, My would be 0.836 Mo immediately following the pulse. This means that we would have 83.6% of the signal that we would have achieved with a 90° pulse. However, unlike a 90° pulse, the value of Mz is not zero immediately following a 56.7° pulse. In general, the value of Mz following a pulse with a tip angle θ can be described by the equation below, where t is the time following the pulse and T1 is the spin-lattice relaxation time.2

$M_z = M_0 [1 − (1 − \cos θ) e ^{\Large\frac{− t}{T_1}}]$

This means that immediately following a 56.7° pulse, t = 0 and the value of Mz, governed by $\cos θ$, is already 0.549 × Mo. Since we want to acquire fully relaxed spectra, the question is how long we will have to wait until Mz = 0.99 × Mo if the T1 relaxation time is 5 s. Substituting into the equation above, we calculate that t = 19.04 s. This is a significant time savings over the 25 s we would have to wait for T1 relaxation if a 90° pulse were used. However, using a 56.7° pulse we have generated only 83.6% of the signal we would have had if a 90° pulse had been used. Since we gain S/N as the square root of the number of FIDs coadded, we would have to acquire 1.43 times as many FIDs as we did with a 90° pulse to obtain the desired S/N of 250:1.
Therefore, the total experiment time using a 56.7° pulse would be 143 × 19.04 s, or 2723 s (0.756 hr), a slightly longer total acquisition time than was required using a 90° pulse. For this reason, most NMR spectroscopists simply use a 90° pulse with a repetition time of at least 5 × T1 for quantitative NMR experiments.
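The arithmetic of the comparison above can be reproduced in a few lines (rounding to whole scans as the text does):

```python
import math

T1 = 5.0                       # s, analyte T1 from the example above
scans_90 = 100                 # scans needed at S/N 250:1 with a 90° pulse
time_90 = scans_90 * 5.0 * T1  # 25 s repetition time -> 2500 s total

theta = math.radians(56.7)                      # reduced tip angle
signal_fraction = math.sin(theta)               # ~0.836 of the 90° signal
# Wait time for Mz to recover to 0.99*M0:
# solve 1 - (1 - cos(theta))*exp(-t/T1) = 0.99 for t.
wait = -T1 * math.log(0.01 / (1.0 - math.cos(theta)))   # ~19.04 s
# S/N scales as sqrt(scans), so the weaker signal needs ~1.43x the scans.
scans_small = round(scans_90 / signal_fraction ** 2)    # 143
time_small = scans_small * wait                         # ~2723 s
print(round(time_90), round(wait, 2), scans_small, round(time_small))
```

Running the numbers confirms the conclusion: the reduced-tip-angle experiment takes slightly longer overall than the straightforward 90° approach.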
Data processing describes operations that are performed to improve spectral quality after the data has been acquired and saved to disk.

Zero-Filling

Zero-filling is the addition of zeros to the end of the FID to increase the digital resolution. Because zeros are added, rather than additional real data points carrying with them an overlay of noise, zero-filling can improve digital resolution without decreasing S/N. Another option is to use linear prediction to add data points calculated from the beginning of the FID, where S/N is at its highest.

Apodization

Apodization is the multiplication of the FID by a mathematical function. Apodization can serve several purposes. Spectral resolution can be improved by emphasizing the data points at the end of the FID. S/N can be improved by multiplying the FID by a function that emphasizes the beginning of the FID relative to the later data points, where S/N is poorer. For quantitative NMR experiments, the most common apodization function is an exponential decay that matches the decay of the FID (a matched filter) and forces the data to zero intensity at the end of the FID. This function is often referred to as line broadening, since it broadens the signals based on the time constant of the exponential decay. This trade-off between S/N and spectral resolution is not restricted to NMR and is common to many instrumental methods of analysis.

Integration Regions

Because NMR signals are Lorentzian in shape, the resonances have long tails that can carry with them significant amounts of resonance intensity. This is especially problematic when the sample is complex, containing many closely spaced or overlapped signals, or when the homogeneity of the magnetic field around the sample has not been properly corrected by shimming. For a Lorentzian peak with a width at half-height of 0.5 Hz, integration regions set at 3.2 Hz or 16 Hz on either side of the resonance would include approximately 95% or 99% of the peak area, respectively.
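The 95% and 99% figures for the integration regions can be checked from the Lorentzian line shape: for a peak of width w at half-height, the fraction of total area within ±a of the center is (2/π)·arctan(2a/w). A quick check, using a helper function of our own naming:

```python
import math

def lorentzian_area_fraction(width_half_height: float,
                             region: float) -> float:
    """Fraction of a Lorentzian peak's total area lying within
    +/- region (Hz) of its center, for a given width at half-height."""
    return (2.0 / math.pi) * math.atan(2.0 * region / width_half_height)

w = 0.5  # Hz, width at half-height
print(round(lorentzian_area_fraction(w, 3.2), 3),
      round(lorentzian_area_fraction(w, 16.0), 3))  # ~0.95 and ~0.99
```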
Note that this analysis does not include the 13C satellites, which account for an additional 1.1% of the intensity of carbon-bound protons in samples containing 13C at natural abundance. In cases where resonances are highly overlapped, more accurate quantitative analysis can often be achieved by peak fitting rather than by integration. An alternative approach utilizes 13C decoupling during the acquisition of the proton spectrum to collapse the 13C satellites so that this signal is coincident with the primary 1H-12C resonance.2, 3 This relatively simple approach requires only that the user have access to a probe (for example a broadband inverse or triple resonance probe) that permits 13C decoupling.

Baseline Correction

NMR integrals are calculated by summation of the intensities of the data points within the defined integration region. Therefore, a flat spectral baseline with near-zero intensity is required. This can be achieved in several ways; the most common is selecting regions across the spectrum where no signals appear, defining these as baseline, and fitting them to a polynomial function that is then subtracted from the spectrum.

2.07: References

1. D.A. Jayawickrama, C.K. Larive, Anal. Chem. 71:2117-2112 (1999).
2. The Quantitative NMR Portal, http://tigger.uic.edu/~gfp/qnmr/
3. G. F. Pauli, B. U. Jaki, D. C. Lankin, “A Routine Experimental Protocol for qHNMR Illustrated with Taxol,” J. Nat. Prod. 70:589-595 (2007).
This experiment, available in pdf format, uses FTNMR Simulator, a program written by Dr. Harold Bell, professor emeritus at Virginia Tech, to simulate an NMR experiment. Instrument parameters such as spectral width, number of data points, pulse width, noise, etc., are selected by the user. Once the FID is displayed, it can be treated by exponential smoothing or resolution enhancement. After the Fourier transform, phase corrections and baseline flattening may be applied. Spectra may be printed, or saved as Windows metafiles. To download the software, you can access the following website and download the FTNMR Simulator, newfid.zip (2.9 MB): http://www.asdlib.org/onlineArticles...retextpage.htm The program is also available in Spanish, fidsp.zip, and French, fidwinfr.zip. A tutorial, wintutor.pdf (255 kB), to accompany this software is also available. It contains more than 20 exercises selected to help novices learn about FTNMR.

03: Virtual Experiment

This virtual laboratory makes use of the FTNMR simulation program written by Harold Bell to explore the effects of the parameters discussed in the Practical Aspects section of this module on the results of simulated NMR data. Although you are encouraged to explore this program more fully, the focus here is on the effects of applied magnetic field strength, signal averaging, relaxation times, repetition times, the number of data points and the receiver gain.

1. Run the simulation program, select “Proton or Carbon System from the Menu” and click “Continue”. Choose one of the molecules from the proton menu. Compounds with a range of complexity are available and you may wish to investigate more than one molecule. In the new menu that appears, click the box that says “TMS in Sample?” Choose an NMR Frequency (magnetic field strength) from the list and click continue. A new menu will appear (see figure below) that contains the resonance frequencies for the compound you selected.
Click “Continue” and the coupling constants for these nuclei will appear. Clicking “Done” generates a list of the spectral frequencies. Click the “DO FID” button to begin the process of simulating the FID.

2. A new menu should appear that contains many of the acquisition parameters we would like to investigate. At this point there are some notable differences between the simulation and real data acquisition. This program allows you to add noise; this is not a feature of real spectrometers. Another difference is that you can select the pulse width in degrees. If you were using a real spectrometer, you would select the pulse power and calibrate the pulse width in μs that corresponds to the tip angle you desire. Start with these initial default parameters:

Default Parameters
• Noise 0.2
• T2 relaxation time 1.0 s
• Acquisition delay 0.0 s
• Zero order phase error 0.0
• Under “Mode of Detection”, click the box that says “Quadrature”
• Flip angle 90°
• Receiver gain 0.1
• Number of pulses 1
• Relaxation delay 10 s
• Use the default spectral window selected for your compound
• Number of data points 8192

Click the box at the bottom of the page to “Continue” and simulate the FID.

3. A new menu should appear along the top of the page. Click “Show FID” to display the FID you simulated. You will see that there are choices to show the whole FID or expansions of the FID with real or imaginary data points. If you click on “Show Points” you will see the individual data points that would have been acquired. Although the pulses we apply in NMR are in the radio frequency range, the digitized signals are on the order of a few thousand Hz. This places them in the audio range, and you can hear your FID by clicking on the “Listen to FID” button.

4. Click “Continue” and select “zero fill”; a new menu will appear. This menu accesses the data processing part of the simulation program. Choose “Exponential Smoothing”, use the value of 0.5 Hz line broadening, and choose “Done”.
Answer “YES” to accept the smoothing function and apodize the FID. Select “Do FT” to Fourier transform the FID and generate the spectrum.

5. Now let’s investigate the effect of the receiver gain. Choose “Start Over” and choose “Same Frequencies, Intensities and T1’s”. Select gain values of 0.01, 0.1, 1, 10 and 100 and examine the effect on the FID and on the Fourier transformed spectrum. What is the optimum value of the gain? Can the gain be set too high? What is the effect of too high a gain on the FID? On the spectrum?

6. Using the gain you determined to be the optimum value, let's explore the effect of the number of pulses. Set the noise to 3.0 and repeat the simulation for 1 pulse. Now compare this result with 100 pulses. After selecting “100 pulses” and “Continue”, a dialog box called “Pause” will pop up showing the FID after 1 pulse; select continue. You will get more dialog boxes showing how the FID looks after 10, 50 and 100 pulses. Do you see how the signal-to-noise ratio (S/N) improves with the number of FIDs coadded? How does the Fourier transformed spectrum compare with the one you measured using a single pulse? In this simulation, it is nearly as fast to coadd 100 FIDs as to measure 1, but in a real measurement it would take 100 times as long. In this case, do you think the S/N improvement would be worth the extra time? How would the S/N improvement affect the quality of quantitative results?

7. Now go back to 1 pulse, leaving all the other parameters the same including a noise level of 3.0, and let’s evaluate the effect of the exponential smoothing function. This time when you choose “Start Over”, just select “Same FID”, since this is a post-acquisition processing parameter and does not affect the saved data. Evaluate the transformed spectra using values of 0, 0.5, 1, 5 and 10 Hz line broadening. Which value do you think is the optimum? Why?

8. Click on "Start Over" and select "Complete Restart".
This time choose “User-Defined Set of Frequencies” and press “Continue”. A new menu should appear. Select “1-5 Single Frequencies”. Enter a single frequency of 1.28 Hz. Choose a T1 of 2 and an intensity of 2, press “Continue”, and enter the default parameters from question #2 above. Click “Continue” to simulate the FID, then click "Show Vectors" and examine how the vectors relate to the FID. You may want to click “slow” on the vector speed bar to slow down the motion of the vector and the evolution of the FID. Fourier transform the FID and you should see a single resonance at 1.28 Hz.

9. Choose “Start Over” with the same frequencies, intensities and T1’s. Change the T2 to lower and higher values. Examine the effect of T2 on both the FID and the Fourier transformed spectrum.

10. In this exercise, we will explore the effects of the flip angle. Use the Default FID Parameters from question #2 above, except select a T2 of 2 sec. Now investigate the following flip angle values: 30, 45, 60 and 90°. What is the effect of flip angle on the intensity of the FID and in the Fourier transformed spectrum? Can you use the vector model of NMR to explain the effect of flip angle on resonance intensity?

11. Now let’s explore the effect of the number of data points (this is related to the acquisition time in a real spectrometer). Choose “Start Over” with the same frequencies, intensities and T1’s. Use the Default FID Parameters except select a T2 of 2 sec and a noise value of 0.1. Investigate the effect of the number of data points by choosing values less than and greater than 1024. What is the effect on the transformed spectrum? Why does the use of too few data points produce artifacts in the transformed spectrum?

12. With quadrature detection, the analog signal is split into two FIDs that differ by a 90° phase shift. These FIDs form the sine and cosine inputs to the complex Fourier transform. Choose 512 points and click on "Show FID".
You should have two possibilities, real and imaginary. How does the real FID differ from the imaginary one?

13. Now let's see what happens if we have more than one resonance in our FID. Click on "Start Over" and select "Complete Restart". This time enter two frequencies, 1.28 Hz and 2.56 Hz, and choose an intensity of 1 and a T1 of 2 sec for each resonance. Click “Continue”. Follow the instructions above for the "Default FID Parameters" except choose 2048 data points, and generate a new FID. Click on "Show Frequencies". You should see the individual components of the FID. Can you see how the two waves add to produce the FID? Now click on "Show Vectors" and examine how the vectors relate to the FID. Perform the Fourier transform and examine the spectrum. Does this spectrum make sense to you? Start over and choose some other frequencies. How does the choice of frequency affect the FID, the vectors and the transformed spectrum?

14. Now let’s explore the effect of spectral width. Choose “Start Over” and “Complete Restart”. Choose a single frequency of 2.56 Hz, an intensity of 2 and a T1 of 2. Begin by using the "Default FID Parameters" listed above with a 100 Hz spectrum width to generate the FID. Now click on "Show Points". At this level of expansion, it is not possible to really see the individual data points. Select “Show FID” and choose "0.8 sec". Now you can better see that there are discrete data points that sample the FID. Now choose "Start Over", choose "Same Frequencies, Intensities, T1", and examine the effect the spectral width has on the FID. Reduce the value of the “Spectrum Width” to 20, 10, 5, 2 and 1 Hz, expanding the FID (show the first 0.8 sec) to show the individual data points. What is the effect of changing the spectral width on the transformed spectrum? What happens if you choose a spectral width of 2 or 1 Hz? Is the frequency of the peak the expected value of 2.56 Hz?

15.
Now let’s examine what happens if our acquisition parameters do not allow for complete relaxation of the magnetization. Start completely over and again let's use 3 resonances with frequencies of 0 Hz (20 s), 5 Hz (5 s), and 10 Hz (1 s), all with intensities of 0.5 and with the T1 values given in parentheses after each frequency. Use the "Default FID Parameters" except choose a relaxation delay of 100 sec, 10 pulses, a T2 of 0.3, a spectrum width of 100 Hz and 1024 points. Fourier transform the FID to examine the relative intensities of the resonances in the NMR spectrum. Choose “Start Over”, selecting the “Same Frequencies, Intensities and T1’s”, and examine the effect of signal averaging on the intensity of the resonances by reducing the relaxation delay to 20, 10, 5, 2, and 0 sec, and Fourier transforming each FID (remember that in each case, the magnetization does relax during the acquisition time). What is the effect of reducing the relaxation delay on the relative intensities and on the overall S/N ratio?
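The intensity distortions explored in step 15 can be estimated with a simple recovery model. A minimal sketch, assuming steady-state signal ∝ 1 − e^(−TR/T1) after repeated 90° pulses and an acquisition time of 10.24 s (1024 points over a 100 Hz sweep); both are modeling assumptions, not simulator internals:

```python
import math

def steady_state_intensity(t1, repetition_time):
    """Relative steady-state signal after repeated 90-degree pulses,
    assuming full transverse decay between pulses: 1 - exp(-TR/T1)."""
    return 1.0 - math.exp(-repetition_time / t1)

# T1 values from step 15; TR = relaxation delay + ~10.24 s acquisition time
t1_values = {"0 Hz": 20.0, "5 Hz": 5.0, "10 Hz": 1.0}
for delay in (100, 20, 10, 5, 2, 0):
    tr = delay + 10.24  # magnetization also relaxes during acquisition
    row = {peak: round(steady_state_intensity(t1, tr), 3)
           for peak, t1 in t1_values.items()}
    print(delay, row)
```

As the relaxation delay shrinks, the 0 Hz resonance (T1 = 20 s) is attenuated far more than the 10 Hz resonance (T1 = 1 s), so the relative intensities become distorted even though all three spins have equal concentration.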
Laboratory Experiment: Determination of Malic Acid Content in Apple Juice by NMR

This laboratory experiment contains four sections: a set of prelab exercises; introductory material that provides background in addition to the basic theory, practical aspects and virtual laboratory sections of this module; a dry lab that allows students to download and process Q-NMR data; and a wet lab experiment in which students make their own measurements to quantify the malic acid content of apple juice. Additional background information about malic acid's role in fermentation and wine production, and its measurement by Q-NMR, can be found in the Q-NMR Applications section of this module.

04: Q-NMR Experiment

1. Malic acid is a diprotic acid containing two carboxylic acid groups. Draw the structure of malic acid.
2. What other analytical methods might be used for quantitative analysis of malic acid in fruit juices?
3. What properties would you consider in choosing a reference standard for quantitative analysis by NMR?
4. How will changing pH affect the chemical shifts of malic acid? What potential problems might arise from these pH effects?

Follow the instructions provided in the Virtual Lab to help answer questions 5 and 6.

5. What acquisition parameters are important for a quantitative NMR measurement? How do you select the values of these parameters?
6. What data processing considerations are important for obtaining accurate and precise results?

4.02: Background

There are a number of issues that should be considered when measuring NMR spectra for purposes of quantitative analysis. Many of these issues pertain to the way that the NMR data are acquired and processed. Quantitative NMR measurements must be performed with care to obtain accurate and precise results.
The advantage of NMR over other spectroscopic methods is that no response factor is needed: all the resonances generated by a particular nucleus (for example 1H, 31P or 19F) have an integrated intensity directly proportional to the molar concentration of the analyte and to the number of nuclei that give rise to that resonance. $\mathrm{\dfrac{Integral\: Area}{Number\: of\: Nuclei} ∝ Concentration} \label{E1}$ The 1H NMR resonance of a methyl group would therefore have 3 times the intensity of a peak resulting from a single proton. For example, sodium 3-(trimethylsilyl)propionate (TSP) has a methyl resonance equivalent to 9 protons (from the three methyl groups) and therefore gives rise to a resonance 9 times more intense than that of a single proton. For this experiment, KHP (potassium hydrogen phthalate) will be used as an internal quantitation standard. KHP has the advantage of being a primary standard, meaning that after drying, its solution concentration can be calculated directly from its mass. You may also wish to use an internal chemical shift reference, like TSP-d4, in the preparation of your solutions. TSP is not a primary standard, and is known to adsorb to glass surfaces, which can change its solution concentration over time; therefore it is not a useful quantitation standard. As a chemical shift reference, TSP-d4 has the advantage of producing a single sharp resonance with a defined chemical shift of 0.00 ppm. Malic and citric acids are the major organic acids in fruits. Q-NMR is a valuable technique for determining the quantities of major and minor compounds in fruit juices.
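The proportionality in Equation $\ref{E1}$ underlies every calculation in this module; a minimal numerical sketch, using invented integral values for an equimolar pair of resonances:

```python
def normalized_integral(area, n_protons):
    """Integral area per contributing proton (Equation 1)."""
    return area / n_protons

# Hypothetical equimolar solution: the TSP methyl resonance (9 protons)
# versus a one-proton analyte resonance. Areas below are invented.
tsp_area = 9.0e6
analyte_area = 1.0e6
print(normalized_integral(tsp_area, 9) == normalized_integral(analyte_area, 1))
```

Dividing each integral by its proton count puts resonances from different functional groups on a common, concentration-proportional scale.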
By comparing the resonance integral of an analyte to that of a standard compound of known concentration, we can determine the analyte concentration according to the equation below: $C_{analyte} = \dfrac{I_{analyte} × C_{std}}{I_{std}} \label{E2}$ where Canalyte is the analyte concentration, Cstd is the quantitation standard concentration, and Ianalyte and Istd are the areas of the resonances of the analyte and the standard, respectively, normalized to the number of protons giving rise to each resonance. The accuracy and precision of the integral measurements are affected by the following experimental factors:

• spectral S/N
• line shape
• quality of shimming
• baseline
• apodization window functions
• phasing, baseline, and drift corrections

Resonance overlap is a potential problem in accurate quantitation by NMR. This problem can sometimes be solved by careful selection of pH, using a different solvent, or adding a reagent to change the analyte chemical shift (e.g., lanthanide shift reagents). In some cases, the 1H NMR spectrum may be too crowded for accurate quantitation, but another nucleus, for example 19F or 13C, that has a larger chemical shift window might produce well-separated resonances of the mixture components.

Field-frequency lock

The fields of superconducting magnets tend to drift over a period of minutes to hours, causing loss of resolution. Most modern spectrometers are equipped with a lock channel that regulates the spectrometer field by monitoring the chemical shift of a deuterium resonance of the solvent. As the magnetic field drifts, the change in the deuterium resonance frequency generates an error signal that indicates both the magnitude and the direction of the field change, allowing compensation by a feedback circuit. Non-viscous deuterated solvents provide the best field-frequency lock because of their sharp and intense resonances.
In 1H NMR experiments, an additional advantage of preparing samples in deuterated solvents is that the intensity of the solvent proton resonance is reduced. The resonance of protic solvents (e.g., H2O or CH3CN) can obscure analyte 1H NMR resonances and reduce the dynamic range of the measurement. While it is common to suppress the 1H NMR resonances of protic solvents, analyte resonances with similar chemical shifts will also be suppressed. Sometimes it is not possible to prepare the sample solution in a deuterated solvent, for example when the sample is already a liquid (e.g., blood plasma, urine or fruit juice). In such a case, a sufficient quantity of a deuterated solvent, typically 10% D2O, is added to the sample to provide the lock signal.

Solvent Suppression

Apart from accurate tuning of the probe and pulse width calibration, effective suppression of the solvent resonance is often crucial for the analyte resonance to be observed in aqueous samples. There are a number of solvent suppression methods available for use in NMR experiments, the simplest of which is presaturation. Presaturation uses a selective pulse to equalize the populations of the solvent spins. It is important to evaluate the effect of the solvent presaturation parameters on the resolution and sensitivity of the experiment to ensure good results. The presaturation power should be selected such that the solvent resonance is significantly attenuated without reducing the intensity of neighboring analyte resonances.

Repetition Time

The time between successive acquisitions is crucial in Q-NMR. To determine the appropriate relaxation delay, the inversion recovery pulse sequence is used to measure the T1 relaxation times of the analyte protons.
The pulse sequence uses a calibration program which fits data to the exponential recovery equation $I(τ) = I_0\left(1 − 2 × e^{\Large\frac{−τ}{T_1}}\right) \label{E3}$ where I(τ) is the intensity of the selected proton resonance for a given τ value, I0 is the intensity at equilibrium (infinite τ), τ is the value of the inversion delay, and T1 is the first-order time constant for longitudinal relaxation.

Malic and citric acid content of fruits

The table below summarizes the results obtained from quantitative NMR analysis of the malic and citric acid content of various fruits.1

Fruit        Malic Acid (g/L)   Citric Acid (g/L)
Apple*       3.42-10.12         0.09-0.36
Apricot      4.59               4.13
Pear         2.55               1.05
Kiwi         2.66               11.00
Orange       2.13               11.71
Strawberry   1.74               7.13
Pineapple    1.33               5.99

*Data obtained from three apples ranged between the values given.
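The fit to Equation $\ref{E3}$ described under Repetition Time can be sketched numerically. This is a hypothetical illustration, not the KHP result: it generates noiseless inversion-recovery data for an assumed T1 of 5 s and recovers it by linearizing ln((I0 − I)/2I0) = −τ/T1:

```python
import math

def fit_t1(taus, intensities, i0):
    """Estimate T1 from inversion-recovery data by linearizing
    I(tau) = I0*(1 - 2*exp(-tau/T1)) to ln((I0-I)/(2*I0)) = -tau/T1
    and fitting the slope through the origin by least squares."""
    xs, ys = [], []
    for tau, i in zip(taus, intensities):
        y = (i0 - i) / (2.0 * i0)
        if y > 0:  # points at or past full recovery carry no information
            xs.append(tau)
            ys.append(math.log(y))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# Synthetic data with T1 = 5 s (illustrative only)
t1_true, i0 = 5.0, 1.0
taus = [0.005, 2, 2.5, 3, 3.5, 4, 6, 10, 15, 20]
data = [i0 * (1 - 2 * math.exp(-t / t1_true)) for t in taus]
print(round(fit_t1(taus, data, i0), 3))  # → 5.0
```

A quick consistency check: the signal passes through zero at τ = T1 ln 2, so the null point of the recovery curve gives an immediate rough estimate of T1.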
Drylab Procedure:

This section contains FIDs measured using a 600 MHz Bruker Avance spectrometer. The FIDs can be downloaded and processed to quantify the concentration of malic acid in a D2O solution and in apple juice. Inversion-recovery spectra measured for KHP are provided so that you can calculate the T1 relaxation time using the resonance intensity. The FIDs are provided in JCAMP format, which can be processed using most modern vendor software programs. If you do not have access to an NMR spectrometer, a free NMR processing software package, wxNUTS, can be downloaded for Mac OS X and Windows: http://www.acornnmr.com/nuts.htm

A. Preparation of KHP solution and determination of the T1 relaxation times of the KHP protons

A small amount of potassium hydrogen phthalate (KHP) was placed in a beaker and put into an oven at 110 ºC for 4 hrs. The beaker was then removed, covered in aluminum foil and placed in a desiccator to cool. The KHP was weighed using an analytical balance, transferred to a 5 mL volumetric flask and diluted to the mark with D2O to prepare a stock solution.

Mass of weighing paper = 0.2219 g
Mass of weighing paper + KHP = 0.3533 g

To measure the T1 relaxation times of the KHP protons, a 600 μL aliquot of this stock solution was transferred to an NMR tube. A series of inversion recovery spectra were acquired as a function of the variable delay between the 180° and 90° pulses. The spectrometer frequency was 599.923 MHz. This experiment used an acquisition time of 2 sec and an additional relaxation delay of 35 sec. Eight FIDs were coadded for each of the following spectra. Download and analyze these spectra to determine the T1 relaxation times of the KHP protons. To download a file, click the file name and, once the text window opens, go to File and Save As to save the file as a text file. To process the downloaded file using NUTS, open the wxNUTS program and under the File menu click Import and then select the file to process.
Variable delay (s)   JCAMP File
0.005                T1-measurement-KHP-051708_0s.dx
2                    T1-measurement-KHP-051708_2s.dx
2.5                  T1-measurement-KHP-051708_2p5s.dx
3                    T1-measurement-KHP-051708_3s.dx
3.5                  T1-measurement-KHP-051708_3p5s.dx
4                    T1-measurement-KHP-051708_4s.dx
6                    T1-measurement-KHP-051708_6s.dx
10                   T1-measurement-KHP-051708_10s.dx
15                   T1-measurement-KHP-051708_15s.dx
20                   T1-measurement-KHP-051708_20s.dx

B. Determination of the malic acid concentration in a D2O stock solution

To test our ability to quantitatively measure the malic acid concentration in an unknown apple juice sample using KHP as an internal standard, a solution containing a known malic acid concentration was prepared by transferring a known mass of malic acid to a 5 mL volumetric flask and diluting to the mark with D2O.

Mass of paper = 0.1897 g
Mass of paper + Malic acid = 0.3324 g

The solution for Q-NMR was prepared by combining 1.00 mL of the KHP stock solution and 1.00 mL of the malic acid stock solution. The solution was mixed well and a 600 μL aliquot transferred to an NMR tube for analysis. The frequency of the spectrometer was 599.923 MHz. The spectrum was measured using a 2 s acquisition time and an additional 35 s relaxation delay. 64 FIDs were coadded. The FID below was acquired for the quantitative analysis of the malic acid standard solution using KHP as an internal standard. Download and analyze this spectrum to determine the concentration of the malic acid in this stock solution.

Q-NMR-Malic-JHP-061108_Run2.dx

C. Determination of the malic acid concentration in a fruit juice solution

The KHP stock solution was diluted by mixing 1.00 mL of the stock solution prepared in part A with 1.00 mL of D2O. After mixing, a pipettor was used to add 100 μL of the diluted KHP solution to 900 μL of apple juice obtained from the grocery store (note that better accuracy and precision would have been achieved if a 1 mL glass pipette had been used instead of a pipettor).
The pH of the solution was adjusted to approximately 1.35 using HCl. A 600 μL aliquot of this solution was transferred to an NMR tube and the spectrum recorded at a frequency of 599.923 MHz. The spectrum was measured using a 2 s acquisition time and an additional 35 s relaxation delay. The solvent resonance was suppressed by saturation during the relaxation delay. 480 FIDs were coadded. The FID below was acquired for the quantitative analysis of the malic acid in apple juice using KHP as an internal standard. Download and analyze this spectrum to determine the concentration of the malic acid in the apple juice sample.

Q-NMR-Apple-JHP-061108A_Run1.dx

Dry Lab Report:

1. Using the mass and volume used to make the stock solution in part A, calculate the KHP concentration.
2. Using the inversion-recovery data, determine the T1 relaxation times of the two KHP resonances.
3. Using the mass and volume used to make the stock solution in part B, calculate the malic acid concentration.
4. What is the concentration of malic acid determined from the Q-NMR experiment using KHP as an internal standard?
5. What is the concentration of the malic acid in the apple juice?
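The mass-to-concentration arithmetic behind report question 1 can be sketched as follows. The KHP molar mass (204.22 g/mol for C8H5KO4) is supplied here as an assumption to verify before use; the masses and flask volume are those given in part A:

```python
def molarity(mass_g, molar_mass_g_per_mol, volume_ml):
    """Stock-solution concentration from the mass weighed and the
    volumetric flask volume."""
    return (mass_g / molar_mass_g_per_mol) / (volume_ml / 1000.0)

khp_mass = 0.3533 - 0.2219  # g, by difference from the part A weighings
c_khp = molarity(khp_mass, 204.22, 5.00)  # molar mass assumed: 204.22 g/mol
print(round(c_khp, 4))  # → 0.1287 (mol/L)
```

The same helper applies to the malic acid stock in part B, with the appropriate mass and molar mass substituted.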
Wet Lab Procedure:

Part A. Preparation of standard solutions

1. Prepare a solution of the reference standard by weighing dry KHP and dissolving it in an appropriate volume of deionized water or D2O (your instructor will indicate which solvent should be used in your experiments). Exact knowledge of the mass and volume of the KHP solution is important because the concentration of this solution controls the accuracy of the analysis.
2. Prepare a stock solution of malic acid by weighing dry malic acid and dissolving it in an appropriate volume of deionized water or D2O (your instructor will indicate which solvent should be used in your experiments). Adjust these stock solutions to around pH 1 with HCl.

Note: if deionized water is used as the solvent you will need to add some D2O (usually around 10%) as a lock solvent. You also may want to consider using a chemical shift reference standard.

Part B. NMR parameters and quantitative analysis of standard solutions

1. Combine aliquots of the malic acid and KHP solutions you made in part A.
2. Determine and record the values for each parameter listed below. Be prepared to justify your choice in each case. Obtain an NMR spectrum of the solution you prepared in part B1.

Acquisition time =
Relaxation delay* =
Pulse width =
Spectral width =
Receiver gain =
Temperature =
Number of scans =

* The appropriate value of the relaxation delay can be determined from an inversion recovery experiment. If this is not feasible, your instructor may help you decide on the appropriate value of this parameter.

3. After the experiment is completed, process the spectra using appropriate line broadening, zero filling, phasing, and baseline correction.
4. Assign the KHP and malic acid resonances.
5. Calculate the S/N for this measurement by dividing the integrals measured for resolved malic acid resonances by the rms (root mean square) noise over a region of the spectrum that is free of resonances and has a flat baseline.
6.
Determine the concentration of malic acid in your stock solution relative to KHP. How does the concentration you determined by Q-NMR compare with the value you would calculate from the mass and volume used in the preparation of the stock solutions?

Part C. Determination of the malic acid content of unknown fruit juice samples

1. If necessary, clarify the juice sample by centrifugation. Prepare a 5-fold dilution of the juice sample with deionized water, adjusting the pH to around 1 with HCl.
2. Prepare at least 3 replicate solutions for quantitative analysis using KHP as an internal standard.
3. Acquire NMR spectra for the juice solutions starting from the optimized parameters used for the standard solutions in part B.
4. Process the spectra and calculate the average concentration of malic acid in the fruit juice. Determine the relative standard deviation of your measurements.

Wet Lab Report:

1. Report the amount of malic acid in your sample along with the relative standard deviation. How does the amount of malic acid you determined in the juice sample compare with the amount reported in the table in the background section of this lab experiment?
2. Using the S/N values calculated for the standard malic acid solution, estimate the limits of quantitation and detection.
3. Is the splitting pattern for any of the resonances of the compounds studied different than what you might predict from the simple rules you learned in organic chemistry? To what do you attribute these differences?

Reference

del Campo, G.; Berregi, I.; Caracena, R.; Santos, J. I. Anal. Chim. Acta 2006, 556, 462-468.
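The S/N calculation described in step B5 of the procedure can be sketched as follows; the baseline values here are synthetic, whereas real spectra would supply the resonance-free region and the peak integral:

```python
import math

def rms(values):
    """Root-mean-square deviation about the mean of a baseline region."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def signal_to_noise(peak_integral, baseline_points):
    """S/N as the resonance integral divided by the rms noise of a
    resonance-free, flat-baseline region of the spectrum."""
    return peak_integral / rms(baseline_points)

# Synthetic check: alternating +/-0.1 noise has an rms of 0.1
noise_region = [0.1, -0.1] * 50
print(round(signal_to_noise(5.0, noise_region), 6))  # → 50.0
```

On real data, pick a baseline window well away from any resonance; a sloping baseline inflates the rms and understates the S/N.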
This section presents summaries of several common Q-NMR applications. Q-NMR is widely used for both purity and impurity analyses. It finds extensive use in the food and beverage industry, where it is used to detect adulteration and to follow the progression of processes such as fermentation. A relatively recent application of Q-NMR is in the area of metabonomics, which follows relative changes in the levels of metabolites in biofluids like urine or plasma, or in tissue biopsies, to provide insight into complex biological processes including the effect of genetic variations, disease progression, drug efficacy and the effect of toxicants.

05: Q-NMR Applications

Macrolide antibiotics are derived from microbial fermentations. As a class, the macrolide antibiotics contain a 14- or 16-membered macrocyclic lactone bearing amino groups and deoxy sugars. The antibiotic activity of these compounds results from the inhibition of bacterial protein synthesis. Because macrolides accumulate within leukocytes, they are transported to the site of infection. Some macrolides also have immunomodulatory effects that can reduce inflammation. One of the best known macrolide antibiotics is erythromycin, the structure of which is shown here. This application examines the use of quantitative 1H NMR measurements for determining the purity of macrolide antibiotic reference standards. Samples of clarithromycin, roxithromycin, azithromycin, dirithromycin and midecamycin were tested. The Q-NMR experiments were performed at 500 MHz using 1,4-dinitrobenzene as an internal standard. Rather than determining T1 relaxation times, the relaxation delay (64 s) was selected by comparing the peak area for the internal standard using d1 delays of 1, 5, 10, 20, 32, 64 and 256 s. The Q-NMR results were compared with the more conventional approach of determining mass balance, shown in Table 1 below. In mass balance determinations, the content is calculated as shown in Equation $\ref{E1}$.
$\mathrm{Content\: \% = (1 - impurity\%)(1 - water\% - volatile\: material\: \% - sulfated\: ash\%) \times 100} \label{E1}$ In this determination, the % impurity is determined by HPLC-UV. Water content is determined by Karl Fischer titration, and residual solvents are measured using gas chromatography. The amount of sulfated ash (primarily a European designation) corresponds to the amount of residue remaining after ignition of the sample. One of the drawbacks of HPLC-UV for these measurements is the lack of a chromophore with a characteristic absorption wavelength. In addition, a general problem with this approach is that the UV absorption coefficient of each impurity is different, and in many cases unknown. Purity determinations using NMR rely on the comparison of the integrals measured for the analyte’s resonances (Ix) with the integral of a quantitation standard (Istd). In this case an internal standard was used. Because the masses of the sample (mx) and the internal standard (mstd) are known, the content of the analyte can be determined. Note that the underlying assumption in this method is that resonances of impurities do not overlap with the resonances used to measure the analyte content. The purity of the analyte, Px, is calculated as shown in Equation $\ref{E2}$: $P_x = \dfrac{I_x}{I_{std}} \dfrac{N_{std}}{N_x} \dfrac{M_x}{M_{std}} \dfrac{m_{std}}{m_x} P_{std} \label{E2}$ where Mx and Mstd are the molar masses of the analyte and the standard, respectively, Pstd is the purity of the standard, and Nstd and Nx are the number of spins responsible for the integrated standard and analyte signals, respectively. The content results for the macrolide antibiotics obtained with 1H NMR and by the mass balance method, summarized in Table 1, are in good agreement. The main source of uncertainty in the Q-NMR method was in the sample weight, about 15 mg in these experiments.
Use of a larger mass would decrease the uncertainty in the results but would consume larger amounts of deuterated solvents. An informative feature of this report is the detailed error analysis it contains.

Table 1. Results of the Q-NMR and mass balance methods (adapted from Liu and Hu, Anal. Chim. Acta 2007, 602, 114-121)

1H Q-NMR method (%):

Compound         Average Content(a)   RSD    Uexpanded
Clarithromycin   96.3                 0.49   1.89
Roxithromycin    95.7                 0.44   1.82
Azithromycin     94.3                 0.50   1.36
Dirithromycin    96.9                 0      1.81
Midecamycin      97.1                 2.0    1.96

Mass balance method (%):

Compound         Impurity/u1(b)   Water/u2(b)        Residual Solvents/u3(b)   Sulfated ash/u4(b)   Content/Uexpanded
Clarithromycin   3.35/0.0299      1.4/0.0148         <0.001/<8.4 x 10-6        0/0                  95.3/2.64
Roxithromycin    2.50/0.0223      2.2/0.0233         <0.001/<8.4 x 10-6        0/0                  95.4/2.64
Azithromycin     1.59/0.0142      4.5/0.0486         <0.001/<8.4 x 10-6        0.02/0.0198          94.0/2.75
Dirithromycin    3.30/0.0295      0.6/0.00636        <0.001/<8.4 x 10-6        0/0                  96.1/2.67
Midecamycin      3.94/0.0352      0.16/0.000035(c)                             0/0                  95.9/1.71

a. Calculated using 4 NMR signals.
b. u1 is the uncertainty in the impurity, u2 is the uncertainty in water, u3 is the uncertainty in residual solvents and u4 is the uncertainty in the sulfated ash.
c. The content of water and residual solvents was not determined for midecamycin. Total volatile materials of midecamycin were determined by the method of loss on drying.

Reference

Liu, S.-Y.; Hu, C.-Q. “A comparative uncertainty study of the calibration of macrolide antibiotic reference standards using quantitative nuclear magnetic resonance and mass balance methods” Anal. Chim. Acta 2007, 602, 114-121.
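The purity expression in Equation $\ref{E2}$ translates directly into code. A minimal sketch; all numeric inputs below are invented for illustration and are not values from Table 1:

```python
def purity(i_x, i_std, n_x, n_std, molar_mass_x, molar_mass_std,
           mass_x, mass_std, p_std):
    """Px = (Ix/Istd)(Nstd/Nx)(Mx/Mstd)(mstd/mx)Pstd  (Equation 2)."""
    return ((i_x / i_std) * (n_std / n_x)
            * (molar_mass_x / molar_mass_std)
            * (mass_std / mass_x) * p_std)

# Sanity check with hypothetical values: identical integrals, spin counts,
# molar masses and weighed masses must return the standard's purity unchanged.
print(purity(1.0, 1.0, 1, 1, 500.0, 500.0, 15.0, 15.0, 0.999))  # → 0.999
```

Note that the weighed masses mx and mstd enter as a ratio, which is why the ~15 mg sample weight dominates the uncertainty budget discussed above.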
This document contains two separate applications that discuss the use of quantitative NMR measurements for the analysis of enantiomeric or isomeric purity of pharmaceutical compounds.

Application 1. Q-NMR for Determination of the Enantiomeric Purity of Pharmaceutical Ingredients

Enantiomeric purity (EP) is important in the development of active pharmaceutical ingredients (API) by the pharmaceutical industry. For some drugs, API enantiomers can produce dramatically different pharmacological responses. Enantiomeric purity is commonly determined by effecting a chiral separation; however, chiral separations can be time consuming and typically involve the use of expensive chiral columns. Although it is not possible to distinguish enantiomers directly with NMR, derivatization to form diastereomers produces molecules with distinct NMR spectra.1 Determination of EP by NMR using chiral solvating agents (CSAs) alleviates the need for chemical derivatization or standards. CSAs interact with the enantiomers in solution, in effect forming transient diastereomers. The use of CSAs in NMR was first reported in 1966 by Pirkle.2 The CSA used in this study was 1,1’-binaphthol, shown as compound 1 in Figure 1. This compound is known to resolve chiral amines such as compounds 2-5 shown below. Some of the chiral compounds in Figure 1 are prescribed clinically to treat disorders such as depression and anxiety (e.g., 2, Zoloft® and 3, Paxil®). Compound 5, fenfluramine, was a component of the anti-obesity drug Fen-Phen, which was withdrawn from the US market after reports linked it to heart damage.

Figure 1. Structures of (1) the chiral solvating agent (R)-1,1’-bi-2-naphthol, (2) (+)-sertraline HCl, (3) (-)-paroxetine HCl, (4) racemic methylbenzylamine, and (5) racemic fenfluramine HCl.
Enantiomeric discrimination by NMR is based on intrinsic differences in the diastereomeric complexes formed and/or differences in the association kinetics of the equilibria below:

$\mathrm{E + S \rightleftharpoons ES}$

$\mathrm{E’ + S \rightleftharpoons E’S}$

where S represents the CSA molecule while E and E’ represent the different solute enantiomers.3

In experiments by Salsbury et al. to determine the enantiomeric purity of the APIs fenfluramine, sertraline and paroxetine, and the model compound methylbenzylamine (MBA, compound 4 in Figure 1), the analytes were dissolved in CDCl3 and chemical shifts were referenced to tetramethylsilane (TMS).1 The 1H NMR spectra used to determine the limits of detection and quantitation were measured with 64 transients, a tip angle of 30°, a relaxation delay of 1 s and line broadening of 0.3 Hz. Proton chemical shift assignments were confirmed using COSY. Standards were weighed (1-4 mg) and CDCl3 solutions containing an appropriate molar ratio of the analyte and 1,1’-binaphthol were prepared.

The ability of the CSA interactions to resolve mixtures of enantiomers was evaluated using MBA mixtures. Using standards of methylbenzylamine at different concentrations to obtain a calibration curve, the limit of quantitation was determined to be below 1% of the minor component. Analysis of racemic fenfluramine revealed that it contained 50.2 ± 0.4% of the S-enantiomer. Although chiral HPLC could not be performed for fenfluramine or sertraline without derivatization, the analysis of paroxetine enantiomers was carried out by both NMR and HPLC, yielding results of 7.5 ± 0.3% and 8.5%, respectively.

Application 2. Q-NMR for Quantitation of the E/Z Isomer Content of Fluvoxamine

Fluvoxamine is an antidepressant with two possible isomeric structures, as shown in Figure 1 below. The activity of fluvoxamine resides in the E-isomer (Figure 1A); however, the Z-isomer (Figure 1B) occurs in all the synthesis pathways.
Transport proteins can discriminate between the E- and Z-isomers. The British Pharmacopoeia limits the content of the Z-isomer to 0.5%. The Q-NMR method described here measures the Z-isomer to the 0.2% level in 15 mg of the drug substance.

Figure 1. Structures of (A) (E)-fluvoxamine and (B) (Z)-fluvoxamine. The pharmaceutical formulation is available as the maleate salt of fluvoxamine. The numbering of the atoms correlates with the NMR spectrum reported in the reference.

An advantage of Q-NMR for determining the Z-isomer content is the minimal sample preparation required. In this example, 15 mg of material was dissolved in deuterated methanol, which was also used as the 1H chemical shift reference (3.31 ppm).1 1H NMR spectra were acquired by coaddition of 128 transients over a spectral width of 4595 Hz. FIDs were apodized by multiplication with an exponential function equivalent to 0.3 Hz line broadening. For quantitation, the C-2 proton resonances of the Z- (2.62 ppm) and E- (2.90 ppm) fluvoxamine isomers were manually integrated and the values compared.

Before performing quantitative measurements, it was necessary to determine the limits of quantification and detection for each isomer. Although the pure E-isomer was commercially available, the pure Z-isomer was not; instead, the authors had access only to a 1:1 (E/Z) mixture. Therefore, a stock solution containing 5.13% (Z)-fluvoxamine was prepared by mixing appropriate amounts of the pure E-isomer and the E/Z mixture. Serial dilutions were made from this stock solution for NMR analysis. With each dilution, the concentrations of the E- and Z-isomers decreased, but the %Z content remained at 5.13%. For the spectrum measured at each concentration, manual integration of the C-2(Z) and C-2(E) proton resonances was performed three times. The difference between calculated and determined values of the Z-isomer content was less than 5% at concentrations down to 0.07 mg/L.
Greater deviation from the calculated values was observed at lower Z-isomer concentrations. Based on these experiments, the limits of quantitation and detection were determined to be 0.07 mg/L and 0.018 mg/L, respectively. To determine linearity, a set of mixtures containing 0-10% Z-isomer was measured in triplicate. The correlation coefficient of the calibration plot was found to be 0.9999, with a slope of 0.9923. Since the British Pharmacopoeia test requires the Z-isomer content to be less than 0.5%, solutions were prepared containing 0.15-1.01% of the Z-isomer. NMR spectra were measured in triplicate and each spectrum was integrated three times. Linear regression analysis yielded a correlation coefficient of 0.994 with a slope of 1.042, indicating that the Q-NMR assay is linear over this concentration range. The Q-NMR method was found to be an accurate, sensitive, and timesaving method for the determination of Z-fluvoxamine content.

Reference

1. Deubner, R. and Holzgrabe, U. Magn. Reson. Chem. 2002, 40, 762-766.

5.03: Q-NMR for Analysis and Characterization in Vaccine Preparations

PedvaxHIB is a vaccine made by chemically conjugating the capsular polysaccharide of Haemophilus influenzae type b (Hib) to an outer membrane protein from Neisseria meningitidis, forming a protein-conjugated vaccine that is very effective in preventing invasive Hib infection in infants and young children. This example shows the utility of NMR for both the characterization of the derivatized polysaccharide and its quantitative analysis. The advantages of NMR include its nondestructive nature and its ability to detect molecules that do not contain a UV-visible chromophore.

To obtain an accurate determination of the solution temperature, a linear calibration of the HDO chemical shift was carried out. Shimming of the magnet was performed using the DMSO peak, and the proton 90° pulse was calibrated for each solution to compensate for differences in solution ionic strength.
The DMSO was also used as the internal reference. The spectral width was 10.5 ppm with a digital resolution of 0.3 Hz. Spectra were measured with 16 transients and a total recycle time of 60 s; the total data acquisition time was 20 min.

Figure 1. 1H NMR spectrum of the derivatized capsular polysaccharide. Inset is the sidechain structure of the PRP derivatized with butanediamine bromoacetyl chloride (PRPBuA2). Reprinted from Anal. Biochem. 337, Q. Xu, J. Klees, J. Teyral, R. Capen, M. Huang, A. W. Sturgess, J. P. Hennessey, Jr., M. Washabaugh, R. Sitrin, C. Abegunawardana, Quantitative nuclear magnetic resonance analysis and characterization of the derivatized Haemophilus influenzae type b polysaccharide intermediate for PedvaxHIB, 234-245, Copyright (2005), with permission from Elsevier.

Figure 1 shows the 1H NMR spectrum of an intermediate in the synthesis of the capsular polyribosylribitol phosphate (PRP) - outer membrane protein complex. PRP is first activated with 1,1’-carbonyldiimidazole and then reacted with an excess of butanediamine. The resonances of the derivatized PRP are well resolved in this spectrum. Quantitation was performed using an internal reference because this alleviates the need for a standard calibration curve. In determining the percentage of the various forms, neither the molecular weight of the polysaccharide nor the degree of polymerization was needed for the calculation. The Q-NMR assay developed in this paper can find application in product release or process monitoring in the pharmaceutical industry and can potentially replace tedious chromatographic and colorimetric methods.

Reference

Xu, Q.; Klees, J.; Teyral, J.; Capen, R.; Huang, M.; Sturgess, A. W.; Hennessey, J. P.; Washabaugh, M.; Sitrin, R.; Abeygunawardana, C. Anal. Biochem. 2005, 337, 235-245.
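The integral-ratio quantitation used in the isomer application above reduces to simple arithmetic: each integral is normalized by the number of protons giving rise to the resonance, and the minor-component fraction follows directly. The sketch below is illustrative only; the function name and the example integral values are hypothetical, not taken from the paper.

```python
def isomer_percent(integral_minor, integral_major, n_minor=1, n_major=1):
    """Mole percent of the minor isomer from its resonance integral.

    Each integral is normalized by the number of protons giving rise to
    the resonance; the two C-2 proton resonances of fluvoxamine arise
    from equal numbers of protons, so the defaults apply there.
    """
    a_minor = integral_minor / n_minor
    a_major = integral_major / n_major
    return 100.0 * a_minor / (a_minor + a_major)

# Hypothetical integrals for a nominally 5.13% Z stock solution:
z_content = isomer_percent(5.13, 94.87)   # ~5.13 %
```

Because only a ratio of integrals within one spectrum is used, neither the absolute concentration nor the receiver gain needs to be known for this measurement.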
Cyclosporine A (CsA) is a potent calcineurin inhibitor used in transplantation medicine. It is potentially nephrotoxic. The use of CsA in combination with other immunosuppressants, such as sirolimus (SRL) or everolimus (RAD), has been reported to produce a beneficial synergistic effect. In this study, blood samples were collected from rats treated with CsA (10 mg/kg), CsA (10 mg/kg) + SRL (3 mg/kg), and CsA (10 mg/kg) + RAD (3 mg/kg). Whole blood samples were collected and processed by dual chloroform/methanol extraction to yield water- and lipid-soluble extracts. The extracts were analyzed by Q-NMR to predict the metabolic toxicity and to identify and quantify metabolic biomarkers.

A 600 MHz NMR spectrometer was used, and the proton NMR spectra were obtained by coaddition of 40 transients using a relaxation delay of 12 s and a tip angle of 90°. The water resonance was suppressed by selective saturation. A solution of trimethylsilylpropionic acid-d4 (TMSP-d4, 0.00 ppm) was placed in a capillary and inserted into the NMR tube for use as an external standard. All spectra were normalized to the intensity of the TMSP-d4 singlet resonance.

To obtain accurate and meaningful information, NMR spectral data must be carefully processed prior to attempting statistical analysis of the results. In this experiment, Fourier transformation, phase correction and baseline correction were performed. For multivariate statistical analysis, all spectra were normalized to the TMSP-d4 intensity and the full spectrum was bucketed into 0.04 ppm intervals, except for the regions containing the solvent resonances of water and methanol, which were excluded from the statistical analysis. Principal component analysis (PCA) was performed using the AMIX 3.1 software to classify the NMR spectra obtained from animals subjected to different experimental treatments. For metabolite quantification, each 1H peak of the identified metabolites was integrated.
Absolute concentrations of the identified metabolites were calculated using the equation shown below:

$C_x = \dfrac{\dfrac{I_x}{N_x}\times C_{TMSP}}{\dfrac{I_{TMSP}}{N_{TMSP}}}\times \dfrac{V}{M}$

where Cx = metabolite concentration; Ix = integral of the metabolite 1H NMR resonance; Nx = number of protons giving rise to the metabolite 1H peak (from CH, CH2, CH3, etc.); CTMSP = TMSP concentration; ITMSP = integral of the TMSP 1H resonance at 0 ppm; NTMSP = 9, because this resonance is produced by the 9 equivalent protons of the 3 methyl groups; V = volume of the extract; and M = volume of the blood sample.

In the PC1 vs. PC2 scores plot obtained by PCA, the spectral results for all 5 placebo-treated control animals clustered together and overlapped with the samples from the CsA+RAD treated animals. The results for the CsA treated animals were well separated from the control animals. The spectral results that were most different from the controls were those of the CsA+SRL treated animals, which clustered as their own group distinct from the CsA-only treated animals. The PCA loadings plot indicated that the intensities of the following metabolites differed among the CsA+SRL, CsA, and placebo groups: hydroxybutyrate, lactate, total glutathione, creatine + creatinine, trimethylamine-N-oxide (TMAO), and glucose. The only metabolite that increased in all 3 treatment groups was cholesterol.

Q-NMR measurements of individual metabolite levels showed that CsA administration significantly increased the blood concentrations of glucose, hydroxybutyrate and creatine + creatinine, while the levels of glutathione dropped in both the CsA and CsA+SRL treated animals. The blood levels of these metabolites were not significantly different for the CsA+RAD treated animals and the placebo-treated controls. The increase in blood glucose and hydroxybutyrate confirmed the ability of CsA to induce hyperglycemia and hyperketonemia.
The decreased levels of glutathione were thought to be related to CsA-induced oxidative stress. The increased concentrations of metabolites such as creatine and creatinine could reflect decreased renal clearance of these substances. While coadministration with SRL enhanced the metabolic changes indicative of toxicity, combination treatment with RAD partially alleviated these effects. This example illustrates the utility of metabolic profiling by Q-NMR and the need to monitor the toxicodynamic effects of immunosuppressant combinations.

Reference

Serkova, N. J. and Christians, U. Ther. Drug Monit. 2005, 27, 733-737.

5.05: Q-NMR for Time Course Evolution of Malic and Lactic Acid

Figure 1. Structures of malic acid and lactic acid. Conversion of malic acid to lactic acid can take place during malolactic or alcoholic fermentation.

Wines consist of various components present at different concentrations. The major components are ethanol, water, glycerol, sugars, and organic acids such as the malic and lactic acid shown in the figure above. Low levels of malic acid (0.4-0.5 g/L) are a prerequisite for the commercial production of some red wines. In addition, the regulation of this acid is essential in the elaboration of other types of wines, such as white and rosé. The levels of malic acid can be controlled by allowing the spontaneous growth of lactic acid bacteria, which carry out the malolactic fermentation shown in Figure 1 above.

Q-NMR is useful in evaluating wine quality with respect to age, origin, and effects of adulteration. Control of the fermentation process is essential in determining the desired wine quality. This example illustrates the use of Q-NMR to monitor the fermentation process by measuring levels of malic and lactic acid over the concentration range of 1-3.2 mM. The effectiveness of Q-NMR analysis is compared with the results of enzymatic measurements.
In this experiment, wine samples were collected from various tanks containing grapes of different varietals. Samples were collected directly and preserved at -25 °C. Prior to recording the NMR spectra, the sample pH was adjusted to 3.0. Succinic acid was used as an external standard. Matrix effects were evaluated by spiking the wine with malic and lactic acid. NMR spectra were obtained by coaddition of 128 transients using a 90° pulse, a spectral width of 10 ppm and a relaxation delay of 60 s. The water resonance was suppressed by irradiation with a selective presaturation pulse.

In order to evaluate the effectiveness of Q-NMR for this analysis, the results were compared with those obtained from an enzymatic assay that consumes malate and produces NADH (Boehringer test). In this test, the UV absorption of NADH at 340 nm is used for quantitation. The results of the enzymatic assay were in good agreement with the corresponding measurements by Q-NMR. The major advantages of Q-NMR in this example include minimal sample preparation and rapid analysis.

Reference

Avenoza, A.; Busto, J. H.; Canal, N.; Peregrina, J. M. J. Agric. Food Chem. 2006, 54, 4715-4720.
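The metabolite-quantification equation given in the blood-metabonomics study above can be written as a short function. This is a minimal sketch; the function name and the example numbers are hypothetical, but the formula reproduces the equation term by term.

```python
def metabolite_conc(i_x, n_x, c_std, i_std, n_std=9, v_extract=1.0, v_blood=1.0):
    """C_x = (I_x/N_x) * C_TMSP / (I_TMSP/N_TMSP) * (V/M).

    n_std defaults to 9 for the nine equivalent methyl protons of
    TMSP-d4; the extract and blood volumes, given in the same units,
    combine into a dimensionless correction factor.
    """
    return (i_x / n_x) * c_std / (i_std / n_std) * (v_extract / v_blood)

# Hypothetical example: a CH3 singlet (3 protons) with the same integral
# as the 9-proton TMSP resonance corresponds to 3x the TMSP concentration.
c_x = metabolite_conc(i_x=1.0, n_x=3, c_std=1.0e-3, i_std=1.0)
```

Normalizing each integral by its proton count is what lets a single standard quantify many different metabolites from one spectrum.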
This guide is intended to assist instructors in the utilization of the Q-NMR learning module. While the module is designed as a stand-alone resource to accommodate self-learners, it can also be used as an active learning resource. This instructor's guide is designed to help.

06: Instructor's Guide

The questions below are also listed on the webpage that links to the Basic Theory section. These questions can be handed out in class or given as a homework assignment. Students should be able to answer these questions using the Basic Theory section of this module as an instructional resource. I would also recommend assigning the excellent web resource created by Joe Hornak, since it contains embedded animations that help clarify many difficult-to-understand processes in NMR. This site can be accessed at http://www.cis.rit.edu/htbooks/nmr/.

• What is spin?
• How does absorption of energy generate an NMR spectrum?
• Why is NMR less sensitive than UV-visible spectroscopy?
• What is chemical shift and how does it relate to resonance frequency?
• What is precession?
• How does precession produce the macroscopic magnetization (Mo)?
• How can the nuclear spins be manipulated to generate the NMR spectrum?
• What is the tip angle?
• What is a Free Induction Decay?
• How do T1 and T2 relaxation affect NMR spectra?

6.02: Answers to Questions in the Basic Theory section

In addition to the conceptual questions, the Basic Theory section also contains a series of simple quantitative questions, the answers to which are provided below.

Question 1

How many spin states would you predict for 2H?

Solution

Deuterium has a spin of 1. Therefore there should be 3 possible spin states: +1, 0 and -1.

Question 2

Given the same magnetic field and temperature, how would the difference in population for 1H and 31P compare?
Solution

For this problem we will use the following equation:

$\dfrac{N_{upper}}{N_{lower}} = e^{\large\frac{-∆E}{kT}}$

The difference in population for 1H and 31P will be related to the difference in their ∆E values. Since ∆E = γhBo/2π, for a fixed magnetic field the only difference between 1H and 31P is in their magnetogyric ratios.

$\dfrac{∆E( ^1H)}{∆E( ^{31}P)} = \dfrac{26.752}{10.84} = 2.468$

Because ∆E ≪ kT, the exponential can be approximated as e^(-∆E/kT) ≈ 1 - ∆E/kT, so the fractional population difference is proportional to ∆E. The population difference for 1H is therefore about 2.468 times larger than that for 31P.

Question 3

Calculate the wavelength of electromagnetic radiation corresponding to a frequency of 500 MHz.

Solution

Using λ = c/ν, the wavelength of electromagnetic radiation corresponding to a frequency of 500 MHz is (3.00 × 108 m/s)/(500 × 106 Hz) = 0.6 m.

Question 4

What range of frequencies would be excited by a 10 µs rf pulse?

Solution

The excitation bandwidth is approximately the reciprocal of the pulse width, so a 10 µs rf pulse would excite a range of frequencies covering about 100,000 Hz.

Question 5

What are the resonance line widths of nuclei that have apparent T2 relaxation times (i.e., T2* values) of 1 and 2 s?

Solution

$w_{\large\frac{1}{2}} = \dfrac{1}{πT_2^*}$

Therefore, the two resonances have line widths of 0.32 and 0.16 Hz, respectively.

6.03: Practical Aspects Concept Questions

The questions below are also listed on the webpage that links to the Practical Aspects section. These questions can be handed out in class or given as a homework assignment. Students should be able to answer these questions using the Practical Aspects section of this module as an instructional resource.

• How do I choose a reference standard for my Q-NMR analysis?
• How is the internal standard used to quantify the concentration of my analyte?
• What sample considerations are important in Q-NMR analysis?
• How do I choose the right acquisition parameters for a quantitative NMR measurement?
• What data processing considerations are important for obtaining accurate and precise results?
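The numerical answers in the Basic Theory section can be checked with a few lines of arithmetic. This is a minimal sketch; the variable and function names are ours, and the magnetogyric ratios are the values quoted in Question 2 (in units of 10^7 rad T^-1 s^-1).

```python
import math

# Magnetogyric ratios as quoted in Question 2 (1e7 rad T^-1 s^-1)
GAMMA_H, GAMMA_P = 26.752, 10.840

# Question 2: Delta-E = gamma*h*Bo/(2*pi), so at fixed field the Delta-E
# ratio is simply the ratio of magnetogyric ratios; since Delta-E << kT,
# the population differences scale by the same factor.
delta_e_ratio = GAMMA_H / GAMMA_P          # ~2.468

# Question 3: wavelength of 500 MHz radiation, lambda = c / nu
wavelength = 2.998e8 / 500e6               # ~0.6 m

# Question 4: excitation bandwidth of a 10 us pulse, roughly 1/t_p
bandwidth = 1 / 10e-6                      # ~100,000 Hz

# Question 5: line width at half height, w = 1/(pi * T2*)
def linewidth(t2_star):
    return 1 / (math.pi * t2_star)         # ~0.32 Hz for T2* = 1 s
```

Running these reproduces the answers above to the stated precision.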
Question 1

A quantitative NMR experiment is performed to quantify the amount of isopropyl alcohol in a $D_2O$ solution. Sodium maleate (0.01021 M) is used as an internal standard. The integral obtained for the maleate resonance is 46.978. The isopropanol doublet at 1.45 ppm produces an integral of 104.43. What would you predict for the integral of the isopropanol CH resonance at 3.99 ppm? What is the concentration of isopropanol in this solution?

Solution

The isopropanol CH resonance is produced by a single proton, whereas the doublet is produced by the 6 methyl protons. Therefore, the CH integral should be 1/6th that of the methyl doublet, or 17.405.

To find the isopropanol concentration we first have to calculate normalized areas for isopropanol and our standard, maleate. The isopropanol (IP) doublet is comprised of 6 protons due to the two equivalent methyl groups of this compound.

$\textrm{Normalized Area (IP)} = \dfrac{104.43}{6} = 17.405$

Similarly, the normalized area for maleate (MA) is:

$\textrm{Normalized Area (MA)} = \dfrac{46.978}{2} = 23.489$

The concentration of the isopropanol can be calculated using the known maleate concentration.

$\mathrm{[IP] = \dfrac{[MA] \times Normalized\: Area\: (IP)}{Normalized\: Area\: (MA)}}$

$\mathrm{[IP] = \dfrac{0.01021\: M \times 17.405}{23.489} = 0.007565\: M}$

Because the accuracy of the determination depends on how well the maleate concentration is known, the standard solution should be prepared with care: using dried sodium maleate of high purity, carefully weighing a mass that is known to an appropriate number of significant figures (in this case 4), transferring the maleate quantitatively to a volumetric flask, and finally diluting to the mark. Again, an appropriate solution volume must be selected to produce the desired number of significant figures given the manufacturer's specifications for the glassware used.
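The internal-standard calculation in Question 1 can be expressed as a reusable function. The function name is ours; the numbers are those given in the question.

```python
def analyte_conc(c_std, integral_x, n_x, integral_std, n_std):
    """Internal-standard Q-NMR: [X] = [std] * (I_X/N_X) / (I_std/N_std)."""
    return c_std * (integral_x / n_x) / (integral_std / n_std)

# Predicted CH integral: one proton vs. the six-proton methyl doublet
ch_integral = 104.43 / 6                            # ~17.405

# Isopropanol concentration from the maleate internal standard
c_ip = analyte_conc(0.01021, 104.43, 6, 46.978, 2)  # ~0.007565 M
```

The same function applies to any analyte/standard pair once the proton counts behind each integral are known.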
Question 2

A solution prepared for quantitative analysis by NMR produced, after coaddition of 8 FIDs, a spectrum with an S/N of 62.5 for the analyte signals. How many FIDs would have to be coadded to produce a spectrum with an S/N of 250?

Solution

S/N increases in NMR experiments as the square root of the number of scans coadded.

$\mathrm{S/N ∝ (n)^{0.5}}$

To increase the S/N from 62.5 to 250 (a factor of 4 increase in S/N) would require coaddition of 16 times as many FIDs as were used to produce the spectrum with S/N of 62.5. The answer is that coaddition of 128 FIDs (8 x 16) would be required to achieve an S/N of 250.

Question 3

A 1H NMR spectrum was measured using a 400.0 MHz instrument by acquisition of 16384 total data points (8192 real points) and a spectral width of 12.00 ppm. What was the acquisition time? Calculate the digital resolution of the resulting spectrum. Is this digital resolution sufficient to accurately define a peak with a width at half height of 0.5 Hz?

Solution

We can calculate the acquisition time knowing the spectral width (12.00 ppm × 400.0 MHz = 4800 Hz) and the total number of data points.

$\mathrm{AT= \dfrac{NP}{2\: SW} = \dfrac{16384}{2 \times 4800} = 1.707\: sec}$

$\mathrm{DR = \dfrac{SW}{NP(real)} = \dfrac{4800}{8192} = 0.586\: Hz/pt}$

This would not be adequate digital resolution to accurately define a peak with a 0.5 Hz width at half height, which would span only about a single point. A longer acquisition time would allow for collection of more points. Zero-filling could also be used to help increase the digital resolution.
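The relationships used in Questions 2 and 3 are one-liners; the sketch below (function names ours) makes the scaling explicit. Note that the digital resolution is simply the reciprocal of the acquisition time.

```python
# Question 2: S/N grows as the square root of the number of coadded scans,
# so reaching a target S/N requires (sn_target/sn_initial)^2 more scans.
def scans_needed(n_initial, sn_initial, sn_target):
    return n_initial * (sn_target / sn_initial) ** 2

# Question 3: acquisition time for np_total points (half of them real)
# and spectral width sw_hz; digital resolution from the real points.
def acq_time(np_total, sw_hz):
    return np_total / (2 * sw_hz)

def digital_resolution(sw_hz, np_real):
    return sw_hz / np_real

sw = 400.0 * 12.00   # spectral width in Hz for 12.00 ppm at 400.0 MHz
```

For the numbers in Question 3, `acq_time(16384, sw)` gives about 1.71 s and `digital_resolution(sw, 8192)` about 0.59 Hz/pt.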
KHP T1 relaxation times

There are two ways of getting the T1 relaxation times for the KHP resonances. The simplest way is to have students estimate the null times for the two resonances in the inversion-recovery spectra provided. Selected spectra from the dry lab data set were used to make the figure in the Practical Aspects section of the module. Alternatively, students can process the spectra and measure resonance integrals for each peak. These can be plotted vs. the relaxation delay and fit to determine T1. The integrals we obtained are summarized in Table 1 below. The fits obtained using Origin 7.5 are also provided. We obtained T1 values of 4.79 s and 3.11 s for the KHP resonances at 7.75 and 7.61 ppm, respectively.

The concentration of KHP in the stock solution is determined from the mass of KHP.

$\mathrm{Mass\: KHP = 0.3533\: g - 0.2219\: g = 0.1314\: g}$

$\mathrm{[KHP] = \dfrac{0.1314\: g}{204.22\: g/mol}\times\dfrac{1}{0.005\: L} = 0.1287\: M}$

Table 1. Inversion-recovery data for KHP

| Delay (s) | Integral R1 (7.75 ppm) | Integral R2 (7.61 ppm) |
|-----------|------------------------|------------------------|
| 0         | -8.34                  | -8.48                  |
| 2         | -2.38                  | 0.07                   |
| 2.5       | -1.01                  | 1.65                   |
| 3         | -0.14                  | 2.81                   |
| 3.5       | 0.8                    | 3.78                   |
| 4         | 1.62                   | 4.57                   |
| 6         | 3.87                   | 6.86                   |
| 10        | 6.37                   | 8.63                   |
| 15        | 8.28                   | 9.67                   |
| 20        | 8.95                   | 9.65                   |

Inversion-recovery plot for R1

Inversion-recovery plot for R2

Malic Acid Standard Solution Determination

From the mass of malic acid weighed and the solution volume, we can calculate the concentration of this standard.

$\mathrm{Mass\: of\: MA = 0.3324\: g - 0.1897\: g = 0.1427\: g}$

$\mathrm{[MA] = \dfrac{0.1427\:g}{134.09\:g/mol}\times \dfrac{1}{0.005\:L} = 0.2128\:M}$

Since equal volumes of the KHP and malic acid solutions were mixed to prepare this solution, the dilution factor can be neglected and the concentration of malic acid calculated as shown below. Here we used the integrals of the resonances at 2.82 and 2.89 ppm corresponding to the inequivalent malic acid CH2 protons.
$\mathrm{[MA] = [KHP] \times \dfrac{Int_{MA}}{N_{MA}} \times \dfrac{N_{KHP}}{Int_{KHP}} = 0.1287\:M \times \dfrac{0.8322}{2} \times \dfrac{4}{1.000} = 0.2142\: M}$

Spectrum of malic acid standard solution containing KHP

Determination of Malic Acid in Apple Juice

The concentration of malic acid can similarly be calculated in an apple juice sample, though here we will need to take into account the dilutions performed. The KHP was diluted twice in preparing this solution. First it was diluted by half with D2O, and subsequently 100 µL of this solution was further diluted by addition to 900 µL of apple juice for a total volume of 1 mL. The KHP concentration in the apple juice sample can be calculated as shown below. Note that the volume added in making the pH adjustment to 1.35 is not important, since both the KHP and the malic acid will be diluted by the same amount. This pH adjustment is necessary to resolve the malic acid resonances from those of the other apple juice components. Because of the simplicity of the malic acid standard spectrum, pH adjustment is not needed for the standard.

$\mathrm{[KHP]_{juice} = [KHP]_{stock}\times\dfrac{1}{2}\times\dfrac{100}{1000} = 0.1287\:M \times\dfrac{1}{2}\times\dfrac{100}{1000} = 0.00643\:M}$

The malic acid concentration can then be calculated as before, provided that the dilution resulting from addition of the KHP solution is included in the calculation.

$\mathrm{[MA]_{juice} = [KHP]_{juice} \times \dfrac{Int_{MA}}{N_{MA}} \times \dfrac{N_{KHP}}{Int_{KHP}} \times \dfrac{1000}{900} = 0.00643\: M \times \dfrac{1.000}{2} \times \dfrac{4}{0.4351}\times \dfrac{1000}{900} = 0.0328\: M}$

For purposes of comparison with the table provided in the background section of this laboratory, we can convert this concentration to g/L using the molecular weight of malic acid.

$\mathrm{0.0328\: M \times 134.09\: g/mol = 4.40\: g/L\: malic\: acid}$

Spectrum of apple juice sample containing KHP
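The full dry-lab calculation chain above can be reproduced in a few lines. The function names are ours; the masses, integrals and dilution factors are those quoted above, and small differences in the last digit arise only from intermediate rounding in the worked equations.

```python
def conc_from_mass(mass_g, mol_wt, volume_l):
    """Molar concentration of a solution from a weighed mass."""
    return mass_g / mol_wt / volume_l

# Stock solutions (masses by difference, 5 mL volumetric flasks)
khp = conc_from_mass(0.3533 - 0.2219, 204.22, 0.005)          # ~0.1287 M
ma_std_grav = conc_from_mass(0.3324 - 0.1897, 134.09, 0.005)  # ~0.2128 M

def conc_by_ratio(c_ref, int_x, n_x, int_ref, n_ref, dilution=1.0):
    """[X] = [ref] * (I_x/N_x) * (N_ref/I_ref) * dilution."""
    return c_ref * (int_x / n_x) * (n_ref / int_ref) * dilution

# Malic acid in the standard solution by NMR (CH2 integrals, KHP reference)
ma_std = conc_by_ratio(khp, 0.8322, 2, 1.000, 4)              # ~0.2142 M

# KHP in the juice sample: diluted 1:1 with D2O, then 100 uL into 1 mL
khp_juice = khp * (1 / 2) * (100 / 1000)                      # ~0.00643 M

# Malic acid in apple juice, including the 1000/900 dilution factor
ma_juice = conc_by_ratio(khp_juice, 1.000, 2, 0.4351, 4, dilution=1000 / 900)

# Convert to g/L with the molecular weight of malic acid (~4.4 g/L)
ma_juice_g_per_l = ma_juice * 134.09
```

Carrying full precision through the chain, rather than the rounded intermediates shown in the worked equations, is good practice for students checking their own results.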
• 1.1: Atoms and Photons - Origin of the Quantum Theory
The origin of quantum theory can be marked by three diverse phenomena involving electromagnetic radiation, which could not be adequately explained by the methods of classical physics. First among these was blackbody radiation. Next was the photoelectric effect. Third was the origin of line spectra. A coherent formulation of quantum mechanics was eventually developed in 1925 and 1926, principally through the work of Schrödinger, Heisenberg and Dirac.

• 1.2: Waves and Particles
For all its relevance, the quantum world differs quite dramatically from the world of everyday experience. To understand the modern theory of matter, conceptual hurdles of both psychological and mathematical variety must be overcome. A paradox which stimulated the early development of the quantum theory concerned the indeterminate nature of light. Light usually behaves as a wave phenomenon but occasionally it betrays a particle-like aspect, a schizoid tendency known as the wave-particle duality.

• 1.3: Quantum Mechanics of Some Simple Systems
The simplest system in quantum mechanics has the potential energy V=0 everywhere. This is called a free particle since it has no forces acting on it.

• 1.4: Principles of Quantum Mechanics
Here we will continue to develop the mathematical formalism of quantum mechanics, using heuristic arguments as necessary. This will lead to a system of postulates which will be the basis of our subsequent applications of quantum mechanics.

• 1.5: Harmonic Oscillator
The harmonic oscillator is a model which has several important applications in both classical and quantum mechanics. It serves as a prototype in the mathematical treatment of such diverse phenomena as elasticity, acoustics, AC circuits, molecular and crystal vibrations, electromagnetic fields and optical properties of matter.

• 1.6: Angular Momentum
Angular momentum is the rotational analog of linear momentum. It is an important quantity in classical physics because it is a conserved quantity. The extension of this concept to particles in the quantum world is straightforward.

• 1.7: Hydrogen Atom
Bohr sought to avoid an atomic catastrophe by proposing that certain orbits of the electron around the nucleus could be exempted from classical electrodynamics and remain stable. The Bohr model was quantitatively successful for the hydrogen atom, as we shall now show. In contrast to the particle in a box and the harmonic oscillator, the hydrogen atom is a real physical system that can be treated exactly by quantum mechanics.

• 1.8: Helium Atom
The second element in the periodic table provides our first example of a quantum-mechanical problem which cannot be solved exactly. Nevertheless, as we will show, approximation methods applied to helium can give accurate solutions in perfect agreement with experimental results. In this sense, it can be concluded that quantum mechanics is correct for atoms more complicated than hydrogen. By contrast, the Bohr theory failed miserably in attempts to apply it beyond the hydrogen atom.

• 1.9: Atomic Structure and The Periodic Law
Quantum mechanics can account for the periodic structure of the elements, by any measure a major conceptual accomplishment for any theory. Although accurate computations become increasingly more challenging as the number of electrons increases, the general patterns of atomic behavior can be predicted with remarkable accuracy.

• 1.10: The Chemical Bond

• 1.11: Molecular Orbital Theory
Molecular orbital theory is a conceptual extension of the orbital model, which was so successfully applied to atomic structure. As was once playfully remarked, "a molecule is nothing more than an atom with more nuclei." This may be overly simplistic. Our understanding of atomic orbitals began with the exact solutions of a prototype problem – the hydrogen atom. We will begin our study of homonuclear diatomic molecules with another exactly solvable prototype, the hydrogen molecule-ion.

• 1.12: Molecular Symmetry
In many cases, the symmetry of a molecule provides a great deal of information about its quantum states, even without a detailed solution of the Schrödinger equation. A geometrical transformation which turns a molecule into an indistinguishable copy of itself is called a symmetry operation. A symmetry operation can consist of a rotation about an axis, a reflection in a plane, an inversion through a point, or some combination of these.

• 1.13: Molecular Spectroscopy
Our most detailed knowledge of atomic and molecular structure has been obtained from spectroscopy: the study of the emission, absorption and scattering of electromagnetic radiation accompanying transitions among atomic or molecular energy levels. Whereas atomic spectra involve only electronic transitions, the spectroscopy of molecules is more intricate because vibrational and rotational degrees of freedom come into play as well. Early observations of absorption or emission by molecules were character…

• 1.14: Nuclear Magnetic Resonance
Nuclear magnetic resonance (NMR) is a versatile and highly sophisticated spectroscopic technique which has been applied to a growing number of diverse applications in science, technology and medicine. This chapter will consider, for the most part, magnetic resonance involving protons.

01: Chapters

Atomic and Subatomic Particles

The notion that the building blocks of matter are invisibly tiny particles called atoms is usually traced back to the Greek philosophers Leucippus of Miletus and Democritus of Abdera in the 5th Century BC. The English chemist John Dalton developed the atomic philosophy of the Greeks into a true scientific theory in the early years of the 19th Century.
His treatise New System of Chemical Philosophy gave cogent phenomenological evidence for the existence of atoms and applied the atomic theory to chemistry, providing a physical picture of how elements combine to form compounds consistent with the laws of definite and multiple proportions. Table $1$ summarizes some very early measurements (by Sir Humphry Davy) on the relative proportions of nitrogen and oxygen in three gaseous compounds.

Table $1$: Oxides of Nitrogen
Compound   Percent N   Percent O   Ratio
I          29.50       70.50       0.418
II         44.05       55.95       0.787
III        63.30       36.70       1.725

We would now identify these compounds as NO2, NO and N2O, respectively. We see in data such as these a confirmation of Dalton's atomic theory: that compounds consist of atoms of their constituent elements combined in small whole number ratios. The mass ratios in Table $1$ are, with modern accuracy, 0.438, 0.875 and 1.750. After over 2000 years of speculation and reasoning from indirect evidence, it is now possible in a sense to actually see individual atoms, as shown for example in Figure $1$. The word "atom" comes from the Greek atomos, meaning literally "indivisible." It became evident in the late 19th Century, however, that the atom was not truly the ultimate particle of matter. Michael Faraday's work had suggested the electrical nature of matter and the existence of subatomic particles. This became manifest with the discovery of radioactive decay by Henri Becquerel in 1896: the emission of alpha, beta and gamma particles from atoms. In 1897, J. J. Thomson identified the electron as a universal constituent of all atoms and showed that it carried a negative electrical charge, now designated -e. To probe the interior of the atom, Ernest Rutherford in 1911 bombarded a thin sheet of gold with a stream of positively-charged alpha particles emitted by a radioactive source.
Most of the high-energy alpha particles passed right through the gold foil, but a small number were strongly deflected, indicating the presence of a small but massive positive charge in the center of the atom (Figure $2$). Rutherford proposed the nuclear model of the atom. As we now understand it, an electrically-neutral atom of atomic number Z consists of a nucleus of positive charge +Ze, containing almost the entire mass of the atom, surrounded by Z electrons of very small mass, each carrying a charge -e. The simplest atom is hydrogen, with Z = 1, consisting of a single electron outside a single proton of charge +e. With the discovery of the neutron by Chadwick in 1932, the structure of the atomic nucleus was clarified. A nucleus of atomic number Z and mass number A is composed of Z protons and A-Z neutrons. Nuclear diameters are of the order of several times $10^{-15}$ m. From the perspective of an atom, which is $10^5$ times larger, a nucleus behaves, for most purposes, like a point charge +Ze. During the 1960's, compelling evidence began to emerge that protons and neutrons themselves had composite structures, with major contributions by Murray Gell-Mann. According to the currently accepted "Standard Model," the proton and neutron are each made of three quarks, with compositions uud and udd, respectively. The up quark u has a charge of $+ \frac{2}{3}e$, while the down quark d has a charge of $-\frac{1}{3}e$. Despite heroic experimental efforts, individual quarks have never been isolated, evidently placing them in the same category with magnetic monopoles. By contrast, the electron maintains its status as an indivisible elementary particle.

Electromagnetic Waves

Perhaps the greatest achievement of physics in the 19th century was James Clerk Maxwell's unification in 1864 of the phenomena of electricity, magnetism and optics. An (optional) summary of Maxwell's equations is given in Supplement 1A.
Heinrich Hertz in 1887 was the first to demonstrate experimentally the production and detection of the electromagnetic waves predicted by Maxwell (specifically radio waves) by acceleration of electrical charges. As shown in Figure $3$, electromagnetic waves consist of mutually perpendicular electric and magnetic fields, E and B respectively, oscillating in synchrony at high frequency and propagating in the direction of E × B. The wavelength $\lambda$ is the distance between successive maxima of the electric (or magnetic) field. The frequency $\nu$ represents the number of oscillations per second observed at a fixed point in space. The reciprocal of the frequency, $\tau = \frac{1}{\nu}$, represents the period of oscillation, the time it takes for one wavelength to pass a fixed point. The speed of propagation of the wave is therefore determined by $\lambda = c \tau$, or in more familiar form $\lambda \nu = c \label{1}$ where $c = 2.9979 \times 10^8$ m/sec, usually called the speed of light, applies to all electromagnetic waves in vacuum. Frequencies are expressed in hertz (Hz), defined as the number of oscillations per second. Electromagnetic radiation is now known to exist in an immense range of wavelengths including gamma rays, X-rays, ultraviolet, visible light, infrared, microwaves and radio waves, as shown in Figure $4$.

Three Failures of Classical Physics

Isaac Newton's masterwork, Principia, published in 1687, can be considered to mark the beginning of modern physical science. Not only did Newton delineate the fundamental laws governing motion and gravitation but he established a general philosophical worldview which pervaded all scientific theories for two centuries afterwards. This system of thinking about the physical world is known as "Classical Physics." Its most notable feature is the primacy of cause and effect relationships.
Given sufficient information about the present state of part of the Universe, it should be possible, at least in principle, to predict its future behavior (as well as its complete history). This capability is known as determinism. For example, solar and lunar eclipses can be predicted centuries ahead, within an accuracy of several seconds. (But interestingly, we can't predict even a couple of days in advance if the weather will be clear enough to view the eclipse!) The other great pillar of classical physics is Maxwell's theory of electromagnetism. The origin of quantum theory can be marked by three diverse phenomena involving electromagnetic radiation, which could not be adequately explained by the methods of classical physics. First among these was blackbody radiation, which led to the contribution of Max Planck in 1900. Next was the photoelectric effect, treated by Albert Einstein in 1905. Third was the origin of line spectra, the hero being Niels Bohr in 1913. A coherent formulation of quantum mechanics was eventually developed in 1925 and 1926, principally the work of Schrödinger, Heisenberg and Dirac. The remainder of this Chapter will describe the early contributions to the quantum theory by Planck, Einstein and Bohr.

Blackbody Radiation

It is a matter of experience that a hot object can emit radiation. A piece of metal stuck into a flame can become "red hot." At higher temperatures, its glow can be described as "white hot." Under even more extreme thermal excitation it can emit predominantly blue light (completing a very patriotic sequence of colors!). Josiah Wedgwood, the famous pottery designer, noted as far back as 1782 that different materials become red hot at the same temperature. The quantitative relation between color and temperature is described by the blackbody radiation law. A blackbody is an idealized perfect absorber and emitter of all possible wavelengths $\lambda$ of the radiation.
Figure $5$ shows experimental wavelength distributions of thermal radiation at several temperatures. Consistent with our experience, the maximum in the distribution, which determines the predominant color, shifts to shorter wavelengths as the temperature increases. This relation is given by Wien's displacement law, which can be expressed $T \lambda_{max} = 2.898 \times 10^6\, nm\, K$ where the wavelength is expressed in nanometers (nm). At room temperature (300 K), the maximum occurs around 10 $\mu m$, in the infrared region. In Figure $5$, the approximate values of $\lambda_{max}$ are 2900 nm at 1000 K, 1450 nm at 2000 K and 500 nm at 5800 K, the approximate surface temperature of the Sun. The Sun's $\lambda_{max}$ is near the middle of the visible range (380-750 nm) and is perceived by our eyes as white light. The origin of blackbody radiation was a major challenge to 19th Century physics. Lord Rayleigh proposed that the electromagnetic field could be represented by a collection of oscillators of all possible frequencies. By simple geometry, the higher-frequency (lower wavelength) modes of oscillation are increasingly numerous, since it is possible to fit their waves into an enclosure in a larger number of arrangements. In fact, the number of oscillators increases very rapidly as $\lambda^{-4}$. Rayleigh assumed that every oscillator contributed equally to the radiation (the equipartition principle). This agrees fairly well with experiment at low frequencies. But if ultraviolet rays and higher frequencies were really produced in increasing number, we would get roasted like marshmallows by sitting in front of a fireplace! Fortunately, this doesn't happen, and the incorrect theory is said to suffer from an "ultraviolet catastrophe." Max Planck in 1900 derived the correct form of the blackbody radiation law by introducing a bold postulate.
He proposed that energies involved in absorption and emission of electromagnetic radiation did not belong to a continuum, as implied by Maxwell's theory, but were actually made up of discrete bundles which he called "quanta." Planck's idea is traditionally regarded as marking the birth of the quantum theory. A quantum associated with radiation of frequency $\nu$ has the energy $E = h \nu \label{2}$ where the proportionality factor $h = 6.626 \times 10^{-34}$ J sec is known as Planck's constant. For our development of the quantum theory of atoms and molecules, we need only this simple result and do not have to follow the remainder of Planck's derivation. If you insist, however, the details are given in Supplement 1B.

The Photoelectric Effect

A familiar device in modern technology is the photocell or "electric eye," which runs a variety of useful gadgets, including automatic door openers. The principle involved in these devices is the photoelectric effect, which was first observed by Heinrich Hertz in the same laboratory in which he discovered electromagnetic waves. Visible or ultraviolet radiation impinging on clean metal surfaces can cause electrons to be ejected from the metal. Such an effect is not, in itself, inconsistent with classical theory since electromagnetic waves are known to carry energy and momentum. But the detailed behavior as a function of radiation frequency and intensity cannot be explained classically. The energy required to eject an electron from a metal is determined by its work function $\Phi$. For example, sodium has $\Phi = 1.82$ eV. The electron-volt is a convenient unit of energy on the atomic scale: 1 eV = $1.602 \times 10^{-19}$ J. This corresponds to the energy which an electron picks up when accelerated across a potential difference of 1 volt. The classical expectation would be that radiation of sufficient intensity should cause ejection of electrons from a metal surface, with their kinetic energies increasing with the radiation intensity.
Moreover, a time delay would be expected between the absorption of radiation and the ejection of electrons. The experimental facts are quite different. It is found that no electrons are ejected, no matter how high the radiation intensity, unless the radiation frequency exceeds some threshold value $\nu_{0}$ for each metal. For sodium $\nu_{0} = 4.39 \times 10^{14}$ Hz (corresponding to a wavelength of 683 nm), as shown in Figure $6$. For frequencies $\nu$ above the threshold, the ejected electrons acquire a kinetic energy given by $\frac{1}{2}mv^{2} =h( \nu - \nu_{0}) =h \nu - \Phi \label{3}$ Evidently, the work function $\Phi$ can be identified with $h \nu_{0}$, equal to $3.65 \times 10^{-19}$ J = 1.82 eV for sodium. The kinetic energy increases linearly with frequency above the threshold but is independent of the radiation intensity. Increased intensity does, however, increase the number of photoelectrons. Einstein's explanation of the photoelectric effect in 1905 appears trivially simple once stated. He accepted Planck's hypothesis that a quantum of radiation carries an energy $h \nu$. Thus, if an electron is bound in a metal with an energy $\Phi$, a quantum of energy $h \nu_{0} = \Phi$ will be sufficient to dislodge it. And any excess energy $h( \nu - \nu_{0})$ will appear as kinetic energy of the ejected electron. Einstein believed that the radiation field actually did consist of quantized particles, which he named photons. Although Planck himself never believed that quanta were real, Einstein's success with the photoelectric effect greatly advanced the concept of energy quantization.

Line Spectra

Most of what is known about atomic (and molecular) structure and mechanics has been deduced from spectroscopy. Figure $7$ shows two different types of spectra. A continuous spectrum can be produced by an incandescent solid or gas at high pressure. Blackbody radiation, for example, is a continuum.
An emission spectrum can be produced by a gas at low pressure excited by heat or by collisions with electrons. An absorption spectrum, consisting of a series of dark lines characteristic of the composition of the gas, results when light from a continuous source passes through a cooler gas. Fraunhofer between 1814 and 1823 discovered nearly 600 dark lines in the solar spectrum viewed at high resolution. It is now understood that these lines are caused by absorption by the outer layers of the Sun. Gases heated to incandescence were found by Bunsen, Kirchhoff and others to emit light with a series of sharp wavelengths. The emitted light analyzed by a spectrometer (or even a simple prism) appears as a multitude of narrow bands of color. These so-called line spectra are characteristic of the atomic composition of the gas. The line spectra of several elements are shown in Figure $8$. It is consistent with classical electromagnetic theory that motions of electrical charges within atoms can be associated with the absorption and emission of radiation. What is completely mysterious is how such radiation can occur for discrete frequencies, rather than as a continuum. The breakthrough that explained line spectra is credited to Niels Bohr in 1913. Building on the ideas of Planck and Einstein, Bohr postulated that the energy levels of atoms belong to a discrete set of values $E_{n}$, rather than a continuum as in classical mechanics. When an atom makes a downward energy transition from a higher energy level $E_{m}$ to a lower energy level $E_{n}$, it causes the emission of a photon of energy $h \nu =E_{m} - E_{n} \label{4}$ This is what accounts for the discrete values of frequency $\nu$ in emission spectra of atoms. Absorption spectra are correspondingly associated with the annihilation of a photon of the same energy and concomitant excitation of the atom from $E_{n}$ to $E_{m}$.
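The frequency condition of Equation $\ref{4}$ is easy to evaluate numerically. A minimal sketch, borrowing the hydrogen level energies $E_n = -13.6\ \mathrm{eV}/n^2$ from the Bohr result discussed below (an assumption here, not yet derived):

```python
# Bohr frequency condition (Equation 4): h*nu = E_m - E_n.
# Hydrogen level energies E_n = -13.6 eV / n^2 (Bohr result, quoted
# here in anticipation of the model described below).
h = 6.626e-34      # Planck's constant, J s
c = 2.9979e8       # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

def E(n):
    return -13.6 * eV / n**2   # energy of level n, in joules

delta_E = E(3) - E(2)          # downward 3 -> 2 transition: photon energy, J
nu = delta_E / h               # emitted frequency, Hz
lam_nm = c / nu * 1e9          # corresponding wavelength, nm

print(round(lam_nm))           # -> 656, the red Balmer line of hydrogen
```

The same arithmetic run in reverse converts any observed line frequency into an energy-level difference, which is how spectra map out the set of values $E_n$.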
Figure $9$ is a schematic representation of the processes of absorption and emission of photons by atoms. Absorption and emission processes occur at the same set of frequencies, as is shown by the two line spectra in Figure $7$. Rydberg (1890) found that all the lines of the atomic hydrogen spectrum could be fitted to a simple empirical formula $\dfrac{1}{ \lambda} =R\left( \dfrac{1}{n_1^2} -\dfrac{1}{n_2^2}\right), \quad n_1 = 1,2,3...,\ n_2>n_1 \label{5}$ where R, known as the Rydberg constant, has the value 109,677 cm$^{-1}$. This formula was found to be valid for hydrogen spectral lines in the infrared and ultraviolet regions, in addition to the four lines in the visible region. No analogously simple formula has been found for any atom other than hydrogen. Bohr proposed a model for the energy levels of a hydrogen atom which agreed with Rydberg's formula for radiative transition frequencies. Inspired by Rutherford's nuclear atom, Bohr suggested a planetary model for the hydrogen atom in which the electron goes around the proton in one of a set of allowed circular orbits, as shown in Fig. 8. A more fundamental understanding of the discrete nature of orbits and energy levels had to await the discoveries of 1925-26, but Bohr's model provided an invaluable stepping-stone to the development of quantum mechanics. We will consider the hydrogen atom in greater detail in Chap. 7.
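Rydberg's formula can be checked directly against the visible hydrogen lines. A minimal sketch: the series with $n_1 = 2$ (the Balmer series) yields the four visible lines mentioned above.

```python
# Rydberg formula (Equation 5): 1/lambda = R * (1/n1^2 - 1/n2^2)
R = 109677.0                                   # Rydberg constant, cm^-1

def wavelength_nm(n1, n2):
    inv_lam = R * (1.0 / n1**2 - 1.0 / n2**2)  # 1/lambda, in cm^-1
    return 1e7 / inv_lam                       # 1 cm = 1e7 nm

# The four visible (Balmer) lines correspond to n1 = 2, n2 = 3..6
balmer = [round(wavelength_nm(2, n2), 1) for n2 in (3, 4, 5, 6)]
print(balmer)                                  # -> [656.5, 486.3, 434.2, 410.3]
```

Setting $n_1 = 1$ or $n_1 = 3$ in the same function generates the ultraviolet (Lyman) and infrared (Paschen) series.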
Quantum mechanics is the theoretical framework which describes the behavior of matter on the atomic scale. It is the most successful quantitative theory in the history of science, having withstood thousands of experimental tests without a single verifiable exception. It has correctly predicted or explained phenomena in fields as diverse as chemistry, elementary-particle physics, solid-state electronics, molecular biology and cosmology. A host of modern technological marvels, including transistors, lasers, computers and nuclear reactors are offspring of the quantum theory. Possibly 30% of the US gross national product involves technology which is based on quantum mechanics. For all its relevance, the quantum world differs quite dramatically from the world of everyday experience. To understand the modern theory of matter, conceptual hurdles of both psychological and mathematical variety must be overcome. A paradox which stimulated the early development of the quantum theory concerned the indeterminate nature of light. Light usually behaves as a wave phenomenon but occasionally it betrays a particle-like aspect, a schizoid tendency known as the wave-particle duality. We consider first the wave theory of light.

The Double-Slit Experiment

Figure $1$ shows a modernized version of the famous double-slit diffraction experiment first performed by Thomas Young in 1801. Light from a monochromatic (single wavelength) source passes through two narrow slits and is projected onto a screen. Each slit by itself would allow just a narrow band of light to illuminate the screen. But with both slits open, a beautiful interference pattern of alternating light and dark bands appears, with maximum intensity in the center. To understand what is happening, we review some key results about electromagnetic waves. Maxwell's theory of electromagnetism was an elegant unification of the diverse phenomena of electricity, magnetism and radiation, including light.
Electromagnetic radiation is carried by transverse waves of electric and magnetic fields, propagating in vacuum at a speed $c≈3\times 10^8\, m/sec$, known as the "speed of light." As shown in Figure $2$, the E and B fields oscillate sinusoidally, in synchrony with one another. The magnitudes of E and B are proportional ($B=E/c$ in SI units). The distance between successive maxima (or minima) at a given instant of time is called the wavelength $\lambda$. At every point in space, the fields also oscillate sinusoidally as functions of time. The number of oscillations per unit time is called the frequency $\nu$. Since the field moves one wavelength in the time $\lambda/c$, the wavelength, frequency and speed for any wave phenomenon are related by $\lambda\nu=c\label{1}$ The energy density contained in an electromagnetic field, even a static one, is given by $\rho=\dfrac{1}{2}\left(\epsilon_{0}E^{2}+\dfrac{B^{2}}{\mu_{0}}\right)\label{3}$ Note that the energy density depends quadratically on the fields E and B. To discuss the diffraction experiments described above, it is useful to define the amplitude of an electromagnetic wave at each point $\vec{r}$, $t$ in space and time by the function $\Psi\left(\vec{r},t\right)=\sqrt{\epsilon_{0}}E\left(\vec{r},t\right)=\dfrac{B\left(\vec{r},t\right)}{\sqrt{\mu_{0}}}\label{4}$ such that the intensity is given by $\rho\left(\vec{r},t\right)=[\Psi\left(\vec{r},t\right)]^2\label{5}$ The function $\Psi(\vec{r},t)$ will, in some later applications, have complex values. In such cases we generalize the definition of intensity to $\rho\left(\vec{r},t\right)=|\Psi\left(\vec{r},t\right)|^2=\Psi\left(\vec{r},t\right)^\ast\Psi\left(\vec{r},t\right)\label{6}$ where $\Psi\left(\vec{r},t\right)^\ast$ represents the complex conjugate of $\Psi\left(\vec{r},t\right)$. In quantum mechanical applications, the function $\Psi$ is known as the wavefunction. Figure $3$: Interference of two equal sinusoidal waves. Top: constructive interference.
Bottom: destructive interference. Center: intermediate case. The resulting intensities $\rho=\Psi^2$ are shown on the right. The electric and magnetic fields, hence the amplitude $\Psi$, can have either positive or negative values at different points in space. In fact, constructive and destructive interference arises from the superposition of waves, as illustrated in Figure $3$. By Equation $\ref{5}$, the intensity $\rho\ge0$ everywhere. The light and dark bands on the screen are explained by constructive and destructive interference, respectively. The wavelike nature of light is convincingly demonstrated by the fact that the intensity with both slits open is not the sum of the individual intensities, i.e., $\rho\neq\rho_{1}+\rho_{2}$. Rather it is the wave amplitudes which add: $\Psi=\Psi_{1}+\Psi_{2}\label{7}$ with the intensity given by the square of the amplitude: $\rho=\Psi^2=\Psi_{1}^2+\Psi_{2}^2+2\Psi_{1}\Psi_{2}\label{8}$ The cross term $2\Psi_{1}\Psi_{2}$ is responsible for the constructive and destructive interference. Where $\Psi_{1}$ and $\Psi_{2}$ have the same sign, constructive interference makes the total intensity greater than the sum of $\rho_{1}$ and $\rho_{2}$. Where $\Psi_{1}$ and $\Psi_{2}$ have opposite signs, there is destructive interference. If, in fact, $\Psi_{1} = -\Psi_{2}$, then the two waves cancel exactly, giving a dark fringe on the screen.

Wave-Particle Duality

The interference phenomena demonstrated by the work of Young, Fresnel and others in the early 19th Century apparently settled the matter that light was a wave phenomenon, contrary to the views of Newton a century earlier: case closed! But nearly a century later, phenomena were discovered which could not be satisfactorily accounted for by the wave theory, specifically blackbody radiation and the photoelectric effect. Deviating from the historical development, we will illustrate these effects by a modification of the double slit experiment.
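Before turning to that experiment, the superposition arithmetic of Equations $\ref{7}$ and $\ref{8}$ can be verified numerically; a minimal sketch with two equal sinusoidal amplitudes (the observation point is an arbitrary choice):

```python
import math

# Superposition (Equation 7) and intensity (Equation 8):
# rho = (psi1 + psi2)^2 = psi1^2 + psi2^2 + 2*psi1*psi2
x = 0.7                                  # an arbitrary observation point
p1 = p2 = math.sin(x)                    # two waves in phase

rho_constructive = (p1 + p2)**2          # cross term adds: 2*psi1*psi2 > 0
print(rho_constructive, p1**2 + p2**2)   # total is twice the bare sum of intensities

p3, p4 = math.sin(x), -math.sin(x)       # two waves of opposite phase
rho_destructive = (p3 + p4)**2           # cross term cancels everything
print(rho_destructive)                   # -> 0.0, a dark fringe
```

The constructive case gives twice the classical sum $\rho_1 + \rho_2$, the destructive case gives zero; intermediate phases interpolate between these extremes, producing the banded pattern on the screen.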
Let us equip the laser source with a dimmer switch capable of reducing the light intensity by several orders of magnitude, as shown in Figure $4$. With each successive filter the diffraction pattern becomes dimmer and dimmer. Eventually we will begin to see localized scintillations at random positions on an otherwise dark screen. It is an almost inescapable conclusion that these scintillations are caused by photons, the bundles of light postulated by Planck and Einstein to explain blackbody radiation and the photoelectric effect. But wonders do not cease even here. Even though the individual scintillations appear at random positions on the screen, their statistical behavior reproduces the original high-intensity diffraction pattern. Evidently the statistical behavior of the photons follows a predictable pattern, even though the behavior of individual photons is unpredictable. This implies that each individual photon, even though it behaves mostly like a particle, somehow carries with it a "knowledge" of the entire wavelike diffraction pattern. In some sense, a single photon must be able to go through both slits at the same time. This is what is known as the wave-particle duality for light: under appropriate circumstances light can behave as a wave or as a particle. Planck's resolution of the problem of blackbody radiation and Einstein's explanation of the photoelectric effect can be summarized by a relation connecting the energy of a photon to its frequency: $E=h \nu \label{8b}$ where $h = 6.626\times 10^{-34}\, J\, sec$ is known as Planck's constant. Much later, the Compton effect was discovered, wherein an x-ray or gamma ray photon ejects an electron from an atom, as shown in Figure $5$.
Assuming conservation of momentum in a photon-electron collision, the photon is found to carry a momentum p, given by $p=\dfrac{h}{\lambda} \label{9}$ Equations $\ref{8b}$ and $\ref{9}$ constitute quantitative realizations of the wave-particle duality, each relating a particle-like property (energy or momentum) to a wavelike property (frequency or wavelength). According to the special theory of relativity, the last two formulas are actually different facets of the same fundamental relationship. By Einstein's famous formula, the equivalence of mass and energy is given by $E=mc^2\label{10}$ The photon's rest mass is zero, but in travelling at speed c, it acquires a finite mass. Equating Equations $\ref{8b}$ and $\ref{10}$ for the photon energy and taking the photon momentum to be $p = mc$, we obtain $p = \dfrac{E}{c} = \dfrac{h\nu}{c} = \dfrac{h}{\lambda} \label{11}$ Thus, the wavelength-frequency relation (Equation $\ref{1}$) implies the Compton-effect formula (Equation $\ref{9}$). The best we can do is to describe the phenomena constituting the wave-particle duality. There is no widely accepted explanation in terms of everyday experience and common sense. Feynman referred to the "experiment with two holes" as the "central mystery of quantum mechanics." It should be mentioned that a number of models have been proposed over the years to rationalize these quantum mysteries. Bohm proposed that there might exist hidden variables which would make the behavior of each photon deterministic, i.e., particle-like. Everett and Wheeler proposed the "many worlds interpretation of quantum mechanics," in which each random event causes the splitting of the entire universe into disconnected parallel universes in which each possibility becomes the reality. Needless to say, not many people are willing to accept such a metaphysically unwieldy view of reality.
Most scientists are content to apply the highly successful computational mechanisms of quantum theory to their work, without worrying unduly about its philosophical underpinnings. Sort of like people who enjoy eating roast beef but would rather not think about where it comes from. There was never any drawn-out controversy about whether electrons or any other constituents of matter were other than particle-like. Yet a variant of the double-slit experiment using electrons instead of light proves otherwise. The experiment is technically difficult but has been done. An electron gun, instead of a light source, produces a beam of electrons at a selected velocity, which is focused and guided by electric and magnetic fields. Then, everything that happens for photons has its analog for electrons. Individual electrons produce scintillations on a phosphor screen (this is how TV works). But electrons also exhibit diffraction effects, which indicates that they too have wavelike attributes. Diffraction experiments have been more recently carried out for particles as large as atoms and molecules, even for the C60 fullerene molecule. De Broglie in 1924 first conjectured that matter might also exhibit a wave-particle duality. A wavelike aspect of the electron might, for example, be responsible for the discrete nature of Bohr orbits in the hydrogen atom. According to de Broglie's hypothesis, the "matter waves" associated with a particle have a wavelength given by $\lambda=h/p\label{12}$ which is identical in form to Compton's result (Equation \ref{9}) (which, in fact, was discovered later). The correctness of de Broglie's conjecture was most dramatically confirmed by the observations of Davisson and Germer in 1927 of diffraction of monoenergetic beams of electrons by metal crystals, much like the diffraction of x-rays. And measurements showed that de Broglie's formula (Equation $\ref{12}$) did indeed give the correct wavelength (see Figure $6$).
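De Broglie wavelengths for electrons at laboratory energies come out comparable to crystal lattice spacings, which is why Davisson and Germer could observe diffraction at all. A quick estimate for a 54 eV electron (the energy used in their experiment), assuming the nonrelativistic relation $p = \sqrt{2mE}$:

```python
import math

# de Broglie wavelength (Equation 12): lambda = h / p, with the
# nonrelativistic momentum p = sqrt(2 m E) for kinetic energy E.
h = 6.626e-34        # Planck's constant, J s
m_e = 9.109e-31      # electron mass, kg
eV = 1.602e-19       # joules per electron-volt

E = 54 * eV                     # Davisson-Germer electron energy, J
p = math.sqrt(2 * m_e * E)      # momentum, kg m/s
lam = h / p                     # de Broglie wavelength, m

print(round(lam * 1e9, 3))      # -> 0.167 (nm), comparable to lattice spacings
```

A macroscopic object run through the same formula (say, a 1 kg mass at 1 m/s) gives a wavelength of order $10^{-34}$ m, which is why matter waves are utterly unobservable in everyday life.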
The Schrödinger Equation

Schrödinger in 1926 first proposed an equation for de Broglie's matter waves. This equation cannot be derived from some other principle since it constitutes a fundamental law of nature. Its correctness can be judged only by its subsequent agreement with observed phenomena (a posteriori proof). Nonetheless, we will attempt a heuristic argument to make the result at least plausible. In classical electromagnetic theory, it follows from Maxwell's equations that each component of the electric and magnetic fields in vacuum is a solution of the wave equation $\nabla^2\Psi-\dfrac{1}{c^2}\dfrac{\partial ^2\Psi}{\partial t^2}=0\label{13}$ where the Laplacian or "del-squared" operator is defined by $\nabla^2=\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}\label{14}$ We will now attempt to create an analogous equation for de Broglie's matter waves. Accordingly, let us consider a very general instance of wave motion propagating in the x-direction. At a given instant of time, the form of a wave might be represented by a function such as $\psi(x)=f\left(\dfrac {2\pi x}{ \lambda}\right)\label{15}$ where $f(\theta)$ represents a sinusoidal function such as $\sin\theta$, $\cos\theta$, $e^{i\theta}$, $e^{-i\theta}$ or some linear combination of these. The most suggestive form will turn out to be the complex exponential, which is related to the sine and cosine by Euler's formula $e^{i\theta}=\cos\theta + i \sin\theta\label{16}$ Each of the above is a periodic function, its value repeating every time its argument increases by $2\pi$. This happens whenever x increases by one wavelength $\lambda$. At a fixed point in space, the time-dependence of the wave has an analogous structure: $T(t)=f(2\pi\nu t)\label{17}$ where $\nu$ gives the number of cycles of the wave per unit time.
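These periodicity claims are easy to verify with the complex exponential form; a quick numerical check (the angle and wavelength values are arbitrary choices):

```python
import cmath

# Euler's formula (Equation 16): e^{i*theta} = cos(theta) + i*sin(theta)
theta = 1.234
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
print(abs(lhs - rhs))                # ~0: the two sides agree

# Periodicity of psi(x) = f(2*pi*x/lambda) (Equation 15), taking f = exp:
lam = 0.5                            # an arbitrary wavelength
def psi(x):
    return cmath.exp(2j * cmath.pi * x / lam)

x0 = 0.3
print(abs(psi(x0 + lam) - psi(x0)))  # ~0: the value repeats after one wavelength
```

The same check applied to $T(t)$ shows the time dependence repeating with period $\tau = 1/\nu$, the temporal analog of the wavelength.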
Taking into account both x- and t-dependence, we consider a wavefunction of the form $\Psi(x,t)=\exp\left[2\pi i\left(\dfrac{x}{\lambda}-\nu t\right)\right]\label{18}$ representing waves travelling from left to right. Now we make use of the Planck and de Broglie formulas (Equations $\ref{8b}$ and $\ref{12}$, respectively) to replace $\nu$ and $\lambda$ by their particle analogs. This gives $\Psi(x,t)=\exp[i(px-Et)/\hbar]\label{19}$ where $\hbar\equiv\dfrac{h}{2\pi}\label{20}$ Since Planck's constant occurs in most formulas with the denominator $2\pi$, this symbol was introduced by Dirac. Now Equation $\ref{19}$ represents in some way the wavelike nature of a particle with energy E and momentum p. The time derivative of Equation \ref{19} gives $\dfrac{\partial\Psi}{\partial t} = -(iE/\hbar)\times \exp \left[\dfrac{i(px-Et)}{\hbar} \right]\label{21}$ Thus $i\hbar\dfrac{\partial\Psi}{\partial t} = E\Psi\label{22}$ Analogously $-i\hbar\dfrac{\partial\Psi}{\partial x} = p\Psi\label{23}$ and $-\hbar^2\dfrac{\partial^2\Psi}{\partial x^2} = p^2\Psi\label{24}$ The energy and momentum for a nonrelativistic free particle are related by $E=\dfrac{1}{2}mv^2=\dfrac{p^2}{2m}\label{25}$ Thus $\Psi(x,t)$ satisfies the partial differential equation $i\hbar\dfrac{\partial\Psi}{\partial t}=-\dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi}{\partial x^2}\label{26}$ For a particle with a potential energy $V(x)$, $E=\dfrac{p^2}{2m}+V(x)\label{27}$ we postulate that the equation for matter waves generalizes to $i\hbar\dfrac{\partial\Psi}{\partial t}=\left[-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}+V(x)\right]\Psi\label{28}$ For waves in three dimensions, we should then have $i\hbar\dfrac{\partial}{\partial t}\Psi(\vec{r},t)=\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\Psi(\vec{r},t)\label{29}$ Here the potential energy and the wavefunction depend on the three space coordinates x, y, z, which we write for brevity as $\vec{r}$.
This is the time-dependent Schrödinger equation for the amplitude $\Psi(\vec{r}, t)$ of the matter waves associated with the particle. Its formulation in 1926 represents the starting point of modern quantum mechanics. (Heisenberg in 1925 proposed another version known as matrix mechanics.) For conservative systems, in which the energy is a constant, we can separate out the time-dependent factor from Equation \ref{19} and write $\Psi(\vec{r},t)=\psi(\vec{r})e^{-iEt/\hbar}\label{30}$ where $\psi(\vec{r})$ is a wavefunction dependent only on space coordinates. Putting Equation \ref{30} into Equation \ref{29} and cancelling the exponential factors, we obtain the time-independent Schrödinger equation: $\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\psi(\vec{r})=E\psi(\vec{r})\label{31}$ Most of our applications of quantum mechanics to chemistry will be based on this equation. The bracketed object in Equation $\ref{31}$ is called an operator. An operator is a generalization of the concept of a function. Whereas a function is a rule for turning one number into another, an operator is a rule for turning one function into another. The Laplacian ($\nabla^2$) is an example of an operator. We usually indicate that an object is an operator by placing a 'hat' over it, e.g., $\hat{A}$. The action of an operator that turns the function f into the function g is represented by $\hat{A}f=g\label{32}$ Equation $\ref{23}$ implies that the operator for the x-component of momentum can be written $\hat{p}_{x}=-i\hbar\dfrac{\partial}{\partial x}\label{33}$ and by analogy, we must have $\hat{p}_{y}=-i\hbar\dfrac{\partial}{\partial y}$ and $\hat{p}_{z}=-i\hbar\dfrac{\partial}{\partial z}\label{34}$ The energy, as in Equation $\ref{27}$, expressed as a function of position and momentum is known in classical mechanics as the Hamiltonian. 
Generalizing to three dimensions, $\hat{H}=\dfrac{p^2}{2m}+V(\vec{r})=\dfrac{1}{2m}(p_{x}^2+p_{y}^2+p_{z}^2)+V(x,y,z)\label{35}$ We construct thus the corresponding quantum-mechanical operator $\hat{H}=-\dfrac{\hbar^2}{2m}\left(\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}\right)+V(x,y,z)=-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\label{36}$ The time-independent Schrödinger equation (Equation $\ref{31}$) can then be written symbolically as $\hat{H}\Psi=E\Psi\label{37}$ This form applies more generally to any quantum-mechanical problem, given the appropriate Hamiltonian and wavefunction. Most applications to chemistry involve systems containing many particles--electrons and nuclei. An operator equation of the form $\hat{A}\psi=const \, \psi\label{38}$ is called an eigenvalue equation. Recall that, in general, an operator acting on a function gives another function (e.g., Equation $\ref{32}$). The special case (Equation $\ref{38}$) occurs when the second function is a multiple of the first. In this case, $\psi$ is known as an eigenfunction and the constant is called an eigenvalue. (These terms are hybrids with German, the purely English equivalents being 'characteristic function' and 'characteristic value.') To every dynamical variable $A$ in quantum mechanics, there corresponds an eigenvalue equation, usually written $\hat{A}\psi=a\psi \label{39}$ The eigenvalues $a$ represent the possible measured values of the variable $A$. The Schrödinger Equation ($\ref{37}$) is the best known instance of an eigenvalue equation, with its eigenvalues corresponding to the allowed energy levels of the quantum system. The Wavefunction For a single-particle system, the wavefunction $\Psi(\vec{r},t)$, or $\psi(\vec{r})$ for the time-independent case, represents the amplitude of the still vaguely defined matter waves. 
The relationship between amplitude and intensity of electromagnetic waves we developed for Equation $\ref{6}$ can be extended to matter waves. The most commonly accepted interpretation of the wavefunction is due to Max Born (1926), according to which $\rho(r)$, the square of the absolute value of $\psi(r)$, is proportional to the probability density (probability per unit volume) that the particle will be found at the position r. Probability density is the three-dimensional analog of the diffraction pattern that appears on the two-dimensional screen in the double-slit diffraction experiment for electrons described in the preceding Section. In the latter case we had the relative probability that a scintillation would appear at a given point on the screen. The function $\rho(r)$ becomes equal, rather than just proportional to, the probability density when the wavefunction is normalized, that is, $\int|\psi(\vec{r})|^2d\tau=1\label{40}$ This simply accounts for the fact that the total probability of finding the particle somewhere adds up to unity. The integration in Equation $\ref{40}$ extends over all space and the symbol $d\tau$ designates the appropriate volume element. For example, the volume differential in Cartesian coordinates, $d\tau=dx\,dy\,dz$, is changed in spherical coordinates to $d\tau=r^2\sin\theta\, dr \,d\theta \, d\phi$. The physical significance of the wavefunction makes certain demands on its mathematical behavior. The wavefunction must be a single-valued function of all its coordinates, since the probability density ought to be uniquely determined at each point in space. Moreover, the wavefunction should be finite and continuous everywhere, since a physically-meaningful probability density must have the same attributes. 
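Equation $\ref{40}$ is easy to illustrate numerically. The sketch below (an illustration, not from the text) checks the normalization of a hydrogen-like $1s$ function $\psi = \pi^{-1/2}e^{-r}$ in atomic units, using the spherical volume element $d\tau=r^2\sin\theta\,dr\,d\theta\,d\phi$:

```python
import numpy as np

# Normalization check in spherical coordinates: psi = pi**-0.5 * exp(-r)
# (hydrogen-like 1s, atomic units).  The angular factors integrate to
# 4*pi, leaving the radial integral of |psi|^2 * r^2.
r  = np.linspace(0.0, 20.0, 200001)
dr = r[1] - r[0]

psi_sq = np.exp(-2*r) / np.pi            # |psi|^2, angle-independent

radial  = np.sum(psi_sq * r**2) * dr     # ∫ |psi|^2 r^2 dr
angular = 2.0 * 2.0*np.pi                # ∫ sin(theta) dtheta × ∫ dphi = 4*pi
total = radial * angular
print(total)    # ≈ 1
```

The product of the radial and angular integrals returns 1, as the normalization condition requires.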
The conditions that the wavefunction be single-valued, finite and continuous--in short, "well behaved"--lead to restrictions on solutions of the Schrödinger equation such that only certain values of the energy and other dynamical variables are allowed. This is called quantization and is the feature that gives quantum mechanics its name. 1.03: Quantum Mechanics of Some Simple Systems The simple quantum-mechanical problem we have just solved can provide an instructive application to chemistry: the free-electron model (FEM) for delocalized $\pi$-electrons. The simplest case is the 1,3-butadiene molecule, in which the four $\pi$-electrons doubly occupy the two lowest particle-in-a-box states, giving the electron density $\rho =2\psi _{1}^2+2\psi _{2}^2\label{28}$ A chemical interpretation of this picture might be that, since the $\pi$-electron density is concentrated between carbon atoms 1 and 2, and between 3 and 4, the predominant structure of butadiene has double bonds between these two pairs of atoms. Each double bond consists of a $\pi$-bond, in addition to the underlying $\sigma$-bond. However, this is not the complete story, because we must also take account of the residual $\pi$-electron density between carbons 2 and 3. In the terminology of valence-bond theory, butadiene would be described as a resonance hybrid with the contributing structures CH2=CH-CH=CH2 (the predominant structure) and •CH2-CH=CH-CH2• (a secondary contribution). The reality of the latter structure is suggested by the ability of butadiene to undergo 1,4-addition reactions. The free-electron model can also be applied to the electronic spectrum of butadiene and other linear polyenes. The lowest unoccupied molecular orbital (LUMO) in butadiene corresponds to the n=3 particle-in-a-box state. Neglecting electron-electron interaction, the longest-wavelength (lowest-energy) electronic transition should occur from n=2, the highest occupied molecular orbital (HOMO). 
The energy difference is given by $\Delta E=E_{3}-E_{2}=(3^2-2^2)\dfrac{h^2}{8mL^2}\label{29}$ Here $m$ represents the mass of an electron (not a butadiene molecule!), $9.1\times 10^{-31}$ kg, and $L$ is the effective length of the box, $4\times 1.40\times 10^{-10}$ m. By the Bohr frequency condition $\Delta E=h\nu =\dfrac{hc}{\lambda }\label{30}$ The wavelength is predicted to be 207 nm. This compares well with the experimental maximum of the first electronic absorption band, $\lambda_{max} \approx$ 210 nm, in the ultraviolet region. We might therefore be emboldened to apply the model to predict absorption spectra in higher polyenes CH2=(CH-CH=)n-1CH2. For the molecule with 2n carbon atoms (n double bonds), the HOMO → LUMO transition corresponds to n → n + 1, thus $\dfrac{hc}{\lambda} \approx \left[(n+1)^2-n^2\right]\dfrac{h^2}{8m(2nL_{CC})^2}\label{31}$ A useful constant in this computation is the Compton wavelength $\dfrac{h}{mc}= 2.426 \times 10^{-12}\, m.$ For n=3, hexatriene, the predicted wavelength is 332 nm, while experiment gives $\lambda _{max}\approx$ 250 nm. For n=4, octatetraene, FEM predicts 460 nm, while $\lambda _{max}\approx$ 300 nm. Clearly the model has been pushed beyond its range of quantitative validity, although the trend of increasing absorption band wavelength with increasing n is correctly predicted. Incidentally, a compound should be colored if its absorption includes any part of the visible range 400-700 nm. Retinol (vitamin A), which contains a polyene chain with n=5, has a pale yellow color. Contributors and Attributions Seymour Blinder (Professor Emeritus of Chemistry and Physics at the University of Michigan, Ann Arbor)
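The butadiene estimate above is easy to reproduce. The sketch below evaluates Equations $\ref{29}$ and $\ref{30}$ with the constants stated in the text:

```python
import numpy as np

# Free-electron-model wavelength for butadiene, per Eqs. (29)-(30).
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
m = 9.1e-31        # electron mass, kg
L = 4 * 1.40e-10   # effective box length, m

dE  = (3**2 - 2**2) * h**2 / (8 * m * L**2)   # HOMO -> LUMO gap, J
lam = h * c / dE                              # Bohr frequency condition
print(lam * 1e9)   # ≈ 207 nm
```

The result, about 207 nm, matches the prediction quoted in the text.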
Here we will continue to develop the mathematical formalism of quantum mechanics, using heuristic arguments as necessary. This will lead to a system of postulates which will be the basis of our subsequent applications of quantum mechanics. Hermitian Operators An important property of operators is suggested by considering the Hamiltonian for the particle in a box: $\hat{H}=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} \label{1}$ Let $f(x)$ and $g(x)$ be arbitrary functions which satisfy the same boundary conditions as the eigenfunctions of $\hat{H}$, namely that they vanish at $x = 0$ and $x = a$. Consider the integral $\int_0^a \! f(x) \, \hat{H} \, g(x) \, \mathrm{d}x =-\frac{\hbar^2}{2m} \int_0^a \! f(x) \, g''(x) \, \mathrm{d}x \label{2}$ Now, using integration by parts, $\int_0^a \! f(x) \, g''(x) \, \mathrm{d}x = - \int_0^a \! f'(x) \, g'(x) \, \mathrm{d}x + \, \Biggl[f(x) \, g'(x) \Biggr]_0^a \label{3}$ The boundary terms vanish by the assumed conditions on $f$ and $g$. A second integration by parts transforms Equation $\ref{3}$ to $\int_0^a \! f''(x) \, g(x) \, \mathrm{d}x \, - \, \Biggl[f'(x) \, g(x) \Biggr]_0^a$ It follows therefore that $\int_0^a \! f(x) \, \hat{H} \, g(x) \, \mathrm{d}x=\int_0^a g(x) \, \hat{H} \, f(x) \, \mathrm{d}x \label{4}$ An obvious generalization for complex functions will read $\int_0^a \! f^*(x) \, \hat{H} \, g(x) \, \mathrm{d}x=\Biggl(\int_0^a g^*(x) \, \hat{H} \, f(x) \, \mathrm{d}x\Biggr)^* \label{5}$ In mathematical terminology, an operator $\hat{A}$ for which $\int \! f^* \, \hat{A} \, g \, \mathrm{d}\tau=\Biggl(\int \! g^* \, \hat{A} \, f \, \mathrm{d}\tau\Biggr)^* \label{6}$ for all functions $f$ and $g$ which obey specified boundary conditions is classified as hermitian or self-adjoint. Evidently, the Hamiltonian is a hermitian operator. It is postulated that all quantum-mechanical operators that represent dynamical variables are hermitian. 
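The symmetry expressed by Equation $\ref{4}$ can be checked numerically. The sketch below uses $\hbar^2/2m = 1$ (so $\hat{H}=-d^2/dx^2$) and two arbitrary real test functions that vanish at $x=0$ and $x=a$; the particular choices of $f$ and $g$ are illustrative assumptions:

```python
import numpy as np

# Hermiticity check of H = -d^2/dx^2 (units with hbar^2/2m = 1) for two
# arbitrary real test functions vanishing at the endpoints, per Eq. (4).
a = 1.0
x = np.linspace(0.0, a, 200001)
dx = x[1] - x[0]

f = x * (a - x)                    # f(0) = f(a) = 0
g = x**2 * (a - x)                 # g(0) = g(a) = 0
f_pp = np.full_like(x, -2.0)       # exact f''
g_pp = 2*a - 6*x                   # exact g''

lhs = np.sum(f * (-g_pp)) * dx     # ∫ f (H g) dx
rhs = np.sum(g * (-f_pp)) * dx     # ∫ g (H f) dx
print(lhs, rhs)                    # both ≈ a**4 / 6
```

Both integrals evaluate to $a^4/6$, confirming that the two orderings agree for functions obeying the boundary conditions.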
Properties of Eigenvalues and Eigenfunctions The sets of energies and wavefunctions obtained by solving any quantum-mechanical problem can be summarized symbolically as solutions of the eigenvalue equation $\hat{H} \, \psi_n=E_n \, \psi_n \label{7}$ For another value of the quantum number, we can write $\hat{H} \, \psi_m=E_m \, \psi_m \label{8}$ Let us multiply Equation $\ref{7}$ by $\psi_m^*$ and the complex conjugate of Equation $\ref{8}$ by $\psi_n$. Then we subtract the two expressions and integrate over $\mathrm{d}\tau$. The result is $\int \! \psi_m^* \, \hat{H} \, \psi_n \, \mathrm{d}\tau \, - \, \Biggl(\int \! \psi_n^* \, \hat{H} \, \psi_m \, \mathrm{d}\tau\Biggr)^*=(E_n-E_m^*)\int \! \psi_m^* \, \psi_n \, \mathrm{d}\tau \label{9}$ But by the hermitian property (Equation $\ref{5}$), the left-hand side of Equation $\ref{9}$ equals zero. Thus $(E_n-E_m^*)\int \! \psi_m^* \, \psi_n \, \mathrm{d}\tau=0 \label{10}$ Consider first the case $m = n$. The second factor in Equation $\ref{10}$ then becomes the normalization integral $\int \! \psi_n^* \, \psi_n \, \mathrm{d}\tau$, which equals 1 (or at least a nonzero constant). Therefore the first factor in Equation $\ref{10}$ must equal zero, so that $E_n^*=E_n \label{11}$ implying that the energy eigenvalues must be real numbers. This is quite reasonable from a physical point of view since eigenvalues represent possible results of measurement. Consider next the case when $E_m \not= E_n$. Then it is the second factor in Equation $\ref{10}$ that must vanish and $\int \! \psi_m^* \, \psi_n \, \mathrm{d}\tau=0 \,\,\,\, when \,\, E_m \not= E_n \label{12}$ Thus eigenfunctions belonging to different eigenvalues are orthogonal. In the case that $\psi_m$ and $\psi_n$ are degenerate eigenfunctions, so $m \not= n$ but $E_m = E_n$, the above proof of orthogonality does not apply. But it is always possible to construct degenerate functions that are mutually orthogonal. 
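The orthogonality result of Equation $\ref{12}$ can be confirmed on a grid for the particle-in-a-box eigenfunctions $\psi_n(x)=\sqrt{2/a}\,\sin(n\pi x/a)$ (an illustrative sketch; the pairs of quantum numbers tested are arbitrary):

```python
import numpy as np

# Overlap integrals of particle-in-a-box eigenfunctions: the result is
# ≈ 1 for m = n (normalization) and ≈ 0 for m ≠ n (orthogonality).
a = 1.0
x = np.linspace(0.0, a, 100001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2.0/a) * np.sin(n * np.pi * x / a)

overlaps = {(m, n): np.sum(psi(m) * psi(n)) * dx
            for (m, n) in [(1, 1), (2, 2), (1, 2), (2, 3)]}
for pair, val in overlaps.items():
    print(pair, round(val, 6))
```

The diagonal entries come out as 1 and the off-diagonal entries as 0, anticipating the Kronecker-delta form of the orthonormalization condition.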
A general result is therefore the orthonormalization condition $\int \! \psi_m^* \, \psi_n \, \mathrm{d}\tau=\delta_{mn} \label{13}$ It is easy to prove that a linear combination of degenerate eigenfunctions is itself an eigenfunction of the same energy. Let $\hat{H} \, \psi_{nk}= E_n \, \psi_{nk}, \,\,\,\,\,\,\, k=1,2,\ldots,d \label{14}$ where the $\psi_{nk}$ represent a d-fold degenerate set of eigenfunctions with the same eigenvalue $E_n$. Consider now the linear combination $\psi = c_1\psi_{n,1} + c_2\psi_{n,2} + ... + c_d\psi_{n,d} \label{15}$ Operating on $\psi$ with the Hamiltonian and using Equation $\ref{14}$, we find $\hat{H} \, \psi = c_1\hat{H} \,\psi_{n,1} + c_2\hat{H} \,\psi_{n,2} + ... =E_n (c_1\psi_{n,1} + c_2\psi_{n,2} + ... )=E_n \, \psi \label{16}$ which shows that the linear combination $\psi$ is also an eigenfunction of the same energy. There is evidently a limitless number of possible eigenfunctions for a degenerate eigenvalue. However, only d of these will be linearly independent. Dirac Notation The term orthogonal has been used both for perpendicular vectors and for functions whose product integrates to zero. This actually connotes a deep connection between vectors and functions. Consider two orthogonal vectors a and b. Then, in terms of their x, y, z components, labeled by 1, 2, 3, respectively, the scalar product can be written $\mathbf{a} \cdot \mathbf{b} = a_1b_1 + a_2b_2 + a_3b_3 = 0 \label{17}$ Suppose now that we consider an analogous relationship involving vectors in n-dimensional space (which you need not visualize!). We could then write $\mathbf{a} \cdot \mathbf{b}= \sum_{k=1}^{n} a_kb_k = 0 \label{18}$ Finally let the dimension of the space become non-denumerably infinite, turning into a continuum. The sum in Equation $\ref{18}$ would then be replaced by an integral such as $\int \! a(x) \, b(x) dx = 0 \label{19}$ But this is just the relation for orthogonal functions. 
A function can therefore be regarded as an abstract vector in a higher-dimensional continuum, known as Hilbert space. This is true for eigenfunctions as well. Dirac denoted the vector in Hilbert space corresponding to the eigenfunction $\psi_n$ by the symbol $|n \rangle$. Correspondingly, the complex conjugate $\psi_m^*$ is denoted by $\langle m|$. The integral over the product of the two functions is then analogous to a scalar product (or inner product in linear algebra) of the abstract vectors, written $\int \! \psi_m^* \, \psi_n \, \mathrm{d}\tau= \langle m| \cdot |n \rangle\equiv \langle m|n\rangle \label{20}$ The last quantity is known as a bracket, which led Dirac to designate the vectors $\langle m|$ and $|n \rangle$ as a "bra" and a "ket," respectively. The orthonormality conditions (Equation $\ref{13}$) can be written $\langle m|n\rangle = \delta_{mn} \label{21}$ The integral of a "sandwich" containing an operator $\hat{A}$ can be written very compactly in the form $\int \! \psi_m^* \, \hat{A} \, \psi_n \, \mathrm{d}\tau=\langle m| A |n \rangle \label{22}$ The hermitian condition on $\hat{A}$ [cf. Eq (6)] is therefore expressed as $\langle m| A |n \rangle=\langle n| A |m \rangle^* \label{23}$ Expectation Values One of the extraordinary features of quantum mechanics is the possibility for superpositions of states. The state of a system can sometimes exist as a linear combination of other states, for example, $\psi = c_1\psi_{1} + c_2\psi_{2} \label{24}$ Assuming that all three functions are normalized and that $\psi_1$ and $\psi_2$ are orthogonal, we find $\int \! \psi^* \, \psi \, \mathrm{d}\tau=|c_1|^2 + |c_2|^2=1 \label{25}$ We can interpret $|c_1|^2$ and $|c_2|^2$ as the probabilities that a system in a state described by $\psi$ can have the attributes of the states $\psi_1$ and $\psi_2$, respectively. 
Suppose $\psi_1$ and $\psi_2$ represent eigenstates of an observable $A$, satisfying the respective eigenvalue equations $\hat{A} \psi_1=a_1\psi_1 \,\,\,\,\,\, and \,\,\,\,\,\, \hat{A} \psi_2=a_2\psi_2 \label{26}$ Then a large number of measurements of the variable $A$ in the state $\psi$ will register the value $a_1$ with a probability $|c_1|^2$ and the value $a_2$ with a probability $|c_2|^2$. The average value or expectation value of $A$ will be given by $\langle{A}\rangle =|c_1|^2 a_1+|c_2|^2 a_2 \label{27}$ This can be obtained directly from $\psi$ by the "sandwich construction" $\langle{A}\rangle=\int \! \psi^* \hat{A} \, \psi \, \mathrm{d}\tau \label{28}$ or, if $\psi$ is not normalized, $\langle{A}\rangle=\frac{\int \! \psi^* \hat{A} \, \psi \, \mathrm{d}\tau}{\int \! \psi^* \, \psi \, \mathrm{d}\tau} \label{29}$ Note that the expectation value need not itself be a possible result of a single measurement (like the centroid of a donut, which is located in the hole!). When the operator $\hat{A}$ is a simple function, not containing differential operators or the like, then Equation $\ref{28}$ reduces to the classical formula for an average value: $\langle{A}\rangle=\int \, A \, \rho \,\mathrm{d}\tau \label{30}$ More on Operators An operator represents a prescription for turning one function into another: in symbols, $\hat{A}\psi=\phi$. From a physical point of view, the action of an operator on a wavefunction can be pictured as the process of measuring the observable $A$ on the state $\psi$. The transformed wavefunction $\phi$ then represents the state of the system after the measurement is performed. In general, $\phi$ is different from $\psi$, consistent with the fact that the process of measurement on a quantum system produces an irreducible perturbation of its state. Only in the special case that $\psi$ is an eigenstate of $A$, does a measurement preserve the original state. The function $\phi$ is then equal to an eigenvalue $a$ times $\psi$. 
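The weighted average of Equation $\ref{27}$ and the sandwich construction of Equation $\ref{28}$ can be compared directly. The sketch below uses particle-in-a-box energy eigenstates with $\hbar = m = a = 1$; the weights $0.7$ and $0.3$ are an arbitrary illustrative choice:

```python
import numpy as np

# Expectation value of H in a superposition psi = c1*psi1 + c2*psi2:
# the sandwich integral equals |c1|^2 E1 + |c2|^2 E2.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
psi1 = np.sqrt(2.0) * np.sin(np.pi * x)        # n = 1 box eigenstate
psi2 = np.sqrt(2.0) * np.sin(2*np.pi * x)      # n = 2 box eigenstate
E1, E2 = np.pi**2/2, 2*np.pi**2                # E_n = n^2 pi^2 / 2

c1, c2 = np.sqrt(0.7), np.sqrt(0.3)            # |c1|^2 + |c2|^2 = 1
psi  = c1*psi1 + c2*psi2
Hpsi = c1*E1*psi1 + c2*E2*psi2                 # H acting on each eigenstate

sandwich = np.sum(psi * Hpsi) * dx             # ∫ psi* H psi dtau
weighted = 0.7*E1 + 0.3*E2
print(sandwich, weighted)                      # agree
```

Note that the common value lies between $E_1$ and $E_2$ and is not itself an allowed measurement result, illustrating the donut-centroid remark above.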
The product of two operators, say $\hat{A}\hat{B}$, represents the successive action of the operators, reading from right to left---i.e., first $\hat{B}$ then $\hat{A}$. In general, the action of two operators in the reversed order, say $\hat{B}\hat{A}$, gives a different result, which can be written $\hat{A}\hat{B}\not=\hat{B}\hat{A}.$ We say that the operators do not commute. This can be attributed to the perturbing effect one measurement on a quantum system can have on subsequent measurements. As an example of non-commuting operators from everyday life: in our usual routine each morning, we shower and we get dressed. But the result of carrying out these operations in reversed order will be dramatically different! The commutator of two operators is defined by $\left[ \hat{A}, \, \hat{B} \, \right] \equiv \hat{A}\hat{B}-\hat{B}\hat{A} \label{31}$ When $\left[ \hat{A}, \, \hat{B}\, \right]=0$, the two operators are said to commute. This means their combined effect will be the same whatever order they are applied (like brushing your teeth and showering). The uncertainty principle for simultaneous measurement of two observables $A$ and $B$ is closely related to their commutator. The uncertainty $\Delta a$ in the observable $A$ is defined in terms of the mean square deviation from the average: $(\Delta a)^2 = \langle{(\hat{A}-\langle{A}\rangle)^2}\rangle=\langle{A^2}\rangle-\langle{A}\rangle^2 \label{32}$ It corresponds to the standard deviation in statistics. The following inequality can be proven for the product of two uncertainties: $\Delta{a}\Delta{b} \ge \frac{1}{2}|\langle{\left[ \hat{A}, \, \hat{B}\, \right]}\rangle| \label{33}$ The best known application of Equation $\ref{33}$ is to the position and momentum operators, say $\hat{x}$ and $\hat{p_x}$. Their commutator is given by $[ \hat{x}, \, \hat{p_x} \, ] = i\hbar \label{34}$ so that $\Delta{x}\Delta{p} \ge \frac{\hbar}{2} \label{35}$ which is known as the Heisenberg uncertainty principle. 
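The bound in Equation $\ref{35}$ is saturated by a Gaussian wavepacket. The following numerical sketch (with $\hbar = 1$ and an arbitrary width $\sigma$) computes $\Delta x$ and $\Delta p$ from the definitions in Equation $\ref{32}$ and finds the product exactly $\hbar/2$:

```python
import numpy as np

# Minimum-uncertainty check for a normalized Gaussian wavepacket
# psi = (pi sigma^2)^(-1/4) exp(-x^2 / 2 sigma^2), with hbar = 1.
sigma = 0.8
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
psi = (np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (2*sigma**2))

x2 = np.sum(x**2 * psi**2) * dx        # <x^2>  (<x> = 0 by symmetry)
dpsi = -(x / sigma**2) * psi           # exact psi'
p2 = np.sum(dpsi**2) * dx              # <p^2> = ∫ |psi'|^2 dx (hbar = 1)

print(np.sqrt(x2 * p2))                # = 0.5 = hbar/2
```

Changing $\sigma$ trades $\Delta x$ against $\Delta p$ but leaves the product fixed at the minimum allowed value.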
This fundamental consequence of quantum theory implies that the position and momentum of a particle cannot be determined with arbitrary precision--the more accurately one is known, the more uncertain is the other. For example, if the momentum is known exactly, as in a momentum eigenstate, then the position is completely undetermined. If two operators commute, there is no restriction on the accuracy of their simultaneous measurement. For example, the $x$ and $y$ coordinates of a particle can be known at the same time. An important theorem states that two commuting observables can have simultaneous eigenfunctions. To prove this, write the eigenvalue equation for an operator $\hat{A}$ $\hat{A} \, \psi_n=a_n \, \psi_n \label{36}$ then operate with $\hat{B}$ and use the commutativity of $\hat{A}$ and $\hat{B}$ to obtain $\hat{B} \, \hat{A} \, \psi_n=\hat{A} \, \hat{B} \, \psi_n=a_n \, \hat{B} \, \psi_n \label{37}$ This shows that $\hat{B} \, \psi_n$ is also an eigenfunction of $\hat{A}$ with the same eigenvalue $a_n$. This implies that $\hat{B} \, \psi_n=const \, \psi_n=b_n \, \psi_n \label{38}$ showing that $\psi_n$ is a simultaneous eigenfunction of $\hat{A}$ and $\hat{B}$ with eigenvalues $a_n$ and $b_n$, respectively. The derivation becomes slightly more complicated in the case of degenerate eigenfunctions, but the same conclusion follows. After the Hamiltonian, the operators for angular momenta are probably the most important in quantum mechanics. The definition of angular momentum in classical mechanics is $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. In terms of its Cartesian components, $L_x = yp_z - zp_y, \;\;\; L_y = zp_x - xp_z, \;\;\; L_z = xp_y - yp_x \label{39}$ In the future, we will write such sets of equations as "$L_x = yp_z - zp_y, \, et \, cyc$," meaning that we add to one explicitly stated relation, the versions formed by successive cyclic permutation $x \rightarrow y \rightarrow z \rightarrow x$. 
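The quantum operators corresponding to these classical components (constructed in the following paragraphs) obey the commutation relations $[\hat{L}_x,\hat{L}_y]=i\hbar\hat{L}_z$, et cyc, while $[\hat{L}^2,\hat{L}_z]=0$. A quick consistency check uses the standard $\ell = 1$ matrix representation with $\hbar = 1$; this is an illustration, not part of the original text:

```python
import numpy as np

# l = 1 angular momentum matrices (hbar = 1) and their commutators.
s = 1.0/np.sqrt(2.0)
Lx = s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx@Lx + Ly@Ly + Lz@Lz                # = l(l+1) I = 2I

comm = lambda A, B: A@B - B@A
ok_xy = np.allclose(comm(Lx, Ly), 1j*Lz)  # [Lx, Ly] = i Lz
ok_2z = np.allclose(comm(L2, Lz), 0)      # [L^2, Lz] = 0
print(ok_xy, ok_2z)                       # True True
```

Since $\hat{L}_x$ and $\hat{L}_y$ fail to commute, no state can be a simultaneous eigenfunction of both, which is the content of the discussion that follows.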
The general prescription for turning a classical dynamical variable into a quantum-mechanical operator was developed in Chap 2. The key relations were the momentum components $\hat{p_x}=-i \hbar \frac{\partial}{\partial x}, \,\,\, \hat{p_y}=-i \hbar \frac{\partial}{\partial y}, \,\,\, \hat{p_z}=-i \hbar \frac{\partial}{\partial z} \label{40}$ with the coordinates $x, y, z$ simply carried over into multiplicative operators. Applying Equation $\ref{40}$ to Equation $\ref{39}$, we construct the three angular momentum operators $\hat{L_x}=-i \hbar \, \left( y \frac{\partial}{\partial z}-z \frac{\partial}{\partial y}\right) \,\,\,\,\,\, et \, cyc \label{41}$ while the total angular momentum is given by $\hat{L}^2=\hat{L}_x^2+\hat{L}_y^2+\hat{L}_z^2 \label{42}$ The angular momentum operators obey the following commutation relations: $\left[ \hat{L_x}, \, \hat{L_y}\right]=i \hbar \hat{L_z} \,\,\,\, et \, cyc \label{43}$ but $\left[ \hat{L}^2, \, \hat{L_z}\right]=0 \label{44}$ and analogously for $\hat{L_x}$ and $\hat{L_y}$. This is consistent with the existence of simultaneous eigenfunctions of $\hat{L}^2$ and any one component, conventionally designated $\hat{L_z}$. But then these states cannot be eigenfunctions of either $\hat{L_x}$ or $\hat{L_y}$. Postulates of Quantum Mechanics Our development of quantum mechanics is now sufficiently complete that we can reduce the theory to a set of five postulates. Postulate 1: Wavefunctions The state of a quantum-mechanical system is completely specified by a wavefunction $\Psi$ that depends on the coordinates and time. The square of this function $\Psi^* \Psi$ gives the probability density for finding the system with a specified set of coordinate values. The wavefunction must fulfill certain mathematical requirements because of its physical interpretation. It must be single-valued, finite and continuous. It must also satisfy a normalization condition $\int \! 
\Psi^* \, \Psi \, \mathrm{d}\tau=1 \label{45}$ Postulate 2: Observables Every observable in quantum mechanics is represented by a linear, hermitian operator. The hermitian property was defined in Equation \ref{6}. A linear operator is one which satisfies the identity $\hat{A} (c_1\psi_{1} + c_2\psi_{2})=c_1 \, \hat{A} \psi_{1} + c_2 \, \hat{A} \psi_{2} \label{46}$ which is required in order to have a superposition property for quantum states. The form of an operator which has an analog in classical mechanics is derived by the prescriptions $\mathbf{\hat{r}}=\mathbf{r}, \,\,\,\,\,\, \mathbf{\hat{p}}=-i \hbar \nabla \label{47}$ which we have previously expressed in terms of Cartesian components [cf. Equation $\ref{40}$]. Postulate 3: Eigenstates In any measurement of an observable $A$, associated with an operator $\hat{A}$, the only possible results are the eigenvalues $a_n$, which satisfy an eigenvalue equation $\hat{A} \psi_n=a_n \, \psi_n \label{48}$ This postulate captures the essence of quantum mechanics--the quantization of dynamical variables. A continuum of eigenvalues is not forbidden, however, as in the case of an unbound particle. Every measurement of $A$ invariably gives one of the eigenvalues. For an arbitrary state (not an eigenstate of $A$), these measurements will be individually unpredictable but follow a definite statistical law, which is the subject of the fourth postulate: Postulate 4: Expectation Values For a system in a state described by a normalized wave function $\Psi$, the average or expectation value of the observable corresponding to $A$ is given by $\langle{A}\rangle=\int \! 
\Psi^* \, \hat{A} \, \Psi \, \mathrm{d}\tau \label{49}$ Finally, Postulate 5: Time-dependent Evolution The wavefunction of a system evolves in time in accordance with the time-dependent Schrödinger equation $i\hbar \frac{\partial \Psi}{\partial t}=\hat{H} \, \Psi \label{50}$ For time-independent problems this reduces to the time-independent Schrödinger equation $\hat{H} \, \psi=E \, \psi \label{51}$ which is the eigenvalue equation for the Hamiltonian operator. The Variational Principle Except for a small number of intensively-studied examples, the Schrödinger equation for most problems of chemical interest cannot be solved exactly. The variational principle provides a guide for constructing the best possible approximate solutions of a specified functional form. Suppose that we seek an approximate solution for the ground state of a quantum system described by a Hamiltonian $\hat{H}$. We presume that the Schrödinger equation $\hat{H} \, \psi_0=E_0 \, \psi_0 \label{52}$ is too difficult to solve exactly. Suppose, however, that we have a function $\tilde{\psi}$ which we think is an approximation to the true ground-state wavefunction. According to the variational principle (or variational theorem), the following formula provides an upper bound to the exact ground-state energy $E_0$: $\tilde{E} \equiv \frac{\int \! \tilde{\psi}^* \hat{H} \, \tilde{\psi} \, \mathrm{d}\tau}{\int \! \tilde{\psi}^* \, \tilde{\psi} \, \mathrm{d}\tau} \ge E_0 \label{53}$ Note that this ratio of integrals has the same form as the expectation value $\langle{H}\rangle$ defined by Equation $\ref{29}$. The better the approximation $\tilde{\psi}$, the lower will be the computed energy $\tilde{E}$, though it will still be greater than the exact value. To prove Equation $\ref{53}$, we suppose that the approximate function can, in concept, be represented as a superposition of the actual eigenstates of the Hamiltonian, analogous to Equation $\ref{24}$, $\tilde{\psi}=c_0\psi_0+c_1\psi_1+... 
\label{54}$ This means that $\tilde{\psi}$, the approximate ground state, might be close to the actual ground state $\psi_0$ but is "contaminated" by contributions from excited states $\psi_1$, ... Of course, none of the states or coefficients on the right-hand side is actually known, otherwise there would be no need to worry about approximate computations. By analogy with Equation $\ref{27}$, the expectation value of the Hamiltonian in the state Equation $\ref{54}$ is given by $\tilde{E}=|c_0|^2E_0+|c_1|^2E_1+... \label{55}$ Since all the excited states have higher energy than the ground state, $E_1, \, E_2, \ldots \ge E_0$, we find $\tilde{E} \ge (|c_0|^2+|c_1|^2+...) \, E_0=E_0 \label{56}$ assuming $\tilde{\psi}$ has been normalized. Thus $\tilde{E}$ can be no lower than the true ground-state energy $E_0$, as implied by Equation $\ref{53}$. As a very simple, although artificial, illustration of the variational principle, consider the ground state of the particle in a box. Suppose we had never studied trigonometry and knew nothing about sines or cosines. Then a reasonable approximation to the ground state might be an inverted parabola such as the normalized function $\tilde{\psi}(x)=\left( \frac{30}{a^5} \right)^\frac{1}{2} \, x(a-x) \label{57}$ Fig. 1 shows this function along with the exact ground-state eigenfunction $\psi_1 (x)=\left( \frac{2}{a} \right)^\frac{1}{2} \, \sin \frac{\pi x}{a} \label{58}$ Figure $1$: Variational approximation for particle in a box. Red line represents $\tilde{\psi}$ and black line represents $\psi_1$ A variational calculation gives $\tilde{E}=\int^a_0 \tilde{\psi} (x) \, \left( -\frac{\hbar^2}{2m} \right) \, \tilde{\psi}''(x) \, dx = \frac{5}{4\pi^2}\frac{h^2}{ma^2}=\frac{10}{\pi^2} \, E_1 \approx 1.01321E_1 \label{59}$ in terms of the exact ground state energy $E_1 = \frac{h^2}{8ma^2}$. In accord with the variational theorem, $\tilde{E} > E_1$. The computation is in error by about 1%.
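The variational integral of Equation $\ref{59}$ is simple to reproduce numerically. A sketch with $\hbar = m = a = 1$, in which units the exact ground-state energy is $E_1 = \pi^2/2$:

```python
import numpy as np

# Variational estimate for the particle in a box using the normalized
# parabolic trial function psi_t = sqrt(30/a^5) x(a - x); hbar = m = a = 1.
a = 1.0
x = np.linspace(0.0, a, 200001)
dx = x[1] - x[0]

psi_t  = np.sqrt(30.0/a**5) * x*(a - x)            # trial function
psi_pp = np.full_like(x, -2.0*np.sqrt(30.0/a**5))  # exact second derivative

E_trial = np.sum(psi_t * (-0.5) * psi_pp) * dx     # ∫ psi (-1/2) psi'' dx
E_exact = np.pi**2 / 2                             # = h^2/8ma^2 in these units
print(E_trial / E_exact)    # 10/pi^2 ≈ 1.01321
```

The ratio comes out to $10/\pi^2 \approx 1.01321$, matching the 1% error quoted in the text.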
The harmonic oscillator is a model which has several important applications in both classical and quantum mechanics. It serves as a prototype in the mathematical treatment of such diverse phenomena as elasticity, acoustics, AC circuits, molecular and crystal vibrations, electromagnetic fields and optical properties of matter. Classical Oscillator A simple realization of the harmonic oscillator in classical mechanics is a particle which is acted upon by a restoring force proportional to its displacement from its equilibrium position. Considering motion in one dimension, this means $F = -kx \label{1}$ Such a force might originate from a spring which obeys Hooke’s law, as shown in Figure $1$. According to Hooke’s law, which applies to real springs for sufficiently small displacements, the restoring force is proportional to the displacement, either stretching or compression, from the equilibrium position. The force constant $k$ is a measure of the stiffness of the spring. The variable $x$ is chosen equal to zero at the equilibrium position, positive for stretching, negative for compression. The negative sign in Equation $\ref{1}$ reflects the fact that $F$ is a restoring force, always in the opposite sense to the displacement $x$. Applying Newton’s second law to the force from Equation $\ref{1}$, we find $F = m \dfrac{d^2 x}{dt^2} = -kx \label{2}$ where $m$ is the mass of the body attached to the spring, which is itself assumed massless. This leads to a differential equation of familiar form, although with different variables: $\ddot{x}(t)+ \omega^2x(t)= 0 \label{3}$ with $\omega^2 \equiv \dfrac{k}{m}$ The dot notation (introduced by Newton himself) is used in place of primes when the independent variable is time. The general solution to Equation $\ref{3}$ is $x(t) = A\sin \omega t + B\cos \omega t \label{4}$ which represents periodic motion with a sinusoidal time dependence. This is known as simple harmonic motion and the corresponding system is known as a harmonic oscillator. 
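That Equation $\ref{4}$ solves Equation $\ref{3}$ can be confirmed with a quick finite-difference check; the constants $k$, $m$, $A$, and $B$ below are arbitrary illustrative values:

```python
import numpy as np

# Check that x(t) = A sin(wt) + B cos(wt) satisfies x'' + w^2 x = 0
# with w = sqrt(k/m); all numerical values are arbitrary test inputs.
k, m = 16.0, 0.25
w = np.sqrt(k/m)
A, B = 0.3, -1.1

t = np.linspace(0.0, 2.0, 20001)
h = t[1] - t[0]
x = A*np.sin(w*t) + B*np.cos(w*t)

x_pp = (x[2:] - 2*x[1:-1] + x[:-2]) / h**2     # numerical x''(t)
residual = x_pp + w**2 * x[1:-1]
print(np.max(np.abs(residual)))                # ~0, up to O(h^2) error
```

The residual vanishes for any choice of $A$ and $B$, reflecting the two free constants of the general solution.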
The oscillation occurs with a constant angular frequency $\omega = \sqrt{\dfrac{k}{m}}\; \text{radians per second} \label{5}$ This is called the natural frequency of the oscillator. The corresponding frequency in hertz (cycles per second) is $\nu = \dfrac{\omega}{2\pi } = \dfrac{1}{2\pi} \sqrt{\dfrac{k}{m}}\; \text{Hz} \label{6}$ The general relation between force and potential energy in a conservative system in one dimension is $F = -\dfrac{dV}{dx} \label{7}$ Thus the potential energy of a harmonic oscillator is given by $V(x) = \dfrac{1}{2}kx^2 \label{8}$ which has the shape of a parabola, as drawn in Figure $2$. A simple computation shows that the oscillator moves between positive and negative turning points $\pm x_{max}$ where the total energy $E$ equals the potential energy $\dfrac{1}{2} k x_{max}^{2}$ while the kinetic energy is momentarily zero. In contrast, when the oscillator moves past $x = 0$, the kinetic energy reaches its maximum value while the potential energy equals zero. Harmonic Oscillator in Quantum Mechanics Given the potential energy in Equation $\ref{8}$, we can write down the Schrödinger equation for the one-dimensional harmonic oscillator: $-\dfrac{\hbar^{2}}{2m} \psi''(x) + \dfrac{1}{2}kx^2 \psi(x) = E \psi(x) \label{9}$ For the first time we encounter a differential equation with non-constant coefficients, which is a much greater challenge to solve. We can combine the constants in Equation $\ref{9}$ into two parameters $\alpha^2 = \dfrac{mk}{\hbar^2}$ and $\lambda = \dfrac{2mE}{\hbar^2\alpha} \label{10}$ and redefine the independent variable as $\xi = \alpha^{1/2}x \label{11}$ This reduces the Schrödinger equation to $\psi''(\xi) + (\lambda-\xi^2)\psi(\xi) = 0\label{12}$ The range of the variable $x$ (also $\xi$) must be taken from $-\infty$ to $+\infty$, there being no finite cutoff as in the case of the particle in a box.
A useful first step is to determine the asymptotic solution to Equation $\ref{12}$, that is, the form of $\psi(\xi)$ as $\xi\rightarrow\pm\infty$. For sufficiently large values of $\lvert\xi\rvert$, $\xi^{2} \gg \lambda$ and the differential equation is approximated by $\psi''(\xi) - \xi^2\psi(\xi) \approx 0 \label{13}$ This suggests the following manipulation: $\left(\dfrac{d^2}{d\xi^2} - \xi^2 \right) \psi(\xi) \approx \left( \dfrac{d}{d\xi}-\xi \right) \left( \dfrac{d}{d\xi}+\xi \right) \psi(\xi) \approx 0 \label{14}$ The first-order differential equation $\psi'(\xi) + \xi\psi(\xi)=0 \label{15}$ can be solved exactly to give $\psi(\xi) = \text{const.}\, e^{-\xi^2/2} \label{16}$ Remarkably, this turns out to be an exact solution of the Schrödinger equation (Equation $\ref{12}$) with $\lambda=1$. Using Equation $\ref{10}$, this corresponds to an energy $E=\dfrac{\lambda\hbar^2\alpha}{2m} = \dfrac{1}{2}\hbar\sqrt{\dfrac{k}{m}} = \dfrac{1}{2} \hbar\omega \label{17}$ where $\omega$ is the natural frequency of the oscillator according to classical mechanics. The function in Equation $\ref{16}$ has the form of a Gaussian, the bell-shaped curve so beloved in the social sciences. The function has no nodes, which leads us to conclude that this represents the ground state of the system. The ground state is usually designated with the quantum number $n = 0$ (the particle in a box is an exception, with $n = 1$ labeling the ground state).
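The claim that the Gaussian of Equation $\ref{16}$ solves Equation $\ref{12}$ exactly with $\lambda = 1$ is easy to check numerically. This sketch evaluates the residual $\psi'' + (1 - \xi^2)\psi$ at a few sample points using a central finite difference:

```python
import math

# Check that psi(xi) = exp(-xi^2/2) satisfies psi'' + (1 - xi^2) psi = 0,
# i.e. Eq. (12) with lambda = 1, at several sample points.
def psi(xi):
    return math.exp(-xi * xi / 2)

h = 1e-4  # finite-difference step for the second derivative
residuals = []
for xi in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    d2 = (psi(xi + h) - 2 * psi(xi) + psi(xi - h)) / h**2
    residuals.append(abs(d2 + (1 - xi * xi) * psi(xi)))
print(max(residuals) < 1e-5)
```

The residual is zero to within finite-difference error, confirming that the asymptotic form is in fact an exact eigenfunction.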
Reverting to the original variable $x$, we write $\psi_{0}(x) = \text{const}\, e^{-\alpha x^2/2}$ with $\alpha=(mk/\hbar^2)^{1/2} \label{18}$ With help of the well-known definite integral (Laplace 1778) $\int^{\infty}_{-\infty} e^{- \alpha x^{2}} dx= \sqrt{\dfrac{\pi}{\alpha}} \label{19}$ we find the normalized eigenfunction $\psi_{0}(x)=\left(\dfrac{\alpha}{\pi}\right)^{1/4} e^{-\alpha x^{2}/2} \label{20}$ with the corresponding eigenvalue $E_{0}=\dfrac{1}{2}\hbar\omega \label{21}$ Drawing from our experience with the particle in a box, we might surmise that the first excited state of the harmonic oscillator would be a function similar to Equation $\ref{20}$, but with a node at $x=0$, say, $\psi_{1}(x)=\text{const}\, x\, e^{-\alpha x^{2}/2} \label{22}$ This is orthogonal to $\psi_0(x)$ by symmetry and is indeed an eigenfunction with the eigenvalue $E_{1}=\dfrac{3}{2}\hbar\omega \label{23}$ Continuing the process, we try a function with two nodes $\psi_{2}= \text{const}\, (x^{2}-a)\, e^{-\alpha x^{2}/2} \label{24}$ Using the integrals tabulated in Supplement 5 on Gaussian Integrals, we determine that setting $a=\dfrac{1}{2\alpha}$ makes $\psi_{2}(x)$ orthogonal to $\psi_{0}(x)$ and $\psi_{1}(x)$. We verify that this is another eigenfunction, corresponding to $E_{2}=\dfrac{5}{2}\hbar\omega \label{25}$ The general result, which follows from a more advanced mathematical analysis, gives the following formula for the normalized eigenfunctions: $\psi_{n}(x)=\left(\dfrac{\sqrt{\alpha}}{2^{n}n!\sqrt{\pi}}\right)^{1/2} H_{n}(\sqrt{\alpha}x)\, e^{-\alpha x^{2}/2} \label{26}$ where $H_{n}(\xi)$ represents the Hermite polynomial of degree $n$. The first few Hermite polynomials are $H_{0}(\xi)=1$ $H_{1}(\xi)=2\xi$ $H_{2}(\xi)=4\xi^{2}-2$ $H_{3}(\xi)=8\xi^{3}-12\xi \label{27}$ The four lowest harmonic-oscillator eigenfunctions are plotted in Figure $3$. Note the topological resemblance to the corresponding particle-in-a-box eigenfunctions.
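The orthonormality of the eigenfunctions built from Equations $\ref{26}$ and $\ref{27}$ can be verified by direct numerical integration. The sketch below takes $\alpha = 1$ (so that $\xi = x$) and uses a simple midpoint rule:

```python
import math

# Numerically verify orthonormality of the first four oscillator
# eigenfunctions psi_n(x) from Eq. (26), using the Hermite polynomials
# of Eq. (27) and taking alpha = 1.
H = [lambda x: 1.0,
     lambda x: 2 * x,
     lambda x: 4 * x**2 - 2,
     lambda x: 8 * x**3 - 12 * x]

def psi(n, x):
    norm = (1.0 / (2**n * math.factorial(n) * math.sqrt(math.pi)))**0.5
    return norm * H[n](x) * math.exp(-x**2 / 2)

def overlap(n, m, a=-10.0, b=10.0, steps=20000):
    # midpoint-rule approximation to the integral of psi_n * psi_m
    dx = (b - a) / steps
    return sum(psi(n, a + (i + 0.5) * dx) * psi(m, a + (i + 0.5) * dx)
               for i in range(steps)) * dx

print(all(abs(overlap(n, n) - 1) < 1e-6 for n in range(4)))          # normalized
print(all(abs(overlap(n, m)) < 1e-6 for n in range(4) for m in range(n)))  # orthogonal
```

Each diagonal overlap comes out equal to 1 and each off-diagonal overlap equal to 0, within quadrature error.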
The eigenvalues are given by the simple formula $E_{n}=\left(n+\dfrac{1}{2}\right)\hbar\omega \label{28}$ These are drawn in Figure $2$, on the same scale as the potential energy. The ground-state energy $E_{0}=\dfrac{1}{2}\hbar\omega$ is greater than the classical value of zero, again a consequence of the uncertainty principle. This means that the oscillator is always oscillating. It is remarkable that the difference between successive energy eigenvalues has a constant value $\Delta E=E_{n+1}-E_{n}=\hbar\omega=h\nu \label{29}$ This is reminiscent of Planck’s formula for the energy of a photon. It comes as no surprise then that the quantum theory of radiation has the structure of an assembly of oscillators, with each oscillator representing a mode of electromagnetic waves of a specified frequency.
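The constant level spacing of Equation $\ref{29}$ can be illustrated with a concrete diatomic example. The force constant and reduced mass for HCl used below are approximate literature values ($k \approx 516$ N/m, $\mu \approx 1.63 \times 10^{-27}$ kg), introduced here purely for illustration and not taken from the text:

```python
import math

# Apply E_n = (n + 1/2) hbar*omega to an HCl-like oscillator.
# k and mu below are approximate literature values (assumption).
hbar, h, c = 1.054571817e-34, 6.62607015e-34, 2.99792458e10  # c in cm/s
k, mu = 516.0, 1.6266e-27
omega = math.sqrt(k / mu)       # classical angular frequency, rad/s
dE = hbar * omega               # constant spacing between adjacent levels, J
wavenumber = dE / (h * c)       # the same spacing expressed in cm^-1
print(round(wavenumber))
```

The result, roughly $3000\ \mathsf{cm}^{-1}$, is the right order for the HCl vibrational fundamental, showing why infrared spectra are a direct probe of $\hbar\omega$.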
Particle in a Ring Consider a variant of the one-dimensional particle in a box problem in which the x-axis is bent into a ring of radius R. We can write the same Schrödinger equation $\dfrac{-\hbar^2}{2m} \dfrac{d^2 \psi(x)}{dx^2} = E \psi(x) \label{1}$ There are no boundary conditions in this case since the x-axis closes upon itself. A more appropriate independent variable for this problem is the angular position on the ring given by $\phi = x/R$. The Schrödinger equation would then read $-\dfrac{\hbar^2}{2mR^2} \dfrac{d^2 \psi (\phi)}{d\phi^2} = E \psi (\phi) \label{2}$ The kinetic energy of a body rotating in the xy-plane can be expressed as $E = \dfrac{L_z^2}{2I} \label{3}$ where $I = mR^2$ is the moment of inertia and $L_z$, the z-component of angular momentum. (Since $L = r \times p$, if r and p lie in the xy-plane, L points in the z-direction.) The structure of Equation $\ref{2}$ suggests that this angular-momentum operator is given by $\hat{L_z} = -{i} \hbar \dfrac{\partial}{\partial \phi} \label{4}$ This result will follow from a more general derivation in the following Section. The Schrödinger equation (Equation $\ref{2}$) can now be written more compactly as $\psi''(\phi) + m^2 \psi (\phi) = 0 \label{5}$ where $m^2 \equiv 2IE/ \hbar^2\label{6}$ (Please do not confuse this variable m with the mass of the particle!) Possible solutions to Equation $\ref{5}$ are $\psi (\phi) = \text{const}\, e^{\pm{i}m\phi} \label{7}$ For this wavefunction to be physically acceptable, it must be single-valued. Since $\phi$ increased by any multiple of $2\pi$ represents the same point on the ring, we must have $\psi (\phi + 2\pi ) = \psi (\phi) \label{8}$ and therefore $e^{{i}m (\phi + 2\pi)} = e^{{i}m \phi} \label{9}$ This requires that $e^{2\pi {i}m} = 1 \label{10}$ which is true only if $m$ is an integer: $m = 0, \pm 1, \pm 2...
\label{11}$ Using Equation $\ref{6}$, this gives the quantized energy values $E_m = \dfrac{\hbar^2}{2I} m^2 \label{12}$ In contrast to the particle in a box, the eigenfunctions corresponding to $+m$ and $-m$ (Equation $\ref{7}$) are linearly independent, so both must be accepted. Therefore all eigenvalues, except $E_0$, are two-fold (or doubly) degenerate. The eigenfunctions can all be written in the form const $e^{{i}m \phi}$, with m allowed to take positive or negative values (or 0), as in Equation $\ref{11}$. The normalized eigenfunctions are ${\psi _{m}} (\phi) = \dfrac{1}{\sqrt{2 \pi}} e^{im\phi} \label{13}$ and can be verified to satisfy the normalization condition containing the complex conjugate $\int\limits_{0}^{2\pi} {\psi_{m}^*} (\phi) {\psi _{m}} (\phi) d\phi = 1$ where we have noted that ${\psi_{m}^*} (\phi) = (2\pi)^{-1/2} e^{-{i}m\phi}$. The mutual orthogonality of the functions (Equation $\ref{13}$) also follows easily, for $\int\limits_{0}^{2\pi} {\psi_{m^\prime}^*} {\psi _{m}} (\phi) d\phi = \dfrac{1}{2\pi} \int\limits_{0}^{2\pi} e^{{i}(m-m^\prime) \phi} d\phi$ $= \dfrac{1}{2\pi} \int\limits_{0}^{2\pi} [\cos(m-m^\prime)\phi + {i} \sin(m-m^\prime)\phi] d\phi =0$ for $m^\prime \neq m$. The functions in Equation $\ref{13}$ are also eigenfunctions of the angular momentum operator (Equation $\ref{4}$), with $\hat{L_z} \psi_{m} (\phi) = m\hbar \psi_{m} (\phi), m = 0, \pm 1, \pm 2...$ This is an instance of a fundamental result in quantum mechanics, that any measured component of orbital angular momentum is restricted to integral multiples of $\hbar$. The Bohr theory of the hydrogen atom, to be discussed in the next Chapter, can be derived from this principle alone. Free Electron Model for Aromatic Molecules The benzene molecule consists of a ring of six carbon atoms around which six delocalized pi-electrons can circulate.
A variant of the FEM for rings predicts the ground-state electron configuration which we can write as $1\pi^{2} 2\pi^{4}$, as shown here: The enhanced stability of the benzene molecule can be attributed to the complete shells of $\pi$-electron orbitals, analogous to the way that noble gas electron configurations achieve their stability. Naphthalene, apart from the central C-C bond, can be modeled as a ring containing 10 electrons in the next closed-shell configuration $1\pi^{2} 2\pi^{4} 3\pi^{4}$. These molecules fulfill Hückel's "4N+2 rule" for aromatic stability. The molecules cyclobutadiene ${(1\pi^{2} 2\pi^{2})}$ and cyclooctatetraene ${(1\pi^{2} 2\pi^{4} 3\pi^{2})}$, even though they consist of rings with alternating single and double bonds, do not exhibit aromatic stability since they contain partially-filled orbitals. The longest wavelength absorption in the benzene spectrum can be estimated according to this model as $\dfrac{hc}{\lambda} = E_2 - E_1 = \dfrac{\hbar^2}{2mR^2} {(2^2 -1^2)}$ The ring radius R can be approximated by the C-C distance in benzene, 1.39 Å. We predict $\lambda \approx$ 210 nm, whereas the experimental absorption has $\lambda_{max} \approx$ 268 nm. Spherical Polar Coordinates The motion of a free particle on the surface of a sphere will involve components of angular momentum in three-dimensional space. Spherical polar coordinates provide the most convenient description for this and related problems with spherical symmetry. The position of an arbitrary point r is described by three coordinates $r , \theta, \phi$ as shown in Figure $2$.
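Returning to the free-electron estimate for benzene quoted above, the predicted absorption wavelength can be reproduced numerically. This sketch uses standard CODATA values for the constants and the stated ring radius of 1.39 Å:

```python
import math

# Free-electron ring model for benzene: hc/lambda = (hbar^2 / 2 m R^2)(2^2 - 1^2),
# with R taken as the C-C bond length.
hbar, h, c, me = 1.054571817e-34, 6.62607015e-34, 2.99792458e8, 9.1093837e-31
R = 1.39e-10                                 # ring radius, m (C-C distance)
dE = (hbar**2 / (2 * me * R**2)) * (2**2 - 1**2)   # HOMO -> LUMO gap, J
lam_nm = h * c / dE * 1e9                    # corresponding wavelength, nm
print(round(lam_nm))   # roughly 210 nm, versus the observed ~268 nm
```

The crude model lands within about 25% of experiment, which is respectable for a two-parameter picture of a delocalized $\pi$-system.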
These are connected to Cartesian coordinates by the relations $x = r \sin\theta \cos \phi$ $y = r \sin \theta \sin \phi$ $z = r \cos \theta$ The radial variable r represents the distance of the point from the origin, or the length of the vector r: $r= \sqrt{x^2 +y^2 +z^2}$ The coordinate $\theta$ is the angle between the vector r and the z-axis, similar to latitude in geography, but with $\theta= 0$ and $\theta = \pi$ corresponding to the North and South Poles, respectively. The angle $\phi$ describes the rotation of r about the z-axis, running from 0 to $2 \pi$, similar to geographic longitude. The volume element in spherical polar coordinates is given by $d \tau = r^2 \sin \theta \, dr \, d \theta \, d\phi,$ $r \in [0, \infty) , \theta \in [0, \pi], \phi \in [0, 2\pi ]$ and represented graphically by the distorted cube in Figure $3$.  Figure $3$: Volume element in spherical polar coordinates. (CC BY; OpenStax).  We also require the Laplacian operator $\nabla^{2} = \dfrac{1}{r^{2}} \dfrac{\partial}{\partial r} r^{2} \dfrac{\partial}{\partial r} + \dfrac{1}{r^2 \sin \theta} \dfrac{\partial}{\partial \theta } \sin \theta \dfrac{\partial}{\partial \theta } + \dfrac{1}{r^2 \sin^2 \theta} \dfrac{\partial^2}{\partial \phi^2}$ A detailed derivation is given in Supplement 6. Rotation in Three Dimensions A particle of mass M, free to move on the surface of a sphere of radius R, can be located by the two angular variables $\theta, \phi$. The Schrödinger equation therefore has the form $-\dfrac{\hbar^2}{2M} \nabla^{2} Y ({\theta , \phi}) = E Y ({\theta , \phi})$ with the wavefunction conventionally written as $Y ({\theta , \phi})$. These functions are known as spherical harmonics and have been used in applied mathematics long before quantum mechanics. Since $r = R$, a constant, the first term in the Laplacian does not contribute.
The Schrödinger equation reduces to $\left\{ \dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta } \sin \theta \dfrac{\partial}{\partial \theta } + \dfrac{1}{\sin^2 \theta} \dfrac{\partial^2}{\partial \phi^2} + \lambda \right\} Y ({ \theta , \phi}) = 0$ where $\lambda = \dfrac{2MR^2E}{\hbar^2} = \dfrac{2IE}{\hbar^2}$ again introducing the moment of inertia $I = MR^2$. The variables $\theta$ and $\phi$ can be separated after multiplying through by $\sin^2 \theta$. If we write $Y ({\theta , \phi }) = \Theta ({\theta }) \Phi ({\phi })$ and follow the procedure used for the three-dimensional box, we find that dependence on $\phi$ alone occurs in the term $\dfrac{\Phi^{\prime \prime} ({\phi)} }{\Phi ({\phi}) } = \text{const}$ This is identical in form to Equation $\ref{5}$, with the constant equal to $-m^2$, and we can write down the analogous solutions $\Phi_{m}({\phi}) = \sqrt{\dfrac{1}{2 \pi}} e^{im \phi}, m=0, \pm 1, \pm 2 ...$ Substituting this into the separated equation and cancelling the functions $\Phi ({\phi })$, we obtain an ordinary differential equation for $\Theta ({\theta })$ $\left \{ \dfrac{1}{\sin \theta} \dfrac{d}{d \theta} \sin \theta \dfrac{d}{d \theta} - \dfrac{m^2}{\sin^2 \theta} + \lambda \right \} \Theta ({\theta}) = 0$ Consulting our friendly neighborhood mathematician, we learn that the single-valued, finite solutions to this equation are known as associated Legendre functions. The parameters $\lambda$ and $m$ are restricted to the values $\lambda = \ell ({ \ell + 1}) , \ell = 0, 1, 2 ...$ while $m = 0, \pm 1, \pm 2 ... \pm \ell \quad (2 \ell +1 \text{ values})$ Combining the restriction on $\lambda$ with its definition above, the allowed energy levels for a particle on a sphere are found to be $E_{\ell} = \dfrac{\hbar^2}{2I} \ell ({ \ell + 1})$ Since the energy is independent of the second quantum number m, these levels are $({2 \ell+1})$-fold degenerate.
The spherical harmonics constitute an orthonormal set satisfying the integral relations $\int_0^{\pi} \int_0^{2 \pi} Y_{\ell^{\prime} m^{\prime}}^* ({ \theta , \phi }) Y_{\ell m} ({\theta , \phi}) \sin \theta \, d \theta \, d \phi = \delta_{\ell \ell^{\prime}} \delta_{mm^{\prime}}$ The following table lists the spherical harmonics through $\ell$ = 2, which will be sufficient for our purposes. Spherical Harmonics $Y_{\ell m} ({\theta , \phi})$ $Y_{00} = \left({\dfrac{1}{4 \pi}} \right)^{1/2}$ $Y_{10} = \left({\dfrac{3}{4 \pi}} \right)^{1/2} \cos \theta$ $Y_{1 \pm 1} = \mp \left({\dfrac{3}{4 \pi}} \right)^{1/2} \sin \theta e^{\pm i \phi}$ $Y_{20} = \left({\dfrac{5}{16 \pi}} \right)^{1/2} ({ 3 \cos^2 \theta - 1})$ $Y_{2 \pm 1} = \mp \left({\dfrac{15}{8 \pi}} \right)^{1/2} \cos \theta \sin \theta e^{\pm i \phi}$ $Y_{2 \pm 2} = \left({\dfrac{15}{32 \pi}} \right)^{1/2} \sin^2 \theta e^{\pm 2i \phi}$ A graphical representation of these functions is given in Figure $4$. Surfaces of constant absolute value are drawn, positive where green and negative where red. Theory of Angular Momentum Generalization of the energy-angular momentum relation in Equation $\ref{3}$ to three dimensions gives $E = \dfrac{L^2}{2I}$ Thus from the reduced Schrödinger equation above we can identify the operator for the square of total angular momentum $\hat{L^2} = -\hbar^2 \left\{ \dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta} + \dfrac{1}{\sin^2 \theta} \dfrac{\partial^{2}}{\partial \phi^{2}} \right\}$ The functions $Y_{\ell m} ({\theta , \phi})$ are then simultaneous eigenfunctions of $\hat{L^2}$ and $\hat{L}_z$ such that $\hat{L^2} Y_{\ell m} ({\theta , \phi}) = \ell ({\ell + 1}) \hbar^2 Y_{\ell m} ({\theta , \phi})$ and $\hat{L}_z Y_{\ell m} ({\theta, \phi}) = m \hbar Y_{\ell m} ({\theta , \phi})$ But the $Y_{\ell m} ({\theta , \phi})$ are not eigenfunctions of either $L_x$ or $L_y$ (unless $\ell$ = 0).
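The orthonormality relation can be checked directly for entries of the table. The sketch below implements $Y_{10}$ and $Y_{20}$ and integrates $Y^{*}Y \sin\theta$ over the full sphere with a midpoint rule:

```python
import math

# Check orthonormality of Y_10 and Y_20 from the table by numerical
# integration over theta in [0, pi] and phi in [0, 2*pi].
def Y10(t, p):
    return math.sqrt(3 / (4 * math.pi)) * math.cos(t)

def Y20(t, p):
    return math.sqrt(5 / (16 * math.pi)) * (3 * math.cos(t)**2 - 1)

def sphere_overlap(f, g, n=400):
    # midpoint rule with the sin(theta) factor from the volume element
    dt, dp = math.pi / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        row = sum(f(t, (j + 0.5) * dp) * g(t, (j + 0.5) * dp) for j in range(n))
        total += row * math.sin(t) * dt * dp
    return total

print(abs(sphere_overlap(Y10, Y10) - 1) < 1e-3)   # normalized
print(abs(sphere_overlap(Y20, Y20) - 1) < 1e-3)   # normalized
print(abs(sphere_overlap(Y10, Y20)) < 1e-3)       # orthogonal
```

Both functions are real, so the complex conjugate in the integrand is omitted; for $m \neq 0$ harmonics the conjugate would be required.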
Note that the magnitude of the total angular momentum $\sqrt{\ell ({\ell +1}) } \hbar$ is greater than its maximum observable component in any direction, namely $\ell \hbar$. The quantum-mechanical behavior of the angular momentum and its components can be represented by a vector model, illustrated in Figure $5$. The angular momentum vector L, with magnitude $\sqrt{\ell ({\ell +1}) } \hbar$, can be pictured as precessing about the z-axis, with its z-component $L_z$ constant. The components $L_x$ and $L_y$ fluctuate in the course of precession, corresponding to the fact that the system is not in an eigenstate of either. There are 2$\ell$ + 1 different allowed values for $L_z$, with eigenvalues $m \hbar ({ m = 0, \pm 1, \pm 2 ... \pm \ell })$ equally spaced between $+ \ell \hbar$ and $- \ell \hbar$.  Figure $5$: Vector model for angular momentum, showing the case $\ell$= 2. (Public Domain; Maschen). This discreteness in the allowed directions of the angular momentum vector is called space quantization. The existence of simultaneous eigenstates of $\hat{L^2}$ and any one component, conventionally $\hat{L}_z$, is consistent with the commutation relations derived in Chap. 4: $\left[ \hat{L}_x , \hat{L}_y \right] = i \hbar \hat{L}_z \quad \text{et cyc.}$ and $\left[ \hat{L^2} , \hat{L}_z \right] = 0$ Electron Spin The electron, as well as certain other fundamental particles, possesses an intrinsic angular momentum or spin, in addition to its orbital angular momentum. These two types of angular momentum are analogous to the daily and annual motions, respectively, of the Earth around the Sun. To distinguish the spin angular momentum from the orbital, we designate the quantum numbers as s and $m_s$, in place of $\ell$ and m. For the electron, the quantum number s always has the value $\dfrac{1}{2}$, while $m_s$ can have one of two values, $\pm \dfrac{1}{2}$. The electron is said to be an elementary particle of spin $\dfrac{1}{2}$.
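The vector-model relations apply equally to orbital and spin angular momentum. This small sketch (in units of $\hbar$) evaluates the magnitude $\sqrt{j(j+1)}$, the $2j+1$ allowed projections, and the fact that the magnitude always exceeds the largest projection, for the cases $\ell = 2$ and $s = \tfrac{1}{2}$:

```python
import math

# Vector model in units of hbar: magnitude sqrt(j(j+1)) versus the
# 2j+1 equally spaced projections m = -j ... +j.
def magnitude(j):
    return math.sqrt(j * (j + 1))

for j in (2, 0.5):                              # l = 2 and electron spin s = 1/2
    projections = [-j + k for k in range(int(2 * j) + 1)]
    print(j, round(magnitude(j), 4), projections,
          magnitude(j) > max(projections))      # |J| always exceeds max J_z
```

Because $\sqrt{j(j+1)} > j$ for every $j > 0$, the vector can never lie exactly along the z-axis, which is the geometric content of space quantization.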
The proton and neutron also have spin $\dfrac{1}{2}$ and belong to the classification of particles called fermions, which are governed by the Pauli exclusion principle. Other particles, including the photon, have integer values of spin and are classified as bosons. These do not obey the Pauli principle, so that an arbitrary number can occupy the same quantum state. A complete theory of spin requires relativistic quantum mechanics. For our purposes, it is sufficient to recognize the two possible internal states of the electron, which can be called 'spin up' and 'spin down.' These are designated, respectively, by $\alpha$ and $\beta$ as factors in the electron wavefunction. Spins play an essential role in determining the possible electronic states of atoms and molecules.
Atomic Spectra When gaseous hydrogen in a glass tube is excited by a $5000$-volt electrical discharge, four lines are observed in the visible part of the emission spectrum: red at $656.3$ nm, blue-green at $486.1$ nm, blue violet at $434.1$ nm and violet at $410.2$ nm: Other series of lines have been observed in the ultraviolet and infrared regions. Rydberg (1890) found that all the lines of the atomic hydrogen spectrum could be fitted to a single formula $\dfrac{1}{\lambda} = \mathcal{R} \left( \dfrac{1}{n_1^{2}} - \dfrac{1}{n_2^{2}} \right), \quad n_1 = 1, \: 2, \: 3..., \: n_2 > n_1 \label{1}$ where $\mathcal{R}$, known as the Rydberg constant, has the value $109,677$ cm$^{-1}$ for hydrogen. The reciprocal of wavelength, in units of cm$^{-1}$, is in general use by spectroscopists. This unit is also designated wavenumbers, since it represents the number of wavelengths per cm. The Balmer series of spectral lines in the visible region, shown in Figure $1$, correspond to the values $n_1 = 2, \: n_2 = 3, \: 4, \: 5$ and $6$. The lines with $n_1 = 1$ in the ultraviolet make up the Lyman series. The line with $n_2 = 2$, designated the Lyman alpha, has the longest wavelength (lowest wavenumber) in this series, with $1/ \lambda = 82,258$ cm$^{-1}$ or $\lambda = 121.57$ nm. Other atomic species have line spectra, which can be used as a "fingerprint" to identify the element. However, no atom other than hydrogen has a simple relation analogous to Equation $\ref{1}$ for its spectral frequencies. Bohr in 1913 proposed that all atomic spectral lines arise from transitions between discrete energy levels, giving a photon such that $\Delta E = h \nu = \dfrac{hc}{\lambda} \label{2}$ This is called the Bohr frequency condition. We now understand that the atomic transition energy $\Delta E$ is equal to the energy of a photon, as proposed earlier by Planck and Einstein.
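The four visible Balmer lines quoted above follow directly from the Rydberg formula (Equation $\ref{1}$) with $n_1 = 2$:

```python
# Reproduce the visible (Balmer) lines of hydrogen from the Rydberg formula
# 1/lambda = R (1/n1^2 - 1/n2^2) with n1 = 2.
R = 109677.0                              # cm^-1, hydrogen Rydberg constant
for n2 in [3, 4, 5, 6]:
    inv_lam = R * (1 / 2**2 - 1 / n2**2)  # wavenumber in cm^-1
    lam_nm = 1e7 / inv_lam                # wavelength in nm
    print(n2, round(lam_nm, 1))
```

The computed wavelengths agree with the quoted values to within a few tenths of a nanometer; the small residuals reflect rounding and the vacuum-versus-air wavelength convention.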
The Bohr Atom The nuclear model proposed by Rutherford in 1911 pictures the atom as a heavy, positively-charged nucleus, around which much lighter, negatively-charged electrons circulate, much like planets in the Solar system. This model is however completely untenable from the standpoint of classical electromagnetic theory, for an accelerating electron (circular motion represents an acceleration) should radiate away its energy. In fact, a hydrogen atom should exist for no longer than $5 \times 10^{-11}$ sec, time enough for the electron's death spiral into the nucleus. This is one of the worst quantitative predictions in the history of physics. It has been called the Hindenburg disaster on an atomic level. (Recall that the Hindenburg, a hydrogen-filled dirigible, crashed and burned in a famous disaster in 1937.) Bohr sought to avoid an atomic catastrophe by proposing that certain orbits of the electron around the nucleus could be exempted from classical electrodynamics and remain stable. The Bohr model was quantitatively successful for the hydrogen atom, as we shall now show. We recall that the attraction between two opposite charges, such as the electron and proton, is given by Coulomb's law $F = \begin{cases} -\dfrac{e^{2}}{r^{2}} & \mathsf{(gaussian \: units)} \\ -\dfrac{e^{2}}{4 \pi \epsilon_0 r^{2}} & \mathsf{(SI \: units)} \end{cases} \label{3}$ We prefer to use the Gaussian system in applications to atomic phenomena. Since the Coulomb attraction is a central force (dependent only on r), the potential energy is related by $F = -\dfrac{dV(r)}{dr} \label{4}$ We find therefore, for the mutual potential energy of a proton and electron, $V(r) = -\dfrac{e^2}{r} \label{5}$ Bohr considered an electron in a circular orbit of radius $r$ around the proton. To remain in this orbit, the electron must be experiencing a centripetal acceleration $a = -\dfrac{v^{2}}{r} \label{6}$ where $v$ is the speed of the electron.
Using Equations $\ref{4}$ and $\ref{6}$ in Newton's second law, we find $\dfrac{e^{2}}{r^{2}} = \dfrac{mv^{2}}{r} \label{7}$ where $m$ is the mass of the electron. For simplicity, we assume that the proton mass is infinite (actually $m_p \approx 1836 m_e$) so that the proton's position remains fixed. We will later correct for this approximation by introducing reduced mass. The energy of the hydrogen atom is the sum of the kinetic and potential energies: $E = T + V = \dfrac{1}{2} mv^{2} - \dfrac{e^{2}}{r} \label{8}$ Using Equation $\ref{7}$, we see that $T = -\dfrac{1}{2} V \qquad \mathsf{and} \qquad E = \dfrac{1}{2} V = -T \label{9}$ This is the form of the virial theorem for a force law varying as $r^{-2}$. Note that the energy of a bound atom is negative, since it is lower than the energy of the separated electron and proton, which is taken to be zero. For further progress, we need some restriction on the possible values of $r$ or $v$. This is where we can introduce the quantization of angular momentum $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. Since $\mathbf{p}$ is perpendicular to $\mathbf{r}$, we can write simply $L = rp = mvr \label{10}$ Using Equations $\ref{7}$ and $\ref{10}$, we find also that $r = \dfrac{L^{2}}{me^{2}} \label{11}$ We introduce angular momentum quantization, writing $L = n\hbar, \qquad n = 1, \: 2... \label{12}$ excluding $n = 0$, since the electron would then not be in a circular orbit. The allowed orbital radii are then given by $r_n = n^{2} a_0 \label{13}$ where $a_0 \equiv \dfrac{\hbar^{2}}{me^{2}} = 5.29 \times 10^{-11} \: \mathsf{m} = 0.529 \: \text{Å} \label{14}$ which is known as the Bohr radius. The corresponding energy is $E_n = -\dfrac{e^{2}}{2a_0n^{2}} = -\dfrac{me^{4}}{2\hbar^{2}n^{2}}, \qquad n = 1, \: 2... \label{15}$ Rydberg's formula (Equation $\ref{1}$) can now be deduced from the Bohr model.
We have $\dfrac{hc}{\lambda} = E_{n_2} - E_{n_1} = \dfrac{2\pi^{2}me^{4}}{h^{2}} \left( \dfrac{1}{n_1^{2}} - \dfrac{1}{n_2^{2}} \right) \label{16}$ and the Rydberg constant can be identified as $\mathcal{R} = \dfrac{2\pi^{2}me^{4}}{h^{3}c} \approx 109,737 \: \mathsf{cm}^{-1} \label{17}$ The slight discrepancy with the experimental value for hydrogen $(109,677)$ is due to the finite proton mass. This will be corrected later. The Bohr model can be readily extended to hydrogenlike ions, systems in which a single electron orbits a nucleus of arbitrary atomic number $Z$. Thus $Z = 1$ for hydrogen, $Z = 2$ for $\mathsf{He}^{+}$, $Z = 3$ for $\mathsf{Li}^{++}$, and so on. The Coulomb potential (Equation $\ref{5}$) generalizes to $V(r) = -\dfrac{Ze^{2}}{r}, \label{18}$ the radius of the orbit (Equation $\ref{13}$) becomes $r_n = \dfrac{n^{2}a_0}{Z} \label{19}$ and the energy (Equation $\ref{15}$) becomes $E_n = -\dfrac{Z^{2}e^{2}}{2a_0n^{2}} \label{20}$ De Broglie's proposal that electrons can have wavelike properties was actually inspired by the Bohr atomic model. Since $L = rp = n\hbar = \dfrac{nh}{2\pi} \label{21}$ we find $2\pi r = \dfrac{nh}{p} = n\lambda \label{22}$ Therefore, each allowed orbit traces out an integral number of de Broglie wavelengths. Wilson (1915) and Sommerfeld (1916) generalized Bohr's formula for the allowed orbits to $\oint p \, dr = nh, \qquad n =1, \: 2... \label{23}$ The Sommerfeld-Wilson quantum conditions (Equation $\ref{23}$) reduce to Bohr's results for circular orbits, but allow, in addition, elliptical orbits along which the momentum $p$ is variable. According to Kepler's first law of planetary motion, the orbits of planets are ellipses with the Sun at one focus. Figure $2$ shows the generalization of the Bohr theory for hydrogen, including the elliptical orbits. The lowest energy state $n = 1$ is still a circular orbit. But $n = 2$ allows an elliptical orbit in addition to the circular one; $n = 3$ has three possible orbits, and so on.
The energy still depends on $n$ alone, so that the elliptical orbits represent degenerate states. Atomic spectroscopy shows in fact that energy levels with $n > 1$ consist of multiple states, as implied by the splitting of atomic lines by an electric field (Stark effect) or a magnetic field (Zeeman effect). Some of these generalized orbits are drawn schematically in Figure $2$. The Bohr model was an important first step in the historical development of quantum mechanics. It introduced the quantization of atomic energy levels and gave quantitative agreement with the atomic hydrogen spectrum. With the Sommerfeld-Wilson generalization, it accounted as well for the degeneracy of hydrogen energy levels. Although the Bohr model was able to sidestep the atomic "Hindenburg disaster," it cannot avoid what we might call the "Heisenberg disaster." By this we mean that the assumption of well-defined electronic orbits around a nucleus is completely contrary to the basic premises of quantum mechanics. Another flaw in the Bohr picture is that the angular momenta are all too large by one unit; for example, the ground state actually has zero orbital angular momentum (rather than $\hbar$). Quantum Mechanics of Hydrogenlike Atoms In contrast to the particle in a box and the harmonic oscillator, the hydrogen atom is a real physical system that can be treated exactly by quantum mechanics. In addition to their inherent significance, these solutions suggest prototypes for atomic orbitals used in approximate treatments of complex atoms and molecules.
For an electron in the field of a nucleus of charge $+Ze$, the Schrödinger equation can be written $\left\{ -\dfrac{\hbar^{2}}{2m} \nabla^{2} - \dfrac{Ze^{2}}{r} \right\} \psi(r) = E\psi(r) \label{24}$ It is convenient to introduce atomic units in which length is measured in bohrs: $a_0 = \dfrac{\hbar^{2}}{me^{2}} = 5.29 \times 10^{-11} \: \mathsf{m} \equiv 1 \: \mathsf{bohr}$ and energy in hartrees: $\dfrac{e^2}{a_0} = 4.358 \times 10^{-18} \: \mathsf{J} = 27.211 \: \mathsf{eV} \equiv 1 \: \mathsf{hartree}$ Electron volts $(\mathsf{eV})$ are a convenient unit for atomic energies. One $\mathsf{eV}$ is defined as the energy an electron gains when accelerated across a potential difference of $1 \: \mathsf{volt}$. The ground state of the hydrogen atom has an energy of $-1/2 \: \mathsf{hartree}$ or $-13.6 \: \mathsf{eV}$. Conversion to atomic units is equivalent to setting $\hbar = e = m = 1$ in all formulas containing these constants. Rewriting the Schrödinger equation in atomic units, we have $\left\{ -\dfrac{1}{2} \nabla^{2} - \dfrac{Z}{r} \right\} \psi(r) = E\psi(r) \label{25}$ Since the potential energy is spherically symmetrical (a function of $r$ alone), it is obviously advantageous to treat this problem in spherical polar coordinates $r, \: \theta, \: \phi$. Expressing the Laplacian operator in these coordinates [cf. Eq (6-20)], $-\dfrac{1}{2} \left\{ \dfrac{1}{r^{2}} \dfrac{\partial}{\partial r} r^{2} \dfrac{\partial}{\partial r} + \dfrac{1}{r^{2}\sin\theta} \dfrac{\partial}{\partial \theta} \sin\theta \dfrac{\partial}{\partial \theta} + \dfrac{1}{r^{2}\sin^{2}\theta} \dfrac{\partial^{2}}{\partial\phi^{2}} \right\} \ \times \psi(r, \: \theta, \: \phi) - \dfrac{Z}{r} \psi(r, \: \theta, \: \phi) = E\psi(r, \: \theta, \: \phi) \label{26}$ Equation $\ref{26}$ shows that the second and third terms in the Laplacian represent the angular momentum operator $\hat{L}^{2}$.
Clearly, Equation $\ref{26}$ will have separable solutions of the form $\psi(r, \: \theta, \: \phi) = R(r)Y_{\ell m}(\theta, \: \phi) \label{27}$ Substituting Equation $\ref{27}$ into Equation $\ref{26}$ and using the angular momentum eigenvalue equation (Equation 6-34), we obtain an ordinary differential equation for the radial function $R(r)$: $\left\{ -\dfrac{1}{2r^{2}} \dfrac{d}{dr} r^{2} \dfrac{d}{dr} + \dfrac{\ell(\ell + 1)}{2r^{2}} - \dfrac{Z}{r} \right\} R(r) = ER(r) \label{28}$ Note that in the domain of the variable $r$, the angular momentum contribution $\ell (\ell + 1) / 2r^{2}$ acts as an effective addition to the potential energy. It can be identified with centrifugal force, which pulls the electron outward, in opposition to the Coulomb attraction. Carrying out the successive differentiations in Equation $\ref{28}$ and simplifying, we obtain $\dfrac{1}{2}R''(r) + \dfrac{1}{r}R'(r) + \left[\dfrac{Z}{r} - \dfrac{\ell(\ell + 1)}{2r^{2}} + E\right]R(r) = 0 \label{29}$ another second-order linear differential equation with non-constant coefficients. It is again useful to explore the asymptotic solutions to Equation $\ref{29}$, as $r \rightarrow \infty$. In the asymptotic approximation, $R''(r) - 2\lvert E \rvert R(r) \approx 0 \label{30}$ having noted that the energy $E$ is negative for bound states. Solutions to Equation $\ref{30}$ are $R(r) \approx \mathsf{const} \, e^{\pm\sqrt{2\lvert E \rvert}r} \label{31}$ We reject the positive exponential on physical grounds, since $R(r) \rightarrow \infty$ as $r \rightarrow \infty$, in violation of the requirement that the wavefunction must be finite everywhere.
Choosing the negative exponential and setting $E = -Z^{2}/2$, the ground-state energy in the Bohr theory (in atomic units), we obtain $R(r) \approx \mathsf{const} \, e^{-Zr} \label{32}$ It turns out, very fortunately, that this asymptotic approximation is also an exact solution of the Schrödinger equation (Equation $\ref{29}$) with $\ell = 0$, just as happened for the harmonic-oscillator problem in Chap. 5. The solutions to Equation $\ref{29}$, designated $R_{n\ell}(r)$, are labeled by $n$, known as the principal quantum number, as well as by the angular momentum $\ell$, which is a parameter in the radial equation. The solution in Equation $\ref{32}$ corresponds to $R_{10}(r)$. This should be normalized according to the condition $\int_{0}^{\infty} [R_{10}(r)]^{2} \, r^{2} \, dr = 1 \label{33}$ A useful definite integral is $\int_{0}^{\infty} r^{n} \, e^{-\alpha r} \, dr = \dfrac{n!}{\alpha^{n + 1}} \label{34}$ The normalized radial function is thereby given by $R_{10}(r) = 2Z^{3/2} e^{-Zr} \label{35}$ Since this function is nodeless, we identify it with the ground state of the hydrogenlike atom. Multiplying Equation $\ref{35}$ by the spherical harmonic $Y_{00} = 1/ \sqrt{4\pi}$, we obtain the total wavefunction (Equation $\ref{27}$) $\psi_{100}(r, \theta, \phi) = \left( \dfrac{Z^{3}}{\pi} \right)^{1/2} e^{-Zr} \label{36}$ This is conventionally designated as the 1s function $\psi_{1s}(r)$. Integrals in spherical-polar coordinates over a spherically-symmetrical integrand (like the 1s orbital) can be significantly simplified. We can do the reduction $\int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2\pi} f(r) \, r^{2} \, \sin\theta \, dr \, d\theta \, d\phi = \int_{0}^{\infty} f(r) \, 4\pi r^{2} \, dr \label{37}$ since integration over $\theta$ and $\phi$ gives $4\pi$, the total solid angle of a sphere.
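As a numerical aside (an illustrative Python sketch, not part of the original text), the definite integral formula (34) and the normalization condition (33) for $R_{10}$ can be checked by simple midpoint quadrature:

```python
import math

def radial_integral(n, alpha, rmax=50.0, steps=50000):
    """Midpoint-rule estimate of the integral of r^n e^(-alpha r) from 0 to rmax."""
    h = rmax / steps
    return sum(((i + 0.5) * h)**n * math.exp(-alpha * (i + 0.5) * h)
               for i in range(steps)) * h

# Check the formula: integral = n! / alpha^(n+1)
for n, alpha in [(2, 2.0), (3, 1.0), (4, 0.5)]:
    exact = math.factorial(n) / alpha**(n + 1)
    assert abs(radial_integral(n, alpha) - exact) < 1e-4 * exact

# Normalization of R_10(r) = 2 Z^(3/2) e^(-Zr): integral of R_10^2 r^2 dr = 1
Z = 1.0
norm = 4.0 * Z**3 * radial_integral(2, 2.0 * Z)
print(round(norm, 6))  # ≈ 1.0
```

The same quadrature routine could be reused for any of the $R_{n\ell}$ normalization integrals, since they all reduce to sums of terms of the form $\int_0^\infty r^n e^{-\alpha r}\,dr$.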
The normalization of the 1s wavefunction can thus be written as $\int_{0}^{\infty} [\psi_{1s}(r)]^{2} \, 4\pi r^{2} \, dr = 1 \label{38}$ Hydrogen Atom Ground State There are a number of different ways of representing hydrogen-atom wavefunctions graphically. We will illustrate some of these for the 1s ground state. In atomic units, $\psi_{1s}(r) = \dfrac{1}{\sqrt{\pi}}e^{-r} \label{39}$ is a decreasing exponential function of a single variable $r$, and is simply plotted in Figure 3. Figure $4$ gives a somewhat more pictorial representation, a three-dimensional contour plot of $\psi_{1s}(r)$ as a function of $x$ and $y$ in the $x$, $y$-plane. According to Born's interpretation of the wavefunction, the probability per unit volume of finding the electron at the point $(r, \: \theta, \: \phi)$ is equal to the square of the normalized wavefunction $\rho_{1s}(r) = [\psi_{1s}(r)]^{2} = \dfrac{1}{\pi}e^{-2r} \label{40}$ This is represented in Figure 5 by a scatter plot describing a possible sequence of observations of the electron position. Although results of individual measurements are not predictable, a statistical pattern does emerge after a sufficiently large number of measurements. The probability density is normalized such that $\int_{0}^{\infty} \rho_{1s}(r) \, 4\pi r^{2} \, dr = 1 \label{41}$ In some ways $\rho (r)$ does not provide the best description of the electron distribution, since the region around $r = 0$, where the wavefunction has its largest values, is a relatively small fraction of the volume accessible to the electron. Larger radii $r$ represent larger physical regions since, in spherical polar coordinates, a value of $r$ is associated with a shell of volume $4\pi r^{2} \, dr$.
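The weighting by shell volume can be made concrete. A short sketch (Python, $Z = 1$, atomic units; an illustrative addition, not part of the original text) shows that $4\pi r^{2}\rho_{1s}(r)$ integrates to 1, peaks at $r = 1$ bohr, and gives a mean radius $\langle r \rangle = 3/2$ bohr:

```python
import math

def rho_1s(r):
    # probability density |psi_1s|^2 = (1/pi) e^(-2r), Equation (40)
    return math.exp(-2.0 * r) / math.pi

def shell_weight(r):
    # probability per unit r: rho_1s(r) * 4 pi r^2, cf. Equation (41)
    return rho_1s(r) * 4.0 * math.pi * r * r

h, steps = 0.001, 30000                 # midpoint grid out to r = 30 bohr
rs = [(i + 0.5) * h for i in range(steps)]
norm = sum(shell_weight(r) for r in rs) * h
mean_r = sum(r * shell_weight(r) for r in rs) * h
r_peak = max(rs, key=shell_weight)
print(round(norm, 4), round(mean_r, 4), round(r_peak, 2))  # ≈ 1.0, 1.5, 1.0
```

The peak at $r = 1$ bohr occurs even though $\rho_{1s}(r)$ itself is largest at the nucleus: the growing shell volume $4\pi r^2\,dr$ outweighs the decaying density out to one bohr.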
A more significant measure is therefore the radial distribution function $D_{1s}(r) = 4\pi r^{2} [\psi_{1s}(r)]^{2} \label{42}$ which represents the probability density within the entire shell of radius $r$, normalized such that $\int_{0}^{\infty} D_{1s}(r) \, dr = 1 \label{43}$ The functions $\rho_{1s}(r)$ and $D_{1s}(r)$ are both shown in Figure $6$. Remarkably, the 1s RDF has its maximum at $r = a_0$, equal to the radius of the first Bohr orbit. Atomic Orbitals The general solution for $R_{n\ell}(r)$ has a rather complicated form which we give without proof: $R_{n\ell}(r) = N_{n\ell} \, \rho^{\ell} \, L_{n + \ell}^{2\ell + 1}(\rho) \, e^{-\rho /2} \qquad \rho \equiv \dfrac{2Zr}{n} \label{44}$ Here $L_{\beta}^{\alpha}$ is an associated Laguerre polynomial and $N_{n\ell}$, a normalizing constant. The angular momentum quantum number $\ell$ is by convention designated by a code: s for $\ell = 0$, p for $\ell = 1$, d for $\ell = 2$, f for $\ell = 3$, g for $\ell = 4$, and so on. The first four letters come from an old classification scheme for atomic spectral lines: sharp, principal, diffuse and fundamental. Although these designations have long since outlived their original significance, they remain in general use. The solutions of the hydrogenic Schrödinger equation in spherical polar coordinates can now be written in full $\psi_{n\ell m}(r, \: \theta, \: \phi) = R_{n\ell}(r)Y_{\ell m}(\theta, \: \phi) \\ n = 1, \: 2... \qquad \ell = 0, \: 1... \: n - 1 \qquad m = 0, \: \pm 1, \: \pm 2... \: \pm \ell \label{45}$ where $Y_{\ell m}$ are the spherical harmonics tabulated in Chap. 6. Table 1 below enumerates all the hydrogenic functions we will actually need. These are called hydrogenic atomic orbitals, in anticipation of their later applications to the structure of atoms and molecules. Table 1. Real hydrogenic functions in atomic units.
$\psi_{1s} = \dfrac{1}{\sqrt{\pi}} e^{-r}$ $\psi_{2s} = \dfrac{1}{2\sqrt{2\pi}} \left( 1 - \dfrac{r}{2} \right) e^{-r/2}$ $\psi_{2p_z} = \dfrac{1}{4\sqrt{2\pi}} z \, e^{-r/2}$ $\psi_{2p_x}, \: \psi_{2p_y} \qquad \mathsf{analogous}$ $\psi_{3s} = \dfrac{1}{81\sqrt{3\pi}} (27 - 18r + 2r^{2}) e^{-r/3}$ $\psi_{3p_z} = \dfrac{\sqrt{2}}{81\sqrt{\pi}} (6 - r) z \, e^{-r/3}$ $\psi_{3p_x}, \: \psi_{3p_y} \qquad \mathsf{analogous}$ $\psi_{3d_{z^{2}}} = \dfrac{1}{81\sqrt{6\pi}}(3z^{2} - r^{2}) e^{-r/3}$ $\psi_{3d_{zx}} = \dfrac{\sqrt{2}}{81\sqrt{\pi}}zx \, e^{-r/3}$ $\psi_{3d_{yz}}, \: \psi_{3d_{xy}} \qquad \mathsf{analogous}$ $\psi_{3d_{x^{2} - y^{2}}} = \dfrac{1}{81\sqrt{\pi}}(x^{2} - y^{2}) e^{-r/3}$ The energy levels for a hydrogenic system are given by $E_n = -\dfrac{Z^{2}}{2n^{2}} \: \mathsf{hartrees} \label{46}$ and depend on the principal quantum number alone. Considering all the allowed values of $\ell$ and $m$, the level $E_n$ has a degeneracy of $n^{2}$. Figure 7 shows an energy level diagram for hydrogen $(Z = 1)$. For $E \geq 0$, the energy spectrum is a continuum, since the electron is in fact a free particle. The continuum represents states of an electron and proton in interaction, but not bound into a stable atom. Figure $7$ also shows some of the transitions which make up the Lyman series in the ultraviolet and the Balmer series in the visible region. The $ns$ orbitals are all spherically symmetrical, being associated with a constant angular factor, the spherical harmonic $Y_{00} = 1/ \sqrt{4\pi}$. They have $n - 1$ radial nodes: spherical shells on which the wavefunction equals zero. The 1s ground state is nodeless and the number of nodes increases with energy, in a pattern now familiar from our study of the particle-in-a-box and harmonic oscillator. The 2s orbital, with its radial node at $r = 2$ bohr, is also shown in Figure $3$. p- and d-Orbitals The lowest-energy solutions deviating from spherical symmetry are the 2p-orbitals.
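Before taking up the 2p-orbitals in detail, the level formula in Equation (46) can be checked against the spectrum. A short sketch (Python; the conversion factor 27.211 eV/hartree is assumed; an illustrative addition, not from the original text) tabulates the degeneracies and the leading Lyman and Balmer transition energies:

```python
def E(n, Z=1):
    # hydrogenic energy levels in hartrees: E_n = -Z^2 / (2 n^2)
    return -Z**2 / (2.0 * n * n)

hartree_eV = 27.211

# degeneracy of level n is n^2 (ignoring spin)
degeneracies = [n * n for n in range(1, 5)]
print(degeneracies)  # [1, 4, 9, 16]

# Lyman series (n -> 1, ultraviolet) and Balmer series (n -> 2, visible), in eV
lyman = [(E(n) - E(1)) * hartree_eV for n in range(2, 6)]
balmer = [(E(n) - E(2)) * hartree_eV for n in range(3, 6)]
print(round(lyman[0], 2), round(balmer[0], 2))  # 10.2 and 1.89
```

The first Balmer energy, about 1.89 eV, corresponds to the familiar red hydrogen line near 656 nm, while the first Lyman transition at about 10.2 eV lies well into the ultraviolet.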
Using Equations $\ref{44}$, $\ref{45}$ and the $\ell = 1$ spherical harmonics, we find three degenerate eigenfunctions: $\psi_{210}(r, \: \theta, \: \phi) = \dfrac{1}{4\sqrt{2\pi}}re^{-r/2} \cos\theta \label{47}$ and $\psi_{21 \pm 1}(r, \: \theta, \: \phi) = \mp \dfrac{1}{8\sqrt{\pi}}re^{-r/2} \sin\theta e^{\pm i \phi} \label{48}$ The function $\psi_{210}$ is real and contains the factor $r \cos\theta$, which is equal to the cartesian variable $z$. In chemical applications, this is designated as a 2pz orbital: $\psi_{2p_z} = \dfrac{1}{4\sqrt{2\pi}}ze^{-r/2} \label{49}$ A contour plot is shown in Figure $8$. Note that this function is cylindrically-symmetrical about the $z$-axis with a node in the $x$, $y$-plane. The $\psi_{21 \pm 1}$ are complex functions and not as easy to represent graphically. Their angular dependence is that of the spherical harmonics $Y_{1 \pm 1}$, shown in Figure 6-4. As noted in Chap. 4, any linear combination of degenerate eigenfunctions is an equally-valid alternative eigenfunction. Making use of the Euler formulas for sine and cosine $\cos\phi = \dfrac{e^{i\phi} + e^{-i\phi}}{2} \qquad \mathsf{and} \qquad \sin\phi = \dfrac{e^{i\phi} - e^{-i\phi}}{2i} \label{50}$ and noting that the combinations $r\sin\theta\cos\phi$ and $r\sin\theta\sin\phi$ correspond to the cartesian variables $x$ and $y$, respectively, we can define the alternative 2p orbitals $\psi_{2p_x} = \dfrac{1}{\sqrt{2}}(\psi_{21-1} - \psi_{211}) = \dfrac{1}{4\sqrt{2\pi}} xe^{-r/2} \label{51}$ and $\psi_{2p_y} = -\dfrac{i}{\sqrt{2}}(\psi_{21-1} + \psi_{211}) = \dfrac{1}{4\sqrt{2\pi}} ye^{-r/2} \label{52}$ Clearly, these have the same shape as the 2pz-orbital, but are oriented along the $x$- and $y$-axes, respectively. The threefold degeneracy of the p-orbitals is very clearly shown by the geometric equivalence of the functions 2px, 2py and 2pz, which is not obvious for the spherical harmonics.
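The reduction to a real combination can be checked numerically. The sketch below (illustrative, not from the original text) assumes the normalized hydrogenic forms $\psi_{21\pm1} = \mp(1/8\sqrt{\pi})\,re^{-r/2}\sin\theta\, e^{\pm i\phi}$ with Condon-Shortley phases, and verifies the $2p_x$ combination at random points:

```python
import math, cmath, random

SQRT_PI = math.sqrt(math.pi)

def psi_21m(m, r, theta, phi):
    # assumed normalized forms: psi_21,+-1 = -(+-)(1/(8 sqrt(pi))) r e^(-r/2) sin(theta) e^(+-i phi)
    sign = -1.0 if m == 1 else 1.0
    return sign * (1.0 / (8.0 * SQRT_PI)) * r * math.exp(-r / 2.0) \
           * math.sin(theta) * cmath.exp(1j * m * phi)

random.seed(1)
for _ in range(100):
    r = random.uniform(0.1, 10.0)
    theta = random.uniform(0.0, math.pi)
    phi = random.uniform(0.0, 2.0 * math.pi)
    x = r * math.sin(theta) * math.cos(phi)
    # 2p_x = (psi_21,-1 - psi_21,+1)/sqrt(2) should equal (1/(4 sqrt(2 pi))) x e^(-r/2)
    lhs = (psi_21m(-1, r, theta, phi) - psi_21m(1, r, theta, phi)) / math.sqrt(2.0)
    rhs = (1.0 / (4.0 * math.sqrt(2.0 * math.pi))) * x * math.exp(-r / 2.0)
    assert abs(lhs - rhs) < 1e-12
print("2p_x combination verified")
```

The imaginary parts of the two exponentials cancel in the combination, leaving the purely real function $x\,f(r)$, which is the essential point of the construction.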
The functions listed in Table 1 are, in fact, the real forms for all atomic orbitals, which are more useful in chemical applications. All higher p-orbitals have analogous functional forms $x \, f(r)$, $y \, f(r)$ and $z \, f(r)$ and are likewise 3-fold degenerate. The orbital $\psi_{320}$ is, like $\psi_{210}$, a real function. It is known in chemistry as the $d_{z^{2}}$-orbital and can be expressed as a cartesian factor times a function of $r$: $\psi_{3d_{z^{2}}} = \psi_{320} = (3z^{2} - r^{2}) f(r) \label{53}$ A contour plot is shown in Figure $9$. This function is also cylindrically symmetric about the $z$-axis with two angular nodes: the conical surfaces with $3z^{2} - r^{2} = 0$. The remaining four 3d orbitals are complex functions containing the spherical harmonics $Y_{2 \pm 1}$ and $Y_{2 \pm 2}$ pictured in Figure 6-4. We can again construct real functions from linear combinations, the result being four geometrically equivalent "four-leaf clover" functions with two perpendicular planar nodes. These orbitals are designated $d_{x^{2} - y^{2}}, \: d_{xy}, \: d_{zx}$ and $d_{yz}$. Two of them are shown in Figure 9. The $d_{z^{2}}$ orbital has a different shape. However, it can be expressed in terms of two non-standard d-orbitals, $d_{z^{2} - x^{2}}$ and $d_{y^{2} - z^{2}}$. The latter functions, along with $d_{x^{2} - y^{2}}$, add to zero and thus constitute a linearly dependent set. Two combinations of these three functions can be chosen as independent eigenfunctions. Summary The atomic orbitals listed in Table 1 are illustrated in Figure $20$. Blue and red indicate, respectively, positive and negative regions of the wavefunctions (the radial nodes of the 2s and 3s orbitals are obscured). These pictures are intended as stylized representations of atomic orbitals and should not be interpreted as quantitatively accurate.
The electron charge distribution in an orbital $\psi_{n\ell m}(\mathbf{r})$ is given by $\rho(\mathbf{r}) = \lvert \psi_{n\ell m}(\mathbf{r}) \rvert ^{2} \label{54}$ which for the s-orbitals is a function of $r$ alone. The radial distribution function can be defined, even for orbitals containing angular dependence, by $D_{n\ell}(r) = r^{2} [R_{n\ell}(r)]^{2} \label{55}$ This represents the electron density in a shell of radius $r$, including all values of the angular variables $\theta$, $\phi$. Figure $11$ shows plots of the RDF for the first few hydrogen orbitals. Contributors and Attributions Seymour Blinder (Professor Emeritus of Chemistry and Physics at the University of Michigan, Ann Arbor) • Integrated by Daniel SantaLucia (Chemistry student at Hope College, Holland MI)
The second element in the periodic table provides our first example of a quantum-mechanical problem which cannot be solved exactly. Nevertheless, as we will show, approximation methods applied to helium can give accurate solutions in essentially perfect agreement with experimental results. In this sense, it can be concluded that quantum mechanics is correct for atoms more complicated than hydrogen. By contrast, the Bohr theory failed miserably in attempts to apply it beyond the hydrogen atom. The helium atom has two electrons bound to a nucleus with charge Z = 2. The successive removal of the two electrons can be diagrammed as $\ce{He}\xrightarrow{\textit{I}_1}\ce{He}^++e^-\xrightarrow{\textit{I}_2}\ce{He}^{++}+2e^-\label{1}$ The first ionization energy $I_1$, the minimum energy required to remove the first electron from helium, is experimentally 24.59 eV. The second ionization energy, $I_2$, is 54.42 eV. The last result can be calculated exactly since $\ce{He}^+$ is a hydrogen-like ion. We have $\textit{I}_2=-\textit{E}_{1s}(\ce{He}^+)=\dfrac{Z^2}{2n^2}=2 \mbox{ hartrees}=54.42\mbox{ eV}\label{2}$ The energy of the three separated particles on the right side of Equation $\ref{1}$ is, by definition, zero. Therefore the ground-state energy of the helium atom is given by $E_0=-(\textit{I}_1+\textit{I}_2)=-79.02\mbox{ eV}=-2.90372\mbox{ hartrees}$. We will attempt to reproduce this value, as closely as possible, by theoretical analysis. Schrödinger Equation and Variational Calculations The Schrödinger equation for the He atom, again using atomic units and assuming infinite nuclear mass, can be written $\bigg\{-\dfrac{1}{2}\nabla^2_1-\dfrac{1}{2}\nabla^2_2-\dfrac{Z}{r_1}-\dfrac{Z}{r_2}+\dfrac{1}{r_{12}}\bigg\}\psi(\text{r}_1,\text{r}_2)=E\psi(\text{r}_1,\text{r}_2)\label{3}$ The five terms in the Hamiltonian represent, respectively, the kinetic energies of electrons 1 and 2, the nuclear attractions of electrons 1 and 2, and the repulsive interaction between the two electrons.
It is this last contribution which prevents an exact solution of the Schrödinger equation and which accounts for much of the complication in the theory. In seeking an approximation to the ground state, we might first work out the solution in the absence of the $1/r_{12}$-term. In the Schrödinger equation thus simplified, we can separate the variables $r_1$ and $r_2$ to reduce the equation to two independent hydrogen-like problems. The ground state wavefunction (not normalized) for this hypothetical helium atom would be $\psi(\text{r}_1,\text{r}_2)=\psi_{1s}(r_1)\psi_{1s}(r_2)=e^{-Z(r_1+r_2)}\label{4}$ and the energy would equal $2\times(-Z^2/2)=-4$ hartrees, compared to the experimental value of $-2.90$ hartrees. Neglect of electron repulsion evidently introduces a very large error. A significantly improved result can be obtained with the functional form (Equation $\ref{4}$), but with Z replaced by an adjustable parameter $\alpha$, thus: $\tilde{\psi}(r_1,r_2)=e^{-\alpha(r_1+r_2)}\label{5}$ Using this function in the variational principle [cf. Eq (4.53)], we have $\tilde{E}=\dfrac{\int\tilde{\psi}(r_1,r_2)\hat{H}\tilde{\psi}(r_1,r_2)d\tau_1d\tau_2}{\int\tilde{\psi}(r_1,r_2)\tilde{\psi}(r_1,r_2)d\tau_1d\tau_2}\label{6}$ where $\hat{H}$ is the full Hamiltonian as in Equation $\ref{3}$, including the $1/r_{12}$-term. The expectation values of the five parts of the Hamiltonian work out to $\left\langle-\dfrac{1}{2}\nabla^2_1\right\rangle=\left\langle-\dfrac{1}{2}\nabla^2_2\right\rangle=\dfrac{\alpha^2}{2}$ $\left\langle-\dfrac{Z}{r_1}\right\rangle=\left\langle-\dfrac{Z}{r_2}\right\rangle=-Z\alpha, \qquad \left\langle\dfrac{1}{r_{12}}\right\rangle=\dfrac{5}{8}\alpha\label{7}$ The sum of the integrals in Equation $\ref{7}$ gives the variational energy $\tilde{E}(\alpha)=\alpha^2-2Z\alpha+\dfrac{5}{8}\alpha\label{8}$ This will always be an upper bound for the true ground-state energy. We can optimize our result by finding the value of $\alpha$ which minimizes the energy (Equation $\ref{8}$).
We find $\dfrac{d\tilde{E}}{d\alpha}=2\alpha-2Z+\dfrac{5}{8}=0\label{9}$ giving the optimal value $\alpha=Z-\dfrac{5}{16}\label{10}$ This can be given a physical interpretation, noting that the parameter $\alpha$ in the wavefunction (Equation $\ref{5}$) represents an effective nuclear charge. Each electron partially shields the other electron from the positively-charged nucleus by an amount equivalent to $5/16$ of an electron charge. Substituting Equation $\ref{10}$ into Equation $\ref{8}$, we obtain the optimized approximation to the energy $\tilde{E}=-\left(Z-\dfrac{5}{16}\right)^2\label{11}$ For helium ($Z = 2$), this gives $-2.84765$ hartrees, an error of about $2\%$ $(E_0 = -2.90372)$. Note that the inequality $\tilde{E} > E_0$ applies in an algebraic sense. In the late 1920's, it was considered important to determine whether the helium computation could be improved, as a test of the validity of quantum mechanics for many-electron systems. The table below gives the results for a selection of variational computations on helium.

wavefunction | parameters | energy
$e^{-Z(r_1+r_2)}$ | $Z=2$ | $-2.75$
$e^{-\alpha(r_1+r_2)}$ | $\alpha=1.6875$ | $-2.84765$
$\psi(r_1)\psi(r_2)$ | best $\psi(r)$ | $-2.86168$
$e^{-\alpha(r_1+r_2)}(1+c\, r_{12})$ | best $\alpha, c$ | $-2.89112$
Hylleraas (1929) | 10 parameters | $-2.90363$
Pekeris (1959) | 1078 parameters | $-2.90372$

The third entry refers to the self-consistent field method, developed by Hartree. Even for the best possible choice of one-electron functions $\psi(r)$, there remains a considerable error. This is due to failure to include the variable $r_{12}$ in the wavefunction. The effect is known as electron correlation. The fourth entry, containing a simple correction for correlation, gives a considerable improvement.
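The optimization above is simple enough to replay in a few lines (an illustrative Python sketch, not part of the original text):

```python
# Variational energy E(alpha) = alpha^2 - 2 Z alpha + (5/8) alpha, Equation (8)
Z = 2

def E_var(alpha):
    return alpha * alpha - 2.0 * Z * alpha + 0.625 * alpha

# Analytic minimum: dE/dalpha = 2 alpha - 2Z + 5/8 = 0  ->  alpha = Z - 5/16
alpha_opt = Z - 5.0 / 16.0
print(alpha_opt, E_var(alpha_opt))  # 1.6875  -2.84765625

# A brute-force grid search lands on the same alpha
alphas = [1.0 + i * 1e-4 for i in range(10001)]
alpha_grid = min(alphas, key=E_var)
assert abs(alpha_grid - alpha_opt) < 1e-3
```

The grid search is of course unnecessary when the derivative is available in closed form, but it illustrates how the variational principle turns the energy estimate into an ordinary minimization problem.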
Hylleraas (1929) extended this approach with a variational function of the form $\psi(r_1, r_2, r_{12})=e^{-\alpha(r_1+r_2)} \times \textrm{polynomial in } r_1, r_2, r_{12}$ and obtained the nearly exact result with 10 optimized parameters. More recently, using modern computers, results in essentially perfect agreement with experiment have been obtained. Spinorbitals and the Exclusion Principle The simpler wavefunctions for the helium atom, such as Equation $\ref{5}$, can be interpreted as representing two electrons in hydrogen-like 1s orbitals, designated as a $1s^2$ configuration. According to Pauli's exclusion principle, which states that no two electrons in an atom can have the same set of four quantum numbers, the two 1s electrons must have different spins, one spin-up or $\alpha$, the other spin-down or $\beta$. A product of an orbital with a spin function is called a spinorbital. For example, electron 1 might occupy a spinorbital which we designate $\phi(1)=\psi_{1s}(1)\alpha(1) \quad \textrm{or} \quad \psi_{1s}(1)\beta(1)\label{12}$ Spinorbitals can be designated by a single subscript, for example, $\phi_a$ or $\phi_b$, where the subscript stands for a set of four quantum numbers. In a two-electron system the occupied spinorbitals $\phi_a$ and $\phi_b$ must be different, meaning that at least one of their four quantum numbers must be unequal. A two-electron spinorbital function of the form $\Psi (1, 2) = \dfrac{1}{\sqrt{2}} \bigg( \phi_a(1)\phi_b(2) - \phi_b(1)\phi_a(2)\bigg)\label{13}$ automatically fulfills the Pauli principle since it vanishes if $a=b$. Moreover, this function associates each electron equally with each orbital, which is consistent with the indistinguishability of identical particles in quantum mechanics. The factor $1/\sqrt{2}$ normalizes the two-particle wavefunction, assuming that $\phi_a$ and $\phi_b$ are normalized and mutually orthogonal.
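These properties of Equation (13) are easy to verify numerically. The sketch below (the sample spinorbital values are invented purely for illustration) checks the sign change under electron exchange and the vanishing when both electrons occupy the same spinorbital:

```python
import math

def Psi(phi_a, phi_b, i, j):
    # Psi(i,j) = (1/sqrt(2)) [phi_a(i) phi_b(j) - phi_b(i) phi_a(j)], cf. Equation (13)
    return (phi_a[i] * phi_b[j] - phi_b[i] * phi_a[j]) / math.sqrt(2.0)

# Sample values phi(1), phi(2) of two spinorbitals -- purely illustrative numbers
phi_a = {1: 0.7, 2: -0.3}
phi_b = {1: 0.2, 2: 0.9}

# Antisymmetry: Psi(2,1) = -Psi(1,2)
assert abs(Psi(phi_a, phi_b, 2, 1) + Psi(phi_a, phi_b, 1, 2)) < 1e-12
# Pauli principle: Psi vanishes when both electrons occupy the same spinorbital
assert abs(Psi(phi_a, phi_a, 1, 2)) < 1e-12
print("antisymmetry and exclusion verified")
```

Only the *values* of the spinorbitals at each electron's coordinates enter the algebra, which is why arbitrary sample numbers suffice to exhibit both properties.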
The function (Equation $\ref{13}$) is antisymmetric with respect to interchange of electron labels, meaning that $\Psi (2,1) = -\Psi (1, 2)\label{14}$ This antisymmetry property is an elegant way of expressing the Pauli principle. We note, for future reference, that the function in Equation $\ref{13}$ can be expressed as a $2 \times 2$ determinant: $\Psi (1, 2) = \dfrac{1}{\sqrt{2}}\begin{vmatrix}\phi_a(1) & \phi_b(1)\\ \phi_a(2) & \phi_b(2)\end{vmatrix}\label{15}$ For the $1s^2$ configuration of helium, the two orbital functions are the same and Equation $\ref{13}$ can be written $\Psi (1, 2) = \psi_{1s}(1)\psi_{1s}(2) \times \dfrac{1}{\sqrt{2}}\bigg(\alpha(1)\beta(2) - \beta(1)\alpha(2)\bigg)\label{16}$ For two-electron systems (but not for three or more electrons), the wavefunction can be factored into an orbital function times a spin function. The two-electron spin function $\sigma_{0,0}(1, 2) = \dfrac{1}{\sqrt{2}}\bigg(\alpha(1)\beta(2) - \beta(1)\alpha(2)\bigg)\label{17}$ represents the two electron spins in opposing directions (antiparallel) with a total spin angular momentum of zero. The two subscripts are the quantum numbers $S$ and $M_S$ for the total electron spin. Equation $\ref{17}$ is called the singlet spin state since there is only a single orientation for a total spin quantum number of zero. It is also possible to have both spins in the same state, provided the orbitals are different. There are three possible states for two parallel spins: $\sigma_{1,1}(1, 2) = \alpha(1)\alpha(2)$ $\sigma_{1,0}(1, 2) = \dfrac{1}{\sqrt{2}}\bigg(\alpha(1)\beta(2) + \beta(1)\alpha(2)\bigg)$ $\sigma_{1,-1}(1, 2) = \beta(1)\beta(2)\label{18}$ These make up the triplet spin states, which have the three possible orientations of a total angular momentum of 1. Excited States of Helium The lowest excited state of helium is represented by the electron configuration 1s 2s.
The 1s 2p configuration has higher energy, even though the 2s and 2p orbitals in hydrogen are degenerate, because the 2s penetrates closer to the nucleus, where the potential energy is more negative. When electrons are in different orbitals, their spins can be either parallel or antiparallel. In order that the wavefunction satisfy the antisymmetry requirement (Equation $\ref{14}$), the two-electron orbital and spin functions must have opposite behavior under exchange of electron labels. There are four possible states from the 1s 2s configuration: a singlet state $\Psi^+ (1, 2) = \dfrac{1}{\sqrt{2}}\bigg(\psi_{1s}(1)\psi_{2s}(2) + \psi_{2s}(1)\psi_{1s}(2)\bigg) \sigma_{0, 0}(1, 2)\label{19}$ and three triplet states $\Psi^-(1, 2) = \dfrac{1}{\sqrt{2}}\bigg(\psi_{1s}(1)\psi_{2s}(2) - \psi_{2s}(1)\psi_{1s}(2)\bigg)\begin{cases} \sigma_{1,1}(1, 2)\\ \sigma_{1,0}(1, 2) \\ \sigma_{1,-1}(1, 2)\end{cases}\label{20}$ Using the Hamiltonian in Equation $\ref{3}$, we can compute the approximate energies $E^{\pm}=\iint\Psi^{\pm}(1,2) \hat{H} \Psi^{\pm}(1,2)d\tau_1d\tau_2\label{21}$ After evaluating some fierce-looking integrals, this reduces to the form $E^{\pm}=I(1s)+I(2s)+J(1s, 2s) \pm K(1s, 2s)\label{22}$ in terms of the one-electron integrals $I(a)=\int \psi_a(\textrm{r})\left\{-\dfrac{1}{2}\nabla^2-\dfrac{Z}{r}\right\} \psi_a(\textrm{r})d\tau\label{23}$ the Coulomb integrals $J(a, b)=\iint\psi_a(\textrm{r}_1)^2\dfrac{1}{r_{12}}\psi_b(\textrm{r}_2)^2d\tau_1d\tau_2\label{24}$ and the exchange integrals $K(a, b)=\iint\psi_a(\textrm{r}_1)\psi_b(\textrm{r}_1)\dfrac{1}{r_{12}}\psi_a(\textrm{r}_2)\psi_b(\textrm{r}_2)d\tau_1d\tau_2\label{25}$ The Coulomb integral represents the repulsive potential energy for two interacting charge distributions $\psi_a(\textbf{r}_1)^2$ and $\psi_b(\textbf{r}_2)^2$. The exchange integral, which has no classical analog, arises because of the exchange symmetry (or antisymmetry) requirement of the wavefunction.
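For s-orbitals the $1/r_{12}$ interaction reduces, after angular integration, to $1/r_>$ (the larger of $r_1$ and $r_2$), so $J(1s,2s)$ and $K(1s,2s)$ collapse to two-dimensional radial quadratures. The sketch below (hydrogenic $Z=1$ orbitals assumed; the analytic values $J = 17/81$ and $K = 16/729$ hartree are standard results quoted for comparison; an illustrative addition, not from the original text) evaluates both:

```python
import math

def R10(r):  # hydrogenic 1s radial function, Z = 1
    return 2.0 * math.exp(-r)

def R20(r):  # hydrogenic 2s radial function, Z = 1
    return (1.0 / math.sqrt(2.0)) * (1.0 - r / 2.0) * math.exp(-r / 2.0)

h, n = 0.03, 800                           # midpoint grid out to r = 24
rs = [(i + 0.5) * h for i in range(n)]
f1 = [R10(r)**2 * r * r for r in rs]       # 1s radial density
f2 = [R20(r)**2 * r * r for r in rs]       # 2s radial density
g = [R10(r) * R20(r) * r * r for r in rs]  # overlap density, used in K

J = sum(f1[i] * f2[j] / max(rs[i], rs[j]) for i in range(n) for j in range(n)) * h * h
K = sum(g[i] * g[j] / max(rs[i], rs[j]) for i in range(n) for j in range(n)) * h * h
print(round(J, 4), round(K, 4))  # analytic: 17/81 ≈ 0.2099, 16/729 ≈ 0.0219
```

Both quantities come out positive, which is the fact used in the next paragraph to order the singlet and triplet energies.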
Both $J$ and $K$ can be shown to be positive quantities. Therefore the lower sign in Equation $\ref{22}$ represents the state of lower energy, making the triplet state of the configuration 1s 2s lower in energy than the singlet state. This is an almost universal generalization and contributes to Hund's rule, to be discussed in the next Chapter.
Quantum mechanics can account for the periodic structure of the elements, by any measure a major conceptual accomplishment for any theory. Although accurate computations become increasingly more challenging as the number of electrons increases, the general patterns of atomic behavior can be predicted with remarkable accuracy. Slater Determinants According to the orbital approximation, which was introduced in the last Chapter, an N-electron atom contains N occupied spinorbitals, which can be designated $\phi_a, \phi_b \ldots \phi_n$. In accordance with the Pauli exclusion principle, no two of these spinorbitals can be identical. Also, every electron should be equally associated with every spinorbital. A very neat mathematical representation for these properties is a generalization of the two-electron wavefunction (8.13) or (8.15) called a Slater determinant $\Psi (1,2 \ldots N ) = \frac{1}{\sqrt{N !}} \begin{vmatrix} \phi_a(1) & \phi_b(1) & \ldots & \phi_n(1) \\ \phi_a(2) & \phi_b(2) & \ldots & \phi_n(2) \\ \vdots & & & \vdots \\ \phi_a(N) & \phi_b(N) & \ldots & \phi_n(N) \end{vmatrix} \label{1}$ Since interchanging any two rows (or columns) of a determinant multiplies it by $-1$, the antisymmetry property (8.14) is fulfilled for every pair of electrons. The Hamiltonian for an atom with N electrons around a nucleus of charge Z can be written $\hat{H} = \sum_{i=1}^N \left\{-\frac{1}{2}\bigtriangledown^2_i - \frac{Z}{r_i} \right\} + \sum_{i<j}^N \frac{1}{r_{ij}}\label{2}$ The sum over electron repulsions is written so that each pair $\{i, j\}$ is counted just once. The energy of the state represented by a Slater determinant (Equation $\ref{1}$) can be obtained after a lengthy derivation. We give just the final result $\tilde{E} = \sum_{a} I_a+\frac{1}{2}\sum_{a,b} \left( J_{ab}-K_{ab} \right) \label{3}$ where the sums run over all occupied spinorbitals. The one-electron, Coulomb and exchange integrals have the same form as those defined for the helium atom in Eqs (8.23-25).
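The row-swap antisymmetry that motivates the Slater determinant can be demonstrated with a small numeric sketch (the sample spinorbital values below are invented for illustration; not part of the original text):

```python
import itertools, math

def det(M):
    # determinant by permutation expansion (adequate for small N)
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):               # parity from the inversion count
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1.0
        for row, col in enumerate(perm):
            prod *= M[row][col]
        total += sign * prod
    return total

# Rows = electrons, columns = spinorbitals; sample values, purely illustrative
M = [[0.3, 0.8, -0.1],
     [0.5, -0.2, 0.4],
     [-0.6, 0.1, 0.9]]
Psi_123 = det(M) / math.sqrt(math.factorial(3))
Psi_213 = det([M[1], M[0], M[2]]) / math.sqrt(math.factorial(3))  # swap electrons 1, 2
assert abs(Psi_123 + Psi_213) < 1e-12
# Two identical columns (two electrons in the same spinorbital) give Psi = 0
M_dup = [[row[0], row[0], row[2]] for row in M]
assert abs(det(M_dup)) < 1e-12
print("Slater determinant antisymmetry verified")
```

Exchanging two electrons exchanges two rows of the determinant and so flips its sign, while putting two electrons in the same spinorbital duplicates a column and makes the determinant vanish, which is the Pauli principle again.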
The only difference is that an exchange integral equals zero unless the spins of orbitals a and b are both $\alpha$ or both $\beta$. The factor 1/2 corrects for the double counting of pairs of spinorbitals in the second sum. The contributions with $a = b$ can be omitted since $J_{aa} = K_{aa}$. This effectively removes the Coulomb interaction of an orbital with itself, which is spurious. The Hartree-Fock or self-consistent field (SCF) method is a procedure for optimizing the orbital functions in the Slater determinant (Equation $\ref{1}$), so as to minimize the energy (Equation $\ref{3}$). SCF computations have been carried out for all the atoms of the periodic table, with predictions of total energies and ionization energies generally accurate in the $1-2\%$ range. Aufbau Principles and Periodic Structure Aufbau means "building-up." Aufbau principles determine the order in which atomic orbitals are filled as the atomic number is increased. For the hydrogen atom, the order of increasing orbital energy is given by 1s < 2s = 2p < 3s = 3p = 3d, etc. The dependence of energy on n alone leads to extensive degeneracy, which is however removed for orbitals in many-electron atoms. Thus 2s lies below 2p, as already observed in helium. Similarly, 3s, 3p and 3d increase in energy in that order, and so on. The 4s is lowered sufficiently that it becomes comparable to 3d. The general ordering of atomic orbitals is summarized in the following scheme: $1s < 2s < 2p < 3s < 3p < 4s \sim 3d < 4p < 5s \sim 4d < 5p < 6s \sim 5d \sim 4f < 6p < 7s \sim 6d \sim 5f \label{4}$ and illustrated in Figure 1. This provides enough orbitals to fill the ground states of all the atoms in the periodic table. For orbitals designated as comparable in energy, e.g., 4s $\sim$ 3d, the actual order depends on which other orbitals are occupied. The sequence of orbitals pictured above increases in the order $n + \frac{1}{2}\ell$, except that $\ell = 4$ (rather than 3) is used for an f-orbital.
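The rule just quoted can be turned into a few lines of code; sorting on it reproduces scheme (4). (The tie-breaking choice of s before d before f within a $\sim$ group is an assumption made here purely for display; as the text notes, the actual order within such groups depends on occupancy.)

```python
# Filling order generated from the n + (1/2) l rule
# (with l counted as 4 rather than 3 for f-orbitals)
def fill_key(n, l):
    l_eff = 4 if l == 3 else l
    return n + 0.5 * l_eff

letters = "spdf"
orbitals = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
# ties (e.g. 4s ~ 3d) broken with s before d before f, as displayed in scheme (4)
order = sorted(orbitals, key=lambda nl: (fill_key(*nl), nl[1]))
names = [str(n) + letters[l] for n, l in order]
print(names[:12])  # ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p', '6s']
```

Note how the rule automatically groups 6s, 5d and 4f at the same key value of 6, matching the $\sim$ clusters in the scheme.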
The tabulation below shows the ground-state electron configuration and term symbol for selected elements in the first part of the periodic table. From the term symbol, one can read off the total orbital angular momentum L and the total spin angular momentum S. The code for the total orbital angular momentum mirrors the one-electron notation, but using upper-case letters: L = 0, 1, 2, 3, 4 are designated S, P, D, F, G, respectively. The total spin S is designated, somewhat indirectly, by the spin multiplicity 2S + 1 written as a superscript before the S, P, D. . . symbol. For example $^1S$ (singlet S), $^1P$ (singlet P). . . mean S = 0; $^2S$ (doublet S), $^2P$ (doublet P). . . mean S = 1/2; $^3S$ (triplet S), $^3P$ (triplet P). . . mean S = 1, and so on. Please do not confuse the spin quantum number S with the orbital designation S.

Atom | Z | Electron Configuration | Term Symbol
H | 1 | $1s$ | $^2S_{1/2}$
He | 2 | $1s^2$ | $^1S_0$
Li | 3 | [He]$2s$ | $^2S_{1/2}$
Be | 4 | [He]$2s^2$ | $^1S_0$
B | 5 | [He]$2s^2 2p$ | $^2P_{1/2}$
C | 6 | [He]$2s^2 2p^2$ | $^3P_0$
N | 7 | [He]$2s^2 2p^3$ | $^4S_{3/2}$
O | 8 | [He]$2s^2 2p^4$ | $^3P_2$
F | 9 | [He]$2s^2 2p^5$ | $^2P_{3/2}$
Ne | 10 | [He]$2s^2 2p^6$ | $^1S_0$
Na | 11 | [Ne]$3s$ | $^2S_{1/2}$
Cl | 17 | [Ne]$3s^2 3p^5$ | $^2P_{3/2}$
Ar | 18 | [Ne]$3s^2 3p^6$ | $^1S_0$
K | 19 | [Ar]$4s$ | $^2S_{1/2}$
Ca | 20 | [Ar]$4s^2$ | $^1S_0$
Sc | 21 | [Ar]$4s^2 3d$ | $^2D_{3/2}$
Ti | 22 | [Ar]$4s^2 3d^2$ | $^3F_2$
V | 23 | [Ar]$4s^2 3d^3$ | $^4F_{3/2}$
Cr | 24 | [Ar]$4s\,3d^5$ | $^7S_3$
Mn | 25 | [Ar]$4s^2 3d^5$ | $^6S_{5/2}$
Fe | 26 | [Ar]$4s^2 3d^6$ | $^5D_4$
Co | 27 | [Ar]$4s^2 3d^7$ | $^4F_{9/2}$
Ni | 28 | [Ar]$4s^2 3d^8$ | $^3F_4$
Cu | 29 | [Ar]$4s\,3d^{10}$ | $^2S_{1/2}$
Zn | 30 | [Ar]$4s^2 3d^{10}$ | $^1S_0$
Ga | 31 | [Ar]$4s^2 3d^{10} 4p$ | $^2P_{1/2}$
Br | 35 | [Ar]$4s^2 3d^{10} 4p^5$ | $^2P_{3/2}$
Kr | 36 | [Ar]$3d^{10} 4s^2 4p^6$ | $^1S_0$

The vector sum of the orbital and spin angular momentum is designated $\bf{J} = \bf{L} + \bf{S} \label{5}$ The possible values of the total angular momentum quantum number J run in integer steps between |L - S| and L + S. The J value is appended as a subscript on the term symbol, e.g., $^1S_0$, $^2P_{1/2}$, $^2P_{3/2}$. The energy differences between J states are a result of spin-orbit interaction, a magnetic interaction between the circulating charges associated with orbital and spin angular momenta.
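The rule for J can be captured in a short function (an illustrative sketch; the name `J_values` is invented here):

```python
from fractions import Fraction

def J_values(L, S):
    # J runs in integer steps from |L - S| to L + S
    L, S = Fraction(L), Fraction(S)
    low, high = abs(L - S), L + S
    return [low + k for k in range(int(high - low) + 1)]

# A 2P term (L = 1, S = 1/2) splits into P_1/2 and P_3/2, as for boron and fluorine
assert J_values(1, Fraction(1, 2)) == [Fraction(1, 2), Fraction(3, 2)]
# A 3P term (L = 1, S = 1) gives J = 0, 1, 2; carbon's ground level is 3P_0
assert J_values(1, 1) == [0, 1, 2]
print(J_values(2, Fraction(5, 2)))  # a 6D term: J = 1/2, 3/2, 5/2, 7/2, 9/2
```

Using exact fractions avoids any floating-point fuzz for the half-integer values that arise whenever the total spin is half-integral.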
For atoms of low atomic number, the spin-orbit coupling is a relatively small correction to the energy, but it can become increasingly significant for heavier atoms. We will next consider in some detail the Aufbau of ground electronic states starting at the beginning of the periodic table. Hydrogen has one electron in an s-orbital so its total orbital angular momentum is also designated S. The single electron has s = 1/2, thus S = 1/2. The spin multiplicity 2S + 1 equals 2, thus the term symbol is written $^2S$. In helium, a second electron can occupy the 1s shell, provided it has the opposite spin. The total spin angular momentum is therefore zero, as is the total orbital angular momentum. The term symbol is $^1S$, as it will be for all other atoms with complete electron shells. In determining the total spin and orbital angular momenta, we need consider only electrons outside of closed shells. Therefore lithium and beryllium are a reprise of hydrogen and helium. The angular momentum of boron comes from the single 2p electron, with l = 1 and s = 1/2, giving a $^2P$ state. To build the carbon atom, we add a second 2p electron. Since there are three degenerate 2p orbitals, the second electron can go into either the already-occupied 2p orbital or one of the unoccupied 2p orbitals. Clearly, two electrons in different 2p orbitals will have less repulsive energy than two electrons crowded into the same 2p orbital. In terms of the Coulomb integrals, we would expect, for example $J(2p_x, 2p_y) < J(2p_x, 2p_x) \label{6}$ For nitrogen atom, with three 2p electrons, we expect, by the same line of reasoning, that the third electron will go into the remaining unoccupied 2p orbital. The half-filled $2p^3$ subshell has an interesting property.
If the three occupied orbitals are $2p_x$, $2p_y$ and $2p_z$, then their total electron density is given by $\rho_{2p} = \psi^{2}_{2p_{x}} + \psi^{2}_{2p_{y}} + \psi^{2}_{2p_{z}} = \left(x^2 + y^2 + z^2\right) \times \text{function of r} = \text{function of r} \label{7}$ noting that $x^2 + y^2 + z^2 = r^2$. But spherical symmetry implies zero angular momentum, like an s-orbital. In fact, any half-filled subshell, such as $p^3$, $d^5$, $f^7$, will contribute zero angular momentum. The same is, of course, true as well for filled subshells, such as $p^6$, $d^{10}$, $f^{14}$. These are all S terms. Another way to understand this vector cancelation of angular momentum is to consider the alternative representation of the degenerate 2p-orbitals: $2p_{-1}$, $2p_0$ and $2p_1$. Obviously, the z-components of angular momentum now add to zero, and since only this one component is observable, the total angular momentum must also be zero. Returning to our unfinished consideration of carbon, the $2p^2$ subshell can be regarded, in concept, as a half-filled $2p^3$ subshell plus an electron "hole." The advantage of this picture is that the total orbital angular momentum must be equal to that of the hole, namely l = 1. This is shown below: Thus the term symbol for the carbon ground state is P. It remains to determine the total spins of these subshells. Recall that exchange integrals $K_{ab}$ are non-zero only if the orbitals a and b have the same spin. Since exchange integrals enter the energy formula (3) with negative signs, the more nonvanishing K integrals, the lower the energy. This is achieved by having the maximum possible number of electrons with unpaired spins. We conclude that S = 1 for carbon and S = 3/2 for nitrogen, so that the complete term symbols are $^3P$ and $^4S$, respectively. The allocation of electrons among degenerate orbitals can be formalized by Hund's rule: For an atom in its ground state, the term with the highest multiplicity has the lowest energy.
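The spherical symmetry asserted in Equation (7) is easy to confirm pointwise, using the Table 1 forms of the 2p functions (a sketch, not part of the original text):

```python
import math, random

def rho_2p(x, y, z):
    # summed density of 2p_x, 2p_y, 2p_z using the Table 1 forms:
    # psi_2p = (1/(4 sqrt(2 pi))) u e^(-r/2), with u = x, y, z
    r = math.sqrt(x * x + y * y + z * z)
    c = 1.0 / (4.0 * math.sqrt(2.0 * math.pi))
    return sum((c * u * math.exp(-r / 2.0))**2 for u in (x, y, z))

def random_point(r, rng):
    theta = math.acos(rng.uniform(-1.0, 1.0))  # direction uniform over the sphere
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

rng = random.Random(0)
r = 3.0
p1, p2 = random_point(r, rng), random_point(r, rng)
# same radius -> same total density, whatever the direction
assert abs(rho_2p(*p1) - rho_2p(*p2)) < 1e-12
print("half-filled p subshell density is spherically symmetric")
```

Any two points at the same radius give identical total density, since the angular factors combine to $x^2 + y^2 + z^2 = r^2$.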
Resuming Aufbau of the periodic table, oxygen with four 2p electrons must have one of the 2p-orbitals doubly occupied. But the remaining two electrons will choose unoccupied orbitals with parallel spins. Thus oxygen has, like carbon, a $^{3}P$ ground state. Fluorine can be regarded as a complete shell with an electron hole, thus a $^{2}P$ ground state. Neon completes the 2s and 2p shells, thus term symbol $^{1}S$. The chemical stability and high ionization energy of all the noble-gas atoms can be attributed to their electronic structure of complete shells. The third row of the periodic table is filled in complete analogy with the second row. The similarity of the outermost electron shells accounts for the periodicity of chemical properties. Thus, the alkali metals Na and K belong in the same family as Li, the halogens Cl and Br are chemically similar to F, and so forth. The transition elements, atomic numbers 21 to 30, present further challenges to our understanding of electronic structure. A complicating factor is that the energies of the 4s and 3d orbitals are very close, so that interactions among occupied orbitals often determine the electronic state. Ground-state electron configurations can be deduced from spectroscopic and chemical evidence, and confirmed by accurate self-consistent field computations. The 4s orbital is the first to be filled in K and Ca. Then come 3d electrons in Sc, Ti and V. A discontinuity occurs at Cr. The ground-state configuration is found to be $4s^13d^5$, instead of the extrapolated $4s^23d^4$. This can be attributed to the enhanced stability of a half-filled $3d^5$ subshell. All six electrons in the valence shells have parallel spins, maximizing the number of stabilizing exchange integrals and giving the observed $^{6}S$ term. An analogous discontinuity occurs for copper, in which the 4s subshell is again raided to complete the $3d^{10}$ subshell. The order in which orbitals are filled is not necessarily consistent with the order in which electrons are removed. 
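The Hund's-rule reasoning for the p-block can be sketched in a few lines. The helper below is hypothetical (not from the text): it fills $m_l$ = 1, 0, $-1$ with spin-up electrons first, then pairs, and reads off the multiplicity $2S+1$ and the letter for $L = |M_L|$ (a shortcut that happens to give the correct ground term for p subshells). It reproduces the terms quoted above for boron through neon.

```python
def ground_term_p(n):
    """Hund's-rule ground term for a p^n configuration: fill m_l = 1, 0, -1
    with spin-up electrons first, then pair; L is taken as |M_L| (this
    shortcut is valid for p subshells)."""
    assert 0 <= n <= 6
    up = [1, 0, -1][:min(n, 3)]       # singly occupied (spin-up) m_l values
    dn = [1, 0, -1][:max(n - 3, 0)]   # paired (spin-down) m_l values
    S = (len(up) - len(dn)) / 2
    L = abs(sum(up) + sum(dn))
    return f"{int(2 * S + 1)}{'SPDF'[L]}"

# Boron (p1) through neon (p6)
for n in range(1, 7):
    print(f"p{n}: {ground_term_p(n)}")
```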
Thus, in all the positive ions of transition metals, the two 4s-electrons are removed first. The inadequacy of any simple generalizations about orbital energies is demonstrated by comparing the three ground-state electron configurations: Ni $4s^23d^8$, Pd $5s^04d^{10}$ and Pt $6s^15d^9$. The periodic structure of the elements is evident for many physical and chemical properties, including chemical valence, atomic radius, electronegativity, melting point, density, and hardness. The classic prototype for periodic behavior is the variation of the first ionization energy with atomic number, which is plotted in Figure 2.
The Hydrogen Molecule This four-particle system, two nuclei plus two electrons, is described by the Hamiltonian $\hat{H} = -\frac{1}{2} \nabla^2_1 -\frac{1}{2} \nabla^2_2 -\frac{1}{2M_A} \nabla^2_A -\frac{1}{2M_B} \nabla^2_B -\frac{1}{r_{1A}} -\frac{1}{r_{2B}} -\frac{1}{r_{2A}} -\frac{1}{r_{1B}} +\frac{1}{r_{12}} +\frac{1}{R} \label{1}$ in terms of the coordinates shown in Figure $1$. We note first that the masses of the nuclei are much greater than those of the electrons: $M_{proton}$ = 1836 atomic units, compared to $m_{electron}$ = 1 atomic unit. Therefore nuclear kinetic energies will be negligibly small compared to those of the electrons. In accordance with the Born-Oppenheimer approximation, we can first consider the electronic Schrödinger equation $\hat{H}_{elec} \psi(r_1,r_2,R) = E_{elec}(R) \psi(r_1,r_2,R) \label{2}$ where $\hat{H}_{elec} = -\frac{1}{2} \nabla^2_1 -\frac{1}{2} \nabla^2_2 -\frac{1}{r_{1A}} -\frac{1}{r_{2B}} -\frac{1}{r_{2A}} -\frac{1}{r_{1B}} +\frac{1}{r_{12}} +\frac{1}{R} \label{3}$ The internuclear separation R occurs as a parameter in this equation, so that the Schrödinger equation must, in concept, be solved for each value of the internuclear distance R. A typical result for the energy of a diatomic molecule as a function of R is shown in Figure $2$. For a bound state, the energy minimum occurs at R = Re, known as the equilibrium internuclear distance. The depth of the potential well at Re is called the binding energy or dissociation energy De. For the H2 molecule, De = 4.746 eV and Re = 1.400 bohr = 0.7406 Å. Note that as R → 0, E(R) → $\infty$, since the 1/R nuclear repulsion becomes dominant. The more massive nuclei move much more slowly than the electrons. From the viewpoint of the nuclei, the electrons adjust almost instantaneously to any changes in the internuclear distance. 
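The qualitative shape of E(R) sketched in Figure 2 is often modeled by a Morse potential. A minimal sketch, assuming the Morse form (an approximation, not the exact H2 curve), with the De and Re values quoted above for H2 and a purely illustrative range parameter:

```python
import math

DE = 4.746   # dissociation energy of H2, eV (value from the text)
RE = 1.400   # equilibrium internuclear distance, bohr (value from the text)
A  = 1.0     # range parameter, bohr^-1 (illustrative, not fitted)

def morse(R):
    """Morse model for E(R) - E(Re): zero at Re, approaches De as R grows."""
    return DE * (1.0 - math.exp(-A * (R - RE)))**2

print(morse(RE))              # 0.0  (minimum at the equilibrium distance)
print(round(morse(50.0), 3))  # 4.746 (dissociation limit)
```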
The electronic energy Eelec(R) therefore plays the role of a potential energy in the Schrödinger equation for nuclear motion $\left\{ -\frac{1}{2M_A} \nabla^2_A -\frac{1}{2M_B} \nabla^2_B + V(R)\right\} \chi (r_A,r_B) = E \chi (r_A,r_B) \label{4}$ where $V(R) = E_{elec}(R) \label{5}$ from solution of Equation $\ref{2}$. Solutions of Equation $\ref{4}$ determine the vibrational and rotational energies of the molecule. These will be considered elsewhere. For the present, we are interested in obtaining the electronic energy from Equations $\ref{2}$ and $\ref{3}$. We will thus drop the subscript "elec" on $\hat{H}$ and E(R) for the remainder of this Chapter. The first quantum-mechanical account of chemical bonding is due to Heitler and London in 1927, only one year after the Schrödinger equation was proposed. They reasoned that, since the hydrogen molecule H2 was formed from a combination of hydrogen atoms A and B, a first approximation to its electronic wavefunction might be $\psi(r_1,r_2) = \psi_{1s} (r_{1A})\psi_{1s} (r_{2B}) \label{6}$ Substituting this function into the variational integral $\tilde{E}(R) = \frac{\int{ \psi \hat{H} \psi d\tau}}{\int{\psi^2 d\tau}} \label{7}$ the value Re $\approx$ 1.7 bohr was obtained, indicating that the hydrogen atoms can indeed form a molecule. However, the calculated binding energy, De $\approx$ 0.25 eV, is much too small to account for the strongly-bound H2 molecule. Heitler and London proposed that it was necessary to take into account the exchange of electrons, in which the electron labels in Equation $\ref{6}$ are reversed. The properly symmetrized function $\psi(r_1, r_2) = \psi_{1s} (r_{1A})\psi_{1s} (r_{2B}) +\psi_{1s} (r_{1B})\psi_{1s} (r_{2A}) \label{8}$ gave a much more realistic binding energy value of 3.20 eV, with Re = 1.51 bohr. We have already used exchange symmetry (and antisymmetry) in our treatment of the excited states of helium. 
The variational function (Equation $\ref{8}$) was improved (Wang, 1928) by replacing the hydrogen 1s functions $e^{-r}$ by $e^{-\zeta r}$. The optimized value $\zeta$ = 1.166 gave a binding energy of 3.782 eV. The quantitative breakthrough was the computation of James and Coolidge (1933). Using a 13-parameter function of the form $\psi(r_1, r_2) = e^{- \alpha ( \xi_{1}+\xi_{2})} \times \text{polynomial in } \{ \xi_{1}, \xi_{2}, \eta_{1}, \eta_{2}, \rho \}, \quad \xi_{i} \equiv \frac{r_{iA} + r_{iB}}{R}, \quad \eta_{i} \equiv \frac{r_{iA}-r_{iB}}{R}, \quad \rho \equiv \frac{r_{12}}{R} \label{9}$ they obtained Re = 1.40 bohr, De = 4.720 eV. In a sense, this result provided a proof of the validity of quantum mechanics for molecules, in the same sense that Hylleraas' computation on helium was a proof for many-electron atoms. The Valence Bond Theory The basic idea of the Heitler-London model for the hydrogen molecule can be extended to chemical bonds between any two atoms. The orbital function (8) must be associated with the singlet spin function $\sigma_{0,0}(1,2)$ in order that the overall wavefunction be antisymmetric. This is a quantum-mechanical realization of the concept of an electron-pair bond, first proposed by G. N. Lewis in 1916. This also explains why the electron spins must be paired, i.e., antiparallel. It is also permissible to combine an antisymmetric orbital function with a triplet spin function, but this will, in most cases, give a repulsive state, as shown by the red curve in Figure $2$. According to valence-bond theory, unpaired orbitals in the valence shells of two adjoining atoms can combine to form a chemical bond if they overlap significantly and are symmetry compatible. A $\sigma$-bond is cylindrically symmetrical about the axis joining the atoms. Two s AO's, two pz AO's or an s and a pz can contribute to a $\sigma$-bond, as shown in Figure $3$. The z-axis is chosen along the internuclear axis. 
Two px or two py AO's can form a $\pi$-bond, which has a nodal plane containing the internuclear axis. Examples of symmetry-incompatible AO's would be an s with a px, or a px with a py. In such cases the overlap integral would vanish because of cancelation of positive and negative contributions. Some possible combinations of AO's forming $\sigma$ and $\pi$ bonds are shown in Figure $3$. Bonding in the HCl molecule can be attributed to a combination of a hydrogen 1s with an unpaired 3pz on chlorine. In Cl2, a $\sigma$-bond is formed between the 3pz AO's on each chlorine. As a first approximation, the other doubly-occupied AO's on chlorine (the inner shells and the valence-shell lone pairs) are left undisturbed. The oxygen atom has two unpaired 2p-electrons, say 2px and 2py. Each of these can form a $\sigma$-bond with a hydrogen 1s to make a water molecule. It would appear from the geometry of the p-orbitals that the HOH bond angle would be 90°. It is actually around 104.5°. We will resolve this discrepancy shortly. The nitrogen atom, with three unpaired 2p electrons, can form three bonds. In NH3, each 2p-orbital forms a $\sigma$-bond with a hydrogen 1s. Again 90° HNH bond angles are predicted, compared with the experimental 107°. The diatomic nitrogen molecule has a triple bond between the two atoms: one $\sigma$ bond from combining 2pz AO's and two $\pi$ bonds from the combinations of 2px's and 2py's, respectively. Hybrid Orbitals and Molecular Geometry To understand the bonding of carbon atoms, we must introduce additional elaborations of valence-bond theory. We can write the valence shell configuration of the carbon atom as $2s^2 2p_x 2p_y$, signifying that two of the 2p orbitals are unpaired. It might appear that carbon would be divalent, and indeed the species CH2 (carbene or methylene radical) does have a transient existence. But the chemistry of carbon is dominated by tetravalence. 
Evidently it is a good investment for the atom to promote one of the 2s electrons to the unoccupied 2pz orbital. The gain in stability attained by formation of four bonds more than compensates for the small excitation energy. It can thus be understood why the methane molecule CH4 exists. The molecule has the shape of a regular tetrahedron, which is the result of hybridization: mixing of the s and three p orbitals to form four $sp^3$ hybrid atomic orbitals. Hybrid orbitals can overlap more strongly with neighboring atoms, thus producing stronger bonds. The result is four C-H $\sigma$-bonds, identical except for orientation in space, with 109.5° H-C-H bond angles. Other carbon compounds make use of two alternative hybridization schemes. The s AO can form hybrids with two of the p AO's to give three $sp^2$ hybrid orbitals, with one p-orbital remaining unhybridized. This accounts for the structure of ethylene (ethene): The C-H and C-C $\sigma$-bonds are all trigonal $sp^2$ hybrids, with 120° bond angles. The two unhybridized p-orbitals form a $\pi$-bond, which gives the molecule its rigid planar structure. The two carbon atoms are connected by a double bond, consisting of one $\sigma$ and one $\pi$. The third hybridization scheme, sp-hybridization, occurs in C-C triple bonds, for example, acetylene (ethyne). Here, two of the p AO's in carbon remain unhybridized and can form two $\pi$-bonds, in addition to a $\sigma$-bond, with a neighboring carbon: Acetylene H-C$\equiv$C-H is a linear molecule since sp-hybrids are oriented 180° apart. The deviations of the bond angles in H2O and NH3 from 90° can be attributed to fractional hybridization. The angle H-O-H in water is 104.5°, while H-N-H in ammonia is 107°. It is rationalized that the p-orbitals of the central atom acquire some s-character and increase their angles towards the tetrahedral value of 109.5°. Correspondingly, the lone pair orbitals must also become hybrids. 
Apparently, for both water and ammonia, a model based on tetrahedral orbitals on the central atoms would be closer to the actual behavior than the original selection of s- and p-orbitals. The hybridization is driven by repulsions between the electron densities of neighboring bonds. Valence Shell Model An elementary, but quite successful, model for determining the shapes of molecules is valence shell electron pair repulsion (VSEPR) theory, first proposed by Sidgwick and Powell and popularized by Gillespie. The local arrangement of atoms around each multivalent center in the molecule can be represented by $AX_{n-k}E_{k}$, where X is another atom and E is a lone pair of electrons. The geometry around the central atom is then determined by the arrangement of the n electron pairs (bonding plus nonbonding), which minimizes their mutual repulsion. The following geometric configurations satisfy this condition:

n  shape
2  linear
3  trigonal planar
4  tetrahedral
5  trigonal bipyramid
6  octahedral
7  pentagonal bipyramid

The basic geometry will be distorted if the n surrounding pairs are not identical. The relative strength of repulsion between pairs follows the order E-E > E-X > X-X. In ammonia, for example, which is NH3E, the shape will be tetrahedral to a first approximation. But the lone pair E will repel the N-H bonds more than they repel one another. Thus the E-N-H angle will increase from the tetrahedral value of 109.5°, causing the H-N-H angles to decrease slightly. The observed value of 107° is quite well accounted for. In water, OH2E2, the opening of the E-O-E angle will likewise cause a closing of H-O-H, and again, 104.5° seems like a reasonable value. Valence-bond theory is about 90% successful in explaining much of the descriptive chemistry of ground states. VB theory fails to account for the triplet ground state of O2 or for the bonding in electron-deficient molecules such as diborane, B2H6. It is not very useful in consideration of excited states, hence for spectroscopy. 
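The VSEPR assignment can be sketched as a small lookup keyed by the total number of electron pairs n (bonding plus lone). The dictionary and function names below are illustrative, not a standard API:

```python
# Basic VSEPR geometries keyed by the number of valence-shell electron pairs
VSEPR_SHAPES = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramid",
    6: "octahedral",
    7: "pentagonal bipyramid",
}

def electron_pair_geometry(bonding_pairs, lone_pairs):
    """Electron-pair arrangement around an AX_(n-k)E_k center."""
    return VSEPR_SHAPES[bonding_pairs + lone_pairs]

# NH3E: tetrahedral pair arrangement (pyramidal molecular shape)
print(electron_pair_geometry(3, 1))
# OH2E2: also tetrahedral pair arrangement (bent molecular shape)
print(electron_pair_geometry(2, 2))
```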
Many of these deficiencies are remedied by molecular orbital theory, which we take up in the next Chapter.
Molecular orbital theory is a conceptual extension of the orbital model, which was so successfully applied to atomic structure. As was once playfully remarked, "a molecule is nothing more than an atom with more nuclei." This may be overly simplistic, but we do attempt, as far as possible, to exploit analogies with atomic structure. Our understanding of atomic orbitals began with the exact solutions of a prototype problem – the hydrogen atom. We will begin our study of homonuclear diatomic molecules with another exactly solvable prototype, the hydrogen molecule-ion $H_{2}^{+}$. The Hydrogen Molecule-Ion The simplest conceivable molecule would be made of two protons and one electron, namely $H_{2}^{+}$. This species actually has a transient existence in electrical discharges through hydrogen gas and has been detected by mass spectrometry. It also has been detected in outer space. The Schrödinger equation for $H_{2}^{+}$ can be solved exactly within the Born-Oppenheimer approximation. For fixed internuclear distance R, this reduces to a problem of one electron in the field of two protons, designated A and B. We can write $\left\{-\dfrac{1}{2}\nabla^2-\dfrac{1}{r_A}-\dfrac{1}{r_B}+\dfrac{1}{R} \right\} \psi(r)=E\psi(r) \label{1}$ where rA and rB are the distances from the electron to protons A and B, respectively. This equation was solved by Burrau (1927), after separating the variables in prolate spheroidal coordinates. We will write down these coordinates but give only a pictorial account of the solutions. The three prolate spheroidal coordinates are designated $\xi$, $\eta$, $\phi$. The first two are defined by $\xi=\dfrac{r_{A}+r_{B}}{R}$ and $\eta=\dfrac{r_{A}-r_{B}}{R}\label{2}$ while $\phi$ is the angle of rotation about the internuclear axis. The surfaces of constant $\xi$ and $\eta$ are, respectively, confocal ellipsoids and hyperboloids of revolution with foci at A and B. 
The two-dimensional analog should be familiar from analytic geometry: an ellipse is the locus of points such that the sum of the distances to two foci is a constant; analogously, a hyperbola is the locus of points for which the difference of the distances is a constant. Figure $1$ shows several surfaces of constant $\xi$, $\eta$ and $\phi$. The ranges of the three coordinates are $\xi \in [1,\infty)$, $\eta \in [-1,1]$, $\phi \in [0,2\pi]$. The prolate-spheroidal coordinate system conforms to the natural symmetry of the $H_{2}^{+}$ problem in the same way that spherical polar coordinates were the appropriate choice for the hydrogen atom. The first few solutions of the $H_2^+$ Schrödinger equation are sketched in Figure $2$, roughly in order of increasing energy. The $\phi$-dependence of the wavefunction is contained in a factor $\Phi(\phi)=e^{i\lambda\phi},\ \ \ \ \ \lambda=0,\pm1,\pm2,\ldots\label{3}$ which is identical to the $\phi$-dependence in atomic orbitals. In fact, the quantum number $\lambda$ represents the component of orbital angular momentum along the internuclear axis, the only component which has a definite value in systems with axial (cylindrical) symmetry. The quantum number $\lambda$ determines the basic shape of a diatomic molecular orbital, in the same way that $\ell$ did for an atomic orbital. An analogous code is used: $\sigma$ for $\lambda$ = 0, $\pi$ for $\lambda$ = $\pm$1, $\delta$ for $\lambda$ = $\pm$2, and so on. We are already familiar with $\sigma$- and $\pi$-orbitals from valence-bond theory. A second classification of the $H_{2}^{+}$ eigenfunctions pertains to their symmetry with respect to inversion through the center of the molecule, also known as parity. If $\psi$(-r) = +$\psi$(r), the function is classified gerade or even parity, and the orbital designation is given a subscript g, as in $\sigma_{g}$ or $\pi_{g}$. If $\psi$(-r) = -$\psi$(r), the function is classified as ungerade or odd parity, and we write instead $\sigma_{u}$ or $\pi_{u}$. 
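The coordinate definitions can be verified directly. The sketch below (illustrative values, with the foci placed at z = ±R/2 on the internuclear axis) computes $\xi$ and $\eta$ for a few points and confirms the quoted ranges $\xi \geq 1$ and $-1 \leq \eta \leq 1$:

```python
import math

R = 2.0  # internuclear distance; nuclei A, B at z = +R/2 and z = -R/2

def prolate_coords(x, y, z):
    """Prolate spheroidal (xi, eta) of a point, for nuclei on the z-axis."""
    r_a = math.sqrt(x**2 + y**2 + (z - R / 2)**2)
    r_b = math.sqrt(x**2 + y**2 + (z + R / 2)**2)
    xi = (r_a + r_b) / R
    eta = (r_a - r_b) / R
    return xi, eta

for point in [(0.5, 0.0, 0.3), (1.0, 1.0, -2.0), (0.0, 0.0, 0.0)]:
    xi, eta = prolate_coords(*point)
    # Triangle inequality guarantees r_a + r_b >= R, hence xi >= 1
    assert xi >= 1.0 and -1.0 <= eta <= 1.0
    print(round(xi, 4), round(eta, 4))
```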
Atomic orbitals can also be classified by inversion symmetry. However, all s and d atomic orbitals are g, while all p and f orbitals are u, so no further designation is necessary. The molecular orbitals of a given symmetry are numbered in order of increasing energy, for example, 1$\sigma_{g}$, 2$\sigma_{g}$, 3$\sigma_{g}$. The lowest-energy orbital, as we have come to expect, is nodeless. It obviously must have cylindrical symmetry ($\lambda$ = 0) and inversion symmetry (g). It is designated 1$\sigma_{g}$ since it is the first orbital of this classification. The next higher orbital has a nodal plane, with $\eta$ = 0, perpendicular to the axis. This function still has cylindrical symmetry ($\sigma$) but now changes sign upon inversion (u). It is designated 1$\sigma_{u}$, as the first orbital of this type. The next higher orbital has an inner ellipsoidal node. It has the same symmetry as the lowest orbital and is designated 2$\sigma_{g}$. Next comes the 2$\sigma_{u}$ orbital, with both planar and ellipsoidal nodes. Two degenerate $\pi$-orbitals come next, each with a nodal plane containing the internuclear axis, with $\phi$=const. Their classification is 1$\pi_{u}$. The second 1$\pi_{u}$-orbital, not shown in Figure $2$, has the same shape rotated by 90°. The 3$\sigma_{g}$ orbital has two hyperbolic nodal surfaces, where $\eta$ = $\pm$const. The 1$\pi_{g}$, again doubly-degenerate, has two nodal planes, $\eta$ = 0 and $\phi$=const. Finally, the 3$\sigma_{u}$, the last orbital we consider, has three nodal surfaces where $\eta$=const. A molecular orbital is classified as a bonding orbital if it promotes the bonding of the two atoms. Generally a bonding molecular orbital has a significant accumulation of electron charge in the region between the nuclei and thus reduces their mutual repulsion. The 1$\sigma_{g}$, 2$\sigma_{g}$, 1$\pi_{u}$ and 3$\sigma_{g}$ are evidently bonding orbitals. 
A molecular orbital which does not significantly contribute to nuclear shielding is classified as an antibonding orbital. The 1$\sigma_{u}$, 2$\sigma_{u}$, 1$\pi_{g}$ and 3$\sigma_{u}$ belong in this category. Often an antibonding molecular orbital is designated by $\sigma$* or $\pi$*. The actual ground state of $H_{2}^{+}$ has the 1$\sigma_{g}$ orbital occupied. The equilibrium internuclear distance Re is 2.00 bohr and the binding energy De is 2.79 eV, which represents quite a strong chemical bond. The 1$\sigma_{u}$ is a repulsive state, and a transition from the ground state results in dissociation of the molecule. The LCAO Approximation In Figure $3$, the 1$\sigma_{g}$ and 1$\sigma_{u}$ orbitals are plotted as functions of z, along the internuclear axis. Both functions have cusps, discontinuities in slope, at the positions of the two nuclei A and B. The 1s orbitals of hydrogen atoms have the same cusps. The shape of the 1$\sigma_{g}$ and 1$\sigma_{u}$ suggests that they can be approximated by a sum and difference, respectively, of hydrogen 1s orbitals, such that $\psi(1\sigma_{g,u})\approx\psi(1s_{A})\pm\psi(1s_{B})\label{4}$ This linear combination of atomic orbitals is the basis of the so-called LCAO approximation. The other orbitals pictured in Figure $2$ can likewise be approximated as follows: $\psi(2\sigma_{g,u})\approx\psi(2s_{A})\pm\psi(2s_{B})$ $\psi(3\sigma_{g,u})\approx\psi(2p\sigma_{A})\pm\psi(2p\sigma_{B})$ $\psi(1\pi_{u,g})\approx\psi(2p\pi_{A})\pm\psi(2p\pi_{B})\label{5}$ The 2$p\sigma$ atomic orbital refers to 2pz, which has the axial symmetry of a $\sigma$-bond. Likewise 2$p\pi$ refers to 2px or 2py, which are positioned to form $\pi$-bonds. 
An alternative notation for diatomic molecular orbitals, which specifies their atomic origin and bonding/antibonding character, is the following: 1$\sigma_{g}$ = $\sigma$1s, 1$\sigma_{u}$ = $\sigma$*1s, 2$\sigma_{g}$ = $\sigma$2s, 2$\sigma_{u}$ = $\sigma$*2s, 3$\sigma_{g}$ = $\sigma$2p, 3$\sigma_{u}$ = $\sigma$*2p, 1$\pi_{u}$ = $\pi$2p, 1$\pi_{g}$ = $\pi$*2p. Almost all applications of molecular-orbital theory are based on the LCAO approach, since the exact $H_{2}^{+}$ functions are far too complicated to work with. The relationship between molecular orbitals and their constituent atomic orbitals can be represented in correlation diagrams, shown in Figure $4$. MO Theory of Homonuclear Diatomic Molecules A sufficient number of orbitals is available for the Aufbau of the ground states of all homonuclear diatomic species from H2 to Ne2. Table 1 summarizes the results. The most likely order in which the molecular orbitals are filled is given by $1\sigma_{g}<1\sigma_{u}<2\sigma_{g}<2\sigma_{u}<3\sigma_{g}\sim1\pi_{u}<1\pi_{g}<3\sigma_{u}$ The relative order of 3$\sigma_{g}$ and 1$\pi_{u}$ depends on which other molecular orbitals are occupied, much like the situation involving the 4s and 3d atomic orbitals. The results of photoelectron spectroscopy indicate that 1$\pi_{u}$ is lower up to and including N2, but 3$\sigma_{g}$ is lower thereafter. The term symbols $\Sigma,\Pi,\Delta\ldots$, analogous to the atomic S, P, D. . ., symbolize the axial component of the total orbital angular momentum. When a $\pi$-shell is filled (4 electrons) or half-filled (2 electrons), the orbital angular momentum cancels to zero and we find a $\Sigma$ term. The spin multiplicity is completely analogous to the atomic case. The total parity is again designated by a subscript g or u. Since the many-electron wavefunction is made up of products of individual MO's, the total parity is odd only if the molecule contains an odd number of u orbitals. Thus a $\sigma_{u}^{2}$ or a $\pi_{u}^{2}$ subshell transforms like g. 
For $\Sigma$ terms, the superscript $\pm$ denotes the sign change of the wavefunction under a reflection in a plane containing the internuclear axis. This is equivalent to a sign change in the variable $\phi\rightarrow-\phi$. This symmetry is needed when we deal with spectroscopic selection rules. For a $\pi_{u}^{2}$ subshell in a triplet state, the spin function is symmetric, so that the orbital factor must be antisymmetric, of the form $\dfrac{1}{\sqrt{2}} \biggl( \pi_{x}(1)\pi_{y}(2)-\pi_{y}(1)\pi_{x}(2) \biggr) \label{6}$ This will change sign under the reflection, since $x\rightarrow{x}$ but $y\rightarrow{-y}$. We need only remember that a $\pi_{u}^{2}$ subshell will give the term symbol $^{3}\Sigma_{g}^{-}$. The net bonding effect of the occupied molecular orbitals is determined by the bond order: half the excess of the number of bonding electrons over the number of antibonding electrons. This definition brings the molecular orbital results into correspondence with the Lewis (or valence-bond) concept of single, double and triple bonds. It is also possible in molecular orbital theory to have a bond order of 1/2, for example, in $H_{2}^{+}$, which is held together by a single bonding orbital. A bond order of zero generally indicates no stable chemical bond, although helium and neon atoms can still form clusters held together by much weaker van der Waals forces. Molecular-orbital theory successfully accounts for the transient stability of a $^{3}\Sigma_{u}^{+}$ excited state of He2, in which one of the antibonding electrons is promoted to an excited bonding orbital. This species has a lifetime of about $10^{-4}$ sec, until it emits a photon and falls back into the unstable ground state. Another successful prediction of molecular orbital theory concerns the relative binding energy of the positive ions N$_{2}^{+}$ and O$_{2}^{+}$, compared to the neutral molecules. 
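The bond-order bookkeeping can be sketched in a few lines. The MO labels below ("1sg" for 1$\sigma_{g}$, "1pu" for 1$\pi_{u}$, and so on) and the function are illustrative shorthand, not standard notation:

```python
# Bonding/antibonding character of the valence MOs (per the filling order above)
BONDING = {"1sg", "2sg", "3sg", "1pu"}
ANTIBONDING = {"1su", "2su", "1pg", "3su"}

def bond_order(config):
    """Half the excess of bonding over antibonding electrons.
    config maps an MO label to its occupation number."""
    nb = sum(n for mo, n in config.items() if mo in BONDING)
    na = sum(n for mo, n in config.items() if mo in ANTIBONDING)
    return (nb - na) / 2

# N2: triple bond
print(bond_order({"1sg": 2, "1su": 2, "2sg": 2, "2su": 2, "1pu": 4, "3sg": 2}))
# O2: adds a 1pg^2 subshell, reducing the order to a double bond
print(bond_order({"1sg": 2, "1su": 2, "2sg": 2, "2su": 2, "3sg": 2,
                  "1pu": 4, "1pg": 2}))
# H2+: a single bonding electron gives bond order 1/2
print(bond_order({"1sg": 1}))
```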
Ionization weakens the N–N bond since a bonding electron is lost, but it strengthens the O–O bond since an antibonding electron is lost. One of the earliest triumphs of molecular orbital theory was the prediction that the oxygen molecule is paramagnetic. Figure $5$ shows that liquid O2 is a magnetic substance, attracted to the region between the poles of a permanent magnet. The paramagnetism arises from the half-filled 1$\pi_{g}^{2}$ subshell. According to Hund's rules the two electrons singly occupy the two degenerate 1$\pi_{g}$ orbitals with their spins aligned parallel. The term symbol is $^{3}\Sigma_{g}^{-}$ and the molecule thus has a nonzero spin angular momentum and a net magnetic moment, which is attracted to an external magnetic field. Linus Pauling invented the paramagnetic oxygen analyzer, which is extensively used in medical technology. Variational Computation of Molecular Orbitals Thus far we have approached molecular orbital theory from a mainly descriptive point of view. To begin a more quantitative treatment, recall the LCAO approximation to the $H_{2}^{+}$ ground state, Equation $\ref{4}$, which can be written $\psi=c_{A}\psi_{A}+c_{B}\psi_{B}\label{7}$ Using this as a trial function in the variational principle, we have $E(c_{A},c_{B})=\dfrac{\int\psi\hat{H}\psi{d}\tau}{\int\psi^{2}d\tau}\label{8}$ where $\hat{H}$ is the Hamiltonian from Equation $\ref{1}$. In fact, these equations can be applied more generally to construct any molecular orbital, not just solutions for $H_{2}^{+}$. In the general case, $\hat{H}$ will represent an effective one-electron Hamiltonian determined by the molecular environment of a given orbital. The energy expression involves some complicated integrals, but can be simplified somewhat by expressing it in a standard form. 
Hamiltonian matrix elements are defined by $H_{AA}=\int\psi_{A}\hat{H}\psi_{A}d\tau\ \ \ \ \ H_{BB}=\int\psi_{B}\hat{H}\psi_{B}d\tau\ \ \ \ \ H_{AB}=H_{BA}=\int\psi_{A}\hat{H}\psi_{B}d\tau\label{9}$ while the overlap integral is given by $S_{AB}=\int\psi_{A}\psi_{B}d\tau\label{10}$ Presuming the functions $\psi_{A}$ and $\psi_{B}$ to be normalized, the variational energy (Equation $\ref{8}$) reduces to $E(c_{A},c_{B})=\dfrac{c_{A}^{2}H_{AA}+2c_{A}c_{B}H_{AB}+c_{B}^{2}H_{BB}}{c_{A}^{2}+2c_{A}c_{B}S_{AB}+c_{B}^{2}}\label{11}$ To optimize the MO, we find the minimum of E with respect to variation in cA and cB, as determined by the two conditions $\dfrac{\partial{E}}{\partial{c_{A}}}=0,\ \ \ \ \ \dfrac{\partial{E}}{\partial{c_{B}}}=0\label{12}$ The result is a secular equation determining two values of the energy: $\left|\begin{array}{ll}H_{AA}-E&H_{AB}-ES_{AB}\\ H_{AB}-ES_{AB}&H_{BB}-E\end{array}\right|=0\label{13}$ For the case of a homonuclear diatomic molecule, for example $H_{2}^{+}$, the two Hamiltonian matrix elements $H_{AA}$ and $H_{BB}$ are equal, say to $\alpha$. Setting $H_{AB}=\beta$ and $S_{AB}=S$, the secular equation reduces to $\left|\begin{array}{ll}\alpha-E&\beta-ES\\ \beta-ES&\alpha-E\end{array}\right|=(\alpha-E)^{2}-(\beta-ES)^{2}=0\label{14}$ with the two roots $E^{\pm}=\dfrac{\alpha\pm\beta}{1\pm{S}}\label{15}$ The calculated integrals $\alpha$ and $\beta$ are usually negative, thus for the bonding orbital $E^{+}=\dfrac{\alpha+\beta}{1+S}\ \ \ \ \ (bonding)\label{16}$ while for the antibonding orbital $E^{-}=\dfrac{\alpha-\beta}{1-S}\ \ \ \ \ (antibonding)\label{17}$ Note that $(E^{-}-\alpha)>(\alpha-E^{+})$, thus the energy increase associated with antibonding is slightly greater than the energy decrease for bonding. For historical reasons, $\alpha$ is called a Coulomb integral and $\beta$, a resonance integral. 
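The two roots in Equations $\ref{16}$ and $\ref{17}$ can be checked against the secular determinant of Equation $\ref{14}$. The values of $\alpha$, $\beta$ and S below are illustrative assumptions, not computed integrals:

```python
alpha, beta, S = -0.7, -0.3, 0.4   # illustrative values in hartree (assumed)

E_plus = (alpha + beta) / (1 + S)   # bonding root
E_minus = (alpha - beta) / (1 - S)  # antibonding root

def secular_det(E):
    """Determinant (alpha - E)^2 - (beta - E*S)^2 of Eq (14)."""
    return (alpha - E)**2 - (beta - E * S)**2

# Both roots satisfy the secular equation to machine precision
print(abs(secular_det(E_plus)) < 1e-12, abs(secular_det(E_minus)) < 1e-12)
# Antibonding destabilization exceeds bonding stabilization
print((E_minus - alpha) > (alpha - E_plus))
```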
Heteronuclear Molecules The variational computation leading to Equation $\ref{13}$ can be applied as well to the heteronuclear case, in which the orbitals $\psi_{A}$ and $\psi_{B}$ are not equivalent. The matrix elements $H_{AA}$ and $H_{BB}$ are approximately equal to the energies of the atomic orbitals $\psi_{A}$ and $\psi_{B}$, respectively, say $E_{A}$ and $E_{B}$ with $E_{A}>E_{B}$. It is generally true that $|E_{A}|,|E_{B}|\gg|H_{AB}|$. With these simplifications, the secular equation can be written $\left|\begin{array}{ll}E_{A}-E&H_{AB}-ES_{AB}\\ H_{AB}-ES_{AB}&E_{B}-E\end{array}\right|=(E_{A}-E)(E_{B}-E)-(H_{AB}-ES_{AB})^{2}=0\label{18}$ This can be rearranged to $E-E_{A}=\dfrac{(H_{AB}-ES_{AB})^{2}}{E-E_{B}}\label{19}$ To estimate the root closest to EA, we can replace E by EA on the right hand side of the equation. This leads to $E^{-}\approx{E_{A}+\dfrac{(H_{AB}-E_{A}S_{AB})^{2}}{E_{A}-E_{B}}}\label{20}$ and analogously for the other root, $E^{+}\approx{E_{B}-\dfrac{(H_{AB}-E_{B}S_{AB})^{2}}{E_{A}-E_{B}}}\label{21}$ The following correlation diagram represents the relative energies of these atomic orbitals and MO's: A simple analysis of Equation $\ref{18}$ implies that, in order for two atomic orbitals $\psi_{A}$ and $\psi_{B}$ to form effective molecular orbitals, the following conditions must be met: 1. The atomic orbitals must have compatible symmetry. For example, $\psi_{A}$ and $\psi_{B}$ can be either s or p$\sigma$ orbitals to form a $\sigma$-bond, or both can be p$\pi$ (with the same orientation) to form a $\pi$-bond. 2. The charge clouds of $\psi_{A}$ and $\psi_{B}$ should overlap as much as possible. This was the rationale for hybridizing the s and p orbitals in carbon. A larger value of SAB implies a larger value for HAB. 3. The energies EA and EB must be of comparable magnitude. Otherwise, the denominator in (20) and (21) will be too large and the molecular orbitals will not differ significantly from the original AO's. 
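The perturbative estimates of Equations $\ref{20}$ and $\ref{21}$ can be compared with the exact roots of the quadratic in Equation $\ref{18}$. Taking $S_{AB}$ = 0 for simplicity, with illustrative (assumed) orbital energies:

```python
import math

E_A, E_B = -0.40, -0.55   # AO energies in hartree, E_A > E_B (illustrative)
H_AB = 0.03               # small resonance integral; overlap S_AB taken as 0

# Exact roots of (E_A - E)(E_B - E) - H_AB^2 = 0
disc = math.sqrt((E_A - E_B)**2 + 4 * H_AB**2)
exact_hi = (E_A + E_B + disc) / 2   # root pushed above E_A
exact_lo = (E_A + E_B - disc) / 2   # root pushed below E_B

# Perturbative estimates, Eqs (20)-(21) with S_AB = 0
approx_hi = E_A + H_AB**2 / (E_A - E_B)
approx_lo = E_B - H_AB**2 / (E_A - E_B)

print(round(exact_hi, 5), round(approx_hi, 5))
print(round(exact_lo, 5), round(approx_lo, 5))
```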
A rough criterion is that EA and EB should be within about 0.2 hartree or 5 eV. For example, the chlorine 3p orbital has an energy of -13.0 eV, comfortably within range of the hydrogen 1s, with energy -13.6 eV. Thus these can interact to form a strong bonding (plus an antibonding) molecular orbital in HCl. The chlorine 3s has an energy of -24.5 eV, thus it could not form an effective bond with hydrogen even if it were available. Hückel Molecular Orbital Theory Molecular orbital theory has been very successfully applied to large conjugated systems, especially those containing chains of carbon atoms with alternating single and double bonds. An approximation introduced by Hückel in 1931 considers only the delocalized $\pi$ electrons moving in a framework of $\sigma$-bonds. This is, in fact, a more sophisticated version of a free-electron model. We again illustrate the model using butadiene CH2=CH-CH=CH2. From four p atomic orbitals with nodes in the plane of the carbon skeleton, one can construct four $\pi$ molecular orbitals by an extension of the LCAO approach: $\psi=c_{1}\psi_{1}+c_{2}\psi_{2}+c_{3}\psi_{3}+c_{4}\psi_{4}\label{22}$ Applying the linear variational method, the energies of the molecular orbitals are the roots of the 4 × 4 secular equation $\left|\begin{array}{ccc}H_{11}-E&H_{12}-ES_{12}&\ldots\\ H_{12}-ES_{12}&H_{22}-E&\ldots\\ \ldots&\ldots&\ldots\end{array}\right|=0\label{23}$ Four simplifying assumptions are now made: 1. All overlap integrals Sij are set equal to zero. This is quite reasonable since the p-orbitals are directed perpendicular to the direction of their bonds. 2. All resonance integrals Hij between non-neighboring atoms are set equal to zero. 3. All resonance integrals Hij between neighboring atoms are set equal to $\beta$. 4. All coulomb integrals Hii are set equal to $\alpha$. 
The secular equation thus reduces to $\left|\begin{array}{cccc}\alpha-E&\beta&0&0\\ \beta&\alpha-E&\beta&0\\ 0&\beta&\alpha-E&\beta\\ 0&0&\beta&\alpha-E\end{array}\right|=0\label{24}$ Dividing by $\beta^{4}$ and defining $x=\dfrac{\alpha-E}{\beta}\label{25}$ the equation simplifies further to $\left|\begin{array}{cccc}x&1&0&0\\ 1&x&1&0\\ 0&1&x&1\\ 0&0&1&x\end{array}\right|=0\label{26}$ This is essentially the connection matrix for the molecule. Each pair of connected atoms is represented by 1, each non-connected pair by 0 and each diagonal element by $x$. Expansion of the determinant gives the 4th order polynomial equation $x^{4}-3x^{2}+1=0\label{27}$ Noting that this is a quadratic equation in $x^{2}$, the roots are found to be $x^{2}=(3\pm\sqrt{5})/2$, so that $x=\pm0.618,\pm1.618$. This corresponds to the four MO energy levels $E=\alpha\pm1.618\beta,\ \ \ \ \ \alpha\pm0.618\beta\label{28}$ Since $\alpha$ and $\beta$ are negative, the lowest molecular orbitals have $E(1\pi)=\alpha+1.618\beta$ and $E(2\pi)=\alpha+0.618\beta$ and the total $\pi$-electron energy of the $1\pi^{2}2\pi^{2}$ configuration equals $E_{\pi}=2(\alpha+1.618\beta)+2(\alpha+0.618\beta)=4\alpha+4.472\beta\label{29}$ The simplest application of Hückel theory, to the ethylene molecule CH2=CH2, gives the secular equation $\left|\begin{array}{cc}x&1\\ 1&x\end{array}\right|=0\label{30}$ This is easily solved for the energies $E=\alpha\pm\beta$. The lowest orbital has $E(1\pi)=\alpha+\beta$ and the 1$\pi^{2}$ ground state has $E_{\pi}=2(\alpha+\beta)$. If butadiene had two localized double bonds, as in its dominant valence-bond structure, its $\pi$-electron energy would be given by $E_{\pi}=4(\alpha+\beta)$. Comparing this with the Hückel result (Equation $\ref{29}$), we see that the energy lies lower than that of two double bonds by $0.48\beta$. The thermochemical value is approximately -17 kJ mol$^{-1}$. This stabilization of a conjugated system is known as the delocalization energy. 
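The roots $x=\pm0.618,\pm1.618$ can be reproduced by diagonalizing the connection matrix numerically. The sketch below uses plain numpy and the relation $E=\alpha+\lambda\beta$, where $\lambda$ are the eigenvalues of the adjacency (connection) part of the determinant; it is illustrative only:

```python
import numpy as np

# Adjacency part of the Hückel determinant for butadiene: carbons 1-2-3-4.
A = np.zeros((4, 4))
for i in range(3):                  # bonds 1-2, 2-3, 3-4
    A[i, i+1] = A[i+1, i] = 1.0

# det(xI + A) = 0 at x = -lambda, so E = alpha - x*beta = alpha + lambda*beta.
lam = np.sort(np.linalg.eigvalsh(A))     # approx -1.618, -0.618, 0.618, 1.618

# Both alpha and beta are negative, so the two orbitals with the largest
# lambda lie lowest; doubly occupying them gives E_pi = 4*alpha + E_pi_beta*beta.
E_pi_beta = 2*lam[-1] + 2*lam[-2]        # approx 4.472
```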
It corresponds to the resonance-stabilization energy in valence-bond theory. Aromatic systems provide the most significant applications of Hückel theory. For benzene, we find the secular equation $\left|\begin{array}{cccccc}x&1&0&0&0&1\\ 1&x&1&0&0&0\\ 0&1&x&1&0&0\\ 0&0&1&x&1&0\\ 0&0&0&1&x&1\\ 1&0&0&0&1&x\end{array}\right|=0\label{31}$ with the six roots $x=\pm2,\pm1,\pm1$. The energy levels are $E=\alpha\pm2\beta$ and two-fold degenerate $E=\alpha\pm\beta$. With the three lowest molecular orbitals occupied, we have $E_{\pi}=2(\alpha+2\beta)+4(\alpha+\beta)=6\alpha+8\beta\label{32}$ Since the energy of three localized double bonds is $6\alpha+6\beta$, the delocalization energy equals $2\beta$. The thermochemical value is -152 kJ mol$^{-1}$.
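The benzene roots follow in the same way from the cyclic $6\times6$ connection matrix; a short numpy sketch (illustrative only):

```python
import numpy as np

# Cyclic adjacency matrix for the benzene ring: carbon i bonded to i+1 (mod 6).
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i+1) % 6] = A[(i+1) % 6, i] = 1.0

lam = np.sort(np.linalg.eigvalsh(A))     # -2, -1, -1, 1, 1, 2

# Doubly occupy the three lowest orbitals (largest lambda, since beta < 0):
# E_pi = 6*alpha + (2*2 + 4*1)*beta = 6*alpha + 8*beta
beta_coeff = 2*lam[-1] + 2*lam[-2] + 2*lam[-3]

# Delocalization energy relative to three localized double bonds (6a + 6b):
deloc = beta_coeff - 6.0                 # = 2, i.e. 2*beta
```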
In many cases, the symmetry of a molecule provides a great deal of information about its quantum states, even without a detailed solution of the Schrödinger equation. A geometrical transformation which turns a molecule into an indistinguishable copy of itself is called a symmetry operation. A symmetry operation can consist of a rotation about an axis, a reflection in a plane, an inversion through a point, or some combination of these. The Ammonia Molecule We shall introduce the concepts of symmetry and group theory by considering a concrete example, the ammonia molecule NH3. In any symmetry operation on NH3, the nitrogen atom remains fixed but the hydrogen atoms can be permuted in 3!=6 different ways. The axis of the molecule is called a C3 axis, since the molecule can be rotated about it into 3 equivalent orientations, $120^\circ$ apart. More generally, a Cn axis has n equivalent orientations, separated by $2\pi/n$ radians. The axis of highest symmetry in a molecule is called the principal axis. Three mirror planes, designated $\sigma_1,\sigma_2,\sigma_3$, run through the principal axis in ammonia. These are designated as $\sigma_v$ or vertical planes of symmetry. Ammonia belongs to the symmetry group designated C3v, characterized by a three-fold axis with three vertical planes of symmetry. Let us designate the orientation of the three hydrogen atoms in Figure $1$ as {1, 2, 3}, reading in clockwise order from the bottom. A counterclockwise rotation by 120$^\circ$, designated by the operator C3, produces the orientation {2, 3, 1}. A second counterclockwise rotation, designated $C_3^2$, produces {3, 1, 2}. Note that two successive counterclockwise rotations by 120$^\circ$ are equivalent to one clockwise rotation by 120$^\circ$, so the last operation could also be designated $C_3^{-1}$. The three reflection operations $\sigma_1,\sigma_2,\sigma_3$, applied to the original configuration {1, 2, 3}, produce {1, 3, 2}, {3, 2, 1} and {2, 1, 3}, respectively. 
Finally, we must include the identity operation, designated E, which leaves an orientation unchanged. The effects of the six possible operations of the symmetry group C3v can be summarized as follows: $E\{1,2,3\}=\{1,2,3\} \qquad C_3\{1,2,3\}=\{2,3,1\}$ $C_3^2\{1,2,3\}=\{3,1,2\} \qquad \sigma_1\{1,2,3\}=\{1,3,2\}$ $\sigma_2\{1,2,3\}=\{3,2,1\} \qquad \sigma_3\{1,2,3\}=\{2,1,3\}$ We have thus accounted for all 6 possible permutations of the three hydrogen atoms. The successive application of two symmetry operations is equivalent to some single symmetry operation. For example, applying C3, then $\sigma_1$, to our starting orientation, we have $\sigma_1 C_3\{1,2,3\}=\sigma_1\{2,3,1\}=\{2,1,3\}$ But this is equivalent to the single operation $\sigma_3$. This can be represented as an algebraic relation among symmetry operators $\sigma_1 C_3=\sigma_3$ Note that successive operations are applied in the order right to left when represented algebraically. For the same two operations in reversed order, we find $C_3 \sigma_1 \{1,2,3\} = C_3 \{1,3,2\} = \{3,2,1\} = \sigma_2 \{1,2,3\}$ Thus symmetry operations do not, in general, commute $A B \not\equiv B A \label{1}$ although they may commute, for example, $C_3$ and $C_3^2$. The algebra of the group $C_{3v}$ can be summarized by the following multiplication table, in which the row label is the operation applied first and the column label is the operation applied second: $\begin{array}{c|cccccc} & E & C_3 & C_3^2 & \sigma_1 & \sigma_2 & \sigma_3 \\ \hline E & E & C_3 & C_3^2 & \sigma_1 & \sigma_2 & \sigma_3 \\ C_3 & C_3 & C_3^2 & E & \sigma_3 & \sigma_1 & \sigma_2 \\ C_3^2 & C_3^2 & E & C_3 & \sigma_2 & \sigma_3 & \sigma_1 \\ \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3 & E & C_3 & C_3^2 \\ \sigma_2 & \sigma_2 & \sigma_3 & \sigma_1 & C_3^2 & E & C_3 \\ \sigma_3 & \sigma_3 & \sigma_1 & \sigma_2 & C_3 & C_3^2 & E \end{array}$ Notice that each operation occurs once and only once in each row and each column. 
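The permutation algebra above can be verified mechanically. The Python sketch below encodes each operation by its action on an ordered triple and recovers the products worked out in the text (names like 's1' for $\sigma_1$ are ad hoc labels):

```python
# Each C3v operation as its action on an ordered arrangement (a, b, c),
# following the text: C3{1,2,3} = {2,3,1}, sigma_1{1,2,3} = {1,3,2}, etc.
ops = {
    'E':    lambda t: t,
    'C3':   lambda t: (t[1], t[2], t[0]),
    'C3^2': lambda t: (t[2], t[0], t[1]),
    's1':   lambda t: (t[0], t[2], t[1]),
    's2':   lambda t: (t[2], t[1], t[0]),
    's3':   lambda t: (t[1], t[0], t[2]),
}

start = (1, 2, 3)

def product(second, first):
    """Name of the single operation equivalent to `first`, then `second`."""
    result = ops[second](ops[first](start))
    return next(name for name, f in ops.items() if f(start) == result)
```

For example, `product('s1', 'C3')` returns `'s3'` while `product('C3', 's1')` returns `'s2'`, reproducing the noncommutativity noted above.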
Group Theory In mathematics, a group is defined as a set of h elements $\mathcal{G} \equiv \{G_1,G_2...G_h\}$ together with a rule for combination of elements, which we usually refer to as a product. The elements must fulfill the following four conditions. 1. The product of any two elements of the group is another element of the group. That is $G_iG_j=G_k$ with $G_k\in\mathcal{G}$ 2. Group multiplication obeys an associative law, $G_i(G_jG_k)=(G_iG_j)G_k\equiv G_iG_jG_k$ 3. There exists an identity element E such that $EG_i=G_iE=G_i$ for all i. 4. Every element $G_i$ has a unique inverse $G_i^{-1}$, such that $G_iG_i^{-1}=G_i^{-1}G_i=E$ with $G_i^{-1}\in\mathcal{G}$. The number of elements h is called the order of the group. Thus $C_{3v}$ is a group of order 6. A set of quantities which obeys the group multiplication table is called a representation of the group. Because of the possible noncommutativity of group elements [cf. Eq (1)], simple numbers are not always adequate to represent groups; we must often use matrices. The group $C_{3v}$ has three irreducible representations, or IR’s, which cannot be broken down into simpler representations. A trivial, but nonetheless important, representation of any group is the totally symmetric representation, in which each group element is represented by 1. The multiplication table then simply reiterates that $1\times 1=1$. 
For $C_{3v}$ this is called the $A_1$ representation: $A_1: E=1,C_3=1,C_3^2=1,\sigma_1=1,\sigma_2=1,\sigma_3=1 \label{2}$ A slightly less trivial representation is $A_2$: $A_2: E=1,C_3=1,C_3^2=1,\sigma_1=-1,\sigma_2=-1,\sigma_3=-1 \label{3}$ Much more exciting is the E representation, which requires $2\times 2$ matrices: $E= \begin{pmatrix} 1 &0 \\ 0 &1 \end{pmatrix} \qquad C_3=\begin{pmatrix} -1/2 &-\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix} \qquad C_3^2=\begin{pmatrix} -1/2 &\sqrt{3}/2 \\ -\sqrt{3}/2 &-1/2 \end{pmatrix}$ $\sigma_1=\begin{pmatrix} -1 &0 \\ 0 &1 \end{pmatrix} \qquad \sigma_2=\begin{pmatrix} 1/2 &-\sqrt{3}/2 \\ -\sqrt{3}/2 &-1/2 \end{pmatrix} \qquad \sigma_3=\begin{pmatrix} 1/2 &\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix} \label{4}$ The operations $C_3$ and $C_3^2$ are said to belong to the same class since they perform the same geometric function, but for different orientations in space. Analogously, $\sigma_1, \sigma_2$ and $\sigma_3$ are obviously in the same class. E is in a class by itself. The class structure of the group is designated by $\{E,2C_3,3\sigma_v\}$. We state without proof that the number of irreducible representations of a group is equal to the number of classes. Another important theorem states that the sum of the squares of the dimensionalities of the irreducible representations of a group adds up to the order of the group. Thus, for $C_{3v}$, we find $1^2+1^2+2^2=6$. The trace or character of a matrix is defined as the sum of the elements along the main diagonal: $\chi(M)\equiv\sum_kM_{kk} \label{5}$ For many purposes, it suffices to know just the characters of a matrix representation of a group, rather than the complete matrices. For example, the characters for the E representation of $C_{3v}$ in Eq (4) are given by $\chi(E)=2,\quad \chi(C_3)=-1, \quad \chi(C_3^2)=-1, \quad \chi(\sigma_1)=0, \quad \chi(\sigma_2)=0, \quad \chi(\sigma_3)=0 \label{6}$ It is true in general that the characters for all operations in the same class are equal. 
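It is straightforward to check by matrix multiplication that the $2\times 2$ matrices of Eq (4) obey the group multiplication rules, for example $\sigma_1 C_3=\sigma_3$; a small numpy sketch:

```python
import numpy as np

s = np.sqrt(3)/2    # the recurring entry sqrt(3)/2

D = {  # the 2x2 E-representation matrices of Eq (4)
    'E':    np.array([[1.0, 0.0], [0.0, 1.0]]),
    'C3':   np.array([[-0.5, -s], [s, -0.5]]),
    'C3^2': np.array([[-0.5, s], [-s, -0.5]]),
    's1':   np.array([[-1.0, 0.0], [0.0, 1.0]]),
    's2':   np.array([[0.5, -s], [-s, -0.5]]),
    's3':   np.array([[0.5, s], [s, -0.5]]),
}

# D(sigma_1) D(C3) = D(sigma_3), matching sigma_1 C3 = sigma_3,
# and C3 applied three times returns the identity.
check1 = np.allclose(D['s1'] @ D['C3'], D['s3'])
check2 = np.allclose(D['C3'] @ D['C3'] @ D['C3'], D['E'])
```

The traces of these matrices also reproduce the characters of Eq (6): 2 for E, -1 for the rotations and 0 for the reflections.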
Thus Eq (6) can be abbreviated to $\chi(E)=2,\quad \chi(C_3)=-1, \quad \chi(\sigma_v)=0 \label{7}$ For one-dimensional representations, such as $A_1$ and $A_2$, the characters are equal to the matrices themselves, so Equations $\ref{2}$ and $\ref{3}$ can be read as a table of characters. The essential information about a symmetry group is summarized in its character table. We display here the character table for $C_{3v}$ $\begin{array}{c|ccc|cc} C_{3v} &E &2C_3 &3\sigma_v & & \\ \hline A_1 &1 &1 &1 &z &z^2,\,x^2+y^2 \\ A_2 &1 &1 &-1 & & \\ E &2 &-1 &0 &(x,y) &(xy,\,x^2-y^2),\,(xz,\,yz) \end{array}$ The last two columns show how the cartesian coordinates x, y, z and their products transform under the operations of the group. Group Theory and Quantum Mechanics When a molecule has the symmetry of a group $\mathcal{G}$, this means that each member of the group commutes with the molecular Hamiltonian $[\hat G_i,\hat H]=0 \quad i=1...h \label{8}$ where we now explicitly designate the group elements $G_i$ as operators on wavefunctions. As was shown in Chap. 4, commuting operators can have simultaneous eigenfunctions. A representation of the group of dimension d means that there must exist a set of d degenerate eigenfunctions of $\hat H$ that transform among themselves in accord with the corresponding matrix representation. For example, if the eigenvalue $E_n$ is d-fold degenerate, the commutation conditions (Equation $\ref{8}$) imply that, for $i=1...h$, $\hat G_i \hat H \psi_{nk} = \hat H \hat G_i \psi_{nk}=E_n \hat G_i \psi_{nk} \; \text{for} \;k=1...d \label{9}$ Thus each $\hat G_i \psi_{nk}$ is also an eigenfunction of $\hat H$ with the same eigenvalue $E_n$, and must therefore be represented as a linear combination of the eigenfunctions $\psi_{nk}$. More precisely, the eigenfunctions transform among themselves according to $\hat G_i \psi_{nk}=\sum_{m=1}^d D(G_i)_{km}\psi_{nm} \label{10}$ where $D(G_i)_{km}$ means the $\{k,m\}$ element of the matrix representing the operator $\hat G_i$. 
The character of the identity operation E immediately shows the degeneracy of the eigenvalues of that symmetry. The $C_{3v}$ character table reveals that $NH_3$, and other molecules of the same symmetry, can have only nondegenerate and two-fold degenerate energy levels. The following notation for symmetry species was introduced by Mulliken: 1. One-dimensional representations are designated either A or B. Those symmetric wrt rotation by $2\pi/n$ about the $C_n$ principal axis are labeled A, while those antisymmetric are labeled B. 2. Two-dimensional representations are designated E; 3, 4 and 5 dimensional representations are designated T, F and G, respectively. These latter cases occur only in groups of high symmetry: cubic, octahedral and icosahedral. 3. In groups with a center of inversion, the subscripts g and u indicate even and odd parity, respectively. 4. Subscripts 1 and 2 indicate symmetry and antisymmetry, respectively, wrt a $C_2$ axis perpendicular to $C_n$, or to a $\sigma_v$ plane. 5. Primes and double primes indicate symmetry and antisymmetry to a $\sigma_h$ plane. For individual orbitals, the lower case analogs of the symmetry designations are used. For example, MO’s in ammonia are classified $a_1,a_2$ or e. For ammonia and other $C_{3v}$ molecules, there exist three species of eigenfunctions. Those belonging to the classification $A_1$ are transformed into themselves by all symmetry operations of the group. The 1s, 2s and $2p_z$ AO’s on nitrogen are in this category. The z-axis is taken as the 3-fold axis. There are no low-lying orbitals belonging to $A_2$. The nitrogen $2p_x$ and $2p_y$ AO’s form a two-dimensional representation of the group $C_{3v}$. That is to say, any of the six operations of the group transforms either one of these AO’s into a linear combination of the two, with coefficients given by the matrices (4). The three hydrogen 1s orbitals transform like a $3\times 3$ representation of the group. 
If we represent the hydrogens by a column vector {H1,H2,H3}, then the six group operations generate the following algebra $E=\begin{pmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} \qquad C_3=\begin{pmatrix} 0 &1 &0 \\ 0 &0 &1 \\ 1 &0 &0 \end{pmatrix} \qquad C_3^2=\begin{pmatrix} 0 &0 &1 \\ 1 &0 &0 \\ 0 &1 &0 \end{pmatrix}$ $\sigma_1=\begin{pmatrix} 1 &0 &0 \\ 0 &0 &1 \\ 0 &1 &0 \end{pmatrix} \qquad \sigma_2=\begin{pmatrix} 0 &0 &1 \\ 0 &1 &0 \\ 1 &0 &0 \end{pmatrix} \qquad \sigma_3=\begin{pmatrix} 0 &1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \end{pmatrix} \label{11}$ Let us denote this representation by $\Gamma$. It can be shown that $\Gamma$ is a reducible representation, meaning that by some unitary transformation the $3 \times 3$ matrices can be factorized into block-diagonal form with $2 \times 2$ plus $1 \times 1$ submatrices. The reducibility of $\Gamma$ can be deduced from the character table. The characters of the matrices (Equation $\ref{11}$) are $\Gamma: \qquad \chi(E)=3, \quad \chi(C_3)=0, \quad \chi(\sigma_v)=1 \label{12}$ The character of each of these permutation operations is equal to the number of H atoms left untouched: 3 for the identity, 1 for a reflection and 0 for a rotation. The characters of $\Gamma$ are seen to equal the sum of the characters of $A_1$ plus E. This reducibility relation is expressed by writing $\Gamma=A_1\oplus E \label{13}$ The three H atom 1s functions can be combined into LCAO functions which transform according to the IR’s of the group. Clearly the sum $\psi=\psi_{1s}(1)+\psi_{1s}(2)+\psi_{1s}(3) \label{14}$ transforms like $A_1$. The two remaining linear combinations which transform like E must be orthogonal to (Equation $\ref{14}$) and to one another. 
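The coefficients in a reduction like $\Gamma=A_1\oplus E$ can be obtained from the standard character-orthogonality formula $n_i=\frac{1}{h}\sum_c g_c\,\chi_{\Gamma}(c)\,\chi_i(c)$, where the sum runs over classes of size $g_c$ (a general group-theory result, not derived in this text); a minimal sketch:

```python
# Reduce Gamma = (3, 0, 1) over the classes {E, 2C3, 3sigma_v} of C3v.
h = 6                         # order of the group
sizes = [1, 2, 3]             # class sizes
chi_gamma = [3, 0, 1]         # characters of Gamma, Eq (12)
table = {                     # rows of the C3v character table
    'A1': [1, 1, 1],
    'A2': [1, 1, -1],
    'E':  [2, -1, 0],
}

n = {irr: sum(g*x*y for g, x, y in zip(sizes, chi_gamma, row)) // h
     for irr, row in table.items()}
# n == {'A1': 1, 'A2': 0, 'E': 1}, confirming Gamma = A1 (+) E
```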
One possible choice is $\psi'=\psi_{1s}(2)-\psi_{1s}(3), \quad \psi''=2\psi_{1s}(1)-\psi_{1s}(2)-\psi_{1s}(3) \label{15}$ Now, Equation $\ref{14}$ can be combined with the N 1s, 2s and $2p_z$ to form MO’s of $A_1$ symmetry, while Equation $\ref{15}$ can be combined with the N $2p_x$ and $2p_y$ to form MO’s of E symmetry. Note that no hybridization of AO’s is predetermined; it emerges automatically in the results of computation.
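Orthogonality of the three combinations is easy to confirm on their coefficient vectors, treating the hydrogen 1s AO's as orthonormal (i.e., neglecting their mutual overlap); a minimal numpy sketch:

```python
import numpy as np

# Coefficients of Eqs (14) and (15) in the basis {1s(1), 1s(2), 1s(3)},
# with AO overlap neglected for simplicity.
psi    = np.array([1.0, 1.0, 1.0])    # A1 combination, Eq (14)
psi_p  = np.array([0.0, 1.0, -1.0])   # first E partner, Eq (15)
psi_pp = np.array([2.0, -1.0, -1.0])  # second E partner, Eq (15)

dots = [psi @ psi_p, psi @ psi_pp, psi_p @ psi_pp]   # all zero
```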
Our most detailed knowledge of atomic and molecular structure has been obtained from spectroscopy: the study of the emission, absorption and scattering of electromagnetic radiation accompanying transitions among atomic or molecular energy levels. Whereas atomic spectra involve only electronic transitions, the spectroscopy of molecules is more intricate because vibrational and rotational degrees of freedom come into play as well. Early observations of absorption or emission by molecules were characterized as band spectra, in contrast to the line spectra exhibited by atoms. It is now understood that these bands reflect closely-spaced vibrational and rotational energies augmenting the electronic states of a molecule. With improvements in spectroscopic techniques over the years, it has become possible to resolve individual vibrational and rotational transitions. This has provided a rich source of information on molecular geometry, energetics and dynamics. Molecular spectroscopy has also contributed significantly to analytical chemistry, environmental science, astrophysics, biophysics and biochemistry. Reduced Mass Consider a system of two particles of masses $m_1$ and $m_2$ interacting with a potential energy which depends only on the separation of the particles. The classical energy is given by $E=\frac{1}{2} m_1 \dot{\vec{r}}_1^2+\frac{1}{2} m_2 \dot{\vec{r}}_2^2+V (|\vec{r}_2-\vec{r}_1|) \label{1}$ the dots signifying derivative wrt time. Introduce two new variables, the particle separation $\vec{r}$ and the position of the center of mass $\vec{R}$: $\vec{r}=\vec{r}_2-\vec{r}_1 \mbox{,}\hspace{20pt}\vec{R}=\dfrac{m_1 \vec{r}_1+m_2\vec{r}_2}{m}\label{2}$ where $m=m_1+m_2$. 
In terms of the new coordinates $\vec{r}_1=\vec{R}-\frac{m_2}{m} \vec{r} \mbox{,}\hspace{20pt}\vec{r}_2=\vec{R}+\frac{m_1}{m} \vec{r}\label{3}$ and $E=\dfrac{1}{2}m\dot{\vec{R}}^2+\dfrac{1}{2}\mu\dot{\vec{r}}^2+V(r)\label{4}$ where $r=|\vec{r}|$ and $\mu$ is called the $reduced\hspace{2pt}mass$ $\mu\equiv\dfrac{m_1 m_2}{m_1+m_2}\label{5}$ An alternative relation for the reduced mass is $\dfrac{1}{\mu}=\frac{1}{m_1}+\frac{1}{m_2}\label{6}$ reminiscent of the formula for the resistance of a parallel circuit. Note that, if $m_2\rightarrow\infty$, then $\mu\rightarrow m_1$. The term containing $\dot{\vec{R}}$ represents the kinetic energy of a single hypothetical particle of mass $m$ located at the center of mass $\vec{R}$. The remaining terms represent the $relative$ motion of the two particles, which has the appearance of the energy of a $single$ particle of effective mass $\mu$ moving in the potential field $V(r)$: $E_{rel}=\dfrac{1}{2} \mu \dot{\vec{r}}^2+V(r)= \dfrac{\vec{p}^2}{2\mu}+V(r)\label{7}$ We can thus write the Schrödinger equation for the relative motion $\left\{-\dfrac{\hbar^2}{2 \mu} \bigtriangledown^2+V(r) \right\}\psi (\vec{r})= E \psi (\vec{r}) \label{8}$ When we treated the hydrogen atom, it was assumed that the nuclear mass was infinite. In that case we can set $\mu =m$, the mass of an electron. The Rydberg constant for infinite nuclear mass was calculated to be $R_\infty = \dfrac{2\pi^2me^4}{h^3c}=109,737 \text{ cm}^{-1}\label{9}$ If instead, we use the reduced mass of the electron-proton system $\mu = \dfrac{mM}{m+M} \approx \dfrac{1836}{1837}\, m \approx 0.999456 \, m \label{10}$ This changes the Rydberg constant for hydrogen to $R_{H}\approx 109,677 \, \text{cm}^{-1}\label{11}$ in perfect agreement with experiment. In 1931, H. C. Urey evaporated four liters of hydrogen down to one milliliter and measured the spectrum of the residue. The result was a set of lines displaced slightly from the hydrogen spectrum. 
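The reduced-mass scaling of the Rydberg constant is easy to reproduce numerically (the electron and proton masses below are standard CODATA-style values, assumed here for illustration); a minimal sketch:

```python
# Reduced-mass correction to the Rydberg constant, Eqs (9)-(11).
m_e = 9.1093837e-31    # electron mass, kg
m_p = 1.6726219e-27    # proton mass, kg
R_inf = 109737.3       # cm^-1, infinite-nuclear-mass value

mu_over_m = m_p / (m_e + m_p)    # mu/m = M/(m + M), cf. Eq (10)
R_H = R_inf * mu_over_m          # about 109,677 cm^-1
```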
This amounted to the discovery of deuterium, or heavy hydrogen, for which Urey was awarded the 1934 Nobel Prize in Chemistry. Estimating the mass of the deuteron, $^2$H, as twice that of the proton, gives $R_{D}\approx 109,707 \, \text{cm}^{-1}\label{12}$ Another interesting example involving reduced mass concerns positronium, a short-lived combination of an electron and a positron, the electron's antiparticle. The electron and positron mutually annihilate with a half-life of approximately 10$^{-7}$ sec, and positronium decays into gamma rays. The reduced mass of positronium is $\mu = \frac{m \times m}{m+m} = \frac{m}{2} \label{13}$ half the mass of the electron. Thus the ionization energy equals 6.80 eV, half that of the hydrogen atom. Positron emission tomography (PET) provides a sensitive scanning technique for functioning living tissue, notably the brain. A compound containing a positron-emitting radionuclide, for example, $^{11}$C, $^{13}$N, $^{15}$O or $^{18}$F, is injected into the body. The emitted positrons attach to electrons to form short-lived positronium, and the annihilation radiation is monitored. Vibration of Diatomic Molecules A diatomic molecule with nuclear masses $M_A$, $M_B$ has a reduced mass $\mu =\frac{M_{A}M_{B}}{M_{A}+M_{B}}\label{14}$ Solution of the electronic Schrödinger equation gives the energy as a function of internuclear distance $E_{elec}(R)$. This plays the role of a potential energy function for motion of the nuclei $V(R)$, as sketched in Fig. 2. We can thus write the Schrödinger equation for vibration $\begin{Bmatrix} -\frac{\hbar^2}{2\mu }\frac{d^2}{dR^2} +V(R)\end{Bmatrix}\chi (R)=E\,\chi (R)\label{15}$ If the potential energy is expanded in a Taylor series about $R = R_e$ $V(R)=V(R_{e})+(R-R_{e})V'(R_{e})+\frac{1}{2}(R-R_{e})^2V''(R_{e})+...\label{16}$ An approximation for this expansion has the form of a harmonic oscillator with $V(R)\approx \frac{1}{2}k(R-R_{e})^2\label{17}$ The energy origin can be chosen so $V(R_e) = 0$. At the minimum of the potential, $V'(R_e) = 0$. 
The best fit to the parabola (17) is obtained with a force constant set equal to $k\approx \frac{d^2V(R)}{dR^2}\Big| _{R\, =\, R_{e}}\label{18}$ From the solution for the harmonic oscillator, we identify the ground state vibrational energy, with quantum number $\upsilon = 0$, $E_{0}=\frac{1}{2}\hbar\omega =\frac{1}{2}\hbar\sqrt{\frac{k}{\mu }}\label{19}$ The actual dissociation energy from the ground vibrational state is then approximated by $D_{0}\approx D_{e}-\frac{1}{2}\hbar\omega\label{20}$ In wavenumber units $hcD_{0}\approx hcD_{e}-\frac{1}{2}\tilde{\nu }\: cm^{-1}\label{21}$ An improved treatment of molecular vibration must account for anharmonicity, deviation from a harmonic oscillator. Anharmonicity results in a finite number of vibrational energy levels and the possibility of dissociation of the molecule at sufficiently high energy. A very successful approximation for the energy of a diatomic molecule is the Morse potential: $V(R)=hcD_{e}\begin{Bmatrix}1-e^{-a(R-R_{e})}\end{Bmatrix}^2\; \; \; a=\begin{pmatrix}\frac{\mu \omega ^2}{2hcD_{e}}\end{pmatrix}^{\frac{1}{2}}\label{22}$ Note that $V(R_e) = 0$ at the minimum of the potential well. The Schrödinger equation for a Morse oscillator can be solved to give the energy levels $E_{\upsilon }=(\upsilon +\frac{1}{2})\hbar\omega-(\upsilon+\frac{1}{2} )^2\hbar\omega x_{e}\label{23}$ or, expressed in wavenumber units, $hcE_{\upsilon }=(\upsilon +\frac{1}{2})\tilde{\nu }-(\upsilon +\frac{1}{2})^2x_{e}\tilde{\nu }\label{24}$ Higher vibrational energy levels are spaced closer together, just as in real molecules. Vibrational transitions of diatomic molecules occur in the infrared, roughly in the range of 50-12,000 cm$^{-1}$. A molecule will absorb or emit radiation only if it has a non-zero dipole moment. Thus HCl is infrared active while H2 and Cl2 are not. 
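The closing-up of the Morse levels can be made concrete with Equation $\ref{24}$. The constants below are representative of HCl and are assumed here purely for illustration:

```python
# Morse term values, Eq (24): G(v) = (v + 1/2)*w - (v + 1/2)^2 * xe*w, cm^-1.
# w and w*xe below are representative of HCl (assumed values).
w, wxe = 2990.9, 52.8    # cm^-1

def G(v):
    return (v + 0.5)*w - (v + 0.5)**2 * wxe

# Successive spacings shrink linearly: G(v+1) - G(v) = w - 2*(v+1)*wxe
spacings = [G(v + 1) - G(v) for v in range(10)]
```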
Vibration of Polyatomic Molecules A molecule with N atoms has a total of 3N degrees of freedom for its nuclear motions, since each nucleus can be independently displaced in three perpendicular directions. Three of these degrees of freedom correspond to translational motion of the center of mass. For a nonlinear molecule, three more degrees of freedom determine the orientation of the molecule in space, and thus its rotational motion. This leaves 3N - 6 vibrational modes. For a linear molecule, there are just two rotational degrees of freedom, which leaves 3N - 5 vibrational modes. For example, the nonlinear molecule H2O has three vibrational modes while the linear molecule CO2 has four vibrational modes. The vibrations consist of coordinated motions of several atoms in such a way as to keep the center of mass stationary and nonrotating. These are called the normal modes. Each normal mode has a characteristic resonance frequency $\tilde{\nu}_{i}$, which is usually determined experimentally. To a reasonable approximation, each normal mode behaves as an independent harmonic oscillator of frequency $\tilde{\nu}_{i}$. The normal modes of H2O and CO2 are pictured below. A normal mode will be infrared active only if it involves a change in the dipole moment. All three modes of H2O are active. The symmetric stretch of CO2 is inactive because the two C-O bonds, each of which is polar, exactly compensate. Note that the bending mode of CO2 is doubly degenerate. Bending of adjacent bonds in a molecule generally involves less energy than bond stretching, thus bending modes generally have lower wavenumbers than stretching modes. Rotation of Diatomic Molecules The rigid rotor model assumes that the internuclear distance R is a constant. This is not a bad approximation since the amplitude of vibration is generally of the order of 1% of R. 
The Schrödinger equation for nuclear motion then involves the three-dimensional angular momentum operator, written $\hat{J}$ rather than $\hat{L}$ when it refers to molecular rotation. The solutions to this equation are already known and we can write $\frac{\hat{J}^2}{2\mu R^2}Y_{JM}(\theta ,\phi )=E_{J}Y_{JM}(\theta,\phi)\; \; \; J=0,1,2...\; \; \; M=0,\pm \, 1...\pm\,J\label{25}$ where $Y_{JM}(\theta,\phi)$ are spherical harmonics in terms of the quantum numbers J and M, rather than l and m. Since the eigenvalues of $\hat{J}^2$ are $J(J +1)\hbar^2$, the rotational energy levels are $E_{J}=\frac{\hbar^2}{2I}J(J+1)\label{26}$ The moment of inertia is given by $I=\mu R^2=M_{A}R^{2}_{A}+M_{B}R^{2}_{B}\label{27}$ where $R_A$ and $R_B$ are the distances from nuclei A and B, respectively, to the center of mass. In wavenumber units, the rotational energy is expressed $hcE_{J}=BJ(J+1)\, cm^{-1}\label{28}$ where B is the rotational constant. The rotational energy-level diagram is shown in Fig. 5. Each level is (2J + 1)-fold degenerate. Again, only polar molecules can absorb or emit radiation in the course of rotational transitions. The radiation is in the microwave or far infrared region. The selection rule for rotational transitions is $\Delta J = \pm 1$. Molecular Parameters from Spectroscopy Following is a table of spectroscopic constants for the four hydrogen halides: The force constant can be found from the vibrational constant. Equating the energy quantities $\hbar\omega=hc\tilde{\nu }$, we find $\omega=2\pi c\tilde{\nu }=\sqrt{\frac{k}{\mu }}\label{29}$ Thus $k=(2\pi c\tilde{\nu })^2\mu\label{30}$ with $\mu=\frac{m_{A}m_{B}}{m_{A}+m_{B}}=\frac{M_{A}M_{B}}{M_{A}+M_{B}}u\label{31}$ where $u = 1.66054\times10^{-27}$ kg, the atomic mass unit. $M_A$ and $M_B$ are the conventional atomic weights of atoms A and B (on the scale $^{12}$C = 12). 
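Equations $\ref{30}$ and $\ref{31}$, together with the relation $hcB=\hbar^2/2I$ implied by Equations $\ref{26}$ and $\ref{28}$, can be applied directly. The sketch below uses representative constants for HCl (assumed values, not taken from the table):

```python
import math

# Force constant and bond length of HCl from its vibrational and
# rotational constants (values below assumed for illustration).
c = 2.99792458e10       # speed of light, cm/s
u = 1.66054e-27         # atomic mass unit, kg
hbar = 1.054572e-34     # J s

MA, MB = 1.008, 35.45   # atomic weights of H and Cl
nu, B = 2991.0, 10.59   # cm^-1

mu = MA*MB/(MA + MB) * u              # reduced mass, kg

k = (2*math.pi*c*nu)**2 * mu          # Eq (30), N/m
I = hbar / (4*math.pi*c*B)            # moment of inertia, kg m^2
R = math.sqrt(I/mu) * 1e12            # bond length in pm (1 m = 1e12 pm)
```

Both numbers land close to the values quoted below for HCl (roughly 510-520 N/m and 127-128 pm), the small differences reflecting the assumed input constants.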
Putting in numerical factors $k=58.9\, \times \, 10^{-6}(\tilde{\nu }/cm^{-1})^2\frac{M_{A}M_{B}}{M_{A}+M_{B}}\ N/m \label{32}$ This gives 958.6, 512.4, 408.4 and 311.4 N/m for HF, HCl, HBr and HI, respectively. These values do not take account of anharmonicity. The internuclear distance R is determined by the rotational constant. By definition, $hcB=\frac{\hbar^2}{2I}\label{33}$ Thus $B=\frac{\hbar}{4\pi cI}\label{34}$ with $I=\mu R^2=\frac{m_{A}m_{B}}{m_{A}+m_{B}}R^2=\frac{M_{A}M_{B}}{M_{A}+M_{B}}uR^2\; \; kg\: m^2\label{35}$ Solving for R, $R=410.6 / \sqrt{\frac{M_{A}M_{B}}{M_{A}+M_{B}}(B/cm^{-1})}\: \: pm\label{36}$ For the hydrogen halides, HF, HCl, HBr, HI, we calculate R = 92.0, 127.9, 142.0, 161.5 pm, respectively. Rotation of Nonlinear Molecules A nonlinear molecule has three moments of inertia about three principal axes, designated $I_a$, $I_b$ and $I_c$. The classical rotational energy can be written $E=\dfrac{J^2_{a}}{2I_{a}}+\frac{J^2_{b}}{2I_{b}}+\dfrac{J^2_{c}}{2I_{c}}\label{37}$ where $J_a$, $J_b$, $J_c$ are the components of angular momentum about the principal axes. For a spherical rotor, such as CH4 or SF6, the three moments of inertia are equal to the same value I. The energy simplifies to $J^2/2I$ and the quantum-mechanical Hamiltonian is given by $\hat{H}=\frac{\hat{J}^2}{2I}\label{38}$ The eigenvalues are $E_{J}=\frac{\hbar^2}{2I}J(J+1)\; \;\; J=0,1,2..\label{39}$ just as for a linear molecule. But the levels of a spherical rotor have degeneracies of $(2J + 1)^2$ rather than $(2J + 1)$. A symmetric rotor has two equal moments of inertia, say $I_c = I_b \neq I_a$. The molecules NH3, CH3Cl and C6H6 are examples. 
The Hamiltonian takes the form $\hat H = \frac{\hat J_a^2}{2I_a}+\frac{\hat J_b^2 + \hat J_c^2}{2I_b} = \frac{\hat J^2}{2I_b} + \left(\frac{1}{2I_a} - \frac{1}{2I_b}\right) \hat J_a^2 \label{40}$ Since it is possible to have simultaneous eigenstates of $\hat J^2$ and one of its components $\hat J_a$, the energies of a symmetric rotor have the form $E_{JK}=\frac{\hbar^2 J(J+1)}{2I_{b}}+\hbar^2\left(\frac{1}{2I_{a}}-\frac{1}{2I_{b}}\right)K^2\; \; J=0,1,2...\; \;\; K=0,\pm 1,\pm2,...\pm J\label{41}$ There is, in addition, the (2J + 1)-fold M degeneracy. Electronic Excitations in Diatomic Molecules The quantum states of molecules are composites of rotational, vibrational and electronic contributions. The energy spacings characteristic of these different degrees of freedom vary over many orders of magnitude, giving rise to very different spectroscopic techniques for studying rotational, vibrational and electronic transitions. Electronic excitations are typically of the order of several electron volts, 1 eV being equivalent to approximately 8000 cm$^{-1}$ or 100 kJ mol$^{-1}$. As we have seen, typical energy differences are of the order of 1000 cm$^{-1}$ for vibration and 10 cm$^{-1}$ for rotation. Fig. 6 gives a general idea of the relative magnitudes of these energy contributions. Each electronic state has a vibrational structure, characterized by vibrational quantum numbers v, and each vibrational state has a rotational structure, characterized by rotational quantum numbers J and M. Every electronic transition in a molecule is accompanied by changes in vibrational and rotational states. Generally, in the liquid state, individual vibrational transitions are not resolved, so that electronic spectra consist of broad bands comprising a large number of overlapping vibrational and rotational transitions. Spectroscopy on the gas phase, however, can often resolve individual vibrational and even rotational transitions. 
When a molecule undergoes a transition to a different electronic state, the electrons rearrange themselves much more rapidly than the nuclei. To a very good approximation, the electronic transition can be considered to occur instantaneously, while the nuclear configuration remains fixed. This is known as the Franck-Condon principle. It has the same physical origin as the Born-Oppenheimer approximation, namely the great disparity in the electron and nuclear masses. On a diagram showing the energies of the ground and excited states as functions of internuclear distance, Franck-Condon behavior is characterized by vertical transitions, in which R remains approximately constant as the molecule jumps from one potential curve to the other. In the vibrational state $\upsilon = 0$, the maximum of probability for the internuclear distance R is near the center of the potential well. For all higher vibrational states, maxima of probability occur near the two turning points of the potential, where the total energy equals the potential energy. These correspond on the diagrams to the end points of the horizontal dashes inside the potential curve. Contributors and Attributions Seymour Blinder (Professor Emeritus of Chemistry and Physics at the University of Michigan, Ann Arbor)
Nuclear magnetic resonance (NMR) is a versatile and highly-sophisticated spectroscopic technique which has been applied to a growing number of diverse applications in science, technology and medicine. This chapter will consider, for the most part, magnetic resonance involving protons. Magnetic Properties of Nuclei In all our previous work, it has been sufficient to treat nuclei as structureless point particles characterized fully by their mass and electric charge. On a more fundamental level, as was discussed in Chap. 1, nuclei are actually composite particles made of nucleons (protons and neutrons), and the nucleons themselves are made of quarks. The additional properties of nuclei which will now become relevant are their spin angular momenta and magnetic moments. Recall that electrons possess an intrinsic or spin angular momentum s which can have just two possible projections along an arbitrary spatial direction, namely $\pm \frac{1}{2} \hbar$. Since $\hbar$ is the fundamental quantum unit of angular momentum, the electron is classified as a particle of spin one-half. The electron’s spin state is described by the quantum numbers $s=\frac{1}{2}$ and $m_s = \pm \frac{1}{2}$. A circulating electric charge produces a magnetic moment $\vec{\mu}$ proportional to the angular momentum $\vec{J}$. Thus $\vec{\mu} = \gamma \vec{J} \label{1}$ where the constant of proportionality γ is known as the magnetogyric ratio. The z-component of $\vec{\mu}$ has the possible values $\mu_z = \gamma\hbar m_J \quad \text{where} \quad m_J = -J, -J+1,..., +J \label{2}$ determined by space quantization of the angular momentum J. The energy of a magnetic dipole in a magnetic field $\vec{B}$ is given by $E= -\vec{\mu} \cdot \vec{B} = -\mu_z B \label{3}$ where the magnetic field defines the z-axis. The SI unit of magnetic field (more correctly, magnetic induction) is the tesla, designated T. Electromagnets used in NMR produce fields in excess of 10 T.
Small iron magnets have fields around 0.01 T, while some magnets containing rare-earth elements such as NIB (neodymium-iron-boron) reach 0.2 T. The Earth’s magnetic field is approximately $5\times10^{-5}$ T (0.5 gauss in alternative units), dependent on geographic location. At the other extreme, a neutron star, which is really a giant nucleus, has a field predicted to be of the order of $10^8$ T. The energy relation (3) determines the most convenient units for magnetic moment, namely joules per tesla, J T$^{-1}$. For orbital motion of an electron, where the angular momentum is l, the magnetic moment is given by $\mu_z = -\frac{e{\hbar}}{2m} m_l = -\mu_B m_l \label{4}$ where the minus sign reflects the negative electric charge. The Bohr magneton is defined by $\mu_B = \frac{e{\hbar}}{2m} = 9.274\times10^{-24}\, \mathrm{J\, T}^{-1} \label{5}$ The magnetic moment produced by electron spin is written $\mu_z = -g \mu_B m_s \label{6}$ with introduction of the g-factor. Eq (4) implies g = 1 for orbital motion. For electron spin, however, g = 2 (more exactly, 2.0023). The factor 2 compensates for $m_s = \frac{1}{2}$ such that the spin and $l = 1$ orbital magnetic moments are both equal to one Bohr magneton. Many nuclei possess spin angular momentum, analogous to that of the electron. The nuclear spin, designated I, has an integral or half-integral value: 0, $\frac{1}{2}$, 1, $\frac{3}{2}$, and so on. Table 1 lists some nuclei of importance in chemical applications of NMR. The proton and the neutron are both spin-$\frac{1}{2}$ particles, like the electron. Complex nuclei have angular momenta which are resultants of the spins of their component nucleons. The deuteron $^2$H, with I = 1, evidently has parallel proton and neutron spins. The $^4$He nucleus has I = 0, as do $^{12}$C, $^{16}$O, $^{20}$Ne, $^{28}$Si and $^{32}$S. These nuclei contain filled shells of protons and neutrons, with the vector sum of the component angular momenta equal to zero, analogous to closed shells of electrons in atoms and molecules. In fact, all even-even nuclei have spins of zero.
Nuclear magnetic moments are of the order of a nuclear magneton $\mu_N = \frac{e\hbar}{2M} = 5.051\times 10^{-27}\, \mathrm{J\, T}^{-1} \label{7}$ where M is the mass of the proton. The nuclear magneton is smaller than the Bohr magneton by a factor m/M ≈ 1836. Table 1: Some common nuclei in NMR spectroscopy In analogy with Equations $\ref{2}$ and $\ref{6}$, nuclear moments are represented by $\mu_z = g_I \mu_N m_I = \hbar \gamma_I m_I \label{8}$ where $g_I$ is the nuclear g-factor and $\gamma_I$ the magnetogyric ratio. Most nuclei have positive g-factors, as would be expected for a rotating positive electric charge. It was long puzzling that the neutron, although lacking electric charge, has a magnetic moment. It is now understood that the neutron is a composite of three charged quarks, udd. The negatively-charged d-quarks are predominantly in the outermost regions of the neutron, thereby producing a negative magnetic moment, like that of the electron. The g-factor for $^{17}$O, and other nuclei dominated by unpaired neutron spins, is consequently also negative. Nuclear Magnetic Resonance The energy of a nuclear moment in a magnetic field, according to Equation $\ref{3}$, is given by $E_{m_I} = -\hbar \gamma_I m_I B \label{9}$ For a nucleus of spin $I$, the energy in a magnetic field is split into $2I+1$ Zeeman levels. A proton, or any other spin-$\frac{1}{2}$ nucleus, has just two possible levels: $E_{\pm \frac{1}{2}} = \mp \dfrac{1}{2} \hbar \gamma B \label{10}$ with the α-spin state ($m_I = +\frac{1}{2}$) lower in energy than the β-spin state ($m_I = -\frac{1}{2}$) by $\Delta E = \hbar \gamma B \label{11}$ Fig. 1 shows the energy of a proton as a function of magnetic field. In zero field (B = 0), the two spin states are degenerate.
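The magneton values quoted in Eqs (5) and (7), and the factor of about 1836 between them, follow directly from the fundamental constants. A minimal sketch, assuming CODATA values for the constants (these numerical values are not given in the text):

```python
# Compute the Bohr and nuclear magnetons from fundamental constants,
# following Eqs (5) and (7). CODATA values in SI units.
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
M_p = 1.67262192369e-27  # proton mass, kg

mu_B = e * hbar / (2 * m_e)   # Bohr magneton, J/T
mu_N = e * hbar / (2 * M_p)   # nuclear magneton, J/T

print(mu_B)         # ~9.274e-24 J/T, as in Eq (5)
print(mu_N)         # ~5.051e-27 J/T, as in Eq (7)
print(mu_B / mu_N)  # ~1836, the proton/electron mass ratio
```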
In a field B, the energy splitting corresponds to a photon of energy $\Delta E = \hbar \omega = h \nu$ where $\omega_L = \gamma B\quad \mbox{or}\quad \nu_L = \frac{\gamma B}{2\pi} \label{12}$ known as the Larmor frequency of the nucleus. For the proton in a field of 1 T, $\nu_L$ = 42.576 MHz, as the proton spin orientation flips from $+\frac{1}{2}$ to $-\frac{1}{2}$. This transition is in the radiofrequency region of the electromagnetic spectrum. NMR spectroscopy consequently exploits the technology of radiowave engineering. Figure 1. Energies of spin $\frac{1}{2}$ in magnetic field showing NMR transition at Larmor frequency $\nu_L$. A transition cannot occur unless the values of the radiofrequency and the magnetic field accurately fulfill Eq (12). This is why the technique is categorized as a resonance phenomenon. If the resonance condition is not satisfied, no radiation can be absorbed or emitted by the nuclear spins. In the earlier techniques of NMR spectroscopy, it was found more convenient to keep the radiofrequency fixed and sweep over values of the magnetic field B to detect resonances. These have been largely supplanted by modern pulse techniques, to be described later. The transition probability for the upward transition (absorption) is equal to that for the downward transition (stimulated emission). (The contribution of spontaneous emission is negligible at radiofrequencies.) Thus if there were equal populations of nuclei in the α and β spin states, there would be zero net absorption by a macroscopic sample. The possibility of observable NMR absorption depends on the lower state having at least a slight excess in population.
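Both the Larmor condition of Eq (12) and the size of this population excess can be estimated numerically. A sketch, assuming the CODATA value of the proton magnetogyric ratio (not quoted in the text):

```python
import math

# Proton Larmor frequency nu_L = gamma*B/(2*pi), Eq (12), and the thermal
# spin-population excess hbar*gamma*B/(2*k*T). The value of gamma for 1H
# is the standard CODATA figure (an assumption, not given in the text).
gamma_H = 2.6752218744e8   # proton magnetogyric ratio, rad s^-1 T^-1
hbar = 1.054571817e-34     # J s
k_B = 1.380649e-23         # Boltzmann constant, J/K

def larmor_mhz(B):
    """Larmor frequency in MHz for a field B in tesla."""
    return gamma_H * B / (2 * math.pi) / 1e6

def population_excess(B, T):
    """Fractional excess of lower-state spins, hbar*gamma*B / (2*k*T)."""
    return hbar * gamma_H * B / (2 * k_B * T)

print(larmor_mhz(1.0))               # ~42.58 MHz, as quoted above
print(population_excess(1.0, 300.0)) # a few parts per million at room temperature
```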
At thermal equilibrium, the ratio of populations follows a Boltzmann distribution $\frac{N_{\beta}}{N_\alpha} = \frac{e^{-E_\beta/kT}}{e^{-E_\alpha/kT}} = e^{- \hbar \gamma B/kT} \label{13}$ Thus the relative population difference is given by $\dfrac{\Delta N}{N} = \frac{N_\alpha - N_{\beta}}{N_\alpha + N_{\beta}} \approx \frac{ \hbar \gamma B}{2kT} \label{14}$ Since nuclear Zeeman energies are so small, the populations of the α and β spin states differ very slightly. For protons in a 1 T field, $\Delta N/N \approx 3\times10^{-6}$. Although the population excess in the lower level is only of the order of parts per million, NMR spectroscopy is capable of detecting these weak signals. Higher magnetic fields and lower temperatures are favorable conditions for enhanced NMR sensitivity. The Chemical Shift NMR has become such an invaluable technique for studying the structure of atoms and molecules because nuclei represent ideal noninvasive probes of their electronic environment. If all nuclei of a given species responded at their characteristic Larmor frequencies, NMR might then be useful for chemical analysis, but little else. The real value of NMR to chemistry comes from minute differences in resonance frequencies dependent on details of the electronic structure around a nucleus. The magnetic field induces orbital angular momentum in the electron cloud around a nucleus, thus, in effect, partially shielding the nucleus from the external field B. The actual or local value of the magnetic field at the position of a nucleus is expressed $B_{loc} = (1- \sigma)B \label{15}$ where the fractional reduction of the field is denoted by σ, the shielding constant, typically of the order of parts per million. The actual resonance frequency of the nucleus in its local environment is then equal to $\nu = (1- \sigma) \frac{\gamma B}{2 \pi} \label{16}$ A classic example of this effect is the proton NMR spectrum of ethanol CH3CH2OH, shown in Fig. 2.
The three peaks, with intensity ratios 3:2:1, can be identified with the three chemically-distinct environments in which the protons find themselves: three methyl protons (CH3), two methylene protons (CH2) and one hydroxyl proton (OH). Figure 2. Oscilloscope trace showing the first NMR spectrum of ethanol, taken at Stanford University in 1951. Courtesy Varian Associates, Inc. The variation in resonance frequency due to the electronic environment of a nucleus is called the chemical shift. Chemical shifts on the delta scale are defined by $\delta = {\frac{ \nu - \nu^{O}}{\nu^{O}}} \times 10^{6} \label{17}$ where $\nu^{O}$ represents the resonance frequency of a reference compound, usually tetramethylsilane Si(CH3)4, which is rich in highly-shielded chemically-equivalent protons, as well as being unreactive and soluble in many liquids. By definition δ = 0 for TMS and almost everything else is “downfield” with positive values of δ. Most compounds have delta values in the range of 0 to 12 (hydrogen halides have negative values, e.g. δ ≈ −13 for HI). The hydrogen atom has δ ≈ 13 while the bare proton would have δ ≈ 31. Conventionally, the δ-scale is plotted as increasing from right to left, in the opposite sense to the magnitude of the magnetic field. Nuclei with larger values of δ are said to be more deshielded, with the bare proton being the ultimate limit. Fig. 3 shows some typical values of δ for protons in some common organic compounds. Figure 3. Ranges of proton chemical shifts for common functional groups. From P. Atkins, Physical Chemistry, (Freeman, New York, 2002). Fig. 4 shows a high-resolution NMR spectrum of ethanol, including a δ-scale. The “fine structure” splittings of the three chemically-shifted components will be explained in the next Section. The chemical shift of a nucleus is very difficult to treat theoretically. However, certain empirical regularities, for example those represented in Fig. 3, provide clues about the chemical environment of the nucleus.
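The delta scale of Eq (17) is easy to work with numerically. A sketch, using a hypothetical 400 MHz spectrometer frequency (the spectrometer and the 2000 Hz offset below are illustrative assumptions, not values from the text):

```python
# Chemical shift on the delta scale, Eq (17): delta = (nu - nu_ref)/nu_ref * 1e6.
def delta_ppm(nu, nu_ref):
    """Chemical shift in ppm relative to the reference (TMS) frequency."""
    return (nu - nu_ref) / nu_ref * 1e6

# Hypothetical example: a proton resonating 2000 Hz downfield of TMS
# on an assumed 400 MHz spectrometer.
nu_ref = 400.0e6              # TMS resonance frequency, Hz (assumed)
nu = nu_ref + 2000.0          # observed resonance frequency, Hz
print(delta_ppm(nu, nu_ref))  # 5.0 ppm
```

Because the shift is expressed as a ratio, the δ value of a given proton is independent of the spectrometer field, which is what makes the scale useful for comparing spectra.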
We will not consider these in any detail except to remark that increased deshielding of a nucleus (larger δ) can often be attributed to a more electronegative neighboring atom. For example, the proton in the ethanol spectrum (Fig. 4) with δ ≈ 5 can be identified as the hydroxyl proton, since the oxygen atom can draw significant electron density from around the proton. Figure 4. High-resolution NMR spectrum of ethanol showing δ scale of chemical shifts. The line at δ = 0 corresponds to the TMS trace added as a reference. Neighboring groups can also contribute to the chemical shift of a given atom, particularly those with mobile π-electrons. For example, the ring current in a benzene ring acts as a secondary source of magnetic field. Depending on the location of a nucleus, this can contribute either shielding or deshielding of the external magnetic field, as shown in Fig. 5. The interaction of neighboring groups can be exploited to obtain structural information by using lanthanide shift reagents. Lanthanides (elements 58 through 71) contain 4f-electrons, which are not usually involved in chemical bonding and can give large paramagnetic contributions. Lanthanide complexes which bind to organic molecules can thereby spread out proton resonances to simplify their analysis. A popular chelating complex is Eu(dpm)3, tris(dipivaloylmethanato)europium, where dpm is the group (CH3)3C–CO=CH–CO–C(CH3)3. Figure 5. Magnetic field produced by ring current in benzene, shown as red loops. Where the arrows are parallel to the external field B, including protons directly attached to the ring, the effect is deshielding. However, any nuclei located within the return loops will experience a shielding effect. Spin-Spin Coupling Two of the resonances in the ethanol spectrum shown in Fig. 4 are split into closely-spaced multiplets: one triplet and one quartet.
These are the result of spin-spin coupling between magnetic nuclei which are relatively close to one another, say within two or three bond separations. Identical nuclei in identical chemical environments are said to be equivalent. They have equal chemical shifts and do not exhibit spin-spin splitting. Nonequivalent magnetic nuclei, on the other hand, can interact and thereby affect one another’s NMR frequencies. A simple example is the HD molecule, in which the spin-$\frac{1}{2}$ proton can interact with the spin-1 deuteron, even though the atoms are chemically equivalent. The proton’s energy is split into two levels by the external magnetic field, as shown in Fig. 1. The neighboring deuteron, itself a magnet, will also contribute to the local field at the proton. The deuteron’s three possible orientations in the external field, with $M_I = -1, 0, +1$, make different contributions to the magnetic field at the proton, as shown in Fig. 6. The proton’s resonance is split into three evenly spaced, equally intense lines (a triplet), with a separation of 42.9 Hz. Correspondingly, the deuteron’s resonance is split into a 42.9 Hz doublet by its interaction with the proton. These splittings are independent of the external field B, whereas chemical shifts are proportional to B. Fig. 6 represents the energy levels and NMR transitions for the proton in HD. Figure 6. Nuclear energy levels for proton in HD molecule. The two Zeeman levels of the proton when B > 0 are further split by interaction with the three possible spin orientations of the deuteron $M_D = -1, 0, +1$. The proton NMR transition, represented by blue arrows, is split into a triplet with separation 42.9 Hz. Nuclear-spin phenomena in the HD molecule can be compactly represented by a spin Hamiltonian $\hat{H} = -\hbar \gamma_H (1- \sigma_H) B M_H - \hbar \gamma_D (1-\sigma_D) B M_D + h J_{HD} \vec{I}_H \cdot \vec{I}_D \label{18}$ The shielding constants $\sigma_H$ and $\sigma_D$ are, in this case, equal since the two nuclei are chemically identical.
For sufficiently large magnetic fields B, the last term is effectively equal to $h J_{HD} M_H M_D$. The spin-coupling constant J can be directly equated to the splitting expressed in Hz. We consider next the case of two equivalent protons, for example, the CH2 group of ethanol. Each proton can have two possible spin states with $M_I = \pm\frac{1}{2}$, giving a total of four composite spin states. Just as in the case of two electron spins, these combine to give singlet and triplet nuclear-spin states with total spin 0 and 1, respectively. Also, just as for electron spins, transitions between singlet and triplet states are forbidden. The triplet state allows NMR transitions with $\Delta M = \pm 1$ to give a single resonance frequency, while the singlet state is inactive. As a consequence, spin-spin splittings do not occur among identical nuclei. For example, the H2 molecule shows just a single NMR frequency. And the CH2 protons in ethanol do not show spin-spin interactions with one another. They can however cause a splitting of the neighboring CH3 protons. Fig. 7 (left side) shows the four possible spin states of two equivalent protons, such as those in the methylene group CH2, and the triplet with intensity ratios 1:2:1 which these produce in nearby protons. Also shown (right side) are the eight possible spin states for three equivalent protons, say those in a methyl group CH3, and the quartet with intensity ratios 1:3:3:1 which these produce. In general, n equivalent protons will give a splitting pattern of n + 1 lines in the ratio of binomial coefficients 1:n:n(n−1)/2 . . . The tertiary hydrogen in isobutane (CH3)3CH, marked with an asterisk, should be split into 10 lines by the 9 equivalent methyl protons. Figure 7. Splitting patterns from methylene and methyl protons. The NMR spectrum of ethanol CH3CH2OH (Fig. 4) can now be interpreted. The CH3 protons are split into a 1:2:1 triplet by spin-spin interaction with the neighboring CH2.
Conversely, the CH2 protons are split into a 1:3:3:1 quartet by interaction with the CH3. The OH (hydroxyl) proton evidently neither causes nor undergoes spin-spin splitting. The explanation for this is hydrogen bonding, which involves rapid exchange of hydroxyl protons among neighboring molecules. If this rate of exchange is greater than or comparable to the NMR radiofrequency, then the splittings will be “washed out.” Only one line with a motion-averaged value of the chemical shift will be observed. NMR has consequently become a useful tool to study such dynamic exchange processes.
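The n + 1 rule and the binomial intensity pattern described above can be generated directly from Pascal's triangle. A minimal sketch (the function name is illustrative):

```python
from math import comb

# Intensity pattern produced by n equivalent spin-1/2 neighbors:
# n + 1 lines with binomial-coefficient intensities (Pascal's triangle).
def multiplet(n):
    """Relative intensities of the splitting caused by n equivalent protons."""
    return [comb(n, k) for k in range(n + 1)]

print(multiplet(2))       # [1, 2, 1]    - triplet from a CH2 group
print(multiplet(3))       # [1, 3, 3, 1] - quartet from a CH3 group
print(len(multiplet(9)))  # 10 lines, for the tertiary H of isobutane
```

The intensities arise simply from counting: each line corresponds to the number of ways the n neighboring spins can produce a given total spin projection.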
The study of any discipline requires some grounding in fundamentals. Without this common experience, there is little hope of communicating any complex concepts. For example, in order to make use of a textbook, one must be comfortable with reading. In a mathematically intensive discipline such as physical chemistry, one's comfort level must extend to following discussions that incorporate mathematics and mathematical equations and relationships. As an example, consider the proof of conservation of energy as a means to frame a discussion of this concept. • 1.1: Some Newtonian Physics Consider the definition of acceleration (a) as the first time-derivative of velocity (v) and the second time-derivative of position (x). • 1.2: Some Vectors and Dot Products The concepts of linear combinations and orthogonality show up repeatedly in quantum chemistry. But these are generally not new concepts to students at this level, as the same concepts are used to describe forces and motions in a standard physics course in classical mechanics. • 1.3: Classical Description of a Wave on a String The mathematics used in solving quantum mechanical problems follow the same basic process for each of the different problems we will examine. In this section, those mathematics will be developed in order to describe a (hopefully) familiar problem in classical physics. • 1.4: Failures of Classical Physics Imagine being a scientist in the year 1900. At the time, there was significant debate in society as to whether or not science was a valuable discipline for study. • 1.5: On Superposition and the Weirdness of Quantum Mechanics In order to better appreciate the fascinating (and sometimes shocking!) results of the quantum world, let’s consider some measurable properties of electrons. Consider in particular two specific properties they exhibit.
• 1.6: References • 1.7: Vocabulary and Concepts • 1.8: Problems Thumbnail: The Photoelectric effect requires quantum mechanics to describe accurately (CC BY-SA-NC 3.0; anonymous via LibreTexts). 01: Foundations and Review Consider the definition of acceleration (a) as the first time-derivative of velocity (v) and the second time-derivative of position (x). $a=\dfrac{dv}{dt}=\dfrac{d^2x}{dt^2}\nonumber$ Newton’s second law states that force (F) is the product of mass (m) and acceleration. \begin{aligned} F&=ma \\ &=m\dfrac{dv}{dt} \\ &=m\dfrac{d^2x}{dt^2} \end{aligned}\nonumber Since momentum (p) is related to velocity and mass through the definition $p=mv\nonumber$ (and mass is invariant to time), the following must hold. $\dfrac{dp}{dt}=\dfrac{d\left(mv\right)}{dt}=m\dfrac{dv}{dt}=ma=F\nonumber$ Now consider potential energy (U), which is also related to force through the first derivative with respect to position. $F=-\dfrac{dU}{dx}\nonumber$ This indicates that the following equation must hold for any particle that can be described by Newtonian motion. $-\dfrac{dU}{dx}=\dfrac{dp}{dt}\nonumber$ The classical Hamiltonian (H) is the sum of kinetic energy (T) and potential energy (U). And as it turns out, the kinetic energy can be expressed in terms of momentum. $T=\dfrac{mv^2}{2}=\dfrac{p^2}{2m}\nonumber$ So the Hamiltonian function, which gives the sum of the kinetic and potential energies, is given by $H=\dfrac{p^2}{2m}+U\nonumber$ The time-rate-of-change of the total energy can be found from the first derivative of H with respect to t.
\begin{aligned} \dfrac{d}{dt}H &=\dfrac{d}{dt}\left(\dfrac{p^2}{2m}+U\right) \\ &=\dfrac{1}{2m}\cdot 2p\cdot \dfrac{dp}{dt}+\dfrac{dU}{dt} \\ &=\dfrac{2mv}{2m}\cdot \dfrac{dp}{dt}+\dfrac{dU}{dx}\dfrac{dx}{dt} \\ &=\dfrac{dx}{dt}\left(\dfrac{dp}{dt}+\dfrac{dU}{dx}\right)\end{aligned} \nonumber And since $-\dfrac{dU}{dx}=\dfrac{dp}{dt}\nonumber$ it follows that \begin{aligned} \dfrac{d}{dt}H&=\dfrac{dx}{dt}\left(-\dfrac{dU}{dx}+\dfrac{dU}{dx}\right) \\ &=\dfrac{dx}{dt}\left(0\right) \\ &=0 \end{aligned}\nonumber This indicates that the total energy of a system that follows Newtonian physics does not change in time. Another way to state this is that energy is conserved, or that total energy is a “constant of the motion”. This is also a mathematical proof that the sum of potential and kinetic energy must be conserved in all processes, since this sum cannot change in time. Many discussions in this text will rely on derivations such as the one above in order to make specific points about the nature of matter. Keep in mind that the important points are the conclusions, as well as the pathway relating the conclusions to the initial parameters of the problem. The more you can focus on these aspects, rather than getting bogged down in the specifics of the math, the more sense quantum mechanics will make to you.
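The result dH/dt = 0 can also be illustrated numerically. A sketch, assuming a mass on a spring with U = ½kx² (an example system chosen for illustration; the leapfrog integrator used below is a standard choice, not something from the text):

```python
# Numerical illustration of dH/dt = 0: integrate Newton's equation for a
# mass on a spring (U = 0.5*k*x**2) with the leapfrog (velocity Verlet)
# scheme and watch the total energy H = p**2/(2m) + U stay nearly constant.
m, k = 1.0, 4.0          # mass and force constant, arbitrary units
x, v = 1.0, 0.0          # initial position and velocity
dt = 1e-3                # time step

def hamiltonian(x, v):
    """Total energy: kinetic plus potential."""
    return 0.5 * m * v**2 + 0.5 * k * x**2

H0 = hamiltonian(x, v)
for _ in range(10000):   # integrate to t = 10
    v += 0.5 * dt * (-k * x) / m   # half kick: F = -dU/dx = -k*x
    x += dt * v                    # drift
    v += 0.5 * dt * (-k * x) / m   # half kick

drift = abs(hamiltonian(x, v) - H0) / H0
print(drift)  # tiny relative energy drift, consistent with conservation
```

The residual drift comes only from the finite time step, not from the physics; halving dt reduces it by roughly a factor of four.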
The concepts of linear combinations and orthogonality show up repeatedly in quantum chemistry. But these are generally not new concepts to students at this level, as the same concepts are used to describe forces and motions in a standard physics course in classical mechanics. A pair of vectors ($\textbf{u}$ and $\textbf{v}$) in three-dimensional space can be described as linear combinations of basis vectors in the x, y and z directions ($\textbf{i}$, $\textbf{j}$ and $\textbf{k}$, respectively.) \begin{aligned} \mathbf{u} &= \mathrm{a} \mathbf{i}+\mathrm{b} \mathbf{j}+\mathrm{c} \mathbf{k} \\ \mathbf{v} &= \mathrm{d} \mathbf{i}+\mathrm{e} \mathbf{j}+\mathrm{f} \mathbf{k} \end{aligned}\nonumber The inner product of two vectors $\textbf{u}$ and $\textbf{v}$ is given the symbol $\langle \textbf{u} \mid \textbf{v} \rangle$. There are many possible definitions for an inner product, but most students are familiar with the dot product. The dot product of these two vectors can be calculated by $\left\langle \textbf{u} \mid \textbf{v}\right\rangle =\textbf{u} \cdot \textbf{v}=\left(a\cdot d\right)+\left(b\cdot e\right)+\left(c\cdot f\right)\nonumber$ If the dot product is zero, the two vectors are said to be orthogonal. In three-dimensional space, this is oftentimes interpreted as the vectors having a $90^{\circ}$ angle between them, as the dot product can also be calculated from $\textbf{u} \cdot \textbf{v}= \|\textbf{u}\| \|\textbf{v}\| \cos(\alpha)\nonumber$ where $\|\textbf{u}\|$ indicates the magnitude of the vector $\textbf{u}$ and $\alpha$ is the angle formed between the two vectors $\textbf{u}$ and $\textbf{v}$. Given this definition, the only way two vectors of non-zero magnitude can be orthogonal is if the $\cos(\alpha)$ term vanishes. In other words, the angle between them must be $90^{\circ}$ or $\pi/2$ radians. The concept of orthogonality can also be extended to include functions. All that is necessary is a definition for an inner product for two functions.
The definition that we will encounter most in quantum mechanics is the integral over all relevant space of the product of the two functions. $\langle f\left(x\right)\mid g\left(x\right) \rangle =\int{f\left(x\right)\cdot g\left(x\right)\ dx}\nonumber$ In the event that this integral is zero, the two functions are orthogonal in the same sense that two vectors whose dot product is zero are orthogonal. In addition to being orthogonal, vectors can also be normalized. A vector is said to be normalized if it has a unit magnitude. The magnitude of a vector is determined by taking the square root of the dot product of the vector with itself. $\|\textbf{u}\|=\sqrt{\left\langle \textbf{u} \mid \textbf{u} \right\rangle} =\sqrt{a^{2} +b^{2} +c^{2} }\nonumber$ The vector $\textbf{u}$ is normalized if this magnitude is unity. In the case of vectors, $\textbf{i}$, $\textbf{j}$ and $\textbf{k}$ form an orthonormal set. That is to say that each vector in the set is orthogonal to the other two and is normalized as each has a unit magnitude. This property can be defined for any set of vectors $\textbf{e}_1, \textbf{e}_{2}\ldots \textbf{e}_{N}$ by the following relationship $\left\langle \textbf{e}_{i} \mid \textbf{e}_{j} \right\rangle =\delta_{ij}\nonumber$ where $\delta_{ij}$ is a function called the Kronecker delta and has the properties $\delta_{ij}=\begin{cases} 1 & \text{if } i=j \\ 0 & \text{if } i \neq j \end{cases}\nonumber$ Similarly, functions ($f_{1}(x), f_{2}(x) \ldots f_{N}(x)$) can form an orthonormal set if $\left\langle f_{i}(x) \mid f_{j} (x) \right\rangle =\int f_{i} (x)\cdot f_{j} (x)\; dx =\delta _{ij}\nonumber$ As we will see, this relationship is common in quantum mechanics, and has many useful properties which we will exploit as they make calculations simpler. This will be particularly evident when we discuss the superposition theorem.
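Both versions of the orthonormality relation can be checked numerically. A sketch, assuming the unit vectors i, j, k and, for the function case, the normalized sines $\sqrt{2}\sin(n\pi x)$ on the arbitrary interval [0, 1] (this interval and these example functions are illustrative choices):

```python
import math

# Check <e_i | e_j> = delta_ij for the unit vectors i, j, k.
def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(a * b for a, b in zip(u, v))

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for i, u in enumerate(basis):
    for j, v in enumerate(basis):
        assert dot(u, v) == (1 if i == j else 0)

# Check <f_i | f_j> = delta_ij for two normalized sine functions on [0, 1],
# approximating the integral with the midpoint rule.
def inner(f, g, n_pts=10000):
    """Inner product of two functions on [0, 1] by the midpoint rule."""
    dx = 1.0 / n_pts
    return sum(f((k + 0.5) * dx) * g((k + 0.5) * dx) for k in range(n_pts)) * dx

f1 = lambda x: math.sqrt(2) * math.sin(math.pi * x)
f2 = lambda x: math.sqrt(2) * math.sin(2 * math.pi * x)
print(inner(f1, f1))  # ~1: f1 is normalized
print(inner(f1, f2))  # ~0: f1 and f2 are orthogonal
```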
The mathematics used in solving quantum mechanical problems follow the same basic process for each of the different problems we will examine. In this section, those mathematics will be developed in order to describe a (hopefully) familiar problem in classical physics. Consider a wave on a string of length a which is fixed at both ends ($x=0$ and $x=a$.) Classical physics tells us that the wave will obey the following condition $\dfrac{{\partial }^2}{\partial x^2}\phi \left(x,t\right)=\dfrac{1}{v^2}\dfrac{{\partial }^2}{\partial t^2}\phi (x,t)\nonumber$ where $\phi (x,t)$ gives the displacement of the string from equilibrium at position $x$ and time $t$. To solve this second order partial differential equation, we separate the function into the product of a function which deals only in position and one which deals only in time. $\text{Let}\; \phi (x,t)=X(x)T(t)\nonumber$ Substituting this form into the equation above and gathering spatial variables on one side and time variables on the other, we get $\dfrac{\partial ^{2} }{\partial x^{2} } \; X(x)T(t)=\; \dfrac{1}{v^{2} } \dfrac{\partial ^{2} }{\partial t^{2} } X(x)T(t)\nonumber$ $T(t)\dfrac{d^{2} }{dx^{2} } \; X(x)=\; \dfrac{X(x)}{v^{2} } \dfrac{d^{2} }{dt^{2} } T(t)\nonumber$ Notice how the partial derivatives become total derivatives since the functions on which they operate depend only on the variables in the given derivative operators. Now dividing both sides by $X(x)T(t)$ yields $\dfrac{1}{X(x)} \dfrac{d^{2} }{dx^{2} } X(x)=\dfrac{1}{v^{2} T(t)} \dfrac{d^{2} }{dt^{2} } T(t)\nonumber$ The only way this can be true is if each side is equal to a constant. Since I already know the answer, I am going to cheat and let that constant be $-k^{2}$ since this will avoid imaginary numbers in the solution.
So now we generate two separated second order differential equations: $\dfrac{d^{2} }{dx^{2} } X(x)=-k^{2} X(x)\nonumber$ $\dfrac{d^{2} }{d{\rm t}^{2} } T(t)\; =\; -v^{2} k^{2} T(t)\nonumber$ These two equations are examples of a special type called an eigenvalue-eigenfunction relationship. In relationships of this type, the operator (in this case a second derivative) operates on a function, yielding the same function multiplied by a constant. Relationships of this type appear throughout quantum mechanics. The Spatial Solutions Let’s consider only the spatial portion for the time being. Being a second order ordinary differential equation, there will be two linearly independent functions $X(x)$ which satisfy the equation. Two fairly obvious choices to this eigenvalue-eigenfunction problem are $X(x)=\sin (kx)\, \text{and} \, X(x)=\cos (kx)\nonumber$ As mathematics would have it, any linear combination of these two solutions will also be a solution. Thus, it is convenient to write a general solution that is a linear combination of the two linearly independent functions. $X(x)=A\sin(kx)+B \cos(kx)\nonumber$ We will now employ the boundary conditions to find values for the variables $A$, $B$ and $k$. The boundary conditions are that the string is fixed at both ends. Thus we know that $X(0)=0$ and $X(a)=0$ Using the first condition, we see that \begin{aligned} X(0)&= A \sin(k\cdot 0)+B\cos (k\cdot 0) \\ &=0+B \\ &=0 \end{aligned}\nonumber This can only be true if $B = 0$, since the cosine term would give a non-zero contribution for any non-zero value of $B$, implying that the string is displaced at an end where it is fixed, which it cannot be. For the remainder of the solution to this problem, the cosine term will be neglected since it must vanish in order to ensure that $X(0) = 0$. The second condition is that $X(a) = 0$. This requires that $X(a)=A\sin (k\cdot a)=0\nonumber$ One way of making this true is if $A = 0$.
This is known as a trivial solution, since it implies that $X(x)$ is zero for any value of $x$ (meaning the string is never displaced from equilibrium at any point). Many problems have trivial solutions, but these are generally ignored as they add no useful insight into the physical behavior of a system. To get the non-trivial solutions, it is useful to know when $\sin(\alpha) = 0$. This will be true if $\alpha$ is an integral multiple of $\pi$. Thus, $k\cdot a=n\pi \quad n=1,2,3\ldots\nonumber$ or $k=\dfrac{n\pi }{a} \quad n=1,2,3\ldots\nonumber$ Another way to think of this is that the second condition ($X(a) = 0$) can only be met if the length of the string ($a$) is a half-integral multiple of the wavelength of the sine function. Since there are several (an infinite number, really) possible values of $n$, the solution implies an infinite number of functions as solutions. Further, there is no reason to expect that $A$ needs to be the same for each value of $n$. \begin{aligned}X_{n} (x)&=A_{n} \sin \left(\dfrac{n\pi x}{a} \right) \\ n&=1,2,3... \end{aligned}\nonumber Since we have only two boundary conditions, we can only determine two of the unknown quantities. The last one, $A_{n}$, will govern the amplitude of the particular function. A large value implies that the string will be displaced a large amount from its equilibrium position. Thus, there may be a different value of $A_{n}$ for each value of n (which is why the subscript is included). For the time being, though, let’s leave $A_{n}$ as a symbolic variable and evaluate it later. Before continuing with the time portion of the problem, let’s note some interesting properties of the solutions of the spatial portion. The functions $X_{n}(x)$ are called the “normal modes” of vibration for the string (sometimes they are called the time-independent modes). That means that a string which is prepared to vibrate with the displacements given by one of the functions $X_{n}(x)$ will have a standing wave.
In other words, the nodes (the places along the string where the string does not move, i.e. where $X_{n}(x) = 0$) are stationary. Further, the functions $X_{n}(x)$ form an orthogonal set. This implies that $\int{X_n\left(x\right)X_m\left(x\right)dx}=A_nA_m\int{{\mathrm{sin} \left(\dfrac{n\pi x}{a}\right)\ }{\mathrm{sin} \left(\dfrac{m\pi x}{a}\right)\ }dx}=A_nA_m\dfrac{a}{2}{\delta }_{nm}\nonumber$ To prove this, it is useful to consider the following result that can be found in a standard table of integrals. $\int \sin (\alpha x)\sin (\beta x)dx = \dfrac{\sin \left[\left(\alpha -\beta \right)x\right]}{2\left(\alpha -\beta \right)} -\dfrac{\sin \left[\left(\alpha +\beta \right)x\right]}{2\left(\alpha +\beta \right)} \quad (\alpha \ne \beta )\nonumber$ Substitution into the above expression yields $A_{n} A_{m} \int _{x=0}^{x=a}\sin \left(\dfrac{n\pi \, x}{a} \right)\sin \left(\dfrac{m\pi \, x}{a} \right) dx = A_{n} A_{m} \left[\dfrac{\sin \left(\dfrac{\left(n-m\right)\pi \, x}{a} \right)}{2\left(n-m\right)\pi /a} -\dfrac{\sin \left(\dfrac{\left(n+m\right)\pi \, x}{a} \right)}{2\left(n+m\right)\pi /a} \right]_{x=0}^{x=a} = A_{n} A_{m} \left[\dfrac{\sin \left(\left(n-m\right)\pi \right)}{2\left(n-m\right)\pi /a} -\dfrac{\sin \left(\left(n+m\right)\pi \right)}{2\left(n+m\right)\pi /a} -0+0\right]\nonumber$ Since both $n$ and $m$ are integers, $n+m$ and $n-m$ will be integers as well, and both sine terms will vanish. Hence, for any $n \neq m$, the integral will vanish. As such, any pair of distinct functions in this set are mutually orthogonal, and the functions form an orthogonal set.
But what happens when $n = m$? Again, it is useful to pull the following result from a standard table of integrals. $\int \sin ^{2} (\alpha x)dx =\dfrac{x}{2} -\dfrac{\sin (2\alpha x)}{4\alpha }\nonumber$ Substitution into this expression yields the following: $A_{n}^{2} \int _{x=0}^{x=a}\sin ^{2} \left(\dfrac{n\pi \, x}{a} \right)dx = A_{n}^{2} \left[\dfrac{x}{2} -\dfrac{\sin \left(\dfrac{2n\pi \, x}{a} \right)}{4\left(n\pi /a\right)} \right]_{x=0}^{x=a} = A_{n}^{2} \left[\dfrac{a}{2} -0-0+0\right]\nonumber$ A convenient result comes from choosing values for $A_{n}$ such that the result is unity. $1=A_{n}^{2} \left(\dfrac{a}{2} \right)$ or $A_{n} =\sqrt{\dfrac{2}{a} }$ $A_{n}$ is called a normalization constant, and has a value chosen to ensure that the integral of the square of the function over all relevant space is unity. Another way of saying this is that $A_{n}$ is chosen so as to normalize the function. We will see this concept throughout our development of quantum mechanics. Note that $A_{n}$ does not depend on $n$. (This will not be the case for most normalization constants.) These functions $X_{n} (x)=\sqrt{\dfrac{2}{a} } \sin \left(\dfrac{n\pi \, x}{a} \right)\quad n=1,2,3\ldots\nonumber$ form an orthonormal set of functions. They have the property that $\int _{x=0}^{x=a}X_{n} X_{m} dx\; =\delta _{nm}\nonumber$ where $\delta_{nm}$ is the Kronecker delta, which has the property $\delta _{nm} =\left\{\begin{array}{ll} 1 & \text{if } n=m \\ 0 & \text{if } n\ne m \end{array}\right. \nonumber$ The Time Solutions The solution to the time dependence part of the problem is very similar to that of the spatial part. Recall that the equation $\dfrac{d^{2} }{dt^{2} } T(t)\; =\; -v^{2} k^{2} T(t)\nonumber$ must be satisfied. The value of $k$ has already been determined from the spatial solutions and is given by $k= \dfrac{n \pi}{a}$.
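The orthonormality property can also be verified numerically. The sketch below uses an arbitrary illustrative string length of $a = 1$ and a simple trapezoidal quadrature:

```python
import numpy as np

a = 1.0  # string length (arbitrary choice for illustration)

def X(n, x):
    """Normalized spatial mode X_n(x) = sqrt(2/a) sin(n pi x / a)."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

def overlap(n, m, npts=20001):
    """Trapezoidal estimate of the integral of X_n X_m over [0, a]."""
    x = np.linspace(0.0, a, npts)
    y = X(n, x) * X(m, x)
    dx = x[1] - x[0]
    return (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * dx

print(overlap(1, 1))  # approximately 1 (normalization)
print(overlap(1, 2))  # approximately 0 (orthogonality)
```

The overlap integral reproduces the Kronecker delta to within the quadrature error, for any pair of mode indices.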
For convenience, let's make the substitution $\omega _{n} =vk=\dfrac{vn\pi }{a}\nonumber$ such that $\omega_{n}$ gives a frequency for the oscillation of the string that is parameterized by the velocity of the wave. Further, if $n$ is doubled, the frequency of the wave is doubled. This would be manifested in the audible tone of the vibrating string going up by one octave. Those familiar with the acoustic nature of overtones on strings (such as those that can be produced on the strings of a guitar) are familiar with this concept. The substitution creates the rather familiar looking eigenvalue-eigenfunction problem $\dfrac{d^{2} }{dt^{2} } T(t)\; \; =\; \; -\omega _{n}^{2} T(t)\nonumber$ As was the case in the spatial part, this second order ordinary differential equation must have two linearly independent solutions, and any linear combination of those two functions will also be a solution to the equation. Thus, one can write $T(t)=C\sin (\omega _{n} t)+D\cos (\omega _{n} t)\nonumber$ The rest of the development requires a simple trick. Since there are no remaining boundary conditions by which we can evaluate $C$ and $D$, we can choose a constant $\delta$ such that $C=-\sin (\delta )$ and $D=\cos (\delta )$ so that the time function can be expressed $T(t)=\cos (\omega _{n} t)\cos (\delta )-\sin (\omega _{n} t)\sin (\delta )\nonumber$ and since $\cos (\alpha +\beta )=\cos (\alpha )\cos (\beta )-\sin (\alpha )\sin (\beta )\nonumber$ the function can be expressed $T(t)=\cos (\omega _{n} t+\delta )\nonumber$ In this expression, $\delta$ is a phase shift in time. For a suitable choice of $t = 0$, $\delta$ can be forced to be zero.
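The octave relationship is easy to see numerically. The values below ($v = 300\ m/s$, $a = 0.65\ m$, roughly a guitar string) are hypothetical choices for illustration only:

```python
import math

v = 300.0   # wave speed in m/s (hypothetical value)
a = 0.65    # string length in m (hypothetical value)

def omega(n):
    """Angular frequency of the n-th normal mode, omega_n = v n pi / a."""
    return v * n * math.pi / a

f1 = omega(1) / (2 * math.pi)  # fundamental frequency in Hz (= v / 2a)
f2 = omega(2) / (2 * math.pi)  # first overtone: exactly double, one octave up

print(f1, f2)
```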
Given this constraint, the time function can be expressed $T(t)=\cos (\omega _{n} t)\nonumber$ The final result, then, for the normalized wavefunctions that describe the motion of the string is given by $\phi _{n} (x,t)=\sqrt{\dfrac{2}{a} } \sin \left(\dfrac{n\pi \, x}{a} \right)\cos \left(\omega _{n} t\right)\nonumber$ The Superposition Principle For the following discussion, we will only concern ourselves with the time-independent solutions (the spatial functions) for simplicity. The time functions could be included to give the time evolution of each component of a superposition of waves, but the discussion of the mathematics involved would be identical to that for the spatial part of the problem. As such, we will focus just on the result for a fixed point in time of $t = 0$. As it turns out, any well-constructed wave (specifically, one that obeys the boundary conditions of the original problem) can be expressed as a linear combination of normal mode waves. $\Phi (x)=\sum _{n}c_{n} \cdot X_{n} (x)\nonumber$ where $\Phi(x)$ gives the function that describes the shape of the arbitrary wave, $X_{n}(x)$ are the time-independent functions that were derived in the previous section, given by $X_{n} (x)=\sqrt{\dfrac{2}{a} } \sin \left(\dfrac{n\pi \, x}{a} \right)\nonumber$ and the factor $c_{n}$ gives the amplitude of the $n^{th}$ component of the superposition. The coefficients $c_{n}$ (known as Fourier coefficients) are easily calculated from the following expression $c_{n} =\int \Phi (x)\cdot X_{n} (x)\, dx\nonumber$ This is easily shown by making the substitution $\Phi (x)=\sum _{m}c_{m} X_{m} (x)$ into the above equation.
$c_{n} = \int \Phi (x)\cdot X_{n} (x)\, dx = \int \left(\sum _{m}c_{m} X_{m} (x) \right)X_{n} (x)\, dx \nonumber$ Since integration is a linear operation, and multiplication is distributive, the result can be simplified $\int \left(\sum _{m}c_{m} X_{m} (x) \right)X_{n} (x)\, dx = \sum _{m}c_{m} \int X_{m} (x)X_{n} (x)\, dx = \sum _{m}c_{m} \delta _{mn} \nonumber$ using the orthonormality property of the functions $X_{n}(x)$ as developed above. The sum is also easy to simplify based on the properties of the Kronecker delta. $\begin{aligned} \sum _{m}c_{m} \delta _{mn} &= c_{1} \delta _{1n} +c_{2} \delta _{2n} +c_{3} \delta _{3n} +\ldots +c_{n} \delta _{nn} +\ldots \\ &= c_{1} \cdot 0+c_{2} \cdot 0+c_{3} \cdot 0+\ldots +c_{n} \cdot 1+\ldots \\ &= c_{n} \end{aligned}\nonumber$ The description of the function $\Phi (x)=\sum _{n}c_{n} X_{n} (x)$ is known as a Fourier expansion, and is the same sort of mathematics used by a Fourier transform spectrometer. The spectrometer, through interferometry, measures the values of the amplitudes ($c_{n}$) and then mathematically reconstructs the spectrum by superimposing the constituent functions $X_{n}(x)$ and adding them all up. To illustrate the concept, consider a function that is defined as $\Phi(x)=\left\{\begin{array}{lll} \sqrt{\dfrac{\pi}{2 a}} \sin \left(\dfrac{2 \pi x}{a}\right) & \text { if } & 0 \leq x \leq a / 2 \\ 0 & \text { if } & a / 2 \leq x \leq a \end{array}\right. \nonumber$ This function can be expanded in the basis set of normal mode (time-independent) functions. The following MathCad worksheet calculates the values of the coefficients and demonstrates the superposition of waves.
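The coefficient calculation can also be sketched numerically. The sketch below assumes the piecewise $\Phi(x)$ defined above, with the arbitrary illustrative choice $a = 1$, and computes the first few Fourier coefficients $c_n$ by trapezoidal integration:

```python
import numpy as np

a = 1.0  # string length (arbitrary choice for illustration)
x = np.linspace(0.0, a, 20001)
dx = x[1] - x[0]

def integrate(y):
    """Trapezoidal rule on the fixed grid."""
    return (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * dx

# The piecewise target function Phi(x) defined in the text
phi = np.where(x <= a / 2,
               np.sqrt(np.pi / (2 * a)) * np.sin(2 * np.pi * x / a),
               0.0)

def X_mode(n):
    """Normalized basis function X_n(x) = sqrt(2/a) sin(n pi x / a)."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

# Fourier coefficients c_n = integral of Phi * X_n over [0, a]
c = {n: integrate(phi * X_mode(n)) for n in range(1, 7)}
for n, cn in c.items():
    print(n, round(cn, 4))

# Reconstruct the wave as a superposition of its components
recon = sum(cn * X_mode(n) for n, cn in c.items())
```

Adding more terms to the sum makes the reconstructed wave `recon` an increasingly faithful copy of the original $\Phi(x)$, which is exactly the superposition-of-normal-modes idea described above.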
This sort of expansion in a set of basis functions occurs throughout chemistry, including the construction of an $sp^{3}$ hybridized orbital set used in the description of bonding in a methane molecule, or the addition of p-orbitals to form $\pi$-bonding and antibonding orbitals. Expect to see this concept again!
Imagine being a scientist in the year 1900. At the time, there was significant debate in society as to whether or not science was a valuable discipline for study. The argument was that Isaac Newton and others had already solved all of the important problems of physics and as such, there was nothing more to be learned. There were still a few problems remaining that didn’t work perfectly according to Newtonian physics, but the prevailing thought was that it was a simple matter of finding the one small piece that people were missing and the entire package would be complete. As it turned out, they couldn’t have been more incorrect! Every new detail that was discovered on these pesky problems seemed to indicate something that was not commensurate with Newtonian physics at all. And the deeper investigators looked, the more perplexing the problems became – and the further from classical physics the solutions took them. But the modeling of these problems formed the foundations of a new quantum theory. That theory, while completely counter-intuitive to scientists of the time, is now engrained in every aspect of how we think of the atomic and molecular nature of matter. As such, no study of chemistry is complete without exploring this bizarre world of quantum mechanics. So sit back, relax, and enjoy the story of the origins of the quantum theory. Max Planck and Blackbody Radiation One of the problems that perplexed scientists at the turn of the 20${}^{th}$ century was that of the description of black-body radiation. The term “Black Body” was introduced by Gustav Kirchhoff in 1860. It refers to an object that absorbs all light that falls on it (i.e. it reflects no light.) The thermal radiation emitted by a black body is called black body radiation. Black-body radiation is the light that is given off from a body that glows from being hot. Examples of blackbody radiators include incandescent light bulbs and the sun. 
In the laboratory, a black body radiator can be constructed by painting the inside of a metal box black (so that light is not reflected inside) and heating the box. The light given off by the box will be black body radiation. The emission spectrum of a black-body radiator was well established and reproducible. As the temperature increases, the intensity increases at all wavelengths and the maximum intensity shifts to shorter wavelengths. But while the experimental result was well established and agreed upon, there was no theoretical model that predicted it. Existing classical models could predict either the long-wavelength side of the spectrum or the short-wavelength side, but not both. Max Planck (1858-1947) produced the first theory that could predict both sides of the spectrum. He did this by making a ridiculous assumption about the nature of light. Despite the prevailing classical theories of the wave nature of light and numerous experimental observations confirming these theories, Planck decided to model a light beam as a shower of energy packets (which he called quanta) where the energy was proportional to the frequency of the light wave. $E=h\nu \nonumber$ In this model, $E$ is the energy of a quantum, $h$ is a constant of proportionality and $\nu$ is the frequency of the light wave. This dual nature of light (having properties of both particles and waves) was revolutionary, and was thus met with great skepticism. Planck's model, published in 1901 [1] and expressed by $I\left(\nu ,T\right)=\dfrac{2h{\nu }^3}{c^2}\left(\dfrac{1}{e^{h\nu /k_BT}-1}\right)\nonumber$ in which $I$ is the intensity, $T$ the temperature and $c$ the speed of light, successfully described both sides of the black body radiation curve. It also provided a value for $h$, the constant of proportionality, of $h = 6.55 \times 10^{-34} J \cdot s \nonumber$ close to the modern value of $6.626 \times 10^{-34}\ J\cdot s$. Planck was awarded the Nobel Prize in Physics in 1918 for this theory.
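Planck's distribution can be evaluated directly. The sketch below uses the modern CODATA constants, computes the spectral radiance on a frequency grid, and confirms that the peak shifts to higher frequency (shorter wavelength) as the temperature increases:

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance I(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

nu = np.linspace(1e12, 3e15, 20000)  # frequency grid, Hz
peak_5000 = nu[np.argmax(planck(nu, 5000.0))]
peak_6000 = nu[np.argmax(planck(nu, 6000.0))]
print(peak_5000, peak_6000)  # the hotter body peaks at higher frequency
```

At small $\nu$ the exponential can be expanded to recover the classical (long-wavelength) limit, while the exponential cutoff at large $\nu$ is what tames the short-wavelength side — the feature the classical models could not capture simultaneously.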
But while interesting, Planck's theory only provided one possible explanation of the black body radiation problem. Without corroboration from other experiments involving other phenomena, Planck's theory of light quanta would not have gained any meaningful attention. That corroboration came in a paper published by Albert Einstein describing a quantum theory of the photoelectric effect. Albert Einstein and the Photoelectric Effect When Planck published his paper in 1901, Albert Einstein was working as a scientific expert in the Swiss patent office while trying to secure a professorship in physics. He read Planck's paper, and through studying Planck's work, Einstein was able to apply a quantum theory of light to make sense of another well-established, but as yet not understood, experiment: the photoelectric effect. The photoelectric effect involves shining light on the polished surface of a metal under vacuum. If the light has a wavelength shorter than a threshold value (characteristic of the individual metal), electrons are emitted from the surface. The challenge to understanding the result came from changing the intensity of the light. Classical physics tells us that the energy of a wave is determined by its amplitude, or in the case of light, the intensity. An increase in the intensity of incident light, therefore, should lead to an increase in the kinetic energy possessed by the emitted electrons. However, the kinetic energy of the electrons seemed to be a function not of the intensity of the light, but rather its frequency. Einstein was able to explain [2] this using Planck's theory that light consisted of a shower of quanta, each of which was a packet of energy whose magnitude was proportional to the frequency of the light.
($E_{photon} = h\nu$) In Einstein's model, the kinetic energy of the photoelectrons was determined as the difference between the photon energy and the "work function", the energy necessary to rip an electron from the surface of the metal. $E_{kin} =h\nu -\varepsilon \nonumber$ In this case, each quantum of light, or photon, can produce one photoelectron. If the energy of the photons is too small (less than $\varepsilon$), no photoelectrons are produced. But at frequencies that exceed the threshold value, the kinetic energy is a linear function of the light frequency, with the slope of that line giving a value for Planck's constant of proportionality. Einstein's model thus provided a separate measurement of Planck's constant, and it yielded an identical result. At this point, the scientific community could no longer ignore this new quantum theory of light. Einstein was awarded the Nobel Prize in Physics in 1921 for explaining the photoelectric effect. Johannes Balmer and the Emission Spectrum of Hydrogen In 1885, J.J. Balmer [3] (a high school teacher and amateur scientist) wrote about the series of lines in the visible emission spectrum of atomic hydrogen. The lines formed a pattern in which the spacing decreased with decreasing wavelength and seemed to converge on a series limit. The wavelengths ($\lambda$) of lines in this spectrum fit the pattern: $\lambda =G\left(\dfrac{n^{2} }{n^{2} -4} \right) \nonumber$ where $G = 3647.053\ \overset{\circ}{A}$ (the series limit) and $n = 3,4,5, \ldots$. In modern terms, this expression is given as $\dfrac{1}{\lambda } =\tilde{\nu }=R_{H} \left(\dfrac{1}{n_{l}^{2} } -\dfrac{1}{n_{u}^{2} } \right) \nonumber$ where $R_{H}$ is known as the "Rydberg constant" for hydrogen, with the value $R_{H} = 109677.581\ cm^{-1}$. Also, $n_{l} < n_{u}$, and each value must obey $n = 1,2,3, \ldots$. In Balmer's paper, the expression is purely empirical (meaning it is based only on observation and not tied to any theoretical model).
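The Rydberg expression can be evaluated directly. For the Balmer series ($n_l = 2$), the first line ($n_u = 3$) comes out at about 656 nm, the familiar red hydrogen line; the sketch below uses the $R_H$ value quoted above:

```python
R_H = 109677.581  # Rydberg constant for hydrogen, cm^-1 (value from the text)

def balmer_wavelength_nm(n_u, n_l=2):
    """Wavelength (nm) of an emission line from 1/lambda = R_H (1/n_l^2 - 1/n_u^2)."""
    wavenumber = R_H * (1.0 / n_l**2 - 1.0 / n_u**2)  # in cm^-1
    return 1.0e7 / wavenumber  # convert cm^-1 to nm

for n in (3, 4, 5, 6):
    print(n, round(balmer_wavelength_nm(n), 2))
```

In the limit $n_u \to \infty$ the wavelength approaches $4 \times 10^{7} / R_H \approx 364.7$ nm, reproducing Balmer's series limit $G = 3647.05\ \overset{\circ}{A}$.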
While he was unable to provide any theory for the pattern he had derived from the data, he did state that such a simple pattern could not be a coincidence. The job of theoretical physics was to derive a theory of the H-atom that would yield energy levels, transitions between which would produce the observed spectrum and the simple pattern determined by Balmer. The first quantum theory of the hydrogen atom was proposed by Niels Bohr (who was born in 1885 – the year that Balmer's paper was published!) Bohr's model is consistent with the wave nature of matter predicted by Louis de Broglie. Louis de Broglie and the Wave Nature of Matter Louis de Broglie (1892-1987) was intrigued by the notion that light, which every sensible physicist knew propagated as waves, could be described as though it was a stream of particles. Not to be outdone, he decided to examine the ramifications of doing something equally preposterous – treat something everyone knew was a particle as a wave. de Broglie proposed that all particles behave with a wave nature, with a wavelength determined by their momentum and Planck's constant. $\lambda =\dfrac{h}{p} =\dfrac{h}{mv}\nonumber$ Based on this theory, de Broglie predicted in his 1923 Ph.D. dissertation that interference patterns could be observed in electron beams diffracted by regular patterns, much in the same way that such results could be seen with light waves or water waves. This phenomenon was observed in electron beams diffracted off of nickel surfaces in 1927 [4]. de Broglie was awarded the Nobel Prize in Physics in 1929 for the work in his dissertation – the first time the prize was awarded for a Ph.D. thesis! Niels Bohr and the Hydrogen Atom Niels Bohr (1885-1962) was the first person to offer a quantum theory of the hydrogen atom that satisfactorily predicted the patterns seen in the emission spectra of atomic hydrogen.
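A quick calculation shows why electron beams (and not, say, baseballs) exhibit measurable diffraction. The speeds below are illustrative choices, not values from any particular experiment:

```python
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg

def de_broglie(m, v):
    """de Broglie wavelength lambda = h / (m v), in meters."""
    return h / (m * v)

lam_electron = de_broglie(m_e, 1.0e6)   # electron at 10^6 m/s: ~7.3e-10 m,
                                        # comparable to atomic spacings in a crystal
lam_baseball = de_broglie(0.145, 40.0)  # 145 g ball at 40 m/s: ~1e-34 m,
                                        # far too small to ever observe
print(lam_electron, lam_baseball)
```

An electron's wavelength is on the order of crystal lattice spacings, which is exactly why nickel surfaces worked as the diffraction grating in the 1927 Davisson-Germer experiment cited above.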
Basically, Bohr suggested that the electron in a hydrogen atom orbited the nucleus (a proton) in a circle, the circumference of which had to be an integral multiple of de Broglie wavelengths. (Bohr's model was actually published in 1913 [5] – 10 years before de Broglie's Nobel Prize winning thesis – but it is easily explained based on the de Broglie principle.) Bohr suggested that the angular momentum of an orbiting electron had to be an integral multiple of Planck's constant divided by $2\pi$. $mvr=\dfrac{nh}{2\pi } =n\hbar \nonumber$ This expression is easily rearranged to yield the de Broglie relationship: $\begin{aligned} mvr&=\dfrac{nh}{2\pi } \\ 2\pi \, r&=\dfrac{nh}{mv} =n\lambda \end{aligned} \nonumber$ Based on this relationship, and balancing the electrostatic attractive force with the centripetal force acting on the orbiting electron, Bohr was able to derive the value of the Rydberg constant for hydrogen and predict the pattern seen in the emission spectrum of hydrogen. While the theory does a remarkable job of describing the empirical model of Balmer, it has many shortcomings as well. For example, classical physics predicts that a charged electron orbiting a charged proton should eventually see its orbit decay, with the electron crashing into the proton. Clearly this does not happen, contrary to the predictions of classical physics. Also, the Bohr theory is not applicable to atoms that have more than one electron, meaning it has no real application to most of the atoms in which chemists have an interest. Nonetheless, Bohr's foothold into the quantum world was important, and some important aspects of a quantum theory can be easily demonstrated using the model as well.
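Bohr's force-balance derivation yields the Rydberg constant in terms of fundamental constants, $R_\infty = \dfrac{m_e e^4}{8\varepsilon_0^2 h^3 c}$ (the infinite-nuclear-mass value; replacing $m_e$ with the reduced mass of the electron-proton pair gives $R_H$). A sketch of the numerical check, using modern CODATA constants:

```python
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

# Infinite-nuclear-mass Rydberg constant, converted from m^-1 to cm^-1
R_inf = m_e * e**4 / (8 * eps0**2 * h**3 * c) / 100.0

# Correct for the finite proton mass via the reduced mass
mu = m_e * m_p / (m_e + m_p)
R_H = R_inf * mu / m_e

print(R_inf, R_H)  # ~109737 and ~109677 cm^-1
```

The reduced-mass value lands on Balmer's empirical $109677.581\ cm^{-1}$ to within a fraction of a wavenumber, which is exactly the agreement that made Bohr's model so compelling.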
Heisenberg, Schrödinger and Dirac While quantum mechanics is most often taught (and will be discussed in this text) in terms of the formalisms of Erwin Schrödinger (1887-1961), the first formal theory was derived by Werner Heisenberg (1901-1976) in 1925 (he was awarded the Nobel Prize in Physics in 1932 for this theory) using a matrix formalism. Schrödinger's methodology uses integrals and eigenvalue-eigenfunction relationships and was first published in 1926. Schrödinger was awarded the Nobel Prize in Physics in 1933. Two years later, he proposed the famous "Schrödinger's Cat" thought experiment (after consulting with Albert Einstein, who never fully accepted quantum mechanics) aimed at disproving the very theory that had won Schrödinger the Nobel Prize. Schrödinger clearly lamented his contributions to the scientific foofaraw that quantum theory would become. In particular, he was dissatisfied by the notion of "quantum jumps" that were needed to describe electronic transitions in the hydrogen atom. In one heated debate with Niels Bohr, an exasperated Schrödinger exclaimed: If we are going to have to put up with these damn quantum jumps, I'm sorry that I ever had anything to do with quantum theory. [6] Paul Dirac's (1902-1984) seminal textbook on quantum theory, published in 1930, showed that the formalisms of Heisenberg and Schrödinger were mathematically identical. Dirac shared the 1933 Nobel Prize with Schrödinger. Among the many significant contributions that Dirac made was a January 1928 paper in the Proceedings of the Royal Society that helped to explain the nature of electron spin. The consequences of his relativistic interpretation of the nature of an electron also predicted the existence of antimatter. There is a lot more to the story of the development of quantum theory, and a great many colorful characters were involved.
While this text will focus on the applications of quantum theory to understanding molecular behavior rather than the history of its development, the history of the science is definitely something about which reading is extremely worthwhile. Also, given the efforts towards a unified field theory in physics, there is no time at which studying quantum mechanics could be more valuable. In the development of these theories, quantum mechanics and relativity often struggle against one another, but it is quantum mechanics that always seems to win these struggles. As such, quantum theory is bound to play an enormous role as modern physics continues to evolve. It is my sincerest hope that this introduction will not only provide the background required to make sense of modern chemistry, but also whet the appetite for more knowledge and understanding of this fascinating subject. [1] M. Planck, "On the Law of Distribution of Energy in the Normal Spectrum," Annalen der Physik, vol. 4, p. 553, 1901. [2] A. Einstein, "On a Heuristic Viewpoint Concerning the Production and Transformation of Light," Annalen der Physik, vol. 17, pp. 132-148, 1905. [3] J. J. Balmer, "Hinweis auf die Spektrallinien des Wasserstoffs," Annalen der Physik und Chemie, vol. 25, pp. 80-85, 1885. [4] C. Davisson and L. Germer, "Diffraction of Electrons by a Crystal of Nickel," Physical Review, vol. 30, pp. 705-740, 1927. [5] N. Bohr, "On the Constitution of Atoms and Molecules, Part I," Philosophical Magazine, vol. 26, pp. 1-24, 1913. [6] J. Baggot, The Meaning of Quantum Theory, New York: Oxford Science Publications, 1992, p. 28.
In order to better appreciate the fascinating (and sometimes shocking!) results of the quantum world, let's consider some measurable properties of electrons. Consider in particular two specific properties they exhibit. It doesn't really matter what these properties actually are, but it does matter that there are only two possible outcomes when measuring each of them. For the purposes of this discussion, we can call these properties Latin and Greek, and the two measurable values of these properties are X or Y (for Latin) and a or b (for Greek). For the purposes of this discussion, let us assume that we can build a perfect sorting box for each property. For example, we can build a "Latin" box that will direct electrons through one aperture if the electron is detected to have the value X, and through a different aperture if the electron is found to have the value Y. Such a box would work as follows: Similarly, we can build a "Greek" box that will sort in the same manner, except according to the measured value of the Greek property: Are the Properties Repeatable? We can use these boxes to test whether or not the measured values of the Greek and Latin properties are repeatable. In order to do this, consider directing the X aperture output of a Latin box into a second Latin box. If the measured value of the property is repeatable, we would expect all of the electrons to exit the second Latin box through the X aperture. Pictorially, the second box would look as follows, demonstrating that the property is indeed repeatable. The same behavior is observed using the Greek box, in that previously measured a electrons will always exit the a aperture of the Greek box. Are the Properties Correlated? A reasonable question to ask is whether or not the properties are correlated. An example of this correlation would be observed if previously measured X electrons were more likely to be measured as a electrons afterward.
The apparatus for testing for this kind of correlation might look as follows: As is suggested in the diagram, the outcome of the Greek measurement does not show any preference for a or b for previously measured X electrons. The outcome for measuring a electrons with a Latin box is similar, in that half of the electrons exit the X aperture and half exit the Y aperture. The conclusion, therefore, would be that the Latin and Greek properties are not correlated. Now, suppose we try a third variation and create a three-box experiment. In this experiment, we will use a Latin box to select the X electrons out of an initial random stream of electrons. These will then run through a Greek box. We will then take the a aperture output of the Greek box and run that through a Latin box. The box arrangement for this experiment would look as follows: What do you expect for the percentages of electrons leaving the Latin box apertures? As it turns out, half of the a electrons leaving the Greek box will exit the X aperture and half will exit the Y aperture. As crazy as it seems, it appears that measuring the Greek property made the electrons "forget" that they were previously measured to be X electrons! This has an important implication about the nature of these sorting boxes. It implies that it would be impossible to build a compound box (a larger box constructed from Latin and Greek boxes) that would simultaneously sort electrons by both Latin and Greek properties. In other words, the following device would not work: The reason this box will not work is that the electrons do not behave as though they carry definite values of the Latin or Greek properties. Rather, these properties have to be determined at the time of measurement. The result is contrary to the behavior of any particle that is well-described by Newtonian physics! To help illustrate this, consider randomizing the state of a quarter (\$0.25) by flipping it. We know that it will land as either heads or tails.
But we can also imagine it landing with the head (or tail) upright or upside down. The coin can, in effect, land in one of four states. For convenience, let's label them as HU, TU, HD, and TD (H/T for heads or tails, and U/D for up or down). For a classical object, like a coin, we expect all of the physical properties to persist. For example, if we flip the coin, and then measure in order, heads or tails, up or down, and then heads or tails, we expect the results of the first and third measurements to yield the same result. But in the case of the electron, measuring the Greek property seemed to cause the electron to completely forget what was measured about the Latin property. This leads us to the conclusion that there is not an internal property that determines the outcome of the measurement of the Latin property – at least not one that can survive the measurement of the Greek property. Do the Properties Interfere with One Another? While it is true that electrons cannot be definitively sorted simultaneously by Latin and Greek properties due to the lack of persistence of the measured outcomes when mixing boxes, one might ask if measuring one outcome interferes with the measurement of a second. Consider a new type of compound box, into which we will introduce two new devices: mirrors, and what we can consider a "combining" box. The role of the mirrors is simply to redirect a beam; they will not alter the beam in any way other than its direction of travel. Similarly, the "combining" box will only collect the beams and cause them to travel in the same direction. The box will be designed to accept the input of a beam of electrons previously selected as X electrons. It will then sort by Latin or Greek properties, redirect and combine the beams, and then measure for either Latin or Greek properties at the exit aperture. Such a compound device might look as follows: Such a device could be configured for four different interesting experiments.
These experiments are described below:
1. Sort the X electrons using a Latin box, and measure the Latin property at the exit
2. Sort the X electrons using a Latin box, and measure the Greek property at the exit
3. Sort the X electrons using a Greek box, and measure the Greek property at the exit
4. Sort the X electrons using a Greek box, and measure the Latin property at the exit
The results of these experiments are summarized in the table below:

Experiment | Input | Sorter | Detector | Result
I | 100% X | Latin | Latin | 100% X
II | 100% X | Latin | Greek | 50% a, 50% b
III | 100% X | Greek | Greek | 50% a, 50% b
IV | 100% X | Greek | Latin | ???

Let's consider the results individually. Experiment I The results of this experiment are not surprising based on the results of the previous sections. Consider the path that the electrons will take as they pass through the apparatus. All of the X electrons incident on the box will be sorted to exit the X aperture of the Latin box and travel to the detector, where they will again be measured as X electrons. This is the expected result because the property is measured to be repeatable by successive boxes of the same type. Experiment II Again, the result is not too surprising. We expect all of the electrons to exit the "sorting" box along the X pathway. And since the Greek property is not correlated with the Latin property, when measured at the Greek detector, we expect 50% a and 50% b electrons to be detected. Experiment III In this experiment, things are getting more interesting, as we have to consider electrons exiting the "sorting" box along both the a and b paths, each accounting for half of the initial X electrons. Of the electrons that travel along the a path (which is expected to be 50% of the incident X electrons), we expect them all to be measured as a electrons. Similarly, for those electrons which follow the b path, we expect them to be detected as b electrons at the detector.
Experiment IV

In this configuration, one might expect half of the incident X electrons to exit the sorter along the a path, and when detected, half will be X, and half will be Y. Similarly, for those electrons that travel along the b path, half will be detected as X and half will be detected as Y. This would result in a total of 50% X and 50% Y. And this result seems perfectly reasonable based on our initial results. But the quantum world has a huge surprise for us. In this experiment 100% of the electrons are detected as X electrons! How is this possible? It seems to completely contradict the notion that measuring the Greek property causes the electron to lose its Latin identity. On its face, this result seems completely absurd and impossible, but the behavior is observed in electrons, photons, and even large molecules such as buckyballs (\(C_{60}\) molecules)!

Further Developments

Let's consider a new apparatus in which beam stoppers can be introduced to block the individual a and b paths inside the box. This setup might look something like what is depicted in the diagram below. This suggests four new experiments, the designs and results of which are listed in the table below:

Experiment   a-path    b-path    Result
A            open      open      100% X, 0% Y
B            open      blocked   25% X, 25% Y
C            blocked   open      25% X, 25% Y
D            blocked   blocked   0% X, 0% Y

These results allow us to draw some important (but classically troubling) conclusions about the pathway the electrons are taking through the box. Do they take the a-path or the b-path? If the a-path is open (experiments A and B) we detect electrons at the exit, but the intensity is reduced by 50% if the b-path is blocked (experiment B). This result is consistent with the interpretation that half of the electrons take the a-path and half take the b-path, and it is also consistent with what we expect based on previous experiments.
However, because we now see a split of both X and Y electrons detected at the exit rather than 100% X, we have to conclude that the electrons are not simply taking the b-path. And further, we can conclude that they are not simply taking the a-path, given the results of experiment C! Are they somehow taking both paths? It may seem like a silly question, but if they were taking both paths, blocking one of the paths would result in a half electron being detected at the exit if the incident beam was slowed sufficiently, and that never happens! Electrons are always detected whole and intact. So we can conclude that the electrons are also not magically splitting in half, with each half taking one of the paths. Is it possible they take neither path? The results of experiment D force us to reject this possibility as well, since blocking both paths eliminates any detected signal at the exit. They must be somehow using the pathways, but without picking one or the other, and also without using both!

The Superposition Solution

This is where we have to resort to a new kind of description of the state of these electrons. We call this state a superposition state. We will explore what this means in great detail, and how we can use the stationary states of waves to form bases in which these superpositions can be expressed, much as we described an arbitrary wave on a string as a superposition of standing waves, each with a unique amplitude. In the case of our last set of experiments, it would be reasonable to conclude that the superposition state has some sort of an oscillatory amplitude of X and Y states, such that when the beams are combined, the amplitudes of the Y states are removed through destructive interference. And while this description may eventually be shown to be incorrect or incomplete through further experimentation (a possibility that always exists in science), it is at least consistent with the experiments summarized here.
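The interference logic described above can be sketched with a minimal two-state amplitude model. The specific basis choice here, treating the Greek states as equal-weight combinations of the Latin states, is an assumption made purely for illustration, though it is consistent with the 50/50 statistics reported in the tables above:

```python
import numpy as np

# Assumed two-state model: Latin basis {X, Y}; Greek states taken as equal
# mixtures of the Latin states (an illustrative choice, not the only one).
X = np.array([1.0, 0.0])
Y = np.array([0.0, 1.0])
a = (X + Y) / np.sqrt(2)   # Greek "a" state
b = (X - Y) / np.sqrt(2)   # Greek "b" state

def recombine(psi, block_a=False, block_b=False):
    """Split psi into the a/b paths, optionally block one, recombine coherently."""
    amp_a = 0.0 if block_a else np.dot(a, psi)
    amp_b = 0.0 if block_b else np.dot(b, psi)
    return amp_a * a + amp_b * b   # squared overlaps give detection probabilities

out = recombine(X)   # both paths open (Experiment A)
print(round(float(np.dot(X, out)) ** 2, 3))   # 1.0 -> 100% X

out = recombine(X, block_b=True)   # b-path blocked (Experiment B)
print(round(float(np.dot(X, out)) ** 2, 3),
      round(float(np.dot(Y, out)) ** 2, 3))   # 0.25 0.25
```

Blocking a path removes one amplitude before recombination, which both halves the transmitted intensity and destroys the interference that restored 100% X, reproducing the 25% X, 25% Y rows of the table.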
How to use this information going forward

In this chapter, we have seen how to model waves using classical models, and how superposition allows us to extend our understanding beyond simple standing waves. We have also seen how classical physics was challenged as new observations and technologies forced scientists to develop new models and tools in order to predict behavior in the Universe. It is important to view this as an active and dynamic process. Remember to always think like a scientist. Our best models are useful only because they are consistent with the current state-of-the-art observations of the behavior of nature. And as in any area of scientific endeavor, there will be continual tweaks and sometimes even Earth-shattering changes brought forth as new experiments allow us to see Nature through more detailed lenses. But it is this point that makes the study of Quantum Mechanics so exciting right now, as we are (perhaps) on the cusp of these new discoveries and observations, as scientists are able to use new instrumentation to make new observations every day. The hope of this book is that it will help you to develop enough insight into the chemical application of Quantum Theory to enjoy and appreciate the intricacies of this scientific journey as these new discoveries and observations challenge our current best models of Nature.
1. M. Planck, "On the Law of Distribution of Energy in the Normal Spectrum," Annalen der Physik, vol. 4, p. 553, 1901.
2. A. Einstein, "On a Heuristic Viewpoint Concerning the Production and Transformation of Light," Annalen der Physik, vol. 17, pp. 132-148, 1905.
3. J. J. Balmer, "Hinweis auf die Spektrallinien des Wasserstoffs," Annalen der Physik und Chemie, vol. 25, pp. 80-85, 1885.
4. C. Davisson and L. Germer, "Diffraction of Electrons by a Crystal of Nickel," Physical Review, vol. 30, pp. 705-740, 1927.
5. N. Bohr, "On the Constitution of Atoms and Molecules, Part I," Philosophical Magazine, vol. 26, pp. 1-24, 1913.
6. J. Baggot, The Meaning of Quantum Theory, New York: Oxford Science Publications, 1992, p. 28.

1.07: Vocabulary and Concepts

acceleration, basis set, black body radiation, correlated, eigenfunction, eigenvalue, force, Fourier coefficients, Hamiltonian, kinetic energy, Kronecker Delta, linear combinations, mass, momentum, normalization constant, normalize, normalized, orthogonality, orthonormal, position, potential energy, superposition, superposition theorem, velocity, wavefunctions

1.08: Problems

1. Consider a sphere with a mass of 1.00 kg rolling on a frictionless parabolic surface where the relationship between the height ($h$) and the position ($x$) is given by $h = x^{2}\nonumber$
   a. At what point on the surface (what value of $x$) will the sphere have the maximum kinetic energy?
   b. What will the potential energy be at the point you specified in a?
   c. If the sphere begins at rest at position x = -1.00 m, what is its potential energy?
   d. Given that the sum of potential and kinetic energy is a constant, derive an expression for kinetic energy as a function of position for the system.

2. Consider the vectors u and v given by \begin{aligned} \textbf{u} &= 3\textbf{i} + 2\textbf{j} \ \textbf{v} &= 2\textbf{i} - \textbf{j}\end{aligned}\nonumber where i and j are unit vectors in the x and y directions respectively.
   a. Calculate the magnitudes of vectors u and v.
   b. Find expressions for vectors $\bf{e_1}$ and $\bf{e_2}$ which are unit vectors parallel to u and v respectively.
   c. Are the vectors u and v orthogonal? Demonstrate this mathematically.
   d. Consider a vector $\textbf{w} = 3\textbf{i} - 6\textbf{j}$. Find values for $c_{1}$ and $c_{2}$ in order to express w as a linear combination of $\bf{e_1}$ and $\bf{e_2}$: $\textbf{w} = c_{1} \textbf{e}_1 + c_{2} \textbf{e}_2$

3. Consider a string that is distorted from equilibrium at time $t=0$ such that its wavefunction is given by $\Psi (x)=\dfrac{1}{\sqrt{5} } \phi _{1} (x)+\dfrac{2}{\sqrt{5} } \phi _{2} (x)\nonumber$ where $\phi _{n} (x)=\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a} \right)$.
   a. Show that the functions $\phi_{n}(x)$ form an orthogonal set of functions. To do this, show that $\int _{0}^{a}\phi _{n} (x)\cdot \phi _{m} (x)dx=0$ for $n \neq m$.
   b. Show that $\int _{0}^{a}\Psi (x)\cdot \Psi (x)dx=1\nonumber$
   c. Show that $\int _{0}^{a}\Psi (x)\cdot \phi _{1} (x)dx=\dfrac{1}{\sqrt{5} }$ and $\int _{0}^{a}\Psi (x)\cdot \phi _{2} (x)dx=\dfrac{2}{\sqrt{5} }$

4. Calculate the kinetic energy and de Broglie wavelength for the following particles traveling at a velocity of 500 m/s.
   a. an electron
   b. a nitrogen molecule
   c. a ball bearing with mass = 0.500 g

5. The wavelength of light from one line of an argon ion laser is 488 nm.

   Metal   Work Function (eV)
   Al      4.08
   Fe      4.5
   Co      5.0
   Cu      4.7
   Ag      4.73
   Au      5.1
   Na      2.28
   K       2.3
   Cs      2.1

   a. Calculate the energy of a photon of this wavelength in (i) J, (ii) kJ/mol, (iii) eV.
   b. Of the elements in the table, which (if any) would produce photoelectrons if light of $\lambda = 488 \; nm$ is focused on the surface?
   c. What would be the kinetic energy of a photoelectron ejected from the surface of cesium produced by light of $\lambda = 488 \; nm$?
   d. What is the longest wavelength of light that will produce photoelectrons from the surface of silver?
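For photoelectric-effect arithmetic of the kind required in the last problem, a short script can serve as a sanity check. The constants are standard CODATA values; the approach, not any particular answer, is the point here:

```python
import math

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per eV

lam = 488e-9          # argon-ion laser line, m
E_photon = h * c / lam
print(round(E_photon / eV, 2), "eV")   # ~2.54 eV per photon

# A metal ejects photoelectrons only if the photon energy exceeds its work function.
work_functions = {"Al": 4.08, "Fe": 4.5, "Co": 5.0, "Cu": 4.7, "Ag": 4.73,
                  "Au": 5.1, "Na": 2.28, "K": 2.3, "Cs": 2.1}
ejecting = [metal for metal, phi in work_functions.items() if E_photon / eV > phi]
print(ejecting)   # only the alkali metals qualify

# Kinetic energy of a photoelectron from Cs: photon energy minus work function
print(round(E_photon / eV - work_functions["Cs"], 2), "eV")
```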
In this chapter, we will develop the theoretical problem of a particle in a box. The purpose here is to explore the capabilities of quantum mechanics and see how some of the mathematical machinery works. The reason for "kicking the tires" of quantum theory with this particular problem is that the math is fairly simple (at least by comparison!) and the results are relatively easy to interpret. After developing a toolbox of methods in this chapter, we can focus more on the results as applied to more complex problems of greater chemical importance.

• 2.1: Background
At the beginning of the 1900s, there was actually a great deal of debate as to whether or not science was a valuable subject for study. At the time, Newtonian physics had proven to be a very reliable model for predicting the behavior of the observable universe.
• 2.2: The Postulates of Quantum Mechanics
There are only a small number of postulates of quantum mechanics. Upon them is built all of the conclusions of this powerful theory.
• 2.3: The One-Dimensional Particle in a Box
Imagine a particle of mass m constrained to travel back and forth in a one-dimensional box of length a. For convenience, we define the endpoints of the box to be located at x=0 and x=a. The derivation of wavefunctions and energy levels and the properties of the system using the tools of quantum mechanics will be instructive as we move forward in our studies of quantum mechanics.
• 2.4: The Tools of Quantum Mechanics
Quantum mechanics is a model that can predict many properties of systems. The prediction of these properties can be made by examining the results of operations on the wavefunctions describing systems. In order to develop a quantum mechanical "toolbox", we utilize the results of the Particle in a Box model.
• 2.5: Superposition and Completeness
As stated previously, a system need not be in a state that is described by a single eigenfunction of the Hamiltonian.
A system can be prepared in a state described by any well-behaved, single-valued, smooth function that vanishes at the endpoints. When the wavefunction is not an eigenfunction of the Hamiltonian, the Superposition Principle can be used to greatly simplify how we work with the wavefunction.
• 2.6: Problems in Multiple Dimensions
Most problems will require multiple "dimensions" as they will involve not only electronic state descriptions, but also vibrational descriptions and rotational descriptions as well. In this section, we will discuss how variables are separated in multidimensional problems, using a particle in a three-dimensional box as an example.
• 2.7: The Free Electron Model
Consider a long molecule that is a conjugated polyene. Kuhn (Kuhn, 1949) has suggested a model for the electrons involved in this π-bond system in which an electron is said to have a finite potential energy when it is "on" the molecule and an infinite potential energy when it is "off" the molecule. The model (known as the free electron model) is very much analogous to the particle in a box problem.
• 2.8: Entanglement and Schrödinger's Cat
There are many elements of the quantum theory that produce bizarre results (at least compared to our intuition as residents in a classical physics world). As it turns out, some of the early pioneers of quantum theory found these elements of strangeness too much to handle. Both expended a great deal of energy to eliminate quantum mechanics as an accepted theory that would shape modern science. All of the bizarreness predicted by quantum mechanics has withstood the tests of experimentation, though.
• 2.9: References
• 2.10: Vocabulary and Concepts
• 2.11: Problems

Thumbnail: The quantum wavefunction of a particle in a 2D infinite potential well of dimensions \(L_x\) and \(L_y\). The wavenumbers are \(n_x=2\) and \(n_y=2\). (Public Domain; Inductiveload).
02: Particle in a Box

At the beginning of the 1900s, there was actually a great deal of debate as to whether or not science was a valuable subject for study. At the time, Newtonian physics had proven to be a very reliable model for predicting the behavior of the observable universe. However, as was discussed in Chapter 1, the figurative scientific roof was about to collapse with the advent of a quantum theory.

Quantum theory attempts to do many of the same things that classical (Newtonian) physics does. The goal is to be able to model the behavior of particles and predict how they will behave in the future. In classical physics, this is accomplished by deriving an equation of motion for a particle. With such an equation, and a few initial parameters (such as position, velocity and acceleration at time \(t=0\)), the entire trajectory of a particle can be predicted as time moves forward.

The equivalent construct in the quantum theory is a wavefunction. The wavefunction for a system contains all of the information needed to predict what can be measured and observed in terms of the properties of the particle or system. The rules describing a wavefunction are not arbitrary, however. Based on a few simple postulates (given below), the requirements of the wavefunction are outlined, and the entire quantum theory is defined.
There are only a small number of postulates of quantum mechanics. Upon them is built all of the conclusions of this powerful theory.

Postulate 1

The state of a quantum-mechanical system is completely specified by a function $\Psi(\mathbf{r}, \mathrm{t})$ that depends on the coordinates of the particle $(\mathbf{r})$ and the time $(\mathrm{t})$. This function, called the wavefunction, has the important property that $\boldsymbol{\Psi}^{*}(\mathbf{r}, \mathrm{t}) \boldsymbol{\Psi}(\mathbf{r}, \mathrm{t}) \mathrm{dx} \mathrm{dy} \mathrm{dz}\nonumber$ is the probability of finding the particle within the infinitesimally small volume element dxdydz located at position $\mathbf{r}$ at time $\mathrm{t}$.

Postulate 2

To every physical observable in classical mechanics, there corresponds an operator in quantum mechanics. This operator will be both linear and Hermitian.

Postulate 3

In any measurement of the observable associated with the operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$ which satisfy the eigenvalue equation $\hat{A} \phi=a \phi\nonumber$

It is important to note that the wavefunction describing the particle need not be an eigenfunction of the operator Â. However, well-defined wavefunctions (those meeting the requirements of all of the postulates of quantum mechanics) will have the possibility of being described as a linear combination of eigenfunctions of any of the needed operators. The Superposition Principle is invaluable in working with this concept.
Postulate 4

If a system is in a state described by a normalized wavefunction ($\Psi$), then the average measured value of the observable corresponding to $\hat{A}$ is given by $\langle a\rangle=\int \Psi^{*} \hat{A} \Psi d \tau \quad \text { or } \quad\langle a\rangle=\dfrac{\int \Psi^{*} \hat{A} \Psi d \tau}{\int \Psi^{*} \Psi d \tau}\nonumber$

Postulate 5

The wavefunction of a system evolves in time according to the time-dependent Schrödinger equation $\hat{H} \Psi(\mathbf{r}, t)=i \hbar \dfrac{\partial}{\partial t} \Psi(\mathbf{r}, t)\nonumber$

Each of these postulates has important consequences and ramifications as to what quantum theory can (and cannot) tell us about a particle or system. In the remainder of this section, we will explore each postulate individually in order to lay a foundation of what quantum mechanics can predict for us about the nature of matter.

Postulate 1: a Squared Wavefunction is a Probability Distribution

This postulate describes the commonly accepted interpretation of a wavefunction. First and foremost, a wavefunction is a mathematical function. It must be single-valued in that for each point in space, there is only one value that can be calculated from the function. When considering all space which a particle may occupy, the squared wavefunction must create a smooth$^{1}$ and continuous probability distribution describing where the particle might be observed to be located. (For our purposes, "smooth" means that the first derivative of the function must be continuous.) Since the square of the wavefunction is a probability distribution for the location of the particle, any location in space where the squared wavefunction is zero has a corresponding probability of zero that the particle will be observed at that location.
Example $1$

Consider a particle of mass $m$ in a box of length $a$ that is prepared such that its wavefunction is given by $\psi(x)=\sqrt{\dfrac{30}{a^{5}}} \cdot x(a-x)\nonumber$ Calculate the probability that a position measurement will reveal the particle to be in the middle half of the box (with the measured position satisfying $a / 4 \leq x \leq 3 a / 4$).

Solution

The squared wavefunction gives the probability distribution for where the particle's position will be measured to be. $(\psi(x))^{2}=\dfrac{30}{a^{5}}\left(a^{2} x^{2}-2 a x^{3}+x^{4}\right)\nonumber$ The total probability will be given by the following integral. \begin{aligned} P &=\int_{a / 4}^{3 a / 4}[\psi(x)]^{2} d x \ &=\dfrac{30}{a^{5}} \int_{a / 4}^{3 a / 4}\left(a^{2} x^{2}-2 a x^{3}+x^{4}\right) d x \ &=\dfrac{30}{a^{5}}\left[\dfrac{a^{2} x^{3}}{3}-\dfrac{2 a x^{4}}{4}+\dfrac{x^{5}}{5}\right]_{a / 4}^{3 a / 4} \ &=\dfrac{30}{a^{5}}\left(\dfrac{27 a^{5}}{192}-\dfrac{162 a^{5}}{1024}+\dfrac{243 a^{5}}{5120}-\dfrac{a^{5}}{192}+\dfrac{2 a^{5}}{1024}-\dfrac{a^{5}}{5120}\right) \ &=\dfrac{30}{a^{5}}\left(\dfrac{26 a^{5}}{192}-\dfrac{160 a^{5}}{1024}+\dfrac{242 a^{5}}{5120}\right) \ &=0.793 \end{aligned}\nonumber

Note that the final probability is unitless!

$^{1}$ The wavefunction will be smooth provided that the potential energy function is not discontinuous. A discontinuous potential energy function (such as a step function) will lead to a wavefunction that, while single-valued, will not have a continuous first derivative, and will therefore not be "smooth" in the strictest sense.

The wavefunction contains all of the information about a system that is needed to understand how the system behaves and how it will behave in the future, at least within the limits of the quantum theory! Information on such properties as energy, momentum and position are all contained in the wavefunction.
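The probability integral in the example can be double-checked numerically. The sketch below uses Simpson's rule and should reproduce the exact value, which works out to 203/256:

```python
# Simpson's-rule check of the probability integral for psi(x) = sqrt(30/a^5) * x * (a - x).
a = 1.0   # box length; the final probability is independent of a

def psi_sq(x):
    """Probability density (the squared wavefunction)."""
    return (30.0 / a**5) * (a**2 * x**2 - 2.0 * a * x**3 + x**4)

n = 1000   # even number of subintervals on [a/4, 3a/4]
h = (3 * a / 4 - a / 4) / n
P = (h / 3) * sum((1 if i in (0, n) else 4 if i % 2 else 2) * psi_sq(a / 4 + i * h)
                  for i in range(n + 1))
print(round(P, 5))   # 0.79297, i.e. 203/256
```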
Postulate 2: Quantum Mechanical Operators

The second postulate describes the nature of quantum mechanical operators and their relationship to those properties of a system which we can observe. The operators are the tools that pull physical information from the wavefunction and reveal the properties of the quantum mechanical system. The following table shows some operators and their corresponding physically observable quantities.

Operator    Physical Observable   One Dimension                                      Three Dimensions
$\hat{x}$   Position              $x$                                                $\mathbf{r}$
$\hat{p}$   Momentum              $-i \hbar \dfrac{d}{d x}$                          $-i \hbar \vec{\nabla}$
$\hat{H}$   Energy                $\hat{T}+\hat{U}$                                  $\hat{T}+\hat{U}$
$\hat{T}$   Kinetic Energy        $-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}}$   $-\dfrac{\hbar^{2}}{2 m} \nabla^{2}$
$\hat{U}$   Potential Energy      $\mathrm{U}(x)$                                    $\mathrm{U}(\mathbf{r})$

Each of these operators has two very important properties: 1) each is linear and 2) each is Hermitian.

In one dimension, an operator $(\hat{A})$ is defined to be linear if the following condition holds: $\hat{A}(a f(x)+b g(x))=a \hat{A} f(x)+b \hat{A} g(x)\nonumber$ where $a$ and $b$ are scalar values. An example of a linear operator is multiplication by a constant or a function. Taking a derivative (or integrating) is also a linear operation, as is adding a constant or a function. An example of a non-linear operator is taking a logarithm or raising a function to a power other than one.

The Hermitian nature of quantum mechanical operators has many important consequences. An operator $(\hat{A})$ is Hermitian if it satisfies the following relationship: $\int g^{*} \hat{A} f d \tau=\int f \hat{A}^{*} g^{*} d \tau\nonumber$ for well-behaved$^{2}$ functions $f$ and $g$, where the asterisk $(*)$ indicates the complex conjugate of the function or operator. Hermitian operators have the important properties that 1) their eigenvalues are real and 2) their eigenfunctions corresponding to different eigenvalues are orthogonal.

$^{2}$ A well-behaved function is one that is normalizable and continuous over the relevant space of the problem.
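The linearity condition can be illustrated numerically. In this sketch a central-difference quotient stands in for the operator $d/dx$, and squaring serves as the non-linear counterexample; the test point and scalars are arbitrary choices:

```python
import math

def D(h, x, eps=1e-6):
    """Central-difference quotient: a numerical stand-in for the operator d/dx."""
    return (h(x + eps) - h(x - eps)) / (2.0 * eps)

f, g = math.sin, math.cos
x0, ca, cb = 0.7, 3.0, 2.0   # arbitrary test point and scalar coefficients

# d/dx is linear: A(ca*f + cb*g) agrees with ca*A(f) + cb*A(g)
lin_lhs = D(lambda t: ca * f(t) + cb * g(t), x0)
lin_rhs = ca * D(f, x0) + cb * D(g, x0)
print(abs(lin_lhs - lin_rhs) < 1e-6)   # True

# Squaring is non-linear: (ca*f + cb*g)^2 differs from ca*f^2 + cb*g^2
sq_lhs = (ca * f(x0) + cb * g(x0)) ** 2
sq_rhs = ca * f(x0) ** 2 + cb * g(x0) ** 2
print(abs(sq_lhs - sq_rhs) < 1e-6)     # False
```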
Example $2$

Is the operator $\dfrac{d}{d x}$ a Hermitian operator?

Solution

For an operator $\hat{A}$ to be Hermitian, the following relationship must hold (for well-behaved functions $f$ and $g$): $\int g^{*} \hat{A} f d \tau=\int f \hat{A}^{*} g^{*} d \tau\nonumber$ So if we choose arbitrary functions $f$ and $g$, we can evaluate the left-hand side of the above relationship by noting the pattern $d(u v)=u d v+v d u$ and integrating by parts. Using this approach $\int u d v=u v-\int v d u\nonumber$ Making the substitutions $\begin{gathered} u=g^{*} \ d v=\dfrac{d}{d x} f d x \end{gathered}\nonumber$ it should be clear that $\begin{gathered} d u=\dfrac{d}{d x} g^{*} d x \ v=f \end{gathered}\nonumber$ So $\int g^{*} \dfrac{d}{d x} f d x=\left.g^{*} f\right|_{-\infty} ^{\infty}-\int f \dfrac{d}{d x} g^{*} d x\nonumber$ In order for $f$ and $g$ to meet the criterion that they are normalizable, they must vanish as $x$ approaches $\pm \infty$. As such, $\left.g^{*} f\right|_{-\infty} ^{\infty}=0\nonumber$ and we are left with $\int g^{*} \dfrac{d}{d x} f d x=-\int f \dfrac{d}{d x} g^{*} d x\nonumber$ which, because of the sign change, cannot satisfy the Hermitian condition in general. Therefore, the operator $\dfrac{d}{d x}$ is not Hermitian. You should, however, be able to use the same method to show that the operator $\hat{A}=i \dfrac{d}{d x}$ is in fact Hermitian!

Postulate 3: Measurable Values

Postulate three states that the only measurable values for a system are those values that are eigenvalues of the corresponding quantum mechanical operator. The first measurable value which we will explore is the energy of the system (see below). Because the wavefunction provides a probability distribution, it also provides a means of predicting the statistics for a theoretical infinite set of measurements on a system. The ramifications of that point are developed in the discussion of the fourth postulate.
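Returning to the Hermiticity example, the same conclusion (and the suggested exercise for $i\,d/dx$) can be checked numerically on a grid. The grid and Gaussian-type test functions below are arbitrary choices made so that everything vanishes at the edges:

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 6001)
dx = x[1] - x[0]
f = np.exp(-x**2)                   # well-behaved: vanishes at the grid edges
g = x * np.exp(-(x - 1.0)**2 / 2)   # a second well-behaved (real) test function

def ddx(h):
    """Numerical d/dx on the grid."""
    return np.gradient(h, dx)

def inner(u, v):
    """Crude quadrature for the inner-product integrals."""
    return np.sum(np.conj(u) * v) * dx

lhs = inner(g, ddx(f))   # integral of g* (d/dx) f
rhs = inner(f, ddx(g))   # integral of f (d/dx) g*   (g is real here)
print(bool(np.isclose(lhs, rhs)))             # False: d/dx is not Hermitian

# For A = i d/dx, the conjugate operator is -i d/dx, and the two sides agree:
print(bool(np.isclose(1j * lhs, -1j * rhs)))  # True: i d/dx is Hermitian
```

The sign flip produced by integration by parts is exactly what the factor of $i$ compensates for, which is why the momentum operator carries that factor.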
Postulate 4: Expectation Values

An expectation value is an average value that would be expected based on an infinite number of measurements. Since wavefunctions give us probability information, it stands to reason that we can calculate a great deal of statistical information about a system based on the wavefunction and the corresponding operators. This will be discussed in detail in section D with regard to expectation values calculated for position, momentum and energy. It is important to note that the expectation value does not indicate the most probable measurement or observation that will be made, nor must it even give a value that can ever be measured; it just gives the average.

This postulate has very important (and controversial) ramifications. It forms the basis for how the Heisenberg Uncertainty Principle can be discussed. The problem is that quantum mechanics cannot tell you what will be measured, but rather only the probability that a certain value can be measured for a specific property. While a subtle point, it shakes the very nature of our intuition as to what it means for a system to have a certain property. In most cases, the properties we associate with classical particles do not even exist in quantum mechanical particles (at least in any sense to which we are accustomed) until those properties are measured. This has led to numerous debates as to the validity of quantum mechanics as a model, and even led one of the original developers of quantum theory (Erwin Schrödinger) to change his mind completely on the model.

Postulate 5: the Evolution of a System in Time

The $5^{\text {th }}$ postulate indicates how a system will evolve in time. It also gives the definition of the time-dependent Schrödinger equation. We will explore many of these properties based on the particle in a box problem in order to gain some insight into what quantum mechanics can and cannot tell us about a system.
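As a concrete illustration of an expectation value, the wavefunction from Example 1, $\psi(x)=\sqrt{30/a^5}\,x(a-x)$, is symmetric about the center of the box, so $\langle x \rangle$ should come out to $a/2$; a quick midpoint-rule integration of $\int \psi^* x\, \psi\, dx$ confirms this:

```python
# <x> for the wavefunction psi(x) = sqrt(30/a^5) * x * (a - x) of Example 1.
# The probability density is symmetric about the box center, so <x> should be a/2.
a = 1.0
N = 100_000
dx = a / N
expect_x = 0.0
for i in range(N):
    x = (i + 0.5) * dx                          # midpoint rule
    psi_sq = (30.0 / a**5) * (x * (a - x))**2   # probability density psi^2
    expect_x += x * psi_sq * dx
print(round(expect_x, 4))   # 0.5, i.e. a/2
```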
The particle in a box problem actually has limited physical application (although it does have some), but it does provide a "thought sandbox" in which we can explore the concepts, powers and limitations of the quantum theory. Hopefully then, when we apply the theory to problems of greater chemical interest, we can focus more on the conclusions than on the specific mathematics.
Imagine a particle of mass $m$ constrained to travel back and forth in a one-dimensional box of length $a$. For convenience, we define the endpoints of the box to be located at $x=0$ and $x=a$. The derivation of wavefunctions and energy levels and the properties of the system using the tools of quantum mechanics will be instructive as we move forward in our studies of quantum mechanics.

The Hamiltonian

Whenever we begin a new quantum mechanical problem, the first challenge is to write the Hamiltonian that describes the system. This always has two parts: a kinetic energy term (which is always the same for each particle) and a potential energy term (which is different for each new system). The kinetic energy term in one dimension for a single particle is always given by $\hat{T}=-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}}\nonumber$ This operator can be derived from the momentum operator based on the relationship between momentum and kinetic energy that comes from classical physics, namely $T=\dfrac{p^{2}}{2 m}\nonumber$ As such, \begin{aligned} \hat{T} &=\dfrac{\hat{p}^{2}}{2 m} \ &=\dfrac{1}{2 m}\left(-i \hbar \dfrac{d}{d x}\right)^{2} \ &=\dfrac{(-i \hbar)^{2}}{2 m} \dfrac{d^{2}}{d x^{2}} \ &=-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}} \end{aligned}\nonumber

The potential energy function is also fairly simple for this problem. The potential energy is infinite outside of the box ($x<0$ and $x>a$) and zero every place else. This forces the particle to be in the box at all times. It also limits the relevant space of the problem to lie between $x=0$ and $x=a$, since the infinite potential energy precludes the particle from ever existing outside of the limits of $x=0$ and $x=a$.
$U(x)=\begin{cases} \infty & \text {if } x<0 \ 0 & \text {if } 0 \leq x \leq a \ \infty & \text {if } x>a \end{cases}\nonumber$

So for the problem, limited to the space inside the box, the Hamiltonian can be written $\hat{H}=-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}}\nonumber$ and the Schrödinger equation can be written as $-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}} \psi(x)=E \psi(x)\nonumber$ where $\psi(x)$ is the wavefunction describing the state of the particle. There are a number of approaches that can be used to solve this equation to find the wavefunctions $\psi(x)$ which satisfy the differential equation.

The Solution

We will solve this problem two different ways: first using the de Broglie wavelength (an algebraic solution), and then using the Schrödinger equation (an eigenvalue/eigenfunction approach).

The de Broglie Approach

Before trying to solve the problem using Schrödinger's equation, let's use the de Broglie condition to solve the problem algebraically. Recall that de Broglie suggested that a particle can be treated as a wave, the wavelength of which is given by $\lambda=h/p$, where $h$ is Planck's constant and $p$ is the momentum of the particle. The necessary condition on the de Broglie wave is that the wave itself must vanish at the ends of the box (in order to satisfy the first postulate, since the particle can never escape the box). This will happen for very specific wavelengths which are dependent on the length of the box itself. This is very common in physics for any system with a wave nature. When the wave is constrained to a specific geometry, the system will "ring" with frequencies (and thus wavelengths) characteristic of the medium and the geometry. Quantum mechanical systems are no different in that regard.
What will be required in order to create a standing wave is that the length of the box ($a$) must be an integral multiple of half de Broglie wavelengths ($\lambda / 2$). $a=n \dfrac{\lambda}{2}\nonumber$ Given that the de Broglie wavelength is related to momentum, it is simple to derive the following relationship, indicating the possible values for momentum. \begin{aligned} a &=n \dfrac{\lambda}{2} \ &=\dfrac{n h}{2 p} \ p &=\dfrac{n h}{2 a} \end{aligned}\nonumber Given the relationship between momentum and kinetic energy, the expected expression for the energy levels can be derived. $E=\dfrac{p^{2}}{2 m}=\dfrac{1}{2 m}\left(\dfrac{n h}{2 a}\right)^{2}=\dfrac{n^{2} h^{2}}{8 m a^{2}}\nonumber$ And since the energy depends on $n^{2}$, the spacing between successive energy levels increases as the energy increases. Now let's see if we can derive this expression based on the Schrödinger equation.

The Schrödinger Equation: the Wavefunctions

The time-independent Schrödinger equation can be written $\hat{H} \psi=E \psi\nonumber$ where $\hat{H}$ is the Hamiltonian operator that was derived in section B.2, $\psi$ is the wavefunction describing the system, and $E$, the eigenvalue of the Hamiltonian, gives the energy. The wavefunctions are derived so that they are eigenfunctions of the Hamiltonian operator. Substituting the specific statement of the Hamiltonian gives $-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}} \psi=E \psi\nonumber$ For convenience, we can gather all of the constants in one place by making the substitution $-k^{2}=-\dfrac{2 m E}{\hbar^{2}}\nonumber$ The particular choice of the form of this substitution is made to simplify the solutions by avoiding (for now) imaginary functions. With the substitution, the Schrödinger equation can be rewritten as $\dfrac{d^{2}}{d x^{2}} \psi=-k^{2} \psi\nonumber$ As was the case for the classical wave-on-a-string problem, this is a second-order ordinary differential equation, and it has two linearly independent solutions.
A general solution is given by a linear combination of two linearly independent solutions, so one way to write a solution is $\psi=A \sin (k x)+B \cos (k x)\nonumber$ Now we can focus on evaluating $A$, $B$ and $k$ based on the boundary conditions. The boundary conditions are that the wavefunction must go to 0 at the ends of the box, in accordance with the first postulate. The first boundary condition, $\psi(0)=0$, yields the following result: \begin{aligned} \psi(0) &=A \sin (k \cdot 0)+B \cos (k \cdot 0) \\ &=0+B=0 \end{aligned}\nonumber So $B=0$ and the cosine term must vanish. Focusing only on what has not vanished from the solutions, the second boundary condition, $\psi(a)=0$, can be applied. $\psi(a)=A \sin (k \cdot a)=0\nonumber$ There are two trivial ways to make this true: one is to make $A=0$ and the other is to make $k=0$. Both are trivial solutions and unimportant (but fun to mention in class!) The other way to force the function to 0 at $x=a$ is to ensure that the sine function is zero by forcing $k \cdot a=n \pi\nonumber$ where $n$ is an integer $(n=1,2,3 \ldots)$, since the sine function crosses zero at every integral multiple of $\pi$ radians. This is an important point: the application of a boundary condition leads to the introduction of a quantum number, and fixes the results to only those functions where that number takes a value from a very specific list. In fact, the origin of quantum numbers in all problems is the result of the application of boundary conditions. Solving for $k$ and substituting yields $\psi(x)=A \sin \left(\dfrac{n \pi x}{a}\right)\nonumber$ This is as far as the boundary conditions can get us. The value of $A$ is determined from the first postulate of quantum mechanics, which says that the square of the wavefunction must give a probability distribution for where the particle can be measured to be. 
Since all measurements must place the particle in the box, the sum of probabilities over all of the possible locations in the box must equal unity. This implies the condition that $\int_{0}^{a}(\psi(x))^{2} d x=1\nonumber$ Solving for $A$ yields \begin{aligned} \int_{0}^{a}(\psi(x))^{2} d x &=A^{2} \int_{0}^{a} \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x \\ &=A^{2}\left[\dfrac{x}{2}-\dfrac{\sin \left(\dfrac{2 n \pi x}{a}\right)}{\left(\dfrac{4 n \pi}{a}\right)}\right]_{0}^{a} \\ &=A^{2}\left(\dfrac{a}{2}-0-0+0\right) \\ A &=\sqrt{\dfrac{2}{a}} \end{aligned}\nonumber Notice that the value of $A$ did not depend on the quantum number $n$. Normalization constants usually do have some dependence on the quantum numbers that arise from the application of boundary conditions, but this is one of the rare problems in which the normalization constant does not. The Schrödinger Equation: the energy levels Whenever we solve a quantum mechanical problem, there are two important things at which we must look: the energy levels and the wavefunctions. To chemists, the energy levels are the most important part, as the energy levels govern the chemistry the system can do. To a physicist, it is the wavefunctions that are important, as they contain all of the information about the physical nature of the system. The energy levels can be derived using the normalized wavefunctions and the Schrödinger equation. \begin{aligned} \hat{H} \psi &=E \psi \\[4pt] \underbrace{-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}}}_{\hat{H}} \underbrace{\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right)}_{\psi} &= \underbrace{\dfrac{\hbar^{2}}{2 m}\left(\dfrac{n \pi}{a}\right)^{2}}_{E} \underbrace{\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right)}_{\psi} \end{aligned} Comparison (or solving for $E$) yields the following $E=\dfrac{n^{2} \pi^{2} \hbar^{2}}{2 m a^{2}}\nonumber$ which looks similar to, but not exactly like, the result produced using the de Broglie relationship. 
In fact, it is the identical result! Making the substitution $\hbar=h / 2 \pi$, it is easy to show that $E=\dfrac{n^{2} h^{2}}{8 m a^{2}}\nonumber$ These energy levels depend on $n^{2}$, and so doubling the quantum number $n$ quadruples the energy. Another way of saying this is that the energy level spacings (the difference in energy between two successive levels) increase with increasing $n$ or energy. It is also interesting to note that the energy levels are given by a real (non-imaginary) expression. This is to be expected, since the energy is the eigenvalue of a Hermitian operator, the Hamiltonian, and thus must be a real value. Properties of the Wavefunctions The wavefunctions for the one-dimensional particle in a box problem are given by $\psi_{n}(x)=\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right)\nonumber$ These wavefunctions have many important properties. Orthogonality Similar to the relationship of Hermitian operators having real eigenvalues, the eigenfunctions of Hermitian operators must be orthogonal. Our wavefunctions are actually an infinite set of functions, any pair of which must cause the inner product integral to vanish. Mathematically, this looks like $\int_{0}^{a} \psi_{n}(x) \cdot \psi_{m}(x) d x=0 \quad n \neq m\nonumber$ This relationship is easy to verify. To do so, we will make use of the following result taken from a standard table of integrals. 
$\int \sin (\alpha x) \sin (\beta x) d x=\dfrac{\sin [(\alpha-\beta) x]}{2(\alpha-\beta)}-\dfrac{\sin [(\alpha+\beta) x]}{2(\alpha+\beta)} \quad \alpha \neq \beta\nonumber$ Noting that $\alpha=\dfrac{n \pi}{a}$ and $\beta=\dfrac{m \pi}{a}$, substitution into the above relationship yields \begin{aligned} \int_{0}^{a} \psi_{n}(x) \cdot \psi_{m}(x) d x &=\left[\dfrac{\sin \left[\dfrac{\pi}{a}(n-m) x\right]}{2\left(\dfrac{\pi}{a}(n-m)\right)}-\dfrac{\sin \left[\dfrac{\pi}{a}(n+m) x\right]}{2\left(\dfrac{\pi}{a}(n+m)\right)}\right]_{0}^{a} \\ &=\left[\dfrac{\sin [\pi(n-m)]}{2\left(\dfrac{\pi}{a}(n-m)\right)}-\dfrac{\sin [\pi(n+m)]}{2\left(\dfrac{\pi}{a}(n+m)\right)}-0+0\right] \end{aligned}\nonumber Since $n$ and $m$ are integers, $n-m$ and $n+m$ must also be integers, and because the sine of an integral multiple of $\pi$ is always zero, it is easy to show that this expression vanishes for any $n \neq m$. Normalization When $n=m$ the integral becomes $\int_{0}^{a}\left[\psi_{n}(x)\right]^{2} d x=\dfrac{2}{a} \int_{0}^{a} \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x\nonumber$ which can be evaluated using the result from a table of integrals $\int \sin ^{2}(\alpha x) d x=\dfrac{x}{2}-\dfrac{\sin (2 \alpha x)}{4 \alpha}\nonumber$ So making the substitution $\alpha=\dfrac{n \pi}{a}$ \begin{aligned} &\dfrac{2}{a} \int_{0}^{a} \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x=\dfrac{2}{a}\left[\dfrac{x}{2}-\dfrac{\sin \left(\dfrac{2 n \pi x}{a}\right)}{4(n \pi / a)}\right]_{0}^{a} \\ &=\dfrac{2}{a}\left[\dfrac{a}{2}-0-0+0\right] \\ &=1 \end{aligned}\nonumber This result shouldn’t be surprising, since the value $A=\sqrt{\dfrac{2}{a}}$ was chosen to ensure the result! Specifically, it was chosen so as to normalize the wavefunctions. Example $1$ Show that the wavefunction $\Psi(x)=\sqrt{\dfrac{30}{a^{5}}} \cdot x(a-x)\nonumber$ is normalized for a particle in a box of length $a$. 
Solution The wavefunction is normalized if $\int_{0}^{a} \Psi(x) \Psi(x) d x=1\nonumber$ This can be demonstrated by plugging the wavefunction into the relationship and testing to see if it is true: \begin{aligned} \int_{0}^{a} \sqrt{\dfrac{30}{a^{5}}} \cdot x(a-x) \sqrt{\dfrac{30}{a^{5}}} \cdot x(a-x) d x &=\dfrac{30}{a^{5}} \int_{0}^{a} x^{2}\left(a^{2}-2 a x+x^{2}\right) d x \\ &=\dfrac{30}{a^{5}} \int_{0}^{a}\left(a^{2} x^{2}-2 a x^{3}+x^{4}\right) d x \\ &=\dfrac{30}{a^{5}}\left[\dfrac{a^{2} x^{3}}{3}-\dfrac{2 a x^{4}}{4}+\dfrac{x^{5}}{5}\right]_{0}^{a} \\ &=\dfrac{30}{a^{5}}\left(\dfrac{a^{5}}{3}-\dfrac{a^{5}}{2}+\dfrac{a^{5}}{5}-0+0-0\right) \\ &=\dfrac{30}{a^{5}}\left(\dfrac{10 a^{5}}{30}-\dfrac{15 a^{5}}{30}+\dfrac{6 a^{5}}{30}\right) \\ &=\dfrac{30}{a^5}\left(\dfrac{a^5}{30}\right) \\ &=1 \end{aligned} Therefore the wavefunction is normalized!
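Both the Example 1 integral and the orthogonality property from the preceding section can be spot-checked numerically. This is a sketch using a hand-rolled Simpson's-rule quadrature, with the box length set to 1 for convenience (an illustrative choice, not from the text).

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

A = 1.0  # box length (illustrative)

def psi(n, x):
    """Particle-in-a-box eigenfunction sqrt(2/a) sin(n pi x / a)."""
    return math.sqrt(2 / A) * math.sin(n * math.pi * x / A)

def phi(x):
    """The Example 1 wavefunction, sqrt(30/a^5) x (a - x)."""
    return math.sqrt(30 / A**5) * x * (A - x)

print(simpson(lambda x: phi(x)**2, 0, A))              # ~1: normalized
print(simpson(lambda x: psi(1, x) * psi(2, x), 0, A))  # ~0: orthogonal
```

The same quadrature confirms $\int_0^a \psi_n^2\,dx = 1$ for any $n$, as the normalization constant guarantees.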
Quantum mechanics is a model that can predict many properties of systems. The prediction of these properties can be made by examining the results of operations on the wavefunctions describing systems. In order to develop a quantum mechanical "toolbox", we utilize the results of the Particle in a Box model. Expectation Values The fourth postulate of quantum mechanics gives a recipe for calculating the expectation value of a particular measurement. The expectation value is a prediction of the average value measured based on an infinite number of measurements of the property. The Expectation value of Energy $\langle E \rangle$ One of the most useful properties to know for a system is its energy. As chemists, the energy is what is most useful to understand for atoms and molecules, as all of the thermodynamics of the system are determined by the energies of the atoms and molecules in the system. For illustrative convenience, consider a system that is prepared such that its wavefunction is given by one of the eigenfunctions of the Hamiltonian. $\psi_{n}=\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right)\nonumber$ These functions satisfy the important relationship $\hat{H} \psi_{n}=E_{n} \psi_{n}\nonumber$ This greatly simplifies the calculation of the expectation value! To get the expectation value of $E$, we need simply the following expression: $\langle E\rangle=\int \psi_{n}^{*} \hat{H} \psi_{n} d \tau\nonumber$ Making the substitution from above yields: \begin{aligned} \langle E\rangle &=\int \psi_{n}^{*} \hat{H} \psi_{n} d \tau \\ &=\int \psi_{n}^{*} E_{n} \psi_{n} d \tau \\ &=E_{n} \int \psi_{n}^{*} \psi_{n} d \tau \\ &=E_{n} \end{aligned}\nonumber In fact, it is easy to prove that for a system whose wavefunction is an eigenfunction of any operator, the expectation value for the property corresponding to that operator is the eigenvalue of that operator. The proof for this is almost trivial! 
Proof: For a system prepared in a state such that its wavefunction is given by $\psi$, and $\psi$ satisfies the relationship $\hat{A} \psi=a \psi\nonumber$ the expectation value for the property associated with the operator $\hat{A}$ will be the eigenvalue $a$. \begin{aligned} \langle a\rangle &=\int \psi^{*} \hat{A} \psi d \tau \\ &=\int \psi^{*} a \psi d \tau \\ &=a \int \psi^{*} \psi d \tau \\ &=a \end{aligned}\nonumber since the wavefunction $\psi$ is normalized. The Expectation value of position $\langle x \rangle$ To illustrate the concept, let’s calculate $\langle x\rangle$, the expectation value of position, for a particle in a box that is in the $n^{\text {th }}$ eigenstate. \begin{aligned} \langle x\rangle &=\int_{0}^{a} \psi_{n}(x) \cdot x \cdot \psi_{n}(x) d x \\ &=\dfrac{2}{a} \int_{0}^{a} x \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x \end{aligned}\nonumber Again, it helps to find the result for the integral in a table of integrals. $\int x \sin ^{2}(\alpha x) d x=\dfrac{x^{2}}{4}-\dfrac{x \sin (2 \alpha x)}{4 \alpha}-\dfrac{\cos (2 \alpha x)}{8 \alpha^{2}}\nonumber$ Substitution yields \begin{aligned} \dfrac{2}{a} \int_{0}^{a} x \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x &=\dfrac{2}{a}\left[\dfrac{x^{2}}{4}-\dfrac{x \sin \left(2 \dfrac{n \pi}{a} x\right)}{4 \dfrac{n \pi}{a}}-\dfrac{\cos \left(2 \dfrac{n \pi}{a} x\right)}{8\left(\dfrac{n \pi}{a}\right)^{2}}\right]_{0}^{a} \\ &=\dfrac{2}{a}\left[\dfrac{a^{2}}{4}-0-\dfrac{1}{8\left(\dfrac{n \pi}{a}\right)^{2}}-0+0+\dfrac{1}{8\left(\dfrac{n \pi}{a}\right)^{2}}\right] \\ &=\dfrac{a}{2} \end{aligned}\nonumber This result is interesting for two reasons. First off, $\frac{a}{2}$ is the middle of the box. So the result implies that we might find the particle on the left side of the box half the time and the right side of the box the other half. Averaging all of the results yields a mean value at the middle of the box. 
Secondly, the result is independent of the quantum number $n$, which means that we get the same result irrespective of the quantum state the system is in. This is a remarkable result, really (well, not really, but it is fun to claim it is), since it means that the $n=2$ eigenstate, which has a node at the center of the box (meaning we will never measure the particle to be there), still has an expectation value of position centered in the box. This should really drive home the idea that an expectation value is an average. We need never measure the particle to be at the position indicated by the expectation value. The average of the measured positions must, instead, be at the position indicated by the expectation value. The Expectation Value of Momentum $\langle p\rangle$ It is also easy to calculate the expectation value for momentum, $\langle p \rangle$. In fact, it is almost trivially easy! Based on the fourth postulate, $\langle p\rangle$ is found from the expression \begin{aligned} \langle p\rangle &=\int_{0}^{a} \psi \hat{p} \psi d x \\ &=-i \hbar \int_{0}^{a} \psi \dfrac{d}{d x} \psi d x \end{aligned}\nonumber At this point it is convenient to make a substitution. If we let $u=\psi$ then $d u=\dfrac{d \psi}{d x} d x$. Now the problem can be restated in terms of $u$. But since we have changed from $x$ to $u$, we must change the limits of integration to the values of $u$ at the endpoints. As it turns out, $\psi(0)$ and $\psi(a)$ are both 0! \begin{aligned} \langle p\rangle &=-i \hbar \int_{0}^{0} u d u \\ &=-i \hbar\left[\dfrac{u^{2}}{2}\right]_{0}^{0} \\ &=0 \end{aligned}\nonumber Wow! The expectation value of momentum is zero! What makes this so remarkable is that the particle is always moving, since it has a non-zero kinetic energy. (How can this be?) Keeping in mind that the expectation value is the average of a theoretically infinite number of measurements, and that momentum is a vector quantity, it is easy to see why the average is zero. 
Half of the time, the momentum is measured in the positive $x$ direction and the other half in the negative $x$ direction. These cancel one another and the average result is zero. Variance Quantum mechanics provides enough information to also calculate the variance of a theoretical infinite set of measurements. Based on normal statistics, the variance of any measured value can be calculated from $\sigma_{a}^{2}=\left\langle a^{2}\right\rangle-\langle a\rangle^{2}\nonumber$ That result does not come from quantum mechanics, by the way. Quantum mechanics just tells us how to calculate the expectation values. The above expression for variance can be applied to any set of measurements of any property on any system. So, to calculate $\sigma_{x}^{2}$ and $\sigma_{p}^{2}$ it is simply necessary to know $\langle x\rangle$, $\left\langle x^{2}\right\rangle$, $\langle p\rangle$ and $\left\langle p^{2}\right\rangle$. Two of those quantities we already know from the previous sections. The variance in $x\left(\sigma_{x}^{2}\right)$ To calculate $\left\langle x^{2}\right\rangle$, we set up the usual expression. 
\begin{aligned} \left\langle x^{2}\right\rangle &=\int_{0}^{a} \psi x^{2} \psi d x \\ &=\dfrac{2}{a} \int_{0}^{a} x^{2} \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x \end{aligned}\nonumber From a table of integrals, it can be found that $\int x^{2} \sin ^{2}(\alpha x) d x=\dfrac{x^{3}}{6}-\left(\dfrac{x^{2}}{4 \alpha}-\dfrac{1}{8 \alpha^{3}}\right) \sin (2 \alpha x)-\dfrac{x \cos (2 \alpha x)}{4 \alpha^{2}}\nonumber$ Letting $\alpha=\dfrac{n \pi}{a}$ and noting that $\cos (2 n \pi)=1$ and $\sin (2 n \pi)=0$ for any integer value of $n$, we see that \begin{align*} \left\langle x^{2}\right\rangle &=\dfrac{2}{a}\left[\dfrac{x^{3}}{6}-\left(\dfrac{a x^{2}}{4 n \pi}-\dfrac{a^{3}}{8 n^{3} \pi^{3}}\right) \sin \left(\dfrac{2 n \pi x}{a}\right)-\dfrac{a^{2} x \cos \left(\dfrac{2 n \pi x}{a}\right)}{4 n^{2} \pi^{2}}\right]_{0}^{a} \\[4pt] &=\dfrac{2}{a}\left(\dfrac{a^{3}}{6}-0-\dfrac{a^{3}}{4 n^{2} \pi^{2}}-0+0+0\right) \\[4pt] &= \dfrac{a^{2}}{3}-\dfrac{a^{2}}{2 n^{2} \pi^{2}} \end{align*} Notice that this result has units of length squared (due to the $a^{2}$ dependence), which is to be expected for $\left\langle x^{2}\right\rangle$. Based on these results, it is easy to calculate the variance, and thus the standard deviation, of the theoretical infinite set of measurements of position. \begin{aligned} \sigma_{x}^{2} &=\left\langle x^{2}\right\rangle-\langle x\rangle^{2} \\ &=\left(\dfrac{a^{2}}{3}-\dfrac{a^{2}}{2 n^{2} \pi^{2}}\right)-\left(\dfrac{a}{2}\right)^{2} \\ &=\dfrac{\left(8 n^{2} \pi^{2}-12-6 n^{2} \pi^{2}\right) a^{2}}{24 n^{2} \pi^{2}} \\ &=\dfrac{\left(n^{2} \pi^{2}-6\right) a^{2}}{12 n^{2} \pi^{2}} \end{aligned}\nonumber The variance in $p\left(\sigma_{p}^{2}\right)$ The relationship between energy and momentum simplifies the calculation of $\left\langle p^{2}\right\rangle$ greatly. 
Recall that $T=\dfrac{p^{2}}{2 m}\nonumber$ And since all of the energy in this system is kinetic energy, it follows that $\left\langle p^{2}\right\rangle=2 m\langle H\rangle\nonumber$ Further, $\langle H\rangle$ (or $\langle E\rangle$) is simply the energy expression, since the wavefunctions are eigenfunctions of the Hamiltonian! $\left(\hat{H} \psi_{n}=E_{n} \psi_{n}\right)$ \begin{aligned} \langle H\rangle &=\int_{0}^{a} \psi_{n} \hat{H} \psi_{n} d x \\ &=\int_{0}^{a} \psi_{n} E_{n} \psi_{n} d x \\ &=E_{n} \int_{0}^{a} \psi_{n} \psi_{n} d x \\ &=E_{n} \end{aligned}\nonumber Basically, this means that the expectation value for energy for a system in an eigenstate is always given by the eigenvalue of the Hamiltonian. In a later section we’ll discuss the expectation value of energy when the system is not in an eigenstate. Another important aspect of the above relationship is how the integral simply went away. It didn’t, really. It’s just that the wavefunctions are normalized, so the integral is unity. Recall that for orthonormalized wavefunctions $\int \psi_{i}^{*} \psi_{j} d \tau=\delta_{i j}\nonumber$ which is a property of which we will make great use throughout our development of quantum theory. So from the result for the expectation value for energy, it follows that \begin{aligned} \left\langle p^{2}\right\rangle &=2 m E \\ &=2 m\left(\dfrac{n^{2} h^{2}}{8 m a^{2}}\right) \\ &=\dfrac{n^{2} h^{2}}{4 a^{2}} \end{aligned}\nonumber Note that the variance of the position measurement increases with increasing $n$, approaching $a^{2}/12$ in the limit of large $n$. For momentum, the variance is given by \begin{aligned} \sigma_{p}^{2} &=\left\langle p^{2}\right\rangle-\langle p\rangle^{2} \\ &=\left(\dfrac{n^{2} h^{2}}{4 a^{2}}\right)-(0)^{2} \\ &=\dfrac{n^{2} h^{2}}{4 a^{2}} \end{aligned}\nonumber The variance of momentum measurements also increases with increasing $n$! We shall place these results on hold for now, and revisit them when we look at the Heisenberg Uncertainty Principle. 
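Before moving on, the position results above can be spot-checked numerically. This is a sketch, in units where $a = 1$ (an illustrative convention), comparing quadrature values of $\langle x\rangle$ and $\sigma_x^2$ against the closed forms derived above.

```python
import math

# Spot-check <x> = a/2 and sigma_x^2 = (n^2 pi^2 - 6) a^2 / (12 n^2 pi^2)
# by direct numerical integration, with a = 1.
def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def psi(n, x):
    return math.sqrt(2) * math.sin(n * math.pi * x)  # box length a = 1

for n in (1, 2, 3):
    x_avg = simpson(lambda x: psi(n, x) * x * psi(n, x), 0, 1)
    x2_avg = simpson(lambda x: psi(n, x) * x**2 * psi(n, x), 0, 1)
    var = x2_avg - x_avg**2
    closed = (n**2 * math.pi**2 - 6) / (12 * n**2 * math.pi**2)
    print(n, round(x_avg, 6), round(var, 8), round(closed, 8))
```

The printed variance grows toward $1/12$ as $n$ increases, while $\langle x\rangle$ stays pinned at the center of the box.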
But in order to make sense of that rather important consequence of quantum theory, we must first examine commutators and the relationship between pairs of operators, as this will have a profound impact on what can be known (or measured) about their associated physical observables. The Heisenberg Uncertainty Principle One of the more interesting (and controversial!) consequences of the quantum theory can be seen in the Heisenberg Uncertainty Principle. Before examining the Heisenberg Uncertainty Principle, it is necessary to examine the relationship that can exist between a pair of quantum mechanical operators. In order to do this, we define an operator for operators, called the commutator. The Commutator For a pair of operators $\hat{A}$ and $\hat{B}$, the commutator $[\hat{A}, \hat{B}]$ is defined as follows $[\hat{A}, \hat{B}] f(x)=\hat{A}(\hat{B} f(x))-\hat{B}(\hat{A} f(x))\nonumber$ If the end result of the commutator operating on $f(x)$ is zero, then the two operations are said to commute. This means that for the particular pair of operations, it does not matter in which order they operate on the function - the same result is obtained either way. Relationships for Commutators There are a number of important mathematical relationships for commutators. First, every operator commutes with itself, and with any power of itself. \begin{aligned} &{[\hat{\mathrm{A}}, \hat{\mathrm{A}}]=0} \\ &{\left[\hat{\mathrm{A}}, \hat{\mathrm{A}}^{\mathrm{n}}\right]=0} \end{aligned}\nonumber Second, given the definition of the commutator relationship, it should be fairly obvious that $[\hat{\mathrm{A}}, \hat{\mathrm{B}}]=-[\hat{\mathrm{B}}, \hat{\mathrm{A}}]\nonumber$ Also, there is a linearity relationship for commutators (of linear operators). $[k \hat{A}, \hat{B}]=k[\hat{A}, \hat{B}]\nonumber$ Theorem $1$ Show that if two operators have a common set of eigenfunctions, the operators must commute. 
Solution: Consider operators $\hat{A}$ and $\hat{B}$ that have the same set of eigenfunctions $\phi_{n}$ such that $\hat{A} \phi_{n}=a_{n} \phi_{n} \quad \text { and } \quad \hat{B} \phi_{n}=b_{n} \phi_{n}\nonumber$ For any arbitrary function $\Phi$ that can be expressed as a linear combination of the $\phi_{n}$ $\Phi=\sum_{n} c_{n} \phi_{n}\nonumber$ the commutator of $\hat{A}$ and $\hat{B}$ operating on $\Phi$ will give the following result. \begin{aligned} {[\hat{A}, \hat{B}] \Phi } &=[\hat{A}, \hat{B}] \sum_{n} c_{n} \phi_{n} \\ &=\hat{A}\left(\hat{B} \sum_{n} c_{n} \phi_{n}\right)-\hat{B}\left(\hat{A} \sum_{n} c_{n} \phi_{n}\right) \end{aligned}\nonumber And since $\hat{A}$ and $\hat{B}$ are linear (as all quantum mechanical operators must be) \begin{aligned} \hat{A}\left(\hat{B} \sum_{n} c_{n} \phi_{n}\right)-\hat{B}\left(\hat{A} \sum_{n} c_{n} \phi_{n}\right) &=\hat{A}\left(\sum_{n} c_{n} \hat{B} \phi_{n}\right)-\hat{B}\left(\sum_{n} c_{n} \hat{A} \phi_{n}\right) \\ &=\hat{A}\left(\sum_{n} c_{n} b_{n} \phi_{n}\right)-\hat{B}\left(\sum_{n} c_{n} a_{n} \phi_{n}\right) \\ &=\sum_{n} c_{n} b_{n} \hat{A} \phi_{n}-\sum_{n} c_{n} a_{n} \hat{B} \phi_{n} \\ &=\sum_{n} c_{n} b_{n} a_{n} \phi_{n}-\sum_{n} c_{n} a_{n} b_{n} \phi_{n} \\ &=0 \end{aligned}\nonumber And so it is clear that the operators $\hat{A}$ and $\hat{B}$ must commute. When Operators do not Commute An example of operators that do not commute are $\hat{x}$ and $\hat{p}$. The commutator of these two operators is evaluated below, using a well-behaved function $f$. \begin{aligned} {[\hat{x}, \hat{p}] f } &=\hat{x}(\hat{p} f)-\hat{p}(\hat{x} f) \\ &=x \cdot\left(-i \hbar \dfrac{d}{d x} f\right)+i \hbar \dfrac{d}{d x}(x \cdot f) \end{aligned}\nonumber The second term requires the product rule to evaluate. 
Recall that $d(u v)=v d u+u d v\nonumber$ And so the above expression can be simplified by noting that $\dfrac{d}{d x}(x \cdot f)=f \dfrac{d}{d x} x+x \dfrac{d}{d x} f\nonumber$ And so \begin{aligned} {[\hat{x}, \hat{p}] f } &=x \cdot\left(-i \hbar \dfrac{d}{d x} f\right)+i \hbar \dfrac{d}{d x}(x \cdot f) \\ &=\left(-i \hbar \cdot x \cdot \dfrac{d}{d x} f\right)+i \hbar\left(f \dfrac{d}{d x} x+x \dfrac{d}{d x} f\right) \\ &=-i \hbar \cdot x \cdot \dfrac{d}{d x} f+i \hbar f+i \hbar \cdot x \cdot \dfrac{d}{d x} f \\ &=i \hbar f \end{aligned}\nonumber So the final result of the operation is to multiply the function by $i \hbar$. Another way to state this is to note $[\hat{x}, \hat{p}]=i \hbar\nonumber$ The Heisenberg Uncertainty Principle Among the many contributions that Werner Heisenberg made to the development of quantum theory, one of the most important was the discovery of the uncertainty principle. Heisenberg’s observation was based on the interference of electron beams predicted by de Broglie. The uncertainty principle states that for the observables corresponding to a pair of operators $\hat{A}$ and $\hat{B}$, the following result must hold $\sigma_{a}^{2} \sigma_{b}^{2} \geq-\dfrac{1}{4}\left(\int \psi^{*}[\hat{A}, \hat{B}] \psi d \tau\right)^{2}\nonumber$ The most popularly taught statement of the uncertainty principle is based on the uncertainty product for position and momentum. $\Delta x \Delta p \geq \dfrac{\hbar}{2}\nonumber$ This result is easy to derive from the above expression. 
\begin{aligned} \sigma_{x}^{2} \sigma_{p}^{2} & \geq-\dfrac{1}{4}\left(\int \psi^{*}[\hat{x}, \hat{p}] \psi d \tau\right)^{2} \\ & \geq-\dfrac{1}{4}\left(\int \psi^{*}(i \hbar \psi) d \tau\right)^{2} \\ & \geq-\dfrac{1}{4}(i \hbar)^{2}\left(\int \psi^{*} \psi d \tau\right)^{2} \\ & \geq-\dfrac{1}{4}(i \hbar)^{2} \\ & \geq \dfrac{\hbar^{2}}{4} \\ \sigma_{x} \sigma_{p} & \geq \dfrac{\hbar}{2} \end{aligned}\nonumber As we saw in a previous section, we have a means of evaluating $\sigma_{x}$ and $\sigma_{p}$ to verify this relationship for a given state of a particle in a box. (This evaluation is left as an exercise.)
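The evaluation left as an exercise can be sketched numerically using the closed forms derived earlier, $\sigma_x^2 = (n^2\pi^2-6)a^2/(12 n^2\pi^2)$ and $\sigma_p^2 = n^2 h^2/(4a^2)$ (so $\sigma_p = n\pi\hbar/a$), working in units where $\hbar = a = 1$ for convenience.

```python
import math

# Uncertainty product for particle-in-a-box eigenstates, using the
# variances derived earlier; hbar = a = 1.
def sigma_x(n):
    return math.sqrt((n**2 * math.pi**2 - 6) / (12 * n**2 * math.pi**2))

def sigma_p(n):
    return n * math.pi  # sigma_p = n pi hbar / a, since h = 2 pi hbar

for n in (1, 2, 3, 10):
    product = sigma_x(n) * sigma_p(n)
    assert product >= 0.5   # the Heisenberg bound hbar/2 in these units
    print(n, round(product, 4))
```

The $n = 1$ state already exceeds the bound (the product is about $0.568\,\hbar$), and the product grows with $n$, consistent with both variances increasing.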
As stated previously, a system need not be in a state that is described by a single eigenfunction of the Hamiltonian. A system can be prepared such that its wavefunction is any well-behaved, single-valued, smooth function that vanishes at the endpoints. When the wavefunction is not an eigenfunction of the Hamiltonian, the Superposition Principle can be used to greatly simplify how we work with the wavefunction. This is true because the so-called normal solutions $\left(\psi_{n}(x)\right)$ to the Schrödinger Equation $\widehat{H} \psi_{n}(x)=E_{n} \psi_{n}(x)\nonumber$ using the language of linear algebra, span the space of well-behaved functions that can describe the physics of the particle. That means that any arbitrary function that is 1) continuous, and 2) obeys the boundary conditions, can be expressed as a linear combination of these normal solutions: $\Phi(x)=\sum_{n} c_{n} \psi_{n}(x)\nonumber$ where the coefficients $c_{n}$ are calculated from the overlap (Fourier-type) integral shown below. $c_{n}=\int_{-\infty}^{\infty} \Phi(x) \psi_{n}(x) d x\nonumber$ Superposition This description also has a number of other important ramifications. Consider a particle in a box system prepared so that the wavefunction is given by $\Psi(x)=\dfrac{1}{\sqrt{2}} \psi_{1}(x)+\dfrac{1}{\sqrt{2}} \psi_{2}(x)\nonumber$ where $\psi_{n}(x)=\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right)\nonumber$ The first question one might ask is, "Is the wavefunction $\Psi(x)$ normalized?" Well, let’s see! 
\begin{aligned} \int_{0}^{a}(\Psi(x))^{2} d x &=\int_{0}^{a}\left[\left(\dfrac{1}{\sqrt{2}}\right) \psi_{1}+\left(\dfrac{1}{\sqrt{2}}\right) \psi_{2}\right]^{2} d x \\ &=\int_{0}^{a}\left(\dfrac{1}{2} \psi_{1} \psi_{1}+\psi_{1} \psi_{2}+\dfrac{1}{2} \psi_{2} \psi_{2}\right) d x \\ &=\dfrac{1}{2} \int_{0}^{a} \psi_{1} \psi_{1} d x+\int_{0}^{a} \psi_{1} \psi_{2} d x+\dfrac{1}{2} \int_{0}^{a} \psi_{2} \psi_{2} d x \\ &=\dfrac{1}{2}(1)+(0)+\dfrac{1}{2}(1) \\ &=1 \end{aligned}\nonumber (Notice how the property $\int \psi_{i} \psi_{j} d \tau=\delta_{i j}$ has been used to simplify the problem, by making the integral of the cross product in the middle vanish, and the integrals of the first and third terms go to unity.) So the wavefunction is normalized. Now, let’s evaluate the expectation value of energy $\langle \mathrm{E} \rangle$. \begin{aligned} \langle E\rangle &=\int_{0}^{a} \Psi \hat{H} \Psi d x \\ &=\int_{0}^{a}\left(\dfrac{1}{\sqrt{2}} \psi_{1}+\dfrac{1}{\sqrt{2}} \psi_{2}\right) \hat{H}\left(\dfrac{1}{\sqrt{2}} \psi_{1}+\dfrac{1}{\sqrt{2}} \psi_{2}\right) d x \\ &=\int_{0}^{a}\left(\dfrac{1}{\sqrt{2}} \psi_{1}+\dfrac{1}{\sqrt{2}} \psi_{2}\right)\left(\dfrac{E_{1}}{\sqrt{2}} \psi_{1}+\dfrac{E_{2}}{\sqrt{2}} \psi_{2}\right) d x \\ &=\int_{0}^{a}\left(\dfrac{E_{1}}{2} \psi_{1} \psi_{1}+\dfrac{E_{1}}{2} \psi_{2} \psi_{1}+\dfrac{E_{2}}{2} \psi_{1} \psi_{2}+\dfrac{E_{2}}{2} \psi_{2} \psi_{2}\right) d x \\ &=\dfrac{E_{1}}{2} \int_{0}^{a} \psi_{1} \psi_{1} d x+\dfrac{E_{1}}{2} \int_{0}^{a} \psi_{2} \psi_{1} d x+\dfrac{E_{2}}{2} \int_{0}^{a} \psi_{1} \psi_{2} d x+\dfrac{E_{2}}{2} \int_{0}^{a} \psi_{2} \psi_{2} d x \\ &=\dfrac{E_{1}}{2}+0+0+\dfrac{E_{2}}{2} \end{aligned}\nonumber So the expectation value is given by the average of $E_{1}$ and $E_{2}$. This result is only possible if half of the time the energy is measured, the observed value is $E_{1}$ and the other half $E_{2}$. 
In other words, the probability of measuring $E_{1}$ is $\frac{1}{2}$ and that of $E_{2}$ is $\frac{1}{2}$. It is also important to note that these probabilities are given by the Fourier coefficients: $c_{1}=1 / \sqrt{2}, c_{2}=1 / \sqrt{2} \text { and } c_{n}=0 \text { for all other } n\nonumber$ It can be concluded that the probability of measuring $E_{n}$ is given by $\left|c_{n}\right|^{2}$. $P\left(E_{n}\right)=\left|c_{n}\right|^{2}\nonumber$ Completeness Imagine the following scenario. A quantum mechanical particle of mass $m$ in a one-dimensional box of length $a$ is prepared such that its wavefunction is given by $\psi_{1}(x)$. Instantaneously, the length of the box increases to $2 a$. The particle is no longer in an eigenstate of the new system. Rather, its wavefunction will look like the function depicted in the MathCad worksheet. The function can be described as a superposition of wavefunctions that are eigenfunctions of the Hamiltonian that reflects the new length of the box. A MathCad worksheet that reflects this expansion is given on the next page. The more terms included in the expansion, the better the representation of the wavefunction. The above problem is analogous to what happens when an atom undergoes radioactive decay by something such as $\beta$-particle emission from the nucleus. In that case, the nuclear charge suddenly changes (changing the potential energy function and thus the Hamiltonian). The change happens effectively instantaneously compared to the time required for the atom to react. The atom suddenly finds itself in a non-eigenstate, the nature of which will govern how the atom changes in time to respond to the nuclear decay. The superposition of eigenfunctions of the new Hamiltonian will give a description of the atom immediately following the decay, and the overall wavefunction will evolve in time based on how it is predicted to do so according to the fifth postulate. 
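The box-doubling scenario can be sketched numerically in place of the MathCad worksheet (which is not reproduced here): expand the old ground state, which vanishes for $x > a$, in the eigenfunctions of the doubled box. The box length and the number of expansion terms below are illustrative choices.

```python
import math

# A particle starts in the ground state of a box of length a; the box then
# instantaneously doubles to 2a.  Expand the old wavefunction in the new
# eigenbasis: c_n = integral of psi_old * psi_n(new) over [0, 2a].
# Units: a = 1, so the new box has length 2.
def simpson(f, lo, hi, n=4000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def psi_old(x):
    # Ground state of the original box; identically zero beyond x = 1.
    return math.sqrt(2) * math.sin(math.pi * x) if x <= 1 else 0.0

def psi_new(n, x):
    # Eigenfunctions of the doubled box, sqrt(2/L) sin(n pi x / L) with L = 2.
    return math.sqrt(2 / 2) * math.sin(n * math.pi * x / 2)

c = [simpson(lambda x: psi_old(x) * psi_new(n, x), 0, 2) for n in range(1, 51)]
print(round(c[1], 4))                     # c_2 = 1/sqrt(2): the largest single overlap
print(round(sum(ci**2 for ci in c), 4))   # ~1, since the new eigenbasis is complete
```

The sum of $|c_n|^2$ approaching unity is the completeness property in action: the old state is fully accounted for by the new eigenfunctions.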
The superposition theorem allows for a complete description of a wavefunction according to the needs of the quantum theory - even if the wavefunction being described by a superposition of states is not an eigenfunction of the Hamiltonian! (Now how much would you pay?)
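As a small numerical illustration of the probability rule $P(E_n) = |c_n|^2$ from the preceding section, the two-state superposition discussed there gives equal odds of measuring $E_1$ or $E_2$. Energies below are in units of $h^2/8ma^2$ (so $E_n = n^2$), an illustrative convention.

```python
import math

# For Psi = (psi_1 + psi_2)/sqrt(2), P(E_n) = |c_n|^2 gives a 50/50 chance of
# measuring E_1 or E_2, so <E> = (E_1 + E_2)/2.
c = {1: 1 / math.sqrt(2), 2: 1 / math.sqrt(2)}  # all other c_n are zero

probs = {n: cn**2 for n, cn in c.items()}
E_avg = sum(p * n**2 for n, p in probs.items())  # E_n = n^2 in these units

print(probs)   # both probabilities ~0.5
print(E_avg)   # ~2.5, i.e. (1 + 4)/2
```

Note that no single measurement ever returns 2.5; the expectation value is the average over many measurements, exactly as with $\langle x\rangle$ earlier.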
As luck would have it, not all quantum mechanical problems are expressible in terms of a single dimension. In fact, most problems will require multiple "dimensions", as they will involve not only electronic state descriptions, but also vibrational and rotational descriptions as well. In this section, we will discuss how variables are separated in multidimensional problems, using a particle in a three-dimensional box as an example. The Particle in a Rectangular Box Consider a particle of mass $m$ constrained to a three-dimensional rectangular box with sides of lengths $a, b$ and $c$ in the $x, y$ and $z$ directions respectively. For this problem, the Hamiltonian will look as follows \begin{aligned} \hat{H} &=-\dfrac{\hbar^{2}}{2 m} \nabla^{2} \\ &=-\dfrac{\hbar^{2}}{2 m}\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right) \end{aligned}\nonumber One important thing to notice is that this Hamiltonian can be written as a sum of three separate operators, each affecting only a single variable. \begin{aligned} \hat{H} &=-\dfrac{\hbar^{2}}{2 m} \dfrac{\partial^{2}}{\partial x^{2}}-\dfrac{\hbar^{2}}{2 m} \dfrac{\partial^{2}}{\partial y^{2}}-\dfrac{\hbar^{2}}{2 m} \dfrac{\partial^{2}}{\partial z^{2}} \\ &=\hat{H}_{x}+\hat{H}_{y}+\hat{H}_{z} \end{aligned}\nonumber When the Hamiltonian takes a form like this, it will also be possible to express the eigenfunctions as a product of functions. Let’s give it a try. 
The time independent Schrödinger equation looks as follows $\begin{array}{r} \hat{H} \Psi(x, y, z)=E \Psi(x, y, z) \\ -\dfrac{\hbar^{2}}{2 m} \nabla^{2} \Psi(x, y, z)=E \Psi(x, y, z) \\ -\dfrac{\hbar^{2}}{2 m}\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right) \Psi(x, y, z)=E \Psi(x, y, z) \end{array}\nonumber$ To simplify things, let’s gather variables and make the substitution $-\dfrac{2 m E}{\hbar^{2}}=-k^{2}\nonumber$ To proceed, we make an assumption that the wavefunction can be expressed as a product of functions. $\Psi(x, y, z)=X(x) Y(y) Z(z)\nonumber$ The wave equation then becomes \begin{aligned} &\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right) X(x) Y(y) Z(z)=-k^{2} X(x) Y(y) Z(z) \\ &Y(y) Z(z) \dfrac{d^{2}}{d x^{2}} X(x)+X(x) Z(z) \dfrac{d^{2}}{d y^{2}} Y(y)+X(x) Y(y) \dfrac{d^{2}}{d z^{2}} Z(z)=-k^{2} X(x) Y(y) Z(z) \end{aligned}\nonumber Dividing both sides by $X(x) Y(y) Z(z)$ yields $\dfrac{1}{X(x)} \dfrac{d^{2}}{d x^{2}} X(x)+\dfrac{1}{Y(y)} \dfrac{d^{2}}{d y^{2}} Y(y)+\dfrac{1}{Z(z)} \dfrac{d^{2}}{d z^{2}} Z(z)=-k^{2}\nonumber$ Since each of these terms depends on a different variable, the only way the equation can hold for all $(x, y, z)$ is if each term on the left is equal to a constant. These constants are chosen in a convenient way so as to make the solution of the problem simple. So again, to proceed, we make a substitution. \begin{aligned} &\dfrac{1}{X(x)} \dfrac{d^{2}}{d x^{2}} X(x)=-k_{x}^{2} \\ &\dfrac{1}{Y(y)} \dfrac{d^{2}}{d y^{2}} Y(y)=-k_{y}^{2} \\ &\dfrac{1}{Z(z)} \dfrac{d^{2}}{d z^{2}} Z(z)=-k_{z}^{2} \end{aligned}\nonumber where $-k_{x}^{2}-k_{y}^{2}-k_{z}^{2}=-k^{2}\nonumber$ These substitutions allow us to separate the problem into three problems in single variables. Further, we know what the solutions to these equations are!
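Each separated equation is the familiar one-dimensional particle-in-a-box equation, and the sine solutions quoted next can be verified symbolically. A minimal SymPy sketch for the $x$ equation, with $k_x = n_x \pi / a$ chosen so the boundary conditions $X(0) = X(a) = 0$ are satisfied:

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n_x = sp.symbols('n_x', positive=True, integer=True)
k_x = n_x * sp.pi / a                        # chosen so that X(0) = X(a) = 0

X = sp.sqrt(2 / a) * sp.sin(k_x * x)         # normalized trial solution
residual = sp.diff(X, x, 2) + k_x**2 * X     # X'' + k_x^2 X should vanish identically
print(sp.simplify(residual))                 # 0
```

The $y$ and $z$ equations work identically with $k_y = n_y \pi / b$ and $k_z = n_z \pi / c$.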
\begin{aligned} &X(x)=\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n_{x} \pi x}{a}\right) \quad n_{x}=1,2,3, \ldots \\ &Y(y)=\sqrt{\dfrac{2}{b}} \sin \left(\dfrac{n_{y} \pi y}{b}\right) \quad n_{y}=1,2,3, \ldots \\ &Z(z)=\sqrt{\dfrac{2}{c}} \sin \left(\dfrac{n_{z} \pi z}{c}\right) \quad n_{z}=1,2,3, \ldots \end{aligned}\nonumber The total wavefunction, therefore, is $\Psi(x, y, z)=\sqrt{\dfrac{8}{a b c}} \sin \left(\dfrac{n_{x} \pi x}{a}\right) \sin \left(\dfrac{n_{y} \pi y}{b}\right) \sin \left(\dfrac{n_{z} \pi z}{c}\right)\nonumber$ And the energy levels can be expressed as \begin{aligned} E &=E_{x}+E_{y}+E_{z} \\ &=\left(\dfrac{n_{x}^{2} h^{2}}{8 m a^{2}}\right)+\left(\dfrac{n_{y}^{2} h^{2}}{8 m b^{2}}\right)+\left(\dfrac{n_{z}^{2} h^{2}}{8 m c^{2}}\right) \end{aligned}\nonumber The key element to notice here is that the wavefunction is expressed as a product while the energy is expressed as a sum. This is a common pattern: it occurs whenever the operator can be expressed as a sum, as was the case for this Hamiltonian. The pattern arises often in chemistry, where, for example, the total wavefunction of a molecule might be described as the product of wavefunctions describing the electronic state, the vibrational state and the rotational state. $\Psi_{\text {tot }}=\psi_{\text {elec }} \psi_{\text {vib }} \psi_{\text {rot }}\nonumber$ In the limit that this is a good description, the energy of the molecule can be expressed as a sum of energies. $\mathrm{E}_{\text {tot }}=\mathrm{E}_{\text {elec }}+\mathrm{E}_{\mathrm{vib}}+\mathrm{E}_{\text {rot }}\nonumber$ Degeneracy Let’s now consider the case where the particle is confined to a cubic space - a rectangular solid where all edges have the same length.
If that length is $a$, the wavefunction becomes $\Psi(x, y, z)=\sqrt{\dfrac{8}{a^{3}}} \sin \left(\dfrac{n_{x} \pi x}{a}\right) \sin \left(\dfrac{n_{y} \pi y}{a}\right) \sin \left(\dfrac{n_{z} \pi z}{a}\right)\nonumber$ The energy levels are given by $E=\left(n_{x}^{2}+n_{y}^{2}+n_{z}^{2}\right) \dfrac{h^{2}}{8 m a^{2}}\nonumber$ This result leads to an important possibility. Specifically, several eigenstates of the system can have the same energy. Consider the set of quantum numbers and energies shown in the following table. Notice that several energies can be generated by a number of combinations of quantum numbers. The degeneracy is indicated by the number of quantum states that yield the same energy. There are many examples in quantum mechanics where several eigenstates yield the same energy. This can have important consequences on the nature of the system being described. This is perhaps the simplest system in which this phenomenon is observed. (Well, a particle in a 2-D box is simpler.)

Level  $n_{x}$  $n_{y}$  $n_{z}$  $E /\left(h^{2} / 8 m a^{2}\right)$  Degeneracy
1      1        1        1        3                                     1
2      1        1        2        6                                     3
3      1        2        1        6                                     3
4      2        1        1        6                                     3
5      1        2        2        9                                     3
6      2        1        2        9                                     3
7      2        2        1        9                                     3
8      1        1        3        11                                    3
9      1        3        1        11                                    3
10     3        1        1        11                                    3
11     2        2        2        12                                    1
12     1        2        3        14                                    6
13     2        3        1        14                                    6
14     3        2        1        14                                    6
15     1        3        2        14                                    6
16     3        1        2        14                                    6
17     2        1        3        14                                    6

Linear Combinations of Degenerate Wavefunctions Oftentimes, it is convenient to describe systems using linear combinations of wavefunctions. An example of this is the creation of molecular orbitals as linear combinations of atomic orbitals. Another is the construction of hybrid orbitals such as the $sp^{3}$ hybrid set that is often used to describe the bonding in methane or other hydrocarbons. These linear combinations have important properties.
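The degeneracies tabulated above can be generated by brute force. A short sketch that enumerates quantum numbers for the cubic box and groups states by energy (in units of $h^2/8ma^2$):

```python
from itertools import product
from collections import defaultdict

# Group states of the cubic box by energy, in units of h^2 / (8 m a^2)
levels = defaultdict(list)
for nx, ny, nz in product(range(1, 6), repeat=3):
    levels[nx**2 + ny**2 + nz**2].append((nx, ny, nz))

# The six lowest energies and their degeneracies
for energy in sorted(levels)[:6]:
    print(energy, len(levels[energy]), sorted(levels[energy]))
```

This reproduces the table: energies 3, 6, 9, 11, 12 and 14 with degeneracies 1, 3, 3, 3, 1 and 6 respectively; the sixfold-degenerate level at $E = 14$ collects all six permutations of $(1, 2, 3)$.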
In the case that the basis wavefunctions are degenerate eigenfunctions of the same operator (say, the Hamiltonian operator for instance) the linear combinations will also be eigenfunctions of that operator. However, this will not generally be the case for linear combinations of non-degenerate eigenfunctions. The proof of this is fairly straightforward. Theorem $1$ Show that any linear combination of two functions that are eigenfunctions of the same operator and have the same eigenvalue is also an eigenfunction of the operator. Proof Consider two functions $f$ and $g$ that are eigenfunctions of the operator $\hat{A}$. $\hat{A} f=a f \quad \text { and } \quad \hat{A} g=a g\nonumber$ Any linear combination of the functions $f$ and $g$ will also be an eigenfunction of the operator $\hat{A}$. \begin{aligned} \hat{A}\left(c_{1} f+c_{2} g\right) &=c_{1} \hat{A} f+c_{2} \hat{A} g \\ &=a c_{1} f+a c_{2} g \\ &=a\left(c_{1} f+c_{2} g\right) \end{aligned}\nonumber The Particle on a Ring Problem Consider a quantum mechanical particle of mass $m$ constrained to a circular path of radius $a$. In Cartesian coordinates, we can write the potential energy function for this system as $V(x, y)= \begin{cases} \infty & \text { for } x^{2}+y^{2} \neq a^{2} \\ 0 & \text { for } x^{2}+y^{2}=a^{2} \end{cases}\nonumber$ However, it is much more convenient to work in coordinates that reflect the symmetry of the problem.
In plane polar coordinates, the potential energy function is defined as $V(r, \theta)= \begin{cases} \infty & \text { for } r \neq a \\ 0 & \text { for } r=a \end{cases}\nonumber$ And since, for motion confined to the ring (constant $r$), the Laplacian operator reduces to its angular part $\nabla^{2}=\dfrac{1}{r^{2}} \dfrac{\partial^{2}}{\partial \theta^{2}}\nonumber$ we can write the time-independent Schrödinger equation as $-\dfrac{\hbar^{2}}{2 m} \cdot \dfrac{1}{r^{2}} \cdot \dfrac{\partial^{2}}{\partial \theta^{2}} \psi(r, \theta)=E \psi(r, \theta)\nonumber$ As usual, we proceed by separating variables. Let’s let $\psi(r, \theta)=R(r) \Theta(\theta)$. We now get $-\dfrac{\hbar^{2}}{2 m} \cdot \dfrac{R(r)}{r^{2}} \cdot \dfrac{d^{2}}{d \theta^{2}} \Theta(\theta)=E R(r) \Theta(\theta)\nonumber$ Now we can divide both sides by the function $R(r)$ and simply get rid of it. In this problem the only thing we need to know about $r$ is that it is a constant ($r=a$). So after a trivial rearrangement, we see $\dfrac{d^{2}}{d \theta^{2}} \Theta(\theta)=-\dfrac{2 m r^{2} E}{\hbar^{2}} \Theta(\theta)\nonumber$ This is starting to look more like something we can manage to solve by inspection! Let’s make a substitution. Let $m_{l}=\pm \dfrac{\left(2 m r^{2} E\right)^{1 / 2}}{\hbar}\nonumber$ We’ll evaluate $m_{l}$ later. But now it is easy to show that $\Theta(\theta)=A e^{i m_{l} \theta}\nonumber$ is a solution to the eigenvalue-eigenfunction problem. Let’s try! \begin{aligned} &\dfrac{d}{d \theta} A e^{i m_{l} \theta}=i A m_{l} e^{i m_{l} \theta} \\ &\text { and } \\ &\dfrac{d}{d \theta} i A m_{l} e^{i m_{l} \theta}=-A m_{l}^{2} e^{i m_{l} \theta} \end{aligned}\nonumber So the eigenfunctions are given by $\Theta(\theta)=A e^{i m_{l} \theta}$ and the eigenvalues are given by $-m_{l}^{2}$. To proceed, we will employ a cyclic boundary condition. Since all wavefunctions must be single valued, we see that $\Theta(\theta)=\Theta(\theta+2 \pi)\nonumber$ So ...
\begin{aligned} A e^{i m_{l} \theta} &=A e^{i m_{l}(\theta+2 \pi)} \\ &=A e^{i m_{l} \theta} e^{i 2 \pi m_{l}} \end{aligned}\nonumber Or dividing both sides by $A e^{i m_{l} \theta}$, we see $1=e^{i 2 \pi m_{l}}\nonumber$ This is going to quantize the possible values which $m_{l}$ can take. And since the Euler relation tells us that $e^{i \pi}=-1\nonumber$ we see that $1=(-1)^{2 m_{l}}\nonumber$ which can only be true if $m_{l}$ is an integer. As it turns out, it doesn’t matter if $m_{l}$ is positive or negative. It just has to be an integer. $m_{l}=0, \pm 1, \pm 2 \ldots\nonumber$ As promised, this quantizes the energies possible for the system. $E=\dfrac{m_{l}^{2} \hbar^{2}}{2 I}\nonumber$ where the moment of inertia $I$ is given by the mass times the radius squared. $I=m r^{2}\nonumber$ Finally, we can obtain the value of the normalization constant $A$ by normalizing the wavefunctions. $1=A^{2} \int_{0}^{2 \pi} e^{-i m_{l} \theta} e^{i m_{l} \theta} d \theta\nonumber$ And we see that $A=\left(\dfrac{1}{2 \pi}\right)^{1/2}\nonumber$ So, in summary, the wavefunctions are given by $\psi(r, \theta)=\left(\dfrac{1}{2 \pi}\right)^{1 / 2} e^{i m_{l} \theta} \quad m_{l}=0, \pm 1, \pm 2, \ldots\nonumber$ And the energies are given by $E_{m_{l}}=\dfrac{m_{l}^{2} \hbar^{2}}{2 I} \quad \text { where } I=m r^{2}\nonumber$
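The normalization and the mutual orthogonality of the ring eigenfunctions can be checked numerically; the integrand is $\Theta^{*}\Theta$ (the complex conjugate matters for these complex exponentials). A small sketch:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200001)
dtheta = theta[1] - theta[0]
A = (1.0 / (2.0 * np.pi)) ** 0.5

def Theta(m_l):
    """Normalized particle-on-a-ring eigenfunction."""
    return A * np.exp(1j * m_l * theta)

# normalization: integral of Theta* Theta over [0, 2 pi] is 1 for every m_l
for m_l in (0, 1, -2, 3):
    norm = (np.sum(np.conj(Theta(m_l)) * Theta(m_l)).real) * dtheta
    print(m_l, round(norm, 4))

# orthogonality of distinct m_l values
overlap = np.sum(np.conj(Theta(1)) * Theta(2)) * dtheta
print(round(abs(overlap), 4))   # ≈ 0
```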
Consider a long molecule that is a conjugated polyene. Kuhn (1949) has suggested a model for the electrons involved in this $\pi$-bond system in which an electron is said to have a finite potential energy when it is "on" the molecule and an infinite potential energy when it is "off" the molecule. The model (known as the free electron model) is very much analogous to the particle in a box problem as we have presented it in class. Let’s consider a conjugated polyene molecule in which there are twelve atoms in the conjugated polyene chain. Each atom contributes one $\pi$ electron and each bond contributes $0.139 \mathrm{~nm}$ (the $\mathrm{C}=\mathrm{C}$ bond length in benzene.) We can consider each energy level in the system as one orbital. As in all other cases involving electrons, each orbital can contain two electrons. Using the model, we can predict the wavelength of light the molecule will absorb to excite one electron from the HOMO to the LUMO (highest occupied molecular orbital to the lowest unoccupied molecular orbital.) First, there are 11 bonds in the chain. Since each bond contributes $0.139 \mathrm{~nm}$, the "box" is $1.529 \mathrm{~nm}$ long. The energy levels of the molecular orbitals are then given by: $E_{n}=\dfrac{n^{2} h^{2}}{8 m a^{2}}\nonumber$ where $n=1,2,3 \ldots, h$ is Planck’s constant $\left(h=6.63 \times 10^{-34} \mathrm{Js}\right), m$ is the mass of an electron $\left(m_{\mathrm{e}}=\right.$ $\left.9.11 \times 10^{-31} \mathrm{~kg}\right)$ and $a$ is the length of the box $\left(a=1.529 \times 10^{-9} \mathrm{~m}\right)$. The energy levels will be filled with the $12 \pi$ electrons packing two electrons per orbital. Thus, the HOMO will be the state with $n=6$. The LUMO will be the state with $n=7$ - the next state up in energy. The difference in energy is what we want in order to predict the wavelength of light the molecule will absorb.
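The arithmetic that follows can be reproduced with a short script (a sketch using the constants quoted in the text):

```python
h = 6.63e-34      # Planck's constant, J s
m_e = 9.11e-31    # electron mass, kg
c = 3.00e8        # speed of light, m/s

n_atoms = 12
a = (n_atoms - 1) * 0.139e-9          # 11 bonds at 0.139 nm -> box length 1.529 nm

def E(n):
    """Particle-in-a-box energy of level n, in joules."""
    return n**2 * h**2 / (8.0 * m_e * a**2)

homo, lumo = n_atoms // 2, n_atoms // 2 + 1   # 12 pi electrons, two per level
dE = E(lumo) - E(homo)
wavelength_nm = h * c / dE * 1e9
print(round(dE / 1e-19, 3), round(wavelength_nm))   # 3.354 (x 10^-19 J), 593 nm
```

Changing `n_atoms` answers the closing exercise: longer chains give a longer box and smaller HOMO-LUMO gaps, shifting the absorption to longer wavelengths.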
\begin{aligned} &E_{6}=\dfrac{6^{2}\left(6.63 \times 10^{-34} \mathrm{Js}\right)^{2}}{8\left(9.11 \times 10^{-31} \mathrm{~kg}\right)\left(1.529 \times 10^{-9} \mathrm{~m}\right)^{2}}=9.288 \times 10^{-19} \mathrm{~J} \\ &E_{7}=\dfrac{7^{2}\left(6.63 \times 10^{-34} \mathrm{Js}\right)^{2}}{8\left(9.11 \times 10^{-31} \mathrm{~kg}\right)\left(1.529 \times 10^{-9} \mathrm{~m}\right)^{2}}=1.2642 \times 10^{-18} \mathrm{~J} \end{aligned}\nonumber So the energy of excitation will be $3.354 \times 10^{-19} \mathrm{~J}$. This corresponds to an absorption wavelength of $593 \mathrm{~nm}$ (which is in the visible region of the spectrum.) How would the absorption wavelength change for more or fewer atoms in the conjugated polyene chain? The solution is left as an exercise. 2.08: Entanglement and Schrodinger's Cat There are many elements of the quantum theory that produce bizarre results (at least compared to our intuition as residents of a classical physics world). As it turns out, some of the early pioneers of the quantum theory (such as Albert Einstein and Erwin Schrödinger) found these elements of strangeness too much to handle. Both expended a great deal of energy trying to eliminate quantum mechanics as an accepted theory that would shape modern science. As it turns out, all of the bizarreness predicted by quantum mechanics has withstood the tests of experimentation, despite the concerns and well-thought objections of these two scientific giants. Entanglement and Spooky Action at a Distance One of Einstein’s objections came in the form of what he named "spooky action at a distance." To understand this phenomenon, consider the decay of a $\pi$-meson into an electron and a positron. Since the original particle has zero spin, the two product particles, in order to conserve angular momentum, must be "spinning" in opposite directions. In other words, one has $m_{s}=+\frac{1}{2}$ and the other has $m_{s}=-\frac{1}{2}$.
$\beta^{+} \longleftarrow \pi^{0} \longrightarrow \beta^{-}\nonumber$ The wavefunction that describes this system prior to the measurement of the spin of either particle is given by $\psi_{\text {spin }}=\dfrac{1}{\sqrt{2}}\left(\alpha_{+} \beta_{-}-\beta_{+} \alpha_{-}\right)\nonumber$ which allows for the possibility that either particle is spin up or spin down to be equally likely. But the spins of the two particles are intimately coupled to one another. If the electron $\left(\beta^{-}\right)$ is spin up $(\alpha)$ then the positron $\left(\beta^{+}\right)$ must be spin down $(\beta)$ (and vice versa.) This property is an example of entanglement, where the properties of one particle are entangled with those of the other through the wavefunction that describes the entire system. Now suppose that the spin of the electron is measured; at that instant, the spin of the positron is also determined. As such, the measurement of a property of one particle causes the wavefunction of the other particle to change instantaneously. This is what Einstein referred to as "spooky action at a distance." This action would seem to require information to be transferred across space at a speed faster than the speed of light, violating Einstein’s theory of relativity. This paradox has been studied extensively and remains a topic of research interest. It should be noted that whenever these sorts of issues crop up, it is quantum mechanics that seems to prevail over relativity. (Sorry Einstein!) Schrödinger’s Cat Erwin Schrödinger’s involvement in trying to dissuade the scientific community from embracing quantum theory is particularly peculiar, as it was the development of the wave equation that is still used today that won him the Nobel Prize in 1933. Nonetheless, Schrödinger found himself quite troubled by the conclusions of the quantum theory.
Toward that end, in 1935, he published a paper in which he described a thought experiment that had to give the scientific world pause where quantum theory was concerned. The problem was stated thusly. Imagine a box inside of which no observation can be made unless the box is opened. Inside were placed a cat, a bottle of poison (prussic acid) and a radioactive atom. If the atom decays, a hammer will drop on the poison, killing the cat. The experiment was to wait one half-life of the atom. At that point, the wavefunction for the atom was given by $\Psi_{\text {atom }}=\dfrac{1}{\sqrt{2}} \psi_{\text {decayed }}+\dfrac{1}{\sqrt{2}} \psi_{\text {undecayed }}\nonumber$ This implies that it is equally likely that the atom has decayed as not. And since the life of the cat was tied to the state of the atom, it is equally likely that the cat is dead or alive. Therefore, the "wavefunction" for the cat would be given by $\Psi_{\text {cat }}=\dfrac{1}{\sqrt{2}} \psi_{\text {dead }}+\dfrac{1}{\sqrt{2}} \psi_{\text {alive }}\nonumber$ This implies that the cat is neither dead nor alive, but both with equal probability! And even for the most lethargic of cats, it is very clear that the animal is either alive or not. The notion that it is both is simply preposterous! This is the conclusion of which Schrödinger hoped to convince the scientific world. Alas, experimentation has failed to uphold Schrödinger’s notion that quantum mechanics provides an incorrect description of the atom. There have been numerous treatises on these topics and beyond. (The strangeness of quantum mechanics has been a very thought-provoking topic indeed!) After completing a course in quantum mechanics (such as this one) a student should be well prepared to explore some of these very intriguing and perplexing predictions.
Kuhn, H. J. (1949). Journal of Chemical Physics, 17, 1198. 2.10: Vocabulary and Concepts commute equation of motion Hamiltonian Heisenberg Uncertainty Principle Kinetic Energy orthogonal spooky action at a distance Superposition Principle wavefunction 2.11: Problems 1. Consider the functions $f(x)=A\left(1-x^{2}\right)$ and $g(x)=3 x^{3}-x$. a. Find a value for $A$ such that $f(x)$ is normalized on the interval $-1 \leq x \leq 1$. b. Are the functions $f(x)$ and $g(x)$ orthogonal over the interval $-1 \leq x \leq 1$? 2. Consider each of the following functions and the associated intervals. Indicate whether or not the given function is suitable as a wavefunction over the given interval. a. $\mathrm{e}^{x} \qquad 0 \leq x \leq \infty$ b. $\mathrm{e}^{-x} \qquad 0 \leq x \leq \infty$ c. $1/x \qquad -\infty \leq x \leq \infty$ d. $\mathrm{e}^{i \theta} \qquad 0 \leq \theta \leq 2 \pi$ e. $x(1-x) \qquad 0 \leq x \leq 1$ 3. Consider the following operators. Determine whether or not they are Hermitian. a. $\mathrm{d}/\mathrm{dx}$ b. $i\, \mathrm{d}/\mathrm{dx}$ c. $\mathrm{d}^{2}/\mathrm{dx}^{2}$ d. $i\, \mathrm{d}^{2}/\mathrm{dx}^{2}$ 4. Consider an operator $\hat{A}$ and associated set of eigenfunctions $\phi_{n}$ that satisfies $\hat{A} \phi_{n}=a_{n} \phi_{n} \nonumber$ Show that if the operator is Hermitian, the eigenvalues $a_{n}$ must be real-valued. 5. Consider the data in the table. a. Calculate $\langle x\rangle$ and $\left\langle x^{2}\right\rangle$. b. Calculate $\sigma_{x}^{2}$ for the data set. c. Does $\sigma_{x}^{2}=\left\langle x^{2}\right\rangle-\langle x\rangle^{2}$? If not, what is the difference? 6. Consider a particle of mass $m$ in a rectangular solid box with edge lengths given by $a$, $b=2a$, and $c=2a$. Find the degeneracies of the first 10 energy levels for the system.
$\mathbf{i}$   $\mathbf{x}$
1   $2.3$
2   $6.4$
3   $4.2$
4   $3.5$
5   $4.9$

7. Consider a particle of mass $m$ that is in a one-dimensional box of length $a$. The system is prepared so that the wavefunction is given by $\psi(x)=A x(a-x)$. a. Find a value of $A$ that normalizes the wavefunction. b. Find the expectation values for $x$ and $x^{2}$ ($\langle x \rangle$ and $\langle x^{2}\rangle$). c. Find the expectation values for $p$ and $p^{2}$ ($\langle p \rangle$ and $\langle p^{2} \rangle$). d. Given that the variance for a measurement is given by $\sigma_{a}^{2}=\langle a^{2} \rangle- \langle a \rangle^{2}$, calculate the variances $\sigma_{x}^{2}$ and $\sigma_{p}^{2}$. e. Find the value of $\sigma_{x} \sigma_{p}$. Does it exceed $\frac{\hbar}{2}$? 8. Consider a particle of mass $m$ in a box of length $a$. The system is prepared such that the wavefunction is given by $\psi(x)=A x^{2}(a-x)$. a. Find a value of $A$ that normalizes the wavefunction. b. What are the units on the wavefunction? c. Find $\langle x \rangle$. d. Is $\langle x\rangle=a / 2$? Why or why not? 9. Consider the following pairs of operators and determine whether or not the operators commute. a. $\mathrm{d}/\mathrm{dx}$, $\mathrm{d}^{2}/\mathrm{dx}^{2}$ b. $x$, $\mathrm{d}^{2}/\mathrm{dx}^{2}$ c. $x$, $\int d x$ 10. Consider a particle of mass $m$ in a box of length $a$ for which the wavefunction is given by $\Psi(x)=\dfrac{\sqrt{2}}{3} \phi_{1}(x)-\dfrac{\sqrt{7}}{3} \phi_{3}(x)\nonumber$ where $\phi_{n}(x)=(2 / a)^{1 / 2} \sin (n \pi x / a)$. a. Show that the wavefunction $\Psi(x)$ is normalized. b. Graph the wavefunction $\Psi(x)$. c. What is the expectation value for energy $\langle E \rangle$ for the system? d. What is the most likely energy to be measured for the system? 11.
Consider benzene $\left(\mathrm{C}_{6} \mathrm{H}_{6}\right)$ as modeled using the free-electron model. 1. Using a $\mathrm{C}-\mathrm{C}$ bond length of $\mathrm{r}_{\mathrm{cc}}=0.139 \mathrm{~nm}$, calculate the circumference of the ring and its radius. 2. Based on the model, what are the degeneracies of the four lowest energy levels? 3. Placing two electrons per particle-on-a-ring "orbital", calculate the energy gap (and corresponding wavelength of light driving a transition) between the HOMO and the LUMO based on this model. 4. How does the value you found in part c compare to the observed band-origin of the $A_{1 \mathrm{~g}} \rightarrow \mathrm{B}_{1 \mathrm{u}}$ transition of benzene $(\lambda=215 \mathrm{~nm})$ ?
Many problems in chemistry can be simplified based on the symmetry of molecules and/or the symmetries of atomic and molecular orbitals. Since this course will deal mostly with the mathematical models used to describe molecular motions (rotations and vibrations) and the orbitals needed to describe the electronic structure of atoms and molecules, some introduction to the mathematics of symmetry is useful. The concepts discussed in this chapter will be used throughout the text to demonstrate how symmetry can be used to simplify the descriptions of atomic and molecular behavior. • 3.1: Overview Group Theory is the mathematical theory associated with the mathematical properties of groups. In chemistry, group theory is the mathematics of symmetry. • 3.2: Group Theory in Chemistry In Chemistry, group theory is useful in understanding the ramifications of symmetry within chemical bonding, quantum mechanics and spectroscopy. • 3.3: Determining the Point Group for a Molecule- the Schoenflies notation The first step in determining the point group for a molecule is to determine the structure of the molecule. Once this is done, identify all of the symmetry elements the molecular structure possesses. • 3.4: Multiplication Operation for Symmetry Elements Multiplication is fairly simple when it comes to symmetry operations. One simply applies the operations from right to left. Going back to the tennis racket example, it is fairly simple to visualize each symmetry element. • 3.5: More Definitions- Order and Class An important definition is the order of a group. • 3.6: Representations A representation is any mathematical construct that will reproduce the group multiplication table. In general, there are an infinite number of representations possible for a given group, however, most of them will be related through simple relationships, and thus can be constructed from (or reduced to) other representations.
• 3.7: The "Great Orthogonality Theorem" One thing that is important about irreducible representations is that they are orthogonal. This is the property that makes group theory so very useful in chemistry, because orthogonality makes integrals zero. It’s always easier to do the integrals when orthogonality tells us the result will be zero before doing any complicated math! • 3.8: Character and Character Tables Most summaries of group theory do not give the full matrix specifications for each irreducible representation in each important point group. Rather, a very useful quantity is defined, called the character. • 3.9: Direct Products The intensity of a transition in the spectrum of a molecule is proportional to the magnitude squared of the transition moment matrix element. • 3.10: Vocabulary and Concepts • 3.11: Problems 03: An Introduction to Group Theory Group Theory is the mathematical theory associated with the mathematical properties of groups. In chemistry, group theory is the mathematics of symmetry. A group (\(G\)) is a set of elements (\(A\), \(B\), etc.) that can be associated through a mathematical operation (sometimes referred to as a multiplication operation, eg. \(A*B\)) and satisfying the following criteria: 1. The group must have an identity element (\(E\)) such that for each element A in the group, \(A*E = E*A = A\). (It can be proven that for a given group and multiplication operation, the identity element is unique.) 2. Each element \(A\) in the group must have an inverse (\(A^{-1}\)) that is also a member of the group and that satisfies the criterion \(A*A^{-1} = A^{-1}*A = E\). (It can be proven that each element has one and only one inverse.) 3. The group must be closed under multiplication. That means that for any pair of elements in the group A and B for which \(A*B = C\), \(C\) must also be a member of the group. Note that the multiplication operation need not be commutative. The order of multiplication may matter. 
There is no guarantee that \(A*B = B*A\). Groups for which the multiplication operation is commutative (that is, \(A*B = B*A\) for every pair of elements) are called abelian groups. The set of numbers 1 and –1 forms an abelian group under the normal operation of simple multiplication. A simple group multiplication table can be constructed for this group.

\(*\)    1    -1
1      1    -1
-1    -1     1

Clearly, the identity element in this group is 1 since multiplication by 1 gives the same number back. Also, both members happen to be their own inverse since \(1*1 = 1\) and \((-1)*(-1) = 1\).
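The group axioms for $\{1, -1\}$ under multiplication can be checked mechanically; a small sketch:

```python
from itertools import product

G = [1, -1]

# closure: every product of two elements stays in the group
assert all(a * b in G for a, b in product(G, G))
# identity: 1 leaves every element unchanged
assert all(1 * a == a and a * 1 == a for a in G)
# inverses: every element has an inverse in the group
assert all(any(a * b == 1 for b in G) for a in G)
# commutativity: this group is abelian
assert all(a * b == b * a for a, b in product(G, G))

print([[a * b for b in G] for a in G])   # multiplication table: [[1, -1], [-1, 1]]
```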
In Chemistry, group theory is useful in understanding the ramifications of symmetry within chemical bonding, quantum mechanics and spectroscopy. The group elements we are concerned with are symmetry operations. A symmetry operation is a geometrical manipulation that leaves an object in a geometry that is indistinguishable from that which it had before the manipulation. There are five important types of symmetry operations with which we are concerned. Each type of operation has an associated symmetry element. Using standardized notation, these operations and elements can be summarized as follows.

$E$ (identity): This is the “don’t do anything to it” operation. The associated element is $E$. Example: $E(x,y,z) = (x,y,z)$.

$C_{n}$ (proper rotation): An operation in which the object is rotated about an axis by an angle of $\frac{2\pi }{n}$ radians. The associated element is the “$C_{n}$ axis”. The axis with the largest value of $n$ is designated the “principal rotation axis”, and the $z$-axis is always assigned as lying along the principal rotation axis. Examples: $C_{4}(x,y,z) = (y,-x,z)$; $C_{2}(x,y,z) = (-x,-y,z)$; etc.

$\sigma$ (reflection plane): This operation involves reflection of the object through a mirror plane. The associated elements are $\sigma_{v}$, $\sigma_{d}$ or $\sigma_{h}$ planes. $\sigma_{v}$ and $\sigma_{d}$ planes contain the principal rotation axis, whereas $\sigma_{h}$ planes are perpendicular to the principal rotation axis. Examples: $\sigma_{v}(x,y,z) = (-x,y,z)$ (for reflection through the $yz$ plane); $\sigma_{h}(x,y,z) = (x,y,-z)$; $\sigma_{d}(x,y,z) = (y,x,z)$.

$i$ (inversion center): This operation involves reflection through a point. The associated element is $i$. The inversion center (if it exists) will always be located at the center of mass of a molecule. Example: $i(x,y,z) = (-x,-y,-z)$.

$S_{n}$ (improper rotation): This operation involves a rotation about a $C_{n}$ axis followed by reflection through a $\sigma_{h}$ plane. The associated element is the $S_{n}$ axis.

A given molecule may have several of the above symmetry elements.
The particular combination will define a group, and that group can be given a name based on the type of symmetry elements it contains. Further, all of the convenient wavefunctions that describe the vibrations, rotations and molecular orbitals of the molecule will be eigenfunctions of the symmetry operations, forcing some very useful mathematical properties upon the wavefunctions. A case study: the symmetry of a tennis racquet A tennis racquet has all of the same symmetry elements as a water molecule or a formaldehyde molecule. Let’s identify these symmetry elements and write out a group multiplication table for the group to which that particular set belongs. The most obvious symmetry element is always the identity element ($E$). Every object possesses this symmetry element. Some objects are so asymmetrical that this is the only symmetry element they possess. Certainly, a tennis racquet possesses the symmetry element $E$. A tennis racquet also possesses a two-fold proper rotation axis ($C_{2}$) running down the length of the handle. The next most useful element to examine is the reflection plane. An object may or may not possess this type of symmetry. A tennis racquet has two vertical ($\sigma_{v}$) reflection planes, both of which contain the $C_{2}$ axis. One is in the plane of the strings and the other is perpendicular to the face of the racquet. It often happens that an object has more than one of a given type of symmetry element. For our purposes, we will designate the plane that is perpendicular to the face of the racquet as $\sigma_{v}$ and the one that is parallel to the face of the racquet as $\sigma_{v}'$. A tennis racquet possesses neither an inversion center ($i$) nor an improper rotation axis ($S_{n}$). The set of symmetry elements that the object does possess ($E$, $C_{2}$, $\sigma_{v}$ and $\sigma_{v}'$) defines a group that goes by the label $C_{2v}$. Any object that has these and only these symmetry elements is said to have $C_{2v}$ symmetry. It is easy to demonstrate that the set of symmetry elements that define $C_{2v}$ define a group.
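One way to carry out this demonstration is to represent the four operations as $3 \times 3$ matrices acting on $(x, y, z)$, with $z$ along the $C_2$ axis, and check that the product of any two operations is again one of the four (closure). A sketch:

```python
import numpy as np

# C2v operations as 3x3 matrices, z along the C2 axis
ops = {
    'E':   np.diag([ 1,  1, 1]),
    'C2':  np.diag([-1, -1, 1]),
    'sv':  np.diag([ 1, -1, 1]),   # reflection through the xz plane
    "sv'": np.diag([-1,  1, 1]),   # reflection through the yz plane
}
name_of = {m.tobytes(): n for n, m in ops.items()}

# closure: every pairwise product is another operation in the set,
# and the printed rows form the group multiplication table
for a in ops:
    row = [name_of[(ops[a] @ ops[b]).tobytes()] for b in ops]
    print(a, row)
```

Every product lands back in the set, and each operation is its own inverse (each diagonal matrix squares to the identity), exactly as the multiplication table in Section 3.4 shows.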
The first step in determining the point group for a molecule is to determine the structure of the molecule. Once this is done, identify all of the symmetry elements the molecular structure possesses. Once this has been accomplished, you can use the preceding flowchart to determine the correct point group using the Schoenflies notation system. Example $1$ Determine the point group for a methane molecule. Solution A methane molecule has tetrahedral symmetry. It contains the following symmetry elements: $E$, 4 $C_{3}$ axes (one along each C-H bond), 6 $\sigma$ planes (one each containing the carbon and a pair of hydrogen atoms), and 3 $C_{2}$ axes (each one bisecting an HCH bond angle). It also has 3 $S_{4}$ axes (each one co-linear with a $C_{2}$ axis). The molecule belongs to the point group $T_d$, as can be discerned from the following analysis. 1. Is the molecule linear? No 2. Does the molecule have two or more $C_{n\geq 3}$ axes? Yes 3. Does the molecule have a $C_{n \geq 4}$ axis? No 4. Does the molecule have any $\sigma$ planes? Yes 5. Does the molecule have an inversion center? No The molecule belongs to the $T_{d}$ point group. Example $2$ Determine the point group for $CH_{3}Cl$. Solution Chloromethane has the same tetrahedral shape as methane, but belongs to the point group $C_{3v}$. The molecule has the following symmetry elements: $E$, $C_{3}$ (along the C-Cl bond axis) and 3 $\sigma_{v}$ planes (each containing the chlorine and carbon atoms plus one hydrogen atom). The classification of the molecule goes as follows: 1. Is the molecule linear? No 2. Does the molecule have two or more $C_{n\geq 3}$ axes? No 3. Does the molecule have a $C_{n}$ axis? Yes 4. Are there $n$ $C_{2}$ axes perpendicular to the principal axis? No 5. Does the molecule have a $\sigma_{h}$ plane? No 6. Does it have $n$ $\sigma_{v}$ planes? Yes The molecule belongs to the $C_{3v}$ point group. Example $3$ Determine the point group for benzene.
Solution: Benzene has a planar geometry and belongs to the point group $D_{6h}$. The molecule possesses the following symmetry elements: $E$, $C_{6}$, 6 $C_{2}$ axes perpendicular to the $C_{6}$ axis, 3 $\sigma_{v}$ and 3 $\sigma_{d}$ planes, $\sigma_{h}$ and $i$. The classification of the molecule goes as follows:

1. Is the molecule linear? No
2. Does the molecule have two or more $C_{n \geq 3}$ axes? No
3. Does the molecule have a $C_{n}$ axis? ($n = 6$ for benzene) Yes
4. Are there $n$ $C_{2}$ axes perpendicular to the principal axis? Yes
5. Does the molecule have a $\sigma_{h}$ plane? Yes

The molecule belongs to the point group $D_{6h}$.

Example $4$: Classify ethene by its point group.

Solution: Ethene has a planar geometry. The molecule possesses the following symmetry elements: $E$, 3 $C_{2}$, 3 $\sigma$, and $i$. The classification of the molecule goes as follows:

1. Is the molecule linear? No
2. Does the molecule have two or more $C_{n \geq 3}$ axes? No
3. Does the molecule have a $C_{n}$ axis? Yes ($n = 2$)
4. Are there $n$ $C_{2}$ axes perpendicular to the principal axis? Yes
5. Does the molecule have a $\sigma_{h}$ plane? Yes

The molecule belongs to the $D_{2h}$ point group.

Example $5$: Classify the isomers of dichloroethene by their point groups.

Solution: Dichloroethene has three isomers, all of which have a planar geometry. The cis- and gem- isomers have the following symmetry elements: $E$, $C_{2}$, and 2 $\sigma_{v}$. (The 1,1- (or gem-) isomer has the same elements as the cis- isomer.) The classification of the molecule goes as follows:

1. Is the molecule linear? No
2. Does the molecule have two or more $C_{n \geq 3}$ axes? No
3. Does the molecule have a $C_{n}$ axis? Yes ($n = 2$)
4. Are there $n$ $C_{2}$ axes perpendicular to the principal axis? No
5. Does the molecule have a $\sigma_{h}$ plane? No
6. Does the molecule have $n$ $\sigma_{v}$ planes? Yes

The cis-isomer belongs to the $C_{2v}$ point group. The trans-isomer has the following symmetry elements: $E$, $C_{2}$, $\sigma_{h}$, and $i$.
The classification of the molecule goes as follows:

1. Is the molecule linear? No
2. Does the molecule have two or more $C_{n \geq 3}$ axes? No
3. Does the molecule have a $C_{n}$ axis? Yes ($n = 2$)
4. Are there $n$ $C_{2}$ axes perpendicular to the principal axis? No
5. Does the molecule have a $\sigma_{h}$ plane? Yes

The trans-isomer belongs to the $C_{2h}$ point group.

3.04: Multiplication Operation for Symmetry Elements

Multiplication is fairly simple when it comes to symmetry operations. One simply applies the operations from right to left. Going back to the tennis racket example, it is fairly simple to visualize each symmetry element. To show this, it is useful to construct a group multiplication table. To do this, pick a corner of the object and imagine where it is transported under a pair of sequential operations. Then imagine what operation will effect the same transformation directly. By applying the operations pairwise, one can generate the group multiplication table:

| $C_{2v}$ | $E$ | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $E$ | $E$ | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
| $C_{2}$ | $C_{2}$ | $E$ | $\sigma_{v}'$ | $\sigma_{v}$ |
| $\sigma_{v}$ | $\sigma_{v}$ | $\sigma_{v}'$ | $E$ | $C_{2}$ |
| $\sigma_{v}'$ | $\sigma_{v}'$ | $\sigma_{v}$ | $C_{2}$ | $E$ |

What should jump right out from this multiplication table is that the group $C_{2v}$ 1) is abelian (actually, this will become clear after the term is defined) and 2) has the property that each element happens to be its own inverse! For some objects (such as a three-legged stool or an ammonia molecule) this will not be the case.

3.05: More Definitions- Order and Class

An important definition is the order of a group. The order ($h$) is simply the number of symmetry operations in the group. For the $C_{2v}$ point group, the order is $h=4$. Another important concept defines the number of classes of operations a point group contains.
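This bookkeeping can be checked mechanically. Below is a minimal sketch (the matrix assignments and names are illustrative, not part of the text) that rebuilds the $C_{2v}$ multiplication table from 2×2 matrices acting on $(x, y)$; the $z$ coordinate is unchanged by every $C_{2v}$ operation, so it can be dropped.

```python
import numpy as np

# A minimal sketch (names are illustrative) that rebuilds the C2v
# multiplication table from 2x2 matrices acting on (x, y).  The z
# coordinate is unchanged by every C2v operation, so it is dropped.
ops = {
    "E":   np.diag([ 1,  1]),   # identity
    "C2":  np.diag([-1, -1]),   # rotation by pi about the z axis
    "sv":  np.diag([ 1, -1]),   # reflection through the xz plane
    "sv'": np.diag([-1,  1]),   # reflection through the yz plane
}

def name_of(m):
    """Identify which operation a product matrix corresponds to."""
    return next(n for n, mat in ops.items() if np.array_equal(m, mat))

# table[(a, b)] is the single operation equivalent to applying b, then a.
table = {(a, b): name_of(ops[a] @ ops[b]) for a in ops for b in ops}

print(table[("C2", "sv")])                        # sv'
print(all(table[(a, a)] == "E" for a in ops))     # True
```

The final line confirms the observation made above: every element of $C_{2v}$ is its own inverse.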
Two operations ($A$ and $B$) belong to the same class if there is a third operation ($C$) in the group that relates them by the similarity transform

$C^{-1}AC = B$

According to this definition, the operations $A$ and $B$ are said to be conjugate. A complete set of mutually conjugate operations within a group defines a class. This will be demonstrated later, using the $C_{3v}$ point group operations. In the case of the $C_{2v}$ point group, no two elements are in the same class. This has some very important ramifications for the point group. A group for which this is the case is said to be an abelian group. Not all point groups have this property, however.
A representation is any mathematical construct that will reproduce the group multiplication table. In general, there are an infinite number of representations possible for a given group; however, most of them will be related through simple relationships, and thus can be constructed from (or reduced to) other representations. Those that cannot be reduced to linear combinations of other representations are called irreducible representations. The irreducible representations are particularly useful, as they can be used to predict the mathematical properties of any function that is an eigenfunction of all of the symmetry operations of a group. The number of classes of operations always gives the number of irreducible representations. Each irreducible representation can be labeled as $\Gamma_{i}$.

To construct a representation for a group, one must assign each operation a mathematical element. For the $C_{2v}$ point group, we can get away with using either 1 or –1 for each element. (This is a consequence of each operation belonging to its own class.) The simplest representation can be constructed by assigning each symmetry element the value 1. The group multiplication table will hold, as can be seen below.

| $C_{2v}$ | 1 | 1 | 1 | 1 |
|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 | 1 |

Note that each product gives a value that corresponds to the correct element. For example, we let $C_{2} = 1$ and $\sigma_{v} = 1$. The product $C_{2} \cdot \sigma_{v}$ yields $\sigma_{v}'$, and since the value we assigned $\sigma_{v}'$ is 1, and $1 \times 1 = 1$, everything worked. This particular representation seems pretty trivial since it has to work for any multiplication table that can ever be written! In fact, every point group has this type of representation. Since every element of this representation is 1, it is called the totally symmetric representation.
Another representation ($\Gamma_{2}$) can be constructed in which $E$ and $C_{2}$ are represented by 1 and $\sigma_{v}$ and $\sigma_{v}'$ are represented by –1. In this case, the multiplication table looks as follows:

| $C_{2v}$ | 1 | 1 | -1 | -1 |
|---|---|---|---|---|
| 1 | 1 | 1 | -1 | -1 |
| 1 | 1 | 1 | -1 | -1 |
| -1 | -1 | -1 | 1 | 1 |
| -1 | -1 | -1 | 1 | 1 |

It should be clear again (or easily enough verified) that this has the same pattern as the group multiplication table. Two other representations can be constructed in this manner (with all of the elements given as either 1 or –1). Together with the first representation, these can be summarized in the following table.

| $C_{2v}$ |  | $E$ | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|---|
| $\Gamma_{1}$ | $A_{1}$ | 1 | 1 | 1 | 1 |
| $\Gamma_{2}$ | $A_{2}$ | 1 | 1 | -1 | -1 |
| $\Gamma_{3}$ | $B_{1}$ | 1 | -1 | 1 | -1 |
| $\Gamma_{4}$ | $B_{2}$ | 1 | -1 | -1 | 1 |

These irreducible representations ($\Gamma_{i}$) go by a standardized set of naming rules. First, these irreducible representations are all singly degenerate (no two-by-two or three-by-three matrices were needed for the representations), so all of them are given the symbol A or B. A is used if the representation is symmetric (1) with respect to the principal rotation axis ($C_{2}$) and B if it is antisymmetric (–1) with respect to the principal axis. The subscript is 1 if the representation is symmetric with respect to the $\sigma_{v}$ reflection plane, and 2 if the representation is antisymmetric with respect to this plane of reflection. If an irreducible representation requires a set of two-by-two matrices, the representation is designated E, and three-by-three matrix irreducible representations are labeled T.

We'll discuss more on the difference between a reducible and irreducible representation later. First, let's work through a slightly more difficult point group. The $C_{3v}$ point group is not abelian and requires matrices for some of the irreducible representations.
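That each of these 1/–1 assignments really reproduces the multiplication table can be verified by brute force. A short sketch (the table and names are transcribed from the $C_{2v}$ discussion above):

```python
# Sketch: brute-force check that each 1/-1 assignment really is a
# representation, i.e. G(a) * G(b) = G(a*b) for every entry of the
# C2v group multiplication table.
table = {
    ("E", "E"): "E",     ("E", "C2"): "C2",   ("E", "sv"): "sv",   ("E", "sv'"): "sv'",
    ("C2", "E"): "C2",   ("C2", "C2"): "E",   ("C2", "sv"): "sv'", ("C2", "sv'"): "sv",
    ("sv", "E"): "sv",   ("sv", "C2"): "sv'", ("sv", "sv"): "E",   ("sv", "sv'"): "C2",
    ("sv'", "E"): "sv'", ("sv'", "C2"): "sv", ("sv'", "sv"): "C2", ("sv'", "sv'"): "E",
}
irreps = {
    "A1": {"E": 1, "C2": 1,  "sv": 1,  "sv'": 1},
    "A2": {"E": 1, "C2": 1,  "sv": -1, "sv'": -1},
    "B1": {"E": 1, "C2": -1, "sv": 1,  "sv'": -1},
    "B2": {"E": 1, "C2": -1, "sv": -1, "sv'": 1},
}
for g in irreps.values():
    for (a, b), ab in table.items():
        assert g[a] * g[b] == g[ab]
print("all four assignments reproduce the multiplication table")
```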
The Symmetry of a Triangular Pyramid: a more complex point group

An example of a point group that requires two-by-two matrix elements for the irreducible representations is the $C_{3v}$ point group. This point group (which describes the symmetry elements of an ammonia molecule or a pyramid with an equilateral triangular base) consists of the symmetry elements $E$, $C_{3}$, $C_{3}'$ (or $C_{3}^{2}$), $\sigma_{v}$, $\sigma_{v}'$ and $\sigma_{v}''$. In the figure to the left, the $C_{3}$ axis runs perpendicular to the base of the pyramid (you are looking straight down on the top of the pyramid) and the $C_{3}$ operation might correspond to a clockwise rotation of the figure about that axis. The $C_{3}'$ axis is the same as the $C_{3}$ axis, but the $C_{3}'$ operation corresponds to a counterclockwise rotation by $2\pi/3$ radians. Note that this operation is equivalent to performing the $C_{3}$ operation twice (hence the alternative notation $C_{3}^{2}$). The $\sigma_{v}$, $\sigma_{v}'$ and $\sigma_{v}''$ elements are reflection planes that lie perpendicular to the base, each containing one edge of the pyramid. The reader is left to imagine the identity element.

If the corners of the base of the pyramid are labeled for convenience, the effect of each symmetry operation can be represented as follows.

$E * (1,2,3) = (1,2,3) \qquad \sigma_{v} * (1,2,3) = (1,3,2)$

$C_{3} * (1,2,3) = (3,1,2) \qquad \sigma_{v}' * (1,2,3) = (3,2,1)$

$C_{3}^{2} * (1,2,3) = (2,3,1) \qquad \sigma_{v}'' * (1,2,3) = (2,1,3)$

Following these permutations, it is possible to construct the group multiplication table.
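The permutation bookkeeping lends itself to a short script. Below is a sketch (names are illustrative) in which each operation is a permutation sending corner $i$ to position $p[i]$, matching the lists above, and products are taken right to left:

```python
# Sketch: each C3v operation as a permutation of the three base corners.
# The tuple p sends corner i to position p[i] (0-indexed), matching the
# (1,2,3) -> ... lists in the text.
ops = {
    "E":    (0, 1, 2),   # E    * (1,2,3) = (1,2,3)
    "C3":   (2, 0, 1),   # C3   * (1,2,3) = (3,1,2)
    "C3^2": (1, 2, 0),   # C3^2 * (1,2,3) = (2,3,1)
    "sv":   (0, 2, 1),   # sv   * (1,2,3) = (1,3,2)
    "sv'":  (2, 1, 0),   # sv'  * (1,2,3) = (3,2,1)
    "sv''": (1, 0, 2),   # sv'' * (1,2,3) = (2,1,3)
}
name = {p: n for n, p in ops.items()}

def product(a, b):
    """Right-to-left product a*b: apply b first, then a."""
    pa, pb = ops[a], ops[b]
    return name[tuple(pa[i] for i in pb)]

print(product("C3", "C3"))    # C3^2
print(product("sv", "C3"))    # sv''
```

Looping `product` over all pairs of operations generates the full multiplication table.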
The group multiplication table for this group ($C_{3v}$), with products $A * B$ taken right to left ($B$ applied first, then the row operation $A$), looks as follows:

| $C_{3v}$ | $E$ | $C_{3}$ | $C_{3}^{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ | $\sigma_{v}''$ |
|---|---|---|---|---|---|---|
| $E$ | $E$ | $C_{3}$ | $C_{3}^{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ | $\sigma_{v}''$ |
| $C_{3}$ | $C_{3}$ | $C_{3}^{2}$ | $E$ | $\sigma_{v}'$ | $\sigma_{v}''$ | $\sigma_{v}$ |
| $C_{3}^{2}$ | $C_{3}^{2}$ | $E$ | $C_{3}$ | $\sigma_{v}''$ | $\sigma_{v}$ | $\sigma_{v}'$ |
| $\sigma_{v}$ | $\sigma_{v}$ | $\sigma_{v}''$ | $\sigma_{v}'$ | $E$ | $C_{3}^{2}$ | $C_{3}$ |
| $\sigma_{v}'$ | $\sigma_{v}'$ | $\sigma_{v}$ | $\sigma_{v}''$ | $C_{3}$ | $E$ | $C_{3}^{2}$ |
| $\sigma_{v}''$ | $\sigma_{v}''$ | $\sigma_{v}'$ | $\sigma_{v}$ | $C_{3}^{2}$ | $C_{3}$ | $E$ |

From this information, it is possible to separate the operations into classes. Note, for example, that $(\sigma_{v})^{-1} = \sigma_{v}$, $(\sigma_{v}')^{-1} = \sigma_{v}'$ and $(\sigma_{v}'')^{-1} = \sigma_{v}''$. Using these relationships, the similarity transforms of $C_{3}$ involving these operations all yield $C_{3}^{2}$.

$(\sigma_{v})^{-1} * C_{3} * \sigma_{v} = (\sigma_{v} * C_{3}) * \sigma_{v} = \sigma_{v}'' * \sigma_{v} = C_{3}^{2}$

$(\sigma_{v}')^{-1} * C_{3} * \sigma_{v}' = (\sigma_{v}' * C_{3}) * \sigma_{v}' = \sigma_{v} * \sigma_{v}' = C_{3}^{2}$

$(\sigma_{v}'')^{-1} * C_{3} * \sigma_{v}'' = (\sigma_{v}'' * C_{3}) * \sigma_{v}'' = \sigma_{v}' * \sigma_{v}'' = C_{3}^{2}$

Similarly, the similarity transforms on $C_{3}^{2}$ using these operations all yield $C_{3}$.

$(\sigma_{v})^{-1} * C_{3}^{2} * \sigma_{v} = (\sigma_{v} * C_{3}^{2}) * \sigma_{v} = \sigma_{v}' * \sigma_{v} = C_{3}$

$(\sigma_{v}')^{-1} * C_{3}^{2} * \sigma_{v}' = (\sigma_{v}' * C_{3}^{2}) * \sigma_{v}' = \sigma_{v}'' * \sigma_{v}' = C_{3}$

$(\sigma_{v}'')^{-1} * C_{3}^{2} * \sigma_{v}'' = (\sigma_{v}'' * C_{3}^{2}) * \sigma_{v}'' = \sigma_{v} * \sigma_{v}'' = C_{3}$

This is sufficient to indicate that the operations $C_{3}$ and $C_{3}^{2}$ belong to the same class. However, to show that these are the only two operations in the class, the similarity transforms involving the remaining operations must be examined as well.
Consider the similarity transforms based on the operators $E$, $C_{3}$ and $C_{3}^{2}$ acting on $C_{3}$:

$(E)^{-1} * C_{3} * E = (E * C_{3}) * E = E * C_{3} = C_{3}$

$(C_{3})^{-1} * C_{3} * C_{3} = (C_{3}^{2} * C_{3}) * C_{3} = E * C_{3} = C_{3}$

$(C_{3}^{2})^{-1} * C_{3} * C_{3}^{2} = (C_{3} * C_{3}) * C_{3}^{2} = C_{3}^{2} * C_{3}^{2} = C_{3}$

The fact that a similarity transform on either $C_{3}$ or $C_{3}^{2}$ never yields $\sigma_{v}$, $\sigma_{v}'$ or $\sigma_{v}''$ is a consequence of the proper rotation operations belonging to a different class than the reflection planes. In fact, there are three classes of operations for this point group: $\{E\}$, $\{C_{3}, C_{3}^{2}\}$ and $\{\sigma_{v}, \sigma_{v}', \sigma_{v}''\}$. This implies that there are three irreducible representations for this point group.

Another useful approach is to use matrix operators to effect the changes to the object caused by the symmetry operation. The choice of matrix operators depends on the basis set of functions being used to model the system. In this case, we will use the position vectors of the corners of the base of the pyramid. Another choice of basis might be the atomic orbitals on the atoms in a molecule. This is a very convenient choice when the task is constructing symmetry-adapted linear combinations of atomic orbitals for the purpose of modeling molecular orbitals. But I digress . . .

Consider the position vectors of the corners of the base of our trigonal pyramid. They can be specified by indicating the $(x, y, z)$ coordinates if the origin is located in the plane of the base along the axis where all of the symmetry elements intersect. Only corners 1, 2 and 3 will be important, since none of the symmetry elements moves the fourth corner! Assuming unit length for the base edges and a height of $h$ for the pyramid, the following table gives the $(x, y, z)$ coordinates for each of the four corners.

| Corner | $x$ | $y$ | $z$ |
|---|---|---|---|
| 1 | 0 | $\dfrac{1}{\sqrt{3}}$ | 0 |
| 2 | $1/2$ | $-\dfrac{1}{2\sqrt{3}}$ | 0 |
| 3 | $-1/2$ | $-\dfrac{1}{2\sqrt{3}}$ | 0 |
| 4 | 0 | 0 | $h$ |
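Before moving on, the class structure claimed above can be verified by brute force with a short script (a sketch building on the permutation picture; names are illustrative):

```python
# Sketch: conjugacy classes of C3v computed by brute force from the
# permutation picture (right-to-left products, as in the text).
ops = {
    "E":   (0, 1, 2), "C3":  (2, 0, 1), "C3^2": (1, 2, 0),
    "sv":  (0, 2, 1), "sv'": (2, 1, 0), "sv''": (1, 0, 2),
}
name = {p: n for n, p in ops.items()}

def product(a, b):                       # a*b: apply b first, then a
    pa, pb = ops[a], ops[b]
    return name[tuple(pa[i] for i in pb)]

def inverse(a):
    return next(x for x in ops if product(a, x) == "E")

def conjugacy_class(a):                  # { B^-1 * A * B  for all B }
    return frozenset(product(inverse(b), product(a, b)) for b in ops)

classes = {conjugacy_class(a) for a in ops}
for c in sorted(classes, key=len):
    print(sorted(c))
# ['E']
# ['C3', 'C3^2']
# ['sv', "sv'", "sv''"]
```

Three classes come out, in agreement with the similarity-transform arguments above.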
From the previous discussion, we have already determined the effects of each of the symmetry operations.

$E * (1,2,3) = (1,2,3) \qquad \sigma_{v} * (1,2,3) = (1,3,2)$

$C_{3} * (1,2,3) = (3,1,2) \qquad \sigma_{v}' * (1,2,3) = (3,2,1)$

$C_{3}^{2} * (1,2,3) = (2,3,1) \qquad \sigma_{v}'' * (1,2,3) = (2,1,3)$

The task now is to construct matrix representations for each of the symmetry operations that will effect the above changes when matrix multiplication is used as the operation. The identity element is easy. It will be the 3×3 identity matrix given by

$E=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)\nonumber$

This is easily confirmed since

$\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{c} x \\ y \\ z \end{array}\right)=\left(\begin{array}{c} x \\ y \\ z \end{array}\right)\nonumber$

for any choice of $x$, $y$ and $z$. The other operations are a little trickier, but not too hard. It can be shown that the matrix that effects a rotation of $\alpha$ radians about the $z$-axis is given by

$\left(\begin{array}{ccc} \cos \alpha & -\sin \alpha & 0 \\ \sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 1 \end{array}\right)\nonumber$

so that the resultant of this operation is given by

$\left(\begin{array}{ccc} \cos \alpha & -\sin \alpha & 0 \\ \sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{c} x \\ y \\ z \end{array}\right)=\left(\begin{array}{c} x\cos \alpha - y\sin \alpha \\ x\sin \alpha + y\cos \alpha \\ z \end{array}\right)\nonumber$

For a rotation of $2\pi/3$ radians, it is useful to note that

$\cos(2\pi/3) = -1/2 \qquad \sin(2\pi/3) = \sqrt{3}/2$

So the transformation of corner 1 of the pyramid is accomplished as follows for the $C_{3}$ operation.
$\left(\begin{array}{ccc} -1/2 & -\sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{c} 0 \\ 1/\sqrt{3} \\ 0 \end{array}\right)=\left(\begin{array}{c} -1/2 \\ -\dfrac{1}{2\sqrt{3}} \\ 0 \end{array}\right) \nonumber$

The operation has transformed corner 1 into corner 3. It is also easily shown that the operator matrix transforms corner 2 into corner 1, and corner 3 into corner 2. This is just as expected according to the expression shown above: $C_{3} * (1, 2, 3) = (3, 1, 2)$. Additionally, the matrix must satisfy the multiplication table relationship $C_{3} * C_{3} = C_{3}^{2}$.

$\left(\begin{array}{ccc} -1/2 & -\sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{ccc} -1/2 & -\sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)=\left(\begin{array}{ccc} -1/2 & \sqrt{3}/2 & 0 \\ -\sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)\nonumber$

This is the rotation matrix for a rotation of $-2\pi/3$ radians. Hence, the product worked out as expected, since the $C_{3}^{2}$ operation is equivalent to a rotation of $-2\pi/3$ radians.

The matrix representations for the $\sigma_{v}$ planes can be worked out by one of two methods. One is to set up the matrix equation for how a point is transformed. The other is to use the group multiplication table to generate a matrix as the product of two other operations in the group for which the matrices have already been established. To demonstrate these methods, recall from above that the $\sigma_{v}$ operation exchanges corners 2 and 3.
The matrix for this operation must satisfy the following expression:

$\left(\begin{array}{lll} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{array}\right)\left(\begin{array}{c} 1/2 \\ -\dfrac{1}{2\sqrt{3}} \\ 0 \end{array}\right)=\left(\begin{array}{c} -1/2 \\ -\dfrac{1}{2\sqrt{3}} \\ 0 \end{array}\right)\nonumber$

The matrix that will effect this transformation is:

$\left(\begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)\nonumber$

Now, using the group multiplication table, we can generate $\sigma_{v}'$ and $\sigma_{v}''$ by the relationships

$\sigma_{v} * C_{3}^{2} = \sigma_{v}' \qquad \sigma_{v} * C_{3} = \sigma_{v}''$

or

$\left(\begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{ccc} -1/2 & \sqrt{3}/2 & 0 \\ -\sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)=\left(\begin{array}{ccc} 1/2 & -\sqrt{3}/2 & 0 \\ -\sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)=\sigma_{v}'\nonumber$

$\left(\begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{ccc} -1/2 & -\sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)=\left(\begin{array}{ccc} 1/2 & \sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)=\sigma_{v}''\nonumber$

The set of matrices can now be used as a representation of the group. However, these matrices form a reducible representation of the group, since they are in block-diagonal form.
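Both of these checks lend themselves to quick numerical verification. Below is a sketch using NumPy; the corner coordinates and matrices are the ones given above, while the function names are illustrative:

```python
import numpy as np

# Sketch: check the matrix representation numerically, using the corner
# coordinates and the 3x3 matrices derived in the text.
a = 2 * np.pi / 3
C3 = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0,          0,         1]])
sv = np.diag([-1.0, 1.0, 1.0])          # reflection through the yz plane

corners = {1: np.array([ 0.0,  1/np.sqrt(3),      0.0]),
           2: np.array([ 0.5, -1/(2*np.sqrt(3)),  0.0]),
           3: np.array([-0.5, -1/(2*np.sqrt(3)),  0.0])}

def image(M, i):
    """Which corner the matrix M sends corner i to."""
    v = M @ corners[i]
    return next(k for k, c in corners.items() if np.allclose(v, c))

# C3 permutes the corners as (1,2,3) -> (3,1,2):
print([image(C3, i) for i in (1, 2, 3)])          # [3, 1, 2]

# sigma_v * C3^2 reproduces the sigma_v' matrix quoted in the text:
sv_prime = np.array([[ 1/2,          -np.sqrt(3)/2, 0],
                     [-np.sqrt(3)/2, -1/2,          0],
                     [ 0,             0,            1]])
print(np.allclose(sv @ C3 @ C3, sv_prime))        # True
```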
$E=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \quad C_{3} =\left(\begin{array}{ccc} -1/2 & -\sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right) \quad C_{3}^{2} =\left(\begin{array}{ccc} -1/2 & \sqrt{3}/2 & 0 \\ -\sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)\nonumber$

$\sigma_{v} =\left(\begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \quad \sigma_{v}' =\left(\begin{array}{ccc} 1/2 & -\sqrt{3}/2 & 0 \\ -\sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right) \quad \sigma_{v}'' =\left(\begin{array}{ccc} 1/2 & \sqrt{3}/2 & 0 \\ \sqrt{3}/2 & -1/2 & 0 \\ 0 & 0 & 1 \end{array}\right)\nonumber$

This representation can be broken down into two simpler representations. The first consists only of the lower right block of each of the matrices above. This yields the totally symmetric representation. The other is a representation of 2×2 matrices made from the upper left block of each of the matrices above. There is one other irreducible representation for the $C_{3v}$ point group. It is given in the table below without derivation, but it is easy to demonstrate that it satisfies the group multiplication table.

| $C_{3v}$ |  | $E$ | $C_{3}$ | $C_{3}^{2}$ |
|---|---|---|---|---|
| $\Gamma_1$ | $A_1$ | 1 | 1 | 1 |
| $\Gamma_2$ | $A_2$ | 1 | 1 | 1 |
| $\Gamma_3$ | $E$ | $\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)$ | $\left(\begin{array}{cc} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{array}\right)$ | $\left(\begin{array}{cc} -1/2 & \sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{array}\right)$ |

| $C_{3v}$ |  | $\sigma_{v}$ | $\sigma_{v}'$ | $\sigma_{v}''$ |
|---|---|---|---|---|
| $\Gamma_1$ | $A_1$ | 1 | 1 | 1 |
| $\Gamma_2$ | $A_2$ | -1 | -1 | -1 |
| $\Gamma_3$ | $E$ | $\left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array}\right)$ | $\left(\begin{array}{cc} 1/2 & -\sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{array}\right)$ | $\left(\begin{array}{cc} 1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{array}\right)$ |
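The block structure can be spot-checked numerically. A sketch using NumPy, with the 2×2 blocks taken from the upper-left corners of the matrices derived above:

```python
import numpy as np

# Sketch: the upper-left 2x2 blocks of the 3x3 matrices derived above
# form a two-dimensional representation on their own; spot-check a few
# entries of the group multiplication table at the block level.
s = np.sqrt(3) / 2
blocks = {
    "E":    np.array([[ 1,   0 ], [ 0,  1  ]]),
    "C3":   np.array([[-0.5, -s], [ s, -0.5]]),
    "C3^2": np.array([[-0.5,  s], [-s, -0.5]]),
    "sv":   np.array([[-1,   0 ], [ 0,  1  ]]),
    "sv'":  np.array([[ 0.5, -s], [-s, -0.5]]),
    "sv''": np.array([[ 0.5,  s], [ s, -0.5]]),
}
print(np.allclose(blocks["C3"] @ blocks["C3"], blocks["C3^2"]))      # True
print(np.allclose(blocks["sv"] @ blocks["C3^2"], blocks["sv'"]))     # True
print(np.allclose(blocks["sv'"] @ blocks["sv"], blocks["C3"]))       # True
```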
One thing that is important about irreducible representations is that they are orthogonal. This is the property that makes group theory so very useful in chemistry, because orthogonality makes integrals zero. It’s always easier to do the integrals when orthogonality tells us the result will be zero before doing any complicated math! The Great Orthogonality Theorem (GOT) can be stated: $\sum _{R}\left[\Gamma _{i} (R)_{mn} \right]\left[\Gamma _{j} (R)_{m'n'} \right]^{*} =\frac{h}{\sqrt{l_{i} l_{j} } } \delta _{ij} \delta _{mm'} \delta _{nn'}\nonumber$ (Any theorem with that many subscripts must have something truly useful to say!) In this notation, $\Gamma _{i} (R) _{mn}$ indicates the row m, column n element of the $i ^{th}$ irreducible representation for symmetry operation R. The m and n are needed since not all irreducible representations are made up of just 1 and –1. Many irreducible representations need to use matrices to represent each symmetry element. For these cases, $l_{i}$ gives the dimension of the matrices used in the $\Gamma _{i}$. In our example of the $C _{2v}$ point group, all irreducible representations have $l =1$, so the GOT can be stated more simply (for this point group specifically) as $\sum _{R}\left[\Gamma _{i} (R)\right]\left[\Gamma _{j} (R)\right]^{*} =h\delta _{ij}\nonumber$ Consider applying this statement to the $A _{2}$ and $B_{1}$ irreducible representations ($\Gamma _{2}$ and $\Gamma _{3}$ ) for the $C _{2v}$ point group. 
$\begin{array}{rcl} \sum_{R}\left[\Gamma_{2}(R)\right]\left[\Gamma_{3}(R)\right]^{*} & = & \Gamma_{2}(E)\Gamma_{3}(E)+\Gamma_{2}(C_{2})\Gamma_{3}(C_{2})+\Gamma_{2}(\sigma_{v})\Gamma_{3}(\sigma_{v})+\Gamma_{2}(\sigma_{v}')\Gamma_{3}(\sigma_{v}') \\ & = & (1)(1)+(1)(-1)+(-1)(1)+(-1)(-1) \\ & = & 1-1-1+1 \\ & = & 0 \end{array}\nonumber$

Similarly, using the GOT on just $\Gamma_{4}$ (the $B_{2}$ irreducible representation) yields the following:

$\begin{array}{rcl} \sum_{R}\left[\Gamma_{4}(R)\right]\left[\Gamma_{4}(R)\right]^{*} & = & \Gamma_{4}(E)\Gamma_{4}(E)+\Gamma_{4}(C_{2})\Gamma_{4}(C_{2})+\Gamma_{4}(\sigma_{v})\Gamma_{4}(\sigma_{v})+\Gamma_{4}(\sigma_{v}')\Gamma_{4}(\sigma_{v}') \\ & = & (1)(1)+(-1)(-1)+(-1)(-1)+(1)(1) \\ & = & 1+1+1+1 \\ & = & 4 \end{array}\nonumber$

Recall that the order of the group ($h$) is 4, because there are four symmetry operations in the group.

In the case of the $C_{3v}$ point group, there is a 2×2 matrix representation. Consider the upper right member (row 1, column 2) of each of the $\Gamma_{3}$ matrices and apply the GOT to these elements along with the elements of $\Gamma_{1}$ ($A_{1}$).

$\begin{array}{rcl} \sum_{R}\left[\Gamma_{1}(R)\right]\left[\Gamma_{3}(R)_{12}\right] & = & (1)(0)+(1)(-\sqrt{3}/2)+(1)(\sqrt{3}/2)+(1)(0)+(1)(-\sqrt{3}/2)+(1)(\sqrt{3}/2) \\ & = & 0 \end{array}\nonumber$

Similarly, applying the GOT to the row 1, column 1 elements of $\Gamma_{3}$, we see

$\begin{array}{rcl} \sum_{R}\left[\Gamma_{3}(R)_{11}\right]\left[\Gamma_{3}(R)_{11}\right] & = & (1)^{2}+(-1/2)^{2}+(-1/2)^{2}+(-1)^{2}+(1/2)^{2}+(1/2)^{2} \\ & = & 3 = 6/2 = h/l_{3} \end{array}\nonumber$

Now tell me . . . isn't that truly a Great Orthogonality Theorem? (Now how much would you pay?)
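The simplified (all $l_{i}=1$) statement of the GOT for $C_{2v}$ can be checked over every pair of irreducible representations at once. A short sketch (the characters are transcribed from the $C_{2v}$ table; names are illustrative):

```python
# Sketch: the simplified (all l_i = 1) GOT for C2v, checked over every
# pair of irreducible representations.  Operation order: E, C2, sv, sv'.
irreps = {
    "A1": [1,  1,  1,  1],
    "A2": [1,  1, -1, -1],
    "B1": [1, -1,  1, -1],
    "B2": [1, -1, -1,  1],
}
h = 4   # order of the group

for i in irreps:
    for j in irreps:
        s = sum(a * b for a, b in zip(irreps[i], irreps[j]))
        assert s == (h if i == j else 0)
print("sum_R G_i(R) G_j(R) = h * delta_ij holds for C2v")
```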
Once we introduce the concept of character, we will restate the GOT in terms of class characters.
Most summaries of group theory do not give the full matrix specifications for each irreducible representation in each important point group. Rather, a very useful quantity is defined, called the character. An important property of elements of the same class is that they share the same character. As such, it is only necessary to show the character once for each class of operations in the group. The character of an operation is given by the sum of the diagonal elements (the trace) of the matrix used to represent it.

$\chi_{i}(R)=\sum_{m}\Gamma_{i}(R)_{mm}\nonumber$

To evaluate the characters of each of the classes within each irreducible representation, we need only generate a representation for one operation within each class. The three irreducible representations for a characteristic operation in each class can be expressed as follows:

| $C_{3v}$ | $E$ | $C_{3}$ | $\sigma_{v}$ |
|---|---|---|---|
| $A_1$ | 1 | 1 | 1 |
| $A_2$ | 1 | 1 | -1 |
| $E$ | $\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)$ | $\left(\begin{array}{cc} \cos(2\pi/3) & -\sin(2\pi/3) \\ \sin(2\pi/3) & \cos(2\pi/3) \end{array}\right)$ | $\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)$ |

Taking the traces, the character table for the $C_{3v}$ group can be expressed as

| $C_{3v}$ | $E$ | 2 $C_{3}$ | 3 $\sigma_{v}$ |
|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 |
| $A_{2}$ | 1 | 1 | -1 |
| $E$ | 2 | -1 | 0 |

Note that the character of the identity element is always given by the dimension of the matrices used in the irreducible representation.

$\chi_{i}(E)=l_{i}\nonumber$

The GOT can be expressed in terms of characters:

$\sum_{R}\chi_{i}(R)\chi_{j}(R)=h\delta_{ij}\nonumber$

This statement has a number of important and useful properties and consequences. One relationship deals with the sum of the squares of the characters of the identity element:

$\sum_{i}\left[\chi_{i}(E)\right]^{2} =h\nonumber$

These expressions can be used to find and verify the characters for other point groups. For example, consider the partial character table for the point group $C_{4v}$.
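Before moving on to $C_{4v}$, the $C_{3v}$ character table just derived can be checked against the character form of the GOT. The sum runs over all six operations, which is the same as summing over classes weighted by their sizes (a sketch; names are illustrative):

```python
# Sketch: the character form of the GOT for C3v.  Summing over all six
# operations is the same as summing over classes weighted by the class
# sizes (1, 2 and 3 here).
sizes = [1, 2, 3]            # E, 2C3, 3sv
chars = {
    "A1": [1,  1,  1],
    "A2": [1,  1, -1],
    "E":  [2, -1,  0],
}
h = 6

for i in chars:
    for j in chars:
        s = sum(g * a * b for g, a, b in zip(sizes, chars[i], chars[j]))
        assert s == (h if i == j else 0)
print("character orthogonality holds for C3v")
```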
A typical kind of exam or quiz question might be to fill in the missing values. In this case, all of the values are missing! So let's tackle the problem based on what we know from definitions, and complete the problem by using the GOT.

| $C_{4v}$ | $E$ | 2 $C_{4}$ | $C_{2}$ | 2 $\sigma_{v}$ | 2 $\sigma_{d}$ |
|---|---|---|---|---|---|
| $A_{1}$ |  |  |  |  |  |
| $A_{2}$ |  |  |  |  |  |
| $B_{1}$ |  |  |  |  |  |
| $B_{2}$ |  |  |  |  |  |
| $E$ |  |  |  |  |  |

First off, the order of the group is $h = 8$. Second, every group has a totally symmetric representation. This is the $A_{1}$ representation, and its characters are all 1. Let's fill that in (using red for clarity.)

| $C_{4v}$ | $E$ | 2 $C_{4}$ | $C_{2}$ | 2 $\sigma_{v}$ | 2 $\sigma_{d}$ |
|---|---|---|---|---|---|
| $A_{1}$ | $\textcolor{red}{1}$ | $\textcolor{red}{1}$ | $\textcolor{red}{1}$ | $\textcolor{red}{1}$ | $\textcolor{red}{1}$ |
| $A_{2}$ |  |  |  |  |  |
| $B_{1}$ |  |  |  |  |  |
| $B_{2}$ |  |  |  |  |  |
| $E$ |  |  |  |  |  |

Additionally, we can fill in the column for the identity element. All of the A and B representations are singly degenerate, and the E representation is doubly degenerate, consistent with the expression

$\sum_{i}\left[\chi_{i}(E)\right]^{2} =h\nonumber$

That yields the following (shown in $\textcolor{red}{red}$):

| $C_{4v}$ | $E$ | 2 $C_{4}$ | $C_{2}$ | 2 $\sigma_{v}$ | 2 $\sigma_{d}$ |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | 1 |
| $A_{2}$ | $\textcolor{red}{1}$ |  |  |  |  |
| $B_{1}$ | $\textcolor{red}{1}$ |  |  |  |  |
| $B_{2}$ | $\textcolor{red}{1}$ |  |  |  |  |
| $E$ | $\textcolor{red}{2}$ |  |  |  |  |

And it clearly satisfies

$\sum_{i}\left[\chi_{i}(E)\right]^{2} = (1)^{2}+(1)^{2}+(1)^{2}+(1)^{2}+(2)^{2} = 8 = h\nonumber$

Now we use the definition that A representations have a character of 1 for (are symmetric with respect to) the principal rotation axis, and B representations have a character of –1 for (are antisymmetric with respect to) the principal axis rotation.
Thus, we can fill in

| $C_{4v}$ | $E$ | 2 $C_{4}$ | $C_{2}$ | 2 $\sigma_{v}$ | 2 $\sigma_{d}$ |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | 1 |
| $A_{2}$ | 1 | $\textcolor{red}{1}$ |  |  |  |
| $B_{1}$ | 1 | $\textcolor{red}{-1}$ |  |  |  |
| $B_{2}$ | 1 | $\textcolor{red}{-1}$ |  |  |  |
| $E$ | 2 | $\textcolor{red}{?}$ |  |  |  |

But what should we do about the character of the $C_{4}$ operation under the doubly degenerate irreducible representation E? One solution comes from another important consequence of the GOT: the columns of the character table are orthogonal as well,

$\sum_{i}\chi_{i}(R_{m})\chi_{i}(R_{n}) = \frac{h}{g_{m}}\delta_{mn}\nonumber$

where $g_{m}$ is the number of operations in the class of $R_{m}$. Using this relationship with the $E$ and $C_{4}$ columns (which must be orthogonal), we can solve for the character $x$ of the $C_{4}$ operation under the E irreducible representation.

$\sum_{i}\chi_{i}(E)\,\chi_{i}(C_{4}) = (1)(1)+(1)(1)+(1)(-1)+(1)(-1)+(2)x = 0\nonumber$

The only value of $x$ that will satisfy this expression is $x = 0$. We can enter this value and also apply the definitions that the $A_{1}$ and $B_{1}$ representations are symmetric with respect to the $\sigma_{v}$ operation and the $A_{2}$ and $B_{2}$ representations are antisymmetric with respect to $\sigma_{v}$.

| $C_{4v}$ | $E$ | 2 $C_{4}$ | $C_{2}$ | 2 $\sigma_{v}$ | 2 $\sigma_{d}$ |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | 1 |
| $A_{2}$ | 1 | 1 |  | $\textcolor{red}{-1}$ |  |
| $B_{1}$ | 1 | -1 |  | $\textcolor{red}{1}$ |  |
| $B_{2}$ | 1 | -1 |  | $\textcolor{red}{-1}$ |  |
| $E$ | 2 | 0 |  | $\textcolor{red}{?}$ |  |

Again, the question mark can be removed as above, this time using the orthogonality of the $E$ and $\sigma_{v}$ columns.

$\sum_{i}\chi_{i}(E)\,\chi_{i}(\sigma_{v}) = (1)(1)+(1)(-1)+(1)(1)+(1)(-1)+(2)x = 0\nonumber$

Once again, the only value of $x$ that satisfies the equation is $x = 0$. Now, we can apply the GOT to the representations $A_{1}$ and $A_{2}$ to generate an equation with two unknowns, which determines the characters of $C_{2}$ and $\sigma_{d}$ for the $A_{2}$ representation.
We can solve it because we know $x$ and $y$ can only be 1 or –1. (These are the only values possible for singly degenerate representations.)

$\begin{array}{rcl} \sum_{R}\chi_{1}(R)\chi_{2}(R) & = & \chi_{1}(E)\chi_{2}(E)+2\chi_{1}(C_{4})\chi_{2}(C_{4})+\chi_{1}(C_{2})\chi_{2}(C_{2})+2\chi_{1}(\sigma_{v})\chi_{2}(\sigma_{v})+2\chi_{1}(\sigma_{d})\chi_{2}(\sigma_{d}) \\ & = & (1)(1)+2(1)(1)+(1)x+2(1)(-1)+2(1)y \\ & = & 1+x+2y = 0 \end{array}\nonumber$

The only combination that works is $x = 1$ and $y = -1$. The character table now looks as follows:

| $C_{4v}$ | $E$ | 2 $C_{4}$ | $C_{2}$ | 2 $\sigma_{v}$ | 2 $\sigma_{d}$ |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | 1 |
| $A_{2}$ | 1 | 1 | $\textcolor{red}{1}$ | -1 | $\textcolor{red}{-1}$ |
| $B_{1}$ | 1 | -1 |  | 1 |  |
| $B_{2}$ | 1 | -1 |  | -1 |  |
| $E$ | 2 | 0 |  | 0 |  |

Completion of the rest of the character table is left as an exercise.
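A finished answer to that exercise can be checked numerically. In the sketch below, the characters are the standard $C_{4v}$ values (i.e. the expected answer, which this script verifies rather than derives):

```python
# Sketch: a numerical check for the finished exercise.  The characters
# below are the standard C4v values; the assertion verifies them against
# sum_R chi_i(R) chi_j(R) = h delta_ij.
sizes = [1, 2, 1, 2, 2]      # E, 2C4, C2, 2sv, 2sd
chars = {
    "A1": [1,  1,  1,  1,  1],
    "A2": [1,  1,  1, -1, -1],
    "B1": [1, -1,  1,  1, -1],
    "B2": [1, -1,  1, -1,  1],
    "E":  [2,  0, -2,  0,  0],
}
h = sum(sizes)               # 8

for i in chars:
    for j in chars:
        s = sum(g * a * b for g, a, b in zip(sizes, chars[i], chars[j]))
        assert s == (h if i == j else 0)
print("the completed C4v table satisfies the GOT")
```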
The intensity of a transition in the spectrum of a molecule is proportional to the magnitude squared of the transition moment matrix element.

$\text{Intensity } \propto \left|\int \left(\psi'\right)^{*} \vec{\mu}\left(\psi''\right) d\tau \right|^{2}$

By knowing the symmetry of each part of the integrand, the symmetry of the product can be determined as the direct product of the symmetries of each part: $(\psi')^{*}$, $\psi''$ and $\vec{\mu}$. This is helpful, since the integrand must not be antisymmetric with respect to any symmetry operation, or the integral will vanish by symmetry. Before exploring that concept, let's look at the concept of direct products.

This is a concept many people have seen, in the form of the rule that the integral of an odd function over a symmetric interval is zero. Recall what it means to be an "odd function" or an "even function."

| Symmetry | Definition | Integrals |
|---|---|---|
| Even | $f(-x) = f(x)$ | $\int_{-a}^{a}f(x)dx=2\int_{0}^{a}f(x)dx$ |
| Odd | $f(-x) = -f(x)$ | $\int_{-a}^{a}f(x)dx=0$ |

Consider the function $f(x)=\left(x^{3}-3x\right)e^{-x^{2}}$. The area under the curve on the side of the function for which $x > 0$ has exactly the same magnitude but opposite sign as the area under the other side of the graph. Mathematically,

$\begin{array}{rcl} \int_{-a}^{a}f(x)dx & = & \int_{-a}^{0}f(x)dx + \int_{0}^{a}f(x)dx \\ & = & -\int_{0}^{a}f(x)dx + \int_{0}^{a}f(x)dx = 0 \end{array}\nonumber$

It is also interesting to note that the function $f(x)$ can be expressed as the product of two functions, one of which is an odd function ($x^{3}-3x$) and the other of which is an even function ($e^{-x^{2}}$). The result is an odd function. By determining the symmetry of the function as a product of the eigenvalues of the functions with respect to the inversion operator, as discussed below, one can derive a similar result. The even/odd symmetry is an example of inversion symmetry.
Recall that the inversion operator (in one dimension) effects a change of sign on $x$. $\hat{i}f(x)=f(-x)\nonumber$ "Even" and "odd" functions are eigenfunctions of this operator, with eigenvalues of either +1 or –1. For the function used in the previous example, $f(x)=g(x)h(x)\nonumber$ where $g(x)=x^{3} -3x$ and $h(x)=e^{-x^{2} }$ Here, $g(x)$ is an odd function and $h(x)$ is an even function. The product is an odd function. This property is summarized for any $f(x)=g(x)h(x)$ in the following table.

  g(x)    h(x)    f(x)    $\hat{i}g(x)$    $\hat{i}h(x)$    $\hat{i}f(x)$
  even    even    even    $+1\,g(x)$       $+1\,h(x)$       $+1\,f(x)$
  even    odd     odd     $+1\,g(x)$       $-1\,h(x)$       $-1\,f(x)$
  odd     odd     even    $-1\,g(x)$       $-1\,h(x)$       $+1\,f(x)$

Note that the eigenvalue (+1 or –1) is simply the character of the inversion operation for the irreducible representation by which the function transforms! In a similar manner, the symmetry of any function that can be expressed as a product of functions (like the integrand in the transition moment matrix element) can be determined as the direct product of the irreducible representations by which each part of the product transforms. Consider the point group $C_ {2v}$ as an example. Recall the character table for this point group.

  $C_ {2v}$    E    $C_ {2}$    $\sigma_ {v}$    $\sigma_ {v}$'
  $A_ {1}$     1    1           1                1                $z$              $x ^{2} -y ^{2}$, $z ^{2}$
  $A_ {2}$     1    1           -1               -1               $R_ {z}$         $xy$
  $B_ {1}$     1    -1          1                -1               $x$, $R_ {y}$    $xz$
  $B_ {2}$     1    -1          -1               1                $y$, $R_ {x}$    $yz$

The direct product of irreducible representations is formed according to the definition $\chi _{prod} (R)=\chi _{i} (R)\otimes \chi _{j} (R)\nonumber$ So for the direct product of $B_ {1}$ and $B_ {2}$, the following table can be used.

  $C_ {2v}$                E    $C_ {2}$    $\sigma_ {v}$    $\sigma_ {v}$'
  $B_ {1}$                 1    -1          1                -1
  $B_ {2}$                 1    -1          -1               1
  $B_ {1} \otimes B_ {2}$  1    1           -1               -1

The product is the irreducible representation $A_ {2}$! As it turns out, the direct product will always yield a set of characters that either is an irreducible representation of the group or can be expressed as a sum of irreducible representations.
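Forming a direct product is mechanical enough to automate. The following sketch (Python; the dictionary layout and function names are my own, not from the text) multiplies characters operation by operation in $C_{2v}$ and matches the result against the character table:

```python
# Characters of the C2v irreducible representations, ordered (E, C2, sigma_v, sigma_v')
c2v = {
    "A1": (1,  1,  1,  1),
    "A2": (1,  1, -1, -1),
    "B1": (1, -1,  1, -1),
    "B2": (1, -1, -1,  1),
}

def direct_product(a, b):
    # Multiply characters operation by operation
    return tuple(x * y for x, y in zip(c2v[a], c2v[b]))

def identify(chars):
    # Match the product against the irreducible representations
    for name, ch in c2v.items():
        if ch == chars:
            return name
    return None

print(identify(direct_product("B1", "B2")))  # A2
```

The same lookup confirms the other entries of the multiplication table discussed next, e.g. $B_1 \otimes B_1 = A_1$.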
This suggests that a multiplication table can be constructed. An example (for the $C_ {2v}$ point group) is given below.

  $C_ {2v}$    $A_ {1}$    $A_ {2}$    $B_ {1}$    $B_ {2}$
  $A_ {1}$     $A_ {1}$    $A_ {2}$    $B_ {1}$    $B_ {2}$
  $A_ {2}$     $A_ {2}$    $A_ {1}$    $B_ {2}$    $B_ {1}$
  $B_ {1}$     $B_ {1}$    $B_ {2}$    $A_ {1}$    $A_ {2}$
  $B_ {2}$     $B_ {2}$    $B_ {1}$    $A_ {2}$    $A_ {1}$

Studying this table reveals some useful generalizations. Two things in particular jump from the page. These are summarized in the following tables.

       A    B              1    2
  A    A    B         1    1    2
  B    B    A         2    2    1

This pattern might seem obvious to some. It stems from the idea that

symmetric × symmetric = symmetric
symmetric × antisymmetric = antisymmetric
antisymmetric × antisymmetric = symmetric

Noting that A indicates that an irreducible representation is symmetric with respect to the $C_ {2}$ operation and B indicates that an irreducible representation is antisymmetric, and that the subscript 1 indicates that an irreducible representation is symmetric with respect to the $\sigma_ {v}$ operation while a subscript 2 indicates that an irreducible representation is antisymmetric, the rest follows! Some point groups have irreducible representations that use subscripts g/u or primes and double primes. The g/u subscript indicates symmetry with respect to the inversion ($i$) operation, and the prime/double prime indicates symmetry with respect to a $\sigma$ plane (generally the plane of the molecule for planar molecules). This method works well for singly degenerate representations. But what does one do for products involving doubly degenerate representations? As an example, consider the $C_ {3v}$ point group.

  $C_ {3v}$    E    2$C_ {3}$    3$\sigma_ {v}$
  $A_ {1}$     1    1            1                 $z$
  $A_ {2}$     1    1            -1                $R_ {z}$
  E            2    -1           0                 $(x, y)$, ($R_ {x}$, $R_ {y}$)

Consider the direct product of $A_ {2}$ and E.

  $C_ {3v}$           E    2$C_ {3}$    3$\sigma_ {v}$
  $A_ {2}$            1    1            -1
  E                   2    -1           0
  $A_ {2} \otimes E$  2    -1           0

This product is clearly just the E representation.
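The same bookkeeping extends to any point group once the classes and their sizes are tabulated. The sketch below (Python; names are my own) forms $A_2 \otimes E$ in $C_{3v}$ and identifies its content with the standard reduction formula, $n_i = \frac{1}{h}\sum_R g(R)\chi_i(R)\chi(R)$, which is developed further in the next section:

```python
# C3v classes (E, 2C3, 3sigma_v), their sizes, and the group order h
sizes = [1, 2, 3]
h = sum(sizes)   # 6

irreps = {
    "A1": [1,  1,  1],
    "A2": [1,  1, -1],
    "E":  [2, -1,  0],
}

def direct_product(a, b):
    # Multiply characters class by class
    return [x * y for x, y in zip(irreps[a], irreps[b])]

def reduce_rep(chars):
    # Reduction formula: n_i = (1/h) * sum over classes of g(R) chi_i(R) chi(R)
    return {name: sum(g * ci * cp for g, ci, cp in zip(sizes, chi, chars)) // h
            for name, chi in irreps.items()}

print(reduce_rep(direct_product("A2", "E")))  # {'A1': 0, 'A2': 0, 'E': 1}
```

Because `reduce_rep` works for any set of characters, it also handles reducible products such as $E \otimes E$, where more than one count comes out non-zero.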
Now one other example – consider the product $E \otimes E$.

  $C_ {3v}$       E    2$C_ {3}$    3$\sigma_ {v}$
  E               2    -1           0
  E               2    -1           0
  $E \otimes E$   4    1            0

To find the irreducible representations that comprise this reducible representation, we proceed in the same manner as determining the number of vibrational modes belonging to each symmetry. $\begin{array}{rcl} {N_{A_{1} } } & {=} & {\dfrac{1}{6} \left[(1)(4)+2(1)(1)+3(1)(0)\right]=1} \\ {N_{A_{2} } } & {=} & {\dfrac{1}{6} \left[(1)(4)+2(1)(1)+3(-1)(0)\right]=1} \\ {N_{E} } & {=} & {\dfrac{1}{6} \left[(2)(4)+2(-1)(1)+3(0)(0)\right]=1} \end{array}\nonumber$ This allows us to build a table of direct products. Notice that the direct product always has a total dimensionality given by the product of the dimensions of its factors.

  $C_ {3v}$    $A_ {1}$    $A_ {2}$    E
  $A_ {1}$     $A_ {1}$    $A_ {2}$    E
  $A_ {2}$     $A_ {2}$    $A_ {1}$    E
  E            E           E           $A_ {1} + A_ {2} + E$

The concepts developed in this chapter will be used extensively in the discussions of vibrational, rotational and electronic degrees of freedom in atoms and molecules.

3.10: Vocabulary and Concepts

abelian, abelian group, character, class, closed, commutative, complementary, direct product, Great Orthogonality Theorem, group, group multiplication table, Group Theory, identity element, inverse, irreducible representations, multiplication, operation, order, principal rotation axis, representation, Schoenflies notation, similarity transform, symmetry element, symmetry operations, totally symmetric representation

3.11: Problems

1. Find the symmetry elements and point groups for the following molecules:
   1. \(SF_{4}\)
   2. \(CHCl_{3}\)
   3. Pyridine
   4. Naphthalene
   5. \(ICl_{5}\)
   6. \(PCl_{5}\)
2. Consider diazine, which has three isomers. Determine which isomer(s) has/have \(C_{2v}\) symmetry and which has/have \(D_{2h}\) symmetry.
3. Complete the following character table.

               E    2A    2B    C    3D    3F
  \(A_ {1}\)   1    1     1     1    1     1
  \(A_ {2}\)   1    1     1     1    -1    -1
  \(B_{1}\)    _    _     _     _    _     1
  \(B_{2}\)    1    -1    1     -1   -1    1
  \(E_{1}\)    _    _     _     _    _     1
  \(E_{2}\)    _    _     _     _    -1    1

4.
Complete the following direct product table.

  \(C_{4h}\)    \(A_ {g}\)    \(B_{g}\)    \(E_{g}\)                    \(A_ {u}\)    \(B_{u}\)    \(E_{u}\)
  \(A_ {g}\)    \(A_ {g}\)    \(B_{g}\)    \(E_{g}\)                    \(A_ {u}\)    \(B_{u}\)    \(E_{u}\)
  \(B_{g}\)     \(B_{g}\)     _            _                            _             _            _
  \(E_{g}\)     \(E_{g}\)     _            \(A_ {g} +B_{g} +E_{g}\)     _             _            \(A_ {u} +B_{u} +E_{u}\)
  \(A_ {u}\)    \(A_ {u}\)    _            _                            \(A_ {g}\)    _            _
  \(B_{u}\)     \(B_{u}\)     _            _                            _             _            _
  \(E_{u}\)     \(E_{u}\)     _            _                            _             _            _

5. Consider the following group multiplication table. Separate the operations into classes.

       E    A    B    C    D    F
  E    E    A    B    C    D    F
  A    A    B    E    F    C    D
  B    B    E    A    D    F    C
  C    C    D    F    E    A    B
  D    D    F    C    B    E    A
  F    F    C    D    A    B    E

6. Demonstrate that the \(A_ {2}\), \(B_{1}\), \(B_{2}\) and E irreducible representations are orthogonal to the \(A_ {1}\) irreducible representation under the point group \(C_{4v}\).
7. A point group has 8 operations which fall into five classes. How many irreducible representations will it have? How many will be singly degenerate? How many will be doubly degenerate?
One of the four important problems in quantum mechanics that can be solved analytically is that of the harmonic oscillator. This problem is very important to chemists, as it provides the model for vibrating molecules and explains what we see in infrared and Raman spectra of molecules. In this chapter we will develop the problem, discuss the limitations of the simple model and how we deal with them, and the applications of the conclusions to molecular spectroscopy and the measurement of molecular properties. Thumbnail: The rigid rotor model for a diatomic molecule. (CC BY-SA 3.0 Unported; Mysterioso via Wikipedia)

04: The Harmonic Oscillator and Vibrational Spectroscopy

5.2: The Equation for a Harmonic-Oscillator Model of a Diatomic Molecule Contains the Reduced Mass of the Molecule

Consider the potential energy surface for a diatomic molecule. The functional form can be seen in the following graph. In the surface, it is easy to see the "hard wall" on the left side, where the repulsive force between atoms is strong (which is why the curve is so steep), and the "soft wall" on the right side of the well, where the restorative force of the chemical bond acts. The bond length at the potential minimum is indicated by $r_ {e}$, the equilibrium bond length. The function can be expressed as a Taylor series expansion. For convenience, we can define $x = (r - r_{e})$. We will also define the zero of energy to be the bottom of the potential well. Given these definitions, the Taylor expansion about $x = 0$ can be expressed by $U(x)=U(0)+\left. \dfrac{d}{dx} U(x)\right|_{x=0} (x)+\dfrac{1}{2} \left. \dfrac{d^{2} }{dx^{2} } U(x)\right|_{x=0} (x^{2} )+\dfrac{1}{6} \left. \dfrac{d^{3} }{dx^{3} } U(x)\right|_{x=0} (x^{3} )+\cdots\nonumber$ We can evaluate these terms qualitatively based on the above diagram and the definitions provided above.
The first two terms of the expansion are zero, by the choice of the zero of energy and because the first derivative is zero at the potential minimum. The third and fourth terms are simplified by making the following substitutions $\left. \dfrac{d^{2} }{dx^{2} } U(x)\right|_{x=0} \equiv k$ and $\left. \dfrac{d^{3} }{dx^{3} } U(x)\right|_{x=0} \equiv \gamma$ The function can then be rewritten as $U(x)=\dfrac{1}{2} k\, x^{2} +\dfrac{1}{6} \gamma \, x^{3} +\cdots\nonumber$ And if the series is truncated at the $x^{2}$ term, it yields the familiar harmonic oscillator potential energy function that corresponds to a Hooke's law oscillator. $U(x)=\dfrac{1}{2} k\, x^{2}\nonumber$

Transforming to Center of Mass Coordinates

Consider a diatomic molecule that can be modeled as two masses ($m_ {1}$ and $m_ {2}$) attached by a spring that has a force constant $k$. The location of atom 1 is $z_ {1}$ and that of atom 2 is $z_ {2}$. The equilibrium length of the spring is $r_ {e}$. The force acting on either atom can be expressed in two ways, $F = ma\nonumber$ and $F=-kx\nonumber$ where $m$ is either $m_ {1}$ or $m_ {2}$ and $x$ is the displacement from the equilibrium distance, given by $x = (z _{2} - z _{1} - r _{e})\nonumber$ The force acting on atom 1 is in the opposite direction of that acting on atom 2. This suggests two equations that will govern the motion of atom 1 and atom 2 respectively. $m_{1} \dfrac{d^{2} }{dt^{2} } z_{1} =k\left(z_{2} -z_{1} -r_{e} \right)$ and $-m_{2} \dfrac{d^{2} }{dt^{2} } z_{2} =k\left(z_{2} -z_{1} -r_{e} \right)$ Dividing both equations by the masses yields the following pair of equations.
$\dfrac{d^{2} }{dt^{2} } z_{1} =\dfrac{k}{m_{1} } \left(z_{2} -z_{1} -r_{e} \right)$ and $-\dfrac{d^{2} }{dt^{2} } z_{2} =\dfrac{k}{m_{2} } \left(z_{2} -z_{1} -r_{e} \right)$ Adding these two equations yields $\dfrac{d^{2} }{dt^{2} } z_{1} -\dfrac{d^{2} }{dt^{2} } z_{2} =\left(\dfrac{1}{m_{1} } +\dfrac{1}{m_{2} } \right)k\left(z_{2} -z_{1} -r_{e} \right)\nonumber$ The term $\left(\dfrac{1}{m_{1} } +\dfrac{1}{m_{2} } \right)$ has important significance, as it is the reciprocal of the reduced mass. $\left(\dfrac{1}{m_{1} } +\dfrac{1}{m_{2} } \right)=\dfrac{m_{1} +m_{2} }{m_{1} m_{2} } =\dfrac{1}{\mu }\nonumber$ $\mu =\dfrac{m_{1} m_{2} }{m_{1} +m_{2} }\nonumber$ The reduced mass is introduced as a consequence of moving to center of mass coordinates. It is the mass of a single object that would move with the same frequency of oscillation were it attached to a fixed point by a spring of the same force constant. It is important to note that $\mu$ has units of mass. Also, in the limit that $m_ {1}$ and $m_ {2}$ have the same value (let's call it $m_ {1}$) $\begin{array}{rcl} {\mu } & {=} & {\dfrac{m_{1} m_{1} }{m_{1} +m_{1} } } \\ {} & {=} & {\dfrac{m_{1}^{2} }{2m_{1} } =\dfrac{m_{1} }{2} } \end{array}\nonumber$ This result makes a great deal of sense, because for equal masses the motion of the molecule will involve equal and opposite motions of the two atoms relative to the center of mass (which will be the middle of the bond). Thus, a single mass oscillating with the same frequency is moving relative to a point that is in the middle of the spring. Hence, the mass will have to be half the mass of one of the atoms, or the frequency would be different. The other important limit is when one mass is significantly larger than the other.
Consider what happens when $m_ {1} \gg m_ {2}$: $\begin{array}{rcl} {\mu } & {=} & {\dfrac{m_{1} m_{2} }{m_{1} +m_{2} } } \\ {} & {\approx } & {\dfrac{m_{1} m_{2} }{m_{1} } =m_{2} } \end{array}\nonumber$ This result makes a great deal of sense because if one mass is significantly larger than the other, it is the light atom that undergoes the larger motion. In the limit that $m_ {1} = \infty$, the center of mass is located at $z_ {1}$ and the heavy atom becomes a fixed point in the motion. The next task is to simplify things further by introducing the center-of-mass coordinate, $Z$. $Z\equiv \dfrac{m_{1} z_{1} +m_{2} z_{2} }{m_{1} +m_{2} }\nonumber$ This expression gives the location of the center of mass of the molecule. The utility of this substitution is found in taking the difference of the two equations $m_{1} \dfrac{d^{2} }{dt^{2} } z_{1} =k\left(z_{2} -z_{1} -r_{e} \right)$ and $-m_{2} \dfrac{d^{2} }{dt^{2} } z_{2} =k\left(z_{2} -z_{1} -r_{e} \right)$ which yields $\begin{array}{rcl} {m_{1} \dfrac{d^{2} }{dt^{2} } z_{1} +m_{2} \dfrac{d^{2} }{dt^{2} } z_{2} } & {=} & {0} \\ {\dfrac{d^{2} }{dt^{2} } \left(m_{1} z_{1} +m_{2} z_{2} \right)} & {=} & {0} \end{array}\nonumber$ Dividing both sides by $(m_ {1} + m_ {2})$ yields $\begin{array}{rcl} {\left(\dfrac{1}{m_{1} +m_{2} } \right)\dfrac{d^{2} }{dt^{2} } \left(m_{1} z_{1} +m_{2} z_{2} \right)} & {=} & {0} \\ {\dfrac{d^{2} }{dt^{2} } \left(\dfrac{m_{1} z_{1} +m_{2} z_{2} }{m_{1} +m_{2} } \right)} & {=} & {0} \end{array}\nonumber$ Finally, making the substitution for the center of mass gives $\dfrac{d^{2} }{dt^{2} } Z=0\nonumber$ which tells us that the center of mass of the system does not move in time.
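The limiting behavior of the reduced mass is easy to verify numerically. A short sketch (Python; the $^{1}H^{35}Cl$ isotope masses and the amu-to-kg factor are standard values, not taken from the text):

```python
def reduced_mass(m1, m2):
    # mu = m1 * m2 / (m1 + m2); units follow the inputs
    return m1 * m2 / (m1 + m2)

mu_equal = reduced_mass(1.0, 1.0)     # equal masses -> m/2
mu_heavy = reduced_mass(1.0e6, 1.0)   # m1 >> m2 -> approximately m2

# A real case: 1H35Cl, from isotope masses in amu converted to kg
amu = 1.66054e-27   # kg per atomic mass unit
mu_hcl = reduced_mass(1.007825, 34.968853) * amu
print(mu_equal, mu_heavy, mu_hcl)
```

The last value reproduces the $1.627 \times 10^{-27}$ kg quoted for HCl later in the chapter.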
It is convenient to make the substitution $x=\left(z_{2} -z_{1} -r_{e} \right)\nonumber$ This then allows us to write the Hamiltonian for the system as $\hat{H}=-\dfrac{\hbar ^{2} }{2\mu } \dfrac{d^{2}}{dx^{2} } +\dfrac{1}{2} kx^{2}\nonumber$ where $\mu$ is the reduced mass given by $\mu =\dfrac{m_{1} m_{2} }{m_{1} +m_{2}}\nonumber$ $k$ is the force constant of the bond and $x$ is defined by $x = (r - r _{e})$ as previously stated. The Schrödinger equation is then given by $\left(-\dfrac{\hbar ^{2} }{2\mu } \dfrac{d^{2} }{dx^{2} } +\dfrac{1}{2} kx^{2} \right)\psi (x)=E\psi (x)\nonumber$

Energy Levels

The boundary conditions require that the square of the wavefunction must have a finite area below it in order to ensure that the wavefunction is normalizable. The only way this happens is if the following conditions are met: ${\mathop{\lim }\limits_{x\to \pm \infty }} \psi (x)=0\nonumber$ The resulting energy levels are the set of eigenvalues that correspond to the functions that satisfy the above stated boundary condition. These energies have values given by $E_{v} =\hbar \sqrt{\dfrac{k}{\mu } } \left(v+\dfrac{1}{2} \right) \qquad v = 0, 1, 2, 3, \ldots$ Notice how the use of the boundary conditions is what leads to the introduction of quantized energies. The resulting energy levels are evenly spaced with increasing energy. The actual spacing is determined by the physical characteristics of a given molecule, namely the reduced mass and the force constant.

Spectroscopic Constants and Force Constants

Vibrational spectroscopy is often done using units of $cm ^{-1}$. Energies expressed in terms of this unit are called term values. The term value is given as the energy divided by Planck's constant and the speed of light ($E/hc$). Standard notation uses the symbol $G _{v}$ to indicate the term value for vibrational energy.
$G_ {v}$ is given by $G_ {v} = \dfrac{E_{v} }{hc} = \omega_{e} \left(v + \dfrac{1}{2}\right)\nonumber$ where $\omega _{e} =\dfrac{1}{2\pi \, c} \sqrt{\dfrac{k}{\mu } }\nonumber$ The vibrational constant $\omega_{e}$ can be determined experimentally for specific molecules. Consider the following values for various molecules.

  Molecule             $\omega_ {e} (cm ^{-1} )$    k (N/m)    $\mu$ (kg)
  $^{1} H ^{35} Cl$    2989.74                      516        $1.627 \times 10 ^{-27}$
  $^{1} H ^{79} Br$    2649.67                      412        $1.652 \times 10 ^{-27}$
  $^{1} H ^{127} I$    2309.5                       314        $1.660 \times 10 ^{-27}$
  $^{19} F ^{19} F$    916.64                       470        $1.577 \times 10 ^{-26}$
  $^{16} O ^{16} O$    1580.93                      1177       $1.328 \times 10 ^{-26}$
  $^{14} N ^{14} N$    2359.61                      2295       $1.163 \times 10 ^{-26}$

Two important points can be made from this data. First, a typical force constant for a single bond is on the order of a few hundred N/m. Secondly, multiple bonds lead to significantly larger force constants. This is not too surprising, since the force constant gives a measure of the stiffness of the bond.

The Wavefunctions

The wavefunctions for the harmonic oscillator are determined by solving the Schrödinger equation. As stated before, the only wavefunctions that obey the boundary conditions have eigenvalues given by $E_{v} =\hbar \sqrt{\dfrac{k}{\mu } } \left(v+\dfrac{1}{2} \right)\nonumber$ where $v = 0, 1, 2, 3, \ldots$ The wavefunctions themselves can be determined by solving the differential equation using a power-series solution. In the end, we find that the resulting functions involve a set of orthogonal polynomials known as the Hermite polynomials. We will discuss some properties of this important set of functions before discussing the wavefunctions themselves.

Hermite Polynomials

The Hermite polynomials are a set of orthogonal polynomials.
Like all sets of orthogonal polynomials, they have 1) a generator formula, 2) an orthogonality relationship and 3) one or more recursion relations that relate one function in the series to others. The Hermite polynomials can be generated using the following function $H_{v} (y)=(-1)^{v} e^{y^{2} } \dfrac{d^{v} }{dy^{v} } e^{-y^{2} }\nonumber$ Using this function, the first few Hermite polynomials can be generated.

  v    $H_ {v} (y)$
  0    $1$
  1    $2y$
  2    $4y ^{2} -2$

Further members of the set of functions can be generated using one of the important recursion relations. $H_{v+1} (y)=2yH_{v} (y)-2vH_{v-1} (y)\nonumber$ Using this relation, we can generate a longer list of Hermite polynomials without having to take so many derivatives.

  v    $H_{v}(y)$
  0    $1$
  1    $2y$
  2    $4y ^{2} -2$
  3    $8y ^{3} -12y$
  4    $16y ^{4} -48y ^{2} +12$
  5    $32y ^{5} -160y ^{3} +120y$
  etc.

Another important relationship between these functions is that $\dfrac{d}{dy} H_{v} (y)=2vH_{v-1} (y)\nonumber$ In addition to these relationships, the Hermite polynomials have an important orthogonality relationship. $\int_{-\infty}^{\infty} H_v(y) H_{v^{\prime}}(y) e^{-y^2} d y=v ! 2^v \sqrt{\pi} \delta_{v v^{\prime}} \nonumber$ The Hermite polynomials also have important symmetry properties. Each function in the set is an eigenfunction of the inversion operator. The inversion operator is a symmetry operator that is defined by the operation (in one dimension) $\hat{i}f(x)=f(-x)\nonumber$ Functions that are eigenfunctions of this operator can be classified as being either even functions or odd functions.

  Even    $f(-x)=f(x)$
  Odd     $f(-x) = -f(x)$

Even functions are symmetric eigenfunctions of the inversion operator and odd functions are antisymmetric eigenfunctions, as their eigenvalues are +1 and -1 respectively. Even and odd functions also have important properties when integrated over symmetric intervals.
  Even    $\int _{-a}^{a}f(x)dx =2\int _{0}^{a}f(x)dx$
  Odd     $\int _{-a}^{a}f(x)dx =0$

These properties can greatly simplify integrals involving these types of functions!

The Harmonic Oscillator Wavefunctions

The wavefunctions for the harmonic oscillator have three important parts: 1) a normalization constant, 2) a Hermite polynomial and 3) an exponential function that ensures that the wavefunctions vanish as $x \to \pm \infty$ and so remain normalizable. ${\psi }_v\left(x\right)=N_vH_v\left({\alpha }^{1/2}x\right)e^{-\alpha x^2/2}\nonumber$ where $\alpha =\dfrac{\sqrt{k\cdot \mu }}{\hbar }$ and $N_v=\sqrt{\dfrac{\sqrt{\alpha / \pi}}{2^v \cdot v !}}$

Expectation Values

The simplicity of the wavefunctions makes the calculation of expectation values very simple for the harmonic oscillator problem.

Position

The expectation value of position can be determined solely based on symmetry arguments. Recall that harmonic oscillator wavefunctions are either even or odd functions. The symmetry of the products of even or odd functions can be summarized as follows.

  ×       even    odd
  even    even    odd
  odd     odd     even

It is easy to recognize this multiplication table as arising from taking the products of the eigenvalues of the functions with respect to the inversion operator.

  ×     +1    -1
  +1    +1    -1
  -1    -1    +1

These results will be used to demonstrate that the expectation value of position is the same for all of the stationary wavefunctions. Consider the integral required to calculate this value. $\left\langle x\right\rangle =\int _{-\infty }^{\infty }\psi _{v} \cdot x\cdot \psi _{v} dx\nonumber$ The wavefunction $\psi_{v}$ is either an even or odd function, depending only on whether $v$ is even or odd. Since the $\hat{x}$ operator is itself an odd function (always), there are only two cases to consider for the total symmetry of the integrand.
  $\psi _{v}$    $x$    $\psi _{v}$    Integrand Symmetry
  even           odd    even           odd
  odd            odd    odd            odd

The pattern emerges from the fact that the product of even and odd functions produces a resulting function according to the symmetry multiplication table given above. Regardless of whether the wavefunction is an even or odd function, the product $\psi _{v} \cdot x \cdot \psi _{v}$ is always an odd function. And as we have seen before, the integral of an odd function over any symmetric interval is zero by symmetry. Therefore, the expectation value of position, $\langle x\rangle$, is always 0 for any eigenstate of the harmonic oscillator. This means that $\langle r\rangle = r _{e}$, the equilibrium bond length.

Momentum

The evaluation of the expectation value of momentum can be made following the same symmetry arguments. In order to do this, one must consider the effect of taking the derivative of a function. Consider the following even function $f(x)=4x^{2} -2\nonumber$ The first derivative of this function is given by $\dfrac{d}{dx} f(x)=8x\nonumber$ which is an odd function. The derivative of this function $\dfrac{d}{dx} 8x=8\nonumber$ yields an even function. The following set of properties will hold for the symmetries of functions and their derivatives.

  $f(x)$    $\dfrac{d}{dx} f(x)$
  even      odd
  odd       even

As such, the integrand for the calculation of the expectation value of momentum $\int _{-\infty }^{\infty }\psi _{v} \hat{p}\psi _{v} dx\nonumber$ must always be an odd function, since $\hat{p}$ takes the first derivative of the wavefunction.

  $\psi _{v}$    $\hat{p}\,\psi _{v}$    Integrand Symmetry
  even           odd                     odd
  odd            even                    odd

The result is that the expectation value of momentum, $\langle p \rangle$, must also be 0 for any eigenstate of the harmonic oscillator problem. Again, this can be reasoned by noting that half of the time the momentum measured will be in the direction of the bond stretching, and the other half of the time in the direction of the bond being compressed.
On average, these two circumstances will cancel, yielding an average value of $\langle p\rangle = 0$.

Energy

As with any eigenstate, the expectation value of energy $\langle E\rangle$ is easy to calculate. Recall that the wavefunctions were determined to be eigenfunctions of the Hamiltonian. $\hat{H}\psi _{v} =E_{v} \psi _{v}\nonumber$ As such, the expectation value of energy is trivially easy to find for a system in an eigenstate. $\begin{array}{l} {\langle E \rangle =\int _{-\infty }^{\infty }\psi _{v} \hat{H}\psi _{v} dx } \\ {=\int _{-\infty }^{\infty }\psi _{v} E_{v} \psi _{v} dx } \\ {=E_{v} \int _{-\infty }^{\infty }\psi _{v} \psi _{v} dx } \\ {=E_{v} } \end{array}\nonumber$ since the wavefunctions are normalized. The expectation value of energy is always an eigenvalue of the Hamiltonian for a system that is in an eigenstate of the Hamiltonian.

Tunneling

One of the curious consequences of quantum mechanics can be seen in the form of tunneling. This odd behavior becomes possible whenever the square of the wavefunction extends beyond a classical barrier to the motion of the particle or molecule. In the case of the harmonic oscillator, this is possible because the squared wavefunction extends beyond the classical turning points of the oscillation. The classical turning point is defined as the point in the motion where all energy has been converted from kinetic energy to potential energy. At this point, the motion switches direction as potential energy is converted back into kinetic energy. Since there is a non-zero value of the squared wavefunction beyond this point for all eigenstates, there is a non-zero probability of measuring the position of the system to lie beyond these classical turning points. And if there is another potential well accessible when the system tunnels through the classical barrier, there is a non-zero probability of finding the system in that well, meaning that the system may have changed states completely!
This result is another example of the bizarreness of quantum mechanics. If one were to consider a classical ball that is thrown against the wall at the front of the classroom, one expects that the ball will return to the thrower after bouncing off the wall every time. But for a quantum mechanical ball, there is a non-zero probability of finding the ball on the other side of the wall! If this were the case, the ball would be said to have tunneled through the wall. The probability of this happening is proportional to the fraction of the area under the squared wavefunction curve that lies beyond the classical barrier. This probability is decreased for heavier objects, as the fraction of the wavefunction beyond the classical barrier will be smaller.
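For the harmonic oscillator ground state this fraction can actually be computed in closed form. With $\alpha = \sqrt{k\mu }/\hbar$, the density is $|\psi_0|^2 \propto e^{-\alpha x^2}$ and the classical turning points sit at $x = \pm 1/\sqrt{\alpha }$, so the probability of being found beyond them reduces to $\operatorname{erfc}(1) \approx 0.157$, independent of the mass and force constant. A quick numerical cross-check (Python):

```python
import math

# Ground-state probability density: |psi_0|^2 = sqrt(alpha/pi) * exp(-alpha x^2),
# with alpha = sqrt(k*mu)/hbar; the classical turning points are x = +/- 1/sqrt(alpha).
# In the scaled variable u = sqrt(alpha) x, the probability outside them is
# (2/sqrt(pi)) * integral from 1 to infinity of exp(-u^2) du = erfc(1).
p_exact = math.erfc(1.0)

# Crude Simpson's-rule cross-check of the same integral (tail truncated at u = 8)
n, a, b = 4000, 1.0, 8.0
h = (b - a) / n
s = math.exp(-a * a) + math.exp(-b * b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * math.exp(-(a + i * h) ** 2)
p_num = (2 / math.sqrt(math.pi)) * s * h / 3

print(p_exact, p_num)  # both ~0.157
```

So a ground-state oscillator has roughly a 16% chance of being found in the classically forbidden region; for excited states the fraction must be evaluated state by state.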
Keeping in mind that the harmonic oscillator model is an approximate model, it should not come as a surprise that there are a number of shortcomings to it. The harmonic oscillator does not place any constraints on bond length. At the short bond length side of the potential, there is nothing in the model to prevent the bond length from becoming zero or even negative (implying that it is possible for one atom to pass through the other in a molecule). Additionally, the harmonic oscillator does not allow for molecular dissociation, as the potential energy just keeps increasing with increasing bond length. Nonetheless, the harmonic oscillator model works quite well for small displacements from the equilibrium bond length.

The Morse Potential

One improved form of a potential energy function was provided by Philip Morse (Morse, 1929). The Morse potential is given by the following function $U(r)=D_{e} \left(1-e^{-\beta (r-r_{e} )} \right)^{2}\nonumber$ where $D_{e}$ is the dissociation energy of the molecule. While this function still allows for negative bond lengths, it does allow for molecular dissociation at long bond lengths. The force constant for the Morse potential is determined by evaluating the second derivative of the potential energy function at the potential minimum. $k=\left. \frac{d^{2} }{dr^{2} } U(r)\right|_{r=r_{e} }\nonumber$ Based on the expression given above for the Morse potential, the following result is obtained. $k=2D_{e} \beta ^{2}\nonumber$

Anharmonicity

A solution to the Schrödinger equation using the Morse potential produces an additional constant in the energy expression for vibrational energy.
$G_{v} =\omega _{e} \left(v+\tfrac{1}{2}\right)-\omega _{e} x_{e} \left(v+\tfrac{1}{2}\right)^{2}\nonumber$ The new constant, $\omega_{e} x_{e}$, is called an anharmonicity constant, as it accounts for deviation from the harmonic potential. For a more general potential energy function, the expression for the vibrational term value can be expressed as a longer power series in $(v+\tfrac{1}{2})$. $G_{v} =\omega _{e} \left(v+\tfrac{1}{2}\right)-\omega _{e} x_{e} \left(v+\tfrac{1}{2}\right)^{2} +\omega _{e} y_{e} \left(v+\tfrac{1}{2}\right)^{3} +\cdots\nonumber$ For well-behaved molecules, the magnitude of the anharmonicity constants decreases with increasing order in $(v+\tfrac{1}{2})$. Thus, the series can be truncated at some point and will provide an adequate model for the purposes of fitting experimental data.

4.04: Vibrational Spectroscopy Techniques

Infrared and Raman spectroscopy are two experimental methods that are commonly used by chemists to measure vibrational frequencies ($\omega_ {e}$). Infrared spectroscopy generally involves direct absorption, whereas Raman spectroscopy involves scattering of light.

Infrared Spectra

Infrared spectroscopy is a commonly used technique in the identification of molecular compounds. It is also a very convenient technique to use in determining molecular force constants, since the spectrum records vibrational frequencies.
Based on the results of the harmonic oscillator problem, the selection rule for an infrared spectrum is determined to be $\Delta v=\pm 1\nonumber$ That means that as a molecule absorbs or emits a single infrared photon (meaning the electronic state of the molecule does not change), the vibrational quantum number can go up or down (depending on absorption or emission) by one quantum. For a typical experiment, the theory predicts a single band in the spectrum of a molecule, and that band will be centered at a frequency equal to $\omega_ {e}$ for the molecule. A schematic diagram of a typical infrared absorption spectroscopy experiment is shown below. The light is produced at the source (typically an incandescent light bulb or a globar), passes through the sample where some of the light can be absorbed, then through the monochromator (which is typically either a grating or an interferometer), which is used to distinguish between the various frequencies of light, and finally the light is detected by a detector. Plotting detected intensity as a function of frequency produces the spectrum.

Determining a Force Constant

Consider the experimentally determined $\omega_{e}$ value for carbon monoxide (CO). The spectrum shows a strong absorption at $2143 \; cm ^{-1}$ due to CO. Using this value for $\omega_ {e}$ (it is actually a little off due to anharmonicity), the force constant can be determined for the molecule. $\omega_{e} =\dfrac{1}{2\pi c} \sqrt{\dfrac{k}{\mu } }\nonumber$ Using a value of $1.14 \times 10 ^{-26}$ kg for the reduced mass of the molecule, the force constant is found to be 1856 N/m. The literature value for this force constant is 1860 N/m. Given that this calculation did not treat anharmonicity, the agreement is pretty good!
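This arithmetic is easy to reproduce. The sketch below (Python) inverts the $\omega_e$ expression to obtain $k$ for CO and, as a cross-check, recovers $\omega_e$ for $^{1}H^{35}Cl$ from the force constant and reduced mass tabulated in the previous section:

```python
import math

c = 2.99792458e10   # speed of light in cm/s

def force_constant(we, mu):
    # Invert omega_e = (1 / (2 pi c)) sqrt(k / mu)  =>  k = mu * (2 pi c omega_e)^2
    # we in cm^-1, mu in kg, k returned in N/m
    return mu * (2 * math.pi * c * we) ** 2

def omega_e(k, mu):
    # omega_e in cm^-1 from k (N/m) and mu (kg)
    return math.sqrt(k / mu) / (2 * math.pi * c)

k_co = force_constant(2143.0, 1.14e-26)   # ~1.86e3 N/m for CO
we_hcl = omega_e(516.0, 1.627e-27)        # ~2990 cm^-1 for 1H35Cl
print(k_co, we_hcl)
```

Both numbers land within rounding of the values quoted in the chapter, which also confirms the internal consistency of the tabulated constants.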
Progressions in Electronic Spectra

Electronic transitions in diatomic molecules, which can be observed in the visible and ultraviolet regions of the spectrum, can have a great deal of vibrational structure, as the molecule is free to vibrate in both the upper and lower states. Figure $2$ shows vibrational progressions in the emission spectrum of $\ce{AlBr}$ near 2800 Å (Fleming & Mathews, 1996). These progressions can be analyzed to provide dissociation energies for the electronic states involved in the transition. If the vibrational energy function is truncated at the $\omega_ {e} x _{e}$ level (as predicted by the Morse potential), the vibrational term value will reach a maximum at some value of $v$. Any further vibrational excitation is predicted to lower the molecular energy. This is actually the dissociation limit. Therefore, the maximum value of $v$ for a bound state ($v_ {max}$) is the largest value of $v$ for which the vibrational energy spacing is positive. The dissociation energy of the molecule is then given by the sum of vibrational energy spacings from $v=0$ to $v=v_ {max}$.

Determining a Dissociation Energy

To find the value of the dissociation energy, it is convenient to define the difference between successive vibrational terms as $\Delta G_{v+\frac{1}{2}} \equiv G_{v+1} -G_{v}\nonumber$ Using the expression for $G_ {v}$ as predicted by the Morse potential, \begin{aligned} \Delta G_{v+1 / 2} &= \omega_e(v+3 / 2)-\omega_e x_e(v+3 / 2)^2-\omega_e(v+1 / 2)+\omega_e x_e(v+1 / 2)^2 \\[4pt] &= \omega_e(v+3 / 2-v-1 / 2)-\omega_e x_e\left(v^2+3 v+9 / 4-v^2-v-1 / 4\right) \\[4pt] &= \omega_e-\omega_e x_e(2 v+2) \\[4pt] &= \omega_e-2 \omega_e x_e(v+1) \end{aligned} This suggests that a plot of $\Delta G_ {v+1/2}$ vs. $(v+1)$ should yield a straight line with a slope equal to $-2\omega_ {e} x_ {e}$ and an intercept equal to $\omega_ {e}$.
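To see this bookkeeping in action, the sketch below (Python) uses illustrative, HCl-like constants $\omega_e = 2990\ cm^{-1}$ and $\omega_e x_e = 52.8\ cm^{-1}$ (hypothetical values, not from the text) to locate the highest bound level and sum the spacings:

```python
# Illustrative, HCl-like Morse constants (hypothetical values), in cm^-1
we, wexe = 2990.0, 52.8

def dG(v):
    # Spacing between levels v and v+1: Delta G_{v+1/2} = we - 2 we xe (v + 1)
    return we - 2 * wexe * (v + 1)

# v_max: the largest v for which the spacing above it is still positive
v_max = 0
while dG(v_max + 1) > 0:
    v_max += 1

# Dissociation energy from v = 0: the sum of all positive spacings
d0 = sum(dG(v) for v in range(v_max + 1))
print(v_max, d0)  # v_max = 27 for these constants
```

For these constants the discrete sum comes out a few percent below the triangular-area estimate $\omega_e^{2}/4\omega_e x_e$, as expected when the level ladder is summed term by term rather than integrated.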
The value of $v_{max}$ is determined by setting $\Delta G_{v+\frac{1}{2}}$ to zero and solving for $v$ (Figure $3$). The Birge-Sponer method (Gaydon, 1946) can be used to determine the sum of vibrational spacings, and thus the dissociation energy of a molecule. The method involves plotting $\Delta G_{v+\frac{1}{2}}$ vs. $(v+1)$. The dissociation energy is taken as the area under the curve.

Vibrations of Polyatomic Molecules

Nonlinear molecules have $3N-6$ vibrational degrees of freedom, where $N$ is the number of atoms in the molecule. Thus, a triatomic molecule such as water has three vibrational degrees of freedom. These account for the three vibrational modes of water (symmetric stretch, bend and antisymmetric stretch). Each mode will have a characteristic frequency. If each mode is treated as a harmonic oscillator, the total vibrational energy is given by $G=\sum_{i=1}^{3N-6}\omega_{i} \left(v_{i} +\tfrac{1}{2}\right)\nonumber$ where $\omega_{i}$ is the frequency of the $i^{th}$ vibrational mode, and $v_{i}$ is the quantum number indicating the number of quanta of the $i^{th}$ mode excited. If anharmonicity is to be included, the expression becomes $G=\sum_{i=1}^{3N-6}\omega_{i} \left(v_{i} +\tfrac{1}{2}\right) -\sum_{i=1}^{3N-6}\sum_{j=1}^{3N-6}x_{ij} \left(v_{i} +\tfrac{1}{2}\right)\left(v_{j} +\tfrac{1}{2}\right)\nonumber$ where $x_{ij}$ is the anharmonicity term that couples the vibrational modes.
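A minimal sketch of evaluating the harmonic sum, using three illustrative frequencies in the neighborhood of water's fundamentals (assumed values for demonstration, not harmonic constants from the text):

```python
# Sketch: harmonic vibrational term value G = sum_i w_i*(v_i + 1/2) for a
# polyatomic molecule. The three mode frequencies are illustrative numbers
# near water's fundamentals, not exact harmonic values.
omega = [3657.0, 1595.0, 3756.0]   # cm^-1: sym. stretch, bend, antisym. stretch

def G(v):
    """Term value (cm^-1) for quantum numbers v = (v1, v2, v3)."""
    return sum(w * (vi + 0.5) for w, vi in zip(omega, v))

zpe = G((0, 0, 0))                               # zero-point energy, sum(omega)/2
fundamental_bend = G((0, 1, 0)) - G((0, 0, 0))   # equals omega[1] in this model
```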
Group theory provides a powerful set of tools for predicting and interpreting vibrational spectra. In this section, we will consider how group theory helps us to understand these important phenomena.

Transformation of Axes and Rotations

It is possible to determine the symmetry species or irreducible representation by which each of the three Cartesian coordinate axes transforms. This is useful, particularly in determining selection rules in spectroscopy, as the components of a molecule’s dipole moment will transform as these axes. The rotations are also useful in understanding the rotational selection rules. Recall the character table for the $C_{2v}$ point group.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 |
| $A_{2}$ | 1 | 1 | -1 | -1 |
| $B_{1}$ | 1 | -1 | 1 | -1 |
| $B_{2}$ | 1 | -1 | -1 | 1 |

It is useful to determine how each axis (x, y and z) is transformed under each symmetry operation. Once this is done, it will be easy to determine the representation that transforms the axis in this way. A table might be useful. Recalling our designation of the $\sigma_{v}$ operation as reflection through the xz plane, it can be shown easily that the axes transform as follows:

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| x | x | -x | x | -x |
| y | y | -y | -y | y |
| z | z | z | z | z |

The z-axis is unchanged by any of the symmetry operations. Another way of saying this is that the z-axis is symmetric with respect to all of the operations. (In this point group, all of the symmetry elements happen to intersect on the z-axis, which is why it is unchanged by any of the symmetry operations.) The conclusion is that the z-axis transforms as the $A_{1}$ representation. The other axes can be described the same way. Note that the x-axis is symmetric with respect to the $\sigma_{v}$ operation and the E operation. (Everything is symmetric with respect to the E operation, oddly enough.) The x-axis is antisymmetric, however, with respect to the $\sigma_{v}'$ and $C_{2}$ operations.
The results for all axes can be summarized in the character table.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | z |
| $A_{2}$ | 1 | 1 | -1 | -1 |   |
| $B_{1}$ | 1 | -1 | 1 | -1 | x |
| $B_{2}$ | 1 | -1 | -1 | 1 | y |

Rotations about the x, y and z axes can be characterized in a similar fashion. Consider the angular momentum vector for each rotation and how it transforms. Such a vector can be constructed using the right-hand rule: if the fingers on your right hand point in the direction of the rotation, your thumb points in the direction of the angular momentum vector. Rotation about the z-axis ($R_{z}$) is symmetric with respect to the operations E and $C_{2}$, but antisymmetric with respect to the operations $\sigma_{v}$ and $\sigma_{v}'$. Clearly, this rotation transforms as the irreducible representation $A_{2}$. Rotations about the x-axis and y-axis can be characterized in the same way, and follow the properties of the $B_{2}$ and $B_{1}$ representations respectively. As such, the character table for $C_{2v}$ can be augmented to include this information.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |   |
|---|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | z |   |
| $A_{2}$ | 1 | 1 | -1 | -1 |   | $R_{z}$ |
| $B_{1}$ | 1 | -1 | 1 | -1 | x | $R_{y}$ |
| $B_{2}$ | 1 | -1 | -1 | 1 | y | $R_{x}$ |

Another interpretation of the transformation of the x, y and z-axes is that the representations that indicate the symmetries of these axes in the point group also indicate how the $p_{x}$, $p_{y}$ and $p_{z}$ orbitals transform. The set of d orbital wavefunctions can also be used. These transformations are generally given in another column in the character table. (This information is also useful for calculating polarizabilities, and hence selection rules for Raman transitions!)
| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |   |   |
|---|---|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | z |   | $x^{2}-y^{2}$, $z^{2}$ |
| $A_{2}$ | 1 | 1 | -1 | -1 |   | $R_{z}$ | xy |
| $B_{1}$ | 1 | -1 | 1 | -1 | x | $R_{y}$ | xz |
| $B_{2}$ | 1 | -1 | -1 | 1 | y | $R_{x}$ | yz |

Characterizing Vibrational Modes

Vibrational wavefunctions describing the normal modes of vibration transform as irreducible representations of the molecular point group. As such, group theory can be quite useful in determining the vibrational selection rules needed to predict infrared spectra. The number of vibrational degrees of freedom for a molecule is given by ($3N-6$) if the molecule is non-linear and ($3N-5$) if it is linear. In these expressions, N is the number of atoms in the molecule. One way to think of these numbers is that it takes 3N Cartesian coordinates (an x, y and z variable for each atom) to fully specify the structure of a molecule. As such, 3N is the total number of degrees of freedom. Since the molecule can translate through space in the x, y or z directions, three (3) degrees of freedom belong to translation. One can also think of these three degrees of freedom as the three Cartesian coordinates needed to specify the location of the center of mass of the molecule – or for the translation of the center of mass of the molecule. For non-linear molecules, rotation can occur about each of the three Cartesian axes as well, so three (3) degrees of freedom belong to rotation for non-linear molecules. Linear molecules only have rotational degrees of freedom about the two axes that are perpendicular to the molecular axis (which, remember, is the $C_{\infty}$ axis – and thus the z-axis). So linear molecules only have two (2) rotational degrees of freedom. The sum of the irreducible representations by which the vibrational modes transform can be found fairly easily using group theory. The first step is to determine how the three Cartesian axes transform under the symmetry operations of the point group.
As an example, let’s use water ($H_{2}O$), which belongs to the $C_{2v}$ point group, since it is familiar. Later, we will work through a more complex example. Consider the character table for the $C_{2v}$ point group.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |   |   |
|---|---|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | z |   | $x^{2}-y^{2}$, $z^{2}$ |
| $A_{2}$ | 1 | 1 | -1 | -1 |   | $R_{z}$ | xy |
| $B_{1}$ | 1 | -1 | 1 | -1 | x | $R_{y}$ | xz |
| $B_{2}$ | 1 | -1 | -1 | 1 | y | $R_{x}$ | yz |

The sum of the representations by which the axes transform will be given by $B_{1} + B_{2} + A_{1}$.

| $C_{2v}$ |   | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |
|---|---|---|---|---|---|---|
| $\Gamma_{1}$ | $A_{1}$ | 1 | 1 | 1 | 1 | z |
| $\Gamma_{2}$ | $B_{1}$ | 1 | -1 | 1 | -1 | x |
| $\Gamma_{3}$ | $B_{2}$ | 1 | -1 | -1 | 1 | y |
| $\Gamma_{xyz}$ | $A_1 + B_1 + B_2$ | 3 | -1 | 1 | 1 |   |

The reducible representation ($\Gamma_{xyz}$) is then multiplied by the representation generated by counting the number of atoms in the molecule that remain unmoved by each symmetry operation. This representation for water is generated as follows:

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $O$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ |
| $H_{1}$ | $\checkmark$ | - | - | $\checkmark$ |
| $H_{2}$ | $\checkmark$ | - | - | $\checkmark$ |
| $\Gamma_{unmoved}$ | 3 | 1 | 1 | 3 |

The reducible representation that describes the transformation of the Cartesian coordinates of all of the atoms in the molecule is given by the product $\Gamma_{xyz} \cdot\Gamma_{unmoved}$ as shown in the following table.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $\Gamma_{xyz}$ | 3 | -1 | 1 | 1 |
| $\Gamma_{unmoved}$ | 3 | 1 | 1 | 3 |
| $\Gamma_{total} = \Gamma_{xyz} \cdot\Gamma_{unmoved}$ | 9 | -1 | 1 | 3 |

Note that the order of $\Gamma_{total}$ is given by $3N$. This is the sum of representations needed to describe the transformation of each of the Cartesian coordinates for each atom.
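The bookkeeping for water, carried through the subtraction and reduction steps that the text works out by hand in the following paragraphs, can be sketched in a short script. This is a sketch only; the character-table rows and unmoved-atom counts are keyed in manually from the tables above:

```python
# Sketch: vibrational analysis of H2O in C2v (one operation per class:
# E, C2, sigma_v, sigma_v'; h = 4, so no class-order weights are needed).
chars = {
    "A1": [1,  1,  1,  1],
    "A2": [1,  1, -1, -1],
    "B1": [1, -1,  1, -1],
    "B2": [1, -1, -1,  1],
}
h = 4

G_xyz     = [3, -1, 1, 1]     # A1 + B1 + B2: how (x, y, z) transform
G_unmoved = [3,  1, 1, 3]     # atoms left unmoved by each operation
G_total   = [x * u for x, u in zip(G_xyz, G_unmoved)]

# Rotations transform as A2 + B1 + B2 (R_z, R_y, R_x)
G_rot = [a + b + c for a, b, c in zip(chars["A2"], chars["B1"], chars["B2"])]
G_vib = [t - x - r for t, x, r in zip(G_total, G_xyz, G_rot)]

# Great Orthogonality Theorem reduction: N_i = (1/h) sum_R chi_i(R)*chi_vib(R)
N = {name: sum(ci * cv for ci, cv in zip(row, G_vib)) // h
     for name, row in chars.items()}
```

Running this gives $\Gamma_{total} = (9, -1, 1, 3)$, $\Gamma_{vib} = (3, 1, 1, 3)$ and the mode count $2A_1 + B_2$, matching the hand calculation.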
If the representation for the Cartesian coordinates ($\Gamma_{xyz}$) is subtracted from $\Gamma_{total}$, the remainder describes the sum of representations by which the rotations and vibrations transform, and this result should be of order ($3N-3$). Let’s see . . .

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $\Gamma_{total}$ | 9 | -1 | 1 | 3 |
| $\Gamma_{xyz}$ | 3 | -1 | 1 | 1 |
| $\Gamma_{vib+rot}$ | 6 | 0 | 0 | 2 |

So far, so good. Now let’s subtract the sum of the representations by which the rotations transform. The remainder of this operation should be of order ($3N-6$) and give the sum of irreducible representations by which the vibrations transform.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $\Gamma_{vib+rot}$ | 6 | 0 | 0 | 2 |
| $\Gamma_{rot}$ | 3 | -1 | -1 | -1 |
| $\Gamma_{vib}$ | 3 | 1 | 1 | 3 |

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 |
| $A_{1}$ | 1 | 1 | 1 | 1 |
| $B_{2}$ | 1 | -1 | -1 | 1 |
| $\Gamma_{vib}$ | 3 | 1 | 1 | 3 |

A quick calculation shows that this result is generated by the sum $A_{1} + A_{1} + B_{2}$. To see this, we can use the Great Orthogonality Theorem. (I told you it was great!) In this case, the number of vibrational modes that transform as the $i^{th}$ irreducible representation is given by the relationship $N_{i} =\dfrac{1}{h} \sum_{R}\chi_{i}(R)\chi_{vib}(R)\nonumber$ For the $A_{1}$ representation, this sum looks as follows.

$\begin{aligned} N_{A_{1}} &= \dfrac{1}{h}\left(\chi_{A_{1}}(E)\cdot \chi_{vib}(E)+\chi_{A_{1}}(C_{2})\cdot \chi_{vib}(C_{2})+\chi_{A_{1}}(\sigma_{v})\cdot \chi_{vib}(\sigma_{v})+\chi_{A_{1}}(\sigma_{v}^{'})\cdot \chi_{vib}(\sigma_{v}^{'})\right) \\ &= \dfrac{1}{4}\left((1)\cdot(3)+(1)\cdot(1)+(1)\cdot(1)+(1)\cdot(3)\right) \\ &= \dfrac{1}{4}\left(8\right) \\ &= 2 \end{aligned}\nonumber$

The result for the $A_{2}$ representation should come to zero since no vibrational modes transform as $A_{2}$. For the $A_{2}$ representation, the sum looks as follows.
$\begin{aligned} N_{A_{2}} &= \dfrac{1}{4}\left((1)\cdot(3)+(1)\cdot(1)+(-1)\cdot(1)+(-1)\cdot(3)\right) \\ &= \dfrac{1}{4}\left(0\right)=0 \end{aligned}\nonumber$

For $B_{1}$ and $B_{2}$ the sums look as follows:

$\begin{aligned} N_{B_{1}} &= \dfrac{1}{4}\left((1)\cdot(3)+(-1)\cdot(1)+(1)\cdot(1)+(-1)\cdot(3)\right) \\ &= \dfrac{1}{4}\left(0\right)=0 \end{aligned}\nonumber$

$\begin{aligned} N_{B_{2}} &= \dfrac{1}{4}\left((1)\cdot(3)+(-1)\cdot(1)+(-1)\cdot(1)+(1)\cdot(3)\right) \\ &= \dfrac{1}{4}\left(4\right)=1 \end{aligned}\nonumber$

Let’s see if that makes sense! Consider the three normal-mode vibrations in water. These (the symmetric stretch, the bend and the antisymmetric stretch) can be depicted as follows: It is fairly simple to show that the symmetric stretch and the bending mode both transform as the $A_{1}$ representation. Similarly, the antisymmetric stretching mode transforms as the $B_{2}$ representation. (Note that we have chosen the xz plane (or the $\sigma_{v}$ plane) to lie perpendicular to the molecule!)

Example $1$

Find the symmetries of the normal vibrational modes of ammonia.

Solution

Recall the character table for the $C_{3v}$ point group:

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |   |   |
|---|---|---|---|---|---|
| $A_1$ | 1 | 1 | 1 | z |   |
| $A_2$ | 1 | 1 | -1 |   | $R_z$ |
| E | 2 | -1 | 0 | ($x$, $y$) | ($R_x$, $R_y$) |

The representation for $\Gamma_{total}$ can be found in the same way as before. Once we have $\Gamma_{total}$, $\Gamma_{vib}$ is determined as before.

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |
|---|---|---|---|
| $\Gamma_{xyz}$ | 3 | 0 | 1 |
| $\Gamma_{unmoved}$ | 4 | 1 | 2 |
| $\Gamma_{total}$ | 12 | 0 | 2 |

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |
|---|---|---|---|
| $\Gamma_{total}$ | 12 | 0 | 2 |
| $\Gamma_{xyz}$ | 3 | 0 | 1 |
| $\Gamma_{rot}$ | 3 | 0 | -1 |
| $\Gamma_{vib}$ | 6 | 0 | 2 |

The GOT can be used to find how many modes of each symmetry are present.

| Mode | Freq. (cm$^{-1}$) | Sym. |
|---|---|---|
| Umbrella | 1139 | $A_1$ |
| Bend | 1765 | E |
| Antisym. Str. | 3464 | E |
| Sym. Str. | 3534 | $A_1$ |
$\begin{aligned} N_{A_1} &=\dfrac{1}{6}\left[(1)\cdot(6)+2(1)\cdot(0)+3(1)\cdot(2)\right] \\ &=\dfrac{1}{6}(12)=2 \end{aligned}\nonumber$

$\begin{aligned} N_{A_2} &=\dfrac{1}{6}\left[(1)\cdot(6)+2(1)\cdot(0)+3(-1)\cdot(2)\right] \\ &=\dfrac{1}{6}(0)=0 \end{aligned}\nonumber$

$\begin{aligned} N_E &=\dfrac{1}{6}\left[(2)\cdot(6)+2(-1)\cdot(0)+3(0)\cdot(2)\right] \\ &=\dfrac{1}{6}(12)=2 \end{aligned}\nonumber$

So there are two (2) $A_{1}$ modes and two (2) doubly degenerate E modes of vibration. These are summarized in the table above.

Example $2$: The vibrational modes of $SF_4$

Solution

$SF_{4}$ is an example of a molecule with a “see saw” geometry. It belongs to the point group $C_{2v}$, like water. Let’s find the symmetries of the normal modes of vibration using group theory. First, we must generate $\Gamma_{total}$.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $\Gamma_{xyz}$ | 3 | -1 | 1 | 1 |
| $\Gamma_{unmoved}$ | 5 | 1 | 3 | 3 |
| $\Gamma_{total}$ | 15 | -1 | 3 | 3 |

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $\Gamma_{total}$ | 15 | -1 | 3 | 3 |
| $\Gamma_{xyz}$ | 3 | -1 | 1 | 1 |
| $\Gamma_{rot}$ | 3 | -1 | -1 | -1 |
| $\Gamma_{vib}$ | 9 | 1 | 3 | 3 |

Now, subtract $\Gamma_{xyz}$ and $\Gamma_{rot}$ to generate $\Gamma_{vib}$ as shown above. This implies that there are nine degrees of freedom due to vibration. This is the result we expect since, for a 5-atom non-linear molecule, $(3N-6) = 9$. To generate the number of vibrational modes that transform as the $A_{1}$ irreducible representation, the following expression must be evaluated.
$\begin{aligned} N_{A_1} &=\dfrac{1}{h}\left(\chi_{A_1}(E) \cdot \chi_{vib}(E)+\chi_{A_1}\left(C_2\right) \cdot \chi_{vib}\left(C_2\right)+\chi_{A_1}\left(\sigma_v\right) \cdot \chi_{vib}\left(\sigma_v\right)+\chi_{A_1}\left(\sigma_v^{\prime}\right) \cdot \chi_{vib}\left(\sigma_v^{\prime}\right)\right) \\ &=\dfrac{1}{4}\left((1) \cdot(9)+(1) \cdot(1)+(1) \cdot(3)+(1) \cdot(3)\right) \\ &=\dfrac{1}{4}(16) \\ &=4 \end{aligned}\nonumber$

Similarly,

$\begin{aligned} N_{A_2} &=\dfrac{1}{4}\left((1) \cdot(9)+(1) \cdot(1)+(-1) \cdot(3)+(-1) \cdot(3)\right) \\ &=\dfrac{1}{4}(4)=1 \end{aligned}\nonumber$

$\begin{aligned} N_{B_1} &=\dfrac{1}{4}\left((1) \cdot(9)+(-1) \cdot(1)+(1) \cdot(3)+(-1) \cdot(3)\right) \\ &=\dfrac{1}{4}(8)=2 \end{aligned}\nonumber$

$\begin{aligned} N_{B_2} &=\dfrac{1}{4}\left((1) \cdot(9)+(-1) \cdot(1)+(-1) \cdot(3)+(1) \cdot(3)\right) \\ &=\dfrac{1}{4}(8)=2 \end{aligned}\nonumber$

So there should be 4 vibrational modes of $A_{1}$ symmetry, 1 of $A_{2}$ symmetry and two each of $B_{1}$ and $B_{2}$ symmetry. A calculation of the structure and vibrational frequencies of $SF_{4}$ at the B3LYP/6-31G(d) level of theory$^{1}$ yields the following.

| Mode | Freq. (cm$^{-1}$) | Symmetry | Mode | Freq. (cm$^{-1}$) | Symmetry |
|---|---|---|---|---|---|
| 1 | 189 | $A_1$ | 6 | 584 | $A_1$ |
| 2 | 330 | $B_1$ | 7 | 807 | $B_2$ |
| 3 | 436 | $A_2$ | 8 | 852 | $B_1$ |
| 4 | 487 | $A_1$ | 9 | 867 | $A_1$ |
| 5 | 496 | $B_2$ |   |   |   |

The calculation also allows for the simulation of the infrared spectrum of $SF_{4}$. What would be exceptionally useful is if group theory could help to identify which vibrational modes are active – or if any are inactive. Fortunately, it can! (And now how much would you pay?) The tools for determining selection rules depend on direct products.

Intensity

Group theory provides tools to determine when a spectral transition will have zero intensity and thus will not be seen. In this section, we will see how group theory can help to determine the selection rules that govern which transitions can and cannot be seen.
$\text{Intensity} \propto \left|\int \left(\psi '\right)^{*} \vec{\mu }\left(\psi ''\right) d\tau \right|^{2}\nonumber$

The intensity of a transition in the spectrum of a molecule is proportional to the magnitude squared of the transition moment matrix element. By knowing the symmetry of each part of the integrand, the symmetry of the product can be determined as the direct product of the symmetries of each part: $(\psi')^{*}$, $(\psi'')$ and $\vec{\mu}$. This is helpful, since if the integrand is antisymmetric with respect to any symmetry operation, the integral will vanish by symmetry. Before exploring that concept, let's look at the concept of direct products. This is a concept many people have seen, in the form of the rule that the integral of an odd function over a symmetric interval is zero. Recall what it means to be an “odd function” or an “even function.”

| Symmetry | Definition | Integral |
|---|---|---|
| Even | $f(-x) = f(x)$ | $\int_{-a}^{a}f(x)dx=2\int_{0}^{a}f(x)dx$ |
| Odd | $f(-x) = -f(x)$ | $\int_{-a}^{a}f(x)dx=0$ |

Consider the function $f(x)=\left(x^{3} -3x\right)e^{-x^{2}}$. A graph of this function shows that the area under the curve on the side for which $x > 0$ has exactly the same magnitude but opposite sign as the area under the other side of the graph. Mathematically,

$\begin{aligned} \int_{-a}^{a}f(x)dx &= \int_{-a}^{0}f(x)dx +\int_{0}^{a}f(x)dx \\ &= -\int_{0}^{a}f(x)dx +\int_{0}^{a}f(x)dx =0 \end{aligned}\nonumber$

It is also interesting to note that the function $f(x)$ can be expressed as the product of two functions, one of which is an odd function ($x^{3} -3x$) and the other of which is an even function ($e^{-x^{2}}$). The result is an odd function. By determining the symmetry of the function as a product of the eigenvalues of the functions with respect to the inversion operator, as discussed below, one can derive a similar result. The even/odd symmetry is an example of inversion symmetry.
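This cancellation is easy to verify numerically for the example function:

```python
# Numerical check that an odd function integrates to zero over a symmetric
# interval: f(x) = (x^3 - 3x) * exp(-x^2), the example used above.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)          # grid symmetric about x = 0
f = (x**3 - 3*x) * np.exp(-x**2)
# trapezoidal rule; contributions at +x and -x cancel pairwise
area = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
# area is zero to floating-point precision
```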
Recall that the inversion operator (in one dimension) effects a change of sign on x. $\hat{i}f(x)=f(-x)\nonumber$ “Even” and “odd” functions are eigenfunctions of this operator, and have eigenvalues of either +1 or –1. For the function used in the previous example, $f(x)=g(x)h(x)\nonumber$ where $g(x)=x^{3} -3x$ and $h(x)=e^{-x^{2}}$. Here, $g(x)$ is an odd function and $h(x)$ is an even function. The product is an odd function. This property is summarized for any $f(x)=g(x)h(x)$ in the following table.

| g(x) | h(x) | f(x) | $\hat{i}g(x)=\underline{\;\;}\,g(x)$ | $\hat{i}h(x)=\underline{\;\;}\,h(x)$ | $\hat{i}f(x)=\underline{\;\;}\,f(x)$ |
|---|---|---|---|---|---|
| even | even | even | 1 | 1 | 1 |
| even | odd | odd | 1 | -1 | -1 |
| odd | odd | even | -1 | -1 | 1 |

Note that the eigenvalue (+1 or –1) is simply the character of the inversion operation for the irreducible representation by which the function transforms! In a similar manner, the symmetry of any function that can be expressed as a product of functions (like the integrand in the transition moment matrix element) can be determined as the direct product of the irreducible representations by which each part of the product transforms. Consider the point group $C_{2v}$ as an example. Recall the character table for this point group.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |   |   |
|---|---|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | z |   | $x^{2}-y^{2}$, $z^{2}$ |
| $A_{2}$ | 1 | 1 | -1 | -1 |   | $R_{z}$ | xy |
| $B_{1}$ | 1 | -1 | 1 | -1 | x | $R_{y}$ | xz |
| $B_{2}$ | 1 | -1 | -1 | 1 | y | $R_{x}$ | yz |

The direct product of irreducible representations is formed using the definition $\chi_{prod}(R)=\chi_{i}(R)\cdot \chi_{j}(R)\nonumber$ So for the direct product of $B_{1}$ and $B_{2}$, the following table can be used.

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |
|---|---|---|---|---|
| $B_{1}$ | 1 | -1 | 1 | -1 |
| $B_{2}$ | 1 | -1 | -1 | 1 |
| $B_{1} \otimes B_{2}$ | 1 | 1 | -1 | -1 |

The product is actually the irreducible representation $A_{2}$! As it turns out, the direct product will always yield a set of characters that either is an irreducible representation of the group or can be expressed as a sum of irreducible representations.
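The character-by-character multiplication can be sketched in a few lines of code. The lookup below assumes a product of one-dimensional representations, which in $C_{2v}$ always matches a single irreducible representation:

```python
# Sketch: direct product of irreducible representations in C2v, formed by
# multiplying characters operation-by-operation (E, C2, sigma_v, sigma_v').
chars = {
    "A1": (1,  1,  1,  1),
    "A2": (1,  1, -1, -1),
    "B1": (1, -1,  1, -1),
    "B2": (1, -1, -1,  1),
}

def direct_product(a, b):
    prod = tuple(x * y for x, y in zip(chars[a], chars[b]))
    # a product of 1-D representations always matches one irrep exactly
    return next(name for name, row in chars.items() if row == prod)
```

For example, `direct_product("B1", "B2")` returns `"A2"`, reproducing the table above.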
This suggests that a multiplication table can be constructed. An example (for the $C_{2v}$ point group) is given below.

| $C_{2v}$ | $A_{1}$ | $A_{2}$ | $B_{1}$ | $B_{2}$ |
|---|---|---|---|---|
| $A_{1}$ | $A_{1}$ | $A_{2}$ | $B_{1}$ | $B_{2}$ |
| $A_{2}$ | $A_{2}$ | $A_{1}$ | $B_{2}$ | $B_{1}$ |
| $B_{1}$ | $B_{1}$ | $B_{2}$ | $A_{1}$ | $A_{2}$ |
| $B_{2}$ | $B_{2}$ | $B_{1}$ | $A_{2}$ | $A_{1}$ |

Studying this table reveals some useful generalizations. Two things in particular jump from the page. These are summarized in the following tables.

|   | A | B |
|---|---|---|
| A | A | B |
| B | B | A |

|   | 1 | 2 |
|---|---|---|
| 1 | 1 | 2 |
| 2 | 2 | 1 |

This pattern might seem obvious to some. It stems from the idea that

symmetric × symmetric = symmetric
symmetric × antisymmetric = antisymmetric
antisymmetric × antisymmetric = symmetric

Noting that A indicates an irreducible representation that is symmetric with respect to the $C_{2}$ operation and B indicates one that is antisymmetric, and that the subscript 1 indicates that an irreducible representation is symmetric with respect to the $\sigma_{v}$ operation while a subscript 2 indicates that it is antisymmetric, the rest seems to follow! Some point groups have irreducible representations that use subscripts g/u or primes and double primes. The g/u subscript indicates symmetry with respect to the inversion (i) operation, and the prime/double prime indicates symmetry with respect to a $\sigma$ plane (generally the plane of the molecule for planar molecules). This method works well for singly degenerate representations. But what does one do for products involving doubly degenerate representations? As an example, consider the $C_{3v}$ point group.

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |   |   |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | z |   |
| $A_{2}$ | 1 | 1 | -1 |   | $R_{z}$ |
| E | 2 | -1 | 0 | (x, y) | ($R_{x}$, $R_{y}$) |

Consider the direct product of $A_{2}$ and E.

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |
|---|---|---|---|
| $A_{2}$ | 1 | 1 | -1 |
| E | 2 | -1 | 0 |
| $A_{2} \otimes E$ | 2 | -1 | 0 |

This product is clearly just the E representation.
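Products in groups with degenerate representations can also be reduced numerically; a quick check for $C_{3v}$ (note that the class orders 1, 2 and 3 for E, 2$C_3$ and 3$\sigma_v$ must be included as weights, with $h = 6$):

```python
# Sketch: reducing a direct product in C3v. Characters are per class
# (E, 2C3, 3sigma_v); "orders" holds the number of operations in each class.
chars  = {"A1": (1, 1, 1), "A2": (1, 1, -1), "E": (2, -1, 0)}
orders = (1, 2, 3)
h = 6

def reduce_product(a, b):
    """Decompose the direct product a (x) b into irreducible components."""
    prod = [x * y for x, y in zip(chars[a], chars[b])]
    return {name: sum(g * ci * cp for g, ci, cp in zip(orders, row, prod)) // h
            for name, row in chars.items()}
```

`reduce_product("A2", "E")` returns only an E component, and `reduce_product("E", "E")` returns one each of $A_1$, $A_2$ and E.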
Now consider one other example: the product $E \otimes E$.

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |
|---|---|---|---|
| E | 2 | -1 | 0 |
| E | 2 | -1 | 0 |
| $E \otimes E$ | 4 | 1 | 0 |

To find the irreducible representations that comprise this reducible representation, we proceed in the same manner as when determining the number of vibrational modes belonging to each symmetry.

$\begin{aligned} N_{A_{1}} &= \dfrac{1}{6}\left[(1)(4)+2(1)(1)+3(1)(0)\right]=1 \\ N_{A_{2}} &= \dfrac{1}{6}\left[(1)(4)+2(1)(1)+3(-1)(0)\right]=1 \\ N_{E} &= \dfrac{1}{6}\left[(2)(4)+2(-1)(1)+3(0)(0)\right]=1 \end{aligned}\nonumber$

This allows us to build a table of direct products. Notice that the direct product always has the total dimensionality given by the product of the dimensions.

| $C_{3v}$ | $A_{1}$ | $A_{2}$ | E |
|---|---|---|---|
| $A_{1}$ | $A_{1}$ | $A_{2}$ | E |
| $A_{2}$ | $A_{2}$ | $A_{1}$ | E |
| E | E | E | $A_{1} + A_{2} + E$ |

Now that we have a handle on direct products, we can move on to selection rules.

Selection Rules

According to quantum mechanics, transitions will only be allowed (have non-zero intensity) if the squared magnitude of the transition moment ($\left|\int \psi'^{*} \vec{\mu}\psi'' d\tau \right|^{2}$) is not zero. If the integral vanishes by symmetry, the transition moment will have zero magnitude and the transition is forbidden and will not be seen. In order to determine if the integral vanishes by symmetry, it is necessary to determine the symmetry by which the dipole moment operator transforms. The dipole moment ($\vec{\mu}$) is a vector operator and can be decomposed into $x$, $y$ and $z$ components. As such, the transition moment is also a vector property that can have x-, y- and/or z-axis components. Clearly, it will be important to determine how the three axes transform. Fortunately, this information is contained in character tables! Consider the following two point groups, $C_{3v}$ and $C_{2v}$.
| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |   |   |
|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | $z$ |   |
| $A_{2}$ | 1 | 1 | -1 |   | $R_z$ |
| E | 2 | -1 | 0 | $(x,y)$ | $(R_x, R_y)$ |

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{v}$ | $\sigma_{v}'$ |   |   |
|---|---|---|---|---|---|---|
| $A_{1}$ | 1 | 1 | 1 | 1 | z |   |
| $A_{2}$ | 1 | 1 | -1 | -1 |   | $R_{z}$ |
| $B_{1}$ | 1 | -1 | 1 | -1 | x | $R_{y}$ |
| $B_{2}$ | 1 | -1 | -1 | 1 | y | $R_{x}$ |

In the case of $C_{2v}$, it is clear that the x-, y- and z-axes transform as the $B_{1}$, $B_{2}$ and $A_{1}$ irreducible representations respectively. In the case of $C_{3v}$, the z-axis transforms as $A_{1}$, but the x- and y-axes come as a pair and transform as the E irreducible representation. It will always require two axes to complete the basis for a doubly degenerate representation. Under the $C_{2v}$ point group, any vector quantity will transform as the sum $A_{1} + B_{1} + B_{2}$, as we saw for $\Gamma_{xyz}$ before. Further, one can say that the x-axis component transforms as $B_{1}$, the y-axis component as $B_{2}$ and the z-axis component as $A_{1}$. By a similar token, under the $C_{3v}$ point group, a vector quantity transforms as the sum $A_{1} + E$. The z-axis component transforms as $A_{1}$, and the x- and y-axis components come as a pair that transforms by the E representation. All that is needed to complete the picture is to determine the symmetries of the upper and lower state wavefunctions.

Infrared Active Transitions

In order for a spectral transition to be allowed by electric dipole selection rules, the transition moment integral must not vanish. $\int \psi'^{*} \vec{\mu}\psi'' d\tau\nonumber$ This can be determined by using the irreducible representations by which the two wavefunctions transform and the three components of the transition moment operator, which will be $x$, $y$ and $z$. $\int \Gamma_{\psi'} \Gamma_{\vec{\mu}} \Gamma_{\psi''} d\tau\nonumber$ If the direct product of the integrand does not contain at least a component of the totally symmetric irreducible representation, the integral will vanish by symmetry.
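This test can be sketched for fundamentals in $C_{2v}$. The sketch assumes the lower state is the ground vibrational state (totally symmetric, $A_1$) and that the upper state carries the symmetry of the excited mode:

```python
# Sketch: electric-dipole (infrared) activity test in C2v. A fundamental is
# allowed if psi' x mu x psi'' contains the totally symmetric representation
# for at least one dipole component (x ~ B1, y ~ B2, z ~ A1). The ground
# vibrational state is assumed to transform as A1.
chars = {
    "A1": (1,  1,  1,  1),
    "A2": (1,  1, -1, -1),
    "B1": (1, -1,  1, -1),
    "B2": (1, -1, -1,  1),
}
dipole = ("B1", "B2", "A1")    # symmetries of the x, y, z components
h = 4

def ir_active(mode):
    for axis in dipole:
        prod = [m * d * g for m, d, g in
                zip(chars[mode], chars[axis], chars["A1"])]
        # project onto A1: nonzero means the integrand contains A1
        if sum(c * a for c, a in zip(prod, chars["A1"])) // h > 0:
            return True
    return False
```

For water's modes, `ir_active` is `True` for $A_1$ and $B_2$; an $A_2$ mode would come out `False` (infrared dark), since no dipole component transforms as $A_2$ in this point group.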
Example $3$

The three vibrational modes of $H_{2}O$ transform as $A_{1}$ (symmetric stretch), $A_{1}$ (bend) and $B_{2}$ (antisymmetric stretch). Will the symmetric stretch mode be infrared active?

Solution

For the symmetric stretch, which transforms as $A_{1}$, the transition moment integrand will have symmetry properties determined by the product

$\psi '\left(\begin{array}{c} x \\ y \\ z \end{array}\right)\psi '' \qquad A_{1} \left(\begin{array}{c} B_{1} \\ B_{2} \\ A_{1} \end{array}\right) A_{1}\nonumber$

where one of the irreducible representations from the set in the middle of the product may be used. (They are the irreducible representations by which the $x$, $y$ and $z$ axes transform.) In this case, the z-axis must be used. $A_{1} \cdot A_{1} \cdot A_{1} = A_{1}\nonumber$ This is the only component that will not vanish. When the z-axis component must be used to make the transition moment not vanish, the transition is said to be a parallel transition. Transition moments that lie along an axis perpendicular to the z-axis are said to be perpendicular transitions. Parallel and perpendicular transitions often have very different selection rules and thus very different band contours.

Another Method

Another method that can be used to see if a mode is infrared active is to take the direct product of the irreducible representations of the wavefunctions, and use $\Gamma_{xyz}$ for the transition moment. If the resulting product has a component that is totally symmetric, the mode will be infrared active.

Example $4$

Is the antisymmetric stretch mode of water predicted to be infrared active?

Solution

This mode transforms as the $B_{2}$ irreducible representation.
$\Gamma_{xyz}$ is given by $\Gamma_{xyz} = B_{1} + B_{2} + A_{1}$. So:

| $C_{2v}$ | E | $C_{2}$ | $\sigma_{xz}$ | $\sigma_{yz}$ |
|---|---|---|---|---|
| $B_2$ | 1 | -1 | -1 | 1 |
| $\Gamma_{xyz}$ | 3 | -1 | 1 | 1 |
| $\Gamma_{prod}$ | 3 | 1 | -1 | 1 |

The resulting reducible representation has a component of the totally symmetric irreducible representation. $A_{1} \cdot \Gamma_{prod} = (1)(3) + (1)(1) + (1)(-1) + (1)(1) = 4\nonumber$ So the $A_{1}$ irreducible representation appears once in the product reducible representation. In fact, the component that does not vanish is due to the presence of $B_{2}$ in $\Gamma_{xyz}$. Hence, the transition is predicted to be a perpendicular ($\bot$) transition, since the transition moment lies along the y-axis.

Example $5$

Will the E modes in $NH_{3}$ be infrared active?

Solution

In the $C_{3v}$ point group, $\Gamma_{xyz}$ is given by $A_{1} + E$.

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |
|---|---|---|---|
| E | 2 | -1 | 0 |
| $\Gamma_{xyz}$ | 3 | 0 | 1 |
| $\Gamma_{prod}$ | 6 | 0 | 0 |

$\Gamma_{prod}$ clearly has the totally symmetric irreducible representation as a component. $A_1 \cdot \Gamma_{prod} = (1)(6) + 2(1)(0) + 3(1)(0) = 6\nonumber$ In fact, it is the E component of $\Gamma_{xyz}$ that makes this transition allowed, and so it is a perpendicular ($\bot$) transition.

| $C_{3v}$ | E | 2$C_{3}$ | 3$\sigma_{v}$ |
|---|---|---|---|
| E | 2 | -1 | 0 |
| E | 2 | -1 | 0 |
| $\Gamma_{prod}$ | 4 | 1 | 0 |

$A_1 \cdot \Gamma_{prod} = (1)(4) + 2(1)(1) + 3(1)(0) = 6\nonumber$

Vibrational Raman Spectra

Vibrational Raman spectroscopy is often used as a complementary method to infrared spectroscopy. The selection rules for Raman spectroscopy can be determined in much the same way, except that a polarizability integral must be used.
The polarizability operator can be expressed as a 3×3 tensor of the form

$\alpha =\left(\begin{array}{ccc} \alpha_{xx} & \alpha_{xy} & \alpha_{xz} \\ \alpha_{yx} & \alpha_{yy} & \alpha_{yz} \\ \alpha_{zx} & \alpha_{zy} & \alpha_{zz} \end{array}\right)\nonumber$

This tensor is symmetric about the diagonal, and the elements transform in the same ways as the functions $x^{2}$, $y^{2}$, $z^{2}$, $xy$, $xz$ and $yz$.

Example $6$

What are the vibrational mode symmetries for the molecule $H_{2}CCH_{2}$, which belongs to the $D_{2h}$ point group? Which modes will be infrared active? Which will be Raman active?

Solution

Set up the vibrational analysis table in the usual manner.

| $D_{2h}$ | E | $C_{2}$(z) | $C_{2}$(y) | $C_{2}$(x) | i | $\sigma_{xy}$ | $\sigma_{xz}$ | $\sigma_{yz}$ |   |   |
|---|---|---|---|---|---|---|---|---|---|---|
| $A_{g}$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |   | $x^2,\; y^2,\; z^2$ |
| $B_{1g}$ | 1 | 1 | -1 | -1 | 1 | 1 | -1 | -1 | $R_z$ | $xy$ |
| $B_{2g}$ | 1 | -1 | 1 | -1 | 1 | -1 | 1 | -1 | $R_y$ | $xz$ |
| $B_{3g}$ | 1 | -1 | -1 | 1 | 1 | -1 | -1 | 1 | $R_x$ | $yz$ |
| $A_{u}$ | 1 | 1 | 1 | 1 | -1 | -1 | -1 | -1 |   |   |
| $B_{1u}$ | 1 | 1 | -1 | -1 | -1 | -1 | 1 | 1 | $z$ |   |
| $B_{2u}$ | 1 | -1 | 1 | -1 | -1 | 1 | -1 | 1 | $y$ |   |
| $B_{3u}$ | 1 | -1 | -1 | 1 | -1 | 1 | 1 | -1 | $x$ |   |
| $\Gamma_{xyz}$ | 3 | -1 | -1 | -1 | -3 | 1 | 1 | 1 |   |   |
| $\Gamma_{rot}$ | 3 | -1 | -1 | -1 | 3 | -1 | -1 | -1 |   |   |

| $D_{2h}$ | E | $C_{2}$(z) | $C_{2}$(y) | $C_{2}$(x) | i | $\sigma_{xy}$ | $\sigma_{xz}$ | $\sigma_{yz}$ |
|---|---|---|---|---|---|---|---|---|
| $\Gamma_{xyz}$ | 3 | -1 | -1 | -1 | -3 | 1 | 1 | 1 |
| $\Gamma_{unm}$ | 6 | 0 | 0 | 2 | 0 | 6 | 2 | 0 |
| $\Gamma_{tot}$ | 18 | 0 | 0 | -2 | 0 | 6 | 2 | 0 |
| $\Gamma_{xyz}$ | 3 | -1 | -1 | -1 | -3 | 1 | 1 | 1 |
| $\Gamma_{tot}-\Gamma_{xyz}$ | 15 | 1 | 1 | -1 | 3 | 5 | 1 | -1 |
| $\Gamma_{rot}$ | 3 | -1 | -1 | -1 | 3 | -1 | -1 | -1 |
| $\Gamma_{vib}$ | 12 | 2 | 2 | 0 | 0 | 6 | 2 | 0 |

Decomposing to the individual components:

| $D_{2h}$ | E | $C_{2}$(z) | $C_{2}$(y) | $C_{2}$(x) | i | $\sigma_{xy}$ | $\sigma_{xz}$ | $\sigma_{yz}$ | sum | # (sum/h) |
|---|---|---|---|---|---|---|---|---|---|---|
| $A_{g} \cdot\Gamma_{vib}$ | (1)(12) | (1)(2) | (1)(2) | (1)(0) | (1)(0) | (1)(6) | (1)(2) | (1)(0) | 24 | 3 |
| $B_{1g}\cdot \Gamma_{vib}$ | (1)(12) | (1)(2) | (-1)(2) | (-1)(0) | (1)(0) | (1)(6) | (-1)(2) | (-1)(0) | 16 | 2 |
| $B_{2g}\cdot\Gamma_{vib}$ | (1)(12) | (-1)(2) | (1)(2) | (-1)(0) | (1)(0) | (-1)(6) | (1)(2) | (-1)(0) | 8 | 1 |
| $B_{3g}\cdot \Gamma_{vib}$ | (1)(12) | (-1)(2) | (-1)(2) | (1)(0) | (1)(0) | (-1)(6) | (-1)(2) | (1)(0) | 0 | 0 |
| $A_{u}\cdot \Gamma_{vib}$ | (1)(12) | (1)(2) | (1)(2) | (1)(0) | (-1)(0) | (-1)(6) | (-1)(2) | (-1)(0) | 8 | 1 |
| $B_{1u}\cdot \Gamma_{vib}$ | (1)(12) | (1)(2) | (-1)(2) | (-1)(0) | (-1)(0) | (-1)(6) | (1)(2) | (1)(0) | 8 | 1 |
| $B_{2u}\cdot \Gamma_{vib}$ | (1)(12) | (-1)(2) | (1)(2) | (-1)(0) | (-1)(0) | (1)(6) | (-1)(2) | (1)(0) | 16 | 2 |
| $B_{3u}\cdot \Gamma_{vib}$ | (1)(12) | (-1)(2) | (-1)(2) | (1)(0) | (-1)(0) | (1)(6) | (1)(2) | (-1)(0) | 16 | 2 |

So $\Gamma_{vib} = 3A_{g} + 2B_{1g} + B_{2g} + A_{u} + B_{1u} + 2B_{2u} + 2B_{3u}$. Of these, the six gerade modes will be Raman active, and the five $B_{nu}$ modes ($n = 1, 2, 3$) will be infrared active. The $A_{u}$ mode will be dark.

1. Calculation performed using Gaussian 98 (http://www.gaussian.com/) using the WebMO (http://www.webmo.net/) web-based interface.

4.06: References

Fleming, P. E., & Mathews, C. W. (1996). A Reanalysis of the A$^1\Pi$–X$^1\Sigma^+$ Transition of AlBr. Journal of Molecular Spectroscopy, 175(1), 31-36. doi:10.1006/jmsp.1996.0005

Gaydon, A. G. (1946). The determination of dissociation energies by the Birge-Sponer extrapolation. Proceedings of the Physical Society, 58(5), 525-537.

Meyer, C. F., & Levin, A. A. (1929). Physical Review, 34, 44.

Morse, P. M. (1929). Diatomic Molecules According to the Wave Mechanics. II. Vibrational Levels. Physical Review, 34(1), 57-64. doi:10.1103/PhysRev.34.57

4.07: Vocabulary and Concepts

anharmonicity constant
direct product
even function
Hermite polynomials
odd function
potential energy surface
Taylor series
term values
tunneling

4.08: Problems

1. For each molecule, calculate the reduced mass (in kg) and the force constant for the bond (in N/m).

| Molecule | $\omega_e$ (cm$^{-1}$) | $\mu$ (kg) | k (N/m) |
|---|---|---|---|
| $^{1}H^{79}Br$ | 2648.975 |   |   |
| $^{35}Cl_{2}$ | 559.72 |   |   |
| $^{12}C^{16}O$ | 2169.81358 |   |   |
| $^{69}Ga^{35}Cl$ | 365.3 |   |   |

2. The typical carbonyl stretching frequency is on the order of 1600-1900 $cm^{-1}$. Why is this value smaller than the value of $\omega_e$ for $CO$ given in the table above?

3.
The first few Hermite polynomials are given below. v $H_{v}(y)$ 0 1 1 $2y$ 2 $4y^{2} - 2$ $H_{v+1}(y) = 2yH_{v}(y) - 2vH_{v-1}(y)$ 1. Use the recursion relation to generate the functions $H_{3}(y)$ and $H_{4}(y)$. 2. Demonstrate that the first three Hermite polynomials ($H_{0}(y)$, $H_{1}(y)$ and $H_{2}(y)$) form an orthogonal set. 1. The Morse Potential function is given by $U\left(x\right)=D_e\left(1-e^{-\beta x}\right)^2\nonumber$ where $x = (r - r_{e})$. 1. Find an expression for the force constant of a Morse Oscillator bond by evaluating $k=\left.\dfrac{d^{2}U}{dx^{2}}\right|_{x=0}$. 2. For $^{1} H ^{35} Cl$, $D_{e} = 7.31 \times 10^{-19}\ J$ and $\beta = 1.8 \times 10^{10}\ m^{-1}$. Use your above expression to evaluate k for the bond in HCl. 3. On what shortcoming of the Harmonic Oscillator model does the Morse Potential improve? What shortcoming does the Morse model share with that of a Harmonic Oscillator? 1. The following data are observed in the vibrational overtone spectrum in $^{1}H ^{35} Cl$ (Meyer & Levin, 1929). $v' \leftarrow v''$ ${\widetilde{\nu }}_{obs}$ ($cm ^{-1}$ ) $1 \leftarrow 0$ 2885.9 $2 \leftarrow 0$ 5666.8 $3 \leftarrow 0$ 8347.0 $4 \leftarrow 0$ 10923.1 $5 \leftarrow 0$ 13396.5 From these data, calculate a set of $\Delta G_{v+\frac{1}{2}}$ values. Fit these results to the form $\mathrm{\Delta }G_{v+\frac{1}{2}}={\omega }_e-2\ {\omega }_ex_e(v+1) \nonumber$ to determine values for $\omega_e$ and $\omega_ex_{e}$ for $HCl$. 1. The following wavenumber frequencies are reported for the band origins for the $1 - v''$ bands in an electronic transition of a diatomic molecule. Using the Birge-Sponer method, determine the dissociation energy of the molecule in its ground electronic state.
v" Wavenumber ($cm ^{-1}$ ) $\Delta G _{v+\frac{1}{2}}$ ($cm ^{- 1}$ ) 19586.9 19522.3 19504.8 19465.9 19418.3 19375.1 19323.2 19275.7 19223.8 19167.6 19111.4 19050.9 18990.4 18925.6 18860.7 18795.9 18722.4 18653.3 18579.8 18506.3 27 18428.5 18342.1 18259.9 18177.8 18091.5 17996.3 17909.8 17814.8 17719.7 17624.6
One of the most powerful tools for elucidating molecular structure is the analysis of rotationally resolved molecular spectra. These can be observed in the microwave, infrared, and visible/ultraviolet regions of the spectrum. The rigid rotor (or rigid rotator) problem provides the idealized model that chemists use to describe the rotational motion of a molecule. In this chapter, we will explore the quantum mechanical model of a rotating body, and apply the results to lay the foundation for an understanding of the rotational structure in molecular spectra. We’ll look at the shortcomings of the model when applying it to real molecules (which, as we saw in the previous chapter, do not have rigid bonds!) and apply these results to the interpretation of pure rotational spectra (generally found in the microwave region of the spectrum) and rotation-vibration spectra (accounting for the rotational structure that is observed in infrared spectra of molecules). • 5.1: Spherical Polar Coordinates The description of a rotating molecule in Cartesian coordinates would be very cumbersome. The problem is actually much easier to solve in spherical polar coordinates. • 5.2: Potential Energy and the Hamiltonian Since there is no energy barrier to rotation, there is no potential energy involved in the rotation of a molecule. All of the energy is kinetic energy. • 5.3: Solution to the Schrödinger Equation The time-independent Schrödinger equation can be written as follows. • 5.4: Spherical Harmonics The solutions to the rigid rotor Hamiltonian are very important in a number of areas in chemistry and physics. The eigenfunctions are known as the spherical harmonics and they appear in every problem that has spherical symmetry. • 5.5: Angular Momentum The Spherical Harmonics are involved in a number of problems where angular momentum is important (including the Rigid Rotor problem, the H-atom problem and anything else where spherical symmetry is involved.)
• 5.6: Application to the Rotation of Real Molecules While the spherical harmonics are the wavefunctions that describe the rotational motion of a rigid rotator, the names of the quantum numbers are changed to reflect the type of angular momentum encountered in the problem. • 5.7: Spectroscopy The experimental determination of spectroscopic rotational constants provides a very precise set of data describing molecular structure. To see how experimental measurements inform the determination of molecular structure, let’s examine what is to be expected in the pure rotational spectrum of a molecule first. • 5.8: References • 5.9: Vocabulary and Concepts • 5.10: Problems Thumbnail: The rigid rotor model for a diatomic molecule. (CC BY-SA 3.0 Unported; Mysterioso via Wikipedia) 05: The Rigid Rotor and Rotational Spectroscopy The description of a rotating molecule in Cartesian coordinates would be very cumbersome. The problem is actually much easier to solve in spherical polar coordinates. Consider a particle that is located in space at some arbitrary point (x,y,z). In spherical polar coordinates, the position of a particle is also described by three variables, namely $\mathrm{r}, \theta$, and $\phi$. These variables are defined according to the diagram. The distance from the origin to the point is specified by r. $\theta$ gives the angle formed by the position vector of the point and the positive z-axis. $\phi$ gives the angle of rotation from the positive $x$-axis of the projection of the position vector into the xy plane. The ranges of possible values for $\mathrm{r}, \theta$ and $\phi$ are given by \begin{aligned} &0 \leq r \leq \infty \ &0 \leq \theta \leq \pi \ &0 \leq \phi \leq 2 \pi \end{aligned} The coordinates of any point can be transformed from spherical polar coordinates to Cartesian coordinates using the following equations.
$\begin{gathered} x=r \sin \theta \cos \phi \ y=r \sin \theta \sin \phi \ z=r \cos \theta \end{gathered} \nonumber$ The coordinates can be transformed from Cartesian coordinates to spherical polar coordinates by these equations. \begin{aligned} r &=\sqrt{x^{2}+y^{2}+z^{2}} \ \theta &=\cos ^{-1}\left(\dfrac{z}{\sqrt{x^{2}+y^{2}+z^{2}}}\right) \ \phi &=\tan ^{-1}\left(\dfrac{y}{x}\right) \end{aligned} \nonumber 5.02: Potential Energy and the Hamiltonian Since there is no energy barrier to rotation, there is no potential energy involved in the rotation of a molecule. All of the energy is kinetic energy. This simplifies the writing of the Hamiltonian. In Cartesian coordinates, the Hamiltonian can be written \begin{aligned} \hat{H} &=-\dfrac{\hbar^{2}}{2 \mu} \nabla^{2} \ &=-\dfrac{\hbar^{2}}{2 \mu}\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right) \end{aligned}\nonumber In spherical polar coordinates, the Hamiltonian can be written \begin{aligned} \hat{H} &=-\dfrac{\hbar^{2}}{2 \mu} \nabla^{2} \ &=-\dfrac{\hbar^{2}}{2 \mu}\left(\dfrac{1}{r^{2}} \dfrac{\partial}{\partial r} r^{2} \dfrac{\partial}{\partial r}+\dfrac{1}{r^{2} \sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{r^{2} \sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right) \end{aligned}\nonumber For the rigid rotor problem, $\mathrm{r}$ is taken to be a constant, simplifying the operator. $\hat{H}=-\dfrac{\hbar^{2}}{2 \mu r^{2}}\left(\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right)\nonumber$ The expression $\mu \mathrm{r}^{2}$ is the moment of inertia for the molecule. This value shows up often in problems involving the rotation of a molecule.
$I=\mu r^{2}\nonumber$ While the expression for the Hamiltonian in spherical polar coordinates looks considerably more cumbersome than the Hamiltonian expressed in Cartesian coordinates, it will still be simpler to solve the problem describing the rotation of a molecule.
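The transformations between Cartesian and spherical polar coordinates given above are easy to check numerically. A minimal Python sketch (the function names are our own; `arctan2` is used in place of a bare inverse tangent so that $\phi$ lands in the correct quadrant over the full $0$ to $2\pi$ range):

```python
import numpy as np

def to_cartesian(r, theta, phi):
    """Spherical polar (r, theta, phi) -> Cartesian (x, y, z).

    theta is the polar angle measured from the +z axis;
    phi is the azimuthal angle measured from the +x axis.
    """
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

def to_spherical(x, y, z):
    """Cartesian (x, y, z) -> spherical polar (r, theta, phi)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)              # 0 <= theta <= pi
    phi = np.arctan2(y, x) % (2 * np.pi)  # 0 <= phi < 2*pi
    return r, theta, phi

# Round trip: converting to Cartesian and back recovers (r, theta, phi).
print(to_spherical(*to_cartesian(2.0, 1.0, 4.0)))
```

The round trip reproduces the original point, which is a quick way to confirm that the inverse-trigonometric forms are consistent with the forward transformation.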
The time-independent Schrödinger equation can be written as follows. $\begin{gathered} \hat{H} \psi(\theta, \phi)=E \psi(\theta, \phi) \ -\dfrac{\hbar^{2}}{2 \mu r^{2}}\left(\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right) \psi(\theta, \phi)=E \psi(\theta, \phi) \end{gathered}\nonumber$ Since the Hamiltonian can be expressed as a sum of operators, one in $\theta$ and the other in $\phi$, it follows that the wavefunction can be expressed as a product of two functions. $\psi(\theta, \phi)=\Theta(\theta) \Phi(\phi)\nonumber$ Making this substitution, the equation becomes $-\dfrac{\hbar^{2}}{2 \mu r^{2}}\left(\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right) \Theta(\theta) \Phi(\phi)=E \Theta(\theta) \Phi(\phi)\nonumber$ With minimal rearrangement, the following result can be derived $\dfrac{\Phi(\phi)}{\sin \theta} \dfrac{d}{d \theta} \sin \theta \dfrac{d}{d \theta} \Theta(\theta)+\dfrac{\Theta(\theta)}{\sin ^{2} \theta} \dfrac{d^{2}}{d \phi^{2}} \Phi(\phi)=-\dfrac{2 \mu r^{2} E}{\hbar^{2}} \Theta(\theta) \Phi(\phi)\nonumber$ And dividing both sides by $\Theta(\theta) \Phi(\phi)$ produces $\left(\dfrac{1}{\Theta(\theta) \sin \theta} \dfrac{d}{d \theta} \sin \theta \dfrac{d}{d \theta} \Theta(\theta)\right)+\left(\dfrac{1}{\Phi(\phi) \sin ^{2} \theta} \dfrac{d^{2}}{d \phi^{2}} \Phi(\phi)\right)=-\dfrac{2 \mu r^{2} E}{\hbar^{2}}\nonumber$ This expression suggests that the sum of two functions, one only in $\theta$ and the other only in $\phi$, yields a constant. As the two variables $\theta$ and $\phi$ are independent of one another, the only way this can be true is if each equation is itself equal to a constant.
\begin{aligned} \dfrac{1}{\sin \theta} \dfrac{d}{d \theta} \sin \theta \dfrac{d}{d \theta} \Theta(\theta) &=-\lambda_{1}^{2} \Theta(\theta) \ \dfrac{1}{\sin ^{2} \theta} \dfrac{d^{2}}{d \phi^{2}} \Phi(\phi) &=-\lambda_{2}^{2} \Phi(\phi) \end{aligned}\nonumber where $\lambda_{1}$ and $\lambda_{2}$ are constants of separation (the form of which is chosen for convenience) which satisfy the following relationship. \begin{aligned} -\lambda_{1}^{2}-\lambda_{2}^{2} &=-\lambda^{2} \ &=-\dfrac{2 \mu r^{2} E}{\hbar^{2}} \end{aligned}\nonumber Rotation in the xy plane $(\theta=\pi / 2)$ We’ll tackle the equation in $\phi$ first. One way to picture this part of the equation is that it describes the rotation of a molecule in the xy plane only (defined by $\theta=\pi / 2$). Given this constraint, it is clear that the $\sin ^{2}(\theta)$ term becomes unity, since $\sin (\pi / 2)=1$. The problem then becomes $\dfrac{d^{2}}{d \phi^{2}} \Phi(\phi)=-\dfrac{2 \mu r^{2} E}{\hbar^{2}} \Phi(\phi)\nonumber$ If a substitution is made for the constants on the right-hand side of the equation, $-m_{l}^{2}=-\dfrac{2 \mu r^{2} E}{\hbar^{2}}\nonumber$ we get $\dfrac{d^{2}}{d \phi^{2}} \Phi(\phi)=-m_{l}^{2} \Phi(\phi)\nonumber$ which should look like a familiar problem. Instead of using sine and cosine functions this time, we will use a complex exponential function. $\Phi(\phi)=A_{m_{l}} e^{i m_{l} \phi}\nonumber$ The boundary condition for this problem is that the function $\Phi(\phi)$ must be single valued.
Therefore $\Phi(\phi)=\Phi(\phi+2 \pi)\nonumber$ So $A_{m_{l}} e^{i m_{l} \phi}=A_{m_{l}} e^{i m_{l}(\phi+2 \pi)}\nonumber$ Dividing both sides by $A_{m_{l}}$ and expressing the second exponential as a product yields \begin{aligned} e^{i m_{l} \phi} &=e^{i m_{l} \phi} e^{i m_{l} 2 \pi} \ 1 &=e^{i m_{l} 2 \pi} \end{aligned}\nonumber Using the Euler relationship $e^{i \alpha}=\cos \alpha+i \sin \alpha\nonumber$ we see that $1=\cos \left(2 m_{l} \pi\right)+i \sin \left(2 m_{l} \pi\right)\nonumber$ In order for this to be true, the sine term must vanish and the cosine term must become unity. This is true if $m_{l}$ is an integer, either positive or negative and including zero. $m_{l}=\ldots,-2,-1,0,1,2, \ldots\nonumber$ Energy Levels As such, the energy of a rigid rotator limited to rotation in the xy plane is given by $E_{m_{l}}=\dfrac{m_{l}^{2} \hbar^{2}}{2 \mu r^{2}} \quad m_{l}=0, \pm 1, \pm 2, \ldots\nonumber$ It is important to note that these functions are doubly degenerate for any non-zero value of $m_{l}$ as there are always two values of $m_{l}$ that yield the same energy. Normalization The wavefunctions can be normalized in the usual way. \begin{aligned} \int_{0}^{2 \pi}\left(A_{m_{l}} e^{i m_{l} \phi}\right)^{*}\left(A_{m_{l}} e^{i m_{l} \phi}\right) d \phi &=1 \ &=A_{m_{l}}^{2} \int_{0}^{2 \pi} e^{-i m_{l} \phi} e^{i m_{l} \phi} d \phi \ &=A_{m_{l}}^{2} \int_{0}^{2 \pi} d \phi \ &=A_{m_{l}}^{2}[\phi]_{0}^{2 \pi} \ &=2 \pi A_{m_{l}}^{2} \ \sqrt{\dfrac{1}{2 \pi}} &=A_{m_{l}} \end{aligned}\nonumber As was the case with the particle in a box problem, the normalization factor does not depend on the quantum number. The wavefunctions can be expressed $\Phi(\phi)=\sqrt{\dfrac{1}{2 \pi}} e^{i m_{l} \phi} \quad m_{l}=0, \pm 1, \pm 2, \ldots\nonumber$ Rotation in three dimensions We are now ready to tackle the more complicated problem of rotation in three dimensions. Recall the Schrödinger equation as was previously written.
$\dfrac{\Phi(\phi)}{\sin \theta} \dfrac{d}{d \theta} \sin \theta \dfrac{d}{d \theta} \Theta(\theta)+\dfrac{\Theta(\theta)}{\sin ^{2} \theta} \dfrac{d^{2}}{d \phi^{2}} \Phi(\phi)=-\dfrac{2 \mu r^{2} E}{\hbar^{2}} \Theta(\theta) \Phi(\phi)\nonumber$ We already know the form of the solutions for the $\Phi(\phi)$ part of the equation. However, due to the $1 / \sin ^{2} \theta$ term in the $\Phi$ equation, it is possible that the solution to the $\Theta$ part of the equation will introduce a new constraint on the quantum number $m_{l}$. Energy Levels The only well-behaved functions (functions that satisfy all of the boundary conditions) have energies given by $E_{l}=\dfrac{l(l+1) \hbar^{2}}{2 \mu r^{2}} \quad l=0,1,2, \ldots\nonumber$ The quantum number $l$ indicates the angular momentum. $m_{l}$ is the z-axis component of angular momentum. The z-axis is treated differently than the $\mathrm{x}$ - or $y$-axes due to the unique manner in which the z-axis is treated in the choice of the spherical polar coordinate system (since $\theta$ is taken as the angle of the position vector with the positive z-axis.) Also, as will be shown later, the operator $\hat{L}_{z}$, the z-axis angular momentum component operator, has a special relationship with the Hamiltonian (as does the squared angular momentum operator, $\hat{L}^{2}$.) Degeneracy The interpretation of the quantum number $m_{l}$ is that it gives the magnitude of the z-axis component of the angular momentum vector. And since no vector can have a component with a magnitude greater than that of the vector itself, the constraint on $m_{l}$ that is introduced by this solution is $\left|m_{l}\right| \leq l\nonumber$ so for a given value of $l$, there are $(2 l+1)$ values of $\mathrm{m}_{\mathrm{l}}$ that fit the constraint. And since the energy expression does not depend on $m_{l}$, it is clear that each energy level has a degeneracy that is given by $(2 l+1)$.
That can be demonstrated as in the diagram below for an angular momentum vector of magnitude 2 ($l=2$). As can be seen in the diagram, there are five possible values of $m_{l}$: $+2,+1,0,-1$ and $-2$. These five values correspond to the $(2 l+1)$ degeneracy predicted for a state with total angular momentum given by $l=2$ (and therefore $2 l+1=5$ ). When we see the wavefunctions in more detail, there will be a new reason for this constraint on the quantum number $m_{l}$. Wavefunctions For convenience, we’ll first look at the solutions where $m_{l}=0$. The wavefunctions under this constraint have two parts, a normalization constant and a Legendre polynomial in $\cos (\theta)$. The Legendre polynomials are another set of orthogonal polynomials, similar to the Hermite polynomials that occur in the solution to the harmonic oscillator problem. The Legendre polynomials can be generated by the following relationship $P_{l}(x)=\dfrac{1}{2^{l} l !} \dfrac{d^{l}}{d x^{l}}\left(x^{2}-1\right)^{l}\nonumber$ The first few Legendre polynomials are given below. $\mathbf{l}$ $\mathbf{P}_{l}(\mathbf{x})$ $\mathbf{P}_{l}(\cos \theta)$ 0 1 1 1 $x$ $\cos (\theta)$ 2 $\dfrac{1}{2}\left(3 x^{2}-1\right)$ $\dfrac{1}{2}\left[3 \cos ^{2}(\theta)-1\right]$ 3 $\dfrac{1}{2}\left(5 x^{3}-3 x\right)$ $\dfrac{1}{2}\left[5 \cos ^{3}(\theta)-3 \cos (\theta)\right]$ A recursion relation for the Legendre Polynomials is given by $(l+1) P_{l+1}(x)=(2 l+1) x P_{l}(x)-l P_{l-1}(x)\nonumber$ When $m_{l}=0$, the spherical harmonic function $Y_{l}^{m}(\theta, \phi)=\Theta(\theta) \Phi(\phi)$ becomes just $\Theta(\theta)$, since the $\phi$ dependence disappears. The $\Theta(\theta)$ part of the wavefunctions is given by $\Theta(\theta)=\left[\dfrac{(2 l+1)}{2}\right]^{\frac{1}{2}} P_{l}(\cos \theta)\nonumber$ The functions are slightly different for $m_{l} \neq 0$. In this case, the functions involve a set of functions that are related to the Legendre Polynomials called the associated Legendre polynomials.
These functions are generated from the Legendre polynomials via the following relationship. $P_{l}^{\left|m_{l}\right|}(x)=(-1)^{\left|m_{l}\right|}\left(1-x^{2}\right)^{\left|m_{l}\right| / 2} \dfrac{d^{\left|m_{l}\right|}}{d x^{\left|m_{l}\right|}} P_{l}(x)\nonumber$ Note that for any value of $\left|m_{l}\right|>l$, the derivative of $\mathrm{P}_{l}(\mathrm{x})$ vanishes. $\dfrac{d^{\left|m_{l}\right|}}{d x^{\left|m_{l}\right|}} P_{l}(x)=0 \quad \text { for }\left|m_{l}\right|>l\nonumber$ And this is the origin of the constraint on $m_{l}$. The associated Legendre polynomials depend on both $l$ and $m_{l}$. Also, given the $\left|m_{l}\right|$ dependence, the sign of $m_{l}$ does not matter. (The only place that the sign of $m_{l}$ matters is in the $\Phi(\phi)$ function.) The first few associated Legendre Polynomials are given in the table below. $l$ $\left|m_{l}\right|$ $P_{l}^{\left|m_{l}\right|}(\mathbf{x})$ $P_{l}^{\left|m_{l}\right|}(\cos \theta)$ 0 0 1 1 1 0 $\mathrm{x}$ $\cos (\theta)$ 1 1 $\left(1-\mathrm{x}^{2}\right)^{\dfrac{1}{2}}$ $\sin (\theta)$ 2 0 $\dfrac{1}{2}\left(3 \mathrm{x}^{2}-1\right)$ $\dfrac{1}{2}\left(3 \cos ^{2}(\theta)-1\right)$ 2 1 $3 \mathrm{x}\left(1-\mathrm{x}^{2}\right)^{\dfrac{1}{2}}$ $3 \cos (\theta) \sin (\theta)$ 2 2 $3\left(1-\mathrm{x}^{2}\right)$ $3 \sin ^{2}(\theta)$
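The Legendre recursion relation is straightforward to implement. The sketch below (our own illustration, not part of the original text) builds the coefficient list of $P_l(x)$, constant term first, starting from $P_0 = 1$ and $P_1 = x$, and reproduces the tabulated polynomials:

```python
def legendre(l):
    """Coefficients of P_l(x), constant term first, built from the recursion
    (l+1) P_{l+1}(x) = (2l+1) x P_l(x) - l P_{l-1}(x)."""
    p_prev, p = [1.0], [0.0, 1.0]   # P_0 and P_1
    if l == 0:
        return p_prev
    for n in range(1, l):
        xp = [0.0] + p              # multiply P_n by x: shift coefficients up one power
        p_next = [((2 * n + 1) * xp[k]
                   - n * (p_prev[k] if k < len(p_prev) else 0.0)) / (n + 1)
                  for k in range(len(xp))]
        p_prev, p = p, p_next
    return p

print(legendre(2))  # [-0.5, 0.0, 1.5]       i.e. (3x^2 - 1)/2
print(legendre(3))  # [0.0, -1.5, 0.0, 2.5]  i.e. (5x^3 - 3x)/2
```

Evaluating the returned coefficients at $x = \cos\theta$ gives the $P_l(\cos\theta)$ column of the table directly.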
The rigid rotor problem was solved using the Schrödinger equation $-\dfrac{\hbar^{2}}{2 \mu r^{2}}\left(\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right) \psi(\theta, \phi)=E \psi(\theta, \phi)\nonumber$ As it turns out, the solutions to this equation are very important in a number of areas in chemistry and physics. The eigenfunctions are known as the spherical harmonics $\left(Y_{l}^{m_{l}}(\theta, \phi)\right)$ and they appear in every problem that has spherical symmetry. The Spherical Harmonics satisfy the relationship $\left(\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right) Y_{l}^{m_{l}}(\theta, \phi)=-l(l+1) Y_{l}^{m_{l}}(\theta, \phi)\nonumber$ Each function $Y_{l}^{m_{l}}(\theta, \phi)$ has three parts: 1) a normalization constant, 2) an associated Legendre polynomial in $\cos (\theta)$, and 3) an imaginary (for $m_{l} \neq 0$ ) exponential in $\phi$. $Y_{l}^{m_{l}}(\theta, \phi)=\left[\dfrac{(2 l+1)\left(l-\left|m_{l}\right|\right) !}{4 \pi\left(l+\left|m_{l}\right|\right) !}\right]^{\dfrac{1}{2}} P_{l}^{\left|m_{l}\right|}(\cos \theta) e^{i m_{l} \phi}\nonumber$ The first few Spherical harmonics are shown in the table below. $l$ $m_{l}$ $Y_{l}^{m_{l}}(\theta, \phi)$ 0 0 $\sqrt{\dfrac{1}{4 \pi}}$ 1 0 $\sqrt{\dfrac{3}{4 \pi}} \cos (\theta)$ 1 $\pm 1$ $\sqrt{\dfrac{3}{8 \pi}} \sin (\theta) e^{\pm i \phi}$ 2 0 $\sqrt{\dfrac{5}{16 \pi}}\left(3 \cos ^{2}(\theta)-1\right)$ 2 $\pm 1$ $\sqrt{\dfrac{15}{8 \pi}} \sin (\theta) \cos (\theta) e^{\pm i \phi}$ 2 $\pm 2$ $\sqrt{\dfrac{15}{32 \pi}} \sin ^{2}(\theta) e^{\pm 2 i \phi}$ Notice the $(2 l+1)$ degeneracy in these functions, due to the $(2 l+1)$ values of $m_{l}$ for each value of $l$.
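As a numerical sanity check (our own, not part of the original text), each spherical harmonic should integrate to one over the sphere, $\int_0^{2\pi}\!\!\int_0^{\pi} |Y_l^{m_l}|^2 \sin\theta \, d\theta \, d\phi = 1$. A crude grid quadrature confirms this for two entries of the table:

```python
import numpy as np

# Grid over the sphere; the sin(theta) factor is the spherical area element.
theta = np.linspace(0.0, np.pi, 1000)
phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dA = (theta[1] - theta[0]) * (phi[1] - phi[0])

def norm2(Y):
    """Approximate integral of |Y|^2 sin(theta) over the full sphere."""
    return float(np.sum(np.abs(Y) ** 2 * np.sin(TH)) * dA)

Y_10 = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(TH)
Y_22 = np.sqrt(15.0 / (32.0 * np.pi)) * np.sin(TH) ** 2 * np.exp(2j * PH)

print(norm2(Y_10), norm2(Y_22))  # both ~1
```

The same check applied with $|m_l| > l$ (where the associated Legendre polynomial vanishes) would simply give zero, which is another way of seeing the constraint on $m_l$.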
Also, it is useful to note that these functions all have $l$ angular nodes (values of $\theta$ that cause the wavefunction to vanish). For the $l=1$ wavefunctions, these nodes occur at $\theta=\pi / 2$ for $m_{l}=0$ and at $\theta=0$ for $m_{l}=\pm 1$. The number of nodes in each wavefunction is a useful property to know when discussing how these functions relate to the radial wavefunction in the Hydrogen atom. 5.05: Angular Momentum The Spherical Harmonics are involved in a number of problems where angular momentum is important (including the Rigid Rotor problem, the H-atom problem and anything else where spherical symmetry is involved.) Angular momentum is a vector quantity that is given by the cross product of position and momentum. $\overrightarrow{\mathbf{L}}=\overrightarrow{\mathbf{r}} \times \overrightarrow{\mathbf{p}}\nonumber$ This quantity can be calculated from the following determinant. \begin{aligned} \overrightarrow{\mathbf{L}} &=\overrightarrow{\mathbf{r}} \times \overrightarrow{\mathbf{p}}=\left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \ x & y & z \ p_{x} & p_{y} & p_{z} \end{array}\right| \ &=\left(y p_{z}-z p_{y}\right) \mathbf{i}+\left(z p_{x}-x p_{z}\right) \mathbf{j}+\left(x p_{y}-y p_{x}\right) \mathbf{k} \end{aligned}\nonumber Substituting the operators for the components of linear momentum, the operators that correspond to the three components of angular momentum are \begin{aligned} &\widehat{L_{x}}=-i \hbar\left(y \dfrac{\partial}{\partial z}-z \dfrac{\partial}{\partial y}\right) \ &\widehat{L_{y}}=-i \hbar\left(z \dfrac{\partial}{\partial x}-x \dfrac{\partial}{\partial z}\right) \ &\widehat{L_{z}}=-i \hbar\left(x \dfrac{\partial}{\partial y}-y \dfrac{\partial}{\partial x}\right) \end{aligned}\nonumber These can be used to determine the square of the angular momentum, which is given by the dot product of $\overrightarrow{\mathbf{L}}$ with itself.
$\overrightarrow{\mathbf{L}} \cdot \overrightarrow{\mathbf{L}}=L^{2}=L_{x}^{2}+L_{y}^{2}+L_{z}^{2}\nonumber$ Similarly, the operator for the square of the angular momentum is given by $\hat{L}^{2}=\hat{L}_{x}^{2}+\hat{L}_{y}^{2}+\hat{L}_{z}^{2}\nonumber$ In spherical polar coordinates, the angular momentum operators are given by the expressions \begin{aligned} &\hat{L}_{x}=i \hbar\left(\sin \phi \dfrac{\partial}{\partial \theta}+\cot \theta \cos \phi \dfrac{\partial}{\partial \phi}\right) \ &\hat{L}_{y}=i \hbar\left(-\cos \phi \dfrac{\partial}{\partial \theta}+\cot \theta \sin \phi \dfrac{\partial}{\partial \phi}\right) \ &\hat{L}_{z}=-i \hbar \dfrac{\partial}{\partial \phi} \end{aligned}\nonumber And the angular momentum squared operator is given by $\hat{L}^{2}=-\hbar^{2}\left[\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right]\nonumber$ For the Rigid-Rotator problem, it is interesting to note that the Hamiltonian is very closely related to the angular momentum squared operator. \begin{aligned} \hat{H} &=-\dfrac{\hbar^{2}}{2 \mu r^{2}}\left[\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \sin \theta \dfrac{\partial}{\partial \theta}+\dfrac{1}{\sin ^{2} \theta} \dfrac{\partial^{2}}{\partial \phi^{2}}\right] \ &=\dfrac{1}{2 I} \hat{L}^{2} \end{aligned}\nonumber The eigenfunctions of the $\hat{L}^{2}$ operator are the Spherical Harmonics, $Y_{l}^{m_{l}}(\theta, \phi)$.
These functions have the important properties that \begin{aligned} &\hat{H} Y_{l}^{m_{l}}(\theta, \phi)=\dfrac{\hbar^{2} l(l+1)}{2 \mu r^{2}} Y_{l}^{m_{l}}(\theta, \phi) \ &\hat{L}^{2} Y_{l}^{m_{l}}(\theta, \phi)=\hbar^{2} l(l+1) Y_{l}^{m_{l}}(\theta, \phi) \ &\hat{L}_{z} Y_{l}^{m_{l}}(\theta, \phi)=\hbar m_{l} Y_{l}^{m_{l}}(\theta, \phi) \end{aligned}\nonumber Seeing as the spherical harmonics are eigenfunctions of all three of these operators, what is implied about the commutators of these operators? There are important relationships between the angular momentum operators. Each of the operators corresponding to the components of angular momentum commutes with the $\hat{L}^{2}$ operator, but they do not commute with one another. This implies that one can measure the squared angular momentum and only one component of angular momentum. This is generally taken as the z-axis component of angular momentum as the z-axis has special properties due to the manner in which the spherical polar coordinates have been defined. $\begin{gathered} {\left[\hat{L}^{2}, \hat{L}_{x}\right]=\left[\hat{L}^{2}, \hat{L}_{y}\right]=\left[\hat{L}^{2}, \hat{L}_{z}\right]=0} \ {\left[\hat{L}_{x}, \hat{L}_{y}\right] \neq 0 ;\left[\hat{L}_{y}, \hat{L}_{z}\right] \neq 0 ;\left[\hat{L}_{x}, \hat{L}_{z}\right] \neq 0} \end{gathered}\nonumber$ The commutators involving two components of angular momentum are particularly interesting. Consider the commutator between $\hat{L}_{x}$ and $\hat{L}_{y}$. $\left[\hat{L}_{x}, \hat{L}_{y}\right]=\hat{L}_{x} \hat{L}_{y}-\hat{L}_{y} \hat{L}_{x}\nonumber$ Let’s define each term separately and then take the difference.
\begin{aligned} \hat{L}_{x} \hat{L}_{y} &=(-i \hbar)^{2}\left(y \dfrac{\partial}{\partial z}-z \dfrac{\partial}{\partial y}\right)\left(z \dfrac{\partial}{\partial x}-x \dfrac{\partial}{\partial z}\right) \ &=-\hbar^{2}\left(y \dfrac{\partial}{\partial z} z \dfrac{\partial}{\partial x}-y \dfrac{\partial}{\partial z} x \dfrac{\partial}{\partial z}-z \dfrac{\partial}{\partial y} z \dfrac{\partial}{\partial x}+z \dfrac{\partial}{\partial y} x \dfrac{\partial}{\partial z}\right) \end{aligned}\nonumber The second, third and fourth terms are easy to simplify as the derivatives do not affect the $\mathrm{x}$ or $\mathrm{z}$ variables. The first term, however, requires some application of the product rule. $\hat{L}_{x} \hat{L}_{y}=-\hbar^{2}\left(\left\{y \dfrac{\partial}{\partial x}+y z \dfrac{\partial^{2}}{\partial x \partial z}\right\}-x y \dfrac{\partial^{2}}{\partial z^{2}}-z^{2} \dfrac{\partial^{2}}{\partial x \partial y}+x z \dfrac{\partial^{2}}{\partial y \partial z}\right)\nonumber$ Similarly, \begin{aligned} \hat{L}_{y} \hat{L}_{x} &=(-i \hbar)^{2}\left(z \dfrac{\partial}{\partial x}-x \dfrac{\partial}{\partial z}\right)\left(y \dfrac{\partial}{\partial z}-z \dfrac{\partial}{\partial y}\right) \ &=-\hbar^{2}\left(z \dfrac{\partial}{\partial x} y \dfrac{\partial}{\partial z}-z \dfrac{\partial}{\partial x} z \dfrac{\partial}{\partial y}-x \dfrac{\partial}{\partial z} y \dfrac{\partial}{\partial z}+x \dfrac{\partial}{\partial z} z \dfrac{\partial}{\partial y}\right) \ &=-\hbar^{2}\left(z y \dfrac{\partial^{2}}{\partial x \partial z}-z^{2} \dfrac{\partial^{2}}{\partial x \partial y}-x y \dfrac{\partial^{2}}{\partial z^{2}}+\left\{x \dfrac{\partial}{\partial y}+x z \dfrac{\partial^{2}}{\partial z \partial y}\right\}\right) \end{aligned}\nonumber Taking the difference will cancel all of the second derivative terms, leaving only the first derivative terms behind.
\begin{aligned} \hat{L}_{x} \hat{L}_{y}-\hat{L}_{y} \hat{L}_{x} &=-\hbar^{2}\left(y \dfrac{\partial}{\partial x}-x \dfrac{\partial}{\partial y}\right) \ &=i \hbar \hat{L}_{z} \end{aligned}\nonumber Similarly, it can be shown that \begin{aligned} &{\left[\hat{L}_{y}, \hat{L}_{z}\right]=i \hbar \hat{L}_{x}} \ &{\left[\hat{L}_{z}, \hat{L}_{x}\right]=i \hbar \hat{L}_{y}} \end{aligned}\nonumber 5.06: Application to the Rotation of Real Molecules While the spherical harmonics are the wavefunctions that describe the rotational motion of a rigid rotator, the names of the quantum numbers are changed to reflect the type of angular momentum encountered in the problem. The quantum numbers $l$ and $m_{l}$ should be familiar as these are the ones used in the hydrogen atom problem to describe the orbital angular momentum. However, for rotational motion, these are replaced by $\mathrm{J}$ and $\mathrm{M}_{\mathrm{J}}$. The energy levels of the rigid rotator are therefore given by $E_{J}=J(J+1) \dfrac{\hbar^{2}}{2 \mu r^{2}}\nonumber$ And since $\mathrm{M}_{\mathrm{J}}$ does not appear in the energy level expression, each level has a $(2 \mathrm{~J}+1)$ degeneracy. The spacing between energy levels increases with increasing $\mathrm{J}$ due to the $\mathrm{J}(\mathrm{J}+1)$ dependence (which has a $\mathrm{J}^{2}$ term). This pattern is shown in the diagram below. For spectroscopic measurements, the rotational energy (given the symbol $F_{J}$ ) is often expressed in spectroscopic units, such as $\mathrm{cm}^{-1}$. Also, a spectroscopic constant, B, is used to describe the energy level stack. $F_{J}=\dfrac{E_{J}}{h c}=B J(J+1)\nonumber$ where the spectroscopic constant $B$ is given by $B=\dfrac{h}{8 \pi^{2} c \mu r^{2}}\nonumber$ Thus, by knowing the value of $\mu$, the reduced mass, and measuring the value of $B$, the rotational constant, one can determine the value of $\mathrm{r}$, the bond length.
This is the utility of rotational spectroscopy - it gives us detailed information about molecular structure! Centrifugal Distortion As we know, since they vibrate, real molecules do not have rigid bonds. So it is no surprise to learn that the Rigid Rotor is really just a limiting ideal model, much like the ideal gas law describes limiting ideal behavior. Real molecules, especially when rotating with very high angular momentum, will tend to stretch. In other words, the average bond length will increase with increasing $J$. And given the inverse relationship between $\mathrm{B}$ and bond length ($r$), it is not surprising that the effective $B$ value is smaller at higher levels of $\mathrm{J}$. In fact, this centrifugal distortion problem is well treated by introducing a "distortion constant" $D$ such that $F_{J}=B J(J+1)-D[J(J+1)]^{2}\nonumber$ Naturally, one would expect the distortion constant to be small in the case of a strong, inflexible bond, but larger if the bond is weaker. The Kratzer relation suggests that the distortion constant is determined to a good approximation by $D \approx \dfrac{4 B^{3}}{\omega_{e}^{2}}\nonumber$ For a well-behaved molecule, the distortion constant $\mathrm{D}$ is always smaller in magnitude than $\mathrm{B}$. Some molecules require several distortional constants to yield a reasonable description of their rotational energy level stack. If additional constants are needed, they are introduced as coefficients in a power series of $\mathrm{J}(\mathrm{J}+1)$. $\mathrm{F}_{\mathrm{J}}=\mathrm{BJ}(\mathrm{J}+1)-\mathrm{D}[\mathrm{J}(\mathrm{J}+1)]^{2}+\mathrm{H}[\mathrm{J}(\mathrm{J}+1)]^{3}+\ldots\nonumber$ The power series is truncated at a point that yields a good fit to experimental observations for a given molecule.
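Putting numbers into $B = h/(8\pi^2 c \mu r^2)$ shows how a bond length translates into a rotational constant. A sketch for $^{12}C^{16}O$ (the masses and bond length below are rounded literature values, quoted only for illustration; the vibrational frequency used in the distortion estimate is taken from the problem table earlier in the chapter):

```python
import numpy as np

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e10     # speed of light in cm/s, so B comes out in cm^-1
amu = 1.66053907e-27  # atomic mass unit, kg

m_C, m_O = 12.000, 15.995   # isotope masses, amu
r = 1.128e-10               # approximate CO bond length, m

mu = m_C * m_O / (m_C + m_O) * amu    # reduced mass, kg
B = h / (8 * np.pi**2 * c * mu * r**2)
print(round(B, 3))  # ~1.93 cm^-1 for CO

# Estimate of the centrifugal distortion constant from B and the
# vibrational frequency (omega_e ~ 2170 cm^-1 for CO):
D = 4 * B**3 / 2170.0**2
print(D)  # a few 1e-6 cm^-1, several orders of magnitude smaller than B
```

The several-orders-of-magnitude gap between $D$ and $B$ is why the distortion term only becomes noticeable at high $J$, where the $[J(J+1)]^2$ factor grows quickly.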
The experimental determination of spectroscopic rotational constants provides a very precise set of data describing molecular structure. To see how experimental measurements inform the determination of molecular structure, let’s examine what is to be expected in the pure rotational spectrum of a molecule first.

Microwave Spectroscopy

The rotational selection rule in microwave absorption spectra is $\Delta J=+1\nonumber$ (Selection rules are discussed in more detail in a later section.) The pattern of lines predicted to be observed in a microwave spectrum (a pure rotational spectrum of a molecule) can be derived by taking differences in rotational energy levels. \begin{aligned} \tilde{v}_{J} &=F_{J+1}-F_{J} \\ F_{J+1}-F_{J} &= B(J+1)(J+2)-B J(J+1) \\ &= B\left(J^{2}+3 J+2\right)-B\left(J^{2}+J\right) \\ &= B\left(J^{2}+3 J+2-J^{2}-J\right) \\ &= B(2 J+2) \\ &= 2 B(J+1) \end{aligned}\nonumber This suggests that a pure microwave spectrum should consist of a series of evenly spaced lines, the spacing between which is $2B$. It also suggests that a plot of the line frequency divided by $(J+1)$ should yield a straight and horizontal line, $\dfrac{\tilde{v}_{J}}{(J+1)}=2 B\nonumber$ The inclusion of distortion yields a slightly different conclusion. \begin{aligned} F_{J+1}-F_{J} &=B(J+1)(J+2)-D[(J+1)(J+2)]^{2}-B J(J+1)+D[J(J+1)]^{2} \\ &=B\left(J^{2}+3 J+2-J^{2}-J\right)-D\left[\left(J^{2}+3 J+2\right)^{2}-\left(J^{2}+J\right)^{2}\right] \\ &=B(2 J+2)-D\left(J^{4}+6 J^{3}+13 J^{2}+12 J+4-J^{4}-2 J^{3}-J^{2}\right) \\ &= 2 B(J+1)-D\left(4 J^{3}+12 J^{2}+12 J+4\right) \\ &= 2 B(J+1)-4 D(J+1)^{3} \end{aligned}\nonumber $\dfrac{\tilde{v}_{J}}{(J+1)}=2 B-4 D(J+1)^{2}\nonumber$ This suggests that a plot of $\dfrac{\tilde{v}_{J}}{(J+1)}$ vs. $(J+1)^{2}$ should yield a straight line with slope $-4D$ and intercept $2B$. Consider the following set of data for the microwave spectrum of $^{12} \mathrm{C}^{16} \mathrm{O}$ (Lovas & Krupenie, 1974).
A plot of $\dfrac{\tilde{v}_{J}}{J+1}$ vs. $J$ yields the following plot.

$J$   $\tilde{v}_{J}\left(\mathrm{cm}^{-1}\right)$
0   $3.84503$
1   $7.68992$
2   $11.53451$
3   $15.37867$
4   $19.22223$
5   $23.06506$
6   $26.90701$

Clearly, this is not a horizontal line. The conclusion is that centrifugal distortion is not negligible for this molecule. Including distortion suggests that the plot that should be considered would involve $\dfrac{\tilde{v}_{J}}{(J+1)}$ vs. $(J+1)^{2}$. This yields the following plot, which does yield a straight line! From the fit, one calculates a $B$ value of $1.92253 \mathrm{~cm}^{-1}$ and a $D$ value of $0.00000612 \mathrm{~cm}^{-1}$.

Calculating a Bond Length from Spectroscopic Data

Spectroscopic data (and microwave data in particular) provides extremely high precision information from which bond lengths can be determined. Based on the above data and the masses of carbon-12 (12.00000 amu) and oxygen-16 (15.99491463 amu) (Rosman & Taylor, 1998), a reduced mass for ${ }^{12} \mathrm{C}^{16} \mathrm{O}$ can be calculated as $\mu=\dfrac{m_{C} m_{O}}{m_{C}+m_{O}}=6.85621 \mathrm{amu}=1.1385 \times 10^{-26} \mathrm{~kg}\nonumber$ Recalling the expression for the rotational constant $B$, $B=\dfrac{h}{8 \pi^{2} c \mu r^{2}}\nonumber$ the bond length is given by $r=\sqrt{\dfrac{h}{8 \pi^{2} c \mu B}}\nonumber$ Using the data from above, one calculates a bond length for CO of $r=1.1312 \mathring{\mathrm{A}}$. This value is actually the average value of the bond length in the $v=0$ level. The literature value for the equilibrium bond length (the bond length at the potential minimum) is given by $r_{e}=1.128323 \mathring{\mathrm{A}}$ (Bunker, 1970), which is slightly shorter (as is to be expected.) The extrapolation of data to determine values at the potential minimum is discussed in a later section.

Rotation-Vibration Spectroscopy

Each vibrational level in a molecule will have a whole stack of rotational energy levels.
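The fit and bond-length determination described above can be reproduced with a short least-squares calculation on the tabulated CO lines (the physical constants below are standard CODATA-style values; rounding may shift the last digit relative to the text):

```python
import math

# 12C16O microwave lines (cm^-1), keyed by the lower-state J of each transition.
lines = {0: 3.84503, 1: 7.68992, 2: 11.53451, 3: 15.37867,
         4: 19.22223, 5: 23.06506, 6: 26.90701}

# Plot variables: y = nu/(J+1) vs x = (J+1)^2 has slope -4D and intercept 2B.
x = [(J + 1) ** 2 for J in lines]
y = [nu / (J + 1) for J, nu in lines.items()]

n, sx, sy = len(x), sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

B = intercept / 2   # cm^-1, ~1.92253
D = -slope / 4      # cm^-1, ~6.1e-6

# Bond length from B = h / (8 pi^2 c mu r^2).
h = 6.62607015e-34      # J s
c = 2.99792458e10       # cm/s, so B stays in cm^-1
amu = 1.66053907e-27    # kg
mu = 12.0 * 15.99491463 / (12.0 + 15.99491463) * amu  # reduced mass, kg
r = math.sqrt(h / (8 * math.pi ** 2 * c * mu * B))    # metres, ~1.131 Angstrom
```

The recovered $B$, $D$ and $r$ match the values quoted above to within rounding.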
As such, vibrational transitions will also show rotational fine structure. This fine structure can be analyzed to determine very precise values for molecular structure in much the same way as microwave data for the pure rotational spectrum can be. One method for analyzing these data is that of combination differences, although direct fitting of the data will give better results mathematically. Before beginning a discussion of combination differences, however, it is necessary to discuss selection rules.

Selection Rules and Branch Structure

Selection rules are determined for spectroscopic transitions as those transitions for which the transition moment integral does not vanish. This is because the observed intensities of spectroscopic transitions are proportional to the squared magnitude of the transition moment. The transition moment integral is given by $\int\left(\psi^{\prime}\right)^{*} \vec{\mu}\left(\psi^{\prime \prime}\right) d \tau\nonumber$ and so the intensities of transitions are given by $\text { Int. } \propto\left|\int\left(\psi^{\prime}\right)^{*} \vec{\mu}\left(\psi^{\prime \prime}\right) d \tau\right|^{2}\nonumber$ where a single prime (′) indicates the upper state of the transition and a double prime (″) indicates the lower state. The operator $\vec{\mu}$ corresponds to the change in the electric dipole moment of the molecule as it undergoes a transition from a state described by $\psi^{\prime \prime}$ to one described by $\psi^{\prime}$. Other operators may be used in this expression (magnetic dipole, electric quadrupole, etc.) but these lead to significantly weaker transitions (by a factor of $10^{6}$ or more!) When the electric dipole operator is used, the transitions for which the transition moment is not zero are said to be allowed transitions, while all others are said to be forbidden transitions by electric dipole selection rules.
Since other types of transitions are so weak by comparison, a transition that is said to be allowed or forbidden is assumed to mean by electric dipole selection rules unless specifically stated otherwise. The selection rules for vibrational transitions are $\Delta v=\pm 1\nonumber$ For closed-shell molecules (molecules where all of the electrons are paired), the rotational selection rules are $\Delta J=\pm 1\nonumber$ $\Delta J=0$ is possible for some open-shell molecules, but this topic will be discussed in more detail in Chapter 7. The rotational fine structure of a transition can be separated into branches according to the specific change in the rotational quantum number $J$.

$\Delta J$   Branch
$+1$   R-branch
$0$   Q-branch
$-1$   P-branch

In Raman spectroscopy (which is an inelastic light scattering process rather than the direct absorption or emission of a photon, and thus follows different selection rules) O- and S-branches can be observed with $\Delta J=-2$ and $+2$ respectively. The spectrum of possible branches and transitions that can be observed for all possible molecules can be quite daunting (and can take an entire graduate-level course in molecular spectroscopy just to scratch the surface!) For the purposes of this discussion, we will limit ourselves for the time being to just closed-shell molecules, for which P- and R-branches can be observed. Consider the following energy level diagram depicting the rotational energy levels in two different states. The diagram shows the expected branch structure for a closed-shell molecule. Notice that the transition lines get longer with increasing $J$ in the R-branch, but shorter with increasing $J$ in the P-branch. The largest difference in transition energy for successive lines in the spectrum is that between the $\mathrm{R}_{0}$ and $\mathrm{P}_{1}$ lines.
The band origin $\left(\tilde{v}_{0}\right)$ will lie between these two lines, and is at the energy difference between $J^{\prime}=0$ and $J^{\prime \prime}=0$, the two non-rotating levels in the two vibrational levels. Also notice that the rotational energy spacings in the upper state are smaller than those in the lower state. This is due to a smaller $B$ value in the upper state $(v=1)$, which has a larger average bond length than the $v=0$ level.

Combination Differences

Consider the following partial energy level diagram: It is clear that since the $R(J)$ and $P(J)$ transitions share a common lower rotational level $\left(F_{J}\right)$, the energy difference between the $R(J)$ and $P(J)$ transitions gives the energy difference between $F_{J+1}$ and $F_{J-1}$ in the upper state of the transition. Similarly, the difference between $F_{J+1}$ and $F_{J-1}$ in the lower state is given by $R(J-1)-P(J+1)$. Thus, by taking differences of transition energies in the proper combination, dependence on one of the states can be eliminated. In this way, the difference $\Delta_{2} F(J)$ can be found.
This difference is defined by: $\Delta_{2} F(J) \equiv F_{J+1}-F_{J-1}\nonumber$ Using the rigid rotator model, $F_{J}=B J(J+1)\nonumber$ an expression for $\Delta_{2} F(J)$ can be easily derived: \begin{aligned} \Delta_{2} F(J) &=B(J+1)(J+2)-B(J-1)(J) \\ &=B\left(J^{2}+3 J+2\right)-B\left(J^{2}-J\right) \\ &=B(4 J+2) \\ &=4 B\left(J+\dfrac{1}{2}\right) \end{aligned}\nonumber Thus the value of $\Delta_{2} F(J)$, which can be found for either the upper or lower state by combination differences from the energies of the spectral lines, can be used to find the spectroscopic constant $B$. $\dfrac{\Delta_{2} F(J)}{\left(J+\dfrac{1}{2}\right)}=4 B\nonumber$ And the $\Delta_{2} F(J)$ values are determined by the combination differences $\begin{gathered} \Delta_{2} F^{\prime}(J)=R(J)-P(J) \\ \Delta_{2} F^{\prime \prime}(J)=R(J-1)-P(J+1) \end{gathered}\nonumber$ where the single prime (′) refers to the upper state and the double prime (″) refers to the lower state. For most molecules, the rotational distortion constants are not negligible.
In this case, the rotational term values are given by $F(J)=B J(J+1)-D J^{2}(J+1)^{2}+H J^{3}(J+1)^{3}+\ldots\nonumber$ Neglecting terms of higher order than $D J^{2}(J+1)^{2}$ (since these terms are small for most molecules), the combination differences relationship can be derived as \begin{aligned} \Delta_{2} F(J) &=B(J+1)(J+2)-D(J+1)^{2}(J+2)^{2}-B(J-1)(J)+D(J-1)^{2}(J)^{2} \\ &=B\left[\left(J^{2}+3 J+2\right)-\left(J^{2}-J\right)\right]-D\left[\left(J^{2}+2 J+1\right)\left(J^{2}+4 J+4\right)-\left(J^{2}-2 J+1\right) J^{2}\right] \\ &=B(4 J+2)-D\left(J^{4}+4 J^{3}+4 J^{2}+2 J^{3}+8 J^{2}+8 J+J^{2}+4 J+4-J^{4}+2 J^{3}-J^{2}\right) \\ &=4 B\left(J+\dfrac{1}{2}\right)-D\left(8 J^{3}+12 J^{2}+12 J+4\right) \\ &=4 B\left(J+\dfrac{1}{2}\right)-8 D\left(J^{3}+\dfrac{3}{2} J^{2}+\dfrac{3}{2} J+\dfrac{1}{2}\right) \end{aligned}\nonumber It would be convenient if the term involving $D$ could be factored.
Recognizing that $\left(J+\dfrac{1}{2}\right)^{3}=J^{3}+\dfrac{3}{2} J^{2}+\dfrac{3}{4} J+\dfrac{1}{8}\nonumber$ the "cube" can be "completed" by \begin{aligned} \Delta_{2} F(J) &=4 B\left(J+\dfrac{1}{2}\right)-8 D\left(J^{3}+\dfrac{3}{2} J^{2}+\dfrac{3}{4} J+\dfrac{1}{8}+\dfrac{3}{4} J+\dfrac{3}{8}\right) \\ &=4 B\left(J+\dfrac{1}{2}\right)-8 D\left(J+\dfrac{1}{2}\right)^{3}-8 D\left(\dfrac{3}{4} J+\dfrac{3}{8}\right) \\ &=4 B\left(J+\dfrac{1}{2}\right)-8 D\left(J+\dfrac{1}{2}\right)^{3}-D(6 J+3) \\ &=4 B\left(J+\dfrac{1}{2}\right)-8 D\left(J+\dfrac{1}{2}\right)^{3}-6 D\left(J+\dfrac{1}{2}\right) \\ &=[4 B-6 D]\left(J+\dfrac{1}{2}\right)-8 D\left(J+\dfrac{1}{2}\right)^{3} \end{aligned}\nonumber And by dividing through by $\left(J+\dfrac{1}{2}\right)$, $\dfrac{\Delta_{2} F(J)}{\left(J+\dfrac{1}{2}\right)}=[4 B-6 D]-8 D\left(J+\dfrac{1}{2}\right)^{2}\nonumber$ So using the spectral data, a plot of $\begin{gathered} \dfrac{R(J)-P(J)}{\left(J+\dfrac{1}{2}\right)} \text { vs. }\left(J+\dfrac{1}{2}\right)^{2} \\ \text { or } \\ \dfrac{R(J-1)-P(J+1)}{\left(J+\dfrac{1}{2}\right)} \text { vs. }\left(J+\dfrac{1}{2}\right)^{2} \end{gathered}\nonumber$ should yield straight lines with slopes of $-8D$ and intercepts of $(4 B-6 D)$ for the upper and lower states respectively.

Additional Spectroscopic Constants

Since each vibrational level has a different average bond length (increasing with increasing vibrational quantum number for a well-behaved electronic state), the rotational constant has a dependence on the vibrational quantum number $v$.
$B_{v}=B_{e}-\alpha_{e}\left(v+\dfrac{1}{2}\right)+\gamma_{e}\left(v+\dfrac{1}{2}\right)^{2}+\ldots\nonumber$ where $B_{\mathrm{e}}$ is the equilibrium value of the rotational constant (and the constant from which $\mathrm{r}_{\mathrm{e}}$ is derived), $\alpha_{\mathrm{e}}$ and $\gamma_{\mathrm{e}}$ are constants that describe how rotation and vibration are coupled in a molecule. Usually this power series in $\left(\mathrm{v}+\dfrac{1}{2}\right)$ can be truncated at the $\alpha_{\mathrm{e}}$ term (unless data for a great many vibrational levels are known.) Similarly, the distortional term can be expanded in a power series in $\left(\mathrm{v}+\dfrac{1}{2}\right)$. $D_{v}=D_{e}-\beta_{e}\left(v+\dfrac{1}{2}\right)+\ldots\nonumber$ For most molecules, $\beta_{\mathrm{e}}$ is not determined within experimental uncertainty unless a great many vibrational levels have been included in the fit. A typical methodology would be to determine $\mathrm{B}_{\mathrm{v}}$ for all of the vibrational levels for which data exists. (A single vibration-rotation band analysis provides two values, one for the upper state and one for the lower state.) Then the $B_{\mathrm{v}}$ values are fit to the functional form given by $B_{v}=B_{e}-\alpha_{e}\left(v+\dfrac{1}{2}\right)+\gamma_{e}\left(v+\dfrac{1}{2}\right)^{2}+\ldots\nonumber$ truncating the power series so as to include the minimum number of adjustable parameters as are needed to yield a good fit to the data. This process yields a value for $\mathrm{B}_{\mathrm{e}}$ which can then be used to calculate $r_{e}$. These values can then be compared to those found in the literature (if such a value has been measured) or reported in the literature if it has not yet been measured! A similar approach is used for the distortional term(s). 
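As a sketch of this methodology: truncating the series at $\alpha_e$ gives $B_0 = B_e - \tfrac{1}{2}\alpha_e$ and $B_1 = B_e - \tfrac{3}{2}\alpha_e$, which two measured $B_v$ values invert directly. The $B_0$ and $B_1$ values below are illustrative CO-magnitude numbers, assumed rather than taken from a fit in the text:

```python
# Invert B_v = Be - alpha_e (v + 1/2), truncated at alpha_e, using two B_v values.
# B0 and B1 are assumed, CO-magnitude illustrative values.
B0, B1 = 1.92253, 1.90503  # cm^-1, for v = 0 and v = 1

alpha_e = B0 - B1          # subtracting the two expressions eliminates Be
Be = B0 + alpha_e / 2      # from B0 = Be - alpha_e/2
```

With more than two $B_v$ values, the same relations generalize to a least-squares fit of the power series, adding terms like $\gamma_e$ only as the data demand.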
Line Intensity in Rotational Structure

One element that we have not discussed in the subject of rotational spectroscopy (or the rotational fine structure in vibration-rotation spectroscopy) is the intensities of the spectral lines. The intensity will be determined by two factors: 1) the population of the originating state (lower state in absorption and upper state in emission spectra), which is well described for a thermalized sample by a Maxwell-Boltzmann distribution, and 2) the line strength, which is determined by the quantum mechanical relationship between the upper and lower states of the transition.

The Maxwell-Boltzmann Distribution

The Maxwell-Boltzmann distribution of energy level populations will be achieved by any system that is in thermal equilibrium (usually implying that a sufficient number of molecular collisions occur for a gas phase sample, or that all of the parts of a sample are in thermal contact with one another in condensed phase samples) to ensure thermal uniformity throughout the sample. The distribution is given by the following expression: $\dfrac{N_{i}}{N_{t o t}}=\dfrac{d_{i} e^{-E_{i} / k T}}{q}\nonumber$ where $\frac{N_{i}}{N_{\text {tot }}}$ is the fraction of molecules in the $i^{\text {th}}$ quantum state, which has an energy given by $E_{i}$ and a degeneracy given by $d_{i}$. The term $kT$ is the Boltzmann constant times the temperature on an absolute scale. The denominator, $q$, is a partition function, which is part of a normalization factor. The partition function is given by $q=\sum_{i} d_{i} e^{-E_{i} / k T}\nonumber$ In the case of rotational energy levels for closed-shell molecules, the subscript $i$ can be replaced by the rotational quantum number $J$. $q_{r o t}=\sum_{J} d_{J} e^{-E_{J} / k T}\nonumber$ In this expression, the rotational energy level degeneracies are always given by $(2J+1)$ and the rotational energy levels (if treated as rigid rotor levels) are given by $hcBJ(J+1)$.
Thus the expression for the rotational partition function, $q_{rot}$, is given by $q_{r o t}=\sum_{J}(2 J+1) e^{-h c B J(J+1) / k T}\nonumber$ It is handy to note that $\frac{kT}{hc}$ has a value of approximately $206 \mathrm{~cm}^{-1}$ at room temperature. When the energy $E_{i}$ exceeds approximately $10 \cdot kT$, the exponential term becomes negligibly small. Focusing on the numerator of the Maxwell-Boltzmann expression, it is clear that the effect of increasing $J$ is mixed in the expression. As $J$ increases, the degeneracy increases (having the effect of increased fractional population in the level) but the exponential term also gets smaller due to the higher energy (having the effect of a decreased fractional population in the energy level.) A plot of fractional population as a function of $J$ (for $\mathrm{HCl}$ at $298 \mathrm{~K}$) is shown below. Note that at low values of $J$, the fractional population increases with increasing $J$, to a point. Eventually, the exponential term takes over and the population is extinguished. The $J$ value $\left(J_{\max }\right)$ at which this changeover occurs is a function of the rotational constant $B$ and the temperature, and can be determined by solving the following expression for $J$. $\dfrac{d}{d J}(2 J+1) e^{\frac{-h c B J(J+1)}{kT}}=0\nonumber$ The result is $J_{\max }=\sqrt{\dfrac{k T}{2 B h c}}-\dfrac{1}{2}\nonumber$ The intensity pattern is plainly visible in the rotation-vibration spectrum of $HCl$. A simulated spectrum of the 1-0 band of $H^{35} Cl$ is shown below, clearly showing the P- and R-branch structure, and the large gap where the band origin can be found.

Line Strength Considerations

The second major consideration in spectral line intensity is the line strength. This is determined by the squared magnitude of the transition moment integral. $\text { Int.
} \propto\left|\int\left(\psi^{\prime}\right)^{*} \vec{\mu}\left(\psi^{\prime \prime}\right) d \tau\right|^{2}\nonumber$ The rotational contribution to this expression, often called the rotational line strength, is a Hönl-London factor. For closed-shell diatomic molecules, the Hönl-London factors are given by $\begin{array}{ll} S_{J}=J+1 & \text { (for R-branch lines) } \\ S_{J}=J & \text { (for P-branch lines) } \end{array}\nonumber$ A good way to think of these expressions is to view them as branching ratios. They indicate the relative fraction of molecules in a given level that will undergo an R-branch transition compared to the fraction that will undergo a P-branch transition. The molecules in the lower state must "decide" to undergo either an R-branch transition or a P-branch transition. The relative fraction of each type of "decision" is the branching ratio. Notice that the sum of these two expressions gives the total degeneracy of the rotational level. Given this relationship, it should be clear that the fractions of molecules undergoing each type of transition are given by $F_{R}=\dfrac{J+1}{2 J+1} \quad \text { and } \quad F_{P}=\dfrac{J}{2 J+1}\nonumber$ For open-shell molecules, the expressions can be quite a bit more complex, but that is a topic for a more detailed course on molecular spectroscopy. However, some of the details of the rotational structure of open-shell molecules will be discussed in Chapter 8, as the electronic portion of the molecular wavefunction can affect the rotational structure profoundly.
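Putting the two factors together, relative line intensities in a P/R band can be sketched by multiplying the Boltzmann population of the originating level by the Hönl-London branching fraction. The rotational constant below is an assumed literature-style value for $\mathrm{H^{35}Cl}$, not one given in the text:

```python
import math

# Relative rotational line intensities: Boltzmann population times the
# Honl-London branching fraction. B = 10.44 cm^-1 is an assumed H35Cl value.
B, T = 10.44, 298.0
kT_hc = 1.380649e-23 * T / (6.62607015e-34 * 2.99792458e10)  # kT/hc, ~207 cm^-1

def pop(J):
    """Unnormalized rigid-rotor population of level J."""
    return (2 * J + 1) * math.exp(-B * J * (J + 1) / kT_hc)

J_most = max(range(20), key=pop)             # most-populated level
J_turn = math.sqrt(kT_hc / (2 * B)) - 0.5    # analytic turnover, ~2.65

# Line intensities: population of the originating level times branching fraction.
I_R = {J: pop(J) * (J + 1) / (2 * J + 1) for J in range(10)}   # R(J)
I_P = {J: pop(J) * J / (2 * J + 1) for J in range(1, 10)}      # P(J)
```

For each level, $I_R + I_P$ recovers the full population, since the two branching fractions sum to one; the intensity maximum in each branch falls near the most-populated $J$, reproducing the pattern in the simulated $\mathrm{HCl}$ band.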
Bunker, P. R. (1970). The effect of the breakdown of the Born-Oppenheimer approximation on the determination of Be and ?e for a diatomic molecule. Journal of Molecular Spectroscopy, 35(2), 306-313. doi:10.1016/0022-2852(70)90206-7

Lovas, F. J., & Krupenie, P. H. (1974). Microwave Spectra of Molecules of Astrophysical Interest, VII. Carbon Monoxide, Carbon Monosulfide, and Silicon Monoxide. Journal of Physical and Chemical Reference Data, 3(1), 245-257. doi:10.1063/1.3253140

Rosman, K., & Taylor, P. (1998). Isotopic Composition of the Elements 1997. Pure and Applied Chemistry, 70(1), 217-235. doi:10.1351/pac199870010217

5.09: Vocabulary and Concepts

allowed transitions
angular momentum
combination differences
forbidden transitions
Hönl-London factor
Legendre polynomial
line strength
Maxwell-Boltzmann distribution
moment of inertia
rigid rotor
selection rules
spherical harmonics
spherical polar coordinates
transition moment

5.10: Problems

1. Consider the data given in the table for lines found in the pure rotational spectrum of ${ }^{12} \mathrm{C}^{16} \mathrm{O}$. Determine an approximate value for $B$ and assign the spectrum (the lower $\rightarrow$ upper state rotational quantum numbers for each line.) Make a graph of $\frac{\tilde{v}_{J}}{(J+1)}$ vs. $(J+1)^{2}$ and determine the best fit line. Use these results to determine $B$ and $D$ for the molecule. Compare your results to those found in the NIST Webbook of Chemistry for the ground electronic state of CO.

line   $\tilde{v}\left(\mathrm{~cm}^{-1}\right)$
1   $3.84503319$
2   $7.68991907$
3   $11.5345096$
4   $15.378662$
5   $19.222223$
6   $23.065043$

2. Consider the following data for the rotation-vibration spectrum of $\mathrm{H}^{35} \mathrm{Cl}$.
a. Using the differences in frequency, assign the location of the band origin and assign the P- and R-branches accordingly.
b. Using combination differences, fit the data to find $B^{\prime}$, $D^{\prime}$, $B^{\prime \prime}$ and $D^{\prime \prime}$.
c.
Use your results to find $B_{\mathrm{e}}$, $\alpha_{\mathrm{e}}$ and $D_{\mathrm{e}}$.
d. Based on your value of $B_{e}$, find a value for $r_{e}$ for the molecule.
e. Compare your results to those found in the NIST Webbook of Chemistry.

line   Freq. $\left(\mathbf{c m}^{-1}\right)$   $\Delta \tilde{v}$
1   $3085.62$
2   $3072.76$
3   $3059.07$
4   $3044.88$
5   $3029.96$
6   $3014.29$
7   $2997.78$
8   $2980.90$
9   $2963.24$
10   $2944.89$
11   $2925.78$
12   $2906.25$
13   $2865.09$
14   $2843.65$
15   $2821.49$
16   $2798.78$
17   $2775.79$
18   $2752.03$
19   $2727.75$
20   $2703.06$
21   $2677.73$
22   $2651.97$
23   $2625.74$
24   $2599.00$

3. A recursion formula for the Legendre polynomials is given by $(l+1) P_{l+1}(x)=(2 l+1) x P_{l}(x)-l P_{l-1}(x)\nonumber$ Based on $P_{0}(x)=1$ and $P_{1}(x)=x$, find expressions for $P_{2}(x)$ and $P_{3}(x)$.

4. The function describing the $l=1$, $m_{l}=0$ spherical harmonic is $Y_{1}^{0}(\theta, \phi)=\sqrt{\frac{3}{4 \pi}} \cos (\theta)$
a. Show that this function is normalized. To do this, you must use the limits on $\theta$ and $\phi$ of $0 \leq \theta \leq \pi$, and $0 \leq \phi \leq 2 \pi$. Also, for the angular part of the Laplacian, $d \tau=\sin (\theta) d \theta d \phi$
b. Using plane polar graph paper (or a suitable graphing program) plot the square of this function in the $yz$ plane (which gives a cross-section of the probability function for the particular spherical harmonic.) Does the shape look familiar?

5. Based on the given bond-length data, calculate values for the rotational constants for the following molecules:

Molecule   Bond Length $(\mathring{\mathrm{A}})$
$\mathrm{H}^{35} \mathrm{Cl}$   $1.2746$
$\mathrm{H}^{79} \mathrm{Br}$   $1.4144$
$\mathrm{H}^{127} \mathrm{I}$   $1.6092$

6. The spacing between lines in the pure rotational spectrum of $\mathrm{BN}$ is $3.31 \mathrm{~cm}^{-1}$. From this, find $B$ and calculate the bond length ($r_{\mathrm{BN}}$) in the BN molecule.

7.
From your result in problem 6, calculate the frequencies of the first 4 lines in the pure rotational spectrum of BN.
The hydrogen atom problem was one that was very perplexing to the pioneers of quantum theory. While its quantized nature was evident from the known atomic emission spectra, there were no models that could adequately describe the patterns seen in the spectra. Thumbnail: Hydrogen atom. (Public Domain; Bensaccount via Wikipedia)

06: The Hydrogen Atom

Two of the most important (historically) models of the hydrogen atom and its energy levels/spectra were provided by Johannes Balmer, a high school teacher, and Niels Bohr, a Danish physicist. Balmer’s model was a completely empirical fit to existing data for the emission spectrum of hydrogen, whereas Bohr provided an actual theoretical underpinning to the form of the model which Balmer derived. In this section, we will discuss the development and ramifications of these two models.

Balmer’s Formula

Balmer (Balmer, 1885) was the first to provide an empirical formula that gave a very good fit to the data, but offered no theoretical reasoning as to why the formula had the simple form it did. Balmer felt, however, that despite the lack of a theoretical foundation, such a simple pattern could not be the result of an “accident”. Balmer suggested the formula $\lambda =G\left(\dfrac{n^{2} }{n^{2} -4} \right)\nonumber$ to calculate the wavelengths ($\lambda$) of the lines in the visible emission spectrum of hydrogen. In this formula, $G = 3647.053 \; \AA$, which is the series limit (depicted as $H_{\infty}$ in the figure above.) Balmer considered this to be a “fundamental constant” for hydrogen and fully expected other elements to have similar fundamental constants. In modern terms, Balmer’s formula has been extended to describe all of the emission lines in the spectrum of atomic hydrogen. $\tilde{\nu }=R_{H} \left(\dfrac{1}{n_{l}^{2} } -\dfrac{1}{n_{u}^{2} } \right)\nonumber$ where $n_{l}$ and $n_{u}$ are integers with $n_{l} < n_{u}$.
$R_{H}$ is the Rydberg constant for hydrogen and has the value $R_{H} = 109677 \; cm^{-1}$. The job of subsequent investigators was to provide a theory that explained the form of the Rydberg Equation shown above and to correctly predict the value of the Rydberg constant. This model describes all known series of emission lines in the spectrum of atomic hydrogen. Each series is characterized by the lower state quantum number. The following table summarizes the names of these series.

$n_{l}$   Name   Region
1   Lyman   Vacuum Ultraviolet
2   Balmer   Visible/Ultraviolet
3   Paschen   Near Infrared
4   Brackett   Infrared
5   Pfund   Far Infrared

The Bohr Model

Niels Bohr (Bohr, 1913) was the first person to offer a successful quantum theory of the hydrogen atom in his 1913 paper. He was later awarded the Nobel Prize in Physics in 1922 for his contributions to the understanding of atomic structure (as well as many other significant contributions.) And while the Bohr model has significant shortcomings in terms of providing the best description of a hydrogen atom, it still provides the basis (a “solar system model”) for how many people view atoms today. Bohr’s model was mostly an extension of the Rutherford model of an atom, in which electrons exist in a cloud surrounding a dense, positively charged nucleus. The Bohr model suggested a possible structure to this cloud in an attempt to give an explanation of the empirical formula presented by Balmer. The strength of the Bohr model is that it provides an accurate prediction not only of the form of Balmer’s formula, but also of the magnitude of the Rydberg constant that appears in the formula. Bohr’s approach was to balance the electrostatic attractive force between an electron and a positively charged nucleus with the centrifugal force the electron feels as it orbits the nucleus in a circular orbit. He derived these orbits by making the assumption that the angular momentum of an orbiting electron is an integral multiple of $\hbar$.
While successful in predicting the form of the Rydberg Equation and the magnitude of $R_{H}$, the Bohr model presented some difficulties. First, it ignored the reality that a charged particle orbiting an oppositely charged nucleus would see its orbit decay over time, eventually colliding with the nucleus. This clearly does not happen with hydrogen! Also, the Bohr model was not extendable to larger atoms. Quantum mechanics would have to address these problems, while also providing the kind of explanation for the Rydberg Equation that Bohr provided.
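The equivalence of Balmer's wavelength formula and the Rydberg wavenumber form (for $n_l = 2$) is easy to verify numerically, using the values of $G$ and $R_H$ quoted above:

```python
G = 3647.053      # Angstrom, Balmer's series-limit constant
R_H = 109677.581  # cm^-1, Rydberg constant for hydrogen

for n in (3, 4, 5, 6):
    lam_balmer = G * n ** 2 / (n ** 2 - 4)          # Angstrom
    nu = R_H * (1.0 / 2 ** 2 - 1.0 / n ** 2)        # cm^-1
    lam_rydberg = 1e8 / nu                          # convert cm^-1 to Angstrom
    assert abs(lam_balmer - lam_rydberg) < 0.05     # agree to better than 0.05 A
# n = 3 gives ~6564.7 Angstrom: the red H-alpha line.
```

The two forms coincide because $1/\lambda = R_H(1/4 - 1/n^2)$ rearranges to $\lambda = (4/R_H)\,n^2/(n^2-4)$, and $4/R_H$ reproduces Balmer's $G$.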
As is so often the case for quantum mechanical systems, the story of the hydrogen atom begins with writing down the Hamiltonian describing the system.

The Potential Energy and the Hamiltonian

The time-independent Schrödinger equation has the following form. \begin{align*} \hat{H}\psi \left(r, \theta ,\phi \right) &=E \psi \left(r,\theta ,\phi \right) \\[4pt] \left[-\frac{\hbar^2}{2 \mu} \nabla^2+U(r)\right] \psi(r, \theta, \phi) &=E \psi(r, \theta, \phi) \end{align*} where $\mu$ is the reduced mass for the electron/nucleus system. The Laplacian operator has the form \begin{aligned} \nabla^{2} =&\left(\dfrac{\partial ^{2} }{\partial x^{2} } +\dfrac{\partial ^{2} }{\partial y^{2} } +\dfrac{\partial ^{2} }{\partial z^{2} } \right) \\[4pt] =& \dfrac{1}{r^{2} } \dfrac{\partial }{\partial r} r^{2} \dfrac{\partial }{\partial r} +\dfrac{1}{r^{2} \sin \theta } \dfrac{\partial }{\partial \theta } \sin \theta \dfrac{\partial }{\partial \theta } +\dfrac{1}{r^{2} \sin ^{2} \theta } \dfrac{\partial ^{2} }{\partial \phi ^{2} } \\[4pt] =& \dfrac{1}{r^{2} } \dfrac{\partial }{\partial r} r^{2} \dfrac{\partial }{\partial r} -\dfrac{1}{\hbar ^{2} r^{2} } \hat{L}^{2} \end{aligned}\nonumber The potential energy is given by the electrostatic attraction of the electron to the nucleus. $U\left(r\right)=-\dfrac{Ze^2}{4\pi {\varepsilon }_0r}\nonumber$ where $Z$ is the charge on the nucleus in electron charges (also given by the atomic number), $e$ is the charge on an electron and $\varepsilon _{0}$ is the vacuum permittivity. The $\frac{1}{r}$ dependence means that the electrostatic attraction diminishes as the distance between the electron and the nucleus is increased. The potential energy approaches zero as $r$ goes to $\infty$, at which point the atom ionizes.
Putting this all together allows the Hamiltonian to be expressed as $\hat{H}=-\dfrac{\hbar ^{2} }{2\mu \, r^{2} } \dfrac{\partial }{\partial r} r^{2} \dfrac{\partial }{\partial r} -\dfrac{Ze^{2} }{4\pi \varepsilon _{0} r} +\dfrac{1}{2\mu \, r^{2} } \hat{L}^{2}\nonumber$ The wavefunctions can be expressed as a product of a radial part and an angular part since the Hamiltonian is separable into these two parts. $\psi \left(r,\theta ,\phi \right)=R(r)Y_{l}^{m_{l} } \left(\theta ,\phi \right)\nonumber$ The angular part of the function, $Y_{l}^{m_{l} } \left(\theta ,\phi \right)$, is a spherical harmonic; the spherical harmonics are eigenfunctions of the $\hat{L}^{2}$ operator. Substitution into the Schrödinger equation yields $Y_{l}^{m_{l} } \left(\theta ,\phi \right)\left(-\dfrac{\hbar ^{2} }{2\mu \, r^{2} } \dfrac{\partial }{\partial r} r^{2} \dfrac{\partial }{\partial r} -\dfrac{Ze^{2} }{4\pi \varepsilon _{0} r} \right)R(r)+\dfrac{R(r)}{2\mu \, r^{2} } \hat{L}^{2} Y_{l}^{m_{l} } \left(\theta ,\phi \right)=ER(r)Y_{l}^{m_{l} } \left(\theta ,\phi \right)\nonumber$ Since the spherical harmonics are eigenfunctions of the $\hat{L}^{2}$ operator, the following substitution can be made. $\hat{L}^{2} Y_{l}^{m_{l} } \left(\theta ,\phi \right)=\hbar ^{2} l(l+1)Y_{l}^{m_{l} } \left(\theta ,\phi \right)\nonumber$ After making this substitution and dividing both sides by $Y_{l}^{m_{l} } \left(\theta ,\phi \right)$, we get $\left(-\dfrac{\hbar ^{2} }{2\mu \, r^{2} } \dfrac{\partial }{\partial r} r^{2} \dfrac{\partial }{\partial r} -\dfrac{Ze^{2} }{4\pi \varepsilon _{0} r} \right)R(r)+\dfrac{\hbar ^{2} l(l+1)}{2\mu \, r^{2} } R(r)=ER(r)\nonumber$ However, since $l$ shows up in the equation from which we are solving for the radial wavefunctions $R(r)$, it is not unexpected that the solution to the radial part of the equation will place new constraints on the quantum number $l$. In fact, the radial wavefunctions themselves depend on $l$ and a principal quantum number $n$.
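A consequence of that constraint worth noting: for a given $n$, the allowed values are $l = 0, \ldots, n-1$ with $m_l = -l, \ldots, +l$ (the $l \le n-1$ bound is the standard result of solving the radial equation, stated here ahead of the derivation), so the number of degenerate states is $n^2$:

```python
# Counting hydrogen-atom states for a given n: l runs 0..n-1 and m_l runs
# -l..+l, so the degeneracy (ignoring spin) is sum over l of (2l+1) = n^2.
def degeneracy(n):
    return sum(2 * l + 1 for l in range(n))

assert [degeneracy(n) for n in (1, 2, 3, 4)] == [1, 4, 9, 16]
```

This is the "great deal of degeneracy" anticipated below from the fact that the energy depends only on $n$.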
The Energy Levels Applying the boundary condition that the radial wavefunction $R(r)$ must vanish as $r \rightarrow \infty$, the only wavefunctions that behave properly have the following eigenvalues $E_{n} =-\dfrac{\mu Z^{2} e^{4} }{2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } \dfrac{1}{n^{2} } \quad \quad n=1,2,3,\ldots\nonumber$ Notice also that this expression vanishes as $n$ approaches $\infty$, which is the ionization limit of the atom. Also, since the energy expression depends only on $n$ (and not on $l$ and $m_{l}$) it is expected that there will be a great deal of degeneracy in the wavefunctions. Taking differences between two energy levels (to derive an expression for the energy differences that can be observed in the spectrum of hydrogen), it is seen that $E_{n'} -E_{n''} =-\dfrac{\mu Z^{2} e^{4} }{2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } \left(\dfrac{1}{n'^{2} } -\dfrac{1}{n''^{2} } \right)=\dfrac{\mu Z^{2} e^{4} }{2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } \left(\dfrac{1}{n''^{2} } -\dfrac{1}{n'^{2} } \right)\nonumber$ which is exactly the form of the Rydberg equation. Now dividing both sides by $hc$ in order to convert from energy units to wavenumber units $\begin{array}{rcl} {\dfrac{E_{n'} -E_{n''} }{hc} } & {=} & {\dfrac{\mu Z^{2} e^{4} }{(hc)2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } \left(\dfrac{1}{n''^{2} } -\dfrac{1}{n'^{2} } \right)} \\[4pt] {} & {=} & {109677.581\; cm^{-1} \left(\dfrac{1}{n''^{2} } -\dfrac{1}{n'^{2} } \right)} \end{array}\nonumber$ using the reduced mass for the hydrogen atom and a nuclear charge of +1. So this model also predicts the correct value for the Rydberg constant $R_{H}$. The Rydberg Constant for Heavier Nuclei The expression for the Rydberg constant is $R_{H} =\dfrac{\mu e^{4} }{(hc)2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} }\nonumber$ which has a value of $R_{H} = 109677.581 \; cm ^{-1}$.
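The quoted value can be reproduced from fundamental constants (a sketch using CODATA values; the expression $R=\mu e^4/(8\varepsilon_0^2 h^3 c)$ used below is algebraically identical to the one in the text, since $2\hbar^2(4\pi\varepsilon_0)^2 hc = 8\varepsilon_0^2 h^3 c$):

```python
# Rydberg constant from CODATA constants, plus the H-alpha (Balmer) line as a check.
me = 9.1093837015e-31      # electron mass, kg
mp = 1.67262192369e-27     # proton mass, kg
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h = 6.62607015e-34         # Planck constant, J s
c = 2.99792458e8           # speed of light, m/s

R_inf = me * e**4 / (8 * eps0**2 * h**3 * c) / 100.0   # infinite-mass value, cm^-1
R_H = (mp / (me + mp)) * R_inf                         # finite-mass value for 1H

print(R_inf)                       # approximately 109737.3 cm^-1
print(R_H)                         # approximately 109677.6 cm^-1
print(1e7 / (R_H * (1/4 - 1/9)))   # H-alpha (n = 3 -> 2) wavelength, roughly 656 nm
```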
In this expression, $\mu$ is the reduced mass of the electron-proton system in the hydrogen atom. But what happens when the mass of the nucleus is extremely large? First, consider the reduced mass. $\mu =\dfrac{m_{e} m_{N} }{m_{e} +m_{N} }\nonumber$ where $m_{e}$ is the mass of an electron and $m_{N}$ is the mass of the nucleus. In the case that the nuclear mass is extremely large compared to the mass of an electron, the total mass is approximately equal to the mass of the nucleus. $(m_{e} +m_{N}) \approx m_{N}$ In this case, the reduced mass becomes $\begin{array}{rcl} {\mu } & {=} & {\dfrac{m_{e} m_{N} }{m_{e} +m_{N} } } \\[4pt] {} & {} & {\approx \dfrac{m_{e} m_{N} }{m_{N} } =m_{e} } \end{array}\nonumber$ and the Rydberg constant expression becomes $\begin{array}{rcl} {R_{\infty } } & {=} & {\dfrac{m_{e} e^{4} }{(hc)2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } } \\[4pt] {} & {} & {=109737.316\; cm^{-1} } \end{array}\nonumber$ where $R_{\infty}$ indicates the Rydberg constant for an atom with an infinitely heavy nucleus. It is this value that is usually found in tables of physical constants. But for lighter atoms, such as hydrogen, the value of the Rydberg constant deviates from this value. In fact, hydrogen shows the largest deviation of any atom, given that it has the lightest nucleus. This deviation is important (even for atoms where the mass of an electron is only $1 \times 10^{-6}$ times that of the nucleus!) if one hopes to fit data to experimental precision. To address this problem, we look back to the expression for the Rydberg constant for an arbitrary mass nucleus, $R_{M}$.
$\begin{array}{rcl} {R_{M} } & {=} & {\dfrac{\mu e^{4} }{(hc)2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } } \\[4pt] {} & {} & {=\left(\dfrac{m_{N} }{m_{e} +m_{N} } \right)\dfrac{m_{e} e^{4} }{(hc)2\hbar ^{2} (4\pi \varepsilon _{0} )^{2} } =\left(\dfrac{m_{N} }{m_{e} +m_{N} } \right)\, R_{\infty } } \end{array}\nonumber$ Clearly, as the mass of the nucleus ($m_{N}$) becomes larger, the value of $R_{M}$ approaches that of $R_{\infty}$ asymptotically. The Wavefunctions The hydrogen atom wavefunctions $\psi(r,\theta,\phi)$ can be expressed as a product of radial and angular functions. ${\psi }_{nlm_l}\left(r,\theta ,\phi \right)=R_{nl}\left(r\right)Y^{m_l}_l\left(\theta ,\phi \right)\nonumber$ The angular parts are simply the spherical harmonics that were described in Chapter 5, and depend on the quantum numbers $l$ and $m_{l}$. More details of how the spherical harmonics are generally presented as H-atom angular functions are discussed in section 3.i. The radial parts of the wavefunctions, $R_{nl}(r)$, will be described in a later section. The Angular Part of the Wavefunctions Each orbital wavefunction can be designated with a letter that indicates the value of $l$ as assigned in the following table. $l$ Designation 0 s 1 p 2 d 3 f The angular parts of the wavefunctions are given by the spherical harmonics. After taking linear combinations to eliminate the imaginary part of the wavefunctions, the familiar shapes of s, p, d and f orbitals are generated. For example, the $p_{x}$ and $p_{y}$ orbitals are generated as linear combinations of the $p_{-1}$ and $p_{1}$ orbitals. $\begin{array}{l} {p_{x} =\dfrac{1}{\sqrt{2} } \left(Y_{1}^{1} -Y_{1}^{-1} \right)\propto \sin \theta \cos \phi } \\[4pt] {p_{y} =\dfrac{1}{i\sqrt{2} } \left(Y_{1}^{1} +Y_{1}^{-1} \right)\propto \sin \theta \sin \phi } \end{array}\nonumber$ Similar linear combinations are used to generate the $d_{x^{2} -y^{2}}$, $d_{xy}$, $d_{yz}$ and $d_{xz}$ functions.
$\begin{array}{c} {d_{z^{2} } =Y_{2}^{0} } \\[4pt] {d_{xz} =-\dfrac{1}{\sqrt{2} } \left(Y_{2}^{1} -Y_{2}^{-1} \right)\quad d_{yz} =-\dfrac{1}{i\sqrt{2} } \left(Y_{2}^{1} +Y_{2}^{-1} \right)} \\[4pt] {d_{xy} =\dfrac{1}{i\sqrt{2} } \left(Y_{2}^{2} -Y_{2}^{-2} \right)\quad d_{x^{2} -y^{2} } =\dfrac{1}{\sqrt{2} } \left(Y_{2}^{2} +Y_{2}^{-2} \right)} \end{array}\nonumber$ There are multiple choices for how to take linear combinations to generate the f orbital functions (the best choice being determined by the geometry of the complex in which an f-orbital containing atom exists), so these are rarely shown in textbooks! The tables below give the angular parts of s, p and d hydrogen atom orbitals. The linear combinations shown above have been used to eliminate the imaginary parts of the wavefunctions. The result is what is usually plotted for the shapes of these orbitals. $l$ Orbital $Y^{m_l}_l(\theta, \phi)$ 0 s $\sqrt{\dfrac{1}{4\pi}}$ 1 $p_x$ $\sqrt{\dfrac{3}{4\pi}} \sin (\theta) \cos (\phi)$ $p_y$ $\sqrt{\dfrac{3}{4\pi}} \sin (\theta) \sin (\phi)$ $p_z$ $\sqrt{\dfrac{3}{4\pi}} \cos (\theta)$ $l$ Orbital $Y^{m_l}_l(\theta, \phi)$ 2 $d_{z^2}$ $\sqrt{\dfrac{5}{16 \pi}} \left ( 3 \cos^2 (\theta) -1 \right)$ $d_{xz}$ $\sqrt{\dfrac{15}{4 \pi}} \sin(\theta) \cos (\theta) \cos (\phi)$ $d_{yz}$ $\sqrt{\dfrac{15}{4 \pi}} \sin(\theta) \cos (\theta) \sin (\phi)$ $d_{xy}$ $\sqrt{\dfrac{15}{16 \pi}} \sin^2 (\theta) \sin (2 \phi)$ $d_{x^2-y^2}$ $\sqrt{\dfrac{15}{16 \pi}} \sin^2 (\theta) \cos (2 \phi)$ These functions generate the familiar angular parts of the hydrogen atom wavefunctions. Some depictions are shown in the figure below. The Radial Part of the Wavefunctions The radial part of the wavefunction has three parts: 1) a normalization constant, 2) an associated Laguerre polynomial and 3) an exponential part that ensures the wavefunction vanishes as $r \rightarrow \infty$.
The associated Laguerre polynomials are derived from the Laguerre polynomials (much like the associated Legendre polynomials were derived from the Legendre polynomials.) The Laguerre polynomials can be derived from the expression $L_n\left(x\right)=\dfrac{e^x}{n!}\dfrac{d^n}{dx^n}\ x^ne^{-x} \nonumber$ The first few Laguerre polynomials are given by n $L_ {n}$ (x) 0 $1$ 1 $-x+1$ 2 $\dfrac{1}{2}\left(x^2-4x+2\right)$ 3 $\dfrac{1}{6}\left(-x^3+9x^2-18x+6\right)$ A recursion formula for these functions (in this normalization convention) is given by $\left(n+1\right)L_{n+1}\left(x\right)=\left(2n+1-x\right)L_n\left(x\right)-nL_{n-1}(x)\nonumber$ The associated Laguerre polynomials can be generated using the expression $L_{n}^{\alpha } (x)=\dfrac{d^{\alpha } }{dx^{\alpha } } L_{n} (x)\nonumber$ This expression is used to generate an associated Laguerre polynomial of degree $n-\alpha$ and order $\alpha$. The functions of interest to the hydrogen atom radial problem are the associated Laguerre polynomials of degree $n-l-1$ and order $2l+1$. It can be shown that these functions can be generated from the relationship $L^{2l+1}_{n+l}\left(x\right)=\sum^{n-l-1}_{k=0}{{\left(-1\right)}^{k+1}\dfrac{{\left[\left(n+l\right)!\right]}^2}{\left(n-1-l-k\right)!\left(2l+1+k\right)!k!}x^k}\nonumber$ Note that when $n-l-1$ is less than zero, the functions vanish. This leads to the restriction on the quantum number $l$ that comes from the solutions to the radial part of the problem. $l\le n-1\nonumber$ The first few associated Laguerre polynomials that appear in the hydrogen atom wavefunctions are shown below. (These follow the older convention in which the leading $1/n!$ is omitted from $L_n(x)$, consistent with the summation formula above; conventions differ between sources.) n $l$   $L_{n+l}^{2l+1} (x)$ # nodes 1 0 $L_{1}^{1} (x)$ $-1$ 0 2 0 $L_{2}^{1} (x)$ $-2!(2 - x)$ 1 1 $L_{3}^{3} (x)$ $-3!$ 0 3 0 $L_{3}^{1} (x)$ $-3!(3 - 3x + ½ x ^{2})$ 2 1 $L_{4}^{3} (x)$ $-4!(4 - x)$ 1 2 $L_{5}^{5} (x)$ $-5!$ 0 Notice that if $(2l+1)$ exceeds $(n+l)$, the derivative causes the function to go to zero, as was the case for the associated Legendre polynomials when $\left|m_{l}\right|$ exceeds $l$.
This provides the constraint on $l$ that was expected to be found in the solution to the radial part of the problem, given that $l$ shows up in the equation to be solved. $l \le n - 1\nonumber$ Typically, $x$ is replaced by a new variable $\rho$, proportional to $r$ and defined as follows: $\rho =\dfrac{2Zr}{na_{0} }\nonumber$ where $a_{0}$ is the Bohr radius. The overall expression for the radial wavefunction is given as follows: $R_{nl} (r)=-\left[\dfrac{(n-l-1)!}{2n\left[(n+l)!\right]^{3} } \right]^{1/2} \left(\dfrac{2Z}{na_{0} } \right)^{l+3/2} r^{l}\, L_{n+l}^{2l+1} \left(\dfrac{2Zr}{na_{0} } \right)e^{-Zr/na_{0} }\nonumber$ The first several radial wavefunctions are given below. n l   $R_{nl} (\rho )$ 1 0 1s $2\left(\dfrac{Z}{a_{0} } \right)^{3/2} e^{-\dfrac{Zr}{a_{0} } }$ 2 0 2s $\left(\dfrac{Z}{2a_{0} } \right)^{3/2} \left(2-\rho \right)e^{-\rho /2}$ 1 2p $\dfrac{1}{\sqrt{3} } \left(\dfrac{Z}{2a_{0} } \right)^{3/2} \rho e^{-\rho /2}$ 3 0 3s $\dfrac{2}{27} \left(\dfrac{Z}{3a_{0} } \right)^{3/2} \left(27-18\rho +2\rho ^{2} \right)e^{-\rho /3}$ 1 3p $\dfrac{1}{27} \left(\dfrac{2Z}{3a_{0} } \right)^{3/2} \left(6\rho -\rho ^{2} \right)e^{-\rho /3}$ 2 3d $\dfrac{4}{27\sqrt{10} } \left(\dfrac{Z}{3a_{0} } \right)^{3/2} \rho ^{2} e^{-\rho /3}$ where, in this table, $\rho = Zr/a_{0}$ (note that this differs from the $n$-dependent definition of $\rho$ given above). $a_{0}$ is the Bohr radius, which has a value of $5.29177249 \times 10 ^{-11}$ m. Example $1$ What is the expectation value of $r$ for the electron if it is in the 1s subshell of an H atom?
Solution The expectation value can be found from $\left\langle r\right\rangle =\int^{\infty }_0{{\psi }^*_{1s}\cdot r{\cdot \psi }_{1s}\ r^2dr}\nonumber$ where $r ^{2}dr$ comes from the radial portion of the volume element $dx \; dy \; dz$ after it has been transformed into spherical polar coordinates (the angular portion of the integral gives unity for a normalized spherical harmonic). Substituting the wavefunction from above yields $\left\langle r\right\rangle =\int^{\infty }_0{\left[{2\left(\dfrac{1}{a_0}\right)}^{\frac{3}{2}}e^{-\frac{r}{a_0}}\right]r\left[{2\left(\dfrac{1}{a_0}\right)}^{\frac{3}{2}}e^{-\frac{r}{a_0}}\right]r^2dr}\nonumber$ This expression simplifies to $\langle r \rangle =4{\left(\dfrac{1}{a_0}\right)}^3\int^{\infty }_0{r^3\left[e^{-\frac{2r}{a_0}}\right]dr}\nonumber$ A table of integrals shows $\int^{\infty }_0{x^ne^{-ax}}dx=\dfrac{n!}{a^{n+1}}\nonumber$ Substituting the above integral into the general form results in \begin{aligned} \left\langle r\right\rangle &=4{\left(\dfrac{1}{a_0}\right)}^3\left(\dfrac{6}{{\left(\dfrac{2}{a_0}\right)}^4}\right) \\[4pt] &=\dfrac{24}{16}\left(\dfrac{1}{a^3_0}\right)\left(a^4_0\right) \\[4pt] &=\dfrac{3}{2}a_0 \end{aligned} \nonumber Example $2$ What is the most probable value of $r$ for the electron in a hydrogen atom in a 1s orbital? Solution The most probable value of $r$ will be found at the maximum of the function $P\left(r\right)=\ r^2{\left[R\left(r\right)\right]}^2\nonumber$ This can be found by taking the derivative and setting it equal to zero. First, let's find the probability function $P\left(r\right)=r^2\ {\left[2{\left(\dfrac{1}{a_0}\right)}^{\frac{3}{2}}e^{-\frac{r}{a_0}}\right]}^2=\dfrac{4}{a^3_0}\ r^2\ e^{-\frac{2r}{a_0}}\nonumber$ At the maximum, the derivative is zero.
$\dfrac{d}{dr}P\left(r\right)=0\nonumber$ So $\dfrac{d}{dr}\left[\dfrac{4}{a^3_0}\ r^2\ e^{-\frac{2r}{a_0}}\right]=\dfrac{4}{a^3_0}\left(2r\ e^{-\frac{2r}{a_0}}-\dfrac{2}{a_0}r^2\ e^{-\frac{2r}{a_0}}\right)=0\nonumber$ After dividing both sides by $\dfrac{4}{a^3_0}$, and placing the right-hand term on the other side of the equals sign, this simplifies to $2r\ e^{-\frac{2r}{a_0}}=\dfrac{2}{a_0}r^2\ e^{-\frac{2r}{a_0}}\nonumber$ This is further simplified by dividing both sides by $e^{-\frac{2r}{a_0}}$: $2r=\dfrac{2}{a_0}{\ r}^2 \nonumber$ The rest of the algebra is straightforward (actually, all of the algebra was straightforward, but who is counting?) $r=a_0\nonumber$ Nodes A hydrogen atom wavefunction can have nodes in either the orbital (angular) part of the wavefunction or the radial part. The total number of nodes is always given by $n - 1$. The number of angular nodes is always given by $l$. The number of radial nodes, therefore, is determined by both $n$ and $l$. Consider the following examples. Nodes radial angular total 1s 0 0 0 4d 1 2 3 5f 1 3 4 2d - - - 2p 0 1 1 Notice that it is impossible to form a 2d wavefunction as it violates the relationship that $l \le n - 1\nonumber$ causing the radial wavefunction to vanish. This is easy to see as the combination of $n = 2$ and $l = 2$ implies that there would be $-1$ radial nodes, which is clearly impossible. Shells, Subshells and Orbitals It is convenient to name the different subdivisions of the electronic structure of a hydrogen atom. The subdivisions are based on the quantum numbers $n$, $l$ and $m_{l}$. A shell is characterized by the quantum number $n$. (Examples: the n=2 shell or the n=4 shell.) A subshell is characterized by both the quantum numbers $n$ and $l$. (Examples: the 2s subshell or the 3d subshell.) An orbital is characterized by the quantum numbers $n$, $l$, and $m_{l}$. (Examples: the 2$p_{0}$ orbital or the 5$f_{1}$ orbital.)
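Returning to Example 2, the most probable radius can also be found numerically by evaluating the radial probability density on a fine grid (a sketch in atomic units, with $a_0 = 1$; not part of the original text):

```python
import numpy as np

# Locate the maximum of the 1s radial probability density P(r) = r^2 [R(r)]^2
# on a fine grid, in atomic units where a0 = 1.
r = np.linspace(0.0, 10.0, 1_000_001)
P = 4.0 * r**2 * np.exp(-2.0 * r)   # (4/a0^3) r^2 e^(-2r/a0) with a0 = 1
r_max = r[np.argmax(P)]
print(r_max)  # approximately 1.0, i.e. r = a0
```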
It should be noted that an orbital can also be constructed from a linear combination of other orbitals! (Example: the 2$p_ {x}$ orbital or the 3$d_ {xy}$ orbital.) Degeneracy The hydrogen atom wavefunctions have high degeneracies since the energy of a given level depends only on the principal quantum number $n$. As such, all wavefunctions with the same value of $n$ will have the same eigenvalue of the Hamiltonian, and are degenerate. Recall the following relationships: $l \le n-1 \; \text{and } \; \left|m_{l}\right| \le l \nonumber$ These relationships can be used to fill in the following table that indicates the degeneracies of the hydrogen atom energy levels. Subshell n $l$ $m_ {l}$ $m_ {s}$ orbital total 1s 1 0 0 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 1 2 2s 2 0 0 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 4 8 2p   1 +1, 0, -1 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 3s 3 0 0 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 9 18 3p   1 +1, 0, -1 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 3d   2 +2, +1, 0, -1, -2 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 4s 4 0 0 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 16 32 4p   1 +1, 0, -1 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 4d   2 +2, +1, 0, -1, -2 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ 4f   3 +3, +2, +1, 0, -1, -2, -3 $+ \dfrac{1}{2}$, $- \dfrac{1}{2}$ It is clear that the total degeneracy of a shell is given by $2n ^{2}$. The Overall Wavefunctions The total wavefunction, including both angular and radial parts, for hydrogen-like atoms is given by $\Psi _{nlm_{l} } =R_{nl} \left(r\right)Y_{l}^{m_{l} } \left(\theta ,\phi \right)\nonumber$ The first few hydrogen atom orbital wavefunctions are given in the table below.
Shell Subshell $m_ {l}$ Wavefunction 1 1s 0 $\psi_ {100}$ $\dfrac{1}{\sqrt{\pi } } \left(\dfrac{Z}{a_{0} } \right)^{3/2} e^{-\rho }$ 2 2s 0 $\psi_ {200}$ $\dfrac{1}{\sqrt{32\pi } } \left(\dfrac{Z}{a_{0} } \right)^{3/2} \left(2-\rho \right)e^{-\rho /2}$ 2p 0 $\psi_ {210}$ $\dfrac{1}{\sqrt{32\pi } } \left(\dfrac{Z}{a_{0} } \right)^{3/2} \rho e^{-\rho /2} \cos (\theta )$ $\pm 1$ $\psi_ {21\pm 1}$ $\dfrac{1}{\sqrt{64\pi } } \left(\dfrac{Z}{a_{0} } \right)^{3/2} \rho e^{-\rho /2} \sin (\theta )e^{\pm i\phi }$
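Several bookkeeping rules from this section (radial nodes $n-l-1$, angular nodes $l$, total nodes $n-1$, and a shell degeneracy of $2n^2$ once spin is included) can be verified with a short enumeration; this is an illustrative sketch, not part of the original text:

```python
# Enumerate hydrogen-atom quantum numbers to verify two bookkeeping rules:
# radial nodes = n - l - 1 (so total nodes = n - 1 with angular nodes = l),
# and shell degeneracy = 2 n^2 including spin.
def orbitals(n):
    """All (l, m_l, m_s) combinations allowed within shell n."""
    return [(l, ml, ms)
            for l in range(n)                  # l = 0, 1, ..., n-1
            for ml in range(-l, l + 1)         # |m_l| <= l
            for ms in (+0.5, -0.5)]            # two spin states

for n in range(1, 5):
    assert len(orbitals(n)) == 2 * n**2        # shell degeneracy
    for l in range(n):
        radial, angular = n - l - 1, l
        assert radial + angular == n - 1       # total nodes = n - 1

print([len(orbitals(n)) for n in range(1, 5)])  # [2, 8, 18, 32]
```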
To a very good approximation, the electronic spectra of highly excited atoms look a lot like the spectrum of hydrogen. These highly excited states of atoms are called "Rydberg states" and, to a good approximation, the excited electron in a Rydberg state "feels" the nucleus of the atom as a point charge. As this occurs, the atom comes to be in a state that looks much like a state in a hydrogen-like atom, with a heavy nucleus that has a +1 charge (the residual ion if the excited electron is removed). In cases such as this, the energy levels of the excited electron can almost be treated using the Rydberg formula proposed by Balmer, with the correct Rydberg constant ($R_{M}$) and nuclear charge. The formula does not work perfectly, but can be forced to fit the data by introducing a "fudge factor." Approximating a Hydrogen-like Atom Scientists like to describe real systems in terms of limiting ideal cases with slight perturbations. In the case of real atoms, there are two common ways that this is typically done. One is to fudge the nuclear charge and the other is to fudge the principal quantum number. Shielding and Effective Nuclear Charge One "fudges" the nuclear charge by noting that the excited electron will not "see" the inner core ion as a point charge with a +1 charge. Instead, it will feel the full charge of the nucleus, but shielded by the electrons that remain in the ion. Thus, the effective nuclear charge ($Z^{*}$) can be used. $\tilde{\nu }=\left(Z^{*} \right)^{2} R_{M} \left(\dfrac{1}{n_{l}^{2} } -\dfrac{1}{n_{u}^{2} } \right)\nonumber$ where $Z^{*}$, the effective nuclear charge, is defined by $Z ^{*} = Z - \sigma\nonumber$ where $\sigma$ is the shielding constant and is determined by adding the effects of each of the inner electrons. The trouble with this approach is that the degree of shielding is dependent on the excitation level of the excited electron.
The shielding constant $\sigma$ should reach a limiting value for highly excited Rydberg states of the atom. Quantum Defect and the Effective Principal Quantum Number Another approach is to "fudge" the principal quantum number of the excited electron. The utility of this method is that there is only one electron to treat, rather than the slew of electrons in the core ion, each of which contributes a variable amount of shielding. In this method, the effective principal quantum number $n^{*}$ is defined as $n^{*} = n - \delta$ where $\delta$ is the quantum defect. The quantum defect has the useful property that it reaches a constant value for electrons in atoms at high levels of excitation. The ionization potential The ionization potential of an atom is defined by the enthalpy change at 0 K for the following reaction $M \rightarrow M ^{+} + e ^{-} \qquad \Delta H = IP\nonumber$ If one pictures ionization as a series of excitations of the electron to be removed through a set of Rydberg states, one can deduce the ionization potential of an atom. (This is how atomic spectroscopy is used to determine highly accurate ionization potentials.) Using the effective principal quantum number $n^{*}$, the energy levels can be expressed as $\dfrac{E}{hc} =\dfrac{IP}{hc} -\dfrac{R_{M} }{(n^{*} )^{2} }\nonumber$ Consider the Rydberg series in $\ce{^{23}Na}$, the first few levels of which are given below.
For $\ce{Na}$, the Rydberg constant can be calculated \begin{align*} R_{Na} &= \left(\dfrac{m_{Na} }{m_{e} +m_{Na}} \right)R_{\infty } \\[4pt] &=\left(\dfrac{3.81763 \times 10^{-26} kg}{9.109 \times 10^{-31} kg+3.81763 \times 10^{-26} kg} \right)\left(109737.316\; cm^{-1} \right) \\[4pt] &=109734.698\; cm^{-1} \end{align*} Based on a guess of the ionization potential, an effective principal quantum number can be calculated for each level from $n^{*} =\sqrt{\dfrac{R_{Na} }{IP-E} }\nonumber$ From $n^{*}$, one can calculate the quantum defect ($\delta$) and adjust the guess of the ionization potential until $\delta$ becomes constant for large $n$. $IP =$ $41449.48 cm ^{-1}$   $R_ {Na} =$ $109734.7 \; cm ^{-1}$ Level $n$ $\delta$ $n*$ Energy ($cm ^{-1}$ ) 3p 3 0.883 2.117 16956.17 4p 4 0.867 3.133 30266.99 5p 5 0.862 4.138 35040.38 6p 6 0.860 5.140 37296.32 7p 7 0.858 6.142 38540.18 8p 8 0.858 7.142 39298.35 9p 9 0.857 8.143 39794.48 10p 10 0.857 9.143 40136.80 11p 11 0.857 10.143 40382.92 12p 12 0.857 11.143 40565.78 13p 13 0.857 12.143 40705.34 14p 14 0.856 13.144 40814.27 15p 15 0.856 14.144 40900.91 16p 16 0.857 15.143 40970.97 17p 17 0.857 16.143 41028.41 This method is extremely sensitive and can be used to determine very precise values of ionization potentials for atoms. The above result is 5.145 eV, whereas the literature value for the ionization potential of sodium is 5.139 eV (Webelements). The slightly large value determined from this data is a consequence of only using a limited number of excited levels, and not the highest energy levels, which behave most Rydberg-like. A close examination of the data actually reveals that there is some curvature to the $\delta$ vs $n$ curve at high values of $n$. Since the curve is actually increasing at the larger values of $n$, it is an indication that the guess for the ionization potential is slightly high, a fact that is consistent with the literature value! 6.04: References Balmer, J. J. (1885).
Notiz über die Spectrallinien des Wasserstoffs. Annalen der Physik und Chemie, 25, 80-85. Bohr, N. (1913). On the Constitution of Atoms and Molecules, Part I. Philosophical Magazine, 26, 1-24. Webelements. (n.d.). Retrieved November 19, 2009, from http://webelements.com/sodium/atoms.html 6.05: Vocabulary and Concepts angular nodes effective nuclear charge effective principal quantum number orbital principal quantum number quantum defect Rydberg constant shell shielding constant subshell 6.06: Problems 1. Calculate the finite-mass Rydberg constant ($R_ {M}$ ) for (a) H (b) D (c) $_{7} N$ (d) $_{11} Na$ 2. The 1s orbital wavefunction for hydrogen is given by ${\psi }_{1s}=\dfrac{1}{\sqrt{\pi }}{\left(\dfrac{1}{a_0}\right)}^{3/2}e^{-\frac{r}{a_0}}\nonumber$ (a) Show that this wavefunction is normalized. (b) Find the expectation value of $r$ in units of $a_{0}$ (the Bohr radius). 3. Show that the 2s wavefunction for hydrogen is (a) normalized and (b) an eigenfunction of the Hamiltonian. (What is the eigenvalue?) 4. The Laguerre polynomial $L_{1}(x)$ is given by $L_1\left(x\right)=-x+1\nonumber$ The associated Laguerre polynomials are generated from the relationship $L^{\alpha }_n\left(x\right)=\dfrac{d^{\alpha }}{dx^{\alpha }}L_n(x)\nonumber$ (a) Show that the associated Laguerre polynomials are $L^0_1\left(x\right)=-x+1$, $L^1_1\left(x\right)=-1$, and $L^2_1\left(x\right)=0$. (In fact, $L^{\alpha }_1\left(x\right)=0$ for any choice of $\alpha > 1$.) (b) Given that the associated Laguerre polynomials used in the radial wavefunctions of the hydrogen atom problem are $L^{2l+1}_{n+l}(x)$, derive a relationship between $n$ and $l$ that ensures that $L^{2l+1}_{n+l}(x)\neq 0$. 5. Using the Laguerre polynomials $L_2\left(x\right)=\dfrac{1}{2}\left(x^2-4x+2\right)$ and $L_1\left(x\right)=-x+1$, show that $\dfrac{d}{dx}L_n\left(x\right)=\dfrac{d}{dx}L_{n-1}\left(x\right)-L_{n-1}(x)\nonumber$ 6. Sketch the radial wavefunctions for the 1s, 2s, 2p, 3s, 3p, and 3d orbital wavefunctions of hydrogen. 7.
Determine the number of nodes in each of the following hydrogen atom orbital wavefunctions: wavefunction Total nodes Angular nodes Radial nodes 2s 3p 5d 6f 8. Determine the ionization potential for $^{3}He^{+}$. (a) Find $R_{He}$ for the He-3 isotope. (b) Use the relationship $IP=Z^2R_M\left(\dfrac{1}{{\left(1\right)}^2}-\dfrac{1}{{\left(\infty \right)}^2}\right)\nonumber$ 9. Based on the following data, find the ionization energy of Rb, using the fact that at high excitation, the quantum defect ($\delta$) becomes constant. n (for the $np \leftarrow 5s$ transition) Wavenumber ($cm ^{-1}$ ) 5 12578.950 6 23715.081 7 27835.02 8 29834.94 9 30958.91 10 31653.85 11 32113.55 12 32433.50 13 32665.03 14 32838.02 15 32970.66 16 33074.59 17 33157.54 18 33224.83 19 33280.13 20 33326.13
The previous chapters all dealt with problems that can be solved analytically. However, there are many problems of chemical interest that cannot be solved exactly. For these problems, we must employ some methods that will approximate a correct and complete solution. Two such methods will be discussed in this chapter. • 7.1: Perturbation Theory Oftentimes, a system represents only a small difference from an exactly solvable system. In these instances, perturbation theory can be used to describe the system. • 7.2: Variational Method The variational method is based on the variational principle, which says that a wavefunction that is not the true wavefunction will always yield a value for the energy that is greater than the true ground state energy of the system. • 7.3: Vocabulary and Concepts • 7.4: Problems 07: Approximate Methods Oftentimes, a system represents only a small difference from an exactly solvable system. In these instances, perturbation theory can be used to describe the system. To use perturbation theory, one must separate the Hamiltonian into two parts: one for which the solution is known ($\hat{H}^{(0)}$) and the other part which will represent the perturbation to the system ($\hat{H}^{(1)}$). $\hat{H}=\hat{H}^{(0)} +\hat{H}^{(1)} \nonumber$ The solution for the unperturbed system is known. $\hat{H}^{(0)} \psi _{n}^{(0)} =E_{n}^{(0)} \psi _{n}^{(0)} \nonumber$ The energy levels and wavefunctions for the perturbed system are determined by applying a series of corrections (referred to as first order, second order, etc.) $E_{n} =E_{n}^{(0)} +E_{n}^{(1)} +E_{n}^{(2)} +\ldots \nonumber$ $\psi _{n} =\psi _{n}^{(0)} +\psi _{n}^{(1)} +\psi _{n}^{(2)} +\ldots \nonumber$ Oftentimes only the first and second order corrections are needed to give a reasonable description of the system.
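As a concrete (hypothetical) illustration of evaluating a first-order energy correction $E_n^{(1)}=\int\psi_n^{(0)}\hat H^{(1)}\psi_n^{(0)}\,d\tau$, consider a particle in a box perturbed by a linear ramp $\hat H^{(1)}=\lambda x$; the integral can be done numerically:

```python
import numpy as np

# First-order perturbation theory sketch: particle in a box (0 <= x <= a)
# with a hypothetical linear perturbation H' = lam * x.
a, lam = 1.0, 0.1
x = np.linspace(0.0, a, 100_001)
dx = x[1] - x[0]

def psi0(n):
    """Unperturbed particle-in-a-box eigenfunction, sqrt(2/a) sin(n pi x / a)."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

for n in (1, 2, 3):
    # E_n^(1) = integral of psi_n H' psi_n; the integrand vanishes at both
    # endpoints, so a simple Riemann sum equals the trapezoid-rule result.
    E1 = float(np.sum(psi0(n) * lam * x * psi0(n)) * dx)
    print(n, round(E1, 6))  # each level shifts by lam*a/2 = 0.05
```

Here every level shifts by the same amount, $\lambda a/2$, because each unperturbed probability density is symmetric about the box center.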
The first order correction to the energy is given by $E_{n}^{(1)} =\int \psi _{n}^{(0)} \hat{H}^{(1)} \psi _{n}^{(0)} d\tau \nonumber$ The second order correction to the energy depends on the first order correction to the wavefunctions. $E_{n}^{(2)} =\int \psi _{n}^{(0)} \hat{H}^{(1)} \psi _{n}^{(1)} d\tau \label{pertE} \nonumber$ The formula for generating the first order corrections to the wavefunctions is given by $\psi _{n}^{(1)} = \sum _{i\ne n}\psi _{i}^{(0)} \frac{\int \psi _{i}^{(0)} \hat{H}^{(1)} \psi _{n}^{(0)} d\tau }{E_{n}^{(0)} -E_{i}^{(0)} } \nonumber$ Substitution into the expression for $E_{n}^{(2)}$ yields $E_{n}^{(2)} =\sum _{i\ne n}\frac{\left|\int \psi _{i}^{(0)} \hat{H}^{(1)} \psi _{n}^{(0)} d\tau \right|^{2} }{E_{n}^{(0)} -E_{i}^{(0)} } \nonumber$ 7.02: Variational Method The variational method is based on the variational principle, which says that a wavefunction that is not the true wavefunction will always yield a value for the energy that is greater than the true ground state energy of the system. This principle can be proven using the superposition theorem that was previously discussed. Theorem $1$: Variational Method Assume a trial wavefunction $\psi(x)$, describing a particle in a box, that can be expressed as a linear combination of the normal particle in a box wavefunctions.
$\psi (x)=\sum _{n}c_{n} \phi _{n} (x)\nonumber$ Assuming $\psi(x)$ is normalized, the expectation value of energy $\langle E \rangle$ is obtained from the expression $\left\langle E\right\rangle =\int \psi (x)\, \hat{H}\, \psi (x)\, d\tau \nonumber$ Substituting the expression for $\psi(x)$ from above $\left\langle E\right\rangle =\int \left(\sum _{m}c_{m} \phi _{m} \right)\hat{H}\left(\sum _{n}c_{n} \phi _{n} \right) d\tau \nonumber$ Noting that $\hat{H}\phi _{n} =E_{n} \phi _{n}\nonumber$ Substitution yields $\left\langle E\right\rangle =\int \left(\sum _{m}c_{m} \phi _{m} \right)\left(\sum _{n}c_{n} E_{n} \phi _{n} \right) d\tau\nonumber$ Gathering terms, one obtains \begin{align*} \left\langle E\right\rangle &= \int \left(\sum _{m}\sum _{n}c_{m} c_{n} E_{n} \phi _{m} \phi _{n} \right) d\tau \\[4pt] &= \sum _{m}\sum _{n}c_{m} c_{n} E_{n} \int \left(\phi _{m} \phi _{n} \right) d\tau \\[4pt] &= \sum _{m}\sum _{n}c_{m} c_{n} E_{n} \delta _{mn} \end{align*} The Kronecker delta will destroy one of the summations since it will pick out only one value to be non-zero. \begin{align*} \left\langle E\right\rangle &= \sum _{m}\sum _{n}c_{m} c_{n} E_{n} \delta _{mn} \\[4pt] &=\sum _{n}c_{n}^{2} E_{n} \end{align*} Since $\psi(x)$ is normalized, $\sum _{n}c_{n}^{2} =1$. Thus if any components of the linear combination beyond the first have a non-zero contribution ($c_{n} \neq 0$ for $n > 1$), the expectation value has to be larger than $E_{1}$. The variational principle can be used to determine reasonable trial wavefunctions ($\Psi$) based on a set of approximate wavefunctions ($\phi_{n}$). This is done by assuming the trial wavefunction can be expressed as a linear combination of the approximate wavefunctions $\Psi =\sum _{n}c_{n} \phi _{n}\nonumber$ and then determining the contribution to the trial function by minimizing the energy with respect to the coefficients ($c_{n}$) in the expansion.
$\dfrac{\partial }{\partial c_{n} } \left\langle E\right\rangle =0\nonumber$ This will produce $n$ equations with $n$ unknown values of $c_{n}$, which can be simultaneously solved to yield the optimal values of $c_{n}$. This methodology is used to a great extent in computational chemistry methods. Example $1$ What is $\langle E \rangle$ for a system with the following wavefunction that approximates $\psi_{1}(x)$ for a particle in a box? $\psi (x)=\sqrt{\dfrac{30}{a^{5} } } \cdot x\cdot \left(a-x\right) \nonumber$ Solution The wavefunction is a reasonable, but not perfect, approximation of the $n=1$ level of a particle in a box. $\phi_1( x )=\sqrt{\frac{2}{ a }} \cdot \sin \left(\frac{\pi \cdot x }{ a }\right) \nonumber$ The expectation value of energy is found in the usual manner. \begin{align*} \left\langle E\right\rangle &= \int _{0}^{a}\psi \, \hat{H}\, \psi \, d\tau \\[4pt] &=-\dfrac{\hbar ^{2} }{2m} \dfrac{30}{a^{5} } \int _{0}^{a}\left(ax-x^{2} \right)\dfrac{d^{2} }{dx^{2} } \left(ax-x^{2} \right)dx =-\dfrac{15\hbar ^{2} }{ma^{5} } \int _{0}^{a}\left(ax-x^{2} \right)\left(-2\right)dx \\[4pt] &=\dfrac{30\hbar ^{2} }{ma^{5} } \left[\dfrac{ax^{2} }{2} -\dfrac{x^{3} }{3} \right]_{0}^{a} =\dfrac{30\hbar ^{2} }{ma^{5} } \left(\dfrac{a^{3} }{6} \right) \\[4pt] &= \dfrac{5\hbar ^{2} }{ma^{2} } \end{align*}\nonumber This result is slightly larger than $\dfrac{h^{2}}{8ma^{2}}$ since $\dfrac{5}{(2\pi)^{2}} = 0.127$ and $\dfrac{1}{8} = 0.125$. In the variational method, an approximate form of a wavefunction can thus be used to obtain an upper bound on the true ground state energy. 7.03: Vocabulary and Concepts perturbation theory variational method 7.04: Problems 1. Consider a particle of mass $m$ in a box defined between $x = 0$ and $x = a$ that is prepared in the $n = 1$ state. If the wavefunction is approximated by $\psi \left(x\right)=\sqrt{\frac{30}{a^5}}x(a-x) \nonumber$ (a) Show that the expectation value $\langle E \rangle$ exceeds $E_{1}$ for a particle in a box. (b)
By what percentage does the approximate energy exceed that of the $n = 1$ energy?
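The worked example above (and this problem) can be checked numerically; the sketch below is not from the text, and uses the illustrative convention $\hbar = m = a = 1$, so the exact ground-state energy is $E_1 = \pi^2/2$.

```python
# Sketch: verifying <E> = 5 hbar^2/(m a^2) for the trial function, with hbar = m = a = 1.
# The trial function is psi = sqrt(30/a^5) x(a - x); its second derivative is the
# constant -2 sqrt(30/a^5), so <E> reduces to a simple one-dimensional integral.
import math

a = 1.0
N = 10000
dx = a / N

def psi(x):
    """Normalized trial wavefunction."""
    return math.sqrt(30.0 / a**5) * x * (a - x)

psi_pp = -2.0 * math.sqrt(30.0 / a**5)   # psi'' is constant for this trial function

# <E> = -(1/2) * integral of psi * psi'' dx, evaluated by the midpoint rule
E_trial = sum(-0.5 * psi((i + 0.5) * dx) * psi_pp * dx for i in range(N))

E_exact = math.pi**2 / 2.0               # exact E_1 for hbar = m = a = 1

print(E_trial)                           # ≈ 5.0, matching 5 hbar^2/(m a^2)
print(100 * (E_trial / E_exact - 1))     # ≈ 1.32 % above the exact E_1
```

The ratio $E_{\text{trial}}/E_1 = 10/\pi^2 \approx 1.013$, so the variational estimate lies about 1.3% above the true ground-state energy, as the variational principle requires.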
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Chemistry_with_Applications_in_Spectroscopy_(Fleming)/07%3A_Approximate_Methods/7.01%3A_Perturbation_Theory.txt
One of the shortcomings of Bohr’s model of the hydrogen atom was that it was not extensible to atoms with more than one electron. It was hoped that the newly emerging quantum mechanics would do a better job. Unfortunately, while the hydrogen atom problem is solvable analytically, issues arise when an attempt is made to solve the problem for atoms with multiple electrons. Regardless, the first step in developing the theory is writing the Hamiltonian for the system. • 8.1: Potential Energy and the Hamiltonian The potential energy of a polyelectronic atom is all electrostatic in nature. There are attractive forces between electrons and the nucleus and repulsive forces between the electrons themselves. • 8.2: The Aufbau Principle The aufbau principle (German for “building up” principle), or building-up principle, suggests that we can construct a description of an atom by adding subatomic particles one at a time, moving through the periodic table until we reach the element of interest. • 8.3: Orbital Diagrams Orbital diagrams are handy to depict electronic configurations without having to resort to just quantum numbers. • 8.4: Angular Momentum Coupling Any system that has more than one source of angular momentum will be subject to coupling between those forms of angular momentum. • 8.5: The Pauli Exclusion Principle One explanation for the differences between the term symbols that arise from a p2 configuration relative to a pp configuration is the Pauli Exclusion principle. • 8.6: Atomic Spectroscopy The complex spectra of atoms can be understood using term symbols, as they contain all of the symmetry and quantum number values needed. • 8.7: Vocabulary and Concepts • 8.8: Learning Objectives • 8.9: Problems Thumbnail: Neon Atom. (CC BY 3.0 Unported; BruceBlaus via Wikipedia) 08: Polyelectronic Atoms The potential energy of a polyelectronic atom is all electrostatic in nature.
There are attractive forces between electrons and the nucleus and repulsive forces between the electrons themselves. For simplicity, we will consider the helium atom first, which has a nucleus with a charge of +2 electron charges and two electrons with -1 charges each. The Hamiltonian for this system will have kinetic energy terms for both electrons and three terms to describe the potential energy in the system. The attractive forces will lead to negative contributions to the potential energy and the repulsive (electron-electron) force will contribute a positive value to the potential energy. In atomic units, this yields $\hat{H}=\hat{T}_{1} +\hat{T}_{2} -\dfrac{2}{r_{1} } -\dfrac{2}{r_{2} } +\dfrac{1}{r_{12} }\nonumber$ The $\dfrac{1}{r _{12}}$ (electron-electron repulsion) term makes the problem inseparable into terms that relate only to a single electron. This creates a three-body problem, which cannot be solved analytically. The Orbital Approximation The way we deal with this problem is to simply ignore the electron-electron repulsion term in the solution, and treat it phenomenologically after the fact. This is known as the orbital approximation, as it allows for the separation of the Hamiltonian into two terms, one of which involves only electron 1 and the other only electron 2. \begin{aligned} \hat{H}_{tot} &=\hat{T}_{1} -\dfrac{2}{r_{1} } +\hat{T}_{2} -\dfrac{2}{r_{2} } \\ &=\hat{H}_{1} +\hat{H}_{2} \end{aligned} \nonumber This is also the approximation that allows us to write electronic configurations for polyelectronic atoms. In the electronic configuration, we assume that each electron has a hydrogen-like wavefunction. 8.02: The Aufbau Principle The aufbau principle (German for “building up” principle), or building-up principle, suggests that we can construct a description of an atom by adding subatomic particles one at a time, moving through the periodic table until we reach the element of interest.
Under this description, a carbon atom (atomic number 6) is similar to a boron (atomic number 5) atom, but with one additional proton and some additional neutrons in the nucleus and one additional electron added to the electron cloud. Electronic Configurations Consider carbon, which is atomic number 6. Most chemists advanced to the level at which they are prepared to take a course in physical chemistry can construct an electronic configuration for $_{6}C$. $_{6}C$: $[He] 2s^{2} 2p^{2}$ Or for $_{23}V$, one would write $_{23}V$: $[Ar] 4s^{2} 3d^{3}$ It is a curious thing that the 4s subshell fills before the 3d subshell, since in atomic hydrogen, the 3d subshell has a lower energy. However, in polyelectronic atoms (specifically for K and Ca), the 4s subshell is actually lower in energy than the 3d subshell. As such, according to the aufbau principle, it is the 4s subshell that fills first of the two. However, it is important to note that the relative energies of the subshells change with changing nuclear charge and differing numbers of electrons. For example, in Sc, it is the 4s electrons that are higher in energy than the 3d electron. As such, the 4s electrons are the first to be removed when the atom is ionized. Shells, Subshells, Orbitals and Spin It is useful to develop some nomenclature to describe the different combinations of quantum numbers that describe the different wavefunctions for the electrons in an atom. In order to do this, we need to define a few terms that will come in handy later. 1. shell – characterized by the principal quantum number n 2. subshell – characterized by n and the angular momentum quantum number l 3. orbital – characterized by n, l and the magnetic quantum number $m_{l}$. In addition to shells, subshells and orbitals, electrons have spin. The spin quantum number of an electron is $\textbf{s} = \frac{1}{2}$.
But generally electrons are described as being “spin up” or “spin down” based on the value of the z-axis component of the spin, $m_{s}$. $m_{s}$ can take values of $+\frac{1}{2}$ and $-\frac{1}{2}$. Each orbital can hold two electrons. If there are two electrons in the orbital, the spins must be paired such that one is “spin up” and the other is “spin down.” 8.03: Orbital Diagrams Orbital diagrams are handy to depict electronic configurations without having to resort to just quantum numbers. In an orbital diagram, each orbital is depicted using a box or a line and electrons are depicted with arrows pointing either up or down depending on the value of \(m_{s}\).
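The filling order behind these configurations can be sketched in code. This is a minimal illustration (not from the text) using the familiar $n+l$ (Madelung) ordering; aufbau exceptions such as Cr and Cu are deliberately not handled.

```python
# Sketch: building an electronic configuration by the aufbau principle using the
# n+l (Madelung) ordering, with ties broken by lower n. Each subshell holds
# 2(2l+1) electrons (two per orbital, one spin up and one spin down).

def aufbau_configuration(n_electrons):
    letters = "spdfghi"
    # subshells sorted by (n + l), then by n
    subshells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    config = []
    for n, l in subshells:
        if n_electrons <= 0:
            break
        fill = min(n_electrons, 2 * (2 * l + 1))   # capacity of this subshell
        config.append(f"{n}{letters[l]}{fill}")
        n_electrons -= fill
    return " ".join(config)

print(aufbau_configuration(6))    # 1s2 2s2 2p2
print(aufbau_configuration(23))   # 1s2 2s2 2p6 3s2 3p6 4s2 3d3
```

Note how the $(n+l, n)$ sort reproduces the text's observation that 4s $(4+0)$ fills before 3d $(3+2)$.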
Any system that has more than one source of angular momentum will be subject to coupling between those forms of angular momentum. For example, consider the emission from an excited hydrogen atom, in which the electron is in the 2p subshell and the atom emits a photon as the electron relaxes to the ground 1s subshell. In fact, this transition is doubled, as two lines can be observed if viewed at high enough resolution. The transition is depicted in the above energy level diagram. The upper (2p) state is shown to be split into two components, one labeled $^{2} P _{3/2}$ and one $^{2} P _{1/2}$. The lower state has only one component, labeled $^{2} S _{1/2}$. Part of the job of quantum mechanics will be to describe this splitting. The explanation comes in the form of angular momentum coupling. There are two sources of angular momentum in the electronic wavefunction of the atom: the orbital angular momentum ($l = 1$) and the electron spin angular momentum ($s = \dfrac{1}{2}$). These angular momenta can couple to yield a total angular momentum $J = \dfrac{3}{2} \; or \; \dfrac{1}{2}$. The resultant angular momentum can be determined by the two angular momentum vectors adding in parallel or antiparallel fashion. The result is to split the state into two components. Term Symbols Angular momentum in atoms can be summarized using a term symbol. The term symbol will indicate a number of different types of angular momentum such as the total orbital angular momentum, total spin angular momentum and the total (spin + orbit) angular momentum. In the limit that Russell-Saunders coupling (which will be described in detail shortly) provides a good description of the atom, the term symbol used will be of the form $^{(2S+1)}L_{J}\nonumber$ where S is the total spin angular momentum, ($2S+1$) is the spin degeneracy, L is the total orbital angular momentum, and J gives the total (spin-orbit) angular momentum.
(The convention will be followed that lower-case letters are used to indicate one-electron properties and upper-case letters are used to describe total atom properties.) L and S must be calculated using vectoral sums of the single-electron angular momenta (whether orbital or spin). The vectoral sums can yield several values depending on the angle between the vectors. The possible magnitudes of the resultant vectors will be quantized, with the range of magnitudes being given by a Clebsch series. Consider the addition of the angular momentum vectors for two electrons in $p (l = 1)$ subshells. $\begin{gathered} \mathbf{L}=\boldsymbol{l}_1 \oplus \boldsymbol{l}_2 \\ =l_1+l_2, l_1+l_2-1, l_1+l_2-2, \ldots,\left|l_1-l_2\right| \end{gathered}\nonumber$ As such, the possible values of L for a $p^ {2}$ configuration are \begin{aligned} \mathbf{L} &= \boldsymbol{l}_1 \oplus \boldsymbol{l}_2 \\ &= 1 \oplus 1 \\ &=2, \; 1, \; 0 \end{aligned} \nonumber As in the case of one-electron orbital angular momenta, the total orbital angular momentum is signified using a letter. The following table shows which letters are used.

| $l$ | One-electron designation | $L$ | Total-atom designation |
|---|---|---|---|
| 0 | s | 0 | S |
| 1 | p | 1 | P |
| 2 | d | 2 | D |
| 3 | f | 3 | F |
| 4 | g | 4 | G |

The possible values of $\boldsymbol{S}$ are given by $s_{1} \oplus s _{2}$. (For all electrons, $s = \dfrac{1}{2}$.) $\boldsymbol{S} = \boldsymbol{s}_1 \oplus \boldsymbol{s}_2 = \dfrac{1}{2} \oplus \dfrac{1}{2}\nonumber$ So the possible values of ($2\boldsymbol{S} + 1$) are 3 and 1. In other words, both triplet and singlet states arise from a $p^ {2}$ configuration. However, not all possible combinations of $\boldsymbol{L}$ and ($2\boldsymbol{S} + 1$) are possible. In fact, only those values that arise from distinguishable combinations of microstate quantum numbers are possible.
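The Clebsch series is simple to compute. A minimal sketch (the function name is illustrative, not from the text):

```python
# Sketch: the Clebsch series for adding two angular momenta,
# j1 (+) j2 = j1+j2, j1+j2-1, ..., |j1-j2|. Works for half-integer j too.

def clebsch_series(j1, j2):
    """Allowed resultant values, from j1+j2 down to |j1-j2| in steps of 1."""
    vals = []
    j = j1 + j2
    while j >= abs(j1 - j2) - 1e-9:   # tolerance for half-integer floats
        vals.append(j)
        j -= 1
    return vals

print(clebsch_series(1, 1))        # [2, 1, 0]  — the L values for two p electrons
print(clebsch_series(0.5, 0.5))    # [1.0, 0.0] — the S values for two electrons
```

The two printed series reproduce the $L = 2, 1, 0$ and $(2S+1) = 3, 1$ results derived above.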
The Microstate Method The number of distinguishable microstates for a given electronic configuration is given by $\dfrac{G!}{N!(G-N)!}\nonumber$ where G is the number of spin-orbit states possible for a single electron and N is the number of electrons. For a $p^ {2}$ configuration, $G = 6$ and $N = 2$. So the number of microstates is given by $\dfrac{6!}{2!\cdot 4!} =\dfrac{6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1}{(2\cdot 1)\cdot (4\cdot 3\cdot 2\cdot 1)} =15\nonumber$ So there are 15 possible microstates. Each microstate will be characterized by a value of $m_ {l}$ and $m_ {s}$ for each electron under consideration. A complete set of microstates for a $p^ {2}$ configuration is shown in the table below. $m_ {l}$ and $m_ {s}$ are indicated for electrons 1 and 2 in the atom. Notice that only distinguishable combinations are shown!

| # | $m_{l}$(1) | $m_{l}$(2) | $m_{s}$(1) | $m_{s}$(2) | $m_{L}$ | $m_{S}$ | Designation |
|---|---|---|---|---|---|---|---|
| 1 | +1 | +1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +2 | 0 | $^{1}D$ |
| 2 | +1 | 0 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +1 | +1 | $^{3}P$ |
| 3 | +1 | 0 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +1 | 0 | $^{1}D$ |
| 4 | +1 | -1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | +1 | $^{3}P$ |
| 5 | +1 | -1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | 0 | $^{1}D$ |
| 6 | +1 | 0 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +1 | 0 | $^{3}P$ |
| 7 | +1 | 0 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +1 | -1 | $^{3}P$ |
| 8 | +1 | -1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | 0 | $^{3}P$ |
| 9 | +1 | -1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | -1 | $^{3}P$ |
| 10 | 0 | 0 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | 0 | $^{1}S$ |
| 11 | 0 | -1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -1 | +1 | $^{3}P$ |
| 12 | 0 | -1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -1 | 0 | $^{1}D$ |
| 13 | 0 | -1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -1 | 0 | $^{3}P$ |
| 14 | 0 | -1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -1 | -1 | $^{3}P$ |
| 15 | -1 | -1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -2 | 0 | $^{1}D$ |

The “Designation” column in the above table is really for bookkeeping only. For example, it should be noted that there are two microstates that yield $m_ {L}$ = +1, $m_ {S}$ = 0. One has been designated $^{1 }D$ and the other $^{3 }P$.
In fact, the wavefunctions needed to describe these term symbol components require linear combinations of both microstates. The resulting term symbols for a $p^ {2}$ configuration are $^{1 }D$, $^{3 }P$ and $^{1 }S$. The methodology for determining this from the table of microstates is as follows: 1. Find the largest value of $m_ {L}$ and the largest value of $m_ {S}$ that corresponds to that value. 2. From these, find L and S for the term symbol. 3. Mark combinations of $m_ {L}$ and $m_ {S}$ that match the pattern for a given term symbol. 4. Repeat from step 1 for remaining microstates. Keep repeating until there are no microstates left. It is very important to approach this process methodically or errors will occur in determining microstate-term symbol correlations. Utilizing this methodology to work through the above table, we start with the largest value for $m_ {L}$ which is +2. The largest value of $m_ {S}$ that goes with it is 0. This indicates $\boldsymbol{L}$ and $\boldsymbol{S}$ values of 2 and 0 respectively. $\boldsymbol{L} = 2$ indicates a $D$ state. $\boldsymbol{S} = 0$ indicates that $(2\boldsymbol{S} + 1) = 1$ (or a singlet state.) So the resulting term is $^{1 }D$. This will have components of $m_ {L} = +2,\; +1,\; 0,\; -1,\; -2$. Each will have $m_ {S}$ = 0. This accounts for five of the microstates. The largest value of $m_ {L}$ for the remaining microstates is $m_ {L} = +1$. The largest value of $m_ {S}$ that goes with $m_ {L}= +1$ is $m_ {S} = +1$. This correlates to $\boldsymbol{L} = 1$, $\boldsymbol{S} =1$, or a $^{3 }P$ state. There are nine combinations of microstates for this term symbol, one each for each combination of $m_ {L} = +1, \; 0, \; -1$ and $m_ {S} = +1, \; 0, \;-1$. After these combinations are marked, the only remaining combination is $m_ {L}= 0$, $m_ {S} = 0$, which corresponds to a $^{1 }S$ state.
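The stripping procedure in steps 1-4 can be automated. The following is a minimal sketch (not from the text): it enumerates the 15 distinguishable $p^2$ microstates, then repeatedly extracts the term implied by the largest remaining $m_L$ (and its largest accompanying $m_S$).

```python
# Sketch: the microstate method for a p^2 configuration. Each of the G = 6
# spin-orbitals is an (ml, ms) pair; combinations (not permutations) give the
# C(6,2) = 15 distinguishable microstates, each reduced to its (m_L, m_S).
from itertools import combinations
from fractions import Fraction

half = Fraction(1, 2)
orbitals = [(ml, ms) for ml in (1, 0, -1) for ms in (half, -half)]
pool = [(a[0] + b[0], a[1] + b[1]) for a, b in combinations(orbitals, 2)]

letters = {0: "S", 1: "P", 2: "D", 3: "F", 4: "G"}
terms = []
while pool:
    L = max(mL for mL, mS in pool)                 # step 1: largest m_L ...
    S = max(mS for mL, mS in pool if mL == L)      # ... and its largest m_S
    terms.append(f"{int(2 * S + 1)}{letters[L]}")  # step 2: the term symbol
    for mL in range(-L, L + 1):                    # step 3: mark (remove) the
        mS = -S                                    # (2L+1)(2S+1) microstates
        while mS <= S:
            pool.remove((mL, mS))
            mS += 1

print(terms)   # ['1D', '3P', '1S']
```

The output matches the hand analysis above: five microstates go to $^1D$, nine to $^3P$, and the last to $^1S$.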
The number of microstates used for a given term symbol can be determined from ($2L+1$) and ($2S+1$), the orbital and spin degeneracies respectively. Consider the following table. Notice that the total of $(2L+1)(2S+1)$ is the same as the number of original microstates.

| Term | $(2L+1)$ | $(2S+1)$ | $(2L+1)(2S+1)$ |
|---|---|---|---|
| $^{1}D$ | 5 | 1 | 5 |
| $^{3}P$ | 3 | 3 | 9 |
| $^{1}S$ | 1 | 1 | 1 |
| Total | | | 15 |

Spin-Orbit Coupling The one thing that has not been determined from the microstates themselves is the total angular momentum $\boldsymbol{J}$, which is given by the vectoral sum of $\boldsymbol{L}$ and $\boldsymbol{S}$. $\boldsymbol{J}$ values must be determined for each term separately. This coupling of spin and orbit angular momenta will split the term states further. $J = L \oplus S\nonumber$

| Term | L | S | J | Terms |
|---|---|---|---|---|
| $^{1}D$ | 2 | 0 | 2 | $^{1}D_{2}$ |
| $^{3}P$ | 1 | 1 | 2, 1, 0 | $^{3}P_{2}$, $^{3}P_{1}$, $^{3}P_{0}$ |
| $^{1}S$ | 0 | 0 | 0 | $^{1}S_{0}$ |

Again, the values of the spin-orbit degeneracies, given by (2J+1), can be used to determine if the coupling scheme has been done properly.

| Term | J | (2J+1) |
|---|---|---|
| $^{1}D_{2}$ | 2 | 5 |
| $^{3}P_{2}$ | 2 | 5 |
| $^{3}P_{1}$ | 1 | 3 |
| $^{3}P_{0}$ | 0 | 1 |
| $^{1}S_{0}$ | 0 | 1 |
| Total | | 15 |

Again, notice that the total matches the original number of microstates. The Hole Rule When dealing with a subshell that is more than half filled, it is oftentimes easier (or at least less tedious) to employ the hole rule. The hole rule involves treating electron holes rather than the electrons themselves. Consider $_{6 }C$ and $_{8 }O$ as an example of complementary atoms. Carbon has a $p^ {2}$ configuration and oxygen a $p^ {4}$ configuration. (Added together, that makes a $p^ {6}$ configuration, which closes the p-subshell and is why the two atoms are complementary.) For each microstate in the $p^ {2}$ system, there exists one in the $p^ {4}$ system that when added together would complete the p-subshell. An example is shown below.
This relationship ensures that the exact same symmetry relationships hold for the $p^ {4}$ system as for the $p^ {2}$ system. Hence, the term symbols that arise from a $p^ {4}$ system are $^{1 }D$, $^{3 }P$ and $^{1 }S$. With spin-orbit coupling, the $^{3}P$ will split into three components, $^{3 }P_ {0}$, $^{3 }P_ {1}$ and $^{3 }P_ {2}$. Of these, $^{3 }P_ {2}$ will have the lowest energy according to Hund’s rule 3b, as these terms arise from a system where the subshell is more than half filled. Hund’s Rules Hund’s rules are used to determine the lowest energy state within the manifold of states generated from a given electronic configuration. The rules can be summarized as follows: 1. The lowest energy state will be the one with the largest value of S. 2. For multiple states with the same largest value of S, the lowest energy state will have the largest value of L. 3. For states with the same values of L and S, the lowest energy state will have 1. The smallest value of J, if the term arises from an electronic configuration in which the subshell is less than half filled 2. The largest value of J, if the term arises from an electronic configuration in which the subshell is more than half filled For the case of a $p^ {2}$ configuration, the largest value of S generated is S = 1, for the $^{3 }P$ state. And within this state, the lowest energy term will be $^{3 }P_ {0}$, since $p^ {2}$ corresponds to a subshell that is less than half filled. Example $1$: Nonequivalent Electrons Determine the term symbols that arise from an excited carbon atom with a $2p^{1} 3p^{1}$ (pp) configuration. Solution Consider a carbon atom in an excited state where the electronic configuration is given by ${}_{6 }C$: [He] 2s $^{2}$ 2$p^ {1}$ 3$p^ {1}$ This is an example of a pp configuration (which is different from a $p^ {2}$ configuration since the two electrons have different values of the principal quantum number n). In this case, a number of microstate combinations become distinguishable that would not be otherwise.
A complete set of microstates for a pp configuration is given in the table below. In this case, since the electrons are not equivalent, it is possible for both to be in orbitals where $m_ {l}$ = +1 with $m_ {s}$ = $+\dfrac{1}{2}$ since they are in different subshells.

| # | $m_{l}$(2p) | $m_{l}$(3p) | $m_{s}$(2p) | $m_{s}$(3p) | $m_{L}$ | $m_{S}$ | Designation |
|---|---|---|---|---|---|---|---|
| 1 | +1 | +1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +2 | +1 | $^{3}D$ |
| 2 | +1 | +1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +2 | 0 | $^{3}D$ |
| 3 | +1 | +1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +2 | 0 | $^{1}D$ |
| 4 | +1 | +1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +2 | -1 | $^{3}D$ |
| 5 | +1 | 0 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +1 | +1 | $^{3}D$ |
| 6 | +1 | 0 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +1 | 0 | $^{3}D$ |
| 7 | +1 | 0 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +1 | 0 | $^{1}D$ |
| 8 | +1 | 0 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +1 | -1 | $^{3}D$ |
| 9 | +1 | -1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | +1 | $^{3}D$ |
| 10 | +1 | -1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | 0 | $^{3}D$ |
| 11 | +1 | -1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | 0 | $^{1}D$ |
| 12 | +1 | -1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | -1 | $^{3}D$ |
| 13 | 0 | +1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +1 | +1 | $^{3}P$ |
| 14 | 0 | +1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +1 | 0 | $^{3}P$ |
| 15 | 0 | +1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | +1 | 0 | $^{1}P$ |
| 16 | 0 | +1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | +1 | -1 | $^{3}P$ |
| 17 | 0 | 0 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | +1 | $^{3}S$ |
| 18 | 0 | 0 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | 0 | $^{3}S$ |
| 19 | 0 | 0 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | 0 | $^{1}S$ |
| 20 | 0 | 0 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | -1 | $^{3}S$ |
| 21 | 0 | -1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -1 | +1 | $^{3}D$ |
| 22 | 0 | -1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -1 | 0 | $^{3}D$ |
| 23 | 0 | -1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -1 | 0 | $^{1}D$ |
| 24 | 0 | -1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -1 | -1 | $^{3}D$ |
| 25 | -1 | +1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | +1 | $^{3}P$ |
| 26 | -1 | +1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | 0 | $^{3}P$ |
| 27 | -1 | +1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | 0 | 0 | $^{1}P$ |
| 28 | -1 | +1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | 0 | -1 | $^{3}P$ |
| 29 | -1 | 0 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -1 | +1 | $^{3}P$ |
| 30 | -1 | 0 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -1 | 0 | $^{3}P$ |
| 31 | -1 | 0 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -1 | 0 | $^{1}P$ |
| 32 | -1 | 0 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -1 | -1 | $^{3}P$ |
| 33 | -1 | -1 | $+\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -2 | +1 | $^{3}D$ |
| 34 | -1 | -1 | $+\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -2 | 0 | $^{3}D$ |
| 35 | -1 | -1 | $-\dfrac{1}{2}$ | $+\dfrac{1}{2}$ | -2 | 0 | $^{1}D$ |
| 36 | -1 | -1 | $-\dfrac{1}{2}$ | $-\dfrac{1}{2}$ | -2 | -1 | $^{3}D$ |

In this example, there are more term symbols generated due to the fact that the electrons are not in the same subshell. The resulting term symbols are $^{3 }D$, $^{3 }P$, $^{3 }S$, $^{1 }D$, $^{1 }P$ and $^{1 }S$. As such, this set of microstates includes some combinations of $m_ {l}$ and $m_ {s}$ which would not be possible if the two electrons were in the same subshell.
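The counting for nonequivalent electrons can be illustrated with a short sketch (not from the text): every pairing of the two sets of six spin-orbitals is distinguishable, so a product rather than a combination is appropriate.

```python
# Sketch: for nonequivalent electrons (2p and 3p), every ordered pairing of
# spin-orbitals is distinguishable, giving 6 x 6 = 36 microstates rather than
# the C(6,2) = 15 of the equivalent-electron (p^2) case.
from itertools import product
from fractions import Fraction

half = Fraction(1, 2)
p_orbitals = [(ml, ms) for ml in (1, 0, -1) for ms in (half, -half)]

pp_microstates = list(product(p_orbitals, repeat=2))
print(len(pp_microstates))        # 36

# degeneracy bookkeeping, (2L+1)(2S+1) for each resulting term:
degeneracy = {"3D": 5 * 3, "3P": 3 * 3, "3S": 1 * 3,
              "1D": 5 * 1, "1P": 3 * 1, "1S": 1 * 1}
print(sum(degeneracy.values()))   # 36 — every microstate is accounted for
```

As with the $p^2$ case, the degeneracy total matching the microstate count is the consistency check that no term has been missed or double-counted.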
One explanation for the differences between the term symbols that arise from a $p^ {2}$ configuration and those that arise from a pp configuration is the Pauli Exclusion Principle. The usual statement of the Pauli Exclusion Principle is that no two electrons in an atom can have the same set of four quantum numbers n, l, $m_ {l}$ and $m_ {s}$. Another explanation is to simply announce that Electrons are Fermions! This approach is useful if you happen to know the properties of Fermions, but does not provide much insight if you do not. A Fermion is a particle with half-integral spin. An obvious example (according to the statement above) is an electron, which has $\boldsymbol{s} = \dfrac{1}{2}$. Other examples include protons and neutrons and fluorine-19 nuclei (all with $\boldsymbol{I}= \dfrac{1}{2}$), aluminum-27 nuclei ($\boldsymbol{I} = \dfrac{5}{2}$), etc. Fermions have the property that the total wavefunction of a system containing two equivalent fermions must change sign if the two particles are exchanged. The other type of particle is called a Boson. This is a particle with integral spin. Examples of bosons include deuterium nuclei or nitrogen-14 nuclei (both with $\boldsymbol{I} = 1$) or helium-4 nuclei ($\boldsymbol{I}= 0$). A system containing two equivalent bosons must have a wavefunction that does not change sign for the exchange of two equivalent bosons. \begin{aligned} \Psi(1,2) &= -\Psi(2,1) \qquad \text{(for fermions)} \\ \Psi(1,2) &= \Psi(2,1) \qquad \text{(for bosons)}\end{aligned}\nonumber In order to explore the properties of these types of particles, it is useful to define an operator that exchanges two equivalent particles (1 and 2).
$\begin{array}{c} {\hat{O}\Psi (1,2)=\Psi (2,1)} \\ {\hat{O}\psi _{m} (1)\psi _{n} (2)=\psi _{m} (2)\psi _{n} (1)} \end{array}\nonumber$ In the limit that spin and orbital wavefunctions are separable (the total wavefunction can be expressed as the product of a spin function and an orbital function) $\Psi _{tot} =\psi _{orbital} \psi _{spin}\nonumber$ both the spin and orbital functions must be eigenfunctions of the electron exchange operator. We shall explore the effect of this operation on spin wavefunctions to see the difference between singlet and triplet spin wavefunctions as derived from a pp or $p^ {2}$ configuration. Consider how the microstates shown in Table 1 behave under the exchange operation. $\begin{array}{l} {\hat{O}\Psi _{1} =\hat{O}\alpha (1)\alpha (2)=\alpha (2)\alpha (1)=\Psi _{1} } \\ {\hat{O}\Psi _{2} =\hat{O}\alpha (1)\beta (2)=\alpha (2)\beta (1)=\Psi _{3} } \\ {\hat{O}\Psi _{3} =\hat{O}\beta (1)\alpha (2)=\beta (2)\alpha (1)=\Psi _{2} } \\ {\hat{O}\Psi _{4} =\hat{O}\beta (1)\beta (2)=\beta (2)\beta (1)=\Psi _{4} } \end{array}\nonumber$ Wavefunctions $\Psi_ {1}$ and $\Psi_ {4}$ are eigenfunctions of $\hat{O}$. Wavefunctions $\Psi_ {2}$ and $\Psi_ {3}$ are not eigenfunctions of $\hat{O}$, but they are clearly related to one another through the electron exchange operation, as the operation converts one into the other. The relationship suggests that linear combinations of $\Psi_ {2}$ and $\Psi_ {3}$ can be taken in order to construct spin wavefunctions that are eigenfunctions of $\hat{O}$. One linear combination is symmetric (eigenvalue = +1) and the other is antisymmetric (eigenvalue = -1). The correct, normalized linear combinations are as follows.
$\begin{array}{l} {\Psi _{s} =\dfrac{1}{\sqrt{2} } \left(\Psi _{2} +\Psi _{3} \right)=\dfrac{1}{\sqrt{2} } \left(\alpha (1)\beta (2)+\beta (1)\alpha (2)\right)} \\ {\Psi _{a} =\dfrac{1}{\sqrt{2} } \left(\Psi _{2} -\Psi _{3} \right)=\dfrac{1}{\sqrt{2} } \left(\alpha (1)\beta (2)-\beta (1)\alpha (2)\right)} \end{array}\nonumber$ Under the electron exchange operator, these linear combinations behave as follows. $\begin{array}{l} {\hat{O}\Psi _{s} =\hat{O}\left[\dfrac{1}{\sqrt{2} } \left(\alpha (1)\beta (2)+\beta (1)\alpha (2)\right)\right]=\dfrac{1}{\sqrt{2} } \left(\alpha (2)\beta (1)+\beta (2)\alpha (1)\right)=\Psi _{s} } \\ {\hat{O}\Psi _{a} =\hat{O}\left[\dfrac{1}{\sqrt{2} } \left(\alpha (1)\beta (2)-\beta (1)\alpha (2)\right)\right]=\dfrac{1}{\sqrt{2} } \left(\alpha (2)\beta (1)-\beta (2)\alpha (1)\right)=-\Psi _{a} } \end{array}\nonumber$ So $\Psi_ {s}$ is symmetric with respect to electron interchange and $\Psi_ {a}$ is antisymmetric with respect to electron interchange. Noting that $\Psi_ {1}$ and $\Psi_ {4}$ are naturally symmetric eigenfunctions of the exchange operator, it is easy to group the spin wavefunctions into triplet and singlet components according to symmetry with respect to the operator $\hat{O}$. The summary of these results is shown in the table below.

| | | Wavefunction | $S$ | $M_{S}$ |
|---|---|---|---|---|
| Triplet (symmetric) | $\Psi_{1}$ | $\alpha (1)\alpha (2)$ | 1 | +1 |
| | $\Psi_{s}$ | $\dfrac{1}{\sqrt{2} } \left(\alpha (1)\beta (2)+\beta (1)\alpha (2)\right)$ | | 0 |
| | $\Psi_{4}$ | $\beta (1)\beta (2)$ | | -1 |
| Singlet (antisymmetric) | $\Psi_{a}$ | $\dfrac{1}{\sqrt{2} } \left(\alpha (1)\beta (2)-\beta (1)\alpha (2)\right)$ | 0 | 0 |

It can be seen that there are three components of the triplet spin wavefunction and only one component to the singlet function, as implied by the names “triplet” and “singlet.” More importantly, it is clear that to generate the ground state wavefunction for the atom, one must include contributions from paired electron spin functions ($\Psi_ {s}$).
So the statement of Hund’s rule that the lowest energy state is attained by maximizing the number of electrons with the same value of $m_ {s}$ is clearly incorrect, as it excludes the necessary triplet component with ${M}_{S} = 0$. For equivalent electrons (electrons in the same subshell, or the $p^ {2}$ case), the symmetric spin wavefunction set (the triplet functions) must combine with an antisymmetric orbital function ($P$). The singlet spin function, which is antisymmetric to electron exchange, must combine with a symmetric orbital function ($D$ or $S$). As such, the three term symbols generated are ${}^{1 }D$, ${}^{3 }P$ and ${}^{1 }S$. If the electrons are not equivalent, as is the case in a pp configuration, all combinations of the triplet and singlet spin functions with D, P and S orbital functions are possible and the resulting terms are ${}^{3 }D$, ${}^{3 }P$, ${}^{3 }S$, ${}^{1 }D$, ${}^{1 }P$ and ${}^{1 }S$. The ${}^{3 }D$, ${}^{1 }P$ and ${}^{3 }S$ functions are not possible in the $p^ {2}$ case, as these would require microstates that are either duplicates of other microstates, or microstates that involve two electrons in the same orbital with the same value of $m_ {s}$. The latter is a clear violation of the Pauli Exclusion Principle since both electrons would then have the same values of $n$, $l$, $m_ {l}$ and $m_ {s}$.
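The behavior of the spin functions under the exchange operator can be checked with a small sketch (not from the text). Each product function is represented as a mapping from the ordered pair (spin carried by electron 1, spin carried by electron 2) to its coefficient; exchanging the electrons swaps the two labels.

```python
# Sketch: spin wavefunctions as {(spin of electron 1, spin of electron 2): coeff},
# with the exchange operator O-hat swapping the two electron labels.
from math import sqrt

def exchange(psi):
    """Apply the two-particle exchange operator to a spin function."""
    return {(s2, s1): c for (s1, s2), c in psi.items()}

ab = {("a", "b"): 1.0}    # alpha(1) beta(2)
ba = {("b", "a"): 1.0}    # beta(1) alpha(2)

psi_s = {("a", "b"): 1 / sqrt(2), ("b", "a"): 1 / sqrt(2)}    # symmetric combo
psi_a = {("a", "b"): 1 / sqrt(2), ("b", "a"): -1 / sqrt(2)}   # antisymmetric combo

assert exchange(ab) == ba                # alpha(1)beta(2) is not an eigenfunction
assert exchange(psi_s) == psi_s          # eigenvalue +1 (triplet component)
assert exchange(psi_a) == {k: -c for k, c in psi_a.items()}   # eigenvalue -1 (singlet)
```

The three assertions mirror the algebra above: the bare products $\Psi_2$ and $\Psi_3$ map into each other, while the normalized linear combinations are exchange eigenfunctions with eigenvalues $+1$ and $-1$.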
The complex spectra of atoms can be understood using term symbols, as they contain all of the symmetry and quantum number values needed. The selection rules for systems that are well described by Russell-Saunders coupling are $\Delta S = 0$ $\Delta L = 0,\; \pm 1$ (but not $0 \leftrightarrow 0$) $\Delta J = 0,\; \pm 1$ (but not $0 \leftrightarrow 0$) Consider a $^{2}P \leftarrow ^{2}S$ transition. An energy level diagram for such a transition is shown to the right. The selection rules predict two lines will be observed in the spectrum. The splitting between the lines will be related to the spin-orbit coupling constant in the upper state. Note that for this transition, $\Delta S = 0$ and $\Delta L = +1$. (In spectroscopy, recall that changes are always calculated as the upper state value minus the lower state value, as in $\Delta L = L' - L''$.) The two lines predicted have $\Delta J = 0$ and $+1$ as depicted in the diagram. Things get more complex for larger values of $L$ and $S$. For example, consider the transition between a $^{3 }D$ state and a $^{3 }P$ state (with the $^{3 }D$ state as the upper state and both states increasing in energy with increasing J). For this transition, six lines are predicted. The pattern formed by the lines can vary based on the relative values of the spin-orbit coupling constants in each level. In general, the upper state will have the lower spin-orbit coupling constant, as electronic excitation quenches spin-orbit coupling. Landé Interval Rule The Landé Interval Rule describes the magnitude of the splittings in a term manifold. For example, consider the predicted splitting pattern in a $^{3}P$ state: the splitting between the $^{3 }P_ {2}$ level and the $^{3 }P_ {1}$ level is twice as large as that between the $^{3 }P_ {1}$ component and the $^{3 }P_ {0}$ component. In general, the Landé Interval Rule can be stated $E _{J+1} - E _{J} = hcA(J+1)\nonumber$ where A is the spin-orbit splitting constant for the level.
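The interval rule is easy to apply in code; the sketch below is illustrative only, and the value A = 10 cm⁻¹ is an arbitrary assumption chosen for demonstration.

```python
# Sketch: applying the Lande interval rule, E_{J+1} - E_J = A(J+1) (energies in
# cm^-1, so hc is absorbed into A), to build a 3P manifold with J = 0, 1, 2.

def term_levels(j_values, A):
    """Energies relative to the lowest J, stacked up via the interval rule."""
    levels, E = {}, 0.0
    for J in sorted(j_values):
        levels[J] = E
        E += A * (J + 1)      # gap from this J up to the next one
    return levels

levels = term_levels([0, 1, 2], A=10.0)   # A = 10 cm^-1 is an arbitrary example
print(levels)                                              # {0: 0.0, 1: 10.0, 2: 30.0}
print((levels[2] - levels[1]) / (levels[1] - levels[0]))   # 2.0 — the 2:1 ratio
```

The 2:1 ratio of the two gaps reproduces the $^{3}P$ splitting pattern stated above, independent of the assumed value of A.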
The Landé Interval Rule works well for small splittings, where the spin-orbit interaction can be treated as a perturbation to the Hamiltonian. There will generally be small deviations from the interval rule, especially when relativistic effects become important. The Landé Interval Rule can be used to interpret the complex splitting patterns that can be seen in some atomic spectra. The Deslandres Table A very useful tool that can be used in spectroscopy is the Deslandres table. In such a table, transitions are arranged according to upper and lower state combinations in such a way as to accentuate the differences in energy between quantum levels. For example, consider the following energy level diagram for a $^{3 }D – ^{3 }P$ transition, where the six transitions have been labeled a-f for convenience. Looking at the diagram, it should be clear that the difference in energy between lines b and c must be identical to that between lines d and e, since both differences give the difference in energy between the $J = 2$ and $J = 1$ components of the ${}^{3 }P$ level. Similarly, the difference in energy between lines b and d must be equal to that between lines c and e, as that is the difference in energy between the $J = 2$ and $J = 1$ levels in the ${}^{3 }D$ state. A Deslandres table summarizes the information in the energy level diagram and also incorporates the values of the measured lines in the spectrum. Symbolically, the Deslandres table for the above transition would look as follows.

| | $^{3}D_{3}$ | 3A′ | $^{3}D_{2}$ | 2A′ | $^{3}D_{1}$ |
|---|---|---|---|---|---|
| $^{3}P_{2}$ | a | $\textcolor{red}{a-b}$ | b | $\textcolor{red}{b-d}$ | d |
| 2A″ | | | $\textcolor{red}{c-b}$ | | $\textcolor{red}{e-d}$ |
| $^{3}P_{1}$ | | | c | $\textcolor{red}{c-e}$ | e |
| A″ | | | | | $\textcolor{red}{f-e}$ |
| $^{3}P_{0}$ | | | | | f |

The table contains not only the line frequencies, but also the differences between them. It is the constancy of differences that confirms the assignment of the spectrum. Example $1$: Consider the following data for a $^{3 }D– ^{3 }P$ transition.
Assign the lines and calculate the spin-orbit coupling constants for both the upper and lower states based on your assignments.

| Line | Freq ($cm^{-1}$) |
|---|---|
| 1 | 18492.74 |
| 2 | 18511.98 |
| 3 | 18525.82 |
| 4 | 18540.84 |
| 5 | 18542.36 |
| 6 | 18545.06 |

The stick spectrum (simulated spectrum, with transitions indicated as sticks instead of lines with a definite line shape and without intensity data indicated) looks as follows. Solution It would be difficult to assign the spectrum simply based on the pattern seen above. In some cases, the spectral pattern can be quite complex! A couple of things can be inferred, however, based on the energy level diagram above. The smallest energy transition is for $^{3 }D _{1} – ^{3 }P_ {2}$ and the largest energy transition is either $^{3 }D _{1} – ^{3 }P_ {0}$ or $^{3 }D _{2} – ^{3 }P_ {1}$ (depending on the relative magnitudes of the spin-orbit splittings). Based on these observations, we can assign the 18492.74 $cm^{-1}$ line. If 18545.06 $cm ^{-1}$ is the $^{3 }D _{1 } – ^{3 }P_ {0}$ transition, then the difference should be 3A”. This predicts a lower level spin-orbit coupling constant of A” = 17.44 $cm ^{-1}$. And there must be a line at 18527.62 $cm ^{-1}$. But there is no such line! Hence, the highest energy transition is not the ${}^{3 }D _{1} – ^{3 }P_ {0}$ transition. It must be the $^{3 }D _{2} - ^{3 }P_ {1}$ transition instead! If the $18542.36 \; cm ^{-1}$ line is the $^{3 }D _{1} – ^{3 }P_ {0}$ transition, a value of $A” = 16.54 \; cm ^{-1}$ is predicted. This predicts a line at $18525.82 \; cm ^{-1}$, which does exist! (This is idealized theoretical data for demonstration purposes. The Landé interval rule does not always hold as strongly as that.) The difference between the $^{3 }D _{2} – ^{3 }P_ {1}$ transition and the $^{3 }D _{1} – ^{3 }P_ {1}$ transition is 19.24 $cm ^{-1}$. In order to maintain a constant set of differences, there must be a line at 18511.98 $cm ^{-1}$, which there is.
This is assigned as the $^{3}D_{2} - {}^{3}P_{2}$ transition. The only remaining line is 18540.84 $cm^{-1}$, which is assigned as the $^{3}D_{3} - {}^{3}P_{2}$ transition. The final Deslandres table looks as follows.

| | $^3D$, $J=3$ | 3A′ | $J=2$ | 2A′ | $J=1$ |
|---|---|---|---|---|---|
| $^3P$, $J=2$ | 18540.84 | $\textcolor{red}{28.86}$ | 18511.98 | $\textcolor{red}{19.24}$ | 18492.74 |
| 2A″ | | | $\textcolor{red}{33.08}$ | | $\textcolor{red}{33.08}$ |
| $J=1$ | -- | | 18545.06 | $\textcolor{red}{19.24}$ | 18525.82 |
| A″ | | | | | $\textcolor{red}{16.54}$ |
| $J=0$ | -- | -- | | | 18542.36 |

In conclusion, angular momentum coupling schemes can be used to describe the states in a polyelectronic atom. These states can be used to predict the spectroscopy of these systems. In the next chapter, we will apply a number of the principles developed in this chapter in order to understand the electronic structure of diatomic molecules. This has important ramifications for both spectroscopy and bonding in these molecules, and also forms a foundation for how we think about electronic structure in larger molecules.

8.07: Vocabulary and Concepts

- angular momentum
- aufbau principle
- Boson
- Clebsch series
- Deslandres table
- Fermion
- hole rule
- Hund’s rules
- Landé Interval Rule
- microstate
- orbital
- orbital approximation
- Pauli Exclusion principle
- Russell-Saunders coupling
- shell
- spin-orbit splitting constant
- subshell
- term symbol

8.08: Learning Objectives

After mastering the material covered in this chapter, one will be able to:

1. Describe the Orbital Approximation and explain how it leads to differences for polyelectronic atoms relative to the Hydrogen atom results.
2. Utilize the Aufbau principle to determine the ground electronic state electronic configuration for a polyelectronic atom, taking into account any important consequences of
   1. the Pauli Exclusion Principle
   2. Hund’s Rules of Maximum Multiplicity
3. Construct an orbital diagram depicting an electronic configuration, including using such a diagram to predict important properties of the ground (or any) electronic state configuration of an atom.
These properties may include
   1. Paramagnetism or diamagnetism
   2. Total spin multiplicity, or the number of total spin multiplicities associated with a given electronic configuration.
4. Use Russell-Saunders angular momentum coupling to determine the term symbols that arise for a given electronic configuration. Especially, one should be able to predict the lowest-energy term state that arises from an electronic configuration, consistent with Hund’s Rules.
5. Employ electron exchange symmetry rules to construct symmetry-adapted linear combinations of spin functions that can be used to satisfy the Pauli Exclusion Principle by creating total wavefunctions that are antisymmetric with respect to the exchange of equivalent electrons.
6. Construct energy-level diagrams for term states that are consistent with Russell-Saunders coupling and the Landé Interval Rule.
   1. Use these diagrams to predict the structure of electronic transition spectra involving these states.
   2. Organize the data into a Deslandres table to aid in the confirmation of assignments and the calculation of spin-orbit coupling constants.

8.09: Problems

1. Write a table of microstates and predict the term symbols that arise for N with an electronic configuration of [He] $2s^{2} 2p^{3}$. Which is predicted to be the ground electronic state?
2. On the planet Zorg, electrons can exist in $\zeta$ orbitals, with $l = \frac{3}{2}$ (and so $m_{l} = +\frac{3}{2},\; +\frac{1}{2},\; -\frac{1}{2},\; -\frac{3}{2}$). All other rules apply (2 electrons per orbital, Hund’s Rules, etc.)
   1. How many microstates arise from a $\zeta^{2}$ configuration?
   2. Write a table of microstates for the $\zeta^{2}$ configuration. What term symbols arise from this set of microstates?
3. Using the accepted conventions, draw an orbital diagram for the d electrons in V.
   1. What is the predicted ground state term?
   2. How many additional microstates contribute to the term?
4.
Consider a $^{3}P - {}^{3}P$ transition (in which both states increase in energy with increasing $J$).
   1. Draw an energy level diagram for the transition and predict the component transitions.
   2. Consider the following values: $A'' = 12.3 \; cm^{-1}$, $A' = 8.4 \; cm^{-1}$, and the $^{3}P_{1} - {}^{3}P_{0}$ transition occurs at $12459.3 \; cm^{-1}$. Based on these, complete a Deslandres table describing all of the component transitions and the spin-orbit spacings in the $^{3}P - {}^{3}P$ transition.
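The constancy-of-differences bookkeeping behind a Deslandres table (used in Example 1 above, and exercised again in Problem 4) is mechanical enough to automate. The short sketch below is an illustrative helper, not part of the original text: it rebuilds the six $^3D - {}^3P$ line positions of Example 1 from the Landé interval rule, using the fitted constants $A' = 9.62\;cm^{-1}$ (derived from the tabulated spacing $2A' = 19.24\;cm^{-1}$) and $A'' = 16.54\;cm^{-1}$, with the $^3D_1 - {}^3P_2$ line at 18492.74 $cm^{-1}$ as the reference.

```python
# Rebuild the six 3D - 3P line positions of Example 1 from the Lande
# interval rule, E(J) - E(J-1) = A*J within a multiplet.
A_up = 9.62      # A' for the 3D state, cm^-1 (derived: 2A' = 19.24)
A_low = 16.54    # A'' for the 3P state, cm^-1 (from Example 1)
base = 18492.74  # the 3D1 - 3P2 line, cm^-1 (lowest-frequency line)

# Term energies of the upper 3D levels relative to 3D1 (intervals 2A', 3A'):
upper = {1: 0.0, 2: 2 * A_up, 3: 2 * A_up + 3 * A_up}
# Energy *below* 3P2 of each lower level (intervals 2A'', A''), which
# adds to the transition frequency:
lower = {2: 0.0, 1: 2 * A_low, 0: 2 * A_low + A_low}

# The six allowed components (Delta J = 0, +/-1), labeled a-f in the diagram:
components = [(3, 2), (2, 2), (1, 2), (2, 1), (1, 1), (1, 0)]
lines = {(Ju, Jl): base + upper[Ju] + lower[Jl] for Ju, Jl in components}

observed = [18492.74, 18511.98, 18525.82, 18540.84, 18542.36, 18545.06]
print(sorted(round(v, 2) for v in lines.values()))  # reproduces `observed`
```

Reproducing the full observed line list from just two interval constants and one reference line is exactly the consistency check that the red difference entries in a Deslandres table provide.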
Quantum mechanics can be used to predict a large number of properties, especially those related to electronic spectroscopy, for diatomic molecules. A number of the concepts discussed in this chapter can be expanded to explain a great deal of the behavior of polyatomic molecules as well. Thumbnail: A covalent bond forming \(\ce{H2}\) where two hydrogen atoms share the two electrons. (CC BY-SA 3.0; Jacek FH via Wikipedia; modified by LibreTexts)

09: Molecules

The first task in applying quantum mechanics to a problem is writing the Hamiltonian. This requires deriving an expression for the potential energy. Consider, as an example, the simplest diatomic molecule, $H_{2}^{+}$. In the above diagram, the blue dots indicate protons and the red dot an electron. There will be attractive forces between the electron and protons 1 and 2 (separated by $r_{1}$ and $r_{2}$ respectively) and a repulsive force between the two protons, separated by a distance $r_{12}$. In atomic units, the Hamiltonian can be written

$\hat{H}=\hat{T}_{1} +\hat{T}_{2} +\hat{T}_{e} -\frac{1}{r_{1} } -\frac{1}{r_{2} } +\frac{1}{r_{12} } \nonumber$

where $\hat{T}_{1}$, $\hat{T}_{2}$ and $\hat{T}_{e}$ indicate the kinetic energies of protons 1 and 2 and the electron, respectively. As was the case for the helium atom, the $H_{2}^{+}$ molecule involves a three-body problem which cannot be solved analytically. As such, an approximation must be made in order to proceed.

9.02: The Born-Oppenheimer Approximation

The Born-Oppenheimer approximation (Born & Oppenheimer, 1927) is made in order to simplify the problem in the case of a molecule. This approximation is based on the relative masses (and therefore the relative speeds) of the heavy nuclei compared to the light electron. It says that if the nuclei move (such as during a molecular vibration), the electron(s) will respond to the change in the potential energy field instantaneously.
As such, the internuclear distance ($r_{12}$) can be fixed, and the wavefunction for the electron optimized at that geometry. If the nuclear coordinates are fixed, the Hamiltonian becomes

$\hat{H}=\hat{T}_{e} -\frac{1}{r_{1} } -\frac{1}{r_{2} } +\frac{1}{r_{12} }\nonumber$

and the value of $\frac{1}{r_{12}}$ becomes a constant. There are many cases where the Born-Oppenheimer approximation breaks down, such as Renner-Teller and Jahn-Teller interactions, which involve strong coupling between the vibrational motion of a molecule and its electronic state. For the purposes of this text, we will stick to examples where the Born-Oppenheimer approximation is reasonable.

The Born-Oppenheimer approximation makes it possible to calculate a number of properties for molecules. Below is an example of a potential energy surface of $O_{2}$ calculated using molecular modeling software at the HF/6-31G(d) level of theory. Basically, the program optimizes the wavefunctions describing the molecular orbitals based on a fixed internuclear separation. After populating the resultant orbitals with electrons, a total molecular energy is generated. After repeating this process at several different internuclear separation values, the curve can be constructed. Such calculations are based entirely on the electronic structure of the molecule. As such, some insight into the nature of molecular orbitals and their wavefunctions is needed to proceed.

9.03: Molecular Orbital Theory

There are a number of ways to describe the electronic structure in diatomic molecules and the wavefunctions that are needed for the descriptions. Molecular Orbital theory provides one such example. There are many ways to describe molecular orbitals. One of the most commonly used is the method of linear combinations of atomic orbitals (LCAO).
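Before developing the LCAO wavefunctions themselves, it is worth seeing how simple the point-by-point scan of Section 9.02 really is. The sketch below is purely schematic: since an actual HF/6-31G(d) energy requires electronic-structure software, a Morse function stands in for the quantum-chemical energy at each fixed separation, and its parameters are illustrative placeholders, not fitted $O_2$ values.

```python
import math

def electronic_energy(r, De=0.19, a=1.5, re=2.3):
    """Stand-in for an electronic-structure calculation at fixed r.
    A Morse curve (hartree, bohr) mimics the Born-Oppenheimer energy."""
    return De * (1.0 - math.exp(-a * (r - re))) ** 2 - De

# Build the potential energy curve point by point, as the software would:
# one fixed internuclear separation per energy evaluation.
curve = [(i / 10.0, electronic_energy(i / 10.0)) for i in range(15, 61)]
r_min, e_min = min(curve, key=lambda point: point[1])
print(r_min, e_min)  # the minimum falls at the chosen re with E = -De
```

The loop is the whole idea of a Born-Oppenheimer scan: the nuclei are clamped, the electronic energy is evaluated, and the nuclei are moved to the next grid point.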
Linear Combinations of Atomic Orbitals (LCAO)

Consider a wavefunction derived from the Schrödinger equation that can be expressed as a linear combination of the 1s orbitals centered on each atom. The wavefunction can then be written

$\psi(\textbf{r}_{1} ,\textbf{r}_{2} ) = c_{1} (1s_{1} ) + c_{2} (1s_{2} ) \nonumber$

In this expression, $\textbf{r}_{1}$ and $\textbf{r}_{2}$ are the coordinates (position vectors) for nuclei 1 and 2, and $1s_{1}$ and $1s_{2}$ refer to the 1s orbitals centered on nuclei 1 and 2 respectively. Due to the symmetry of the molecule, the magnitudes of $c_{1}$ and $c_{2}$ must be the same.

$c_{1} = c_{2} = c\nonumber$

In order to be normalized, the wavefunction must satisfy

\begin{align*} 1 &= c^{2} \int \left(1s_{1} +1s_{2} \right)\left(1s_{1} +1s_{2} \right)d\tau \\[4pt] &= c^{2} \int 1s_{1} 1s_{1} d\tau +2c^{2} \int 1s_{1} 1s_{2} d\tau +c^{2} \int 1s_{2} 1s_{2} d\tau \end{align*}

The first and the third integrals in this expression are unity due to the fact that the 1s orbitals are themselves normalized. Thus the expression becomes

$1 = 2c^{2} +2c^{2} \int 1s_{1} 1s_{2}\, d\tau = 2c^{2} \left(1+\int 1s_{1} 1s_{2}\, d\tau \right)\nonumber$

The integral in this expression, $\int 1s_{1} 1s_{2}\, d\tau$, does not vanish by orthogonality as we have seen in other examples, since the wavefunctions are centered at different locations. The magnitude of the integral, therefore, depends on the degree to which the two orbitals overlap one another. The overlap integral is commonly given the symbol S. The magnitude of the normalization constant for the molecular wavefunction will depend intimately on the magnitude of this overlap.
$1=2c^{2} \left(1+S\right)\nonumber$

Solving for c, the following results:

$c = \dfrac{1}{\left[2\left(1+S\right)\right]^{1/2}}\nonumber$

And the wavefunction can be written as

$\psi (\textbf{r}_{1} ,\textbf{r}_{2} )=\dfrac{1}{\left[2\left(1+S\right)\right]^{1/2} } \left(1s_{1} +1s_{2} \right)\nonumber$

The value of the overlap integral S will depend on the size of the orbitals and also on the internuclear separation. The above wavefunction is an example of a bonding orbital, as the value of the overlap S will be positive. Positive overlap is a stabilizing condition and acts to hold a molecule together. But just as a linear combination can be constructed from the sum of the 1s orbitals on the two H atoms, one can also be constructed from the difference.

$\psi (\textbf{r}_{1} ,\textbf{r}_{2} )=c\left(1s_{1} -1s_{2} \right)\nonumber$

This wavefunction will have negative overlap and thus produce an antibonding orbital which, if populated, has the effect of destabilizing the molecule.

The Expectation Value for Energy

The energies of these bonding and antibonding orbitals can be calculated from the following expression:

$\left\langle E\right\rangle = \dfrac{\int \psi ^{*} \hat{H}\, \psi \; d\tau }{\int \psi ^{*} \, \psi \; d\tau } =\dfrac{\int \left(c_{1} 1s_{1} +c_{2} 1s_{2} \right)\hat{H}\left(c_{1} 1s_{1} +c_{2} 1s_{2} \right)\; d\tau }{\int \left(c_{1} 1s_{1} +c_{2} 1s_{2} \right)\left(c_{1} 1s_{1} +c_{2} 1s_{2} \right)\; d\tau } =\dfrac{c_{1}^{2} H_{11} +2c_{1} c_{2} H_{12} +c_{2}^{2} H_{22} }{c_{1}^{2} +2c_{1} c_{2} S+c_{2}^{2} }\nonumber$

In this expression, $H_{11}$ and $H_{22}$ are the Coulomb integrals defined by

$H_{ii} =\int 1s_{i} \; \hat{H}\, 1s_{i} \, d\tau\nonumber$

It can easily be shown that $H_{11} = H_{22}$ by symmetry. The other type of integral (besides S, the overlap integral, which has already been discussed) is $H_{12}$, called the exchange integral.
$H_{ij} =\int 1s_{i} \, \hat{H}\, 1s_{j} \, d\tau\nonumber$

The energy of the wavefunction is minimized by use of the variational principle. Specifically, the coefficients $c_{1}$ and $c_{2}$ must be chosen so as to minimize the energy of the wavefunction. This is done by differentiating the energy expression and setting the derivative equal to zero (since the derivative will be zero at the minimum). For simplicity, the expression is rearranged so that implicit differentiation is easier to see.

$E\left(c_{1}^{2} +2c_{1} c_{2} S+c_{2}^{2} \right)=c_{1}^{2} H_{11} +2c_{1} c_{2} H_{12} +c_{2}^{2} H_{22}\nonumber$

Differentiation of this expression with respect to $c_{1}$ and $c_{2}$ yields two expressions which can be used to find the two unknowns, $c_{1}$ and $c_{2}$.

\begin{align*} E\left(2c_{1} +2c_{2} S\right)+\dfrac{\partial E}{\partial c_{1} } \left(c_{1}^{2} +2c_{1} c_{2} S+c_{2}^{2} \right) &= 2c_{1} H_{11} +2c_{2} H_{12} \\ E\left(2c_{2} +2c_{1} S\right)+\dfrac{\partial E}{\partial c_{2} } \left(c_{1}^{2} +2c_{1} c_{2} S+c_{2}^{2} \right) &= 2c_{2} H_{22} +2c_{1} H_{12} \end{align*}

Since $\dfrac{\partial E}{\partial c_{1} } = \dfrac{\partial E}{\partial c_{2} } = 0$ at the minimum, the second terms on the left sides of the above equations vanish. (How nice of them!)

\begin{align*} E\left(2c_{1} +2c_{2} S\right) &= 2c_{1} H_{11} +2c_{2} H_{12} \\ E\left(2c_{2} +2c_{1} S\right) &= 2c_{2} H_{22} +2c_{1} H_{12} \end{align*}

These expressions can be rearranged.

\begin{align*} c_{1} \left(E-H_{11} \right)+c_{2} \left(SE-H_{12} \right) &= 0 \\ c_{1} \left(SE-H_{12} \right)+c_{2} \left(E-H_{22} \right) &= 0 \end{align*}

So long as the Coulomb, exchange and overlap integrals can be determined, the coefficients can be as well. The non-trivial solution for $c_{1}$ and $c_{2}$ is found by setting the determinant of the matrix of coefficients, shown below, to zero.
$\left|\begin{array}{cc} {E-H_{11} } & {SE-H_{12} } \\ {SE-H_{12} } & {E-H_{22} } \end{array}\right|=0\nonumber$

It can be shown (although it will not be shown here) that

$H_{ii} = E_{1s} + J\nonumber$

where $E_{1s}$ is the energy of a 1s orbital in hydrogen and J is an expression that depends on the internuclear distance (r), given by

$J=e^{-2r} \left(1+\dfrac{1}{r} \right)\nonumber$

Similarly, $H_{ij}$ can be determined from

$H_{ij} = E_{1s} S + K\nonumber$

where $K$ is given by

$K=\dfrac{S}{r} -e^{-r} (1+r)\nonumber$

Notice that the expressions for both $J$ and $K$ vanish as $r$ approaches $\infty$. Given these substitutions, the determinant equation becomes

$\left|\begin{array}{cc} {E-E_{1s} -J} & {SE-E_{1s} S-K} \\ {SE-E_{1s} S-K} & {E-E_{1s} -J} \end{array}\right|=0\nonumber$

or

$\left(E-E_{1s} -J\right)^{2} -\left(SE-E_{1s} S-K\right)^{2} =0\nonumber$

Being quadratic in $E$, this expression yields two solutions for the energy. One will give the energy of the bonding orbital and the other will be the energy of the antibonding orbital. (Now how much would you pay?) These energies are given by the expressions

$E_{bonding} =E_{1s} +\dfrac{J+K}{1+S}\nonumber$

and

$E_{antibonding} =E_{1s} +\dfrac{J-K}{1-S}\nonumber$

The following diagrams show the wavefunctions (along the z-axis of the molecule) for both the bonding and antibonding combinations of 1s orbitals. The graph on the left shows the value of the wavefunction, while the one on the right shows the square of the wavefunction. Note the node in the middle of the molecule in the antibonding orbital! The following figures show the axial wavefunction for the

$\psi = 1s_{A} + 1s_{B}\nonumber$

bonding and the

$\psi = 1s_{A} - 1s_{B}\nonumber$

antibonding orbitals (on the left) and the corresponding squared axial wavefunctions on the right.

Bonding:

Antibonding:

These orbitals are easy to visualize and understand based on a pictorial approach to linear combinations of orbitals as well.
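The J and K expressions above make the bonding and antibonding energies easy to evaluate numerically, once the overlap integral is supplied. For two 1s orbitals separated by r (in bohr), S has the closed form $S = e^{-r}\left(1 + r + r^{2}/3\right)$, a standard result quoted here without derivation. A minimal numerical sketch in atomic units ($E_{1s} = -1/2$ hartree):

```python
import math

E1s = -0.5  # energy of a hydrogen 1s orbital, hartree

def S(r):
    # Closed-form 1s-1s overlap integral (standard result, not in the text)
    return math.exp(-r) * (1.0 + r + r * r / 3.0)

def J(r):
    # Coulomb-type integral from the text: J = exp(-2r) (1 + 1/r)
    return math.exp(-2.0 * r) * (1.0 + 1.0 / r)

def K(r):
    # Exchange-type integral from the text: K = S/r - exp(-r) (1 + r)
    return S(r) / r - math.exp(-r) * (1.0 + r)

def E_bonding(r):
    return E1s + (J(r) + K(r)) / (1.0 + S(r))

def E_antibonding(r):
    return E1s + (J(r) - K(r)) / (1.0 - S(r))

# Scan r (bohr) to locate the minimum of the bonding curve; in this simple
# LCAO treatment the minimum falls near r = 2.5 bohr, below E1s, while the
# antibonding curve lies above E1s at every separation on the grid.
grid = [i / 100.0 for i in range(100, 601)]
r_eq = min(grid, key=E_bonding)
print(r_eq, E_bonding(r_eq), E_antibonding(r_eq))
```

The scan makes the qualitative picture quantitative: the bonding combination is stabilized relative to a free H atom and supports a bound minimum, while the antibonding combination is destabilized everywhere.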
In the pictorial approach, the emphasis is on the sign of the function in the overlap region.

Bonding and Antibonding Orbitals Constructed from s Orbitals

The combination of 1s orbitals can be visualized in the following diagram. In this diagram, depicting the symmetric overlap of two 1s orbitals, it can be seen that the region of overlap will have a positive value (as it is given by the product of two positive numbers). This is an example of a $\sigma$ orbital, since it is cylindrically symmetric about the internuclear axis. Just as the symmetric combination can be depicted, the antisymmetric combination is also easy to generate. In this depiction, it should be clear that the region of overlap has a negative value. Another way to think about this is that the wavefunction must change sign as it crosses from left to right. This implies a node between the nuclei! As stated before, the positive overlap depicted in the first orbital is a stabilizing condition, and the negative overlap in the second is destabilizing. This can be depicted in an orbital diagram. In this diagram, the atomic orbitals on the separated atoms are shown on the far right and left, and the orbitals in the middle column are the molecular orbitals that arise from the linear combination of the atomic orbitals. $\sigma_{g}$ indicates the bonding orbital and $\sigma_{u}^{*}$ indicates the antibonding orbital resulting from the symmetric and antisymmetric combinations of the 1s orbitals. The subscripts g and u stand for gerade and ungerade respectively. Gerade is a German word meaning even, while ungerade means odd. Specifically, these terms (and subscripts) are used to indicate the symmetry of a function with respect to inversion. The g/u symmetry can be determined by drawing an arrow from any point in a picture of a molecular orbital through the middle of the molecule to the equivalent point on the other side. If the arrow ends in a region where the wavefunction has the opposite sign, the wavefunction is ungerade.
However, it must be noted that this symmetry applies only to homonuclear diatomic molecules (and other molecules that possess an inversion center as a symmetry element). More will be discussed about molecular symmetry in later chapters.

Bonding and Antibonding Orbitals Constructed from p Orbitals

Bonding and antibonding $\sigma$ orbitals can be constructed from p orbitals that are aligned along the internuclear axis. In the diagram below, the upper picture indicates an antibonding orbital while the lower image is a bonding orbital. In addition to $\sigma$ orbitals, $\pi$ orbitals can also be constructed. Clearly the $\pi$-bonding orbital is ungerade, while the $\pi$-antibonding orbital is gerade (if an inversion center exists within the molecule). It is also important to note that $\pi$-type overlap is smaller than $\sigma$-type overlap, due to the need to bring the two nuclei very close together to achieve strong overlap of the p orbitals in a $\pi$ orientation. As such, the $\pi$ orbitals are less stabilizing or destabilizing relative to the atomic orbital energies. The $\sigma$ bonding and antibonding orbitals will be formed by the symmetric and antisymmetric combinations of the $p_{z}$ orbitals on the separated atoms, whereas the $\pi$ orbitals will be formed from the $p_{x}$ and $p_{y}$ orbitals on the separated atoms.

Electronic Configurations

Electronic configurations can be written for molecules just as they can be for atoms. Instead of being numbered by the principal quantum number, however, molecular orbitals are numbered sequentially from the lowest energy orbital of a given symmetry. Consider the following list of electronic configurations for homonuclear diatomic molecules formed using the first ten elements.
| Molecule | Electronic Configuration | Bond Order | Electronic State |
|---|---|---|---|
| $H_2$ | $(1\sigma_g)^2$ | 1 | $^{1}\Sigma_g^{+}$ |
| $He_2$ | $(1\sigma_g)^2 (1\sigma_u^*)^2$ | 0 | unbound |
| $Li_2$ | $KK (2\sigma_g)^2$ | 1 | $^{1}\Sigma_g^{+}$ |
| $Be_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2$ | 0 | unbound |
| $B_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2$ | 1 | $^{1}\Sigma_g^{+}$ |
| $C_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2 (1\pi_u)^2$ | 2 | $^{3}\Sigma_g^{-}$ |
| $N_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2 (1\pi_u)^4$ | 3 | $^{1}\Sigma_g^{+}$ |
| $O_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2 (1\pi_u)^4 (1\pi_g^*)^2$ | 2 | $^{3}\Sigma_g^{-}$ |
| $F_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2 (1\pi_u)^4 (1\pi_g^*)^4$ | 1 | $^{1}\Sigma_g^{+}$ |
| $Ne_2$ | $KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2 (1\pi_u)^4 (1\pi_g^*)^4 (3\sigma_u^*)^2$ | 0 | unbound |

In this table, the older shell notation is used to indicate a filled inner shell of electrons, $(1\sigma_g)^2 (1\sigma_u^*)^2$, which is given the symbol KK.

Bond Order

The bond order of a molecule is determined by adding the number of electrons in bonding orbitals, subtracting the number of electrons in antibonding orbitals, and dividing the result by 2 (since there are two electrons per orbital).

$\text{Bond Order } =\dfrac{\#\, bonding-\#\, antibonding}{2}$

The larger the bond order, the stronger a chemical bond is predicted to be. Also, since strong bonds are short bonds, the larger the bond order, the shorter a bond is predicted to be. Ionization of a molecule may have a profound effect on the bond order, and therefore the bond length. Consider the molecule $C_{2}$, which has an electronic configuration given by

$C_{2}: KK (2\sigma_g)^2 (2\sigma_u^*)^2 (3\sigma_g)^2 (1\pi_u)^{2}$

The addition of an electron to form $C_{2}^{-}$ will require the electron to go into the $1\pi_u$ bonding subshell.
This will have the effect of strengthening the bond (since it increases the bond order). Removal of an electron to form $C_{2}^{+}$ would weaken the bond, since it involves the removal of a bonding electron.

Paramagnetism

While the bond order of oxygen ($O_{2}$) is correctly predicted by a Lewis structure, the Lewis structure fails to predict that the molecule will be paramagnetic. Paramagnetism is a property of a molecule or atom that occurs when the system has unpaired electrons. These electrons each have a small magnetic moment which can align with an external magnetic field, lowering the energy of the atom or molecule. As such, the atom or molecule will be attracted to a magnetic field. From the electronic configuration of oxygen,

$\mathrm{O}_2: \left(1 \sigma_{\mathrm{g}}\right)^2\left(1 \sigma_{\mathrm{u}}^{*}\right)^2\left(2 \sigma_{\mathrm{g}}\right)^2\left(2 \sigma_{\mathrm{u}}^{*}\right)^2\left(3 \sigma_{\mathrm{g}}\right)^2\left(1 \pi_{\mathrm{u}}\right)^4\left(1 \pi_{\mathrm{g}}^{*}\right)^2 = \mathrm{KK}\left(2 \sigma_{\mathrm{g}}\right)^2\left(2 \sigma_{\mathrm{u}}^{*}\right)^2\left(3 \sigma_{\mathrm{g}}\right)^2\left(1 \pi_{\mathrm{u}}\right)^4\left(1 \pi_{\mathrm{g}}^{*}\right)^2\nonumber$

it is clear that there are two unpaired electrons. This is a property that cannot be predicted based on the Lewis structure!
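The electron counting of this section (bond order and unpaired electrons) can be collected into a short sketch. The representation and helper names below are ours, not from the text: each subshell is entered as (number of electrons, antibonding flag, capacity), transcribed from the configuration table above with the KK core omitted.

```python
# Each subshell is (n_electrons, is_antibonding, capacity); KK core omitted.
O2 = [(2, False, 2),  # 2 sigma_g
      (2, True, 2),   # 2 sigma_u*
      (2, False, 2),  # 3 sigma_g
      (4, False, 4),  # 1 pi_u
      (2, True, 4)]   # 1 pi_g*
N2 = [(2, False, 2), (2, True, 2), (2, False, 2), (4, False, 4)]

def bond_order(config):
    # (# bonding electrons - # antibonding electrons) / 2
    bonding = sum(n for n, anti, cap in config if not anti)
    antibonding = sum(n for n, anti, cap in config if anti)
    return (bonding - antibonding) / 2

def unpaired_electrons(config):
    # Hund's rule within each subshell: spread electrons over the
    # degenerate orbitals (cap // 2 of them) before pairing any.
    total = 0
    for n, anti, cap in config:
        orbitals = cap // 2
        total += n if n <= orbitals else cap - n
    return total

print(bond_order(O2), unpaired_electrons(O2))  # 2.0 2 -> paramagnetic
print(bond_order(N2), unpaired_electrons(N2))  # 3.0 0 -> diamagnetic
```

The unpaired-electron count is exactly the property the Lewis structure misses for $O_2$: the two $\pi_g^*$ electrons occupy separate degenerate orbitals.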
There are clearly sources of angular momentum in a molecule due to orbital and spin considerations. But unlike atoms, molecules can also have angular momentum contributions from molecular rotation. There are many ways to describe the coupling of these different types of angular momentum. This text will focus on two specific cases, Hund’s coupling cases (a) and (b).

Hund’s case (a)

In Hund’s case (a) coupling, the orbital and spin angular momenta are strongly coupled to the internuclear axis of the molecule. This defines the quantum numbers $\Lambda$ and $\Sigma$, which are the projections of L and S onto the internuclear axis. The sum of $\Lambda$ and $\Sigma$ gives the total electronic angular momentum along the internuclear axis, $\Omega$.

$\Lambda + \Sigma = \Omega\nonumber$

$\Omega$ is then coupled to the end-over-end rotational angular momentum of the molecule ($R$) to give $J$, the total angular momentum.

$J = \Omega + R\nonumber$

For a molecule that is well described by Hund’s case (a) coupling and that is in a $^{1}\Pi$ electronic state, the lowest value of J possible is $J = 1$. The one unit of angular momentum comes from the orbital part of the wavefunction, so $J = 1$ actually describes a non-rotating molecule $(R = 0)$! Hund’s case (a) does a good job of describing molecules which exhibit moderate spin-orbit coupling. If the coupling is extremely strong, another case (case (c), for example) is needed to describe the molecule’s properties.

Hund’s case (b)

Hund’s case (b) is slightly different from case (a) in that the spin angular momentum is uncoupled from the internuclear axis. As such, in Hund’s case (b) coupling, the quantum numbers $\Sigma$ and $\Omega$ are undefined. In this case, the end-over-end rotation (R) of the molecule couples with $\Lambda$ to produce N, which describes the sum of the rotational plus orbital angular momentum.

$N = \Lambda + R\nonumber$

N can then couple with $S$ to give $J$, the total angular momentum.
$J = N + S\nonumber$

Singlet states, with $S = 0$, are always well described by Hund’s case (b) coupling. Hund’s case (b) is a good description for molecules where spin-orbit coupling is weak (or immeasurably small). In the section describing the rotation of molecules as rigid rotators, the quantum number $J$ was used to describe the total angular momentum due to rotation. This is consistent with both Hund’s cases (a) and (b) for molecules in $^{1}\Sigma$ states, where $\Lambda = 0$ and $S = 0$ (implying, where appropriate, that $\Sigma = 0$ as well).

9.05: Diatomic Term Symbols

A term symbol for a diatomic molecule contains a great deal of information about the symmetry properties of the wavefunction which describes the electronic state. The symmetry properties are closely related to the values of the quantum numbers which specify the wavefunction. The pattern used to assign a symbol to a value of a quantum number is very similar to the pattern used for atomic systems. The major difference is that the quantum numbers must reflect the cylindrical symmetry of diatomic molecules rather than the spherical symmetry of atoms.

| Quantum Number | One Electron: Atom ($l$) | One Electron: Molecule ($\lambda$) | Many Electrons: Atom ($L$) | Many Electrons: Molecule ($\Lambda$) |
|---|---|---|---|---|
| 0 | s | $\sigma$ | S | $\Sigma$ |
| 1 | p | $\pi$ | P | $\Pi$ |
| 2 | d | $\delta$ | D | $\Delta$ |
| 3 | f | $\phi$ | F | $\Phi$ |

Just as there is a ($2l+1$) degeneracy in the spherical wavefunctions, there is also an important degeneracy pattern in the wavefunctions of diatomic molecules. $\Sigma$ and $\sigma$ states are singly degenerate, whereas all others are doubly degenerate. Why this is should become apparent as we develop the united atom method for decomposing spherical symmetry to cylindrical symmetry.

| $\lambda$ or $\Lambda$ | Wavefunction Symmetry | Degeneracy |
|---|---|---|
| 0 | $\sigma$, $\Sigma$ | 1 |
| 1 | $\pi$, $\Pi$ | 2 |
| 2 | $\delta$, $\Delta$ | 2 |
| 3 | $\phi$, $\Phi$ | 2 |

There are three methods commonly used to derive term symbols for diatomic molecules.
All of the methods are based on determining the quantum number $\Lambda$ and the total spin quantum number. In the case of homonuclear diatomic molecules, the inversion symmetry is also important. $\Sigma$ states have another important symmetry designation: $\Sigma$ states can have either + or − symmetry, depending on whether or not the state is symmetric with respect to reflection through a plane containing the internuclear axis. Symmetric states are designated as $\Sigma^{+}$ states and antisymmetric ones as $\Sigma^{-}$. $\Pi$, $\Delta$ and all other states with $\Lambda \neq 0$ are doubly degenerate, as they have both + and − components. There is always an odd number of $\Sigma$ states generated by the United Atom method or the Separated Atom method. They will come in pairs of $\Sigma^{+}$, $\Sigma^{-}$, and the odd remaining state will have +/− symmetry as determined by the Wigner-Witmer rule. For this, one must consider the associated atomic state (using either the United Atom or the Separated Atom method). The +/− symmetry is determined by whether the indicated sum is even or odd, according to the following table.

| Method | Sum | Value | Parity |
|---|---|---|---|
| United Atom | $L + \sum l_{i}$ | even | + |
| | | odd | − |
| Separated Atom | $L_{A} + \sum l_{A} + L_{B} + \sum l_{B}$ | even | + |
| | | odd | − |

United Atom Method

Think of the molecule as an atom with the same number of electrons. The atom will have spherical symmetry. The task is to reduce the spherical symmetry of the atomic wavefunction to the cylindrical symmetry of the diatomic molecule. In this case, the z-axis of the united atom becomes the internuclear axis of the molecule. Thus, the quantum numbers transform as

\begin{aligned} M_{L} &\rightarrow \Lambda \\ S &\rightarrow S \end{aligned}\nonumber

(the spin is unaffected by the reduction in spatial symmetry).

Example $1$

What molecular terms are predicted for the OH radical?

Solution

The united atom with the same number of electrons as OH is fluorine. The ground state designation for atomic fluorine is $^{2}P$.
For this state, $L = 1$, and so $m_{L}$ can be −1, 0 or +1. The distinct values of $|m_{L}|$ are therefore 0 and 1, so the predicted terms will be $\Sigma$ and $\Pi$. The multiplicity will be the same as that of the united atom ($S = \frac{1}{2}$). The $\Sigma$ state will be symmetric with respect to reflection through a plane containing the z-axis, since

$L + \sum l_{i}\nonumber$

is even for fluorine. So the expected terms are

$^{2} \Sigma ^{+} \text{ and } ^{2} \Pi. \nonumber$

As it turns out, the ground state of OH is $^{2}\Pi$. The only way to confirm the ground state, however, is to use the molecular orbital method.

Separated Atom Method

A second method for determining molecular term symmetries is the separated atom method. This method is similar to the atomic term symbol method of writing out an exhaustive list of microstates and then accounting for each one. The quantum numbers which are important are determined from the sums of the z-component quantum numbers of the atomic wavefunctions. Thus, the values of $\Lambda$ which are possible will be given by all possible combinations of $m_{L}$. Values of the same magnitude are then paired to make the two degenerate components for any value of $\Lambda > 0$.

Example $2$

What molecular terms arise for HLi, formed from a ground state hydrogen atom and a ground state lithium atom?

Solution

The ground state of lithium is $^{2}S$. For this pair of atoms, we can construct the following table to combine values of $m_{L}$ to form values of $\Lambda$, and values of S as well.

| | H ($^2S$) | Li ($^2S$) | $\Lambda$ and S |
|---|---|---|---|
| $M_L$ | 0 | 0 | 0 |
| S | $\frac{1}{2}$ | $\frac{1}{2}$ | 1, 0 |

It is clear that the only value of $\Lambda$ that can be generated from these separated atom states is $\Lambda = 0$, or a $\Sigma$ state. The sum of $L_{A} + L_{B} + \Sigma l_{A} + \Sigma l_{B}$ is given by $0 + 0 + 0 + 0 = 0$, which is even. Hence, the $\Sigma$ state has $\Sigma^{+}$ symmetry. So the resulting states are $^{1}\Sigma^{+}$ and $^{3}\Sigma^{+}$ (with no g/u label, since the heteronuclear molecule lacks an inversion center).
The ground state of HLi is $^{1}\Sigma^{+}$, but this can only be confirmed by the use of the molecular orbital method.

Example $3$

What molecular terms are predicted for the OH radical?

Solution

The ground state atomic term for O is $^{3}P$ and that for H is $^{2}S$. The following table shows the possible combinations of $m_{L}$ to form $\Lambda$, and the combinations of S which form the familiar Clebsch series of resultant S values.

| | H ($^2S$) | O ($^3P$) | $\Lambda$ and S |
|---|---|---|---|
| $M_L$ | 0 | +1, 0, −1 | +1, 0, −1 |
| S | $\frac{1}{2}$ | 1 | $\frac{3}{2}$, $\frac{1}{2}$ |

The combination of a P term and an S term gives one $\Pi$ ($\Lambda = \pm 1$) and one $\Sigma$ ($\Lambda = 0$) term. The sum $L_{A} + L_{B} + \Sigma l_{A} + \Sigma l_{B}$ is given by $1 + 0 + 4 + 0 = 5$ and is clearly odd. Therefore, the $\Sigma$ state will be of $\Sigma^{-}$ symmetry. The spin quantum numbers which are possible are $\frac{3}{2}$ and $\frac{1}{2}$. Therefore, the possible term symbols are $^{4}\Pi$, $^{4}\Sigma^{-}$, $^{2}\Pi$ and $^{2}\Sigma^{-}$. (The ground state of the OH radical happens to be of $^{2}\Pi$ symmetry, but again, this can only be confirmed using a molecular orbital approach.) Notice that there is no g/u symmetry indicated in this case, because the molecule, being a heteronuclear diatomic molecule, does not include an inversion center!

Example $4$

What molecular terms arise for CO formed from a ground state carbon atom and a ground state oxygen atom?

Solution

The ground state of both C and O is $^{3}P$. The following table summarizes the decomposition of the two atomic states from spherical to cylindrical symmetry.

| | C ($^3P$) | O ($^3P$) | $\Lambda$ and S |
|---|---|---|---|
| $M_L$ | +1, 0, −1 | +1, 0, −1 | ±2, ±1, ±1, 0, 0, 0 |
| S | 1 | 1 | 2, 1, 0 |

The resultant states are one $\Delta$, two $\Pi$ and three $\Sigma$. Of the three $\Sigma$ states, two will form a $\Sigma^{+}/\Sigma^{-}$ pair. The last $\Sigma$ state must have its +/− symmetry determined by the Wigner-Witmer rule.
$L _{C} + L _{O} + \Sigma l_{C} + \Sigma l_{O}= 1 + 1 + 2 + 4 = 8 \; \text{(even)}\nonumber$ So the final $\Sigma$ state is $\Sigma ^{+}$. The spin states generated are quintet, triplet and singlet. So the set of molecular states generated are

$^{5} \Delta$, $^{5} \Pi$, $^{5} \Pi$, $^{5} \Sigma ^{+}$, $^{5} \Sigma ^{-}$, $^{5} \Sigma ^{+}$
$^{3} \Delta$, $^{3} \Pi$, $^{3} \Pi$, $^{3} \Sigma ^{+}$, $^{3} \Sigma ^{-}$, $^{3} \Sigma ^{+}$
$^{1} \Delta$, $^{1} \Pi$, $^{1} \Pi$, $^{1} \Sigma ^{+}$, $^{1} \Sigma ^{-}$, $^{1} \Sigma ^{+}$

The ground state of CO is in fact $^{1} \Sigma ^{+}$, but as always, this can only be reliably predicted using the molecular orbital method. The number of states generated from separated atoms increases rapidly as the angular momentum in the separated atoms increases.

Molecular Orbital Method

The molecular orbital method requires the construction of a molecular orbital diagram. As was the case in the atomic term symbol problem, the molecular terms can be constructed considering only partially filled subshells. The quantum numbers will then be given by the vector sums of the one-electron quantum numbers. Consider the orbital diagram for the oxygen molecule. The only important electrons in this case are the two $\pi _{g} ^{*}$ electrons. (Ignore all of the electrons in completely filled subshells, just as was done in the case of atoms, as these always contribute $\Lambda = 0$ and $S = 0$.) The orbital angular momentum $\lambda$ of one of the $\pi _{g} ^{*}$ electrons will cancel that of the other, as one will have a value of $\lambda = -1$ and the other has $\lambda = +1$. (This is similar to the atomic case where one electron was in an orbital with $m_ {l} = -1$ and the other in an orbital with $m_ {l} = +1$. The sum of the two is zero.) Thus, $\Lambda$ will be 0. Hence the predicted term will be a $\Sigma$ state.
Since one of the $\pi _{g} ^{*}$ orbitals is symmetric with respect to reflection through a plane containing the nuclei and the other is antisymmetric, the predicted term will be antisymmetric with respect to this symmetry operation. $(sym) \times (antisym) = antisym\nonumber$ Thus, the state will be of $\Sigma ^{-}$ symmetry. In a similar manner, the gerade/ungerade symmetry can be determined by the product of the one-electron orbital symmetries. $(g) \times (g) = g\nonumber$ Finally, the spin multiplicity can be determined in the usual way. \begin{aligned} S &= s_ {1} + s_ {2} , s_ {1} + s_ {2} - 1, \ldots, \left|s_ {1} - s_ {2}\right| \\ S &= 1 \; \text{and } \; 0 \end{aligned} \nonumber The predicted terms for this electronic configuration are $^{3} \Sigma _{g} ^{-}$ and $^{1} \Sigma _{g} ^{-}$. The ground state of $O _{2}$ is $^3 \Sigma _{g} ^{-}$. And since this result was generated using the molecular orbital method, it can be relied upon: this is indeed the ground state of the $O_{2}$ molecule!
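The angular-momentum and symmetry bookkeeping just described can be sketched in Python (a toy illustration of the simplified treatment above; the dictionaries merely encode the multiplication rules used in the text):

```python
# Toy sketch of the molecular orbital method bookkeeping for O2's two
# pi_g* electrons, following the simplified treatment in the text.
def spin_series(s1, s2):
    """Clebsch-Gordan series S = s1+s2, s1+s2-1, ..., |s1-s2|."""
    out, S = [], s1 + s2
    while S >= abs(s1 - s2) - 1e-9:
        out.append(S)
        S -= 1
    return out

Lambda = (+1) + (-1)        # the lambda = +1 and -1 electrons cancel -> Sigma
refl = {("sym", "sym"): "+", ("antisym", "antisym"): "+",
        ("sym", "antisym"): "-", ("antisym", "sym"): "-"}[("sym", "antisym")]
parity = {("g", "g"): "g", ("u", "u"): "g",
          ("g", "u"): "u", ("u", "g"): "u"}[("g", "g")]
terms = ["^%d Sigma%s_%s" % (int(2 * S + 1), refl, parity)
         for S in spin_series(0.5, 0.5)]
print(terms)  # ['^3 Sigma-_g', '^1 Sigma-_g']
```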
One of the important reasons for describing the electronic structures and angular momentum coupling in diatomic molecules is to apply these descriptions to the prediction of the rotational branch structure in molecular spectra. As always, the first concern when predicting patterns in molecular spectra is the determination of selection rules. The selection rules for which the transition moment does not vanish are summarized below. $\Delta S = 0\nonumber$ $\Delta \Lambda = 0, \pm 1 \nonumber$ $+ \leftrightarrow -, \; - \leftrightarrow +\nonumber$ Based on these selection rules, Herzberg diagrams can be used to predict the rotational branch structure and “first lines” in each branch based on the symmetries of upper and lower states in a given transition. In order to discuss this very useful tool, we shall begin with the description of a single state, starting with simple symmetry ($^1 \Sigma ^+$ ). In order to proceed, it is important to note the +/- symmetry of rotational wavefunctions. Basically, the rotational wavefunction is symmetric with respect to reflection through a plane containing the internuclear axis if R is even, and antisymmetric if R is odd. Thus the symmetry of the total wavefunction, given by $\Psi _{tot} = \psi _{elec} \psi _{vib} \psi _{rot}\nonumber$ is given by the product of the symmetries of $\psi _{elec}$, $\psi_ {vib}$ and $\psi_ {rot}$. In the case of a $^1 \Sigma ^+$ state, $\psi_ {elec}$ is +. $\psi_ {vib}$ is always + for vibration of a diatomic molecule. The rotational contribution ($\psi _{rot}$) will alternate for increasing R or J. (In the case of a $^1 \Sigma ^+$ state, $R$ and $J$ have the same value, since $\Lambda = 0$ and $S = 0$.) The above Herzberg diagram summarizes the +/- symmetry for the first few rotational levels. Based on this diagram, and the selection rule that + $\leftrightarrow$ - and $- \leftrightarrow +$, the branch structure for a $^1 \Sigma ^+ \leftrightarrow ^1 \Sigma ^+$ transition can be predicted.
Clearly, R- and P-branches are predicted in the rotational structure. This is the proper Herzberg diagram for the description of the 1-0 rotation-vibration spectrum of HCl (or other closed shell heteronuclear diatomic molecules). Notice that $\Delta J = 0$ (Q-branch) transitions are impossible since the parity (+/- symmetry) does not change in such transitions, and hence they are forbidden. The Herzberg diagram description of a $^{1}\Sigma ^{-}$ state is not too different from that for a $^1 \Sigma ^+$ state. The only difference is that the +/- symmetry changes such that levels with odd J are now + and those with even J are now -. The description of a $^{1} \Pi$ state can be based on modifications to the descriptions of $^1 \Sigma ^+$ and $^{1}\Sigma ^{-}$ states. Two important differences must be taken into account. First, J is given by the sum of $\Lambda$ and R (or $\Omega$ and R in Hund’s case (a), though this distinction is only important if $S \neq 0$, which is not the case for a singlet state), so the lowest rotational level of a $^{1}\Pi$ state has $J = 1$. Second, since $\Pi$ states (like $\Delta$, $\Phi$, etc.) have two components, both must be included in the diagram. The description of a $^{1} \Pi – ^1 \Sigma ^+$ transition can now be constructed. Note that P-, Q- and R-branches are predicted. Also notice the “first line” in each branch. If the $\Pi$ state is the upper state, the first lines in each branch are $P(2)$, $Q(1)$ and $R(0)$. (There can be no $P(1)$ line as the $J = 0$ level is missing in the upper state.) This pattern is one way to recognize a $^{1} \Pi – ^1 \Sigma ^+$ transition. A reversal of states, such that the $^1 \Sigma ^+$ state is the upper state, causes the pattern to change. In the case of a $^1 \Sigma ^+ - ^{1} \Pi$ transition, the first lines in each branch are predicted to be $P(1)$, $Q(1)$ and $R(1)$. A $^{1} \Pi – ^{1} \Pi$ transition becomes a little more complex as well. In this case, it can be seen that there are two Q-branches predicted!
These will be resolved only if the two $\Lambda$ components of at least one of the $\Pi$ states are significantly different in energy. The first lines are predicted to be $P(2)$, $Q_ {1}(1)$, $Q_ {2} (1)$ and $R(1)$. While the description here has been limited to singlet states of $\Sigma$ and $\Pi$ symmetry, these tools can be extended to describe and predict a great deal of rotational fine structure patterns in spectroscopic transitions (Herzberg, 1950). The patterns can get extremely complex for systems with high spin or orbital angular momenta. The picture can become even more complex when nuclear spin exists in the molecule, which can couple to orbital, spin and/or rotational angular momenta. Entire books are dedicated to sorting out these patterns and interpreting the spectra of molecules which require these considerations (Brink, 1994) (Bunker, 2009).
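The “first line” patterns discussed above follow directly from the lowest J level present in each state; a small sketch (illustrative only, not from the text; it ignores the parity selection rule that removes Q-branches in $\Sigma$–$\Sigma$ bands):

```python
# Illustrative sketch: predict the first line in each branch from the lowest
# J present in the upper and lower states (parity selection rules ignored).
def first_lines(J_min_upper, J_min_lower):
    """First rotational line per branch, labeled by lower-state J.
    R: J' = J'' + 1,  Q: J' = J'',  P: J' = J'' - 1."""
    return {
        "R": max(J_min_lower, J_min_upper - 1),
        "Q": max(J_min_lower, J_min_upper),
        "P": max(J_min_lower, J_min_upper + 1),
    }

# 1Pi upper (J >= 1), 1Sigma+ lower (J >= 0): R(0), Q(1), P(2)
print(first_lines(1, 0))  # {'R': 0, 'Q': 1, 'P': 2}
# 1Sigma+ upper (J >= 0), 1Pi lower (J >= 1): R(1), Q(1), P(1)
print(first_lines(0, 1))  # {'R': 1, 'Q': 1, 'P': 1}
```

The $^1\Pi$–$^1\Pi$ case, `first_lines(1, 1)`, gives $R(1)$, $Q(1)$ and $P(2)$, matching the pattern above.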
Just as rotational motion is important in understanding vibrational spectra, vibrational (as well as rotational) motions are important in understanding electronic transitions in molecules. Electronic transitions in which vibrational structure is resolved are sometimes referred to as vibronic transitions. When rotation is thrown into the mix, the term “rovibronic transitions” is sometimes used. Vibronic transitions can be discussed in terms of the transition moment. Keeping in mind that the wavefunction for a vibronic state can be expressed as a product $\Psi _{tot} = \psi _{elec} \psi _{vib}\nonumber$ and that the transition moment is given by $\int \Psi _{tot}^{'*} \vec{\mu }\, \Psi _{tot}^{''} d\tau\nonumber$ Substitution yields $\int \left(\psi _{elec}^{'} \psi _{vib}^{'} \right)^{*} \vec{\mu }\, \left(\psi _{elec}^{''} \psi _{vib}^{''} \right) d\tau\nonumber$ Since the dipole moment operator can be written as a sum of electronic and nuclear contributions, the integral separates into two terms $\int \psi _{elec}^{'*} \psi _{elec}^{''} d\tau \int \psi _{vib}^{'*} \vec{\mu }\psi _{vib}^{''} d\tau +\int \psi _{elec}^{'*} \vec{\mu }\psi _{elec}^{''} d\tau \int \psi _{vib}^{'*} \psi _{vib}^{''} d\tau\nonumber$ Since the electronic wavefunctions of different states must be orthogonal, the first term will vanish for transitions between two different electronic states. The second term, however, does not vanish. In fact, the magnitude of $\int \psi _{vib}^{'*} \psi _{vib}^{''} d\tau$ will be determined by the overlap of the two vibrational levels. (Note that since these represent vibrational wavefunctions in different electronic states, there is no reason for the wavefunctions to be orthogonal.)

Franck-Condon Factors

The intensity of a band in a vibronic transition will be governed by the magnitude of the Franck-Condon factor for the band.
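This vibrational overlap integral can be evaluated numerically; the following is a minimal sketch under simplifying assumptions (both electronic states harmonic with equal frequency, dimensionless displacement $d$ between their minima, $v' = v'' = 0$ only):

```python
# Numerical sketch of the vibrational overlap integral between the v=0
# levels of two displaced, equal-frequency harmonic oscillators
# (dimensionless units; an illustration, not the general case).
import math

def psi0(x):
    """Ground-state harmonic oscillator wavefunction, dimensionless units."""
    return math.pi ** -0.25 * math.exp(-x * x / 2.0)

def fcf_00(d, xmax=12.0, n=4000):
    """[integral psi0'(x) psi0''(x - d) dx]^2 by a simple Riemann sum."""
    dx = 2.0 * xmax / n
    overlap = sum(psi0(-xmax + i * dx) * psi0(-xmax + i * dx - d)
                  for i in range(n)) * dx
    return overlap ** 2

# For this model the analytic answer is exp(-d^2 / 2): identical curves
# (d = 0) give unit overlap, and the factor decays as the minima separate.
print(round(fcf_00(0.0), 4), round(fcf_00(1.5), 4))
```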
The Franck-Condon factor (FCF) is defined by $FCF = \left[\int \psi _{vib}^{'} \psi _{vib}^{"} d\tau \right]^{2}\nonumber$ which is governed purely by the degree of overlap between the upper state vibrational wavefunction and that in the lower state. The overlap will be large for $\Delta v = 0$ if the potential energy functions of the upper and lower states are similar (similar $\omega _{e}$, $\omega _{e} x _{e}$, $r _{e}$, etc.) and strong sequences will be observed in the spectrum. If, however, the equilibrium bond length changes significantly, the maximum Franck-Condon overlap will occur for combinations of v’ and v” for which $\Delta v \neq 0$. In these cases, strong progressions will be observed. The Franck-Condon principle is closely associated with the Born-Oppenheimer approximation. In cases where the Born-Oppenheimer approximation breaks down, the Franck-Condon principle is compromised as well.

9.08: Term Symbols for Polyatomic Molecules

Term symbols are used to designate electronic states of polyatomic molecules, much the same as they are used to designate electronic states for both atomic systems and diatomic molecules. These can be derived in much the same manner as we have developed for diatomic molecules, by taking combinations of atomic orbitals, whose symmetries have been decomposed from the spherical symmetry of the atoms to the lowered symmetry of the molecule. An example would be \(H_{3}^{+}\), which is the most common triatomic ion in the universe. (It is also an excellent example of a three-center two-electron bond insofar as it is the simplest example of a molecule possessing such a bond!) The combination of three 1s orbitals on the three atoms will yield three molecular orbitals. The decomposition of symmetry is described in the following section.
One of the more powerfully predictive things we can do with Group Theory is predict the symmetries of molecular orbitals. Molecular orbital symmetries can have huge ramifications on chemical bonding and chemical reactions. The first thing we would like to be able to do is to predict the symmetries of the molecular orbitals that arise from the linear combinations of atomic orbitals. This is not too difficult. In fact, the process has many aspects in common with determining molecular vibration symmetries. The process can be summarized as follows:

1. Separate the molecule into groups of equivalent atoms.
2. For each set of equivalent atoms, determine the reducible representation that describes the atomic orbitals to be used in the construction of molecular orbitals. This is determined by assuming that the point group is centered on an atom containing the orbitals. Call this $\Gamma_ {ao}$.
3. Determine $\Gamma_ {unmoved}$ for the set of equivalent atoms.
4. Multiply $\Gamma_ {ao} \otimes \Gamma_ {unmoved}$ to determine $\Gamma_ {reducible}$ for each set of equivalent atoms.
5. Add all of the $\Gamma_ {reducible}$ that you have determined for each individual set of equivalent atoms. Call the result $\Gamma_ {MO}$.
6. $\Gamma_ {MO}$ can then be resolved into components. These components give the symmetries of the molecular orbitals that result from the linear combinations of the atomic orbitals you have selected.

Example $1$

The Molecular Orbitals for a Water Molecule.

Solution

For this example, we shall consider the 1s orbitals on the H atoms, and the 2s and 2p orbitals on O. As it turns out, s orbitals are always totally symmetric in any point group, since they possess spherical symmetry. The p orbitals will transform as the x, y and z axes. So the following set of tables is used to generate $\Gamma_ {MO}$ for water. First, determine $\Gamma_ {H}$ describing the H atoms.
$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_{H(1s)}$     1     1         1               1
$\Gamma_{unm}$       2     0         0               2
$\Gamma_H$           2     0         0               2

Next, determine $\Gamma_ {O}$ describing the four orbitals on the O atom.

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_{O(2s)}$     1     1         1               1
$\Gamma_{O(2p)}$     3     -1        1               1
$\Gamma_{red}$       4     0         2               2
$\Gamma_{unm}$       1     1         1               1
$\Gamma_{O}$         4     0         2               2

Next, determine $\Gamma_ {MO}$ as the sum $\Gamma_ {H} + \Gamma_ {O}$.

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_H$           2     0         0               2
$\Gamma_{O}$         4     0         2               2
$\Gamma_{MO}$        6     0         2               4

Now, decompose $\Gamma_{MO}$ under $C_{2v}$ symmetry by successively subtracting irreducible representations:

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_{MO}$        6     0         2               4
$- 3 A_1$            3     3         3               3
remainder            3     -3        -1              1
$- B_1$              1     -1        1               -1
remainder            2     -2        -2              2
$- 2 B_2$            2     -2        -2              2
remainder            0     0        0               0

So $\Gamma_ {MO} = 3 A _{1} + B _{1} + 2 B _{2}\nonumber$ The molecular orbitals of water are shown below. The $1a _{1}$ orbital was not generated in this example because it is essentially the 1s orbital on oxygen, which was not included in the basis set of functions we originally used. Also missing from our set are the $2b _{2}$ and $3b _{2}$ orbitals, which require the addition of $3p _{x}$ and $3d _{xz}$ orbitals on oxygen, which were not included. These orbitals are “virtual orbitals” as they are unoccupied. The electronic configuration of $H _{2} O$ is given by $(1a _{1} )^2 (2a _{1} )^2 (1b _{2} )^2 (3a _{1} )^2 (1b _{1} )^2\nonumber$ The overall symmetry of the electronic state is given by the product of these symmetries, counting each one twice since each orbital contains two electrons. In fact, all closed shell molecules (all subshells filled) will have an electronic symmetry that is totally symmetric. In this case, the electronic state is $^{1} A _{1}$. Since the lowest unoccupied molecular orbital is the $4a _{1}$ (of $A_{1}$ symmetry), the first excited state of the molecule will be $(1b _{1}) ^{1} (4a _{1} ) ^{1}\nonumber$ The total electronic symmetry is given by $B _{1} \otimes A _{1} = B _{1}$.
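The decomposition carried out above follows the standard reduction formula $n_i = \frac{1}{h}\sum_R \chi(R)\,\chi_i(R)$, which can be sketched directly (an illustrative sketch; $C_{2v}$ has one operation per class, so the sum runs over the four characters):

```python
# Sketch of the reduction formula n_i = (1/h) * sum_R chi(R) * chi_i(R),
# applied to the Gamma_MO representations found in the examples.
C2V_CHARACTERS = {          # order of operations: E, C2, sigma_xz, sigma_yz
    "A1": [1,  1,  1,  1],
    "A2": [1,  1, -1, -1],
    "B1": [1, -1,  1, -1],
    "B2": [1, -1, -1,  1],
}

def reduce_rep(chi):
    """Number of times each irreducible representation appears in chi."""
    h = len(chi)  # group order of C2v
    return {irrep: sum(c * ci for c, ci in zip(chi, chars)) // h
            for irrep, chars in C2V_CHARACTERS.items()}

print(reduce_rep([6, 0, 2, 4]))   # water:        {'A1': 3, 'A2': 0, 'B1': 1, 'B2': 2}
print(reduce_rep([10, 0, 4, 6]))  # formaldehyde: {'A1': 5, 'A2': 0, 'B1': 2, 'B2': 3}
```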
The electronic configuration would give rise to both singlet and triplet states. To test whether or not the transition to this state is allowed, the transition moment integral must not vanish. $\int \psi '\overrightarrow{\mu }\psi "d\tau =\int B_{1} \cdot \left(\begin{array}{c} B_{1} \\ B_{2} \\ A_{1} \end{array}\right)\cdot A_{1} d\tau\nonumber$ This integral clearly will not vanish by symmetry for the component along the x-axis. Hence, the transition to this excited state of water will be a perpendicular transition.

Example $2$

Formaldehyde

Solution

To generate the molecular orbitals in formaldehyde, consider the 1s orbitals on H and the 2s and 2p orbitals on C and O. First, determine $\Gamma_ {H}$ describing the H atoms.

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_{H(1s)}$     1     1         1               1
$\Gamma_{unm}$       2     0         0               2
$\Gamma_H$           2     0         0               2

Next, determine $\Gamma_ {C}$ and $\Gamma_ {O}$ describing the four orbitals on the C atom and the O atom.

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_{C(2s)}$     1     1         1               1
$\Gamma_{C(2p)}$     3     -1        1               1
$\Gamma_{red}$       4     0         2               2
$\Gamma_{unm}$       1     1         1               1
$\Gamma_{C}$         4     0         2               2

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_{O(2s)}$     1     1         1               1
$\Gamma_{O(2p)}$     3     -1        1               1
$\Gamma_{red}$       4     0         2               2
$\Gamma_{unm}$       1     1         1               1
$\Gamma_{O}$         4     0         2               2

The total reducible representation to be reduced is given by $\Gamma_ {H}+ \Gamma_ {C} +\Gamma_ {O}$.

$C_{2v}$             E     $C_{2}$   $\sigma_{xz}$   $\sigma_{yz}$
$\Gamma_H$           2     0         0               2
$\Gamma_{C}$         4     0         2               2
$\Gamma_{O}$         4     0         2               2
$\Gamma_{MO}$        10    0         4               6

Decomposition of this reducible representation shows $\Gamma_ {MO} = 5A _{1} + 2B _{1} + 3B _{2}\nonumber$ The electronic configuration for formaldehyde is given by $(1a _{1} )^2 (2a _{1} )^2 (3a _{1} )^2 (4a _{1} )^2 (1b _{2} )^2 (5a _{1} )^2 (1b _{1} )^2 (2b _{2} )^2\nonumber$ The ($1a _{1}$ ) and ($2a _{1}$ ) orbitals did not come from the above analysis as they are essentially the 1s orbitals on O and C, which were not included in the basis set.
The lowest energy unoccupied orbital is ($2b _{1}$ ), so the first excited electronic state will have an electronic configuration given by $(5a _{1} )^2 (1b _{1} )^2 (2b _{2} ) ^{1} (2b _{1} ) ^{1}\nonumber$ This yields both triplet and singlet spin functions and an orbital function with symmetry given by $b _{2} \otimes b _{1} = a _{2}$. And as it turns out, the first electronic transition in formaldehyde is orbitally forbidden since no choice of a component of the dipole moment operator can be used to create a totally symmetric integrand for the electric dipole transition moment integral. $\int A_{2} \cdot \left(\begin{array}{c} B_{1} \\ B_{2} \\ A_{1} \end{array}\right)\cdot A_{1} d\tau\nonumber$ In order to see this transition in formaldehyde, there must be some involvement from vibrational motion that changes the symmetry of the overall wavefunction. Recall that $\Psi _{tot} = \psi _{elec} \psi _{vib}\nonumber$ if the Born-Oppenheimer approximation holds. The symmetries for the vibrational wavefunctions (which can be derived using the method previously discussed) are given by $\Gamma_ {vib} = 3 A _{1} + B _{1} + 2 B _{2}\nonumber$ So excitation of a $B _{1}$ or $B _{2}$ vibrational mode (yielding an overall symmetry for the total wavefunction of either $B _{2}$ or $B _{1}$ respectively) will cause the transition to “turn on”. This type of vibronically allowed transition is not uncommon (similar behavior is observed in benzene) and is characterized by a missing 0-0 band in the electronic spectrum of the molecule.

9.10: References

Born, M., & Oppenheimer, R. J. (1927). Zur Quantentheorie der Molekeln. Annalen der Physik, 84, 457-484. doi:10.1002/andp.19273892002

Brink, D. M. (1994). Angular Momentum. USA: Oxford University Press.

Bunker, P. (2009). Molecular Symmetry and Spectroscopy (2nd ed.). Ottawa: National Research Council (Canada) Research Press.

Herzberg, G. (1950). Molecular Spectra and Molecular Structure, I.
Spectra of Diatomic Molecules (2nd ed.). New York: Van Nostrand Reinhold.

9.11: Vocabulary and Concepts

antibonding orbital
bonding orbital
Born-Oppenheimer approximation
Coulomb integrals
exchange integral
Franck-Condon factor
gerade
Herzberg Diagrams
Hund’s case (a)
Hund’s case (b)
linear combinations of atomic orbitals
Molecular Orbital Method
Molecular Orbital theory
negative overlap
orbitally forbidden
overlap integral
paramagnetic
rovibronic transitions
Separated Atom Method
ungerade
United Atom Method
vibronic transition
vibronically allowed transition
Wigner-Witmer rule

9.12: Learning Objectives

After mastering the material covered in this chapter, one will be able to:

1. Describe the Born-Oppenheimer Approximation and how it is used to construct potential energy surfaces describing the vibration of a diatomic molecule.
2. Construct a molecular orbital diagram for a diatomic molecule depicting both bonding and antibonding orbitals of s and p symmetries, including inversion symmetry (g/u) as appropriate for homonuclear diatomic molecules, and utilize the diagram to predict the ground state electronic configuration of a diatomic molecule, including its magnetic properties and bond order.
3. Describe the differences between Hund’s Angular Momentum Cases (a) and (b) and how these cases manifest in the resulting energy levels in real molecules.
4. Determine molecular term symbols for diatomic molecules using the United Atom Method, the Separated Atom Method, and the Molecular Orbital Method.
5. Construct Herzberg Diagrams and use them to determine the band structure of a spectroscopic transition, including the “first line” in each branch.
6. Derive the formulation for the Franck-Condon factor and explain how it determines the relative intensity of vibrational bands in an electronic transition.
7. Utilize the tools of Group Theory to predict the symmetries of the molecular orbitals that arise from linear combinations of atomic orbitals for a polyatomic molecule.
One of the most important tools in scientific measurement and the development of technology in general is the laser. The word “laser” is an acronym for Light Amplification by Stimulated Emission of Radiation. What a laser does is use a spectroscopic transition to amplify the intensity of a light source by stimulating emission from the upper state of the transition. In order to do this, the system must have a population inversion.

• 10.1: Fractional Population of Quantum States — A molecule will exist in a quantum state with an energy determined by that quantum state. For a sample containing a large number of molecules, several quantum states will be available, and the molecules will be distributed among them. If the sample is thermalized, the distribution will follow the Maxwell-Boltzmann distribution law.
• 10.2: Types of Lasers — There are many different types of lasers built on many different principles and techniques of creating population inversions. A population inversion can be induced in a system through the fast absorption of light, a chemical reaction that creates a non-equilibrium distribution of molecules, “zapping” a system with electrons, or many other ways. We will consider several of them in this section.
• 10.3: Examples of Laser Systems — There are many types of lasers commonly used in science today.
• 10.4: Laser Spectroscopy — Typical spectroscopy experiments require four elements: 1) a light source, 2) a sample, 3) a monochromator and 4) a detector.
• 10.5: References
• 10.6: Vocabulary and Concepts
• 10.7: Problems

Thumbnail: Six commercial lasers in operation, showing the range of different colored light beams that can be produced, from red to violet. From the top, the wavelengths of light are: 660 nm, 635 nm, 532 nm, 520 nm, 445 nm, and 405 nm. Manufactured by Q-line. (CC BY-SA 3.0 Unported; Sariling gawa via Wikipedia)

10: Lasers

A molecule will exist in a quantum state with an energy determined by that quantum state.
For a sample containing a large number of molecules, several quantum states will be available, and the molecules will be distributed among them. If the sample is thermalized1, the distribution will follow the Maxwell-Boltzmann distribution law.

Maxwell-Boltzmann Distribution Law

According to the Maxwell-Boltzmann distribution law, the fraction of the number of molecules in the sample that are in a specific quantum state will be given by $\dfrac{N_{i} }{N_{tot} } \propto d_{i} e^{-\frac{E_{i} }{kT} }\nonumber$ where $N_{i}/N_{tot}$ is the fraction of the total number of molecules in the $i ^{th}$ quantum state, which has energy $E_ {i}$ relative to the lowest energy the molecule can attain, and $d_{i}$ is the degeneracy of that state. If the fraction of molecules in each quantum state is added, the result must be unity. $\sum _{i}\dfrac{N_{i} }{N_{tot} } =1\nonumber$

Partition Functions

To ensure this, a partition function is introduced to normalize the distribution. $q=\sum _{i}d_{i} e^{-\frac{E_{i} }{kT} }\nonumber$ And so $\dfrac{N_{i} }{N_{tot} } =\dfrac{d_{i} e^{-\frac{E_{i} }{kT} } }{q}\nonumber$ The partition function, which is a function of temperature as well as the physical properties of the molecules under consideration, can be expressed as a product of partition functions for each type of motion available in the molecule. If electronic, vibrational and rotational energy levels only are considered, the partition function can be expressed as $q_{tot} = q_{elec}q_{vib}q_{rot}\nonumber$ When considering each type of motion, it is important to consider both the energy levels and the degeneracies of states. As was seen in the case of rotational motion (Chapter IV), at low energies, the degeneracy part of the expression dominates, but at higher energies, the exponential part of the function takes over. If the energy $E_{i}$ is very large (relative to kT) then there will be essentially no population in the $i^{th}$ level.
This is the case, in general, for electronic excitation; the energy level is so high in energy relative to kT that there are essentially no molecules in excited electronic states except at extraordinarily high temperatures. In this case, where the energy is very large relative to kT

\begin{aligned} q & = d_{1} e^{-\frac{0}{kT} } +\sum _{i\ne 1} d_{i} e^{-\frac{E_{i} }{kT} } \\ &\approx d_{1} \cdot 1+\sum _{i\ne 1}d_{i} e^{-\infty } \\ &=d_{1} +\sum _{i\ne 1}d_{i} \cdot 0 \\ &=d_{1} \end{aligned}\nonumber

Naturally, q will become larger for motions with small energy level differences (such as rotational motion), where the word “small” is always considered relative to $kT$. Based on the above equations and the degeneracies and energy level expressions for the harmonic oscillator (for $q_{vib}$) and the rigid rotor (for $q_{rot}$), the following approximate expressions can be used to estimate partition functions for each type of motion.

             Expression                                                    Approx. Exp.                                          Magnitude Estimate
$q_{elec}$   $q=\sum _{i}d_{i} e^{-\frac{E_{i} }{kT} }$                    $d_{1}$                                               1
$q_{vib}$    $q=\sum _{v}d_{v} e^{-\frac{hc\omega _{e} (v+1/2)}{kT} }$     $\left(1-e^{-\frac{\omega_{e}hc}{kT}}\right)^{-1}$    1-10
$q_{rot}$    $q=\sum _{J}(2J+1) e^{-\frac{hcBJ(J+1) }{kT} }$               $kT/hcB$                                              100-1000

Rotational and Vibrational Temperatures

The above discussion suggests that the temperature of a system can be determined by measuring the populations of individual quantum states. This can be done using spectroscopic intensity data. A line or band in a spectrum will be more intense if there are a larger number of molecules in the originating state of the transition. (This is essentially Beer’s Law, which says that spectral intensity is proportional to concentration.) Sometimes these analyses will yield results that are not consistent between different types of motion within the molecule.
For example, analysis of the vibrational intensity distribution may yield a temperature that is different from the one obtained by analysis of the rotational intensity distribution. For this reason, scientists often refer to the “vibrational temperature” or the “rotational temperature” of a sample. These are non-equilibrium situations and are usually dependent on the dynamics of how a molecule was formed within a sample. Some pathways may leave an excess of energy in vibrational modes whereas others may lead to rotationally hot product molecules due to an excess of energy in rotational motion. Typically, after a large number of collisions in which energy may be transferred from one molecule to another, these temperatures will equilibrate and the Maxwell-Boltzmann distribution law will describe all fractional populations irrespective of the type(s) of motion that dominate(s) an energy level.

Population Inversion

In the case where all available energy levels are singly degenerate, the Maxwell-Boltzmann distribution law suggests that fractional population should decrease with increasing energy. In some cases, the non-equilibrium distribution of molecules through the available quantum states becomes inverted. Again, this situation can be created by the specific dynamics of how a system is prepared. In the case that a population inversion can be created, a laser can be made that uses the sample of molecules with this inverted population as a gain medium to create the laser light output. Theoretically, any system in which a population inversion can be induced can be used as a gain medium for a laser.

1. The word “thermalized” means that all of the molecules in the sample are in “thermal contact” with one another (typically due to a large frequency of collisions with other molecules in the sample) so that there is an equilibrium established for the exchange of energy between molecules in the sample.
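The rotational estimate in the table above, $q_{rot} \approx kT/hcB$, can be checked against the direct sum over levels (a numerical sketch; the rotational constant of CO is used for illustration):

```python
# Numerical check: rotational partition function by direct summation
# versus the high-temperature estimate q_rot ~ kT/(hcB).
import math

K_CM = 0.6950348  # Boltzmann constant in wavenumber units, k/(hc), cm^-1 K^-1

def q_rot(B, T, Jmax=200):
    """Direct sum over rigid-rotor levels E_J = hcB J(J+1), with B in cm^-1."""
    kT = K_CM * T
    return sum((2 * J + 1) * math.exp(-B * J * (J + 1) / kT)
               for J in range(Jmax + 1))

B, T = 1.93, 300.0          # approximate B of CO in cm^-1, room temperature
direct = q_rot(B, T)
estimate = K_CM * T / B
print(round(direct, 1), round(estimate, 1))  # the two agree to within ~1%
```

The direct sum exceeds the estimate by roughly one-third of a unit, a well-known correction that is negligible whenever $kT \gg hcB$, which is exactly the regime in which the estimate is quoted.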
There are many different types of lasers built on many different principles and techniques of creating population inversions. A population inversion can be induced in a system through the fast absorption of light, a chemical reaction that creates a non-equilibrium distribution of molecules, “zapping” a system with electrons, or many other ways. We will consider several of them in this section.

Two-level laser

The simplest type of laser is a two-level laser, although many argue that a true two-level laser cannot exist1. Nonetheless, it is instructive to consider a simplified system with only two levels, in which a population inversion has been introduced. Once the population inversion has been achieved, light of a frequency that matches the resonance between the two levels is passed through the sample. This can “tickle” a molecule into dropping to the lower level by giving off a photon. If this happens, the stimulated emission will be coherent with the stimulating photon (in phase with it and of the same frequency). If many molecules are stimulated to emit, the gain will be substantial and a strong beam of coherent, monochromatic light will be produced. Naturally, as laser output is achieved, the upper-level population will deplete and that of the lower level will grow. When a Maxwell-Boltzmann distribution is established, laser output will cease. So in order to keep the laser operating, the upper state must be repopulated or the lower state must be depopulated. The nature of the laser is defined by the manner in which these population/depopulation events occur. The manner in which the light is manipulated can also define the nature of the laser and how it operates.

Three-level lasers

There are several examples of three-level lasers. In these systems, a third level is introduced in order to either populate the upper level of the laser transition or depopulate the lower level. This difference defines two types of three-level laser systems.
In the case that the third level (\(E_{3}\)) lies above the upper level of the laser transition (\(E_{u}\)), the following schematic energy level diagram will result. In this system, the level \(E_{3}\) is populated by the absorption of light (which is what is depicted in the diagram above) or some other method. The transition between \(E_{3}\) and \(E_{u}\) is much faster than the transition between \(E_{u}\) and the lower level of the laser transition, \(E_{l}\). As such, \(E_{u}\) will be populated quickly and a population inversion will be established. As this laser operates, \(E_{u}\) will be depopulated, so a fresh supply of molecules in this level must be provided by the pump source cycling molecules out of \(E_{l}\) and back to \(E_{3}\). An example of this type of three-level laser is the ruby laser2, in which the gain medium is a ruby crystal. The pump exciting molecules from \(E_{l}\) to \(E_{3}\) is provided by a flash lamp. Since the flash lamp is pulsed, this system produces pulsed laser output. The wavelength of the ruby laser output is 694.3 nm. The helium-neon (HeNe) laser (Microwave Determination of Average Electron Energy and Density in He–Ne Discharges, 1964) is another example of this type of laser. The HeNe laser is a continuous wave laser (meaning it is not pulsed like the ruby laser) that produces red light at 632.8 nm. A second type of three-level laser is one in which the third level (\(E_{3}\)) lies below the lower level (\(E_{l}\)) of the laser transition. In this system, the upper level of the laser transition is populated either by an optical or electrical pump or by a chemical reaction. The lower level is depopulated by a fast transition (or a chemical reaction). Since this depopulation happens faster than the population of \(E_{l}\) through the laser transition, a population inversion is maintained easily.
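A toy rate-equation model illustrates why the fast feeding step maintains the inversion in the first type of three-level system described above (all rate constants here are invented for illustration; simple Euler integration of pumping \(E_{l} \to E_{3}\), fast decay \(E_{3} \to E_{u}\), and a slow laser transition \(E_{u} \to E_{l}\)):

```python
# Toy rate-equation sketch of a three-level laser with the pump level above
# the upper laser level. All rate constants are invented for illustration.
def populations(pump=5.0, fast=100.0, slow=1.0, dt=1e-4, steps=100000):
    n_l, n_3, n_u = 1.0, 0.0, 0.0    # fractional populations; start in E_l
    for _ in range(steps):            # simple Euler integration
        d_l = -pump * n_l + slow * n_u
        d_3 = pump * n_l - fast * n_3
        d_u = fast * n_3 - slow * n_u
        n_l, n_3, n_u = n_l + d_l * dt, n_3 + d_3 * dt, n_u + d_u * dt
    return n_l, n_3, n_u

n_l, n_3, n_u = populations()
print(n_u > n_l)  # True: steady pumping maintains the inversion N_u > N_l
```

Because the \(E_{3} \to E_{u}\) rate is much larger than the \(E_{u} \to E_{l}\) rate, molecules pile up in \(E_{u}\) and the steady-state ratio \(N_u/N_l\) is set by the pump-to-decay ratio.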
An example of this type of laser is the chemical laser, in which the upper level of the laser transition is populated through a chemical reaction which creates vibrationally excited molecules (Spencer, Jacobs, Mirels, & Gross, 1969) (Kasper & Pimentel, 1965) (Hinchen, 1973). Such lasers typically produce output in the infrared.
Four-level lasers
A four-level laser incorporates elements of both types of three-level lasers by having an energy level above the upper level of the laser transition that rapidly populates \(E_{u}\) and one below the lower state of the laser transition that rapidly depopulates the lower level, \(E_{l}\). Briefly, a pump (usually supplied by a flash lamp) excites molecules from \(E_{3}\) to \(E_{4}\). A fast transition from \(E_{4}\) to \(E_{u}\) populates the upper state of the laser transition. A fast transition from \(E_{l}\) to \(E_{3}\) then depopulates the lower level of the laser transition, maintaining a population inversion between \(E_{u}\) and \(E_{l}\) until \(E_{4}\) is no longer able to populate \(E_{u}\). The Nd:YAG (neodymium YAG) laser (Geusic, Marcos, & Van Uitert, 1964) is an example of a four-level laser. In this laser, neodymium (III) ions entrained in a yttrium aluminum garnet crystal provide the four energy levels. The laser produces a polarized pulsed output at 1064 nm.
Q-switching
One of the important devices that makes a Nd:YAG laser (and many others) work is the Q-switch. A Q-switch is a polarizing filter that changes its direction of polarization when an electrical potential is applied to it. In one orientation, the switch blocks laser output light (preventing stimulated emission amplification) and in the other orientation, it allows this light to pass. The Q-switch is used to limit laser gain (which would deplete the upper level of the laser transition) until an optimal population inversion is achieved. The Q-switch is then “opened” and laser output is generated until the population inversion is relaxed.
The timing is critical and must be tuned for each laser (and usually re-optimized several times a day while the laser is in operation, as changes in temperature can change the characteristics of the YAG crystal dramatically).
1. Others argue that excimer lasers and dye lasers are two-level lasers. The difference depends on what is considered a “level”.
2. There are actually two levels in a ruby laser that act as \(E_{3}\). For a complete description, see (Maiman, 1960).
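The four-level scheme described above can be illustrated with a toy rate-equation model. The rate constants below are arbitrary illustrative values, not measured Nd:YAG rates; the only physics built in is that the \(E_{4} \rightarrow E_{u}\) and \(E_{l} \rightarrow E_{3}\) relaxations are much faster than the laser transition, which is what sustains the inversion:

```python
# Illustrative rate constants in arbitrary units (assumed, not measured).
PUMP = 0.5   # pumping rate, E3 -> E4
FAST = 50.0  # fast relaxations: E4 -> Eu and El -> E3
LASE = 1.0   # laser (stimulated emission) transition, Eu -> El

def evolve(steps=20000, dt=1e-3):
    """Euler-integrate the four-level populations to steady state."""
    n3, n4, nu, nl = 1.0, 0.0, 0.0, 0.0  # all population starts in E3
    for _ in range(steps):
        pump = PUMP * n3   # E3 -> E4
        up = FAST * n4     # E4 -> Eu (fast)
        lase = LASE * nu   # Eu -> El (laser transition)
        drain = FAST * nl  # El -> E3 (fast)
        n3 += dt * (drain - pump)
        n4 += dt * (pump - up)
        nu += dt * (up - lase)
        nl += dt * (lase - drain)
    return nu, nl

nu, nl = evolve()
print(nu > nl)  # steady-state inversion between Eu and El
```

Because the drain out of \(E_{l}\) is fifty times faster than the feed into it, the steady-state population of \(E_{u}\) far exceeds that of \(E_{l}\), so the inversion is maintained continuously as long as the pump runs.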
There are many types of laser commonly used in science today. The range of applications of lasers in science and technology is extremely broad, ranging from household applications (such as television remote controls) to manufacturing applications (such as laser cutting and welding, or laser lithography used in the manufacture of microelectronics), to medicine (including specific procedures such as laser eye surgery) to basic fundamental science. The specific needs of a particular job determine which laser is best for the job.
$N_2$ laser
A nitrogen laser is a pulsed laser that provides ultraviolet output at 337 nm, but can also produce several wavelengths near its strongest output line. The laser gain transition is the 0-0 band of the C-B (second positive) system of $N_{2}$. The upper state is populated by subjecting the gas to an electrical discharge. Applications of the $N_{2}$ laser include pumping of dye lasers (described in section B.4.d), diagnostics of air samples, and laser desorption techniques.
Excimer Lasers
An excimer laser is one in which the upper state of the transition is a metastable state of a molecule, and the lower state is dissociative. Because the lower state is not bound, molecules that land in that state after emitting a photon immediately dissociate, allowing for no buildup of population in the lower level of the laser transition. As such, any population in the upper state implies a population inversion. The upper (metastable) state is populated by a pulsed electrical discharge through a gas containing the precursors of the excimer molecules. Since these precursors (usually involving HCl or HF gas) are particularly caustic (to say nothing of how reactive the soup of radicals and ions produced by the electrical discharge is!) these lasers require a very high level of maintenance. However, because of the simplicity of the energy level scheme, these lasers readily provide strong laser output.
These lasers are used in a number of applications including the pumping of dye lasers and laser eye surgery. The pulses that emanate from these lasers have a duration on the order of a few nanoseconds. The output wavelength of an excimer laser is determined by the particular excimer formed in the discharge. The most commonly used excimer lasers are XeCl (308 nm) and ArF (193 nm). The following table shows several common excimers and their output wavelengths.

Table: Common Excimers with Output Wavelengths

| Excimer | Wavelength (nm) |
|---------|-----------------|
| ArF     | 193             |
| KrCl    | 222             |
| KrF     | 248             |
| XeCl    | 308             |
| XeF     | 351             |

Rare Gas Ion Lasers
Another important class of lasers is the rare gas ion laser. In this laser, the gain medium is provided by an ion of a noble gas (such as $Ar^{+}$). The gas is ionized by means of an electrical discharge. These lasers typically have several wavelengths which can be selected for the output. These lasers are used widely as pump lasers for dye lasers and also in Laserium light shows.
Tunable Dye Lasers
Tunable dye lasers are a very flexible type of laser as they provide selectable output wavelengths. Many of them can be scanned through a set of wavelengths, which can be very useful in a number of applications (such as laser spectroscopy). The gain medium in a dye laser is provided by a strong fluorescent dye dissolved in a liquid solvent (such as methanol). The range of output wavelengths is determined by the specific dye. Commercially available dyes span the entire visible spectrum. Ring dye lasers are capable of very high resolution (narrow wavelength or frequency range).
Pulse Amplification
Pulse amplification is a technique used to increase the output power of a laser. In this technique, a seed beam is passed through a dye cell and is crossed by a pulsed pump beam which excites the dye, providing another stage of gain for the seed beam. Most dye lasers have at least one stage of pulse amplification in them to achieve suitable power for the specific application.
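The excimer wavelengths in the table above translate directly into photon energies via $E = hc/\lambda$; the short sketch below computes them in electron-volts:

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

excimer_wavelengths_nm = {"ArF": 193, "KrCl": 222, "KrF": 248,
                          "XeCl": 308, "XeF": 351}

def photon_energy_ev(wavelength_nm):
    """Photon energy E = hc/lambda, expressed in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

for name, wl in excimer_wavelengths_nm.items():
    print(f"{name}: {wl} nm -> {photon_energy_ev(wl):.2f} eV")
```

The ArF line at 193 nm corresponds to roughly 6.4 eV per photon, which is part of why deep-UV excimer output is energetic enough to break many chemical bonds directly.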
Frequency Doubling
Another useful technique that extends the wavelength output range of a laser is frequency doubling. In this technique, laser output is focused on a special crystal (such as beta-barium borate, or BBO) which has nonlinear optical properties that allow it to fuse two photons of frequency $\omega$ into one photon with frequency $2\omega$. Frequency doubling is not a terrifically efficient process and usually comes at a significant price in output intensity. However, the benefit of frequency doubling a tunable dye laser output is that one can extend the tunable range of laser output into the ultraviolet.
Ultrafast Lasers
A fairly recent development in technology is the development of ultrafast lasers. This class of device delivers laser output in very short (on the order of femtoseconds) pulses. On this time scale, it is possible to take snapshots of chemical reaction intermediates, since the laser pulse time is comparable to the lifetime of a chemical intermediate. These lasers, however, have very broad spectral output due to the Heisenberg uncertainty principle precluding simultaneously small uncertainties in time and energy (and hence wavelength).
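The last point can be made quantitative with the time-bandwidth product for a transform-limited Gaussian pulse, $\Delta\nu \, \Delta t \approx 0.441$ (FWHM values), a standard optics relation. The example numbers below (a 100 fs pulse at 800 nm, typical of titanium-sapphire systems) are illustrative assumptions:

```python
C = 2.99792458e8  # speed of light, m/s

def gaussian_bandwidth_nm(pulse_fwhm_s, center_wavelength_m):
    """Minimum spectral width (FWHM, in nm) of a transform-limited
    Gaussian pulse of the given duration."""
    delta_nu = 0.441 / pulse_fwhm_s                       # Hz
    return center_wavelength_m ** 2 * delta_nu / C * 1e9  # nm

print(f"{gaussian_bandwidth_nm(100e-15, 800e-9):.1f} nm")  # femtosecond pulse
print(f"{gaussian_bandwidth_nm(10e-9, 800e-9):.2e} nm")    # nanosecond pulse
```

A 100 fs pulse is several nanometers wide in wavelength, while a 10 ns pulse can in principle be narrower than a picometer; that is the spectral cost of ultrafast time resolution.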
Typical spectroscopy experiments require four elements: 1) a light source, 2) a sample, 3) a monochromator and 4) a detector. In laser methods, a laser can serve as both a light source and a monochromator. It can also serve as just one of those two, or be used in a totally different way such that it serves as neither!
Total Fluorescence
In a total fluorescence experiment, the laser is used as both the light source and the monochromator. The data obtained are similar to those obtained in a regular absorption spectroscopy experiment. The laser used in this kind of experiment would typically be a tunable dye laser that will be scanned through a range of wavelengths in order to map the absorption spectrum of the sample. The detector must be placed at an angle to the incident laser beam in order to minimize direct exposure to the laser light, which would swamp the signal (and probably ruin the detector!) What is detected is actually the photons produced in the fluorescence of the sample, which increases whenever the laser frequency coincides with a resonance frequency. Monitoring fluorescence intensity as a function of excitation laser wavelength produces an absorption spectrum of the molecule. By and large, the total fluorescence method yields information about the upper state of a transition, since scanning the tunable laser maps the energy levels in the upper state.
Dispersed Fluorescence
In a dispersed fluorescence experiment, the wavelength of the excitation laser is fixed and the fluorescence is collected by a monochromator and separated into its wavelength components. By separating the fluorescence into its wavelength components, the energy levels of the lower state are mapped. As such, this experiment is similar to an emission spectrum, but has the advantage of having only a single upper-level quantum state. This type of experiment yields information about the lower level of the transition.
Molecular Beam Spectroscopy (A Sub-Doppler Method)
Laser excitation (total fluorescence) spectroscopy and dispersed fluorescence spectroscopy have resolution that is limited by the instrumentation and the natural Doppler width of the lines in the spectrum (caused by the motion of molecules in the gas phase, which can be parallel, antiparallel or at some angle to the direction of the laser beam propagation). A number of techniques exist that allow for sub-Doppler resolution (resolution that is better than the Doppler limit would otherwise allow).
10.05: References
Geusic, J., Marcos, H., & Van Uitert, L. (1964). Laser oscillations in Nd-doped yttrium aluminum, yttrium gallium and gadolinium garnets. Applied Physics Letters, 4(10), 182-184.
Hinchen, J. (1973). Vibrational relaxation of hydrogen and deuterium fluorides. Journal of Chemical Physics, 59(1), 233-240.
Kasper, J. V., & Pimentel, G. C. (1965). HCl Chemical Laser. Physical Review Letters, 14(10), 352-354.
Maiman, T. H. (1960). Stimulated Optical Radiation in Ruby. Nature, 187(4736), 493-494.
Microwave Determination of Average Electron Energy and Density in He–Ne Discharges. (1964). Journal of Applied Physics, 35(5), 1647-1648.
Spencer, D. J., Jacobs, T. A., Mirels, H., & Gross, R. W. (1969). Continuous-Wave Chemical Laser. International Journal of Chemical Kinetics, 1(5), 493-494.
10.06: Vocabulary and Concepts
continuous wave
dispersed fluorescence
Doppler limit
Doppler width
frequency doubling
Heisenberg uncertainty principle
laser
Maxwell-Boltzmann distribution
metastable state
partition function
population inversion
pulse amplification
Q-switch
rare gas ion laser
rotational temperature
spectroscopy
total fluorescence
tunable dye lasers
ultrafast lasers
vibrational temperature
10.07: Problems
1. A dye laser produces pulses of 15.0 mJ at a wavelength of 564 nm. How many photons are being produced per pulse?
2.
In the above problem, consider the optical gain medium occupying a volume of 1.00 mL. What is the minimum concentration (in mol/L) of chromophores needed to produce pulses of 15.0 mJ at 564 nm?
3. Consider a two-level system, in which the difference in energy is 1.0 eV. If both levels are singly degenerate, calculate the fractional population of each level at 10 K, 100 K, and 1000 K.
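As an order-of-magnitude sanity check for estimates like problem 1, the number of photons in a pulse is the pulse energy divided by the single-photon energy $hc/\lambda$. This is a sketch of the arithmetic, not a worked solution:

```python
H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s

def photons_per_pulse(pulse_energy_j, wavelength_m):
    """Photon count = pulse energy / (hc / lambda)."""
    return pulse_energy_j * wavelength_m / (H * C)

# Visible-wavelength pulses of a few mJ contain on the order of 1e16 photons:
n = photons_per_pulse(15.0e-3, 564e-9)
print(f"{n:.2e} photons per pulse")
```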
In the television show “The Big Bang Theory”, Dr. Sheldon Cooper describes the best use of his time as a scientist to “employ his rare and precious mental faculties to tear the mask off of nature and stare at the face of God.” [1] And while the fictitious character may have an inflated view of the magnitude of his research efforts, he is not in poor company in terms of the feelings that science is a tool to be used to see the nature of God in nature itself. Albert Einstein is quoted as saying “Science without religion is lame. Religion without science is blind.” [2] Another quote attributed to Einstein is, “I want to know God’s thoughts; the rest is just details.” [3] Of course, Einstein claimed some familiarity with the intentions of the Creator when he quipped in a letter to Max Born, “Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the "old one." I, at any rate, am convinced that He does not throw dice.” [4] Much has been made of Einstein’s opinions of God as a craps player. Through the 1920s and 1930s, Einstein and Niels Bohr had many conversations on the ramifications of the quantum theory. In response to Einstein’s quip about a non-dice-playing deity, Bohr is said to have responded, “Einstein, stop telling God what to do!”1 Of course, Bohr was very well aware of the strangeness of the quantum theory and how it shook the very roots of conventional wisdom about nature. Bohr is quoted as saying, “Anyone who is not shocked by quantum theory has not understood it.” [5] Naturally, Einstein found quantum theory quite shocking indeed. One of his earliest objections was that the quantum theory required that one dismiss a deterministic view of the universe. The philosophy of Determinism states that if all is known about a system at one point in time, then all can be known about that system at all points in time.
Bohr, on the other hand, had no difficulties in dismissing determinism in favor of a quantum theory. Eventually, the debate would focus on the indeterminacy predicted by the Heisenberg Uncertainty Principle for complementary variables (variables for which the corresponding quantum mechanical operators do not commute, such as position and momentum). In fact, the spirited (but mostly amiable) debates between Einstein and Bohr did the development of quantum theory an enormous service. (Not all of Bohr’s debates were amiable. Some of his discussions with Werner Heisenberg reportedly left Heisenberg in tears! Heisenberg said of these discussions, “Since my talks with Bohr often continued till long after midnight and did not produce a satisfactory conclusion, ...both of us became utterly exhausted and rather tense.”) [6] By poking at the forefronts of what the theory predicts and what it cannot predict, the Bohr-Einstein debates pushed quantum theory forward by enormous leaps. In this chapter, we will examine how various people have probed the “strangeness” of the quantum theory and the bizarre behavior it predicts (or in some cases, the bizarre behavior that was discovered almost by accident). Much of the strangeness of quantum mechanics continues to be researched actively and colors such important topics as quantum communications and quantum computing.
1. The Big Bang Theory, CBS Television, 2011.
2. A. Einstein, "Science, Philosophy, and Religion: a Symposium," 1941.
3. E. Salaman, "A Talk with Einstein," The Listener, vol. 54, pp. 370-371, 1955.
4. A. Einstein, "Letter to Max Born (4 December 1926)," in The Born-Einstein Letters, New York, Walker and Company, 1971.
5. K. Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Duke University Press Books, 2007, p. 254.
6. D. C.
Cassidy, "Triumph of the Copenhagen Interpretation (1925-1927)," American Institute of Physics, [Online]. Available: https://history.aip.org/exhibits/hei...g/triumph.html. [Accessed 4 October 2022].
Thumbnail: The Stern-Gerlach experiment. (CC BY-SA 4.0 International; Tatoute via Wikipedia; Modified by LibreTexts).
11: Quantum Strangeness
One of the first introductions students of the Quantum Theory receive involves the nodes in the wavefunctions of a one-dimensional particle in a box. The probability of measuring the particle to exist at any given position in the box is given by the square of the wavefunction. For the $n = 2$ level, the squared wavefunction is plotted above. The figure shows that the probability of measuring the position of the particle is greatest at $x = \frac{a}{4}$ and $x = \frac{3a}{4}$ (the maxima) and that there is zero probability of measuring the particle at the endpoints or at $x = \frac{a}{2}$, the middle position of the box. One might wonder how the particle can travel from one side of the box to the other without ever actually being in the middle. If one models the particle as a small ball bearing traveling from end to end in an evacuated, sealed glass tube (consistent with a deterministic view in which the particle has a definite location at all times) the prediction is clearly troubling. For many, this creates a dilemma. The reconciliation of this dilemma requires that one abandon the notion of determinism in embracing the wave nature of the particle. Namely, if one accepts the wave description of the particle, the notion of a definite location becomes meaningless, since the wave must be delocalized across the entire box. In fact, the wave even exists at the central node despite the value of the wavefunction being zero! This concept provides a clear challenge to the notion of determinism that is suggested by Newtonian physics.
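The probability density described above is easy to check numerically. The sketch below uses the standard particle-in-a-box wavefunction, $\psi_n(x) = \sqrt{2/a}\,\sin(n\pi x/a)$, with a box of unit length for convenience:

```python
import math

def psi_squared(n, x, a=1.0):
    """Probability density |psi_n(x)|^2 = (2/a) sin^2(n pi x / a)
    for a particle in a one-dimensional box of length a."""
    return (2.0 / a) * math.sin(n * math.pi * x / a) ** 2

# For n = 2 the density peaks at x = a/4 and x = 3a/4 and vanishes
# at the walls and at the central node x = a/2:
print(psi_squared(2, 0.25))  # maximum value, 2/a
print(psi_squared(2, 0.50))  # zero, up to floating-point roundoff
```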
The idea of “matter waves” also led to a proposal by Louis de Broglie that matter-wave interference should be observable.
Thomas Young showed in 1803 [7] that light traveling through a pair of parallel slits will produce an interference pattern that follows the laws of diffraction. This was a huge problem for the existing Newtonian theory of light, as Newton had postulated that light is, in fact, a stream of particles. With the advent of a quantum theory, light was postulated to have a dual nature, having properties of both particles and waves. This dual nature, of course, would be applicable to the description of matter as well, according to Louis de Broglie. At this point, things started to get really interesting. But before we go into that, let’s think about the two-slit experiment in terms of the Heisenberg Uncertainty Principle. Recall that the Uncertainty Principle states that there is a small minimum value for the product of the uncertainties of position and momentum. $\Delta x\Delta p\ge \frac{\hbar }{2}\nonumber$ This concept can be used to describe why a light wave is diffracted by a slit. As the photon or other wave-particle passes through the slit, the uncertainty of the position of the wave-particle is basically given by the size of the slit. The uncertainty in momentum then allows for the spreading of the wave-particle spatially. This is illustrated in the diagram. This interpretation is very useful in understanding how Einstein used this experiment as a criticism of the Uncertainty Principle and of the Quantum Theory itself. In 1924 [8] [9], Louis de Broglie proposed a wave description of all matter through his famous wavelength relationship $\lambda = h/p\nonumber$ His prediction that matter-wave interference could be observed was confirmed in 1927 in independent experiments by George Thomson, who observed diffraction patterns in electron beams passing through thin metal films [10], and by Clinton Davisson and Lester Germer, who observed electron diffraction on an electron beam focused on a crystalline nickel metal surface.
[11] Thomson and Davisson shared the Nobel Prize in Physics in 1937 for these discoveries. While the observation of interference of matter waves gave a great deal of credibility to the emerging quantum theory, Einstein was still troubled. In a series of interactions with Bohr, Einstein would propose thought experiments which he believed would uncover an inconsistency in the quantum theory by violating the Heisenberg Uncertainty Principle. Bohr would then consider the experiment and, in particular, the apparatus that would be used to make the measurements Einstein had proposed. Then, in presenting the “apparatus” to Einstein, Bohr would explain the flaw in Einstein’s reasoning and how such a measurement could not violate the predictions of quantum mechanics. One such exchange occurred over the concept of the “two-slit” experiment. In this experiment, a beam of electrons travels through a screen before arriving at a detector. In the screen, there are two slits through which the beam may pass. Each of these slits will diffract the beam, leading to an interference pattern as the beam hits a detector screen. The diffraction is confirmed by the interference pattern observed on the detector. To make matters even more interesting, if one slit is blocked, the result is the disappearance of the interference pattern. Instead, the recorded signal is consistent with the electrons traveling through the single unblocked slit. For light waves, this phenomenon was well understood, thanks to the experiments of Young. But for matter waves, the picture becomes somewhat bizarre. There is not much of a problem if one considers what happens when the beam is turned on continuously. In this case, there are plenty of electrons making the transit and it is easy to imagine each as having a wave nature which can interfere with all of the other electrons making the transit.
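The de Broglie relation $\lambda = h/p$ quoted above is easy to evaluate numerically. The sketch below uses the 54 eV electron energy of the Davisson-Germer experiment (non-relativistic kinematics are adequate at such energies):

```python
import math

H = 6.62607015e-34    # Planck constant, J s
M_E = 9.1093837e-31   # electron mass, kg
EV = 1.602176634e-19  # joules per electron-volt

def de_broglie_nm(kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength, lambda = h/p, in nm."""
    momentum = math.sqrt(2.0 * M_E * kinetic_energy_ev * EV)
    return H / momentum * 1e9

# 54 eV electrons have a wavelength comparable to the atomic spacing
# in a nickel crystal, which is what made the diffraction observable:
print(f"{de_broglie_nm(54):.3f} nm")
```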
The real excitement happens when the electron source is slowed down so that only one electron is making the transit at a time. If the resulting signals generated when the electrons reach the detector are integrated over time, an identical interference pattern emerges! “How can that be?” I hear you cry. And the question would indeed be very profound. One explanation is that each electron traverses the distance from the gun through the slits by taking both possible pathways. This explanation is equivalent to saying that the electron becomes delocalized as soon as it leaves the source, takes all possible pathways to the detector and then becomes localized once again when it interacts with the detector, revealing its final position. Such an explanation would be very problematic to a person clinging to the philosophy of Determinism. Einstein’s description of the phenomenon provided an important piece of the puzzle in terms of probing the limitations of quantum theory. Einstein argued that a particle passing through a slit would only have its path altered if it imparted some momentum to the screen containing the slit through a collision. That collision would have to cause the screen to move a tiny amount (due to conservation of momentum). And if that movement could be detected, then one would simultaneously know both the position of the particle (as it passed through the slit) and its momentum (due to the momentum imparted to the slit itself). And this would create a violation of the Heisenberg Uncertainty Principle. Bohr’s response was quick and decisive. He pointed to the fact that Einstein had only attempted to apply the Uncertainty Principle to the wave-particle that passed through the slit and not to the slit itself. In fact, the uncertainty in the momentum of the slit will be the same as the uncertainty in the momentum of the wave-particle (since similar methods are used to measure them.)
$\Delta p_{slit} = \Delta p_{wp}\nonumber$ Further, the uncertainty of the position of the wave-particle is equal to the uncertainty of the position of the slit. $\Delta x_{slit} = \Delta x_{wp}\nonumber$ Additionally, the slit itself must satisfy the Uncertainty Principle in that $\Delta x_{slit} \Delta p_{slit} \ge \frac{\hbar}{2}\nonumber$ Simple substitution shows that if the slit is governed by the Uncertainty Principle, then the wave-particle must be as well. $\Delta x_{wp}\Delta p_{wp} \ge \frac{\hbar}{2}\nonumber$ This argument does not prove that quantum mechanics is correct, but it does show that it is self-consistent. Very recently, scientists have used a modified approach to the double-slit experiment to reopen the question. [12] In this experiment, laser light shines on a screen with two pinholes. A clever detection system is used that detects only those photons that pass through one of the pinholes (a particle-like behavior). But at the same time, detecting wires are placed in the positions of the destructive interference fringes (where no light should fall), confirming that no light is detected in these dark fringes (which is a consequence of the wave nature of light). As such, the experiments demonstrate that light can show both the wave and particle nature simultaneously – something that Bohr had predicted to be impossible based on the idea of complementarity. Clearly, the debate continues and forms the subject of current research. Bohr and Einstein would have several of these types of debates over the course of the late 1920s. Each time, Einstein would propose a thought experiment which he believed would violate the Uncertainty Principle, and each time Bohr would counter with a demonstration that, in fact, there was no violation at all. It seemed that Einstein was defeated. However, that was far from the case! Before exploring Einstein’s next move, though, let’s consider another experiment that shows the strangeness of quantum mechanics.
It will be useful in framing a discussion of Einstein’s next move.
1. T. Young, "The Bakerian Lecture: Experiments and Calculations Relative to Physical Optics," Philosophical Transactions of the Royal Society of London, pp. 1-16, 1804.
2. L. de Broglie, Recherches sur la théorie des quanta (Researches on the quantum theory), Paris, 1924.
3. L. de Broglie, "Recherches sur la théorie des Quanta," Annalen der Physik, no. 3, pp. 22-128, 1925.
4. G. P. Thomson, The Wave Mechanics of Free Electrons, New York: McGraw-Hill Book Company, 1930.
5. C. Davisson and L. Germer, "Diffraction of electrons by a crystal of nickel," Physical Review, p. 705–740, 1927.
6. S. S. Afshar, E. Flores, K. F. McDonald and E. Knoesel, "Paradox in Wave-Particle Duality," Foundations of Physics, no. 3, p. 295–305, 2007.
One of the very interesting aspects of many small particles, including electrons, is that of spin. (The original Stern-Gerlach experiment [13] was performed on a beam of silver atoms, but the results apply to electrons as well.) The property of spin creates a magnetic moment for these particles. For electrons, which have $s = \frac{1}{2}$, the component of angular momentum along an external axis can take two possible values, $m_{s} = \pm \frac{1}{2}$. That means that an electron traveling through an inhomogeneous magnetic field can align its magnetic moment either with or against the external field. The ramifications are very interesting. A beam of electrons that passes through an inhomogeneous magnetic field will be split into two beams. Those electrons whose magnetic moments are aligned with the field will be deflected in one direction, and those with magnetic moments aligned against the external field will be deflected in the other. Each beam can then be considered as containing only electrons that are either “spin up” ($\alpha$, $m_{s} = +\frac{1}{2}$) or “spin down” ($\beta$, $m_{s} = -\frac{1}{2}$). As such, if one of the beams passes through another magnetic field that is oriented parallel to the first, no further splitting occurs since all of the electrons in that sub-beam have their spins aligned. However, things get very interesting when the second magnetic field is oriented at $90^{\circ}$ to the first. Since the magnetic moments of the electrons are aligned perpendicular to the external magnetic field, there should be no effect. What actually happens is that the beam again splits into two sub-beams, just as the original beam did! If the second magnetic field is placed at some other angle, the beam will still split into two components, but the intensities will be determined by the magnitude of the projection of the electron magnetic moment along the external axis.
Those intensities are easily calculable if one thinks of the spin wavefunction as a linear combination of the two spin functions in the rotated axis system. $\Psi_{spin} = \cos\left(\frac{\theta}{2}\right) \cdot \alpha + \sin\left(\frac{\theta}{2}\right) \cdot \beta \nonumber$ where $\theta$ is the angle between the two magnetic fields. (The half-angle is the standard result for a spin-$\frac{1}{2}$ system, and with these coefficients the wavefunction is properly normalized.) The probabilities of measuring the spin in either the $\alpha$ or $\beta$ state are then given by the squares of the corresponding expansion coefficients. $P(\alpha) = \cos^{2}\left(\frac{\theta}{2}\right)\nonumber$ $P(\beta) = \sin^{2}\left(\frac{\theta}{2}\right)\nonumber$ Note that the two probabilities sum to unity, as they must: parallel fields ($\theta = 0$) produce no further splitting, while perpendicular fields ($\theta = 90^{\circ}$) split the beam into two equal sub-beams, as described above. This conclusion will be useful in interpreting later results. One very important question that the Stern-Gerlach result raises deals directly with Determinacy. The question is whether or not an individual electron “knows” that it is $\alpha$ or $\beta$ before interacting with the detector. The results (particularly for the experiments where a beam of selected spin particles is resplit) suggest that it is the interaction with the detector that forces the particle into one state or the other. In this manner, the Stern-Gerlach result shows that making a measurement on a system will, in fact, alter that system. The interaction of the electrons with the external field causes an alignment of the individual magnetic moments (either with or against the external field). The types of experiments (and specifically spin detectors) used in the Stern-Gerlach experiment can be used to help frame the next step in the Einstein-Bohr debates on the completeness of quantum mechanics.
1. W. Gerlach and O. Stern, "Das magnetische Moment des Silberatoms," Zeitschrift für Physik, vol. 9, p. 353–355, 1922.
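The splitting intensities for a rotated Stern-Gerlach analyzer can be checked with a few lines of code; the half-angle form used here is the standard result for a projective measurement on a spin-$\frac{1}{2}$ state:

```python
import math

def spin_probabilities(theta):
    """Probabilities of measuring alpha (up) or beta (down) along an
    axis rotated by angle theta from the axis that prepared the beam."""
    p_alpha = math.cos(theta / 2.0) ** 2
    p_beta = math.sin(theta / 2.0) ** 2
    return p_alpha, p_beta

# Parallel fields: no further splitting. Perpendicular fields: 50/50 split.
print(spin_probabilities(0.0))
print(spin_probabilities(math.pi / 2.0))
```

At every angle the two probabilities sum to one, and at 90 degrees the beam splits evenly, reproducing the observations described above.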
In 1935, Einstein raised the stakes in the quantum debate significantly. Together with his postdoctoral co-authors, Boris Podolsky and Nathan Rosen, he published one of the most famous papers in the history of the quantum theory debates. The EPR paper [14] (so called based on the initials of the authors) would create a veritable firestorm within the community that championed the Copenhagen interpretation of Quantum Mechanics.
The EPR paradox
The EPR paper proposed a paradox in the form of a thought experiment, much like the several thought experiments proposed by Einstein to Bohr at the various Solvay Conferences. In the paradox, Einstein used the concepts of a conserved center of mass and conserved momentum in a fragmenting particle to show that either a measurement on one fragment must affect the properties of the other, or the quantum theory had to be incomplete. The thought experiment involved the fragmentation of a particle into two fragment particles. The fragment particles would be linked through a single wavefunction describing the entire system. After some time of traveling apart, it was assumed that the two fragment particles could no longer interact as they were physically separated by some distance. At some point following the fragmentation, the position is measured for one of the fragment particles. This, through the conservation of the center of mass, would determine the position of the other particle. Then, by measuring the momentum of the counter fragment, the momentum of the first fragment would be determined through the conservation of momentum. As such, there would be simultaneous knowledge of both position and momentum for both particles, in violation of the Uncertainty Principle.
The argument in the EPR paper was that since a measurement on one fragment determined the properties of the counter fragment, and the two fragments were separated in space, the properties of the counter fragment must have been determined all along, irrespective of having been measured. (Einstein referred to the phenomenon of a measurement on one fragment affecting properties of the counter fragment as “spooky action at a distance.”) In other words, Indeterminacy as suggested by the Heisenberg Uncertainty Principle must be a fallacy. The only other explanation possible was that the quantum theory had to be incomplete. With this argument, people had to take very seriously the possibility of a theory of “local reality,” in which properties exist with definite values rather than only coming into being through the interaction with a detector of some sort. Bohr responded within months. He attacked a specific assumption of the setup of the EPR paradox, namely that a measurement of the properties of one particle would not “disturb the system in any way.” Hidden Variables The EPR paradox was both eloquent and succinct. It touched off quite a storm within the community, as it shook the very foundations of the quantum theory. But perhaps even more interestingly, it spurred a whole new avenue of research into understanding its ramifications. Specifically troubling was the idea that the wavefunction describing a system did not, in fact, provide a complete description of that system. Scientists began to wonder if there might be some “hidden variables” in a system that allowed properties to be both hidden under the vagueness of a wavefunction and also determined by the definite values of those variables, irrespective of whether or not the system was observed or measured. In 1951, David Bohm published a textbook [15] on quantum theory that included a good deal of discussion of the EPR paradox.
In it, he suggested measuring the nuclear spins of hydrogen atoms that result from the dissociation of a singlet-state hydrogen molecule. The spins would be correlated through the conservation of angular momentum and could thus take the place of the measurements of position and momentum in the EPR version. In Bohm’s version of the EPR experiment (sometimes called the EPRB experiment) the spin states of the hydrogen atoms would be correlated as the atoms would be “entangled”. And since angular momentum had to be conserved, measurement of the spin of one atom along the laboratory-fixed z-axis would determine the value along the z-axis for the other atom. But what if the measurement was made along the x- or y-axes? If the EPR definition of reality is to be believed, these values must also be determined (or real). Of course quantum mechanics only allows for the measurement of one of the components, as the operators for the three components do not commute. Thus, if the EPR definition of reality is correct, then the wavefunction by necessity must be incomplete. There would need to be hidden variables. Even more significantly, Bohm’s proposed experiments could be carried out in a laboratory, rather than being limited to the realms of thought. 1. A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?," Physical Review, vol. 47, pp. 777-780, 1935. 2. D. Bohm, Quantum Theory, New York: Prentice-Hall, 1951.
Bohm’s work on the EPR paradox reawakened an interest in the topic. One physicist who took a particular interest in the topic was John S. Bell. Bell proposed a mathematical model that could in fact distinguish between local hidden variable theories and quantum theory [16]. Consider a set of things U which can be subdivided into three overlapping subsets, A, B and C. Bell’s theorem states: the number of members of A that are not members of B, plus the number of members of B that are not members of C, must be greater than or equal to the number of members of A that are not also members of C. To show this, let’s first settle on some notation. We’ll denote the number of items that are in subset A but not in subset B by the symbol $N(A_ {+} B_ {-} )$, the number of items in subset B but not in subset C by $N(B_ {+} C _{-} )$, and so on. With this notation (and perhaps a few Venn diagrams), the inequality becomes clear. $N(A_{+}B_{-})$ is given by the number of items in subset A, not in subset B and in subset C, plus the number in A, not in B and not in C. $N(A_{+}B_{-}) = N(A_ {+}B_ {-}C_ {+}) + N(A_ {+}B_ {-}C_ {-}) \nonumber$ Similar sums can be written for $N(B_ {+} C_-)$ and $N(A_ {+} C_ {-} )$: $N(B_ {+} C_ {-} ) = N(A_ {+} B_ {+} C_ {-} ) + N(A_ {-} B_ {+} C_ {-} )\nonumber$ $N(A_ {+} C_ {-} ) = N(A_ {+} B_ {+} C_ {-} ) + N(A_ {+} B_ {-} C_ {-} )\nonumber$ Adding the expressions for $N(A_{+}B_{-})$ and $N(B_{+}C_{-})$ gives $N(A_{+}B_{-}) +N(B_{+}C_{-}) = N(A_ {+} B_ {-} C_ {+} ) + N(A_ {+} B_ {-} C_ {-} ) + N(A_ {+} B_ {+} C_ {-} ) + N(A_ {-} B_ {+} C_ {-} )\nonumber$ This can be simplified by grouping the terms $N(A_ {+} B_ {+} C_ {-} )$ and $N(A_ {+} B_ {-} C_ {-} )$ and recognizing that their sum gives $N(A_ {+} C_ {-} )$.
$N(A_{+}B_{-}) +N(B_{+}C_{-}) = N(A_ {+} B_ {-} C_ {+} ) + N(A_ {-} B_ {+} C_ {-} ) + N(A_ {+} C_ {-} )\nonumber$ So long as neither $N(A_ {+} B_ {-} C_ {+} )$ nor $N(A_ {-} B_ {+} C_ {-} )$ is negative (which they cannot be), we arrive at Bell’s inequality: $N(A_{+}B_{-}) +N(B_{+}C_{-}) \ge N(A_ {+} C_ {-} )\nonumber$ Employing the Stern-Gerlach Results to Test Bell’s Inequality On the face of it, Bell’s result does not seem that extraordinary. In fact, it almost seems trivial. However, it is only trivial when the results of tests that would place an object into group A, B or C are not correlated. When the results are correlated, the result becomes a bit perplexing. Consider the decay of a pion (also called a $\pi$ meson), which is a subatomic particle with zero spin and zero charge. It can decay into a positron and an electron (conserving charge), each traveling in opposite directions (so that momentum is conserved). The spins will also be entangled in such a way as to conserve angular momentum. In fact, the spin state of the electron/positron pair will be given by the familiar singlet spin function: $\Psi =\frac{1}{\sqrt{2} } \left(\alpha _{+} \beta _{-} -\beta _{+} \alpha _{-} \right)\nonumber$ This means that if the positron (subscript +) is detected in the $\alpha$ spin state, the electron (subscript $-$) will necessarily be forced into the $\beta$ spin state. The wavefunction allows for equal probability that the positron will be detected in the $\alpha$ spin state or the $\beta$ spin state, but detection in either state forces an immediate collapse of the wavefunction for the electron. This is the “spooky action at a distance” that Einstein so vehemently rejected in the EPR paper [14]. Einstein also insisted that the spin state of the positron was a “real” property that existed with a definite value for the entire transit of the positron from the decay event to the detector.
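The perfect anticorrelation implied by the singlet function can be mimicked with a toy simulation (an illustrative sketch, not anything from the text; the function and variable names are mine):

```python
import random

# Toy simulation of the singlet-state correlation described above: the
# positron is detected in alpha or beta with equal probability, and the
# electron's wavefunction collapses to the opposite state.
def measure_singlet_pair(rng):
    positron = rng.choice(["alpha", "beta"])               # 50/50 outcome
    electron = "beta" if positron == "alpha" else "alpha"  # forced collapse
    return positron, electron

rng = random.Random(0)
pairs = [measure_singlet_pair(rng) for _ in range(10000)]

# The two detectors are perfectly anticorrelated along a common axis...
assert all(p != e for p, e in pairs)
# ...even though each detector alone sees a 50/50 distribution.
frac_alpha = sum(p == "alpha" for p, _ in pairs) / len(pairs)
print(frac_alpha)  # close to 0.5
```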
And quantum mechanics, in Einstein’s view, was incomplete in that it could not predict the “realness” of that spin state. If Einstein’s view was correct, then correlated measurements of the two spin states would have to satisfy Bell’s inequality. With the results of the Stern-Gerlach experiments, we can determine exactly what quantum mechanics will predict. To do this, we will set up our detectors to detect the spins of the dissociated fragments, but we will rotate the detectors relative to one another. In a laboratory-fixed coordinate system, we will set detector A at $0 ^{\circ}$ rotation, B at $30 ^{\circ}$ and C at $60 ^{\circ}$. What we want to know is the probability that one detector measures its particle to be in spin state $\alpha$ while the other fails to measure its particle in spin state $\beta$. That probability will be related to the angle of rotation of the second detector relative to the first. According to the Stern-Gerlach result, the probability is given by $\frac{1}{2}\sin^{2}(\theta_ {2} -\theta_ {1} )$, where $\theta_ {2}$ and $\theta_ {1}$ are the angles of the second and first detectors in the pair, respectively.
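As a quick numerical check, the expression $\frac{1}{2}\sin^{2}(\theta_{2}-\theta_{1})$ can be evaluated for the detector settings just described (a minimal sketch; the function name is mine):

```python
import math

# The probability used in the text, P = (1/2) sin^2(theta_2 - theta_1),
# evaluated for each pair of detector settings.
def p_mismatch(theta1_deg, theta2_deg):
    delta = math.radians(theta2_deg - theta1_deg)
    return 0.5 * math.sin(delta) ** 2

print(p_mismatch(0, 30))   # detectors A and B: ~0.125
print(p_mismatch(30, 60))  # detectors B and C: ~0.125
print(p_mismatch(0, 60))   # detectors A and C: ~0.375
```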
So if we define $P(A_ {+} B_ {-} )$ as the probability that detector A detects an $\alpha$ spin and detector B fails to detect a $\beta$ spin, we can construct the following table based on three specific experimental configurations:

Experiment 1: $\theta_{1} = 0^{\circ}$, $\theta_{2} = 30^{\circ}$; case $P(A_{+}B_{-})$; $\Delta\theta = 30^{\circ}$; $\frac{1}{2}\sin^{2}(\Delta\theta) = 0.125$
Experiment 2: $\theta_{1} = 30^{\circ}$, $\theta_{2} = 60^{\circ}$; case $P(B_{+}C_{-})$; $\Delta\theta = 30^{\circ}$; $\frac{1}{2}\sin^{2}(\Delta\theta) = 0.125$
Experiment 3: $\theta_{1} = 0^{\circ}$, $\theta_{2} = 60^{\circ}$; case $P(A_{+}C_{-})$; $\Delta\theta = 60^{\circ}$; $\frac{1}{2}\sin^{2}(\Delta\theta) = 0.375$

After collecting data from a very large set of measurements using these configurations, we can compare the experimental distribution of outcomes to what is predicted by quantum mechanics, and thus conclude whether it is possible to have a local variable that predetermines the outcomes, or whether the measurements are purely probabilistic. If such a local variable exists, then Bell’s inequality must hold [17]. $P(A_ {+} B_ {-} ) + P(B_ {+} C_ {-}) \ge P(A_ {+} C_ {-} )\nonumber$ However, if the quantum mechanical predictions are to be consistent with a local variable that predetermines the measured outcomes of the three experiments, then the following must be true: $0.125 + 0.125 \ge 0.375\nonumber$ Except that it simply isn’t true. (In fact, it isn’t even true for extremely large values of the sum 0.125 + 0.125.) The above set of experiments was proposed by Alain Aspect in 1976 [18], and the results were published in 1982 [19]. And while those results were criticized due to the “detection loophole,” results of similar experiments conducted up to 2015 [20] confirmed Aspect’s results. Alain Aspect shared the 2022 Nobel Prize in Physics with John Clauser and Anton Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science” [21]. Since Aspect’s result was obtained completely independently of any theory of hidden variables, it should be clear that the result is incompatible with any such theory.
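The counting argument behind Bell's inequality, and its violation by the quantum probabilities above, can both be checked by brute force. A sketch (the helper names are mine): every conceivable fixed assignment of A/B/C membership obeys the inequality, while the quantum numbers do not.

```python
import itertools

# Counting form of Bell's inequality: N(A+B-) + N(B+C-) >= N(A+C-).
# Each item either is or is not a member of A, B and C, so its membership
# is one of 8 boolean patterns; enumerate populations built from them.
def satisfies_bell(items):
    n_ab = sum(a and not b for a, b, c in items)
    n_bc = sum(b and not c for a, b, c in items)
    n_ac = sum(a and not c for a, b, c in items)
    return n_ab + n_bc >= n_ac

patterns = list(itertools.product([False, True], repeat=3))
# Every population of up to three items drawn from the 8 patterns obeys it.
for size in range(1, 4):
    for combo in itertools.combinations_with_replacement(patterns, size):
        assert satisfies_bell(combo)

# The quantum probabilities from the table above, however, violate the
# same arithmetic form:
print(0.125 + 0.125 >= 0.375)  # False
```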
In fact, the result shows that one must divorce oneself from any ideas of local realism for quantum mechanical particles. One simply must conclude that it is the observation that creates the reality, and that no reality for observable properties of a quantum mechanical system can exist independent of their observation. (Of course, Sheldon Cooper would also point out that one can be beaten up simply for referring to oneself as “one.”) [22] 1. J. S. Bell, "On the Einstein Podolsky Rosen paradox," Physics Physique Fizika, vol. 1, p. 195, 1964. 2. J. S. Bell, Speakable and Unspeakable in Quantum Mechanics, London: Cambridge University Press, 1987. 3. A. Aspect, "Proposed experiment to test the nonseparability of quantum mechanics," Physical Review D, vol. 14, p. 1944, 1976. 4. A. Aspect, P. Grangier and G. Roger, "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell’s Inequalities," Physical Review Letters, vol. 49, p. 91, 1982. 5. J. Markoff, "Sorry, Einstein. Quantum Study Suggests ’Spooky Action’ Is Real," New York Times, 21 October 2015. 6. "The Nobel Prize in Physics 2022," NobelPrize.org, 5 October 2022. [Online]. Available: https://www.nobelprize.org/prizes/ph.../2022/summary/. [Accessed 4 October 2022]. 11.06: References 1. The Big Bang Theory, CBS Television, 2011. 2. A. Einstein, "Science, Philosophy, and Religion: a Symposium," 1941. 3. E. Salaman, "A Talk with Einstein," The Listener, vol. 54, pp. 370-371, 1955. 4. A. Einstein, "Letter to Max Born (4 December 1926)," in The Born-Einstein Letters, New York, Walker and Company, 1971. 5. K. Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Duke University Press Books, 2007, p. 254. 6. D. C. Cassidy, "Triumph of the Copenhagen Interpretation (1925-1927)," American Institute of Physics, [Online].
Available: https://history.aip.org/exhibits/hei...g/triumph.html. [Accessed 4 October 2022]. 7. T. Young, "The Bakerian Lecture: Experiments and Calculations Relative to Physical Optics," Philosophical Transactions of the Royal Society of London, pp. 1-16, 1804. 8. L. de Broglie, Recherches sur la théorie des quanta (Researches on the quantum theory), Paris, 1924. 9. L. de Broglie, "Recherches sur la théorie des Quanta," Annalen der Physik, no. 3, pp. 22-128, 1925. 10. G. P. Thomson, The Wave Mechanics of Free Electrons, New York: McGraw-Hill Book Company, 1930. 11. C. Davisson and L. Germer, "Diffraction of electrons by a crystal of nickel," Physical Review, pp. 705-740, 1927. 12. S. S. Afshar, E. Flores, K. F. McDonald and E. Knoesel, "Paradox in Wave-Particle Duality," Foundations of Physics, no. 3, pp. 295-305, 2007. 13. W. Gerlach and O. Stern, "Das magnetische Moment des Silberatoms," Zeitschrift für Physik, vol. 9, pp. 353-355, 1922. 14. A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?," Physical Review, vol. 47, pp. 777-780, 1935. 15. D. Bohm, Quantum Theory, New York: Prentice-Hall, 1951. 16. J. S. Bell, "On the Einstein Podolsky Rosen paradox," Physics Physique Fizika, vol. 1, p. 195, 1964. 17. J. S. Bell, Speakable and Unspeakable in Quantum Mechanics, London: Cambridge University Press, 1987. 18. A. Aspect, "Proposed experiment to test the nonseparability of quantum mechanics," Physical Review D, vol. 14, p. 1944, 1976. 19. A. Aspect, P. Grangier and G. Roger, "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell’s Inequalities," Physical Review Letters, vol. 49, p. 91, 1982. 20. J. Markoff, "Sorry, Einstein. Quantum Study Suggests ’Spooky Action’ Is Real," New York Times, 21 October 2015. 21.
"The Nobel Prize in Physics 2022," NobelPrize.org, 5 October 2022. [Online]. Available: https://www.nobelprize.org/prizes/ph.../2022/summary/. [Accessed 4 October 2022]. 22. The Big Bang Theory, CBS Television, 2010. 1. This quote, while very clever, is disputed, as a very similar quote is also attributed to Enrico Fermi.
Some Useful Mathematical Identities $\sin \left(\alpha \pm \beta \right)=\sin \left(\alpha \right)\cos \left(\beta \right)\pm \cos \left(\alpha \right)\sin \left(\beta \right)\nonumber$ $\cos \left(\alpha \pm \beta \right)=\cos \left(\alpha \right)\cos \left(\beta \right)\mp \sin \left(\alpha \right)\sin \left(\beta \right)\nonumber$ $\sin \left(x\right)=x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dots +{\left(-1\right)}^{n+1}\dfrac{x^{2n-1}}{\left(2n-1\right)!}\nonumber$ $\cos \left(x\right)=1-\dfrac{x^2}{2!}+\dfrac{x^4}{4!}-\dots +{\left(-1\right)}^n\dfrac{x^{2n}}{\left(2n\right)!}\nonumber$ $e^{\pm i\theta }=\cos \left(\theta \right)\pm i\,\sin \left(\theta \right) \qquad \cos \left(\theta \right)=\dfrac{e^{i\theta }+e^{-i\theta }}{2} \qquad \sin \left(\theta \right)=\dfrac{e^{i\theta }-e^{-i\theta }}{2i}\nonumber$ Some Useful Integrals $\int \sin \left(\alpha x\right)\sin \left(\beta x\right) dx=\dfrac{\sin \left[\left(\alpha -\beta \right)x\right]}{2\left(\alpha -\beta \right)} -\dfrac{\sin \left[\left(\alpha +\beta \right)x\right]}{2\left(\alpha +\beta \right)} \quad \alpha \ne \beta\nonumber$ $\int \sin ^{2} \left(\alpha x\right)dx=\dfrac{x}{2} -\dfrac{\sin \left(2\alpha x\right)}{4\alpha } \qquad \qquad \int \sin^{3} \left(\alpha x\right)\ dx=\dfrac{\cos \left(3\alpha x\right) -9\cos \left(\alpha x\right)}{12\alpha }\nonumber$ $\int x\sin ^{2} (\alpha x)dx= \dfrac{x^{2} }{4} -\dfrac{x\sin (2\alpha x)}{4\alpha } -\dfrac{\cos (2\alpha x)}{8\alpha ^{2} }\nonumber$ $\int x^{2} \sin ^{2} (\alpha x)dx=\dfrac{x^{3} }{6} -\left(\dfrac{x^{2} }{4\alpha } -\dfrac{1}{8\alpha ^{3} } \right)\sin (2\alpha x)-\dfrac{x\cos (2\alpha x)}{4\alpha ^{2} }\nonumber$ $\int \cos \left(\alpha x\right)\cos \left(\beta x\right)dx=\dfrac{\sin \left[\left(\alpha -\beta \right)x\right]}{2\left(\alpha -\beta
\right)} +\dfrac{\sin \left[\left(\alpha +\beta \right)x\right]}{2\left(\alpha +\beta \right)}\nonumber$ $\int \cos ^{2} \left(\alpha x\right)dx =\dfrac{x}{2} +\dfrac{1}{4\alpha } \sin \left(2\alpha x\right) \qquad \int x\; \cos ^{2} \left(\alpha x\right)dx= \dfrac{x^{2} }{4} +\dfrac{x\sin \left(2\alpha x\right)}{4\alpha } +\dfrac{\cos \left(2\alpha x\right)}{8\alpha ^{2} }\nonumber$ $\int x^{2} \; \cos ^{2} (\alpha x)dx= \dfrac{x^{3} }{6} +\left(\dfrac{x^{2} }{4\alpha } -\dfrac{1}{8\alpha ^{3} } \right)\sin (2\alpha x)+\dfrac{x\cos (2\alpha x)}{4\alpha ^{2} }\nonumber$ $\int _{0}^{\infty }x^{n} e^{-\alpha x} dx=\dfrac{n!}{\alpha ^{n+1} } \; \text{(n a positive integer)} \qquad \qquad \int _{0}^{\infty }e^{-\alpha x^{2} } dx =\sqrt{\dfrac{\pi }{4\alpha } } \nonumber$ $\int _{0}^{\infty }x^{2n} e^{-\alpha x^{2} } dx =\dfrac{1\cdot 3\cdot 5\cdot \cdot \cdot \left(2n-1\right)}{2^{n+1} \alpha ^{n} } \sqrt{\dfrac{\pi }{\alpha } } \qquad \qquad \int _{0}^{\infty }x^{2n+1} e^{-\alpha x^{2} } dx =\dfrac{n!}{2\alpha ^{n+1} }\nonumber$ Some Useful Coordinate Transformations Plane Polar Coordinates: $0 \le r \le \infty$; $0 \le \theta \le 2\pi$ $x=r\cos \theta \qquad \qquad y=r\sin \theta \qquad \qquad \theta =\arctan \left(\dfrac{y}{x} \right) \qquad \qquad r=\sqrt{x^{2} +y^{2} }\nonumber$ $\begin{array}{rcl} {} & {} & {\nabla {}^{2} =\dfrac{\partial ^{2} }{\partial x^{2} } +\dfrac{\partial ^{2} }{\partial y^{2} } =\dfrac{1}{r} \dfrac{\partial }{\partial r} \left(r\dfrac{\partial }{\partial r} \right)+\dfrac{1}{r^{2} } \dfrac{\partial ^{2} }{\partial \theta ^{2} } } \end{array}\nonumber$ Spherical Polar Coordinates: $0 \le r \le \infty$ ; $0 \le \theta \le \pi$; $0 \le \phi \le 2\pi$ $x=r\sin \theta \cos \phi \qquad \qquad y=r\sin \theta \sin \phi \qquad \qquad z=r\cos \theta\nonumber$ $\begin{array}{rcl} {} & {} & {\nabla ^{2} =\dfrac{\partial ^{2} }{\partial x^{2} } +\dfrac{\partial ^{2} }{\partial y^{2} } +\dfrac{\partial ^{2} }{\partial z^{2} } =\dfrac{1}{r^{2} } 
\dfrac{\partial }{\partial r} \left(r^{2} \dfrac{\partial }{\partial r} \right)+\dfrac{1}{r^{2} \sin \theta } \dfrac{\partial }{\partial \theta } \left(\sin \theta \dfrac{\partial }{\partial \theta } \right)+\dfrac{1}{r^{2} \sin ^{2} \theta } \dfrac{\partial ^{2} }{\partial \phi ^{2} } } \end{array}\nonumber$ $dx\ dy\ dz=r^2{\mathrm{sin} \theta \ dr\ d\theta \ d\phi \ }\nonumber$ 12.02: Appendix II Selected Character Tables Nonaxial Groups $C_{1}$ E A 1 $C_{s}$ E $\sigma$ A’ 1 1 $x$, $y$, $R_{z}$ $x^{2}$, $y^{2}$,$z^{2}$, $xy$ A” 1 -1 $z$, $R_{x}$, $R_{y}$ $xz$, $yz$ $C_{i}$ E i $A_{g}$ 1 1 $R_{x}$, $R_{y}$, $R_{z}$ $x^{2}$, $y^{2}$,$z^{2}$, $xy$, $xz$, $yz$ $A_{u}$ 1 -1 $x$, $y$, $z$ $C_{n}$ groups $C_{2}$ E $C_{2}$ A 1 1 $z$, $R_{z}$ $x^{2}$, $y^{2}$,$z^{2}$, $xy$ B 1 -1 $x$, $y$, $R_{x}$, $R_{y}$ $xz$, $yz$ $C_3$ E $C_3$ $C_3^2$ A 1 1 1 $z$, $R_z$ $x^2 + y^2$, $z^2$ E 1 $\varepsilon$ $\varepsilon^*$ $x+iy$; $R_x+iR_y$ $(x^2-y^2, xy)$ 1 $\varepsilon^*$ $\varepsilon$ $x-iy$; $R_x-iR_y$ $(xz, yz)$ $C_4$ E $C_4$ $C_2$ $C_4^3$ A 1 1 1 1 $z$, $R_z$ $x^2 + y^2, \; z^2$ B 1 -1 1 -1 $x^2-y^2$, $xy$ E 1 $i$ -1 $-i$ $x+iy$; $R_x+iR_y$ $(xz, yz)$ 1 $-i$ -1 $i$ $x-iy$; $R_x-iR_y$ $C_{5}$ E $C_{5}$ $C_5^2$ $C_5^3$ $C_5^4$ A 1 1 1 1 1 $z$, $R_z$ $x^2 + y^2$, $z^2$ $E_1$ 1 $\varepsilon$ $\varepsilon^{2}$ $\varepsilon^{2*}$ $\varepsilon^*$ $x+iy$, $R_x+iR_y$ $(xz, yz)$ 1 $\varepsilon^*$ $\varepsilon^{2*}$ $\varepsilon^{2}$ $\varepsilon$ $x-iy$, $R_x-iR_y$ $E_2$ 1 $\varepsilon^{2}$ $\varepsilon^*$ $\varepsilon$ $\varepsilon^{2*}$ $(x^2-y^2, xy)$ 1 $\varepsilon^{2*}$ $\varepsilon$ $\varepsilon^*$ $\varepsilon^{2}$ $C_6$ E $C_6$ $C_6^2$ $C_6^3$ $C_6^4$ $C_6^5$ $A$ 1 1 1 1 1 1 $z$, $R_z$ $x^2 + y^2$, $z^2$ $B$ 1 -1 1 -1 1 -1 $E_1$ 1 $\varepsilon$ $-\varepsilon^*$ -1 $-\varepsilon$ $\varepsilon^*$ $x+iy$, $R_x+iR_y$ $x-iy$, $R_x-iR_y$ $(xz, yz)$ 1 $\varepsilon^*$ $\varepsilon$ -1 $-\varepsilon^*$ $\varepsilon$ $E_2$ 1 $-\varepsilon^*$ $-\varepsilon$ 1 $-\varepsilon^*$ $-\varepsilon$ 
$(x^2-y^2, \; xy)$ 1 $-\varepsilon$ $-\varepsilon^*$ 1 $-\varepsilon$ $-\varepsilon^*$ $D_{n}$ groups $D_{2}$ E $C_{2}(z)$ $C_{2}(y)$ $C_{2}(x)$ $A_{1}$ 1 1 1 1   $x^{2}$, $y^{2}$,$z^{2}$ $B_{1}$ 1 1 -1 -1 $z, \; R_z$ $xy$ $B_{2}$ 1 -1 1 -1 $y, \; R_y$ $xz$ $B_{3}$ 1 -1 -1 1 $x, \; R_x$ $yz$ $D_{3}$ E 2 $C_{2}$ 3$C_2'$ $A_{1}$ 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 -1 $z$, $R_z$ E 2 -1 0 $(x, y)$ $(R_{x}, R_{y})$ $(x^2 -y^2, \; xy)$ $(xz, yz)$ $D_{4}$ E 2 $C_{4}$ $C_{2}$ 2 $C_{2}$’ 2 $C_{2}$” $A_{1}$ 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 1 -1 -1 $z$, $R_z$ $B_{1}$ 1 -1 1 1 -1   $x^2 - y^2$ $B_{2}$ 1 -1 1 -1 1   $xy$ E 2 0 -2 0 0 $(x, y)$ $(R_{x}, \; R_{y})$ $(xz, yz)$ $D_{5}$ E 2 $C_{5}$ 2 $C_{5}$’ 5 $C_{2}$ $A_{1}$ 1 1 1 1   $x^2 + y^2, \; z^{2}$ $A_{2}$ 1 1 1 -1 $z, \; R_z$ $E_{1}$ 2 $2 \cos(72^{\circ})$ $2 \cos(144^{\circ})$ 0 $(x, y)$ $(R_{x}, \; R_{y})$ $(xz, yz)$ $E_{2}$ 2 $2 \cos(144^{\circ})$ $2 \cos(72^{\circ})$ 0   $(x^2 -y^2, \; xy)$ $D_{6}$ E 2 $C_{6}$ 2 $C_{3}$ $C_{2}$ 3 $C_{2}$’ 3 $C_{2}$” $A_{1}$ 1 1 1 1 1 1   $x^2 + y^2$, $z^{2}$ $A_{2}$ 1 1 1 1 -1 -1 $z$, $R_z$ $B_{1}$ 1 -1 1 -1 1 -1 $B_{2}$ 1 -1 1 -1 -1 1 $E_{1}$ 2 -1 1 -2 0 0 $(x, y)$ $(R_{x}, \; R_{y})$ $(xz, yz)$ $E_{2}$ 2 -1 -1 2 0 0   $(x^2 -y^2, \; xy)$ $C_{nv}$ groups $C_{2v}$ E $C_{2}$ $\sigma_{v}$ $\sigma_{v}$’ $A_{1}$ 1 1 1 1 $z$   $x^{2}$, $y^{2}$,$z^{2}$ $A_{2}$ 1 1 -1 -1   $R_{z}$ $xy$ $B_{1}$ 1 -1 1 -1 $x$ $R_{y}$ $xz$ $B_{2}$ 1 -1 -1 1 $y$ $R_{x}$ $yz$ $C_{3v}$ E 2 $C_{2}$ 3$\sigma_{v}$ $A_{1}$ 1 1 1 $z$ $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 -1 $R_{z}$ E 2 -1 0 $(x, y)$ $(R_{x}, \; R_{y})$ $x^2 -y^2$, $xy)$$(xz, yz)$ $C_{4v}$ E 2 $C_{4}$ $C_{2}$ 2 $\sigma_{v}$ 2 $\sigma_{d}$ $A_{1}$ 1 1 1 1 1 $z$ $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 1 -1 -1 $R_{z}$ $B_{1}$ 1 -1 1 1 -1   $x^2 - y^2$ $B_{2}$ 1 -1 1 -1 1   $xy$ E 2 0 -2 0 0 $(x, y)$ $(R_{x}, R_{y})$ $(xz, yz)$ $C_{5v}$ E 2 $C_{5}$ $C_{5}^{2}$ 5 $\sigma_{v}$ $A_{1}$ 1 1 1 1 $z$ $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 1 -1 $R_{z}$ $E_{1}$ 2 $2 
\cos(72^{\circ})$ $2 \cos(144^{\circ})$ 0   $(xz, yz)$ $E_{2}$ 2 $2 \cos(144^{\circ})$ $2 \cos(72^{\circ})$ 0 $(x, y)$ $(R_{x}, \; R_{y})$ $(x^2 -y^2$, $xy)$ $C_{6v}$ E 2 $C_{6}$ 2 $C_{3}$ $C_{2}$ 3$\sigma_{v}$ 3$\sigma_{d}$ $A_{1}$ 1 1 1 1 1 1 $z$ $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 1 1 -1 -1 $R_{z}$ $B_{1}$ 1 -1 1 -1 1 -1 $B_{2}$ 1 -1 1 -1 -1 1 $E_{1}$ 2 1 -1 -2 0 0 $(x, y)$ $(R_{x}, \; R_{y})$ $(xz, yz)$ $E_{2}$ 2 -1 -1 2 0 0   $(x^2 -y^2, \; xy)$ $C_{nh}$ Groups $C_{2h}$ E $C_{2}$ i $\sigma_{h}$ $A_{g}$ 1 1 1 1 $R_{z}$ $x^{2}$, $y^{2}$,$z^{2}$ $A_{u}$ 1 1 -1 -1 $z$ $B_{g}$ 1 -1 1 -1 $R_{x}$, $R_{y}$ $xz$, $xy$, $yz$ $B_{u}$ 1 -1 -1 1 $x$, $y$ $C_{3h}$ E $C_3$ $C_3^2$ $\sigma_h$ $S_3$ $S_s^2$ $A’$ 1 1 1 1 1 1 $R_z$ $x^2 + y^2$, $z^2$ $E’$ 1 $\varepsilon$ $\varepsilon^*$ 1 $\varepsilon$ $\varepsilon^*$ $x+iy$ $x-iy$ $(x^2-y^2, \; xy)$ 1 $\varepsilon^*$ $\varepsilon$ 1 $\varepsilon^*$ $\varepsilon$ $A$" 1 1 1 -1 -1 -1 $z$ $E$" 1 $\varepsilon$ $\varepsilon^*$ -1 $-\varepsilon$ $-\varepsilon^*$ $R_x+iR_y$ $R_x-iR_y$ $(xz, yz)$ 1 $\varepsilon^*$ $\varepsilon$ -1 $-\varepsilon^*$ $-\varepsilon$ $C_{4h}$ E $C_4$ $C_2$ $C_4^3$ i $S_4^3$ $\sigma_h$ $S_4$ $A_g$ 1 1 1 1 1 1 1 1 $R_z$ $x^2 + y^2$, $z^2$ $B_g$ 1 -1 1 -1 1 -1 1 -1 $x^2-y^2$, $xy$ $E_g$ 1 $i$ -1 $-i$ 1 $i$ -1 $-i$ $R_x+iR_y$ $(xz, yz)$ 1 $-i$ -1 $i$ 1 $-i$ -1 $i$ $R_x-iR_y$ $A_u$ 1 1 1 1 -1 -1 -1 -1 $z$ $B_u$ 1 -1 1 -1 -1 1 -1 1 $E_u$ 1 $i$ -1 $-i$ -1 $i$ 1 $-i$ $x+iy$ 1 $-i$ -1 $i$ -1 $-i$ 1 $i$ $x-iy$ $C_{5h}$ E $C_{5}$ $C_5^2$ $C_5^3$ $C_5^4$ $\sigma_h$ $S_{5}$ $S_{5}^2$ $S_{5}^3$ $S_{5}^4$ $A$' 1 1 1 1 1 1 1 1 1 1 $R_z$ $x^2 + y^2$,$z^2$ $E_1$’ 1 $\varepsilon$ $\varepsilon^2$ $\varepsilon^{2*}$ $\varepsilon^*$ 1 $\varepsilon$ $\varepsilon^2$ $\varepsilon^{2*}$ $\varepsilon^*$ $x+iy$ $x-iy$ 1 $\varepsilon^*$ $\varepsilon^{2*}$ $\varepsilon^{2}$ $\varepsilon$ 1 $\varepsilon^*$ $\varepsilon^{2*}$ $\varepsilon^2$ $\varepsilon$ $E_2$' 1 $\varepsilon^2$ $\varepsilon^*$ $\varepsilon$ $\varepsilon^{2*}$ 1 
$\varepsilon^2$ $\varepsilon^*$ $\varepsilon$ $\varepsilon^{2*}$ $(x^2-y^2, xy)$ 1 $\varepsilon^{2*}$ $\varepsilon$ $\varepsilon^*$ $\varepsilon^{2}$ 1 $\varepsilon^{2*}$ $\varepsilon$ $\varepsilon^*$ $\varepsilon^{2}$ $A$" 1 1 1 1 1 -1 -1 -1 -1 -1 $z$ $E_1$" -1 $\varepsilon$ $\varepsilon^2$ $\varepsilon^{2*}$ $\varepsilon^*$ -1 -$\varepsilon$ -$\varepsilon^2$ -$\varepsilon^{2*}$ -$\varepsilon^*$ $R_x+iR_y$ $R_x-iR_y$ $(xz, yz)$ -1 $\varepsilon^*$ $\varepsilon^{2*}$ $\varepsilon^{2}$ $\varepsilon$ -1 -$\varepsilon^*$ -$\varepsilon^{2*}$ -$\varepsilon^2$ -$\varepsilon$ $E_2$" -1 $\varepsilon$ $\varepsilon^2$ $\varepsilon^{2*}$ $\varepsilon^*$ -1 -$\varepsilon^2$ -$\varepsilon^*$ -$\varepsilon$ -$\varepsilon^{2*}$ -1 $\varepsilon^*$ $\varepsilon^{2*}$ $\varepsilon^{2}$ $\varepsilon$ -1 -$\varepsilon^{2*}$ -$\varepsilon$ -$\varepsilon^*$ -$\varepsilon^{2}$ $D_{nh}$ Groups $D_{2h}$ E $C_{2}$(z) $C_{2}$(y) $C_{2}$(x) $i$ $\sigma_{xy}$ $\sigma_{xx}$ $\sigma_{yz}$ $A_{g}$ 1 1 1 1 1 1 1 1   $x^{2}$, $y^{2}$,$z^{2}$ $B_{1g}$ 1 1 -1 -1 1 1 -1 -1 $R_{z}$ $xy$ $B_{2g}$ 1 -1 1 -1 1 -1 1 -1 $R_{y}$ $xz$ $B_{3g}$ 1 -1 -1 1 1 -1 -1 1 $R_{x}$ $yz$ $A_{u}$ 1 1 1 1 -1 -1 -1 -1 $B_{1u}$ 1 1 -1 -1 -1 -1 1 1 $z$ $B_{2u}$ 1 -1 1 -1 -1 1 -1 1 $y$ $B_{3u}$ 1 -1 -1 1 -1 1 1 -1 $x$ $D_{3h}$ E 2 $C_{3}$ 3$C_{2}$’ $\sigma_{h}$ 2 $S_{3}$ 3 $\sigma_{v}$ $A_{1}$’ 1 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2}$’ 1 1 -1 1 1 -1 $R_{z}$ E’ 2 -1 0 2 -1 0 $(R_{x}$, $R_{y})$ $x^2 -y^2$, $xy)$ $A_{1}$” 1 1 1 -1 -1 -1 $A_{2}$” 1 1 -1 -1 -1 1 $z$ E” 2 -1 0 -2 1 0 $(x, y)$ $(xz, yz)$ $D_{4h}$ E 2 $C_{4}$ $C_{2}$ 2 $C_{2}$’ 2 $C_{2}$” $i$ 2 $S_{4}$ $\sigma_{h}$ 2 $\sigma_{v}$ 2 $\sigma_{d}$ $A_{1g}$ 1 1 1 1 1 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2g}$ 1 1 1 -1 -1 1 1 1 -1 -1 $R_{z}$ $B_{1g}$ 1 -1 1 1 -1 1 -1 1 1 -1   $x^2 - y^2$ $B_{2g}$ 1 -1 1 -1 1 1 -1 1 -1 1   $xy$ $E_{g}$ 2 0 -2 0 0 2 0 -2 0 0 $(R_{x}$, $R_{y})$ $(xz, yz)$ $A_{1u}$ 1 1 1 1 1 -1 -1 -1 -1 -1 $A_{2u}$ 1 1 1 -1 -1 -1 -1 -1 1 1 $z$ $B_{1u}$ 1 -1 1 1 
-1 -1 1 -1 -1 1 $B_{2u}$ 1 -1 1 -1 1 -1 1 -1 1 -1 $E_{u}$ 2 0 -2 0 0 -2 0 2 0 0 $(x, y)$ $D_{6h}$ E 2 $C_{6}$ 2 $C_{3}$ $C_{2}$ 3 $C_{2}$’ 3 $C_{2}$” $i$ 2 $S_{3}$ 2 $S_{6}$ $\sigma_{h}$ 3 $\sigma_{v}$ 3 $\sigma_{d}$ $A_{1g}$ 1 1 1 1 1 1 1 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2g}$ 1 1 1 1 -1 -1 1 1 1 1 -1 -1 $R_{z}$ $B_{1g}$ 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 $B_{2g}$ 1 -1 1 -1 -1 1 1 -1 1 -1 -1 1 $E_{1g}$ 2 -1 1 -2 0 0 2 -1 1 -2 0 0 $(R_{x}$, $R_{y})$ $(xz, yz)$ $E_{2g}$ 2 -1 -1 2 0 0 2 -1 -1 2 0 0   $(x^2 -y^2, \; xy)$ $A_{1u}$ 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 $A_{2u}$ 1 1 1 1 -1 -1 -1 -1 -1 -1 1 1 $z$ $B_{1u}$ 1 -1 1 -1 1 -1 -1 1 -1 1 -1 1 $B_{2u}$ 1 -1 1 -1 -1 1 -1 1 -1 1 1 -1 $E_{1u}$ 2 -1 1 -2 0 0 -2 -1 1 2 0 0 $(x, y)$ $E_{2u}$ 2 -1 -1 2 0 0 -2 1 1 -2 0 0 $D_{nd}$ Groups $D_{2d}$ E 2 $S_{4}$ $C_{2}$ 2 $C_{2}$’ 2 $\sigma_{d}$ $A_{1}$ 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 1 -1 -1 $R_{z}$ $B_{1}$ 1 -1 1 1 -1   $x^2 - y^2$ $B_{2}$ 1 -1 1 -1 1 $z$ $xy$ E 2 0 -2 0 0 $(x, y)$ $(R_{x}$, $R_{y})$ $(xz, yz)$ $D_{3d}$ E 2 $C_{3}$ 3$C_{2}$’ $i$ 2 $S_{6}$ 3 $\sigma_{d}$ $A_{1g}$ 1 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2g}$ 1 1 -1 1 1 -1 $R_{z}$ $E_{g}$ 2 -1 0 2 -1 0 $(R_{x}$, $R_{y})$ $x^2 -y^2$, $xy)$, $(xz, yz)$ $A_{1u}$ 1 1 1 -1 -1 -1 $A_{2u}$ 1 1 -1 -1 -1 1 z $E_{u}$ 2 -1 0 -2 1 0 $(x, y)$ $D_{4d}$ E 2 $S_{8}$ 2 $C_{4}$ 2 $S_{8}$${}^{3}$ $C_{2}$ 4 $C_{2}$’ 4 $\sigma_{d}$ $A_{1}$ 1 1 1 1 1 1 1   $x^2 + y^2$,$z^{2}$ $A_{2}$ 1 1 1 1 1 -1 -1 $R_{z}$ $B_{1}$ 1 -1 1 -1 1 1 -1 $B_{2}$ 1 -1 1 -1 1 -1 1 z $E_{1}$ 2 $\sqrt{2}$ 0 -$\sqrt{2}$ -2 0 0 $(x, y)$ $E_{2}$ 2 0 -2 0 2 0 0   $x^2 -y^2$, $xy)$ $E_{3}$ 2 -$\sqrt{2}$ 0 $\sqrt{2}$ -2 0 0 $(R_{x,}$$R_{y})$ $(xz, yz)$ Cubic Groups $T_{d}$ E 8 $C_{3}$ 3 $C_{2}$ 6 $S_{4}$ 6 $\sigma_{d}$ $A_{1}$ 1 1 1 1 1   $x^{2}$+$y^{2}$+$z_{2}$ $A_{2}$ 1 1 1 -1 -1 E 2 -1 2 0 0   (2$z_{2}$-$x^2 - y^2$, $x^2 - y^2)$ $T_{1}$ 3 0 -1 1 -1 $(R_{x}$, $R_{y}$, $R_{z})$ $T_{2}$ 3 0 -1 -1 1 $(x, y, z)$ $(xy$,$xz$,$yz)$ $O_{h}$ E 8$C_{3}$ 6$C_{2}$ 6$C_{4}$ 
3$C_{2}$ $i$ 6$S_{4}$ 8$S_{6}$ 3$\sigma_{h}$ 6$\sigma_{d}$ $A_{1g}$ 1 1 1 1 1 1 1 1 1 1   $x^{2}$+$y^{2}$+$z_{2}$ $A_{2g}$ 1 1 -1 -1 1 1 -1 1 1 -1 $E_{g}$ 2 -1 0 0 2 2 0 -1 -2 0   (2$z_{2}$-$x^2 - y^2$, $x^2 - y^2)$ $T_{1g}$ 3 0 -1 1 -1 3 1 0 -1 -1 $(R_{x}$, $R_{y}$, $R_{z})$ $T_{2g}$ 3 0 1 -1 -1 3 -1 0 -1 1   $(xy$,$xz$,$yz)$ $A_{1u}$ 1 1 1 1 1 -1 -1 -1 -1 -1 $A_{2u}$ 1 1 -1 -1 1 -1 1 -1 -1 1 $E_{u}$ 2 -1 0 0 2 -2 0 1 -2 0 $T_{1u}$ 3 0 -1 1 -1 -3 -1 0 1 1 $(x, y, z)$ $T_{2u}$ 3 0 1 -1 -1 -3 1 0 1 -1
Quantum Mechanics Describes Matter in Terms of Wavefunctions and Energy Levels, and Physical Measurements are Described in Terms of Operators Acting on Wavefunctions • 1.1: Operators Each physically measurable quantity has a corresponding operator. The eigenvalues of the operator tell the values of the corresponding physical property that can be observed • 1.2: Wavefunctions The eigenfunctions of a quantum mechanical operator depend on the coordinates upon which the operator acts; these functions are called wavefunctions • 1.3: The Schrödinger Equation The Schrödinger Equation is an eigenvalue equation for the energy or Hamiltonian operator; its eigenvalues provide the energy levels of the system. • 1.4: Free-Particle Motion in Two Dimensions The number of dimensions depends on the number of particles and the number of spatial (and other) dimensions needed to characterize the position and motion of each particle • 1.5: Particles in Boxes The particle-in-a-box problem provides an important model for several relevant chemical situations • 1.6: One Electron Moving About a Nucleus The Hydrogenic atom problem forms the basis of much of our thinking about atomic structure. To solve the corresponding Schrödinger equation requires separation of the r, θ, and ϕ variables. • 1.7: Harmonic Vibrational Motion This Schrödinger equation forms the basis for our thinking about bond stretching and angle bending vibrations as well as collective phonon motions in solids • 1.8: Rotational Motion for a Rigid Diatomic Molecule This Schrödinger equation relates to the rotation of diatomic and linear polyatomic molecules.
It also arises when treating the angular motions of electrons in any spherically symmetric potential • 1.9: The Physical Relevance of Wavefunctions, Operators and Eigenvalues Quantum mechanics has a set of 'rules' that link operators, wavefunctions, and eigenvalues to physically measurable properties. These rules have been formulated not in some arbitrary manner nor by derivation from some higher subject. Rather, the rules were designed to allow quantum mechanics to mimic the experimentally observed facts as revealed in mother nature's data. The extent to which these rules seem difficult to understand usually reflects the presence of experimental observations. 01: The Basic Tools of Quantum Mechanics Each physically measurable quantity has a corresponding operator. The eigenvalues of the operator tell the values of the corresponding physical property that can be observed In quantum mechanics, any experimentally measurable physical quantity F (e.g., energy, dipole moment, orbital angular momentum, spin angular momentum, linear momentum, kinetic energy) whose classical mechanical expression can be written in terms of the cartesian positions {q$_i$} and momenta {p$_i$} of the particles that comprise the system of interest is assigned a corresponding quantum mechanical operator F. Given F in terms of the {q$_i$} and {p$_i$}, F is formed by replacing p$_j$ by $-i\hbar\frac{\partial}{\partial q_{j}}$ and leaving q$_j$ untouched. For example, if $F=\sum\limits _{i=1}^N \left(\dfrac{p_i^2}{2m_i}+ \dfrac{1}{2}k(q_i-q_i^0)^2 + L(q_i-q_i^0)\right) \nonumber$ then $F = \sum\limits_{i=1}^N \left( \dfrac{-\hbar^2}{2m_i}\dfrac{\partial^2}{\partial q_i^2} + \dfrac{1}{2}k(q_i-q_i^0)^2 + L(q_i-q_i^0)\right) \nonumber$ The x-component of the dipole moment for a collection of N particles has $F = \sum\limits_{j=1}^N Z_jex_j \nonumber$ and $F = \sum\limits_{j=1}^N Z_jex_j \nonumber$ where Z$_j$e is the charge on the j$^{th}$ particle.
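The substitution rule $p_j \rightarrow -i\hbar\frac{\partial}{\partial q_j}$ can be illustrated numerically. A minimal sketch, assuming units with $\hbar = 1$ and a plane-wave test function (all names here are mine, not from the text):

```python
import cmath

HBAR = 1.0  # work in units where hbar = 1
K = 2.0     # wavenumber of the test plane wave

# The plane wave psi(x) = exp(i k x) is an eigenfunction of the momentum
# operator -i hbar d/dx with eigenvalue hbar k.  A central finite
# difference stands in for the derivative.
def psi(x):
    return cmath.exp(1j * K * x)

def p_on_psi(x, h=1e-5):
    dpsi_dx = (psi(x + h) - psi(x - h)) / (2 * h)  # central difference
    return -1j * HBAR * dpsi_dx

x0 = 0.7
eigenvalue = p_on_psi(x0) / psi(x0)
print(eigenvalue)  # ~ (2+0j), i.e. hbar * k
```

The same eigenvalue is recovered at any point x0, which is what makes the plane wave an eigenfunction rather than merely a solution at one point.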
The mapping from the classical quantity F to the operator F is straightforward only in terms of cartesian coordinates. To map a classical function F, given in terms of curvilinear coordinates (even if they are orthogonal), into its quantum operator is not at all straightforward. Interested readers are referred to Kemble's text on quantum mechanics which deals with this matter in detail. The mapping can always be done in terms of cartesian coordinates after which a transformation of the resulting coordinates and differential operators to a curvilinear system can be performed. The corresponding transformation of the kinetic energy operator to spherical coordinates is treated in detail in Appendix A. The text by EWK also covers this topic in considerable detail. The relationship of these quantum mechanical operators to experimental measurement will be made clear later in this chapter. For now, suffice it to say that these operators define equations whose solutions determine the values of the corresponding physical property that can be observed when a measurement is carried out; only the values so determined can be observed. This should suggest the origins of quantum mechanics' prediction that some measurements will produce discrete or quantized values of certain variables (e.g., energy, angular momentum, etc.). 1.02: Wavefunctions The eigenfunctions of a quantum mechanical operator depend on the coordinates upon which the operator acts; these functions are called wavefunctions In addition to operators corresponding to each physically measurable quantity, quantum mechanics describes the state of the system in terms of a wavefunction \(\Psi\) that is a function of the coordinates {q\(_j\)} and of time \(t\). The function \(|\Psi(q_j,t)|^2 = \Psi^*\Psi\) gives the probability density for observing the coordinates at the values \(q_j\) at time t. For a many-particle system such as the \(H_2O\) molecule, the wavefunction depends on many coordinates.
For the \(H_2O\) example, it depends on the x, y, and z (or \(r\), \(\theta\), and \(\phi\)) coordinates of the ten electrons and the x, y, and z (or \(r\), \(\theta\), and \(\phi\)) coordinates of the oxygen nucleus and of the two protons; a total of thirty-nine coordinates appear in \(\Psi\). In classical mechanics, the coordinates \(q_j\) and their corresponding momenta \(p_j\) are functions of time. The state of the system is then described by specifying \(q_j(t)\) and \(p_j(t)\). In quantum mechanics, the concept that \(q_j\) is known as a function of time is replaced by the concept of the probability density for finding \(q_j\) at a particular value at a particular time t: \(|\Psi(q_j,t)|^2\). Knowledge of the corresponding momenta as functions of time is also relinquished in quantum mechanics; again, only knowledge of the probability density for finding \(p_j\) with any particular value at a particular time \(t\) remains.
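Interpreting \(|\Psi|^2\) as a probability density requires that it integrate to unity over all of its coordinates. A quick numerical illustration with a hypothetical one-coordinate wavefunction (a normalized Gaussian, my own example, not from the text):

```python
import numpy as np

# hypothetical normalized trial wavefunction: Psi(x) = pi^(-1/4) exp(-x^2/2)
x = np.linspace(-10.0, 10.0, 20001)
psi = np.pi**(-0.25) * np.exp(-x**2 / 2.0)

density = np.abs(psi)**2                                       # |Psi|^2
# trapezoid-rule quadrature of the density over the grid; should be ~1
total = np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(x))
```

Any candidate wavefunction that fails this check must be rescaled before its square can be read as a probability density.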
The Time-Dependent Schrödinger Equation How to extract from $\Psi(q_j,t)$ knowledge about momenta is treated later in this chapter, where the structure of quantum mechanics, the use of operators and wavefunctions to make predictions and interpretations about experimental measurements, and the origin of 'uncertainty relations' such as the well known Heisenberg uncertainty condition dealing with measurements of coordinates and momenta are also treated. Before moving deeper into understanding what quantum mechanics 'means', it is useful to learn how the wavefunctions $\Psi$ are found by applying the basic equation of quantum mechanics, the Schrödinger equation, to a few exactly soluble model problems. Knowing the solutions to these 'easy' yet chemically very relevant models will then facilitate learning more of the details about the structure of quantum mechanics because these model cases can be used as 'concrete examples'. The Schrödinger equation is a differential equation depending on time and on all of the spatial coordinates necessary to describe the system at hand (thirty-nine for the H$_2$O example cited above). It is usually written $\textbf{H} \Psi = i\hbar\dfrac{\partial \Psi}{\partial t} \nonumber$ where $\Psi(q_j,t)$ is the unknown wavefunction and $\textbf{H}$ is the operator corresponding to the total energy of the system. This operator is called the Hamiltonian and is formed, as stated above, by first writing down the classical mechanical expression for the total energy (kinetic plus potential) in Cartesian coordinates and momenta and then replacing all classical momenta $p_j$ by their quantum mechanical operators $p_j = -i\hbar \dfrac{\partial}{\partial q_j}$.
For the H$_2$O example used above, the classical mechanical energy of all thirteen particles is $E = \sum\limits_i \left( \dfrac{p_i^2}{2m_e} + \dfrac{1}{2}\sum\limits_j \dfrac{e^2}{r_{i,j}} -\sum\limits_aZ_a \dfrac{e^2}{r_{i,a}}\right) + \sum\limits_a\left( \dfrac{p_a^2}{2m_a} + \dfrac{1}{2}\sum\limits_b Z_a Z_b \dfrac{e^2}{r_{a,b}} \right) \nonumber$ where the indices i and j are used to label the ten electrons whose thirty cartesian coordinates are {q$_i$} and a and b label the three nuclei whose charges are denoted {Z$_a$}, and whose nine cartesian coordinates are {q$_a$}. The electron and nuclear masses are denoted $m_e$ and {m$_a$}, respectively. The corresponding Hamiltonian operator is $\textbf{H} = \sum\limits_i \left( - \left( \dfrac{\hbar^2}{2m_e}\right) \dfrac{\partial^2}{\partial q_i^2} + \dfrac{1}{2} \sum\limits_j \dfrac{e^2}{r_{i,j}} - \sum\limits_a Z_a \dfrac{e^2}{r_{i,a}} \right) + \sum\limits_a \left( -\left(\dfrac{\hbar^2}{2m_a}\right) \dfrac{\partial^2}{\partial q_a^2} + \dfrac{1}{2}\sum\limits_b Z_aZ_b\dfrac{e^2}{r_{a,b}} \right) . \nonumber$ Notice that H is a second order differential operator in the space of the thirty-nine Cartesian coordinates that describe the positions of the ten electrons and three nuclei. It is a second order operator because the momenta appear in the kinetic energy as $p_j^2$ and $p_a^2$, and the quantum mechanical operator for each momentum $p = -i\hbar \dfrac{\partial }{\partial q}$ is of first order.
The Schrödinger equation for the $H_2O$ example then reads $\sum\limits_i \left[ - \left( \dfrac{\hbar^2}{2m_e}\right) \dfrac{\partial^2}{\partial q_i^2} + \dfrac{1}{2} \sum\limits_j \dfrac{e^2}{r_{i,j}} - \sum\limits_a Z_a \dfrac{e^2}{r_{i,a}} \right] \Psi + \sum\limits_a \left[ -\left(\dfrac{\hbar^2}{2m_a}\right) \dfrac{\partial^2}{\partial q_a^2} + \dfrac{1}{2}\sum\limits_b Z_aZ_b\dfrac{e^2}{r_{a,b}} \right] \Psi \nonumber$ $= i\hbar \dfrac{\partial \Psi}{\partial t} \nonumber$ If the Hamiltonian operator contains the time variable explicitly, one must solve the time-dependent Schrödinger equation. If the Hamiltonian operator does not contain the time variable explicitly, one can solve the time-independent Schrödinger equation. The Time-Independent Schrödinger Equation In cases where the classical energy, and hence the quantum Hamiltonian, do not contain terms that are explicitly time dependent (e.g., interactions with time varying external electric or magnetic fields would add to the above classical energy expression time dependent terms discussed later in this text), the separation of variables technique can be used to reduce the Schrödinger equation to a time-independent equation. In such cases, $\textbf{H}$ is not explicitly time dependent, so one can assume that $\Psi(q_j,t)$ is of the form $\Psi(q_j,t) = \Psi(q_j)F(t). \nonumber$ Substituting this 'ansatz' into the time-dependent Schrödinger equation gives $\Psi (q_j) i\hbar \dfrac{\partial F}{\partial t} = \textbf{H} \Psi(q_j)F(t). \nonumber$ Dividing by $\Psi(q_j)F(t)$ then gives $F^{-1} \left( i\hbar \dfrac{\partial F}{\partial t} \right) = \Psi^{-1}(\textbf{H} \Psi (q_j)). \nonumber$ Since F(t) is only a function of time t, and $\Psi(q_j)$ is only a function of the spatial coordinates {$q_j$}, and because the left hand and right hand sides must be equal for all values of t and of {$q_j$}, both the left and right hand sides must equal a constant.
If this constant is called E, the two equations that are embodied in this separated Schrödinger equation read as follows: $H\Psi (q_j) = E \Psi (q_j), \label{TISE}$ $i \hbar \dfrac{\partial F(t)}{\partial t} = i\hbar \dfrac{dF(t)}{dt} = E F(t). \nonumber$ Equation \ref{TISE} is called the time-independent Schrödinger Equation; it is a so-called eigenvalue equation in which one is asked to find functions that yield a constant multiple of themselves when acted on by the Hamiltonian operator. Such functions are called eigenfunctions of H and the corresponding constants are called eigenvalues of H. For example, if H were of the form $\dfrac{-\hbar^2}{2M}\dfrac{\partial ^2}{\partial \phi^2} = H \nonumber$ then functions of the form $e^{im\phi}$ would be eigenfunctions because $\left( \dfrac{-\hbar^2}{2M} \dfrac{\partial^2}{\partial \phi^2} \right) e^{im\phi} = \left( \dfrac{m^2 \hbar^2}{2M} \right) e^{im\phi} \nonumber$ In this case, $\left(\dfrac{m^2 \hbar^2}{2M} \right)$ is the eigenvalue. When the Schrödinger equation can be separated to generate a time-independent equation describing the spatial coordinate dependence of the wavefunction, the eigenvalue $E$ must be returned to the equation determining $F(t)$ to find the time dependent part of the wavefunction. By solving $i\hbar\dfrac{dF(t)}{dt} = EF(t) \nonumber$ once $E$ is known, one obtains $F(t) = e^{-iEt/\hbar} \nonumber$ and the full wavefunction can be written as $\Psi(q_j,t) = \Psi(q_j)e^{-iEt/\hbar} \nonumber$ For the above example, the time dependence is expressed by $F(t) = e^{ -\dfrac{it}{\hbar}\dfrac{m^2\hbar^2}{2M} } \nonumber$ Having been introduced to the concepts of operators, wavefunctions, the Hamiltonian and its Schrödinger equation, it is important to now consider several examples of the applications of these concepts.
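Both separated pieces above, the eigenvalue relation for $e^{im\phi}$ and the time factor $F(t)$, can be verified symbolically. A sketch with sympy:

```python
import sympy as sp

phi, t, hbar, M, E = sp.symbols('phi t hbar M E', positive=True)
m = sp.symbols('m', integer=True, nonzero=True)

# H = -hbar^2/(2M) d^2/dphi^2 acting on the example eigenfunction exp(i m phi)
psi = sp.exp(sp.I*m*phi)
Hpsi = -hbar**2/(2*M) * sp.diff(psi, phi, 2)
eigenvalue = sp.simplify(Hpsi / psi)           # expect m^2 hbar^2 / (2M)

# the time factor F(t) = exp(-i E t/hbar) must satisfy i hbar dF/dt = E F
F = sp.exp(-sp.I*E*t/hbar)
time_residual = sp.simplify(sp.I*hbar*sp.diff(F, t) - E*F)
```

The ratio Hpsi/psi being independent of $\phi$ is precisely what makes $e^{im\phi}$ an eigenfunction rather than a generic solution.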
The examples treated below were chosen to provide the learner with valuable experience in solving the Schrödinger equation; they were also chosen because the models they embody form the most elementary chemical models of electronic motions in conjugated molecules and in atoms, rotations of linear molecules, and vibrations of chemical bonds.
The number of dimensions depends on the number of particles and the number of spatial (and other) dimensions needed to characterize the position and motion of each particle Consider an electron of mass m and charge e moving on a two-dimensional surface that defines the x,y plane (perhaps the electron is constrained to the surface of a solid by a potential that binds it tightly to a narrow region in the z-direction), and assume that the electron experiences a constant potential $V_0$ at all points in this plane (on any real atomic or molecular surface, the electron would experience a potential that varies with position in a manner that reflects the periodic structure of the surface). The pertinent time independent Schrödinger equation is: $-\dfrac{\hbar^2}{2m}\left( \dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} \right) \psi(x,y) + V_0 \psi(x,y) = E \psi(x,y) \nonumber$ Because there are no terms in this equation that couple motion in the x and y directions (e.g., no terms of the form $x^ay^b$ or $\dfrac{\partial}{\partial x}$ $\dfrac{\partial}{\partial y}$ or $x\dfrac{\partial}{\partial y}$), separation of variables can be used to write $\psi$ as a product $\psi(x,y)=A(x)B(y)$.
Substitution of this form into the Schrödinger equation, followed by collecting together all x-dependent and all y-dependent terms, gives: $- \dfrac{\hbar^2}{2m} A^{-1}\dfrac{\partial^2A}{\partial x^2} -\dfrac{\hbar^2}{2m}B^{-1}\dfrac{\partial^2 B}{\partial y^2} =E-V_0 \nonumber$ Since the first term contains no y-dependence and the second contains no x-dependence, both must actually be constant (these two constants are denoted $E_x$ and $E_y$, respectively), which allows two separate Schrödinger equations to be written: $\dfrac{-\hbar^2}{2m}A^{-1}\dfrac{\partial^2 A}{\partial x^2} = E_x \text{, and } \nonumber$ $\dfrac{-\hbar^2}{2m}B^{-1}\dfrac{\partial^2B}{\partial y^2} = E_y \nonumber$ The total energy E can then be expressed in terms of these separate energies $E_x$ and $E_y$ as $E_x + E_y =E-V_0$. Solutions to the x- and y- Schrödinger equations are easily seen to be: $A(x) = e^{ix \sqrt{ \dfrac{2mE_x}{\hbar^2} } } \text{ and } \nonumber$ $e^{-ix \sqrt{\dfrac{2mE_x}{\hbar^2}}} \nonumber$ $B(y) = e^{iy \sqrt{ \dfrac{2mE_y}{\hbar^2}}} \text{ and } \nonumber$ $e^{-iy \sqrt{\dfrac{2mE_y}{\hbar^2}}} \nonumber$ Two independent solutions are obtained for each equation because the x- and y-space Schrödinger equations are both second order differential equations. Boundary Conditions The boundary conditions, not the Schrödinger equation, determine whether the eigenvalues will be discrete or continuous If the electron is entirely unconstrained within the x,y plane, the energies $E_x$ and $E_y$ can assume any value; this means that the experimenter can 'inject' the electron onto the x,y plane with any total energy E and any components $E_x$ and $E_y$ along the two axes as long as $E_x + E_y = E$. In such a situation, one speaks of the energies along both coordinates as being 'in the continuum' or 'not quantized'.
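One can confirm symbolically that the two exponentials quoted above solve $-\dfrac{\hbar^2}{2m}A'' = E_x A$; a sympy sketch:

```python
import sympy as sp

x, hbar, m, Ex = sp.symbols('x hbar m E_x', positive=True)
k = sp.sqrt(2*m*Ex)/hbar    # the wavevector appearing in the exponents above

# residual of -hbar^2/(2m) A'' - E_x A for each of the two plane waves
residuals = [sp.simplify(-hbar**2/(2*m)*sp.diff(A, x, 2) - Ex*A)
             for A in (sp.exp(sp.I*k*x), sp.exp(-sp.I*k*x))]
```

Both residuals vanish identically, which is the statement that the pair forms a complete set of solutions of this second order equation.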
In contrast, if the electron is constrained to remain within a fixed area in the x,y plane (e.g., a rectangular or circular region), then the situation is qualitatively different. Constraining the electron to any such specified area gives rise to so-called boundary conditions that impose additional requirements on the above A and B functions. These constraints can arise, for example, if the potential $V(x,y)$ becomes very large for x,y values outside the region, in which case, the probability of finding the electron outside the region is very small. Such a case might represent, for example, a situation in which the molecular structure of the solid surface changes outside the enclosed region in a way that is highly repulsive to the electron. For example, if motion is constrained to take place within a rectangular region defined by 0 $\leq$ x $\leq L_x$; 0 $\leq y \leq L_y$, then the continuity property that all wavefunctions must obey (because of their interpretation as probability densities, which must be continuous) causes A(x) to vanish at 0 and at L$_x$. Likewise, B(y) must vanish at 0 and at L$_y$. To implement these constraints for A(x), one must linearly combine the above two solutions e$^{ix \sqrt{\dfrac{2mE_x}{\hbar^2}}}$ and e$^{-ix \sqrt{\dfrac{2mE_x}{\hbar^2}}}$ to achieve a function that vanishes at x=0: $A(x) = e^{ix \sqrt{\dfrac{2mE_x}{\hbar^2}}} - e^{-ix \sqrt{\dfrac{2mE_x}{\hbar^2}}} \nonumber$ One is allowed to linearly combine solutions of the Schrödinger equation that have the same energy (i.e., are degenerate) because Schrödinger equations are linear differential equations.
An analogous process must be applied to B(y) to achieve a function that vanishes at y=0: $B(y) = e^{iy \sqrt{\dfrac{2mE_y}{\hbar^2}}} - e^{-iy \sqrt{\dfrac{2mE_y}{\hbar^2}}} \nonumber$ Further requiring A(x) and B(y) to vanish, respectively, at x=L$_x$ and y=L$_y$, gives equations that can be obeyed only if $E_x$ and $E_y$ assume particular values: $e^{iL_x \sqrt{\dfrac{2mE_x}{\hbar^2}}} - e^{-iL_x \sqrt{\dfrac{2mE_x}{\hbar^2}}} = 0 \text{ and } \nonumber$ $e^{iL_y \sqrt{\dfrac{2mE_y}{\hbar^2}}} - e^{-iL_y \sqrt{\dfrac{2mE_y}{\hbar^2}}} = 0 \nonumber$ These equations are equivalent to $\sin\left(L_x \sqrt{\dfrac{2mE_x}{\hbar^2}} \right) = \sin\left(L_y \sqrt{\dfrac{2mE_y}{\hbar^2}} \right) = 0 \nonumber$ Knowing that $\sin(\theta)$ vanishes at $\theta = n\pi$, for n=1,2,3,... (although $\sin(\theta)$ also vanishes for n=0, that choice makes the wavefunction vanish for all x or y and is therefore unacceptable because it represents zero probability density at all points in space), one concludes that the energies $E_x$ and $E_y$ can assume only values that obey: $L_x \sqrt{\dfrac{2mE_x}{\hbar^2}} = n_x \pi \nonumber$, $L_y \sqrt{\dfrac{2mE_y}{\hbar^2}} = n_y\pi \text{, or } \nonumber$ $E_x = \dfrac{n_x^2\pi^2\hbar^2}{2mL_x^2} \text{ , and } \nonumber$ $E_y = \dfrac{n_y^2\pi^2\hbar^2}{2mL_y^2}, \text{ with } n_x \text{ and } n_y = \text{1,2,3,...} \nonumber$ It is important to stress that it is the imposition of boundary conditions, expressing the fact that the electron is spatially constrained, that gives rise to quantized energies. In the absence of spatial confinement, or with confinement only at x =0 or L$_x$ or only at y =0 or L$_y$, quantized energies would not be realized. In this example, confinement of the electron to a finite interval along both the x and y coordinates yields energies that are quantized along both axes.
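To get a feel for the magnitudes, the quantized levels $E_n = n^2\pi^2\hbar^2/(2mL^2)$ can be evaluated numerically. A sketch for an electron in a 1 nm box (the box size is my illustrative choice, not from the text; SI constants assumed):

```python
import math

HBAR = 1.054571817e-34    # J*s
M_E  = 9.1093837015e-31   # kg, electron mass
EV   = 1.602176634e-19    # J per eV

def box_energy(n, L, m=M_E):
    """E_n = n^2 pi^2 hbar^2 / (2 m L^2) for a 1-D box of length L."""
    return n**2 * math.pi**2 * HBAR**2 / (2.0 * m * L**2)

L = 1.0e-9   # a 1 nm box (illustrative choice)
levels_eV = [box_energy(n, L) / EV for n in (1, 2, 3)]
```

The n = 1 level comes out near 0.38 eV for these choices, and the spacing grows as n squared, the characteristic signature of box confinement.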
If the electron were confined along one coordinate (e.g., between 0 $\leq$ x $\leq L_x$) but not along the other (i.e., B(y) is either restricted to vanish at y=0 or at y=L$_y$ or at neither point), then the total energy E lies in the continuum; its $E_x$ component is quantized but $E_y$ is not. Such cases arise, for example, when a linear triatomic molecule has more than enough energy in one of its bonds to rupture it but not much energy in the other bond; the first bond's energy lies in the continuum, but the second bond's energy is quantized. Perhaps more interesting is the case in which the bond with the higher dissociation energy is excited to a level that is not enough to break it but that is in excess of the dissociation energy of the weaker bond. In this case, one has two degenerate states: (i) the strong bond having high internal energy and the weak bond having low energy ($\psi_1$), and (ii) the strong bond having little energy and the weak bond having more than enough energy to rupture it ($\psi_2$). Although an experiment may prepare the molecule in a state that contains only the former component (i.e., $\psi= C_1\psi_1 + C_2\psi_2$ with $C_2 \ll C_1$), coupling between the two degenerate functions (induced by terms in the Hamiltonian H that have been ignored in defining $\psi_1$ and $\psi_2$) usually causes the true wavefunction $\Psi = e^{-it\textbf{H}/\hbar} \psi$ to acquire a component of the second function as time evolves. In such a case, one speaks of internal vibrational energy flow giving rise to unimolecular decomposition of the molecule.
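The degenerate-state mixing described above can be illustrated with a minimal two-state model: two levels of equal energy coupled by a small off-diagonal term, propagated with $\Psi(t) = e^{-i\textbf{H}t/\hbar}\psi(0)$. The matrix, the coupling strength, and the units ($\hbar$ = 1) are all my own arbitrary choices, not taken from the text:

```python
import numpy as np

E0, V = 1.0, 0.05                    # degenerate energy and a small coupling (hbar = 1)
H = np.array([[E0, V],
              [V, E0]], dtype=complex)

# diagonalize once, then propagate psi(t) = exp(-i H t) psi(0)
evals, evecs = np.linalg.eigh(H)

def propagate(t, psi0=np.array([1.0, 0.0], dtype=complex)):
    U = evecs @ np.diag(np.exp(-1j*evals*t)) @ evecs.conj().T
    return U @ psi0

# population of state 2, starting purely in state 1; grows as sin^2(V t)
p2 = abs(propagate(np.pi/(2*V))[1])**2
```

Even though the system starts entirely in the first function, the coupling transfers essentially all of the amplitude to the second one after a time of order $\pi/2V$, which is the qualitative content of the energy-flow argument above.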
Energies and Wavefunctions for Bound States For discrete energy levels, the energies are specified functions that depend on quantum numbers, one for each degree of freedom that is quantized Returning to the situation in which motion is constrained along both axes, the resultant total energies and wavefunctions (obtained by inserting the quantum energy levels into the expressions for $A(x)B(y)$) are as follows: $E_x = \dfrac{n_x^2\pi^2\hbar^2}{2mL_x^2}, \nonumber$ and $E_y = \dfrac{n_y^2\pi^2\hbar^2}{2mL_y^2}, \nonumber$ $E = E_x + E_y, \nonumber$ $\psi (x,y) = \left( \sqrt{\dfrac{1}{2L_x}} \right)\left( \sqrt{\dfrac{1}{2L_y}} \right) \left[ e^{\dfrac{in_x\pi x}{L_x}} - e^{\dfrac{-in_x\pi x}{L_x}}\right] \left[ e^{\dfrac{in_y\pi y}{L_y}} - e^{\dfrac{-in_y\pi y}{L_y}}\right] \nonumber$ with $n_x$ and $n_y$ = 1,2,3, ... . The two $\sqrt{\dfrac{1}{2L}}$ factors are included to guarantee that $\psi$ is normalized: $\int|\psi(x,y)|^2 \text{dx dy} = 1. \nonumber$ Normalization allows $|\psi(x,y)|^2$ to be properly identified as a probability density for finding the electron at a point x, y. Quantized Action Can Also be Used to Derive Energy Levels There is another approach that can be used to find energy levels and is especially straightforward to use for systems whose Schrödinger equations are separable. The so-called classical action (denoted S) of a particle moving with momentum p along a path leading from initial coordinate $\textbf{q}_i$ at initial time t$_i$ to a final coordinate $\textbf{q}_f$ at time $t_f$ is defined by: $S = \int\limits ^{\textbf{q}_f,t_f}_{\textbf{q}_i,t_i} \textbf{p} \cdot \textbf{dq} \nonumber$ Here, the momentum vector p contains the momenta along all coordinates of the system, and the coordinate vector q likewise contains the coordinates along all such degrees of freedom.
For example, in the two-dimensional particle in a box problem considered above, q = (x, y) has two components as does p = (p$_x$, p$_y$), and the action integral is: $S = \int\limits _{x_i;y_i;t_i}^{x_f;y_f;t_f}( p_x dx + p_y dy). \nonumber$ In computing such actions, it is essential to keep in mind the sign of the momentum as the particle moves from its initial to its final positions. An example will help clarify these matters. For systems such as the above particle in a box example for which the Hamiltonian is separable, the action integral decomposes into a sum of such integrals, one for each degree of freedom. In this two-dimensional example, the additivity of H: $H = H_x + H_y = \dfrac{p_x^2}{2m} + \dfrac{p_y^2}{2m} + V(x) + V(y) \nonumber$ $= \dfrac{-\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2} + V(x) -\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial y^2} + V(y) \nonumber$ means that p$_x$ and p$_y$ can be independently solved for in terms of the potentials V(x) and V(y) as well as the energies $E_x$ and $E_y$ associated with each separate degree of freedom: $p_x = \pm\sqrt{2m(E_x - V(x))} \nonumber$ $p_y = \pm\sqrt{2m(E_y - V(y))}; \nonumber$ the signs on p$_x$ and p$_y$ must be chosen to properly reflect the motion that the particle is actually undergoing. Substituting these expressions into the action integral yields: $S = S_x + S_y \nonumber$ $= \int\limits^{x_f,t_f}_{x_i;t_i} \pm \sqrt{2m(E_x - V(x))} \text{dx} + \int\limits ^{y_f;t_f}_{y_i;t_i} \pm \sqrt{2m(E_y - V(y))} \text{dy}.
\nonumber$ The relationship between these classical action integrals and the existence of quantized energy levels has been shown to involve equating the classical action for motion on a closed path (i.e., a path that starts and ends at the same place after undergoing motion away from the starting point but eventually returning to the starting coordinate at a later time) to an integral multiple of Planck's constant: $S_{\text{closed}} = \int\limits^{\textbf{q}_f= \textbf{q}_i;t_f}_{\textbf{q}_i;t_i}\textbf{p}\cdot \textbf{dq} = n h \qquad \qquad (\text{n} = 1, 2, 3, 4, ...). \nonumber$ Applied to each of the independent coordinates of the two-dimensional particle in a box problem, this expression reads: $n_xh = \int\limits_{x=0}^{x=L_x} \sqrt{2m(E_x - V(x))}dx + \int\limits_{x=L_x}^{x=0}-\sqrt{2m(E_x - V(x))}dx \nonumber$ $n_yh = \int\limits_{y=0}^{y=L_y} \sqrt{2m(E_y - V(y))}dy + \int\limits_{y=L_y}^{y=0}-\sqrt{2m(E_y - V(y))}dy. \nonumber$ Notice that the sign of the momenta are positive in each of the first integrals appearing above (because the particle is moving from x = 0 to x = L$_x$, and analogously for y-motion, and thus has positive momentum) and negative in each of the second integrals (because the motion is from x = L$_x$ to x = 0 (and analogously for y-motion) and thus with negative momentum). Within the region bounded by 0 $\leq$ x $\leq L_x$; 0 $\leq$ y $\leq L_y$, the potential vanishes, so V(x) = V(y) = 0. Using this fact, and reversing the upper and lower limits, and thus the sign, in the second integrals above, one obtains: $n_x h = 2\int\limits^{x=L_x}_{x=0} \sqrt{2mE_x}dx = 2\sqrt{2mE_x} L_x \nonumber$ $n_y h = 2\int\limits^{y=L_y}_{y=0}\sqrt{2mE_y}dy = 2\sqrt{2mE_y} L_y.
\nonumber$ Solving for $E_x$ and $E_y$, one finds: $E_x = \dfrac{(n_xh)^2}{8mL_x^2} \nonumber$ $E_y = \dfrac{(n_yh)^2}{8mL_y^2} \nonumber$ These are the same quantized energy levels that arose when the wavefunction boundary conditions were matched at x = 0, x = L$_x$ and y = 0, y = L$_y$. In this case, one says that the Bohr-Sommerfeld quantization condition: $n h = \int\limits^{\textbf{q}_f=\textbf{q}_i;t_f}_{\textbf{q}_i;t_i}\textbf{p} \cdot \textbf{dq} \nonumber$ has been used to obtain the result.
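The algebra leading from $n h = 2\sqrt{2mE_x}L_x$ to the quoted energies, and the equivalence with the boundary-condition result $n^2\pi^2\hbar^2/(2mL^2)$ once $\hbar = h/2\pi$ is inserted, can both be checked symbolically; a sympy sketch:

```python
import sympy as sp

n, h, m, L, E = sp.symbols('n h m L E', positive=True)

# closed-path action for one box coordinate: 2 sqrt(2 m E) L = n h, solved for E
E_action = sp.solve(sp.Eq(2*sp.sqrt(2*m*E)*L, n*h), E)[0]   # n^2 h^2 / (8 m L^2)

# the boundary-condition energy, rewritten in terms of h via hbar = h/(2 pi)
E_boundary = n**2 * sp.pi**2 * (h/(2*sp.pi))**2 / (2*m*L**2)
```

The two expressions coincide term by term, confirming that the quantized-action route reproduces the wavefunction-matching result.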
The particle-in-a-box problem provides an important model for several relevant chemical situations The above 'particle in a box' model for motion in two dimensions can obviously be extended to three dimensions or to one. For two and three dimensions, it provides a crude but useful picture for electronic states on surfaces or in crystals, respectively. Free motion within a spherical volume gives rise to eigenfunctions that are used in nuclear physics to describe the motions of neutrons and protons in nuclei. In the so-called shell model of nuclei, the neutrons and protons fill separate s, p, d, etc. orbitals with each type of nucleon forced to obey the Pauli principle. These orbitals are not the same in their radial 'shapes' as the s, p, d, etc. orbitals of atoms because, in atoms, there is an additional radial potential $V(r) = -Z \dfrac{e^2}{r} \nonumber$ present. However, their angular shapes are the same as in atomic structure because, in both cases, the potential is independent of $\theta$ and $\phi$. This same spherical box model has been used to describe the orbitals of valence electrons in clusters of mono-valent metal atoms such as Cs$_n$, Cu$_n$, Na$_n$ and their positive and negative ions. Because of the metallic nature of these species, their valence electrons are sufficiently delocalized to render this simple model rather effective (see T. P. Martin, T. Bergmann, H. Göhlich, and T. Lange, J. Phys. Chem. 95, 6421 (1991)). One-dimensional free particle motion provides a qualitatively correct picture for $\pi$-electron motion along the p$_\pi$ orbitals of a delocalized polyene. The one cartesian dimension then corresponds to motion along the delocalized chain. In such a model, the box length L is related to the carbon-carbon bond length R and the number N of carbon centers involved in the delocalized network: L = (N-1)R. Below, such a conjugated network involving nine centers is depicted.
In this example, the box length would be eight times the C-C bond length. Conjugated $\pi$ Network with 9 Centers Involved The eigenstates $\psi_n$(x) and their energies E$_n$ represent orbitals into which electrons are placed. In the example case, if nine $\pi$ electrons are present (e.g., as in the 1,3,5,7- nonatetraene radical), the ground electronic state would be represented by a total wavefunction consisting of a product in which the lowest four $\psi$'s are doubly occupied and the fifth $\psi$ is singly occupied: $\Psi = \psi_1\alpha\psi_1\beta\psi_2\alpha\psi_2\beta\psi_3\alpha\psi_3\beta\psi_4\alpha\psi_4\beta\psi_5\alpha. \nonumber$ A product wavefunction is appropriate because the total Hamiltonian involves the kinetic plus potential energies of nine electrons. To the extent that this total energy can be represented as the sum of nine separate energies, one for each electron, the Hamiltonian allows a separation of variables $H \cong \sum\limits_j H(j) \nonumber$ in which each H(j) describes the kinetic and potential energy of an individual electron. This (approximate) additivity of H implies that solutions of H $\Psi$ = E $\Psi$ are products of solutions to $H (j) \psi(\textbf{r}_j) = E_j \psi(\textbf{r}_j). \nonumber$ The two lowest $\pi$-excited states would correspond to states of the form $\Psi^* = \psi_1\alpha \psi_1 \beta \psi_2 \alpha \psi_2 \beta \psi_3\alpha \psi_3\beta \psi_4 \alpha \psi_5 \beta \psi_5 \alpha \nonumber$ and $\Psi'^* = \psi_1 \alpha \psi_1 \beta \psi_2 \alpha \psi_2\beta \psi_3 \alpha \psi_3 \beta \psi_4 \alpha \psi_4 \beta \psi_6 \alpha, \nonumber$ where the spin-orbitals (orbitals multiplied by $\alpha$ or $\beta$) appearing in the above products depend on the coordinates of the various electrons. 
For example, $\psi_1\alpha \psi_1 \beta \psi_2 \alpha \psi_2 \beta \psi_3\alpha \psi_3\beta \psi_4 \alpha \psi_5 \beta \psi_5 \alpha \nonumber$ denotes $\psi_1\alpha(\textbf{r}_1) \psi_1 \beta(\textbf{r}_2) \psi_2 \alpha(\textbf{r}_3) \psi_2 \beta (\textbf{r}_4) \psi_3\alpha (\textbf{r}_5) \psi_3\beta (\textbf{r}_6)\psi_4 \alpha (\textbf{r}_7) \psi_5 \beta (\textbf{r}_8) \psi_5 \alpha (\textbf{r}_9) \nonumber$ The electronic excitation energies within this model would be $\Delta E^* = \pi^2 \dfrac{\hbar^2}{2m}\left[ \dfrac{5^2}{L^2} - \dfrac{4^2}{L^2} \right] \nonumber$ and $\Delta E'^* = \pi^2 \dfrac{\hbar^2}{2m} \left[ \dfrac{6^2}{L^2} - \dfrac{5^2}{L^2} \right] , \nonumber$ for the two excited-state functions described above. It turns out that this simple model of $\pi$-electron energies provides a qualitatively correct picture of such excitation energies. This simple particle-in-a-box model does not yield orbital energies that relate to ionization energies unless the potential 'inside the box' is specified. Choosing the value of this potential V$_0$ such that $V_0 + \pi^2 \dfrac{\hbar^2}{2m} \left[ \dfrac{5^2}{L^2}\right] \nonumber$ is equal to minus the lowest ionization energy of the 1,3,5,7-nonatetraene radical, gives energy levels $\left( \text{as } E = V_0 + \pi^2 \dfrac{\hbar^2}{2m} \left[ \dfrac{n^2}{L^2} \right] \right)$ which then are approximations to ionization energies. The individual $\pi$-molecular orbitals $\psi_n = \sqrt{ \dfrac{2}{L} } \sin \left(\dfrac{n\pi x}{L}\right) \nonumber$ are depicted in the figure below for a model of the 1,3,5-hexatriene $\pi$-orbital system for which the 'box length' L is five times the distance $R_{CC}$ between neighboring pairs of Carbon atoms.
$\sqrt{\dfrac{2}{L}} \sin \left( \dfrac{n \pi x}{L} \right); \quad L = 5R_{CC} \nonumber$ In this figure, positive amplitude is denoted by the clear spheres and negative amplitude is shown by the darkened spheres; the magnitude of the k$^{th}$ C-atom centered atomic orbital in the n$^{th}$ $\pi$-molecular orbital is given by $\sqrt{\dfrac{2}{L}}\sin \left(\dfrac{n\pi kR_{CC}}{L}\right). \nonumber$ This simple model allows one to estimate spin densities at each carbon center and provides insight into which centers should be most amenable to electrophilic or nucleophilic attack. For example, radical attack at the C$_5$ carbon of the nine-atom system described earlier would be more facile for the ground state $\Psi$ than for either $\Psi$* or $\Psi$'*. In the former, the unpaired spin density resides in $\psi_5$, which has non-zero amplitude at the $C_5$ site $x=\dfrac{L}{2}$; in $\Psi$* and $\Psi$'*, the unpaired density is in $\psi_4$ and $\psi_6$, respectively, both of which have zero density at $C_5$. These densities reflect the values $\sqrt{\dfrac{2}{L}} \sin \left( \dfrac{n\pi kR_{CC}}{L} \right) \nonumber$ of the amplitudes for this case in which L = 8R$_{CC}$ for n = 5, 4, and 6, respectively.
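The excitation energy and the site amplitudes quoted above for the nine-center chain (L = 8R$_{CC}$) are easy to evaluate numerically. A sketch; the C-C distance of 1.40 Å is an assumed typical value, not given in the text:

```python
import math

HBAR = 1.054571817e-34    # J*s
H    = 6.62607015e-34     # J*s
C    = 2.99792458e8       # m/s
M_E  = 9.1093837015e-31   # kg
EV   = 1.602176634e-19    # J per eV

R_CC = 1.40e-10           # assumed C-C bond length in metres (typical value)
L = 8 * R_CC              # box length for the nine-center network

# Delta E* = (pi^2 hbar^2 / 2m) (5^2 - 4^2) / L^2  for the psi_4 -> psi_5 promotion
dE = math.pi**2 * HBAR**2 / (2.0 * M_E) * (5**2 - 4**2) / L**2
dE_eV = dE / EV
wavelength_nm = H * C / dE * 1e9     # photon wavelength matching this gap

# site amplitude sqrt(2/L) sin(n pi k R/L) at C5 (k = 4 bond lengths, x = L/2),
# with R factored out so only the ratio k/8 matters
def site_amp(n, k=4, n_bonds=8):
    return math.sqrt(2.0 / n_bonds) * math.sin(n * math.pi * k / n_bonds)
```

With these assumed numbers the gap comes out near 2.7 eV (a visible-region wavelength), and the amplitudes confirm the argument in the text: $\psi_5$ has its maximum magnitude at C$_5$ while $\psi_4$ and $\psi_6$ have nodes there.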
The Schrödinger equation for a single particle of mass $\mu$ moving in a central potential (one that depends only on the radial coordinate r) can be written as $\dfrac{-\hbar^2}{2\mu}\left( \dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2} \right)\psi + V \left( \sqrt{x^2 + y^2 + z^2} \right) \psi = E\psi \nonumber$ This equation is not separable in cartesian coordinates (x,y,z) because of the way x,y, and z appear together in the square root. However, it is separable in spherical coordinates, where it reads $\dfrac{-\hbar^2}{2\mu r^2}\left[ \dfrac{\partial}{\partial r} \left(r^2 \dfrac{\partial \psi}{\partial r} \right) + \dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial \psi}{\partial \theta} \right) + \dfrac{1}{\sin^2 \theta} \dfrac{\partial^2 \psi}{\partial \phi^2}\right] + V(r)\psi = E\psi. \nonumber$ Subtracting $V(r)\psi$ from both sides of the equation and multiplying by $\dfrac{-2\mu r^2}{\hbar^2}$, then moving the derivatives with respect to r to the right-hand side, one obtains $\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial \psi}{\partial \theta} \right) + \dfrac{1}{\sin^2 \theta} \dfrac{\partial^2 \psi}{\partial \phi^2} = \dfrac{-2\mu r^2}{\hbar^2} (E-V(r))\psi - \dfrac{\partial}{\partial r} \left( r^2 \dfrac{\partial \psi}{\partial r} \right). \nonumber$ Notice that the right-hand side of this equation is a function of r only; it contains no $\theta$ or $\phi$ dependence. Let's call the entire right hand side $\textbf{F}(r)\psi$, where $\textbf{F}(r)$ acts only on the r dependence, to emphasize this fact.
To further separate the $\theta$ and $\phi$ dependence, we multiply by $\sin^2 \theta$ and subtract the $\theta$ derivative terms from both sides to obtain $\dfrac{\partial^2 \psi}{\partial \phi^2} = \textbf{F}(r)\psi \sin^2 \theta - \sin \theta \dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial \psi}{\partial \theta} \right) \nonumber$ Now we have separated the $\phi$ dependence from the $\theta$ and r dependence. If we now substitute $\psi = \Phi(\phi) Q(r,\theta)$ and divide by $\Phi Q$, we obtain $\dfrac{1}{\Phi} \dfrac{\partial ^2\Phi}{\partial \phi^2} = \dfrac{1}{Q}\left( \textbf{F}(r)Q \sin^2 \theta - \sin \theta \dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial Q}{\partial \theta}\right) \right). \nonumber$ Now all of the $\phi$ dependence is isolated on the left-hand side; the right-hand side contains only r and $\theta$ dependence. Whenever one has isolated the entire dependence on one variable, as we have done above for the $\phi$ dependence, one can easily see that the left and right hand sides of the equation must equal a constant. For the above example, the left-hand side contains no r or $\theta$ dependence and the right-hand side contains no $\phi$ dependence. Because the two sides are equal, they both must actually contain no r, $\theta$, or $\phi$ dependence; that is, they are constant. For the above example, we therefore can set both sides equal to a so-called separation constant that we call -m$^2$. It will become clear shortly why we have chosen to express the constant in this form. The Hydrogenic atom problem forms the basis of much of our thinking about atomic structure. To solve the corresponding Schrödinger equation requires separation of the r, $\theta$, and $\phi$ variables.

The $\Phi$ Equation

The resulting $\Phi$ equation reads $\Phi'' + m^2\Phi = 0 \nonumber$ which has as its most general solution $\Phi = Ae^{im\phi} + Be^{-im\phi}. 
\nonumber$ We must require the function $\Phi$ to be single-valued, which means that $\Phi(\phi) = \Phi(2\pi + \phi) \: \text{or}, \nonumber$ $Ae^{im\phi}\left(1 - e^{2im\pi}\right) + Be^{-im\phi} \left( 1- e^{-2im\pi}\right) = 0. \nonumber$ This is satisfied only when m is equal to an integer, m = 0, ±1, ±2, ..., and provides another example of the rule that quantization comes from the boundary conditions on the wavefunction. Here m is restricted to certain discrete values because the wavefunction must be such that when you rotate through 2$\pi$ about the z-axis, you must get back what you started with.

The $\Theta$ Equation

Now returning to the equation in which the $\phi$ dependence was isolated from the r and $\theta$ dependence and rearranging the $\theta$ terms to the left-hand side, we have $\dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta}\left( \sin \theta \dfrac{\partial Q}{\partial \theta}\right) - \dfrac{m^2Q}{\sin^2 \theta} = \textbf{F}(r)Q. \nonumber$ In this equation we have separated the $\theta$ and r variations, so we can further decompose the wavefunction by introducing $Q = \Theta(\theta) R(r)$, which yields $\dfrac{1}{\Theta}\dfrac{1}{\sin \theta}\dfrac{\partial }{\partial \theta} \left( \sin \theta \dfrac{\partial \Theta}{\partial \theta} \right) - \dfrac{m^2}{\sin^2 \theta} = \dfrac{\textbf{F}(r)R}{R} = -\lambda, \nonumber$ where a second separation constant, $-\lambda$, has been introduced once the r and $\theta$ dependent terms have been separated onto the right and left hand sides, respectively. We now can write the $\theta$ equation as $\dfrac{1}{\sin \theta}\dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial \Theta}{\partial \theta} \right) - \dfrac{m^2 \Theta}{\sin^2 \theta} = -\lambda \: \Theta, \nonumber$ where m is the integer introduced earlier. 
To solve this equation for $\Theta$, we make the substitutions z = $\cos\theta$ and P(z) = $\Theta(\theta)$, so $\sqrt{1-z^2} = \sin\theta$, and $\dfrac{\partial}{\partial \theta} = \dfrac{\partial z}{\partial \theta}\dfrac{\partial }{\partial z} = -\sin\theta \dfrac{\partial}{\partial z} \nonumber$ The range of values for $\theta$ was $0 \leq \theta \leq \pi$, so the range for z is $-1 \leq z \leq 1$. The equation for $\Theta$, when expressed in terms of P and z, becomes $\dfrac{\text{d}}{\text{dz}}\left((1-z^2)\dfrac{\text{dP}}{\text{dz}}\right) - \dfrac{m^2P}{1-z^2} + \lambda P = 0. \nonumber$ Now we can look for polynomial solutions for P, because z is restricted to be no greater than unity in magnitude. If m = 0, we first let $P = \sum\limits_{k=0}^\infty a_kz^k, \nonumber$ and substitute into the differential equation to obtain $\sum\limits_{k=0}^\infty (k+2)(k+1)a_{k+2}z^k - \sum\limits_{k=0}^\infty (k+1)k a_kz^k + \lambda \sum\limits_{k=0}^\infty a_kz^k = 0. \nonumber$ Equating like powers of z gives $a_{k+2} = \dfrac{a_k(k(k+1)-\lambda)}{(k+2)(k+1)}. \nonumber$ Note that for large values of k $\dfrac{a_{k+2}}{a_k} \rightarrow \dfrac{k^2\left( 1+ \dfrac{1}{k} \right)}{k^2\left(1+\dfrac{2}{k} \right)\left(1+\dfrac{1}{k} \right)} = 1. \nonumber$ Since the coefficients do not decrease with k for large k, this series will diverge for z = ± 1 unless it truncates at finite order. This truncation only happens if the separation constant $\lambda$ obeys $\lambda$ = l(l+1), where l is an integer. So, once again, we see that a boundary condition (i.e., that the wavefunction be normalizable in this case) gives rise to quantization. In this case, the values of $\lambda$ are restricted to l(l+1); before, we saw that m is restricted to 0, ±1, ± 2, ... . Since this recursion relation links every other coefficient, we can choose to solve for the even and odd functions separately. 
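The truncation condition just described can be seen directly by iterating the recursion numerically; a minimal sketch (not part of the original text, with the starting coefficients a$_0$ = 1, a$_1$ = 0 chosen for the even series):

```python
def legendre_coeffs(lam, a0=1.0, a1=0.0, kmax=12):
    """Coefficients a_k from a_{k+2} = a_k (k(k+1) - lam) / ((k+2)(k+1))."""
    a = [0.0] * (kmax + 1)
    a[0], a[1] = a0, a1
    for k in range(kmax - 1):
        a[k + 2] = a[k] * (k * (k + 1) - lam) / ((k + 2) * (k + 1))
    return a

# lam = l(l+1) with l = 2: the even series truncates after z^2
# (a2 = -3 a0, and a4 = a6 = ... = 0, giving P ~ 1 - 3 z^2).
print(legendre_coeffs(2 * 3))

# A lam not of the form l(l+1) (e.g. lam = 5) never truncates,
# so the series diverges at z = +/- 1.
print(legendre_coeffs(5)[:6])
```

The first call reproduces the l = 2 Legendre polynomial (up to sign and normalization), while the second shows a non-terminating series.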
Choosing a$_0$ and then determining all of the even a$_k$ in terms of this a$_0$, followed by rescaling all of these a$_k$ to make the function normalized, generates an even solution. Choosing a$_1$ and determining all of the odd a$_k$ in like manner generates an odd solution. For l = 0, the series truncates after one term and results in $P_0(z) = 1$. For l = 1 the same thing applies and $P_1(z) = z$. For l = 2, $a_2 = -6 \dfrac{a_0}{2} = -3a_0$, so one obtains $P_2 = 3z^2-1$, and so on. These polynomials are called Legendre polynomials. For the more general case where $m \neq 0$, one can proceed as above to generate a polynomial solution for the $\Theta$ function. Doing so results in the following solutions: $P^m_l(z) = (1-z^2)^{\frac{|m|}{2}}\dfrac{d^{|m|}P_l(z)}{dz^{|m|}}. \nonumber$ These functions are called Associated Legendre polynomials, and they constitute the solutions to the $\Theta$ problem for non-zero m values. The above P and e$^{im\phi}$ functions, when re-expressed in terms of $\theta \:\text{and} \: \phi$, yield the full angular part of the wavefunction for any centrosymmetric potential. These solutions are usually written as $Y_{l,m}(\theta,\phi) = P_l^m(\cos \theta) \dfrac{1}{\sqrt{2\pi}}e^{im\phi} \nonumber$ These are called spherical harmonics. They provide the angular solution of the r,$\theta, \phi$ Schrödinger equation for any problem in which the potential depends only on the radial coordinate. Such situations include all one-electron atoms and ions (e.g., H, He$^+$, Li$^{++}$, etc.), the rotational motion of a diatomic molecule (where the potential depends only on bond length r), the motion of a nucleon in a spherically symmetrical "box" (as occurs in the shell model of nuclei), and the scattering of two atoms (where the potential depends only on interatomic distance).

The $R$ Equation

Let us now turn our attention to the radial equation, which is the only place that the explicit form of the potential appears. 
Using our derived results and specifying $V(r)$ to be the coulomb potential appropriate for an electron in the field of a nucleus of charge +Ze yields: $\dfrac{1}{r^2} \dfrac{d}{dr} \left( r^2\dfrac{dR}{dr} \right) + \left( \dfrac{2\mu}{\hbar^2} \left( E + \dfrac{Ze^2}{r} \right) - \dfrac{l(l + 1)}{r^2} \right) R = 0. \nonumber$ We can simplify things considerably if we choose rescaled length and energy units, because doing so removes the factors that depend on $\mu$, $\hbar$, and $e$. We introduce a new radial coordinate $\rho$ and a quantity $\sigma$ as follows: $\rho = \sqrt{ \dfrac{-8\mu E}{\hbar^2} }r \nonumber$ and $\sigma^2 = \dfrac{-\mu Z^2e^4}{2E\hbar^2}. \nonumber$ Notice that if $E$ is negative, as it will be for bound states (i.e., those states with energy below that of a free electron infinitely far from the nucleus and with zero kinetic energy), $\rho$ is real. On the other hand, if $E$ is positive, as it will be for states that lie in the continuum, $\rho$ will be imaginary. These two cases will give rise to qualitatively different behavior in the solutions of the radial equation developed below. We now define a function $S$ such that $S(\rho) = R(r) \nonumber$ and substitute $S$ for $R$ to obtain: $\dfrac{1}{\rho^2} \dfrac{d}{d \rho}\left( \rho^2 \dfrac{dS}{d \rho} \right) + \left( - \dfrac{1}{4} - \dfrac{l(l+1)}{\rho^2} + \dfrac{\sigma}{\rho} \right) S = 0. \nonumber$ The differential operator terms can be recast in several ways using $\dfrac{1}{\rho^2}\dfrac{d}{d \rho}\left( \rho^2 \dfrac{dS}{d \rho} \right) = \dfrac{d^2S}{d \rho^2} + \dfrac{2}{\rho}\dfrac{dS}{d \rho} = \dfrac{1}{\rho} \dfrac{d^2}{d \rho^2}(\rho S). \nonumber$ It is useful to keep in mind these three embodiments of the derivatives that enter into the radial kinetic energy; in various contexts one or another of them will be the most convenient to employ. The strategy that we now follow is characteristic of solving second order differential equations. 
We will examine the equation for S at large and small $\rho$ values. Having found solutions at these limits, we will use a power series in $\rho$ to "interpolate" between these two limits. Let us begin by examining the solution of the above equation at small values of $\rho$ to see how the radial functions behave at small r. As $\rho \rightarrow$0, the second term in the brackets will dominate. Neglecting the other two terms in the brackets, we find that, for small values of $\rho$ (or r), the solution should behave like $\rho^L$, and because the function must be normalizable, we must have $L\geq 0$. Since L can be any non-negative integer, this suggests the following more general form for S($\rho$): $S(\rho) \approx \rho^L e^{-a\rho}. \nonumber$ This form will ensure that the function is normalizable, since $S(\rho) \rightarrow 0$ as $\rho \rightarrow \infty$ for all L, as long as $\rho$ is a real quantity. If $\rho$ is imaginary, such a form may not be normalizable (see below for further consequences). Turning now to the behavior of S for large $\rho$, we make the substitution of $S(\rho)$ into the above equation and keep only the terms with the largest power of $\rho$ (e.g., the first term in brackets). Upon so doing, we obtain the equation $a^2\rho^Le^{-a\rho} = \dfrac{1}{4} \rho^Le^{-a\rho}, \nonumber$ which leads us to conclude that the exponent in the large-$\rho$ behavior of S is a = $\dfrac{1}{2}.$ Having found the small- and large-$\rho$ behaviors of S($\rho$), we can take S to have the following form to interpolate between large and small $\rho$-values: $S(\rho) = \rho^Le^{\frac{-\rho}{2}}P(\rho), \nonumber$ where the function P is expanded in an infinite power series in $\rho$ as $P(\rho) = \sum a_k\rho^k$. 
Substituting this expression for S into the above equation, we obtain $\rho P'' + P'(2L+2-\rho) + P(\sigma -L-1) =0, \nonumber$ and then substituting the power series expansion of P and solving for the a$_k$'s we arrive at: $a_{k+1} = \dfrac{(k- \sigma + L +1) a_k}{(k+1)(k+2L+2)}. \nonumber$ For large k, the ratio of expansion coefficients reaches the limit $\dfrac{a_{k+1}}{a_k} = \dfrac{1}{k},$ which has the same behavior as the power series expansion of e$^\rho$. Because the power series expansion of P describes a function that behaves like e$^\rho$ for large $\rho$, the resulting S($\rho$) function would not be normalizable because the e$^{\frac{-\rho}{2}}$ factor would be overwhelmed by this e$^\rho$ dependence. Hence, the series expansion of P must truncate in order to achieve a normalizable S function. Notice that if $\rho$ is imaginary, as it will be if E is in the continuum, the argument that the series must truncate to avoid an exponentially diverging function no longer applies. Thus, we see a key difference between bound (with $\rho$ real) and continuum (with $\rho$ imaginary) states. In the former case, the boundary condition of non-divergence arises; in the latter, it does not. To truncate at a polynomial of order n', we must have n' - $\sigma$ + L + 1 = 0. This implies that the quantity $\sigma$ introduced previously is restricted to $\sigma$ = n' + L + 1, which is certainly an integer; let us call this integer n. If we label states in order of increasing n = 1,2,3,..., we see that doing so is consistent with specifying a maximum order (n') in the P($\rho$) polynomial n' = 0,1,2,..., after which the l-value can run from l = 0, in steps of unity, up to l = n-1. Substituting the integer n for $\sigma$, we find that the energy levels are quantized because $\sigma$ is quantized (equal to n): $E = -\dfrac{\mu Z^2e^4}{2\hbar^2 n^2} \ \text{ and } \ \rho = \dfrac{2Zr}{a_on}. 
\nonumber$ Here, the length a$_o$ is the so-called Bohr radius $\left( a_o = \dfrac{\hbar^2}{\mu e^2} \right)$; it appears once the above E-expression is substituted into the equation for $\rho$. Using the recursion equation to solve for the polynomial's coefficients a$_k$ for any choice of n and l quantum numbers generates a so-called Laguerre polynomial, P$_{n-L-1}(\rho)$. These polynomials contain powers of $\rho$ from zero through n-L-1. This energy quantization does not arise for states lying in the continuum because the condition that the expansion of P$(\rho)$ terminate does not arise. The solutions of the radial equation appropriate to these scattering states (which relate to the scattering motion of an electron in the field of a nucleus of charge Z) are treated on p. 90 of EWK. In summary, separation of variables has been used to solve the full r,$\theta ,\phi$ Schrödinger equation for one electron moving about a nucleus of charge Z. The $\theta \: \text{and}\: \phi$ solutions are the spherical harmonics $Y_{L,m} (\theta,\phi).$ The bound-state radial solutions $R_{n,L}(r) = S(\rho) = \rho^Le^{\frac{-\rho}{2}}P_{n-L-1}(\rho) \nonumber$ depend on the n and l quantum numbers and are given in terms of the Laguerre polynomials (see EWK for tabulations of these polynomials).

Summary

To summarize, the quantum numbers l and m arise through boundary conditions requiring that $\psi (\theta)$ be normalizable (i.e., not diverge) and $\psi (\phi) = \psi(\phi+2\pi).$ In the texts by Atkins, EWK, and McQuarrie the differential equations obeyed by the $\theta \: \text{and} \: \phi$ components of Y$_{l,m}$ are solved in more detail and properties of the solutions are discussed. This differential equation involves the three-dimensional Schrödinger equation's angular kinetic energy operator. That is, the angular part of the above Hamiltonian is equal to $\dfrac{L^2}{2\mu r^2}$, where L$^2$ is the square of the total angular momentum operator for the electron (with eigenvalues $\hbar^2 l(l+1)$). 
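The truncation of the radial series and the resulting 1/n² energy pattern can be illustrated with a short sketch. This is not from the text: the recursion is iterated for the assumed example n = 3, L = 1, and the energies are quoted in Rydberg units (i.e., setting $\mu Z^2 e^4 / 2\hbar^2 = 1$ for simplicity):

```python
def radial_coeffs(sigma, L, kmax=10, a0=1.0):
    """a_{k+1} = (k - sigma + L + 1) a_k / ((k+1)(k + 2L + 2))."""
    a = [a0]
    for k in range(kmax):
        a.append((k - sigma + L + 1) * a[k] / ((k + 1) * (k + 2 * L + 2)))
    return a

# For sigma = n = 3, L = 1 the series truncates after order n - L - 1 = 1:
# a1 = -a0/4, and a2, a3, ... all vanish.
print(radial_coeffs(3, 1)[:4])

# Bound-state energies in Rydberg units (mu Z^2 e^4 / 2 hbar^2 = 1 assumed):
energies = {n: -1.0 / n**2 for n in (1, 2, 3)}
print(energies)
```

A non-integer σ would leave every coefficient non-zero, which is exactly the divergence that the boundary condition forbids.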
The radial equation, which is the only place the potential energy enters, is found to possess both bound-states (i.e., states whose energies lie below the asymptote at which the potential vanishes and the kinetic energy is zero) and continuum states lying energetically above this asymptote. The resulting hydrogenic wavefunctions (angular and radial) and energies are summarized in Appendix B for principal quantum numbers n ranging from 1 to 3 and in Pauling and Wilson for n up to 5. There are both bound and continuum solutions to the radial Schrödinger equation for the attractive coulomb potential because, at energies below the asymptote the potential confines the particle between r=0 and an outer turning point, whereas at energies above the asymptote, the particle is no longer confined by an outer turning point (see the figure below). The solutions of this one-electron problem form the qualitative basis for much of atomic and molecular orbital theory. For this reason, the reader is encouraged to use Appendix B to gain a firmer understanding of the nature of the radial and angular parts of these wavefunctions. The orbitals that result are labeled by n, l, and m quantum numbers for the bound states and by l and m quantum numbers and the energy E for the continuum states. Much as the particle-in-a-box orbitals are used to qualitatively describe $\pi$ - electrons in conjugated polyenes, these so-called hydrogen-like orbitals provide qualitative descriptions of orbitals of atoms with more than a single electron. By introducing the concept of screening as a way to represent the repulsive interactions among the electrons of an atom, an effective nuclear charge Z$_{eff}$ can be used in place of Z in the $\psi_{n,l,m}$ and E$_{n,l}$ to generate approximate atomic orbitals to be filled by electrons in a many-electron atom. 
For example, in the crudest approximation of a carbon atom, the two 1s electrons experience the full nuclear attraction so Z$_{eff}$=6 for them, whereas the 2s and 2p electrons are screened by the two 1s electrons, so Z$_{eff}$= 4 for them. Within this approximation, one then occupies two 1s orbitals with Z=6, two 2s orbitals with Z=4 and two 2p orbitals with Z=4 in forming the full six-electron wavefunction of the lowest-energy state of carbon.
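The crude screening model for carbon just described can be put into numbers with the hydrogenic energy formula E = -Ry · Z_eff²/n²; a minimal sketch (the Rydberg value in eV is a standard constant, not from the text):

```python
RYDBERG_EV = 13.6057  # hydrogen 1s binding energy in eV (standard constant)

def hydrogenic_energy(z_eff, n):
    """Hydrogen-like orbital energy E = -Ry * Z_eff^2 / n^2, in eV."""
    return -RYDBERG_EV * z_eff**2 / n**2

# Crude screening model for carbon from the text:
# Z_eff = 6 for the unscreened 1s electrons, Z_eff = 4 for 2s and 2p.
print("E(1s)    =", hydrogenic_energy(6, 1), "eV")
print("E(2s,2p) =", hydrogenic_energy(4, 2), "eV")
```

The 1s orbital comes out far more tightly bound than the screened n = 2 orbitals, as the qualitative picture requires.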
The radial motion of a diatomic molecule in its lowest (J=0) rotational level can be described by the following Schrödinger equation: $\dfrac{-\hbar^2}{2\mu r^2}\dfrac{\partial}{\partial r}\left( r^2 \dfrac{\partial}{\partial r} \right)\psi + V(r)\psi = E\psi, \nonumber$ where $\mu$ is the reduced mass $\mu = \frac{m_1m_2}{(m_1+m_2)} \nonumber$ of the two atoms. By substituting $\psi = \frac{F(r)}{r}$ into this equation, one obtains an equation for F(r) in which the differential operators appear to be less complicated: $\dfrac{-\hbar^2}{2\mu} \dfrac{d^2F}{dr^2} + V(r)F =E F. \nonumber$ This equation is exactly the same as the equation seen above for the radial motion of the electron in the hydrogen-like atoms except that the reduced mass $\mu$ replaces the electron mass $m$ and the potential $V(r)$ is not the Coulomb potential. If the potential is approximated as a quadratic function of the bond displacement $x = r-r_e$ expanded about the point at which $V$ is minimum: $V = \dfrac{1}{2}k(r-r_e)^2, \nonumber$ the resulting harmonic-oscillator equation can be solved exactly. Because the potential V grows without bound as x approaches $\infty \text{ or } -\infty$, only bound-state solutions exist for this model problem; that is, the motion is confined by the nature of the potential, so no continuum states exist. This Schrödinger equation forms the basis for our thinking about bond stretching and angle bending vibrations as well as collective phonon motions in solids. In solving the radial differential equation for this potential (see Chapter 5 of McQuarrie), the large-r behavior is first examined. For large r, the equation reads: $\dfrac{d^2F}{dx^2} = \dfrac{1}{2}kx^2 \left( \dfrac{2\mu}{\hbar^2} \right) F, \nonumber$ where $x = r-r_e$ is the bond displacement away from equilibrium. 
Defining $\xi = \sqrt[4]{\frac{\mu k}{\hbar^2}}x$ as a new scaled radial coordinate allows the solution of the large-r equation to be written as: $F_{\text{large-}r} = e^{\frac{-\xi^2}{2}} \nonumber$ The general solution to the radial equation is then taken to be of the form: $F = e^{\frac{-\xi^2}{2}}\sum\limits_{n=0}^\infty \xi^n C_n, \nonumber$ where the C$_n$ are coefficients to be determined. Substituting this expression into the full radial equation generates a set of recursion equations for the $C_n$ amplitudes. As in the solution of the hydrogen-like radial equation, the series described by these coefficients is divergent unless the energy E happens to equal specific values. It is this requirement that the wavefunction not diverge, so that it can be normalized, that yields energy quantization. The energies of the states that arise are given by: $E_n = \hbar \sqrt{\dfrac{k}{\mu}}\left( n + \dfrac{1}{2} \right) \nonumber$ and the eigenfunctions are given in terms of the so-called Hermite polynomials $H_n(y)$ as follows: $\psi_n(x) = \dfrac{1}{\sqrt{n! 2^n}} \sqrt[4]{\dfrac{\alpha}{\pi}} \left( e^{\frac{-\alpha x^2}{2}}\right) H_n \left( \sqrt{\alpha}x \right), \nonumber$ where $\alpha = \sqrt{\frac{k\mu}{\hbar^2}}$. Within this harmonic approximation to the potential, the vibrational energy levels are evenly spaced: $\Delta E = E_{n+1} -E_n = \hbar \sqrt{\dfrac{k}{\mu}}. \nonumber$ In experimental data such evenly spaced energy level patterns are seldom seen; most commonly, one finds spacings E$_{n+1} - E_n$ that decrease as the quantum number n increases. In such cases, one says that the progression of vibrational levels displays anharmonicity. Because the $H_n$ are odd or even functions of x (depending on whether n is odd or even), the wavefunctions $\psi_n$(x) are odd or even. 
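The odd/even character of the Hermite polynomials just noted can be checked numerically with their standard three-term recursion H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x); a minimal sketch (the sample point x = 0.7 is arbitrary, chosen only for illustration):

```python
def hermite(n, x):
    """H_n(x) from the recursion H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

# Parity check: H_n(-x) = (-1)^n H_n(x), so psi_n is even or odd with n.
x = 0.7
for n in range(5):
    print(n, hermite(n, x), hermite(n, -x))
```

Even n gives H_n(-x) = H_n(x) and odd n gives H_n(-x) = -H_n(x), which is the splitting into even and odd wavefunctions described above.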
This splitting of the solutions into two distinct classes is an example of the effect of symmetry; in this case, the symmetry is caused by the symmetry of the harmonic potential with respect to reflection through the origin along the x-axis. Throughout this text, many symmetries will arise; in each case, symmetry properties of the potential will cause the solutions of the Schrödinger equation to be decomposed into various symmetry groupings. Such symmetry decompositions are of great use because they provide additional quantum numbers (i.e., symmetry labels) by which the wavefunctions and energies can be labeled. The harmonic oscillator energies and wavefunctions comprise the simplest reasonable model for vibrational motion. Vibrations of a polyatomic molecule are often characterized in terms of individual bond-stretching and angle-bending motions each of which is, in turn, approximated harmonically. This results in a total vibrational wavefunction that is written as a product of functions one for each of the vibrational coordinates. Two of the most severe limitations of the harmonic oscillator model, the lack of anharmonicity (i.e., non-uniform energy level spacings) and lack of bond dissociation, result from the quadratic nature of its potential. By introducing model potentials that allow for proper bond dissociation (i.e., that do not increase without bound as x$\rightarrow \infty$), the major shortcomings of the harmonic oscillator picture can be overcome. The so-called Morse potential (see the figure below) $V(r) = D_e \left( 1 - e^{-a(r-r_e)} \right)^2, \nonumber$ is often used in this regard. Here, $D_e$ is the bond dissociation energy, $r_e$ is the equilibrium bond length, and a is a constant that characterizes the 'steepness' of the potential and determines the vibrational frequencies. The advantage of using the Morse potential to improve upon harmonic oscillator-level predictions is that its energy levels and wavefunctions are also known exactly. 
The energies are given in terms of the parameters of the potential as follows: $E_n = \hbar \sqrt{\dfrac{k}{\mu}} \left[ \left( n+\dfrac{1}{2} \right) - \left( n + \dfrac{1}{2} \right)^2 \dfrac{\hbar}{4D_e} \sqrt{\dfrac{k}{\mu}} \right] \nonumber$ where the force constant k is $k=2D_e a^2.$ The Morse potential supports both bound states (those lying below the dissociation threshold for which vibration is confined by an outer turning point) and continuum states lying above the dissociation threshold. Its degree of anharmonicity is governed by the ratio of the harmonic energy $\hbar \sqrt{\dfrac{k}{\mu}}$ to the dissociation energy $D_e$.

1.08: Rotational Motion for a Rigid Diatomic Molecule

A diatomic molecule with fixed bond length R rotating in the absence of any external potential is described by the following Schrödinger equation: $\dfrac{-\hbar^2}{2\mu} \left[ \dfrac{1}{R^2 \sin\theta} \dfrac{\partial}{\partial \theta} \left( \sin \theta \dfrac{\partial}{\partial \theta} \right) + \dfrac{1}{R^2 \sin^2 \theta} \dfrac{\partial^2}{\partial \phi^2} \right] \psi = E\psi \nonumber$ or $\dfrac{L^2\psi}{2\mu R^2} = E \psi. \nonumber$ The angles $\theta$ and $\phi$ describe the orientation of the diatomic molecule's axis relative to a laboratory-fixed coordinate system, and $\mu$ is the reduced mass of the diatomic molecule $\mu = \dfrac{m_1m_2}{m_1 + m_2} . \nonumber$ The differential operators can be seen to be exactly the same as those that arose in the hydrogen-like-atom case, and, as discussed above, these $\theta$ and $\phi$ differential operators are identical to the $L^2$ angular momentum operator whose general properties are analyzed in Appendix G. Therefore, the same spherical harmonics that served as the angular parts of the wavefunction in the earlier case now serve as the entire wavefunction for the so-called rigid rotor: $\psi = Y_{J,M}(\theta, \phi)$. 
As detailed later in this text, the eigenvalues corresponding to each such eigenfunction are given as: $E_J = \hbar^2 \dfrac{J(J+1)}{(2\mu R^2)} = BJ(J+1) \nonumber$ and are independent of M. Thus each energy level is labeled by J and is 2J+1-fold degenerate (because M ranges from -J to J). The so-called rotational constant B $\left( \text{ defined as } \dfrac{\hbar^2}{2\mu R^2} \right)$ depends on the molecule's bond length and reduced mass. Spacings between successive rotational levels (which are of spectroscopic relevance because angular momentum selection rules often restrict $\Delta$J to 1, 0, and -1) are given by $\Delta E = B(J+1)(J+2) - BJ(J+1) = 2B(J+1). \nonumber$ These energy spacings are of relevance to microwave spectroscopy, which probes the rotational energy levels of molecules. This Schrödinger equation relates to the rotation of diatomic and linear polyatomic molecules. It also arises when treating the angular motions of electrons in any spherically symmetric potential.

Summary

The rigid rotor provides the most commonly employed approximation to the rotational energies and wavefunctions of linear molecules. As presented above, the model restricts the bond length to be fixed. Vibrational motion of the molecule gives rise to changes in $R$, which are then reflected in changes in the rotational energy levels. The coupling between rotational and vibrational motion gives rise to rotational $B$ constants that depend on vibrational state, as well as dynamical couplings, called centrifugal distortions, that cause the total ro-vibrational energy of the molecule to depend on rotational and vibrational quantum numbers in a non-separable manner.
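The rigid-rotor level pattern E_J = B J(J+1), its 2J+1 degeneracies, and the 2B(J+1) spacings can be tabulated with a minimal sketch (the value B = 2.0 is an arbitrary illustrative number, not tied to any particular molecule):

```python
def rotor_levels(B, jmax=4):
    """Rigid-rotor energies E_J = B J(J+1) with degeneracies 2J+1."""
    return [(J, B * J * (J + 1), 2 * J + 1) for J in range(jmax + 1)]

B = 2.0  # rotational constant in arbitrary units (illustrative value)
levels = rotor_levels(B)
for J, E, g in levels:
    print(J, E, g)

# Successive spacings E_{J+1} - E_J = 2B(J+1): 2B, 4B, 6B, ...
spacings = [levels[J + 1][1] - levels[J][1] for J in range(4)]
print(spacings)  # [4.0, 8.0, 12.0, 16.0]
```

The linearly growing spacings are what give a microwave rotational spectrum its characteristic nearly equally spaced lines (separated by 2B) under the ΔJ = ±1 selection rule.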
Quantum mechanics has a set of 'rules' that link operators, wavefunctions, and eigenvalues to physically measurable properties. These rules have been formulated not in some arbitrary manner nor by derivation from some higher subject. Rather, the rules were designed to allow quantum mechanics to mimic the experimentally observed facts as revealed in mother nature's data. The extent to which these rules seem difficult to understand usually reflects the presence of experimental observations that do not fit in with our common experience base. The structure of quantum mechanics (QM) relates the wavefunction $\Psi$ and operators F to the 'real world' in which experimental measurements are performed through a set of rules. Some of these rules have already been introduced above. Here, they are presented in total as follows:

1: The Time Evolution

The time evolution of the wavefunction $\Psi$ is determined by solving the time-dependent Schrödinger equation (see pp 23-25 of EWK for a rationalization of how the Schrödinger equation arises from the classical equation governing waves, Einstein's $E=h\nu$, and deBroglie's postulate that $\lambda = \frac{h}{p}$) $i\hbar \dfrac{\partial \Psi}{\partial t}= \textbf{H}\Psi, \nonumber$ where H is the Hamiltonian operator corresponding to the total (kinetic plus potential) energy of the system. For an isolated system (e.g., an atom or molecule not in contact with any external fields), H consists of the kinetic and potential energies of the particles comprising the system. To describe interactions with an external field (e.g., an electromagnetic field, a static electric field, or the 'crystal field' caused by surrounding ligands), additional terms are added to H to properly account for the system-field interactions. If H contains no explicit time dependence, then separation of space and time variables can be performed on the above Schrödinger equation by writing $\Psi = \psi e^{\frac{-iEt}{\hbar}}$ to give $\textbf{H}\psi = E\psi. 
\nonumber$ In such a case, the time dependence of the state is carried in the phase factor $e^{\frac{-iEt}{\hbar}}$; the spatial dependence appears in $\psi(q_j)$. The so-called time independent Schrödinger equation $\textbf{H} \psi=E\psi$ must be solved to determine the physically measurable energies $E_k$ and wavefunctions $\psi_k$ of the system. The most general solution to the full Schrödinger equation $i\hbar\frac{\partial \Psi}{\partial t} = \textbf{H}\Psi \nonumber$ is then given by applying $e^{\frac{-i\textbf{H}t}{\hbar}}$ to the wavefunction at some initial time (t=0) $\Psi =\sum\limits_kC_k\psi_k \nonumber$ to obtain $\Psi(t)=\sum\limits_kC_k\psi_ke^{\frac{-itE_k}{\hbar}}. \nonumber$ The relative amplitudes $C_k$ are determined by knowledge of the state at the initial time; this depends on how the system has been prepared in an earlier experiment. Just as Newton's laws of motion do not fully determine the time evolution of a classical system (i.e., the coordinates and momenta must be known at some initial time), the Schrödinger equation must be accompanied by initial conditions to fully determine $\Psi(q_j ,t)$. Example $1$: Using the results of Problem 11 of this chapter to illustrate, the sudden ionization of $N_2$ in its v=0 vibrational state to generate $N_2^+$ produces a vibrational wavefunction $\Psi_0 = \sqrt[4]{\dfrac{\alpha}{\pi}}e^{\frac{-\alpha x^2}{2}} = 3.53333 \text{ Å}^{-1/2} \, e^{\frac{-\alpha x^2}{2}} \nonumber$ that was created by the fast ionization of $N_2$. Subsequent to ionization, this $N_2$ function is not an eigenfunction of the new vibrational Schrödinger equation appropriate to $N_2^+.$ As a result, this function will time evolve under the influence of the $N_2^+$ Hamiltonian. The time evolved wavefunction, according to this first rule, can be expressed in terms of the vibrational functions {$\Psi_v$} and energies {$E_v$} of the $N_2^+$ ion as $\Psi(t) = \sum\limits_vC_v\Psi_ve^{\frac{-iE_vt}{\hbar}}. 
\nonumber$ The amplitudes $C_v$, which reflect the manner in which the wavefunction is prepared (at t=0), are determined by finding the component of each $\Psi_v$ in the function $\Psi$ at t=0. To do this, one uses $\int\Psi_{v'}^{\text{*}}\Psi(t=0) d\tau = C_{v'}, \nonumber$ which is easily obtained by multiplying the above summation by $\Psi^{\text{*}}_{v'}$, integrating, and using the orthonormality of the {$\Psi_v$} functions. For the case at hand, $C_{v=0}$ is the integral of the product of the $N_2$ v=0 function and the $N_2^+$ v=0 function: $C_{v=0} = \int\limits_{-\infty}^{\infty}3.47522e^{-229.113(r-1.11642)^2}3.53333e^{-244.83(r-1.09769)^2} dr \nonumber$ As demonstrated in Problem 11, this integral reduces to 0.959. This means that the $N_2$ v=0 state, subsequent to sudden ionization, can be represented as containing a $|0.959|^2 = 0.92$ fraction of the v=0 state of the $N_2^+$ ion. Example $1$ relates to the well known Franck-Condon principle of spectroscopy, in which squares of 'overlaps' between the initial electronic state's vibrational wavefunction and the final electronic state's vibrational wavefunctions allow one to estimate the probabilities of populating various final-state vibrational levels. In addition to initial conditions, solutions to the Schrödinger equation must obey certain other constraints in form. They must be continuous functions of all of their spatial coordinates and must be single valued; these properties allow $\Psi^{\text{*}}\Psi$ to be interpreted as a probability density (i.e., the probability of finding a particle at some position can not be multivalued nor can it be 'jerky' or discontinuous). The derivative of the wavefunction must also be continuous except at points where the potential function undergoes an infinite jump (e.g., at the wall of an infinitely high and steep potential barrier). 
This condition relates to the fact that the momentum must be continuous except at infinitely 'steep' potential barriers where the momentum undergoes a 'sudden' reversal. 2: Measurements are Eigenvalues An experimental measurement of any quantity (whose corresponding operator is F) must result in one of the eigenvalues $f_j$ of the operator F. These eigenvalues are obtained by solving $\textbf{F} \phi_j = f_j \phi_j, \nonumber$ where the $\phi_j$ are the eigenfunctions of F. Once the measurement of F is made, for that subpopulation of the experimental sample found to have the particular eigenvalue $f_j$, the wavefunction becomes $\phi_j$. The equation $\textbf{H}\psi_k = E_k\psi_k$ is but a special case; it is an especially important case because much of the machinery of modern experimental chemistry is directed at placing the system in a particular energy quantum state by detecting its energy (e.g., by spectroscopic means). The reader is strongly urged to also study Appendix C to gain a more detailed and illustrated treatment of this and subsequent rules of quantum mechanics. 3: Operators that correspond to Measurables are Hermitian The operators F corresponding to all physically measurable quantities are Hermitian; this means that their matrix representations obey (see Appendix C for a description of the 'bra' $\langle \: |$ and 'ket' $| \: \rangle$ notation used below): $\langle \chi_j|\textbf{F}|\chi_k\rangle = \langle \chi_k|\textbf{F}\chi_j\rangle ^{\text{*}} = \langle \textbf{F}\chi_j|\chi_k\rangle \nonumber$ in any basis {$\chi_j$} of functions appropriate for the action of F (i.e., functions of the variables on which F operates). As expressed through equality of the first and third elements above, Hermitian operators are often said to 'obey the turn-over rule'. This means that F can be allowed to operate on the function to its right or on the function to its left if F is Hermitian.
Hermiticity assures that the eigenvalues {$f_j$} are all real, that eigenfunctions {$\chi_j$} having different eigenvalues are orthogonal and can be normalized $\langle \chi_j|\chi_k\rangle =\delta_{j,k},$ and that eigenfunctions having the same eigenvalues can be made orthonormal (these statements are proven in Appendix C). 4: Stationary states do not have varying Measurables Once a particular value $f_j$ is observed in a measurement of F, this same value will be observed in all subsequent measurements of F as long as the system remains undisturbed by measurements of other properties or by interactions with external fields. In fact, once $f_j$ has been observed, the state of the system becomes an eigenstate of F (if it already was, it remains unchanged): $\textbf{F}\Psi = f_j\Psi. \nonumber$ This means that the measurement process itself may interfere with the state of the system and even determines what that state will be once the measurement has been made. Example $2$: Again consider the v=0 $N_2$ ionization treated in Problem 11 of this chapter. If, subsequent to ionization, the $N_2^+$ ions produced were probed to determine their internal vibrational state, a fraction of the sample equal to $|\langle \Psi (N_2; \nu =0) | \Psi(N_2^+; \nu=0)\rangle |^2 = 0.92$ would be detected in the v=0 state of the $N_2^+$ ion. For this sub-sample, the vibrational wavefunction becomes, and remains from then on, $\Psi(t) = \Psi(N_2^+; \nu=0)e^{\dfrac{-it E^+_{\nu=0}}{\hbar}}, \nonumber$ where $E^+_{\nu=0}$ is the energy of the $N_2^+$ ion in its $\nu=0$ state. If, at some later time, this subsample is again probed, all species will be found to be in the $\nu=0$ state.
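Before moving on, the assertions of rule 3 above (real eigenvalues and orthonormal eigenfunctions for a Hermitian operator) can be spot-checked numerically. The sketch below is an illustration, not part of the text; it builds a random Hermitian matrix and verifies both properties:

```python
import numpy as np

# Build a random Hermitian matrix F = (A + A^dagger)/2 and verify the two
# consequences of Hermiticity: real eigenvalues and orthonormal eigenvectors.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
F = (A + A.conj().T) / 2
vals, vecs = np.linalg.eig(F)          # general solver, so realness is a check
print(np.allclose(vals.imag, 0.0))     # eigenvalues are real -> True
overlaps = vecs.conj().T @ vecs        # matrix of <chi_j|chi_k> inner products
print(np.allclose(overlaps, np.eye(4), atol=1e-8))  # orthonormal -> True
```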
5: Probability of Observing a Specific Eigenvalue The probability $P_k$ of observing a particular value $f_k$ when F is measured, given that the system wavefunction is $\Psi$ prior to the measurement, is given by expanding $\Psi$ in terms of the complete set of normalized eigenstates of F $\Psi = \sum\limits_j|\phi_j\rangle \langle \phi_j|\Psi\rangle \nonumber$ and then computing $P_k = |\langle \phi_k|\Psi\rangle |^2.$ For the special case in which $\Psi$ is already one of the eigenstates of F (i.e., $\Psi=\phi_k$), the probability of observing $f_j$ reduces to $P_j =\delta_{j,k}$. The set of numbers $C_j = \langle \phi_j|\Psi\rangle$ are called the expansion coefficients of $\Psi$ in the basis of the {$\phi_j$}. These coefficients, when collected together in all possible products as $D_{j,i} = C_i^{\text{*}} C_j$, form the so-called density matrix $D_{j,i}$ of the wavefunction $\Psi$ within the {$\phi_j$} basis. Example $3$: If F is the operator for momentum in the x-direction and $\Psi(x,t)$ is the wave function for x as a function of time t, then the above expansion corresponds to a Fourier transform of $\Psi$ $\Psi(x,t) = \dfrac{1}{2\pi} \int e^{ikx}\int e^{-ikx'}\Psi(x',t) dx' dk. \nonumber$ Here $\sqrt{\frac{1}{2\pi}}e^{ikx}$ is the normalized eigenfunction of $\textbf{F} = -i\hbar \frac{\partial}{\partial x}$ corresponding to momentum eigenvalue $\hbar k$. These momentum eigenfunctions are orthonormal: $\dfrac{1}{2\pi}\int e^{-ikx}e^{ikx'}dk = \delta(x-x') \nonumber$ because F is a Hermitian operator. The function $\int e^{-ikx'} \Psi(x',t) dx'$ is called the momentum-space transform of $\Psi(x,t)$ and is denoted $\Psi(k,t)$; it gives, when used as $\Psi^{\text{*}}(k,t)\Psi(k,t)$, the probability density for observing momentum values $\hbar k$ at time t.
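The momentum-space picture of Example 3 can be realized numerically with a fast Fourier transform. In the sketch below (the Gaussian wavepacket, the grid, and the choice $\hbar = 1$ are illustrative assumptions, not from the text), the momentum density integrates to one and peaks at the wavepacket's carrier momentum:

```python
import numpy as np

# Momentum-space density |Psi(k, t=0)|^2 of a Gaussian wavepacket via FFT,
# using the convention Psi(k) = (1/sqrt(2 pi)) Integral e^{-ikx} Psi(x) dx.
x = np.linspace(-20.0, 20.0, 4096, endpoint=False)
dx = x[1] - x[0]
k0 = 2.0                                   # carrier momentum of the packet
psi_x = (1/np.pi)**0.25 * np.exp(-x**2 / 2) * np.exp(1j * k0 * x)
k = 2*np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
psi_k = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2*np.pi)
prob_k = np.abs(psi_k)**2                  # probability density for momentum k
dk = k[1] - k[0]
print(round(np.sum(prob_k) * dk, 6))       # normalization is preserved (~1)
print(k[np.argmax(prob_k)])                # density peaks near k0 = 2
```

The first printed value illustrates Parseval's theorem: the probabilities of all momentum outcomes sum to one.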
Example $4$: Take the initial $\psi$ to be a superposition state of the form $\psi = a (2p_0 + 2p_{-1} - 2p_1) + b(3p_0 - 3p_{-1}), \nonumber$ where a and b are amplitudes that describe the admixture of 2p and 3p functions in this wavefunction. Then: a. If $\textbf{L}^2$ were measured, the value $2\hbar^2$ would be observed with probability $3|a|^2 + 2|b|^2 = 1$, since all of the functions in $\psi$ are p-type orbitals. After said measurement, the wavefunction would still be this same $\psi$ because this entire $\psi$ is an eigenfunction of $\textbf{L}^2$. b. If $\textbf{L}_z$ were measured for this $\psi = a(2p_0 + 2p_{-1} - 2p_1) + b(3p_0 - 3p_{-1}), \nonumber$ the values $0\hbar$, $1\hbar$, and $-1\hbar$ would be observed (because these are the only functions with non-zero $C_m$ coefficients for the $L_z$ operator) with respective probabilities $|a|^2 + |b|^2$, $|-a|^2$, and $|a|^2 + |-b|^2$. c. After $L_z$ were measured, if the sub-population for which $-1\hbar$ had been detected were subjected to measurement of $\textbf{L}^2$, the value $2\hbar^2$ would certainly be found because the new wavefunction $\psi ' = \left[ a2p_{-1} - b 3p_{-1} \right] \dfrac{1}{\sqrt{|a|^2 + |b|^2}} \nonumber$ is still an eigenfunction of $\textbf{L}^2$ with this eigenvalue. d. Again after $\textbf{L}_z$ were measured, if the sub-population for which $-1\hbar$ had been observed and for which the wavefunction is now $\psi ' = \left[ a2p_{-1} - b 3p_{-1} \right] \dfrac{1}{\sqrt{|a|^2 + |b|^2}} \nonumber$ were subjected to measurement of the energy (through the Hamiltonian operator), two values would be found. With probability $|a|^2 \frac{1}{|a|^2 + |b|^2}$ the energy of the $2p_{-1}$ orbital would be observed; with probability $|-b|^2\frac{1}{|a|^2 + |b|^2}$, the energy of the $3p_{-1}$ orbital would be observed.
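The bookkeeping of Example 4 can be mirrored with a small vector calculation. In this sketch (the numerical values of a and b are an illustrative choice satisfying the normalization $3|a|^2 + 2|b|^2 = 1$), the $L_z$ probabilities are obtained by summing squared coefficients over basis functions sharing the same m:

```python
import numpy as np

# State of Example 4 as a coefficient vector in the basis
# (2p_0, 2p_1, 2p_-1, 3p_0, 3p_-1), for psi = a(2p0 + 2p-1 - 2p1) + b(3p0 - 3p-1).
a = b = 1.0 / np.sqrt(5.0)                # one choice obeying 3a^2 + 2b^2 = 1
coeffs = np.array([a, -a, a, b, -b])      # expansion coefficients C
m_vals = np.array([0, 1, -1, 0, -1])      # L_z quantum number of each function
assert np.isclose(np.sum(np.abs(coeffs)**2), 1.0)   # psi is normalized
for m in (0, 1, -1):
    P_m = np.sum(np.abs(coeffs[m_vals == m])**2)    # rule 5 probability
    print(m, round(P_m, 2))
# m=0: |a|^2+|b|^2 = 0.4;  m=+1: |-a|^2 = 0.2;  m=-1: |a|^2+|-b|^2 = 0.4
```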
If $\Psi$ is a function of several variables (e.g., when $\Psi$ describes more than one particle in a composite system), and if F is a property that depends on a subset of these variables (e.g., when F is a property of one of the particles in the composite system), then the expansion $\Psi=\sum\limits_j |\phi_j\rangle \langle \phi_j|\Psi\rangle$ is viewed as relating only to $\Psi$'s dependence on the subset of variables related to F. In this case, the integrals $\langle \phi_k|\Psi\rangle$ are carried out over only these variables; thus the probabilities $P_k = |\langle \phi_k|\Psi\rangle |^2$ depend parametrically on the remaining variables. Suppose that $\Psi(r,\theta)$ describes the radial (r) and angular ($\theta$) motion of a diatomic molecule constrained to move on a planar surface. If an experiment were performed to measure the component of the rotational angular momentum of the diatomic molecule perpendicular to the surface $\left( \textbf{L}_z = -i\hbar\frac{\partial}{\partial \theta}\right)$, only values equal to $m\hbar$ (m = 0, 1, -1, 2, -2, 3, -3, ...) could be observed, because these are the eigenvalues of $\textbf{L}_z$: $\textbf{L}_z \phi_m = -i\hbar\frac{\partial}{\partial \theta}\phi_m = m\hbar \phi_m, \nonumber$ where $\phi_m = \sqrt{\dfrac{1}{2\pi}}e^{im \theta}. \nonumber$ The quantization of $\textbf{L}_z$ arises because the eigenfunctions $\phi_m(\theta)$ must be periodic in $\theta$: $\phi(\theta + 2\pi) = \phi(\theta).
\nonumber$ Such quantization (i.e., constraints on the values that physical properties can realize) will be seen to occur whenever the pertinent wavefunction is constrained to obey a so-called boundary condition (in this case, the boundary condition is $\phi(\theta + 2\pi) = \phi (\theta )$). Expanding the $\theta$-dependence of $\Psi$ in terms of the $\phi_m$ $\Psi = \sum\limits_m \langle \phi_m|\Psi\rangle \phi_m(\theta) \nonumber$ allows one to write the probability that $m\hbar$ is observed if the angular momentum $\textbf{L}_z$ is measured as follows: $P_m = |\langle \phi_m|\Psi \rangle |^2 = |\int \phi_m^\text{*}(\theta ) \Psi (r,\theta) d\theta |^2. \nonumber$ If one is interested in the probability that $m\hbar$ be observed when $L_z$ is measured regardless of what bond length r is involved, then it is appropriate to integrate this expression over the r-variable about which one does not care. This, in effect, sums contributions from all r-values to obtain a result that is independent of the r variable. As a result, the probability reduces to: $P_m = \int \phi_m^{\text{*}}(\theta ') \left[ \int \Psi^{\text{*}}(r,\theta ') \Psi(r,\theta)\: r\: dr \right]\phi_m (\theta) d\theta ' d\theta, \nonumber$ which is simply the above result integrated over r with a volume element r dr for the two-dimensional motion treated here. If, on the other hand, one were able to measure $L_z$ values when r is equal to some specified bond length (this is only a hypothetical example; there is no known way to perform such a measurement), then the probability would equal: $P_m\: r\: dr = r\: dr\int \phi_m^{\text{*}}(\theta ')\Psi^{\text{*}}(r,\theta ')\Psi(r,\theta )\phi_m (\theta) d\theta 'd \theta = |\langle \phi_m|\Psi\rangle |^2 r dr. \nonumber$ 6: Commuting Operators Two or more properties F, G, J whose corresponding Hermitian operators F, G, J commute $\textbf{FG}-\textbf{GF}=\textbf{FJ}-\textbf{JF}=\textbf{GJ}-\textbf{JG}= 0 \nonumber$ have complete sets of simultaneous eigenfunctions (the proof of this is treated in Appendix C).
This means that the set of functions that are eigenfunctions of one of the operators can be formed into a set of functions that are also eigenfunctions of the others: $\textbf{F}\phi_j = f_j\phi_j \Longrightarrow \textbf{G}\phi_j = g_j\phi_j \Longrightarrow \textbf{J}\phi_j=j_j\phi_j. \nonumber$ Example $5$: The $p_x$, $p_y$, and $p_z$ orbitals are eigenfunctions of the $\textbf{L}^2$ angular momentum operator with eigenvalues equal to $L(L+1) \hbar^2 = 2 \hbar^2$. Since $\textbf{L}^2$ and $\textbf{L}_z$ commute and act on the same (angle) coordinates, they possess a complete set of simultaneous eigenfunctions. Although the $p_x$, $p_y$, and $p_z$ orbitals are not eigenfunctions of $\textbf{L}_z$, they can be combined to form three new orbitals: $p_0 = p_z$, $p_1= \frac{1}{\sqrt{2}} [p_x + ip_y]$, and $p_{-1}= \frac{1}{\sqrt{2}} [p_x - ip_y]$ that are still eigenfunctions of $\textbf{L}^2$ but are now eigenfunctions of $\textbf{L}_z$ also (with eigenvalues $0\hbar$, $1\hbar$, and $-1\hbar$, respectively). It should be mentioned that if two operators do not commute, they may still have some eigenfunctions in common, but they will not have a complete set of simultaneous eigenfunctions. For example, the $L_z$ and $L_x$ components of the angular momentum operator do not commute; however, a wavefunction with L=0 (i.e., an S-state) is an eigenfunction of both operators. The fact that two operators commute is of great importance. It means that once a measurement of one of the properties is carried out, subsequent measurement of that property or of any of the other properties corresponding to mutually commuting operators can be made without altering the system's value of the properties measured earlier. Only subsequent measurement of another property whose operator does not commute with F, G, or J will destroy precise knowledge of the values of the properties measured earlier.
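Example 5 can be verified with 3×3 matrices. Below is a sketch (in units of $\hbar$, an illustrative representation rather than anything from the text) of $\textbf{L}_z$ in the Cartesian {$p_x$, $p_y$, $p_z$} basis; the combinations $p_{\pm 1}$ and $p_0$ come out as its eigenvectors, and it commutes with $\textbf{L}^2$, which equals 2 times the identity on the p shell:

```python
import numpy as np

# L_z in the Cartesian (p_x, p_y, p_z) basis, in units of hbar:
# L_z p_x = i p_y and L_z p_y = -i p_x, while p_z is annihilated.
Lz = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])
p1  = np.array([1,  1j, 0]) / np.sqrt(2)   # p_1  = (p_x + i p_y)/sqrt(2)
pm1 = np.array([1, -1j, 0]) / np.sqrt(2)   # p_-1 = (p_x - i p_y)/sqrt(2)
p0  = np.array([0, 0, 1])                  # p_0  = p_z
print(np.allclose(Lz @ p1,  +1 * p1))      # eigenvalue +1 -> True
print(np.allclose(Lz @ pm1, -1 * pm1))     # eigenvalue -1 -> True
print(np.allclose(Lz @ p0,   0 * p0))      # eigenvalue  0 -> True
L2 = 2 * np.eye(3)                         # L^2 = L(L+1) = 2 on any p function
print(np.allclose(Lz @ L2 - L2 @ Lz, 0))   # [L^2, L_z] = 0 -> True
```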
Example $6$: Assume that an experiment has been carried out on an atom to measure its total angular momentum $L^2$. According to quantum mechanics, only values equal to $L(L+1) \hbar^2$ will be observed. Further assume, for the particular experimental sample subjected to observation, that values of $L^2$ equal to $2\hbar^2$ and $0\hbar^2$ were detected in relative amounts of 64% and 36%, respectively. This means that the atom's original wavefunction $\psi$ could be represented as: $\psi = 0.8P + 0.6S, \nonumber$ where P and S represent the P-state and S-state components of $\psi$. The squares of the amplitudes 0.8 and 0.6 give the 64% and 36% probabilities mentioned above. Now assume that a subsequent measurement of the component of angular momentum along the lab-fixed z-axis is to be made for that sub-population of the original sample found to be in the P-state. For that population, the wavefunction is now a pure P-function: $\psi ' = P. \nonumber$ However, at this stage we have no information about how much of this $\psi '$ is of m = 1, 0, or -1, nor do we know how much 2p, 3p, 4p, ... np components this state contains. Because the property corresponding to the operator $\textbf{L}_z$ is about to be measured, we express the above $\psi '$ in terms of the eigenfunctions of $\textbf{L}_z$: $\psi ' = P = \sum\limits_{m=1,0,-1}C'_mP_m. \nonumber$ When the measurement of $L_z$ is made, the values $1\hbar$, $0\hbar$, and $-1\hbar$ will be observed with probabilities given by $|C'_1|^2$, $|C'_0|^2$, and $|C'_{-1}|^2$, respectively.
For that sub-population found to have, for example, $L_z$ equal to $-1\hbar$, the wavefunction then becomes $\psi '' = P_{-1}. \nonumber$ At this stage, we do not know how much of $2p_{-1}, 3p_{-1}, 4p_{-1}, ... np_{-1}$ this wavefunction contains. To probe this question another subsequent measurement of the energy (corresponding to the H operator) could be made. Doing so would allow the amplitudes in the expansion of the above $\psi ''$, $\psi '' = P_{-1} = \sum\limits_n C''_n\: nP_{-1}, \nonumber$ to be found. The kind of experiment outlined above allows one to find the content of each particular component of an initial sample's wavefunction. For example, the original wavefunction has $0.64 |C''_n|^2 |C'_m|^2$ fractional content of the various $nP_m$ functions. It is analogous to the other examples considered above because all of the operators whose properties are measured commute. Example $7$: Let us consider an experiment in which we begin with a sample (with wavefunction $\psi$) that is first subjected to measurement of $L_z$, then to measurement of $L^2$, and then of the energy. In this order, one would first find specific values (integer multiples of $\hbar$) of $L_z$ and one would express $\psi$ as $\psi = \sum\limits_m D_m \psi_m. \nonumber$ At this stage, the nature of each $\psi_m$ is unknown (e.g., the $\psi_1$ function can contain $np_1$, $n'd_1$, $n''f_1$, etc. components); all that is known is that $\psi_m$ has $m\hbar$ as its $L_z$ value. Taking that sub-population ($|D_m|^2$ fraction) with a particular $m\hbar$ value for $L_z$ and subjecting it to subsequent measurement of $L^2$ requires the current wavefunction $\psi_m$ to be expressed as $\psi_m = \sum\limits_L D_{L,m}\psi_{L,m}. \nonumber$ When $L^2$ is measured the value $L(L+1)\hbar^2$ will be observed with probability $|D_{L,m}|^2$, and the wavefunction for that particular sub-population will become $\psi '' = \psi_{L,m}.
\nonumber$ At this stage, we know the value of L and of m, but we do not know the energy of the state. For example, we may know that the present sub-population has L=1, m=-1, but we have no knowledge (yet) of how much $2p_{-1}, 3p_{-1}, ... np_{-1}$ the system contains. To further probe the sample, the above sub-population with L=1 and m=-1 can be subjected to measurement of the energy. In this case, the function $\psi_{1,-1}$ must be expressed as $\psi_{1,-1} = \sum\limits_nD_n'' nP_{-1}. \nonumber$ When the energy measurement is made, the state $nP_{-1}$ will be found $|D_n''|^2$ fraction of the time. The fact that $\textbf{L}_z$, $\textbf{L}^2$, and H all commute with one another (i.e., are mutually commutative) makes the series of measurements described in the above examples more straightforward than if these operators did not commute. In the first experiment, the fact that they are mutually commutative allowed us to expand the 64% probable $\textbf{L}^2$ eigenstate with L=1 in terms of functions that were eigenfunctions of the operator for which measurement was about to be made without destroying our knowledge of the value of $L^2$. That is, because $\textbf{L}^2$ and $\textbf{L}_z$ can have simultaneous eigenfunctions, the L = 1 function can be expanded in terms of functions that are eigenfunctions of both $\textbf{L}^2$ and $\textbf{L}_z$. This, in turn, allowed us to find experimentally the sub-population that had, for example, $-1\hbar$ as its value of $L_z$ while retaining knowledge that the state remains an eigenstate of $\textbf{L}^2$ (the state at this time had L = 1 and m = -1 and was denoted $P_{-1}$). Then, when this $P_{-1}$ state was subjected to energy measurement, knowledge of the energy of the sub-population could be gained without giving up knowledge of the $L^2$ and $L_z$ information; upon carrying out said measurement, the state became $nP_{-1}$.
We therefore conclude that the act of carrying out an experimental measurement disturbs the system in that it causes the system's wavefunction to become an eigenfunction of the operator whose property is measured. If two properties whose corresponding operators commute are measured, the measurement of the second property does not destroy knowledge of the first property's value gained in the first measurement. On the other hand, as detailed further in Appendix C, if the two properties (F and G) do not commute, the second measurement destroys knowledge of the first property's value. After the first measurement, $\Psi$ is an eigenfunction of F; after the second measurement, it becomes an eigenfunction of G. If the two non-commuting operators' properties are measured in the opposite order, the wavefunction first is an eigenfunction of G, and subsequently becomes an eigenfunction of F. It is thus often said that 'measurements for operators that do not commute interfere with one another'. The simultaneous measurement of the position and momentum along the same axis provides an example of two measurements that are incompatible. The fact that x = x and $p_x = -i\hbar \frac{\partial}{\partial x}$ do not commute is straightforward to demonstrate: $\left[x\left(-i\hbar \dfrac{\partial}{\partial x}\right) - \left( -i\hbar \dfrac{\partial}{\partial x}\right)x\right]\chi = i\hbar \chi \neq 0. \nonumber$ Operators that commute with the Hamiltonian and with one another form a particularly important class because each such operator permits each of the energy eigenstates of the system to be labelled with a corresponding quantum number. These operators are called symmetry operators. As will be seen later, they include angular momenta (e.g., $L^2, L_z, S^2, S_z,$ for atoms) and point group symmetries (e.g., planes and rotations about axes). Every operator that qualifies as a symmetry operator provides a quantum number with which the energy levels of the system can be labeled. 
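The non-commutation of x and $p_x$ demonstrated above can also be checked on a numerical grid. The sketch below (with $\hbar = 1$ and a Gaussian test function, both illustrative choices) applies $x\hat{p}_x - \hat{p}_x x$ to $\chi$ by finite differences and recovers $i\hbar\chi$:

```python
import numpy as np

# Finite-difference demonstration that (x p_x - p_x x) chi = i hbar chi.
hbar = 1.0
x = np.linspace(-5.0, 5.0, 100001)
chi = np.exp(-x**2 / 2)                       # smooth test function
p = lambda f: -1j * hbar * np.gradient(f, x)  # p_x = -i hbar d/dx on the grid
comm_chi = x * p(chi) - p(x * chi)            # [x, p_x] acting on chi
target = 1j * hbar * chi
# Compare interior points only: np.gradient is one-sided (less accurate) at the ends.
print(np.allclose(comm_chi[1:-1], target[1:-1], atol=1e-6))  # True
```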
7: Expectation Values If a property F is measured for a large number of systems all described by the same $\Psi$, the average value $\langle F\rangle$ for such a set of measurements can be computed as $\langle F\rangle = \langle \Psi |\textbf{F}|\Psi\rangle . \nonumber$ Expanding $\Psi$ in terms of the complete set of eigenstates of F allows $\langle F\rangle$ to be rewritten as follows: $\langle F\rangle = \sum\limits_jf_j|\langle \phi_j|\Psi\rangle |^2, \nonumber$ which clearly expresses $\langle F\rangle$ as a sum over the products of the probability $P_j$ of obtaining the particular value $f_j$ when the property F is measured and the value $f_j$ of the property in such a measurement. This same result can be expressed in terms of the density matrix $D_{j,i}$ of the state $\Psi$ defined above as: $\langle F\rangle = \sum\limits_{i,j} \langle \Psi |\phi_i\rangle \langle \phi_i|\textbf{F}|\phi_j\rangle \langle \phi_j|\Psi\rangle = \sum\limits_{i,j}C_i^{\text{*}}\langle \phi_i|\textbf{F}|\phi_j\rangle C_j \nonumber$ $= \sum\limits_{i,j}D_{j,i}\langle \phi_i|\textbf{F}|\phi_j\rangle = Tr(DF). \nonumber$ Here, DF represents the matrix product of the density matrix $D_{j,i}$ and the matrix representation $F_{i,j} = \langle \phi_i|\textbf{F}|\phi_j\rangle$ of the F operator, both taken in the {$\phi_j$} basis, and Tr represents the matrix trace operation. As mentioned at the beginning of this Section, this set of rules and their relationships to experimental measurements can be quite perplexing. The structure of quantum mechanics embodied in the above rules was developed in light of new scientific observations (e.g., the photoelectric effect, diffraction of electrons) that could not be interpreted within the conventional pictures of classical mechanics. Throughout its development, these and other experimental observations placed severe constraints on the structure of the equations of the new quantum mechanics as well as on their interpretations.
For example, the observation of discrete lines in the emission spectra of atoms gave rise to the idea that the atom's electrons could exist with only certain discrete energies and that light of specific frequencies would be given off as transitions among these quantized energy states took place. Even with the assurance that quantum mechanics has firm underpinnings in experimental observations, students learning this subject for the first time often encounter difficulty. Therefore, it is useful to again examine some of the model problems for which the Schrödinger equation can be exactly solved and to learn how the above rules apply to such concrete examples. The examples examined earlier in this Chapter and those given in the Exercises and Problems serve as useful models for chemically important phenomena: electronic motion in polyenes, in solids, and in atoms as well as vibrational and rotational motions. Their study thus far has served two purposes; it allowed the reader to gain some familiarity with applications of quantum mechanics and it introduced models that play central roles in much of chemistry. Their study now is designed to illustrate how the above seven rules of quantum mechanics relate to experimental reality. An Example Illustrating Several of the Fundamental Rules The physical significance of the time independent wavefunctions and energies treated in Section II as well as the meaning of the seven fundamental points given above can be further illustrated by again considering the simple two-dimensional electronic motion model. If the electron were prepared in the eigenstate corresponding to $n_x =1, n_y = 2,$ its total energy would be $E = \pi^2 \dfrac{\hbar^2}{2m}\left[ \dfrac{1^2}{L_x^2} + \dfrac{2^2}{L_y^2} \right]. \nonumber$ If the energy were experimentally measured, this and only this value would be observed, and this same result would hold for all time as long as the electron is undisturbed. 
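For concreteness, the $n_x=1$, $n_y=2$ energy just written can be evaluated numerically. The sketch below uses illustrative box dimensions ($L_x = L_y = 1$ nm, which are not specified in the text) and the electron mass:

```python
import numpy as np

# E = pi^2 hbar^2/(2m) [n_x^2/L_x^2 + n_y^2/L_y^2] for n_x=1, n_y=2.
# The 1 nm box edges are an illustrative assumption, not a value from the text.
hbar = 1.054571817e-34      # Planck constant over 2 pi, J s
m_e  = 9.1093837015e-31     # electron mass, kg
eV   = 1.602176634e-19      # J per electron volt
Lx = Ly = 1.0e-9            # box edge lengths, m
E = np.pi**2 * hbar**2 / (2 * m_e) * (1**2 / Lx**2 + 2**2 / Ly**2)
print(round(E / eV, 2))     # ~1.88 eV for this choice of box
```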
If an experiment were carried out to measure the momentum of the electron along the y-axis, according to the second postulate above, only values equal to the eigenvalues of $-i\hbar \dfrac{\partial}{\partial y}$ could be observed. The p$_y$ eigenfunctions (i.e., functions that obey p$_y F = -i\hbar\frac{\partial}{\partial y} F = c F$) are of the form $\sqrt{\frac{1}{L_y}}e^{ik_y y}, \nonumber$ where the momentum $\hbar k_y$ can achieve any value; the $\sqrt{\frac{1}{L_y}}$ factor is used to normalize the eigenfunctions over the range $0 \leq y \leq L_y.$ It is useful to note that the y-dependence of $\psi$ as expressed above $\left[ e^{\frac{i2\pi y}{L_y}} - e^{\frac{-i2\pi y}{L_y}} \right]$ is already written in terms of two such eigenstates of $-i\hbar \frac{\partial}{\partial y}$: $-i\hbar\dfrac{\partial}{\partial y}\left( e^{\dfrac{i2\pi y}{L_y}}\right) = \dfrac{2\pi\hbar}{L_y} \left(e^{\dfrac{i2\pi y}{L_y}}\right), \ \text{and} \nonumber$ $-i\hbar\dfrac{\partial}{\partial y}\left( e^{\dfrac{-i2\pi y}{L_y}}\right) = \dfrac{-2\pi\hbar}{L_y} \left( e^{\dfrac{-i2\pi y}{L_y}} \right). \nonumber$ Thus, the expansion of $\psi$ in terms of eigenstates of the property being measured dictated by the fifth postulate above is already accomplished. The only two terms in this expansion correspond to momenta along the y-axis of $\frac{2\pi\hbar}{L_y}$ and $-\frac{2\pi\hbar}{L_y}$; the probabilities of observing these two momenta are given by the squares of the expansion coefficients of $\psi$ in terms of the normalized eigenfunctions of $-i\hbar \frac{\partial}{\partial y}$. The functions $\sqrt{\frac{1}{L_y}}\left( e^{\frac{i2\pi y}{L_y}}\right)$ and $\sqrt{\frac{1}{L_y}}\left( e^{\frac{-i2\pi y}{L_y}}\right)$ are such normalized eigenfunctions; the expansion coefficients of these functions in $\psi$ are $\frac{1}{\sqrt{2}}$ and $-\frac{1}{\sqrt{2}}$, respectively.
Thus the momentum $\frac{2\pi\hbar}{L_y}$ will be observed with probability $\left( \frac{1}{\sqrt{2}}\right) ^2 = \frac{1}{2}$ and $-\frac{2\pi\hbar}{L_y}$ will be observed with probability $\left( -\frac{1}{\sqrt{2}} \right)^2 = \frac{1}{2}.$ If the momentum along the x-axis were experimentally measured, again only two values $\frac{\pi\hbar}{L_x}$ and $-\frac{\pi\hbar}{L_x}$ would be found, each with a probability of $\frac{1}{2}$. The average value of the momentum along the x-axis can be computed either as the sum of the probabilities multiplied by the momentum values: $\langle p_x\rangle = \frac{1}{2}\left[ \frac{\pi\hbar}{L_x} - \frac{\pi\hbar}{L_x} \right] = 0, \nonumber$ or as the so-called expectation value integral shown in the seventh postulate: $\langle p_x\rangle = \iint \psi^{\text{*}} \left(-i\hbar \dfrac{\partial \psi}{\partial x}\right) \text{dx dy}. \nonumber$ Inserting the full expression for $\psi$(x,y) and integrating over x and y from 0 to $L_x$ and $L_y$, respectively, this integral is seen to vanish. This means that the result of a large number of measurements of p$_x$ on electrons each described by the same $\psi$ will yield zero net momentum along the x-axis; half of the measurements will yield positive momenta and half will yield negative momenta of the same magnitude. The time evolution of the full wavefunction given above for the n$_x$=1, n$_y$=2 state is easy to express because this $\psi$ is an energy eigenstate: $\Psi(x,y,t) = \psi(x,y) e^{\dfrac{-iEt}{\hbar}}. \nonumber$ If, on the other hand, the electron had been prepared in a state $\psi(x,y)$ that is not a pure eigenstate (i.e., cannot be expressed as a single energy eigenfunction), then the time evolution is more complicated.
For example, if at t=0 $\psi$ were of the form $\psi = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}}\left[ \text{a}\: \sin \left( \dfrac{2\pi x}{L_x} \right) \sin \left( \dfrac{1\pi y}{L_y} \right) + \text{b} \: \sin \left( \dfrac{1\pi x}{L_x} \right) \sin \left( \dfrac{2\pi y}{L_y} \right) \right], \nonumber$ with a and b both real numbers whose squares give the probabilities of finding the system in the respective states, then the time evolution operator $e^{\dfrac{-i\textbf{H}t}{\hbar}}$ applied to $\psi$ would yield the following time dependent function: $\Psi = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}} \left[ a\: e^{\dfrac{-iE_{2,1}t}{\hbar}} \sin\left( \dfrac{2\pi x}{L_x} \right) \sin\left( \dfrac{1\pi y}{L_y} \right) + b\: e^{\dfrac{-iE_{1,2}t}{\hbar}}\sin \left( \dfrac{1\pi x}{L_x} \right) \sin\left( \dfrac{2\pi y}{L_y} \right) \right], \nonumber$ where $E_{2,1} = \pi^2\dfrac{\hbar^2}{2m}\left[ \dfrac{2^2}{L_x^2} + \dfrac{1^2}{L_y^2} \right], \text{and} \nonumber$ $E_{1,2} = \pi^2\dfrac{\hbar^2}{2m}\left[ \dfrac{1^2}{L_x^2} + \dfrac{2^2}{L_y^2} \right]. \nonumber$ The probability of finding $E_{2,1}$ if an experiment were carried out to measure energy would be $|a\: e^{\dfrac{-iE_{2,1}t}{\hbar}}|^2 = |a|^2$; the probability for finding $E_{1,2}$ would be $|b|^2$. The spatial probability distribution for finding the electron at points x,y will, in this case, be given by: $|\Psi|^2 = |a|^2 |\psi_{2,1}|^2 + |b|^2 |\psi_{1,2}|^2 + 2 \:ab\: \psi_{2,1} \psi_{1,2} \cos\left( \dfrac{\Delta Et}{\hbar} \right), \nonumber$ where $\Delta E$ is $E_{2,1} - E_{1,2}$, $\psi_{2,1} = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}} \sin\left(\dfrac{2\pi x}{L_x}\right)\sin\left(\dfrac{1\pi y}{L_y}\right), \nonumber$ and $\psi_{1,2} = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}} \sin\left(\dfrac{1\pi x}{L_x}\right)\sin\left(\dfrac{2\pi y}{L_y}\right). \nonumber$ This spatial distribution is not stationary but evolves in time.
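The interference term above can be made concrete with a few lines of arithmetic. In this sketch the box dimensions, the amplitudes $a=b=1/\sqrt{2}$, and the sampling point are all illustrative assumptions; the density at a fixed (x, y) swings between its extremes as the phase $\Delta E t/\hbar$ advances from 0 to $\pi$:

```python
import numpy as np

# |Psi|^2 at one point (x, y) for the a,b superposition of the (2,1) and (1,2)
# box states. Box sizes, amplitudes, and the sampling point are illustrative.
Lx, Ly = 1.0, 2.0
a = b = 1.0 / np.sqrt(2.0)
x, y = 0.3, 0.5
norm = 2.0 / np.sqrt(Lx * Ly)
psi21 = norm * np.sin(2*np.pi*x/Lx) * np.sin(1*np.pi*y/Ly)
psi12 = norm * np.sin(1*np.pi*x/Lx) * np.sin(2*np.pi*y/Ly)
def density(phase):  # phase = Delta_E * t / hbar
    return a**2 * psi21**2 + b**2 * psi12**2 + 2*a*b*psi21*psi12*np.cos(phase)
print(round(density(0.0), 3), round(density(np.pi), 3))  # maximum, then minimum
```

At this point psi21 and psi12 have the same sign, so the density starts at its maximum and dips to its minimum half an oscillation period later.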
So in this case, one has a wavefunction that is not a pure eigenstate of the Hamiltonian (one says that $\Psi$ is a superposition state or a non-stationary state) whose average energy remains constant $(E=E_{2,1} |a|^2 + E_{1,2} |b|^2)$ but whose spatial distribution changes with time. Although it might seem that most spectroscopic measurements would be designed to prepare the system in an eigenstate (e.g., by focusing on the sample light whose frequency matches that of a particular transition), such need not be the case. For example, if very short laser pulses are employed, the Heisenberg uncertainty broadening $(\Delta E\Delta t\geq \hbar)$ causes the light impinging on the sample to be very non-monochromatic (e.g., a pulse time of $1\times 10^{-12}$ sec corresponds to a frequency spread of approximately $5\: cm^{-1}$). This, in turn, removes any possibility of preparing the system in a particular quantum state with a resolution of better than $30\: cm^{-1}$ because the system experiences time oscillating electromagnetic fields whose frequencies range over at least $5\: cm^{-1}$. Essentially all of the model problems that have been introduced in this Chapter to illustrate the application of quantum mechanics constitute widely used, highly successful 'starting-point' models for important chemical phenomena. As such, it is important that students retain working knowledge of the energy levels, wavefunctions, and symmetries that pertain to these models. Thus far, exactly soluble model problems that represent one or more aspects of an atom or molecule's quantum-state structure have been introduced and solved. For example, electronic motion in polyenes was modeled by a particle-in-a-box. The harmonic oscillator and rigid rotor were introduced to model vibrational and rotational motion of a diatomic molecule. As chemists, we are used to thinking of electronic, vibrational, rotational, and translational energy levels as being (at least approximately) separable.
On the other hand, we are aware that situations exist in which energy can flow from one such degree of freedom to another (e.g., electronic-to-vibrational energy flow occurs in radiationless relaxation and vibration-rotation couplings are important in molecular spectroscopy). It is important to understand how the simplifications that allow us to focus on electronic or vibrational or rotational motion arise, how they can be obtained from a first-principles derivation, and what their limitations and range of accuracy are.
Approximation methods can be used when exact solutions to the Schrödinger equation cannot be found. In applying quantum mechanics to 'real' chemical problems, one is usually faced with a Schrödinger differential equation for which, to date, no one has found an analytical solution. This is equally true for electronic and nuclear-motion problems. It has therefore proven essential to develop and efficiently implement mathematical methods which can provide approximate solutions to such eigenvalue equations. Two methods are widely used in this context: the variational method and perturbation theory. These tools, whose use permeates virtually all areas of theoretical chemistry, are briefly outlined here, and the details of perturbation theory are amplified in Appendix D.

• 2.1: The Variational Method. Variational methods, in particular the linear variational method, are the most widely used approximation techniques in quantum chemistry. To implement such a method one needs to know the Hamiltonian whose energy levels are sought and one needs to construct a trial wavefunction in which some 'flexibility' exists (e.g., as in the linear variational method). This tool will be used to develop several of the most commonly used and powerful molecular orbital methods in chemistry.
• 2.2: Perturbation Theory. Perturbation theory is the second most widely used approximation method in quantum chemistry. It allows one to estimate the splittings and shifts in energy levels, and the changes in wavefunctions, that occur when an external field (e.g., an electric or magnetic field, or a field due to a surrounding set of 'ligands', i.e., a crystal field) is applied to a species whose 'unperturbed' states are known, or when a previously-ignored term in the Hamiltonian is included.
• 2.E: Approximation Methods (Exercises). Homework problems and select solutions to "Chapter 2: Approximation Methods" of Simons and Nichols' Quantum Mechanics in Chemistry Textmap.
02: Approximation Methods

For the kind of potentials that arise in atomic and molecular structure, the Hamiltonian $H$ is a Hermitian operator that is bounded from below (i.e., it has a lowest eigenvalue). Because it is Hermitian, it possesses a complete set of orthonormal eigenfunctions $\{ |\psi_j \rangle\}$. Any function $\Phi$ that depends on the same spatial and spin variables on which $H$ operates, and obeys the same boundary conditions that the $\{\psi_j\}$ obey, can be expanded in this complete set $\Phi = \sum \limits_j C_j | \psi_j \rangle. \nonumber$ The expectation value of the Hamiltonian for any such function can be expressed in terms of its $C_j$ coefficients and the exact energy levels $E_j$ of $H$ as follows: $\langle \Phi| H |\Phi\rangle = \sum\limits_{ij}C_i^*C_j \langle \psi_i |H| \psi_j \rangle = \sum\limits_j |C_j|^2 E_j . \nonumber$ If the function $\Phi$ is normalized, the sum $\sum\limits_j|C_j|^2$ is equal to unity. Because $H$ is bounded from below, all of the $E_j$ must be greater than or equal to the lowest energy $E_0$. Combining the latter two observations allows the energy expectation value of $\Phi$ to be used to produce a very important inequality: $\langle \Phi |H| \Phi \rangle \geq E_0 . \nonumber$ The equality can hold only if $\Phi$ is equal to $\psi_0$; if $\Phi$ contains components along any of the other $\psi_j$, the energy of $\Phi$ will exceed $E_0$. This upper-bound property forms the basis of the so-called variational method in which 'trial wavefunctions' $\Phi$ are constructed: 1. To guarantee that $\Phi$ obeys all of the boundary conditions that the exact $\psi_j$ do and that $\Phi$ is of the proper spin and space symmetry and is a function of the same spatial and spin coordinates as the $\psi_j$; 2. With parameters embedded in $\Phi$ whose 'optimal' values are to be determined by making $\langle \Phi |H| \Phi \rangle$ a minimum.
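The upper-bound property can be checked numerically. A minimal sketch (assuming $\hbar = m = L = 1$) for the particle-in-a-box model with the trial function $\Phi(x) = x(1-x)$, which obeys the boundary conditions $\Phi(0)=\Phi(1)=0$, gives a variational energy about 1.3% above the exact ground-state energy $\pi^2/2$:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoid-rule quadrature (written out for NumPy-version safety)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Particle in a box with hbar = m = L = 1: exact ground-state energy
E_exact = np.pi**2 / 2.0                 # ~4.9348

# Trial function phi(x) = x(1 - x) obeys the boundary conditions
x = np.linspace(0.0, 1.0, 100001)
phi = x * (1.0 - x)
dphi = 1.0 - 2.0 * x                     # analytic derivative of phi

# <phi|H|phi> = (1/2) * integral of (phi')^2 dx (after integration by parts)
numerator = 0.5 * trapezoid(dphi**2, x)
denominator = trapezoid(phi**2, x)       # <phi|phi>, since phi is not normalized
E_trial = numerator / denominator        # = 5.0, an upper bound on E_exact
```

The trial energy (exactly $5$ for this function) lies above $E_0 = \pi^2/2 \approx 4.9348$, as the inequality requires.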
It is perfectly acceptable to vary any parameters in $\Phi$ to attain the lowest possible value for $\langle \Phi |H| \Phi \rangle$ because the proof outlined above constrains this expectation value to be above the true lowest eigenstate's energy $E_0$ for any $\Phi$. The philosophy then is that the $\Phi$ that gives the lowest $\langle \Phi |H| \Phi\rangle$ is the best because its expectation value is closest to the exact energy.

Linear Variational Calculations

Quite often a trial wavefunction is expanded as a linear combination of other functions (not the eigenfunctions of the Hamiltonian, since they are not known) $\Phi = \sum_J^N C_J |\Phi_J \rangle. \label{Ex1}$ In these cases, one says that a 'linear variational' calculation is being performed. The set of functions $\{\Phi_J\}$ are usually constructed to obey all of the boundary conditions that the exact state $\Psi$ obeys, to be functions of the same coordinates as $\Psi$, and to be of the same spatial and spin symmetry as $\Psi$. Beyond these conditions, the $\{\Phi_J\}$ are nothing more than members of a set of functions that are convenient to deal with (e.g., convenient for evaluating the Hamiltonian matrix elements $\langle \Phi_I|H|\Phi_J \rangle$) and that can, in principle, be made complete if more and more such functions are included in the expansion in Equation $\ref{Ex1}$ (i.e., as $N$ is increased). For such a trial wavefunction, the energy depends quadratically on the 'linear variational' $C_J$ coefficients: $\langle \Phi |H| \Phi \rangle = \sum_{I,J}^{N,N} C_I^*C_J \langle \Phi_I|H|\Phi_J \rangle. \nonumber$ Minimization of this energy with the constraint that $\Phi$ remain normalized, i.e., $\langle \Phi|\Phi \rangle = \sum\limits_{IJ} C_I^*C_J \langle \Phi_I | \Phi_J \rangle = 1, \nonumber$ gives rise to a so-called secular or eigenvalue-eigenvector problem: $\sum\limits_J [\langle \Phi_I|H|\Phi_J \rangle - E \langle \Phi_I|\Phi_J \rangle] C_J = \sum\limits_J [H_{IJ} - E S_{IJ} ]C_J = 0. \nonumber$ If the functions $\{|\Phi_J\rangle \}$ are orthonormal, then the overlap matrix $S$ reduces to the unit matrix and the above generalized eigenvalue problem reduces to the more familiar form: $\sum\limits_J^N H_{IJ}C_J = E C_I . \nonumber$ The secular problem, in either form, has as many eigenvalues $E_i$ and eigenvectors $\{C_{iJ}\}$ as the dimension $N$ of the $H_{IJ}$ matrix. It can also be shown that between successive pairs of the eigenvalues obtained by solving the secular problem at least one exact eigenvalue must occur (i.e., $E_{i+1} > E_{\text{exact}} > E_i$, for all $i$). This observation is referred to as 'the bracketing theorem'.

Variational methods, in particular the linear variational method, are the most widely used approximation techniques in quantum chemistry. To implement such a method one needs to know the Hamiltonian $H$ whose energy levels are sought and one needs to construct a trial wavefunction in which some 'flexibility' exists (e.g., as in the linear variational method where the $C_J$ coefficients can be varied). In Section 6 this tool will be used to develop several of the most commonly used and powerful molecular orbital methods in chemistry.
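The secular problem described above can be sketched numerically (a minimal illustration, again assuming $\hbar = m = L = 1$): the particle-in-a-box ground state is expanded in a small, non-orthogonal polynomial basis obeying the boundary conditions, and the generalized problem $HC = ESC$ is solved by symmetric orthogonalization with $S^{-1/2}$. The lowest root stays above the exact energy $\pi^2/2$, as the variational theorem requires.

```python
import numpy as np

# Particle in a box (hbar = m = L = 1); exact ground-state energy is pi^2/2
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

# Non-orthogonal basis obeying the boundary conditions: phi_n = x^n (1 - x), n = 1..4
N = 4
basis  = [x**n * (1.0 - x) for n in range(1, N + 1)]
dbasis = [n * x**(n - 1) * (1.0 - x) - x**n for n in range(1, N + 1)]  # analytic derivatives

def integral(f, g):
    """Trapezoid-rule quadrature of f*g on the grid."""
    h = f * g
    return float(np.sum(h[1:] + h[:-1]) * dx / 2.0)

# Overlap matrix S_IJ = <Phi_I|Phi_J> and Hamiltonian matrix H_IJ = (1/2)<Phi_I'|Phi_J'>
S = np.array([[integral(basis[i], basis[j]) for j in range(N)] for i in range(N)])
H = np.array([[0.5 * integral(dbasis[i], dbasis[j]) for j in range(N)] for i in range(N)])

# Solve the generalized problem H C = E S C via symmetric orthogonalization S^(-1/2)
s, U = np.linalg.eigh(S)
X = U @ np.diag(s**-0.5) @ U.T
E_roots, C = np.linalg.eigh(X @ H @ X)
E0 = E_roots[0]                          # lowest root; an upper bound on pi^2/2
```

With only four basis functions the lowest root already lies within about $10^{-4}$ of $\pi^2/2 \approx 4.9348$, and never below it.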
Perturbation theory is the second most widely used approximation method in quantum chemistry. It allows one to estimate the splittings and shifts in energy levels, and the changes in wavefunctions, that occur when an external field (e.g., an electric or magnetic field, or a field due to a surrounding set of 'ligands', i.e., a crystal field) is applied to a species whose 'unperturbed' states are known, or when a previously-ignored term in the Hamiltonian is included. These 'perturbations' in energies and wavefunctions are expressed in terms of the (complete) set of unperturbed eigenstates. Assuming that all of the wavefunctions $\Phi_k$ and energies $E_k^0$ belonging to the unperturbed Hamiltonian $H^0$ are known, $H^0 \Phi_k = E_k^0 \Phi_k, \nonumber$ and given that one wishes to find the eigenstates $(\psi_k$ and $E_k)$ of the perturbed Hamiltonian $H = H^0 + \lambda V, \nonumber$ perturbation theory expresses $\psi_k$ and $E_k$ as power series in the perturbation strength $\lambda$: $\psi_k = \sum\limits_{n=0}^{\infty} \lambda^n \psi_k^{(n)} \nonumber$ $E_k = \sum\limits_{n=0}^{\infty}\lambda^nE_k^{(n)}. \nonumber$ The systematic development of the equations needed to determine the $E_k^{(n)}$ and the $\psi_k^{(n)}$ is presented in Appendix D. Here, we simply quote the few lowest-order results. The zeroth-order wavefunctions and energies are given in terms of the solutions of the unperturbed problem as follows: $\psi_k^{(0)}=\Phi_k \nonumber$ and $E_k^{(0)} = E_k^0. \nonumber$ This simply means that one must be willing to identify one of the unperturbed states as the 'best' approximation to the state being sought. This, of course, implies that one must therefore strive to find an unperturbed model problem, characterized by $H^0$, that represents the true system as accurately as possible, so that one of the $\Phi_k$ will be as close as possible to $\psi_k$.
The first-order energy correction is given in terms of the zeroth-order (i.e., unperturbed) wavefunction as: $E_k^{(1)} = \langle \Phi_k | V | \Phi_k \rangle, \nonumber$ which is identified as the average value of the perturbation taken with respect to the unperturbed function $\Phi_k$. The so-called first-order wavefunction $\psi_k^{(1)}$, expressed in terms of the complete set of unperturbed functions $\{\Phi_J\}$, is: $\psi_k^{(1)} = \sum\limits_{j\neq k} \dfrac{\langle \Phi_j|V|\Phi_k \rangle}{E_k^0 - E_j^0} | \Phi_j \rangle, \nonumber$ and the second-order correction to the wavefunction is expressed as $\psi_k^{(2)} = \sum\limits_{j \neq k} \dfrac{1}{E_k^0 - E_j^0}\sum\limits_{l\neq k} \left[ \langle \Phi_j| V |\Phi_l \rangle -\delta_{j,l}E_k^{(1)} \right] \dfrac{\langle \Phi_l| V | \Phi_k \rangle}{E_k^0 - E_l^0}| \Phi_j \rangle. \nonumber$ An essential point about perturbation theory is that the energy corrections $E_k^{(n)}$ and wavefunction corrections $\psi_k^{(n)}$ are expressed in terms of integrals over the unperturbed wavefunctions $\Phi_k$ involving the perturbation (i.e., $\langle \Phi_j| V |\Phi_l \rangle$) and the unperturbed energies $E_j^0$. Perturbation theory is most useful when one has, in hand, the solutions to an unperturbed Schrödinger equation that is reasonably 'close' to the full Schrödinger equation whose solutions are being sought. In such a case, it is likely that low-order corrections will be adequate to describe the energies and wavefunctions of the full problem. It is important to stress that although the solutions to the full 'perturbed' Schrödinger equation are expressed, as above, in terms of sums over all states of the unperturbed Schrödinger equation, it is improper to speak of the perturbation as creating excited-state species.
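These low-order formulas can be tried out on a small matrix model. The sketch below (with hypothetical numbers for the $E_k^0$ and the matrix elements $\langle \Phi_j|V|\Phi_l\rangle$, and $\lambda = 1$) evaluates the first-order energy $\langle \Phi_k|V|\Phi_k\rangle$ and the standard second-order energy correction $\sum_{j\neq k} |\langle \Phi_j|V|\Phi_k\rangle|^2/(E_k^0 - E_j^0)$ that follows from the first-order wavefunction above, and compares the result with exact diagonalization:

```python
import numpy as np

# Hypothetical unperturbed levels E_k^0 (H^0 is diagonal in its own eigenbasis)
E0 = np.array([0.0, 1.0, 3.0])

# Hypothetical perturbation matrix elements <Phi_j|V|Phi_l> (Hermitian, small)
V = np.array([[0.00, 0.05, 0.02],
              [0.05, 0.10, 0.04],
              [0.02, 0.04, -0.08]])

k = 0                                    # correct the lowest state
E1 = V[k, k]                             # first order: <Phi_k|V|Phi_k>
E2 = sum(V[j, k]**2 / (E0[k] - E0[j])    # second order: sum_{j!=k} |V_jk|^2 / (E_k^0 - E_j^0)
         for j in range(len(E0)) if j != k)
E_pt = E0[k] + E1 + E2                   # perturbative estimate through second order

# Compare with the exact lowest eigenvalue of H = H0 + V
E_exact = np.linalg.eigvalsh(np.diag(E0) + V)[0]
```

For the ground state the second-order correction is necessarily negative (every denominator $E_0^0 - E_j^0$ is negative), and because the coupling is small the second-order estimate already agrees with the exact eigenvalue to a few parts in $10^4$.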
For example, the polarization of the 1s orbital of the hydrogen atom caused by the application of a static external electric field of strength $E$ along the $z$-axis is described, in first-order perturbation theory, through the sum $\sum\limits_{n=2}^{\infty} \phi_{np_0} \dfrac{\langle \phi_{np_0} | E\,e\,r\cos\theta | 1s\rangle}{E_{1s} - E_{np_0}} \nonumber$ over all $p_z = p_0$ orbitals labeled by principal quantum number $n$. The coefficient multiplying each $p_0$ orbital depends on the energy gap corresponding to the 1s-to-np 'excitation' as well as the electric dipole integral $\langle \phi_{np_0} | E\,e\,r\cos\theta |1s \rangle$ between the 1s orbital and the $np_0$ orbital. This sum describes the polarization of the 1s orbital in terms of functions that have $p_0$ symmetry; by combining an s orbital and $p_0$ orbitals, one can form a 'hybrid-like' orbital that is nothing but a distorted 1s orbital. The appearance of the excited $np_0$ orbitals has nothing to do with forming excited states; these $np_0$ orbitals simply provide a set of functions that can describe the response of the 1s orbital to the applied electric field. The relative strengths and weaknesses of perturbation theory and the variational method, as applied to studies of the electronic structure of atoms and molecules, are discussed in Section 6.