However, just as with the circular aperture (Airy pattern), a single slit also yields a diffraction pattern when illuminated. Both are examples of the superposition principle, because the photons that arrive at the detection screen can get there from any point within the aperture or slit. So, in general, we calculate the diffraction pattern as a Fourier transform of the coordinate-space geometry, whether slit, circle, or something more complicated. The following tutorial explores single-slit diffraction and the uncertainty principle.
A Quantum Mechanical Interpretation of Single‐slit Diffraction
Diffraction has a simple quantum mechanical interpretation based on the uncertainty principle. Or we could say diffraction is an excellent way to illustrate the uncertainty principle.
A screen with a single slit of width, w, is illuminated with a coherent photon or particle beam. The normalized coordinate‐space wave function at the slit screen is,
$\mathrm{w} :=1 \qquad \Psi(\mathrm{x}, \mathrm{w}) :=\mathrm{if}\left[\left(\mathrm{x} \geq-\frac{\mathrm{w}}{2}\right) \cdot\left(\mathrm{x} \leq \frac{\mathrm{w}}{2}\right), \frac{1}{\sqrt{\mathrm{w}}}, 0\right] \qquad \mathrm{x} :=\frac{-\mathrm{w}}{2}, \frac{-\mathrm{w}}{2}+0.005 \ldots \frac{\mathrm{w}}{2} \nonumber$
The coordinate‐space probability density, $|\Psi(x, w)|^{2}$, is displayed for a slit of unit width below
The slit‐screen measures position, it localizes the incident beam in the x‐direction. According to the uncertainty principle, because position and momentum are complementary, or conjugate, observables, this measurement must be accompanied by a delocalization of the x‐component of the momentum. This can be seen by a Fourier transform of $\Psi (x,w)$ into momentum space to obtain the momentum wave function, $\Phi (p_{x},w)$.
$\Phi\left(\mathrm{p}_{\mathrm{x}}, \mathrm{w}\right) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\frac{\mathrm{w}}{2}}^{\frac{\mathrm{w}}{2}} \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{x}} \cdot \mathrm{x}\right) \cdot \frac{1}{\sqrt{\mathrm{w}}} \mathrm{dx} \text { simplify } \rightarrow \frac{\sqrt{2} \cdot \sin \left(\frac{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{w}}{2}\right)}{\sqrt{\pi} \cdot \mathrm{p}_{\mathrm{x}} \cdot \sqrt{\mathrm{w}}} \nonumber$
It is the momentum distribution, $|\Phi(p_{x} ,w)|^{2}$, shown histographically below that is projected onto the detection screen. Thus, a position measurement at the detection screen is also effectively a measure of the x‐component of the particle momentum.
In this figure we see the spread in momentum required by the uncertainty principle, plus interference fringes due to the fact that the incident beam can emerge from anywhere within the slit, allowing for constructive and destructive interference at the detection screen. If the slit width is decreased the position is more precisely known, and the uncertainty principle demands a broadening in the momentum distribution as shown below.
Equating the uncertainty in position with the slit width, and the uncertainty in momentum with the width of the intense center of the diffraction pattern, we have in atomic units: $\Delta x \Delta p_{x}=12.6$. For a slit width of 0.5 we again find that the product of the uncertainties is 12.6.
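In fact $\Delta x \Delta p_{x} = 12.6$ is just $4\pi$ in atomic units: the first minima bounding the central peak fall at $p_{x} = \pm 2\pi/w$, so the peak width is $4\pi/w$ while $\Delta x = w$. The short Python sketch below (NumPy/SciPy; the function name `phi` and the sample point are my own choices) evaluates the Fourier transform by quadrature, checks it against the analytic result above, and confirms that the product is the same for all three slit widths discussed here.

```python
import numpy as np
from scipy.integrate import quad

def phi(px, w):
    """Momentum wave function of a slit of width w, by numerical quadrature (atomic units)."""
    real = quad(lambda x: np.cos(px * x) / np.sqrt(w), -w / 2, w / 2)[0]
    return real / np.sqrt(2 * np.pi)   # the sine (imaginary) part vanishes by symmetry

for w in (0.5, 1.0, 2.0):
    # compare the quadrature result with the analytic form at an arbitrary px
    px = 3.0
    analytic = np.sqrt(2 / (np.pi * w)) * np.sin(px * w / 2) / px
    assert abs(phi(px, w) - analytic) < 1e-8

    # take dx as the slit width and dp as the width of the central peak,
    # whose first minima sit at px = +/- 2*pi/w
    dx, dp = w, 4 * np.pi / w
    print(f"w = {w:3.1f}   dx*dp = {dx * dp:.1f}")   # 12.6 in every case
```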
Naturally if the slit width is increased to 2.0 the position uncertainty increases and the uncertainty in momentum decreases yielding again $\Delta x \Delta p_{x}=12.6$.
The x‐direction momentum can be expressed in terms of the wavelength of the illuminating beam and the diffraction angle using the following sequence of equations of which the second is the de Broglie relation in atomic units (h = 2$\pi$).
$\mathrm{p}_{\mathrm{x}}=\mathrm{p} \cdot \sin (\Theta) \qquad \mathrm{p}=\frac{2 \cdot \pi}{\lambda} \qquad \mathrm{p}_{\mathrm{x}}=\frac{2 \cdot \pi}{\lambda} \cdot \sin (\Theta) \nonumber$
$\Phi\left(\Theta_{\mathrm{x}}, \mathrm{w}, \lambda\right) :=\sqrt{\frac{2}{\pi \cdot \mathrm{w}}} \cdot \frac{\sin \left(\frac{\pi \cdot \mathrm{w}}{\lambda} \cdot \sin \left(\Theta_{\mathrm{x}}\right)\right)}{\frac{2 \cdot \pi}{\lambda} \cdot \sin \left(\Theta_{\mathrm{x}}\right)} \nonumber$
This allows one to explore the effect of the wavelength of the illuminating beam on the diffraction pattern. The figure below shows that a short wavelength (high momentum) illuminating beam gives rise to a narrower diffraction pattern.
The method used here to calculate single‐slit diffraction patterns (momentum‐space distribution functions) is easily extended to multiple slits, and also to diffraction at two‐dimensional masks with a variety of hole geometries.
Relevant Literature:
Primary source: "Quantum interference with slits," Thomas Marcella, European Journal of Physics 23, 615-621 (2002).
See also: "Calculating diffraction patterns," F. Rioux, European Journal of Physics 24, N1-N3 (2003). "Using Optical Transforms to Teach Quantum Mechanics," F. Rioux and B. J. Johnson, The Chemical Educator 9, 12-16 (2004). "Single-slit Diffraction and the Uncertainty Principle," F. Rioux, Journal of Chemical Education 82, 1210 (2005).
"Experimental verification of the Heisenberg uncertainty principle for hot fullerene molecules," O. Nairz, M. Arndt, and A. Zeilinger, Phys. Rev. A 65, 032109 (2002).
"Introducing the Uncertainty Principle Using Diffraction of Light Waves," Pedro L. Muino, Journal of Chemical Education 77, 1025-1027 (2000).
The next link shows how the methods used to examine these relatively simple cases can be expanded to more interesting geometries, including the DNA double helix. The following tutorial provides details regarding a simple simulation of the DNA diffraction pattern.
Simulating the DNA Diffraction Pattern
The publication of the DNA double-helix structure by X-ray diffraction in 1953 is one of the most significant scientific events of the 20th century (1). Therefore, it is important that science students and their teachers have some understanding of how this great achievement was accomplished. X-ray diffraction is conceptually simple: a source of X-rays illuminates a sample which scatters the X-rays, and a detector records the arrival of the scattered X-rays (the diffraction pattern). However, the mathematical analysis required to extract from the diffraction pattern the molecular geometry of the sample that produced it is quite formidable. Therefore, the purpose of this tutorial is to illustrate some of the elements of the mathematical analysis required to solve a structure.
The famous X-ray diffraction pattern obtained by Rosalind Franklin is shown below (2).
This X-ray picture stimulated Watson and Crick to propose the now famous double-helix structure for DNA. It was surely fortuitous that Crick had recently completed an unrelated study of the diffraction patterns of helical molecules (3).
To gain some understanding of how the experimental pattern led to the hypothesis of a double-helical structure we will work in reverse. We will assume the double-helix structure, calculate the diffraction pattern, and compare it with the experimental result. This, therefore, is a deductive exercise as opposed to the brilliant inductive accomplishment of Watson and Crick in determining the DNA structure from Franklin's experimental X-ray pattern.
The experimental pattern will be simulated by modeling DNA solely as a planar double strand of sugar-phosphate backbone groups shown below. Reference 4 provides the justification and the limitations in using two-dimensional models for three-dimensional structures when simulating X-ray diffraction experiments.
The double-strand geometry shown below was created using the following mathematics. Calculations are carried out in atomic units.
Sugar-phosphate groups per strand: $A : = 20$ Strand radius: $R : = 1$ Phase difference between strands: $0.8 \cdot \pi$
First strand:
$\mathrm{m} :=1 \ldots \mathrm{A} \quad \Theta_{\mathrm{m}} :=\frac{4 \cdot \pi \cdot \mathrm{m}}{\mathrm{A}} \quad \mathrm{y}_{\mathrm{m}} :=\mathrm{m} \quad \mathrm{x}_{\mathrm{m}} :=\mathrm{R} \cdot \cos \left(\Theta_{\mathrm{m}}\right) \nonumber$
Second strand:
$\mathrm{m} :=21 \ldots 40 \quad \Theta_{\mathrm{m}} :=\frac{4 \cdot \pi \cdot(\mathrm{m}-\mathrm{A})}{\mathrm{A}} \quad \mathrm{y}_{\mathrm{m}} :=(\mathrm{m}-\mathrm{A}) \quad \mathrm{x}_{\mathrm{m}} :=\mathrm{R} \cdot \cos \left(\Theta_{\mathrm{m}}+0.8 \cdot \pi\right) \nonumber$
$\mathrm{m} :=1 \ldots 20 \quad \mathrm{n} :=21 \ldots 40 \nonumber$
According to quantum mechanical principles, the photons illuminating this geometrical arrangement interact with all its members simultaneously thus being cast into the spatial superposition, $\Psi$, given below.
$| \Psi \rangle=\frac{1}{\sqrt{N}} \sum_{i=1}^{N} | x_{i}, y_{i} \rangle \nonumber$
This spatial wave function is then projected into momentum space by a Fourier transform to yield the theoretical diffraction pattern. What is measured at the detector according to quantum mechanics is the two-dimensional momentum distribution created by the spatial localization that occurs during illumination of the structure. If the sugar-phosphate groups are treated as point scatterers the momentum wave function is given by the following Fourier transform.
$\Phi\left(\mathrm{p}_{\mathrm{x}}, \mathrm{p}_{\mathrm{y}}\right) :=\frac{1}{2 \cdot \pi} \cdot \sum_{\mathrm{m}=1}^{40} \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{x}} \cdot \mathrm{x}_{\mathrm{m}}\right) \cdot \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{y}} \cdot \mathrm{y}_{\mathrm{m}}\right) \nonumber$
The theoretical diffraction pattern can now be displayed as the absolute magnitude squared of the momentum wave function.
$\Delta :=8 \quad \mathrm{N} :=200 \quad \mathrm{j} :=0 \ldots \mathrm{N} \quad \mathrm{p}_{\mathrm{x}} :=-\Delta+\frac{2 \cdot \Delta \cdot \mathrm{j}}{\mathrm{N}} \quad \mathrm{k} :=0 \ldots \mathrm{N} \quad \quad \mathrm{p}_{\mathrm{y}} :=-\Delta+\frac{2 \cdot \Delta \cdot \mathrm{k}}{\mathrm{N}} \nonumber$
$\text{DiffractionPattern}_{\mathrm{j}, \mathrm{k}} :=\left(|\Phi\left(\mathrm{p}_{\mathrm{x}_{j}}, \mathrm{p}_{\mathrm{y}_{k}}\right)|\right)^{2} \nonumber$
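For readers who wish to reproduce the calculation outside Mathcad, here is a minimal NumPy sketch of the same point-scatterer model. It assumes the geometry and grid defined above (A = 20 groups per strand, R = 1, a 0.8π phase shift between strands, and a 201-point momentum grid from −8 to +8 in atomic units); variable names such as `diffraction_pattern` are my own.

```python
import numpy as np

A, R, phase = 20, 1.0, 0.8 * np.pi        # groups per strand, strand radius, strand phase shift

m = np.arange(1, A + 1)
theta = 4 * np.pi * m / A
# first strand: x = R*cos(theta), y = m;  second strand: same y, cosine shifted by the phase
x = np.concatenate([R * np.cos(theta), R * np.cos(theta + phase)])
y = np.concatenate([m, m]).astype(float)

# momentum grid from -8 to +8 atomic units, 201 points per axis (as in the worksheet)
p = np.linspace(-8, 8, 201)
px, py = np.meshgrid(p, p, indexing="ij")

# coherent sum over all 40 point scatterers, then the squared modulus
phi = np.exp(-1j * (px[..., None] * x + py[..., None] * y)).sum(axis=-1) / (2 * np.pi)
diffraction_pattern = np.abs(phi) ** 2    # e.g. plt.imshow(diffraction_pattern) to display
```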
Clearly the naive model diffraction pattern presented here captures several important features of the experimental diffraction pattern. Among those are the characteristic X-shaped cross of the diffraction pattern and the missing fourth horizontal layer (indicated by arrows).
Lucas, Lisensky, and co-workers (4, 5) have simulated the DNA diffraction pattern using the optical transform method. This tutorial might therefore be considered to be a theoretical companion to their more empirical approach to the subject.
References:
1. Watson, J. D.; Crick, F. H. C. Nature 1953, 171, 737.
2. Franklin, R. E.; Gosling, R. G. Nature 1953, 171, 740.
3. Cochran, W.; Crick, F. H. C.; Vand, V. Acta Crystallogr. 1952, 5, 581.
4. Lucas, A. A.; Lambin, Ph.; Mairesse, R.; Mathot, M. J. Chem. Educ. 1999, 76, 378.
5. Lisensky, G. C.; Lucas, A. A.; Nordell, K. J.; Jackelen, A. L.; Condren, S. M.; Tobe, R. H.; Ellis, A. B. DNA Optical Transform Kit; Institute for Chemical Education: University of Wisconsin, WI, 1999.
A brief discussion of the impact of rotational symmetry in determining diffraction patterns and the concept of the quasi-crystal can be found at the following tutorial.
Crystal Structure, Rotational Symmetry and Quasicrystals
Prior to 1991 crystals were defined to be solids having only 2-, 3-, 4- and 6-fold rotational symmetry because only these rotational symmetries have the required translational periodicity to build the long-range order of a crystalline solid. Long-range order is synonymous with periodicity, requiring some unit structure which repeats itself by translation in all directions infinitely. It is easy to demonstrate that a pentagon, with its 5-fold rotational symmetry, cannot be used as a unit cell to create long-range order in a plane or in three dimensions.
The justification for this definition was that solid structures with 2-, 3-, 4- and 6-fold rotational symmetry yield discrete diffraction patterns that also have translational periodicity. Another way to put this is to say that solid structures with 2-, 3-, 4- and 6-fold rotational symmetry have reciprocal lattices that also have translational periodicity. Yet another way to put this, of course, is that the Fourier transforms of geometries with 2-, 3-, 4- and 6-fold rotational symmetry yield lattice-like momentum distributions with translational periodicity. This latter statement is preferred by the author because it emphasizes that diffraction patterns are actually the momentum distributions of the diffracted particles.
The key in this latter interpretation is that diffraction experiments involve an initial spatial localization of the radiation through interaction with the crystal lattice, followed, as required by the uncertainty principle, by a delocalization of the momentum distribution in the detection plane.
Let’s look at some examples. First we examine the Fourier transforms of two mini-lattices with two- and three-fold rotational symmetry.
Clearly both diffraction patterns exhibit translational periodicity, their repeating units being a 90 degree rotation of the spatial structure. Next we look at four-fold rotational symmetry and see that the unit cell is obvious.
Six-fold rotational symmetry is more interesting than the previous three examples, but again the unit cell is easy to find.
Now look at what happens when we consider 5-fold symmetry – the diffraction pattern generated by a pentagon.
The unit cell, the universal repeating unit, is gone. The diffraction pattern is well-defined, it has rotational symmetry and it is appealing, but it does not satisfy the criterion for translational periodicity. That’s why 5-fold rotational symmetry is excluded from the list of symmetries that can generate diffraction patterns that have translational periodicity, and why by definition crystalline solids are not supposed to have 5-fold axes, or rotational axes greater than order six.
However, in 1984 an international research team consisting of D. Shechtman, I. Blech, D. Gratias and J. W. Cahn, published “Metallic phase with long-range orientational order and no translational symmetry” in Physical Review Letters 53, 1951-1953 (1984). The crystalline metallic phases they studied produced discrete diffraction patterns that were characteristic of the 5- and 10-fold rotational symmetry axes that were prohibited by the accepted definition of a crystalline solid.
In the face of this contradictory evidence, 5-fold rotational symmetry and a well-defined diffraction pattern, the International Union of Crystallography in 1991 redefined crystal to mean any solid having a discrete diffraction pattern. However, the solid phases discovered by Shechtman and his co-workers go by the name quasicrystals, indicating that they don’t quite have the same stature as those that don’t violate the rotational symmetry rule.
The striking diffraction pattern created by a pentagon of point scatterers is shown below.
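The same Fourier-sum recipe generates this pattern. The sketch below (NumPy; the unit circumradius and the grid range are my choices) places five point scatterers at the vertices of a regular pentagon and evaluates the squared modulus of the momentum wave function.

```python
import numpy as np

# five point scatterers at the vertices of a regular pentagon on a unit circle
angles = 2 * np.pi * np.arange(5) / 5
x, y = np.cos(angles), np.sin(angles)

p = np.linspace(-15, 15, 301)             # two-dimensional momentum grid (atomic units)
px, py = np.meshgrid(p, p, indexing="ij")

phi = np.exp(-1j * (px[..., None] * x + py[..., None] * y)).sum(axis=-1) / (2 * np.pi)
pattern = np.abs(phi) ** 2                # ten-fold rotational symmetry, but no repeating unit cell
```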
In these recent examples we have been Fourier transforming from coordinate space to momentum space because the momentum distribution function is the diffraction pattern, and our experiments are set up in coordinate space. In quantum mechanics an experiment requires two steps: state preparation followed by a measurement. State preparation occurs at the slit screen and measurement at the detection screen. The following link shows how to go from the coordinate representation to the momentum representation and back again.
Single Slit Diffraction and the Fourier Transform
Slit width: $w : = 1$ Coordinate‐space wave function: $\Psi(\mathrm{x}, \mathrm{w}) :=\text { if }\left[\left(\mathrm{x} \geq-\frac{\mathrm{w}}{2}\right) \cdot\left(\mathrm{x} \leq \frac{\mathrm{w}}{2}\right), 1,0\right]$
$x :=\frac{-w}{2}, \frac{-w}{2}+0.005 \dots \frac{w}{2} \nonumber$
A Fourier transform of the coordinate‐space wave function yields the momentum wave function and the momentum distribution function, which is the diffraction pattern.
$\Phi\left(\mathrm{p}_{\mathrm{x}}, \mathrm{w}\right) :=\frac{1}{\sqrt{2 \cdot \pi \cdot \mathrm{w}}} \cdot \int_{-\frac{\mathrm{w}}{2}}^{\frac{\mathrm{w}}{2}} \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{x}} \cdot \mathrm{x}\right) d x \text { simplify } \rightarrow \frac{\sqrt{2} \cdot \sin \left(\frac{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{w}}{2}\right)}{\sqrt{\pi} \cdot \mathrm{p}_{\mathrm{x}} \cdot \sqrt{\mathrm{w}}} \nonumber$
Now Fourier transform the momentum wave function back to coordinate space and display the result. This is done numerically using large limits of integration for momentum.
$\Psi(x, w) :=\int_{-5000}^{5000} \frac{2^{\frac{1}{2}} \cdot \sin \left(\frac{1}{2} \cdot w \cdot p_{x}\right)}{\pi^{\frac{1}{2}} \cdot w^{\frac{1}{2}} \cdot p_{x}} \cdot \frac{\exp \left(i \cdot p_{x} \cdot x\right)}{\sqrt{2 \cdot \pi}} d p_{x} \nonumber$
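A numerical version of this back-transformation is sketched below (NumPy; the ±5000 momentum cutoff follows the integral above, while the grid spacing and sample points are my choices). Inside the slit it returns approximately $1/\sqrt{w}$, outside approximately zero, apart from small ripples caused by truncating the momentum integral.

```python
import numpy as np

w = 1.0
px = np.linspace(-5000.0, 5000.0, 1_000_001)         # large but finite momentum range
dpx = px[1] - px[0]

with np.errstate(invalid="ignore"):                  # the point px = 0 gives a removable 0/0
    phi = np.sqrt(2) * np.sin(px * w / 2) / (np.sqrt(np.pi) * px * np.sqrt(w))
phi[np.isnan(phi)] = np.sqrt(w / (2 * np.pi))        # fill in the limiting value at px = 0

for x in (0.0, 0.25, 0.49, 0.75, 1.0):
    # only the cosine part contributes because phi is an even function of px
    psi = (phi * np.cos(px * x)).sum() * dpx / np.sqrt(2 * np.pi)
    print(f"x = {x:4.2f}   psi = {psi:6.3f}")        # ~1/sqrt(w) inside the slit, ~0 outside
```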
Quantum chemists work almost exclusively in coordinate space because they need wave functions that will help them understand molecular structure and chemical reactivity. They need the location of the nuclear centers and electron density maps. Consequently undergraduate physical chemistry texts examine all the traditional model problems (particle in a box, rigid rotor, harmonic oscillator, hydrogen atom, hydrogen molecule ion, hydrogen molecule, etc.) using spatial wave functions. Students learn how to interpret graphical representations of the various wave functions. If knowledge is required about electron momentum, for example, expectation values are calculated using the coordinate wave function.
For the model systems listed above, it is a simple matter to carry out a Fourier transform into momentum space. This allows the same kind of visualization in momentum space that is available in the coordinate representation. The value of this capability in presenting a graphical illustration of the uncertainty principle for the particle-in-the-box (PIB) problem is illustrated in the following tutorial.
A Graphical Illustration of the Heisenberg Uncertainty Relationship
According to quantum mechanics position and momentum are conjugate variables; they cannot be simultaneously known with high precision. The uncertainty principle requires that if the position of an object is precisely known, its momentum is uncertain, and vice versa. This reciprocal relationship is captured by the well-known uncertainty relation, which says that the product of the uncertainties in position and momentum must be greater than or equal to Planck's constant divided by 4$\pi$.
$\Delta x \cdot \Delta p \geq \frac{h}{4 \cdot \pi} \nonumber$
This simple mathematical relation can be visualized using the traditional work horse - the quantum mechanical particle in a box (infinite one-dimensional potential well). The particle's ground-state wave function in coordinate space for a box of width a is shown below.
$\Psi(\mathrm{x}, \mathrm{a}) :=\sqrt{\frac{2}{\mathrm{a}}} \cdot \sin \left(\frac{\pi \cdot \mathrm{x}}{\mathrm{a}}\right) \nonumber$
To illustrate the uncertainty principle and the reciprocal relationship between position and momentum, $\Psi$(x,a) is Fourier transformed into momentum space, yielding the particle's ground-state wave function in the momentum representation.
$\Phi(p, a) :=\sqrt{\frac{1}{2 \cdot \pi}} \cdot \int_{0}^{a} \exp (-i \cdot p \cdot x) \cdot \sqrt{\frac{2}{a}} \cdot \sin \left(\frac{\pi \cdot x}{a}\right) d x \operatorname{simplify} \rightarrow \frac{\pi \cdot a \cdot\left(e^{-i \cdot a \cdot p}+1\right) \cdot \sqrt{\frac{1}{a}}}{\pi^{\frac{5}{2}}-\sqrt{\pi} \cdot a^{2} \cdot p^{2}} \nonumber$
In the figure below, the momentum distribution, $|\Phi(p, a)|^{2}$, is shown for three box sizes, a = 1, 2 and 3. The uncertainty principle is illustrated as follows: as the box size increases the position uncertainty increases and the momentum uncertainty decreases, because the momentum distribution narrows.
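A quick numerical check of this statement is sketched below (Python with SciPy quadrature, atomic units; the sample box sizes match those in the figure). It confirms that $|\Phi(p,a)|^{2}$ remains normalized for every box size while its central peak grows taller in proportion to a, which, for a fixed unit area, means the distribution is narrowing.

```python
import numpy as np
from scipy.integrate import quad

def phi2(p, a):
    """|Phi(p, a)|^2 evaluated from the transform above (atomic units)."""
    num = np.pi * a * (np.exp(-1j * a * p) + 1) * np.sqrt(1 / a)
    den = np.pi ** 2.5 - np.sqrt(np.pi) * a ** 2 * p ** 2
    return abs(num / den) ** 2

for a in (1.0, 2.0, 3.0):
    norm = quad(lambda p: phi2(p, a), -np.inf, np.inf, limit=200)[0]
    peak = phi2(0.0, a)                       # height of the central momentum peak
    print(f"a = {a:.0f}   norm = {norm:.3f}   peak height = {peak:.3f}")
```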
We use the particle-in-the-box example to introduce students to almost all the fundamental quantum mechanical concepts. When we come to spectroscopy and the chemical bond, we initially model the chemical bond as a harmonic oscillator. Here there are two physical parameters, force constant and effective mass. The following tutorial shows how the coordinate and momentum wave functions can be used to illustrate the uncertainty principle for various values of k and $\mu$.
The Harmonic Oscillator and the Uncertainty Principle
Schrödinger's equation in atomic units (h = 2$\pi$) for the harmonic oscillator has an exact analytical solution.
$\mathrm{V}(\mathrm{x}, \mathrm{k}) :=\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \quad \frac{-1}{2 \cdot \mu} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\mathrm{V}(\mathrm{x}) \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \nonumber$
The ground-state wave function (coordinate space) and energy for an oscillator with reduced mass $\mu$ and force constant k are as follows.
$\Psi(\mathrm{x}, \mathrm{k}, \mu) :=\left(\frac{\sqrt{\mathrm{k} \cdot \mu}}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\sqrt{\mathrm{k} \cdot \mu} \cdot \frac{\mathrm{x}^{2}}{2}\right) \quad \mathrm{E}(\mathrm{k}, \mu) :=\frac{1}{2} \cdot \sqrt{\frac{\mathrm{k}}{\mu}} \nonumber$
The first thing we want to illustrate is that tunneling occurs in the simple harmonic oscillator. The classical turning point is that position at which the total energy is equal to the potential energy. In other words, classically the kinetic energy is zero and the oscillator's direction is going to reverse. For the ground state the classical turning point is,
$\frac{1}{2} \cdot \sqrt{\frac{\mathrm{k}}{\mu}}=\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \text { has solution(s) } \left( \begin{array}{c}{\frac{-1}{k^{\frac{1}{4}}\cdot \mu^{\frac{1}{4}}}} \\ {\frac{1}{k^{\frac{1}{4}} \cdot \mu^{\frac{1}{4}}}} \end{array}\right) \nonumber$
From the quantum mechanical perspective the oscillator is not vibrating; it is in a stationary state. To the extent that the oscillator's wave function extends beyond the classical turning point, tunneling is occurring. The calculation below shows that the probability that tunneling occurs is independent of the values of k and $\mu$ for the ground state.
$2 \cdot\left[\int_\frac{1}{(\mathrm{k} \cdot \mu)^{\frac{1}{4}}}^{\infty}\left[\left(\frac{\sqrt{\mathrm{k} \cdot \mu}}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\sqrt{\mathrm{k} \cdot \mu} \cdot \frac{\mathrm{x}^{2}}{2}\right)\right]^{2} \mathrm{dx}\right] \Bigg|^{\text { assume, k> } 0, \mu>0}_{ \text { simplify }} \rightarrow 1-\operatorname{erf}(1)=0.157 \nonumber$
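The same k- and μ-independence can be confirmed numerically. The sketch below (SciPy quadrature; the parameter pairs and function names are mine) integrates the coordinate-space probability density beyond the classical turning point and compares the result with 1 − erf(1).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def tunnel_prob(k, mu):
    """Probability of finding the ground-state oscillator beyond its classical turning points."""
    a = np.sqrt(k * mu)                              # Gaussian exponent parameter
    psi2 = lambda x: np.sqrt(a / np.pi) * np.exp(-a * x ** 2)
    ctp = (k * mu) ** -0.25                          # classical turning point
    return 2 * quad(psi2, ctp, np.inf)[0]

for k, mu in [(1, 1), (2, 1), (0.5, 4)]:
    print(f"k = {k}, mu = {mu}:  P(tunnel) = {tunnel_prob(k, mu):.3f}")

print(f"1 - erf(1) = {1 - erf(1):.3f}")              # 0.157 in every case
```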
A Fourier transform of the coordinate wave function provides its counterpart in momentum space.
$\Phi(\mathrm{p}, \mathrm{k}, \mu) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}, \mathrm{k}, \mu) \mathrm{d} \mathrm{x} \Bigg| \begin{array}{l}{\text { assume, } \mathrm{k}>0} \\ {\text { assume, } \mu>0} \\ {\text { simplify }}\end{array}\rightarrow \frac{e^{-\frac{p^{2}}{2 \cdot \sqrt{\mu} \cdot \sqrt{k}}}}{\pi^{\frac{1}{4}} \cdot \mu^{\frac{1}{8}} \cdot k^{\frac{1}{8}}} \nonumber$
The uncertainty principle can now be illustrated by comparing the coordinate and momentum wave functions for a variety of values of k and $\mu$. For the benchmark case, k = $\mu$ = 1, we see that the coordinate and momentum wave functions are identical and the classical turning point (CTP) is 1. The classical turning point will be taken as a measure of the spatial domain of the oscillator.
• For k = 2 and $\mu$ =1, the force constant has doubled reducing the amplitude of vibration (CTP =0.841) and therefore the uncertainty in position. Consequently there is an increase in the uncertainty in momentum which is manifested by a broader momentum distribution function.
• For k = 1 and $\mu$ = 2, the increase in effective mass drops the oscillator lower in the potential well, decreasing the vibrational amplitude (CTP = 0.841), causing a decrease in $\Delta$x and an increase in $\Delta$p.
• For k = 0.5 and $\mu$ = 1, the lower force constant causes a larger vibrational amplitude (CTP = 1.189) and an accompanying increase in $\Delta$x. Consequently $\Delta$p decreases.
Force constant: $k := 0.5$ Effective mass: $\mu := 1$ Energy: $\mathrm{E}(\mathrm{k}, \mu)=0.354$ CTP: $\frac{1}{\mathrm{k}^{\frac{1}{4}} \cdot \mu^{\frac{1}{4}}}=1.189$
The uncertainties in position and momentum are calculated as shown below, because for the harmonic oscillator $\langle x \rangle = \langle p \rangle = 0$.
$\Delta x :=\sqrt{\int_{-\infty}^{\infty} x^{2} \cdot \Psi(x, k, \mu)^{2} d x}=0.841 \quad \Delta \mathrm{p} :=\sqrt{\int_{-\infty}^{\infty} \mathrm{p}^{2} \cdot \Phi(\mathrm{p}, \mathrm{k}, \mu)^{2} \mathrm{d} \mathrm{p}}=0.595 \quad \Delta x \cdot \Delta p=0.5 \nonumber$
A summary of the four cases considered is provided in the table below.
$\left( \begin{array}{cccccc}{\mu} & {k} & {CTP} & {\Delta x} & {\Delta p} & {\Delta x \Delta p} \\ {1} & {1} & {1.00} & {0.707} & {0.707} & {0.5} \\ {1} & {2} & {0.841} & {0.595} & {0.841} & {0.5} \\ {2} & {1} & {0.841} & {0.595} & {0.841} & {0.5} \\ {1} & {0.5} & {1.189} & {0.841} & {0.594} & {0.5}\end{array}\right) \nonumber$
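The entries in this table can be reproduced directly from the coordinate and momentum distributions defined above. The following sketch (SciPy quadrature; function names are mine) computes the classical turning point and both uncertainties for the same four (μ, k) pairs.

```python
import numpy as np
from scipy.integrate import quad

def case(k, mu):
    a = np.sqrt(k * mu)                                        # Gaussian exponent parameter
    psi2 = lambda x: np.sqrt(a / np.pi) * np.exp(-a * x ** 2)  # |Psi(x, k, mu)|^2
    phi2 = lambda p: np.exp(-p ** 2 / a) / np.sqrt(np.pi * a)  # |Phi(p, k, mu)|^2
    ctp = a ** -0.5                                            # = 1/(k*mu)^(1/4)
    dx = np.sqrt(quad(lambda x: x ** 2 * psi2(x), -np.inf, np.inf)[0])
    dp = np.sqrt(quad(lambda p: p ** 2 * phi2(p), -np.inf, np.inf)[0])
    return ctp, dx, dp, dx * dp

print(" mu     k    CTP     dx     dp   dx*dp")
for mu, k in [(1, 1), (1, 2), (2, 1), (1, 0.5)]:
    print(f"{mu:3}  {k:4}  " + "  ".join(f"{v:5.3f}" for v in case(k, mu)))
```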
The Wigner function, W(x,p), is a phase-space distribution that can be used to provide an alternative graphical representation of the results calculated above. As shown below it can be generated using either the coordinate or momentum wave function.
Calculate Wigner distribution:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \mathrm{ds} \nonumber$
Display Wigner distribution:
$\mathrm{N} :=50 \quad \mathrm{i} :=0 \ldots \mathrm{N} \quad \mathrm{x}_{\mathrm{i}} :=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \quad \mathrm{j} :=0 \ldots \mathrm{N} \quad \mathrm{p}_{\mathrm{j}} :=-4+\frac{8 \cdot \mathrm{j}}{\mathrm{N}} \quad \text { Wigner}_{\mathrm{i}, \mathrm{j}} :=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
Calculate Wigner distribution:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Phi\left(\mathrm{p}+\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi\left(\mathrm{p}-\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \mathrm{ds} \nonumber$
Display Wigner distribution:
$\mathrm{N} :=50 \quad \mathrm{i} :=0 \ldots \mathrm{N} \quad \mathrm{x}_{\mathrm{i}} :=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \quad \mathbf{j} :=0 \ldots \mathrm{N} \quad \mathrm{p}_{\mathrm{j}} :=-4+\frac{8 \cdot \mathrm{j}}{\mathrm{N}} \quad \text { Wigner}_{\mathrm{i}, \mathrm{j}} :=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
It is easy to derive the angular position, angular momentum uncertainty relationship for a particle on a ring using the position-momentum uncertainty equation and the classical definition of angular momentum. Visualization is provided by the next tutorial.
Demonstrating the Uncertainty Principle for Angular Momentum and Angular Position
The uncertainty relation between angular momentum and angular position can be derived from the more familiar uncertainty relation between linear momentum and position
$\Delta \mathrm{p} \cdot \Delta \mathrm{x} \geq \frac{\mathrm{h}}{4 \cdot \pi} \tag{1} \nonumber$
Consider a particle with linear momentum p moving on a circle of radius r. The particle's angular momentum is given by equation (2).
$\mathrm{L}=\mathrm{m} \cdot \mathrm{v} \cdot \mathrm{r}=\mathrm{p} \cdot \mathrm{r} \tag{2} \nonumber$
In moving a distance x on the circle the particle sweeps out an angle $\phi$ in radians.
$\phi=\frac{x}{r} \tag{3} \nonumber$
Equations (2) and (3) suggest,
$\Delta \mathrm{p}=\frac{\Delta \mathrm{L}}{\mathrm{r}} \qquad \Delta \mathrm{x}=\Delta \phi \cdot \mathrm{r} \tag{4} \nonumber$
Substitution of equations (4) into equation (1) yields the angular momentum, angular position uncertainty relation.
$\Delta \mathrm{L} \cdot \Delta \phi \geq \frac{\mathrm{h}}{4 \cdot \pi} \tag{5} \nonumber$
In addition to the Heisenberg restrictions represented by equations (1) and (5), conjugate observables are related by Fourier transforms. For example, for position and momentum it is given by equation (6) in atomic units (h = 2$\pi$).
$\langle p | x\rangle=\frac{1}{\sqrt{2 \pi}} \exp (-i p x) \tag{6} \nonumber$
Equations (2) and (3) can be used with (6) to obtain the Fourier transform between angular momentum and angular position.
$\langle L | \phi\rangle=\frac{1}{\sqrt{2 \pi}} \exp (-i L \phi) \tag{7} \nonumber$
Equations (6) and (7) are mathematical dictionaries telling us how to translate from x language to p language, or $\phi$ language to L language. The complex conjugates of (6) and (7) translate in the reverse direction, from p to x and from L to $\phi$.
The work‐horse particle‐in‐a‐box (PIB) problem can be used to provide a compelling graphical illustration of the position‐momentum uncertainty relation. The position wave function for the ground state of a PIB in a box of length a is given below.
$\langle x | \Psi\rangle=\sqrt{\frac{2}{a}} \sin \left(\frac{\pi x}{a}\right) \tag{8} \nonumber$
The conjugate momentum‐space wave function is obtained by the following Fourier transform.
$\langle p | \Psi\rangle=\int_{0}^{a}\langle p | x\rangle\langle x | \Psi\rangle d x=\frac{1}{\sqrt{\pi a}} \int_{0}^{a} \exp (-i p x) \sin \left(\frac{\pi x}{a}\right) d x \tag{9} \nonumber$
Evaluation of the integral in equation (9) yields
$\Psi(\mathrm{p}, \mathrm{a}) :=\sqrt{\mathrm{a} \cdot \pi} \cdot \frac{\exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{a})+1}{\pi^{2}-\mathrm{p}^{2} \cdot \mathrm{a}^{2}} \tag{10} \nonumber$
Plotting the momentum distribution function for several box lengths, as is done in the figure below, clearly reveals the position‐momentum uncertainty relation. The greater the box length the greater the uncertainty in position. However, as the figure shows, the greater the box length the narrower the momentum distribution, and, consequently, the smaller the uncertainty in momentum.
A similar visualization of the angular‐momentum/angular‐position uncertainty relation is also possible. Suppose a particle on a ring is prepared in such a way that its angular wave function is represented by the following gaussian function,
$\langle\phi | \Psi\rangle=\exp \left(-a \phi^{2}\right) \tag{11} \nonumber$
where the parameter a controls the width of the angular distribution. The conjugate angular momentum wave function is obtained by the following Fourier transform.
$\langle L | \Psi\rangle=\int_{-\pi}^{\pi}\langle L | \phi\rangle\langle\phi | \Psi\rangle d \phi=\frac{1}{\sqrt{2 \pi}} \int_{-\pi}^{\pi} \exp (-i L \phi) \exp \left(-a \phi^{2}\right) d \phi \tag{12} \nonumber$
Plots of $|<\phi| \Psi>\left.\right|^{2}$ and $|<L| \Psi>\left.\right|^{2}$ shown below for two values of the parameter a illustrate the angular momentum/angular position uncertainty relation. The larger the value of a, the smaller the angular positional uncertainty and the greater the angular momentum uncertainty. In other words, the greater the value of a the greater the number of angular momentum eigenstates observed.
$\mathrm{a} :=0.5 \qquad \Phi(\phi, \mathrm{a}) :=\exp \left(-\mathrm{a} \cdot \phi^{2}\right) \nonumber$
$\mathrm{L} :=-5 \ldots 5 \qquad \Psi(\mathrm{L}, \mathrm{a}) :=\int_{-\pi}^{\pi} \exp (-\mathrm{i} \cdot \mathrm{L} \cdot \phi) \Phi(\phi, \mathrm{a}) \mathrm{d} \phi \nonumber$
Make the angular position distribution narrower: $a : = 2.5$
Observe a broader distribution in angular momentum.
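A numerical sketch of the same comparison is given below (SciPy quadrature; normalizing over the displayed L values is my choice, since only the relative weights matter for the plots). The output shows the probability spreading over more angular momentum eigenstates as a increases.

```python
import numpy as np
from scipy.integrate import quad

def L_distribution(a, Lmax=5):
    """Unnormalized |<L|Psi>|^2 for the angular wave packet exp(-a*phi^2), integer L."""
    probs = []
    for L in range(-Lmax, Lmax + 1):
        re = quad(lambda f: np.cos(L * f) * np.exp(-a * f ** 2), -np.pi, np.pi)[0]
        im = quad(lambda f: -np.sin(L * f) * np.exp(-a * f ** 2), -np.pi, np.pi)[0]  # zero by symmetry
        probs.append(re ** 2 + im ** 2)
    return np.array(probs)

for a in (0.5, 2.5):
    weights = L_distribution(a)
    weights /= weights.sum()                    # normalize over the L values shown
    print(f"a = {a}:", "  ".join(f"{w:.3f}" for w in weights))
```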
The uncertainty relation between angular position and angular momentum as outlined above is a simplified version of that presented by S. Franke‐Arnold et al. in New Journal of Physics 6, 103 (2004).
Moving on to the hydrogen atom, we can use the spatial and momentum wave functions for the 1s, 2s and 3s energy states to again illustrate visually the uncertainty principle.
The Position‐Momentum Uncertainty Relation in the Hydrogen Atom
The hydrogen atom coordinate and momentum wave functions can be used to illustrate the uncertainty relation involving position and momentum.
The 1s wave function is used to calculate the average distance of the electron from the nucleus.
$\Psi_{1 s}(r) :=\frac{1}{\sqrt{\pi}} \cdot \exp (-r) \quad \mathrm{r}_{1 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{r} \cdot \Psi_{1 \mathrm{s}}(\mathrm{r})^{2} \cdot 4 \cdot \pi \cdot \mathrm{r}^{2} \mathrm{d} \mathrm{r} \quad r_{1 s}=1.500 \nonumber$
The Fourier transform of the 1s wave function yields the momentum wave function. The momentum wave function is used to calculate the average magnitude of the electron momentum.
$\Phi_{1 s}(\mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi_{1 s}(\mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin (\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \ \rightarrow 2 \cdot \frac{2^{\frac{1}{2}}}{\pi \cdot[(-1)+\mathrm{i} \cdot \mathrm{p}]^{2} \cdot(1+\mathrm{i} \cdot \mathrm{p})^{2}} \nonumber$
$\mathrm{p}_{1 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{p} \cdot\left(\left|\Phi_{1 \mathrm{s}}(\mathrm{p})\right|\right)^{2} \cdot 4 \cdot \pi \cdot \mathrm{p}^{2} \mathrm{d} \mathrm{p} \quad \quad \mathrm{p}_{1 \mathrm{s}}=0.849 \nonumber$
The 2s wave function is used to calculate the average distance of the electron from the nucleus.
$\Psi_{2 s}(r) :=\frac{1}{\sqrt{32 \cdot \pi}} \cdot(2-r) \cdot \exp \left(-\frac{r}{2}\right) \quad \mathrm{r}_{2 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{r} \cdot \Psi_{2 \mathrm{s}}(\mathrm{r})^{2} \cdot 4 \cdot \pi \cdot \mathrm{r}^{2} \mathrm{dr} \quad \mathrm{r}_{2 \mathrm{s}}=6.000 \nonumber$
The Fourier transform of the 2s wave function yields the momentum wave function. The momentum wave function is used to calculate the average magnitude of the electron momentum.
$\Phi_{2 s}(\mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi_{2 \mathrm{s}}(\mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin (\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \; \text{simplify} \ \rightarrow \frac{-16}{\pi} \cdot \frac{(-1)+4 \cdot p^{2}}{[(-1)+2 \cdot i \cdot p]^{3} \cdot(1+2 \cdot i \cdot p)^{3}} \nonumber$
$\mathrm{p}_{2 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{p} \cdot\left(\left|\Phi_{2 \mathrm{s}}(\mathrm{p})\right|\right)^{2} \cdot 4 \cdot \pi \cdot \mathrm{p}^{2} \mathrm{d} \mathrm{p} \quad \quad \mathrm{p}_{2 \mathrm{s}}=0.340 \nonumber$
The 3s wave function is used to calculate the average distance of the electron from the nucleus.
$\Psi_{3 \mathrm{s}}(\mathrm{r}) :=\frac{1}{81 \cdot \sqrt{3 \cdot \pi}} \cdot\left(27-18 \cdot \mathrm{r}+2 \cdot \mathrm{r}^{2}\right) \exp \left(\frac{-\mathrm{r}}{3}\right) \quad \mathrm{r}_{3 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{r} \cdot \Psi_{3 \mathrm{s}}(\mathrm{r})^{2} \cdot 4 \cdot \pi \cdot \mathrm{r}^{2} \mathrm{dr} \quad \mathrm{r}_{3 \mathrm{s}}=13.500 \nonumber$
The Fourier transform of the 3s wave function yields the momentum wave function. The momentum wave function is used to calculate the average magnitude of the electron momentum.
$\Phi_{3 \mathrm{s}}(\mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi_{3 s}(\mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin (\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \; \text{simplify} \ \rightarrow 18 \cdot \frac{2^{\frac{1}{2}}}{\pi} \cdot 3^{\frac{1}{2}} \cdot \frac{1-30 \cdot \mathrm{p}^{2}+81 \cdot \mathrm{p}^{4}}{[(-1)+3 \cdot \mathrm{i} \cdot \mathrm{p}]^{4} \cdot(1+3 \cdot \mathrm{i} \cdot \mathrm{p})^{4}} \nonumber$
$\mathrm{p}_{3 s} :=\int_{0}^{\infty} \mathrm{p} \cdot\left(\left|\Phi_{3 s}(\mathrm{p})\right|\right)^{2} \cdot 4 \cdot \pi \cdot \mathrm{p}^{2} \mathrm{d} \mathrm{p} \quad \quad \mathrm{p}_{3 \mathrm{s}}=0.218 \nonumber$
These results can be summarized in both tabular and graphical form.
$\left( \begin{array}{ccc}{\text {Orbital}} & {\text {Average Position}} & {\text {Average Momentum}} \\ {1 \mathrm{s}} & {1.5} & {0.849} \\ {2 \mathrm{s}} & {6.0} & {0.340} \\ {3 \mathrm{s}} & {13.5} & {0.218}\end{array}\right) \nonumber$
The table shows that the average distance of the electron from the nucleus increases from 1s to 3s, indicating an increase in the uncertainty in the location of the electron. At the same time the average magnitude of electron momentum decreases from 1s to 3s, indicating a decrease in momentum uncertainty. The spatial and momentum distribution functions shown below illustrate this effect graphically.
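The tabulated averages can be reproduced with the grid-based sketch below (NumPy; the radial and momentum cutoffs are my choices, and the angular part of the Fourier transform has been carried out analytically, leaving the familiar factor sin(pr)/(pr)).

```python
import numpy as np

# hydrogen ns coordinate-space wave functions (atomic units)
psi = {
    "1s": lambda r: np.exp(-r) / np.sqrt(np.pi),
    "2s": lambda r: (2 - r) * np.exp(-r / 2) / np.sqrt(32 * np.pi),
    "3s": lambda r: (27 - 18 * r + 2 * r ** 2) * np.exp(-r / 3) / (81 * np.sqrt(3 * np.pi)),
}

r = np.linspace(1e-6, 60.0, 30001)        # radial grid; the wave functions are negligible beyond 60
dr = r[1] - r[0]
p_grid = np.linspace(1e-3, 15.0, 1500)    # momentum grid for the <p> integral
dp = p_grid[1] - p_grid[0]

def phi(p, orb):
    """Momentum wave function; the angular integration gives the factor 4*pi*sin(p*r)/(p*r)."""
    integrand = psi[orb](r) * np.sin(p * r) / (p * r) * r ** 2
    return 4 * np.pi / np.sqrt(8 * np.pi ** 3) * integrand.sum() * dr

for orb in ("1s", "2s", "3s"):
    r_avg = (r * psi[orb](r) ** 2 * 4 * np.pi * r ** 2).sum() * dr
    phi_vals = np.array([phi(p, orb) for p in p_grid])
    p_avg = (p_grid * phi_vals ** 2 * 4 * np.pi * p_grid ** 2).sum() * dp
    print(f"{orb}:  <r> = {r_avg:6.2f}   <p> = {p_avg:.3f}")   # 1.50/0.849, 6.00/0.340, 13.50/0.218
```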
In addition to the coordinate and momentum representations, there is also the phase space approach to quantum mechanics, which physicists in particular find useful. It combines the coordinate and momentum approaches and requires a phase-space distribution function such as the Wigner function. The following tutorial illustrates the computational consistency of the three approaches to quantum mechanics. Two related tutorials dealing with quantum mechanical tunneling and the repackaging of quantum weirdness are also available.
Quantum Tunneling in Coordinate, Momentum and Phase Space
A study of quantum mechanical tunneling brings together the classical and quantum mechanical points of view. In this tutorial the harmonic oscillator will be used to analyze tunneling in coordinate-, momentum- and phase-space. The Appendix provides the position and momentum operators appropriate for these three representations.
The classical equation for the energy of a harmonic oscillator is,
$\mathrm{E}=\frac{\mathrm{p}^{2}}{2 \cdot \mu}+\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \nonumber$
The quantum mechanical counterpart is Schrödinger's equation (in atomic units, h = 2 $\pi$),
$\frac{-1}{2 \cdot \mu} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \nonumber$
In atomic units the quantum mechanical wave function in coordinate space for the harmonic oscillator ground state with reduced mass $\mu$ and force constant k is given by,
$\Psi(\mathrm{x}, \mathrm{k}, \mu) :=\left(\frac{\sqrt{\mathrm{k} \cdot \mu}}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\sqrt{\mathrm{k} \cdot \mu} \cdot \frac{\mathrm{x}^{2}}{2}\right) \nonumber$
In the interest of mathematical simplicity and expediency we will use k = $\mu$ =1. The normalized ground state wave function under these conditions is,
$\Psi(x) :=\left(\frac{1}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(\frac{-x^{2}}{2}\right) \qquad \int_{-\infty}^{\infty} \Psi(x)^{2} d x=1 \nonumber$
Solving Schrödinger's equation for this wave function yields a ground state energy of 0.5 in atomic units.
$\frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\frac{1}{2} \cdot \mathrm{x}^{2} \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \text { solve, } \mathrm{E} \rightarrow \frac{1}{2} \nonumber$
Classically a harmonic oscillator, like a pendulum, has a turning point when kinetic energy is zero and the pendulum bob changes direction. The turning point is calculated as follows using the classical expression for the energy.
$\frac{1}{2}=\frac{1}{2} \cdot \mathrm{x}^{2} \text { solve, } \mathrm{x} \rightarrow \left( \begin{array}{c}{1} \\ {-1}\end{array}\right) \nonumber$
Thus, the permissible range of position values is between -1 and +1. Position values outside this range are classically forbidden. However, quantum theory permits position values for which the total energy is less than the potential energy. This is referred to as quantum tunneling. The probability that tunneling is occurring is calculated below.
$2 \cdot \int_{1}^{\infty} \Psi(x)^{2} d x=0.157 \nonumber$
Next we move to a similar calculation in momentum space. First the coordinate wave function is Fourier transformed into momentum space and normalization is demonstrated.
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}) \mathrm{dx} \rightarrow \frac{1}{\pi^{\frac{1}{4}}} \cdot \mathrm{e}^{\frac{-1}{2} \cdot \mathrm{p}^{2}} \qquad \int_{-\infty}^{\infty}(|\Phi(\mathrm{p})|)^{2} \mathrm{d} \mathrm{p}=1 \nonumber$
Solving Schrödinger's equation in momentum space naturally gives the same energy eigenvalue.
$\frac{\mathrm{p}^{2}}{2} \cdot \Phi(\mathrm{p})-\frac{1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dp}^{2}} \Phi(\mathrm{p})=\mathrm{E} \cdot \Phi(\mathrm{p}) \text { solve, } \mathrm{E} \rightarrow \frac{1}{2} \nonumber$
And we find that the classically permissible range of momentum values is the same given the reduced mass and force constant values used in these calculations.
$\frac{1}{2}=\frac{\mathrm{p}^{2}}{2} \text { solve, } \mathrm{p} \rightarrow \left( \begin{array}{c}{1} \\ {-1}\end{array}\right) \nonumber$
Next we see that the tunneling probability in momentum space is the same as it is in coordinate space.
$2 \cdot \int_{1}^{\infty} \Phi(\mathrm{p})^{2} \mathrm{dp}=0.157 \nonumber$
Moving to phase space requires a distribution function that depends on both position and momentum. The Wigner function fits these requirements and is generated here using both the coordinate and momentum wave functions. Please see “Examining the Wigner Distribution Using Dirac Notation,” arXiv: 0912.2333 (2009) for further detail.
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \rightarrow \frac{1}{\pi} \cdot \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \nonumber$
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \cdot \int_{-\infty}^{\infty} \Phi\left(\mathrm{p}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi\left(\mathrm{p}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \rightarrow \frac{1}{\pi} \cdot \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \nonumber$
The Wigner function is normalized over position and momentum, and yields the appropriate energy expectation value for the ground state of the harmonic oscillator.
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{dx} \mathrm{d} \mathrm{p}=1 \qquad \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\left(\frac{p^{2}}{2}+\frac{x^{2}}{2}\right) \cdot \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{x} \mathrm{d} \mathrm{p}=0.5 \nonumber$
Tunneling probability in phase space is calculated as follows:
$4 \int_{1}^{\infty} \int_{1}^{\infty} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{x} \mathrm{dp}=0.025 \nonumber$
This is consistent with the separate coordinate- and momentum-space calculations: because the Wigner function factors into the product of the coordinate and momentum distributions, the phase-space result is simply the product of the two one-dimensional tunneling probabilities.
$0.157 \cdot 0.157=0.025 \nonumber$
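All three results are easy to verify numerically for k = μ = 1, using the closed forms derived above. In the sketch below (SciPy quadrature) finite upper limits stand in for infinity in the phase-space integral, which is harmless because the Gaussian is negligible there.

```python
import numpy as np
from scipy.integrate import quad, dblquad

psi2 = lambda x: np.exp(-x ** 2) / np.sqrt(np.pi)        # |Psi(x)|^2 for k = mu = 1
phi2 = lambda p: np.exp(-p ** 2) / np.sqrt(np.pi)        # |Phi(p)|^2
W    = lambda x, p: np.exp(-x ** 2 - p ** 2) / np.pi     # Wigner function

print(2 * quad(psi2, 1, np.inf)[0])                      # 0.157, coordinate space
print(2 * quad(phi2, 1, np.inf)[0])                      # 0.157, momentum space
print(4 * dblquad(W, 1, 10, 1, 10)[0])                   # 0.025, phase space (10 stands in for infinity)
```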
Appendix
The table lists the forms of the position and momentum operators in the coordinate, momentum and phase space representations. Clearly the multiplicative character of the phase space operators appeals to our classical prejudices and intuition. However, we must remind ourselves that the phase space distribution function on which they "operate" is generated from either the coordinate or momentum wave function. In the coordinate representation the momentum operator is differential; in the momentum representation the coordinate operator is differential. As is shown in other tutorials in this series, the apparent "classical character" of the phase space representation only temporarily hides the underlying quantum weirdness.
$\begin{pmatrix} \text{Operator} & \text{Coordinate Space} & \text{Momentum Space} & \text{Phase Space} \\ \text{position} & x \cdot \Box & i \cdot \frac{d}{dp} \Box & x \cdot \Box \\ \text{momentum} & \frac{1}{i} \cdot \frac{d}{dx} \Box & p \cdot \Box & p \cdot \Box \end{pmatrix} \nonumber$
Finally, I would like to point out that the elementary Fourier transforms that we started with, $< x | p >$ and $< p | x >$, can also be used to derive the mathematical form of quantum mechanical operators. This is shown in some detail in An Approach to Quantum Mechanics.
An Approach to Quantum Mechanics
The purpose of this tutorial is to introduce the basics of quantum mechanics using Dirac bracket notation while working in one dimension. Dirac notation is a succinct and powerful language for expressing quantum mechanical principles; restricting attention to one-dimensional examples reduces the possibility that mathematical complexity will stand in the way of understanding. A number of texts make extensive use of Dirac notation (1-5).
Wave-particle duality is the essential concept of quantum mechanics. De Broglie expressed this idea mathematically as $\lambda = \frac{h}{mv} = \frac{h}{p}$. On the left is the wave property $\lambda$, and on the right the particle property $mv$, the momentum. The most general coordinate space wavefunction for a free particle with wavelength $\lambda$ is the complex Euler function shown below.
$\langle x | \lambda\rangle=\exp \left(i 2 \pi \frac{x}{\lambda}\right)=\cos \left(2 \pi \frac{x}{\lambda}\right)+i \sin \left(2 \pi \frac{x}{\lambda}\right) \label{1} \nonumber$
Feynman called this equation “the most remarkable formula in mathematics.” He referred to it as “our jewel.” And indeed it is, because when it is enriched with de Broglie’s relation it serves as the foundation of quantum mechanics.
According to de Broglie's hypothesis, a particle with a well-defined wavelength also has a well-defined momentum. Therefore, we can obtain the (unnormalized) coordinate-space wavefunction for a particle with momentum p by substituting the de Broglie relation into Equation \ref{1}.
$\langle x | p\rangle=\exp \left(\frac{i p x}{\hbar}\right) \label{2} \nonumber$
When Equation \ref{2} is graphed it creates a helix about the axis of propagation (X-axis). Z is the imaginary axis and Y is the real axis. It is the simplest example of a Fourier transform, translating momentum into coordinate language. It also has in it the heart of the uncertainty principle. Everyday examples of this important mathematical formula include telephone cords, spiral notebooks and slinkies.
Quantum mechanics teaches that the wavefunction contains all the physical information about a system that can be known, and that one extracts information from the wavefunction using quantum mechanical operators. There is, therefore, an operator for each observable property.
For example, in momentum space if a particle has a well-defined momentum we write its state as $| p >$. If we operate on this state with the momentum operator $\hat{p}$, the following eigenvalue equation is satisfied.
$\hat{p} | p \rangle=p | p \rangle \label{3} \nonumber$
We say the system is in a state which is an eigenfunction of the momentum operator with eigenvalue $p$. In other words, operating on the momentum eigenfunction with the momentum operator, in momentum space, returns the momentum eigenvalue times the original momentum eigenfunction. From
$\langle p|\hat{p}| p\rangle= p\langle p | p\rangle \label{4} \nonumber$
it follows that,
$\langle p|\hat{p}=p\langle p| \label{5} \nonumber$
Equations \ref{3} and \ref{5} show that in its own space the momentum operator is a multiplicative operator, and can operate either to the right on a ket, or to the left on a bra. The same is true of the position operator in coordinate space.
To obtain the momentum operator in coordinate space, Equation \ref{3} is projected onto coordinate space by operating on the left with < x |. After inserting Equation \ref{2} we have,
$\langle x|\hat{p}| p\rangle= p\langle x | p\rangle= p \exp \left(\frac{i p x}{\hbar}\right)=\frac{\hbar}{i} \frac{d}{d x} \exp \left(\frac{i p x}{\hbar}\right)=\frac{\hbar}{i} \frac{d}{d x}\langle x | p\rangle \label{6} \nonumber$
Comparing the first and last terms reveals that
$\langle x|\hat{p}=\frac{\hbar}{i} \frac{d}{d x}\langle x| \label{7} \nonumber$
and that $\frac{\hbar}{i} \frac{d}{d x}\langle x|$ is the momentum operator in coordinate space.
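The chain of equalities in Equation \ref{6} can be checked symbolically. The SymPy sketch below (symbol names are mine) applies $\frac{\hbar}{i} \frac{d}{dx}$ to $\langle x | p \rangle$ and confirms that the momentum eigenvalue is returned.

```python
import sympy as sp

x, p, hbar = sp.symbols("x p hbar", real=True, positive=True)
bracket = sp.exp(sp.I * p * x / hbar)                 # <x|p>, Equation (2)

# apply the coordinate-space momentum operator (hbar/i) d/dx to <x|p>
result = (hbar / sp.I) * sp.diff(bracket, x)

print(sp.simplify(result - p * bracket))              # 0, so the eigenvalue p is recovered
```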
The position wavefunction in momentum space is the complex conjugate of the momentum wavefunction in coordinate space.
$\langle p | x\rangle=\langle x | p\rangle^{*}=\exp \left(\frac{-i p x}{\hbar}\right) \label{8} \nonumber$
Starting with the coordinate-space eigenvalue equation
$\hat{x} | x \rangle=x | x \rangle \label{9} \nonumber$
and using the same approach as with momentum, it is easy to show that
$\langle x|\hat{x}=x\langle x| \label{10} \nonumber$
$\langle p|\hat{x}=-\frac{\hbar}{i} \frac{d}{d p}\langle p| \label{11} \nonumber$
In summary, the two fundamental dynamical operators are position and momentum, and the two primary representations are coordinate space and momentum space. The results achieved thus far are shown in the following table.
$\begin{pmatrix} \text{Operator} & \text{Coordinate Space} & \text{Momentum Space} \\ \text{position: } \hat{x} & x\langle x| & -\frac{\hbar}{i} \frac{d}{d p}\langle p| \\ \text{momentum: } \hat{p} & \frac{\hbar}{i} \frac{d}{d x}\langle x| & p\langle p| \end{pmatrix} \nonumber$
Other quantum mechanical operators can be constructed from $\hat{x}$ and $\hat{p}$ in the appropriate representation, position or momentum. To illustrate this, Schrödinger's equation for the one-dimensional harmonic oscillator will be set up in both coordinate and momentum space using the information in the table. Schrödinger's equation is the quantum mechanical energy eigenvalue equation, and for the harmonic oscillator it looks like this initially,
$\left[\frac{\hat{p}^{2}}{2 m}+\frac{1}{2} k \hat{x}^{2}\right] | \Psi \rangle=E | \Psi \rangle \label{12} \nonumber$
The term in brackets on the left is the classical energy written as an operator without a commitment to a representation (position or momentum) for the calculation.
Most often, chemists solve Schrödinger's equation in coordinate space. Therefore, to prepare Schrödinger's equation for solving, Equation \ref{12} is projected onto coordinate space by operating on the left with < x |.
$\left\langle x\left|\left[\frac{\hat{p}^{2}}{2 m}+\frac{1}{2} k \hat{x}^{2}\right]\right| \Psi\right\rangle=\langle x|E| \Psi\rangle \label{13} \nonumber$
Using the information in the table this yields,
$\left[-\frac{\hbar^{2}}{2 m} \frac{d^{2}}{d x^{2}}+\frac{1}{2} k x^{2}\right]\langle x | \Psi\rangle= E\langle x | \Psi\rangle \label{14} \nonumber$
The square bracket on the left contains the quantum mechanical energy operator in coordinate space. Before proceeding we illustrate how the kinetic energy operator emerges as a differential operator in coordinate space using Equation \ref{7}.
$\frac{1}{2 m}\langle x|\hat{p} \hat{p}| \Psi\rangle=\frac{1}{2 m} \frac{\hbar}{i} \frac{d}{d x}\langle x|\hat{p}| \Psi\rangle=\frac{1}{2 m} \frac{\hbar}{i} \frac{d}{d x} \frac{\hbar}{i} \frac{d}{d x}\langle x | \Psi\rangle=-\frac{\hbar^{2}}{2 m} \frac{d^{2}}{d x^{2}}\langle x | \Psi\rangle \label{15} \nonumber$
Equation \ref{10} is used in a similar fashion to show that potential energy is a multiplicative operator in coordinate space.
$\frac{1}{2} k\langle x|\hat{x} \hat{x}| \Psi\rangle=\frac{1}{2} k x\langle x|\hat{x}| \Psi\rangle=\frac{1}{2} k x^{2}\langle x | \Psi\rangle \label{16} \nonumber$
Obviously the calculation could also have been set up in momentum space. It is easy to show that in the momentum representation Schrödinger's equation is
$\left[\frac{p^{2}}{2 m}-\frac{\hbar^{2} k}{2} \frac{d^{2}}{d p^{2}}\right]\langle p | \Psi\rangle= E\langle p | \Psi\rangle \label{17} \nonumber$
In momentum space the kinetic energy operator is multiplicative and the potential energy operator is differential. The one-dimensional simple harmonic oscillator problem is exactly soluble in both coordinate and momentum space. The solution can be found in any chemistry and physics text dealing with quantum mechanics, and will not be dealt with further here, other than to say that equations (14) and (17) reveal an appealing symmetry.
Unfortunately, for most applications the potential energy term in momentum space presents more of a mathematical challenge than it does for the harmonic oscillator problem. A general expression for the potential energy in the momentum representation when its form in the coordinate representation is specified is given below.
$\langle p|\hat{V}| \Psi\rangle=\iint \exp \left(\frac{i\left(p^{\prime}-p\right) x}{\hbar}\right) V(x)\left\langle p^{\prime} | \Psi\right\rangle d p^{\prime} d x \label{18} \nonumber$
To see how this integral is handled for a specific case see reference (10).
If a system is in a state which is an eigenfunction of an operator, we say the system has a well-defined value for the observable associated with the operator, for example, position, momentum, energy, etc. Every time we measure we get the same result. However, if the system is in a state that is not an eigenfunction of the operator, for example, if $\hat{o} | \Psi \rangle=| \Phi \rangle$, the system does not have a well-defined value for the observable. Then the measurement results have a statistical character and each measurement gives an unpredictable result in spite of the fact that the system is in a well-defined state $| \Psi \rangle$. Under these circumstances, all we can do is calculate a mean value for the observable. This is unheard of in classical physics where, if a system is in a well-defined state, all its physical properties are precisely determined. In quantum mechanics a system can be in a state which has a well-defined energy, but its position and momentum are un-determined.
The quantum mechanical recipe for calculating the mean value of an observable is now derived. Consider a system in the state $| \Psi \rangle$, which is not an eigenfunction of the energy operator, $\hat{H}$. A statistically meaningful number of such states are available for the purpose of measuring the energy. Quantum mechanical principles require that an energy measurement must yield one of the energy eigenvalues, $\epsilon_{i}$, of the energy operator. Therefore, the average value of the energy measurements is calculated as,
$\langle E\rangle=\frac{\sum_{i} n_{i} \varepsilon_{i}}{N} \label{19} \nonumber$
where $n_i$ is the number of times $\epsilon_{i}$ is observed and N is the total number of measurements. Therefore, $p_i = n_i/N$ is the probability that $\epsilon_{i}$ is observed. Equation \ref{19} becomes
$\langle E\rangle=\sum_{i} p_{i} \varepsilon_{i} \label{20} \nonumber$
According to quantum mechanics, for a system in the state $| \Psi \rangle, p_{i}=\langle\Psi | i\rangle\langle i | \Psi\rangle$, where the | i > are the eigenfunctions of the energy operator. Equation \ref{20} can now be re-written as,
$\langle E\rangle=\sum_{i}\langle\Psi | i\rangle\langle i | \Psi\rangle \varepsilon_{i}=\sum_{i}\langle\Psi | i\rangle \varepsilon_{i}\langle i | \Psi\rangle \label{21} \nonumber$
However, it is also true that
$\hat{H} | i \rangle=\varepsilon_{i} | i \rangle=| i \rangle \varepsilon_{i} \label{22} \nonumber$
Substitution of Equation \ref{22} into Equation \ref{21} yields
$\langle E\rangle=\sum_{i}\langle\Psi|\hat{H}| i\rangle\langle i | \Psi\rangle \label{23} \nonumber$
As eigenfunctions of the energy operator, the | i > form a complete basis set, making available the discrete completeness condition, $\sum_{i} | i \rangle\langle i|=1$, the use of which in Equation \ref{23} yields
$\langle E\rangle=\langle\Psi|\hat{H}| \Psi\rangle \label{24} \nonumber$
This formalism is general and applies to any operator-observable pair. The average value for the observed property may always be calculated as,
$\langle o\rangle=\langle\Psi|\hat{o}| \Psi\rangle \label{25} \nonumber$
These principles are now applied to a very simple problem, the particle in a box. Schrödinger's equation in coordinate space,
$-\frac{\hbar^{2}}{2 m} \frac{d^{2}}{d x^{2}}\langle x | \Psi\rangle= E\langle x | \Psi\rangle \label{26} \nonumber$
can be solved exactly, yielding the following eigenvalues and eigenfunctions,
$E_{n}=\frac{n^{2} h^{2}}{8 m a^{2}} \label{27} \nonumber$
$\langle x | \Psi_{n}\rangle=\sqrt{\frac{2}{a}} \sin \left(\frac{n \pi x}{a}\right) \label{28} \nonumber$
where a is the box dimension, m is the particle mass, and n is a quantum number restricted to integer values starting with 1.
Substitution of Equation \ref{28} into Equation \ref{26} confirms that it is an eigenfunction with the manifold of allowed eigenvalues given by Equation \ref{27}. However, Equation \ref{28} is not an eigenfunction of either the position or momentum operators, as is shown below.
$\left\langle x|\hat{x}| \Psi_{n}\right\rangle= x\langle x | \Psi_{n}\rangle= x \sqrt{\frac{2}{a}} \sin \left(\frac{n \pi x}{a}\right) \label{29} \nonumber$
$\left\langle x|\hat{p}| \Psi_{n}\right\rangle=\frac{\hbar}{i} \frac{d}{d x}\langle x | \Psi_{n}\rangle=\frac{\hbar}{i} \frac{n \pi}{a} \sqrt{\frac{2}{a}} \cos \left(\frac{n \pi x}{a}\right) \label{30} \nonumber$
To summarize, the particle in a box has a well-defined energy, but the same is not true for its position or momentum. In other words, it is not buzzing around the box executing a classical trajectory. The outcome of an energy measurement is certain, but position and momentum measurements are uncertain. All we can do is calculate the expectation value for these observables and compare the calculations to the mean values found through a statistically meaningful number of measurements.
Next we set up the calculation for the expectation value for position utilizing the recipe expressed in Equation \ref{25}.
$\langle x\rangle_{n}=\left\langle\Psi_{n}|\hat{x}| \Psi_{n}\right\rangle \label{31} \nonumber$
Evaluation of Equation \ref{31} in coordinate space requires the continuous completeness condition.
$\int_{0}^{a} | x \rangle\langle x|d x=1 \label{32} \nonumber$
Substitution of Equation \ref{32} into Equation \ref{31} gives
$\langle x\rangle_{n}=\int_{0}^{a}\left\langle\Psi_{n} | x\right\rangle\left\langle x|\hat{x}| \Psi_{n}\right\rangle d x=\int_{0}^{a}\left\langle\Psi_{n} | x\right\rangle x\langle x | \Psi_{n}\rangle d x=\frac{a}{2} \label{33} \nonumber$
The expectation value for momentum is calculated in a similar fashion,
$\langle p\rangle_{n}=\int_{0}^{a}\left\langle\Psi_{n} | x\right\rangle\left\langle x|\hat{p}| \Psi_{n}\right\rangle d x=\int_{0}^{a}\left\langle\Psi_{n} | x\right\rangle \frac{\hbar}{i} \frac{d}{d x}\langle x | \Psi_{n}\rangle d x=0 \label{34} \nonumber$
In other words, the expectation values for position and momentum are the same for all the allowed quantum states of the particle in a box.
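Both expectation values are easy to verify with a computer algebra system. The following SymPy sketch (an illustration, with $\hbar$ set to 1) reproduces Equations \ref{33} and \ref{34}.

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.Symbol('n', positive=True, integer=True)
psi = sp.sqrt(2 / a) * sp.sin(n * sp.pi * x / a)                # Equation (28)

x_ave = sp.integrate(psi * x * psi, (x, 0, a))                  # Equation (33)
p_ave = sp.integrate(psi * sp.diff(psi, x) / sp.I, (x, 0, a))   # Equation (34), with hbar = 1
print(sp.simplify(x_ave))   # a/2
print(sp.simplify(p_ave))   # 0
```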
It is now necessary to explore the meaning of $\langle x | \Psi\rangle$. It is the probability amplitude that a system in the state $| \Psi \rangle$, will be found at position x. $|\langle x | \Psi\rangle|^{2}$ or $\langle\Psi | x\rangle\langle x | \Psi\rangle$ is the probability density that a system in the state $| \Psi \rangle$, will be found at position x. Thus Equation \ref{28} is an algorithm for calculating probability amplitudes and probability densities for the position of the particle in a one-dimensional box. This, of course, is true only if $| \Psi \rangle$ is normalized.
$\langle\Psi | \Psi\rangle=\int_{0}^{a}\langle\Psi | x\rangle\langle x | \Psi\rangle d x=1 \label{35} \nonumber$
There are two ways to arrive at the integral in Equation \ref{35}. One can insert the continuous completeness relation, Equation \ref{32}, between the bra and the ket on the left side, or, equivalently, one can express $| \Psi \rangle$ as a linear superposition in the continuous basis $| x \rangle$,
$| \Psi \rangle=\int_{0}^{a} | x \rangle\langle x | \Psi\rangle d x \label{36} \nonumber$
and then project this expression onto $\langle \Psi |$.
A quantum particle is described by its wavefunction rather than by its instantaneous position and velocity; a confined quantum particle, such as a particle in a box, is not moving in any classical sense, and must be considered to be present at all points in space, properly weighted by $|\langle x | \Psi\rangle|^{2}$.
Thus, $\langle x | \Psi\rangle$ allows us to examine the coordinate space probability distribution and to calculate expectation values for observables such as was done in equations (33) and (34). Plots of $|\langle x | \Psi_{n}\rangle|^{2}$ show that the particle is distributed symmetrically in the box, and $\langle x | \Psi_{n}\rangle$, allows us to calculate the probability of finding the particle anywhere inside the box.
The coordinate-space wavefunction does not say much about momentum, other than that its average value is zero (see Equation \ref{34}). However, a momentum-space wave function, $\langle p | \Psi \rangle$, can be generated by a Fourier transform of $\langle x | \Psi \rangle$. This is accomplished by projecting Equation \ref{36} onto momentum space through multiplication on the left by $\langle p |$.
$\langle p | \Psi_{n}\rangle=\int_{0}^{a}\langle p | x\rangle\langle x | \Psi_{n}\rangle d x=\frac{1}{\sqrt{2 \pi \hbar}} \int_{0}^{a} \exp \left(-\frac{i p x}{\hbar}\right) \sqrt{\frac{2}{a}} \sin \left(\frac{n \pi x}{a}\right) d x \label{37} \nonumber$
The term preceding the integral is the normalization constant (previously ignored) for the momentum wave function. Evaluation of the integral on the right side yields,
$\langle p | \Psi_{n}\rangle= n \sqrt{a \pi \hbar^{3}}\left[\frac{1-\cos (n \pi) \exp \left(-\frac{i p a}{\hbar}\right)}{n^{2} \pi^{2} \hbar^{2}-a^{2} p^{2}}\right] \label{38} \nonumber$
Now with the continuous completeness relationship for momentum,
$\int_{-\infty}^{\infty} | p \rangle\langle p|d p=1 \label{39} \nonumber$
one can re-calculate $\langle x \rangle_{n}$ and $\langle p \rangle_{n}$ in momentum space.
$\langle x\rangle_{n}=\int_{-\infty}^{\infty}\left\langle\Psi_{n} | p\right\rangle\left\langle p|\hat{x}| \Psi_{n}\right\rangle d p=\int_{-\infty}^{\infty}\left\langle\Psi_{n} | p\right\rangle \frac{-\hbar}{i} \frac{d}{d p}\langle p | \Psi_{n}\rangle d p=\frac{a}{2} \label{40} \nonumber$
$\langle p\rangle_{n}=\int_{-\infty}^{\infty}\left\langle\Psi_{n}|\hat{p}| p\right\rangle\langle p | \Psi_{n}\rangle d p=\int_{-\infty}^{\infty}\left\langle\Psi_{n} | p\right\rangle p\langle p | \Psi_{n}\rangle d p=0 \label{41} \nonumber$
It is clear that $\langle x | \Psi_{n} \rangle$ and $\langle p | \Psi_{n} \rangle$ contain the same information; they just present it in different languages (representations). The coordinate space distribution functions for the particle in a box shown above are familiar to anyone who has studied quantum theory. However, because chemists work mainly in coordinate space, the momentum distributions are not as well known. A graphical representation of $| \langle p | \Psi_{n} \rangle |^{2}$ for the first five momentum states is shown below. The distribution functions are offset by small increments for clarity of presentation.
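For readers who wish to reproduce these curves numerically, a minimal NumPy sketch is given below. It assumes atomic units ($\hbar$ = 1) and an illustrative box length a = 1 (not values taken from the text), and it also confirms that each momentum distribution integrates to unity.

```python
import numpy as np

hbar, a = 1.0, 1.0      # illustrative choices, not values taken from the text

def phi(p, n):
    """Momentum-space wave function of the particle in a box, Equation (38)."""
    num = 1.0 - np.cos(n * np.pi) * np.exp(-1j * p * a / hbar)
    den = n**2 * np.pi**2 * hbar**2 - a**2 * p**2
    return n * np.sqrt(a * np.pi * hbar**3) * num / den

p = np.linspace(-25.0, 25.0, 2001)   # grid points avoid the removable zeros of the denominator
dp = p[1] - p[0]

for n in range(1, 6):
    density = np.abs(phi(p, n))**2               # |<p|Psi_n>|^2, the curves described above
    print(n, round(np.sum(density) * dp, 3))     # each distribution integrates to about 1
```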
As just shown, the particle in a box can be used to illustrate many fundamental quantum mechanical concepts. To demonstrate that some systems can be analyzed without solving Schrödinger’s equation we will briefly consider the particle on a ring. This model has been used to study the behavior of the $\pi$-electrons of benzene.
In order to satisfy the requirement of being single-valued, the momentum eigenfunction in coordinate space, $\langle x | p\rangle$, for a particle on a ring of radius R must satisfy the following condition,
$\langle x+2 \pi R | p\rangle=\langle x | p\rangle \label{42} \nonumber$
This leads to,
$\exp \left(\frac{i 2 \pi R p}{ \hbar}\right) \exp \left(\frac{i p x }{ \hbar}\right)=\exp \left(\frac{i p x }{ \hbar}\right) \label{43} \nonumber$
This equation can be written as,
$\exp \left(\frac{i 2 \pi R p }{ \hbar}\right)=1=\exp (i 2 \pi m) \quad \text { where } \quad \mathrm{m}=0, \pm 1, \pm 2, \ldots \label{44} \nonumber$
Comparison of the left with the far right of this equation reveals that,
$\frac{R p}{\hbar}=m \label{45} \nonumber$
Substituting $p = m \hbar / R$ into $E = p^{2} /\left(2 m_{e}\right)$ shows that the energy manifold associated with this quantum restriction is,
$E_{m}=m^{2}\left(\frac{\hbar^{2}}{2 m_{e} R^{2}}\right) \label{46} \nonumber$
The corresponding wave functions can be found in the widely used textbook authored by Atkins and de Paula (11).
There are, of course, many formulations of quantum mechanics, and all of them develop quantum mechanical principles in different ways from diverse starting points, but they are all formally equivalent. In the present approach the key concepts are de Broglie’s hypothesis as stated in Equation \ref{2}, and the eigenvalue equations (3) and (9) expressed in the momentum and coordinate representations, respectively.
Another formulation (Heisenberg’s approach) identifies the commutation relation of Equation \ref{47} as the basis of quantum theory, and adopts operators for position and momentum that satisfy the equation.
$[\hat{p}, \hat{x}]=\hat{p} \hat{x}-\hat{x} \hat{p}=\frac{\hbar}{i} \label{47} \nonumber$
Equation \ref{47} can be confirmed in both coordinate and momentum space for any state function $| \Psi \rangle$, using the operators in the table above.
$\langle x|(\hat{p} \hat{x}-\hat{x} \hat{p})| \Psi\rangle=\frac{\hbar}{i}\left(\frac{d}{d x} x-x \frac{d}{d x}\right)\langle x | \Psi\rangle=\frac{\hbar}{i}\langle x | \Psi\rangle \label{48} \nonumber$
$\langle p|(\hat{p} \hat{x}-\hat{x} \hat{p})| \Psi\rangle= i \hbar\left(p \frac{d}{d p}-\frac{d}{d p} p\right)\langle p | \Psi\rangle=\frac{\hbar}{i}\langle p | \Psi\rangle \label{49} \nonumber$
The meaning associated with equations (48) and (49) is that the observables associated with non-commuting operators cannot simultaneously have well-defined values. This, of course, is just another statement of the uncertainty principle.
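A quick symbolic check of Equations \ref{48} and \ref{49}, using SymPy with arbitrary state functions, is sketched below.

```python
import sympy as sp

hbar = sp.Symbol('hbar', positive=True)
x, p = sp.symbols('x p', real=True)
f = sp.Function('f')(x)      # an arbitrary coordinate-space state function <x|Psi>
g = sp.Function('g')(p)      # an arbitrary momentum-space state function <p|Psi>

# Coordinate space: p -> (hbar/i) d/dx, x -> multiplication by x (Equation 48)
comm_x = hbar / sp.I * sp.diff(x * f, x) - x * (hbar / sp.I * sp.diff(f, x))
print(sp.simplify(comm_x / f))    # -I*hbar, i.e. hbar/i

# Momentum space: p -> multiplication by p, x -> -(hbar/i) d/dp (Equation 49)
comm_p = p * (-hbar / sp.I * sp.diff(g, p)) - (-hbar / sp.I * sp.diff(p * g, p))
print(sp.simplify(comm_p / g))    # -I*hbar, i.e. hbar/i
```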
The famous double-slit experiment illustrates the uncertainty principle in a striking way. To illustrate this it is mathematically expedient to begin with infinitesimally thin slits. Later this restriction will be relaxed.
A screen with infinitesimally thin slits (6) at x1 and x2 projects the incident beam into a linear superposition of position eigenstates.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \label{50} \nonumber$
Expressing this state in the coordinate representation yields the following superposition of Dirac delta functions.
$\langle x | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle x | x_{1}\rangle+\langle x | x_{2}\rangle]=\frac{1}{\sqrt{2}}\left[\delta\left(x-x_{1}\right)+\delta\left(x-x_{2}\right)\right] \label{51} \nonumber$
According to the uncertainty principle this localization of the incident beam in coordinate space is accompanied by a delocalization of the x-component of the momentum, px. This can be seen by projecting $| \Psi \rangle$ onto momentum space.
$\left\langle p_{x} | \Psi\right\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p_{x} | x_{1}\right\rangle+\left\langle p_{x} | x_{2}\right\rangle\right]=\frac{1}{2 \sqrt{\pi \hbar}}\left[\exp \left(-\frac{i p_{x} x_{1}}{\hbar}\right)+\exp \left(-\frac{i p_{x} x_{2}}{\hbar}\right)\right] \nonumber$
The momentum probability distribution in the x-direction, $P\left(p_{x}\right)=\left|\left\langle p_{x} | \Psi\right\rangle\right|^{2}$, reveals the required spread in momentum, plus the interesting interference pattern in the momentum distribution that will ultimately be projected onto the detection screen. As Marcella (6) points out the detection screen is actually measuring the x-component of the momentum.
Of course, in the actual experiment the slits are not infinitesimally thin and the diffraction pattern takes on the more familiar appearance reported in the literature (7) and textbooks (8). For example, a linear superposition of Gaussian functions can be used to represent the coordinate-space wavefunction at a screen with two slits of finite width.
$\langle x | \Psi\rangle=\exp \left(-\left(x-x_{1}\right)^{2}\right)+\exp \left(-\left(x-x_{2}\right)^{2}\right) \label{53} \nonumber$
The Fourier transform of this state into momentum space leads to the momentum distribution shown in the figure below (9).
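Since the figure itself is not reproduced here, the following NumPy sketch carries out the Fourier transform numerically. The slit centers $x_{1} = -2$ and $x_{2} = 2$ and the unit choice $\hbar = 1$ are illustrative assumptions, not values taken from the text.

```python
import numpy as np

x1, x2 = -2.0, 2.0                    # illustrative slit centers
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-(x - x1)**2) + np.exp(-(x - x2)**2)     # Equation (53), unnormalized

p = np.linspace(-8.0, 8.0, 801)
# Discrete approximation to (1/sqrt(2*pi)) * Int exp(-i p x) psi(x) dx, with hbar = 1
phi = np.array([np.sum(np.exp(-1j * pk * x) * psi) * dx for pk in p]) / np.sqrt(2 * np.pi)

prob = np.abs(phi)**2                 # Gaussian single-slit envelope times cos^2 fringes
prob /= np.sum(prob) * (p[1] - p[0])  # normalize for plotting or comparison
# The fringes repeat in p with period 2*pi/(x2 - x1), about 1.57 for these choices.
```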
The double-slit experiment reveals the three essential steps in a quantum mechanical experiment:
1. state preparation (interaction of incident beam with the slit-screen)
2. measurement of observable (arrival of scattered beam at the detection screen)
3. calculation of expected results of the measurement step
The Dirac delta function appeared in Equation \ref{51}. It expresses the fact that the position eigenstates form a continuous orthogonal basis. The same, of course, is true for the momentum eigenstates.
The bracket $\langle x | x^{\prime}\rangle$ is zero unless x = x′. This expresses the condition that an object at x′ is not at x. It is instructive to expand this bracket in the momentum representation.
$\langle x | x^{\prime}\rangle=\int_{-\infty}^{\infty}\langle x | p\rangle\langle p | x^{\prime}\rangle d p=\frac{1}{2 \pi \hbar} \int_{-\infty}^{\infty} \exp \left(\frac{i p\left(x-x^{\prime}\right)}{ \hbar}\right) d p=\delta\left(x-x^{\prime}\right) \label{54} \nonumber$
The same approach for momentum yields,
$\langle p | p^{\prime}\rangle=\int_{-\infty}^{\infty}\langle p | x\rangle\langle x | p^{\prime}\rangle d x=\frac{1}{2 \pi \hbar} \int_{-\infty}^{\infty} \exp \left(\frac{-i\left(p-p^{\prime}\right) x }{ \hbar}\right) d x=\delta\left(p-p^{\prime}\right) \label{55} \nonumber$
The Dirac delta function has great utility in quantum mechanics, so it is important to be able to recognize it in its several guises.
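One such guise is the truncated Fourier integral of Equation \ref{54}. The short numerical sketch below (with $\hbar$ = 1) shows that $\sin(Lx)/(\pi x)$ behaves as a nascent delta function: its peak grows with L while its area stays near one.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

for L in (5.0, 20.0, 80.0):
    # Truncated Equation (54): (1/(2*pi)) * Int_{-L}^{L} exp(i p x) dp = sin(L x)/(pi x)
    nascent = (L / np.pi) * np.sinc(L * x / np.pi)   # np.sinc handles x = 0 safely
    print(L, round(nascent.max(), 2), round(np.sum(nascent) * dx, 3))
    # the peak grows like L/pi while the area remains close to 1
```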
The time-dependent energy operator can be obtained by adding time dependence to Equation \ref{1} so that it represents a classical one-dimensional plane wave moving in the positive x-direction.
$\langle x | \lambda\rangle\langle t | \nu\rangle=\exp \left(i 2 \pi \frac{x}{\lambda}\right) \exp (-i 2 \pi \nu t) \label{56} \nonumber$
This classical wave equation is transformed into a quantum mechanical wavefunction by using (as earlier) the de Broglie relation and E = h$\nu$.
$\langle x | p\rangle\langle t | E\rangle=\exp \left(\frac{i p x}{\hbar}\right) \exp \left(-\frac{i E t}{\hbar}\right) \label{57} \nonumber$
From this equation we obtain the important Dirac bracket relating energy and time.
$\langle t | E\rangle=\exp \left(-\frac{i E t}{\hbar}\right) \label{58} \nonumber$
The time-dependent energy operator is found by projecting the energy eigenvalue equation,
$\hat{H} | E \rangle=E | E \rangle \label{59} \nonumber$
into the time domain.
$\langle t|\hat{H}| E\rangle= E\langle t | E\rangle= E \exp \left(-\frac{i E t}{\hbar}\right)=i \hbar \frac{d\langle t | E\rangle}{d t} \label{60} \nonumber$
Comparison of the first and last terms reveals that the time-dependent energy operator is
$\langle t|\hat{H}=i \hbar \frac{d}{d t}\langle t| \label{61} \nonumber$
We see also from Equation \ref{60} that
$i \hbar \frac{d}{d t}\langle t|=E\langle t| \label{62} \nonumber$
So that in general,
$i \hbar \frac{d}{d t}\langle t | \Psi\rangle= E\langle t | \Psi\rangle \label{63} \nonumber$
Integration of Equation \ref{63} yields a general expression for the time-dependence of the wave function.
$\langle t | \Psi\rangle=\exp \left(-\frac{i E\left(t-t_{0}\right)}{\hbar}\right)\left\langle t_{0} | \Psi\right\rangle \label{64} \nonumber$
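As a quick consistency check, the following SymPy sketch confirms that Equation \ref{64} satisfies Equation \ref{63}.

```python
import sympy as sp

t, t0, E, hbar = sp.symbols('t t_0 E hbar', real=True)
psi_t0 = sp.Symbol('psi_t0')                             # <t_0|Psi>, an arbitrary amplitude
psi_t = sp.exp(-sp.I * E * (t - t0) / hbar) * psi_t0     # Equation (64)

# Equation (63): i*hbar d/dt <t|Psi> = E <t|Psi>
print(sp.simplify(sp.I * hbar * sp.diff(psi_t, t) - E * psi_t))   # 0
```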
1.18: Exploring the Origin of Schrödinger's Equations
The purpose of this tutorial is to explore the connections between Schrödinger’s equations (time-dependent and time-independent) and prior concepts in classical mechanics and quantum mechanics. For the sake of mathematical simplicity we will work in one spatial dimension.
The foundation of quantum mechanics is de Broglie’s hypothesis of wave-particle duality for matter and electromagnetic radiation. Therefore, our starting point is the equation for a classical plane wave moving in the positive x direction,
$F(x, t)=A \exp \left(i 2 \pi \frac{x}{\lambda}\right) \exp (-i 2 \pi \nu t) \nonumber$
where $\lambda$ is wavelength, $\nu$ is wave frequency, and A is wave amplitude.
$F(x,t)$ can be converted to a quantum mechanical particle wave function using the relations shown below, which are succinct mathematical expressions of de Broglie’s wave-particle hypothesis.
$\lambda=\frac{h}{p} \quad \text { and } \quad E=h \nu \nonumber$
Substitution of these equations into F(x,t) yields,
$\Psi(x, t)=A \exp \left(\frac{i p x}{\hbar}\right) \exp \left(-\frac{i E t}{\hbar}\right) \nonumber$
where $\hbar= \frac{h}{ 2 \pi}$.
The next step is to write the classical expression for the energy of a free particle,
$E=\frac{p^{2}}{2 m} \nonumber$
and ask what operations must be performed on $\Psi (x,t)$ to obtain this equation.
Clearly, with appropriate pre-multipliers, the first derivative with respect to time will yield E, and the second derivative with respect to $x$ will give kinetic energy.
$i \hbar \frac{\partial \Psi(x, t)}{\partial t}=E \Psi(x, t) \nonumber$
$-\frac{\hbar^{2}}{2 m} \frac{\partial^{2} \Psi(x, t)}{\partial x^{2}}=\frac{p^{2}}{2 m} \Psi(x, t) \nonumber$
We therefore assert that the quantum mechanical equivalent free-particle energy equation is,
$i \hbar \frac{\partial \Psi(x, t)}{\partial t}=-\frac{\hbar^{2}}{2 m} \frac{\partial^{2} \Psi(x, t)}{\partial x^{2}} \nonumber$
and name it the time-dependent Schrödinger equation. For a particle subject to a time-independent potential, V(x), we generalize this equation as follows,
$i \hbar \frac{\partial \Psi(x, t)}{\partial t}=-\frac{\hbar^{2}}{2 m} \frac{\partial^{2} \Psi(x, t)}{\partial x^{2}}+V(x) \Psi(x, t)=\hat{H}(x) \Psi(x, t) \nonumber$
An elegant expression of this equation, $i \hbar \dot{\psi}=H \psi$, can be found on Schrödinger’s tombstone in Alpbach, Austria. Because V(x) is independent of time, we assume $\Psi (x,t)$ is still separable in space and time,
$\Psi(x, t)=\Psi(x) \exp \left(-\frac{i E t}{\hbar}\right) \nonumber$
Substitution of this function into the time-dependent Schrödinger equation allows us to extract the time-independent Schrödinger equation,
$-\frac{\hbar^{2}}{2 m} \frac{\partial^{2} \Psi(x)}{\partial x^{2}}+V(x) \Psi(x)=E \Psi(x) \nonumber$
Solutions to this equation for various V(x) and the appropriate boundary conditions yield, in general, a manifold of allowed energy eigenvalues and associated eigenfunctions.
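The separation-of-variables step can be verified symbolically. The SymPy sketch below substitutes the separable form into the time-dependent equation with an arbitrary V(x) and shows that, once the time factor is divided out, what remains is the time-independent Schrödinger equation.

```python
import sympy as sp

x, t, m, hbar, E = sp.symbols('x t m hbar E', positive=True)
V = sp.Function('V')(x)                     # arbitrary time-independent potential
psi = sp.Function('psi')(x)                 # spatial factor of the separable solution

Psi = psi * sp.exp(-sp.I * E * t / hbar)    # separable trial form

# Residual of the time-dependent Schroedinger equation for this trial form
tdse = sp.I * hbar * sp.diff(Psi, t) + hbar**2 / (2 * m) * sp.diff(Psi, x, 2) - V * Psi

# Removing the common time factor leaves E*psi + (hbar^2/2m)*psi'' - V*psi,
# i.e. the time-independent Schroedinger equation rearranged to equal zero.
print(sp.simplify(tdse * sp.exp(sp.I * E * t / hbar)))
```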
1.19: Basic Quantum Mechanics in Coordinate Momentum and Phase Space
The purpose of this paper is to use calculations on the harmonic oscillator to illustrate the relationship between the coordinate, momentum and phase space representations of quantum mechanics.
First, the ground‐state coordinate space eigenfunction for the harmonic oscillator is used for several traditional quantum mechanical calculations. Then the coordinate wave function is Fourier transformed into the momentum representation, and the calculations repeated showing that the same results are obtained. Next, the coordinate (and subsequently the momentum) wave function is used to generate the Wigner phase‐space distribution function. It is then used to repeat the quantum mechanical calculations done in the coordinate and momentum representations, yielding the same results. Finally, a variational calculation is carried out in all three representations for the V = |x| potential energy function using a Gaussian trial wave function. As might be expected the three calculations yield identical results.
All calculations are carried out in atomic units (h = 2$\pi$) with the effective mass and force constant set to unity ($\mu$ = k = 1) for the sake of computational convenience.
The first three harmonic oscillator eigenfunctions are given below. While the ground‐state eigenfunction is used in the example calculations, it is easy for the interested reader to edit the companion Mathcad file to repeat the calculations for the other eigenfunctions.
$\Psi_{0}(x) :=\pi^{-\frac{1}{4}} \cdot \exp \left(-\frac{x^{2}}{2}\right) \qquad \Psi_{1}(x) :=\left(\frac{4}{\pi}\right)^{\frac{1}{4}} \cdot x \cdot \exp \left(-\frac{x^{2}}{2}\right) \qquad \Psi_{2}(x) :=(4 \cdot \pi)^{-\frac{1}{4}} \cdot\left(2 \cdot x^{2}-1\right) \cdot \exp \left(-\frac{x^{2}}{2}\right) \nonumber$
As is well-known, in coordinate space the position operator is multiplicative and the momentum operator is differential. In momentum space it is the reverse, while in phase space, both position and momentum are multiplicative operators. In Appendix A Dirac notation is used to derive the position and momentum operators in coordinate and momentum space. Reference (1) uses the Weyl transform to show that both the position and momentum operators are multiplicative in phase space. In Appendix B a deductive rationalization for the multiplicative character of the position operator in phase space is presented. The extension to the multiplicative character of the momentum operator is straightforward.
Coordinate Space Calculations
Coordinate space integral:
$\int_{-\infty}^{\infty} \Box\; d x$
Position operator:
$x \cdot \Box$
Potential energy operator:
$\frac{x^{2}}{2} \cdot \Box$
Momentum operator:
$\frac{1}{i} \cdot \frac{d}{d x} \Box$
Kinetic energy operator:
$-\frac{1}{2} \cdot \frac{d^{2}}{d x^{2}} \Box$
Display the v = 0 coordinate distribution function.
Demonstrate that the wave function is normalized and calculate $\langle x\rangle$, $\langle x^{2}\rangle$, $\langle p\rangle$, and $\langle p^{2}\rangle$. Then use these results to demonstrate that the uncertainty principle is satisfied.
$\int_{-\infty}^{\infty} \Psi_{0}(x)^{2} d x=1 \qquad \mathrm{x}_{\text {ave}} :=\int_{-\infty}^{\infty} \mathrm{x} \cdot \Psi_{0}(\mathrm{x})^{2} \mathrm{d} \mathrm{x} \rightarrow 0 \qquad \mathrm{x} 2_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \mathrm{x}^{2} \cdot \Psi_{0}(\mathrm{x})^{2} \mathrm{d} \mathrm{x} \rightarrow \frac{1}{2} \nonumber$
$\mathrm{p}_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \Psi_{0}(\mathrm{x}) \cdot \frac{1}{\mathrm{i}} \cdot \frac{\mathrm{d}}{\mathrm{dx}} \Psi_{0}(\mathrm{x}) \mathrm{d} \mathrm{x} \rightarrow 0 \qquad \mathrm{p} 2_{\text { ave }} :=-\int_{-\infty}^{\infty} \Psi_{0}(\mathrm{x}) \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi_{0}(\mathrm{x}) \mathrm{d} \mathrm{x} \rightarrow \frac{1}{2} \nonumber$
In the coordinate representation the expectation values involving position appear to be calculated classically. The average value is the sum over each value of x weighted by its probability of occurring, $\Psi(x)^{2}$. This is clearly not the case for the momentum expectation values in coordinate space. Quantum weirdness is manifest in the momentum calculations and hidden in the coordinate calculations. As mentioned above, Appendix A shows the origin of this computational difference between position and momentum expectation values in coordinate space. Basically it comes down to the fact that in its own (eigen) space an operator has special privileges: it appears to operate multiplicatively.
The uncertainty principle requires that $\Delta x \cdot \Delta p \geq \frac{1}{2}$ (in atomic units). The expectation values from above show that the harmonic oscillator is in compliance.
$\Delta x :=\sqrt{x 2_{\text { ave }}-x_{\text { ave }}^{2}} \rightarrow \frac{1}{2} \cdot 2^{\frac{1}{2}} \qquad \Delta \mathrm{p} :=\sqrt{\mathrm{p} 2_{\mathrm{ave}}-\mathrm{p}_{\mathrm{ave}}^{2}} \rightarrow \frac{1}{2} \cdot 2^{\frac{1}{2}} \qquad \Delta \mathrm{x} \cdot \Delta \mathrm{p} \rightarrow \frac{1}{2} \nonumber$
Demonstrate that $\Psi_{0}(x)$ is an eigenfunction of the energy operator and use the expectation values from above to calculate the expectation value for energy.
$\frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi_{0}(\mathrm{x})+\frac{1}{2} \cdot \mathrm{x}^{2} \cdot \Psi_{0}(\mathrm{x})=\mathrm{E} \cdot \Psi_{0}(\mathrm{x}) \text { solve, } \mathrm{E} \rightarrow \frac{1}{2} \qquad \frac{\mathrm{p} 2_{\mathrm{ave}}}{2}+\frac{\mathrm{x} 2_{\mathrm{ave}}}{2} \rightarrow \frac{1}{2} \nonumber$
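For readers working in Python rather than Mathcad, a minimal SymPy sketch of the same coordinate-space calculations (atomic units, as above) follows.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
psi0 = sp.pi ** sp.Rational(-1, 4) * sp.exp(-x**2 / 2)          # v = 0 eigenfunction

norm   = sp.integrate(psi0**2, (x, -sp.oo, sp.oo))               # 1
x_ave  = sp.integrate(x * psi0**2, (x, -sp.oo, sp.oo))           # 0
x2_ave = sp.integrate(x**2 * psi0**2, (x, -sp.oo, sp.oo))        # 1/2
p_ave  = sp.integrate(psi0 * sp.diff(psi0, x) / sp.I, (x, -sp.oo, sp.oo))    # 0
p2_ave = sp.integrate(psi0 * (-sp.diff(psi0, x, 2)), (x, -sp.oo, sp.oo))     # 1/2

uncertainty_product = sp.sqrt(x2_ave - x_ave**2) * sp.sqrt(p2_ave - p_ave**2)
print(norm, x_ave, x2_ave, p_ave, p2_ave, sp.simplify(uncertainty_product))  # ... 1/2
```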
Momentum Space Calculations
The energy operator for the harmonic oscillator is,
$\hat{H}=\frac{\hat{p}^{2}}{2 m}+\frac{1}{2} k \hat{x}^{2} \nonumber$
Most quantum mechanical problems are easier to solve in coordinate space. Because of its symmetry, the harmonic oscillator is as easy to solve in momentum space as it is in coordinate space. However, we generate the momentum wave function by Fourier transform of the coordinate‐space wave function. It is then shown that it gives the same results as the wave function in the position basis.
$\Phi_{0}(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \Psi_{0}(\mathrm{x}) \mathrm{dx} \; \text { simplify } \rightarrow \frac{\mathrm{e}^{-\frac{\mathrm{p}^{2}}{2}}}{\pi^{\frac{1}{4}}} \nonumber$
First, we demonstrate that the Fourier transform of this momentum wave function returns the coordinate space wave function.
$\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \Phi_{0}(\mathrm{p}) \text { dp simplify } \rightarrow \frac{1}{\pi^{\frac{1}{4}}} \cdot \mathrm{e}^{\frac{-1}{2} \cdot \mathrm{x}^{2}} \nonumber$
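The same forward and inverse Fourier transforms can be carried out with SymPy; the sketch below is one way to do it in atomic units.

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)
psi0 = sp.pi ** sp.Rational(-1, 4) * sp.exp(-x**2 / 2)

# Forward transform: coordinate space -> momentum space
phi0 = sp.integrate(sp.exp(-sp.I * p * x) * psi0, (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
print(sp.simplify(phi0))            # equal to exp(-p**2/2)/pi**(1/4)

# Inverse transform recovers the coordinate wave function
back = sp.integrate(sp.exp(sp.I * p * x) * phi0, (p, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
print(sp.simplify(back - psi0))     # 0
```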
Display the v = 0 momentum distribution function.
Notice that the coordinate and momentum distribution functions are identical given the parameterization of the calculation ($\mu$ = k = 1).
Momentum space integral:
$\int_{-\infty}^{\infty} \Box\; d p$
Momentum operator:
$p \cdot \Box$
Kinetic energy operator:
$\frac{p^{2}}{2} \cdot \Box$
Position operator:
$i \cdot \frac{d}{d p} \Box$
Potential energy operator:
$\frac{-1}{2} \cdot \frac{d^{2}}{d p^{2}} \Box$
Demonstrate that the wave function is normalized and calculate $\langle x\rangle$, $\langle x^{2}\rangle$, $\langle p\rangle$, and $\langle p^{2}\rangle$. Then use these results to demonstrate that the uncertainty principle is satisfied.
$\int_{-\infty}^{\infty}\left(\left|\Phi_{0}(\mathrm{p})\right|\right)^{2} \mathrm{d} \mathrm{p}=1 \quad \mathrm{p}_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \mathrm{p} \cdot \Phi_{0}(\mathrm{p})^{2} \mathrm{d} \mathrm{p} \rightarrow 0 \quad \mathrm{p} 2_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \mathrm{p}^{2} \cdot\left(\left|\Phi_{0}(\mathrm{p})\right|\right)^{2} \mathrm{dp} \rightarrow \frac{1}{2} \nonumber$
$\mathrm{x}_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \Phi_{0}(\mathrm{p}) \cdot \mathrm{i} \cdot \frac{\mathrm{d}}{\mathrm{d} \mathrm{p}} \Phi_{0}(\mathrm{p}) \mathrm{dp} \rightarrow 0 \qquad \mathrm{x} 2_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \overline{\Phi_{0}(\mathrm{p})} \cdot\left(-\frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{p}^{2}} \Phi_{0}(\mathrm{p})\right) \mathrm{dp} \rightarrow \frac{1}{2} \nonumber$
In momentum space, it is the momentum operator that appears to behave classically, and the position operator that manifests quantum weirdness.
These momentum space calculations are in compliance with the uncertainty principle.
$\Delta x :=\sqrt{x 2_{\text { ave }}-x_{\text { ave }}^{2}} \rightarrow \frac{1}{2} \cdot 2^{\frac{1}{2}} \qquad \Delta \mathrm{p} :=\sqrt{\mathrm{p} 2_{\mathrm{ave}}-\mathrm{p}_{\mathrm{ave}}^{2}} \rightarrow \frac{1}{2} \cdot 2^{\frac{1}{2}} \qquad \Delta \mathrm{x} \cdot \Delta \mathrm{p} \rightarrow \frac{1}{2} \nonumber$
Demonstrate that $\Phi_{0}(p)$ is an eigenfunction of the energy operator and use the expectation values from above to calculate the expectation value for energy.
$\frac{\mathrm{p}^{2}}{2} \cdot \Phi_{0}(\mathrm{p})-\frac{1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{p}^{2}} \Phi_{0}(\mathrm{p})=\mathrm{E} \cdot \Phi_{0}(\mathrm{p}) \text { solve, } \mathrm{E} \rightarrow \frac{1}{2} \qquad \frac{\mathrm{p} 2_{\mathrm{ave}}}{2}+\frac{\mathrm{x} 2_{\mathrm{ave}}}{2} \rightarrow \frac{1}{2} \nonumber$
In summary, we see that coordinate and momentum space calculations give the same results. However, the coordinate wave function does not tell us anything about the distribution of momentum states, only the average value. Likewise, the momentum wave function does not provide detail on the spatial distribution of the particle it represents, only the average position.
Phase Space Calculations
Phase‐space calculations require a phase‐space distribution, such as the Wigner function. Because this approach to quantum mechanics is not as familiar as the Schrödinger formulation, several important equations will be deconstructed using Dirac notation. Expressed in Dirac notation, the Wigner function resembles a classical trajectory.
$W(x, p)=\int_{-\infty}^{\infty}\left\langle\Psi | x+\frac{s}{2}\right\rangle\left\langle x+\frac{s}{2} | p\right\rangle\left\langle p | x-\frac{s}{2}\right\rangle\left\langle x-\frac{s}{2} | \Psi\right\rangle d s \nonumber$
The four Dirac brackets are read from right to left as follows: (1) is the amplitude that a particle in the state $\Psi$ has position (x ‐ s/2); (2) is the amplitude that a particle with position (x ‐ s/2) has momentum p; (3) is the amplitude that a particle with momentum p has position (x + s/2); (4) is the amplitude that a particle with position (x + s/2) is (still) in the state $\Psi$. The Wigner function is the integral of the product of these probability amplitudes over all values of s.
We get the traditional form of the Wigner distribution function by recognizing that the middle brackets, which function as a propagator between the initial and final positional states, can be combined as follows,
$\left\langle x+\frac{s}{2} | p\right\rangle\left\langle p | x-\frac{s}{2}\right\rangle=\frac{1}{\sqrt{2 \pi}} \mathrm{e}^{i p\left(x+\frac{s}{2}\right)} \frac{1}{\sqrt{2 \pi}} \mathrm{e}^{-i p\left(x-\frac{s}{2}\right)}=\frac{1}{2 \pi} \mathrm{e}^{i p s} \nonumber$
Now we can generate the Wigner function for the v = 0 harmonic oscillator state using the coordinate eigenfunction.
$\mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \pi} \cdot \int_{-\infty}^{\infty} \Psi_{0}\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi_{0}\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \text { ds simplify } \rightarrow \frac{1}{\pi} \cdot \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \nonumber$
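The same closed form can be obtained with SymPy; a minimal sketch (atomic units) is shown below.

```python
import sympy as sp

x, p, s = sp.symbols('x p s', real=True)

def psi0(arg):
    """v = 0 harmonic oscillator eigenfunction in atomic units."""
    return sp.pi ** sp.Rational(-1, 4) * sp.exp(-arg**2 / 2)

W0 = sp.integrate(psi0(x + s / 2) * sp.exp(sp.I * s * p) * psi0(x - s / 2),
                  (s, -sp.oo, sp.oo)) / (2 * sp.pi)
print(sp.simplify(W0))     # exp(-x**2 - p**2)/pi, the same result as above
```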
In coordinate space, momentum is represented by a differential operator, the first derivative with respect to position. In momentum space, position is represented by the first derivative with respect to momentum. Part of the appeal of the phase‐space approach to quantum mechanics is that both position and momentum are represented by multiplicative operators (1). Thus phase‐space quantum mechanics, at first glance, appears to more closely resemble classical mechanics than the traditional Schrödinger formulation with its differential operators.
Phase space integral:
$\int_{-\infty}^{\infty} \Box\; dx\; d p$
Position operator:
$x \cdot \Box$
Potential energy operator:
$\frac{x^{2}}{2} \cdot \Box$
Momentum operator:
$p \cdot \Box$
Kinetic energy operator:
$\frac{p^{2}}{2} \cdot \Box$
Demonstrate that the Wigner function is normalized over phase space and calculate $\langle x\rangle$, $\langle x^{2}\rangle$, $\langle p\rangle$, and $\langle p^{2}\rangle$. Then use these results to demonstrate that the uncertainty principle is satisfied. In Appendix B Dirac notation is used to deconstruct (unpack) the first two phase‐space calculations below and show that they are equivalent to the traditional quantum mechanical calculations carried out previously.
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{x} \text { dp simplify } \rightarrow 1 \nonumber$
$x_{\text { ave }} :=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{0}(x, p) \cdot x \text { dx dp simplify } \rightarrow 0$
$\mathrm{x} 2_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \cdot \mathrm{x}^{2} \mathrm{dx} \text { dp simplify } \rightarrow \frac{1}{2}$
$\mathrm{p}_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \cdot \mathrm{p} \mathrm{dx} \text { dp simplify } \rightarrow 0$
$\mathrm{p} 2_{\mathrm{ave}} :=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \cdot \mathrm{p}^{2} \mathrm{dx} \text { dp simplify } \rightarrow \frac{1}{2}$
$\Delta x :=\sqrt{x 2_{\text { ave }}-x_{\text { ave }}^{2}} \rightarrow \frac{1}{2} \cdot 2^{\frac{1}{2}} \quad \Delta p :=\sqrt{p 2_{\text { ave }}-p_{\text { ave }}^{2}} \rightarrow \frac{1}{2} \cdot 2^{\frac{1}{2}} \quad \Delta x \cdot \Delta p \rightarrow \frac{1}{2} \nonumber$
Calculate the expectation value for the total energy
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \cdot\left(\frac{\mathrm{p}^{2}}{2}+\frac{\mathrm{x}^{2}}{2}\right) \mathrm{dx} \text { dp simplify } \rightarrow \frac{1}{2} \nonumber$
In summary, the phase‐space calculations based on the Wigner function give the same results as the calculations carried out in coordinate and momentum space.
Next, we demonstrate that integrating the Wigner function over momentum space yields the coordinate distribution function. (See Appendix C for a deconstruction of this integral using Dirac notation.)
$\int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \mathrm{dp} \text { simplify } \rightarrow \frac{\mathrm{e}^{-\mathrm{x}^{2}}}{\pi^{\frac{1}{2}}} \qquad \Psi_{0}(x)^{2} \text { simplify } \rightarrow \frac{e^{-x^{2}}}{\pi^{\frac{1}{2}}} \nonumber$
Likewise, integrating the Wigner function over coordinate space yields the momentum distribution function. (See Appendix C for a deconstruction of this integral using Dirac notation.)
$\int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \mathrm{dx} \text { simplify } \rightarrow \frac{\mathrm{e}^{-\mathrm{p}^{2}}}{\pi^{\frac{1}{2}}} \qquad \Phi_{0}(\mathrm{p})^{2} \text { simplify } \rightarrow \frac{\mathrm{e}^{-\mathrm{p}^{2}}}{\pi^{\frac{1}{2}}} \nonumber$
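The normalization, the energy expectation value, and both marginal distributions of $W_{0}(x,p)$ can be confirmed in a few lines, starting from the closed form obtained above.

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)
W0 = sp.exp(-x**2 - p**2) / sp.pi       # closed form of the v = 0 Wigner function

print(sp.integrate(W0, (x, -sp.oo, sp.oo), (p, -sp.oo, sp.oo)))                          # 1
print(sp.integrate((p**2 / 2 + x**2 / 2) * W0, (x, -sp.oo, sp.oo), (p, -sp.oo, sp.oo)))  # 1/2
print(sp.integrate(W0, (p, -sp.oo, sp.oo)))     # exp(-x**2)/sqrt(pi), i.e. |Psi_0(x)|^2
print(sp.integrate(W0, (x, -sp.oo, sp.oo)))     # exp(-p**2)/sqrt(pi), i.e. |Phi_0(p)|^2
```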
Just as we have previously graphed the coordinate and momentum distribution functions, we now display the Wigner distribution function.
$\mathrm{N} :=60 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-3+\frac{6 \cdot \mathrm{i}}{\mathrm{N}} \qquad \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-5+\frac{10 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text {Wigner}_{\text{i, j}} :=\mathrm{W}_{0}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
In these phase‐space calculations W(x,p) appears to behave like a classical probability function. By eliminating the need for differential operators, it seems to have removed some of the weirdness from quantum mechanics. However, we will now see that the Wigner function, phase‐space approach only temporarily hides the weirdness. This shouldn't come as a surprise because, after all, the Wigner function was generated using a Schrödinger wave function.
To see how the weirdness is hidden we generate the Wigner function for the v = 1 harmonic oscillator state.
$\mathrm{W}_{1}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \Psi_{1}\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi_{1}\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \; \text{simplify} \ \rightarrow \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2} }\cdot \frac{2 \cdot \mathrm{x}^{2}+2 \cdot \mathrm{p}^{2}-1}{\pi} \nonumber$
Next, it is demonstrated that the Wigner functions for the ground and excited harmonic oscillator states are orthogonal over phase space.
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}_{0}(\mathrm{x}, \mathrm{p}) \cdot \mathrm{W}_{1}(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{x} \mathrm{dp} \rightarrow 0 \nonumber$
This result indicates that W1(x,p) must be negative over some part of phase space, because the graph of W0(x,p) shows that it is positive for all values of position and momentum. To explore further we display the Wigner distribution for the v = 1 harmonic oscillator state.
This graphic shows that the Wigner function is indeed negative for certain regions of phase space. This makes it impossible to interpret it as a probability distribution function. For this reason the Wigner function is frequently referred to as a quasiprobability distribution.
The Variation Method
As a final example of the equivalence of the three approaches to quantum mechanics presented, we look at a variation method calculation on a potential function that resembles (somewhat) the harmonic oscillator.
$\mathrm{V}(\mathrm{x}) :=|\mathrm{x}| \nonumber$
A Gaussian trial wave function is chosen for the coordinate space calculation.
$\Psi(x, \beta) :=\left(\frac{2 \cdot \beta}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\beta \cdot x^{2}\right) \nonumber$
The variational energy integral is evaluated.
$\mathrm{E}(\beta) :=\int_{-\infty}^{\infty} \Psi(\mathrm{x}, \beta) \cdot \frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x}, \beta) \mathrm{d} \mathrm{x}+\int_{-\infty}^{\infty} \mathrm{V}(\mathrm{x}) \cdot \Psi(\mathrm{x}, \beta)^{2} \mathrm{d} \mathrm{x} \Bigg|^{\text { assume, } \beta>0}_{\text { simplify }} \rightarrow \frac{1}{2} \cdot \frac{\beta^{2} \cdot \pi+2^{\frac{1}{2}} \cdot(\beta \cdot \pi)^{\frac{1}{2}}}{\beta \cdot \pi} \nonumber$
The momentum wave function is obtained from the coordinate wave function by Fourier transform.
$\Phi(\mathrm{p}, \beta) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}, \beta) \mathrm{d} \mathrm{x} \Bigg|^{\text { assume, } \beta>0}_{\text { simplify}} \rightarrow \frac{\exp \left(\frac{-\mathrm{p}^{2}}{4 \cdot \beta}\right)}{(2 \cdot \pi \cdot \beta)^{\frac{1}{4}}} \nonumber$

The variational energy integral is evaluated in momentum space.

$\mathrm{E}(\beta) :=\int_{-\infty}^{\infty} \overline{\Phi(\mathrm{p}, \beta)} \cdot \frac{\mathrm{p}^{2}}{2} \cdot \Phi(\mathrm{p}, \beta) \mathrm{d} \mathrm{p}+\int_{-\infty}^{\infty} \overline{\Phi(\mathrm{p}, \beta)} \cdot\left|\mathrm{i} \cdot \frac{\mathrm{d}}{\mathrm{d} \mathrm{p}} \Phi(\mathrm{p}, \beta)\right| \mathrm{d} \mathrm{p} \Bigg|^{\text { assume, } \beta>0}_{\text { simplify }} \rightarrow \frac{1}{2} \cdot \frac{\pi^{\frac{1}{2}} \cdot \beta^{\frac{3}{2}}+2^{\frac{1}{2}}}{\pi^{\frac{1}{2}} \cdot \beta^{\frac{1}{2}}} \nonumber$
The Wigner function is calculated using the coordinate wave function (the momentum wave function will yield the same result).
$\mathrm{W}(\mathrm{x}, \mathrm{p}, \beta) :=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}, \beta\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}, \beta\right) \mathrm{ds}\Bigg|^{\text { simplify }}_{\text { assume, } \beta>0}\rightarrow \frac{1}{\pi} \cdot e^{\frac{-1}{2} \cdot \frac{4 \cdot \beta^{2} \cdot x^{2}+p^{2}}{\beta}} \nonumber$
The variational energy integral in phase space is evaluated.
$\mathrm{E}(\beta) :=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \mathrm{W}(\mathrm{x}, \mathrm{p}, \beta) \cdot\left(\frac{\mathrm{p}^{2}}{2}+\mathrm{V}(\mathrm{x})\right) \mathrm{d} \mathrm{x} \mathrm{dp}\Bigg|^{\text { assume, } \beta>0}_{\text { simplify }} \rightarrow \frac{1}{2} \cdot \frac{\pi^{\frac{1}{2}} \cdot \beta^{\frac{3}{2}}+2^{\frac{1}{2}}}{\pi^{\frac{1}{2}} \cdot \beta^{\frac{1}{2}}} \nonumber$
As is to be expected, the three methods yield the same expression for the variational energy. Minimization of the energy with respect to the variational parameter, $\beta$, yields $\beta$ = 0.542 and E($\beta$) = 0.813. This is in good agreement with the result obtained by numerical integration of Schrödinger's equation for this potential, E = 0.809.
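The minimization itself is elementary: the variational energy simplifies to $E(\beta)=\beta / 2+(2 \pi \beta)^{-1 / 2}$, and setting $dE/d\beta = 0$ gives $\beta=(2 \pi)^{-1 / 3}$. A short numerical check of the quoted values is sketched below.

```python
import numpy as np

E = lambda b: b / 2 + 1 / np.sqrt(2 * np.pi * b)   # the variational energy found above
beta = (2 * np.pi) ** (-1 / 3)                     # from dE/dbeta = 0
print(round(beta, 3), round(E(beta), 3))           # 0.542  0.813
```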
Appendix A
In coordinate space, the position eigenvalue equation can be written in two equivalent forms. In both cases the operator, in its home space, extracts the eigenvalue and returns the eigenfunction. It thus appears to be a multiplicative operator.
$\hat{x} | x \rangle=x | x \rangle \quad\langle x|\hat{x}=x\langle x| \nonumber$
Using the form on the right we demonstrate that the position operator in the coordinate representation operates multiplicatively on an arbitrary state function.
$\langle x|\hat{x}| \Psi\rangle= x\langle x | \Psi\rangle= x \Psi(x) \nonumber$
Making use of the coordinate space completeness relation,
$\int | x \rangle\langle x|d x=1 \nonumber$
we can illuminate the position expectation value in the coordinate representation.
$\langle x\rangle=\langle\Psi|\hat{x}| \Psi\rangle=\int\langle\Psi | x\rangle\langle x|\hat{x}| \Psi\rangle d x=\int\langle\Psi | x\rangle x\langle x | \Psi\rangle d x=\int x|\Psi(x)|^{2} d x \nonumber$
In momentum space, the momentum eigenvalue equation can also be written in two equivalent forms. As in coordinate space, the home‐space operator extracts the eigenvalue and returns the eigenfunction.
$\hat{p} | p \rangle=p | p \rangle \quad\langle p|\hat{p}=p\langle p| \nonumber$
Using the form on the right we demonstrate that the momentum operator in momentum space operates multiplicatively on an arbitrary state function.
$\langle p|\hat{p}| \Phi\rangle= p\langle p | \Phi\rangle= p \Phi(p) \nonumber$
To proceed to a justification of the differential form of the momentum operator in coordinate space and the differential form of the position operator in momentum space requires the following Dirac bracket between position and momentum.
$\langle x | p\rangle=\langle p | x\rangle^{*}=\exp \left(\frac{i p x}{\hbar}\right) \nonumber$
This relation is obtained by substitution of the de Broglie wave equation into the Euler equation for a plane wave (2).
$\lambda=\frac{h}{m v}=\frac{h}{p} \qquad\langle x | \lambda\rangle=\exp \left(i 2 \pi \frac{x}{\lambda}\right) \nonumber$
The Dirac brackets, <x|p> and <p|x> are ubiquitous in quantum mechanics and are, essentially, a dictionary for translating from momentum language to position language and vice versa. In other words, they are momentum‐position Fourier transforms.
The momentum operator in coordinate space is obtained by projecting the momentum eigenvalue expression onto coordinate space.
$\langle x|\hat{p}| p\rangle= p\langle x | p\rangle= p \exp \left(\frac{i p x}{\hbar}\right)=\frac{\hbar}{i} \frac{d}{d x}\langle x | p\rangle \nonumber$
Comparing the first and the last terms, gives the momentum operator in position space.
$\langle x|\hat{p}=\frac{\hbar}{i} \frac{d}{d x}\langle x| \nonumber$
Using this form we demonstrate the momentum operator in coordinate space operating on an arbitrary state function.
$\langle x|\hat{p}| \Psi\rangle=\frac{\hbar}{i} \frac{d}{d x}\langle x | \Psi\rangle=\frac{\hbar}{i} \frac{d}{d x} \Psi(x) \nonumber$
Using the coordinate completeness relation, we derive the mathematical structure of the calculation of the momentum expectation value in the coordinate representation.
$\langle p\rangle=\langle\Psi|\hat{p}| \Psi\rangle=\int\langle\Psi | x\rangle\langle x|\hat{p}| \Psi\rangle d x=\int\langle\Psi | x\rangle \frac{\hbar}{i} \frac{d\langle x | \Psi\rangle}{d x} d x=\int \Psi^{*}(x) \frac{\hbar}{i} \frac{d \Psi(x)}{d x} d x \nonumber$
The position operator in momentum space is obtained by projecting the position eigenvalue expression onto momentum space.
$\langle p|\hat{x}| x\rangle= x\langle p | x\rangle= x \exp \left(\frac{-i p x}{\hbar}\right)=-\frac{\hbar}{i} \frac{d}{d p}\langle p | x\rangle \nonumber$
Comparing the first and the last terms, gives the position operator in momentum space.
$\langle p|\hat{x}=-\frac{\hbar}{i} \frac{d}{d p}\langle p| \nonumber$
Using this form we demonstrate the position operator in momentum space operating on an arbitrary state function.
$\langle p|\hat{x}| \Phi\rangle=-\frac{\hbar}{i} \frac{d}{d p}\langle p | \Phi\rangle=-\frac{\hbar}{i} \frac{d}{d p} \Phi(p) \nonumber$
The quantum mechanical calculations for the expectation values of momentum and position in momentum space are similar to the calculations in coordinate space, except that the completeness relation in momentum space is required.
$\int | p \rangle\langle p|d p=1 \nonumber$
In the text it was noted that in phase space, both position and momentum are multiplicative operators (1). The following table summarizes the operator notation for the three spaces.
$\begin{pmatrix} \text{Operator} & \text{Coordinate Space} & \text{Momentum Space} & \text{Phase Space} \\ \text{position} & x \cdot \Box & \frac{-1}{\mathrm{i}} \cdot \frac{\mathrm{d}}{\mathrm{d} \mathrm{p}} \Box & x \cdot \Box \\ \text{momentum} & \frac{1}{\mathrm{i}} \cdot \frac{\mathrm{d}}{\mathrm{d} \mathrm{x}} \Box & p \cdot \Box & p \cdot \Box \end{pmatrix} \nonumber$
Appendix B
In Dirac notation the phase‐space normalization condition is,
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W(x, p) d x d p=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\langle\Psi | x+\frac{s}{2}\rangle\left\langle x+\frac{s}{2} | p\right\rangle\langle p | x-\frac{s}{2}\rangle\left\langle x-\frac{s}{2} | \Psi\right\rangle d s d x d p \nonumber$
Utilizing the momentum completeness relation,
$\int_{-\infty}^{\infty} | p \rangle\langle p|d p=1 \nonumber$
yields,
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\left\langle\Psi | x+\frac{s}{2}\right\rangle\left\langle x+\frac{s}{2} | x-\frac{s}{2}\right\rangle\left\langle x-\frac{s}{2} | \Psi\right\rangle d s d x \nonumber$
However, $\left\langle x+\frac{s}{2} | x-\frac{s}{2}\right\rangle=\delta(s)$, so the integrand vanishes unless s = 0. Thus, for a normalized coordinate wave function we arrive at,
$\int_{-\infty}^{\infty}\langle\Psi | x\rangle\langle x | \Psi\rangle d x=1 \nonumber$
Using similar arguments it is easy to show that,
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W(x, p) x d x d p=\int_{-\infty}^{\infty}\langle\Psi | x\rangle x\langle x | \Psi\rangle d x=\langle x\rangle \nonumber$
Appendix C
To show that integrating the Wigner function over momentum space yields the coordinate distribution function, we proceed as shown below.
$\int_{-\infty}^{\infty} W(x, p) d p=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\left\langle\Psi | x+\frac{s}{2}\right\rangle\left\langle x+\frac{s}{2} | p\right\rangle\left\langle p | x-\frac{s}{2}\right\rangle\left\langle x-\frac{s}{2} | \Psi\right\rangle d s d p \nonumber$
Using the momentum completeness relation (see above) on the right side gives,
$\int_{-\infty}^{\infty} W(x, p) d p=\int_{-\infty}^{\infty}\left\langle\Psi | x+\frac{s}{2}\right\rangle\left\langle x+\frac{s}{2} | x-\frac{s}{2}\right\rangle\left\langle x-\frac{s}{2} | \Psi\right\rangle d s \nonumber$
However, the right side is zero unless s = 0, yielding
$\int_{-\infty}^{\infty} W(x, p) d p=\langle\Psi | x\rangle\langle x | \Psi\rangle=\Psi^{*}(x) \Psi(x) \nonumber$
To facilitate the demonstration that integrating the Wigner function over coordinate space yields the momentum distribution function, we first show that the Wigner function can also be generated using the momentum wave function.
$\mathrm{Wp}_{0}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \cdot \int_{-\infty}^{\infty}\Phi_{0}\left(\mathrm{p}+\frac{\mathrm{s}}{2}\right) \cdot \exp (-\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi_{0}\left(\mathrm{p}-\frac{\mathrm{s}}{2}\right) \mathrm{ds}\; \text{simplify} \rightarrow \frac{1}{\pi} \cdot \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \nonumber$
$\int_{-\infty}^{\infty} W(x, p) d x=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\langle\Phi | p+\frac{s}{2}\rangle\left\langle p+\frac{s}{2} | x\right\rangle\langle x | p-\frac{s}{2}\rangle\left\langle p-\frac{s}{2} | \Phi\right\rangle d s d x \nonumber$
Employing the coordinate completeness relation
$\int_{-\infty}^{\infty} | x \rangle\langle x|d x=1 \nonumber$
yields
$\int_{-\infty}^{\infty} W(x, p) d x=\int_{-\infty}^{\infty}\left\langle\Phi | p+\frac{s}{2}\right\rangle\left\langle p+\frac{s}{2} | p-\frac{s}{2}\right\rangle\left\langle p-\frac{s}{2} | \Phi\right\rangle d s \nonumber$
However, the right side is zero unless s = 0, yielding
$\int_{-\infty}^{\infty} W(x, p) d x=\langle\Phi | p\rangle\langle p | \Phi\rangle=\Phi^{*}(p) \Phi(p) \nonumber$
1.20: The Repackaging of Quantum Weirdness
Quantum mechanics offers its students and practitioners several significant conceptual challenges, among them differential operators, wave-particle duality, tunneling, uncertainty, superpositions, interference, entanglement, and non-local correlations. This tutorial deals with just one of these challenges - the concept of the quantum mechanical operator and how it extracts information from the wavefunction. Professor Chris Cramer (University of Minnesota) likens the wavefunction to an oracle - it knows all and tells some when properly addressed and questioned.
According to Daniel F. Styer (Oberlin College) there are at least nine formulations of quantum mechanics. In this tutorial the position and momentum operators will be examined in the coordinate, momentum and phase space formulations of quantum mechanics. The table below lists the forms of the operators in each of these representations. Clearly the multiplicative character of the phase space operators appeals to our classical prejudices and intuition. The differential form of the momentum operator in coordinate space and position operator in momentum space are signatures of the weird and deeply non-classical character of quantum theory. However, as we shall see the quantum weirdness in the phase-space formulation has simply been temporarily hidden.
According to Styer [Amer. J. Phys. 70, 297 (2002)], "The various formulations package that weirdness in various ways, but none of them can eliminate it because the weirdness comes from the facts, not the formalism."
$\begin{pmatrix} \text{Operator} & \text{Coordinate Space} & \text{Momentum Space} & \text{Phase Space} \\ \text{position} & x \cdot \Box & i \cdot \frac{d}{dp} \Box & x \cdot \Box \\ \text{momentum} & \frac{1}{i} \cdot \frac{d}{dx} \Box & p \cdot \Box & p \cdot \Box \end{pmatrix} \nonumber$
The first excited state of the harmonic oscillator will be used to illustrate the repackaging of quantum weirdness. All calculations are carried out in atomic units (h = 2$\pi$), and in the interest of mathematical clarity and expediency we add the following restriction, $\mu$ = k =1.
We begin in coordinate space by demonstrating that the $v = 1$ wavefunction is normalized and displaying its spatial distribution function. Next we calculate the expectation value for the energy and demonstrate that the wavefunction is an eigenfunction of the energy operator.
Next the v = 1 spatial wavefunction is Fourier transformed into the momentum representation and everything done for the coordinate wavefunction is repeated. We get the same result for the energy calculation, as expected. We also find, as expected, that the momentum wavefunction is an eigenfunction of the energy operator. To prove that this is a two-way street, we Fourier transform the momentum wavefunction back to the coordinate representation.
In the last section the Wigner function, a phase-space (coordinate-momentum space) distribution function is generated from both the coordinate and momentum wavefunctions. The calculation for the energy has a classical look to it (both position and momentum are multiplicative operators) and the result agrees with the coordinate and momentum space calculations. However, it will be easy to show that the quantum weirdness has just been hidden from direct view.
Coordinate Representation
$\Psi_{1}(x) :=\left(\frac{4}{\pi}\right)^{\frac{1}{4}} \cdot x \cdot \exp \left(-\frac{x^{2}}{2}\right) \qquad \int_{-\infty}^{\infty} \Psi_{1}(x)^{2} d x=1 \nonumber$
Energy expectation value:
$\int_{-\infty}^{\infty} \Psi_{1}(x) \cdot \frac{-1}{2} \cdot \frac{d^{2}}{d x^{2}} \Psi_{1}(x) d x+\int_{-\infty}^{\infty} \Psi_{1}(x) \cdot \frac{1}{2} \cdot x^{2} \cdot \Psi_{1}(x) d x=1.5 \nonumber$
The wavefunction is an eigenfunction of the energy operator:
$\frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi_{1}(\mathrm{x})+\frac{1}{2} \cdot \mathrm{x}^{2} \cdot \Psi_{1}(\mathrm{x})=\mathrm{E} \cdot \Psi_{1}(\mathrm{x}) \text { solve }, \mathrm{E} \rightarrow \frac{3}{2} \nonumber$
Momentum Representation
A Fourier transform of the coordinate wavefunction yields the momentum space wavefunction.
$\Phi_{1}(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \Psi_{1}(\mathrm{x}) \mathrm{d} \mathrm{x} \; \text{simplify} \rightarrow (-i) \cdot \frac{2^{\frac{1}{2}}}{\pi^{\frac{1}{4}}} \cdot \mathrm{e}^{\frac{-1}{2} \cdot \mathrm{p}^{2}} \cdot \mathrm{p} \nonumber$
$\int_{-\infty}^{\infty}\left(\left|\Phi_{1}(\mathrm{p})\right|\right)^{2} \mathrm{d} \mathrm{p}=1 \nonumber$
Of course, a Fourier transform of the momentum wavefunction returns the coordinate wavefunction.
$\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \Phi_{1}(\mathrm{p}) \mathrm{dp} \; \text{simplify} \rightarrow \mathrm{e}^{\frac{-1}{2} \cdot \mathrm{x}^{2}} \cdot \frac{2^{\frac{1}{2}}}{\pi^{\frac{1}{4}}} \cdot \mathrm{x} \nonumber$
The momentum distribution is displayed and the energy calculations executed.
$\int_{-\infty}^{\infty} \overline{\Phi_{1}(p)} \cdot \frac{p^{2}}{2} \cdot \Phi_{1}(p) d p+\int_{-\infty}^{\infty} \overline{\Phi_{1}(p)} \cdot \frac{-1}{2} \cdot \frac{d^{2}}{d p^{2}} \Phi_{1}(p) d p=1.5 \nonumber$
$\frac{\mathrm{p}^{2}}{2} \cdot \Phi_{1}(\mathrm{p})+\frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{p}^{2}} \Phi_{1}(\mathrm{p})=\mathrm{E} \cdot \Phi_{1}(\mathrm{p}) \text { solve }, \mathrm{E} \rightarrow \frac{3}{2} \nonumber$
Phase Space Representation
As shown below, the Wigner phase-space distribution function can be generated from either the coordinate or momentum wavefunctions. A deconstruction of the Wigner function can be found at: http://www.users.csbsju.edu/~frioux/wigner/wigner.pdf.
$\mathrm{W}_{1}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \pi} \cdot \int_{-\infty}^{\infty} \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi_{1}\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \Psi_{1}\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \; \text{simplify} \ \rightarrow \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \cdot \frac{2 \cdot \mathrm{x}^{2}+2 \cdot \mathrm{p}^{2}-1}{\pi} \nonumber$
$\frac{1}{2 \pi} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \overline{\Phi_{1}\left(\mathrm{p}+\frac{\mathrm{s}}{2}\right)} \cdot \Phi_{1}\left(\mathrm{p}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \; \text{simplify} \ \rightarrow \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \cdot \frac{2 \cdot \mathrm{x}^{2}+2 \cdot \mathrm{p}^{2}-1}{\pi} \nonumber$
Integration over the spatial and momentum coordinates shows that the Wigner function is normalized.
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{1}(x, p) d x d p=1 \nonumber$
Integration over the momentum coordinate yields the spatial distribution function, exactly the same as graphed in Figure 1.
$\int_{-\infty}^{\infty} \mathrm{W}_{1}(\mathrm{x}, \mathrm{p}) \text { dp simplify } \rightarrow 2 \cdot \mathrm{e}^{-\mathrm{x}^{2}} \cdot \frac{\mathrm{x}^{2}}{\pi^{\frac{1}{2}}} \nonumber$
Integration over the spatial coordinate yields the momentum distribution function, exactly the same as graphed in Figure 2.
$\int_{-\infty}^{\infty} \mathrm{W}_{1}(\mathrm{x}, \mathrm{p}) \mathrm{dx} \text { simplify } \rightarrow 2 \cdot \mathrm{e}^{-\mathrm{p}^{2}} \cdot \frac{\mathrm{p}^{2}}{\pi^{\frac{1}{2}}} \nonumber$
The expectation value for the total energy using the Wigner distribution is the same as that obtained previously with the coordinate and momentum wavefunctions.
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\left(\frac{p^{2}}{2}+\frac{x^{2}}{2}\right) \cdot W_{1}(x, p) d x d p=1.5 \nonumber$
This calculation has true classical flavor. The energy values, which are a function of x and p, are weighted by the phase-space (p-x) distribution function, followed by integration over all possible values of position and momentum. That quantum weirdness is being hidden is revealed when the Wigner distribution function is graphed. It can have negative values and therefore can't be a true probability distribution function. For this reason the Wigner function is referred to as a quasi-probability distribution function. In summary, in order to recover a classical-like energy calculation in quantum mechanics one has to be able to tolerate negative probabilities!
$\mathrm{N} :=60 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-3+\frac{6 \cdot \mathrm{i}}{\mathrm{N}} \qquad \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-5+\frac{10 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text { Wigner }_{\mathrm{i}, \mathrm{j}} :=\mathrm{W}_{1}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
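A minimal Python version of the Mathcad grid set up above, using the numpy and matplotlib libraries that appear elsewhere in these tutorials: it evaluates the closed-form Wigner function on the same x-p grid and contours it, which makes the negative region near the origin easy to see.

import numpy as np
import matplotlib.pyplot as plt

def W1(x, p):
    # Closed-form Wigner function for the v = 1 harmonic oscillator state (atomic units)
    return np.exp(-x**2 - p**2) * (2*x**2 + 2*p**2 - 1) / np.pi

x = np.linspace(-3, 3, 61)
p = np.linspace(-5, 5, 61)
X, P = np.meshgrid(x, p)

plt.contourf(X, P, W1(X, P), levels=30, cmap="RdBu")
plt.colorbar(label="$W_1(x,p)$")
plt.xlabel("x")
plt.ylabel("p")
plt.title("Wigner function for v = 1 (note the negative basin at the origin)")
plt.show()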
1.21: Quantum Principles Illuminated with Polarized Light
When unpolarized light illuminates a polarizing film oriented in the vertical direction 50% of the photons are transmitted. In quantum mechanics this event is called state preparation; the transmitted photons are now in a well-defined state – they are vertically polarized and may be represented by a Dirac ket, $| \updownarrow \rangle$. According to quantum mechanics only two subsequent experiments have certain outcomes.
1. The probability that the vertically polarized photons will pass a second vertical polarizer is 1, $\left(|\langle\updownarrow | \updownarrow\rangle|^{2}=1\right)$.
2. The probability that the vertically polarized photons will pass a second polarizer that is oriented horizontally is 0 $\left(|\langle\leftrightarrow | \updownarrow\rangle|^{2}=0\right)$. In other words, the projection of $| \updownarrow \rangle$ onto $| \leftrightarrow \rangle$ is zero because $| \updownarrow \rangle$ and $| \leftrightarrow \rangle$ are orthogonal.
For all other experiments involving two polarizers only the probability of the outcome can be predicted, and this is $\cos^{2}(\theta)$, where $\theta$ is the relative angle of the polarizing films. See Figure 1 in the appendix for a graphical illustration of the trigonometry involved.
We now proceed to what is usually called the “three polarizer paradox.” With two polarizers opposed in the vertical and horizontal orientations, a third polarizer is inserted between them at a 45° angle. Now some light is transmitted by the final horizontal polarizer. The quantum mechanical interpretation of this experiment is based on the superposition principle and is outlined below.
A vertically polarized photon can be represented as a linear superposition of any other set of orthogonal basis states, for example ± 45° relative to the vertical.
Note
$| \swarrow \nearrow \rangle =\frac{1}{\sqrt{2}}[ |\updownarrow\rangle+| \leftrightarrow \rangle ] \nonumber$
$| \nwarrow \searrow \rangle =\frac{1}{\sqrt{2}}[ |\updownarrow\rangle-| \leftrightarrow \rangle ] \nonumber$
$| \updownarrow \rangle =\frac{1}{\sqrt{2}}[ |\nwarrow\searrow\rangle+| \swarrow\nearrow \rangle ] \nonumber$
Thus, a vertically polarized photon has a probability of $\frac{1}{2} \left(|\langle \swarrow \nearrow | \updownarrow\rangle|^{2}=\left|\frac{1}{\sqrt{2}}\right|^{2}=\frac{1}{2}\right)$ of passing a polarizer oriented at a 45° angle. A photon that has passed a 45° polarizer is in the state $| \swarrow \nearrow \rangle$. This state can be written as a linear superposition of a vertically and horizontally polarized photon.
$| \swarrow\nearrow \rangle=\frac{1}{\sqrt{2}}[ |\updownarrow\rangle+| \leftrightarrow \rangle ] \nonumber$
Photons that have passed the 45° polarizer have a probability of $\frac{1}{2} \left(|\langle \leftrightarrow | \swarrow \nearrow \rangle|^{2}=\left|\frac{1}{\sqrt{2}}\right|^{2}=\frac{1}{2}\right)$ of passing the final horizontal polarizer. To summarize, in the absence of the diagonally oriented polarizer none of the original unpolarized photons pass the final horizontal polarizer, but in its presence 12.5% of the photons are transmitted $\left(\frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{8}\right)$. See the second figure in the appendix for a graphical representation of the three-polarizer demonstration; a short numerical check is also sketched below.
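The sketch below is a small numerical check of the three-polarizer argument (Python/numpy). It assumes the usual representation of a linear polarization state at angle $\theta$ as the real two-vector $(\cos\theta, \sin\theta)$; probabilities are squared projections.

import numpy as np

def pol(theta_deg):
    # Linear polarization state at theta degrees from the horizontal
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t)])

v, d, h = pol(90), pol(45), pol(0)   # vertical, diagonal (45 deg), horizontal

# Two crossed polarizers: amplitude <h|v> = 0, so no transmission
print(abs(np.dot(h, v))**2)          # ~0 (zero to machine precision)

# Insert the 45-degree polarizer between them: 1/2 * 1/2 * 1/2 of the unpolarized beam
p = 0.5 * abs(np.dot(d, v))**2 * abs(np.dot(h, d))**2
print(p)                             # 0.125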
Appendix
The probability amplitude that a $\theta$-polarized photon will pass a vertical polarizer is $\langle v | \theta\rangle=\cos (\theta)$. The probability for this event, therefore, is $|\langle v | \theta\rangle|^{2}=\cos ^{2}(\theta)$.
1.22: Relationship Between the Coordinate and Momentum Representations
A quon has position $x_{1} :| x_{1} \rangle$
Coordinate space $\Leftrightarrow$ Fourier Transform $\Leftrightarrow$ Momentum space
$\langle x | x_{1}\rangle=\delta\left(x-x_{1}\right)= \xrightleftharpoons[\int\langle x | p\rangle\langle p | x_{1}\rangle d p]{\int\langle p | x\rangle\langle x | x_{1}\rangle d x} \langle p | x_{1}\rangle=\exp \left(-\frac{i p x_{1}}{\hbar}\right) \nonumber$
A quon has momentum $p_{1} :| p_{1} \rangle$
Coordinate space $\Leftrightarrow$ Fourier Transform $\Leftrightarrow$ Momentum space
$\langle x | p_{1}\rangle=\exp \left(\frac{i p_{1} x}{\hbar}\right) \xrightleftharpoons[\int\langle x | p\rangle\langle p | p_{1}\rangle d p]{\int\langle p | x\rangle\langle x | p_{1}\rangle d x} \langle p | p_{1}\rangle=\delta\left(p-p_{1}\right) \nonumber$
Please note the important role that the coordinate and momentum completeness relations play in these transformations.
$\int | x \rangle\langle x|d x=1 \quad \text { and } \quad \int| p\rangle\langle p|d p=1 \nonumber$
1.23: Very Brief Relationship Between the Coordinate and Momentum Representations
A quon has position $x_{1} :| x_{1} \rangle$
Coordinate space $\Leftrightarrow$ Momentum space
$\langle x | x_{1}\rangle=\delta\left(x-x_{1}\right) \qquad \Leftrightarrow \qquad \langle p | x_{1}\rangle=\exp \left(-\frac{i p x_{1}}{\hbar}\right) \nonumber$
A quon has momentum $p_{1} :| p_{1} \rangle$
Coordinate space $\Leftrightarrow$ Momentum space
$\langle x | p_{1}\rangle=\exp \left(\frac{i p_{1} x}{\hbar}\right) \qquad \Leftrightarrow \qquad \langle p | p_{1}\rangle=\delta\left(p-p_{1}\right) \nonumber$
1.24: Getting Accustomed to the Superposition Principle
It is impossible to simulate a quantum mechanical superposition with a mixture of M&M candies, or any other ensemble of macroscopic objects (1). Among the defining characteristics of a superposition is that it is not a mixture. Miller acknowledges that while his demonstration is not "strictly accurate", it is "quite effective and achieves a variety of educational goals." In spite of Miller's claims, it is my contention that his exercise does not provide students with a concrete or correct picture of the superposition principle.
Here is what Dirac had to say about the linear superposition in his famous treatise on quantum mechanics:
The nature of the relationships which the superposition principle requires to exist between the states of any system is of a kind that cannot be explained in terms of familiar physical concepts. One cannot in the classical sense picture a system being partly in each of two states and see the equivalence of this to the system being completely in some other state. There is an entirely new idea involved, to which one must get accustomed and in terms of which one must proceed to build up an exact mathematical theory, without having any detailed classical picture. (2)
With regard to the desire for classical pictures, Dirac said,
...the main object of physical science is not the provision of pictures, but is the formulation of laws governing phenomena and application of these laws to the discovery of new phenomena. If a picture exists, so much the better; but whether a picture exists or not is a matter of only secondary importance. In the case of atomic phenomena no picture can be expected to exist in the usual sense of the word 'picture' by which is meant a model functioning essentially on classical lines. (3)
We do more harm than good by constructing facile, but false classical analogies for non-classical concepts. Therefore, the only reliable way to get accustomed to the non-classical nature of the quantum mechanical superposition is by direct appeal to experiment. One must study those cases in optics and spectroscopy, for example, where the superposition principle manifests itself most directly. The available examples are edifying, plentiful, and usually quite surprising.
Before reviewing some of these experimental examples I would like to comment on two errors in Miller's application of the superposition principle to the particle in a one-dimensional box. First, he incorrectly writes that the wave function is $\Psi(x) :=N \cdot \sin (n \cdot k \cdot x)$, but clearly the argument of the sine function should be either $k_{n} x$ or $\frac{n \pi x}{a}$, where a is the box dimension. This initial error is compounded by the assertion that the coordinate-space wave function is an equal superposition of two momentum eigenstates with eigenvalues "proportional" (?) to $\frac{kh}{2 \pi}$. Basically the same error is made by Atkins (4), Miller's primary reference.
The expectation value for momentum is indeed zero, but to describe in more detail the outcome of momentum measurements on a particle in a one-dimensional box a momentum-space wave function is required. Such a wave function can be obtained by a Fourier transform of the coordinate-space wave function into momentum space (5, 6, 7). For the particle in a one-bohr box, in atomic units, the Fourier transform is,
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{0}^{1} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \sqrt{2} \cdot \sin (\mathrm{n} \cdot \pi \cdot \mathrm{x}) \mathrm{d} \mathrm{x} \nonumber$
Evaluation of this integral yields,
$\Phi(\mathrm{p}) :=\frac{\mathrm{n} \cdot \pi-\mathrm{n} \cdot \pi \cdot \exp (-\mathrm{i} \cdot \mathrm{p}) \cdot \cos (\mathrm{n} \cdot \pi)-\mathrm{i} \cdot \mathrm{p} \cdot \exp (-\mathrm{i} \cdot \mathrm{p}) \cdot \sin (\mathrm{n} \cdot \pi)}{\sqrt{\pi} \cdot\left(\mathrm{n}^{2} \cdot \pi^{2}-\mathrm{p}^{2}\right)} \nonumber$
Graphical representations of the momentum-space distribution, $|\Phi(\mathrm{p})|^{2}$, as a function of the quantum number n demonstrate that the momentum distribution is never simply $\frac{kh}{2 \pi}$ (5, 6, 7).
Turning now to empirical examples of the superposition principle, we consider first the double-slit experiment which Richard Feynman made so famous.
We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by "explaining" how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics. (8)
Amplifying the last sentence of this quotation Feynman said, at another time, the double-slit experiment is so fundamental that if asked a question about quantum mechanics one can always reply, "You remember the case of the experiment with the two holes? It's the same thing." (9)
The salient feature of all double-slit experiments is, as shown in Fig. 1, that between source (S) and detector (D) the particle is offered two paths (P1, P2).
Fig. 1 - Schematic diagram of the double-slit experiment. S = source; D = detector; P1 = path 1; P2 = path 2.
If the path of the particle is not observed quantum mechanics requires its wave function to be a linear superposition of arriving at D by taking both paths. Under these circumstances, the probability that a particle leaving S will be detected at D is calculated as the absolute square of the sum of the probability amplitudes for arriving at D via P1and P2. (10)
$P(S \rightarrow D)=|\langle D | S\rangle|^{2}=\left|\langle D | P_{1}\rangle\langle P_{1} | S\rangle+\langle D | P_{2}\rangle\langle P_{2} | S\rangle\right|^{2} \nonumber$
Calculation of the double-slit interference pattern is straightforward (11). The probability amplitude for reaching the detector via P1, for example, is proportional to $\frac{\exp \left(\frac{2 \pi i \delta_{1}}{\lambda}\right) }{ \delta_{1}}$, where $\lambda$ is the de Broglie wavelength of the particle and $\delta_{1}$ is the distance between source and detector via P1. Substitution of this expression and a similar one for P2 into the previous equation yields,
$P(S, D) := \left| \frac{\exp \left(2 \cdot \pi \cdot i \cdot \frac{\delta_{1}}{\lambda}\right)}{\delta_{1}} + \frac{\exp \left(2 \cdot \pi \cdot i \cdot \frac{\delta_{2}}{\lambda}\right)}{\delta_{2}} \right|^{2} \nonumber$
The addition of probability amplitudes before squaring (rather than squaring the individual probability amplitudes) leads to interference effects, a signature of the linear superposition, and one of its essential features that Miller's mixtures of M&Ms can't capture.
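The following Python sketch evaluates the two-path probability expression above for an illustrative geometry; the wavelength, slit separation, and screen distance are arbitrary choices, not values from the text. The fringes appear only because the two complex amplitudes are added before squaring.

import numpy as np
import matplotlib.pyplot as plt

lam = 1.0          # de Broglie wavelength (arbitrary units)
d = 5.0            # slit separation
L = 100.0          # distance from slits to detection screen

y = np.linspace(-40, 40, 1000)              # position on the detection screen
delta1 = np.sqrt(L**2 + (y - d/2)**2)       # path length via P1
delta2 = np.sqrt(L**2 + (y + d/2)**2)       # path length via P2

amp = np.exp(2j*np.pi*delta1/lam)/delta1 + np.exp(2j*np.pi*delta2/lam)/delta2
P = np.abs(amp)**2                          # add amplitudes, then square

plt.plot(y, P)
plt.xlabel("position on detection screen")
plt.ylabel("relative probability")
plt.show()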
While Feynman presented the double-slit example pedagogically as a 'thought experiment' in his text, it has ample empirical verification for a variety of particles. For example, a demonstration of the double-slit experiment involving single electrons (only one electron in the apparatus at a time) has been reported in the pedagogical literature (12). Quite recently a striking interference pattern has been observed for C60 in a multi-slit apparatus using a diffraction grating. C60 is the most massive particle, so far, to demonstrate the wave-particle duality underlying the double-slit experiment (13). Very recently a temporal double-slit experiment with attosecond windows in the time domain has been reported (13a).
The results reported above, and those that will follow, are stunning examples of quantum mechanical behavior with roots deep in the superposition principle. In these experiments the source produces particles and the detector registers particles, but between source and detector the behavior is wave-like with the particle apparently traversing both paths simultaneously. In light of this bizarre behavior Feynman said,
I think I can safely say that nobody understands quantum mechanics... Do not keep saying to yourself, if you can possibly avoid it, 'But how can it be like that?' because you will 'get down the drain', into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that. (9)
I think Feynman's comment, delivered in his well-known colloquial style, carries the same message as the more formal statements by Dirac quoted earlier. Dirac and Feynman are saying we must get accustomed to the fact that the nano-world is not simply a miniature of the macro-world. We must resist the expectation of being able to employ naive, visually-based, classical concepts in the nanoscopic realm. When we can't resist, we fail and then blame quantum theory for being abstract and remote from experience. However, our blame is misplaced as Marvin Chester points out in the following quotation.
The mathematical predictions of quantum mechanics yield results that are in agreement with experimental findings. That is the reason we use quantum theory. That quantum theory fits experiment is what validates the theory, but why experiment should give such peculiar results is a mystery." (14)
We continue with manifestations of the superposition principle using an example from chemistry. Chemical reactions that occur by more than one mechanism create the possibility, under favorable circumstances, of a chemical double-slit phenomenon with accompanying interference effects. This is how Dixon, et al. (15) have recently interpreted some unusual results in the photo-dissociation of water. The reaction H2O + h$\nu$ $\rightarrow$ H + OH can occur through two linear intermediates I1 (HOH) and I2 (HHO). The OH moiety has a relative 180° phase difference in the two intermediates, which leads to an observed even-odd intensity oscillation in the rotational states of the product OH. For the transition from reactants (R) to products (P) by a two-intermediate mechanism the previous equation becomes,
$\mathrm{P}(\mathrm{R} \rightarrow \mathrm{P})=|\langle\mathrm{P} | \mathrm{R}\rangle|^{2}= \left|\langle\mathrm{P} | \mathrm{I}_{1}\rangle\langle\mathrm{I}_{1} | \mathrm{R}\rangle+\langle\mathrm{P} | \mathrm{I}_{2}\rangle\langle\mathrm{I}_{2} | \mathrm{R}\rangle\right|^{2} =\left|\Psi\left(I_{1}\right)+(-1)^{N} \Psi\left(I_{2}\right)\right|^{2} \nonumber$
where N is the rotational quantum number and the term $(-1)^{N}$ takes into account that even rotational states are symmetric with respect to a 180° rotation, while odd rotational states are anti-symmetric to such rotations. Clearly the interference term in this equation will be alternately positive and negative, in agreement with the spectroscopic data.
Perhaps the simplest and most striking version of the double-slit experiment is single-photon interference performed with a Mach-Zehnder interferometer (16). This apparatus, as Fig. 2 shows, consists of a photon source S, two 50-50 beam splitters BS, two mirrors M, and two detectors, D1 and D2.
Fig. 2 - Schematic diagram of a Mach-Zehnder interferometer. S = source; BS = beam splitter; M = mirror; R = reflected; T = transmitted; D = detector; TT = transmitted at BS1 and transmitted at BS2; TR = transmitted at BS1 and reflected at BS2; etc.
The experiment can be performed with a low intensity source such that there is only one photon in the interferometer at any time. With equal path lengths to the detectors, the photon is always detected at D1. Each detector can be reached by both paths and this requires the addition of probability amplitudes for each path, just as in the previous examples.
For D1 the amplitudes are in phase and for D2 they are 180° out of phase, so the photon is never detected at D2. At the beam splitters the probability amplitude for transmission is $2^{-1/2}$ (= 0.707), while for reflection it is $i \cdot 2^{-1/2}$ (= 0.707 i). A 90° phase difference between transmission and reflection at the beam splitters (17, 18) is assigned by convention to the reflected beam. The probability for the arrival of a photon at D1 or D2 is calculated using the same formalism as in the previous examples.
$|\langle D_{1} | S\rangle|^{2}=\left|\langle D_{1} | T\rangle\langle T | S\rangle+\langle D_{1} | R\rangle\langle R | S\rangle\right|^{2} =|(0.707 i)(0.707)+ (0.707)(0.707 i)|^{2}=1 \nonumber$
$|\langle D_{2} | S\rangle|^{2}=\left|\langle D_{2} | T\rangle\langle T | S\rangle+\langle D_{2} | R\rangle\langle R | S\rangle\right|^{2} =|(0.707)(0.707)+ (0.707 i)(0.707 i)|^{2}=0 \nonumber$
If either path is blocked 50% of the photons get through, and 25% reach D1 and 25% reach D2. If the second beam splitter is removed 50% of the photons are detected at D1and 50% at D2. In both cases there is only one path to each detector, so there is no opportunity for interference of probability amplitudes.
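The bookkeeping in the last two equations is easy to reproduce numerically. The sketch below (plain Python) uses t = 2^(-1/2) for transmission and r = i·2^(-1/2) for reflection at each beam splitter, as assigned above.

# Beam-splitter amplitudes: transmission t, reflection r (90-degree phase shift)
t, r = 2**-0.5, 1j * 2**-0.5

# Both paths open: add the amplitudes, then square
P_D1 = abs(r*t + t*r)**2      # 1.0 -> every photon reaches D1
P_D2 = abs(t*t + r*r)**2      # 0.0 -> destructive interference at D2
print(P_D1, P_D2)

# One path blocked: only one amplitude survives for each detector
print(abs(r*t)**2, abs(t*r)**2)   # 0.25 and 0.25; the other half of the photons are absorbed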
In the examples presented so far particles have been described as being in a linear superposition of having taken both paths, not a mixture of some particles taking one path and some taking the other. The evidence in favor of the superposition has been interference effects, something which doesn't occur with mixtures of particles. However, it is possible to distinguish superpositions from mixtures without observing interference.
This example will treat Stern-Gerlach measurements in the x-z plane on spin-$\frac{1}{2}$ particles (19). Spin in the x-direction and spin in the z-direction are incompatible observables because their associated operators do not commute, which means that they cannot have simultaneous eigenstates. If a particle has a well-defined spin in the z-direction, its spin in the x-direction is uncertain, and vice versa. There are two eigenstates for each spin direction.
Let's say they are $| \uparrow \rangle$ and $| \downarrow \rangle$ in the z-direction, and $| \rightarrow \rangle$ and $| \leftarrow \rangle$ in the x-direction. The incompatibility of these observables is expressed by the following superpositions, which may also be expressed in vector form (19).
$| \uparrow \rangle=\frac{1}{\sqrt{2}}[ |\rightarrow\rangle+| \leftarrow \rangle ] \qquad | \downarrow \rangle=\frac{1}{\sqrt{2}}[ |\rightarrow\rangle-| \leftarrow \rangle ] \nonumber$
$| \rightarrow \rangle=\frac{1}{\sqrt{2}}[ |\uparrow\rangle+| \downarrow \rangle ] \qquad | \leftarrow \rangle=\frac{1}{\sqrt{2}}[ |\uparrow\rangle-| \downarrow \rangle ] \nonumber$
Now suppose that a beam of spin-$\frac{1}{2}$ particles is passed through a Stern-Gerlach apparatus oriented in the z-direction. A statistically meaningful number of measurements yields 50% $| \uparrow \rangle$ and 50% $| \downarrow \rangle$. Two hypotheses that are consistent with this outcome will be considered: the beam of particles could be completely un-polarized in the x-z plane, a random mixture of $| \uparrow \rangle$, $| \downarrow \rangle$, $| \rightarrow \rangle$, and $| \leftarrow \rangle$; or it could be a linear superposition, $| \uparrow \rangle \pm | \downarrow \rangle$. To distinguish between these alternatives it is only necessary to rotate the Stern-Gerlach magnet so that it is oriented along the x-direction. If the beam is an un-polarized mixture a large number of measurements will yield 50% $| \rightarrow \rangle$ and 50% $| \leftarrow \rangle$. However, if it is a linear superposition of $| \uparrow \rangle$ and $| \downarrow \rangle$, by the equations shown above, it will yield either 100% $| \rightarrow \rangle$ or 100% $| \leftarrow \rangle$. Experiments of this type have been reported in the primary literature (20) and summarized in the review literature (21).
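A short numerical sketch of this superposition-versus-mixture argument is given below (Python/numpy), assuming the standard column-vector representation of the spin-1/2 basis states. For a superposition the amplitudes are added before squaring; for a mixture the probabilities themselves are averaged.

import numpy as np

up   = np.array([1, 0])                 # |up> in the z-direction
down = np.array([0, 1])                 # |down> in the z-direction
right = (up + down) / np.sqrt(2)        # |right> in the x-direction
left  = (up - down) / np.sqrt(2)        # |left> in the x-direction

# Linear superposition (|up> + |down>)/sqrt(2): x-measurement gives |right> with certainty
psi = (up + down) / np.sqrt(2)
print(abs(right @ psi)**2, abs(left @ psi)**2)   # 1.0, 0.0

# 50-50 mixture of |up> and |down>: average the probabilities, not the amplitudes
P_right = 0.5*abs(right @ up)**2 + 0.5*abs(right @ down)**2
P_left  = 0.5*abs(left  @ up)**2 + 0.5*abs(left  @ down)**2
print(P_right, P_left)                           # 0.5, 0.5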
Another example of the importance of the linear superposition is the ammonia maser, which was first achieved experimentally in 1953. In an early paper on the maser in the popular scientific literature, J. P. Gordon (22) correctly described it as a "quantum-mechanical device." The maser is based on the ammonia molecule's umbrella inversion, in which, in the classical view, the nitrogen atom oscillates "back and forth" through the plane of the hydrogen atoms. This inversion vibration can be modeled quantum mechanically by a dominant harmonic potential supplemented with an internal Gaussian barrier which creates the required double potential well (23).
$\mathrm{V}=\frac{1}{2} \mathrm{k} \mathrm{x}^{2}+\mathrm{b} \exp \left(-\mathrm{c} \mathrm{x}^{2}\right) \nonumber$
The presence of the internal barrier causes a bunching of adjacent symmetric (+) and anti-symmetric (-) harmonic oscillator states. All states are raised in energy by the presence of the barrier, but the (-) states are elevated less than the (+) states because they have a node in the barrier and the (+) states do not. Thus v = 0 and v = 1, v = 2 and v = 3, etc. become paired with the effect declining in importance with increasing v quantum number as the magnitude of the energy barrier becomes less significant.
The ammonia maser is based on a microwave transition involving the first pair of symmetric and anti-symmetric states, v = 0 and v = 1. Numerical integration of Schrödinger's equation (24) for the first two states yields the wave functions shown in Fig. 3.
Fig. 3 - First two states ( v = 0 and v = 1) for the harmonic oscillator with an internal Gaussian barrier. The wave functions are off-set on the vertical axis for clarity of presentation.
Clearly these states represent in-phase and out-of-phase superpositions of the nitrogen atom being on both sides of the plane of the hydrogen atoms. If |NH3> represents the left-hand well and |H3N> the right-hand well, we can write the wave functions for these states symbolically as shown below.
$|\Psi\rangle_{0}=2^{-1 / 2}\left[|\mathrm{NH}_{3}\rangle+| \mathrm{H}_{3} \mathrm{N}\rangle\right] \quad \text{and} \quad | \Psi\rangle_{1}=2^{-1 / 2}\left[|\mathrm{NH}_{3}\rangle-| \mathrm{H}_{3} \mathrm{N}\rangle\right] \nonumber$
The energy difference between these states is only 0.79 cm$^{-1}$ (23), so they are essentially equally populated at room temperature. However, $| \Psi \rangle_{1}$ can be separated from $| \Psi \rangle_{0}$ by electrostatic means and directed to a resonant cavity. Irradiation of the v = 1 state with a 24 GHz signal causes stimulated emission and coherent amplification of the original signal.
Up to this point we have been dealing with "one-particle" superpositions in which a single particle or system is assumed to occupy a linear superposition of two states. However, there is no limit to the number of particles or the number of states involved in the linear superposition. Einstein, in collaboration with Podolsky and Rosen (EPR), was first to explore bizarre implications of the two-particle superposition (25). Schrödinger called this an entangled state and identified it as a fundamental trait of quantum mechanical systems.
For example, suppose that an excited atom (calcium for example) emits two photons in a cascade in opposite directions (26). Conservation of angular momentum requires that the photons are either both right circularly polarized or left circularly polarized. According to the superposition principle the wave function of the composite system must be,
$|\Psi \rangle =2^{-1 / 2}\left[|\mathrm{R}\rangle_{1}| \mathrm{R}\rangle_{2}+|\mathrm{L}\rangle_{1}| \mathrm{L}\rangle_{2}\right] \nonumber$
An EPR-like analysis demonstrated, and experiment confirmed, that such a quantum mechanically entangled state violated the realistic principle of locality and permitted in Einstein's words "spooky action at a distance." Once together, always together, or according to Lucien Hardy, "When two particles are in an entangled state they appear to continue to talk to each other even after they have finished interacting directly." (27) Einstein and his co-authors took this unusual prospect (at that time) as evidence that quantum mechanics was not complete and would ultimately have to be superseded by a more comprehensive theory that did not have its unpleasant non-local characteristics.
The EPR thought experiment, in the light of more recent suggestions by Bohm (28) and the penetrating analysis of Bell (29), has generated a remarkable experimental effort during the last 20 years that has confirmed Einstein's worst fear - "spooky action at a distance" is permitted, at least, in the nano-world (30, 31). The experimental study of the Greenberger-Horne-Zeilinger three-particle entanglement by Pan, et al. (32) is among the latest confirmations of the non-locality inherent in entangled superpositions.
The empirical support for the superposition principle outlined above validates its use for theoretical interpretation. For example, we can use the superposition principle to understand the electronic ground state of the hydrogen atom, which in atomic units is, $\langle\mathrm{r}|\Psi\rangle=\Psi(\mathrm{r})=\pi^{-1 / 2} \exp(-r)$. This equation says that the hydrogen atom's electron is in a weighted superposition of all possible distances, r, from the nucleus. It is not orbiting the nucleus in a circular orbit or an elliptical orbit; it is not moving at all in any ordinary sense. The electron does not execute a classical trajectory within the atom. This is why in quantum mechanics we say the electron is in a stationary state, and why, unlike moving charges, it does not radiate or absorb energy unless it is making a transition from one allowed stationary state to another.
The superposition principle also provides a simple interpretation of the covalent chemical bond. In H2+, for example, at the most rudimentary level of theory, we write the molecular orbital as a linear superposition of the 1s orbitals of the two hydrogen atoms: $\Psi_{\mathrm{MO}}=2^{-1 / 2}\left(\psi_{1 \mathrm{sa}}+\psi_{1 \mathrm{sb}}\right)$. Adding the probability amplitudes, $\psi_{1sa}$ and $\psi_{1sb}$, is equivalent to saying the electron is delocalized over the molecule as a whole, and just as in the hydrogen atom case it is not correct to think of the electron as executing a trajectory or hopping back and forth between the two atoms. Squaring $\Psi_{MO}$ (the sum of two probability amplitudes) to obtain the probability density yields an interference term, $2\psi_{1sa}\psi_{1sb}$, which leads to a build-up of charge in the internuclear region. Thus constructive interference associated with an in-phase linear superposition of atomic states provides an understanding of the mechanism of chemical bond formation.
Of course, a linear superposition in which the atomic orbitals are 180° ($\pi$ radians) out of phase leads to destructive interference (charge depletion in the internuclear region) and an anti-bonding molecular orbital. If the atomic orbitals are 90° ($\frac{\pi}{2}$ radians) out of phase, the interference term disappears, yielding a non-bonding molecular orbital. In fact the superposition principle supports a continuum of atomic orbital combinations from in-phase bonding to out-of-phase anti-bonding interactions.
Moving from H2+ to larger molecules of more interest to chemists, we employ the same general procedure by writing a trial molecular orbital as a linear combination of all relevant atomic orbitals (LCAO-MO). Application of the variation method to solve Schrödinger's equation yields a set of optimized canonical molecular orbitals in which the electron density is delocalized (DMO) over the molecule as a whole. While these DMOs have the most direct experimental support through photo-electron spectroscopy (and are, therefore, also called spectroscopic orbitals), they frequently are not the most useful to the chemist who has found localized electron pairs extremely helpful in understanding chemistry. The superposition principle comes to the rescue of the chemist as a number of excellent articles in this Journal have demonstrated over the years (33, 34, 35, 36, 37, 38). Any linear combination of the canonical orbitals is also a valid solution to Schrödinger's equation. This is the theoretical justification for hybridizing atomic orbitals and forming localized molecular orbitals (LMOs) from DMOs. The superposition principle also provides a quantum mechanical justification for the use of the schematic, but also very useful, Lewis resonance structures (39).
When quantum mechanical principles are applied to atomic and molecular systems, the result is an explanation of atomic and molecular stability (the emergence of a ground state), and a manifold of quantized energy states for the internal degrees of freedom of the system being studied. Spectroscopy deals with the interaction of electromagnetic radiation with matter, and spectra are, therefore, usually interpreted as manifesting "quantum jumps" between the allowed energy levels. The superposition principle provides, as McMillin clearly showed some years ago in this Journal (40), a simple and serviceable model for the ubiquitous quantum jump.
To illustrate this model we consider an electron in the ground state of a one-dimensional box that is exposed to electromagnetic radiation. Under the influence of this perturbation the electron moves into a state that is a time-dependent linear superposition of the ground state and the manifold of excited states (in atomic units).
$\Psi :=c_{1} \cdot \psi_{1} \cdot \exp \left(-i \cdot E_{1} \cdot t\right)+\sum_{n=2}^{\infty} c_{n} \cdot \psi_{n} \cdot \exp \left(-i \cdot E_{n} \cdot t\right) \nonumber$
For a transition to occur between the ground state and the first excited state, for example, two conditions must be met according to the model proposed by McMillin. First, the Bohr frequency condition must be satisfied, $\nu = \frac{E_{2} - E_{1}}{h}$. Second, the electron density represented by the square of the absolute magnitude of the time-dependent linear superposition,
$\left|\Psi_{1 \rightarrow 2}\right|^{2}=\left|c_{1} \psi_{1} \exp \left(-i \mathrm{E}_{1} \mathrm{t}\right)+\mathrm{c}_{2} \psi_{2} \exp \left(-i \mathrm{E}_{2} \mathrm{t}\right)\right|^{2} \nonumber$
must exhibit oscillating dipole character (41). This latter criterion is the selection rule and provides a mechanism for a coupling between the radiation field of frequency $\nu$ and the electron density, which also oscillates with frequency $\nu$. By comparison the n = 1 to n = 3 transition is forbidden, even if the Bohr frequency condition is satisfied, because the time-dependent superposition of these states does not exhibit oscillating dipole character; the electron density oscillates symmetrically about the center of the box and there is no coupling with the oscillating electromagnetic field (41). A numerical illustration of this criterion is sketched below.
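The sketch below (Python with numpy and matplotlib, in atomic units) is one way to visualize McMillin's criterion for the particle in a one-bohr box: the expectation value of position for the 1-2 superposition oscillates at the Bohr frequency, while the 1-3 superposition shows no oscillating dipole.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 500)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2) * np.sin(n * np.pi * x)   # box eigenfunctions
E = lambda n: n**2 * np.pi**2 / 2                    # box eigenvalues (atomic units)

def x_expectation(n1, n2, t):
    # Equal superposition of states n1 and n2 at time t; <x> by simple quadrature
    Psi = (psi(n1) * np.exp(-1j*E(n1)*t) + psi(n2) * np.exp(-1j*E(n2)*t)) / np.sqrt(2)
    return np.sum(x * np.abs(Psi)**2) * dx

t = np.linspace(0, 1, 300)
plt.plot(t, [x_expectation(1, 2, ti) for ti in t], label="1-2 superposition")
plt.plot(t, [x_expectation(1, 3, ti) for ti in t], label="1-3 superposition")
plt.xlabel("time (a.u.)")
plt.ylabel("<x>")
plt.legend()
plt.show()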
These examples, and others not discussed here (26), show that the linear superposition is indeed a fundamental concept in the nanoscopic world of photons, atoms, and molecules. It has in it "the heart of quantum mechanics" to repeat Feynman's words, but what is its status in our macro-world? If quantum mechanics has universal validity, why don't we find macroscopic examples of the superposition principle? Einstein and Schrödinger answered by saying that quantum theory wasn't universally valid and that it did not present a complete physics for even the nano-world.
As noted previously, Einstein and his collaborators (25) demonstrated that quantum mechanics challenged certain scientific assumptions about the nature of physical reality, in particular calling into question traditional ideas regarding determinism, causality, and locality. Schrödinger formulated his famous 'cat paradox' (42) to demonstrate the absurdity of thinking that quantum mechanics and the superposition principle applied to the macro-world. In this ingenious thought experiment he created an entangled linear superposition which coupled the nano-world to the macro-world. Schrödinger postulated that a cat and a radioactive atom with a half-life of one hour were sealed in a box with a device (diabolical, in his words) that kills the cat if the atom decays. Therefore, after one hour the cat is presumably in an even superposition of being both alive and dead - clearly an absurd outcome from the macroscopic point of view. Furthermore, in order to reconcile quantum theory with macro-reality it is necessary to postulate that opening the box for the purpose of observing the actual state of the cat causes the wave function to "collapse" into one or the other of the equally likely contributions to the linear superposition.
$|\Psi\rangle=2^{-1 / 2}[|\text { cat alive }\rangle| \text { atom not decayed }\rangle+|\text { cat dead }\rangle| \text { atom decayed }\rangle] \nonumber$
In this thought-experiment Schrödinger exposed a serious conflict between the formalism of quantum theory and our everyday experience, and a significant experimental and theoretical effort to resolve the conflict ensued. At the experimental level researchers have attempted to create mesoscopic and macroscopic "cat" states (43-45). Theorists, for their part, have put considerable effort into creating a mechanism to explain the "collapse" of the wave function and to delineate the border between the quantum and classical worlds (46). For a recent survey of both experimental and theoretical work in this area the interested reader is directed to reference (26).
In summary, my premise has been that there are no classical analogs for the quantum mechanical superposition. To understand it one must study its experimental manifestations which, fortunately, are numerous at the nanoscopic level. To this end I have provided a brief survey of some of the more well-known empirical examples of the superposition principle. Areas of current research involving the quantum mechanical superposition that have been omitted from this presentation in the interest of brevity include quantum computing, quantum cryptography, and quantum teleportation.
Literature Cited
1. Miller, J. B. J. Chem. Educ. 2000, 77, 879.
My critique of Miller's paper is not meant to imply that there are no pedagogically effective classical analogs for quantum mechanical principles. For successful attempts to simulate the superposition principle with macro objects see the following:
• de Barros Neto, B. J. Chem. Educ. 1984, 61, 1044.
• Fleming, P. E. J. Chem. Educ. 2001, 78, 57.
2. Dirac, P. A. M. Principles of Quantum Mechanics, 4th ed.; Oxford U. P.: London, 1958, p. 12.
3. Ibid., p. 10.
4. Atkins, P. W. Physical Chemistry, 6th ed.; Freeman: New York, 1998; p. 316.
5. Markley, F. L. Am. J. Phys. 1972, 40, 1545.
6. Liang, Y. Q.; Zhang, H; Dardenne, Y. X. J. Chem. Educ. 1995, 72, 148.
7. Rioux, F. J. Chem. Educ. 1999, 76, 156. See also: http://www.users.csbsju.edu/~frioux/...b-momentum.htm.
8. Feynman, R. P.; Leighton, R. B.; Sands, M. The Feynman Lectures on Physics, Vol. 3; Addison-Wesley: Reading, 1965, p. 1-1.
9. Feynman, R. P. The Character of Physical Law; MIT Press: Cambridge, 1967; p. 130.
10. Dirac's bra-ket notation is used throughout this paper. Feynman's text, ref 8, is perhaps the most accessible introduction to bra-ket notation. In addition, the author has posted a Dirac notation tutorial at: http://www.users.csbsju.edu/~frioux/dirac/dirac.htm.
11. See reference 8, page 3-4, and www.users.csbsju.edu/~frioux/two-slit/2slit.htm for an example of how to do this calculation using Mathcad.
12. Tonomura, A.; Endo, T.; Matsuda, T.; Kawasaki, T.; Ezawa, H. Am. J. Phys. 1989, 57, 117.
13. Arndt. M.; Nairz, O.; Vos-Andreae, J.; Keller, C.; Van der Zouw, G.; Zeilinger, A. Nature, 1999, 401, 680.
14. Chester, M. Primer of Quantum Mechanics; Krieger Publishing Co.:Malabar, FL, 1992.
15. Dixon, R. N.; Hwang, D. W.; Yang, X. F.; Harich, S.; Lin, J. J.; Yang. X. Science, 1999, 285, 1249.
16. Scarani, V.; Suarez, A. Am. J. Phys. 1998, 66, 718. For additional methods of analysis of single photon interference see:
17. Degiorgio, V. Am. J. Phys. 1980, 48, 81.
18. Zeilinger, A. Am. J. Phys. 1981, 49, 882.
19. Rae, A. I. M. Quantum Mechanics, 3rd ed.; Institute of Physics Publishing, Ltd.: Bristol, 1992, pp 110-115.
20. Sumhammer, J.; Badurek, G.; Rauch, H.; Kisko, J.; Zeilinger, A. Phys. Rev. A. 1983. 27, 2523.
21. Leggett, A. J. Contemp. Phys. 1984, 25, 583.
22. Gordon, J. P. Sci. Amer. 1958, 199(6), 42.
23. Swalen, J. D.; Ibers, J. A. J. Chem. Phys. 1962, 36, 1914.
24. The numerical integration was carried out using Mathcad (www.users.csbsju.edu/~...er/AMMONIA.pdf) with the integration algorithm described in: Hansen, J. C. JCE: Software 1996, 8C(2).
25. Einstein, A.; Podolsky, B.; Rosen, N. Phys. Rev. 1935, 47, 777.
26. Greenstein, G.; Zajonc, A. G. The Quantum Challenge; Jones and Bartlett Pub.: Sudbury, 1997, and references cited therein.
27. Hardy, L. Contemp. Phys. 1998, 39, 419. For a simple example of two-photon entanglement see: http://www.users.csbsju.edu/~frioux/2photon.htm.
28. Bohm, D. Quantum Theory; Prentice-Hall: New York, 1951.
29. Bell, J. S. Physics 1964, 1, 195. Reprinted in: Bell, J. S. Speakable and Unspeakable in Quantum Mechanics, Cambridge, U. P.: Cambridge, 1987.
30. Aspect, A.; Grangier, P.; Roger, G. Phys. Rev. Lett. 1981, 47, 460.
31. Greenberger, D. M.; Horne, M. A.; Zeilinger, A. Phys. Today, 1993, 44(8), 22.
32. Pan, J-W.; Bouwmeester, D.; Daniel, M.; Weinfurter, H.; Zeilinger, A. Nature 2000, 403, 515.
33. Cohen, I.; Del Bene, J. J. Chem. Educ. 1969, 46, 487.
34. Bernett, W. A. J. Chem. Educ. 1969, 46, 746.
35. Hoffman, D. K,; Ruedenberg, K. ; Verkade, J. G. J. Chem. Educ. 1977, 54, 590.
36. Liang, M. J. Chem. Educ. 1987, 64, 124.
37. Gallup, G. A. J. Chem. Educ. 1988, 65, 671.
38. Martin, R. B. J. Chem. Educ. 1988, 65, 668.
39. Feynman, R. P.; Leighton, R. B.; Sands, M., op. cit., chapter 15.
40. McMillin, D. R. J. Chem. Educ. 1978, 55, 7.
41. Rioux, F. JCE: Software 1993, 1D(2). See also: http://www.users.csbsju.edu/~frioux/q-jump/njump.pdf.
42. Wheeler, J. A.; Zurek, W. H., Editors. Quantum Theory and Measurement; Princeton, U. P.: Princeton, 1983, pp 152-167.
43. Monroe, C.; Meekhof, D. M.; King, B. E.; Wineland, D. J. Science 1996, 272, 1131.
44. Friedman, J. R.; Patel, V.; Chen, W.; Tolpygo, S. K.; Lukens, J. E. Nature 2000, 406, 43.
45. van der Wal, C., et al. Science 2000, 290, 773. See also: www.sciencemag.org/cgi/conten...l/290/5492/720
46. Zurek, W. Phys. Today, 1991, 44(10), 36.
1.25: The Dirac Delta Function
The Dirac delta function expressed in Dirac notation is: $\Delta(x - x_1) = \langle x | x_1 \rangle$. The $\langle x | x_1 \rangle$ bracket is evaluated using the momentum completeness condition. See the Mathematical Appendix for definitions of the required Dirac brackets and other mathematical tools used in the analysis that follows.
$\langle x | x_{1}\rangle=\int_{-\infty}^{\infty}\langle x | p\rangle\langle p | x_{1}\rangle d p=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \exp (i p x) \exp \left(-i p x_{1}\right) d p=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \exp \left[i p\left(x-x_{1}\right)\right] d p \nonumber$
Evaluation of this integral over a finite range of momentum values shows that the delta function is small except in the immediate neighborhood of x1. Integrating from -20 to 20 to reduce computational time shows that $\langle x | x_1 = 2 \rangle$ is small except in the neighborhood of $x = 2$.
$\mathrm{x}_{1} =2 \quad \mathrm{x} =0,0 .01 \ldots 4 \quad \operatorname{Dirac}\left(\mathrm{x}, \mathrm{x}_{1}\right) =\frac{1}{2 \cdot \pi} \cdot \int_{-20}^{20} \exp \left[\mathrm{i} \cdot \mathrm{p} \cdot\left(\mathrm{x}-\mathrm{x}_{1}\right)\right] \mathrm{d} \mathrm{p} \nonumber$
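A Python version of this truncated integral (numpy/matplotlib) is sketched below; the real part of the bracket is sharply peaked at $x = x_1 = 2$ and small elsewhere, as described above.

import numpy as np
import matplotlib.pyplot as plt

x1 = 2.0
p = np.linspace(-20, 20, 4001)      # finite momentum range, as in the Mathcad calculation
dp = p[1] - p[0]
x = np.linspace(0, 4, 400)

# Real part of (1/2pi) * integral of exp[i p (x - x1)] dp, by simple quadrature
dirac = np.array([np.sum(np.exp(1j*p*(xi - x1))).real * dp for xi in x]) / (2*np.pi)

plt.plot(x, dirac)
plt.xlabel("x")
plt.ylabel("Re <x | x1 = 2>")
plt.show()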
The Fourier transform of the Dirac delta function into the momentum representation yields the following result.
$\int_{-\infty}^{\infty}\langle p | x\rangle\langle x | x_{1}\rangle d x=\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)=\langle p | x_{1}\rangle \nonumber$
The normalization constant is omitted for clarity of expression and the previous value of x1 is cleared to allow symbolic calculation.
$\mathrm{x}_{1} =\mathrm{x}_{1} \qquad \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Delta\left(\mathrm{x}-\mathrm{x}_{1}\right) \mathrm{d} \mathrm{x} \text { simplify } \rightarrow \; \mathrm{e}^{-\mathrm{p} \cdot \mathrm{x}_{1} \cdot \mathrm{i}} \nonumber$
Mathematical Appendix
The position and momentum completeness conditions:
$\int | x \rangle\langle x|d x=1 \qquad \int| p\rangle\langle p|d p=1 \nonumber$
The momentum eigenstate in the coordinate representation:
$\langle x | p\rangle=\frac{1}{\sqrt{2 \pi}} \exp (i p x) \nonumber$
The position eigenstate in the momentum representation:
$\langle p | x\rangle=\frac{1}{\sqrt{2 \pi}} \exp (-i p x) \nonumber$
1.26: Elements of Dirac Notation
In the early days of quantum theory, P. A. M. (Paul Adrian Maurice) Dirac created a powerful and concise formalism for it which is now referred to as Dirac notation or bra-ket (bracket $\langle \, | \, \rangle$) notation.
Two major mathematical traditions emerged in quantum mechanics: Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics. These distinctly different computational approaches to quantum theory are formally equivalent, each with its particular strengths in certain applications. Heisenberg’s variation, as its name suggests, is based on matrix and vector algebra, while Schrödinger’s approach requires integral and differential calculus. Dirac’s notation can be used in a first step in which the quantum mechanical calculation is described or set up. After this is done, one chooses either matrix or wave mechanics to complete the calculation, depending on which method is computationally the most expedient.
Kets
In Dirac’s notation what is known is put in a ket, $| \, \rangle$. So, for example, $| p \rangle$ expresses the fact that a particle has momentum $p$. It could also be more explicit: $| p=2 \rangle$, the particle has momentum equal to 2; $| x=1.23 \rangle$, the particle has position 1.23. $| \psi \rangle$ represents a system in the state $\psi$ and is therefore called the state vector. The ket can also be interpreted as the initial state in some transition or event.
Bras
The bra $\langle \, |$ represents the final state or the language in which you wish to express the content of the ket $| \, \rangle$. For example, $\langle x = 0.25 | \psi \rangle$ is the probability amplitude that a particle in state $\psi$ will be found at position $x = 0.25$. In conventional notation we write this as $\psi(x=0.25)$, the value of the function $\psi$ at $x$=0.25. The absolute square of the probability amplitude, $\left| \langle x=0.25| \psi \rangle \right|^2$, is the probability density that a particle in state $\psi$ will be found at $x$ = 0.25. Thus, we see that a bra-ket pair can represent an event, the result of an experiment. In quantum mechanics an experiment consists of two sequential observations - one that establishes the initial state (ket) and one that establishes the final state (bra).
Bra-Ket Pairs
If we write $\langle x| \psi \rangle$, we are expressing $\psi$ in coordinate space without being explicit about the actual value of $x$. $\langle x = 0.25 | \psi \rangle$ is a number, but the more general expression $\langle x | \psi \rangle$ is a mathematical function of $x$, or we could say a mathematical algorithm for generating all possible values of $\langle x| \psi \rangle$, the probability amplitude that a system in state $| \psi \rangle$ has position $x$.
Example
For the ground state of the well-known particle-in-a-box of unit dimension.
$\langle x | \psi \rangle = \psi(x) = 2^{1/2} \sin (\pi x) \nonumber$
However, if we wish to express $\psi$ in momentum space we would write
$\langle p | \psi \rangle = \psi(p) = \pi^{1/2} \dfrac{e^{-ip} +1}{\pi^2 - p^2} \nonumber$
How one finds this latter expression will be discussed later.
The major point here is that there is more than one language in which to express $| \psi \rangle$. The most common language for chemists is coordinate space ($x$, $y$, and $z$, or $r$, $\theta$, and $\phi$, etc.), but we shall see that momentum space offers an equally important view of the state function. It is important to recognize that $\langle x| \psi \rangle$ and $\langle p| \psi \rangle$ are formally equivalent and contain the same physical information about the state of the system. One of the tenets of quantum mechanics is that if you know $| \psi \rangle$, you know everything there is to know about the system, and if, in particular, you know $\langle x| \psi \rangle$, you can calculate all of the properties of the system and transform $\langle x| \psi \rangle$, if you wish, into any other appropriate language such as momentum space.
A bra-ket pair can also be thought of as a vector projection (i.e., a dot product) - the projection of the content of the ket onto the content of the bra, or the “shadow” the ket casts on the bra. For example, $\langle \Phi | \psi \rangle$ is the projection of state $\psi$ onto state $\Phi$. It is the amplitude (probability amplitude) that a system in state $| \psi \rangle$ will be subsequently found in state $| \Phi \rangle$. It is also what we have come to call an overlap integral.
The $| \psi \rangle$ state vector can be a complex function (that is, have the form $a + ib$, or $\exp(-ipx)$, for example, where $i = \sqrt{-1}$). Given the relation of amplitudes to probabilities mentioned above, it is necessary that $\langle \psi | \psi \rangle$, the projection of $| \psi \rangle$ onto itself, is real. This requires that
$\langle \psi | = | \psi \rangle^* \nonumber$
where $| \psi \rangle^*$ is the complex conjugate of $| \psi \rangle$. So if $| \psi \rangle = a + ib$ then $\langle \psi | = a - ib$, which yields $\langle \psi | \psi \rangle = a^2 + b^2$, a real number.
The Linear Superposition
The analysis above can be approached in a less direct, but still revealing way by writing $| \psi \rangle$ and $\langle \Phi |$ as linear superpositions in the eigenstates of the position operator as is shown below.
$| \psi \rangle = \int | x \rangle \langle x | \psi \rangle \, dx \nonumber$
$\langle \Phi | = \int \langle \Phi | x' \rangle \langle x' | \, dx' \nonumber$
Combining these as a bra-ket pair yields,
$\langle \Phi | \psi \rangle = \iint \langle \Phi | x' \rangle \langle x' | x \rangle \langle x | \psi \rangle \; dx' \,dx = \int \langle \Phi | x \rangle \langle x | \psi \rangle \; dx \nonumber$
The $x'$ disappears because the position eigenstates are an orthogonal basis set and $\langle x' | x \rangle =0$ unless $x' = x$ in which case it equals 1.
$| \psi \rangle = \sum_n | n \rangle \langle n | \psi \rangle$ is a linear superposition in the discrete (rather than continuous) basis set $\{|n\rangle \}$. A specific example of this type of superposition is easy to demonstrate using matrix mechanics.
It cannot be stressed too strongly that a linear superposition is not a mixture. For example, when the system is in the state $| S_{xu} \rangle$ every measurement of the $x$-direction spin yields the same result: spin-up. However, measurement of the z-direction spin yields spin-up 50% of the time and spin-down 50% of the time. The system has a well-defined value for the spin in the x-direction, but an indeterminate spin in the z-direction. It is easy to calculate the probabilities for the z-direction spin measurements:
$\left| \langle S_{zu} | S_{xu} \rangle \right|^2 = \dfrac{1}{2} \nonumber$
and
$\left| \langle S_{zd} | S_{xu} \rangle \right|^2 = \dfrac{1}{2}. \nonumber$
The reason $| S_{xu} \rangle$ cannot be interpreted as a 50-50 mixture of $| S_{zu} \rangle$ and $| S_{zd} \rangle$ is because $| S_{zu} \rangle$ and $| S_{zd} \rangle$ are themselves linear superpositions of $| S_{xu} \rangle$ and $| S_{xd} \rangle$:
$| S_{zu} \rangle = \dfrac{ | S_{xu} \rangle + | S_{xd} \rangle}{2^{1/2}} \nonumber$
and
$| S_{zd} \rangle = \dfrac{ | S_{xu} \rangle - | S_{xd} \rangle}{2^{1/2}} \nonumber$
Thus, if $| S_{xu} \rangle$ were a mixture of $| S_{zu} \rangle$ and $| S_{zd} \rangle$, it would yield an indefinite result for measurements of the spin in the x-direction, in spite of the fact that it is an eigenfunction of the x-direction spin operator.
Example
Just one more example of the linear superposition. Consider a trial wave function for the particle in the one-dimensional, one-bohr box such as:
$\Phi(x) = \sqrt{105} (x^2-x^3) \nonumber$
Because the eigenfunctions for the particle-in-a-box problem form a complete basis set, $\Phi(x)$ can be written as a linear combination (i.e., a linear superposition) of these eigenfunctions.
$| \Phi \rangle = \sum_n |n \rangle \langle n| \Phi \rangle = \sum_n | n \rangle \int \langle n |x \rangle \langle x | \Phi \rangle dx \nonumber$
In this notation $\langle n | \Phi \rangle$ is the projection of $| \Phi \rangle$ onto the eigenstate $|n\rangle$. This projection or shadow of $\Phi$ onto $n$ can be written as $c_n$. It is a measure of the contribution $| n \rangle$ makes to the state $| \Phi \rangle$. It is also an overlap integral. Therefore we can write
$| \Phi \rangle = \sum_n | n \rangle c_n \nonumber$
Using numerical software like MATLAB, it is easy to show that the first ten coefficients in this expansion are:
| $c_1$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $c_7$ | $c_8$ | $c_9$ | $c_{10}$ |
|---|---|---|---|---|---|---|---|---|---|
| 0.935 | -0.351 | 0.035 | -0.044 | 0.007 | -0.013 | 0.003 | -0.005 | 0.001 | -0.003 |
These expansion coefficients show that the trial wavefunction strongly resembles the lowest energy eigenstate ($n = 1$) of the particle-in-a-box system.
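A Python sketch of this coefficient calculation is given below (the text mentions MATLAB; numpy serves the same purpose). The quadrature is a simple Riemann sum, which is adequate here because the integrands vanish at both endpoints.

import numpy as np

x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
Phi = np.sqrt(105) * (x**2 - x**3)        # the trial function above

for n in range(1, 11):
    psi_n = np.sqrt(2) * np.sin(n * np.pi * x)
    c_n = np.sum(psi_n * Phi) * dx        # overlap integral <n|Phi>
    print(n, round(c_n, 3))               # compare with the table of coefficients above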
Operators, Eigenvectors, Eigenvalues, and Expectation Values
In matrix mechanics operators are matrices and states are represented by vectors. The matrices operate on the vectors to obtain useful physical information about the state of the system. According to quantum theory there is an operator for every physical observable and a system is either in a state with a well-defined value for that observable or it is not. The operators associated with spin in the $x$- and $z$-direction, in units of $\frac{h}{4\pi}$, and the vector representations of their eigenstates, are shown below.

$\hat{S}_{x}=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \hat{S}_{z}=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \nonumber$

$| S_{xu} \rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad | S_{xd} \rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} \qquad | S_{zu} \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad | S_{zd} \rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \nonumber$
When $\hat{S}_{x}$ operates on $| S_{xu} \rangle$ the result is $| S_{xu} \rangle$ itself: $| S_{xu} \rangle$ is an eigenfunction or eigenvector of $\hat{S}_{x}$ with eigenvalue +1 (in units of $\frac{h}{4\pi}$). However, $| S_{xu} \rangle$ is not an eigenfunction of $\hat{S}_{z}$, because $\hat{S}_{z} | S_{xu} \rangle = | S_{xd} \rangle$, which is not a constant multiple of $| S_{xu} \rangle$. This means, as mentioned in the previous section, that $| S_{xu} \rangle$ does not have a definite value for spin in the z-direction. Under these circumstances we can’t predict with certainty the outcome of a z-direction spin measurement, but we can calculate the average value for a large number of measurements. This is called the expectation value, and in Dirac notation it is represented as $\langle S_{xu} | \hat{S}_{z} | S_{xu} \rangle$. In matrix mechanics it is calculated as follows.

$\langle S_{xu} | \hat{S}_{z} | S_{xu} \rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 0 \nonumber$
This result is consistent with the previous discussion, which showed that $| S_{xu} \rangle$ is a 50-50 linear superposition of $| S_{zu} \rangle$ and $| S_{zd} \rangle$ with eigenvalues of +1 and -1, respectively. In other words, half the time the result of the measurement is +1 and the other half -1, yielding an average value of zero. The same strategy applies to expectation values in general: to make the calculation computationally friendly, the state is expanded in the eigenstates of the position operator, and a simplification occurs because operators that depend only on position are multiplicative in the coordinate representation.
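The matrix arithmetic above is easy to verify numerically. The sketch below (Python/numpy) uses the spin matrices and eigenvectors quoted in this section, in units of h/4π.

import numpy as np

Sx = np.array([[0, 1], [1, 0]])
Sz = np.array([[1, 0], [0, -1]])
Sxu = np.array([1, 1]) / np.sqrt(2)     # spin-up in the x-direction

print(Sx @ Sxu)      # returns Sxu itself: eigenvector with eigenvalue +1
print(Sz @ Sxu)      # returns (|up> - |down>)/sqrt(2): not a multiple of Sxu
print(Sxu @ Sz @ Sxu)  # expectation value <Sz> = 0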
The Variation Method
We have had a preliminary look at the variation method, an approximate method used when an exact solution to Schrödinger’s equation is not available. Using
$\Phi(x) = \sqrt{30} x (1 -x) \nonumber$
as a trial wave function for the particle-in-the-box problem, we evaluate the expectation value for the energy as
$\langle E \rangle = \langle \Phi | \hat{H} | \Phi \rangle \nonumber$
However, employing Dirac’s formalism we can expand $\Phi$ as noted above, in terms of the eigenfunctions of $H$ as follows.
$\langle E \rangle = \langle \Phi | \hat{H} | \Phi \rangle = \sum _n \langle \Phi | \hat{H} | n \rangle \langle n | \Phi \rangle \nonumber$
However,
$\hat{H} | n \rangle = E_n | n \rangle \nonumber$
because the states
$|n \rangle = \sqrt{2} \sin (n\pi x) \nonumber$
are eigenfunctions of the energy operator $\hat{H}$. Thus, the energy expression becomes
$\langle E \rangle = \sum_n \langle \Phi | n \rangle E_n \langle n | \Phi \rangle = \sum_n | c_n |^2 E_n \nonumber$
with
$E_n = \dfrac{n^2\pi^2}{2} \nonumber$
Because $\Phi$ is not an eigenfunction of $\hat{H}$, the energy operator, this system does not have a well-defined energy and all we can do is calculate the average value for many experimental measurements of the energy. Each individual energy measurement will yield one of the eigenvalues of the energy operator, $E_n$, and the $|c_n|^2$ values tell us the probability of this result being achieved. Using Mathcad it is easy to show that
| $c_1^2$ | $c_3^2$ | $c_5^2$ | $c_7^2$ |
|---|---|---|---|
| 0.9987 | 0.0014 | 0.00006 | 0.00001 |
All other coefficients are zero or vanishingly small. These results say that if we make an energy measurement on a system in the state represented by $\Phi$ there is a 99.87% chance we will get 4.935, a 0.14% chance we will get 19.739, and so on. We might say then that the state $\Phi$ is a linear combination of the first four odd eigenfunctions, with the first eigenfunction making by far the biggest contribution.
The variational theorem says that no matter how hard you try in constructing trial wave functions you cannot do better than the ‘true’ ground state value for the energy, and this equation captures that important principle. The only way $\Phi$ can give the correct result for the ground state of the particle in the box, for example, is if $c_1 = 1$, that is, if $\Phi$ is the ground-state eigenfunction itself. If this is not true, then $c_1 < 1$, the other values of $c_n$ are non-zero, and the energy has to be greater than $E_1$. Taking another look at the last two equations reveals that a measurement operator can always be written as a projection operator involving its eigenstates.
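As a numerical check of the variational argument, the sketch below (Python/numpy) accumulates $\sum_n |c_n|^2 E_n$ for the trial function $\Phi(x) = \sqrt{30}\,x(1-x)$; the sum converges to 5.0, slightly above the exact ground-state energy $\pi^2/2 = 4.935$, as the theorem requires.

import numpy as np

x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
Phi = np.sqrt(30) * x * (1 - x)           # trial wave function

E_avg = 0.0
for n in range(1, 50):
    c_n = np.sum(np.sqrt(2) * np.sin(n*np.pi*x) * Phi) * dx   # <n|Phi>
    E_avg += c_n**2 * n**2 * np.pi**2 / 2                     # |c_n|^2 * E_n

print(E_avg)          # ~5.0, compared with E_1 = 4.935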
Momentum Operator in Coordinate Space
Wave-particle duality is at the heart of quantum mechanics. A particle with wavelength $\lambda$ has the (un-normalized) wave function $\langle x | \lambda \rangle = \exp\left(\frac{2 \pi i x}{\lambda}\right)$. However, according to de Broglie’s wave equation the particle’s momentum is $p = \frac{h}{\lambda}$. Therefore the momentum wave function of the particle in coordinate space is $\langle x | p \rangle = \exp\left(\frac{i p x}{\hbar}\right)$. In momentum space the following eigenvalue equation holds: $\hat{p} | p \rangle = p | p \rangle$. Operating on the momentum eigenfunction with the momentum operator in momentum space returns the momentum eigenvalue times the original momentum eigenfunction. In other words, in its own space the momentum operator is a multiplicative operator (the same is true of the position operator in coordinate space). To obtain the momentum operator in coordinate space this expression can be projected onto coordinate space by operating on the left by $\langle x |$.

$\langle x | \hat{p} | p \rangle = p \langle x | p \rangle = p \exp\left(\frac{i p x}{\hbar}\right) = \frac{\hbar}{i} \frac{d}{dx} \exp\left(\frac{i p x}{\hbar}\right) = \frac{\hbar}{i} \frac{d}{dx} \langle x | p \rangle \nonumber$
Comparing the first and last terms reveals that $\langle x | \hat{p} = \frac{\hbar}{i} \frac{d}{dx} \langle x |$, and that $\frac{\hbar}{i} \frac{d}{dx}$ is the momentum operator in coordinate space. Similarly, $\langle p | x \rangle = \exp\left(-\frac{i p x}{\hbar}\right)$ is the position wave function in momentum space. Using the method outlined above it is easy to show that the position operator in momentum space is $-\frac{\hbar}{i} \frac{d}{dp} = i \hbar \frac{d}{dp}$.
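A one-line symbolic check of this result is sketched below, assuming SymPy (which is not used elsewhere in these tutorials): applying $\frac{\hbar}{i}\frac{d}{dx}$ to $\exp(ipx/\hbar)$ returns $p$ times the original function.

import sympy as sp

x, p, hbar = sp.symbols('x p hbar', positive=True)
f = sp.exp(sp.I * p * x / hbar)                 # momentum eigenfunction in coordinate space

result = (hbar / sp.I) * sp.diff(f, x)          # apply the coordinate-space momentum operator
print(sp.simplify(result / f))                  # -> p, the momentum eigenvalue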
Fourier Transform
Quantum chemists work mainly in position (x,y,z) space because they are interested in electron densities, how the electrons are distributed in space in atoms and molecules. However, quantum mechanics has an equivalent formulation in momentum space. It answers the question of what does the distribution of electron velocities look like? The two formulations are equivalent, that is, they contain the same information, and are connected formally through a Fourier transform. The Dirac notation shows this connection very clearly.
$\langle p | \Psi\rangle=\int\langle p | x\rangle\langle x | \Psi\rangle d x \nonumber$
The factor $\langle x | \Psi \rangle$ is the amplitude that a system in the state $\Psi$ has position x, and $\langle p | x \rangle$ is then the amplitude that a system with position x has momentum p. Integrating over all values of x gives all the ways a system in the state $\Psi$ can have momentum p. As a particular example we can choose the particle-in-a-box problem with the eigenfunctions noted above. It is easy to show that the momentum eigenstates in position space in atomic units (see previous section) are $\langle x | p \rangle = \frac{1}{\sqrt{2 \pi}} \exp (i p x)$. This, of course, means that the complex conjugate is $\langle p | x \rangle = \frac{1}{\sqrt{2 \pi}} \exp (-i p x)$. Therefore, the Fourier transform of $\Psi_n(x)$ into momentum space is

$\Phi_{n}(p)=\frac{1}{\sqrt{2 \pi}} \int_{0}^{1} \exp (-i p x) \cdot \sqrt{2} \cdot \sin (n \pi x) d x \nonumber$
This integral can be evaluated analytically and yields the momentum-space wave functions for the particle in a box. A graphical display of the momentum distribution function, $|\Phi_{n}(p)|^{2}$, for several states is shown below.
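The transform can also be carried out numerically. A minimal Python sketch (using numpy, scipy and matplotlib rather than the Mathcad of the original) evaluates $\Phi_n(p)$ on a momentum grid and plots the momentum distribution for the first few states:

%matplotlib inline
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

def phi(n, p):
    # numerical Fourier transform of the PIB eigenfunction into momentum space
    re = quad(lambda x: np.cos(p * x) * np.sqrt(2) * np.sin(n * np.pi * x), 0, 1)[0]
    im = quad(lambda x: -np.sin(p * x) * np.sqrt(2) * np.sin(n * np.pi * x), 0, 1)[0]
    return (re + 1j * im) / np.sqrt(2 * np.pi)

p = np.linspace(-30, 30, 601)
for n in (1, 2, 3):
    plt.plot(p, [abs(phi(n, pk))**2 for pk in p], label=f"n = {n}")
plt.xlabel("p")
plt.ylabel("$|\\Phi_n(p)|^2$")
plt.legend()
plt.show()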
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.25%3A_The_Dirac_Delta_Function.txt
|
The particle-in-a-box problem is exactly soluble and the solution is calculated below for the first 20 eigenstates. All calculations will be carried out in atomic units.
$\psi(n,x) = \sqrt{2} \sin(n \pi x) \nonumber$
$E_n = \dfrac{n^{2} \pi^{2}}{2} \nonumber$
with $n = 1, 2, ..., 20$
The First Five Eigenvalues
The first five energy eigenvalues are:
$E_{1} = 4.935$ $E_{2} = 19.739$ $E_{3} = 44.413$ $E_{4} = 78.957$ $E_{5} = 123.37$
The first three eigenfunctions are displayed below:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# grid on the unit interval for plotting the first three eigenfunctions
t = np.linspace(0,1,100)
t1 = t*np.pi
t2 = t*np.pi*2
t3 = t*np.pi*3
a = np.sin(t1)
b = np.sin(t2)
c = np.sin(t3)
plt.xlim(0,1)
plt.plot(t,a,color = "red", label= "\u03C8 (1,x)")
plt.plot(t,b,color = "blue",label = "\u03c8 (2,x)")
plt.plot(t,c,color = "limegreen",label = "\u03c8 (3,x)")
plt.plot(t,(t*0), color ="black")
plt.xticks([0,0.5,1])
plt.yticks([-.5,0,.5],[])
plt.xlabel("x")
leg = plt.legend(loc = "center", bbox_to_anchor=[-.11,.5],frameon=False)
plt.tick_params(top=True,right=True,direction="in")
plt.show()
The set of eigenfunctions forms a complete basis set, and any other function satisfying the same boundary conditions can be written as a linear combination in this basis set. For example, $\Phi$, $\chi$, and $\Gamma$ are three trial functions that satisfy the boundary conditions for the particle in a 1 bohr box.
$\Phi(x) = \sqrt{30}(x-x^{2}) \nonumber$
$\chi(x) = \sqrt{105}(x^{2}-x^{3}) \nonumber$
$\Gamma(x) = \sqrt{105}x(1-x)^{2} \nonumber$
In Dirac bra-ket notation we can express any of these functions as a linear combination in the basis set as follows:
\begin{align} \langle x | \Phi \rangle &= \sum_{n}^{\infty} \langle x | \psi_{n} \rangle \langle \psi_{n} | \Phi \rangle \[4pt] &= \sum_{n}^{\infty} \langle x | \psi_{n} \rangle \int_{0}^{1} \langle \psi_{n} | x \rangle \langle x | \Phi \rangle dx \end{align} \nonumber
The various overlap integrals for the three trial functions are evaluated below.
$a_{n} = \int_{0}^{1} \psi(n,x) \Phi(x)dx \nonumber$
$b_{n} = \int_{0}^{1} \psi(n,x) \chi(x)dx \nonumber$
$c_{n} = \int_{0}^{1} \psi(n,x) \Gamma(x)dx \nonumber$
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
# compare the ground-state eigenfunction with the three trial functions
t = np.arange(0,1,.001)
plt.plot(t,math.sqrt(2)*np.sin(t*np.pi),color = "red", label = "\u03C8 (1,x)")
plt.plot(t,math.sqrt(30)*(t-t**2),color ="blue", linestyle = "--",label = "\u03A6(x)")
plt.plot(t,math.sqrt(105)*t*(1-t)**2,color = "lime", linestyle = "--", label = "\u0393 (x)")
plt.plot(t,math.sqrt(105)*(t**2 - t**3),color = "magenta", linestyle = "-.", label = "\u03A7 (x)")
plt.xticks([0.2,0.4,0.6,0.8])
plt.yticks([.5,1,1.5],[ ])
plt.tick_params(direction="in")
plt.xlabel("x")
leg = plt.legend(loc = "center",bbox_to_anchor=[-.11,.5], frameon=False)
plt.tick_params(top=True, right=True)
plt.xlim(0,1)
plt.ylim(0,2)
plt.show()
The figure shown below demonstrates that only $\Phi$ is a reasonable representation of the ground-state wavefunction.
First Five Particle-in-a-Box Eigenfunctions
If $\Phi$ is written as a linear combination of the first 5 PIB eigenfunctions, one gets two functions that are essentially indistinguishable from one another.
The same, of course, is true for $\chi$ and $\Gamma$, as is demonstrated in the graphs shown below.
Traditionally we use energy as a criterion for the quality of a trial wavefunction by evaluating the variational integral in the following way.
$\int_{0}^{1} \Phi(x) \cdot\left(-\frac{1}{2} \cdot \frac{d^{2}}{d x^{2}} \Phi(x)\right) d x=5 \quad \int_{0}^{1} \chi(x) \cdot\left(-\frac{1}{2} \cdot \frac{d^{2}}{d x^{2}} \chi(x)\right) d x=7 \quad \int_{0}^{1} \Gamma(x) \cdot\left(-\frac{1}{2} \cdot \frac{d^{2}}{d x^{2}} \Gamma(x)\right) d x=7 \nonumber$
In Dirac notation we write:
$\langle E\rangle=\langle\Phi|\hat{H}| \Phi\rangle=\sum_{n}\langle\Phi|\hat{H}| \Psi_{n}\rangle\left\langle\Psi_{n} | \Phi\right\rangle=\sum\langle\Phi | \Psi_{n}\rangle E_{n}\left\langle\Psi_{n} | \Phi\right\rangle=\sum_{n} a_{n}^{2} E_{n} \nonumber$
Thus we easily show the same result.
$\sum_{\mathrm{n}}\left[\left(\mathrm{a}_{\mathrm{n}}\right)^{2} \cdot \mathrm{E}_{\mathrm{n}}\right]=5 \quad \sum_{\mathrm{n}}\left[\left(\mathrm{b}_{\mathrm{n}}\right)^{2} \cdot \mathrm{E}_{\mathrm{n}}\right]=6.999 \quad \sum_{\mathrm{n}}\left[\left(\mathrm{c}_{\mathrm{n}}\right)^{2} \cdot \mathrm{E}_{\mathrm{n}}\right]=6.999 \nonumber$
We now show, belatedly, that the three trial functions are normalized by both methods.
$\int_{0}^{1} \Phi(x)^{2} d x=1 \quad \int_{0}^{1} \chi(x)^{2} d x=1 \quad \int_{0}^{1} \Gamma(x)^{2} d x=1 \nonumber$
In Dirac notation this is formulated as:
$\langle\Phi | \Phi\rangle=\sum_{n}\langle\Phi | \Psi_{n}\rangle\left\langle\Psi_{n} | \Phi\right\rangle=\sum_{n} a_{n}^{2} \nonumber$
$\sum_{\mathrm{n}}\left(a_{n}\right)^{2}=1 \quad \sum_{n}\left(b_{n}\right)^{2}=1 \quad \sum_{n}\left(c_{n}\right)^{2}=1 \nonumber$
We now calculate some overlap integrals:
$\int_{0}^{1} \Phi(x) \cdot \chi(x) d x=0.935 \quad \quad \int_{0}^{1} \Phi(x) \cdot \Gamma(x) d x=0.935 \quad \int_{0}^{1} \chi(x) \cdot \Gamma(x) d x=0.75 \nonumber$
In Dirac notation this is formulated as:
$\langle\Phi | \Gamma\rangle=\sum_{n}\langle\Phi | \Psi_{n}\rangle\left\langle\Psi_{n} | \Gamma\right\rangle=\sum_{n} a_{n} c_{n} \nonumber$
$\sum_{\mathrm{n}}\left(\mathrm{a}_{\mathrm{n}} \cdot \mathrm{b}_{\mathrm{n}}\right)=0.935 \quad \sum_{\mathrm{n}}\left(\mathrm{a}_{\mathrm{n}} \cdot \mathrm{c}_{\mathrm{n}}\right)=0.935 \quad \sum_{\mathrm{n}}\left(\mathrm{b}_{\mathrm{n}} \cdot \mathrm{c}_{\mathrm{n}}\right)=0.75 \nonumber$
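These sums are easily reproduced numerically. The Python sketch below (numpy/scipy standing in for the Mathcad of the original) computes the overlap coefficients $a_n$, $b_n$ and $c_n$ and recovers the variational energies, normalizations and overlap integrals quoted above:

import numpy as np
from scipy.integrate import quad

psi = lambda n, x: np.sqrt(2) * np.sin(n * np.pi * x)   # PIB eigenfunctions
E   = lambda n: n**2 * np.pi**2 / 2                     # PIB energies
Phi   = lambda x: np.sqrt(30)  * (x - x**2)             # the three trial functions
chi   = lambda x: np.sqrt(105) * (x**2 - x**3)
Gamma = lambda x: np.sqrt(105) * x * (1 - x)**2

ns = range(1, 31)
a = [quad(lambda x: psi(n, x) * Phi(x),   0, 1)[0] for n in ns]
b = [quad(lambda x: psi(n, x) * chi(x),   0, 1)[0] for n in ns]
c = [quad(lambda x: psi(n, x) * Gamma(x), 0, 1)[0] for n in ns]

print(sum(ai**2 * E(n) for ai, n in zip(a, ns)))   # ~5:     variational energy of Phi
print(sum(bi**2 * E(n) for bi, n in zip(b, ns)))   # ~7:     variational energy of chi
print(sum(ai**2 for ai in a))                      # ~1:     <Phi|Phi>
print(sum(ai * ci for ai, ci in zip(a, c)))        # ~0.935: <Phi|Gamma>
print(sum(bi * ci for bi, ci in zip(b, c)))        # ~0.75:  <chi|Gamma>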
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.27%3A_The_Dirac_Notation_Applied_to_Variational_Calculations.txt
|
The purpose of this tutorial is to illustrate uses of the creation (raising) and annihilation (lowering) operators in the complementary coordinate and matrix representations. These operators have routine utility in quantum mechanics in general, and are especially useful in the areas of quantum optics and quantum information.
The harmonic oscillator eigenstates are regularly used to represent (in a rudimentary way) the vibrational states of diatomic molecules and also (more rigorously) the quantized states of the electromagnetic field. The creation operator adds a quantum of energy to the molecule or the electromagnetic field and the annihilation operator does the opposite.
The harmonic oscillator eigenfunctions in coordinate space are given below, where v is the quantum number and can have the values 0, 1, 2, ...
$\Psi(\mathrm{v}, \mathrm{x}) :=\frac{1}{\sqrt{2^{\mathrm{v}} \cdot \mathrm{v} ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}(\mathrm{v}, \mathrm{x}) \cdot \exp \left(\frac{-\mathrm{x}^{2}}{2}\right) \nonumber$
First we demonstrate that the harmonic oscillator eigenfunctions are normalized.
$\int_{-\infty}^{\infty} \Psi(0, x)^{2} d x=1 \qquad \int_{-\infty}^{\infty} \Psi(1, x)^{2} d x=1 \qquad \int_{-\infty}^{\infty} \Psi(2, x)^{2} d x=1 \nonumber$
Next we demonstrate that they are orthogonal:
$\int_{-\infty}^{\infty} \Psi(0, \mathrm{x}) \cdot \Psi(1, \mathrm{x}) \mathrm{dx}=0 \quad \int_{-\infty}^{\infty} \Psi(0, \mathrm{x}) \cdot \Psi(2, \mathrm{x}) \mathrm{dx}=0 \quad \int_{-\infty}^{\infty} \Psi(1, \mathrm{x}) \cdot \Psi(2, \mathrm{x}) \mathrm{dx}=0 \nonumber$
The harmonic oscillator eigenfunctions form an orthonormal basis set. They are displayed below.
The raising or creation operator in the coordinate representation in reduced units is the position operator minus i times the coordinate-space momentum operator:

$\mathrm{x} \cdot \Box-\frac{\mathrm{d}}{\mathrm{dx}} \Box \nonumber$
Operating on the v = 0 eigenfunction yields the v = 1 eigenfunction:
The lowering or annihilation operator in the coordinate representation in reduced units is the position operator plus i times the coordinate space momentum operator:
$\mathrm{x} \cdot \Box+\frac{\mathrm{d}}{\mathrm{dx}} \Box \nonumber$
Operating on the v = 2 eigenfunction yields the v = 1 eigenfunction:
The energy operator in coordinate space and the energy expectation value for the v = 2 state are given below. $E_{v} = v + \frac{1}{2}$ in atomic units.
$H=\frac{-1}{2} \cdot \frac{d^{2}}{d x^{2}} \Box+\frac{1}{2} \cdot x^{2} \cdot \Box \quad \int_{-\infty}^{\infty} \Psi(2, x) \cdot\left[\frac{-1}{2} \cdot \frac{d^{2}}{d x^{2}} \Psi(2, x)+\frac{1}{2} \cdot x^{2} \cdot \Psi(2, x)\right] d x=2.5 \nonumber$
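These coordinate-space manipulations can be checked symbolically. The following Python/sympy sketch (an alternative to the Mathcad of the original; the 1/√2 factor in the operators is the conventional normalization) verifies that the raising operator turns $\Psi(0,x)$ into $\Psi(1,x)$, that the lowering operator turns $\Psi(2,x)$ into $\sqrt{2}\,\Psi(1,x)$, and that $\langle E \rangle = 2.5$ for v = 2:

import sympy as sp

x = sp.symbols('x', real=True)

def psi(v):
    # harmonic oscillator eigenfunction in reduced units
    return (1 / sp.sqrt(2**v * sp.factorial(v) * sp.sqrt(sp.pi))
            * sp.hermite(v, x) * sp.exp(-x**2 / 2))

create     = lambda f: (x * f - sp.diff(f, x)) / sp.sqrt(2)   # raising operator
annihilate = lambda f: (x * f + sp.diff(f, x)) / sp.sqrt(2)   # lowering operator

print(sp.simplify(create(psi(0)) - psi(1)))                   # 0
print(sp.simplify(annihilate(psi(2)) - sp.sqrt(2) * psi(1)))  # 0

H = lambda f: -sp.diff(f, x, 2) / 2 + x**2 * f / 2            # energy operator
print(sp.integrate(psi(2) * H(psi(2)), (x, -sp.oo, sp.oo)))   # 5/2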
In the matrix formulation of quantum mechanics the harmonic oscillator eigenfunctions are vectors. The matrix representations for the first five eigenstates are given below. They are actually infinite vectors which for practical purposes are truncated at order 5.
$v 0 :=\left( \begin{array}{l}{1} \ {0} \ {0} \ {0} \ {0}\end{array}\right) \quad v 1 :=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0} \ {0}\end{array}\right) \quad \mathrm{v} 2 :=\left( \begin{array}{l}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right) \quad \mathrm{v} 3 :=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right) \quad \mathrm{v} 4 :=\left( \begin{array}{l}{0} \ {0} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
They form an orthonormal basis set:
$\mathrm{v} 0^{\mathrm{T}} \cdot \mathrm{v} 0=1 \quad \mathrm{v} 1^{\mathrm{T}} \cdot \mathrm{v} 1=1 \quad \mathrm{v} 2^{\mathrm{T}} \cdot \mathrm{v} 2=1 \qquad \mathrm{v} 0^{\mathrm{T}} \cdot \mathrm{v} 1=0 \quad \mathrm{v} 0^{\mathrm{T}} \cdot \mathrm{v} 2=0 \quad \mathrm{v} 1^{\mathrm{T}} \cdot \mathrm{v} 2=0 \nonumber$
In this context the creation and annihilation operators are 5x5 matrices.
$\text{Create}:=\left( \begin{array}{ccccc}{0} & {0} & {0} & {0} & {0} \ {\sqrt{1}} & {0} & {0} & {0} & {0} \ {0} & {\sqrt{2}} & {0} & {0} & {0} \ {0} & {0} & {\sqrt{3}} & {0} & {0} \ {0} & {0} & {0} & {\sqrt{4}} & {0}\end{array}\right) \qquad \text{Annihilate}:=\left( \begin{array}{ccccc}{0} & {\sqrt{1}} & {0} & {0} & {0} \ {0} & {0} & {\sqrt{2}} & {0} & {0} \ {0} & {0} & {0} & {\sqrt{3}} & {0} \ {0} & {0} & {0} & {0} & {\sqrt{4}} \ {0} & {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
The annihilation operator on the v = 2 state:
$\hat{a} | n \rangle=\sqrt{n} | n-1 \rangle \quad \text { Annihilate} \cdot \left( \begin{array}{c}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {1.414} \ {0} \ {0} \ {0}\end{array}\right) \nonumber$
The annihilation operator on the v = 0 state:
$\text{Annihilate} \cdot\left( \begin{array}{l}{1} \ {0} \ {0} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {0} \ {0} \ {0}\end{array}\right) \nonumber$
The creation operator on the v = 2 state:
$\hat{a}^{\dagger} | n \rangle=\sqrt{n+1} | n+1 \rangle \qquad \text{Create}\cdot \left( \begin{array}{c}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {1.732} \ {0}\end{array}\right) \nonumber$
The number operator on the v = 2 state:
$\hat{a}^{\dagger} \hat{a} | n \rangle=n | n \rangle \ \text{Create} \cdot \text{Annihilate}=\left( \begin{array}{ccccc}{0} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {2} & {0} & {0} \ {0} & {0} & {0} & {3} & {0} \ {0} & {0} & {0} & {0} & {4}\end{array}\right) \quad \text{Create} \cdot \text{Annihilate} \cdot\left( \begin{array}{l}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {2} \ {0} \ {0}\end{array}\right) \nonumber$
Or do it this way:
$\begin{matrix} \mathrm{v} 0^{\mathrm{T}} \cdot \text { Create } \cdot \text { Annihilate } \cdot \mathrm{v} 0=0 & \quad \mathrm{v} 1^{\mathrm{T}} \cdot \text { Create } \cdot \text { Annihilate } \cdot \mathrm{v} 1=1 \ \mathrm{v} 2^{\mathrm{T}} \cdot \text { Create } \cdot \text { Annihilate } \cdot \mathrm{v} 2=2 & \quad \mathrm{v} 3^{\mathrm{T}} \cdot \text { Create} \cdot \text{ Annihilate } \cdot \mathrm{v} 3=3 \end{matrix} \nonumber$
The energy operator operating on the v = 2 and v = 4 states:
$\left(\hat{a}^{\dagger} \hat{a}+\frac{1}{2}\right) | n \rangle=\left(n+\frac{1}{2}\right) | n \rangle \nonumber$
$\text{Create} \cdot \text{Annihilate} \cdot \left( \begin{array}{l}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right)+\frac{1}{2} \cdot \left( \begin{array}{l}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {2.5} \ {0} \ {0}\end{array}\right) \ \text{Create} \cdot \text{Annihilate} \cdot \left( \begin{array}{c}{0} \ {0} \ {0} \ {0} \ {1}\end{array}\right)+\frac{1}{2} \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {0} \ {4.5}\end{array}\right) \nonumber$
Creating the v = 2 eigenstate from the vacuum:
$| n \rangle=\frac{1}{\sqrt{n !}}\left(\hat{a}^{\dagger}\right)^{n} | 0 \rangle \qquad \frac{1}{\sqrt{2 !}} \cdot \text { Create }^{2} \cdot \left( \begin{array}{c}{1} \ {0} \ {0} \ {0} \ {0}\end{array}\right) \rightarrow \left( \begin{array}{l}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right) \nonumber$
$\frac{1}{\sqrt{2 !}} \cdot \text { Create }^{2} \cdot \mathrm{v} 0=\left( \begin{array}{l}{0} \ {0} \ {1} \ {0} \ {0}\end{array}\right) \nonumber$
This operation is illustrated graphically in the coordinate representation as follows:
Construct the matrix forms of the position and momentum operators using the annihilation and creation operators. See E. E. Anderson, Modern Physics and Quantum Mechanics, page 201.
$\text{Position}:=\frac{\text { Annihilate }+\text { Create }}{\sqrt{2}} \qquad \text{Momentum} :=\frac{\mathrm{i}}{\sqrt{2}} \cdot(\text { Create }-\text { Annihilate }) \nonumber$
Calculate the position and momentum expectation values for several states:
$\mathrm{v} 0^{\mathrm{T}} \cdot \text { Position } \cdot \mathrm{v} 0=0 \quad \mathrm{v} 0^{\mathrm{T}} \cdot \text { Momentum } \mathrm{v} 0=0 \quad \mathrm{v} 1^{\mathrm{T}} \cdot \text { Position } \cdot \mathrm{v} 1=0 \quad \mathrm{v} 1^{\mathrm{T}} \cdot \text { Momentum }\cdot \mathrm{v} 1=0 \nonumber$
Calculate the position‐momentum uncertainty product $(\Delta x \Delta p)$ for several states:
$\sqrt{\mathrm{v} 0^{\mathrm{T}} \cdot \mathrm{Position}^{2} \cdot \mathrm{v} 0-\left(\mathrm{v} 0^{\mathrm{T}} \cdot \text { Position } \cdot \mathrm{v} 0\right)^{2}} \cdot \sqrt{\mathrm{v} 0^{\mathrm{T}} \cdot \text { Momentum }^{2} \cdot \mathrm{v} 0-\left(\mathrm{v} 0^{\mathrm{T}} \cdot \text { Momentum } \cdot \mathrm{v} 0\right)^{2}}=0.5 \nonumber$
$\sqrt{\mathrm{v} 1^{\mathrm{T}} \cdot \mathrm{Position}^{2} \cdot \mathrm{v} 1-\left(\mathrm{v} 1^{\mathrm{T}} \cdot \text { Position } \cdot \mathrm{v} 1\right)^{2}} \cdot \sqrt{\mathrm{v} 1^{\mathrm{T}} \cdot \text { Momentum }^{2} \cdot \mathrm{v} 1-\left(\mathrm{v} 1^{\mathrm{T}} \cdot \text { Momentum } \cdot \mathrm{v} 1\right)^{2}}=1.5 \nonumber$
Calculate the energy expectation value for the following superposition state.
$\Psi :=\frac{1}{\sqrt{2}} \cdot \mathrm{v} 0+\frac{1}{\sqrt{3}} \cdot \mathrm{v} 1+\frac{1}{\sqrt{6}} \cdot \mathrm{v} 2 \qquad \Psi^{\mathrm{T}} \cdot\left(\text { Create} \cdot \text{ Annihilate}\cdot \Psi+\frac{1}{2} \cdot \Psi\right) \rightarrow \frac{7}{6} \nonumber$
$\mathrm{P}_{0} \cdot \mathrm{E}_{0}+\mathrm{P}_{1} \cdot \mathrm{E}_{1}+\mathrm{P}_{2} \cdot \mathrm{E}_{2}=\frac{7}{6} \qquad \frac{1}{2} \cdot \frac{1}{2}+\frac{1}{3} \cdot \frac{3}{2}+\frac{1}{6} \cdot \frac{5}{2} \rightarrow \frac{7}{6} \nonumber$
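The matrix calculations above translate directly into Python with numpy (a sketch, not part of the original Mathcad worksheet). It rebuilds the truncated creation and annihilation matrices and reproduces the number operator, the ground-state uncertainty product and the superposition-state energy:

import numpy as np

N = 5
Create = np.diag(np.sqrt(np.arange(1, N)), k=-1)   # sqrt(n) on the subdiagonal
Annihilate = Create.T                              # transpose of Create (real matrix)

print(np.diag(Create @ Annihilate))                # number operator: [0. 1. 2. 3. 4.]

Position = (Annihilate + Create) / np.sqrt(2)
Momentum = 1j * (Create - Annihilate) / np.sqrt(2)

v0 = np.eye(N)[:, 0]
dx = np.sqrt(v0 @ Position @ Position @ v0 - (v0 @ Position @ v0)**2)
dp = np.sqrt((v0 @ Momentum @ Momentum @ v0 - (v0 @ Momentum @ v0)**2).real)
print(dx * dp)                                     # 0.5 for the ground state

psi = np.eye(N)[:, 0]/np.sqrt(2) + np.eye(N)[:, 1]/np.sqrt(3) + np.eye(N)[:, 2]/np.sqrt(6)
print(psi @ (Create @ Annihilate @ psi + psi / 2)) # 7/6 = 1.1667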
Below it is demonstrated that there are two equivalent forms of the harmonic oscillator energy operator in the matrix formulation of quantum mechanics.
$\left(\hat{a}^{\dagger} \hat{a}+\frac{1}{2}\right) | n \rangle=\left(n+\frac{1}{2}\right) | n \rangle \qquad \left(\hat{a} \hat{a}^{\dagger}-\frac{1}{2}\right) | n \rangle=\left(n+\frac{1}{2}\right) | n \rangle \nonumber$
$\text{Create} \cdot \text{Annihilate} \cdot\left( \begin{array}{c}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right)+\frac{1}{2} \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {3.5} \ {0}\end{array}\right) \ \text{Annihilate} \cdot \text{Create} \cdot\left( \begin{array}{c}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right)-\frac{1}{2} \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {3.5} \ {0}\end{array}\right) \nonumber$
Or, do it this way
$\frac{\text { Create} \cdot \text{ Annihilate + Annihilate} \cdot \text{ Create }}{2} \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {3.5} \ {0}\end{array}\right) \nonumber$
Or this way:
$\left(\frac{\text { Momentum }^{2}}{2}+\frac{\text { Position }^{2}}{2}\right) \cdot \left( \begin{array}{l}{0} \ {0} \ {0} \ {1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {0} \ {0} \ {3.5} \ {0}\end{array}\right) \nonumber$
Demonstrate that the position and momentum operators don't commute using the matrix form of the operators.
$i \cdot (\text{Momentum} \cdot \text{Position} - \text{Position} \cdot \text{Momentum}) = \left( \begin{array}{ccccc}{1} & {0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {0} & {-4}\end{array}\right) \nonumber$
This calculation yields the identity matrix as expected, except for the value of the last diagonal element. The latter is a mathematical artifact of using truncated matrices for operators that are actually infinite-dimensional.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.28%3A_Raising_and_Lowering_Creating_and_Annihilating.txt
|
Slit width: $w : = 1$ Coordinate‐space wave function: $\Psi(x, w) :=\text { if }\left[\left(x \geq-\frac{w}{2}\right) \cdot\left(x \leq \frac{w}{2}\right), 1,0\right]$
$x :=\frac{-w}{2}, \frac{-w}{2}+.005 \dots \frac{w}{2} \nonumber$
A Fourier transform of the coordinate‐space wave function yields the momentum wave function and the momentum distribution function, which is the diffraction pattern.
$\Phi\left(\mathrm{p}_{\mathrm{X}}, \mathrm{w}\right) :=\frac{1}{\sqrt{2 \cdot \pi \cdot \mathrm{w}}} \cdot \int_{-\frac{\mathrm{w}}{2}}^{\frac{\mathrm{w}}{2}} \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{x}} \cdot \mathrm{x}\right) \mathrm{dx} \text { simplify } \rightarrow \frac{\sqrt{2} \cdot \sin \left(\frac{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{w}}{2}\right)}{\sqrt{\pi} \cdot \mathrm{p}_{\mathrm{x}} \cdot \sqrt{\mathrm{w}}} \nonumber$
Now Fourier transform the momentum wave function back to coordinate space and display the result. This is done numerically using large limits of integration for momentum.
$\Psi(x, w) :=\int_{-5000}^{5000} \frac{\sqrt{2} \cdot \sin \left(\frac{1}{2} \cdot w \cdot p_{x}\right)}{\pi^{\frac{1}{2}} \cdot w^{\frac{1}{2}} \cdot p_{x}} \cdot \frac{\exp \left(i \cdot p_{x} \cdot x\right)}{\sqrt{2 \cdot \pi}} d p_{x} \nonumber$
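The same diffraction pattern can be generated with a few lines of Python (a sketch with numpy/matplotlib in place of the Mathcad of the original); np.sinc is used so that the removable singularity at $p_x = 0$ causes no trouble:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

w = 1.0                                            # slit width in atomic units

def phi(p, w=w):
    # sqrt(2)*sin(p*w/2)/(sqrt(pi)*p*sqrt(w)), rewritten with np.sinc
    return np.sqrt(w / (2 * np.pi)) * np.sinc(p * w / (2 * np.pi))

p = np.linspace(-40, 40, 1001)
plt.plot(p, np.abs(phi(p))**2)                     # the single-slit diffraction pattern
plt.xlabel("$p_x$")
plt.ylabel("$|\\Phi(p_x, w)|^2$")
plt.show()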
1.30: From Coordinate Space to Momentum Space and Back
The 2s state of the one-dimensional hydrogen atom is used to illustrate transformations back and forth between the coordinate and momentum representations.
$\Psi_{2}(x) :=\frac{1}{\sqrt{8}} \cdot x \cdot(2-x) \cdot \exp \left(-\frac{x}{2}\right) \nonumber$
The 2s state is Fourier transformed into momentum space (using atomic units) and the magnitude of the momentum wave function is displayed.
$\langle p | \Psi_{2}\rangle=\int_{0}^{\infty}\langle p | x\rangle\langle x | \Psi_{2}\rangle d x \quad \text { where } \quad\langle p | x\rangle=\frac{1}{\sqrt{2 \pi}} \exp \left(\frac{-i p x}{\hbar}\right) \nonumber$
$\Psi_{2}(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \int_{0}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi_{2}(\mathrm{x}) \mathrm{d} x \text { simplify } \rightarrow \frac{2}{\pi^{\frac{1}{2}}} \cdot \frac{2 \cdot \mathrm{i} \cdot \mathrm{p}-1}{(2 \cdot \mathrm{i} \cdot \mathrm{p}+1)^{3}} \nonumber$
The return to coordinate space is carried out in the numeric mode, integrating over the range of momentum values shown above ($\pm$10 is effectively $\pm \infty$).
$\langle x | \Psi_{2}\rangle=\int_{-\infty}^{\infty}\langle x | p\rangle\langle p | \Psi_{2}\rangle d p \quad \text { where } \quad\langle x | p\rangle=\frac{1}{\sqrt{2 \pi}} \exp \left(\frac{i p x}{\hbar}\right) \nonumber$
$\Psi_{2}(x) :=\int_{-10}^{10} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (i \cdot p \cdot x) \cdot \Psi_{2}(p) d p \nonumber$
The graphical display below shows that we have successfully returned to coordinate space.
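A numerical version of this round trip is sketched below in Python (numpy/scipy replacing the Mathcad of the original). It transforms $\Psi_2(x)$ to momentum space on a grid and back, then compares the reconstructed values with the original function:

import numpy as np
from scipy.integrate import quad

def psi2(x):
    # 2s state of the one-dimensional hydrogen atom in atomic units
    return x * (2 - x) * np.exp(-x / 2) / np.sqrt(8)

def phi2(p):
    # forward transform; an upper limit of 50 is effectively infinity here
    re = quad(lambda x: np.cos(p * x) * psi2(x), 0, 50, limit=200)[0]
    im = quad(lambda x: -np.sin(p * x) * psi2(x), 0, 50, limit=200)[0]
    return (re + 1j * im) / np.sqrt(2 * np.pi)

p = np.linspace(-10, 10, 2001)                     # +/-10 is effectively +/-infinity
phi_vals = np.array([phi2(pk) for pk in p])

for x0 in (1.0, 3.0, 6.0):                         # inverse transform at a few points
    back = np.trapz(np.exp(1j * p * x0) * phi_vals, p) / np.sqrt(2 * np.pi)
    print(f"x = {x0}: reconstructed {back.real:.4f}, original {psi2(x0):.4f}")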
1.31: The Position and Momentum Commutation Relation in Coordinate and Momentum Space
The purpose of this exercise is to illustrate the commutation relation between position and momentum in the coordinate and momentum representations using a one‐dimensional representation of the hydrogen atom. The relevance of the uncertainty principle to these calculations will also be demonstrated. All calculations are done in atomic units: $e = m_{e} = 4 \pi \epsilon_{0} = \frac{h}{2 \pi} = 1$.
Coordinate Representation
Position operator: $x \cdot \Box$ Momentum operator: $p=\frac{1}{i} \cdot \frac{d}{d x} \Box$ Integral: $\int_{0}^{\infty} \Box d x$ Kinetic energy operator: $K E=-\frac{1}{2} \cdot \frac{d^{2}}{d x^{2}} \Box$ Potential energy operator: $P E=\frac{-1}{x}$
The 1s state of the hydrogen can be represented in one‐dimension by the following wave function:
$\Psi(x)=2 \cdot x \cdot \exp (-x) \qquad \int_{0}^{\infty} \Psi(x)^{2} d x \rightarrow 1 \nonumber$
First we demonstrate that $\Psi$(x) is an eigenfunction of the energy operator with eigenvalue ‐0.5 in atomic units.
$\frac{-\frac{1}{2} \cdot \frac{d^{2}}{d x^{2}} \Psi(x)-\frac{1}{x} \cdot \Psi(x)}{\Psi(x)} \rightarrow \frac{-1}{2} \nonumber$
It is easy to show that $\Psi (x)$ is not an eigenfunction of the position or momentum operators. This means that while the electron in the hydrogen atom ground state has a well-defined energy, it does not have a well-defined position or momentum. This fact is consistent with the commutator and uncertainty calculations shown below.
Next it is shown that the position and momentum operators do not commute.
$\frac{x \cdot\left(\frac{1}{i} \cdot \frac{d}{d x} \Psi(x)\right)-\frac{1}{i} \cdot \frac{d}{d x}(x \cdot \Psi(x))}{\Psi(x)} \text { simplify } \rightarrow i \nonumber$
This result indicates that $\Psi (x)$ is not an eigenstate of the position and momentum operators, and therefore the order of measurement is important. Gaining knowledge of one observable through measurement destroys information about the other. The commutation relation is closely related to the uncertainty principle, which states that the product of uncertainties in position and momentum must equal or exceed a certain minimum value, 0.5 in atomic units.
The uncertainties in position and momentum are now calculated to show that the uncertainty principle is satisfied.
$\Delta x :=\sqrt{\int_{0}^{\infty} \Psi(x) \cdot x^{2} \cdot \Psi(x) d x-\left(\int_{0}^{\infty} \Psi(x) \cdot x \cdot \Psi(x) d x\right)^{2}} \rightarrow \frac{1}{2} \cdot 3^{\frac{1}{2}} \nonumber$
$\Delta p :=\sqrt{\int_{0}^{\infty} \Psi(x) \cdot\left(-\frac{d^{2}}{d x^{2}} \Psi(x)\right) d x-\left(\int_{0}^{\infty} \Psi(x) \cdot \frac{1}{i} \cdot \frac{d}{d x} \Psi(x) d x\right)^{2}} \rightarrow 1 \nonumber$
$\Delta x \cdot \Delta p=0.866 \nonumber$
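The same uncertainties can be obtained symbolically with Python/sympy (a sketch; the original calculation is a Mathcad worksheet):

import sympy as sp

x = sp.symbols('x', positive=True)
psi = 2 * x * sp.exp(-x)                            # 1D hydrogen ground state, atomic units

x_avg  = sp.integrate(psi * x * psi, (x, 0, sp.oo))
x2_avg = sp.integrate(psi * x**2 * psi, (x, 0, sp.oo))
p_avg  = sp.integrate(psi * sp.diff(psi, x) / sp.I, (x, 0, sp.oo))
p2_avg = sp.integrate(psi * -sp.diff(psi, x, 2), (x, 0, sp.oo))

dx = sp.sqrt(x2_avg - x_avg**2)                     # sqrt(3)/2
dp = sp.sqrt(p2_avg - p_avg**2)                     # 1
print(dx, dp, float(dx * dp))                       # sqrt(3)/2  1  0.866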
We now move to momentum space to show that the results are identical to those calculated in coordinate space.
Momentum Representation
Position operator: $i \cdot \frac{d}{dp} \Box$ Momentum operator: $p \cdot \Box$ Momentum space integral: $\int_{-\infty}^{\infty} \Box d p$
A momentum wave function is obtained by a Fourier transform of the coordinate wave function.
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{0}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}) \mathrm{dx} \text { simplify } \rightarrow \frac{2^{\frac{1}{2}}}{\pi^{\frac{1}{2}} \cdot(\mathrm{i} \cdot \mathrm{p}+1)^{2}} \nonumber$
$\int_{-\infty}^{\infty}(|\Phi(p)|)^{2} d p \rightarrow 1 \nonumber$
The coordinate and momentum wave functions are equivalent representations of the hydrogen‐atom ground state. That they contain the same information is illustrated below.
The position and momentum operators do not commute in momentum space.
$\frac{\mathrm{i} \cdot \frac{\mathrm{d}}{\mathrm{dp}}(\mathrm{p} \cdot \Phi(\mathrm{p}))-\mathrm{p} \cdot \mathrm{i} \cdot \frac{\mathrm{d}}{\mathrm{dp}} \Phi(\mathrm{p})}{\Phi(\mathrm{p})} \text { simplify } \rightarrow \frac{-(\mathrm{p}-\mathrm{i})}{\mathrm{i} \cdot \mathrm{p}+1} \nonumber$
It is easy to show that this result is equal to i.
The product of the position‐momentum uncertainty is the same in momentum space as it is in coordinate space.
$\Delta \mathrm{p} :=\sqrt{\int_{-\infty}^{\infty} \mathrm{p}^{2} \cdot(|\Phi(\mathrm{p})|)^{2} \mathrm{d} \mathrm{p}-\left[\int_{-\infty}^{\infty} \mathrm{p} \cdot(|\Phi(\mathrm{p})|)^{2} \mathrm{d} \mathrm{p}\right]^{2}} \rightarrow 1 \nonumber$
$\Delta x :=\sqrt{\int_{-\infty}^{\infty} \overline{\Phi(p)} \cdot -\frac{d^{2}}{d p^{2}} \Phi(p) d p-\left[\int_{-\infty}^{\infty} \overline{\Phi(p)} \cdot i \cdot\left(\frac{d}{d p} \Phi(p)\right) d p \right]^{2}} \rightarrow \frac{1}{2} \cdot 3^{\frac{1}{2}} \nonumber$
$\Delta \mathrm{x} \cdot \Delta \mathrm{p}=0.866 \nonumber$
1.32: Simulating the Aharonov-Bohm Effect
The Aharonov–Bohm effect is a phenomenon by which an electron is affected by the vector potential, A, in regions in which both the magnetic field B, and electric field E are zero. The most commonly described case occurs when the wave function of an electron passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being zero in the region through which the particle passes.
Schematic of the double-slit experiment in which the Aharonov–Bohm effect can be observed: electrons pass through two slits, interfering at an observation screen, with the interference pattern shifted when a magnetic field B is turned on in the cylindrical solenoid. (All of the above adapted from Wikipedia.)
The effect on the interference fringes is calculated and displayed below. Please consult other tutorials on the double-slit interference effect on my page for background information.
Slit positions: $x_{L} : = 1 \quad x_{R} : = 2$ Slit width: $\delta : = 0.2$ Relative phase shift: $\phi : = \pi$
Momentum Distribution/Diffraction Pattern for B = 0:
$\Psi(p) :=\frac{1}{\sqrt{2}}\left(\int_{x_{L}-\frac{\delta}{2}}^{x_{L}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x+\int_{x_{R}-\frac{\delta}{2}}^{x_{R}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x\right) \nonumber$
Relative phase shift, $\phi$, introduced at right-hand slit for B not equal to zero:
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2}}\left(\int_{\mathrm{x}_{\mathrm{L}}-\frac{\delta}{2}}^{\mathrm{x}_{\mathrm{L}}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}+\exp (\mathrm{i} \cdot \phi) \cdot \int_{\mathrm{x}_{\mathrm{R}}-\frac{\delta}{2}}^{\mathrm{x}_{\mathrm{R}}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \right) \nonumber$
Display both diffraction patterns:
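A Python sketch of the two patterns is given below (numpy/matplotlib in place of the original Mathcad; the single-slit integrals are evaluated analytically as sinc functions):

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

xL, xR, delta, phase = 1.0, 2.0, 0.2, np.pi        # slit positions, width, AB phase

def slit(p, x0):
    # Fourier transform of one slit of width delta centered at x0
    return np.exp(-1j * p * x0) * np.sqrt(delta / (2 * np.pi)) * np.sinc(p * delta / (2 * np.pi))

p = np.linspace(-40, 40, 1001)
no_field   = np.abs((slit(p, xL) + slit(p, xR)) / np.sqrt(2))**2
with_field = np.abs((slit(p, xL) + np.exp(1j * phase) * slit(p, xR)) / np.sqrt(2))**2

plt.plot(p, no_field, label="B = 0")
plt.plot(p, with_field, label=r"B $\neq$ 0, $\phi = \pi$")
plt.xlabel("p")
plt.legend()
plt.show()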
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.29%3A_Single_Slit_Diffraction_and_the_Fourier_Transform.txt
|
A quon (an entity that exhibits both wave and particle aspects in the peculiar quantum manner - Nick Herbert, Quantum Reality, page 64) has a variety of properties each of which can take on two values. For example, it has the property of hardness and can be either hard or soft. It also has the property of color and can be either black or white, and the property of taste and be sweet or sour. The treatment that follows draws on material from Chapter 3 of David Z Albert's book, Quantum Mechanics and Experience.
The basic principles of matrix and vector math are provided in Appendix A. An examination of this material will demonstrate that most of the calculations presented in this tutorial can easily be performed without the aid of Mathcad or any other computer algebra program. In other words, they can be done by hand.
In the matrix formulation of quantum mechanics the hardness and color states are represented by the following vectors.
$\text{Hard}:=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad \text { Soft } :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \quad \text{Black}:=\left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right) \quad \text { White } :=\left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right) \nonumber$
Hard and Soft represent an orthonormal basis in the two-dimensional Hardness vector space.

$\begin{matrix} \text{Hard}^{T} \cdot \text{Hard} = 1 & \text{Soft}^{T} \cdot \text{Soft} = 1 & \text{Hard}^{T} \cdot \text{Soft} = 0 \ \left( \begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=1 & \left( \begin{array}{ll}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=1 & \left( \begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=0 \end{matrix} \nonumber$
Likewise Black and White are an orthonormal basis in the two-dimensional Color vector space.
$\begin{matrix} \text{Black}^{T} \cdot \text{Black} = 1 & \text{White}^{T} \cdot \text{White} = 1 & \text{Black}^{T} \cdot \text{White} = 0 \ \left(\begin{array}{c}{\dfrac{1}{\sqrt{2}}} & {\dfrac{1}{\sqrt{2}}}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=1 & \left(\begin{array}{c}{\dfrac{1}{\sqrt{2}}} & {\dfrac{-1}{\sqrt{2}}}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=1 & \left(\begin{array}{c}{\dfrac{1}{\sqrt{2}}} & {\dfrac{1}{\sqrt{2}}}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=0 \end{matrix} \nonumber$
The relationship between the two bases is reflected in the following projection calculations.
Note
$\dfrac{1}{\sqrt{2}}=0.707 \nonumber$
$\begin{matrix} \text{Hard}^{T} \cdot \text{Black} = 0.707 & \text{Hard}^{T} \cdot \text{White} = 0.707 & \text{Soft}^{T} \cdot \text{Black} = 0.707 & \text{Soft}^{T} \cdot \text{White} = -0.707 \ \left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=0.707 & \left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=0.707 & \left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=0.707 & \left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=-0.707 \end{matrix} \nonumber$
The values calculated above are probability amplitudes. The absolute square of those values is the probability. In other words, the probability that a black quon will be found to be hard is 0.5. The probability that a white quon will be found to be soft is also 0.5.
$\begin{matrix} \left(\left|\text{Hard}^{T} \cdot \text{Black} \right| \right)^{2} = 0.5 & \left(\left|\text{Hard}^{T} \cdot \text{White}\right| \right)^{2} = 0.5 & \left(\left|\text{Soft}^{T} \cdot \text{Black}\right| \right)^{2} = 0.5 & \left(\left|\text{Soft}^{T} \cdot \text{White}\right| \right)^{2} = 0.5 \ \left[\left|\left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5& \left[\left|\left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5 & \left[\left|\left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5& \left[\left|\left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5\end{matrix} \nonumber$
Clearly Black and White can be written as superpositions of Hard and Soft, and vice versa. This means hard and soft quons do not have a well-defined color, and black and white quons do not have a well-defined hardness.
$\dfrac{1}{\sqrt{2}} \cdot(\mathrm{Hard}+\mathrm{Soft})=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$
$\dfrac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right)\right]=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$
$\dfrac{1}{\sqrt{2}} \cdot(\mathrm{Hard}-\mathrm{Soft})=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$
$\dfrac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right)\right]=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$
$\dfrac{1}{\sqrt{2}} \cdot(\text { Black }+\text { White })=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\dfrac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)+\left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)\right]=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\dfrac{1}{\sqrt{2}} \cdot(\text { Black }-\text { White })=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
$\dfrac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)-\left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)\right]=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
Hard, Soft, Black and White are measurable properties and the vectors representing them are eigenstates of the Hardness and Color operators with eigenvalues $\pm$ 1. The Identity operator is also given and will be discussed later. Of course, the Hardness and Color operators are just the Pauli spin operators in the z- and x-directions. Later the Taste operator will be introduced; it is the y-direction Pauli spin operator.
Operators
$\text{Hardness}:=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)$ $\text{Color}:=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)$ $\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)$
Eigenvalue +1 Eigenvalue -1
$\text{Hardness}\cdot\text{Hard}=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\text{Hardness}\cdot\text{Soft}=\left( \begin{array}{l}{0} \ {-1}\end{array}\right)$
$\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {-1}\end{array}\right)$
$\text{Color}\cdot\text{Black}=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\text{Color}\cdot\text{White}=\left( \begin{array}{l}{-0.707} \ {0.707}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{c}{-0.707} \ {0.707}\end{array}\right)$
Another way of showing this is by calculating the expectation (or average) value. Every time the hardness of a hard quon is measured the result is +1. Every time the hardness of a soft quon is measured the result is -1.
$\text{Hard}^{T}\cdot\text{Hardness}\cdot\text{Hard}=1$ $\left(\begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=1$ $\text{Soft}^{T}\cdot\text{Hardness}\cdot\text{Soft}=-1$ $(0 \quad 1) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=-1$
$\text{Black}^{T}\cdot\text{Color}\cdot\text{Black}=1$ $\left(\dfrac{1}{\sqrt{2}} \quad \dfrac{1}{\sqrt{2}}\right) \cdot \left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=1$ $\text{White}^{T}\cdot\text{Color}\cdot\text{White}=-1$ $\left(\dfrac{1}{\sqrt{2}} \quad \dfrac{-1}{\sqrt{2}}\right) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=-1$
If a quon is in a state which is an eigenfunction of an operator, it means it has a well-defined value for the observable represented by the operator. If the quon is in a state which is not an eigenfunction of the operator, it does not have a well-defined value for the observable.
Hard and Soft are not eigenfunctions of the Color operator, and Black and White are not eigenfunctions of the Hardness operator. Hard and soft quons do not have a well-defined color, and black and white quons do not have a well-defined hardness.
$\text{Hardness} \cdot \text{Black}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$ $\text{Hardness} \cdot \text{White}=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$
$\text{Color} \cdot \text{Hard}=\left( \begin{array}{c}{0} \ {1}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$ $\text{Color} \cdot \text{Soft}=\left( \begin{array}{c}{1} \ {0}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
Therefore their expectation values are zero. In other words if the hardness of a black quon is measured, half the time it will register hard and half the time soft. If the color of a soft quon is measured, half the time it will register white and half the time black.
$\text{Black}^{T}\cdot\text{Hardness}\cdot\text{Black}=0$ $\left(\dfrac{1}{\sqrt{2}} \dfrac{1}{\sqrt{2}}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{1}{\sqrt{2}}}\end{array}\right)=0$ $\text{White}^{T}\cdot\text{Hardness}\cdot\text{White}=0$ $\left(\dfrac{1}{\sqrt{2}} \dfrac{-1}{\sqrt{2}}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\dfrac{1}{\sqrt{2}}} \ {\dfrac{-1}{\sqrt{2}}}\end{array}\right)=0$
$\text{Hard}^{T}\cdot\text{Color}\cdot\text{Hard}=0$ $\left( \begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=0$ $\text{Soft}^{T}\cdot\text{Color}\cdot\text{Soft}=0$ $(0 \quad 1) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=0$
As the Hardness-Color commutator shows, the Hardness and Color operators do not commute. They represent incompatible observables; observables that cannot simultaneously have well-defined values.
$\text{Hardness} \cdot \text{Color} - \text{Color} \cdot \text{Hardness} = \left( \begin{array}{cc}{0} & {2} \ {-2} & {0}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right)-\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)=\left( \begin{array}{cc}{0} & {2} \ {-2} & {0}\end{array}\right)$
This means that the measurement of the color and then the hardness of a hard quon gives a different result than the measurement of the hardness and then the color.
$\text{Hardness} \cdot \text{Color} \cdot \text{Hard} = \left( \begin{array}{c}{0} \ {-1}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {-1}\end{array}\right)$ $\text{Color} \cdot \text{Hardness} \cdot \text{Hard} = \left( \begin{array}{c}{0} \ {1}\end{array}\right)$ $\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {1}\end{array}\right)$
We can also look at this from the perspective of the uncertainty principle. The uncertainty in a measurement is the square root of the difference between the mean of the square and the square of the mean.
Suppose we measure the color of a Black or White quon. Because Black and White are eigenfunctions of the Color operator the uncertainty in the measurement results are zero.
$\sqrt{\text{Black}^{T} \cdot \text{Color}^{2} \cdot \text{Black} - (\text{Black}^{T} \cdot \text{Color} \cdot \text{Black})^{2}} = 0 \qquad \sqrt{\text{White}^{T} \cdot \text{Color}^{2} \cdot \text{White} - (\text{White}^{T} \cdot \text{Color} \cdot \text{White})^{2}} = 0 \nonumber$
However, the measurement of the color of a Soft or Hard quon is by the same criterion uncertain.
$\sqrt{\text{Soft}^{T} \cdot \text{Color}^{2} \cdot \text{Soft} - (\text{Soft}^{T} \cdot \text{Color} \cdot \text{Soft})^{2}} = 1 \qquad \sqrt{\text{Hard}^{T} \cdot \text{Color}^{2} \cdot \text{Hard} - (\text{Hard}^{T} \cdot \text{Color} \cdot \text{Hard})^{2}} = 1 \nonumber$
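These matrix calculations are easily reproduced in Python with numpy (a sketch standing in for the Mathcad of the original):

import numpy as np

Hardness = np.array([[1, 0], [0, -1]])
Color    = np.array([[0, 1], [1, 0]])
Hard, Soft   = np.array([1, 0]), np.array([0, 1])
Black, White = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)

print(Hardness @ Color - Color @ Hardness)   # [[0, 2], [-2, 0]]: the operators do not commute

print(Black @ Hardness @ Black)              # ~0: a black quon has no definite hardness
print(Hard @ Color @ Hard)                   # 0: a hard quon has no definite color

def uncertainty(state, operator):
    # sqrt(<O^2> - <O>^2); abs() guards against tiny negative round-off
    var = state @ operator @ operator @ state - (state @ operator @ state)**2
    return np.sqrt(abs(var))

print(uncertainty(Black, Color))             # ~0: Black is a Color eigenstate
print(uncertainty(Hard, Color))              # 1.0: the color of a hard quon is uncertain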
The calculations of Hardness and Color reveal the strange behavior of quons. In the macro world we frequently find objects that simultaneously have well-defined values for these physical attributes. But we see this is not possible in the quantum world.
Mathcad has high-level commands which find the eigenvalues and eigenvectors of matrices which in quantum mechanics are operators. Below it is shown that they give the same results as were demonstrated above. See the Appendix for additional computational methods.
$\text{eigenvals}(\text{Hardness}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvec}(\text{Hardness}, -1) = \left( \begin{array}{c}{0} \ {1}\end{array}\right)$ $\text{eigenvec}(\text{Hardness}, 1) = \left( \begin{array}{c}{1} \ {0}\end{array}\right)$
$\text{eigenvals}(\text{Color}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvec}(\text{Color}, -1) = \left( \begin{array}{c}{-0.707} \ {0.707}\end{array}\right)$ $\text{eigenvec}(\text{Color}, 1) = \left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$
Besides the properties of hardness and color, suppose the quon also has the property of taste, tasting either Sweet or Sour. The Taste operator is defined below and its eigenvalues and eigenvectors calculated.
Operator Eigenvalues Sweet/Sour Eigenvectors
$\text{Taste} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)$ $\text{eigenvals}(\text{Taste}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvecs}(\text{Taste}) =\left( \begin{array}{cc}{-0.707 \mathrm{i}} & {0.707} \ {0.707} & {-0.707 \mathrm{i}}\end{array}\right)$
Squaring the Hardness, Color and Taste operators gives the Identity operator; each operator is its own inverse and, being Hermitian (as shown below), unitary. The Identity operator leaves the vector it operates on unchanged.
$\text{Hardness}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
$\text{Color}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
$\text{Taste}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Another important property of these operators is that they are equal to their Hermitian conjugate as shown below. The physical significance of this is that they have real eigenvalues, something we know from earlier calculations.
$\overline{\text{Hardness}}^{T}=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \qquad \left[\overline{\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) }\right]^{T}=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \nonumber$
$\overline{\text{Color}}^{T}=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \qquad \left[\overline{\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)}\right]^{T} =\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \nonumber$
$\overline{\text{Taste}}^{T}=\left( \begin{array}{cc}{0} & {-i} \ {i} & {0}\end{array}\right) \qquad \left[\overline{\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)}\right]^{T} =\left( \begin{array}{cc}{0} & {-i} \ {i} & {0}\end{array}\right) \nonumber$
The Hadamard matrix is another operator which is important in quantum optics and quantum computing.
$\text{Hadamard}:=\dfrac{1}{\sqrt{2}} \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \nonumber$
The Hadamard matrix performs a Fourier transform between the Hardness and Color basis vectors.
$\text{Hadamard} \cdot \text{Hard} = \text{Black}$ $\text{Hadamard} \cdot \text{Hard} = \left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\text{Hadamard} \cdot \text{Black} = \text{Hard}$ $\text{Hadamard} \cdot \text{Black} = \left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\text{Hadamard} \cdot \text{Soft} = \text{White}$ $\text{Hadamard} \cdot \text{Soft} = \left( \begin{array}{l}{0.707} \ {-0.707}\end{array}\right)$ $\text{Hadamard} \cdot \text{White} = \text{Soft}$ $\text{Hadamard} \cdot \text{White} = \left( \begin{array}{l}{0} \ {1}\end{array}\right)$
The eigenvalues and eigenvectors of the Hadamard matrix:
$\text{eigenvals}(\text{Hadamard}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvec}(\text{Hadamard}, 1) = \left( \begin{array}{c}{0.924} \ {0.383}\end{array}\right)$ $\text{eigenvec}(\text{Hadamard}, -1) = \left( \begin{array}{c}{-0.383} \ {0.924}\end{array}\right)$
The Hadamard matrix is also unitary and its own Hermitian conjugate like the other matrices.
$\text{Hadamard}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \overline{\text{Hadamard}}^{T}=\left( \begin{array}{cc}{0.707} & {0.707} \ {0.707} & {-0.707}\end{array}\right) \nonumber$
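A quick numpy check (a sketch, not part of the original worksheet) confirms these properties of the Hadamard matrix:

import numpy as np

Hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hard, Black = np.array([1, 0]), np.array([1, 1]) / np.sqrt(2)

print(np.allclose(Hadamard @ Hard, Black))          # True: Hadamard maps Hard to Black
print(np.allclose(Hadamard @ Black, Hard))          # True: and Black back to Hard
print(np.allclose(Hadamard @ Hadamard, np.eye(2)))  # True: Hadamard squared is the identity

vals, vecs = np.linalg.eigh(Hadamard)               # eigenvalues and eigenvectors
print(vals)                                         # [-1.  1.]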
In addition to performing a Fourier transform between the Hardness and Color basis vectors, it has been reported in the Journal of Olfactory Science that the Hadamard matrix is the operator representing the property of Odor. Its eigenstates, shown above, are Pleasant and Foul, with eigenvalues +1 and -1, respectively. It is left to the interested reader to return to the beginning of this tutorial to explore the quantum relationship of Odor to Hardness, Color and Taste.
Concluding Remarks
The reason for using the properties of hardness, color and taste in these exercises is to emphasize how different the quantum world is from the macro world that we occupy. It is not an uncommon experience (it has happened to me) to eat a piece of candy that is hard, white and sweet. But this is not possible for quantum candy because the matrix operators representing these observables do not commute. Therefore, the observables cannot simultaneously be well defined.
In quantum mechanics these operators,
$\text{Hardness}:=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \text { Color } :=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \text { Taste } :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \nonumber$
are actually the Pauli spin matrices and represent the observables for spin in the z-, x- and y-directions as mentioned earlier.
$\sigma_{z}=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \sigma_{\mathrm{x}}=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \sigma_{\mathrm{y}} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \nonumber$
They are also the operators for the rectilinear, diagonal and circular polarization properties of photons. In this case the eigenvectors are vertical, horizontal, diagonal, anti-diagonal, and right and left circular polarization.
$\mathrm{V} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \mathrm{H} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \ \mathrm{D} :=\dfrac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {1}\end{array}\right) \qquad \mathrm{A} :=\dfrac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-1}\end{array}\right) \ \mathrm{R} :=\dfrac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {\mathrm{i}}\end{array}\right) \qquad \mathrm{L} :=\dfrac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-\mathrm{i}}\end{array}\right) \nonumber$
Appendix: Vector and Matrix Math
Vector inner product:
$(a b) \cdot \left( \begin{array}{l}{c} \ {d}\end{array}\right) \rightarrow a \cdot c+b \cdot d \nonumber$
Vector outer product:
$\left( \begin{array}{l}{c} \ {d}\end{array}\right) \cdot(a \quad b) \rightarrow \left( \begin{array}{ll}{a \cdot c} & {b \cdot c} \ {a \cdot d} & {b \cdot d}\end{array}\right) \nonumber$
$\operatorname{tr}\left[\left( \begin{array}{l}{c} \ {d}\end{array}\right) \cdot(a b)\right] \rightarrow a \cdot c+b \cdot d \nonumber$
Matrix-vector product:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right) \rightarrow \left( \begin{array}{l}{a \cdot x+b \cdot y} \ {c \cdot x+d \cdot y}\end{array}\right) \nonumber$
$( x, y) \cdot \left( \begin{array}{ll}{ a} & { b} \ { c} & { d}\end{array}\right)^{ T} \rightarrow( a \cdot x+ b \cdot y \quad c \cdot x+ d \cdot y) \nonumber$
Expectation value:
$( x \quad y) \cdot \left( \begin{array}{ll}{ a} & { b} \ { c} & { d}\end{array}\right) \cdot \left( \begin{array}{l}{ x} \ { y}\end{array}\right) \text { simplify } \rightarrow a \cdot x^{2}+ d \cdot y^{2}+ b \cdot x \cdot y+ c \cdot x \cdot y \nonumber$
$( x, y) \cdot \left( \begin{array}{cc}{ a} & { b} \ { c} & { d}\end{array}\right)^{ T} \cdot \left( \begin{array}{l}{ x} \ { y}\end{array}\right) \text { simplify } \rightarrow a \cdot x^{2}+ d \cdot y^{2}+ b \cdot x \cdot y+ c \cdot x \cdot y \nonumber$
$\operatorname{tr}\left[\left( \begin{array}{l}{x} \ {y}\end{array}\right) \cdot \left( \begin{array}{ll}{x} & {y}\end{array}\right) \cdot \left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right)\right] \rightarrow a \cdot x^{2}+d \cdot y^{2}+b \cdot x \cdot y+c \cdot x \cdot y \nonumber$
$\operatorname{tr}\left[\left( \begin{array}{cc}{a} & {b} \ {c} & {d}\end{array}\right) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right) \cdot \left( \begin{array}{ll}{x} & {y}\end{array}\right)\right] \text { simplify } \rightarrow a \cdot x^{2}+d \cdot y^{2}+b \cdot x \cdot y+c \cdot x \cdot y \nonumber$
Matrix product:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \cdot \left( \begin{array}{lc}{w} & {x} \ {y} & {z}\end{array}\right) \rightarrow \left( \begin{array}{l}{a \cdot w+b \cdot y} & {a \cdot x+b \cdot z} \ {c \cdot w+d \cdot y} & {c \cdot x+d \cdot z}\end{array}\right) \nonumber$
Vector tensor product:
$\left( \begin{array}{l}{a} \ {b}\end{array}\right) \otimes \left( \begin{array}{l}{c} \ {d}\end{array}\right)=\left( \begin{array}{c}{a \left( \begin{array}{c}{c} \ {d}\end{array}\right)} \ {b \left( \begin{array}{l}{c} \ {d}\end{array}\right)}\end{array}\right)=\left( \begin{array}{c}{a c} \ {a d} \ {b c} \ {b d}\end{array}\right) \nonumber$
Matrix tensor product:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \otimes \left( \begin{array}{ll}{w} & {x} \ {y} & {z}\end{array}\right) =\left( \begin{array}{ll}{a \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} & {b \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} \ {c \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} & {d \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)}\end{array}\right) =\left( \begin{array}{llll}{a w} & {a x} & {b w} & {b x} \ {a y} & {a z} & {b y} & {b z} \ {c w} & {c x} & {d w} & {d x} \ {c y} & {c z} & {d y} & {d z}\end{array}\right) \nonumber$
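In Python these tensor products are computed with np.kron; the numerical values below are purely illustrative and are not taken from the original worksheet:

import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])
M = np.array([[1, 2], [3, 4]])
N = np.array([[0, 1], [1, 0]])

print(np.kron(a, b))      # [3 4 6 8]: (a1*b, a2*b) stacked into one vector
print(np.kron(M, N))      # 4x4 block matrix with blocks M[i, j]*N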
Matrix eigenvalues and eigenvectors (unnormalized):
$\text{eigenvals}\left(\left( \begin{array}{cc}{a} & {b} \ {b} & {a}\end{array}\right)\right) \rightarrow \left( \begin{array}{l}{a-b} \ {a+b}\end{array}\right) \nonumber$
or
$\left|\left( \begin{array}{cc}{a-\lambda} & {b} \ {b} & {a-\lambda}\end{array}\right)\right|=0 \text { solve }, \lambda \rightarrow \left( \begin{array}{c}{a+b} \ {a-b}\end{array}\right) \nonumber$
or
$\left( \begin{array}{cc}{-1} & {1} \ {1} & {1}\end{array}\right)^{-1} \left( \begin{array}{cc}{a} & {b} \ {b} & {a}\end{array}\right) \left( \begin{array}{cc}{-1} & {1} \ {1} & {1}\end{array}\right) \rightarrow \left( \begin{array}{cc}{a-b} & {0} \ {0} & {a+b}\end{array}\right) \nonumber$
using
$\text{eigenvecs}\left(\left( \begin{array}{cc}{a} & {b} \ {b} & {a}\end{array}\right)\right) \rightarrow \left( \begin{array}{cc}{-1} & {1} \ {1} & {1}\end{array}\right) \nonumber$
$\left( \begin{array}{ll}{a} & {b} \ {b} & {a}\end{array}\right) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right)=(a-b) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right) \; \text{solve,} y \rightarrow-x \qquad \left( \begin{array}{l}{x} \ {y}\end{array}\right)=\left( \begin{array}{c}{-1} \ {1}\end{array}\right) \nonumber$
$\left( \begin{array}{ll}{\mathrm{a}} & {\mathrm{b}} \ {\mathrm{b}} & {\mathrm{a}}\end{array}\right) \cdot \left( \begin{array}{l}{\mathrm{x}} \ {\mathrm{y}}\end{array}\right)=(\mathrm{a}+\mathrm{b}) \cdot \left( \begin{array}{l}{\mathrm{x}} \ {\mathrm{y}}\end{array}\right) \text { solve, } \mathrm{y} \rightarrow \mathrm{x} \qquad \left( \begin{array}{l}{\mathrm{x}} \ {\mathrm{y}}\end{array}\right)=\left( \begin{array}{l}{1} \ {1}\end{array}\right) \nonumber$
Completeness relations:
$\text{Black} \cdot \text{Black}^{T} + \text{White} \cdot \text{White}^{T} =\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
$\text{Hard} \cdot \text{Hard}^{T} + \text{Soft} \cdot \text{Soft}^{T} =\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.33%3A_Basic_Matrix_Mechanics.txt
|
A quon (an entity that exhibits both wave and particle aspects in the peculiar quantum manner - Nick Herbert, Quantum Reality, page 64) has a variety of properties each of which can take on two values. For example, it has the property of hardness and can be either hard or soft. It also has the property of color and can be either black or white, and the property of taste and be sweet or sour. The treatment that follows draws on material from Chapter 3 of David Z Albert's book, Quantum Mechanics and Experience.
The basic principles of matrix and vector math are provided in Appendix A. An examination of this material will demonstrate that most of the calculations presented in this tutorial can easily be performed without the aid of Mathcad or any other computer algebra program. In other words, they can be done by hand.
In the matrix formulation of quantum mechanics the hardness and color states are represented by the following vectors.
$\text{Hard}:=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \quad \text { Soft } :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \quad \text{Black}:=\left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right) \quad \text { White } :=\left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right) \nonumber$
Hard and Soft represent an orthonormal basis in the two-dimensional Hardness vector space.
$\begin{matrix} \text{Hard}^{T} \cdot \text{Hard} = 1 & \text{Soft}^{T} \cdot \text{Soft} = 1 & \text{Hard}^{T} \cdot \text{Soft} = 0 \ \left( \begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=1 & \left( \begin{array}{ll}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=1 & \left( \begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=0 \end{matrix} \nonumber$
Likewise Black and White are an orthonormal basis in the two-dimensional Color vector space.
$\begin{matrix} \text{Black}^{T} \cdot \text{Black} = 1 & \text{White}^{T} \cdot \text{White} = 1 & \text{Black}^{T} \cdot \text{White} = 0 \ \left(\begin{array}{c}{\frac{1}{\sqrt{2}}} & {\frac{1}{\sqrt{2}}}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=1 & \left(\begin{array}{c}{\frac{1}{\sqrt{2}}} & {\frac{-1}{\sqrt{2}}}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=1 & \left(\begin{array}{c}{\frac{1}{\sqrt{2}}} & {\frac{1}{\sqrt{2}}}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=0 \end{matrix} \nonumber$
The relationship between the two bases is reflected in the following projection calculations.
Note
$\frac{1}{\sqrt{2}}=0.707 \nonumber$
$\begin{matrix} \text{Hard}^{T} \cdot \text{Black} = 0.707 & \text{Hard}^{T} \cdot \text{White} = 0.707 & \text{Soft}^{T} \cdot \text{Black} = 0.707 & \text{Soft}^{T} \cdot \text{White} = -0.707 \ \left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=0.707 & \left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=0.707 & \left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=0.707 & \left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=-0.707 \end{matrix} \nonumber$
The values calculated above are probability amplitudes. The absolute square of those values is the probability. In other words, the probability that a black quon will be found to be hard is 0.5. The probability that a white quon will be found to be soft is also 0.5.
$\begin{matrix} \left(\left|\text{Hard}^{T} \cdot \text{Black} \right| \right)^{2} = 0.5 & \left(\left|\text{Hard}^{T} \cdot \text{White}\right| \right)^{2} = 0.5 & \left(\left|\text{Soft}^{T} \cdot \text{Black}\right| \right)^{2} = 0.5 & \left(\left|\text{Soft}^{T} \cdot \text{White}\right| \right)^{2} = 0.5 \ \left[\left|\left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5& \left[\left|\left(\begin{array}{l}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5 & \left[\left|\left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5& \left[\left|\left(\begin{array}{l}{0} & {1}\end{array}\right) \cdot \left( \begin{array}{l}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)\right|\right] ^{2}=0.5\end{matrix} \nonumber$
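For readers who want to check these numbers outside of Mathcad, the following NumPy sketch is an added illustration (it is not part of the original worksheet; only the availability of NumPy is assumed, and the variable names simply mirror the vectors defined above).

```python
import numpy as np

# Basis vectors defined in the tutorial
Hard  = np.array([1, 0])
Soft  = np.array([0, 1])
Black = np.array([1, 1]) / np.sqrt(2)
White = np.array([1, -1]) / np.sqrt(2)

# Probability amplitudes: projections of one basis onto the other
print(Hard @ Black)             # 0.707...
print(Soft @ White)             # -0.707...

# Probabilities are the absolute squares of the amplitudes
print(abs(Hard @ Black) ** 2)   # 0.5
print(abs(Soft @ White) ** 2)   # 0.5
```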
Clearly Black and White can be written as superpositions of Hard and Soft, and vice versa. This means hard and soft quons do not have a well-defined color, and black and white quons do not have a well-defined hardness.
$\frac{1}{\sqrt{2}} \cdot(\mathrm{Hard}+\mathrm{Soft})=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$
$\frac{1}{\sqrt{2}} \cdot\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)+\left( \begin{array}{l}{0} \ {1}\end{array}\right)\right]=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$
$\frac{1}{\sqrt{2}} \cdot(\mathrm{Hard}-\mathrm{Soft})=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$
$\frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right)\right]=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$
$\frac{1}{\sqrt{2}} \cdot(\text { Black }+\text { White })=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)+\left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)\right]=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\frac{1}{\sqrt{2}} \cdot(\text { Black }-\text { White })=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
$\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)-\left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)\right]=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
Hard, Soft, Black and White are measurable properties and the vectors representing them are eigenstates of the Hardness and Color operators with eigenvalues $\pm$ 1. The Identity operator is also given and will be discussed later. Of course, the Hardness and Color operators are just the Pauli spin operators in the z- and x-directions. Later the Taste operator will be introduced; it is the y-direction Pauli spin operator.
Operators
$\text{Hardness}:=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)$ $\text{Color}:=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)$ $\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)$
Eigenvalue +1 Eigenvalue -1
$\text{Hardness}\cdot\text{Hard}=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\text{Hardness}\cdot\text{Soft}=\left( \begin{array}{l}{0} \ {-1}\end{array}\right)$
$\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {-1}\end{array}\right)$
$\text{Color}\cdot\text{Black}=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\text{Color}\cdot\text{White}=\left( \begin{array}{l}{-0.707} \ {0.707}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{c}{-0.707} \ {0.707}\end{array}\right)$
Another way of showing this is by calculating the expectation (or average) value. Every time the hardness of a hard quon is measured the result is +1. Every time the hardness of a soft quon is measured the result is -1. Likewise, every measurement of the color of a black quon gives +1, and every measurement of the color of a white quon gives -1.
$\text{Hard}^{T}\cdot\text{Hardness}\cdot\text{Hard}=1$ $(1 \quad 0) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=1$ $\text{Soft}^{T}\cdot\text{Hardness}\cdot\text{Soft}=-1$ $(0 \quad 1) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=-1$
$\text{Black}^{T}\cdot\text{Color}\cdot\text{Black}=1$ $\left(\frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}}\right) \cdot \left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=1$ $\text{White}^{T}\cdot\text{Color}\cdot\text{White}=-1$ $\left(\frac{1}{\sqrt{2}} \frac{-1}{\sqrt{2}}\right) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=-1$
If a quon is in a state which is an eigenfunction of an operator, it means it has a well-defined value for the observable represented by the operator. If the quon is in a state which is not an eigenfunction of the operator, it does not have a well-defined value for the observable.
Hard and Soft are not eigenfunctions of the Color operator, and Black and White are not eigenfunctions of the Hardness operator. Hard and soft quons do not have a well-defined color, and black and white quons do not have a well-defined hardness.
$\text{Hardness} \cdot \text{Black}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$ $\text{Hardness} \cdot \text{White}=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$
$\text{Color} \cdot \text{Hard}=\left( \begin{array}{c}{0} \ {1}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$ $\text{Color} \cdot \text{Soft}=\left( \begin{array}{c}{1} \ {0}\end{array}\right)$ $\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
Therefore their expectation values are zero. In other words if the hardness of a black quon is measured, half the time it will register hard and half the time soft. If the color of a soft quon is measured, half the time it will register white and half the time black.
$\text{Black}^{T}\cdot\text{Hardness}\cdot\text{Black}=0$ $\left(\frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{1}{\sqrt{2}}}\end{array}\right)=0$ $\text{White}^{T}\cdot\text{Hardness}\cdot\text{White}=0$ $\left(\frac{1}{\sqrt{2}} \frac{-1}{\sqrt{2}}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{c}{\frac{1}{\sqrt{2}}} \ {\frac{-1}{\sqrt{2}}}\end{array}\right)=0$
$\text{Hard}^{T}\cdot\text{Color}\cdot\text{Hard}=0$ $\left( \begin{array}{ll}{1} & {0}\end{array}\right) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=0$ $\text{Soft}^{T}\cdot\text{Color}\cdot\text{Soft}=0$ $(0 \quad 1) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=0$
As the Hardness-Color commutator shows, the Hardness and Color operators do not commute. They represent incompatible observables; observables that cannot simultaneously have well-defined values.
$\text{Hardness} \cdot \text{Color} - \text{Color} \cdot \text{Hardness} = \left( \begin{array}{cc}{0} & {2} \ {-2} & {0}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right)-\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)=\left( \begin{array}{cc}{0} & {2} \ {-2} & {0}\end{array}\right)$
This means that the measurement of the color and then the hardness of a hard quon gives a different result than the measurement of the hardness and then the color.
$\text{Hardness} \cdot \text{Color} \cdot \text{Hard} = \left( \begin{array}{c}{0} \ {-1}\end{array}\right)$ $\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {-1}\end{array}\right)$ $\text{Color} \cdot \text{Hardness} \cdot \text{Hard} = \left( \begin{array}{c}{0} \ {1}\end{array}\right)$ $\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{c}{0} \ {1}\end{array}\right)$
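For readers working outside Mathcad, the non-zero commutator and the order dependence just shown can be checked numerically. The NumPy sketch below is an added illustration, not part of the original worksheet; the names simply mirror the operators and state defined above.

```python
import numpy as np

Hardness = np.array([[1, 0], [0, -1]])
Color    = np.array([[0, 1], [1, 0]])
Hard     = np.array([1, 0])

# Non-zero commutator: Hardness and Color are incompatible observables
print(Hardness @ Color - Color @ Hardness)   # [[ 0  2] [-2  0]]

# Operating in different orders on the Hard state gives different results
print(Hardness @ Color @ Hard)   # [ 0 -1]
print(Color @ Hardness @ Hard)   # [0  1]
```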
We can also look at this from the perspective of the uncertainty principle. The uncertainty in a measurement is the square root of the difference between the mean of the square and the square of the mean.
Suppose we measure the color of a Black or White quon. Because Black and White are eigenfunctions of the Color operator, the uncertainty in the measurement results is zero.
$\sqrt{\text{Black}^{T} \cdot \text{Color}^{2} \cdot \text{Black} - (\text{Black}^{T} \cdot \text{Color} \cdot \text{Black})^{2}} = 0 \qquad \sqrt{\text{White}^{T} \cdot \text{Color}^{2} \cdot \text{White} - (\text{White}^{T} \cdot \text{Color} \cdot \text{White})^{2}} = 0 \nonumber$
However, the measurement of the color of a Soft or Hard quon is by the same criterion uncertain.
$\sqrt{\text{Soft}^{T} \cdot \text{Color}^{2} \cdot \text{Soft} - (\text{Soft}^{T} \cdot \text{Color} \cdot \text{Soft})^{2}} = 1 \qquad \sqrt{\text{Hard}^{T} \cdot \text{Color}^{2} \cdot \text{Hard} - (\text{Hard}^{T} \cdot \text{Color} \cdot \text{Hard})^{2}} = 1 \nonumber$
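The same uncertainty expression is easy to evaluate numerically. The sketch below is an added illustration (it assumes NumPy is available; the helper function name `uncertainty` is chosen here and is not part of the tutorial).

```python
import numpy as np

Color = np.array([[0, 1], [1, 0]])
Black = np.array([1, 1]) / np.sqrt(2)
Hard  = np.array([1, 0])

def uncertainty(op, state):
    # sqrt of (mean of the square minus square of the mean)
    mean_of_square = state @ op @ op @ state
    square_of_mean = (state @ op @ state) ** 2
    return np.sqrt(max(mean_of_square - square_of_mean, 0.0))  # clip tiny round-off

print(uncertainty(Color, Black))   # 0.0 : Black is a Color eigenstate
print(uncertainty(Color, Hard))    # 1.0 : Hard has no definite color
```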
The calculations of Hardness and Color reveal the strange behavior of quons. In the macro world we frequently find objects that simultaneously have well-defined values for these physical attributes. But we see this is not possible in the quantum world.
Mathcad has high-level commands that find the eigenvalues and eigenvectors of matrices, which in quantum mechanics represent operators. Below it is shown that they give the same results as were demonstrated above. See the Appendix for additional computational methods.
$\text{eigenvals}(\text{Hardness}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvec}(\text{Hardness}, -1) = \left( \begin{array}{c}{0} \ {1}\end{array}\right)$ $\text{eigenvec}(\text{Hardness}, 1) = \left( \begin{array}{c}{1} \ {0}\end{array}\right)$
$\text{eigenvals}(\text{Color}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvec}(\text{Color}, -1) = \left( \begin{array}{c}{-0.707} \ {0.707}\end{array}\right)$ $\text{eigenvec}(\text{Color}, 1) = \left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right)$
Besides the properties of hardness and color, suppose the quon also has the property of taste, tasting either Sweet or Sour. The Taste operator is defined below and its eigenvalues and eigenvectors calculated.
Operator Eigenvalues Sweet/Sour Eigenvectors
$\text{Taste} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)$ $\text{eigenvals}(\text{Taste}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvecs}(\text{Taste}) =\left( \begin{array}{cc}{-0.707 \mathrm{i}} & {0.707} \ {0.707} & {-0.707 \mathrm{i}}\end{array}\right)$
Squaring the Hardness, Color and Taste operators gives the Identity operator; each operator is its own inverse, and because each is also Hermitian (shown below) it is a unitary matrix. The Identity operator leaves the vector it operates on unchanged.
$\text{Hardness}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
$\text{Color}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \cdot \left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
$\text{Taste}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \cdot \left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Another important property of these operators is that they are equal to their Hermitian conjugate as shown below. The physical significance of this is that they have real eigenvalues, something we know from earlier calculations.
$\overline{\text{Hardness}}^{T}=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \qquad \left[\overline{\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) }\right]^{T}=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \nonumber$
$\overline{\text{Color}}^{T}=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \qquad \left[\overline{\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)}\right]^{T} =\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \nonumber$
$\overline{\text{Taste}}^{T}=\left( \begin{array}{cc}{0} & {-i} \ {i} & {0}\end{array}\right) \qquad \left[\overline{\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)}\right]^{T} =\left( \begin{array}{cc}{0} & {-i} \ {i} & {0}\end{array}\right) \nonumber$
The Hadamard matrix is another operator which is important in quantum optics and quantum computing.
$\text{Hadamard}:=\frac{1}{\sqrt{2}} \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \nonumber$
The Hadamard matrix performs a Fourier transform between the Hardness and Color basis vectors.
$\text{Hadamard} \cdot \text{Hard} = \text{Black}$ $\text{Hadamard} \cdot \text{Hard} = \left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\text{Hadamard} \cdot \text{Black} = \text{Hard}$ $\text{Hadamard} \cdot \text{Black} = \left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\text{Hadamard} \cdot \text{Soft} = \text{White}$ $\text{Hadamard} \cdot \text{Soft} = \left( \begin{array}{l}{0.707} \ {-0.707}\end{array}\right)$ $\text{Hadamard} \cdot \text{White} = \text{Soft}$ $\text{Hadamard} \cdot \text{White} = \left( \begin{array}{l}{0} \ {1}\end{array}\right)$
The eigenvalues and eigenvectors of the Hadamard matrix:
$\text{eigenvals}(\text{Hadamard}) = \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\text{eigenvec}(\text{Hadamard}, 1) = \left( \begin{array}{c}{0.924} \ {0.383}\end{array}\right)$ $\text{eigenvec}(\text{Hadamard}, -1) = \left( \begin{array}{c}{-0.383} \ {0.924}\end{array}\right)$
The Hadamard matrix is also unitary and its own Hermitian conjugate like the other matrices.
$\text{Hadamard}^{2}=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) \qquad \overline{\text{Hadamard}}^{T}=\left( \begin{array}{cc}{0.707} & {0.707} \ {0.707} & {-0.707}\end{array}\right) \nonumber$
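As an added numerical check (not part of the original worksheet, and assuming only NumPy), the Fourier-transform action of the Hadamard matrix and the fact that it is its own inverse can be confirmed directly.

```python
import numpy as np

Hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hard,  Soft  = np.array([1, 0]), np.array([0, 1])
Black, White = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)

print(np.allclose(Hadamard @ Hard,  Black))          # True
print(np.allclose(Hadamard @ Black, Hard))           # True
print(np.allclose(Hadamard @ Soft,  White))          # True
print(np.allclose(Hadamard @ White, Soft))           # True
print(np.allclose(Hadamard @ Hadamard, np.eye(2)))   # True: its own inverse
```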
Composite Systems
$\text{Hardness}:=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)$ $\text{Hard}:=\text { eigenvec(Hardness, } 1 )=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$ $\text{Soft}:=\text { eigenvec(Hardness, } -1 )=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
$\text{Color}:=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)$ $\text{Black}:=\text { eigenvec(Color, } 1 )=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\text{White}:=\text { eigenvec(Color, } -1 )=\left( \begin{array}{l}{-0.707} \ {0.707}\end{array}\right)$
$\text{Taste}:=\left( \begin{array}{ll}{0} & {-i} \ {i} & {0}\end{array}\right)$ $\text{Sweet}:=\text { eigenvec(Taste, } 1 )=\left( \begin{array}{l}{-0.707i} \ {0.707}\end{array}\right)$ $\text{Sour}:=\text { eigenvec(Taste, } -1 )=\left( \begin{array}{l}{0.707i} \ {0.707}\end{array}\right)$
$\text{Odor} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right)$ $\text{P}:=\text { eigenvec(Odor, } 1 )=\left( \begin{array}{l}{0.924} \ {0.383}\end{array}\right)$ $\text{F}:=\text { eigenvec(Odor, } -1 )=\left( \begin{array}{l}{-0.383} \ {0.924}\end{array}\right)$
Quantum mechanics gets even more interesting for composite systems - quantum systems consisting of two or more quons. Suppose two quons are created in the same event and one is hard (H) and the other is soft (S), but of course because of the indistinguishability principle we don't know which is which. Under this circumstance an appropriate state vector is the following entangled superposition. (See Appendix A for vector tensor multiplication.)
$\Psi=\frac{1}{\sqrt{2}}[ |H\rangle_{A} | S \rangle_{B}-| S \rangle_{A} | H \rangle_{B} ]=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)-\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \ \Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
If the hardness of A or the hardness of B is measured the expectation value is 0, because half the time the quon will be found to be hard (+1) and half the time soft (-1). However if the hardness of both quons is measured the joint expectation value is -1, because they are in opposite hardness states. This is perfect anti-correlation. The joint measurements show correlation in spite of the fact that the individual measurements are totally random. This is the "spooky action at a distance" that bothered Einstein. Kronecker is Mathcad's command for matrix tensor multiplication which is illustrated in Appendix A.
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text { Hardness }, \mathrm{I}) \cdot \Psi=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \text { kronecker (I, Hardness) } \Psi=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \text { kronecker(Hardness, Hardness) } \cdot \Psi=-1 \nonumber$
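For a cross-check outside Mathcad, NumPy's `kron` function plays the same role as the kronecker command. The sketch below is an added illustration (NumPy assumed) that reproduces the three expectation values just quoted.

```python
import numpy as np

Hardness = np.array([[1, 0], [0, -1]])
I2  = np.eye(2)
Psi = np.array([0, 1, -1, 0]) / np.sqrt(2)   # the entangled two-quon state

print(Psi @ np.kron(Hardness, I2) @ Psi)         #  0 : quon A alone is random
print(Psi @ np.kron(I2, Hardness) @ Psi)         #  0 : quon B alone is random
print(Psi @ np.kron(Hardness, Hardness) @ Psi)   # -1 : perfect anti-correlation
```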
Now suppose we do color measurements on the same pair of quons. Again we find perfect color anti-correlation between the two quons. Individually the color measurements are randomly black (B) or white (W), but when we measure the color of both quons we always get different colors.
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Color, I} ) \cdot \Psi=0 \nonumber$
$\boldsymbol{\Psi}^{\mathrm{T}} \cdot \mathrm{kronecker}(\mathrm{I}, \text { Color}) \cdot \mathbf{\Psi}=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Color, Color) }\cdot \Psi=-1 \nonumber$
This result can be understood by recalling that black and white are superpositions of hard and soft. Substitution of the appropriate superpositions into the original composite state vector expresses it in the black/white basis and reveals the perfect anti-correlation.
$\Psi=\frac{1}{\sqrt{2}}[ |W\rangle_{A} | B \rangle_{B}-| B \rangle_{A} | W \rangle_{B} ]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-1}\end{array}\right) \otimes \frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right)-\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right) \otimes \frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-1}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
The same thing that is true for black and white is also true for sweet (Sw) and sour (So). The taste measurements are individually random, but collectively perfectly anti-correlated.
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text {Taste }, \mathrm{I}) \cdot \Psi=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\mathrm{I}, \text { Taste}) \cdot \Psi=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text {Taste, Taste}) \cdot \Psi=-1 \nonumber$
Below the original state vector is written in the sweet/sour basis.
$\Psi=\frac{1}{\sqrt{2}}[ |S o\rangle_{A} | S w \rangle_{B}-| S w \rangle_{A} | S o \rangle_{B} ]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-i}\end{array}\right) \otimes \frac{1}{\sqrt{2}} \left( \begin{array}{c}{-i} \ {1}\end{array}\right)-\frac{1}{\sqrt{2}} \left( \begin{array}{c}{-i} \ {1}\end{array}\right) \otimes \frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-i}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
If different properties of the quons are measured the expectation values are zero - there is no correlation.
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Hardness, Color) }\cdot \Psi=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Hardness, Taste) } \cdot \Psi=0 \nonumber$
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Color, Taste) }\cdot \Psi=0 \nonumber$
A realist believes that objects, macro, micro or nano, have well-defined properties prior to and independent of the nature of the observation performed on them. Experiment simply reveals the unknown state of the system under observation.
Objects with three properties (hardness, color and taste) which can be in either of two states (hard +1, soft -1, black +1, white -1, sweet +1 and sour -1) can be in any one of eight states according to a realist: HBSw, HBSo, HWSw, HWSo, SBSw, SBSo, SWSw and SWSo. Due to the correlation values when the same property is measured on both quons, the realist can explain all measurement results as shown in the table below.
Because the states were constructed on the basis of anti-correlation for hardness, color and taste, it is only necessary to show agreement between the quantum calculations and the realist's states for the cases in which different properties are measured. The three right-hand columns of the table show no correlation, in agreement with the quantum calculations.
$\left[ \begin{matrix} \text{QuonA} & \text{QuonB} & \text{Hardness-Color} & \text{Hardness-Taste} & \text{Color-Taste} \ \text{HBSw} & \text{SWSo} & 1 \times -1 = -1 & 1 \times -1 = -1 & 1 \times -1 = -1 \ \text{HBSo} & \text{SWSw} & 1 \times -1 = -1 & 1 \times 1 = 1 & 1 \times 1 = 1 \ \text{HWSw} & \text{SBSo} & 1 \times 1 = 1 & 1 \times -1 = -1 & -1 \times -1 = 1 \ \text{HWSo} & \text{SBSw} & 1 \times 1 = 1 & 1 \times 1 = 1 & -1 \times 1 = -1 \ \text{Expectation} & \text{Value} & 0 & 0 & 0 \end{matrix} \right] \nonumber$
In spite of this agreement, the quantum theorist objects. Earlier it was shown that the hardness and color operators do not commute, meaning that from the quantum perspective hardness and color cannot be simultaneously well-defined. The same is true for hardness and taste, and for color and taste. Therefore, the states in the table proposed by the realist are not legitimate.
$\text{Hardness} \cdot \text{Taste}-\text { Taste} \cdot \text{Hardness}=\left( \begin{array}{cc}{0} & {-2 \mathrm{i}} \ {-2 \mathrm{i}} & {0}\end{array}\right) \nonumber$
$\text{Color} \cdot \text{Taste}-\text { Taste} \cdot \text{Color}=\left( \begin{array}{cc}{2 \mathrm{i}} & {0} \ {0} & {-2 \mathrm{i}}\end{array}\right) \nonumber$
The superpositions tell the same story. For example, if a quon is hard (H) its color and taste are indeterminate because hard is an even superposition of black and white, and sweet and sour.
$\mathrm{H}=\frac{1}{\sqrt{2}} \cdot(\mathrm{B}+\mathrm{W}) \qquad \mathrm{H}=\frac{1}{\sqrt{2}} \cdot(\mathrm{i} \cdot \mathrm{Sw}+\mathrm{So}) \nonumber$
While this line of reasoning is compelling, the realist is undeterred. The fact that quantum mechanics can't assign specific values to all properties of an object is evidence that it does not provide a complete theory of reality!
Thought experiments like this clarify the conflict between quantum theory and local realism, but they do not provide, as we have seen, a final adjudication of the disagreement. That changed in 1964 with a theoretical analysis by John Bell that showed that there are experimental situations where the predictions of quantum mechanics and local realism are in disagreement. We look at one of them now.
Odor is another physical property of objects. The Hadamard operator is renamed the Odor operator. It has two eigenstates Pleasant and Foul, with eigenvalues +1 and -1, respectively, as shown below.
$\text{Odor}:=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \ {1} & {-1}\end{array}\right) \qquad \text{eigenvals}(\text {Odor})=\left( \begin{array}{c}{1} \ {-1}\end{array}\right) \nonumber$
$\text{Pleasant}:=\text { eigenvec (Odor},1 )=\left( \begin{array}{l}{0.924} \ {0.383}\end{array}\right) \ \text {Foul} :=\text { eigenvec (Odor,}-1 )=\left( \begin{array}{c}{-0.383} \ {0.924}\end{array}\right) \nonumber$
$\Psi=\frac{1}{\sqrt{2}}[ |P\rangle_{A} | F \rangle_{B}-| F \rangle_{A} | P \rangle_{B} ]=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{0.924} \ {0.383}\end{array}\right) \otimes \left( \begin{array}{c}{-0.383} \ {0.924}\end{array}\right)-\left( \begin{array}{c}{-0.383} \ {0.924}\end{array}\right) \otimes \left( \begin{array}{c}{0.924} \ {0.383}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \ {1} \ {-1} \ {0}\end{array}\right) \nonumber$
Carrying out some of the same quantum mechanical calculations that we have done for the other observable properties, we see that the individual odor measurements are random, there is perfect anti-correlation in the joint odor measurements and intermediate anti-correlation in the joint hardness-odor measurements. This latter result is of utmost importance, because a local realist can't explain it.
$\Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{Odor}, \mathrm{I}) \cdot \Psi=0 \nonumber$
$\mathbf{\Psi}^{\mathrm{T}} \cdot \mathrm{kronecker}(\mathrm{Odor}, \mathrm{Odor}) \cdot \Psi=-1 \nonumber$
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text { Hardness }, \mathrm{Odor}) \cdot \Psi=-0.707 \nonumber$
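The value -0.707 can be traced to results already obtained above. Because the Odor (Hadamard) operator is an even combination of the Hardness and Color operators, the joint hardness-odor expectation value follows in one line (a short derivation added here for clarity):

$\text{Odor}=\frac{1}{\sqrt{2}} \cdot(\text{Hardness}+\text{Color}) \Rightarrow \Psi^{\mathrm{T}} \cdot \text{kronecker}(\text{Hardness}, \text{Odor}) \cdot \Psi=\frac{1}{\sqrt{2}}\left[\Psi^{\mathrm{T}} \cdot \text{kronecker}(\text{Hardness}, \text{Hardness}) \cdot \Psi+\Psi^{\mathrm{T}} \cdot \text{kronecker}(\text{Hardness}, \text{Color}) \cdot \Psi\right]=\frac{-1+0}{\sqrt{2}}=-0.707 \nonumber$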
Given that we are now dealing with four properties, each of which can have two values, there are 16 composite states. However, for now we only need to consider the four states involving the properties of hardness and odor to show that the local realist model cannot explain the anti-correlation predicted by quantum mechanics for the joint hardness-odor measurements. In the following table H = Hard, S = Soft, P = Pleasant, and F = Foul. Appendix B provides a complete analysis for all four observable properties.
$\begin{pmatrix} \text{Quon1} & \text{Quon2} & \text{Hardness-Hardness} & \text{Odor-Odor} & \text{Hardness-Odor} \ \text{HP} & \text{SF} & -1 & -1 & -1 \ \text{HF} & \text{SP} & -1 & -1 & 1 \ \text{SP} & \text{HF} & -1 & -1 & 1 \ \text{SF} & \text{HP} & -1 & -1 & -1 \ \text{Expectation} & \text{Value} & -1 & -1 & 0 \ \text{Quantum} & \text{Result} & -1 & -1 & -0.707 \end{pmatrix} \nonumber$
The last column shows that a local realist model predicts no correlation between the joint hardness-odor measurements, while a quantum mechanical calculation predicts anti-correlation of -0.707. This example illustrates the significance of Bell's analysis: there are experiments for which a local realist model cannot reproduce all the quantum mechanical predictions. And to date the quantum mechanical predictions have been verified experimentally. Thus, a local realist model of nature is not tenable. As mentioned above Appendix B provides additional computational detail regarding this issue.
Concluding Remarks
The reason for using the properties of hardness, color and taste in these exercises is to emphasize how different the quantum world is from the macro world that we occupy. It is not an uncommon experience (it has happened to me) to eat a piece of candy that is hard, white and sweet. But this is not possible for quantum candy because the matrix operators representing these observables do not commute. Therefore, the observables cannot simultaneously be well defined.
In quantum mechanics these operators,
$\text{Hardness}:=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \text { Color } :=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \text { Taste } :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \nonumber$
are actually the Pauli spin matrices and represent the observables for spin in the z-, x- and y-directions as mentioned earlier.
$\sigma_{z}=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \quad \sigma_{\mathrm{x}}=\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right) \quad \sigma_{\mathrm{y}} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \nonumber$
They are also the operators for the rectilinear, diagonal and circular polarization properties of photons. In this case the eigenvectors are vertical, horizontal, diagonal, anti-diagonal, and right and left circular polarization.
$\mathrm{V} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \mathrm{H} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \ \mathrm{D} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {1}\end{array}\right) \qquad \mathrm{A} :=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-1}\end{array}\right) \ \mathrm{R} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {\mathrm{i}}\end{array}\right) \qquad \mathrm{L} :=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \ {-\mathrm{i}}\end{array}\right) \nonumber$
Appendix A: Vector and Matrix Math
Vector inner product:
$(a b) \cdot \left( \begin{array}{l}{c} \ {d}\end{array}\right) \rightarrow a \cdot c+b \cdot d \nonumber$
Vector outer product:
$\left( \begin{array}{l}{c} \ {d}\end{array}\right) \cdot (a b) \rightarrow \left( \begin{array}{ll}{a \cdot c} & {b \cdot c} \ {a \cdot d} & {b \cdot d}\end{array}\right) \nonumber$
$\operatorname{tr}\left[\left( \begin{array}{l}{c} \ {d}\end{array}\right) \cdot(a b)\right] \rightarrow a \cdot c+b \cdot d \nonumber$
Matrix-vector product:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right) \rightarrow \left( \begin{array}{l}{a \cdot x+b \cdot y} \ {c \cdot x+d \cdot y}\end{array}\right) \nonumber$
$( x, y) \cdot \left( \begin{array}{ll}{ a} & { b} \ { c} & { d}\end{array}\right)^{ T} \rightarrow( a \cdot x+ b \cdot y \quad c \cdot x+ d \cdot y) \nonumber$
Expectation value:
$( x \quad y) \cdot \left( \begin{array}{ll}{ a} & { b} \ { c} & { d}\end{array}\right) \cdot \left( \begin{array}{l}{ x} \ { y}\end{array}\right) \text { simplify } \rightarrow a \cdot x^{2}+ d \cdot y^{2}+ b \cdot x \cdot y+ c \cdot x \cdot y \nonumber$
$( x, y) \cdot \left( \begin{array}{cc}{ a} & { b} \ { c} & { d}\end{array}\right)^{ T} \cdot \left( \begin{array}{l}{ x} \ { y}\end{array}\right) \text { simplify } \rightarrow a \cdot x^{2}+ d \cdot y^{2}+ b \cdot x \cdot y+ c \cdot x \cdot y \nonumber$
$\operatorname{tr}\left[\left( \begin{array}{l}{x} \ {y}\end{array}\right) \cdot \left( \begin{array}{ll}{x} & {y}\end{array}\right) \cdot \left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right)\right] \rightarrow a \cdot x^{2}+d \cdot y^{2}+b \cdot x \cdot y+c \cdot x \cdot y \nonumber$
$\operatorname{tr}\left[\left( \begin{array}{cc}{a} & {b} \ {c} & {d}\end{array}\right) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right) \cdot \left( \begin{array}{ll}{x} & {y}\end{array}\right)\right] \text { simplify } \rightarrow a \cdot x^{2}+d \cdot y^{2}+b \cdot x \cdot y+c \cdot x \cdot y \nonumber$
Matrix product:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \cdot \left( \begin{array}{lc}{w} & {x} \ {y} & {z}\end{array}\right) \rightarrow \left( \begin{array}{l}{a \cdot w+b \cdot y} & {a \cdot x+b \cdot z} \ {c \cdot w+d \cdot y} & {c \cdot x+d \cdot z}\end{array}\right) \nonumber$
Vector tensor product:
$\left( \begin{array}{l}{a} \ {b}\end{array}\right) \otimes \left( \begin{array}{l}{c} \ {d}\end{array}\right)=\left( \begin{array}{c}{a \left( \begin{array}{c}{c} \ {d}\end{array}\right)} \ {b \left( \begin{array}{l}{c} \ {d}\end{array}\right)}\end{array}\right)=\left( \begin{array}{c}{a c} \ {a d} \ {b c} \ {b d}\end{array}\right) \nonumber$
Matrix tensor product:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \otimes \left( \begin{array}{ll}{w} & {x} \ {y} & {z}\end{array}\right) =\left( \begin{array}{ll}{a \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} & {b \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} \ {c \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} & {d \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)}\end{array}\right) =\left( \begin{array}{llll}{a w} & {a x} & {b w} & {b x} \ {a y} & {a z} & {b y} & {b z} \ {c w} & {c x} & {d w} & {d x} \ {c y} & {c z} & {d y} & {d z}\end{array}\right) \nonumber$
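As an added aside (not part of the original appendix), these tensor products can also be generated numerically; NumPy's `kron` function, assumed available here, produces the same patterns.

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])
M = np.array([[1, 2], [3, 4]])
N = np.array([[0, 5], [6, 7]])

print(np.kron(a, b))   # [3 4 6 8] : the (ac, ad, bc, bd) pattern shown above
print(np.kron(M, N))   # 4x4 block matrix: [[a*N, b*N], [c*N, d*N]]
```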
Matrix eigenvalues and eigenvectors (unnormalized):
$\text{eigenvals}\left(\left( \begin{array}{cc}{a} & {b} \ {b} & {a}\end{array}\right)\right) \rightarrow \left( \begin{array}{l}{a-b} \ {a+b}\end{array}\right) \nonumber$
or
$\left|\left( \begin{array}{cc}{a-\lambda} & {b} \ {b} & {a-\lambda}\end{array}\right)\right|=0 \text { solve }, \lambda \rightarrow \left( \begin{array}{c}{a+b} \ {a-b}\end{array}\right) \nonumber$
or
$\left( \begin{array}{cc}{-1} & {1} \ {1} & {1}\end{array}\right)^{-1} \left( \begin{array}{cc}{a} & {b} \ {b} & {a}\end{array}\right) \left( \begin{array}{cc}{-1} & {1} \ {1} & {1}\end{array}\right) \rightarrow \left( \begin{array}{cc}{a-b} & {0} \ {0} & {a+b}\end{array}\right) \nonumber$
using
$\text{eigenvecs}\left(\left( \begin{array}{cc}{a} & {b} \ {b} & {a}\end{array}\right)\right) \rightarrow \left( \begin{array}{cc}{-1} & {1} \ {1} & {1}\end{array}\right) \nonumber$
$\left( \begin{array}{ll}{a} & {b} \ {b} & {a}\end{array}\right) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right)=(a-b) \cdot \left( \begin{array}{l}{x} \ {y}\end{array}\right) \; \text{solve,} y \rightarrow-x \qquad \left( \begin{array}{l}{x} \ {y}\end{array}\right)=\left( \begin{array}{c}{-1} \ {1}\end{array}\right) \nonumber$
$\left( \begin{array}{ll}{\mathrm{a}} & {\mathrm{b}} \ {\mathrm{b}} & {\mathrm{a}}\end{array}\right) \cdot \left( \begin{array}{l}{\mathrm{x}} \ {\mathrm{y}}\end{array}\right)=(\mathrm{a}+\mathrm{b}) \cdot \left( \begin{array}{l}{\mathrm{x}} \ {\mathrm{y}}\end{array}\right) \text { solve, } \mathrm{y} \rightarrow \mathrm{x} \qquad \left( \begin{array}{l}{\mathrm{x}} \ {\mathrm{y}}\end{array}\right)=\left( \begin{array}{l}{1} \ {1}\end{array}\right) \nonumber$
Completeness relations:
$\text{Black} \cdot \text{Black}^{T} + \text{White} \cdot \text{White}^{T} =\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
$\text{Hard} \cdot \text{Hard}^{T} + \text{Soft} \cdot \text{Soft}^{T} =\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Appendix B: Additional Computational Details
In order to explain the perfect anti-correlation predicted by quantum mechanics when the same type of measurement is made on both quons, the local realist makes the following state assignments. Remember that these states are not legitimate according to quantum theory because hardness, color, taste and odor are incompatible observables.
$\begin{pmatrix} \text{QuonA} & \text{QuonB} \ \text{HBSwP} & \text{SWSoF} \ \text{HBSwF} & \text{SWSoP} \ \text{HBSoP} & \text{SWSwF} \ \text{HBSoF} & \text{SWSwP} \ \text{HWSwP} & \text{SBSoF} \ \text{HWSwF} & \text{SBSoP} \ \text{HWSoP} & \text{SBSwF} \ \text{HWSoF} & \text{SBSwP} \end{pmatrix} \qquad \begin{pmatrix} \text{Property} & \text{Eigenvalue} \ \text{H} & 1 \ \text{S} & -1 \ \text{B} & 1 \ \text{W} & -1 \ \text{Sw} & 1 \ \text{So} & -1 \ \text{P} & 1 \ \text{F} & -1 \end{pmatrix} \nonumber$
It is easy to show that these state assignments are in agreement with the following quantum mechanical calculations.
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Hardness, Hardness) } \cdot\Psi=-1$ $\Psi^{\mathrm{T}} \cdot \text { kronecker (Color, Color} ) \cdot \Psi=-1$ $\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text { Taste, Taste }) \cdot \Psi=-1$
$\mathbf{\Psi}^{\mathrm{T}} \cdot \mathrm{kronecker}(\mathrm{Odor}, \mathrm{Odor}) \cdot \Psi=-1$ $\Psi^{\mathrm{T}} \cdot \text { kronecker (Hardness, Color) } \cdot \Psi=0$ $\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text { Hardness, Taste }) \cdot \Psi=0$
$\Psi^{\mathrm{T}} \cdot \text { kronecker (Color, Taste) } \cdot\Psi=0$ $\Psi^{\mathrm{T}} \cdot \text { kronecker (Taste, Odor) }\cdot\Psi=0$
However, the state assignments are not consistent with the following quantum calculations.
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\text { Hardness }, \text { Odor }) \cdot \Psi=-0.707 \nonumber$
$\Psi^{\mathrm{T}} \cdot \mathrm{kronecker}(\mathrm{Color}, \mathrm{Odor}) \cdot \Psi=-0.707 \nonumber$
The last two rows of the following table compare the local realist and quantum mechanical predictions, showing the disagreement for the hardness/odor and color/odor joint measurements.
$\begin{pmatrix} \text{QuonA} & \text{QuonB} & \text{HH} & \text{CC} & \text{TT} & \text{OO} & \text{HC} & \text{HT} & \text{HO} & \text{CT} & \text{CO} & \text{TO} \ \text{HBSwP} & \text{SWSoF} & -1 & -1& -1& -1& -1& -1& -1& -1& -1& -1 \ \text{HBSwF} & \text{SWSoP} & -1& -1& -1& -1& -1& -1& 1& -1& 1& 1\ \text{HBSoP} & \text{SWSwF} & -1& -1& -1& -1& -1& 1& -1& 1& -1& 1\ \text{HBSoF} & \text{SWSwP} & -1& -1& -1& -1& -1& 1& 1& 1& 1& -1\ \text{HWSwP} & \text{SBSoF} & -1& -1& -1& -1& 1& -1& -1& 1& 1& -1\ \text{HWSwF} & \text{SBSoP} & -1& -1& -1& -1& 1& -1& 1& 1& -1& 1\ \text{HWSoP} & \text{SBSwF} & -1& -1& -1& -1& 1& 1& -1& -1& 1& 1\ \text{HWSoF} & \text{SBSwP} & -1& -1& -1& -1& 1& 1& 1& -1& -1& -1\ \text{Expectation} & \text{Value} & -1& -1& -1& -1 & 0& 0& 0& 0& 0& 0\ \text{Quantum} & \text{Result} & -1& -1& -1& -1& 0& 0& -0.707& 0& -0.707& 0\end{pmatrix} \nonumber$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.34%3A_Rudimentary_Matrix_Mechanics.txt
|
The basic principles of quantum theory can be demonstrated very simply by exploring the properties of electron spin using Heisenberg's formulation of quantum mechanics which is usually referred to as matrix mechanics. The matrix formulation provides clear illustrations of the following essential quantum mechanical concepts: eigenvector, operator, eigenvalue, expectation value, the linear superposition, and the commutation relations.
Four quantum numbers are required to describe the electron in quantum mechanics. The last of these is the spin quantum number, s. The electron has a spin component in the x-, y-, and z-directions and for each of these directions the electron can have a value of spin-up or spin-down, or +1 and -1 in units of $\frac{h}{4 \pi}$. These spin states are represented by vectors as is shown below.
$\mathrm{S}_{\mathrm{xu}} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {1}\end{array}\right)$ $\mathrm{S}_{\mathrm{xd}} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {-1}\end{array}\right)$ $\left(\overline{\mathrm{S}_{x u}}\right)^{\mathrm{T}}=\begin{pmatrix}0.707 & 0 .707 \end{pmatrix}$ $\left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}}=\begin{pmatrix}0.707 & -0 .707 \end{pmatrix}$
$\mathrm{S}_{\mathrm{yu}} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {i}\end{array}\right)$ $\mathrm{S}_{\mathrm{yd}} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \ {-\mathrm{i}}\end{array}\right)$ $\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}}=\begin{pmatrix}0.707 & -0 .707i \end{pmatrix}$ $\left(\overline{\mathrm{S}_{yd}}\right)^{\mathrm{T}}=\begin{pmatrix}0.707 & 0 .707i \end{pmatrix}$
$\mathrm{S}_{\mathrm{zu}} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$ $\mathrm{S}_{\mathrm{zd}} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$ $\left(\overline{\mathrm{S}_{\mathrm{zu}}}\right)^{\mathrm{T}}=\begin{pmatrix}1 & 0 \end{pmatrix}$ $\left(\overline{\mathrm{S}_{\mathrm{zd}}}\right)^{\mathrm{T}}=\begin{pmatrix}0 & 1 \end{pmatrix}$
Let's look at the y-direction spin states because they are complex, and therefore are slightly more difficult to deal with. In Dirac notation these four vectors are written as |Syu>, |Syd>, <Syu|, and <Syd|. Note that the bra-vectors are the transpose of the complex conjugate of the ket-vectors. It is also easy to show that these spin vectors in the x-, y-, and z-directions form orthonormal basis sets. That means they are normalized and orthogonal to each other.
$\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{xu}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{xd}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{xd}}=0$
$\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{yu}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{yd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{yd}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{yd}}=0$
$\left(\overline{\mathrm{S}_{\mathrm{zu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zu}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{zd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zd}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{zu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zd}}=0$
In Dirac notation we would write the first row as: <Sxu|Sxu> = <Sxd|Sxd> = 1, <Sxu|Sxd> = 0. In other words the projection of the spin states onto themselves is 1 (normalized) and the projection onto the other state is zero (orthogonal).
The calculations above for the y-direction spin vectors are shown explicitly below. You should do hand calculations on all of the above for practice and to have some appreciation for what the computer is doing.
$(0.707 \quad -0.707 \mathrm{i}) \cdot \left( \begin{array}{c}{0.707} \ {0.707 \mathrm{i}}\end{array}\right)=1 \ (0.707 \quad 0.707 \mathrm{i}) \cdot \left( \begin{array}{c}{0.707} \ {-0.707 \mathrm{i}}\end{array}\right)=1 \ (0.707 \quad -0.707 \mathrm{i}) \cdot \left( \begin{array}{c}{0.707} \ {-0.707 \mathrm{i}}\end{array}\right)=0 \nonumber$
It is easy to show that x- and z-spin states are not orthogonal to one another. This is true of any two different spin directions. <Sxu|Szu> = 0.707, for example.
$\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zu}}=0.707 \quad\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zd}}=0.707 \quad \left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zu}}=0.707 \quad\left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zd}}=-0.707 \nonumber$
This of course means that |Sxu> and |Sxd> can be written as linear superpositions of |Szu> and |Szd>, and |Szu> and |Szd> can be written as linear superpositions of |Sxu> and |Sxd>.
$\mathrm{S}_{\mathrm{xu}}=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{zu}}+\frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right) \ \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{zu}}-\frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right) \ \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{xu}}+\frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \ \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{xu}}-\frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
The concept of the linear superposition is central in quantum theory and has no classical analog. For example, if by measurement an electron is found to have spin-up in the z-direction, this means that the electron does not have a definite spin in either the x- or the y-direction because |Szu> is a linear superposition of the x- and y-direction spin states.
$\mathrm{S}_{zu}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{xu}}+\frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{yu}}+\frac{1}{\sqrt{2}} \cdot \mathrm{S}_{\mathrm{yd}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
In spite of its appearance, a linear superposition is not a mixture. In other words |Szu> is not 50% |Sxu> and 50% |Sxd>, or 50% |Syu> and 50% |Syd>.
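Although a superposition is not a mixture, the vector arithmetic itself is easy to check. The short NumPy sketch below is an added illustration (NumPy assumed; names mirror the spin vectors defined above, including the complex y-direction states).

```python
import numpy as np

Sxu = np.array([1,  1]) / np.sqrt(2)
Sxd = np.array([1, -1]) / np.sqrt(2)
Syu = np.array([1,  1j]) / np.sqrt(2)
Syd = np.array([1, -1j]) / np.sqrt(2)

# |Szu> recovered as an even superposition of the x- and of the y-direction states
print((Sxu + Sxd) / np.sqrt(2))   # approximately (1, 0) = |Szu>
print((Syu + Syd) / np.sqrt(2))   # approximately (1, 0) = |Szu>
```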
Another central dogma of quantum theory is that the wavefunction or state vector contains all the physical information available for the system. Quantum mechanics therefore consists, in large part, of extracting physical information from the wavefunction or state vector, and it provides a small set of rules for carrying out this procedure mathematically.
For every observable of the system there is an operator. Since electrons can spin in the x-, y-, or z-directions, there are spin operators in those directions, or for that matter in any other arbitrary direction you might think of. (See Appendix B for the construction of a general spin operator.) In quantum mechanics states are vectors and operators are matrices. The spin operators in units of $\frac{h}{4 \pi}$ are shown below. Note that squaring these operators gives the identity operator.
$\mathrm{S}_{\mathrm{x}}=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right)$ $\mathrm{S}_{\mathrm{y}} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right)$ $\mathrm{S}_{\mathrm{z}} :=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right)$
$\mathrm{S}_{\mathrm{x}}^{2}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)$ $\mathrm{S}_{\mathrm{y}}^{2}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)$ $\mathrm{S}_{\mathrm{z}}^{2}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)$
The square of the total spin operator, in units of $\left(\frac{h}{4 \pi}\right)^{2}$, is
$\mathrm{S} 2 :=\mathrm{S}_{\mathrm{x}}^{2}+\mathrm{S}_{\mathrm{y}}^{2}+\mathrm{S}_{\mathrm{z}}^{2} \quad \mathrm{S} 2=\left( \begin{array}{ll}{3} & {0} \ {0} & {3}\end{array}\right) \nonumber$
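This value is consistent with the general result $S^{2}=s(s+1) \hbar^{2}$ for a spin-1/2 particle; the following check is added here for clarity, using $\hbar = h/2\pi$ so that $h/4\pi = \hbar/2$.

$\mathrm{S}^{2}=s(s+1) \cdot \hbar^{2}=\frac{1}{2} \cdot \frac{3}{2} \cdot \hbar^{2}=\frac{3}{4} \cdot \hbar^{2}=3 \cdot\left(\frac{\hbar}{2}\right)^{2}=3 \cdot\left(\frac{h}{4 \pi}\right)^{2} \nonumber$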
A measurement operator extracts information about the system by operating on the wavefunction or state vector. One possible outcome is that the operation returns the state vector multiplied by a numerical constant. For example,
$\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{xu}}=\left( \begin{array}{c}{0.707} \ {0.707}\end{array}\right) \nonumber$
$\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{c}{-0.707} \ {0.707}\end{array}\right) \nonumber$
$\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{yu}}=\left( \begin{array}{c}{0.707} \ {0.707 \mathrm{i}}\end{array}\right) \nonumber$
$\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{yd}}=\left( \begin{array}{c}{-0.707} \ {0.707 \mathrm{i}}\end{array}\right) \nonumber$
$\mathrm{S}_{z} \cdot \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \nonumber$
$\mathrm{S}_{z} \cdot \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{c}{0} \ {-1}\end{array}\right) \nonumber$
or, for example:
$\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \cdot \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{c}{0} \ {-1}\end{array}\right) \nonumber$
In Dirac notation we would summarize these calculations as follows: Sx|Sxu> = +1|Sxu>, Sx|Sxd> = -1|Sxd>, Sy|Syu> = +1|Syu>, Sy|Syd> = -1|Syd>, Sz|Szu> = +1|Szu>, Sz|Szd> = -1|Szd>. In each of these cases, the state vector is an eigenfunction of the measurement operator with eigenvalue of either +1 or -1 (in units of $\frac{h}{4 \pi}$). We say, for example, that |Sxu> is an eigenfunction of Sx with eigenvalue +1. The electron has a well-defined value for spin in the x-direction (spin-up) and subsequent measurements of the x-direction spin will yield the value of +1 as long as no intervening measurements in another spin direction are made.
The other possible outcome of the measurement operation is that it yields another state vector.
$\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{yu}}=\left( \begin{array}{l}{0.707 \mathrm{i}} \ {0.707}\end{array}\right)$ $\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{yd}}=\left( \begin{array}{c}{-0.707 \mathrm{i}} \ {0.707}\end{array}\right)$ $\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
$\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$
$\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{xu}}=\left( \begin{array}{c}{-0.707 \mathrm{i}} \ {0.707 \mathrm{i}}\end{array}\right)$ $\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{l}{0.707 \mathrm{i}} \ {0.707 \mathrm{i}}\end{array}\right)$ $\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{l}{0} \ {\mathrm{i}}\end{array}\right)$ $\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{c}{-\mathrm{i}} \ {0}\end{array}\right)$
$\mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{xu}}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right)$ $\mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right)$ $\mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{yu}}=\left( \begin{array}{c}{0.707} \ {-0.707 \mathrm{i}}\end{array}\right)$ $\mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{yd}}=\left( \begin{array}{c}{0.707} \ {0.707 \mathrm{i}}\end{array}\right)$
In Dirac notation these operations appear as: Sx|Syu> = i|Syd>, Sx|Syd> = -i|Syu>, Sx|Szu> = |Szd>, Sx|Szd> = |Szu>, etc. In each case the resulting vector is different than the vector operated on. We say, for example, |Syu> is not an eigenfunction of Sx, and therefore an electron in this state does not have a definite value for spin in the x-direction. X-direction spin measurements on a system known to be in state |Syu> will yield completely random results.
To put it another way, quantum mechanical principles state that a system can be in a well-defined state, |Syu>, and yet the outcomes of all experiments are not uniquely determined. While a measurement of spin in the y-direction will yield a predictable result, +1, measurement of spin in the x- or z-direction is completely unpredictable and all we can calculate is the average value, or expectation value, for a large number of measurements. This is completely different from classical physics, where if you know the state of the system, you know the values of all physical observables.
As another example, consider the ground state of the hydrogen atom for which the electron's wave function is $\Psi=\pi^{-1 / 2} \exp (-r)$. When the electron is in this state it has a precise energy, but not a well-defined position or momentum. This, of course, makes the concept of an electron trajectory impossible and it is, therefore, meaningless to think of the electron as moving in any traditional sense.
The quantum mechanical algorithm for calculating the expectation value is to execute the following matrix multiplication: <State Vector | Operator | State Vector>. This formalism is quite general and can be used whether the state vector is an eigenfunction of the operator or not. This is demonstrated below for the spin states that we have been studying.
$\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{xu}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{xd}}=-1$ $\left(\overline{\mathrm{S}_{\mathrm{zu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zu}}=0$
$\left(\overline{\mathrm{S}_{\mathrm{zd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zd}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{yu}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{yd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{yd}}=0$
$\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{xu}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}}\cdot \mathrm{S}_{\mathrm{xd}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{zu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zu}}=0$
$\left(\overline{\mathrm{S}_{\mathrm{zd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zd}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{yu}}=1$ $\left(\overline{\mathrm{S}_{\mathrm{yd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{yd}}=-1$
$\left(\overline{\mathrm{S}_{\mathrm{xu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{xu}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{xd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{xd}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{zu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{zu}}=1$
$\left(\overline{\mathrm{S}_{\mathrm{zd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{z}}\cdot \mathrm{S}_{\mathrm{zd}}=-1$ $\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{yu}}=0$ $\left(\overline{\mathrm{S}_{\mathrm{yd}}}\right)^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{yd}}=0$
Let's look at the first six entries because they are representative of the remaining results. If the electron is in the state |Sxu> a measurement of Sx will always yield the value +1 (in units of $\frac{h}{4 \pi}$). If the electron is in the state |Sxd> a measurement of Sx will always yield the value -1 (in units of $\frac{h}{4 \pi}$). If instead Sy or Sz is measured, the measurement results will be a statistically random collection of +1 and -1, and the average value will, of course, be zero. Only when the system is in an eigenstate of the measurement operator is the outcome of the experiment certain.
This brings us to the concept of probability and how it is calculated in quantum mechanics. The projection of one state onto another, <Szu | Sxd> = 0.707, is a probability amplitude. Its absolute square, <Sxd | Szu> <Szu | Sxd> = |<Szu | Sxd>|^2 = 0.5 (remember <Sxd | Szu> = <Szu | Sxd>*), is the probability that an electron in state |Sxd> will be found by measurement in the state |Szu>. Representative calculations are shown below. (See Appendix A for another computational method.)
$\left[|(\overline{\mathrm{S}_{\mathrm{zu}}})^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{xu}}|\right]^{2}=0.5 \quad \left[|(\overline{\mathrm{S}_{\mathrm{zd}}})^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{xu}}|\right]^{2}=0.5 \quad \left[|(\overline{\mathrm{S}_{\mathrm{xu}}})^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{yu}}|\right]^{2}=0.5 \quad \left[|(\overline{\mathrm{S}_{\mathrm{xu}}})^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{xd}}|\right]^{2}=0.5 \nonumber$
Let's review these concepts by taking a specific example. The electron is in the state |Sxu> and we wish to measure Sz. According to quantum mechanical procedures the average value for a statistically meaningful number of measurements is zero: <Sxu | Sz | Sxu> = 0. The eigenstates (eigenfunctions) for Sz are |Szu> and |Szd> with eigenvalues +1 and -1, respectively. As the first two entries above show, the probability that an electron in state |Sxu> will be found in |Szu> with eigenvalue +1 is 0.5, and the probability that it will be found in state |Szd> with eigenvalue -1 is 0.5. Thus, the average value is expected to be zero, and the two ways of determining the average or expectation value of a measurement are consistent and equivalent.
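These bookkeeping steps are easy to reproduce outside of Mathcad. The short Python/NumPy sketch below (an optional cross-check; the variable names are illustrative) computes the expectation value <Sxu|Sz|Sxu> and the two transition probabilities, and confirms that the probability-weighted eigenvalues give the same average.

```python
import numpy as np

# Spin operators in units of h/(4*pi), i.e. the Pauli matrices
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spin eigenvectors used in this tutorial
Sxu = np.array([1, 1], dtype=complex) / np.sqrt(2)
Szu = np.array([1, 0], dtype=complex)
Szd = np.array([0, 1], dtype=complex)

# Expectation value <Sxu|Sz|Sxu>
exp_Sz = np.vdot(Sxu, Sz @ Sxu).real
print(exp_Sz)                        # 0.0

# Probabilities |<Szu|Sxu>|^2 and |<Szd|Sxu>|^2
p_up = abs(np.vdot(Szu, Sxu))**2     # 0.5
p_down = abs(np.vdot(Szd, Sxu))**2   # 0.5
print(p_up, p_down)

# Weighted average of the eigenvalues reproduces the expectation value
print(p_up * 1 + p_down * (-1))      # 0.0
```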
There is yet another way to look at this issue. In quantum mechanics for most pairs of observables the order of measurement is important. Quantum mechanical operators don't generally commute. For example, as shown below, SxSy|Szu> does not equal SySx|Szu>. This means that if the electron is in the state |Szu> the combined operators SxSy and SySx yield different measurement results.
$\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{c}{\mathrm{i}} \ {0}\end{array}\right) \quad \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{c}{-\mathrm{i}} \ {0}\end{array}\right) \quad \left(\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{y}}-\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{x}}\right) \cdot \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{c}{2 \mathrm{i}} \ {0}\end{array}\right) \nonumber$
Operators that do not commute have incompatible eigenstates. If a state vector is an eigenstate of one of the operators, it is not an eigenstate of the other. The fact that Sx and Sy do not commute means that an electron cannot simultaneously have well-defined values for Sx and Sy. It is not surprising that there is a deep connection between these properties of operators and the Uncertainty Principle. The commutators for the spin operators are shown below.
$\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{y}}-\mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{x}}=\left( \begin{array}{cc}{2 \mathrm{i}} & {0} \ {0} & {-2 \mathrm{i}}\end{array}\right) \qquad 2 \cdot \mathrm{i} \cdot \mathrm{S}_{\mathrm{z}}=\left( \begin{array}{cc}{2 \mathrm{i}} & {0} \ {0} & {-2 \mathrm{i}}\end{array}\right) \ \mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{x}}-\mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{z}}=\left( \begin{array}{cc}{0} & {2} \ {-2} & {0}\end{array}\right) \qquad 2 \cdot \mathrm{i} \cdot \mathrm{S}_{\mathrm{y}}=\left( \begin{array}{cc}{0} & {2} \ {-2} & {0}\end{array}\right) \ \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{z}}-\mathrm{S}_{\mathrm{z}} \cdot \mathrm{S}_{\mathrm{y}}=\left( \begin{array}{cc}{0} & {2 \mathrm{i}} \ {2 \mathrm{i}} & {0}\end{array}\right) \qquad 2 \cdot \mathrm{i} \cdot \mathrm{S}_{\mathrm{x}}=\left( \begin{array}{cc}{0} & {2 \mathrm{i}} \ {2 \mathrm{i}} & {0}\end{array}\right) \nonumber$
The Uncertainty Principle can also be illustrated by calculating $\Delta$Sx and $\Delta$Sy for an electron known to be in the Szu state. Since we are working in units of $\frac{h}{4 \pi}$, the uncertainty relation is: $\Delta \mathrm{S}_{\mathrm{x}} \cdot \Delta \mathrm{S}_{\mathrm{y}} \geq 1$.
$\sqrt{\mathrm{S}_{\mathrm{zu}}^{T} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zu}}-\left(\mathrm{S}_{\mathrm{zu}}^{T} \cdot \mathrm{S}_{\mathrm{x}} \cdot \mathrm{S}_{\mathrm{zu}}\right)^{2}} \cdot \sqrt{\mathrm{S}_{\mathrm{zu}}^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zu}}-\left(\mathrm{S}_{\mathrm{zu}}^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{y}} \cdot \mathrm{S}_{\mathrm{zu}}\right)^{2}}=1 \nonumber$
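A minimal NumPy check of the commutator and of this uncertainty product, offered as an optional supplement to the Mathcad results above (variable names are illustrative):

```python
import numpy as np

Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)
Szu = np.array([1, 0], dtype=complex)

# The commutator [Sx, Sy] should equal 2i*Sz
comm = Sx @ Sy - Sy @ Sx
print(np.allclose(comm, 2j * Sz))    # True

def uncertainty(op, state):
    """Standard deviation of an operator in a given state."""
    mean = np.vdot(state, op @ state).real
    mean_sq = np.vdot(state, op @ op @ state).real
    return np.sqrt(mean_sq - mean**2)

# Delta(Sx)*Delta(Sy) for the Szu state equals 1 (in units of h/(4*pi))
print(uncertainty(Sx, Szu) * uncertainty(Sy, Szu))   # 1.0
```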
We have been dealing with matrix operators and their associated eigenvectors and eigenvalues. The eigenvectors and eigenvalues can be obtained from the matrix operators with Mathcad's eigenvecs and eigenvals commands as is shown below.
$\begin{matrix} \text{eigenvals}\left(\mathrm{S}_{\mathrm{x}}\right)=\left( \begin{array}{c}{1} \ {-1}\end{array}\right) & \text{eigenvec}\left(\mathrm{S}_{\mathrm{x}}, 1\right)=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right) & \text{eigenvec}\left(\mathrm{S}_{\mathrm{x}}, -1\right)=\left( \begin{array}{l}{-0.707} \ {0.707}\end{array}\right) \ \text{eigenvals}\left(\mathrm{S}_{\mathrm{y}}\right)=\left( \begin{array}{c}{1} \ {-1}\end{array}\right) & \text{eigenvecs}\left(\mathrm{S}_{\mathrm{y}}\right)=\left( \begin{array}{cc}{-0.707 \mathrm{i}} & {0.707} \ {0.707} & {-0.707 \mathrm{i}}\end{array}\right) & \; \ \text{eigenvals}\left(\mathrm{S}_{\mathrm{z}}\right)=\left( \begin{array}{c}{1} \ {-1}\end{array}\right) & \text{eigenvecs}\left(\mathrm{S}_{\mathrm{z}}\right)=\left( \begin{array}{cc}{1} & {0} \ {0} & {1}\end{array}\right) & \; \end{matrix} \nonumber$
One final thing we will do is demonstrate the completeness relationship. For example, |Szu> <Szu| + |Szd> <Szd| = I, the identity operator. This demonstrates that the spin eigenfunctions for the various Cartesian directions span the two-dimensional space.
$\mathrm{S}_{\mathrm{zu}} \cdot \mathrm{S}_{\mathrm{zu}}^{\mathrm{T}}+\mathrm{S}_{\mathrm{zd}}\cdot \mathrm{S}_{\mathrm{zd}}^{\mathrm{T}}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \ \mathrm{S}_{\mathrm{xu}} \cdot \mathrm{S}_{\mathrm{xu}}^{\mathrm{T}}+\mathrm{S}_{\mathrm{xd}} \cdot \mathrm{S}_{\mathrm{xd}}^{\mathrm{T}}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \ \mathrm{S}_{\mathrm{yu}} \cdot\left(\overline{\mathrm{S}_{\mathrm{yu}}}\right)^{\mathrm{T}}+\mathrm{S}_{\mathrm{yd}} \cdot\left(\overline{\mathrm{S}_{\mathrm{yd}}}\right)^{\mathrm{T}}=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Appendix A
By the method outlined above we can show that the probability that a Szu electron will be found in the Sxu spin state is 0.5.
$\left[|(\overline{\mathrm{S}_{\mathrm{xu}}})^{\mathrm{T}} \cdot \mathrm{S}_{\mathrm{zu}}|\right]^{2}=0.5 \nonumber$
This calculation can be rewritten in terms of the trace of the product of the |Szu> <Szu| and |Sxu> <Sxu| projection operators.
$\left|\left\langle S_{x u} | S_{z u}\right\rangle\right|^{2}=\left\langle S_{z u} | S_{x u}\right\rangle\left\langle S_{x u} | S_{z u}\right\rangle =\sum_{i}\left\langle S_{z u} | i\right\rangle\langle i | S_{x u}\rangle\left\langle S_{x u} | S_{z u}\right\rangle =\sum_{i}\langle i | S_{x u}\rangle\left\langle S_{x u} | S_{z u}\right\rangle\left\langle S_{z u} | i\right\rangle = \operatorname{Trace}( |S_{x u}\rangle\left\langle S_{x u} | S_{z u}\right\rangle\left\langle S_{z u}|\right) \nonumber$
where the completeness relation $\sum_{i} | i \rangle\langle i|=1$ has been employed.
$\operatorname{tr}\left[\left(\mathrm{S}_{\mathrm{xu}} \cdot \mathrm{S}_{\mathrm{xu}}^{\mathrm{T}}\right) \cdot\left(\mathrm{S}_{\mathrm{zu}} \cdot \mathrm{S}_{\mathrm{zu}}^{\mathrm{T}}\right)\right]=0.5 \nonumber$
Appendix B
A general spin operator can be constructed using the spherical coordinate system, where $\theta$ is the angle relative to the z-axis and $\phi$ is the angle relative to the x-axis.
$\mathrm{S}(\theta, \phi) :=\cos (\phi) \cdot \sin (\theta) \cdot \mathrm{S}_{\mathrm{x}}+\sin (\phi) \cdot \sin (\theta) \cdot \mathrm{S}_{\mathrm{y}}+\cos (\theta) \cdot \mathrm{S}_{\mathrm{z}} \nonumber$
To confirm the validity of this general operator, we generate the traditional x-, y- and z-direction spin operators.
$\mathrm{S}\left(\frac{\pi}{2}, 0\right)=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \quad \mathrm{S}\left(\frac{\pi}{2}, \frac{\pi}{2}\right)=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \ {\mathrm{i}} & {0}\end{array}\right) \quad S(0,0)=\left( \begin{array}{cc}{1} & {0} \ {0} & {-1}\end{array}\right) \nonumber$
For the Hadamard gate (operator), which is important in quantum computing, $\theta = \frac{\pi}{4}$ and $\phi = 0$.
$\mathrm{S}\left(\frac{\pi}{4}, 0\right)=\left( \begin{array}{cc}{0.707} & {0.707} \ {0.707} & {-0.707}\end{array}\right) \nonumber$
As shown below it represents a Fourier transform between the x and z spin eigenstates.
$\mathrm{S}\left(\frac{\pi}{4}, 0\right) \mathrm{S}_{\mathrm{zu}}=\left( \begin{array}{l}{0.707} \ {0.707}\end{array}\right) \quad \mathrm{S}\left(\frac{\pi}{4}, 0\right) \mathrm{S}_{\mathrm{xu}}=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \ \mathrm{S}\left(\frac{\pi}{4}, 0\right) \mathrm{S}_{\mathrm{zd}}=\left( \begin{array}{c}{0.707} \ {-0.707}\end{array}\right) \quad \mathrm{S}\left(\frac{\pi}{4}, 0\right) \mathrm{S}_{\mathrm{xd}}=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
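For readers working outside Mathcad, a brief NumPy sketch (with illustrative variable names) builds S($\theta$, $\phi$) from the spin matrices, recovers the Cartesian operators, and verifies the Hadamard property:

```python
import numpy as np

Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

def S(theta, phi):
    """Spin operator along the (theta, phi) direction in spherical coordinates."""
    return (np.cos(phi) * np.sin(theta) * Sx
            + np.sin(phi) * np.sin(theta) * Sy
            + np.cos(theta) * Sz)

# Recover the Cartesian spin operators
print(np.allclose(S(np.pi / 2, 0), Sx))           # True
print(np.allclose(S(np.pi / 2, np.pi / 2), Sy))   # True
print(np.allclose(S(0, 0), Sz))                   # True

# The Hadamard operator S(pi/4, 0) maps z eigenstates to x eigenstates and back
H = S(np.pi / 4, 0)
Szu = np.array([1, 0], dtype=complex)
Sxu = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(np.allclose(H @ Szu, Sxu))                  # True
print(np.allclose(H @ Sxu, Szu))                  # True
```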
1.36: Aspects of Dirac's Relativistic Matrix Mechanics
This tutorial provides a brief summary of the last chapter of C. W. Sherwinʹs excellent Introduction to Quantum Mechanics which deals with relativistic quantum mechanics.
The relativistic equation for the energy of a free particle has positive and negative roots, where the positive root signifies the energy of a particle and the negative root the energy of its antiparticle. This interpretation was confirmed experimentally with the discovery of the anti‐electron (positron) in 1932 by Anderson.
$E=\pm c \sqrt{p_{x}^{2}+p_{y}^{2}+p_{z}^{2}+m^{2} c^{2}} \label{1} \nonumber$
Dirac converted this to a soluble quantum mechanical operator by first writing the argument of the square root as a perfect square in order to get rid of the troubling radical operator which defied physical interpretation. In a second step he replaced energy and momentum with their differential operators, $E = -\left(\frac{h}{2 \pi i}\right) \frac{d}{dt}$ and $p_{q} = \left(\frac{h}{2 \pi i}\right) \frac{d}{dq}$, from non‐relativistic quantum mechanics.
$\mathrm{p}_{\mathrm{x}}^{2}+\mathrm{p}_{\mathrm{y}}^{2}+\mathrm{p}_{\mathrm{Z}}^{2}+\mathrm{m}^{2} \cdot \mathrm{c}^{2}=\left(\alpha_{\mathrm{x}} \cdot \mathrm{p}_{\mathrm{x}}+\alpha_{\mathrm{y}} \cdot \mathrm{p}_{\mathrm{y}}+\alpha_{\mathrm{z}} \cdot \mathrm{p}_{\mathrm{z}}+\beta \cdot \mathrm{m} \cdot \mathrm{c}\right)^{2} \nonumber$
For this mathematical maneuver to be valid the following conditions must hold: $\alpha_{\mathrm{x}}^{2}=\alpha_{\mathrm{y}}^{2}=\alpha_{\mathrm{Z}}^{2}=\beta^{2}=1$
$\alpha_{\mathrm{x}} \cdot \alpha_{\mathrm{y}}+\alpha_{\mathrm{y}} \cdot \alpha_{\mathrm{x}}=0$ $\alpha_{\mathrm{x}} \cdot \alpha_{\mathrm{Z}}+\alpha_{\mathrm{z}} \cdot \alpha_{\mathrm{x}}=0$ $\alpha_{\mathrm{X}} \cdot \beta+\beta \cdot \alpha_{\mathrm{x}}=0$
$\alpha_{y} \cdot \alpha_{z}+\alpha_{z} \cdot \alpha_{y}=0$ $\alpha_{y} \cdot \beta+\beta \cdot \alpha_{y}=0$ $\alpha_{Z} \cdot \beta+\beta \cdot \alpha_{Z}=0$
$\mathrm{p}_{\mathrm{x}} \cdot \mathrm{p}_{\mathrm{y}}=\mathrm{p}_{\mathrm{y}} \cdot \mathrm{p}_{\mathrm{X}}$ $\mathrm{p}_{\mathrm{x}} \cdot \mathrm{p}_{\mathrm{Z}}=\mathrm{p}_{\mathrm{Z}} \cdot \mathrm{p}_{\mathrm{x}}$ $\mathrm{p}_{\mathrm{y}} \cdot \mathrm{p}_{\mathrm{Z}}=\mathrm{p}_{\mathrm{z}} \cdot \mathrm{p}_{\mathrm{y}}$
In other words, the $\alpha$s and $\beta$ must anticommute with one another, while the momentum operators as used above must commute. From the non‐relativistic formulation of quantum mechanics it was already clear that the momentum operator pairs above did commute. In formulating a relativistic quantum mechanics, Dirac assumed the validity of the various multiplicative and differential operators of non‐relativistic quantum mechanics for observable properties like energy, position and momentum.
Being cognizant of Heisenbergʹs matrix approach to non‐relativistic quantum mechanics, Dirac realized the restrictions above regarding the $\alpha$s and $\beta$ could be satisfied by the following 4x4 matrices.
$\alpha_{\mathrm{X}} =\left( \begin{array}{cccc}{0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0} \ {0} & {1} & {0} & {0} \ {1} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{y} =\left( \begin{array}{cccc}{0} & {0} & {0} & {-i} \ {0} & {0} & {i} & {0} \ {0} & {-i} & {0} & {0} \ {i} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{z} =\left( \begin{array}{cccc}{0} & {0} & {1} & {0} \ {0} & {0} & {0} & {-1} \ {1} & {0} & {0} & {0} \ {0} & {-1} & {0} & {0}\end{array}\right) \nonumber$
$\boldsymbol{\beta} =\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {-1} & {0} \ {0} & {0} & {0} & {-1}\end{array}\right) \nonumber$
First we show that $\alpha_{\mathrm{X}}^{2}=\alpha_{\mathrm{y}}^{2}=\alpha_{\mathrm{Z}}^{2}=\beta^{2}=\mathrm{I}$ where I is the identity operator.
$\mathrm{I} =\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
$\alpha_{\mathrm{x}} \cdot \alpha_{\mathrm{x}}=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
$\alpha_{y} \cdot \alpha_{y}=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
$\alpha_{z} \cdot \alpha_{z}=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
$\beta \cdot \beta=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
Now we show that the $\alpha$s and $\beta$ anticommute:
$\alpha_{x} \cdot \alpha_{y}+\alpha_{y} \cdot \alpha_{x}=\left( \begin{array}{cccc}{0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{\mathrm{x}} \cdot \alpha_{\mathrm{z}}+\alpha_{\mathrm{z}} \cdot \alpha_{\mathrm{x}}=\left( \begin{array}{cccc}{0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{\mathrm{x}} \cdot \beta+\beta \cdot \alpha_{\mathrm{x}}=\left( \begin{array}{cccc}{0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{y} \cdot \alpha_{z}+\alpha_{z} \cdot \alpha_{y}=\left( \begin{array}{cccc}{0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{y} \cdot \beta+\beta \cdot \alpha_{y}=\left( \begin{array}{cccc}{0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\alpha_{z} \cdot \beta+\beta \cdot \alpha_{z}=\left( \begin{array}{cccc}{0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
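The same algebra can be checked with a short Python/NumPy script. The sketch below is an optional supplement (not part of Sherwin's treatment); it verifies that each matrix squares to the identity and that every distinct pair anticommutes.

```python
import numpy as np

# Dirac matrices in the representation used above
ax = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], dtype=complex)
ay = np.array([[0, 0, 0, -1j], [0, 0, 1j, 0], [0, -1j, 0, 0], [1j, 0, 0, 0]], dtype=complex)
az = np.array([[0, 0, 1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, -1, 0, 0]], dtype=complex)
beta = np.diag([1, 1, -1, -1]).astype(complex)
I4 = np.eye(4)

# Each matrix squares to the identity
for M in (ax, ay, az, beta):
    assert np.allclose(M @ M, I4)

# All distinct pairs anticommute
pairs = [(ax, ay), (ax, az), (ax, beta), (ay, az), (ay, beta), (az, beta)]
for A, B in pairs:
    assert np.allclose(A @ B + B @ A, np.zeros((4, 4)))

print("squares equal I and all pairs anticommute")
```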
It is now possible to write Diracʹs relativistic energy equation as follows:
$E=\pm c\left(\alpha_{x} p_{x}+\alpha_{y} p_{y}+\alpha_{z} p_{z}+\beta m c\right) \label{2} \nonumber$
Before proceeding to the next step, the substitution of the differential operators for energy and momentum, it is instructive to look at the right side of the above equation which is a 4x4 Dirac relativistic energy operator. Of course, the left side is a 4x4 matrix with E on the diagonal and zeros everywhere else.
$-c \cdot\left(\alpha_{x} \cdot p_{x}+\alpha_{y} \cdot p_{y}+\alpha_{z} \cdot p_{z}+\beta \cdot m \cdot c\right) \rightarrow \left[ \begin{matrix} -c^{2} \cdot m & 0 & -c \cdot p_{z} & -c \left( p_{x} - p_{y} \cdot i \right) \ 0 & -c^{2} \cdot m & -c \left( p_{x} + p_{y} \cdot i \right) & c \cdot p_{z} \ -c \cdot p_{z} & -c \left( p_{x} - p_{y} \cdot i \right) & c^{2} \cdot m & 0 \ -c \left( p_{x} + p_{y} \cdot i \right) & c \cdot p_{z} & 0 & c^{2} \cdot m \end{matrix} \right] \label{3} \nonumber$
Substituting the traditional operators for energy and momentum yields,
$-\frac{\hbar}{i} \frac{\partial \Psi}{\partial t}=-\left[\frac{c \hbar}{i}\left(\alpha_{x} \frac{\partial}{\partial x}+\alpha_{y} \frac{\partial}{\partial y}+\alpha_{z} \frac{\partial}{\partial z}\right)+\beta m c^{2}\right] \Psi \label{4} \nonumber$
Assuming the separability of the space and time coordinates $[\Psi(x, y, z, t)=\psi(x, y, z) \phi(t)]$, this four-dimensional differential equation is decoupled into two differential equations. The time‐dependent equation is easily solved and has the following solution.
$\varphi(t)=e^{-i \frac{E t}{\hbar}} \label{5} \nonumber$
The space part of the differential equation has the following form, with the relativistic Hamiltonian operating on the wavefunction.
$-\left[\frac{c \hbar}{i}\left(\alpha_{x} \frac{\partial}{\partial x}+\alpha_{y} \frac{\partial}{\partial y}+\alpha_{z} \frac{\partial}{\partial z}\right)+\beta m c^{2}\right] \psi=E \psi \label{6} \nonumber$
As demonstrated above (Equation \ref{3}) the relativistic energy operator is a 4x4 matrix. Therefore, the wavefunction must be a four‐component vector.
At this point Sherwin turns to the example of the free particle in the x‐direction (see pages 292‐295). He assumes that the solution has the form of a plane wave. However, as shown below, substitution of the de Broglie equation into the plane wave yields the momentum eigenfunction in coordinate space.
$\exp \left(i \frac{2 \pi}{\lambda} x\right) \stackrel{\lambda=h / p}{\longrightarrow} \exp \left(i \frac{p x}{\hbar}\right) \nonumber$
This means that this problem is extremely easy to solve in momentum space, where the momentum operator is multiplicative. The calculation of the energy eigenvalues is straightforward using Mathcadʹs eigenvals command. We simply ask for the eigenvalues of the relativistic energy operator as shown below.
$\text{eigenvals}\left[-\mathrm{c} \cdot\left(\alpha_{\mathrm{x}} \cdot \mathrm{p}_{\mathrm{x}}+\beta \cdot \mathrm{m} \cdot \mathrm{c}\right)\right] \rightarrow \left( \begin{array}{c}{\mathrm{c} \cdot \sqrt{\mathrm{c}^{2} \cdot \mathrm{m}^{2}+\mathrm{p}_{\mathrm{x}}^{2}}} \ {\mathrm{c} \cdot \sqrt{\mathrm{c}^{2} \cdot \mathrm{m}^{2}+\mathrm{p}_{\mathrm{x}}^{2}}} \ {-\mathrm{c} \cdot \sqrt{\mathrm{c}^{2} \cdot \mathrm{m}^{2}+\mathrm{p}_{\mathrm{x}}^{2}}} \ {-\mathrm{c} \cdot \sqrt{\mathrm{c}^{2} \cdot \mathrm{m}^{2}+\mathrm{p}_{\mathrm{x}}^{2}}}\end{array}\right) \nonumber$
Calculation of the (unnormalized) eigenvectors is equally easy.
$\text{eigenvecs}\left[-\mathrm{c} \cdot\left(\alpha_{\mathrm{x}} \cdot \mathrm{p}_{\mathrm{x}}+\beta \cdot \mathrm{m} \cdot \mathrm{c}\right)\right] = \begin{pmatrix} \frac{\mathrm{W}+\mathrm{m} \cdot \mathrm{c}^{2}}{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{c}} & 0 & 0 & \frac{-\mathrm{W}+\mathrm{m} \cdot \mathrm{c}^{2}}{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{c}} \ 0 & \frac{\mathrm{W}+\mathrm{m} \cdot \mathrm{c}^{2}}{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{c}} & \frac{- \mathrm{W}+\mathrm{m} \cdot \mathrm{c}^{2}}{\mathrm{p}_{\mathrm{x}} \cdot \mathrm{c}} & 0 \ 0 & 1 & 1 & 0 \ 1 & 0 & 0 & 1 \end{pmatrix} \nonumber$
$\mathrm{W}=\sqrt{\mathrm{p}_{\mathrm{X}}^{2} \cdot \mathrm{c}^{2}+\mathrm{m}^{2} \cdot \mathrm{c}^{4}} \nonumber$
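The symbolic eigenvalue calculation can also be reproduced in Python with SymPy. The sketch below is an optional alternative to the Mathcad commands; the symbols c, m and p_x stand in for the worksheet variables.

```python
import sympy as sp

c, m, px = sp.symbols('c m p_x', positive=True)

# Dirac alpha_x and beta in the representation used above
alpha_x = sp.Matrix([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
beta = sp.diag(1, 1, -1, -1)

# One-dimensional free-particle Dirac energy operator, -c*(alpha_x*p_x + beta*m*c)
H = -c * (alpha_x * px + beta * m * c)

# Two distinct eigenvalues, +/- c*sqrt(c**2*m**2 + p_x**2), each with multiplicity 2
# (SymPy may display the square root in an equivalent form)
print(H.eigenvals())
```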
1.37: The Double-Slit Experiment
Thomas Young used the double-slit experiment to establish the wave nature of light. Richard Feynman used it to demonstrate the superposition principle as the paradigm of all quantum mechanical phenomena, illustrating wave-particle duality as stated above: between release and detection quons behave as waves.
The Experiment
The slit screen on the left produces the diffraction pattern on the right when illuminated with a coherent radiation source.
The Quantum Mechanical Explanation
Illumination of a double-slit screen with a coherent light source leads to a Schrödinger "cat state", in other words a superposition of the photon being localized at both slits simultaneously.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \nonumber$
Here x1 and x2 are positions of the two slits. It is assumed initially, for the sake of mathematical simplicity, that the slits are infinitesimally thin in the x-direction and infinitely long in the y-direction.
Because the slits localize the photon in the x-direction the uncertainty principle $\left(\Delta \mathrm{x} \Delta \mathrm{p}_{\mathrm{x}}>\frac{\mathrm{h}}{4 \pi}\right)$ demands a compensating delocalization in the x-component of the momentum. To see this delocalization in momentum requires a momentum wave function, which is obtained by a Fourier transform of the position wave function given above.
In this case the Fourier transform is simply the projection of the position wave function onto momentum space. See the Appendix for further information on the Fourier transform.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle]=\frac{1}{2 \sqrt{\pi}}\left[\exp \left(-\frac{i p x_{1}}{\hbar}\right)+\exp \left(-\frac{i p x_{2}}{\hbar}\right)\right] \nonumber$
The quantum mechanical interpretation of the double-slit experiment, or any diffraction experiment for that matter, is that the diffraction pattern is actually the momentum distribution function, $|<p| \Psi>\left.\right|^{2}$. This is illustrated below. The following calculations are carried out in atomic units (h = 2 $\pi$).
Position of first slit: $x_{1} : = 0$ Position of second slit: $x_{2} : = 1$
$\mathrm{p} :=-30,-29.9 \ldots30 \quad \Psi(\mathrm{p}) :=\frac{\frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}_{1}\right)+\frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}_{2}\right)}{\sqrt{2}} \nonumber$
The momentum distribution shows interference fringes because the photons were localized in space at two positions: the maxima represent constructive interference and the minima destructive interference. Notice also that the momentum distribution in the x-direction is completely delocalized; it shows no sign of attenuating at large momentum values. This is due to the fact that under the present model the photons are precisely localized at x1 and x2 - in other words the slits are infinitesimally thin in the x-direction as specified above.
This of course is not an adequate representation of the actual double-slit diffraction pattern because any real slit has a finite size in both directions. It's really not a problem that the slit is infinite in the y-direction, because that means the photon is simply not localized in that direction. So all we need to do to get a more realistic double-slit diffraction pattern is make the slits finite (not infinitesimal) in the x-direction. This is accomplished by giving the slits a finite width, $\delta$, in the x-direction, and recalculating the momentum wave function.
Slit width:
$\delta :=0.2 \quad \Psi(\mathrm{p}) := \frac{\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}+\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}}{\sqrt{2}} \nonumber$
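For readers who prefer Python to Mathcad, the following NumPy sketch evaluates the same finite-width momentum wave function; the slit integral is written in closed form, and the function and variable names are illustrative.

```python
import numpy as np

# Finite-width double slit in atomic units
x1, x2, delta = 0.0, 1.0, 0.2            # slit positions and width
p = np.linspace(-30, 30, 601)

def slit_amplitude(p, x0, delta):
    """Closed form of int exp(-i*p*x)/sqrt(2*pi*delta) dx over a slit centered at x0."""
    # np.sinc(z) = sin(pi*z)/(pi*z), so delta*np.sinc(p*delta/(2*pi)) = 2*sin(p*delta/2)/p
    return np.exp(-1j * p * x0) * np.sqrt(delta / (2 * np.pi)) * np.sinc(p * delta / (2 * np.pi))

psi_p = (slit_amplitude(p, x1, delta) + slit_amplitude(p, x2, delta)) / np.sqrt(2)
diffraction_pattern = np.abs(psi_p)**2    # momentum distribution seen at the detection screen
# e.g. plot with matplotlib: plt.plot(p, diffraction_pattern)
```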
In summary, the quantum mechanical interpretation of the double-slit experiment is that position is measured at the slit screen and momentum is measured at the detection screen. Position and momentum are conjugate observables which are connected by a Fourier transform and governed by the uncertainty principle. Knowing the slit screen geometry makes it possible to calculate the momentum distribution at the detection screen. Varying the slit width, $\delta$, gives a clear and simple demonstration of the uncertainty principle in action. Narrow slit widths give broad momentum distributions and wide slit widths give narrow momentum distributions.
1.38: Double-Slit Experiment with Polarized Light
According to the Encyclopedia Britannica, Fresnel and Arago “using an apparatus based on Young’s [double‐slit] experiment” observed that “two beams polarized in mutually perpendicular planes never yield fringes.” The purpose of this tutorial is to examine this phenomenon from a quantum mechanical perspective.
A schematic diagram of the double‐slit experiment with polarizers behind the slits is shown below. V and H stand for vertical and horizontal, respectively.
We begin with a review of the double‐slit experiment in the absence of the polarizers shown in the figure above. Illumination of the double‐slit screen with a coherent light source leads to a Schrödinger ʺcat stateʺ, in other words a superposition of the photon being localized at both slits simultaneously. In effect, the slit screen performs a position measurement on the photons emanating from the light source.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \nonumber$
Here x1 and x2 are the positions of the two slits. For the sake of mathematical simplicity it is assumed that the slits are infinitesimally thin in the x‐direction and infinitely long in the y‐direction.
Because the slits localize the photon in the x‐direction the uncertainty principle demands a compensating delocalization in the x‐component of the momentum. To see this delocalization in momentum requires a momentum wave function, which is obtained by a Fourier transform of the position wave function given above. In this case the Fourier transform is simply the projection of the position wave function onto momentum space in atomic units (h = 2$\pi$).
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle+\langle p | x_{2}\rangle]=\frac{1}{2 \sqrt{\pi}}\left[\exp \left(-i p x_{1}\right)+\exp \left(-i p x_{2}\right)\right] \nonumber$
The diffraction pattern created by the double‐slit configuration of the screen is the square of the absolute magnitude of the momentum wave function, $|\Psi(p)|^{2}$.
$|\langle p | \Psi\rangle|^{2}=\frac{1+\cos \left(p x_{1}\right) \cos \left(p x_{2}\right)+\sin \left(p x_{1}\right) \sin \left(p x_{2}\right)}{2 \pi} \nonumber$
The diffraction pattern below shows the required delocalization of momentum by the infinitesimally thin slits and interference fringes because the photon was localized at two spatial positions.
If slits of finite width were used the mathematics would be a little more complicated and lead to the following familiar diffraction pattern.
In the presence of the polarizing films, the spatial wave function becomes,
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle | v \rangle+| x_{2} \rangle | h \rangle ] \nonumber$
where v and h represent the vertical and horizontal polarization states. Projecting this wave function onto momentum space yields,
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle | v\rangle+\langle p | x_{2}\rangle | h \rangle ] \nonumber$
The square of the absolute magnitude of the momentum wave function is, again, the diffraction pattern,
$|\langle p | \Psi\rangle|^{2}=\frac{1}{2}\left[|\langle p | x_{1}\rangle|^{2}+|\langle p | x_{2}\rangle|^{2}\right]=\frac{1}{2 \pi} \nonumber$
We see that due to the presence of the polarizing films the interference terms (fringes) disappear.
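A short NumPy sketch (an optional supplement with illustrative variable names) makes the same point numerically: adding the amplitudes for untagged slits produces fringes, while orthogonal polarization tags leave only the flat $\frac{1}{2 \pi}$ background.

```python
import numpy as np

x1, x2 = 0.0, 1.0
p = np.linspace(-30, 30, 601)

# <p|x> for infinitesimally thin slits (atomic units)
amp1 = np.exp(-1j * p * x1) / np.sqrt(2 * np.pi)
amp2 = np.exp(-1j * p * x2) / np.sqrt(2 * np.pi)

# No polarizers: the amplitudes add, giving interference fringes
no_tags = np.abs((amp1 + amp2) / np.sqrt(2))**2

# Crossed polarizers: the wave function carries a polarization label,
# psi(p) = (amp1*|v> + amp2*|h>)/sqrt(2); since <v|h> = 0 the cross terms vanish
v = np.array([1, 0], dtype=complex)
h = np.array([0, 1], dtype=complex)
psi_tagged = (np.outer(amp1, v) + np.outer(amp2, h)) / np.sqrt(2)   # shape (len(p), 2)
tagged = np.sum(np.abs(psi_tagged)**2, axis=1)

print(np.allclose(tagged, 1 / (2 * np.pi)))   # True: no fringes
```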
1.39: The Consequences of Path Information in a Mach-Zehnder Interferometer
This tutorial deals with the effect of path information in a Mach-Zehnder interferometer (MZI). A related analysis involving the double-slit experiment is available in the preceding tutorial. We begin with an analysis of the operation of an equal arm MZI illuminated with diagonally polarized light.
The source emits photons in the x-direction illuminating a 50/50 beam splitter (BS) which splits the beam into a superposition of propagation in the x- and y-directions. The reflected beam collects a 90 degree ($\frac{\pi}{2}$, i) phase shift relative to the transmitted beam. Mirrors redirect the two beams to a second 50/50 BS. For an equal arm interferometer the photons are always registered at Dx and never at Dy. One way to explain this is shown in the figure below. A photon has two paths to each detector. At Dx the probability amplitudes of the photon's paths add in phase each being shifted by 90 degrees, resulting in constructive interference. At Dy the probability amplitudes are 180 degrees out of phase causing destructive interference. The beam splitters do not operate on the polarization states of the photons.
Probability amplitude for photon transmission at a 50/50 beam splitter: $T : = \frac{1}{\sqrt{2}}$ Probability amplitude for photon reflection at a 50/50 beam splitter: $R : = \frac{i}{\sqrt{2}}$
An equivalent matrix mechanics analysis uses vectors to represent the photon's direction of propagation and polarization, and matrices to represent the objects that operate on the photon states, such as beam splitters and mirrors. In the analyses which follow a photon can be moving in the x- or y-direction with diagonal, horizontal or vertical polarization.
Propagation states:
$\mathrm{x} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \mathrm{y} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
Polarization states:
$\mathrm{d} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {1}\end{array}\right) \qquad \mathrm{h} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \qquad \mathrm{v} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \nonumber$
The Appendix shows how the propagation and polarization states are written as a composite state using tensor multiplication of vectors.
$\mathrm{xd} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \ {1} \ {0} \ {0}\end{array}\right) \qquad \mathrm{yd} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{0} \ {0} \ {1} \ {1}\end{array}\right) \nonumber$
$\mathrm{xh} :=\left( \begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right) \qquad \mathrm{yh} :=\left( \begin{array}{c}{0} \ {0} \ {1} \ {0}\end{array}\right) \nonumber$
$\mathrm{xv} :=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right) \qquad \mathrm{yv} :=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
The beam splitters and mirrors operate on the photon's direction of propagation but not on its polarization. The identity (do-nothing) operator acts on the second degree of freedom, polarization.
Beam splitter:
$\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{ll}{1} & {\mathrm{i}} \ {\mathrm{i}} & {1}\end{array}\right) \nonumber$
Mirror:
$\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \nonumber$
Identity:
$\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Kronecker, Mathcad's command for tensor multiplication of matrices, ensures that the beam splitters and mirrors operate on the direction of motion, and that the photon polarization evolves unchanged. See the Appendix for the details of the tensor product of two matrices.
$\mathrm{BS} :=\text { kronecker }\left[\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{ll}{1} & {\mathrm{i}} \ {\mathrm{i}} & {1}\end{array}\right), \left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right)\right] \nonumber$
$\text{M} :=\text { kronecker } \left[\left( \begin{array}{cc}{0} & {1} \ {1} & {0}\end{array}\right), \left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \right] \nonumber$
The probability that the photon will be moving in the x-direction after the second BS and detected at Dx is 1. The probability it will be registered at Dy is 0.
$\left(|\mathrm{xd}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{xd}|\right)^{2}=1 \nonumber$
$\left(|\mathrm{yd}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{xd}|\right)^{2}=0 \nonumber$
Photon path information can be added by replacing the first BS with a polarizing beam splitter (PBS), which transmits horizontal polarization and reflects vertical polarization. There is no phase shift between transmission and reflection in a PBS.
The matrix representing a PBS can be constructed by considering the fate of the individual photon states encountering a PBS. The individual terms are read from right to left. For example, |yv> <xv| means xv --> yv.
$\widehat{P B S}=| x h \rangle\langle x h|+| y v\rangle\langle x v|+| y h\rangle\langle y h|+| x v\rangle\langle y v | \nonumber$
$\mathrm{PBS} :=\mathrm{xh} \cdot \mathrm{xh}^{\mathrm{T}}+\mathrm{yv} \cdot \mathrm{xv}^{\mathrm{T}}+\mathrm{yh} \cdot \mathrm{yh}^{\mathrm{T}}+\mathrm{xv} \cdot \mathrm{yv}^{\mathrm{T}}=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {0} & {0} & {1} \ {0} & {0} & {1} & {0} \ {0} & {1} & {0} & {0}\end{array}\right) \nonumber$
After the PBS the diagonally polarized photon is in a superposition of being transmitted with horizontal polarization and reflected with vertical polarization. In other words the two photon paths have been labeled with orthogonal polarization states; path information is available.
$\operatorname{PBS} \cdot \mathrm{x} \mathrm{d}=\left( \begin{array}{c}{0.707} \ {0} \ {0} \ {0.707}\end{array}\right) \nonumber$
$\frac{1}{\sqrt{2}} \cdot(\mathrm{xh}+\mathrm{yv})=\left( \begin{array}{c}{0.707} \ {0} \ {0} \ {0.707}\end{array}\right) \nonumber$
The following calculations show that this path information destroys the interference observed in the original MZI. Now both detectors fire with equal probability as shown in the diagram above. Half the time the photon is detected at Dx with either horizontal or vertical polarization, and half the time the photon is detected at Dy with horizontal or vertical polarization. This occurs because the path tags are orthogonal polarization states.
$\left(|\mathrm{xv}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xd}|\right)^{2}=0.25 \qquad \left(|\mathrm{xh}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xd}|\right)^{2}=0.25 \nonumber$
$\left(|\mathrm{yv}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xd}|\right)^{2}=0.25 \qquad \left(|\mathrm{yh}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xd}|\right)^{2}=0.25 \nonumber$
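The matrix mechanics above can be reproduced with NumPy's kron function. The sketch below (an optional cross-check with illustrative names) rebuilds the beam splitter, mirror and PBS, and confirms both the interference in the plain interferometer and the 0.25 probabilities once path information is present.

```python
import numpy as np

s2 = np.sqrt(2)
BS = np.kron(np.array([[1, 1j], [1j, 1]]) / s2, np.eye(2))   # beam splitter (identity on polarization)
M = np.kron(np.array([[0, 1], [1, 0]]), np.eye(2))           # mirror
# Polarizing beam splitter in the (xh, xv, yh, yv) basis used above
PBS = np.array([[1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0],
                [0, 1, 0, 0]], dtype=complex)

xd = np.array([1, 1, 0, 0], dtype=complex) / s2   # x-moving, diagonal polarization
yd = np.array([0, 0, 1, 1], dtype=complex) / s2
xh = np.array([1, 0, 0, 0], dtype=complex)
xv = np.array([0, 1, 0, 0], dtype=complex)
yh = np.array([0, 0, 1, 0], dtype=complex)
yv = np.array([0, 0, 0, 1], dtype=complex)

def prob(final, chain, initial):
    return abs(np.vdot(final, chain @ initial))**2

MZ = BS @ M @ BS       # ordinary equal-arm interferometer
MZp = BS @ M @ PBS     # first beam splitter replaced by a PBS

print(prob(xd, MZ, xd))   # 1.0 -> all photons reach Dx
print(prob(yd, MZ, xd))   # 0.0 at Dy

for out in (xh, xv, yh, yv):
    print(prob(out, MZp, xd))   # 0.25 each: path information destroys the interference
```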
An algebraic representation of the steps from source to detectors gives the same numbers when the magnitudes of the probability amplitudes are squared to give the probabilities for each possible measurement outcome at Dx and Dy.
$| x d \rangle \stackrel{P B S}{\longrightarrow} \frac{1}{\sqrt{2}}[ | x h\rangle+| y v \rangle ] \xrightarrow[B S]{M} \frac{1}{2}[i | x h\rangle+| x v \rangle+| y h\rangle+ i | y v \rangle ] \nonumber$
If the PBS is illuminated with horizontally or vertically polarized photons, there is only one path to the beam splitter in front of the detectors. Again both detectors fire with equal probability because there is no opportunity for path interference.
Horizontally polarized source:
$\left(|\mathrm{xh}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xh}|\right)^{2}=0.5 \qquad \left(|\mathrm{yh}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xh}|\right)^{2}=0.5 \nonumber$
$| x h \rangle \stackrel{P B S}{\longrightarrow} | x h\rangle \xrightarrow[B S]{M} \frac{1}{\sqrt{2}}[i | x h\rangle+| y h \rangle ] \nonumber$
Vertically polarized source:
$\left(|\mathrm{xv}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xv}|\right)^{2}=0.5 \qquad \left(|\mathrm{yv}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{PBS} \cdot \mathrm{xv}|\right)^{2}=0.5 \nonumber$
$| x v \rangle \stackrel{P B S}{\longrightarrow} | y v \rangle \xrightarrow[B S]{M} \frac{1}{\sqrt{2}}[ | x v\rangle+ i | y v \rangle ] \nonumber$
Appendix
Tensor products for the various propagation/polarization states are formed as follows:
$| x d \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right)=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1} \ {0} \ {0}\end{array}\right) \qquad | y d \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \ {1}\end{array}\right)=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{0} \ {0} \ {1} \ {1}\end{array}\right) \nonumber$
$| x h \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{1} \ {0} \ {0} \ {0}\end{array}\right) \qquad | x v \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right) \nonumber$
$| y h \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{1} \ {0}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {1} \ {0}\end{array}\right) \qquad | y v \rangle=\left( \begin{array}{l}{0} \ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \ {1}\end{array}\right)=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
An example of the tensor product of two matrices:
$\left( \begin{array}{ll}{a} & {b} \ {c} & {d}\end{array}\right) \otimes \left( \begin{array}{ll}{w} & {x} \ {y} & {z}\end{array}\right) =\left( \begin{array}{ll}{a \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} & {b \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)} \ {c \left( \begin{array}{ll}{w} & {x} \ {y} & {z}\end{array}\right)} & {d \left( \begin{array}{cc}{w} & {x} \ {y} & {z}\end{array}\right)}\end{array}\right)=\left( \begin{array}{llll}{a w} & {a x} & {b w} & {b x} \ {a y} & {a z} & {b y} & {b z} \ {c w} & {c x} & {d w} & {d x} \ {c y} & {c z} & {d y} & {d z}\end{array}\right) \nonumber$
1.40: Another Look at the Consequences of Path Information in a Mach-Zehnder Interferometer
This tutorial deals with the effect of path information and its so-called erasure in a Mach-Zehnder interferometer (MZI). A related analysis involving the double-slit experiment is available with the title "Which Path Information and the Quantum Eraser."
The source emits vertically polarized photons in the x-direction illuminating a 50/50 beam splitter (BS) which splits the beam into a superposition of propagation in the x- and y-directions. The reflected beam collects a 90 degree ($\frac{\pi}{2}$, i) phase shift relative to the transmitted beam. A polarization rotator rotates the polarization in the lower arm to horizontal. Mirrors redirect the two beams to a second 50/50 BS. In the absence of the polarization rotator the photons are always registered at Dx and never at Dy. In its presence the detectors both fire 50% of the time. The algebraic analyses below the figure show the evolution of the photons through the interferometer in the absence and presence of the polarization rotator. Note that the orthogonal v/h polarization tags have become orthogonal d/a tags. The significance of this will become clear in the matrix mechanics analysis that is provided below.
Photon behavior in the absence of the polarization rotator in the lower arm of the interferometer.
$| x v \rangle \stackrel{B S}{\longrightarrow}\frac{1}{\sqrt{2}}( | x v\rangle+ i | y v \rangle ) \stackrel{B S}{\longrightarrow}\frac{1}{2}(i | x v\rangle+| y v \rangle+i | x v \rangle-| y v \rangle )=i | x v \rangle \nonumber$
Photon behavior in the presence of the polarization rotator in the lower arm of the interferometer.
$| x v \rangle \stackrel{B S}{\longrightarrow} \frac{1}{\sqrt{2}}( | x v\rangle+ i | y v \rangle ) \xrightarrow[M]{y/h} \frac{1}{\sqrt{2}}( | y v\rangle+ i | x h \rangle ) \stackrel{B S}{\longrightarrow} \frac{1}{2}(i | x v\rangle+| y v \rangle+i | x h \rangle-| y h \rangle )=\frac{1}{\sqrt{2}}(i | x d\rangle+| y a \rangle ) \nonumber$
The following equations were used to reach the final state:
$| d \rangle=\frac{1}{\sqrt{2}}( | h\rangle+| v \rangle ) \qquad | a \rangle=\frac{1}{\sqrt{2}}( | h\rangle-| v \rangle ) \nonumber$
A matrix mechanics analysis uses vectors to represent states and matrices to represent the optical elements that they encounter in the MZI.
Photon moving horizontally: $\mathrm{x} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$ Photon moving vertically: $\mathrm{y} :=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$ Vertical polarization: $\mathrm{v} :=\left( \begin{array}{l}{1} \ {0}\end{array}\right)$ Horizontal polarization: $\mathrm{h}=\left( \begin{array}{l}{0} \ {1}\end{array}\right)$
Photon direction of propagation and polarization states:
$\mathrm{xv} :=\left( \begin{array}{c}{1} \ {0} \ {0} \ {0}\end{array}\right) \quad \mathrm{xh} :=\left( \begin{array}{l}{0} \ {1} \ {0} \ {0}\end{array}\right) \quad \mathrm{yv} :=\left( \begin{array}{c}{0} \ {0} \ {1} \ {0}\end{array}\right) \quad \mathrm{yh} :=\left( \begin{array}{l}{0} \ {0} \ {0} \ {1}\end{array}\right) \nonumber$
Projection operators for the x- and y-detectors:
$\mathrm{D} \mathrm{x} :=\mathrm{x} \cdot \mathrm{x}^{\mathrm{T}}=\left( \begin{array}{cc}{1} & {0} \ {0} & {0}\end{array}\right) \qquad \mathrm{Dy} :=\mathrm{y} \cdot \mathrm{y}^{\mathrm{T}}=\left( \begin{array}{ll}{0} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Beam splitter:
$\mathrm{BS} :=\frac{1}{\sqrt{2}} \left( \begin{array}{ll}{1} & {\mathrm{i}} \ {\mathrm{i}} & {1}\end{array}\right) \nonumber$
Mirror:
$M :=\left( \begin{array}{ll}{0} & {1} \ {1} & {0}\end{array}\right) \nonumber$
Identity:
$\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \ {0} & {1}\end{array}\right) \nonumber$
Polarization rotator for the lower arm of the interferometer:
$\mathrm{R}(\varphi) :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {-1} & {0} & {0} \ {0} & {0} & {\cos (2 \cdot \varphi)} & {\sin (2 \cdot \varphi)} \ {0} & {0} & {\sin (2 \cdot \varphi)} & {-\cos (2 \cdot \varphi)}\end{array}\right) \nonumber$
Diagonal and anti-diagonal projection operators:
$\operatorname{Pd} :=\frac{1}{2} \cdot \left( \begin{array}{cccc}{1} & {1} & {0} & {0} \ {1} & {1} & {0} & {0} \ {0} & {0} & {1} & {1} \ {0} & {0} & {1} & {1}\end{array}\right) \nonumber$
$\mathrm{Pa} :=\frac{1}{2} \cdot \left( \begin{array}{cccc}{1} & {-1} & {0} & {0} \ {-1} & {1} & {0} & {0} \ {0} & {0} & {1} & {-1} \ {0} & {0} & {-1} & {1}\end{array}\right) \nonumber$
Vertical and horizontal projection operators:
$\mathrm{Pv} :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {1} & {0} \ {0} & {0} & {0} & {0}\end{array}\right) \nonumber$
$\mathrm{Ph} :=\left( \begin{array}{llll}{0} & {0} & {0} & {0} \ {0} & {1} & {0} & {0} \ {0} & {0} & {0} & {0} \ {0} & {0} & {0} & {1}\end{array}\right) \nonumber$
MZI with lower arm polarization rotator:
$\mathrm{MZ} (\varphi) : = \text{kronecker} (\text{BS, I}) \cdot \mathrm{R} (\varphi) \cdot \text{kronecker} (\text{M, I}) \cdot \text{kronecker} (\text{BS, I}) \nonumber$
Without polarization rotation in the lower arm of the interferometer all photons are detected at Dx. See the algebraic analysis.
$(|\mathrm{kronecker}(\mathrm{Dx}, \mathrm{I}) \cdot \mathrm{MZ}(0) \cdot \mathrm{xv}|)^{2}=1 \qquad (|\mathrm{kronecker}(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{MZ}(0) \cdot \mathrm{xv}|)^{2}=0 \nonumber$
Adding orthogonal path information by rotating the polarization in the lower arm to horizontal destroys the interference effect at the second beam splitter causing both detectors to register photons. See the algebraic analysis.
$\left(|\mathrm{kronecker}(\mathrm{Dx}, \mathrm{I}) \cdot \mathrm{M} Z\left(\frac{\pi}{4}\right) \cdot \mathrm{xv}|\right)^{2}=0.5 \qquad \left( | \text { kronecker }(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv} |\right)^{2}=0.5 \nonumber$
Placing vertical or horizontal polarizers in front of both detectors reduces the count rates by half. See the algebraic analysis.
$\left(|\mathrm{kronecker}(\mathrm{Dx}, \mathrm{I}) \cdot \mathrm{Pv} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv}|\right)^{2}=0.25 \qquad \left(|\mathrm{kronecker}(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{Pv} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv}|\right)^{2}=0.25 \nonumber$
$\left(|\mathrm{kronecker}(\mathrm{Dx}, \mathrm{I}) \cdot \mathrm{Ph} \cdot \mathrm{M} \mathrm{Z}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv}|\right)^{2}=0.25 \qquad \left( | \text { kronecker }(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{Ph} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv} |\right)^{2}=0.25 \nonumber$
A diagonal polarizer placed in front of both detectors passes the x-direction photons and absorbs the y-direction photons. See the algebraic analysis.
$\left(|\mathrm{kronecker}(\mathrm{Dx}, \mathrm{I}) \cdot \mathrm{Pd} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv}|\right)^{2}=0.5 \qquad \left(|\mathrm{kronecker}(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{Pd} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv}|\right)^{2}=0 \nonumber$
An anti-diagonal polarizer placed in front of the detectors passes the y-direction photons and absorbs the x-direction photons. See the algebraic analysis.
$\left( | \text { kronecker }(\mathrm{D} \mathrm{x}, \mathrm{I}) \cdot \mathrm{Pa} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv} |\right)^{2}=0 \qquad \left( | \text { kronecker }(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{Pa} \cdot \mathrm{MZ}\left(\frac{\pi}{4}\right) \cdot \mathrm{xv} |\right)^{2}=0.5 \nonumber$
These calculations demonstrate that the lower arm polarization rotator does its job.
$\mathrm{R}(0) \cdot \mathrm{kronecker}(\mathrm{BS}, \mathrm{I}) \cdot \mathrm{xv}=\left( \begin{array}{c}{0.707} \ {0} \ {0.707 \mathrm{i}} \ {0}\end{array}\right) \frac{\mathrm{xv}+\mathrm{i} \cdot \mathrm{yv}}{\sqrt{2}}=\left( \begin{array}{c}{0.707} \ {0} \ {0.707 \mathrm{i}} \ {0}\end{array}\right) \nonumber$
$\mathrm{R}\left(\frac{\pi}{4}\right) \cdot \mathrm{kronecker}(\mathrm{BS}, \mathrm{I}) \cdot \mathrm{xv}=\left( \begin{array}{c}{0.707} \ {0} \ {0} \ {0.707 \mathrm{i}}\end{array}\right) \frac{\mathrm{xv}+\mathrm{i} \cdot \mathrm{yh}}{\sqrt{2}}=\left( \begin{array}{c}{0.707} \ {0} \ {0} \ {0.707 \mathrm{i}}\end{array}\right) \nonumber$
The following calculations show the behavior of the detectors as a function of the polarization rotator angle.
Detection at Dx:
$\mathrm{X}(\theta) :=(|\mathrm{kronecker}(\mathrm{Dx}, \mathrm{I}) \cdot \mathrm{MZ}(\theta) \cdot \mathrm{xv}|)^{2} \nonumber$
Detection at Dy:
$\mathrm{Y}(\theta) :=(|\mathrm{kronecker}(\mathrm{Dy}, \mathrm{I}) \cdot \mathrm{MZ}(\theta) \cdot \mathrm{xv}|)^{2} \nonumber$
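A NumPy version of this worksheet (optional, with illustrative names) reproduces the detector probabilities with and without the rotator and shows the effect of the diagonal eraser polarizer.

```python
import numpy as np

I2 = np.eye(2)
BS = np.kron(np.array([[1, 1j], [1j, 1]]) / np.sqrt(2), I2)
M = np.kron(np.array([[0, 1], [1, 0]]), I2)
Dx = np.kron(np.diag([1, 0]), I2)   # projector onto x-moving photons
Dy = np.kron(np.diag([0, 1]), I2)   # projector onto y-moving photons

def R(phi):
    """Lower-arm polarization rotator in the (xv, xh, yv, yh) basis used above."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[1, 0, 0, 0],
                     [0, -1, 0, 0],
                     [0, 0, c, s],
                     [0, 0, s, -c]], dtype=complex)

def MZ(phi):
    return BS @ R(phi) @ M @ BS

xv = np.array([1, 0, 0, 0], dtype=complex)

def prob(proj, phi, extra=np.eye(4)):
    return np.linalg.norm(proj @ extra @ MZ(phi) @ xv)**2

print(prob(Dx, 0), prob(Dy, 0))               # 1.0  0.0  no rotation: all photons at Dx
print(prob(Dx, np.pi/4), prob(Dy, np.pi/4))   # 0.5  0.5  which-way tag destroys interference

# Diagonal "eraser" polarizer in front of the detectors restores one-sided output
Pd = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], dtype=complex)
print(prob(Dx, np.pi/4, Pd), prob(Dy, np.pi/4, Pd))   # 0.5  0.0
```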
1.41: The Double-Slit Experiment with Polarized Light
Fresnel and Arago ʺusing an apparatus based on Youngʹs [double‐slit] experimentʺ observed that ʺtwo beams polarized in mutually perpendicular planes never yield fringes.ʺ The purpose of this tutorial is to examine this phenomenon from a quantum mechanical perspective.
A schematic diagram of the double‐slit experiment with polarizers behind the slits is shown below. V and $\Theta$ stand for a vertical polarizer and a polarizer oriented at an angle $\theta$ relative to the vertical, respectively.
Assuming infinitesimally thin slits, the photon wave function is a superposition of being at slit 1 with vertical polarization and slit 2 with polarization at an angle $\theta$ relative to the vertical.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle | \mathrm{V} \rangle+| x_{2} \rangle | \Theta \rangle ] \nonumber$
The vertical and $\theta$ polarization states are represented by the following vectors.
$| \mathrm{V} \rangle=\left( \begin{array}{l}{1} \ {0}\end{array}\right) \text { and } | \Theta \rangle=\left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right) \nonumber$
The diffraction pattern is the Fourier transform of the state above into the momentum representation.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}[\langle p | x_{1}\rangle | \mathrm{V}\rangle+\langle p | x_{2}\rangle | \Theta \rangle ] \nonumber$
This calculation is implemented in Mathcad for slits of finite width as follows.
Slit positions:
$\mathrm{x}_{1} :=1 \qquad \mathrm{x}_{2} :=2 \nonumber$
Slit width:
$\delta :=0.2 \nonumber$
$\Psi(p, \theta):= \frac{\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}}\cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{dx} \cdot \left( \begin{array}{l}{1} \ {0}\end{array}\right)+\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x})\cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \cdot \left( \begin{array}{l}{\cos (\theta)} \ {\sin (\theta)}\end{array}\right)}{\sqrt{2}} \nonumber$
To confirm the assertion of Fresnel and Arago the momentum distributions for three angles of the polarizer at the right slit are calculated.
We see that when the polarizers are oriented at the same angle, the diffraction pattern is the usual one for the Young double‐slit experiment. When the polarizers are crossed the fringes, as Fresnel and Arago assert, disappear. Finally, when the relative angle of the two polarizers is 45 degrees, we see a reduced interference pattern.
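The same three cases can be generated with the NumPy sketch below (an optional supplement; the closed-form slit integral and the variable names are mine).

```python
import numpy as np

x1, x2, delta = 1.0, 2.0, 0.2
p = np.linspace(-30, 30, 601)

def slit_amplitude(p, x0, delta):
    # Closed form of the slit integral: exp(-i*p*x0)*sqrt(delta/(2*pi))*sinc(p*delta/(2*pi))
    return np.exp(-1j * p * x0) * np.sqrt(delta / (2 * np.pi)) * np.sinc(p * delta / (2 * np.pi))

def pattern(theta):
    """Momentum distribution when the right-hand polarizer is rotated by theta from vertical."""
    a1 = slit_amplitude(p, x1, delta)                 # slit 1, vertical polarizer
    a2 = slit_amplitude(p, x2, delta)                 # slit 2, theta polarizer
    psi = np.stack([a1 + a2 * np.cos(theta),          # vertical polarization component
                    a2 * np.sin(theta)], axis=1)      # horizontal polarization component
    return (np.abs(psi)**2).sum(axis=1) / 2

parallel = pattern(0)           # full fringes
crossed = pattern(np.pi / 2)    # fringes vanish (Fresnel-Arago)
partial = pattern(np.pi / 4)    # reduced fringe visibility
```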
1.42: The Quantum Eraser
Paul Kwiat and an undergraduate research assistant published ʺA Do‐It‐Yourself Quantum Eraserʺ in the May 2007 issue of Scientific American. The purpose of this tutorial is to show the quantum math behind the laser demonstrations illustrated in this article.
The quantum mechanics behind the quantum eraser is very similar to that previously used to analyze the double‐slit experiment with polarized light. By way of review we recall that the interference pattern produced in the traditional double‐slit experiment is actually the momentum distribution created by the double‐slit geometry. In other words, the interference pattern is the Fourier transform of the spatial geometry of the slits.
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle+| x_{2} \rangle ] \nonumber$
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)+\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{2}\right)\right] \nonumber$
For slits of width $\delta$ positioned as indicated below we have,
Position of first slit: $x_{1} :=0$ Position of second slit: $\mathrm{x}_{2} :=1$ Slit width: $\delta :=0.2$
$\Psi (p):= \frac{\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{dx}+\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} d x}{\sqrt{2}} \nonumber$
yielding the following diffraction pattern.
To gain ʺwhich‐wayʺ information a vertical polarizer is placed behind the first slit and a horizontal polarizer behind the second slit. Essentially two state preparation measurements have been made, yielding the following entangled superposition state. (Path and polarization information have been entangled; the path part and polarization part cannot be factored into a product of terms.)
$| \Psi \rangle=\frac{1}{\sqrt{2}}[ |x_{1}\rangle | \mathrm{V} \rangle+| x_{2} \rangle | \mathrm{H} \rangle ] \nonumber$
Next come the measurements ‐ polarization state, followed by momentum distribution (the spatial distribution of photon arrivals at the detection screen). The state preparation and measurement apparatus is shown schematically below.
To calculate the results of these sequential measurements $| \Psi>$ is projected onto $<\theta, \mathrm{p} |$, yielding for slits of width $\delta$ the following measurement state.
$\Psi(p, \theta):=\frac{\int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} d x \cdot \cos (\theta)+\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} x \cdot \sin (\theta)}{\sqrt{2}} \nonumber$
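A NumPy sketch of this measurement state (optional, with illustrative names) shows the fringes disappearing for $\theta$ = 0 or $\frac{\pi}{2}$ and reappearing for $\theta = \pm\frac{\pi}{4}$.

```python
import numpy as np

x1, x2, delta = 0.0, 1.0, 0.2
p = np.linspace(-30, 30, 601)

def slit_amplitude(p, x0, delta):
    # exp(-i*p*x0)*sqrt(delta/(2*pi))*sinc(p*delta/(2*pi)) is the slit integral in closed form
    return np.exp(-1j * p * x0) * np.sqrt(delta / (2 * np.pi)) * np.sinc(p * delta / (2 * np.pi))

def eraser_pattern(theta):
    """Detection pattern behind a third polarizer oriented at angle theta."""
    psi = (slit_amplitude(p, x1, delta) * np.cos(theta)
           + slit_amplitude(p, x2, delta) * np.sin(theta)) / np.sqrt(2)
    return np.abs(psi)**2

no_fringes_v = eraser_pattern(0)            # vertical analyzer: only slit 1 light passes
no_fringes_h = eraser_pattern(np.pi / 2)    # horizontal analyzer: only slit 2 light passes
fringes_d = eraser_pattern(np.pi / 4)       # diagonal analyzer erases the which-way tag
fringes_a = eraser_pattern(-np.pi / 4)      # anti-diagonal analyzer: complementary fringes
```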
Kwiat and Hillmer create a double‐slit effect by illuminating a thin wire with a narrow laser beam which is displayed on a distant screen. A brief discussion of their demonstrations in light of the mathematics presented here follows.
• Observing the double‐slit interference effect. The quantum mechanics is outlined at the beginning of this tutorial.
• Labeling the photon path with crossed polarizers (figure above without the third polarizer in front of the detection screen). The quantum math for this demonstration can be found in the previous tutorial, ʺThe Double‐Slit Experiment with Polarized Light.ʺ No interference fringes are predicted and none are observed. This effect was first observed by Fresnel and Arago in the first part of the 19th Century. It appears that this was the first example of the importance of path information in interference phenomena.
• Same as the previous demonstration, but with a third polarizer positioned in front of the detection screen. If the third polarizer is vertically ($\theta$ = 0) or horizontally ($\theta = \frac{\pi}{2}$) oriented no interference fringes are observed. The first two traces in the figure below show that no fringes are predicted.
• Same as the previous demonstration, except that the third polarizer is diagonally oriented ($\theta = \frac{\pi}{4}$) and anti‐diagonally oriented ($\theta = - \frac{\pi}{4}$). Now fringes are observed and predicted, as can be seen in the third and fourth traces in the figure.
Regarding the last demonstration, the reason for the restoration of the interference fringes is that the ʺwhich‐wayʺ information provided by the crossed polarizers has been lost or erased. The vertically and horizontally polarized photons from slits 1 and 2 both have a 50% chance of passing the diagonally or anti‐diagonally oriented third polarizer. Thus, knowledge of the origin of a photon emerging from the third polarizer has been destroyed.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.41%3A_The_DoubleSlit_Experiment_with_Polarized_Light.txt
Paul Kwiat and Rachel Hillmer, an undergraduate research assistant, published ʺA Do‐It‐Yourself Quantum Eraserʺ based on the double‐slit experiment in the May 2007 issue of Scientific American. The purpose of this tutorial is to show the quantum math behind the laser demonstrations illustrated in this article.
Hillmer and Kwiat created the double-slit effect by illuminating a thin wire with a laser beam. They carried out a number of demonstrations with laser light and polarizing films using an experimental setup that is, in effect, the one shown schematically below.
Assuming (initially) infinitesimally thin slits, the photon wave function at the slit screen is an entangled superposition of being at slit 1 with vertical polarization and slit 2 with polarization at an angle $\theta$ relative to the vertical. This entanglement provides which‐way information if $\theta$ is not equal to 0 and, therefore, has an important effect on the diffraction pattern.
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle|v\rangle+\left|x_{2}\right\rangle|\theta\rangle\right] \nonumber$
This state is projected onto $\phi$ and p because a $\phi$‐oriented polarizer (eraser) precedes the detection screen and because a diffraction pattern is actually the momentum distribution of the scattered photons. In other words, position is measured at the slit screen and momentum is measured at the detection screen.
$\langle p \phi | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle\langle\phi | v\rangle+\left\langle p | x_{2}\right\rangle\langle\phi | \theta\rangle\right] \nonumber$
The polarization brackets (amplitudes) are easily shown to be the trigonometric functions shown below.
$\langle p \phi | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle \cos (\phi)+\left\langle p | x_{2}\right\rangle \cos (\theta-\phi)\right] \nonumber$
The position‐momentum brackets are the position eigenstates in the momentum representation and are given by,
$\langle p | x\rangle=\frac{1}{\sqrt{2 \pi \hbar}} \exp \left(-\frac{i p x}{\hbar}\right) \nonumber$
This allows us to write,
$\langle p \phi | \Psi\rangle=\frac{1}{\sqrt{2} }\left[ \frac{1}{\sqrt{2 \pi \hbar}} \exp \left(-\frac{i p x_{1}}{\hbar}\right) \cos (\phi)+\frac{1}{\sqrt{2 \pi \hbar}} \exp \left(-\frac{i p x_{2}}{\hbar}\right) \cos (\theta-\phi) \right] \nonumber$
Working in atomic units (h = 2$\pi$) and now assuming slits of finite width this expression becomes,
Slit positions:
$\mathrm{x}_{1} :=1 \qquad \mathrm{x}_{2} :=2 \nonumber$
Slit width:
$\delta :=0.2 \nonumber$
$\Psi(\mathrm{p}, \theta, \phi):= \frac{\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} x \cdot \cos (\phi) +\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} x \cdot \cos (\theta-\phi)}{\sqrt{2}} \nonumber$
The square of the absolute magnitude of this function yields a representation of the diffraction pattern as a histogram of photon arrivals on the detection screen. The results shown in the figure will be discussed below.
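Before turning to the individual cases, the transmitted fractions quoted in the discussion below can be checked with a short numerical sketch (Python/NumPy, not part of the original Mathcad worksheet; the slit integral is replaced by its closed-form sinc amplitude, and the helper names are ours). The printed values are approximately 1.00, 0.50, 0.50, 0.50, 0.50; the small deficits come from truncating the sinc tails.

```python
import numpy as np

x1, x2, delta = 1.0, 2.0, 0.2          # slit positions and width defined above

def slit_amp(p, x0):
    # closed form of the finite-slit integral: sqrt(delta/(2*pi)) * exp(-i*p*x0) * sin(p*d/2)/(p*d/2)
    return np.sqrt(delta / (2*np.pi)) * np.exp(-1j*p*x0) * np.sinc(p*delta/(2*np.pi))

def psi(p, theta, phi):
    # slit 1 is vertically polarized (amplitude cos(phi) through the eraser),
    # slit 2 is polarized at theta (amplitude cos(theta - phi))
    return (slit_amp(p, x1) * np.cos(phi)
            + slit_amp(p, x2) * np.cos(theta - phi)) / np.sqrt(2)

p = np.linspace(-2000, 2000, 1000001)
dp = p[1] - p[0]
for theta, phi in [(0, 0), (np.pi/2, 0), (np.pi/2, np.pi/2),
                   (np.pi/2, np.pi/4), (np.pi/2, -np.pi/4)]:
    frac = np.sum(np.abs(psi(p, theta, phi))**2) * dp
    print(f"theta = {theta:5.2f}, phi = {phi:5.2f}:  transmitted fraction ~ {frac:.2f}")
```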
Discussion of Results
The polarizer at slit 1 is always oriented vertically so only the orientations ($\theta$ and $\phi$) of the other polarizers need to be specified.
[$\theta$ = 0; $\phi$ = 0]
The photons emerging from the slits are vertically polarized and encounter a vertical polarizer before the detection screen. This is the reference experiment and yields the traditional diffraction pattern, as shown by the plot of $(|\Psi(\mathrm{p}, 0,0)|)^{2}$. There is no which‐way information in this experiment and 100% of the photons emerging from the vertically polarized slit screen reach the detection screen.
$\int_{-\infty}^{\infty}(|\Psi(\mathrm{p}, 0,0)|)^{2} \mathrm{dp} \text { float }, 3 \rightarrow 1.00 \nonumber$
[$\theta = \frac{\pi}{2}$, $\phi$ = 0] and [$\theta = \frac{\pi}{2}, \phi= \frac{\pi}{2}$]
The crossed polarizers at the slit screen provide which‐way information and the interference fringes disappear if the third polarizer is vertically or horizontally oriented. This is shown by the plots of $\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, 0\right) |\right)^{2}$ and $\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, \frac{\pi}{2}\right)|\right)^{2}$. Furthermore, relative to the reference experiment, 50% of the photons reach the detection screen.
$\int_{-\infty}^{\infty}\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, 0\right)|\right)^{2} \mathrm{dp} \text { float }, 3 \rightarrow 0.500 \nonumber$
$\int_{-\infty}^{\infty}\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, \frac{\pi}{2}\right)|\right)^{2} \mathrm{dp} \text { float }, 3 \rightarrow 0.500 \nonumber$
In the absence of the third polarizer there are also no interference fringes, but 100% of the photons reach the detection screen.
[$\theta = \frac{\pi}{2}, \phi = \frac{\pi}{4}$] and [$\theta = \frac{\pi}{2}, \phi= - \frac{\pi}{4}$]
The which‐way information provided by the crossed polarizers at the slit screen is erased by diagonally and anti‐diagonally oriented polarizers in front of the detection screen. This is shown by the plots of $\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, \frac{\pi}{4}\right) |\right)^{2}$ and $\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, -\frac{\pi}{4}\right)|\right)^{2}$. The reason the which‐way information has been erased is that vertically and horizontally polarized photons emerging from slits 1 and 2 both have a 50% chance of passing the diagonally or anti‐diagonally oriented third polarizer. Thus, it is impossible to determine the origin of a photon that passes the third polarizer and the interference fringes are restored. Again, for this experiment 50% of the photons reach the detection screen.
$\int_{-\infty}^{\infty}\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, \frac{\pi}{4}\right)|\right)^{2} \mathrm{dp} \text { float }, 3 \rightarrow 0.500 \nonumber$
$\int_{-\infty}^{\infty}\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2},-\frac{\pi}{4}\right)|\right)^{2} \mathrm{dp} \text { float }, 3 \rightarrow 0 .500 \nonumber$
The shift in the interference fringes calculated for $\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, \frac{\pi}{4}\right) |\right)^{2}$ and $\left(|\Psi\left(\mathrm{p}, \frac{\pi}{2}, -\frac{\pi}{4}\right)|\right)^{2}$ is observed in the Kwiat/Hillmer experiment.
The visibility of the restored fringes is maximized for $\phi = \pm \frac{\pi}{4}$. As the figure below shows, the visibility is reduced for other values of $\phi$.
It is possible to animate the rotation of the polarizer in front of the detection screen, the eraser. From Tools select Animation and use the following setting: From: 0 To: 120 At: 5 Frames/Sec.
Explicit Vector Approach
In what follows an explicit vector approach to the analysis above is provided.
$\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle|v\rangle+\left|x_{2}\right\rangle|\theta\rangle\right]=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle \left( \begin{array}{l}{1} \\ {0}\end{array}\right)+\left|x_{2}\right\rangle \left( \begin{array}{c}{\cos (\theta)} \\ {\sin (\theta)}\end{array}\right)\right] \xrightarrow{\langle p |} \frac{1}{\sqrt{2}} \left( \begin{array}{c}{\left\langle p | x_{1}\right\rangle+\cos (\theta)\left\langle p | x_{2}\right\rangle} \\ {\sin (\theta)\left\langle p | x_{2}\right\rangle}\end{array}\right) \nonumber$
Assuming finite slit widths, the <p | x> amplitudes become integrals as outlined above. The appropriate Mathcad expression and its graphical display is shown below.
$\Psi(\mathrm{p}, \theta) :=\frac{1}{2 \cdot \sqrt{\pi}} \begin{pmatrix} \int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x+\cos (\theta) \cdot \int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \\ \sin (\theta) \cdot \int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \end{pmatrix} \nonumber$
Next $\Psi(\mathrm{p}, \theta)$ is projected onto the eraser polarizer oriented at an angle $\phi$ and the probability distributions for several combinations of $\theta$ and $\phi$ are displayed.
$\Psi(p, \theta, \phi)=\begin{pmatrix}\cos (\phi) & \sin (\phi)\end{pmatrix} \frac{1}{\sqrt{2}} \left( \begin{array}{c}{\left\langle p | x_{1}\right\rangle+\cos (\theta)\left\langle p | x_{2}\right\rangle} \\ {\sin (\theta)\left\langle p | x_{2}\right\rangle}\end{array}\right) \nonumber$
$\Psi(\mathrm{p}, \theta, \phi) :=\begin{pmatrix}\cos (\phi) & \sin (\phi)\end{pmatrix} \cdot \Psi(\mathrm{p}, \theta) \nonumber$
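A short sketch (Python/NumPy; the numerical values and helper names are the hypothetical ones used above, not the worksheet's) confirming that this vector formulation reproduces the scalar expression $\Psi(p, \theta, \phi)$ used earlier:

```python
import numpy as np

x1, x2, delta = 1.0, 2.0, 0.2          # values used in this tutorial

def slit_amp(p, x0):                    # closed-form finite-slit momentum amplitude
    return np.sqrt(delta / (2*np.pi)) * np.exp(-1j*p*x0) * np.sinc(p*delta/(2*np.pi))

def psi_vec(p, theta):                  # two-component (polarization) momentum amplitude
    return np.array([slit_amp(p, x1) + np.cos(theta) * slit_amp(p, x2),
                     np.sin(theta) * slit_amp(p, x2)]) / np.sqrt(2)

def psi_scalar(p, theta, phi):          # the earlier scalar expression Psi(p, theta, phi)
    return (np.cos(phi) * slit_amp(p, x1)
            + np.cos(theta - phi) * slit_amp(p, x2)) / np.sqrt(2)

p = np.linspace(-50, 50, 2001)
theta, phi = np.pi/2, np.pi/4
eraser = np.array([np.cos(phi), np.sin(phi)])      # <phi| as a row vector
print(np.allclose(eraser @ psi_vec(p, theta),      # project the vector form onto the eraser...
                  psi_scalar(p, theta, phi)))      # ...and compare with the scalar form: True
```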
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.43%3A_Which_Way_Did_It_Go__The_Quantum_Eraser.txt
This tutorial examines the real reason which‐way information destroys the double‐slit diffraction pattern and how the so‐called ʺquantum eraserʺ restores it. The traditional double‐slit experiment is presented schematically below.
The wave function for a photon illuminating the slit screen is written as a superposition of the photon being present at both slits simultaneously.
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle+\left|x_{2}\right\rangle\right] \nonumber$
The diffraction pattern is calculated by projecting this superposition into momentum space. This is a Fourier transform for which the mathematical details can be found in the Appendix.
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right] \nonumber$
The well‐known double‐slit diffraction pattern is displayed below.
When polarization markers are attached to the slits we have the following schematic of the double‐slit experiment with so‐called which‐way information.
According to the Encyclopedia Britannica, Fresnel and Arago, "using an apparatus based on Young's [double-slit] experiment," observed that "two beams polarized in mutually perpendicular planes never yield fringes." We now look at the quantum mechanical explanation of this phenomenon. Fresnel and Arago, working during the 19th Century, provided a valid classical explanation.
The coordinate and momentum wave functions now become,
$|\Psi^{\prime}\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle|\mathrm{V}\rangle+\left|x_{2}\right\rangle|\mathrm{H}\rangle\right] \quad \text { where } \quad |\mathrm{V}\rangle=\left( \begin{array}{l}{1} \\ {0}\end{array}\right) \quad |\mathrm{H}\rangle=\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
$\Psi^{\prime}(p)=\left\langle p | \Psi^{\prime}\right\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle|\mathrm{V}\rangle+\left\langle p | x_{2}\right\rangle|\mathrm{H}\rangle\right] \nonumber$
This leads to the following momentum distribution at the detection screen. The highly visible interference fringes have disappeared leaving a single‐slit diffraction pattern, but the areas of the two histograms are the same. This is demonstrated later.
The usual explanation for this effect is that it is now possible to know which slit the photons went through, and that such knowledge destroys the interference fringes because the photons are no longer in a superposition of passing through both slits, but rather a mixture of passing through one slit or the other.
However, a more reasonable explanation is that the tags are orthogonal polarization states, and because of this the interference (cross) terms in the momentum distribution, $\left|\Psi^{\prime}(\mathrm{p})\right|^{2}$, vanish leaving a pattern at the detection screen which is the sum of two single‐slit diffraction patterns, one from the upper slit and the other from the lower slit.
That this is a reasonable analysis is confirmed when the so-called quantum eraser, a diagonal polarizer, is placed before the detection screen as diagrammed below.
Projection of $\Psi^{'}$ (p) onto $\langle D |$ accounts for the action of the diagonal polarizer, yielding the following wave function after the diagonal polarizer.
$|\Psi^{\prime \prime}\rangle=\left\langle D | \Psi^{\prime}\right\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle\langle D | V\rangle+\left|x_{2}\right\rangle\langle D | H\rangle\right]=\frac{1}{2}\left[\left|x_{1}\right\rangle+\left|x_{2}\right\rangle\right] \quad \text{where} \quad |D\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \\ {1}\end{array}\right) \nonumber$
The Fourier transform of $| \Psi^{\prime \prime}\rangle$ yields the momentum wave function and ultimately the momentum distribution function which is the diffraction pattern.
$\Psi^{\prime \prime}(p)=\left\langle p | \Psi^{''}\right\rangle=\frac{1}{2}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right] \nonumber$
The diagonal polarizer is called a quantum eraser because it appears to erase the which-path information provided by the H/V polarizers. However, it is clear from this analysis that the diagonal polarizer doesn't actually erase; it passes the diagonal component of $|\Psi^{\prime}\rangle$, which then shows an attenuated (by half) version of the original diffraction pattern produced by $|\Psi\rangle$. If which-path erasure were occurring, the last integral below would equal 1.0.
$\int_{-\infty}^{\infty}(|\Psi(\mathrm{p})|)^{2} \mathrm{dp} \text { float }, 2 \rightarrow 1.0 \nonumber$
$\int_{-\infty}^{\infty}\left(|\Psi^{\prime}(\mathrm{p})|\right)^{2} \mathrm{dp} \text { float }, 2 \rightarrow 1.0 \nonumber$
$\int_{-\infty}^{\infty}\left(|\Psi^{\prime \prime}(\mathrm{p})|\right)^{2} \mathrm{dp} \text { float }, 2 \rightarrow 0.50 \nonumber$
Appendix
For infinitesimally thin slits the momentum‐space wave function is,
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)+\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{2}\right)\right] \nonumber$
Assuming a slit width $\delta$ the calculations of $\Psi$(p), $\Psi^{ʹ}$(p) and $\Psi^{ʹʹ}$(p) are carried out as follows:
Position of first slit: $\mathbf{x}_{1} \equiv 0$ Position of second slit: $\mathrm{x}_{2} \equiv 1$ Slit width: $\delta \equiv 0.2$
$\Psi(p) \equiv \frac{\int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x+\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x}{\sqrt{2}} \nonumber$
$\Psi^{'}(p)\equiv \frac{\int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} +\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix}}{\sqrt{2}} \nonumber$
$\Psi^{''}(p)\equiv \frac{\frac{1}{\sqrt{2}} \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix}^{T} \cdot \left[ \int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} +\int_{x_{2}-\frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right]}{\sqrt{2}} \nonumber$
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.44%3A_Which_Path_Information_and_the_Quantum_Eraser.txt
Slit positions, slit width and the wavefunction at the slit screen which is a superposition of the photon being simultaneously present at all three slits.
$\mathrm{x}_{1} :=-\frac{1}{2} \quad \mathrm{x}_{2} :=0 \quad \mathrm{x}_{3} :=\frac{1}{2} \quad \delta :=0.1 \nonumber$
$|\Psi\rangle=\frac{1}{\sqrt{3}}\left[\left|x_{1}\right\rangle+\left|x_{2}\right\rangle+\left|x_{3}\right\rangle\right] \nonumber$
Calculate the diffraction pattern by a Fourier transform of the spatial wavefunction into momentum space.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{3}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle+\left\langle p | x_{3}\right\rangle\right] \nonumber$
$\Psi(\mathrm{p}) := \int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} +\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{dx} +\int_{\mathrm{x}_{3}-\frac{\delta}{2}}^{\mathrm{x}_{3}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{dx} \nonumber$
Display the momentum distribution function which is the diffraction pattern.
Tag the slits with orthogonal states.
$\left\langle p | \Psi^{\prime}\right\rangle=\frac{1}{\sqrt{3}}\left[\left\langle p | x_{1}\right\rangle|\uparrow\rangle+\left\langle p | x_{2}\right\rangle|\rightarrow\rangle+\left\langle p | x_{3}\right\rangle|\downarrow\rangle\right] \nonumber$
Recalculate the momentum distribution.
$\Psi^{\prime}(\mathrm{p}) :=\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} x \cdot \left( \begin{array}{l}{1} \\ {0} \\ {0}\end{array}\right)+\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} x \cdot \left( \begin{array}{l}{0} \\ {1} \\ {0}\end{array}\right)+\int_{\mathrm{x}_{3}-\frac{\delta}{2}}^{\mathrm{x}_{3}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} x \cdot \left( \begin{array}{l}{0} \\ {0} \\ {1}\end{array}\right) \nonumber$
Display the momentum distribution at the detection screen showing that the diffraction pattern has disappeared. The orthogonality of the tags destroys the cross terms in the momentum distribution, $\left|\Psi^{\prime}(\mathrm{p})\right|^{2}$, which give rise to the interference effects shown in the original diffraction pattern.
Insert an "eraser" after the slit screen and before the detection screen.
$\Psi^{\prime \prime}(\mathrm{p}) :=\frac{1}{\sqrt{3}} \cdot \left( \begin{array}{l}{1} \\ {1} \\ {1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
The diffraction pattern is restored but attenuated because the so-called "eraser" filters out the orthogonal tags, restoring the interference terms.
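This can be checked with a small numerical sketch (Python/NumPy, not part of the original worksheet). The normalizing factors of $1/\sqrt{3}$, which the Mathcad expressions above leave implicit, are included here so the printed areas can be read as fractions of the incident beam; the helper names are ours.

```python
import numpy as np

x_slits, delta = np.array([-0.5, 0.0, 0.5]), 0.1       # values defined above

def slit_amp(p, x0):                                    # closed-form finite-slit amplitude
    return np.sqrt(delta / (2*np.pi)) * np.exp(-1j*p*x0) * np.sinc(p*delta/(2*np.pi))

p = np.linspace(-4000, 4000, 1000001)
dp = p[1] - p[0]
amps = np.array([slit_amp(p, x0) for x0 in x_slits])    # shape (3, len(p))

psi        = amps.sum(axis=0) / np.sqrt(3)              # untagged superposition: fringes
psi_tagged = amps / np.sqrt(3)                          # orthogonal tags: one vector component per slit
psi_erased = np.full(3, 1/np.sqrt(3)) @ psi_tagged      # after the (1,1,1)/sqrt(3) "eraser"

print(np.sum(np.abs(psi)**2) * dp)                      # ~1.0
print(np.sum(np.abs(psi_tagged)**2) * dp)               # ~1.0 (no cross terms, so no fringes)
print(np.sum(np.abs(psi_erased)**2) * dp)               # ~1/3: fringes restored but attenuated
```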
1.46: Which Path Information and the Quantum Eraser (Brief)
This tutorial examines the real reason which‐path information destroys the double‐slit diffraction pattern and how the so‐called ʺquantum eraserʺ restores it. The wave function for a photon illuminating the slit screen is written as a superposition of the photon being present at both slits simultaneously. The double‐slit diffraction pattern is calculated by projecting this superposition into momentum space. This is a Fourier transform for which the mathematical details can be found in the Appendix.
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle+\left|x_{2}\right\rangle\right] \qquad \Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right] \nonumber$
Attaching polarizers to the slits creates an entangled superposition of the photon being at slit 1 with vertical polarization and at slit 2 with horizontal polarization. This leads to the following momentum distribution at the detection screen. The interference fringes have disappeared leaving a single‐slit diffraction pattern.
$|\Psi^{\prime}\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle|V\rangle+\left|x_{2}\right\rangle|H\rangle\right] \qquad \Psi^{\prime}(p)=\left\langle p | \Psi^{\prime}\right\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle|V\rangle+\left\langle p | x_{2}\right\rangle|H\rangle\right] \nonumber$
The usual explanation for this effect is that it is now possible to know which slit the photons went through, and that such knowledge destroys the interference fringes because the photons are no longer in a superposition of passing through both slits, but rather a mixture of passing through one slit or the other.
However, a better explanation is that the superposition persists with orthogonal polarization tags, and because of this the interference (cross) terms in the momentum distribution, $\left|\Psi^{\prime}(p)\right|^{2}$, vanish leaving a pattern at the detection screen which is the sum of two single‐slit diffraction patterns, one from the upper slit and the other from the lower slit.
That this is a reasonable interpretation is confirmed when a so‐called quantum eraser, a polarizer (D) rotated clockwise by 45 degrees relative to the vertical, is placed before the detection screen.
$\Psi^{\prime \prime}(p)=\left\langle D | \Psi^{\prime}(p)\right\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle\langle D | V\rangle+\left\langle p | x_{2}\right\rangle\langle D | H\rangle\right]=\frac{1}{2}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right] \nonumber$
The diagonal polarizer is called a quantum eraser because it appears to restore the interference pattern lost because of the which‐path information provided by the V/H polarizers. However, it is clear from this analysis that the diagonal polarizer doesnʹt actually erase, it simply passes the diagonal component of $| \Psi^{'} \rangle$ which then shows an attenuated (by half) version of the original interference pattern produced by $| \Psi \rangle$.
Placing an anti‐diagonal polarizer (rotated counterclockwise by 45 degrees relative to the vertical) before the detection screen causes a 180 degree phase shift in the restored interference pattern.
$\Psi^{\prime \prime}(p)=\left\langle A | \Psi^{\prime}(p)\right\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle\langle A | V\rangle+\left\langle p | x_{2}\right\rangle\langle A | H\rangle\right]=\frac{1}{2}\left[\left\langle p | x_{1}\right\rangle-\left\langle p | x_{2}\right\rangle\right] \nonumber$
This phase shift is inconsistent with any straightforward explanation based on the concept of erasure of which‐path information. Erasure implies removal of which‐path information. If which‐path information has been removed shouldnʹt the original interference pattern be restored without a phase shift?
Appendix
The V/H polarization which‐path tags and the D/A polarization ʺerasersʺ in vector format:
$|\mathrm{V}\rangle=\left( \begin{array}{l}{1} \\ {0}\end{array}\right) \quad |\mathrm{H}\rangle=\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \quad |\mathrm{D}\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{l}{1} \\ {1}\end{array}\right) \quad |\mathrm{A}\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \\ {-1}\end{array}\right) \qquad \langle\mathrm{D} | \mathrm{V}\rangle=\langle\mathrm{D} | \mathrm{H}\rangle=\langle\mathrm{A} | \mathrm{V}\rangle=\frac{1}{\sqrt{2}} \qquad \langle\mathrm{A} | \mathrm{H}\rangle=-\frac{1}{\sqrt{2}} \nonumber$
For infinitesimally thin slits the momentum‐space wave function is,
$\Psi(p)=\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)+\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{2}\right)\right] \nonumber$
Assuming a slit width $\delta$ the calculations of $\Psi$(p), $\Psi^{ʹ}$(p), $\Psi^{'ʹ}$(p) and $\Psi^{''ʹ}$(p) are carried out as follows:
Position of first slit: $\mathrm{x}_{1} \equiv 0$ Position of second slit: $\mathrm{x}_{2} \equiv 1$ Slit width: $\delta \equiv 0.2$
$\Psi(\mathrm{p})\equiv\frac{1}{\sqrt{2}}\cdot\left(\int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{dx} +\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right) \nonumber$
For $\Psi^{ʹ}$(p) the V/H polarization which‐path tags are added to the two terms of $\Psi$(p)
$\Psi^{\prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot \left[ \int_{x_{1}-\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \cdot \left( \begin{array}{l}{1} \\ {0}\end{array}\right)+\int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \cdot \left( \begin{array}{l}{0} \\ {1}\end{array}\right) \right] \nonumber$
$\Psi^{ʹʹ}$(p) is the projection of $\Psi^{ʹ}$(p) onto a diagonal polarizer $\langle D |$.
$\Psi^{\prime \prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{l}{1} \\ {1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
$\Psi'''$(p) is the projection of $\Psi'$(p) onto an anti-diagonal polarizer $\langle A |$.
$\Psi^{\prime \prime \prime}(\mathrm{p}) \equiv \frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \\ {-1}\end{array}\right)^{\mathrm{T}} \cdot \Psi^{\prime}(\mathrm{p}) \nonumber$
Rewriting $\Psi^{ʹ}$(p) in terms of $|D\rangle$ and $|A\rangle$ clearly shows the origin of the phase difference between the $\left(|\Psi^{\prime \prime}(\mathrm{p})|\right)^{2}$ and $\left(|\Psi^{\prime \prime \prime}(\mathrm{p})|\right)^{2}$ interference patterns.
$\Psi^{\prime}(p)=\left\langle p | \Psi^{\prime}\right\rangle=\frac{1}{2}\left[\left(\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right)|D\rangle+\left(\left\langle p | x_{1}\right\rangle-\left\langle p | x_{2}\right\rangle\right)|A\rangle\right] \nonumber$
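A brief numerical check of this decomposition (a Python/NumPy sketch using the slit values from this Appendix; the slit integrals are replaced by their closed-form sinc amplitudes, and the variable names are ours): the D- and A-projected patterns are displaced by half a fringe and add up to the fringe-free pattern of the V/H-tagged state.

```python
import numpy as np

x1, x2, delta = 0.0, 1.0, 0.2          # slit values from this Appendix

def slit_amp(p, x0):
    # closed form of the finite-slit momentum amplitude used throughout these tutorials
    return np.sqrt(delta / (2*np.pi)) * np.exp(-1j*p*x0) * np.sinc(p*delta/(2*np.pi))

p = np.linspace(-20, 20, 4001)
a1, a2 = slit_amp(p, x1), slit_amp(p, x2)

rho_D = np.abs(0.5 * (a1 + a2))**2                 # |Psi''(p)|^2, diagonal eraser
rho_A = np.abs(0.5 * (a1 - a2))**2                 # |Psi'''(p)|^2, anti-diagonal eraser
rho_hv = 0.5 * (np.abs(a1)**2 + np.abs(a2)**2)     # fringe-free pattern of the V/H-tagged state

print(np.allclose(rho_D + rho_A, rho_hv))          # True: the two shifted patterns add up
i0  = np.argmin(np.abs(p - 0.0))                   # p = 0: bright fringe for the D eraser
ipi = np.argmin(np.abs(p - np.pi))                 # half a fringe later: bright for the A eraser
print(rho_D[i0] > rho_A[i0], rho_A[ipi] > rho_D[ipi])    # True True
```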
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.45%3A_Terse_Analysis_of_Triple-slit_Diffraction_with_a_Quantum_Eraser.txt
1.48: Which-way Markers and Postselection in the Double-slit Experiment
This tutorial examines the real reason which‐way information destroys the double‐slit diffraction pattern and how the so‐called ʺquantum eraserʺ restores it. The double‐slit experiment is presented schematically below.
The wave function for a photon illuminating the slit screen is written as a superposition of the photon being present at both slits simultaneously.
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle+\left|x_{2}\right\rangle\right] \nonumber$
Assuming initially infinitesimally thin slits, the diffraction pattern is calculated by projecting this superposition into momentum space.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{1}\right)+\frac{1}{\sqrt{2 \pi}} \exp \left(-i p x_{2}\right)\right] \nonumber$
Assuming slits of finite width, $\delta$, positioned as indicated below, the momentum wave function becomes,
Position of first slit: $x_{1} :=0$ Position of second slit: $x_{2} :=1$ Slit width: $\delta :=0.2$
$\Psi (p) : = \frac{\int_{x_{1} -\frac{\delta}{2}}^{x_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x+\int_{x_{2}- \frac{\delta}{2}}^{x_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-i \cdot p \cdot x) \cdot \frac{1}{\sqrt{\delta}} d x}{\sqrt{2}} \nonumber$
The double‐slit diffraction pattern is the momentum distribution function.
When polarization markers are attached to the slits we have the following schematic of the double‐slit experiment with which‐way information.
According to the Encyclopedia Britannica, Fresnel and Arago, "using an apparatus based on Young's [double-slit] experiment," observed that "two beams polarized in mutually perpendicular planes never yield fringes." We now look at the quantum mechanical explanation of this phenomenon. Fresnel and Arago, working during the 19th Century, provided a valid classical explanation.
The coordinate and momentum wave functions now become,
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle|\mathrm{V}\rangle+\left|x_{2}\right\rangle|\mathrm{H}\rangle\right] \quad \text{where} \; |\mathrm{V}\rangle=\left(\begin{array}{l}{1} \\ {0}\end{array}\right) \qquad|\mathrm{H}\rangle=\left(\begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
$\Psi_{\mathrm{hv}}(\mathrm{p}) :=\frac{1}{\sqrt{2}} \cdot \int_{\mathrm{x}_{1}-\frac{\delta}{2}}^{\mathrm{x}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x} \cdot\left(\begin{array}{l}{1} \\ {0}\end{array}\right) +\frac{1}{\sqrt{2}} \cdot \int_{\mathrm{x}_{2}-\frac{\delta}{2}}^{\mathrm{x}_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}}\cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\cdot\left(\begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
This leads to the following momentum distributions at the detection screen.
The usual explanation for this effect is that it is now possible to know which slit the photons went through, and that such knowledge destroys the interference fringes because the photons are no longer in a superposition of passing through both slits, but rather a mixture of passing through each slit half the time.
However, a more reasonable explanation is that the tags are orthogonal polarization states, and because of this the interference (cross) terms in the momentum distribution, $\left|\Psi_{\mathrm{hv}}(\mathrm{p})\right|^{2}$, vanish, leaving a pattern at the detection screen which is the sum of two single‐slit diffraction patterns, one from the upper slit and the other from the lower slit.
That this is a reasonable analysis is confirmed when the so‐called quantum eraser, a $\theta$ polarizer, is placed before the detection screen as diagramed below.
The presence of the $\theta$ polarizer is represented mathematically by: $\langle\theta|=(\cos (\theta) \quad \sin (\theta))$
Projection of $\Psi_{hv}$(p) onto $\langle \theta |$ accounts for the action of the $\theta$ polarizer, yielding the momentum wave function after the polarizer.
$\Psi^{\prime}(\mathrm{p}, \theta)=\left(\begin{array}{c}{\cos (\theta)} \\ {\sin (\theta)}\end{array}\right)^{\mathrm{T}} \cdot \Psi_{\mathrm{hv}}(\mathrm{p}) \nonumber$
If the $\theta$ polarizer is oriented at any angle other than 0 or multiples of $\frac{\pi}{2}$, the interference fringes reappear to some degree. As an example, the diffraction patterns observed for angles of $\pm \frac{\pi}{4}$ are shown along with the results from the previous graphs.
A common explanation for the reappearance of the interference fringes is that the $\frac{\pi}{4}$ polarizers have erased the which‐way information. A less esoteric explanation is achieved by recognizing that |H > and |V > are even superpositions of |D > (D = diagonal = $\frac{\pi}{4}$) and |A > (A = anti‐diagonal = $-\frac{\pi}{4}$), and that the probability that |H > and |V > photons will pass a diagonal polarizer is 0.5. It also easily explains the phase shift that is observed with the anti‐diagonal ($-\frac{\pi}{4}$) ʺeraser.ʺ
$|\mathrm{V}\rangle=\frac{1}{\sqrt{2}}[|\mathrm{D}\rangle+|\mathrm{A}\rangle]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2}}\left(\begin{array}{l}{1} \\ {1}\end{array}\right)+\frac{1}{\sqrt{2}}\left(\begin{array}{c}{1} \\ {-1}\end{array}\right)\right] \nonumber$
$|\mathrm{H}\rangle=\frac{1}{\sqrt{2}}[|\mathrm{D}\rangle-|\mathrm{A}\rangle]=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2}}\left(\begin{array}{l}{1} \\ {1}\end{array}\right)-\frac{1}{\sqrt{2}}\left(\begin{array}{c}{1} \\ {-1}\end{array}\right)\right] \nonumber$
This is an example of post‐selection. After passing the slit screen with its polarization markers (state preparation), but before the detection screen (measurement), a subset of photon states is selected by the orientation of the $\theta$ polarizer, say |D> or |A>. After these polarizers the photons are in one of the following polarized superpositions.
$|\Psi\rangle=\frac{1}{2}\left[\left|x_{1}\right\rangle|\mathrm{D}\rangle+\left|x_{2}\right\rangle|\mathrm{D}\rangle\right] \text { or }|\Psi\rangle=\frac{1}{2}\left[\left|x_{1}\right\rangle|\mathrm{A}\rangle-\left|x_{2}\right\rangle|\mathrm{A}\rangle\right] \nonumber$
Projecting these states into momentum space assuming finite slit widths yields the reduced diffraction patterns shown in the figure above. The following probability calculations support the arguments presented here.
$\int_{-\infty}^{\infty}(|\Psi(\mathrm{p})|)^{2} \mathrm{dp} \text { float, } 2 \rightarrow 1.0 \nonumber$
$\int_{-\infty}^{\infty}\left(\left|\Psi_{\mathrm{hv}}(\mathrm{p})\right|\right)^{2} \mathrm{dp} \text { float, } 2 \rightarrow 1.0 \nonumber$
$\int_{-\infty}^{\infty}\left(\left|\Psi^{\prime}\left(\mathrm{p}, \frac{\pi}{4}\right)\right|\right)^{2} \text { dp float, } 2 \rightarrow 0.50 \nonumber$
$\int_{-\infty}^{\infty}\left(\left|\Psi^{\prime}\left(\mathrm{p},-\frac{\pi}{4}\right)\right|\right)^{2} \text { dp float, } 2 \rightarrow 0.50 \nonumber$
If which-path erasure were really occurring, the last two integrals would equal 1.0.
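These statements can be reproduced with a short sketch (Python/NumPy; slit values as above, slit integrals replaced by their closed-form sinc amplitudes, helper names ours). It also makes the post-selection point explicit: a $\theta$-oriented eraser passes half of the tagged photons for every $\theta$; only the fringe visibility, approximately $|\sin 2\theta|$, depends on the angle.

```python
import numpy as np

x1, x2, delta = 0.0, 1.0, 0.2

def slit_amp(p, x0):
    return np.sqrt(delta / (2*np.pi)) * np.exp(-1j*p*x0) * np.sinc(p*delta/(2*np.pi))

def psi_theta(p, theta):               # V/H-tagged state projected onto a theta polarizer
    return (np.cos(theta) * slit_amp(p, x1) + np.sin(theta) * slit_amp(p, x2)) / np.sqrt(2)

p = np.linspace(-2000, 2000, 1000001)
dp = p[1] - p[0]
for theta in [0, np.pi/8, np.pi/4, 3*np.pi/8, np.pi/2]:
    frac = np.sum(np.abs(psi_theta(p, theta))**2) * dp            # fraction passing the eraser
    I0, Ipi = np.abs(psi_theta(0.0, theta))**2, np.abs(psi_theta(np.pi, theta))**2
    vis = (I0 - Ipi) / (I0 + Ipi)                                 # crude central-fringe visibility
    print(f"theta = {theta:4.2f}  passed ~ {frac:.2f}  visibility ~ {vis:.2f}"
          f"  (|sin(2*theta)| = {abs(np.sin(2*theta)):.2f})")
```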
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.47%3A_Terse_Analysis_of_Triple-slit_Diffraction_with_a_Quantum_Eraser.txt
This tutorial provides a brief mathematical analysis of a proposed quantum eraser experiment involving spin‐1/2 particles which is available at arXiv:quant‐ph/0501010v2. Please see the two immediately preceding tutorials for another example of the quantum eraser and additional mathematical detail.
The first magnet attaches which‐way information such that the spin‐1/2 particles leaving the double‐slit screen are described by the following entangled wave function,
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|\uparrow_{z}\right\rangle\left|z_{1}\right\rangle+\left|\downarrow_{z}\right\rangle\left|z_{2}\right\rangle\right] \nonumber$
where z1 and z2 represent the positions of the horizontal slits on the z‐axis and the spin eigenstates in the z‐direction are given below.
$\Psi_{\mathrm{zup}} :=\left(\begin{array}{l}{1} \\ {0}\end{array}\right) \qquad \Psi_{\mathrm{zdown}} :=\left(\begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
Recognizing that a diffraction pattern is actually a momentum distribution function, we project $\Psi$ onto momentum space as follows (in atomic units, h = 2$\pi$).
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|\uparrow_{z}\right\rangle\left\langle p | z_{1}\right\rangle+\left|\downarrow_{z}\right\rangle\left\langle p | z_{2}\right\rangle\right]=\frac{1}{\sqrt{2}}\left[\left|\uparrow_{z}\right\rangle \frac{\exp \left(-i p z_{1}\right)}{\sqrt{2 \pi}}+\left|\downarrow_{z}\right\rangle \frac{\exp \left(-i p z_{2}\right)}{\sqrt{2 \pi}}\right] \nonumber$
Here the exponential terms are the position eigenfunctions in momentum space for infinitesimally thin slits located at z1 and z2. For slits of finite width $\langle p | \Psi\rangle$ is written as shown below. Again see the previous tutorials in this series for further mathematical detail. The slit positions and slit width chosen are arbitrary.
Slit positions: $\mathrm{z}_{1} :=1 \qquad \mathrm{z}_{2} :=2$ Slit width: $\delta :=0.2$
$\Psi(\mathrm{p}) :=\frac{1}{\sqrt{2}}\cdot \left(\Psi_{\mathrm{zup}} \cdot \int_{\mathrm{z}_{1}- \frac{\delta}{2}}^{\mathrm{z}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{z} +\Psi_{\text { zdown }}\cdot \int_{z_{2}-\frac{\delta}{2}}^{z_{2}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}}\cdot \exp(-\mathrm{i} \cdot p \cdot z) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} z\right) \nonumber$
Because of the addition of path information there are no interference fringes associated with this two‐slit wave function; the encoded orthogonal z‐direction eigenstates destroy the interference cross terms as shown graphically below.
The second Stern‐Gerlach magnet oriented in the x‐direction, according to the conventional interpretation, ʺerasesʺ the which‐way information. This is shown by projecting the state after the first magnet and the slit screen, $\Psi$(p), onto the x‐direction spin eigenstates.
$\Psi_{\mathrm{xup}}=\frac{1}{\sqrt{2}} \cdot(1 \quad 1) \qquad \Psi_{\mathrm{xdown}}=\frac{1}{\sqrt{2}} \cdot(1\quad -1) \nonumber$
$\Psi_{\text { left }}(\mathrm{p}) :=\Psi_{\mathrm{xup}} \cdot \Psi(\mathrm{p}) \qquad \Psi_{\mathrm{right}}(\mathrm{p}) :=\Psi_{\mathrm{xdown}} \cdot \Psi(\mathrm{p}) \nonumber$
The horizontal blue line marks p = 0 on the z-axis. On the left is the interference pattern of the part of the beam emerging from the x-up magnet direction with spin state $\Psi_{xup}$, and on the right is the interference pattern of the part of the beam emerging from the x-down magnet direction with spin state $\Psi_{xdown}$. As shown in the Summary, $\Psi$(p) can be rewritten in terms of the x-direction spin states, clearly showing the superpositions responsible for the interference fringes on the left and right.
$\langle p | \Psi\rangle=\frac{1}{2}\left[\left|\uparrow_{x}\right\rangle\left(\left\langle p | z_{1}\right\rangle+\left\langle p | z_{2}\right\rangle\right)+\left|\downarrow_{x}\right\rangle\left(\left\langle p | z_{1}\right\rangle-\left\langle p | z_{2}\right\rangle\right)\right] \nonumber$
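The basis change above can be verified directly with a small sketch (Python/NumPy; slit values from this tutorial, closed-form sinc amplitudes in place of the integrals, variable names ours). It also shows that the two exit channels of the x-direction magnet carry complementary fringe patterns that add up to the fringe-free distribution.

```python
import numpy as np

z1, z2, delta = 1.0, 2.0, 0.2          # slit values used above

def slit_amp(p, z0):
    return np.sqrt(delta / (2*np.pi)) * np.exp(-1j*p*z0) * np.sinc(p*delta/(2*np.pi))

zup, zdown = np.array([1.0, 0.0]), np.array([0.0, 1.0])                    # z-basis spinors
xup, xdown = np.array([1.0, 1.0])/np.sqrt(2), np.array([1.0, -1.0])/np.sqrt(2)

p = np.linspace(-20, 20, 4001)
a1, a2 = slit_amp(p, z1), slit_amp(p, z2)

S_z = (np.outer(zup, a1) + np.outer(zdown, a2)) / np.sqrt(2)      # entangled state, z basis
S_x = (np.outer(xup, a1 + a2) + np.outer(xdown, a1 - a2)) / 2     # same state, x basis
print(np.allclose(S_z, S_x))                                      # True: only a basis change

psi_left, psi_right = xup @ S_z, xdown @ S_z                      # the two x-magnet channels
print(np.allclose(np.abs(psi_left)**2 + np.abs(psi_right)**2,
                  0.5 * (np.abs(a1)**2 + np.abs(a2)**2)))         # True: they sum to the fringe-free pattern
```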
In the absence of both Stern‐Gerlach magnets the usual double‐slit interference pattern is observed.
$\Psi(p)=\frac{1}{\sqrt{2}}\cdot\left(\int_{\mathrm{z}_{1}-\frac{\delta}{2}}^{\mathrm{z}_{1}+\frac{\delta}{2}} \frac{1}{\sqrt{2\cdot\pi}}\cdot\exp(-\mathrm{i}\cdot\mathrm{p}\cdot\mathrm{z})\cdot\frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{z}+\int_{z_{2}-\frac{\delta}{2}}^{z_{2}+\frac{\delta}{2}}\frac{1}{\sqrt{2 \cdot \pi}}\cdot\exp(-i \cdot p \cdot z)\cdot\frac{1}{\sqrt{\delta}} d z\right) \nonumber$
Alternative Analysis
It is possible to express the mathematics in an alternative but equivalent form. The first wave function,
$\frac{1}{\sqrt{2}}\left[\left|\uparrow_{z}\right\rangle\left|z_{1}\right\rangle+\left|\downarrow_{z}\right\rangle\left|z_{2}\right\rangle\right] \nonumber$
can be expressed explicitly in vector format in the momentum representation. This analysis will be based on infinitesimally thin slits as introduced earlier.
$\frac{1}{\sqrt{2}}\left[\left(\begin{array}{l}{1} \\ {0}\end{array}\right)\left|z_{1}\right\rangle+\left(\begin{array}{l}{0} \\ {1}\end{array}\right)\left|z_{2}\right\rangle\right]=\frac{1}{\sqrt{2}}\left(\begin{array}{c}{\left|z_{1}\right\rangle} \\ {\left|z_{2}\right\rangle}\end{array}\right) \xrightarrow{\langle p |} \frac{1}{\sqrt{2}}\left(\begin{array}{c}{\left\langle p | z_{1}\right\rangle} \\ {\left\langle p | z_{2}\right\rangle}\end{array}\right) \nonumber$
It is easily shown that this wave function does not lead to interference fringes at the detection screen by calculating the square of its absolute magnitude.
$\frac{1}{2}\left(\left\langle z_{1} | p\right\rangle\quad\left\langle z_{2} | p\right\rangle\right)\left(\begin{array}{c}{\left\langle p | z_{1}\right\rangle} \\ {\left\langle p | z_{2}\right\rangle}\end{array}\right)=\frac{1}{2}\left[\left|\left\langle p | z_{1}\right\rangle\right|^{2}+\left|\left\langle p | z_{2}\right\rangle\right|^{2}\right] \nonumber$
$\Psi(\mathrm{p}) :=\frac{1}{2 \cdot \sqrt{\pi}} \cdot\left(\begin{array}{c}{\exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}_{1}\right)} \\ {\exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}_{2}\right)}\end{array}\right) \nonumber$
However, in the presence of the second Stern‐Gerlach magnet this vector is projected onto the magnet's two output channels.
$\frac{1}{2}\left(\begin{array}{ll}{1} & {1}\end{array}\right)\left(\begin{array}{l}{\left\langle p | z_{1}\right\rangle} \\ {\left\langle p | z_{2}\right\rangle}\end{array}\right)=\frac{1}{2}\left[\left\langle p | z_{1}\right\rangle+\left\langle p | z_{2}\right\rangle\right] \nonumber$
$\frac{1}{2}\left(\begin{array}{ll}{1} & {-1}\end{array}\right)\left(\begin{array}{l}{\left\langle p | z_{1}\right\rangle} \\ {\left\langle p | z_{2}\right\rangle}\end{array}\right)=\frac{1}{2}\left[\left\langle p | z_{1}\right\rangle-\left\langle p | z_{2}\right\rangle\right] \nonumber$
The probability distributions of these states show interference fringes.
$\Psi_{\text { left }}(\mathrm{p}) :=\frac{1}{\sqrt{2}} \cdot(1\quad 1) \cdot \frac{1}{2 \cdot \sqrt{\pi}} \cdot\left(\begin{array}{c}{\exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}_{1}\right)} \\ {\exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}_{2}\right)}\end{array}\right) \nonumber$
$\Psi_{\text { right }}(\mathrm{p}) :=\frac{1}{\sqrt{2}} \cdot(1\quad -1) \cdot \frac{1}{2\cdot\sqrt{\pi}} \cdot\left(\begin{array}{c}{\exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}_{1}\right)} \\ {\exp \left(-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{z}_{2}\right)}\end{array}\right) \nonumber$
Summary
The z‐direction Stern‐Gerlach magnet and the slit screen create the following entangled superposition which does not produce interference fringes due to the orthogonality of the spin states marking the slits.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|\uparrow_{z}\right\rangle\left\langle p | z_{1}\right\rangle+\left|\downarrow_{z}\right\rangle\left\langle p | z_{2}\right\rangle\right] \nonumber$
To understand what happens at the x‐direction magnet this state is rewritten in the x‐direction spin basis.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{2}}\left(\left|\uparrow_{x}\right\rangle+\left|\downarrow_{x}\right\rangle\right)\left\langle p | z_{1}\right\rangle+\frac{1}{\sqrt{2}}\left(\left|\uparrow_{x}\right\rangle-\left|\downarrow_{x}\right\rangle\right)\left\langle p | z_{2}\right\rangle\right] \nonumber$
Collecting terms on the x‐direction spin eigenstates yields,
$\langle p | \Psi\rangle=\frac{1}{2}\left[\left|\uparrow_{x}\right\rangle\left(\left\langle p | z_{1}\right\rangle+\left\langle p | z_{2}\right\rangle\right)+\left|\downarrow_{x}\right\rangle\left(\left\langle p | z_{1}\right\rangle-\left\langle p | z_{2}\right\rangle\right)\right] \nonumber$
The in-phase and out-of-phase superpositions (the two terms grouped in the last equation) exit the magnet in opposite directions. Because of this the superpositions become spatially separated, which leads to two sets of interference fringes with a relative phase shift of 180 degrees (half a fringe) at the detection screen.
Itʹs clear to me that erasure is not a satisfactory explanation for this process. Because fringes appear after the x‐direction magnet it might seem plausible, at first glance, to assume that the which‐way markers have been erased. But actually the x‐direction magnet sorts < p|$\Psi$> into two components in terms of the x‐direction spin eigenstates. Nothing has been erased.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.49%3A_A_SternGerlach_Quantum_Eraser.txt
Richard Feynman raised Young’s double-slit experiment to canonical status by presenting it as the paradigm for all quantum mechanical behavior. In The Character of Physical Law he wrote, “Any other situation in quantum mechanics, it turns out, can always be explained by saying, ‘You remember the case of the experiment with the two holes? It’s the same thing.’”
Following Feynman, teachers of quantum theory use the double-slit experiment to illustrate the superposition principle and its signature effect, quantum interference. A single particle (photon, electron, etc.) arrives at any point on the detection screen by two paths, whose probability amplitudes interfere yielding the characteristic diffraction pattern. This is called single-particle interference.
Single-particle interference in a Mach-Zehnder (MZ) interferometer is a close cousin of the traditional double-slit experiment. Using routine complex number algebra, it can be used to illustrate the same fundamentals as the two-slit experiment and also to introduce students to the field of quantum optics.
This tutorial draws heavily on a recent article in the American Journal of Physics and papers quoted therein [Am. J. Phys. 78(8), 792-795 (2010)]. An equal-arm MZ interferometer is shown below. In this configuration the photon is always detected at Dx. The analysis below provides an explanation why this happens.
A key convention in the analysis of MZ interferometers is that reflection at a beam splitter (BS) is accompanied by a 90 degree phase shift ($\frac{\pi}{2}$, i). The behavior of a photon traveling in the x- or y-direction at the beam splitters and mirrors is as follows.
At the 50-50 beam splitters the following photon superpositions are formed:
$|x\rangle \rightarrow \frac{1}{\sqrt{2}}[|x\rangle+ i|y\rangle] \qquad|y\rangle \rightarrow \frac{1}{\sqrt{2}}[|y\rangle+ i|x\rangle] \nonumber$
At the mirrors the behavior of the photon is as follows:
$|x\rangle \rightarrow|y\rangle \quad|y\rangle \rightarrow|x\rangle \nonumber$
Using the information provided above and complex number algebra, the history of a photon leaving the source (moving in the x-direction) is:
$|S\rangle \stackrel{B S_{1}}{\longrightarrow} \frac{1}{\sqrt{2}}[|x\rangle+ i|y\rangle]\xrightarrow{Mirrors} \frac{1}{\sqrt{2}}[|y\rangle+ i|x\rangle] \stackrel{B S_{2}}{\longrightarrow} i|x\rangle \nonumber$
Thus we see that, indeed, the photon always arrives at Dx in the equal-arm MZ interferometer shown above. The paths to Dx (TR+RT) are in phase and constructively interfere. The paths to Dy (TT+RR) are 180 degrees ($i^{2}=-1$) out of phase and therefore destructively interfere. (T stands for transmitted and R stands for reflected.)
$\text{Probability}\left(D_{x}\right)=|\langle x | S\rangle|^{2}=1 \quad \text { Probability }\left(D_{y}\right)=|\langle y | S\rangle|^{2}=0 \nonumber$
The detection of the photon exclusively at Dx is the equivalent of the appearance of the interference fringes in the double-slit experiment.
Another quantum mechanical point that Feynman made with the double-slit experiment is that if path information (which slit the photon went through) is available (even in principle) the interference fringes disappear. This is also the case with the MZ interferometer.
If after the first beam splitter the photon is observed in path A, we have the following history,
$| S \rangle \xrightarrow{PathA} | x \rangle \xrightarrow{MirrorA} | y \rangle \xrightarrow{BS_{2}} \frac{1}{\sqrt{2}}(i|x\rangle+|y\rangle) \nonumber$
which leads to equal probabilities of detecting the photon at Dx and Dy.
$P\left(D_{x}\right)=|\langle x | S\rangle|^{2}=\Big|\frac{i}{\sqrt{2}}\Big|^{2}=\frac{1}{2} \quad P\left(D_{y}\right)=|\langle y | S\rangle|^{2}=\Big|\frac{1}{\sqrt{2}}\Big|^{2}=\frac{1}{2} \nonumber$
Alternatively, if after the first beam splitter the photon is observed in path B, we have the following history,
$| S \rangle \xrightarrow{PathB} | y \rangle \xrightarrow{MirrorB} | x \rangle \xrightarrow{BS_{2}} \frac{1}{\sqrt{2}}(|x\rangle+i|y\rangle) \nonumber$
which also leads to equal probabilities of detecting the photon at Dx and Dy.
$P\left(D_{x}\right)=|\langle x | S\rangle|^{2}=\Big|\frac{1}{\sqrt{2}}\Big|^{2}=\frac{1}{2} \quad P\left(D_{y}\right)=|\langle y | S\rangle|^{2}=\Big|\frac{i}{\sqrt{2}}\Big|^{2}=\frac{1}{2} \nonumber$
In these cases, where path information is available, the detection of the photon at both detectors in equal percentages is the equivalent of the disappearance of the interference fringes in the double-slit experiment when knowledge of which slit the particle went through is available.
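The whole bookkeeping can be condensed into 2×2 matrices acting on the {|x⟩, |y⟩} basis. The sketch below (Python/NumPy; a minimal illustration, not part of the original tutorial) reproduces both results: certainty at Dx for the closed interferometer, and 50/50 statistics once the path is known.

```python
import numpy as np

# photon propagation states in the {|x>, |y>} basis
x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)

BS = np.array([[1, 1j],               # 50-50 beam splitter; reflection carries a factor i
               [1j, 1]]) / np.sqrt(2)
M = np.array([[0, 1],                 # mirror: x -> y, y -> x
              [1, 0]], dtype=complex)

out = BS @ M @ BS @ x                 # closed (equal-arm) interferometer, photon enters along x
print(np.abs(out)**2)                 # [1. 0.]  -> always detected at Dx

# which-path case: the photon is found in path A right after BS1, so the state
# entering the mirrors is |x> and only the second beam splitter acts afterwards
out_A = BS @ M @ x
print(np.abs(out_A)**2)               # [0.5 0.5] -> the certainty at Dx (the "fringes") is gone
```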
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.50%3A_Using_the_Mach-Zehnder_Interferometer_to_Illustrate_the_Impact_of_Which-way_Information.txt
A Beam Splitter Creates a Quantum Mechanical Superposition
Single photons emitted by a source (S) illuminate a 50-50 beam splitter (BS). Mirrors (M) direct the photons to detectors D1 and D2. The probability amplitudes for transmission and reflection are given below. By convention a 90 degree phase shift (i) is assigned to reflection.
Probability amplitude for photon transmission at a 50-50 beam splitter:
$\langle T | S\rangle=\frac{1}{\sqrt{2}} \nonumber$
Probability amplitude for photon reflection at a 50-50 beam splitter:
$\langle R | S\rangle=\frac{i}{\sqrt{2}} \nonumber$
After the beam splitter the photon is in a superposition state of being transmitted and reflected.
$|S\rangle \rightarrow \frac{1}{\sqrt{2}}[|T\rangle+ i|R\rangle] \nonumber$
As shown in the diagram below, mirrors reflect the transmitted photon path to D2 and the reflected path to D1. The source photon is expressed in the basis of the detectors as follows.
$|S\rangle \rightarrow \frac{1}{\sqrt{2}}[|T\rangle+ i|R\rangle] \xrightarrow[R \rightarrow D_{1}]{T \rightarrow D_{2}} \frac{1}{\sqrt{2}}\left[\left|D_{2}\right\rangle+ i\left|D_{1}\right\rangle\right] \nonumber$
The square of the magnitude of the coefficients of D1 and D2 give the probabilities that the photon will be detected at D1 or D2. Each detector registers photons 50% of the time. In other words, in the quantum view the superposition collapses randomly to one of the two possible measurement outcomes it represents.
The classical view that detection at D1 means the photon was reflected at BS1 and that detection at D2 means it was transmitted at BS1 is not tenable as will be shown using a Mach-Zehnder interferometer which has a second beam splitter at the path intersection before the detectors.
A Second Beam Splitter Provides Two Paths to Each Detector
If a second beam splitter is inserted before the detectors the photons always arrive at D1. In the first experiment there was only one path to each detector. The construction of a Mach-Zehnder interferometer by the insertion of a second beam splitter creates a second path to each detector and the opportunity for constructive and destructive interference on the paths to the detectors.
Given the superposition state after BS1, the probability amplitudes after BS2 interfere constructively at D1 and destructively at D2.
$\begin{matrix} \text{After}\;BS_{1} & \; & \text{After}\;BS_{2} & \; & \text{Final State} \\ \; & \; & |T\rangle \rightarrow \frac{1}{\sqrt{2}}\left[i\left|D_{1}\right\rangle+\left|D_{2}\right\rangle\right] & \; & \; \\ |S\rangle \rightarrow \frac{1}{\sqrt{2}}[|T\rangle+ i|R\rangle] & \rightarrow & + & \rightarrow & i\left|D_{1}\right\rangle \\ \; & \; & i|R\rangle \rightarrow \frac{i}{\sqrt{2}}\left[\left|D_{1}\right\rangle+ i\left|D_{2}\right\rangle\right] & \; & \; \end{matrix} \nonumber$
Adopting the classical view that the photon is either transmitted or reflected at BS1 does not produce this result. If the photon was transmitted at BS1 it would have equal probability of arriving at either detector after BS2. If the photon was reflected at BS1 it would also have equal probability of arriving at either detector after BS2. The predicted experimental results would be the same as those of the single beam splitter experiment. In summary, the quantum view that the photon is in a superposition of being transmitted and reflected after BS1 is consistent with both experimental results described above; the classical view that it is either transmitted or reflected is not.
Some disagree with this analysis saying the two experiments demonstrate the dual, complementary, behavior of photons. In the first experiment particle-like behavior is observed because both detectors register photons indicating the individual photons took one path or the other. The second experiment reveals wave-like behavior because interference occurs - only D1 registers photons. According to this view the experimental design determines whether wave or particle behavior will occur and somehow the photon is aware of how it should behave. Suppose in the second experiment that immediately after the photon has interacted with BS1, BS2 is removed. Does what happens at the detectors require the phenomenon of retrocausality or delayed choice? Only if you reason classically about quantum experiments.
We always measure particles (detectors click, photographic film is darkened, etc.) but we interpret what happened or predict what will happen by assuming wavelike behavior, in this case the superposition created by the initial beam splitter that delocalizes the position of the photon. Quantum particles (quons) exhibit both wave and particle properties in every experiment. To paraphrase Nick Herbert (Quantum Reality), particles are always detected, but the experimental results observed are the result of wavelike behavior. Richard Feynman put it this way (The Character of Physical Law), "I will summarize, then, by saying that electrons arrive in lumps, like particles, but the probability of arrival of these lumps is determined as the intensity of waves would be. It is in this sense that the electron behaves sometimes like a particle and sometimes like a wave. It behaves in two different ways at the same time (in the same experiment)." Bragg said, "Everything in the future is a wave, everything in the past is a particle."
In 1951 in his treatise Quantum Theory, David Bohm described wave-particle duality as follows: "One of the most characteristic features of the quantum theory is the wave-particle duality, i.e. the ability of matter or light quanta to demonstrate the wave-like property of interference, and yet to appear subsequently in the form of localizable particles, even after such interference has taken place." In other words, to explain interference phenomena wave properties must be assigned to matter and light quanta prior to detection as particles.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.51%3A_Quantum_Theory_Wave-Particle_Duality_and_the_Mach-Zehnder_Interferometer.txt
Contemporary chemistry is built on a quantum mechanical foundation, and prominent among the fundamental concepts of quantum theory is the superposition principle. Previously the author and others have illustrated the importance of the superposition principle in understanding chemical and physical phenomena [1-5]. Furthermore, the primary literature is replete with manifestations of the quantum superposition in current research [6-8]. Very recently, for example, Hawking and Hertog [9] have published a theory of the origin of the universe based squarely on the superposition principle.
As Richard Feynman demonstrated, the spatial double-slit experiment is a simple and compelling example of the quantum superposition in action [10-11]. All fundamental quantum mechanical phenomena, according to Feynman, can be illuminated by comparison to the double-slit experiment [11].
Now this paradigmatic experiment has a companion in the temporal domain. A temporal double-slit experiment with attosecond windows was recently reported by an international team led by Gerhard Paulus [12]. This note demonstrates that the quantum mechanics behind this remarkable experiment is analogous to that for the spatial double-slit experiment for photons or massive particles.
The spatial and temporal experiments for electrons, juxtaposed in Figure 1A [13], are analyzed in terms of conjugate observables united by a Fourier transform. For the spatial experiment the observables are position and momentum, while for the temporal version they are time and energy.
In the spatial double-slit experiment illumination of the slit screen with an electron beam places each electron in a superposition of being simultaneously at slits located at x1 and x2.
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|x_{1}\right\rangle+\left|x_{2}\right\rangle\right] \tag{1} \nonumber$
According to quantum mechanical principles spatial localization at two positions leads to delocalization and interference fringes in the electron’s momentum distribution. This can be seen by a Fourier transform of equation (1) into momentum space, initially assuming infinitesimally thin slits.
$\langle p | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle p | x_{1}\right\rangle+\left\langle p | x_{2}\right\rangle\right]=\frac{1}{2 \sqrt{\pi \hbar}}\left[\exp \left(-\frac{i p x_{1}}{\hbar}\right)+\exp \left(-\frac{i p x_{2}}{\hbar}\right)\right] \tag{2} \nonumber$
Clearly the two exponential terms will oscillate in and out of phase for various values of the momentum. (See the Appendix for mathematical background on the Dirac brackets used in this paper.) For finite spatial slits of width δ the momentum wave function is given by equation (3)
$\langle p | \Psi\rangle=\frac{1}{2 \sqrt{\pi \hbar \delta}} \left[ \int_{x_{1} - \frac{\delta}{2}}^{x_{1} + \frac{\delta}{2}} \exp \left(- \frac{ipx}{\hbar}\right) dx + \int_{x_{2} - \frac{\delta}{2}}^{x_{2} + \frac{\delta}{2}} \exp \left(- \frac{ipx}{\hbar}\right) dx \right] \tag{3} \nonumber$
The observed diffraction pattern is actually the electron’s momentum distribution projected onto the detection screen, as is revealed by a graphical representation of $\left|\langle p| \Psi\rangle\right|^{2}$ [14].
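A short numerical sketch (not part of the original note) can generate this momentum distribution from the closed form of equation (3). The slit positions and width below are illustrative values in units with ℏ = 1, not parameters taken from any experiment.

```python
# Hedged sketch: |<p|Psi>|^2 from equation (3) for two finite slits (hbar = 1).
import numpy as np

hbar = 1.0
x1, x2, delta = 0.0, 1.0, 0.2          # assumed slit positions and slit width (illustrative)

def momentum_amplitude(p):
    """Closed form of equation (3): each slit contributes a sinc envelope times a phase."""
    def slit(x0):
        # integral of exp(-i p x / hbar) over [x0 - delta/2, x0 + delta/2]
        return delta * np.sinc(p * delta / (2 * np.pi * hbar)) * np.exp(-1j * p * x0 / hbar)
    return (slit(x1) + slit(x2)) / (2 * np.sqrt(np.pi * hbar * delta))

p = np.linspace(-60, 60, 2401)
pattern = np.abs(momentum_amplitude(p)) ** 2   # the momentum distribution seen at the screen

# cos^2 fringes of spacing 2*pi*hbar/(x2 - x1) sit under the single-slit sinc^2 envelope
print("fringe spacing:", 2 * np.pi * hbar / (x2 - x1))
```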
In the temporal double-slit experiment a very short laser pulse (~5 fs), which consists of two maxima (temporal double-slit) and one minimum (temporal single-slit) in the electric field, is used to ionize individual argon atoms.
$|\Psi\rangle=\frac{1}{\sqrt{2}}\left[\left|t_{1}\right\rangle+\left|t_{2}\right\rangle\right] \tag{4} \nonumber$
The kinetic energy of the ionized electron is measured at the detector, and as Figure 1B shows interference fringes are observed in the kinetic energy distribution. A Fourier transform of equation (4) into the energy domain reveals the origin of the fringes; the probability amplitudes for being ionized with kinetic energy E at the two different times interfere constructively and destructively.
$\langle E | \Psi\rangle=\frac{1}{\sqrt{2}}\left[\left\langle E | t_{1}\right\rangle+\left\langle E | t_{2}\right\rangle\right]=\frac{1}{2 \sqrt{\pi \hbar}}\left[\exp \left(\frac{i E t_{1}}{\hbar}\right)+\exp \left(\frac{i E t_{2}}{\hbar}\right)\right] \tag{5} \nonumber$
For finite windows of time duration $\delta$ equation (5) becomes,
$\langle E | \Psi\rangle=\frac{1}{2 \sqrt{\pi \hbar \delta}} \left[ \int_{t_{1} - \frac{\delta}{2}}^{t_{1} + \frac{\delta}{2}} \exp \left(\frac{iEt}{\hbar}\right) dt + \int_{t_{2} - \frac{\delta}{2}}^{t_{2} + \frac{\delta}{2}} \exp \left(\frac{iEt}{\hbar}\right) dt \right] \tag{6} \nonumber$
Figure 2 shows that a plot of $|\langle E | \Psi\rangle|^{2}$ vs E using equation (6) with estimates for t1, t2 ($\Delta$t) and $\delta$ from reference [12] generates a 15 eV envelope with 11 prominent interference fringes. This calculated result is in reasonable quantitative agreement with the experimental data displayed in Figure 1B.
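For readers who wish to experiment with equation (6), the sketch below evaluates the energy amplitude by direct quadrature. The window centers t1, t2 and width $\delta$ are placeholder values in units with ℏ = 1, not the attosecond-scale parameters of reference [12], so only the qualitative fringe structure should be compared with Figure 2.

```python
# Hedged sketch evaluating equation (6) by quadrature (hbar = 1, placeholder parameters).
import numpy as np
from scipy.integrate import quad

hbar = 1.0
t1, t2, delta = 0.0, 5.0, 0.5          # assumed temporal "slit" centers and window width

def energy_amplitude(E):
    def window(t0):
        # real and imaginary parts of the integral of exp(+i E t / hbar) over one window
        re, _ = quad(lambda t: np.cos(E * t / hbar), t0 - delta / 2, t0 + delta / 2)
        im, _ = quad(lambda t: np.sin(E * t / hbar), t0 - delta / 2, t0 + delta / 2)
        return re + 1j * im
    return (window(t1) + window(t2)) / (2 * np.sqrt(np.pi * hbar * delta))

E = np.linspace(-20, 20, 801)
spectrum = np.array([abs(energy_amplitude(e)) ** 2 for e in E])

# fringes recur with spacing 2*pi*hbar/(t2 - t1) under the sinc envelope set by delta
print("expected fringe spacing:", 2 * np.pi * hbar / (t2 - t1))
```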
The temporal double-slit diffraction pattern reported in reference [12] is a single-electron effect – only one ionized electron is being observed at a time. Likewise the spatial analog has been performed at low source intensity such that there is only one electron in the apparatus at a time [15,16]. The fact that an interference pattern is observed under single-particle conditions leads to terms such as single-particle interference or self-interference. Glauber [17] has argued against such language because it is physically misleading.
The things that interfere in quantum mechanics are not particles. They are probability amplitudes for certain events. It is the fact that probability amplitudes add up like complex numbers that is responsible for all quantum mechanical interferences.
The interference of probability amplitudes that Glauber identifies as the source of all quantum mechanical interference phenomena is clearly revealed in the mathematical analysis provided in this note; it is especially clear in equations (2) and (5).
Appendix
A one-dimensional plane wave traveling in the positive x-direction has the following mathematical form.
$F(x, t)=\exp \left(i \frac{2 \pi x}{\lambda}\right) \exp (-i 2 \pi \nu t) \nonumber$
Substitution of h/p for $\lambda$ (de Broglie) and E/h for $\nu$ (Planck/Einstein) transforms F(x,t) into a quantum mechanical free-particle wave function.
$\Psi(x, t)=\exp \left(\frac{i p x}{\hbar}\right) \exp \left(-\frac{i E t}{\hbar}\right) \nonumber$
Assigning Dirac brackets containing the complementary observable pairs to this equation yields,
$\langle x | p\rangle=\exp \left(\frac{i p x}{\hbar}\right) \quad \text { and } \quad\langle t | E\rangle=\exp \left(-\frac{i E t}{\hbar}\right) \nonumber$
Dirac notation reveals that these equations are Fourier transforms between complementary variables. The complex conjugates of these relations are used in the analysis presented in this paper.
$\langle p | x\rangle=\langle x | p\rangle^{*}=\exp \left(-\frac{i p x}{\hbar}\right) \text { and } \quad\langle E | t\rangle=\langle t | E\rangle^{*}=\exp \left(\frac{i E t}{\hbar}\right) \nonumber$
Additional information on Dirac notation is available online [18].
Acknowledgment
I wish to acknowledge helpful comments by Professor Gerhard Paulus of the Max-Planck-Institut für Quantenoptik, Ludwig-Maximilians-Universität München, and Texas A&M University.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.52%3A_Analysis_of_a_Temporal_Double-slit_Experiment.txt
Quantum mechanics teaches that if there is more than one path to a particular destination interference effects are likely. Using Feynmanʹs ʹsum over historiesʹ approach to quantum mechanics the probability of arrival at a location is the square of the magnitude of the sum of the probability amplitudes for each path to that location. For example, in a triple‐slit diffraction experiment the probability of arriving at x on the detection screen in Dirac notation is,
$P_{123}=|\langle x | S\rangle|^{2}=|\langle x | 3\rangle\langle 3 | S\rangle+\langle x | 2\rangle\langle 2 | S\rangle+\langle x | 1\rangle\langle 1 | S\rangle|^{2} \nonumber$
The single‐photon interferometer shown below [Franson, Science 329, 396 (2010)] is a close cousin of the triple‐slit experiment because it provides three paths to the detector. Initially we will ignore the other two output channels A and B.
Probability Amplitudes
In this analysis the probability amplitudes are calculated using the following conventions.
Assume 50‐50 beam splitters and assign $\frac{\pi}{2}$ (i) phase shift to reflection (usual convention).
Transmission at a beam splitter: $\mathrm{T} :=\frac{1}{\sqrt{2}}$ Reflection at a beam splitter: $\mathrm{R} :=\frac{\mathrm{i}}{\sqrt{2}}$ Reflection at a mirror: $M : = 1$
First, using Feynmanʹs ʹsum over historiesʹ method we calculate the probability that the photon will arrive at the Detector (3 paths):
$(|\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{T} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{T} \cdot \mathrm{M} \cdot \mathrm{R}|)^{2}=0.5625 \nonumber$
To establish that probability is conserved the probabilities for arrival at A and B are calculated.
Calculate the probability that the photon will arrive at A (2 paths):
$(|\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{T}+\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{R}|)^{2}=0.375 \nonumber$
Calculate the probability that the photon will arrive at B (3 paths):
$(|\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{T} \cdot \mathrm{R}+\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{R}+\mathrm{T} \cdot \mathrm{T} \cdot \mathrm{M} \cdot \mathrm{T}|)^{2}=0.0625 \nonumber$
Demonstrate that probability is conserved:
$0.5625+0.375+0.0625=1 \nonumber$
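The same bookkeeping can be written as a few lines of Python (a sketch mirroring the Mathcad expressions above, using the stated conventions for T, R and M):

```python
# Amplitude conventions: 50-50 beam splitters, pi/2 (i) phase shift on reflection.
T = 1 / 2**0.5        # transmission at a beam splitter
R = 1j / 2**0.5       # reflection at a beam splitter
M = 1                 # reflection at a mirror

P_detector = abs(R*M*T*T + T*R*R*T + T*T*M*R)**2   # three paths to the detector
P_A        = abs(T*R*T + R*M*R)**2                 # two paths to output A
P_B        = abs(R*M*T*R + T*R*R*R + T*T*M*T)**2   # three paths to output B

# approximately 0.5625, 0.375, 0.0625, and a total of 1
print(P_detector, P_A, P_B, P_detector + P_A + P_B)
```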
While this traditional analysis postulates three‐path interference for the arrival probability at the detector, Sinha et al. [Science 329, 418 (2010)] argue that true interference only occurs between pairs of paths. In this case between paths 1 & 2, 1 & 3, and 2 & 3. Franson summarized this view as follows: ʺQuantum interference between many different pathways is simply the sum of the effects from all pairs of pathways.ʺ
The probability expression for an event involving two equivalent paths is
$P_{i j}=\left|\Psi_{i}+\Psi_{j}\right|^{2}=\left|\Psi_{i}\right|^{2}+\left|\Psi_{j}\right|^{2}+\Psi_{i}^{*} \Psi_{j}+\Psi_{j}^{*} \Psi_{i}=P_{i}+P_{j}+I_{i j} \nonumber$
where $I_{ij}$ is the interference term and $P_i$ is defined as the probability when only the $i^{\text{th}}$ path is open. It is my opinion that this latter designation is not strictly valid. However, accepting it for the time being leads to the following definition for two‐path interference.
$I_{i j}=P_{i j}-P_{i}-P_{j} \nonumber$
Therefore, the probability for an event involving three equivalent paths to a destination using only two‐path interference is,
$P_{123}=P_{1}+P_{2}+P_{3}+I_{12}+I_{13}+I_{23}=P_{12}+P_{13}+P_{23}-P_{1}-P_{2}-P_{3} \nonumber$
The probabilities in this equation are now calculated.
$\mathrm{P}_{1} \quad(|\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{T} \cdot \mathrm{T}|)^{2}=0.125$
$\mathrm{P}_{2} \quad(|\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{T}|)^{2}=0.0625$
$\mathrm{P}_{3} \quad(|\mathrm{T} \cdot \mathrm{T} \cdot \mathrm{M} \cdot \mathrm{R}|)^{2}=0.125$
$\mathrm{P}_{12} \quad(|\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{T} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{T}|)^{2}=0.1875$
$\mathrm{P}_{13} \quad(|\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{T} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{T} \cdot \mathrm{M} \cdot \mathrm{R}|)^{2}=0.5$
$\mathrm{P}_{23} \quad(|\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{T} \cdot \mathrm{M} \cdot \mathrm{R}|)^{2}=0.1875$
The probability of arriving at a detector which can be reached by three paths, but using only two‐path interference is,
$\mathrm{P}_{12}+\mathrm{P}_{13}+\mathrm{P}_{23}-\mathrm{P}_{1}-\mathrm{P}_{2}-\mathrm{P}_{3}=0.5625 \nonumber$
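This agreement is guaranteed by algebra alone: expanding the three-path modulus squared and noting that each single-path probability appears twice in the pairwise sums gives

$\left|\Psi_{1}+\Psi_{2}+\Psi_{3}\right|^{2}=\sum_{i}\left|\Psi_{i}\right|^{2}+\sum_{i<j}\left(\Psi_{i}^{*} \Psi_{j}+\Psi_{j}^{*} \Psi_{i}\right)=\sum_{i} P_{i}+\sum_{i<j} I_{i j}=\sum_{i<j} P_{i j}-\sum_{i} P_{i} \nonumber$

so the pairwise bookkeeping must reproduce the direct sum-over-paths result.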
Clearly this result does not prove that ʺQuantum interference between many different pathways is simply the sum of the effects from all pairs of pathways.ʺ Why is it necessary to subtract P1, P2 and P3 from the two‐path terms if pairwise interference is all there is?
The previous Feynman calculation produces the same result using a more transparent method; add the probability amplitudes for the three paths to the detector and square the absolute magnitude.
$\left(\left|A_{1}+A_{2}+A_{3}\right|\right)^{2}=0.5625 \nonumber$
$(|\mathrm{R} \cdot \mathrm{M} \cdot \mathrm{T} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{R} \cdot \mathrm{R} \cdot \mathrm{T}+\mathrm{T} \cdot \mathrm{T} \cdot \mathrm{M} \cdot \mathrm{R}|)^{2}=0.5625 \nonumber$
Objection to the Definitions
Sinha et al. write the double‐slit wave function (unnormalized) as a linear superposition of the photon taking two paths (A and B) to the detector. The square modulus of the wave function gives the probability expression.
$|\Psi|^{2}=\left|\psi_{A}\right|^{2}+\left|\psi_{B}\right|^{2}+\psi_{A}^{*} \psi_{B}+\psi_{B}^{*} \psi_{A} \nonumber$
Obviously this is just traditional quantum mechanical mathematical procedure. The problem I have is with the interpretation or partition of this equation that comes next. The authors write the probability expression as
$|\Psi|^{2}=P_{A}+P_{B}+I_{A B} \nonumber$
where PA (PB) is the probability when only slit A (B) is open and the remaining terms represent the actual interference. I do not believe this sort of partitioning of the terms of $|\Psi|^{2}$ is quantum mechanically legitimate. If only slit A is open then the wave function is $\Psi=\Psi_{\mathrm{A}}$ and not $\Psi=\Psi_{\mathrm{A}}+\Psi_{\mathrm{B}}$. It is incorrect to say, in the double‐slit experiment, that $\left|\Psi_{\mathrm{A}}\right|^{2}$ is the probability that the photon goes through slit A.
As Roy Glauber has written [AJP 63, 12 (1995)], ʺThe things that interfere in quantum mechanics ... are the probability amplitudes for certain events.ʺ The triple‐slit experiment involves the interference of three probability amplitudes because there are three paths from the source (S) to each position (x) on the detection screen. According to quantum fundamentals, each photon takes all three paths simultaneously.
$\Psi_{123}=\langle x | S\rangle=\langle x | 3\rangle\langle 3 | S\rangle+\langle x | 2\rangle\langle 2 | S\rangle+\langle x | 1\rangle\langle 1 | S\rangle \nonumber$
In Chapter 3 of Volume III of The Feynman Lectures on Physics, Feynman discusses a diffraction experiment in which a two‐slit screen followed by a three‐slit screen are placed before the detection screen providing six paths to any point on the detection screen. Feynmanʹs analysis leads to the following probability amplitude for the particle to be detected at x.
$\langle x | S\rangle=\langle x | a\rangle\langle a | 1\rangle\langle 1 | S\rangle+\langle x | b\rangle\langle b | 1\rangle\langle 1 | S\rangle+\langle x | c\rangle\langle c | 1\rangle\langle 1 | S\rangle +\langle x | a\rangle\langle a | 2\rangle\langle 2 | S\rangle+\langle x | b\rangle\langle b | 2\rangle\langle 2 | S\rangle+\langle x | c\rangle\langle c | 2\rangle\langle 2 | S\rangle \nonumber$
Or more succinctly (Feynmanʹs equation 3.6 on page 3‐4 of Volume III)
$\langle x | S\rangle=\sum_{i=1,2 \atop \alpha=a, b, c}\langle x | \alpha\rangle\langle\alpha | i\rangle\langle i | S\rangle \nonumber$
In summary, these superpositions of probability amplitudes are simple examples of Feynmanʹs sum‐over‐histories approach to quantum mechanics, of which the double‐slit experiment is the simplest example. It is clear that pairwise interference is not suggested in this approach, except for the double‐slit experiment and then only because in that case there are only two paths. Feynmanʹs rule is: sum the amplitudes for all paths, to allow for interference, before squaring the absolute magnitude of the sum to obtain the probability.
Freeman Dyson had the following reaction when he first heard of Feynmanʹs novel approach to quantum mechanics. ʺThirty‐one years ago, Dick Feynman told me about his ʹsum over historiesʹ version of quantum mechanics. ʹThe electron does anything it likes,ʹ he said. ʹIt just goes in any direction, at any speed, forward or backward in time, however it likes, and then you add up the amplitudes and it gives you the wave function.ʹ I said to him ʹYouʹre crazy.ʹ But he isnʹt.ʺ
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.53%3A_An_Analysis_of_ThreePath_Interference.txt
Quantum mechanics teaches that if there is more than one path to a particular destination interference effects are likely. Using Feynmanʹs ʹsum over historiesʹ approach to quantum mechanics the probability of arrival at a location is the square of the magnitude of the sum of the probability amplitudes for each path to that location. For example, in the triple‐slit diffraction experiment the probability of a photon leaving a source and arriving at x on the detection screen shows three‐path interference.
$P_{123}=|\langle x \mid 1\rangle\langle 1 \mid S\rangle+\langle x \mid 2\rangle\langle 2 \mid S\rangle+\langle x \mid 3\rangle\langle 3 \mid S\rangle|^{2} \label{1}$
However, Sinha et al. [Science 329, 418 (2010)] argue that true interference only occurs between pairs of paths, in this case between paths 1 & 2, 1 & 3, and 2 & 3. Sinhaʹs interpretation of triple‐slit diffraction leads to an expression that is cumbersome compared with the lean equation above.
$P_{123}=|\langle x \mid 1\rangle\langle 1 \mid S\rangle+\langle x \mid 2\rangle\langle 2 \mid S\rangle|^{2}+|\langle x \mid 1\rangle\langle 1 \mid S\rangle+\langle x \mid 3\rangle\langle 3 \mid S\rangle|^{2}+|\langle x \mid 2\rangle\langle 2 \mid S\rangle+\langle x \mid 3\rangle\langle 3 \mid S\rangle|^{2} -|\langle x \mid 1\rangle\langle 1 \mid S\rangle|^{2}-|\langle x \mid 2\rangle\langle 2 \mid S\rangle|^{2}-|\langle x \mid 3\rangle\langle 3 \mid S\rangle|^{2}\label{2}$
It will now be shown that the two expressions, as they must, lead to the same calculated diffraction pattern. My question is whether there is any real physical significance to Sinhaʹs reinterpretation of multi‐path interference in terms of two‐path interference effects.
We begin with the traditional approach of writing the photon wave function as a superposition of being at all three slits and then Fourier transforming it into momentum space to give the diffraction pattern.
Number of slits: $\mathrm{n}:=3$
Slit position: $\mathrm{j}:=1 . . \mathrm{n} \quad \mathrm{x}_{\mathrm{j}}:=\mathrm{j}$
Slit width: $\delta:=.2$
Calculate diffraction pattern: $\Psi(\mathrm{p}):=\left(\left|\sum_{j=1}^{\mathrm{n}} \int_{\mathrm{x}_{\mathrm{j}}-\frac{\delta}{2}}^{\mathrm{x}_{\mathrm{j}}+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right|\right)^{2}$
Next, following Sinha, the diffraction pattern is recalculated using equation (\ref{2}). This equation is a generalization of the double‐slit probability expression as shown in the Appendix.
$P_{123}=P_{1}+P_{2}+P_{3}+I_{12}+I_{13}+I_{23}=P_{12}+P_{13}+P_{23}-P_{1}-P_{2}-P_{3} \nonumber$
Franson [Science 329, 396 (2010)] summarized this view as follows: "Quantum interference between many different pathways is simply the sum of the effects from all pairs of pathways." If that is so why is it necessary to subtract the single‐slit diffraction patterns?
$P_{1 \text{slit}}(p):=\left(\left|\int_{-\frac{\delta}{2}}^{\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right|\right)^{2}$
\begin{align*} \Psi(\mathrm{p}) &=\left( \left| \int_{1-\frac{\delta}{2}}^{1+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}+\int_{2-\frac{\delta}{2}}^{2+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right| \right)^{2} \\[4pt] &+ \left( \left| \int_{1-\frac{\delta}{2}}^{1+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}+\int_{3-\frac{\delta}{2}}^{3+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right| \right)^{2} \\[4pt] &+ \left( \left| \int_{2-\frac{\delta}{2}}^{2+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}+\int_{3-\frac{\delta}{2}}^{3+\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right| \right)^{2} \\[4pt] &- 3 \left( \left| \int_{-\frac{\delta}{2}}^{\frac{\delta}{2}} \frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \frac{1}{\sqrt{\delta}} \mathrm{d} \mathrm{x}\right|\right)^{2} \end{align*}
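As a numerical check (a sketch, not part of the original worksheet), the closed form of each slit integral can be used to confirm that the direct sum of three amplitudes, equation (\ref{1}), and the pairwise decomposition, equation (\ref{2}), produce identical patterns for the slit geometry defined above:

```python
# Sketch comparing the direct three-path pattern with the pairwise decomposition
# for slits at x = 1, 2, 3 of width delta = 0.2 (the values used above).
import numpy as np

delta = 0.2
slits = [1.0, 2.0, 3.0]

def A(p, x0):
    """Amplitude from one slit centered at x0: closed form of the Fourier integral."""
    return (delta / np.sqrt(2 * np.pi * delta)) * np.sinc(p * delta / (2 * np.pi)) * np.exp(-1j * p * x0)

p = np.linspace(-30, 30, 1201)
amps = [A(p, x) for x in slits]

direct = np.abs(sum(amps))**2                          # equation (1): sum, then square
pairwise = (np.abs(amps[0] + amps[1])**2 + np.abs(amps[0] + amps[2])**2
            + np.abs(amps[1] + amps[2])**2
            - np.abs(amps[0])**2 - np.abs(amps[1])**2 - np.abs(amps[2])**2)   # equation (2)

print(np.allclose(direct, pairwise))   # True: the two expressions give the same pattern
```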
A 1980 reminiscence by Freeman Dyson is relevant to my critique.
Thirty‐one years ago Dick Feynman told me about his ʹsum over historiesʹ version of quantum mechanics. "The electron does anything it likes," he said. "It just goes in any direction, at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave function." I said to him "Youʹre crazy." But he isnʹt.
It is obvious from Feynmanʹs description of his ʹsum over historiesʹ approach to quantum interference that it puts no restriction on the number of interfering probability amplitudes.
The conceptual and mathematical clarity of Equation (\ref{1}) is lost in the transition to Equation (\ref{2}), which consists of three two‐path interferences and three single‐slit diffraction terms. Consequently Equation (\ref{2}) does not actually show that the triple‐slit experiment involves only the interference of pairs of paths.
Appendix
The probability expression for an event involving two equivalent paths is
$P_{i j}=\left|\Psi_{i}+\Psi_{j}\right|^{2}=\left|\Psi_{i}\right|^{2}+\left|\Psi_{j}\right|^{2}+\Psi_{i}^{*} \Psi_{j}+\Psi_{j}^{*} \Psi_{i}=P_{i}+P_{j}+I_{i j} \nonumber$
where $I_{ij}$ is the interference term and $P_i$ is defined as the probability when only the $i^{\text{th}}$ path is open. It is my opinion that this latter designation is not strictly valid. However, accepting it for the time being leads to the following definition for two‐path interference.
$I_{i j}=P_{i j}-P_{i}-P_{j} \nonumber$
Therefore, the probability for an event involving three equivalent paths to a destination using only two‐path interference is,
$P_{123}=P_{1}+P_{2}+P_{3}+I_{12}+I_{13}+I_{23}=P_{12}+P_{13}+P_{23}-P_{1}-P_{2}-P_{3} \nonumber$
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.54%3A_An_Analysis_of_Three-Slit_Interference.txt
Thirty-one years ago Dick Feynman told me about his 'sum over histories' version of quantum mechanics. "The electron does anything it likes," he said. "It just goes in any direction, at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave function." I said to him "You're crazy." But he isn't. Freeman Dyson, 1980.
In Volume 3 of the celebrated Feynman Lectures on Physics, Feynman uses the double-slit experiment as the paradigm for his 'sum over histories' approach to quantum mechanics. He said that any question in quantum mechanics could be answered by responding, "You remember the experiment with the two holes? It's the same thing." And, of course, he's right.
A 'sum over histories' is a superposition of probability amplitudes for the possible experimental outcomes, which in quantum mechanics carry phase and therefore interfere constructively and destructively with one another. The square of the magnitude of the superposition of histories yields the probabilities that the various experimental possibilities will be observed.
Obviously it takes a minimum of two 'histories' to demonstrate the interference inherent in the quantum mechanical superposition. And that's why Feynman chose the double-slit experiment as the paradigm for quantum mechanical behavior. The two slits provide two paths, or 'histories', to any destination on the detection screen. In this tutorial a close cousin of the double-slit experiment, single particle interference in a Mach-Zehnder interferometer, will be used to illustrate Feynman's 'sum over histories' approach to quantum mechanics.
A Beam Splitter Creates a Quantum Mechanical Superposition
Single photons emitted by a source (S) illuminate a 50-50 beam splitter (BS). Mirrors (M) direct the photons to detectors D1 and D2. The probability amplitudes for transmission and reflection are given below. By convention a 90 degree phase shift (i) is assigned to reflection.
Probability amplitude for photon transmission at a 50-50 beam splitter:
$\langle T | S\rangle=\frac{1}{\sqrt{2}} \nonumber$
Probability amplitude for photon reflection at a 50-50 beam splitter:
$\langle R | S\rangle=\frac{i}{\sqrt{2}} \nonumber$
After the beam splitter the photon is in a superposition state of being transmitted and reflected.
$|S\rangle \rightarrow \frac{1}{\sqrt{2}}[|T\rangle+ i|R\rangle] \nonumber$
As shown in the diagram below, mirrors reflect the transmitted photon path to D2 and the reflected path to D1. The source photon is expressed in the basis of the detectors as follows.
$|S\rangle \rightarrow \frac{1}{\sqrt{2}}[|T\rangle+ i|R\rangle] \xrightarrow[R \rightarrow D_{1}]{T \rightarrow D_{2}} \frac{1}{\sqrt{2}}\left[\left|D_{2}\right\rangle+ i\left|D_{1}\right\rangle\right] \nonumber$
The squares of the magnitudes of the coefficients of D1 and D2 give the probabilities that the photon will be detected at D1 or D2. Each detector registers photons 50% of the time. In other words, in the quantum view the superposition collapses randomly to one of the two possible measurement outcomes it represents.
The classical view that detection at D1 means the photon was reflected at BS1 and that detection at D2 means it was transmitted at BS1 is not tenable as will be shown using a Mach-Zehnder interferometer which has a second beam splitter at the path intersection before the detectors.
A Second Beam Splitter Provides Two Paths to Each Detector
If a second beam splitter is inserted before the detectors the photons always arrive at D1. In the first experiment there was only one path to each detector. The construction of a Mach-Zehnder interferometer by the insertion of a second beam splitter creates a second path to each detector and the opportunity for constructive and destructive interference on the paths to the detectors.
Given the superposition state after BS1, the probability amplitudes after BS2 interfere constructively at D1 and destructively at D2.
$\begin{matrix} \text{After}\;BS_{1} & \; & \text{After}\;BS_{2} & \; & \text{Final State} \\ \; & \; & |T\rangle \rightarrow \frac{1}{\sqrt{2}}\left[i\left|D_{1}\right\rangle+\left|D_{2}\right\rangle\right] & \; & \; \\ |S\rangle \rightarrow \frac{1}{\sqrt{2}}[|T\rangle+ i|R\rangle] & \rightarrow & + & \rightarrow & i\left|D_{1}\right\rangle \\ \; & \; & i|R\rangle \rightarrow \frac{i}{\sqrt{2}}\left[\left|D_{1}\right\rangle+ i\left|D_{2}\right\rangle\right] & \; & \; \end{matrix} \nonumber$
Adopting the classical view that the photon is either transmitted or reflected at BS1 does not produce this result. If the photon was transmitted at BS1 it would have equal probability of arriving at either detector after BS2. If the photon was reflected at BS1 it would also have equal probability of arriving at either detector after BS2. The predicted experimental results would be the same as those of the single beam splitter experiment. In summary, the quantum view that the photon is in a superposition of being transmitted and reflected after BS1 is consistent with both experimental results described above; the classical view that it is either transmitted or reflected is not.
Some disagree with this analysis saying the two experiments demonstrate the dual, complementary, behavior of photons. In the first experiment particle-like behavior is observed because both detectors register photons indicating the individual photons took one path or the other. The second experiment reveals wave-like behavior because interference occurs - only D1 registers photons. According to this view the experimental design determines whether wave or particle behavior will occur and somehow the photon is aware of how it should behave. Suppose in the second experiment that immediately after the photon has interacted with BS1, BS2 is removed. Does what happens at the detectors require the phenomenon of retrocausality or delayed choice? Only if you reason classically about quantum experiments.
We always measure particles (detectors click, photographic film is darkened, etc.) but we interpret what happened or predict what will happen by assuming wavelike behavior, in this case the superposition created by the initial beam splitter that delocalizes the position of the photon. Quantum particles (quons) exhibit both wave and particle properties in every experiment. To paraphrase Nick Herbert (Quantum Reality), particles are always detected, but the experimental results observed are the result of wavelike behavior. Richard Feynman put it this way (The Character of Physical Law), "I will summarize, then, by saying that electrons arrive in lumps, like particles, but the probability of arrival of these lumps is determined as the intensity of waves would be. It is in this sense that the electron behaves sometimes like a particle and sometimes like a wave. It behaves in two different ways at the same time (in the same experiment)." Bragg said, "Everything in the future is a wave, everything in the past is a particle."
In 1951 in his treatise Quantum Theory, David Bohm described wave-particle duality as follows: "One of the most characteristic features of the quantum theory is the wave-particle duality, i.e. the ability of matter or light quanta to demonstrate the wave-like property of interference, and yet to appear subsequently in the form of localizable particles, even after such interference has taken place." In other words, to explain interference phenomena wave properties must be assigned to matter and light quanta prior to detection as particles.
Matrix Mechanics Approach
As a companion analysis, the matrix mechanics approach to single-photon interference in a Mach-Zehnder interferometer is outlined next.
State Vectors
Photon moving horizontally:
$\mathrm{x} :=\left(\begin{array}{l}{1} \\ {0}\end{array}\right) \nonumber$
Photon moving vertically:
$\mathrm{y} :=\left(\begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber$
Operators
Operator representing a beam splitter:
$\mathrm{BS} :=\frac{1}{\sqrt{2}} \cdot\left(\begin{array}{cc}{1} & {\mathrm{i}} \\ {\mathrm{i}} & {1}\end{array}\right) \nonumber$
Operator representing a mirror:
$\mathrm{M} :=\left(\begin{array}{ll}{0} & {1} \\ {1} & {0}\end{array}\right) \nonumber$
Single beam splitter example:
Reading from right to left.
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter and a mirror will be detected at D1.
$\left(|\mathrm{x}^{\mathrm{T}} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}|\right)^{2}=0.5 \nonumber$
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter and a mirror will be detected at D2.
$\left(|\mathrm{y}^{\mathrm{T}} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}|\right)^{2}=0.5 \nonumber$
Two beam splitter example (MZI):
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter, a mirror and another beam splitter will be detected at D1.
$\left(|\mathrm{x}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}|\right)^{2}=1 \nonumber$
The probability that a photon leaving the source moving in the (horizontal) x-direction, encountering a beam splitter, a mirror and another beam splitter will be detected at D2.
$\left(|\mathrm{y}^{\mathrm{T}} \cdot \mathrm{BS} \cdot \mathrm{M} \cdot \mathrm{BS} \cdot \mathrm{x}|\right)^{2}=0 \nonumber$
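For readers who prefer NumPy to Mathcad, here is an equivalent sketch of the matrix calculations above (same state vectors and operators):

```python
# NumPy transcription (a sketch, not the original worksheet) of the matrix mechanics above.
import numpy as np

x = np.array([1, 0])                              # photon moving horizontally
y = np.array([0, 1])                              # photon moving vertically
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)    # 50-50 beam splitter
M = np.array([[0, 1], [1, 0]])                    # mirror

# single beam splitter: each detector registers photons half the time
print(abs(x @ M @ BS @ x)**2, abs(y @ M @ BS @ x)**2)            # approximately 0.5, 0.5
# Mach-Zehnder (two beam splitters): all photons reach D1
print(abs(x @ BS @ M @ BS @ x)**2, abs(y @ BS @ M @ BS @ x)**2)  # approximately 1.0, 0.0
```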
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.55%3A_Using_a_Mach-Zehnder_Interferometer_to_Illustrate_Feynman%27s__Sum_Over_Histories_Approach_to_Quantum_Mechanics.txt
French and Taylor illustrate the paradox of the recombined beams with a series of experiments using polarized photons in section 7-3 in An Introduction to Quantum Physics. It is my opinion that it is easier to demonstrate this so-called paradox using photons, beam splitters and mirrors. Of course, the paradox is only apparent, being created by thinking classically about a quantum phenomenon.
Single photons illuminate a 50-50 beam splitter and mirrors direct the photons to detectors D1 and D2. For a statistically meaningful number of observations, it is found that 50% of the photons are detected at D1 and 50% at D2. One might, therefore, conclude that each photon is either transmitted or reflected at the beam splitter.
Recombining the paths with a second beam splitter creates a Mach-Zehnder interferometer (MZI). On the basis of the previous reasoning one might expect again that each detector would fire 50% of the time. Half of the photons are in the T branch of the interferometer and they have a 50% chance of being transmitted to D2 and a 50% chance of being reflected to D1 at the second beam splitter. The same reasoning applies to the photons in the R branch. However, what is observed in an equal-arm MZI is that all the photons arrive at D1.
The reasoning used to explain the first result is plausible, but we see that the attempt to extend it to the MZI shown below leads to a contradiction with actual experimental results. It is clear that some new concepts are required. As will be shown, probability amplitude and the quantum superposition are the required concepts. They will yield predictions that are consistent with all experimental results, but they will require a non-classical, quantum way of thinking that most people find bizarre and a bit unsettling.
A Beam Splitter Creates a Quantum Superposition
The probability amplitudes for transmission and reflection at a beam splitter are given below. By convention a 90 degree phase shift (i) is assigned to reflection to conserve probability.
Probability amplitude for transmission at a 50-50 beam splitter:
$\langle T | S\rangle=\frac{1}{\sqrt{2}} \nonumber$
Probability amplitude for reflection at a 50-50 beam splitter:
$\langle R | S\rangle=\frac{i}{\sqrt{2}} \nonumber$
After the beam splitter the photon is in a superposition of being transmitted and reflected. In other words according to quantum theory prior to observations the photon state is not |T> or |R>, but |T> and |R>. Observation causes the superposition to collapse with equal probability to either |T> (|D2>) or |R> (|D1>) according to the Born rule and the Copenhagen interpretation.
$\mathrm{S}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{i} \mathrm{R}) \nonumber$
As shown in the first diagram, mirrors direct the transmitted photon to D2 and the reflected photon to D1. This can be expressed quantum mechanically as shown below.
$\mathrm{T}=\mathrm{D}_{2} \qquad \mathrm{R}=\mathrm{D}_{1} \nonumber$
Expressing the source photon in the basis of the detectors we have,
$\mathrm{S}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{i} \mathrm{R}) \Big|_{\text { substitute, } \mathrm{T}=\mathrm{D}_{2}}^{\text { substitute, } \mathrm{R}=\mathrm{D}_{1}} \rightarrow \mathrm{S}=\sqrt{2} \cdot\left(\frac{\mathrm{D}_{2}}{2}+\frac{\mathrm{D}_{1} \cdot \mathrm{i}}{2}\right) \nonumber$
The magnitude squared of the coefficients of D1 and D2 gives the probabilities that the photon will be detected at D1 and D2. As shown below, each detector registers photons 50% of the time in agreement with experiment.
$\text{Probability D}_{1}=\left(\Big|\frac{\mathrm{i} \cdot \sqrt{2}}{2}\Big|\right)^{2} \rightarrow \text { Probability D}_{1}=\frac{1}{2} \nonumber$
$\text{Probability D}_{2}=\left(\Big|\frac{\sqrt{2}}{2}\Big|\right)^{2} \rightarrow \text { Probability D}_{2}=\frac{1}{2} \nonumber$
A Second Beam Splitter Allows Probability Amplitudes to Interfere
The presence of the second beam splitter provides two paths to each detector and the opportunity for the interference of the probability amplitudes. The probability amplitudes for the paths to D1 interfere constructively, while the probability amplitudes for the paths to D2 interfere destructively. Thus, the evolution of |T> and |R> at the second beam splitter results in the photon always arriving at D1.
$\mathrm{T}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{i} \cdot \mathrm{D}_{1}+\mathrm{D}_{2}\right) \qquad \mathrm{R}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{D}_{1}+\mathrm{i} \cdot \mathrm{D}_{2}\right) \nonumber$
$\mathrm{S}=\frac{1}{\sqrt{2}} \cdot(\mathrm{T}+\mathrm{i} \cdot \mathrm{R}) \quad \Bigg| \begin{array}{l}{\text{substitute, } \mathrm{T}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{i} \cdot \mathrm{D}_{1}+\mathrm{D}_{2}\right)} \\ {\text{substitute, } \mathrm{R}=\frac{1}{\sqrt{2}} \cdot\left(\mathrm{D}_{1}+\mathrm{i} \cdot \mathrm{D}_{2}\right)} \\ {\text{simplify}}\end{array} \quad \rightarrow \quad \mathrm{S}=\mathrm{i} \cdot \mathrm{D}_{1} \nonumber$
$\text{Probability D}_{1}=(|i|)^{2} \rightarrow \text { Probability D}_{1}=1 \nonumber$
"The things that interfere in quantum mechanics are are the probability amplitudes for certain events. It is the fact that probability amplitudes add up like complex numbers that accounts for all quantum mechanical interferences." [Roy Glauber, American Journal of Physics 63, 12 (1995)]
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.56%3A_The_Paradox_of_Recombined_Beams.txt
Particle confinement and the wave-like properties of matter lead, according to quantum mechanical principles, to self-interference which is the origin of energy quantization. The particle-in-a-box problem is used in introductory quantum mechanics courses to illustrate this fundamental quantum effect.
Terrestrial objects are confined by the Earth's gravitational field, but the quantum effects of gravity are not observed in the macro-world because the gravitational interaction is weak. Thus, the gravitational energy levels are very closely spaced and for all practical purposes form a continuum.
In spite of the lack of evidence for quantized gravitational energy levels, the "quantum bouncer" has been a favorite example in the repertoire of solvable one-dimensional problems for those who teach quantum chemistry and quantum physics. Schrödinger's equation for the quantum bouncer near the surface of the Earth is,
$-\frac{\hbar^{2}}{2 m} \frac{d^{2} \Psi(z)}{d z^{2}}+m g z \Psi(z)=E \Psi(z) \nonumber$
where the particle is confined by the impenetrable potential barrier of the Earth's surface (V = $\infty$ for z $\leq$ 0) and the attractive gravitational interaction (V = mgz for z > 0).
The energy eigenvalues for the quantum bouncer are (1,2,3),
$E_{i}=\left(\frac{\hbar^{2} g^{2} m}{2}\right)^{\frac{1}{3}} a_{i} \nonumber$
where ai are the roots of the Airy function. The first five roots are 2.33810, 4.08794, 5.52055, 6.78670, and 7.94413.
The associated eigenfunctions are,
$\Psi_{i}(z)=N_{i} \mathrm{Ai}\left[\left(\frac{2 m^{2} g}{\hbar^{2}}\right)^{1 / 3} z-a_{i}\right] \nonumber$
where
$N_{i}=\left(\int_{0}^{\infty} \Psi_{i}(z)^{2} d z\right)^{-1 / 2} \nonumber$
Because there is no analytical expression for the eigenfunctions each one must be normalized using a numeric algorithm.
Very recently an international team at the Institute Laue-Langevin in Grenoble, France, led by V. Nesvizhevsky (4) published evidence for the quantized gravitational states of the neutron. To read a short summary of this experiment in Nature by Thomas Bowles (5) click here. Another summary has just been published in Physics Today (6).
To appreciate the significance of this accomplishment we calculate the neutron's ground state energy and wave function in the Earth's gravitational field using the equations above. The mass of the neutron is $1.675 \times 10^{-27}$ kg, which yields a ground-state energy of $E_1 = 2.254 \times 10^{-31}$ J. This corresponds to a classical vertical velocity of 1.6 cm/s. Thus gravitational confinement requires a source of ultra-cold neutrons (UCNs).
Furthermore, the energy of the first excited state is $3.941 \times 10^{-31}$ J, so the energy difference between the ground state and the first excited state is equivalent to a photon with a wavelength of $1.2 \times 10^{6}$ m. Clearly traditional spectroscopic methods cannot be used to establish the existence of quantized gravitational states for the neutron.
The probability distributions, $|\Psi(z)|^{2}$, for the ground and first excited states are shown in Figures 1 and 4. They hold the key to the experimental design that Nesvizhevsky and his group used to establish that the neutron's gravitational states are quantized. To download a Mathcad file that will generate the neutron eigenstates numerically click here.
The apparatus shown in Figure 2 records neutron throughput as a function of absorber height. The data collected are shown in Figure 3. The shaded circles are the actual data points. We will not be concerned with the solid, dashed, or dotted lines in the figure. The most important feature of the data for this analysis is the sharp increase in neutron throughput at about 20 $\mu$m.
The argument will be made that the neutron wave function shown in Figure 1 is consistent with the data presented in Figure 3. To demonstrate this we calculate the probability that the ground-state neutron will be found in the absorber for a variety of absorber heights. This requires numerical evaluation of
$\int_{a_{z}}^{\infty} \Psi_{1}(z)^{2} d z \nonumber$
where az is the absorber height. These calculations are presented in the table given below.
Absorber height ($\mu$m) Probability in Absorber
10 0.380
15 0.089
20 0.012
25 0.001
It is clear from these calculations and Figure 1 that the probability of finding the neutron in the absorber falls off sharply at about 20 $\mu$m. This analysis, therefore, is consistent with the sharp increase in neutron throughput at this absorber height.
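The tabulated probabilities can be reproduced approximately with a short numerical sketch (not part of the original article). The physical constants below are standard values; the integration cutoff of 60 $\mu$m is an assumption chosen where the ground-state wave function is negligible.

```python
# Hedged sketch: normalize the neutron's ground-state Airy eigenfunction and
# integrate its probability density beyond a given absorber height.
import numpy as np
from scipy.integrate import quad
from scipy.special import airy, ai_zeros

hbar = 1.054571817e-34      # J s
m = 1.675e-27               # neutron mass, kg
g = 9.81                    # m s^-2

a1 = -ai_zeros(1)[0][0]                      # first root of the Airy function, ~2.33811
scale = (2 * m**2 * g / hbar**2) ** (1/3)    # inverse length scale, roughly 1/(5.9 um)

def psi1_sq(z):
    return airy(scale * z - a1)[0] ** 2      # unnormalized |Psi_1(z)|^2

norm, _ = quad(psi1_sq, 0, 60e-6)            # assumed cutoff: wave function negligible by ~60 um

for az in (10e-6, 15e-6, 20e-6, 25e-6):      # absorber heights in meters
    tail, _ = quad(psi1_sq, az, 60e-6)
    print(f"{az*1e6:.0f} um: {tail / norm:.3f}")   # compare with the table above
```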
Many neutron gravitational states besides the ground state are occupied and it is therefore necessary to explore the experimental implications of this fact. The first excited state is, as mentioned previously, shown in Figure 4. This wave function extends further in the z-direction than the ground state function, going to zero around 35 $\mu$m. The experimental significance of this is that the neutron throughput should show another abrupt increase in the neighborhood of 30 $\mu$m, an absorber height for which the first-excited-state neutrons have a low probability of being absorbed. This phenomenon should be repeated for all other occupied excited states as the absorber height reaches the spatial extent of each excited-state wave function.
With regard to this expected effect Bowles has commented (5)
The data show some hint of stepped increases at the values corresponding to higher energy states, consistent with the existence of these states, but they are not yet conclusive. Nonetheless, the evidence for the existence of the first energy state is convincing and confirms that a quantum effect occurs in the gravitational trap. The difficulty of this measurement should not be underestimated. The researchers are measuring a quantum effect caused by gravity that requires a resolution of $10^{-15}$ eV. Interactions of the neutrons with other fields would normally obscure such a tiny effect, but the neutron's lack of electric charge and the low kinetic energy of the UCNs make such observations possible.
In summary, thanks to Nesvizhevsky and his team, we now have some direct evidence for quantized gravitational states. The "quantum bouncer", previously a purely academic exercise, can now be applied to a real-life example.
Literature cited:
1. P. W. Langhoff, "Schrödinger particle in a gravitational well," Am. J. Phys. 39, 954-957 (1971).
2. R. L. Gibbs, "The quantum bouncer," Am. J. Phys. 43, 25-28 (1975).
3. J. Gea-Banacloche, "A quantum bouncing ball," Am. J. Phys. 67, 776-782 (1999).
4. V. Nesvizhevsky, et al., "Quantum states of neutrons in the Earth's gravitational field," Nature 415, 297-299 (2002).
5. T. J. Bowles, "Quantum effects of gravity," Nature 415, 267-268 (2002).
6. B. Schwarzschild, "Ultracold Neutrons Exhibit Quantum States in the Earth's Gravitational Field," Physics Today 55 (3), 20-23 (2002).
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.57%3A_Evidence_for_Quantized_Gravitational_States_of_the_Neutron.txt
Recently an international team of researchers at the Institute Laue-Langevin in Grenoble France published evidence for the quantized gravitational states of the neutron (1,2,3). The purpose of this paper is to analyze this phenomenon using the variational theorem.
Schrödinger’s equation for an object subject to a gravitational field near the surface of the earth is
$-\frac{\hbar^{2}}{2 m} \frac{d^{2} \Psi(z)}{d z^{2}}+m g z \Psi(z)=E \Psi(z) \tag{1} \nonumber$
Equation (1) has an exact solution and the ground state energy for the neutron is given by
$E=2.3381\left(\frac{m g^{2} \hbar^{2}}{2}\right)^{\frac{1}{3}}=2.254 \times 10^{-31} \mathrm{J} \tag{2} \nonumber$
where 2.3381 is the first root of the Airy function.
The performance of two trial wave functions for the “quantum bouncer” will be evaluated. The functions given in equations (3) and (4) were previously suggested for this purpose by J. L. Martin (4).
$\Psi(z)=2 \alpha^{\frac{3}{2}} z \exp (-\alpha z) \tag{3} \nonumber$
$\Phi(z)=\left(\frac{128 \beta^{3}}{\pi}\right)^{\frac{1}{4}} z \exp \left(-\beta z^{2}\right) \tag{4} \nonumber$
First Trial Wave Function
Evaluation of the variational integral for $\Psi$(z) yields
$\langle E\rangle=\int_{0}^{\infty} \Psi(z) \hat{H} \Psi(z) d z=\frac{\hbar^{2} \alpha^{2}}{2 m}+\frac{3 m g}{2 \alpha} \tag{5} \nonumber$
Minimization of the energy provides the optimum value of $\alpha$
$\alpha=\left(\frac{3 m^{2} g}{2 \hbar^{2}}\right)^{\frac{1}{3}} \tag{6} \nonumber$
and the ground-state energy.
$\langle E\rangle= 1.9656\left(m g^{2} \hbar^{2}\right)^{\frac{1}{3}}=2.387 \times 10^{-31} \mathrm{J} \tag{7} \nonumber$
This result is in error by 6% indicating that $\Psi$(z) is a reasonable trial wave function for an object subjected to a confining gravitational interaction.
Second Trial Wave Function
It will be seen, however, that $\Phi$(z) is a much better trial wave function. Evaluation of the variational integral yields
$\langle E\rangle=\int_{0}^{\infty} \Phi(z) \hat{H} \Phi(z) d z=\frac{3 \hbar^{2} \beta}{2 m}+m g\left(\frac{2}{\pi \beta}\right)^{\frac{1}{2}} \tag{8} \nonumber$
Minimization of the energy provides the optimum value of $\beta$
$\beta=\left(\sqrt{\frac{2}{\pi}} \frac{m^{2} g}{3 \hbar^{2}}\right)^{\frac{2}{3}} \tag{9} \nonumber$
and the ground-state energy.
$\langle E\rangle=\frac{3}{2}\left(\frac{6 m g^{2} \hbar^{2}}{\pi}\right)^{\frac{1}{3}}=2.260 \times 10^{-31} \mathrm{J} \tag{10} \nonumber$
The error for this trial function is less than 0.3%, so it is a very good approximation to the exact wave function.
The trial wave functions are compared with the exact solution in Figure 1 (exact solution, solid blue; $\Psi$(z), green dots; $\Phi$(z), red dashes).
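The variational integrals of equations (5)-(10) can be cross-checked symbolically with the sketch below (not part of the original derivation); the numerical values assume the neutron mass, g = 9.81 m s⁻², and the standard value of $\hbar$.

```python
# SymPy cross-check of the variational energies for the two trial wave functions.
import sympy as sp

z, alpha, beta, m, g, hbar = sp.symbols('z alpha beta m g hbar', positive=True)
numbers = {m: 1.675e-27, g: 9.81, hbar: 1.054571817e-34}   # SI values (assumed constants)

def energy(psi, par, par_opt):
    """<E> = int_0^oo psi (-hbar^2/(2m) psi'' + m g z psi) dz, at the quoted optimum parameter."""
    H_psi = -hbar**2 / (2*m) * sp.diff(psi, z, 2) + m*g*z*psi
    E = sp.integrate(psi * H_psi, (z, 0, sp.oo))
    return E.subs(par, par_opt).subs(numbers).evalf()

# trial function (3) with alpha from equation (6)
psi = 2 * alpha**sp.Rational(3, 2) * z * sp.exp(-alpha*z)
print(energy(psi, alpha, (3*m**2*g/(2*hbar**2))**sp.Rational(1, 3)))   # about 2.39e-31 J, equation (7)

# trial function (4) with beta from equation (9)
phi = (128*beta**3/sp.pi)**sp.Rational(1, 4) * z * sp.exp(-beta*z**2)
print(energy(phi, beta, (sp.sqrt(sp.Rational(2)/sp.pi)*m**2*g/(3*hbar**2))**sp.Rational(2, 3)))   # about 2.26e-31 J, equation (10)
```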
The Experiment
Thus far we have used the exact theoretical ground-state energy as a criterion to evaluate two trial wave functions. However, Nesvizhevsky and co-workers did not use energy or spectroscopy to gather evidence for the quantized gravitational states of the neutron.
This is impractical given how closely spaced the neutron gravitational states are. For example, the energy of the first excited state for the neutron is $E_2 = 3.941 \times 10^{-31}$ J. Therefore, the energy difference between the ground state and first excited state is equivalent to a photon with a wavelength of 1.2 million meters. Clearly traditional spectroscopic methods cannot be used to establish the existence of quantized gravitational states for the neutron.
A schematic of the experiment Nesvizhevsky's team used to gather evidence for quantized neutron gravitational states is shown in Figure 2. The neutron is confined by the attractive gravitational field and the repulsive reflecting mirror surface. Ultra-cold neutrons (UCNs) with total velocities less than 8 m/s are "lobbed" into the apparatus. In the vertical direction the neutrons are subject to the gravitational interaction with the Earth, but there are no forces in the horizontal direction. The vertical and horizontal degrees of freedom are independent of one another in the design of this experiment, because care has been taken to eliminate vibrations, and extraneous electric and magnetic fields.
This apparatus records neutron throughput as a function of absorber height. The data collected are shown in Figure 3. The shaded circles are the actual data points. We will not be concerned with the solid, dashed, or dotted lines in the figure. The most important feature of the data for this analysis is the sharp increase in neutron throughput at about 20 $\mu$m.
Interpretation
The probability that a neutron will be found in the absorber, that is, the probability that it will be absorbed, is given by
$\int_{a_{z}}^{\infty}|\Phi(z)|^{2} d z \tag{11} \nonumber$
where az is the absorber height. Using the second trial wave function the probability is calculated for several absorber heights and displayed in the table below.
Absorber height ($\mu$m) Probability in Absorber
10 0.388
15 0.078
20 0.007
25 $3 \times 10^{-4}$
It is clear from these calculations that the probability of finding the neutron in the absorber has declined significantly by 20 $\mu$m. This analysis, therefore, is consistent with the sharp increase in neutron throughput at this absorber height.
Literature cited:
1. Nesvizhevsky, V.; et al. Quantum States of Neutrons in the Earth's Gravitational Field. Nature 2002, 415, 297-299.
2. Bowles, T. J. Quantum Effects of Gravity. Nature 2002, 415, 267-268.
3. Rioux, F. Nature: Evidence for Quantized Gravitational States of the Neutron. J. Chem. Educ. 2002, 79, 1404-1406.
4. Martin, J. L. Basic Quantum Mechanics; Clarendon Press, Oxford, 1981, p. 207.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.58%3A_Quantized_Gravitational_States_A_Variational_Approach.txt
The Earthʹs gravitational field is a confining potential and according to quantum mechanical principles, confinement gives rise to quantized energy levels. The one‐dimensional, time‐independent Schrödinger equation for a quantum object (quon, to use Nick Herbertʹs term) in the Earthʹs gravitational field near the surface is,
$\frac{-h^{2}}{8 \cdot \pi^{2} \cdot m} \cdot \frac{d^{2}}{d z^{2}} \Psi(z)+m \cdot g \cdot z \cdot \Psi(z)=E \cdot \Psi(z) \nonumber$
Here m is the mass of the quon, z its vertical distance from the Earthʹs surface, and g the acceleration due to gravity.
The solution to Schrödingerʹs equation for the mgz potential energy operator is well‐known. The energy eigenvalues and eigenfunctions are expressed in terms of the roots of the Airy function.
$\mathrm{E}_{\mathrm{i}}=\left(\frac{\mathrm{h}^{2} \cdot \mathrm{g}^{2} \cdot \mathrm{m}}{8 \cdot \pi^{2}}\right)^{\frac{1}{3}} \cdot \mathrm{a}_{\mathrm{i}} \qquad \Psi_{\mathrm{i}}(\mathrm{z})=\mathrm{N}_{\mathrm{i}} \cdot \mathrm{Ai}\left[\left(\frac{8 \cdot \pi^{2} \cdot \mathrm{m}^{2} \cdot \mathrm{g}}{\mathrm{h}^{2}}\right)^{\frac{1}{3}} \cdot \mathrm{z}-\mathrm{a}_{\mathrm{i}}\right] \nonumber$
The first five roots of the Airy function are presented below.
$a_{1} :=2.33810 \quad a_{2} :=4.08794 \quad a_{3} :=5.52055 \quad a_{4} :=6.78670 \qquad a_{5} :=7.94413 \nonumber$
Time dependence is introduced by multiplying the spatial wave function by $\exp \left(-i \cdot \frac{2 \cdot \pi \cdot E_{i} \cdot t}{h}\right)$. The calculations that follow will be carried out in atomic units. In addition, the acceleration due to gravity and the quon mass will be set to unity for the sake of computational simplicity.
$\mathrm{h} :=2 \cdot \pi \quad \mathrm{g} :=1 \quad \mathrm{m} :=1 \quad \mathrm{i} :=1 \ldots 5 \qquad \mathrm{E}_{\mathrm{i}} :=\left(\frac{\mathrm{h}^{2} \cdot \mathrm{g}^{2} \cdot \mathrm{m}}{8 \cdot \pi^{2}}\right)^{\frac{1}{3}} \cdot \mathrm{a}_{\mathrm{i}} \nonumber$
The wave functions for the ground state and first excited state are shown below.
$\Psi_{1}(z, t) :=1.6 \cdot \operatorname{Ai}\left[\left(\frac{8 \cdot \pi^{2} \cdot m^{2} \cdot g}{h^{2}}\right)^{\frac{1}{3}} \cdot z-a_{1}\right] \cdot e^{-i \cdot E_{1} \cdot t} \nonumber$
$\Psi_{2} (\mathrm{z}, \mathrm{t}) :=1.4 \cdot \mathrm{Ai}\left[\left(\frac{8 \cdot \pi^{2} \cdot \mathrm{m}^{2} \cdot \mathrm{g}}{\mathrm{h}^{2}}\right)^{\frac{1}{3}} \cdot \mathrm{z}-\mathrm{a}_{2}\right] \cdot \mathrm{e}^{-\mathrm{i} \cdot \mathrm{E}_{2} \cdot \mathrm{t}} \nonumber$
This exercise is usually called the quantum bouncer, but the first thing to stress is that these energy eigenfunctions are stationary states. In other words, a quon in one of these eigenstates is in a weighted superposition of all possible positions above the surface of the Earth, and that superposition does not vary with time. It is not moving; it is not bouncing in any classical sense.
In other words the behavior of a quon is described by its time‐independent distribution function, $|\Psi(z)|^{2}$, rather than by a trajectory ‐ a sequence of instantaneous positions and velocities. Quantum mechanics predicts the probability of finding the quon at any position above the Earthʹs surface, but the fact that it can be found at many different locations does not mean it is moving from one position to another. Quantum mechanics predicts the probability of observations, not what happens between two observations.
The position distribution functions, $|\Psi(z)|^{2}$, for the ground‐ and first‐excited state are shown below.
The quantum bouncer doesn't "bounce" until it is perturbed and forced into a superposition of eigenstates, for example, the ground and first-excited states:
$\Phi(z, t) :=\frac{\Psi_{1}(z, t)+\Psi_{2}(z, t)}{\sqrt{2}} \nonumber$
Below the quon distribution function is shown for three different times.
As the figure shows, now there is motion in the gravitational field. The quon is moving up and down, bouncing, because it is in a time‐dependent superposition of eigenstates.
Another way this quantum mechanical ʺbouncing motionʺ can be illustrated is by displaying the time‐dependence of the average value of the position of the quon in the gravitational field.
$\text{AverageHeight}(\mathrm{t}) :=\int_{0}^{\infty} \mathrm{z} \cdot(|\Phi(z, t)|)^{2} \mathrm{d} \mathrm{z} \nonumber$
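The same average-height integral can be evaluated outside Mathcad. The Python/SciPy sketch below is illustrative rather than part of the worksheet; it reuses the normalization factors 1.6 and 1.4 from above and treats an upper limit of 20 as effectively infinite.

```python
# Sketch: <z>(t) for the equal superposition of the two lowest bouncer states,
# using the Airy-function eigenfunctions in atomic units (h = 2*pi, g = m = 1).
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

a1, a2 = 2.33810, 4.08794          # magnitudes of the first two Airy roots
E1, E2 = 0.5**(1/3) * a1, 0.5**(1/3) * a2
c = 2**(1/3)                       # (8*pi^2*m^2*g/h^2)^(1/3) with h = 2*pi, m = g = 1

def psi1(z): return 1.6 * airy(c * z - a1)[0]   # airy(...)[0] is Ai
def psi2(z): return 1.4 * airy(c * z - a2)[0]

def avg_height(t):
    # |Phi(z,t)|^2 = (psi1^2 + psi2^2 + 2*psi1*psi2*cos((E2 - E1)*t)) / 2
    dens = lambda z: 0.5 * (psi1(z)**2 + psi2(z)**2
                            + 2 * psi1(z) * psi2(z) * np.cos((E2 - E1) * t))
    return quad(lambda z: z * dens(z), 0, 20)[0]

for t in (0.0, 2.0, 4.0):
    print(f"t = {t:3.1f}   <z> = {avg_height(t):.3f}")
```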
It is also possible to animate the time‐dependent superposition. Click on Tools, Animation, Record and follow the instructions in the dialog box. Choose From: 0, To: 100, At: 6 Frames/Sec.
1.60: Kinetic Energy Is Important in the Nanoscale World
Most explanations of atomic and molecular phenomena found in textbooks are expressed in terms of potential-energy-only (PEO) models. Inclusion of kinetic energy in the analysis is generally considered to be unnecessary or irrelevant. This view is of questionable validity, and it is becoming increasingly clear that ignoring kinetic energy at the nanoscopic level can lead to facile but incorrect explanations of atomic and molecular behavior. (1-4)
The idea that kinetic energy should not be ignored is not too surprising since quantum mechanical calculations involve minimization of the total energy, which includes both kinetic and potential energy contributions. In other words, kinetic energy plays an important role at the computational level, and therefore should not be excluded at the level of analysis and interpretation. For example, the fundamental questions regarding the stability of matter, the nature of the covalent bond, and the interaction of electromagnetic radiation with matter cannot be answered without a consideration of kinetic energy in the quantum mechanical context. (5)
The following simple variational calculation on a particle in a one-dimensional box with a linear internal potential clearly illustrates the importance of kinetic energy. For a particle of unit mass in a one-dimensional box of length one bohr (1 a0 = 52.9 pm = 0.0529 nm) with internal potential energy V(x) = 2x, the Schrödinger equation in atomic units (h = 2$\pi$) is,
$\frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{x}^{2}} \Psi(\mathrm{x})+\mathrm{V}(\mathrm{x}) \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \quad \text { where } \quad \mathrm{V}(\mathrm{x}) :=2 \cdot \mathrm{x} \nonumber$
Three normalized trial wave functions are considered in this analysis, and are shown below both mathematically and graphically. The potential energy function is superimposed on the graphical representation of the trial wave functions.
$\Psi_{\mathrm{a}}(\mathrm{x}) :=\sqrt{2} \cdot \sin (\pi \cdot \mathrm{x}) \ \Psi_{\mathrm{b}}(\mathrm{x}) :=\sqrt{105} \cdot \mathrm{x} \cdot(1-\mathrm{x})^{2} \ \Psi_{\mathrm{c}}(\mathrm{x}) :=\sqrt{105} \cdot \mathrm{x}^{2} \cdot(1-\mathrm{x}) \ \mathrm{x} :=0, 0.02 \ldots 1 \nonumber$
If asked to choose the best trial wave function by inspection, one would undoubtedly be inclined to select $\Psi_{b}$ because it is skewed to the left side of the box where the potential energy is lowest. $\Psi_{a}$ would be next best because it is symmetric, and $\Psi_{c}$ would be last because it is skewed to the right side of the box where the potential energy is highest. However, the quantum mechanical calculations reveal that $\Psi_{a}$ is the best trial function of the three because it gives the lowest total energy, the primary criterion of the variational principle.
For each trial wave function the expectation values for kinetic energy (T), potential energy (V), total energy (E = T + V), and position are calculated. Atomic units are used in all calculations (1 Eh = 4.36 aJ and 1 a0 = 52.9 pm).
Calculations for trial wave function $\Psi_{a}$
Kinetic energy: $\mathrm{T}_{\mathrm{a}} :=\int_{0}^{1} \Psi_{\mathrm{a}}(\mathrm{x}) \cdot-\frac{1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{x}^{2}} \Psi_{\mathrm{a}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{T}_{\mathrm{a}}=4.935$
Potential energy: $\mathrm{V}_{\mathrm{a}} :=\int_{0}^{1} \Psi_{\mathrm{a}}(\mathrm{x}) \cdot 2 \cdot \mathrm{x} \cdot \Psi_{\mathrm{a}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{V}_{\mathrm{a}}=1.000$
Total energy: $\mathrm{E}_{\mathrm{a}} :=\mathrm{T}_{\mathrm{a}}+\mathrm{V}_{\mathrm{a}}$ $\mathrm{E}_{\mathrm{a}}=5.935$
Average position: $\mathrm{X}_{\mathrm{a}} :=\int_{0}^{1} \Psi_{\mathrm{a}}(\mathrm{x}) \cdot \mathrm{x} \cdot \Psi_{\mathrm{a}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{X}_{\mathrm{a}}=0.500$
Calculations for trial wave function $\Psi_{b}$
Kinetic energy: $\mathrm{T}_{\mathrm{b}} :=\int_{0}^{1} \Psi_{\mathrm{b}}(\mathrm{x}) \cdot-\frac{1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{x}^{2}} \Psi_{\mathrm{b}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{T}_{\mathrm{b}}=7.000$
Potential energy: $\mathrm{V}_{\mathrm{b}} :=\int_{0}^{1} \Psi_{\mathrm{b}}(\mathrm{x}) \cdot 2 \cdot \mathrm{x} \cdot \Psi_{\mathrm{b}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{V}_{\mathrm{b}}=0.750$
Total energy: $\mathrm{E}_{\mathrm{b}} :=\mathrm{T}_{\mathrm{b}}+\mathrm{V}_{\mathrm{b}}$ $\mathrm{E}_{\mathrm{b}}=7.750$
Average position: $\mathrm{X}_{\mathrm{b}} :=\int_{0}^{1} \Psi_{\mathrm{b}}(\mathrm{x}) \cdot \mathrm{x} \cdot \Psi_{\mathrm{b}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{X}_{\mathrm{b}}=0.375$
Calculations for trial wave function $\Psi_{c}$
Kinetic energy: $\mathrm{T}_{\mathrm{c}} :=\int_{0}^{1} \Psi_{\mathrm{c}}(\mathrm{x}) \cdot-\frac{1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{x}^{2}} \Psi_{\mathrm{c}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{T}_{\mathrm{c}}=7.000$
Potential energy: $\mathrm{V}_{\mathrm{c}} :=\int_{0}^{1} \Psi_{\mathrm{c}}(\mathrm{x}) \cdot 2 \cdot \mathrm{x} \cdot \Psi_{\mathrm{c}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{V}_{\mathrm{c}}=1.250$
Total energy: $\mathrm{E}_{\mathrm{c}} :=\mathrm{T}_{\mathrm{c}}+\mathrm{V}_{\mathrm{c}}$ $\mathrm{E}_{\mathrm{c}}=8.250$
Average position: $\mathrm{X}_{\mathrm{c}} :=\int_{0}^{1} \Psi_{\mathrm{c}}(\mathrm{x}) \cdot \mathrm{x} \cdot \Psi_{\mathrm{c}}(\mathrm{x}) \mathrm{d} \mathrm{x}$ $\mathrm{X}_{\mathrm{c}}=0.625$
These calculations are summarized in the following table.
Property\Wave function Ψa Ψb Ψc
Kinetic Energy/Eh 4.935 7.000 7.000
Potential Energy/Eh 1.000 0.750 1.250
Total Energy/Eh 5.935 7.750 8.250
Average Position/ao 0.500 0.375 0.625
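For readers working outside Mathcad, the table entries can be reproduced with a short symbolic calculation. The following SymPy sketch (variable names are illustrative) evaluates T, V, E and the average position for all three trial functions.

```python
# Sketch: reproduce the expectation values for the three trial functions with SymPy.
import sympy as sp

x = sp.symbols('x', real=True)
trials = {
    'psi_a': sp.sqrt(2) * sp.sin(sp.pi * x),
    'psi_b': sp.sqrt(105) * x * (1 - x)**2,
    'psi_c': sp.sqrt(105) * x**2 * (1 - x),
}
for name, psi in trials.items():
    T = sp.integrate(psi * (-sp.Rational(1, 2)) * sp.diff(psi, x, 2), (x, 0, 1))
    V = sp.integrate(psi * 2 * x * psi, (x, 0, 1))   # V(x) = 2x
    X = sp.integrate(psi * x * psi, (x, 0, 1))
    print(name, float(T), float(V), float(T + V), float(X))
```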
$\Psi_{a}$ is a symmetric function that favors neither the low nor the high potential energy region of the box, yet it has the lowest total energy because it has a significantly lower kinetic energy than the other trial wave functions. It has a lower kinetic energy because it has lower curvature (curvature is the second derivative of the function). $\Psi_{b}$ has a somewhat lower potential energy than $\Psi_{a}$ because it favors the left side of the box, but consequently a much higher kinetic energy because of its greater curvature. Total energy, as noted above, is what counts in a variational calculation. $\Psi_{c}$ is the worst trial function because it has both high kinetic energy and high potential energy.
Of course, $\Psi_{a}$ is not the best possible wave function for this problem; it is only the best of the three considered here. The best wave function can be found by a more elaborate variational calculation or by numerical integration of Schrödinger's equation. A Mathcad (6) program for numerical integration of Schrödinger's equation for a particle in a box with linear internal potential is given in the appendix.
This latter method yields a wave function with the following physical properties: ⟨T⟩ = 4.942 Eh; ⟨V⟩ = 0.983 Eh; ⟨E⟩ = 5.925 Eh; ⟨x⟩ = 0.491 a0. Note that this optimum wave function is skewed slightly to the left of center, increasing the kinetic energy slightly (+0.007 Eh) while reducing the potential energy slightly more (-0.017 Eh), for an overall energy reduction of -0.010 Eh. The details of these calculations can be found in the appendix.
However, it is also important to note that $\Psi_{a}$, the eigenfunction for the particle in a box problem [V(x)=0], is a very good trial wave function for this particular problem. It is in error by only 0.17% when compared with the more accurate, and essentially exact, numerical solution. $\Psi_{a}$ is displayed along with the numerical wave function in the appendix to show how little it differs from the numerical solution.
Another point that should be noted is that $\Psi_{b}$ does not become the preferred trial function until the slope of the internal potential reaches approximately 16.5 (more precisely, 56 - 4$\pi^{2}$ = 16.52 in atomic units, the slope at which $E_{a}$ = $E_{b}$). In other words, it requires a rather steeply rising internal potential energy to offset the kinetic energy advantage that $\Psi_{a}$ has. The energy calculations for both wave functions at this crossover potential, V(x) = 16.52x, are given below.

$\mathrm{V}_{\mathrm{a}} :=\int_{0}^{1} \Psi_{\mathrm{a}}(\mathrm{x}) \cdot 16.52 \cdot \mathrm{x} \cdot \Psi_{\mathrm{a}}(\mathrm{x}) \mathrm{d} \mathrm{x} \ \mathrm{V}_{\mathrm{a}}=8.260 \qquad \mathrm{E}_{\mathrm{a}} :=\mathrm{T}_{\mathrm{a}}+\mathrm{V}_{\mathrm{a}} \qquad \mathrm{E}_{\mathrm{a}}=13.195 \nonumber$

$\mathrm{V}_{\mathrm{b}} :=\int_{0}^{1} \Psi_{\mathrm{b}}(\mathrm{x}) \cdot 16.52 \cdot \mathrm{x} \cdot \Psi_{\mathrm{b}}(\mathrm{x}) \mathrm{d} \mathrm{x} \ \mathrm{V}_{\mathrm{b}}=6.195 \qquad \mathrm{E}_{\mathrm{b}} :=\mathrm{T}_{\mathrm{b}}+\mathrm{V}_{\mathrm{b}} \qquad \mathrm{E}_{\mathrm{b}}=13.195 \nonumber$
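The crossover slope follows directly from the kinetic energies and average positions in the table above. A minimal Python check (illustrative, not part of the original worksheet):

```python
# Sketch: slope c of the internal potential V(x) = c*x at which psi_a and psi_b
# give the same total energy, using the kinetic energies and average positions above.
import numpy as np

T_a, X_a = np.pi**2 / 2, 0.500     # psi_a: T = pi^2/2 = 4.935, <x> = 0.5
T_b, X_b = 7.000, 0.375            # psi_b values

c = (T_b - T_a) / (X_a - X_b)      # E_a = E_b  =>  T_a + c*X_a = T_b + c*X_b
print(f"crossover slope c = {c:.2f}")              # ~16.52
print(f"common energy     = {T_a + c * X_a:.3f}")  # ~13.20
```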
In conclusion, this simple example reveals that our intuition about the importance of potential energy in the analysis of physical phenomena at the nanoscale level should be tempered by a realization that the quantum mechanical nature of kinetic energy cannot be safely ignored.
Literature cited:
1. Tokiwa, H.; Ichikawa, H. Int. J. Quantum Chem. 1994, 50, 109-112.
2. Rioux, F.; DeKock, R. L. J. Chem. Educ. 1998, 75, 537-539.
3. Weinhold, F. Nature 2001, 411, 539-541.
4. Rioux, F. Chem. Educator 2003, 8, S1430-4171(03)01650-9; DOI 10.1333/s00897030650a.
5. In the context of quantum mechanics, confinement energy is probably a better descriptor than kinetic energy, because the latter implies classical motion. According to quantum mechanical principles, confined particles, because of their wave-like character, are described by a weighted superposition of the allowed position eigenstates. They are not executing a trajectory in the classical sense. In other words, they are not here and later there; they are here and there, simultaneously.
6. Mathcad 11 is a product of Mathsoft, Cambridge, MA 02142; www.mathsoft.com/.
Appendix
Numerical Solution for the Particle in a Slanted Box
Parameters:
$\mathrm{x}_{\max } :=1 \qquad \mathrm{m} :=1 \qquad \mathrm{V}_{0} :=2 \nonumber$
Potential energy:
$\mathrm{V}(\mathrm{x}) :=\mathrm{V}_{0} \cdot \mathrm{x} \nonumber$
Solve Schrödinger's equation numerically:
Given
$\frac{-1}{2 \cdot \mathrm{m}} \cdot \frac{\mathrm{d}^{2}}{\mathrm{d} \mathrm{x}^{2}} \Psi(\mathrm{x})+\mathrm{V}(\mathrm{x}) \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \ \Psi(0)=0 \qquad \Psi^{\prime}(0)=0.1 \ \Psi :=\text { Odesolve }\left(\mathrm{x}, \mathrm{x}_{\mathrm{max}}\right) \nonumber$
Normalize wavefunction:
$\Psi(x) :=\frac{\Psi(x)}{\sqrt{\int_{0}^{x_{\max }} \Psi(x)^{2} \mathrm{d} x}} \nonumber$
Enter energy guess: E = 5.925
Calculate most probable position:
$\mathrm{x} :=.5 \text { Given } \qquad \frac{\mathrm{d}}{\mathrm{dx}} \Psi(\mathrm{x})=0 \qquad \text { Find }(\mathrm{x})=0.485 \nonumber$
Calculate average position:
$\mathrm{X}_{\mathrm{avg}} :=\int_{0}^{1} \Psi(\mathrm{x}) \cdot \mathrm{x} \cdot \Psi(\mathrm{x}) \mathrm{d} \mathrm{x} \qquad \mathrm{X}_{\mathrm{avg}}=0.491 \nonumber$
Calculate potential and kinetic energy:
$\mathrm{V}_{\mathrm{avg}} :=\mathrm{V}_{0} \cdot \mathrm{X}_{\mathrm{avg}} \qquad \mathrm{V}_{\mathrm{avg}}=0.983 \nonumber$
$\mathrm{T}_{\mathrm{avg}} :=\mathrm{E}-\mathrm{V}_{\mathrm{avg}} \qquad \mathrm{T}_{\mathrm{avg}}=4.942 \nonumber$
1.61: Energy Expectation Values and the Origin of the Variation Principle
A system is in the state $|\Psi\rangle$, which is not an eigenfunction of the energy operator, $\hat{H}$. A statistically meaningful number of such states are available for the purpose of measuring the energy. Quantum mechanical principles state that an energy measurement must yield one of the energy eigenvalues, $\epsilon_{i}$, of the energy operator. Therefore, the average value of the energy measurements is calculated as,
$\langle E\rangle=\frac{\sum_{i} n_{i} \varepsilon_{i}}{N} \tag{1} \nonumber$
where ni is the number of times $\epsilon_{i}$ is measured, and N is the total number of measurements. Therefore, pi = ni/N, is the probability that $\epsilon_{i}$ will be observed. Equation (1) becomes
$\langle E\rangle=\sum_{i} p_{i} \varepsilon_{i} \geq \varepsilon_{1}=\varepsilon_{g s} \tag{2} \nonumber$
where gs stands for ground state. As shown above, it is clear that the average energy has to be greater than (p1 < 1) or equal to (p1 = 1) the lowest energy. This is the origin of the quantum mechanical variational theorem.
According to quantum mechanics, for a system in the state $|\Psi\rangle, p_{i}=\langle\Psi | i\rangle\langle i | \Psi\rangle$, where the $|i\rangle$ are the eigenfunctions of the energy operator. Equation (2) can now be re-written as,
$\langle E\rangle=\sum_{i}\langle\Psi | i\rangle\langle i | \Psi\rangle \varepsilon_{i}=\sum_{i}\langle\Psi | i\rangle \varepsilon_{i}\langle i | \Psi\rangle \tag{3} \nonumber$
However, it is also true that
$\hat{H}|i\rangle=\varepsilon_{i}|i\rangle=|i\rangle \varepsilon_{i} \tag{4} \nonumber$
Substitution of equation (4) into (3) yields
$\langle E\rangle=\sum_{i}\langle\Psi|\hat{H}| i\rangle\langle i | \Psi\rangle \tag{5} \nonumber$
As eigenfunctions of the energy operator, the $|i\rangle$ form a complete basis set, making available the discrete completeness relation, $\sum_{i}| i \rangle\langle i|=1$, the use of which in equation (5) yields
$\langle E\rangle=\langle\Psi|\hat{H}| \Psi\rangle \geq \varepsilon_{g s} \tag{6} \nonumber$
Chemists generally evaluate expectation values in coordinate space, so we now insert the continuous completeness relationship of coordinate space, $\int|x\rangle\langle x| d x=1$, in equation (6) which gives us,
$\langle E\rangle=\int\langle\Psi | x\rangle\langle x|\hat{H}| \Psi\rangle d x=\int\langle\Psi | x\rangle \hat{H}(x)\langle x | \Psi\rangle d x \tag{7} \nonumber$
where
$\hat{H}(x)=-\frac{\hbar^{2}}{2 m} \frac{d^{2}}{d x^{2}}+V(x) \tag{8} \nonumber$
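Equation (2) is easy to illustrate numerically. The sketch below (Python; the particle-in-a-box spectrum and the random expansion coefficients are illustrative choices, not part of the derivation) builds a random normalized superposition and shows that its average energy lies at or above the ground-state eigenvalue.

```python
# Sketch: <E> = sum_i p_i * eps_i >= eps_gs for a random normalized superposition,
# illustrated with particle-in-a-box levels eps_n = n^2*pi^2/2 (atomic units, L = 1).
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(1, 11)
eps = n**2 * np.pi**2 / 2            # energy eigenvalues

c = rng.normal(size=n.size)          # random expansion coefficients <i|Psi>
p = c**2 / np.sum(c**2)              # probabilities p_i = |<i|Psi>|^2

E_avg = np.sum(p * eps)
print(f"<E> = {E_avg:.3f}  >=  eps_gs = {eps[0]:.3f}")
```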
1.62: Examining the Wigner Distribution Using Dirac Notation
Abstract
Expressing the Wigner distribution function in Dirac notation reveals its resemblance to a classical trajectory in phase space.
References to the Wigner distribution function [1-3] and the phase-space formulation of quantum mechanics are becoming more frequent in the pedagogical and review literature [4-26]. There have also been several important applications reported in the recent research literature [27, 28]. Other applications of the Wigner distribution are cited in Ref. 25.
The purpose of this note is to demonstrate that when expressed in Dirac notation the Wigner distribution resembles a classical phase-space trajectory. The Wigner distribution can be generated from either the coordinate- or momentum-space wave function. The coordinate-space wave function will be employed here and the Wigner transform using it is given in equation (1) for a one-dimensional example in atomic units.
$W(p, x)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \Psi^{*}\left(x+\frac{s}{2}\right) \Psi\left(x-\frac{s}{2}\right) e^{i p s} d s \tag{1} \nonumber$
In Dirac notation the first two terms of the integrand are written as follows,
$\Psi^{*}\left(x+\frac{s}{2}\right)=\left\langle\Psi | x+\frac{s}{2}\right\rangle \qquad \Psi\left(x-\frac{s}{2}\right)=\left\langle x-\frac{s}{2} | \Psi\right\rangle \tag{2} \nonumber$
Assigning 1/2$\pi$ to the third term and utilizing the momentum eigenfunction in coordinate space and its complex conjugate we have,
$\frac{1}{2 \pi} \mathrm{e}^{i p s}=\frac{1}{\sqrt{2 \pi}} \mathrm{e}^{i p\left(x+\frac{s}{2}\right)} \frac{1}{\sqrt{2 \pi}} \mathrm{e}^{-i p\left(x-\frac{s}{2}\right)}=\left\langle x+\frac{s}{2} | p\right\rangle\left\langle p | x-\frac{s}{2}\right\rangle \tag{3} \nonumber$
Substituting equations (2) and (3) into equation (1) yields, after rearrangement,
$W(x, p)=\int_{-\infty}^{+\infty}\left\langle\Psi | x+\frac{s}{2}\right\rangle\left\langle x+\frac{s}{2} | p\right\rangle\left\langle p | x-\frac{s}{2}\right\rangle\left\langle x-\frac{s}{2} | \Psi\right\rangle d s \tag{4} \nonumber$
The four Dirac brackets are read from right to left as follows: (1) is the amplitude that a particle in the state $\Psi$ has position (x - s/2); (2) is the amplitude that a particle with position (x - s/2) has momentum p; (3) is the amplitude that a particle with momentum p has position (x + s/2); (4) is the amplitude that a particle with position (x + s/2) is (still) in the state $\Psi$. Thus, in Dirac notation the integrand is the quantum equivalent of a classical phase-space trajectory for a quantum system in the state $\Psi$.
Integration over s creates a superposition of all possible quantum trajectories of the state Ψ, which interfere constructively and destructively, providing a quasi-probability distribution in phase space. As an example, the Wigner probability distribution for a double-slit experiment is shown in the figure below [14, 27]. The oscillating positive and negative values in the middle of the Wigner distribution signify the interference associated with a quantum superposition, distinguishing it from a classical phase-space probability distribution. In the words of Leibfried et al. [14], the Wigner function is a “mathematical construct for visualizing quantum trajectories in phase space.”
Wigner distribution function for the double-slit experiment.
The Wigner double- and triple-slit distribution functions are calculated in the following tutorials.
Wigner Distribution for the Double Slit Experiment
Wigner Distribution for the Triple Slit Experiment
Examples of the generation and use of the Wigner distribution are available in the following tutorials.
Wigner Distribution for the Particle in a Box
Quantum Calculations on the Hydrogen Atom in Coordinate, Momentum and Phase Space
Variation Method Using the Wigner Function: The Feshbach Potential
The Wigner Distribution Function for the Harmonic Oscillator
Given the quantum number, this Mathcad file calculates the Wigner distribution function for the specified harmonic oscillator eigenstate.
Quantum number: $\mathrm{n} :=2$
Harmonic oscillator eigenstate:
$\Psi(x) :=\frac{1}{\sqrt{2^{n} \cdot n ! \sqrt{\pi}}} \cdot \operatorname{Her}(n, x) \cdot \exp \left(-\frac{x^{2}}{2}\right) \nonumber$
Calculate the Wigner distribution:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \nonumber$
Display the Wigner distribution:
$\mathrm{N} :=80 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad x_{i} :=-4+\frac{8 \cdot i}{N} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-5+\frac{10 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i, j}:=W\left(x_{i}, p_{j}\right) \nonumber$
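The same grid can be generated outside Mathcad. The Python/SciPy sketch below is an illustrative stand-in for the worksheet; because the eigenfunction is real, only the cosine part of exp(i·s·p) survives the integration.

```python
# Sketch: evaluate the Wigner function defined above for the n = 2 oscillator state
# on a small grid by numerical quadrature.
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from math import factorial, pi, sqrt

n = 2
def psi(x):
    c = np.zeros(n + 1); c[n] = 1.0                  # physicists' Hermite H_n
    return hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2**n * factorial(n) * sqrt(pi))

def W(x, p):
    f = lambda s: psi(x + s / 2) * np.cos(s * p) * psi(x - s / 2)
    return quad(f, -10, 10, limit=200)[0] / pi**1.5

xs = np.linspace(-4, 4, 41)
ps = np.linspace(-5, 5, 41)
wigner = np.array([[W(x, p) for p in ps] for x in xs])
print(wigner.shape, wigner.min(), wigner.max())      # negative values flag non-classicality
```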
Phase-space quantum mechanical calculations using the Wigner distribution are compared with coordinate- and momentum-space calculations in the following tutorial.
The Cliff Notes version of the above can be accessed in the following tutorial.
Literature cited:
[1] E. P. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749 – 759 (1932).
[2] M. Hillery, R. F. O’Connell, M. O. Scully, and E. P. Wigner, “Distribution functions in physics: Fundamentals,” Phys. Rep. 106, 121 – 167 (1984).
[3] Y. S. Kim and E. P. Wigner, “Canonical transformations in quantum mechanics,” Am. J. Phys. 58, 439 – 448 (1990).
[4] J. Snygg, “Wave functions rotated in phase space,” Am. J. Phys. 45, 58 – 60 (1977).
[5] N. Mukunda, “Wigner distribution for angle coordinates in quantum mechanics,” Am. J. Phys. 47, 192 – 187 (1979).
[6] S. Stenholm, “The Wigner function: I. The physical interpretation,” Eur. J. Phys. 1, 244 – 248 (1980).
[7] G. Mourgues, J. C. Andrieux, and M. R. Feix, “Solutions of the Schrödinger equation for a system excited by a time Dirac pulse of pulse of potential. An example of the connection with the classical limit through a particular smoothing of the Wigner function,” Eur. J. Phys. 5, 112 – 118 (1984).
[8] M. Casas, H. Krivine, and J. Martorell, “On the Wigner transforms of some simple systems and their semiclassical interpretations,” Eur. J. Phys. 12, 105 – 111 (1991).
[9] R. A. Campos, “Correlation coefficient for incompatible observables of the quantum mechanical harmonic oscillator,” Am. J. Phys. 66, 712 – 718 (1998).
[10] H-W Lee, “Spreading of a free wave packet,” Am. J. Phys. 50, 438 – 440 (1982).
[11] D. Home and S. Sengupta, “Classical limit of quantum mechanics,” Am. J. Phys. 51, 265 – 267 (1983).
[12] W. H. Zurek, “Decoherence and the transition from quantum to classical,” Phys. Today 44, 36 – 44 (October 1991).
[13] M. C. Teich and B. E. A. Saleh, “Squeezed and antibunched light,” Phys. Today 43, 26 – 34 (June 1990).
[14] D. Leibfried, T. Pfau, and C. Monroe, “Shadows and mirrors: Reconstructing quantum states of motion,” Phys. Today 51, 22 – 28 (April 1998).
[15] W. P. Schleich and G. Süssmann, “A jump shot at the Wigner distribution,” Phys. Today 44, 146 – 147 (October 1991).
[16] R. A. Campos, “Correlation coefficient for incompatible observables of the quantum harmonic oscillator,” Am. J. Phys. 66, 712 – 718 (1998).
[17] R. A. Campos, “Wigner quasiprobability distribution for quantum superpositions of coherent states, a Comment on ‘Correlation coefficient for incompatible observables of the quantum harmonic oscillator,’” Am. J. Phys. 67, 641 – 642 (1999).
[18] C. C. Gerry and P. L. Knight, “Quantum superpositions and Schrödinger cat states in quantum optics,” Am. J. Phys. 65, 964 – 974 (1997).
[19] K. Ekert and P. L. Knight, “Correlations and squeezing of two-mode oscillations,” Am. J. Phys. 57, 692 – 697 (1989).
[20] P. J. Price, “Quantum hydrodynamics and virial theorems,” Am. J. Phys. 64, 446 – 448 (1995).
[21] M. G. Raymer, “Measuring the quantum mechanical wave function,” Contemp. Phys. 38, 343 – 355 (1997).
[22] H-W Lee, A. Zysnarski, and P. Kerr, “One-dimensional scattering by a locally periodic potential,” Am. J. Phys. 57, 729 – 734 (1989).
[23] A. Royer, “Why are the energy levels of the quantum harmonic oscillator equally spaced?” Am. J. Phys. 64, 1393 – 1399 (1996).
[24] D. F. Styer, et al., “Nine formulations of quantum mechanics,” Am. J. Phys. 70, 288 – 297 (2002).
[25] M. Belloni, M. A. Doncheski, and R. W. Robinett, “Wigner quasi-probability distribution for the infinite square well: Energy eigenstates and time-dependent wave packets,” Am. J. Phys. 72, 1183 – 1192 (2004).
[26] W. B. Case, “Wigner functions and Weyl transforms for pedestrians,” Am. J. Phys. 76, 937 – 946 (2008).
[27] Ch. Kurtsiefer, T. Pfau, and J. Mlynek, “Measurement of the Wigner function of an ensemble of helium atoms,” Nature 386, 150 – 153 (1997).
[28] W. H. Zurek, “Sub-Planck structure in phase space and its relevance for quantum decoherence,” Nature 412, 712 – 717 (2001).
1.63: The Wigner Function for the Single Slit Diffraction Problem
The quantum mechanical interpretation of the single‐slit experiment is that position is measured at the slit screen and momentum is measured at the detection screen. Position and momentum are conjugate observables connected by a Fourier transform and governed by the uncertainty principle. Knowing the slit screen geometry makes it possible to calculate the momentum distribution at the detection screen.
The slit-screen geometry, and therefore the coordinate wavefunction, is specified as follows.
Slit width: $w : = 2$
Coordinate‐space wave function:
$\Psi(x, w) :=\text { if }\left[\left(x \geq-\frac{w}{2}\right) \cdot\left(x \leq \frac{w}{2}\right), 1,0\right] \nonumber$
$x :=\frac{-w}{2}, \frac{-w}{2}+.005 \ldots \frac{w}{2} \nonumber$
A Fourier transform of the coordinate‐space wave function yields the momentum wave function and the momentum distribution function, which is the diffraction pattern.
$\Phi\left(\mathrm{p}_{\mathrm{X}}, \mathrm{w}\right) :=\frac{1}{\sqrt{2 \cdot \pi \cdot \mathrm{w}}} \cdot \int_{-\frac{\mathrm{w}}{2}}^{\frac{\mathrm{w}}{2}} \exp \left(-\mathrm{i} \cdot \mathrm{p}_{\mathrm{X}} \cdot \mathrm{x}\right) \mathrm{d} \mathrm{x} \; \text{simplify} \rightarrow 2^{\frac{1}{2}} \cdot \frac{\sin \left(\frac{1}{2} \cdot w \cdot p_{x}\right)}{\pi^{\frac{1}{2}} \cdot w^{\frac{1}{2}} \cdot p_{x}} \nonumber$
The Wigner function for the single-slit screen geometry is generated using the momentum wave function. (Fifty is effectively infinity and is therefore used for the limits of integration.)
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \cdot \int_{-50}^{50} \overline{\Phi\left(\mathrm{p}+\frac{\mathrm{s}}{2}, \mathrm{w}\right)} \cdot \exp (-\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi\left(\mathrm{p}-\frac{\mathrm{s}}{2}, \mathrm{w}\right) \mathrm{ds} \nonumber$
The single‐slit Wigner function is displayed graphically.
$\mathrm{N} :=150 \quad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-1.5+\frac{3 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-20+\frac{40 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i,j} : = W \left(x_{i}, p_{j}\right) \nonumber$
1.64: Wigner Distribution for the Double Slit Experiment
The quantum mechanical interpretation of the double‐slit experiment is that position is measured at the slit screen and momentum is measured at the detection screen. Position and momentum are conjugate observables connected by a Fourier transform and governed by the uncertainty principle. Knowing the slit screen geometry makes it possible to calculate the momentum distribution at the detection screen.
The slit‐screen geometry and therefore the coordinate wavefunction is modeled as a superposition of two Gaussian functions.
$\Psi(x) :=\exp \left[-4 \cdot(x-3)^{2}\right]+\exp \left[-4 \cdot(x+3)^{2}\right] \nonumber$
The coordinate wavefunction is Fourier transformed into momentum space to yield the diffraction pattern. Note that this calculation is in agreement with the well‐known double slit diffraction pattern.
$\Phi(p) :=\frac{1}{\sqrt{2 \cdot \pi}} \int_{-6}^{6} \exp (-\mathrm{i} \cdot p \cdot x) \cdot \Psi(x) d x \nonumber$
The Wigner function is a phase‐space distribution that is obtained by the Fourier transform of either the coordinate or momentum wavefunction. We use the coordinate wavefunction.
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \nonumber$
$\mathrm{N} :=100 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}}=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}}=-6+\frac{12 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{\mathrm{i}, \mathrm{j}}:=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
The Wigner distribution is frequently called a quasi-probability distribution because, as can be seen in the display above, it can take on negative values. Integration of the Wigner function with respect to momentum recovers the coordinate probability distribution, and integration with respect to position yields the momentum probability distribution.
$\Psi_{\mathrm{w}}(\mathrm{x}) :=\int_{-6}^{6} \mathrm{W}(\mathrm{x}, \mathrm{p}) \text { dp } \quad \mathrm{x} :=-5,-4.98\ldots5 \ \Phi_{\mathrm{w}}(\mathrm{p}) :=\int_{-5}^{5} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{dx} \quad \mathrm{p} :=-6,-5.98\ldots6 \nonumber$
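A short Python/SciPy sketch (not part of the Mathcad worksheet; the sample points and integration ranges are illustrative) confirms that the momentum integral of this Wigner function is proportional to the coordinate distribution. With the 1/π^(3/2) prefactor used above, the proportionality constant works out to 2/√π.

```python
# Sketch: spot check that integrating the double-slit Wigner function (as defined
# above) over momentum gives a quantity proportional to |Psi(x)|^2.
import numpy as np
from scipy.integrate import quad

def psi(x):
    return np.exp(-4 * (x - 3)**2) + np.exp(-4 * (x + 3)**2)

def W(x, p):
    # Psi is real, so only the cosine part of exp(i*s*p) contributes.
    f = lambda s: psi(x + s / 2) * np.cos(s * p) * psi(x - s / 2)
    return quad(f, -12, 12, limit=200)[0] / np.pi**1.5

for x in (-3.0, -1.5, 0.0, 1.5, 3.0):
    marginal = quad(lambda p: W(x, p), -12, 12, limit=200)[0]   # integrate over p
    target = 2 / np.sqrt(np.pi) * psi(x)**2
    print(f"x = {x:4.1f}   int W dp = {marginal:7.4f}   (2/sqrt(pi))*|Psi|^2 = {target:7.4f}")
```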
The Wigner distribution can be reconstructed from experimental measurements using quantum state tomography. Reconstructive tomography is a widely used technique in medicine, for example, for obtaining the shape of an inaccessible two‐dimensional object from a set of different one‐dimensional ʺshadowsʺ cast by that object.
Quantum state reconstruction is possible if a system can be prepared repeatedly in the same state. Subsequent measurements on such a system are then effectively multiple measurements on the same quantum state. The theoretical Wigner distribution shown above for the double-slit experiment has been reconstructed experimentally for helium atoms. [See Nature, 386, 150 (1997).]
See Figure 1 of "Shadows and Mirrors: Reconstructing Quantum States of Atom Motion," Physics Today, April 1998, by Leibfried, Pfau, and Monroe.
Reference: Decoherence and the Transition from Quantum to Classical, Wojciech Zurek, Physics Today, October 1991, pages 36-44.
1.65: Wigner Distribution for the Triple Slit Experiment
The quantum mechanical interpretation of the triple‐slit experiment is that position is measured at the slit screen and momentum is measured at the detection screen. Position and momentum are conjugate observables connected by a Fourier transform and governed by the uncertainty principle. Knowing the slit screen geometry makes it possible to calculate the momentum distribution at the detection screen.
The slit‐screen geometry and therefore the coordinate wavefunction is modeled as a superposition of three Gaussian functions.
$\Psi(x) :=\exp \left[-4 \cdot(x-3)^{2}\right]+\exp \left(-4 \cdot x^{2}\right)+\exp \left[-4 \cdot(x+3)^{2}\right] \nonumber$
The coordinate wavefunction is Fourier transformed into momentum space to yield the diffraction pattern. Note that this calculation is in agreement with the expectation that the number of minor maxima appearing between the major maxima is given by the number of slits minus 2.
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \int_{-6}^{6} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}) \mathrm{dx} \nonumber$
The Wigner function is a phase‐space distribution that is obtained by the Fourier transform of either the coordinate or momentum wavefunction. We use the coordinate wavefunction.
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \ \mathrm{N} :=100 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-6+\frac{12 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i, j} :=\text{W}\left(x_{i}, p_{j}\right) \nonumber$
The Wigner distribution is frequently called a quasi-probability distribution because, as can be seen in the display above, it can take on negative values. Integration of the Wigner function with respect to momentum recovers the coordinate probability distribution, and integration with respect to position yields the momentum probability distribution.
$\Psi_{\mathrm{w}}(\mathrm{x}) :=\int_{-6}^{6} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{dp} \qquad \mathrm{x} :=-5,-4.98 \ldots 5 \ \Phi_{\mathrm{w}}(\mathrm{p}) :=\int_{-5}^{5} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{dx} \quad \mathrm{p} :=-6,-5.98 \ldots 6 \nonumber$
1.66: Wigner Distribution for the Quadruple Slit Experiment
The quantum mechanical interpretation of the quadruple‐slit experiment is that position is measured at the slit screen and momentum is measured at the detection screen. Position and momentum are conjugate observables connected by a Fourier transform and governed by the uncertainty principle. Knowing the slit screen geometry makes it possible to calculate the momentum distribution at the detection screen.
The slit-screen geometry, and therefore the coordinate wavefunction, is modeled as a superposition of four Gaussian functions.

$\Psi(x) :=\exp \left[-4 \cdot(x-3)^{2}\right]+\exp \left[-4 \cdot(x-1)^{2}\right]+\exp \left[-4 \cdot(x+1)^{2}\right]+\exp \left[-4 \cdot(x+3)^{2}\right] \nonumber$
The coordinate wavefunction is Fourier transformed into momentum space to yield the diffraction pattern. Note that this calculation is in agreement with the expectation that the number of minor maxima appearing between the major maxima is given by the number of slits minus 2.
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-8}^{8} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}) \mathrm{d} \mathrm{x} \nonumber$
The Wigner function is a phase‐space distribution that is obtained by the Fourier transform of either the coordinate or momentum wavefunction. We use the coordinate wavefunction.
$W(x, p) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(x+\frac{s}{2}\right) \cdot \exp (i \cdot s \cdot p) \cdot \Psi\left(x-\frac{s}{2}\right) d s \nonumber$
$\mathrm{N} :=100 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-6+\frac{12 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i, j} :=W\left(x_{i}, p_{j}\right) \nonumber$
The Wigner distribution is frequently called a quasi-probability distribution because, as can be seen in the display above, it can take on negative values. Integration of the Wigner function with respect to momentum recovers the coordinate probability distribution, and integration with respect to position yields the momentum probability distribution.
$\Psi_{\mathrm{w}}(\mathrm{x}) :=\int_{-6}^{6} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{p} \qquad \mathrm{x} :=-5,-4.98 \ldots 5 \nonumber$
$\Phi_{\mathrm{w}}(\mathrm{p}) :=\int_{-5}^{5} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{x} \qquad \mathrm{p} :=-6,-5.98 \ldots 6 \nonumber$
1.67: Quantum Tunneling in Coordinate, Momentum and Phase Space
A study of quantum mechanical tunneling brings together the classical and quantum mechanical points of view. In this tutorial the harmonic oscillator will be used to analyze tunneling in coordinate-, momentum- and phase-space. The Appendix provides the position and momentum operators appropriate for these three representations.
The classical equation for the energy of a harmonic oscillator is,
$\mathrm{E}=\frac{\mathrm{p}^{2}}{2 \cdot \mu}+\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \nonumber$
The quantum mechanical counterpart is Schrödinger's equation (in atomic units, h = 2$\pi$),
$\frac{-1}{2 \cdot \mu} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \nonumber$
In atomic units the quantum mechanical wave function in coordinate space for the harmonic oscillator ground state with reduced mass µ and force constant k is given by,
$\Psi(\mathrm{x}, \mathrm{k}, \mu) :=\left(\frac{\sqrt{\mathrm{k} \cdot \mu}}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\sqrt{\mathrm{k} \cdot \mu} \cdot \frac{\mathrm{x}^{2}}{2}\right) \nonumber$
In the interest of mathematical simplicity and expediency we will use k = µ =1. The normalized ground state wave function under these conditions is,
$\Psi(x) :=\left(\frac{1}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(\frac{-x^{2}}{2}\right) \qquad \int_{-\infty}^{\infty} \Psi(x)^{2} d x=1 \nonumber$
Solving Schrödinger's equation for this wave function yields a ground state energy of 0.5 in atomic units.
$\frac{-1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\frac{1}{2} \cdot \mathrm{x}^{2} \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \text { solve, } \mathrm{E} \rightarrow \frac{1}{2} \nonumber$
Classically a harmonic oscillator, like a pendulum, has a turning point when kinetic energy is zero and the pendulum bob changes direction. The turning point is calculated as follows using the classical expression for the energy.
$\frac{1}{2}=\frac{1}{2} \cdot \mathrm{x}^{2} \text { solve, } \mathrm{x} \rightarrow\left(\begin{array}{c}{1} \ {-1}\end{array}\right) \nonumber$
Thus, the permissible range of position values is between -1 and +1. Position values outside this range are classically forbidden. However, quantum theory permits the oscillator to be found at positions where the potential energy exceeds the total energy. This is referred to as quantum tunneling. The probability that the oscillator is in the tunneling region is calculated below.
$2 \cdot \int_{1}^{\infty} \Psi(x)^{2} d x=0.157 \nonumber$
Next we move to a similar calculation in momentum space. First the coordinate wave function is Fourier transformed into momentum space and normalization is demonstrated.
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}) \mathrm{dx} \rightarrow \frac{1}{\pi^{\frac{1}{4}}} \cdot \mathrm{e}^{\frac{-1}{2} \cdot \mathrm{p}^{2}} \qquad \int_{-\infty}^{\infty}(|\Phi(p)|)^{2} d p=1 \nonumber$
Solving Schrödinger's equation in momentum space naturally gives the same energy eigenvalue.
$\frac{\mathrm{p}^{2}}{2} \cdot \Phi(\mathrm{p})-\frac{1}{2} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dp}^{2}} \Phi(\mathrm{p})=\mathrm{E} \cdot \Phi(\mathrm{p}) \text { solve, } \mathrm{E} \rightarrow \frac{1}{2} \nonumber$
And we find that the classically permissible range of momentum values is the same given the reduced mass and force constant values used in these calculations.
$\frac{1}{2}=\frac{\mathrm{p}^{2}}{2} \text { solve, } \mathrm{p} \rightarrow\left(\begin{array}{c}{1} \ {-1}\end{array}\right) \nonumber$
Next we see that the tunneling probability in momentum space is the same as it is in coordinate space.
$2 \cdot \int_{1}^{\infty} \Phi(\mathrm{p})^{2} \mathrm{dp}=0.157 \nonumber$
Moving to phase space requires a distribution function that depends on both position and momentum. The Wigner function fits these requirements and is generated here using both the coordinate and momentum wave functions. Please see “Examining the Wigner Distribution Using Dirac Notation,” arXiv: 0912.2333 (2009) for further detail.
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \rightarrow \frac{1}{\pi} \cdot \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \nonumber$
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \int_{-\infty}^{\infty} \Phi\left(\mathrm{p}+\frac{\mathrm{s}}{2}\right) \cdot \exp (-\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi\left(\mathrm{p}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \rightarrow \frac{1}{\pi} \cdot \mathrm{e}^{\left(-\mathrm{x}^{2}\right)-\mathrm{p}^{2}} \nonumber$
The Wigner function is normalized over position and momentum, and yields the appropriate energy expectation value for the ground state of the harmonic oscillator.
$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W(x, p) d x d p=1 \qquad \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\left(\frac{p^{2}}{2}+\frac{x^{2}}{2}\right) \cdot W(x, p) d x d p=0.5 \nonumber$
The probability of finding the oscillator simultaneously in the classically forbidden regions of both position and momentum is calculated in phase space as follows (the factor of four accounts for the two tails in x and the two tails in p):

$4 \cdot \int_{1}^{\infty} \int_{1}^{\infty} \mathrm{W}(\mathrm{x}, \mathrm{p}) \mathrm{dx} \mathrm{dp}=0.025 \nonumber$

This agrees with the product of the separate coordinate- and momentum-space tunneling probabilities, each of which is 0.157.
$0.157 \cdot 0.157=0.025 \nonumber$
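The three tunneling probabilities can be checked with a few lines of Python/SciPy (a sketch outside the Mathcad worksheet; the finite upper limit of 10 plays the role of infinity in the double integral).

```python
# Sketch: tunneling probabilities for the harmonic-oscillator ground state
# (k = mu = 1, atomic units) in coordinate, momentum and phase space.
import numpy as np
from scipy.integrate import quad, dblquad

psi2 = lambda x: np.exp(-x**2) / np.sqrt(np.pi)        # |Psi(x)|^2 = |Phi(p)|^2
W = lambda p, x: np.exp(-x**2 - p**2) / np.pi          # Wigner function

P_x = 2 * quad(psi2, 1, np.inf)[0]                     # P(|x| > 1)
P_p = 2 * quad(psi2, 1, np.inf)[0]                     # P(|p| > 1), identical by symmetry
P_xp = 4 * dblquad(W, 1, 10, 1, 10)[0]                 # P(|x| > 1 and |p| > 1)

print(f"coordinate: {P_x:.3f}  momentum: {P_p:.3f}  phase space: {P_xp:.3f}")
print(f"product check: {P_x * P_p:.3f}")
```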
Appendix
The table lists the forms of the position and momentum operators in the coordinate, momentum and phase space representations. Clearly the multiplicative character of the phase space operators appeals to our classical prejudices and intuition. However, we must remind ourselves that the phase space distribution function on which they "operate" is generated from either the coordinate or momentum wave function. In the coordinate representation the momentum operator is differential; in the momentum representation the coordinate operator is differential. As is shown in other tutorials in this series, the apparent "classical character" of the phase space representation only temporarily hides the underlying quantum weirdness.
$\begin{pmatrix} \text{Operator} & \text{Coordinate Space} & \text{Momentum Space} & \text{Phase Space} \ \text{position} & x \cdot \Box & i \cdot \frac{d}{dp} \Box & x \cdot \Box \ \text{momentum} & \frac{1}{i} \cdot \frac{d}{dx} \Box & p \cdot \Box & p \cdot \Box \end{pmatrix} \nonumber$
1.68: Another Look at the Wigner Function
The Wigner function, W(x,p), is a phase space distribution function which behaves similarly to the coordinate $\left(|\Psi(x)|^{2}\right)$ and momentum $\left(|\tilde{\Psi}(p)|^{2}\right)$ distribution functions. For example, its integral over phase space is normalized.
$\iint W(x, p) d x d p=1 \nonumber$
In phase space, position and momentum are represented by multiplicative operators, so the calculation of their expectation values has a classical appearance. This, naturally, is part of the appeal of phase space quantum mechanical calculations.
$\langle x\rangle=\iint x W(x, p) d x d p \nonumber$
$\langle p\rangle=\iint p W(x, p) d x d p \nonumber$
While the Wigner function is real, unlike $|\Psi(x)|^{2}$ and $|\tilde{\Psi}(p)|^{2}$, it can take on negative values making it impossible to interpret it as a genuine probability distribution function. For this reason it is frequently referred to as a quasi-probability function, and loses some of its classical appeal. In any case, the Wigner function is redundant in the sense that it is generated from a Schrödinger coordinate or momentum wave equation.
In what follows, the quantum mechanical Wigner distribution function will be rationalized by reference to familiar classical concepts, such as position, momentum and trajectory.
In classical physics, a trajectory is a temporal sequence of position and momentum states. Let us try to represent a classical trajectory in a quantum mechanical formalism. Suppose a quantum mechanical object, a quon (thank you Nick Herbert), in state |$\Psi$> moves from position x –s/2 to position x + s/2. We might represent this transition quantum mechanically as the product of two coordinate space probability amplitudes (reading from left to right).
$\langle x- \frac{s}{2} | \Psi\rangle\langle\Psi | x+ \frac{s}{2} \rangle \nonumber$
Thus far we have a coordinate representation of a transition from one spatial location to another. However, a phase space description also requires a dynamic (or motional) parameter such as momentum. We can introduce momentum by first rearranging the above product of amplitudes as follows.
$\langle\Psi | x+ \frac{s}{2} \rangle\langle x- \frac{s}{2} | \Psi\rangle \nonumber$
This convolution of positional states takes on the coherent character of a trajectory with the insertion of the following momentum projector (see Feynman Lectures Volume 3) coupling the two spatial states.
$\langle x+ \frac{s}{2} | p\rangle\langle p | x- \frac{s}{2} \rangle \nonumber$
This gives us a quantum trajectory expressed in the following product of Dirac brackets,
$\langle\Psi | x+ \frac{s}{2} \rangle\langle x+ \frac{s}{2} | p\rangle\langle p | x- \frac{s}{2} \rangle\langle x- \frac{s}{2} | \Psi\rangle \nonumber$
The four Dirac brackets are now read from right to left as follows: (1) is the amplitude that a particle in the state $\Psi$ has position (x - $\frac{s}{2}$); (2) is the amplitude that a particle with position (x - $\frac{s}{2}$) has momentum p; (3) is the amplitude that a particle with momentum p has position (x + $\frac{s}{2}$); (4) is the amplitude that a particle with position (x + $\frac{s}{2}$) is (still) in the state $\Psi$.
Integration over s yields the Wigner distribution function, which is a superposition of all possible quantum trajectories of the state $\Psi$, which interfere constructively and destructively, providing a quasi-probability distribution in phase space.
$\int\langle\Psi | x+ \frac{s}{2} \rangle\langle x+ \frac{s}{2} | p\rangle\langle p | x- \frac{s}{2} \rangle\langle x- \frac{s}{2} | \Psi\rangle d s = \frac{1}{h} \int \Psi(x+ \frac{s}{2})^{*} \exp \left(i \frac{p s}{\hbar}\right) \Psi(x- \frac{s}{2}) d s \nonumber$
given that
$\langle x+ \frac{s}{2} | p\rangle\langle p | x- \frac{s}{2}\rangle=\frac{1}{\sqrt{h}} \exp \left(i \frac{p(x+ \frac{s}{2})}{\hbar}\right) \frac{1}{\sqrt{h}} \exp \left(-i \frac{p(x-\frac{s}{2})}{\hbar}\right)=\frac{1}{h} \exp \left(i \frac{p s}{\hbar}\right) \nonumber$
While the Wigner distribution is more than a quantum mechanical curiosity and plays an important role in current research (see references below), it is also true, as mentioned above, that it is redundant because it is generated from either a coordinate or momentum wave function. In Dan Styer’s words it is useful in exploring the quantum/classical transition, but it does not eliminate quantum weirdness – it simply repackages it (see reference 12).
Having said this it should be acknowledged that the Wigner phase-space distribution has been measured for the double slit experiment using tomographic techniques (see references 17-19).
Literature references to the Wigner distribution function:
1. E. P. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749 – 759 (1932).
2. M. Hillery, R. F. O’Connell, M. O. Scully, and E. P. Wigner, “Distribution functions in physics: Fundamentals,” Phys. Rep. 106, 121 – 167 (1984).
3. Y. S. Kim and E. P. Wigner, “Canonical transformations in quantum mechanics,” Am. J. Phys. 58, 439 – 448 (1990).
4. J. Snygg, “Wave functions rotated in phase space,” Am. J. Phys. 45, 58 – 60 (1977).
5. J. Snygg, “Uses of operator functions to construct refined correspondence principle via the quantum mechanics of Wigner and Moyal,” Am. J. Phys. 48, 964 – 970 (1980).
6. N. Mukunda, “Wigner distribution for angle coordinates in quantum mechanics,” Am. J. Phys. 47, 192 – 187 (1979).
7. S. Stenholm, “The Wigner function: I. The physical interpretation,” Eur. J. Phys. 1, 244 – 248 (1980).
8. G. Mourgues, J. C. Andrieux, and M. R. Feix, “Solutions of the Schrödinger equation for a system excited by a time Dirac pulse of pulse of potential. An example of the connection with the classical limit through a particular smoothing of the Wigner function,” Eur. J. Phys. 5, 112 – 118 (1984).
9. M. Casas, H. Krivine, and J. Martorell, “On the Wigner transforms of some simple systems and their semiclassical interpretations,” Eur. J.Phys. 12, 105 – 111 (1991).
10. R. A. Campos, “Correlation coefficient for incompatible observables of the quantum mechanical harmonic oscillator,” Am. J. Phys. 66, 712 – 718 (1998).
11. M. Belloni, M. A. Doncheski, and R. W. Robinett, “Wigner quasi-probability distribution for the infinite square well: Energy eigenstates and time-dependent wave packets,” Am. J. Phys. 72, 1183 – 1192 (2004).
12. D. F. Styer, et al., “Nine formulations of quantum mechanics,” Am. J. Phys. 70, 288 – 297 (2002).
13. H-W Lee, “Spreading of a free wave packet,” Am. J. Phys. 50, 438 – 440 (1982).
14. D. Home and S. Sengupta, “Classical limit of quantum mechanics,” Am. J. Phys. 51, 265 – 267 (1983).
15. W. H. Zurek, “Decoherence and the transition from quantum to classical,” Phys. Today 44, 36 – 44 (October 1991).
16. M. C. Teich and B. E. A. Saleh, “Squeezed and antibunched light,” Phys. Today 43, 26 – 34 (June 1990).
17. Ch. Kurtsiefer, T. Pfau, and J.Mlynek, “Measurement of the Wigner function of an ensemble of helium atoms,” Nature 386, 150-153 (1997).
18. M. Freyberger and W. P. Schleich, “True vision of a quantum state,” Nature 386, 121-122 (1997).
19. D. Leibfried, T. Pfau, and C. Monroe, “Shadows and mirrors: Reconstructing quantum states of motion,” Phys. Today 51, 22 – 28 (April 1998).
20. W. P. Schleich and G. Süssmann, “A jump shot at the Wigner distribution,” Phys. Today 44, 146 – 147 (October 1991).
21. R. A. Campos, “Correlation coefficient for incompatible observables of the quantum harmonic oscillator,” Am. J. Phys. 66, 712 – 718 (1998).
22. R. A. Campos, “Wigner quasiprobability distribution for quantum superpositions of coherent states, a Comment on ‘Correlation coefficient for incompatible observables of the quantum harmonic oscillator,’” Am. J. Phys. 67, 641 – 642 (1999).
23. C. C. Gerry and P. L. Knight, “Quantum superpositions and Schrödinger cat states in quantum optics,” Am. J. Phys. 65, 964 – 974 (1997).
24. K. Ekert and P. L. Knight, “Correlations and squeezing of two-mode oscillations,” Am. J. Phys. 57, 692 – 697 (1989).
25. W. B. Case, “Wigner functions and Weyl transforms for pedestrians,” Am. J. Phys. 76, 937 – 946 (2008).
26. M. G. Raymer, “Measuring the quantum mechanical wave function,” Contemp. Phys. 38, 343 – 355 (1997).
27. F. Rioux, “Illuminating the Wigner function with Dirac notation,”
28. F. Rioux, “The Wigner distribution for the double-slit experiment,” www.users.csbsju.edu/~frioux/wigner/DBL-SLIT-NEW.pdf
29. F. Rioux, “Basic quantum mechanics in coordinate space, momentum space and phase space,”
30. F. Rioux, “The Wigner distribution for the harmonic oscillator,”
31. F. Rioux, “The Wigner distribution for the particle in a box,”
32. F. Rioux, “The time-dependent Wigner distribution for harmonic oscillator transitions,”
33. F. Rioux, “The Wigner distribution distinguishes between a superposition and a mixture,”
1.69: The Wigner Distribution for the Harmonic Oscillator
Given the quantum number this Mathcad file calculates the Wigner distribution function for the specified harmonic oscillator eigenstate using the coordinate wave function.
Quantum number: $n :=4$
Harmonic oscillator coordinate eigenstate:
$\Psi(\mathrm{n}, \mathrm{x}) :=\frac{1}{\sqrt{2^{\mathrm{n}} \cdot \mathrm{n} ! \sqrt{\pi}}} \cdot \operatorname{Her}(\mathrm{n}, \mathrm{x}) \cdot \exp \left(-\frac{\mathrm{x}^{2}}{2}\right) \nonumber$
Display coordinate wave function and distribution function:
Calculate Wigner distribution:
$\mathrm{W}(\mathrm{n}, \mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{n}, \mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{n}, \mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \nonumber$
Display Wigner distribution:
$\mathrm{N} :=80 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}}=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}}=-5+\frac{10 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i,j} : = W(n, x_{i}, p_{j}) \nonumber$
Calculate the momentum distribution function using the Wigner function:
$\rho(\mathrm{p}) :=\int_{-\infty}^{\infty} \mathrm{W}(\mathrm{n}, \mathrm{x}, \mathrm{p}) \mathrm{dx} \qquad \mathrm{p} :=-5,-4.95 \ldots5 \nonumber$
1.70: Wigner Distribution for the Particle in a Box
The Wigner function is a quantum mechanical phase-space quasi-probability function. It is called a quasi-probability function because it can take on negative values, which have no classical meaning in terms of probability.
The PIB eigenstates for a box of unit dimension are given by:
$\Psi(x, n) :=\sqrt{2} \cdot \sin (n \cdot \pi \cdot x) \nonumber$
For these eigenstates the Wigner distribution function is:
$\mathrm{W}(\mathrm{x}, \mathrm{p}, \mathrm{n}) :=\frac{1}{\pi} \cdot \int_{-\mathrm{x}}^{\mathrm{x}} \sqrt{2} \cdot \sin [\mathrm{n} \cdot \pi \cdot(\mathrm{x}+\mathrm{s})] \cdot \exp (2 \cdot \mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \sqrt{2} \cdot \sin [\mathrm{n} \cdot \pi \cdot(\mathrm{x}-\mathrm{s})] \mathrm{ds} \nonumber$
Integration with respect to s yields the following function:
$\mathrm{W}(\mathrm{x}, \mathrm{p}, \mathrm{n}) :=\frac{2}{\pi} \cdot\left[\frac{\sin [2 \cdot(\mathrm{p}-\mathrm{n} \cdot \pi) \cdot \mathrm{x}]}{4 \cdot(\mathrm{p}-\mathrm{n} \cdot \pi)}+\frac{\sin [2 \cdot(\mathrm{p}+\mathrm{n} \cdot \pi) \cdot \mathrm{x}]}{4 \cdot(\mathrm{p}+\mathrm{n} \cdot \pi)}-\cos (2 \cdot \mathrm{n} \cdot \pi \cdot \mathrm{x}) \cdot \frac{\sin (2 \cdot p \cdot \mathrm{x})}{2 \cdot p}\right] \nonumber$
The Wigner distribution for the nth eigenstate is calculated below:
$\mathrm{n} :=10 \qquad \mathrm{N} :=115 \qquad \mathrm{i} :=0 . . \mathrm{N} \ \mathrm{x}_{\mathrm{i}} :=\frac{\mathrm{i}}{\mathrm{N}} \qquad \mathrm{j} :=0 . . \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-40+\frac{80 \cdot \mathrm{j}}{\mathrm{N}} \nonumber$
$\text{Wigner}_{i, j} :=\operatorname{if}\left[x_{i} \leq 0.5, W\left(x_{i}, p_{j}, n\right), W\left[\left(1-x_{i}\right), p_{j}, n\right]\right] \nonumber$
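The analytic expression above has removable singularities at p = ±nπ and p = 0. The Python sketch below (an illustrative stand-in for the Mathcad grid calculation) evaluates it safely by writing each sin(a)/a term with np.sinc and applying the same mirroring rule for x > 1/2.

```python
# Sketch: evaluate the analytic particle-in-a-box Wigner function on a grid.
import numpy as np

def W(x, p, n):
    s = lambda a: np.sinc(a / np.pi)                     # sin(a)/a, safe at a = 0
    return (2 / np.pi) * ((x / 2) * s(2 * (p - n * np.pi) * x)
                          + (x / 2) * s(2 * (p + n * np.pi) * x)
                          - np.cos(2 * n * np.pi * x) * x * s(2 * p * x))

def W_box(x, p, n):                                      # mirror rule for x > 1/2
    return W(x, p, n) if x <= 0.5 else W(1 - x, p, n)

n, N = 10, 115
xs = np.linspace(0, 1, N + 1)
ps = np.linspace(-40, 40, N + 1)
wigner = np.array([[W_box(x, p, n) for p in ps] for x in xs])
print(wigner.shape, wigner.min(), wigner.max())
```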
Integration of the Wigner function over the spatial coordinate yields the momentum distribution function as is shown below.
$\rho(\mathrm{p}) :=\int_{0}^{1} \mathrm{W}(\mathrm{x}, \mathrm{p}, \mathrm{n}) \mathrm{dx} \qquad \mathrm{p} :=-40,-39.5 \ldots40 \nonumber$
Integration of the Wigner function over the momentum coordinate yields the spatial distribution function as is shown below.
$\rho(\mathrm{x}) :=\int_{-50}^{50} \mathrm{W}(\mathrm{x}, \mathrm{p}, \mathrm{n}) \mathrm{dp} \qquad \mathrm{x} :=0, 0.01 \ldots 1 \nonumber$
The Wigner distribution can be used to calculate the expectation values for position, momentum and kinetic energy.
$\mathrm{x}_{\mathrm{bar}}=\int_{-\infty}^{\infty} \int_{0}^{1} \mathrm{W}(\mathrm{x}, \mathrm{p}, 1) \cdot \mathrm{x} \text { dx dp simplify } \rightarrow \mathrm{x}_{\mathrm{bar}}=\frac{1}{2} \nonumber$
$\mathrm{p}_{\mathrm{bar}}=\int_{-\infty}^{\infty} \int_{0}^{1} \mathrm{W}(\mathrm{x}, \mathrm{p}, 1) \cdot \mathrm{p} \mathrm{dx} \text { dp simplify } \rightarrow \mathrm{p}_{\mathrm{bar}}=0 \nonumber$
$\mathrm{T}_{\mathrm{bar}}=\int_{-\infty}^{\infty} \int_{0}^{1} \mathrm{W}(\mathrm{x}, \mathrm{p}, 1) \cdot \frac{\mathrm{p}^{2}}{2} \mathrm{d} \mathrm{x} \text { dp simplify } \rightarrow \mathrm{T}_{\mathrm{bar}}=\frac{1}{2} \cdot \pi^{2} \nonumber$
1.71: The Wigner Distribution for a Particle in a One-Dimensional Box
The following outlines the calculation of the Wigner distribution for a particle in a one‐dimensional box for the n = 10 state. First the coordinate wave function is Fourier transformed into momentum space. Following that the Wigner function is calculated using the momentum space wave function.
$\Psi(x) :=\sqrt{2} \cdot \sin (10 \cdot \pi \cdot x) \nonumber$
$\Phi(\mathrm{p}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{0}^{1} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}) \mathrm{dx} \text { simplify } \rightarrow-\frac{10 \cdot \sqrt{\pi} \cdot\left(\mathrm{e}^{-\mathrm{p} \cdot \mathrm{i}}-1\right)}{100 \cdot \pi^{2}-\mathrm{p}^{2}} \nonumber$
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{2 \cdot \pi} \cdot \int_{-\infty}^{\infty} \overline{\Phi\left(\mathrm{p}+\frac{\mathrm{s}}{2}\right)} \cdot \exp (-\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi\left(\mathrm{p}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \nonumber$
$\mathrm{N} :=80 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=\frac{\mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-40+\frac{80 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i, j} :=W\left(x_{i}, p_{j}\right) \nonumber$
1.72: Superposition vs. Mixture
The Wigner function can be used to illustrate the difference between a superposition and a mixture. First consider the following linear superposition of Gaussian functions.
$\Psi(x) :=\exp \left[-(x-5)^{2}\right]+\exp \left[-(x+5)^{2}\right] \nonumber$
The Wigner distribution for this function is calculated and plotted below.
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\int_{-\infty}^{\infty}\left[\exp \left[-\left(\mathrm{x}+\frac{\mathrm{s}}{2}-5\right)^{2}\right]+\exp \left[-\left(\mathrm{x}+\frac{\mathrm{s}}{2}+5\right)^{2}\right]\right] \cdot \exp (\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{s}) \cdot\left[\exp \left[-\left(\mathrm{x}-\frac{\mathrm{s}}{2}-5\right)^{2}\right]+\exp \left[-\left(\mathrm{x}-\frac{\mathrm{s}}{2}+5\right)^{2}\right]\right] \mathrm{ds} \nonumber$
Integration yields:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\sqrt{2} \cdot \sqrt{\pi}\cdot\left(2 \cdot \exp \left(-2 \cdot \mathrm{x}^{2}-\frac{1}{2} \cdot \mathrm{p}^{2}\right) \cdot \cos (10 \cdot \mathrm{p})+\exp \left(-2 \cdot x^{2}+20 \cdot x-50-\frac{1}{2} \cdot p^{2}\right)+\exp \left(-2 \cdot x^{2}-20 \cdot x-50-\frac{1}{2} \cdot p^{2}\right)\right) \nonumber$
$\mathrm{N} :=50 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-7+\frac{14 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-6+\frac{12 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{\mathrm{i}, \mathrm{j}} :=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
The signature of a superposition is the occurrence of interference fringes as seen in the center of the figure above.
The Wigner function for a classical mixture is the sum of Wigner functions for each member of the mixture. The interference region is clearly absent in the figure shown below.
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\int_{-\infty}^{\infty} \exp \left[-\left(\mathrm{x}+\frac{\mathrm{s}}{2}-5\right)^{2}\right] \cdot \exp (\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{s}) \cdot \exp \left[-\left(\mathrm{x}-\frac{\mathrm{s}}{2}-5\right)^{2}\right] \mathrm{ds}+\int_{-\infty}^{\infty} \exp \left[-\left(x+\frac{s}{2}+5\right)^{2}\right] \cdot \exp (i \cdot p \cdot s) \cdot \exp \left[-\left(x-\frac{s}{2}+5\right)^{2}\right] d s \nonumber$
Integration yields:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\exp \left(-2 \cdot \mathrm{x}^{2}+20 \cdot \mathrm{x}-50-\frac{1}{2} \cdot \mathrm{p}^{2}\right) \cdot \sqrt{2} \cdot \sqrt{\pi}+\exp \left(-2 \cdot \mathrm{x}^{2}-20 \cdot \mathrm{x}-50-\frac{1}{2} \cdot \mathrm{p}^{2}\right) \cdot \sqrt{2} \cdot \sqrt{\pi} \nonumber$
$\mathrm{N} :=100 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-7+\frac{14 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-6+\frac{12 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i, j} :=\mathrm{W}\left(\mathrm{x}_{i}, \mathrm{p}_{j}\right) \nonumber$
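A few lines of Python (my own grid choice) make the contrast concrete: evaluating the two closed-form Wigner functions above along the momentum axis at x = 0 shows the oscillating cos(10p) fringe term for the superposition and essentially nothing for the mixture.

```python
# Contrast the two closed-form Wigner functions above at x = 0 (my own grid):
# the superposition carries the cos(10 p) fringe term, the mixture does not.
import numpy as np

def W_super(x, p):
    return np.sqrt(2 * np.pi) * (2 * np.exp(-2 * x**2 - p**2 / 2) * np.cos(10 * p)
                                 + np.exp(-2 * x**2 + 20 * x - 50 - p**2 / 2)
                                 + np.exp(-2 * x**2 - 20 * x - 50 - p**2 / 2))

def W_mix(x, p):
    return np.sqrt(2 * np.pi) * (np.exp(-2 * x**2 + 20 * x - 50 - p**2 / 2)
                                 + np.exp(-2 * x**2 - 20 * x - 50 - p**2 / 2))

p = np.linspace(-2, 2, 9)
print(np.round(W_super(0.0, p), 3))   # oscillates in sign: interference fringes
print(np.round(W_mix(0.0, p), 3))     # essentially zero: no interference region
```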
Reference: Decoherence and the Transition from Quantum to Classical, Wojciech H. Zurek, Physics Today, October 1991, pages 36-44.
1.73: Time-dependent Wigner Function for Harmonic Oscillator Transitions
Initial state:
$\mathrm{m} :=0 \qquad \mathrm{E}_{\mathrm{m}} :=\mathrm{m}+\frac{1}{2} \nonumber$
Final state:
$\mathrm{n} :=1 \qquad \mathrm{E}_{\mathrm{n}} :=\mathrm{n}+\frac{1}{2} \qquad t : = \text{FRAME} \nonumber$
Define Wigner distribution function for a linear superposition of the initial and final harmonic oscillator state.
$W(x, p) :=\frac{1}{\pi^{\frac{3}{2}}} \int_{-\infty}^{\infty}\left[\frac{1}{\sqrt{2^{n} \cdot n ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}\left(n, x+\frac{s}{2}\right) \cdot \exp \left[-\frac{\left(x+\frac{s}{2}\right)^{2}}{2}\right] \cdot \exp \left(i \cdot E_{n} \cdot t\right)+\frac{1}{\sqrt{2^{m} \cdot m ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}\left(m, x+\frac{s}{2}\right) \cdot \exp \left[-\frac{\left(x+\frac{s}{2}\right)^{2}}{2}\right] \cdot \exp \left(i \cdot E_{m} \cdot t\right)\right] \ \cdot \exp (i \cdot s \cdot p) \cdot\left[\frac{1}{\sqrt{2^{n} \cdot n ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}\left(n, x-\frac{s}{2}\right) \cdot \exp \left[-\frac{\left(x-\frac{s}{2}\right)^{2}}{2}\right] \cdot \exp \left(-i \cdot E_{n} \cdot t\right)+\frac{1}{\sqrt{2^{m} \cdot m ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}\left(m, x-\frac{s}{2}\right) \cdot \exp \left[-\frac{\left(x-\frac{s}{2}\right)^{2}}{2}\right] \cdot \exp \left(-i \cdot E_{m} \cdot t\right)\right] ds \nonumber$
Display Wigner distribution:
$\mathrm{N} :=60 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-2.5+\frac{5 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-2.5+\frac{5 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{i,j} :=W\left(x_{i}, p_{j}\right) \nonumber$
1.74: Momentum Operator in Coordinate Space
Wave-particle duality is at the heart of quantum mechanics. A particle with wavelength $\lambda$ has wave function (un-normalized)
$\langle x | \lambda\rangle=\exp \left(i 2 \pi \frac{x}{\lambda}\right) \nonumber$
However, according to de Broglie’s wave equation the particle’s momentum is p = h/$\lambda$. Therefore the momentum eigenfunction of the particle in coordinate space is

$\langle x | p\rangle=\exp \left(\frac{i p x}{\hbar}\right) \nonumber$
In momentum space the following eigenvalue equation holds: $\hat{p}|p\rangle= p|p\rangle$. Operating on the momentum eigenfunction with the momentum operator in momentum space returns the momentum eigenvalue times the original momentum eigenfunction. In other words, in its own space the momentum operator is a multiplicative operator (the same is true of the position operator in coordinate space). To obtain the momentum operator in coordinate space this expression can be projected onto coordinate space by operating on the left by $\langle x|$.
$\langle x|\hat{p}| p\rangle= p\langle x | p\rangle= p \exp \left(\frac{i p x}{\hbar}\right)=\frac{\hbar}{i} \frac{d}{d x}\langle x | p\rangle \nonumber$
Comparing the first and last terms reveals that
$\langle x|\hat{p}=\frac{\hbar}{i} \frac{d}{d x}\langle x| \nonumber$

and that $\frac{\hbar}{i} \frac{d}{d x}$ is the momentum operator in coordinate space.
The position wave function in momentum space is the complex conjugate of the momentum wave function in coordinate space.
$\langle p | x\rangle=\langle x | p\rangle^{*}=\exp \left(\frac{-i p x}{\hbar}\right) \nonumber$
Using the method outlined above it is easy to show that the position operator in momentum space is $-\frac{\hbar}{i} \frac{d}{d p}$.
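This little derivation can be verified symbolically. The sketch below is my own check in Python/sympy (atomic units with $\hbar$ = 1): applying $(\hbar/i)\,d/dx$ to $\langle x|p\rangle$ and $-(\hbar/i)\,d/dp$ to $\langle p|x\rangle$ returns the corresponding eigenvalue times the function in each case.

```python
# A symbolic verification (my own sketch, atomic units, hbar = 1): both
# differences below simplify to zero, confirming the operator forms above.
import sympy as sp

x, p = sp.symbols('x p', real=True)
hbar = 1

xp = sp.exp(sp.I * p * x / hbar)      # <x|p>
px = sp.exp(-sp.I * p * x / hbar)     # <p|x>

print(sp.simplify((hbar / sp.I) * sp.diff(xp, x) - p * xp))    # 0
print(sp.simplify(-(hbar / sp.I) * sp.diff(px, p) - x * px))   # 0
```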
1.75: Momentum Wave Functions for the Particle in a Box
Momentum‐space wave functions frequently are most easily obtained by the Fourier transform of the already available position‐space wave function. For the particle in a one‐dimensional box the Fourier transform is given by the following equation:
$\Phi(n, p, a) :=\frac{1}{\sqrt{2 \cdot \pi}} \int_{0}^{a} \exp (-i \cdot p \cdot x) \cdot \sqrt{\frac{2}{a}} \cdot \sin \left(\frac{n \cdot \pi \cdot x}{a}\right) d x \nonumber$
Evaluation of this integral yields:
$\Phi(\mathrm{n}, \mathrm{p}, \mathrm{a}) :=\mathrm{n} \cdot \sqrt{\mathrm{a} \cdot \pi} \cdot\left[\frac{1-(-1)^{\mathrm{n}} \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{a})}{\mathrm{n}^{2} \cdot \pi^{2}-\mathrm{a}^{2} \cdot \mathrm{p}^{2}}\right] \nonumber$
Choose box dimension: $\mathrm{a} :=1$
The momentum‐space probability distribution functions, $|\Phi(n, p, a)|^{2}$, for the n = 1, 4, 6, 8 and 10 energy levels of the particle in a one‐dimensional box are displayed below. They show the probability that the particle will be found to have various momentum values in an experimental measurement. The distribution functions are offset by small increments for clarity of presentation.
This figure illustrates the correspondence principle. As the n quantum number increases the momentum distribution appears more classical. For example, for n = 10 the momentum distribution has principal maxima around $\pm$ 30, suggesting a particle moving to the right and left with a specific momentum. This effect becomes more pronounced with higher n‐values.
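This observation can be checked numerically. The short sketch below (my own grid) evaluates the closed-form momentum amplitude for n = 10 and confirms that its principal maxima lie near p = ±n$\pi$ ≈ ±31 and that the distribution integrates to one.

```python
# A numerical check (my own grid) of the correspondence-principle remark above.
import numpy as np

def Phi(n, p, a=1.0):
    return n * np.sqrt(a * np.pi) * (1 - (-1) ** n * np.exp(-1j * p * a)) / (n**2 * np.pi**2 - a**2 * p**2)

p = np.linspace(-60, 60, 12001)
rho10 = np.abs(Phi(10, p)) ** 2
print(round(abs(p[np.argmax(rho10)]), 2))        # principal maximum near n*pi ~ 31.4
print(round(np.sum(rho10) * (p[1] - p[0]), 3))   # ~1: the distribution is normalized
```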
1.76: A Graphical Illustration of the Heisenberg Uncertainty Relationship
According to quantum mechanics position and momentum are conjugate variables; they cannot be simultaneously known with high precision. The uncertainty principle requires that if the position of an object is precisely known, its momentum is uncertain, and vice versa. This reciprocal relationship is captured by the well-known uncertainty relation, which says that the product of the uncertainties in position and momentum must be greater than or equal to Planck's constant divided by 4$\pi$.
$\Delta x \cdot \Delta p \geq \frac{h}{4 \cdot \pi} \nonumber$
This simple mathematical relation can be visualized using the traditional work horse - the quantum mechanical particle in a box (infinite one-dimensional potential well). The particle's ground-state wave function in coordinate space for a box of width a is shown below.
$\Psi(x, a) :=\sqrt{\frac{2}{a}} \cdot \sin \left(\frac{\pi \cdot x}{a}\right) \nonumber$
To illustrate the uncertainty principle and the reciprocal relationship between position and momentum, $\Psi$(x,a) is Fourier transformed into momentum space yielding the particle's ground-state wave function in the momentum representation.
$\Phi(p, a) :=\sqrt{\frac{1}{2 \cdot \pi}} \cdot \int_{0}^{a} \exp (-i \cdot p \cdot x) \cdot \sqrt{\frac{2}{a}} \cdot \sin \left(\frac{\pi \cdot x}{a}\right) d x \; \text{simplify}\rightarrow \frac{\pi \cdot a \cdot\left(e^{-a \cdot p \cdot i}+1\right) \cdot \sqrt{\frac{1}{a}}}{\pi^{\frac{5}{2}}-\sqrt{\pi} \cdot a^{2} \cdot p^{2}} \nonumber$
In the figure below, the momentum distribution, $|\Phi(p, a)|^{2}$, is shown for three box sizes, a = 1, 2 and 3. The uncertainty principle is illustrated as follows: as the box size increases the position uncertainty increases and the momentum uncertainty decreases because the momentum distribution narrows.
1.77: The Quantum Harmonic Oscillator
The harmonic oscillator is frequently used by chemical educators as a rudimentary model for the vibrational degrees of freedom of diatomic molecules. Most often when this is done, the teacher is actually using a classical ball-and-spring model, or some hodge-podge hybrid of the classical and the quantum harmonic oscillator. Unfortunately these models are not accurate representations of the vibrational modes of molecules. To the extent that a simple harmonic potential can be used to represent molecular vibrational modes, it must be done in a pure quantum mechanical treatment based on solving the Schrödinger equation.
Schrödinger's equation in atomic units (h = 2$\pi$) for the harmonic oscillator has an exact analytical solution.
$\frac{-1}{2 \cdot \mu} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \nonumber$
Potential energy:
$\mathrm{V}(\mathrm{x}, \mathrm{k}) :=\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \nonumber$
Energy eigenstates:
$\mathrm{E}(\mathrm{v}, \mathrm{k}, \mu) :=\left(\mathrm{v}+\frac{1}{2}\right) \cdot \sqrt{\frac{\mathrm{k}}{\mu}} \nonumber$
Eigenfunctions:
$\Psi(x, v, k, \mu) :=\frac{(k \cdot \mu)^{\frac{1}{8}}}{\sqrt{2^{v} \cdot v ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}\left[v,(k \cdot \mu)^{\frac{1}{4}} \cdot x\right] \cdot \exp \left(-\sqrt{k \cdot \mu} \cdot \frac{x^{2}}{2}\right) \nonumber$
The probability distribution functions for k = $\mu$ = 1 for the first four eigenstates are shown graphically below.
Force constant: $k : = 1$ Effective mass: $\mu : = 1$
The harmonic oscillator eigenfunctions form an orthonormal basis set.
Normalized:
$\int_{-\infty}^{\infty} \Psi(x, 0, k, \mu)^{2} d x=1 \nonumber$
Orthogonal:
$\int_{-\infty}^{\infty} \Psi(x, 1, k, \mu) \cdot \Psi(x, 0, k, \mu) d x=0 \nonumber$
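These two integrals are easily confirmed numerically. The sketch below is my own check in Python, using scipy's physicists' Hermite polynomials, for k = $\mu$ = 1.

```python
# A brief numerical confirmation (my own sketch, k = mu = 1, atomic units) of
# the normalization and orthogonality integrals above.
import numpy as np
from scipy.special import eval_hermite, factorial
from scipy.integrate import quad

def psi(x, v, k=1.0, mu=1.0):
    a = (k * mu) ** 0.25
    norm = np.sqrt(a) / np.sqrt(2.0**v * factorial(v) * np.sqrt(np.pi))
    return norm * eval_hermite(v, a * x) * np.exp(-np.sqrt(k * mu) * x**2 / 2)

print(round(quad(lambda x: psi(x, 0)**2, -np.inf, np.inf)[0], 6))            # 1.0
print(round(quad(lambda x: psi(x, 1) * psi(x, 0), -np.inf, np.inf)[0], 6))   # 0.0
```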
Several non-classical attributes of the quantum oscillator are revealed in the graph above. Perhaps most obvious is that energy is quantized. Another is that the allowed oscillator states are stationary states. There is no vibratory motion associated with these states. In these allowed states, the oscillator is in a weighted superposition of all values of the x-coordinate, which in this case is the internuclear separation. The only time oscillatory motion occurs in the quantum oscillator is when it is perturbed by, for example, external electromagnetic radiation, causing a transition from one allowed energy state to another. For a detailed discussion of these points see the tutorials "Coherent Superpositions for the Harmonic Oscillator" and "The Harmonic Oscillator and the Uncertainty Principle" later in this chapter.
Another non-classical feature of the quantum oscillator is tunneling. The vertical dashed lines in the figure show the classical turning points for the ground state of the quantum oscillator. The classical turning point is that value of the x-coordinate at which the potential energy is equal to the total energy, and therefore classically the system must reverse its direction of motion.
Classical turning point for v=0, k=$\mu$=1:
$\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2}=\left(\mathrm{v}+\frac{1}{2}\right) \cdot \sqrt{\frac{\mathrm{k}}{\mu}}\; \text{solve}, x\rightarrow \left[\begin{matrix} (2 \cdot v + 1)^{\frac{1}{2}} \ -(2 \cdot v + 1)^{\frac{1}{2}}\end{matrix}\right] \text{substitute}, \mathrm{v}=0 \rightarrow\left(\begin{array}{c}{1} \ {-1}\end{array}\right) \nonumber$
For v=0 the region beyond $\pm$1 is called the classically forbidden region because the oscillator does not have enough energy (E = 1/2) to be there because V > E. It implies a negative kinetic energy which does not make classical sense. Due to the symmetry of the potential well, the tunneling probability is calculated as follows.
$2 \cdot \int_{1}^{\infty} \Psi(x, 0, k, \mu)^{2} d x=0.157 \nonumber$
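The quoted value is reproduced by the short quadrature below (my own sketch); it equals $1 - \operatorname{erf}(1) \approx 0.157$ for the v = 0, k = $\mu$ = 1 state.

```python
# Numerical check of the ground-state tunneling probability quoted above.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

psi0 = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2)   # v = 0, k = mu = 1
print(round(2 * quad(lambda x: psi0(x)**2, 1, np.inf)[0], 4), round(1 - erf(1), 4))
```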
It is easy to demonstrate that the tunneling probability decreases as the v quantum number increases. This might be considered a minor example of Bohr's correspondence principle: as the energy of a quantum system increases it appears to be more classical.
The fact that the wave functions of quantum states are superpositions is a fundamental idea in quantum theory. A quantum object (quon to use Nick Herbert's term) is not here or there, it is here and there. We can illuminate this idea by looking at a quantum oscillator's state operator, $|\Psi><\Psi|$. This is also called the density operator or density matrix. It is a very powerful concept which is not generally presented in undergraduate courses in quantum physics or chemistry.
The state operator is now calculated and displayed for the v = 3 state. The state operator is a projection operator and its matrix elements, $<\mathrm{x}_{1}|\Psi><\Psi| x_{2}>$, are calculated and displayed below. These matrix elements are the probability amplitude that an oscillator in the state $| \Psi>$ is at both x1 and x2. This reveals the meaning of the quantum superposition as the quon being both here and there.
$\mathrm{v} :=3 \quad \operatorname{Min} :=5 \quad \mathrm{N} :=200 \quad \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{k} :=0 \ldots \mathrm{N} \nonumber$
$x_{1_{j}} :=-\operatorname{Min}+\frac{2 \cdot \operatorname{Min} \cdot \mathrm{j}}{\mathrm{N}} \qquad \mathrm{x}_{2_{k}} :=-\operatorname{Min}+\frac{2 \cdot \mathrm{Min} \cdot \mathrm{k}}{\mathrm{N}} \nonumber$
$\Psi\left(x_{1}, x_{2}\right) :=\frac{1}{2^{v} \cdot v ! \cdot \sqrt{\pi}} \cdot \operatorname{Her}\left(v, x_{1}\right) \cdot \operatorname{Her}\left(v, x_{2}\right) \cdot \exp \left[\frac{-\left(x_{1}^{2}+x_{2}^{2}\right)}{2}\right] \ \Psi \Psi_{j, k} :=\Psi\left(x_{1_{j}}, x_{2_{k}}\right) \nonumber$
Viewing this as a graphical representation of the density matrix we clearly see the prominence of off-diagonal elements, elements with non-zero values for which x1 is not equal to x2. This is the signature of quantum mechanics. By comparison a classical system would have a diagonal density matrix - only the x1 = x2 elements would have non-zero values.
The following statements, modified slightly, are the best single-sentence descriptions of the meaning of the wave function I have read.
Quons are characterized by their entire distributions, called wave functions or orbitals, rather than by instantaneous positions and velocities: a quon may be considered always to be, with appropriate probability, at all points of its distribution, which does not vary with time. (F. E. Harris, Encyclopedia of Physics)
From the quantum mechanical perspective, to measure the position of a quon is not to find out where it is, but to cause it to be somewhere. (Louisa Gilder, The Age of Entanglement)
1.78: Coherent Superpositions for the Harmonic Oscillator
The quantum mechanical harmonic oscillator eigenstates are stationary states and therefore cannot be used individually to represent classical oscillatory motion. (Atomic units are used in this tutorial.)
$\Psi(v, x, t) :=\frac{1}{\sqrt{2^{v} \cdot v ! \cdot \sqrt{\pi}}} \cdot \operatorname{Her}(v, x) \cdot \exp \left(\frac{-x^{2}}{2}\right) \cdot \exp \left[-i \cdot\left(v+\frac{1}{2}\right) \cdot t\right] \nonumber$
For example, suppose we choose to represent the ground vibrational state of a homonuclear diatomic molecule as a simple harmonic oscillator with vibrational quantum number v = 0. We see that the probability distribution, $|\Psi(0, x, t)|^{2}$, is independent of time. There is no oscillatory motion; the molecule is in a stationary state which is a weighted superposition of all possible internuclear separations.
However, simple superpositions of the vibrational eigenstates do show oscillatory behavior. This is due to the exponential term involving the vibrational energy, exp[-iE(v)t]. This term oscillates with a frequency that depends on the vibrational quantum number. Thus, the different eigenstates oscillate with different frequencies, giving rise to constructive and destructive interference. The figures below show the time dependence of the v = 0/v = 1 and the v = 0/v = 2 superpositions. Both show oscillatory behavior, but the first is asymmetric and the second is symmetric. This has significance for harmonic oscillator selection rules, as will be discussed below.
$\operatorname{Sup}(\mathrm{x}, \mathrm{t}) :=\frac{\Psi(0, \mathrm{x}, \mathrm{t})+\Psi(1, \mathrm{x}, \mathrm{t})}{\sqrt{2}} \nonumber$
The asymmetry of this time-dependent probability distribution gives it oscillating electric dipole character, providing a mechanism for coupling with the oscillating dipole of the electromagnetic field. Thus we could argue this is the basis for the fact that the v = 0 to v = 1 vibrational transition is allowed.
$\operatorname{Sup}(\mathrm{x}, \mathrm{t}) :=\frac{\Psi(0, \mathrm{x}, \mathrm{t})+\Psi(2, \mathrm{x}, \mathrm{t})}{\sqrt{2}} \nonumber$
By comparison, the symmetry of this time-dependent probability distribution means it does not have oscillating dipole character, so there is no coupling with the external electromagnetic field. Therefore, the v = 0 to v = 2 vibrational transition is formally forbidden. Further detail on this interpretation of the "quantum jump" can be found in the Spectroscopy section of Quantum Potpourri.
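The selection-rule argument can be made quantitative by following $\langle x \rangle(t)$ for the two superpositions. The short Python sketch below is my own construction (atomic units, scipy's physicists' Hermite polynomials): $\langle x \rangle$ oscillates for the v = 0/1 superposition but stays at zero for v = 0/2, where the transition dipole $\langle 0|x|2 \rangle$ vanishes.

```python
# <x>(t) for the two superpositions discussed above (my own sketch).
import numpy as np
from scipy.special import eval_hermite, factorial
from scipy.integrate import quad

def phi(v, x):
    return eval_hermite(v, x) * np.exp(-x**2 / 2) / np.sqrt(2.0**v * factorial(v) * np.sqrt(np.pi))

def x_expect(va, vb, t):
    # <x>(t) for (phi_va e^{-i E_va t} + phi_vb e^{-i E_vb t})/sqrt(2); the
    # diagonal <x> terms vanish by symmetry, leaving only the cross term.
    xab = quad(lambda x: phi(va, x) * x * phi(vb, x), -np.inf, np.inf)[0]
    return xab * np.cos((vb - va) * t)

t = np.linspace(0, 2 * np.pi, 5)
print(np.round(x_expect(0, 1, t), 3))   # oscillates with amplitude 1/sqrt(2)
print(np.round(x_expect(0, 2, t), 3))   # essentially zero at all times
```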
By comparison, coherent states (also called Glauber states) of the harmonic oscillator are more elaborate superpositions that maintain the well-defined shape of the ground state distribution while exhibiting the kind of classical oscillatory motion that is absent in the previous examples. The time-dependence and "classical" oscillatory behavior of a coherent superposition of 25 vibrational eigenstates is illustrated below. See any contemporary text on quantum optics for further information on coherent states of the harmonic oscillator.
$\mathrm{n} :=25 \qquad \mathrm{x} :=-8,-7.98\ldots8 \qquad \alpha :=3.5 \nonumber$
$\Psi(x, t) :=\frac{1}{\sqrt{n}} \cdot \exp \left(\frac{-x^{2}}{2}\right) \cdot \exp \left(\frac{-\alpha^{2}}{2}\right)\cdot\sum_{v=0}^{n}\left[\operatorname{Her}(v, x) \cdot \exp \left[-i \cdot\left(v+\frac{1}{2}\right) \cdot t\right] \cdot \frac{\alpha^{v}}{v ! \cdot \sqrt{2^{v} \cdot \sqrt{2}}}\right] \nonumber$
Time-dependent superpositions of coherent states have been used to model Schrödinger cat states. Below we show the interaction of two coherent states moving in opposite directions from opposite sides of a harmonic potential well. The interference observed when they meet in the middle has been observed experimentally in Bose-Einstein condensates.
$\alpha :=3.5 \qquad \qquad \beta :=-3.5 \nonumber$
$\Psi(x, t) :=\frac{1}{\sqrt{n}} \cdot \exp \left(\frac{-x^{2}}{2}\right) \cdot \exp \left(\frac{-\alpha^{2}}{2}\right) \cdot \sum_{v=0}^{n}\left[\operatorname{Her}(v, x) \cdot \exp \left[-i \cdot\left(v+\frac{1}{2}\right) \cdot t\right] \cdot \frac{\alpha^{v}}{v ! \cdot \sqrt{2^{v} \cdot \sqrt{2}}}\right] \ +\frac{1}{\sqrt{n}} \cdot \exp \left(\frac{-\mathrm{x}^{2}}{2}\right) \exp \left(\frac{-\beta^{2}}{2}\right) \cdot \sum_{\mathrm{v}=0}^{\mathrm{n}}\left[\operatorname{Her}(\mathrm{v}, \mathrm{x}) \cdot \exp \left[-\mathrm{i} \cdot\left(\mathrm{v}+\frac{1}{2}\right) \cdot \mathrm{t}\right] \cdot \frac{\beta^{\mathrm{v}}}{\mathrm{v} ! \cdot \sqrt{2^{\mathrm{v}} \cdot \sqrt{2}}}\right] \nonumber$
We finish with a calculation of the Wigner phase-space distribution for a Schrödinger cat state at t = 0. In the interest of computational expediency a superposition of only 10 harmonic oscillator eigenstates is calculated. The Wigner function is itself a superposition of all phase-space trajectories and is called a quasi-probability distribution because it can take on negative values, as is shown in the figure below. The interference fringes in the center are closely related to those that appear in the figures above.
$\Psi(\mathrm{x}) :=\frac{1}{\sqrt{\mathrm{n}}} \cdot \exp \left(\frac{-\mathrm{x}^{2}}{2}\right) \cdot \exp \left(\frac{-\alpha^{2}}{2}\right) \cdot \sum_{\mathrm{v}=0}^{10}\left(\operatorname{Her}(\mathrm{v}, \mathrm{x}) \cdot \frac{\alpha^{\mathrm{v}}}{\mathrm{v} ! \cdot \sqrt{2^{\mathrm{v}} \cdot \sqrt{2}}}\right) \ +\left[\frac{1}{\sqrt{\mathrm{n}}} \cdot \exp \left(\frac{-\mathrm{x}^{2}}{2}\right) \cdot \exp \left(\frac{-\beta^{2}}{2}\right) \cdot \sum_{\mathrm{v}=0}^{10}\left(\operatorname{Her}(\mathrm{v}, \mathrm{x}) \cdot \frac{\beta^{\mathrm{v}}}{\mathrm{v} ! \cdot \sqrt{2^{\mathrm{v}} \cdot \sqrt{2}}}\right)\right] \nonumber$
Wigner distribution:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}\right) \mathrm{ds} \nonumber$
$\mathrm{N} :=80 \qquad \mathrm{i} :=0 \ldots\mathrm{N} \quad \mathrm{x}_{\mathrm{i}} :=-5+\frac{10 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-5+\frac{10 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text { Wigner}_{\mathrm{i},\mathrm{j}} :=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
1.79: The Harmonic Oscillator and the Uncertainty Principle
In atomic units the wave function in coordinate space for an harmonic oscillator with reduced mass, $\mu$, equal to one and force constant k is given by,
$\Psi(\mathrm{x}, \mathrm{k}, \mu) :=\left(\frac{\sqrt{\mathrm{k} \cdot \mu}}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\sqrt{\mathrm{k} \cdot \mu \cdot} \frac{\mathrm{x}^{2}}{2}\right) \nonumber$
This function is easily Fourier transformed into momentum space:
$\Phi(\mathrm{p}, \mathrm{k}, \mu) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}, \mathrm{k}, \mu) \mathrm{dx} \Bigg| \begin{array}{l}{\text { assume, } \mathrm{k}>0} \ {\text { assume, } \mu>0} \rightarrow \frac{e^{-\frac{p^{2}}{2 \cdot \sqrt{\mu} \cdot \sqrt{k}}}}{\pi^{\frac{1}{4}} \cdot \mu^{\frac{1}{8}} \cdot k^{\frac{1}{8}}}\ {\text { simplify }}\end{array} \nonumber$
The force constant (with $\mu$ = 1) controls the probability distributions in coordinate space, $\Psi(x)^{2}$, and momentum space, $\Phi(p)^{2}$. For k = 1 the coordinate and momentum distributions are identical as is shown below.
For larger values of k the coordinate-space distribution decreases in breadth while the momentum distribution increases. For values of k less than 1, the reverse occurs; the coordinate-space distribution increases in breadth while the momentum distribution becomes narrower.
This is an illustration of the Uncertainty Principle: the more sharply defined position is, the greater the uncertainty in momentum. Conversely, the greater the uncertainty in position, the more sharply the momentum is defined.
Tunneling occurs in the simple harmonic oscillator. The classical turning point is that position at which the total energy is equal to the potential energy. In other words, the kinetic energy is zero and the oscillator's direction of motion is about to reverse. For the ground state the classical turning point is,
$\mathrm{E}=\frac{1}{2} \cdot \sqrt{\frac{\mathrm{k}}{\mu}}=\frac{\mathrm{p}^{2}}{2 \cdot \mu}+\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \nonumber$
$\frac{1}{2} \cdot \sqrt{\frac{\mathrm{k}}{\mu}}=\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \text { solve, }, \mathrm{x} \rightarrow \left[\begin{array}{c}{\frac{\left(\frac{k}{\mu}\right)^{\frac{1}{4}}}{\sqrt{k}}} \ {-\frac{\left(\frac{k}{\mu}\right)^{\frac{1}{4}}}{\sqrt{k}}}\end{array}\right] \nonumber$
The probability that tunneling occurs is independent of the values of k and $\mu$.
$2 \cdot\left[\int_{\frac{1}{(k \cdot \mu)^{\frac{1}{4}}}}^{\infty} \Psi(x, k, \mu)^{2} d x\right] \Bigg| \begin{array}{l}{\text { assume, } \mathrm{k}>0} \ {\text { assume, } \mu>0 \rightarrow 1-\operatorname{erf}(1)=0.157} \ {\text { simplify }}\end{array} \nonumber$
It is also possible to calculate the tunneling probability using the momentum wave function. Classically the maximum magnitude of momentum is achieved when the potential energy is zero, so that the total energy is equal to the kinetic energy.
$\frac{1}{2} \cdot \sqrt{\frac{\mathrm{k}}{\mu}}=\frac{\mathrm{p}^{2}}{2 \cdot \mu} \text { solve, } \mathrm{p} \rightarrow \left[\begin{array}{c}{\sqrt{\mu} \cdot\left(\frac{\mathrm{k}}{\mu}\right)^{\frac{1}{4}}} \ {-\sqrt{\mu} \cdot\left(\frac{\mathrm{k}}{\mu}\right)^{\frac{1}{4}}}\end{array}\right] \nonumber$
Thus momentum can have values in the range shown above, depending on the magnitude of the potential energy. Values outside this range are classically forbidden. Therefore the tunneling probability in momentum space is,
$2 \cdot \int_{(\mathrm{k} \cdot \mu)^{\frac{1}{4}}}^{\infty} \Phi(\mathrm{p}, \mathrm{k}, \mu)^{2} \mathrm{dp} \Bigg| \begin{array}{l}{\text { assume, } \mathrm{k}>0} \ {\text { assume, } \mu>0 \rightarrow 1-\operatorname{erf}(1)=0.157} \ {\text { simplify }}\end{array} \nonumber$
It is not surprising that the momentum and coordinate calculations agree.
1.80: Another View of the Harmonic Oscillator and the Uncertainty Principle
Schrödinger's equation in atomic units (h = 2$\pi$) for the harmonic oscillator has an exact analytical solution.
$\mathrm{V}(\mathrm{x}, \mathrm{k}) :=\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \qquad \frac{-1}{2 \cdot \mu} \cdot \frac{\mathrm{d}^{2}}{\mathrm{dx}^{2}} \Psi(\mathrm{x})+\mathrm{V}(\mathrm{x}) \cdot \Psi(\mathrm{x})=\mathrm{E} \cdot \Psi(\mathrm{x}) \nonumber$
The ground-state wave function (coordinate space) and energy for an oscillator with reduced mass $\mu$ and force constant k are as follows.
$\Psi(x, k, \mu) :=\left(\frac{\sqrt{k \cdot \mu}}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left(-\sqrt{k \cdot \mu} \cdot \frac{x^{2}}{2}\right) \qquad E(k, \mu) :=\frac{1}{2} \cdot \sqrt{\frac{k}{\mu}} \nonumber$
The first thing we want to illustrate is that tunneling occurs in the simple harmonic oscillator. The classical turning point is that position at which the total energy is equal to the potential energy. In other words, classically the kinetic energy is zero and the oscillator's direction is going to reverse. For the ground state the classical turning point is,
$\frac{1}{2} \cdot \sqrt{\frac{\mathrm{k}}{\mu}}=\frac{1}{2} \cdot \mathrm{k} \cdot \mathrm{x}^{2} \quad \text{has solution(s)} \begin{pmatrix} \frac{-1}{k^{\frac{1}{4}} \cdot \mu^{\frac{1}{4}}} \ \frac{1}{k^{\frac{1}{4}} \cdot \mu^{\frac{1}{4}}}\end{pmatrix} \nonumber$
From the quantum mechanical perspective the oscillator is not vibrating; it is in a stationary state. To the extent that the oscillator's wave function extends beyond the classical turning point, tunneling is occurring. The calculation below shows that the probability that tunneling occurs is independent of the values of k and $\mu$ for the ground state.
$2 \cdot \left[ \int_{\frac{1}{(k \cdot \mu)^{\frac{1}{4}}}}^{\infty} \left[ \left(\frac{\sqrt{k\cdot\mu}}{\pi}\right)^{\frac{1}{4}}\cdot\exp\left(- \sqrt{k\cdot\mu}\cdot\frac{x^{2}}{2}\right)\right]^{2}dx\right] \Bigg|_{\text{simplify}}^{\text{assume,} k>0, \; \mu > 0} \rightarrow 1 - \text{erf} (1)=0.157 \nonumber$
A Fourier transform of the coordinate wave function provides its counterpart in momentum space.
$\Phi(\mathrm{p}, \mathrm{k}, \mu) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \int_{-\infty}^{\infty} \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \Psi(\mathrm{x}, \mathrm{k}, \mu) \mathrm{d} \mathrm{x} \Bigg| \begin{array}{l}{\text { assume, } \mathrm{k}>0} \ {\text { assume, } \mu>0} \ {\text { simplify }}\end{array}\rightarrow\frac{e^{-\frac{p^{2}}{2 \cdot \sqrt{\mu} \cdot \sqrt{k}}}}{\pi^{\frac{1}{4}} \cdot \mu^{\frac{1}{8}} \cdot k^{\frac{1}{8}}} \nonumber$
The uncertainty principle can now be illustrated by comparing the coordinate and momentum wave functions for a variety of values of k and $\mu$. For the benchmark case, k = $\mu$ = 1, we see that the coordinate and momentum wave functions are identical and the classical turning point (CTP) is 1. The classical turning point will be taken as a measure of the spatial domain of the oscillator.
• For k = 2 and $\mu$ =1, the force constant has doubled reducing the amplitude of vibration (CTP =0.841) and therefore the uncertainty in position. Consequently there is an increase in the uncertainty in momentum which is manifested by a broader momentum distribution function.
• For k = 1 and $\mu$ = 2, the increase in effective mass drops the oscillator in the potential well decreasing the vibrational amplitude (CTP = 0.841) causing a decrease in $\Delta$x and an increase in $\Delta$p.
• For k = 0.5 and $\mu$ = 1, the lower force constant causes a larger vibrational amplitude (CTP = 1.189) and an accompanying increase in $\Delta$x. Consequently $\Delta$p decreases.
Force constant: $k : = 0.5$ Effective mass: $\mu : = 1$
Energy: $\mathrm{E}(\mathrm{k}, \mu)=0.354$ CTP: $\frac{1}{k^{\frac{1}{4}} \cdot \mu^{\frac{1}{4}}}=1.189$
The uncertainties in position and momentum are calculated as shown below; because $\langle x \rangle = \langle p \rangle = 0$ for the harmonic oscillator ground state, they reduce to the square roots of $\langle x^{2} \rangle$ and $\langle p^{2} \rangle$.
$\Delta x :=\sqrt{\int_{-\infty}^{\infty} x^{2} \cdot \Psi(x, k, \mu)^{2} d x}=0.841 \ \Delta p :=\sqrt{\int_{-\infty}^{\infty} p^{2} \cdot \Phi(p, k, \mu)^{2} d p}=0.595 \ \Delta x \cdot \Delta p=0.5 \nonumber$
A summary of the four cases considered is provided in the table below.
$\left(\begin{array}{cccccc}{\mu} & {k} & {C T P} & {\Delta x} & {\Delta p} & {\Delta x \Delta p} \ {1} & {1} & {1.00} & {0.707} & {0.707} & {0.5} \ {1} & {2} & {0.841} & {0.595} & {0.841} & {0.5} \ {2} & {1} & {0.841} & {0.595} & {0.841} & {0.5} \ {1} & {0.5} & {1.189} & {0.841} & {0.594} & {0.5}\end{array}\right) \nonumber$
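The entries in this table can be regenerated with a few lines of numerical quadrature. The sketch below is my own script: it evaluates $\Delta$x and $\Delta$p from the squared coordinate and momentum wave functions and confirms that the product is 1/2 for every (k, $\mu$) pair considered.

```python
# Reproduce the uncertainty products in the table above (my own sketch).
import numpy as np
from scipy.integrate import quad

def dx_dp(k, mu):
    a = np.sqrt(k * mu)
    psi2 = lambda x: np.sqrt(a / np.pi) * np.exp(-a * x**2)       # |Psi(x)|^2
    phi2 = lambda p: np.exp(-p**2 / a) / np.sqrt(np.pi * a)       # |Phi(p)|^2
    dx = np.sqrt(quad(lambda x: x**2 * psi2(x), -np.inf, np.inf)[0])
    dp = np.sqrt(quad(lambda p: p**2 * phi2(p), -np.inf, np.inf)[0])
    return round(dx, 3), round(dp, 3), round(dx * dp, 3)

for k, mu in [(1, 1), (2, 1), (1, 2), (0.5, 1)]:
    print(k, mu, dx_dp(k, mu))     # dx*dp = 0.5 in every case
```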
The Wigner function, W(x,p), is a phase-space distribution that can be used to provide an alternative graphical representation of the results calculated above. As shown below it can be generated using either the coordinate or momentum wave function.
Calculate Wigner distribution:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Psi\left(\mathrm{x}+\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{p}) \cdot \Psi\left(\mathrm{x}-\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \mathrm{ds} \nonumber$
Display Wigner distribution:
$\mathrm{N} :=50 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{p}_{\mathrm{j}} :=-4+\frac{8 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{\mathrm{i},\mathrm{j}} :=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
Calculate Wigner distribution:
$\mathrm{W}(\mathrm{x}, \mathrm{p}) :=\frac{1}{\pi^{\frac{3}{2}}} \cdot \int_{-\infty}^{\infty} \Phi\left(\mathrm{p}+\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \cdot \exp (\mathrm{i} \cdot \mathrm{s} \cdot \mathrm{x}) \cdot \Phi\left(\mathrm{p}-\frac{\mathrm{s}}{2}, \mathrm{k}, \mu\right) \mathrm{ds} \nonumber$
Display Wigner distribution:
$\mathrm{N} :=50 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{i}} :=-4+\frac{8 \cdot \mathrm{i}}{\mathrm{N}} \ \mathrm{j} :=0 \ldots \mathrm{N} \quad \mathrm{p}_{\mathrm{j}} :=-4+\frac{8 \cdot \mathrm{j}}{\mathrm{N}} \qquad \text{Wigner}_{\mathrm{i}, \mathrm{j}} :=\mathrm{W}\left(\mathrm{x}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right) \nonumber$
1.81: Hydrogen Atom and Helium Ion Spatial and Momentum Distribution Functions Illustrate the Uncertainty Principle
The uncertainty principle is revealed by a comparison of the coordinate and momentum wave functions for one‐electron species such as the hydrogen atom and the helium ion.
The coordinate 1s wave function for one‐electron species as a function of nuclear charge is given by the following function.
$\Psi(z, r) :=\sqrt{\frac{z^{3}}{\pi}} \cdot \exp (-z \cdot r) \nonumber$
The Fourier transform of the coordinate wave function yields the momentum wave function.
$\Phi(\mathrm{z}, \mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi(\mathrm{z}, \mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin(\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \Bigg|_{\text{assume,}\; z > 0}^{\text{simplify}} \rightarrow 2 \cdot \frac{2^{\frac{1}{2}}}{\pi} \cdot \frac{z^{\frac{5}{2}}}{z^{4}+2 \cdot z^{2} \cdot p^{2}+p^{4}} \nonumber$
Plots of the spatial and momentum radial distribution functions for the hydrogen atom (z=1) and helium ion (z=2) clearly illustrate the uncertainty principle.
Relative to the hydrogen atom, the helium ionʹs coordinate distribution function is localized closer to the nucleus, meaning less uncertainty in electron position. Consequently, its momentum distribution is more delocalized than that for the hydrogen atom, meaning more uncertainty in electron momentum.
1.82: The Position-Momentum Uncertainty Relation in the Hydrogen Atom
The hydrogen atom coordinate and momentum wave functions can be used to illustrate the uncertainty relation involving position and momentum.
The 1s wave function is used to calculate the average distance of the electron from the nucleus.
$\Psi_{1 s}(\mathrm{r}) :=\frac{1}{\sqrt{\pi}} \cdot \exp (-\mathrm{r}) \qquad \mathrm{r}_{1 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{r} \cdot \Psi_{1 \mathrm{s}}(\mathrm{r})^{2} \cdot 4 \cdot \pi \cdot \mathrm{r}^{2} \mathrm{d} \mathrm{r} \qquad r_{1 s}=1.500 \nonumber$
The Fourier transform of the 1s wave function yields the momentum wave function. The momentum wave function is used to calculate the average magnitude of the electron momentum.
$\Phi_{1 \mathrm{s}}(\mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi_{1 \mathrm{s}}(\mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin (\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \rightarrow 2 \cdot \frac{2^{\frac{1}{2}}}{\pi \cdot[(-1)+\mathrm{i} \cdot \mathrm{p}]^{2} \cdot(1+\mathrm{i} \cdot \mathrm{p})^{2}} \ \mathrm{p}_{1 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{p} \cdot\left(\left|\Phi_{1 \mathrm{s}}(\mathrm{p})\right|\right)^{2} \cdot 4 \cdot \pi \cdot \mathrm{p}^{2} \mathrm{d} \mathrm{p} \qquad \mathrm{p}_{1 \mathrm{s}}=0.849 \nonumber$
The 2s wave function is used to calculate the average distance of the electron from the nucleus.
$\Psi_{2 s}(r) :=\frac{1}{\sqrt{32 \cdot \pi}} \cdot(2-r) \cdot \exp \left(-\frac{r}{2}\right) \qquad \mathrm{r}_{2 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{r} \cdot \Psi_{2 \mathrm{s}}(\mathrm{r})^{2} \cdot 4 \cdot \pi \cdot \mathrm{r}^{2} \mathrm{dr} \qquad \mathrm{r}_{2 \mathrm{s}}=6.000 \nonumber$
The Fourier transform of the 2s wave function yields the momentum wave function. The momentum wave function is used to calculate the average magnitude of the electron momentum.
$\Phi_{2 \mathrm{s}}(\mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi_{2 \mathrm{s}}(\mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin (\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \rightarrow \frac{-16}{\pi} \cdot \frac{(-1)+4 \cdot p^{2}}{[(-1)+2 \cdot i \cdot p]^{3} \cdot(1+2 \cdot i \cdot p)^{3}} \ \mathrm{p}_{2 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{p} \cdot\left(\left|\Phi_{2 \mathrm{s}}(\mathrm{p})\right|\right)^{2} \cdot 4 \cdot \pi \cdot \mathrm{p}^{2} \mathrm{d} \mathrm{p} \qquad \mathrm{p}_{2 \mathrm{s}}=0.340 \nonumber$
The 3s wave function is used to calculate the average distance of the electron from the nucleus
$\Psi_{3 s}(\mathrm{r}) :=\frac{1}{81 \cdot \sqrt{3 \cdot \pi}} \cdot\left(27-18 \cdot \mathrm{r}+2 \cdot \mathrm{r}^{2}\right) \exp \left(\frac{-\mathrm{r}}{3}\right) \qquad r_{3 s} :=\int_{0}^{\infty} r \cdot \Psi_{3 s}(r)^{2} \cdot 4 \cdot \pi \cdot r^{2} d r \qquad r_{3 s}=13.500 \nonumber$
The Fourier transform of the 3s wave function yields the momentum wave function. The momentum wave function is used to calculate the average magnitude of the electron momentum.
$\Phi_{3 \mathrm{s}}(\mathrm{p}) :=\frac{1}{\sqrt{8 \cdot \pi^{3}}} \cdot \int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2 \cdot \pi} \Psi_{3 \mathrm{s}}(\mathrm{r}) \cdot \exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{r} \cdot \cos (\theta)) \cdot \mathrm{r}^{2} \cdot \sin (\theta) \mathrm{d} \phi \mathrm{d} \theta \mathrm{dr} \rightarrow 18 \cdot \frac{2^{2}}{\pi} \cdot 3^{\frac{1}{2}} \cdot \frac{1-30 \cdot \mathrm{p}^{2}+81 \cdot \mathrm{p}^{4}}{[(-1)+3 \cdot \mathrm{i} \cdot \mathrm{p}]^{4} \cdot(1+3 \cdot \mathrm{i} \cdot \mathrm{p})^{4}} \ \mathrm{p}_{3 \mathrm{s}} :=\int_{0}^{\infty} \mathrm{p} \cdot\left(\left|\Phi_{3 s}(\mathrm{p})\right|\right)^{2} \cdot 4 \cdot \pi \cdot \mathrm{p}^{2} \mathrm{d} \mathrm{p} \qquad \mathrm{p}_{3 \mathrm{s}}=0.218 \nonumber$
These results can be summarized in both tabular and graphical form.

$\left(\begin{array}{ccc}{\text { Orbital}} & {\text{ Average Position}} & {\text{ Average Momentum }} \ {1 \mathrm{s}} & {1.5} & {0.849} \ {2 \mathrm{s}} & {6.0} & {0.340} \ {3 \mathrm{s}} & {13.5} & {0.218}\end{array}\right) \nonumber$
The table shows that the average distance of the electron from the nucleus increases from 1s to 3s, indicating an increase in the uncertainty in the location of the electron. At the same time the average magnitude of electron momentum decreases from 1s to 3s, indicating a decrease in momentum uncertainty. The spatial and momentum distribution functions shown below illustrate this effect graphically.
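The tabulated averages are easy to cross-check. The sketch below is my own numerical check in atomic units for the 1s row, using the coordinate wave function for $\langle r \rangle$ and the closed-form momentum amplitude for $\langle p \rangle$; the 2s and 3s rows can be checked the same way.

```python
# Numerical cross-check of the 1s row of the table above (my own sketch).
import numpy as np
from scipy.integrate import quad

psi1s = lambda r: np.exp(-r) / np.sqrt(np.pi)
phi1s = lambda p: 2 * np.sqrt(2) / (np.pi * (1 + p**2) ** 2)

r_avg = quad(lambda r: r * psi1s(r)**2 * 4 * np.pi * r**2, 0, np.inf)[0]
p_avg = quad(lambda p: p * phi1s(p)**2 * 4 * np.pi * p**2, 0, np.inf)[0]
print(round(r_avg, 3), round(p_avg, 3))   # 1.5 and 0.849
```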
1.83: Demonstrating the Uncertainty Principle for Angular Momentum and Angular Position
The uncertainty relation between angular momentum and angular position can be derived from the more familiar uncertainty relation between linear momentum and position.
$\Delta \mathrm{p} \cdot \Delta \mathrm{x} \geq \frac{\mathrm{h}}{4 \cdot \pi} \tag{1} \nonumber$
Consider a particle with linear momentum p moving on a circle of radius r. The particleʹs angular momentum is given by equation (2).
$\mathrm{L}=\mathrm{m} \cdot \mathrm{v} \cdot \mathrm{r}=\mathrm{p} \cdot \mathrm{r} \tag{2} \nonumber$
In moving a distance x on the circle the particle sweeps out an angle $\phi$ in radians.
$\phi=\frac{x}{r} \tag{3} \nonumber$
Equations (2) and (3) suggest,
$\Delta \mathrm{p}=\frac{\Delta \mathrm{L}}{\mathrm{r}} \qquad \Delta \mathrm{x}=\Delta \phi \cdot \mathrm{r} \tag{4} \nonumber$
Substitution of equations (4) into equation (1) yields the angular momentum, angular position uncertainty relation.
$\Delta \mathrm{L} \cdot \Delta \phi \geq \frac{\mathrm{h}}{4 \cdot \pi} \tag{5} \nonumber$
In addition to the Heisenberg restrictions represented by equations (1) and (5), conjugate observables are related by Fourier transforms. For example, for position and momentum it is given by equation (6) in atomic units (h = 2$\pi$).
$\langle p | x\rangle=\frac{1}{\sqrt{2 \pi}} \exp (-i p x) \tag{6} \nonumber$
Equations (2) and (3) can be used with (6) to obtain the Fourier transform between angular momentum and angular position.
$\langle L | \phi\rangle=\frac{1}{\sqrt{2 \pi}} \exp (-i L \phi) \tag{7} \nonumber$
Equations (6) and (7) are mathematical dictionaries telling us how to translate from x language to p language, or $\phi$ language to L language. The complex conjugates of (6) and (7) translate in the reverse direction, from p to x and from L to $\phi$.
The work‐horse particle‐in‐a‐box (PIB) problem can be used to provide a compelling graphical illustration of the position‐momentum uncertainty relation. The position wave function for the ground state of a PIB in a box of length a is given below.
$\langle x | \Psi\rangle=\sqrt{\frac{2}{a}} \sin \left(\frac{\pi x}{a}\right) \tag{8} \nonumber$
The conjugate momentum‐space wave function is obtained by the following Fourier transform.
$\langle p | \Psi\rangle=\int_{0}^{a}\langle p | x\rangle\langle x | \Psi\rangle d x=\frac{1}{\sqrt{\pi a}} \int_{0}^{a} \exp (-i p x) \sin \left(\frac{\pi x}{a}\right) d x \tag{9} \nonumber$
Evaluation of the integral in equation (9) yields,
$\Psi(p, a) :=\sqrt{a \cdot \pi} \cdot \frac{\exp (-i \cdot p \cdot a)+1}{\pi^{2}-p^{2} \cdot a^{2}} \tag{10} \nonumber$
Plotting the momentum distribution function for several box lengths, as is done in the figure below, clearly reveals the position‐momentum uncertainty relation. The greater the box length the greater the uncertainty in position. However, as the figure shows, the greater the box length the narrower the momentum distribution, and, consequently, the smaller the uncertainty in momentum.
A similar visualization of the angular‐momentum/angular‐position uncertainty relation is also possible. Suppose a particle on a ring is prepared in such a way that its angular wave function is represented by the following gaussian function,
$\langle\phi | \Psi\rangle=\exp \left(-a \phi^{2}\right) \tag{11} \nonumber$
where the parameter a controls the width of the angular distribution. The conjugate angular momentum wave function is obtained by the following Fourier transform.
$\langle L | \Psi\rangle=\int_{-\pi}^{\pi}\langle L | \phi\rangle\langle\phi | \Psi\rangle d \phi=\frac{1}{\sqrt{2 \pi}} \int_{-\pi}^{\pi} \exp (-i L \phi) \exp \left(-a \phi^{2}\right) d \phi \tag{12} \nonumber$
Plots of $|<\phi| \Psi>\left.\right|^{2}$ and $|<\mathrm{L}| \Psi>\left.\right|^{2}$ shown below for two values of the parameter a illustrate the angular momentum/angular position uncertainty relation. The larger the value of a, the smaller the angular positional uncertainty and the greater the angular momentum uncertainty. In other words, the greater the value of a the greater the number of angular momentum eigenstates observed.
$\mathrm{a} :=0.5 \qquad \Phi(\phi, \mathrm{a}) :=\exp \left(-\mathrm{a} \cdot \phi^{2}\right) \nonumber$
$\mathrm{L} :=-5 \ldots 5 \qquad \Psi(\mathrm{L}, \mathrm{a}) :=\int_{-\pi}^{\pi} \exp (-\mathrm{i} \cdot \mathrm{L} \cdot \phi) \Phi(\phi, \mathrm{a}) \mathrm{d} \phi \nonumber$
Make the angular position distribution narrower: $a :=2.5$
Observe a broader distribution in angular momentum.
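A short numerical sketch (my own quadrature of equation (12)) makes the comparison explicit: the normalized $|\langle L|\Psi\rangle|^{2}$ values spread over more angular momentum states when a is increased from 0.5 to 2.5.

```python
# Project the Gaussian angular wave function onto integer L values and compare
# the spread of the normalized |<L|Psi>|^2 distributions (my own sketch).
import numpy as np
from scipy.integrate import quad

def psi_L(L, a):
    # the Gaussian is real and even in phi, so only the cosine part of exp(-i L phi) survives
    return quad(lambda phi: np.cos(L * phi) * np.exp(-a * phi**2), -np.pi, np.pi)[0]

L_vals = np.arange(-5, 6)
for a in (0.5, 2.5):
    probs = np.array([psi_L(L, a) ** 2 for L in L_vals])
    print(a, np.round(probs / probs.sum(), 3))   # larger a gives the broader L spread
```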
The uncertainty relation between angular position and angular momentum as outlined above is a simplified version of that presented by S. Franke‐Arnold et al. in New Journal of Physics 6, 103 (2004).
1.84: A Brief Tutorial on Wavepackets
“A wavepacket is a superposition of wavefunctions that is usually strongly peaked in one region of space and virtually zero elsewhere. The peak of the wavepacket denotes the most likely location of the particle; it occurs where the contributing wavefunctions are in phase and interfere constructively. Elsewhere the wavefunctions interfere destructively, and the net amplitude is small or zero.
A wavepacket moves because all the component functions change at different rates, and at different times the point of maximum constructive interference is in different locations. The motion of the wavepacket corresponds very closely to the motion predicted for a classical particle in the same potential. An important difference from classical physics is that the wavepacket spreads with time, but this tendency is very small for massive, slow particles.” P. W. Atkins, Quanta, page 395.
The time dependent coordinate-space wavefunction for a free particle with momentum p is, in atomic units:
$\begin{array}{l}{\mathrm{m} :=1 \qquad \mathrm{p} :=1} \qquad {\mathrm{x} :=-10,-9.95 \ldots 10}\end{array} \ \Psi_{\mathrm{p}}(\mathrm{x}, \mathrm{t}) :=\frac{1}{\sqrt{2 \cdot \pi}} \cdot \exp (\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \exp \left(\frac{-\mathrm{i} \cdot \mathrm{p}^{2} \cdot \mathrm{t}}{2 \cdot \mathrm{m}}\right) \quad \text{where} \quad \mathrm{E}_{\mathrm{p}}=\frac{\mathrm{p}^{2}}{2 \mathrm{m}} \nonumber$
In quantum mechanics a state such as this with a precisely known momentum must have a completely uncertain position, as is shown in the graph above for p = 1. To use this wavefunction to model a particle it is necessary to form a linear superposition of momentum states. This adds enough uncertainty to the momentum to localize the position of the particle to some degree. Because the momentum of a free particle is a continuous variable, the discrete summation is replaced by an integral in forming the linear superposition.
For example, the wavefunction for a particle of mass m = 1 moving to the right with momentum between 5 and 10 atomic units is calculated below and the probability distribution is plotted for several times. Note that the wavepacket is indeed spreading as Atkins said it would.
$\mathrm{m} :=1 \qquad \mathrm{x} :=-5,-4.98 \ldots 10 \ \Psi (x,m,t) : = \frac{1}{\sqrt{2 \cdot \pi}} \int_{5}^{10} \exp \left[i \cdot \left(p \cdot x - \frac{p^{2} \cdot t}{2 \cdot m}\right)\right]dp \nonumber$
The spreading is not as great for more massive particles as can be seen by increasing the mass to 3.
$m : = 3 \nonumber$
Atkins also said the spreading is not as great for slower particles. This is shown below by summing momentum values from 0 to 5 instead of 5 to 10.
$\Psi(x, m, t) :=\frac{1}{\sqrt{2 \cdot \pi}} \int_{0}^{5} \exp \left[i \cdot\left(p \cdot x-\frac{p^{2} t}{2 \cdot m}\right)\right] d p \nonumber$
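The mass dependence of the spreading can also be quantified with a short numerical sketch of my own (Python, assumed grids): the peak height of $|\Psi(x, t)|^{2}$ for the p = 5 to 10 packet falls off more slowly when the mass is raised from 1 to 3, i.e. the heavier packet spreads less.

```python
# Peak height of |Psi(x,t)|^2 versus time for m = 1 and m = 3 (my own sketch).
import numpy as np

def prob_density(x, t, m, n_p=2000):
    p = np.linspace(5, 10, n_p)
    dp = p[1] - p[0]
    amp = np.sum(np.exp(1j * (p[None, :] * x[:, None] - p[None, :] ** 2 * t / (2 * m))), axis=1) * dp
    return np.abs(amp / np.sqrt(2 * np.pi)) ** 2

x = np.linspace(-10, 60, 1401)
for m in (1, 3):
    peaks = [prob_density(x, t, m).max() for t in (0, 2, 4)]
    print(f"m = {m}:", np.round(peaks, 2))   # the lighter packet's peak drops faster
```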
1.85: The Difference Between Fermions and Bosons
$\mathrm{n}_{1} :=1 \qquad \mathrm{n}_{2} :=2 \ \Psi(x) :=\sqrt{2} \cdot \sin \left(n_{1} \cdot \pi \cdot x\right) \qquad \Phi(x) :=\sqrt{2} \cdot \sin \left(n_{2} \cdot \pi \cdot x\right) \nonumber$
Calculate the average separation, |x1 - x2|, for two fermions and two bosons in a 1D box of unit length.
Fermions have antisymmetric wave functions:
$\Psi_{\mathrm{f}}\left(\mathrm{x}_{1}, \mathrm{x}_{2}\right) :=\frac{\Psi\left(\mathrm{x}_{1}\right) \cdot \Phi\left(\mathrm{x}_{2}\right)-\Psi\left(\mathrm{x}_{2}\right) \cdot \Phi\left(\mathrm{x}_{1}\right)}{\sqrt{2}} \nonumber$
The average particle separation for indistinguishable fermions:
$\int_{0}^{1} \int_{0}^{1} \Psi_{\mathrm{f}}\left(\mathrm{x}_{1}, \mathrm{x}_{2}\right) \cdot\left|\mathrm{x}_{1}-\mathrm{x}_{2}\right| \cdot \Psi_{\mathrm{f}}\left(\mathrm{x}_{1}, \mathrm{x}_{2}\right) \mathrm{dx}_{1} \mathrm{dx}_{2}=0.383 \nonumber$
The particles are correlated so as to keep them apart.
$\mathrm{N} :=40 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \ x_{1_{i}} :=\frac{i}{N} \qquad \mathrm{j} :=0 \ldots \mathrm{N} \qquad x_{2_{j}} :=\frac{j}{N} \ \Psi_{\mathrm{f}_{\mathrm{i}, \mathrm{j}}} :=\Psi_{\mathrm{f}}\left(x_{1_{i}}, x_{2_{j}}\right)^{2} \nonumber$
Bosons have symmetric wave functions:
$\Psi_{\mathrm{b}}\left(\mathrm{x}_{1}, \mathrm{x}_{2}\right) :=\frac{\Psi\left(\mathrm{x}_{1}\right) \cdot \Phi\left(\mathrm{x}_{2}\right)+\Psi\left(\mathrm{x}_{2}\right) \cdot \Phi\left(\mathrm{x}_{1}\right)}{\sqrt{2}} \nonumber$
The average particle separation for indistinguishable bosons:
$\int_{0}^{1} \int_{0}^{1} \Psi_{\mathrm{b}}\left(\mathrm{x}_{1}, \mathrm{x}_{2}\right) \cdot\left|\mathrm{x}_{1}-\mathrm{x}_{2}\right| \cdot \Psi_{\mathrm{b}}\left(\mathrm{x}_{1}, \mathrm{x}_{2}\right) \mathrm{dx}_{1} \mathrm{dx}_{2}=0.157 \nonumber$
The particles are correlated so as to bring them closer together.
$\mathrm{N} :=40 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \ x_{1_{i}} :=\frac{i}{N} \qquad \mathrm{j} :=0 \ldots \mathrm{N} \qquad x_{2_{j}} :=\frac{j}{N} \ \Psi_{\mathrm{b}_{\mathrm{i}, \mathrm{j}}} :=\Psi_{\mathrm{b}}\left(x_{1_{i}}, x_{2_{j}}\right)^{2} \nonumber$
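The two expectation values can be confirmed with a short numerical double integration; the sketch below is my own check using scipy's dblquad, where the sign of the exchange term is the only difference between the two cases.

```python
# Reproduce the two average separations quoted above (my own sketch).
import numpy as np
from scipy.integrate import dblquad

psi1 = lambda x: np.sqrt(2) * np.sin(np.pi * x)
psi2 = lambda x: np.sqrt(2) * np.sin(2 * np.pi * x)

def avg_sep(sign):
    # sign = -1 for the antisymmetric (fermion) state, +1 for the symmetric (boson) state
    f = lambda x2, x1: 0.5 * (psi1(x1) * psi2(x2) + sign * psi1(x2) * psi2(x1)) ** 2 * abs(x1 - x2)
    return dblquad(f, 0, 1, 0, 1)[0]

print(round(avg_sep(-1), 3), round(avg_sep(+1), 3))   # 0.383 and 0.157
```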
All fundamental particles (electrons, neutrons, protons, photons, etc.) are either bosons or fermions. Composite entities such as the elements also fall into these two categories. The fundamental distinction is spin: bosons have integer spin (0, 1, 2, ...) while fermions have half-integer spin (1/2, 3/2, ....).
The dramatic difference in behavior between bosons and fermions has led to a sociology of fundamental particles. Bosons are social and gregarious, while fermions are antisocial and aloof.
1.86: Quantum Corrals - Electrons within a Ring
"When electrons are confined to length scales approaching the de Broglie wavelength, their behavior is dominated by quantum mechanical effects. Here we report the construction and characterization of structures for confining electrons to this length scale. The walls of these "quantum corrals" are built from Fe atoms which are individually positioned on the Cu (111) surface by means of a scanning tunneling microscope (STM). These atomic structures confine surface state electrons laterally because of the strong scattering that occurs between surface state electrons and the Fe atoms. The surface state electrons are confined in the direction perpendicular to the surface because of intrinsic energetic barriers that exist in that direction."
This is the first paragraph of "Confinement of Electrons to Quantum Corrals on a Metal Surface," published by M. F. Crommie, C. P. Lutz, and D. M. Eigler in the October 8, 1993 issue of Science Magazine. They report the corraling of the surface electrons of Cu in a ring of radius 135 a0 created by 48 Fe atoms. The quantum mechanics for this form of electron confinement is well-known. Schroedinger's equation for a particle in a ring and its solution (in atomic units) are given below.
$-\dfrac{1}{2 \mu} \dfrac{d^2}{dr^2} \Psi(r) - \dfrac{1}{2 \mu r} \dfrac{d}{dr} \Psi(r) + \dfrac{L^2}{2 \mu r^2} \Psi(r) = E \Psi(r) \label{1}$
with energies
$E_{n,L} = \dfrac{Z_{n,L}^2}{2\mu R^2} \label{2}$
and the unnormalized wavefunctions
$\Psi_{n,L}(r) = J_L\left( Z_{n,L} \dfrac{r}{R}\right) \label{3}$
$J_L$ is the Lth order Bessel function, $L$ is the angular momentum quantum number, $n$ is the principal quantum number, $Z_{n,L}$ is the nth root of $J_L$, $\mu$ is the effective mass of the electron, and $R$ is the corral (ring) radius. Dirac notation is used to describe the electronic states, $|n,L\rangle$. The roots of the Bessel function are given below in terms of the $n$ and $L$ quantum numbers.
L quantum number
$Z : = \begin{pmatrix} 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & "n" \ 2.405 & 3.832 & 5.136 & 6.380 & 7.588 & 8.771 & 9.936 & 11.086 & 1 \ 5.520 & 7.016 & 8.417 & 9.761 & 11.065 & 12.339 & 13.589 & 14.821 & 2 \ 8.654 & 10.173 & 11.620 & 13.015 & 14.373 & 15.700 & 17.004 & 18.288 & 3 \ 11.792 & 13.324 & 14.796 & 16.223 & 17.616 & 18.980 & 20.321 & 21.642 & 4 \ 14.931 & 16.471 & 17.960 & 19.409 & 20.827 & 22.218 & 23.586 & 24.935 & 5 \ 18.071 & 19.616 & 21.117 & 22.583 & 24.019 & 25.430 & 26.820 & 28.191 & 6 \end{pmatrix}$ n quantum number
On the basis of Fermi energy considerations, Crommie, et al. identify the $|5,0\rangle$, $|4,2\rangle$ and $|2,7\rangle$ states as the most likely states contributing to the behavior of the surface electrons of Cu. A graphical comparison of the calculated surface electron density contributed by $|5,0\rangle$ with the experimental data suggests that it is the dominant state in determining the surface electron density. The calculated results are displayed by plotting the wave function squared in Cartesian coordinates. The exponential term involving L and $\theta$ is discarded because $\left(|\mathrm{e}^{\mathrm{i} \cdot \mathrm{L} \cdot \theta}|\right)^{2}=1$.
The theoretical results are displayed by plotting the wave function in Cartesian coordinates.
$\mathrm{R} :=135 \quad \mathrm{n} :=5 \qquad \mathrm{L} :=0 \quad \mathrm{N} :=100 \qquad \mathrm{i} :=0 \ldots \mathrm{N} \qquad \mathrm{j} :=0 \ldots \mathrm{N} \ \mathrm{x}_{\mathrm{i}} :=-\mathrm{R}+\frac{2 \cdot \mathrm{i}}{\mathrm{N}} \cdot \mathrm{R} \qquad \mathrm{y}_{\mathrm{j}} :=-\mathrm{R}+\frac{2 \cdot \mathrm{j}}{\mathrm{N}} \cdot \mathrm{R} \ \Psi (x,y) : = \Bigg|_{0 \; \text{otherwise}}^{\operatorname{Jn}\left(\mathrm{L}, \mathrm{Z}_{\mathrm{n}, \mathrm{L}} \cdot \frac{\sqrt{\mathrm{x}^{2}+\mathrm{y}^{2}}}{\mathrm{R}}\right) \text { if } \sqrt{\mathrm{x}^{2}+\mathrm{y}^{2}} \leq \mathrm{R}} \quad P_{i, j} :=\Psi\left(x_{i}, y_{j}\right)^{2} \nonumber$
However, Crommie, et al. noted that the $|5,0\rangle$, $|4,2\rangle$ and $|2,7\rangle$ states are close in energy, their energies being proportional to the squares of 14.931, 14.796 and 14.821 given in the table above. An even statistical mixture of these states would yield the surface electron density shown below, which is also visually in agreement with the experimental surface electron density.
$\Psi^{'} (x,y) : = \Bigg|_{0 \; \text{otherwise}}^{\operatorname{Jn}\left(0, \mathrm{Z}_{5,0} \cdot \frac{\sqrt{\mathrm{x}^{2}+\mathrm{y}^{2}}}{\mathrm{R}}\right)^{2}+\operatorname{Jn}\left(2, \mathrm{Z}_{4,2} \cdot \frac{\sqrt{\mathrm{x}^{2}+\mathrm{y}^{2}}}{\mathrm{R}}\right)^{2}+\operatorname{Jn}\left(7, \mathrm{Z}_{2,7} \cdot \frac{\sqrt{\mathrm{x}^{2}+\mathrm{y}^{2}}}{\mathrm{R}}\right)^{2} \text { if } \sqrt{\mathrm{x}^{2}+\mathrm{y}^{2}} \leq \mathrm{R}} \quad P_{i, j} :=\Psi^{\prime}\left(x_{i}, y_{j}\right) \nonumber$
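For reference, the same density can be generated with standard Bessel-function routines. The sketch below (my own grid sizes, in Python rather than Mathcad) uses scipy's jn_zeros and jv and reproduces the tabulated root $Z_{5,0}$ = 14.931.

```python
# Build the |5,0> corral density on a Cartesian grid (my own sketch).
import numpy as np
from scipy.special import jv, jn_zeros

R, n, L = 135.0, 5, 0
Z_nL = jn_zeros(L, n)[-1]          # nth zero of J_L
print(round(Z_nL, 3))              # 14.931

N = 100
x = np.linspace(-R, R, N + 1)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
density = np.where(r <= R, jv(L, Z_nL * r / R), 0.0) ** 2   # surface electron density inside the corral
```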
1.87: Planck's Radiation Equation Fit to Experimental Data
$\mathrm{n} :=42 \qquad \mathrm{i} :=1 . . \mathrm{n} \nonumber$
$\rho_{i} :=$ $\lambda_{i} :=$
0.07 0.667
0.096 0.720
0.10 0.737
0.190 0.811
0.210 0.383
0.398 0.917
0.420 0.917
0.680 1.027
0.708 1.021
1.036 1.167
1.062 1.172
1.258 1.247
1.669 1.484
1.770 1.697
1.776 1.831
1.730 2.039
1.685 2.170
1.640 2.275
1.551 2.406
1.392 2.563
1.145 2.27
1.115 2.824
1.071 2.916
1.042 2.921
0.974 3.050
0.918 2.151
0.797 3.344
0.760 3.450
0.742 3.556
0.698 3.661
0.667 3.754
0.570 4.027
0.426 4.427
0.378 4.613
0.345 4.805
0.310 4.968
0.280 5.128
0.250 5.296
0.220 5.469
0.205 5.632
0.175 5.783
0.155 6.168
The data for this exercise is taken from page 19 of Eisberg and Resnick, Quantum Physics.
The values of rho are given in units of 10$^{3}$ joules/m$^{3}$ and the values of lambda are given in units of 10$^{-6}$ m. The temperature is 1595 K.
Two pairs of data points are used to get approximate values for the parameters a and b in the Planck equation.
$a :=1 \qquad \mathrm{b} :=1 \nonumber$
Given
$\rho_{16}=\frac{a \cdot\left(\lambda_{16}\right)^{-5}}{e^{\frac{b}{\lambda_{16}}}-1} \qquad \rho_{22}=\frac{a \cdot\left(\lambda_{22}\right)^{-5}}{e^{\frac{b}{\lambda_{22}}}-1} \nonumber$
$\left(\begin{array}{l}{\mathrm{a}} \ {\mathrm{b}}\end{array}\right) :=\text { Find }(\mathrm{a}, \mathrm{b}) \qquad \mathrm{a}=3.84 \times 10^{3} \qquad \mathrm{b}=8.479 \nonumber$
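This two-point estimate can be reproduced outside Mathcad. The sketch below is my own substitute for the Given/Find block: it uses the i = 16 and i = 22 rows of the data table, eliminates a with the first point, and solves the remaining equation in b with scipy's brentq (the bracketing interval is my own choice).

```python
# Reproduce the two-point seed estimate for a and b (my own sketch).
import numpy as np
from scipy.optimize import brentq

rho16, lam16 = 1.730, 2.039
rho22, lam22 = 1.115, 2.824

def planck(lam, a, b):
    return a * lam**-5 / (np.exp(b / lam) - 1)

def residual(b):
    # eliminate a using the i = 16 point, then require the i = 22 point to fit
    a = rho16 * (np.exp(b / lam16) - 1) * lam16**5
    return planck(lam22, a, b) - rho22

b = brentq(residual, 1, 20)
a = rho16 * (np.exp(b / lam16) - 1) * lam16**5
print(round(a), round(b, 3))    # ~3.84e3 and ~8.48, matching the Find result above
```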
Planck's Equation is fit to data:
$\mathrm{F}(\lambda, \mathrm{a}, \mathrm{b}) :=\frac{\mathrm{a} \cdot \lambda^{-5}}{e^{\frac{\mathrm{b}}{\lambda}}-1} \nonumber$
$\operatorname{SSD}(\mathrm{a}, \mathrm{b}) :=\sum_{\mathrm{i}}\left(\rho_{\mathrm{i}}-\mathrm{F}\left(\lambda_{\mathrm{i}}, \mathrm{a}, \mathrm{b}\right)\right)^{2} \nonumber$
Given
$\operatorname{SSD}(\mathrm{a}, \mathrm{b})=0 \qquad \mathrm{a}>0 \qquad \mathrm{b}>0 \qquad\left(\begin{array}{l}{\mathrm{a}} \ {\mathrm{b}}\end{array}\right) :=\operatorname{Minerr}(\mathrm{a}, \mathrm{b}) \nonumber$
Display optimum values of a and b:
$\mathrm{a}=4.715 \times 10^{3} \qquad \mathrm{b}=8.906 \nonumber$
Plot of fit:
$\lambda :=0.05, 0.1 \ldots 7 \nonumber$
Calculate Planck's constant using the value of b, which is equal to (hc)/(kT).
$\mathrm{h} :=\frac{\mathrm{b} \cdot 10^{-6} \cdot 1.381 \cdot 10^{-23} \cdot 1595}{2.9979 \cdot 10^{8}} \qquad \mathrm{h}=6.544 \times 10^{-34} \nonumber$
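As a cross-check, the same least-squares fit can be carried out outside Mathcad. The sketch below uses SciPy's curve_fit on a representative subset of the tabulated data; the subset, the function names, and the use of curve_fit are my choices, not part of the original worksheet.

```python
# Hedged sketch: fit rho(lambda) = a*lambda^-5/(exp(b/lambda)-1) with SciPy instead of
# Mathcad's Given/Minerr block. (lambda in 1e-6 m, rho in 1e3 J/m^3; T = 1595 K.)
import numpy as np
from scipy.optimize import curve_fit

lam = np.array([0.917, 1.167, 1.484, 1.831, 2.406, 3.050, 3.754, 4.805, 5.783])
rho = np.array([0.398, 1.036, 1.669, 1.776, 1.551, 0.974, 0.667, 0.345, 0.175])

def planck(lmbda, a, b):
    return a * lmbda**-5 / (np.exp(b / lmbda) - 1.0)

(a, b), _ = curve_fit(planck, lam, rho, p0=(5.0e3, 10.0))
print(a, b)                      # expect values roughly near a ~ 4.7e3, b ~ 8.9

# b equals (hc)/(kT) in units of 1e-6 m, so Planck's constant follows as in the text
h = b * 1e-6 * 1.381e-23 * 1595 / 2.9979e8
print(h)                         # roughly 6.5e-34 J s
```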
1.88: Planck's Radiation Equation Fit to Experimental Data - Another Algorithm
$\mathrm{n} :=42 \qquad \mathrm{i} :=1 . . \mathrm{n} \nonumber$
$\rho_{i} :=$ $\lambda_{i} :=$
0.07 0.667
0.096 0.720
0.10 0.737
0.190 0.811
0.210 0.383
0.398 0.917
0.420 0.917
0.680 1.027
0.708 1.021
1.036 1.167
1.062 1.172
1.258 1.247
1.669 1.484
1.770 1.697
1.776 1.831
1.730 2.039
1.685 2.170
1.640 2.275
1.551 2.406
1.392 2.563
1.145 2.27
1.115 2.824
1.071 2.916
1.042 2.921
0.974 3.050
0.918 2.151
0.797 3.344
0.760 3.450
0.742 3.556
0.698 3.661
0.667 3.754
0.570 4.027
0.426 4.427
0.378 4.613
0.345 4.805
0.310 4.968
0.280 5.128
0.250 5.296
0.220 5.469
0.205 5.632
0.175 5.783
0.155 6.168
The data for this exercise are taken from page 19 of Eisberg and Resnick, Quantum Physics.
The values of $\rho$ are given in units of $10^3$ joules/m$^3$ and the values of $\lambda$ are given in units of $10^{-6}$ m. The temperature is 1595 K.
Define Planck radiation function and first derivatives with respect to parameters a and b:
$F(\lambda, a, b) :=\left[\begin{array}{c}{\dfrac{a \cdot \lambda^{-5}}{e^{\frac{b}{\lambda}}-1}} \ {\dfrac{d}{d a} \dfrac{a \cdot \lambda^{-5}}{e^{\frac{b}{\lambda}}-1}} \ {\dfrac{d}{d b} \dfrac{a \cdot \lambda^{-5}}{e^{\frac{b}{\lambda}}-1}}\end{array}\right] \nonumber$
Carry out nonlinear regression using Mathcad's genfit algorithm:
$\text{seed}:=\left(\begin{array}{c}{5 \cdot 10^{3}} \ {10}\end{array}\right) \qquad P :=\text { genfit }(\lambda, \rho, \text { seed, } F) \qquad P=\left(\begin{array}{c}{4.715 \times 10^{3}} \ {8.906}\end{array}\right) \qquad\left(\begin{array}{l}{a} \ {b}\end{array}\right) :=P \nonumber$
Calculated radiation equation using output parameters:
$\rho_{\text {calc}}(\mathrm{L}, a, b) :=\frac{a \cdot L^{-5}}{\left(e^{\frac{b}{L}}-1\right)} \nonumber$
Plot data and fit:
$\mathrm{L} :=0.05, 0.1 \ldots 7 \nonumber$
Calculate Planck's constant using the value of b, which is equal to (hc)/(kT).
$\mathrm{h} :=\frac{\mathrm{b} \cdot 10^{-6} \cdot 1.381 \cdot 10^{-23} \cdot 1595}{2.9979 \cdot 10^{8}} \qquad \mathrm{h}=6.544 \times 10^{-34} \nonumber$
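For readers without Mathcad, the following is a rough sketch of what a genfit-style algorithm does: a Gauss-Newton iteration that uses the analytic partial derivatives defined in the vector function above. The data subset, names, and iteration count are illustrative assumptions, not taken from the original worksheet.

```python
# Hedged sketch: Gauss-Newton refinement of (a, b) using the analytic derivatives
# dF/da and dF/db of the Planck form, mirroring the role they play in genfit.
import numpy as np

lam = np.array([0.917, 1.167, 1.484, 1.831, 2.406, 3.050, 3.754, 4.805, 5.783])
rho = np.array([0.398, 1.036, 1.669, 1.776, 1.551, 0.974, 0.667, 0.345, 0.175])

def model_and_jacobian(lmbda, a, b):
    e = np.exp(b / lmbda)
    F = a * lmbda**-5 / (e - 1.0)
    dFda = lmbda**-5 / (e - 1.0)                 # partial derivative with respect to a
    dFdb = -a * lmbda**-6 * e / (e - 1.0)**2     # partial derivative with respect to b
    return F, np.column_stack([dFda, dFdb])

p = np.array([5.0e3, 10.0])                      # seed, as in the text
for _ in range(20):                              # a few Gauss-Newton steps
    F, J = model_and_jacobian(lam, *p)
    p = p + np.linalg.lstsq(J, rho - F, rcond=None)[0]   # solve J*dp = residual
print(p)                                         # expect roughly [4.7e3, 8.9]
```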
1.89: Fitting Einstein's Heat Capacity Equation to Experimental Data for Silver
$\mathrm{n} :=30 \qquad \mathrm{i} :=1 \ldots \mathrm{n} \nonumber$
$\mathrm{T}_{\mathrm{i}} :=$ $\mathrm{C}_{\mathrm{i}} :=$
1 0.000818
3 0.0065
5 0.0243
8 0.0927
10 0.183
15 0.670
20 1.647
25 3.066
30 4.774
35 6.612
40 8.419
45 10.11
50 11.66
55 13.04
60 14.27
65 15.35
70 16.30
80 17.87
90 19.11
100 20.10
120 21.54
140 22.52
160 23.22
180 23.75
200 24.16
220 24.49
240 24.76
260 24.99
280 25.19
300 25.37
The heat capacity data were taken from the Handbook of Physics and Chemistry ‐ 72nd Edition, page 5‐71. The data are presented in units of Joules/mole/K.
Gas law constant:
$\mathrm{R} :=8.3145 \nonumber$
Define Einstein function for heat capacity:
$\mathrm{F}(\mathrm{T}, \Theta) :=3 \cdot \mathrm{R} \cdot\left(\frac{\Theta}{\mathrm{T}}\right)^{2} \cdot \frac{\exp \left(\frac{\Theta}{\mathrm{T}}\right)}{\left(\exp \left(\frac{\Theta}{\mathrm{T}}\right)-1\right)^{2}} \quad \text{where} \; \Theta=\frac{h \cdot \nu}{k} \nonumber$
Form the sum of the squares of the deviations:
$\operatorname{SSD}(\Theta) :=\sum_{\mathrm{i}}\left(\mathrm{C}_{\mathrm{i}}-\mathrm{F}\left(\mathrm{T}_{\mathrm{i}}, \Theta\right)\right)^{2} \nonumber$
Minimize the sum of the squares of the deviations:
$\Theta :=200 \nonumber$
Given
$\operatorname{SSD}(\Theta)=0 \qquad \Theta :=\operatorname{Minerr}(\Theta) \nonumber$
Einstein Temperature for best fit:
$\Theta=154.707 \nonumber$
Mean squared error:
$\frac{\operatorname{SSD}(\Theta)}{(n-2)}=0.319 \nonumber$
Plot data and fit:
$t :=1 \ldots 300 \nonumber$
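A hedged sketch of the same single-parameter fit in Python follows; it mirrors the SSD/Minerr procedure with SciPy's minimize_scalar on a subset of the silver data (the subset, names, and bracketing interval are my choices, not part of the original worksheet).

```python
# Hedged sketch: least-squares estimate of the Einstein temperature Theta for silver.
import numpy as np
from scipy.optimize import minimize_scalar

R = 8.3145
T = np.array([20.0, 40.0, 60.0, 100.0, 200.0, 300.0])     # K
C = np.array([1.647, 8.419, 14.27, 20.10, 24.16, 25.37])  # J mol^-1 K^-1

def einstein(T, theta):
    x = theta / T
    return 3 * R * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

def ssd(theta):
    return np.sum((C - einstein(T, theta))**2)

res = minimize_scalar(ssd, bracket=(100.0, 200.0, 300.0))
print(res.x)   # should land in the neighborhood of the Theta ~ 155 K found above
```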
1.90: Einstein's Heat Capacity Equation Fit to Experimental Data - Another Algorithm
$n :=30 \qquad \mathrm{i} :=1 \ldots \mathrm{n} \nonumber$
$\mathrm{T}_{\mathrm{i}} :=$ $\mathrm{C}_{\mathrm{i}} :=$
1 0.000818
3 0.0065
5 0.0243
8 0.0927
10 0.183
15 0.670
20 1.647
25 3.066
30 4.774
35 6.612
40 8.419
45 10.11
50 11.66
55 13.04
60 14.27
65 15.35
70 16.30
80 17.87
90 19.11
100 20.10
120 21.54
140 22.52
160 23.22
180 23.75
200 24.16
220 24.49
240 24.76
260 24.99
280 25.19
300 25.37
The heat capacity data were taken from the Handbook of Physics and Chemistry - 72nd Edition, page 5-71. The data are presented in units of Joules/mole/K.
$\mathrm{R} :=8.3145 \nonumber$
Define the Einstein function for heat capacity and its first derivative with respect to $\Theta$:
$F(T, \Theta) :=\left[\begin{array}{c}{3 \cdot R \cdot\left(\frac{\Theta}{T}\right)^{2} \cdot \frac{\exp \left(\frac{\Theta}{T}\right)}{\left(\exp \left(\frac{\Theta}{T}\right)-1\right)^{2}}} \ {\frac{d}{d \Theta}\left[3 \cdot R \cdot\left(\frac{\Theta}{T}\right)^{2} \cdot \frac{\exp \left(\frac{\Theta}{T}\right)}{\left(\exp \left(\frac{\Theta}{T}\right)-1\right)^{2}}\right]}\end{array}\right] \nonumber$
Call genfit to do nonlinear regression analysis.
$P :=\text { genfit }(T, C, 200, F) \qquad P=154.707 \nonumber$
Plot data and fit:
$t :=1 \ldots 300 \nonumber$
1.91: Fitting Debye's Heat Capacity Equation to Experimental Data for Silver
$\mathrm{n} :=30 \qquad \mathrm{i} :=1 \ldots \mathrm{n} \nonumber$
$\mathrm{T}_{\mathrm{i}} :=$ $\mathrm{C}_{\mathrm{i}} :=$
1 0.000818
3 0.0065
5 0.0243
8 0.0927
10 0.183
15 0.670
20 1.647
25 3.066
30 4.774
35 6.612
40 8.419
45 10.11
50 11.66
55 13.04
60 14.27
65 15.35
70 16.30
80 17.87
90 19.11
100 20.10
120 21.54
140 22.52
160 23.22
180 23.75
200 24.16
220 24.49
240 24.76
260 24.99
280 25.19
300 25.37
The heat capacity data were taken from the Handbook of Physics and Chemistry ‐ 72nd Edition, page 5‐71. The data are presented in units of Joules/mole/K.
Gas law constant:
$\mathrm{R} :=8.31451 \nonumber$
Define Debye function for heat capacity:
$\mathrm{F}(\mathrm{T}, \Theta) :=9 \cdot \mathrm{R} \cdot\left(\frac{\mathrm{T}}{\Theta}\right)^{3} \cdot \int_{0}^{\frac{\Theta}{\mathrm{T}}} \frac{x^{4} \cdot \exp (\mathrm{x})}{(\exp (\mathrm{x})-1)^{2}} \mathrm{dx} \quad \text{where} \; x = \frac{h \nu}{k T} \nonumber$
Form the sum of the squares of the deviations:
$\operatorname{SSD}(\Theta) :=\sum_{\mathrm{i}}\left(\mathrm{C}_{\mathrm{i}}-\mathrm{F}\left(\mathrm{T}_{\mathrm{i}}, \Theta\right)\right)^{2} \nonumber$
Minimize the sum of the squares of the deviations:
$\Theta :=200 \nonumber$
Given
$\operatorname{SSD}(\Theta)=0 \qquad \Theta :=\operatorname{Minerr}(\Theta) \nonumber$
Debye Temperature for best fit:
$\Theta=210.986 \nonumber$
Mean squared error:
$\frac{\operatorname{SSD}(\Theta)}{(n-2)}=0.16 \nonumber$
Plot data and fit:
$t :=1 \ldots 300 \nonumber$
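The Debye fit can be reproduced in the same spirit; the sketch below evaluates the Debye integral numerically with SciPy's quad inside the model. The data subset, names, and bracketing interval are illustrative assumptions rather than material from the original worksheet.

```python
# Hedged sketch: least-squares estimate of the Debye temperature for silver.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

R = 8.31451
T = np.array([20.0, 40.0, 60.0, 100.0, 200.0, 300.0])     # K
C = np.array([1.647, 8.419, 14.27, 20.10, 24.16, 25.37])  # J mol^-1 K^-1

def debye(T, theta):
    # lower limit 1e-8 dodges the removable 0/0 of the integrand at x = 0
    integral, _ = quad(lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2, 1e-8, theta / T)
    return 9 * R * (T / theta)**3 * integral

def ssd(theta):
    return sum((Ci - debye(Ti, theta))**2 for Ti, Ci in zip(T, C))

res = minimize_scalar(ssd, bracket=(150.0, 220.0, 300.0))
print(res.x)   # should come out near the Debye temperature ~ 211 K found above
```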
1.92: Debye's Heat Capacity Equation Fit to Experimental Data - Another Algorithm
$\mathrm{n} :=30 \qquad \mathrm{i} :=1 \ldots \mathrm{n} \nonumber$
$\mathrm{T}_{\mathrm{i}} :=$ $\mathrm{C}_{\mathrm{i}} :=$
1 0.000818
3 0.0065
5 0.0243
8 0.0927
10 0.183
15 0.670
20 1.647
25 3.066
30 4.774
35 6.612
40 8.419
45 10.11
50 11.66
55 13.04
60 14.27
65 15.35
70 16.30
80 17.87
90 19.11
100 20.10
120 21.54
140 22.52
160 23.22
180 23.75
200 24.16
220 24.49
240 24.76
260 24.99
280 25.19
300 25.37
The heat capacity data were taken from the Handbook of Physics and Chemistry ‐ 72nd Edition, page 5‐71. The data are presented in units of Joules/mole/K.
Gas law constant:
$\mathrm{R} :=8.31451 \nonumber$
Define the Debye function for heat capacity and its first derivative with respect to $\Theta$:
$F(T, \Theta) :=\left[\begin{array}{c}{9 \cdot R \cdot\left(\frac{T}{\Theta}\right)^{3} \cdot \int_{0}^{\frac{\Theta}{T}} \frac{x^{4} \cdot \exp (x)}{(\exp (x)-1)^{2}} d x} \ {\frac{d}{d \Theta}\left[9 \cdot R \cdot\left(\frac{T}{\Theta}\right)^{3} \cdot \int_{0}^{\frac{\Theta}{T}} \frac{x^{4} \cdot \exp (x)}{(\exp (x)-1)^{2}} d x\right]}\end{array}\right] \nonumber$
Call genfit to do nonlinear regression analysis.
$P :=\text { genfit }(T, C, 200, F) \qquad P=210.985 \nonumber$
Plot data and fit:
$\mathrm{t} :=1 \ldots 300 \nonumber$
1.93: Wave-particle Duality and the Uncertainty Principle
Nick Herbert, author of Quantum Reality, has proposed the name quon for "any entity that exhibits both wave and particle attributes in the peculiar quantum mechanical manner." Obvious examples of quons for chemists are the electron, proton, neutron and photon. The wave-particle duality of quons is captured succinctly by the deBroglie relation, which unites the wave property $\lambda$ with the particle property mv in a reciprocal relationship mediated by the ubiquitous Planck's constant.
$\lambda=\frac{\mathrm{h}}{\mathrm{mv}}=\frac{\mathrm{h}}{\mathrm{p}} \nonumber$
The most general momentum wave function for a quon in one-dimension is Euler's equation when it incorporates the deBroglie equation.
$\langle x | p\rangle=\exp \left(i 2 \pi \frac{x}{\lambda}\right) \xrightarrow[\hbar = h / 2 \pi]{\lambda = h / p} \exp \left(\frac{i p x}{\hbar}\right)=\cos \left(\frac{p x}{\hbar}\right)+i \sin \left(\frac{p x}{\hbar}\right) \ \Psi(p, x) :=\exp (i \cdot p \cdot x) \nonumber$
The momentum wave function for a quon with a well-defined momentum (p = 7, for example) is shown below in atomic units (h = 2$\pi$). It clearly illustrates the uncertainty principle because the wave function is completely spatially delocalized.
Momentum:
$\mathrm{p} :=7 \qquad \mathrm{j} :=0 \ldots 300 \nonumber$
Axis of propagation:
$\mathrm{x}_{\mathrm{j}} :=\mathrm{j} \cdot 0.04 \nonumber$
Real axis:
$\mathrm{y}_{\mathrm{j}} :=\operatorname{Re}\left(\Psi\left(\mathrm{p}, \mathrm{x}_{\mathrm{j}}\right)\right) \nonumber$
Imaginary axis:
$z_{\mathrm{j}} :=\operatorname{Im}\left(\Psi\left(\mathrm{p}, \mathrm{x}_{\mathrm{j}}\right)\right) \nonumber$
As might be expected on the basis of the uncertainty principle, the particle-like character of a quon is revealed only when there is uncertainty in momentum. This can be demonstrated by plotting Euler's equation for a superposition of momentum states as shown below. This superposition (integrating, or summing over a range of momentum values) clearly reveals the incipient particle-like characteristics of a quon.
Axis of propagation:
$x_{j} :=j \cdot 0.054-7 \nonumber$
Real axis:
$\mathrm{y}_{\mathrm{j}} :=\int_{6}^{8} \operatorname{Re}\left(\Psi\left(\mathrm{p}, \mathrm{x}_{\mathrm{j}}\right)\right) \mathrm{d} \mathrm{p} \nonumber$
Imaginary axis:
$z_{j} :=\int_{6}^{8} \operatorname{Im}\left(\Psi\left(p, x_{j}\right)\right) d p \nonumber$
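For readers who want to reproduce this plot numerically, a minimal sketch follows (grid spacing and names are my choices, not the original Mathcad worksheet). Integrating $\exp(ipx)$ over p from 6 to 8 yields a packet whose amplitude peaks near x = 0, in contrast to the fully delocalized single-momentum state.

```python
# Hedged sketch: superposing momentum states from p = 6 to 8 produces a localized packet.
import numpy as np

x = np.arange(301) * 0.054 - 7.0        # axis of propagation, as above
p = np.linspace(6.0, 8.0, 2001)         # momentum range of the superposition
dp = p[1] - p[0]

# integrate exp(i*p*x) over p for each x with a simple Riemann sum
psi = np.exp(1j * np.outer(x, p)).sum(axis=1) * dp

print(np.abs(psi).max(), x[np.argmax(np.abs(psi))])   # amplitude peaks near x = 0
```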
The form of Euler's equation we have been using, $<\mathrm{x} | \mathrm{p}>$, is the momentum eigenfunction in coordinate space. Its complex conjugate, $<\mathrm{p} | \mathrm{x}>$, is the position eigenfunction in momentum space. In other words, a quon with a well-defined position is delocalized in momentum space, just as a quon with a well-defined momentum is delocalized in coordinate space.
$\langle x | p\rangle=\exp \left(\frac{i p x}{\hbar}\right) \nonumber$
$\langle p | x\rangle=\exp \left(-\frac{i p x}{\hbar}\right) \nonumber$
These equations are simple Fourier transforms; they allow us to translate knowledge in one language (position or momentum) into the other language. They are a kind of mathematical dictionary. Richard Feynman claimed that Euler's equation is the most remarkable equation in mathematics and called it "our jewel." Because it is the kernel in Fourier transforms, it is ubiquitous in quantum mechanics.
1.94: Wave-Particle Duality for Matter and Light
A wave is spatially delocalized
A particle is spatially localized
These incompatible concepts are united by the deBroglie wave equation with the wave property (wavelength) on the left and the particle property (momentum) on the right in a reciprocal relationship mediated by the ubiquitous Planck’s constant.
$\lambda=\frac{h}{m v} \nonumber$
1.95: What Part of the Quantum Theory Don't You Understand?
Quantum Mechanics and Mathematics According to Richard Feynman
• We have come to the conclusion that what are usually called the advanced parts of quantum mechanics are, in fact, quite simple.
• The mathematics that is involved is particularly simple, involving simple algebraic operations and no differential equations or at most very simple ones.
Einstein and Schrödinger
Bohr and Feynman
Quantum Quotes
• Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one.' I, at any rate, am convinced that He is not playing at dice. (Einstein)
• I still believe in the possibility of a model of reality, that is to say, of a theory, which represents things themselves and not merely the probability of their occurrence. (Einstein)
• It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature. (Bohr)
• Quantum states are states of knowledge and not objective features of the systems they describe. (N. David Mermin)
• I remember discussions with Bohr which went through many hours till very late at night and ended almost in despair; and when at the end of the discussion I went alone for a walk in a neighboring park I repeated to myself again and again the question: Can nature possibly be as absurd as it seemed to us in these atomic experiments? (Heisenberg)
After state preparation by the vertical polarizer, only two subsequent experiments have certain outcomes according to quantum mechanics.
1. The probability that the vertically polarized photons will pass a second vertical polarizer is 1: $|\langle \updownarrow | \updownarrow \rangle |^{2} = 1$
2. The probability that the vertically polarized photons will pass a second polarizer that is oriented horizontally is 0: $|\langle \leftrightarrow | \updownarrow \rangle |^{2} = 0$
For all other experiments involving two polarizers only the probability of the outcome can be predicted, and this is $\cos^{2}(\theta)$, where $\theta$ is the relative angle of the polarizing films.
Feynman on the Significance of the Double-Slit Experiment
• We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics.
• In reality, it contains the only mystery.
• We cannot make the mystery go away by “explaining” how it works. We will just tell you how it works.
• In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics.
• Any situation in quantum mechanics can always be explained by saying, “You remember the experiment with the two holes? It’s the same thing.”
$|\uparrow\rangle_{S} \stackrel{\mathrm{BS}_{1}}{\longrightarrow} \frac{1}{\sqrt{2}}\left[|\uparrow\rangle_{A}+|\rightarrow\rangle_{B}\right] \nonumber$
$|\uparrow\rangle_{A} \stackrel{\mathrm{BS}_{2}}{\longrightarrow} \frac{1}{\sqrt{2}}\left[|\rightarrow\rangle_{D_{1}}+|\uparrow\rangle_{D_{2}}\right] \qquad |\rightarrow\rangle_{B} \stackrel{\mathrm{BS}_{2}}{\longrightarrow} \frac{1}{\sqrt{2}}\left[|\rightarrow\rangle_{D_{1}}+|\downarrow\rangle_{D_{2}}\right] \nonumber$
$|\uparrow\rangle_{S} \xrightarrow{\mathrm{BS}_{1},\; \mathrm{BS}_{2}} \frac{1}{2}\left[|\rightarrow\rangle_{D_{1}}+|\uparrow\rangle_{D_{2}}+|\rightarrow\rangle_{D_{1}}+|\downarrow\rangle_{D_{2}}\right]=|\rightarrow\rangle_{D_{1}} \nonumber$
Final Quantum Quotes
• If we want to describe what happens in an atomic event, we have to realize that the word “happens” can only apply to the observation, not to the state of affairs between two observations. [Werner Heisenberg]
• I think it is safe to say that no one understands quantum mechanics. Do not keep saying to yourself, if you can possibly avoid it, 'But how can it possibly be like that?' because you will go down the drain into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that. [Richard Feynman]
• Any one who is not shocked by quantum mechanics has not fully understood it. [Niels Bohr]
• The mathematical predictions of quantum mechanics yield results that are in agreement with experimental findings. That is the reason we use quantum theory. That quantum theory fits experiment is what validates the theory, but why experiment should give such peculiar results is a mystery. This is the shock to which Bohr referred. [Marvin Chester with slight modifications by Frank Rioux]
• In the quantum world the present does not always have a unique past.
Feynman Poetry
We have always had a great deal of difficulty understanding the world view that quantum mechanics represents.
At least I do, because I’m an old enough man that I haven’t got to the point that this stuff is obvious to me.
Okay, I still get nervous with it …
You know how it always is, every new idea, it takes a generation or two until it becomes obvious that there’s no real problem.
I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.
1.96: Quantum Potpourri - Wave-particle Duality and the Superposition Principle
Wave-particle Duality
Quon
• An entity that exhibits both wave and particle behavior in the peculiar quantum mechanical manner. (Nick Herbert)
• Examples from chemistry:
• Electrons
• Protons
• Neutrons
• Photons
Wave-particle Duality Illustrated
• Source (light bulb, sun) emits light
• Interference fringes are observed suggesting wave-like behavior.
• However, the detector (eye) registers particles (retinal absorbs a photon and changes shape ultimately causing a signal to be sent to the brain via the optic nerve)
• We detect particles, but we predict what will happen, or interpret what happened, by assuming wave-like behavior
• “Everything in the future is a wave, everything in the past is a particle.” Lawrence Bragg
Retinal Absorbs a Photon
Single-slit Diffraction
Other Diffraction Phenomena
Einstein and Schrödinger
The Quantum View of Playing Dice
$|\Psi\rangle_{a}|\Psi\rangle_{b}=\frac{1}{\sqrt{6}}[|1\rangle+|2\rangle+|3\rangle+|4\rangle+|5\rangle+|6\rangle] \frac{1}{\sqrt{6}}[|1\rangle+|2\rangle+|3\rangle+|4\rangle+|5\rangle+|6\rangle] \nonumber$
$|\Psi\rangle_{a}|\Psi\rangle_{b}=\frac{1}{6} \left[ |1\rangle|1\rangle+|1\rangle|2\rangle+|1\rangle|3\rangle+|1\rangle|4\rangle+|1\rangle|5\rangle+|1\rangle|6\rangle \ +|2\rangle|1\rangle+|2\rangle|2\rangle+|2\rangle|3\rangle+|2\rangle|4\rangle+|2\rangle|5\rangle+|2\rangle|6\rangle \ +|3\rangle|1\rangle+|3\rangle|2\rangle+|3\rangle|3\rangle+|3\rangle|4\rangle+|3\rangle|5\rangle+|3\rangle|6\rangle \ +|4\rangle|1\rangle+|4\rangle|2\rangle+|4\rangle|3\rangle+|4\rangle|4\rangle+|4\rangle|5\rangle+|4\rangle|6\rangle \ +|5\rangle|1\rangle+|5\rangle|2\rangle+|5\rangle|3\rangle+|5\rangle|4\rangle+|5\rangle|5\rangle+|5\rangle|6\rangle \ +|6\rangle|1\rangle+|6\rangle|2\rangle+|6\rangle|3\rangle+|6\rangle|4\rangle+|6\rangle|5\rangle+|6\rangle|6\rangle \right] \nonumber$
Electronic Structure and the Superposition Principle
Electrons in atoms or molecules are characterized by their entire distributions, called wave functions or orbitals, rather than by instantaneous positions and velocities: an electron may be considered always to be, with appropriate probability, at all points of its distribution, which does not vary with time. (F. E. Harris)
For example, the hydrogen atom electron is in a stationary state which is a weighted superposition of all possible distances from the nucleus. The electron is not orbiting the nucleus; it does not execute a classical trajectory during its interaction with the nucleus.
From the quantum mechanical perspective, to measure the position of an electron is not to find out where it is, but to cause it to be somewhere. (Louisa Gilder)
Another Example of the Superposition Principle
$|\uparrow\rangle_{s} \stackrel{\mathrm{BS}_{1}}{\longrightarrow} \frac{1}{\sqrt{2}}\left[|\uparrow\rangle_{A}+|\rightarrow\rangle_{B}\right] \nonumber$
Quantum Quotes
• If we want to describe what happens in an atomic event, we have to realize that the word “happens” can only apply to the observation, not to the state of affairs between two observations. [Werner Heisenberg]
• I think it is safe to say that no one understands quantum mechanics. Do not keep saying to yourself, if you can possibly avoid it, 'But how can it possibly be like that?' because you will go down the drain into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that. [Richard Feynman]
• Any one who is not shocked by quantum mechanics has not fully understood it. [Niels Bohr]
• The mathematical predictions of quantum mechanics yield results that are in agreement with experimental findings. That is the reason we use quantum theory. That quantum theory fits experiment is what validates the theory, but why experiment should give such peculiar results is a mystery. This is the shock to which Bohr referred. [Marvin Chester with slight modifications by Frank Rioux]
• A philosopher once said, ‘It is necessary for the very existence of science that the same conditions always produce the same results.’ Well, they don’t! [Richard Feynman]
• After the first world war I gave a great deal of thought to the theory of quanta. It was then that I had a sudden inspiration. Einstein's wave-particle dualism for light was an absolutely general phenomenon extending to all physical nature. De Broglie
• He (de Broglie) has lifted one corner of the great veil. Einstein
• Everything in the future is a wave, everything in the past is a particle. Lawrence Bragg
• Something unknown is doing we don't know what. Sir Arthur Eddington commenting on the quantum mechanical view of the electron.
• Thirty-one years ago, Dick Feynman told me about his 'sum over histories' version of quantum mechanics. "The electron does anything it likes," he said. "It just goes in any direction at any speed, ...however it likes, and then you add up the amplitudes and it gives you the wave function." I said to him, "You're crazy." But he wasn't. Freeman Dyson
• I still believe in the possibility of a model of reality, that is to say, of a theory, which represents things themselves and not merely the probability of their occurrence. Einstein
1.97: Quantum Dynamics - One Step at a Time
Integration of the time‐dependent Schrödinger equation
$i \hbar \frac{d|\Psi(t)\rangle}{d t}=H(\hat{p}, \hat{x})|\Psi(t)\rangle \nonumber$
yields
$|\Psi(t+\tau)\rangle=\exp \left(\frac{-i H(\hat{p}, \hat{x}) \tau}{\hbar}\right)|\Psi(t)\rangle=\exp \left(\frac{-i T(\hat{p}) \tau}{\hbar}\right) \exp \left(\frac{-i V(\hat{x}) \tau}{\hbar}\right)|\Psi(t)\rangle \nonumber$
The purpose of this tutorial is to show how this single time‐step calculation is implemented. In the above equation, T ($\hat{p}$) and V ($\hat{x}$) are the kinetic and potential energy operators, and τ is the time increment. However, it is not immediately obvious how the exponential time‐evolution operator operates on the wave function. For example, in the coordinate representation $\hat{x}$ is a multiplicative operator, but $\hat{p}$ is a differential operator (see any introductory quantum or physical chemistry textbook).
The tactic that will be employed is to carry out the potential energy operation in coordinate space where the position operator is multiplicative and Fourier transform the result to momentum space where the momentum operator is multiplicative. After operation by the kinetic energy operator the result is Fourier transformed back to coordinate space and displayed.
This procedure requires the following mathematical tools.
The coordinate‐space eigenvalue equation and completeness relation:
$\hat{x}|x\rangle=|x\rangle x \qquad \int|x\rangle\langle x| d x=1 \nonumber$
The momentum‐space eigenvalue equation and completeness relation:
$\hat{p}|p\rangle=|p\rangle p \qquad \int|p\rangle\langle p| d p=1 \nonumber$
The Fourier transforms between position and momentum:
$\langle x | p\rangle=\langle p | x\rangle^{*}=\frac{1}{\sqrt{2 \pi}} \exp \left(\frac{i p x}{\hbar}\right) \nonumber$
With these preliminaries out of the way the first step is to insert the coordinate completeness relation between the exponential potential energy operator and the wave function. In other words, we are going to work initially in coordinate space.
$|\Psi(t+\tau)\rangle=\int \exp \left(\frac{-i T(\hat{p}) \tau}{\hbar}\right) \exp \left(\frac{-i V(\hat{x}) \tau}{\hbar}\right)|x\rangle\langle x | \Psi(t)\rangle d x \nonumber$
The next step is to carry out a series expansion on the potential energy operator. To see what happens we will assume, for the sake of mathematical clarity, the potential energy operator is simply $\hat{x}$. Here we make use of the coordinate space eigenvalue equation to take the “hat” off the position operator.
$\exp \left(-\frac{i \hat{x} \tau}{\hbar}\right)|x\rangle=\left(1-\frac{i \hat{x} \tau}{\hbar}-\frac{\hat{x}^{2} \tau^{2}}{2 \hbar^{2}}+\cdots\right)|x\rangle=|x\rangle\left(1-\frac{i x \tau}{\hbar}-\frac{x^{2} \tau^{2}}{2 \hbar^{2}}+\cdots\right)=|x\rangle \exp \left(-\frac{i x \tau}{\hbar}\right) \nonumber$
In general, for an operator operating in its “eigen space” we have,
$\exp \left(-\frac{i \hat{o} \tau}{\hbar}\right)|o\rangle=|o\rangle \exp \left(-\frac{i o \tau}{\hbar}\right) \nonumber$
The first two steps have brought us to this point.
$|\Psi(t+\tau)\rangle=\int \exp \left(\frac{-i T(\hat{p}) \tau}{\hbar}\right)|x\rangle \exp \left(\frac{-i V(x) \tau}{\hbar}\right)\langle x | \Psi(t)\rangle d x \nonumber$
Now we Fourier transform to momentum space by inserting the momentum completeness relation between the exponential kinetic energy operator and $|x\rangle$.
$|\Psi(t+\tau)\rangle=\iint \exp \left(\frac{-i T(\hat{p}) \tau}{\hbar}\right)|p\rangle\langle p | x\rangle \exp \left(\frac{-i V(x) \tau}{\hbar}\right)\langle x | \Psi(t)\rangle d x d p \nonumber$
The procedure used for the potential energy operator is repeated for kinetic energy using a series expansion and the momentum eigenvalue equation.
$|\Psi(t+\tau)\rangle=\iint|p\rangle \exp \left(\frac{-i T(p) \tau}{\hbar}\right)\langle p | x\rangle \exp \left(\frac{-i V(x) \tau}{\hbar}\right)\langle x | \Psi(t)\rangle d x d p \nonumber$
This result is now projected back to the coordinate representation by employing a final Fourier transform.
$\left\langle x^{\prime} | \Psi(t+\tau)\right\rangle=\iint\left\langle x^{\prime} | p\right\rangle \exp \left(\frac{-i T(p) \tau}{\hbar}\right)\langle p | x\rangle \exp \left(\frac{-i V(x) \tau}{\hbar}\right)\langle x | \Psi(t)\rangle d x d p \nonumber$
The last steps before actual calculation are to insert the mathematical expressions for the Fourier transforms (see above) and the kinetic and potential energy operators. In this exercise a harmonic oscillator potential will be used.
$\left\langle x^{\prime} | \Psi(t+\tau)\right\rangle=\iint \frac{1}{\sqrt{2 \pi}} \exp \left(\frac{i p x^{\prime}}{\hbar}\right) \exp \left(\frac{-i p^{2} \tau}{2 m \hbar}\right) \frac{1}{\sqrt{2 \pi}} \exp \left(-\frac{i p x}{\hbar}\right) \exp \left(\frac{-i k x^{2} \tau}{2 \hbar}\right)\langle x | \Psi(t)\rangle d x d p \nonumber$
This algorithm advances the wave function in time from t to t+τ. It is only valid for one short time increment because the kinetic and potential energy operators do not commute (see reference 2, page 336 for further detail). For examples of accurate algorithms for the continued time‐evolution of the wave function consult references 1 and 2.
The algorithm is now carried out in atomic units (h = 2$\pi$) in the Mathcad programming environment. In addition, mass and the force constant will be set to unity. The limits of integration are ± 4 atomic units for both position and momentum. A Gaussian initial wave function is assumed.
$\Psi_{\mathrm{i}}(\mathrm{x}) :=\left(\frac{2}{\pi}\right)^{\frac{1}{4}} \cdot \exp \left[-(\mathrm{x}+1)^{2}\right] \nonumber$
$\Psi_{\mathrm{f}}\left(\mathrm{x}^{\prime}\right) :=\frac{1}{2 \cdot \pi} \cdot \int_{-4}^{4} \int_{-4}^{4} \exp \left(\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}^{\prime}\right) \cdot \exp \left(\frac{-\mathrm{i} \cdot \mathrm{p}^{2} \cdot \tau}{2}\right) \cdot\left[\exp (-\mathrm{i} \cdot \mathrm{p} \cdot \mathrm{x}) \cdot \exp \left(\frac{-\mathrm{i} \cdot \mathrm{x}^{2} \cdot \tau}{2}\right) \cdot \Psi_{\mathrm{i}}(\mathrm{x})\right] \mathrm{d} \mathrm{x} \mathrm{d} \mathrm{p} \nonumber$
$\tau :=0.5 \qquad x :=-3,-2.9 \ldots 3 \nonumber$
The initial wave function centered at x = ‐1 moves to the right under the influence of the harmonic oscillator potential as expected.
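A direct numerical evaluation of this single-step double integral is sketched below using simple Riemann sums on the same ±4 a.u. grids (the grid sizes, names, and the x' range are my choices). The peak of $|\Psi_f|^2$ is no longer at x = -1, consistent with the statement above.

```python
# Hedged sketch: one split-operator time step for the Gaussian packet in the harmonic well.
import numpy as np

tau = 0.5
x  = np.linspace(-4, 4, 401)          # integration grid for x
p  = np.linspace(-4, 4, 401)          # integration grid for p
xp = np.linspace(-3, 3, 61)           # x' values at which the propagated packet is evaluated
dx, dp = x[1] - x[0], p[1] - p[0]

psi_i = (2 / np.pi)**0.25 * np.exp(-(x + 1)**2)    # initial Gaussian centered at x = -1

# inner x-integral: apply exp(-i*V*tau) in coordinate space and transform to momentum space
inner = (np.exp(-1j * np.outer(p, x)) * np.exp(-1j * x**2 * tau / 2) * psi_i).sum(axis=1) * dx

# outer p-integral: apply exp(-i*T*tau) and transform back to coordinate space
psi_f = (np.exp(1j * np.outer(xp, p)) * np.exp(-1j * p**2 * tau / 2)
         * inner).sum(axis=1) * dp / (2 * np.pi)

print(xp[np.argmax(np.abs(psi_f)**2)])   # the peak has moved to the right of x = -1
```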
1.98: Quantum Mechanical Pressure
Quantum mechanics is based on the concept of wave-particle duality, which for massive particles is expressed simply and succinctly by the de Broglie wave equation.
$\lambda = \dfrac{h}{mv}=\dfrac{h}{p} \label{1}$
On the left side is the wave property, $\lambda$, and on the right the particle property, momentum. These incompatible concepts are united in a reciprocal relationship mediated by the ubiquitous Planck’s constant. Using de Broglie’s equation in the classical expression for kinetic energy, $T$ converts it to its quantum mechanical equivalent.
$T = \dfrac{p^2}{2m}=\dfrac{h^2}{2m\lambda^2} \label{2}$
Because objects with wave-like properties are subject to interference phenomena, quantum effects emerge when they are confined by some restricting potential energy function. For example, to avoid self-interference, a particle in an infinite one-dimensional square-well potential (PIB, particle in a box) of width $a$ must form standing waves. The required restriction on the allowed wavelengths,
$\lambda=\dfrac{2a}{n} \;\;\; n=1,2,... \label{3}$
quantizes the kinetic energy.
$T(n)=\dfrac{n^2h^2}{8ma^2} \label{4}$
In addition to providing a simple explanation for the origin of energy quantization, the PIB model shows that reducing the size of the box increases the kinetic energy dramatically. This “repulsive” character of quantum mechanical kinetic energy is the ultimate basis for the stability of matter. It also provides, as we see now, a quantum interpretation for gas pressure. To show this we will consider a particle in the ground state of a three-dimensional box ($n_x=n_y=n_z=1$) of width $a$ and volume $a^3$. Its kinetic energy is,
$T=\dfrac{3h^2}{8ma^2}=\dfrac{3h^2}{8mV^{2/3}}=\dfrac{A}{V^{2/3}} \label{5}$
According to thermodynamics, pressure is the negative of the derivative of energy with respect to volume.
$P = -\dfrac{dT}{dV}=\dfrac{2}{3}\dfrac{A}{V^{5/3}} \label{6}$
Using Equation \ref{5} to eliminate $A$ from Equation \ref{6} yields,
$P=\dfrac{2}{3} \dfrac{T}{V} \label{7}$
This result has the same form as that obtained by the kinetic theory of gases for an individual gas molecule.
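A short symbolic check of Equations \ref{5} through \ref{7} is given below using SymPy; it is my own verification, not part of the original derivation.

```python
# Hedged sketch: verify that P = -dT/dV equals (2/3) T/V for T = A V^(-2/3).
import sympy as sp

A, V = sp.symbols('A V', positive=True)
T = A / V**sp.Rational(2, 3)                        # Eq. (5): ground-state kinetic energy
P = -sp.diff(T, V)                                  # Eq. (6): pressure as -dT/dV
print(sp.simplify(P - sp.Rational(2, 3) * T / V))   # prints 0, confirming Eq. (7)
```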
1.99: Visualizing the Difference Between a Superposition and a Mixture
The superposition principle, as Feynman said, is at the heart of quantum mechanics. While its mathematical expression is simple, its true meaning is difficult to grasp. For example, given a linear superposition (not normalized) of two states,
$|\Psi\rangle=|\phi_{1}\rangle+\left|\phi_{2}\right\rangle \nonumber$
one might assume that it represents a mixture of $\phi_{1}$ and $\phi_{2}$. In other words, half of the quons [1] are in state $\phi_{1}$ and half in $\phi_{2}$. However, the correct quantum mechanical interpretation of this equation is that the system represented by $\Psi$ is simultaneously in the states $\phi_{1}$ and $\phi_{2}$, properly weighted.
A mixture, half $\phi_{1}$ and half $\phi_{2}$, or any other ratio, cannot be represented by a wavefunction. It requires a density operator, which is a more general quantum mechanical construct that can be used to represent both pure states (superpositions) and mixtures, as shown below.
$\hat{\rho}_{\text{pure}}=|\Psi\rangle\langle\Psi|\qquad \hat{\rho}_{\text{mixed}}=\sum_{i} p_{i}| \Psi_{i}\rangle\langle\Psi_{i}| \nonumber$
In the equation on the right, $p_{i}$ is the fraction of the mixture in the state $\Psi_{i}$.
To illustrate how these equations distinguish between a mixture and a superposition, we will consider a superposition and a mixture of equally weighted gaussian functions representing one-dimensional wave packets. The normalization constants are omitted in the interest of mathematical clarity. The gaussians are centered at x = $\pm$ 4.
$\phi_{1}(x) :=\exp \left[-(x+4)^{2}\right] \qquad \phi_{2}(x) :=\exp \left[-(x-4)^{2}\right] \nonumber$
To visualize how the density operator discriminates between a superposition and a mixture, we calculate its matrix elements in coordinate space for the 50-50 superposition and mixture of $\phi_{1}$ and $\phi_{2}$. The superposition is considered first.
$\Psi(x) :=\phi_{1}(x)+\phi_{2}(x) \nonumber$
The matrix elements of this pure state are calculated as follows.
$\rho_{\text {pure}}=\left\langle x|\hat{\rho}_{\text {pure}}| x^{\prime}\right\rangle=\langle x | \Psi\rangle\left\langle\Psi | x^{\prime}\right\rangle \nonumber$
Looking at the right side we see that the matrix elements are the product of the probability amplitudes of a quon in state $\Psi$ being at x and xʹ. Next we display the density matrix graphically.
$\operatorname{DensityMatrixPure}\left(x, x^{\prime}\right) :=\Psi(x) \cdot \Psi\left(x^{\prime}\right) \nonumber$
$x_{0} :=8 \qquad N :=80 \qquad i :=0 \ldots N \ \mathrm{x}_{\mathrm{i}} :=-\mathrm{x}_{0}+\frac{2 \cdot \mathrm{x}_{0} \cdot \mathrm{i}}{\mathrm{N}} \qquad \mathrm{j} :=0 \ldots \mathrm{N} \qquad \mathrm{x}_{\mathrm{j}}^{\prime} :=-\mathrm{x}_{0}+\frac{2 \cdot \mathrm{x}_{0} \cdot \mathrm{j}}{\mathrm{N}} \nonumber$
$\operatorname{DensityMatrixPure}_{\mathrm{i},\mathrm{j}} : = \operatorname{DensityMatrixPure}\left(x_{i}, x_{j}^{\prime}\right) \nonumber$
The presence of off-diagonal elements in this density matrix is the signature of a quantum mechanical superposition. For example, from the quantum mechanical perspective bi-location is possible.
Now we turn our attention to the density matrix of a mixture of gaussian states.
$\rho_{\operatorname{mix}}=\left\langle x\left|\hat{\rho}_{\operatorname{mix}}\right| x^{\prime}\right\rangle=\sum_{i} p_{i}\left\langle x | \phi_{i}\right\rangle\left\langle\phi_{i} | x^{\prime}\right\rangle=\frac{1}{2}\left\langle x | \phi_{1}\right\rangle\left\langle\phi_{1} | x^{\prime}\right\rangle+\frac{1}{2}\left\langle x | \phi_{2}\right\rangle\left\langle\phi_{2} | x^{\prime}\right\rangle \nonumber$
$\operatorname{DensityMatrixMix}(\mathrm{x}, \mathrm{x'}) :=\frac{\phi_{1}(\mathrm{x}) \cdot \phi_{1}(\mathrm{x'})+\phi_{2}(\mathrm{x}) \cdot \phi_{2}(\mathrm{x'})}{2} \nonumber$
$\operatorname{DensityMatrixMix}_{\mathrm{i},\mathrm{j}} : = \operatorname{DensityMatrixMix}\left(x_{i}, x_{j}^{\prime}\right) \nonumber$
The obvious difference between a superposition and a mixture is the absence of off-diagonal elements, $\phi_{1}(\mathrm{x}) \cdot \phi_{2}\left(\mathrm{x}^{\prime}\right)+\phi_{2}(\mathrm{x}) \cdot \phi_{1}\left(\mathrm{x}^{\prime}\right)$, in the mixed state. This indicates the mixture is in a definite but unknown state; it is an example of classical ignorance.
An equivalent way to describe the difference between a superposition and a mixture is to say that to calculate the probability of measurement outcomes for a superposition you add the probability amplitudes and square the sum, while for a mixture you square the individual probability amplitudes and sum the squares.
1. Nick Herbert (Quantum Reality, page 64) suggested "quon" be used to stand for a generic quantum object: "A quon is any entity, no matter how immense, that exhibits both wave and particle aspects in the peculiar quantum manner."
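The contrast between the two density matrices can also be seen numerically. The sketch below builds both matrices on a coordinate grid and compares an off-diagonal element near (x, x') = (-4, +4); the grid and names are illustrative assumptions, not the original Mathcad worksheet.

```python
# Hedged sketch: off-diagonal coherence survives in the pure state but not in the mixture.
import numpy as np

x = np.linspace(-8, 8, 81)
phi1 = np.exp(-(x + 4)**2)
phi2 = np.exp(-(x - 4)**2)
psi = phi1 + phi2                                   # unnormalized superposition

rho_pure = np.outer(psi, psi)                       # <x|Psi><Psi|x'> on the grid
rho_mix  = 0.5 * (np.outer(phi1, phi1) + np.outer(phi2, phi2))

i, j = np.argmin(np.abs(x + 4)), np.argmin(np.abs(x - 4))
print(rho_pure[i, j], rho_mix[i, j])                # ~1 for the superposition, ~0 for the mixture
```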
Thumbnail: Depiction of a hydrogen atom with size of central proton shown, and the atomic diameter shown as about twice the Bohr model radius (image not to scale). (Public Domain; Bensaccount).
02: Atomic Structure
$\lambda = \frac{h}{mv}$ de Broglie's hypothesis that matter has wave-like properties.
$n \lambda = 2 \pi r$ The consequence of de Broglie's hypothesis; an integral number of wavelengths must fit within the circumference of the orbit. This introduces the quantum number which can have values 1, 2, 3...
$mv = \frac{nh}{2 \pi r}$ Substitution of the first equation into the second equation reveals that linear momentum is quantized.
$T = \frac{1}{2} mv^2 = \frac{n^2 h^2}{8 \pi^2 m_e r^2}$ If momentum is quantized, so is kinetic energy.
$E = T + V = \frac{n^2 h^2}{8 \pi^2 m_e r^2} - \frac{e^2}{4 \pi \varepsilon_0 r}$ Which means that total energy is quantized.
Below the ground state energy and orbit radius of the electron in the hydrogen atom is found by plotting the energy as a function of the orbital radius. The ground state is the minimum in the curve.
Fundamental constants: electron charge, electron mass, Planck's constant, vacuum permittivity.
$\begin{matrix} e = 1.6021777 (10)^{-19} \text{coul} & m_e= 9.10939 (10)^{-31} \text{kg} \ h = 6.62608 (10)^{-34} \text{joule sec} & \varepsilon_0 = 8.85419 (10)^{-12} \frac{ \text{coul}^2}{ \text{joule m}} \end{matrix} \nonumber$
Quantum number and conversion factor between meters and picometers and joules and atto joules.
$\begin{matrix} n = 1 & pm = 10^{-12} m & \text{ajoule} = 10^{-18} \text{joule} \end{matrix} \nonumber$
$\begin{matrix} r = 20 pm,~20.5 pm .. 500 pm & T(r) = \frac{n^2 h^2}{8 \pi^2 m_e r^2} & V(r) = - \frac{e^2}{4 \pi \varepsilon_0 r} & E(r) = T(r) + V(r) \end{matrix} \nonumber$
This figure shows that atomic stability involves a balance between potential and kinetic energy. The electron is drawn toward the nucleus by the attractive potential energy interaction ($\sim -1/R$), but is prevented from spiraling into the nucleus by the extremely large kinetic energy ($\sim 1/R^2$) associated with small orbits.
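Instead of reading the minimum off the plot, E(r) can be minimized numerically. The sketch below does this with SciPy's minimize_scalar using the constants listed above; the bracketing interval and names are my own choices, not part of the original tutorial.

```python
# Hedged sketch: locate the minimum of E(r) = T(r) + V(r) for the n = 1 state.
import numpy as np
from scipy.optimize import minimize_scalar

e, m_e = 1.6021777e-19, 9.10939e-31        # C, kg
h, eps0 = 6.62608e-34, 8.85419e-12         # J s, C^2 J^-1 m^-1
n = 1

def E(r):   # r in meters
    return n**2 * h**2 / (8 * np.pi**2 * m_e * r**2) - e**2 / (4 * np.pi * eps0 * r)

res = minimize_scalar(E, bracket=(20e-12, 60e-12, 500e-12))
print(res.x / 1e-12)    # ~ 52.9 pm, the Bohr radius
print(res.fun / 1e-18)  # ~ -2.18 aJ (about -13.6 eV), the ground-state energy
```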
Prepared by Frank Rioux.
2.02: The de Broglie-Bohr Model for the Hydrogen Atom - Version 3
$\lambda = \frac{h}{mv}$ de Broglie's hypothesis that matter has wave-like properties.
$n \lambda = 2 \pi r$ The consequence of de Broglie's hypothesis; an integral number of wavelengths must fit within the circumference of the orbit. This introduces the quantum number, n, which can have values 1, 2, 3...
$mv = \frac{nh}{2 \pi r}$ Substitution of the first equation into the second equation reveals that linear momentum is quantized.
$T = \frac{1}{2} mv^2 = \frac{n^2h^2}{8 \pi^2 m_e r^2}$ If momentum is quantized, so is kinetic energy.
$E = T + V = \frac{n^2h^2}{8 \pi^2 m_e r^2} - \frac{q^2}{4 \pi \varepsilon_0 r}$ Which means that total energy is quantized.
Below the ground state energy and orbit radius of the electron in the hydrogen atom is found by plotting the energy as a function of the orbital radius. The ground state is the minimum in the curve.
Fundamental constants: electron charge, electron mass, Planck's constant, vacuum permittivity.
$\begin{matrix} q = 1.6021777 (10)^{-19} \text{coul} & m_e = 9.10939 (10)^{-31} \text{kg} \ h = 6.62608 (10)^{-34} \text{joule sec} & \varepsilon_0 = 8.85419 (10)^{-12} \frac{ \text{coul}^2}{ \text{joule m}} \end{matrix} \nonumber$
Conversion factors between meters and picometers and joules and atto joules.
$\begin{matrix} pm = 10^{-12} m & \text{ajoule} = 10^{-18} \text{joule} & eV = 1.602177 (10)^{-19} \text{joule} \end{matrix} \nonumber$
Setting the first derivative of the energy with respect to r equal to zero, yields the optimum value of r.
$\begin{matrix} \frac{d}{dr} \left( \frac{n^2h^2}{8 \pi^2 m_e r^2} - \frac{q^2}{4 \pi \varepsilon_0 r} \right) = 0 & \text{has solution(s)} & n^2 h^2 \frac{ \varepsilon_0}{q^2 \pi m_e} \end{matrix} \nonumber$
Substitution of this value of r back into the energy expression gives the energy of the hydrogen atom in terms of the quantum number, n, and the fundamental constants.
$\begin{matrix} E = \frac{n^2h^2}{8 \pi^2 m_e r^2} - \frac{q^2}{4 \pi \varepsilon_0 r} & \text{by substitution, yields} & E = \frac{-1}{8 n^2h^2} \frac{m_e}{ \varepsilon_0^2} q^4 \end{matrix} \nonumber$
Calculate the allowed energy levels for the hydrogen atom: n = 1 .. 5
$\begin{matrix} E_n = \frac{-1}{8 n^2 h^2} \frac{m_e}{ \varepsilon_0^2} q^4 & \frac{E_n}{ \text{ajoule}} = \begin{pmatrix} -2.18 \ -0.545 \ -0.242 \ -0.136 \ -0.087 \end{pmatrix} & \frac{E_n}{eV} = \begin{pmatrix} -13.606 \ -3.401 \ -1.512 \ -0.85 \ -0.544 \end{pmatrix} \end{matrix} \nonumber$
Prepared by Frank Rioux.
2.03: The de Broglie-Bohr Model for the Hydrogen Atom - Version 3
The 1913 Bohr model of the hydrogen atom was replaced by Schrödinger's wave mechanical model in 1926. However, Bohr's model is still profitably taught today because of its conceptual and mathematical simplicity, and because it introduced a number of key quantum mechanical ideas such as the quantum number, quantization of observable properties, quantum jump and stationary state.
Bohr calculated the manifold of allowed electron energies by balancing the mechanical forces (centripetal and electron-nucleus) on an electron executing a circular orbit of radius R about the nucleus, and then arbitrarily quantizing its angular momentum. Finally by fiat he declared that the electron was in a non-radiating stationary state because an orbiting (accelerating) charge radiates energy and will collapse into the oppositely charged nucleus.
In 1924 de Broglie postulated wave-particle duality for the electron and thereby provided the opportunity to take some of the arbitrariness out of Bohr's model. An electron possessing wave properties is subject to constructive and destructive interference. As will be shown this leads naturally to quantization of electron momentum and kinetic energy, and consequently a manifold of allowed energy states for the electron relative to the nucleus.
The de Broglie-Bohr model of the hydrogen atom presented here treats the electron as a particle on a ring with wave-like properties. The electron's kinetic (confinement) energy is calculated using its momentum eigenfunction in the coordinate representation. This equation is obtained by substituting the de Broglie wave-particle equation (λ = h/p) into Euler's equation, the most general one-dimensional wavefunction.
$\begin{matrix} \langle x | \lambda \rangle = \frac{1}{ \sqrt{2 \pi}} \text{exp} \left( \frac{i2 \pi x}{ \lambda} \right) & \text{plus} & \lambda = \frac{h}{p} & \text{yields} & \langle x | p \rangle = \frac{1}{ \sqrt{2 \pi}} \text{exp} \left( \frac{ipx}{ \hbar} \right) \end{matrix} \nonumber$
In order to avoid destructive interference, the electron's momentum wavefunction must be single-valued, which in this application requires a circular boundary condition: the wavefunction must match at points separated by one circumference, 2 πr. The mathematical consequence of satisfying this requirement leads to quantized momentum and the emergence of a quantum number, n.
$\begin{matrix} & \langle x + 2 \pi r | p \rangle = \langle x | p \rangle & \ \text{exp} \left( \frac{ip(x+2 \pi r)}{ \hbar} \right) = \text{exp} \left( \frac{ipx}{ \hbar} \right) & & \text{exp} \left( \frac{ipx}{ \hbar} \right) \text{exp} \left( \frac{i2 \pi p r}{ \hbar} \right) = \text{exp} \left( \frac{ipx}{ \hbar} \right) \ & \text{exp} \left( \frac{i2 \pi p r}{ \hbar} \right) = 1 \end{matrix} \nonumber$
The last equation succinctly states the boundary condition requirements and is satisfied when,
$\begin{matrix} \frac{pr}{ \hbar} = \frac{mvr}{ \hbar} = n & \text{where} & n = 0,~ \pm 1,~ \pm 2 ... \end{matrix} \nonumber$
This restriction quantizes the angular momentum and allows the conversion of classical kinetic energy into its quantum mechanical form. In this model, the +/- values suggest clockwise and counter-clockwise angular momentum.
$\begin{matrix} T_n = \frac{p^2}{2m} = \frac{n^2 \hbar^2}{2m r^2} & \text{where} & n = 0,~ \pm 1,~ \pm 2 ... \end{matrix} \nonumber$
The wavefunctions associated with these quantized kinetic energies are obtained by using the quantum condition of equation 3 to eliminate momentum in equation 1, recognizing that in radians θ = x/R. (x is the linear distance on the ring from some reference point.)
$\Psi_n ( \theta ) = \langle \theta | n \rangle = \frac{1}{ \sqrt{2 \pi}} \text{exp} ( in \theta ) \nonumber$
The allowed electron wavefunctions are displayed graphically as follows.
$\begin{matrix} \text{Quantum number:} & n = 3 \ \text{numpts = 200} & i = 0 ... \text{numpts} & j = 0 ... \text{numpts} & \theta_i = \frac{2 \pi i}{ \text{numpts}} \end{matrix} \nonumber$
The quantum mechanical interpretation of these "Bohr orbits" is that they are stationary states. In spite of the fact that we use the expression kinetic energy, which implies electron motion, there is no motion. The electron occupies the orbit as a particle-wave, it is not orbiting the nucleus. If it was orbiting in a classical sense it would radiate energy and quickly collapse into the nucleus. Clearly the stability of matter requires the quantum mechanical version of kinetic energy.
We now place a proton at the center of the ring and calculate the potential energy using Coulombs law.
$V = - \frac{e^2}{4 \pi \varepsilon_0 r} \nonumber$
Adding the kinetic and potential energy terms yields the total energy. However, the n = 0 state is discarded because it has zero kinetic energy and therefore does not represent a stable atomic state.
$E = T + V = \frac{n^2 h^2}{8 \pi^2 m_e r^2} - \frac{e^2}{4 \pi \varepsilon_0 r} \nonumber$
The ground state energy and orbit radius of the electron in the hydrogen atom is found by plotting the energy as a function of the orbital radius. The ground state is the minimum in the total energy curve. Naturally calculus can be used to obtain the same information by minimizing the energy with respect to the orbit radius. However, the graphical method has the virtue of illuminating the issue of atomic stability.
Fundamental constants: electron charge, electron mass, Planck's constant, vacuum permittivity.
$\begin{matrix} e = 1.6021777 (10)^{-19} \text{coul} & m_e = 9.10939 (10)^{-31} \text{kg} \ h = 6.62608 (10)^{-34} \text{joule sec} & \varepsilon_0 = 8.85419 (10)^{-12} \frac{ \text{coul}^2}{ \text{joule m}} \end{matrix} \nonumber$
Quantum number and conversion factor between meters and picometers and joules and attojoules.
$\begin{matrix} n = 1 & pm = (10)^{-12} m & \text{ajoule} = 10^{-18} \text{joule} \end{matrix} \nonumber$
$\begin{matrix} r = 20 pm,~20.5 pm .. 500 pm & T(r) = \frac{n^2 h^2}{8 \pi^2 m_e r^2} & V(r) = - \frac{e^2}{4 \pi \varepsilon_0 r} & E(r) = T(r) + V(r) \end{matrix} \nonumber$
This figure shows that atomic stability involves a balance between potential and kinetic energy. The electron is drawn toward the nucleus by the attractive potential energy interaction ($\sim -1/R$), but is prevented from collapsing into the nucleus by the extremely large kinetic energy ($\sim 1/R^2$) associated with small orbits.
As shown below, the graphical approach can also be used to find the electronic excited states.
$\begin{matrix} n = 2 & T(r) = \frac{n^2 h^2}{8 \pi^2 m_e r^2} & V(r) = - \frac{e^2}{4 \pi \varepsilon_0 r} & E(r) = T(r) + V(r) \end{matrix} \nonumber$
However, it is much easier to generate the manifold of allowed electron energies by minimizing the energy with respect to the orbit radius. This procedure yields,
$\begin{matrix} E_n = - \frac{m_e e^4}{2 \left( 4 \pi \varepsilon_0 \right)^2 \hbar^2} \frac{1}{n^2} & \text{and} & r_n = \frac{4 \pi \varepsilon_0 \hbar^2}{m_e e^2} n^2 \end{matrix} \nonumber$
2.04: A de Broglie-Bohr Model for Positronium
Positronium is a metastable bound state consisting of an electron and its positron antiparticle. In other words it might be thought of as a hydrogen atom in which the proton is replaced by a positron. Naturally it decays quickly after formation due to electron-positron annihilation. However, it exists long enough for its ground state energy, $-0.25 E_h$, to be determined. The purpose of this tutorial is to calculate this value using the Bohr model for positronium shown below.
The electron occupies a circular orbit of radius R which has a positron at its center. Likewise the positron occupies a circular orbit of radius $R$ which has an electron at its center. Occupies has been emphasized to stress that there is no motion, no orbiting. Both particles are behaving as waves (this is the meaning of wave-particle duality) occupying the orbit. As waves they are subject to interference, and to avoid destructive interference the wavelength for the ground state is one orbit circumference.
$\lambda = 2\pi R \nonumber$
Introducing the de Broglie relationship between wavelength and momentum,
$\lambda=\dfrac{h}{p} \nonumber$
yields the following expression for momentum in atomic units (h = 2$\pi$).
$p = \frac{h}{2 \pi R} = \frac{1}{R} \nonumber$
In atomic units $m_e=m_p=1$. Therefore, the kinetic energy of each particle is,
$T=\dfrac{p^2}{2m}=\dfrac{1}{2R^2} \nonumber$
The total energy of positronium is the sum of electron and positron kinetic energies and their Coulombic potential energy.
\begin{align} E &=T_e +T_p + V_{ep} \[4pt] &= \dfrac{1}{2R^2} + \dfrac{1}{2R^2} - \dfrac{1}{R}=\dfrac{1}{R^2}-\dfrac{1}{R} \end{align} \nonumber
Energy minimization with respect to the electron-positron distance $R$ yields the following result.
$\dfrac{d}{dR} \left(\dfrac{1}{R^2}-\dfrac{1}{R}\right) = 0 \nonumber$
Solving gives $R = 2$. Substituting $R = 2$ back into the energy expression yields the ground state energy of $-0.25\ E_h$, in agreement with experiment.
Including the symbols for mass in the kinetic energy contributions facilitates the introduction of the concept of effective mass of the composite system.
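A short symbolic version of this minimization is sketched below with SymPy, as a check of the result quoted above (my own verification, not part of the original tutorial).

```python
# Hedged sketch: minimize E(R) = 1/R^2 - 1/R for positronium in atomic units.
import sympy as sp

R = sp.symbols('R', positive=True)
E = 1 / R**2 - 1 / R                     # total energy from the text
Ropt = sp.solve(sp.diff(E, R), R)[0]     # optimum electron-positron distance
print(Ropt, E.subs(R, Ropt))             # 2 and -1/4 hartree
```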
2.05: The de Broglie-Bohr Model for the Hydrogen Atom - Version 4
The 1913 Bohr model of the hydrogen atom was replaced by Schrödingerʹs wave mechanical model in 1926. However, Bohrʹs model is still profitably taught today because of its conceptual and mathematical simplicity, and because it introduced a number of key quantum mechanical ideas such as the quantum number, quantization of observable properties, quantum jump and stationary state.
Bohr calculated the manifold of allowed electron energies by balancing the mechanical forces (centripetal and electron-nucleus) on an electron executing a circular orbit of radius R about the nucleus, and then arbitrarily quantizing its angular momentum. Finally by fiat he declared that the electron was in a non-radiating stationary state because an orbiting (accelerating) charge radiates energy and will collapse into the oppositely charged nucleus.
In 1924 de Broglie postulated wave-particle duality for the electron and other massive particles, thereby providing the opportunity to remove some of the arbitrariness from Bohr's model. For example, an electron possessing wave properties is subject to constructive and destructive interference. As will be shown this leads naturally to quantization of electron momentum and kinetic energy, and consequently a manifold of allowed energy states for the electron relative to the nucleus. The de Broglie-Bohr model of the hydrogen atom presented here treats the electron as a particle on a ring with wave-like properties.
$\lambda = \frac{h}{m_e v} \nonumber$
de Broglie's hypothesis that matter has wave-like properties.
$n \lambda = 2 \pi r \nonumber$
The consequence of de Broglieʹs hypothesis; an integral number of wavelengths must fit within the circumference of the orbit. This introduces the quantum number which can have values 1,2,3,... The n = 4 electron state is shown below.
$m_e v = \frac{n h }{2 \pi r} \nonumber$
Substitution of the first equation into the second equation reveals that momentum is quantized.
$T = \frac{1}{2} m_e v^2 = \frac{n^2 h^2}{8 \pi^2 m_e r^2} \nonumber$
If momentum is quantized, so is kinetic energy.
$E = T + V = \frac{n^2 h^2}{8 \pi^2 m_e r^2} - \frac{e^2}{4 \pi \varepsilon_0 r} \nonumber$
Which means that total energy is quantized. The second term is the electron‐proton electrostatic potential energy.
The quantum mechanical interpretation of these ʺBohr orbitsʺ is that they are stationary states. In spite of the fact that we use the expression kinetic energy, which implies electron motion, there is no motion. The electron occupies the orbit as a particle‐wave, it is not orbiting the nucleus. If it was orbiting in a classical sense it would radiate energy and quickly collapse into the nucleus. Clearly the stability of matter requires the quantum mechanical version of kinetic energy.
The ground state energy and orbit radius of the electron in the hydrogen atom is found by plotting the energy as a function of the orbital radius. The ground state is the minimum in the total energy curve. Naturally calculus can be used to obtain the same information by minimizing the energy with respect to the orbit radius. However, the graphical method has the virtue of illuminating the issue of atomic stability.
Fundamental constants: electron charge, electron mass, Planck's constant, vacuum permittivity.
$\begin{matrix} e = 1.6021777 (10)^{-19} \text{coul} & m_e = 9.10939 (10)^{-31} \text{kg} \ h = 6.62608 (10)^{-34} \text{joule sec} & \varepsilon_0 = 8.85419 (10)^{-12} \frac{ \text{coul}^2}{ \text{joule m}} \end{matrix} \nonumber$
Quantum number and conversion factors between meters and picometers and joules and attojoules.
$\begin{matrix} n = 1 & pm = 10^{-12} m & \text{ajoule} = 10^{-18} \text{joule} \end{matrix} \nonumber$
$\begin{matrix} r = 20 pm,~20.5 pm .. 500 pm & T(r) = \frac{n^2 h^2}{8 \pi^2 m_e r^2} & V(r) = - \frac{e^2}{4 \pi \varepsilon_0 r} & E(r) = T(r) + V(r) \end{matrix} \nonumber$
This figure shows that atomic stability involves a balance between potential and kinetic energy. The electron is drawn toward the nucleus by the attractive potential energy interaction ($\sim -1/R$), but is prevented from collapsing into the nucleus by the extremely large kinetic energy ($\sim 1/R^2$) associated with small orbits.
As shown below, the graphical approach can also be used to find the electronic excited states.
$\begin{matrix} n = 2 & T(r) = \frac{n^2 h^2}{8 \pi^2 m_e r^2} & V(r) = - \frac{e^2}{4 \pi \varepsilon_0 r} & E(r) = T(r) + V(r) \end{matrix} \nonumber$
As mentioned earlier the manifold of allowed electron energies can also be obtained by minimizing the energy with respect to the orbit radius. This procedure yields,
$\begin{matrix} E_n = - \frac{m_e e^4}{2 \left(4 \pi \varepsilon_0 \right)^2 \hbar^2 } \frac{1}{n^2} & \text{and} & r_n = \frac{4 \pi \varepsilon_0 \hbar^2}{m_e e^2} n^2 \end{matrix} \nonumber$
2.06: The de Broglie-Bohr Model for a Hydrogen Atom Held Together by a Gravitational Interaction
$\lambda = \frac{h}{mv} \nonumber$
de Broglie's hypothesis that matter has wave-like properties.
$n \lambda = 2 \pi r \nonumber$
The consequence of de Broglie's hypothesis; an integral number of wavelengths must fit within the circumference of the orbit. This introduces the quantum number, n, which can have values 1,2,3,...
$mv = \frac{nh}{2 \pi r} \nonumber$
Substitution of the first equation into the second equation reveals that linear momentum is quantized.
$T = \frac{1}{2} m v^2 = \frac{n^2 h^2}{8 \pi^2 m_e r^2} \nonumber$
If momentum is quantized, so is kinetic energy.
$E = T + V = \frac{n^2 h^2}{8 \pi^2 m_e r^2} - \frac{G m_p m_e}{r} \nonumber$
Which means that total energy is quantized, where $- \frac{G m_p m_e}{r}$ is the gravitational potential energy interaction between a proton and an electron.
$\frac{d}{dr} \left( \frac{n^2h^2}{8 \pi^2 m_e r^2} - \frac{G m_p m_e}{r} \right) = 0 ~ \text{solve, r} \rightarrow \frac{h^2 n^2}{4 \pi^2 G m_e^2 m_p} \nonumber$
Minimization of the energy with respect to orbit radius yields the optimum values of r. This expression is substituted back into the energy expression below to find the allowed energies.
$E = \frac{n^2 h^2}{8 \pi^2 m_e r^2} - \frac{G m_p m_e}{r} \text{ substitute, r} = \frac{h^2 n^2}{4 \pi^2 G m_e^2 m_p} \rightarrow E = - \frac{2 \pi^2 G^2 m_e^3 m_p^2}{h^2 n^2} \nonumber$
$\begin{matrix} \text{Fundamental constants:} & m_p = 1.67262 (10)^{-27} \text{kg} & m_e = 9.10939 (10)^{-31} \text{kg} \\ ~ & h = 6.62608 (10)^{-34} \text{joule sec} & G = 6.67259 (10)^{-11} \frac{m^3}{ \text{kg s}^2} \end{matrix} \nonumber$
Energy:
$E(n) = - \frac{2 \pi^2 G^2 m_e^3 m_p^2}{h^2 n^2} \nonumber$
Orbit radius:
$r(n) = \frac{h^2 n^2}{4 \pi^2 G m_e^2 m_p} \nonumber$
Calculate the first four energy levels and orbit radii.
$\begin{matrix} n = 1 .. 4 & \frac{E(n)}{J} = \begin{pmatrix} -4.233 \times 10^{-97} \\ -1.058 \times 10^{-97} \\ -4.704 \times 10^{-98} \\ -2.646 \times 10^{-98} \end{pmatrix} & \frac{r(n)}{m} = \begin{pmatrix} 1.201 \times 10^{29} \\ 4.803 \times 10^{29} \\ 1.081 \times 10^{30} \\ 1.921 \times 10^{30} \end{pmatrix} \end{matrix} \nonumber$
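The same numbers can be reproduced with a few lines of Python. This is an illustrative sketch, not part of the original worksheet; it simply evaluates the energy and radius formulas given above.

```python
import numpy as np

mp, me = 1.67262e-27, 9.10939e-31       # kg
h, G = 6.62608e-34, 6.67259e-11         # J s, m^3 kg^-1 s^-2

def E(n): return -2 * np.pi**2 * G**2 * me**3 * mp**2 / (h**2 * n**2)   # joules
def r(n): return h**2 * n**2 / (4 * np.pi**2 * G * me**2 * mp)          # meters

for n in range(1, 5):
    print(f"n = {n}:  E = {E(n):.3e} J   r = {r(n):.3e} m")
# n = 1 gives E ≈ -4.23e-97 J and r ≈ 1.20e29 m, matching the vectors above
```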
Prepared by Frank Rioux.
Positronium is a metastable bound state consisting of an electron and its positron antiparticle. In other words it might be thought of as a hydrogen atom in which the proton is replaced by a positron. Naturally it decays quickly after formation due to electron-positron annihilation. However, it exists long enough for its ground state energy, -0.25 Eh, to be determined. The purpose of this tutorial is to calculate this value using the Bohr model for positronium shown below.
The electron occupies a circular orbit of radius R which has a positron at its center. Likewise the positron occupies a circular orbit of radius R which has an electron at its center. Occupies has been emphasized to stress that there is no motion, no orbiting. Both particles are behaving as waves (this is the meaning of wave-particle duality) occupying the orbit. As waves they are subject to interference, and to avoid destructive interference the wavelength for the ground state is one orbit circumference.
$\lambda = 2 \pi R \nonumber$
Introducing the de Broglie relationship between wavelength and momentum, $\lambda = \frac{h}{p}$, yields the following expression for momentum in atomic units (h = 2π).
$p = \frac{h}{2 \pi R} = \frac{1}{R} \nonumber$
In atomic units me = mp = 1. Therefore, the kinetic energy of each particle is,
$T = \frac{p^2}{2m} = \frac{1}{2 R^2} \nonumber$
The total energy of positronium is the sum of electron and positron kinetic energies and their coulombic potential energy.
$E = T_e + T_p + V_{ep} = \frac{1}{2R^2} + \frac{1}{2R^2} - \frac{1}{R} = \frac{1}{R^2} - \frac{1}{R} \nonumber$
Energy minimization with respect to the electron-positron distance R yields the following result.
$\frac{d}{dR} \left( \frac{1}{R^2} - \frac{1}{R} \right) = \text{0 solve, R} \rightarrow 2 \nonumber$
The optimum R value yields a ground state energy of -0.25 Eh, in agreement with experiment.
$E = \frac{1}{R^2} - \frac{1}{R} \text{ substitute, R = 2} \rightarrow E = - \frac{1}{4} \nonumber$
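The minimization above can be verified with a short SymPy sketch (illustrative only, not part of the original worksheet):

```python
import sympy as sp

R = sp.symbols('R', positive=True)
E = 1/R**2 - 1/R                        # total positronium energy in atomic units

R_opt = sp.solve(sp.diff(E, R), R)[0]   # energy minimization with respect to R
print(R_opt, E.subs(R, R_opt))          # -> 2, -1/4 (i.e. -0.25 Eh)
```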
Including the symbols for mass in the kinetic energy contributions facilitates the introduction of the concept of effective mass of the composite system.
$T = \frac{1}{2 m_e R^2} + \frac{1}{2 m_p R^2} = \frac{1}{2 R^2} \left( \frac{1}{m_e} + \frac{1}{m_p} \right) = \frac{1}{2R^2} \left( \frac{m_e + m_p}{m_e m_p} \right) = \frac{1}{2 \mu_{ep} R^2} \nonumber$
$\mu_{ep} = \frac{m_e m_p}{m_e + m_p} = \frac{(1)(1)}{1 + 1} = \frac{1}{2} \nonumber$
The positronium energy minimum can also be located graphically.
Plotting the kinetic and potential energy along with the total energy reveals that a ground state is achieved because, as R decreases below 2, the kinetic energy approaches positive infinity more quickly than the potential energy approaches negative infinity.
2.08: The Bohr Model for the Earth-Sun System
Data for the earth-sun system assuming a circular earth orbit:
$\begin{matrix} \text{Mass of the earth:} & Me = 5.974 (10)^{24} \text{kg} & \text{Mass of the sun:} & Ms = 1.989 (10)^{30} \text{kg} \\ \text{Earth orbit radius:} & r = 1.496 (10)^{11} m & \text{Gravitational constant:} & G = 6.674 (10)^{-11} \frac{ \text{N m}^2}{ \text{kg}^2} \\ \text{Planck's constant:} & h = 6.62608 (10)^{-34} \text{J s} \end{matrix} \nonumber$
Assuming the earth executes a circular orbit of radius r about the sun and has a deBroglie wavelength given by h/mv, yields a quantum mechanical kinetic energy for the earth which is the first term in the total energy expression below. The potential energy of the earth-sun interaction is well-known and is the second term in the total energy expression.
$\begin{matrix} E = \frac{n^2 h^2}{8 \pi^2 Me r^2} - \frac{G Me Ms}{r} & \text{where n = 1, 2, 3, ...} \end{matrix} \nonumber$
Setting the first derivative of the energy with respect to r equal to zero, yields the allowed values of r in terms of the quantum number, n.
$\begin{matrix} \frac{d}{dr} \left( \frac{n^2 h^2}{8 \pi^2 Me r^2} - \frac{G Me Ms}{r} \right) = 0 & \text{has solution(s)} & \frac{1}{4} n^2 \frac{h^2}{G \left[ Me^2 \left( Ms \pi^2 \right) \right]} \end{matrix} \nonumber$
Substitution of this value of r in the total energy expression yields the energy of the earth-sun system as a function of the quantum number, n, Planck's constant, the gravitational constant, and the masses of the earth and the sun.
$\begin{matrix} E = \frac{n^2 h^2}{8 \pi^2 Me r^2} - \frac{G Me Ms}{r} & \text{by substitution, yields} & E = \frac{-2}{n^2 h^2} \pi^2 Me^3 G^2 Ms^2 \end{matrix} \nonumber$
Given the radius of the earth's orbit listed above, calculate the earth's quantum number.
$\begin{matrix} r = \frac{1}{4} n^2 \frac{h^2}{G \left[ Me^2 \left( Ms \pi^2 \right) \right]} & \text{has solution(s)} & \begin{pmatrix} \frac{-2}{h} \sqrt{G} Me \sqrt{Ms} \pi \sqrt{r} \\ \frac{2}{h} \sqrt{G} Me \sqrt{Ms} \pi \sqrt{r} \end{pmatrix} = \begin{pmatrix} -2.524 \times 10^{74} \\ 2.524 \times 10^{74} \end{pmatrix} \end{matrix} \nonumber$
The positive root n is used to calculate the energy of the earth-sun system.
$\begin{matrix} n = 2.524 (10)^{74} & E = \frac{-2}{n^2 h^2} \pi^2 Me^3 G^2 Ms^2 & E = -2.65 \times 10^{33} \text{J} \end{matrix} \nonumber$
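As an illustrative cross-check (not part of the original worksheet), the quantum number and energy can be recomputed in a few lines of Python from the data listed above:

```python
import numpy as np

Me, Ms = 5.974e24, 1.989e30                  # kg
r, G, h = 1.496e11, 6.674e-11, 6.62608e-34   # m, N m^2 kg^-2, J s

n = 2 * np.pi * Me * np.sqrt(G * Ms * r) / h     # from r = n^2 h^2 / (4 pi^2 G Me^2 Ms)
E = -2 * np.pi**2 * Me**3 * G**2 * Ms**2 / (n**2 * h**2)
print(f"n = {n:.3e}   E = {E:.3e} J")            # n ≈ 2.52e74, E ≈ -2.65e33 J
```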
According to the virial theorem the classical expression for the energy of the earth-sun system with earth orbit radius r is half the potential energy. Note that this gives a value which is in agreement with the Bohr model for the earth-sun system. Is this a legitimate example of the correspondence principle?
$\begin{matrix} E = - \frac{G Me Ms}{2r} & E = -2.65 \times 10^{33} \text{J} \end{matrix} \nonumber$
*Johnson and Pedersen, Problems and Solutions in Quantum Chemistry and Physics, pages 26-27.
The 1913 Bohr model of the hydrogen atom was replaced by Schrödingerʹs wave mechanical model in 1926. However, Bohrʹs model is still profitably taught today because of its conceptual and mathematical simplicity, and because it introduced a number of key quantum mechanical ideas such as the quantum number, quantization of observable properties, quantum jump and stationary state. In addition it provided realistic values for such parameters as atomic and molecular size, electron ionization energy, and molecular bond energy.
In his ʺplanetaryʺ model of the hydrogen atom Bohr began with a Newtonian analysis of the electron executing a circular orbit of radius R about a stationary nucleus, and then arbitrarily quantized the electronʹs angular momentum. Finally, by fiat he declared that the electron was in a non‐radiating stationary state, because an orbiting (accelerating) charge radiates energy and would collapse into the oppositely charged nucleus.
In 1924 de Broglie postulated wave‐particle duality for the electron and other massive particles, thereby providing the opportunity to remove some of the arbitrariness from Bohrʹs model. For example, an electron possessing wave properties is subject to constructive and destructive interference. As will be shown this leads naturally to quantization of electron momentum and kinetic energy, and consequently to a stable ground state for the hydrogen atom.
The de Broglie‐Bohr model of the hydrogen atom presented here treats the electron as a particle on a ring with wave‐like properties. The key equation is wave‐particle duality as expressed by the de Broglie equation. The particle concept momentum and the wave concept λ are joined in a reciprocal relationship mediated by the ubiquitous Planckʹs constant.
$p = \frac{h}{ \lambda} \nonumber$
This equation will be used with the Bohr model of the hydrogen atom to explain atomic stability and to generate estimates of atomic size and electron binding energy in the atom.
In the de Broglie version of the Bohr hydrogen atom we say that the electron occupies a ring of radius R. It is not orbiting the nucleus, it is behaving as a stationary wave. In order to avoid self‐interference the following wavelength restriction must be obeyed for the ground state of the hydrogen atom.
$\lambda = 2 \pi R \nonumber$
When combined with the de Broglie equation it reveals the following restriction on the electron's particle property, linear momentum.
$p = \frac{h}{2 \pi R} \nonumber$
This means that there is also a restriction on the electron's kinetic energy. Use of this equation in the classical expression for kinetic energy yields the quantum mechanical kinetic energy or more accurately electron confinement energy.
$T = \frac{p^2}{2m} = \frac{h^2}{8 \pi^2 m R^2} = \frac{1}{2R^2} \nonumber$
In this equation we have moved from the classical definition of kinetic energy to the quantum mechanical version expressed on the right in atomic units.
$\frac{h}{2 \pi} = m = e = 4 \pi \varepsilon_0 = 1 \nonumber$
The electrostatic potential energy retains its classical definition in quantum mechanics.
$V = \frac{-e^2}{4 \pi \varepsilon_0 R} = \frac{-1}{R} \nonumber$
The total electron energy, $E_H (R) = T(R) + V(R)$, is now minimized with respect to the ring or orbit radius, the only variational parameter in the model. The total energy, and kinetic and potential energy, are also displayed as a function of ring radius.
$\begin{matrix} R = .5 & E_H (R) = \frac{1}{2R^2} - \frac{1}{R} & R = \text{Minimize} \left( E_H,~R \right) & R = 1.000 & E_H (R) = -0.500 \end{matrix} \nonumber$
From this simple model we learn that it is the wave nature of the electron that explains atomic stability. The electronʹs ring does not collapse into the nucleus because kinetic (confinement) energy goes to positive infinity ($\sim R^{-2}$) faster than potential energy goes to negative infinity ($\sim -R^{-1}$). This is seen very clearly in the graph. The ground state is due to the sharp increase in kinetic energy as the ring radius decreases. This is a quantum effect, a consequence of de Broglieʹs hypothesis that electrons have wave‐like properties. As Klaus Ruedenberg has written, ʺThere are no ground states in classical mechanics.ʺ
The minimization process above the figure provides the ground state ring radius and electron energy in atomic units, a0 and Eh, respectively. R = 1 a0 = 52.9 pm gives us the benchmark for atomic size. Tables of atomic and ionic radii carry entries ranging from approximately half this value to roughly five or six times it. The ground state (binding) energy, E = ‐0.5 Eh = ‐13.6 eV = ‐1312 kJ/mol, is the negative of the ionization energy. This value serves as a benchmark for how tightly electrons are held in atoms and molecules.
A more comprehensive treatment of the Bohr atom utilizing the restriction that an integral number of wavelengths must fit within the ring, nλ = 2πR, where n = 1, 2, 3, ... reveals a manifold of allowed energy states ($-0.5~E_h/n^2$) and the basis for Bohrʹs concept of the quantum jump which ʺexplainedʺ the hydrogen atom emission spectrum. Here for example is the n = 4 Bohr atom excited state.
Rudimentary estimates of some molecular parameters, the most important being bond energy and bond length, can be obtained using the following Bohr model for H2. The distance between the protons is D, the electron ring radius is R, and the bond axis is perpendicular to the plane of the ring.
There are eight contributions to the total molecular energy based on this model: electron kinetic energy (2), electron‐proton potential energy (4), proton‐proton potential energy (1) and electron‐electron potential energy (1).
$E_{H2} (R,~D) = \frac{1}{R^2} - \frac{4}{ \sqrt{R^2 + \left( \frac{D}{2} \right)^2}} + \frac{1}{D} + \frac{1}{2R} \nonumber$
Minimization of the energy with respect to ring radius and proton‐proton distance yields the following results.
$\begin{matrix} D = 2 & \begin{pmatrix} R \ D \end{pmatrix} = \text{Minimize} \left( E_{H2},~R,~D \right) & \begin{pmatrix} R \ D \end{pmatrix} = \begin{pmatrix} 0.953 \ 1.101 \end{pmatrix} & E_{H2} (R,~D) = -1.100 \end{matrix} \nonumber$
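A short SciPy sketch (illustrative, not part of the original worksheet) reproduces this two-parameter minimization; the bounds simply keep R and D positive.

```python
import numpy as np
from scipy.optimize import minimize

def E_H2(x):
    R, D = x   # electron ring radius and proton-proton distance, in a0
    return 1/R**2 - 4/np.sqrt(R**2 + (D/2)**2) + 1/D + 1/(2*R)

res = minimize(E_H2, x0=[1.0, 2.0], bounds=[(0.1, 5.0), (0.1, 5.0)])
R, D = res.x
print(f"R = {R:.3f} a0   D = {D:.3f} a0   E = {res.fun:.3f} Eh")
# expected: R ≈ 0.953, D ≈ 1.101, E ≈ -1.100, so the bond energy is about 0.100 Eh
```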
The H‐H bond energy is the key parameter provided by this analysis. We see that it predicts a stable molecule and that the energy released on the formation of H2 is 0.1 Eh or 263 kJ/mol, compared with the experimental value of 458 kJ/mol. The model predicts an H‐H bond length of 58 pm (D × 52.9 pm), compared to the literature value of 74 pm. These results are acceptable given the primitive character of the model.
$H + H = H_2 \nonumber$
$\begin{matrix} \Delta E_{bond} = E_{H2} (R,~D) - 2 E_H (1) & \Delta E_{bond} = -0.100 \end{matrix} \nonumber$
In addition to these estimates of molecular parameters, the model clearly shows that molecular stability depends on a balancing act between electron‐proton attraction and the ʺrepulsiveʺ character of electron kinetic energy. Just as in the atomic case, it is the $1/R^2$ dependence of kinetic (confinement) energy on ring radius that prevents molecular collapse under electron‐proton attraction. As the energy profile provided in the Appendix shows, the immediate cause of the molecular ground state is a rise in kinetic energy. Potential energy is still declining at this point and does not begin to rise until 0.55 a0, well after the ground state is reached at 1.10 a0.
Although the model is a relic from the early days of quantum theory it still has pedagogical value. Its mathematical simplicity clearly reveals the importance of the wave nature of matter, the foundational concept of quantum theory.
Two relatively recent appraisals of Bohrʹs models of atomic and molecular structure have appeared in Physics Today:
• ʺNiels Bohr between physics and chemistry,ʺ by Helge Kragh, May 2013, 36‐41.
• ʺBohrʹs molecular model, a century later,ʺ by Anatoly Svidzinsky, Marlan Scully, and Dudley Herschbach, January 2014, 33‐39.
Appendix
$\begin{matrix} R = .1 & \text{Energy} = -1 & \text{Given} & \text{Energy} = E_{H2} (R,~D) & \frac{d}{dR} E_{H2} (R,~D) = 0 & \text{Energy(D) = Find(R, Energy)} \end{matrix} \nonumber$
$\begin{matrix} D = .15,~.16 ..4 & T(D) = \frac{1}{ \left( \text{Energy(D)}_0 \right)^2} & V(D) = - \frac{4}{ \sqrt{ \left( \text{Energy(D)}_0 \right)^2 + \left( \frac{D}{2} \right)^2 }} + \frac{1}{D} + \frac{1}{2 \text{Energy (D)}_0} \end{matrix} \nonumber$
Electrons in atoms or molecules are characterized by their entire distributions, called wave functions or orbitals, rather than by instantaneous positions and velocities: an electron may be considered always to be, with appropriate probability, at all points of its distribution, which does not vary with time. (F. E. Harris)
For example, the hydrogen atom electron is in a stationary state which is a weighted superposition of all possible distances from the nucleus. The electron is not orbiting the nucleus; it does not execute a classical trajectory during its interaction with the nucleus.
From the quantum mechanical perspective, to measure the position of an electron is not to find out where it is, but to cause it to be somewhere. (Louisa Gilder)
2.11: Atomic Spectroscopy and the Correspondence Principle
Bohrʹs correspondence principle states that the predictions of classical and quantum mechanics agree in the limit of large quantum numbers (An Introduction to Quantum Physics, French and Taylor, p. 27). This principle can be illustrated using the Bohr model of the hydrogen atom, which is an ad hoc mixture of established classical and newly proposed quantum concepts.
On the basis of Rutherfordʹs nuclear model of the atom, Bohr envisioned the hydrogen atomʹs electron executing circular orbits around the proton with quantized angular momentum. This gave rise to a manifold of allowed electron orbits with discrete (as opposed to continuous) radii and energies. By fiat Bohr called these stationary states, because the orbiting (accelerating) electron did not radiate energy as required by classical electromagnetic principles.
Initial speculation, however, suggested that the observed line spectrum of the hydrogen atom might be interpreted in terms of electromagnetic emissions related to orbital frequencies of the electron. Subsequently, Bohr achieved agreement with experiment by postulating that the observed frequencies were due to photon (hν) emissions as the electron made a quantum jump from one allowed orbit to another. As will be shown below these two explanations, the first classical and the second quantum mechanical, can be used to illustrate the correspondence principle.
The calculations below are carried out in the Mathcad programming environment using the following information.
$\begin{matrix} \text{Planck's constant:} & h = 6.62608 (10)^{-34} \text{joule sec} & \text{Electron mass:} & m_e = 9.1093897 (10)^{-31} \text{kg} \\ \text{Speed of light:} & c = 2.9979 (10)^8 \frac{m}{sec} & \text{Bohr radius:} & a_0 = 5.29177 (10)^{-11} \text{m} \\ \text{Conversion factors:} & pm = 10^{-12} m & aJ = 10^{-18} \text{joule} \\ \text{Energy of a photon:} & E_{photon} = h \nu = \frac{hc}{ \lambda} \end{matrix} \nonumber$
Energy of the hydrogen atom's electron (n is a quantum number and can have integer values).
$E_{atom} = \frac{-2.18 aJ}{n^2} \nonumber$
Emission Spectroscopy
In emission spectroscopy a photon is created as the electron undergoes a transition from a higher to a lower energy state. Energy conservation requires
$E_{atom}^{initial} = E_{atom}^{final} + E_{photon} \nonumber$
Using Bohrʹs quantum jump model we calculate the frequency of the photon emitted when an electron undergoes a transition from the n=2 to the n=1 state.
$\begin{matrix} n_i = 2 & n_f = 1 & \begin{array}{c|c} \frac{-2.178 aJ}{n_i^2} = \frac{-2.178 aJ}{n_f^2} + h \nu & _{float,~3} ^{solve,~ \nu} \rightarrow \frac{2.47e15}{sec} \end{array} \end{matrix} \nonumber$
This result is in agreement with the experimental hydrogen atom emission spectrum.
Next we calculate the orbital frequencies of these two quantum states. This requires knowing the classical orbital velocity and orbit circumference. These are most easily obtained by using postulates and results of the Bohr model.
Quantized orbital angular momentum: $m_e v r = \frac{nh}{2 \pi}$
Allowed orbit radius: $r = n^2 a_0$
Orbit circumference: $C = 2 \pi r$
Orbit frequency: $\begin{matrix} \nu = \frac{v}{C} & \nu (n) = \frac{h}{4 \pi^2 m_e n^3 a_0^2} \end{matrix}$
The classical orbital frequencies for the n = 1 and n = 2 orbits bracket the photon frequency, but are not in good agreement with the quantum result.
$\begin{matrix} \nu (1) = 6.58 \times 10^{15} \frac{1}{s} & \nu (2) = 8.22 \times 10^{14} \frac{1}{s} \end{matrix} \nonumber$
Next we explore high energy electronic states. Recently an electronic hydrogen atom emission transition was observed at 408.367 MHz in interstellar space. Assuming the transition occurs between adjacent states, calculate the quantum number of the destination state.
$\begin{array}{c|c} \frac{-2.18 aJ}{(n+1)^2} = \frac{-2.18aJ}{n^2} + h \frac{408.367 (10)^6}{ \text{sec}} & _{ \text{float, 3}} ^{ \text{solve, n}} \rightarrow \begin{pmatrix} -0.5 \\ 252.0 \\ -127.0 - 219.0i \\ -127.0 + 219.0i \end{pmatrix} \end{array} \nonumber$
Thus the transition is from n = 253 to n = 252. Below we see that the classical orbital frequencies for these states again bracket the quantum result, but now are in much closer agreement with it.
$\begin{matrix} \nu (252) = 4.11 \times 10^8 \frac{1}{s} & \nu (253) = 4.06 \times 10^8 \frac{1}{s} \end{matrix} \nonumber$
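Both regimes can be checked with a short Python script (an illustrative sketch, not part of the original Mathcad document), using the constants given above:

```python
import numpy as np

h, me, a0 = 6.62608e-34, 9.1093897e-31, 5.29177e-11
E1 = 2.178e-18                              # magnitude of the n = 1 energy, J

def nu_photon(ni, nf):                      # Bohr quantum-jump frequency
    return E1 * (1/nf**2 - 1/ni**2) / h

def nu_orbit(n):                            # classical orbital frequency of the nth orbit
    return h / (4 * np.pi**2 * me * n**3 * a0**2)

print(nu_photon(2, 1), nu_orbit(1), nu_orbit(2))          # 2.47e15 Hz vs 6.58e15 and 8.22e14 Hz
print(nu_photon(253, 252), nu_orbit(252), nu_orbit(253))  # 4.08e8 Hz vs 4.11e8 and 4.06e8 Hz
```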
As the n quantum number increases the predictions of classical and quantum mechanics converge as required by Bohrʹs correspondence principle.
Under normal circumstances the hydrogen atom consists of a proton and an electron. However, electrons are leptons and there are two other leptons which could temporarily replace the electron in the hydrogen atom. The other leptons are the muon and the tauon, and their relevant properties, along with those of the electron, are given in the following table. The following calculations are carried out in atomic units (me = e = h/2π = 1) and are valid for any one‐lepton atom or ion.
$\begin{pmatrix} \text{Property} & e & \mu & \tau \\ \frac{ \text{Mass}}{m_e} & 1 & 206.8 & 3491 \\ \frac{ \text{EffectiveMass}}{m_e} & 1 & 185.86 & 1203 \\ \frac{ \text{LifeTime}}{s} & \text{Stable} & 2.2 (10)^{-6} & 3.0 (10)^{-13} \end{pmatrix} \nonumber$
Coordinate‐space calculations:
Energy operator:
$H = - \frac{1}{2 \mu} \frac{d^2}{dx^2} \blacksquare - \frac{1}{x} \blacksquare \nonumber$
Trial ground state wave function:
$\begin{matrix} \Psi (x,~ \mu) = 2 \mu^{ \frac{3}{2}} x \text{ exp} ( - \mu x) & \int_0^{ \infty} \Psi (x,~ \mu)^2 \text{dx assume}, \mu > 0 \rightarrow 1 \end{matrix} \nonumber$
Demonstrate that the wave function is an eigenfunction of the total energy operator, but not of the kinetic and potential energy operators individually:
$- \frac{1}{2 \mu} \frac{d^2}{dx^2} \Psi (x,~ \mu) - \frac{1}{x} \Psi (x,~ \mu) = E \Psi (x,~ \mu ) \text{ solve, E} \rightarrow \frac{-1}{2} \mu \nonumber$

$\begin{matrix} \frac{- \frac{1}{2 \mu} \frac{d^2}{dx^2} \Psi (x,~ \mu )}{\Psi (x, ~ \mu )} \text{ simplify} \rightarrow \frac{-1}{2} \frac{(-2) + \mu x}{x} & \frac{- \frac{1}{x} \Psi (x, ~ \mu)}{ \Psi (x, ~ \mu)} \rightarrow \frac{-1}{x} \end{matrix} \nonumber$
Calculate <T>:
$\int_0^{ \infty} \Psi (x,~ \mu ) - \frac{1}{2 \mu} \frac{d^2}{dx^2} \Psi (x,~ \mu ) \text{dx assume, } \mu > 0 \rightarrow \frac{1}{2} \mu \nonumber$
Calculate <V>:
$\int_0 ^{ \infty} \Psi (x,~ \mu) \frac{-1}{x} \Psi (x,~ \mu) \text{ dx assume, } \mu >0 \rightarrow - \mu \nonumber$
Is the virial theorem satisfied? Yes.
$<E> = \frac{<V>}{2} = -<T> = - \frac{ \mu}{2} \nonumber$
Calculate the classical turning point:
$\frac{- \mu}{2} = \frac{-1}{x} \text{ solve, x} \rightarrow \frac{2}{ \mu} \nonumber$
Calculate the probability that tunneling is occurring:
$\int_{ \frac{2}{ \mu}}^{ \infty} \Psi (x,~ \mu)^2 \text{ dx assume, } \mu > 0 \rightarrow 13 e^{-4} = 0.238 \nonumber$
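The symbolic results above (normalization, <T>, <V>, the classical turning point and the tunneling probability) can be reproduced with SymPy; the sketch below is illustrative and not part of the original worksheet.

```python
import sympy as sp

x, mu = sp.symbols('x mu', positive=True)
psi = 2 * mu**sp.Rational(3, 2) * x * sp.exp(-mu * x)    # trial ground-state wave function

norm = sp.integrate(psi**2, (x, 0, sp.oo))                               # -> 1
T = sp.integrate(psi * (-sp.diff(psi, x, 2)) / (2 * mu), (x, 0, sp.oo))  # -> mu/2
V = sp.integrate(psi * (-1 / x) * psi, (x, 0, sp.oo))                    # -> -mu
xtp = sp.solve(sp.Eq(-mu / 2, -1 / x), x)[0]                             # -> 2/mu
P_tunnel = sp.simplify(sp.integrate(psi**2, (x, xtp, sp.oo)))            # -> 13*exp(-4) ≈ 0.238
print(norm, T, V, xtp, P_tunnel, float(P_tunnel))
```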
Calculate <x>:
$\int_0 ^{ \infty} x \Psi (x,~ \mu)^2 \text{ dx assume, } \mu > 0 \rightarrow \frac{3}{2 \mu} \nonumber$
Calculate <x2>:
$\int_0^{ \infty} x^2 \Psi (x,~ \mu )^2 \text{ dx assume, } \mu > 0 \rightarrow \frac{3}{ \mu^2} \nonumber$
Calculate the most probable position:
$\frac{d}{dx} \Psi (x,~ \mu) = 0 \text{solve, x} \rightarrow \frac{1}{ \mu} \nonumber$
Calculate the electron density at the most probable position:
$\begin{matrix} \Psi \left( \frac{1}{ \mu},~ \mu \right)^2 \rightarrow 4 \mu \left( e^{-1} \right)^2 & 0.541 \mu \end{matrix} \nonumber$
Calculate the probability the electron is beyond the most probable position:
$\int_{ \frac{1}{ \mu}}^{ \infty} \Psi (x,~ \mu)^2 \text{ dx assume, } \mu > 0 \rightarrow 5 e^{-2} = 0.677 \nonumber$
Calculate <p>:
$\int_0^{ \infty} \Psi (x,~ \mu) \frac{1}{i} \frac{d}{dx} \Psi (x,~ \mu) \text{dx assume, } \mu > 0 \rightarrow 0 \nonumber$
Calculate <p2>:
$\int_0^{ \infty} \Psi (x,~ \mu) - \frac{d^2}{dx^2} \Psi (x,~ \mu) \text{ dx assume, } \mu >0 \rightarrow \mu^2 \nonumber$
Calculate the position-momentum commutator:
$\frac{x \frac{1}{i} \frac{d}{dx} \Psi (x,~ \mu) - \frac{1}{i} \frac{d}{dx} (x \Psi (x,~ \mu))}{ \Psi (x, \mu )} \text{simplify} \rightarrow i \nonumber$
Momentum-space calculations:
Generate the momentum wave function by Fourier transform of the coordinate-space wave function:
$\Phi (p,~ \mu) = \int_0^{ \infty} \frac{ \text{exp} (-i px)}{ \sqrt{2 \pi}} \Psi (x,~ \mu) \text{ dx assume, } \mu > 0 \rightarrow \frac{ \sqrt{2}\, \mu^{ \frac{3}{2}}}{ \sqrt{ \pi} \left( \mu + i p \right)^2} \nonumber$
Calculate <T>:
$\int_{- \infty}^{ \infty} \overline{ \Phi (p,~ \mu)} \frac{p^2}{2 \mu} \Phi (p,~ \mu ) \text{ dp assume, } \mu > 0 \rightarrow \frac{1}{2} \mu \nonumber$
Calculate <x>:
$\begin{array}{c|c} \int_{ - \infty}^{ \infty} \overline{ \Phi (p,~ \mu)} i \frac{d}{dp} \Phi (p,~ \mu) dp & _{ \text{simplify}}^{ \text{assume, } \mu >0} \rightarrow \frac{3}{2 \mu} \end{array} \nonumber$
Calculate <x2>:
$\begin{array}{c|c} \int_{ - \infty}^{ \infty} \overline{ \Phi (p,~ \mu)} \left( - \frac{d^2}{dp^2} \right) \Phi (p,~ \mu) dp & _{ \text{simplify}}^{ \text{assume, } \mu >0} \rightarrow \frac{3}{ \mu^2} \end{array} \nonumber$
Calculate <p>:
$\int_{ - \infty}^{ \infty} \overline{ \Phi (p,~ \mu)} p \Phi (p,~ \mu ) \text{ dp assume, } \mu > 0 \rightarrow 0 \nonumber$
Calculate <p2>:
$\int_{ - \infty}^{ \infty} \overline{ \Phi (p,~ \mu)} p^2 \Phi (p,~ \mu ) \text{ dp assume, } \mu > 0 \rightarrow \mu^2 \nonumber$
Calculate Δx:
$\sqrt{ \frac{3}{ \mu^2} - \left( \frac{3}{2 \mu} \right)^2} = \frac{ \sqrt{3}}{2 \mu} \nonumber$
Calculate Δp:
$\sqrt{ \mu^2} = \mu \nonumber$
Calculate ΔxΔp:
$\frac{ \sqrt{3}}{2} = 0.866 \nonumber$
The uncertainty principle is obeyed because the result is greater than 0.5.
Spatial distribution as a function of μ:
Momentum distribution as a function of μ:
The spatial and momentum distributions provide a graphical illustration of the uncertainty principle.
Niels Bohr once observed that from the perspective of classical physics the stability of matter was a pure miracle. The problem, of course, is that two of the basic building blocks of matter are oppositely charged particles: the proton and the electron. Given Coulomb's Law, the troubling question is what keeps them from combining? Quantum mechanics is considered by many to be an abstract and esoteric science that does not have much to do with everyday life. Yet it provides an explanation for atomic and molecular stability, and classical physics fails at that task. Thus, to achieve some understanding of one of the basic facts about the macro-world requires quantum mechanical concepts and tools.
The issue of atomic stability will be explored with a quantum mechanical analysis of the two simplest elements in the periodic table - hydrogen and helium. Schrödinger's equation can be solved exactly for the hydrogen atom, but approximate methods are required for the helium atom. However, in the pursuit of an explanation for atomic stability, it is instructive to use an approximate method to study the hydrogen atom. The approximate method of choice for many quantum mechanical problems is the variational method.
Variational Treatment for the Hydrogen Atom
The Hamiltonian energy operator for the hydrogen atom in atomic units is,
$\hat{H} = -\frac{1}{2} \nabla^2 - \dfrac{1}{r} \label{1}$
Using a scaled hydrogenic wavefunction as the trial wavefunction ($\alpha = 1$ for the exact solution),
$\Psi(r) = \sqrt{\dfrac{\alpha^3}{\pi}} \exp (-\alpha r) \label{2}$
in a variational calculation yields
$\langle E_H \rangle = \langle \hat{H} \rangle =\langle T_E \rangle + \langle V_{NE} \rangle\label{3}$
where
$\langle T_E \rangle = \dfrac{\alpha^3}{\pi} \int_o^{\infty} \exp(-\alpha r) \left(- \dfrac{1}{2} \nabla ^2\right) \exp (-\alpha r) 4 \pi r^2 dr = \dfrac{\alpha^2}{2} \label{4}$
and
$\langle V_{NE} \rangle = \dfrac{\alpha^3}{\pi} \int_o^{\infty} \exp(-\alpha r) \left(\dfrac{-Z}{r}\right) \exp (-\alpha r) 4 \pi r^2 dr = -\alpha Z \label{5}$
$Z$ is the nuclear charge and the scale factor $\alpha$ is the variational parameter in this calculation. It is easy to see that it is a decay constant which controls how quickly the wavefunction goes to zero as a function of r, the distance from the nucleus. Therefore, it is also intimately related to the average distance of the electron from the nucleus. This is easily seen by calculating the expectation value for the distance of the electron from the nucleus:
$\langle R \rangle = \dfrac{\alpha^3}{\pi} \int_o^{\infty} \exp(-\alpha r) r \exp (-\alpha r) 4 \pi r^2 dr = \dfrac{3}{2 \alpha} \label{6}$
$\langle R \rangle$ is inversely proportional to $\alpha$, and vice versa. The larger the value of $\alpha$, the closer the electron is on average to the nucleus. Using this relationship, $\langle R \rangle$ can be made the variational parameter (in place of $\alpha$) by combining Equations \ref{3} - \ref{6}:
\begin{align} E_H &= \langle T_E \rangle + \langle V_{NE} \rangle \\[4pt] &= \dfrac{\alpha^2}{2} - \alpha \\[4pt] &= \dfrac{9}{8R^2} - \dfrac{3}{2R} \label{7} \end{align}
The next step in elucidating the nature of atomic stability is to plot $E_H$ vs $R$.
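The plot can be sketched in Python (an illustrative aid, not part of the original text; matplotlib is assumed), using the kinetic and potential terms of Equation \ref{7}:

```python
import numpy as np
import matplotlib.pyplot as plt

R = np.linspace(0.4, 6, 400)   # mean electron-nucleus distance <R>, in units of a0
T = 9 / (8 * R**2)             # confinement (kinetic) energy, Eh
V = -3 / (2 * R)               # nuclear attraction, Eh

plt.plot(R, T, label="T")
plt.plot(R, V, label="V")
plt.plot(R, T + V, label="E = T + V")
plt.axhline(0, color="k", lw=0.5)
plt.xlabel("<R> / a0")
plt.ylabel("Energy / Eh")
plt.legend()
plt.show()
# The total energy curve has its minimum at <R> = 3/2 a0 (alpha = 1) with E = -0.5 Eh.
```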
Imagine a hydrogen atom forming as an electron approaches a proton from a great distance. (Notice that we assume that the more massive proton is stationary.) The electron is drawn toward the proton by the Coulombic attractive interaction between the two opposite charges and the potential energy decreases like $\frac{-1}{R}$. The attractive potential energy interaction confines the electron to a smaller volume and, according to de Broglie's wave hypothesis for matter, the kinetic energy increases like $V^{-2/3}$ or, as shown above, like $\frac{1}{R^2}$. Thus the kinetic energy goes to positive infinity faster than the potential energy goes to negative infinity and a total energy minimum (ground state) is achieved at $\langle R \rangle = 3/2$ (corresponding to $\alpha = 1$), as shown in the figure above. The electron does not collapse into (coalesce with) the proton under the influence of the attractive Coulombic interaction because of the repulsive effect of the confinement energy - that is, kinetic energy. Kinetic energy, therefore, is at the heart of understanding atomic stability. But it is important to understand that this is quantum mechanical kinetic energy, or confinement energy. It is remarkably different than classical kinetic energy.
Variational Treatment for the Helium Atom
We now move on to the next most complicated element, helium, which has a nucleus of charge two ($Z=2$) and two electrons. The Hamiltonian energy operator in atomic units is given by
$\hat{H} = -\frac{1}{2} \nabla^2_1 -\frac{1}{2} \nabla^2_2 - \dfrac{2}{r_1} - \dfrac{2}{r_2} + \dfrac{1}{r_{12}} \label{8}$
There are five terms in the energy operator, but only one new type, the electron-electron potential energy term. When this interaction is calculated using the variational wavefunction in Equation \ref{2}, we have:
$\langle V_{EE} \rangle = \dfrac{\alpha^6}{\pi^2} \int_0^{\infty} \int_0^{\infty} \exp(-\alpha r_1) \exp(-\alpha r_2) \left( \dfrac{1}{r_{12}} \right)\exp(-\alpha r_1) \exp(-\alpha r_2)4\pi r_1^2dr_1 4 \pi r^2_2 dr_2= \dfrac{5\alpha}{8} \label{9}$
The total energy for the helium atom can now be written as shown below because the same relationship applies between $\langle R \rangle$ and $\alpha$ ($\alpha = 3/(2 \langle R \rangle)$).
\begin{align} E_{He} &= 2 \langle T_E \rangle + 2\langle V_{NE} \rangle + \langle V_{EE} \rangle \\[4pt] &= \alpha^2 - 4\alpha + \dfrac{5 \alpha}{8} \\[4pt] &= \dfrac{9}{4R^2} - \dfrac{6}{R} + \dfrac{15}{16R} \label{10} \end{align}
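Minimizing Equation \ref{10} with respect to α reproduces the familiar variational result for helium; the short SymPy sketch below is illustrative and not part of the original text.

```python
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
E_He = alpha**2 - 4*alpha + sp.Rational(5, 8)*alpha    # Equation (10) written in terms of alpha

a_opt = sp.solve(sp.diff(E_He, alpha), alpha)[0]       # -> 27/16
print(a_opt, E_He.subs(alpha, a_opt), float(E_He.subs(alpha, a_opt)))
# optimum alpha = 27/16, E = -729/256 ≈ -2.848 Eh
```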
Graphing $E_{He}$ vs. $\langle R \rangle$ reveals again that kinetic energy (confinement energy) is the key to atomic stability. Several things should be noted in the graph shown below. First, that when the total energy minimum is achieved $V_{NE}$ and $V$ ($V_{NE} + V_{EE}$) are still in a steep decline. This is a strong indication that $V_{EE}$ is really a rather feeble contribution to the total energy, increasing significantly only long after the energy minimum has been attained. Thus electron-electron repulsion cannot be used to explain atomic stability. The graph below clearly shows that on the basis of classical electrostatic interactions, the electron should collapse into the nucleus. This is prevented by the kinetic energy term for the same reasons as were given for the hydrogen atom.
Unfortunately chemists tend to give too much significance to electron-electron repulsion (see VSEPR for example) when it is really the least important term in the Hamiltonian energy operator. And to make matters worse they completely ignore kinetic energy as an important factor in atomic and molecular phenomena. It is becoming increasingly clear in the current literature that many well-established explanations for chemical phenomena based exclusively on electrostatic arguments are in need of critical re-evaluation.
The following exercises pertain to several of the l = 0 states of the one-dimensional hydrogen atom.
$\begin{matrix} 1s & \Psi_1 (x) = 2 x \text{exp}(-x) & 2s & \Psi_2 (x) = \frac{1}{ \sqrt{8}} x (2-x) \text{exp} \left( - \frac{x}{2} \right) \\ 3s & \Psi_3 (x) = \frac{2}{243} \sqrt{3} x \left( 27-18x + 2x^2 \right) \text{exp} \left( - \frac{x}{3} \right) \end{matrix} \nonumber$
Coordinate Space Calculations
$\begin{matrix} \text{Position operator:} & x \blacksquare & \text{Momentum operator:} & p = \frac{1}{i} \frac{d}{dx} \blacksquare & \text{Integral:} & \int_0 ^{ \infty} \blacksquare dx \ \text{Kinetic energy operator:} & KE = - \frac{1}{2} \frac{d^2}{dx^2} & \text{Potential energy operator:} & PE = \frac{-1}{x} \blacksquare \end{matrix} \nonumber$
Plot Ψ(x) and Ψ(x)2:
Show that this wave function is normalized:
$\begin{matrix} \int_0^{ \infty} \Psi_1 (x)^2 dx \rightarrow 1 & \int_0^{ \infty} \Psi_1 (x)^2 dx = 1 \end{matrix} \nonumber$
Calculate the most probable position for the electron:
$\frac{d}{dx} (2 \text{x exp(-x))} = 0 \text{solve, x} \rightarrow 1 \nonumber$
Calculate the average value of the electron position:
$\int_0^{ \infty} \Psi_1 (x) x \Psi_1 (x) dx \rightarrow \frac{3}{2} \nonumber$
Calculate the probability density at both the most probable and average positions of the electron:
$\begin{matrix} \text{Most probable:} & \Psi_1 (1)^2 = 0.541 & \text{Average:} & \Psi_1 \left( \frac{3}{2} \right)^2 = 0.448 \end{matrix} \nonumber$
Calculate the probability that the electron is between the nucleus and the most probable value of the electron position:
$\begin{matrix} \int_0^1 \Psi_1 (x)^2 \text{dx float, 3} \rightarrow 0.323 & \int_0^1 \Psi_1 (x)^2 dx = 0.323 \end{matrix} \nonumber$
Calculate the probability that the electron is between the nucleus and the average value of the electron position:
$\begin{matrix} \int_0^{ \frac{3}{2}} \Psi_1 (x)^2 \text{dx float, 3} \rightarrow 0.577 & \int_0^{ \frac{3}{2}} \Psi_1 (x)^2 dx = 0.577 \end{matrix} \nonumber$
Calculate the probability that the electron is beyond the most probable position:
$\begin{matrix} \int_1^{ \infty} \Psi_1 (x)^2 \text{dx float, 3} \rightarrow 0.677 & \int_1 ^{ \infty} \Psi_1 (x)^2 dx = 0.677 \end{matrix} \nonumber$
Calculate the probability that the electron will be found inside the nucleus. The nuclear dimension is approximately $2 \times 10^{-5}~a_0$.
$\int_0^{.00002} \Psi_1 (x)^2 dx = 1.067 \times 10^{-14} \nonumber$
Calculate that position from the nucleus for which the probability of finding the electron is 0.95:
$\begin{matrix} a = 2 & \text{Given} & \int_0^a \Psi_1 (x)^2 dx = .95 & \text{Find(a)} = 3.148 \end{matrix} \nonumber$
Calculate the uncertainty in position:
$\Delta x = \sqrt{ \int_0^{ \infty} \Psi_1 (x)x^2 \Psi_1 (x) dx - \left( \int_0^{ \infty} \Psi_1 (x) x \Psi_1 (x) dx \right)^2} \text{float, 3} \rightarrow 0.866 \nonumber$
Calculate the average value of the electronic momentum:
$\int_0 ^{ \infty} \Psi_1 (x) \frac{1}{i} \frac{d}{dx} \Psi_1 (x) dx \rightarrow 0 \nonumber$
Calculate the uncertainty in momentum:
$\Delta p = \sqrt{ \int_0^{ \infty} \Psi_1 (x) - \frac{d^2}{dx^2} \Psi_1 (x) dx - \left( \int_0^{ \infty} \Psi_1 (x) \frac{1}{i} \frac{d}{dx} \Psi_1 (x) dx \right)^2} \rightarrow 1 \nonumber$
Demonstrate that the position-momentum uncertainty relation is obeyed:
$\Delta x \Delta p = 0.866 \nonumber$
This value is greater than .5 as required.
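These expectation values and uncertainties can be confirmed with a short SymPy sketch (illustrative, not part of the original exercise set):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
psi1 = 2 * x * sp.exp(-x)     # 1s wave function of the one-dimensional hydrogen atom

x_avg  = sp.integrate(x * psi1**2, (x, 0, sp.oo))                      # 3/2
x2_avg = sp.integrate(x**2 * psi1**2, (x, 0, sp.oo))                   # 3
p_avg  = sp.integrate(psi1 * sp.diff(psi1, x) / sp.I, (x, 0, sp.oo))   # 0
p2_avg = sp.integrate(psi1 * (-sp.diff(psi1, x, 2)), (x, 0, sp.oo))    # 1

dx = sp.sqrt(x2_avg - x_avg**2)       # sqrt(3)/2
dp = sp.sqrt(p2_avg - p_avg**2)       # 1
print(dx, dp, float(dx * dp))         # uncertainty product ≈ 0.866 > 0.5
```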
Calculate the position-momentum commutator. Interpret the result.
$\frac{x \left( \frac{1}{i} \frac{d}{dx} \Psi_1 (x) \right) - \frac{1}{i} \frac{d}{dx} \left( x \Psi_1 (x) \right)}{ \Psi_1 (x)} \text{simplify} \rightarrow i \nonumber$
Position and momentum measurements do not commute. The wave function is not an eigenfunction of the position and momentum operators.
Calculate the average value for kinetic energy:
$\begin{matrix} \int_0 ^{ \infty} \Psi_1 (x) - \frac{1}{2} \frac{d^2}{dx^2} \Psi_1 (x) dx \rightarrow \frac{1}{2} & \int_0 ^{ \infty} \Psi_1 (x) - \frac{1}{2} \frac{d^2}{dx^2} \Psi_1 (x) dx = 0.5 \end{matrix} \nonumber$
Calculate the average value for potential energy:
$\begin{matrix} \int_0 ^{ \infty} \Psi_1 (x) \left( - \frac{1}{x} \right) \Psi_1 (x) dx \rightarrow -1 & \int_0 ^{ \infty} \Psi_1 (x) \left( - \frac{1}{x} \right) \Psi_1 (x) dx = -1 \end{matrix} \nonumber$
Calculate the average value for the total energy: $\frac{1}{2} - 1 = -0.5$ or
$\int_0 ^{ \infty} \Psi_1 (x) \left( - \frac{1}{2} \frac{d^2}{dx^2} - \frac{1}{x} \right) \Psi_1 (x) dx = - \frac{1}{2} \nonumber$
These results illustrate the virial theorem:
$E = -T = \frac{V}{2} \nonumber$
Calculate the probability that the electron is in a classically forbidden region, that is a region for which E < V. First show that the classically forbidden region begins at x = 2, where E = V. For x larger than 2 the potential energy will be greater than the total energy. This is an example of quantum mechanical tunneling. Show the classically forbidden region graphically.
$\begin{matrix} \frac{-1}{2} = \frac{-1}{x} \text{ solve, x } \rightarrow 2 & \int_2^{ \infty} \Psi_1 (x)^2 dx = 0.238 \end{matrix} \nonumber$
Demonstrate that the wave function is not an eigenfunction of the kinetic energy operator and comment on the significance of this result:
$\frac{- \frac{1}{2} \frac{d^2}{dx^2} \Psi_1 (x)}{ \Psi_1 (x)} \text{ simplify} \rightarrow \frac{1}{x} - \frac{1}{2} \nonumber$
Electron in the hydrogen atom does not have a well-defined value for kinetic energy.
Demonstrate that the wave function is not an eigenfunction of the potential energy operator and comment on the significance of this result:
$\frac{- \frac{1}{x} \Psi_1 (x)}{ \Psi_1 (x)} \text{ simplify } \rightarrow - \frac{1}{x} \nonumber$
In spite of not having a well-defined kinetic or potential energy, the electron in the hydrogen atom has a well-defined total energy.
$- \frac{1}{2} \frac{d^2}{dx^2} \Psi_1 (x) - \frac{1}{x} \Psi_1 (x) = E \Psi_1 (x) \text{ solve, E} \rightarrow - \frac{1}{2} \nonumber$
What is the energy eigenvalue and how does it compare to previous calculations in this exercise:
The energy eigenvalue is -0.5, which is in agreement with <T> + <V> calculated previously.
Calculate the overlap integral with the 2s orbital:
$\int_0^{ \infty} \Psi_1 (x) \Psi_2 (x) dx \rightarrow 0 \nonumber$
Interpret the result: the orbitals are orthogonal.
Calculate the kinetic, potential and total energy of a 2s electron and show that the virial theorem is satisfied.
$\begin{matrix} \int_0^{ \infty} \Psi_2 (x) - \frac{1}{2} \frac{d^2}{dx^2} \Psi_2 (x) dx \rightarrow \frac{1}{8} & \int_0^{ \infty} \Psi_2 (x) - \frac{1}{x} \Psi_2 (x) dx \rightarrow - \frac{1}{4} & F = \frac{1}{8} - \frac{1}{4} & F \rightarrow - \frac{1}{8} \end{matrix} \nonumber$
$E = -T =\frac{V}{2} \nonumber$
Calculate the kinetic, potential and total energy of a 3s electron and show that the virial theorem is satisfied.
$\begin{matrix} \int_0^{ \infty} \Psi_3 (x) - \frac{1}{2} \frac{d^2}{dx^2} \Psi_3 (x) dx \rightarrow \frac{1}{18} & \int_0^{ \infty} \Psi_3 (x) - \frac{1}{x} \Psi_3 (x) dx \rightarrow - \frac{1}{9} & E = \frac{1}{18} - \frac{1}{9} & E \rightarrow - \frac{1}{18} \end{matrix} \nonumber$
$E = -T =\frac{V}{2} \nonumber$
Plot the 1s and 2s orbitals on the same graph and explain the orthogonality or net zero overlap.
From x = 0 to 2 the overlap is positive, and x=2 to ∞ it is equal in magnitude but negative.
$\begin{matrix} \int_0^2 \Psi_1 (x) \Psi_2 (x) dx = 0.188 \\ \int_2^{ \infty} \Psi_1 (x) \Psi_2 (x) dx = -0.188 \end{matrix} \nonumber$
Momentum Space Calculations
Fourier transform the 1s coordinate-space wave function into momentum space.
$\Phi_1 (p) = \frac{1}{ \sqrt{2 \pi}} \left( \int_0^{ \infty} \text{exp} (-ipx) \Psi_1 (x) dx \right) \rightarrow \frac{ \sqrt{2}}{ \sqrt{ \pi} (1 + ip)^2} \nonumber$
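Since $|1 + ip|^4 = (1 + p^2)^2$, the momentum density is $|\Phi_1 (p)|^2 = 2/[\pi (1 + p^2)^2]$. A two-line SymPy check (illustrative, not part of the original worksheet) confirms that it is normalized and reproduces <T>:

```python
import sympy as sp

p = sp.symbols('p', real=True)
phi1_sq = 2 / (sp.pi * (1 + p**2)**2)       # |Phi_1(p)|^2 from the transform above

print(sp.integrate(phi1_sq, (p, -sp.oo, sp.oo)))              # 1   (normalized)
print(sp.integrate(p**2 / 2 * phi1_sq, (p, -sp.oo, sp.oo)))   # 1/2 (<T>, as in coordinate space)
```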
Plot |Ψ(x)|2 and |Φ(p)|2 on the same graph:
$\begin{matrix} p = -5,~-4.99, ... 5 & x = 0,~.01 .. 5 \end{matrix} \nonumber$
$\begin{matrix} \text{Momentum space integral:} & \int_{ - \infty}^{ \infty} \blacksquare dp & \text{Momentum operator:} & p \blacksquare \ \text{Kinetic energy operator:} & \frac{p^2}{2} & \text{Position operator:} & i \frac{d}{dp} \blacksquare \end{matrix} \nonumber$
Demonstrate that the 1s momentum wavefunction is normalized.
$\begin{matrix} \int_{- \infty}^{ \infty} \overline{ \Phi_1 (p)} \Phi_1 (p) dp = 1 & \text{or} & \int_{ - \infty}^{ \infty} \left( \left| \Phi_1 (p) \right| \right)^2 dp = 1 \end{matrix} \nonumber$
Calculate the average value of the momentum and compare it to value obtained with the coordinate space wave function.
$\begin{matrix} \int_{- \infty}^{ \infty} \overline{ \Phi_1 (p)} p \Phi_1 (p) dp = 0 & \text{Same value} \end{matrix} \nonumber$
Calculate the average value of the kinetic energy and compare it to value obtained with the 1s coordinate space wave function.
$\begin{matrix} \int_{- \infty}^{ \infty} \overline{ \Phi_1 (p)} \frac{p^2}{2} \Phi_1 (p) dp = 0.5 & \text{Same value} \end{matrix} \nonumber$
Calculate the average value of the electron position and compare it to value obtained with the 1s coordinate space wave function.
$\begin{matrix} \int_{- \infty}^{ \infty} \overline{ \Phi_1 (p)} i \frac{d}{dp} \Phi_1 (p) dp = 1.5 & \text{Same value} \end{matrix} \nonumber$
Calculate the uncertainty in position using the momentum wave function:
$\begin{matrix} \int_{- \infty}^{ \infty} \overline{ \Phi_1 (p)} - \frac{d^2}{dp^2} \Phi_1 (p) dp = 3 & \Delta x = \sqrt{3 - \left( \frac{3}{2} \right)^2} & \Delta x = 0.866 \end{matrix} \nonumber$
Calculate the uncertainty in momentum using the momentum wave function:
$\begin{matrix} \int_{- \infty}^{ \infty} \overline{ \Phi_1 (p)} p^2 \Phi_1 (p) dp = 1 & \Delta p = \sqrt{1-0} & \Delta p = 1 \end{matrix} \nonumber$
Demonstrate that the position-momentum uncertainty relation is satisfied:
$\begin{matrix} \Delta x \Delta p = 0.866 & \text{Same value.} \end{matrix} \nonumber$
Fourier transform the 2s wavefunction into momentum space:
$\Phi_2 (p) = \frac{1}{ \sqrt{2 \pi}} \int_0^{ \infty} \text{exp} (-ipx) \Psi_2 (x) \text{dx simplify} \rightarrow \frac{2(-1 + 2ip)}{ \sqrt{ \pi} (1+2ip)^3} \nonumber$
Demonstrate the 2s wavefunction is normalized.
$\int_{ - \infty}^{ \infty} \left( \left| \Phi_2 (p) \right| \right)^2 dp = 1 \nonumber$
Demonstrate the 1s and 2s momentum wavefunctions are orthogonal.
$\int_{ - \infty}^{ \infty} \overline{ \Phi_1 (p)} \Phi_2 (p) dp = 0 \nonumber$
Fourier transform the 3s wavefunction into momentum space.
$\Phi_3 (p) = \frac{1}{ \sqrt{2 \pi}} \int_0 ^{ \infty} \text{exp}(-ipx) \Psi_3 (x) \text{dx simplify } \rightarrow \frac{ \sqrt{6} (-1 + 3ip)^2}{ \sqrt{ \pi} (1 +3ip)^4} \nonumber$
Demonstrate the 3s momentum wavefunction is normalized.
$\int_{ - \infty}^{ \infty} \left( \left| \Phi_3 (p) \right| \right)^2 dp = 1 \nonumber$
Plot the 1s, 2s and 3s momentum wavefunctions and interpret the graph in terms of the uncertainty principle.
As shown below, as the principal quantum number increases, the spatial distribution of the electron becomes more delocalized. Therefore, according to the uncertainty principle, the momentum distribution must become more localized. The graph above shows a more localized momentum distribution as the principal quantum number increases.
Full kinetic energy operator in spherical coordinates:
Kinetic energy operator for s states:
$- \frac{1}{2r} \frac{d^2}{dr^2} r \blacksquare \nonumber$
Kinetic energy operator for p states:
$\begin{matrix} - \frac{1}{2r} \frac{d^2}{dr^2} r \blacksquare ... \\ + \frac{-1}{2r^2 \sin ( \theta )} \left[ \frac{d}{d \theta} \left( \sin ( \theta ) \frac{d}{d \theta} \blacksquare \right) \right] ... \\ + \frac{-1}{2r^2 \sin ( \theta)^2} \frac{d^2}{d \phi^2} \blacksquare \end{matrix} \nonumber$
Position operator: r
Potential energy operator: $- \frac{1}{r}$
Triple integral with volume element: $\int_0^{ \infty} \int_0^{ \pi} \int_0^{ 2 \pi} \blacksquare r^2 \sin ( \theta ) d \phi d \theta dr$
Orbitals:
$\begin{matrix} \Psi_{1s} (r) = \frac{1}{ \sqrt{ \pi}} \text{exp(-r)} & \Psi_{2s} (r) = \frac{1}{ \sqrt{32 \pi}} (2 - r) \text{exp} \left( - \frac{r}{2} \right) \\ \Psi_{2pz} (r,~ \theta) = \frac{1}{ \sqrt{32 \pi}} \text{r exp} \left( - \frac{r}{2} \right) \cos \theta & \Psi_{2py} (r,~ \theta,~ \phi ) = \frac{1}{ \sqrt{32 \pi}} \text{r exp} \left( - \frac{r}{2} \right) \sin ( \theta ) \sin ( \phi ) \end{matrix} \nonumber$
Plot the wave functions on the same graph:
Plot the radial distribution functions for each orbital on the same graph:
Demonstrate that the 1s orbital is normalized:
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s} (r)^2 r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 1 \nonumber$
Demonstrate that the 2s orbital is normalized:
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2s} (r)^2 r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 1 \nonumber$
Demonstrate that the 2pz orbital is normalized:
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2pz} (r,~ \theta)^2 r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 1 \nonumber$
Demonstrate that the 1s and the 2pz orbitals are orthogonal:
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s}(r) \Psi_{2pz} (r,~ \theta) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 0 \nonumber$
Demonstrate the 1s and 2s orbitals are orthogonal:
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s}(r) \Psi_{2s} (r) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 0 \nonumber$
Demonstrate that the 2py and the 2pz orbitals are orthogonal:
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2py} (r,~ \theta,~ \phi ) \Psi_{2pz} (r,~ \theta ) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 0 \nonumber$
Determine the most probable value for r using the Trace function and calculus:
$\frac{d}{dr} r^2 \Psi_{1s}(r)^2 = 0 \text{ solve, r} \rightarrow \begin{pmatrix} 0 \ 1 \end{pmatrix} \nonumber$
Calculate the probability that an electron in the 1s orbital will be found within one Bohr radius of the nucleus.
$\int_0^1 \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s} (r)^2 r^2 \sin ( \theta ) d \phi d \theta dr \text{ float, 3} \rightarrow .323 \nonumber$
Find the distance from the nucleus for which the probability of finding a 1s electron is 0.75.
$\begin{matrix} a = 2 & \text{Given} & \int_0^a \Psi_{1s} (r)^2 4 \pi r^2 dr = 0.75 & \text{Find(a) = 1.96} \end{matrix} \nonumber$
Calculate <T>, <V>, and <r> for the 1s orbital. Is the virial theorem obeyed? Explain.
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s} (r) - \frac{1}{2r} \frac{d^2}{dr^2} r \left( \Psi_{1s} (r) \right) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{1}{2} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s} (r) - \frac{1}{r} \Psi_{1s} (r) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow -1 \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{1s} (r) r \Psi_{1s} (r) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{3}{2} \nonumber$
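The three 1s integrals above can be reproduced symbolically; the SymPy sketch below (illustrative, not part of the original worksheet) uses the same operators and volume element.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
psi_1s = sp.exp(-r) / sp.sqrt(sp.pi)

def expect(op_psi):  # <psi | op psi> with the spherical volume element r^2 sin(theta)
    return sp.integrate(psi_1s * op_psi * r**2 * sp.sin(theta),
                        (phi, 0, 2*sp.pi), (theta, 0, sp.pi), (r, 0, sp.oo))

T = expect(-sp.diff(r * psi_1s, r, 2) / (2 * r))   # kinetic energy operator for s states
V = expect(-psi_1s / r)                            # potential energy operator
R = expect(r * psi_1s)                             # position operator
print(T, V, R)   # -> 1/2, -1, 3/2; E = T + V = -1/2 = V/2, consistent with the virial theorem
```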
Calculate <T>, <V>, and <r> for the 2s orbital. Is the virial theorem obeyed? Explain.
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2s} (r) - \frac{1}{2r} \frac{d^2}{dr^2} r \Psi_{2s} (r) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{1}{8} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2s} (r) - \frac{1}{r} \Psi_{2s} (r) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow - \frac{1}{4} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2s} (r) r \Psi_{2s} (r) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 6 \nonumber$
Calculate <T>, <V>, and <r> for the 2py orbital. Is the virial theorem obeyed? Explain.
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2py} (r, ~ \theta,~ \phi ) \begin{bmatrix} - \frac{1}{2r} \frac{d^2}{dr^2} r \Psi_{2py} (r,~ \theta,~ \phi ) ... \\ + \frac{-1}{2r^2 \sin ( \theta)} \left[ \frac{d}{d \theta} \left( \sin ( \theta ) \frac{d}{d \theta} \Psi_{2py} (r,~ \theta,~ \phi ) \right) \right] ... \\ + \frac{-1}{2r^2 \sin ( \theta )^2} \frac{d^2}{d \phi^2} \Psi_{2py} (r,~ \theta,~ \phi) \end{bmatrix} r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{1}{8} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2py} (r,~ \theta,~ \phi) - \frac{1}{r} \Psi_{2py} (r,~ \theta,~ \phi) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{-1}{4} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2py} (r,~ \theta,~ \phi) r \Psi_{2py} (r,~ \theta,~ \phi) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 5 \nonumber$
Calculate <T>, <V>, and <r> for the 2pz orbital. Is the virial theorem obeyed? Explain.
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2pz} (r, ~ \theta ) \begin{bmatrix} - \frac{1}{2r} \frac{d^2}{dr^2} r \Psi_{2pz} (r,~ \theta ) ... \\ + \frac{-1}{2r^2 \sin ( \theta)} \left[ \frac{d}{d \theta} \left( \sin ( \theta ) \frac{d}{d \theta} \Psi_{2pz} (r,~ \theta ) \right) \right] ... \\ + \frac{-1}{2r^2 \sin ( \theta )^2} \frac{d^2}{d \phi^2} \Psi_{2pz} (r,~ \theta ) \end{bmatrix} r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{1}{8} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2pz} (r,~ \theta ) - \frac{1}{r} \Psi_{2pz} (r,~ \theta ) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow \frac{-1}{4} \nonumber$
$\int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \Psi_{2pz} (r,~ \theta ) r \Psi_{2pz} (r,~ \theta ) r^2 \sin ( \theta ) d \phi d \theta dr \rightarrow 5 \nonumber$
Summarize your results in the following table:
$\begin{pmatrix} \Psi & \text{T} & \text{V} & \text{E} & \text{r} \\ \text{1s} & 0.5 & -1 & -0.5 & 1.5 \\ \text{2s} & 0.125 & -0.25 & -0.125 & 6 \\ \text{2pz} & 0.125 & -0.25 & -0.125 & 5 \\ \text{2py} & 0.125 & -0.25 & -0.125 & 5 \end{pmatrix} \nonumber$
Demonstrate that the 1s orbital is an eigenfunction of the energy operator. What is the eigenvalue?
$\begin{matrix} T = - \frac{1}{2r} \frac{d^2}{dr^2} r \blacksquare & V = - \frac{1}{r} & H = T + V & \Psi (r) = \frac{1}{ \sqrt{ \pi}} \text{exp} (-r) \end{matrix} \nonumber$
$\frac{- \frac{1}{2r} \frac{d^2}{dr^2} r \Psi_{1s} (r) - \frac{1}{r} \Psi_{1s} (r)}{ \Psi_{1s} (r)} \text{ simplify } \rightarrow \frac{-1}{2} \nonumber$
Demonstrate that the 2s orbital is an eigenfunction of the energy operator. What is the eigenvalue?
$\frac{- \frac{1}{2r} \frac{d^2}{dr^2} r \Psi_{2s} (r) - \frac{1}{r} \Psi_{2s} (r)}{ \Psi_{2s} (r)} \text{ simplify } \rightarrow \frac{-1}{8} \nonumber$
Demonstrate that the 2py orbital is an eigenfunction of the energy operator. What is the eigenvalue?
$\frac{ \begin{bmatrix} - \frac{1}{2r} \frac{d^2}{dr^2} r \Psi_{2py} (r,~ \theta,~ \phi ) ... \\ + \frac{-1}{2r^2 \sin ( \theta)} \left[ \frac{d}{d \theta} \left( \sin ( \theta ) \frac{d}{d \theta} \Psi_{2py} (r,~ \theta,~ \phi ) \right) \right] ... \\ + \frac{-1}{2r^2 \sin ( \theta )^2} \frac{d^2}{d \phi^2} \Psi_{2py} (r,~ \theta, ~ \phi ) \end{bmatrix} - \frac{1}{r} \Psi_{2py} (r,~ \theta,~ \phi)}{ \Psi_{2py} (r,~ \theta,~ \phi)} \text{ simplify } \rightarrow \frac{-1}{8} \nonumber$
Niels Bohr once observed that from the perspective of classical physics the stability of matter was a pure miracle. The problem, of course, is that two of the basic building blocks of matter are oppositely charged particles - the proton and the electron. Given Coulomb's Law, the troubling question is what keeps them from coalescing?
Quantum mechanics is considered by many to be an abstract and esoteric science that doesn't have much to do with everyday life. Yet it provides an explanation for atomic and molecular stability, and classical physics fails at that task. Thus, to achieve some understanding of one of the basic facts about the macro-world requires quantum mechanical concepts and tools.
The issue of atomic stability will be explored with a quantum mechanical analysis of the two simplest elements in the periodic table - hydrogen and helium. Schrödinger's equation can be solved exactly for the hydrogen atom, but approximate methods are required for the helium atom. However, in the pursuit of an explanation for atomic stability it is instructive to use an approximate method to study the hydrogen atom. The approximate method of choice for many quantum mechanical problems is the variation method.
Variational Treatment for the Hydrogen Atom
The Hamiltonian energy operator for the hydrogen atom in atomic units is,
$\hat{H} = - \frac{1}{2} \nabla^2 - \frac{1}{r} \nonumber$
Using a scaled hydrogenic wave function ($\alpha = 1$ for the exact solution),
$\Psi (r) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha r) \nonumber$
in a variational calculation yields
$\langle E_H \rangle = \langle \Psi | \hat{H} | \Psi \rangle = \langle T_E \rangle + \langle V_{NE} \rangle \nonumber$
where
$\langle T_E \rangle = \frac{ \alpha^3}{ \pi} \int_0^{ \infty} \text{exp} ( - \alpha r) \left( - \frac{1}{2} \nabla^2 \right) \text{exp} ( - \alpha r) 4 \pi r^2 dr = \frac{ \alpha^2}{2} \nonumber$
and
$\langle V_{NE} \rangle = \frac{ \alpha^3}{ \pi} \int_0 ^{ \infty} \text{exp} ( - \alpha r) \left( \frac{ -Z}{r} \right) \text{exp} ( - \alpha r) 4 \pi r^2 dr = - \alpha Z \nonumber$
Z is the nuclear charge and the scale factor $\alpha$ is the variational parameter in this calculation. It is easy to see that it is a decay constant which controls how quickly the wave function goes to zero as a function of r, the distance from the nucleus. Therefore, it is also intimately related to the average distance of the electron from the nucleus. This is easily seen by calculating the expectation value for the distance of the electron from the nucleus.
$\langle R \rangle = \frac{ \alpha^3}{ \pi} \int_0 ^{ \infty} \text{exp} ( - \alpha r) r \text{ exp}( - \alpha r) 4 \pi r^2 dr = \frac{3}{2 \alpha} \nonumber$
<R> is inversely proportional to $\alpha$, and vice versa. The larger the value of $\alpha$, the closer the electron is on average to the nucleus. Using this relationship <R> can be made the variational parameter, as is shown in the equation below.
$E_H = \langle T_E \rangle + \langle V_{NE} \rangle = \frac{ \alpha^2}{2} - \alpha = \frac{9}{8R^2} - \frac{3}{2R} \nonumber$
The next step in elucidating the nature of atomic stability is to plot EH vs R.
Imagine a hydrogen atom forming as an electron approaches a proton from a great distance. (Notice that we assume that the more massive proton is stationary.) The electron is drawn toward the proton by the Coulombic attractive interaction between the two opposite charges and the potential energy decreases like $-R^{-1}$. The attractive potential energy interaction confines the electron to a smaller volume and, according to de Broglie's wave hypothesis for matter, the kinetic energy increases like $V^{-2/3}$ or, as shown above, like $R^{-2}$. Thus the kinetic energy goes to positive infinity faster than the potential energy goes to negative infinity and a total energy minimum (ground state) is achieved at $\langle R \rangle = 3/2$ (corresponding to $\alpha = 1$), as shown in the figure above. The electron does not collapse into (coalesce with) the proton under the influence of the attractive Coulombic interaction because of the repulsive effect of the confinement energy - that is, kinetic energy. Kinetic energy, therefore, is at the heart of understanding atomic stability. But it is important to understand that this is quantum mechanical kinetic energy, or confinement energy. It is remarkably different than classical kinetic energy.
Variational Treatment for the Helium Atom
We now move on to the next most complicated element, helium. Helium has a nucleus of charge +2 and two valence electrons. The Hamiltonian energy operator in atomic units is given by,
$\hat{H} = - \frac{1}{2} \nabla_1^2 - \frac{1}{2} \nabla_2^2 - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}} \nonumber$
There are five terms in the energy operator, but only one new type, the electron-electron potential energy term. When this interaction is calculated using the variational wave function given above we have,
$\langle V_{EE} \rangle = \frac{ \alpha^6}{ \pi^2} \int_0^{ \infty} \int_0 ^{ \infty} \text{exp} ( - \alpha r_1 ) \text{exp} ( - \alpha r_2) \left( \frac{1}{ r_{12}} \right) \text{exp} ( - \alpha r_1) \text{exp} ( - \alpha r_2) 4 \pi r_1^2 dr_1 4 \pi r_2^2 dr_2 = \frac{5 \alpha}{8} \nonumber$
The total energy of the helium atom can now be written as shown below, because the same relationship applies between <R> and α [α = 3/(2<R>)].
$E_{He} = 2 \langle T_E \rangle + 2 \langle V_{NE} \rangle + \langle V_{EE} \rangle = \alpha^2 - 4 \alpha + \frac{5 \alpha}{8} = \frac{9}{4R^2} - \frac{6}{R} + \frac{15}{16R} \nonumber$
Graphing EHe vs. <R> reveals again that kinetic energy (confinement energy) is the key to atomic stability. Several things should be noted in the graph shown below. First, that when the total energy minimum is achieved VNE and V (VNE + VEE) are still in a steep decline. This is a strong indication that VEE is really a rather feeble contribution to the total energy, increasing significantly only long after the energy minimum has been attained. Thus electron-electron repulsion cannot be used to explain atomic stability. The graph below clearly shows that on the basis of classical electrostatic interactions, the electron should collapse into the nucleus. This is prevented by the kinetic energy term for the same reasons as were given for the hydrogen atom.
Unfortunately chemists tend to give too much significance to electron-electron repulsion (see VSEPR for example) when it is really the least important term in the Hamiltonian energy operator. And to make matters worse they completely ignore kinetic energy as an important factor in atomic and molecular phenomena. It is becoming increasingly clear in the current literature that many well-established explanations for chemical phenomena based exclusively on electrostatic arguments are in need of critical re-evaluation.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.16%3A_Atomic_Stability.txt
|
Niels Bohr once observed that from the perspective of classical physics the stability of matter was a pure miracle. The problem, of course, is that two of the basic building blocks of matter are oppositely charged - the electron and the proton. Given Coulomb's Law, the troubling question is what keeps them from coalescing.
Quantum mechanics is considered by many to be an abstract and esoteric science that doesn't have much to do with everyday life. Yet it provides an explanation for atomic and molecular stability, and classical physics fails at that task. Thus, to achieve some understanding of one of the basic facts about the macro-world requires quantum mechanical concepts and tools.
The issue of atomic stability will be explored with a quantum mechanical analysis of the two simplest elements in the periodic table - hydrogen and helium. They are also the two most abundant elements in the universe. Schrödinger's equation can be solved exactly for the hydrogen atom, but approximate methods are required for the helium atom. In this study, the variational method will be used for both hydrogen and helium.
Variational Method for the Hydrogen Atom
Normalized trial wave function:
$\Psi ( \alpha,~r) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha r) \nonumber$

α is a scale factor that controls the size of the wave function.

Integral:

$\int_0^{ \infty} \blacksquare 4 \pi r^2 dr \nonumber$
Kinetic energy operator:
$- \frac{1}{2r} \frac{d^2}{dr^2} ( r \blacksquare ) \nonumber$
Potential energy operator:
$\frac{-Z}{r} \blacksquare \nonumber$
Demonstrate that the trial function is normalized:
$\int_0^{ \infty} \Psi ( \alpha,~r)^2 4 \pi r^2 \text{dr assume, } \alpha > 0 \rightarrow 1 \nonumber$
Plot trial wave function for several values of α, the variational parameter:
Calculate the average value of the kinetic energy of the electron:
$T ( \alpha ) = \int_0^{ \infty} \Psi ( \alpha,~ r) - \frac{1}{2r} \frac{d^2}{dr^2} (r \Psi ( \alpha,~r)) 4 \pi r^2 \text{ dr assume, } \alpha > 0 \rightarrow \frac{ \alpha^2}{2} \nonumber$
Calculate the average value of the potential energy of the electron:
$V( \alpha,~ Z) = \int_0 ^{ \infty} \Psi ( \alpha,~r) - \frac{Z}{r} \Psi ( \alpha,~r) 4 \pi r^2 \text{ dr assume, } \alpha > 0 \rightarrow - Z \alpha \nonumber$
Calculate R, the average distance of the electron from the nucleus:
$R ( \alpha ) = \int_0 ^{ \infty} \Psi ( \alpha,~r) r \Psi ( \alpha,~r) 4 \pi r^2 \text{ dr assume, } \alpha > 0 \rightarrow \frac{3}{2 \alpha} \nonumber$
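The three integrals above can be checked symbolically. The following sympy sketch (an illustration in Python rather than Mathcad) evaluates them with the same trial function and operators and reproduces α²/2, -Zα and 3/(2α).

```python
import sympy as sp

r, alpha, Z = sp.symbols('r alpha Z', positive=True)
psi = sp.sqrt(alpha**3/sp.pi) * sp.exp(-alpha*r)     # normalized trial wave function

T = sp.integrate(psi * (-1/(2*r)) * sp.diff(r*psi, r, 2) * 4*sp.pi*r**2, (r, 0, sp.oo))
V = sp.integrate(psi * (-Z/r) * psi * 4*sp.pi*r**2, (r, 0, sp.oo))
Rav = sp.integrate(psi * r * psi * 4*sp.pi*r**2, (r, 0, sp.oo))
print(sp.simplify(T), sp.simplify(V), sp.simplify(Rav))   # alpha**2/2  -Z*alpha  3/(2*alpha)
```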
From this we find that:
$E ( \alpha ) = T ( \alpha ) + V( \alpha,~Z) = \frac{ \alpha^2}{2} - \alpha Z \nonumber$
But from above we knew:
$\alpha = \frac{3}{2R} \nonumber$
This allows us to express the total energy and its components in terms of R the average distance of the electron from the nucleus.
Total energy:
$\begin{array}{c|c} \text{E(R, Z)} = \frac{ \alpha^2}{2} - \alpha Z &_{ \text{expand}}^{ \text{substitute, } \alpha = \frac{3}{2R}} \rightarrow \frac{9}{8R^2} - \frac{3Z}{2R} \end{array} \nonumber$
Electron kinetic energy:
$T_E (R) = \frac{9}{8R^2} \nonumber$
Electron-nucleus potential energy:
$V_{NE} (R,~Z) = \frac{-3Z}{2R} \nonumber$
E, TE and VNE are graphed versus R for the hydrogen atom (Z = 1): R = 0, .01 ... 8
The hydrogen atom ground-state energy is determined by minimizing its energy with respect to R:
$\begin{matrix} \text{R = R} & \text{R} = \frac{d}{dR} \text{E(R, 1)} = \text{0 solve, R} \rightarrow \frac{3}{2} & \text{R} = 1.5 & \text{E(R, 1)} = -0.5 \end{matrix} \nonumber$
Imagine a hydrogen atom forming as an electron approaches a proton from a great distance. The electron is drawn toward the proton by the Coulombic attractive interaction between the two opposite charges and the potential energy decreases like $-1/R$. The attractive potential energy interaction confines the electron to a smaller volume and the kinetic energy increases as $1/R^2$. Thus the kinetic energy goes to positive infinity faster than the potential energy goes to negative infinity and a total energy minimum (ground state) is achieved at R = 1.5, as shown in the figure above.
The electron does not collapse into (coalesce with) the proton under the influence of the attractive Coulombic interaction because of the repulsive effect of the confinement energy - that is, kinetic energy. Kinetic energy, therefore, is at the heart of understanding atomic stability.
Variational Method for the Helium Atom
Now we will proceed to the He atom. There are five contributions to the total electronic energy: kinetic energy of each electron, the interaction of each electron with the nucleus, and the electron-electron interaction.The only new term is the last, electron-electron potential energy. It is evaluated as follows for two electrons in 1s orbitals.
The electrostatic potential at r due to electron 1 is:
$\begin{array}{c|c} \Phi ( \alpha,~r) = \frac{1}{r} \int_0^r \Psi ( \alpha,~ x)^2 4 \pi x^2 dx + \int_r^{ \infty} \frac{ \Psi ( \alpha,~x)^2 4 \pi x^2}{x} dx & _{ \text{simplify}} ^{ \text{assume, } \alpha > 0} \rightarrow - \frac{e^{-2 \alpha r} + \alpha r e^{-2 \alpha r} - 1}{r} \end{array} \nonumber$
The electrostatic interaction between the two electrons is:
$\begin{array}{c|c} V_{EE} = \int_0 ^{ \infty} \Phi ( \alpha,~r) \Psi ( \alpha,~r)^2 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \textcolor{red}{ \alpha} >0} \rightarrow \frac{5 \alpha}{8} \end{array} \nonumber$
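The same two-step calculation can be verified with sympy; the sketch below (an illustration, not the original worksheet) builds Φ(α, r) from the two pieces of the charge distribution and then integrates it against the second electron's density to recover 5α/8.

```python
import sympy as sp

r, x, alpha = sp.symbols('r x alpha', positive=True)
rho = alpha**3/sp.pi * sp.exp(-2*alpha*x)            # |psi(alpha, x)|^2 for electron 1

# electrostatic potential at r: charge inside r acts from the center, charge outside from its own radius
Phi = (sp.integrate(rho*4*sp.pi*x**2, (x, 0, r))/r
       + sp.integrate(rho*4*sp.pi*x**2/x, (x, r, sp.oo)))
Vee = sp.integrate(Phi * (alpha**3/sp.pi*sp.exp(-2*alpha*r)) * 4*sp.pi*r**2, (r, 0, sp.oo))
print(sp.simplify(Vee))                              # 5*alpha/8
```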
In terms of R, the electron-electron potential energy is:
$V_{EE} (R) = \frac{15}{16R} \nonumber$
$\begin{matrix} Z = 2 & E_{He} (R) = 2T_E (R) + 2 V_{NE} (R,~Z) + V_{EE} (R) & V(R) = 2 V_{NE} (R,~Z) + V_{EE} (R) \end{matrix} \nonumber$
The various contributions to the total electronic energy of the helium atom are plotted below. R = 0, .01 .. 4
The helium atom ground-state energy is determined by minimizing its energy with respect to R:
$\begin{matrix} \text{R = R} & \text{R} = \frac{d}{dR} E_{He} (R) = \text{0 solve, R} \rightarrow \frac{8}{9} & \text{R} = 0.889 & E_{He} (R) = -2.848 \end{matrix} \nonumber$
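The minimization can be confirmed independently; a minimal sympy sketch (assuming the expression EHe(R) = 9/(4R²) - 6/R + 15/(16R) derived above) gives the same R = 8/9 and E ≈ -2.848.

```python
import sympy as sp

R = sp.symbols('R', positive=True)
E_He = sp.Rational(9, 4)/R**2 - 6/R + sp.Rational(15, 16)/R   # helium variational energy vs <R>
R_min = sp.solve(sp.diff(E_He, R), R)[0]
print(R_min, float(E_He.subs(R, R_min)))                      # 8/9  -2.848
```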
Graphing EHe vs. <R> reveals again that kinetic energy (confinement energy) is the key to atomic stability. Several things should be noted in the graph shown above. First, when the total energy minimum is achieved VNE and V (VNE + VEE) are still in a steep decline. This is a strong indication that VEE is really a rather feeble contribution to the total energy, increasing significantly only long after the energy minimum has been attained. Thus electron-electron repulsion cannot be used to explain atomic stability. The graph above clearly shows that on the basis of classical electrostatic interactions, the electron should collapse into the nucleus. This is prevented by the kinetic energy term for the same reasons as were given for the hydrogen atom.
Unfortunately chemists give too much significance to electron-electron repulsion (VSEPR for example) when it is really the least important term in the Hamiltonian energy operator. And to make matters worse they completely ignore kinetic energy as an important factor in atomic and molecular phenomena. It is becoming increasingly clear in the current literature that many traditional explanations for chemical phenomena based exclusively on electrostatic arguments are in need of critical re-examination.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.17%3A_Atomic_Stability_-_Mathcad_Version.txt
|
Some quantum textbooks invoke the concept of effective potential energy when introducing the quantum mechanical treatment of the hydrogen atom to undergraduate audiences. I will show that this concept is deeply flawed and leads to an incorrect analysis of the energy contributions to the various electronic states of the hydrogen atom. We begin with the one‐dimensional Schrödinger radial equation (French and Taylor, An Introduction to Quantum Physics, pages 519‐523) expressed in atomic units (e = me = h/2π = 4πεo = 1).
$\begin{matrix} H \Psi (r) = (T + V) \Psi (r) = E \Psi (r) & \text{where} & H = - \frac{1}{2} \frac{d^2}{dr^2} \blacksquare + \frac{L(L+1)}{2r^2} \blacksquare - \frac{1}{r} \blacksquare \end{matrix} \nonumber$
Solving Schrödingerʹs equation yields the eigenstates and their eigenvalues, along with the ability to calculate the expectation value for any observable for which the appropriate operator is known. The allowed energy values are $E(n) = -0.5n^{-2}$, where the quantum number, n, can take integer values beginning with unity. The quantized electronic energies for the one‐electron hydrogen atom do not depend on the angular momentum quantum number, L.
The problem I wish to deal with involves the middle term in the Hamiltonian given above. Some authors call it the centrifugal potential, combine it with the third term which is the coulombic potential energy, and call the combination the effective potential energy. In support of this maneuver they invoke the idea of centrifugal force, which unfortunately for them is only an apparent force (actually itʹs just plain fictitious). Centrifugal means moving away from the center and so one might assume (and some authors claim) that the greater the centrifugal effect (ʺforceʺ), i.e. the larger the L quantum number, the farther from the nucleus the electron should be for any given value of the principal quantum number n.
This, unfortunately, simply isnʹt the case. The radial expectation values for the hydrogen atom are given below, along with a table that shows that the larger the angular momentum quantum number, the closer on average the electron is to the nucleus for any given n quantum number. This effect is also shown graphically in the Appendix.
$\begin{matrix} r(n,~L) = \frac{3n^2-L(L+1)}{2} & \begin{pmatrix} \text{L} & 0 & 1 & 2 & 3 \ n = 1 & 1.5 & ' & ' & ' \ n = 2 & 6 & 5 & ' & ' \ n = 3 & 13.5 & 12.5 & 10.5 & ' \ n = 4 & 24 & 23 & 21 & 18 \end{pmatrix} \end{matrix} \nonumber$
This is not the only difficulty with the effective potential energy approach. It also violates the virial theorem, which we all know is sacrosanct when it comes to quantum mechanical analyses. In other words, if your analysis or calculation violates the virial theorem discard it, because itʹs wrong! The virial theorem for the hydrogen atom is <E>= ‐<T> = <V>/2. The expectation values for the centrifugal and coulombic contributions to the total energy are provided below. Note that I have used ʺVʺ for the centrifugal term, assuming that it really is a potential energy term. If this assignment is valid the virial theorem will be satisfied.
$\begin{matrix} V_{centrifugal} (n,~L) = \frac{L(L+1)}{2n^3 \left( L + \frac{1}{2} \right)} & V_{coulomb} (n) = - \frac{1}{n^2} \end{matrix} \nonumber$
Next the virial theorem is used as a check on this assignment using the version <E>/<V> = 0.5. What we see below is that the virial theorem is violated for all states for which L > 0. This strongly suggests that the centrifugal term is not a potential energy contribution, but a kinetic energy term.
$\begin{matrix} 1s & n = 1 & L = 0 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.5 \ 2s & n = 2 & L = 0 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.5 \ 2p & n = 2 & L = 1 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.75 \ 3s & n = 3 & L = 0 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.5 \ 3p & n = 3 & L = 1 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.643 \ 3d & n = 3 & L = 2 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.833 \end{matrix} \nonumber$
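These ratios are easy to reproduce; the short Python loop below (an illustration using the formulas quoted above) shows that only the L = 0 states satisfy the virial ratio of 0.5 when the centrifugal term is counted as potential energy.

```python
# Virial check: E/(V_centrifugal + V_coulomb) using the hydrogen-atom expectation values above
def E(n): return -0.5/n**2
def Vcent(n, L): return L*(L + 1)/(2*n**3*(L + 0.5))
def Vcoul(n): return -1.0/n**2

for label, n, L in [("1s",1,0), ("2s",2,0), ("2p",2,1), ("3s",3,0), ("3p",3,1), ("3d",3,2)]:
    print(label, round(E(n)/(Vcent(n, L) + Vcoul(n)), 3))
# only the s states give 0.5; dropping Vcent restores E/Vcoul = 0.5 for every state
```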
This suspicion is confirmed by recalculating the above results after taking out the centrifugal ʺpotential energyʺ term.
$\begin{matrix} 1s & n = 1 & L = 0 & \frac{E(n)}{V_{coulomb}(n)} = 0.5 \ 2s & n = 2 & L = 0 & \frac{E(n)}{V_{coulomb}(n)} = 0.5 & 2p & n = 2 & L = 1 & \frac{E(n)}{V_{coulomb}(n)} = 0.5 \ 3s & n = 3 & L = 0 & \frac{E(n)}{V_{coulomb}(n)} = 0.5 & 3p & n = 3 & L = 1 & \frac{E(n)}{V_{coulomb}(n)} = 0.5 \ 3d & n = 3 & L = 2 & \frac{E(n)}{V_{coulomb}(n)} = 0.5 \end{matrix} \nonumber$
While this is decisive, we will also show that the virial theorem (in the form <T>/<V> = ‐1/2) is satisfied with expectation value calculations using the correct kinetic energy operator.
$\begin{matrix} T = - \frac{1}{2} \frac{d^2}{dr^2} \blacksquare + \frac{L(L+1)}{2r^2} \blacksquare \end{matrix} \nonumber$
This will be demonstrated using the 3s, 3p and 3d eigenfunctions. The n = 1, 2 and 4 eigenfunctions are provided in the Appendix.
$\begin{matrix} \Psi_{3s} (r) = \frac{2}{ \sqrt{27}} \left( 1 - \frac{2}{3}r + \frac{2}{27}r^2 \right) \text{r exp} \left( \frac{-r}{3} \right) & \Psi_{3p} (r) = \frac{8}{27 \sqrt{6}} \left( 1 - \frac{r}{6} \right) \text{r}^2 \text{ exp} \left( \frac{-r}{3} \right) & \Psi_{3d} (r) = \frac{4}{81 \sqrt{30}} \text{r}^3 \text{ exp} \left( \frac{-r}{3} \right) \end{matrix} \nonumber$
$\begin{matrix} 3s & L = 0 & \frac{ \int_0^{ \infty} \Psi_{3s} (r) \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{3s}(r) dr + \int_0^{ \infty} \frac{L(L+1)}{2r^2} \Psi_{3s} (r)^2 dr}{ \int_0^{ \infty} \frac{-1}{r} \Psi_{3s} (r)^2 dr} = -0.5 \ 3p & L = 1 & \frac{ \int_0^{ \infty} \Psi_{3p} (r) \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{3p}(r) dr + \int_0^{ \infty} \frac{L(L+1)}{2r^2} \Psi_{3p} (r)^2 dr}{ \int_0^{ \infty} \frac{-1}{r} \Psi_{3p} (r)^2 dr} = -0.5 \ 3d & L = 2 & \frac{ \int_0^{ \infty} \Psi_{3d} (r) \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{3d}(r) dr + \int_0^{ \infty} \frac{L(L+1)}{2r^2} \Psi_{3d} (r)^2 dr}{ \int_0^{ \infty} \frac{-1}{r} \Psi_{3d} (r)^2 dr} = -0.5 \end{matrix} \nonumber$
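Because the 3d entry is the one most easily miscopied, a sympy cross-check is worth showing. The sketch below (assuming the u = rR form of the radial functions used in this section) confirms that Ψ3d is an eigenfunction with E = -1/18 and that the kinetic-to-potential ratio is -1/2.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
L = 2
u3d = 4/(81*sp.sqrt(30)) * r**3 * sp.exp(-r/3)       # u = r*R_3d, normalized so that the integral of u^2 dr is 1

Hu = -sp.Rational(1, 2)*sp.diff(u3d, r, 2) + L*(L + 1)/(2*r**2)*u3d - u3d/r
print(sp.simplify(Hu/u3d))                            # -1/18, the n = 3 eigenvalue

T = (sp.integrate(u3d*(-sp.Rational(1, 2))*sp.diff(u3d, r, 2), (r, 0, sp.oo))
     + sp.integrate(L*(L + 1)/(2*r**2)*u3d**2, (r, 0, sp.oo)))
V = sp.integrate(-u3d**2/r, (r, 0, sp.oo))
print(sp.simplify(T/V))                               # -1/2, the virial ratio
```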
In summary, the ʺcentrifugal effectʺ and the concept of ʺeffective potential energyʺ are good examples of the danger in thinking classically about a quantum mechanical system. Furthermore, itʹs bad pedagogy to create fictitious forces and to mislabel energy contributions in a misguided effort to provide conceptual simplicity. Other concepts in this category, in my opinion, are screening and effective nuclear charge.
Appendix
The following calculations demonstrate that wave functions used are eigenfunctions of the energy operator.
$\begin{matrix} 1s & L = 0 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{1s} (r) + \frac{L(L+1)}{2r^2} \Psi_{1s} (r) - \frac{1}{r} \Psi_{1s} (r) = E_{1s} \Psi_{1s} (r) \text{solve, } E_{1s} \rightarrow - \frac{1}{2} \ 2s & L = 0 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{2s} (r) + \frac{L(L+1)}{2r^2} \Psi_{2s} (r) - \frac{1}{r} \Psi_{2s} (r) = E_{2s} \Psi_{2s} (r) \text{solve, } E_{2s} \rightarrow - \frac{1}{8} \ 2p & L = 1 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{2p} (r) + \frac{L(L+1)}{2r^2} \Psi_{2p} (r) - \frac{1}{r} \Psi_{2p} (r) = E_{2p} \Psi_{2p} (r) \text{solve, } E_{2p} \rightarrow - \frac{1}{8} \ 3s & L = 0 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{3s} (r) + \frac{L(L+1)}{2r^2} \Psi_{3s} (r) - \frac{1}{r} \Psi_{3s} (r) = E_{3s} \Psi_{3s} (r) \text{solve, } E_{3s} \rightarrow - \frac{1}{18} \ 3p & L = 1 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{3p} (r) + \frac{L(L+1)}{2r^2} \Psi_{3p} (r) - \frac{1}{r} \Psi_{3p} (r) = E_{3p} \Psi_{3p} (r) \text{solve, } E_{3p} \rightarrow - \frac{1}{18} \ 3d & L = 2 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{3d} (r) + \frac{L(L+1)}{2r^2} \Psi_{3d} (r) - \frac{1}{r} \Psi_{3d} (r) = E_{3d} \Psi_{3d} (r) \text{solve, } E_{3d} \rightarrow - \frac{1}{18} \ 4s & L = 0 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{4s} (r) + \frac{L(L+1)}{2r^2} \Psi_{4s} (r) - \frac{1}{r} \Psi_{4s} (r) = E_{4s} \Psi_{4s} (r) \text{solve, } E_{4s} \rightarrow - \frac{1}{32} \ 4p & L = 1 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{4p} (r) + \frac{L(L+1)}{2r^2} \Psi_{4p} (r) - \frac{1}{r} \Psi_{4p} (r) = E_{4p} \Psi_{4p} (r) \text{solve, } E_{4p} \rightarrow - \frac{1}{32} \ 4d & L = 2 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{4d} (r) + \frac{L(L+1)}{2r^2} \Psi_{4d} (r) - \frac{1}{r} \Psi_{4d} (r) = E_{4d} \Psi_{4d} (r) \text{solve, } E_{4d} \rightarrow - \frac{1}{32} \ 4f & L = 3 & \frac{-1}{2} \frac{d^2}{dr^2} \Psi_{4f} (r) + \frac{L(L+1)}{2r^2} \Psi_{4f} (r) - \frac{1}{r} \Psi_{4f} (r) = E_{4f} \Psi_{4f} (r) \text{solve, } E_{4f} \rightarrow - \frac{1}{32} \end{matrix} \nonumber$
The computational results obtained using the three‐dimensional Schrödinger equation,
$\frac{-1}{2r} \frac{d^2}{dr^2} (r \Psi (r,~ \theta,~ \phi )) - \frac{1}{2r^2 \sin ( \theta )} \frac{d}{d \theta} \left( \sin ( \theta ) \frac{d}{d \theta} \Psi (r,~ \theta,~ \phi ) \right) - \frac{1}{2 r^2 \sin ( \theta )^2} \frac{d^2}{d \phi^2} \Psi (r,~ \theta,~ \phi ) - \frac{1}{r} \Psi (r,~ \theta,~ \phi) = E \Psi (r,~ \theta,~ \phi ) \nonumber$
and its eigenfunctions are identical to those reported in this critique using the one‐dimensional radial equation and its eigenfunctions.
Some authors call $\frac{L(L+1)}{2r^2}$ a centrifugal barrier because for L > 0 it prevents the electron from being very close to the nucleus, without also noting that the greater the value of L the closer the electron is on average to the nucleus, as the equation $r(n,~L) = \frac{3n^2 - L(L+1)}{2}$ clearly shows. The radial distribution functions for n = 2, 3 and 4 display this dual effect: as L increases for a given n, the average distance from the nucleus decreases, as does the electron density in the region nearest the nucleus. The maximum value of the radial distribution function for the highest L state is indicated on each graph.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.18%3A_110._Critique_of_the_Centrifugal_Effect_in_the_Hydrogen_Atom.txt
|
On page 174 of Quantum Chemistry & Spectroscopy, 3rd ed. Thomas Engel derives equation 9.5 which is presented in equivalent form in atomic units (e = me = h/2π = 4πεo = 1) here.
$- \frac{1}{2r^2} \frac{d}{dr} \left( r^2 \frac{d}{dr} R(r) \right) + \left[ \frac{L(L+1)}{2r^2} - \frac{1}{r} \right] R(r) = E R(r) \nonumber$
At the bottom of the page he writes,
Note that the second term (in brackets) on the left-hand side of Equation 9.5 can be viewed as an effective potential, Veff(r). It is made up of the centrifugal potential, which varies as $+1/r^2$, and the Coulomb potential, which varies as $-1/r$.
$V_{eff} (r) = \frac{L(L+1)}{2r^2} - \frac{1}{r} \nonumber$
Engel notes that because of its positive mathematical sign, the centrifugal potential is repulsive, and goes on to say,
The net result of this repulsive centrifugal potential is to force the electrons in orbitals with L > 0 ( p, d, and f electrons) on average farther from the nucleus than s electrons for which L = 0.
This statement is contradicted by the radial distribution functions shown in Figure 9.10 on page 187, which clearly show the opposite effect. As L increases the electron is on average closer to the nucleus. It is further refuted by calculations of the average value of the electron position from the nucleus as a function of the n and L quantum numbers. For a given n the larger L is the closer on average the electron is to the nucleus. In other words, these calculations support the graphical representation in Figure 9.10.
$\begin{matrix} r(n,~L) = \frac{3n^2-L(L+1)}{2} & \begin{pmatrix} \text{L} & 0 & 1 & 2 & 3 \ n = 1 & 1.5 & ' & ' & ' \ n = 2 & 6 & 5 & ' & ' \ n = 3 & 13.5 & 12.5 & 10.5 & ' \ n = 4 & 24 & 23 & 21 & 18 \end{pmatrix} \end{matrix} \nonumber$
On page 180 in Example Problem 9.2, Engel introduces the virial theorem. For systems with a Coulombic potential energy, such as the hydrogen atom, it is <V>= 2<E> = ‐2<T>. We will work with the version <E>/<V> = 0.5. The values of the energy, the so called centrifugal potential energy and the Coulombic potential energy are as shown below as a function of the appropriate quantum numbers.
$\begin{matrix} E(n) = \frac{-1}{2n^2} & V_{centrifugal} (n,~L) = \frac{L(L+1)}{2n^3 \left( L + \frac{1}{2} \right)} & V_{coulomb} (n) = - \frac{1}{n^2} \end{matrix} \nonumber$
The calculations below show that the virial theorem is violated for any state for which L > 0.
$\begin{matrix} 1s & n = 1 & L = 0 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.5 \ 2s & n = 2 & L = 0 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.5 \ 2p & n = 2 & L = 1 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.75 \ 3s & n = 3 & L = 0 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.5 \ 3p & n = 3 & L = 1 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.643 \ 3d & n = 3 & L = 2 & \frac{E(n)}{V_{centrifugal}(n,~L) + V_{coulomb}(n)} = 0.833 \end{matrix} \nonumber$
These calculations are now repeated eliminating the centrifugal term, showing that the virial theorem is satisfied and supporting the claim that the ʺcentrifugal potentialʺ is actually a kinetic energy term.
$\begin{matrix} 1s & n = 1 & L = 0 & \frac{E(n)}{V_{coulomb} (n)} = 0.5 & 2s & n = 2 & L = 0 & \frac{E(n)}{V_{coulomb} (n)} = 0.5 \ 2p & n = 2 & L = 1 & \frac{E(n)}{V_{coulomb} (n)} = 0.5 & 3s & n = 3 & L = 0 & \frac{E(n)}{V_{coulomb} (n)} = 0.5 \ 3p & n = 3 & L = 1 & \frac{E(n)}{V_{coulomb} (n)} = 0.5 & 3d & n = 3 & L = 2 & \frac{E(n)}{V_{coulomb} (n)} = 0.5 \end{matrix} \nonumber$
We finish by rewriting the equation at the top with brackets showing that the first two terms are quantum kinetic energy and that the Coulombic term is the only potential energy term.
$\left[ - \frac{1}{2r^2} \frac{d}{dr} \left( r^2 \frac{d}{dr} R(r) \right) + \frac{L(L+1)}{2r^2} R(r) \right] - \frac{1}{r} R(r) = E R(r) \nonumber$
In summary, the ʺcentrifugal potentialʺ and the concept of ʺeffective potential energyʺ are good examples of the danger in thinking classically about a quantum mechanical system. Furthermore, itʹs bad pedagogy to create fictitious forces and to mislabel energy contributions in a misguided effort to provide conceptual simplicity.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.19%3A_A_Shorter_Critique_of_the_Centrifugal_Effect_in_the_Hydrogen_Atom.txt
|
Under normal circumstances the hydrogen atom consists of a proton and an electron. However, electrons are leptons and there are two other leptons which could temporarily replace the electron in the hydrogen atom. The other leptons are the muon and the tauon, and their fundamental properties, along with those of the electron, are given in the following table.
$\begin{pmatrix} \text{Property} & e & \mu & \tau \ \frac{ \text{Mass}}{m_e} & 1 & 206.8 & 3491 \ \frac{ \text{Effective Mass}}{m_e} & 1 & 185.86 & 1203 \ \frac{ \text{Life Time}}{s} & \text{Stable} & 2.2(10)^{-6} & 3.0(10)^{-13} \end{pmatrix} \nonumber$
The purpose of this exercise is to demonstrate the importance of mass in atomic systems, and therefore also kinetic energy. Substitution of the deBroglie relation (λ = h/mv) into the classical expression for kinetic energy yields a quantum mechanical expression for kinetic energy. It is of utmost importance that in quantum mechanics, kinetic energy is inversely proportional to mass.
$T = \frac{1}{2} mv^2 = \frac{p^2}{2m} = \frac{h^2}{2m \lambda^2} \nonumber$
A more general and versatile quantum mechanical expression for kinetic energy is the differential operator shown below, where again mass appears in the denominator. An Approach to Quantum Mechanics outlines the origin of the kinetic energy operator.
Atomic units (e = me = h/2π = 4πεo = 1) will be used in the calculations that follow. Please note that μ in the equations below is the effective mass and not a symbol for the muon.
Kinetic energy operator:
$T = - \frac{1}{2 \mu r} \frac{d^2}{dr^2} ( r \blacksquare ) \nonumber$
Potential energy operator:
$V = - \frac{1}{r} \blacksquare \nonumber$
Variational trial wave function with variational parameter β:
$\Psi (r,~ \beta ) = \left( \frac{ \beta^3}{ \pi} \right)^{ \frac{1}{2}} \text{exp}( - \beta r) \nonumber$
Evaluation of the variational energy integral:
$\begin{array}{c|c} E ( \beta,~ \mu ) = \int_0^{ \infty} \Psi (r,~ \beta ) \left[ - \frac{1}{2 \mu r} \frac{d^2}{dr^2} (r \Psi (r,~ \beta )) \right] 4 \pi r^2 dr ... & _{ \text{simplify}}^{ \text{assume, } \beta > 0} \rightarrow \frac{ \beta^2}{2 \mu} - \beta \ + \int_0^{ \infty} \Psi (r,~ \beta ) - \frac{1}{r} \Psi (r,~ \beta ) 4 \pi r^2 dr \end{array} \nonumber$
Minimize the energy with respect to the variational parameter β.
$\frac{d}{d \beta} E( \beta,~ \mu ) = 0 \text{ solve, } \beta \rightarrow \mu \nonumber$
Express energy in terms of reduced mass:
$E( \beta, ~ \mu ) \text{ substitute, } \beta = \mu \rightarrow - \frac{ \mu}{2} \nonumber$
Using the virial theorem, the kinetic and potential energy contributions are:
$\begin{matrix} T = \frac{ \mu}{2} & V = - \mu \end{matrix} \nonumber$
Express the trial wave function in terms of reduced mass.
$\Psi (r,~ \beta ) \text{ substitute, } \beta = \mu \rightarrow \frac{e^{- \mu r} \sqrt{ \mu^3}}{ \sqrt{ \pi}} \nonumber$
Demonstrate the effect of mass on the radial distribution function with plots of mass equal to 0.5, 1 and 2.
Calculate the expectation value for position to show that it is consistent with the graphical representation above. The more massive the lepton the closer it is on average to the proton.
$\int_0^{ \infty} \Psi (r,~ \mu) r \Psi (r,~ \mu) 4 \pi r^2 \text{dr assume, } \mu >0 \rightarrow \frac{3}{2 \mu} \nonumber$
Summarize the calculated values for the physical properties of He, Hμ and Hτ.
$\begin{pmatrix} \text{Species} & \frac{E}{E_h} & \frac{T}{E_h} & \frac{V}{E_h} & \frac{r_{avg}}{a_o} \ H_e & \frac{-1}{2} & \frac{1}{2} & -1 & \frac{3}{2} \ H_{ \mu} & -92.93 & 92.93 & -185.86 & 8.07(10)^{-3} \ H_{ \tau} & -601.5 & 601.5 & -1203 & 1.25 (10)^{-3} \end{pmatrix} \nonumber$
Now imagine that you have a regular hydrogen atom in its ground state and the electron is suddenly, by some mechanism, replaced by a muon. Nothing has changed from an electrostatic perspective, but the changes in energy and in the average distance of the lepton from the proton are very large. The ground state energy and the average distance from the nucleus decrease by a factor of 185.86, the ratio of the effective masses of the muon and the electron.
This mass effect provides a challenge for those who think all atomic physical phenomena can be explained in terms of electrostatic potential energy effects. Of course, there is an even bigger problem for the potential energy aficionados, and that is the fundamental issue of atomic and molecular stability. Quantum mechanical kinetic energy effects are required to explain the stability of matter.
A Fourier transform of the coordinate wave function yields the corresponding momentum distribution and the opportunity to create a visualization of the uncertainty principle.
$\begin{array}{c|c} \Phi (p,~ \mu ) = \frac{1}{4 \sqrt{ \pi^3}} \int_0^{ \infty} \text{exp(-i p r)} \Psi (r,~ \mu) 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \mu > 0} \rightarrow \frac{2 \mu^{ \frac{3}{2}}}{ \pi ( \mu + p ~ i)^3} \end{array} \nonumber$
Replacing the proton with a positron, the electron's anti-particle, creates another exotic atom, positronium (Ps). In its singlet ground state electron-positron annihilation occurs in 125 ps creating two γ rays. Positronium's (μ = 1/2) spatial and momentum distributions are shown in Figures 1 and 2. A revised table including positronium is provided below.
$\begin{pmatrix} \text{Species} & \frac{E}{E_h} & \frac{T}{E_h} & \frac{V}{E_h} & \frac{r_{avg}}{a_o} \ H_e & \frac{-1}{2} & \frac{1}{2} & -1 & \frac{3}{2} \ H_{ \mu} & -92.93 & 92.93 & -185.86 & 8.07(10)^{-3} \ H_{ \tau} & -601.5 & 601.5 & -1203 & 1.25 (10)^{-3} \ Ps & - \frac{1}{4} & \frac{1}{4} & - \frac{1}{2} & 3 \end{pmatrix} \nonumber$
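The tabulated entries follow from the variational relations E = -μ/2, T = μ/2, V = -μ and <r> = 3/(2μ); the small Python loop below (an illustration, with the effective masses taken from the tables above) regenerates them.

```python
# Regenerate the table from the variational results (atomic units)
for label, mu in [("H_e", 1.0), ("H_mu", 185.86), ("H_tau", 1203.0), ("Ps", 0.5)]:
    E, T, V, r_avg = -mu/2, mu/2, -mu, 1.5/mu
    print(f"{label:6s} E = {E:9.3f}  T = {T:8.3f}  V = {V:9.2f}  <r> = {r_avg:.2e}")
```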
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.20%3A_Exploring_the_Role_of_Lepton_Mass_in_the_Hydrogen_Atom.txt
|
There are many in the chemical education community who believe that chemical bonding is simply an electrostatic phenomenon. I and several others have argued against this incorrect, simplistic view on many occasions (most of my critiques can be found in this section of my tutorials). In this tutorial the simplistic electrostatic model is shown to be inadequate by consideration of the effect of lepton mass on the equilibrium geometry (bond length) and energy of the hydrogen molecule ion. The electron has several heavy weight cousins and in this analysis we will look at the effect of replacing the electron with the muon (207 me).
The molecular orbital for the hydrogen molecule ion is formed as a linear combination of scaled hydrogenic 1s orbitals centered on the nuclei, a and b.
$\begin{matrix} \Psi = \frac{a+b}{ \sqrt{2 + 2S}} & \text{where} & a = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} \left( - \alpha r_a \right) & b = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} \left( - \alpha r_b \right) & S = \int a b d \tau \end{matrix} \nonumber$
The molecular energy operator in atomic units:
$H = \frac{-1}{2r^2} \left[ \frac{d}{dr} \left( r^2 \frac{d}{dr} \blacksquare \right) \right] - \frac{1}{r_a} - \frac{1}{r_b} + \frac{1}{R} \nonumber$
The energy integral to be minimized by the variation method:
$E = \frac{ \int (a + b) H(a + b) d \tau}{2 + 2S} \nonumber$
When this integral is evaluated the following expression for the energy is obtained.
$E ( \alpha,~ m,~R) = \frac{ - \alpha^2}{2m} + \frac{ \frac{ \alpha^2}{m} - \alpha - \frac{1}{R} + \frac{1}{R} (1 + \alpha R) \text{exp} ( -2 \alpha R) + \alpha \left( \frac{ \alpha}{m} - 2 \right) (1 + \alpha R) \text{exp} ( - \alpha R)}{1 + \text{exp} ( - \alpha R) \left( 1 + \alpha R + \frac{ \alpha^2 R^2}{3} \right)} + \frac{1}{R} \nonumber$
Minimization of the energy of the hydrogen molecule ion for the electron follows. There are two variational parameters, the orbital scale factor and internuclear distance.
Electron mass: m = 1
Seed values for the variational parameter and internuclear separation:
$\begin{matrix} \alpha = 1 & R = .1 \end{matrix} \nonumber$
$\begin{matrix} \text{Given} & \frac{d}{d \alpha} E ( \alpha,~m,~R) = 0 & \frac{d}{dR} E ( \alpha,~m,~R) = 0 & \begin{pmatrix} \alpha \ R \end{pmatrix} = \text{Find} ( \alpha,~R) & \begin{pmatrix} \alpha \ R \end{pmatrix} = \begin{pmatrix} 1.238 \ 2.0033 \end{pmatrix} & E ( \alpha,~m,~R) = -0.5865 \end{matrix} \nonumber$
This result is well-known and can be found in any comprehensive quantum chemistry text. Next we replace the electron with the more massive muon and recalculate the ground-state energy.
Muon mass: m = 207
Seed values for the variational parameter and internuclear separation:
$\begin{matrix} \alpha = 200 & R = .012 \end{matrix} \nonumber$
$\begin{matrix} \text{Given} & \frac{d}{d \alpha} E ( \alpha,~m,~R) = 0 & \frac{d}{dR} E ( \alpha,~m,~R) = 0 & \begin{pmatrix} \alpha \ R \end{pmatrix} = \text{Find} ( \alpha,~R) & \begin{pmatrix} \alpha \ R \end{pmatrix} = \begin{pmatrix} 256.2721 \ 0.0097 \end{pmatrix} & E ( \alpha,~m,~R) = -121.4068 \end{matrix} \nonumber$
The results of the calculations are summarized in the following table.
$\begin{pmatrix} \text{Lepton} & \frac{E}{E_h} & \frac{R}{a_o} \ \text{Electron} & -0.5865 & 2.0033 \ \text{Muon} & -121.41 & 0.0097 \end{pmatrix} \nonumber$
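The two rows of the table can be reproduced numerically from the energy expression E(α, m, R) given above. The following scipy sketch (an illustration in Python; the original calculation was done in Mathcad with Given/Find) minimizes the same expression over α and R for m = 1 and m = 207.

```python
import numpy as np
from scipy.optimize import minimize

def E(alpha, m, R):
    # LCAO energy expression for the one-lepton hydrogen molecule ion (atomic units)
    S = np.exp(-alpha*R)*(1 + alpha*R + (alpha*R)**2/3)          # overlap integral
    num = (alpha**2/m - alpha - 1/R + (1/R)*(1 + alpha*R)*np.exp(-2*alpha*R)
           + alpha*(alpha/m - 2)*(1 + alpha*R)*np.exp(-alpha*R))
    return -alpha**2/(2*m) + num/(1 + S) + 1/R

for m, guess in [(1.0, (1.0, 2.0)), (207.0, (200.0, 0.012))]:
    res = minimize(lambda x: E(x[0], m, x[1]), guess, method="Nelder-Mead")
    print(f"m = {m:5.0f}: alpha = {res.x[0]:9.4f}  R = {res.x[1]:.4f}  E = {res.fun:.4f}")
# expected roughly: m = 1 -> 1.238, 2.0033, -0.5865;  m = 207 -> 256.27, 0.0097, -121.41
```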
Now imagine that you have a regular hydrogen molecule ion in its ground state and the electron is suddenly, by some mechanism, replaced by a muon. Nothing has changed from an electrostatic perspective, but the changes in energy and internuclear distance (bond length) of the molecule are very large, as shown in the table. For example, in the muonic version of the molecule ion the bond length decreases sharply, bringing the nuclei about 207 times closer than they are in the ordinary (electron) molecule ion.
This mass effect provides a challenge for those who think the chemical bond can be explained in terms of electrostatic potential energy effects. The mass change is important because quantum mechanical kinetic energy is inversely proportional to mass. By comparison, classical kinetic energy is directly proportional to mass.*
Of course, there is an even bigger problem for the potential energy aficionados, and that is the fundamental issue of atomic and molecular stability. Quantum mechanical kinetic energy is required to explain the stability of matter and the physical nature of the chemical bond.
* The mass effect in the harmonic oscillator (kinetic isotope effect) is also a quantum kinetic energy phenomenon. See http://www.users.csbsju.edu/~frioux/sho/Uncertainty-SHO.pdf for calculations on the effect of mass in the harmonic oscillator.
2.22: The Hydrogen Atom with Finite Sized Nucleus
This exercise explores the impact of nuclear size on the ground state energy of the hydrogen atom's electron. The traditional approach assumes that the proton is a dimensionless point charge, which is a very good approximation for the hydrogen atom. However, for heavy atoms with many protons and neutrons, the finite size of the nucleus has to be taken into consideration when the goal is exact results. If the proton has uniform charge density and is given a finite radius, the potential energy of the electron is as given below.
Nuclear radius: Rn = 0.1
Potential energy:
$V(r) = \text{if} \left[ r \leq \text{Rn},~ \frac{-1}{ \text{Rn}} \left( 1.5 - \frac{r^2}{2 Rn^2} \right),~ \frac{-1}{r} \right] \nonumber$
Numerical integration of Schrödinger's equation (see below) yields the following results.
$\begin{pmatrix} \frac{ \text{Nuclear Radius}}{a_o} & 0 & 0.1 & 0.2 & 0.5 & 1.0 & 2.0 \ \frac{ \text{Energy}}{E_h} & -0.500 & -0.496 & -0.488 & -0.450 & -0.385 & -0.293 \end{pmatrix} \nonumber$
A recommended exercise is to repeat these calculations for the 2s and 3s electronic states, and interpret the results.
Numerical integration of Schrödinger's equation:
$\begin{matrix} \text{Given} & \frac{-1}{2 \mu} \frac{d^2}{dr^2} \Psi (r) - \frac{1}{r \mu} \frac{d}{dr} \Psi (r) + \left[ \frac{L(L+1)}{2 \mu r^2} + V(r) \right] \Psi (r) = E \Psi (r) & \Psi (.0001) = .1 & \Psi ' (.0001) = 0 \end{matrix} \nonumber$
$\Psi = \text{Odesolve} \left( r,~ r_{max} \right) \nonumber$
Normalize the wave function:
$\Psi (r) = \left( \int_0^{r_{max}} \Psi (r)^2 4 \pi r^2 dr \right)^{-.5} \Psi (r) \nonumber$
$\begin{matrix} \text{Reduced mass:} & \mu = 1 & \text{Angular momentum:} & L = 0 & \text{Integration limit:} & r_{max} = 7 \ \text{Energy guess:} & E = -0.496 & r = 0,~.01 .. r_{max} \end{matrix} \nonumber$
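For readers without Mathcad, the same eigenvalue can be found with a simple shooting method in Python. The sketch below (an illustration under the same assumptions: μ = 1, L = 0, and u = rψ with u(0) = 0) integrates the radial equation outward and adjusts E until the solution decays at large r; for Rn = 0.1 it returns approximately -0.496.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Rn = 0.1          # nuclear radius in bohr (adjust to regenerate the table)
r_max = 25.0      # outer boundary where a bound-state solution must have decayed

def V(r):
    # uniformly charged sphere: harmonic inside the nucleus, Coulombic outside
    return -1/Rn*(1.5 - r**2/(2*Rn**2)) if r <= Rn else -1/r

def u_at_rmax(E):
    # solve -u''/2 + V u = E u outward from the origin with u = r*psi, u(0) = 0
    rhs = lambda r, y: [y[1], 2.0*(V(r) - E)*y[0]]
    sol = solve_ivp(rhs, [1e-6, r_max], [0.0, 1.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

E0 = brentq(u_at_rmax, -0.6, -0.4)   # this bracket contains only the ground state
print(f"ground-state energy for Rn = {Rn}: {E0:.4f} Eh")   # about -0.496
```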
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.21%3A_The_Effect_of_Lepton_Mass_on_the_Energy_and_Bond_Length_of_the_Hydrogen_Molecule_Ion.txt
|
The purpose of this tutorial is to provide a much abbreviated version of the first three sections of Chapter 12 of Volume III of The Feynman Lectures on Physics. These sections deal with the hyperfine interaction in the hydrogen atom.
At the introductory quantum chemistry-physics level we treat the hydrogen atom using an energy operator consisting of a kinetic energy term and an electron-proton potential energy term and calculate the ground-state energy. These are clearly the most important terms in the total energy operator, but they are not the only terms. The proton and electron are spin-1/2 fermions and as such have magnetic moments which interact with one another. This means that the ground state that we have calculated consists of four terms which have slightly different energies due to the magnetic interaction between the electron and proton (hyperfine splitting).
For example, listing the electron spin first we have the following four electron-proton states in the z-basis: |++>, |+->, |-+> and |-->. The spin-spin operator is.
$\hat{H}_{SpinSpin} = A \vec{ \sigma}^e \cdot \vec{ \sigma}^p = A \left( \sigma_x^e \sigma_x^p + \sigma_y^e \sigma_y^p + \sigma_z^e \sigma_z^p \right) \nonumber$
where the Pauli spin operators on the right-hand side represent the magnetic interaction between the electron and proton. The identity operator, given below along with the Pauli matrices, will be needed later.
$\begin{matrix} \sigma_x = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \sigma_y = \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} & \sigma_z = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Tensor multiplication is now used to represent the spin-spin operator in matrix format. In the interest of mathematical clarity the constant A is set equal to unity.
$H_{SpinSpin} = \left( \text{kronecker} \left( \sigma_x,~ \sigma_x \right) + \text{kronecker} \left( \sigma_y,~ \sigma_y \right) + \text{kronecker} \left( \sigma_z,~ \sigma_z \right) \right) \nonumber$
$H_{SpinSpin} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & -1 & 2 & 0 \ 0 & 2 & -1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} \nonumber$
We now ask Mathcad to calculate the eigenvalues and eigenvectors of the spin-spin operator. These results are displayed by constructing a matrix which contains the eigenvalues in the top row and their eigenvectors in the columns below the eigenvalues.
$\begin{matrix} E = \text{eigenvals} \left( H_{SpinSpin} \right) & \text{EigenvalEigenvec} = \text{rsort} \left( \text{stack} \left( E^T,~ \text{eigenvecs} \left( H_{SpinSpin} \right) \right),~ 1 \right) \end{matrix} \nonumber$
$\text{EigenvalEigenvec} = \begin{pmatrix} -3 & 1 & 1 & 1 \ 0 & 0 & 0 & 1 \ 0.707 & 0.707 & 0 & 0 \ -0.707 & 0.707 & 0 & 0 \ 0 & 0 & 1 & 0 \end{pmatrix} \nonumber$
These results are expressed in more familiar form below.
$\begin{matrix} ~ & | T \rangle_1 = | \uparrow \rangle_p | \uparrow \rangle_e \ \text{Triplet state E}_T = 1 & | T \rangle_0 = \frac{1}{ \sqrt{2}} \left[ | \uparrow \rangle_p | \downarrow \rangle_e + | \downarrow \rangle_p | \uparrow \rangle_e \right] \ ~ & | T \rangle_{-1} = | \downarrow \rangle_p | \downarrow \rangle_e \ \text{Singlet state E}_s = -3 & | S \rangle_0 = \frac{1}{ \sqrt{2}} \left[ | \uparrow \rangle_p | \downarrow \rangle_e - | \downarrow \rangle_p | \uparrow \rangle_e \right] \end{matrix} \nonumber$
We can achieve the same result using these electron-proton states: |++>, |+->, |-+> and |-->.
First we write the spin states in vector format:
$\begin{matrix} | + \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} & | - \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Next we write the four electron-proton spin states in tensor format.
$\begin{matrix} | ++ \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & | +- \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} \ | -+ \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & | -- \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
These spin states are given the following labels to facilitate the calculation of energy matrix.
$\begin{matrix} a = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & b = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & c = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & d = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{eigenvals} \left( \begin{pmatrix} a^T H_{ \text{SpinSpin}} a & a^T H_{ \text{SpinSpin}} b & a^T H_{ \text{SpinSpin}} c & a^T H_{ \text{SpinSpin}} d \ b^T H_{ \text{SpinSpin}} a & b^T H_{ \text{SpinSpin}} b & b^T H_{ \text{SpinSpin}} c & b^T H_{ \text{SpinSpin}} d \ c^T H_{ \text{SpinSpin}} a & c^T H_{ \text{SpinSpin}} b & c^T H_{ \text{SpinSpin}} c & c^T H_{ \text{SpinSpin}} d \ d^T H_{ \text{SpinSpin}} a & d^T H_{ \text{SpinSpin}} b & d^T H_{ \text{SpinSpin}} c & d^T H_{ \text{SpinSpin}} d \end{pmatrix} \right) = \begin{pmatrix} 1 \ -3 \ 1 \ 1 \end{pmatrix} \ \text{eigenvecs} \left( \begin{pmatrix} a^T H_{ \text{SpinSpin}} a & a^T H_{ \text{SpinSpin}} b & a^T H_{ \text{SpinSpin}} c & a^T H_{ \text{SpinSpin}} d \ b^T H_{ \text{SpinSpin}} a & b^T H_{ \text{SpinSpin}} b & b^T H_{ \text{SpinSpin}} c & b^T H_{ \text{SpinSpin}} d \ c^T H_{ \text{SpinSpin}} a & c^T H_{ \text{SpinSpin}} b & c^T H_{ \text{SpinSpin}} c & c^T H_{ \text{SpinSpin}} d \ d^T H_{ \text{SpinSpin}} a & d^T H_{ \text{SpinSpin}} b & d^T H_{ \text{SpinSpin}} c & d^T H_{ \text{SpinSpin}} d \end{pmatrix} \right) = \begin{pmatrix} 0 & 0 & 1 & 0 \ 0.707 & 0.707 & 0 & 0 \ 0.707 & -0.707 & 0 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Identical to the previous calculation, this method also yields an upper triplet state at E = 1 and a lower singlet at E = -3. Two of the four final states are superpositions of |+-> and |-+>. In other words, we have found the eigenstates of the spin-spin energy operator as is shown below. Using these states the spin-spin energy matrix is diagonal.
$\begin{matrix} a = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & b = \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & c = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} & d = \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ - \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{pmatrix} a^T H_{ \text{SpinSpin}} a & a^T H_{ \text{SpinSpin}} b & a^T H_{ \text{SpinSpin}} c & a^T H_{ \text{SpinSpin}} d \ b^T H_{ \text{SpinSpin}} a & b^T H_{ \text{SpinSpin}} b & b^T H_{ \text{SpinSpin}} c & b^T H_{ \text{SpinSpin}} d \ c^T H_{ \text{SpinSpin}} a & c^T H_{ \text{SpinSpin}} b & c^T H_{ \text{SpinSpin}} c & c^T H_{ \text{SpinSpin}} d \ d^T H_{ \text{SpinSpin}} a & d^T H_{ \text{SpinSpin}} b & d^T H_{ \text{SpinSpin}} c & d^T H_{ \text{SpinSpin}} d \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & -3 \end{pmatrix} \nonumber$
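The whole eigen-analysis can be reproduced outside Mathcad in a few lines; the numpy sketch below (an illustration, with A = 1 as above) builds the spin-spin matrix from Kronecker products and recovers the -3 singlet and the triply degenerate +1 triplet.

```python
import numpy as np

# Pauli matrices and the spin-spin operator (A = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
vals, vecs = np.linalg.eigh(H)
print(np.round(vals, 6))         # [-3.  1.  1.  1.]
print(np.round(vecs.real, 3))    # the -3 column is (|+-> - |-+>)/sqrt(2), the singlet
```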
The spin-spin hyperfine interaction is the basis of the hydrogen maser. The triplet state is selected using a Stern-Gerlach magnet and then 21 cm photons induce a triplet-singlet transition creating a coherent beam of photons.
The following table calculates expectation values for the z-direction spin for triplet and singlet states. In the first column, the expectation values for z-direction measurements jointly on the electron and proton are calculated. The next two columns calculate the z-direction expectation values for the electron and proton independently.
$\begin{pmatrix} a^T \text{kronecker} \left( \sigma_z,~ \sigma_z \right) a & a^T \text{kronecker} \left( \sigma_z,~ I \right) a & a^T \text{kronecker} \left( I,~ \sigma_z \right) a \ b^T \text{kronecker} \left( \sigma_z,~ \sigma_z \right) b & b^T \text{kronecker} \left( \sigma_z,~ I \right) b & b^T \text{kronecker} \left( I,~ \sigma_z \right) b \ c^T \text{kronecker} \left( \sigma_z,~ \sigma_z \right) c & c^T \text{kronecker} \left( \sigma_z,~ I \right) c & c^T \text{kronecker} \left( I,~ \sigma_z \right) c \ d^T \text{kronecker} \left( \sigma_z,~ \sigma_z \right) d & d^T \text{kronecker} \left( \sigma_z,~ I \right) d & d^T \text{kronecker} \left( I,~ \sigma_z \right) d \ \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \ -1 & 0 & 0 \ 1 & -1 & -1 \ -1 & 0 & 0 \end{pmatrix} \nonumber$
The b and d states are entangled Bell states. Note that in these states the expectation values for the individual spins are 0, indicating complete randomness. Collectively, however, the electron and proton always show opposite spin states leading to a joint expectation value of -1. In other words, the measurement result for the z-spin for either the electron or proton is completely random, but once the result for one of the particles is obtained, the other particle's spin state can be predicted with certainty.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.23%3A_The_Hyperfine_Interaction_in_the_Hydrogen_Atom.txt
|
The following provides an alternative mathematical analysis of the annihilation of positronium as presented in section 18-3 in Volume III of The Feynman Lectures on Physics.
Positronium is an analog of the hydrogen atom in which the proton is replaced by a positron, the electron's anti-particle. The electron-positron pair undergoes annihilation in $10^{-10}$ seconds producing two γ-ray photons. Positronium's effective mass is 1/2, yielding a ground state energy (excluding the magnetic interactions between the spin 1/2 anti-particles) E = -0.5μEh = -0.25 Eh. Considering spin the ground state is four-fold degenerate, but this degeneracy is split by the magnetic spin-spin hyperfine interaction shown below. See "The Hyperfine Splitting in the Hydrogen Atom" for further detail.
$\hat{H}_{SpinSpin} = A \sigma^e \sigma^p = A \left( \sigma_x^e \sigma_x^p + \sigma_y^e \sigma_y^p + \sigma_z^e \sigma_z^p \right) \nonumber$
The spin-spin Hamiltonian has the following eigenvalues (top row) and eigenvectors (columns beneath the eigenvalues), showing a singlet ground state and triplet excited state. The electron-positron spin states are to the right of the table with their m quantum numbers, showing that the singlet (j = 0, m = 0) is a superposition state as is one of the triplet states (j = 1, m = 0). The parameter A is much larger for positronium than for the hydrogen atom because the positron has a much larger magnetic moment than the proton.
$\begin{pmatrix} -3A & A & A & A \ 0 & 0 & 0 & 1 \ \frac{1}{ \sqrt{2}} & \frac{1}{ \sqrt{2}} & 0 & 0 \ - \frac{1}{ \sqrt{2}} & \frac{1}{ \sqrt{2}} & 0 & 0 \ 0 & 0 & 1 & 0 \end{pmatrix} \begin{matrix} \ | ++ \rangle ~ m = 1 \ |+- \rangle ~ m = 0 \ |-+ \rangle ~ m = 0 \ | -- \rangle ~ m = -1 \end{matrix} \nonumber$
Feynman shows that when the singlet ground state (J = 0, m = 0) annihilates, conservation of momentum requires that the photons emitted in opposite directions (A and B) must have the same circular polarization state, either both in the right or both in left circular state in their direction of motion. This leads to the following entangled superposition. The negative sign is required by parity conservation. The positronium ground state has negative parity (see above), therefore the final photon state must have negative parity.
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | R \rangle_A | R \rangle_B - | L \rangle_A | L \rangle_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ i \end{pmatrix}_B - \begin{pmatrix} 1 \ -i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ -i \end{pmatrix}_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \ i \ -1 \end{pmatrix} - \begin{pmatrix} 1 \ -i \ -i \ -1 \end{pmatrix} \right] = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
The circular polarization states are:
$\begin{matrix} R = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} & L = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} \end{matrix} \nonumber$
The appropriate operators are formed below:
$\begin{matrix} RC = R \left( \overline{R} \right)^T \rightarrow \begin{pmatrix} \frac{1}{2} & - \frac{1}{2} i \ \frac{1}{2} i & \frac{1}{2} \end{pmatrix} & LC = L \left( \overline{L} \right)^T \rightarrow \begin{pmatrix} \frac{1}{2} & \frac{1}{2} i \ - \frac{1}{2} i & \frac{1}{2} \end{pmatrix} & RLC = R \left( \overline{R} \right)^T - L \left( \overline{L} \right)^T \rightarrow \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} \end{matrix} \nonumber$
RLC is the angular momentum operator for photons. Below it is shown that |R> and |L> are eigenstates with eigenvalues of +1 and -1, respectively.
$\begin{matrix} \text{eigenvals(RLC)} = \begin{pmatrix} 1 \ -1 \end{pmatrix} & \text{RLC R} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} & \text{RLC L} \rightarrow \begin{pmatrix} - \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} \end{matrix} \nonumber$
Now we can consider some of the measurements that Feynman discusses in his analysis of positronium annihilation. Because the photons in state Ψ are entangled the measurements of observers A and B are correlated. For example, if observers A and B both measure the circular polarization of their photons and compare their results they always agree that they have measured the same polarization state. Their composite expectation value is 1.
$\begin{matrix} \Psi = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} & \left( \overline{ \Psi } \right)^T \text{kronecker(RLC, RLC)} \Psi = 1 \end{matrix} \nonumber$
The identity operation, do nothing, is now needed:
$I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \nonumber$
But their individual results are a random sequence of +1 and -1 outcomes averaging to an expectation value of zero.
$\begin{matrix} \left( \overline{ \Psi } \right)^T \text{kronecker(RLC, I)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(I, RLC)} \Psi = 0 \end{matrix} \nonumber$
The probability that both observers will measure |R> or both measure |L> is 0.5. The probability that one will measure |R> and the other |L>, or vice versa is zero.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(RC, RC)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(LC, LC)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(LC, RC)} \Psi = 0 \end{matrix} \nonumber$
Because |R> and |L> are superpositions of |V> and |H>, the photon wave function can also be written in the V-H plane polarization basis as is shown below. See the Appendix for an alternative justification.
$| \Psi \rangle = \frac{i}{ \sqrt{2}} \left[ | V \rangle_A | H \rangle_B + |H \rangle_A | V \rangle_B \right] = \frac{i}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix}_A \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix}_B + \begin{pmatrix} 0 \ 1 \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix}_B \right] = \frac{i}{ \sqrt{2}} \left[ \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \right] = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
The eigenstates for plane polarization are:
$\begin{matrix} V = \begin{pmatrix} 1 \ 0 \end{pmatrix} & H = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The appropriate measurement operators are:
$\begin{matrix} Vop = \begin{pmatrix}1 & 0 \ 0 & 0 \end{pmatrix} & Hop = \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} & VH = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
As VH is diagonal it is obvious that its eigenvalues are +1 and -1, and that V is the eigenstate with eigenvalue +1 and H is the eigenstate with eigenvalue -1.
$\begin{matrix} \text{VH V} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{VH H} = \begin{pmatrix} 0 \ -1 \end{pmatrix} \end{matrix} \nonumber$
Just as for the circular polarization measurements, the observers' individual plane polarization measurements are totally random, but when they compare their results they find perfect anti-correlation, always observing the opposite polarization state.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(VH, I)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(I, VH)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(VH, VH)} \Psi = -1 \end{matrix} \nonumber$
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(Vop, Hop)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(Hop, Vop)} \Psi = 0.5 \ \left( \overline{ \Psi} \right)^T \text{kronecker(Vop, Vop)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(Hop, Hop)} \Psi = 0 \end{matrix} \nonumber$
If one observer measures circular polarization and the other measures plane polarization the expectation value is 0. In other words there is no correlation between the measurements.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(RLC, VH)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(VH, RLC)} \Psi = 0 \end{matrix} \nonumber$
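All of the correlation results above can be reproduced with a few lines of numpy. The sketch below (an illustration using the same Ψ, RLC, VH and identity operators defined in this tutorial) prints the joint and single-photon expectation values.

```python
import numpy as np

psi = 1j/np.sqrt(2) * np.array([0, 1, 1, 0], dtype=complex)   # entangled two-photon state
RLC = np.array([[0, -1j], [1j, 0]])                 # circular polarization: +1 = R, -1 = L
VH  = np.array([[1, 0], [0, -1]], dtype=complex)    # plane polarization: +1 = V, -1 = H
I2  = np.eye(2, dtype=complex)

def expect(opA, opB):
    return np.real(psi.conj() @ np.kron(opA, opB) @ psi)

print(expect(RLC, RLC))   #  1.0  both photons show the same circular polarization
print(expect(VH, VH))     # -1.0  they always show opposite plane polarization
print(expect(RLC, I2))    #  0.0  each observer's individual results are random
print(expect(RLC, VH))    #  0.0  no correlation between different bases
```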
Classical reasoning (according to Feynman) is in disagreement with the highlighted result. Earlier it was demonstrated that the photons are either |L> or |R> polarized. However, suppose photon A is measured in the V-H basis and found to be |V>, and given that B is either |L> or |R>, which are superpositions of |V> and |H> (see Appendix), measurement of B in the V-H basis should yield |V> 50% of the time and |H> 50% of the time. There should be no correlation between the A and B measurements. The expectation value should be zero.
Feynman put it this way (parenthetical material added):
Surely you (A) cannot alter the physical state of his (B) photons by changing the kind of observation you make on your photons. No matter what measurements you make on yours, his must still be either RHC (|R>) or LHC (|L>).
But according to quantum mechanics the photons are entangled in the R-L and V-H bases as shown above, and therefore measurement of |V> at A collapses the wave function to |H> at B.
The highlighted prediction is confirmed experimentally leading to the conclusion that reasoning classically in this manner about the photons created in positronium annihilation is not valid.
While this analysis of positronium annihilation clarifies the conflict between quantum theory and classical realism, it does not lead to an experimental adjudication of the disagreement. In 1964 John Bell demonstrated that entangled systems, like the positronium decay products, could be used to decide the conflict one way or the other empirically. As is well known the subsequent experimental work based on Bell's theorem decided the conflict between the two views in favor of quantum theory.
Appendix
The relationships between plane and circularly polarized light.
$\begin{matrix} |R \rangle & |L \rangle & |V \rangle & |H \rangle \ \frac{1}{ \sqrt{2}} \text{(V + iH)} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i }{2} \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(V - iH)} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ - \frac{ \sqrt{2} i }{2} \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(L + R)} \rightarrow \begin{pmatrix} 1 \ 0 \end{pmatrix} & \frac{i}{ \sqrt{2}} \text{(L - R)} \rightarrow \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Transforming Ψ from the R-L basis to the V-H basis using the superpositions above.
$\psi = \frac{1}{ \sqrt{2}} \left( R_A R_B - L_A L_B \right) \begin{array}{|l} \text{substitute, R}_A = \frac{1}{ \sqrt{2}} \left( V_A + i H_A \right) \ \text{substitute, R}_B = \frac{1}{ \sqrt{2}} \left( V_B + i H_B \right) \ \text{substitute, L}_A = \frac{1}{ \sqrt{2}} \left( V_A - i H_A \right) \ \text{substitute, L}_B = \frac{1}{ \sqrt{2}} \left( V_B - i H_B \right) \ \text{simplify} \end{array} \rightarrow \psi = \sqrt{2} \left( \frac{H_A V_B}{2} + \frac{H_B V_A}{2} \right) i \nonumber$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.24%3A_Positronium_Annihilation.txt
|
This tutorial provides an alternative analysis of the annihilation of positronium as presented in section 18-3 of Volume III of The Feynman Lectures on Physics. The calculations begin after the highlighted region, which provides a justification for selecting an entangled singlet state for the two photons created in the decay event.
Positronium is an analog of the hydrogen atom in which the proton is replaced by a positron, the electron's anti-particle. The electron-positron pair undergoes annihilation in about $10^{-10}$ seconds, producing two γ-ray photons. Positronium's reduced mass is 1/2 in atomic units, yielding a ground state energy (excluding the magnetic interaction between the two spin-1/2 particles) E = -0.5μEh = -0.25 Eh. Considering spin, the ground state is four-fold degenerate, but this degeneracy is split by the magnetic spin-spin hyperfine interaction shown below. See "The Hyperfine Splitting in the Hydrogen Atom" for further detail.
$\hat{H}_{SpinSpin} = A \sigma^e \sigma^p = A \left( \sigma_x^e \sigma_x^p + \sigma_y^e \sigma_y^p + \sigma_z^e \sigma_z^p \right) \nonumber$
The spin-spin Hamiltonian has the following eigenvalues (top row) and eigenvectors (columns beneath the eigenvalues), showing a singlet ground state and triplet excited state. The electron-positron spin states are to the right of the table with their m quantum numbers, showing that the singlet (j = 0, m = 0) is a superposition state as is one of the triplet states (j = 1, m = 0). The parameter A is much larger for positronium than for the hydrogen atom because the positron has a much larger magnetic moment than the proton.
$\begin{pmatrix} -3A & A & A & A \ 0 & 0 & 0 & 1 \ \frac{1}{ \sqrt{2}} & \frac{1}{ \sqrt{2}} & 0 & 0 \ - \frac{1}{ \sqrt{2}} & \frac{1}{ \sqrt{2}} & 0 & 0 \ 0 & 0 & 1 & 0 \end{pmatrix} \begin{matrix} \ | ++ \rangle ~ m = 1 \ |+- \rangle ~ m = 0 \ |-+ \rangle ~ m = 0 \ | -- \rangle ~ m = -1 \end{matrix} \nonumber$
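For readers who would like to verify the hyperfine eigenvalues and eigenvectors shown above outside of Mathcad, here is a minimal NumPy sketch (an illustrative translation, not part of the original worksheet):

```python
import numpy as np

# Pauli matrices for the two spin-1/2 particles
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A = 1.0   # hyperfine coupling constant, set to unity for clarity
H = A*(np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

vals, vecs = np.linalg.eigh(H)
print(np.round(vals, 3))        # [-3.  1.  1.  1.]: singlet below a three-fold triplet
print(np.round(vecs[:, 0], 3))  # singlet eigenvector, (|+-> - |-+>)/sqrt(2) up to sign
```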
Feynman shows that when the singlet ground state (J = 0, m = 0) annihilates, conservation of momentum requires that the photons emitted in opposite directions (A and B) must have the same circular polarization state, either both in the right or both in the left circular state relative to their direction of motion. This leads to the following entangled superposition. The negative sign is required by parity conservation. The positronium ground state has negative parity (see above), therefore the final photon state must have negative parity.
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | R \rangle_A | R \rangle_B - | L \rangle_A | L \rangle_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ i \end{pmatrix}_B - \begin{pmatrix} 1 \ -i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ -i \end{pmatrix}_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \ i \ -1 \end{pmatrix} - \begin{pmatrix} 1 \ -i \ -i \ -1 \end{pmatrix} \right] = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
The circular polarization states are:
$\begin{matrix} R = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} & L = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} \end{matrix} \nonumber$
The appropriate operators are formed below:
$\begin{matrix} RC = R \left( \overline{R} \right)^T \rightarrow \begin{pmatrix} \frac{1}{2} & - \frac{1}{2} i \ \frac{1}{2} i & \frac{1}{2} \end{pmatrix} & LC = L \left( \overline{L} \right)^T \rightarrow \begin{pmatrix} \frac{1}{2} & \frac{1}{2} i \ - \frac{1}{2} i & \frac{1}{2} \end{pmatrix} & RLC = R \left( \overline{R} \right)^T - L \left( \overline{L} \right)^T \rightarrow \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} \end{matrix} \nonumber$
RLC is the angular momentum operator for photons. Below it is shown that |R> and |L> are eigenstates with eigenvalues of +1 and -1, respectively.
$\begin{matrix} \text{eigenvals(RLC)} = \begin{pmatrix} 1 \ -1 \end{pmatrix} & \text{RLC R} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} & \text{RLC L} \rightarrow \begin{pmatrix} - \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} \end{matrix} \nonumber$
Now we can consider some of the measurements that Feynman discusses in his analysis of positronium annihilation. Because the photons in state Ψ are entangled the measurements of observers A and B are correlated. For example, if observers A and B both measure the circular polarization of their photons and compare their results they always agree that they have measured the same polarization state. Their composite expectation value is 1.
$\begin{matrix} \Psi = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} & \left( \overline{ \Psi } \right)^T \text{kronecker(RLC, RLC)} \Psi = 1 \end{matrix} \nonumber$
The identity operation, do nothing, is now needed:
$I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \nonumber$
But their individual results are a random sequence of +1 and -1 outcomes averaging to an expectation value of zero.
$\begin{matrix} \left( \overline{ \Psi } \right)^T \text{kronecker(RLC, I)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(I, RLC)} \Psi = 0 \end{matrix} \nonumber$
The probability that both observers will measure |R> or both measure |L> is 0.5. The probability that one will measure |R> and the other |L>, or vice versa is zero.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(RC, RC)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(LC, LC)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(LC, RC)} \Psi = 0 \end{matrix} \nonumber$
Because |R> and |L> are superpositions of |V> and |H>, the photon wave function can also be written in the V-H plane polarization basis as is shown below. See the Appendix for an alternative justification.
$| \Psi \rangle = \frac{i}{ \sqrt{2}} \left[ | V \rangle_A | H \rangle_B + |H \rangle_A | V \rangle_B \right] = \frac{i}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix}_A \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix}_B + \begin{pmatrix} 0 \ 1 \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix}_B \right] = \frac{i}{ \sqrt{2}} \left[ \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \right] = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
The eigenstates for plane polarization are:
$\begin{matrix} V = \begin{pmatrix} 1 \ 0 \end{pmatrix} & H = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The appropriate measurement operators are:
$\begin{matrix} Vop = \begin{pmatrix}1 & 0 \ 0 & 0 \end{pmatrix} & Hop = \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} & VH = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
As VH is diagonal it is obvious that its eigenvalues are +1 and -1, and that V is the eigenstate with eigenvalue +1 and H is the eigenstate with eigenvalue -1.
$\begin{matrix} \text{VH V} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{VH H} = \begin{pmatrix} 0 \ -1 \end{pmatrix} \end{matrix} \nonumber$
Just as for the circular polarization measurements, the observers' individual plane polarization measurements are totally random, but when they compare their results they find perfect anti-correlation, always observing opposite polarization states.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(VH, I)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(I, VH)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(VH, VH)} \Psi = -1 \end{matrix} \nonumber$
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(Vop, Hop)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(Hop, Vop)} \Psi = 0.5 \ \left( \overline{ \Psi} \right)^T \text{kronecker(Vop, Vop)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(Hop, Hop)} \Psi = 0 \end{matrix} \nonumber$
If one observer measures circular polarization and the other measures plane polarization the expectation value is 0. In other words there is no correlation between the measurements.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(RLC, VH)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(VH, RLC)} \Psi = 0 \end{matrix} \nonumber$
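These Kronecker-product expectation values are easy to reproduce outside Mathcad. The sketch below is an illustrative NumPy translation (not part of the original worksheet); the operator and state definitions simply mirror those given above.

```python
import numpy as np

psi = (1j/np.sqrt(2))*np.array([0, 1, 1, 0])        # entangled two-photon state
RLC = np.array([[0, -1j], [1j, 0]])                 # circular polarization operator
VH  = np.array([[1, 0], [0, -1]], dtype=complex)    # plane polarization operator
Vop = np.diag([1.0, 0.0]); Hop = np.diag([0.0, 1.0])
I2  = np.eye(2)

def expect(opA, opB):
    # <psi| opA (x) opB |psi>
    return round(float(np.real(psi.conj() @ np.kron(opA, opB) @ psi)), 3)

print(expect(RLC, RLC))                    # 1: circular results always agree
print(expect(VH, VH))                      # -1: plane results always disagree
print(expect(RLC, I2), expect(I2, VH))     # 0, 0: individual results are random
print(expect(RLC, VH))                     # 0: no circular/plane cross correlation
print(expect(Vop, Hop), expect(Vop, Vop))  # 0.5, 0: joint V-H probabilities
```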
A realist believes that all observables have definite values independent of measurement and that measurement of one variable doesn't affect the value of another variable. Such a person might construct the following table which assigns specific polarization states to the photons in the R-L and V-H bases to explain the quantum mechanical calculations performed above.
$\begin{pmatrix} \text{Photon A} & ' & \text{Photon B} & ' & \text{RL(A) RL(B)} & \text{VH(A) VH(B)} & \text{RL(A) VH(B)} & \text{VH(A) RL(B)} \ \text{R V} & ' & \text{R H} & ' & 1 & -1 & -1 & 1 \ \text{L V} & ' & \text{L H} & ' & 1 & -1 & 1 & -1 \ \text{R H} & ' & \text{R V} & ' & 1 & -1 & 1 & -1 \ \text{L H} & ' & \text{L V} & ' & 1 & -1 & -1 & 1 \ \text{Expectation} & ' & \text{Value} & ' & 1 & -1 & 0 & 0 \end{pmatrix} \nonumber$
However, the quantum theorist objects that the operators representing rectilinear (plane) and circular polarization do not commute, which means that they represent incompatible observables that cannot simultaneously have well-defined values.
$\text{RLC VH - VH RLC} = \begin{pmatrix} 0 & 2i \ 2i & 0 \end{pmatrix} \nonumber$
As shown below |R> and |L> are superpositions of |V> and |H> and vice versa. This is another way of demonstrating that a photon cannot be in a well-defined circular polarization state, say |R>, and at the same time be definitely either |V> or |H>. |R> is a superposition of |V> and |H> and therefore its plane polarization state is completely undetermined. Thus the photon states in the table proposed by the realist are not valid from the quantum mechanical perspective. They have, therefore, no explanatory validity.
$\begin{matrix} |R> & |L> & |V> & |H> \ \frac{1}{ \sqrt{2}} \text{(V + iH)} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i }{2} \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(V - iH)} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ - \frac{ \sqrt{2} i }{2} \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(L + R)} \rightarrow \begin{pmatrix} 1 \ 0 \end{pmatrix} & \frac{i}{ \sqrt{2}} \text{(L - R)} \rightarrow \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Appendix
Transforming Ψ from the R-L basis to the V-H basis using the superpositions above.
$\psi = \frac{1}{ \sqrt{2}} \left( R_A R_B - L_A L_B \right) \begin{array}{|l} \text{substitute, R}_A = \frac{1}{ \sqrt{2}} \left( V_A + i H_A \right) \ \text{substitute, R}_B = \frac{1}{ \sqrt{2}} \left( V_B + i H_B \right) \ \text{substitute, L}_A = \frac{1}{ \sqrt{2}} \left( V_A - i H_A \right) \ \text{substitute, L}_B = \frac{1}{ \sqrt{2}} \left( V_B - i H_B \right) \ \text{simplify} \end{array} \rightarrow \psi = \sqrt{2} \left( \frac{H_A V_B}{2} + \frac{H_B V_A}{2} \right) i \nonumber$
I thought it would be interesting to look at calculations that included measurement in the diagonal-slant rectilinear basis.
The diagonal and slant eigenvectors:
$\begin{matrix} D = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & S = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & \frac{D-S}{ \sqrt{2}} = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \frac{D+S}{ \sqrt{2}} = \begin{pmatrix} 1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
The original state function in the diagonal-slant basis:
$\psi = \frac{i}{ \sqrt{2}} \left( H_A V_B + V_A H_B \right) \begin{array}{|l} \text{substitute, H}_A = \frac{1}{ \sqrt{2}} \left( D_A - S_A \right) \ \text{substitute, H}_B = \frac{1}{ \sqrt{2}} \left( D_B - S_B \right) \ \text{substitute, V}_A = \frac{1}{ \sqrt{2}} \left( D_A + S_A \right) \ \text{substitute, V}_B = \frac{1}{ \sqrt{2}} \left( D_B + S_B \right) \ \text{simplify} \end{array} \rightarrow \psi = \sqrt{2} \left( \frac{D_A D_B}{2} - \frac{S_A S_B}{2} \right) i \nonumber$
$\begin{matrix} \text{DS} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{Dd} = \frac{1}{2} \begin{pmatrix} 1 & 1 \ 1 & 1 \end{pmatrix} & \text{Ds} = \frac{1}{2} \begin{pmatrix} 1 & -1 \ -1 & 1 \end{pmatrix} & \text{D D}^T - \text{S S}^T = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(DS, DS)} \Psi = 1 & \left( \overline{ \Psi} \right)^T \text{kronecker(Dd, Dd)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(Ds, Ds)} \Psi = 0.5 \ \left( \overline{ \Psi} \right)^T \text{kronecker(Dd, Ds)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(DS, RLC)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(DS, VH)} \Psi = 0 \end{matrix} \nonumber$
$\begin{matrix} \text{VH DS} - \text{DS VH} = \begin{pmatrix} 0 & 2 \ -2 & 0 \end{pmatrix} & \text{RLC DS} - \text{DS RLC} = \begin{pmatrix} -2i & 0 \ 0 & 2i \end{pmatrix} \end{matrix} \nonumber$
All of the state assignments in the table below give expectation values that agree with quantum mechanics, but such assignments are not permissible in quantum mechanics because the operators involved do not commute.
$\begin{pmatrix} \text{PhotonA} & \text{PhotonB} & \text{RL(A) RL(B)} & \text{VH(A) VH(B)} & \text{RL(A) VH(B)} & \text{DS(A) DS(B)} & \text{RL(A) DS(B)} & \text{VH(A) DS(B)} \ \text{RVD} & \text{RHD} & 1 & -1 & -1 & 1 & 1 & 1 \ \text{RVS} & \text{RHS} & 1 & -1 & -1 & 1 & -1 & -1 \ \text{LVD} & \text{LHD} & 1 & -1 & 1 & 1 & -1 & 1 \ \text{LVS} & \text{LHS} & 1 & -1 & 1 & 1 & 1 & -1 \ \text{RHD} & \text{RVD} & 1 & -1 & 1 & 1 & 1 & -1 \ \text{RHS} & \text{RVS} & 1 & -1 & 1 & 1 & -1 & 1 \ \text{LHD} & \text{LVD} & 1 & -1 & -1 & 1 & -1 & -1 \ \text{LHS} & \text{LVS} & 1 & -1 & -1 & 1 & 1 & 1 \ \text{Expectation} & \text{Value} & 1 & -1 & 0 & 1 & 0 & 0 \end{pmatrix} \nonumber$
More generally, suppose observer A measures in the V-H basis while observer B measures plane polarization with an analyzer rotated by an angle θ. The composite operator below is the tensor product of VH for photon A with the plane polarization operator in the rotated basis for photon B, and the resulting correlation is -cos(2θ).

$E( \theta ) = \left( \overline{ \Psi} \right)^T \begin{pmatrix} \cos (2 \theta ) & \sin (2 \theta ) & 0 & 0 \ \sin (2 \theta ) & - \cos (2 \theta ) & 0 & 0 \ 0 & 0 & - \cos (2 \theta ) & - \sin (2 \theta ) \ 0 & 0 & - \sin (2 \theta ) & \cos (2 \theta ) \end{pmatrix} \Psi \rightarrow - \cos ( 2 \theta ) \nonumber$
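As a cross-check, the following short NumPy sketch (illustrative only; it is not part of the original Mathcad worksheet) builds the same rotated-analyzer operator and confirms the -cos(2θ) correlation:

```python
import numpy as np

psi = (1j/np.sqrt(2))*np.array([0, 1, 1, 0])   # the entangled two-photon state
VH = np.array([[1, 0], [0, -1]], dtype=complex)

def plane_pol(theta):
    # plane polarization operator for an analyzer rotated by theta
    c, s = np.cos(2*theta), np.sin(2*theta)
    return np.array([[c, s], [s, -c]], dtype=complex)

for theta in np.linspace(0, np.pi/2, 5):
    E = np.real(psi.conj() @ np.kron(VH, plane_pol(theta)) @ psi)
    print(round(theta, 3), round(E, 3), round(-np.cos(2*theta), 3))  # E equals -cos(2*theta)
```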
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.25%3A_Positronium_Annihilation-_Another_View.txt
|
The following provides an alternative mathematical analysis of the annihilation of positronium as presented in section 18-3 of Volume III of The Feynman Lectures on Physics. Positronium is an analog of the hydrogen atom in which the proton is replaced by a positron, the electron's anti-particle. The electron-positron pair undergoes annihilation in about $10^{-10}$ seconds, producing two γ-ray photons.
Feynman shows that when positronium annihilates, conservation of momentum requires that the photons emitted in opposite directions (A and B) must have the same circular polarization state, either both in the right or both in the left circular state relative to their direction of motion. This leads to the following entangled superposition in the R-L basis. The negative sign is required by parity conservation.
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | R \rangle_A | R \rangle_B - | L \rangle_A | L \rangle_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ i \end{pmatrix}_B - \begin{pmatrix} 1 \ -i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ -i \end{pmatrix}_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \ i \ -1 \end{pmatrix} - \begin{pmatrix} 1 \ -i \ -i \ -1 \end{pmatrix} \right] = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
The circular polarization states are:
$\begin{matrix} R = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} & L = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ - i \end{pmatrix} \end{matrix} \nonumber$
The appropriate operators are formed below:
$\begin{matrix} \text{RC = R} \left( \overline{ R} \right)^T \rightarrow \begin{pmatrix} \frac{1}{2} & - \frac{1}{2} i \ \frac{1}{2}i & \frac{1}{2} \end{pmatrix} & \text{LC = L} \left( \overline{ L} \right)^T \rightarrow \begin{pmatrix} \frac{1}{2} & \frac{1}{2}i \ - \frac{1}{2} i & \frac{1}{2} \end{pmatrix} & \text{RLC = R} \left( \overline{ R} \right)^T - \text{L} \left( \overline{ L} \right)^T \rightarrow \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} \end{matrix} \nonumber$
RLC is the angular momentum operator for photons. Below it is shown that |R> and |L> are eigenstates with eigenvalues of +1 and -1, respectively.
$\begin{matrix} \text{eigenvals(RLC)} = \begin{pmatrix} 1 \ -1 \end{pmatrix} & \text{RLC R} = \begin{pmatrix} 0.707 \ 0.707i \end{pmatrix} & \text{RLC L} = \begin{pmatrix} -0.707 \ 0.707i \end{pmatrix} & \frac{1}{ \sqrt{2}} = 0.707 \end{matrix} \nonumber$
Now we can consider some of the measurements that Feynman discusses in his analysis of positronium annihilation. Because the photons in state Ψ are entangled the measurements of observers A and B are correlated. For example, if observers A and B both measure the circular polarization of their photons and compare their results they always agree that they have measured the same polarization state. Their composite expectation value is 1.
$\begin{matrix} \Psi = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} & \left( \overline{ \Psi} \right)^T \text{kronecker(RLC, RLC)} \Psi = 1 \end{matrix} \nonumber$
The identity operation, do nothing, is needed:
$I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \nonumber$
But their individual results are a random sequence of +1 and -1 outcomes averaging to an expectation value of zero.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(RLC, I)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(I, RLC)} \Psi = 0 \end{matrix} \nonumber$
The probability that both observers will measure |R> or both measure |L> is 0.5. The probability that one will measure |R> and the other |L>, or vice versa is zero.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(RC, RC)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(LC, LC)} \Psi = 0.5 & \left( \overline{ \Psi} \right)^T \text{kronecker(LC, RC)} \Psi = 0 \end{matrix} \nonumber$
Now suppose the observers measure photon polarization in the vertical/horizontal basis. The appropriate eigenstates and measurement operators needed are shown below.
The eigenstates for plane polarization are:
$\begin{matrix} V = \begin{pmatrix} 1 \ 0 \end{pmatrix} & H = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The needed measurement operators are:
$\begin{matrix} Vop = \begin{pmatrix} 1 & 0 \ 0 & 0 \end{pmatrix} & Hop = \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} & VH = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
As VH is diagonal it is obvious that its eigenvalues are +1 and -1, and that V is the eigenstate with eigenvalue +1 and H is the eigenstate with eigenvalue -1.
$\begin{matrix} \text{VH V} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{VH H} = \begin{pmatrix} 0 \ -1 \end{pmatrix} \end{matrix} \nonumber$
Just as for the circular polarization measurements, the observers' individual polarization measurements are totally random, but when they compare their results they find perfect anti-correlation, always observing opposite polarization states.
$\begin{matrix} \left( \overline{ \Psi} \right)^T \text{kronecker(VH, I)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(I, VH)} \Psi = 0 & \left( \overline{ \Psi} \right)^T \text{kronecker(VH, VH)} \Psi = -1 \end{matrix} \nonumber$
Local classical reasoning is in disagreement with the highlighted result. Earlier it was demonstrated that the photons are either |L> or |R> polarized. However, suppose photon A is measured in the V-H basis and found to be |V>, and given that B is either |L> or |R>, which are superpositions of |V> and |H> (as shown below), measurement of B in the V-H basis should yield |V> 50% of the time and |H> 50% of the time. There should be no correlation between the A and B measurements in the V-H basis. The expectation value should be zero.
$\begin{matrix} |R > & \frac{1}{ \sqrt{2}} \text{(V + iH)} = \begin{pmatrix} 0.707 \ 0.707i \end{pmatrix} & |L > & \frac{1}{ \sqrt{2}} \text{(V - iH)} = \begin{pmatrix} 0.707 \ -0.707i \end{pmatrix} \end{matrix} \nonumber$
Arguing temporarily against non-local effects, Feynman states: Surely you (A) cannot alter the physical state of his (B) photons by changing the kind of observation you make on your photons. No matter what measurements you make on yours, his must still be either RHC (|R>) or LHC (|L>).
However, because |R> and |L> are superpositions of |V> and |H>, the initial R-L wave function is also an entangled superposition in the V-H polarization basis.
$| \Psi \rangle = \frac{i}{ \sqrt{2}} \left[ | V \rangle_A | H \rangle_B + | H \rangle_A|V \rangle_B \right] = \frac{i}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix}_A \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix}_B + \begin{pmatrix} 0 \ 1 \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix}_B \right] = \frac{i}{ \sqrt{2}} \left[ \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \right] = \frac{i}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
This wave function says a measurement of |V> at A collapses the wave function to |H> at B and a measurement of |H> at A collapses the wave function to |V> at B, in agreement with the highlighted expectation value. "A non-local interaction hooks up one location (A) with another (B) without crossing space, without decay, and without delay. A non-local event is, in short, unmediated, unmitigated and immediate." Quantum Reality by Nick Herbert, page 214.
$\psi = \frac{1}{ \sqrt{2}} \left( R_A R_B - L_A L_B \right) \begin{array}{|l} \text{substitute, R}_A = \frac{1}{ \sqrt{2}} \left( V_A + i H_A \right) \ \text{substitute, R}_B = \frac{1}{ \sqrt{2}} \left( V_B + i H_B \right) \ \text{substitute, L}_A = \frac{1}{ \sqrt{2}} \left( V_A - i H_A \right) \ \text{substitute, L}_B = \frac{1}{ \sqrt{2}} \left( V_B - i H_B \right) \ \text{simplify} \end{array} \rightarrow \psi = \sqrt{2} \left( \frac{H_B V_A i}{2} + \frac{H_A V_B i}{2} \right) \nonumber$
The interference between the probability amplitudes after the R-L to V-H substitutions, the unseen middle step leading to the final state, is shown below by a "hand" calculation.
$\frac{1}{2 \sqrt{2}} \left[ \textcolor{blue}{V_A V_B} + i V_A H_B + i H_A V_B - \textcolor{red}{H_A H_B} - \textcolor{blue}{V_A V_B} + i V_A H_B + i H_A V_B + \textcolor{red}{H_A H_B} \right] = \frac{i}{ \sqrt{2}} \left[ V_A H_B + H_A V_B \right] \nonumber$
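The same expansion can be checked symbolically. The SymPy sketch below (illustrative, not part of the original worksheet; the amplitudes are treated as ordinary commuting symbols) expands the R-L form of the wave function and reproduces the interference result:

```python
import sympy as sp

VA, VB, HA, HB = sp.symbols('V_A V_B H_A H_B')

RA = (VA + sp.I*HA)/sp.sqrt(2);  RB = (VB + sp.I*HB)/sp.sqrt(2)
LA = (VA - sp.I*HA)/sp.sqrt(2);  LB = (VB - sp.I*HB)/sp.sqrt(2)

psi = sp.expand((RA*RB - LA*LB)/sp.sqrt(2))
print(psi)   # the V_A V_B and H_A H_B terms cancel, leaving (i/sqrt(2))*(V_A*H_B + H_A*V_B)
```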
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.26%3A_Positronium_Annihilation-_Yet_Another_View.txt
|
This tutorial is an addendum to the immediately preceding one dealing with the hyperfine splitting in the hydrogen atom. It represents an alternative version of material that can be found in section 18-6 of Volume III of The Feynman Lectures on Physics.
The deuterium isotope of hydrogen consists of an electron, and a proton and neutron in the nucleus (a deuteron). All three fundamental particles are spin-1/2 fermions. However, the proton and the neutron in the nucleus collectively behave like a spin-1 particle in their magnetic interaction with the extra-nuclear electron. The spin-spin interaction between the electron and the nucleus is given below, where the superscript d refers to the proton-neutron nucleus.
$\hat{H}_{SpinSpin} = A \left( J^d \sigma^e \right) = A \left( J_x^d \sigma_x^e + J_y^d \sigma_y^e + J_z^d \sigma_z^e \right) \nonumber$
The spin-1/2 and spin-1 operators required for this Hamiltonian are given below. The spin-1/2 operators are the familiar Pauli matrices multiplied by 1/2. For a derivation of the spin-1 operators see Quantum Mechanics Demystified by David McMahon, chapter 10.
$\begin{matrix} \sigma_x = \frac{1}{2} \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \sigma_y = \frac{1}{2} \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} & \sigma_z = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \ J_x = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 & 1 & 0 \ 1 & 0 & 1 \ 0 & 1 & 0 \end{pmatrix} & J_y = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 & -i & 0 \ i & 0 & -i \ 0 & i & 0 \end{pmatrix} & J_z = \begin{pmatrix} 1 & 0 & 0 \ 0 & 0 & 0 \ 0 & 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
Tensor multiplication is now used to represent the spin-spin operator in matrix format. In the interest of mathematical clarity the constant A is set equal to unity.
$H_{SpinSpin} = \text{kronecker} \left( J_x,~ \sigma_x \right) + \text{kronecker} \left( J_y,~ \sigma_y \right) + \text{kronecker} \left( J_z,~ \sigma_z \right) \nonumber$
We now ask Mathcad to calculate the eigenvalues and eigenvectors of the spin-spin operator. These results are displayed by constructing a matrix which contains the eigenvalues in the top row, and their eigenvectors in the columns below the eigenvalues.
$\begin{matrix} \text{E = eigenvals} \left( H_{SpinSpin} \right) & \text{EigenvalEigenvec} = \text{rsort} \left( \text{stack} \left( E^T,~ \text{eigenvecs} \left( H_{SpinSpin} \right) \right) ,~ 1 \right) \end{matrix} \nonumber$
$\text{EigenvalEigenvec} = \begin{pmatrix} -1 & -1 & 0.5 & 0.5 & 0.5 & 0.5 \ 0 & 0 & 0 & 0 & 0 & 1 \ 0 & -0.816 & 0 & 0.577 & 0 & 0 \ 0 & 0.577 & 0 & 0.816 & 0 & 0 \ 0.577 & 0 & 0.816 & 0 & 0 & 0 \ -0.816 & 0 & 0.577 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \nonumber$
These results are in agreement with Tables 18-5 and 18-6 in Feynman's text, a ground J = 1/2 state and an excited J = 3/2 state. We can go forward, as we did in the previous tutorial, by writing the electron and deuteron spin wavefunctions in vector format.
The spin states in vector format:
$\begin{matrix} |+ \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} & | - \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The deuteron spin states in vector format (see McMahon):
$\begin{matrix} |d_1 \rangle = \begin{pmatrix} 1 \ 0 \ 0 \end{pmatrix} & | d_0 \rangle = \begin{pmatrix} 0 \ 1 \ 0 \end{pmatrix} & | d_{-1} \rangle \begin{pmatrix} 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Next we write the six electron-deuteron spin states in tensor format.
$\begin{matrix} | +d_1 \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} & | +d_0 \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} & | +d_{-1} \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} \ | -d_1 \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \ 0 \ 0 \end{pmatrix} & | -d_0 \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \ 0 \end{pmatrix} & | -d_{-1} \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
These spin states are given the following labels to facilitate the calculation of the energy matrix.
$\begin{matrix} \text{a} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} & \text{b} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} & \text{c} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{d} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{e} = \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \ 0 \end{pmatrix} & \text{f} = \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
$\text{H = H}_{SpinSpin} \nonumber$
$\text{eigenvals} \left( \begin{pmatrix} \text{a}^T \text{H a} & \text{a}^T \text{H b} & \text{a}^T \text{H c} & \text{a}^T \text{H d} & \text{a}^T \text{H e} & \text{a}^T \text{H f} \ \text{b}^T \text{H a} & \text{b}^T \text{H b} & \text{b}^T \text{H c} & \text{b}^T \text{H d} & \text{b}^T \text{H e} & \text{b}^T \text{H f} \ \text{c}^T \text{H a} & \text{c}^T \text{H b} & \text{c}^T \text{H c} & \text{c}^T \text{H d} & \text{c}^T \text{H e} & \text{c}^T \text{H f} \ \text{d}^T \text{H a} & \text{d}^T \text{H b} & \text{d}^T \text{H c} & \text{d}^T \text{H d} & \text{d}^T \text{H e} & \text{d}^T \text{H f} \ \text{e}^T \text{H a} & \text{e}^T \text{H b} & \text{e}^T \text{H c} & \text{e}^T \text{H d} & \text{e}^T \text{H e} & \text{e}^T \text{H f} \ \text{f}^T \text{H a} & \text{f}^T \text{H b} & \text{f}^T \text{H c} & \text{f}^T \text{H d} & \text{f}^T \text{H e} & \text{f}^T \text{H f} \ \end{pmatrix} \right) = \begin{pmatrix} -1 \ 0.5 \ 0.5 \ -1 \ 0.5 \ 0.5 \end{pmatrix} \nonumber$
$\text{eigenvecs} \left( \begin{pmatrix} \text{a}^T \text{H a} & \text{a}^T \text{H b} & \text{a}^T \text{H c} & \text{a}^T \text{H d} & \text{a}^T \text{H e} & \text{a}^T \text{H f} \ \text{b}^T \text{H a} & \text{b}^T \text{H b} & \text{b}^T \text{H c} & \text{b}^T \text{H d} & \text{b}^T \text{H e} & \text{b}^T \text{H f} \ \text{c}^T \text{H a} & \text{c}^T \text{H b} & \text{c}^T \text{H c} & \text{c}^T \text{H d} & \text{c}^T \text{H e} & \text{c}^T \text{H f} \ \text{d}^T \text{H a} & \text{d}^T \text{H b} & \text{d}^T \text{H c} & \text{d}^T \text{H d} & \text{d}^T \text{H e} & \text{d}^T \text{H f} \ \text{e}^T \text{H a} & \text{e}^T \text{H b} & \text{e}^T \text{H c} & \text{e}^T \text{H d} & \text{e}^T \text{H e} & \text{e}^T \text{H f} \ \text{f}^T \text{H a} & \text{f}^T \text{H b} & \text{f}^T \text{H c} & \text{f}^T \text{H d} & \text{f}^T \text{H e} & \text{f}^T \text{H f} \ \end{pmatrix} \right) = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0.577 & -0.816 & 0 & 0 \ 0 & 0 & 0.816 & 0.577 & 0 & 0 \ 0.577 & 0.816 & 0 & 0 & 0 & 0 \ -0.816 & 0.577 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \nonumber$
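A quick way to confirm the J = 1/2 and J = 3/2 eigenvalues found above is to rebuild the spin-spin operator with NumPy (an illustrative sketch, not part of the Mathcad worksheet):

```python
import numpy as np

# electron spin-1/2 operators (Pauli matrices divided by 2) and deuteron spin-1 operators
sx = 0.5*np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5*np.array([[0, -1j], [1j, 0]])
sz = 0.5*np.array([[1, 0], [0, -1]], dtype=complex)
Jx = (1/np.sqrt(2))*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = (1/np.sqrt(2))*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]], dtype=complex)

H = np.kron(Jx, sx) + np.kron(Jy, sy) + np.kron(Jz, sz)   # coupling constant A = 1
print(np.round(np.linalg.eigvalsh(H), 3))   # [-1. -1.  0.5  0.5  0.5  0.5]
```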
Appendix
While it's not directly pertinent to the subject of this tutorial, the interaction of two spin-1 systems is calculated below.
$H_{SpinSpin} = \left( \text{kronecker} \left( J_x,~ J_x \right) + \text{kronecker} \left( J_y,~ J_y \right) + \text{kronecker} \left( J_z,~ J_z \right) \right) \nonumber$
$H_{SpinSpin} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \nonumber$
We now ask Mathcad to calculate the eigenvalues and eigenvectors of the spin-spin operator.
$\begin{matrix} \text{E = eigenvals} \left( H_{SpinSpin} \right) & \text{EigenvalEigenvec = rsort} \left( \text{stack} \left( E^T,~ \text{eigenvecs} \left( H_{SpinSpin} \right) \right),~1 \right) \end{matrix} \nonumber$
$\text{EigenvalEigenvec} = \begin{pmatrix} -2 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0.707 & 0 & 0 & -0.707 & 0 & 0 & 0 & 0 \ 0.577 & 0 & 0 & -0.707 & 0 & -0.408 & 0 & 0 & 0 \ 0 & -0.707 & 0 & 0 & -0.707 & 0 & 0 & 0 & 0 \ -0.577 & 0 & 0 & 0 & 0 & -0.816 & 0 & 0 & 0 \ 0 & 0 & -0.707 & 0 & 0 & 0 & 0.707 & 0 & 0 \ 0.577 & 0 & 0 & 0.707 & 0 & -0.408 & 0 & 0 & 0 \ 0 & 0 & 0.707 & 0 & 0 & 0 & 0.707 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \nonumber$
These results are in agreement with Table 18-7 of Feynman's text. Reading from the left we have a singly degenerate J = 0 state, a triply degenerate J = 1 state, and a five-fold degenerate J = 2 state.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.27%3A_The_Hyperfine_Interaction_in_the_Deutrium_Atom.txt
|
The $p^1$, $d^1$ and $f^1$ electronic configurations have six, ten and fourteen microstates, respectively. The degeneracies of these microstates are split by the interaction between the magnetic fields associated with spin and orbital angular momentum - the spin-orbit interaction. As is well known, the $p^1$ configuration gives rise to $^2P_{3/2}$(4) and $^2P_{1/2}$(2) terms under the Russell-Saunders coupling scheme. The $d^1$ configuration yields $^2D_{5/2}$(6) and $^2D_{3/2}$(4) terms. The $f^1$ configuration consists of $^2F_{7/2}$(8) and $^2F_{5/2}$(6) terms. The numbers in parentheses are the degeneracies associated with the term symbols.
In what follows, tensor algebra will be used to calculate the spin-orbit interaction in the $p^1$, $d^1$ and $f^1$ electronic configurations. An approximate energy level diagram for the three electronic configurations will also be presented. The required spin and angular momentum operators (in atomic units) are provided below.
Spin angular momentum operators for spin 1/2:
$\begin{matrix} S_x = \frac{1}{2} \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & S_y = \frac{1}{2} \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} & S_z = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
Orbital angular momentum operators for L = 1 and 2 (see E. E. Anderson, Modern Physics and Quantum Mechanics, pp 298-300):
$\begin{matrix} L1_x = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 & 1 & 0 \ 1 & 0 & 1 \ 0 & 1 & 0 \end{pmatrix} & L1_y = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 & -i & 0 \ i & 0 & -i \ 0 & i & 0 \end{pmatrix} & L1_z = \begin{pmatrix} 1 & 0 & 0 \ 0 & 0 & 0 \ 0 & 0 & -1 \end{pmatrix} \ L2_x = \frac{1}{2} \begin{pmatrix} 0 & 2 & 0 & 0 & 0 \ 2 & 0 & \sqrt{6} & 0 & 0 \ 0 & \sqrt{6} & 0 & \sqrt{6} & 0 \ 0 & 0 & \sqrt{6} & 0 & 2 \ 0 & 0 & 0 & 2 & 0 \end{pmatrix} & L2_y = \frac{i}{2} \begin{pmatrix} 0 & -2 & 0 & 0 & 0 \ 2 & 0 & - \sqrt{6} & 0 & 0 \ 0 & \sqrt{6} & 0 & - \sqrt{6} & 0 \ 0 & 0 & \sqrt{6} & 0 & -2 \ 0 & 0 & 0 & 2 & 0 \end{pmatrix} & L2_z = \begin{pmatrix} 2 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & -1 & 0 \ 0 & 0 & 0 & 0 & -2 \end{pmatrix} \end{matrix} \nonumber$
The spin-orbit Hamiltonian in tensor format, to within a multiplicative constant which depends on the principal and angular momentum quantum numbers, is as follows.
$\hat{H}_{LS} = \hat{L} \otimes \hat{S} = \hat{L}_x \otimes \hat{S}_x + \hat{L}_y \otimes \hat{S}_y + \hat{L}_z \otimes \hat{S}_z \nonumber$
For the $p^1$ electronic configuration L = 1 and S = 1/2. The spin-orbit Hamiltonian and its eigenvalues are calculated as shown below. Kronecker is Mathcad's command for matrix tensor multiplication.
$H_{LS} = \text{kronecker} \left( L1_x,~S_x \right) + \text{kronecker} \left( L1_y,~S_y \right) + \text{kronecker} \left( L1_z,~S_z \right) \nonumber$
$\begin{matrix} \text{E = sort} \left( \text{eigenvals} \left( H_{LS} \right) \right) & E^T = \begin{pmatrix} -1 & -1 & 0.5 & 0.5 & 0.5 & 0.5 \end{pmatrix} \end{matrix} \nonumber$
We see that these results are as expected. We have two -1 eigenstates corresponding to the $^2P_{1/2}$ term and four 0.5 eigenstates corresponding to the $^2P_{3/2}$ term.
For the $d^1$ electronic configuration L = 2 and S = 1/2. The spin-orbit Hamiltonian and its eigenvalues are now calculated.
$H_{LS} = \left( \text{kronecker} \left( L2_x,~ S_x \right) + \text{kronecker} \left( L2_y,~ S_y \right) + \text{kronecker} \left( L2_z,~S_z \right) \right) \nonumber$
$\begin{matrix} \text{E = sort} \left( \text{eigenvals} \left( H_{LS} \right) \right) & E^T = \begin{pmatrix} -1.5 & -1.5 & -1.5 & -1.5 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \end{matrix} \nonumber$
Again the results are as expected. We have four -1.5 eigenstates corresponding to the $^2D_{3/2}$ term and a six-fold degenerate state at +1.0 corresponding to the $^2D_{5/2}$ term.
L = 3 for the $f^1$ configuration. The angular momentum operators for L = 3 were obtained by a study of the trends in the other angular momentum operators as L increased. To demonstrate that this procedure yielded the correct matrix operators it is shown that the x-y commutator for L = 3 is satisfied.
$\begin{matrix} L3_x = \frac{1}{2} \begin{pmatrix} 0 & \sqrt{6} & 0 & 0 & 0 & 0 & 0 \ \sqrt{6} & 0 & \sqrt{10} & 0 & 0 & 0 & 0 \ 0 & \sqrt{10} & 0 & \sqrt{12} & 0 & 0 & 0 \ 0 & 0 & \sqrt{12} & 0 & \sqrt{12} & 0 & 0 \ 0 & 0 & 0 & \sqrt{12} & 0 & \sqrt{10} & 0 \ 0 & 0 & 0 & 0 & \sqrt{10} & 0 & \sqrt{6} \ 0 & 0 & 0 & 0 & 0 & \sqrt{6} & 0 \end{pmatrix} & L3_y = \frac{i}{2} \begin{pmatrix} 0 & - \sqrt{6} & 0 & 0 & 0 & 0 & 0 \ \sqrt{6} & 0 & - \sqrt{10} & 0 & 0 & 0 & 0 \ 0 & \sqrt{10} & 0 & - \sqrt{12} & 0 & 0 & 0 \ 0 & 0 & \sqrt{12} & 0 & - \sqrt{12} & 0 & 0 \ 0 & 0 & 0 & \sqrt{12} & 0 & - \sqrt{10} & 0 \ 0 & 0 & 0 & 0 & \sqrt{10} & 0 & - \sqrt{6} \ 0 & 0 & 0 & 0 & 0 & \sqrt{6} & 0 \end{pmatrix} \ L3_z = \begin{pmatrix} 3 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 2 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & -1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & -2 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & -3 \end{pmatrix} & L3_x L3_y - L3_y L3_x \rightarrow \begin{pmatrix} 3i & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 2i & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & i & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & -i & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & -2i & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & -3i \end{pmatrix} \end{matrix} \nonumber$
The spin-orbit Hamiltonian and its eigenvalues are now calculated.
$H_{LS} = \left( \text{kronecker} \left( L3_x,~S_x \right) + \text{kronecker} \left( L3_y,~S_y \right) + \text{kronecker} \left( L3_z,~S_z \right) \right) \nonumber$
$\begin{matrix} \text{E = sort} \left( \text{eigenvals} \left( H_{LS} \right) \right) & E^T = \begin{pmatrix} -2 & -2 & -2 & -2 & -2 & -2 & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 \end{pmatrix} \end{matrix} \nonumber$

The six states at -2 correspond to a $^2F_{5/2}$ term and the eight states at 1.5 belong to a $^2F_{7/2}$ term. These results are in agreement with expectations.
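The L = 3 matrices above were written down by inspection of the trends in the lower-L operators. As a hedge against transcription errors, the sketch below (an illustrative NumPy construction, not part of the Mathcad worksheet) builds the angular momentum matrices for any quantum number from the ladder operators and reproduces the spin-orbit splittings found for the $p^1$, $d^1$ and $f^1$ configurations:

```python
import numpy as np

def angular_momentum(l):
    """Lx, Ly, Lz for quantum number l in the m = l, l-1, ..., -l basis."""
    dim = int(round(2*l + 1))
    m = np.array([l - k for k in range(dim)])
    Lz = np.diag(m).astype(complex)
    Lp = np.zeros((dim, dim), dtype=complex)     # raising operator L+
    for k in range(1, dim):                      # <l, m+1|L+|l, m> = sqrt(l(l+1) - m(m+1))
        Lp[k-1, k] = np.sqrt(l*(l+1) - m[k]*(m[k] + 1))
    Lx = 0.5*(Lp + Lp.T)
    Ly = -0.5j*(Lp - Lp.T)
    return Lx, Ly, Lz

Sx, Sy, Sz = angular_momentum(0.5)               # spin-1/2 operators
for l in (1, 2, 3):
    Lx, Ly, Lz = angular_momentum(l)
    HLS = np.kron(Lx, Sx) + np.kron(Ly, Sy) + np.kron(Lz, Sz)
    print(l, np.round(np.sort(np.linalg.eigvalsh(HLS)), 2))
# l = 1: two states at -1.0, four at 0.5
# l = 2: four states at -1.5, six at 1.0
# l = 3: six states at -2.0, eight at 1.5
```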
Using these results and the hydrogen atom energy equation as a function of the n and j quantum numbers, we can construct a diagram of the spin-orbit fine structure and its j-level degeneracy for the n = 4 level.
$E = - \frac{1}{2n^2} \left[ 1 + \frac{ \alpha^2}{n^2} \left( \frac{n}{j + \frac{1}{2}} - \frac{3}{4} \right) \right] \nonumber$
$\begin{matrix} 4f^1 \rightarrow ~^2F_{7/2} \ 4d^1,~4f^1 \rightarrow ~^2F_{5/2},~ ^2 D_{5/2} \ 4p^1,~4d^1 \rightarrow ~ ^2 D_{3/2},~ ^2P_{3/2} \ 4s^1,~ 4p^1 \rightarrow ~ ^2P_{1/2},~ ^2S_{1/2} \end{matrix} \nonumber$
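The ordering in this diagram can be checked by evaluating the energy expression above for n = 4 (a brief Python sketch, not part of the original tutorial; α is the fine-structure constant):

```python
alpha = 1/137.036   # fine-structure constant
n = 4
for j in (0.5, 1.5, 2.5, 3.5):
    E = -1/(2*n**2)*(1 + alpha**2/n**2*(n/(j + 0.5) - 0.75))
    print(f"j = {j}: E = {E:.10f} Eh")
# the energy rises (becomes less negative) as j increases, and levels sharing the
# same n and j (for example 4s1/2 and 4p1/2) coincide
```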
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.28%3A_A_Tensor_Algebra_Approach_to_Spin-Orbit_Coupling.txt
|
Abstract
A deBroglie Bohr model is described that can be used to calculate the electronic energies of atoms or ions containing up to four electrons. Seven exercises are provided which can be used to give students training in doing energy audits, carrying out simple variational calculations and critically analyzing the calculated results.
This note builds on two publications in the pedagogical literature [1,2] that showed how extending the Bohr model to two‐ and three‐electron atoms and ions could be used to enhance student understanding of atomic structure. A Bohr model, more correctly a deBroglie‐Bohr model, is used here to calculate the total electronic energies of atoms and ions containing up to four electrons.
The Bohr model for the hydrogen atom is the prototype of the semi‐classical approach to atomic and molecular structure. Although it was superseded by quantum mechanics many decades ago, it is still taught today because of its simplicity and because it introduced several important quantum mechanical concepts that have survived the model: quantum number, quantized energy, and quantum jump.
When applied to multi‐electron atoms and ions, the Bohr model provides a pedagogical tool for improving studentsʹ analytical and critical skills. As will be demonstrated, given a ʺpictureʺ of an atom or ion, it is not unrealistic to expect an undergraduate student to identify all the contributions to the total electronic energy, carry out an energy minimization and interpret the results in a variety of ways. Seven student exercises, with answers, are provided to illustrate how this might be accomplished.
We begin with a review of the deBroglie‐Bohr model for the hydrogen atom. Working in atomic units (h = 2π, me = e = 4πε0 = 1), and using the deBroglie‐Bohr restriction on electron orbits (nλ = 2πRn, where n = 1, 2, 3, ...), the deBroglie wave equation (λ = h/mv) and Coulombʹs law, the total electron energy is the following sum of its kinetic and potential energy contributions.
$E_n = \frac{n^2}{2R_n^2} - \frac{1}{R_n} \nonumber$
Minimization of the energy with respect to the variational parameter Rn yields the allowed orbit radii and energies.
$\begin{matrix} R_n = n^2 a_0 \ E_n = \frac{-0.5E_h}{n^2} \end{matrix} \nonumber$
where $a_0$ = 52.9 pm and $E_h = 4.36 \times 10^{-18}$ joule.
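The minimization that leads to these results is a short calculus exercise; the SymPy sketch below (illustrative only, not part of the original note) reproduces it symbolically:

```python
import sympy as sp

n, R = sp.symbols('n R', positive=True)
E = n**2/(2*R**2) - 1/R                  # kinetic plus potential energy of the n-th orbit
Rn = sp.solve(sp.diff(E, R), R)[0]       # set dE/dR = 0
print(Rn, sp.simplify(E.subs(R, Rn)))    # n**2 and -1/(2*n**2)
```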
The model shown in the figure below is for the beryllium atom or any four‐electron ion. Note that the occupancy of the inner orbit is restricted to two electrons (Pauli Principle) and that the orbit radii are constrained by the hydrogen atom result (R2 = 4R1) leaving only one variational parameter, the radius of the n = 1 orbit.
With these model elements and a little geometry [2] it is easy to specify the kinetic and potential energy contributions to the total energy. The numeric subscripts refer to the quantum number of the orbit. Thus T1 is the kinetic energy of an electron in the n = 1 orbit, VN1 is the potential energy interaction of an electron in the n = 1 orbit with the nucleus, and V12 is the potential energy interaction of an electron in the n = 1 orbit with an electron in the n = 2 orbit.
The recommended student calculations described below are carried out in the Mathcad programming environment [3]. A Mathcad file for doing the exercises is available for download on the Internet [4]. The calculation for the beryllium atom is carried out as shown below. Required student input is indicated by the highlighted regions.
Enter nuclear charge: Z = 4
Kinetic energy:
$\begin{matrix} T_1 \left( R_1 \right) = \frac{1}{2 R_1^2} & T_2 \left( R_1 \right) = \frac{1}{8 R_1^2} \end{matrix} \nonumber$
Electron-nucleus potential energy:
$\begin{matrix} V_{N1} \left( R_1 \right) = \frac{-Z}{R_1} & V_{N2} \left( R_1 \right) = \frac{-Z}{4 R_1} \end{matrix} \nonumber$
Electron-electron potential energy:
$\begin{matrix} V_{11} \left( R_1 \right) = \frac{1}{2R_1} & V_{22} \left( R_1 \right) = \frac{1}{8 R_1} & V_{12} \left( R_1 \right) = \frac{1}{ \sqrt{17} R_1} \end{matrix} \nonumber$
The next step is to do an energy audit for the atom or ion under consideration. The student is prompted to weight each of the contributions to the total electronic energy. The entries given below are appropriate for the Bohr beryllium atom shown in the figure above.
Enter coefficients for each contribution to the total energy:
$\begin{matrix} T_1 \left( R_1 \right) & T_2 \left( R_1 \right) & V_{N1} \left( R_1 \right) & V_{N2} \left( R_1 \right) & V_{11} \left( R_1 \right) & V_{22} \left( R_1 \right) & V_{12} \left( R_1 \right) \ \colorbox{yellow}{a = 2} & \colorbox{yellow}{b = 2} & \colorbox{yellow}{c = 2} & \colorbox{yellow}{d = 2} & \colorbox{yellow}{e = 1} & \colorbox{yellow}{f = 1} & \colorbox{yellow}{g = 4} \end{matrix} \nonumber$
The energy of the Bohr atom/ion in terms of the variational parameter R1, the radius of the inner electron orbit, and the various kinetic and potential energy contributions is:
$E \left( R_1 \right) = a T_1 \left( R_1 \right) + b T_2 \left( R_1 \right) + c V_{N1} \left( R_1 \right) + d V_{N2} \left( R_1 \right) + e V_{11} \left( R_1 \right) + f V_{22} \left( R_1 \right) + g V_{12} \left( R_1 \right) \nonumber$
Minimization of the electronic energy of the Bohr atom/ion with respect to R1 yields the optimum inner orbit radius and ground‐state energy
$\begin{matrix} \begin{array}{c|c} R_1 = \frac{d}{dR_1} E \left( R_1 \right) = 0 & _{ \text{float, 5}}^{ \text{solve, R}_1} \rightarrow .29744 \end{array} & E \left( R_1 \right) = -14.128 \end{matrix} \nonumber$
Thus, this model predicts a stable beryllium atom with an electronic energy in error by less than 4%. However, it must be stressed that the main purpose of the exercises presented is not to promote the Bohr model as such, but to use it as a vehicle for providing students with training in doing energy audits, carrying out simple variational calculations and critically analyzing the calculated results.
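For readers without access to Mathcad, the beryllium minimization can be reproduced with a few lines of Python (an illustrative sketch under the same model assumptions; the function and variable names are not from the original worksheet):

```python
import numpy as np
from scipy.optimize import minimize_scalar

Z = 4
def E(R1):
    T  = 2/(2*R1**2) + 2/(8*R1**2)                   # kinetic energy, two electrons per orbit
    VN = -2*Z/R1 - 2*Z/(4*R1)                        # electron-nucleus attraction
    VE = 1/(2*R1) + 1/(8*R1) + 4/(np.sqrt(17)*R1)    # electron-electron repulsion
    return T + VN + VE

res = minimize_scalar(E, bounds=(0.05, 2.0), method='bounded')
print(round(res.x, 3), round(res.fun, 3))            # R1 ~ 0.297 and E ~ -14.128
```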
In the student exercises critical analysis will involve assessing the level of agreement with experimental results, and whether or not the variational principle and the virial theorem are satisfied. Because the second exercise involves the virial theorem as criterion for validity, it is recommended that the first and second exercises be done in tandem.
Student Exercises
Exercise 1: Use this worksheet to calculate the ground state energy for H, He, Li and Be, and confirm all the entries in the table below. The experimental ground state energy of an atom is the negative of the sum of the successive ionization energies given in the data table in the Appendix A.
$\begin{pmatrix} \text{Element} & \frac{ \text{E(calc)}}{E_h} & \frac{ \text{E(exp)}}{E_h} & \% \text{Error} \ \text{H} & -0.500 & -0.500 & 0 \ \text{He} & -3.062 & -2.904 & 5.46 \ \text{Li} & -7.385 & -7.480 & 1.26 \ \text{Be} & -14.128 & -14.672 & 3.71 \end{pmatrix} \nonumber$
This assignment shows that, given its simplicity, the Bohr model achieves acceptable results. However, the students should note that the He result violates the variational theorem. In other words, the calculated energy is lower than the experimental energy.
Exercise 2: An important criterion for the validity of a quantum mechanical calculation is satisfying the virial theorem which for atoms and ions requires: E = ‐T = V/2. Demonstrate whether or not the virial theorem is satisfied for the elements in the first exercise.
Calculate the kinetic energy:
$\begin{matrix} T \left( R_1 \right) = a T_1 \left( R_1 \right) + b T_2 \left( R_1 \right) & T \left( R_1 \right) = 14.129 \end{matrix} \nonumber$
Calculate potential energy:
$\begin{matrix} V \left( R_1 \right) = E \left( R_1 \right) - T \left( R_1 \right) & V \left( R_1 \right) = -28.257 & \frac{V \left( R_1 \right)}{2} = -14.129 \end{matrix} \nonumber$
$\begin{matrix} \text{Element} & \frac{ \text{E(calc)}}{E_h} & \frac{ \text{-T(calc)}}{E_h} & \frac{ \text{V(calc)}}{2E_h} & \text{VT Satisfied} \ \text{H} & -0.500 & -0.500 & -0.500 & \text{Yes} \ \text{He} & -3.062 & -3.062 & -3.062 & \text{Yes} \ \text{Li} & -7.385 & -7.385 & -7.385 & \text{Yes} \ \text{Be} & -14.128 & -14.129 & -14.129 & \text{Yes} \end{matrix} \nonumber$
The Bohr model satisfies the virial theorem for all atomic calculations (atoms and ions). This does not guarantee the validity of the model. However, any model calculation that violates the virial theorem indicates that the model is not quantum mechanically valid.
Exercise 3: Plot the total energy, and the kinetic and potential energy components on the same graph for beryllium and interpret the results.
This graph clearly shows that atomic stability is the result of two competing energy terms. The attractive coulombic potential energy interaction draws the electrons toward the nucleus. At large R1 values this term dominates and the electron orbits gets smaller. However, this attractive interaction is overcome at small R1 values by the ʺrepulsiveʺ character of kinetic energy term that dominates at small R1 values and an energy minimum, a ground‐state is achieved.
Exercise 4: Calculate the ground state energies of some cations (He+, Li+, Be+, B+, C2+) and compare your calculations with experimental results using the data table in Appendix A.
$\begin{pmatrix} \text{Cation} & \frac{ \text{E(calc)}}{E_h} & \frac{ \text{E(exp)}}{E_h} & \% \text{Error} \ \text{He}^{+} & -2.000 & -2.000 & 0 \ \text{Li}^{+} & -7.562 & -7.282 & 3.70 \ \text{Be}^{+} & -14.275 & -14.329 & 0.38 \ \text{B}^{+} & -23.782 & -24.348 & 2.33 \ \text{C}^{2+} & -35.938 & -36.613 & 1.88 \end{pmatrix} \nonumber$
Just as in the first exercise, the one‐electron species He+1 is in exact agreement with experiment and the two‐electron ion, Li+, violates the variational theorem.
Exercise 5: Use the results of exercises 1 and 4 to calculate the first ionization energies of H, He, Li and Be and compare your results with the experimental data available in Appendix A.
$\begin{pmatrix} \text{Element} & \frac{ \text{E(atom)}}{E_h} & \frac{ \text{E(ion)}}{E_h} & \frac{ \text{IE(calc)}}{E_h} & \frac{ \text{IE(exp)}}{E_h} \ \text{H} & -0.500 & 0 & 0.500 & 0.500 \ \text{He} & -3.063 & -2.000 & 1.063 & 0.904 \ \text{Li} & -7.385 & -7.562 & -0.177 & 0.198 \ \text{Be} & -14.128 & -14.275 & -0.147 & 0.343 \end{pmatrix} \nonumber$
The results for H and He are acceptable, but both Li and Be have negative ionization energies.
Exercise 6: Are the anions of H, He and Li stable? In other words, do they have energies lower than the neutral species?
$\begin{pmatrix} \text{Anion} & \frac{ \text{Eion(calc)}}{E_h} & \frac{ \text{Eatom(exp)}}{E_h} & \text{Stable} \ \text{H}^{-1} & -0.562 & -0.500 & \text{Yes} \ \text{He}^{-1} & -2.745 & -2.904 & \text{No} \ \text{Li}^{-1} & -6.681 & -7.480 & \text{No} \end{pmatrix} \nonumber$
The Bohr model gets these results correct. However, it should be noted that Li‐1 is stable in liquid ammonia.
Exercise 7: Do a two‐parameter variational calculation on the beryllium atom shown in the Figure. In other words, minimize the total electronic energy of a beryllium atom that has two electrons in an orbit of radius R1 and two electrons in another orbit of radius R2.
As outlined in Appendix B this calculation yields the following results: R1 = R2 = 0.329 a0 and E(R1, R2) = ‐18.518 Eh, an energy significantly lower than that calculated for beryllium in Exercise 1. Thus, in the absence of the orbital occupancy restriction of the exclusion principle, energy minimization places all electrons in the ground state orbit. It was a realization of this that led Pauli, in part, to formulate the exclusion principle.
In summary, the purpose of this note is to use the Bohr model as an initial vehicle to help students develop skill in carrying out basic atomic structure calculations and to critically analyze the results of those calculations.
Appendix A
Data: Successive Ionization Energies for the First Six Elements
$\begin{pmatrix} \text{Element} & \text{IE}_1& \text{IE}_2 & \text{IE}_3 & \text{IE}_4 & \text{IE}_5 & \text{IE}_6 \ \text{H} & 0.500 & x & x & x & x & x \ \text{He} & 0.904 & 2.000 & x & x & x & x \ \text{Li} & 0.198 & 2.782 & 4.500 & x & x & x \ \text{Be} & 0.343 & 0.670 & 5.659 & 8.000 & x & x \ \text{B} & 0.305 & 0.926 & 1.395 & 9.527 & 12.500 & x \ \text{C} & 0.414 & 0.896 & 1.761 & 2.370 & 14.482 & 18.000 \end{pmatrix} \nonumber$
Appendix B
Energy contributions for the two‐parameter Bohr calculation:
$\begin{matrix} T_1 \left( R_1 \right) = \frac{1}{2R_1^2} & T_2 \left( R_2 \right) = \frac{1}{2 R_2^2} & V_{N1} \left( R_1 \right) = \frac{-Z}{R_1} & V_{N2} \left( R_2 \right) = \frac{-Z}{R_2} \ V_{11} \left( R_1 \right) = \frac{1}{2R_1} & V_{22} \left( R_2 \right) = \frac{1}{2 R_2} & V_{12} \left( R_1,~R_2 \right) = \frac{1}{ \sqrt{R_1^2 + R_2^2}} \end{matrix} \nonumber$
$E \left( R_1,~R_2 \right) = 2 T_1 \left( R_1 \right) + 2 T_2 \left( R_2 \right) + 2 V_{N1} \left( R_1 \right) + 2 V_{N2} \left( R_2 \right) + V_{11} \left( R_1 \right) + V_{22} \left( R_2 \right) + 4 V_{12} \left( R_1,~R_2 \right) \nonumber$
Minimization of the electronic energy with respect to R1 and R2:
$\begin{matrix} \begin{pmatrix} R_1 \ R_2 \end{pmatrix} = \begin{pmatrix} 1 \ 4 \end{pmatrix} & \begin{pmatrix} R_1 \ R_2 \end{pmatrix} = \text{Minimize} \left( E,~ R_1,~R_2 \right) & \begin{pmatrix} R_1 \ R_2 \end{pmatrix} = \begin{pmatrix} 0.329 \ 0.329 \end{pmatrix} & \text{E} \left( R_1,~ R_2 \right) = -18.518 \end{matrix} \nonumber$
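The two-parameter minimization of Appendix B can likewise be verified numerically. The SciPy sketch below (illustrative only, not the original Mathcad worksheet) uses the same seed values and recovers R1 = R2 ≈ 0.329 and E ≈ -18.518:

```python
import numpy as np
from scipy.optimize import minimize

Z = 4
def E(r):
    R1, R2 = r
    return (1/R1**2 + 1/R2**2 - 2*Z/R1 - 2*Z/R2
            + 1/(2*R1) + 1/(2*R2) + 4/np.sqrt(R1**2 + R2**2))

res = minimize(E, x0=[1.0, 4.0], bounds=[(0.05, 10.0)]*2)   # same seed as the worksheet
print(np.round(res.x, 3), round(res.fun, 3))                # [0.329 0.329] and -18.518
```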
Literature cited:
1. Bagchi, B.; Holody, P. ʺAn interesting application of Bohr theory,ʺ Am. J. Phys. 1988, 56, 746.
2. Saleh‐Jahromi, A. ʺGround State Energy of Lithium and Lithium‐like Atoms Using the Bohr Theory,ʺ The Chemical Educator, 2006, 11, 333‐334.
3. Mathcad is a product of Mathsoft, 101 Main Street, Cambridge, MA 02142
4. www.users.csbsju.edu/~frioux/stability/BohrAtoms.mcd
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/02%3A_Atomic_Structure/2.29%3A_A_Bohr_Model_for_Multi-electron_Atoms_and_Ions.txt
|
The purpose of this tutorial is to calculate the ground-state energies of simple multi-electron atoms and ions using the variational method. In the interest of mathematical and computational simplicity the single-parameter, orthonormal hydrogenic wave functions shown below will be used. The method will be illustrated for boron, but can be used for any atomic or ionic species with five or fewer electrons.
Using the following orthonormal trial wave functions, the various contributions to the total electronic energy of a multi-electron atom are given below in terms of the variational parameter, α. For further detail see: "Atomic Variational Calculations: Hydrogen to Boron," The Chemical Educator 1999, 4, 40-43. It should be pointed out that in these calculations the exchange interaction is ignored. Including exchange generally improves the results by about 1%.
$\begin{matrix} \Psi_{1s} \left( \alpha,~ \text{r} \right) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} \left( - \alpha \text{r} \right) & \Psi_{2s} \left( \alpha,~ \text{r} \right) = \sqrt{ \frac{ \alpha^3}{32 \pi}} (2 - \alpha \text{r}) \text{exp} \left( \frac{- \alpha r}{2} \right) & \Psi_{2p} \left( \alpha,~ r,~ \theta \right) = \sqrt{ \frac{ \alpha^3}{32 \pi}} \alpha \text{r exp} \left( \frac{ - \alpha \text{r}}{2} \right) \cos ( \theta) \end{matrix} \nonumber$
The method will be illustrated with boron, which has the electronic structure $1s^2 2s^2 2p^1$.
Nuclear charge: Z = 5
Seed value for variational parameter α: α = Z
Kinetic energy integrals:
$\begin{matrix} T_{1s} ( \alpha ) = \frac{ \alpha^2}{2} & T_{2s} ( \alpha ) = \frac{ \alpha^2}{8} & T_{2p} ( \alpha ) = \frac{ \alpha^2}{8} \end{matrix} \nonumber$
Electron-nucleus potential energy integrals:
$\begin{matrix} V_{N1s} ( \alpha ) = -Z \alpha & V_{N2s} ( \alpha ) = - \frac{Z}{4} \alpha & V_{N2p} ( \alpha ) = - \frac{Z}{4} \alpha \end{matrix} \nonumber$
Electron-electron potential energy integrals:
$\begin{matrix} V_{1s1s} ( \alpha ) = \frac{5}{8} \alpha & V_{1s2s} ( \alpha ) = \frac{17}{81} \alpha & V_{1s2p} ( \alpha ) = \frac{59}{243} \alpha & V_{2s2s} ( \alpha ) = \frac{77}{512} \alpha & V_{2s2p} ( \alpha ) = \frac{83}{512} \alpha \end{matrix} \nonumber$
Enter coefficients for each contribution to the total energy:
$\begin{matrix} T_{1s} & T_{2s} & T_{2p} & V_{N1s} & V_{N2s} & V_{N2p} & V_{1s1s} & V_{1s2s} & V_{1s2p} & V_{2s2s} & V_{2s2p} \ \colorbox{yellow}{a = 2} & \colorbox{yellow}{b = 2} & \colorbox{yellow}{c = 1} & \colorbox{yellow}{d = 2} & \colorbox{yellow}{e = 2} & \colorbox{yellow}{f = 1} & \colorbox{yellow}{g = 1} & \colorbox{yellow}{h = 4} & \colorbox{yellow}{i = 2} & \colorbox{yellow}{j = 1} & \colorbox{yellow}{k = 2} \end{matrix} \nonumber$
Variational energy equation:
$\begin{matrix} E ( \alpha ) = & \text{a} T_{1s} ( \alpha ) + \text{b} T_{2s} ( \alpha ) + \text{c} T_{2p} ( \alpha ) + \text{d} V_{N1s} ( \alpha ) + \text{e} V_{N2s} ( \alpha ) + \text{f} V_{N2p} ( \alpha ) ... \ & + \text{g} V_{1s1s} ( \alpha ) + \text{h} V_{1s2s} ( \alpha ) + \text{i} V_{1s2p} ( \alpha ) + \text{j} V_{2s2s} ( \alpha ) + \text{k} V_{2s2p} ( \alpha ) \end{matrix} \nonumber$
Minimize energy with respect to the variational parameter, α.
$\begin{matrix} \alpha = \text{Minimize} \left( E,~ \alpha \right) & \alpha = 4.118 & E ( \alpha ) = -23.320 \end{matrix} \nonumber$
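Because E(α) is simply a quadratic in α (a positive α² kinetic term plus a negative linear potential term), the minimization can be reproduced without Mathcad. The Python sketch below (illustrative; the coefficient names are not from the original worksheet) assembles the coefficients for the boron configuration and locates the minimum analytically:

```python
Z = 5
# coefficient of alpha**2 (kinetic) and of alpha (potential) for the 1s2 2s2 2p1 audit
kin = 2*(1/2) + 2*(1/8) + 1*(1/8)
pot = (2*(-Z) + 2*(-Z/4) + 1*(-Z/4)          # electron-nucleus terms
       + 1*(5/8) + 4*(17/81) + 2*(59/243)    # 1s-1s, 1s-2s, 1s-2p repulsions
       + 1*(77/512) + 2*(83/512))            # 2s-2s, 2s-2p repulsions

alpha = -pot/(2*kin)                # minimum of E(alpha) = kin*alpha**2 + pot*alpha
E_min = kin*alpha**2 + pot*alpha
print(round(alpha, 3), round(E_min, 3))      # 4.118 and about -23.32
```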
The experimental ground state energy is the negative of the sum of the successive ionization energies of the atom or ion (see table of experimental data below):
$\begin{matrix} \text{SumIE} = 0.305 + 0.926 + 1.395 + 9.527 + 12.500 & E_{exp} = - \text{SumIE} & E_{exp} = -24.653 \end{matrix} \nonumber$
Compare theory and experiment:
$\begin{vmatrix} \frac{E ( \alpha ) - E_{exp}}{E_{exp}} \end{vmatrix} = 5.405 \% \nonumber$
Calculate orbital energies of a 1s, 2s and 2p electron by filling the place holders with the appropriate coefficients (0, 1, 2, ...). Compare the calculated results with experimental values (see table below):
$\begin{matrix} \begin{array}{l l} E_{1s} ( \alpha) = & 1 T_{1s} ( \alpha ) + 0 T_{2s} ( \alpha ) + 0 T_{2p} ( \alpha ) + 1 V_{N1s} ( \alpha ) ... \ & + 0 V_{N2s} ( \alpha ) + 0 V_{N2p} ( \alpha ) + 1 V_{1s1s} ( \alpha ) + 2 V_{1s2s} ( \alpha ) ... \ & + 1 V_{1s2p} ( \alpha ) + 0 V_{2s2s} ( \alpha ) + 0 V_{2s2p} ( \alpha ) \end{array} & E_{1s} ( \alpha ) = -6.809 & \text{Exp} = -7.355 \ \begin{array}{l l} E_{2s} ( \alpha) = & 0 T_{1s} ( \alpha ) + 1 T_{2s} ( \alpha ) + 0 T_{2p} ( \alpha ) + 0 V_{N1s} ( \alpha ) ... \ & + 1 V_{N2s} ( \alpha ) + 0 V_{N2p} ( \alpha ) + 0 V_{1s1s} ( \alpha ) + 2 V_{1s2s} ( \alpha ) ... \ & + 0 V_{1s2p} ( \alpha ) + 1 V_{2s2s} ( \alpha ) + 1 V_{2s2p} ( \alpha ) \end{array} & E_{2s} ( \alpha ) = -0.012 & \text{Exp} = -0.518 \ \begin{array}{l l} E_{2p} ( \alpha ) = & 0 T_{1s} ( \alpha ) + 0 T_{2s} ( \alpha ) + 1 T_{2p} ( \alpha ) + 0 V_{N1s} ( \alpha ) ... \ & + 0 V_{N2s} ( \alpha ) + 1 V_{N2p} ( \alpha ) + 0 V_{1s1s} ( \alpha ) + 0 V_{1s2s} ( \alpha ) ... \ & + 2 V_{1s2p} ( \alpha ) + 0 V_{2s2s} ( \alpha ) + 2 V_{2s2p} ( \alpha ) \end{array} & E_{2p} ( \alpha ) = 0.307 & \text{Exp} = -0.305 \end{matrix} \nonumber$
$\begin{matrix} \text{Successive Ionization Energies for the First Six Elements} & \text{Orbital Energies for the First Six Atoms} \ \begin{pmatrix} \text{Element} & \text{IE}_1 & \text{IE}_2 & \text{IE}_3 & \text{IE}_4 & \text{IE}_5 & \text{IE}_6 \ \text{H} & 0.500 & \text{x} & \text{x} & \text{x} & \text{x} & \text{x} \ \text{He} & 0.904 & 2.000 & \text{x} & \text{x} & \text{x} & \text{x} \ \text{Li} & 0.198 & 2.782 & 4.500 & \text{x} & \text{x} & \text{x} \ \text{Be} & 0.343 & 0.670 & 5.659 & 8.000 & \text{x} & \text{x} \ \text{B} & 0.305 & 0.926 & 1.395 & 9.527 & 12.500 & \text{x} \ \text{C} & 0.414 & 0.896 & 1.761 & 2.370 & 14.482 & 18.000 \end{pmatrix} & \begin{pmatrix} \text{Element} & 1s & 2s & 2p \ \text{H} & -0.500 & \text{x} & \text{x} \ \text{He} & -0.904 & \text{x} & \text{x} \ \text{Li} & -2.386 & -0.198 & \text{x} \ \text{Be} & -4.383 & -0.343 & \text{x} \ \text{B} & -7.355 & -0.518 & -0.305 \ \text{C} & -10.899 & -0.655 & -0.414 \end{pmatrix} \end{matrix} \nonumber$
Interpretation of results:
With this model for atomic structure we are able to compare theory with experiment in two ways. The calculated ground-state energy is compared to the negative of the sum of the successive ionization energies. This comparison shows that theory is in error by 5.4% - not bad for a one-parameter model for a five-electron atom.
However, the comparison of the calculated orbital energies with the negative of the orbital ionization energies is not so favorable. It is clear that the one-parameter model used in this calculation does not do a very good job on the valence electrons. For example, the 2s electrons are barely bound and the 2p electron is not bound at all.
The total energy (ground-state energy) comparison is more favorable because the model does a decent job on the non-valence electrons, where the vast majority of the energy resides. Chemistry, however, is dictated by the behavior of the valence electrons, so the failure of the model to calculate good orbital energies, and therefore good wave functions, for the valence electrons is a serious problem. This is a common problem in atomic and molecular calculations: finding wave functions that effectively model the behavior of both the core and the valence electrons.
Suggested additional problems: H, H⁻, Li, Li⁺, Be, C⁺, and the Li atom excited state 1s²2p¹.
2.31: Some Calculations on the Lithium Atom Ground State
The purpose of this tutorial is to point out that if all that mattered in the determination of atomic structure was energy minimization, the electronic structure of lithium would be 1s³, rather than 1s²2s¹.
To deal with this issue we choose the following scaled hydrogenic orbitals for the lithium atom's electrons:
$\begin{matrix} \Psi_{1s} = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha \text{r} ) & \Psi_{2s} = \sqrt{ \frac{ \alpha^3}{32 \pi}} (2 - \alpha \text{r} ) \text{exp} \left( \frac{- \alpha \text{r}}{2} \right) \end{matrix} \nonumber$
Using this basis set we find the following expressions (in terms of the variational parameter α) for the expectation values for the various contributions to the electronic energy of the lithium atom.
Nuclear charge:
$Z = 3 \nonumber$
Kinetic energy integrals:
$\begin{matrix} T_{1s} ( \alpha ) = \frac{ \alpha^2}{2} & T_{2s} ( \alpha ) = \frac{ \alpha^2}{8} \end{matrix} \nonumber$
Electron-nucleus potential energy integrals:
$\begin{matrix} V_{N1s} ( \alpha ) = - Z \alpha & V_{N2s} ( \alpha ) = - \frac{Z}{4} \alpha \end{matrix} \nonumber$
Electron-electron potential energy integrals:
$\begin{matrix} V_{1s1s} ( \alpha ) = \frac{5}{8} \alpha & V_{1s2s} ( \alpha ) = \frac{17}{81} \alpha \end{matrix} \nonumber$
We now calculate the ground-state energy of lithium assuming it has the 1s²2s¹ electronic configuration. The total electronic energy consists of nine contributions: three kinetic energy terms, three electron-nucleus potential energy terms, and three electron-electron potential energy contributions.
$E_{Li} ( \alpha ) = 2 T_{1s} ( \alpha ) + 2 V_{N1s} ( \alpha ) + V_{1s1s} ( \alpha ) + T_{2s} ( \alpha ) + V_{N2s} ( \alpha ) + 2 V_{1s2s} ( \alpha ) \nonumber$
Minimization of the energy with respect to the variational parameter, α, yields the following result:
$\begin{matrix} \begin{array}{c|c} \alpha = \frac{d}{d \alpha} E_{Li} ( \alpha ) = 0 & _{ \text{float, 4}}^{ \text{solve, } \alpha} \rightarrow 2.536 \end{array} E_{Li} ( \alpha ) = -7.2333 \end{matrix} \nonumber$
Compared to the experimental ground-state energy -7.478 Eh (the negative of the successive ionization energies of the lithium atom) this result is in error by 3.3%. This result is satisfactory, indicating that the theoretical model has some merit. We could do better, of course, but it would cost something in terms of computational effort and simplicity of the model.
Now we calculate the energy of the hypothetical 1s³ electronic configuration for lithium using the same basis functions. Again, the total electronic energy consists of nine contributions: three kinetic energy terms, three electron-nucleus potential energy terms, and three electron-electron potential energy contributions.
$E_{Li} ( \alpha ) = 3 T_{1s} ( \alpha ) + 3 V_{N1s} ( \alpha ) + 3 V_{1s1s} ( \alpha ) \nonumber$
Minimization of the energy with respect to the variational parameter, α, yields the following result:
First reset the value of α:
$\begin{matrix} \alpha = \alpha \begin{array}{c|c} \alpha = \frac{d}{d \alpha} E_{Li} ( \alpha ) = 0 & _{ \text{float, 4}}^{ \text{solve, } \alpha} \rightarrow 2.375 \end{array} & E_{Li} ( \alpha ) = -8.4609 \end{matrix} \nonumber$
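Both minimizations can be reproduced with a short numerical sketch (a Python/scipy stand-in for the Mathcad symbolic solve; the integral formulas are those given above):

```python
# Compare the 1s(2)2s(1) and hypothetical 1s(3) lithium configurations.
from scipy.optimize import minimize_scalar

Z = 3
T1s   = lambda a: a**2 / 2
T2s   = lambda a: a**2 / 8
VN1s  = lambda a: -Z * a
VN2s  = lambda a: -Z * a / 4
V1s1s = lambda a: 5/8 * a
V1s2s = lambda a: 17/81 * a

# 1s(2)2s(1): nine contributions in total
E_ground = lambda a: 2*T1s(a) + T2s(a) + 2*VN1s(a) + VN2s(a) + V1s1s(a) + 2*V1s2s(a)
# hypothetical 1s(3): three 1s electrons and three electron pairs
E_1s3    = lambda a: 3*T1s(a) + 3*VN1s(a) + 3*V1s1s(a)

for label, E in [("1s2 2s1", E_ground), ("1s3", E_1s3)]:
    res = minimize_scalar(E, bracket=(1.0, 4.0))
    print(f"{label:8s} alpha = {res.x:.3f}  E = {res.fun:.4f}")
# expect roughly alpha = 2.536, E = -7.233 and alpha = 2.375, E = -8.461
```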
This electronic configuration has a lower energy than that for 1s²2s¹, and also lower than the experimental value, in clear violation of the variational principle.
Electrons are fermions and subject to the Pauli exclusion principle, which prevents two electrons from having the same set of quantum numbers. Thus, while the 1s³ electronic configuration has a lower energy, its existence is prevented by the Pauli principle.
2.32: E. B. Wilson's Calculation on the Lithium Atom Ground State
The electronic structure of lithium is 1s²2s¹. The hydrogenic 1s and 2s orbitals are as follows:
$\begin{matrix} \Psi ( \text{1s} ) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha \text{r} ) & \Psi (2s) = \sqrt{ \frac{ \alpha^3}{32 \pi}} (2 - \alpha \text{r} ) \text{exp} \left( \frac{- \alpha \text{r}}{2} \right) \end{matrix} \nonumber$
If these orbitals are used the variational expression for the lithium atom energy is given below.
Nuclear charge:
$Z = 3 \nonumber$
Seed value for α:
$\alpha = Z \nonumber$
Define variational integral for lithium:
$E ( \alpha ) = \alpha^2 - 2 Z \alpha + \frac{5}{8} \alpha + \frac{ \alpha^2}{8} - \frac{Z \alpha}{4} + \frac{34 \alpha}{81} \nonumber$
Minimize energy with respect to the variational parameter, α:
$\begin{matrix} \text{Given} \frac{d}{d \alpha} E ( \alpha ) = 0 & \alpha = \text{Find} ( \alpha ) & \alpha = 2.5357 & E ( \alpha ) = -7.2333 \end{matrix} \nonumber$
This one-parameter variational calculation is in error by 3.27%. The ground state energy is the negative of the sum of the ionization energies.
$\begin{matrix} E_{exp} = \frac{-5.392-75.638-122.451}{27.2114} & E_{exp} = -7.4778 & \begin{vmatrix} \frac{E( \alpha ) - E_{exp}}{E_{exp}} \end{vmatrix} = 3.2695 \% \end{matrix} \nonumber$
It is possible to improve the results by using a two-parameter calculation in which the 2s electron has a different scale factor than the 1s electrons. In other words, the electronic structure would be 1s(α)²2s(β)¹.
$\begin{matrix} \Psi_{1s} \left( \text{r, } \alpha \right) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha \text{r} ) & \Psi_{2s} ( \text{r, } \beta ) = \sqrt{ \frac{ \beta^3}{32 \pi}} (2 - \beta \text{r} ) \text{exp} \left( \frac{- \beta \text{r}}{2} \right) \end{matrix} \nonumber$
This calculation was first published in 1933 by E. Bright Wilson (J. Chem. Phys. 1, 210 (1933)). Levine's Quantum Chemistry (6th ed., p. 299) contains a brief summary of the calculation.
Nuclear charge:
$Z = 3 \nonumber$
Seed values for α and β:
$\begin{matrix} \alpha = Z & \beta = Z - 1 \end{matrix} \nonumber$
When the wave function for the 1s(α)²2s(β)¹ electron configuration is written as a Slater determinant, the following variational integrals arise.
$\begin{matrix} T_{1s} ( \alpha ) = \frac{ \alpha^2}{2} & T_{2s} ( \beta ) = \frac{ \beta^2}{8} & V_{N1s} ( \alpha ) = - Z \alpha & V_{N2s} ( \beta ) = \frac{-Z \beta}{4} \end{matrix} \nonumber$
$\begin{matrix} V_{1s1s} ( \alpha ) = \frac{5}{8} \alpha & V_{1s2s} ( \alpha,~ \beta ) = \alpha \beta \frac{ \beta^4 + 10 \alpha \beta^3 + 8 \alpha^4 + 20 \alpha^3 \beta + 12 \alpha^2 \beta^2}{ \left( 2 \alpha + \beta \right)^5 } \ T_{1s2s} ( \alpha,~ \beta ) = -4 \sqrt{2} \alpha^{ \frac{5}{2}} \beta^{ \frac{5}{2}} \frac{ \beta - 4 \alpha}{ \left( 2 \alpha + \beta \right)^4} & V_{N1s2s} ( \alpha,~ \beta ) = -4 Z \sqrt{2} \alpha^{ \frac{3}{2}} \beta^{ \frac{3}{2}} \frac{2 \alpha - \beta}{ \left( 2 \alpha + \beta \right)^3} \end{matrix} \nonumber$
$V_{1112} ( \alpha,~ \beta ) = 32 \sqrt{2} \beta^{ \frac{3}{2}} \alpha^{ \frac{5}{2}} \frac{-28 \alpha^3 \beta + 264 \alpha^4 - 21 \alpha \beta^3 - \beta^4 - 86 \alpha^2 \beta^2}{ \left( 2 \alpha + \beta \right)^3 \left( \beta + 6 \alpha \right)^4} \nonumber$
$\begin{matrix} V_{1212} ( \alpha,~ \beta) = 16 \alpha^3 \beta^3 \frac{13 \beta^2 + 20 \alpha^2 - 30 \beta \alpha}{ \left( \beta + 2 \alpha \right)^7} & S_{1s2s} ( \alpha,~ \beta ) = 32 \sqrt{2} \alpha^{ \frac{3}{2}} \beta^{ \frac{3}{2}} \frac{ \alpha - \beta}{ \left( 2 \alpha + \beta \right)^4} \end{matrix} \nonumber$
The next step in this calculation is to collect these terms in an expression for the total energy of the lithium atom and then minimize it with respect to the variational parameters, α and β. The results of this minimization procedure are shown below.
$E ( \alpha,~ \beta ) = \frac{ \begin{array}{l} 2T_{1s} ( \alpha) + T_{2s} ( \beta ) - T_{1s} ( \alpha ) S_{1s2s} ( \alpha,~ \beta )^2 - 2T_{1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta ) ... \ + 2 V_{N1s} ( \alpha ) + V_{N2s} ( \beta ) - V_{N1s} ( \alpha ) S_{1s2s} ( \alpha,~ \beta)^2 - 2 V_{N1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta) ... \ + 2V_{1s2s} ( \alpha,~ \beta ) + V_{1s1s} ( \alpha ) - 2 V_{1112} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta ) - V_{1212} ( \alpha,~ \beta ) \end{array}}{1 - S_{1s2s} ( \alpha,~ \beta )^2} \nonumber$
Minimization of E(α, β) simultaneously with respect to α and β.
$\begin{matrix} \text{Given} & \frac{d}{d \alpha} E( \alpha,~ \beta ) = 0 & \frac{d}{d \beta} E( \alpha,~ \beta ) = 0 \ & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \text{Find} ( \alpha,~ \beta ) & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \begin{pmatrix} 2.6797 \ 1.8683 \end{pmatrix} & E( \alpha,~ \beta ) = -7.3936 \end{matrix} \nonumber$
Comparison with experiment (ground state energy is the negative of the sum of the ionization energies):
$\begin{matrix} E_{exp} = \frac{-5.392 - 75.638 - 122.451}{27.2114} & E_{exp} = -7.4778 & \begin{vmatrix} \frac{E( \alpha,~ \beta ) - E_{exp}}{E_{exp}} \end{vmatrix} = 1.1258 \% \end{matrix} \nonumber$
This result is slightly different from that reported by Wilson in 1933. He found that the energy was minimized at -7.3922 Eh, with parameters α = 2.686 and β = 1.776. When I use his parameters with my equation for the energy I get Wilson's energy value, so I can only conclude that he did not quite find the energy minimum.
$\begin{matrix} \alpha = 2.686 & \beta = 1.776 & E ( \alpha,~ \beta ) = -7.3922 \end{matrix} \nonumber$
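A numerical stand-in for the Mathcad minimization is sketched below (Python with numpy/scipy assumed); it uses the integrals quoted above and also evaluates the energy at Wilson's published parameters.

```python
# Two-parameter variational energy for the lithium ground state, 1s(a)^2 2s(b)^1.
import numpy as np
from scipy.optimize import minimize

Z = 3

def E(p):
    a, b = p
    T1s, T2s   = a**2/2, b**2/8
    VN1s, VN2s = -Z*a, -Z*b/4
    V1s1s      = 5/8*a
    V1s2s  = a*b*(b**4 + 10*a*b**3 + 8*a**4 + 20*a**3*b + 12*a**2*b**2) / (2*a + b)**5
    T1s2s  = -4*np.sqrt(2*a**5*b**5) * (b - 4*a) / (2*a + b)**4
    VN1s2s = -4*Z*np.sqrt(2*a**3*b**3) * (2*a - b) / (2*a + b)**3
    V1112  = 32*np.sqrt(2) * b**1.5 * a**2.5 * (-28*a**3*b + 264*a**4 - 21*a*b**3
             - b**4 - 86*a**2*b**2) / ((2*a + b)**3 * (b + 6*a)**4)
    V1212  = 16*a**3*b**3 * (13*b**2 + 20*a**2 - 30*a*b) / (2*a + b)**7
    S      = 32*np.sqrt(2*a**3*b**3) * (a - b) / (2*a + b)**4
    num = (2*T1s + T2s - T1s*S**2 - 2*T1s2s*S
           + 2*VN1s + VN2s - VN1s*S**2 - 2*VN1s2s*S
           + 2*V1s2s + V1s1s - 2*V1112*S - V1212)
    return num / (1 - S**2)

res = minimize(E, x0=[2.7, 1.9], method="Nelder-Mead")
print(np.round(res.x, 4), round(res.fun, 4))   # expect about (2.68, 1.87) and -7.394
print(round(E([2.686, 1.776]), 4))             # Wilson's 1933 parameters, about -7.392
```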
2.33: The Importance of the Pauli Principle
In a provocative article published in Science (21 February 1975) with the title "Of Atoms, Mountains and Stars: A Study in Qualitative Physics," Victor Weisskopf illustrates the importance of quantum mechanics in understanding not only the nanoscopic world of atoms and molecules and our macroworld of mountains, but also the cosmological world of stars and galaxies. Weisskopf's paper presents an analysis of material science in terms of two fundamental ideas: wave-particle duality for matter and light, and the Pauli exclusion principle for the basic building blocks of matter (electrons, protons and neutrons, all fermions).
Regarding the Pauli principle Weisskopf writes:
If the Pauli principle did not hold, all electrons would be allowed to be in the lowest quantum state. That would mean that the ground states of all atoms would be similar: the atomic electrons would all assemble in the lowest and simplest quantum state. All atoms would exhibit essentially the same properties, a most uninteresting world. We owe the variety of nature largely to the exclusion principle.
The purpose of this tutorial is to illustrate what Weisskopf is saying in this paragraph. Using the following trial wave function, a general variational calculation will be carried out assuming all atomic electrons are resident in the ground 1s quantum level.
$\Psi ( \alpha ) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha \text{r} ) \nonumber$
The calculation of the energy of the 1s$^Z$ electronic configuration assuming this trial wave function yields:
$\text{E} = \text{Z} \frac{ \alpha^2}{2} - \text{Z}^2 \alpha + \frac{ \text{Z} ( \text{Z} - 1)}{2} \frac{5}{8} \alpha \nonumber$
Minimization of E with respect to α gives
$\frac{d}{d \alpha} \left[ \text{Z} \frac{ \alpha^2}{2} - \text{Z}^2 \alpha + \frac{ \text{Z} ( \text{Z} - 1)}{2} \frac{5}{8} \alpha \right] = 0 \text{ solve, } \alpha \rightarrow \frac{11}{16} \text{Z} + \frac{5}{16} \nonumber$
Substitution of the optimum value for α into E yields the energy of the 1s$^Z$ electronic configuration:
$\begin{array}{c|c} \text{E(Z)} = \text{Z} \frac{ \alpha^2}{2} - \text{Z}^2 \alpha + \frac{ \text{Z} ( \text{Z} - 1)}{2} \frac{5}{8} \alpha & _{ \text{simplify}}^{ \text{substitute, } \alpha = \frac{11}{16} \text{Z} + \frac{5}{16}} \rightarrow \frac{-121}{512} \text{Z}^3 - \frac{55}{256} \text{Z}^2 - \frac{25}{512} \text{Z} \end{array} \nonumber$
Comparing this result for the energy as a function of atomic number with the actual ground state energies of the elements confirms Weisskopf's statement. The energy result ignoring the Pauli principle presented here is lower than the experimental energies for elements beyond He.
$\begin{pmatrix} \text{Z} & \text{E(Z)} & \text{E(experimental)} \ 1 & -0.500 & -0.500 \ 2 & -2.848 & -2.903 \ 3 & -8.461 & -7.478 \ 4 & -18.758 & -14.668 \ 5 & -35.156 & -24.658 \ 6 & -59.074 & -37.855 \ 7 & -91.930 & -54.609 \ 8 & -135.141 & -75.106 \ 9 & -190.125 & -99.801 \ 10 & -258.301 & -129.044 \end{pmatrix} \nonumber$
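A quick check of the closed-form result against the tabulated experimental energies can be made with a few lines of Python (a sketch, not part of the original tutorial):

```python
# Energy of the hypothetical 1s^Z configuration with alpha = (11 Z + 5)/16 substituted in,
# compared with the experimental ground-state energies listed above (atomic units).
E_exp = {1: -0.500, 2: -2.903, 3: -7.478, 4: -14.668, 5: -24.658,
         6: -37.855, 7: -54.609, 8: -75.106, 9: -99.801, 10: -129.044}

def E_no_pauli(Z):
    return -(121*Z**3 + 110*Z**2 + 25*Z) / 512

for Z in range(1, 11):
    print(f"Z = {Z:2d}   E(Z) = {E_no_pauli(Z):9.3f}   E(exp) = {E_exp[Z]:9.3f}")
```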
2.34: Splitting the 2s-2p Degeneracy in the Lithium Atom
In lithium the ground state electronic configuration is 1s²2s¹; 1s²2p¹ is an excited state because the s-p degeneracy of the one-electron hydrogen atom has been split by the presence of the core (1s²) electrons. A simple variational calculation on Li⁺ using Ψ(r) = (α³/π)^{1/2} exp(-αr) to represent the core electrons yields the optimum value α = 2.6875 for the variational parameter.
We can model the behavior of the 2s electron by assuming that it is attracted to the nucleus and repelled by the core electrons. The attraction to the nucleus is simply the familiar -Z/r coulombic interaction. The electrostatic interaction with the core electrons is given by,
$V_{core} = 2 \left( \frac{1 - \text{exp} (-2 \alpha \text{r})}{ \text{r}} - \alpha \text{exp} ( -2 \alpha \text{r} ) \right) \nonumber$
The next steps are to calculate the orbital energies of the 2s and 2p states by numerical integration of Schrödinger's equation. First, the 2s orbital energy.
$\begin{matrix} \text{Reduced mass:} & \mu = 1 & \text{Nuclear charge:} & \text{Z} = 3 & \text{Integration limit:} & r_{max} = 12 \ \text{Angular momentum:} & \text{L} = 0 & \text{Energy guess:} & \text{Es} = -.1748 & \text{r} = 0,~.01 .. \text{r}_{ \text{max}} \end{matrix} \nonumber$
$\begin{matrix} \text{Given} & \frac{-1}{2 \mu} \frac{d^2}{dr^2} \Psi s(r) - \frac{1}{r \mu} \frac{d}{dr} \Psi s(r) + \left[ 2 \left( \frac{1 - \text{exp} ( -2 \alpha r)}{r} - \alpha \text{exp} (-2 \alpha r ) \right) + \frac{ \text{L} ( \text{L} + 1)}{2 \mu r^2} - \frac{Z}{r} \right] \Psi s(r) = \text{Es} \Psi s (r)\end{matrix} \nonumber$
Seed values for wave function and its first derivative:
$\begin{matrix} \Psi s (.001) = 1 & \Psi s' (.001 ) = 0.1 \end{matrix} \nonumber$
$\Psi s = \text{Odesolve} \left( r,~ r_{max} \right) \nonumber$
Normalize the wavefunction:
$\Psi s (r) = \left( \int_0^{ r_{max}} \Psi s (r)^2 4 \pi r^2 dr \right)^{-0.5} \Psi s (r) \nonumber$
Setting L = 1 above demonstrates that the 2p state does not have the same energy as the 2s state. The next step is to demonstrate that the 2p energy is -0.1259 Eh.
$\begin{matrix} \text{Integration limit:} & r_{max} = 20 & \text{Angular momentum:} & L = 1 & \text{Energy guess:} & Ep = -.1259 \end{matrix} \nonumber$
$\text{Given} ~ \frac{-1}{2 \mu} \frac{d^2}{dr^2} \Psi p(r) - \frac{1}{r \mu} \frac{d}{dr} \Psi p(r) + \left[ 2 \left( \frac{1 - \text{exp}( -2 \alpha r)}{r} - \alpha \text{exp} (-2 \alpha r ) \right) + \frac{ \text{L} ( \text{L} + 1)}{2 \mu r^2} - \frac{Z}{r} \right] \Psi p (r) = Ep \Psi p(r) \nonumber$
Seed values for wave function and its first derivative:
$\begin{matrix} \Psi p (.001) = 0 & \Psi p' (.001) = 0.001 \end{matrix} \nonumber$
$\Psi p = \text{Odesolve} \left( r,~ r_{max} \right) \nonumber$
Normalize the wavefunction:
$\Psi p (r) = \left( \int_0^{r_{max}} \Psi p (r)^2 4 \pi r^2 dr \right)^{-0.5} \Psi p (r) \nonumber$
$r = 0,~.01 .. r_{max} \nonumber$
Note that 2s-2p degeneracy has indeed been split. The splitting, E2p - E2s = .0488 hartree = 1.33 eV. Herzberg reports an experimental splitting of 1.85 eV. The results of this model could be improved by treating α, the core electron scale factor, as an adjustable parameter.
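The two orbital energies can also be estimated outside of Mathcad. The sketch below is a rough Python stand-in for the Odesolve calculation: it works with u(r) = rΨ(r) rather than Ψ(r) itself (which removes the first-derivative term), integrates the radial equation with the Numerov method, and bisects on the value of u at the integration limit. The energy brackets are assumptions chosen so that only the 2s and 2p levels lie inside them.

```python
# Shooting-method sketch for the 2s and 2p orbital energies in the frozen-core model.
import numpy as np

alpha, Z = 2.6875, 3   # core scale factor and nuclear charge from the text

def V_eff(r, L):
    """Nuclear attraction + frozen 1s^2 core repulsion + centrifugal term (atomic units)."""
    core = 2 * ((1 - np.exp(-2*alpha*r)) / r - alpha * np.exp(-2*alpha*r))
    return -Z/r + core + L*(L + 1) / (2*r**2)

def u_end(E, L, rmax, h=1e-3):
    """Integrate u'' = 2 (V_eff - E) u outward with the Numerov method; return u near rmax."""
    r = np.arange(h, rmax, h)
    f = 2 * (V_eff(r, L) - E)
    u = np.zeros_like(r)
    u[0], u[1] = r[0]**(L + 1), r[1]**(L + 1)     # regular small-r behavior
    for i in range(1, len(r) - 1):
        u[i+1] = (2*u[i]*(1 - 5*h**2*f[i]/12) - u[i-1]*(1 + h**2*f[i-1]/12)) \
                 / (1 + h**2*f[i+1]/12)
    return u[-1]

def eigenvalue(L, Elo, Ehi, rmax):
    """Bisect on the sign of u(rmax); assumes exactly one bound level between Elo and Ehi."""
    slo = np.sign(u_end(Elo, L, rmax))
    for _ in range(60):
        Emid = 0.5*(Elo + Ehi)
        if np.sign(u_end(Emid, L, rmax)) == slo:
            Elo = Emid
        else:
            Ehi = Emid
    return 0.5*(Elo + Ehi)

E2s = eigenvalue(L=0, Elo=-0.30, Ehi=-0.10, rmax=12)
E2p = eigenvalue(L=1, Elo=-0.20, Ehi=-0.08, rmax=20)
print(E2s, E2p, (E2p - E2s)*27.211)   # expect values near -0.175, -0.126 and ~1.3 eV
```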
2.35: Addition of Spin Angular Momentum - A Tensor Algebra Approach
In this entry I work through section 4.4.3 of David Griffithsʹ Introduction to Quantum Mechanics (2nd ed.) in which he treats the addition of angular momentum for two identical spin‐1/2 particles. The tensor algebra approach is illustrated.
The four spin states of two spin‐1/2 particles are written below in the spin‐z basis in tensor format.
$\begin{matrix} | \uparrow \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} & | \downarrow \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} | \uparrow \uparrow \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & | \uparrow \downarrow \rangle = \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & | \downarrow \uparrow \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & | \downarrow \downarrow \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The two middle states are not permissible because they distinguish between identical particles. The solution is to form symmetric and anti-symmetric superpositions of them, but let's follow Griffiths' approach for the time being. The four spin states are labeled as shown below.
$\begin{matrix} \text{a} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{b} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{c} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & \text{d} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The identity and spin operators in units of h/2π are now defined. The final two are the up and down spin ladder operators.
$\begin{matrix} I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & S_x = \frac{1}{2} \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & S_y = \frac{1}{2} \begin{pmatrix} 0 & -i \ i & 0 \end{pmatrix} & S_z = \frac{1}{2} \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & S = S_x + S_y + S_z \ & S_u = \begin{pmatrix} 0 & 1 \ 0 & 0 \end{pmatrix} & S_d = \begin{pmatrix} 0 & 0 \ 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
Next the total spin operator and total spin operator in the z‐direction are defined using kronecker, Mathcadʹs command for tensor matrix multiplication.
$\begin{matrix} S_{tot} = \text{kronecker(S, I)} + \text{kronecker(I, S)} & Sz_{tot} = \text{kronecker} \left( S_z,~I \right) + \text{kronecker} \left( I,~ S_z \right) \end{matrix} \nonumber$
Calculation of the expectation values for total spin in the z‐direction for spin states a, b, c and d reveals the problem mentioned above. For S = 1 we expect Sz values of ‐1, 0 and 1. The extra value for Sz = 0 indicates an interpretive problem.
$\begin{matrix} \text{a}^T Sz_{tot} \text{a} = 1 & \text{b}^T Sz_{tot} \text{b} = 0 & \text{c}^T Sz_{tot} \text{c} = 0 & \text{d}^T Sz_{tot} \text{d} = -1 \end{matrix} \nonumber$
Griffiths solves the problem by operating with the lowering operator on spin state a, which yields an in‐phase superposition (unnormalized) of spin states b and c.
$\left( \text{kronecker} \left( S_d,~I \right) + \text{kronecker} \left( I,~ S_d \right) \right) \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} \nonumber$
Repeating with the result from above yields an unnormalized d spin state.
$\left( \text{kronecker} \left( S_d,~I \right) + \text{kronecker} \left( I,~ S_d \right) \right) \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 2 \end{pmatrix} \nonumber$
Operating on the unnormalized d spin state yields a null vector, suggesting that the original spin states might be reformulated as triplet and singlet states.
$\left( \text{kronecker} \left( S_d,~I \right) + \text{kronecker} \left( I,~ S_d \right) \right) \begin{pmatrix} 0 \ 0 \ 0 \ 2 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 0 \end{pmatrix} \nonumber$
Given this hint and the fact that the initial spin states are orthonormal, we preserve this property in the new spin states by constructing an out-of-phase superposition of b and c. This gives us a revised set of orthonormal spin vectors. In conventional notation these states are |11 >, |10 >, |1‐1 > and |00 >, where the first number is the total spin value and the second the spin in the z‐direction. Thus, we have a triplet and a singlet state, as will be confirmed below.
$\begin{matrix} \Psi_{1p1} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \Psi_{10} = \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & \Psi_{1m1} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} & \Psi_{00} = \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{-1}{ \sqrt{2}} \ 0 \end{pmatrix} \end{matrix} \nonumber$
We now establish by calculation of expectation values for S2 (constructed below) and Sz that the first three spin states are members of a triplet state (S = 1) and the final spin state is a singlet (S = 0). The eigenvalues for the S2 operator are S(S + 1) which is 2 for the triplet state and 0 for the singlet state.
$\text{SS} = \text{kronecker} \left( S^2,~ I \right) + \text{kronecker} \left( I,~S^2 \right) + 2 \left[ \text{kronecker} \left( S_x,~ S_x \right) + \text{kronecker} \left( S_y,~S_y \right) + \text{kronecker} \left( S_z,~S_z \right) \right] \nonumber$
$\begin{matrix} \left( \Psi_{1p1} \right)^T \text{SS} \Psi_{1p1} = 2 & \left( \Psi_{10} \right)^T \text{SS} \Psi_{10} = 2 & \left( \Psi_{1m1} \right)^T \text{SS} \Psi_{1m1} = 2 & \left( \Psi_{00} \right)^T \text{SS} \Psi_{00} = 0 \ \left( \Psi_{1p1} \right)^T Sz_{tot} \Psi_{1p1} = 1 & \left( \Psi_{10} \right)^T Sz_{tot} \Psi_{10} = 0 & \left( \Psi_{1m1} \right)^T Sz_{tot} \Psi_{1m1} = -1 & \left( \Psi_{00} \right)^T Sz_{tot} \Psi_{00} = 0 \end{matrix} \nonumber$
Next, we look at this problem from the energy perspective and use the spin‐spin interaction Hamiltonian to calculate the energy eigenvalues and eigenvectors for the two spin‐1/2 particle system.
$H_{SpinSpin} = \text{kronecker} \left( S_x,~ S_x \right) + \text{kronecker} \left( S_y,~S_y \right) + \text{kronecker} \left( S_z,~ S_z \right) \nonumber$
$H_{SpinSpin} = \begin{pmatrix} 0.25 & 0 & 0 & 0 \ 0 & -0.25 & 0.5 & 0 \ 0 & 0.5 & -0.25 & 0 \ 0 & 0 & 0 & 0.25 \end{pmatrix} \nonumber$
We now ask Mathcad to calculate the eigenvalues and eigenvectors of the spin‐spin operator. These results are displayed by constructing a matrix which contains the eigenvalues in the top row and the corresponding eigenvectors in the columns below them.
$\begin{matrix} i = 1..4 & \text{E = eigenvals} \left( H_{SpinSpin} \right) & \text{EigenvalsEigenvec = rsort} \left( \text{stack} \left( E^T,~ \text{eigenvecs} \left( H_{SpinSpin} \right) \right),~1 \right) \end{matrix} \nonumber$
$\text{EigenvalsEigenvec} = \begin{pmatrix} -0.75 & 0.25 & 0.25 & 0.25 \ 0 & 1 & 0 & 0 \ 0.707 & 0 & 0 & 0.707 \ -0.707 & 0 & 0 & 0.707 \ 0 & 0 & 1 & 0 \end{pmatrix} \nonumber$
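The same construction is easy to reproduce with numpy (np.kron playing the role of Mathcad's kronecker); the sketch below checks the Sz and S² expectation values for the triplet/singlet basis and the eigenvalues of the spin-spin Hamiltonian.

```python
# Two spin-1/2 particles: total-spin operators built from tensor (Kronecker) products.
import numpy as np

I2 = np.eye(2, dtype=complex)
Sx = 0.5*np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5*np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5*np.array([[1, 0], [0, -1]], dtype=complex)

Sz_tot = np.kron(Sz, I2) + np.kron(I2, Sz)
S1sq = Sx@Sx + Sy@Sy + Sz@Sz                      # equals (3/4) I for a spin-1/2 particle
# S_total^2 = S1^2 + S2^2 + 2 (S1x S2x + S1y S2y + S1z S2z)
SS = np.kron(S1sq, I2) + np.kron(I2, S1sq) \
     + 2*(np.kron(Sx, Sx) + np.kron(Sy, Sy) + np.kron(Sz, Sz))

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c = 1/np.sqrt(2)
states = {"|11>":  np.kron(up, up),
          "|10>":  c*(np.kron(up, dn) + np.kron(dn, up)),
          "|1-1>": np.kron(dn, dn),
          "|00>":  c*(np.kron(up, dn) - np.kron(dn, up))}

for name, v in states.items():
    print(name, (v @ SS @ v).real, (v @ Sz_tot @ v).real)   # expect 2,2,2,0 and 1,0,-1,0

H_spinspin = np.kron(Sx, Sx) + np.kron(Sy, Sy) + np.kron(Sz, Sz)
print(np.round(np.linalg.eigvalsh(H_spinspin), 2))          # expect -0.75 and 0.25 (x3)
```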
These calculations are consistent with those that preceded and with the results presented on page 284 of Griffithsʹ text.
2.36: Hund's Rule
In a recent discussion of the role of electron–electron repulsion in interpreting chemical reaction mechanisms and spectroscopic phenomena, Liu (1) gives an incorrect answer to the question: Why is the triplet state lower in energy than the corresponding singlet state?
After reviewing the mathematical form of the singlet and triplet wave functions for a two-electron system, Liu presents the plausible and frequently used explanation that the antisymmetric triplet spatial wave function keeps electrons apart while the symmetric singlet spatial wave function permits electrons to be close together. Therefore it follows that electron–electron repulsion must be lower in the triplet state, giving it a lower total energy, E, than the singlet state, which clearly must have greater interelectron repulsion.
Snow and Bills (2) challenged this explanation over thirty years ago with a review of low- and high-level calculations on the 1s¹2s¹ and 1s¹2p¹ excited states of the helium atom available in the research literature. The most accurate calculations (for all practical purposes exact) available for the helium 1s¹2s¹ excited state are shown in Table 1.¹ All numerical results are reported in atomic units (energy: Eh = 2.626 MJ mol⁻¹; distance: a0 = 52.9 pm).
It is obvious that the explanation Liu offers for the relative stability of the triplet state is not supported by accurate quantum mechanical calculations. The electrons are actually closer together on average in the triplet state (smaller r12), and consequently electron–electron repulsion (Vee) and electron kinetic energy (T) are higher in the triplet state.² As the table shows, the real reason the triplet state lies lower is the greater electron–nucleus attraction (Vne). In other words, Vne decreases more than Vee and T increase, leading to a more stable triplet state.
Table 1. Expectation Values for the Singlet and Triplet States of the Helium 1s¹2s¹ Excited State Calculated Using the Exact Wave Function
$\begin{array}{|c c c c|} \hline \text{Property} & ~^1S & \Delta & ~^3S \ \hline \langle E \rangle & -2.146 & -0.029 & -2.175 \ \langle T \rangle & 2.146 & 0.029 & 2.175 \ \langle V_{ne} \rangle & -4.541 & -0.078 & -4.619 \ \langle V_{ee} \rangle & 0.250 & 0.018 & 0.268 \ \langle r_{12} \rangle & 5.270 & -0.822 & 4.448 \ \hline \end{array} \nonumber$
Note: All numerical results are reported in atomic units (Energy: Eh = 2.626 MJ mol⁻¹; distance: a0 = 52.9 pm). Δ = ³S − ¹S.
Two questions emerge at this point:
• What is the origin of the incorrect explanation that the results in Table 1 refute?
• How do we “explain” the results in Table 1?
The answer to the first question seems clear in Snow and Bills' article: first-order perturbation theory. Using zero-order wave functions (the He⁺ 1s and 2s eigenfunctions) to calculate the singlet and triplet energies of the helium 1s¹2s¹ excited state yields the results shown in Table 2. While this simple calculation correctly shows that the triplet state is more stable than the singlet state, it cannot be safely used for interpretive purposes because it violates the virial theorem, which requires $\langle E \rangle = - \langle T \rangle = \frac{1}{2} \langle V_{ \text{ne}} + V_{ \text{ee}} \rangle$. In other words, no physical significance should be attached to the lower interelectron repulsion it yields for the triplet state.
Table 2. Expectation Values for the Singlet and Triplet States of the Helium 1s¹2s¹ Excited State Calculated Using the Zero-Order Wave Function
$\begin{array}{|c c c c|} \hline \text{Property} & ~^1S & \Delta & ~^3S \ \hline \langle E \rangle & -2.036 & -0.088 & -2.124 \ \langle T \rangle & 2.500 & 0 & 2.500 \ \langle V_{ne} \rangle & -5.000 & 0 & -5.000 \ \langle V_{ee} \rangle & 0.464 & -0.088 & 0.376 \ \langle r_{12} \rangle & 3.085 & 0.046 & 3.131 \ \hline \end{array} \nonumber$
In the search for an answer to the second question, we accept guidance from Robert Mulliken’s famous remark (3): “... the more accurate the calculations became, the more the concepts tended to vanish into thin air.” In other words, we seek a level of theory that is quantum mechanically sound and also comprehensible in terms of traditional chemical concepts such as orbitals. The best way to find this theoretical level is to move up gradually from the first-order perturbation theory calculation summarized in Table 2. The calculations that follow have been carried out in the Mathcad programming environment and are available on the Internet (4).
An obvious improvement to the first-order perturbation theory calculation is to add a variational parameter, α, to the 1s and 2s wave functions:
$\Psi_{1s} = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha r) \nonumber$
$\Psi_{2s} = \sqrt{ \frac{ \alpha^3}{32 \pi}} (2 - \alpha r) \text{exp} \left( - \frac{ \alpha r}{2} \right) \nonumber$
Results for such a variational calculation on the 1s¹2s¹ singlet and triplet states of He and Li⁺ are shown in Table 3.
Table 3. One-Parameter Variational Calculations on He and Li⁺ Singlet and Triplet States
$\begin{array}{|c | c c c| c c c|} \hline \text{Property} & & \text{Helium Atom} & & & \text{Lithium Ion} \ \hline ~ & ~^1S & \Delta & ~^3S & ~^1S & \Delta & ~^3S \ \hline \alpha & 1.815 & 0.035 & 1.850 & 2.815 & 0.035 & 2.850 \ \hline \langle E \rangle & -2.058 & -0.080 & -2.138 & -4.951 & -0.124 & -5.075 \ \langle T \rangle & 2.058 & 0.080 & 2.138 & 4.951 & 0.124 & 5.075 \ \langle V_{ne} \rangle & -4.536 & -0.088 & -4.624 & -10.555 & -0.131 & -10.686 \ \langle V_{ee} \rangle & 0.421 & -0.106 & 0.315 & 0.653 & -0.117 & 0.536 \ \hline \end{array} \nonumber$
This calculation shows that the triplet state has a lower energy because both Vne and Vee have lower values in the triplet state. This is a hint that the simple notion that only Vee counts is not valid. This calculation also correctly shows that the triplet state species is smaller (has a larger decay constant α) than the singlet state.
The next step is obvious: a two-parameter calculation that assigns the 2s orbital an independent variational parameter β. The results for this calculation are shown in Table 4. The first thing to note is that this wave function is not quite good enough for He. It shows the singlet state slightly lower in energy than the triplet, but it does show, in agreement with the exact calculations, that Vee is higher for the triplet state. Everything is in order for Li⁺, showing that Vne is the reason the triplet state lies lower in energy than the singlet: its decrease overwhelms the increases in T and Vee. Additional calculations on Be²⁺, B³⁺, and so forth are consistent with the lithium ion results. Given this success it appears justified to use the two-parameter wave function to formulate an answer to the second question.
Table 4. Two-Parameter Variational Calculations on He and Li⁺ Singlet and Triplet States
$\begin{array}{|c | c c c| c c c|} \hline \text{Property} & & \text{Helium Atom} & & & \text{Lithium Ion} \ \hline ~ & ~^1S & \Delta & ~^3S & ~^1S & \Delta & ~^3S \ \hline 1s ( \alpha ) & 2.013 & -0.019 & 1.994 & 3.019 & -0.020 & 2.999 \ \hline 2s ( \beta ) & 0.925 & 0.626 & 1.551 & 1.681 & 0.889 & 2.570 \ \langle E \rangle & -2.170 & 0.003 & -2.167 & -5.091 & -0.012 & -5.103 \ \langle T \rangle & 2.170 & -0.003 & 2.167 & 5.091 & 0.012 & 5.103 \ \langle V_{ne} \rangle & -4.594 & -0.026 & -4.620 & -10.629 & -0.055 & -10.684 \ \langle V_{ee} \rangle & 0.254 & 0.033 & 0.287 & 0.448 & 0.031 & 0.479 \ \langle r_{1s} \rangle & 0.745 & 0.007 & 0.752 & 0.497 & 0.003 & 0.500 \ \langle r_{2s} \rangle & 6.489 & -2.620 & 3.869 & 3.569 & -1.235 & 2.334 \ \hline \end{array} \nonumber$
These calculations show that Vee increases and Vne decreases in going from the singlet to the triplet state. The results also reveal a sharp decrease in the average orbital radius for the 2s electron in the triplet state. These findings are self-consistent: the antisymmetric character of the triplet state spatial wave function permits a sharp contraction of the 2s orbital, increasing the interelectronic repulsion with the 1s electron, but at the same time greatly increasing the favorable attractive interaction between the 2s electron and the nucleus. Shenkuan previously offered a similar analysis on the basis of fourth-order perturbation theory calculations (5). He summarized the results of his study as follows:
In neutral atoms, in many positive ions, and in small molecules, the energy differences among multiplets are dominated by the energy differences in electron–nuclear attractions—not by the energy differences of interelectron repulsions, which was held traditionally.
The singlet–triplet energy difference can be examined further by consideration of the following two-step mechanism using the lithium ion calculation as an example:
$~^1S \left( \alpha = 3.019,~ \beta = 1.681 \right) \rightarrow ~^3S^* \left( \alpha = 3.019,~ \beta = 1.681 \right) \rightarrow ~^3S \left( \alpha = 2.999,~ \beta = 2.570 \right) \nonumber$
In the singlet–triplet transition two things change: spatial symmetry and orbital size. In the first step the spatial symmetry changes (symmetric to antisymmetric) while the orbitals are frozen at singlet-state size. As shown in Table 5, kinetic energy and electron–electron repulsion decrease, while electron–nucleus and total energy increase. In the second step the orbitals relax to their optimum triplet-state sizes. This net orbital contraction increases kinetic energy and electron–electron repulsion, and decreases electron–nucleus and total energy.
This mechanism is visualized in Figure 1, in which the radial distribution functions³ for the three states are graphed versus the coordinates of both electrons in contour format. The three panes of the figure clearly show the symmetry change and the subsequent orbital contraction, providing visual support for the numeric results presented in Table 5.
Table 5. Mechanism for a Singlet–Triplet Transition for the 1s¹2s¹ Excited State of Li⁺
$\begin{array}{| c c c c c c|} \hline \text{Property} & ~^1S & \Delta & \text{Intermediate} & \Delta & ~^3S \ \hline 1s ( \alpha ) & 3.019 & 0 & 3.019 & -0.020 & 2.999 \ 2 s( \beta ) & 1.681 & 0 & 1.681 & 0.889 & 2.570 \ \langle E \rangle & -5.091 & 0.131 & -4.960 & -0.143 & -5.103 \ \langle T \rangle & 5.091 & -0.376 & 4.715 & 0.388 & 5.103 \ \langle V_{ee} \rangle & 0.448 & -0.142 & 0.306 & 0.173 & 0.479 \ \langle V_{ne} \rangle & -10.629 & 0.649 & -9.980 & -0.704 & -10.684 \ \hline \end{array} \nonumber$
Summary
This example teaches the important lesson that an intuitively plausible qualitative explanation may not be correct. Qualitative models for atomic and molecular phenomena require validation by rigorous calculations based on quantum mechanical principles. For example, for the electronic states examined in this study Vee represents less than 4% of the total energy. Vne, the only negative contribution, dominates at about 69% while kinetic energy contributes 31%. In the light of this breakdown it is not surprising that Vee is not the reason for the greater stability of the triplet state.
Notes
1. Please consult ref 2 for the appropriate references to the original literature. These can be found in Tables 1 and 2.
2. The smaller value of the average interelectron separation for the triplet state implies a smaller atomic volume. Therefore, kinetic energy increases from singlet to triplet because it scales as V^{-2/3}.
3. For example, the singlet state spatial wave function can be written as $\Psi_s \left( r_1,~ r_2 \right) = N_s \left[ 1s \left( r_1 \right) 2s \left( r_2 \right) + 2s \left( r_1 \right) 1s \left( r_2 \right) \right] \nonumber$ where r1 and r2 are the coordinates of the electrons and N_s is the normalization constant. The singlet state distribution function is $R_s \left( r_1,~r_2 \right) \approx \Psi_s \left( r_1,~r_2 \right)^2 r_1^2 r_2^2 \nonumber$ Similar arguments yield the triplet state distribution function.
Literature Cited
1. Liu, R. S. H. J. Chem. Educ. 2005, 82, 558–560.
2. Snow, R. L.; Bills, J. L. J. Chem. Educ. 1974, 51, 585–586.
3. Mulliken, R. S. J. Chem. Phys. 1965, 43, S2.
4. Mathcad is a product of Mathsoft Engineering & Education, Inc., 101 Main Street, Cambridge, MA 02142, www.mathsoft.com. The Mathcad file is available at www.users.csbsju.edu/~frioux/stability/HundsRuleCalc.pdf/.
5. Shenkuan, N. J. Chem. Educ. 1992, 69, 800–803.
2.37: Hund's Rule - Singlet-Triplet Calculations with Mathcad
Two-parameter Study of the 1s¹2s¹ Excited State of He and Li⁺ - Hund's Rule
The trial variational wave functions for the 1s¹2s¹ excited state of the helium atom and lithium ion are scaled hydrogen 1s and 2s orbitals:
$\begin{matrix} \Psi_{1s} ( r,~ \alpha ) = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha r) & \Psi_{2s} (r,~ \beta ) = \sqrt{ \frac{ \beta^3}{32 \pi}} (2 - \beta r ) \text{exp} \left( \frac{- \beta r}{2} \right) \end{matrix} \nonumber$
Nuclear charge: Z = 3
Seed values for α and β: $\begin{matrix} \alpha = Z + 1 & \beta = Z - 1 \end{matrix}$
The purpose of this analysis is to illustrate Hund's rule by calculating the energy of the singlet and triplet states for the 1s¹2s¹ electronic configuration. The singlet state has a symmetric orbital wave function and the triplet state has an antisymmetric orbital wave function:
$\begin{matrix} \Psi_S = \frac{1s(1)2s(2)+1s(2)2s(1)}{ \sqrt{2 + 2S_{1s2s}^2}} & \Psi_T = \frac{1s(1)2s(2)-1s(2)2s(1)}{ \sqrt{2 - 2S_{1s2s}^2}} \end{matrix} \nonumber$
The integrals required for variational calculations on these states are given below:
$\begin{matrix} T_{1s} ( \alpha ) = \frac{ \alpha^2}{2} & T_{2s} ( \beta ) = \frac{ \beta^2}{8} & V_{N1s} ( \alpha ) = - Z \alpha & V_{N2s} ( \beta ) = \frac{-Z \beta}{4} \end{matrix} \nonumber$
$\begin{matrix} T_{1s2s} ( \alpha,~ \beta ) = -4 \sqrt{2 \alpha^5 \beta^5} \frac{ \beta - 4 \alpha}{( \beta + 2 \alpha )^4} & V_{N1s2s} ( \alpha,~ \beta ) = 4 Z \sqrt{2 \alpha^3 \beta^3} \frac{-2 \alpha + \beta}{( \beta + 2 \alpha)^3} \ V_{1212} ( \alpha,~ \beta ) = 16 \alpha^3 \beta^3 \frac{13 \beta^2 + 20 \alpha^2 - 30 \beta \alpha}{( \beta + 2 \alpha)^7} & S_{1s2s} ( \alpha,~ \beta) = 32 \sqrt{2 \alpha^3 \beta^3} \frac{ \alpha - \beta}{( \beta + 2 \alpha)^4} \ V_{1122} ( \alpha,~ \beta ) = \beta \alpha \frac{ \beta^4 + 10 \beta^3 \alpha + 8 \alpha^4 + 20 \alpha^3 \beta + 12 \beta^2 \alpha^2}{( \beta + 2 \alpha)^5} \end{matrix} \nonumber$
The next step in this calculation is to collect these terms in an expression for the energy of the singlet and triplet states and then minimize the energy with respect to the variational parameters, α and β. The results of this minimization procedure are shown below.
Singlet state calculation:
$E_S ( \alpha,~ \beta ) = \frac{ \begin{array}{l} T_{1s} ( \alpha ) + T_{2s} ( \beta ) + 2 T_{1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta ) \ + V_{N1s} ( \alpha ) + V_{N2s} ( \beta ) + 2 V_{N1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta ) \ + V_{1122} ( \alpha,~ \beta ) + V_{1212} ( \alpha,~ \beta ) \end{array}}{1 + S_{1s2s} ( \alpha,~ \beta)^2} \nonumber$
$\begin{matrix} \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \text{Minimize} \left( E_s,~ \alpha,~ \beta \right) & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \begin{pmatrix} 3.019 \ 1.681 \end{pmatrix} \end{matrix} \nonumber$
Break down the total energy into kinetic, electron-nuclear potential, and electron-electron potential energy.
$\begin{matrix} T = \frac{ T_{1s} ( \alpha ) + T_{2s} ( \beta ) + 2 T_{1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta )}{1 + S_{1s2s} ( \alpha,~ \beta )^2} & V_{ne} = \frac{ V_{N1s} ( \alpha ) + V_{N2s} ( \beta ) + 2 V_{N1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta )}{1 + S_{1s2s} ( \alpha,~ \beta )^2} & V_{ee} = \frac{ V_{1122} ( \alpha, ~ \beta ) + V_{1212} ( \alpha,~ \beta )}{1 + S_{1s2s} ( \alpha,~ \beta )^2} \end{matrix} \nonumber$
$\begin{matrix} T = 5.091 & V_{ne} = -10.629 & V_{ee} = 0.448 & E_s ( \alpha,~ \beta ) = -5.091 \end{matrix} \nonumber$
Calculate <r1s>, <r2s> and the absolute magnitude of the 1s-2s overlap:
$\begin{matrix} \int_0^{ \infty} r \Psi_{1s} (r,~ \alpha)^2 4 \pi r^2 dr = 0.497 & \int_0^{ \infty} r \Psi_{2s} (r,~ \beta )^2 4 \pi r^2 dr = 3.569 & \int_0^{ \infty} \left| \Psi_{1s} (r,~ \alpha ) \Psi_{2s} (r,~ \beta ) \right| 4 \pi r^2 dr = 0.251 \end{matrix} \nonumber$
Display the radial distribution function for the 1s and 2s orbitals:
Display contour plot of singlet wave function:
$\begin{matrix} i = 0 .. 80 & r1_i = .085i & j = 0 .. 80 & r2_j = .085 j \end{matrix} \nonumber$
$\begin{matrix} \Psi (r1,~ r2 ) = \frac{ \Psi_{1s} (r1,~ \alpha ) \Psi_{2s} (r2,~ \beta ) + \Psi_{1s} (r2,~ \alpha ) \Psi_{2s} (r1,~ \beta)}{ \sqrt{2 \left( 1 + S_{1s2s} ( \alpha,~ \beta )^2 \right)}} r1 r2 & M_{i,~ j} = \Psi \left( r1_i,~ r2_j \right)^2 \end{matrix} \nonumber$
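The contour plot called for above can be generated with matplotlib (a sketch, not the original Mathcad figure); the grid mirrors the 0.085-spaced 80-point grid defined above and the optimized singlet parameters for the lithium ion.

```python
# Contour plot of the singlet radial distribution for the Li+ 1s(1)2s(1) state.
import numpy as np
import matplotlib.pyplot as plt

alpha, beta = 3.019, 1.681   # optimized singlet parameters from above

psi1s = lambda r, a: np.sqrt(a**3/np.pi) * np.exp(-a*r)
psi2s = lambda r, b: np.sqrt(b**3/(32*np.pi)) * (2 - b*r) * np.exp(-b*r/2)
S = 32*np.sqrt(2*alpha**3*beta**3) * (alpha - beta) / (beta + 2*alpha)**4

r = 0.085 * np.arange(81)
r1, r2 = np.meshgrid(r, r)
psi = (psi1s(r1, alpha)*psi2s(r2, beta) + psi1s(r2, alpha)*psi2s(r1, beta)) \
      / np.sqrt(2*(1 + S**2)) * r1 * r2

plt.contour(r1, r2, psi**2, levels=15)
plt.xlabel("r1 / a0"); plt.ylabel("r2 / a0"); plt.title("Singlet radial distribution")
plt.show()
```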
Triplet state calculation:
$E_T ( \alpha,~ \beta ) = \frac{ \begin{array}{l} T_{1s} ( \alpha ) + T_{2s} ( \beta ) - 2 T_{1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta ) \ + V_{N1s} ( \alpha ) + V_{N2s} ( \beta ) - 2 V_{N1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta ) \ + V_{1122} ( \alpha,~ \beta ) - V_{1212} ( \alpha,~ \beta ) \end{array}}{1 - S_{1s2s} ( \alpha,~ \beta)^2} \nonumber$
Minimization of E(α, β) simultaneously with respect to α and β. $\begin{matrix} \alpha = Z & \beta = Z - 1 \end{matrix}$
$\begin{matrix} \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \text{Minimize} \left( E_T,~ \alpha,~ \beta \right) & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \begin{pmatrix} 2.999 \ 2.570 \end{pmatrix} \end{matrix} \nonumber$
Break down the total energy into kinetic, electron-nuclear potential, and electron-electron potential energy.
$\begin{matrix} T = \frac{ T_{1s} ( \alpha ) + T_{2s} ( \beta ) - 2 T_{1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta )}{1 - S_{1s2s} ( \alpha,~ \beta )^2} & V_{ne} = \frac{ V_{N1s} ( \alpha ) + V_{N2s} ( \beta ) - 2 V_{N1s2s} ( \alpha,~ \beta ) S_{1s2s} ( \alpha,~ \beta )}{1 - S_{1s2s} ( \alpha,~ \beta )^2} & V_{ee} = \frac{ V_{1122} ( \alpha, ~ \beta ) - V_{1212} ( \alpha,~ \beta )}{1 - S_{1s2s} ( \alpha,~ \beta )^2} \end{matrix} \nonumber$
$\begin{matrix} T = 5.103 & V_{ne} = -10.684 & V_{ee} = 0.479 & E_T ( \alpha,~ \beta ) = -5.103 \end{matrix} \nonumber$
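For reference, both minimizations can be reproduced with a compact Python/scipy sketch (a stand-in for Mathcad's Minimize) built from the integrals listed above; setting Z = 2 gives the helium columns of the summary table below and Z = 3 the lithium ion columns.

```python
# Two-parameter singlet/triplet variational energies for the 1s(1)2s(1) excited state.
import numpy as np
from scipy.optimize import minimize

Z = 3   # 2 for He, 3 for Li+

def E(p, sign):   # sign = +1 for the singlet, -1 for the triplet
    a, b = p
    T1s, T2s, VN1s, VN2s, V1s1s = a**2/2, b**2/8, -Z*a, -Z*b/4, 5/8*a
    T12   = -4*np.sqrt(2*a**5*b**5) * (b - 4*a) / (b + 2*a)**4
    VN12  = 4*Z*np.sqrt(2*a**3*b**3) * (-2*a + b) / (b + 2*a)**3
    V1212 = 16*a**3*b**3 * (13*b**2 + 20*a**2 - 30*a*b) / (b + 2*a)**7
    V1122 = a*b*(b**4 + 10*a*b**3 + 8*a**4 + 20*a**3*b + 12*a**2*b**2) / (b + 2*a)**5
    S     = 32*np.sqrt(2*a**3*b**3) * (a - b) / (b + 2*a)**4
    num = (T1s + T2s + sign*2*T12*S + VN1s + VN2s + sign*2*VN12*S
           + V1122 + sign*V1212)
    return num / (1 + sign*S**2)

for name, sign in [("singlet", +1), ("triplet", -1)]:
    res = minimize(E, x0=[3.0, 2.0], args=(sign,), method="Nelder-Mead")
    print(name, np.round(res.x, 3), round(res.fun, 3))
# for Z = 3 expect about (3.019, 1.681), -5.091 and (2.999, 2.570), -5.103
```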
Calculate <r1s>, <r2s> and the absolute magnitude of the 1s-2s overlap:
$\begin{matrix} \int_0^{ \infty} r \Psi_{1s} (r,~ \alpha )^2 4 \pi r^2 dr = 0.500 & \int_0^{ \infty} r \Psi_{2s} (r,~ \beta )^2 4 \pi r^2 dr = 2.334 & \int_0^{ \infty} \left| \Psi_{1s} (r,~ \alpha ) \Psi_{2s} (r,~ \beta ) \right| 4 \pi r^2 dr = 0.328 \end{matrix} \nonumber$
Display the radial distribution function for the 1s and 2s orbitals:
Display contour plot of triplet wave function:
$\begin{matrix} \Psi (r1,~ r2 ) = \frac{ \Psi_{1s} (r1,~ \alpha ) \Psi_{2s} (r2,~ \beta ) - \Psi_{1s} (r2,~ \alpha ) \Psi_{2s} (r1,~ \beta)}{ \sqrt{2 \left( 1 - S_{1s2s} ( \alpha,~ \beta )^2 \right)}} r1 r2 & M_{i,~ j} = \Psi \left( r1_i,~ r2_j \right)^2 \end{matrix} \nonumber$
Summary of the calculations for the helium atom and lithium ion:
$\begin{pmatrix} \text{Hund's Rule} & & \text{Helium Atom} & & & \text{Lithium Ion} & \ \text{Property} & \text{Singlet} & \Delta & \text{Triplet} & \text{Singlet} & \Delta & \text{Triplet} \ \alpha & 2.013 & -0.019 & 1.994 & 3.019 & -0.020 & 2.999 \ \beta & 0.925 & 0.626 & 1.551 & 1.681 & 0.889 & 2.570 \ E & -2.170 & 0.003 & -2.167 & -5.091 & -0.012 & -5.103 \ V_{ne} & -4.594 & -0.026 & -4.620 & -10.629 & -0.055 & -10.684 \ V_{ee} & 0.254 & 0.033 & 0.287 & 0.448 & 0.031 & 0.479 \ r_{1s} & 0.745 & 0.007 & 0.752 & 0.497 & 0.003 & 0.500 \ r_{2s} & 6.489 & -2.620 & 3.869 & 3.569 & -1.235 & 2.334 \ \int | 1s2s | d \tau & 0.232 & 0.072 & 0.304 & 0.251 & 0.077 & 0.328 \end{pmatrix} \nonumber$
• The triplet state has a lower energy than the singlet state because electron-nuclear attraction increases more than electron-electron repulsion.
• Atomic size decreases in the triplet state due to the large decrease in the size of the 2s orbital. The 1s orbital's size is basically the same in the singlet and triplet states.
• The sharp decrease in the size of the 2s orbital is responsible for the increase in the electron-nuclear attraction, electron-electron repulsion and the absolute magnitude of the 1s-2s orbital overlap.
Note that this two-parameter calculation does not show the He triplet state lower in energy than the singlet state. However, it does show that the absolute magnitudes of Vne and Vee are greater in the triplet state. This is important because this is the trend that the exact wave function (see reference) reveals (Vee: 0.250 vs 0.268; Vne: -4.541 vs -4.619).
The calculation for the lithium ion does show the triplet state lower in energy and the same trend in Vne and Vee as for the helium atom and the exact calculation. In other words, both electron-electron repulsion and electron-nuclear attraction are stronger in the triplet state, and the real reason the triplet state lies below the singlet in energy is because the decrease in Vne overwhelms the increase in Vee. Thus, the common explanation that the triplet state is favored because of reduced electron-electron repulsion is without merit.
The key phenomenon that accounts for these effects is the dramatic decrease in the size of the 2s orbital in going from the singlet to the triplet state. This accounts for the increases in electron-electron repulsion and electron-nuclear attraction. The size of the 1s orbital is about the same in the singlet and triplet states. The significant increase in the absolute magnitude of the overlap integral in the triplet state is consistent with its higher electron-electron repulsion.
Reference: Snow, R. L.; Bills, J. L. "The Pauli Principle and Electron Repulsion in Helium," Journal of Chemical Education 1974, 51, 585.
$\begin{pmatrix} \text{Property} & \frac{ \text{Singlet}}{ \text{Initial}} & \Delta & \frac{ \text{Singlet}}{ \text{Intermediate}} & \Delta & \frac{ \text{Triplet}}{ \text{Final}} \ \alpha & 3.019 & -0.020 & 2.999 & 0 & 2.999 \ \beta & 1.681 & 0.889 & 2.570 & 0 & 2.570 \ T & 5.091 & 0.450& 5.541 & -0.438 & 5.103 \ V_{ne} & -10.629 & -0.534 & -11.163 & 0.479 & -10.684 \ V_{ee} & 0.448 & 0.174 & 0.622 & -0.143 & 0.479 \ E & -5.091 & 0.091 & -5.000 & -0.103 & -5.103 \end{pmatrix} \nonumber$
• Singlet to singlet with 2s orbital contraction: T, Vee and E increase while Vne decreases.
• Singlet to triplet with frozen orbitals: T, Vee and E decrease while Vne increases.
$\begin{pmatrix} \text{Property} & \frac{ \text{Singlet}}{ \text{Initial}} & \Delta & \frac{ \text{Singlet}}{ \text{Intermediate}} & \Delta & \frac{ \text{Triplet}}{ \text{Final}} \ \alpha & 3.019 & 0 & 3.019 & -0.020 & 2.999 \ \beta & 1.681 & 0 & 1.681 & 0.889 & 2.570 \ T & 5.091 & -0.376 & 4.715 & 0.388 & 5.103 \ V_{ne} & -10.629 & 0.649 & -9.980 & -0.704 & -10.684 \ V_{ee} & 0.448 & -0.142 & 0.306 & 0.173 & 0.479 \ E & -5.091 & 0.131 & -4.960 & -0.143 & -5.103 \end{pmatrix} \nonumber$
• Singlet to triplet with frozen orbitals: T and Vee decrease while Vne and E increase.
• Triplet to triplet with 2s orbital contraction: T and Vee increase while Vne and E decrease.
2.38: Electron Correlation in Two-electron Systems
The purpose of this exercise is to examine five trial wavefunctions for the helium atom and several two-electron ions. The calculations begin with an uncorrelated wavefunction in which both electrons are placed in a hydrogenic orbital with scale factor α. The next four trial functions use several methods to increase the amount of electron correlation in the wave function. As the appended summary of results shows, this gives increasingly favorable agreement with the experimentally determined value for the ground state energy of the species under study. The detailed calculations show that the reason for this improved agreement with experiment is a reduction in electron-electron repulsion.
Because of the electron-electron interaction Schrödinger's equation cannot be solved exactly for the helium atom or more complicated atomic or ionic species. However, the ground state energy of the helium atom can be calculated using approximate methods. One of these is the variation method which requires the evaluation of the following variational integral.
$E = \frac{ \int_0^{ \infty} \Psi_{ \text{trial}} \text{H} \Psi_{ \text{trial}} d \tau}{ \int_0 ^{ \infty} \Psi_{ \text{trial}}^2 d \tau} \nonumber$
First Trial Wavefunction
The variation method is discussed in all of the standard physical chemistry textbooks. As is clear from the expression above this method requires that a trial wavefunction with one or more adjustable parameters be chosen. A logical first choice for such a function would be to assume that the electrons in the helium atom occupy scaled hydrogen 1s orbitals.
$\Psi(1,~2) = \Phi (1) \Phi (2) = \text{exp} \left[ - \alpha \left( r_1 + r_2 \right) \right] \nonumber$
He1ans.mcd illustrates how the ground state energy and the optimum value for the scale factor, α, can be found using Mathcad. The value of -2.8477 hartrees is within 2% of the known ground state energy of the helium atom. The error in the calculation is attributed to the fact that the wavefunction is based on the orbital approximation and, therefore, does not adequately take electron correlation into account. In other words, this wavefunction gives the electrons too much independence, given that they have like charges and tend to avoid one another.
Second Trial Wavefunction
Some electron correlation can be built into the wavefunction by assuming that each electron is in an orbital which is a linear combination of two scaled hydrogen 1s orbital.
$\Phi = \text{exp} ( - \alpha r) + \text{exp} ( - \beta r) \nonumber$
Under the orbital approximation this assumption gives a trial wavefunction of the form
$\Psi (1,~2) = \Phi (1) \Phi(2) = \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \alpha r_2 \right) + \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \beta r_2 \right) \nonumber$
Inspection of this wavefunction indicates that 50% of the time the electrons are in different orbitals, while for the first trial wavefunction the electrons were in the same orbital 100% of the time. He2ans.mcd illustrates how this calculation would be executed. Notice the enormous increase in the complexity of the variational expression for the energy. However, also notice that the calculation is very similar to that using the previous trial wavefunction. The differences are that in this case the expression for the energy is more complex and that it is being minimized simultaneously with respect to two parameters rather than just one. It is also clear that introducing some electron correlation into the trial wavefunction has improved the agreement between theory and experiment.
Third Trial Wavefunction
The extent of electron correlation can be increased further by eliminating the first and last term in the second wavefunction. This yields a wavefunction of the form,
$\Psi (1,~ 2) = \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) \nonumber$
This trial wavefunction places the electrons in different scaled hydrogen 1s orbitals 100% of the time and He3ans.mcd shows that further improvement in the agreement with the literature value of the ground state energy is obtained. This result is within 1% of the actual ground state energy of the helium atom.
Fourth Trial Wavefunction
The third trial wavefunction, however, still rests on the orbital approximation and, therefore, doesn't treat electron correlation adequately. Hylleraas took the calculation a step further by introducing electron correlation directly into the wavefunction by adding a term, r12, involving the inter-electron separation.
$\Psi (1,~2) = \text{exp} \left[ - \alpha \left( r_1 + r_2 \right) \right] \left[ 1 + \beta r_{12} \right] \nonumber$
In the trial wavefunction shown above, if the electrons are far apart r12 is large and the magnitude of the wave function increases favoring that configuration. He4ans.mcd shows that this modification of the trial wavefunction has further improved the agreement between theory and experiment to within 0.5%.
Fifth Trial Wavefunction
Chandrasekhar brought about further improvement by adding Hylleraas's r12 term to the third trial wave function as shown here.
$\Psi (1,~2) = \left[ \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) \right] \left[ 1 + br_{12} \right] \nonumber$
As can be seen in He5ans.mcd, Chandrasekhar's three-parameter wavefunction gives rise to a fairly complicated variational expression for the ground state energy. However, it also gives a result for helium that is within 0.07% of the experimental value for the ground state energy. The experimental value for the ground state energy of the helium atom is -2.90372 Eh.
The calculations that have been outlined here for the helium atom can be repeated for H⁻, Li⁺, Be²⁺, etc. The hydride anion is a particularly interesting case because the first two trial wavefunctions do not predict a stable ion. This indicates that electron correlation is an especially important issue for atoms and ions with small nuclear charge.
Summary of the Results for the Helium Atom
$\begin{matrix} \Psi = \frac{ \alpha^3}{ \pi} \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \alpha r_2 \right) \ E = -2.84766 \ \Psi = \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \alpha r_2 \right) + \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \beta r_2 \right) \ E = -2.86035 \ \Psi = \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) \ E = -2.87566 \ \Psi = \text{exp} \left[ - \alpha \left( r_1 + r_2 \right) \right] \left( 1 + \beta r_{12} \right) \ E = -2.89112 \ \Psi = \left( \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) \right) \left( 1 + br_{12} \right) \ E = -2.90143 \end{matrix} \nonumber$
Experimentally Determined Ground State Energy
$E_{exp} = -2.90372 \nonumber$
The first trial wavefunction is based on the orbital concept - electrons have wavefunctions that are independent of the coordinates of other electrons. Since it does not provide for electron correlation, it will serve as the benchmark in the subsequent series of calculations. The remaining trial wavefunctions will include electron correlation to an increasing degree.
$\Psi (1,~2) = \frac{ \alpha^3}{ \pi} \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \alpha r_2 \right) \nonumber$
When this wavefunction is used in a variational calculation for the ground state energy for two-electron atoms or ions the expression shown below for the energy, E(α), is obtained. This equation is then minimized with respect to the adjustable parameter, α. The calculation for He is shown below.
Nuclear charge: Z = 2
Seed value for scale factor: $\alpha = 2$
Contributions to total energy:
$\begin{matrix} T ( \alpha ) = \alpha^2 & V_{ne} ( \alpha ) = -2 Z \alpha & V_{ee} ( \alpha ) = \frac{5}{8} \alpha \end{matrix} \nonumber$
Minimization of the total energy with respect to the variational parameter:
$\begin{matrix} E ( \alpha ) = T ( \alpha ) + V_{ne} ( \alpha ) + V_{ee} ( \alpha ) & \alpha = \text{Minimize} (E,~ \alpha ) & \alpha = 1.6875 & E( \alpha ) = -2.8477 \end{matrix} \nonumber$
Calculate the orbital energy:
$\begin{matrix} \varepsilon = \frac{ \alpha^2}{2} - Z \alpha + \frac{5}{8} \alpha & \varepsilon = -0.8965 \end{matrix} \nonumber$
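The same one-parameter minimization is easy to reproduce outside Mathcad. The sketch below (an added illustration, not part of the original worksheet; it assumes scipy is available and the function names are arbitrary) minimizes E(α) = α² − 2Zα + 5α/8 and evaluates the orbital energy:

```python
# Variational treatment of a two-electron atom with Psi = exp(-a*r1)*exp(-a*r2)
from scipy.optimize import minimize_scalar

def ground_state(Z):
    E = lambda a: a**2 - 2*Z*a + 5*a/8      # total electronic energy, E(alpha)
    res = minimize_scalar(E)                # optimize the scale factor
    a = res.x
    eps = a**2/2 - Z*a + 5*a/8              # orbital energy (Koopmans' theorem)
    return a, res.fun, eps

alpha, E_total, eps = ground_state(2)       # helium
print(alpha, E_total, eps)                  # 1.6875  -2.8477  -0.8965
```

Looping over Z = 1, 2, 3, 4 reproduces the H-, He, Li+, and Be2+ entries in the table below.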
Compare the results of this calculation to experiment in two ways. [1] Minus the sum of the ionization energies is equal to the experimental ground state energy (you should show this). [2] The orbital energy is a good approximation to the first ionization energy. Repeat this exercise for H- , Li+ and Be2+. Present your results and the comparison with experimental results in tabular form.
$\begin{bmatrix} \text{H} \Psi = \text{E} \Psi & \text{H} & \text{He} & \text{Li} & \text{Be} \ \alpha & 0.6875 & 1.6875 & 2.6875 & 3.6875 \ \varepsilon & -0.0215 & -0.8965 & -2.7715 & -5.6465 \ - \text{IP}_1 & -0.0277 & -0.904 & -2.781 & -5.658 \ \% \text{Error} & 22.4 & 0.83 & 0.34 & 0.20 \ \text{E}_{ \text{atom}} & -0.4727 & -2.8477 & -7.2227 & -13.5977 \ - \left( \text{IP}_1 + \text{IP}_2 \right) & -0.5277 & -2.9037 & -7.2838 & -13.6640 \ \% \text{Error} & 10.4 & 1.94 & 0.80 & 0.40 \end{bmatrix} \nonumber$
• Demonstrate that the virial theorem is satisfied for each calculation: <E> = <V>/2 = -<T>
$\begin{matrix} E ( \alpha ) = -2.8477 & \frac{-2 \text{Z} \alpha + \frac{5}{8} \alpha}{2} = -2.8477 & - \alpha^2 = -2.8477 \end{matrix} \nonumber$
• Complete the table below which breaks down the various contributions to the total energy.
$\begin{pmatrix} \text{Element} & \% \text{T} & \% V_{ne} & \% V_{ee} \ \text{H} & 20.8 & 60.4 & 18.9 \ \text{He} & 26.7 & 63.4 & 9.9 \ \text{Li} & 28.9 & 64.4 & 6.7 \ \text{Be} & 29.9 & 65.0 & 5.1 \end{pmatrix} \nonumber$
Absolute value of energy:
$E_{abs} = \alpha^2 + 2Z \alpha + \frac{5}{8} \alpha \nonumber$
Percent kinetic energy contribution to total energy:
$\frac{T( \alpha)}{E_{abs} ( \alpha)} = 26.7 \% \nonumber$
Percent electron-nuclear potential energy contribution to total energy:
$\frac{ \left| V_{ne} ( \alpha ) \right|}{E_{abs} ( \alpha)} = 63.4 \% \nonumber$
Percent electron-electron potential energy contribution to total energy:

$\frac{V_{ee} ( \alpha )}{E_{abs} ( \alpha)} = 9.9 \% \nonumber$
• Demonstrate that this wave function does not predict a stable hydride ion.
• The hydride anion has a higher energy (-0.4727 Eh) than the hydrogen atom (-0.5000 Eh).
• Identify a deficiency in the wave function that might explain why it does not predict a stable hydride ion.
• The wave function does not allow adequately for electron-electron correlation in a case where the nuclear charge is only +1.
• Explain the improved agreement between theory and experiment as the nuclear charge increases.
• Nuclear-electron potential energy becomes increasingly important, overwhelming electron-electron potential energy, so the inadequacies of the wave function become less important.
• Complete the summary table below. You will be asked to do this for each trial wave function we use in this exercise and subsequently compare the results.
$\begin{matrix} E ( \alpha ) = -2.8477 & T ( \alpha ) = 2.8477 & V_{ne} ( \alpha ) = -6.7500 & V_{ee} ( \alpha ) = 1.0547 \end{matrix} \nonumber$
$\begin{pmatrix} \text{WF1} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{H} & -0.4727 & 0.4727 & -1.375 & 0.4297 \ \text{He} & -2.8477 & 2.8477 & -6.7500 & 1.0547 \ \text{Li} & -7.2227 & 7.2227 & -16.1250 & 1.6797 \ \text{Be} & -13.5977 & 13.5977 & -29.5000 & 2.3047 \end{pmatrix} \nonumber$
This table shows that Vee increases in magnitude from H- to Be2+. Earlier, however, we saw that its percentage contribution to the total energy decreases in this series.
• You should also fill in the following tables for each of the elements and carry them forward to the next Mathcad document.
$\begin{matrix} \begin{pmatrix} \text{H} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -0.4727 & 0.4727 & -1.375 & 0.4297 \ \text{WF2} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF3} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF4} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF5} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \end{pmatrix} & \begin{pmatrix} \text{He} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -2.8477 & 2.8477 & -6.7500 & 1.0547 \ \text{WF2} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF3} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF4} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF5} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \end{pmatrix} \ \begin{pmatrix} \text{Li} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -7.2227 & 7.2227 & -16.1250 & 1.6797 \ \text{WF2} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF3} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF4} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF5} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \end{pmatrix} & \begin{pmatrix} \text{Be} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -13.5977 & 13.5977 & -29.5000 & 2.3047 \ \text{WF2} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF3} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF4} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \ \text{WF5} & \blacksquare & \blacksquare & \blacksquare & \blacksquare \end{pmatrix} \end{matrix} \nonumber$
2.40: Second Trial Wavefunction
$\Psi = \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \alpha r_2 \right) + \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \beta r_2 \right) \nonumber$
When the wavefunction shown above is used in a variational method calculation for the ground state energy for two-electron atoms or ions the two-parameter equation shown below for the energy is obtained. This equation is then minimized simultaneously with respect to the adjustable parameters, α and β.
Nuclear charge: Z = 2
Seed values for scale factors: $\begin{matrix} \alpha = 2 & \beta = Z + 1 \end{matrix}$
Variational energy expression:
$E ( \alpha,~ \beta ) = \frac{ \begin{matrix} \left[ \frac{ \frac{ \alpha^2 + \beta^2}{2} - Z ( \alpha + \beta ) - \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^2} \left( Z - \frac{ \alpha \beta}{ \alpha + \beta} \right)}{1 + \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^3}} \right] ... \ \frac{5}{8} ( \alpha + \beta ) + \frac{2 \alpha \beta \left( \alpha^2 + 3 \alpha \beta + \beta^2 \right)}{( \alpha + \beta)^3} + 4 \begin{bmatrix} \frac{8 \alpha^{2.5} \beta^{1.5} \left( 11 \alpha^2 + 8 \alpha \beta + \beta^2 \right)}{( \alpha + \beta)^2 (3 \alpha + \beta)^3} ... \ + \frac{8 \alpha^{1.5} \beta^{2.5} \left( 11 \beta^2 + 8 \alpha \beta + \alpha^2 \right)}{( \alpha + \beta)^2 (3 \beta + \alpha)^3} ... \ \frac{20 \alpha^3 \beta^3}{( \alpha + \beta)^5} \end{bmatrix} \end{matrix}}{4 \left[ 1 + \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^3} \right]^2} \nonumber$
$\begin{matrix} \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \text{Minimize} (E,~ \alpha,~ \beta) & \begin{pmatrix} \alpha \ beta \end{pmatrix} = \begin{pmatrix} 1.2141 \ 2.1603 \end{pmatrix} & E ( \alpha,~ \beta ) = -2.8603 \end{matrix} \nonumber$
Experimental ground state energy:
$E_{exp} = -2.9037 \nonumber$
Calculate error in calculation:
$\begin{matrix} \text{Error} = \begin{vmatrix} \frac{E_{exp} - E( \alpha,~ \beta)}{E_{exp}} \end{vmatrix} & \text{Error} = 1.4931 \% \end{matrix} \nonumber$
Fill in the table and answer the questions below:
$\begin{pmatrix} \Psi & \text{H} & \text{He} & \text{Li} & \text{Be} \ \alpha & 0.3703 & 1.2141 & 2.0969 & 2.9993 \ \beta & 1.0001 & 2.1603 & 3.2778 & 4.3756 \ E_{atom} & -0.487 & -2.8603 & -7.235 & -13.6098 \ E_{atom} ( \text{exp} ) & -0.5277 & -2.9037 & -7.2838 & -13.6640 \ \% \text{Error} & 7.72 & 1.49 & 0.670 & 0.397 \end{pmatrix} \nonumber$
Fill in the table below and explain why this trial wave function gives better results than the first trial wave function.
$\begin{matrix} T( \alpha,~ \beta ) = \left[ \frac{ \frac{ \alpha^2 + \beta^2}{2} + \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^2} \left( \frac{ \alpha \beta}{ \alpha + \beta} \right)}{1 + \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^3}} \right] & V_{ne} ( \alpha,~ \beta ) = \left[ \frac{ -Z ( \alpha + \beta) - \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^2} Z}{1 + \frac{8 \alpha^{1.5} \beta^{1.5}}{( \alpha + \beta)^3}} \right] \end{matrix} \nonumber$
$\begin{matrix} T( \alpha,~ \beta) = 2.8603 & V_{ne} = -6.7488 \ V_{ee} ( \alpha,~ \beta ) = E( \alpha,~ \beta ) - T( \alpha,~ \beta ) - V_{ne} ( \alpha,~ \beta ) & V_{ee} ( \alpha,~ \beta ) = 1.0281 \end{matrix} \nonumber$
$\begin{pmatrix} \text{WF2} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{H} & -0.4870 & 0.4870 & -1.3705 & 0.3965 \ \text{He} & -2.8603 & 2.8603 & -6.7488 & 1.0281 \ \text{Li} & -7.2350 & 7.2350 & -16.1243 & 1.6544 \ \text{Be} & -13.6098 & 13.6098 & -29.4995 & 2.2799 \end{pmatrix} \nonumber$
Demonstrate that the virial theorem is satisfied.
$\begin{matrix} E ( \alpha,~ \beta ) = -2.8603 & - T ( \alpha,~ \beta ) = -2.8603 & \frac{V_{ne} ( \alpha,~ \beta ) + V_{ee} ( \alpha,~ \beta)}{2} = -2.8603 \end{matrix} \nonumber$
Add the results for this wave function to your summary table for all wave functions.
$\begin{matrix} \begin{pmatrix} \text{H} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -0.4727 & 0.4727 & -1.375 & 0.4297 \ \text{WF2} & -0.4870 & 0.4870 & -1.3705 & 0.3965 \end{pmatrix} & \begin{pmatrix} \text{He} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -2.8477 & 2.8477 & -6.7500 & 1.0547 \ \text{WF2} & -2.8603 & 2.8603 & -6.7488 & 1.0281 \end{pmatrix} \ \begin{pmatrix} \text{Li} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -7.2227 & 7.2227 & -16.1250 & 1.6797 \ \text{WF2} & -7.2350 & 7.2350 & -16.1243 & 1.6544 \end{pmatrix} & \begin{pmatrix} \text{Be} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -13.5977 & 13.5977 & -29.5000 & 2.3047 \ \text{WF2} & -13.6098 & 13.6098 & -29.4995 & 2.2799 \end{pmatrix}\end{matrix} \nonumber$
These tables show that the improved agreement with experimental results (the lower total energy) is due to a reduction in electron-electron repulsion.
2.41: Third Trial Wavefunction
$\Psi = \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) \nonumber$
When the wavefunction shown above is used in a variational method calculation for the ground state energy for two-electron atoms or ions the two-parameter equation shown below for the energy is obtained. This equation is then minimized simultaneously with respect to the adjustable parameters, α and β.
Nuclear charge: Z = 2
Seed values for scale factors: $\begin{matrix} \alpha = Z & \beta = Z + 1 \end{matrix}$
Variational energy expression:
$E ( \alpha,~ \beta ) = \frac{ \frac{ \alpha^2 + \beta^2}{2} - Z ( \alpha + \beta) + \frac{64 \alpha^3 \beta^3}{( \alpha + \beta)^6} [ \alpha \beta - Z ( \alpha + \beta)] + \frac{ \alpha \beta}{ \alpha + \beta} + \frac{ \alpha^2 \beta^2}{( \alpha + \beta)^3} + \frac{20 \alpha^3 \beta^3}{ ( \alpha + \beta)^5}}{1 + \frac{64 \alpha^3 \beta^3}{( \alpha + \beta)^6}} \nonumber$
$\begin{matrix} \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \text{Minimize} (E,~ \alpha,~ \beta ) & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \begin{pmatrix} 1.1885 \ 2.1832 \end{pmatrix} & E ( \alpha,~ \beta ) = -2.8757 \end{matrix} \nonumber$
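As a cross-check on the energy expression above, the same two-parameter minimization can be sketched in Python (an added illustration; scipy, the Nelder-Mead method, and the starting point are assumptions, not part of the original worksheet):

```python
# Psi = exp(-a*r1)exp(-b*r2) + exp(-b*r1)exp(-a*r2) for helium (Z = 2)
from scipy.optimize import minimize

Z = 2

def E(p):
    a, b = p
    S2 = 64*a**3*b**3/(a + b)**6                  # square of the overlap factor
    num = ((a**2 + b**2)/2 - Z*(a + b)            # direct kinetic and nuclear terms
           + S2*(a*b - Z*(a + b))                 # exchange kinetic and nuclear terms
           + a*b/(a + b) + a**2*b**2/(a + b)**3   # electron-electron repulsion
           + 20*a**3*b**3/(a + b)**5)
    return num/(1 + S2)

res = minimize(E, x0=[1.0, 2.0], method="Nelder-Mead")
print(res.x, res.fun)   # approximately [1.19, 2.18] (or swapped) and -2.8757
```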
Experimental ground state energy:
$E_{exp} = -2.9037 \nonumber$
Calculate error in calculation:
$\begin{matrix} \text{Error} = \left| \frac{E_{exp} - E ( \alpha,~ \beta)}{E_{exp}} \right| & \text{Error} = 0.9656 \% \end{matrix} \nonumber$
Summarize the calculations in the following table.
$\begin{pmatrix} \Psi & \text{H} & \text{He} & \text{Li} & \text{Be} \ \alpha & 0.28322 & 1.18853 & 2.07898 & 2.98472 \ \beta & 1.023923 & 2.18317 & 3.29491 & 4.38972 \ E_{atom} & -0.5133 & -2.8757 & -7.2488 & -13.6230 \ E_{atom} ( \text{exp} ) & -0.5277 & -2.9037 & -7.2838 & -13.6640 \ \% \text{Error} & 2.73 & 0.964 & 0.481 & 0.300 \end{pmatrix} \nonumber$
Fill in the table below and explain why this trial wave function gives better results than the previous trial wave function.
$\begin{matrix} T ( \alpha,~ \beta ) = \frac{ \frac{ \alpha^2 + \beta^2}{2} + \frac{64 \alpha^3 \beta^3 ( \alpha \beta )}{( \alpha + \beta)^6}}{1 + \frac{64 \alpha^3 \beta^3}{( \alpha + \beta)^6}} & V_{ne} ( \alpha,~ \beta ) = \frac{ -Z ( \alpha + \beta ) - \frac{64 \alpha^3 \beta^3 Z ( \alpha + \beta)}{( \alpha + \beta)^6}}{1 + \frac{64 \alpha^3 \beta^3}{( \alpha + \beta)^6}} \ V_{ee} ( \alpha,~ \beta ) = \frac{ \frac{ \alpha \beta}{ \alpha + \beta} + \frac{ \alpha^2 \beta^2}{( \alpha + \beta)^3} + \frac{20 \alpha^3 \beta^3}{( \alpha + \beta)^5}}{1 + \frac{64 \alpha^3 \beta^3}{( \alpha + \beta)^6}} & \begin{matrix} T ( \alpha,~ \beta ) = 2.8757 \ V_{ne} ( \alpha,~ \beta ) = -6.7434 \ V_{ee} ( \alpha,~ \beta ) = 0.9921 \end{matrix} \end{matrix} \nonumber$
$\begin{pmatrix} \text{WF3} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{H} & -0.5133 & 0.5133 & -1.3225 & 0.2958 \ \text{He} & -2.8757 & 2.8757 & -6.7434 & 0.9921 \ \text{Li} & -7.2487 & 7.2487 & -16.1217 & 1.6242 \ \text{Be} & -13.6230 & 13.6230 & -29.4978 & 2.2519 \end{pmatrix} \nonumber$
Demonstrate that the virial theorem is satisfied.
$\begin{matrix} E ( \alpha,~ \beta ) = -2.8757 & -T ( \alpha,~ \beta ) = -2.8757 & \frac{V_{ne} ( \alpha,~ \beta) + V_{ee} ( \alpha,~ \beta)}{2} = -2.8757 \end{matrix} \nonumber$
Add the results for this wave function to your summary table for all wave functions.
$\begin{matrix} \begin{pmatrix} \text{H} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -0.4727 & 0.4727 & -1.375 & 0.4297 \ \text{WF2} & -0.4870 & 0.4870 & -1.3705 & 0.3965 \ \text{WF3} & -0.5133 & 0.5133 & -1.3225 & 0.2958 \end{pmatrix} & \begin{pmatrix} \text{He} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -2.8477 & 2.8477 & -6.7500 & 1.0547 \ \text{WF2} & -2.8603 & 2.8603 & -6.7488 & 1.0281 \ \text{WF3} & -2.8757 & 2.8757 & -6.7434 & 0.9921 \end{pmatrix} \ \begin{pmatrix} \text{Li} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -7.2227 & 7.2227 & -16.1250 & 1.6797 \ \text{WF2} & -7.2350 & 7.2350 & -16.1243 & 1.6544 \ \text{WF3} & -7.2487 & 7.2487 & -16.1217 & 1.6242 \end{pmatrix} & \begin{pmatrix} \text{Be} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -13.5977 & 13.5977 & -29.5000 & 2.3047 \ \text{WF2} & -13.6098 & 13.6098 & -29.4995 & 2.2799 \ \text{WF3} & -13.6230 & 13.6230 & -29.4978 & 2.2519 \end{pmatrix} \end{matrix} \nonumber$
These tables show that the improved agreement with experimental results (the lower total energy) is due to a reduction in electron-electron repulsion.
2.42: Fourth Trial Wavefunction
$\Psi (1,~2) = \text{exp} \left[ - \alpha \left( r_1 + r_2 \right) \right] \left( 1 + \beta r_{12} \right) \nonumber$
When the wavefunction shown above is used in a variational method calculation for the ground state energy for two-electron atoms or ions the two-parameter equation shown below for the energy is obtained. This equation is then minimized simultaneously with respect to the adjustable parameters, α and β.
Nuclear charge: Z = 1
Seed values for scale factors: $\begin{matrix} \alpha = Z & \beta = .7 \end{matrix}$
Contributions to total energy:
$\begin{matrix} T ( \alpha,~ \beta ) = \frac{ \frac{1}{2} + \frac{25 \beta}{16 \alpha} + \frac{2 \beta^2}{ \alpha^2}}{ \frac{1}{2 \alpha^2} + \frac{35 \beta}{16 \alpha^3} + \frac{3 \beta^2}{ \alpha^4}} & V_{ne} ( \alpha,~ \beta ) = \frac{ - \frac{Z}{ \alpha} - \frac{15 Z \beta}{4 \alpha^2} - \frac{9Z \beta^2}{ 2 \alpha^3}}{ \frac{1}{2 \alpha^2} + \frac{35 \beta}{16 \alpha^3} + \frac{3 \beta^2}{ \alpha^4}} & V_{ee} ( \alpha,~ \beta ) = \frac{ \frac{5}{16 \alpha} + \frac{ \beta}{ \alpha^2} + \frac{35 \beta^2}{32 \alpha^3}}{ \frac{1}{2 \alpha^2} + \frac{35 \beta}{16 \alpha^3} + \frac{3 \beta^2}{ \alpha^4}} \end{matrix} \nonumber$
Minimization of the total energy with respect to the variational parameters:
$\begin{matrix} E ( \alpha,~ \beta ) = T ( \alpha,~ \beta ) + V_{ne} ( \alpha,~ \beta ) + V_{ee} ( \alpha,~ \beta ) & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \text{Minimize} (E,~ \alpha,~ \beta ) & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \begin{pmatrix} 0.8257 \ 0.4934 \end{pmatrix} & E ( \alpha,~ \beta ) = -0.5088 \end{matrix} \nonumber$
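A quick numerical cross-check of this minimization (an added sketch assuming scipy; not part of the original worksheet), using the kinetic and potential energy expressions above with Z = 1:

```python
# Hylleraas-type wavefunction exp[-a(r1 + r2)](1 + b*r12) for the hydride ion
from scipy.optimize import minimize

Z = 1

def energy(p):
    a, b = p
    D  = 1/(2*a**2) + 35*b/(16*a**3) + 3*b**2/a**4         # normalization denominator
    T  = (1/2 + 25*b/(16*a) + 2*b**2/a**2)/D               # kinetic energy
    Vn = (-Z/a - 15*Z*b/(4*a**2) - 9*Z*b**2/(2*a**3))/D    # electron-nucleus attraction
    Ve = (5/(16*a) + b/a**2 + 35*b**2/(32*a**3))/D         # electron-electron repulsion
    return T + Vn + Ve

res = minimize(energy, x0=[1.0, 0.7], method="Nelder-Mead")
print(res.x, res.fun)   # approximately [0.826, 0.493] and -0.5088
```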
Experimental ground state energy (hydride ion, Z = 1):

$E_{exp} = -0.5277 \nonumber$

Calculate error in calculation:

$\begin{matrix} \text{Error} = \left| \frac{E_{exp} - E ( \alpha,~ \beta )}{E_{exp}} \right| & \text{Error} = 3.59 \% \end{matrix} \nonumber$
Fill in the table and answer the questions below:
$\begin{pmatrix} \Psi & \text{H} & \text{He} & \text{Li} & \text{Be} \ \alpha & 0.8257 & 1.8497 & 2.8564 & 3.8592 \ \beta & 0.4934 & 0.3658 & 0.3354 & 0.3213 \ E_{atom} & -0.5088 & -2.8911 & -7.2682 & -13.6441 \ E_{atom} \text{(exp)} & -0.5277 & -2.9037 & -7.2838 & -13.6640 \ \% \text{Error} & 3.59 & 0.433 & 0.215 & 0.146 \end{pmatrix} \nonumber$
Fill in the table below and explain why this trial wave function gives better results than the previous trial wave function.
$\begin{pmatrix} \text{WF4} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{H} & -0.5088 & 0.5088 & -1.3907 & 0.3731 \ \text{He} & -2.8911 & 2.8911 & -6.7565 & 0.9743 \ \text{Li} & -7.2682 & 7.2682 & -16.1288 & 1.5924 \ \text{Be} & -13.6441 & 13.6441 & -29.5025 & 2.2144 \end{pmatrix} \nonumber$
$\begin{matrix} T ( \alpha,~ \beta ) = 0.5088 & V_{ne} ( \alpha,~ \beta ) = -1.3907 & V_{ee} ( \alpha,~ \beta ) = 0.3731 \end{matrix} \nonumber$
Explain the importance of the parameter β. Why does its magnitude decrease as the nuclear charge increases?
The parameter β adds weight to the r12 term which most directly represents electron correlation in the wavefunction. As the nuclear charge increases, as we have previously seen, Vee becomes less important as a percentage of the total energy. Thus, the impact of the electron correlation term becomes less significant.
Demonstrate that the virial theorem is satisfied.
$\begin{matrix} E ( \alpha,~ \beta ) = -0.5088 & -T ( \alpha,~ \beta ) = -0.5088 & \frac{V_{ne} ( \alpha,~ \beta ) + V_{ee} ( \alpha,~ \beta )}{2} = -0.5088 \end{matrix} \nonumber$
Add the results for this wave function to your summary table for all wave functions.
$\begin{matrix} \begin{pmatrix} \text{H} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -0.4727 & 0.4727 & -1.375 & 0.4297 \ \text{WF2} & -0.4870 & 0.4870 & -1.3705 & 0.3965 \ \text{WF3} & -0.5133 & 0.5133 & -1.3225 & 0.2958 \ \text{WF4} & -0.5088 & 0.5088 & -1.3907 & 0.3731 \end{pmatrix} & \begin{pmatrix} \text{He} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -2.8477 & 2.8477 & -6.7500 & 1.0547 \ \text{WF2} & -2.8603 & 2.8603 & -6.7488 & 1.0281 \ \text{WF3} & -2.8757 & 2.8757 & -6.7434 & 0.9921 \ \text{WF4} & -2.8911 & 2.8911 & -6.7565 & 0.9743 \end{pmatrix} \ \begin{pmatrix} \text{Li} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -7.2227 & 7.2227 & -16.1250 & 1.6797 \ \text{WF2} & -7.2350 & 7.2350 & -16.1243 & 1.6544 \ \text{WF3} & -7.2487 & 7.2487 & -16.1217 & 1.6242 \ \text{WF4} & -7.2682 & 7.2682 & -16.1288 & 1.5924 \end{pmatrix} & \begin{pmatrix} \text{Be} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -13.5977 & 13.5977 & -29.5000 & 2.3047 \ \text{WF2} & -13.6098 & 13.6098 & -29.4995 & 2.2799 \ \text{WF3} & -13.6230 & 13.6230 & -29.4978 & 2.2519 \ \text{WF4} & -13.6441 & 13.6441 & -29.5025 & 2.2144 \end{pmatrix} \end{matrix} \nonumber$
Except for a hiccup in the hydrogen anion results for WF4, these tables show that the improved agreement with experimental results (the lower total energy) is due to a reduction in electron-electron repulsion.
2.43: Fifth Trial Wavefunction and Summary
$\Psi = \left( \text{exp} \left( - \alpha r_1 \right) \text{exp} \left( - \beta r_2 \right) + \text{exp} \left( - \beta r_1 \right) \text{exp} \left( - \alpha r_2 \right) \right) \left( 1 + br_{12} \right) \nonumber$
When Chandrasekhar's wavefunction is used in a variational calculation on a two-electron atom or ion the following expressions are obtained.
Enter the nuclear charge: Z = 2
Enter initial values for α, β, and b (use results from previous calculations):
$\begin{matrix} \alpha = Z & \beta = Z + 1 & b = 0.3 & c = \frac{ \alpha + \beta}{2} & d = \frac{| \alpha - \beta |}{2} \end{matrix} \nonumber$
Define the normalization constant:
$N(b,~ c,~ d) = \frac{ \begin{array}{l l} \left[ -8 d^{10} + \left[ 40 d^8 + \left[ -80 d^6 \left[ 88d^4 + \left( -56 d^2 16 c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] c^2 ... \ + \left[ \left[ -35 d^{10} + \begin{bmatrix} 175 d^8 + \left[ -349 d^6 + \left[ 335 d^4 + \left( -196 d^2 + 70 c^2 \right) c^2 \right] c^2 \right] c^2 c^2 c... \ + \left[ -48 d^{10} + \left[ 240 d^8 + \left[ -480 d^6 \left[ 480 d^4 + \left( - 192 d^2 + 96 c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] b \end{bmatrix} b \right] \right] \end{array}}{8 (c+d)^5 (c-d)^5c^8} \nonumber$
Define the electron kinetic energy:
$T(b,~ c,~ d) = \frac{ \begin{array}{l l} \left[ 8d^{12} + \left[ -48 d^{10} + \left[ 120 d^8 + \left[ -152 d^6 + \left[ 112 d^4 + \left( -56 d^2 + 16c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] c^2 \right] c^2 ... \ + \begin{bmatrix} \left[ 35 d^{12} + \left[ -200 d^{10} + \left[ 474 d^8 + \left[ -598 d^6 + \left[ 353 d^4 + \left( -114 d^2 + 50c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] c^2 \right] c... \ + \left[ 48 d^{12} + \left[ -272 d^{10} + \left[ 640d^8 + \left[ -800 d^6 + \left[ 592 d^4 + \begin{pmatrix} -80 d^2 ... \ +64c^2 \end{pmatrix} c^2 \right] c^2 \right] c^2 \right] c^2 \right] c^2 \right] b \end{bmatrix} b \end{array}}{ \left[ 8 (c+d)^5 (c-d)^5c^8 \right] N(b,~ c,~ d)} \nonumber$
Define electron-nucleus potential energy:
$VN(b,~ c,~ d) = -4 cZ \frac{ \begin{array} \begin{array}{l l} \left[ -4d^{10} + \left[ 20 d^{8} + \left[ -40d^6 + \left[ 44 d^4 + \left( -28d^2 + 8c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] c^2 ... \ + \begin{bmatrix} \left[ -15 d^{10} + \left[ 75 d^8 + \left[ -149 d^6 + \left[ 139d^4 + \left( -80d^2 + 30c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] c ... \ + \left[ -18d^{10} + \left[ 90 d^8 + \left[ - 180 d^6 + \left[ 180 d^4 + \left( -60 d^2 + 36c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 \right] b \end{bmatrix} b \end{array}}{ \left[ 8 (c+d)^5 (c-d)^5 c^8 \right] N(b,~c,~d)} \nonumber$
Define electron-electron potential energy:
$VE(b,~c,~d) = \frac{ \begin{array}{l l} \left[ 10d^8 + \left[ -42d^6 + \left[ 74d^4 + \left( -62d^2 + 20c^2 \right) c^2 \right] c^2 \right] c^2 \right] c^2 ... \ + \begin{bmatrix} \left[ 32 d^8 + \left[ -128 d^6 + \left[ 192 d^4 + \left( -160 d^2 + 64c^2 \right) c^2 \right] c^2 \right] c^2 \right] c ... \ + \left[ 35 d^8 + \left[ -140 d^6 + \left[ 209 d^4 + \left( -126 d^2 + 70c^2 \right) c^2 \right] c^2 \right] c^2 \right] b \end{bmatrix} b \end{array}}{ \left[ 16(c+d)^4 (c-d)^4c^7 \right]N(b,~c,~d)} \nonumber$
Define total energy:
$\text{E(b, c, d) = T(b, c, d) + VN(b, c, d) + VE(b, c, d)} \nonumber$
Minimize total energy simultaneously with respect to the parameters, b, c, d:
$\begin{matrix} \begin{pmatrix} b \ c \ d \end{pmatrix} = \text{Minimize(E, b, c, d)} & \begin{pmatrix} b \ c \ d \end{pmatrix} = \begin{pmatrix} 0.2934 \ 1.8226 \ 0.3862 \end{pmatrix} & \text{E(b, c, d) = -2.9014} \end{matrix} \nonumber$
Experimental ground state energy: Eexp = -2.9037
Calculate error in calculation:
$\begin{matrix} \text{Error =} \left| \frac{E_{exp} - E(b,~c,~d)}{E_{exp}} \right| & \text{Error} = 0.0782 \% \end{matrix} \nonumber$
Calculate α and β from the values of c and d:
$\begin{matrix} \text{Given} & c = \frac{ \alpha + \beta}{2} & d = \frac{ | \alpha - \beta |}{2} & \text{Find} ( \alpha,~ \beta ) = \begin{pmatrix} 1.4364 \ 2.2088 \end{pmatrix} \end{matrix} \nonumber$
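Equivalently, since c and d are just the mean and half-difference of the orbital exponents, the inversion can be written down directly (a small added check):

$\begin{matrix} \alpha = c - d = 1.8226 - 0.3862 = 1.4364 & \beta = c + d = 1.8226 + 0.3862 = 2.2088 \end{matrix} \nonumber$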
Fill in the table and answer the questions below:
$\begin{pmatrix} \Psi & \text{H} & \text{He} & \text{Li} & \text{Be} \ \alpha & 0.4925 & 1.4364 & 2.3616 & 3.2932 \ \beta & 1.0744 & 2.2088 & 3.2996 & 4.3745 \ b & 0.3326 & 0.2934 & 0.2769 & 0.2687 \ E_{atom} & -0.5255 & -2.9014 & -7.2772 & -13.6525 \ E_{atom} ( \text{exp} ) & -0.5277 & -2.9037 & -7.2838 & -13.6640 \ \% \text{Error} & 0.4090 & 0.0792 & 0.0909 & 0.0838 \end{pmatrix} \nonumber$
Explain the importance of the parameter b. Why does its magnitude decrease as the nuclear charge increases?
The parameter b adds weight to the r12 term which most directly represents electron correlation in the wavefunction. As the nuclear charge increases, as we have previously seen, Vee becomes less important as a percentage of the total energy. Thus, the impact of the electron correlation term becomes less significant.
Fill in the table below and explain why this trial wave function gives better results than the previous trial wave function.
$\begin{matrix} E(b,~ c,~ d) = -2.9014 & T(b,~c,~d) = 2.9017 & VN(b,~c,~d) = -6.7524 & VE(b,~c,~d) = 0.9492 \end{matrix} \nonumber$
$\begin{pmatrix} \text{WF5} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{H} & -0.5275 & 0.5275 & -1.3738 & 0.3208 \ \text{He} & -2.9017 & 2.9017 & -6.7524 & 0.9492 \ \text{Li} & -7.2772 & 7.2772 & -16.1265 & 1.5721 \ \text{Be} & -13.6525 & 13.6525 & -29.5011 & 2.1960 \end{pmatrix} \nonumber$
Demonstrate that the virial theorem is satisfied for the helium atom:
$\begin{matrix} E(b,~c,~d) = -2.9014 & T(b,~c,~d) = 2.9017 & \frac{VN(b,~c,~d) + VE(b,~c,~d)}{2} = -2.9016 \end{matrix} \nonumber$
Add the results for this wave function to your summary table for all wave functions.
$\begin{matrix} \begin{pmatrix} \text{H} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -0.4727 & 0.4727 & -1.375 & 0.4297 \ \text{WF2} & -0.4870 & 0.4870 & -1.3705 & 0.3965 \ \text{WF3} & -0.5133 & 0.5133 & -1.3225 & 0.2958 \ \text{WF4} & -0.5088 & 0.5088 & -1.3907 & 0.3731 \ \text{WF5} & -0.5275 & 0.5275 & -1.3738 & 0.3208 \end{pmatrix} & \begin{pmatrix} \text{He} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -2.8477 & 2.8477 & -6.7500 & 1.0547 \ \text{WF2} & -2.8603 & 2.8603 & -6.7488 & 1.0281 \ \text{WF3} & -2.8757 & 2.8757 & -6.7434 & 0.9921 \ \text{WF4} & -2.8911 & 2.8911 & -6.7565 & 0.9743 \ \text{WF5} & -2.9017 & 2.9017 & -6.7524 & 0.9492 \end{pmatrix} \ \begin{pmatrix} \text{Li} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -7.2227 & 7.2227 & -16.1250 & 1.6797 \ \text{WF2} & -7.2350 & 7.2350 & -16.1243 & 1.6544 \ \text{WF3} & -7.2487 & 7.2487 & -16.1217 & 1.6242 \ \text{WF4} & -7.2682 & 7.2682 & -16.1288 & 1.5924 \ \text{WF5} & -7.2772 & 7.2772 & -16.1265 & 1.5721 \end{pmatrix} & \begin{pmatrix} \text{Be} & \text{E} & \text{T} & \text{V}_{ne} & \text{V}_{ee} \ \text{WF1} & -13.5977 & 13.5977 & -29.5000 & 2.3047 \ \text{WF2} & -13.6098 & 13.6098 & -29.4995 & 2.2799 \ \text{WF3} & -13.6230 & 13.6230 & -29.4978 & 2.2519 \ \text{WF4} & -13.6441 & 13.6441 & -29.5025 & 2.2144 \ \text{WF5} & -13.6525 & 13.6525 & -29.5011 & 2.1960 \end{pmatrix} \ \end{matrix} \nonumber$
Except for a hiccup in the hydrogen anion results for WF4, these tables show that the improved agreement with experimental results (the lower total energy) is due to a reduction in electron-electron repulsion through the use of trial wavefunctions that improve the treatment of electron correlation.
2.44: The Crucial Role of Kinetic Energy in Interpreting Ionization Energies
This Journal has recently published a series of four articles by Gillespie, Spencer, and Moog under the banner of “Demystifying Introductory Chemistry”, an effort supported by the Task Force on General Chemistry (1). In their opening remarks they make the following statement (1a):
In our opinion we make chemistry seem more abstract, more mysterious, and more esoteric than necessary. Chemistry is certainly a complicated subject, but shrouding it in esoteric jargon and impenetrable theory makes it seem much more difficult than it really is.
We accept many of the excellent recommendations the authors make for improving the general chemistry sequence, but we have serious reservations about one of their arguments. In their attempt to draw back the quantum mechanical veil shrouding introductory chemistry they offer an incorrect interpretation of the trend of ionization energies for the first two elements of the periodic table and carry this form of reasoning forward to Li and Be. We quote one of their paragraphs (1a) in its entirety and then proceed to our objections and a quantum mechanical analysis.
The first ionization energy of helium (2.37 MJ mol-1) is nearly twice that of hydrogen (1.31 MJ mol-1); thus, these ionization energies are consistent with the two electrons in helium being at about the same distance from the nucleus as the single electron in hydrogen. These two electrons occupy a spherical region around the nucleus— the first (n = 1) shell. The ionization energy of helium is slightly less than twice the ionization energy of hydrogen because of the repulsion between the two electrons in helium.
We find two problems with this paragraph. The first is the statement that the two electrons in helium are about the same distance from the nucleus as the hydrogen electron. We do not believe the experimental ionization energies themselves provide support for this assertion. Furthermore, we do not believe there is other reliable experimental or theoretical evidence that supports the assertion. The second is the fact that the authors have used a classical explanation that is based solely on potential energy (Coulomb’s law, potential energy = -Ze2/r): the electrons are about the same distance from the nucleus, the nuclear charge increases by a factor of two, so the attraction to the nucleus increases by a factor of two, but the ionization energy increases by a factor of only 1.81 because of electron–electron repulsion. What is missing in the above potential energy argument is the fact that kinetic energy is an important factor in the quantum world of atoms and molecules, and cannot be ignored. More than three decades ago Ruedenberg (2) demonstrated the crucial importance of electron kinetic energy in understanding the physical nature of the chemical bond. Unfortunately, in spite of discussion of this profoundly important analysis in the pedagogical literature (3–5) and review journals (2b, 6), Ruedenberg’s work has been largely ignored by the undergraduate chemistry community. We hope that the quantum mechanical treatment that we present will help to underline the role of kinetic energy in understanding atomic structure and stability, and in particular its importance in understanding the ionization energy ratio of 1.81 mentioned above.
The hydrogen atom problem can be solved exactly, but the two-electron helium atom cannot. However, it can be solved to an arbitrary degree of accuracy by approximate methods. So the first issue is what level of theory should be employed in order to achieve an understanding of the problem under study. In our analysis, hydrogenic 1s orbitals
$\Psi = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} ( - \alpha r) \nonumber$
will be used with the variational principle to obtain the electron orbital energies for the hydrogen and helium atoms. The orbital energies will be used because Koopmans’ theorem (7) states that the ionization energy can be approximated as the negative of the orbital energy. This is correct for the hydrogen atom, but approximate for the helium atom. However, as we shall see, the error introduced by this approximation is relatively small. We show that this simple quantum mechanical analysis gives satisfactory agreement with experimental results and provides a basis from which to reach an interpretation of the relative values of the ionization energies of hydrogen and helium.
Use of the trial wave function chosen above with the variational theorem yields the results given in the tables. Table 1 shows the expressions for the total electron energy, the electron orbital energies, and the optimum values of α obtained when the variational principle is applied to the total energy. Table 2 gives the results of the variational calculation for the electronic orbital energy for hydrogen and helium.
Table 1. Total Electron and Electron Orbital Energy for H and He
$\begin{array}{|c c c c|} \hline \hline \text{Element} & \text{Total Electron Energy} & \text{Electron Orbital Energy} & \text{Optimum } \alpha \ \hline H~ (Z = 1) & E_H = \frac{ \alpha^2}{2} - Z \alpha & \varepsilon_H = \frac{ \alpha^2}{2} - Z \alpha & \alpha = 1 \ He ~(Z = 2) & E_{He} = \alpha^2 - 2Z \alpha + \frac{5 \alpha}{8} & \varepsilon_{He} = \frac{ \alpha^2}{2} - Z \alpha + \frac{5 \alpha}{8} & \alpha = 1.6875 \ \hline \end{array} \nonumber$
The orbital energies are important because we will make use of Koopmans’ theorem in our analysis. Appendix A provides a brief discussion of the use of Koopmans’ theorem in interpreting the ionization energy of the helium atom. Atomic units have been used because they are properly scaled for atomic calculations. The conversion factor to SI units is 1 hartree = 2.6255 MJ mol-1 (8).
We now discuss the entries in these tables. For the hydrogen atom there are two contributions to the electronic energy, EH: kinetic energy (α²/2) and the electrostatic interaction with the nucleus (-Zα). Because there is only one electron in the hydrogen atom, this is also the expression for the electron orbital energy. Minimization of EH with respect to α yields α = 1, EH = -0.5 hartree, and εH = -0.5 hartree. Applying Koopmans’ theorem (IE = -εH) yields an ionization energy of 0.5 hartree, which is in agreement with experiment. The evaluation of the variational integrals that appear in the hydrogen and helium calculations outlined in this paper can be found in papers previously published in this Journal (9, 10).
For the helium atom with two electrons there are five contributions to the total electronic energy, EHe: the kinetic energy of each electron (α²/2), the electrostatic interaction of each electron with the nucleus (-Zα), and the electrostatic interaction of the electrons with each other (5α/8). The electron orbital energy for the helium atom, εHe, is simply those energy contributions that an individual electron experiences: kinetic energy (α²/2), electrostatic interaction with the nucleus (-Zα), and the electrostatic interaction with the other electron (5α/8). Minimization of EHe with respect to α yields α = 1.6875, EHe = -2.848 hartree, and εHe = -0.8965 hartree. Appendix B provides a graphical representation of the variational procedure. Thus, the predicted ionization energy using Koopmans’ theorem is 0.8965 hartree. The experimental ionization energy is 0.9037 hartree, so this result is not exact; but it is reasonably close, given the simplicity of the wave function chosen.
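For the record, the arithmetic behind the helium orbital energy quoted above is simply (an added worked step):

$\varepsilon_{He} = \frac{(1.6875)^2}{2} - 2(1.6875) + \frac{5(1.6875)}{8} = 1.4238 - 3.3750 + 1.0547 = -0.8965 \text{ hartree} \nonumber$

so that the predicted ratio of ionization energies is 0.8965/0.5 = 1.79.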
The first thing to note is that this analysis yields a ratio of ionization energies (IE ≈ -ε) of 1.79, as compared with the experimental value of 1.81. This gives us some confidence that we have chosen a reliable level of theory to deal with the question at hand. Note also that this quantum mechanical treatment brings into serious question the assumption (1a) that the electrons are about the same distance from the nucleus in the hydrogen atom and the helium atom. The last row of the second table shows that the ratio of $\langle R \rangle$, the quantum mechanically calculated average radial distance from the nucleus, is 0.59, which is considerably different from 1.0.
Continuing with the results given in the second table, we note that in going from H to He the orbital potential energy more than doubles (2.32). The orbital potential energy in He is the sum of the electron–nuclear attraction (-2α = -3.375 hartree) and the electron–electron repulsion (5α/8 = 1.055 hartree) terms. Thus we see that the electron–nuclear attraction more than triples, but the total potential energy increases by a factor of only 2.32 because of electron–electron repulsion. Coming back to the original issue, that the ionization energy increases by a factor of only 1.81 from H to He, we can now see that the explanation must lie in the kinetic energy term. Table 2 shows that the orbital kinetic energy almost triples (2.85) in going from H to He. Thus, the reason for the less than doubling of the ionization energy cannot be found by considering only potential energy, which alone predicts more than a doubling of the ionization energy. The explanation for the less than doubling of the ionization energy actually lies in the large increase in electron kinetic energy as the electrons are drawn closer to the nucleus by the increase in nuclear charge from H to He.
Table 2. Variational Results for Hydrogen and Helium
$\begin{array}{|c c c c|} \hline \hline \text{Parameter} & \text{Hydrogen} & \text{Helium} & \text{Ratio, He/H} \ \hline \text{Electron orbital energy} & \varepsilon_H = \alpha^2/2 - \alpha & \varepsilon_{He} = \frac{ \alpha^2}{2} - 2 \alpha + \frac{5 \alpha}{8} \ \text{Optimum, } \alpha & 1.0 & 1.6875 & 1.6875 \ \text{Exp. ionization energy} & 0.5 & 0.9037 & 1.81 \ \varepsilon ~ \text{(orbital energy)} & -0.5 & -0.8965 & 1.79 \ \text{Kinetic energy} & 0.5 & 1.424 & 2.85 \ \text{Potential energy} & -1.0 & -3.375 + 1.055 = -2.32 & 2.32 \ \langle R \rangle & 1.5 & 0.89 & 0.59 \end{array} \nonumber$
Earlier we referenced Ruedenberg’s analysis of the physical nature of the chemical bond. What he said about molecular bonding also pertains to atomic electronic structure (2c):
Finally, it should be emphasized that the phenomenon of the eigenstate is intimately related to the fact that molecules are subject to the laws of quantum mechanics; there are no ground states in classical mechanics or electrostatics [emphasis added]. Consequently, a physical picture seeking to describe chemical bonding must necessarily incorporate features which distinguish quantum mechanics from classical mechanics and electrostatics…It may be added that the existence of a ground state is intrinsically connected with the fact that the variation integral contains both kinetic and potential energy…Omission of one or the other from consideration cannot, therefore, lead to a full interpretation of binding.
We have seen that one needs to use a quantum chemical treatment to understand the ratio of ionization energies for H and He. We wish to point out that interpreting the ionization of any atom or molecule requires quantum chemical tools and a consideration of both kinetic and potential energy.
While the quantum mechanical arguments outlined here may be used with undergraduate physical chemistry students, they are obviously too advanced for introductory students. However, we also should not use incorrect classical models. This leaves us with the important task of deciding what we can say to introductory students about the details of the periodicity of physical properties, such as ionization energies, that is both correct and understandable.
Literature Cited
1. (a) Gillespie, R. J.; Spencer, J. N.; Moog, R. S. J. Chem. Educ. 1996, 73, 617. (b) Gillespie, R. J.; Spencer, J. N.; Moog, R. S. Ibid., 622. (c) Spencer, J. N.; Moog, R. S.; Gillespie, R. J. Ibid., 627. (d) Spencer, J. N.; Moog, R. S.; Gillespie, R. J. Ibid., 631.
2. (a) Ruedenberg, K. Rev. Mod. Phys. 1962, 34, 326. (b) Feinberg, M. J.; Ruedenberg, K.; Mehler, E. L. Adv. Quantum Chem. 1970, 5, 27. (c) Ruedenberg, K. In Localization and Delocalization in Quantum Chemistry, Vol. I; O. Chalvet et al., Eds.; D. Reidel: Dordrecht, 1975; pp 223–245.
3. Baird, N. C. J. Chem. Educ. 1986, 63, 660.
4. DeKock, R. L. J. Chem. Educ. 1987, 64, 934.
5. Harcourt, R. D. Am. J. Phys. 1988, 56, 660.
6. Kutzelnigg, W. Angew. Chem. Int. Ed. Eng. 1973, 12, 546.
7. Koopmans, T. A. Physica 1933, 1, 104. Lowe, J. P. Quantum Chemistry, 2nd ed.; Academic: New York, 1993; pp 361–363.
8. International Union of Pure and Applied Chemistry. Quantities, Units, and Symbols in Physical Chemistry, 2nd ed.; Blackwell Scientific: Oxford, 1993. Almost any physical chemistry or quantum chemistry textbook will also make reference to “atomic units”, for which the energy unit is called the “hartree.”
9. Snow, R. L.; Bills, J. L. J. Chem. Educ. 1975, 52, 506.
10. Lee, S.-Y. J. Chem. Educ. 1983, 60, 935.
11. Linnett, J. W. Wave Mechanics and Valency; Methuen: London, 1960; pp 1–2.
12. Atkins, P. W. Physical Chemistry, 5th ed.; Freeman: New York, 1994; p 371.
Appendix
A. Employing Koopmans’ theorem at the Hartree-Fock level to interpret the helium atom ionization energy breaks the ionization process into two steps: (1) frozen ionization (constant α) followed by (2) relaxation to the He+ ground state:
$He ( \alpha = 1.6875) \rightarrow He^{+} ( \alpha = 1.6875);~ \Delta E = - \varepsilon = 0.89648~ \text{hartree} \nonumber$
$He^+ ( \alpha=1.6875) \rightarrow He^+ ( \alpha = 2);~ \Delta E = -0.0488 \text{ hartree} \nonumber$
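The relaxation energy in the second step follows directly from the one-electron energy $E_{He^+}( \alpha ) = \frac{ \alpha^2}{2} - Z \alpha$ with Z = 2 (an added check):

$\Delta E = E_{He^+}(2) - E_{He^+}(1.6875) = (2 - 4) - (1.4238 - 3.3750) = -2.0000 + 1.9512 = -0.0488 \text{ hartree} \nonumber$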
One reason Koopmans’ theorem is successful in approximating the ionization in this way is that the energy change accompanying relaxation to the true ground state of the helium ion is small. Another reason is that Hartree–Fock calculations ignore electron correlation. With the wave function used in this study the correlation energy, EC , is
$E_C = IE_{exp} - \left( E_{He^+} - E_{He} \right) = 0.9037 - (-2.0000 + 2.8478) = 0.0560 \text{ hartree} \nonumber$

Thus, the correlation (0.0560 hartree) and relaxation (-0.0488 hartree) energies nearly cancel. This cancellation of terms accounts for the better-than-expected agreement between theory and experiment, given the simplicity of the wave function chosen.

B. The variational expression for the helium atom energy using the wave function chosen in this study is
$E( \alpha ) = \alpha^2 - 4 \alpha + \frac{5 \alpha}{8} \nonumber$
Using the fact that the average radial distance of the electrons from the nucleus is inversely proportional to the scale factor α, $\langle R \rangle = 1.5/ \alpha$, $\langle R \rangle$ becomes the variational parameter and the variation integral becomes
$E(R) = \frac{9}{4R^2} - \frac{6}{R} + \frac{15}{16R} = T(R) + V_{en} (R) + V_{ee} (R) \nonumber$
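Minimizing this expression analytically (an added step, not in the original) locates the optimum directly:

$\frac{dE}{d \langle R \rangle} = - \frac{9}{2 \langle R \rangle ^3} + \frac{81}{16 \langle R \rangle ^2} = 0 \qquad \Rightarrow \qquad \langle R \rangle = \frac{8}{9} = 0.89, \qquad E = - \frac{729}{256} = -2.848 \text{ hartree} \nonumber$

which reproduces the values in Table 2.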
The figure below provides a graphical representation of the variation method (minimization of E with respect to $\langle R \rangle$) and shows the behavior of the kinetic energy (T), electron–nuclear potential energy (Ven), electron–electron potential energy (Vee), and total electronic energy (E) as a function of $\langle R \rangle$, the average radial distance of the helium atom electrons from the nucleus.
Contributions to the electronic energy of the He atom
Note the relative insignificance of Vee compared to T and Ven. In other words, T and Ven are the dominant terms contributing to the ground state of the helium atom. This figure also clearly illustrates that the existence of a ground state (2) depends on the kinetic energy term. Atomic and molecular stability, therefore, can only be understood in quantum mechanical terms, and the foundation of all quantum mechanics is de Broglie’s hypothesis that matter has wavelike properties, λ = h/mv. For one-dimensional problems it is easy to show the relationship between Schrödinger’s equation and de Broglie’s wave equation (11, 12).
2.45: Quantum Dots Are Artificial Atoms
"A quantum dot (QD) is a nanostructure that can confine the motion of an electron in all three spatial dimensions. This gives rise to a set of discrete and narrow electronic energy levels, similar to those of atomic physics (1)."
"Essentially, artificial atoms (quantum dots) are small boxes about 100 nm on a side, contained in a semiconductor, and holding a number of electrons that may be varied at will. As in real atoms, the electrons are attracted to a central location. In a natural atom, this central location is a positively charged nucleus; in an artificial atom, electrons are typically trapped in a bowl-like parabolic potential well in which electrons tend to fall in towards the bottom of the bowl (2)."
In most cases the nanostructures resemble "pancakes" in which the electrons are restricted to motion in the x-y plane. Thus the appropriate potential is the two-dimensional harmonic oscillator.
Schrödinger's equation can be solved for this potential in both cartesian and circular coordinates, yielding the following expressions (in units of hν) for the quantized energy levels. For an excellent introduction to the quantum mechanics of the two-dimensional harmonic oscillator see French & Taylor, "An Introduction to Quantum Physics," pp 454 - 463, plus three exercises on page 469.

$\begin{matrix} \text{E} \left( \text{n}_x,~ \text{n}_y \right) = \text{n}_x + \text{n}_y + 1 & \text{where} & \text{n}_x = 0,~1,~2,~3... & \text{and} & \text{n}_y = 0,~1,~2,~3 ... \ \text{E} (n,~ l) = 2n+|l| + 1 & \text{where} & n = 0,~1,~2,~3 ... & \text{and} & l = 0,~ \pm 1,~ \pm 2 ... \end{matrix} \nonumber$
The quantum numbers and energies of the first ten states are shown in tabular format below.
$\begin{array}{|c|c|c|c|c|c|c|} \hline \text{n}_x & \text{n}_y & \text{E} & & n & l & \text{E} \ \hline 0 & 0 & 1 & & 0 & 0 & 1 \ \hline 1 & 0 & 2 & & 0 & +1 & 2 \ \hline 0 & 1 & 2 & & 0 & -1 & 2 \ \hline 1 & 1 & 3 & & 0 & +2 & 3 \ \hline 2 & 0 & 3 & & 0 & -2 & 3 \ \hline 0 & 2 & 3 & & 1 & 0 & 3 \ \hline 3 & 0 & 4 & & 0 & +3 & 4 \ \hline 0 & 3 & 4 & & 0 & -3 & 4 \ \hline 1 & 2 & 4 & & 1 & +1 & 4 \ \hline 2 & 1 & 4 & & 1 & -1 & 4 \ \hline \end{array} \nonumber$
The magnitude of the energy level spacing (hν) depends on the size of the quantum dot. Each level in the diagram below can be thought of as an electronic shell and when a level is filled a new row is started in the periodic table.
According to the Pauli exclusion principle, the first four energy levels have a capacity for 20 electrons. Assuming that there is no splitting of the energy level degeneracy in multi-electron atoms, the aufbau principle would give the following structure for the periodic table for a world made up of such "pancake" atoms. Each entry gives the quantum numbers of the last electron added to the "atom." For example, atom 4 would have four electrons, and their quantum numbers would be |0 0 ½>, |0 0 -½>, |1 0 ½>, and |0 1 ½>.
$\begin{matrix} 1 & & & & & & & 2 \ | 0 ~0~ \frac{1}{2} \rangle & & & & & & & | 0 ~0 ~\frac{-1}{2} \rangle \ 3 & 4 & &&&& 5 & 6 \ |1~ 0 ~\frac{1}{2} \rangle & |0~ 1 ~ \frac{1}{2} \rangle & & & & & |1~0~ \frac{-1}{2} \rangle & |0 ~1~ \frac{-1}{2} \rangle \ 7 & 8 & 9 & & & 10 & 11 & 12 \ |1~ 1~ \frac{1}{2} \rangle & |2~ 0~ \frac{1}{2} \rangle & |0~2~ \frac{1}{2} \rangle & & & |1~ 1~ \frac{-1}{2} \rangle & |2~ 0~ \frac{-1}{2} \rangle & |0~2~ \frac{-1}{2} \rangle \ 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \ |3~0~ \frac{1}{2} \rangle & |0 ~ 3~ \frac{1}{2} \rangle & |1~2~ \frac{1}{2} \rangle & |2 ~ 1~ \frac{1}{2} \rangle & |3 ~ 0 ~ \frac{-1}{2} \rangle & |0 ~ 3 ~ \frac{-1}{2} \rangle & |1 ~2 ~ \frac{-1}{2} \rangle & |2 ~ 1 ~ \frac{-1}{2} \rangle \end{matrix} \nonumber$
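The shell capacities used above (2, 4, 6, and 8 electrons for the first four levels, 20 in total) follow from simply counting the (nx, ny) pairs at each energy; a short added Python sketch:

```python
# Degeneracies of the 2-D harmonic oscillator: E(nx, ny) = nx + ny + 1
from collections import Counter

levels = Counter(nx + ny + 1 for nx in range(10) for ny in range(10))
for E in range(1, 5):
    g = levels[E]                  # spatial degeneracy of level E
    print(E, g, 2*g)               # energy, degeneracy, electron capacity with spin
# capacities 2, 4, 6, 8 -> 20 electrons in the first four shells
```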
References:
1. E. E. Vdovin, Science, 2000, 290, 122.
2. R. C. Ashoori, Nature, 1996, 379, 413.
2.46: Calculating the Atomic Radius of Polonium
Three experimental facts are required to determine the atomic radius of a metallic element such as polonium:
1. density,
2. molar mass and
3. crystal structure.
The crystal structure of room temperature polonium is simple cubic, the only metallic element in the periodic table with this structure. Its unit cell, or basic repeating unit, is shown below.
As noted above, this calculation will require, in addition to the crystal structure, the density and molar mass of polonium, which are given below along with Avogadroʹs number.
• Density: $\rho=9.32\; g/cm^{3}$
• Molar Mass (MM): $208.98\; g/mol$
• Atoms per mole: $N_a = 6.022 \times 10^{23}$
Assuming that atomic polonium is a sphere, as shown above, we can calculate its atomic volume.
Atomic volume
$V_{atomic} = \dfrac{4}{3} \pi R^3 \label{1}$
However, as the unit cell (basic building block) shows, the effective volume of a polonium atom is a cube of side $2R$. Therefore the effective volume of an atom of polonium is $(2R)^3 = 8R^3$.
Effective atomic volume
$V_{effective} =(2R)^3 = 8R^3 \label{2}$
The next step involves calculating the packing efficiency of the simple cubic structure ‐ in other words, the ratio of the atomic and effective volumes.
Ratio of atomic and effective volumes
\begin{align} \dfrac{V_{atomic}} {V_{effective}} &= \dfrac{ \frac{4}{3} \pi R^3}{ 8R^3} \[4pt] &= 0.524 \label{3} \end{align}
We see that only 52.4% of the space is occupied by polonium atoms. Next the reciprocal of the density, along with the molar mass and Avogadroʹs number is used to calculate the effective volume of an individual polonium atom.
Experimental effective volume
\begin{align*} V_{effective} &= \left( \dfrac{1}{\rho} \right) \left( \dfrac{MM}{N_a} \right) \label{4a} \[4pt] &= \left( \dfrac{1}{9.32\; \cancel{g}/cm^{3}} \right)\left( \dfrac{208.98\; \cancel{g}/ \cancel{mol}}{6.022 \times 10^{23}\; atoms/ \cancel{mol}}\right) \[4pt] &= 3.723 \times 10^{-23} \;cm^{3}/atom \label{4b} \end{align*}
The atomic volume is 52.4% of the effective volume.
$V_{atomic}= 0.524 V_{effective} = 0.195 \times 10^{-22} cm^3 \label{5}$
This allows the calculation of the atomic radius of polonium.
\begin{align*} \dfrac{4}{3} \pi R^3 &= 0.524\; V_{effective} \label{6} \[4pt] R &= \left(\dfrac{0.524\; V_{effective}}{\frac{4}{3} \pi}\right)^{\frac{1}{3}} \[4pt]&= 167 \times 10^{-12} \;m= 167\;pm \label{7} \end{align*}
This is in agreement with the literature value.
2.47: Calculating the Atomic Radius of Gold
Three experimental facts are required to determine the atomic radius of a metallic element such as gold: density, molar mass and crystal structure.
The crystal structure of gold is face‐centered cubic. Its unit cell, or basic repeating unit, shows that it contains four gold atoms and that the gold atoms touch along the face diagonal. In terms of the gold atom radius, the unit cell dimension is $2 \sqrt{2} \text{R}$.
As noted above, this calculation will require, in addition to the crystal structure, the density and molar mass of gold, which are given below along with Avogadroʹs number.
• Density: $19.32 \frac{ \text{gm}}{ \text{cm}^3}$
• Molar mass: $197.0 \frac{ \text{gm}}{ \text{mol}}$
• Atoms per mole: $6.022 \times 10^{23}$
Assuming that atomic gold is a sphere, as shown above, we can calculate its atomic volume.
• Atomic volume: $V_{Atom} = \frac{4}{3} \pi \text{R}^3$
However, the effective volume of a gold atom is 25% of the unit cell volume, $(2 \sqrt{2} \text{R} )^3$.
• Effective atomic volume: $V_{Aeffective} = \frac{(2 \sqrt{2} \text{R})^3}{4} \rightarrow V_{Aeffective} = 4(2)^{ \frac{1}{2}} \text{R}^3$
The next step involves calculating the packing efficiency of the face‐centered cubic structure ‐ in other words, the ratio of the atomic and effective atomic volumes. We see that only 74% of the space is occupied by gold atoms.
• Ratio of atomic and effective atomic volumes according to the fcc model:
$\frac{V_{Atom}}{V_{Aeffective}} = \frac{ \frac{4}{3} \pi \text{R}^3}{4 \times 2^{ \frac{1}{2}} \text{R}^3} \text{float, 2} \rightarrow \frac{V_{Atom}}{V_{Aeffective}} = .74 \nonumber$
Next the reciprocal of the density, along with the molar mass and Avogadroʹs number is used to calculate the experimental effective volume of an individual gold atom.
• Experimental effective atomic volume:
$\begin{matrix} V_{ExEffective} = \frac{1 cm^3}{19.32 gm} \frac{197.0 gm}{6.022 ~10^{23}} & V_{ExEffective} = 1.693 \times 10^{-23} cm^3 \end{matrix} \nonumber$
According to the fcc model the atomic volume is 74% of the experimental effective atomic volume. This allows the calculation of the atomic radius of gold.
$\begin{matrix} \frac{4}{3} \pi \text{R}^3 = 0.74 V_{ExEffective} & R = \left( \frac{0.74 V_{ExEffective}}{ \frac{4}{3} \pi} \right)^{ \frac{1}{3}} & \text{R} = 144 \text{pm} \end{matrix} \nonumber$
This is in agreement with the literature value (see Figure 5.19 page 176 in Chemistry, 5th edition, by McMurry and Fay).
• Define picometer: $\text{pm} = 10^{-12} \text{m}$
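Both radius calculations (this section and the previous one) reduce to the same few lines of arithmetic. The following Python sketch (an added illustration; the function name and unit choices are arbitrary) takes the packing fraction as input:

```python
# Atomic radius from density (g/cm^3), molar mass (g/mol) and packing fraction
from math import pi

N_A = 6.022e23

def radius_pm(density, molar_mass, packing):
    v_eff = molar_mass/(density*N_A)           # effective volume per atom, cm^3
    v_atom = packing*v_eff                     # volume of the spherical atom itself
    return (3*v_atom/(4*pi))**(1/3)*1e10       # radius in pm (1 cm = 1e10 pm)

print(radius_pm(9.32, 208.98, pi/6))           # Po, simple cubic (52.4%): ~167 pm
print(radius_pm(19.32, 197.0, pi/(3*2**0.5)))  # Au, fcc (74.0%): ~144 pm
```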
2.48: How Many Bibles Can Fit on the Head of a Pin
On December 29, 1959 Richard Feynman gave an address (Thereʹs Plenty of Room at the Bottom) in which he calculated how many Encyclopedia Britannicas could fit on the head of a pin. Legend identifies this event as the beginning of the field of theoretical nanotechnology.
To illustrate to general chemistry students how small nanoscopic entities such as atoms are, I calculate with the following simple model how many Bibles can fit on the head of a pin.
The model assumes that the pinhead surface is made up of Fe atoms packed in a simple square array. It further assumes that letters would be formed by placing adatoms in the pockets created by the surface Fe atoms using a scanning tunneling microscope, and that 100 pinhead surface Fe atoms would be required to form a letter.
The first step is to calculate the number of Fe atoms on the pinhead.
$\begin{matrix} \text{Radius of pinhead:} & R_{PH} = \frac{1}{32} in & R_{PH} = 7.94 \times 10^8 pm \ \text{Area of pinhead:} & \text{Area}_{PH} = \pi R_{PH}^2 & \text{Area}_{PH} = 1.98 \times 10^{18} \text{pm}^2 \ \text{Radius of Fe:} & \text{R}_{Fe} = 126 \text{pm} & \text{R}_{Fe} = 1.26 \times 10^{-10} \text{m} \ \text{Area of Fe atom:} & \text{Area}_{Fe} = \pi \text{R}_{Fe}^2 & \text{Area}_{Fe} = 4.99 \times 10^4 \text{pm}^2 \end{matrix} \nonumber$
Effective area of Fe atom:
$\begin{matrix} \text{EffectiveArea}_{Fe} = 4 \text{R}_{Fe}^2 \ \text{EffectiveArea}_{Fe} = 6.35 \times 10^4 \text{pm}^2 \end{matrix} \nonumber$
Fe atoms per pinhead:
$\begin{matrix} \text{FeAtomsPerPinHead} = \frac{ \text{Area}_{PH}}{ \text{EffectiveArea}_{Fe}} & \text{FeAtomsPerPinHead} = 3.12 \times 10^{13} \end{matrix} \nonumber$
A typical family Bible consists of 1,000 pages with an average of 5,000 characters and spaces per page. If it takes 100 Fe atoms to define a character, how many Bibles can fit on the head of a pin?
$\begin{matrix} \text{PagesPerBible} = 1000 & \text{CharactersPerPage} = 5000 & \text{FeAtomsPerCharacter} = 100 \end{matrix} \nonumber$
Fe atoms required per bible:
$\begin{matrix} \text{FeAtomsPerCharacter} \cdot \text{CharactersPerPage} \cdot \text{PagesPerBible} = 5 \times 10^8 \ \text{BiblesPerPinHead} = \frac{ \text{FeAtomsPerPinHead}}{ \text{FeAtomsPerCharacter} \cdot \text{CharactersPerPage} \cdot \text{PagesPerBible}} \ \text{BiblesPerPinHead} = 6.2 \times 10^4 \end{matrix} \nonumber$
Define picometer: $\text{pm} = 10^{-12} \text{m}$
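For readers who prefer to check the arithmetic in code, here is a minimal Python restatement of the calculation above; the variable names are mine and the inputs are exactly the numbers used in the worksheet.

```python
# Sketch: how many Bibles fit on the head of a pin, restated in Python.
import math

R_pinhead_pm = (1/32) * 2.54e-2 * 1e12    # 1/32 inch -> meters -> picometers
area_pinhead = math.pi * R_pinhead_pm**2  # pm^2

R_fe = 126.0                              # pm, iron atomic radius
effective_area_fe = (2 * R_fe)**2         # pm^2, square packing: each atom claims a (2R)^2 cell

fe_atoms_per_pinhead = area_pinhead / effective_area_fe
fe_atoms_per_bible = 100 * 5000 * 1000    # atoms/character * characters/page * pages/Bible

print(f"Fe atoms on pinhead: {fe_atoms_per_pinhead:.2e}")                        # ~3.1e13
print(f"Bibles per pinhead:  {fe_atoms_per_pinhead / fe_atoms_per_bible:.1e}")   # ~6.2e4
```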
2.49: Momentum Wavefunctions and Distributions for the Hydrogen Atom
The Fourier transform for the 1s orbital
$\Phi (p) = \frac{1}{ \sqrt{8 \pi^4}} \int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \text{exp(-r) exp} \left( - \text{i p r} \cos \left( \theta \right) \right) r^2 \sin \left( \theta \right) d \phi d \theta dr \rightarrow \frac{2 \sqrt{2}}{ \pi \left[ (-1) + \text{i p} \right]^2 (1 + \text{i p})^2} \nonumber$
p = 0, .02 .. 5
The Fourier transform for the 2s orbital
$\Phi (p) = \frac{1}{16 \pi^2} \int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} (2 - r) \text{exp} \left( - \frac{r}{2} \right) \text{exp} \left( \text{-i p r} \cos \left( \theta \right) \right) r^2 \sin ( \theta ) d \phi d \theta dr \nonumber$
$\begin{matrix} \text{yields} & \Phi (p) = \frac{-16}{ \pi} \frac{(-1) + 4p^2}{[(-1) + 2 \text{i p}]^3 (1 + 2 \text{i p})^3} & p = 0, .02 .. 2 \end{matrix} \nonumber$
The Fourier transform for the 2pz orbital
$\Phi (p) = \frac{1}{16 \pi^2} \int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \text{r exp} \left( - \frac{r}{2} \right) \text{exp} \left( \text{-i p r} \cos \left( \theta \right) \right) \cos \left( \theta \right) r^2 \sin ( \theta ) d \phi d \theta dr \nonumber$
$\begin{matrix} \text{yields} & \Phi (p) = 64 \frac{i}{ \pi} \frac{p}{[(-1) + 2 \text{i p}]^3 (1 + 2 \text{i p})^3} \end{matrix} \nonumber$
The Fourier transform for the 3s orbital
$\Phi (p) = \frac{1}{162 \sqrt{6} \pi^2} \int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \left( 27 - 18r + 2r^2 \right) \text{exp} \left( - \frac{r}{3} \right) \text{exp} \left( \text{-i p r} \cos \left( \theta \right) \right) r^2 \sin ( \theta ) d \phi d \theta dr \nonumber$
$\begin{matrix} \text{yields} & \Phi (p) = 18 \frac{6^{ \frac{1}{2}}}{ \pi} \frac{(-30)p^2 + 1 + 81p^4}{[(-1) + 3 \text{i p}]^4 (1 + 3 \text{i p})^4} \end{matrix} \nonumber$
The Fourier transform for the 3pz orbital
$\Phi (p) = \frac{1}{162 \pi^2} \int_0^{ \infty} \int_0^{ \pi} \int_0^{2 \pi} \left( 6r -r^2 \right) \text{ exp} \left( - \frac{r}{3} \right) \text{exp} \left( \text{-i p r} \cos \left( \theta \right) \right) \cos \left( \theta \right) r^2 \sin ( \theta ) d \phi d \theta dr \nonumber$
$\begin{matrix} \text{yields} & \Phi (p) = (-432) \frac{i}{ \pi} p \frac{9p^2 - 1}{[(-1) + 3 \text{i p}]^4 (1 + 3 \text{i p})^4} \end{matrix} \nonumber$
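As a consistency check on the s-orbital results quoted above, the Python sketch below (not part of the original Mathcad file) integrates each momentum distribution numerically and confirms that the total probability is unity. It assumes the real-valued equivalents of the Mathcad output, e.g. [(-1) + ip]^2 (1 + ip)^2 = (1 + p^2)^2; the p-orbital functions carry an additional angular factor and are omitted here.

```python
# Sketch: check that the s-type momentum wavefunctions are normalized,
# i.e. the integral of |Phi(p)|^2 * 4*pi*p^2 over p from 0 to infinity equals 1.
import numpy as np
from scipy.integrate import quad

phi_1s = lambda p: 2*np.sqrt(2)/np.pi / (1 + p**2)**2
phi_2s = lambda p: 16/np.pi * (4*p**2 - 1) / (1 + 4*p**2)**3
phi_3s = lambda p: 18*np.sqrt(6)/np.pi * (81*p**4 - 30*p**2 + 1) / (1 + 9*p**2)**4

for name, phi in [("1s", phi_1s), ("2s", phi_2s), ("3s", phi_3s)]:
    norm, _ = quad(lambda p: 4*np.pi * p**2 * phi(p)**2, 0, np.inf)
    print(f"{name}: norm = {norm:.6f}")   # each should print 1.000000
```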
2.50: The SCF Method for Two Electrons
Self‐consistent field calculations have great historical significance and play a major role in contemporary quantum chemistry. Therefore, the purpose of this exercise is to illustrate with an interactive example the simplest possible self‐consistent field calculation for atomic systems. The SCF method is particularly transparent in the Mathcad programming environment as will be shown below.
Under the orbital approximation [Φ(1,2) = Ψ(1)Ψ(2)] the two‐electron Schrödinger Equation can be decoupled into two one‐electron equations with effective Hamiltonian operators of the form,
$H_i = - \frac{1}{2 r_i} \frac{d^2}{dr_i^2} r_i - \frac{Z}{r_i} + \int_0^{ \infty} \Psi_j^2 \frac{1}{r_{ij}} d \tau_j \nonumber$
Here the subscripts i and j are used to distinguish the two electrons.
The first term on the right side represents the kinetic energy of the ith electron, the second term is its interaction with the nucleus, and the third term is its average interaction with the jth electron. If it is assumed that the jth electron is in a Slater‐type orbital with scale factor β,
$\Psi_j = \sqrt{ \frac{ \beta^3}{ \pi}} \text{exp} \left( - \beta r_j \right) \nonumber$
the effective Hamiltonian for the ith electron becomes,
$H_i = - \frac{1}{2 r_i} \frac{d^2}{dr_i^2} r_i - \frac{Z}{r_i} + \frac{1}{r_i} \left[ 1 - \left( 1 + \beta r_i \right) \text{exp} \left( -2 \beta r_i \right) \right] \nonumber$
This Hamiltonian is then used with the variational method to calculate the orbital energy of the ith electron,
$\varepsilon_i = \int_0^{ \infty} \Psi_i H_i \Psi_i d \tau_i \nonumber$
If the ith electron is also assumed to be in a Slater‐type orbital but with a different scale factor α,
$\Psi_i = \sqrt{ \frac{ \alpha^3}{ \pi}} \text{exp} \left( - \alpha r_i \right) \nonumber$
the variational integral for the orbital energy upon evaluation is,
$\varepsilon_i = \frac{ \alpha^2}{2} - Z \alpha + \frac{ \alpha \beta \left( \alpha^2 + 3 \alpha \beta + \beta^2 \right)}{ \left( \alpha + \beta \right)^3} \nonumber$
The SCF calculation proceeds by minimizing εi with respect to α given an initial value for β. This amounts to finding the orbital energy and wavefunction of the ith electron in the average electrostatic field created by the jth electron. Now we turn our attention to the jth electron. By identical arguments to those given above for the ith electron, it can be shown that the orbital energy of the jth electron is
$\varepsilon_j = \frac{ \beta^2}{2} - Z \beta + \frac{ \alpha \beta \left( \alpha^2 + 3 \alpha \beta + \beta^2 \right)}{ \left( \alpha + \beta \right)^3} \nonumber$
The value of α just obtained for the ith electron serves as the seed value as εj is minimized with respect to β in order to obtain the orbital energy and wavefunction of the jth electron. One cycle has now been completed and this procedure is continued until self‐consistency is achieved. This occurs when the orbital energies and the wavefunctions of the two electrons converge to the same values. After each iteration the energy of the atom or ion is calculated as the sum of orbital energy of one of the electrons plus the kinetic and nuclear potential energy of the other electron.
$E_{atom} = \varepsilon_i + \frac{ \beta^2}{2} - Z \beta \nonumber$
To summarize: After making a guess for the wavefunction of the jth electron, the variational method is used to determine the orbital energy and wavefunction of the ith electron. Using this output wavefunction as the input wavefunction in the second iteration, the orbital energy and wavefunction of the jth electron are calculated. The procedure is repeated until self‐consistency is achieved; that is, until Ψi = Ψj and εi = εj. This is also the point at which the energy of the atom has achieved a minimum value in compliance with the variational theorem. This is clearly shown in the output of the third method.
Note that the final SCF result is the same as that achieved in a variational calculation which places both electrons in the same Slater orbital from the beginning. This calculation results in the familiar expression
$E = \alpha^2 - \left( 2 Z - \frac{5}{8} \right) \alpha \nonumber$
which, when minimized with respect to α, yields α = Z − 5/16 and E_atom = −α².
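A quick numerical check of this familiar result, sketched in Python with SciPy (my own choice of tool; any one-dimensional minimizer would do): for helium the minimum should fall at α = Z − 5/16 = 1.6875 with E_atom = −2.8477 hartree, the same values the SCF table below converges to.

```python
# Sketch: minimize the one-parameter variational energy E(alpha) for helium.
from scipy.optimize import minimize_scalar

Z = 2
E = lambda a: a**2 - (2*Z - 5/8) * a
res = minimize_scalar(E, bounds=(0.1, 5), method="bounded")
print(res.x, E(res.x))    # ~1.6875, ~-2.8477
```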
The reason for using the indirect procedure outlined here is that it provides an unusually simple and direct example of the SCF method. This example should be helpful in understanding what is going on behind the scenes in much more complicated quantum mechanical calculations performed with comprehensive commercial programs like Spartan.
SCF Calculation for Two Electron Atoms and Ions
1. Supply nuclear charge and an input value for β: $\begin{matrix} Z = 2 & \beta = 2 & \alpha = Z \end{matrix} \nonumber$
2. Define orbital energies of the electrons in terms of the variational parameters: $\begin{matrix} \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = \frac{ \alpha^2}{2} - Z \alpha + \frac{ \alpha \beta \left( \alpha^2 + 3 \alpha \beta + \beta^2 \right)}{ \left( \alpha + \beta \right)^3} & \varepsilon_{1s \beta} ( \alpha,~ \beta ) = \frac{ \beta^2}{2} - Z \beta + \frac{ \alpha \beta \left( \alpha^2 + 3 \alpha \beta + \beta^2 \right)}{ \left( \alpha + \beta \right)^3} \end{matrix} \nonumber$
3. Minimize orbital energies with respect to α and β: $\begin{matrix} \text{Given} & \frac{d}{d \alpha} \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = 0 & \alpha = \text{Find} ( \alpha ) & \alpha = 1.5999 & \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = -0.8116 \ \text{Given} & \frac{d}{d \beta} \varepsilon_{1s \beta} ( \alpha,~ \beta ) = 0 & \beta = \text{Find} ( \beta ) & \beta = 1.7126 & \varepsilon_{1s \beta} ( \alpha,~ \beta ) = -0.9250 \end{matrix} \nonumber$
4. Calculate the energy of the atom: $\begin{matrix} E_{atom} = \frac{ \alpha^2}{2} + \frac{ \beta^2}{2} - Z \alpha - Z \beta + \frac{ \alpha \beta \left( \alpha^2 + 3 \alpha \beta + \beta^2 \right)}{ \left( \alpha + \beta \right)^3} & E_{atom} = -2.8449 \end{matrix} \nonumber$
5. Record results of the SCF cycle and return to step 1 with the new and improved input value for β.
6. Continue until self‐consistency is achieved.
7. Verify the results shown below for He. Repeat for Li+, Be2+ and B3+.
Recommend tabular form for results of each SCF cycle:
$\begin{pmatrix} \beta \text{(input)} & \alpha & \varepsilon_{1s \alpha} & \beta & \varepsilon_{1s \beta} & E_{atom} \ 2.000 & 1.5999 & -0.8116 & 1.7126 & -0.9250 & -2.8449 \ 1.7126 & 1.6803 & -0.8887 & 1.6895 & -0.8987 & -2.8476 \ 1.6895 & 1.6869 & -0.8959 & 1.6877 & -0.8967 & -2.8477 \ 1.6877 & 1.6874 & -0.8964 & 1.6875 & -0.8965 & -2.8477 \ 1.6875 & 1.6875 & -0.8965 & 1.6875 & -0.8965 & -2.8477 \end{pmatrix} \nonumber$
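The cycle summarized in the table is simple enough to script. The sketch below is my Python translation of steps 1 to 6, with SciPy's minimize_scalar standing in for Mathcad's Given/Find block; it reproduces the tabulated convergence to α = β = 1.6875 and E_atom = −2.8477 hartree.

```python
# Sketch of the SCF cycle: minimize eps_alpha over alpha at fixed beta,
# then eps_beta over beta at the new alpha, and repeat (steps 3-6 above).
from scipy.optimize import minimize_scalar

Z = 2.0

def Vee(a, b):                       # average electron-electron repulsion
    return a*b*(a**2 + 3*a*b + b**2) / (a + b)**3

def eps(a, b):                       # orbital energy of the electron with exponent a
    return a**2/2 - Z*a + Vee(a, b)

beta = 2.0                            # initial guess, as in step 1
for cycle in range(6):
    alpha = minimize_scalar(lambda a: eps(a, beta), bounds=(0.1, 5), method="bounded").x
    beta  = minimize_scalar(lambda b: eps(b, alpha), bounds=(0.1, 5), method="bounded").x
    E_atom = eps(alpha, beta) + beta**2/2 - Z*beta
    print(f"cycle {cycle}: alpha={alpha:.4f} beta={beta:.4f} E={E_atom:.4f}")
# converges to alpha = beta = 1.6875, E_atom = -2.8477 hartree
```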
2.51: Outline of the SCF Method for Two Electrons
Trial Wave Function:
$\Psi (r,~ \beta ) = \sqrt{ \frac{ \beta^3}{ \pi}} \text{exp} ( - \beta r) \nonumber$
Calculate kinetic energy:
$\begin{array}{c|c} T_e ( \beta ) = \int_0^{ \infty} \Psi (r,~ \beta ) \left[ - \frac{1}{2r} \frac{d^2}{dr^2} (r \Psi (r,~ \beta )) \right] 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow \frac{ \beta^2}{2} \end{array} \nonumber$
Calculate electron‐nucleus potential energy:
$\begin{array}{c|c} V_{ne} ( \beta,~ Z) = \int_0^{ \infty} \Psi (r,~ \beta )^2 \frac{-Z}{r} 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow (- \beta ) Z \end{array} \nonumber$
Calculation of electron‐electron potential energy:
a. Calculate the electrostatic potential due to the β electron:
$\begin{array}{c|c} \Phi ( \beta,~ r) = \frac{1}{r} \int_0^{r} \Psi (x,~ \beta)^2 4 \pi x^2 dx + \int_r^{ \infty} \frac{ \Psi (x,~ \beta )^2 4 \pi x^2}{x} dx & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow \frac{- \left[ e^{(-2)r \beta} \beta r + e^{(-2) r \beta} - 1 \right]}{r} \end{array} \nonumber$
b. Calculate the electron‐electron potential energy of the α and β electrons using result of part a:
$\begin{array}{c|c} V_{ee} ( \alpha,~ \beta ) = \int_0^{ \infty} \Psi (r,~ \alpha )^2 \Phi ( \beta,~ r) 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \beta >0,~ \alpha >0} \rightarrow \alpha \beta \frac{ \alpha^2 + 3 \alpha \beta + \beta^2}{ \left( \alpha^2 + 2 \beta \alpha + \beta^2 \right) ( \beta + \alpha )} \end{array} \nonumber$
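The symbolic integrations above can be reproduced outside Mathcad. The SymPy sketch below is an independent check of mine (function and symbol names are my own); it should return β²/2, −Zβ, and αβ(α² + 3αβ + β²)/(α + β)³.

```python
# Sketch: redo the Slater-orbital integrals symbolically with SymPy.
import sympy as sp

r, x = sp.symbols('r x', positive=True)
a, b, Z = sp.symbols('alpha beta Z', positive=True)

def Psi(rr, e):
    # normalized 1s Slater orbital with exponent e
    return sp.sqrt(e**3/sp.pi) * sp.exp(-e*rr)

# Kinetic energy of the beta electron; expect beta**2/2
T = sp.integrate(Psi(r, b) * (-1/(2*r)) * sp.diff(r*Psi(r, b), r, 2) * 4*sp.pi*r**2, (r, 0, sp.oo))

# Electron-nucleus attraction; expect -Z*beta
Vne = sp.integrate(Psi(r, b)**2 * (-Z/r) * 4*sp.pi*r**2, (r, 0, sp.oo))

# Electrostatic potential of the beta electron at radius r
Phi = (sp.integrate(Psi(x, b)**2 * 4*sp.pi*x**2, (x, 0, r)) / r
       + sp.integrate(Psi(x, b)**2 * 4*sp.pi*x**2 / x, (x, r, sp.oo)))

# Electron-electron repulsion; expect alpha*beta*(alpha**2 + 3*alpha*beta + beta**2)/(alpha + beta)**3
Vee = sp.integrate(Psi(r, a)**2 * Phi * 4*sp.pi*r**2, (r, 0, sp.oo))

print(sp.simplify(T))
print(sp.simplify(Vne))
print(sp.factor(sp.simplify(Vee)))
```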
SCF Calculation
1. Supply nuclear charge and an input value for β: $\begin{matrix} Z = 2 & \beta = 2.0 & \alpha = Z \end{matrix} \nonumber$
2. Define orbital energies of the electrons in terms of the variational parameters: $\begin{matrix} \text{Orbital energy of the } \alpha \text{ electron:} & \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = T_e ( \alpha ) + V_{ne} ( \alpha,~ Z) + V_{ee} ( \alpha,~ \beta ) \ \text{Orbital energy of the } \beta \text{ electron:} & \varepsilon_{1s \beta} ( \alpha,~ \beta ) = T_e ( \beta ) + V_{ne} ( \beta,~ Z) + V_{ee} ( \alpha,~ \beta ) \end{matrix} \nonumber$
3. Minimize orbital energies with respect to α and β: $\begin{matrix} \text{Given} & \frac{d}{d \alpha} \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = 0 & \alpha = \text{Find} ( \alpha ) & \alpha = 1.5999 & \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = -0.8116 \ \text{Given} & \frac{d}{d \beta} \varepsilon_{1s \beta} ( \alpha,~ \beta ) = 0 & \beta = \text{Find} ( \beta ) & \beta = 1.7126 & \varepsilon_{1s \beta} ( \alpha,~ \beta ) = -0.9250 \end{matrix} \nonumber$
4. Calculate the energy of the atom: $\begin{matrix} E_{atom} = T_e ( \alpha ) +V_{ne} ( \alpha,~ Z) + T_e ( \beta ) + V_{ne} ( \beta,~Z) + V_{ee} ( \alpha,~ \beta ) & E_{atom} = -2.8449 \end{matrix} \nonumber$
5. Record results of the SCF cycle and return to step 1 with the new and improved input value for β.
6. Continue until self‐consistency is achieved.
7. Verify the results shown below for He. Repeat for Li+, Be2+ and B3+
$\begin{pmatrix} \beta \text{ (input)} & \alpha & \varepsilon_{1s \alpha} & \beta & \varepsilon_{1s \beta} & E_{atom} \ 2.000 & 1.5999 & -0.8116 & 1.7126 & -0.9250 & -2.8449 \ 1.7126 & 1.6803 & -0.8887 & 1.6895 & -0.8987 & -2.8476 \ 1.6895 & 1.6869 & -0.8959 & 1.6877 & -0.8967 & -2.8477 \ 1.6877 & 1.6874 & -0.8964 & 1.6875 & -0.8965 & -2.8477 \ 1.6875 & 1.6875 & -0.8965 & 1.6875 & -0.8965 & -2.8477 \end{pmatrix} \nonumber$
2.52: The SCF Method for Two Electrons Using a Gaussian Wave Function
Gaussian Trial Wave Function:
$\Psi (r,~ \beta ) = \left( \frac{2 \beta}{ \pi} \right)^{ \frac{3}{4}} \text{exp} \left( - \beta r^2 \right) \nonumber$
Calculate kinetic energy:
$\begin{array}{c|c} \int_0^{ \infty} \Psi (r,~ \beta ) \left[ - \frac{1}{2r} \frac{d^2}{dr^2} (r \Psi (r,~ \beta)) \right] 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow \frac{3}{2} \beta \end{array} \nonumber$
Calculate electron‐nucleus potential energy:
$\begin{array}{c|c} \int_0^{ \infty} \Psi (r,~ \beta )^2 \frac{-Z}{r} 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow (-Z) \sqrt{ \frac{8 \beta}{ \pi}} \end{array} \nonumber$
Calculation of electron‐electron potential energy:
a. Calculate the electrostatic potential due to one of the electrons:
$\begin{array}{c|c} \frac{1}{r} \int_0^r \Psi (x,~ \beta )^2 4 \pi x^2 dx + \int_r^{ \infty} \frac{ \Psi (x,~ \beta)^2 4 \pi x^2}{x} dx & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow \frac{ \text{erf} \left( r 2^{ \frac{1}{2}} \beta^{ \frac{1}{2}} \right)}{r} \end{array} \nonumber$
b. Calculate the electron-electron potential energy using the result of part a:
$\begin{array}{c|c} \int_0^{ \infty} \Psi (r,~ \beta )^2 \left( \frac{ \text{erf} \left( r 2^{ \frac{1}{2}} \beta^{ \frac{1}{2}} \right)}{r} \right) 4 \pi r^2 dr & _{ \text{simplify}}^{ \text{assume, } \beta >0} \rightarrow \frac{2}{ \pi} ( \beta \pi )^{ \frac{1}{2}} \end{array} \nonumber$
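Because the erf integral is easy to get wrong, here is a small numerical spot check of mine, using SciPy quadrature, for the nuclear-attraction and electron-repulsion terms that appear in step 2 below; the exponents α = 1.0 and β = 0.767 are arbitrary test values.

```python
# Sketch: verify the Gaussian closed forms numerically for one choice of exponents.
# Expected: Vne = -Z*sqrt(8*beta/pi) and Vee = sqrt(8*alpha*beta/(pi*(alpha + beta))).
import math
from scipy.integrate import quad

Z, alpha, beta = 2.0, 1.0, 0.767
psi2 = lambda r, e: (2*e/math.pi)**1.5 * math.exp(-2*e*r**2)   # |Psi(r, e)|^2

eps0 = 1e-12   # avoid the removable 1/r singularity at the origin
Vne = quad(lambda r: psi2(r, beta) * (-Z/r) * 4*math.pi*r**2, eps0, math.inf)[0]
Vee = quad(lambda r: psi2(r, alpha) * math.erf(math.sqrt(2*beta)*r)/r * 4*math.pi*r**2,
           eps0, math.inf)[0]

print(Vne, -Z*math.sqrt(8*beta/math.pi))                       # should agree
print(Vee, math.sqrt(8*alpha*beta/(math.pi*(alpha + beta))))   # should agree
```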
SCF Calculation
1. Supply nuclear charge and an input value for β: $\begin{matrix} Z = 2 & \beta = 0.767 & \alpha = Z \end{matrix} \nonumber$
2. Define orbital energies of the electrons in terms of the variational parameters: $\begin{matrix} \text{Orbital energy of the } \alpha \text{ electron:} & \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = \frac{3 \alpha}{2} - Z \sqrt{ \frac{8 \alpha}{ \pi}} + \sqrt{ \frac{8 \alpha \beta}{ \pi ( \alpha + \beta)}} \ \text{Orbital energy of the } \beta \text{ electron:} & \varepsilon_{1s \beta} ( \alpha,~ \beta ) = \frac{3 \beta}{2} - Z \sqrt{ \frac{8 \beta}{ \pi}} + \sqrt{ \frac{8 \alpha \beta}{ \pi ( \alpha + \beta)}} \end{matrix} \nonumber$
3. Minimize orbital energies with respect to α and β: $\begin{matrix} \text{Given} & \frac{d}{d \alpha} \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = 0 & \alpha = \text{Find} ( \alpha ) & \alpha = 0.7670 & \varepsilon_{1s \alpha} ( \alpha,~ \beta ) = -0.6564 \ \text{Given} & \frac{d}{d \beta} \varepsilon_{1s \beta} ( \alpha,~ \beta ) = 0 & \beta = \text{Find} ( \beta ) & \beta = 0.7670 & \varepsilon_{1s \beta} ( \alpha,~ \beta ) = -0.6564 \end{matrix} \nonumber$
4. Calculate the energy of the atom: $\begin{matrix} E_{atom} = \frac{3 \alpha}{2} + \frac{3 \beta}{2} - Z \sqrt{ \frac{8 \alpha}{ \pi}} - Z \sqrt{ \frac{8 \beta}{ \pi}} + \sqrt{ \frac{8 \alpha \beta}{ \pi ( \alpha + \beta)}} & E_{atom} = -2.3010 \end{matrix} \nonumber$
5. Record results of the SCF cycle and return to step 1 with the new and improved input value for β.
6. Continue until self‐consistency is achieved.
7. Verify the results shown below for He. Repeat for Li+, Be2+ and B3+.
$\begin{pmatrix} \beta \text{(input)} & \alpha & \varepsilon_{1s \alpha} & \beta & \varepsilon_{1s \beta} & E_{atom} \ 2.000 & 0.4514 & -0.4988 & 0.9303 & -0.8031 & -2.2703 \ 0.9303 & 0.6943 & -0.6117 & 0.8023 & -0.6816 & -2.2996 \ 0.8023 & 0.7504 & -0.6454 & 0.7749 & -0.6618 & -2.3009 \ 0.7749 & 0.7633 & -0.6539 & 0.7688 & -0.6576 & -2.3010 \ 0.7688 & 0.7661 & -0.6588 & 0.7674 & -0.6567 & -2.3010 \ 0.7674 & 0.7668 & -0.6563 & 0.7671 & -0.6564 & -2.3010 \ 0.7671 & 0.7669 & -0.6564 & 0.7670 & -0.6564 & -2.3010 \ 0.7670 & 0.7670 & -0.6564 & 0.7670 & -0.6564 & -2.3010 \end{pmatrix} \nonumber$
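As with the Slater-orbital version, the Gaussian SCF cycle can be scripted directly from the step-2 energy expressions. The Python sketch below is mine, with SciPy's minimize_scalar replacing Mathcad's Given/Find; it reproduces the convergence to α = β = 0.767 and E_atom = −2.3010 hartree shown in the table.

```python
# Sketch of the Gaussian-orbital SCF cycle for a two-electron atom or ion.
import math
from scipy.optimize import minimize_scalar

Z = 2.0

def Vee(a, b):                        # electron-electron repulsion for exponents a, b
    return math.sqrt(8*a*b / (math.pi*(a + b)))

def eps(a, b):                        # orbital energy of the electron with exponent a
    return 1.5*a - Z*math.sqrt(8*a/math.pi) + Vee(a, b)

beta = 2.0                             # first-cycle input, as in the table
for cycle in range(8):
    alpha = minimize_scalar(lambda a: eps(a, beta), bounds=(0.05, 5), method="bounded").x
    beta  = minimize_scalar(lambda b: eps(b, alpha), bounds=(0.05, 5), method="bounded").x
    E_atom = eps(alpha, beta) + 1.5*beta - Z*math.sqrt(8*beta/math.pi)
    print(f"cycle {cycle}: alpha={alpha:.4f} beta={beta:.4f} E={E_atom:.4f}")
# converges to alpha = beta = 0.767, E_atom = -2.3010 hartree
```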