The following circuit, Uf, produces the table of results to its right. The top wires carry the value of x and the circuit places f(x) on the bottom wire. As was shown in the previous tutorial, this circuit can also operate in parallel, accepting as input all x-values and returning on the bottom wire a superposition of all values of f(x).
Uf =
$\begin{matrix} \begin{pmatrix} \text{x} & 0 & 1 & 2 & 3 \ \text{f(x)} & 1 & 0 & 0 & 1 \end{pmatrix} & \text{where} & \begin{matrix} |0 \rangle = |0 \rangle |0 \rangle \ |1 \rangle = |0 \rangle |1 \rangle \ |2 \rangle = |1 \rangle |0 \rangle \ |3 \rangle = |1 \rangle |1 \rangle \end{matrix} \end{matrix} \nonumber$
The function belongs to the balanced category because it produces 0 and 1 with equal frequency. The modification of this circuit (the Deutsch-Jozsa algorithm) highlighted below answers the question of whether the function is constant or balanced (see Julian Brown, The Quest for the Quantum Computer, page 298). Naturally we already know the answer, so this is a simple demonstration that the circuit works.
$\begin{matrix} |0 \rangle & \cdots & \fbox{H} & \cdots & \cdot & \cdots & \cdots & \cdots & \cdots & \cdots & \fbox{H} & \triangleright & \text{Measure, 0 or 1} \ ~ & ~ & ~ & ~ & | \ |0 \rangle & \cdots & \fbox{H} & \cdots & | & \cdots & \cdot & \cdots & \cdots & \cdots & \fbox{H} & \triangleright & \text{Measure, 0 or 1} \ ~ & ~ & ~ & ~ & | & ~ & | \ |1 \rangle & \cdots & \fbox{H} & \cdots & \oplus & \cdots & \oplus & \cdots & \fbox{NOT} & \cdots & \cdots \end{matrix} \nonumber$
The input is $|0 \rangle |0 \rangle |1 \rangle : ~ \Psi_{in} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}^T$
The following matrices are required to execute the circuit.
$\begin{matrix} \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{NOT} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{H} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} & \text{CnNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
The quantum circuit is assembled out of these matrices using tensor (Kronecker) multiplication.
$\begin{matrix} \text{U}_{ \text{f}} = \text{kronecker(I, kronecker(I, NOT)) kronecker(I, CNOT) CnNOT} \ \text{QuantumCircuit} = \text{kronecker(H, kronecker(H, I)) U}_{ \text{f}} \text{kronecker(H, kronecker(H, H))} \end{matrix} \nonumber$
Operation of the quantum circuit on the input vector yields the following result which is written as a product of three qubits on the right.
$\begin{matrix} \text{QuantumCircuit} \Psi_{ \text{in}} = \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ -0.707 \ 0.707 \end{pmatrix} & \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} -1 \ 1 \end{pmatrix} \end{matrix} \nonumber$
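The assembly above uses Mathcad's kronecker command. For readers following along without Mathcad, here is a minimal NumPy sketch of the same circuit (variable names and layout are mine, not the original worksheet's; np.kron plays the role of kronecker):

```python
import numpy as np

I = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CnNOT = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]  # flips qubit 3 when qubit 1 is |1>

Uf = np.kron(I, np.kron(I, NOT)) @ np.kron(I, CNOT) @ CnNOT
circuit = np.kron(H, np.kron(H, I)) @ Uf @ np.kron(H, np.kron(H, H))

psi_in = np.zeros(8)
psi_in[1] = 1  # |0>|0>|1>
print(np.round(circuit @ psi_in, 3))  # [ 0. 0. 0. 0. 0. 0. -0.707 0.707 ]
```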
According to the Deutsch-Jozsa scheme, if both wires are |0> the function is constant, but if at least one wire is |1> the function is balanced. We see by inspection that both wires are |1> indicating that the function is balanced.
The measurements on the top wires can be simulated with the projection operators |0><0| and |1><1|, and confirm that the function is not constant but belongs to the balanced category.
$\begin{matrix} \text{The first qubit is not |0} \rangle & \left[ \text{kronecker} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix}^T, \text{kronecker(I, I)} \right] \text{QuantumCircuit} \Psi_{ \text{in}} \right]^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \ \text{The second qubit is not |0} \rangle & \left[ \text{kronecker} \left[ \text{I, kronecker} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix}^T, \text{ I} \right] \right] \text{QuantumCircuit} \Psi_{ \text{in}} \right]^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \ \text{The first qubit is |1} \rangle & \left[ \text{kronecker} \left[ \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix}^T, \text{kronecker(I, I)} \right] \text{QuantumCircuit} \Psi_{ \text{in}} \right]^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & -0.707 & 0.707 \end{pmatrix} \ \text{The second qubit is |1} \rangle & \left[ \text{kronecker} \left[ \text{I, kronecker} \left[ \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix}^T, \text{ I} \right] \right] \text{QuantumCircuit} \Psi_{ \text{in}} \right]^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & -0.707 & 0.707 \end{pmatrix} \end{matrix} \nonumber$
Source: Quantum Tutorials (Rioux), 8.73: Another Illustration of the Deutsch-Jozsa Algorithm
A concealed quantum algorithm calculates f(x) from an input register containing a superposition of all x-values. Pairs of x-values (x, x') generate the same output. Simon's algorithm is an efficient method for finding the relationship between the pairs: $\text{f(x) = f(x') = f(x} \oplus \text{s)}$, where s is a secret string and the addition on the right is bitwise modulo 2. In a classical calculation one could compute f(x) until some pattern emerged and find the pairs by inspection. This approach is illustrated below.
$\begin{matrix} \text{Decimal} & \text{Binary} & ~ & \text{Binary} & \text{Decimal} \ |0 \rangle & |000 \rangle & \xrightarrow{f(0)} & |011 \rangle & |3 \rangle \ |1 \rangle & |001 \rangle & \xrightarrow{f(1)} & |001 \rangle & |1 \rangle \ |2 \rangle & |010 \rangle & \xrightarrow{f(2)} & |010 \rangle & |2 \rangle \ |3 \rangle & |011 \rangle & \xrightarrow{f(3)} & |000 \rangle & |0 \rangle \ |4 \rangle & |100 \rangle & \xrightarrow{f(4)} & |001 \rangle & |1 \rangle \ |5 \rangle & |101 \rangle & \xrightarrow{f(5)} & |011 \rangle & |3 \rangle \ |6 \rangle & |110 \rangle & \xrightarrow{f(6)} & |000 \rangle & |0 \rangle \ |7 \rangle & |111 \rangle & \xrightarrow{f(7)} & |010 \rangle & |2 \rangle \end{matrix} \nonumber$
The table of results reveals the pairs {(0,5), (1,4), (2,7), (3,6)} and that |s> = |101>. Adding |s> bitwise modulo 2 to any |x> reveals its partner |x'>.
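Since s acts by bitwise XOR, the pairing can be checked in a couple of lines of Python (a throwaway illustration, not part of the original worksheet):

```python
s = 0b101  # the secret string |101>, decimal 5
for x in range(8):
    print(x, "pairs with", x ^ s)  # (0,5), (1,4), (2,7), (3,6) and their mirrors
```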
The following quantum circuit is a rudimentary implementation of Simon's algorithm. The section in blue is the concealed algorithm. It has been discussed in two other tutorials: Quantum Parallel Calculation and An Illustration of the Deutsch-Jozsa Algorithm. Its operation yields the results shown in the following table.
$\begin{matrix} \begin{pmatrix} \text{x} & 0 & 1 & 2 & 3 \ \text{f(x)} & 1 & 0 & 0 & 1 \end{pmatrix} \ \begin{matrix} \text{Initial} & ~ & 1 & ~ & 2 & ~ & 3 & ~ & 4 & ~ & 5 & ~ & \text{Final} \ |0 \rangle & \triangleright & \fbox{H} & \cdots & \cdot & \cdots & \cdots & \cdots & \cdots & \cdots & \fbox{H} & \triangleright \ ~ & ~ & ~ & ~ & | \ |0 \rangle & \triangleright & \fbox{H} & \cdots & | & \cdots & \cdot & \cdots & \cdots & \cdots & \fbox{H} & \triangleright \ ~ & ~ & ~ & ~ & | & ~ & | \ |0 \rangle & \triangleright & \cdots & \cdots & \oplus & \cdots & \oplus & \cdots & \fbox{NOT} & \cdots & \cdots & \triangleright & \text{Measure, 0 or 1} \end{matrix} \end{matrix} \nonumber$
Next we prepare a table showing the results of a classical calculation. It is clear that the pairs are (0,3) and (1,2), and that |s> = |11>.
$\begin{matrix} \text{Decimal} & \text{Binary} & ~ & \text{Binary} & \text{Decimal} \ |0 \rangle & |00 \rangle & \xrightarrow{f(0)} & |01 \rangle & |1 \rangle \ |1 \rangle & |01 \rangle & \xrightarrow{ f(1)} & |00 \rangle & |0 \rangle \ |2 \rangle & |10 \rangle & \xrightarrow{ f(2)} & |00 \rangle & |0 \rangle \ |3 \rangle & |11 \rangle & \xrightarrow{ f(3)} & |01 \rangle & |1 \rangle \end{matrix} \nonumber$
Now we examine the operation of the quantum circuit that implements Simon's algorithm by two different but equivalent methods. The matrices representing the quantum gates in the circuit are:
$\begin{matrix} \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{NOT} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{H} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} & \text{CnNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
The three qubit input state is: $\Psi_{in} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}^T$
The concealed algorithm: $\text{U}_{ \text{f}} = \text{kronecker(I, kronecker(I, NOT)) kronecker(I, CNOT) CnNOT}$
The complete quantum circuit: $\text{QuantumCircuit} = \text{kronecker(H, kronecker(H, I)) U}_{ \text{f}} \text{ kronecker(H, kronecker(H, I))}$
The operation of the quantum circuit on the input state yields the following result:
$\begin{matrix} \text{QuantumCircuit} \Psi_{in} = \begin{pmatrix} 0.5 \ 0.5 \ 0 \ 0 \ 0 \ 0 \ -0.5 \ 0.5 \end{pmatrix} & \begin{matrix} = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} - \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} + \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \ = \frac{1}{2} \left[ |00 \rangle - |11 \rangle \right] |0 \rangle + \frac{1}{2} \left[ |00 \rangle +|11 \rangle \right] |1 \rangle \end{matrix} \end{matrix} \nonumber$
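As a cross-check on the Mathcad result, here is the same circuit as a NumPy sketch (my translation; np.kron stands in for kronecker):

```python
import numpy as np

I = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CnNOT = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]  # flips qubit 3 when qubit 1 is |1>

Uf = np.kron(I, np.kron(I, NOT)) @ np.kron(I, CNOT) @ CnNOT
circuit = np.kron(H, np.kron(H, I)) @ Uf @ np.kron(H, np.kron(H, I))

psi_in = np.zeros(8)
psi_in[0] = 1  # |000>
print(np.round(circuit @ psi_in, 3))  # [ 0.5 0.5 0. 0. 0. 0. -0.5 0.5 ]
```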
The terms in brackets are superpositions of the x-values which are related by $\text{x}' = \text{x} \oplus \text{s}$. Thus we see by inspection that |s> = |11>. The actual implementation of Simon's algorithm involves multiple measurements in order to determine the secret string. The Appendix modifies the quantum circuit to include the effect of measurement on the bottom wire.
The second method of analysis uses the following truth tables for the quantum gates, together with the operation of the Hadamard gate, to trace the evolution of the input qubits through the quantum circuit.
$\begin{matrix} \text{NOT} & \text{CNOT} & \text{CnNOT} \ \begin{pmatrix} 0 & \rightarrow & 1 \ 1 & \rightarrow & 0 \end{pmatrix} & \begin{pmatrix} \text{Decimal} & \text{Binary} & ~ & \text{Binary} & \text{Decimal} \ 0 & 00 & \rightarrow & 00 & 0 \ 1 & 01 & \rightarrow & 01 & 1 \ 2 & 10 & \rightarrow & 11 & 3 \ 3 & 11 & \rightarrow & 10 & 2 \end{pmatrix} & \begin{pmatrix} \text{Decimal} & \text{Binary} & ~ & \text{Binary} & \text{Decimal} \ 0 & 000 & \rightarrow & 000 & 0 \ 1 & 001 & \rightarrow & 001 & 1 \ 2 & 010 & \rightarrow & 010 & 2 \ 3 & 011 & \rightarrow & 011 & 3 \ 4 & 100 & \rightarrow & 101 & 5 \ 5 & 101 & \rightarrow & 100 & 4 \ 6 & 110 & \rightarrow & 111 & 7 \ 7 & 111 & \rightarrow & 110 & 6 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{Hadamard operation:} & \begin{bmatrix} 0 & \xrightarrow{H} & \frac{1}{ \sqrt{2}} (0 + 1) & \xrightarrow{H} & 0 \ 1 & \xrightarrow{H} & \frac{1}{ \sqrt{2}} (0 - 1) & \xrightarrow{H} & 1 \end{bmatrix} \end{matrix} \nonumber$
$\begin{matrix} |000 \rangle \ \text{H} \otimes \text{H} \otimes \text{I} \ \frac{1}{ \sqrt{2}} \left[ |0 \rangle + |1 \rangle \right] \frac{1}{ \sqrt{2}} \left[ |0 \rangle + |1 \rangle \right] |0 \rangle = \frac{1}{2} \left[ |000 \rangle + |010 \rangle + |100 \rangle + |110 \rangle \right] \ \text{CnNOT} \ \frac{1}{2} \left[ |000 \rangle + |010 \rangle + |101 \rangle + |111 \rangle \right] \ \text{I} \otimes \text{CNOT} \ \frac{1}{2} \left[ |000 \rangle + |011 \rangle + |101 \rangle + |110 \rangle \right] \ \text{I} \otimes \text{I} \otimes \text{NOT} \ \frac{1}{2} \left[ |001 \rangle + |010 \rangle + |100 \rangle + |111 \rangle \right] \ \text{H} \otimes \text{H} \otimes \text{I} \ \frac{1}{2} \left[ \left( |00 \rangle - |11 \rangle \right) |0 \rangle + \left( |00 \rangle + |11 \rangle \right) |1 \rangle \right] \end{matrix} \nonumber$
Appendix
The circuit modification shown below includes the effect of measurement on the bottom wire.
Measure |0> on the bottom wire:
$\text{QuantumCircuit} = \text{kronecker} \left[ \text{H, kronecker} \left[ \text{H, } \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix}^T \right] \right] \text{U}_f \text{kronecker(H, kronecker(H, I))} \nonumber$
$\text{QuantumCircuit} \Psi_{in} = \begin{pmatrix} 0.5 \ 0 \ 0 \ 0 \ 0 \ 0 \ -0.5 \ 0 \end{pmatrix} = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} - \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} \nonumber$
Measure |1> on the bottom wire:
$\text{QuantumCircuit} = \text{kronecker} \left[ \text{H, kronecker} \left[ \text{H, } \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix}^T \right] \right] \text{U}_f \text{kronecker(H, kronecker(H, I))} \nonumber$
$\text{QuantumCircuit} \Psi_{in} = \begin{pmatrix} 0 \ 0.5 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0.5 \end{pmatrix} = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \nonumber$
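A NumPy version of the two projected circuits (same sketch conventions as above; the projectors |0><0| and |1><1| replace the identity on the bottom wire):

```python
import numpy as np

I = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CnNOT = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]
P0 = np.array([[1, 0], [0, 0]])  # |0><0|
P1 = np.array([[0, 0], [0, 1]])  # |1><1|

Uf = np.kron(I, np.kron(I, NOT)) @ np.kron(I, CNOT) @ CnNOT
prep = np.kron(H, np.kron(H, I))
psi_in = np.zeros(8)
psi_in[0] = 1  # |000>

print(np.kron(H, np.kron(H, P0)) @ Uf @ prep @ psi_in)  # [0.5 0 0 0 0 0 -0.5 0]
print(np.kron(H, np.kron(H, P1)) @ Uf @ prep @ psi_in)  # [0 0.5 0 0 0 0 0 0.5]
```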
Source: Quantum Tutorials (Rioux), 8.74: Aspects of Simon's Algorithm
In this document I reproduce most of the results presented in Professor Galvez's paper using the Mathcad programming environment.
State Vector
$\begin{matrix} \text{Photon moving horizontally:} & \text{x} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{Photon moving vertically:} & \text{y} = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \text{Null vector:} & \text{n} = \begin{pmatrix} 0 \ 0 \end{pmatrix} \ \text{Horizontal polarization:} & \text{h} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{Vertical polarization:} & \text{v} = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \text{Diagonal polarization:} & \text{d} = \begin{pmatrix} \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \end{pmatrix} \end{matrix} \nonumber$
Single mode operators:
Projection operators for motion in the x- and y-directions:
$\begin{matrix} \text{X} = \begin{pmatrix} 1 & 0 \ 0 & 0 \end{pmatrix} & \text{Y} = \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Operator for polarizing film oriented at an angle of θ to the horizontal.
$\Theta_{ \text{op}} ( \theta ) = \begin{pmatrix} \cos \theta \ \sin \theta \end{pmatrix} \begin{pmatrix} \cos \theta & \sin \theta \end{pmatrix} \rightarrow \begin{pmatrix} \cos^2 \theta & \cos \theta \sin \theta \ \cos \theta \sin \theta & \sin^2 \theta \end{pmatrix} \nonumber$
Beam splitter:
$\text{BS} = \begin{pmatrix} \frac{1}{ \sqrt{2}} & \frac{i}{ \sqrt{2}} \ \frac{i}{ \sqrt{2}} & \frac{1}{ \sqrt{2}} \end{pmatrix} \nonumber$
Mirror:
$\text{M} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \nonumber$
Phase shift:
$\text{A}( \delta ) = \begin{pmatrix} e^{i \delta} & 0 \ 0 & 1 \end{pmatrix} \nonumber$
Half and quarter wave plate:
$\begin{matrix} \text{W}_2 = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & \text{W}_4 = \begin{pmatrix} 1 & 0 \ 0 & -i \end{pmatrix} \end{matrix} \nonumber$
Identity:
$\text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \nonumber$
Rotated half wave plate:
$\begin{matrix} \text{W}_2 ( \theta) = \begin{pmatrix} \cos (2 \theta) & \sin (2 \theta) \ \sin (2 \theta) & - \cos (2 \theta) \end{pmatrix} & \text{W} ( \theta) = \begin{pmatrix} \cos (2 \theta) & \sin (2 \theta) & 0 & 0 \ \sin (2 \theta) & - \cos (2 \theta) & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & -1 \end{pmatrix} \end{matrix} \nonumber$
Mach-Zehnder interferometer:
$\text{MZ} ( \delta) = \text{BS A} ( \delta) \text{M BS} \nonumber$
Two mode states and operators:
Single-photon direction of propagation and polarization states:
$\begin{matrix} \text{xh} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{xv} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{yh} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & \text{yv} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Two-photon direction of propagation states.
$\begin{matrix} \text{xx} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{xy} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{yx} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & \text{yy} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Two-photon polarization states:
$\begin{matrix} \text{hh} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{hv} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{vh} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & \text{vv} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Polarizing beam splitter which transmits horizontally polarized photons and reflects vertically polarized photons.
$\begin{matrix} \text{PBS} = \text{xh xh}^T + \text{yv xv}^T + \text{yh yh}^T + \text{xv yv}^T & \text{PBS} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \ 0 & 1 & 0 & 0 \end{pmatrix} \end{matrix} \nonumber$
Polarization M-Z interferometer:
$\text{MZp} ( \delta) = \text{PBS kronecker} ( \text{A} ( \delta), \text{ I}) \text{kronecker(M, I) PBS} \nonumber$
Kronecker is Mathcad's command for tensor multiplication of square matrices.
Mach‐Zehnder interferometer for direction of propagation and polarization, which places a rotatable half‐wave plate in the upper path.
$\text{MZ}_{ \text{dp}} ( \theta, \delta) = \text{kronecker (BS, I) kronecker(A (} \delta) \text{ ,I) W (} \theta \text{) kronecker (M, I) kronecker (BS, I)} \nonumber$
Mach-Zehnder two-photon direction-of-propagation interferometer.
$\begin{matrix} \text{BSBS = kronecker(BS, BS)} & \text{MM = kronecker(M, M)} & \text{AA(} \delta \text{) = kronecker(A(} \delta \text{), A(} \delta)) \ ~ & \text{MZ}_{ \text{dd}} ( \delta) = \text{BSBS AA} ( \delta) \text{MM BSBS} \end{matrix} \nonumber$
Confirm the results in Figure 2 for the Mach-Zehnder interferometer:
$\delta = 0, .125 \pi ... 6 \pi \nonumber$
Demonstrate that a superposition is formed after the first beam splitter.
$\begin{matrix} \text{BS x} = \begin{pmatrix} 0.707 \ 0.707i \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(x + iy)} = \begin{pmatrix} 0.707 \ 0.707i \end{pmatrix} \end{matrix} \nonumber$
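For readers without Mathcad, a short NumPy rendering of these single-photon definitions (my translation) traces |x> through MZ(δ) and gives the cos²(δ/2), sin²(δ/2) fringes of Figure 2:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50-50 beam splitter
M = np.array([[0, 1], [1, 0]])                  # mirror
A = lambda d: np.diag([np.exp(1j * d), 1.0])    # phase shift in one arm
MZ = lambda d: BS @ A(d) @ M @ BS

x = np.array([1, 0])
for d in (0, np.pi / 2, np.pi):
    print(np.round(np.abs(MZ(d) @ x) ** 2, 3))  # [cos^2(d/2), sin^2(d/2)]
```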
Confirmation that path information destroys interference.
$\delta = 0, .1 \pi .. 2 \pi \nonumber$
Erasure of path information restores interference. Erasers for the x- and y-directions place diagonal polarizers in those directions after the interferometer.
$\begin{matrix} \text{E}_{ \text{x}} = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 \ \frac{1}{2} & \frac{1}{2} & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} & \text{E}_{ \text{y}} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & \frac{1}{2} & \frac{1}{2} \ 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \end{matrix} \nonumber$
The x-direction has an eraser and the y-direction does not.
$\delta = 0, .1 \pi .. 6 \pi \nonumber$
The y-direction has an eraser and the x-direction does not.
For the MZ polarization interferometer, diagonally polarized light enters in the x-direction, |xd>. Tensor vector multiplication is awkward in Mathcad, as shown below.
$\begin{matrix} \Psi_{ \text{in}} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \ 0 \ 0 \end{pmatrix} & \text{submatrix(kronecker(augment(x, n), augment(d, n)), 1, 4, 1, 1)} = \begin{pmatrix} 0.707 \ 0.707 \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
No light, however, exits in the x-direction. It exits in the y-direction, showing no interference effects.
$\delta = 0, .2 \pi ... \pi \nonumber$
$\begin{matrix} \text{x-direction:} & \text{y-direction:} \ \left( \left| \text{kronecker(X, I) MZ}_p ( \delta) \Psi_{ \text{in}} \right| \right)^2 = & \left( \left| \text{kronecker(Y, I) MZ}_p ( \delta) \Psi_{ \text{in}} \right| \right)^2 = \ \begin{array}{|r|} \hline \ 0 \ \hline \ 0 \ \hline \ 0 \ \hline \ 0 \ \hline \ 0 \ \hline \ 0 \ \hline \end{array} & \begin{array}{|r|} \hline \ 1 \ \hline \ 1 \ \hline \ 1 \ \hline \ 1 \ \hline \ 1 \ \hline \ 1 \ \hline \end{array} \end{matrix} \nonumber$
Placement of a D polarizer in the y-direction output erases distinguishing information and interference appears.
$\delta = 0, .1 \pi .. 6 \pi \nonumber$
Calculation of exit probabilities for two photons in direction-of-propagation modes:
$\begin{matrix} \text{P}_{ \text{xx}} ( \delta) = \left( \left| \text{xx}^T \text{MZ}_{ \text{dd}} ( \delta) \text{xx} \right| \right)^2 & \text{P}_{ \text{xy}} ( \delta) = \left[ \left| \frac{1}{ \sqrt{2}} ( \text{xy + yx})^T \text{MZ}_{ \text{dd}} ( \delta) \text{xx} \right| \right]^2 \ \text{P}_{ \text{yy}} ( \delta) = \left( \left| \text{yy}^T \text{MZ}_{ \text{dd}} ( \delta) \text{xx} \right| \right)^2 & \text{Tot} ( \delta) = \text{P}_{ \text{xx}} ( \delta) + \text{P}_{ \text{xy}} ( \delta) + \text{P}_{ \text{yy}} ( \delta) \end{matrix} \nonumber$
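A NumPy sketch of the biphoton probability (built from the same single-photon pieces as the earlier sketch) shows the frequency doubling of Pxy numerically:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
M = np.array([[0, 1], [1, 0]])
A = lambda d: np.diag([np.exp(1j * d), 1.0])
MZdd = lambda d: np.kron(BS, BS) @ np.kron(A(d), A(d)) @ np.kron(M, M) @ np.kron(BS, BS)

xx = np.array([1, 0, 0, 0])
xy, yx = np.array([0, 1, 0, 0]), np.array([0, 0, 1, 0])
for d in np.linspace(0, np.pi, 5):
    Pxy = np.abs((xy + yx) / np.sqrt(2) @ MZdd(d) @ xx) ** 2
    print(round(d, 2), round(Pxy, 3))  # follows sin^2(d)/2: twice the single-photon frequency
```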
Reproduction of Figure 5b with the addition of Pyy.
$\delta = 0, .07 \pi .. 6 \pi \nonumber$
"The striking result is that the (Pxy) interference pattern has twice the frequency of the single-photon interference pattern. Nonclassical interference shows new quantum aspects: two photons acting as a single quantum object (a biphoton)."
Hong-Ou-Mandel interference (right column, page 516):
$\begin{matrix} \text{BSBS} \frac{1}{ \sqrt{2}} \text{(xy + yx)} = \begin{pmatrix} 0.707i \ 0 \ 0 \ 0.707i \end{pmatrix} & \frac{i}{ \sqrt{2}} \text{(xx + yy)} = \begin{pmatrix} 0.707i \ 0 \ 0 \ 0.707i \end{pmatrix} \end{matrix} \nonumber$
Section III.D deals with distinguishing between pure and mixed states experimentally. The pure state and its density matrix are given below.
$\begin{matrix} \Psi_{ \text{pure}} = \frac{1}{ \sqrt{2}} \text{(hh + vv)} & \Psi_{ \text{pure}} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \Psi_{ \text{pure}} \Psi_{ \text{pure}}^T = \begin{pmatrix} 0.5 & 0 & 0 & 0.5 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 0.5 & 0 & 0 & 0.5 \end{pmatrix} \end{matrix} \nonumber$
The density matrix for the mixed state is calculated as follows.
$\frac{1}{2} \text{hh hh}^T + \frac{1}{2} \text{vv vv}^T = \begin{pmatrix} 0.5 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0.5 \end{pmatrix} \nonumber$
The following calculations and their graphical representation are in complete agreement with section III.D.
$\begin{matrix} \text{Pure}( \alpha) = \text{tr} \left[ \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 1 \end{pmatrix} \frac{1}{2} \left[ \begin{pmatrix} \cos \alpha \ \cos \alpha \ \sin \alpha \ \sin \alpha \end{pmatrix} \begin{pmatrix} \cos \alpha \ \cos \alpha \ \sin \alpha \ \sin \alpha \end{pmatrix}^T \right] \right] \text{simplify} \rightarrow \frac{ \sin (2 \alpha)}{4} + \frac{1}{4} \end{matrix} \nonumber$
$\begin{matrix} \text{Mixed}( \alpha) = \text{tr} \left[ \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} \frac{1}{2} \left[ \begin{pmatrix} \cos \alpha \ \cos \alpha \ \sin \alpha \ \sin \alpha \end{pmatrix} \begin{pmatrix} \cos \alpha \ \cos \alpha \ \sin \alpha \ \sin \alpha \end{pmatrix}^T \right] \right] \text{simplify} \rightarrow \frac{1}{4} \end{matrix} \nonumber$
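The two traces are easy to verify numerically; this NumPy fragment (an illustrative stand-in for the Mathcad symbolic results) evaluates both at a few polarizer angles:

```python
import numpy as np

rho_pure = 0.5 * np.array([[1, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]])
rho_mixed = 0.5 * np.diag([1.0, 0, 0, 1.0])

def coincidence(rho, a):
    v = np.array([np.cos(a), np.cos(a), np.sin(a), np.sin(a)])
    return np.trace(rho @ (0.5 * np.outer(v, v)))

for a in np.linspace(0, np.pi, 5):
    print(round(coincidence(rho_pure, a), 3),   # (1 + sin 2a)/4
          round(coincidence(rho_mixed, a), 3))  # constant 1/4
```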
Reproduce Figure 6 results.
$\alpha = 0 \text{deg},~ 5 \text{deg} .. 180 \text{deg} \nonumber$
The following calculations are in agreement with the math in the final paragraph of section IV.D.
$\begin{matrix} \text{kronecker} \left( \text{W}_2 (0),~ \text{I} \right) \Psi_{ \text{pure}} = \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} & \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix}^T = \begin{pmatrix} 0.5 & 0 & 0 & -0.5 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ -0.5 & 0 & 0 & 0.5 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{kronecker} \left( \text{W}_2 (0),~ \text{I} \right) \Psi_{ \text{pure}} \Psi_{ \text{pure}}^T \text{kronecker} \left( \text{W}_2 (0),~ \text{I} \right)^T = \begin{pmatrix} 0.5 & 0 & 0 & -0.5 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ -0.5 & 0 & 0 & 0.5 \end{pmatrix} \ \text{Pure} ( \alpha) = \text{tr} \left[ \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & -1 \ 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 \ -1 & 0 & 0 & 1 \end{pmatrix} \frac{1}{2} \left[ \begin{pmatrix} \cos \alpha \ \cos \alpha \ \sin \alpha \ \sin \alpha \end{pmatrix} \begin{pmatrix} \cos \alpha \ \cos \alpha \ \sin \alpha \ \sin \alpha \end{pmatrix}^T \right] \right] \text{simplify} \rightarrow \frac{1}{4} - \frac{ \sin (2 \alpha)}{4} \end{matrix} \nonumber$
The paper shows this as $\frac{1 - \sin \alpha}{4}$, which I am confident is a typographical error.
Source: Quantum Tutorials (Rioux), 8.75: Qubit Quantum Mechanics
The following is a Mathcad implementation of David Deutsch's quantum computer prototype as presented on pages 10-11 in "Machines, Logic and Quantum Physics" by David Deutsch, Artur Ekert, and Rossella Lupacchini, which can be found at arXiv:math.HO/9911150v1.
A function f maps {0, 1} to {0, 1}. There are four possible outcomes: f(0) = 0; f(0) = 1; f(1) = 0; f(1) = 1. The task Deutsch tackled was to develop an implementable quantum algorithm which could determine whether f(0) and f(1) are the same or different in a single calculation. By comparison, classical computers require two calculations for such a task: calculating both f(0) and f(1) to see if they are the same or different.
The proposed quantum computer consists of three one-qubit gates in the arrangement shown below. The $\sqrt{NOT}$ gates are 50-50 beam splitters that assign a π/2 (i, 90 degree) phase change to reflection relative to transmission. For example, the first gate creates the following superpositions of the inputs |0> and |1>.
$\begin{matrix} |0 \rangle \rightarrow \frac{1}{ \sqrt{2}} \left[ |0 \rangle + i|1 \rangle \right] & |1 \rangle \rightarrow \frac{1}{ \sqrt{2}} \left[ i|0 \rangle + |1 \rangle \right] \end{matrix} \nonumber$
The middle gate carries out phase shifts on the superposition created by the first gate. Depending on the f-values, the operation of the second $\sqrt{NOT}$ converts the superposition to either |0> or |1> multiplied by a phase factor or unity. The Deutsch circuit is essentially a two-port Mach-Zehnder interferometer with the possibility for unequal phase changes in its upper and lower arms.
The matrix representations for the gates are as follows:
$\begin{matrix} 0 = \begin{pmatrix} 1 \ 0 \end{pmatrix} & ~ & ~ & ~ & 0 = \begin{pmatrix} 1 \ 0 \end{pmatrix} \ ~ & \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} & \begin{bmatrix} (-1)^{f(0)} & 0 \ 0 & (-1)^{f(1)} \end{bmatrix} & \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} & ~ \ 1 = \begin{pmatrix} 0 \ 1 \end{pmatrix} & ~ & ~ & ~ & 1 = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
We begin with a matrix mechanics approach to Deutsch's algorithm using the definitions provided immediately above. There are two input ports and two output ports, but only one input port is used in any given computational run. First it is shown how the output result depends on the input port chosen in terms of the values of f(0) and f(1).
$\begin{matrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} ~ \text{Input} & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 1 \ 0 \end{pmatrix} \rightarrow \begin{bmatrix} \frac{(-1)^{f_0}}{2} - \frac{(-1)^{f_1}}{2} \ \frac{(-1)^{f_0} i}{2} + \frac{(-1)^{f_1} i}{2} \end{bmatrix} \ \begin{pmatrix} 0 \ 1 \end{pmatrix} ~ \text{Input} & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 0 \ 1 \end{pmatrix} \rightarrow \begin{bmatrix} \frac{(-1)^{f_0} i}{2} + \frac{(-1)^{f_1} i}{2} \ \frac{(-1)^{f_1}}{2} - \frac{(-1)^{f_0}}{2} \end{bmatrix} \end{matrix} \nonumber$
These calculations and the circuit diagram show that there are two paths to each output port from each input port. As will now be shown these paths interfere constructively or destructively depending on the phase changes brought about by the middle circuit element's values of f(0) and f(1).
The following calculations show that the probability that |0 > input leads to |0 > output is zero if f(0) and f(1) are the same (both 0 or both 1), and unity if they are different (one 0, the other 1). Thus the task has been successfully accomplished. The highlighted central region calculates the output state for input state |0> given the values of f(0) and f(1) to the left. On the right the probability that |0> is the output state is calculated.
$\begin{matrix} f_0 = 0 & f_1 = 0 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ i \end{pmatrix} & \left[ \left| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 0 \ i \end{pmatrix} \right| \right]^2 = 0 \ f_0 = 1 & f_1 = 1 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ -i \end{pmatrix} & \left[ \left| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 0 \ -i \end{pmatrix} \right| \right]^2 = 0 \ f_0 = 1 & f_1 = 0 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} -1 \ 0 \end{pmatrix} & \left[ \left| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} -1 \ 0 \end{pmatrix} \right| \right]^2 = 1 \ f_0 = 0 & f_1 = 1 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \left[ \left| \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \right| \right]^2 = 1 \end{matrix} \nonumber$
As might be expected, similar calculations show that the probability that |1> input leads to |1> output is zero if f(0) and f(1) are the same, and unity if they are different.
$\begin{matrix} f_0 = 0 & f_1 = 0 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} i \ 0 \end{pmatrix} & \left[ \left| \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} i \ 0 \end{pmatrix} \right| \right]^2 = 0 \ f_0 = 1 & f_1 = 1 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} -i \ 0 \end{pmatrix} & \left[ \left| \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} -i \ 0 \end{pmatrix} \right| \right]^2 = 0 \ f_0 = 1 & f_1 = 0 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \left[ \left| \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \right| \right]^2 = 1 \ f_0 = 0 & f_1 = 1 & \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \right] \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ -1 \end{pmatrix} & \left[ \left| \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \ -1 \end{pmatrix} \right| \right]^2 = 1 \end{matrix} \nonumber$
Examination of the Deutsch circuit reveals certain similarities with the double-slit experiment. For example, there are two paths for input |0> to output |0> and input |1> to output |1> (and also for |0> --> |1> and |1> --> |0>, but they are not of interest here). As Deutsch and his co-authors state, this is the secret of the quantum computer - the possibility of constructive and destructive interference of the probability amplitudes for the various computational paths.
Addition of probability amplitudes, rather than probabilities, is one of the fundamental rules for prediction in quantum mechanics and applies to all physical objects, in particular quantum computing machines. If a computing machine starts in a specific initial configuration (input) then the probability that after its evolution via a sequence of intermediate configurations it ends up in a specific final configuration (output) is the squared modulus of the sum of all the probability amplitudes of the computational paths that connect the input with the output. The amplitudes are complex numbers and may cancel each other, which is referred to as destructive interference, or enhance each other, referred to as constructive interference. The basic idea of quantum computation is to use quantum interference to amplify the correct outcomes and to suppress the incorrect outcomes of computations.
Recall from above (see the matrix representing the $\sqrt{NOT}$ beam splitters) that the probability amplitude for transmission at the beam splitters is $\frac{1}{ \sqrt{2}}$, and the probability amplitude for reflection is $\frac{i}{ \sqrt{2}}$. The middle element of the circuit causes phase shifts on its input wires that depend on the values of f(0) and f(1). From the circuit diagram we see that |0> output from |0> input can be achieved by two transmissions and a phase shift on the upper wire or reflection to the lower wire, phase shift, followed by reflection to the upper wire. The absolute magnitude squared of the sum of these probability amplitudes is calculated for the four possible values for f(0) and f(1).
$\begin{matrix} f_0 = 0 & f_1 = 0 & \left[ \left| \frac{1}{ \sqrt{2}} (-1)^{f_0} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} (-1)^{f_1} \frac{i}{ \sqrt{2}} \right| \right]^2 = 0 \ f_0 = 1 & f_1 = 1 & \left[ \left| \frac{1}{ \sqrt{2}} (-1)^{f_0} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} (-1)^{f_1} \frac{i}{ \sqrt{2}} \right| \right]^2 = 0 \ f_0 = 1 & f_1 = 0 & \left[ \left| \frac{1}{ \sqrt{2}} (-1)^{f_0} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} (-1)^{f_1} \frac{i}{ \sqrt{2}} \right| \right]^2 = 1 \ f_0 = 0 & f_1 = 1 & \left[ \left| \frac{1}{ \sqrt{2}} (-1)^{f_0} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} (-1)^{f_1} \frac{i}{ \sqrt{2}} \right| \right]^2 = 1 \end{matrix} \nonumber$
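The same four sums take only a few lines of Python (amplitudes 1/√2 for transmission and i/√2 for reflection, as in the text):

```python
import numpy as np

t, r = 1 / np.sqrt(2), 1j / np.sqrt(2)  # transmission and reflection amplitudes
for f0, f1 in [(0, 0), (1, 1), (1, 0), (0, 1)]:
    amp = t * (-1) ** f0 * t + r * (-1) ** f1 * r  # two histories from |0> to |0>
    print(f0, f1, round(abs(amp) ** 2))  # 0, 0, 1, 1
```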
As expected we see consistency with the previous calculations. However, this method has the advantage of more directly revealing what is happening from the quantum mechanical perspective. When f(0) and f(1) are the same the two path amplitudes interfere destructively; when they are different there is constructive interference between the path amplitudes.
As can be seen from above, in the absence of the middle element of the quantum circuit, the two paths from |0> to |0> are 180 degrees out of phase and therefore destructively interfere. In the presence of the middle element the paths are still 180 degrees out of phase unless the f-values are different, and then they are brought into phase and constructively interfere.
As Feynman emphasized in his eponymous lecture series on physics, the creation of superpositions and the interference of probability amplitudes are the essence of quantum mechanics.
Another implementation of Deutsch's algorithm is due to Artur Ekert and co-workers (see Julian Brown's The Quest for the Quantum Computer, pages 353-355).
The following table provides a summary of the results. If qubit 1 is |0>, f(0) and f(1) are the same, but if it is |1> they are different.
$\begin{bmatrix} f_0 & 0 & 1 & 1 & 0 \ f_1 & 0 & 1 & 0 & 1 \ \text{qubit1} & \begin{pmatrix} 1 \ 0 \end{pmatrix} & \begin{pmatrix} 1 \ 0 \end{pmatrix} & \begin{pmatrix} 0 \ 1 \end{pmatrix} & \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \text{qubit2} & \begin{pmatrix} -0.707 \ 0.707 \end{pmatrix} & \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} & \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} & \begin{pmatrix} -0.707 \ 0.707 \end{pmatrix} \ \text{OutputState} & \begin{pmatrix} -0.707 \ 0.707 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} 0.707 \ -0.707 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} 0 \ 0 \ 0.707 \ -0.707 \end{pmatrix} & \begin{pmatrix} 0 \ 0 \ -0.707 \ 0.707 \end{pmatrix} \end{bmatrix} \nonumber$
The algorithm is implemented below.
$\begin{matrix} \text{Identity} & \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{Not gate:} & \text{NOT} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{Hadamard gate:} & \text{H} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} f_0 = 0 & f_1 = 0 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, I) } \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} -0.707 \ 0.707 \ 0 \ 0 \end{pmatrix} \ f_0 = 1 & f_1 = 1 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, I)} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ -0.707 \ 0 \ 0 \end{pmatrix} \ f_0 = 1 & f_1 = 0 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, I)} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0.707 \ -0.707 \end{pmatrix} \ f_0 = 0 & f_1 = 1 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, I)} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ -0.707 \ 0.707 \end{pmatrix} \end{matrix} \nonumber$
It is easy to show that the output of each of these calculations is the tensor product of qubits 1 and 2 in the summary table provided above.
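The verification is a one-loop NumPy calculation (my rendering of the Mathcad lines above, not the original worksheet):

```python
import numpy as np

I = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_in = np.array([1, -1, 0, 0]) / np.sqrt(2)  # |0> (|0> - |1>)/sqrt(2)

for f0, f1 in [(0, 0), (1, 1), (1, 0), (0, 1)]:
    D = np.diag([(-1.0) ** f0, (-1.0) ** f1])
    out = np.kron(H, I) @ np.kron(D, NOT) @ np.kron(H, I) @ psi_in
    print(f0, f1, np.round(out, 3))  # matches the OutputState column above
```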
Perhaps a better way to set up this circuit is to begin with |00> and have Hadamard gates operate on both wires.
$\begin{matrix} f_0 = 0 & f_1 = 0 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, H)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0.707 \ 0 \ 0 \end{pmatrix} \ f_0 = 1 & f_1 = 1 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, H)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} -0.707 \ -0.707 \ 0 \ 0 \end{pmatrix} \ f_0 = 1 & f_1 = 0 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, H)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ -0.707 \ -0.707 \end{pmatrix} \ f_0 = 0 & f_1 = 1 & \text{kronecker(H, I) kronecker} \left[ \begin{bmatrix} (-1)^{f_0} & 0 \ 0 & (-1)^{f_1} \end{bmatrix},~ \text{NOT} \right] \text{kronecker(H, H)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0.707 \ 0.707 \end{pmatrix} \end{matrix} \nonumber$
As the following table shows, the same result is achieved as in the previous circuit.
$\begin{bmatrix} f_0 & 0 & 1 & 1 & 0 \ f_1 & 0 & 1 & 0 & 1 \ \text{qubit1} & \begin{pmatrix} 1 \ 0 \end{pmatrix} & \begin{pmatrix} 1 \ 0 \end{pmatrix} & \begin{pmatrix} 0 \ 1 \end{pmatrix} & \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \text{qubit2} & \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} & \begin{pmatrix} -0.707 \ -0.707 \end{pmatrix} & \begin{pmatrix} -0.707 \ -0.707 \end{pmatrix} & \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} \ \text{OutputState} & \begin{pmatrix} 0.707 \ 0.707 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} -0.707 \ -0.707 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} 0 \ 0 \ -0.707 \ -0.707 \end{pmatrix} & \begin{pmatrix} 0 \ 0 \ 0.707 \ 0.707 \end{pmatrix} \end{bmatrix} \nonumber$
Source: Quantum Tutorials (Rioux), 8.76: Implementation of Deutsch's Algorithm Using Mathcad
In the matrix version of quantum mechanics, vectors represent states and matrices represent operators.
Quantum bits or qubit states:
$\begin{matrix} \text{Base states:} & \text{A superposition of base states:} \ 0 = \begin{pmatrix} 1 \ 0 \end{pmatrix} ~ ~ 1 = \begin{pmatrix} 0 \ 1 \end{pmatrix} & \begin{pmatrix} \alpha \ \beta \end{pmatrix} = \alpha \begin{pmatrix} 1 \ 0 \end{pmatrix} + \beta \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \text{ where } \left ( \left| \alpha \right| \right)^2 + \left( \left| \beta \right| \right)^2 = 1 \nonumber$
The identity operator and the two quantum gates that will be used to create quantum superpositions and entangled states are provided below.
$\begin{matrix} \text{Identity:} & \text{Hadamard gate:} & \text{Controlled-NOT gate:} \ \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{H} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} & \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
The Hadamard gate operates on the base states to create superpositions:
$\begin{matrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2}}{2} \end{pmatrix} & \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ - \frac{ \sqrt{2}}{2} \end{pmatrix} \end{matrix} \nonumber$
When it operates on the superpositions it returns the base states, demonstrating that the Hadamard gate is reversible.
$\begin{matrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2}}{2} \end{pmatrix} \rightarrow \begin{pmatrix} 1 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \begin{pmatrix} \frac{ \sqrt{2}}{2} \ - \frac{ \sqrt{2}}{2} \end{pmatrix} \rightarrow \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Two-qubit states are created by tensor multiplication of single-qubit states. These binary tensor products correspond to the decimal numbers 0, 1, 2 and 3.
$\begin{matrix} 00 = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & 01 = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & 10 = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & 11 = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
The controlled-NOT gate operates on these states yielding the following results.
$\begin{matrix} \text{CNOT} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{CNOT} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{CNOT} \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} & \text{CNOT} \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
The results are summarized in the following table. The first qubit is the control. If it is 0, nothing happens to the second qubit. If it is 1, the second qubit is flipped by the NOT gate embedded in the lower right quadrant of the CNOT matrix; thus the name controlled-NOT.
$\begin{matrix} \begin{pmatrix} \text{CNOT} \ 00 \rightarrow 00 \ 01 \rightarrow 01 \ 10 \rightarrow 11 \ 11 \rightarrow 10 \end{pmatrix} & \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
Something altogether different happens when the CNOT gate operates on the following two-qubit states in which the first qubit is one of the superpositions from above and the second is one of the base states. There are four possible calculations and they yield the well known Bell states.
The Bell states are entangled superpositions and are of great importance in quantum information theory. They cannot be factored and therefore express quantum correlation in a most simple and striking way.
$\begin{matrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} & \text{CNOT} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ -1 \ 0 \end{pmatrix} & \text{CNOT} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ -1 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} - \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 1 \end{pmatrix} & \text{CNOT} \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \right] \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ -1 \end{pmatrix} & \text{CNOT} \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ -1 \end{pmatrix} = \begin{pmatrix} 0 \ 0.707 \ -0.707 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} - \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \right] \end{matrix} \nonumber$
Bell states:
$\begin{matrix} \Theta_p = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} & \Theta_m = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} & \Psi_p = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} & \Psi_m = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ -1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
The same results can be obtained with |00>, |01>, |10> and |11> by operating on the first qubit with a Hadamard gate (creating a superposition) followed by a CNOT operation.
$\begin{matrix} \text{CNOT kronecker(H, I)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \text{CNOT kronecker(H, I)} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} \ \text{CNOT kronecker(H, I)} \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} & \text{CNOT kronecker(H, I)} \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0.707 \ -0.707 \ 0 \end{pmatrix} \end{matrix} \nonumber$
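In NumPy the same four lines collapse to a loop (an illustrative sketch, not Mathcad syntax):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

for k in range(4):  # |00>, |01>, |10>, |11>
    basis = np.zeros(4)
    basis[k] = 1
    print(np.round(CNOT @ np.kron(H, I) @ basis, 3))  # the four Bell states
```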
Source: Quantum Tutorials (Rioux), 8.77: Using Quantum Gates to Create Superpositions and Entangled States
Giving a friend directions to his house, Yogi Berra said “When you come to a fork in the road, take it.” I will attempt to demonstrate that this well-known “yogi-ism” describes an essential feature of quantum phenomena and the parallelism that is exploited by a quantum computer.
A Mach-Zehnder interferometer (MZI) is a simple example of a quantum computer. Its main components are two optical beam splitters (think half-silvered mirrors). The first, the fork in the road, creates a superposition of two computational paths. The second beam splitter recombines the paths giving rise to the constructive and destructive interference that is essential to quantum computation.
The MZI quantum computer will be assembled in three steps. The first step shows a 50-50 beam splitter that can be illuminated by two input ports and their vector designations, |0> and |1>. Since we are dealing with computation it will eventually be shown that a 50-50 beam splitter is a square root of NOT gate - √NOT. The quantum aspects of computation are already appearing because there is no classical analog for a √NOT gate, and yet it exists physically as a simple 50-50 beam splitter.
The experimental results for illuminating ports |0> and |1> are reported in the table below. From a classical perspective there is nothing unusual here. A 50-50 beam splitter transmits 50% of the radiation and reflects 50% of the radiation illuminating it. Even if we adopt the photon concept and consider many single photon events, there is still nothing worthy of comment. Statistically half the photons are transmitted and half reflected, no matter which input port is used. And the results are totally random. We cannot predict with certainty the results of individual events. We only know that if we record a statistically meaningful number of results this is what we get.
$\begin{array}{|c|c|c|c|c|} \hline \ |0 \rangle \text{ Input} & |0 \rangle & \text{Transmitted} & |0 \rangle & 50 \% \ \hline \ |0 \rangle \text{ Input} & |0 \rangle & \text{Reflected} & |1 \rangle & 50 \% \ \hline \ |1 \rangle \text{ Input} & |1 \rangle & \text{Reflected} & |0 \rangle & 50 \% \ \hline \ |1 \rangle \text{ Input} & |1 \rangle & \text{Transmitted} & |1 \rangle & 50 \% \ \hline \end{array} \nonumber$
At this point we could interpret these results as saying that the source emits photons and the detector registers photons, and that the events at the beam splitter are random; we cannot predict with certainty the outcome of any single encounter of a photon with the beam splitter, only the statistical results given in the table.
Quantum mechanically, the beam splitter is Yogi’s fork in the road, and the photon as a quantum mechanical object (quon) takes both paths. The photon paths, |0> and |1>, are represented by the vectors in the figure. The beam splitter’s interaction with a photon is given by the following matrix. By convention the probability amplitude for transmission is 1/√2 and for reflection it is i/√2. In other words, a 90 degree phase change is assigned to reflection at the beam splitter.
$\widehat{BS} = \widehat{ \sqrt{NOT}} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \nonumber$
Thus, according to simple matrix algebra the consequence of a photon’s interaction with a 50-50 beam splitter is the creation of a quantum mechanical superposition of the photon being present simultaneously in both paths.
$\begin{matrix} \widehat{BS} |0 \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} = \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} + i \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \left[ |0 \rangle + i |1 \rangle \right] \ \widehat{BS} |1 \rangle = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} i \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \left[ i \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \left[ i |0 \rangle + |1 \rangle \right] \end{matrix} \nonumber$
According to quantum mechanics, the photon is in an even superposition of being transmitted and reflected after the beam splitter. The probability of being detected in either output channel is the absolute square of the probability amplitudes, as calculated below.
$\left| \frac{1}{ \sqrt{2}} \right|^2 = \left| \frac{i}{ \sqrt{2}} \right|^2 = \frac{1}{2} \nonumber$
From the quantum mechanical perspective, we say that upon observation or detection the superposition collapses into one of its classical possibilities. This interpretation of a simple experiment might seem a bit extravagant, until we proceed to the next step which involves the insertion of a second beam splitter at the intersection of the two output channels.
Now there are two paths to each output channel. The first beam splitter creates two paths, one to each detector (output channel), the second beam splitter recombines those paths giving two ways to reach each detector. This is a simple example of Feynman’s “sum over histories” approach to quantum mechanics. We now have the possibility that the probability amplitudes for these “histories” or paths will interfere constructively or destructively, and of course they do.
$\begin{array}{|c|c|c|c|} \hline \ \text{Input} & \text{History} & \text{Output} & \text{Probability} \ \hline \ |0 \rangle & \text{TT + RR} & |0 \rangle & 0 \% \ \hline \ |0 \rangle & \text{TR + RT} & |1 \rangle & 100 \% \ \hline \ |1 \rangle & \text{TR + RT} & |0 \rangle & 100 \% \ \hline \ |1 \rangle & \text{TT + RR} & |1 \rangle & 0 \% \ \hline \end{array} \nonumber$
Note the strikingly different results from that with a single beam splitter. Now |0> input never yields |0> output, and |1> input never yields |1> output. That’s why a beam splitter is called a √NOT gate - √NOT√NOT = NOT.
$\widehat{NOT} = \widehat{ \sqrt{NOT}} \widehat{ \sqrt{NOT}} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} = i \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \nonumber$
The operation of the NOT gate on the two possible input states is as follows: the probability that input |0> will yield output |1> is $|i|^2 = 1$, and, of course, the probability that input |1> will yield output |0> is the same.
$\begin{matrix} \widehat{NOT} |0 \rangle = i \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = i \begin{pmatrix} 0 \ 1 \end{pmatrix} = i |1 \rangle \ \widehat{NOT} |1 \rangle = i \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = i \begin{pmatrix} 1 \ 0 \end{pmatrix} = i |0 \rangle \end{matrix} \nonumber$
This is a matrix mechanics calculation. It is also instructive to use Feynman’s “sum over histories” approach explicitly. There we sum the probability amplitudes for each history or path and take the square of the absolute magnitude. Recall that the probability amplitudes are 1/√2 and i/√2 for transmission and reflection, respectively.
$\begin{matrix} |0 \rangle \xrightarrow{TT+RR} |0 \rangle \left| \frac{1}{ \sqrt{2}} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} \frac{i}{ \sqrt{2}} \right|^2 = 0 & |0 \rangle \xrightarrow{TR+RT} |1 \rangle \left| \frac{i}{ \sqrt{2}} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} \frac{1}{ \sqrt{2}} \right|^2 = 1 \ |1 \rangle \xrightarrow{TT+RR} |1 \rangle \left| \frac{1}{ \sqrt{2}} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} \frac{i}{ \sqrt{2}} \right|^2 = 0 & |1 \rangle \xrightarrow{TR+RT} |0 \rangle \left| \frac{1}{ \sqrt{2}} \frac{i}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} \frac{1}{ \sqrt{2}} \right|^2 = 1 \end{matrix} \nonumber$
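Both views are easy to check numerically; the following NumPy fragment (illustrative, using the conventions above) confirms that two beam splitters make a NOT gate up to the overall phase i, and that the path amplitudes reproduce the matrix result:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
print(np.round(BS @ BS, 3))          # i * [[0, 1], [1, 0]] = i * NOT

t, r = 1 / np.sqrt(2), 1j / np.sqrt(2)
print(abs(t * t + r * r) ** 2)       # 0: the |0> -> |0> histories cancel
print(abs(t * r + r * t) ** 2)       # 1: the |0> -> |1> histories reinforce
```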
For $|0 \rangle \rightarrow |0 \rangle$ and $|1 \rangle \rightarrow |1 \rangle$ the probability amplitudes for the paths (histories) interfere destructively, while for $|0 \rangle \rightarrow |1 \rangle$ and $|1 \rangle \rightarrow |0 \rangle$ they interfere constructively.
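These single-gate and two-gate results are easy to verify numerically. The following sketch (Python with NumPy, my own illustration rather than part of the original tutorial) reproduces the beam-splitter matrix algebra:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # sqrt(NOT) beam splitter
ket0 = np.array([1, 0])

# Two beam splitters in series give i*NOT, i.e. NOT up to a global phase
print(np.round(BS @ BS, 3))                      # [[0, 1j], [1j, 0]]

# Probabilities for |0> input after one and after two beam splitters
one = BS @ ket0
two = BS @ BS @ ket0
print(np.abs(one)**2)   # [0.5 0.5]  single splitter: 50-50
print(np.abs(two)**2)   # [0. 1.]    two splitters: |0> always exits as |1>
```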
Now we are ready to see how a modified Mach-Zehnder interferometer can function as a quantum computer. But first a simple non-mathematical example will be examined.
Suppose you are asked whether two pieces of glass are the same thickness. Most likely you would use a caliper to measure the individual pieces and then compare the measurements. This is a bit of overkill because you were only asked if pieces of glass are the same thickness, not what the individual thicknesses are.
The question can be answered with a single measurement by placing the pieces of glass in opposite arms of the MZI as shown in the figure.
The speed of light in glass differs from that in air. Therefore the pieces of glass will cause phase shifts that depend on their thickness. This is the origin of the exponential terms in the figure. For example,
$\phi_0 = 2 \pi \frac{ \delta_0}{ \lambda} \nonumber$
is the phase shift (in radians) in the |0> arm of the interferometer. Here $\delta_0$ is the glass thickness and $\lambda$ is the wavelength of the light. Recall that in the absence of the glass, the probability for |0> input to |0> output is zero. In the presence of the glass, using Feynman's sum over histories to calculate the probability yields the following expression.
$\left| \frac{1}{ \sqrt{2}} e^{i \phi_0} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} e^{i \phi_1} \frac{i}{ \sqrt{2}} \right|^2 \nonumber$
We see that only if the phase changes are the same (glass thickness the same) in both arms of the interferometer is the $|0 \rangle \rightarrow |0 \rangle$ probability zero. If light emerges from the |0> output channel we know that the pieces of glass are not the same thickness, and have answered the thickness question with a single measurement.
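Expanding the probability expression (a short derivation added here for completeness, not part of the original text) makes the interference condition explicit. Since i·i = −1,

$\left| \frac{1}{ \sqrt{2}} e^{i \phi_0} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} e^{i \phi_1} \frac{i}{ \sqrt{2}} \right|^2 = \left| \frac{1}{2} \left( e^{i \phi_0} - e^{i \phi_1} \right) \right|^2 = \frac{1}{4} \left[ 2 - 2 \cos \left( \phi_0 - \phi_1 \right) \right] = \sin^2 \left( \frac{ \phi_0 - \phi_1}{2} \right) \nonumber$

which vanishes only when the two phase shifts are equal (modulo 2π).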
This is a precursor to the proof-of-principle example David Deutsch provided for the quantum computer (see primary reference cited below). In his example, Deutsch proposed a binary function f that maps {0,1} to {0,1} for which there are four possibilities: f(0) = 0, f(0) = 1, f(1) = 0, and f(1) = 1. The question is (analogous to the glass thickness question) are f(0) and f(1) the same or different? Classically two calculations of f are required, one with input 0 and one with input 1. The modified MZI shown below illustrates how the answer can be achieved with a single parallel calculation. In other words, a photon traverses both arms of the interferometer and its output destination answers the question.
The delay loops replace the pieces of glass and cause a 180° phase shift if taken (180° = π radians; exp(iπ) = −1). Whether they are taken is controlled by the value of f. If its value is 0 the loop is bypassed; if its value is 1 the loop is taken, bringing about a 180° phase change in that arm of the interferometer.
Feynman’s method provides the following results. The probability for $|0 \rangle \rightarrow |0 \rangle$ is 0 if f(0) = f(1) and 1 if f(0) ≠ f(1).
$\left| \frac{1}{ \sqrt{2}} (-1)^{f(0)} \frac{1}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} (-1)^{f(1)} \frac{i}{ \sqrt{2}} \right|^2 = \left| \frac{1}{2} \left[ (-1)^{f(0)} - (-1)^{f(1)} \right] \right|^2 = \begin{Bmatrix} 0 \text{ if } f(0) = f(1) \ 1 \text{ if } f(0) \neq f(1) \end{Bmatrix} \nonumber$
Consequently the probability for $|0 \rangle \rightarrow |1 \rangle$ is 1 if f(0) = f(1) and 0 if f(0) ≠ f(1).
$\left| \frac{1}{ \sqrt{2}} (-1)^{f(0)} \frac{i}{ \sqrt{2}} + \frac{i}{ \sqrt{2}} (-1)^{f(1)} \frac{1}{ \sqrt{2}} \right|^2 = \left| \frac{1}{2} \left[ (-1)^{f(0)} + (-1)^{f(1)} \right] \right|^2 = \begin{Bmatrix} 1 \text{ if } f(0) = f(1) \ 0 \text{ if } f(0) \neq f(1) \end{Bmatrix} \nonumber$
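The four possible functions can be checked by brute force. A sketch (NumPy; the loop simply evaluates the two Feynman sums above for every f):

```python
import numpy as np

T, R = 1 / np.sqrt(2), 1j / np.sqrt(2)   # transmission, reflection amplitudes

for f0 in (0, 1):
    for f1 in (0, 1):
        p00 = abs(T * (-1)**f0 * T + R * (-1)**f1 * R)**2   # |0> -> |0>
        p01 = abs(T * (-1)**f0 * R + R * (-1)**f1 * T)**2   # |0> -> |1>
        kind = "constant" if f0 == f1 else "balanced"
        print(f"f(0)={f0}, f(1)={f1} ({kind}): P(0->0)={p00:.0f}, P(0->1)={p01:.0f}")
```

The constant functions send the photon to the |0> output with certainty, and the balanced functions send it to the |1> output, exactly as the table of probabilities states.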
In summary, a single output measurement answers the initial question. We might think of this as an example of mathematical multitasking. A Mach-Zehnder interferometer creates a superposition of two computational paths providing the opportunity for the constructive and destructive interference that is the essential characteristic of quantum computation.
Primary reference:
Machines, Logic and Quantum Physics
David Deutsch, Artur Ekert, and Rossella Lupacchini
arXiv:math.HO/9911150 v1; 19 November 1999
Other sources:
The Quest for the Quantum Computer
Julian Brown
Simon & Schuster, 2000
Quantum Mechanical Computers
Richard P. Feynman
Foundations of Physics 16, 507-531 (1985)
Quantum Information and Computation
Charles H. Bennett
Physics Today, October 1995, pp 24-30
Quantum Computers
T. D. Ladd, et al.
Nature, 4 March 2010, pp 45-53
8.79: Solving Equations Using a Quantum Circuit
This tutorial demonstrates the solution of two simultaneous linear equations using a quantum circuit. The circuit is taken from arXiv:1302.1210. See this reference for details on the experimental implementation of the circuit and also for a discussion of the potential of quantum solutions for systems of equations. Two other sources (arXiv:1302.1946 and arXiv:1302.4310) provide alternative quantum circuits and methods of implementation.
First we consider the conventional method of solving systems of linear equation for a particular matrix A and three different |b> vectors.
$\begin{matrix} A|x \rangle = |b \rangle & |x \rangle = A^{-1} |b \rangle \end{matrix} \nonumber$
$\begin{matrix} A = \begin{pmatrix} 1.5 & 0.5 \ 0.5 & 1.5 \end{pmatrix} & b_1 = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & b_2 = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & b_3 = \begin{pmatrix} 1 \ 0 \end{pmatrix} \ A^{-1} b_1 = \begin{pmatrix} 0.354 \ 0.354 \end{pmatrix} & A^{-1} b_2 = \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} & A^{-1} b_3 = \begin{pmatrix} 0.75 \ -0.25 \end{pmatrix} \end{matrix} \nonumber$
Next we show the quantum circuit (arXiv:1302.1210) that generates the same solutions. The Appendix considers two other equivalent circuits from this reference.
$\begin{matrix} |b \rangle & \triangleright & \fbox{R} & \cdot & \fbox{Rᵀ} & \triangleright & |x \rangle \ ~ & ~ & ~ & | \ |1 \rangle & \triangleright & \cdots & \fbox{Ry(θ)} & \fbox{M₁} & \triangleright & |1 \rangle \end{matrix} \nonumber$
In this circuit, R is the matrix of eigenvectors of matrix A and RT its transpose. The last step on the bottom wire is the measurement of |1>, which is represented by the projection operator M1. The identity operator is required for cases in which a quantum gate operation is occurring on one wire and no operation is occurring on the other wire.
$\begin{matrix} \text{R = eigenvecs(A)} & \text{R} = \begin{pmatrix} 0.707 & -0.707 \ 0.707 & 0.707 \end{pmatrix} & \text{R}^T = \begin{pmatrix} 0.707 & 0.707 \ -0.707 & 0.707 \end{pmatrix} & \text{M}_1 = \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} \leftarrow \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} & \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
The controlled rotation, CR(θ), is the only two-qubit gate in the circuit. The rotation angle required is determined by the ratio of the eigenvalues of A as shown below.
$\begin{matrix} \text{CR}( \theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & \cos \left( \frac{ \theta}{2} \right) & - \sin \left( \frac{ \theta}{2} \right) \ 0 & 0 & \sin \left( \frac{ \theta}{2} \right) & \cos \left( \frac{ \theta}{2} \right) \end{pmatrix} & \text{eigenvals(A)} = \begin{pmatrix} 2 \ 1 \end{pmatrix} & \theta = -2 \text{acos} \left( \frac{1}{2} \right) \end{matrix} \nonumber$
The input (|b>|1>) and output (|x>|1>) states are expressed in tensor format. Kronecker is Mathcad's command for the tensor product of matrices.
$\begin{matrix} \text{Input } |b \rangle |1 \rangle & \text{Quantum Circuit} & \text{Output } |x \rangle |1 \rangle \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 1 \end{pmatrix} & \text{kronecker}(R^T,~ M_1) \text{CR} ( \theta) \text{kronecker(R, I)} \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0.354 \ 0 \ 0.354 \end{pmatrix} & \begin{pmatrix} 0.354 \ 0.354 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ -1 \end{pmatrix} & \text{kronecker}(R^T,~ M_1) \text{CR} ( \theta) \text{kronecker(R, I)} \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ -1 \end{pmatrix} = \begin{pmatrix} 0 \ 0.707 \ 0 \ -0.707 \end{pmatrix} & \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{kronecker}(R^T,~M_1) \text{CR} ( \theta) \text{kronecker(R, I)} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0.75 \ 0 \ -0.25 \end{pmatrix} & \begin{pmatrix} 0.75 \ -0.25 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
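The three Mathcad calculations above translate directly into any matrix library. A minimal sketch in Python with NumPy (the variable names are mine, not from the paper):

```python
import numpy as np

R  = np.array([[1, -1], [1, 1]]) / np.sqrt(2)   # columns = eigenvectors of A
M1 = np.array([[0, 0], [0, 1]])                 # projector onto |1>
I2 = np.eye(2)

theta = -2 * np.arccos(1 / 2)                   # set by the eigenvalue ratio
c, s = np.cos(theta / 2), np.sin(theta / 2)
CR = np.block([[I2, np.zeros((2, 2))],
               [np.zeros((2, 2)), np.array([[c, -s], [s, c]])]])

circuit = np.kron(R.T, M1) @ CR @ np.kron(R, I2)

one = np.array([0, 1])
for b in (np.array([1, 1]) / np.sqrt(2),
          np.array([1, -1]) / np.sqrt(2),
          np.array([1, 0])):
    out = circuit @ np.kron(b, one)
    print(np.round([out[1], out[3]], 3))   # |x> amplitudes: (0.354, 0.354), (0.707, -0.707), (0.75, -0.25)
```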
Appendix
The alternative two-wire circuit shown in Fig. 1C requires CNOT and Ry rotation matrices.
$\begin{matrix} |b \rangle & \triangleright & \fbox{R} & \cdot & \cdots & \cdot & \fbox{Rᵀ} & \cdots & \triangleright & |x \rangle \ ~ & ~ & ~ & | & ~ & | \ |1 \rangle & \triangleright & \cdots & \oplus & \fbox{Ry(−θ/2)} & \oplus & \fbox{Ry(θ/2)} & \fbox{M₁} & \triangleright & |1 \rangle \end{matrix} \nonumber$
The quantum circuit is set up as follows. The input and output states are the same as the previous circuit.
$\text{QuantumCircuit} = \text{kronecker(I, M}_1) \text{kronecker} \left( \text{R}^T,~\text{Ry} \left( \frac{ \theta}{2} \right) \right) \text{CNOT kronecker} \left( \text{I, Ry} \left( \frac{- \theta}{2} \right) \right) \text{CNOT kronecker(R, I)} \nonumber$
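This decomposition rests on the identity controlled-Ry(θ) = (I ⊗ Ry(θ/2)) CNOT (I ⊗ Ry(−θ/2)) CNOT, which can be checked numerically. A sketch, assuming the usual rotation convention Ry(α) = [[cos(α/2), −sin(α/2)], [sin(α/2), cos(α/2)]]:

```python
import numpy as np

def Ry(a):
    return np.array([[np.cos(a/2), -np.sin(a/2)],
                     [np.sin(a/2),  np.cos(a/2)]])

I2 = np.eye(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])
theta = -2 * np.arccos(1/2)

# (I x Ry(t/2)) CNOT (I x Ry(-t/2)) CNOT  ==  controlled-Ry(t) = CR(t)
sandwich = np.kron(I2, Ry(theta/2)) @ CNOT @ np.kron(I2, Ry(-theta/2)) @ CNOT
c, s = np.cos(theta/2), np.sin(theta/2)
CR = np.block([[I2, np.zeros((2, 2))],
               [np.zeros((2, 2)), np.array([[c, -s], [s, c]])]])
print(np.allclose(sandwich, CR))   # True
```

With the control qubit in |0> the two rotations cancel; with the control in |1> the CNOTs flip the sign of the first rotation, so the half-angle rotations add to Ry(θ).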
The three-wire circuit Fig. 1B produces the following transformation: $|b \rangle |0 \rangle |1 \rangle \rightarrow |x \rangle |0 \rangle |1 \rangle$.
$\begin{matrix} |b \rangle & \triangleright & \fbox{R} & \cdot & \cdots & \cdot & \fbox{Rᵀ} & \cdots & \triangleright & |x \rangle \ ~ & ~ & ~ & | & ~ & | \ |0 \rangle & \triangleright & \cdots & \oplus & \cdot & \oplus & \cdots & \triangleright & |0 \rangle \ ~ & ~ & ~ & ~ & | \ |1 \rangle & \triangleright & \cdots & \oplus & \fbox{Ry(−θ/2)} & \oplus & \fbox{Ry(θ/2)} & \fbox{M₁} & \triangleright & |1 \rangle \end{matrix} \nonumber$
$\begin{matrix} \text{Input } |b \rangle |0 \rangle |1 \rangle & \text{Quantum Circuit} & \text{Output } |x \rangle |0 \rangle |1 \rangle \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{QC} \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0.354 \ 0 \ 0 \ 0 \ 0.354 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} 0.354 \ 0.354 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ -1 \ 0 \ 0 \end{pmatrix} & \text{QC} \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ -1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0.707 \ 0 \ 0 \ 0 \ -0.707 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} & \text{QC} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0.75 \ 0 \ 0 \ 0 \ -0.25 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} 0.75 \ -0.25 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
8.80: Introduction to Superdense Coding
Quantum superdense coding reliably transmits two classical bits through an entangled pair of particles, even though only one member of the pair is handled by the sender. Charles Bennett, Physics Today, October 1995, p. 27
This tutorial is based on Brad Rubin's "Superdense Coding" at the Wolfram Demonstration Project: http://demonstrations.wolfram.com/SuperdenseCoding/. The quantum gate circuit shown below illustrates superdense coding, with Alice in control above the dotted line and Bob controlling the bottom part. Alice and Bob share the entangled pair of photons in the Bell basis shown at the left. Alice encodes two classical bits of information (four possible messages) on her photon, and Bob subsequently reads her message by performing a Bell state measurement on the modified entangled photon pair. In other words, although Alice encodes two bits on her photon, Bob's readout requires a measurement involving both photons.
As shown above Alice and Bob share the following maximally entangled two-qubit state. It is easily recognized as one of the four two-qubit Bell states.
$| \Phi_p \rangle = \frac{1}{ \sqrt{2}} \left[ |0 \rangle_A |0 \rangle_B + |1 \rangle_A |1 \rangle_B \right] = \frac{1}{ \sqrt{2}} \left[ \begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} \nonumber$
Before proceeding let's supplement the figure by showing how Alice and Bob might generate this state from an initial state in which they both possess |0> qubits.
$\begin{pmatrix} 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \nonumber$
This requires a Hadamard gate on the top wire followed by a two-qubit CNOT gate.
$\begin{matrix} I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & H = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} & \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} & \text{CNOT kronecker(H, I)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} \end{matrix} \nonumber$
Now the circuit in the figure is implemented using the Pauli gates (X and Z) on the top wire and controlled by Alice. Note on the right below that X^0 and Z^0 are the identity matrix (do nothing).
$\begin{matrix} \text{X} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{Z} = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & \text{X}^0 = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{Z}^0 = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{Input} & \text{Output} \ \text{B2= 0 B1 = 1 kronecker(H, I) CNOT kronecker} \left( Z^{B2},~ \text{I} \right) \text{kronecker} (X^{B1},~I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & \text{B2 B1} = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
Next we do the same for the other classical two-bit states.
$\begin{matrix} \text{B2 = 0 B1 = 0 kronecker(H, I) CNOT kronecker} \left( Z^{B2},~ \text{I} \right) \text{kronecker} (X^{B1},~I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & \text{B2 B1} = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \ \text{B2 = 1 B1 = 0 kronecker(H, I) CNOT kronecker} \left( Z^{B2},~ \text{I} \right) \text{kronecker} (X^{B1},~I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & \text{B2 B1} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \ \text{B2 = 1 B1 = 1 kronecker(H, I) CNOT kronecker} \left( Z^{B2},~ \text{I} \right) \text{kronecker} (X^{B1},~I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} & \text{B2 B1} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Of course we could have done all this starting with the |0>A|0>B state as is shown below using B2 and B1 from the last example. Note the mirror symmetry involving the beginning and end of this quantum gate circuit. It begins by creating a Bell state and ends by making a Bell state measurement. In between, depending on the values of B1 and B2, the initial Bell state is transformed by X and Z in steps into a final Bell state which is measured. This will be demonstrated below.
$\text{kronecker(H, I) CNOT kronecker} \left( Z^{B2},~I \right) \text{kronecker} \left( X^{B1},~I \right) \text{CNOT kronecker(H, I)} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \nonumber$
Bell states:
$\begin{matrix} \Phi_p = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} & \Phi_m = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} & \Psi_p = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ 1 \ 0 \end{pmatrix} & \Psi_m = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \ -1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} ~ & \text{Step1} & ~ & ~ & \text{Step 2} & ~ & ~ & \text{Measure} \ \text{B2 = 0 B1 = 0} & \Phi_p & \Phi_p & ~ & \Phi_p & \Phi_p & ~ & \Phi_p \ \text{kronecker} \left( X^{B1},~ \text{I} \right) \frac{1}{ \sqrt{2}} & \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = & \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \text{kronecker} \left( Z^{B2},~I \right) & \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} = & \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \text{kronecker(H, I) CNOT} & \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} = & \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \cdot & \fbox{X}^0 & \cdot & \fbox{Z}^0 & \cdot & \cdot & \fbox{H} & \cdot \ \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & \begin{matrix} | \ | \ | \ | \end{matrix} & ~ & \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = |0 \rangle \ \cdot & \cdots & \cdot & \cdots & \cdot & \oplus & \cdots & \cdot \end{matrix} \nonumber$
$\frac{|00 \rangle + |11 \rangle}{ \sqrt{2}} \xrightarrow{X^0 \otimes I} \frac{|00 \rangle +|11 \rangle}{ \sqrt{2}} \xrightarrow{Z^0 \otimes I} \frac{|00 \rangle + |11 \rangle}{ \sqrt{2}} \xrightarrow{ \text{CNOT}} \frac{|00 \rangle +|10 \rangle}{ \sqrt{2}} \xrightarrow{H \otimes I} \frac{(|0 \rangle +|1 \rangle) |0 \rangle + (|0 \rangle - |1 \rangle ) |0 \rangle}{2} = |00 \rangle = |0 \rangle \nonumber$
$\begin{matrix} \text{B2 = 0 B1 = 1} & \Phi_p & \Psi_p & ~ & \Psi_p & \Psi_p & ~ & \Psi_p \ \text{kronecker} \left( X^{B1},~ \text{I} \right) \frac{1}{ \sqrt{2}} & \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = & \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} & \text{kronecker} \left( Z^{B2},~I \right) & \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} = & \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} & \text{kronecker(H, I) CNOT} & \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} = & \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \cdot & \fbox{X}^1 & \cdot & \fbox{Z}^0 & \cdot & \cdot & \fbox{H} & \cdot \ \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & ~ & \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & \begin{matrix} | \ | \ | \ | \end{matrix} & ~ & \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = |1 \rangle \ \cdot & \cdots & \cdot & \cdots & \cdot & \oplus & \cdots & \cdot \end{matrix} \nonumber$
$\frac{|00 \rangle + |11 \rangle}{ \sqrt{2}} \xrightarrow{X^1 \otimes I} \frac{|10 \rangle +|01 \rangle}{ \sqrt{2}} \xrightarrow{Z^0 \otimes I} \frac{|10 \rangle + |01 \rangle}{ \sqrt{2}} \xrightarrow{ \text{CNOT}} \frac{|11 \rangle +|01 \rangle}{ \sqrt{2}} \xrightarrow{H \otimes I} \frac{(|0 \rangle -|1 \rangle) |1 \rangle + (|0 \rangle + |1 \rangle ) |1 \rangle}{2} = |01 \rangle = |1 \rangle \nonumber$
$\begin{matrix} \text{B2 = 1 B1 = 0} & \Phi_p & \Phi_p & ~ & \Phi_p & \Phi_m & ~ & \Phi_m \ \text{kronecker} \left( X^{B1},~ \text{I} \right) \frac{1}{ \sqrt{2}} & \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = & \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \text{kronecker} \left( Z^{B2},~I \right) & \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} = & \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} & \text{kronecker(H, I) CNOT} & \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} = & \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \cdot & \fbox{X}^0 & \cdot & \fbox{Z}^1 & \cdot & \cdot & \fbox{H} & \cdot \ \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ - \frac{1}{ \sqrt{2}} \end{pmatrix} & \begin{matrix} | \ | \ | \ | \end{matrix} & ~ & \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = |2 \rangle \ \cdot & \cdots & \cdot & \cdots & \cdot & \oplus & \cdots & \cdot \end{matrix} \nonumber$
$\frac{|00 \rangle + |11 \rangle}{ \sqrt{2}} \xrightarrow{X^0 \otimes I} \frac{|00 \rangle +|11 \rangle}{ \sqrt{2}} \xrightarrow{Z^1 \otimes I} \frac{|00 \rangle - |11 \rangle}{ \sqrt{2}} \xrightarrow{ \text{CNOT}} \frac{|00 \rangle - |10 \rangle}{ \sqrt{2}} \xrightarrow{H \otimes I} \frac{(|0 \rangle +|1 \rangle) |0 \rangle - (|0 \rangle - |1 \rangle ) |0 \rangle}{2} = |10 \rangle = |2 \rangle \nonumber$
$\begin{matrix} \text{B2 = 1 B1 = 1} & \Phi_p & \Psi_p & ~ & \Psi_p & \Psi_m & ~ & \Psi_m \ \text{kronecker} \left( X^{B1},~ \text{I} \right) \frac{1}{ \sqrt{2}} & \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = & \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} & \text{kronecker} \left( Z^{B2},~I \right) & \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} = & \begin{pmatrix} 0 \ 0.707 \ -0.707 \ 0 \end{pmatrix} & \text{kronecker(H, I) CNOT} & \begin{pmatrix} 0 \ 0.707 \ -0.707 \ 0 \end{pmatrix} = & \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \cdot & \fbox{X}^1 & \cdot & \fbox{Z}^1 & \cdot & \cdot & \fbox{H} & \cdot \ \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & ~ & \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ - \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & \begin{matrix} | \ | \ | \ | \end{matrix} & ~ & \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = |3 \rangle \ \cdot & \cdots & \cdot & \cdots & \cdot & \oplus & \cdots & \cdot \end{matrix} \nonumber$
$\frac{|00 \rangle + |11 \rangle}{ \sqrt{2}} \xrightarrow{X^1 \otimes I} \frac{|10 \rangle +|01 \rangle}{ \sqrt{2}} \xrightarrow{Z^1 \otimes I} \frac{ -|10 \rangle + |01 \rangle}{ \sqrt{2}} \xrightarrow{ \text{CNOT}} \frac{ -|11 \rangle + |01 \rangle}{ \sqrt{2}} \xrightarrow{H \otimes I} \frac{-(|0 \rangle -|1 \rangle) |1 \rangle + (|0 \rangle + |1 \rangle ) |1 \rangle}{2} = |11 \rangle = |3 \rangle \nonumber$
It is now shown that steps 1 and 2 in the original calculation can be condensed into one step.
$\begin{matrix} \text{Binary Input} & ~ & \text{Binary Bell State Index} & \text{Decimal Bell State Index} \ \text{B2 = 0 B1 = 0} & \text{kronecker(H, I) CNOT kronecker} \left( Z^{B2}X^{B1},~ \text{I} \right) \Phi_p = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & |00 \rangle & |0 \rangle \ \text{B2 = 0 B1 = 1} & \text{kronecker(H, I) CNOT kronecker} \left( Z^{B2}X^{B1},~ \text{I} \right) \Phi_p = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & |01 \rangle & |1 \rangle \ \text{B2 = 1 B1 = 0} & \text{kronecker(H, I) CNOT kronecker} \left( Z^{B2}X^{B1},~ \text{I} \right) \Phi_p = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & |10 \rangle & |2 \rangle \ \text{B2 = 1 B1 = 1} & \text{kronecker(H, I) CNOT kronecker} \left( Z^{B2}X^{B1},~ \text{I} \right) \Phi_p = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} & |11 \rangle & |3 \rangle \end{matrix} \nonumber$
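The condensed calculation is easily scripted. A sketch (NumPy) that loops over Alice's four possible messages and prints Bob's Bell-measurement result:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])

phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)     # shared Bell state

for b2 in (0, 1):
    for b1 in (0, 1):
        # Alice encodes her two bits on the first qubit only
        encoded = np.kron(np.linalg.matrix_power(Z, b2) @
                          np.linalg.matrix_power(X, b1), I2) @ phi_p
        # Bob's Bell state measurement
        decoded = np.kron(H, I2) @ CNOT @ encoded
        print(b2, b1, np.round(decoded, 3))      # basis state |b2 b1>
```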
8.81: A Brief Introduction to Quantum Dense Coding
Quantum superdense coding reliably transmits two classical bits through an entangled pair of particles, even though only one member of the pair is handled by the sender. Charles Bennett, Physics Today, October 1995, p. 27
This tutorial is based on Brad Rubin's "Superdense Coding" at the Wolfram Demonstration Project: http://demonstrations.wolfram.com/SuperdenseCoding/. The quantum circuit shown below implements quantum dense coding. Alice and Bob share the entangled pair of photons in the Bell basis shown at the left. Alice encodes two classical bits of information (four possible messages) on her photon, and Bob subsequently reads her message by performing a Bell state measurement on the modified entangled photon pair. In other words, although Alice encodes two bits on her photon, Bob's readout requires a measurement involving both photons. In this example Alice sends |11> to Bob.
$\begin{matrix} \cdot & \fbox{X}^1 & \cdot & \fbox{Z}^1 & \cdot & \cdot & \fbox{H} & \cdot \ \begin{pmatrix} \frac{1}{ \sqrt{2}} \ 0 \ 0 \ \frac{1}{ \sqrt{2}} \end{pmatrix} & ~ & \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & ~ & \begin{pmatrix} 0 \ \frac{1}{ \sqrt{2}} \ - \frac{1}{ \sqrt{2}} \ 0 \end{pmatrix} & \begin{matrix} | \ | \ | \ | \end{matrix} & ~ & \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \cdot & \fbox{I} & \cdot & \fbox{I} & \cdot & \oplus & \fbox{I} \end{matrix} \nonumber$
The operation of the circuit is outlined in both matrix and algebraic format. The necessary truth tables and matrix operators are provided in the Appendix.
Matrix Method
$\text{H} \otimes \text{I CNOT Z} \otimes \text{I X} \otimes \text{I} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = |11 \rangle \nonumber$
Algebraic Method
$\frac{|00 \rangle + |11 \rangle}{ \sqrt{2}} \xrightarrow{X^1 \otimes I} \frac{|10 \rangle + |01 \rangle}{ \sqrt{2}} \xrightarrow{Z^1 \otimes I} \frac{-|10 \rangle + |01 \rangle}{ \sqrt{2}} \xrightarrow{CNOT} \frac{-|11 \rangle + |01 \rangle}{ \sqrt{2}} \xrightarrow{H \otimes I} \frac{-(|0 \rangle - |1 \rangle ) |1 \rangle + ( |0 \rangle + |1 \rangle ) |1 \rangle}{2} = |11 \rangle = \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \nonumber$
Appendix
Truth tables for the quantum circuit:
$\begin{matrix} \text{X = NOT} \begin{pmatrix} \text{0 to 1} \ \text{1 to 0} \end{pmatrix} & \text{Z} \begin{pmatrix} \text{0 to 0} \ \text{1 to -1} \end{pmatrix} & \text{H = Hadamard} \begin{bmatrix} \text{0 to } \frac{1}{ \sqrt{2}} (0 +1) \ \text{1 to } \frac{1}{ \sqrt{2}} (0-1) \end{bmatrix} & \text{CNOT} \begin{pmatrix} \text{00 to 00} \ \text{01 to 01} \ \text{10 to 11} \ \text{11 to 10} \end{pmatrix} \end{matrix} \nonumber$
Circuit elements in matrix formats:
$\begin{matrix} I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & X = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & Z = \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} & H = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} & \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
8.82: The Discrete or Quantum Fourier Transform
The continuous-variable Fourier transforms involving position and momentum are well known. In Dirac notation (see chapter 6 in A Modern Approach to Quantum Mechanics by John S. Townsend) they are,
$\begin{matrix} \langle p | \Psi \rangle = \int \langle p |x \rangle \langle x | \Psi \rangle dx & \text{and} & \langle x | \Psi \rangle = \int \langle x | p \rangle \langle p | \Psi \rangle dp \end{matrix} \nonumber$
where
$\langle x | p \rangle = \langle p | x \rangle * = \frac{1}{ \sqrt{2 \pi \hbar}} \text{exp} \left( i \frac{2 \pi px}{h} \right) = \frac{1}{ \sqrt{2 \pi \hbar}} \text{exp} \left( i \frac{px}{ \hbar} \right) \nonumber$
Using the coordinate and momentum completeness relations
$\begin{matrix} \int |x \rangle \langle x | dx = 1 & \text{and} & \int |p \rangle \langle p |dp = 1 \end{matrix} \nonumber$
we can write the following generic Fourier transforms.
$\begin{matrix} \langle p | = \int \langle p | x \rangle \langle x | dx & \text{and} & \langle x | = \int \langle x | p \rangle \langle p | dp \end{matrix} \nonumber$
By analogy a discrete Fourier transform between the k and j indices can be created.
$\langle k | = \sum_{j=0}^{N-1} \langle k | j \rangle \langle j | \nonumber$
where, again, by analogy
$\langle k | j \rangle = \frac{1}{ \sqrt{N}} \text{exp} \left( i \frac{2 \pi}{N}kj \right) \nonumber$
so that
$\langle k | = \frac{1}{ \sqrt{N}} \sum_{j=0}^{N-1} \text{exp} \left( i \frac{2 \pi}{N} k j \right) \langle j | \nonumber$
Summing over the k index and projecting on to |Ψ> yields a system of linear equations.
$\sum_{k=0}^{N-1} \langle k | \Psi \rangle = \frac{1}{ \sqrt{N}} \sum_{k=0}^{N-1} \sum_{j=0}^{N-1} \text{exp} \left( i \frac{2 \pi}{N} kj \right) \langle j | \Psi \rangle \nonumber$
Like all such linear systems it is expressible in matrix form. For example, with N=2 and $\begin{pmatrix} 1 \ 0 \end{pmatrix}$ as the operand we have,
$\frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \nonumber$
Here the matrix operator is the well-known Hadamard transform. In this case it transforms spin-up in the z-direction to spin-up in the x-direction, or horizontal polarization to diagonal polarization, etc. Naturally it transforms spin-up in the x-direction to spin-up in the z-direction.
$\begin{pmatrix} 1 \ 0 \end{pmatrix} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} \nonumber$
This, of course, also occurs with the continuous-variable Fourier transform.
$\langle x | \Psi \rangle \xrightarrow{FT} \langle p | \Psi \rangle \xrightarrow{FT} \langle x | \Psi \rangle \nonumber$
The Mathcad implementation of the discrete or quantum Fourier transform (QFT) is now demonstrated.
$\begin{matrix} N = 2 & m = 0 .. N-1 & n = 0 .. N - 1 & QFT_{m,~n} = \frac{1}{ \sqrt{N}} \text{exp} \left( i \frac{2 \pi m n}{N} \right) \end{matrix} \nonumber$
$QFT = \begin{pmatrix} 0.707 & 0.707 \ 0.707 & -0.707 \end{pmatrix} \nonumber$
$\begin{matrix} QFT \begin{pmatrix} 1\ 0 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} & QFT \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} = \begin{pmatrix} 1 \ 0 \end{pmatrix} \ QFT \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0.707 \ -0.707 \end{pmatrix} & QFT \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
These calculations demonstrate that the QFT is a unitary operator; for N = 2 it is real and symmetric, so it is also its own inverse:
$\text{QFT QFT} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} \nonumber$
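The same construction works for any dimension N, where unitarity takes the general form QFT†·QFT = 1. A sketch (NumPy):

```python
import numpy as np

def qft(N):
    # QFT[m, n] = exp(i 2 pi m n / N) / sqrt(N)
    m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * m * n / N) / np.sqrt(N)

for N in (2, 4, 8):
    F = qft(N)
    print(N, np.allclose(F.conj().T @ F, np.eye(N)))   # True: QFT is unitary
```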
8.83: Factoring Using Shor's Quantum Algorithm
This tutorial presents a toy calculation dealing with quantum factorization using Shor's algorithm. Before beginning that task, traditional classical factorization is reviewed with the example of finding the prime factors of 15. As shown below the key is to find the period of a^x modulo 15, where a is chosen randomly.
$\begin{matrix} a = 4 & N = 15 & & f(x) = mod \left( a^x,~N \right) & Q = 8 & x = 0 .. Q - 1\end{matrix} \nonumber$
$\begin{matrix} x = \begin{array}{|c|} \hline \ 0 \ \hline \ 1 \ \hline \ 2 \ \hline \ 3 \ \hline \ 4 \ \hline \ 5 \ \hline \ 6 \ \hline \ 7 \ \hline \end{array} & f(x) = \begin{array}{|c|} \hline \ 1 \ \hline \ 4 \ \hline \ 1 \ \hline \ 4 \ \hline \ 1 \ \hline \ 4 \ \hline \ 1 \ \hline \ 4 \ \hline \end{array} \end{matrix} \nonumber$
Seeing that the period of f(x) is two, the next step is to use the Euclidean algorithm by calculating the greatest common divisor of two functions involving the period and a, and the number to be factored, N.
$\begin{matrix} \text{period = 2} & \text{gcd} \left( a^{ \frac{ \text{period}}{2}} - 1,~N \right) = 3 & \text{gcd} \left( a^{ \frac{ \text{period}}{2}} + 1,~N \right) = 5 \end{matrix} \nonumber$
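For completeness, a sketch of these classical steps in Python (gcd is in the standard library):

```python
from math import gcd

a, N = 4, 15
values = [a**x % N for x in range(8)]                     # 1, 4, 1, 4, ...
period = next(r for r in range(1, 8) if a**r % N == 1)    # first r with a^r = 1 mod N

print(values, period)                                     # period = 2
print(gcd(a**(period // 2) - 1, N),                       # 3
      gcd(a**(period // 2) + 1, N))                       # 5
```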
Factoring 15 by this method is trivial because it is the product of two small prime numbers. However, if N is the product of two large primes this method is impractical because finding the periodicity of f(x) would not be possible by inspection as it is above. If f(x) were plotted it would appear to be random noise with no recognizable periodic structure. Shor recognized that the discrete Fourier transform (or, quantum Fourier transform) provided an efficient method for finding the period of f(x) when N is the product of two extremely large prime numbers. So the contribution that quantum mechanics may eventually make to code breaking is efficiently finding the periodicity of f(x).
We proceed by ignoring the fact that we already know that the period of f(x) is 2 and demonstrate how it is determined using a quantum (discrete) Fourier transform. After the registers are loaded with x and f(x) using a quantum computer, they exist in the following superposition.
$\frac{1}{ \sqrt{Q}} \sum_{x=0}^{Q-1} |x \rangle | f(x) \rangle = \frac{1}{2} \left[ |0 \rangle |1 \rangle + |1 \rangle |4 \rangle + |2 \rangle |1 \rangle + |3 \rangle |4 \rangle + \cdots \right] = \frac{1}{2} \left[ \left( |0 \rangle + |2 \rangle \right) |1 \rangle + \left( |1 \rangle + |3 \rangle \right) |4 \rangle + \cdots \right] \nonumber$
The rearrangement on the right collects terms on the f(x) values. Now the x values appear in pairs with their f(x) partners. Note that the period of 2 is discernible in each pair of x values. After (|0> + |2>) the next pair is offset by 1. The Fourier transform on the x-register removes the offset, clearly revealing the period of 2.
In preparation for the Fourier transform the superposition is written as a sum of vector tensor products.
$\begin{matrix} \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \right] = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \ 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \right] \ \frac{1}{2} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}^T \end{matrix} \nonumber$
The next step is to find the period of f(x) by performing a quantum Fourier transform (QFT) on the input register, |x>. The identity operation (do nothing) acts on the second register.
$\begin{matrix} Q = 4 & m = 0 .. Q - 1 & n = 0 .. Q - 1 & QFT_{m,~n} = \frac{1}{ \sqrt{Q}} \text{exp} \left( i \frac{2 \pi m n}{Q} \right) & I = \text{identity(5)} \end{matrix} \nonumber$
The result of the QFT is:
$\begin{matrix} \text{kronecker(QFT, I)} \frac{1}{2} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0.5 \ 0 \ 0 \ 0.5 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0.5 \ 0 \ 0 \ -0.5 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} & \text{expressed in tensor format} & \frac{1}{2} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ -1 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ -1 \end{pmatrix} \right] \end{matrix} \nonumber$
The calculation at this point is summarized as follows:
$QFT \frac{1}{ \sqrt{Q}} \sum_{x=0}^{Q-1} |x \rangle |f(x) \rangle = \frac{1}{2} \left[ |0 \rangle \left( |1 \rangle + |4 \rangle \right) + |2 \rangle \left( |1 \rangle - |4 \rangle \right) \right] \nonumber$
Recalling that the x-register originally was a superposition involving 0, 1, 2, and 3, we see that the QFT brought about constructive and destructive interference because now the x-register contains only 0 and 2. This gives us a period of 2 and we can now proceed to the classical calculation that was demonstrated earlier. The details of how the QFT on the x-register gives rise to interference are given in the summary below.
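The register arithmetic above can be reproduced in a few lines. A sketch (NumPy) that builds the 20-component state, applies kronecker(QFT, I) and displays the surviving x-values:

```python
import numpy as np

Q = 4
m, n = np.meshgrid(np.arange(Q), np.arange(Q), indexing="ij")
QFT = np.exp(2j * np.pi * m * n / Q) / np.sqrt(Q)

def basis(k, d):                      # |k> in a d-dimensional register
    v = np.zeros(d)
    v[k] = 1
    return v

# (1/2) [ (|0> + |2>) x |1>  +  (|1> + |3>) x |4> ]
state = 0.5 * (np.kron(basis(0, 4) + basis(2, 4), basis(1, 5)) +
               np.kron(basis(1, 4) + basis(3, 4), basis(4, 5)))

out = np.kron(QFT, np.eye(5)) @ state
print(np.round(out, 3))   # amplitude 1/2 at |0>|1>, |0>|4>, |2>|1>; -1/2 at |2>|4>
```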
Recommended reading: Two insightful analyses of quantum mechanics' role in factoring by David Mermin appear in the April and October 2007 issues of Physics Today. The first is titled "What has quantum mechanics to do with factoring?" and the second "Some curious facts about quantum factoring." Chapter 5 in Julian Brown's The Quest for the Quantum Computer deals in depth with code breaking and Shor's algorithm.
Summary
Figure 5 in "Quantum Computation," by David P. DiVincenzo, Science 270, 258 (1995) provides a graphical illustration of the steps of Shor's factorization algorithm.
This tutorial deals with steps 2 and 3 of the algorithm, summarized mathematically below. The negative sign in the far right column vector is an accumulated phase due to the quantum Fourier transform.
$\frac{1}{ \sqrt{Q}} \sum_{x=0}^{Q-1} |x \rangle |f(x) \rangle = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 1 \ 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \right] \xrightarrow{QFT} \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ -1 \end{pmatrix} \right] \nonumber$
How a Fourier transform on the x-register can yield the periodicity of f(x) which is on the y-register is revealed by carrying out the Fourier transform on the individual members of the x-register in the middle superposition term above.
$\frac{1}{2} \left[ \text{QFT} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \text{QFT} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} + \text{QFT} \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \text{QFT} \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \right] \tag{A}$
$\begin{matrix} \text{QFT} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ 0.5 \ 0.5 \ 0.5 \end{pmatrix} & \text{QFT} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ 0.5i \ -0.5 \ -0.5i \end{pmatrix} & \text{QFT} \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ -0.5 \ 0.5 \ -0.5 \end{pmatrix} & \text{QFT} \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0.5 \ -0.5i \ -0.5 \ 0.5i \end{pmatrix} \end{matrix} \tag{B}$
Using the results from (B) the QFT on the x-register in (A) yields the following superposition.
$\frac{1}{2} \left[ \frac{1}{2} \begin{pmatrix} 1 \ 1 \ 1 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \frac{1}{2} \begin{pmatrix} 1 \ i \ -1 \ -i \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} + \frac{1}{2} \begin{pmatrix} 1 \ -1 \ 1 \ -1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \frac{1}{2} \begin{pmatrix} 1 \ -i \ -1 \ i \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \end{pmatrix} \right] \tag{C}$
Constructive and destructive interference between the terms of this superposition leads to the final state.
$\frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ -1 \end{pmatrix} \right] \nonumber$
The details of the interference between the terms in (C) can be seen by expanding them using vector tensor multiplication.
$\frac{1}{4} \left[ \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ i \ 0 \ 0 \ 0 \ 0 \ -1 \ 0 \ 0 \ 0 \ 0 \ -i \end{pmatrix} + \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ -1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ -1 \ 0 \ 0 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ -i \ 0 \ 0 \ 0 \ 0 \ -1 \ 0 \ 0 \ 0 \ 0 \ i \end{pmatrix} \right] = \frac{1}{2} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 0 \ -1 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} = \frac{1}{2} \left[ \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ -1 \end{pmatrix} \right] \nonumber$
An algebraic view of (C) also reveals the x-register interference. Destructive interference eliminates the |1> terms and the |3> terms of the x-register, leaving only |0> and |2>.
$\begin{matrix} \frac{1}{4} \left[ |0 \rangle + |1 \rangle + |2 \rangle + |3 \rangle \right] |1 \rangle \ + \ \frac{1}{4} \left[ |0 \rangle + i |1 \rangle -|2 \rangle - i|3 \rangle \right] |4 \rangle \ + & = \frac{1}{2} \left[ |0 \rangle \left( |1 \rangle + |4 \rangle \right) + |2 \rangle \left( |1 \rangle - |4 \rangle \right) \right] \ \frac{1}{4} \left[ |0 \rangle -|1 \rangle + |2 \rangle - |3 \rangle \right] |1 \rangle \ + \ \frac{1}{4} \left[ |0 \rangle - i |1 \rangle - |2 \rangle + i|3 \rangle \right] |4 \rangle \end{matrix} \nonumber$
8.84: Shor's Quantum Algorithm - A Summary
This tutorial presents a toy calculation dealing with quantum factorization using Shor's algorithm. Before beginning that task, traditional classical factorization is reviewed with the example of finding the prime factors of 15. As shown below the key is to find the period of a^x modulo 15, where a is chosen randomly.
$\begin{matrix} a = 4 & N = 15 & & f(x) = mod \left( a^x,~N \right) & Q = 8 & x = 0 .. Q - 1\end{matrix} \nonumber$
$\begin{matrix} x = \begin{array}{|c|} \hline \ 0 \ \hline \ 1 \ \hline \ 2 \ \hline \ 3 \ \hline \ 4 \ \hline \ 5 \ \hline \ 6 \ \hline \ 7 \ \hline \end{array} & f(x) = \begin{array}{|c|} \hline \ 1 \ \hline \ 4 \ \hline \ 1 \ \hline \ 4 \ \hline \ 1 \ \hline \ 4 \ \hline \ 1 \ \hline \ 4 \ \hline \end{array} \end{matrix} \nonumber$
Seeing that the period of f(x) is two, the next step is to use the Euclidean algorithm by calculating the greatest common divisor of two functions involving the period and a, and the number to be factored, N.
$\begin{matrix} \text{period = 2} & \text{gcd} \left( a^{ \frac{ \text{period}}{2}} - 1,~N \right) = 3 & \text{gcd} \left( a^{ \frac{ \text{period}}{2}} + 1,~N \right) = 5 \end{matrix} \nonumber$
We proceed by ignoring the fact that we already know that the period of f(x) is 2 and demonstrate how it is determined using a quantum (discrete) Fourier transform. After the registers are loaded with x and f(x) using a quantum computer, they exist in the following superposition.
$\frac{1}{ \sqrt{Q}} \sum_{x=0}^{Q-1} |x \rangle |f(x) \rangle = \frac{1}{2} \left[ |0 \rangle |1 \rangle + |1 \rangle |4 \rangle + |2 \rangle |1 \rangle + |3 \rangle |4 \rangle + \cdots \right] \nonumber$
The next step is to find the period of f(x) by performing a quantum Fourier transform (QFT) on the input register |x>.
$\begin{matrix} Q = 4 & m = 0 .. Q-1& n = 0 .. Q-1 & QFT_{m,~n} = \frac{1}{ \sqrt{Q}} \text{exp} \left( i \frac{2 \pi m n}{Q} \right) & QFT = \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 & 1 \ 1 & i & -1 & -i \ 1 & -1 & 1 & -1 \ 1 & -i & -1 & i \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} x=0 & QFT \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ 0.5 \ 0.5 \ 0.5 \end{pmatrix} & x=1 & QFT \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ 0.5i \ -0.5 \ -0.5i \end{pmatrix} & x = 2 & QFT \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ -0.5 \ 0.5 \ -0.5 \end{pmatrix} & x = 3 & QFT \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0.5 \ -0.5i \ -0.5 \ 0.5i \end{pmatrix} \end{matrix} \nonumber$
The operation of the QFT on the x-register is expressed algebraically in the middle term below. Quantum interference in this term yields the result on the right which shows a period of 2 on the x-register.
$\begin{matrix} ~ & \frac{1}{4} \left[ |0 \rangle + |1 \rangle + |2 \rangle + |3 \rangle \right] |1 \rangle \ ~ & + \ ~ & \frac{1}{4} \left[ |0 \rangle + i |1 \rangle -|2 \rangle - i|3 \rangle \right] |4 \rangle \ ~ & + \ QFT (x) \frac{1}{2} \left[ |0 \rangle |1 \rangle + |1 \rangle |4 \rangle + |2 \rangle |1 \rangle + |3 \rangle |4 \rangle \right] = & + & = \frac{1}{2} \left[ |0 \rangle \left( |1 \rangle + |4 \rangle \right) + |2 \rangle \left( |1 \rangle - |4 \rangle \right) \right] \ ~ & \frac{1}{4} \left[ |0 \rangle - |1 \rangle + |2 \rangle - |3 \rangle \right] |1 \rangle \ ~ & + \ ~ & \frac{1}{4} \left[ |0 \rangle - i |1 \rangle - |2 \rangle + i|3 \rangle \right] |4 \rangle \end{matrix} \nonumber$
Figure 5 in "Quantum Computation," by David P. DiVincenzo, Science 270, 258 (1995) provides a graphical illustration of the steps of Shor's factorization algorithm.
8.85: Simulating the Deutsch-Jozsa Algorithm with a Double-Slit Apparatus
The Deutsch-Jozsa algorithm determines whether [1] 2^N numbers are either all 0 or all 1 (a constant function), or [2] half are 0 and half are 1 (a balanced function) in one step instead of up to 2^(N-1) + 1 steps. For N = 1, the Deutsch-Jozsa algorithm can be visualized as putting two pieces of glass, which may be thin (0) or thick (1), behind the apertures of a double-slit apparatus and measuring the interference pattern of a light source illuminating the slits. If the pattern is unchanged compared to the empty apparatus, the glass pieces have the same thickness (constant function); otherwise they have different thickness (balanced function). Peter Pfeifer, "Quantum Computation," Access Science, Vol. 17, p. 678, slightly modified by F. R.
This is demonstrated by calculating the diffraction pattern without glass present, and with glass present of the same thickness and different thickness. The diffraction pattern is the momentum distribution, which is the Fourier transform of the slit geometry.
$\begin{matrix} \text{Slit positions:} & x_L = 1 & x_R = 2 & \text{Slit width:} & \delta = 2 \end{matrix} \nonumber$
The momentum wave function with possible phase shifts θ and φ at the two slits is represented by the following superposition. The phase shifts are directly proportional to the thickness of the glass.
$\Psi (p, \theta, \varphi) = \frac{ \int_{x_L - \frac{ \delta}{2}}^{ x_L + \frac{ \delta}{2}} \frac{1}{ \sqrt{2 \pi}} \text{exp(-i p x)} \frac{1}{ \sqrt{ \delta}} \text{dx exp}(i \theta) + \int_{x_R -\frac{ \delta}{2}}^{x_R + \frac{ \delta}{2}} \frac{1}{ \sqrt{2 \pi}} \text{exp(-i p x)} \frac{1}{ \sqrt{ \delta}} \text{dx exp}(i \varphi)}{ \sqrt{2}} \nonumber$
The diffraction patterns (momentum distributions) for the empty apparatus (θ = φ = 0), for an apparatus with glass pieces of the same thickness (θ = φ = π/8), and for one that has glass of different thickness (θ = π/8, φ = π/2) behind the slits are generated below.
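Since each slit integral has a closed form (a sinc envelope times a position-dependent phase), the three patterns can be produced with a short script. A sketch (NumPy; note that np.sinc includes the factor of π):

```python
import numpy as np

xL, xR, delta = 1.0, 2.0, 2.0
p = np.linspace(-6, 6, 601)

def pattern(theta, phi):
    # integral of exp(-i p x) over a slit of width delta centered at x0:
    # delta * exp(-i p x0) * sinc(p delta / 2), with sinc(z) = sin(z)/z
    envelope = np.sqrt(delta / (2 * np.pi)) * np.sinc(p * delta / (2 * np.pi))
    amp = (np.exp(1j * theta) * np.exp(-1j * p * xL) +
           np.exp(1j * phi)  * np.exp(-1j * p * xR)) / np.sqrt(2)
    return np.abs(envelope * amp)**2

empty  = pattern(0, 0)                       # no glass
same   = pattern(np.pi/8, np.pi/8)           # equal thickness
differ = pattern(np.pi/8, np.pi/2)           # unequal thickness
print(np.allclose(empty, same))              # True: pattern unchanged (constant)
print(np.allclose(empty, differ))            # False: pattern shifted (balanced)
```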
8.86: Simulating a Quantum Computer with a Mach-Zehnder Interferometer
Suppose you are asked if two pieces of glass are the same thickness. The conventional thing to do is to measure the thickness of each piece of glass and then compare the results. As David Deutsch pointed out, this is overkill. You were asked only if they were the same thickness, but you made two measurements to answer that question, when in fact it can be done with one.
Quantum mechanics provides two ways to answer the question; using a double-slit apparatus or a Mach-Zehnder interferometer. They both operate on the same quantum principles. Using the double-slit apparatus you put a piece of glass behind each of the slits and shine light on the slits. If the resulting diffraction pattern is symmetrical about the center of the slits, the glasses are the same thickness. See the previous tutorial: Simulating the Deutsch-Jozsa Algorithm with a Double-Slit Apparatus.
Alternatively you could put a piece of glass in each arm of an equal-arm Mach-Zehnder interferometer (MZI). How this approach works is the subject of this tutorial. First we need to get acquainted with a MZI.
As shown in the following figure a MZI consists of a photon source, two 50-50 beam splitters, two mirrors and two detectors. The Appendix contains the mathematical information necessary to carry out a matrix mechanics analysis of the operation of the interferometer. The motional states of the photon are represented by vectors, while the beam splitters and mirrors are represented by matrices and operate on the vectors.
Yogi Berra has famously said "When you come to a fork in the road, take it." This is exactly what the photon does at a beam splitter. After the first beam splitter the photon, which was moving in the x-direction being emitted by the source, is now in a superposition of moving in both the x- and y-directions. It has been transmitted and reflected at the beam splitter. By convention a 90 degree (π/2, i) phase shift is assigned to reflection.
The following calculations illustrate the formation of the superposition state created by the photon's interaction with the first beam splitter.
$\begin{matrix} \text{BS x} \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} & S = \frac{1}{ \sqrt{2}} (T + iR) & \frac{1}{ \sqrt{2}} x + \frac{i}{ \sqrt{2}}y \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} \end{matrix} \nonumber$
After the initial beam splitter, the mirrors direct the transmitted and reflected photon states to a second beam splitter where they are recombined. The consequence of this in an equal-arm MZI is that the photon is always registered at D1. There are two paths (histories) to each detector and the amplitudes for these paths interfere. To reach D1 both paths experience one reflection, so they arrive in phase with each other, each shifted by 90 degrees. The paths to D2, however, are 180 degrees out of phase and destructively interfere. The photon is never detected at D2.
A photon entering the MZI in the x-direction exits in the x-direction phase-shifted by 90 degrees:
$\begin{matrix} \text{BS M BS x} \rightarrow \begin{pmatrix} i \ 0 \end{pmatrix} & \text{i x} \rightarrow \begin{pmatrix} i \ 0 \end{pmatrix} \end{matrix} \nonumber$
The highlighted areas above next to the detectors show the matrix mechanics calculations for the probability of the photon being registered at D1 and D2. The equations are read from the right. A photon moving in the x-direction interacts with a beam splitter, a mirror and another beam splitter. This state is then projected onto x- and y-direction motion to calculate which detector will register the photon. The absolute square of this calculation (the probability amplitude) is the probability.
Now we place the pieces of glass in the arms of the interferometer as shown below. The speed of light in glass is different from that in air. Therefore glass causes a phase shift depending on its thickness as is shown below. If the pieces of glass are the same thickness, δ, they will cause the same phase shift and the photon will be detected at D1. However, if they have different thicknesses, the phase shifts will be different in the two arms of the interferometer. For example, if ϕx is π/2 and ϕy is π/4 then D2 will fire almost 15% of the time indicating that the glasses are not the same thickness.
$\begin{matrix} \phi_x = 2 \pi \frac{ \delta_x}{ \lambda} & \phi_x = \frac{ \pi}{2} & \phi_y = 2 \pi \frac{ \delta_y}{ \lambda} & \phi_y = \frac{ \pi}{4} \end{matrix} \nonumber$
Appendix
State Vectors
$\begin{matrix} \text{Photon moving horizontally:} & x = \begin{pmatrix} 1 \ 0 \end{pmatrix} & \text{Photon moving vertically:} & y = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Operators
$\begin{matrix} \text{Operator representing a beam splitter:} & \text{BS} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} \ \text{Operator representing interaction with glass:} & \text{A} \left( \phi_x,~ \phi_y \right) = \begin{pmatrix} e^{i \phi_x} & 0 \ 0 & e^{i \phi_y} \end{pmatrix} \ \text{Operator representing a mirror:} & \text{M} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
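Assembling these pieces reproduces the numbers quoted above. A sketch (NumPy; the glass operator A is applied between the first beam splitter and the mirrors, which is one reasonable reading of the figure):

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)               # beam splitter
M  = np.array([[0, 1], [1, 0]])                              # mirror
A  = lambda px, py: np.diag([np.exp(1j * px), np.exp(1j * py)])  # glass
x  = np.array([1, 0])                                        # photon from source

out = BS @ M @ A(np.pi/2, np.pi/4) @ BS @ x
print(round(abs(out[0])**2, 3))   # P(D1) = 0.854
print(round(abs(out[1])**2, 3))   # P(D2) = 0.146, almost 15%
```

With equal phase shifts the D2 amplitude cancels exactly and only D1 fires, just as in the empty interferometer.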
8.87: Quantum Restrictions on Cloning
Suppose a quantum copier exists which is able to carry out the following cloning operation.
$\begin{pmatrix} 0 \ 1 \end{pmatrix} \xrightarrow{Clone} \begin{pmatrix} 0 \ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \nonumber$
Next the cloning operation (using the same copier) is carried out on the general qubit shown below.
$\begin{pmatrix} \cos \theta \ \sin \theta \end{pmatrix} \xrightarrow{Clone} \begin{pmatrix} \cos \theta \ \sin \theta \end{pmatrix} \otimes \begin{pmatrix} \cos \theta \ \sin \theta \end{pmatrix} = \begin{pmatrix} \cos^2 \theta \ \cos \theta \sin \theta \ \sin \theta \cos \theta \ \sin^2 \theta \end{pmatrix} \nonumber$
Quantum transformations are unitary, meaning probability is preserved. This requires that the scalar product of two input states equal the scalar product of the corresponding output states.
$\begin{matrix} \text{Initial state:} & \begin{pmatrix} \cos \theta & \sin \theta \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \sin \theta \ \text{Final state:} & \begin{pmatrix} \cos^2 \theta & \cos \theta \sin \theta & \sin \theta \cos \theta & \sin^2 \theta \end{pmatrix} \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = \sin^2 \theta \end{matrix} \nonumber$
It is clear from this analysis that quantum theory puts a significant restriction on copying. Only states for which sin(θ) = 0 or 1 (0 and 90 degrees) can be copied by the original cloner.
In conclusion, two quotes from Wootters and Zurek, Physics Today, February 2009, page 76.
Perfect copying can be achieved only when the two states are orthogonal, and even then one can copy those two states (...) only with a copier specifically built for that set of states.
In sum, one cannot make a perfect copy of an unknown quantum state, since, without prior knowledge, it is impossible to select the right copier for the job. That formulation is one common way of stating the no-cloning theorem.
An equivalent way to look at this (see arXiv:1701.00989v1) is to assume that a cloner exists for the V-H polarization states.
$\begin{matrix} \hat{C} | V \rangle |X \rangle = |V \rangle |V \rangle & \hat{C} |H \rangle |X \rangle = |H \rangle |H \rangle \end{matrix} \nonumber$
A diagonally polarized photon is a superposition of the V-H polarization states.
$|D \rangle = \frac{1}{ \sqrt{2}} \left( |V \rangle + |H \rangle \right) \nonumber$
However, due to the linearity of quantum mechanics the V-H cloner cannot clone a diagonally polarized photon.
$\begin{matrix} \hat{C} |D \rangle |X \rangle = \hat{C} \frac{1}{ \sqrt{2}} \left( |V \rangle + |H \rangle \right) |X \rangle = \frac{1}{ \sqrt{2}} \left( \hat{C} |V \rangle |X \rangle + \hat{C} |H \rangle |X \rangle \right) = \frac{1}{ \sqrt{2}} \left( |V \rangle |V \rangle + |H \rangle |H \rangle \right) \ \hat{C} |D \rangle |X \rangle \neq |D \rangle |D \rangle = \frac{1}{2} \left( |V \rangle |V \rangle + |V \rangle |H \rangle + |H \rangle |V \rangle + |H \rangle |H \rangle \right) \end{matrix} \nonumber$
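This linearity argument is easy to verify numerically. A minimal NumPy sketch (the tensor products are formed with np.kron):

import numpy as np

V = np.array([1, 0])
H = np.array([0, 1])
D = (V + H)/np.sqrt(2)                  # diagonally polarized photon

# what the V-H cloner actually produces, by linearity
linear = (np.kron(V, V) + np.kron(H, H))/np.sqrt(2)
# what perfect cloning of |D> would require
perfect = np.kron(D, D)

print(np.round(linear, 3))              # [0.707 0. 0. 0.707]
print(np.round(perfect, 3))             # [0.5 0.5 0.5 0.5]
print(np.allclose(linear, perfect))     # False: the V-H cloner cannot clone |D>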
8.88: Quantum Error Correction
This tutorial deals with quantum error correction as presented by Julian Brown on pages 274-278 in The Quest for the Quantum Computer. Brown's three-qubit example includes an input qubit and two ancillary qubits in the initial state |Ψ00>. This state is encoded and subsequently decoded, with the possibility that in between it acquires an error. The quantum circuit demonstrates how correction occurs if a qubit is flipped due to decoherence at the intermediate state.
The quantum gates required for the error correction algorithm are:
$\begin{matrix} \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} & \text{CnNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} & \text{IToffoli} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix} \end{matrix} \nonumber$
The encoding and decoding elements of the circuit are expressed in terms of these gates as follows; their positions are shown in the circuit diagram below.

$\begin{matrix} \text{Encode = CnNOT kronecker(CNOT, I)} & \text{Decode = IToffoli Encode} \end{matrix} \nonumber$
$\begin{matrix} \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle & \cdots & \cdot & \cdots & \cdot & \cdots & \text{E} & \cdots & \cdot & \cdots & \cdot & \cdots & \oplus & \triangleright & \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \ ~ & ~ & | & ~ & | & ~ & \text{R} & ~ & | & ~ & | & ~ & | \ |0 \rangle & \cdots & \oplus & \cdots & | & \cdots & \text{R} & \cdots & \oplus & \cdots & | & \cdots & \cdot & \triangleright & |0 \rangle \text{ or } |1 \rangle \ ~ & ~ & ~ & ~ & | & ~ & \text{O} & ~ & ~ & ~ & | & ~ & | \ |0 \rangle & \cdots & \cdots & \cdots & \oplus & \cdots & \text{R} & \cdots & \cdots & \cdots & \oplus & \cdots & \cdot & \triangleright & |0 \rangle \text{ or } |1 \rangle \end{matrix} \nonumber$
Given an initial state, the encoding step creates an entangled three-qubit state as demonstrated below.
$\begin{matrix} \text{Initial State} & \text{Encode} & \text{Encoded state} \ \begin{pmatrix} \sqrt{ \frac{1}{3}} \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \ 0 \ 0 \end{pmatrix} & \text{Encode} \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.577 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0.816 \end{pmatrix} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} \end{matrix} \nonumber$
As an initial example, it is assumed that no errors are introduced to the encoded state between encoding and decoding. This case demonstrates that decoding simply returns the initial state. Subsequent to this, the operation of the circuit when an error occurs on each of the wires is examined. These cases demonstrate that the original state appears on the top wire at the completion of the error correction circuit.
$\begin{matrix} \text{No errors:} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} & \text{Decode} \begin{pmatrix} 0.577 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0.816 \end{pmatrix} = \begin{pmatrix} 0.577 \ 0 \ 0 \ 0 \ 0.816 \ 0 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
Next it is shown that if the encoded state is corrupted, the decoder corrects the error and returns the original |Ψ> to the top wire of the circuit. In the example shown below the top qubit is flipped.
$\begin{matrix} \text{Top qubit flipped:} & \begin{pmatrix} 0 \ \sqrt{ \frac{1}{3}} \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} \sqrt{ \frac{2}{3}} \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \end{pmatrix} & \text{Decode} \begin{pmatrix} 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ 0.577 \ 0 \ 0 \ 0 \ 0.816 \end{pmatrix} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} \end{matrix} \nonumber$
The circuit can also be expressed using Dirac notation. Truth tables for the gates are provided in the Appendix.
$\begin{matrix} \left( \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \right) |00 \rangle \xrightarrow{encode} \sqrt{ \frac{1}{3}} |000 \rangle + \sqrt{ \frac{2}{3}} |111 \rangle \xrightarrow{flip~top~qubit} \sqrt{ \frac{1}{3}} |100 \rangle + \sqrt{ \frac{2}{3}} |011 \rangle \ \xrightarrow{CNOT,~I} \sqrt{ \frac{1}{3}} |110 \rangle + \sqrt{ \frac{2}{3}} |011 \rangle \xrightarrow{CnNOT} \sqrt{ \frac{1}{3}} |111 \rangle + \sqrt{ \frac{2}{3}} |011 \rangle \ \xrightarrow{IToffoli} \sqrt{ \frac{1}{3}} |011 \rangle + \sqrt{ \frac{2}{3}} |111 \rangle = \left( \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \right) |11 \rangle \end{matrix} \nonumber$
Naturally the ancillary qubits are also susceptible to errors. The following examples show that if a qubit flip occurs on the middle or bottom wire, the circuit still functions properly.
$\begin{matrix} \text{Middle qubit flipped:} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} + \begin{pmatrix} 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \ 0 \end{pmatrix} & \text{Decode} \begin{pmatrix} 0 \ 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0.577 \ 0 \ 0 \ 0 \ 0.816 \ 0 \end{pmatrix} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \left( \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \right) |00 \rangle \xrightarrow{encode} \sqrt{ \frac{1}{3}} |000 \rangle + \sqrt{ \frac{2}{3}} |111 \rangle \xrightarrow{flip~middle~qubit} \sqrt{ \frac{1}{3}} |010 \rangle + \sqrt{ \frac{2}{3}} |101 \rangle \ \xrightarrow{CNOT,~I} \sqrt{ \frac{1}{3}} |010 \rangle + \sqrt{ \frac{2}{3}} |111 \rangle \xrightarrow{CnNOT} \sqrt{ \frac{1}{3}} |010 \rangle + \sqrt{ \frac{2}{3}} |110 \rangle \ \xrightarrow{IToffoli} \sqrt{ \frac{1}{3}} |010 \rangle + \sqrt{ \frac{2}{3}} |110 \rangle = \left( \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \right) |10 \rangle \end{matrix} \nonumber$
$\begin{matrix} \text{Bottom qubit flipped:} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} + \begin{pmatrix} 0 \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \end{pmatrix} & \text{Decode} \begin{pmatrix} 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0.577 \ 0 \ 0 \ 0 \ 0.816 \ 0 \ 0 \end{pmatrix} & \begin{pmatrix} \sqrt{ \frac{1}{3}} \ \sqrt{ \frac{2}{3}} \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = \begin{pmatrix} 0 \ \sqrt{ \frac{1}{3}} \ 0 \ 0 \ 0 \ \sqrt{ \frac{2}{3}} \ 0 \ 0 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \left( \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \right) |00 \rangle \xrightarrow{encode} \sqrt{ \frac{1}{3}} |000 \rangle + \sqrt{ \frac{2}{3}} |111 \rangle \xrightarrow{flip~bottom~qubit} \sqrt{ \frac{1}{3}} |001 \rangle + \sqrt{ \frac{2}{3}} |110 \rangle \ \xrightarrow{CNOT,~I} \sqrt{ \frac{1}{3}} |001 \rangle + \sqrt{ \frac{2}{3}} |100 \rangle \xrightarrow{CnNOT} \sqrt{ \frac{1}{3}} |001 \rangle + \sqrt{ \frac{2}{3}} |101 \rangle \ \xrightarrow{IToffoli} \sqrt{ \frac{1}{3}} |001 \rangle + \sqrt{ \frac{2}{3}} |101 \rangle = \left( \sqrt{ \frac{1}{3}} |0 \rangle + \sqrt{ \frac{2}{3}} |1 \rangle \right) |01 \rangle \end{matrix} \nonumber$
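The complete error correction cycle can be verified numerically. The following NumPy sketch builds the gates defined above and checks that, for no error and for a flip on each wire, the decoded top qubit is always the original |Ψ> (the syndrome extraction by reshaping is an implementation convenience, not part of the circuit):

import numpy as np
from scipy.linalg import block_diag

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
CNOT = block_diag(I, X)                          # control q1, target q2
CnNOT = block_diag(np.eye(4), np.kron(I, X))     # control q1, target q3
IToffoli = np.eye(8)                             # controls q2, q3; target q1
IToffoli[[3, 7], [3, 7]] = 0
IToffoli[3, 7] = IToffoli[7, 3] = 1

Encode = CnNOT @ np.kron(CNOT, I)
Decode = IToffoli @ Encode

psi = np.array([np.sqrt(1/3), np.sqrt(2/3)])     # qubit to protect
zero = np.array([1., 0.])
encoded = Encode @ np.kron(psi, np.kron(zero, zero))

flips = {"none": np.eye(8),
         "top": np.kron(X, np.kron(I, I)),
         "middle": np.kron(I, np.kron(X, I)),
         "bottom": np.kron(I, np.kron(I, X))}
for name, F in flips.items():
    out = (Decode @ F @ encoded).reshape(2, 4)   # rows: top qubit; columns: ancilla state
    col = np.argmax(np.abs(out[0]))              # the ancilla syndrome column
    print(name, np.round(out[:, col], 3))        # always [0.577 0.816]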
Appendix
$\begin{matrix} \text{CNOT} & \text{CnNOT} & \text{IToffoli} \ \begin{pmatrix} \text{Decimal} & \text{Binary} & \text{to} & \text{Binary} & \text{Decimal} \ 0 & 00 & \text{to} & 00 & 0 \ 1 & 01 & \text{to} & 01 & 1 \ 2 & 10 & \text{to} & 11 & 3 \ 3 & 11 & \text{to} & 10 & 2 \end{pmatrix} & \begin{pmatrix} \text{Decimal} & \text{Binary} & \text{to} & \text{Binary} & \text{Decimal} \ 0 & 000 & \text{to} & 000 & 0 \ 1 & 001 & \text{to} & 001 & 1 \ 2 & 010 & \text{to} & 010 & 2 \ 3 & 011 & \text{to} & 011 & 3 \ 4 & 100 & \text{to} & 101 & 5 \ 5 & 101 & \text{to} & 100 & 4 \ 6 & 110 & \text{to} & 111 & 7 \ 7 & 111 & \text{to} & 110 & 6 \end{pmatrix} & \begin{pmatrix} \text{Decimal} & \text{Binary} & \text{to} & \text{Binary} & \text{Decimal} \ 0 & 000 & \text{to} & 000 & 0 \ 1 & 001 & \text{to} & 001 & 1 \ 2 & 010 & \text{to} & 010 & 2 \ 3 & 011 & \text{to} & 111 & 7 \ 4 & 100 & \text{to} & 100 & 4 \ 5 & 101 & \text{to} & 101 & 5 \ 6 & 110 & \text{to} & 110 & 6 \ 7 & 111 & \text{to} & 011 & 3 \end{pmatrix} \end{matrix} \nonumber$
8.89: Matrix Mechanics Analysis of the BB84 Key Distribution
The figure below (www.nsa.gov/research/tnw/tnw201/article6.shtml) illustrates an implementation of the BB84 public key distribution protocol. Clare randomly sends Bob single photons in either the rectilinear or diagonal basis. Bob randomly chooses to measure the photons he receives in either the rectilinear or diagonal basis. The table below the figure displays typical results and how Clare and Bob use them to create a secret key.
While the key distribution protocol is clear enough from the graphic alone, the purpose of this tutorial is to show the details of its operation using the methods of matrix mechanics.
Direction of motion and polarization states are represented by the following vectors. The x-direction is horizontal in the figure above.
$\begin{matrix} \text{Direction of propagation states:} & x = \begin{pmatrix} 1 \ 0 \end{pmatrix} & y = \begin{pmatrix} 0 \ 1 \end{pmatrix} \ \text{Photon polarization states:} & v = \begin{pmatrix} 1 \ 0 \end{pmatrix} & h = \begin{pmatrix} 0 \ 1 \end{pmatrix} & d = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & s = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} \end{matrix} \nonumber$
There are eight propagation-polarization states which are created by tensor multiplication of the appropriate vectors.
$\begin{matrix} xv = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} & xv = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} & xh = \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} & xh = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} & yv = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} & yv = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} & yh = \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} & yh = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} xd = \begin{pmatrix} 1 \ 0 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & xd = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \ 0 \ 0 \end{pmatrix} & xs = \begin{pmatrix} 1 \ 0 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & xs = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \ 0 \ 0 \end{pmatrix} \ yd = \begin{pmatrix} 0 \ 1 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 1 \end{pmatrix} & yd = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 0 \ 1 \ 1 \end{pmatrix} & ys = \begin{pmatrix} 0 \ 1 \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -1 \end{pmatrix} & ys = \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 0 \ 1 \ -1 \end{pmatrix} \end{matrix} \nonumber$
The operators required for the BB84 implementation shown above are matrices representing the identity, mirror, rotator and polarization beam splitter, PBS.
$\begin{matrix} \text{I} = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & \text{M} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{Rotator} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & -1 \ 1 & 1 \end{pmatrix} & \text{PBS} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \ 0 & 1 & 0 & 0 \end{pmatrix} \end{matrix} \nonumber$
The rotator, which prepares photons for measurement in the diagonal basis, rotates the photon polarization state clockwise by 45 degrees.
$\begin{matrix} \text{Rotator v} = \begin{pmatrix} 0.707 \ 0.707 \end{pmatrix} = |d \rangle & \text{Rotator h} = \begin{pmatrix} -0.707 \ 0.707 \end{pmatrix} = - |s \rangle & \text{Rotator d} = \begin{pmatrix} 0 \ 1 \end{pmatrix} = |h \rangle & \text{Rotator s} = \begin{pmatrix} 1 \ 0 \end{pmatrix} = |v \rangle \end{matrix} \nonumber$
The PBS transmits vertical and reflects horizontal photons. For example,
$\begin{matrix} \text{PBS xv} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = |xv \rangle & \text{PBS xh} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = |yh \rangle & \text{PBS yv} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = |yv \rangle & \text{PBS yh} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = |xh \rangle \end{matrix} \nonumber$
Interpreting Measurement Results
Suppose Bob measures Clare's photons (|v>, |h>, |d> and |s>) in the rectilinear basis. Note that the rectilinear detector lies along the direction in which the photons are propagating. For |v> and |h> he correctly measures the polarization of Clare's photons. However, |d> and |s> are not eigenstates of the PBS but superpositions of |v> and |h>, so for these photons half the time Bob will observe |v> and half the time |h>. According to quantum principles he always observes one of the eigenstates of the measurement operator. When Clare and Bob publicly discuss the results she will tell him for which events he chose the correct measurement basis.
$\begin{matrix} \text{PBS xv} = \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = xv & \text{PBS xh} = \begin{pmatrix} 0 \ 0 \ 0 \ 1 \end{pmatrix} = yh & \text{PBS xd} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(xv + yh)} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0.707 \end{pmatrix} \ ~ & ~ & \text{PBS xs} = \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(xv - yh)} = \begin{pmatrix} 0.707 \ 0 \ 0 \ -0.707 \end{pmatrix} \end{matrix} \nonumber$
Suppose Bob measures Clare's photons in the diagonal basis. The diagonal detector is reached in the vertical direction via a mirror. The rotator causes a basis change so that the |d> and |s> photons become eigenvectors of the PBS. Thus for |d> or |s> Bob always gets the correct result because they have been transformed to |h> or |v>. The states |v> and |h> are transformed by the rotator into superpositions of |v> and |h>, so half the time he will observe |d> and half the time |s>. When Clare and Bob publicly discuss the results she will tell him for which events he chose the correct measurement basis.
$\begin{matrix} \text{PBS kronecker(M, Rotator) xd} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \end{pmatrix} = xh & \text{PBS kronecker(M, Rotator) xs} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \end{pmatrix} = yv \ \text{PBS kronecker(M, Rotator) xv} = \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(xh + yv)} = \begin{pmatrix} 0 \ 0.707 \ 0.707 \ 0 \end{pmatrix} & \text{PBS kronecker(M, Rotator) xh} = \begin{pmatrix} 0 \ 0.707 \ -0.707 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \text{(xh - yv)} = \begin{pmatrix} 0 \ 0.707 \ -0.707 \ 0 \end{pmatrix} \end{matrix} \nonumber$
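These measurement calculations translate directly to NumPy; a minimal sketch using the operators defined above (kronecker corresponds to np.kron):

import numpy as np

x = np.array([1, 0]); y = np.array([0, 1])            # propagation states
v = np.array([1, 0]); h = np.array([0, 1])            # polarization states
d = np.array([1, 1])/np.sqrt(2)
M = np.array([[0, 1], [1, 0]])                        # mirror
Rotator = (1/np.sqrt(2))*np.array([[1, -1], [1, 1]])
PBS = np.array([[1, 0, 0, 0],                         # transmits v, reflects h
                [0, 0, 0, 1],
                [0, 0, 1, 0],
                [0, 1, 0, 0]])

# rectilinear measurement of |xd>: 50/50 split between |xv> and |yh>
print(np.round(PBS @ np.kron(x, d), 3))               # [0.707 0. 0. 0.707]
# diagonal measurement of |xd>: deterministic result |xh>
print(np.round(PBS @ np.kron(M, Rotator) @ np.kron(x, d), 3))   # [0. 1. 0. 0.]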
The following demonstrates how a binary message is coded and subsequently decoded using a binary key and modulo 2 arithmetic.
$\begin{matrix} \text{Message} & \text{Key} & \text{Coded Message} & \text{Decoded Message} \ \text{Mes} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \ 1 \end{pmatrix} & \text{Key} = \begin{pmatrix} 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \ 1 \ 0 \ 1 \ 0 \ 1 \ 1 \ 1 \ 0 \end{pmatrix} & \text{CMes = mod(Mes + Key, 2)} = \begin{pmatrix} 0 \ 1 \ 1 \ 0 \ 1 \ 1 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 0 \ 1 \end{pmatrix} & \text{DMes = mod(CMes + Key, 2)} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \ 1 \end{pmatrix} \end{matrix} \nonumber$
It is clear by inspection that the message has been accurately decoded. This is confirmed by calculating the difference between the message and the decoded message.
$\text{(Mes} - \text{DMes)}^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \nonumber$
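The same one-time-pad arithmetic can be written in a few lines of Python; a sketch with a randomly generated message and key (the bit strings therefore differ from those displayed above):

import numpy as np

rng = np.random.default_rng(1)
Mes = rng.integers(0, 2, 15)          # 15-bit message
Key = rng.integers(0, 2, 15)          # shared secret key

CMes = (Mes + Key) % 2                # coded message
DMes = (CMes + Key) % 2               # decoded message
print(np.array_equal(Mes, DMes))      # True: adding the key twice mod 2 is the identity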
Here's a graphical representation of the coding and decoding mechanism.
8.90: The Quantum Math Behind Ekert's Key Distribution Scheme
Alice and Bob share an entangled photon (EPR) pair in the following state.
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ | R \rangle_A |R \rangle_B + |L \rangle_A | L \rangle_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ i \end{pmatrix}_B + \begin{pmatrix} 1 \ -i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ -i \end{pmatrix}_B \right] = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \left[ |V \rangle_A |V \rangle_B - |H \rangle_A |H \rangle_B \right] \nonumber$
They agree to make random polarization measurements in the rectilinear and circular polarization bases. When a measurement is made on a quantum system the result is always an eigenstate of the measurement operator. The operators and eigenstates in the rectilinear and circular basis are:
$\begin{matrix} I = \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} & V_{op} = \begin{pmatrix} 1 & 0 \ 0 & 0 \end{pmatrix} & V = \begin{pmatrix} 1 \ 0 \end{pmatrix} & V_{op} V \rightarrow \begin{pmatrix} 1 \ 0 \end{pmatrix} & H = \begin{pmatrix} 0 \ 1 \end{pmatrix} & H_{op} = \begin{pmatrix} 0 & 0 \ 0 & 1 \end{pmatrix} & H_{op} H \rightarrow \begin{pmatrix} 0 \ 1 \end{pmatrix} \ ~ & R_{op} = \frac{1}{2} \begin{pmatrix} 1 & -i \ i & 1 \end{pmatrix} & R = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} & R_{op} R \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ \frac{ \sqrt{2} i}{2} \end{pmatrix} & L_{op} = \frac{1}{2} \begin{pmatrix} 1 & i \ -i & 1 \end{pmatrix} & L = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} & L_{op}L \rightarrow \begin{pmatrix} \frac{ \sqrt{2}}{2} \ - \frac{ \sqrt{2} i}{2} \end{pmatrix} \end{matrix} \nonumber$
Pertinent superpositions:
$\begin{matrix} V = \frac{1}{ \sqrt{2}} \text{(R + L)} & H = \frac{i}{ \sqrt{2}} \text{(L - R)} & R = \frac{1}{ \sqrt{2}} \text{(V + iH)} & L = \frac{1}{ \sqrt{2}} \text{(V - iH)} \end{matrix} \nonumber$
Alice's random measurement effectively sends a random photon to Bob due to the correlations built into the entangled state of their shared photon pair. Alice's four measurement possibilities and their consequences for Bob are now examined.
Alice's photon is found to be right circularly polarized, |R>. If Bob measures circular polarization he is certain to find his photon to be |R>. But if he chooses to measure in the rectilinear basis the probability he will observe |V> is 0.5 and the probability he will observe |H> is 0.5.
$\begin{matrix} \text{kronecker}(R_{op},~ I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} = \begin{pmatrix} 0.354 \ 0.354i \ 0.354i \ -0.354 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} R R = \frac{1}{ \sqrt{2}} R \left[ \frac{1}{ \sqrt{2}} \text{(V + iH)} \right] \end{matrix} \nonumber$
If Alice observes |L>, Bob will also if he measures circular polarization. But if he measures in the rectilinear basis the probability he will observe |V> is 0.5 and the probability he will observe |H> is 0.5.
$\begin{matrix} \text{kronecker}(L_{op},~ I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} = \begin{pmatrix} 0.354 \ -0.354i \ -0.354i \ -0.354 \end{pmatrix} & \frac{1}{ \sqrt{2}} \left[ \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} \right] = \frac{1}{ \sqrt{2}} LL = \frac{1}{ \sqrt{2}} L \left[ \frac{1}{ \sqrt{2}} \text{(V - iH)} \right] \end{matrix} \nonumber$
The same kind of reasoning applies to measurements Alice makes in the rectilinear basis.
$\begin{matrix} \text{kronecker}(V_{op},~ I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} = \begin{pmatrix} 0.707 \ 0 \ 0 \ 0 \end{pmatrix} & \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} = \frac{1}{ \sqrt{2}} VV = \frac{1}{ \sqrt{2}} V \left[ \frac{1}{ \sqrt{2}} \text{(R + L)} \right] \end{matrix} \nonumber$
$\begin{matrix} \text{kronecker}(H_{op},~ I) \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ -0.707 \end{pmatrix} & - \frac{1}{ \sqrt{2}} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} = - \frac{1}{ \sqrt{2}} HH = - \frac{1}{ \sqrt{2}} H \left[ \frac{i}{ \sqrt{2}} \text{(L - R)} \right] \end{matrix} \nonumber$
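A short NumPy check of the entangled state identity and of Alice's projective measurements (R_op is formed as the outer product |R><R|):

import numpy as np

V = np.array([1, 0]); H = np.array([0, 1])
R = np.array([1, 1j])/np.sqrt(2)
L = np.array([1, -1j])/np.sqrt(2)
I = np.eye(2)

# (|RR> + |LL>)/sqrt(2) = (|VV> - |HH>)/sqrt(2)
psi = (np.kron(R, R) + np.kron(L, L))/np.sqrt(2)
print(np.allclose(psi, (np.kron(V, V) - np.kron(H, H))/np.sqrt(2)))   # True

# Alice finds |R>: the unnormalized post-measurement state is (1/sqrt(2)) |R>|R>
Rop = np.outer(R, R.conj())
print(np.allclose(np.kron(Rop, I) @ psi, np.kron(R, R)/np.sqrt(2)))   # True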
Alice and Bob keep the results for the experiments for which they measured in the same basis, and make the following bit value assignments: |V> = |R> = 0 and |H> = |L> = 1. This leads to the secret key on the bottom line.
$\begin{matrix} \text{Alice} & \textcolor{blue}{R} & \text{V} & \textcolor{blue}{V} & \textcolor{blue}{L} & \text{H} & \text{L} & \textcolor{blue}{H} & \textcolor{blue}{R} & \textcolor{blue}{V} & \text{L} & \textcolor{blue}{H} \ \text{Bob} & \textcolor{blue}{R} & \text{L} & \textcolor{blue}{V} & \textcolor{blue}{L} & \text{R} & \text{H} & \textcolor{blue}{H} & \textcolor{blue}{R} & \textcolor{blue}{V} & \text{V} & \textcolor{blue}{H} \ \text{Key} & \textcolor{blue}{0} & ~ & \textcolor{blue}{0} & \textcolor{blue}{1} & ~ & ~ & \textcolor{blue}{1} & \textcolor{blue}{0} & \textcolor{blue}{0} & ~ & \textcolor{blue}{1} \end{matrix} \nonumber$
8.91: A Shorter Version of the Quantum Math Behind Ekert's Key Distribution Scheme
Alice and Bob share an entangled photon (EPR) pair in the following state.
$| \Psi \rangle = \frac{1}{ \sqrt{2}} \left[ |R \rangle_A |R \rangle_B + |L \rangle_A |L \rangle_B \right] = \frac{1}{2 \sqrt{2}} \left[ \begin{pmatrix} 1 \ i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ i \end{pmatrix}_B + \begin{pmatrix} 1 \ -i \end{pmatrix}_A \otimes \begin{pmatrix} 1 \ -i \end{pmatrix}_B \right] = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ 0 \ 0 \ -1 \end{pmatrix} = \frac{1}{ \sqrt{2}} \left[ |V \rangle_A |V \rangle_B - |H \rangle_A |H \rangle_B \right] \nonumber$
They agree to make random polarization measurements in the rectilinear and circular polarization bases. When a measurement is made on a quantum system the result is always an eigenstate of the measurement operator. The eigenstates in the circular and rectilinear bases are:
$\begin{matrix} R = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ i \end{pmatrix} & L = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 \ -i \end{pmatrix} & V = \begin{pmatrix} 1 \ 0 \end{pmatrix} & H = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Pertinent superpositions:
$\begin{matrix} V = \frac{1}{ \sqrt{2}} (R + L) & H = \frac{i}{ \sqrt{2}} (L - R) & R = \frac{1}{ \sqrt{2}} (V + iH) & L = \frac{1}{ \sqrt{2}} (V - iH) \end{matrix} \nonumber$
Alice's random measurement effectively sends a random photon to Bob due to the correlations built into the entangled state of their shared photon pair. Alice's four measurement possibilities and their consequences for Bob are now examined.
Alice's photon is found to be right circularly polarized, |R>. If Bob measures circular polarization he is certain to find his photon to be |R>. But if he chooses to measure in the rectilinear basis the probability he will observe |V> is 0.5 and the probability he will observe |H> is 0.5.
$\frac{1}{ \sqrt{2}} RR = \frac{1}{ \sqrt{2}} R \left[ \frac{1}{ \sqrt{2}} (V + iH) \right] \nonumber$
If Alice observes |L>, Bob will also if he measures circular polarization. But if he measures in the rectilinear basis the probability he will observe |V> is 0.5 and the probability he will observe |H> is 0.5.
$\frac{1}{ \sqrt{2}} LL = \frac{1}{ \sqrt{2}} L \left[ \frac{1}{ \sqrt{2}} (V - iH) \right] \nonumber$
The same kind of reasoning applies to measurements Alice makes in the rectilinear basis.
$\begin{matrix} \frac{1}{ \sqrt{2}} VV = \frac{1}{ \sqrt{2}} V \left[ \frac{1}{ \sqrt{2}} (R + L) \right] & \frac{1}{ \sqrt{2}} HH = \frac{1}{ \sqrt{2}} H \left[ \frac{i}{ \sqrt{2}} (L - R) \right] \end{matrix} \nonumber$
Alice and Bob keep the results for the experiments for which they measured in the same basis (blue in the table below), and make the following bit value assignments: |V> = |R> = 0 and |H> = |L> = 1. This leads to the secret key on the bottom line.
$\begin{matrix} \text{Alice} & \textcolor{blue}{R} & \text{V} & \textcolor{blue}{V} & \textcolor{blue}{L} & \text{H} & \text{L} & \textcolor{blue}{H} & \textcolor{blue}{R} & \textcolor{blue}{V} & \text{L} & \textcolor{blue}{H} \ \text{Bob} & \textcolor{blue}{R} & \text{L} & \textcolor{blue}{V} & \textcolor{blue}{L} & \text{R} & \text{H} & \textcolor{blue}{H} & \textcolor{blue}{R} & \textcolor{blue}{V} & \text{V} & \textcolor{blue}{H} \ \text{Key} & \textcolor{blue}{0} & ~ & \textcolor{blue}{0} & \textcolor{blue}{1} & ~ & ~ & \textcolor{blue}{1} & \textcolor{blue}{0} & \textcolor{blue}{0} & ~ & \textcolor{blue}{1} \end{matrix} \nonumber$
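The sifting step in the table can be expressed in a few lines of Python; a sketch using the bit assignments |V> = |R> = 0 and |H> = |L> = 1:

# results from the table above
Alice = ['R', 'V', 'V', 'L', 'H', 'L', 'H', 'R', 'V', 'L', 'H']
Bob = ['R', 'L', 'V', 'L', 'R', 'H', 'H', 'R', 'V', 'V', 'H']
bit = {'V': 0, 'R': 0, 'H': 1, 'L': 1}

# keep only the runs where Alice and Bob measured in the same basis
# and therefore obtained the same result
Key = [bit[a] for a, b in zip(Alice, Bob) if a == b]
print(Key)   # [0, 0, 1, 1, 0, 0, 1]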
The following demonstrates how a binary message is coded and subsequently decoded using a shared binary secret key and modulo 2 arithmetic.
$\begin{matrix} \text{Message} & \text{Key} & \text{Coded Message} & \text{Decoded Message} \ \text{Mes} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \end{pmatrix} & \text{Key} = \begin{pmatrix} 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \end{pmatrix} & \text{CMes = mod(Mes + Key, 2)} = \begin{pmatrix} 0 \ 1 \ 1 \ 1 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \end{pmatrix} & \text{DMes = mod(CMes + Key, 2)} = \begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
It is clear by inspection that the message has been accurately decoded. This is confirmed by calculating the difference between the message and the decoded message.
$\text{(Mes} - \text{DMes)}^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \nonumber$
8.92: Quantum Key Distribution Using a Mach-Zehnder Interferometer
Charles H. Bennett proposed the following Mach-Zehnder interferometer for quantum key distribution (Physical Review Letters 68, 3121 (1992)).
Alice's source at the left supplies single-photon states, which a symmetric beam splitter BS1 splits into a superposition of the photon being present in both arms of a Mach-Zehnder interferometer (MZI). Alice (PSA) applies a random 0-, 90-, 180-, or 270-degree phase shift in one arm and Bob (PSB) applies a random 0- or 90-degree phase shift in the other arm. Mirrors direct the photon to a second beam splitter, creating two photon paths to each detector and thereby allowing for interference between the paths. After photon detection by Bob, Alice and Bob agree publicly to keep only those results for which their phase shifts differ by 0 or 180 degrees, settings for which the photons behave deterministically at the second beam splitter.
Direction of propagation vectors:
$\begin{matrix} x = \begin{pmatrix} 1 \ 0 \end{pmatrix} & y = \begin{pmatrix} 0 \ 1 \end{pmatrix} \end{matrix} \nonumber$
Matrix operators for the interferometer components:
$\begin{matrix} \text{Beam splitter:} & \text{BS} = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & i \ i & 1 \end{pmatrix} & \text{Mirror:} & \text{M} = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{Phase shift:} & \begin{pmatrix} e^{i~PSA} & 0 \ 0 & e^{i~PSB} \end{pmatrix} \end{matrix} \nonumber$
Construct a Mach-Zehnder interferometer using these components.
$\text{MZI(PSA, PSB) = BS M} \begin{pmatrix} e^{i~PSA} & 0 \ 0 & e^{i~PSB} \end{pmatrix} \text{BS} \nonumber$
$\begin{matrix} \text{Probability Detector 0 will fire:} & \text{Probability Detector 1 will fire:} \ \text{Detector_0(PSA, PSB) =} \left( \left| x^T \text{MZI(PSA, PSB) x} \right| \right) ^2 & \text{Detector_1(PSA, PSB) =} \left( \left| y^T \text{MZI(PSA, PSB) x} \right| \right)^2 \end{matrix} \nonumber$
For each of eight possible phase shift settings calculate the probability that detectors |0> and |1> will register the arrival of a photon.
The PSA/PSB settings for which a photon behaves deterministically are highlighted.
$\begin{matrix} ~ & ~ & \text{Detector} = \begin{pmatrix} 1 \ 0 \end{pmatrix} = 0 & \text{Detector} = \begin{pmatrix} 0 \ 1 \end{pmatrix} = 1 \ \textcolor{magenta}{PSA = 0 deg} & \textcolor{magenta}{PSB = 0 deg} & \textcolor{magenta}{Detector_0(PSA, PSB) = 1} & \textcolor{magenta}{Detector_1(PSA, PSB) = 0} \ \text{PSA = 0 deg} & \text{PSB= 90 deg} & \text{Detector_0(PSA, PSB) = 0.5} & \text{Detector_1(PSA, PSB) = 0.5} \ \text{PSA = 90 deg} & \text{PSB= 0 deg} & \text{Detector_0(PSA, PSB) = 0.5} & \text{Detector_1(PSA, PSB) = 0.5} \ \textcolor{magenta}{PSA = 90 deg} & \textcolor{magenta}{PSB = 90 deg} & \textcolor{magenta}{Detector_0(PSA, PSB) = 1} & \textcolor{magenta}{Detector_1(PSA, PSB) = 0} \ \textcolor{magenta}{PSA = 180 deg} & \textcolor{magenta}{PSB = 0 deg} & \textcolor{magenta}{Detector_0(PSA, PSB) = 0} & \textcolor{magenta}{Detector_1(PSA, PSB) = 1} \ \text{PSA = 180 deg} & \text{PSB= 270 deg} & \text{Detector_0(PSA, PSB) = 0.5} & \text{Detector_1(PSA, PSB) = 0.5} \ \text{PSA = 270 deg} & \text{PSB= 0 deg} & \text{Detector_0(PSA, PSB) = 0.5} & \text{Detector_1(PSA, PSB) = 0.5} \ \textcolor{magenta}{PSA = 270 deg} & \textcolor{magenta}{PSB = 90 deg} & \textcolor{magenta}{Detector_0(PSA, PSB) = 0} & \textcolor{magenta}{Detector_1(PSA, PSB) = 1} \end{matrix} \nonumber$
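The table can be reproduced numerically; a minimal NumPy sketch of the MZI defined above:

import numpy as np

BS = (1/np.sqrt(2))*np.array([[1, 1j], [1j, 1]])
M = np.array([[0, 1], [1, 0]])
x = np.array([1, 0])

def MZI(PSA, PSB):
    phase = np.diag([np.exp(1j*PSA), np.exp(1j*PSB)])
    return BS @ M @ phase @ BS

deg = np.pi/180
for PSA, PSB in [(0, 0), (0, 90), (90, 0), (90, 90),
                 (180, 0), (180, 270), (270, 0), (270, 90)]:
    amp = MZI(PSA*deg, PSB*deg) @ x
    print(PSA, PSB, np.round(np.abs(amp)**2, 2))   # [P(Detector 0), P(Detector 1)]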
Demonstrate that the detection results at each detector are completely random. In other words, that if someone was monitoring Bob's detectors he or she would see no pattern in the results.
The settings of phase shifters PSA and PSB are changed randomly by Alice and Bob. So given a large number of runs, each pair of settings shown above will occur with probability 1/8 or 12.5%. Overall each detector will register a photon in half the runs.
$\begin{matrix} \textcolor{magenta}{Detector = \begin{pmatrix} 1 \ 0 \end{pmatrix}} & \frac{1 + \frac{1}{2} + \frac{1}{2} + 1 + 0 + \frac{1}{2} + \frac{1}{2} + 0}{8} \rightarrow \frac{1}{2} & \textcolor{magenta}{Detector = \begin{pmatrix} 0 \ 1 \end{pmatrix}} & \frac{0 + \frac{1}{2} + \frac{1}{2} + 0 + 1 + \frac{1}{2} + \frac{1}{2} + 1}{8} \rightarrow \frac{1}{2} \end{matrix} \nonumber$
Demonstrate the use of a secret key to exchange a secure message between a sender and a receiver.
Coding and Decoding a Message
$\begin{matrix} j = 1 .. 25 & \text{Key}_j = \text{trunc(rnd(2))} & \text{Key}^T = \begin{pmatrix} 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \end{pmatrix} \ ~ & \text{Mes}_j = \text{trunc(rnd(2))} & \textcolor{magenta}{ \text{Mes}^T = \begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}} \ \text{CMes}_j = \text{Mes}_j \oplus \text{Key}_j & \text{CMes}^T = \begin{pmatrix} 1& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} \ \text{DMes}_j = \text{CMes}_j \oplus \text{Key}_j & \textcolor{magenta}{ \text{DMes}^T = \begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}} \end{matrix} \nonumber$
It is clear by inspection that the message has been accurately decoded. This is confirmed by calculating the difference between the message and the decoded message.
$\text{(DMes} - \text{Mes)}^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \nonumber$
In 2000 Anton Zeilinger and his research team sent an encrypted photo of the fertility goddess Venus of Willendorf from Alice to Bob, two computers in two buildings about 400 meters apart. The figure summarizing this achievement first appeared in Physical Review Letters and later in a review article in Nature.
By extending the previous example to two dimensions, it is easy to produce a rudimentary simulation of the experiment. Bitwise XOR is nothing more than addition modulo 2. (XOR = CNOT)
The original Venus and the shared key are represented by the following matrices, where the matrix elements are pixels that are either off (0) or on (1).
$\begin{matrix} \text{Venus} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \ 0 & 1 & 1 & 1 & 1 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 1 & 1 & 1 & 1 & 0 \ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} & \text{Key} = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 \ 1 & 1 & 0 & 1 & 0 & 1 \ 0 & 0 & 1 & 0 & 0 & 1 \ 0 & 1 & 0 & 1 & 1 & 0 \ 1 & 0 & 1 & 0 & 1 & 1 \ 1 & 0 & 1 & 1 & 0 & 1 \ 0 & 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} \nonumber$
A coded version of Venus is prepared by adding Venus and the Key modulo 2 and sent to Bob.
$\begin{matrix} i = 1 .. 7 & j = 1 .. 6 & \text{CVenus}_{i,~j} = \text{Venus}_{i,~j} \oplus \text{Key}_{i,~j} & \text{CVenus} = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 1 \ 1 & 0 & 1 & 0 & 1 & 1 \ 0 & 0 & 0 & 1 & 0 & 1 \ 0 & 1 & 1 & 0 & 1 & 0 \ 1 & 0 & 0 & 1 & 1 & 1 \ 1 & 1 & 0 & 0 & 1& 1 \ 1 & 0 & 1 & 1 & 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Bob adds the key to CVenus modulo 2 and sends the result to his printer.
$\begin{matrix} \text{DVenus}_{i,~j} = \text{CVenus}_{i,~j} \oplus \text{Key}_{i,~j} & \text{DVenus} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \ 0 & 1 & 1 & 1 & 1 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 1 & 1 & 1 & 1 & 0 \ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \end{matrix} \nonumber$
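The two-dimensional version runs just as easily in NumPy; a sketch using the matrices above (the ^ operator is bitwise XOR on integer arrays):

import numpy as np

Venus = np.array([[1,1,1,1,1,1],
                  [0,1,1,1,1,0],
                  [0,0,1,1,0,0],
                  [0,0,1,1,0,0],
                  [0,0,1,1,0,0],
                  [0,1,1,1,1,0],
                  [1,1,1,1,1,1]])
Key = np.array([[1,0,0,1,1,0],
                [1,1,0,1,0,1],
                [0,0,1,0,0,1],
                [0,1,0,1,1,0],
                [1,0,1,0,1,1],
                [1,0,1,1,0,1],
                [0,1,0,0,1,0]])

CVenus = Venus ^ Key                   # coded image sent to Bob
DVenus = CVenus ^ Key                  # Bob decodes with the same key
print(np.array_equal(DVenus, Venus))   # True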
A graphic summary of the simulation:
Random key production can be implemented as follows:
$\begin{matrix} j = 1 .. 20 & \text{PSA}_j = \text{trunc(rnd(4)) 90 deg} & \text{PSB}_j = \text{trunc(rnd(2)) 90 deg} \end{matrix} \nonumber$
$\begin{matrix} \text{Det0}_j = \left[ \left| x^T \text{BS M} \begin{pmatrix} e^{i ~PSA_j} & 0 \ 0 & e^{i~PSB_j} \end{pmatrix} \text{BS x} \right| \right]^2 & \text{Det1}_j = \left[ \left| y^T \text{BS M} \begin{pmatrix} e^{i ~PSA_j} & 0 \ 0 & e^{i~PSB_j} \end{pmatrix} \text{BS x} \right| \right]^2 \end{matrix} \nonumber$
$\begin{matrix} \frac{PSA_j}{deg} = & \frac{PSB_j}{deg} = & \text{Det0}_j = & \text{Det1}_j = \ \begin{pmatrix} 0 \ 0 \ 180 \ 90 \ 270 \ 0 \ 270 \ 0 \ 270 \ 0 \ 180 \ 90 \ 180 \ 180 \ 0 \ \cdots \end{pmatrix} & \begin{pmatrix} 0 \ 90 \ 0 \ 90 \ 90 \ 0 \ 90 \ 90 \ 0 \ 90 \ 90 \ 90 \ 90 \ 90 \ 0 \ \cdots \end{pmatrix} & \begin{pmatrix} 1 \ 0.5 \ 0 \ 1 \ 0 \ 1 \ 0 \ 0.5 \ 0.5 \ 0.5 \ 0.5 \ 1 \ 0.5 \ 0.5 \ 1 \ \cdots \end{pmatrix} & \begin{pmatrix} 0 \ 0.5 \ 1 \ 0 \ 1 \ 0 \ 1 \ 0.5 \ 0.5 \ 0.5 \ 0.5 \ 0 \ 0.5 \ 0.5 \ 0 \ \cdots \end{pmatrix} \end{matrix} \nonumber$
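The random key production and sifting can also be simulated end to end; a sketch assuming the convention that a deterministic click at detector 0 encodes bit 0 and at detector 1 encodes bit 1 (the random settings differ from the run displayed above):

import numpy as np

rng = np.random.default_rng(3)
BS = (1/np.sqrt(2))*np.array([[1, 1j], [1j, 1]])
M = np.array([[0, 1], [1, 0]])
x = np.array([1, 0])
deg = np.pi/180

Key = []
for _ in range(20):
    PSA = rng.integers(0, 4)*90                  # Alice: 0, 90, 180, 270 degrees
    PSB = rng.integers(0, 2)*90                  # Bob: 0, 90 degrees
    if (PSA - PSB) % 180 == 0:                   # keep deterministic settings only
        phase = np.diag([np.exp(1j*PSA*deg), np.exp(1j*PSB*deg)])
        p0 = np.abs((BS @ M @ phase @ BS @ x)[0])**2
        Key.append(0 if np.isclose(p0, 1) else 1)
print(Key)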
8.93: Coding and Decoding Venus
In 2000 Anton Zeilinger and his research team sent an encrypted photo of the fertility goddess Venus of Willendorf from Alice to Bob, two computers in two buildings about 400 meters apart. The figure summarizing this achievement first appeared in Physical Review Letters and later in a review article in Nature.
It is easy to produce a rudimentary simulation of the experiment. Bitwise XOR is nothing more than addition modulo 2. The original Venus and the shared key are represented by the following matrices, where the matrix elements are pixels that are either off (0) or on (1).
$\begin{matrix} i = 1 .. 7 & j = 1 .. 6 & \text{Key}_{i,~j} = \text{trunc(rnd(2))} & \text{Key} = \begin{pmatrix} 0 & 0 & 1 & 0 & 1 & 0 \ 1 & 0 & 0 & 0 & 1 & 0 \ 0 & 1 & 1 & 0 & 0 & 0 \ 1 & 1 & 1 & 1 & 1 & 0 \ 1 & 1 & 1 & 1 & 0 & 1 \ 0 & 1 & 0 & 0 & 1 & 1 \ 0 & 1 & 0 & 1 & 1 & 1 \end{pmatrix} & \text{Venus} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \ 0 & 1 & 1 & 1 & 1 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 1 & 1 & 1 & 1 & 0 \ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \end{matrix} \nonumber$
A coded version of Venus is prepared by adding Venus and the Key modulo 2 and sent to Bob.
$\begin{matrix} \text{CVenus}_{i,~j} = \text{Venus}_{i,~j} \oplus \text{Key}_{i,~j} & \text{CVenus} = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 1 \ 1 & 1 & 1 & 1 & 0 & 0 \ 0 & 1 & 0 & 1 & 0 & 0 \ 0 & 1 & 1 & 0 & 1 & 0 \ 1 & 0 & 0 & 1 & 1 & 1 \ 1 & 1 & 0 & 0 & 1& 1 \ 1 & 0 & 1 & 1 & 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Bob adds the key to CVenus modulo 2 and sends the result to his printer.
$\begin{matrix} \text{DVenus}_{i,~j} = \text{CVenus}_{i,~j} \oplus \text{Key}_{i,~j} & \text{DVenus} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \ 0 & 1 & 1 & 1 & 1 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 1 & 0 & 0 \ 0 & 1 & 1 & 1 & 1 & 0 \ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \end{matrix} \nonumber$
A graphic summary of the simulation:
8.94: Grover's Quantum Search Algorithm
D. Candela described a Grover search algorithm in the August 2015 issue of the American Journal of Physics. Chris Monroe's group recently published an experimental implementation of the same algorithm in Nature Communications 8, 1918 (2017). The Grover search is implemented for N = 3 using the three qubit quantum circuit shown below. The search algorithm (blue) runs an integer number of times closest to $\frac{\pi}{4} \sqrt{2^N}$. The closest integer for N = 3 is 2. The Appendix provides a demonstration of the implementation of the J operator shown at the far right below.
$\begin{matrix} \textcolor{red}{ |0 \rangle} & \textcolor{red}{ \triangleright} & \textcolor{red}{H} & \textcolor{lime}{ \lceil} & ~ & \textcolor{lime}{ \rceil} & \textcolor{blue}{H} & \textcolor{blue}{X} & \textcolor{blue}{ \cdot} & \textcolor{blue}{X} & \textcolor{blue}{H} & \triangleright & \text{Measure} \ ~ & ~ & ~ & \textcolor{lime}{|} & ~ & \textcolor{lime}{|} & ~ & ~ & \textcolor{blue}{|} \ \textcolor{red}{ |0 \rangle} & \textcolor{red}{ \triangleright} & \textcolor{red}{H} & \textcolor{lime}{ \lceil} & \textcolor{lime}{ \text{Oracle}} & \textcolor{lime}{ \rceil} & \textcolor{blue}{H} & \textcolor{blue}{X} & \textcolor{blue}{|} & \textcolor{blue}{X} & \textcolor{blue}{H} & \triangleright & \text{Measure} \ ~ & ~ & ~ & \textcolor{lime}{|} & ~ & \textcolor{lime}{|} & ~ & ~ & \textcolor{blue}{|} \ \textcolor{red}{ |0 \rangle} & \textcolor{red}{ \triangleright} & \textcolor{red}{H} & \textcolor{lime}{ \lfloor} & ~ & \textcolor{lime}{ \rfloor} & \textcolor{blue}{H} & \textcolor{blue}{X} & \textcolor{blue}{ \fbox{Z}} & \textcolor{blue}{X} & \textcolor{blue}{H} & \triangleright & \text{Measure} \end{matrix} \nonumber$
There are 8 items in the data base and the oracle, O, identifies the correct query with a minus sign. In other words, a search of the data base should return the result |110>. The J and Hadamard matrices required are shown below.
$\begin{matrix} H = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} & O = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} & J = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \ \text{HHH = kronecker(H, kronecker(H, H))} & \text{GroverSearch = HHH J HHH O} \end{matrix} \nonumber$
Initial Hadamard gates on the circuit wires feed the Grover search algorithm a superposition of all possible queries yielding a superposition of answers, but with the correct answer highly weighted as shown below.
$\begin{matrix} \Psi = \frac{1}{2 \sqrt{2}} \begin{pmatrix} 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \end{pmatrix} & \text{GroverSearch}^2 \Psi = \begin{pmatrix} -0.088 \ -0.088 \ -0.088 \ -0.088 \ -0.088 \ -0.088 \ 0.972 \ -0.088 \end{pmatrix} & \text{This state is close to the correct result:} & \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
The probability of a successful search after two cycles of the circuit is $0.972^2 = 94.5 \%$. For a classical search it would require on average 4 (8/2) queries.
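A NumPy sketch of the N = 3 search confirms these numbers (the appendix identity XXX CCZ XXX = J can be checked the same way):

import numpy as np

H = (1/np.sqrt(2))*np.array([[1, 1], [1, -1]])
HHH = np.kron(H, np.kron(H, H))
O = np.eye(8); O[6, 6] = -1          # oracle marks |110> = 6
J = np.eye(8); J[0, 0] = -1

GroverSearch = HHH @ J @ HHH @ O
Psi = np.ones(8)/np.sqrt(8)          # superposition of all queries
print(np.round(np.linalg.matrix_power(GroverSearch, 2) @ Psi, 3))
# index 6 carries amplitude 0.972; success probability 0.972^2 = 0.945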
It is easy to extend the algorithm to N = 4 by adding a row to the circuit above. In this example the search of the data base should return the result |1010>.
$\begin{matrix} O = \begin{pmatrix} 1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &1 &0 &0& 0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 & 0 \ 0& 0 &0& 0 &0 &0 &0 &0 &0 &0 &-1 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0& 0 &0 &0 &0 &0 &1 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 \ 0 &0 &0 &0 &0& 0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 \ 0 &0 &0 &0 & 0& 0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 \end{pmatrix} & J' = \begin{pmatrix} -1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &1 &0 &0& 0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 & 0 \ 0& 0 &0& 0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0& 0 &0 &0 &0 &0 &1 &0 &0 &0 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 \ 0 &0 &0 &0 &0& 0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 \ 0 &0 &0 &0 & 0& 0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 \ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 \end{pmatrix} \ \text{HHHH = kronecker(H, kronecker(H, kronecker(H, H)))} & \text{GroverSearch = HHHH J' HHHH O} ~ \frac{ \pi}{4} \sqrt{2^4} = 3.142 \end{matrix} \nonumber$
$\begin{matrix} \Psi = \frac{1}{4} \begin{pmatrix} 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \ 1 \end{pmatrix} & \text{GroverSearch}^3 \Psi = \begin{pmatrix} 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ -0.98 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \ 0.051 \end{pmatrix} & \text{This state is close to the correct result:} & \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \begin{pmatrix} 0 \ 1 \end{pmatrix} \begin{pmatrix} 1 \ 0 \end{pmatrix} \end{matrix} \nonumber$
The probability of a successful search after three cycles of the circuit is $(-0.98)^2 = 96.0 \%$. For a classical search it would require on average 8 (16/2) queries.
Appendix
The following calculation demonstrates the identity on the right side of the Grover search circuit. X is the NOT operator and CCZ is the controlled-controlled-Z gate.
$\begin{matrix} X = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{XXX = kronecker(X, kronecker(X, X))} & \text{CCZ} = \begin{pmatrix} 1 &0 &0 &0 &0 &0 &0 &0 \ 0 &1 &0 &0 &0 &0 &0 &0 \ 0 &0 &1 &0 &0 &0 &0 &0 \ 0 &0 &0 &1 &0 &0 &0 &0 \ 0 &0 &0 &0 &1 &0 &0 &0 \ 0 &0 &0 &0 &0 &1 &0 &0 \ 0 &0 &0 &0 &0 &0 &1 &0 \ 0 &0 &0 &0 &0 &0 &0 & -1 \end{pmatrix} \end{matrix} \nonumber$
$\begin{matrix} \text{XXX CCZ XXX} = \begin{pmatrix} -1 &0 &0 &0 &0 &0 &0 &0 \ 0 &1 &0 &0 &0 &0 &0 &0 \ 0 &0 &1 &0 &0 &0 &0 &0 \ 0 &0 &0 &1 &0 &0 &0 &0 \ 0 &0 &0 &0 &1 &0 &0 &0 \ 0 &0 &0 &0 &0 &1 &0 &0 \ 0 &0 &0 &0 &0 &0 &1 &0 \ 0 &0 &0 &0 &0 &0 &0 & 1 \end{pmatrix} & \text{J} = \begin{pmatrix} -1 &0 &0 &0 &0 &0 &0 &0 \ 0 &1 &0 &0 &0 &0 &0 &0 \ 0 &0 &1 &0 &0 &0 &0 &0 \ 0 &0 &0 &1 &0 &0 &0 &0 \ 0 &0 &0 &0 &1 &0 &0 &0 \ 0 &0 &0 &0 &0 &1 &0 &0 \ 0 &0 &0 &0 &0 &0 &1 &0 \ 0 &0 &0 &0 &0 &0 &0 & 1 \end{pmatrix} \end{matrix} \nonumber$
8.95: Grover's Search Algorithm: Implementation for Two Items
Chris Monroe's research group recently published an experimental implementation of the Grover search algorithm in Nature Communications 8, 1918 (2017). In this report the Grover search is implemented for N = 3 using the three qubit quantum circuit shown below. In this particular example it is demonstrated that the search algorithm successfully searches for two items in one operation of the circuit. The lead sentence in the paper states "The Grover search algorithm has four stages: initialization (red), oracle (green), amplification (blue) and measurement (black)."
$\begin{matrix} \textcolor{red}{ |0 \rangle} & \textcolor{red}{ \triangleright} & \textcolor{red}{H} & \textcolor{lime}{ \lceil} & ~ & \textcolor{lime}{ \rceil} & \textcolor{blue}{H} & \textcolor{blue}{X} & \textcolor{blue}{ \cdot} & \textcolor{blue}{X} & \textcolor{blue}{H} & \triangleright & \text{Measure} \ ~ & ~ & ~ & \textcolor{lime}{|} & ~ & \textcolor{lime}{|} & ~ & ~ & \textcolor{blue}{|} \ \textcolor{red}{ |0 \rangle} & \textcolor{red}{ \triangleright} & \textcolor{red}{H} & \textcolor{lime}{ \lceil} & \textcolor{lime}{ \text{Oracle}} & \textcolor{lime}{ \rceil} & \textcolor{blue}{H} & \textcolor{blue}{X} & \textcolor{blue}{|} & \textcolor{blue}{X} & \textcolor{blue}{H} & \triangleright & \text{Measure} \ ~ & ~ & ~ & \textcolor{lime}{|} & ~ & \textcolor{lime}{|} & ~ & ~ & \textcolor{blue}{|} \ \textcolor{red}{ |0 \rangle} & \textcolor{red}{ \triangleright} & \textcolor{red}{H} & \textcolor{lime}{ \lfloor} & ~ & \textcolor{lime}{ \rfloor} & \textcolor{blue}{H} & \textcolor{blue}{X} & \textcolor{blue}{ \fbox{Z}} & \textcolor{blue}{X} & \textcolor{blue}{H} & \triangleright & \text{Measure} \end{matrix} \nonumber$
The oracle, highlighted below, marks items 3 (|011>) and 5 (|101>) with a minus sign. CCZ is the controlled-controlled-Z gate defined in the Appendix of the previous section.
$\begin{matrix} H = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} & X = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & O = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} & J = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \ \text{HHH = kronecker(H, kronecker(H, H))} & \text{XXX = kronecker(X, kronecker(X, X))} \end{matrix} \nonumber$
The initial Hadamard gates on the three circuit wires feed the Grover search algorithm (in blue) a superposition of all possible queries yielding a superposition of the correct answers.
$\begin{matrix} \text{GroverSearch = HHH XXX CCZ XXX HHH Oracle HHH} & \text{GroverSearch} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ 0 \ -0.707 \ 0 \ -0.707 \ 0 \ 0 \end{pmatrix} = - \frac{1}{ \sqrt{2}} \left[ |011 \rangle + |101 \rangle \right] \end{matrix} \nonumber$
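A NumPy sketch of the two-item search (CCZ built as in the previous Appendix):

import numpy as np

H = (1/np.sqrt(2))*np.array([[1, 1], [1, -1]])
X = np.array([[0, 1], [1, 0]])
HHH = np.kron(H, np.kron(H, H))
XXX = np.kron(X, np.kron(X, X))
CCZ = np.eye(8); CCZ[7, 7] = -1

Oracle = np.eye(8)
Oracle[3, 3] = Oracle[5, 5] = -1       # marks |011> and |101>

GroverSearch = HHH @ XXX @ CCZ @ XXX @ HHH @ Oracle @ HHH
print(np.round(GroverSearch @ np.eye(8)[:, 0], 3))
# -0.707 at indices 3 and 5: -(1/sqrt(2))(|011> + |101>)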
8.96: Grover's Search Algorithm: Four-Card Monte
Grover's search algorithm is great at playing four-card monte. As the following quantum circuit shows it can determine which card is the queen in one pass.
$\begin{matrix} |0 \rangle & H & \triangleright & \lceil & ~ & \rceil & H & \lceil & ~ & \rceil & H & \triangleright & \text{Measure} & ~ & \lceil & ~ & \rceil & ~ & X & \cdot & X \ ~ & ~ & ~ & | & \text{Oracle} & | & ~ & | & J & | & ~ & ~ & ~ & \text{where} & | & J & | & = & ~ & | \ |0 \rangle & H & \triangleright & \lfloor & ~ & \rfloor & H & \lfloor & ~ & \rfloor & H & \triangleright & \text{Measure} & ~ & \lfloor & ~ & \rfloor & ~ & X & \fbox{Z} & X \end{matrix} \nonumber$
The following matrix operators are required to construct the circuit. Giving |10> a negative phase in the Oracle designates it as the queen. The Appendix shows the calculation of J as shown on the right side above.
$\begin{matrix} H = \frac{1}{ \sqrt{2}} \begin{pmatrix} 1 & 1 \ 1 & -1 \end{pmatrix} & \text{HH = kronecker(H, H)} & \text{Oracle} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & -1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} & J = \begin{pmatrix} -1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
Operating on the input state, which creates a superposition of all queries, enables the algorithm to identify which card is the queen in one operation of the circuit.
$\begin{matrix} \text{GroverSearch = HH J HH Oracle HH} & \text{GroverSearch} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ -1 \ 0 \end{pmatrix} = - |10 \rangle \end{matrix} \nonumber$
Now the operation of the algorithm is carried out in stages to show the importance of constructive and destructive interference in quantum computers.
$\begin{matrix} \text{Step 1} & \text{HH} \begin{pmatrix} 1 \ 0 \ 0 \ 0 \end{pmatrix} = \begin{pmatrix} 0.5 \ 0.5 \ 0.5 \ 0.5 \end{pmatrix} = \frac{1}{2} \left[ |00 \rangle + |01 \rangle + |10 \rangle + |11 \rangle \right] \ \text{Step 2} & \text{Oracle} \begin{pmatrix} 0.5 \ 0.5 \ 0.5 \ 0.5 \end{pmatrix} = \begin{pmatrix} 0.5 \ 0.5 \ -0.5 \ 0.5 \end{pmatrix} = \frac{1}{2} \left[ |00 \rangle + |01 \rangle - |10 \rangle + |11 \rangle \right] \ \text{Step 3} & \text{HH} \begin{pmatrix} 0.5 \ 0.5 \ -0.5 \ 0.5 \end{pmatrix} = \begin{pmatrix} 0.5 \ -0.5 \ 0.5 \ 0.5 \end{pmatrix} = \frac{1}{2} \left[ |00 \rangle - |01 \rangle + |10 \rangle + |11 \rangle \right] \ \text{Step 4} & \text{J} \begin{pmatrix} 0.5 \ -0.5 \ 0.5 \ 0.5 \end{pmatrix} = \begin{pmatrix} -0.5 \ -0.5 \ 0.5 \ 0.5 \end{pmatrix} = \frac{1}{2} \left[ - |00 \rangle - |01 \rangle + |10 \rangle + |11 \rangle \right] \ \text{Step 5} & \text{HH} \begin{pmatrix} -0.5 \ -0.5 \ 0.5 \ 0.5 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \ -1 \ 0 \end{pmatrix} = - |10 \rangle \end{matrix} \nonumber$
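The staged calculation is easy to reproduce; a short NumPy sketch that prints the state after each gate:

import numpy as np

H = (1/np.sqrt(2))*np.array([[1, 1], [1, -1]])
HH = np.kron(H, H)
Oracle = np.diag([1., 1., -1., 1.])    # |10> is the queen
J = np.diag([-1., 1., 1., 1.])

state = np.eye(4)[:, 0]                # input |00>
for name, gate in [("HH", HH), ("Oracle", Oracle),
                   ("HH", HH), ("J", J), ("HH", HH)]:
    state = gate @ state
    print(name, np.round(state, 2))
# final state [0. 0. -1. 0.] = -|10>: the queen is found in one pass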
Appendix
$\begin{matrix} \lceil & ~ & \rceil & ~ & X & \cdot & X \ | & J & | & = & ~ & | \ \lfloor & ~ & \rfloor & ~ & X & \fbox{Z} & X \end{matrix} \nonumber$
$\begin{matrix} J = \begin{pmatrix} -1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} & X = \begin{pmatrix} 0 & 1 \ 1 & 0 \end{pmatrix} & \text{CZ} = \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & -1 \end{pmatrix} & \text{kronecker(X, X) CZ kronecker(X, X)} = \begin{pmatrix} -1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} \end{matrix} \nonumber$
09: Numerical Solutions for Schrodinger's Equation

Numerically solving the Schrödinger equation is challenging because a large number of grid points is required and the boundary conditions must be satisfied.
Solving Schrödinger's equation is the primary task of chemists in the field of quantum chemistry. However, exact solutions for Schrödinger's equation are available only for a small number of simple systems. Therefore the purpose of this tutorial is to illustrate one of the computational methods used to obtain approximate solutions.
Mathcad offers the user a variety of numerical differential equation solvers. We will use Mathcad's ordinary differential equation solver, Odesolve, because it allows one to type Schrödinger's equation just as it appears on paper or on the blackboard; in other words it is pedagogically friendly. In what follows the use of Odesolve will be demonstrated for the one-dimensional harmonic oscillator. All applications of Odesolve naturally require the input of certain parameters: integration limits, mass, force constant, etc. Therefore the first part of the Mathcad worksheet will be reserved for this purpose.
• Integration limit: xmax := 5
• Effective mass: µ := 1
• Force constant: k := 1
The most important thing distinguishing one quantum mechanical problem from another is the potential energy term, $V(x)$. It is entered below.
Potential energy:
$V(x) = \dfrac{1}{2} kx^{2} \nonumber$
Entering the potential energy separately, as done above, allows one to write a generic form for the Schrödinger equation applicable to any one-dimensional problem. This creates a template that is easily edited when moving from one quantum mechanical problem to another. All that is necessary is to type in the appropriate potential energy expression and edit the input parameters. This is the most valuable feature of the numerical approach - you don't need a new mathematical tool or trick for each new problem, a single template works for all one-dimensional problems after minor editing.
Mathcad's syntax for solving the Schrödinger equation is given below. Because subsequent calculations may be needed, the wavefunction is normalized. Note that seed values for the wavefunction and its first derivative are required at the left boundary. It is also important to note that the numerical integration is carried out in atomic units:
$\hbar = m_e = e = 1. \nonumber$
Given
$\frac{-1}{2 \mu} \frac{d^{2}}{dx^{2}} \psi (x) + V(x) \psi (x) = E \psi (x) \nonumber$
with $\psi (-x_{max}) = 0$ and $\psi '(-x_{max}) = 0.1$.
$\psi = Odesolve (x, x_{max}) \nonumber$
Normalize wavefunction:
$\psi (x) = \frac{ \psi (x)}{\displaystyle \sqrt{ \int_{-x_{max}}^{x_{max}} \psi (x)^{2} dx}} \nonumber$
Numerical solutions also require an energy guess. If the correct energy is entered, the integration algorithm generates a wavefunction that satisfies the right-hand boundary condition; if it is not satisfied, another energy guess is made. In practice it is easiest to sit on the energy input placeholder, type a value, and press F9 to recalculate.
Energy guesses that are too small yield wavefunctions that miss the right boundary condition on the high side, while high energy guesses miss the right boundary condition on the low side. Therefore it is generally quite easy to bracket the correct energy after a few guesses.
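This trial-and-error search can also be automated: the sign of $\psi (x_{max})$ flips as the guess crosses an eigenvalue, so a root finder can refine a bracketed energy. A minimal Python sketch for the harmonic oscillator (an illustration assuming SciPy, not part of the Mathcad worksheet):

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import brentq

mu, k, xmax = 1, 1, 5
x = np.linspace(-xmax, xmax, 501)

def shoot(E):
    # integrate psi'' = 2*mu*(V - E)*psi from the left; return psi(xmax)
    def rhs(y, x):
        return [y[1], 2*mu*(0.5*k*x**2 - E)*y[0]]
    return odeint(rhs, [0.0, 0.1], x)[-1, 0]

# psi(xmax) changes sign as E crosses an eigenvalue, so bracket and refine
E0 = brentq(shoot, 0.4, 0.6)
print(E0)   # ~0.5, the ground-state energy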
Enter energy guess: E ≡ .5
Of course the solution has to be displayed graphically to determine whether a solution (an eigenstate) has been found. The graphical display is shown below. It is frequently instructive to also display the potential energy function.
wavefunction:
%matplotlib inline
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

# parameters in atomic units
mu = 1
k = 1
E = 0.5
xmax = 5

# Schrodinger's equation as a pair of first-order ODEs:
# y = [psi, psi'], with psi'' = 2*mu*(V(x) - E)*psi and V(x) = k*x^2/2
def psi(y, x):
    psi1, dpsi1 = y
    return [dpsi1, 2*mu*((1/2)*k*x**2 - E)*psi1]

# left boundary conditions: psi(-xmax) = 0, psi'(-xmax) = 0.1
y0 = [0.0, 0.1]
x = np.linspace(-xmax, xmax, 101)
sol = odeint(psi, y0, x)

# plot the (unnormalized) wavefunction
plt.plot(x, sol[:, 0], color="red")
plt.title("Wave Function")
plt.xlabel("x")
plt.ylabel("\u03A8(x)")
plt.show()
Potential energy:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# harmonic potential V(x) = k*x^2/2 on the same grid as the wavefunction plot
k = 1
x = np.linspace(-5, 5, 100)

plt.plot(x, 0.5*k*x**2, color="red")
plt.xticks([-5, 5])
plt.yticks([0, 5, 10, 15])
plt.title("Potential Energy")
plt.xlabel("x")
plt.ylabel("V(x)")
plt.show()
It is quite easy, as shown below, to generate the momentum-space wavefunction by a Fourier transform of its coordinate-space counterpart.
$p := -4, -3.9..5$
$\Phi (p) = \frac{1}{ \sqrt{ 2 \pi}} \int_{-x_{max}}^{x_{max}} e^{-i p x} \psi(x)dx \nonumber$
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# parameters in atomic units
mu = 1
k = 1
E = 0.5
xmax = 5

# coordinate-space Schrodinger equation as a first-order system
def psi(y, x):
    psi1, dpsi1 = y
    return [dpsi1, 2*mu*((1/2)*k*x**2 - E)*psi1]

y0 = [0.0, 0.1]
x = np.linspace(-xmax, xmax, 201)
sol = odeint(psi, y0, x)

# normalize the coordinate-space wavefunction
psi_x = sol[:, 0]
psi_x = psi_x / np.sqrt(np.trapz(psi_x**2, x))

# Fourier transform into momentum space:
# Phi(p) = (2*pi)^(-1/2) * integral of exp(-i*p*x)*psi(x) dx
p = np.linspace(-4, 4, 81)
Phi = np.array([np.trapz(np.exp(-1j*q*x)*psi_x, x) for q in p]) / np.sqrt(2*np.pi)

# plot the momentum distribution
plt.plot(p, np.abs(Phi)**2, color="red")
plt.title("Momentum Distribution")
plt.xlabel("p")
plt.ylabel("|\u03A6(p)|\u00b2")
plt.show()
9.02: Particle in an Infinite Potential Well
Numerical Solutions for Schrödinger's Equation
Integration limit: xmax := 1 Effective mass: $\mu$ := 1
Potential energy: V(x) := 0
Numerical integration of Schrödinger's equation:
Given: $\frac{-1}{2 \mu} \frac{d^{2}}{dx^{2}} \Psi (x) + V(x) \Psi (x) = E \Psi (x)$ $\Psi (0) = 0$ $\Psi '(0) = 0.1$
$\Psi := Odesolve (x, x_{max})$ Normalize wave function: $\Psi (x) := \frac{ \Psi (x)}{ \sqrt{ \int_{0}^{x_{max}} \Psi (x)^{2}dx}}$
Enter energy guess: E = 4.934
Fourier transform coordinate wave function into momentum space:
p := -20, -19.5 .. 20
$\Phi (p) := \frac{1}{ \sqrt{2 \pi}} \int_{0}^{x_{max}} exp(-i \cdot p \cdot x) \cdot \Psi (x) dx$
9.03: Particle in a Gravitational Field
The Unhindered Quantized Bouncing Particle
• Integration limit: $z_{max} = 3$
• Mass: $m = 2$
• Acceleration due to gravity: $g = 1$
The first 10 roots of the Airy function are as follows:
a1 = 2.33810 a2 = 4.08794 a3 = 5.52055 a4 = 6.78670 a5 = 7.94413
a6 = 9.02265 a7 = 10.04017 a8 = 11.00852 a9 = 11.93601 a10 = 12.82877
Calculate energy analytically by selecting the appropriate Airy function root:
i = 1 $E = \left( \frac{mg^{2}}{2} \right)^{ \frac{1}{3}} a_{i}$ E = 2.338
Generate the associated wavefunction numerically: Potential energy: $V(z) = mgz$
Given $\frac{-1}{2 \cdot m} \frac{d^{2}}{dz^{2}} \psi (z) + V (z) \psi (z) \equiv E \psi (z)$
$\psi (0.0) = 0.0$
$\psi '(0.0) = 0.1$
$\psi = Odesolve (z, z_{max})$
Normalize wavefunction: $\psi (z) = \frac{ \psi (z)}{ \sqrt{ \int_{0}^{z_{max}} \psi (z)^{2} dz}}$
9.04: Particle in a One-dimensional Egg Carton
Numerical Solutions for Schrödinger's Equation
Integration limit: xmax = 10 Effective mass: $\mu$ = 1
Potential energy: Vo = 2 atoms = 2 $V(x) = V_{o} ( \cos (atoms 2 \pi \frac{x}{x_{max}}) +1)$
Numerical integration of Schrödinger's equation:
Given $\frac{-1}{2 \mu} \frac{d^{2}}{dx^{2}} \psi (x) + V(x) \psi (x) = E \psi (x)$ $\psi (0) = 0$ $\psi '(0) = 0.1$
$\psi = Odesolve (x, x_{max})$ Normalize wave function: $\psi (x) = \frac{ \psi (x)}{ \sqrt{ \int_{0}^{x_{max}} \psi (x)^{2}dx}}$
Enter energy guess: E = 0.83583
Fourier transform coordinate wave function into momentum space.
p = -10, -9.9 .. 10 $\Phi (p) = \frac{1}{ \sqrt{2 \pi}} \int_{0}^{x_{max}} exp(-i p x) \psi (x)~dx$
9.05: Particle in a Finite Potential Well
Numerical Solutions for the Finite Potential Well
Schrödinger's equation is integrated numerically for the first three energy states for a finite potential well. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
n = 100 xmin = -3 xmax = 3 $\Delta = \frac{ xmax - xmin}{n-1}$
$\mu$ = 1 lb = -1 rb = 1 V0 = 4
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,i} = if[ (x_{i} \geq lb) (x_{i} \leq rb), 0, V_{0}]$ $T_{i, j} = if [i=j, \frac{ \pi^{2}}{6 \mu \Delta^{2}}, \frac{(-1)^{i-j}}{(i-j)^{2} \mu \Delta^{2}} ]$
Form Hamiltonian energy matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display three eigenvalues: m = 1 .. 3
Em =
$\begin{array}{|r|} \hline \ 0.63423174 \ \hline \ 2.39691438 \ \hline \ 4.4105828 \ \hline \end{array}$
Calculate associated eigenfunctions: k = 1 .. 2 $\psi$(k) = eigenvec(H, Ek)
Plot the potential energy and bound state eigenfunctions: $V_{pot1} := V_{i,i}$
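The matrix algorithm used above is compact enough to reproduce in a few lines of numpy; the following sketch (an illustration, using the same grid, mass and well parameters as the worksheet) recovers the eigenvalues displayed above:

import numpy as np

n, xmin, xmax = 100, -3, 3
mu, lb, rb, V0 = 1, -1, 1, 4
d = (xmax - xmin) / (n - 1)
x = xmin + d * np.arange(n)

# potential energy matrix: zero inside the well, V0 outside
V = np.diag(np.where((x >= lb) & (x <= rb), 0.0, V0))

# kinetic energy matrix from the worksheet's formula
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
with np.errstate(divide="ignore"):
    T = np.where(i == j, np.pi**2 / (6 * mu * d**2),
                 (-1.0)**(i - j) / ((i - j)**2 * mu * d**2))

E, psi = np.linalg.eigh(T + V)
print(E[:3])   # ~0.634, 2.397, 4.411, matching the table above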
9.06: Particle in a Semi-infinite Potential Well
Numerical Solutions to Schrödinger's Equation for the Particle in the Semi-infinite Box
Parameters go here: $x_{min} = 0$ $x_{max} = 5$ $m = 1$ $lb = 2$
Potential energy $V(x) = if[(x \geq lb), V_{0}, 0]$
Given $\frac{d^{2}}{dx^{2}} \psi (x) = 2 m (V(x) -E) \psi (x)$ $\psi (x_{min}) = 0$ $\psi '(0) = 0.1$
$\psi := Odesolve (x, x_{max})$ Normalize wave function: $\psi (x) := \frac{ \psi (x)}{ \sqrt{ \int_{x_{min}}^{x_{max}} \psi(x)^{2} dx}}$
Enter energy guess: E = 0.766
Calculate the probability that the particle is in the barrier: $\int_{2}^{5} \psi (x)^{2}dx = 0.092$
Calculate the probability that the particle is not in the barrier: $\int_{0}^{2} \psi (x)^{2}dx = 0.908$
Calculate and display the momentum distribution:
Fourier transform: p = -10,-9.9 .. 10 $\Phi (p) = \frac{1}{ \sqrt{2 \pi}} \int_{x_{min}}^{x_{max}} exp(-i~p~x) \psi (x)~dx$
9.07: Particle in a Slanted Well Potential
Numerical Solutions for Schrödinger's Equation for the Particle in the Slanted Box
Parameters go here: $x_{max} = 1$ $\mu = 1$ $V_{0} = 2$
Potential energy $V(x) = V_{0} x$
Given
$\frac{-1}{2 \mu} \frac{d^{2}}{dx^{2}} \psi (x) + V(x) \psi (x) = E \psi (x) \nonumber$
with these boundary conditions: $\psi (0) = 0$ and $\psi '(0) = 0.1$
$\psi = Odesolve(x, x_{max})$ Normalize wavefunction: $\psi (x) = \frac{ \psi (x)}{ \sqrt{ \int_{0}^{x_{max}} \psi (x)^{2} dx}}$
Enter energy guess: E = 5.925
Calculate most probable position: x = 0.5 Given $\frac{d}{dx} \psi (x) = 0$ Find (x) = 0.485
Calculate average position: $X_{avg} = \int_{0}^{1} \psi (x) (x) \psi (x) dx$ $X_{avg} = 0.491$
Calculate potential and kinetic energy:
$V_{avg} = V_{0} X_{avg}$ $V_{avg} = 0.983$
$T_{avg} = E - V_{avg}$ $T_{avg} = 4.942$
9.08: Numerical Solutions for a Particle in a V-Shaped Potential Well
Schrödinger's equation is integrated numerically for a particle in a V-shaped potential well. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
n = 100 xmin = -4 xmax = 4 $\Delta = \frac{xmax - xmin}{n-1}$ $\mu$ = 1 Vo = 2
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
Vi, i = Vo |xi| Ti,j = if $\bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg]$
Hamiltonian matrix: H = T + V
Calculate eigenvalues: E = sort(eigenvals(H))
Selected eigenvalues: m = 1 .. 6
Em =
$\begin{array}{|r|} \hline \ 1.284 \ \hline \ 2.946 \ \hline \ 4.093 \ \hline \ 5.153 \ \hline \ 6.089 \ \hline\ 7.030 \ \hline \end{array}$
Display solution:
For V = axn the virial theorem requires the following relationship between the expectation values for kinetic and potential energy: <T> = 0.5n<V>. The calculations below show the virial theorem is satisfied for this potential for which n = 1.
$\begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ \psi (1)^{T} T \psi(1) & \psi (1)^{T} V \psi(1) & E_{1} \ \psi (2)^{T} T \psi(2) & \psi (2)^{T} V \psi(2) & E_{2} \ \psi (3)^{T} T \psi(3) & \psi (3)^{T} V \psi(3) & E_{3} \end{pmatrix} = \begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ 0.428 & 0.857 & 1.284 \ 0.982 & 1.964 & 2.946 \ 1.365 & 2.728 & 4.093 \end{pmatrix}$
9.09: Numerical Solutions for the Harmonic Oscillator
Schrödinger's equation is integrated numerically for the first three energy states for the harmonic oscillator. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
Increments: n = 100
Integration limits: xmin = -5
xmax = 5
$\Delta = \frac{xmax - xmin}{n-1} \nonumber$
Effective mass: $\mu$ = 1
Force constant: k = 1
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,~j} = if \bigg[ i =j,~ \frac{1}{2} k (x_i)^2 ,~0 \bigg] \nonumber$
$T_{i,~j} = if \bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg] \nonumber$
Hamiltonian matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display three eigenvalues: m = 1 .. 3
Em =
$\begin{array}{|r|} \hline \ 0.5000 \ \hline \ 1.5000 \ \hline \ 2.5000 \ \hline \end{array}$
Calculate associated eigenfunctions:
k = 1 .. 3
$\psi (k) = eigenvec (H, E_k) \nonumber$
Plot the potential energy and selected eigenfunctions:
For V = axn the virial theorem requires the following relationship between the expectation values for kinetic and potential energy: <T> = 0.5n<V>. The calculations below show the virial theorem is satisfied for the harmonic oscillator for which n = 2.
$\begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ \psi (1)^{T} T \Psi(1) & \psi (1)^{T} V \psi(1) & E_{1} \ \psi (2)^{T} T \Psi(2) & \psi (2)^{T} V \psi(2) & E_{2} \ \psi (3)^{T} T \Psi(3) & \psi (3)^{T} V \psi(3) & E_{3} \end{pmatrix} = \begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ 0.2500 & 0.2500 & 0.5000 \ 0.7500 & 0.7500 & 1.5000 \ 1.2500 & 1.2500 & 2.5000 \end{pmatrix}$
9.10: Numerical Solutions for a Double-Minimum Potential Well
Schrödinger's equation is integrated numerically for a double minimum potential well: $V = bx^4 - cx^2$. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
Increments: n = 100
Integration limits: xmin = -4
xmax = 4
$\Delta = \frac{xmax - xmin}{n-1} \nonumber$
Effective mass: $\mu$ = 1
Constants: b = 1 c = 6
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,~j} = if \bigg[ i =j,~ b(x_i)^4 - c(x_i)^2 ,~0 \bigg] \nonumber$
$T_{i,~j} = if \bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg] \nonumber$
Hamiltonian matrix:
$H = T + V \nonumber$
Calculate eigenvalues: E = sort(eigenvals(H))
Display three eigenvalues: m = 1 .. 5
Em =
$\begin{array}{|r|} \hline \ -6.64272702 \ \hline \ -6.64062824 \ \hline \ -2.45118605 \ \hline \ -2.3155705 \ \hline \ 0.41561275 \ \hline \end{array}$
Calculate selected eigenvectors:
k = 1 .. 4
$\psi (k) = eigenvec (H, E_k) \nonumber$
Display results:
First two even solutions:
First two odd solutions:
9.11: Numerical Solutions for the Quartic Oscillator
Schrödinger's equation is integrated numerically for the first three energy states for the quartic oscillator. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
Increments: n = 100
Integration limits: xmin = -3
xmax = 3
$\Delta = \frac{xmax - xmin}{n-1} \nonumber$
Effective mass: $\mu$ = 1
Force constant: k = 1
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,~j} = if \bigg[ i =j,~k (x_i)^4 ,~0 \bigg] \nonumber$
$T_{i,~j} = if \bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg] \nonumber$
Hamiltonian matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display three eigenvalues: m = 1 .. 3
Em =
$\begin{array}{|r|} \hline \ 0.6680 \ \hline \ 2.3936 \ \hline \ 4.6968 \ \hline \end{array}$
Calculate associated eigenfunctions:
k = 1 .. 3
$\psi (k) = eigenvec (H, E_k) \nonumber$
Plot the potential energy and selected eigenfunctions:
For V = axn the virial theorem requires the following relationship between the expectation values for kinetic and potential energy: <T> = 0.5n<V>. The calculations below show the virial theorem is satisfied for the quartic oscillator, for which n = 4.
$\begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ \psi (1)^{T} T \psi(1) & \psi (1)^{T} V \psi(1) & E_{1} \ \psi (2)^{T} T \psi(2) & \psi (2)^{T} V \psi(2) & E_{2} \ \psi (3)^{T} T \psi(3) & \psi (3)^{T} V \psi(3) & E_{3} \end{pmatrix} = \begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ 0.4453 & 0.2227 & 0.6680 \ 1.5958 & 0.7979 & 2.3936 \ 3.1312 & 1.5656 & 4.6968 \end{pmatrix}$
9.12: Numerical Solutions for the Morse Oscillator
Schrödinger's equation is integrated numerically for the first three energy states for the Morse oscillator. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
n = 300
xmin = -2
xmax = 12
$\Delta = \frac{xmax - xmin}{n-1} \nonumber$
$\mu$ = 1
D = 2
$\beta$ = 1
xe = 0
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,~j} = if \bigg[ i =j,~D [ 1 - exp [ - \beta ( x_i - x_e )]]^2 ,~0 \bigg] \nonumber$
$T_{i,~j} = if \left[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \right] \nonumber$
Hamiltonian matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display three eigenvalues: m = 1 .. 3
Em =
$\begin{array}{|r|} \hline \ 0.8750 \ \hline \ 1.8750 \ \hline \ 2.0596 \ \hline \end{array}$
Calculate associated eigenfunctions:
k = 1 .. 3
$\psi (k) = eigenvec (H, E_k) \nonumber$
Plot the potential energy and selected eigenfunctions:
For $V = ax^n$, the virial theorem requires the following relationship between the expectation values for kinetic and potential energy:
$<T> = 0.5n<V>. \nonumber$
The calculations below show that the virial theorem is not satisfied for the Morse oscillator. The reason is revealed in the following series expansion in $x$: the expansion contains cubic, quartic and higher-order terms in $x$, so no single-power virial relation applies to the Morse oscillator.
$D (1 - exp( - \beta x))^2$ converts to the series $D \beta ^2 x^2 + (-D) \beta ^3 x^3 + \frac{7}{12} D \beta ^4 x^4 + O(x^5)$
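The quoted expansion is easy to check symbolically; a short sympy sketch (an independent check with generic positive $D$ and $\beta$):

import sympy as sp

x, D, beta = sp.symbols("x D beta", positive=True)
V = D * (1 - sp.exp(-beta * x))**2
print(sp.series(V, x, 0, 5))
# D*beta**2*x**2 - D*beta**3*x**3 + 7*D*beta**4*x**4/12 + O(x**5)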
$\begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ \psi (1)^{T} T \psi(1) & \psi (1)^{T} V \psi(1) & E_{1} \ \psi (2)^{T} T \psi(2) & \psi (2)^{T} V \psi(2) & E_{2} \end{pmatrix} = \begin{pmatrix} "Kinetic~Energy" & "Potential~Energy" & "Total~Energy" \ 0.3750 & 0.5000 & 0.8750 \ 0.3754 & 1.4996 & 1.8750 \end{pmatrix}$
9.13: Numerical Solutions for the Lennard-Jones Potential
Merrill (Am. J. Phys. 1972, 40, 138) showed that a Lennard-Jones 6-12 potential with these parameters had three bound states. This is verified by numerical integration of Schrödinger's equation. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
• n = 200
• $x_{min} = 0.75$
• $x_{max} = 3.5$
• $\Delta = \frac{xmax - xmin}{n-1}$
• $\mu$ = 1
• $\sigma$ = 1
• $\varepsilon$ = 100
Numerical integration algorithm:
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,~j} = if \bigg[ i =j,~4 \varepsilon \bigg[ \left( \frac{ \sigma}{x_i} \right)^{12} - \left( \frac{ \sigma}{x_i} \right)^{6} \bigg] ,~0 \bigg] \nonumber$
$T_{i,~j} = if \bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg] \nonumber$
Hamiltonian matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display three eigenvalues: m = 1 .. 4
Em =
$\begin{array}{|r|} \hline \ -66.269 \ \hline \ -22.981 \ \hline \ -4.132 \ \hline \ 1.096 \ \hline \end{array}$
Calculate eigenvectors:
k = 1 .. 3
$\psi (k) = eigenvec (H, E_k) \nonumber$
Display results:
9.14: Numerical Solutions for the Double Morse Potential
Schrödinger's equation is integrated numerically for the first four energy states for the double Morse oscillator. The integration algorithm is taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.
Set parameters:
n = 200
xmin = -10
xmax = 10
$\Delta = \frac{xmax - xmin}{n-1} \nonumber$
$\mu$ = 1
D = 2
$\beta$ = 1
x0 = 1
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
$V_{i,~j} = if \bigg[ i =j,~ D \big[ 1 - exp \big[ - \beta (|x_i| - x_0) \big] \big] ^2 ,~0 \bigg] \nonumber$
$T_{i,~j} = if \bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg] \nonumber$
Hamiltonian matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display four eigenvalues: m = 1 .. 4
Em =
$\begin{array}{|r|} \hline \ 0.8092 \ \hline \ 0.9127 \ \hline \ 1.8284 \ \hline \ 1.8975 \ \hline \end{array}$
Calculate associated eigenfunctions:
k = 1 .. 4
$\psi (k) = eigenvec (H, E_k) \nonumber$
Plot the potential energy and bound state eigenfunctions:
$Vpot_i = V_{i,~i} \nonumber$
9.15: Particle in a Box with an Internal Barrier
Numerical integration of Schrödinger's equation:
Potential energy: $V(x) = \big| _{0~otherwise}^{V_0~if~(x \geq lb) (x \leq rb)} \nonumber$
Given:
$\frac{-1}{2 \mu} \frac{d^2}{dx^2} \psi (x) + V(x) \psi (x) = E \psi (x) \nonumber$
$\psi$(0) = 0
$\psi$'(0) = 0.1
$\psi$ = Odesolve(x, xmax)
Normalize wave function:
$\psi (x) = \frac{ \psi (x)}{ \sqrt{ \int_{0}^{x_{max}} \psi (x)^2 dx}} \nonumber$
Integration limit: xmax = 1
Effective mass: $\mu$ = 1
Barrier height: V0 = 100
Barrier boundaries: lb = 0.45
rb = 0.55
Enter energy guess: E = 15.45
Calculate potential energy: $PE = \int_{0}^{1} V(x) \psi (x)^2 dx$ PE = 4.932 $\int_{0}^{1} \psi (x)^2 dx = 1.00$
Calculate kinetic energy: $KE = E - PE$ KE = 10.518
Ratio of potential energy to total energy: $\frac{PE}{E} = 0.319$
Calculate probability in barrier: $\frac{PE}{V_0} = 0.049$
$P = \int_{lb}^{rb} \psi (x)^2 dx = 0.049 \nonumber$
1. Find the first four energy levels, sketch $\psi^2$ for each state, and fill in the table below. KE, PE and the probability that the electron is in the barrier are calculated above.
$\begin{pmatrix} E & KE & PE & P \ 15.45 & 10.518 & 4.932 & 0.049 \ 20.30 & 19.827 & 0.473 & 0.0047 \ 62.20 & 47.745 & 14.455 & 0.145 \ 80.80 & 78.968 & 1.832 & 0.018 \end{pmatrix} \nonumber$
2. Interpret the results for energy in light of the fact that a 100 Eh (2720 eV) potential barrier of finite thickness exists in the center of the box.
This is an excellent example of quantum mechanical tunneling. For the first four energy states the particle has a finite probability of being found inside the barrier, in spite of the fact that its energy is less than the barrier energy.
3. Explain the obvious bunching of energy states in pairs in terms of the impact of the internal barrier. In other words, why is the probability of being in the potential barrier larger for the n = 1 and 3 states than it is for the n = 2 and 4 states?
The PIB energy levels without an internal barrier are: $E(n) = \frac{ \pi ^2}{2} n^2$
The bunching can be seen by comparing the two energy manifolds. The n = 2 and 4 states have nodes at the middle of the box where the internal barrier is situated. Thus their potential energy does not increase as much as the n = 1 and 3 states which do not have nodes in the barrier.
4. Find the ground state energy for particle masses of 0.5 and 1.5. Record your results in the table below and interpret them.
$\begin{pmatrix} Mass & E & T & V & P \ 0.5 & 23.95 & 14.411 & 9.539 & 0.095 \ 1.0 & 15.45 & 10.518 & 4.932 & 0.049 \ 1.5 & 11.55 & 8.684 & 2.866 & 0.029 \end{pmatrix} \nonumber$
The higher the mass the lower the energy, because in quantum mechanics $E \sim \frac{1}{mass}$. The greater the mass the lower the probability that tunneling will occur, because the deBroglie wavelength is inversely proportional to mass.
5. Find the ground state energy for a m = 1 particle for barrier heights 50 and 150 Eh. Record your results in the table below and interpret them.
$\begin{pmatrix} V_0 & E & T & V & P \ 50 & 11.97 & 7.203 & 4.767 & 0.095 \ 100 & 15.45 & 10.518 & 4.932 & 0.049 \ 150 & 17.32 & 13.024 & 4.296 & 0.029 \end{pmatrix} \nonumber$
The higher the barrier energy the higher the ground-state energy and the lower the tunneling probability.
6. On the basis of your calculations in this exercise describe quantum mechanical tunneling. In your answer you should consider the importance of particle mass, barrier height and barrier width. Perform calculations for widths of 0.05 and 0.15 in atomic units.
Tunneling is inversely proportional to mass, barrier height and barrier width.
$\begin{pmatrix} Width & E & T & V & P & \frac{P}{Width} \ 0.05 & 11.65 & 7.326 & 4.324 & 0.043 & 0.860 \ 0.10 & 15.45 & 10.518 & 4.932 & 0.049 & 0.490 \ 0.15 & 18.35 & 13.317 & 5.033 & 0.050 & 0.333 \end{pmatrix} \nonumber$
9.16: Another Look at the Particle in a Box with an Internal Barrier
The purpose of this tutorial is to explore the impact of the presence of a large (100 Eh), thin (0.10 a0) internal barrier on the solutions to the particle-in-a-box (PIB) problem. Schrödinger's equation is integrated numerically for the first five energy states. (Integration algorithm taken from J. C. Hansen, J. Chem. Educ. Software, 8C2, 1996.)
For the one-bohr PIB the energy eigenvalues are:
m = 1 .. 5
Em = $\frac{m^2 \pi^2}{2}$
$E^{T}$ = (4.935 19.739 44.413 78.957 123.37)
Set parameters:
n = 100
xmin = 0
xmax = 1
$\Delta = \frac{xmax-xmin}{n-1}$
$\mu$ = 1
Vo = 100
lb = .45
rb = .55
Calculate position vector, the potential energy matrix, and the kinetic energy matrix. Then combine them into a total energy matrix.
i = 1 .. n j = 1 .. n xi = xmin + (i - 1) $\Delta$
Potential energy:
$V_{i,~i} = if[ (x_i \geq lb ) (x_i \leq rb ), Vo,~0] \nonumber$
Kinetic energy:
$T_{i,~j} = if \bigg[ i=j, \frac{ \pi ^{2}}{6 \mu \Delta ^{2}}, \frac{ (-1)^{i-j}}{ (i-j)^{2} \mu \Delta^{2}} \bigg] \nonumber$
Hamiltonian matrix: H = T + V
Find eigenvalues: E = sort(eigenvals(H))
Display selected eigenvalues: m = 1 .. 5
Em =
$\begin{array}{|r|} \hline \ 15.011 \ \hline \ 19.589 \ \hline \ 60.453 \ \hline \ 78.268 \ \hline \ 137.903 \ \hline \end{array}$
Calculate selected eigenvectors:
k = 1 .. 4
$\psi (k) = eigenvec (H, E_k) \nonumber$
Display probability distributions and energy level manifold in the presence of the internal potential barrier:
n = 1 and n = 3 states:
n = 2 and n = 4 states:
It is clear from the numeric and graphic display of the energy manifold that the presence of the internal barrier causes a bunching of the energy eigenstates for the four lowest levels. This is frequently called "inversion doubling" because of an identical effect that appears in the analysis of the ammonia umbrella inversion. This gives the impression that a second set of quantized energy levels is created by the internal barrier. However, the correct explanation for this bunching is evident in the display of the four lowest wave functions. The presence of the barrier raises all energy levels relative to the simple PIB, but the n = 2 and n = 4 states have nodes in the barrier, thus reducing the barrier's effect on raising the energy. Thus the odd states are raised in energy more than the even states, causing the bunching.
9.17: Particle in a Box with Multiple Internal Barriers
Integration limit: xmax = 1
Effective mass: $\mu$ = 1
Barrier height: V0 = 100
Potential energy:
$V(x) = \bigg|^{V_0~if~(x \geq .185)(x \leq .215) + (x \geq .385)(x \leq .415) + (x \geq .585) (x \leq .615) + (x \geq .785) (x \leq .815)}_{0~otherwise} \nonumber$
Numerical integration of Schrödinger's equation:
Given
$\frac{-1}{2 \mu} \frac{d^2}{dx^2} \psi (x) + V(x) \psi (x) = E \psi (x) \nonumber$
$\psi (0) = 0 \nonumber$
$\psi ' (0) = 0.1 \nonumber$
$\psi = Odesolve (x, x_{max}) \nonumber$
Normalize wave function:
$\psi (x) = \frac{ \psi (x)}{ \sqrt{ \int_{0}^{x_{max}} \psi (x)^2 dx}} \nonumber$
Enter energy guess: E = 18.85
Calculate kinetic energy:
$T = \int_{0}^{1} \psi (x) \frac{-1}{2} \frac{d^2}{dx^2} \psi (x) dx = 5.926 \nonumber$
Calculate potential energy:
$V = E - T = 12.924 \nonumber$
Tunneling probability:
$\frac{V}{V_0} \times 100 = 12.924 \% \nonumber$
9.18: Particle in an Infinite Spherical Potential Well
Reduced mass: $\mu$ = 1
Angular momentum: L = 2
Integration limit: rmax = 1
Solve Schrödinger's equation numerically. Use Mathcad's ODE solve block:
Given
$\frac{-1}{2 \mu} \frac{d^2}{dr^2} \psi (r) - \frac{1}{r \mu} \frac{d}{dr} \psi (r) + \bigg[ \frac{L (L + 1)}{2 \mu r^2} \bigg] \psi (r) = E \psi (r)~~~ \psi (.0001) = .1~~~ \psi '(.0001) = 0 \nonumber$
$\psi = Odesolve (r, r_{max}) \nonumber$
Normalize the wavefunction:
$\psi (r) = \left( \int_{0}^{r_{max}} \psi (r)^2 4 \pi r^2 dr \right) ^{ \frac{-1}{2}} \psi (r) \nonumber$
Energy guess: E = 16.51
r = 0, .001 .. rmax
9.19: Numerical Solutions for the Two-Dimensional Harmonic Oscillator
Reduced mass: $\mu$ = 1
Angular momentum: L = 2
Integration limit: rmax = 5
Force constant: k = 1
Energy guess: E = 3
Solve Schrödinger's equation numerically. Use Mathcad's ODE solve block:
Given
$\frac{-1}{2 \mu} \frac{d^2}{dr^2} \psi (r) - \frac{1}{2 \mu r} \frac{d}{dr} \psi (r) + \left( \frac{L^2}{2 \mu r^2} + \frac{1}{2} kr^2\right) \psi (r) = E \psi (r)~~~ \psi (.001) = 1~~~ \psi '(.001) = 0.1 \nonumber$
$\psi = Odesolve (r, r_{max}, .001) \nonumber$
$\psi (r) = \left( \int_{0}^{r_{max}} \psi (r)^2 4 \pi r^2 dr \right) ^{ \frac{-1}{2}} \psi (r) \nonumber$
9.20: Numerical Solutions for the Three-Dimensional Harmonic Oscillator
Reduced mass: $\mu$ = 1
Angular momentum: L = 0
Integration limit: rmax = 6
Force constant: k = 1
Solve Schrödinger's equation numerically. Use Mathcad's ODE solve block:
Given
$\frac{-1}{2 \mu} \frac{d^2}{dr^2} \psi (r) - \frac{1}{r \mu} \frac{d}{dr} \psi (r) + \bigg[ \frac{L(L +1)}{2 \mu r^2} + \frac{1}{2} kr^2 \bigg] \psi (r) = E \psi (r)~~~ \psi (.001) = 1~~~ \psi '(.001) = 0.1 \nonumber$
$\psi = Odesolve (r, r_{max}) \nonumber$
$\psi (r) = \left( \int_{0}^{r_{max}} \psi (r)^2 4 \pi r^2 dr \right) ^{ \frac{-1}{2}} \psi (r) \nonumber$
Energy guess: E = 7.5
9.21: Numerical Solutions for the Hydrogen Atom Radial Equation
Reduced mass: $\mu$ = 1
Angular momentum: L = 0
Integration limit: rmax = 18
Nuclear charge: Z = 1
Solve Schrödinger's equation numerically. Use Mathcad's ODE solve block:
Given
$\frac{-1}{2 \mu} \frac{d^2}{dr^2} \psi (r) - \frac{1}{r \mu} \frac{d}{dr} \psi (r) + \bigg[ \frac{L( L + 1)}{2 \mu r^2} - \frac{Z}{r} \bigg] \psi (r) = E \psi (r)~~~ \psi (.0001) = .1~~~ \psi '(.0001) = 0.1 \nonumber$
$\psi = Odesolve (r, r_{max}) \nonumber$
Normalize wave function:
$\psi (r) = \left( \int_{0}^{r_{max}} \psi (r)^2 4 \pi r^2 dr \right) ^{ \frac{-1}{2}} \psi (r) \nonumber$
Energy guess:
E = -.125 r = 0, .001 .. rmax
Calculate average position:
$\int_{0}^{r_{max}} \psi (r) r \psi (r) 4 \pi r^2 dr = 5.997 \nonumber$
Calculate kinetic energy:
$\int_{0}^{r_{max}} \psi (r) \bigg[ \frac{-1}{2 \mu} \frac{d^2}{dr^2} \psi (r) - \frac{1}{r \mu} \frac{d}{dr} \psi (r) + \bigg [ \frac{L(L + 1)}{2 \mu r^2} \bigg] \psi (r) \bigg] 4 \pi r^2 dr = 0.125 \nonumber$
Calculate potential energy:
$\int_{0}^{r_{max}} \psi (r) \frac{-Z}{r} \psi (r) 4 \pi r^2 dr = -0.25 \nonumber$
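Since E = -0.125 corresponds to the 2s state, these numbers can be cross-checked against the analytic radial function $R(r) \propto (2 - r) e^{-r/2}$; a short sympy sketch (an independent check, not part of the worksheet):

import sympy as sp

r = sp.symbols("r", positive=True)
R = (2 - r) * sp.exp(-r / 2)          # unnormalized 2s radial function
norm = sp.integrate(R**2 * r**2, (r, 0, sp.oo))
r_avg = sp.integrate(r * R**2 * r**2, (r, 0, sp.oo)) / norm
V_avg = sp.integrate(-R**2 * r, (r, 0, sp.oo)) / norm   # <-Z/r> with Z = 1
print(r_avg, V_avg)   # 6 and -1/4, matching the numerical results above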
9.22: Numerical Solutions for a Modified Harmonic Potential
This tutorial deals with the following potential function:
$V(x, d) = \bigg| _{ \infty ~otherwise}^{ \frac{1}{2} k(x-d)^2~if~x \geq 0} \nonumber$
If d = 0 we have the harmonic oscillator on the half-line with eigenvalues 1.5, 3.5, 5.5, ... for k = $\mu$ = 1. For large values of d we have the full harmonic oscillator problem displaced in the x-direction by d with eigenvalues 0.5, 1.5, 2.5, ... for k = $\mu$ = 1. For small to intermediate values of d the potential can be used to model the interaction of an atom or molecule with a surface.
Integration limit: xmax = 10
Effective mass: $\mu$ = 1
Force constant: k = 1
Potential energy minimum: d = 5
Potential energy:
$V(x,d) = \frac{k}{2} (x-d)^2 \nonumber$
Integration algorithm:
Given
$\frac{-1}{2 \mu} \frac{d^{2}}{dx^{2}} \psi (x) + V(x,~d) \psi (x) = E \psi (x)~~~ \psi (0) = 0~~~ \psi '(0) = 0.1~~~ \psi = Odesolve(x, x_{max}) \nonumber$
Normalize wavefunction:
$\psi (x) = \frac{ \psi (x)}{ \sqrt{ \int_{0}^{x_{max}} \psi (x)^2 dx}} \nonumber$
Energy guess: E = 0.5
Calculate average position:
$X_{avg} = \int_{0}^{x_{max}} \psi (x) x \psi (x) dx = 5 \nonumber$
Calculate potential and kinetic energy:
$V_{avg} = \int_{0}^{x_{max}} \psi (x) V(x,d) \psi (x) dx = 0.25 \nonumber$
$T_{avg} = E - V_{avg} = 0.25 \nonumber$
Exercises:
• For d = 0, k = $\mu$ = 1 confirm that the first three energy eigenvalues are 1.5, 3.5 and 5.5 Eh. Start with xmax = 5, but be prepared to adjust to larger values if necessary. xmax is effectively infinity.
• For d = 5, k = $\mu$ = 1 confirm that the first three energy eigenvalues are 0.5, 1.5 and 2.5 Eh. Start with xmax = 10, but be prepared to adjust to larger values if necessary.
• Determine and compare the virial theorem for the exercises above.
• Calculate the probability that tunneling is occurring for the ground state for the first two exercises. (Answers: 0.112, 0.157)
10.01: Trial Wavefunctions for Various Potentials
This is a list of functions and the potentials for which they would be suitable trial wave functions in a variation method calculation.
$\psi (x, \alpha) = 2 \cdot \alpha ^{ \frac{3}{2}} \cdot x \cdot exp(- \alpha \cdot x)$
$\psi (x, \alpha) = ( \frac{ 128 \cdot \alpha ^{3}}{ \pi})^{ \frac{1}{4}} \cdot x \cdot exp(- \alpha \cdot x^{2})$
• Particle in a gravitational field V(z) = mgz (z = 0 to ∞)
• Particle confined by a linear potential V(x) = ax (x = 0 to ∞)
• One-dimensional atoms and ions V(x) = -Z/x (x = 0 to ∞)
• Particle in semi-infinite potential well V(x) = if[ x $\leq a, 0, b$] (x = 0 to ∞)
• Particle in semi-harmonic potential well V(x) = kx2 (x = 0 to ∞)
$\psi (x, \alpha) = ( \frac{ 2 \cdot \alpha}{ \pi})^{ \frac{1}{4}} \cdot exp(- \alpha \cdot x^{2})$
• Quartic oscillator V(x) = bx4 (x = -∞ to ∞)
• Particle in the finite one-dimensional potential well V(x) = if[(x $\geq$ -1) $\cdot$ (x $\leq$ 1), 0, 2] (x = -∞ to ∞)
• 1D Hydrogen atom ground state
• Harmonic oscillator ground state
• Particle in V(x) = | x | potential well
$\psi (x, \alpha ) = \sqrt{ \alpha} \cdot exp(- \alpha \cdot |x|)$
• This wavefunction is discontinuous at x = 0, so the following calculations must be made in momentum space
• Dirac hydrogen atom V(x) = $- \delta (x)$
• Harmonic oscillator ground state
• Particle in V(x) = | x | potential well
• Quartic oscillator V(x) = bx4 (x = -∞ to ∞)
$\psi (x) = \sqrt{30} \cdot x \cdot (1-x)$
$\Gamma (x) = \sqrt{105} \cdot x \cdot (1-x)^{2}$
$\Theta (x) = \sqrt{105} \cdot x^{2} \cdot (1-x)$
• Particle in a one-dimensional, one-bohr box
• Particle in a slanted one-dimensional box
• Particle in a semi-infinite potential well (change 1 to variational parameter)
• Particle in a gravitational field (change 1 to variational parameter)
$\Phi (r, a) = (a-r)$
$\Phi (r, a) = (a - r)^{2}$
$\Phi (r, a) = \frac{1}{ \sqrt{2 \cdot \pi \cdot a}} \cdot \frac{ \sin \frac{ \pi \cdot r}{a}}{r}$
• Particle in a infinite spherical potential well of radius a
• Particle in a finite spherical potential well (treat a as a variational parameter)
$\psi (r, \beta) = ( \frac{2 \cdot \beta}{ \pi})^{ \frac{3}{4}} \cdot exp (- \beta \cdot r^{2})$
• Particle in a finite spherical potential well
• Hydrogen atom ground state
• Helium atom ground state
$\psi (r, \beta) = \sqrt{ \frac{3 \cdot \beta ^{3}}{ \pi ^{3}}} \cdot sech( \beta \cdot r)$
• Particle in a finite potential well
• Hydrogen atom ground state
• Helium atom ground state
$\psi (x, \beta) = \sqrt{ \frac{ \beta}{2}} \cdot sech( \beta \cdot x)$
• Harmonic oscillator
• Quartic oscillator
• Particle in a gravitational field
• Particle in a finite potential well
$\psi (x, \alpha ) = \sqrt{ \frac{12 \alpha ^{3}}{ \pi ^{2}}} \cdot x \cdot sech( \alpha \cdot x)$
• Particle in a semi-infinite potential well
• Particle in a gravitational field
• Particle in a linear potential well (same as above) V(x) = ax (x = 0 to ∞)
• 1D hydrogen atom or one-electron ion
Some finite potential energy wells.
V(x) = if[(x $\geq$ -1) $\cdot$ (x $\leq$ 1), 0, V0]
V(x) = if[(x $\geq$ -1) $\cdot$ (x $\leq$ 1), 0, |x| - 1]
V(x) = if[(x $\geq$ -1) $\cdot$ (x $\leq$ 1), 0, $\sqrt{|x| - 1}$]
Some semi-infinite potential energy wells.
V(x) = if (x $\leq$ a, 0, b)
V(x) = if[(x $\leq$ 2), 0, $\frac{5}{x}$]
V(x) = if[(x $\leq$ 2), 0, (x - 2)]
V(x) = if[(x $\leq$ 2), 0, $\sqrt{x-2}$]
10.02: Energy Minimization - Four Methods Using Mathcad
Using $\psi ( \alpha , r) = \sqrt{ \frac{ \alpha ^{3}}{ \pi}} exp( - \alpha r)$ as a trial wave function for the helium atom electrons leads to the following energy expression in terms of the variational parameter, α.
$E( \alpha ) = \alpha ^{2} - 4 \alpha + \frac{5}{8} \alpha$
The first term is electron kinetic energy, the second electron‐nucleus potential energy and the final term electron‐electron potential energy.
Mathcad provides four methods for energy minimization with respect to $α$. The second and third methods require a seed value for $α$.
First method
$\frac{d}{d \alpha} E( \alpha ) = 0 \big|_{float, 5}^{solve, \alpha} \rightarrow 1.6875 \nonumber$
$E ( \alpha ) = -2.8477$
Second method
α := 1 Given $\frac{d}{d \alpha} E ( \alpha ) = 0$ $\alpha := Find ( \alpha )$ α = 1.6875 $E ( \alpha ) = -2.8477$
Third method
α := 1 α := Minimize(E, α) α = 1.6875 $E ( \alpha ) = -2.8477$
Fourth method
Clear memory of α and Z: α := α Z := Z
$En ( \alpha ,~Z) = \alpha ^{2} - 2 Z \alpha + \frac{5}{8} \alpha$ $\frac{d}{d \alpha} En( \alpha ,~Z) = 0~solve, \alpha \rightarrow Z - \frac{5}{16}$
$En ( \alpha ,~Z) = \alpha ^{2} - 2 Z \alpha + \frac{5}{8} \alpha~substitute,~ \alpha = Z - \frac{5}{16} \rightarrow - \frac{(16 Z - 5)^{2}}{256}$
$En ( \alpha ,~2) = -2.8477$ $En ( \alpha ,~3) = -7.2227$ $En ( \alpha ,~4) = -13.5977$
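The same minimization is easily reproduced outside Mathcad; a short SciPy sketch (an illustration, not one of the four methods above):

from scipy.optimize import minimize_scalar

E = lambda a: a**2 - 4*a + (5/8)*a      # helium variational energy
res = minimize_scalar(E)
print(res.x, res.fun)                   # alpha = 27/16 = 1.6875, E = -2.8477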
Two variables: a molecular orbital calculation yields the following result for the energy of the hydrogen molecule ion as a function of the internuclear separation and the orbital decay constant.
$1s_{a} = \frac{ \alpha ^{3}}{ \pi} exp(- \alpha r_{a})$
$1s_{b} = \frac{ \alpha ^{3}}{ \pi} exp(- \alpha r_{b})$
$S_{ab} = \int 1s_{a} 1s_{b} d \tau$
$\psi_{mo} = \frac{1s_{a} + 1s_{b}}{ \sqrt{2+2 S_{ab}}}$
$E( \alpha, R) = \frac{- \alpha^{2}}{2} + \frac{[ \alpha^{2} - \alpha - \frac{1}{R} + \frac{1 + \alpha R}{R} exp(-2 \alpha R) + \alpha ( \alpha - 2) (1+ \alpha R) exp(- \alpha R)]}{[1 + exp(- \alpha R) (1+ \alpha R + \frac{ \alpha^{2} R^{2}}{3})]} + \frac{1}{R} \nonumber$
α = 1 R = 1 $\begin{pmatrix} \alpha \ R \end{pmatrix}$ = Minimize(E, α, R) $\begin{pmatrix} \alpha \ R \end{pmatrix}$ = $\begin{pmatrix} 1.2380 \ 2.0033 \end{pmatrix}$ E(α, R) = -0.5865
α := 1 Energy := -2 Given Energy = E(α, R) $\frac{d}{d \alpha}$ E(α, R) = 0 Energy(R) := Find(α, Energy)
R = .2, .25 .. 10 $T(R) = -Energy(R)_{1} - R \frac{d}{dR} Energy(R)_{1}$ $V(R) = 2 Energy(R)_{1} + R \frac{d}{dR} Energy(R)_{1}$
10.03: The Variation Theorem in Dirac Notation
The recipe for calculating the expectation value for energy using a trial wave function is,
$\langle E \rangle = \langle \psi | \hat{H} | \psi \rangle \label{1}$
Now suppose the normalized eigenfunctions of $\hat{H}$ are denoted by $|i \rangle$, with eigenvalues $\varepsilon_{i}$. Then,
$\hat{H} |i \rangle = \varepsilon_{i} |i \rangle = |i \rangle \varepsilon _{i} \label{2}$
Next we write $| \psi \rangle$ as a superposition of the eigenfunctions $|i \rangle$,
$| \psi \rangle = \sum_{i} |i \rangle \langle i| \psi \rangle \nonumber$
and substitute it into Equation \ref{1}.
$\langle E \rangle = \sum_{i} \langle \psi | \hat{H} | i \rangle \langle i | \psi \rangle \nonumber$
Making use of Equation \ref{2} yields,
$\langle E \rangle = \sum_{i} \langle \psi |i \rangle \varepsilon_{i} \langle i | \psi \rangle \nonumber$
After rearrangement we have,
$\langle E \rangle = \sum_{i} \varepsilon_{i} | \langle i| \psi \rangle |^{2} \nonumber$
However, $| \langle i | \psi \rangle |^{2}$ is the probability that $\varepsilon_{i}$ will be observed, $p_i$.
$\langle E \rangle = \sum_{i} \varepsilon_{i} p_{i} \geq \varepsilon_{0} \nonumber$
Because the $p_i$ are non-negative and sum to one, replacing every $\varepsilon_{i}$ by the lowest eigenvalue $\varepsilon_{0}$ can only lower the sum. Thus, the expectation value obtained using the trial wave function is an upper bound to the true ground-state energy. In other words, in a valid quantum mechanical calculation you can't get an energy lower than the true ground-state energy.
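The bound is easy to illustrate numerically: for any Hermitian matrix and any normalized trial vector, the expectation value is never below the smallest eigenvalue. A minimal numpy demonstration (a sketch with a random Hamiltonian):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                       # random Hermitian "Hamiltonian"
psi = rng.normal(size=6)
psi /= np.linalg.norm(psi)              # normalized trial vector
print(psi @ H @ psi >= np.linalg.eigvalsh(H)[0])   # always True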
10.04: A Rudimentary Model for Alpha Particle Decay
Simple models for the potential energy experienced by an alpha particle in a nucleus have the form shown below. In the interest of mathematical simplicity we will not attempt to model any particular alpha emitter, but just try to capture the essentials of the quantum mechanical tunneling mechanism for alpha decay that was formulated by Gamov, Gurney and Condon in 1928.
$V(x) := if[(x \leq 2), 0, \frac{5}{x} ] \nonumber$
The attractive nuclear interaction (strong nuclear force) is represented by a well of depth 2.5 and range 2 in atomic units. The repulsive Coulomb interaction becomes dominant as the strong nuclear force fades for $x$ values greater than 2. The trapped particle is assumed to have unit mass. A variational calculation will be carried out using the trial wavefunction given below. It is also possible to solve Schrödinger's equation for alpha decay by numerical integration.
Normalized trial wavefunction with variational parameter $α$, a decay constant:
$\psi (x, \alpha ) := 2 \alpha^{ \frac{3}{2}} x e^{- \alpha x} \nonumber$
$\int_{0}^{ \infty} \psi (x, \alpha )^{2} dx |_{simplify}^{assume,~ \alpha > 0} \rightarrow 1 \nonumber$
Set up the variational energy integral:
$E ( \alpha ) := \int_{0}^{ \infty} \psi (x, \alpha ) \left( - \frac{1}{2} \right) \frac{d^{2}}{dx^{2}} \psi (x, \alpha ) dx + \int_{0}^{ \infty} V(x) \psi (x, \alpha )^{2} dx \nonumber$
Next the energy is minimized with respect to $α$ numerically. This method requires a seed value for $α$.
α := 3 α := Minimize(E, α) α = 1.003 E(α) = 0.958
In what follows the results of the variational calculation will be displayed graphically and interpreted.
Display the results of the variational calculation:
Calculate the probability that the particle is in the classically forbidden region. This is the region where the particle's total energy is less than the potential energy.
Because the energy of the particle is 0.958, the classically forbidden region extends from $x = 2$ to $x = 5.22$.
$\frac{5}{x} = 0.958 |_{solve,~x}^{float,~3} \rightarrow 5.22 \nonumber$
Probability in classically forbidden region:
$\int_{2}^{5.22} \psi (x, \alpha )^{2} dx = 0.234 \nonumber$
Calculate the probability that the particle has tunneled beyond the classically forbidden region.
$\int_{5.22}^{ \infty} \psi (x, \alpha )^{2} dx \approx 1.879 \times 10^{-3} \nonumber$
Calculate the probability that the particle is still in the nucleus.
$\int_{0}^{2} \psi (x, \alpha )^{2} dx \approx 0.764 \nonumber$
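These three probabilities are simple quadratures over the optimized trial function; a SciPy sketch (a check using the α = 1.003 found above):

import numpy as np
from scipy.integrate import quad

alpha = 1.003
psi2 = lambda x: (2 * alpha**1.5 * x * np.exp(-alpha * x))**2

print(quad(psi2, 0, 2)[0])          # ~0.764, still inside the nucleus
print(quad(psi2, 2, 5.22)[0])       # ~0.234, classically forbidden region
print(quad(psi2, 5.22, np.inf)[0])  # ~0.002, tunneled past the barrier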
10.05: Variational Method for a Particle in a Finite Potential Well
Definite potential energy: $V(x) := if [(x \geq -1) \cdot (x \leq 1), 0, 2]$
Display potential energy:
Choose trial wavefunction: $\psi (x, \beta ) := ( \frac{2 \cdot \beta}{ \pi} )^{ \frac{1}{4}} \cdot exp(- \beta \cdot x^{2})$
Demonstrate that the trial wavefunction is normalized.
$\int_{- \infty}^{ \infty} \psi (x, \beta )^{2} dx$ assume, $\beta > 0 \rightarrow 1$
Evaluate the variational integral:
$E( \beta ) := \int_{- \infty}^{ \infty} \psi (x, \beta ) \cdot - \frac{1}{2} \cdot \frac{d^{2}}{dx^{2}} \psi (x, \beta ) dx ... |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{2} \cdot \beta + 2 - 2 \cdot erf(2^{ \frac{1}{2}} \cdot \beta^{ \frac{1}{2}})$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ := 1 $\beta$ := Minimize(E, $\beta$) $\beta$ = 0.678 E( $\beta$) = 0.538
Display wavefunction in the potential well and compare result with the exact energy, 0.530 Eh.
Calculate the fraction of time tunneling is occurring.
$2 \cdot \int_{1}^{ \infty} \psi (x, \beta )^{2} dx = 0.1$
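Because the variational energy has the closed form derived above, the minimization is easy to verify; a short SciPy sketch (an independent check):

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erf

E = lambda b: b/2 + 2 - 2*erf(np.sqrt(2*b))   # E(beta) from the worksheet
res = minimize_scalar(E, bounds=(0.01, 5), method="bounded")
print(res.x, res.fun)                         # beta ~ 0.678, E ~ 0.538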
10.06: Variation Method for a Particle in a Symmetric 1D Potential Well
Definite potential energy: $V(x) := if[(x \geq -1) \cdot (x \leq 1), 0, \sqrt{ |x| - 1}]$
Display potential energy:
Choose trial wave function: $\Psi (x, \beta ) := ( \frac{ 2 \cdot \beta}{ \pi} )^{ \frac{1}{4}} \cdot exp (- \beta \cdot x^{2})$
Evaluate the variational integral:
$E ( \beta ) := \int_{- \infty}^{ \infty} \Psi (x, \beta) \cdot - \frac{1}{2} \cdot \frac{d^{2}}{dx^{2}} \Psi (x, \beta ) dx + \int_{- \infty}^{ \infty} V(x) \cdot \Psi (x, \beta )^{2} dx$
Minimize the energy integral with respect to the variational parameter, $\beta$.
β := .2 β := Minimize(E, β) β = 0.363 E(β) = 0.313
Display wave function in the potential well.
Calculate the probability that the particle is in the potential barrier.
$2 \cdot \int_{1}^{ \infty} \Psi(x, \beta )^{2} dx = 0.228$
Define quantum mechanical tunneling.
Tunneling occurs when a quon (a quantum mechanical particle) has a finite probability of being in a nonclassical region; in other words, a region in which the total energy is less than the potential energy.
Calculate the probability that tunneling is occurring.
$|x| - 1 = 0.313^{2} |_{solve,~x}^{float,~4} \rightarrow {\begin{pmatrix} 1.098 \ -1.098 \end{pmatrix}}$
$2 \cdot \int_{1.098}^{ \infty} \Psi (x, \beta )^{2} dx = 0.186$
Calculate the kinetic and potential energy contributions to the total energy.
Kinetic energy:
$\int_{- \infty}^{ \infty} \Psi (x, \beta ) \cdot - \frac{1}{2} \cdot \frac{d^{2}}{dx^{2}} \Psi (x, \beta ) dx = 0.182$
Potential energy:
$\int_{- \infty}^{ \infty} V(x) \cdot \Psi (x, \beta )^{2} dx = 0.131$
10.07: Variation Method for the Rydberg Potential
Approximate Methods: The Rydberg Potential
For unit mass the Rydberg potential function has the following energy operator in atomic units.
• Kinetic energy operator: $- \dfrac{1}{2} \frac{d^{2}}{dx^{2}} \blacksquare \nonumber$
• Potential energy operator: $V(x) = 2 -2 (1 + x) exp(-x) \nonumber$
Limits of integration: xmin := -3 xmax := 5 $\int_{x_{min}}^{x_{max}} \blacksquare ~dx$
Display potential energy:
Suggested trial wave function:
$\psi (x, \beta ) = ( \frac{2 \beta}{ \pi})^ \frac{1}{4} exp (- \beta x^{2}) \nonumber$
$\int_{- \infty}^{ \infty} \psi (x, \beta )^{2} dx~assume,~ \beta > 0 \rightarrow 1 \nonumber$
Evaluate the variational energy integral.
$E( \beta ) := \int_{x_{min}}^{x_{max}} \psi (x, \beta ) - \frac{1}{2} \frac{d^{2}}{dx^{2}} \psi (x, \beta ) dx + \int_{x_{min}}^{x_{max}} \psi (x, \beta ) V(x) \psi (x, \beta ) dx \nonumber$
Minimize the energy with respect to the variational parameter $\beta$ and report its optimum value and the ground-state energy.
β := 1 β := Minimize(E, β) β := 0.86327 E(β) = 0.789456
Plot the optimum wave function and the potential energy on the same graph.
Numerical Solution for the Rydberg Potential
Compare the variational result to energy obtained by numerically integrating Schrödinger's equation for the Rydberg potential. For all practical purposes the numerical solution can be considered to be exact.
Numerical integration of Schrödinger's equation:
Given:
$\frac{-1}{2} \frac{d^{2}}{dx^{2}} \Phi (x) + V(x) \Phi (x) = Energy \Phi (x) \nonumber$
with $\Phi (x_{min}) = 0$ and $\Phi '(x_{min}) = 0.1$.
$\Phi = Odesolve(x, x_{max}) \nonumber$
Normalize wave function:
$\Phi (x) := \frac{ \Phi (x)}{ \sqrt{ \int_{x_{min}}^{x_{max}} \Phi (x)^{2} dx}} \nonumber$
Enter energy guess: Energy = 0.64752
Compare the variational and numerical solutions for the Rydberg potential by putting them on the same graph.
10.08: Variation Method for the Quartic Oscillator
Approximate Methods: The Quartic Oscillator
For unit mass the quartic oscillator has the following energy operator in atomic units.
$H = - \frac{1}{2} \frac{d^{2}}{dx^{2}} \blacksquare + kx^{4} \blacksquare$ $\int_{- \infty}^{ \infty} \blacksquare dx$
Suggested trial wavefunction: $\psi (x; \beta ) := ( \frac{2 \beta}{ \pi})^{ \frac{1}{4}} exp( - \beta x^{2})$
Demonstrate that the wavefunction is normalized.
$\int_{- \infty}^{ \infty} \psi (x; \beta )^{2} dx~assume,~ \beta > 0 \rightarrow 1 \nonumber$
Evaluate the variational energy integral.
$E( \beta ) := \int_{- \infty}^{ \infty} \psi (x, \beta ) - \frac{1}{2} \frac{d^{2}}{dx^{2}} \psi (x, \beta ) dx + \int_{- \infty}^{ \infty} \psi (x, \beta ) x^{4} \psi (x, \beta ) dx |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{16} \frac{8 \beta ^{3} + 3}{ \beta ^{2}} \nonumber$
Minimize the energy with respect to the variational parameter $\beta$ and report its optimum value and the ground-state energy.
β := 1 β := Minimize(E, β) β = 0.90856 E(β) = 0.68142
Plot the optimum wavefunction and the potential energy on the same graph.
Calculate the classical turning point and the probability that tunneling is occurring.
\begin{align} x_{ctp} &= 0.68142^{ \frac{1}{4}} \[4pt] &= 0.90856 \end{align} \nonumber
$2 \int_{x_{ctp}}^{ \infty} \psi (x, \beta )^{2} dx \approx 0.083265 \nonumber$
Compare the variational result to energy obtained by numerically integrating Schrödinger's equation for the quartic oscillator using the numerical integration algorithm provided below.
Numerical Solutions for Schrödinger's Equation
Integration limit: xmax := 3 Effective mass: μ := 1 Force constant: k := 1
Potential energy: $V(x) := kx^{4}$
Numerical integration of Schrödinger's equation:
Given
$\frac{-1}{2 \mu} \frac{d^{2}}{dx^{2}} \Phi (x) + V(x) \Phi (x) = energy \Phi (x)$
$\Phi (-x_{max}) = 0$
$\Phi '(-x_{max}) = 0.1$
$\Phi := Odesolve (x, x_{max})$
Normalize wavefunction: $\Phi (x) := \frac{ \Phi (x)}{ \sqrt{ \int_{-x_{max}}^{x_{max}} \Phi (x) ^{2} dx}}$
Enter energy guess: Energy = 0.6679864
Compare the variational and numerical solutions for the quartic oscillator by putting them on the same graph.
10.09: Momentum-Space Variation Method for the Quartic Oscillator
For unit mass the quartic oscillator has the following energy operator in atomic units in coordinate space.
$H = - \frac{1}{2} \frac{d^{2}}{dx^{2}} \blacksquare + x^{4} \blacksquare \nonumber$
Suggested trial wavefunction:
$\psi (x, \beta ) = ( \frac{2 \beta}{ \pi})^{ \frac{1}{4}} exp( - \beta x^{2}) \nonumber$
Demonstrate that the wavefunction is normalized.
$\int_{- \infty}^{ \infty} \psi (x, \beta )^{2} dx~assume, \beta >0 \rightarrow 1 \nonumber$
Fourier transform the coordinate wavefunction into the momentum representation.
$\Phi (p, \beta ) = \frac{1}{ \sqrt{2 \pi}} \int_{- \infty}^{ \infty} exp (-i p x) \psi (x, \beta ) dx |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{2} \frac{2^{ \frac{3}{4}}}{ \pi ^{ \frac{1}{4}}} \frac{ e^{ \frac{-1}{4} \frac{p^{2}}{ \beta}}}{ \beta ^{ \frac{1}{4}}} \nonumber$
Demonstrate that the momentum wavefunction is normalized.
$\int_{- \infty}^{ \infty} \overline{ \Phi (p, \beta )} \Phi (p, \beta ) dp assume,~ \beta > 0 \rightarrow 1 \nonumber$
The quartic oscillator energy operator in momentum space:
$H = \frac{p^{2}}{2} \blacksquare + \frac{d^{4}}{dp^{4}} \blacksquare \nonumber$
Evaluate the variational energy integral.
$E ( \beta ) = \int_{- \infty}^{ \infty} \overline{ \Phi(p, \beta )} \frac{p^2}{2} \Phi (p, \beta ) dp + \int_{- \infty}^{ \infty} \overline{ \Phi (p, \beta )} \frac{d^{4}}{dp^4} \Phi (p, \beta ) dp |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{16} \frac{8 \beta ^{3} + 3}{ \beta ^{2}} \nonumber$
Minimize the energy with respect to the variational parameter $\beta$ and report its optimum value and the ground-state energy.
$\beta := 1$ $\beta := Minimize (E, \beta )$ $\beta = 0.90856$ $E ( \beta ) = 0.68142$
Plot the coordinate and momentum wavefunctions and the potential energy on the same graph.
These results demonstrate the uncertainty principle. For the harmonic potential, $x^2/2$, the coordinate and momentum wavefunctions are identical. Compared to the harmonic potential, the quartic potential, $x^4$, constrains the spatial wavefunction, leading to less uncertainty in position. The uncertainty principle therefore requires an increase in the momentum uncertainty. This is clearly revealed in the graph above.
10.10: Variation Method for a Particle in a Gravitational Field
The particle of unit mass in a gravitational field for which g = 1 has the energy operator shown below.
$- \frac{1}{2} \frac{d^2}{dz^2} \blacksquare + z \blacksquare$
$\int_{0}^{ \infty} \blacksquare dz$
The following trial wave function is used for this problem:
$\Psi ( \alpha, z) := 2 ( \frac{2 \alpha}{ \pi ^{ \frac{1}{3}}})^{ \frac{3}{4}} z~exp ( - \alpha z^{2})$
Determine whether or not the wave function is normalized.
$\int_{0}^{ \infty} \Psi ( \alpha , z)^{2} dz |_{simplify}^{assume,~ \alpha > 0} \rightarrow 1$
Evaluate the variational energy integral.
$E ( \alpha ) := \int_{0}^{ \infty} \Psi ( \alpha , z) \left( - \frac{1}{2} \frac{d^2}{dz^2} \right) \Psi ( \alpha , z) dz + \int_{0}^{ \infty} z \Psi ( \alpha , z)^{2} dz |_{simplify}^{assume,~ \alpha >0} \rightarrow \frac{1}{2 \pi ^{ \frac{1}{2}}} \frac{3 \pi ^{ \frac{1}{2}} \alpha ^{2} + 2 (2)^{ \frac{1}{2}} \alpha ^{ \frac{1}{2}}}{ \alpha}$
Minimize the energy with respect to the variational parameter $\alpha$ and report its optimum value and the ground‐state energy.
$\alpha := 1$ $\alpha := Minimize(E, \alpha )$ $\alpha = 0.4136$ $E( \alpha ) = 1.8611$ $E_{exact} := 1.8558$
Plot the wave function with the distance of the particle from the surface on the vertical axis.
Find that distance below which there is a 90% probability of finding the particle.
$a := 1$
Given $\int_{0}^{a} \Psi ( \alpha , z)^{2} dz = .90$
Find (a) = 1.9440
Find the most probable value of the position of the particle from the surface.
$\frac{d}{dz} \Psi (0.4136, z) = 0 |_{float,~3}^{solve,~z} \rightarrow {\begin{pmatrix} -1.10 \ 1.10 \end{pmatrix}}$
Calculate the probability that the particle will be found below the most probable distance from the surface.
$\int_{0}^{1.10} \Psi ( \alpha , z)^{2} dz = 0.4279$
Calculate the probability that tunneling is occurring: $\int_{1.861}^{ \infty} \Psi ( \alpha , z)^{2} dz = 0.1256$
Kinetic energy: $\int_{0}^{ \infty} \Psi ( \alpha , z) - \frac{1}{2} \frac{d^2}{dz^2} \Psi ( \alpha , z) dz = 0.6204$
Potential energy: $\int_{0}^{ \infty} z \Psi ( \alpha , z)^{2} dz = 1.2407$
What is the apparent virial theorem for this system: $E = 3T = \frac{3}{2} V$
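The closed-form variational energy obtained above, $E( \alpha ) = \frac{3 \alpha}{2} + \sqrt{ \frac{2}{ \pi \alpha}}$, can also be minimized numerically as a check; a short SciPy sketch:

import numpy as np
from scipy.optimize import minimize_scalar

E = lambda a: 1.5*a + np.sqrt(2/(np.pi*a))
res = minimize_scalar(E, bounds=(0.01, 5), method="bounded")
print(res.x, res.fun)   # alpha ~ 0.4136, E ~ 1.8611 (exact: 1.8558)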
10.11: Linear Variational Method for a Particle in a Slanted 1D Box
Trial wavefunctions:
• $\psi_{1} (x) = \sqrt{2} \sin( \pi x)$
• $\psi_{2} (x) = \sqrt{105} x (1-x)^{2}$
Plot trial wavefunctions and potential energy. x = 0, .005 ... 1
Evaluate matrix elements:
\begin{align} S_{11} &= \int_{0}^{1} \psi _{1} (x)^{2} dx \[4pt] &= 1 \end{align} \nonumber
\begin{align} S_{12} &= \int_{0}^{1} \psi _{1} (x) \psi _{2} (x) dx\[4pt] &= 0.9347 \end{align} \nonumber
\begin{align} S_{22} &= \int_{0}^{1} \psi _{2} (x)^{2} dx \[4pt] &= 1 \end{align} \nonumber
$H_{11} = \int_{0}^{1} \psi _{1} (x) (- \frac{1}{2}) \frac{d^{2}}{dx^{2}} \psi _{1} (x) dx + \int_{0}^{1} \psi _{1} (x)~x~ \psi _{1} (x) dx$ $H_{11} = 5.4348$
$H_{12} = \int_{0}^{1} \psi _{1} (x) (- \frac{1}{2}) \frac{d^{2}}{dx^{2}} \psi _{2} (x) dx + \int_{0}^{1} \psi _{1} (x)~x~ \psi _{2} (x) dx$ $H_{12} = 5.0163$
$H_{22} = \int_{0}^{1} \psi _{2} (x) (- \frac{1}{2}) \frac{d^{2}}{dx^{2}} \psi _{2} (x) dx + \int_{0}^{1} \psi _{2} (x)~x~ \psi _{2} (x) dx$ $H_{22} = 7.375$
Solve the secular equations and normalization constraint for the energy and coefficients.
Seed values for energy and coefficients: E = 5 c1 = .5 c2 = .5
Given
$(H_{11} - E S_{11})c_{1} + (H_{12} - E S_{12})c_{2} = 0$
$(H_{12} - E S_{12})c_{1} + (H_{22} - E S_{22})c_{2} = 0$
$c_{1}^{2} S_{11} + 2 c_{1} c_{2} S_{12} + c_{2}^{2} S_{22} = 1$
${\begin{pmatrix} E \ c_{1} \ c_{2} \end{pmatrix}} = Find(E, c_{1}, c_{2})$
${\begin{pmatrix} E \ c_{1} \ c_{2} \end{pmatrix}} = {\begin{pmatrix} 5.4328 \ 0.971\ 0.031 \end{pmatrix}}$
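The solve block above is equivalent to the generalized eigenvalue problem $Hc = ESc$; a SciPy sketch using the (rounded) matrix elements quoted above:

import numpy as np
from scipy.linalg import eigh

H = np.array([[5.4348, 5.0163], [5.0163, 7.375]])
S = np.array([[1.0, 0.9347], [0.9347, 1.0]])
E, C = eigh(H, S)          # generalized symmetric eigenproblem
print(E[0])                # ground state ~5.433, matching the result above
print(C[:, 0])             # coefficients, up to an overall sign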
Compare variational ground state to PIB ground state:
Calculate average position of the particle in the box:
$\int_{0}^{1} x \Phi (x)^{2} dx = 0.496 \nonumber$
Calculate the probability that the particle is in the left half of the box:
$\int_{0}^{0.5} \Phi (x)^{2} dx = 0.5088 \nonumber$
10.12: Variation Method for a Particle in a Semi-Infinite Potential Well
This problem deals with the variational approach to the particle in the semi-infinite potential well.
Kinetic energy operator: $- \frac{1}{2} \frac{d^2}{dx^2} \blacksquare$
Integral: $\int_{0}^{ \infty} \blacksquare dx$
Potential energy: $V(x) := if[( x \leq 2), 0 , 2]$
Trial wave function: $\Phi (x, \beta ) := 2 \beta ^{ \frac{3}{2}}~x~exp(- \beta x)$
If the trial wave function is not normalized, normalize it.
$\int_{0}^{ \infty} \Phi (x, \beta )^{2} dx~ assume,~ \beta > 0 \rightarrow 1$
Evaluate the variational energy integral.
$E( \beta ) := \int_{0}^{ \infty} \Phi (x, \beta ) \left( - \frac{1}{2} \right) \frac{d^2}{dx^2} \Phi (x, \beta ) dx + \int_{2}^{ \infty} 2 \Phi (x, \beta )^{2} dx~|_{simplify}^{assume,~ \beta >0} \rightarrow \frac{1}{2} \beta^{2} + 16 \beta^{2} e^{-4 \beta} + 8 \beta e^{-4 \beta} + 2 e^{- 4 \beta}$
Minimize the energy with respect to $\beta$:
$\beta$ := .3 $\beta$ := Minimize $(E, \beta )$ $\beta = 1.053$ $E( \beta ) = 0.972$
Display optimized trial wave function and potential energy:
Calculate average position and most probable position of the particle:
$\int_{0}^{ \infty} x \Phi (x, \beta )^{2} dx = 1.425$
$\frac{d}{dx} \Phi (x, \beta ) = 0~|_{float,~3}^{solve,~x} \rightarrow x = \frac{1}{ \beta} = 0.950$
Calculate the probability of the particle in the barrier.
$\int_{2}^{ \infty} \Phi (x, \beta )^{2} dx = 20.891%$
Calculate the potential energy, and the kinetic energy.
$V := \int_{2}^{ \infty} 2 \Phi (x, \beta )^{2} dx$ $V = 0.418$
$T := E ( \beta ) - V$ $T = 0.554$
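The symbolic minimization above can be verified by brute-force quadrature; a sketch (the second derivative of the trial function is entered analytically):

```python
# Sketch: numerical E(beta) for the semi-infinite well, trial phi = 2 b^{3/2} x e^{-bx}.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def E(b):
    phi = lambda x: 2.0 * b**1.5 * x * np.exp(-b * x)
    # phi'' = 2 b^{3/2} (b^2 x - 2 b) e^{-bx}
    d2phi = lambda x: 2.0 * b**1.5 * (b**2 * x - 2.0 * b) * np.exp(-b * x)
    T, _ = quad(lambda x: phi(x) * (-0.5) * d2phi(x), 0.0, np.inf)
    V, _ = quad(lambda x: 2.0 * phi(x)**2, 2.0, np.inf)   # step of height 2 past x = 2
    return T + V

res = minimize_scalar(E, bounds=(0.1, 5.0), method="bounded")
print(res.x, res.fun)   # ~1.053, ~0.972
```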
10.13: Variation Method for a Particle in a Box with an Internal Barrier
$\psi _{1} (x) = \sqrt{2} \sin{ \pi x}$
$\psi _{2} (x) = \sqrt{2} \sin{3 \pi x}$
$V(x) = if [(x \geq .45) (x \leq .55), 100, 0]$
Plot trial wavefunctions and potential energy.
Evaluate matrix elements for $100 E_h$ internal barrier:
$S_{11} = \int_{0}^{1} \psi _{1} (x)^{2} dx$ $S_{11} = 1$
$S_{12} = \int_{0}^{1} \psi _{1} (x) \psi _{2} (x) dx$ $S_{12} = 0$
$S_{22} = \int_{0}^{1} \psi _{2} (x)^{2} dx$ $S_{22} = 1$
\begin{align} H_{11} &= \int_{0}^{1} \psi _{1} (x) (- \dfrac{1}{2}) \dfrac{d^{2}}{dx^{2}} \psi _{1} (x) dx + \int_{.45}^{.55} \psi _{1} (x)~100~ \psi _{1} (x) dx\[4pt] & \approx 24.7711\end{align} \nonumber
\begin{align} H_{12} &= \int_{0}^{1} \psi _{1} (x) (- \dfrac{1}{2}) \dfrac{d^{2}}{dx^{2}} \psi _{2} (x) dx + \int_{.45}^{.55} \psi _{1} (x)~100~ \psi _{2} (x) dx \[4pt] & \approx -19.1912\end{align} \nonumber
\begin{align} H_{22} &= \int_{0}^{1} \psi _{2} (x) (- \dfrac{1}{2}) \dfrac{d^{2}}{dx^{2}} \psi _{2} (x) dx + \int_{.45}^{.55} \psi _{2} (x)~100~ \psi _{2} (x) dx \[4pt] &\approx 62.9972 \end{align} \nonumber
Solve the secular equations and normalization constraint for the energy and coefficients.
Seed values for energy and coefficients: E = 5 c1 = .5 c2 = .5
Given
$(H_{11} - E S_{11})c_{1} + (H_{12} - E S_{12})c_{2} = 0$
$(H_{12} - E S_{12})c_{1} + (H_{22} - E S_{22})c_{2} = 0$
$c_{1}^{2} S_{11} + 2 c_{1} c_{2} S_{12} + c_{2}^{2} S_{22} = 1$
${\begin{pmatrix} E \ c_{1} \ c_{2} \end{pmatrix}} = Find(E, c_{1}, c_{2})$
${\begin{pmatrix} E \ c_{1} \ c_{2} \end{pmatrix}} = {\begin{pmatrix} 16.7989 \ 0.9235\ 0.3836 \end{pmatrix}}$
Plot variational results:
$\Phi (x) = c_{1} \psi _{1} (x) + c_{2} \psi_{2} (x)$
Calculate the probability the particle is in the barrier:
$\int_{0.45}^{0.55} \Phi (x)^{2} dx = 0.0605$
Calculate potential and kinetic energy:
$V = 100 \int_{0.45}^{0.55} \Phi (x)^{2} dx$ $V = 6.0541$
$T = E - V$
$T = 10.7448$
10.14: Variation Method for a Particle in a 1D Ice Cream Cone
Define potential energy: V(x) := |x|
Display potential energy:
Choose trial wave function: $\Psi (x, \beta ) := \sqrt{ \frac{ \beta}{2}} sech( \beta x)$
$E ( \beta ) := \int_{- \infty}^{ \infty} \Psi (x, \beta ) \left( - \frac{1}{2} \right) \frac{d^2}{dx^2} \Psi (x, \beta ) dx + \int_{- \infty}^{ \infty} V(x) \Psi (x, \beta )^{2} dx \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ := 2 $\beta$ := Minimize( E, $\beta$) $\beta$ = 1.276 E( $\beta$) = 0.815
Display wave function in the potential well.
Calculate the probability that the particle is in the potential barrier (the potential is positive everywhere except x = 0, so the particle is always in a region of nonzero potential energy).
$2 \int_{0}^{ \infty} \Psi (x, \beta )^2 dx = 1 \nonumber$
Define quantum mechanical tunneling.
Tunneling occurs when a quon (a quantum mechanical particle) has a nonzero probability of being in a nonclassical region, that is, a region in which the total energy is less than the potential energy.
Calculate the probability that tunneling is occurring.
$|x| = 0.815 |_{float,~4}^{solve,~x} \rightarrow {\begin{pmatrix} 0.8150 \ -0.8150 \end{pmatrix}} \nonumber$
$2 \int_{0.815}^{ \infty} \Psi (x, \beta )^2 dx = 0.222 \nonumber$
Calculate the kinetic and potential energy contributions to the total energy.
Kinetic energy:
$\int_{- \infty}^{ \infty} \Psi (x, \beta ) \left( - \frac{1}{2} \right) \frac{d^2}{dx^2} \Psi (x, \beta ) dx = 0.272 \nonumber$
Potential energy:
$\int_{- \infty}^{ \infty} V(x) \Psi (x, \beta )^{2} dx = 0.543 \nonumber$
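The results of this section can be reproduced with a few lines of Python. The closed forms T = β²/6 and V = ln 2/β used below are not stated explicitly above, but they match the kinetic (0.272) and potential (0.543) energies just computed:

```python
# Sketch: sech trial for V(x) = |x|; optimum beta, energy, and tunneling probability.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

E = lambda b: b**2 / 6.0 + np.log(2.0) / b
res = minimize_scalar(E, bounds=(0.1, 5.0), method="bounded")
b, Emin = res.x, res.fun
print(b, Emin)                                  # ~1.276, ~0.815

psi2 = lambda x: (b / 2.0) / np.cosh(b * x)**2  # |Psi|^2
P, _ = quad(psi2, Emin, np.inf)                 # beyond the turning point x = E
print(2.0 * P)                                  # ~0.222
```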
10.15: Variation Method for a Particle in an Ice Cream Cone
A Gaussian function is proposed as a trial wavefunction in a variational calculation for a particle experiencing a linear radial potential energy. Determine the optimum value of the parameter β and the optimum ground state energy. Use atomic units: h = 2π, me = 1, e = ‐1.
$\psi (r, \beta ) := ( \frac{2 \beta}{ \pi})^{ \frac{3}{4}} exp(- \beta r^2) \nonumber$
$T = - \frac{1}{2r} \frac{d^2}{dr^2} (r \blacksquare ) \nonumber$
$V = r \nonumber$
$\int_{0}^{ \infty} \blacksquare 4 \pi r^2 dr \nonumber$
a. Demonstrate the wave function is normalized.
$\int_{0}^{ \infty} \psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta >0} \rightarrow 1 \nonumber$
b. Evaluate the variational integral.
$E( \beta ) := \int_{0}^{ \infty} \psi (r, \beta ) \left[ - \frac{1}{2r} \frac{d^2}{dr^2} (r \psi (r, \beta)) \right] 4 \pi r^2 dr + \int_{0}^{ \infty} r \psi (r, \beta )^2 4 \pi r^2 dr~|_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{3 \pi ^{ \frac{1}{2}} \beta ^{2} + 2 (2)^{ \frac{1}{2}} \beta ^{ \frac{1}{2}}}{2 \pi ^{ \frac{1}{2}} \beta} \nonumber$
c. Minimize the energy with respect to the variational parameter $\beta$.
$\beta$ := 1 $\beta$ := Minimize(E, $\beta$) $\beta$ = 0.414 E( $\beta$) = 1.861
d. Plot the optimized trial wave function.
10.16: Variation Method for a Particle in a Finite 3D Spherical Potential Well
This problem deals with a particle of unit mass in a finite spherical potential well of radius 2 $a_0$ and well height 2 $E_h$. The trial wave function is given below.
$\psi (r, \beta ) := ( \frac{2 \beta}{ \pi})^{ \frac{3}{4}} exp(- \beta r^2) \nonumber$
$T = - \frac{1}{2r} \frac{d^2}{dr^2} (r \blacksquare ) \nonumber$
$V(r) := if [(r \leq 2), 0, 2]$
a. Demonstrate that the wave function is normalized.
$\int_{0}^{ \infty} \psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow 1 \nonumber$
b. Evaluate the variational integral.
$E( \beta ) := \int_{0}^{ \infty} \psi (r, \beta ) \left[ - \frac{1}{2r} \frac{d^2}{dr^2} (r \psi (r, \beta )) \right] 4 \pi r^2 dr + \int_{2}^{ \infty} 2 \psi (r, \beta )^2 4 \pi r^2 dr \nonumber$
$E( \beta ) |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{2} \frac{3 \pi^{ \frac{1}{2}} \beta + 4 \pi ^{ \frac{1}{2}} + 16 \exp (-8 \beta) (2)^{ \frac{1}{2}} \beta^{ \frac{1}{2}} - 4 \pi ^{\frac{1}{2}} \text{erf} \left( (2) 2^{ \frac{1}{2}} \beta^{ \frac{1}{2}} \right)}{ \pi ^{ \frac{1}{2}}} \nonumber$
c. Minimize the energy with respect to the variational parameter $\beta$.
$\beta$ := 5 $\beta$ := Minimize(E, $\beta$) $\beta$ = 0.381 $E ( \beta ) = 0.786$
d. Calculate the average value of r.
$\int_{0}^{ \infty} r \psi (r, \beta )^2 4 \pi r^2 dr = 1.293$
e. Calculate the kinetic and potential energy.
Potential energy:
$\int_{2}^{ \infty} 2 \psi (r, \beta )^2 4 \pi r^2 dr = 0.215$
Kinetic energy:
$E( \beta ) - 0.215 = 0.571$
f. Calculate the probability that the particle is in the barrier.
$1 - \int_{0}^{2} \psi (r, \beta )^2 4 \pi r^2 dr = 0.107$
g. Plot the wavefunction on the same graph as the potential energy.
10.17: Variation Method for the Harmonic Oscillator
This exercise deals with a variational treatment for the ground state of the simple harmonic oscillator which is, of course, an exactly soluble quantum mechanical problem.
The energy operator for a harmonic oscillator with unit effective mass and force constant is:
$H = \frac{-1}{2} \frac{d^2}{dx^2} \blacksquare + \frac{x^2}{2} \blacksquare$
The following trial wavefunction is selected:
$\psi (x, \beta ) = \frac{1}{1 + \beta x^2} \nonumber$
The variational energy integral is evaluated (because of the symmetry of the problem it is only necessary to integrate from 0 to ∞, rather than from −∞ to ∞):
$E( \beta ) = \dfrac{ \int_{0}^{ \infty} \psi (x, \beta ) \frac{-1}{2} \frac{d^2}{dx^2} \psi (x, \beta ) dx + \int_{0}^{ \infty} \psi (x, \beta ) \frac{x^2}{2} \psi (x, \beta ) dx}{ \int_{0}^{ \infty} \psi (x, \beta )^2 dx} |^{assume,~ \beta > 0}_{simplify} \rightarrow \frac{1}{4} \frac{ \beta ^2 + 2}{ \beta}$
The energy integral is minimized with respect to the variational parameter:
$\beta$ := 1 $\beta$ := Minimize (E, $\beta$) $\beta$ = 1.414 E( $\beta$) = 0.707
The % error is calculated given that the exact result is $0.50 E_h$.
$\frac{E( \beta ) - 0.5}{0.5} = 41.421$%
The optimized trial wavefunction is compared with the SHO ground-state eigenfunction.
Now a second trial function is chosen:
$\psi (x, \beta ) := \frac{1}{(1 + \beta x^2)^2}$
Evaluate the variational energy integral:
$E( \beta ) := \frac{ \int_{0}^{ \infty} \psi (x, \beta ) \frac{-1}{2} \frac{d^2}{dx^2} \psi (x, \beta ) dx + \int_{0}^{ \infty} \psi (x, \beta ) \frac{x^2}{2} \psi (x, \beta ) dx}{ \int_{0}^{ \infty} \psi (x, \beta )^2 dx} |^{assume,~ \beta > 0}_{simplify} \rightarrow \frac{1}{10} \frac{7 \beta ^2 + 1}{ \beta}$
Minimize the energy integral with respect to the variational parameter:
$\beta$ := 1 $\beta$ := Minimize (E, $\beta$) $\beta$ = 0.378 E( $\beta$) = 0.529
Calculate the % error given that the exact result is 0.50 Eh.
$\frac{E( \beta ) - 0.5}{0.5} = 5.83$%
The optimized trial wavefunction is compared with the SHO ground-state eigenfunction.
Suggestion: Continue this exercise with the following trial wavefunction and interpret the improved agreement with the exact solution.
$\psi (x, \beta ) := \frac{1}{(1 + \beta x^2)^n}$
where n is an integer greater than 2.
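A numerical sketch of this suggested exercise (the code is illustrative, not from the original worksheet). Since the trial function is not normalized, the normalization integral appears in the denominator, and the kinetic energy is computed by parts as ½∫ψ′² dx:

```python
# Sketch: optimize psi = (1 + beta x^2)^(-n) for the harmonic oscillator.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def E(beta, n):
    psi  = lambda x: (1.0 + beta * x**2)**(-n)
    dpsi = lambda x: -2.0 * n * beta * x * (1.0 + beta * x**2)**(-n - 1)
    T, _ = quad(lambda x: 0.5 * dpsi(x)**2, 0.0, np.inf)       # integration by parts
    V, _ = quad(lambda x: 0.5 * x**2 * psi(x)**2, 0.0, np.inf)
    N, _ = quad(lambda x: psi(x)**2, 0.0, np.inf)
    return (T + V) / N

for n in (1, 2, 3, 5, 10):
    res = minimize_scalar(lambda b: E(b, n), bounds=(0.01, 5.0), method="bounded")
    print(n, round(res.fun, 4))   # 0.7071, 0.5292, ... approaching the exact 0.5
```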
10.18: Trigonometric Trial Wave Function for the Harmonic Potential Well
Define potential energy: V(x) := $\frac{x^2}{2}$
Display potential energy:
Choose trial wave function: $\Psi (x, \beta ) := \sqrt{ \frac{ \beta}{2}} sech( \beta x)$
Set up variational energy integral:
$E( \beta ) := \frac{ \int_{- \infty}^{ \infty} \Psi (x, \beta ) \frac{-1}{2} \frac{d^2}{dx^2} \Psi (x, \beta ) dx + \int_{- \infty}^{ \infty} \Psi (x, \beta ) \frac{x^2}{2} \Psi (x, \beta ) dx}{ \int_{- \infty}^{ \infty} \Psi (x, \beta )^2 dx} |^{assume,~ \beta > 0}_{simplify} \rightarrow \frac{1}{24} \frac{4 \beta ^4 + \pi^2}{ \beta^2}$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ := 0.2 $\beta$ := Minimize (E, $\beta$) $\beta$ = 1.253 E( $\beta$) = 0.524
Display wave function in the potential well.
Calculate the probability that the particle is in the potential barrier (the potential is positive everywhere except x = 0, so the particle is always in a region of nonzero potential energy).
$2 \int_{0}^{ \infty} \Psi (x, \beta )^2 dx = 1$
Define quantum mechanical tunneling.
Tunneling occurs when a quon (a quantum mechanical particle) has a nonzero probability of being in a nonclassical region, that is, a region in which the total energy is less than the potential energy.
Calculate the probability that tunneling is occurring.
Calculate the classical turning point.
$\frac{x^2}{2} = 0.524 |_{float,~4}^{solve,~x} \rightarrow {\begin{pmatrix} -1.024 \ 1.024 \end{pmatrix}}$
$2 \int_{1.024}^{ \infty} \Psi (x, \beta )^2 dx = 0.143$
Calculate the kinetic and potential energy contributions to the total energy.
Kinetic energy:
$\int_{- \infty}^{ \infty} \Psi (x, \beta ) \frac{-1}{2} \frac{d^2}{dx^2} \Psi (x, \beta ) dx = 0.262$
Potential energy:
$\int_{- \infty}^{ \infty} V(x) \Psi (x, \beta )^2 dx = 0.262$
Is the virial theorem satisfied?
Yes, for the harmonic potential the virial theorem is T = V = E/2.
10.19: Trigonometric Trial Wave Function for the 3D Harmonic Potential Well
Trial wave function: $\Psi (r, \beta) := \sqrt{ \frac{3 \beta^3}{ \pi^3}} sech( \beta r)$
Integral: $\int_{0}^{ \infty} \blacksquare 4 \pi r^2 dr$
Kinetic energy operator: $T = \frac{-1}{2r} \frac{d^2}{dr^2} (r \blacksquare )$
Potential energy operator: $V = \frac{1}{2} k r^2$
a. Demonstrate the wave function is normalized.
$\int_{0}^{ \infty} \Psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow 1$
b. Evaluate the variational integral.
$E( \beta ) := \int_{0}^{ \infty} \Psi (r, \beta ) [ \frac{-1}{2r} \frac{d^2}{dr^2} (r \Psi (r, \beta ))] 4 \pi r^2 dr + \int_{0}^{ \infty} \Psi (r, \beta ) \frac{1}{2} r^2 \Psi (r, \beta ) 4 \pi r^2 dr$
c. Minimize the energy with respect to the variational parameter $\beta$.
$\beta$ := 1 $\beta$ := Minimize (E, $\beta$) $\beta$ = 1.471 E( $\beta$) = 1.597
d. The exact ground state energy for the 3D harmonic oscillator is 1.5 Eh. Calculate the percent error.
$\frac{E( \beta ) - 1.5}{1.5} = 6.488$%
e. Compare the optimized trial wave function with the exact solution by plotting the radial distribution functions.
$\Phi (r) := ( \frac{1}{ \pi})^{ \frac{3}{4}} exp( - \frac{r^2}{2})$
h. Calculate the overlap integral between the trial wave function and the exact wave function.
$\int_{0}^{ \infty} \Psi (r, \beta ) \Phi (r) 4 \pi r^2 dr = 0.989$
i. Calculate the probability that tunneling is occurring.
Classical turning point:
$1.597 = \frac{1}{2} r^2 |_{float,~3}^{solve,~r} \rightarrow {\begin{pmatrix} -1.79 \ 1.79 \end{pmatrix}}$
Tunneling probability:
$\int_{1.79}^{ \infty} \Psi (r, \beta )^2 4 \pi r^2 dr = 12.598$%
10.20: Gaussian Trial Wavefunction for the Hydrogen Atom
A Gaussian function, $\exp(- \beta r^2)$, is proposed as a trial wavefunction in a variational calculation on the hydrogen atom. Determine the optimum value of the parameter $\beta$ and the ground state energy of the hydrogen atom. Use atomic units: h = 2π, me = 1, e = -1.
$\Psi (r, \beta ) := ( \frac{2 \beta}{ \pi})^{ \frac{3}{4}} exp(- \beta r^2) \nonumber$
$T = \frac{-1}{2r} \frac{d^2}{dr^2} (r \blacksquare ) \nonumber$
$V = \frac{1}{r} \nonumber$
$\int_{0}^{ \infty} \blacksquare 4 \pi r^2 dr \nonumber$
a. Demonstrate the wave function is normalized.
$\int_{0}^{ \infty} \Psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta >0} \rightarrow 1 \nonumber$
b. Evaluate the variational integral.
$E ( \beta ) := \int_{0}^{ \infty} \Psi (r, \beta ) \left[ \frac{-1}{2r} \frac{d^2}{dr^2} (r \Psi (r, \beta )) \right] 4 \pi r^2 dr + \int_{0}^{ \infty} \Psi (r, \beta ) \frac{-1}{r} \Psi (r, \beta ) 4 \pi r^2 dr~|_{simplify}^{assume,~ \beta >0} \rightarrow \frac{1}{2} \frac{3 \pi^{\frac{1}{2}} \beta - (4) 2^{ \frac{1}{2}} \beta^{ \frac{1}{2}}}{ \pi^{ \frac{1}{2}}} \nonumber$
c. Minimize the energy with respect to the variational parameter $\beta$.
$\beta$ := 1 $\beta$ := Minimize (E, $\beta$) $\beta$ = 0.283 E( $\beta$) = -0.424
d. The exact ground state energy for the hydrogen atom is -.5 Eh. Calculate the percent error.
$\left| \frac{-.5 - E( \beta )}{-.5} \right| = 15.117 \% \nonumber$
e. The differences between the Gaussian and Slater-type wavefunctions are illustrated with the surface plots shown below.
N := 50 b := 5 i := 0..N j := 0..N $y_{i} := -b + \frac{2bi}{N}$ $x_{j} := -b + \frac{2bj}{N}$
$Gauss_{i,~j} := ( \frac{2 \beta}{ \pi})^{ \frac{3}{4}} exp[- \beta [ (x_{j})^2 + (y_{i})^2]] \nonumber$
$Slater_{i,~j} := \frac{1}{ \sqrt{ \pi}} exp [ - \sqrt{ (x_{j})^2 + (y_{i})^2}] \nonumber$
f. These wavefunctions can also be compared to their radial distribution functions:
r := 0, .1 .. 6
$G(r) := ( \frac{2 \beta}{ \pi}) ^{ \frac{3}{4}} exp( - \beta r^2) \nonumber$
$S(r) := \frac{1}{ \sqrt{ \pi}} exp( -r ) \nonumber$
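A short sketch comparing the two radial distribution functions numerically; the peak positions make the difference between the optimized Gaussian (β = 0.283) and the Slater function concrete:

```python
# Sketch: radial distribution functions 4*pi*r^2*psi^2, Gaussian vs. Slater.
import numpy as np

beta = 0.283
r = np.linspace(0.0, 6.0, 601)
G = (2.0 * beta / np.pi)**0.75 * np.exp(-beta * r**2)
S = np.exp(-r) / np.sqrt(np.pi)
rdf_G = 4.0 * np.pi * r**2 * G**2
rdf_S = 4.0 * np.pi * r**2 * S**2
print(r[np.argmax(rdf_G)], r[np.argmax(rdf_S)])   # peaks near 1.33 and 1.00 bohr
```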
10.21: Variation Calculation on the 1D Hydrogen Atom Using a Trigonometric Trial Wave Function
The energy operator for this problem is:
$\frac{-1}{2} \frac{d^2}{dx^2} \blacksquare - \frac{1}{x} \blacksquare \nonumber$
The trial wave function:
$\Psi ( \alpha , x) := \frac{ \sqrt{12 \alpha ^3}}{ \pi}~x~ sech( \alpha x) \nonumber$
Evaluate the variational energy integral.
$E( \alpha ) := \int_{0}^{ \infty} \Psi ( \alpha , x) \left( - \frac{1}{2} \right) \frac{d^2}{dx^2} \Psi ( \alpha , x) dx + \int_{0}^{ \infty} \frac{-1}{x} \Psi ( \alpha , x)^2 dx~|_{simplify}^{assume,~ \alpha > 0} \rightarrow \frac{1}{6} \alpha \frac{12 \alpha + \alpha \pi^2 - 72 \ln (2)}{ \pi ^2} \nonumber$
Minimize the energy with respect to the variational parameter $\alpha$ and report its optimum value and the ground-state energy.
$\alpha$ := 1 $\alpha$ := Minimize (E, $\alpha$) $\alpha$ = 1.1410 E( $\alpha$) = -0.4808
The exact ground-state energy for the hydrogen atom is -.5 Eh. Calculate the percent error.
$| \frac{-.5 - E( \alpha )}{-.5}| = 3.8401 \% \nonumber$
Plot the optimized trial wave function and the exact solution, $\Phi (x) := 2 (x) exp (-x)$.
Find the distance from the nucleus within which there is a 95% probability of finding the electron.
Seed value: a := 1. Given:
$\int_{0}^{a} \Psi ( \alpha , x)^2 dx = 0.95 \nonumber$
Find(a) = 2.8754
Find the most probable value of the position of the electron from the nucleus.
$\alpha := 1.1410~~~~ \frac{d}{dx} \left[ \frac{ \sqrt{12 \alpha ^3}}{ \pi}~x~ sech( \alpha x) \right] = 0~|_{float,~3}^{solve,~x} \rightarrow 1.05 \nonumber$
Calculate the probability that the electron will be found between the nucleus and the most probable distance from the nucleus.
$\int_{0}^{1.05} \Psi ( \alpha , x)^2 dx = 0.3464 \nonumber$
Break the energy down into kinetic and potential energy contributions. Is the virial theorem obeyed?
$T := \int_{0}^{ \infty} \Psi ( \alpha , x) \frac{-1}{2} \frac{d^2}{dx^2} \Psi ( \alpha , x) dx~~~T = 0.4808 \nonumber$
$V := \int_{0}^{ \infty} \frac{-1}{x} \Psi ( \alpha , x)^2 dx~~~ V = -0.9616 \nonumber$
$| \frac{V}{T}| = 2.0000 \nonumber$
Use the exact result to discuss the weakness of this trial function.
Eexact := -0.5
Using the virial theorem we know: Texact := 0.500 Vexact := -1.00
Calculate the difference between the variational results and the exact calculation:
E( $\alpha$) - Eexact = 0.0192
T - Texact = -0.0192
V - Vexact = 0.0384
The variational wave function yields a lower kinetic energy, but at the expense of a potential energy that is twice as unfavorable as the kinetic energy result is favorable.
Calculate the probability that tunneling is occurring.
Classical turning point:
$E( \alpha ) = \frac{-1}{x} |_{float,~3}^{solve,~x} \rightarrow 2.08 \nonumber$
Tunneling probability:
$\int_{2.08}^{ \infty} \Psi ( \alpha , x)^2 dx = 0.1783 \nonumber$
10.22: Variation Calculation on the 1D Hydrogen Atom Using a Gaussian Trial Wavefunction
The energy operator for this problem is:
$\frac{-1}{2} \frac{d^2}{dx^2} \blacksquare - \frac{1}{x} \blacksquare \nonumber$
The trial wave function is:
$\psi (x, \alpha ) = 2 \left( \frac{2 \alpha}{ \pi^{ \frac{1}{3}}} \right)^{ \frac{3}{4}} x~ exp( - \alpha x^2) \nonumber$
Evaluate the energy integral.
$E( \alpha ) = \int_{0}^{ \infty} \psi (x, \alpha ) \left( - \frac{1}{2} \right) \frac{d^2}{dx^2} \psi (x, \alpha ) dx + \int_{0}^{ \infty} \frac{-1}{x} \psi (x, \alpha )^2 dx~|_{simplify}^{assume,~ \alpha > 0} \rightarrow \frac{-1}{2 \pi^{ \frac{1}{2}}} \left[ (-3) \pi^{ \frac{1}{2}} \alpha + (4) 2^{ \frac{1}{2}} \alpha ^{ \frac{1}{2}} \right] \nonumber$
Minimize the energy with respect to the variational parameter $\alpha$ and report its optimum value and the ground-state energy.
$\alpha$ := 1 $\alpha$ := Minimize(E, $\alpha$) $\alpha$ = 0.2829 E( $\alpha$) = -0.4244
The exact ground state energy for the hydrogen atom is -0.5 Eh. Calculate the percent error.
$\left| \dfrac{-0.5 - E( \alpha )}{-0.5} \right| = 15.1174 \% \nonumber$
Plot the optimized trial wave function and the exact solution, $\Phi (x) = 2(x) exp(-x)$.
Find the distance from the nucleus within which there is a 95% probability of finding the electron.
Seed value: a := 1. Given:
$\int_{0}^{a} \psi (x, \alpha )^2 dx = .95 \nonumber$
Find(a) = 2.6277
Find the most probable value of the position of the electron from the nucleus.
$\alpha = 0.2829 \frac{d}{dx} | \psi (x, \alpha )| = 0 |_{float,~3}^{solve,~x} \rightarrow {\begin{pmatrix} -1.33 \ 1.33 \end{pmatrix}} \nonumber$
Calculate the probability that the electron will be found between the nucleus and the most probable distance from the nucleus.
$\int_{0}^{1.33} \psi (x, \alpha )^2 dx = 0.3584 \nonumber$
Break the energy down into kinetic and potential energy contributions. Is the virial theorem obeyed?
\begin{align*} T &= \int_{0}^{ \infty} \psi (x, \alpha ) \left( - \frac{1}{2} \right) \frac{d^2}{dx^2} \psi (x, \alpha ) dx \[4pt] &= 0.4244\end{align*}
\begin{align*} V &= \int_{0}^{ \infty} \frac{-1}{x} \psi (x, \alpha )^2 dx \[4pt] &= -0.8488 \end{align*}
$| \frac{V}{T}| = 2.00 \nonumber$
Calculate the probability that tunneling is occurring.
Classical turning point:
$E( \alpha ) = \frac{-1}{x}|_{float,~3}^{solve,~x} \rightarrow 2.36 \nonumber$
Tunneling probability:
$\int_{2.36}^{ \infty} \psi (x, \alpha )^2 dx = 0.0978 \nonumber$
10.23: Variational Calculation on the Two-dimensional Hydrogen Atom
Normalized trial wave function:
$\psi ( \alpha , r) = \sqrt{ \frac{2}{ \pi}} \alpha e^{- \alpha r} \nonumber$
$\int_{0}^{ \infty} \psi ( \alpha , r)^2 2 \pi r dr~~assume,~ \alpha > 0 \rightarrow 1 \nonumber$
Calculate electron kinetic energy:
$T ( \alpha ) = \int_{0}^{ \infty} \psi ( \alpha , r) \frac{-1}{2r} \frac{d}{dr} (r \frac{d}{dr} \psi ( \alpha , r)) 2 \pi r dr~~assume,~ \alpha >0 \rightarrow \frac{ \alpha ^2}{2} \nonumber$
Calculate electron-nucleus potential energy:
$V_{NE} ( \alpha , Z) = \int_{0}^{ \infty} \psi ( \alpha , r) \frac{-Z}{r} \psi ( \alpha , r) 2 \pi r dr~assume,~ \alpha > 0 \rightarrow (-2) \alpha Z \nonumber$
Calculate the total electronic energy for the 2D H atom, $E( \alpha ) := T( \alpha ) + V_{NE} ( \alpha , 1)$, and minimize it with respect to $\alpha$:
$\alpha$ := 1 $\alpha$ := Minimize(E, $\alpha$) $\alpha$ = 2 E( $\alpha$) = -2
Demonstrate that the virial theorem is satisfied:
$\frac{T ( \alpha )}{E( \alpha )} = -1~~~ \frac{T( \alpha )}{V_{NE} ( \alpha , 1)} = -0.5~~~ \frac{V_{NE} ( \alpha , 1)}{E( \alpha )} = 2 \nonumber$
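The two symbolic integrals above are easy to confirm with SymPy; a minimal sketch (the 2D volume element 2πr dr and the 2D kinetic operator are written out exactly as in the worksheet):

```python
# Sketch: SymPy check of the 2D hydrogen-atom kinetic and nuclear-attraction integrals.
import sympy as sp

r, alpha, Z = sp.symbols('r alpha Z', positive=True)
psi = sp.sqrt(2 / sp.pi) * alpha * sp.exp(-alpha * r)

T = sp.integrate(psi * (-1 / (2 * r)) * sp.diff(r * sp.diff(psi, r), r) * 2 * sp.pi * r,
                 (r, 0, sp.oo))
V = sp.integrate(psi * (-Z / r) * psi * 2 * sp.pi * r, (r, 0, sp.oo))
print(sp.simplify(T))   # alpha**2/2
print(sp.simplify(V))   # -2*Z*alpha
```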
10.24: Variational Calculation on Helium Using a Hydrogenic Wavefunction
Hydrogenic (Slater-type) trial wavefunction:
$\psi (r, \beta ) := \sqrt{ \frac{ \beta^3}{ \pi}} exp(- \beta r) \nonumber$
Demonstrate the wavefunction is normalized.
$\int_{0}^{ \infty} \psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow 1 \nonumber$
The terms contributing to the total electronic energy of the helium atom are the kinetic energy of each electron, each electron's interaction with the nucleus, and the interaction of electrons with each other.
Calculate kinetic energy:
$2 \int_{0}^{ \infty} \psi (r, \beta ) \left[ \frac{-1}{2r} \frac{d^2}{dr^2} (r \psi ( r, \beta )) \right] 4 \pi r^2 dr |_{simplify}^{assume,~ \beta >0} \rightarrow \beta ^2 \nonumber$
Calculate electron-nucleus potential energy:
$2 \int_{0}^{ \infty} \psi (r, \beta ) \frac{-2}{r} \psi (r, \beta ) 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow -4 \beta \nonumber$
Calculation of electron-electron potential energy:
a. Calculate the electric potential of one of the electrons in the presence of the other: $\frac{1}{r} \int_{0}^{r} \psi (x, \beta )^2 4 \pi x^2 dx + \int_{r}^{ \infty} \frac{ \psi (x, \beta )^2 4 \pi x^2}{x} dx |_{simplify}^{assume,~ \beta >0} \rightarrow \frac{- [r \beta e^{(-2) r \beta} + e^{(-2) r \beta} - 1]}{r} \nonumber$
b. Calculate the electron-electron potential energy using the result of part a: $\int_{0}^{ \infty} \psi (r, \beta )^2 \frac{- [r \beta e^{(-2) r \beta} + e^{(-2) r \beta} - 1]}{r} 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{5}{8} \beta \nonumber$
Write the equation for the total electronic energy in terms of the variational parameter $β$ and minimize the energy with respect to $β$:
$E( \beta ) := \beta ^2 - 4 \beta + \frac{5}{8} \beta~~~ \beta := \frac{d}{d \beta} E( \beta ) = 0~solve,~ \beta \rightarrow \frac{27}{16}~~~ E( \beta ) = -2.848 \nonumber$
Compare the variational calculation to the Hartree‐Fock limit: ($E_{HF} \approx −2.8617$)
$\dfrac{E_{HF} - E( \beta )}{E_{HF}} = 0.491 \% \nonumber$
Compare optimized trial wavefunction with the Hartree‐Fock wavefunction by plotting the radial distribution functions.
$\Phi (r) = 0.75738 \exp(-1.430r) + 0.43658 \exp(-2.4415r) + 0.17295 \exp(-4.0996r)- 0.02730\exp(-6.4843r) + 0.06675\exp(-7.978r) \nonumber$
10.25: Gaussian Trial Wave Function for the Helium Atom
Gaussian trial wave function:
$\Psi (r, \beta ) := ( \frac{2 \beta}{ \pi})^{ \frac{3}{4}} exp (- \beta r^2) \nonumber$
Demonstrate the wave function is normalized.
$\int_{0}^{ \infty} \Psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0 } \rightarrow 1 \nonumber$
The terms contributing to the total electronic energy of the helium atom are the kinetic energy of each electron, each electronʹs interaction with the nucleus, and the interaction of the electrons with each other.
Calculate kinetic energy:
$2 \int_{0}^{ \infty} \Psi (r, \beta ) [ \frac{-1}{2r} \frac{d^2}{dr^2} (r \Psi (r, \beta ))] 4 \pi r^2 dr |_{simplify}^{assume,~ \beta >0} \rightarrow 3 \beta \nonumber$
Calculate electron-nucleus potential energy:
$2 \int_{0}^{ \infty} \Psi (r, \beta ) \frac{-2}{r} \Psi (r, \beta ) 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow (-8) \frac{2 ^{ \frac{1}{2}}}{ \pi} ( \beta \pi )^{ \frac{1}{2}} \nonumber$
Calculation of electron-electron potential energy:
a. Calculate the electric potential of one of the electrons in the presence of the other:
$\frac{1}{r} \int_{0}^{r} \Psi (x, \beta )^2 4 \pi x^2 dx + \int_{r}^{ \infty} \frac{ \Psi (x, \beta )^2 4 \pi x^2}{x} dx |_{simplify}^{assume,~ \beta > 0 } \rightarrow \frac{erf(r 2^{ \frac{1}{2}} \beta ^{ \frac{1}{2}})}{r} \nonumber$
b. Calculate the electron-electron potential energy using result of part a:
$\int_{0}^{ \infty} \Psi (r, \beta )^2 ( \frac{erf(r 2^{ \frac{1}{2}} \beta ^{ \frac{1}{2}})}{r}) 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{2}{ \pi} ( \beta \pi )^{ \frac{1}{2}} \nonumber$
Write the equation for the total electronic energy in terms of the variational parameter $\beta$:
$E( \beta ) := 3 \beta + (-8) \frac{2^{ \frac{1}{2}}}{ \pi} ( \beta \pi )^{ \frac{1}{2}} + \frac{2}{ \pi} ( \beta \pi )^{ \frac{1}{2}}~simplify \rightarrow \frac{3 \beta \pi - (8) 2^{ \frac{1}{2}} ( \beta \pi )^{ \frac{1}{2}} + 2 ( \beta \pi )^{ \frac{1}{2}}}{ \pi} \nonumber$
Minimize the energy with respect to the variational parameter $\beta$.
$\frac{d}{d \beta} E( \beta ) = 0~~~solve,~ \beta \rightarrow \frac{-1}{9} \frac{(-33) + 8 (2)^{ \frac{1}{2}}}{ \pi} = 0.767~~~E(.767) = -2.301 \nonumber$
Compare the variational calculation to the Hartree-Fock limit: EHF := -2.8617
$\frac{E_{HF}- E(.767)}{E_{HF}} = 19.594 \% \nonumber$
Compare optimized trial wave function with the Hartree-Fock wave function (see McQuarrie and Simon, page 283) by plotting the radial distribution functions.
$\Phi (r) := .75738exp( -1.430 r) + .43658exp(-2.4415r) + .17295exp(-4.0996r) - .02730exp(-6.4843r) + .06675exp(-7.978r) \nonumber$
10.26: Trigonometric Trial Wavefunction for the Helium Atom
Trigonometric trial wavefunction:
$\psi (r, \beta ) = \sqrt{ \dfrac{3 \beta ^3}{ \pi ^3}} \text{sech}( \beta r) \nonumber$
Demonstrate the wavefunction is normalized.
$\int_{0}^{\infty} \psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta >0} \rightarrow 1 \nonumber$
The terms contributing to the total electronic energy of the helium atom are the kinetic energy of each electron, each electronʹs interaction with the nucleus, and the interaction of the electrons with each other.
Calculate kinetic energy:
$2 \int_{0}^{ \infty} \psi (r, \beta ) \left[ \frac{-1}{2r} \frac{d^2}{dr^2} (r \psi (r, \beta )) \right] 4 \pi r^2 dr |_{simplify}^{assume,~ \beta >0} \rightarrow \frac{1}{3} \beta ^2 \frac{12 + \pi^2}{ \pi^2} \nonumber$
Calculate electron-nucleus potential energy:
$2 \int_{0}^{ \infty} \psi (r, \beta ) \frac{-2}{r} \psi (r, \beta ) 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow (-48) \beta \frac{\ln(2)}{ \pi ^2} \nonumber$
Calculate electron-electron potential energy:
$\int_{0}^{ \infty} \psi (r, \beta )^2 \left( \frac{1}{r} \int_{0}^{r} \psi (x, \beta )^2 4 \pi x^2 dx + \int_{r}^{ \infty} \frac{ \psi (x, \beta )^2 4 \pi x^2}{x} dx \right) 4 \pi r^2 dr \nonumber$
Write the equation for the total electronic energy in terms of the variational parameter $\beta$:
$E( \beta ) = \frac{1}{3} \beta ^2 \frac{12 + \pi ^2}{ \pi ^2} + (-48) \beta \frac{\ln(2)}{ \pi ^2} + \int_{0}^{ \infty} \psi (r, \beta )^2 \left( \frac{1}{r} \int_{0}^{r} \psi (x, \beta )^2 4 \pi x^2 dx + \int_{r}^{ \infty} \frac{ \psi (x, \beta )^2 4 \pi x^2}{x} dx \right) 4 \pi r^2 dr \nonumber$
Minimize the energy with respect to the variational parameter $\beta$.
$\beta$ := 1 $\beta$ := Minimize(E, $\beta$) $\beta$ = 1.902 E( $\beta$) = -2.672
Compare the variational calculation to the Hartree-Fock limit: $E_{HF} = -2.8617$
$\frac{E_{HF} - E( \beta )}{E_{HF}} = 6.614 \% \nonumber$
Compare optimized trial wavefunction with the Hartree-Fock wavefunction (see McQuarrie and Simon, page 283) by plotting the radial distribution functions.
$\Phi (r) = 0.75738\exp(-1.430r) + 0.43658 \exp(-2.4415r) + 0.17295 \exp (-4.0996r) - 0.02730 \exp (-6.4843r) + 0.06675 \exp (-7.978r) \nonumber$
10.27: Trigonometric Trial Wavefunction for the Hydrogen Atom
Trial wave function:
$\Psi (r, \beta ) := \sqrt{ \frac{3 \beta ^3}{ \pi^3}} sech( \beta r) \nonumber$
Integral:
$\int_{0}^{ \infty} \blacksquare 4 \pi r^2 dr \nonumber$
Kinetic energy operator:
$T = \frac{-1}{2r} \frac{d^2}{dr^2} (r \blacksquare ) \nonumber$
Potential energy operator:
$V = \frac{-1}{r} \nonumber$
a. Demonstrate the wave function is normalized.
$\int_{0}^{ \infty} \Psi (r, \beta )^2 4 \pi r^2 dr |_{simplify}^{assume,~ \beta > 0} \rightarrow 1 \nonumber$
b. Evaluate the variational integral.
$E( \beta ):= \int_{0}^{ \infty} \Psi (r, \beta ) [ \frac{-1}{2r} \frac{d^2}{dr^2} (r \Psi (r, \beta ))] 4 \pi r^2 dr + \int_{0}^{ \infty} \Psi (r, \beta ) \frac{-1}{r} \Psi (r, \beta ) 4 \pi r^2 dr~|_{simplify}^{assume,~ \beta >0} \rightarrow \frac{1}{6} \beta \frac{12 \beta + \beta \pi ^2 - 72 \ln(2)}{ \pi^2} \nonumber$
c. Minimize the energy with respect to the variational parameter $\beta$.
$\beta$ := 1 $\beta$ := Minimize(E, $\beta$) $\beta$ = 1.141 E( $\beta$) = -0.481
d. The exact ground state energy for the hydrogen atom is -.5Eh. Calculate the percent error.
$\frac{-.5 - E( \beta )}{-.5} (100) = 3.8 \% \nonumber$
e. Compare optimized trial wave function with the exact solution by plotting the radial distribution functions.
$S(r) := \frac{1}{ \sqrt{ \pi}} exp(-r) \nonumber$
f. Calculate the kinetic and potential energy contributions for the trial wave function. Is the virial theorem satisfied?
Kinetic energy:
$\int_{0}^{ \infty} \Psi (r, \beta )[ \frac{-1}{2r} \frac{d^2}{dr^2} ( r \Psi (r, \beta ))] 4 \pi r^2 dr = 0.481 \nonumber$
Potential energy:
$\int_{0}^{ \infty} \Psi (r, \beta ) \frac{-1}{r} \Psi (r, \beta ) 4 \pi r^2 dr = -0.962 \nonumber$
Yes, the virial theorem is satisfied: $T = -E = \frac{-V}{2}$.
g. Given that the virial theorem is (of course) satisfied for the exact solution, explain the deficiency of the trigonometric trial wave function.
For the exact solution T = 0.500 and V = -1.00. Thus, while the kinetic energy is lower by 0.019 for the trial wave function, its potential energy is higher by twice that amount. As can be seen in the graph above, an electron in the trial wave function spends less time close to the nucleus.
10.28: Hydrogen Atom Calculation Assuming the Electron is a Particle in a Sphere of Radius R
Trial wave function:
$\Phi (r, R) := \frac{1}{ \sqrt{2 \pi R}} \frac{ \sin ( \frac{ \pi r}{R})}{r} \nonumber$
Integral:
$\int_{0}^{ \infty} \blacksquare 4 \pi r^2 dr \nonumber$
Kinetic energy operator:
$T = - \frac{1}{2r} \frac{d^2}{dr^2} (r \blacksquare ) \nonumber$
Potential energy operator:
$V = \frac{-1}{r} \nonumber$
Demonstrate the wave function is normalized.
$\int_{0}^{R} \Phi (r, R)^2 4 \pi r^2 dr \rightarrow 1 \nonumber$
Set up the variational energy integral.
$E(R) := \int_{0}^{R} \Phi (r, R) [ \frac{-1}{2r} \frac{d^2}{dr^2} (r \Phi (r, R))] 4 \pi r^2 dr + \int_{0}^{R} \Phi (r, R) \frac{-1}{r} \Phi (r, R) 4 \pi r^2 dr \nonumber$
Minimize the energy with respect to the variational parameter R.
R := 1 R := Minimize(E, R) R = 4.049 E(R) = -0.301
The exact ground state energy for the hydrogen atom is -.5 Eh. Calculate the percent error.
$\frac{-.5 - E(R)}{-.5} = 39.793 \% \nonumber$
Compare optimized trial wave function with the exact solution by plotting the radial distribution functions.
$S(r) := \frac{1}{ \sqrt{ \pi}} exp(-r)~~~r := 0,.02..4.2 \nonumber$
10.29: Electronic Structure - Variational Calculations on the Lithium Atom
The electronic structure of lithium is $1s^2 2s^1$.
One Parameter Estimation
The trial hydrogenic 1s and 2s orbitals are as follows:
\begin{align} \psi(1s) &= \sqrt{\dfrac{\alpha^3}{\pi}} e^{-\alpha r} \label{1} \[4pt] \psi(2s) &= \sqrt{\dfrac{\alpha^3}{32 \pi}} (2-\alpha r) e^{\frac{-\alpha r}{2}} \label{2} \end{align}
If these orbitals are used, the variational expression for the lithium atom energy $E(\alpha$) is given below.
$E(\alpha) = \alpha^2 - 2 Z \alpha + \dfrac{5}{8} \alpha + \dfrac{\alpha^2}{8} - \dfrac{Z \alpha}{4} + \dfrac{34 \alpha}{81} \label{3}$
Minimize energy with respect to the variational parameter, $\alpha$.
$\dfrac{d}{d\alpha} E(\alpha) = 0 \label{4}$
with
• Nuclear charge: $Z = 3$
• Seed value for $\alpha = Z$ for minimization
results in a minimum at $\alpha = 2.5357$ with $E(\alpha) = -7.2333\;E_h$.
This simple one-parameter variational calculation is in error by 3.27% relative to the experimental value, measured as the energy necessary to fully ionize the atom (i.e., the sum of the three ionization energies in Table A6).
\begin{align} E_{exp}&= \dfrac{(-5.392 - 75.638 - 122.451)\; \text{eV}}{27.2114\; \text{eV}/E_h} \[4pt] &= -7.4778\;E_h \label{5} \end{align}
$\left| \dfrac{E(\alpha)-E_{exp}}{E_{exp}} \right| = 3.2695 \% \label{6}$
As expected, the variational estimate lies above the true ground-state energy (unless the exact wavefunction is expressible by the trial wavefunction, in which case the exact energy is recovered).
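The one-parameter result is easy to reproduce numerically; a sketch that simply minimizes Equation 3 with Z = 3:

```python
# Sketch: minimize the one-parameter lithium energy expression (Equation 3, Z = 3).
from scipy.optimize import minimize_scalar

Z = 3.0
E = lambda a: a**2 - 2*Z*a + (5/8)*a + a**2/8 - Z*a/4 + (34/81)*a
res = minimize_scalar(E, bounds=(1.0, 4.0), method="bounded")
print(res.x, res.fun)   # ~2.5357, ~-7.2333 E_h
```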
Two Parameter Estimation
It is possible to improve the results by using a two-parameter calculation in which the 2s electron has a different scale factor than the 1s electrons. In other words, the electronic structure would be $1s(\alpha)^2 2s(\beta)^1$.
$\psi(1s) = \sqrt{\dfrac{\alpha^3}{\pi}} e^{-\alpha r} \label{7}$
$\psi(2s) = \sqrt{\dfrac{\beta^3}{32 \pi}} (2-\beta r) e^{\frac{-\beta r}{2}} \label{8}$
This calculation was first published by E. Bright Wilson (J. Chem. Phys 1, 210 (1933)) in 1933.
• Nuclear charge: $Z= 3$
• Seed values
• $\alpha = Z$
• $\beta=Z- 1$ (lower, to account for screening of the nucleus by the 1s electrons)
When the wave function for the $1s(\alpha)^22s(\beta)$ electron configuration is written as a Slater determinant, the following variational integrals arise.
$T_{1s}(\alpha) = \dfrac{\alpha^2}{2} \label{9a}$
$T_{2s}(\beta) = \dfrac{\beta^2}{8}\label{9b}$
$V_{N1s}(\alpha)=-Z \alpha \label{9c}$
$V_{N2s}(\beta) = \dfrac{-Z \beta}{4} \label{9d}$
$V_{1s1s}(\alpha) = \dfrac{5}{8} \alpha \label{9e}$
$V_{1s2s}(\alpha,\beta)= \alpha\beta \dfrac{\beta^4 + 10 \alpha \beta^3 + 8 \alpha^4 + 20 \alpha^3 \beta + 12 \alpha^2\beta^2}{(2\alpha + \beta)^5} \label{9f}$
$T_{1s2s} (\alpha,\beta) = -4 \sqrt{2} \alpha^{\frac{5}{2}} \beta^{\frac{5}{2}} \dfrac{\beta - 4 \alpha}{(2 \alpha + \beta)^4} \label{9g}$
$V_{N1s2s}(\alpha,\beta) = -4 \sqrt{2}\, Z \alpha^{\frac{3}{2}} \beta^{\frac{3}{2}} \dfrac{2\alpha - \beta}{(2\alpha + \beta)^3} \label{9h}$
$V_{1112}(\alpha,\beta) =32 \sqrt{2} \alpha^{\frac{3}{2}} \beta^{\frac{3}{2}} \dfrac{-28 \alpha^3\beta + 264 \alpha^4 - 21 \alpha \beta^3 - \beta^4 -86\alpha^2\beta^2}{(2 \alpha + \beta)^3(\beta + 6\alpha)^4} \label{9i}$
$V_{1212}(\alpha,\beta) = 16 \alpha^3\beta^3 \dfrac{13 \beta^2 + 20 \alpha^2 -30 \beta\alpha}{(\beta + 2 \alpha)^7} \label{9j}$
$S_{1s2s}(\alpha,\beta) = 32 \sqrt{2} \alpha^{\frac{3}{2}} \beta^{\frac{3}{2}} \dfrac{\alpha -\beta}{(2\alpha + \beta)^4}\label{9k}$
The next step in this calculation is to collect these terms in an expression for the total energy of the lithium atom and then minimize it with respect to the variational parameters, $\alpha$ and $\beta$. The results of this minimization procedure are shown below
$E'(\alpha,\beta) = 2T_{1s}(\alpha) + T_{2s}(\beta) - 2T_{1s}(\alpha) S_{1s2s}(\alpha, \beta)^2 - 2 T_{1s2s}(\alpha, \beta) S_{1s2s}(\alpha, \beta) \nonumber$
$+ 2 V_{N1s}(\alpha) + V_{N2s}( \beta) - V_{N1s}(\alpha) S_{1s2s}(\alpha, \beta)^2 - 2 V_{N1s2s}(\alpha, \beta) S_{1s2s}(\alpha, \beta) \nonumber$
$+ 2 V_{1s2s}(\alpha, \beta) + V_{1s1s}(\alpha) - 2 V_{1112}(\alpha, \beta) S_{1s2s}(\alpha, \beta) - V_{1212}(\alpha, \beta) \label{10a}$
$E''(\alpha,\beta)= 1- S_{1s2s}(\alpha, \beta)^2 \label{10b}$
$E(\alpha,\beta) = \dfrac{E'(\alpha,\beta)}{E''(\alpha,\beta)} \label{11}$
Minimization of $E(\alpha,\beta)$ simultaneously with respect to $\alpha$ and $\beta$.
$\dfrac{\partial}{\partial\alpha} E(\alpha,\beta) = \dfrac{\partial}{\partial\beta} E(\alpha,\beta) = 0 \label{12}$
results in
• $\alpha$: 2.6797
• $\beta$: 1.8683
• $E(\alpha,\beta)$: $-7.3936\;E_h$
Comparison with experiment value of $-7.4778\; E_h$ in Equation 5:
$\left| \dfrac{E(\alpha,\beta)-E_{exp}}{E_{exp}} \right| = 1.1258 \% \label{13}$
This result is slightly different from that reported by Wilson in 1933.
• $\alpha$: 2.686
• $\beta$: 1.776
• $E(\alpha,\beta$): -7.3922
These parameters give a slightly higher energy than those above, so it is likely that Wilson did not quite locate the energy minimum.
Reference
1. E. Bright Wilson, J. Chem. Phys. 1, 210 (1933).
10.30: The Variation Method in Momentum Space
The following normalized trial wavefunction is proposed for a variational calculation on the harmonic oscillator.
$\psi (x, a) := \sqrt{ \frac{1}{a}} exp( \frac{-|x|}{a}) \nonumber$
$\int_{- \infty}^{ \infty} \psi (x, a)^2 dx~~~assume,~a > 0 \rightarrow 1 \nonumber$
However, the graph below shows a cusp at x = 0, indicating that the wavefunction is not well‐behaved and therefore cannot be used for quantum mechanical calculations.
Therefore, the wavefunction is Fourier transformed into the momentum representation.
$\Phi (p, a) := \int_{- \infty}^{ \infty} \frac{exp(-ipx)}{ \sqrt{2 \pi}} \sqrt{ \frac{1}{a}} exp( \frac{-|x|}{a}) dx~|_{simplify}^{assume,~a>0} \rightarrow (-a^{ \frac{1}{2}}) \frac{2^{ \frac{1}{2}}}{(ipa-1) \pi ^{ \frac{1}{2}} (ipa +1)} \nonumber$
Normalization is checked and the function is graphed.
$\int_{- \infty}^{ \infty} \Phi (p, a)^2 dp~~~assume,~a > 0 \rightarrow 1 \nonumber$
The momentum wavefunction appears to be well‐behaved, so a variational calculation will be carried out in momentum space.
Assuming m = k = 1 and h = 2π, we have for the harmonic oscillator in momentum space:
• Momentum space integral: $\int_{- \infty}^{ \infty} \blacksquare dp$
• Momentum operator: $p \blacksquare$
• Kinetic energy operator: $\frac{p^2}{2}$
• Position operator: $i \frac{d}{dp} \blacksquare$
• Potential energy operator: $\frac{-1}{2} \frac{d^2}{dp^2} \blacksquare$
Evaluate the energy integral in the momentum representation:
$E(a) := \int_{- \infty}^{ \infty} \Phi (p, a) \frac{p^2}{2} \Phi (p, a) dp... + \int_{- \infty}^{ \infty} \Phi (p, a) \frac{-1}{2} \frac{d^2}{dp^2} \Phi (p, a) dp~|_{assume,~a >0}^{simplify} \rightarrow \frac{1}{4} \frac{2 + a^4}{a^2} \nonumber$
Minimize energy with respect to the variational parameter:
a := 1 a := Minimize (E, a) a = 1.189 E(a) = 0.707
Display optimum wavefunction along with exact wavefunction:
$Exact(p) := \frac{1}{ \pi ^{ \frac{1}{4}}} e^{ \frac{-1}{2} p^2} \nonumber$
Naturally the agreement with the exact solution is not favorable because of the poor quality of the original coordinate space wavefunction.
$\frac{E(a) - 0.5}{0.5} = 41.421 \% \nonumber$
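The momentum-space integrals can be cross-checked by quadrature. The sketch below uses the equivalent real form Φ(p, a) = √(2a/π)/(1 + a²p²) of the transformed wavefunction (the complex factors in the symbolic result above multiply out to this) and enters its second derivative analytically:

```python
# Sketch: momentum-space variational energy for the harmonic oscillator by quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def E(a):
    phi = lambda p: np.sqrt(2.0 * a / np.pi) / (1.0 + a**2 * p**2)
    d2phi = lambda p: np.sqrt(2.0 * a / np.pi) * (-2.0 * a**2 / (1.0 + a**2 * p**2)**2
                                                  + 8.0 * a**4 * p**2 / (1.0 + a**2 * p**2)**3)
    T, _ = quad(lambda p: 0.5 * p**2 * phi(p)**2, -np.inf, np.inf)      # p^2/2 term
    V, _ = quad(lambda p: phi(p) * (-0.5) * d2phi(p), -np.inf, np.inf)  # -1/2 d^2/dp^2 term
    return T + V

res = minimize_scalar(E, bounds=(0.2, 5.0), method="bounded")
print(res.x, res.fun)   # ~1.189, ~0.707
```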
10.31: Momentum-Space Variation Method for Particle in a Gravitational Field
The following problem deals with a particle of unit mass in a gravitational field with acceleration due to gravity equal to 1.
Energy operator for particles near Earth's surface:
$\frac{-1}{2 \mu} \frac{d^2}{dz^2} \blacksquare + z \blacksquare \nonumber$
Trial wave function:
$\Psi ( \alpha , z) := 2 \alpha ^{ \frac{3}{2}} z exp(- \alpha z) \nonumber$
Fourier-transform the position wave function into momentum space:
$\Phi ( \alpha , p) := \frac{1}{ \sqrt{2 \pi}} \int_{0}^{ \infty} exp(-ipz) \Psi ( \alpha , z) dz |_{simplify}^{assume,~ \alpha > 0} \rightarrow \frac{2 ^{ \frac{1}{2}}}{ \pi ^{ \frac{1}{2}}} \frac{ \alpha ^{ \frac{3}{2}}}{(ip + \alpha )^2} \nonumber$
Demonstrate that the momentum wave function is normalized.
$\int_{- \infty}^{ \infty} \overline{ \Phi ( \alpha , p)} \Phi ( \alpha , p) dp~~assume,~ \alpha > 0 \rightarrow 1 \nonumber$
Energy operator in momentum space:
$\frac{p^2}{2} \blacksquare + i \frac{d}{dp} \blacksquare \nonumber$
Evaluate the variational expression for the energy:
$E( \alpha ) := \int_{- \infty}^{ \infty} \overline{ \Phi ( \alpha , p)} \frac{p^2}{2} \Phi ( \alpha , p) dp ... + \int_{- \infty}^{ \infty} \overline{ \Phi ( \alpha , p)} i ( \frac{d}{dp} \Phi ( \alpha , p)) dp \nonumber$
Minimize energy with respect to variational parameter $\alpha$:
$\alpha$ := 1 $\alpha$ := Minimize(E, $\alpha$) $\alpha$ = 1.145 E( $\alpha$) = 1.966
This momentum-space result is in exact agreement with the coordinate-space result. The exact value for the energy is 1.856.
$\frac{E ( \alpha) - 1.856}{1.856} = 5.9 \% \nonumber$
10.32: Momentum-Space Variation Method for the Abs(x) Potential
The energy operator in atomic units in coordinate space for a unit mass particle with potential energy V = |x| is given below.
$H = \frac{-1}{2} \frac{d^2}{dx^2} \blacksquare + |x| \blacksquare \nonumber$
Suggested trial wave function:
$\Psi (x, \beta ) := ( \frac{2 \beta}{ \pi})^{ \frac{1}{4}} exp(- \beta x^2) \nonumber$
Demonstrate that the wave function is normalized.
$\int_{- \infty}^{ \infty} \Psi (x, \beta )^2 dx~~assume,~ \beta >0 \rightarrow 1 \nonumber$
Carry out Fourier transform to get momentum wave function:
$\Phi (p, \beta ) := \frac{1}{ \sqrt{2 \pi}} \int_{- \infty}^{ \infty} exp(-ipx) \Psi (x, \beta ) dx |_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{2} \frac{2^{ \frac{3}{4}}}{ \pi ^{ \frac{1}{4}}} \frac{e^{ \frac{-p^2}{4 \beta}}}{ \beta ^{ \frac{1}{4}}} \nonumber$
Demonstrate that the momentum wave function is normalized.
$\int_{- \infty}^{ \infty} \overline{ \Phi (p, \beta )} \Phi (p, \beta ) dp~~~assume,~ \beta > 0 \rightarrow 1 \nonumber$
The energy operator in momentum space is:
$H = \frac{p^2}{2} \blacksquare + \left| i \frac{d}{dp} \blacksquare \right| \nonumber$
Evaluate the variational energy integral:
$E( \beta ) := \int_{- \infty}^{ \infty} \overline{ \Phi (p, \beta )} \frac{p^2}{2} \Phi (p, \beta ) dp + \int_{- \infty}^{ \infty} \overline{ \Phi (p, \beta )} \left| i \frac{d}{dp} \Phi (p , \beta ) \right| dp~|_{simplify}^{assume,~ \beta >0} \rightarrow \frac{1}{2} \frac{ \pi^{ \frac{1}{2}} \beta^{ \frac{3}{2}} + 2^{ \frac{1}{2}}}{ \beta^{ \frac{1}{2}} \pi^{ \frac{1}{2}}} \nonumber$
Minimize the energy with respect to the variational parameter β and report its optimum value and the ground-state energy.
$\beta$ := 1 $\beta$ := Minimize (E, $\beta$) $\beta$ = 0.542 E( $\beta$) = 0.813
Plot the coordinate and momentum wave functions and the potential energy on the same graph.
10.33: Variational Method for the Feshbach Potential
Define potential energy: $V_0 := 2.5$ $d := 0.5$ $V(x) := V_0 \tanh \left( \dfrac{x}{d}\right)^2$
Display potential energy:
Choose Gaussian trial wavefunction:
$\psi (x, \beta ) = \left( \frac{2 \beta}{ \pi} \right) ^{ \frac{1}{4}} exp ( - \beta x^2) \nonumber$
Demonstrate that the trial wavefunction is normalized.
$\int_{- \infty}^{ \infty} \psi (x, \beta )^2 dx~~~assume,~ \beta > 0 \rightarrow 1 \nonumber$
Evaluate the variational integral.
$E( \beta ) = \int_{ - \infty}^{ \infty} \psi (x, \beta ) \frac{-1}{2} \frac{d^2}{dx^2} \psi (x, \beta ) dx + \int_{- \infty}^{ \infty} V(x) \psi (x, \beta )^2 dx \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ = 1 $\beta$ = Minimize (E, $\beta$) $\beta$ = 0.913 E( $\beta$) = 1.484
Calculate the % error given that numerical integration of Schrödingerʹs equation (see next tutorial) yields E = 1.44949 Eh.
$\frac{E( \beta ) - 1.44949}{1.44949} \times 100 = 2.36 \nonumber$
Display wavefunction in the potential well.
Calculate the probability that tunneling is occurring.
$V(x) = 1.484 |_{float,~3}^{solve,~x} \rightarrow {\begin{pmatrix} -0.511 \ 0.511 \end{pmatrix}} \nonumber$
$2 \int_{0.511}^{ \infty} \psi (x, \beta )^2 dx = 0.329 \nonumber$
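A quadrature version of the same calculation (a sketch; the kinetic energy is computed by parts as ½∫ψ′² dx):

```python
# Sketch: Gaussian-trial variational energy for the Feshbach potential, V0 = 2.5, d = 0.5.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

V0, d = 2.5, 0.5
V = lambda x: V0 * np.tanh(x / d)**2

def E(b):
    psi  = lambda x: (2.0 * b / np.pi)**0.25 * np.exp(-b * x**2)
    dpsi = lambda x: -2.0 * b * x * psi(x)
    T, _ = quad(lambda x: 0.5 * dpsi(x)**2, -np.inf, np.inf)
    U, _ = quad(lambda x: V(x) * psi(x)**2, -np.inf, np.inf)
    return T + U

res = minimize_scalar(E, bounds=(0.1, 5.0), method="bounded")
print(res.x, res.fun)   # ~0.913, ~1.484
```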
10.34: Numerical Solution for the Feshbach Potential
Parameters: xmax := 5 m := 1 V0 := 2.5 $\mu$ := 0 d := .5
Potential energy:
$V(x) = V_{0} tanh \left( \frac{x}{d}\right)^{2} \nonumber$
Given:
$\frac{-1}{2m} \left( \frac{d^2}{dx^2} \psi (x) \right) + V(x) \psi (x) = E \psi (x) \nonumber$
$\psi \left( -x_{max} \right) = 0~~ \psi ' \left( -x_{max} \right) = 0.1 \nonumber$
$\psi = Odesolve (x, x_{max}) \nonumber$
Normalize wavefunction:
$\psi (x) = \frac{ \psi (x)}{ \sqrt { \int_{0}^{x_{max}} \psi (x)^2 dx}} \nonumber$
Enter energy guess: E = 1.44949
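For readers without Mathcad, here is a sketch of the same integration as a shooting calculation with scipy.integrate.solve_ivp; E is the energy guess, and in practice it would be adjusted until ψ decays to zero at +xmax:

```python
# Sketch: shooting-method analogue of the Odesolve block for the Feshbach potential.
import numpy as np
from scipy.integrate import solve_ivp

xmax, V0, d, E = 5.0, 2.5, 0.5, 1.44949
V = lambda x: V0 * np.tanh(x / d)**2

def schrodinger(x, y):
    psi, dpsi = y
    return [dpsi, 2.0 * (V(x) - E) * psi]   # psi'' = 2 (V - E) psi for m = 1

sol = solve_ivp(schrodinger, [-xmax, xmax], [0.0, 0.1], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])   # ~0 when E is an eigenvalue; otherwise the tail diverges
```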
10.35: First Order Degenerate Perturbation Theory - the Stark Effect of the Hydrogen Atom
The n = 2 level of the hydrogen atom is 4-fold degenerate with energy -0.125 $E_h$. In terms of the $|nlm \rangle$ quantum numbers these states are $|2,0,0\rangle$, $|2,1,0\rangle$, $|2,1,1 \rangle$, and $| 2,1,-1 \rangle$. An electric field in the z-direction splits the degeneracy because it mixes the $2s$ and the $2p_z$ orbitals, creating one linear combination polarized in the direction of the field and the other polarized against the field.
What happens is that the $s$ and $p$ wavefunctions mix to produce eigenstates that have shifted centers. This means the atom acquires an induced electric dipole moment, whose interaction with the external field either lowers or raises the eigenenergy.
The $|2,0,0\rangle$ wavefunction is spherically symmetric (left), while the $|2,1,0 \rangle$ wavefunction has two lobes where the wavefunction has different signs. If the applied field is strong, then the eigenstates will be even mixtures of these, but with different phases.
Note in particular that the electronic center of charge has moved from the origin, which means the states have nonzero dipole moments. With the electric field pointing downwards, the state to the left has a lower energy and the one to the right is raised.
Degenerate Perturbation Theory
The Hamiltonian for this perturbation in atomic units is:
$H^{\prime}= εz, \nonumber$
which in spherical polar coordinates is:
$H^{\prime} = ε r\cos(θ), \nonumber$
where $ε$ is the electric field strength.
In this perturbation method treatment the hydrogen atom eigenfunctions are used to evaluate the matrix elements associated with the total Hamiltonian,
$H = H^o + H^{\prime} \nonumber$
Since the results for $H^o$ are known (-0.125 $E_h$) only the matrix elements for $H^{\prime}$ need to be evaluated, and most of these are zero. Below we show that $\langle 2s | H^{\prime} | 2p_z \rangle = \langle 2p_z | H^{\prime} | 2s \rangle = -3 \varepsilon$ and that the other matrix elements involving the $n = 2$ orbitals are equal to zero.
$\psi_{2s} (r) = \frac{1}{ \sqrt{32 \pi}} (2-r) \exp \left( \frac{-r}{2} \right) \nonumber$
$\psi_{2p_z} (r, \theta ) = \frac{1}{ \sqrt{32 \pi}} (r) \exp \left( \frac{-r}{2} \right) \cos ( \theta ) \nonumber$
$\psi_{2p_x} (r, \theta , \phi ) = \frac{1}{ \sqrt{32 \pi}} (r) \exp \left( \frac{-r}{2} \right) \sin ( \theta ) \cos ( \phi ) \nonumber$
$\psi_{2p_y} (r, \theta , \phi ) = \frac{1}{ \sqrt{32 \pi}} (r) \exp \left( \frac{-r}{2} \right) \sin ( \theta ) \sin ( \phi ) \nonumber$
$\langle 2s | H^{\prime} | 2s \rangle = 0$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2s} (r) \varepsilon r \cos ( \theta ) \psi_{2s} (r) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\langle 2p_z | H^{\prime} | 2p_z \rangle = \langle 2p_y | H^{\prime} | 2p_y \rangle = \langle 2p_x | H^{\prime} | 2p_x \rangle = 0$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2pz} (r, \theta ) \varepsilon r \cos ( \theta ) \psi_{2pz} (r, \theta ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2py} (r, \theta , \phi ) \varepsilon r \cos ( \theta ) \psi_{2py} (r, \theta , \phi ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2px} (r, \theta , \phi ) \varepsilon r \cos ( \theta ) \psi_{2px} (r, \theta , \phi ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\langle 2s | H^{\prime} | 2p_z \rangle = -3ε$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2s} (r) \varepsilon r \cos ( \theta ) \psi_{2pz} (r, \theta ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow -3 \varepsilon \nonumber$
$\langle 2s | H^{\prime} | 2p_x \rangle = \langle 2s | H^{\prime} | 2p_y \rangle = 0$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2s} (r) \varepsilon r \cos ( \theta ) \psi_{2px} (r, \theta , \phi ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2s} (r) \varepsilon r \cos ( \theta ) \psi_{2py} (r, \theta , \phi ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\langle 2p_x | H^{\prime} | 2p_y \rangle = \langle 2p_x | H^{\prime} | 2p_z \rangle = \langle 2p_y | H^{\prime} | 2p_z \rangle = 0$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2px} (r, \theta , \phi ) \varepsilon r \cos ( \theta ) \psi_{2py} (r, \theta , \phi ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2px} (r, \theta , \phi ) \varepsilon r \cos ( \theta ) \psi_{2pz} (r, \theta ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2py} (r, \theta , \phi ) \varepsilon r \cos ( \theta ) \psi_{2pz} (r, \theta ) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
The matrix elements of the 4x4 perturbation matrix are
$\langle ψ_i | H^o + H^{\prime} | ψ_j \rangle, \nonumber$
where the ψʹs are the 2s, 2pz, 2px, and 2py hydrogen atomic orbitals. Using the values of the integrals evaluated above the perturbation matrix is formed and its eigenvalues and eigenvectors found.
$\begin{pmatrix} -0.125-E & -3 \varepsilon & 0 & 0 \ -3 \varepsilon & -0.125-E & 0 & 0\ 0 & 0 & -0.125-E & 0\ 0 & 0 & 0 & -0.125-E \end{pmatrix} \begin{pmatrix} c_1\ c_2\ c_3\ c_4 \end{pmatrix} = 0 \nonumber$
This 4x4 energy matrix is clearly one 2x2 and two 1x1 energy matrices. In other words, as we learned from evaluating the matrix elements, the 2px and 2py are not perturbed by the electric field to first order and have energy ‐0.125 Eh.
The eigenvectors and eigenvalues of the 2x2 are found as follows.
$\begin{bmatrix} (-0.125-E)c_1-3 \varepsilon c_2 = 0)\ -3 \varepsilon c_1 + (-0.125 - E) c_2 = 0)\ c_1^2 + c_2^2 = 1 \end{bmatrix} |_{float,~3}^{solve,~\begin{pmatrix} c_1\ c_2\ E \end{pmatrix}} \rightarrow \begin{pmatrix} -0.707 & 0.707 & 3.0 \varepsilon-0.125 \ 0.707 & -0.707 & 3.0 \varepsilon-0.125 \ 0.707 & 0.707 & -3.0 \varepsilon-0.125 \ -0.707 & -0.707 & -3.0 \varepsilon-0.125 \end{pmatrix} \nonumber$
The wavefunctions of the perturbed $2s$ and $2p_z$ orbitals are $s$-$p_z$ hybrid states, as shown below.
$\frac{1}{ \sqrt{2}} (2s + 2p_{z})~~~E = (-0.125 - 3 \varepsilon ) E_{h} \nonumber$
$\frac{1}{ \sqrt{2}} (2s - 2p_{z})~~~E = (-0.125 + 3 \varepsilon ) E_{h} \nonumber$
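For a concrete field strength the 4×4 perturbation matrix can be diagonalized numerically. A NumPy sketch with the illustrative value ε = 0.01 a.u. (not from the original), basis order (2s, 2p_z, 2p_x, 2p_y):

```python
# Sketch: numerical diagonalization of the first-order Stark matrix.
import numpy as np

eps = 0.01
H = np.array([[-0.125, -3*eps,  0.0,    0.0  ],
              [-3*eps, -0.125,  0.0,    0.0  ],
              [ 0.0,    0.0,   -0.125,  0.0  ],
              [ 0.0,    0.0,    0.0,   -0.125]])
vals, vecs = np.linalg.eigh(H)
print(vals)   # -0.125 - 3*eps, -0.125 (twice), -0.125 + 3*eps
```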
Because the energy of the symmetric 1s state is unaffected by the electric field, the effect of this perturbation on the electronic spectrum of hydrogen is to split the n = 1 to n = 2 transition into three lines of relative intensity 1:2:1.
$\langle 1s | H^{\prime} | 1s \rangle = 0$
$\psi_{1s} (r) = \frac{1}{\sqrt{ \pi}} exp(-r) \nonumber$
$\int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{1s} (r) \varepsilon r \cos ( \theta ) \psi_{1s} (r) r^2 \sin ( \theta ) d \phi \,d \theta \,dr \rightarrow 0 \nonumber$
10.36: Variational Calculation for the Polarizability of the Hydrogen Atom
In this exercise the polarizability of the hydrogen atom is calculated according to the procedure outlined in Problem 8‐31 in the second edition of McQuarrieʹs Quantum Chemistry. In the interest of clarity of mathematical expression atomic units are used:
$e = me = h/2π = 4πε_o = 1. \nonumber$
Polarizability, $α$, is a measure of the distortion of the electron density in the presence of an electric field. The interaction (perturbation) energy due to a field of strength ε with the hydrogen atom electron is easily shown to be:
$E = \frac{- \alpha \varepsilon ^2}{2} \nonumber$
Given that the ground state energy of the hydrogen atom is ‐0.5, in the presence of the electric field we would expect the electronic energy of the perturbed hydrogen atom to be,
$E_{H~atom} = \frac{-1}{2} - \frac{ \alpha \varepsilon ^2}{2} \nonumber$
It is assumed that the field direction is along the z axis. In this case the operator for the interaction of the external field with the electron density is, in spherical coordinates,
$H' = \varepsilon r \cos ( \theta ) \nonumber$
Thus the total energy operator for the hydrogen atom in the presence of an electric field is this term plus the kinetic and electron‐nucleus operator.
$H = \frac{-1}{2r} \frac{d^2}{dr^2} (r \blacksquare ) - \frac{1}{2 r^2 \sin ( \theta )} \frac{d}{d \theta} \left( \sin ( \theta ) \frac{d}{d \theta} \blacksquare \right) - \frac{1}{2 r^2 \sin ( \theta )^2} \frac{d^2}{d \phi ^2} \blacksquare - \frac{1}{r} \blacksquare + \varepsilon r \cos ( \theta ) \blacksquare \nonumber$
The empty place holders indicate the location of the wave function to be operated on.
In the absence of the electric field the hydrogen atom is in the 1s electronic state. The field distorts (polarizes) the electron density and this can be modeled by assuming that in the presence of the external field the electron is in a state which is a superposition of the 1s and 2pz electronic states.
$\Phi (r) = c_{1} \psi _{1s} (r) + c_2 \psi _{2pz} (r, \theta ) \nonumber$
$\psi _{1s} (r) = \frac{1}{ \sqrt{ \pi}} exp(-r)~~~ \psi_{2pz} (r, \theta ) = \frac{1}{ \sqrt{32 \pi}} (r) exp \left( - \frac{r}{2}\right) \cos ( \theta ) \nonumber$
Within the variational method, using such a trial wave function requires solving the following secular determinant.
$\begin{vmatrix} H_{11}-ES_{11} & H_{12}-ES_{12}\ H_{21}-ES_{21} & H_{22} - ES_{22} \end{vmatrix} = 0 \nonumber$
Due to normalization, orthogonality and symmetry we know that: H12 = H21; S11 = S22 = 1; S12 = S21 = 0. We now evaluate H11, H12 and H22 and solve the secular determinant.
$H_{11} = \int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{1s} (r) \begin{bmatrix} - \frac{1}{2r} \frac{d^2}{dr^2} (r \psi_{1s}(r))\ + \frac{-1}{2r^2 \sin( \theta)} \frac{d}{d \theta} \big[ \sin ( \theta ) \frac{d}{d \theta} \psi_{1s} (r) \big]\ + \frac{-1}{2 r^2 \sin( \theta)^2 } \frac{d^2}{d \phi^2} \psi_{1s} (r) \end{bmatrix} r^2 \sin( \theta) d \phi \,d \theta \,dr ... + \int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0} ^{2 \pi} \psi_{1s} (r) \left( - \frac{1}{r} + \varepsilon r \cos ( \theta) \right) \psi_{1s} (r) r^2 \sin ( \theta) d \phi \,d \theta \,dr \rightarrow \frac{-1}{2} \nonumber$
$H_{12} = \int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{1s} (r) \begin{bmatrix} - \frac{1}{2r} \frac{d^2}{dr^2} (r \psi_{2pz}(r, \theta ))\ + \frac{-1}{2r^2 \sin( \theta)} \frac{d}{d \theta} \big[ \sin ( \theta ) \frac{d}{d \theta} \psi_{2pz} (r, \theta) \big]\ + \frac{-1}{2 r^2 \sin( \theta)^2 } \frac{d^2}{d \phi^2} \psi_{2pz} (r, \theta ) \end{bmatrix} r^2 \sin( \theta) d \phi \,d \theta \,dr ... + \int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0} ^{2 \pi} \psi_{1s} (r) \left( - \frac{1}{r} + \varepsilon r \cos ( \theta) \right) \psi_{2pz} (r, \theta ) r^2 \sin ( \theta) d \phi \,d \theta \,dr \rightarrow \frac{128}{243} 2^{ \frac{1}{2}} \varepsilon \nonumber$
$H_{22} = \int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0}^{2 \pi} \psi_{2pz} (r, \theta) \begin{bmatrix} - \frac{1}{2r} \frac{d^2}{dr^2} (r \psi_{2pz}(r, \theta ))\ + \frac{-1}{2r^2 \sin( \theta)} \frac{d}{d \theta} \big[ \sin ( \theta ) \frac{d}{d \theta} \psi_{2pz} (r, \theta) \big]\ + \frac{-1}{2 r^2 \sin( \theta )^2 } \frac{d^2}{d \phi^2} \psi_{2pz} (r, \theta ) \end{bmatrix} r^2 \sin( \theta) d \phi \,d \theta \,dr ... + \int_{0}^{ \infty} \int_{0}^{ \pi} \int_{0} ^{2 \pi} \psi_{2pz} (r, \theta ) \left( - \frac{1}{r} + \varepsilon r \cos ( \theta) \right) \psi_{2pz} (r, \theta ) r^2 \sin ( \theta) d \phi \,d \theta \,dr \rightarrow \frac{-1}{8} \nonumber$
Solving the secular determinant for the energy eigenvalues:
$\begin{vmatrix} H_{11}-E & H_{12} \ H_{12} & H_{22} - E \end{vmatrix} = 0~ \bigg|_{float,~4}^{solve,~E} \rightarrow \begin{bmatrix} -0.3125 + 2.572 \times 10^{-4} \left( 5.314 \times 10^{5} + 8.389 \times 10^{6} \varepsilon ^2 \right)^{ \frac{1}{2}} \ -0.3125 - 2.572 \times 10^{-4} \left( 5.314 \times 10^{5} + 8.389 \times 10^{6} \varepsilon ^2 \right)^{ \frac{1}{2}} \end{bmatrix} \nonumber$
The lowest energy eigenvalue is expanded in $\varepsilon$ in order to compare the variational calculation with equation 2.
$-0.3125 - 2.572 \times 10^{-4} \left( 5.314 \times 10^{5} + 8.389 \times 10^{6} \varepsilon ^2 \right)^{ \frac{1}{2}}~ \big|_{series,~ \varepsilon} \rightarrow -0.5 - 1.48 \varepsilon ^2 \nonumber$
This comparison shows that, in atomic units, α has the value 2.96. This result is in error by about 35% when compared to the accepted value of 4.5 atomic units. Although it is not apparent when atomic units are used, this calculation does reveal that atomic polarizability is proportional to atomic volume.
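The numbers above are easy to check independently. The following SymPy sketch (my own cross-check, not part of the original Mathcad worksheet) rebuilds the secular determinant from the three evaluated matrix elements and extracts α from the ε² coefficient of the ground-state root:

```python
# Minimal SymPy check of the secular-equation result, assuming the evaluated
# matrix elements quoted above: H11 = -1/2, H22 = -1/8, H12 = (128*sqrt(2)/243)*eps.
import sympy as sp

eps = sp.symbols('epsilon', real=True)
H11, H22 = sp.Rational(-1, 2), sp.Rational(-1, 8)
H12 = sp.Rational(128, 243) * sp.sqrt(2) * eps

# Lower root of the 2x2 secular determinant (the overlap matrix is the identity)
E_ground = (H11 + H22)/2 - sp.sqrt(((H11 - H22)/2)**2 + H12**2)

# Compare with E = -1/2 - (alpha/2)*eps^2 to read off the polarizability
series = sp.series(E_ground, eps, 0, 3).removeO()
alpha = -2 * series.coeff(eps, 2)
print(series)         # exact expression equivalent to -1/2 - 1.48*eps**2
print(float(alpha))   # ~2.96
```

The 35% error reflects the limited flexibility of a two-term basis; enlarging the trial function with further p-type terms drives α toward the accepted 4.5.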
10.37: Hybrid Variational Calculation for the 1D Hydrogen Atom with Delta Function Potential
The following normalized trial wave function is used in a variational calculation for the energy of a one‐dimensional model of the hydrogen atom that postulates a delta‐function potential energy interaction between the electron and the proton. The variational parameter, $\alpha$, is a decay constant which controls the spatial extent of the wave function.
$\Psi (x, \alpha ) = \sqrt{ \alpha} exp( - \alpha |x|) \nonumber$
$\int_{- \infty}^{ \infty} \Psi (x, \alpha)^{2} dx~assume,~ \alpha > 0 \rightarrow 1 \nonumber$
The Hamiltonian energy operator in coordinate space in atomic units ($\frac{h}{2 \pi} = m_e = e = 1$) is:
$H = \frac{-1}{2} \frac{d^2}{dx^2} \blacksquare - \Delta (x) \blacksquare \nonumber$
The problem this tutorial addresses is that the coordinate wave function is unsuitable for the calculation of the expectation value for kinetic energy because it is not well-behaved at x = 0: the wave function has a cusp there, so its first derivative is discontinuous and its second derivative is singular. In spite of this defect, calculation of the expectation value for potential energy presents no problem in coordinate space.
Therefore the plan is to calculate the potential energy in coordinate space and Fourier transform the coordinate wave function into momentum space, where the wave function is well behaved, for the calculation of kinetic energy. Then the energy will be minimized with respect to $\alpha$, yielding the following results: $\alpha$ = 1, E = ‐0.5.
Using several values for x and the optimum value of $\alpha$ it is shown that solving Schrödinger's equation in the coordinate representation for the energy gives the correct value except at the discontinuity, x = 0.
$x = 0~~~ \frac{-1}{2} \frac{d^2}{dx^2} \Psi (x, 1) - \Delta (x) \Psi (x, 1) = E \Psi (x, 1)~solve,~E \rightarrow 0 \nonumber$
$x = 2~~~ \frac{-1}{2} \frac{d^2}{dx^2} \Psi (x, 1) - \Delta (x) \Psi (x, 1) = E \Psi (x, 1)~solve,~E \rightarrow \frac{-1}{2} \nonumber$
$x = -1~~~ \frac{-1}{2} \frac{d^2}{dx^2} \Psi (x, 1) - \Delta (x) \Psi (x, 1) = E \Psi (x, 1)~solve,~E \rightarrow \frac{-1}{2} \nonumber$
Calculation of Potential Energy in Coordinate Space
$V( \alpha ) = \int_{- \infty}^{ \infty} \Psi (x, \alpha ) \left( - \Delta (x) \right) \Psi (x, \alpha ) dx \rightarrow - \alpha \nonumber$
Fourier Transform of the Coordinate Space Wave Function into Momentum Space
$\Phi (p, \alpha ) = \int_{- \infty}^{ \infty} \frac{exp(-ipx)}{ \sqrt{2 \pi}} \Psi (x, \alpha )dx ~\big|_{simplify}^{assume,~ \alpha > 0} \rightarrow \frac{ \sqrt{2} \alpha ^{ \frac{3}{2}}}{ \sqrt{ \pi} ( \alpha^2 + p^2)} \nonumber$
Before proceeding, we demonstrate that the momentum space wavefunction is normalized and well behaved.
$\int_{- \infty}^{ \infty} \Phi (p, \alpha )^2 dp~assume,~ \alpha > 0 \rightarrow 1 \nonumber$
The kinetic energy operator in momentum space for an electron is: $\frac{p^2}{2} \blacksquare$
Therefore, the kinetic energy is:
$T ( \alpha ) = \int_{- \infty}^{ \infty} \Phi (p, \alpha ) \frac{p^2}{2} \Phi (p, \alpha ) dp~ assume,~ \alpha > 0 \rightarrow \frac{ \alpha ^2}{2} \nonumber$
Now the coordinate and momentum space calculations are combined and the total energy is minimized with respect to the variational parameter, $\alpha$.
$E ( \alpha ) = T ( \alpha ) + V ( \alpha ) \nonumber$
$\alpha = \frac{d}{d \alpha} E( \alpha ) = 0~solve,~ \alpha \rightarrow 1 \nonumber$
$E ( \alpha ) \rightarrow \frac{-1}{2} \nonumber$
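For readers without Mathcad, here is a minimal SymPy sketch of the same hybrid procedure; it is my transcription, not the original worksheet, and the momentum function is entered as the Fourier-transform result quoted above:

```python
# Hybrid variational sketch: potential energy in coordinate space, kinetic
# energy in momentum space, then minimization over alpha.
import sympy as sp

x, p = sp.symbols('x p', real=True)
a = sp.symbols('alpha', positive=True)

psi = sp.sqrt(a) * sp.exp(-a * sp.Abs(x))                        # Psi(x, alpha)
phi = sp.sqrt(2) * a**sp.Rational(3, 2) / (sp.sqrt(sp.pi) * (a**2 + p**2))

# Delta-function potential: the integral picks out -|Psi(0)|^2
V = -sp.integrate(psi**2 * sp.DiracDelta(x), (x, -sp.oo, sp.oo))    # -alpha

# Momentum space: normalization and kinetic energy
norm = sp.integrate(phi**2, (p, -sp.oo, sp.oo))                     # 1
T = sp.integrate((p**2 / 2) * phi**2, (p, -sp.oo, sp.oo))           # alpha**2/2

E = T + V
a_opt = sp.solve(sp.diff(E, a), a)[0]
print(norm, V, T)                 # 1, -alpha, alpha**2/2
print(a_opt, E.subs(a, a_opt))    # 1, -1/2
```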
The optimum coordinate and momentum wavefunctions are compared below: x = -6, -5.99 .. 6
The uncertainty principle is illustrated by displaying the coordinate and momentum wave functions for different values of the decay constant, $\alpha$. For $\alpha$ = 1.2 the spatial distribution contracts and the momentum distribution expands relative to the optimum value $\alpha$ =1. In other words, less uncertainty in position requires greater uncertainty in momentum.
For $\alpha$ = 0.7 the spatial distribution expands and the momentum distribution contracts. More uncertainty in position leads to less uncertainty in momentum.
Numerically the Heisenberg uncertainty principle states that the product of the uncertainties in position and momentum must be greater than or equal to 0.5 in atomic units.
$\Delta x = \sqrt{ \langle x^2 \rangle - \langle x \rangle^2} \nonumber$
$\Delta p = \sqrt{ \langle p^2 \rangle - \langle p \rangle^2} \nonumber$

$\Delta x \Delta p \geq \frac{ \hbar}{2} \nonumber$
The momentum wave function is now used to verify that the uncertainty principle is satisfied.
$\begin{pmatrix} Operator & Coordinate~Space & Momentum~Space\ position & x \blacksquare & i \frac{d}{dp} \blacksquare\ momentum & \frac{1}{i} \frac{d}{dx} \blacksquare & p \blacksquare \end{pmatrix} \nonumber$
$x_{ave} = \int_{- \infty}^{ \infty} \Phi (p, 1) i \frac{d}{dp} \Phi (p, 1) dp \rightarrow 0 \nonumber$
$x2_{ave} = \int_{- \infty}^{ \infty} \Phi (p, 1) \left( - \frac{d^2}{dp^2} \Phi (p, 1) \right) dp \rightarrow \frac{1}{2} \nonumber$

$p_{ave} = \int_{- \infty}^{ \infty} p~ \Phi (p, 1)^2 dp \rightarrow 0 \nonumber$
$p2_{ave} = \int_{- \infty}^{ \infty} p^2 \Phi (p, 1)^2 dp \rightarrow 1 \nonumber$
$\sqrt{x2_{ave} - x_{ave}^2} \sqrt{p2_{ave} - p_{ave}^2} = 0.707 \nonumber$
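These four integrals can likewise be checked with a short SymPy sketch (again my transcription, using the optimum Φ(p, 1) quoted above):

```python
# Uncertainty-product check in momentum space with alpha = 1.
import sympy as sp

p = sp.symbols('p', real=True)
phi = sp.sqrt(2) / (sp.sqrt(sp.pi) * (1 + p**2))    # Phi(p, 1)

x_ave  = sp.integrate(phi * sp.I * sp.diff(phi, p), (p, -sp.oo, sp.oo))   # 0
x2_ave = sp.integrate(phi * -sp.diff(phi, p, 2), (p, -sp.oo, sp.oo))      # 1/2
p_ave  = sp.integrate(p * phi**2, (p, -sp.oo, sp.oo))                     # 0
p2_ave = sp.integrate(p**2 * phi**2, (p, -sp.oo, sp.oo))                  # 1

product = sp.sqrt(x2_ave - x_ave**2) * sp.sqrt(p2_ave - p_ave**2)
print(product, float(product))   # sqrt(2)/2, 0.707 >= 0.5
```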
Concluding Remarks
This hybrid variational calculation on a one-dimensional model of the hydrogen atom using a delta function potential interaction between the proton and electron yields the correct ground state energy. The model is composed of two peculiar elements: a trial wave function with a cusp (a discontinuous first derivative) at x = 0 and a potential energy interaction that is zero everywhere except at x = 0.
In Molecular Quantum Mechanics, 3rd ed., p. 43 Atkins and Friedman list the following criteria for an acceptable wave function.
1. It must be single valued (strictly Ψ*Ψ must be single valued).
2. It must not be infinite over a finite range.
3. It must be continuous everywhere.
4. It must have a continuous first derivative, except at ill‐behaved regions of the potential.
Ψ(x,α) has a discontinuous first derivative at x = 0, making it impossible to calculate the kinetic energy in the coordinate representation. Criterion 4 tolerates this defect only because the delta-function potential is ill-behaved at exactly that point, so it is possible to calculate the potential energy in the coordinate representation. Φ(p,α) is well-behaved for all values of p. It is used to calculate the kinetic energy, but it is not suitable for the calculation of the potential energy. That's why a hybrid variational calculation was used.
10.38: Variation Method Using the Wigner Function - Finite Potential Well
Define potential energy:
$V(x) = if \left[ (x \geq -1)(x \leq 1),~0,~2.0 \right] \nonumber$
Display potential energy:
Choose trial wave function:
$\psi (x, \beta ) = \left( \frac{2 \beta}{ \pi} \right)^{ \frac{1}{4}} exp( - \beta x^2) \nonumber$
Calculate the Wigner distribution function:
$W(x, p, \beta ) = \frac{1}{2 \pi} \int_{- \infty}^{ \infty} \psi \left( x+ \frac{s}{2} , \beta \right) exp(isp) \psi \left( x - \frac{s}{2}, \beta \right) ds~ \bigg|_{assume,~ \beta > 0}^{simplify} \rightarrow \frac{1}{ \pi} e^{ \frac{-1}{2} \frac{ 4 \beta ^2 x^2 + p^2}{ \beta}} \nonumber$
Evaluate the variational integral:
$E( \beta ) = \int_{- \infty}^{ \infty} \int_{- \infty}^{ \infty} W(x, p, \beta ) \left( \frac{p^2}{2} + V(x) \right) dx~dp \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ = 1 $\beta$ = Minimize (E, $\beta$) $\beta$ = 0.678 E( $\beta$) = 0.538
Calculate and display the coordinate distribution function:
$P (x, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dp \nonumber$
Probability that tunneling is occurring:
$2 \int_{1}^{ \infty} P (x, \beta ) dx = 0.1 \nonumber$
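The minimization can be reproduced without Mathcad. The sketch below (my own Python/SciPy illustration) uses the fact that the p-integral of this Gaussian Wigner function is analytic, so only the x-integration is done numerically; the well parameters (edges at ±1, barrier V0 = 2.0) follow the definition above.

```python
# Variational minimization over the Wigner function for the finite well.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

V0 = 2.0                                       # barrier height outside |x| <= 1

def Px(x, beta):                               # coordinate marginal of W(x, p, beta)
    return np.sqrt(2*beta/np.pi) * np.exp(-2*beta*x**2)

def E(beta):
    # <p^2/2> = beta/2 for this Wigner function; V contributes V0 times the
    # probability of finding the particle outside the well.
    outside = 2 * quad(Px, 1, np.inf, args=(beta,))[0]
    return beta/2 + V0 * outside

res = minimize_scalar(E, bounds=(0.05, 5), method='bounded')
tunnel = 2 * quad(Px, 1, np.inf, args=(res.x,))[0]
print(res.x, res.fun, tunnel)                  # ~0.678, ~0.538, ~0.10
```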
Calculate and display the momentum distribution function:
$Pp (p, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dx \nonumber$
Display the Wigner distribution function:
N = 60 i = 0 .. N xi = $-3 + \frac{6i}{N}$ j = 0 .. N pj = $-5 + \frac{10j}{N}$ Wigneri, j = W( xi, pj, $\beta$)
10.39: Variation Method Using the Wigner Function - V(x) = |x|
Define potential energy:
$V(x) = |x| \nonumber$
Display potential energy:
Choose trial wave function:
$\psi (x, \beta ) = \left( \frac{2 \beta}{ \pi} \right)^{ \frac{1}{4}} exp( - \beta x^2) \nonumber$
Calculate the Wigner distribution function:
$W(x, p, \beta ) = \frac{1}{2 \pi} \int_{- \infty}^{ \infty} \psi \left(x + \frac{s}{2}, \beta \right) exp(isp) \psi \left( x- \frac{s}{2}, \beta \right) ds~ \bigg|_{assume,~ \beta > 0}^{simplify} \rightarrow \frac{1}{ \pi} e^{ \frac{-1}{2} \frac{4 \beta ^2 x^2 + p^2}{ \beta}} \nonumber$
Evaluate the variational integral:
$E( \beta ) = \int_{- \infty}^{ \infty} \int_{- \infty}^{ \infty} W(x, p, \beta ) \left( \frac{p^2}{2} + V(x) \right)dx~dp~ \bigg|_{simplify}^{assume,~ \beta > 0} \rightarrow \frac{1}{2} \frac{ \beta ^{ \frac{3}{2}} \pi ^{ \frac{1}{2}} + 2 ^{ \frac{1}{2}}}{ \pi ^{ \frac{1}{2}} \beta ^{ \frac{1}{2}}} \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ = 1 $\beta$ = Minimize (E, $\beta$) $\beta$ = 0.542 E( $\beta$) = 0.813
Calculate and display the coordinate distribution function:
$Px(x, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dp \nonumber$
Classical turning points: +/- 0.813
Probability that tunneling is occurring:
$2 \int_{0.813}^{ \infty} Px (x, \beta ) dx = 0.231 \nonumber$
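For this potential everything is available in closed form, so the quoted numbers can be verified symbolically; the following SymPy sketch is my own check, not part of the worksheet:

```python
# Symbolic minimization for V(x) = |x| using the Gaussian coordinate marginal.
import sympy as sp

x = sp.symbols('x', real=True)
b = sp.symbols('beta', positive=True)

Px = sp.sqrt(2*b/sp.pi) * sp.exp(-2*b*x**2)          # marginal of W(x, p, beta)
E = b/2 + 2*sp.integrate(x * Px, (x, 0, sp.oo))      # beta/2 + <|x|>
b_opt = sp.solve(sp.diff(E, b), b)[0]                # (2*pi)**(-1/3)
E_min = E.subs(b, b_opt)
print(float(b_opt), float(E_min))                    # 0.542, 0.813

# Tunneling beyond the classical turning point x = E_min
tunnel = 2 * sp.integrate(Px.subs(b, b_opt), (x, E_min, sp.oo))
print(float(tunnel))                                 # ~0.231
```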
Calculate and display the momentum distribution function:
$Pp(p, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dx \nonumber$
Display the Wigner distribution function:
N = 60 i = 0 .. N xi = $-3 + \frac{6i}{N}$ j = 0 .. N pj = $-5 + \frac{10j}{N}$ Wigneri, j = W( xi, pj, $\beta$)
10.40: Variation Method Using the Wigner Function- The Harmonic Oscillator
Define potential energy:
$V(x) = \frac{x^2}{2} \nonumber$
Display potential energy:
Choose trial wave function:
$\psi (x, \beta ) = \left( \frac{2 \beta}{ \pi} \right)^{ \frac{1}{4}} exp( - \beta x^2) \nonumber$
Calculate the Wigner distribution function:
$W(x, p, \beta ) = \frac{1}{2 \pi} \int_{- \infty}^{ \infty} \psi \left(x + \frac{s}{2}, \beta \right) exp(isp) \psi \left( x- \frac{s}{2}, \beta \right) ds~ \bigg|_{assume,~ \beta > 0}^{simplify} \rightarrow \frac{1}{ \pi} e^{ \frac{-1}{2} \frac{4 \beta ^2 x^2 + p^2}{ \beta}} \nonumber$
Evaluate the variational integral:
$E( \beta ) = \int_{- \infty}^{ \infty} \int_{- \infty}^{ \infty} W(x, p, \beta ) \left( \frac{p^2}{2} + V(x) \right)dx~dp \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ = 1 $\beta$ = Minimize (E, $\beta$) $\beta$ = 0.5 E( $\beta$) = 0.5
Calculate and display the coordinate distribution function:
$Px(x, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dp \nonumber$
Classical turning point: $x_{cl} = 0.5^{ \frac{1}{2}}~~~ x_{cl} = 0.707$
Probability that tunneling is occurring:
$2 \int_{0.707}^{ \infty} Px (x, \beta ) dx = 0.317 \nonumber$
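Because $\langle x^2 \rangle = \frac{1}{4 \beta}$ for this marginal distribution, E(β) = β/2 + 1/(8β) and the quoted numbers follow in a few lines; a minimal SymPy check (mine, not the worksheet's):

```python
# Closed-form check of the harmonic-oscillator minimization.
import sympy as sp

b = sp.symbols('beta', positive=True)
E = b/2 + 1/(8*b)                                  # T + <x^2/2>
b_opt = sp.solve(sp.diff(E, b), b)[0]
print(b_opt, E.subs(b, b_opt))                     # 1/2, 1/2

# Tunneling beyond x_cl = 1/sqrt(2): 1 - erf(sqrt(2*beta)*x_cl)
print(float(1 - sp.erf(sp.sqrt(2*b_opt) / sp.sqrt(2))))   # ~0.317
```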
Calculate and display the momentum distribution function:
$Pp(p, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dx \nonumber$
Display the Wigner distribution function:
N = 60 i = 0 .. N xi = $-3 + \frac{6i}{N}$ j = 0 .. N pj = $-5 + \frac{10j}{N}$ Wigneri, j = W( xi, pj, $\beta$)
10.41: Variation Method Using the Wigner Function - The Quartic Oscillator
Define potential energy:
$V(x) = x^4 \nonumber$
Display potential energy:
Choose trial wave function:
$\psi (x, \beta ) = \left( \frac{2 \beta}{ \pi} \right)^{ \frac{1}{4}} exp( - \beta x^2) \nonumber$
Calculate the Wigner distribution function:
$W(x, p, \beta ) = \frac{1}{2 \pi} \int_{- \infty}^{ \infty} \psi \left(x + \frac{s}{2}, \beta \right) exp(isp) \psi \left( x- \frac{s}{2}, \beta \right) ds~ \bigg|_{assume,~ \beta > 0}^{simplify} \rightarrow \frac{1}{ \pi} e^{ \frac{-1}{2} \frac{4 \beta ^2 x^2 + p^2}{ \beta}} \nonumber$
Evaluate the variational integral:
$E( \beta ) = \int_{- \infty}^{ \infty} \int_{- \infty}^{ \infty} W(x, p, \beta ) \left( \frac{p^2}{2} + V(x) \right)dx~dp \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ = 1 $\beta$ = Minimize (E, $\beta$) $\beta$ = 0.90856 E( $\beta$) = 0.68142
Calculate and display the coordinate distribution function:
$Px(x, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dp \nonumber$
Classical turning points: $x_{cl} = 0.681^{ \frac{1}{4}}~~~ x_{cl} = 0.90842$
Probability that tunneling is occurring:
$2 \int_{0.908}^{ \infty} Px (x, \beta ) dx = 0.08345 \nonumber$
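The quartic case is equally quick to confirm, since $\langle x^4 \rangle = \frac{3}{16 \beta^2}$ for the Gaussian marginal; the sketch below is again my own check:

```python
# Symbolic minimization for V(x) = x^4.
import sympy as sp

x = sp.symbols('x', real=True)
b = sp.symbols('beta', positive=True)

Px = sp.sqrt(2*b/sp.pi) * sp.exp(-2*b*x**2)
E = b/2 + sp.integrate(x**4 * Px, (x, -sp.oo, sp.oo))   # beta/2 + 3/(16*beta^2)
b_opt = sp.solve(sp.diff(E, b), b)[0]                   # (3/4)**(1/3)
print(float(b_opt), float(E.subs(b, b_opt)))            # 0.90856, 0.68142

# Tunneling beyond the classical turning point x_cl = E^(1/4)
x_cl = E.subs(b, b_opt)**sp.Rational(1, 4)
print(float(1 - sp.erf(sp.sqrt(2*b_opt) * x_cl)))       # ~0.083
```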
Calculate and display the momentum distribution function:
$Pp(p, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dx \nonumber$
Display the Wigner distribution function:
N = 60 i = 0 .. N xi = $-3 + \frac{6i}{N}$ j = 0 .. N pj = $-5 + \frac{10j}{N}$ Wigneri, j = W( xi, pj, $\beta$)
10.42: Variation Method Using the Wigner Function - The Feshbach Potential
Define potential energy:
$V_0 = 2.5~~~ d = 0.5~~~ V(x) = V_0 tanh \left( \frac{x}{d} \right) ^2 \nonumber$
Display potential energy:
Choose trial wave function:
$\psi (x, \beta ) = \left( \frac{2 \beta}{ \pi} \right)^{ \frac{1}{4}} exp( - \beta x^2) \nonumber$
Calculate the Wigner distribution function:
$W(x, p, \beta ) = \frac{1}{2 \pi} \int_{- \infty}^{ \infty} \psi \left(x + \frac{s}{2}, \beta \right) exp(isp) \psi \left( x- \frac{s}{2}, \beta \right) ds~ \bigg|_{assume,~ \beta > 0}^{simplify} \rightarrow \frac{1}{ \pi} e^{ \frac{-1}{2} \frac{4 \beta ^2 x^2 + p^2}{ \beta}} \nonumber$
Evaluate the variational integral:
$E( \beta ) = \int_{- \infty}^{ \infty} \int_{- \infty}^{ \infty} W(x, p, \beta ) \left( \frac{p^2}{2} + V(x) \right)dx~dp \nonumber$
Minimize the energy integral with respect to the variational parameter, $\beta$.
$\beta$ = 1 $\beta$ = Minimize (E, $\beta$) $\beta$ = 0.913 E( $\beta$) = 1.484
Calculate and display the coordinate distribution function:
$Px(x, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dp \nonumber$
Classical turning point:
$V_0 tanh \left( \frac{x}{d} \right)^2 = 1.484 \bigg|_{float,~3}^{solve,~x} \rightarrow \begin{pmatrix} -.511\ .511 \end{pmatrix} \nonumber$
Probability that tunneling is occurring:
$2 \int_{0.511}^{ \infty} Px (x, \beta ) dx = 0.329 \nonumber$
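Because tanh² has no simple closed-form Gaussian average, this case is best checked numerically; the sketch below is my own Python/SciPy illustration and assumes the potential V(x) = V0·tanh(x/d)² with V0 = 2.5 and d = 0.5 defined above.

```python
# Numerical variational minimization for the Feshbach potential.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar, brentq

V0, d = 2.5, 0.5
V = lambda x: V0 * np.tanh(x/d)**2

def Px(x, beta):                                   # coordinate marginal of W
    return np.sqrt(2*beta/np.pi) * np.exp(-2*beta*x**2)

def E(beta):
    return beta/2 + quad(lambda x: V(x)*Px(x, beta), -np.inf, np.inf)[0]

res = minimize_scalar(E, bounds=(0.1, 5), method='bounded')
x_cl = brentq(lambda x: V(x) - res.fun, 0, 5)      # classical turning point
tunnel = 2 * quad(Px, x_cl, np.inf, args=(res.x,))[0]
print(res.x, res.fun, x_cl, tunnel)                # ~0.913, ~1.484, ~0.511, ~0.33
```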
Calculate and display the momentum distribution function:
$Pp(p, \beta ) = \int_{- \infty}^{ \infty} W(x, p, \beta ) dx \nonumber$
Display the Wigner distribution function:
N = 60 i = 0 .. N xi = $-3 + \frac{6i}{N}$ j = 0 .. N pj = $-5 + \frac{10j}{N}$ Wigneri, j = W( xi, pj, $\beta$)
Thumbnail: The Copenhagen interpretation of quantum mechanics implies that after a while, the cat is simultaneously alive and dead. (CC BY-SA 3.0; Dhatfield).
11: Miscellaneous

11.01: The Art of Science
Science is valued for its practical advantages, it is valued because it gratifies disinterested curiosity, and it is valued because it provides the contemplative imagination with objects of great aesthetic charm. J. W. N. Sullivan
The popular notion that the sciences are bodies of established fact is entirely mistaken. Nothing in science is permanently established, nothing unalterable, and indeed science is quite clearly changing all the time, and not through the accretion of new certainties. Karl Popper
The progress of science is strewn, like an ancient desert trail, with the bleached skeletons of discarded theories which once seemed to possess eternal life. Arthur Koestler
Popular views of science imply that there exists a mechanical and logically certain relationship between scientific facts and their explanations. According to these views scientific knowledge possesses a certainty which is not to be found in other disciplines such as the humanities or the social sciences. The quotations by Sullivan, Popper and Koestler suggest that there might be more to it than this.
The purpose of this paper is to present an outline of one view of the structure of scientific knowledge and to argue that aesthetics plays an important role in the evolution and progress of scientific thought. The ideas presented here represent a distillation of the thinking of several important scientists — at least those who took the time to explain to others what it was they thought they were doing.
T. H. Huxley, the 19th-century English evolutionist, speaking to a British trade union, described science in the following way:
The aim of science is the discovery of the rational order which pervades the universe. The method of science consists of observation and experimentation for the determination of the facts of Nature. Science uses inductive and deductive reasoning for the discovery of the mutual relations and connections between the facts of nature. In other words, scientists use inductive and deductive reasoning to generate hypotheses and theories which order the facts of nature. Science rests on verified, or more correctly, on uncontradicted hypotheses, and, therefore, an important condition of its progress has been the invention or creation of verifiable hypotheses.
Albert Einstein, a 20th-century physicist, concerned with scientific problems a world apart from Huxley, described science in much the same way.
The supreme task of the scientist is to arrive at those universal elementary laws from which the physical cosmos can be built up by pure deduction. His work thus falls into two parts. He must first discover the laws and then draw the conclusions which follow from them. For the second of these tasks he receives an admirable training at school. However, the first, namely that of establishing the starting point of his deductions, is of an entirely different nature. Here there is no method capable of being learned and systematically applied so that it leads to the goal. There is no logical path to these principles; only intuition, resting on a sympathetic understanding of experience can reach them.
Carl Hempel, a contemporary philosopher of science, said much the same thing.
There are no generally applicable "rules of induction" by which hypotheses or theories can be mechanically derived or inferred directly from empirical data. The transition from data to theory requires creative imagination. Theories are not derived from the data, but invented in order to account for them.
In the diagram below I have attempted to provide a graphical representation of these statements on the structure of scientific knowledge.
The essential point is that induction is not a logically rigorous process; it is incapable of leading us with certainty to the truth. As Max Jammer has said, "The fact that all past futures have resembled past pasts does not guarantee that all future futures will resemble future pasts."
Another way to put this is to say that scientific hypotheses go beyond the facts, always claiming more than is justified. In a certain sense they are works of fiction. Thus, Peter Medawar has described scientific reasoning as "an exploratory dialogue that can always be resolved into two voices or episodes of thought, imaginative and critical, which alternate and interact." Scientific thinking is a dialogue between what might be and what actually is.
The objectivity of scientific knowledge is preserved by its critical voice, by the requirement that scientific creations must ultimately face reality. In this critical episode, a scientific theory cannot really be confirmed or proven true, it can only survive; survive at least until the next confrontation with reality. According to Karl Popper, observation and experiment in science serve as critical tests of hypotheses rather than inductive bases for them. "The logic of science and an essential criterion for its progress is the falsification of conjectures."
In the light of the above considerations, Einstein posed the following question.
If, then, it is true that the axiomatic basis of science cannot be extracted from experience but must be freely invented, can we ever hope to find the right way?
This was, of course, a rhetorical question. Einstein demanded two things of a scientific theory: external confirmation and internal perfection. A theory must not only be consistent with experiment, it must also be pleasing to the mind. Simplicity and beauty, he argued, can guide the scientific thinker toward the truth. Einstein was not being immodest when he said of his own General Theory of Relativity, "No one who fully understands this theory can escape its magic."
Later, when asked if he was troubled by the early lack of experimental confirmation of his General Theory, he replied,
Such questions did not lie in my path. The result could not be otherwise than correct. I was only concerned with putting the theory into a lucid form. I did not for one second doubt that it would agree with observation. The sense of the thing was too evident.
More recently, Paul Dirac, a successor to Newton as Lucasian Professor of Mathematics at Cambridge (some great physicists are apparently really mathematicians) and co-winner of the 1933 Nobel prize in physics with Erwin Schrödinger, described Schrödinger's seminal achievement in the following way.
Schrödinger got his equation by pure thought, looking for some beautiful generalization of De Broglie's idea, and not by keeping close to the experimental development of the subject... It seems that if one is working from the point of view of getting beauty in one's equations, and if one has really sound insight, one is on a sure line of progress.
We have come to call Schrödinger's work "wave mechanics" and it does for the nano-world of atoms and molecules what Newton's Laws of Motion do for the macro-world of solar systems, pendulums and billiard balls. At about the same time as Schrödinger's work came out, Werner Heisenberg, a brilliant, young (full professor at Leipzig at the age of 25; Nobel prize at 31) physicist created an alternative approach to the nano-world called "matrix mechanics." Schrödinger found Heisenberg's approach "repulsive and distasteful" and threatened to quit physics if "matrix mechanics" held sway. Subsequently it was shown (by Schrödinger!) that the two theories were formally equivalent, suggesting that in science beauty is also in the eye of the beholder.
Immediately after this period, Niels Bohr, Max Born and Dirac added to the contributions of Schrödinger and Heisenberg to create a more comprehensive theory called "quantum mechanics." It is widely regarded as the most successful scientific theory ever. Like any good scientific theory, it has been found to be very helpful in interpreting experimental results and in serving as a guide for further inquiry. But, it wasn't good enough for Einstein. Recall that in his opinion external confirmation was only one aspect of a healthy scientific theory. Quantum mechanics failed his other criterion. He expressed his dissatisfaction frequently, but perhaps never more poignantly as in the following comment he made to Max Born.
Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one.' I, at any rate, am convinced that He is not playing at dice.
Quantum mechanics challenges our traditional ideas of objectivity and causality, and Einstein claimed these as the fundamental principles of all science. He did not believe that God would make a world based on quantum mechanical principles. For almost forty years Einstein remained quantum mechanics’ foremost critic, isolating himself in his mature years from the rest of the creative scientific community. The best experimental evidence we have today indicates that Einstein's intuition led him astray this time. Several of quantum mechanics' most bizarre predictions with regard to the principle of cause and effect at the nanoscopic level have been confirmed experimentally. Long before this evidence was available Bohr became exasperated by Einstein's continual criticisms and retorted, "Einstein, stop telling God what to do!"
A final example recounts the peculiar role that aesthetics and creative imagination played in the birth of modern astronomy. Johann Kepler, a contemporary of Galileo and Shakespeare, was driven by two Pythagorean convictions in his attempts to explain celestial dynamics. These were that the structure of the solar system could be modeled using the five perfect solids and that the planetary motions could be interpreted in terms of the musical harmonies. Guided by these obsessions, Kepler ultimately discovered the three laws of planetary motion for which he is famous today. But these early examples of modern scientific laws, which eventually had such a profound impact on the evolution of physical science, were so deeply embedded in his semi-mystical writings that few of his contemporaries found them. And Kepler himself considered them to be of secondary importance to the grandiose models he built on the basis of the perfect solids and the musical harmonies. Newton and his generation did find them, recognized their significance and used them to build a new and powerful science.
The following quotations, the first by Jacob Bronowski and the second by Einstein, summarize the position presented in this paper.
Science is basically an artistic endeavor. It has all the freedom of any other imaginative endeavor. The artist and the scientist both live at the edge of mystery, surrounded by it. Both struggle to make order out of the chaos.
Science as an existing, finished product is the most objective, most un-personal thing human beings know. But science as something coming into being, as aim, is just as subjective and psychologically conditioned as any other of man's efforts.
Published in St. John’s Magazine, Winter 1986. Revised January 2006.
11.02: Mass-Energy Equivalence
This derivation of the mass-energy equivalence equation is based on the analyses of an elementary photon emission event by two sets of inertial observers as shown in the figures below.
A block which is stationary with respect to observers Ao and Bo emits photons of equal frequency in opposite directions.
According to these observers the energy and momentum changes of the block are as follows:
$\Delta E^{o}=-2\cdot h\nu$ $\Delta p^{o}=-\Delta p_{\gamma}^{o}=-(\nu _{B}^{o}-\nu_{A}^{o})\cdot \frac{h}{c}=0$ $\nu_{A}^{o}=\nu _{B}^{o}=\nu$
The block is moving with velocity v with respect to observers A and B.
A and B observe photon frequencies shifted by 1/k and k respectively, where k is the optical Doppler shift factor. Their determinations of ΔE and Δp are given below. As there was no block recoil for Ao and Bo, the relativity principle requires the same for A and B. This means that the momentum change for these observers is due to a change in mass of the block, Δp = vΔm.
$\Delta E=-h\nu _{A}-h\nu _{B}=-(k+\frac{1}{k})\cdot h\nu$
$\Delta p=v\Delta m=-(\nu _{B}-\nu _{A})\cdot \frac{h}{c}=-(k-\frac{1}{k})\cdot \frac{h\nu }{c}$
These equations and the optical Doppler shift factor k yield the mass-energy equivalence relation.
$k=\sqrt{\frac{1+\frac{v}{c}}{1-\frac{v}{c}}}$
$\Delta E=-(k+\frac{1}{k})\cdot h\nu$
substitute, $h\nu =\frac{-c\cdot v\cdot \Delta m}{k-\frac{1}{k}}$ | substitute, $k=\sqrt{\frac{1+\frac{v}{c}}{1-\frac{v}{c}}}$
simplify: $\Delta E=\Delta m\cdot c^{2}$
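The substitutions are easy to verify symbolically; this SymPy sketch simply transcribes the two substitutions above (my check, not part of the original derivation):

```python
# Symbolic verification that the two substitutions give Delta E = Delta m * c^2.
import sympy as sp

v, c, k, dm, hnu = sp.symbols('v c k Delta_m h_nu', positive=True)

dE = -(k + 1/k) * hnu
dE = dE.subs(hnu, -c * v * dm / (k - 1/k))
dE = dE.subs(k, sp.sqrt((1 + v/c) / (1 - v/c)))
print(sp.simplify(dE))    # Delta_m*c**2
```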
The differences in the energy and momentum changes determined by the two sets of observers are restated below (recall that k > 1).
$\Delta E^{o}=-2\cdot h\nu$ $\Delta p^{o}=0$ $\Delta E=-(\frac{k^{2}+1}{k})\cdot h\nu$ $\Delta p=-(\frac{k^{2}-1}{k})\cdot \frac{h\nu }{c}$
These results bring the principles of energy and momentum conservation into question. However, these principles are the foundation of modern science, so they are preserved by recognizing that energy and mass are like two currencies with an exchange rate of c2, E = mc2. In releasing energy (photons) the block is also releasing mass (Δm = ΔE/c2). Therefore the kinetic energy of the moving block decreases. This is the source of the extra energy loss in the reference frame in which the block is moving. It is also the explanation for the negative change in momentum in that reference frame.
*This presentation is based on the following paper: Daniel J. Steck and Frank Rioux, "An elementary development of mass-energy equivalence," Am. J. Phys. 51(5), 461 (1983).
11.03: Commentary on Probing the Orbital Energy of an Electron in an Atom
We wish to comment on a recent article by Bills, “Probing the Orbital Energy of an Electron in an Atom” (1). Bills’ thesis is that the behavior of electrons in atoms can be successfully analyzed using classical concepts. For example, he writes
• A theoretical snapshot of an atom, showing the screened nuclear charge and the electron to be ionized at its radius of zero kinetic energy, enables anyone to approximate its ionization energy.
• Each eigenvalue is the constant sum of classical values of potential and kinetic energy.
• The classical potential energy, $V(R)$, is independent of $n$, but the classical kinetic energy, $T_n(R)$, depends on $n$.
• When the electron $i$ reaches $r_0$, much of the charge of electron $j$ is distributed within $r_0$.
From these statements we infer that the electron is executing a classical trajectory; that it has well-defined classical values for position, momentum, and kinetic and potential energy. Unfortunately, this classical picture violates accepted and validated quantum mechanical principles.
To support his model Bills draws on the authority of John C. Slater by referencing Slater’s classic monograph on the quantum theory of atomic structure (2). We acknowledge that Slater strove to provide an intuitive meaning to quantum theory by exploiting classical ideas. He was one of the early users of quantum theory in the search for an understanding of atomic and molecular structure, and so we should not be surprised that he attempted to use the classical concepts of kinetic and potential energy as guides for interpretive purposes. However, today we know that this program is not viable; classical concepts cannot provide an acceptable model for the stability and structure of atoms and molecules, nor their interaction with electromagnetic radiation.
We live in a macroscopic, classical world and are therefore challenged by the nonclassical model of the nanoscopic world of atoms and molecules that quantum mechanics requires. Peter Atkins said it well recently in his foreword to Jim Baggott’s most recent book. (3)
No other theory of the physical world has caused such consternation as quantum theory, for no other theory has so completely overthrown the previously cherished concepts of classical physics and our everyday interpretation of reality.
Along the same lines Niels Bohr once said that if you are not shocked by quantum mechanics, you do not understand what it is saying.
We now articulate in detail our objections to the classical model that Bills proposes.
By assigning classical meaning to $T(r)$ and $V(r)$, and identifying a special electron position, r0, where its kinetic energy is zero, Bills contradicts accepted quantum mechanical ideas regarding the behavior of electrons in atoms. The wavefunctions of atomic electrons are not eigenfunctions of the position, momentum, kinetic energy or potential energy operators. Consequently, according to quantum mechanics, the physical properties represented by these operators do not have well-defined values. Therefore, it is impossible to attach any physical significance to the values of $T(r)$ or $V(r)$ shown in Figure 1 of Bills’ paper because they are neither eigenvalues nor expectation values of their respective operators.
The only physically meaningful entries in Table 1 of Bills’ paper are the calculated orbital energies and the experimental ionization energies. As Figure 1 shows, a good Hartree-Fock wavefunction gives a constant orbital energy and therefore a reliable estimate, according to Koopmans’ theorem, for the ionization energy. What is the real meaning of $r_0$?
It is simply the inflection point of the wavefunction, nothing more and nothing less. Initially defining r0 as the electron’s radius of zero kinetic energy, Bills goes on to identify r0 with atomic size in two places – one explicitly and one implicitly.
• Each $r_0$ measures the orbital size of the weakest-held electron.
• This $r_0$ is analogous to the classical turning point of the harmonic oscillator.
The latter statement implies that the electron has reached its apogee and is turning back in the direction of the nucleus, again suggesting a classical trajectory. However, a serious difficulty emerges if one associates the calculated r0 values with atomic size. The r0 values in Table 1 of Bills’ paper are significantly larger than the literature values for the atomic radii for the chemically active elements, while for the inert gases they are significantly smaller than the literature values for the atomic radii. (4) This doesn’t make physical sense.
Bills’ r0 is physically meaningless because his model violates the basic quantum mechanical principles that govern the nanoscopic world of atoms and molecules. This is easily seen by looking at the ionization process in terms of two fundamental physical principles which hold both classically and quantum mechanically: They are energy conservation
$\Delta E=\Delta T+\Delta V \nonumber$
and the virial theorem
$\Delta E=\frac{\Delta V}{2}=-\Delta T. \nonumber$
Under Bills’ model with $\Delta T=0$ the first equation says that $\Delta E=\Delta V$, while the second equation says $\Delta E=\frac{\Delta V}{2}=-\Delta T=0$!
Bills’ model also violates the uncertainty principle. If the position of the electron is precisely known, the uncertainty in momentum and therefore kinetic energy must be infinitely large. In other words, an electron cannot have a well-defined position (r0) at the same time it has a precise value for kinetic energy (zero). In addition, since he takes a classical view of electronic behavior in the atom Bills is left with the challenging problem of assigning meaning to the negative kinetic energies that result for values of r greater than r0 (see Bills’ Figure 1).
A quantum mechanically correct description of the behavior of electrons in atoms and molecules has been provided by Harris (5):
Electrons are characterized by their entire distributions (called wavefunctions or orbitals) rather than by instantaneous positions and velocities: an electron may be considered always to be (with appropriate probability) at all points of its distribution (which does not vary with time).
“There is no space-time inside the atom,” is Heisenberg’s succinct summary of the electron’s behavior in the atom. Pascal Jordan provided further insight by stating that we measure the position of the electron, not to find out where it is, but to cause it to be somewhere.
In exploring the message of quantum theory Anton Zeilinger recently wrote (6):
It is not just that we are unable to measure two complementary quantities of a particle, such as position and momentum, at the same time. Rather the assumption that a particle possesses both position and momentum, before the measurement is made, is wrong.
Thus it is impossible, within the quantum mechanical view, to assign a classical trajectory to an electron confined in an atom or molecule. In fact, assigning such a trajectory to an electron calls into question the stability of matter because an orbiting electron would continuously radiate energy and (according to classical physics) collapse into the nucleus. Bohr famously remarked that the stability of matter is “a pure miracle when considered from the standpoint of classical physics.”
In summary, classical concepts fail at the atomic and molecular level because they cannot account for the stability and internal electronic structure of atoms and molecules, nor the interaction of matter with electromagnetic radiation. This has been known for more than a century. It is well beyond time to abandon classical models of the nano-world and teach our students atomic and molecular structure from the quantum mechanical perspective. Richard Feynman made this point forcibly in his inimitable colloquial style when he said, (7)
And I’m not happy with all the analyses that go with just the classical theory, because nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical…
Literature Cited
1. Bills, J. J. Chem. Educ. 2006, 83, 473-476.
2. Slater, J. C. Quantum Theory of Atomic Structure; McGraw-Hill: New York, 1960; Vol. 1.
3. Baggott, J. E. Beyond Measure: Modern Physics, Philosophy, and the Meaning of Quantum Theory; Oxford University Press: New York, 2004.
4. Emsley, J. The Elements, 3rd ed.; Clarendon Press: Oxford, 1998.
5. Harris, F. E. “Molecules,” Encyclopedia of Physics, 2nd ed.; Lerner, R. G.; Trigg, G. L. Ed.; VCH Publishers, Inc.: New York, 1990; p 763.
6. Zeilinger, A. Nature, 2005, 438, 743.
7. Feynman, R. P. International Journal of Theoretical Physics 1982, 21, 467-488.
11.04: The Use of Models in Introductory Chemistry
The Journal of Chemical Education has published three responses (1,2,3) to a paper Roger DeKock and I published stressing the importance of kinetic energy in interpreting trends in atomic ionization energies (4). We have previously responded to Carlton (5, 6), and I would now like to respond to Gillespie, Moog, and Spencer. We have jointly submitted a response to John P. Lowe, a copy of which can be found on this page.
For the most part Gillespie, Moog, and Spencer (1) defend themselves against criticisms we do not make and ignore the major argument we do make. For example, we do not criticize the use of the shell model to teach the most basic elements of atomic structure in introductory chemistry courses. However, we do challenge the use of the shell model for purposes that extend beyond its range of validity.
Among these would be the use of the shell model to explain the trend of ionization energies within a given shell and the more fundamental issue of atomic stability. With regard to the former, one asks why is the ionization energy for He less than twice that for H? With regard to the latter, one asks why doesn't the electron collapse into the nucleus under the electrostatic force of attraction between the electron and the nucleus? The shell model has much to offer as a pedagogical tool to answer some rudimentary questions about atomic and molecular structure, but it cannot answer these questions. Quantum mechanics is required to answer questions of this type as we demonstrated in our critique.
Gillespie, Moog, and Spencer concede that our criticism is valid within the context of quantum mechanics, implying that quantum mechanics is just another model on the menu of models that could be used to explain atomic and molecular phenomena. For example, they say,
At the general chemistry level it is probably sufficient to state simply that the high IE of He with respect to H is due to the greater attraction of the nucleus for electrons, offset by the repulsion between electrons.
Unfortunately this is not a valid explanation as we demonstrated in our critique. In attempting to reach and educate all of the constituencies of general chemistry, we should not over-simplify the concepts because it makes them easier to teach. Teaching simple models that students can easily digest is tempting, but these easily digestible models are frequently fundamentally incorrect. Perhaps these incorrect models do no great harm at the introductory level simply because general chemistry is a terminal course for most of those who take it, and the retention half-life for the good or the bad content is relatively short. However, over-simplifications must eventually be un-taught, or retracted, at some later time in the education of the chemistry major or others who study chemistry beyond the general chemistry sequence. The explanation by Gillespie et al. for the H/He ionization ratio falls into this category. It has to be retracted eventually, so why teach it at all?
Why not simply state that the answer to the question of the trend in ionization energies within the shells (or the explanation of atomic stability) is too complicated for general chemistry students? Valid explanations require quantum mechanics which, for chemistry majors, will be encountered in the junior or senior year. Furthermore, quantum mechanics is not just another mouse click on the model menu; it is the benchmark theory, the one against which all other approximate models are judged. For example, in their seminal treatise on molecular mechanics Burkert and Allinger wrote (8),
Calculations that do not use the Schrödinger equation are acceptable only to the extent that they reproduce the results of high level quantum mechanical calculations.
Literature cited:
1. Gillespie, R. J.; Moog, R. S.; Spencer, J. N. J. Chem. Educ. 1998, 75, 539-540.
2. Carlton, T. S. J. Chem. Educ. 1999, 76, 605.
3. Lowe, J. P. J. Chem. Educ. 2000, 77, 155-156.
4. Rioux, F.; DeKock, R. L. J. Chem. Educ. 1998, 75, 537-539.
5. Rioux, F. J. Chem. Educ. 1999, 76, 605.
6. DeKock, R. L. J. Chem. Educ. 1999, 76, 605-606.
7. CRC Handbook of Chemistry and Physics, 80th ed.; CRC: Boca Raton, FL, 1999; p. 12-15.
8. Burkert, U.; Allinger, N. L. Molecular Mechanics, American Chemical Society, Washington, D. C., 1982; p. 10.
11.05: Reaction to Gillespie's Six Great Ideas in Chemistry - Another Great Idea
I wish to respond to an article by Ronald J. Gillespie published in The Journal on the content of the general chemistry sequence (1). While I agree with his major premise, I strongly disagree with a number of his specific recommendations. Operating under the assumption that the teaching of chemistry is enriched by lively debate and critical analysis, I would like to share my observations with the readership of The Journal. In what follows, text in italic font is taken directly from Professor Gillespie’s paper. I will follow the italicized text with my comments and observations.
Critique
We must remember that the general chemistry course is not (or should not be) designed as a first step in the training of future professional chemists.
I agree with this sentiment, but I have reservations about its implementation. In attempting to reach and educate all of the constituencies of general chemistry in a single course, we should not oversimplify the course simply for the benefit of the non-major, and thereby teach things which must be retracted at some later point in the education of the chemistry major. Teaching simple models that students can easily digest is tempting, but these easily digestible models are frequently scientifically incorrect. In my opinion Gillespie presents several models of this kind of simplicity that are incorrect and, if taught, would have to be un-taught, or retracted, at some later time in the education of the chemistry major or others who study chemistry beyond the general chemistry sequence. We should follow Einstein’s advice in our teaching and “make things as simple as possible, but no simpler.”
Elements are a kind of matter that consists of atoms of only one kind.
Besides being casual in tone, the definition is not correct. It is clear from reading this paper and its footnote, that it is a transcription of a talk given at a national ACS meeting. However, The Journal is a peer reviewed scientific journal and by not correcting this statement it appears to be sanctioning this rather informal and inaccurate definition.
All chemical bonds are formed by electrostatic attractions between positively charged cores and negatively charged valence electrons. Electrostatic forces are the only important force (sic) in chemistry.
At this point I will simply observe that there is only one electrostatic force - Coulomb’s Law. There are, however, many types of electrostatic interactions: ion-ion, ion-dipole, dipole-dipole, etc. In other words, there are a large number of ways charge is distributed in matter, and therefore a large number of ways these charge distributions can interact with one another. However, whatever its form, the electrostatic interaction is ultimately calculated using one equation - Coulomb’s Law. If this fact was more widely appreciated the grammatical error in the second sentence wouldn’t have occurred.
(Overlap of atomic orbitals) distracts attention from the real reason for bond formation: the electrostatic attraction between electrons and nuclei.
Unfortunately this simple idea is simply false, but it is easy to teach (see earlier remark). To put it bluntly, if the electrostatic force was all that was important, the electrons would reside inside the nuclei and never in the region between them or anywhere else. A simple electrostatic calculation will show that placing an electron exactly between two positive charges is not the most energetically favorable configuration of charges (see appendix). Solely on the basis of Coulomb’s law the electron would be drawn toward one nucleus or the other; unless like Buridan’s mule the electron is immobilized by its inability to distinguish between two identically attractive alternatives.
Orbitals and the LCAO-MO method are indeed only models, but at least they give a scientifically respectable picture of the covalent bond. Using the orbital model it can be shown that constructive interference due to overlap of atomic orbitals leads to charge build up in the internuclear region. This build up of charge, which is frequently described as the glue that holds atoms together, is funded by a decrease in kinetic energy due to the delocalization of electron density as Ruedenberg’s insightful analysis of the chemical bond showed more than forty years ago (2). The potential energy actually increases during this process, as the exercise outlined in the appendix shows.
I am not recommending that we teach general chemistry students full-blown quantum mechanics, I am simply saying that Gillespie’s simplistic electrostatic model is incorrect, and therefore shouldn’t be taught to anyone, especially general chemistry students who are most vulnerable to specious arguments. We are accustomed to making simplifying approximations in chemistry, but Gillespie’s model is not acceptable because it neglects a fundamental physical property, electron kinetic energy, which is essential to the understanding of atomic and molecular phenomena.
Moreover, the orbital model gives students the incorrect impression that chemistry is a difficult, abstract, mathematical subject based on a mysterious concept that is not and cannot be satisfactorily explained at the introductory level.
As a matter of fact chemistry is difficult and abstract, and students find this out long before they get to orbitals and quantum numbers. Furthermore, if quantum theory and the orbital concept are mysterious it is because the nano-scale world of electrons, nuclei, atoms, and molecules is mysterious. The fact that the nano-world of atoms and molecules is not simply a miniature of the macro-world is one of the most important discoveries in the history of science. The classical principles that work in the macro-world are inadequate in the nano-world and need to be supplemented by the de Broglie hypothesis (see later, 7th great idea). In other words, the need for quantum theory is ‘data driven,’ to use a slogan currently in vogue within the community of chemical educators. Marvin Chester (3) put this most cogently when he wrote, “The mathematical predictions of quantum mechanics yield results that are in agreement with experimental findings. That is the reason we use quantum theory. That quantum theory fits experiment is what validates the theory, but why experiment should give such peculiar results is a mystery (emphasis added).”
This aspect of chemistry (molecular geometry) receives too little emphasis in the introductory course, although it is one that can stimulate and excite students by showing that chemistry is practical, useful, and challenging, not dull, theoretical, mathematical, and abstract.
I agree that structural and synthetic chemistry deserve more attention and are exciting and interesting areas of contemporary chemistry, but the last part of this sentence is simply a cheap shot. Theory is also exciting, useful, and challenging, especially when taught by those who understand its significance. Theory is an essential part of 20th Century chemistry and should be taught in a positive manner to students at all levels. In addition, it should be noted that theory has always been an essential part of chemistry. To describe chemistry simply as an experimental science is to use a half-truth to describe a discipline that is much richer than a single-sentence definition can capture. I will return to the role of theory in science teaching in my conclusion.
Molecular modeling programs now make it even easier for students to understand and become familiar with the shapes of molecules.
This is indeed true, but molecular modeling programs (except for molecular mechanics calculations) are built on quantum mechanics and mainly exploit the orbital approximation, which Professor Gillespie has previously criticized as a “mysterious concept that is not and cannot be satisfactorily explained at the introductory level.” Is he proposing we hide this fact from the students and treat the molecular modeling programs as black boxes? If so this is not a valid or honest pedagogy.
By kinetic theory I do not mean the derivation of $PV=\frac{1}{3}nmc^{2}=nRT$...
This equation cannot be derived from kinetic theory as Dewey Carpenter showed some thirty-five years ago in this Journal (4). Temperature is not a mechanical concept; it lies outside the kinetic molecular theory. The kinetic theory yields $PV=\frac{1}{3}nmc^{2}$, which when compared to the empirically based ideal gas law, $PV = nRT$, leads to the conclusion that the average molecular kinetic energy is proportional to the absolute temperature: $\langle KE \rangle =\frac{1}{2}Mc^{2}=\frac{3}{2}RT$.
Everyone can understand the concept of disorder and that is really all there is to entropy.
While this erroneous belief is uncritically accepted by many, there is no scientific justification for its widespread use in teaching, as McGlashan showed so many years ago on these very pages (5a). More recently, Lambert has incisively exposed the error in equating entropy with disorder (5b). On Ludwig Boltzmann’s tombstone in Vienna is inscribed his famous formula, S = k logW. With this simple, but powerful equation, Boltzmann connected the macro-world with the nano-world. S is entropy and W stands for the German word Wahrscheinlichkeit, which in English means likelihood or, more formally, probability. Probability is not an overly difficult concept so why not use it here. In addition it is so much more accurate and powerful than the more comfortable and casual, but scientifically vague, concept of disorder.
A Seventh Great Idea
The fact that atomic and molecular structure and stability, and the physical nature of the chemical bond cannot be understood with the six great ideas Gillespie promotes is evidence to me that another great idea is essential in the general chemistry curriculum. This seventh great idea, the corner stone of quantum mechanics, is de Broglie’s hypothesis that matter has wave-like properties and is, therefore, subject to interference phenomena (constructive and destructive) normally associated with wave-like phenomena. This is especially important for the light-weight electron and is the idea that is necessary to explain the chemical bond, and atomic and molecular stability, and atomic and molecular structure, and atomic and molecular spectroscopy.
From de Broglie’s wave equation, $\lambda =\frac{h}{mv}$, it follows that in the nano-world kinetic energy is $\frac{h^{2}}{2m\lambda ^{2}}$, which means that if the space an electron occupies is restricted, its kinetic energy is quantized and increased significantly. Thus, kinetic energy behaves like an outward force that counterbalances the inward electrostatic force and prevents the electron from collapsing into the nucleus under the electrostatic attraction that Gillespie says is all that is needed to explain chemical phenomena. As Ruedenberg has pointed out, there are no ground states or quantized energy levels in the classical, macroscopic world. We need de Broglie’s hypothesis to explain chemical phenomena at the atomic and molecular level.
We have just left the century which began with the quantum revolution of Planck, Einstein, and Bohr. We have also recently celebrated the 100th anniversary of the discovery of the electron, that fundamental particle whose behavior dictates chemistry. Today, as far as we can tell, the behavior of the electron is accurately described by the principles of quantum mechanics. At some rudimentary level we should be teaching this important theory to all of our students.
Here is what I try to do. In my general chemistry course I teach de Broglie’s wave equation and its implications. I concentrate on the consequences of confinement and delocalization at the atomic and molecular level. I outline the origin of quantized energy levels and quantum numbers from de Broglie’s fundamental idea. I regard it as one of the most astonishing, provocative, and creative ideas of the 20th Century, and I want my students, majors and non-majors, to be aware of its existence and importance.
A survey of the current general chemistry texts will reveal that all of them present de Broglie’s hypothesis. My point is that it should be elevated to an essential part of the introductory chemistry curriculum. If we are going to select a small number of essential ideas or principles, de Broglie’s wave-particle duality for matter should be among them.
Conclusion
Chemistry is a great intellectual adventure and we must present the spirit of that adventure to all of our students, no matter what their academic major or their particular career objectives. If we are going to teach an honest first course in chemistry we have to describe both its experimental and theoretical features. I am offended and disturbed by Gillespie’s gratuitous attack on mathematics and theory in the general chemistry sequence. He misrepresents chemistry because chemistry is not simply an experimental science, nor has it ever been so. I believe every scientific discipline involves a lively exchange between theory and experiment, and this is what we should tell our students. There is no hierarchy here, both theory and experiment are essential, on a day to day basis, for all practitioners of the art and science of chemistry. More than 20 years ago Roald Hoffmann wrote eloquently and incisively about the “symbiosis of theory and experiment.” He spoke then of “... a vital interweaving of experiment inspired by theory, theory motivated by experiment, binding in a truly interdisciplinary way chemistry, physics, and engineering (6).”
According to Peter Medawar we can think of science as an on-going dialogue between what might be and what is actually so.
Scientific reasoning is an explanatory dialogue that can always be resolved into two voices or episodes of thought, imaginative (theoretical) and critical (experimental), which alternate and interact (7).
Hoffmann and Medawar, both Nobel Laureates, offer a much richer and more accurate description of science than the negative dichotomous view (experiment/theory, good/bad) that permeates Gillespie’s paper.
Appendix
When asked what motivated the creation of his model of the atom Bohr replied "the stability of matter, a pure miracle when considered from the standpoint of classical physics." The following simple calculation will demonstrate what Bohr meant by this statement. This calculation will be carried out in atomic units where the charge on the electron is -1, the charge on the nucleus +1, and distances are measured in bohr, ao.
Two nuclei (Z = 1) are placed at x = 0.0 and 2.0, respectively. An electron is located exactly between them at x = 1.0, where we instinctively, but incorrectly, think it would want to be on the basis of electrostatic considerations. The potential energy consists of three interactions (nuclear-nuclear repulsion and two electron-nuclear attractions) and is calculated to be:
$V=\frac{(+1) (+1)}{2}+\frac{(-1)(+1)}{1}+\frac{(-1)(+1)}{1} = -1.5$
Now move the electron 0.5 bohr closer to one of the nuclei.
$V=\frac{(+1) (+1)}{2}+\frac{(-1)(+1)}{0.5}+\frac{(-1)(+1)}{1.5} = -2.17$
And so it goes, on the basis of electrostatic considerations, until the electron is inside one nucleus or the other. While the electron was treated as a point charge in this calculation, a rigorous quantum mechanical calculation tells the same story - moving charge to the internuclear region increases electrostatic potential energy.
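The arithmetic is easy to check. A minimal Python sketch (an addition for illustration, not part of the original calculation; atomic units as above) evaluates the classical potential energy as the point electron slides from the midpoint toward the nucleus at x = 0:

```python
# Classical point-charge model in atomic units: nuclei (Z = +1) at x = 0 and x = 2.
def V(x):
    """Potential energy of a point electron at position x between the nuclei."""
    return 1 / 2.0 - 1 / abs(x) - 1 / abs(x - 2.0)

for x in [1.0, 0.5, 0.25, 0.1, 0.01]:
    print(f"x = {x:5.2f}   V = {V(x):8.2f}")
# V falls monotonically: -1.50, -2.17, -4.07, -10.03, -100.00, ...
```

Classically nothing halts the descent; only the kinetic-energy cost of confinement (de Broglie) does.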
The failure of classical physics to explain the stability and structure of matter and its interaction with electromagnetic radiation must be emphasized in the undergraduate curriculum at all levels. Again, the need for quantum mechanics is data-driven, and it should be taught at an elementary level initially (see above) and at more sophisticated levels as science students progress through the undergraduate curriculum. Perhaps by the time they graduate, chemistry and physics majors might be able to appreciate what Peter Atkins is saying here (8).
In a sense, the difference between classical and quantum mechanics can be seen to be due to the fact that classical mechanics took too superficial a view of the world: it dealt with appearances. However, quantum mechanics accepts that appearances are the manifestation of a deeper structure (the wavefunction, the amplitude of the state, not the state itself), and that all calculations must be carried out on this substructure.
Literature Cited
1. Gillespie, R. J. J. Chem. Educ. 1997, 74, 862.
2. Ruedenberg, K. Rev. Mod. Phys. 1962, 34, 326.
3. Chester, M. Primer of Quantum Mechanics, Krieger Publishing Co, Malabar, Florida, 1992.
4. Carpenter, D. K. J. Chem. Educ. 1966, 43, 333.
5. a) McGlashan, M. L. J. Chem. Educ. 1966, 43, 226. b) Lambert, F. L. J. Chem. Educ. 1999, 76, 1385.
6. Hoffmann, R. Chem. Eng. News, July 29, 1974, p 32.
7. Medawar, P. B. Induction and Intuition in Scientific Thought; Methuen: London, 1969, p. 46.
8. Atkins, P. W. Quanta, Oxford University Press: Oxford, 2nd Ed., 1991, p. 348.
11.06: An Alternative Derivation of Gas Pressure Using the Kinetic Theory
General chemistry and physical chemistry texts that use the kinetic theory to derive the pressure of an ideal gas do so by studying a gas in a cubic or rectangular container (1). The purpose of this note is to outline this derivation for a gas in a spherical container. A visual representation of this approach in coordinate and momentum space is provided in the accompanying figure.
A sphere of diameter $D$ contains gas molecules moving randomly, executing elastic collisions with each other and the surface of the container as postulated by the kinetic theory. Consider a molecule labeled $i$, of mass $m$ and velocity $v_{i}$, making a collision with the surface at an angle $\Theta$ relative to the perpendicular to the surface. The momentum transferred to the container in the direction perpendicular to the surface by the collision is
$\Delta (m\nu )_{surf}=2m\nu _{i}\cos(\Theta ) \nonumber$
Because the molecule travels a distance $D\cos(\Theta )$ between collisions with the surface, the time interval between collisions is
$\Delta t=\frac{D\cos(\Theta )}{v_{i}} \nonumber$
The force (F) exerted on the surface is the rate of momentum transfer,
$F_{i}=\dfrac{\Delta (m\nu )_{surf}}{\Delta t}=\dfrac{2mv_{i}^{2}}{D} \nonumber$
Pressure (P) is force divided by area (A) and the surface area of a sphere is $\pi D^{2}$. The volume of a sphere (V) is $\frac{1}{6}\pi D^{3}$, therefore
$P_{i}=\dfrac{F_{i}}{A}=\dfrac{2mv_{i}^{2}}{\pi D^{3}}=\frac{mv_{i}^{2}}{3V} \nonumber$
For a mole of gas molecules the total pressure is
$P=\sum_{i}^{N_{A}}P_{i}=\frac{m}{3V}\sum_{i}^{N_{A}}v^{2}_{i}=\frac{M \big \langle v^{2} \big \rangle}{3V}=\frac{2 \big \langle KE \big \rangle}{3V} \nonumber$
where $\big \langle v^{2} \big \rangle =\frac{1}{N_{A}}\sum_{i}^{N_{A}}v^{2}_{i}$ and $M=N_{A}m$.
As Dewey Carpenter pointed out many years ago (2), it is through a comparison of equation (5) with the ideal gas law that we conclude that the average molar kinetic energy of a gas is proportional to its absolute temperature.
$\big \langle KE \big \rangle = \frac{1}{2}M \big \langle v^{2} \big \rangle=\frac{3}{2}RT \nonumber$
Unfortunately most introductory texts incorrectly include this result as a postulate of the kinetic theory (3). With regard to the formal assumptions of the kinetic theory Carpenter noted (2), “No reference is made in these postulates to the property of temperature. This is because the kinetic theory is a purely mechanical theory, whereas the concept of temperature belongs to the discipline of thermodynamics.”
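As a numerical aside (an illustration added here, not part of the original note), the last equation fixes the root-mean-square speed of a gas once its molar mass is known. A short Python check for N2 at room temperature:

```python
import math

R = 8.314      # gas constant, J/(mol K)
T = 298.15     # temperature, K
M = 0.028      # molar mass of N2 in kg/mol (chosen for illustration)

# From (1/2) M <v^2> = (3/2) R T, it follows that <v^2> = 3 R T / M
v_rms = math.sqrt(3 * R * T / M)
print(f"v_rms = {v_rms:.0f} m/s")   # about 515 m/s
```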
11.07: Examining Fourier Synthesis with Dirac Notation
The purpose of this tutorial is to use Dirac notation to examine Fourier synthesis. The first step is to write the function symbolically in Dirac notation.
$f(x)=\big \langle x|f\big \rangle \nonumber$
Select an orthonormal basis set, |n>, for which the completeness relation holds
$\sum_{n}|n \big \rangle \big \langle n|=1 \nonumber$
Expand |f> in terms of |n> by inserting equation (2) into the right side of equation (1). In other words, write f(x) as a weighted (<n|f>) superposition using the basis set (the |n> basis set expressed in the coordinate representation).
$f(x)=\sum_{n} \big \langle x|n\big \rangle \big \langle n|f\big \rangle \nonumber$
Evaluate the Fourier coefficient, <n|f>, using the continuous completeness relation in coordinate space.
$\int |x' \big \rangle\big \langle x'|dx'=1 \nonumber$
Equation (3) becomes,
$f(x)=\sum_{n} \big \langle x|n \big \rangle \int \big \langle n|x' \big \rangle\big \langle x'|f \big \rangle dx' \nonumber$
Now select a function
$\big \langle x'|f \big \rangle = x'^{3}(1-x') \nonumber$
over the interval (0,1). Choose the following orthonormal basis set over the same interval.
$\big \langle x|n \big \rangle = \sqrt{2}sin(n \pi x) \nonumber$
Substitution of equations (6) and (7) into (5) yields
$f(x) = \sum_{n} \sqrt{2}\sin(n \pi x) \int_{0}^{1} \sqrt{2}\sin(n \pi x')x'^{3}(1-x')dx' \nonumber$
The Fourier synthesis and the original function are shown for n = 2, 4, and 10 in the figure below.
$x:=0,.025 ..1.0$
$f(x,n) :=\sum_{i=1}^{n} \left[ \sqrt{2}\cdot \sin(i \cdot \pi \cdot x) \cdot \int_{0}^{1} \sqrt{2} \cdot \sin(i \cdot \pi \cdot x') \cdot x'^{3} \cdot (1-x')dx' \right]$
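The worksheet above is Mathcad. A rough Python equivalent (an addition; numpy and scipy assumed) computes the same coefficients and partial sums, and reports how the maximum deviation from f(x) shrinks as n grows:

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # The function being synthesized, <x'|f> = x'^3 (1 - x') on (0, 1)
    return x**3 * (1 - x)

def c(n):
    # Fourier coefficient <n|f> = integral of sqrt(2) sin(n pi x') f(x') dx'
    return quad(lambda xp: np.sqrt(2) * np.sin(n * np.pi * xp) * f(xp), 0, 1)[0]

def synth(x, nmax):
    # Partial sum of the sine-basis expansion, equation (8)
    return sum(c(n) * np.sqrt(2) * np.sin(n * np.pi * x) for n in range(1, nmax + 1))

x = np.linspace(0, 1, 41)
for nmax in (2, 4, 10):
    err = np.max(np.abs(synth(x, nmax) - f(x)))
    print(f"n = {nmax:2d}   max |error| = {err:.4f}")
```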
11.08: Finding Roots of Transcendental Equations
The attempt to find analytical solutions to Schrödingerʹs equation for some problems yields transcendental equations which must be solved by a combination of graphical and numerical techniques. Mathcad is particularly well‐suited for such applications.
Solving Schrödingerʹs equation for the particle in the box with an internal barrier yields the transcendental equation $f(E)$ shown below. This equation is solved by plotting f(E) vs E to find the approximate values of the bound energy states.
The box is 1 bohr wide and the barrier is 0.1 bohr thick and located in the center of the box.
• Vo is the barrier height in hartrees. Vo := 100
• The barrier thickness in bohrs. BT := .1
• Left barrier boundary in bohrs. LB := .45
E := 0.05, .1 .. 100
$f(E) := \tanh \left[ BT \cdot \sqrt{2 \cdot (V_o-E)} \right] \cdot \left( \frac{V_o-E}{E} \cdot \sin \left( LB \cdot \sqrt{2 \cdot E} \right)^{2} + \cos \left( LB \cdot \sqrt{2 \cdot E} \right)^{2} \right) + 2 \cdot \sqrt{\frac{V_o-E}{E}} \cdot \sin \left( LB \cdot \sqrt{2 \cdot E} \right) \cdot \cos \left( LB \cdot \sqrt{2 \cdot E} \right)$
For a derivation of this formula see: Johnson and Williams, Amer. J. Phys. 1982, 50, 239‐244.
By inspection of the graph one can see that there are roots at approximately 15, 20, 62, and 80. The exact energy is found with Mathcadʹs root function using the approximate energy as a seed value as illustrated below.
E := 15 root(f(E), E) = 15.43
E := 20 root(f(E), E) = 20.29
E := 62 root(f(E), E) = 62.24
E := 80 root(f(E), E) = 81.07
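The same roots can be recovered without Mathcad. A minimal Python sketch (an addition; scipy assumed, with the brackets around each root read off the graph, just as the seed values were above):

```python
import numpy as np
from scipy.optimize import brentq

Vo, BT, LB = 100.0, 0.1, 0.45   # barrier height (hartree), thickness and left edge (bohr)

def f(E):
    k, q = np.sqrt(2 * E), np.sqrt(2 * (Vo - E))
    return (np.tanh(BT * q) * ((Vo - E) / E * np.sin(LB * k)**2 + np.cos(LB * k)**2)
            + 2 * np.sqrt((Vo - E) / E) * np.sin(LB * k) * np.cos(LB * k))

for lo, hi in [(12, 18), (18, 24), (59, 65), (78, 84)]:   # brackets from the graph
    print(round(brentq(f, lo, hi), 2))   # 15.43, 20.29, 62.24, 81.07
```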
This exercise can be extended by noting that this problem can also be solved by numerical integration of Schrödingerʹs equation. Comparisons of this sort are helpful in strengthening the studentʹs understanding of the computational techniques available to the quantum chemist. Below the problem is solved by numerical integration of Schrödingerʹs equation.
Integration limit: xmax := 1 Effective mass: μ := 1 Barrier height: V0 := 100
Barrier boundaries: lb := .45 rb := .55 Potential energy: $V(x) := if[(x\geq lb) \cdot (x\leq rb), V_{0},0]$
Numerical integration of Schrödingerʹs equation: $\frac{-1}{2 \cdot \mu} \cdot \frac{d^{2}}{dx^{2}} \psi(x) + V(x) \cdot \psi (x) = E \cdot \psi (x)$
$\psi (0) = 0$. $\psi ' (0) = 0.1$
Enter energy guess: E := 15.43
$\psi$ := Odesolve(x, xmax)
Normalize wave function: $\psi (x) := \frac{ \psi (x)}{ \sqrt{\int_{0}^{x_{max}} \psi (x)^{2} dx}}$
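Mathcadʹs Odesolve step has a direct Python analogue. A sketch (an addition; scipy assumed) integrates $\psi'' = 2 \mu (V - E) \psi$ from x = 0 with the same initial conditions; at an eigenvalue such as E = 15.43 the integrated wave function returns to approximately zero at xmax:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, V0, lb, rb, xmax = 1.0, 100.0, 0.45, 0.55, 1.0

def V(x):
    # Rectangular internal barrier between lb and rb
    return V0 if lb <= x <= rb else 0.0

def rhs(x, y, E):
    # y = [psi, psi']; Schrodinger's equation rearranged to psi'' = 2 mu (V - E) psi
    return [y[1], 2.0 * mu * (V(x) - E) * y[0]]

E = 15.43   # energy obtained from the root search above
sol = solve_ivp(rhs, (0.0, xmax), [0.0, 0.1], args=(E,), max_step=1e-3)
print(sol.y[0, -1])   # psi(xmax) is ~0 only when E is an eigenvalue
```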
11.09: Calculation of the Composition of a Weak Polyprotic Acid Using Mathcad
Polyprotic Acids - Calculating the composition of 0.1M H3AsO4
Excluding water there are six species in this solution of arsenic acid. Therefore six constraints are required to calculate the composition of the solution. Four of them are the following equilibria, and the other two are the charge and mass balance equations given below.
Equilibria:
$\ce{H3AsO4 <=> H^{+} + H2AsO4^{-}} \nonumber$
$\ce{H2AsO4^{-} <=> H^{+} + HAsO4^{2-}} \nonumber$
$\ce{HAsO4^{2-} <=> H^{+} + AsO4^{3-}} \nonumber$
$\ce{H2O <=> H^{+} + OH^{-}} \nonumber$
Charge balance:
$[\ce{H^{+}}] = [\ce{H2AsO4^{-}}] + 2[\ce{HAsO4^{2-}}] + 3[\ce{AsO4^{3-}}] + [\ce{OH^{-}}] \nonumber$
Mass balance:
$[\ce{H3AsO4}] + [\ce{H2AsO4^{-}}] + [\ce{HAsO4^{2-}}] + [\ce{AsO4^{3-}}] = 0.1 \nonumber$
Relevant equilibrium constants:
$K_{a1} := 4.5 \times 10^{-4} \quad K_{a2} := 5.6 \times 10^{-8} \quad K_{a3} := 3.0 \times 10^{-13} \quad K_{w} := 10^{-14}$
Mathcad's live symbolic solver is used to calculate the concentrations of the species in solution by creating two 6x1 vectors. In one vector the six constraints are entered, and in the other the symbols for the species being calculated. The results of the calculation are given below.
$\begin{pmatrix} \frac{H \cdot H_{2}AsO_{4}}{H_{3}AsO_{4}} = Ka1\ \frac{H \cdot HAsO_{4}}{H_{2}AsO_{4}} = Ka2\ \frac{H \cdot AsO_{4}}{HAsO_{4}} = Ka3\ H \cdot OH = Kw\ H = H_{2}AsO_{4} + 2 \cdot HAsO_{4} + 3 \cdot AsO_{4} + OH\ H_{3}AsO_{4} + H_{2}AsO_{4} + HAsO_{4} + AsO_{4} = 0.1 \end{pmatrix} _{float,3}^{solve, \begin{pmatrix} H\ H_{3}AsO_{4}\ H_{2}AsO_{4}\ HAsO_{4}\ AsO_{4}\ OH \end{pmatrix}}$

The solver returns several rows of mathematically valid solutions, most containing negative (unphysical) concentrations. The physically meaningful row is

$\begin{pmatrix} H & H_{3}AsO_{4} & H_{2}AsO_{4} & HAsO_{4} & AsO_{4} & OH \end{pmatrix} = \begin{pmatrix} 6.49 \times 10^{-3} & 9.35 \times 10^{-2} & 6.49 \times 10^{-3} & 5.60 \times 10^{-8} & 2.59 \times 10^{-18} & 1.54 \times 10^{-12} \end{pmatrix}$
All concentrations in this row are positive; it corresponds to the physically meaningful root of the fifth order polynomial that is solved. The pH of this solution is calculated below.
pH := $-\log(6.49 \times 10^{-3})$  pH = 2.188
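The six constraints can also be solved numerically rather than symbolically. A Python sketch (an addition; scipy assumed, and the seed vector is an assumption chosen near the expected composition):

```python
import numpy as np
from scipy.optimize import fsolve

Ka1, Ka2, Ka3, Kw, C0 = 4.5e-4, 5.6e-8, 3.0e-13, 1.0e-14, 0.1

def eqs(v):
    H, H3A, H2A, HA, A, OH = v
    return [H * H2A - Ka1 * H3A,              # first ionization
            H * HA - Ka2 * H2A,               # second ionization
            H * A - Ka3 * HA,                 # third ionization
            H * OH - Kw,                      # water autoionization
            H - (H2A + 2 * HA + 3 * A + OH),  # charge balance
            H3A + H2A + HA + A - C0]          # mass balance

seed = [1e-2, 0.1, 1e-2, 1e-7, 1e-15, 1e-12]  # assumed starting point
H = fsolve(eqs, seed)[0]
print(f"pH = {-np.log10(H):.3f}")             # ~2.188
```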
11.10: Solving Linear Equations Using Mathcad
Numeric Methods: A system of equations is solved numerically using a Given/Find solve block. Mathcad requires seed values for each of the variables in the numeric method.
Seed values: x :=1 y :=1 z :=1
Given: $5 \cdot x + 2 \cdot y + z = 36$ $x + 7 \cdot y + 3 \cdot z = 63$ $2 \cdot x + 3 \cdot y + 8 \cdot z = 81$
Find (x, y, z) = $\begin{pmatrix} 3.6\ 5.4\ 7.2 \end{pmatrix}$
Other Given/Find solve blocks can be used.
Given $\begin{pmatrix} 5 \cdot x + 2 \cdot y + z = 36\ x + 7 \cdot y + 3 \cdot z = 63\ 2 \cdot x + 3 \cdot y + 8 \cdot z = 81 \end{pmatrix} = \begin{pmatrix} 36\ 63\ 81 \end{pmatrix}$ Find(x, y, z) = $\begin{pmatrix} 3.6\ 5.4\ 7.2 \end{pmatrix}$
Given $\begin{pmatrix} 5 & 2 & 1\ 1 & 7 & 3\ 2 & 3 & 8 \end{pmatrix} \cdot \begin{pmatrix} x\ y\ z \end{pmatrix} = \begin{pmatrix} 36\ 63\ 81 \end{pmatrix}$ Find(x, y, z) = $\begin{pmatrix} 3.6\ 5.4\ 7.2 \end{pmatrix}$
Matrix methods: The equations can also be solved using matrix algebra as shown below. In matrix form, the equations are written as MX = C. The solution vector X is found by multiplying C by the inverse of M.
M:= $\begin{pmatrix} 5 & 2 & 1\ 1 & 7 & 3\ 2 & 3 & 8 \end{pmatrix}$ C:= $\begin{pmatrix} 36\ 63\ 81 \end{pmatrix}$ X := M-1 $\cdot$ C X = $\begin{pmatrix} 3.6\ 5.4\ 7.2 \end{pmatrix}$
Confirm that a solution has been found:
M $\cdot$ X = $\begin{pmatrix} 36\ 63\ 81 \end{pmatrix}$
Alternative matrix solution using the lsolve command.
X := lsolve(M,C) X = $\begin{pmatrix} 3.6\ 5.4\ 7.2 \end{pmatrix}$ M $\cdot$ X = $\begin{pmatrix} 36\ 63\ 81 \end{pmatrix}$
Live symbolic method: To use the live symbolic method within this Mathcad document, recursive definitions are required to clear the previous values of x, y and z. This would not be necessary if x, y and z had not been previously defined.
x := x y := y z := z
$\begin{pmatrix} 5 \cdot x + 2 \cdot y + z = 36\ x + 7 \cdot y + 3 \cdot z = 63\ 2 \cdot x + 3 \cdot y + 8 \cdot z = 81 \end{pmatrix} solve, \begin{pmatrix} x\ y\ z \end{pmatrix} \rightarrow \begin{pmatrix} \frac{18}{5} & \frac{27}{5} & \frac{36}{5} \end{pmatrix} = \begin{pmatrix} 3.6 & 5.4 & 7.2 \end{pmatrix}$
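Outside Mathcad, the matrix method is a single library call. A minimal Python sketch (an addition; numpy assumed):

```python
import numpy as np

M = np.array([[5.0, 2.0, 1.0],
              [1.0, 7.0, 3.0],
              [2.0, 3.0, 8.0]])
C = np.array([36.0, 63.0, 81.0])

X = np.linalg.solve(M, C)   # factorization; preferred over forming M^-1 explicitly
print(X)                    # [3.6 5.4 7.2]
print(M @ X)                # confirm the solution: [36. 63. 81.]
```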
11.11: Let's Teach High School Students Computer Algebra Methods
The algebra problem shown below appears in my grandson's 9th grade math text. He is asked to solve for x and y using the "guess and check" method. It is early in the semester so I do not object, figuring he will be taught more reliable standard methods soon. The method I strongly recommend is to stress that if you have two unknowns, you need two equations, and that those equations should be stated explicitly up front. After that some standard method should be used to find the solution or solutions, by hand or using a calculator or a computer. In this case there are two solutions.
In what follows, two computer algebra methods using Mathcad are presented. The total area (yellow plus green) is 450, so my two equations are given below. Setting the problem up is the most important part of solving it, and it is what should be stressed in math education. Let the computer or calculator do the tedious stuff. The Mathcad syntax for the first method is as follows:
$\begin{bmatrix} (x+6) \cdot (y + 10) = 450\ x \cdot y = 180 \end{bmatrix}$ solve, (x, y) $\rightarrow\begin{pmatrix} 12 & 15\ 9 & 20 \end{pmatrix}$
The second method involves eliminating y in the first equation by substitution using the second equation, solving for x and then using the second equation to get the appropriate values for y.
x := (x + 6) $\cdot$ (y + 10) = 450 substitute, y = $\frac{180}{x} \rightarrow \frac{10 \cdot (x+6) \cdot (x+18)}{x} = 450$ solve, x $\rightarrow \begin{pmatrix} 9\ 12 \end{pmatrix}$
Write y in terms of x: y := $\frac{180}{x}$ Display y values: y = $\begin{pmatrix} 20\ 15 \end{pmatrix}$
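Any computer algebra system will do. A sketch of the first method in Pythonʹs sympy (an assumption; the original templates are for Mathcad) returns both solution pairs:

```python
from sympy import symbols, solve

x, y = symbols('x y', positive=True)   # side lengths must be positive
print(solve([(x + 6) * (y + 10) - 450, x * y - 180], [x, y]))
# [(9, 20), (12, 15)]
```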
High school students are facile in using the computer for word processing, and the internet for researching term papers and for social networking. It's time they were taught how to use the computer to solve math and science problems.
Solution templates can be provided to the students so that they can concentrate on setting up the problem, rather than the programming syntax.
Template for first method: $\begin{pmatrix} \square = \blacksquare\ \blacksquare = \blacksquare \end{pmatrix}$ solve, $\begin{pmatrix} \blacksquare & \blacksquare \end{pmatrix} \rightarrow$
Template for second method: ■ := ■ = ■ substitute, ■ = ■ $\rightarrow$ solve, ■ $\rightarrow$
11.12: Thermodynamics and Kinetics
Most thermodynamic expressions in textbooks are "intramural" relations. They tell us how to determine numerical values for unfamiliar quantities such as $ΔS$ and $ΔG$ (Equations \ref{1a} and \ref{1b}, for example), or how one such quantity depends on another (Equations \ref{1c} and \ref{1d}).
$\Delta S = \frac{Q_{rev}}{T} \label{1a}$
$\Delta G^{ \circ} = -RT \ln K_{eq} \label{1b}$
$\Delta G = \Delta H - T \Delta S \label{1c}$
$\left[ \frac{\partial(\Delta G)}{\partial T} \right]_{P} = - \Delta S \label{1d}$
Only a few thermodynamic expressions are "extramural" relations--ones that tell us immediately something about "directly measurable" or familiar quantities: how, for example, an equilibrium pressure P, or concentration N2, or quotient of concentrations K, or cell voltage ξ varies with temperature (Equations \ref{2a}-\ref{2d}).
$\frac{dP}{dT}= \frac{Q}{T \Delta V} \label{2a}$
$\frac{dN_{2}}{dT_{fp}} = \frac{-Q}{RT^{2}_{nfp}} \label{2b}$
$\ln \frac{K_{2}}{K_{1}} = \frac{Q_{irrev}}{R} \left( \frac{1}{T_{1}} - \frac{1}{T_{2}} \right) \label{2c}$
$nF \frac{d \xi}{dT} = \frac{Q_{rev}}{T} \label{2d}$
These extramural relations (Equation \ref{2a}-\ref{2d}) show how equilibrium parameters (P, N2, K, ξ) must change with temperature if perpetual motion of the second kind is impossible. Perpetual motion of the second kind is production of work (an increase in energy of a mechanical system) solely at the expense of the energy of a thermal reservoir. In its net effect upon the environment, it is, with respect to energy transformations, precisely the opposite of friction.
The most general statement of the Second-Law like behavior of Nature states that any process whose net effect is precisely the opposite of friction -- or heat flow, or any natural event -- is impossible. From that statement can be developed by relatively long and mathematically demanding arguments, as shown in many physical chemistry texts, the extramural relations (Equation \ref{2a}-\ref{2d}).
It is the chief purpose of this paper to show that the Clapeyron equation (\ref{2a}), the colligative property relations (such as Equation \ref{2b}), van 't Hoff's relation (Equation \ref{2c}), Gibbs-Helmholtz-type equations (such as Equation \ref{2d}) and, also (discussed later), the osmotic pressure law (Equation 19), Boltzmann's factor (equation 25), and Carnot's theorem (equation 35) can be obtained directly from the laws of chemical kinetics, without the use of calculus.
Our kinetic derivations of the extramural relations of the thermodynamics are based on Arrhenius's rate-constant expression
$k = Ae^{-\Delta H^{*}/RT}. \nonumber$
It will be shown that the derivations depend ultimately, therefore, on van 't Hoff's thermodynamic equilibrium-constant expression
$K = Ce^{-\Delta H/RT}. \label{therm}$
Thus, the kinetic derivations are not, in a logical sense, a substitute for the usual thermodynamics arguments. It is often illuminating, however, to see abstract expressions (such as those of thermodynamics) emerge seemingly unexpectedly from more concrete equations (those of chemical kinetics).
The mathematical procedures in this paper can be used, also, in purely thermodynamic arguments. With no change in the algebraic steps given below, one can derive the extramural relations of thermodynamics directly from the thermodynamic expression in Equation \ref{therm}. Thus one can move rigorously and easily from one extramural relation to another without employing calculus and the entropy function (or the chemical potential), or Carnot cycles. This simplification of the syntax of thermodynamics serves to emphasize an essential point: there is essentially only one physically independent extramural thermodynamic relation. There is only one Second Law. Expressions (Equations \ref{2a}-\ref{2d}), the osmotic pressure rule (19), and Boltzmann's factor (25) are all special instances of Carnot's theorem (35).
In summary, the present discussion is, simultaneously: a set of novel applications to thermodynamics of Arrhenius's rate-constant expression; a non-calculus review from several new points of view of the central expressions of classical (and, briefly, statistical) thermodynamics; and, in closing, a brief account of the origins in kinetics and thermodynamics of activated complex theory.
Henry's Law and Raoult's Law
Many texts give this kinetic interpretation of Henry's and Raoult's laws. Consider the change
$x~(solution) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x~(gas) \nonumber$
Let $R_{f(b)}$ represent the rate of the forward (backward) reaction, specific rate constant kf(b). Let $C_x$ be the concentration (in any units) of $X$ in the condensed phase, $N_x$ its mole fraction therein, $P_x$ its partial pressure in the gas phase, $P^o_x$ the vapor pressure of pure $X$. On the assumption that one has
$R_{f} = k_{f}C_{x} \label{3a}$
$R_{b} = k_{b}P_{x} \label{3b}$

one has at equilibrium ($R_{f} = R_{b}$)

$P_{x} = \frac{k_{f}}{k_{b}} C_{x} = K_{eq} C_{x} \nonumber$

$\underbrace{ P_{x} = K_{eq}C_{x}}_{\text{Henry's Law}} \label{4a}$

With the concentration expressed as a mole fraction ($C_{x} = N_{x}$), $K_{eq}$ evaluated at $N_{x} = 1$ is the vapor pressure of the pure liquid, $K_{eq} = \frac{P_{x}}{N_{x}} \Big|_{N_{x}=1} \equiv P^{ \circ}_{x}$, so that

$\underbrace{ P_{x} = P^{ \circ}_{x}N_{x}}_{\text{Raoult's Law}} \label{4b}$
Similar derivations of mathematical expressions for other colligative properties can be achieved by introducing Arrhenius's expression for the dependence upon temperature (and pressure) of the specific rate constants $k_f$ and $k_b$.
Arrhenius's Rate-Constant Law
According to Arrhenius (in modern notation), for forward and backward reactions
$k=Ae^{- \Delta H^{*}/RT} \label{6}$
where, over small temperature intervals, $A$ and $ΔH^*$ may be treated as constants, and where (Fig. 1)

\begin{align} \Delta_{f}H^{*} - \Delta_{b}H^{*} &= \Delta H \label{7a} \ &= H_{products} - H_{reactants} \[4pt] &= \Delta E + \Delta (PV). \label{7b} \end{align}
The Ideal Solubility Equation and Freezing Point Depression
To illustrate the use of the Arrhenius Rate-Constant Law to obtain by a kinetic analysis expressions normally obtained through reasoning based on thermodynamic principles, consider the solution, or melting, of a pure solid.
$x~(pure~solid) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x~(solution) \nonumber$
On the assumption that $R_{f} = k_{f}$ and $R_{b} = k_{b}N_{X}$, one has that, at equilibrium, $k_{f} = k_{b}N_{X}$ or, on using the Arrhenius expression (Equation \ref{6}), that

$A_{f}e^{-\Delta_{f}H^{*}/RT} = N_{X}A_{b}e^{-\Delta_{b}H^{*}/RT}. \nonumber$
Rearrangement and use of Equation \ref{7a} yields
$\frac{A_{f}}{A_{b}} = N_{x}e^{ \frac{ \Delta H}{RT}} \label{8}$
ΔH is the enthalpy of solution, or melting, of X. Taking the natural logarithm of both sides of Equation \ref{8}, one obtains
$\frac{ \Delta H}{RT} + \ln(N_{x}) = \ln \left( \frac{A_{f}}{A_{b}} \right) \tag{9a}$

Because $\ln \left( \frac{A_{f}}{A_{b}} \right)$ is a constant, the left-hand side may be evaluated at $N_{x} = 1$, where the equilibrium temperature is the normal freezing point $T_{nfp}$:

$\ln \left( \frac{A_{f}}{A_{b}} \right) = \frac{ \Delta H}{RT} \Big|_{N_{x} = 1} = \frac{ \Delta H}{RT_{nfp}} \tag{9b, 9c}$
From (9c),
$\ln N_{x} = \frac{ \Delta H}{R} \left( \frac{1}{T_{nfp}} - \frac{1}{T} \right) \tag{10}$
For NX ≈ 1, ln NX ≈ -(1-NX) ≡ -N2, T ≈ Tnfp, and equation 10 reduces to
$N_{2} = \frac{- \Delta H}{RT^{2}_{nfp}} (T-T_{nfp}) \tag{11}$
Equation 10, the ideal solubility equation, is a special case of equation 2c. Equation 11, the thermodynamic expression for freezing point depressions, is an integrated form of equation 2b.
Clapeyron's Equation
If a pure solid dissolves (melts) in its pure liquid,
$x~(pure~solid) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x~(pure~liquid) \nonumber$
$N_X = 1$ and, in place of Equation \ref{8}, one has, at equilibrium, $\frac{A_{f}}{A_{b}} = e^{ \frac{ \Delta H}{RT}}$. Taking the natural logarithm of both sides, one obtains in place of Equation (9a)
\begin{align} \dfrac{ \Delta H}{RT} &= \ln \frac{A_{f}}{A_{b}}, \tag{a constant} \[4pt] &= \dfrac{ \Delta E + P \Delta V}{RT} \label{13} \end{align}
Equation \ref{13} is obtained through equation 7b.
If the pressure and temperature change from values $P$ and $T$ that satisfy equation to new values $P + dP$ and $T + dT$, for equilibrium to be maintained, $dP$ and $dT$ must be such that
$\dfrac{ \Delta E + (P + dP) \Delta V}{R(T+dT)} = \frac{ \Delta E + P \Delta V}{RT} \label{14}$
In writing Equation \ref{14}, it has been assumed that, like $\frac{A_{f}}{A_{b}}$, $ΔE$ and $ΔV$ are temperature- and pressure-independent. Simplification of Equation \ref{14} yields, on solving for the ratio of $dP$ to $dT$, Equation \ref{2a} with
$Q = ΔE + PΔV = ΔH. \nonumber$
A kinetic analysis of the similar but slightly more complicated case of the vaporization of a liquid (or solid) is given in Appendix 1, together with a kinetic analysis of the effect on a vapor pressure of squeezing a liquid (the Gibbs-Poynting effect), with an application to osmosis.
Osmotic Equilibrium
Consider, next, diffusion of a pure solvent at pressure P through a rigid, semi-permeable membrane into a solution at pressure P + π,
$x~(pure~solvent) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x~(solution) \tag{15}$
$\begin{matrix} ~ & \text{Pure Solvent} & \text{Solution} \ \text{Pressure} & P & P + \pi \ \text{Mole Fraction} & N_{x} = 1 & N_{x} < 1 \end{matrix} \nonumber$
The kinetic analysis $R_{f} = k_{f} = R_{b} = k_{b}N_{X}$ yields, with Arrhenius's relation (6), expressions identical to (9) and (9a). In this instance, at least approximately, $\Delta E = \Delta V = 0$. Thus, for change (15),
$\Delta H = \Delta (PV) = (P + \pi) \bar{V}_{x} - P \bar{V}_{x} = \pi \bar{V}_{x} \tag{16}$
Substitution from (16) into (9a) yields
$\frac{ \pi \bar{V}_{x}}{RT} + \ln(N_{x}) = \ln \frac{A_{f}}{A_{b}} \tag{17}$
where $\ln (\frac{A_{f}}{A_{b}})$ is a constant.
For NX = 1, π = 0 (at equilibrium). In this instance, therefore,
$\ln \left( \frac{A_{f}}{A_{b}} \right) = 0 \tag{18}$
Substitution from (18) into (17) yields for dilute solutions (lnNX ≈ -N2) the usual thermodynamic expression for a solution's osmotic pressure π:
$\pi \approx \frac{RTN_{2}}{ \bar{V}_{x}} \approx RTC_{2} \tag{19}$

where $C_{2}$ is the solute concentration in moles/liter.
Chemical Equilibrium
By the Principle of Microscopic Reversibility, one has that for chemical change
$aA + bB \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} dD + eE \nonumber$
the rate at which A and B disappear by the (perhaps unlikely) mechanism $aA + bB$, rate law $R_{f} = k_{f}c_{A}^{a}c_{B}^{b}$, is at equilibrium equal to the rate at which A and B appear by the mechanism $dD + eE$, rate law $R_{b} = k_{b}c_{D}^{d}c_{E}^{e}$.1 Thus, from $R_{b} = R_{f}$ one obtains the familiar Law of Mass Action, equation 21 below, which with equations 6 and 7 yields equation 22, from which can be obtained directly equation 2c ($Q_{irrev} = \Delta H$).
$\frac{C_{D}^{d}C_{E}^{e}}{C_{A}^{a}C_{B}^{b}} \Big|_{equil.} = \frac{k_{f}}{k_{b}} = K_{eq} \tag{21}$
$= \frac{A_{f}}{A_{b}} e^{ \frac{- \Delta H}{RT}} \tag{22}$
For later reference, we note that, from equations 21 and 22,
$K_{eq} = \frac{A_{f}}{A_{b}} e^{ \frac{- \Delta H}{RT}} \label{23a}$

$K_{eq} = e^{ \frac{ -[ \Delta H - RT \ln (A_{f}/A_{b})]}{RT}} \label{23b}$
1For a further discussion of this point see: Frost, A. A., and Pearson, R. G., "Kinetics and Mechanism," John Wiley and Sons, Inc., 1953, Ch. 8; or Frost, A. A., J. Chem. Educ., 18, 272 (1941).
Boltzmann's Factor
A particularly simple "chemical" change is the transition of a molecule X in a quantum energy state i, energy εi, to a quantum energy state j, energy εj.
$x~(state~i) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x~(state~j) \tag{24}$
By arguments identical to those given in the preceding section, one obtains expressions of the form of equations 21 and 22. In this simple instance $K_{eq} = \frac{C_{j}}{C_{i}}$ [$C_{j(i)}$ = concentration of molecules in state $j$ ($i$)], $\Delta H = N_{o}( \varepsilon_{j} - \varepsilon_{i})$ [since $\Delta(PV) = 0$], and $A_{f} = A_{b}$. Thus, for a system that is at equilibrium with respect to the change indicated in equation 24,
$\frac{C_{j}}{C_{i}} = e^{ \frac{ -( \varepsilon_{j} - \varepsilon_{i} )}{kT}} \tag{25}$
where $k \equiv \frac{R}{N_{o}}$
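A quick numerical illustration of equation 25 (the numbers are assumed for illustration, not taken from the paper): for an energy gap of 1000 cm-1, a typical vibrational spacing, the equilibrium population ratio at 300 K is below one percent:

```python
import math

h, c, kB = 6.626e-34, 2.998e10, 1.381e-23   # J s, cm/s, J/K
de = h * c * 1000.0                          # assumed gap of 1000 cm^-1, in joules
print(math.exp(-de / (kB * 300.0)))          # C_j/C_i ~ 8e-3 at 300 K
```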
From the Boltzmann-factor expression, equation 25, one can obtain directly by summation the partition functions and thence, by differentiation and the taking of logarithms, the other standard expressions of statistical thermodynamics.
Electrochemical Equilibrium
For the flow of electrons from a potential V1 to a potential V2,
$e~(potential~V_{1}) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} e~(potential~V_{2}) \tag{26}$
in an electrochemical circuit, cell voltage ξ = V2 - V1, one has that at equilibrium (a "balanced circuit"), Rf = kf = Rb = kb. Thus, by the Arrhenius relation, equation 6, at equilibrium
$\frac{A_{f}}{A_{b}}=e^{ \frac{\Delta_{f} H^{*} - \Delta_{b} H^{*}}{RT}} \nonumber$
The activation enthalpies $\Delta H^{*}$ contain, in this instance, two contributions: one from the enthalpy of activation of the chemical change to which the electron flow is coupled in an electrochemical cell; the other from the enthalpy of activation for the physical transfer of electrons across a potential difference ξ. Thus, in this instance,
$\frac{\Delta_{f} H^{*} - \Delta_{b} H^{*}}{RT} = \frac{ \Delta_{rx}H + nF \xi}{RT} \tag{28a}$

$= \ln \frac{A_{f}}{A_{b}}, \text{ a constant} \tag{28b}$
At equilibrium the right-hand-side of (28) is equal to $\ln \frac{A_{f}}{A_{b}}$, a constant, equation 28b. If the temperature and voltage change from values T and ξ that satisfy equations 28 to new values T + dT and ξ + dξ, for equilibrium to be maintained, dT and dξ must be such that
$\frac{ \Delta_{rx} H + nF ( \xi + d \xi)}{R(T + dT)} = \frac{ \Delta_{rx} H + nF \xi}{RT} \tag{29}$
In writing equation 29 it has been assumed (again) that, like $\frac{A_{f}}{A_{b}}$, ΔrxH is temperature independent: simplification of equation 29 yields on solving for the ratio of dξ to dT
$nF \frac{d \xi}{dT} = \frac{ \Delta_{rx} H + nF \xi}{T} \tag{30}$
Equation 30 is, in disguise, equation 2d. For consider this universe (or isolated system): a chemical system σ, an atmosphere atm, mechanical surroundings wt, and thermal surroundings θ [for example, as here, a chemical cell σ at constant temperature (owing to thermal contact with θ) and constant pressure (owing to mechanical contact with atm) performing useful work nFξ]. Application of the First Law (the conservation of energy) to the universe σ + atm + wt + θ yields, on introducing the definitions of P, ΔrxH, and Q, the expression in equation 31.
$\Delta E_{total} = \Delta E_{ \sigma} + \Delta E_{atm} + \Delta E_{wt} + \Delta E_{ \theta} = 0 \tag{31}$
where $\Delta E_{atm} = P \Delta V_{ \sigma}$; $\Delta E_{ \sigma} + \Delta E_{atm} = \Delta H_{ \sigma} = \Delta_{rx} H$; $\Delta E_{wt} = nF \xi$; $\Delta E_{ \theta} = -Q$.
Thus, for a universe σ (a chemical cell) + atm + wt + θ, ΔrxH + nFξ = Q. When the universe is in internal equilibrium (the change in equation 26 is reversible), one may write
$\Delta_{rx} H + nF \xi_{rev} = Q_{rev} \label{32}$
Substitution from equation 32 into equation 30 yields equation 2d.
Carnot's Theorem
The previous results can be generalized. The work obtained from a spontaneous chemical change need not appear as electrical energy. Replacing nFξ in equation 28a by W, any useful work, one has for reversible changes that
$\frac{ \Delta H + W_{2}}{T_{2}} = \frac{ \Delta H + W_{1}}{T_{1}} \label{33}$
In writing Equation \ref{33}, it has been assumed (again) that $ΔH$ is independent of temperature, i.e., that
$C_{p}(\text{products}) = C_{p}(\text{reactants}) \label{34}$
W2 and W1 represent the work obtained at temperatures T2 and T1, respectively.
Consider now this partial cycle (a cycle for a composite chemical system σ1 + σ2, not, however, for its thermal and mechanical surroundings): A chemical reaction for which the change in enthalpy is ΔH advances forward reversibly at temperature T2 in a system σ2 in contact with a thermal reservoir θ2 performing useful work W2 with Qrev ≡ Q2 = ΔH + W2 (by Equation \ref{32}). Next the reaction is run backward reversibly at a lower temperature T1 in a system σ1 (except for its temperature, identical with σ2) in contact with thermal reservoir θ1 consuming useful work W1. Finally with a graded series of external thermal reservoirs the products in σ1 (chemically identical to the reactants in σ2) are warmed reversibly from T1 to T2 and, using the same set of thermal reservoirs, but in the opposite order, the products in σ2 (chemically identical to the reactants in σ1) are cooled from T2 to T1. By Equation \ref{34}, the individual external reservoirs suffer no net change. The net work obtained from the overall, reversible process (cyclic for σ1 + σ2) is W1 - W2. By Equation \ref{33},
$W_{1} = \frac{T_{1}}{T_{2}} ( \Delta H + W_{2}) - \Delta H. \nonumber$
Thus
$W_{2} - W_{1} = (W_{2} + \Delta H) (1- \frac{T_{1}}{T_{2}})$
where $W_{2} + \Delta H = Q_{2}$.
Division of both sides by Q2, the energy absorbed from the warmer thermal reservoir, yields Carnot's theorem.
$\frac{W}{Q_{2}}|_{rev} = 1- \frac{T_{1}}{T_{2}}\label{35}$
Our discussion of the kinetic derivation of the extramural relations of chemical thermodynamics concludes with Equation \ref{35} and its companion
$\frac{dW_{rev}}{dT} = \frac{Q_{rev}}{T} \label{36}$
obtained, after replacing nFξ by W, from Equations \ref{30} and \ref{32}. All the second-law based relations of thermodynamics are essentially special instances of Equations \ref{35} or \ref{36}.
ΔS and ΔG - Clausius-Gibbs Thermodynamics
The major intramural relations of chemical thermodynamics are obtained by introducing the abbreviation
$R \ln \frac{A_{f}}{A_{b}} = \Delta S \label{37}$
From the present viewpoint Equation \ref{37} may be considered a definition of ΔS.
Use of Equation \ref{37} in Equation \ref{13} yields for the melting-freezing equilibrium.
$\Delta S = \dfrac{ \Delta H}{T} \nonumber$
Use of Equation \ref{37} in Equation \ref{23b} yields
$K_{eq} = e^{ \frac{-( \Delta H - T \Delta S^{ \circ})}{RT}} \label{39}$
The superscript ° on S in Equation \ref{39} is added to indicate that in this instance the numerical value of ΔS calculated from equation 37 will depend on the units used to express the concentrations of, for example, A and B, since the latter will determine, in part, the numerical value assigned to the kinetic parameter $A_{f}$ in the rate law $R_{f} = A_{f}e^{ \frac{- \Delta_{f} H^{*}}{RT}} C_{A}^{a} C_{B}^{b}$.
Use of Equation \ref{37} in Equations \ref{28a} and \ref{28b} yields (with nFξ = W)
$\frac{ \Delta H + W_{rev}}{T} = \Delta S \tag{40a}$

or,

$W_{rev} = -( \Delta H - T \Delta S) \tag{40b}$
Taken with equation 32, equation 40a yields equation 1a, which, with equation 36, yields
$\frac{dW_{rev}}{dT} = \Delta S \tag{41}$
Together, equations 40a and 41 yield $\frac{dW_{rev}}{dT} = \frac{ \Delta H + W_{rev}}{T}$ or
$\frac{d \frac{W_{rev}}{T}}{dT} = \frac{ \Delta H}{T^{2}} \tag{42}$
This last relation is an extramural relation; the symbols S and/or G do not appear in it. It can be obtained directly from equation 36 and equation 32 (with $nF \xi_{rev} = W_{rev}$).
Introduction of the abbreviation of equation 1c, a definition of ΔG, yields with equation 39 (and the ideal-solution-theory approximation that ΔH is concentration independent) equation 1b. Use of equation 1c in equation 40b yields $W_{rev} = -\Delta G$. The latter with equation 41 yields equation 1d and, with equation 42, the Gibbs-Helmholtz equation:
$\frac{d \frac{ \Delta G}{T}}{dT} = - \frac{ \Delta H}{T^{2}} \tag{43}$
Equivalence of the Intra- and Extramural Relations of Thermodynamics
Introduction of the symbols $ΔS$ and $ΔG$ with the assigned properties
(1a)$\Delta S = \frac{Q_{rev}}{T}$
(1c) $\Delta G = \Delta H - T \Delta S$
(1d) $\left[ \frac{ \partial (\Delta G)}{ \partial T} \right]_{P} = - \Delta S$
does not increase the physical content of thermodynamics, namely that:
(32) $Q = \Delta H + W$ The First Law2
(36) $\frac{dW_{rev}}{dT} = \frac{Q_{rev}}{T}$ The Second Law
2For a universe σ + θ + atm + wt
With definitions of Equations \ref{1c} and \ref{1a}, Equations \ref{32} and \ref{36} imply Equation \ref{1d}:
$\Delta G = \Delta H - T \Delta S = \Delta H - Q_{rev} = -W_{rev} \Rightarrow \left[ \frac{ \partial (\Delta G)}{ \partial T} \right]_{P} = - \Delta S$
Conversely, with Definitions \ref{1c} and \ref{1a}, Equations \ref{32} and \ref{1d} imply Equation \ref{36}. The intramural and extramural relations of thermodynamics are logically equivalent to each other. To write equation 1b
$-RT \ln K_{eq} = \Delta G^{ \circ}$
is, with Equations \ref{1d} and \ref{1c}, equivalent mathematically, to writing the van 't Hoff relation (Equation \ref{2c}) in its differential form
$\frac{d \ln K_{eq}}{dT} = \frac{ \Delta H}{RT^{2}}$
The position in the above, hierarchical arrangement of ideas of the expression $\Delta S_{total} \geq 0$ is described in Appendix 2.
Summary and Conclusions
Equations of classical (and statistical) thermodynamics based on the Second Law can be divided into two classes: those that contain the symbols S and/or G (or A) (the intramural relations) and those that do not (the extramural relations). The latter relations, those of immediate practical use, can be obtained quickly and easily, without calculus, from simple kinetic arguments based on Arrhenius's Rate-Constant Law and the assumptions of ideal solution theory (ΔH independent of concentration; activities of solvents equal to mole fractions, those of gases to partial pressures); the assumption, or approximation, that $\Delta C_{p} = 0$; and, in some instances, the Principle of Microscopic Reversibility. The kinetic treatment is, thus, a complement to, not a complete substitute for, the usual thermodynamic derivations of, for example, Clapeyron's equation and Carnot's theorem, which are valid relations even for non-ideal systems and for systems in which $\Delta C_{p} \neq 0$.
Arrhenius's Law is the non-thermodynamically inclined chemist's friend. While not encompassing the full content of the Second Law, and probably precisely because of that fact, Arrhenius's Rate-Constant Law embodies in a form immediately and easily applicable to many problems (both classical and statistical) those implications of the Second Law of particular interest to chemists. One may wonder how Arrhenius was led to an expression that captures so simply yet effectively the chemically significant features of the Second Law of thermodynamics.
Origin of Arrhenius's Rate-Constant Law
"In his notable book Studies in Chemical Dynamics van 't Hoff gives a theoretically-based formulation of the influence of temperature on the rate of reaction," wrote Arrhenius in 1889 in a paper (his chief contribution to chemical kinetics) On the Reaction Velocity of the Inversion of Cane Sugar by Acids (1).
"It may be proved, by means of thermodynamics," van 't Hoff had written (2), "that the values of k1 and k2 [our kf and kb] must satisfy the following equation: -
$\frac{d \log k_{1}}{dT} - \frac{d \log k_{2}}{dT} = \frac{q}{2T^{2}}" \tag{44}$
[Today we usually write ln for log, ΔH for q, R for 2.]
"Although this equation does not directly give the relationship between the constants k and the temperature," continued van 't Hoff, "it shows that this relationship must be of the form
$\frac{d \log k}{dT} = \frac{A}{T^{2}} + B \tag{45}$
where A and B are constants" (2).
Implicit in van 't Hoff's remarks is the understanding that $A_{1} - A_{2} = q$ (cf. equation 7a) and that $B_{1} = B_{2}$.3
"It is, however, easily seen," notes Arrhenius, "that B can be any function, F(T), of the temperature... [provided only that] the F(T) belonging to two reciprocal reactions are the same" (1).
"Since F(T) can be anything at all," continues Arrhenius, "it is not possible to proceed further without introducing a new hypothesis, which is in a certain sense a paraphrase of the observed facts" [emphasis added].
Noting that the influence of temperature on specific reaction rate is very large, much larger than increasing gas-phase collision frequencies or decreasing liquid-phase viscosities, Arrhenius suggests by analogy with the "similar extraordinary large change in specific reaction velocity (k)... brought about by weak bases and acids" [an effect arising from the catalytic effect of an often infinitesimal amount of H+ or OH-] that in, for example, the inversion of cane sugar, the rate of which is sharply temperature-dependent, the "actual reacting substance is not [ordinary] sugar, since its amount does not change with temperature, but is another hypothetical substance... which we call 'active cane sugar' [today, 'activated cane sugar']... that is generated [in small amounts, by activation] from [ordinary, inactive] cane sugar... and must [be supposed to] increase rapidly in quantity with increasing temperature."
Continuing with his paraphrase of the observed facts, Arrhenius writes that "since the reaction velocity is approximately proportional to the amount [concentration] of [ordinary] cane sugar... the amount [concentration] of 'active cane sugar', Ma, must be taken to be approximately proportional to the amount of inactive cane sugar, Mi. The equilibrium condition [emphasis added] is thus:
$M_{a} = k M_{i} \tag{46}$
"The form of this equation shows us that a molecule of 'active cane sugar' is formed from a molecule of inactive cane sugar either by a displacement of the atom or by addition of water", whose amount is constant; its concentration, therefore, does not appear in equation 46.
The constant k in equation 46 wears two hats. It is simultaneously a thermodynamic parameter and a rate parameter. It is the thermodynamic equilibrium constant for the postulated equilibrium between active and inactive cane sugar molecules (it would be written today as K* or K). And, if the rate of inversion is, as postulated, proportional to Ma, k is proportional to the kinetic rate constant for the inversion of cane sugar.
Carrying over in this way to kinetics a thermodynamic relation, Arrhenius applies van 't Hoff's thermodynamic expression, equation 44, for the temperature variation of an equilibrium constant $K$ ($= \frac{k_{1}}{k_{2}}$) to the thermodynamic-kinetic constant k of equation 46. In the spirit of modern absolute rate theory, he writes that "Thus for the constant k (or what is the same thing, $\frac{M_{a}}{M_{i}}$) we have the equation
$\frac{d \log_{nat} k}{dT} = \frac{q}{2T^{2}}" \nonumber$
which on integration yields, with q = ΔH (and 2 = R), equation 6.
That Arrhenius's Rate-Constant Law captures for chemistry the essential features of the Second Law of thermodynamics is, thus, no mystery. It is a plausible application, based on a selective if brief axiomatization of nearly universal features of chemical reaction rates, of van 't Hoff's thermodynamic relation (equation 44), which is a special instance - THE CHEMICAL INSTANCE - of the Gibbs-Helmholtz equation (equation 43), which in turn is a general instance, if not quite the complete embodiment, of Carnot's theorem (equations 35 and 36), itself THE most general mathematical statement of the Second-Law-like behavior of nature. As we have shown, however, in many chemical problems Arrhenius's Law (equation 6) is a more quickly and easily used expression of the Second Law than is Carnot's more widely applicable and, though mathematically simpler, chemically more remote theorem (equation 35).
3In absolute rate theory $k = \frac{RT}{h} e^{ \frac{ \Delta S^{ \ddagger}}{R}} e^{ \frac{- \Delta H^{ \ddagger}}{RT}}$. Hence, for $\Delta C_{p}^{ \ddagger} = 0$, $\frac{d \ln k}{dT} = \frac{ \Delta H^{ \ddagger}}{RT^{2}} + \frac{1}{T}$ and $B = \frac{1}{T}$. More generally, if, empirically, one has $k = aT^{n}e^{ \frac{-A}{T}}$, with a, A constants, then, by equation 45, $B = \frac{n}{T}$.
Appendix 1
Derivation of Clapeyron's Equation for the Phase Change
$x_{(pure~liquid)} \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x_{(gas)} \nonumber$
With a Note on the Gibbs-Poynting Effect and Osmotic Pressure
At equilibrium ($R_{f} = R_{b}$), $k_{f} = k_{b}P$. Using equations 6 and 7a, one obtains, on taking logarithms,
$\frac{ \Delta H}{RT} + \ln(P) = \ln \frac{A_{f}}{A_{b}}$, a constant
Thus, on going from an equilibrium point T,P to another equilibrium point T + dT, P + dP, one has that if ΔH is (in this instance) independent of T and P, dT must be such that
$\frac{ \Delta H}{R(T+dT)} + \ln(P + dP) = \frac{ \Delta H}{RT} + \ln(P)$.
Multiplying through by $R(T + dT)T$, simplifying, noting that $\ln(P + dP) - \ln(P) = \ln(1 + \frac{dP}{P}) \approx \frac{dP}{P}$ and that $\frac{RT^{2}}{P} = T\overline{V}^{g}$, and dropping the term containing $dP \times dT$, one obtains
$\frac{dP}{dT} = \frac{ \Delta H}{T \overline{V}^{g}} \approx \frac{ \Delta H}{T \Delta V}$.
If the pressure on the gas, $P^{g}$, is not the same as the pressure on the liquid phase, $P^{l}$ (the liquid, for example, might be squeezed -- as in an osmotic experiment -- behind a rigid, x-permeable barrier), the first equation above should be written

$\frac{ \Delta E + P^{g} \overline{V}^{g} - P^{l} \overline{V}^{l}}{RT} + \ln(P^{g}) = \ln \left( \frac{A_{f}}{A_{b}} \right)$, a constant.

For vapors behaving as ideal gases, one has $P^{g} \overline{V}^{g} = RT$. If, now, at constant temperature, the two pressures change from values $P^{l}$ and $P^{g}$ that satisfy the above relation to new values $P^{l} + dP^{l}$ and $P^{g} + dP^{g}$, for equilibrium to be maintained, $dP^{l}$ and $dP^{g}$ must be such that

$\frac{ \Delta E + RT - (P^{l} + dP^{l}) \overline{V}^{l}}{RT} + \ln(P^{g} + dP^{g}) = \frac{ \Delta E + RT - P^{l} \overline{V}^{l}}{RT} + \ln(P^{g})$.
Simplifying, one obtains the Gibbs-Poynting equation
$dP^{g} = \frac{ \overline{V}^{l}}{ \overline{V}^{g}} dP^{l}$.
Consider, now, a squeezed, impure liquid X in equilibrium with the pure, unsqueezed liquid, equation 15, equilibration occurring (in one's mind) via a common vapor phase. A finite squeeze $\Delta P^{l} = \pi$ increases the vapor pressure (the pressure of the gas that maintains equilibrium with the liquid) by an amount (see above) $\frac{ \overline{V}^{l}}{ \overline{V}^{g}} \pi$. The presence, however, of a second component, 2, decreases the vapor pressure from that of the pure liquid, $P_{x}^{ \circ}$, by an amount (see equation \ref{4b}) $P_{x}^{ \circ} - P_{x}^{ \circ}N_{x} = P_{x}^{ \circ}(1 - N_{x}) \equiv P_{x}^{ \circ}N_{2}$. Equating those two terms, one finds that the amount π an impure liquid must be squeezed to maintain equilibrium with the pure liquid is given by the expression

$\pi = \frac{P_{x}^{ \circ} \overline{V}^{g} N_{2}}{ \overline{V}^{l}} = \frac{RTN_{2}}{ \overline{V}^{l}}$
(in agreement with equation 19).
Appendix 2
$\Delta S_{ \sigma},~ \Delta S_{ \theta},~ \Delta S_{atm},~ \Delta S_{wt}$ and $\Delta S_{total}$
The primary implications for classical thermodynamics of the Second-Law-type behavior of nature are embodied in expression 36: $\frac{dW_{rev}}{dT} = \frac{Q_{rev}}{T}$. The variation with temperature of Wrev is a property jointly of the initial and final states of a system σ. It is, so to speak, a "double state function". Define, in the spirit with which equation 36 was introduced,
$\Delta S \equiv \frac{dW_{rev}}{dT}$ For convenience4

$= \frac{Q_{rev}}{T}$ By Carnot's Theorem

$= \frac{ - \Delta_{rev} E_{ \theta}}{T}$ By definition: $Q \equiv -\Delta E_{ \theta}$

$= \frac{ \Delta H_{ \sigma} + W_{rev}}{T}$ By the First Law
Define, also, purely for bookkeeping purposes,
$\Delta S_{ \theta} \equiv \frac{ \Delta E_{ \theta}}{T}$
$\Delta S_{atm} \equiv 0$
$\Delta S_{wt} \equiv 0$
$\Delta S_{total} \equiv \Delta S_{ \sigma} + \Delta S_{ \theta} + \Delta S_{atm} + \Delta S_{wt}$
Clearly, for a reversible process, $\Delta S_{total} = 0$. For irreversible processes, one has that
$W_{irrev} = Q_{irrev} - \Delta H_{ \sigma} < W_{rev} = Q_{rev} - \Delta H_{ \sigma}$

$\Rightarrow Q_{irrev} < Q_{rev}$, or (since $Q \equiv - \Delta E_{ \theta}$)

$\Delta_{irrev} E_{ \theta} > \Delta_{rev} E_{ \theta}$

$\Rightarrow \Delta_{irrev} S_{ \theta} > \Delta_{rev} S_{ \theta}$

$\Rightarrow \Delta_{irrev} S_{total} > 0$.
4It's easier to write "$\Delta S$" than "$\frac{dW_{rev}}{dT}$".
Literature Cited
1. Arrhenius, S., Z. Physik. Chem., 4, 226 (1889); excerpted and translated by Back, M. H. and Laidler, K. J., "Selected Readings in Chemical Kinetics," Pergamon Press, New York 1967, pp31-35.
2. van 't Hoff, J.H., "Studies in Chemical Dynamics," translated by Thomas Ewan, Chemical Publishing Co., Easton, Pa., 1896, pp122-3.
Fig. 1: $\Delta H = \Delta_{f}H^{*} - \Delta_{b}H^{*}$
Introduction
Kinetics and thermodynamics seem like very different disciplines. They emphasize different phenomena (change; equilibrium) and introduce different concepts (rate constants; energy/entropy). It is not widely known that a useful connection between those disparate fields was suggested late in the 19th century by Svante Arrhenius when he introduced the concept of the activated molecule to account for the exponential dependence on temperature of kinetic rate constants (1,2). Textbook accounts of Arrhenius's rate-constant law fail to reveal the creative use Arrhenius made of thermodynamic principles in his analysis of the temperature dependence of chemical reaction rates, and, thus, fail to exploit Arrhenius's insights into the teaching of thermodynamics.
After a brief review of Arrhenius's work, we will show that the relations of classical thermodynamics of chief interest to chemists can be obtained by simple algebra from elementary rate laws and Arrhenius's rate constant expression
$k = Ae^{ \frac{- \Delta H^{*}}{RT}} \tag{471.1}$
The Origin of Arrhenius's Rate Constant Expression (1)
Facts: Reaction rates have significant temperature dependences (typically on the order of +8 percent per degree), which cannot be accounted for by increased gas-phase collision frequencies or decreased liquid-phase viscosities.

For all practical purposes, the concentrations of reactant molecules are not temperature dependent.
Hypothesis: Since the concentration of a reactant (M) does not vary with temperature, the actual reactant species (in a unimolecular reaction) might be an activated form of the molecule M*, with
$Rate = \mathcal{K} (M^{*}) \tag{471.2}$
where the constant $\mathcal{K}$ is independent of temperature and (M*) is directly proportional to (M) and is sharply dependent on temperature.
$(M^{*}) = k(T)~(M) \tag{471.3}$
Implications: Relationship 471.3 can be substituted into 471.2
$Rate = \mathcal{K} k(T)~(M) \tag{471.4}$
which emphasizes that k(T) is a kinetic rate parameter. Relationship 471.3 can be rewritten as
$k(T) = \frac{(M^{*})}{(M)} \tag{471.5}$
which emphasizes that k(T) is, also, a thermodynamic parameter: an equilibrium constant. As such it satisfies van 't Hoff's expression1
$\frac{d \ln k(T)}{dT} = \frac{ \Delta H^{*}}{RT^{2}} \tag{471.6}$
where $\Delta H^{*}$ is the enthalpy change accompanying the formation of M* from M - the enthalpy of activation. If, over small temperature intervals, $\Delta H^{*}$ may be treated as a constant, the integrated form of Equation 471.6 is essentially expression 471.1: $k(T) = Ae^{- \Delta H^{*}/RT}$, where A is a constant independent of T. As indicated in Figure 1 (f = forward, b = backward),
$\Delta_{f} H^{*} - \Delta_{b} H^{*} = \Delta H = \Delta E + \Delta (PV) \tag{471.7}$
Discussion: Thermodynamics played a large role in Arrhenius's elucidation of the temperature dependence of rate constants. It is not surprising, then, that the relations of classical thermodynamics can be obtained from elementary rate laws and Arrhenius's rate-constant expression. Below are eight illustrative examples of simple kinetic derivations of thermodynamic relations.
1Arrhenius employed van 't Hoff's equation in the form $\frac{d \ln k}{dT} = \frac{q}{RT^{2}}$. Under the usual conditions of constant pressure $q = \Delta H$. This accounts for our use of $\Delta H^{*}$ rather than Ea in equation 471.1.
Clapeyron's Equation for Condensed Phases
In this analysis and those that follow, the simplest mechanism consistent with overall stoichiometry is employed. It is not difficult to show that more complicated mechanisms would yield the same result (see paper cited by Frost in footnote 2).
For equilibrium with respect to the change:
$x_{(pure~solid)} \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x_{(pure~liquid)}$
Rf (melting rate) = kf = Rb (freezing rate) = kb. The concentrations of pure substances are constant and therefore absorbed in the rate constants. Thus, by equation 471.1, at equilibrium
$A_{f} e^{ \frac{- \Delta_{f} H^{*}}{RT}} = A_{b} e^{ \frac{- \Delta_{b} H^{*}}{RT}}$
Hence, by equation 471.7, on taking logarithms,
$\frac{ \Delta E + P \Delta V}{RT} = \ln( \frac{A_{f}}{A_{b}}) = constant$
For equilibrium to be maintained when T and P change from values satisfying the above expression to new values T + dT and P + dP, dT and dP must be such that
$\frac{ \Delta E~+~(P + dP)~ \Delta V}{R (T + dT)} = \frac{ \Delta E~+~P \Delta V}{RT}$
Simplification yields Clapeyron's equation
$\frac{dP}{dT} = \frac{ \Delta H}{T \Delta V} \nonumber$
Ideal Solubility and Freezing Point Depression
For equilibrium with respect to the change
$x_{(solid)} \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x_{(ideal~solution)}$
Rf (melting or solution rate) = kf = Rb (freezing or precipitation rate) = kbNx. Thus, by equations 471.1 and 471.7, for all mole fractions Nx,
$\ln \left( \frac{A_{f}}{A_{b}} \right) = \frac{ \Delta H}{RT} + \ln(N_{x}) = \frac{ \Delta H}{RT} \Big|_{N_{x} = 1} = \frac{\Delta H}{RT_{nfp}}$.
Hence,
$\ln(N_{x}) = \frac{ \Delta H}{R} \left( \frac{1}{T_{nfp}} - \frac{1}{T} \right) = \frac{ \Delta H (T - T_{nfp})}{RTT_{nfp}} \nonumber$
For $N_{x} \cong 1$, $T \cong T_{nfp}$ (of x), $\ln(N_{x}) \cong N_{x} - 1 \equiv -N_{2}$ and
$-(T-T_{nfp}) \cong \frac{RT^{2}_{nfp}}{ \Delta H} N_{2} \nonumber$
Osmotic Equilibrium
For equilibrium with respect to the change
$x_{(pure~liquid)} \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} x_{(ideal~solution)}$
$\begin{matrix} ~ & \text{Pure Liquid} & \text{Ideal Solution} \ \text{Pressure} & P & P + \pi \ \text{Mole Fraction} & N_{x} = 1 & N_{x} < 1 \end{matrix} \nonumber$
Rf = kf = Rb = kbNx. Thus, by equations 471.1 and 471.7,
$\ln \frac{A_{f}}{A_{b}} = \frac{ \Delta E~+~ \Delta (PV)}{RT} + \ln(N_{x})$
In this instance:
$\Delta E = \Delta V = 0$
$\Delta (PV) = (P + \pi) \overline{V}_{x} - P \overline{V}_{x} = \pi \overline{V}_{x}$.
Hence,
$\ln \frac{A_{f}}{A_{b}} = \frac{ \pi \overline{V}_{x}}{RT} + \ln(N_{x})$.
For Nx = 1, $\pi$ = 0. Thus $\ln( \frac{A_{f}}{A_{b}}) = 0$.
Hence,
$\pi = - \frac{RT}{ \overline{V}_{x}} \ln(N_{x}) \cong \frac{RTN_{2}}{ \overline{V}_{x}} \cong RTC_{2} \nonumber$
where N2 << 1.
Chemical Equilibrium
For equilibrium with respect to the change
aA + bB = cC + dD
the Principle of Microscopic Reversibility states that the rate at which A and B disappear by the (perhaps unlikely) mechanism $aA + bB$, rate law $R_{f} = k_{f}c_{A}^{a}c_{B}^{b}$, is equal to the rate at which A and B appear by the mechanism $cC + dD$, rate law $R_{b} = k_{b}c_{C}^{c}c_{D}^{d}$.2 Thus, with equations 471.1 and 471.7, one obtains
$K \equiv \frac{c_{C}^{c}c_{D}^{d}}{c_{A}^{a}c_{B}^{b}} \Big|_{equilibrium} = \frac{k_{f}}{k_{b}} = \frac{A_{f}}{A_{b}} e^{ \frac{- \Delta H}{RT}} \tag{471.11}$
or,
$\ln(K) + \frac{ \Delta H}{RT} = \ln \left( \frac{A_{f}}{A_{b}} \right) = constant \tag{471.12}$
If T changes from T1 to T2, the change in K, from K1 to K2, must be such that, by equation 471.12,
$\ln \left( \frac{K_{2}}{K_{1}} \right) = \frac{ \Delta H}{R} \left( \frac{1}{T_{1}} - \frac{1}{T_{2}} \right) \tag{471.13}$
2For a further discussion of this point, see Frost, A. A. and Pearson, R. G., "Kinetics and Mechanism," John Wiley and Sons, Inc., 1953, Ch. 8; or Frost, A. A., J. Chem. Educ., 18, 272 (1941).
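Equation 471.13 is easily exercised numerically. A Python sketch with assumed values (ΔH = +50 kJ/mol, warming from 298 K to 308 K; neither figure is from the paper) shows K nearly doubling over a ten-degree interval:

```python
import math

dH, R = 50_000.0, 8.314   # assumed endothermic reaction, J/mol; gas constant, J/(mol K)
T1, T2 = 298.0, 308.0
ratio = math.exp(dH / R * (1 / T1 - 1 / T2))   # K2/K1 from equation 471.13
print(round(ratio, 2))    # ~1.93
```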
Boltzmann's Factor
For equilibrium with respect to the simple "chemical" change
$x~(quantum~state~i) \underset{b}{\stackrel{f}{\rightleftharpoons}} x~(quantum~state~j)$
one has by 471.11,
$K = \frac{C_{j}}{C_{i}} = \frac{A_{f}}{A_{b}} e^{ \frac{- \Delta H}{RT}}$
In this instance:
$A_{f} = A_{b}$
$\Delta H = \Delta E~+~P \Delta V = \Delta E \equiv N_{A}( \varepsilon_{j} - \varepsilon_{i})$.
$\frac{C_{j}}{C_{i}} = \frac{N_{j}}{N_{i}}$
Hence,
$\frac{N_{j}}{N_{i}} = e^{ \frac{- ( \varepsilon_{j} - \varepsilon_{i})}{kT}} \nonumber$
Electrochemical Equilibrium
For equilibrium with respect to the flow of electrons in an electrochemical circuit
$e~(potential~V_{1}) \underset{k_b}{\stackrel{k_f}{\rightleftharpoons}} e~(potential~V_{2})$
Rf = kf = Rb = kb. Thus, by equation 471.1,
$\ln \frac{A_{f}}{A_{b}} = \frac{ \Delta_{f} H^{*} - \Delta_{b} H^{*}}{RT}$
The activation enthalpies contain in this instance two contributions: one from the enthalpy of activation of the chemical change to which the electron flow is coupled in an electrochemical cell; the other from the enthalpy of activation for the physical transfer of electrons across a potential difference E = V1 - V2. In this instance, $\Delta_{f} H^{*} - \Delta_{b} H^{*} = \Delta H + nFE$.
Hence,
$\ln \frac{A_{f}}{A_{b}} = \frac{ \Delta H + nFE}{RT}$.
For equilibrium to be maintained when T and E change from values satisfying the above expression to new values T + dT and E + dE, dT and dE must be such that
$\frac{ \Delta H + nF(E~+~dE)}{R(T~+~dT)} = \frac{ \Delta H + nFE}{RT}$
Simplification yields
$nF \frac{dE}{dT} = \frac{ \Delta H + nFE}{T}$
By the First Law, $\Delta H + nFE = Q_{rev}$.
Hence,
$nF \frac{dE}{dT} = \frac{Q_{rev}}{T} \nonumber$
Equation 471.16 is a special instance of the general thermodynamic relation
$\frac{dW_{rev}}{dT} = \frac{Q_{rev}}{T} \nonumber$
Alternatively: $\frac{dW_{rev}}{Q_{rev}} = \frac{dT}{T} \nonumber$
[In Gibbs-Clausius notation: $\frac{ \partial ( \Delta G)}{ \partial T} = - \Delta S$]
Equation 471.18 is, in turn, a special instance of the Kelvin formula for the efficiency of a reversible heat engine.
$\varepsilon_{rev} = \frac{T_{2} - T_{1}}{T_{2}} \nonumber$
Mechanical Equilibrium
For equilibrium with respect to a change in the energy E of a purely mechanical system, say a change in the altitude h of a weight wt,
$wt(E_{i} = Mgh_{i}) \underset{b}{\stackrel{f}{\rightleftharpoons}} wt(E_{j} = Mgh_{j}), \nonumber$
Rf = kf = Rb = kb. Hence, by equations 471.1 and 471.7 with, again, Af = Ab,
$\frac{R_{f}}{R_{b}} = \frac{k_{f}}{k_{b}} = e^{ \frac{- (E_{j} - E_{i})}{kT}} = e^{ \frac{- \Delta E_{wt}}{kT}} \nonumber$
For example, for $\Delta h$ = +0.1 m, M = 0.05 kg, and g = 9.8 $\frac{m}{s^{2}}$, $\Delta E_{wt}$ = +0.049 J. Thus, at T = 300 K (k = $1.38 \times 10^{-23}$ J/K),
$\frac{R_{f}}{R_{b}} = 10^{-(10^{18.7})}$
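This double-exponential number is easy to verify; a minimal Python check of the values above (the ratio itself underflows any floating-point type, so the double exponent is reported instead):

```python
import math

M, g, dh = 0.05, 9.8, 0.1   # kg, m/s^2, m
k, T = 1.38e-23, 300.0      # J/K, K

dE = M * g * dh             # = 0.049 J
exponent = dE / (k * T)     # dimensionless, ~1.18e19
# R_f/R_b = exp(-exponent) = 10^(-exponent/ln 10)
log10_of_exponent = math.log10(exponent / math.log(10))
print(f"R_f/R_b = 10^-(10^{log10_of_exponent:.1f})")  # 10^-(10^18.7)
```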
Boltzmann's Relation
For change 471.20 let the number of macroscopically indistinguishable, microscopically distinct quantum states accessible jointly to the mechanical system wt and its thermal surroundings $\theta$ in (wt + $\theta$)'s initial state i, its transition state *, and its final state j be denoted, respectively, $( \Omega_{total})_{i}$, $( \Omega_{total})^{*}$, and $( \Omega_{total})_{j}$. If all quantum states are, a priori, equally probable, $R_{f} = k \frac{ ( \Omega_{total})^{*}}{( \Omega_{total})_{i}}$ and $R_{b} = k \frac{ ( \Omega_{total})^{*}}{( \Omega_{total})_{j}}$. Thus, for equilibrium with respect to change 471.20, by 471.21,
$\frac{ ( \Omega_{total})_{j}}{( \Omega_{total})_{i}} = e^{ \frac{ \Delta E_{wt}}{kT}} \nonumber$
By the First Law, for a universe wt + $\theta$, $- \Delta E_{wt} = \Delta E_{ \theta}$. By the Second Law, $\frac{ \Delta E_{ \theta}}{T} = \Delta S_{ \theta}$. By the law of statistical independence, $\Omega_{total} = \Omega_{ \theta} \Omega_{wt}$. And by the law of the reversibility of purely mechanical changes, $\Omega_{wt}$ = constant. Thus, by equation 471.22,
$\frac{ ( \Omega_{ \theta} )_{final}}{( \Omega_{ \theta} )_{initial}} = e^{ \frac{ \Delta S_{ \theta}}{k}} \nonumber$
By the Third Law, S = 0 when $\Omega = 1$ (for example, a perfect crystal at T = 0K). Thus, by equation 471.23, for systems in internal equilibrium,
$S = k \ln \Omega \nonumber$
Summary and Concluding Comments
Arrhenius's expression (equation 471.1) embodies in a form immediately applicable to chemical problems those implications of the Second-Law-like behavior of Nature of particular interest to chemists. It is, from a chemical point of view, a more quickly and easily used expression of the Second Law than is the more widely applicable but, though mathematically simpler, chemically more remote expression 471.19.
Expression 1 is, in a certain sense, said Arrhenius, "a paraphrase of the observed facts" (1). It is an axiomatization of a nearly universal feature of chemical reactions. If chemical (and physical) changes had no enthalpies of activation, it would be impossible to store energy - as, for example, fat or fuel plus oxygen. Everything would slide quickly to equilibrium, including the sun. Without energy barriers there would be no life - and energy crises.
Arrhenius's expression 1 is based on van 't Hoff's thermodynamic expression $K = Ce^{ \frac{- \Delta H}{RT}}$ (1, 2). Thus, the kinetic derivations above are not, in a logical sense, substitutes for thermodynamic arguments. It is often instructive, however, to see abstract expressions emerge unexpectedly from concrete, special instances.
The present mathematical procedures can be used without change in purely thermodynamic arguments. From the expression $K = Ce^{ \frac{- \Delta H}{RT}}$ one can obtain quickly, as above, expressions 471.8-471.16 without using calculus, the entropy function, the chemical potential, or Carnot's cycle.
Conventional expressions that involve the entropy function can be obtained from the present discussion by introducing the abbreviation $\Delta S \equiv R \ln \frac{A_{f}}{A_{b}}$.
Literature Cited
1. Arrhenius, S., Z. Physik Chem., 4, 226 (1889); excerpted and translated by Back, M. H. and Laidler, K. J., "Selected Readings in Chemical Kinetics," Pergamon Press, New York, 1967, pp. 31-35.
2. van 't Hoff, J. H., "Studies in Chemical Dynamics," translated by Thomas Ewan, Chemical Publishing Co., Easton, Pennsylvania, 1896, pp. 122-123.
11.14: The Global Approach to Thermodynamics
Thermodynamics: Energy conservation with an entropy-based limitation on energy utilization. Nick Herbert
Cochran and Heron recently published an assessment of student ability to apply the laws of thermodynamics to heat engines and refrigerators with special emphasis on the second law.1 They reported exam question results for three levels of instruction: (a) standard; (b) standard plus a tutorial on Carnot’s theorem; (c) standard plus a tutorial on the use of the entropy inequality. The performance of students exposed to instruction levels (b) and (c) was significantly better than that for students who had only the standard level of instruction.
The three levels of instruction all use the system-centered approach to thermodynamics. I would like to propose that a future study include another option – the global approach to thermodynamics.2 In the global approach, after a sketch of the phenomenon under study is made, the sums of the energy and entropy changes are calculated.
$\Delta E_{tot} = \sum_{i} \Delta E_{i}$ $\Delta S_{tot} = \sum_{i} \Delta S_{i}$
Thermodynamics requires that $\Delta E_{tot} = 0$ and $\Delta S_{tot} \geq 0$ for all phenomena.
The global method will be illustrated by applying it to the heat engine shown in the figure, which is the same heat engine as shown in Figure 2 of reference 1. As shown in the appended figure, work is represented by a weight which can rise or fall in the gravitational field.
The global statement of the first law is
$\Delta E_{tot} = \Delta E_{800} + \Delta E_{sys} + \Delta E_{400} + \Delta E_{wt}$
Because the working substance of the system completes a thermodynamic cycle, $\Delta E_{sys} = 0$. Using the energy values in the figure we find that the first law is satisfied; energy is conserved.
$\Delta E_{tot} = -900J + 0J + 400J + 500J = 0$
The global statement of the second law is
$\Delta S_{tot} = \Delta S_{800} + \Delta S_{sys} + \Delta S_{400} + \Delta S_{wt}$
For thermal reservoirs $\Delta S = \frac{ \Delta E_{T}}{T}$, where $\Delta E_{T}$ is the thermal energy change of a reservoir at temperature T. Again, because the working substance undergoes a thermodynamic cycle, $\Delta S_{sys} = 0$. The movement of a weight in a gravitational field does not change its entropy, therefore, $\Delta S_{wt} = 0$.
Using the information from the figure we calculate a negative value for the total entropy change, indicating that a heat engine operating under these conditions is not possible.
$\Delta S_{tot} = \frac{-900J}{800K} + 0 \frac{J}{K} + \frac{400J}{400K} + 0 \frac{J}{K} = -0.125 \frac{J}{K}$
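The bookkeeping is simple enough to script; a minimal Python check of this global entropy sum:

```python
# Total entropy change for the proposed engine: dE/T for each thermal
# reservoir; zero for the cycling working substance and for the weight.
dS_tot = -900 / 800 + 0 + 400 / 400 + 0
print(dS_tot, "J/K")  # -0.125 J/K < 0: this engine cannot operate
```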
By comparison with this global analysis, Cochran and Heron use an auxiliary thermodynamic function, the entropy inequality, to arrive at the same conclusion.
$\Delta S_{sys} \geq \sum \frac{Q_{to~sys}}{T_{env}} = \frac{900 J}{800K} - \frac{400 J}{400 K} = 0.125 \frac{J}{K}$
This result contradicts the fact that a cyclic process for the system’s working substance requires that $\Delta S_{sys} = 0$. Therefore, the proposed heat engine will not function. However, this method is less direct because the entropy inequality is an auxiliary thermodynamic function which is derived from the second law of thermodynamics.
Employing a global thermodynamic analysis of the other figures shown in reference 1, as well as of other types of physical and chemical phenomena, is straightforward.2
It is, of course, important to explore the limits thermodynamics places on the performance of heat engines and refrigerators. These limits can be obtained from the global statements of the first and second laws of thermodynamics by elementary mathematical methods. For traditional heat engines and refrigerators (two thermal reservoirs) the first and second laws are,
$\Delta E_{hot} + \Delta E_{cold} + \Delta E_{wt} = 0$
$\frac{ \Delta E_{hot}}{T_{hot}} + \frac{ \Delta E_{cold}}{T_{cold}} \geq 0$
Using the first law to eliminate ΔEcold in the equation for the second law yields the efficiency relation for heat engines.
$\varepsilon = \frac{ \Delta E_{wt}}{ - \Delta E_{hot}} \leq 1 - \frac{T_{cold}}{T_{hot}}$
Using the first law to eliminate $\Delta E_{hot}$ in the equation for the second law yields the coefficient of performance for refrigerators.
$\kappa = \frac{- \Delta E_{cold}}{ \Delta E_{wt}} \leq \frac{T_{cold}}{T_{hot} - T_{cold}}$
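These two bounds are easily packaged as functions. The Python sketch below (the function names are mine) evaluates them for the 800 K/400 K engine discussed above and, for illustration, an assumed 300 K/280 K refrigerator:

```python
def engine_efficiency_limit(T_hot, T_cold):
    """Maximum efficiency of a reversible heat engine."""
    return 1 - T_cold / T_hot

def refrigerator_cop_limit(T_hot, T_cold):
    """Maximum coefficient of performance of a reversible refrigerator."""
    return T_cold / (T_hot - T_cold)

print(engine_efficiency_limit(800, 400))  # 0.5 for the reservoirs above
print(refrigerator_cop_limit(300, 280))   # 14.0 (assumed illustrative temperatures)
```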
The most appealing feature of the non-system-centered, global approach to thermodynamics is that the first and second laws are front and center, staring you in the face, and not represented by surrogates. Another important feature is that terms involving thermal energy and work need not be subscripted with "to sys" and "on sys". Each part of the phenomenon under study carries its own label; the system is just another part, neither more nor less important than any other part.
1M. J. Cochran and P. R. L. Heron, “Development and assessment of research-based tutorials on heat engines and the second law of thermodynamics,” Am. J. Phys. 74, 734- 741 (2006).
2H. A. Bent, The Second Law: An Introduction to Classical and Statistical Thermodynamics (Oxford University Press, New York, 1965).
11.15: Global Thermodynamic Analyses of Heat Engines
The purpose of this tutorial is to demonstrate the clarity that the global method of doing thermodynamics brings to the analysis of heat engines, which under other names are power plants, heat pumps and refrigerators. The analyses provided are based on the schematic diagram shown below. The "system" is a working substance that undergoes a thermodynamic cycle (ΔEsys = ΔSsys = 0, under reversible conditions), so it does not appear in the mathematical analyses that follow.
Under the circumstances regarding the role of the system outlined above, the first and second laws of thermodynamics are:
$\Delta E_{tot} = \Delta E_{hot} + \Delta E_{cold} + \Delta E_{wt} = 0$
$\Delta S_{tot} = \Delta S_{hot} + \Delta S_{cold} = \frac{ \Delta E_{hot}}{T_{hot}} + \frac{ \Delta E_{cold}}{T_{cold}} \geq 0$
Since we will be interested in the best case scenario, we will use the equal sign for the entropy change, ΔStot = 0, in our calculations.
Power Plant
A power plant harnesses the natural flow of thermal energy from a hot object to a cold object in order to generate useful energy, most likely in the form of electricity. Suppose a power plant operates with a high temperature thermal reservoir at 500K and the ambient temperature is taken to be 300K. How much electricity can, ideally (ΔStot = 0), be generated per 100 J of thermal energy flowing from the high temperature reservoir to the low (ambient) temperature reservoir?
Input parameters: ΔEhot := -100 J Thot := 500 K Tcold := 300 K
$\begin{pmatrix} \Delta E_{hot} + \Delta E_{cold} + \Delta E_{wt} = 0\ \frac{ \Delta E_{hot}}{T_{hot}} + \frac{ \Delta E_{cold}}{T_{cold}} = 0 \end{pmatrix} |_{float,3}^{solve, \begin{pmatrix} \Delta E_{wt}\ \Delta E_{cold} \end{pmatrix}} \rightarrow (40.0 J~60.0J)$
It is clear from this analysis that the power plant is theoretically 40% efficient: $| \frac{40.0 J}{-100 J}| =$ 40%.
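The Mathcad solve block can be reproduced with elementary linear algebra. A Python/numpy sketch (variable names are mine) that solves the same two equations; the heat-pump and refrigerator cases below follow by changing which quantity is taken as known:

```python
import numpy as np

# Reversible (dS_tot = 0) power plant; unknowns are dE_wt and dE_cold:
#   dE_wt + dE_cold  = -dE_hot            (first law)
#   dE_cold / T_cold = -dE_hot / T_hot    (second law with dS_tot = 0)
dE_hot, T_hot, T_cold = -100.0, 500.0, 300.0
A = np.array([[1.0, 1.0],
              [0.0, 1.0 / T_cold]])
b = np.array([-dE_hot, -dE_hot / T_hot])
dE_wt, dE_cold = np.linalg.solve(A, b)
print(dE_wt, dE_cold)  # 40.0 J of work, 60.0 J rejected to the cold reservoir
```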
Heat Pump
Non-spontaneous processes occur when they are driven by some other spontaneous process, as described in the previous example.
Heat pumps are in the news again today, as they were in the late 70s and early 80s. Here we ask the question of how much energy we can pump from the ambient thermal environment into a house by buying 100 J of energy from the local utility. We assume that the ambient thermal source temperature is 270K and the house is being maintained at 300K.
Clear ΔEhot memory: ΔEhot := ΔEhot
Input parameters: ΔEwt := -100 J Thot := 300 K Tcold := 270 K
$\begin{pmatrix} \Delta E_{hot} + \Delta E_{cold} + \Delta E_{wt} = 0\ \frac{ \Delta E_{hot}}{T_{hot}} + \frac{ \Delta E_{cold}}{T_{cold}} = 0 \end{pmatrix} |_{float,4}^{solve, \begin{pmatrix} \Delta E_{hot}\ \Delta E_{cold} \end{pmatrix}} \rightarrow (1000.0 J~-900.0J)$
This remarkable result shows that by buying 100 J from the local utility one can pump 1,000 J of thermal energy from the ambient source into the house. Again this is assuming the ideal, theoretical limit, ΔStot = 0.
Refrigerator
A refrigerator is also a heat pump. However, with a refrigerator we are interested in how much energy we can pump out of the low temperature reservoir (the contents of the refrigerator) at a certain cost, rather than how much we can deliver to the high temperature reservoir. Here we ask how much energy we must purchase to pump 100 J out of the refrigerator.
Clear ΔEwt memory: ΔEwt := ΔEwt
Input parameters: ΔEcold := -100 J Thot := 300 K Tcold := 280 K
$\begin{pmatrix} \Delta E_{hot} + \Delta E_{cold} + \Delta E_{wt} = 0\ \frac{ \Delta E_{hot}}{T_{hot}} + \frac{ \Delta E_{cold}}{T_{cold}} = 0 \end{pmatrix} |_{float,4}^{solve, \begin{pmatrix} \Delta E_{hot}\ \Delta E_{wt} \end{pmatrix}} \rightarrow (107.1 J~-7.143J)$
This result tells us something we already know but that doesn't obtrude on our senses: operating a refrigerator is not particularly expensive. One hundred joules can be pumped out of the refrigerator for a cost of just over 7 joules, under ideal circumstances.
11.16: Using Charles' Law to Determine Absolute Zero
A simple experiment to determine absolute zero using Charles' Law is illustrated below. An Erlenmeyer flask is weighed and placed in a boiling water bath and allowed to come to thermal equilibrium. The temperature is measured and found to be 99.0 °C. The flask is then submerged, inverted, in an ice bath (0.20 °C) and allowed to come to thermal equilibrium. The contraction of the air in the flask at the lower temperature draws water into the flask. The flask is carefully removed, the outside dried, and it is weighed. Finally, the flask is filled with water (as shown below on the right) and weighed. The mass measurements are converted to high and low temperature gas volumes and Charles' Law, V = a⋅T + b, is used to calculate absolute zero.
Convert mass measurements to high and low temperature gas volumes:
High temperature: Vh := $\frac{224.4 gm - 83.0 gm}{1 \frac{gm}{mL}}$ = 0.141 L; Th := 99.0 Celsius
Low temperature: Vl := $\frac{224.4 gm - 120.8 gm}{1 \frac{gm}{mL}}$ = 0.104 L; Tl := 0.20 Celsius
An algebraic method is used to calculate absolute zero. Three equations are required because there are three unknowns: a, b, and T0. Absolute zero is interpreted as the temperature at which the gas volume goes to zero. This is the last equation in the set of equations used to calculate a, b and T0.
$\begin{pmatrix} V_{h} = a T_{h} + b\ V_{l} = a T_{l} + b\ 0 = a T_{0} + b \end{pmatrix}|_{float, 4}^{solve, \begin{pmatrix} a\ b\ T_{0} \end{pmatrix}} \rightarrow [0.3826 \frac{mL}{Celsius}~103.5 mL~(-270.6) Celsius]$
The accepted value for absolute zero is -273.2 °C, so this result is in error by approximately 1%.
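For readers without Mathcad, the three-equation system reduces to a two-point fit; a minimal Python sketch (variable names are mine):

```python
# Two-point fit of V = a*T + b, then absolute zero from V(T0) = 0.
Vh, Th = 141.4, 99.0   # mL, Celsius (224.4 g - 83.0 g of water)
Vl, Tl = 103.6, 0.20   # mL, Celsius (224.4 g - 120.8 g of water)

a = (Vh - Vl) / (Th - Tl)  # slope, mL per Celsius
b = Vh - a * Th            # intercept, mL
T0 = -b / a                # temperature at which V = 0
print(a, b, T0)            # ~0.3826, ~103.5, ~-270.6 Celsius
```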
11.17: The Origin of KE = (3/2)RT
The Kinetic Molecular Theory of Gases
1. A gas consists of a collection of molecules in continuous random motion with varying speed.
2. The individual gas molecules are extremely small when compared with the volume of the container they occupy.
3. The molecules move in straight lines until they collide with each other or the walls of the container.
4. The molecules do not interact with each other or the walls of the container except during these collisions.
5. The laws of classical physics can be used to analyze the motion of the gas molecules and to calculate the bulk properties of the gas.
The basic features of the kinetic molecular theory (KMT) are illustrated in the figure below.
When the KMT is used to calculate the pressure of a gas the following expression results
$P = \frac{ \frac{1}{3} nM \overline{v^{2}}}{V} \nonumber$
where n is the number of moles of gas, M is the molar mass of the gas, $\overline{v^{2}}$ is the average of the velocity squared, and V is the volume of the container.
The ideal gas law summarizes the behavior of gases with respect to the macroscopic variables of temperature, pressure, volume and moles of gas.
$P = \frac{nRT}{V} \nonumber$
A comparison of equations (1) and (2), that is, a comparison between the results of a theoretical analysis using the KMT and the actual experimental behavior of gases, reveals that $\frac{1}{3} M \overline{v^{2}} = RT$; multiplying both sides by $\frac{3}{2}$ gives
$\frac{1}{2} M \overline{v^{2}} = \frac{3}{2}RT \nonumber$
Thus, the proportionality of the average molecular kinetic energy to the absolute temperature is a conclusion drawn by comparing a theoretical expression with an empirical equation which summarizes macroscopic gas facts.
The resulting equation, (3), provides an interpretation of temperature in terms of molecular motion.
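As a worked example of equation (3) (not part of the original text): solving for the root-mean-square speed gives $v_{rms} = \sqrt{3RT/M}$, which is easily evaluated, here for nitrogen at 300 K.

```python
import math

# RMS speed of N2 at 300 K from (1/2) M <v^2> = (3/2) RT.
R, T, M = 8.314, 300.0, 0.0280   # J/(mol K), K, kg/mol
v_rms = math.sqrt(3 * R * T / M)
print(f"v_rms = {v_rms:.0f} m/s")  # ~517 m/s
```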
11.18: Cosmic Background Radiation
The cosmic background radiation fills all space and is a relic from the "big bang" that created the universe approximately 18 billion years ago. The data1 shown below, spectral brightness2 as a function of wave number, was recorded (1989) by the Cosmic Background Explorer satellite (COBE). Below the data is fit with the Planck blackbody radiation equation to determine the cosmic background temperature.
Define the fundamental constants h, k, and c.
h := 6.62608 ⋅ 10$^{-34}$   k := 1.380622 ⋅ 10$^{-23}$   c := 2.99792458 ⋅ 10$^{8}$
$\nu_{i}$ := 100 ⋅ $\nu_{i}$   $B_{i}$ := 10$^{-18}$ ⋅ $B_{i}$
Provide a seed value for the background temperature: T := 10
Define the spectral brightness equation ($\nu$ is the wavenumber in m$^{-1}$, so the exponent is $hc \nu / kT$):
$F(\nu , T) := 2 \cdot h \cdot \nu ^{3} \cdot c \cdot \frac{1}{(exp( \frac{ h \cdot c \cdot \nu}{k \cdot T}) - 1)} \nonumber$
SSD stands for sum of the square of the deviations between data and the equation that is being fit to the data.
$SSD(T) = \sum_{i} (B_{i} - F (\nu _{i}, T))^{2} \nonumber$
Given   SSD(T) = 0   T := Minerr(T)   T = 2.728
Thus the best fit to the data is obtained with a cosmic background temperature of 2.728 K.
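The fit is easy to reproduce without Mathcad. The Python sketch below uses scipy's curve_fit on a subset of the tabulated data (the full table follows below), with the brightness expressed in the tabulated units of $10^{-18}$:

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.62608e-34, 1.380622e-23, 2.99792458e8

# Subset of the COBE table: wavenumber (cm^-1 converted to m^-1) and
# spectral brightness in units of 1e-18, as tabulated below.
nu = 100 * np.array([2.27, 4.08, 5.45, 7.26, 9.08, 11.80, 14.52, 17.70, 21.33])
B = np.array([2.0110, 3.5503, 3.8477, 3.3773, 2.4957, 1.2973, 0.5749, 0.1945, 0.0459])

def F(nu, T):
    # Spectral brightness in units of 1e-18; note the factor c in the exponent.
    return 1e18 * 2 * h * nu**3 * c / (np.exp(h * c * nu / (k * T)) - 1)

(T_fit,), _ = curve_fit(F, nu, B, p0=[10.0])
print(f"T = {T_fit:.3f} K")  # ~2.728 K
```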
Notes:
1. Data taken from: S. Bluestone, JCE 78, 215-218 (2001).
2. The relationship between spectral brightness and Planck's radiation density function is:
$B( \nu , T) = \frac{c}{4 \cdot \pi} \cdot \rho ( \nu, T)$
n := 43 i := 1 .. n
The tabulated wavenumbers $\nu_{i}$ (cm$^{-1}$) and spectral brightnesses $B_{i}$ (in the units of $10^{-18}$ scaled above) are:
$\begin{array}{r r} \nu_{i} & B_{i} \ 2.27 & 2.0110 \ 2.72 & 2.5003 \ 3.18 & 2.9369 \ 3.63 & 3.2858 \ 4.08 & 3.5503 \ 4.54 & 3.7316 \ 4.99 & 3.8269 \ 5.45 & 3.8477 \ 5.90 & 3.8027 \ 6.35 & 3.7025 \ 6.81 & 3.5551 \ 7.26 & 3.3773 \ 7.71 & 3.1752 \ 8.17 & 2.9535 \ 8.62 & 2.7281 \ 9.08 & 2.4957 \ 9.53 & 2.2721 \ 9.98 & 2.0552 \ 10.44 & 1.8483 \ 10.89 & 1.6488 \ 11.34 & 1.4672 \ 11.80 & 1.2973 \ 12.25 & 1.1438 \ 12.71 & 1.0019 \ 13.16 & 0.8771 \ 13.61 & 0.7648 \ 14.07 & 0.6631 \ 14.52 & 0.5749 \ 14.97 & 0.4965 \ 15.43 & 0.4265 \ 15.88 & 0.3669 \ 16.34 & 0.3136 \ 16.79 & 0.2684 \ 17.24 & 0.2287 \ 17.70 & 0.1945 \ 18.15 & 0.1657 \ 18.61 & 0.1396 \ 19.06 & 0.1185 \ 19.51 & 0.1003 \ 19.97 & 0.0846 \ 20.42 & 0.0717 \ 20.87 & 0.0587 \ 21.33 & 0.0459 \end{array}$
11.19: Age of the Elements
Analysis of cosmic microwave background radiation (CMBR) obtained by the Planck satellite telescope has added 80 million years to the age of the universe, making the best estimate of the current age 13.8 billion years. This tutorial deals with the more modest task of estimating the age of the elements found in the earth's crust.
As is well-known from examples such as carbon-14 dating, radioactivity can be used to tell time. Radioactive isotopes decay exponentially according to the following equation.
$A_{t} = A_{0} \left( \frac{1}{2} \right)^{ \frac{t}{t_{1/2}}}$
The amount of a radioactive isotope remaining at time $t$ equals the original amount times 1/2 raised to the number of half-lives ($t/t_{1/2}$) that have elapsed. Using this equation, plausible assumptions, some information about the uranium isotopes, and rudimentary math, the time elapsed since the isotopes were produced is calculated below.
Assume that 238U and 235U were produced in equal amounts originally. Today the 238U/235U ratio is 140/1. The half-lives of 238U and 235U are 4.5 billion years and 800 million years, respectively. How long ago were these isotopes synthesized?
$\begin{bmatrix} U_{238}= \left( \frac{1}{2} \right)^{ \frac{t}{4.5 \cdot 10^{9} year}}\ U_{235} = \left( \frac{1}{2} \right)^{ \frac{t}{0.8 \cdot 10^{9} year}}\ U_{238} = 140 \cdot U_{235} \end{bmatrix} |_{float,3}^{solve, \begin{pmatrix} t\ U_{238}\ U_{235} \end{pmatrix}} \rightarrow (0.694 \cdot 10^{10}~year~~0.344~~0.245 \cdot 10^{-2})$
According to this rudimentary model, the uranium in the earth's crust is about half the age of the universe.
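The solve block also reduces to one line of algebra: taking logarithms of the ratio of the two decay laws gives $\ln 140 = t \ln 2 \left( \frac{1}{0.8 \cdot 10^{9}} - \frac{1}{4.5 \cdot 10^{9}} \right)$. A Python check:

```python
import math

# ln 140 = t * ln 2 * (1/t_half(235) - 1/t_half(238)), half-lives in years
t = math.log(140) / (math.log(2) * (1 / 0.8e9 - 1 / 4.5e9))
print(f"t = {t:.3g} years")  # ~6.94e9 years
```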
1.01: Basic Assumptions of Statistical Thermodynamics
Thermodynamics Based on Statistical Mechanics
Phenomenological thermodynamics describes relations between observable quantities that characterize macroscopic material objects. We know that these objects consist of a large number of small particles, molecules or atoms, and, for all we know, these small particles adhere to the laws of quantum mechanics and often in good approximation to the laws of Newtonian mechanics. Statistical mechanics is the theory that explains macroscopic properties, not only thermodynamic state functions, by applying probability theory to the mechanic equations of motion for a large ensemble of systems of particles. In this lecture course we are concerned with the part of statistical mechanics that relates to phenomenological thermodynamics.
In spite of its name, phenomenological (equilibrium) thermodynamics is essentially a static theory that provides an observational, macroscopic description of matter. The underlying mechanical description is dynamical and microscopic, but it is observational only for systems consisting of a small number of particles. To see this, we consider a system of $N$ identical classical point particles that adhere to Newton’s equations of motion.
Concept $1$: Newtonian Equations of Motion
With particle mass $m$, Cartesian coordinates $q_i$ $(i = 1, 2, \ldots, 3N)$ and velocity coordinates $\dot{q}_i$, a system of $N$ identical classical point particles evolves by
\begin{align} & m \frac{\mathrm{d}^2q_i}{\mathrm{d}t^2} = -\frac{\partial}{\partial{q_i}} V\left(q_1, \ldots, q_{3N}\right) \ , \label{eq:Newtonian_eqm}\end{align}
where $V(q_1, \ldots, q_{3N})$ is the potential energy function.
Notation $1$
The dynamical state or microstate of the system at any instant is defined by the $6N$ Cartesian and velocity coordinates, which span the dynamical space of the system. The curve of the system in dynamical space is called a trajectory.
The concept extends easily to atoms with different masses $m_i$. If we could, at any instant, precisely measure all $6N$ dynamical coordinates, i.e., spatial coordinates and velocities, we could precisely predict the future trajectory. The system as described by the Newtonian equations of motions behaves deterministically.
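A minimal illustration of this determinism (a Python sketch of my own, not from the text): integrating Equation \ref{eq:Newtonian_eqm} for a single particle in a one-dimensional harmonic potential. The trajectory is fixed entirely by the initial microstate.

```python
# Deterministic evolution of one particle in V(q) = 0.5*q^2,
# integrated with velocity-Verlet steps.
def force(q):
    return -q          # -dV/dq for V = q^2/2

m, dt = 1.0, 0.01
q, v = 1.0, 0.0        # the microstate (q, v) at t = 0

for _ in range(1000):  # the trajectory is fixed by (q, v) at t = 0
    a = force(q) / m
    q += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + force(q) / m) * dt

print(q, v)            # rerunning always gives exactly the same result
```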
For any system that humans can see and handle directly, i.e., without complicated technical devices, the number $N$ of particles is too large (at least of the order of $10^{18}$) for such complete measurements to be possible. Furthermore, for such large systems even tiny measurement errors would make the trajectory prediction useless after a rather short time. In fact, atoms are quantum objects and the measurements are subject to the Heisenberg uncertainty principle, and even the small uncertainty introduced by that would make a deterministic description futile.
We can only hope for a theory that describes what we can observe. The number of observational states or macrostates that can be distinguished by the observer is much smaller than the number of dynamical states. Two classical systems in the same dynamical state are necessarily also in the same observational state, but the converse is not generally true. Furthermore, the observational state also evolves with time, but we have no equations of motion for this state (but see Section [Liouville]). In fact we cannot have deterministic equations of motion for the observational state of an individual system, precisely because the same observational state may correspond to different dynamical states that will follow different trajectories.
Still we can make predictions, only these predictions are necessarily statistical in nature. If we consider a large ensemble of identical systems in the same observational state we can even make fairly precise predictions about the outcome. Penrose gives the example of a woman at a time when ultrasound diagnosis can detect pregnancy, but not the sex of the fetus. The observational state is pregnancy; the two possible dynamical states are on the path to a boy or to a girl. We have no idea what will happen in the individual case, but if the same diagnosis is performed on a million women, we know that about 51-52% will give birth to a boy.
How then can we derive stable predictions for an ensemble of systems of molecules? We need to consider probabilities of the outcome and these probabilities will become exact numbers in the limit where the number $N$ of particles (or molecules) tends to infinity. The theory required for computing such probabilities will be treated in Chapter .
Note
Our current usage of the term ensemble is loose. We will devote the whole Chapter to clarifying what types of ensembles we use in computations and why.
The Markovian Postulate
There are different ways for defining and interpreting probabilities. For abstract discussions and mathematical derivations the most convenient definition is the one of physical or frequentist probability.
Definition: Physical Probability
Given a reproducible trial $\mathcal{T}$ of which $A$ is one of the possible outcomes, the physical probability $P$ of the outcome $A$ is defined as
\begin{align} & P(A|\mathcal{T}) = \lim\limits_{\mathcal{N} \rightarrow \infty}{\frac{n(A,\mathcal{N},\mathcal{T})}{\mathcal{N}}} \label{eq:phys_prob}\end{align}
where $n(A,\mathcal{N},\mathcal{T})$ is the number of times the outcome $A$ is observed in the first $\mathcal{N}$ trials.
A trial $\mathcal{T}$ conforming to this definition is statistically regular, i.e., the limit exists and is the same for all infinite series of the same trial. If the physical probability is assumed to be a stable property of the system under study, it can be measured with some experimental error. This experimental error has two contributions: (i) the actual error of the measurement of the quantity $A$ and (ii) the deviation of the experimental frequency of observing $A$ from the limit defined in Equation \ref{eq:phys_prob}. Contribution (ii) arises from the experimental number of trials $\mathcal{N}$ not being infinite.
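Contribution (ii) is easy to visualize in a simulation. A small Python sketch of Equation \ref{eq:phys_prob} for a binary outcome with an assumed probability p:

```python
import random

# Frequentist probability: the observed frequency n(A, N, T)/N of an
# outcome A with true probability p approaches p as N grows.
def frequency(N, p=0.5):
    return sum(random.random() < p for _ in range(N)) / N

for N in (10, 1_000, 100_000):
    print(N, frequency(N))  # scatter around p shrinks with increasing N
```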
We need some criterion that tells us whether $\mathcal{T}$ is statistically regular. For this we split the trial into a preparation period, an evolution period, and the observation itself (Figure $1$). The evolution period is a waiting time during which the system is under controlled conditions. Together with the preparation period it needs to fulfill the Markovian postulate.
A trial $\mathcal{T}$ that invariably ends up in the observational state $\mathcal{O}$ of the system after the preparation stage is called statistically regular. The start of the evolution period is assigned a time $t = 0$.
Note that the system can be in different observational states at the time of observation; otherwise the postulate would correspond to a trivial experiment. The Markovian postulate is related to the concept of a Markovian chain of events. In such a chain the outcome of the next event depends only on the current state of the system, but not on states that were encountered earlier in the chain. Processes that lead to a Markovian chain of events can thus be considered as memoryless.
1.02: Phase Space
Hamiltonian Equations of Motion
The Newtonian equations of motion are very convenient for atomistic molecular dynamics (MD) computations. Trajectories encountered during such MD simulations can be analyzed statistically in terms of thermodynamic quantities, such as free energy. However, for analyzing evolution of the system in terms of spectroscopic properties, the Newtonian description is very inconvenient. Since spectroscopic measurements can provide the most stringent tests of theory, we shall use the Hamiltonian formulation of mechanics in the following. This formulation is particularly convenient for molecules that also have rotational degrees of freedom. For that, we replace the velocity coordinates by momentum coordinates $p_j = m_j \dot{q}_j$, where index $j$ runs over all atoms and for each atom over the three Cartesian coordinates. Furthermore, we assume $M$ identical molecules, with each of them having $f$ degrees of freedom, so that the total number of degrees of freedom is $F = f M$. Such a system can be described by $2F$ differential equations
Concept $1$: Hamiltonian Equations of Motion
With the single-molecule Hamiltonian $\mathcal{H}(\mathbf{p}_i,\mathbf{q}_i)$ the equations of motion for $M$ non-interacting identical molecules with $f$ degrees of freedom for each molecule read
\begin{align} & \frac{\mathrm{d}\mathbf{q}_i}{\mathrm{d}t} = \frac{\partial \mathcal{H}\left(\mathbf{p}_i,\mathbf{q}_i\right)}{\partial\mathbf{p_i}} \ & \frac{\mathrm{d}\mathbf{p}_i}{\mathrm{d}t} = -\frac{\partial \mathcal{H}\left(\mathbf{p}_i,\mathbf{q}_i\right)}{\partial\mathbf{q_i}} \ , \label{eq:Hamiltonian_eqm}\end{align}
where $i = 1 \ldots M$. Each of the dynamical variables $\mathbf{q}_i$ and $\mathbf{p}_i$ is a vector of length $f$. The $2fM$ dynamical variables span the phase space.
Definition: Phase Space
Phase space is the space where microstates of a system reside. Sometimes the term is used only for problems that can be described in spatial and momentum coordinates, sometimes for all problems where some type of a Hamiltonian equation of motion applies. Sometimes the term state space is used for the space of microstates in problems that cannot be described by (only) spatial and momentum coordinates.
If the molecule is just a single atom, we have only $f=3$ translational degrees of freedom and the Hamiltonian is given by
$\mathcal{H}\left(\mathbf{p}_i,\mathbf{q}_i\right) = \frac{1}{2m} \left( p_{x,i}^2 + p_{y,i}^2 + p_{z,i}^2 \right),$
describing translation. For molecules with $n$ atoms, three of the $f = 3n$ degrees of freedom are translational, two or three are rotational for linear and non-linear molecules, respectively, and the remaining $3n-5$ or $3n-6$ degrees of freedom are vibrational.
The Liouville Equation
Our observations do not allow us to specify phase space trajectories, i.e. the trajectory of microstates for a single system. Instead, we consider an ensemble of identical systems that all represent the same (observational) macrostate $\mathcal{O}$ but may be in different microstates. At a given time we can characterize such an ensemble by a probability density $\rho(\mathbf{p},\mathbf{q},t)$ in phase space, where $\mathbf{p}$ and $\mathbf{q}$ are the vectors of all momentum and spatial coordinates in the system, respectively. We are interested in an equation of motion for this probability density $\rho$, which corresponds to the full knowledge that we have on the system. This equation can be derived from an integral representation of $\rho$ and the Hamiltonian equations of motion.
Theorem $1$: Liouville Equation
The probability density $\rho(\mathbf{p},\mathbf{q},t)$ in phase space evolves in time according to
\begin{align} & \frac{\partial \rho}{\partial t} = \sum_i \left( \frac{\partial \rho}{\partial p_i} \frac{\partial \mathcal{H}}{\partial q_i} - \frac{\partial \rho}{\partial q_i} \frac{\partial \mathcal{H}}{\partial p_i} \right) \ . \label{eq:Liouville_eqm}\end{align}
With the Poisson brackets
\begin{align} & \left\{ u,v \right\} = \sum_i \left[ \frac{\partial u}{\partial p_i} \frac{\partial v}{\partial q_i} - \frac{\partial u}{\partial q_i} \frac{\partial v}{\partial p_i} \right] \ . \label{eq:Poisson_brackets}\end{align}
this Liouville equation can be expressed as
\begin{align} & \frac{\partial \rho}{\partial t} = -\left\{ \mathcal{H}, \rho \right\} \ . \label{eq:Liouville_short}\end{align}
For the probability density along a phase space trajectory, i.e., along a trajectory that is taken by microstates, we find
$\frac{\mathrm{d}}{\mathrm{d}t} \rho \left( q(t), p(t),t \right) = 0 \ .$
If we consider a uniformly distributed number $\mathrm{d}N$ of ensemble members in a volume element $\mathrm{d}\Gamma_0$ in phase space at time $t=0$ and ask about the volume element $\mathrm{d}\Gamma$ in which these ensemble members are distributed at a later time, we find
$\mathrm{d}\Gamma = \mathrm{d}\Gamma_0 \ .$
This is the Liouville theorem of mechanics.
Quantum Systems
Hamiltonian mechanics can be applied to quantum systems, with the Hamiltonian equations of motion being replaced by the time-dependent Schrödinger equation. The probability density in phase space is replaced by the density operator $\widehat{\rho}$ and the Liouville equation by the Liouville-von-Neumann equation
$\frac{\partial \widehat{\rho}}{\partial t} = -\frac{i}{\hbar} \left[ \mathcal{\widehat{H}}, \widehat{\rho} \right] \ .$
In quantum mechanics, observables are represented by operators $\widehat{A}$. The expectation value of an observable can be computed from the density operator that represents the distribution of the ensemble in phase space,
$\langle \widehat{A} \rangle = \mathrm{Trace}\left( \widehat{\rho} \widehat{A} \right) \ .$
We note that the Heisenberg uncertainty relation does not introduce an additional complication in statistical mechanics. Determinism had been lost before and the statistical character of the measurement on an individual system is unproblematic, as we seek only statistical predictions for a large ensemble. In the limit of an infinite ensemble, $N \rightarrow \infty$, there is no uncertainty and the expectation values of incompatible observables are well defined and can be measured simultaneously. Such an infinitely large system is not perturbed by the act of observing it. The only difference between the description of classical and quantum systems arises from their statistical behavior on permutation of the coordinates of two particles, see Section [section:quantum_statistics].
1.03: Statistical Mechanics Based on Postulates
The Penrose Postulates
Penrose has made the attempt to strictly specify what results can be expected from statistical mechanics if the theory is based on a small number of plausible postulates.
1. Macroscopic physical systems are composed of molecules that obey classical or quantum mechanical equations of motion (dynamical description of matter).
2. An observation on such a macroscopic system can be idealized as an instantaneous, simultaneous measurement of a set of dynamical variables, each of which takes the values 1 or 0 only (observational description of matter).
3. A measurement on the system has no influence whatsoever on the outcome of a later measurement on the same system (compatibility).
4. The Markovian postulate. (Concept [concept:Markovian])
5. Apart from the Bose and Fermi symmetry conditions for quantum systems, the whole phase space can, in principle, be accessed by the system (accessibility).
After the discussion above, only the second of these postulates may not immediately appear plausible. In the digital world of today it appears natural enough: Measurements have resolution limits and their results are finally represented in a computer by binary numbers, which can be taken to be the dynamical variables in this postulate.
Implications of the Penrose Postulates
Entropy is one of the central quantities of thermodynamics, as it tells in which direction a spontaneous process in an isolated system will proceed. For closed systems that can exchange heat and work with their environment, such predictions on spontaneous processes are based on free energy, of which the entropy contribution is usually an important part. To keep such considerations consistent, entropy must have two fundamental properties
1. If the system does not exchange energy with its environment, its entropy cannot decrease. (non-decrease).
2. The entropy of two systems considered together is the sum of their separate entropies. (additivity).
Based on the Penrose postulates it can be shown that the definition of Boltzmann entropy (Chapter ) ensures both properties, but that statistical expressions for entropy ensure only the non-decrease property, not in general the additivity property. This appears to leave us in an inconvenient situation. However, it can also be shown that for large systems, in the sense that the number of macrostates is much smaller than the number of microstates, the term that quantifies non-additivity is negligibly small compared to the total entropy. The problem is thus more a mathematical beauty spot than a serious difficulty in application of the theory.
Discrete Random Variables
Consider a trial $\mathcal{T}$ where the observation is a measurement of the $z$ component $\hbar m_S$ of spin angular momentum of a spin $S = 5/2$. There are just six possible outcomes (events) that can be labeled with the magnetic spin quantum number $m_S$ or indexed by integer numbers 1, 2, $\ldots$ 6. In general, the probabilities of the six possible events will differ from each other. They will depend on preparation and may depend on evolution time before the observation. To describe such situations, we define a set of elementary events
$A = \left\{ a_j \right\} \ ,$
where in our example index $j$ runs from 1 to 6, whereas in general it runs from 1 to the number $N_A$ of possible events. Each of the events is assigned a probability $0 \leq P(a_j) \leq 1$. Impossible events (for a given preparation) have probability zero and a certain event has probability 1. Since one and only one of the events must happen in each trial, the probabilities are normalized, $\sum_j^{N_A} P(a_j) = 1$. A simplified model of our example trial is the rolling of a die. If the die is fair, we have the special situation of a uniform probability distribution, i.e., $P(a_j) = 1/6$ for all $j$.
A set of random events with their associated probabilities is called a random variable. If the number of random events is countable, the random variable is called discrete. In a computer, numbers can be assigned to the events, which makes the random variable a random number. A series of trials can then be simulated by generating a series of $\mathcal{N}$ pseudo-random numbers that assign the events observed in the $\mathcal{N}$ trials. Such simulations are called Monte Carlo simulations. Pseudo-random numbers obtained from a computer function need to be adjusted so that they reproduce the given or assumed probabilities of the events. [concept:random_variable]
Using the Matlab function rand, which provides uniformly distributed random numbers in the open interval $(0,1)$, write a program that simulates throwing a die with six faces. The outer function should have trial number $\mathcal{N}$ as an input and a vector of the numbers of encountered ones, twos, ... and sixes as an output. It should be based on an inner function that simulates a single throw of the die. Test the program by determining the difference from the expectation $P(a_j) = 1/6$ for ever larger numbers of trials.
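The exercise asks for Matlab; an equivalent Python sketch of one possible solution (numpy's rand playing the role of Matlab's rand) is:

```python
import numpy as np

def throw_die():
    # Inner function: one throw of a fair six-sided die (faces 1..6).
    return int(np.floor(6 * np.random.rand())) + 1

def simulate(N):
    # Outer function: counts of ones, twos, ..., sixes in N trials.
    counts = np.zeros(6, dtype=int)
    for _ in range(N):
        counts[throw_die() - 1] += 1
    return counts

for N in (60, 6_000, 600_000):
    deviation = np.abs(simulate(N) / N - 1 / 6).max()
    print(N, deviation)  # deviation from P = 1/6 shrinks roughly as 1/sqrt(N)
```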
Multiple Discrete Random Variables
For two sets of events $A$ and $B$ and their probabilities, we define a joint probability $P(a_j,b_k)$ that is the probability of observing both $a_j$ and $b_k$ in the same trial. An example is the throwing of two dice, one black and one red, and asking about the probability that the black die shows a 2 and the red die a 3. A slightly more complicated example is the measurement of the individual $z$ components of spin angular momentum of two coupled spins $S_\mathrm{A} = 5/2$ and $S_\mathrm{B} = 5/2$. Like individual probabilities, joint probabilities fall in the closed interval $[0,1]$. Joint probabilities are normalized,
$\sum_a \sum_b P(a,b) = 1 \ .$
Note that we have introduced a brief notation that suppresses indices $j$ and $k$. This notation is often encountered because of its convenience in writing.
If we know the probabilities $P(a,b)$ for all $N_A \cdot N_B$ possible combinations of the two events, we can compute the probability of a single event, for instance $a$,
$P_A(a) = \sum_b P(a,b) \ ,$
where $P_A(a)$ is the marginal probability of event $a$.
The unfortunate term ’marginal’ does not imply a small probability. Historically, these probabilities were calculated in the margins of probability tables.
Another quantity of interest is the conditional probability $P(a|b)$ of an event $a$, provided that $b$ has happened. For instance, if we call two cards from a full deck, the probability of the second card being a Queen is conditional on the first card having been a Queen. With the definition for the conditional probability we have
\begin{align} P(a,b) & = P(a|b) P_B(b) \ & = P(b|a) P_A(a) \ .\end{align}
Theorem $1$: Bayes’ theorem
If the marginal probability of event $b$ is not zero, the conditional probability of event $a$ given $b$ is
\begin{align} & P(a|b) = \frac{P(b|a) P_A(a)}{P_B(b)} \ . \label{eq:Bayes_Theorem}\end{align}
Bayes’ theorem is the basis of Bayesian inference, where the probability of proposition $a$ is sought given prior knowledge (short: the prior) $b$. Often Bayesian probability is interpreted subjectively, i.e., different persons, because they have different prior knowledge $b$, will come to different assessments for the probability of proposition $a$. This interpretation is incompatible with theoretical physics, where, quite successfully, an objective reality is assumed. Bayesian probability theory can also be applied with an objective interpretation in mind and is nowadays used, among other things, in structural modeling of biomacromolecules to assess agreement of a model (the proposition) with experimental data (the prior).
In experimental physics, biophysics, and physical chemistry, Bayes’ theorem can be used to assign experimentally informed probabilities to different models for reality. For example, assume that a theoretical modeling approach, for instance an MD simulation, has provided a set of conformations $A = \{ a_j \}$ of a protein molecule and associated probabilities $P_A(a_j)$. The probabilities are related, via the Boltzmann distribution, to the free energies of the conformations (this point is discussed later in the lecture course). We further assume that we have a measurement $B$ with output $b_k$ and we know the marginal probability $P_B(b)$ of encountering this output for a random set of conformations of the protein molecule. Then we need only a physical model that provides the conditional probabilities $P(b_k|a_j)$ of measuring $b_k$ given the conformations $a_j$ and can compute the probability $P(a_j|b_k)$ that the true conformation is $a_j$, given the result of our measurement, via Bayes’ theorem (Equation \ref{eq:Bayes_Theorem}). This procedure can be generalized to multiple measurements. The required $P(b_k|a_j)$ depend on measurement errors. The approach allows for combining possibly conflicting modeling and experimental results to arrive at a ’best estimate’ for the distribution of conformations.
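In its simplest form, Equation \ref{eq:Bayes_Theorem} is a single line of code; the numbers in the example below are made up purely for illustration:

```python
def posterior(P_b_given_a, P_a, P_b):
    """Bayes' theorem: P(a|b) = P(b|a) * P_A(a) / P_B(b)."""
    return P_b_given_a * P_a / P_b

# Illustrative (assumed) numbers: a conformation with prior probability 0.01
# that predicts the measured output with probability 0.9, where that output
# has marginal probability 0.05 over all conformations.
print(posterior(0.9, 0.01, 0.05))  # 0.18
```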
The events associated with two random variables can occur completely independent of each other. This is the case for throwing two dice: the number shown on the black die does not depend on the number shown on the red die. Hence, the probability to observe a 2 on the black and a 3 on the red die is $(1/6)\cdot(1/6) = 1/36$. In general, joint probabilities of independent events factorize into the individual (or marginal) probabilities, which leads to huge simplifications in computations. In the example of two coupled spins $S_\mathrm{A} = 5/2$ and $S_\mathrm{B} = 5/2$ the two random variables $m_{S,\mathrm{A}}$ and $m_{S,\mathrm{B}}$ may or may not be independent. This is decided by the strength of the coupling, the preparation of trial $\mathcal{T}$, and the evolution time $t$ before observation.
If two random variables are independent, the joint probability of two associated events is the product of the two marginal probabilities,
\begin{align} & P(a,b) = P_A(a) P_B(b) \ .\end{align}
As a consequence, the conditional probability $P(a|b)$ equals the marginal probability of $a$ (and vice versa),
\begin{align} & P(a|b) = P_A(a) \ .\end{align}
[concept:independent_variables]
For a set of more than two random variables two degrees of independence can be established, a weak type of pairwise independence and a strong type of mutual independence. The set is mutually independent if the marginal probability distribution in any subset, i.e. the set of marginal probabilities for all event combinations in this subset, is given by the product of the corresponding marginal distributions for the individual events.2 This corresponds to complete independence. Weaker pairwise independence implies that the marginal distributions for any pair of random variables are given by the product of the two corresponding distributions. Note that even weaker independence can exist within the set, but not throughout the set. Some, but not all pairs or subsets of random variables can exhibit independence.
Another important concept for multiple random variables is whether or not they are distinguishable. In the example above we used a black and a red die to specify our events. If both dice would be black, the event combinations $(a_2,b_3)$ and $(a_3,b_2)$ would be indistinguishable and the corresponding composite event of observing a 2 and a 3 would have a probability of $1/18$, i.e. the product of the probability $1/36$ of the basic composite event with its multiplicity 2. In general, if $n$ random variables are indistinguishable, the multiplicity equals the number of permutations of the $n$ variables, which is $n! = 1\cdot 2\cdots (n-1)\cdot n$.
Functions of Discrete Random Variables
We consider an event $g$ that depends on two other events $a$ and $b$. For example, we ask for the probability that the sum of the numbers shown by the black and red die is $g$, where $g$ can range from 2 to 12, given that we know the probabilities $P(a,b)$, which in our example all have the value 1/36. In general, the probability distribution of random variable $G$ can be computed by
$P_G(g) = \sum_a \sum_b \delta_{g,G(a,b)} P(a,b) \ , \label{eq:fct_rand_var}$
where $G(a,b)$ is an arbitrary function of $a$ and $b$ and the Kronecker delta $\delta_{g,G(a,b)}$ assumes the value one if $g = G(a,b)$ and zero otherwise. In our example, $g = G(a,b) = a+b$ will assume the value of 5 for the event combinations $(1,4),(2,3),(3,2),(4,1)$ and no others. Hence, $P_G(5) = 4/36 = 1/9$. There is only a single combination for $g=2$, hence $P_G(2) = 1/36$, and there are 6 combinations for $g=7$, hence $P_G(7) = 1/6$. Although the probability distributions for the individual random numbers $A$ and $B$ are uniform, the one for $G$ is not. It peaks at the value of $g=7$ that has the most realizations. Such peaking of probability distributions that depend on multiple random variables occurs very frequently in statistical mechanics. The peaks tend to become the sharper the larger the number of random variables that contribute to the sum. If this number $N$ tends to infinity, the distribution of the sum $g$ is so sharp that the distribution width (to be specified below) is smaller than the error in the measurement of the mean value $g/N$ (see Section [section:prob_dist_sum]). This effect is the very essence of statistical thermodynamics: Although quantities for a single molecule may be broadly distributed and unpredictable, the mean value for a large number of molecules, let’s say $10^{18}$ of them, is very well defined and perfectly predictable.
In a numerical computer program, Equation \ref{eq:fct_rand_var} for only two random variables can be implemented very easily by a loop over all possible values of $g$ with inner loops over all possible values of $a$ and $b$. Inside the innermost loop, $G(a,b)$ is computed and compared to loop index $g$ to add or not add $P(a,b)$ to the bin corresponding to value $g$. Note however that such an approach does not carry to large numbers of random variables, as the number of nested loops increases with the number of random variables and computation time thus increases exponentially. Analytical computations are simplified by the fact that $\delta_{g,G(a,b)}$ usually deviates from zero only within certain ranges of the summation indexes $j$ (for $a$) and $k$ (for $b$). The trick is then to find the proper combinations of index ranges.
Compute the probability distribution for the sum $g$ of the numbers shown by two dice in two ways. First, write a computer program using the approach sketched above. Second, compute the probability distribution analytically by making use of the uniform distribution for the individual events ($P(a,b) = 1/36$ for all $a,b$. For this, consider index ranges that lead to a given value of the sum $g$.3
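For the two-dice case, the numerical approach sketched above amounts to the following short Python program (one possible solution of the first part of the exercise):

```python
from fractions import Fraction

# Distribution of g = a + b for two fair dice via Eq. (fct_rand_var):
# sum P(a,b) over all (a,b) with G(a,b) = g, where P(a,b) = 1/36.
P_G = {g: Fraction(0) for g in range(2, 13)}
for a in range(1, 7):
    for b in range(1, 7):
        P_G[a + b] += Fraction(1, 36)

print(P_G[2], P_G[5], P_G[7])  # 1/36, 1/9, 1/6
assert sum(P_G.values()) == 1  # normalization
```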
Discrete Probability Distributions
In most cases random variables are compared by considering the mean values and widths of their probability distributions. As a measure of the width, the standard deviation $\sigma$ of the values from the mean value is used, which is the square root of the variance $\sigma^2$. The concept can be generalized by considering functions $F(A)$ of the random variable. In the following expressions, $F(A) = A$ provides the mean value and standard deviation of the original random variable $A$.
Theorem $1$: Mean value and standard deviation
For any function $F(A)$ of a random variable $A$, the mean value $\langle F \rangle$ is given by,
\begin{align} & \langle F \rangle = \sum_a F(a) P_A(a) \ .\end{align}
The standard deviation, which characterizes the width of the distribution of the function values $F(a)$, is given by,
\begin{align} & \sigma = \sqrt{\sum_a \left( F(a) - \langle F \rangle \right)^2 P_A(a)} \ .\end{align}
The mean value is the first moment of the distribution, with the $n^\mathrm{th}$ moment being defined by
$\langle F^n \rangle = \sum_a F^n(a) P_A(a) \ .$
The $n^\mathrm{th}$ central moment is
$\langle \left( F - \langle F \rangle \right)^n \rangle = \sum_a \left( F(a) - \langle F \rangle \right)^n P_A(a) \ .$
For the variance, which is the second central moment, we have
$\sigma^2 = \langle F^2 \rangle - \langle F \rangle^2 \ .$
Assume that we know the mean values for functions $F(A)$ and $G(B)$ of two random variables as well as the mean value $\langle F G \rangle$ of their product, which we can compute if the joint probability function $P(a,b)$ is known. We can then compute a correlation function
$R_{FG} = \langle F G \rangle - \langle F \rangle \langle G \rangle \ ,$
which takes the value of zero, if $F$ and $G$ are independent random numbers.
Exercise $1$
Compute the probability distribution for the normalized sum $g/M$ of the numbers obtained on throwing $M$ dice in a single trial. Start with $M=1$ and proceed via $M=10, 100, 1000$ to $M = 10000$. Find out how many Monte Carlo trials $\mathcal{N}$ you need to guess the converged distribution. What is the mean value $\langle g/M \rangle$? What is the standard deviation $\sigma_g$? How do they depend on $\mathcal{N}$?
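A possible Monte Carlo sketch for this exercise (Python; the trial number is kept modest so the script runs quickly):

```python
import numpy as np

rng = np.random.default_rng()

def mean_of_dice(M, n_trials):
    # g/M for each of n_trials Monte Carlo trials: mean of M dice throws.
    throws = rng.integers(1, 7, size=(n_trials, M), dtype=np.int8)
    return throws.mean(axis=1)

for M in (1, 10, 100, 1_000, 10_000):
    g_over_M = mean_of_dice(M, n_trials=5_000)
    print(M, g_over_M.mean(), g_over_M.std())  # mean -> 3.5, std ~ 1.71/sqrt(M)
```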
Probability Distribution of a Sum of Random Numbers
If we associate the random numbers with $N$ molecules, identical or otherwise, we will often need to compute the sum over all molecules. This generates a new random number
$S = \sum_{j=1}^N F_j \ ,$
whose mean value is the sum of the individual mean values,
$\langle S \rangle = \sum_{j=1}^N \langle F_j \rangle \ .$
If motion of the individual molecules is uncorrelated, the individual random numbers $F_j$ are independent. It can then be shown that the variances add,
$\sigma_S^2 = \sum_{j=1}^N \sigma_j^2$
For identical molecules, all random numbers have the same mean $\langle F \rangle$ and variance $\sigma_F^2$ and we find
\begin{align} \langle S \rangle & = N \langle F \rangle \ \sigma_S^2 & = N \sigma_F^2 \ \sigma_S & = \sqrt{N} \sigma_F \ .\end{align}
This result relates to the concept of peaking of probability distributions for a large number of molecules that was introduced above on the example of the probability distribution for sum of the numbers shown by two dice. The width of the distribution normalized to its mean value,
$\frac{\sigma_S}{\langle S \rangle} = \frac{1}{\sqrt{N}} \frac{\sigma_F}{\langle F \rangle} \ ,$
scales with the inverse square root of $N$. For $10^{18}$ molecules, this relative width of the distribution is one billion times smaller than for a single molecule. Assume that for a certain physical quantity of a single molecule the standard deviation is as large as the mean value. No useful prediction can be made. For a macroscopic sample, the same quantity can be predicted with an accuracy better than the precision that can be expected in a measurement.
Binomial Distribution
We consider the measurement of the $z$ component of spin angular momentum for an ensemble of $N$ spins $S = 1/2$.4 The random number associated with an individual spin can take only two values, $-\hbar/2$ or $+\hbar/2$. Additive and multiplicative constants can be taken care of separately and we can thus represent each spin by a random number $A$ that assumes the value $a=1$ (for $m_S = +1/2$) with probability $P$ and, accordingly, the value $a=0$ (for $m_S = -1/2$) with probability $1-P$. This is a very general problem, which also relates to the second postulate of Penrose (see Section [Penrose_postulates]). A simplified version with $P = 1-P = 0.5$ is given by $N$ flips of a fair coin. A fair coin or a biased coin with $P \neq 0.5$ can be easily implemented in a computer, for instance by using a = floor(rand+P) in Matlab. For the individual random numbers we find $\langle A \rangle = P$ and $\sigma_A^2 = P(1-P)$, so that the relative standard deviation for the ensemble with $N$ members becomes $\sigma_S/\langle S \rangle = \sqrt{(1-P)/(N \cdot P)}$.5
To compute the explicit probability distribution of the sum of the random numbers for the whole ensemble, we realize that the probability of a subset of $n$ ensemble members providing a 1 and $N-n$ ensemble members providing a 0 is $P^n(1-P)^{N-n}$. The value of the sum associated with this probability is $n$.
Now we still need to consider the phenomenon already encountered for the sum of the numbers on the black and red dice: Different numbers $n$ have different multiplicities. We have $N!$ permutations of the ensemble members. Let us assign a 1 to the first $n$ members of each permutation. For our problem, it does not matter in which sequence these $n$ members are numbered and it does not matter in which sequence the remaining $N-n$ members are numbered. Hence, we need to divide the total number of permutations $N!$ by the numbers of permutations in each subset, $n!$ and $(N-n)!$ for the first and second subset, respectively. The multiplicity that we need is the number of ways of choosing $n$ elements out of $N$, which is given by the binomial coefficient,
$\binom {N} {n} = \frac{N!}{n!(N-n)!} \ , \label{eq:N_over_n}$
providing the probability distribution
$P_S(n) = \binom {N} {n} P^n(1-P)^{N-n} \ .$
For large values of $N$ the binomial distribution tends to a Gaussian distribution,
$G(s) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left[-\frac{(s-\langle s \rangle)^2}{2 \sigma^2}\right] \ .$
As we already know the mean value $\langle s \rangle = \langle n \rangle = N P$ and variance $\sigma_S^2 = N P (1-P)$, we can immediately write down the approximation
$P_S(n) \approx \frac{1}{\sqrt{2 \pi P(1-P)N}} \exp\left[-\frac{(n-PN)^2}{2P(1-P)N} \right] = G(n) \ .$
As shown in Figure $1$ the Gaussian approximation of the binomial distribution is quite good already at $N = 1000$.
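The comparison in Figure $1$ can be reproduced with a few lines of Matlab; a minimal sketch (the binomial probabilities are evaluated via gammaln, which also avoids the overflow problem discussed in the next section; all parameter choices are ours):

% Binomial distribution versus its Gaussian approximation
N = 1000;  P = 0.5;
n = 0:N;
% binomial probabilities computed on a logarithmic scale to avoid overflow
lnP_S = gammaln(N+1) - gammaln(n+1) - gammaln(N-n+1) + n*log(P) + (N-n)*log(1-P);
P_S = exp(lnP_S);
G = exp(-(n-P*N).^2/(2*P*(1-P)*N))/sqrt(2*pi*P*(1-P)*N);   % Gaussian
plot(n,P_S,'o',n,G,'-'); xlim([400 600]);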
In fact, the Gaussian (or normal) distribution is a general distribution for the arithmetic mean of a large number of independent random variables:
Suppose that a large number $N$ of observations has been made, with each observation corresponding to a random number that is independent of the random numbers of the other observations. According to the central limit theorem, the arithmetic mean $S/N$ of all these random numbers is approximately normally distributed, regardless of the probability distribution of the individual random numbers, as long as the probability distributions of all individual random numbers are identical. The central limit theorem applies if each individual random variable has a well-defined mean value (expectation value) and a well-defined variance. These conditions are fulfilled for statistically regular trials $\mathcal{T}$. [concept:central_limit_theorem]
Stirling’s Formula
The number $N!$ of permutations increases very fast with $N$, leading to numerical overflow in calculators and computers at values of $N$ that correspond to nanoclusters rather than to macroscopic samples. Even binomial coefficients, which grow less strongly with increasing ensemble size, cannot be computed with reasonable precision for $N \gg 1000$. Furthermore, the factorial $N!$ is difficult to handle in calculus. The scaling problem can be solved by taking the logarithm of the factorial,
$\ln N! = \ln \left( \prod_{n=1}^N n \right) = \sum_{n=1}^N \ln n \ .$
For large numbers $N$ the natural logarithm of the factorial can be approximated by Stirling’s formula
\begin{align} & \ln N! \approx N \ln N - N + 1 \ , \label{eq:Stirling}\end{align}
which amounts to the approximation
\begin{align} & N! \approx N^N \exp(1-N)\end{align}
for the factorial itself. For large numbers $N$ it is further possible to neglect 1 in the sum and approximate $\ln N! \approx N \ln N - N$.
The absolute error of this approximation for $N!$ looks gross and increases fast with increasing $N$, but because $N!$ grows much faster, the relative error becomes insignificant already at moderate $N$. For $\ln N!$ it is closely approximated by $-0.55/N$. In fact, an even better approximation has been found by Gosper ,
$\ln N! \approx N \ln N - N + \frac{1}{2} \ln\left[\left(2N+\frac{1}{3}\right)\pi\right] \ .$
Gosper’s approximation is useful for considering moderately sized systems, but note that several of our other assumptions and approximations become questionable for such systems and much care needs to be taken in interpreting results. For the macroscopic systems, in which we are mainly interested here, Stirling’s formula is often sufficiently precise and Gosper’s is not needed.
Slightly better than Stirling’s original formula, but still a simple approximation is
$N! \approx \sqrt{2 \pi N} \left( \frac{N}{e} \right)^N \ . \label{eq:Stirling_better}$
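All of these approximations are easily compared against the exact value, which for moderate $N$ can be computed as a sum of logarithms; a minimal sketch (the value $N = 100$ is an arbitrary choice):

% Compare ln N! with Stirling's formula, Gosper's approximation,
% and Eq. (Stirling_better)
N = 100;
lnNfact  = sum(log(1:N));                            % exact ln N!
stirling = N*log(N) - N + 1;                         % Stirling's formula
gosper   = N*log(N) - N + 0.5*log((2*N+1/3)*pi);     % Gosper's approximation
better   = 0.5*log(2*pi*N) + N*log(N) - N;           % sqrt(2 pi N)(N/e)^N
[lnNfact stirling gosper better]   % 363.74, 361.52, 363.74, 363.74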
Probability Density
Although the outcomes of measurements can be discretized, and in fact are invariably discretized when the data are stored, in theory it is convenient to work with continuous variables and to assume that physical quantities are continuous. For instance, spatial coordinates in phase space are assumed to be continuous, as are the momentum coordinates for translational motion in free space.
To work with continuous variables, we assume that an event can return a real number instead of an integer index. The real number with its associated probability density $\rho$ is a continuous random number. Note the change from assigning a probability to an event to assigning a probability density. This is necessary as real numbers are not countable and thus the number of possible events is infinite. If we want to infer a probability in the usual sense, we need to specify an interval $[l,u]$ between a lower bound $l$ and an upper bound $u$. The probability that trial $\mathcal{T}$ will turn up a real number in this closed interval is given by
$P([l,u]) = \int_l^u \rho(x) \mathrm{d}x \ .$
The probability density must be normalized,
$\int_{-\infty}^{\infty} \rho(x) \mathrm{d} x = 1 \ .$
A probability density distribution can be characterized by its moments.
The $n^\mathrm{th}$ moment of a probability density distribution is defined as,
\begin{align} & \langle x^n \rangle = \int_{-\infty}^\infty x^n \rho(x) \mathrm{d}x \ .\end{align}
The first moment is the mean of the distribution. With the mean $\langle x \rangle$, the central moments are defined
\begin{align} & \langle (x-\langle x \rangle)^n \rangle = \int_{-\infty}^\infty (x-\langle x \rangle)^n \rho(x) \mathrm{d}x \ .\end{align}
The second central moment is the variance $\sigma_x^2$ and its square root $\sigma_x$ is the standard deviation. [concept:moment_analysis]
Probability density is defined along some dimension $x$, corresponding to some physical quantity. The average of a function $F(x)$ of this quantity is given by
$\langle F(x) \rangle = \int_{-\infty}^\infty F(x) \rho(x) \mathrm{d}x \ .$
In many books and articles, the same symbol $P$ is used for probabilities and probability densities. Swendsen points this out, yet decided to do the same, remarking that the reader must learn to deal with this. In the next section he goes on to confuse marginal and conditional probability densities with probabilities himself. In these lecture notes we use $P$ for probabilities, which are always unitless, finite numbers in the interval $[0,1]$, and $\rho$ for probability densities, which are always infinitesimally small and may have a unit. Students are advised to keep the two concepts apart, which means using different symbols.
Computer representations of probability densities by a vector or array are discretized. Hence, the individual values are finite. We now consider the problem of generating a stream of random numbers that conforms to a given discretized probability density $\vec{\rho}$. Modern programming languages or mathematical libraries include functions that provide uniformly distributed pseudo-random numbers in the interval $(0,1)$ (Matlab: rand) or pseudo-random numbers with a Gaussian (normal) distribution with mean 0 and standard deviation 1 (Matlab: randn). A stream of uniformly distributed pseudo-random numbers in $(0,1)$ can be transformed to a stream of numbers with probability density conforming to $\vec{\rho}$ by selecting for each input number the abscissa where the cumulative sum of $\vec{\rho}$ (Matlab: cumsum(rho)) most closely matches the input number (Figure $1$). Note that $\vec{\rho}$ must be normalized (Matlab: rho = rho/sum(rho)). Since a random number generator is usually called very often in a Monte Carlo simulation, the cumulative sum cumsum_rho should be computed once and for all before the loop over all trials. With this, generation of the abscissa index poi becomes a one-liner in Matlab: [~,poi] = min(abs(cumsum_rho - rand));
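Put together, a complete sketch of this sampling scheme could look as follows (the Gaussian example density and all parameter values are arbitrary choices):

% Draw random numbers conforming to a discretized density rho(x)
x = linspace(-5,5,201);
rho = exp(-x.^2/2);            % example density, not yet normalized
rho = rho/sum(rho);            % normalize the discretized density
cumsum_rho = cumsum(rho);      % compute once, before the trial loop
ntrials = 1e5;
samples = zeros(1,ntrials);
for k = 1:ntrials
    [~,poi] = min(abs(cumsum_rho - rand));   % best-matching abscissa index
    samples(k) = x(poi);
end
histogram(samples,x);          % histogram resembles rho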
Coming back to physical theory, the concept of probability density can be extended to multiple dimensions, for instance to the $2F = 2fM$ dimensions of phase space. Probability then becomes a volume integral in this hyperspace. A simple example of a multidimensional continuous problem is the probability of finding a classical particle in a box. The probability to find it at a given point is infinitely small, as there are infinitely many of such points. The probability density is uniform, since all points are equally likely for a classical (unlike a quantum) particle. With the volume $V$ of the box, this uniform probability density is $1/V$ if we have a single particle in the box. This follows from the normalization condition, which is $\int \rho \mathrm{d}V = 1$. Note that a probability density has a unit, in our example m$^{-3}$. In general, the unit is the inverse of the product of the units of all dimensions.
The marginal probability density for a subset of the events is obtained by ’integrating out’ the other events. Let us assume a particle in a two-dimensional box with dimensions $x$ and $y$ and ask about the probability density along $x$. It is given by
$\rho_x(x) = \int_{-\infty}^\infty \rho(x,y) \mathrm{d}y \ .$
Likewise, the conditional probability density $\rho(y|x)$ is defined at all points where $\rho_x(x) \neq 0$,
$\rho(y|x) = \frac{\rho(x,y)}{\rho_x(x)} \ .$
If two continuous random numbers are independent, their joint probability density is the product of the two individual probability densities,
$\rho(x,y) = \rho_x(x) \rho_y(y) \ .$
Exercise $1$
Write a Matlab program that generates random numbers conforming to a two-dimensional probability density distribution $\rho_\mathrm{mem}$ that resembles the Matlab logo (Figure $2$). The (not yet normalized) distribution $\rho_\mathrm{mem}$ is obtained with the function call L = membrane(1,resolution,9,9);. Hint: You can use the reshape function to generate a vector from a two-dimensional array as well as for reshaping a vector into a two-dimensional array. That way the two-dimensional problem (or, in general, a multi-dimensional problem) can be reduced to the problem of a one-dimensional probability density distribution.
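One possible skeleton along the lines of the hint is sketched below (resolution, trial number, and the clipping of possibly negative values of L to zero before normalization are our own choices):

% Sample a 2D density resembling the Matlab logo via a 1D detour
resolution = 50;
L = membrane(1,resolution,9,9);        % 2D distribution, not yet normalized
rho = reshape(L,[],1);                 % flatten the 2D array to a vector
rho = max(rho,0); rho = rho/sum(rho);  % clip negative values, normalize
cumsum_rho = cumsum(rho);
counts = zeros(size(rho));
for k = 1:1e5
    [~,poi] = min(abs(cumsum_rho - rand));
    counts(poi) = counts(poi) + 1;
end
imagesc(reshape(counts,size(L)));      % compare with imagesc(L)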
Selective Integration of Probability Densities
We already know how to compute probability from probability density for a simply connected parameter range. Such a range can be an interval $[l,u]$ for a probability density depending on only one parameter $x$ or a simply connected volume element for a probability density depending on multiple parameters. In a general problem, the points that contribute to the probability of interest may not be simply connected. If we can find a function $g(x)$ that is zero at the points that should contribute, we can solve this problem with the Dirac delta function, which is the continuous equivalent of the Kronecker delta that was introduced above.
Concept $1$: Dirac delta function
The Dirac delta function is a generalized function with the following properties
1. The function $\delta(x)$ is zero everywhere except at $x=0$.
2. $\int_{-\infty}^\infty \delta(x) \mathrm{d}x = 1$.
The function can be used to select the value $f(x_0)$ of another continuous function $f(x)$,
\begin{align} & f(x_0) = \int_{-\infty}^\infty f(x) \delta(x-x_0) \mathrm{d}x \ .\end{align}
This concept can be used, for example, to compute the probability density of a new random variable $s$ that is a function of two given random variables $x$ and $y$ with given joint probability density $\rho(x,y)$. The probability density $\rho(s)$ corresponding to $s = f(x,y)$ is given by
$\rho(s) = \int_{-\infty}^\infty \int_{-\infty}^\infty \rho(x,y) \delta\left( s - f(x,y) \right) \mathrm{d} x \mathrm{d} y \ .$
Note that the probability density $\rho(s)$ computed that way is automatically normalized.
We now use the concept of selective integration to compute the probability density $\rho(s)$ for the sum $s = x+y$ of the numbers shown by two continuous dice, with each of them having a uniform probability density in the interval $[0,6]$ (Figure $3$). We have
\begin{align} \rho(s) & = \int_{-\infty}^\infty \int_{-\infty}^\infty \rho(x,y) \delta\left( s -(x+y) \right) \mathrm{d} y \mathrm{d} x \\ & = \frac{1}{36} \int_0^6 \int_0^6 \delta\left( s -(x+y) \right) \mathrm{d} y \mathrm{d} x \ .\end{align}
The argument of the delta function in the inner integral over $y$ can be zero only for $0 \leq s-x \leq 6$, since otherwise no value of $y$ exists that leads to $s = x+y$. It follows that $x \leq s$ and $x \geq s-6$. For $s=4$ (orange line in Fig. [fig:cont_sum]c) the former condition sets the upper limit of the integration. Obviously, this is true for any $s$ with $0 \leq s \leq 6$. For $s = 8$ (orange line in Fig. [fig:cont_sum]c) the condition $x \geq s-6$ sets the lower limit of the integration, as is also true for any $s$ with $6 \leq s \leq 12$. The lower limit is 0 for $0 \leq s \leq 6$ and the upper limit is 6 for $6 \leq s \leq 12$. Hence,
$\rho(s) = \frac{1}{36} \int_0^s \mathrm{d} x = \frac{s}{36} \ \mathrm{for} \ s \leq 6 \ ,$
and
$\rho(s) = \frac{1}{36} \int_{s-6}^6 \mathrm{d} x = \frac{12-s}{36} \ \mathrm{for} \ s \geq 6 \ .$
From the graphical representation in Fig. [fig:cont_sum]c it is clear that $\rho(s)$ is zero at $s = 0$ and $s = 12$, assumes a maximum of $1/6$ at $s = 6$, increases linearly between $s=0$ and $s=6$ and decreases linearly between $s=6$ and $s=12$.
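The same density can be obtained, without any selective integration, from a direct Monte Carlo simulation of the two continuous dice; a minimal sketch (trial number and bin width are arbitrary choices):

% Monte Carlo check of the triangular density for s = x + y
n = 1e6;
s = 6*rand(n,1) + 6*rand(n,1);   % two continuous dice, uniform on [0,6]
histogram(s,0:0.25:12,'Normalization','pdf');
hold on
sg = 0:0.1:12;
plot(sg,min(sg,12-sg)/36,'LineWidth',1.5);  % rho(s) = s/36 or (12-s)/36
hold off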
03: Classical Ensembles
Concept of an Ensemble
Probability densities in phase space cannot be computed by considering only a single system at a single instant in time. Such a system will be in some random microstate, but what we need is the statistics of such microstates. This problem was solved by Gibbs, who considered ensembles that consist of a very large number of identical systems in possibly different microstates. The microstates for a system with $M$ molecules with $f$ degrees of freedom each are points in $2fM$-dimensional phase space. If we have information on the probability density assigned to such points, we can use probability theory to compute thermodynamical state functions.
Ergodicity
Instead of considering a large ensemble of systems at the same time (ensemble average), we could also consider a long trajectory of a single system in phase space. The single system will go through different microstates and if we observe it for a sufficiently long time, we might expect that it visits all accessible points in phase space with a frequency that corresponds to the associated probability density. This idea is the basis of analyzing MD trajectories in terms of thermodynamic state functions. The ensemble average $\langle A \rangle$ is replaced by the time average $\overline{A}$. We assume
$\langle A \rangle = \overline{A} \ .$
Systems where this assumption holds are called ergodic systems.
Often, experiments are performed on a large ensemble of identical systems. An example is a spectroscopic experiment on a dilute solution of chromophores: Each chromophore can be considered as an individual system and their number may be of the order of $10^{10}$ or higher. In some cases an equivalent experiment can be performed on a single chromophore, but such single-molecule experiments require many repetitions and measure a time-average. The results of ensemble and single-molecule experiments are equivalent if the system is ergodic and the measurement time in the single-molecule experiment is sufficiently long.
Whether or not a system is ergodic depends on kinetic accessibility of the whole thermodynamically accessible phase space. We shall see later that thermodynamic accessibility is related to temperature and to the energy assigned to points in phase space. Points are accessible if their energy is not too much higher than the energy minimum in phase space. Whether a single dynamic system visits all these points at a given temperature, and what time it needs to sample phase space, depends on energy barriers. In MD simulations sampling problems are often encountered, where molecular conformations that are thermodynamically accessible are not accessed within reasonable simulation times. A multitude of techniques exists for alleviating such sampling problems, none of them perfect. In general, time-average methods, be they computational or experimental, should be interpreted in terms of thermodynamics only with care. In this lecture course we focus on ensemble-average methods, which suffer from a loss in dynamic information, but get the thermodynamic state functions right.
Assume that we have an isolated system with $N$ particles in a fixed volume $V$. Because the system is isolated, the total energy $E$ must also be fixed. If we know that the energy must be in an interval $[E,E+\Delta E]$, the probability density in phase space must be zero everywhere outside the region between the two hypersurfaces with constant energies $E$ and $E+\Delta E$. We call this region the energy shell in which the system is confined. If the system is in equilibrium, i.e., the probability density $\rho$ is stationary, $\rho$ must be uniform in this energy shell, i.e., it must not depend on $p$ and $q$ within this shell. We can see this from the Liouville equation ([eq:Liouville_short]), whose left-hand side must be zero for a stationary probability density. The Poisson bracket on the right-hand side will vanish if $\rho$ is uniform.
Concept $1$: Microcanonical Ensembles
An ensemble with a constant number $N$ of particles in a constant volume $V$ and with constant total energy $E$ has a uniform probability density $\rho_\mathrm{mc}$ in the part of phase space where it can reside, which is the energy hypersurface at energy $E$. Such an ensemble is called a microcanonical ensemble.
We are left with computing this constant probability density $\rho_\mathrm{mc}$. As the energy is given by the Hamiltonian function $\mathcal{H}(\mathbf{p},\mathbf{q})$, we can formally write $\rho_\mathrm{mc}$ for an infinitely thin energy shell ($\Delta E \rightarrow 0)$ as
$\rho_\mathrm{mc} = \frac{1}{\Omega(E)} \delta\left(E - \mathcal{H}(\mathbf{p},\mathbf{q}) \right)\ ,$
where the statistical weight $\Omega$ depends on energy, volume, and number of particles $N$, but at constant energy does not depend on momentum $\mathbf{p}$ or spatial coordinates $\mathbf{q}$. Since the probability density is normalized, we have
$\Omega(E) = \int \int \delta\left(E - \mathcal{H}(\mathbf{p},\mathbf{q})\right) \mathrm{d}\mathbf{q} \mathrm{d} \mathbf{p} \ .$
The probability density in phase space of the microcanonical ensemble is thus relatively easy to compute. However, the restriction to constant energy, i.e. to an isolated system, severely limits application of the microcanonical ensemble. To see this, we consider the simplest system, an electron spin $S = 1/2$ in an external magnetic field $B_0$. This system is neither classical nor describable in phase space, but it will nicely serve our purpose. The system has a state space consisting of only two states $|\alpha\rangle$ and $|\beta\rangle$ with energies $\epsilon_\alpha = g_e \mu_\mathrm{B} B_0/2$ and $\epsilon_\beta = -g_e \mu_\mathrm{B} B_0/2$. In magnetic resonance spectroscopy, one would talk of an ensemble of ’isolated’ spins if the individual spins do not interact with each other. We shall see shortly that this ensemble is not isolated in a thermodynamical sense, and hence not a microcanonical ensemble.
The essence of the microcanonical ensemble is that all systems in the ensemble have the same energy $E$; this restricts probability density to the hypersurface with constant $E$. If our ensemble of $N$ spins were a microcanonical ensemble, this energy would be either $E = g_e \mu_\mathrm{B} B_0/2$ or $E = -g_e \mu_\mathrm{B} B_0/2$ and all spins in the ensemble would have to be in the same state, i.e., the ensemble would be in a pure state. In almost any experiment on spins $S = 1/2$ the ensemble is in a mixed state and the populations of states $|\alpha\rangle$ and $|\beta\rangle$ are of interest. The system is not isolated, but, via spin relaxation processes, in thermal contact with its environment. To describe this situation, we need another type of ensemble.
Equilibrium thermodynamics describes systems that are in thermal equilibrium. In an ensemble picture, this can be considered by assuming that the system is in contact with a very large— for mathematical purposes infinitely large— heat bath. Because of this, the individual systems in the ensemble can differ in energy. However, the probability density distribution in phase space or state space must be consistent with constant temperature $T$, which is the temperature of the heat bath. In experiments, it is the temperature of the environment.
Concept $1$: Canonical Ensemble
An ensemble with a constant number $N$ of particles in a constant volume $V$ and at thermal equilibrium with a heat bath at constant temperature $T$ can be considered as an ensemble of microcanonical subensembles with different energies $\epsilon_i$. The energy dependence of probability density conforms to the Boltzmann distribution. Such an ensemble is called a canonical ensemble.
Note
Because each system can exchange heat with the bath and thus change its energy, systems will transfer between subensembles during evolution. This does not invalidate the idea of microcanonical subensembles with constant particle numbers $N_i$. For a sufficiently large ensemble at thermal equilibrium the $N_i$ are constants of motion.
There are different ways of deriving the Boltzmann distribution. Most of them are rather abstract and rely on a large mathematical apparatus. The derivation gets lengthy if one wants to create the illusion that we know why the constant $\beta$ introduced below always equals $1/k_\mathrm{B} T$, where $k_\mathrm{B} = R/N_\mathrm{Av}$ is the Boltzmann constant, which in turn is the ratio of the universal gas constant $R$ and the Avogadro constant $N_\mathrm{Av}$. Here we follow a derivation that is physically transparent and relies on a minimum of mathematical apparatus that we have already introduced.
Boltzmann Distribution
Here we digress from the ensemble picture and use a system of $N$ particles that may exist in $r$ different states with energies $\epsilon_i$ with $i = 0 \ldots r-1$. The number of particles with energy $\epsilon_i$ is $N_i$. The particles do not interact; they are completely independent of each other. We could therefore associate these particles with microcanonical subensembles of a canonical ensemble, but the situation is easier to picture with particles. The probability $P_i = N_i/N$ to find a particle with energy $\epsilon_i$ can be associated with the probability density for the microcanonical subensemble at energy $\epsilon_i$. The difference between this simple derivation and the more elaborate derivation for a canonical ensemble is thus essentially the difference between discrete and continuous probability theory. We further assume that the particles are classical particles and thus distinguishable.
To compute the probability distribution $P_i = N_i/N$, we note that
$\sum_{i=0}^{r-1} N_i = N \label{eq:conservation_N}$
and
$\sum_{i=0}^{r-1} N_i \epsilon_i = E \ , \label{eq:conservation_E}$
where $E$ is a constant total energy of the system. We need to be careful in interpreting the latter equation in the ensemble picture. The quantity $E$ corresponds to the energy of the whole canonical ensemble, which is indeed a constant of motion, if we consider a sufficiently large number of systems in contact with a thermal bath. We can thus use our simple model of $N$ particles for guessing the probability density distribution in the canonical ensemble.
What we are looking for is the most likely distribution of the $N$ particles on the $r$ energy levels. This is equivalent to putting $N$ distinguishable balls into $r$ boxes. We already solved the problem of distributing $N$ objects onto two states when considering the binomial distribution in Section [binomial_distribution]. The statistical weight of a configuration with $n$ objects in the first state and $N-n$ objects in the second state was $\binom {N} {n}$. With this information we would already be able to solve the problem of a canonical ensemble of $N$ spins $S=1/2$ in thermal contact with the environment, disregarding for the moment differences between classical and quantum statistics (see Section [section:quantum_statistics]).
Coming back to $N$ particles and $r$ energy levels, we still have $N!$ permutations. If we assign the first $N_0$ particles to the state with energy $\epsilon_0$, the next $N_1$ particles to $\epsilon_1$ and so on, we need to divide each time by the number of permutations $N_i!$ in the same energy state, because the sequence of particles with the same energy does not matter. We call the vector of the occupation numbers $N_i$ a configuration. The configuration specifies one particular macrostate of the system and the relative probability of the macrostates for distinguishable particles and non-degenerate states is given by their statistical weights,
$\Omega = \frac{N!}{N_0! N_1! \ldots N_{r-1}!} \ . \label{eq:N_onto_r}$
The case with degenerate energy levels is treated in Section [sec:Maxwell-Boltzmann].
The most probable macrostate is the one with maximum statistical weight $\Omega$. Because of the peaking of probability distributions for large $N$, we need to compute only this most probable macrostate; it is representative for the whole ensemble. Instead of maximizing $\Omega$ we can as well maximize $\ln \Omega$, as the natural logarithm is a strictly monotonic function. This allows us to apply Stirling’s formula,
\begin{align} \ln \Omega & = \ln N! - \sum_{i=0}^{r-1} \ln N_i! \\ & \approx N \ln N - N + 1 - \sum_{i=0}^{r-1} N_i \ln N_i + \sum_{i=0}^{r-1} N_i - r \ .\end{align}
By inserting Equation \ref{eq:conservation_N} we find
$\ln \Omega \approx N \ln N - \sum_{i=0}^{r-1} N_i \ln N_i + 1 - r\ . \label{eq:ln_Omega}$
Note that the second term on the right-hand side of Equation \ref{eq:ln_Omega} has some similarity to the entropy of mixing, which suggests that $\ln \Omega$ is related to entropy.
At the maximum of $\ln \Omega$ the derivative of $\ln \Omega$ with respect to the $N_i$ must vanish,
$0 = \delta \sum_i N_i \ln N_i = \sum_i \left( N_i \delta \ln N_i + \delta N_i \ln N_i \right) = \sum_i \delta N_i + \sum_i \ln N_i \delta N_i \ . \label{eq:max_ln_Omega}$
In addition, we need to consider the boundary conditions of constant particle number, Equation \ref{eq:conservation_N},
$\delta N = \sum_i \delta N_i = 0 \label{eq:conservation_N_diff}$
and constant total energy, Equation \ref{eq:conservation_E},
$\delta E = \sum_i \epsilon_i \delta N_i = 0 \ .$
It might appear that Equation \ref{eq:conservation_N_diff} could be used to cancel a term in Equation \ref{eq:max_ln_Omega}, but this would be wrong as Equation \ref{eq:conservation_N_diff} is a constraint that must be fulfilled separately. For the constrained maximization we can use the method of Lagrange multipliers.
The maximum or minimum of a function $f(x_1 \ldots, x_n)$ of $n$ variables is a stationary point that is attained at
\begin{align} & \delta f = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)_{x_k \neq x_i} \delta x_i = 0 \ . \label{eq:extremum_multi}\end{align}
We now consider the case where the possible sets of the $n$ variables are constrained by $c$ additional equations
\begin{align} & g_j(x_1, x_2, \ldots, x_n) = 0 \ ,\end{align}
where index $j$ runs over the $c$ constraints ($j = 1 \ldots c$). Each constraint introduces another equation of the same form as the one of Equation \ref{eq:extremum_multi},
\begin{align} & \delta g_j = \sum_{i=1}^{n} \left( \frac{\partial g_j}{\partial x_i} \right)_{x_k \neq x_i} \delta x_i = 0 \ .\end{align}
The constraints can be introduced by multiplying each of the $c$ equations by a multiplier $\lambda_j$ and subtracting it from the equation for the stationary point without the constraints,
$\delta \mathcal{L} = \sum_{i=1}^{n} \left[ \left( \frac{\partial f}{\partial x_i} \right)_{x_k \neq x_i} - \sum_{j=1}^c \lambda_j \left( \frac{\partial g_j}{\partial x_i} \right)_{x_k \neq x_i} \right] \delta x_i \ .$
If a set of variables $\left\{x_{0,1} \ldots, x_{0,n}\right\}$ solves the constrained problem, then there exists a set $\left\{\lambda_{0,1} \ldots \lambda_{0,c}\right\}$ for which $\left\{x_{0,1}, x_{0,2}, \ldots, x_{0,n}\right\}$ also corresponds to a stationary point of the Lagrangian function $\mathcal{L}(x_1, \ldots, x_n, \lambda_1, \ldots \lambda_c)$. Note that not all stationary points of the Lagrangian function are necessarily solutions of the constrained problem. This needs to be checked separately. [concept:Lagrangian_multipliers]
With this method, we can write
\begin{align} 0 & = \sum_i \delta N_i + \sum_i \ln N_i \delta N_i + \alpha \sum_i \delta N_i + \beta \sum_i \epsilon_i \delta N_i \\ & = \sum_i \delta N_i \left( 1 + \ln N_i + \alpha + \beta \epsilon_i \right) \ .\end{align}
The two boundary conditions fix only two of the population numbers $N_i$. We can choose the multipliers $\alpha$ and $\beta$ in a way that $\left( 1 + \ln N_i + \alpha + \beta \epsilon_i \right) = 0$ for these two $N_i$, which ensures that the partial derivatives of $\ln \Omega$ with respect to these two $N_i$ vanish. The other $r-2$ population numbers can, in principle, be chosen freely, but again we must have
$1 + \ln N_i + \alpha + \beta \epsilon_i = 0$
for all $i$ to make sure that we find a maximum with respect to variation of any of the $r$ population numbers. This gives
$N_i = \gamma e^{-\beta \epsilon_i}$
with $\gamma = e^{-(1+\alpha)}$. We can eliminate $\gamma$ by using Equation \ref{eq:conservation_N},
$\sum_i N_i = \gamma \sum_i e^{-\beta \epsilon_i} = N \ ,$
giving
$\gamma = \frac{N}{\sum_i e^{-\beta \epsilon_i}} \ ,$
and finally leading to
$P_i = \frac{N_i}{N} = \frac{e^{-\beta \epsilon_i}}{\sum_i e^{-\beta \epsilon_i}} \ . \label{eq:Boltzmann_distribution_0}$
For many problems in statistical thermodynamics, the Lagrange multiplier $\alpha$ is related to the chemical potential by $\alpha = \mu / (k_\mathrm{B} T)$. The Lagrange multiplier $\beta$ must have the dimension of an inverse energy, as the exponent must be dimensionless. As indicated above, we cannot at this stage prove that $\beta$ corresponds to the same energy $k_\mathrm{B} T$ for all problems of the type that we have posed here, let alone for all of the analogous problems of canonical ensembles. The whole formalism can be connected to phenomenological thermodynamics via Maxwell’s kinetic gas theory (see also Section [subsection:equipartition]). For this problem one finds
$\beta = \frac{1}{k_\mathrm{B} T} \ .$
Concept $2$: Boltzmann Distribution
For a classical canonical ensemble with energy levels $\epsilon_i$ the probability distribution for the level populations is given by the Boltzmann distribution
\begin{align} & P_i = \frac{N_i}{N} = \frac{e^{-\epsilon_i/k_\mathrm{B}T}}{\sum_i e^{-\epsilon_i/k_\mathrm{B}T}} \ . \label{eq:Boltzmann_distribution}\end{align}
The sum over states
\begin{align} & Z(N, V, T) = \sum_i e^{-\epsilon_i/k_\mathrm{B}T}\end{align}
required for normalization is called the canonical partition function. The partition function is a thermodynamical state function.
For the partition function, we use the symbol $Z$, relating to the German term Zustandssumme ("sum over states"), which is a more lucid description of this quantity.
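For a given set of energy levels, populations and partition function are straightforward to evaluate; a minimal sketch (the level energies are assumed example values, not taken from a specific system):

% Boltzmann populations and canonical partition function
kB = 1.380649e-23;           % Boltzmann constant in J/K
T = 298;                     % temperature in K
epsi = [0 1 2 3]*1e-21;      % example energy levels in J (assumed)
Z = sum(exp(-epsi/(kB*T)))   % canonical partition function
P = exp(-epsi/(kB*T))/Z      % level populations, sum(P) = 1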
Equipartition Theorem
Comparison of Maxwell’s kinetic theory of gases with the state equation of the ideal gas from phenomenological thermodynamics provides a mean kinetic energy of a point particle of $\langle \epsilon_\mathrm{kin} \rangle = 3k_\mathrm{B}T/2$. This energy corresponds to
$\epsilon_\mathrm{trans} = \frac{1}{2}m v^2 = \frac{1}{2m} p^2 \ ,$
i.e., it is quadratic in the velocity coordinates of dynamic space or the momentum coordinates of phase space. Translational energy is distributed via three degrees of freedom, as the velocities or momenta have components along three pairwise orthogonal directions in space. Each quadratic degree of freedom thus contributes a mean energy of $k_\mathrm{B}T/2$.
If we accept that the Lagrange multiplier $\beta$ assumes the value $1/k_\mathrm{B} T$, we find a mean energy $k_\mathrm{B}T$ for a harmonic oscillator in the high-temperature limit. Such an oscillator has two degrees of freedom that contribute quadratically to the energy,
$\epsilon_\mathrm{vib} = \frac{1}{2} \mu v^2 + \frac{1}{2} f x^2 \ ,$
where $\mu$ is the reduced mass and $f$ the force constant. The first term contributes to kinetic energy, the second to potential energy. In the time average, each term contributes the same energy and, assuming ergodicity, this means that each of the two degrees of freedom contributes $k_\mathrm{B}T/2$ to the average energy of a system at thermal equilibrium.
The same exercise can be performed for rotational degrees of freedom with energy
$\epsilon_\mathrm{rot} = \frac{1}{2} I \omega^2 \ ,$
where $I$ is the moment of inertia and $\omega$ the angular frequency. Each rotational degree of freedom, being quadratic in $\omega$, again contributes a mean energy of $k_\mathrm{B}T/2$.
Based on Equation \ref{eq:Boltzmann_distribution_0} it can be shown that for an energy
$\epsilon_i = \eta_1 + \eta_2 + \ldots + \eta_f = \sum_{k=1}^f \eta_k \ ,$
where index $k$ runs over the individual degrees of freedom, the number of molecules that contribute energy $\eta_k$ does not depend on the terms $\eta_j$ with $j \neq k$. It can be further shown that
$\langle \eta_k \rangle = \frac{1}{2 \beta}$
for all terms that contribute quadratically to energy.
This result has two consequences. First, we can generalize $\beta = 1/k_\mathrm{B} T$, which we strictly knew only for translational degrees of freedom, to any canonical ensemble for which all individual energy contributions are quadratic along one dimension in phase space. Second, we can formulate the equipartition theorem:
Each degree of freedom, whose energy scales quadratically with one of the coordinates of state space, contributes a mean energy of $k_\mathrm{B}T/2$.
The equipartition theorem applies to all degrees of freedom that are activated. Translational degrees of freedom are always activated and rotational degrees of freedom are activated at ambient temperature, which corresponds to the high-temperature limit of rotational dynamics. To vibrational degrees of freedom the equipartition theorem applies only in the high-temperature limit. In general, the equipartition theorem fails for quantized degrees of freedom if the quantum energy spacing is comparable to $k_\mathrm{B}T/2$ or exceeds this value. We shall come back to this point when discussing the vibrational partition function.
Internal Energy and Heat Capacity of the Canonical Ensemble
The internal energy $u$ of a system consisting of $N$ particles that are distributed to $r$ energy levels can be identified as the total energy $E$ of the system considered in Section ([subsection:Boltzmann]). Using Eqs. \ref{eq:conservation_E} and \ref{eq:Boltzmann_distribution} we find
$u = N \frac{\sum_i \epsilon_i e^{-\epsilon_i/k_\mathrm{B} T}}{\sum_i e^{-\epsilon_i/k_\mathrm{B} T}} = N \frac{\sum_i \epsilon_i e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \ . \label{eq:u_from_z_sum}$
The sum in the numerator can be expressed by the partition function, since
$\frac{\mathrm{d}Z}{\mathrm{d}T} = \frac{1}{k_\mathrm{B} T^2} \sum_i \epsilon_i e^{-\epsilon_i/k_\mathrm{B} T} \ .$
Thus we obtain
$u = N k_\mathrm{B} T^2 \cdot \frac{1}{Z} \cdot \frac{\mathrm{d}Z}{\mathrm{d}T} = N k_\mathrm{B} T^2 \frac{\mathrm{d} \ln Z}{\mathrm{d}T} \ . \label{eq:u_from_z}$
Again the analogy of our simple system to the canonical ensemble holds. At this point we have computed one of the state functions of phenomenological thermodynamics from the set of energy levels. The derivation of the Boltzmann distribution has also indicated that $\ln \Omega$, and thus the partition function $Z$ are probably related to entropy. We shall see in Section [section:state_fct_partition_fct] that this is indeed the case and that we can compute all thermodynamic state functions from $Z$.
Here we can still derive the heat capacity $c_V$ at constant volume, which is the partial derivative of internal energy with respect to temperature. To that end we note that the partition function for the canonical ensemble relates to constant volume and constant number of particles.
\begin{align} c_V & = \left( \frac{\partial u}{\partial T} \right)_V = N \frac{\partial}{\partial T} \left( k_\mathrm{B} T^2 \frac{\partial \ln Z}{\partial T} \right)_V = -N \frac{\partial }{\partial T}\left( k_\mathrm{B} \frac{\partial \ln Z}{\partial (1/T)}\right)_V \label{eq:cv0} \\ & = - N k_\mathrm{B} \left( \frac{\partial \left[\partial \ln Z/\partial (1/T)\right]}{\partial T} \right)_V = \frac{N k_\mathrm{B}}{T^2} \left( \frac{\partial \left[\partial \ln Z/\partial (1/T)\right]}{\partial (1/T)} \right)_V \\ & = \frac{k_\mathrm{B}}{T^2} \left( \frac{\partial^2 \ln z}{\partial \left(1/T \right)^2} \right)_V \ . \label{eq:cv}\end{align}
In the last line of Equation \ref{eq:cv} we have substituted the molecular partition function $Z$ by the partition function for the whole system, $\ln z = N \ln Z$. Note that this implies a generalization. Before, we were considering a system of $N$ identical particles. Now we implicitly assume that Equation \ref{eq:cv}, as well as $u = k_\mathrm{B} T^2 \frac{\mathrm{d} \ln z}{\mathrm{d}T}$ will hold for any system, as long as we correctly derive the system partition function $z$.
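Equation \ref{eq:u_from_z} lends itself to a simple numerical cross-check; the following sketch compares the derivative expression with the direct Boltzmann average for a two-level system (the level spacing is an assumed example value):

% Check u = N kB T^2 dlnZ/dT for a two-level system (ground level at 0)
kB = 1.380649e-23;  N = 1;
deps = 5e-21;                             % level spacing in J (assumed)
lnZ = @(T) log(1 + exp(-deps./(kB*T)));   % ln of molecular partition function
T = 300;  h = 1e-3;                       % finite-difference step in K
u_num = N*kB*T^2*(lnZ(T+h)-lnZ(T-h))/(2*h);   % numerical derivative
u_ana = N*deps./(exp(deps/(kB*T)) + 1);       % direct Boltzmann average
[u_num u_ana]                             % the two values agree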
We note here that the canonical ensemble describes a closed system that can exchange heat with its environment, but by definition it cannot exchange work, because its volume $V$ is constant. This does not present a problem, since the state functions can be computed at different $V$. In particular, pressure $p$ can be computed from the partition function as well (see Section [section:state_fct_partition_fct]). However, because the canonical ensemble is closed, it cannot easily be applied to all problems that involve chemical reactions. For this we need to remove the restriction of a constant number of particles in the systems that make up the ensemble.
3.04: Grand Canonical Ensemble
For the description of an open system in the thermodynamical sense, i.e., a system that can exchange not only heat, but also matter with its environment, we need to replace particle number $N$ with another constant of motion. If we failed to introduce a new constant of motion, we would end up with a system that is not at equilibrium and thus cannot be fully described by time-independent state functions. If we assume that the system is in chemical as well as thermal equilibrium with its environment, the new constant of motion is the chemical potential $\mu$, or more precisely, a vector $\vec{\mu}$ of the chemical potentials $\mu_k$ of all components.
Concept $1$: Grand Canonical Ensemble
An ensemble with constant chemical potential $\mu_k$ of all components, and constant volume $V$ that is at thermal equilibrium with a heat bath at constant temperature $T$ and in chemical equilibrium with its environment is called a grand canonical ensemble. It can be considered as consisting of canonical subensembles with different particle numbers $N$. The grand canonical state energies and partition function contain an additional chemical potential term. With this additional term the results obtained for the canonical ensemble apply to the grand canonical ensemble, too.
The partition function for the grand canonical ensemble is given by
$Z_\mathrm{gc}(\mu, V, T) = \sum_i e^{(\sum_k N_{i,k} \mu_k - \epsilon_i)/k_\mathrm{B} T} \ ,$
whereas the probability distribution over the levels and particle numbers is
$P_i = \frac{e^{(\sum_k N_{i,k} \mu_k - \epsilon_i)/k_\mathrm{B} T}}{Z_\mathrm{gc}} \ .$
Note that the index range $i$ is much larger than for a canonical ensemble, because each microstate is now characterized by a set of particle numbers $N_{i,k}$, where $k$ runs over the components.
At this point we are in conflict with the notation that is often used in other courses. For example, the chemical potential $\mu$ is often defined as a molar quantity, whereas here it is a molecular quantity. The relation is $\mu_\mathrm{PC I} = N_\mathrm{Av} \mu_\mathrm{PC VI}$. Using the PC I notation in the current lecture notes would be confusing in other ways, as $\mu$ is generally used in statistical thermodynamics for the molecular chemical potential. A similar remark applies to capital letters for state functions. Capital letters denote either a molecular quantity or a molar quantity. The difference will be clear from the context. We note that in general small letters for state functions (except for pressure $p$) denote extensive quantities and capital letters (except for volume $V$) denote intensive quantities.
Cautionary Remarks on Entropy
You will search in vain for a mathematical derivation or a clear, condensed explanation of entropy in textbooks and textbook chapters on statistical thermodynamics. There is a simple reason for this: no such derivation or explanation exists. With entropy being a central concept, probably the central concept of the theory, this may appear very strange. However, the situation is not as bad as it may appear. The theory and the expressions that can be derived work quite well and have predictive power. There are definitions of entropy in statistical thermodynamics (unfortunately, more than one) and they make some sense. Hence, while it may be unnerving that we cannot derive the central state function from scratch, we can still do many useful things and gain some understanding.
Textbooks tend to sweep the problem under the rug. We won’t do that here. We make an honest attempt to clarify what we do know and what we don’t know about entropy before accepting one working definition and basing the rest of the theory on this definition. It is probably best to start with a set of postulates that explains what we expect from the quantity that we want to define.
Swendsen’s Postulates
The following postulates are introduced and shortly discussed in Section 9.6 of Swendsen’s book. We copy the long form of these postulates verbatim, with very small alterations that improve consistency or simplify the expression.
1. There exist equilibrium states of a macroscopic system that are characterized uniquely by a small number of extensive variables.
2. The values assumed at equilibrium by the extensive variables of an isolated system in the absence of internal constraints are those that maximize the entropy over the set of all constrained macroscopic states.
3. The entropy of a composite system is additive over the constituent subsystems.
4. For equilibrium states the entropy is a monotonically increasing function of the energy.
5. The entropy is a continuous and differentiable function of the extensive variables.
We have omitted Swendsen’s last postulate (The entropy is an extensive function of the extensive variables), because, strictly speaking, it is superfluous. If the more general third postulate of additivity is fulfilled, entropy is necessarily an extensive property.
Swendsen’s first postulate (Equilibrium States) establishes the formalism of thermodynamics, while all the remaining postulates constitute a wish list for the quantity entropy that we need to predict the equilibrium states. They are a wish list in the sense that we cannot prove that a quantity with all these properties must exist. We can, however, test any proposed definition of entropy against these postulates.
Some points need explanation. First, the set of postulates defines entropy as a state function, although this may be hidden. The first postulate implies that in equilibrium thermodynamics some extensive variables are state functions and that a small set of such state functions completely specifies all the knowledge that we can have about a macroscopic system. Because entropy in turn specifies the other state functions for an isolated system at equilibrium, according to the second postulate (Entropy Maximization), it must be a state function itself. It must be an extensive state function because of the third postulate (Additivity), but the third postulate requires more, namely that entropies can be added not only for subsystems of the same type in the same state, but also for entirely different systems. This is required if we want to compute a new equilibrium state (or entropy change) after unifying different systems. Otherwise, the simple calorimetry experiment of equilibrating a hot piece of copper with a colder water bath would already be outside our theory. The fourth postulate (Monotonicity) is new compared to what we discussed in phenomenological thermodynamics. For a classical ideal gas this postulate can be shown to hold. This postulate is needed because it ensures that temperature is positive. The fifth postulate is a matter of mathematical convenience, although it may come as a surprise in a theory based on integer numbers of particles. We assume, as at many other points, that the system is sufficiently large for neglecting any errors that arise from treating particle number as a real rather than an integer number. In other words, these errors must be smaller than the best precision that we can achieve in experiments. As we already know from phenomenological thermodynamics, the fifth postulate does not apply to first-order phase transitions, where entropy has a discontinuity. We further note that the second postulate is an alternative way of writing the Second Law of Thermodynamics. The term ’in the absence of internal constraints’ in the second postulate ensures that the whole state space (or, for systems fully described by Hamiltonian equations of motion, the whole phase space) is accessible.
Entropy in Phenomenological Thermodynamics
Textbook authors are generally much more comfortable in discussing entropy as an abstract state function in phenomenological thermodynamics than in discussing its statistical thermodynamics aspects. We recall that the concept of entropy is not unproblematic in phenomenological thermodynamics either. We had accepted the definition of Clausius entropy,
$\mathrm{d} s = \frac{\mathrm{d} q_\mathrm{rev}}{T} \ , \label{eq:entropy_phen}$
where $\mathrm{d} q_\mathrm{rev}$ is the differentially exchanged heat for a reversible process that leads to the same differential change in other state variables as an irreversible process under consideration and $T$ is the temperature. We could then show that entropy is a state function (Carnot process and its generalization) and relate entropy via its total differential to other state functions. With this definition we could further show that for closed systems, which can exchange heat, but not volume work with their environment ($\mathrm{d}V = 0$), minimization of Helmholtz free energy $f = u - T s$ provides the equilibrium state and that for closed systems at constant pressure ($\mathrm{d} p = 0$), minimization of Gibbs free energy $g = h - T s$ provides the equilibrium state. Partial molar Gibbs free energy is the chemical potential $\mu_{k,\mathrm{molar}}$ and via $\mu_{k,\mathrm{molecular}} = \mu_{k,\mathrm{molar}}/N_\mathrm{Av}$ it is related to terms in the partition function of the grand canonical ensemble, where we have abbreviated $\mu_{k,\mathrm{molecular}}$ as $\mu_k$ (Section [section:grand_canonical]).
We were unable in phenomenological thermodynamics to prove that the definition given in Equation \ref{eq:entropy_phen} ensures fulfillment of the Second Law. We were able to give plausibility arguments why such a quantity should increase in some spontaneous processes, but not more.
Boltzmann’s Entropy Definition
Boltzmann provided the first statistical definition of entropy, by noting that it is the logarithm of probability, up to a multiplicative and an additive constant. The formula $s = k \ln W$ by Planck, which expresses Boltzmann’s definition, omits the additive constant. We shall soon see why.
We now go on to test Boltzmann’s definition against Swendsen’s postulates. From probability theory and considerations on ensembles we know that for a macroscopic system, probability density distributions for an equilibrium state are sharply peaked at their maximum. In other words, the macrostate with largest probability is such a good representative for the equilibrium state that it serves to predict state variables with better accuracy than the precision of experimental measurements. It follows strictly that any definition of entropy that fulfills Swendsen’s postulates must make $s$ a monotonically increasing function of probability density for an isolated system.
Why the logarithm? Let us express probability (for the moment discrete again) by the measure of the statistical weights of macrostates. We consider the isothermal combination of two independent systems A and B with entropies $s_\mathrm{A}$ and $s_\mathrm{B}$ to a total system with entropy $s = s_\mathrm{A} + s_\mathrm{B}$. The equation for total entropy is a direct consequence of Swendsen’s third postulate. On combination, the statistical weights $\Omega_\mathrm{A}$ and $\Omega_\mathrm{B}$ multiply, since the subsystems are independent. Hence, with the monotonically increasing function $f(\Omega)$ we must have
$s = f(\Omega) = f(\Omega_\mathrm{A}\cdot\Omega_\mathrm{B}) = f(\Omega_\mathrm{A}) + f(\Omega_\mathrm{B}) \ . \label{eq:s_additivity}$
The only solutions of this functional equation are logarithm functions. What logarithm we choose will only influence the multiplicative constant. Hence, we can write
$s = k \ln \Omega \ , \label{eq:Boltzmann_entropy}$
where, for the moment, constant $k$ is unknown. Boltzmann’s possible additive constant must vanish at this point, because with such a constant, the functional equation ([eq:s_additivity]), which specifies additivity of entropy, would not have a solution.
It is tempting to equate $\Omega$ in Equation \ref{eq:Boltzmann_entropy} in the context of phase space problems with the volume of phase space occupied by the system. Indeed, this concept is known as Gibbs entropy (see Section [Gibbs_entropy]). It is plausible, since the phase space volume specifies a statistical weight for a continuous problem. No problem arises if Gibbs entropy is used for equilibrium states, as it then coincides with Boltzmann entropy. There exists a conceptual problem, however, if we consider the approach to equilibrium. The Liouville theorem (see Section [Liouville]) states that the volume in phase space taken up by a system is a constant of motion. Hence, Gibbs entropy is a constant of motion for an isolated system and the equilibrium state would be impossible to reach from any non-equilibrium state, which would necessarily occupy a smaller phase space volume. This leads to the following cautionary remark:
Statistical thermodynamics, as we introduce it in this text, does not describe dynamics that leads from non-equilibrium to equilibrium states. Different equilibrium states can be compared and the equilibrium state can be determined, but we have made a number of assumptions that do not allow us to apply our expressions and concepts to non-equilibrium states without further thought. Non-equilibrium statistical thermodynamics is explicitly outside the scope of the theory that we present here.
A conceptual complication with Boltzmann’s definition is that one might expect $s$ to be maximal at equilibrium for a closed system, too, not only for an isolated system. In classical thermodynamics we have seen, however, that the equilibrium condition for a closed system is related to free energy. Broadly, we could say that for a closed system probability must be maximized for the system and its environment together. Unfortunately, this cannot be done mathematically as the environment is very large (in fact, for mathematical purposes infinite). The solution to this problem lies in the treatment of the canonical ensemble (Section [section_canonical]). In that treatment we have seen that energy enters into the maximization problem via the boundary condition of constant total energy of the system that specifies what exactly is meant by thermal contact between the system and its environment. We can, therefore, conclude that Boltzmann’s entropy definition, as further specified in Equation \ref{eq:Boltzmann_entropy}, fulfills those of Swendsen’s postulates that we have already tested and that the core idea behind it, maximization of probability (density) at equilibrium, is consistent with our derivation of the partition function for a canonical ensemble at thermal equilibrium. We can thus fix $k$ in Equation \ref{eq:Boltzmann_entropy} by deriving $s$ from the partition function.
Entropy and the Partition Function
We recall that we already computed internal energy $u$ and heat capacity $c_V$ at constant volume from the system partition function $z$ (Section [section:u_and_cv_from_z]). For a canonical system ($V = \mathrm{const.}$), which is by definition at thermal equilibrium (reversible), we can identify $q_\mathrm{rev}$ in Equation \ref{eq:entropy_phen} with
$\mathrm{d}q_\mathrm{rev} = c_V \mathrm{d} T \ .$
Definite integration with substitution of $c_V$ by Equation \ref{eq:cv0} gives
\begin{align} s - s_0 & = \int_0^T \frac{c_V}{T'} \mathrm{d} T' = \int_0^T \frac{1}{T'} \frac{\partial}{\partial T'} \left( k_\mathrm{B} T'^2 \frac{\partial \ln z}{\partial T'} \right)_V \mathrm{d} T' \\[4pt] & = \int_0^T \frac{1}{T'} \left[k_\mathrm{B} T'^2 \left( \frac{\partial^2 \ln z}{\partial T'^2} \right)_V + 2 k_\mathrm{B} T' \left(\frac{\partial \ln z}{\partial T'} \right)_V \right] \mathrm{d} T' \\[4pt] & = k_\mathrm{B} \int_0^T T' \left( \frac{\partial^2 \ln z}{\partial T'^2} \right)_V \mathrm{d} T' + 2 k_\mathrm{B} \int_0^T \left( \frac{\partial \ln z}{\partial T'} \right)_V \mathrm{d} T' \ .\end{align}
Partial integration provides
\begin{align} s - s_0 & = k_\mathrm{B} T \left( \frac{\partial \ln z}{\partial T} \right)_V - k_\mathrm{B} \int_0^T \left( \frac{\partial \ln z}{\partial T'} \right)_V \mathrm{d} T' + 2 k_\mathrm{B} \int_0^T \left( \frac{\partial \ln z}{\partial T'} \right)_V \mathrm{d} T' \\[4pt] & = k_\mathrm{B} T \left( \frac{\partial \ln z}{\partial T} \right)_V + k_\mathrm{B} \ln z \Big|_0^T \label{eq:s_from_z_0} \\[4pt] & = \frac{u}{T} + k_\mathrm{B} \ln z - k_\mathrm{B} \left(\ln z\right)_{T=0} \ , \label{eq:s_from_z_1} \end{align}
where we have used Equation \ref{eq:u_from_z} to substitute the first term on the right hand side of Equation \ref{eq:s_from_z_0}. If we assume that $\lim\limits_{T \rightarrow 0}{u/T} = 0$, the entropy at an absolute temperature of zero can be identified as $s_0 = k_\mathrm{B} \left(\ln z\right)_{T=0}$. If there are no degenerate ground states, $s_0 = 0$ in agreement with Nernst’s theorem (Third Law of Thermodynamics), as will be discussed in Section [subsection:z_accessible]. Thus, by associating $u=0$ with $T=0$ we obtain
$s = \frac{u}{T} + k_\mathrm{B} \ln z = k_\mathrm{B} \left[ \left( \frac{\partial \ln z}{\partial \ln T} \right)_V + \ln z \right] \ . \label{eq:s_from_z}$
We see that under the assumptions that we have made the entropy can be computed from the partition function. In fact, there should be a unique mapping between the two quantities, as both the partition function and the entropy are state functions and thus must be uniquely defined by the state of the system.
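The equivalence of the two forms in Equation \ref{eq:s_from_z} can be verified numerically. The following minimal sketch does so for a hypothetical two-level system with an assumed splitting $\epsilon$; the level energy and temperature are illustrative values, not taken from the text.

```python
import numpy as np

kB = 1.380649e-23   # J/K
eps = 1.0e-21       # J, assumed splitting of a hypothetical two-level system
T = 300.0           # K, illustrative temperature

def lnZ(T):
    # Molecular partition function Z = 1 + exp(-eps/kB T), ground state at zero
    return np.log(1.0 + np.exp(-eps / (kB * T)))

# u = kB T^2 (d ln Z/d T)_V, here via a central finite difference
dT = 1.0e-3
dlnZ_dT = (lnZ(T + dT) - lnZ(T - dT)) / (2 * dT)
u = kB * T**2 * dlnZ_dT

s1 = u / T + kB * lnZ(T)             # first form of the entropy expression
s2 = kB * (T * dlnZ_dT + lnZ(T))     # kB [(d ln Z/d ln T)_V + ln Z]
print(s1, s2)                        # agree within finite-difference error
```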
We now proceed with computing constant $k$ in the mathematical definition of Boltzmann entropy, Equation \ref{eq:Boltzmann_entropy}. By inserting Equation \ref{eq:ln_Omega} into Equation \ref{eq:Boltzmann_entropy} we have
$s = k \left( N \ln N - \sum_{i=0}^{r-1} N_i \ln N_i \right) \ .$
We have neglected the term $r$ on the right-hand side of Equation \ref{eq:ln_Omega}, as is permissible if the number $N$ of particles is much larger than the number $r$ of energy levels. Furthermore, according to Equation \ref{eq:Boltzmann_distribution} and the definition of the partition function, we have $N_i = N e^{-\epsilon_i/k_\mathrm{B} T}/Z$. Hence,
\begin{align} s & = k \left[ N \ln N - N \sum_{i=0}^{r-1} \frac{e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \ln N \left( \frac{e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \right) \right] \[4pt] & = k \left[ N \ln N - N \sum_{i=0}^{r-1} \frac{e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \ln N + N \sum_{i=0}^{r-1} \frac{e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \ln Z + N \sum_{i=0}^{r-1} \frac{e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \cdot \frac{\epsilon_i}{k_\mathrm{B} T}\right] \label{eq:s_fix_k_0} \[4pt] & = k \left[ N \ln N - N \ln N + N \ln Z + \frac{N}{k_\mathrm{B}T} \frac{\sum_{i=0}^{r-1} \epsilon_i e^{-\epsilon_i/k_\mathrm{B} T}}{Z} \right] \ , \label{eq:s_fix_k}\end{align}
where we have used the definition of the partition function in going from Equation \ref{eq:s_fix_k_0} to \ref{eq:s_fix_k}. Using Equation \ref{eq:u_from_z_sum} for substitution in the last term on the right-hand side of Equation \ref{eq:s_fix_k}, we find
$s = k \left[N \ln Z + \frac{u}{k_\mathrm{B} T} \right] \ . \label{eq:s_N_particles_z}$
Comparison of Equation \ref{eq:s_N_particles_z} with Equation \ref{eq:s_from_z} gives two remarkable results. First, the multiplicative constant $k$ in Boltzmann’s entropy definition can be identified as $k = k_\mathrm{B} = R/N_\mathrm{Av}$. Second, for the system of $N$ identical, distinguishable classical particles, we must have
$z_\mathrm{dist} = Z^N \ . \label{eq:molar_z_from_molecular_z}$
In other words, the partition function of a system of $N$ identical, distinguishable, non-interacting particles is the $N^\mathrm{th}$ power of the molecular partition function.
It turns out that Equation \ref{eq:molar_z_from_molecular_z} leads to a contradiction if we apply it to an ideal gas. Assume that we partition the system into two subsystems with particle numbers $N_\mathrm{sub} = N/2$. The internal-energy dependent term in Equation \ref{eq:s_N_particles_z} obviously will not change during this partitioning. For the partition-function dependent term we have $N \ln Z$ for the total system and $2(N/2) \ln Z'$ for the sum of the two subsystems. The molecular partition function in the subsystems differs, because the volume available to an individual particle is only half as large as in the total system. For the inverse process of unifying the two subsystems we would thus obtain a mixing entropy, although the gases in the subsystems are the same. This appearance of a mixing entropy for two identical ideal gases is called the Gibbs paradox. The Gibbs paradox can be healed by treating the particles as indistinguishable. This reduces the statistical weight $\Omega$ by a factor of $N!$ for the total system and by a factor of $(N/2)!$ for each subsystem, which just offsets the volume effect. Hence, for an ideal gas we have
$z_\mathrm{indist} = \frac{1}{N!} Z^N \ . \label{eq:z_indist}$
It may appear artificial to treat classical particles as indistinguishable, because the trajectory of each particle could, in principle, be followed if they adhere to classical mechanics equations of motion, which we had assumed. Note, however, that we discuss a macrostate and that we have explicitly assumed that we cannot have information on the microstates, i.e., on the trajectories. In the macrostate picture, particles in an ideal gas are, indeed, indistinguishable. For an ideal crystal, on the other hand, each particle could be individually addressed, for instance, by high resolution microscopy. In this case, we need to use Equation \ref{eq:molar_z_from_molecular_z}.
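The offset between the total-system and subsystem terms can be made explicit numerically. The sketch below uses the Stirling approximation and illustrative, assumed values for $N$ and $Z$, with $Z$ taken proportional to $V$ as for translational degrees of freedom.

```python
import numpy as np

N = 1.0e23    # particle number, illustrative (order of a mole)
Z = 1.0e30    # assumed molecular partition function, proportional to V

def ln_factorial(n):
    # Stirling approximation ln n! ~ n ln n - n, excellent for large n
    return n * np.log(n) - n

# Distinguishable particles, ln z = N ln Z; halving the volume halves Z
lnz_total = N * np.log(Z)
lnz_parts = 2 * (N / 2) * np.log(Z / 2)
print((lnz_total - lnz_parts) / (N * np.log(2)))  # 1.0: spurious mixing entropy N kB ln 2

# Indistinguishable particles, ln z = N ln Z - ln N!
lnz_total_i = N * np.log(Z) - ln_factorial(N)
lnz_parts_i = 2 * ((N / 2) * np.log(Z / 2) - ln_factorial(N / 2))
print((lnz_total_i - lnz_parts_i) / (N * np.log(2)))  # ~0: paradox resolved
```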
Helmholtz Free Energy
Helmholtz free energy (German: Freie Energie) $f$ is defined as
$f = u - T s \ . \label{eq:f_u_T_s}$
This equation has a simple interpretation. From phenomenological thermodynamics we know that the equilibrium state of a closed system corresponds to a minimum in free energy. Among all macrostates with the same energy $u$ at a given temperature $T$, the equilibrium state is the one with maximum entropy $s$. Furthermore, using Equation \ref{eq:s_from_z} we have
\begin{align} f & = u - T\left( u/T + k_\mathrm{B} \ln z \right) \[4pt] & = -k_\mathrm{B} T \ln z \ . \label{eq:f_from_z}\end{align}
We note that this value of $f$, which can be computed from only the canonical partition function and temperature, corresponds to the global minimum over all macrostates. This is not surprising. After all, the partition function was found in a maximization of the probability of the macrostate.
Gibbs Free Energy, Enthalpy, and Pressure
All ensembles that we have defined correspond to equilibrium states at constant volume. To make predictions for processes at constant pressure or to compute enthalpies $h = u + p V$ and Gibbs free energies $g = f + p V$ we need to compute pressure from the partition function. The simplest way is to note that $p = -\left(\partial f/\partial V \right)_{T,n}$. With Equation \ref{eq:f_from_z} it then follows that
$p = k_\mathrm{B} T \left( \frac{\partial \ln z}{\partial V} \right)_T \ , \label{eq:p_from_z}$
where we have skipped the lower index $n$ indicating constant molar amount. This is permissible for the canonical ensemble, where the number of particles is constant by definition. From Equation \ref{eq:p_from_z} it follows that
$p V = k_\mathrm{B} T \left( \frac{\partial \ln z}{\partial \ln V} \right)_T$
and
$h = u + p V = k_\mathrm{B} T \left[ \left( \frac{\partial \ln z}{\partial \ln T} \right)_V + \left( \frac{\partial \ln z}{\partial \ln V} \right)_T \right] \ .$
Connoisseurs will notice the beautiful symmetry of this equation.
With Equation \ref{eq:p_from_z} we can also compute Gibbs free energy (German: freie Enthalpie),
$g = f + p V = -k_\mathrm{B} T \left[ \ln z - \left( \frac{\partial \ln z}{\partial \ln V} \right)_T \right] \ .$
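The chain from $\ln z$ to $f$, $p$, $u$, $h$, and $g$ can be traced symbolically. The sketch below assumes a molecular partition function of the form $Z = c V T^{3/2}$ (the translational result derived later in Section [section:gas_translation], with $c$ collecting all constants) together with the indistinguishable-particle relation $\ln z = N \ln Z - \ln N!$; it recovers the ideal gas law.

```python
import sympy as sp

kB, T, V, N, c = sp.symbols('k_B T V N c', positive=True)

# Assumed model: Z = c*V*T^(3/2); indistinguishable particles
lnz = N * sp.log(c * V * T**sp.Rational(3, 2)) - sp.log(sp.factorial(N))

f = -kB * T * lnz                              # Helmholtz free energy
p = sp.simplify(-sp.diff(f, V))                # p = -(df/dV)_T
u = sp.simplify(kB * T**2 * sp.diff(lnz, T))   # u = kB T^2 (d ln z/d T)_V
h = sp.simplify(u + p * V)                     # enthalpy

print(p)   # N*T*k_B/V, the ideal gas law
print(u)   # 3*N*T*k_B/2
print(h)   # 5*N*T*k_B/2
```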
Historical Discussion
Daily experience tells us that some processes are irreversible. Phenomenological thermodynamics had provided recipes for recognizing such processes by an increase in entropy for an isolated system or a decrease of free energy for a closed system. When Boltzmann suggested a link between the classical mechanics of molecules on a microscopic level and irreversibility of processes on the macroscopic level, many physicists were nevertheless irritated. In retrospect it is probably fair to say that a controversial discussion of Boltzmann’s result could only ensue because the atomistic or molecular theory of matter was not yet universally accepted at the time. It is harder to understand why this discussion is still going on in textbooks. Probably this is related to the fact that physicists in the second half of the $19^\mathrm{th}$ and first half of the $20^\mathrm{th}$ century believed that pure physics has implications in philosophy, beyond the obvious ones in epistemology applied to experiments in the sciences. If statistical mechanics is used to predict the future of the universe into infinite times, problems ensue. If statistical mechanics is properly applied to well-defined experiments, there are no such problems.
Classical mechanics of particles does not involve irreversibility. The equations of motion have time reversal symmetry, and the same applies to quantum-mechanical equations of motion. If the sign of the Hamiltonian can be inverted, the system will evolve backwards along the same trajectory in phase space (or state space) that it followed to the point of inversion. This argument is called Umkehreinwand or Loschmidt paradox and was brought up (in its classical form) by Loschmidt. The argument can be refined and is then known as the central paradox: Each microstate can be assigned a time-reversed state that evolves, under the same Hamiltonian, backwards along the same trajectory. The two states should have the same probability. The central paradox confuses equilibrium and non-equilibrium dynamics. At equilibrium, a state and the corresponding time-reversed state indeed have the same probability, which explains why the macrostate of the system does not change and why processes that can be approximated by a series of equilibrium states are reversible. If, on the other hand, we are not at equilibrium, there is no reason for assuming that the probabilities of any two microstates are related. The system is at some initial condition with a given set of probabilities, and we are not allowed to impose symmetry requirements on this initial condition.
The original Umkehreinwand, which is based on sign inversion of the Hamiltonian rather than the momenta of microstates, is more serious than the central paradox. Time-reversal experiments of this type can be performed, for instance, echo experiments in magnetic resonance spectroscopy and optical spectroscopy. In some of these echo experiments the Hamiltonian is indeed sign-inverted; in most of them, application of a perturbation Hamiltonian for a short time (pulse experiment) causes sign inversion of the density matrix. Indeed, the first paper on the observation of such a spin echo by Erwin Hahn was initially rejected with the argument that he could not have observed what he claimed, as this would have violated the Second Law of Thermodynamics. A macroscopic ’time-reversal’ experiment that creates a ’colorant echo’ in corn syrup can be based on laminar flow. We note here that all these time-reversal experiments are based on preparing a system in a non-equilibrium state. To analyze them, changes in entropy or Helmholtz free energy must be considered during the evolution that can be reversed. These experiments do not touch the question whether or not the same system will irreversibly approach an equilibrium state if left to itself for a sufficiently long time. We can see this easily for the experiment with colorants and corn syrup. If, after setup of the initial state and evolution to the point of time reversal, a long time were to pass, the colorant echo would no longer be observed, because diffusion of the colorants in corn syrup would destroy spatial correlation. The echo relies on the fact that diffusion of the colorants in corn syrup can be neglected on the time scale of the experiment, i.e., that equilibrium cannot be reached. The same is true for the spin echo experiment, which fails if the evolution time is much longer than the transverse relaxation time of the spins.
Another argument against irreversibility was raised by Zermelo, based on a theorem by Poincaré. The theorem states that any isolated classical system will return repeatedly to a point in phase space that is arbitrarily close to the starting point. This argument is known as Wiederkehreinwand or Zermelo paradox. We note that such quasi-periodicity is compatible with the probability density formalism of statistical mechanics. The probability density distribution is very sharply peaked at the equilibrium state, but it is not zero at the starting point in phase space. The system fluctuates around the equilibrium state and, because the distribution is sharply peaked, these fluctuations are very small most of the time. Once in a while the fluctuation is sufficiently large to revisit even a very improbable starting point in phase space, but for a macroscopic system this recurrence time is much longer than the lifetime of our galaxy. For practical purposes such large fluctuations can be safely neglected, because they occur so rarely. That a system will never evolve far from the equilibrium state once it has attained equilibrium is an approximation, but the approximation is better than many other approximations that we use in physics. The statistical error that we make is certainly much smaller than our measurement errors.
Irreversibility as an Approximation
If the whole of phase space is accessible the system will always tend to evolve from a less probable macrostate to a more probable macrostate, until it has reached the most probable macrostate, which is the equilibrium state. Equilibrium is dynamic. The microstate of each individual system evolves in time. However, for most microstates the values of all state variables are the same as for equilibrium within experimental uncertainty. In fact, the fraction of such microstates does not significantly differ from unity. Hence, a system that has attained equilibrium once will be found at equilibrium henceforth, as long as none of the external parameters is changed on which the probability density distribution in phase space depends. In that sense, processes that run from a non-equilibrium state to an equilibrium state are irreversible.
We should note at this point that all our considerations in this lecture course assume systems under thermodynamic control. If microstate dynamics in phase space is slow compared to the time scale of the experiment or simulation, the equilibrium state may not be reached. This may also happen if dynamics is fast in the part of phase space where the initial state resides but exchange dynamics is too slow between this part of phase space and the part of phase space where maximum probability density is located.
Gibbs Entropy
For a system with a countable number of microstates an ensemble entropy can be defined by a weighted sum over entropies of all microstates that are in turn expressed as $-k_\mathrm{B} \ln P_i$, which is analogous to Boltzmann’s entropy definition for a macrostate.
$S = -k_\mathrm{B} \sum_i P_i \ln P_i \ .$
This is the definition of Gibbs entropy, while Boltzmann entropy is assigned to an individual microstate. Note that we have used a capital $S$ because Gibbs entropy is a molecular entropy. Using Equation \ref{eq:Boltzmann_distribution}, we obtain for the system entropy $s = N S$,
\begin{align} s & = -k_\mathrm{B} N \sum_i P_i \left( -\frac{\epsilon_i}{k_\mathrm{B} T} - \ln Z \right) \ & = \frac{u}{T} + k_\mathrm{B} \ln z \ , \label{eq:Gibbs_system_entropy}\end{align}
where we have assumed distinguishable particles, so that $\ln z = N \ln Z$. We have recovered Equation \ref{eq:s_from_z} that we had derived for the system entropy starting from Boltzmann entropy and assuming a canonical ensemble. For a canonical ensemble of distinguishable particles, either concept can be used. As noted above, Gibbs entropy leads to the paradox of a positive mixing entropy for combination of two subsystems made up by the same ideal gas. More generally, Gibbs entropy is not extensive if the particles are indistinguishable. The problem can be solved by redefining the system partition function as in Equation \ref{eq:z_indist}.
This problem suggests that entropy is related to the information we have on the system. Consider mixing of $^{13}\mathrm{CO}_2$ with $^{12}\mathrm{CO}_2$. At a time when nuclear isotopes were unknown, the two gases could not be distinguished and the mixing entropy was zero. With a sufficiently sensitive spectrometer we could nowadays observe the mixing process by $^{13}\mathrm{C}$ NMR. We will observe spontaneous mixing. Quite obviously, the mixing entropy is not zero anymore.
This paradox cautions against philosophical interpretation of entropy. Entropy is a quantity that can be used for predicting the outcome of physical experiments. It presumes an observer and depends on the information that the observer has or can obtain. Statistical mechanics provides general recipes for defining entropy, but the details of a proper definition depend on experimental context.
Unlike the system entropy derived from Boltzmann entropy via the canonical ensemble, Gibbs entropy is, in principle, defined for non-equilibrium states. Because it is based on the same probability concept, Gibbs entropy in an isolated system is smaller for non-equilibrium states than for equilibrium states.
Von Neumann Entropy
The concept of Gibbs entropy for a countable set of discrete states and their probabilities is easily extended to continuous phase space and probability densities. This leads to the von Neumann entropy,
$S = -k_\mathrm{B} \mathrm{Trace}\left\{ \rho \ln \rho \right\} \ , \label{eq:von_Neumann_entropy}$
where $\rho$ is the density matrix. Some physics textbooks do not distinguish von Neumann entropy from Gibbs entropy. Von Neumann entropy is a constant of motion if an ensemble of classical systems evolves according to the Liouville equation or a quantum-mechanical system evolves according to the Liouville-von-Neumann equation. It cannot describe the approach of an isolated system to equilibrium. Coupling of the quantum-mechanical system to an environment can be described by the stochastic Liouville equation
$\frac{\partial \widehat{\rho}}{\partial t} = -\frac{i}{\hbar} \left[ \mathcal{\widehat{H}}, \widehat{\rho} \right] + \widehat{\widehat{\Gamma}} \left( \widehat{\rho} - \widehat{\rho}_\mathrm{eq} \right) \ ,$
where $\widehat{\widehat{\Gamma}}$ is a Markovian operator and $\rho_\mathrm{eq}$ the density matrix at equilibrium. This equation of motion can describe quantum dissipative systems, i.e., the approach to equilibrium, without relying explicitly on the concept of entropy, except for the computation of $\rho_\mathrm{eq}$, which relies on generalization of the Boltzmann distribution (see Section [subsection:q_partition]). However, to derive the Markovian operator $\widehat{\widehat{\Gamma}}$, explicit assumptions on the coupling between the quantum mechanical system and its environment must be made, which is beyond the scope of this lecture course.
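To see qualitatively how such an equation of motion drives a system toward $\widehat{\rho}_\mathrm{eq}$, consider the following minimal sketch. It replaces the Markovian operator by a single phenomenological rate constant $\gamma$ acting on the deviation from equilibrium, a drastic simplification chosen purely for illustration; the Hamiltonian, equilibrium populations, and rate are assumed values, not taken from the text.

```python
import numpy as np

hbar = 1.0                                     # natural units
H = np.diag([0.5, -0.5])                       # assumed two-level Hamiltonian
rho_eq = np.diag([0.3, 0.7]).astype(complex)   # assumed equilibrium populations
rho = np.diag([1.0, 0.0]).astype(complex)      # non-equilibrium initial state
gamma, dt = 0.1, 0.01                          # assumed relaxation rate, time step

for _ in range(10000):                         # explicit Euler integration
    comm = H @ rho - rho @ H                   # commutator [H, rho]
    rho = rho + dt * (-1j / hbar * comm - gamma * (rho - rho_eq))

print(np.round(rho.real, 3))                   # populations have relaxed to rho_eq
```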
Shannon Entropy
The concept of entropy has also been introduced into information theory. For any discrete random number that can take values $a_j$ with probabilities $P(a_j)$, the Shannon entropy is defined as
$H_\mathrm{Shannon}\left( a \right) = -\sum_j P(a_j) \log_2 P(a_j) \ .$
A logarithm to the basis of 2 is used here, as the information is assumed to be coded in binary numbers. Unlike for discrete states in statistical mechanics, an event may be in the set but still have a probability $P(a_j) = 0$. In such cases, $P(a_j) \log_2 P(a_j)$ is set to zero. Shannon entropy is larger the ’more random’ the distribution is, or, more precisely, the closer the distribution is to a uniform distribution. Information is considered as deviation from a random stream of numbers or characters. The higher the information content is, the lower the entropy.
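A minimal sketch of this definition, including the $0 \log_2 0 = 0$ convention, shows that the uniform distribution maximizes Shannon entropy:

```python
import numpy as np

def shannon_entropy(p):
    # H = -sum p log2 p, with the convention 0*log2(0) = 0
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: 2 bits, maximal
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # peaked: ~1.36 bits
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # deterministic: 0 bits
```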
Shannon entropy can be related to the reduced Gibbs entropy $\sigma = S/k_\mathrm{B}$. It is the amount of Shannon information that is required to specify the microstate of the system if the macrostate is known. When expressed with the binary logarithm, this amount of Shannon information specifies the number of yes/no questions that would have to be answered to specify the microstate. We note that this is exactly the type of experiment presumed in the second Penrose postulate (Section [Penrose_postulates]). The more microstates are consistent with the observed macrostate, the larger is this number of questions and the larger are Shannon and Gibbs entropy. The concept applies to non-equilibrium states as well as to equilibrium states. It follows what was stated before Shannon by G. N. Lewis: "Gain in entropy always means loss of information, and nothing more". The equilibrium state is the macrostate that lacks most information on the underlying microstate.
We can further associate order with information, as any ordered arrangement of objects contains information on how they are ordered. In that sense, loss of order is loss of information and increase of disorder is an increase in entropy. The link arises via probability, as the total number of arrangements is much larger than the number of arrangements that conform to a certain order principle. Nevertheless, the association of entropy with disorder is only colloquial, because in most cases we do not have quantitative descriptions of order.
Density Matrix
We have occasionally referred to the quantum-mechanical density matrix $\rho$ in previous sections. Before we discuss quantum ensembles, we need to fully specify this concept.
The microstates that can be assumed by a system in a quantum ensemble are specified by a possible set of wavefunctions $\psi_i \ (i = 1\ldots r)$. The probability or population of the $i^\mathrm{th}$ microstate is denoted as $P_i$, and for the continuous case the probability density for a given wavefunction is denoted as $p(\psi)$. The density operator is then given by
\begin{align} & \widehat{\rho} = \sum_{i=0}^{r-1} P_i \left|\psi_i\right\rangle \left\langle \psi_i \right| \ \mathrm{(discrete)} \ & \widehat{\rho} = \int_\psi p(\psi) \left|\psi\right\rangle \left\langle \psi \right| \ \mathrm{(continuous)} \ . \label{eq:qm_density_operator}\end{align}
Note that the discrete case is closely related to the problem with $r$ energy levels that we discussed in deriving the Boltzmann distribution for a classical canonical ensemble. The density operator can be expressed as a density matrix $\rho$ with respect to a set of basis functions $\left| k \right\rangle$. For exact computations the basis functions must form a countable complete set that allows for expressing the system wavefunctions $\psi_i$ as linear combinations of basis functions. For approximate computations, it suffices that this linear combination is a good approximation. The matrix elements of the density matrix are then given by
\begin{align} & \rho_{nm} = \sum_{i=0}^{r-1} P_i \left\langle m \left|\psi_i\right\rangle \left\langle \psi_i \right| n \right\rangle \ \mathrm{(discrete)} \ & \rho_{nm} = \int_\psi p(\psi) \left\langle m\left|\psi\right\rangle \left\langle \psi \right| n \right\rangle \ \mathrm{(continuous)} \ .\end{align}
With the complex coefficients $c_k$ in the linear combination representation $|\psi\rangle = \sum_k c_k |k\rangle$, the matrix elements are
$\rho_{nm} = \overline{c_n c_m^\ast} \ ,$
where the asterisk denotes the complex conjugate and the bar for once denotes the ensemble average. It follows that diagonal elements ($m=n$) are necessarily real, $\rho_{nn} = |c_n|^2$, and that $\rho_{mn}$ is the complex conjugate of $\rho_{nm}$. Therefore, the density matrix is Hermitian and the density operator is self-adjoint. The matrix dimension is the number of basis functions. It is often convenient to use the eigenfunctions of the system Hamiltonian $\widehat{\mathcal{H}}$ as the basis functions, but the concept of the density matrix is not limited to this choice. The meaning of the elements of the density matrix is visualized in Figure $1$.
That the density matrix can be expressed in the basis of eigenstates does not imply that the ensemble can be represented as consisting of only eigenstates, as erroneously stated by Swendsen. Off-diagonal elements of the density matrix denote coherent superpositions of eigenstates, or coherences for short. This is not apparent in Swendsen’s simple example, where coherence is averaged to zero by construction. The ensemble can be represented as consisting of only eigenstates if coherence is absent. In that case the density matrix is diagonal in the eigenbasis. Diagonal elements of the density matrix denote populations of basis states.
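The distinction between a coherent superposition and a phase-averaged subensemble can be made concrete with a two-level example. The sketch below is illustrative: a single superposition state yields off-diagonal elements, while averaging the same amplitudes over a uniform phase distribution leaves only populations.

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]], dtype=complex)   # basis state |0>
ket1 = np.array([[0.0], [1.0]], dtype=complex)   # basis state |1>

# A single coherent superposition: a pure state with coherence
psi = (ket0 + ket1) / np.sqrt(2)
print((psi @ psi.conj().T).real)   # off-diagonal elements 0.5

# Subensemble with the same amplitudes but uniformly distributed phase
n = 360
rho = np.zeros((2, 2), dtype=complex)
for phi in np.linspace(0, 2 * np.pi, n, endpoint=False):
    psi = (ket0 + np.exp(1j * phi) * ket1) / np.sqrt(2)
    rho += (psi @ psi.conj().T) / n
print(np.round(rho.real, 12))      # populations 0.5, coherences averaged to zero
```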
In quantum mechanics, it is well defined what information we can have about the macrostate of a system, because quantum measurements are probabilistic even for a microstate. We can observe only quantities that are quantum-mechanical observables and these observables are represented by operators $\widehat{A}$. It can be shown that the expectation value $\left\langle \widehat{A} \right\rangle$ of any observable can be computed from the density matrix by
$\left\langle \widehat{A} \right\rangle = \mathrm{Trace}\left\{ \widehat{\rho} \widehat{A} \right\} \ ,$
where we have used operator notation for $\widehat{\rho}$ to point out that $\widehat{\rho}$ and $\widehat{A}$ must be expressed in the same basis.
Since the expectation values of all observables are the full information that we can have on a quantum system, the density matrix specifies the full information that we can have on the ensemble. However, the density matrix does not fully specify the ensemble itself, i.e., we cannot infer the probabilities $P_i$ or the probability density function $p(\psi)$ from the density matrix (Swendsen gives a simple example). This is another example for the information loss on microstates that comes about when we can only observe macrostates and that is conceptually equivalent to entropy. The von Neumann entropy can be computed from the density matrix by Equation \ref{eq:von_Neumann_entropy}.
We note that there is one important distinction between classical and quantum-mechanical observations for an individual system. In the quantum case we can specify only an expectation value, and the second and third Penrose postulates (Section [Penrose_postulates]) do not apply: neither can we simultaneously measure all observables (they may be incompatible), nor is the outcome of a later measurement independent of the current measurement. However, quantum uncertainty is much smaller than measurement errors for the large ensembles that we treat by statistical thermodynamics. Hence, the Penrose postulates apply to the quantum-mechanical ensembles that represent macrostates, although they do not apply to the microstates.
If all systems in a quantum ensemble populate the same microstate, i.e., they correspond to the same wavefunction, the ensemble is said to be in a pure state. A pure state corresponds to minimum rather than maximum entropy. Otherwise the system is said to be in a mixed state.
Quantum Partition Function
Energy quantization leads to a difficulty in using the microcanonical ensemble. The difficulty arises because the microcanonical ensemble requires constant energy, which restricts our ability to assign probabilities in a set of discrete energy levels. However, as we derived the Boltzmann distribution, partition function, entropy, and all other state functions for classical systems from the canonical ensemble anyway, we can simply ignore this problem. The canonical ensemble is considered to be at thermal equilibrium with a heat bath (environment) of infinite size. It does not matter whether this heat bath is of classical or quantum-mechanical nature. For an infinitely sized quantum system, the energy spectrum is continuous, which allows us to exchange energy between the bath and any constituent system of the quantum canonical ensemble at will.
We can derive Boltzmann distribution and partition function for the density matrix by analogy to the classical case. For that we consider the density matrix in the eigenbasis. The energies of the eigenstates are the eigenvalues $\epsilon_i$ of the Hamiltonian $\mathcal{H}$. All arguments and mathematical steps from Section [subsection:Boltzmann] still apply, with a single exception: Quantum mechanics allows for microstates that are coherent superpositions of eigenstates. The classical derivation carries over if and only if we can be sure that the equilibrium density matrix can be expressed without contributions from such microstates, which would lead to off-diagonal elements in the representation in the eigenbasis of $\mathcal{\widehat{H}}$. This argument can indeed be made. Any superposition of two eigenstates $|n\rangle$ and $|m\rangle$ with amplitudes $|c_n|$ and $|c_m|$ can be realized with arbitrary phase difference $\Delta \phi$ between the two eigenfunctions. The microstates with the same $|c_n|$ and $|c_m|$ but different $\Delta \phi$ all have the same energy. The entropy of a subensemble that populates these microstates is maximal if the distribution of phase differences $\Delta \phi$ is uniform in the interval $[0,2\pi)$. In that case $\overline{c_m^\ast c_n}$ vanishes, i.e., such subensembles will not contribute off-diagonal elements to the equilibrium density matrix.
We can now arrange the $e^{-\epsilon_i/k_\mathrm{B} T}$ in matrix form,
$\xi = e^{-\mathcal{\widehat{H}}/k_\mathrm{B} T} \ ,$
with the matrix elements $\xi_{ii} = e^{-\epsilon_i/k_\mathrm{B} T}$ and $\xi_{ij} = 0$ for $i \neq j$. The partition function is the sum of all the diagonal elements of this matrix, i.e. the trace of $\xi$. Hence,
$\widehat{\rho}_\mathrm{eq} = \frac{e^{-\mathcal{\widehat{H}}/k_\mathrm{B} T}}{\mathrm{Trace}\left\{ e^{-\mathcal{\widehat{H}}/k_\mathrm{B} T} \right\}} \ , \label{eq:rho_eq}$
where we have used operator notation. This implies that Equation \ref{eq:rho_eq} can be evaluated in any basis, not only the eigenbasis of $\widehat{\mathcal{H}}$. In a different basis, $e^{-\mathcal{\widehat{H}}/k_\mathrm{B} T}$ needs to be computed as a matrix exponential and, in general, the density matrix $\rho_\mathrm{eq}$ will have non-zero off-diagonal elements in such a different basis.
The quantum-mechanical partition function,
$Z = \mathrm{Trace}\left\{ e^{-\mathcal{\widehat{H}}/k_\mathrm{B} T} \right\} \ , \label{eq:qm_z}$
is independent of the choice of basis, as the trace of a matrix is invariant under unitary transformations. Note that we have used a capital $Z$ for a molecular partition function. This is appropriate, as the trace of $\widehat{\rho}_\mathrm{eq}$ in Equation \ref{eq:rho_eq} is unity. In the eigenbasis, the diagonal elements of $\rho_\mathrm{eq}$ are the populations of the eigenstates at thermal equilibrium. There is no coherence for a sufficiently large quantum ensemble at thermal equilibrium.
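Equation \ref{eq:rho_eq} and the basis independence of $Z$ are easy to verify numerically. The sketch below uses an assumed Hermitian two-level Hamiltonian with off-diagonal elements, evaluates the matrix exponential directly, and compares the trace with the sum over Boltzmann factors in the eigenbasis.

```python
import numpy as np
from scipy.linalg import expm

kB = 1.380649e-23                  # J/K
T = 300.0                          # K

# Assumed example Hamiltonian (J), Hermitian, not diagonal in this basis
H = 1.0e-21 * np.array([[1.0, 0.2],
                        [0.2, -1.0]])

M = expm(-H / (kB * T))            # matrix exponential e^(-H/kB T)
Z = np.trace(M)                    # partition function
rho_eq = M / Z
print(np.trace(rho_eq))            # 1.0: populations sum to unity

w, U = np.linalg.eigh(H)           # eigenvalues of H
Z_eig = np.sum(np.exp(-w / (kB * T)))
print(Z, Z_eig)                    # identical: Z is basis independent
```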
We note that the density matrix at thermal equilibrium can be derived in a more strict manner by explicitly considering a system that includes both the canonical ensemble and the heat bath and by either tracing out the degrees of freedom of the heat bath or relying on a series expansion that reduces to only two terms in the limit of an infinite heat bath .
When approaching zero absolute temperature, the matrix element of $\rho$ in the eigenbasis that corresponds to the lowest energy $\epsilon_i$ becomes much larger than all the others. At $T=0$, the corresponding ground state is exclusively populated and the ensemble is in a pure state if there is just one state with this energy. For $T \rightarrow \infty$ on the other hand, differences between the diagonal matrix elements vanish and all states are equally populated. The ensemble is in a maximally mixed state.
Types of Permutation Symmetry
Classical particles are either distinguishable or non-distinguishable, a difference that influences the relation between the system partition function and the molecular partition function (Section [s_from_z]). Quantum particles are special. They are always indistinguishable, but there exist two types that behave differently when two particles are permuted. For bosons, the wavefunction is unchanged on such permutation, whereas for fermions the wavefunction changes sign. This sign change does not make the particles distinguishable, as absolute phase of the wavefunction does not correspond to an observable. However, it has important consequences for the population of microstates. Two (or more) bosons can occupy the same energy level. In the limit $T \rightarrow 0$ they will all occupy the ground state and form a Bose-Einstein condensate. Bosons are particles with integer spin, with the composite boson $^{4}\mathrm{He}$ (two protons, two neutrons, two electrons) probably being the most famous example. In contrast, two fermions (particles with half-integer spin) cannot occupy the same state, a fact that is known as Pauli exclusion principle. Protons, neutrons, and electrons are fermions (spin 1/2), whereas photons are bosons (spin 1).
This difference in permutation symmetry influences the distribution of particles over energy levels. The simplest example is the distribution of two particles to two energy levels $\epsilon_\mathrm{l}$ (for ’left’) and $\epsilon_\mathrm{r}$ (for ’right’). For distinguishable classical particles four possible configurations exist:
1. $\epsilon_\mathrm{l}$ is doubly occupied
2. $\epsilon_\mathrm{l}$ is occupied by particle A and $\epsilon_\mathrm{r}$ is occupied by particle B
3. $\epsilon_\mathrm{l}$ is occupied by particle B and $\epsilon_\mathrm{r}$ is occupied by particle A
4. $\epsilon_\mathrm{r}$ is doubly occupied.
For bosons and for indistinguishable classical particles as well, the second and third configuration above cannot be distinguished. Only three configurations exist:
1. $\epsilon_\mathrm{l}$ is doubly occupied
2. $\epsilon_\mathrm{l}$ is occupied by one particle and $\epsilon_\mathrm{r}$ is occupied by one particle
3. $\epsilon_\mathrm{r}$ is doubly occupied.
For fermions, the first and third configuration of the boson case are excluded by the Pauli principle. Only one configuration is left:
1. $\epsilon_\mathrm{l}$ is occupied by one particle and $\epsilon_\mathrm{r}$ is occupied by one particle.
Since the number of configurations enters into all probability considerations, we shall find different probability distributions for systems composed of bosons, fermions, or distinguishable classical particles. The situation is most transparent for an ideal gas, i.e., $N$ non-interacting point particles that have only translational degrees of freedom. For such a system the spectrum of energy levels is continuous.
Bose-Einstein Statistics
We want to derive the probability distribution for the occupation of energy levels by bosons. To that end, we first pose the question of how many configurations exist for distributing $N_i$ particles to $A_i$ energy levels in the interval between $\epsilon_i$ and $\epsilon_i + \mathrm{d}\epsilon$. Each level can be occupied by an arbitrary number of particles. We picture the problem as a common set of particles $P_k \ (k = 1 \ldots N_i)$ and levels $L_k \ (k = 1 \ldots A_i)$ that has $N_i+A_i$ elements. Now we consider all permutations in this set and use the convention that particles that stand left from a level are assigned to this level. For instance, the permutation $\{P_1,P_2,L_1,P_3,L_2,L_3\}$ for three particles and three levels denotes a state where level $L_1$ is occupied by particles $P_1$ and $P_2$, level $L_2$ is occupied by particle $P_3$, and level $L_3$ is empty. With this convention the last energy level is necessarily the last element of the set (any particle standing right from it would not have an associated level), hence only $(N_i+A_i-1)!$ such permutations exist. Each permutation also encodes a sequence of particles, but the particles are indistinguishable. Thus we have to divide by $N_i!$ in order not to double count configurations that we cannot distinguish. It also does not matter in which sequence we order the levels with their associated subsets of particles. Without losing generality, we can thus consider only the sequence with increasing level energy, so that the level standing right (not included in the number of permutations $(N_i+A_i-1)!$) is the level with the highest energy. For the remaining $A_i-1$ lower levels we have counted $(A_i-1)!$ permutations, but should have counted only the properly ordered one. Hence, we also have to divide by $(A_i-1)!$. Therefore, the number of configurations and thus the number of microstates in the interval between $\epsilon_i$ and $\epsilon_i + \mathrm{d}\epsilon$ is
$C_i = \frac{\left( N_i + A_i - 1 \right)!}{N_i!\left(A_i-1\right)!} \ .$
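This closed formula can be checked against brute-force enumeration for small particle and level numbers; the sketch below (illustrative, not part of the derivation) counts multisets of levels directly.

```python
from itertools import combinations_with_replacement
from math import comb

# Ways to place N indistinguishable bosons on A levels, counted two ways
for N, A in [(2, 2), (3, 3), (4, 2), (5, 4)]:
    brute = sum(1 for _ in combinations_with_replacement(range(A), N))
    formula = comb(N + A - 1, N)   # (N+A-1)! / (N! (A-1)!)
    print(N, A, brute, formula)    # the two counts agree
```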
The configurations in energy intervals with different indices $i$ are independent of each other. Hence, the statistical weight of a macrostate is
$\Omega = \prod_i \frac{\left( N_i + A_i - 1 \right)!}{N_i!\left(A_i-1\right)!}$
As the number of energy levels is, in practice, infinite, we can choose the $A_i$ sufficiently large for neglecting the 1 in $A_i - 1$. In an exceedingly good approximation we can thus write
$\Omega = \prod_i \frac{\left( N_i + A_i\right)!}{N_i! A_i!} \ .$
The next part of the derivation is the same as for the Boltzmann distribution in Section [subsection:Boltzmann], i.e., it relies on maximization of $\ln \Omega$ using the Stirling formula and considering the constraints of conserved total particle number $N = \sum_i N_i$ and conserved total energy of the system. The initial result is of the form
$\frac{N_i}{A_i} = \frac{1}{B e^{-\beta \epsilon_i} - 1} \ ,$
where $B$ is related to the Lagrange multiplier $\alpha$ by $B = e^{-\alpha}$ and thus to the chemical potential by $B = e^{-\mu/(k_\mathrm{B} T)}$. After a rather tedious derivation using the definitions of Boltzmann entropy and $(\partial u/ \partial s)_V = T$ we can identify $\beta$ with $-1/k_\mathrm{B} T$. We refrain from reproducing this derivation here, as the argument is circular: It uses the identification of $k$ with $k_\mathrm{B}$ in the definition of Boltzmann entropy that we had made earlier on somewhat shaky grounds. We accept the identification of $|\beta|$ with $1/k_\mathrm{B} T$ as general for this type of derivation, so that we finally have
$\frac{N_i}{A_i} = \frac{1}{B e^{\epsilon_i/k_\mathrm{B} T} - 1} \ . \label{eq:Bose_Einstein_stat}$
Up to this point we have supposed nothing else than a continuous, or at least sufficiently dense, energy spectrum and identical bosons. To identify $B$ we must have information on this energy spectrum and thus specify a concrete physical problem. When using the density of states for an ideal gas consisting of quantum particles with mass $m$ in a box with volume $V$ (see Section [section:gas_translation] for derivation),
$D(\epsilon) = 4 \sqrt{2} \pi \frac{V}{h^3} m^{3/2} \epsilon^{1/2} \ , \label{eq:density_of_states_ideal_quantum_gas}$
we find, for the special case $B e^{\epsilon_i/k_\mathrm{B} T} \gg 1$,
$B = \frac{\left( 2 \pi m k_\mathrm{B} T \right)^{3/2}}{h^3} \cdot \frac{V}{N} \ . \label{eq:B_quantum_gas}$
Fermi-Dirac Statistics
The number $N_i$ of fermions in an energy interval with $A_i$ levels cannot exceed $A_i$. The number of allowed configurations is now given by the number of possibilities to select $N_i$ out of $A_i$ levels that are populated, while the remaining levels stay empty. As each level can exist in only one of two conditions, populated or empty, this is a binomial distribution problem as we have solved in Section [binomial_distribution]. In Equation \ref{eq:N_over_n} we need to substitute $N$ by $A_i$ and $n$ by $N_i$. Hence, the number of allowed configurations in the energy interval between $\epsilon_i$ and $\epsilon_i + \Delta \epsilon_i$ is given by
$C_i = \frac{A_i!}{N_i! \left(A_i - N_i \right)!}$
and, considering mutual independence of the configurations in the individual energy intervals, the statistical weight of a macrostate for fermions is
$\Omega = \prod_i \frac{A_i!}{N_i! \left(A_i - N_i \right)!} \ .$
Again, the next step of the derivation is analogous to the derivation of the Boltzmann distribution in Section [subsection:Boltzmann]. We find
$\frac{N_i}{A_i} = \frac{1}{B e^{\epsilon_i/k_\mathrm{B}T} + 1} \ . \label{eq:Fermi_Dirac_stat}$
For the special case $B e^{\epsilon_i/k_\mathrm{B} T} \gg 1$, $B$ is again given by Equation \ref{eq:B_quantum_gas}. Comparison of Equation \ref{eq:Fermi_Dirac_stat} with Equation \ref{eq:Bose_Einstein_stat} reveals that the only difference is the sign of the additional number 1 in the denominator on the right-hand side of the equations. In the regime $B e^{\epsilon_i/k_\mathrm{B} T} \gg 1$, for which we have specified $B$, this difference is negligible.
It is therefore of interest when this regime applies. As $\epsilon_i \ge 0$ in the ideal gas problem, we have $e^{\epsilon_i/k_\mathrm{B} T} \ge 1$, so that $B \gg 1$ is sufficient for the regime to apply. Wedler and Freund have computed values of $B$ according to Equation \ref{eq:B_quantum_gas} for the lightest ideal gas, H$_2$, and have found $B \gg 1$ for $p = 1$ bar down to $T = 20$ K and at ambient temperature for pressures up to $p = 100$ bar. For heavier molecules, $B$ is larger under otherwise identical conditions. Whether a gas atom or molecule is a composite boson or fermion thus does not matter, except at very low temperatures and very high pressures. However, if conduction electrons in a metal, for instance in sodium, are considered as a gas, their much lower mass and higher number density $N/V$ leads to $B \ll 1$ at ambient temperature and even at temperatures as high as 1000 K. Therefore, a gas model for conduction electrons (spin 1/2) must be set up with Fermi-Dirac statistics.
Maxwell-Boltzmann Statistics
In principle, atoms and molecules are quantum objects and not classical particles. This would suggest that the kinetic theory of gases developed by Maxwell before the advent of quantum mechanics is deficient. However, we have already seen that for particles as heavy as atoms and molecules and number densities as low as in gases at atmospheric pressure or a bit higher, the difference between Bose-Einstein and Fermi-Dirac statistics vanishes, unless temperature is very low. This suggests that, perhaps, classical Maxwell-Boltzmann statistics is indeed adequate for describing gases under common experimental conditions.
We assume distinguishable particles. Each of the $N_i$ particles can be freely assigned to one of the $A_i$ energy levels. All these configurations can be distinguished from each other, as we can picture each of the particles to have an individual tag. Therefore,
$C_i = (A_i)^{N_i}$
configurations can be distinguished in the energy interval between $\epsilon_i$ and $\epsilon_i + \Delta \epsilon_i$. Because the particles are distinguishable (’tagged’), the configurations in the individual intervals are generally not independent from each other, i.e., the total number of microstates does not factorize into the individual numbers of microstates in the intervals. We obtain more configurations than that because we have the additional choice of distributing the $N$ ’tagged’ particles to $r$ intervals. We have already solved this problem in Section [subsection:Boltzmann]; the solution is Equation \ref{eq:N_onto_r}. By considering this additional number of choices, which enters multiplicatively, we find for the statistical weight of a macrostate
\begin{align} \Omega & = \frac{N!}{N_0! N_1! \ldots N_{r-1}!}\cdot A_0^{N_0} \cdot A_1^{N_1} \cdot \ldots A_{r-1}^{N_{r-1}} \ & = N! \prod_i \frac{A_i^{N_i}}{N_i!} \ .\end{align}
It appears that we have assumed a countable number $r$ of intervals, but as in the derivations for the Bose-Einstein and Fermi-Dirac statistics, nothing prevents us from making the intervals arbitrarily narrow and their number arbitrarily large.
Again, the next step in the derivation is analogous to the derivation of the Boltzmann distribution in Section [subsection:Boltzmann]. All the different statistics differ only in the expressions for $\Omega$; constrained maximization of $\ln \Omega$ uses the same Lagrange ansatz. We end up with
$\frac{N_i}{A_i} = \frac{1}{B e^{\epsilon_i/ k_\mathrm{B} T}} . \label{eq:Maxwell_Boltzmann_stat}$
Comparison of Equation \ref{eq:Maxwell_Boltzmann_stat} with Equations \ref{eq:Bose_Einstein_stat} and \ref{eq:Fermi_Dirac_stat} reveals that, again, only the 1 in the denominator on the right-hand side makes the difference; now it is missing. In the regime where Bose-Einstein and Fermi-Dirac statistics coincide to a good approximation, both of them also coincide with Maxwell-Boltzmann statistics.
There exist two caveats. First, we already know that the assumption of distinguishable particles leads to an artificial mixing entropy for two subsystems consisting of the same ideal gas or, in other words, to entropy not being extensive. This problem does not, however, influence the probability distribution; it only influences scaling of entropy with system size. We can solve it by an ad hoc correction when computing the system partition function from the molecular partition function. Second, to be consistent we should not use the previous expression for $B$, because it was derived under explicit consideration of quantization of momentum. However, for Maxwell-Boltzmann statistics $B$ can be eliminated easily. With $\sum_i N_i = N$ we have from Equation \ref{eq:Maxwell_Boltzmann_stat}
$N = \frac{1}{B} \sum_i A_i e^{-\epsilon_i/k_\mathrm{B} T} \ ,$
which gives
$\frac{1}{B} = \frac{N}{\sum_i A_i e^{-\epsilon_i/k_\mathrm{B} T}} \ .$
With this, we can express the distribution function as
$P_i = \frac{N_i}{N} = \frac{A_i e^{-\epsilon_i/k_\mathrm{B} T}}{\sum_i A_i e^{-\epsilon_i/k_\mathrm{B} T}} \ . \label{eq:Maxwell_Boltzmann}$
Comparison of Equation \ref{eq:Maxwell_Boltzmann} with the Boltzmann distribution given by Equation \ref{eq:Boltzmann_distribution} reveals the factors $A_i$ as the only difference. Thus, the probability distribution for Maxwell-Boltzmann statistics deviates from the most common form by the degree of degeneracy $A_i$ of the individual levels. This degeneracy entered the derivation because we assumed that within the intervals between $\epsilon_i$ and $\epsilon_i + \Delta \epsilon_i$ several levels exist. If $\Delta \epsilon_i$ is finite, we speak of near degeneracy. For quantum systems, degeneracy of energy levels is a quite common phenomenon even in small systems where the energy spectrum is discrete. In order to describe such systems, the influence of degeneracy on the probability distribution must be taken into account.
Concept $1$: Degeneracy
In quantum systems with discrete energy levels there may exist $g_i$ quantum states with the same energy $\epsilon_i$ that do not coincide in all their quantum numbers. This phenomenon is called degeneracy and $g_i$ the degree of degeneracy. A set of $g_i$ degenerate levels can be populated by up to $g_i$ fermions. In the regime, where Boltzmann statistics is applicable to the quantum system, the probability distribution considering such degeneracy is given by
\begin{align} & P_i = \frac{N_i}{N} = \frac{g_i e^{-\epsilon_i/k_\mathrm{B}T}}{\sum_i g_i e^{-\epsilon_i/k_\mathrm{B}T}} \label{eq:Boltzmann_with_degeneracy}\end{align}
and the molecular partition function by
\begin{align} & Z = \sum_i g_i e^{-\epsilon_i/k_\mathrm{B}T} \ .\end{align}
The condition that degenerate levels do not coincide in all quantum numbers makes sure that the Pauli exclusion principle does not prevent their simultaneous population with fermions.
At this point we can summarize the expected number of particles with chemical potential $\mu$ at level $i$ with energy $\epsilon_i$ and arbitrary degeneracy $g_i$ for Bose-Einstein, Fermi-Dirac, and Boltzmann statistics:
\begin{align} N_i & = \frac{g_i}{e^{(\epsilon_i - \mu)/(k_\mathrm{B} T)} - 1} \ & \mathrm{Bose-Einstein} \ \mathrm{statistics} \ N_i & = \frac{g_i}{e^{(\epsilon_i - \mu)/(k_\mathrm{B} T)} + 1} \ & \mathrm{Fermi-Dirac} \ \mathrm{statistics} \ N_i & = \frac{g_i}{e^{(\epsilon_i - \mu)/(k_\mathrm{B} T)}} \ & \mathrm{Boltzmann} \ \mathrm{statistics} \ .\end{align}
Note that the chemical potential $\mu$ in these equations is determined by the condition $N = \sum_i N_i$. The constant $B$ in the derivations above is given by $B = e^{-\mu/(k_\mathrm{B} T)}$. If $N$ is not constant, we have $\mu = 0$ and thus $B=1$.
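The convergence of the three statistics for $(\epsilon_i - \mu) \gg k_\mathrm{B} T$ is easy to see numerically. The following sketch evaluates the mean occupation per state ($g_i = 1$) for a few illustrative ratios of $(\epsilon_i - \mu)$ to $k_\mathrm{B} T$:

```python
import numpy as np

def occupation(x, kind):
    # Mean occupation per state; x = (eps - mu)/(kB*T)
    if kind == 'BE':
        return 1.0 / (np.exp(x) - 1.0)
    if kind == 'FD':
        return 1.0 / (np.exp(x) + 1.0)
    return np.exp(-x)              # Boltzmann

for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(x, [round(occupation(x, k), 5) for k in ('BE', 'FD', 'MB')])
# For x >> 1 all three occupations coincide
```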
Spin S = 1/2
The simplest quantum system is a two-level system and probably the best approximation to isolated two-level systems is found in magnetic resonance spectroscopy of dilute $S=1/2$ spin systems. The Hamiltonian for an electron spin $S=1/2$ in an external magnetic field along $z$ is given by
$\widehat{\mathcal{H}} = \gamma \hbar B_0 \widehat{S}_z \ ,$
where $\gamma = g \mu_\mathrm{B}/\hbar$ is the gyromagnetic ratio and $B_0$ is the magnetic field in units of Tesla. The two states are designated by the magnetic spin quantum number $m_S = \mp 1/2$ and have energies $\epsilon_\mp = \mp \gamma \hbar B_0/2$. The partition function is
$Z = e^{\gamma \hbar B_0/2 k_\mathrm{B} T} + e^{-\gamma \hbar B_0/2 k_\mathrm{B} T} \ ,$
and the expectation value of $\widehat{S}_z$, which is proportional to the longitudinal magnetization, is given by
\begin{align} \left\langle \widehat{S}_z \right\rangle & = \sum m_S P(m_S) \[4pt] & = \frac{(-1/2)\cdot e^{\gamma \hbar B_0/2 k_\mathrm{B} T} + (1/2) e^{-\gamma \hbar B_0/2 k_\mathrm{B} T}}{Z} \[4pt] & = -\frac{1}{2} \tanh \left(\gamma \hbar B_0/2 k_\mathrm{B} T \right) \ .\end{align}
Usually one has $\gamma \hbar B_0 \ll 2 k_\mathrm{B} T$, which is called the high-temperature approximation. The series expansion of the hyperbolic tangent,
$\tanh(x) = x - \frac{1}{3}x^3 + \frac{2}{15}x^5 + \ldots \ ,$
can then be restricted to the leading term, which gives
$\left\langle \widehat{S}_z \right\rangle = -\frac{\gamma \hbar B_0}{4 k_\mathrm{B} T} \ .$
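For a free electron at typical spectrometer fields the high-temperature approximation is excellent, as the following sketch illustrates; the field and temperature are assumed, merely typical values.

```python
import numpy as np

hbar = 1.054571817e-34        # J s
kB = 1.380649e-23             # J/K
muB = 9.2740100783e-24        # J/T
gamma = 2.0023 * muB / hbar   # free-electron gyromagnetic ratio

B0 = 0.35                     # T, typical X-band EPR field (assumed)
T = 300.0                     # K

x = gamma * hbar * B0 / (2 * kB * T)
Sz_exact = -0.5 * np.tanh(x)                  # full expression
Sz_highT = -gamma * hbar * B0 / (4 * kB * T)  # leading term
print(x, Sz_exact, Sz_highT)                  # x << 1, both values nearly identical
```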
Harmonic Oscillator
A diatomic molecule has one vibrational mode along the bond direction $x$. If we assign masses $m_\mathrm{A}$ and $m_\mathrm{B}$ to the two atoms and a force constant $f$ to the bond, we can write the Hamiltonian as
$\widehat{\mathcal{H}} = \frac{1}{2} f \widehat{x}^2 + \frac{\widehat{p}^2}{2\mu} \ , \label{eq:oscillator}$
where the reduced mass $\mu$ is
$\mu = \frac{m_\mathrm{A} m_\mathrm{B}}{m_\mathrm{A} + m_\mathrm{B}}$
and where the first term on the right-hand side of Equation \ref{eq:oscillator} corresponds to potential energy and the second term to kinetic energy.
Equation \ref{eq:oscillator} can be cast in the form
$\widehat{\mathcal{H}} = \frac{1}{2} \mu \omega^2 \left( R - R_\mathrm{E} \right)^2 - \frac{\hbar^2}{2\mu} \frac{\mathrm{d}^2}{\mathrm{d}R^2} \ , \label{eq:diatomic}$
where we have substituted $\widehat{x}$ by the deviation of the atom-atom distance $R$ from the bond length $R_\mathrm{E}$ and introduced the angular oscillation frequency $\omega$ of a classical oscillator with
$\omega = \sqrt{\frac{f}{\mu}} \ .$
Equation \ref{eq:diatomic} produces an infinite number of eigenstates with energies
$\epsilon_v = \hbar \omega \left( v + \frac{1}{2} \right) \ ,$
where $v = 0, 1, \ldots, \infty$ is the vibrational quantum number. All energies are positive, even the one of the ground state with $v = 0$. This residual zero-point vibration can be considered as a consequence of Heisenberg’s uncertainty principle, since for a non-oscillating diatomic molecule atom coordinates as well as momentum would be sharply defined, which would violate that principle. In the context of statistical thermodynamics the unfortunate consequence is that for an ensemble of $N$ diatomic molecules for $T \rightarrow 0$ the vibrational contribution to the internal energy $u$ approaches $u_0 = N \hbar \omega/2$ and thus the term $u/T$ in the entropy expression (Equation \ref{eq:s_from_z}) approaches infinity. We ignore this problem for the moment.
The partition function of the harmonic oscillator is an infinite series,
\begin{align} Z & = \sum_{v = 0}^{\infty} e^{-\hbar \omega (v+1/2)/k_\mathrm{B} T} \[4pt] & = e^{-\hbar \omega/2 k_\mathrm{B} T} \sum_{v = 0}^{\infty} e^{-\hbar \omega v/k_\mathrm{B} T} \label{eq:Z_vib_series} \[4pt] & = e^{-\hbar \omega/2 k_\mathrm{B} T} \sum_{v = 0}^{\infty} \left( e^{-\hbar \omega/k_\mathrm{B} T} \right)^v \[4pt] & = e^{-\hbar \omega/2 k_\mathrm{B} T} \sum_{n = 0}^{\infty} x^n \ .\end{align}
where we have substituted $x = e^{-\hbar \omega/k_\mathrm{B} T}$ and $n=v$ to obtain the last line. Since for finite temperatures $0 < e^{-\hbar \omega/k_\mathrm{B} T} < 1$, the infinite series $\sum_{n=0}^\infty x^n$ converges to $1/(1-x)$. Hence,
$Z = \frac{e^{-\hbar \omega/2 k_\mathrm{B} T}}{1-e^{-\hbar \omega/k_\mathrm{B} T}} \ . \label{eq:Z_harmonic_oscillator_unshifted}$
We can again discuss the behavior for $T \rightarrow 0$. In the denominator, the argument of the exponential function approaches $-\infty$, so that the denominator approaches unity. In the numerator the argument of the exponential function also approaches $-\infty$, so that the partition function approaches zero and Helmholtz free energy $f = -k_\mathrm{B} T \ln Z$ can only be computed as a limiting value. The term $k_\mathrm{B} \ln Z$ in the entropy Equation \ref{eq:s_from_z} approaches $-\infty$.
This problem can be healed by shifting the energy scale by $\Delta E = -\hbar \omega/2$. We then have
$Z = \frac{1}{1-e^{-\hbar \omega/k_\mathrm{B} T}} \ . \label{eq:Z_harmonic_oscillator_shifted}$
With this shift, the partition function and the population of the ground state $v = 0$ both approach 1 when the temperature approaches zero. For the term $u/T$ in the entropy expression we still need to consider a limiting value, but it can be shown that $\lim\limits_{T \rightarrow 0}u/T = 0$. Since $k_\mathrm{B} \ln Z = 0$ for $Z = 1$, entropy of an ensemble of harmonic oscillators vanishes at the zero point in agreement with Nernst’s theorem. Helmholtz free energy $f = -k_\mathrm{B}T \ln Z$ approaches zero.
For computing a Boltzmann distribution we can shift all energy levels by the same offset $\Delta E$ without influencing the $P_i$, as such a shift leads to a multiplication by the same factor of the numerator and of all terms contributing to the partition function. Such a shift can remove an infinity of the partition function.
This partition function can also be expressed with a characteristic vibrational temperature
$\Theta_\mathrm{vib} = \frac{\hbar \omega}{k_\mathrm{B}} \ . \label{eq:Theta_vib}$
This temperature is usually higher than room temperature. We have
$Z_\mathrm{vib} = \frac{1}{1-e^{-\Theta_\mathrm{vib}/T}} \ . \label{eq:Z_vib}$
Thus, $Z \approx 1$ at room temperature, which implies that only the vibrational ground state is significantly populated. Vibration does not significantly contribute to entropy at room temperature.
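The magnitude of $\Theta_\mathrm{vib}$ and the resulting $Z_\mathrm{vib} \approx 1$ can be illustrated for an assumed vibrational wavenumber of $2000\ \mathrm{cm}^{-1}$, typical for a diatomic stretch:

```python
import numpy as np

kB = 1.380649e-23         # J/K
hbar = 1.054571817e-34    # J s
c = 2.998e10              # speed of light, cm/s

omega = 2 * np.pi * c * 2000.0   # rad/s for an assumed 2000 cm^-1 mode
Theta_vib = hbar * omega / kB    # characteristic vibrational temperature
T = 298.0

Z_closed = 1.0 / (1.0 - np.exp(-Theta_vib / T))   # closed form (shifted scale)
v = np.arange(200)
Z_series = np.sum(np.exp(-Theta_vib * v / T))     # truncated direct sum
print(Theta_vib, Z_closed, Z_series)  # Theta_vib ~ 2900 K, Z barely above 1

print(1.0 / Z_closed)                 # ground-state population, essentially 1
```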
Einstein and Debye Models of a Crystal
The considerations on the harmonic oscillator can be extended to a simple model for vibrations in a crystal. If we assume that all atoms except one are fixed at their average locations, the potential at the unique atom is parabolic. This assumption made by Einstein may at first sight violate his own dictum that "Everything should be made as simple as possible, but not simpler." We shall come back to this point below. For the moment we consider Einstein’s approach as a very simple mean-field approach. Instead of the one-dimensional harmonic oscillator treated in Section [section:harmonic_oscillator], we now have a three-dimensional harmonic oscillator. For sufficiently high point symmetry at the unique atom, we can assume an isotropic force constant $f$. Each atom is then described by three independent harmonic oscillators along three orthogonal directions. The harmonic oscillators of different atoms are also independent by construction. Because we want to compute an absolute internal energy, we revert to the partition function of the harmonic oscillator without energy shift given in Equation \ref{eq:Z_harmonic_oscillator_unshifted}. The partition function for a crystal with $N$ atoms, considering that the atoms in a crystal lattice are distinguishable and that thus Equation \ref{eq:molar_z_from_molecular_z} applies, is then given by
$Z = \left( \frac{e^{-\hbar \omega/2 k_\mathrm{B} T}}{1 - e^{-\hbar \omega/k_\mathrm{B} T}} \right)^{3N} \ .$
Internal energy can be computed by Equation \ref{eq:u_from_z},
\begin{align} u_\mathrm{vib} & = k_\mathrm{B} T^2 \left[ \frac{\partial}{\partial T} \ln \left\{ \left( \frac{e^{-\hbar \omega/2 k_\mathrm{B} T}}{1 - e^{-\hbar \omega/k_\mathrm{B}T}} \right)^{3N} \right\} \right]_V \[4pt] & = 3 k_\mathrm{B} N T^2 \left[ \frac{\partial}{\partial T} \left\{ -\frac{\hbar \omega}{2 k_\mathrm{B}T} - \ln\left( 1 - e^{-\hbar \omega/k_\mathrm{B}T} \right) \right\} \right]_V \[4pt] & = 3 k_\mathrm{B} N T^2 \left[ \frac{\hbar \omega}{2 k_\mathrm{B}T^2} + \frac{\hbar \omega/k_\mathrm{B}T^2 \cdot e^{-\hbar \omega/k_\mathrm{B}T}}{1 - e^{-\hbar \omega/k_\mathrm{B}T}} \right] \[4pt] & = \frac{3}{2} N \hbar \omega + \frac{3N \hbar \omega }{e^{\hbar \omega/k_\mathrm{B}T}-1} \ .\end{align}
With the characteristic vibrational temperature $\Theta_\mathrm{vib}$ introduced in Equation \ref{eq:Theta_vib} and by setting $N = N_\mathrm{Av}$ to obtain a molar quantity, we find
$U_\mathrm{vib} = \frac{3}{2} R \Theta_\mathrm{vib} + \frac{3 R \Theta_\mathrm{vib}}{e^{\Theta_\mathrm{vib}/T} -1} \ .$
The molar heat capacity of an Einstein solid is the derivative of $U_\mathrm{vib}$ with respect to $T$. We note that we do not need to specify constant volume or constant pressure, since this simple model depends on neither of these quantities. We find
$C_\mathrm{vib} = 3 R \frac{\left( \Theta_\mathrm{vib}/T \right)^2 e^{\Theta_\mathrm{vib}/T}}{\left( e^{\Theta_\mathrm{vib}/T} -1\right)^2} \ .$
According to the rule of Dulong and Petit we should obtain the value $3 R$ for $T \rightarrow \infty$. Since the expression becomes indeterminate $(0/0)$, we need to compute a limiting value, which is possible with the approach of de l’Hospital, where we separately differentiate the numerator and denominator. The derivation is lengthy but it indeed yields the limiting value $3R$:
\begin{align} \lim\limits_{T \rightarrow \infty} C_\mathrm{vib} & = \lim\limits_{T \rightarrow \infty} 3 R \frac{\left( \Theta_\mathrm{vib}/T \right)^2 e^{\Theta_\mathrm{vib}/T}}{\left( e^{\Theta_\mathrm{vib}/T} -1\right)^2} \label{eq:x00} \[4pt] & = 3 R \lim\limits_{T \rightarrow \infty} \frac{2 \left( \Theta_\mathrm{vib}/T \right)\left( -\Theta_\mathrm{vib}/T^2\right)}{2 \left(1-e^{-\Theta_\mathrm{vib}/T} \right)\left(e^{\Theta_\mathrm{vib}/T}\right)\left( -\Theta_\mathrm{vib}/T^2 \right)} \label{eq:x01} \[4pt] & = 3 R \lim\limits_{T \rightarrow \infty} \frac{\left( \Theta_\mathrm{vib}/T \right)}{\left(1-e^{-\Theta_\mathrm{vib}/T} \right)} \label{eq:x02} \[4pt] & = 3 R \lim\limits_{T \rightarrow \infty} \frac{\left( -\Theta_\mathrm{vib}/T^2 \right)}{-e^{-\Theta_\mathrm{vib}/T}\left( \Theta_\mathrm{vib}/T^2 \right)} \label{eq:x03} \[4pt] & = 3 R \label{eq:x04} \ . \end{align}
In Equation \ref{eq:x00} in the numerator and in going from Equation \ref{eq:x01} to \ref{eq:x02} we have set $e^{\Theta_\mathrm{vib}/T}$ to 1, as we may for $T \rightarrow \infty$. As the expression was still indeterminate, we have computed the derivatives of numerator and denominator once again in going from Equation \ref{eq:x02} to \ref{eq:x03} and finally we have once more set $e^{-\Theta_\mathrm{vib}/T}$ to 1 in going from Equation \ref{eq:x03} to \ref{eq:x04}. We see that Einstein’s very simple model agrees with the rule of Dulong and Petit.
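Both limits can also be checked numerically. The following sketch in Python evaluates Equation \ref{eq:x00} directly (the Einstein temperature is an arbitrarily assumed value):

```python
import numpy as np

def c_vib(T, theta):
    """Molar heat capacity of the Einstein solid in units of R."""
    x = theta / T
    return 3.0 * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

theta = 300.0  # K, assumed Einstein temperature
for T in (10.0, 100.0, 300.0, 3000.0, 300000.0):
    print(f"T = {T:8.0f} K   C_vib = {c_vib(T, theta):.4f} R")
```

The printed values rise from essentially zero at low temperature towards the Dulong-Petit value of $3 R$.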
Note
The model of the Einstein solid differs from a model of $N_\mathrm{Av}$ one-dimensional harmonic oscillators according to Section [section:harmonic_oscillator] only by a power of 3 in the partition function, which, after computing the logarithm, becomes a factor of 3 in the temperature-dependent term of $U_\mathrm{vib}$ and thus in $C_\mathrm{vib}$. Hence, in the high-temperature limit the vibrational contribution to the molar heat capacity of a gas consisting of diatomic molecules is equal to $R$. It follows that, in this limit, each molecule contributes an energy $k_\mathrm{B} T$ to the internal energy, i.e. each of the two degrees of freedom (potential and kinetic energy of the vibration) that are quadratic in the coordinates contributes a term $k_\mathrm{B} T/2$. This agrees with the equipartition theorem. Likewise, the Einstein solid agrees with this theorem.
From experiments it is known that molar heat capacity approaches zero when temperature approaches zero. Again the limiting value can be computed by the approach of de l’Hospital, where this time we can neglect the 1 in $e^{\Theta_\mathrm{vib}/T}-1$, as $e^{\Theta_\mathrm{vib}/T}$ tends to infinity for $T \rightarrow 0$. In the last step we obtain
$\lim\limits_{T \rightarrow 0} C_\mathrm{vib} = 6 R \lim\limits_{T \rightarrow 0} \frac{1}{e^{\Theta_\mathrm{vib}/T}} = 0 \ .$
Thus, the Einstein solid also agrees with the limiting behavior of heat capacity at very low temperatures.
Nevertheless the model is ’too simple’, and Einstein was well aware of that. Vibrations of the individual atoms are not independent, but rather collective. The lattice vibrations, called phonons, have a spectrum whose computation is outside the scope of the Einstein model. A model that can describe this spectrum has been developed by Debye based on the density of states of frequencies $\nu$. This density of states in turn has been derived by Rayleigh and Jeans based on the idea that the phonons are a system of standing waves in the solid. It is given by
$D\left( \nu \right) = \frac{4 \pi \nu^2}{c^3} V \ .$
Debye replaced $c$ by a mean velocity of wave propagation in the solid, considered one longitudinal and two transverse waves and only the $3N$ states with the lowest frequencies, as the solid has only $3N$ vibrational degrees of freedom. These considerations lead to a maximum phonon frequency $\nu_\mathrm{max}$ and, after resubstitution of the mean velocity, to a frequency spectrum that is still proportional to $\nu^2$ and scales with $\nu_\mathrm{max}^{-3}$. Instead of the characteristic vibration temperature, it is now convenient to define the Debye temperature
$\Theta_\mathrm{D} = \frac{h \nu_\mathrm{max}}{k_\mathrm{B}} \ .$
In this model the molar heat capacity of the solid becomes
$C_\mathrm{vib} = 9 R \left(\frac{T}{\Theta_\mathrm{D}} \right)^3 \int_0^{\Theta_\mathrm{D}/T} \frac{x^4 e^x}{\left(e^x - 1 \right)^2} \mathrm{d} x$
The integral can be evaluated numerically after series expansion. In the low-temperature limit, the upper integration boundary tends to infinity and the integral assumes the constant value $4 \pi^4/15$, so that Debye’s $T^3$ law,
$\lim\limits_{T \rightarrow 0} C_\mathrm{vib} = 233.8 R \frac{T^3}{\Theta_\mathrm{D}^3} \ ,$
results. This law does not only correctly describe that the heat capacity vanishes at absolute zero, it also correctly reproduces the scaling law, i.e., the $T^3$ dependence that is found experimentally. The high-temperature limit can also be obtained by series expansion and is again Dulong-Petit’s value of $3R$.
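A numerical sketch in Python reproduces both limits of the Debye expression (the Debye temperature is an assumed value of the order of that of a simple metal):

```python
import numpy as np
from scipy.integrate import quad

def c_vib_debye(T, theta_D):
    """Molar heat capacity of the Debye model in units of R."""
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    val, _ = quad(integrand, 0.0, theta_D / T)
    return 9.0 * (T / theta_D)**3 * val

theta_D = 400.0  # K, assumed, of the order of a simple metal
print(c_vib_debye(10 * theta_D, theta_D))   # close to 3, Dulong-Petit
T = 0.02 * theta_D
print(c_vib_debye(T, theta_D))              # low-temperature value ...
print(233.8 * (T / theta_D)**3)             # ... agrees with the T^3 law
```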
The Debye model is still an approximation. Phonon spectra of crystalline solids are not featureless. They are approximated, but not fully reproduced, by a $\nu^2$ dependence. The deviations from the Debye model depend on the specific crystal structure.
Separation of Contributions
Collective Degrees of Freedom
In Section [subsection:Einstein_Debye] we have seen that the treatment of condensed phases can be complicated by collective motion of particles. Such effects are absent in an ideal gas that consists of point particles, a model that is reasonable for noble gases far from condensation. For gases consisting of molecules, it does not suffice to consider only translational motion as in Maxwell’s kinetic theory of gases. We see this already when considering H$_2$ gas, where each molecule can be approximated by a harmonic oscillator (Section [section:harmonic_oscillator]). Neglect of the vibrational degrees of freedom will lead to wrong results for internal energy, heat capacity, the partition function, and entropy, at least at high temperatures. In fact, an H$_2$ molecule is not only an oscillator, it is also a rotor. As a linear molecule it has two rotational degrees of freedom, which also contribute to internal energy and to the partition function.
In principle, we could try to ignore all this and treat each atom as one particle. If the Hamiltonian included the potentials that characterize interaction between the particles, our equations of motion would be correct. In practice, such a treatment is inconvenient and it is better to group the spatial degrees of freedom according to the type of motion. The H$_2$ molecule has 3 translational degrees of freedom, 2 rotational degrees of freedom, and 1 vibrational degree of freedom in the collective motion picture. The sum is 6, as expected for two atoms with each of them having 3 translational degrees of freedom in an 'independent' motion picture. In general, a molecule with $n$ atoms has $f_s=3n$ spatial degrees of freedom, 3 of which are translational, 3 are rotational, except for linear molecules, which have only 2 rotational degrees of freedom, and the rest are vibrational. We note that the number of degrees of freedom in phase space is $f = 2 f_s$ because each spatial degree of freedom is also assigned a momentum degree of freedom.
These considerations take care of particle motion. Further contributions to internal energy and to the partition function can arise from spin. In both closed-shell and open-shell molecules, nuclear spin can play a role. This is indeed the case for H$_2$, which can exist in ortho and para states that differ in correlation of the nuclear spins of the two hydrogen atoms. For open-shell molecules electron spin degrees of freedom must be considered. This is the case, for instance, for O$_2$, which has a triplet ground state. In this case, rotational and spin degrees of freedom correspond to similar energies and couple. Finally, at sufficiently high temperatures electronic excitation becomes possible and then also makes a contribution to the partition function.
Factorization of Energy Modes
In many cases, the individual contributions are separable, i.e. the modes corresponding to different types of motions can be treated independently. Roughly speaking, this results from a separation of energy ranges (frequency bands) of the modes and a corresponding separation of time scales. Nuclear spin degrees of freedom have much lower energy than rotational degrees of freedom, which usually have much lower energy than vibrational degrees of freedom, which in turn have much lower energies than electronic excitation. The independence of nuclear and electron motion is the basis of the Born-Oppenheimer approximation, and the independence of rotational and vibrational motion is invoked when treating a molecule as a rigid rotor. Separability of energy modes leads to a sum rule for the energy contributions for a single closed-shell molecule,
$\epsilon_j = \epsilon_{j,\mathrm{trs}} + \epsilon_{j,\mathrm{ns}} + \epsilon_{j,\mathrm{rot}} + \epsilon_{j,\mathrm{vib}} + \epsilon_{j,\mathrm{el}} \ , \label{eq:epsilon_mol}$
where $\epsilon_{j,\mathrm{trs}}$, $\epsilon_{j,\mathrm{ns}}$, $\epsilon_{j,\mathrm{rot}}$, $\epsilon_{j,\mathrm{vib}}$, and $\epsilon_{j,\mathrm{el}}$ are the translational, nuclear spin, rotational, vibrational, and electronic contributions, respectively. For a monoatomic molecule (atom) $\epsilon_{j,\mathrm{rot}}$ and $\epsilon_{j,\mathrm{vib}}$ vanish. If both the number of neutrons and of protons in the nucleus is even, the nucleus has spin $I=0$. In that case the nuclear spin contribution vanishes for an atom, even in the presence of an external magnetic field. If all nuclei have spin zero, the nuclear spin contribution also vanishes for a diatomic or multi-atomic molecule.
If we assume the equipartition theorem to hold, or even more generally, the whole system to attain thermal equilibrium, there must be some coupling between the different modes. If we say that the energy modes are separable, we assume weak coupling, which means that for statistical purposes we can assume the modes to be independent of each other. The consequence for the computation of the partition function can be seen by considering a system of $N$ particles with an $\alpha$ mode associated with quantum number $k$ and an $\omega$ mode associated with quantum number $r$. The total energy of a single molecule of this type is $\epsilon_j = \epsilon_{j,\alpha k} + \epsilon_{j,\omega r}$. The molecular partition function is given by
$Z = \sum_k \sum_r e^{-\beta \left( \epsilon_{\alpha k} + \epsilon_{\omega r}\right)} \ .$
This sum can be rewritten as
\begin{align} Z & = \sum_k \sum_r e^{-\beta \epsilon_{\alpha k}} \cdot e^{-\beta \epsilon_{\omega r}} \ & = \sum_k e^{-\beta \epsilon_{\alpha k}} \sum_r e^{-\beta \epsilon_{\omega r}} \ & = \sum_k e^{-\beta \epsilon_{\alpha k}} Z_\omega = Z_\omega \sum_k e^{-\beta \epsilon_{\alpha k}} \ & = Z_\alpha Z_\omega \ .\end{align}
We see that the total partition function is the product of the partition functions corresponding to the individual modes. This consideration can be extended to multiple modes. With Equation \ref{eq:epsilon_mol} it follows that
$Z = Z_\mathrm{trs} \cdot Z_\mathrm{ns} \cdot Z_\mathrm{rot} \cdot Z_\mathrm{vib} \cdot Z_\mathrm{el} \ . \label{eq:z_factorization}$
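The factorization can be verified numerically with two arbitrary, independent level schemes (a sketch; the level energies are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
eps_alpha = rng.uniform(0.0, 5.0, size=8)   # illustrative alpha-mode levels
eps_omega = rng.uniform(0.0, 5.0, size=11)  # illustrative omega-mode levels

# Double sum over all combined states (k, r) ...
Z_total = np.exp(-beta * (eps_alpha[:, None] + eps_omega[None, :])).sum()
# ... equals the product of the mode partition functions
Z_product = np.exp(-beta * eps_alpha).sum() * np.exp(-beta * eps_omega).sum()
print(np.isclose(Z_total, Z_product))  # True
```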
By considering Equation \ref{eq:molar_z_from_molecular_z} or Equation \ref{eq:z_indist} we see that we can also compute the partition function for a given mode for all $N$ particles before multiplying the modes. We have already seen that we must set $z_\mathrm{trs} = Z_\mathrm{trs}^N/N!$ to heal the Gibbs paradox. What about the other, internal degrees of freedom? If two particles with different internal states are exchanged, they must be considered distinguishable, exactly because their internal state ’tags’ them. Hence, for all the other modes we have $z_\alpha = Z_\alpha^N$. Thus,
\begin{align} z & = \frac{1}{N!} \left( Z_\mathrm{trs} \cdot Z_\mathrm{ns} \cdot Z_\mathrm{rot} \cdot Z_\mathrm{vib} \cdot Z_\mathrm{el} \right)^N \ & = \frac{Z_\mathrm{trs}^N}{N!} \cdot Z_\mathrm{ns}^N \cdot Z_\mathrm{rot}^N \cdot Z_\mathrm{vib}^N \cdot Z_\mathrm{el}^N \ . \label{eq:z_separation}\end{align}
Accordingly, we can consider each of the partition functions in turn. We also note that separability of the energies implies factorization of the molecular wavefunction,
$\psi = \psi_\mathrm{trs} \cdot \psi_\mathrm{ns} \cdot \psi_\mathrm{rot} \cdot \psi_\mathrm{vib} \cdot \psi_\mathrm{el}$
Translational Partition Function
First, we derive the density of states that we had already used in computing the distribution functions for quantum gases. We consider a quantum particle in a three-dimensional cubic box with edge length $a$. The energy is quantized with integer quantum numbers $n_x$, $n_y$, and $n_z$ corresponding to the three pairwise orthogonal directions that span the cube,
\begin{align} \epsilon_\mathrm{trs} & = \frac{h^2}{8 m a^2} \left( n_x^2 + n_y^2 + n_z^2 \right) \label{eq:trans_quant} \ & = \frac{1}{2m} \left( p_x^2 + p_y^2 + p_z^2 \right) \label{eq:mom_quant} \ .\end{align}
It follows that momentum is also quantized with $|p_i| = (h/2a)n_i$ ($i=x,y,z$). It is convenient to consider momentum in a Cartesian frame where $h/2a$ is the unit along the $x$, $y$, and $z$ axes. Each state characterized by a unique set of translational quantum numbers $(n_x,n_y,n_z)$ ’owns’ a small cube with volume $h^3/8a^3$ in the octant with $x\ge0$, $y\ge0$, and $z\ge0$. Since momentum can also be negative, we need to consider all eight octants, so that each state owns a cell in momentum space with volume $h^3/a^3$. In order to go to phase space, we need to add the spatial coordinates. The particle can move throughout the whole cube with volume $a^3$. Hence, each state owns a phase space volume of $h^3$.
By rearranging Equation \ref{eq:trans_quant} we can obtain an equation that must be fulfilled by the quantum numbers,
$\frac{n_x^2}{\left(\frac{a}{h} \sqrt{8 m \epsilon} \right)^2} + \frac{n_y^2}{\left(\frac{a}{h} \sqrt{8 m \epsilon} \right)^2} + \frac{n_z^2}{\left(\frac{a}{h} \sqrt{8 m \epsilon} \right)^2} = 1 \label{eq:trans_sphere}$
and by using Equation \ref{eq:mom_quant} we can convert it to an equation that must be fulfilled by the components of the momentum vector,
$\frac{p_x^2}{\left(\frac{1}{2} \sqrt{8 m \epsilon} \right)^2} + \frac{p_y^2}{\left(\frac{1}{2} \sqrt{8 m \epsilon} \right)^2} + \frac{p_z^2}{\left(\frac{1}{2} \sqrt{8 m \epsilon} \right)^2} = 1 \ . \label{eq:mom_sphere}$
All states with quantum numbers that make the expression on the left-hand side of Equation \ref{eq:trans_sphere} or Equation \ref{eq:mom_sphere} smaller than 1 correspond to energies that are smaller than $\epsilon$. The momentum associated with these states lies in the sphere defined by Equation \ref{eq:mom_sphere} with radius $\frac{1}{2}\sqrt{8 m \epsilon}$ and volume $\frac{\pi}{6}(8 m \epsilon)^{3/2}$. With cell size $h^3/a^3$ in momentum space the number of cells with energies smaller than $\epsilon$ is
$\mathcal{N}(\epsilon) = \frac{8 \sqrt{2}}{3} \pi \frac{V}{h^3} \left( m \epsilon \right)^{3/2} \ ,$
where we have substituted $a^3$ by volume $V$ of the box. The number of states in an energy interval between $\epsilon$ and $\epsilon + \mathrm{d} \epsilon$ is the first derivative of $\mathcal{N}(\epsilon)$ with respect to $\epsilon$ and is the sought density of states,
$D(\epsilon) = 4 \sqrt{2} \pi \frac{V}{h^3} m^{3/2} \epsilon^{1/2} \ . \label{eq:density_of_states_derived}$
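The counting argument behind this result can be checked by brute force. In units of $h^2/8 m a^2$ the energy is simply $n_x^2 + n_y^2 + n_z^2$, and the number of states below $\epsilon$ should approach the octant volume $(\pi/6)\epsilon^{3/2}$ (a sketch; the chosen energies are arbitrary):

```python
import numpy as np

# Energies in units of h^2/(8 m a^2), so that eps = nx^2 + ny^2 + nz^2
for eps in (100.0, 400.0, 1600.0):
    n = np.arange(1, int(np.sqrt(eps)) + 1)
    n2 = n[:, None, None]**2 + n[None, :, None]**2 + n[None, None, :]**2
    counted = int((n2 <= eps).sum())        # states with energy below eps
    estimate = np.pi / 6.0 * eps**1.5       # octant volume estimate
    print(eps, counted, round(estimate))
```

The relative agreement improves with increasing $\epsilon$, as surface corrections to the volume counting become negligible.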
Partition Function and Accessible States
This density of states is very high, so that we can replace the sum over the quantum numbers $n_i$ in the partition function of the canonical ensemble by an integral,
\begin{align} Z_{\mathrm{trs},i} & = \int_0^\infty e^{-\beta n_i^2 h^2/8 m a^2} \mathrm{d} n_i \ \left( i = x, y, z \right) \ &= \sqrt{\frac{2 \pi m}{\beta}} \frac{a}{h} \ .\end{align}
The contributions along orthogonal spatial coordinates are also independent of each other and factorize. Hence,
$Z_{\mathrm{trs}} = Z_{\mathrm{trs},x} \cdot Z_{\mathrm{trs},y} \cdot Z_{\mathrm{trs},z} = \left( \frac{2 \pi m k_\mathrm{B} T}{h^2} \right)^{3/2} V \ , \label{eq:Z_trs_ideal}$
where we have again substituted $a^3$ by $V$ and, as by now usual, also $\beta$ by $1/k_\mathrm{B} T$. The corresponding molar partition function is
$z_\mathrm{trs} = \frac{1}{N_\mathrm{Av}!} \left[ \left( \frac{2 \pi m k_\mathrm{B} T}{h^2} \right)^{3/2} V \right]^{N_\mathrm{Av}} \ . \label{eq:z_trs_ideal}$
At this point it is useful to introduce another concept:
The molecular canonical partition function $Z$ is a measure of the number of states that are accessible to the molecule at a given temperature. [concept:accessible_states]
This can be easily seen when considering
$P_i = \frac{N_i}{N} = \frac{g_i e^{-\epsilon_i/k_\mathrm{B}T}}{Z}$
and $\sum_i P_i = 1$. If we consider a mole of $\ce{^{4}He}$ (bosons) at 4.2 K, where it liquefies, we find that $Z_{\mathrm{trs}}/N_\mathrm{Av} \approx 7.5$, which is not a large number. This indicates that we are close to breakdown of the regime where Bose-Einstein statistics can be approximated by Boltzmann statistics.
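This estimate is easy to reproduce (a sketch in Python with CODATA constants; taking the molar volume as that of the ideal gas at 4.2 K and 1 atm is an assumption of this back-of-the-envelope check):

```python
import numpy as np

h = 6.62607015e-34    # J s
kB = 1.380649e-23     # J/K
NAv = 6.02214076e23   # 1/mol

m = 4.0026e-3 / NAv          # mass of a 4He atom, kg
T = 4.2                      # K
V = 8.314 * T / 101325.0     # ideal-gas molar volume at 1 atm, m^3

Z_trs = (2 * np.pi * m * kB * T / h**2)**1.5 * V
print(Z_trs / NAv)           # ~7.4, close to the quoted 7.5
```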
For $T \rightarrow 0$ only the $g_0$ lowest energy states are populated. In the absence of ground-state degeneracy, $g_0=1$, we find $Z=1$ and with an energy scale where $U(T=0)=0$ we have $S(0)=0$ in agreement with Nernst’s theorem.
An expression for the translational contribution to the entropy of an ideal gas can be derived from Equation \ref{eq:Z_trs_ideal}, Equation \ref{eq:z_indist}, and Equation \ref{eq:s_from_z}. We know that $u = 3 N k_\mathrm{B} T/2$, so that we only need to compute $\ln z_\mathrm{trs}$,
\begin{align} \ln z_\mathrm{trs} & = \ln \frac{1}{N!} Z_\mathrm{trs}^N \ & = -\ln N! + N \ln Z_\mathrm{trs} \ & = -N \ln N + N + N \ln Z_\mathrm{trs} \ & = N \left(1 + \ln \frac{Z_\mathrm{trs}}{N} \right) \ ,\end{align}
where we have used Stirling’s formula to resolve the factorial. Thus we find
\begin{align} s & = \frac{u}{T} + k_\mathrm{B} \ln z \ & = \frac{3}{2} N k_\mathrm{B} + k_\mathrm{B} N \left( 1 + \ln \frac{Z_\mathrm{trs}}{N} \right) \ & = N k_\mathrm{B} \left( \frac{5}{2} + \ln \frac{Z_\mathrm{trs}}{N} \right)\end{align}
By using Equation \ref{eq:Z_trs_ideal} we finally obtain the Sackur-Tetrode equation
$s = N k_\mathrm{B} \left\{ \frac{5}{2} + \ln\left[ \left( \frac{2 \pi m k_\mathrm{B} T}{h^2} \right)^\frac{3}{2} \frac{V}{N}\right] \right\} \ .$
To obtain the molar entropy $S_\mathrm{m}$, $N$ has to be replaced by $N_\mathrm{Av}$. Volume can be substituted by pressure and temperature, by noting that the molar volume is given by $V_\mathrm{m} = R T/p = N_\mathrm{Av} V/N$. With $N_\mathrm{Av} k_\mathrm{B} = R$ and the molar mass $M = N_\mathrm{Av} m$ we obtain
$S_\mathrm{m} = R \left\{ \frac{5}{2} + \ln\left[ \left( \frac{2 \pi M k_\mathrm{B} T}{N_\mathrm{Av} h^2} \right)^\frac{3}{2} \frac{R T}{N_\mathrm{Av} p}\right] \right\}$
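As a consistency check (a sketch; the comparison value is the tabulated standard molar entropy of argon), the Sackur-Tetrode equation reproduces the experimental entropy of a monoatomic ideal gas:

```python
import numpy as np

h = 6.62607015e-34    # J s
kB = 1.380649e-23     # J/K
NAv = 6.02214076e23   # 1/mol
R = NAv * kB          # J/(mol K)

def S_m(M, T, p):
    """Molar translational entropy from the Sackur-Tetrode equation."""
    arg = (2 * np.pi * M * kB * T / (NAv * h**2))**1.5 * R * T / (NAv * p)
    return R * (2.5 + np.log(arg))

# Argon at 298.15 K and 1 bar; tabulated standard value ~154.8 J/(mol K)
print(S_m(39.948e-3, 298.15, 1e5))
```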
Nuclear Spin Partition Function
High-Temperature Limit
In the absence of a magnetic field, all nuclear spin states are degenerate, except for the very tiny splittings that arise from $J$ couplings between the nuclear spins themselves. Even if we consider the largest magnetic fields available, it is safe to assume that all nuclear spin states are equally populated down to temperatures of at least 1.5 K and that the contribution of nuclear spins to the total energy is negligibly small. Of course, NMR spectroscopy relies on the fact that these states are not exactly equally populated, but in the context of statistical thermodynamics, the contribution to internal energy and the population differences are negligible.
Hence, in this high-temperature limit all nuclear spin states are fully accessible and the number of accessible states equals the total number of nuclear spin states. This gives
$Z_\mathrm{ns} = \prod_i \left( 2 I_i+1 \right) \ , \label{eq:Z_ns}$
where the $I_i$ are the nuclear spin quantum numbers for nuclei in the molecule. Magnetic equivalence leads to degeneracy of nuclear spin levels, but does not influence the total number of nuclear spin states. Since the term $u/T$ in Equation \ref{eq:s_from_z} is negligible and $z_\mathrm{ns} = Z_\mathrm{ns}^{N}$, we have
$s = N k_\mathrm{B} \sum_i \ln \left( 2 I_i + 1 \right) \ .$
This contribution to entropy is not generally negligible. Still it is generally ignored in textbooks, which usually does not cause problems, as the contribution is constant under most conditions where experiments are conducted and does not change during chemical reactions.
Symmetry Requirements
Nuclear spin states have another, more subtle effect that may prevent separation of state spaces. We consider this effect for H$_2$. In this molecule, the electron wavefunction arises from two electrons, which are fermions, and must thus be antisymmetric with respect to exchange of the two electrons. In quantum-chemical computations this is ensured by using a Slater determinant. Likewise, the nuclear wavefunction must be antisymmetric with respect to exchange of the two protons, which are also fermions. The spin part is antisymmetric for the singlet state with total nuclear spin quantum number $F = 0$,
$\psi_\mathrm{ns,S} = \frac{1}{\sqrt{2}} \left( |\alpha \beta \rangle - |\beta \alpha \rangle \right) \ , \label{eq:singlet}$
and symmetric for the triplet state with $F=1$, as can be seen by the wavefunctions of each of the three triplet substates:
\begin{align} \psi_\mathrm{ns,T_+} & = |\alpha \alpha \rangle \label{eq:triplet_p} \ \psi_\mathrm{ns,T_0} & = \frac{1}{\sqrt{2}} \left( |\alpha \beta \rangle + |\beta \alpha \rangle \right) \label{eq:triplet_0} \ \psi_\mathrm{ns,T_-} & = |\beta \beta \rangle \label{eq:triplet_m} \ .\end{align}
The translational, vibrational, and electron wavefunctions are generally symmetric with respect to the exchange of the two nuclei. The rotational wavefunction is symmetric for even rotational quantum numbers $J$ and antisymmetric for odd quantum numbers. Hence, to ensure that the generalized Pauli principle holds and the total wavefunction is antisymmetric with respect to exchange of indistinguishable nuclei, even $J$ can only be combined with the antisymmetric nuclear spin singlet state and odd $J$ only with the symmetric triplet state. The partition functions for these two cases must be considered separately. For H$_\mathrm{2}$ we have
$Z_\mathrm{para} = \sum_{J \textrm{even}} \left( 2 J + 1 \right) e^{-J(J+1)\hbar^2/2 I k_\mathrm{B}T} \ ,$
where $g_J = 2J+1$ is the degeneracy of the rotational states and $I$ is the moment of inertia, and
$Z_\mathrm{ortho} = 3 \sum_{J \textrm{odd}} \left( 2 J + 1 \right) e^{-J(J+1)\hbar^2/2 I k_\mathrm{B}T} \ ,$
where $g_I = 3$ is the degeneracy of the nuclear spin states.
For H$_2$ the $(J=0, F=0)$ state is called para-hydrogen and the $(J=1,F=1)$ state ortho-hydrogen. At ambient temperature, both the $(J=0, F=0)$ state and the $(J=1, F=1)$ state are, to a good approximation, fully accessible, and thus the four nuclear spin substates described by Eqs. ([eq:singlet]-[eq:triplet_m]) are equally populated. Statistics then dictates a para-hydrogen to ortho-hydrogen ratio of 1:3 and no macroscopic spin polarization in a magnetic field. The splitting between the two states is
$\frac{\epsilon_{J=1,F=1}-\epsilon_{J=0,F=0}}{k_\mathrm{B}} = 2\Theta_\mathrm{rot} = \frac{\hbar^2}{k_\mathrm{B} I} \approx 178.98 \textrm{ K} \ ,$
where we have introduced a characteristic rotational temperature analogous to the characteristic vibrational temperature for the harmonic oscillator in Equation \ref{eq:Theta_vib}. At temperatures well below this energy splitting, para-hydrogen is strongly enriched with respect to ortho-hydrogen. Equilibration in a reasonable time requires a catalyst. Still, no macroscopic spin polarization in a magnetic field is observed, as the two nuclear spins are magnetically equivalent and align antiparallel. If, however, para-hydrogen is reacted with a molecule featuring a multiple bond, magnetic equivalence of the two hydrogen atoms can be removed and in that case enhanced nuclear spin polarization is observable (para-hydrogen induced polarization, PHIP). We note that for $^{2}\textrm{H}_2$ the combination of nuclear spin states and rotational states to an allowed state reverses, as deuterons are bosons.
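The temperature dependence of the equilibrium ortho/para ratio follows directly from the two partition functions (a sketch; $\Theta_\mathrm{rot} \approx 89.5$ K is taken as half the splitting quoted above):

```python
import numpy as np

theta_rot = 89.5  # K, half the splitting quoted above

def Z_half(T, parity):
    """Rotational sum over even (parity=0) or odd (parity=1) J only."""
    J = np.arange(parity, 200, 2)
    return ((2 * J + 1) * np.exp(-J * (J + 1) * theta_rot / T)).sum()

for T in (20.0, 77.0, 300.0, 1000.0):
    ratio = 3.0 * Z_half(T, 1) / Z_half(T, 0)  # ortho : para
    print(f"T = {T:6.1f} K   ortho : para = {ratio:.4f} : 1")
```

At ambient temperature the ratio approaches the statistical value of 3, while at 20 K para-hydrogen strongly dominates.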
Rotational Partition Function
Rigid Rotor Assumption and Rotamers
Separation of the rotational partition function from the partition functions of other degrees of freedom does not only require consideration of nuclear spin states, but also the assumption that the moment of inertia is the same for all rotational states. This is generally true if the molecule behaves as a rigid rotor. For small molecules consisting of only a few atoms, this is often a good approximation. Larger molecules feature internal rotations, where a group of atoms rotates with respect to the rest of the molecule. In general, internal rotations are torsions about rotatable bonds, which are often not freely rotatable. The torsion potential has several minima and these minima are separated by energy barriers with heights that are larger, but not much larger than $k_\mathrm{B} T$. If we denote the number of potential minima for the $i^\mathrm{th}$ rotatable bond with $n_{\mathrm{min},i}$, the total number of rotameric states (rotamers, for short) is
$n_\mathrm{rot} = \prod_i n_{\mathrm{min},i} \ .$
Each rotamer has its own moment of inertia and, hence, its own set of states with respect to total rotation of the molecule. Because the energy scales of internal and total rotations are not well separated and because in larger molecules some vibrational modes may also have energies in this range, the partition function is not usually separable for large molecules. In such cases, insight into statistical thermodynamics can be best obtained by MD simulations. In the following, we consider small molecules that can be assumed to behave as rigid rotors. We first consider diatomic molecules, where this assumption certainly applies at the level of precision that we seek here.
The energy levels of a rigid rotor of a diatomic molecule are quantized by the rotational quantum number $J$ and given by
$\epsilon_J = J(J+1) \frac{h^2}{8\pi^2 I} = h c B J (J+1) \ ,$
where
$I = \mu r^2$
is the moment of inertia with the reduced mass $\mu$ and
$B = \frac{h}{8 \pi^2 I c}$
is the rotational constant. After introducing the characteristic rotational temperature,
$\Theta_\mathrm{rot} = \frac{h^2}{8 \pi^2 I k_\mathrm{B}} = \frac{h c B}{k_\mathrm{B}}$
we have
$\epsilon_J = J(J+1) k_\mathrm{B} \Theta_\mathrm{rot} \ .$
As already mentioned, each rotational level has a degeneracy $g_J = 2 J + 1$. If all nuclei in the molecules are distinguishable (magnetically not equivalent), there is no correlation with nuclear spin states. In that case we have
$Z_\mathrm{rot} = \sum_J \left( 2 J + 1 \right) e^{-J(J+1)\Theta_\mathrm{rot}/T} \ . \label{eq:rot_sum}$
For sufficiently high temperatures and a sufficiently large moment of inertia, the density of states is sufficiently large for replacing the sum by an integral,
$Z_\mathrm{rot} \approx \int_0^\infty \left( 2 J + 1 \right) e^{-J(J+1)\Theta_\mathrm{rot}/T} \mathrm{d}J = \frac{T}{\Theta_\mathrm{rot}} \ . \label{eq:continuum}$
Deviations between the partition functions computed by Equation \ref{eq:rot_sum} and Equation \ref{eq:continuum} are visualized in Figure $1$. As state functions depend on $\ln Z$, the continuum approximation is good for $T/\Theta_\mathrm{rot} \gg 1$, which applies to all gases, except at low temperatures for those that contain hydrogen. At ambient temperature it can be used in general, except for a further correction that we need to make because of symmetry considerations.
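The deviations can be made quantitative with a few lines of Python (a sketch; the characteristic temperature is an assumed value of the order of that of CO):

```python
import numpy as np

theta_rot = 2.8  # K, assumed, of the order of CO

def Z_rot_sum(T, J_max=1000):
    """Direct evaluation of the sum in Eq. (rot_sum)."""
    J = np.arange(J_max + 1)
    return ((2 * J + 1) * np.exp(-J * (J + 1) * theta_rot / T)).sum()

for ratio in (0.5, 1.0, 5.0, 20.0, 100.0):
    T = ratio * theta_rot
    print(f"T/Theta = {ratio:6.1f}   sum = {Z_rot_sum(T):9.3f}"
          f"   continuum = {ratio:9.3f}")
```

The absolute deviation between sum and integral approaches a constant of about 1/3, so the relative error of the continuum approximation vanishes for $T/\Theta_\mathrm{rot} \gg 1$.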
Accessible States and Symmetry
Even if all nuclei are magnetically inequivalent and, hence, distinguishable, rotational states may not be. For heteronuclear diatomic molecules they are, but for homonuclear diatomic molecules, they are not. To see this, we consider a symmetrical linear molecule that rotates by 180$^\circ$ about an axis perpendicular to the bond axis and centered in the middle of the bond. This rotation produces a configuration that is indistinguishable from the original configuration. In other words, the nuclear wavefunction is symmetric with respect to this rotation. For a homonuclear diatomic molecule, half of the rotational states are symmetric and half are antisymmetric. For nuclei that are bosons, such as $\ce{^{16}O}$ in dioxygen, only the former states are allowed; for nuclei that are fermions, only the latter are allowed. Hence, we need to divide the partition function, which counts accessible states, by two. In this example, we have deliberately chosen a case with only one nuclear spin state. If nuclear spin states with different symmetry exist, all rotational states are accessible, but correlated to the nuclear spin states, as we have seen before for dihydrogen. In the following we consider the situation with only one nuclear spin state or for a fixed nuclear spin state.
Table $1$: Symmetry numbers $\sigma$ corresponding to symmetry point groups. [table:sigma]
$C_1$, $C_i$, $C_s$, $C_{\infty v}$: $\sigma = 1$
$C_n$, $C_{nv}$, $C_{nh}$: $\sigma = n$
$D_{\infty h}$: $\sigma = 2$
$D_n$, $D_{nh}$, $D_{nd}$: $\sigma = 2n$
$S_n$: $\sigma = n/2$
$T$, $T_d$: $\sigma = 12$
$O_h$: $\sigma = 24$
$I_h$: $\sigma = 60$
Although we have not yet discussed other complications for multi-atomic molecules, we generalize this concept of symmetry-accessible states by introducing a symmetry number $\sigma$. In general, $\sigma$ is the number of distinct orientations of a rigid molecule that are distinguished only by interchange of identical atoms. For an NH$_3$ molecule, rotation about the $C_3$ axis by 120$^\circ$ generates one such orientation from another. No other rotation axis exists. Hence, $\sigma = 3$ for NH$_3$. In general, the symmetry number can be obtained from the molecule’s point group, as shown in Table $1$.
With the symmetry number, the continuum approximation (Equation \ref{eq:continuum}) becomes
$Z_\mathrm{rot} = \frac{T}{\sigma \Theta_\mathrm{rot}} \ , \label{eq:Z_rot}$
where we still assume that symmetry is sufficiently high for assigning a single characteristic temperature.
We further find
$\ln Z_\mathrm{rot} = \ln T + \ln \frac{1}{\sigma \Theta_\mathrm{rot}} \ , \label{eq:ln_Z_rot}$
and with this
\begin{align} U_\mathrm{rot} & = N_\mathrm{Av} k_\mathrm{B} T^2 \left( \frac{\partial \ln Z_\mathrm{rot}}{\partial T} \right)_V \label{eq:u_rot0} \ & = R T^2 \frac{\partial}{\partial T} \ln T \label{eq:u_rot1} \ & = R T \ \textrm{(linear)} \ .\end{align}
On going from Equation \ref{eq:u_rot0} to \ref{eq:u_rot1} we could drop the second term on the right-hand side of Equation \ref{eq:ln_Z_rot}, as this term does not depend on temperature. This is a general principle: Constant factors in the partition function do not contribute to internal energy. The result can be generalized to multi-atomic linear molecules that also have two rotational degrees of freedom. This result is expected from the equipartition theorem, as each degree of freedom should contribute a term $k_\mathrm{B} T/2$ to the molecular or a term $R T/2$ to the molar internal energy. However, if we refrain from the continuum approximation in Equation \ref{eq:continuum} and numerically evaluate Equation \ref{eq:rot_sum} instead, we find a lower contribution for temperatures lower than or comparable to $\Theta_\mathrm{rot}$. This is also a general principle: Contributions of modes to internal energy and, by inference, to heat capacity, are fully realized only at temperatures much higher than their characteristic temperature and are negligible at temperatures much lower than their characteristic temperature.
For the rotation contribution to molar heat capacity at constant volume of a linear molecule we obtain
$C_{\mathrm{rot},V} = \frac{\partial}{\partial T} U_\mathrm{rot} = R \ .$
A non-linear multi-atomic molecule has, in general, three independent rotational moments of inertia corresponding to three pairwise orthogonal directions $a, b, c$. With
$\Theta_{\mathrm{rot},\alpha} = \frac{h^2}{8 \pi^2 I_\alpha k_\mathrm{B}} \ \left( \alpha = a \ , b \ , c \right)$
one finds for the partition function
$Z_\mathrm{rot} = \frac{\sqrt{\pi}}{\sigma} \left( \frac{T^3}{\Theta_{\mathrm{rot},a} \Theta_{\mathrm{rot},b} \Theta_{\mathrm{rot},c}} \right)^{1/2} \ .$
For spherical-top molecules, all three moments of inertia are equal, $I_a = I_b = I_c$, and hence all three characteristic temperatures are equal. For symmetric-top molecules, $I_a = I_b \neq I_c$.
The general equations for $U_\mathrm{rot}$ and $C_{\mathrm{rot},V}$ at sufficiently high temperature $T \gg \Theta_\mathrm{rot}$ are
\begin{align} U_\mathrm{rot} & = \frac{d}{2} R T \ C_{\mathrm{rot},V} & = \frac{d}{2} R \ ,\end{align}
where $d = 1$ for a free internal rotation (e.g., about a C$\equiv$C bond), $d=2$ for linear, and $d=3$ for non-linear molecules. We note that the contribution of a free internal rotation needs to be added to the contribution from total rotation of the molecule.
The expressions for the rotational contributions to molar entropy are a bit lengthy and do not provide additional physical insight. They can be easily derived from the appropriate expressions for the rotational partition function and internal energy using Equation \ref{eq:s_from_z}. We note, however, that the contribution from the $u/T$ term in the entropy expression is $d R/2$ and the contribution from $\ln Z$ is positive. Hence, at temperatures where the continuum approximation is valid, the rotational contribution to entropy is larger than $d R/2$.
Vibrational Partition Function
The Harmonic Oscillator Extended
Vibration in a diatomic molecule can be described by the 1D harmonic oscillator that we have considered in Section [section:harmonic_oscillator]. In a multi-atomic molecule the $3n-5$ (linear) or $3n-6$ (non-linear) normal modes can be treated independently,
$Z_\mathrm{vib} = \prod_{i = 1}^{3n-5 \vee 3n-6} Z_{\mathrm{vib},i} = \prod_{i = 1}^{3n-5 \vee 3n-6} \frac{1}{1 - e^{-\Theta_{\mathrm{vib},i}/T}}. \label{eq:Z_vib}$
Normal mode energies are no longer independent and the partition function is no longer factorisable if anharmonicity of the vibration needs to be included, which is the case only at very high temperatures. We ignore this and ask about the limiting behavior of $Z_\mathrm{vib}$ for a diatomic molecule or $Z_{\mathrm{vib},i}$ for an individual normal mode at high temperatures. In the denominator of Equation \ref{eq:Z_vib} we can make the approximation $e^{-\Theta_\mathrm{vib}/T} \approx 1 - \Theta_\mathrm{vib}/T$, if $\Theta_\mathrm{vib}/T \ll 1$. We obtain
$\lim\limits_{T \rightarrow \infty} Z_{\mathrm{vib},i} = \frac{T}{\Theta_{\mathrm{vib},i}} \ .$
Vibrational temperatures for most normal modes are much higher than ambient temperature. Hence, at 298 K we often have $Z_{\mathrm{vib},i} \approx 1$. Appreciable deviations are observed for vibrations that involve heavy atoms, for instance $Z_\mathrm{vib} = 1.556$ at $T=300$ K for $I_2$.
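The quoted numbers are easy to verify (a sketch; the characteristic temperatures are typical literature values and should be treated as assumptions):

```python
import numpy as np

def Z_vib(T, theta_vib):
    """Vibrational partition function on the zero-point-shifted scale."""
    return 1.0 / (1.0 - np.exp(-theta_vib / T))

print(Z_vib(300.0, 3395.0))  # N2 (assumed theta ~3395 K): very close to 1
print(Z_vib(300.0, 308.0))   # I2 (assumed theta ~308 K):  ~1.556
```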
Vibrational Contributions to U, CV, and S
The vibrational partition function for a system consisting of $N$ diatomic molecules is
$z_\mathrm{vib} = Z_\mathrm{vib}^N = \left( \frac{1}{1-e^{-\Theta_\mathrm{vib}/T}} \right)^N \ .$
With $N = N_\mathrm{Av}$ we obtain the vibrational contribution to the molar internal energy
\begin{align} U_\mathrm{vib} & = k_\mathrm{B} T^2 \left( \frac{\partial \ln z_\mathrm{vib}}{\partial T} \right)_V = \frac{N_\mathrm{Av} k_\mathrm{B} \Theta_\mathrm{vib}}{e^{\Theta_\mathrm{vib}/T}-1} \ & = \frac{R \Theta_\mathrm{vib}}{e^{\Theta_\mathrm{vib}/T}-1} \ . \label{eq:U_vib}\end{align}
For multi-atomic molecules the contributions from the individual normal modes with characteristic vibrational temperatures $\Theta_{\mathrm{vib},i}$ must be summed. Equation \ref{eq:U_vib} neglects the zero-point energy, as we had defined the partition function for an energy scale shifted by the zero-point energy. On an absolute energy scale, where $U=0$ corresponds to the minimum of the Born-Oppenheimer potential energy hypersurface, an additional term $U_\mathrm{zp} = N_\mathrm{Av} h \nu_i/2$ needs to be added for each normal mode, with $\nu_i$ being the frequency of the normal mode.
The vibrational contribution to molar heat capacity at constant volume is
$C_{\mathrm{vib},V} = \left( \frac{\partial U_\mathrm{vib}}{\partial T} \right)_V = R \left(\frac{\Theta_\mathrm{vib}}{T}\right)^2 \frac{e^{\Theta_\mathrm{vib}/T}}{\left(e^{\Theta_\mathrm{vib}/T} - 1 \right)^2} \ ,$
which is called the Einstein equation. With the Einstein function,
$\mathcal{F}_\mathrm{E}(u) = \frac{u^2 e^u}{\left( e^u - 1 \right)^2} \ ,$
it can be written as
$C_{\mathrm{vib},V} = R \, \mathcal{F}_\mathrm{E} \left( \frac{\Theta_\mathrm{vib}}{T} \right) \ .$
For computing the vibrational contribution to molar entropy we revert to the shifted energy scale. This is required, as inclusion of the zero-point contribution to $u$ would leave us with an infinity. We find
$S_{\mathrm{vib},i} = R \left[ \frac{\Theta_{\mathrm{vib},i}}{T \left( e^{\Theta_{\mathrm{vib},i}/T}-1 \right)} - \ln \left( 1 - e^{-\Theta_{\mathrm{vib},i}/T}\right) \right] \ .$
Again contributions from individual normal modes add up. For $\Theta_{\mathrm{vib},i}/T \gg 1$, which is the usual case, both terms in the brackets are much smaller than unity, so that the contribution of any individual normal mode to entropy is much smaller than $R$. Hence, at ambient temperature the vibrational contribution to entropy is negligible compared to the rotational contribution unless the molecule contains heavy nuclei.
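A direct comparison for a light and a heavy diatomic illustrates this point (a sketch, using the same assumed characteristic temperatures as above):

```python
import numpy as np

R = 8.314  # J/(mol K)

def S_vib(T, theta):
    """Vibrational entropy contribution of one normal mode."""
    x = theta / T
    return R * (x / (np.exp(x) - 1.0) - np.log(1.0 - np.exp(-x)))

print(S_vib(298.0, 3395.0))  # N2 (assumed theta): ~1e-3 J/(mol K), negligible
print(S_vib(298.0, 308.0))   # I2 (assumed theta): ~8.4 J/(mol K), of order R
```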
Electronic Partition Function
Atoms and molecules can also store energy by populating excited electronic states. For the hydrogen atom or any system that contains only a single electron, the energy levels can be given in closed form, based on the Bohr model,
$\epsilon_{\mathrm{el},n} = -\frac{z_q^2 R_\mathrm{E}}{n^2} \ ,$
where $n = 1 \ldots \infty$ is the principal quantum number, $z_q$ the nuclear charge, and $R_\mathrm{E}$ the Rydberg constant. However, this is an exception. For molecules and all other neutral atoms, closed expressions for the energy levels cannot be found.
In most cases, the problem can be reduced to considering either only the electronic ground state with energy $\epsilon_{\mathrm{el},0}$ or to considering only excitation to the first excited state with energy $\epsilon_{\mathrm{el},1}$. If we use an energy shift to redefine $\epsilon_{\mathrm{el},0} = 0$, we can define a characteristic electronic temperature
$\Theta_\mathrm{el} = \frac{\epsilon_{\mathrm{el},1}}{k_\mathrm{B}} \ .$
Characteristic electronic temperatures are usually of the order of several thousand Kelvin. Hence, in most cases, $\Theta_\mathrm{el} \gg T$ applies, only the electronic ground state is accessible, and thus
$Z_\mathrm{el} = g_{\mathrm{el},0} \ , \label{eq:z_el_0}$
where $g_{\mathrm{el},0}$ is the degeneracy of the electronic ground state. We note that spatial degeneracy of the electronic ground state cannot exist in non-linear molecules, according to the Jahn-Teller theorem. However, a spatially non-degenerate ground state can still be spin-degenerate.
In molecules, total orbital angular momentum is usually quenched ($\Lambda = 0$, $\Sigma$ ground state). In that case
$Z_\mathrm{el}^{\{\Sigma\}} = 2 S + 1 \ , \label{eq:z_el}$
where $S$ is the electron spin quantum number. For the singlet ground state of a closed-shell molecule ($S=0$) we have $Z_\mathrm{el}^{\{\Sigma\}} = 1$, which means that the electronic contribution to the partition function is negligible. The contribution to internal energy and heat capacity is generally negligible for $\Theta_\mathrm{el} \gg T$. The electronic contribution to molar entropy,
$S_\mathrm{el}^{\{\Sigma\}} = R \ln\left( 2 S + 1 \right) \ ,$
is not negligible for open-shell molecules or atoms with $S>0$. At high magnetic fields and low temperatures, e.g. at $T < 4.2$ K and $B_0 = 3.5$ T, where the high-temperature approximation for electron spin states no longer applies, the electronic partition function and corresponding energy contribution are smaller than given in Equation \ref{eq:z_el}. For a doublet ground state ($S = 1/2$) the problem can be solved with the treatment that we have given in Section [subsection_doublet]. For $S > 1/2$ the spin substates of the electronic ground state are not strictly degenerate even at zero magnetic field, but split by the zero-field splitting, which may exceed thermal energy in some cases. In that case Equation \ref{eq:z_el} does not apply and the electronic contribution to the partition function depends on temperature. Accordingly, there is a contribution to internal energy and to heat capacity.
For a $\Lambda > 0$ species with term symbol $^{2S+1}\Lambda_\Omega$, each $\Omega$ component is doubly degenerate. For instance, for NO with a $\Pi$ ground state ($\Lambda = 1$), both the $^2\Pi_{1/2}$ and the $^2\Pi_{3/2}$ state are doubly degenerate. As the $^2\Pi_{3/2}$ state is only 125 cm$^{-1}$ above the ground state, the characteristic temperature for electronic excitation is $\Theta_\mathrm{el} = 178$ K. In this situation, Equation \ref{eq:z_el_0} does not apply at ambient temperature. The energy gap to the next excited state, on the other hand, is very large. Thus, we have
$Z_\mathrm{el} = g_{\mathrm{el},0} + g_{\mathrm{el},1} e^{-\Theta_\mathrm{el}/T} \ . \label{eq:z_el_exci}$
This equation is fairly general, as higher excited states rarely need to be considered. The electronic contribution to the heat capacity of NO derived from Equation \ref{eq:z_el_exci} is in good agreement with experimental data from Eucken and d’Or.
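A two-level sketch of the NO example (Python; $g_{\mathrm{el},0} = g_{\mathrm{el},1} = 2$ and $\Theta_\mathrm{el} = 178$ K from above, with the analytic heat capacity of a degenerate two-level system) shows the resulting Schottky-type contribution:

```python
import numpy as np

R = 8.314          # J/(mol K)
theta_el = 178.0   # K, from the splitting quoted above
g0, g1 = 2.0, 2.0  # degeneracies of the two Omega components

def Z_el(T):
    return g0 + g1 * np.exp(-theta_el / T)

def C_el(T):
    """Analytic heat capacity of a two-level system with degeneracies."""
    x = theta_el / T
    return R * x**2 * g0 * g1 * np.exp(-x) / Z_el(T)**2

for T in (50.0, 100.0, 178.0, 300.0, 1000.0):
    print(f"T = {T:6.0f} K   Z_el = {Z_el(T):.3f}   "
          f"C_el = {C_el(T):.3f} J/(mol K)")
```

The contribution passes through a maximum near $T \approx \Theta_\mathrm{el}/2$ and vanishes in both temperature limits.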
Equilibrium Constant for Gas Reactions
For clarity, we use an example reaction
$|\nu_\mathrm{A}| \textrm{ A} + |\nu_\mathrm{B}| \textrm{ B} \leftrightharpoons |\nu_\mathrm{C}| \textrm{ C} + |\nu_\mathrm{D}| \textrm{ D}$
with adaptation to other reactions being straightforward. At equilibrium we must have
$\Delta G = 0 \ ,$
hence
$\sum_i \nu_i \mu_i = \nu_\mathrm{A} \mu_\mathrm{A} + \nu_\mathrm{B} \mu_\mathrm{B} + \nu_\mathrm{C} \mu_\mathrm{C} + \nu_\mathrm{D} \mu_\mathrm{D} = 0 \ ,$
where the $\mu_i$ are molar chemical potentials. To solve this problem, we do not need to explicitly work with the grand canonical ensemble, as we can compute the $\mu_i$ from the results that we have already obtained for the canonical ensemble. According to one of Gibbs’ fundamental equations, which we derived in the lecture course on phenomenological thermodynamics, we have
$\mathrm{d} f = -s \mathrm{d}T - p \mathrm{d} V + \sum_i \mu_i \mathrm{d} n_i \ .$
Comparison of coefficients with the total differential of $f(T,V,n_i)$ reveals that
$\mu_i = \left( \frac{\partial f}{\partial n_i}\right)_{T,V,n_{j \neq i}} \ , \label{eq:mu_from_f_i}$
a result that we had also obtained in the lecture course on phenomenological thermodynamics. Using Equation \ref{eq:f_from_z}, Equation \ref{eq:z_indist}, and Stirling’s formula, we obtain for the contribution $f_i$ of an individual chemical species to Helmholtz free energy
\begin{align} f_i & = -k_\mathrm{B} T \ln z_i \ & = -k_\mathrm{B} T \ln \frac{1}{N_i!}Z_i^{N_i} \ & = -k_\mathrm{B} T \left( N_i \ln Z_i - N_i \ln N_i + N_i \right) \ & = - n_i R T \ln \frac{Z_i}{n_i N_\mathrm{Av}} - n_i R T \ ,\end{align}
where $n_i$ is the amount of substance (mol). Equation \ref{eq:mu_from_f_i} then gives
\begin{align} \mu_i & = n_i R T \cdot \frac{1}{n_i} - R T \ln \frac{Z_i}{n_i N_\mathrm{Av}} - R T \ & = -RT \ln \frac{Z_i}{n_i N_\mathrm{Av}} \ & = - R T \ln \frac{Z_i}{N_i} \ . \label{eq:mu_i_from_z_i}\end{align}
Equation \ref{eq:mu_i_from_z_i} expresses the dependence of the chemical potential, a molar property, on the molecular partition function. It may appear odd that this property depends on the absolute number of molecules $N_i$, but exactly this introduces the contribution of mixing entropy that counterbalances the differences in standard chemical potentials $\mu_i^\ominus$. Because of our habit of shifting energies by $\epsilon_{\mathrm{el},0}$ and by the zero-point vibration energies, we cannot directly apply Equation \ref{eq:mu_i_from_z_i}. We can avoid explicit dependence on the $\epsilon_{\mathrm{el},0,i}$ and the zero-point vibrational energies by relying on Hess’ law and referencing energies of all molecules to the state where they are fully dissociated into atoms. The energies $\epsilon_{i,\mathrm{diss}}$ for the dissociated states can be defined at 0 K. We find
\begin{align} Z_{i,\mathrm{corr}} & = \sum_j e^{-\left(\epsilon_{ij} - \epsilon_{i,\mathrm{diss}}\right)/k_\mathrm{B} T} \ & = e^{\epsilon_{i,\mathrm{diss}}/k_\mathrm{B} T} \sum_j e^{-\epsilon_{ij}/k_\mathrm{B} T} \ & = Z_i e^{\epsilon_{i,\mathrm{diss}}/k_\mathrm{B} T} \ ,\end{align}
where index $j$ runs over the states of molecule $i$.
With this correction we have
$\Delta G = -R T \sum_i \nu_i \ln \frac{Z_i e^{\epsilon_{i,\mathrm{diss}}/k_\mathrm{B} T}}{N_i} \ .$
For our example reaction, the equilibrium condition is
$\nu_\mathrm{A} \mu_\mathrm{A} + \nu_\mathrm{B} \mu_\mathrm{B} = -\nu_\mathrm{C} \mu_\mathrm{C} - \nu_\mathrm{D} \mu_\mathrm{D} \ ,$
which gives
\begin{align} & -R T \nu_\mathrm{A} \ln \frac{Z_\mathrm{A} e^{\epsilon_{\mathrm{A},\mathrm{diss}}/k_\mathrm{B} T}}{N_\mathrm{A}} -R T \nu_\mathrm{B} \ln \frac{Z_\mathrm{B} e^{\epsilon_{\mathrm{B},\mathrm{diss}}/k_\mathrm{B} T}}{N_\mathrm{B}} \nonumber \ = & R T \nu_\mathrm{C} \ln \frac{Z_\mathrm{C} e^{\epsilon_{\mathrm{C},\mathrm{diss}}/k_\mathrm{B} T}}{N_\mathrm{C}} + R T \nu_\mathrm{D} \ln \frac{Z_\mathrm{D} e^{\epsilon_{\mathrm{D},\mathrm{diss}}/k_\mathrm{B} T}}{N_\mathrm{D}}\end{align}
and can be rearranged to
$\ln \frac{Z_\mathrm{A}^{-\nu_A} \cdot e^{- \nu_\mathrm{A} \epsilon_{\mathrm{A},\mathrm{diss}}/k_\mathrm{B} T} \cdot Z_\mathrm{B}^{-\nu_B} \cdot e^{- \nu_\mathrm{B} \epsilon_{\mathrm{B},\mathrm{diss}}/k_\mathrm{B} T}}{N_\mathrm{A}^{-\nu_\mathrm{A}} \cdot N_\mathrm{B}^{-\nu_\mathrm{B}}} = \ln \frac{Z_\mathrm{C}^{\nu_C} \cdot e^{\nu_\mathrm{C} \epsilon_{\mathrm{C},\mathrm{diss}}/k_\mathrm{B} T} \cdot Z_\mathrm{D}^{\nu_D} \cdot e^{\nu_\mathrm{D} \epsilon_{\mathrm{D},\mathrm{diss}}/k_\mathrm{B} T}}{N_\mathrm{C}^{\nu_\mathrm{C}} \cdot N_\mathrm{D}^{\nu_\mathrm{D}}}$
and further rearranged to
$\frac{N_\mathrm{C}^{\left|\nu_\mathrm{C}\right|} \cdot N_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{N_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot N_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} = \frac{Z_\mathrm{C}^{\left|\nu_\mathrm{C}\right|} \cdot Z_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{Z_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot Z_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} \cdot e^{\left( \nu_\mathrm{A} \epsilon_{\mathrm{A},\mathrm{diss}} + \nu_\mathrm{B} \epsilon_{\mathrm{B},\mathrm{diss}} + \nu_\mathrm{C} \epsilon_{\mathrm{C},\mathrm{diss}} + \nu_\mathrm{D} \epsilon_{\mathrm{D},\mathrm{diss}} \right)/ k_\mathrm{B} T} \ . \label{eq:KN_0}$
In Equation \ref{eq:KN_0} we can make the identifications
$K_N\left( V,T \right) = \frac{N_\mathrm{C}^{\left|\nu_\mathrm{C}\right|} \cdot N_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{N_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot N_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} \ ,$
where $K_N\left( V,T \right)$ is a volume-dependent equilibrium constant expressed with particle numbers, and, since dissociation energies are negative energies of formation,
$\Delta U_0 = -N_\mathrm{Av} \left( \nu_\mathrm{A} \epsilon_{\mathrm{A},\mathrm{diss}} + \nu_\mathrm{B} \epsilon_{\mathrm{B},\mathrm{diss}} + \nu_\mathrm{C} \epsilon_{\mathrm{C},\mathrm{diss}} + \nu_\mathrm{D} \epsilon_{\mathrm{D},\mathrm{diss}} \right) \ ,$
where $\Delta U_0$ is the molar reaction energy at 0 K. Hence, we have
$K_N\left( V,T \right) = \frac{Z_\mathrm{C}^{\left|\nu_\mathrm{C}\right|} \cdot Z_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{Z_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot Z_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} e^{-\Delta U_0/R T} \ .$
The dependence on volume arises from the dependence of the canonical partition functions on volume.
By dividing all particle numbers by $N_\mathrm{Av}^{\nu_i}$ and volume $V^{\nu_i}$, we obtain the equilibrium constant $K_c(T)$ in molar concentrations
$K_c\left(T \right) = \frac{Z_\mathrm{C}^{\left|\nu_\mathrm{C}\right|}\cdot Z_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{Z_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot Z_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} \cdot \left(N_\mathrm{Av} V \right)^{-\sum_i \nu_i} \cdot e^{-\Delta U_0/R T} \ . \label{eq:Kc}$
By dividing them by the total particle number $N = \sum_i N_i$ to the power of ${\nu_i}$ we obtain
$K_x (V,T) = \frac{Z_\mathrm{C}^{\left|\nu_\mathrm{C}\right|} \cdot Z_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{Z_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot Z_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} \cdot N^{-\sum_i \nu_i} \cdot e^{-\Delta U_0/R T} \ ,$
which coincides with the thermodynamical equilibrium constant $K^\dagger$ at the standard pressure $p^\ominus$. The most useful equilibrium constant for gas-phase reactions is obtained by inserting $p_i = c_i RT$ into Equation \ref{eq:Kc}:
$K_p\left(T \right) = \frac{Z_\mathrm{C}^{\left|\nu_\mathrm{C}\right|}\cdot Z_\mathrm{D}^{\left|\nu_\mathrm{D}\right|}}{Z_\mathrm{A}^{\left|\nu_\mathrm{A}\right|} \cdot Z_\mathrm{B}^{\left|\nu_\mathrm{B}\right|}} \cdot \left(\frac{R T}{N_\mathrm{Av} V} \right)^{\sum_i \nu_i} \cdot e^{-\Delta U_0/R T} \ . \label{eq:Kp}$
For each molecular species, the molecular partition function is a product of the contributions from individual modes, Equation \ref{eq:z_factorization}, that we have discussed above. In the expression for equilibrium constants, the nuclear-spin contribution cancels out since the number of nuclei and their spins are the same on both sides of the reaction equation. Symmetry requirements on the nuclear wavefunction are considered in the symmetry numbers $\sigma_i$ for the rotational partition function. The electronic contribution often reduces to the degeneracy of the electronic ground state and in the vibrational contribution, normal modes with $\Theta_{\mathrm{vib},i} > 5 T$ can be neglected.
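The assembly of $K_p$ from molecular partition functions can be organized as in the following schematic sketch (not a complete implementation; the $Z_i$ would have to be assembled beforehand from the translational, rotational, vibrational, and electronic factors discussed in the preceding sections):

```python
import numpy as np

kB = 1.380649e-23    # J/K
NAv = 6.02214076e23  # 1/mol
R = kB * NAv         # J/(mol K)

def K_p(Z, nu, delta_U0, T, V):
    """Schematic evaluation of Eq. (Kp).
    Z:  molecular partition functions of all species, evaluated for volume V
    nu: signed stoichiometric coefficients (negative for reactants)
    delta_U0: molar reaction energy at 0 K, J/mol"""
    ln_K = sum(n * np.log(z) for z, n in zip(Z, nu))
    ln_K += sum(nu) * np.log(R * T / (NAv * V))
    return np.exp(ln_K - delta_U0 / (R * T))
```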
Macromolecules
Thermodynamics of Mixing
The formalism introduced in the preceding chapters is suitable for small molecules in the gas phase, but does not easily extend to condensed phases or to larger molecules with several rotameric states, let alone to macromolecules, such as synthetic polymers, proteins, nucleic acids, and carbohydrates. Nevertheless, statistical thermodynamics is an important theory for understanding such systems. In this Chapter we introduce some of the concepts of statistical thermodynamics that do not depend on explicit computation of the partition function. We start with the entropy of mixing and, for simplicity, restrict the discussion to binary mixtures.
Entropy of Binary Mixing
We consider mixing of two species A with volume $V_\mathrm{A}$ and B with volume $V_\mathrm{B}$ and neglect volume change, so that the total volume is $V_\mathrm{A}+V_\mathrm{B}$. The volume fractions of the two components in the mixture are thus given by
\begin{align} \phi_\mathrm{A} & = \frac{V_\mathrm{A}}{V_\mathrm{A}+V_\mathrm{B}} \ \phi_\mathrm{B} & = \frac{V_\mathrm{B}}{V_\mathrm{A}+V_\mathrm{B}} = 1 - \phi_\mathrm{A} \ .\end{align}
To consider the statistics of the problem we use a lattice model.
Concept $1$: Lattice model
A lattice model is a discrete representation of a system as opposed to a continuum representation. A three-dimensional lattice model is a regular arrangement of sites in Cartesian space, just as a crystal lattice is a regular arrangement of atoms in Cartesian space. The state of the model is defined by the distribution of units of matter, for instance molecules or the repeat units of a polymer (short: monomers), on the lattice sites. In statistical thermodynamics, one particular arrangement of the units on the lattice is a microstate. The energy of a microstate depends on interactions between units on different lattice sites, in the simplest case only between direct neighbor sites. By considering the statistical distribution of microstates, thermodynamic state functions of the macrostate of the system can be obtained.
In our example we assign the lattice site a volume $v_0$, which cannot be larger than the volume required for one molecule of the smaller component in the mixture. The other component may then also occupy a single site (similarly sized components) or several lattice sites. A macromolecule with a large degree of polymerization consists of a large number of monomers and will thus occupy a large number of lattice sites. The molecular volumes of the species are
\begin{align} v_\mathrm{A} & = N_\mathrm{A} v_0 \ v_\mathrm{B} & = N_\mathrm{B} v_0 \ ,\end{align}
where $N_\mathrm{A}$ and $N_\mathrm{B}$ are the number of sites occupied by one molecule of species A and B, respectively. We consider the three simple cases listed in Table $1$. Regular solutions are mixtures of two low molecular weight species with $N_\mathrm{A} = N_\mathrm{B} = 1$. Polymer solutions are mixtures of one type of macromolecules ($N_\mathrm{A} = N \gg 1$) with a solvent, whose molecular volume defines the lattice site volume $v_0$ ($N_\mathrm{B} = 1$). Polymer blends correspond to the general case $1 \neq N_\mathrm{A} \neq N_\mathrm{B} \neq 1$. They are mixtures of two different species of macromolecules, so that $N_\mathrm{A}, N_\mathrm{B} \gg 1$.
Table $1$: Number of lattice sites occupied per molecule in different types of mixtures.
$N_\mathrm{A}$ $N_\mathrm{B}$
Regular solutions 1 1
Polymer solutions $N$ 1
Polymer blends $N_\mathrm{A}$ $N_\mathrm{B}$
The mixture occupies
$n = \frac{V_\mathrm{A}+V_\mathrm{B}}{v_0}$
lattice sites, of which component A occupies $V_\mathrm{A}/v_0 = n \phi_\mathrm{A}$ sites. We consider a microcanonical ensemble and can thus express entropy as
$s = k_\mathrm{B} \ln \Omega \ ,$
where $\Omega$ is the number of ways in which the molecules can be arranged on the lattice (number of microstates). In a homogeneous mixture, a molecule or monomer of component A can occupy any of the $n$ lattice sites. Before mixing, it can occupy only one of the lattice sites in volume $V_\mathrm{A}$. Hence, the entropy change for one molecule of species A is
\begin{align} \Delta S_\mathrm{A} & = k_\mathrm{B} \ln n - k_\mathrm{B} \ln \phi_\mathrm{A} n \ & = k_\mathrm{B} \ln \frac{n}{\phi_\mathrm{A} n} \ & = -k_\mathrm{B} \ln \phi_\mathrm{A} \ .\end{align}
The total mixing entropy for both species is
$\Delta s_\mathrm{mix} = -k_\mathrm{B} \left( n_\mathrm{A} \ln \phi_\mathrm{A} + n_\mathrm{B} \ln \phi_\mathrm{B} \right) \ . \label{eq:s_mix_lattice}$
We note the analogy with the expression that we obtained in phenomenological thermodynamics for an ideal mixture of ideal gases, where we used the molar fraction $x_i$ instead of the volume fraction $\phi_i$. For ideal gases, $V_i \propto n_i$ and thus $\phi_i = x_i$. Equation \ref{eq:s_mix_lattice} generalizes the result to any ideal mixture in a condensed phase. The mixture is ideal because we did not yet consider the energy of mixing and thus could get away with using a microcanonical ensemble.
For discussion it is useful to convert the extensive quantity $\Delta s_\mathrm{mix}$ to the intensive entropy of mixing per lattice site,
$\Delta \overline{S}_\mathrm{mix} = -k_\mathrm{B} \left( \frac{\phi_\mathrm{A}}{N_\mathrm{A}} \ln \phi_\mathrm{A} + \frac{\phi_\mathrm{B}}{N_\mathrm{B}} \ln \phi_\mathrm{B} \right) \ , \label{eq:s_mix_polymer}$
where we have used the number of molecules per species $n_i = n \phi_i / N_i$ and normalized by the total number $n$ of lattice sites.
For a regular solution with $N_\mathrm{A} = N_\mathrm{B} = 1$ we obtain the largest entropy of mixing at given volume fractions of the components,
$\Delta \overline{S}_\mathrm{mix} = -k_\mathrm{B} \left( \phi_\mathrm{A} \ln \phi_\mathrm{A} + \phi_\mathrm{B} \ln \phi_\mathrm{B} \right) \ \textrm{(regular solutions)} \ .$
For a polymer solution with $N_\mathrm{A} = N \gg 1$ and $N_\mathrm{B} = 1$ we have
\begin{align} \Delta \overline{S}_\mathrm{mix} & = -k_\mathrm{B} \left( \frac{\phi_\mathrm{A}}{N} \ln \phi_\mathrm{A} + \phi_\mathrm{B} \ln \phi_\mathrm{B} \right) \ & \approx -k_\mathrm{B} \phi_\mathrm{B} \ln \phi_\mathrm{B} \label{eq:mix_poly_soln_approx} \ ,\end{align}
where the approximation in Equation \ref{eq:mix_poly_soln_approx} holds for $\phi_\mathrm{B} \gg 1/N$, i.e., for dissolving a polymer and even for any appreciable swelling of a high-molecular-weight polymer by a solvent. For polymer blends, Equation \ref{eq:s_mix_polymer} holds with $N_\mathrm{A}, N_\mathrm{B} \gg 1$. Compared to formation of a regular solution or a polymer solution, the mixing entropy for a polymer blend is negligibly small, which qualitatively explains the difficulty of producing polymer blends. Nevertheless, the entropy of mixing is always positive, and thus the Helmholtz free energy $\Delta \overline{F}_\mathrm{mix} = -T \Delta \overline{S}_\mathrm{mix}$ is always negative, so that an ideal mixture of two polymers should form spontaneously. To see what happens in real mixtures, we have to consider the energetics of mixing.
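The scaling of the mixing entropy with $N_\mathrm{A}$ and $N_\mathrm{B}$ is easy to verify numerically. The following minimal Python sketch (an illustration, not part of the original text; the chain length of 1000 is an assumed value) evaluates Equation \ref{eq:s_mix_polymer} for the three cases of Table $1$ at equal volume fractions:

```python
import numpy as np

def mixing_entropy_per_site(phi_A, N_A, N_B):
    """Delta S_mix per lattice site in units of k_B (lattice model)."""
    phi_B = 1.0 - phi_A
    return -(phi_A / N_A * np.log(phi_A) + phi_B / N_B * np.log(phi_B))

phi = 0.5  # equal volume fractions of A and B
print("regular solution:", mixing_entropy_per_site(phi, 1, 1))        # ln 2 ~ 0.693
print("polymer solution:", mixing_entropy_per_site(phi, 1000, 1))     # ~0.347
print("polymer blend:   ", mixing_entropy_per_site(phi, 1000, 1000))  # ~7e-4
```

The blend value is three orders of magnitude smaller than that of the regular solution, in line with the discussion above.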
Before doing so, we note the limitations of the simple lattice model. We have neglected the conformational entropy of the polymer, which will be discussed in Section [subsection:conf_entropy]. This amounts to the assumption that conformational entropy does not change on mixing. For blends of polymers, this is a very good assumption, whereas in polymer solutions there is often an excluded volume that reduces conformational space. We have also neglected the small volume change that occurs on mixing, most notably for regular solutions. For polymer solutions and blends this volume change is very small.
Energy of Binary Mixing
To discuss the internal energy contribution to the free energy of mixing, we continue using the simplified lattice model. In particular, we consider mixing at constant volume and we assume that attractive or repulsive interactions between lattice sites are sufficiently small to not perturb random distributions of solvent molecules and monomers on lattice sites. We also ignore the connectivity of the polymer chain, which would otherwise preclude a truly random distribution of the monomers over the lattice sites. Regular solution theory, as we consider it here, is a mean-field approach in which the interaction at a given lattice site is approximated by a mean interaction with the other lattice sites. This neglects correlations. Although the model may appear crude (as many models in polymer physics do), it provides substantial insight and an expression that fits experimental data surprisingly well (as is the case for many crude models in polymer physics).
We start by defining three pairwise interaction energies $u_\mathrm{AA}$, $u_\mathrm{AB}$, and $u_\mathrm{BB}$ between adjacent sites of the lattice. For random distribution, the probability that a molecule or monomer A has a neighbor A is $\phi_\mathrm{A}$ and the probability that it has a neighbor B is $1-\phi_\mathrm{A}$. We neglect boundary effects, as the ratio between the number of surface sites and inner sites is very small for a macroscopic system. The mean-field interaction energy per lattice site occupied by an A unit is thus
$U_\mathrm{A} = \phi_\mathrm{A} u_\mathrm{AA} + \left( 1- \phi_\mathrm{A} \right) u_\mathrm{AB} \label{eq:uA_mix}$
and the corresponding expression for a lattice site occupied by a B unit is
$U_\mathrm{B} = \phi_\mathrm{A} u_\mathrm{AB} + \left( 1- \phi_\mathrm{A} \right) u_\mathrm{BB} \ . \label{eq:uB_mix}$
To continue, we need to specify the lattice, as the number of sites $a$ adjacent to the site under consideration depends on that. For a cubic lattice we would have $a = 6$. We keep $a$ as a parameter in the hope that we can eliminate it again at a later stage. If we compute a weighted sum of the expressions (Equation \ref{eq:uA_mix}) and (Equation \ref{eq:uB_mix}) we double count each pairwise interaction, as we will encounter it twice. Hence, total interaction energy of the mixture is
$u = \frac{a n}{2} \left[ \phi_\mathrm{A} U_\mathrm{A} + \left( 1- \phi_\mathrm{A} \right) U_\mathrm{B} \right] \ , \label{eq:umix_total}$
where we have used the probability $\phi_\mathrm{A}$ of encountering a site occupied by a unit A and $\left( 1- \phi_\mathrm{A} \right)$ of encountering a site occupied by a unit B. By inserting Eqs. \ref{eq:uA_mix} and \ref{eq:uB_mix} into Equation \ref{eq:umix_total} and abbreviating $\phi_\mathrm{A} = \phi$, we obtain
\begin{align} u & = \frac{a n}{2} \left\{ \phi \left[ \phi u_\mathrm{AA} + \left( 1- \phi \right) u_\mathrm{AB} \right] + \left( 1 - \phi \right) \left[ \phi u_\mathrm{AB} + \left( 1- \phi \right) u_\mathrm{BB} \right] \right\} \ & = \frac{a n}{2} \left[ \phi^2 u_\mathrm{AA} + 2 \phi \left( 1- \phi \right) u_\mathrm{AB} + \left( 1- \phi \right)^2 u_\mathrm{BB} \right] \ .\end{align}
Before mixing the interaction energy per site in pure A is $a u_\mathrm{AA}/2$ and in B $a u_\mathrm{BB}/2$. Hence, the total interaction energy before mixing is
$u_0 = \frac{a n}{2} \left[ \phi u_\mathrm{AA} + \left( 1- \phi \right) u_\mathrm{BB} \right] \ ,$
so that we obtain for the energy change $\Delta u = u - u_0$ on mixing
\begin{align} \Delta u & = \frac{a n}{2} \left[ \phi^2 u_\mathrm{AA} + 2 \phi \left( 1- \phi \right) u_\mathrm{AB} + \left( 1- \phi \right)^2 u_\mathrm{BB} - \phi u_\mathrm{AA} - \left( 1- \phi \right) u_\mathrm{BB} \right] \ & = \frac{a n}{2} \left[ \left(\phi^2 - \phi \right) u_\mathrm{AA} + 2 \phi \left( 1- \phi \right) u_\mathrm{AB} + \left( 1 - 2\phi + \phi^2 - 1 + \phi \right) u_\mathrm{BB} \right] \ & = \frac{a n}{2} \left[ \phi \left( \phi - 1 \right) u_\mathrm{AA} + 2 \phi \left( 1- \phi \right) u_\mathrm{AB} + \phi \left( \phi - 1 \right) u_\mathrm{BB} \right] \ & = \frac{a n}{2} \phi \left( 1 - \phi \right) \left(2 u_\mathrm{AB} - u_\mathrm{AA} - u_\mathrm{BB} \right) \ .\end{align}
We again normalize by the number $n$ of lattice sites to arrive at the energy change per site on mixing:
$\Delta \overline{U}_\mathrm{mix} = \frac{a}{2} \phi \left( 1 - \phi \right) \left(2 u_\mathrm{AB} - u_\mathrm{AA} - u_\mathrm{BB} \right) \ .$
For discussion we need an expression that characterizes the mixing energy per lattice site as a function of composition $\phi$ and that can be easily combined with the mixing entropy to free energy. The Flory interaction parameter,
$\chi = \frac{a}{2} \cdot \frac{2 u_\mathrm{AB} - u_\mathrm{AA} - u_\mathrm{BB}}{k_\mathrm{B} T} \ ,$
elegantly eliminates the number of adjacent lattice sites and provides just such an expression:
$\Delta \overline{U}_\mathrm{mix} = \chi \phi \left( 1 - \phi \right) k_\mathrm{B} T \ .$
Introducing such a parameter is an often-used trick when working with crude models. If the parameter is determined experimentally, the expression may fit data quite well, because part of the deviations of reality from the model can be absorbed by the parameter and its dependence on state variables. We finally obtain the Flory-Huggins equation for the Helmholtz free energy of mixing, $\Delta \overline{F}_\mathrm{mix} = \Delta \overline{U}_\mathrm{mix} - T \Delta \overline{S}_\mathrm{mix}$,
$\Delta \overline{F}_\mathrm{mix} = k_\mathrm{B} T\left[ \frac{\phi}{N_\mathrm{A}} \ln \phi + \frac{1-\phi}{N_\mathrm{B}} \ln \left(1-\phi \right) + \chi \phi \left( 1 - \phi \right) \right] \ . \label{eq:Flory_Huggins}$
As the entropy contribution (the first two terms in the brackets on the right-hand side of Equation \ref{eq:Flory_Huggins}) to $\Delta \overline{F}_\mathrm{mix}$ is always negative, entropy always favors mixing. The sign of $\Delta \overline{F}_\mathrm{mix}$ depends on the sign of the Flory parameter $\chi$ and on the ratio between the energy and entropy terms. The Flory parameter is negative, and thus favors mixing, if $2 u_\mathrm{AB} < u_\mathrm{AA} + u_\mathrm{BB}$, i.e., if the interaction in AB pairs is more attractive than the mean interaction in AA and BB pairs. Such cases occur, but are rare. In most cases, the Flory parameter is positive. Since the entropy terms are very small for polymer blends, such blends tend to phase separate. In fact, high molecular weight poly(styrene) with natural isotope abundance phase separates from deuterated poly(styrene).
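To see how the balance between the small entropy terms and a positive $\chi$ plays out, the following Python sketch (an illustration, not from the original text; $N = 1000$ is an assumed chain length) evaluates Equation \ref{eq:Flory_Huggins} at $\phi = 0.5$ for a symmetric blend. The sign change of $\Delta \overline{F}_\mathrm{mix}$ with increasing $\chi$ signals the tendency to phase separate:

```python
import numpy as np

def f_mix(phi, chi, N_A, N_B):
    """Flory-Huggins free energy of mixing per lattice site, in units of k_B*T."""
    return (phi / N_A * np.log(phi)
            + (1 - phi) / N_B * np.log(1 - phi)
            + chi * phi * (1 - phi))

N = 1000  # assumed degree of polymerization for both components
for chi in (0.0, 0.002, 0.004):
    print(f"chi = {chi:.3f}: F_mix(phi = 0.5) = {f_mix(0.5, chi, N, N):+.2e} k_B T")
```

For a symmetric blend the entropy term at $\phi = 0.5$ is only $-(\ln 2)/N$ per site in units of $k_\mathrm{B}$, so a $\chi$ of order $1/N$ already suffices to make $\Delta \overline{F}_\mathrm{mix}$ positive.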
Ideal Chain Model
Most polymer chains have rotatable bonds as well as bond angles along the polymer backbone that differ from 180$^\circ$. This leads to flexibility of the chain. Even if the rotations are not free, but give rise to only $n_\mathrm{rot}$ rotameric states per rotatable bond, the number of possible chain conformations becomes vast: for $N_\mathrm{rot}$ rotatable bonds, the number of distinct conformations is $n_\mathrm{rot}^{N_\mathrm{rot}}$. The simplest useful model for such a flexible chain is the freely jointed chain model. Here we assume bond vectors that all have the same length $l = |\vec{r}_i|$, where $\vec{r}_i$ is the bond vector of the $i^\mathrm{th}$ bond. If we denote the angle between bond vectors $\vec{r}_i$ and $\vec{r}_j$ by $\theta_{ij}$, we can write their scalar product as
$\vec{r}_i \cdot \vec{r}_j = l^2 \cos \theta_{ij} \ . \label{eq:fjc_cons}$
This scalar product is of interest, as we can use it to compute the mean-square end-to-end distance $\langle R^2 \rangle$ of an ensemble of chains, which is the simplest parameter that characterizes the spatial dimension of the chain. With the end-to-end distance vector of a chain with $n$ bonds,
$\vec{R}_n = \sum_{i=1}^n \vec{r}_i \ ,$
we have
\begin{align} \langle R^2 \rangle & = \langle \vec{R}_n^2 \rangle \ & = \langle \vec{R}_n \cdot \vec{R}_n \rangle \ & = \left\langle \left( \sum_{i=1}^n \vec{r}_i \right) \cdot \left( \sum_{j=1}^n \vec{r}_j \right) \right\rangle \ & = \sum_{i=1}^n \sum_{j=1}^n \langle \vec{r}_i \cdot \vec{r}_j \rangle \ .\end{align}
By using Equation \ref{eq:fjc_cons} we find
$\langle R^2 \rangle = l^2 \sum_{i=1}^n \sum_{j=1}^n \langle \cos \theta_{ij} \rangle \ . \label{eq:fjc_double_sum}$
In the freely jointed chain model we further assume that there are no correlations between the directions of different bond vectors, $\langle \cos \theta_{ij} \rangle = 0$ for $i \neq j$. Then, the double sum in Equation \ref{eq:fjc_double_sum} has only $n$ non-zero terms for $i=j$ with $\cos \theta_{ij} = 1$. Hence,
$\langle R^2 \rangle = n l^2 \ . \label{eq:eer_fjc}$
This again appears to be a crude model, but we shall now rescue it by redefining $l$. In an ideal polymer chain we can at least assume that there is no interaction between monomers that are separated by many other monomers,
$\lim\limits_{|i-j| \rightarrow \infty} \langle \cos \theta_{ij} \rangle = 0 \ .$
Furthermore, for a given bond vector $\vec{r}_i$ the sum over all correlations with other bond vectors converges to some finite number that depends on $i$,
$\sum_{j=1}^n \langle \cos \theta_{ij} \rangle = C'(i) \ .$
Therefore, when including the correlations, Equation \ref{eq:fjc_double_sum} can still be simplified to
$\langle R^2 \rangle = l^2 \sum_{i=1}^n C'(i) = C_n n l^2 \ ,$
where Flory’s characteristic ratio $C_n$ is the average value of $C'(i)$ over all backbone bonds of the chain.
In general, $C_n$ depends on $n$, but for very long chains it converges to a value $C_\infty$. For sufficiently long chains, we can thus approximate
$\langle R^2 \rangle \approx n C_\infty l^2 \ ,$
which has the same dependence on $n$ and $l$ as the crude model of the freely jointed chain, Equation \ref{eq:eer_fjc}. Hence, we can define an equivalent freely jointed chain with $N$ Kuhn segments of length $b$. From
$\langle R^2 \rangle = N b^2 \approx n C_\infty l^2 \label{eq:Kuhn_R2}$
and the length of the maximally stretched equivalent chain, the contour length $R_\mathrm{max}$,
$R_\mathrm{max} = N b \ ,$
we obtain
$N = \frac{R_\mathrm{max}^2}{C_\infty n l^2}$
and the Kuhn length
$b = \frac{\langle R^2 \rangle}{R_\mathrm{max}} = \frac{C_\infty n l^2}{R_\mathrm{max}} \ .$
Typical values of $C_\infty$ for synthetic polymers range from 4.6 for 1,4-poly(isoprene) to 9.5 for atactic poly(styrene) with corresponding Kuhn lengths of 8.2 Å to 18 Å, respectively.
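As a numerical illustration, the Kuhn length and segment number follow directly from Equation \ref{eq:Kuhn_R2} once $n$, $l$, $C_\infty$, and $R_\mathrm{max}$ are specified. The following Python sketch uses assumed chain parameters, not data from the text; in particular, the factor 0.83 relating $R_\mathrm{max}$ to $nl$ for an all-trans carbon backbone is an assumed geometry factor:

```python
n = 10_000       # number of backbone bonds (assumed)
l = 1.54e-10     # C-C bond length in m (typical value)
C_inf = 9.5      # Flory characteristic ratio, e.g. atactic poly(styrene)

R2 = C_inf * n * l**2    # mean-square end-to-end distance, Eq. (Kuhn_R2)
R_max = 0.83 * n * l     # contour length of the all-trans backbone (assumed geometry)
b = R2 / R_max           # Kuhn length
N = R_max**2 / R2        # number of Kuhn segments
print(f"b = {b * 1e10:.1f} Angstrom, N = {N:.0f}")  # ~17.6 Angstrom, N ~ 725
```

The resulting Kuhn length of roughly 18 Å is consistent with the value quoted above for atactic poly(styrene).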
At this point we have found the mean-square end-to-end distance as a parameter of an equilibrium macrostate. If we stretch the chain to a longer end-to-end distance, it is no longer at equilibrium and must have larger free energy. Part of this increase in free energy must come from a decrease in entropy that stretching induces by reducing the number of accessible chain conformations. It turns out that this entropic contribution is the major part of the increase in free energy, typically 90%. The tendency of polymer chains to contract after they have been stretched is thus mainly an entropic effect. To quantify it, we need a probability distribution for the end-to-end vectors and to that end, we introduce a concept that is widely used in natural sciences.
Random Walk
The freely jointed chain model explicitly assumes that the direction of the next Kuhn segment is uncorrelated to the directions of all previous Kuhn segments. Where the chain end will be located after the next step that prolongs the chain by one segment depends only on the location of the current chain end. The freely jointed chain thus has aspects of a Markov chain. Each prolongation step is a random event and the trajectory of the chain in space a random walk.
Many processes can be discretized into individual steps. What happens in the next step may depend on only the current state or also on what happened in earlier steps. If it depends only on the current state, the process is memoryless and fits the definition of a Markov chain. A Markov chain where the events are analogous steps in some parameter space can be modeled as a random walk. A random walk is a mathematically formalized succession of random steps. A random walk on a lattice, where each step can only lead from a lattice point to a directly neighboring lattice point, is a particularly simple model.
We can use the concept of a random walk in combination with the concepts of statistical thermodynamics in order to solve the problem of polymer chain stretching and contraction. The problem is solved if we know the dependence of Helmholtz free energy on the length of the end-to-end vector. This, in turn, requires that we know the entropy and thus the probability distribution of the length of the end-to-end vector. This probability distribution is given by the number of possible random walks (trajectories) that lead to a particular end-to-end distance $\sqrt{\vec{R}^2}$.
For simplicity we start with a simpler example in one dimension that we can later extend to three dimensions. We consider the standard example in this field, a drunkard who has just left a pub. We assume that, starting at the pub door, he makes random steps forth and back along the road. What is the probability $P(N,x)$ that after $N$ steps he is at a distance of $x$ steps up the road from the pub door? The problem is equivalent to finding the number $W(N,x)$ of trajectories of length $N$ that end up $x$ steps from the pub door and dividing it by the total number of trajectories.
Any such trajectory consists of $N_+$ steps up the road and $N_-$ steps down the road, with the final position being $x = N_+ - N_-$. The number of such trajectories is, again, given by a binomial distribution (see Section [binomial_distribution])
$W(N,x) = \frac{\left( N_+ + N_-\right)!}{N_+! N_-!} = \frac{N!}{\left[ \left(N+x\right)/2\right] ! \left[ \left(N-x\right)/2\right] !} \ ,$
whereas the total number of trajectories is $2^N$, as the drunkard has two possibilities at each step. Hence,
$P(N,x) = \frac{1}{2^N} \cdot \frac{N!}{\left[ \left(N+x\right)/2\right] ! \left[ \left(N-x\right)/2\right] !} \ ,$
leading to
$\ln P(N,x) = -N \ln 2 + \ln(N!) - \ln \left(\frac{N+x}{2}\right)! - \ln \left(\frac{N-x}{2}\right)! \ .$
The last two terms on the right-hand side can be rewritten as
\begin{align} \ln \left(\frac{N + x}{2}\right)! = \ln \left(\frac{N}{2}\right)! + \sum_{s=1}^{x/2} \ln \left( \frac{N}{2} + s \right) \ \textrm{and} \ \ln \left(\frac{N - x}{2}\right)! = \ln \left(\frac{N}{2}\right)! - \sum_{s=1}^{x/2} \ln \left( \frac{N}{2} + 1 - s \right) \ ,\end{align}
which leads to
$\ln P(N,x) = -N \ln 2 + \ln(N!) - 2\ln \left(\frac{N}{2}\right)! - \sum_{s=1}^{x/2} \ln \left( \frac{N/2 + s}{N/2 + 1 - s} \right) \ . \label{eq:P_N_X_0}$
We now assume a long trajectory. In the range where $x \ll N$, which is realized in an overwhelming fraction of all trajectories, the numerator and denominator logarithms in the last term on the right-hand side of Equation \ref{eq:P_N_X_0} can be approximated by series expansion, $\ln(1+y) \approx y$ for $|y| \ll 1$, which gives
\begin{align} \ln \left( \frac{N/2 + s}{N/2 + 1 -s} \right) & = \ln \left( \frac{1 + 2s/N}{1 -2s/N + 2/N} \right) \ & = \ln \left( 1 + \frac{2s}{N} \right) - \ln\left( 1 - \frac{2s}{N} + 2/N \right) \ & \approx \frac{4s}{N} - \frac{2}{N} \ . \label{eq:Gauss_approx_0}\end{align}
Hence,
\begin{align} \sum_{s=1}^{x/2} \ln \left( \frac{N/2 + s}{N/2 + 1 - s} \right) & = \sum_{s=1}^{x/2} \left( \frac{4s}{N} - \frac{2}{N} \right) \ & = \frac{4}{N} \sum_{s=1}^{x/2} s - \frac{2}{N} \sum_{s=1}^{x/2} 1 \ & = \frac{4}{N} \cdot \frac{(x/2)(x/2+1)}{2} - \frac{x}{N} \ & = \frac{x^2}{2N} \ . \label{eq:Gauss_approx}\end{align}
Inserting Equation \ref{eq:Gauss_approx} into Equation \ref{eq:P_N_X_0} provides
$P(N,x) \approx \frac{1}{2^N} \cdot \frac{N!}{(N/2)!(N/2)!} \cdot \exp\left( - \frac{x^2}{2N} \right) \ ,$
where we recognize, in the last factor on the right-hand side, the approximation of the binomial distribution by a Gaussian distribution that we discussed in Section [binomial_distribution]. Using the improved formula of Stirling, Equation \ref{eq:Stirling_better}, for expressing the factorials, we have
$\frac{1}{2^N} \cdot \frac{N!}{(N/2)!(N/2)!} = \frac{1}{2^N} \frac{\sqrt{2 \pi N} N^N \exp(-N)}{\left(\sqrt{\pi N} (N/2)^{N/2} \exp(-N/2)\right)^2} = \sqrt{\frac{2}{\pi N}} \ ,$
which leads to the exceedingly simple result:
$P(N,x) = \sqrt{\frac{2}{\pi N}} \exp\left( - \frac{x^2}{2N} \right) \ .$
The drunkard, if given enough time and not falling into sleep, perfectly simulates a Gaussian distribution.
We may even further simplify this result by asking about the mean square displacement $\langle x^2 \rangle$, which is given by
$\langle x^2 \rangle = \int_{-\infty}^{\infty} x^2 P(N,x) \mathrm{d}x = \sqrt{\frac{2}{\pi N}} \int_{-\infty}^{\infty} x^2 \exp\left( - \frac{x^2}{2N} \right) \mathrm{d}x = N \ .$
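A quick numerical experiment (a sketch, not part of the original text) confirms both the Gaussian shape and the result $\langle x^2 \rangle = N$ by simulating many such walks:

```python
import numpy as np

rng = np.random.default_rng(0)
N_steps, n_walks = 100, 50_000
steps = rng.choice([-1, 1], size=(n_walks, N_steps))  # +-1 with equal probability
x = steps.sum(axis=1)                       # final position of each walk
print("simulated <x^2>:", (x**2).mean())    # close to 100
print("analytical  N  :", N_steps)
```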
Before we go on, we need to fix a problem that occurs when we interpret the discrete probabilities computed at this point as a continuous probability density distribution of $x$. In the discrete case, $W(N,x)$ can be non-zero only for either even or odd $x$, depending on whether $N$ is even or odd. Thus, to arrive at the proper probability distribution we need to divide by 2. Hence, we can express the probability density distribution for a one-dimensional random walk as
$\rho_\mathrm{1d}(x) = \frac{1}{\sqrt{2\pi \langle x^2 \rangle}} \exp\left( - \frac{x^2}{2\langle x^2 \rangle} \right) \ .$
This result no longer depends on step size, not even implicitly, because we have removed the dependence on step number $N$. Therefore, it can be generalized to three dimensions. Since the random walks along the three pairwise orthogonal directions in Cartesian space are independent of each other, we have
$\rho_\mathrm{3d}(x,y,z) \mathrm{d}x \mathrm{d}y \mathrm{d}z = \rho_\mathrm{1d}(x) \mathrm{d}x \cdot \rho_\mathrm{1d}(y) \mathrm{d}y \cdot \rho_\mathrm{1d}(z) \mathrm{d}z \ .$
At this point we relate the result to the conformational ensemble of an ideal polymer chain, using the Kuhn model discussed in Section [subsection:ideal_chain]. We pose the question of the distribution of mean-square end-to-end distances $\left\langle \vec{R}^2 \right\rangle$ with the Cartesian components of the end-to-end vector $\vec{R}$ being $x = R_x$, $y = R_y$, and $z = R_z$. According to Equation \ref{eq:Kuhn_R2}, we have
\begin{align} \left\langle \vec{R}^2 \right\rangle & = \left\langle R_x^2 \right\rangle + \left\langle R_y^2 \right\rangle + \left\langle R_z^2 \right\rangle \label{eq:R2_xyz} \ & = N b^2 \ .\end{align}
For symmetry reasons we have,
$\left\langle R_x^2 \right\rangle = \left\langle R_y^2 \right\rangle = \left\langle R_z^2 \right\rangle = \frac{N b^2}{3} \ ,$
leading to
$\rho_\mathrm{1d}(N,x) = \sqrt{\frac{3}{2 \pi N b^2}} \exp \left( -\frac{3R_x^2}{2N b^2} \right)$
and analogous expressions for $\rho_\mathrm{1d}(y)$ and $\rho_\mathrm{1d}(z)$. We have reintroduced the parameter $N$, which is now the number of Kuhn segments. Yet, by discussing a continuous probability density distribution, we have removed the dependence on a lattice model. This is necessary since the steps along dimensions $x$, $y$, and $z$ differ for each Kuhn segment. By using Equation \ref{eq:R2_xyz}, we find
$\rho_\mathrm{3d}(N,\vec{R}) = \left( \frac{3}{2 \pi N b^2} \right)^{3/2} \exp \left(-\frac{3 \vec{R}^2}{2 N b^2} \right) \ . \label{eq:rho3d_chain}$
The probability density attains a maximum at zero end-to-end vector.
Finally, we can pose the following question: If we let all chains of the ensemble start at the same point, how are the chain ends distributed in space? This is best pictured in a spherical coordinate system. Symmetry dictates that the distribution is uniform with respect to polar angles $\theta$ and $\phi$. The polar coordinate $R$ is equivalent to the end-to-end distance of the chain. To find the probability distribution for this end-to-end distance we need to include area $4\pi R^2$ of the spherical shells. Hence,
$\rho_\mathrm{3d}(N,R) \cdot 4 \pi R^2 \mathrm{d} R = 4 \pi \left( \frac{3}{2 \pi N b^2} \right)^{3/2} \exp \left(-\frac{3 R^2}{2 N b^2} \right) R^2 \mathrm{d} R \ .$
Because of this scaling with the volume of an infinitesimally thin spherical shell, the probability density distribution (Figure $\PageIndex{1A}$) for the end-to-end distance does not peak at zero distance. As seen in Figure $\PageIndex{1B}$ it is very unlikely to encounter a chain with $R > 2b\sqrt{N}$. Since the contour length is $R_\mathrm{max} = Nb$, we can conclude that at equilibrium almost all chains have end-to-end distances shorter than $2 R_\mathrm{max} / \sqrt{N}$.
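This prediction is easy to test by direct sampling. The following Python sketch (an illustration under the freely jointed chain assumptions, not code from the text) draws random chains of $N$ Kuhn segments and checks both $\langle R^2 \rangle = N b^2$ and the rarity of strongly stretched conformations:

```python
import numpy as np

rng = np.random.default_rng(1)
N, b, n_chains = 100, 1.0, 20_000
# isotropic random unit vectors: normalize Gaussian triples
v = rng.normal(size=(n_chains, N, 3))
v /= np.linalg.norm(v, axis=2, keepdims=True)
R = np.linalg.norm(b * v.sum(axis=1), axis=1)    # end-to-end distances
print("simulated <R^2>:", (R**2).mean())         # close to 100 = N b^2
print("fraction with R > 2b*sqrt(N):", (R > 2 * b * np.sqrt(N)).mean())  # small
```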
We need to discuss the validity of this result, because in approximating the discrete binomial distribution by a continuous Gaussian probability distribution we made the assumption $x \ll N$. Within the ideal chain model, this assumption corresponds to an end-to-end distance that is much shorter than the contour length $Nb$. If $R$ approaches $Nb$, the Gaussian distribution overestimates the true probability density. In fact, the Gaussian distribution predicts a small, but finite probability for the chain to be longer than its contour length, which is unphysical. The model can be refined to include such cases of strong stretching of the chain. For our qualitative discussion of entropic elasticity not too far from equilibrium, we can be content with Equation \ref{eq:rho3d_chain}.
Conformational Entropy and Free Energy
We may now ask the question of the dependence of free energy on chain extension $\vec{R}$. With the definition of Boltzmann entropy, Equation \ref{eq:Boltzmann_entropy}, and the usual identification $k = k_\mathrm{B}$ we have
$S(N,\vec{R}) = k_\mathrm{B} \ln \Omega(N,\vec{R}) \ .$
The probability density distribution in Equation \ref{eq:rho3d_chain} is related to the statistical weight $\Omega$ by
$\rho_\mathrm{3d}(N,\vec{R}) = \frac{\Omega(N,\vec{R})}{\int \Omega(N,\vec{R}) \mathrm{d} \vec{R}} \ ,$
because $\rho_\mathrm{3d}$ is the fraction of all conformations that have an end-to-end vector in the infinitesimally small interval between $\vec{R}$ and $\vec{R} + \mathrm{d}\vec{R}$. Hence,
\begin{align} S(N,\vec{R}) & = k_\mathrm{B} \ln \rho_\mathrm{3d}(N,\vec{R}) + k_\mathrm{B} \ln \left[ \int \Omega(N,\vec{R}) \mathrm{d} \vec{R} \right] \ & = -\frac{3}{2} k_\mathrm{B} \frac{\vec{R}^2}{N b^2} + \frac{3}{2} k_\mathrm{B} \ln \left( \frac{3}{2 \pi N b^2} \right) + k_\mathrm{B} \ln \left[ \int \Omega(N,\vec{R}) \mathrm{d} \vec{R} \right] \ . \label{eq:s_N_R_ideal_chain}\end{align}
The last two terms do not depend on $\vec{R}$ and thus constitute an entropy contribution $S(N,0)$ that is the same for all end-to-end distances, but depends on the number of segments $N$,
$S(N,\vec{R}) = -\frac{3}{2} k_\mathrm{B} \frac{\vec{R}^2}{N b^2} + S(N,0) \ .$
Since by definition the Kuhn segments of an ideal chain do not interact with each other, the internal energy is independent of $\vec{R}$. The Helmholtz free energy $F(N,\vec{R}) = U(N,\vec{R}) - T S(N,\vec{R})$ can thus be written as
$F(N,\vec{R}) = \frac{3}{2} k_\mathrm{B} T \frac{\vec{R}^2}{N b^2} + F(N,0) \ .$
It follows that the free energy of an individual chain attains a minimum at zero end-to-end vector, in agreement with our conclusion in Section [subsection:random_walk] that the probability density is maximal for a zero end-to-end vector. At longer end-to-end vectors, chain entropy decreases quadratically with vector length. Hence, the chain can be considered as an entropic spring. Elongation of the spring corresponds to separating the chain ends by a distance $R \ll N b$. The force required for this elongation is the derivative of Helmholtz free energy with respect to distance. For one dimension, we obtain
$f_x = -\frac{\partial F\left( N, \vec{R} \right)}{\partial R_x} = -\frac{3 k_\mathrm{B} T}{N b^2} \cdot R_x \ .$
For the three-dimensional case, the force is a vector that is linear in $\vec{R}$,
$\vec{f} = -\frac{3 k_\mathrm{B} T}{N b^2} \cdot \vec{R} \ ,$
i.e., the entropic spring satisfies Hooke’s law. The entropic spring constant is $3 k_\mathrm{B} T/(Nb^2)$.
Polymers are thus easier to stretch the larger their degree of polymerization (proportional to $N$), the longer the Kuhn segment $b$, and the lower the temperature $T$. In particular, the temperature dependence is counterintuitive: a polymer chain under strain will contract if the temperature is raised, since the entropic contribution to the Helmholtz free energy, which counteracts the strain, then increases.
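To put a number on the entropic spring constant $3 k_\mathrm{B} T/(Nb^2)$, here is a minimal sketch with assumed but typical values for $N$, $b$, and $T$ (these values are not from the text):

```python
k_B = 1.380649e-23        # Boltzmann constant in J/K
T, N, b = 300.0, 100, 1.0e-9   # room temperature, 100 Kuhn segments of 1 nm
k_spring = 3 * k_B * T / (N * b**2)   # entropic spring constant
print(f"k = {k_spring:.2e} N/m")      # ~1.2e-4 N/m, i.e. ~0.12 pN per nm of extension
```

Forces of a fraction of a piconewton per nanometer are indeed the scale probed in single-molecule stretching experiments on flexible polymers.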
Acid
The Arrhenius definition can be summarized as "Arrhenius acids form hydrogen ions in aqueous solution with Arrhenius bases forming hydroxide ions."
Introduction
In 1884, the Swedish chemist Svante Arrhenius proposed two specific classifications of compounds, termed acids and bases. When dissolved in an aqueous solution, certain ions were released into the solution. As defined by Arrhenius, acid-base reactions are characterized by acids, which dissociate in aqueous solution to form hydrogen ions (H+), and bases, which form hydroxide (OH-) ions.
Acids are defined as a compound or element that releases hydrogen (H+) ions into the solution (mainly water).
$HNO_3 (aq) + H_2O(l) \rightarrow H_3O^+ (aq) + NO_3^- (aq)$
In this reaction nitric acid (HNO3) disassociates into hydrogen (H+) and nitrate (NO3-) ions when dissolved in water. Bases are defined as a compound or element that releases hydroxide (OH-) ions into the solution.
$LiOH (aq) \rightarrow Li^+ (aq) + OH^- (aq)$

In this reaction lithium hydroxide (LiOH) dissociates into lithium (Li+) and hydroxide (OH-) ions when dissolved in water.
Bronsted Concept of Acids and Bases
In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently developed definitions of acids and bases based on the compounds' abilities to either donate or accept protons ($H^+$ ions). In this theory, acids are defined as proton donors; whereas bases are defined as proton acceptors. A compound that acts as both a Brønsted-Lowry acid and base together is called amphoteric.
The Brønsted-Lowry Theory of Acids and Bases
Brønsted-Lowry theory of acids and bases took the Arrhenius definition one step further, as a substance no longer needed to be composed of hydrogen (H+) or hydroxide (OH-) ions in order to be classified as an acid or base. For example, consider the following chemical equation:
$HCl \; (aq) + NH_3 \; (aq) \rightarrow NH_4^+ \; (aq) + Cl^- \; (aq)$
Here, hydrochloric acid (HCl) "donates" a proton (H+) to ammonia (NH3), which "accepts" it, forming a positively charged ammonium ion (NH4+) and a negatively charged chloride ion (Cl-). Therefore, HCl is a Brønsted-Lowry acid (donates a proton) while the ammonia is a Brønsted-Lowry base (accepts a proton). Also, Cl- is called the conjugate base of the acid HCl and NH4+ is called the conjugate acid of the base NH3.
• A Brønsted-Lowry acid is a proton (hydrogen ion) donor.
• A Brønsted-Lowry base is a proton (hydrogen ion) acceptor.
In this theory, an acid is a substance that can release a proton (like in the Arrhenius theory) and a base is a substance that can accept a proton. A basic salt, such as Na+F-, generates OH- ions in water by taking protons from water itself (to make HF):
$F^-_{(aq)} + H_2O_{(l)} \rightleftharpoons HF_{(aq)} + OH^-$
When a Brønsted acid dissociates, it increases the concentration of hydrogen ions in the solution, $[H^+]$; conversely, Brønsted bases dissociate by taking a proton from the solvent (water) to generate $[OH^-]$.
• Acid dissociation
$HA_{(aq)} \rightleftharpoons A^-_{(aq)} + H^+_{(aq)}$
• Acid Ionization Constant:
$K_a=\dfrac{[A^-][H^+]}{[HA]}$
• Base dissociation:
$B_{(aq)} + H_2O_{(l)} \rightleftharpoons HB^+_{(aq)} + OH^-_{(aq)}$
• Base Ionization Constant
$K_b = \dfrac{[HB^+][OH^-]}{[B]}$
Whether a substance acts as a Brønsted-Lowry acid or base can only be determined by observing the reaction. Water, for instance, acts as a base when it accepts a proton and as an acid when it donates one.
To determine whether a substance is an acid or a base, count the hydrogens on each substance before and after the reaction. If the number of hydrogens has decreased that substance is the acid (donates hydrogen ions). If the number of hydrogens has increased that substance is the base (accepts hydrogen ions). These definitions are normally applied to the reactants on the left. If the reaction is viewed in reverse a new acid and base can be identified. The substances on the right side of the equation are called conjugate acid and conjugate base compared to those on the left. Also note that the original acid turns into the conjugate base after the reaction is over.
Note
Acids are Proton Donors and Bases are Proton Acceptors
For such a reaction to reach equilibrium, a transfer of protons needs to occur: the acid gives a proton away and the base receives it. Acids and bases that work together in this fashion are called a conjugate pair, made up of a conjugate acid and a conjugate base.
$HA + Z \rightleftharpoons A^- + HZ^+$
HA stands for an acidic compound and Z stands for a basic compound.
• HA donates a proton (H+) to Z, forming HZ+ and leaving behind A-.
• Z accepts the proton from HA, which forms HZ+.
• A- becomes the conjugate base of HA; in the reverse reaction it accepts a proton from HZ+ to recreate HA and maintain the equilibrium.
• HZ+ becomes the conjugate acid of Z; in the reverse reaction it donates a proton to A-, recreating Z and maintaining the equilibrium.
Questions
1. Why is $HA$ an Acid?
2. Why is $Z^-$ a Base?
3. How can A- be a base when HA was an acid?
4. How can HZ+ be an acid when Z used to be a base?
5. Now that we understand the concept, let's look at an example with actual compounds! $HCl + H_2O \rightleftharpoons H_3O^+ + Cl^-$
• HCl is the acid because it donates a proton to H2O.
• H2O is the base because it accepts a proton from HCl.
• H3O+ is the conjugate acid of the base H2O; in the reverse reaction it donates a proton to Cl- and turns back into H2O.
• Cl- is the conjugate base of the acid HCl; in the reverse reaction it accepts a proton from H3O+ to return to HCl.
How can H2O be a base? I thought it was neutral?
Answers
1. It has a proton that can be transferred
2. It receives a proton from HA
3. A- is the conjugate base; it can accept a proton (H+) in order to return to HA and maintain the equilibrium
4. HZ+ is the conjugate acid; it can donate its proton in order to return to its previous state of Z
5. In the Brønsted-Lowry theory, what makes a compound an acid or a base is whether it donates or accepts protons. If the H2O in a different reaction were instead donating an H+ rather than accepting one, it would be an acid!
Contributors and Attributions
• Sarah Rundle (UCD), Charles Ophardt, Professor Emeritus, Elmhurst College; Virtual Chembook
Learning Objectives
• Define the fraction of dissociation of a weak electrolyte.
• Calculate the fraction of dissociation of a weak acid or base.
• Sketch the fraction of dissociation as a function of concentration.
A chemical equilibrium involving dissociation can be represented by the following reaction.
$\ce{AB \rightleftharpoons A + B}, \hspace{20px} K = \ce{\dfrac{[A][B]}{[AB]}}$
The concept of equilibrium has been discussed in Mass Action Law and further discussed in Weak Acids and Bases, Ka, Kb, and Kw, and Exact pH Calculations.
Further, if C is the initial concentration of $\ce{AB}$ before any dissociation takes place, and f is the fraction of dissociated molecules, the concentration of $\ce{AB}$ is (1-f)C when the system has reached equilibrium. The concentration of $\ce{A}$ and $\ce{B}$ will each then be f C. Note that C is also the total concentration. For convenience, we can write the concentration below the formula as:
$\begin{array}{cccccl} \ce{AB &\rightleftharpoons &A &+ &B&}\ (1-f)C &&fC &&fC &\leftarrow \ce{Concentration} \end{array}$
and the equilibrium constant, $K = \ce{\dfrac{[A][B]}{[AB]}}$ can be written as (using the concentration below the formula):
$K = \dfrac{(f\cancel{C})(fC)}{(1-f)\cancel{C}} = \dfrac{f^2 C}{1 - f} \label{2}$
Variation of $f$ as a Function of $C$
How does $f$ vary as a function of $C$? Common sense tells us that $f$ has a value between 0 and 1 (0 < f < 1). For dilute solutions, $f \approx 1$, and for concentrated solutions, $f \approx 0$. In solving Equation $\ref{2}$ for f, we obtain:
$f = \dfrac{ - K + \sqrt{K^2 + 4 K C}}{2 C}$
$f$ is the fraction of molecules that have dissociated, also called the degree of ionization. When converted to a percentage, the term percent ionization is used. Even with this formula, it is still difficult to see how $f$ varies as $C$ changes. Table $1$ illustrates the variation for a moderate value of $K = 1.0 \times 10^{-5}$.
Table $1$: Variation of fraction of dissociation f as a function of concentration C when $K = 1.0 \times 10^{-5}$.
C $1\times 10^{-7}$ $1\times 10^{-6}$ $1\times 10^{-5}$ $1\times 10^{-4}$ $1\times 10^{-3}$ $1\times 10^{-2}$ 0.1 1.0 10 100
f 0.99 0.92 0.62 0.27 0.095 0.031 $1\times 10^{-2}$ $3\times 10^{-3}$ $1\times 10^{-3}$ $3\times 10^{-4}$
There is little change in $f$ when $C$ decreases from $1\times 10^{-6}$ to $1\times 10^{-7}$, but the change is fairly regular for every 10-fold decrease in concentration at higher concentrations. As an exercise, plot $f$ against a logarithmic scale of $C$ to see the shape of the variation. Normally, we will not encounter a solution as dilute as $C = 1.0 \times 10^{-7}$ M, and we will never encounter a solution as concentrated as 100 M either.
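The entries of Table $1$ can be reproduced directly from the positive root derived above. A minimal Python sketch (an illustration, not part of the original module):

```python
import numpy as np

def fraction_dissociated(C, K):
    """Positive root of the quadratic f^2*C + K*f - K = 0."""
    return (-K + np.sqrt(K**2 + 4 * K * C)) / (2 * C)

K = 1.0e-5
for C in [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1.0, 10, 100]:
    print(f"C = {C:8.1e} M    f = {fraction_dissociated(C, K):.2g}")
```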
Questions
1. What is the fraction of dissociation for a strong acid?
2. What is the fraction of dissociation for a compound that does not dissociate?
3. A 0.1 M solution of an acid $\ce{HB}$ has half of its molecules dissociated. Calculate the acidity constant Ka.
4. The equilibrium constant for a weak base $\ce{B}$ is 0.05; what is the fraction of dissociation if the concentration is 0.10 M?
5. The equilibrium constant for a weak base $\ce{B}$ is 1.0e-3; what is the fraction of dissociation if the concentration is 0.10 M?
Solutions
1. Answer: 1
Consider...
A strong acid is almost completely dissociated.
2. Answer: 0
Consider...
Since there is no dissociation, the fraction is zero. This is a redundant question.
3. Answer: 0.05
Consider...
$\begin{array}{cccccl} \ce{HB &\rightleftharpoons &H+ &+ &B- &}\ 0.05 &&0.05 &&0.05\: \ce M &\leftarrow \textrm{Concentrations at equilibrium} \end{array}$
K = ?
4. Answer: 0.5
Consider...
See the previous question.
5. Answer: 0.095
Consider...
Make a table to see the variation of f when the concentration changes from 1e-7 to 100 in steps of 10 folds as given previously.
Learning Objectives
• Calculate the pH when two weak acids are present in a solution.
• Calculate the pH when the concentration of the acid is very dilute.
• Calculate the pH by including the autoionization of water.
A common paradigm in solving for the pH of weak acids and bases is to treat the equilibria of solutions containing one weak acid or one weak base. In most cases, the amount of $\ce{H+}$ from the autoionization of water is negligible. For very dilute solutions, however, the $\ce{H+}$ ions from the autoionization of water must also be taken into account. Thus, a strategy is given here to deal with these systems.
When two or more acids are present in a solution, the concentration of $\ce{H+}$ (or pH) of the solution depends on the concentrations of the acids and their acidic constants Ka. The hydrogen ion is produced by the ionization of all acids, but the ionizations of the acids are governed by their equilibrium constants, Ka's. Similarly, the concentration of $\ce{OH-}$ ions in a solution containing two or more weak bases depends on the concentrations and Kb values of the bases. For simplicity, we consider two acids in this module, but the strategies used to discuss equilibria of two acids apply equally well to that of two bases.
Dissociation of Acids and Bases in Water Couple Two Equilibria with a Common Ion H+
If the pH is between 6 and 8, the contribution due to autoionization of water to $\ce{[H+]}$ should also be considered. When autoionization of water is considered, the method is called the exact pH calculation or the exact treatment. This method is illustrated below. When the contribution of pH due to self-ionization of water cannot be neglected, there are two coupled equilibria to consider:
$\ce{HA \rightleftharpoons H+ + A-}$
ICE Table $\ce{HA}$ $\ce{H+}$ $\ce{A-}$
Initial $C$ 0 0
Change $- x$ ${\color{Red} x}$ $x$
Equilibrium $C-x$ ${\color{Red} x}$ $x$
and
$\ce{H2O \rightleftharpoons H+ + OH-}$
ICE Table $\ce{H2O}$ $\ce{H+}$ $\ce{OH-}$
Equilibrium - ${\color{Red} y}$ $y$
Thus,
\begin{align} \ce{[H+]} &= ({\color{Red} x+y})\ \ce{[A- ]} &= x\ \ce{[OH- ]} &= y \end{align}
and the two equilibrium constants are
$K_{\large\textrm{a}} = \dfrac{ ({\color{Red} x + y})\, x}{C - x} \label{1}$
and
$K_{\large\textrm{w}} = ({\color{Red} x + y})\, y \label{2}$
Although you may use the method of successive approximation, the formula to calculate the pH can be derived directly from Equations $\ref{1}$ and $\ref{2}$. Solving for ${\color{Red} x}$ from Equation $\ref{2}$ gives
$x = \dfrac{K_{\large\textrm{w}}}{y} - y$
and substituting this expression into $\ref{1}$ results in
$K_{\large\textrm{a}} = \dfrac{({\color{Red} x+y}) \left(\dfrac{K_{\large\textrm{w}}}{y} - y\right)}{C - \dfrac{K_{\large\textrm{w}}}{y} + y}$
Rearrange this equation to give:
\begin{align} \ce{[H+]} &= ({\color{Red} x+y})\ &= \dfrac{C - \dfrac{K_{\large\textrm{w}}}{y} + y}{\dfrac{K_{\large\textrm{w}}}{y} - y}\, K_{\large\textrm{a}} \end{align}
Note that
$\dfrac{K_{\large\textrm w}}{y} = \ce{[H+]}$
so
$y = \dfrac{K_{\large\textrm w}}{\ce{[H+]}}.$
Thus, we get:
$\ce{[H+]} = \dfrac{ C - \ce{[H+]} + \dfrac{K_{\large\textrm{w}}}{\ce{[H+]}}}{\ce{[H+]} - \dfrac{K_{\large\textrm{w}}}{\ce{[H+]}}}\, K_{\large\textrm{a}} \label{Exact}$
As written, Equation $\ref{Exact}$ is complicated, but can be put into a polynomial form
$\ce{[H+]^3} + K_{\large\textrm{a}} \ce{[H+]^2} - \left( K_{\large\textrm{w}} + C K_{\large\textrm{a}} \right) \ce{[H+]} - K_{\large\textrm{w}} K_{\large\textrm{a}} =0 \label{Exact2}$
Solving for the exact hydronium concentration requires solving a third-order polynomial. While this is analytically feasible, it is an awkward equation to handle. Instead, we often consider two approximations to Equation $\ref{Exact2}$ that can be made under limiting conditions.
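Numerically, however, the cubic is trivial to handle. A minimal Python sketch (an illustration, not part of the original module) finds the positive real root of Equation $\ref{Exact2}$; the values chosen reproduce Example $2$ below:

```python
import numpy as np

def exact_pH(C, Ka, Kw=1.0e-14):
    """Exact pH of a weak acid from the cubic polynomial, Equation (Exact2)."""
    coeffs = [1.0, Ka, -(Kw + C * Ka), -Kw * Ka]
    roots = np.roots(coeffs)
    H = max(r.real for r in roots if r.real > 0)  # the single positive real root
    return -np.log10(H)

print(exact_pH(0.0010, 4.0e-11))  # ~6.65, matching Example 2 below
```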
Case 1: High Concentration Approximation
If $[H^+] > 1 \times 10^{-6}$, then
$\dfrac{K_w}{[H^+]} < 1 \times 10^{-8}.$
This is small indeed compared to $[H^+]$ and $C$ in Equation $\ref{Exact}$. Thus,
$[H^+] \approx \dfrac{C - [H^+]}{[H^+]} K_{\large\textrm{a}}$
$[H^+]^2 + K_{\large\textrm{a}} [H^+] - C K_{\large\textrm{a}} \approx 0 \label{Quad}$
Equation $\ref{Quad}$ is a quadratic equation with two solutions. However, only one will be positive and real:
$[H^+] \approx \dfrac{-K_{\large\textrm{a}} + \sqrt{K_{\large\textrm{a}}^2 + 4 C K_{\large\textrm{a}}}}{2}$
Case 2: Low Concentration Approximation
If $[H^+] \ll C$, then
$C - [H^+] \approx C$
Equation $\ref{Exact}$ can be simplified
\begin{align} [H^+] &\approx \dfrac{C}{[H^+]}\: K_{\large\textrm{a}}\ [H^+] &\approx \sqrt{C K_{\large\textrm{a}}} \end{align}
The treatment presented in deriving Equation $\ref{Exact}$ is more general, and may be applied to problems involving two or more weak acids in one solution.
Example $1$
Calculate the $\ce{[H+]}$, $\ce{[Ac- ]}$, and $\ce{[Cc- ]}$ when the solution contains 0.200 M $\ce{HAc}$ ($K_a = 1.8 \times 10^{-5}$), and 0.100 M $\ce{HCc}$ (the acidity constant $K_c = 1.4 \times 10^{-3}$). ($\ce{HAc}$ is acetic acid whereas $\ce{HCc}$ is chloroacetic acid).
Solution
Assume x and y to be the concentrations of $\ce{Ac-}$ and $\ce{Cc-}$, respectively, and write the concentrations below the equations:
$\begin{array}{ccccc} \ce{HAc &\rightleftharpoons &H+ &+ &Ac-}\ 0.200-x &&x &&x\ \ \ce{HCc &\rightleftharpoons &H+ &+ &Cc-}\ 0.100-y &&y &&y \end{array}$
$\ce{[H+]} = (x + y) \nonumber$
Thus, you have
$\dfrac{(x + y)\, x}{0.200 - x} = 1.8 \times 10^{-5} \label{Ex1.1}$
$\dfrac{(x + y)\, y}{0.100 - y} = 1.4\times 10^{-3} \label{Ex1.2}$
Solving for x and y from Equations $\ref{Ex1.1}$ and $\ref{Ex1.2}$ may seem difficult, but you can often make some assumptions to simplify the solution procedure. Since $\ce{HAc}$ is a weaker acid than $\ce{HCc}$, you expect x << y. Further, y << 0.100. Therefore, $x + y \approx y$ and $0.100 - y \approx 0.100$. Equation $\ref{Ex1.2}$ becomes:
$\dfrac{ ( y)\, y}{0.100} = 1.4 \times 10^{-3} \label{Ex1.2a}$
which leads to
\begin{align*} y &= (1.4 \times 10^{-3} \times 0.100)^{1/2}\ &= 0.012 \end{align*}
Substituting $y$ in Equation $\ref{Ex1.1}$ results in
$\dfrac{(x + 0.012)\, x}{0.200 - x} = 1.8 \times 10^{-5} \label{1'}$
This equation is easily solved, but you may further assume that $0.200 - x \approx 0.200$, since $x << 0.200$. Thus,
\begin{align*} x &= \dfrac{-0.012 + (1.44\times 10^{-4} + 1.44\times 10^{-5})^{1/2}}{2}\ &= 2.9\times 10^{-4}\:\: \longleftarrow \textrm{Small indeed compared to 0.200} \end{align*}
You had a value of 0.012 for y by neglecting the value of x in Equation $\ref{Ex1.2}$. You can now recalculate the value for y by substituting values for x and y in Equation $\ref{Ex1.2}$.
$\dfrac{(2.9\times 10^{-4} + y)\, y}{0.100 - 0.012} = 1.4\times 10^{-3} \label{2"}$
Solving for y in the above equation gives
$y = 0.011 \nonumber$
You have improved the y value from 0.012 to 0.011. Substituting the new value for y in a successive approximation to recalculate the value for x improves its value from $2.9 \times 10^{-4}$ to a new value of $3.2 \times 10^{-4}$. Use your calculator to obtain these values. Further refinement does not lead to any significant changes for x or y.
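The successive approximation carried out by hand above is easy to automate. This Python sketch (an illustration, not from the original module) alternately solves the two quadratics for y and x until the values stop changing:

```python
def two_acid_iteration(Ca, Ka, Cc, Kc, n_iter=20):
    """Successive approximation for two weak acids; x = [Ac-], y = [Cc-]."""
    x, y = 0.0, 0.0
    for _ in range(n_iter):
        # solve Kc = (x+y)*y/(Cc-y) for y at fixed x (quadratic in y)
        b_, c_ = Kc + x, -Kc * Cc
        y = (-b_ + (b_**2 - 4 * c_) ** 0.5) / 2
        # solve Ka = (x+y)*x/(Ca-x) for x at fixed y (quadratic in x)
        b_, c_ = Ka + y, -Ka * Ca
        x = (-b_ + (b_**2 - 4 * c_) ** 0.5) / 2
    return x, y

x, y = two_acid_iteration(0.200, 1.8e-5, 0.100, 1.4e-3)
print(f"[Ac-] = {x:.2e}, [Cc-] = {y:.2e}, [H+] = {x + y:.2e}")
```

The converged values, x of about $3.2\times 10^{-4}$ and y of about 0.011, agree with the hand calculation.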
Discussion
You should write down these calculations on your note pad, since reading alone does not lead to thorough understanding.
Example $2$
A weak acid $\ce{HA}$ has a $K_a$ value of $4.0 \times 10^{-11}$. What are the pH and the equilibrium concentration of $\ce{A-}$ in a solution of 0.0010 M $\ce{HA}$?
Solution
For the solution of this problem, two methods are given here. If you like the x and y representation, you may use method (a).
Method (a)
The two equilibrium equations are:
$\begin{array}{ccccc} \ce{HA &\rightleftharpoons &H+ &+ &A-};\ 0.0010-x &&x &&x\ \ \ce{H2O &\rightleftharpoons &H+ &+ &OH-}\ &&y &&y \end{array}$
$\ce{[H+]} = (x+y)$
\begin{align} \dfrac{(x+y)\, x}{0.0010-x} &= 4.0\times 10^{-11} \label{3}\ \ (x+y)\, y &= 1\times 10^{-14} \label{4} \end{align}
Assume y << x, and x << 0.0010, then you have
\begin{align} \dfrac{(x )\, x}{0.0010} &= 4.0\times 10^{-11} \label{3'} \ x &= (0.0010 \times 4.0\times 10^{-11})^{1/2}\ &= 2.0\times 10^{-7} \end{align}
Substituting $2.0\times 10^{-7}$ for x in Equation $\ref{4}$ and solving the quadratic equation for y gives,

$(2.0\times 10^{-7}+y)\, y = 1\times 10^{-14} \nonumber$
$y = 4.1\times 10^{-8} \nonumber$
Substituting $4.1 \times 10^{-8}$ in Equation $\ref{3}$, but still approximating 0.0010-x by 0.0010:
$\dfrac{(x+4.1\times 10^{-8})\, x}{0.0010} = 4.0\times 10^{-11} \label{3''}$
Solving this quadratic equation for a positive root results in
$x = 1.8 \times 10^{-7} \;\text{M} \longleftarrow \textrm{Recall }x = \ce{[A- ]} \nonumber$
\begin{align*} \ce{[H+]} &= x + y\ &= (1.8 + 0.41)\times 10^{-7}\ &= 2.2\times 10^{-7}\ \ce{pH} &= 6.65 \end{align*}
The next method uses the formula derived earlier.
Method (b)
Using the formula from the exact treatment, and using $2 \times 10^{-7}$ for all the $\ce{[H+]}$ values on the right hand side, you obtain a new value of $\ce{[H+]}$ on the left hand side,
\begin{align*} \ce{[H+]} &= \dfrac{C - \ce{[H+]} + \dfrac{K_{\large\textrm w}}{\ce{[H+]}}}{\ce{[H+]} - \dfrac{K_{\large\textrm w}}{\ce{[H+]}}} K_{\large\textrm a}\ &= 2.24\times 10^{-7}\ \ce{pH} &= 6.65 \end{align*}
The new $\ce{[H+]}$ enables you to recalculate $\ce{[A- ]}$ from the formula:
\begin{align*} (2.24\times 10^{-7}) \ce{[A- ]} &= C K_{\large\textrm a} \nonumber\ \ce{[A- ]} &= \dfrac{(0.0010) (4.0\times 10^{-11})}{2.24\times 10^{-7}} \nonumber\ &= 1.8\times 10^{-7} \nonumber \end{align*}
DISCUSSION
You may have attempted to use the approximation method:
\begin{align*} x &= (C K_{\large\textrm a})^{1/2} \nonumber\ &= 2.0\times 10^{-7}\: \mathrm{M\: A^-,\: or\: H^+;\: pH = 6.70} \nonumber \end{align*}
and obtained a pH of 6.70, which is greater than 6.65 by less than 1%. However, when an approximation is made, you cannot be sure how reliable the calculated pH of 6.70 is without checking it against the exact treatment.
Summary
Water is both an acid and a base due to the autoionization,
$\ce{H2O \rightleftharpoons H+ + OH-} \nonumber$
However, the amount of $\ce{H+}$ ions from water may be very small compared to the amount from an acid if the concentration of the acid is high. When calculating $\ce{[H+]}$ in an acidic solution, approximation method or using the quadratic formula has been discussed in the modules on weak acids.
Most students in General Chemistry are never taken beyond the traditional, algebraic treatment of acid-base equilibria. This is unfortunate for the following reasons:
• It can be an awful lot of work. Have you ever noticed that many of the acid-base systems most commonly encountered (phosphate, citrate, salts such as ammonium acetate, amino acids, EDTA) are rarely treated in standard textbooks? Treating these analytically requires setting up a series of mass- and charge-balance expressions which must be solved simultaneously.
• Most algebraic treatments are approximations anyway. Did you know, for example, that the exact calculation of the pH of a solution of a monoprotic weak acid requires the solution of a cubic equation? Carrying out the same operation for H3PO4 requires solution of a fifth-order polynomial! To simplify the calculations, approximations are made, sometimes unwittingly. The algebraic calculations that we actually carry out are almost never exact in the first place!
• Equilibrium constants are not really constant. Even if you carry out an exact calculation, the results will never be any more reliable than the equilibrium constants you use. But the values of these vary with the temperature and especially with the ionic content of the solution. The values listed in tables are rarely applicable to practical applications.
• All you get is a number: Algebraic approaches contribute almost nothing to the larger view of how an acid-base system behaves as the pH is changed.
Yes, I still teach my students how to set up a quadratic equation to calculate the pH of an acetic acid solution, but I insist that they also be able to do the same thing graphically. The major advantages of the graphical approach:
• Easy to do: the graphs are easily constructed and generally give pH values as good as those from algebra. Even if you sketch out the graph on the back of an envelope and without a straightedge, you can get results to within half a pH unit. A good way to amaze your friends!
• Provides an overall picture of the acid-base system. A glance at the graph shows you the approximate concentrations of all species present over a range of pH values, thus providing a bird's-eye view of the acid-base system as a whole.
• Reinforces important principles. In learning to sketch out and interpret log-concentration vs. pH plots, the student must consider such things as the nature of the equilibria occurring in solutions of the pure acid and of its conjugate base, conservation of protons, and the significance of the pK.
• Allows treatment of a wider range of systems. Graphical estimation of the pH of solutions of acids, bases and ampholytes of such as phosphate, citrate, amino acids, EDTA etc. are not all that much more difficult than treating a monoprotic or diprotic system.
Log-C vs. pH graphs are plotted on coordinates of the form shown above. Be sure you understand the y-axis; "4", for example, corresponds to a concentration of 10–4 M for whatever species we plot. Note that concentration increases with height on this graph, and that concentrations less than about 10–8 M are usually negligible for most practical purposes.
The two plots shown on this graph are no more than formal definitions of pH and pOH; for example, when the pH is 4, –log [H+] = 4 and [H+] = 10–4 M. (Notice that the ordinate is the negative of the log concentration, so the smaller numbers near the top of the scale refer to larger concentrations.)
The above graph of is of no use by itself, but it forms the basis for the construction of other graphs specific to a given acid-base system. You should be able to draw this graph from memory.
The H+ and OH log concentration lines are the same ones that we saw in Figure 1. The other two lines show how the concentrations of CH3COOH and of the acetate ion vary with the pH of the solution.
How do we construct the plots for [HAc] and [Ac]? If you look carefully at Fig 2, you will observe that each line is horizontal at the top, and then bends to become diagonal. There are thus three parameters that define these two lines: the location of their top, horizontal parts, their crossing points with the other lines, and the slopes of their diagonal parts.
The horizontal sections of these lines are placed at 3 on the ordinate scale, corresponding to the nominal acid concentration of 10–3 M. This value corresponds to
Ca = [HAc] + [Ac]
which you will recognize as the mass balance condition saying that "acetate" is conserved; Ca is the nominal "acid concentration" of the solution, and is to be distinguished from the concentration of the actual acidic species HAc.
At low pH values (strongly acidic solution) the acetate species is completely protonated, so [HAc] = 10–3 M and [Ac]=0. Similarly, at high pH, –log [Ac] = 3 and [HAc]=0. If the solution had some other nominal concentration, such as 0.1 M or 10–5 M, we would simply move the pair of lines up or down.
The diagonal parts of the lines have slopes of equal magnitude but opposite sign. It can easily be shown that these slopes, d(–log [HAc])/d(pH) etc., are ±1, corresponding to the slopes of the [OH] and [H+] lines. Using the latter as a guide, the diagonal portions of lines 3 and 4 can easily be drawn.
The crossing point of the plots for the acid and base forms corresponds to the condition [HAc]=[Ac]. You already know that this condition holds when the pH is the same as the pKa of the acid, so the pH coordinate of the crossing point must be 4.75 for acetic acid. The vertical location of the crossing point is found as follows: When [HAc] = [Ac], the concentration of each species must be Ca/2, or in this case 0.0005 M. The logarithm of 1/2 is –0.3, so a 50% reduction in the concentration of a species will shift its location down on the log concentration scale by 0.3 unit. The crossing point therefore falls at a log-C value of (–3) – 0.3 = –3.3. Knowing the value of log10 0.5 is one of the few new "facts" that must be learned in order to construct these graphs.
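For readers who prefer to generate rather than sketch such diagrams, the following Python sketch (assuming matplotlib is available; not part of the original text) plots the four lines for 0.001 M acetic acid from the exact species concentrations, from which the horizontal sections, the ±1 diagonals, and the crossing point at (4.75, –3.3) all emerge automatically:

```python
import numpy as np
import matplotlib.pyplot as plt

Ca, pKa, Kw = 1.0e-3, 4.75, 1.0e-14
pH = np.linspace(0, 14, 500)
H = 10.0**(-pH)
Ka = 10.0**(-pKa)
HAc = Ca * H / (H + Ka)    # concentration of the protonated form
Ac = Ca * Ka / (H + Ka)    # concentration of the acetate ion

plt.plot(pH, np.log10(H), label="[H+]")
plt.plot(pH, np.log10(Kw / H), label="[OH-]")
plt.plot(pH, np.log10(HAc), label="[HAc]")
plt.plot(pH, np.log10(Ac), label="[Ac-]")
plt.xlabel("pH"); plt.ylabel("log C"); plt.ylim(-8, 0); plt.legend(); plt.show()
```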
Of special interest in acid-base chemistry are the pH values of a solution of an acid and of its conjugate base in pure water; as you know, these correspond to the beginning and equivalence points in the titration of an acid with a strong base.
pH of an acid in pure water
Except for the special cases of extremely dilute solutions or very weak acids in which the autoprotolysis of water is a major contributor to the hydrogen ion concentration, the pH of a solution of an acid in pure water will be determined largely by the extent of the reaction
\[\ce{HAc + H2O -> H3O^{+} + Ac^{–}}\]
so that at equilibrium, the approximate relation \(\ce{[H3O^{+}] \approx [Ac^{-}]}\) will hold. The equivalence of these two concentrations corresponds to the point labeled 1 in Fig 3; this occurs at a pH of about 3.9, and this is the pH of a 0.001 M solution of acetic acid in pure water.
pH of a solution of the conjugate base
Now consider a 0.001M solution of sodium acetate in pure water. This, you will recall, corresponds to the composition of a solution of acetic acid that has been titrated to its equivalence point with sodium hydroxide. The acetate ion, being the conjugate base of a weak acid, will undergo hydrolysis according to
\[\ce{Ac^- + H2O -> HAc + OH^-}\]
As long as we can neglect the contribution of OH from the autoprotolysis of the solvent, we can expect the relation [HAc]=[OH] to be valid in such a solution. The equivalence of these two concentrations corresponds to the intersection point 3 in Fig 3; a 0.001M solution of sodium or potassium acetate should have a pH of about 8.
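Both graphical readings can be verified by solving the corresponding quadratic equilibrium conditions. A minimal Python sketch, assuming Ka = 10–4.75 for acetic acid and the 0.001 M nominal concentration used in the text:

```python
import math

Ca, Kw = 1e-3, 1e-14
Ka = 10 ** -4.75            # acetic acid

# Point 1: pure HAc, [H3O+] ~ [Ac-]  ->  x^2/(Ca - x) = Ka
x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * Ca)) / 2
print(f"0.001 M HAc:  pH = {-math.log10(x):.2f}")          # ~3.9

# Point 3: pure NaAc, [HAc] ~ [OH-]  ->  y^2/(Ca - y) = Kb = Kw/Ka
Kb = Kw / Ka
y = (-Kb + math.sqrt(Kb * Kb + 4 * Kb * Ca)) / 2
print(f"0.001 M NaAc: pH = {14 + math.log10(y):.2f}")      # ~7.9
```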
Putting it all together
The top part of this figure is the log-C plot for acetic acid that we saw previously, the blue dashed vertical lines now showing the pH of a 0.001 M solution of HAc before and after its titration with strong base.
The extension of these lines (and the one representing the pK) down to the bottom part of the figure locates the three points that define the titration curve for the acid.
Take a moment to verify that the lower figure is indeed a titration curve; that is, a plot of the pH as a function of the fraction of acid neutralized.
These two plots nicely sum up the complete acid-base chemistry of a monoprotic system.
Diprotic acids are no more complicated to treat graphically than monoprotic acids, something that definitely cannot be said of the exact algebraic treatment! The above example for oxalic acid (H2A = HOOCCOOH) is just the superposition of separate plots for the two acid-base systems H2A/HA– and HA–/A2–, whose pKs are 1.2 and 4.2. The system points corresponding to 0.01 M solutions of pure H2A and of A2– are determined just as in the monoprotic case. For a solution of the ampholyte HA–, several equilibria can be written, but the one that dominates is
\[2 HA^– \rightleftharpoons H_2A + A^{2–}\]
which leads to the approximation
\[[H_2A] \approx [A^{2-}]\]
thus fixing the pH at about 2.7. The same result can be obtained from the well-known approximation
\[pH = \dfrac{pK_1 + pK_2}{2}\]
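Substituting the two pKs quoted above for oxalic acid reproduces the graphical estimate:

\[pH \approx \dfrac{1.2 + 4.2}{2} = 2.7\]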
The quantitative treatment of a solution of a salt of a weak acid and a weak base is algebraically complicated, even when done to the crudest approximation. Graphically, it is a piece of cake! As you can see above, we just construct plots of the two acid-base systems on the same graph. The stoichiometry of the salt defines the system point at about pH 6.3, showing that the hydrolyses of the two systems do not quite "cancel out". Of course, this is still an approximation that neglects ionization of either the anion or cation, but it is probably as valid as any calculation likely to be carried out with ordinarily available data.
The Phosphate system
... but why stop at diprotic acids? With the phosphate system we have three pKs, and thus two ampholytes, H2PO4– and HPO42–.
A glance at the graph allows one to estimate the relative amounts of the two major species at any pH.
The pH of a solution of the acid H3PO4 or of the base PO43– is found from the system points 1 and 4 which are constructed in the same way as those for a monoprotic acid.
System points 2 and 3 for solutions of the two ampholytes provide only approximate pH values because of the effects of competing equilibria, but the accuracy is good enough for most applications.
Here it is: the most important of all acid-base systems in natural waters and physiology! Nothing new as far as theory goes; see the explanation given for oxalic acid. What's different here is the two sets of plots. The lower (more dilute) one corresponds to the concentration of dissolved CO2 present in water in equilibrium with the atmosphere, which has a normal CO2 partial pressure of about 0.0005 atm. You can see from this that when pure water is exposed to the atmosphere, its pH will fall from 7 to about 5 (but see the last paragraph below); thus all rain is in a sense "acid rain".
Note that around pH 8 (the pH of the ocean and of blood), over 99% of all carbonate is in the form of bicarbonate HCO3. As a result, the ocean acts as a sink for atmospheric CO2 and contains about 50 times as much as does the atmosphere. Similarly, CO2 resulting from cellular respiration is converted to HCO3 for transport through the blood stream, and is converted back to CO2 in the lungs.
The upper (0.001 M) set of plots corresponds to the approximate carbonate content of natural waters in contact with rocks and sediments (which normally contain carbonate components). It describes the situation in lakes, streams, ground waters and the ocean. Notice how the pH of this more concentrated CO2 solution is lower (about 4.3, compared to about 5), which is just what you would expect. This is the reason the pH of an algae-containing pond rises during the day (when CO2 is being consumed by photosynthesis) and falls at night (when CO2 is restored by respiration).
The term "closed system" in the caption at the top of the graph means that CT, the sum of the concentrations of all three carbonate species, is assumed constant here. For an open system the plots for the bicarbonate and carbonate concentrations would be dramatically different because an alkaline solution open to the atmosphere has access to an infinite supply of CO2 and will absorb it up to the solubility limit of sodium carbonate. (You may recall that a common test for CO2 is to watch for a precipitate to appear in a saturated solution of calcium hydroxide.) Thus the estimate of pH=5 for pure water exposed to the atmosphere is not quite correct; using an "open system" plot, a somewhat lower pH, around 4.5, will be found. It turns out, however, that for many practical situations, the relatively slow transport of CO2 between air and water makes the "closed system" predictions reasonably accurate within the time scale of interest.
Glycine is the simplest of the amino acids, the building blocks of proteins. It is also a zwitterion (the word comes from the German term for "hermaphrodite"), which chemists use to denote a species that possesses both an acidic and a basic functional group, both of which can simultaneously exist in their ionized (conjugate) forms. Examination of this graph shows that the double-ionic form (the true zwitterion) reigns supreme between pH 3 and 9; outside this range, the glycinium cation or the glycinate anion prevails. Notice that the completely un-ionized form does not exist in solution.
The log-C vs. pH diagram is constructed as a superposition of plots for each conjugate pair at its respective pKa. Note especially that the pH of a solution of glycine does not lie exactly at the crossing point [Gly] = [H+], but is slightly displaced from it according to the proton balance equation shown in the inset on the graph. The other important quantity shown on this graph is the isoelectric point, the pH at which the concentrations of the cationic and anionic forms are identical.
It goes without saying that treating this system algebraically would be far more complicated than the results would warrant for most applications.
Note especially that
• the only really important pH on this graph is 8, the approximate pH of the natural ocean.
• The pH of the ocean is controlled mainly by the carbonate system because it is more concentrated than the two minor buffering systems, borate and silicate.
• The surface waters of the ocean are everywhere supersaturated in calcium bicarbonate and in CO2 , as shown by the dotted curves (the "ss[CO2 ]" label is the illegible one-- sorry!)
• Ion-pair complexes such as MgOH+ are significant species in seawater; there are many others.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Acids_and_Bases/Acid/Graphical_Treatment_of_Acid-Base_Systems.txt
|
Learning Objectives
• Explain color changes of indicators.
• Determine the acidic dissociation constants Ka or Kai of indicators.
Indicators are substances whose solutions change color due to changes in pH. These are called acid-base indicators. They are usually weak acids or bases, but their conjugate base or acid forms have different colors due to differences in their absorption spectra. Did you know that the color of hydrangea flowers depends on the pH of the soil in which they are grown?
Indicators are organic weak acids or bases with complicated structures. For simplicity, we represent a general indicator by the formula $\mathrm{\color{Blue} HIn}$, and its ionization in a solution by the equilibrium,
$\mathrm{ {\color{Blue} HIn} \rightleftharpoons H^+ + {\color{Red} In^-}}$
and define the equilibrium constant as Kai,
$K_{\large\textrm{ai}} = \mathrm{\dfrac{[H^+][{\color{Red} In^-}]}{[{\color{Blue} HIn}]}}$
which can be rearranged to give
$\mathrm{\dfrac{[{\color{Red} In^-}]}{[{\color{Blue} HIn}]}} = \dfrac{K_{\large\textrm{ai}}}{\ce{[H+]}}$
When $\ce{[H+]}$ is greater than 10 Kai, the $\mathrm{\color{Blue} HIn}$ color dominates, whereas the color due to $\mathrm{\color{Red} In^-}$ dominates if $\ce{[H+]} < \dfrac{K_{\large\textrm{ai}}}{10}$. The above equation indicates that the color change is most sensitive when $\ce{[H+]} = K_{\large\textrm{ai}}$ in numerical value.
We define pKai = - log(Kai), and the pKai value is also the pH value at which the color of the indicator is most sensitive to pH changes.
Taking the negative log of Kai gives,
$-\log K_{\large\textrm{ai}} = -\log\ce{[H+]} - \log\mathrm{\dfrac{[{\color{Red} In^- }]}{[{\color{Blue} HIn}]}}$
or
$\mathrm{pH = p\mathit K_{\large{ai}}} + \log\mathrm{\dfrac{[{\color{Red} In^-}]}{[{\color{Blue} HIn}]}}$
This is a very important formula, and its derivation is very simple. Start from the definition of the equilibrium constant K; you can easily derive it. Note that pH = pKai when $[\mathrm{\color{Red} In^-}] = [\mathrm{\color{Blue} HIn}]$. In other words, when the pH is the same as pKai, there are equal amounts of acid and base forms. When the two forms have equal concentration, the color change is most noticeable.
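The ten-to-one rule stated earlier follows directly from this formula. A minimal Python sketch (the function name is ours; the pKai of 7.1 is bromothymol blue's, quoted later on this page):

```python
def ratio_base_to_acid(pH, pKai):
    """[In-]/[HIn], from pH = pKai + log([In-]/[HIn])."""
    return 10 ** (pH - pKai)

pKai = 7.1  # bromothymol blue
for pH in (5.1, 6.6, 7.1, 7.6, 9.1):
    r = ratio_base_to_acid(pH, pKai)
    if r > 10:
        color = "base color dominates"
    elif r < 0.1:
        color = "acid color dominates"
    else:
        color = "mixed color"
    print(f"pH {pH}: [In-]/[HIn] = {r:.3g} -> {color}")
```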
Colors of substances make the world a wonderful place. Because of the colors and structures, flowers, plants, animals, and minerals show their unique characters. Many indicators are extracted from plants. For example, red cabbage juice and tea pigments show different colors when the pH is different. The color of tea darkens in a basic solution, but the color becomes lighter when lemon juice is added. Red cabbage juice turns blue in a basic solution, but it shows a distinct red color in an acidic solution.
Common Indicators: Some common indicators and their pKai (also referred to as pKa) values are given in the table below.
Name Acid Color pH Range of Color Change Base Color
Methyl violet Yellow 0.0 - 1.6 Blue
Thymol blue Red 1.2 - 2.8 Yellow
Methyl orange Red 3.2 - 4.4 Yellow
Bromocresol green Yellow 3.8 - 5.4 Blue
Methyl red Red 4.8 - 6.0 Yellow
Litmus Red 5.0 - 8.0 Blue
Bromothymol blue Yellow 6.0 - 7.6 Blue
Thymol blue Yellow 8.0 - 9.6 Blue
Phenolphthalein Colorless 8.2 - 10.0 Pink
Thymolphthalein Colorless 9.4 - 10.6 Blue
Alizarin yellow R Yellow 10.1 - 12.0 Red
Example $1$
Find an indicator for the titration of a 0.100 M solution of a weak acid $\ce{HA}$ ($K_a = 6.2 \times 10^{-6}$) with 0.100 M $\ce{NaOH}$ solution.
Solution
First, you should estimate the pH at the equivalence point, at which the solution is 0.0500 M $\ce{NaA}$. This is a hydrolysis problem, but the following method employs the general principle of equilibrium.
$\begin{array}{ccccccc} \ce{A- &+ &H2O &\rightleftharpoons &HA &+ &OH-}\ 0.0500-y &&&&y &&y \end{array} \nonumber$
If we multiply the numerator and the denominator by $\ce{[H+]}$, rearrange the terms, note that $\ce{[H+][OH- ]} = K_{\large\textrm w}$, and by the definition of Ka of the acid, we have the following relationship:
$\dfrac{y^2}{0.0500-y} = \ce{\dfrac{[HA][OH- ]}{[A- ]} \dfrac{[H+]}{[H+]}} = \dfrac{K_{\large\textrm w}}{K_{\large\textrm a}}$
\begin{align*} y &= \left(0.0500\left(\dfrac{K_{\large\textrm w}}{K_{\large\textrm a}}\right)\right)^{1/2}\ &= 9.0 \times 10^{-6} \end{align*}
\begin{align*} \ce{pOH} &= -\log \ce{[OH- ]} = -\log 9.0 \times 10^{-6}\ &= 5.05\ \ \ce{pH} &= 14 - 5.05\ &= 8.95 \end{align*}
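As a quick numerical check of this solution, here is a minimal Python sketch (variable names are ours):

```python
import math

Ka, Kw = 6.2e-6, 1e-14
C = 0.0500                      # NaA concentration at the equivalence point
y = math.sqrt(C * Kw / Ka)      # [OH-], neglecting y relative to C
pOH = -math.log10(y)
print(f"[OH-] = {y:.2g} M, pOH = {pOH:.2f}, pH = {14 - pOH:.2f}")
# -> [OH-] ~ 9e-06 M, pOH ~ 5.05, pH ~ 8.95, matching the worked values
```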
Phenolphthalein in the table above has a pKai value of 9.7, which is the closest to the pH of the equivalence point in this titration. This indicator is colorless in acidic solution, but a light pink appears when the pH rises above 8, and the pink becomes more intense as the pH rises further.
The equivalence point is when the color changes most rapidly, not when the solution has changed color. Improper use of indicators will introduce inaccuracy to titration results.
Colors of an Indicator Solution
Indicators change color gradually at various pH. Let us assume that the acid form has a blue color and the basic form has red color. The variation of colors at different pH is shown below. The background color affects their appearance and our perception of them.
A solution in which the BLUE and RED forms contribute with equal intensity has a pH equal to the pKai of the indicator, provided that the conjugate forms of the indicator have the BLUE and RED colors.
Questions
1. There are numerous natural indicators present in plants. The dye in red cabbage, the purple color of grapes, even the color of some flowers are some examples. What causes some fruits to change color when they ripen?
2. Choose the true statement:
1. All weak acids are indicators.
2. All weak bases are indicators.
3. Weak acids and bases are indicators.
4. All indicators are weak acids.
5. An acid-base conjugate pair has different colors.
6. Any indicator changes color when the pH of its solution is 7.
3. Do all indicators change color at pH 7 (y/n)?
Solutions
1. Answer $\ce{[H+]}$ of the juice changes.
Hint...
The changes in pH or $\ce{[H+]}$ cause the dye to change color if their conjugate acid-base pairs have different colors. There may be other reasons too. Do colors indicate how good or bad they taste?
2. Answer d.
Hint...
Color change is a requirement for indicators.
3. Answer No!
Hint...
Phenolphthalein changes color at pH ~9. Bromothymol blue has a pKai value of 7.1; around pH 7 its color is in transition between yellow and blue. Some indicators change color at pH values other than 7.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Acids_and_Bases/Acid/Indicators.txt
|
Acids and bases are an important part of chemistry. One of the most applicable theories is the Lewis acid/base motif that extends the definition of an acid and base beyond H+ and OH- ions as described by Brønsted-Lowry acids and bases.
The Brønsted acid-base theory has been used throughout the history of acid and base chemistry. However, this theory is very restrictive and focuses primarily on acids and bases acting as proton donors and acceptors. Sometimes conditions arise where the theory does not necessarily fit, such as in solids and gases. In 1923, G.N. Lewis from UC Berkeley proposed an alternate theory to describe acids and bases. His theory gave a generalized explanation of acids and bases based on structure and bonding. Through the use of the Lewis definition of acids and bases, chemists are now able to predict a wider variety of acid-base reactions. Lewis' theory used electrons instead of proton transfer and specifically stated that an acid is a species that accepts an electron pair while a base donates an electron pair.
The reaction of a Lewis acid and a Lewis base will produce a coordinate covalent bond (Figure $1$). A coordinate covalent bond is simply a covalent bond in which one reactant supplies both electrons of the shared pair; in this case, the Lewis base donates its electron pair to the Lewis acid. When they react this way, the resulting product is called an addition compound, or more commonly an adduct.
• Lewis Acid: a species that accepts an electron pair (i.e., an electrophile) and will have vacant orbitals
• Lewis Base: a species that donates an electron pair (i.e., a nucleophile) and will have lone-pair electrons
Lewis Acids
Lewis acids accept an electron pair. Lewis Acids are Electrophilic meaning that they are electron attracting. When bonding with a base the acid uses its lowest unoccupied molecular orbital or LUMO (Figure 2).
• Various species can act as Lewis acids. All cations are Lewis acids since they are able to accept electrons. (e.g., Cu2+, Fe2+, Fe3+)
• An atom, ion, or molecule with an incomplete octet of electrons can act as an Lewis acid (e.g., BF3, AlF3).
• Molecules where the central atom can have more than 8 valence shell electrons can be electron acceptors, and thus are classified as Lewis acids (e.g., SiBr4, SiF4).
• Molecules that have multiple bonds between two atoms of different electronegativities (e.g., CO2, SO2)
Lewis Bases
Lewis bases donate an electron pair. Lewis bases are nucleophilic, meaning that they "attack" a positive charge with their lone pair. They utilize the highest occupied molecular orbital or HOMO (Figure 2). An atom, ion, or molecule with a lone pair of electrons can thus be a Lewis base. Each of the following species can "give up" its electrons to an acid, e.g., $OH^-$, $CN^-$, $CH_3COO^-$, $:NH_3$, $H_2O:$, $CO:$. A Lewis base's HOMO (highest occupied molecular orbital) interacts with the Lewis acid's LUMO (lowest unoccupied molecular orbital) to create bonded molecular orbitals. Both Lewis acids and bases contain HOMOs and LUMOs, but only the HOMO is considered for bases and only the LUMO is considered for acids (Figure $2$).
Complex Ion / Coordination Compounds
Complex ions are polyatomic ions, which are formed from a central metal ion that has other smaller ions joined around it. While Brønsted theory can't explain this reaction Lewis acid-base theory can help. A Lewis Base is often the ligand of a coordination compound with the metal acting as the Lewis Acid (see Oxidation States of Transition Metals).
$Al^{3+} + 6 H_2O \rightleftharpoons [Al(H_2O)_6]^{3+} \label{1}$
The aluminum ion is the metal and is a cation with an unfilled valence shell, so it is a Lewis acid. Water has lone-pair electrons, so it is a Lewis base.
The Lewis Acid accepts the electrons from the Lewis Base which donates the electrons. Another case where Lewis acid-base theory can explain the resulting compound is the reaction of ammonia with Zn2+.
$Zn^{2+} + 4NH_3 \rightarrow [Zn(NH_3)_4]^{2+} \label{2}$
Similarly, the Lewis acid is the zinc ion and the Lewis base is NH3. Note how the Brønsted theory of acids and bases cannot explain how this reaction occurs, because there are no $H^+$ or $OH^-$ ions involved. Thus, Lewis acid and base theory allows us to explain the formation of other species and complex ions which do not ordinarily contain hydronium or hydroxide ions. The lack of $H^+$ or $OH^-$ ions in many complex ions can make it harder to identify which species is an acid and which is a base. Therefore, by defining an acid as an electron-pair acceptor and a base as an electron-pair donor, the definition of both is expanded.
Amphoterism
Acids and bases have been presented so far as two distinct classes of substances; however, some substances can act as both an acid and a base. You may have noticed this with water, and this ability makes water an amphoteric molecule. Water can act as an acid by donating a proton to a base, thereby becoming its conjugate base, OH-. Water can also act as a base by accepting a proton from an acid, thereby becoming its conjugate acid, H3O+.
• Water acting as an Acid:
$H_2O + NH_3 \rightarrow NH_4^+ + OH^- \label{3}$
• Water acting as a Base:
$H_2O + HCl \rightarrow Cl^- + H_3O^+ \label{4}$
You may have noticed that the degree to which a molecule acts depends on the medium in which the molecule has been placed in. Water does not act as an acid in an acid medium and does not act as a base in a basic medium. Thus, the medium which a molecule is placed in has an effect on the properties of that molecule. Other molecules can also act as either an acid or a base. For example,
$Al(OH)_3 + 3H^+ \rightarrow Al^{3+} + 3H_2O \label{5}$
• where Al(OH)3 is acting as a Lewis Base.
$Al(OH)_3 + OH^- \rightarrow Al(OH)_4^- \label{6}$
• where Al(OH)3 is acting as an Lewis Acid.
Note how the amphoteric properties of the Al(OH)3 depends on what type of environment that molecule has been placed in.
Contributors and Attributions
• Adam Abudra (UCD), Tajinder Badial (UCD)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Acids_and_Bases/Acid/Lewis_Concept_of_Acids_and_Bases.txt
|
There are three major classifications of substances known as acids or bases. The Arrhenius definition states that an acid produces H+ in solution and a base produces OH-. This theory was developed by Svante Arrhenius in 1884. Later, two more sophisticated and general theories were proposed. These are the Brønsted-Lowry and the Lewis definitions of acids and bases. The Lewis theory is discussed elsewhere.
The Arrhenius Theory of Acids and Bases
In 1884, the Swedish chemist Svante Arrhenius proposed two specific classifications of compounds; acids and bases. When dissolved in an aqueous solution, certain ions were released into the solution. An Arrhenius acid is a compound that increases the concentration of H+ ions that are present when added to water. These H+ ions form the hydronium ion (H3O+) when they combine with water molecules. This process is represented in a chemical equation by adding H2O to the reactants side.
$HCl_{(aq)} \rightarrow H^+_{(aq)} + Cl^-_{(aq)}$
In this reaction, hydrochloric acid ($HCl$) dissociates completely into hydrogen (H+) and chloride (Cl-) ions when dissolved in water, thereby releasing H+ ions into solution. Formation of the hydronium ion equation:
$HCl_{(aq)} + H_2O_{(l)} \rightarrow H_3O^+_{(aq)} + Cl^-_{(aq)}$
The Arrhenius theory, which is the simplest and least general description of acids and bases, includes acids such as HClO4 and HBr and bases such as $NaOH$ or $Mg(OH)_2$. For example, the complete dissociation of $HBr$ gas into water generates free $H_3O^+$ ions:
$HBr_{(g)} + H_2O_{(l)} \rightarrow H_3O^+_{(aq)} + Br^-_{(aq)}$
This theory successfully describes how acids and bases react with each other to make water and salts. However, it does not explain why some substances that do not contain hydroxide ions, such as $F^-$ and $NO_2^-$, can make basic solutions in water. The Brønsted-Lowry definition of acids and bases addresses this problem.
An Arrhenius base is a compound that increases the concentration of OH- ions that are present when added to water. The dissociation is represented by the following equation:
$NaOH \; (aq) \rightarrow Na^+ \; (aq) + OH^- \; (aq)$
In this reaction, sodium hydroxide (NaOH) dissociates into sodium (Na+) and hydroxide (OH-) ions when dissolved in water, thereby releasing OH- ions into solution.
Note
• Arrhenius acids are substances which produce hydrogen ions in solution.
• Arrhenius bases are substances which produce hydroxide ions in solution.
Free Hydrogen Ions do not Exist in Water
The hydrogen ion in aqueous solution is no more than a proton, a bare nucleus, and owing to the overwhelming excess of $H_2O$ molecules in aqueous solutions, such a bare hydrogen ion has no chance of surviving in water. Although it carries only a single unit of positive charge, this charge is concentrated into a volume of space that is only about a hundred-millionth as large as the volume occupied by the smallest atom. (Think of a pebble sitting in the middle of a sports stadium!) The resulting extraordinarily high charge density of the proton strongly attracts it to any part of a nearby atom or molecule in which there is an excess of negative charge. In the case of water, this will be the lone pair (unshared) electrons of the oxygen atom; the tiny proton will be buried within the lone pair and will form a shared-electron (coordinate) bond with it, creating a hydronium ion, $H_3O^+$. In a sense, $H_2O$ is acting as a base here, and the product $H_3O^+$ is the conjugate acid of water:
Although other kinds of dissolved ions have water molecules bound to them more or less tightly, the interaction between H+ and $H_2O$ is so strong that writing “H+(aq)” hardly does it justice, although it is formally correct. The formula $H_3O^+$ more adequately conveys the sense that it is both a molecule in its own right, and is also the conjugate acid of water.
The equation "HA → H+ + A" is so much easier to write that chemists still use it to represent acid-base reactions in contexts in which the proton donor-acceptor mechanism does not need to be emphasized. Thus, it is permissible to talk about “hydrogen ions” and use the formula H+ in writing chemical equations as long as you remember that they are not to be taken literally in the context of aqueous solutions.
Limitations to the Arrhenius Theory
The Arrhenius theory has many more limitations than the other two theories. The theory suggests that in order for a substance to release either H+ or OH- ions, it must contain that particular ion. However, this does not explain the weak base ammonia (NH3) which, in the presence of water, releases hydroxide ions into solution, but does not contain OH- itself.
Hydrochloric acid is neutralized by both sodium hydroxide solution and ammonia solution. In both cases, you get a colourless solution which you can crystallize to get a white salt - either sodium chloride or ammonium chloride. These are clearly very similar reactions. The full equations are:
$NaOH \; (aq) + HCl \; (aq) \rightarrow NaCl \; (aq) + H_2O \; (l)$
$NH_3 \; (aq) + HCl \; (aq) \rightarrow NH_4Cl \; (aq)$
In the sodium hydroxide case, hydrogen ions from the acid are reacting with hydroxide ions from the sodium hydroxide - in line with the Arrhenius theory. However, in the ammonia case, there are no hydroxide ions!
You can get around this by saying that when the ammonia reacts with the water it is dissolved in, it produces ammonium ions and hydroxide ions:
$NH_3 \; (aq) + H_2O \; (l) \rightleftharpoons NH_4^+ \; (aq) + OH^- \;(aq)$
This is a reversible reaction, and in a typical dilute ammonia solution, about 99% of the ammonia remains as ammonia molecules. Nevertheless, there are hydroxide ions there, and we can squeeze this into the Arrhenius theory. However, this same reaction also happens between ammonia gas and hydrogen chloride gas.
$NH_3 \; (g) + HCl \; (g) \rightarrow NH_4Cl \;(s)$
In this case, there are not any hydrogen ions or hydroxide ions in solution - because there isn't any solution. The Arrhenius theory wouldn't count this as an acid-base reaction, despite the fact that it is producing the same product as when the two substances were in solution. Because of this shortcoming, later theories sought to better explain the behavior of acids and bases in a new manner.
The Brønsted-Lowry Definition
In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently developed definitions of acids and bases based on the compounds' abilities to either donate or accept protons (H+ ions). In this theory, acids are defined as proton donors, whereas bases are defined as proton acceptors. A compound that acts as both a Brønsted-Lowry acid and base together is called amphoteric. This took the Arrhenius definition one step further, as a substance no longer needed to be composed of hydrogen (H+) or hydroxide (OH-) ions in order to be classified as an acid or base. Consider the following chemical equation:
$HCl \; (aq) + NH_3 \; (aq) \rightarrow NH_4^+ \; (aq) + Cl^- \; (aq)$
Here, hydrochloric acid (HCl) "donates" a proton (H+) to ammonia (NH3) which "accepts" it , forming a positively charged ammonium ion (NH4+) and a negatively charged chloride ion (Cl-). Therefore, HCl is a Brønsted-Lowry acid (donates a proton) while the ammonia is a Brønsted-Lowry base (accepts a proton). Also, Cl- is called the conjugate base of the acid HCl and NH4+ is called the conjugate acid of the base NH3.
The Brønsted-Lowry Theory of Acids and Bases
• A Brønsted-Lowry acid is a proton (hydrogen ion) donor.
• A Brønsted-Lowry base is a proton (hydrogen ion) acceptor.
In this theory, an acid is a substance that can release a proton (like in the Arrhenius theory) and a base is a substance that can accept a proton. A basic salt, such as Na+F-, generates OH- ions in water by taking protons from water itself (to make HF):
$F^-_{(aq)} + H_2O_{(l)} \rightleftharpoons HF_{(aq)} + OH^-$
When a Brønsted acid dissociates, it increases the concentration of hydrogen ions in the solution, $[H^+]$; conversely, Brønsted bases dissociate by taking a proton from the solvent (water) to generate $[OH^-]$.
• Acid dissociation
$HA_{(aq)} \rightleftharpoons A^-_{(aq)} + H^+_{(aq)}$
• Acid Ionization Constant:
$K_a=\dfrac{[A^-][H^+]}{[HA]}$
• Base dissociation:
$B_{(aq)} + H_2O_{(l)} \rightleftharpoons HB^+_{(aq)} + OH^-_{(aq)}$
• Base Ionization Constant
$K_b = \dfrac{[HB^+][OH^-]}{[B]}$
Conjugate Acids and Bases
One important consequence of these equilibria is that every acid ($HA$) has a conjugate base ($A^-$), and vice versa. In the base dissociation equilibrium above, the conjugate acid of base $B$ is $HB^+$. For a given acid or base, these equilibria are linked by the water dissociation equilibrium:
$H_2O_{(l)} \rightleftharpoons H^+_{(aq)} + OH^-_{(aq)}$
with
$K_w = [H^+][OH^-]$
for which the equilibrium constant Kw is 1.00 x 10-14 at 25°C. It can be easily shown that the product of the acid and base dissociation constants Ka and Kb is Kw.
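The demonstration takes one line: for a conjugate pair $HA/A^-$, multiply the two ionization constants defined above.

$K_aK_b = \dfrac{[A^-][H^+]}{[HA]} \times \dfrac{[HA][OH^-]}{[A^-]} = [H^+][OH^-] = K_w$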
Strong and Weak Acids and Bases
Strong acids are molecular compounds that essentially ionize to completion in aqueous solution, dissociating into H+ ions and the corresponding anion; there are very few common strong acids. All other acids are "weak acids" that ionize incompletely in aqueous solution. Acids and bases that dissociate completely are said to be strong, e.g.:
• $HClO_{4(aq)} \rightarrow H^+_{(aq)} + ClO^-_{4(aq)}$
• $HBr_{(aq)} \rightarrow H^+_{(aq)} + Br^-_{(aq)}$
• $CH_3O^-_{(aq)} + H_2O_{(l)} \rightarrow CH_3OH_{(aq)} + OH^-_{(aq)}$
• $NH^-_{2(aq)} + H_2O_{(l)} \rightarrow NH_{3(aq)} + OH^-_{(aq)}$
Here the right-handed arrow ($\rightarrow$) implies that the reaction goes to completion. That is, a 1.0 M solution of HClO4 in water actually contains 1.0 M H+(aq) and 1.0 M ClO4-(aq), and no undissociated HClO4.
Conversely, weak acids such as acetic acid (CH3COOH) and weak bases such as ammonia (NH3) dissociate only slightly in water - typically a few percent, depending on their concentration and exist mostly as the undissociated molecules.
• STRONG ACIDS: HCl, HNO3, H2SO4, HBr, HI, HClO4
• WEAK ACIDS: All other acids, such as HCN, HF, H2S, HCOOH
Strong acids such as $HCl$ dissociate to produce spectator ions such as $Cl^-$ as conjugate bases, whereas weak acids produce weak conjugate bases. This is illustrated below for acetic acid and its conjugate base, the acetate anion. Acetic acid is a weak acid (Ka = 1.8 x 10-5) and acetate is a weak base (Kb = Kw/Ka = 5.6 x 10-10)
Like acids, strong and weak bases are classified by the extent of their ionization. Strong bases dissociate essentially to completion in aqueous solution. Similar to strong acids, there are very few common strong bases. Weak bases are molecular compounds whose ionization is not complete.
• STRONG BASES: The hydroxides of the Group I and Group II metals such as LiOH, NaOH, KOH, RbOH, CsOH
• WEAK BASES: All other bases, such as NH3, CH3NH2, C5H5N
Note
The strength of a conjugate acid/base varies inversely with the strength or weakness of its parent acid or base. Any acid or base is technically a conjugate acid or conjugate base also; these terms are simply used to identify species in solution (i.e., acetic acid is the conjugate acid of the acetate anion, a base, while acetate is the conjugate base of acetic acid, an acid).
How does one define acids and bases? In chemistry, acids and bases have been defined differently by three sets of theories. One is the Arrhenius definition, which revolves around the idea that acids are substances that ionize (break off) in an aqueous solution to produce hydrogen (H+) ions while bases produce hydroxide (OH-) ions in solution. On the other hand, the Brønsted-Lowry definition defines acids as substances that donate protons (H+) whereas bases are substances that accept protons. Also, the Lewis theory of acids and bases states that acids are electron pair acceptors while bases are electron pair donors. Acids and bases can be defined by their physical and chemical observations.
pH Scale
Since acids increase the amount of H+ ions present and bases increase the amount of OH- ions, under the pH scale, the strength of acidity and basicity can be measured by its concentration of H+ ions. This scale is shown by the following formula:
pH = -log[H+]
with [H+] being the concentration of H+ ions.
To see how these calculations are done, refer to Calculating the pH of the solution of a Polyprotic Base/Acid
The pH scale is often measured on a 1 to 14 range, but this is incorrect (see pH for more details). Something with a pH less than 7 indicates acidic properties and greater than 7 indicates basic properties. A pH at exactly 7 is neutral. The higher the [H+], the lower the pH.
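A minimal Python sketch of the definition and its inverse (function names are ours):

```python
import math

def pH_from_conc(h):
    """pH = -log10([H+])"""
    return -math.log10(h)

def conc_from_pH(pH):
    """[H+] = 10^-pH"""
    return 10 ** (-pH)

print(pH_from_conc(3.5e-4))   # ~3.46  (acidic: pH < 7)
print(conc_from_pH(11.2))     # ~6.3e-12 M  (basic: pH > 7)
```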
Lewis Theory
The Lewis theory of acids and bases states that acids act as electron pair acceptors and bases act as electron pair doners. This definition doesn't mention anything about the hydrogen atom at all, unlike the other definitions. It only talks about the transfer of electron pairs. To demonstrate this theory, consider the following example.
$NH_{3(g)} + BF_{3(g)} \rightleftharpoons H_3NBF_{3(g)}$
This is a reaction between ammonia (NH3) and boron trifluoride (BF3). Since there is no transfer of hydrogen atoms here, it is clear that this is a Lewis acid-base reaction. In this reaction, NH3 has a lone pair of electrons and BF3 has an incomplete octet, since boron doesn't have enough electrons around it to form an octet.
Because boron only has 6 electrons around it, it can hold 2 more. BF3 can act as a Lewis acid and accept the pair of electrons from the nitrogen in NH3, which will then form a bond between the nitrogen and the boron.
This is considered an acid-base reaction where NH3 (base) is donating the pair of electrons to BF3. BF3 (acid) is accepting those electrons to form a new compound, H3NBF3.
Neutralization
A special property of acids and bases is their ability to neutralize the other's properties. In an acid-base (or neutralization) reaction, the H+ ions from the acid and the OH- ions from the base react to create water (H2O). Another product of a neutralization reaction is an ionic compound called a salt. Therefore, the general form of an acid-base reaction is:
$Acid_{(aq)} + Base_{(aq)} \rightarrow H_2O_{(l)} + Salt_{(aq)}$
The following are examples of neutralization reactions:
1. $HCl_{(aq)} + NaOH_{(aq)} \rightarrow H_2O_{(l)} + NaCl_{(aq)}$
(NOTE: To see this reaction done experimentally, refer to the YouTube video link under the section "References".)
2. $H_2SO_{4(aq)} + 2NH_4OH_{(aq)} \rightarrow 2H_2O_{(l)} + (NH_4)_2SO_{4(aq)}$
Titrations
Titrations are performed with acids and bases to determine their concentrations. At the equivalence point, the number of moles of the acid will equal the number of moles of the base. This indicates that the reaction has been neutralized.
Neutralization: moles of acid = moles of base
Here's how the calculations are done:
For instance, hydrochloric acid is titrated with sodium hydroxide:
$HCl_{(aq)} + NaOH_{(aq)} \n \to \ H_{2}O_{(l)} + NaCl_{(s)}$
For instance, 30 mL of 1.00 M NaOH is needed to titrate 60 mL of an HCl solution. The concentration of HCl needs to be determined. At the equivalence point:
moles of HCl = moles of NaOH
To solve for the molarity of HCl, plug in the given data into the equation above.
MHCl(60 mL HCl) = (1.00 M NaOH)(30 mL NaOH)
MHCl=0.5 M
The concentration of HCl is 0.5 M.
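The same relation is easy to wrap in a small helper; a minimal Python sketch (the function name is ours), checked against the numbers above:

```python
def unknown_molarity(M_known, V_known, V_unknown):
    """Equivalence point with 1:1 stoichiometry: M1*V1 = M2*V2."""
    return M_known * V_known / V_unknown

# 30 mL of 1.00 M NaOH neutralizes 60 mL of HCl
print(unknown_molarity(1.00, 30, 60))   # -> 0.5 (M)
```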
Sample Problems
1. Which of the following compounds is a strong acid?
1. CaSO4
2. NaCl
3. HNO3
4. NH3
Solution: There are 6 strong acids and all other acids are considered weak. HNO3 is one of those 6 strong acids, while NH3 is actually a weak base.
The answer is (3) HNO3.
2. Which of the following compounds is a Brønsted-Lowry base?
1. HCl
2. HPO42-
3. H3PO4
4. NH4+
5. CH3NH3+
Solution: A Brønsted-Lowry base is a proton acceptor, which means it will take in an H+. This eliminates HCl, H3PO4, NH4+, and CH3NH3+ because they are Brønsted-Lowry acids; they all give away protons. In the case of HPO42-, consider the following equation:
$HPO_{4(aq)}^{2-} + H_2O_{(l)} \to PO_{4(aq)}^{3-} + H_3O^+_{(aq)}$
Here, it is clear that HPO42- is the acid since it donates a proton to water to make H3O+ and PO43-. Now consider the following equation:
$HPO_{4(aq)}^{2-} + H_2O_{(l)} \to H_2PO_{4(aq)}^{-} + OH^-_{(aq)}$
In this case, HPO42- is the base since it accepts a proton from water to form H2PO4- and OH-. Thus, HPO42- is an acid and base together, making it amphoteric.
Since HPO42- is the only compound from the options that can act as a base, the answer is (2) HPO42-.
3. A 50 ml solution of 0.5 M NaOH is titrated until neutralized into a 25 ml sample of HCl. What was the concentration of the HCl?
Solution: Since the number of moles of acid equals the number of moles of base at neutralization, the following equation is used to solve for the molarity of HCl:
$M_{HCl}V_{HCl} = M_{NaOH}V_{NaOH}$
Now, plug into the equation all the information that is given:
MHCl(25 mL HCl) = (0.5 MNaOH)(50 mL NaOH)
MHCl = 1
The correct answer is 1 M HCl.
4. In the following acid-base neutralization, 2.79 g of the acid HBr (80.91 g/mol) neutralized 22.72 mL of a basic aqueous solution by the reaction:
$HBr_{(aq)} + NaOH_{(aq)} \to NaBr_{(aq)} + H_2O_{(l)}$
Calculate the molarity of the basic solution.
Solution:
First, the number of moles of the acid needs to be calculated. This is done by using the molar mass of HBr to convert 2.79 g of HBr to moles.
(2.79 g HBr)/(80.91 g/mol HBr) = 0.0345 moles HBr
Since this is a neutralization reaction, the number of moles of the acid (HBr) equals the number of moles of the base (NaOH) at neutralization:
moles of acid = moles of base
0.0345 moles HBr = 0.0345 moles NaOH
The molarity of NaOH can now be determined since the amount of moles are found and the volume is given. Convert 22.72 mL to Liters first since molarity is in units of moles/L.
Molarity = (0.0345 moles NaOH)/(0.02272 L NaOH) = 1.52 MNaOH
The correct answer is 1.52 MNaOH.
5. Which of the following is a Brønsted-Lowry base but not an Arrhenius base?
1. NH3
2. NaOH
3. Ca(OH)2
4. KOH
Solution: The Brønsted-Lowry definition says that a base accepts protons (H+ ions). NaOH, Ca(OH)2, and KOH are all Arrhenius bases because they yield the hydroxide ion (OH-) when they ionize. However, NH3 does not dissociate in water like the others. Instead, it takes a proton from water and becomes NH4+ while water becomes the hydroxide ion (OH-).
Therefore, the correct answer is (1) NH3.
References
1. Brent, Lynnette. Acids and Bases. New York, NY: Crabtree Pub., 2009. Print.
2. Hulanicki, Adam. Reactions of Acids and Bases in Analytical Chemistry. Ellis Horwood Limited: 1987.
3. Oxlade, Chris. Acids & Bases. Chicago, IL: Heinemann Library, 2002. Print.
4. Petrucci, Ralph H. General Chemistry: Principles and Modern Applications. Macmillian: 2007.
5. Vanderwerd, Calvin A. Acids, Bases, and the Chemistry of the Covalent Bond. Reinhold: 1961.
Contributors and Attributions
• Catherine Broderick (UCD), Marianne Moussa (UCD)
• Jim Clark (Chemguide.co.uk)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Acids_and_Bases/Acid/Overview_of_Acids_and_Bases.txt
|
An acid–base reaction is a chemical reaction that occurs between an acid and a base. Several theoretical frameworks provide alternative conceptions of the reaction mechanisms and their application in solving related problems; these are called acid–base theories, for example, Brønsted–Lowry acid–base theory. Their importance becomes apparent in analyzing acid–base reactions for gaseous or liquid species, or when acid or base character may be somewhat less apparent.
Thumbnail: Warning picture used with dangerous acids and dangerous bases. Bases are the opposites of acids. (Public Domain).
Acid Base Reactions
Learning Objectives
• Predict the acidity of a salt solution.
• Calculate the pH of a salt solution.
• Calculate the concentrations of various ions in a salt solution.
• Explain hydrolysis reactions.
A salt is formed between the reaction of an acid and a base. Usually, a neutral salt is formed when a strong acid and a strong base are neutralized in the reaction:
$\ce{H+ + OH- \rightleftharpoons H2O} \label{1}$
The bystander ions in an acid-base reaction form a salt solution. Most neutral salts consist of cations and anions listed in the table below. These ions have little tendency to react with water. Thus, salts consisting of these ions are neutral salts. For example: $\ce{NaCl}$, $\ce{KNO3}$, $\ce{CaBr2}$, $\ce{CsClO4}$ are neutral salts.
When weak acids and bases react, the relative strength of the conjugate acid-base pair in the salt determines the pH of its solutions. The salt, or its solution, so formed can be acidic, neutral or basic. A salt formed between a strong acid and a weak base is an acid salt, for example $\ce{NH4Cl}$. A salt formed between a weak acid and a strong base is a basic salt, for example $\ce{NaCH3COO}$. These salts are acidic or basic due to their acidic or basic ions as shown in Table $1$.
Table $1$: Examples of Neutral, Acidic, and Basic Ions
Ions of neutral salts: cations $\ce{Na+}$, $\ce{K+}$, $\ce{Rb+}$, $\ce{Cs+}$, $\ce{Mg^2+}$, $\ce{Ca^2+}$, $\ce{Sr^2+}$, $\ce{Ba^2+}$; anions $\ce{Cl-}$, $\ce{Br-}$, $\ce{I-}$, $\ce{ClO4-}$, $\ce{BrO4-}$, $\ce{ClO3-}$, $\ce{NO3-}$
Acidic ions: cations $\ce{NH4+}$, $\ce{Al^3+}$, $\ce{Pb^2+}$, $\ce{Sn^2+}$; anions $\ce{HSO4-}$, $\ce{H2PO4-}$
Basic ions: anions $\ce{F-}$, $\ce{C2H3O2-}$, $\ce{HPO4^2-}$, $\ce{PO4^3-}$, $\ce{NO2-}$, $\ce{HCO3-}$, $\ce{CN-}$, $\ce{CO3^2-}$, $\ce{S^2-}$, $\ce{SO4^2-}$
Hydrolysis of Acidic Salts
A salt formed between a strong acid and a weak base is an acid salt. Ammonia is a weak base, and its salt with any strong acid gives a solution with a pH lower than 7. For example, let us consider the reaction:
$\ce{HCl + NH4OH \rightleftharpoons NH4+ + Cl- + H2O} \label{2}$
In the solution, the $\ce{NH4+}$ ion reacts with water (called hydrolysis) according to the equation:
$\ce{NH4+ + H2O \rightleftharpoons NH3 + H3O+}. \label{3}$
The acidity constant can be derived from $K_w$ and $K_b$.
\begin{align} K_{\large\textrm a} &= \dfrac{\ce{[H3O+] [NH3]}}{\ce{[NH4+]}} \dfrac{\ce{[OH- ]}}{\ce{[OH- ]}}\ &= \dfrac{K_{\large\textrm w}}{K_{\large\textrm b}}\ &= \dfrac{1.00 \times 10^{-14}}{1.75 \times 10^{-5}} = 5.7 \times 10^{-10} \end{align}
Example $1$
What is the concentration of $\ce{NH4+}$, $\ce{NH3}$, and $\ce{H+}$ in a 0.100 M $\ce{NH4NO3}$ solution?
Solution
Assume that $\ce{[NH3]} = x$, then $\ce{[H3O+]} = x$, and you write the concentration below the formula in the reaction:
$\begin{array}{ccccccc} \ce{NH4+ &+ &H2O &\rightleftharpoons &NH3 &+ &H3O+}\ 0.100-x &&&&x &&x \end{array}$
\begin{align} K_{\large\textrm a} &= \textrm{5.7E-10}\ &= \dfrac{x^2}{0.100-x} \end{align}
Since the concentration has a value much greater than Ka, you may use
\begin{align} x &= (0.100\times\textrm{5.7E(-10)})^{1/2}\ &= \textrm{7.5E-6} \end{align}
\begin{align} \ce{[NH3]} &= \ce{[H+]} = x = \textrm{7.5E-6 M}\ \ce{pH} &= -\log\textrm{7.5e-6} = 5.12 \end{align}
$\ce{[NH4+]} = \textrm{0.100 M}$
DISCUSSION
Since pH = 5.12, the contribution of $\ce{[H+]}$ due to self ionization of water may therefore be neglected.
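The arithmetic of this example is easy to script; a minimal Python sketch (names are ours; the last digit may differ slightly from the rounded values above):

```python
import math

Kw, Kb = 1.00e-14, 1.75e-5
Ka = Kw / Kb                  # ~5.7e-10 for NH4+, as derived above
C = 0.100
x = math.sqrt(C * Ka)         # valid because C >> Ka
print(f"[NH3] = [H+] = {x:.2g} M")
print(f"pH = {-math.log10(x):.2f}")   # -> 5.12
```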
Hydrolysis of Basic Salts
A basic salt is formed between a weak acid and a strong base. The basicity is due to the hydrolysis of the conjugate base of the (weak) acid used in the neutralization reaction. For example, sodium acetate formed between the weak acetic acid and the strong base $\ce{NaOH}$ is a basic salt. When the salt is dissolved, ionization takes place:
$\ce{NaAc \rightleftharpoons Na+ + Ac-} \label{4}$
In the presence of water, $\ce{Ac-}$ undergoes hydrolysis:
$\ce{H2O + Ac- \rightleftharpoons HAc + OH-} \label{5}$
And the equilibrium constant for this reaction is Kb of the conjugate base $\ce{Ac-}$ of the acid $\ce{HAc}$. Note the following equilibrium constants: Acetic acid ($K_a=1.75 \times 10^{-5}$) and Ammonia ($K_b=1.75 \times 10^{-5}$)
\begin{align} K_{\large\textrm b} &= \ce{\dfrac{[HAc] [OH- ]}{[Ac- ]}}\ K_{\large\textrm b} &= \ce{\dfrac{[HAc] [OH- ]}{[Ac- ]} \dfrac{[H+]}{[H+]}}\ K_{\large\textrm b} &= \ce{\dfrac{[HAc]}{[Ac- ][H+]} [OH- ][H+]}\ &= \dfrac{K_{\large\textrm w}}{K_{\large\textrm a}}\ &= \dfrac{\textrm{1.00e-14}}{\textrm{1.75e-5}} = \textrm{5.7e-10} \end{align}
Thus,
$K_{\large\ce a} K_{\large\ce b} = K_{\large\ce w}$
or
$\mathrm{p\mathit K_{\large a} + p\mathit K_{\large b} = 14}$
for a conjugate acid-base pair. Let us look at a numerical problem of this type.
Example $2$
Calculate the $\ce{[Na+]}$, $\ce{[Ac- ]}$, $\ce{[H+]}$ and $\ce{[OH- ]}$ of a solution of 0.100 M $\ce{NaAc}$ (at 298 K). (Ka = 1.8E-5)
Solution
Let x represent $\ce{[H+]}$, then
$\begin{array}{ccccccc} \ce{H2O &+ &Ac- &\rightleftharpoons &HAc &+ &OH-}\ &&0.100-x &&x &&x \end{array}$
$\dfrac{x^2}{0.100-x} = \dfrac{\textrm{1E-14}}{\textrm{1.8E-5}} = \textrm{5.6E-10}$
Solving for x results in
\begin{align} x &= \sqrt{0.100\times\textrm{5.6E-10}}\ &= \textrm{7.5E-6} \end{align}
$\ce{[OH- ]} = \ce{[HAc]} = \textrm{7.5E-6}$
$\ce{[Na+]} = \textrm{0.100 F}$
DISCUSSION
This corresponds to a pH of 8.9 or $\ce{[H+]} = \textrm{1.3E-9}$.
Note that $\dfrac{K_{\large\ce w}}{K_{\large\ce a}} = K_{\large\ce b}$ of $\ce{Ac-}$, so that Kb rather than Ka may be given as data in this question.
Salts of Weak Acids and Weak Bases
A salt formed between a weak acid and a weak base can be neutral, acidic, or basic depending on the relative strengths of the acid and base.
• If Ka(cation) > Kb(anion) the solution of the salt is acidic.
• If Ka(cation) = Kb(anion) the solution of the salt is neutral.
• If Ka(cation) < Kb(anion) the solution of the salt is basic.
Example $3$
Arrange the three salts according to their acidity. $\ce{NH4CH3COO}$ (ammonium acetate), $\ce{NH4CN}$ (ammonium cyanide), and $\ce{NH4HC2O4}$ (ammonium oxalate).
• $K_{\large\ce a}(\textrm{acetic acid}) = \textrm{1.85E-5}$,
• $K_{\large\ce a}(\textrm{hydrogen cyanide}) = \textrm{6.2E-10}$,
• $K_{\large\ce a}(\textrm{oxalic acid}) = \textrm{5.6E-2}$,
• $K_{\large\ce b}(\ce{NH3}) = \textrm{1.8E-5}$.
Solution
ammonium oxalate -- acidic, $K_{\large\ce a}(\ce o) > K_{\large\ce b}(\ce{NH3})$
ammonium acetate -- neutral, $K_{\large\ce a} = K_{\large\ce b}$
ammonium cyanide -- basic, $K_{\large\ce a}(\ce c) < K_{\large\ce b}(\ce{NH3})$
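The same comparison can be automated; a minimal Python sketch (the function name and the 5% "equal" tolerance are our own choices), using the constants from this example:

```python
def classify_salt(Ka_cation, Kb_anion, tol=0.05):
    """Apply the rule above: compare Ka of the cation with Kb of the anion."""
    if abs(Ka_cation - Kb_anion) / max(Ka_cation, Kb_anion) < tol:
        return "neutral"
    return "acidic" if Ka_cation > Kb_anion else "basic"

Kw = 1e-14
Ka_NH4 = Kw / 1.8e-5                       # from Kb(NH3)
for name, Ka_parent in (("acetate", 1.85e-5),
                        ("cyanide", 6.2e-10),
                        ("oxalate", 5.6e-2)):
    Kb_anion = Kw / Ka_parent              # hydrolysis constant of the anion
    print(name, "->", classify_salt(Ka_NH4, Kb_anion))
```

This reproduces the ordering above: the oxalate salt is acidic, the acetate salt essentially neutral, and the cyanide salt basic.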
Questions
1. The reaction of an acid and a base always produces a salt as the by-product, true or false? (t/f)
2. Is a solution of sodium acetate acidic, neutral or basic?
3. Are solutions of ammonium chloride acidic, basic or neutral?
4. Calculate the pH of a 0.100 M $\ce{KCN}$ solution.
$K_{\large\ce a}(\ce{HCN}) = \textrm{6.2e-10}$, $K_{\large\ce b}(\ce{CN-}) = \textrm{1.6E-5}$.
5. The symbol $K_{\large\ce b}(\ce{HS-})$ is the equilibrium constant for the reaction:
1. $\ce{HS- + OH- \rightleftharpoons S^2- + H2O}$
2. $\ce{HS- + H2O \rightleftharpoons H2S + OH-}$
3. $\ce{HS- + H2O \rightleftharpoons H3O+ + S^2-}$
4. $\ce{HS- + H3O+ \rightleftharpoons H2S + H2O}$
6. What symbol would you use for the equilibrium constant of
$\ce{HS- \rightleftharpoons H+ + S^2-}$
Solutions
1. Answer true
Consider...
Water is the real product, while the salt is formed from the spectator ions.
2. Answer basic
Consider...
Acetic acid is a weak acid that forms a salt with a strong base, $\ce{NaOH}$. The salt solution turns bromothymol-blue blue.
3. Answer acidic
Consider...
Ammonium hydroxide does not have the same strength as a base as $\ce{HCl}$ has as an acid. Ammonium chloride solutions turn bromothymol-blue yellow.
4. Answer 11.1
Consider...
$\begin{array}{ccccccccccc} \ce{KCN &\rightarrow &K+ &+ &CN- &&&&&&}\ \ce{&&& &CN- &+ &H2O &\rightleftharpoons &HCN &+ &OH-}\ &&& &(0.100-x) &&&&x &&x \end{array}$
\begin{align} x &= (0.100\times\textrm{1.6E-5})^{1/2}\ &= \textrm{1.2E-3}\ \ce{pOH} &= 2.9\ \ce{pH} &= 11.1 \end{align}
5. Answer b
Consider...
Write an equation for Kb yourself. Do not guess. The b. is the closest among the four.
6. Answer Ka
Consider...
This is the ionization of $\ce{HS-}$; Ka for $\ce{HS-}$, or $K_{\large\ce a_{\Large 2}}$ for $\ce{H2S}$.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Acids_and_Bases/Acid_Base_Reactions/Hydrolysis.txt
|
A neutralization reaction is when an acid and a base react to form water and a salt and involves the combination of H+ ions and OH- ions to generate water. The neutralization of a strong acid and strong base has a pH equal to 7. The neutralization of a strong acid and weak base will have a pH of less than 7, and conversely, the resulting pH when a strong base neutralizes a weak acid will be greater than 7.
When a solution is neutralized, it means that salts are formed from equivalent amounts of acid and base. The amount of acid needed is the amount that would give one mole of protons (H+) and the amount of base needed is the amount that would give one mole of hydroxide ions (OH-). Because salts are formed from neutralization reactions with equivalent amounts of acids and bases, N equivalents of acid will always neutralize N equivalents of base.
Table $1$: The most common strong acids and bases. Most everything else not in this table is considered to be weak.
Strong Acids Strong Bases
HCl LiOH
HBr NaOH
HI KOH
HClO4 RbOH
HNO3 CsOH
Ca(OH)2
Sr(OH)2
Ba(OH)2
Strong Acid-Strong Base Neutralization
Consider the reaction between $\ce{HCl}$ and $\ce{NaOH}$ in water:
$\underset{acid}{HCl(aq)} + \underset{base}{NaOH_{(aq)}} \leftrightharpoons \underset{salt}{NaCl_{(aq)}} + \underset{water}{H_2O_{(l)}}$
This can be written in terms of the ions (and canceled accordingly)
$\ce{H+(aq)} + \cancel{\ce{Cl-(aq)}} + \cancel{\ce{Na+(aq)}} + \ce{OH-(aq)} \rightarrow \cancel{\ce{Na+(aq)}} + \cancel{\ce{Cl-(aq)}} + \ce{H2O(l)}$
When the spectator ions are removed, the net ionic equation shows the $H^+$ and $OH^-$ ions forming water in a strong acid, strong base reaction:
$H^+_{(aq)} + OH^-_{(aq)} \leftrightharpoons H_2O_{(l)}$
When a strong acid and a strong base fully neutralize, the pH is neutral. Neutral pH means that the pH is equal to 7.00 at 25 ºC. At this point of neutralization, there are equal amounts of $OH^-$ and $H_3O^+$. There is no excess $NaOH$. The solution is $NaCl$ at the equivalence point. When a strong acid completely neutralizes a strong base, the pH of the salt solution will always be 7.
Neutralization Involving a Weak Acid or Weak Base
The neutralization of a weak base by a strong acid, for example, can be shown by the net ionic equation:
$H^+_{(aq)} + NH_{3(aq)} \leftrightharpoons NH^+_{4(aq)}$
The equivalence point of a neutralization reaction is when both the acid and the base in the reaction have been completely consumed and neither of them are in excess. When a strong acid neutralizes a weak base, the resulting solution's pH will be less than 7. When a strong base neutralizes a weak acid, the resulting solution's pH will be greater than 7.
Table $2$: pH Levels at the Equivalence Point
Strength of Acid and Base pH Level
Strong Acid-Strong Base 7
Strong Acid-Weak Base <7
Weak Acid-Strong Base >7
Weak Acid-Weak Base pH <7 if $K_a > K_b$
pH =7 if $K_a = K_b$
pH >7 if $K_a< K_b$
Titration
One of the most common and widely used ways to complete a neutralization reaction is through titration. In a titration, an acid or a base is in a flask or a beaker. We will show two examples of a titration. The first will be the titration of an acid by a base. The second will be the titration of a base by an acid.
Example $1$: Titrating a Weak Acid
Suppose 13.00 mL of a weak acid HX, with a molarity of 0.1 M and $K_a = 7 \times 10^{-3}$, is titrated with 0.1 M NaOH. How would we draw this titration curve?
Solution
Step 1: First, we need to find out where our titration curve begins. To do this, we find the initial pH of the weak acid in the beaker before any NaOH is added. This is the point where our titration curve will start. To find the initial pH, we first need the concentration of H3O+.
Set up an ICE table to find the concentration of H3O+:
$HX$ $H_2O$ $H_3O^+$ $X^-$
Initial 0.1 M -- 0 0
Change -x M -- +x M +x M
Equilibrium (0.1-x) M -- x M x M
$K_a = 7\times10^{-3} = \dfrac{x^2}{0.1-x}$
$x=[H_3O^+]=0.023\;M$
Solve for pH:
$pH=-\log_{10}[H_3O^+]=-\log_{10}(0.023)=1.64$
Step 2: To accurately draw our titration curve, we need to calculate a data point between the starting point and the equivalence point. To do this, we solve for the pH when neutralization is 50% complete.
Solve for the moles of $OH^-$ added to the beaker. We can do this by first finding the volume of $OH^-$ added to the acid at half-neutralization: 50% of 13 mL = 6.5 mL.
Use the volume and molarity to solve for moles: (6.5 mL)(0.1 M) = 0.65 mmol $OH^-$.
Now solve for the moles of acid to be neutralized: (13 mL)(0.1 M) = 1.3 mmol HX.
Set up an ICE table to determine the equilibrium amounts of HX and X-:
$HX$ $OH^-$ $X^-$
Initial 1.3 mmol 0 0
Added Base -- 0.65 mmol --
Change -0.65 mmol -0.65 mmol +0.65 mmol
Equilibrium 0.65 mmol 0 mmol 0.65 mmol
To calculate the pH at 50% neutralization, use the Henderson-Hasselbalch approximation.
$pH = pK_a + \log\dfrac{\text{mmol base}}{\text{mmol acid}} = pK_a + \log\dfrac{0.65}{0.65} = pK_a + \log(1) = pK_a$
Therefore, when the weak acid is 50% neutralized, pH=pKa
Step 3: Solve for the pH at the equivalence point.
At the equivalence point, all of the HX has been converted to $X^-$ and the total volume has doubled (13 mL of acid plus 13 mL of base), so the conjugate base is at half the original acid concentration: 0.1 M/2 = 0.05 M $X^-$.
Set up an ICE table for the hydrolysis of $X^-$:
$X^-$ $H_2O$ $HX$ $OH^-$
Initial 0.05 M -- 0 0
Change -x M -- +x M +x M
Equilibrium (0.05-x) M -- x M x M
$K_b = \dfrac{x^2}{0.05-x}$
Since $K_w = K_aK_b$, we can substitute $K_w/K_a$ for $K_b$ to get $\dfrac{K_w}{K_a} = \dfrac{x^2}{0.05}$
$x=[OH^-]=(2.67)(10^{-7})$
$pOH=-\log_{10}((2.67)(10^{-7}))=6.57$
$pH=14-6.57=7.43$
Step 4: Solve for the pH after a bit more NaOH is added past the equivalence point. This will give us an accurate idea of where the pH levels off at the endpoint. The equivalence point is when 13 mL of NaOH is added to the weak acid. Let's find the pH after 14 mL is added.
Solve for the moles of $OH^-$:
$(14\; mL)(0.1\;M)=1.4\; mmol\; OH^-$
Solve for the moles of acid:
$(13\; mL)(0.1\;M)= 1.3\;mmol \;HX$
Set up an ICE table to determine the excess $OH^-$:
$HX$ $OH^-$ $X^-$
Initial 1.3 mmol 0 0
Added Base -- 1.4 mmol --
Change -1.3 mmol -1.3 mmol +1.3 mmol
Equilibrium 0 mmol 0.1 mmol 1.3 mmol
$[OH^-]=\dfrac{0.1\;mmol}{13\;mL+14\;mL}=3.7\times 10^{-3}\;M$
$pOH=-\log_{10}(3.7\times 10^{-3})=2.43$
$pH=14-2.43=11.57$
We have now gathered sufficient information to construct our titration curve.
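The four points computed in Steps 1 through 4 can be generated together; a minimal Python sketch of this example (function and variable names are ours; expect small rounding differences from the hand-worked values):

```python
import math

Ka, Kw = 7e-3, 1e-14
Ca, Va = 0.1, 13.0          # acid molarity (M) and volume (mL)
Cb = 0.1                    # NaOH molarity

def pH_after(Vb):
    """pH after adding Vb mL of NaOH, handled regime by regime."""
    nA, nB = Ca * Va, Cb * Vb        # mmol of acid and of added base
    Vtot = Va + Vb
    if nB == 0:                      # Step 1: pure weak acid
        x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * Ca)) / 2
        return -math.log10(x)
    if abs(nB - nA) < 1e-9:          # Step 3: equivalence, hydrolysis of X-
        return 14 + math.log10(math.sqrt((nA / Vtot) * Kw / Ka))
    if nB < nA:                      # Step 2: buffer (Henderson-Hasselbalch)
        return -math.log10(Ka) + math.log10(nB / (nA - nB))
    return 14 + math.log10((nB - nA) / Vtot)   # Step 4: excess strong base

for Vb in (0.0, 6.5, 13.0, 14.0):
    print(f"{Vb:4.1f} mL NaOH -> pH {pH_after(Vb):.2f}")
```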
Example $2$
In this case, we will say that a base solution is in an Erlenmeyer flask. To neutralize this base solution, you would add an acid solution from a buret into the flask. At the beginning of the titration, before adding any acid, it is necessary to add an indicator, so that there will be a color change to signal when the equivalence point has been reached.
We can use the equivalence point to find molarity and vice versa. For example, if we know that it takes 10.5 mL of an unknown solution to neutralize 15 mL of 0.0835 M NaOH solution, we can find the molarity of the unknown solution using the following formula:
$M_1V_1 = M_2V_2$
where M1 is the molarity of the first solution, V1 is the volume in liters of the first solution, M2 is the molarity of the second solution, and V2 is the volume in liters of the second solution. When we plug in the values given to us into the problem, we get an equation that looks like the following:
$(0.0835)(0.015) = M_2(0.0105)$
After solving for M2, we see that the molarity of the unknown solution is 0.119 M. From this problem, we see that in order to neutralize 15 mL of 0.0835 M NaOH solution, 10.5 mL of the .119 M unknown solution is needed.
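A sketch of the same relation in Python (illustrative; any consistent volume units work, since they cancel):

```python
def unknown_molarity(M1, V1, V2):
    """Solve M1*V1 = M2*V2 for M2."""
    return M1 * V1 / V2

print(round(unknown_molarity(0.0835, 15.0, 10.5), 3))  # 0.119 M
```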
Problems
1. Will the salt formed from the following reaction have a pH greater than, less than, or equal to seven?
$CH_3COOH_{(aq)} + NaOH_{(s)} \leftrightharpoons Na^+ + CH_3COO^- + H_2O_{(l)}$
2. How many mL of 0.0955 M $Ba(OH)_2$ solution are required to titrate 45.00 mL of 0.0452 M $HNO_3$?
3. Will the pH of the salt solution formed by the following chemical reaction be greater than, less than, or equal to seven?
$2NaOH + H_2SO_4 \leftrightharpoons 2H_2O + Na_2SO_4$
4. We know that it takes 31.00 mL of an unknown solution to neutralize 25.00 mL of 0.135 M KOH solution. What is the molarity of the unknown solution?
Solutions
1. After looking at the net ionic equation,
$CH_3CO_2H_{(aq)} + OH^- \leftrightharpoons CH_3COO^- + H_2O_{(l)}$
we see that a weak acid, $CH_3CO_2H$, is being neutralized by a strong base, $OH^-$. By looking at the chart above, we can see that when a strong base neutralizes a weak acid, the pH level is going to be greater than 7.
2. Because each formula unit of $Ba(OH)_2$ supplies two $OH^-$ ions, the moles of base required are half the moles of acid, so the equation becomes:
$M_1V_1= 2M_2V_2$
where subscript 1 refers to the acid and 2 to the base. We can solve for $V_2$.
$V_2= \dfrac{M_1V_1}{2M_2} = \dfrac{(0.0452)(45.00)}{2(0.0955)} = 10.6\; mL$
Therefore it takes 10.6 mL of $Ba(OH)_2$ to titrate 45.00 mL of $HNO_3$.
3. We know that $NaOH$ is a strong base and $H_2SO_4$ is a strong acid. Therefore, we know the pH of the salt will be equal to 7.
4. By plugging the numbers given in the problem into the equation:
$M_1V_1 = M_2V_2$
we can solve for M2.
$(0.135)(0.025) = M_2(0.031)$
$M_2 = 0.109\; M$. Therefore, the molarity of the unknown solution is 0.109 M.
Contributors and Attributions
• Katherine Dunn (UCD), Carlynn Chappell (UCD)
Predicting the Direction of Acid Base Reactions
The ability to predict the outcomes of acid-base reactions, which are very common in chemistry, is extremely beneficial. Many different factors can change the outcome of an acid-base reaction, including heat and pressure. Knowing the Ka (acid dissociation constant) and the Kb (base dissociation constant) is the best way to predict the direction of an acid-base reaction.
Introduction
Given the following Ka and Kb values for the two species in a reaction:

      $OH^-$           $CH_3CH_2SH$
Ka    2.5 x 10^-29     6.8 x 10^-6
Kb    1.5 x 10^-4      2.7 x 10^-22
Using the Ka and Kb values, one can predict which molecule will act as the acid in this reaction and which molecule will act as the base. For OH- the Ka is extremely small in relation to Kb, so it will act as a base in this reaction. For CH3CH2SH, the Kb is extremely small in relation to Ka so CH3CH2SH will act like an acid.
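The rule itself is mechanical and can be expressed as a tiny sketch (illustrative; the dictionary simply restates the table above):

```python
# Larger Ka -> proton donor (acid); larger Kb -> proton acceptor (base)
species = {
    "OH-":      {"Ka": 2.5e-29, "Kb": 1.5e-4},
    "CH3CH2SH": {"Ka": 6.8e-6,  "Kb": 2.7e-22},
}
acid = max(species, key=lambda s: species[s]["Ka"])
base = max(species, key=lambda s: species[s]["Kb"])
print(f"acid: {acid}, base: {base}")   # acid: CH3CH2SH, base: OH-
```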
Problems
For problems 1 through 3:
In which direction will an acid-base reaction move, given the following factor?
1. K is Big
2. K is Small
3. K=Q
(For extra review, see the ChemWiki page on K and Q.)
For problems 4-6:
Given the following information, finish the equation and determine the acid and the base
4. NaOH + HCl ⇔ ? + ?

      NaOH            HCl
Ka    2.5 x 10^-20    6.8 x 10^-7
Kb    1.5 x 10^-2     2.7 x 10^-27

5. H2O + HCl ⇔ ? + ?

      H2O             HCl
Ka    3.5 x 10^-26    1.8 x 10^-4
Kb    7.5 x 10^-25    9.7 x 10^-24

6. H2O + HC2H3O2 ⇔ ? + ?

      H2O             HC2H3O2
Ka    5.5 x 10^-10    3.4 x 10^-3
Kb    5.3 x 10^-10    7.2 x 10^-19
Answers
1. Reaction goes towards products (to the right)
2. Reaction goes towards reactants (to the left)
3. The reaction is at equilibrium, so there is no net change in either direction
4. Products: H2O + NaCl
• NaOH is the base
• HCl is the acid
5. Products: H3O+ + Cl-
• H2O is the base
• HCl is the acid
6. Products: H3O+ + C2H3O2-
• H2O is the base
• HC2H3O2 is the acid
Contributors
• Ryan Benoit (UCD)
• Dakota Miller (SWCC)
An acid, being a proton donor, can only act as an acid if there is a suitable base present to accept the proton. What do we mean by "suitable'' in this context? Simply that a base, in order to accept a proton, must provide a lower-energy resting place for the proton. (We are actually referring to something called "free energy" here, but don't worry about that if you are not familiar with that term; just think of it as a form of potential energy.) Thus you can view an acid-base reaction as the "fall" of the proton from a higher potential energy to a lower potential energy-- the same as a book will fall (if you drop it) only downward, to a position of lower (gravitational) potential energy.
Sources and Sinks
Viewed in this way, an acid is a proton source, a base is a proton sink. The tendency for a proton to move from source to sink depends on how far the proton can fall in energy, and this in turn depends on the energy difference between the source and the sink. This is entirely analogous to measuring the tendency of water to flow down from a high elevation to a lower one; this tendency (which is related to the amount of energy that can be extracted in the form of electrical work if the water flows through a power station at the bottom of the dam) will be directly proportional to the difference in elevation (difference in potential energy) between the source (top of the dam) and the sink (bottom of the dam).
Now look at the diagram at the right and study it carefully. In the center columns of the diagram, you see a list of acids and their conjugate bases. These acid-base pairs are plotted on an energy scale which is shown at the left side of the diagram. This scale measures the free energy released when one mole of protons is transferred from a given acid to H2O. Thus if one mole of HCl is added to water, it dissociates completely and heat is released as the protons fall from the source (HCl) to the lower free energy that they possess in the H3O+ ions that are formed when the protons combine with H2O.
Any acid shown on the left side of the vertical line running down the center of the diagram can donate protons to any base (on the right side of the line) that appears below it. The greater the vertical separation, the greater will be the fall in free energy of the proton, and the more complete will be the proton transfer at equilibrium.
Notice the H3O+ - H2O pair shown at zero kJ on the free energy scale. This zero value of free energy corresponds to the proton transfer process
$\ce{H3O^{+ } + H_2O \rightarrow H2O + H3O^{+}}$
which is really no reaction at all, hence the zero fall in free energy of the proton. Since the proton is equally likely to attach itself to either of two identical H2O molecules, the equilibrium constant is unity.
Now look at the acid/base pairs shown at the top of the table, above the H3O+ - H2O line. All of these acids can act as proton sources to those sinks (bases) that appear below them. Since H2O is a suitable sink for these acids, all such acids will lose protons to H2O in aqueous solutions. These are therefore all strong acids that are 100% dissociated in aqueous solution; this total dissociation reflects the very large equilibrium constants that are associated with any reaction that undergoes a fall in free energy of more than a few kilojoules per mole.
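The quantitative link is $\Delta G = -RT \ln K$, so $K = e^{-\Delta G/RT}$. A short Python sketch (illustrative, with rounded constants) shows how steeply K depends on the height of the fall:

```python
import math

R, T = 8.314, 298.15   # J/(mol K), K

def K_from_fall(delta_G_kJ):
    """Equilibrium constant for a proton transfer whose free energy
    changes by delta_G_kJ (negative = downhill, favorable)."""
    return math.exp(-delta_G_kJ * 1000 / (R * T))

print(f"{K_from_fall(-20):.1e}")  # -20 kJ/mol fall: K ~ 3e+03, essentially complete
print(f"{K_from_fall(+50):.1e}")  # +50 kJ/mol climb (HCN-like): K ~ 2e-09
print(f"{K_from_fall(+80):.1e}")  # +80 kJ/mol climb: K ~ 1e-14, cf. the ion product of water
```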
Leveling effect
Because H2O serves as a proton sink to any acid in which the proton free energy level is greater than zero, the strong acids such as HCl and H2SO4 cannot "exist" (as acids) in aqueous solution; they exist as their conjugate bases instead, and the only proton donor present will be H3O+. This is the basis of the leveling effect, which states that the strongest acid that can exist in aqueous solution is H3O+.
Dissociation of weak acids
Now consider a weak acid, such as HCN at about 50 kJ on the scale. This positive free energy means that in order for a mole of HCN to dissociate (transfer its proton to H2O), the proton must gain about 50 kJ of free energy per mole. In the absence of a source of energy, the reaction will simply "not go"; HCN dissociates only to a minute extent in water.
Why is a weak acid such as HCN dissociated at all? The molecules in solution are continually being struck and bounced around by the thermal motions of neighboring molecules. Every once in a while, a series of fortuitous collisions will provide enough kinetic energy to a HCN molecule to knock off the proton, effectively boosting it to the level required to attach itself to water. This process is called thermal excitation, and its probability falls off very rapidly as the distance (in kJ) that the proton must rise increases. The protons on a "stronger" weak acid such as HSO4- or CH3COOH will be thermally excited to the H3O+ level much more frequently than will the protons on HCN or HCO3-, hence the difference in the dissociation constants of these acids.
Titration
Although a weak acid such as HCN will not react with water to a significant extent, you are well aware that such an acid can still be titrated with strong base to yield a solution of NaCN at the equivalence point. To understand this process, find the H2O-OH- pair at about 80 kJ on the free energy scale. Because the OH- ion can act as a proton sink to just about every acid shown on the diagram, the addition of strong base in the form of NaOH solution allows the protons at any acid above this level to fall to the OH- level according to the reaction
$H_3O^+ + OH^– \rightleftharpoons 2 H_2O$
Titration, in other words, consists simply in introducing a low free energy sink that can drain off the protons from the acids initially present, converting them all into their conjugate base forms.
Strong bases
There are two other aspects of the H2O-OH- pair that have great chemical significance. First, its location at 80 kJ/mol tells us that for a H2O molecule to transfer its proton to another H2O molecule (which then becomes a H3O+ ion whose relative free energy is zero), a whopping 80 kJ/mol of free energy must be supplied by thermal excitation. This is so improbable that only one out of about 10 million H2O molecules will have its proton elevated to the H3O+ level at a given time; this corresponds to the small value of the ion product of water, about $10^{-14}$.
The other aspect of the H2O-OH- pair is that its location means the hydroxide ion is the strongest base that can exist in water. On our diagram only two stronger bases (lower proton free energy sinks) are shown: the amide ion NH2- and the oxide ion O2-. What happens if you add a soluble oxide such as Na2O to water? Since O2- is a proton sink to water, it will react with the solvent, leaving OH- as the strongest base present:
$O^{2-} + H_2O \rightarrow 2OH^-$
This again is the leveling effect; all bases stronger than OH- appear equally strong in water, simply because they are all converted to OH-.
Proton free energy and pH
The pH of a solution is more than a means of expressing its hydrogen ion concentration on a convenient logarithmic scale. The concept of pH was suggested by the Danish chemist Sørensen in 1909 as a means of compressing the wide range of [H+] values encountered in aqueous solutions into a convenient range. The modern definition of pH replaces [H+] with {H+} in which the curly brackets signify the effective concentration of the hydrogen ion, which chemists refer to as the hydrogen ion activity
$pH = -\log \{H^{+}\}$
The real physical meaning of pH is that it measures the availability of protons in the solution; that is, the ability of the solution to supply protons to a base such as H2O. This is the same as the "hydrogen ion concentration" [H+] only in rather dilute solutions; at ionic concentrations greater than about 0.001M, electrostatic interactions between the ions cause the relation between the pH (as measured by direct independent means) and [H3O+] to break down. Thus we would not expect the pH of a 0.100 M solution of HCl to be exactly 1.00.
On the right side of the illustration is a pH scale. At the pH value corresponding to a given acid-base pair, the acid and base forms will be present at equal concentrations. For example, if you dissolve some solid sodium sulfate in pure water and then adjust the pH to 2.0, about half of the $SO_4^{2-}$ will be converted into $HSO_4^-$. Similarly, a solution of $Na_2CO_3$ in water will not contain a very large fraction of $CO_3^{2-}$ unless the pH is kept above 10.
Suppose we have a mixture of many different weak acid-base systems, such as exists in most biological fluids or natural waters, including the ocean. The available protons will fall to the lowest free energy levels possible, first filling the lowest-energy sink, then the next, and so on until there are no more proton-vacant bases below the highest proton-filled (acid) level. Some of the highest protonated species will donate protons to H2O through thermal excitation, giving rise to a concentration of H3O+ that will depend on the concentrations of the various species. The equilibrium pH of the solution is a measure of this concentration, but this in turn reflects the relative free energy of protons required to keep the highest protonated species in its acid form; it is in this sense that pH is a direct measure of proton free energy.
In order to predict the actual pH of any given solution, we must of course know something about the nominal concentrations Ca of the various acid-base species, since this will strongly affect the distribution of protons. Thus if one proton-vacant level is present at twice the concentration of another, it will cause twice as many acid species from a higher level to become deprotonated. In spite of this limitation, the proton free energy diagram provides a clear picture of the relationships between the various acid and base species in a complex solution.
Learning Objectives
• Calculate the pH and plot it during a titration of a strong acid by a strong base.
• Calculate the pH and plot it when a weak acid is titrated by a strong base.
Titrations
Titration is the process of obtaining quantitative information about a sample by reacting it, via a fast chemical reaction, with a measured volume of a reactant whose concentration is known. When an acid-base reaction is used, the process is called acid-base titration. When a redox reaction is used, the process is called a redox titration. Titration is also called volumetric analysis, which is a type of quantitative chemical analysis.
In freshman chemistry, we treat titration this way. A titration is a technique where a solution of known concentration is used to determine the concentration of an unknown solution. Typically, the titrant (the known solution) is added from a buret to a known quantity of the analyte (the unknown solution) until the reaction is complete. Knowing the volume of titrant added allows the determination of the concentration of the unknown. Often, an indicator is used to signal the end of the reaction, the endpoint.
For acid-base titration, a modern lab will usually monitor titration with a pH meter which is interfaced to a computer, so that you will be able to plot the pH or other physical quantities versus the volume that is added.
In this module, we simulate this experiment graphically without using chemicals. A program that simulates titrations of strong acids and strong bases is easy to write, because the calculation of pH in this experiment is very simple.
An example is the titration of acetic acid (a weak acid) with NaOH (a strong base), following the equation below.
$\ce{HC2H3O2(aq) + OH^{-}(aq) → C2H3O2^{-}(aq) + H2O(l)}$
Titration Curve
The plot of pH as a function of titrant added is called a titration curve. Let us take a look at a titration process:
Example $1$
Evaluate $\ce{[H+]}$ and pH in the titration of 10.0 mL 1.0 M $\ce{HCl}$ solution with 1.0 M $\ce{NaOH}$ solution, and plot the titration curve.
Solution
\begin{align} \textrm{The amount of acid present} &= \mathrm{V_a\times C_a}\\ &= \mathrm{10.0\: mL \times \dfrac{1.0\: mol}{1000\: mL}}\\ &= \textrm{10 mmol (milli-mole)} \end{align}
$\textrm{The amount of base NaOH added} = \mathrm{V_b\times C_b}$
$\textrm{The amount of acid left} = \mathrm{V_a\times C_a - V_b\times C_b}$
$\textrm{The concentration of acid and thus }\ce{[H+]} = \mathrm{\dfrac{[V_a\times C_a - V_b\times C_b]}{V_a + V_b}}$
With the above formulation, we can build a table of values, as shown below.

Base added (mL)   [H+]          pH
0                 1.0           0.0
1.0               9/11          0.087
2.0               8/12          0.176
5.0               5/15          0.477   (half-equivalence point)
8.0               2/18          0.954
9.0               1/19          1.279
9.3               0.7/19.3      1.440
9.5               0.5/19.5      1.591
9.7               0.3/19.7      1.817
9.8               0.2/19.8      2.0
9.9               0.1/19.9      2.300
9.95              0.05/19.95    2.60
10 (equivalence)  $\ce{H2O \rightleftharpoons H+ + OH-}$; $\ce{NaCl}$ is a neutral salt   7

Base added (mL)   [OH-]         pH
10.05             0.05/20.05    11.397
10.10             0.10/20.1     11.697
11.0              1/21          12.678
15.0              5/25          13.301
20.0              10/30         13.52
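A minimal simulation in the spirit of this module (an illustrative Python sketch) reproduces the table from the excess-reagent formulas:

```python
import math

Ca, Va = 1.0, 10.0   # HCl: mol/L, mL
Cb = 1.0             # NaOH titrant, mol/L

def pH(Vb):
    n_acid, n_base = Ca * Va, Cb * Vb
    V_total = Va + Vb
    if n_base < n_acid:       # before equivalence: excess H+
        return -math.log10((n_acid - n_base) / V_total)
    if n_base > n_acid:       # after equivalence: excess OH-
        return 14 + math.log10((n_base - n_acid) / V_total)
    return 7.0                # equivalence: neutral NaCl solution

for Vb in (0.0, 1.0, 5.0, 9.9, 10.0, 10.1, 15.0, 20.0):
    print(f"{Vb:5.1f} mL  pH = {pH(Vb):.3f}")
```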
Working to learn
Plot the titration curve on a graph based on the data.
Answer the following questions.
At equivalence point, why is pH=7? What formula is used to calculate pH?
Why does pH change rapidly at the equivalence point?
Sketch titration curves when the concentrations of both acids and bases are 0.10, 0.0010 and 0.000010 M. What can you conclude from these sketches?
What are $\ce{[Na+]}$ and $\ce{[Cl- ]}$ at the following points: initially (before any base is added), half-equivalence point; equivalence point, after 10.5 mL $\ce{NaOH}$ is added, after 20.0 mL $\ce{NaOH}$ is added?
Once you have acquired the skill to calculate the pH at any point during a titration, you may write a simulation program to plot the titration curve. Calculations for strong acid-strong base titrations are simple, but when a weak acid or base is involved, the calculations are somewhat more complicated; some simulation programs are available on the internet.
Questions
1. What are the pH of solutions of 10, 1.0, 0.10, 0.010 and 0.0010 M $\ce{HCl}$?
2. What are $\ce{[H+]}$ and the pH at the half equivalence point when a solution of 1.0 M $\ce{HCl}$ is titrated with a 1.0 M $\ce{NaOH}$ solution?
3. What is $\ce{[Na+]}$ at the half equivalence point when a solution of 1.0 M $\ce{HCl}$ is titrated with a 1.0 M $\ce{NaOH}$ solution?
4. What are $\ce{[H+]}$ and the pH at the equivalence point when a solution of 1.0 M $\ce{HCl}$ is titrated with a 1.0 M $\ce{NaOH}$ solution?
5. What are $\ce{[Na+]}$ and $\ce{[Cl- ]}$ at the equivalence point when a solution of 1.0 M $\ce{HCl}$ is titrated with a 1.0 M $\ce{NaOH}$ solution?
6. What are $\ce{[H+]}$ and the pH of 1.0 M acetic acid solution? Ka = 1.8e-5
7. What is the pH of the above solution when half of the acid is neutralized by $\ce{NaOH}$ in the titration?
8. What is the pH of the end point or equivalence point when 1.0 M acetic acid is titrated by 1.0 M $\ce{NaOH}$ solution?
9. A 0.10 M acetic acid solution is titrated with a 0.10 M $\ce{NaOH}$ solution to the equivalence point. What is the concentration of the acetate ion?
Solutions
1. Answer pH = -1, 0, 1, 2, and 3
Consider...
No calculators should be used.
2. Answer $\mathrm{[H^+] = 0.333}$; pH = 0.477
Consider...
Do not forget the dilution factor.
3. Answer $\mathrm{[Na^+] = 0.333}$;
Consider...
What about $\ce{[Cl- ]}$?
4. Answer $\ce{[H+]} = \textrm{1e-7}$; pH = 7
Consider...
This is only a theoretical value.
5. Answer $\ce{[Na+]} = \ce{[Cl- ]} = 0.5$
Consider...
Note that $\ce{[H+]}$ is balanced by $\ce{[OH- ]}$, and $\ce{[Na+]}$ is balanced by $\ce{[Cl- ]}$.
Most students usually forget the salt resulting from the titration.
6. Answer $\ce{[H+]}=0.00424$; pH = 2.37
Consider...
The approximation $\ce{[H+]}=(C K_{\ce a})^{1/2}$ can be used.
7. Answer pH = 4.74
Consider...
Do you know why pH = pKa in this case? At this point, the solution is a very good buffer.
8. Answer $\ce{[OH- ]}=1.67\textrm{e-}5$; pOH = 4.78; pH = 9.22
Consider...
Justify the following formula for this condition. $\ce{[OH- ]}=(K_{\ce w}/K_{\ce a}\times C)^{1/2}$
What is the pH of a 0.5 M sodium acetate solution?
9. Answer 0.05 M
Consider...
You have doubled the volume in this case, and the concentration is 0.05 M sodium acetate.
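The three weak-acid answers (questions 6-8) follow from the approximations quoted in the hints; a short sketch (illustrative):

```python
import math

Ka, Kw, C = 1.8e-5, 1e-14, 1.0   # 1.0 M acetic acid titrated with 1.0 M NaOH

# Q6: initial pH, [H+] ~ (C*Ka)^(1/2)
print("Q6:", round(-math.log10(math.sqrt(C * Ka)), 2))   # 2.37

# Q7: half-neutralized buffer with equal HA and A- -> pH = pKa
print("Q7:", round(-math.log10(Ka), 2))                  # 4.74

# Q8: equivalence -> 0.5 M acetate; [OH-] ~ (Kw/Ka * C_salt)^(1/2)
OH = math.sqrt(Kw / Ka * 0.5)
print("Q8:", round(14 + math.log10(OH), 2))              # 9.22
```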
Acid and Base Indicators
The most common method to get an idea of the pH of a solution is to use an acid-base indicator. An indicator is a large organic molecule that works somewhat like a "color dye". Whereas most dyes do not change color with the amount of acid or base present, there are many molecules, known as acid-base indicators, which do respond to a change in the hydrogen ion concentration. Most of the indicators are themselves weak acids.
Indicators
The most common indicator is found on "litmus" paper. It is red below pH 4.5 and blue above pH 8.2.
       Blue litmus      Red litmus
Acid   turns red        stays the same
Base   stays the same   turns blue
Other commercial pH papers are able to give colors for every main pH unit. Universal Indicator, which is a solution of a mixture of indicators, is also able to provide a full range of colors for the pH scale.
A variety of indicators change color at various pH levels. A properly selected acid-base indicator can be used to visually "indicate" the approximate pH of a sample. An indicator is usually some weak organic acid or base dye that changes colors at definite pH values. The weak acid form (HIn) will have one color and the weak acid negative ion (In-) will have a different color. The weak acid equilibrium is:
$HIn \rightleftharpoons H^+ + In^-$
• For phenolphthalein: pH 8.2 = colorless; pH 10 = red
• For bromophenol blue: pH 3 = yellow; pH 4.6 = blue
See the graphic below for colors and pH ranges.
Magic Pitcher Demonstration
Phenolphthalein is an indicator of acids (colorless) and bases (pink). Sodium hydroxide is a base, and it was in the pitcher at the beginning, so when added to the phenolphthalein in beakers 2 and 4, it turned pink (top half of the graphic).
Reaction
Equilibrium: $HIn \rightleftharpoons H^+ + In^-$
(HIn colorless, In- red)
The equilibrium shifts right, HIn decreases, and In- increases. As the pH increases from 8.2 to 10.0, the color becomes red because the equilibrium shifts to form mostly In- ions.
• The third beaker has only the NaOH but no phenolphthalein, so it remained colorless. The first beaker contains acetic acid and is skipped over at first.
• After pouring beakers 2, 3, and 4 back into the pitcher, the result is a pink solution.
• Bottom half of the graphic: when the pitcher is then poured back into beakers 2, 3, and 4, the solution is pink.
• In the first beaker, a strange thing happens in that the pink solution coming out of the pitcher now changes to colorless. This happens because the first beaker contains some vinegar or acetic acid which neutralizes the NaOH, and changes the solution from basic to acidic. Under acidic conditions, the phenolphthalein indicator is colorless.
Neutralization reaction:
$HC_2H_3O_2 + NaOH \rightarrow NaC_2H_3O_2 + H_2O$
Explain the color indicator change
Use equilibrium principles to explain the color change for phenolphthalein at the end of the demonstration.
Solution
The simplified reaction is: $H^+ + OH^- \rightarrow H_2O$
As OH- ions are added, they are consumed by the excess of acid already in the beaker, as expressed in the above equation. The hydroxide ions keep decreasing, the hydrogen ions increase, and the pH decreases.
See the lower equation: the indicator equilibrium shifts left and In- ions decrease. Below pH 8.2 the indicator is colorless. As H+ ions increase further and the pH decreases to 4-5, the indicator equilibrium is affected and shifts to the colorless HIn form.
Equilibrium: $HIn \rightleftharpoons H^+ + In^-$
(HIn colorless, In- red)
Molecular Basis for the Indicator Color Change
Color changes in molecules can be caused by changes in electron confinement. More confinement makes the light absorbed more blue, and less makes it more red.
How are electrons confined in phenolphthalein? There are three benzene rings in the molecule. Every atom involved in a double bond has a p orbital which can overlap side-to-side with similar atoms next to it. The overlap creates a 'pi bond' which allows the electrons in the p orbital to be found on either bonded atom. These electrons can spread like a cloud over any region of the molecule that is flat and has alternating double and single bonds. Each of the benzene rings is such a system.
See the far left graphic - The carbon atom at the center (adjacent to the yellow circled red oxygen atom) doesn't have a p-orbital available for pi-bonding, and it confines the pi electrons to the rings. The molecule absorbs in the ultraviolet, and this form of phenolphthalein is colorless.
In basic solution, the molecule loses one hydrogen ion. Almost instantly, the five-sided ring in the center opens and the electronic structure around the center carbon changes (yellow circled atoms) to a double bond which now does contain pi electrons. The pi electrons are no longer confined separately to the three benzene rings, but because of the change in geometry around the yellow circled atoms, the whole molecule is now flat and electrons are free to move within the entire molecule. The result of all of these changes is the change in color to pink.
Many other indicators behave on the molecular level in a similar fashion (the details may be different) but the result is a change in electronic structure along with the removal of a hydrogen ion from the molecule. Plant pigments in flowers and leaves also behave in this fashion.
pH indicators are weak acids that exist as natural dyes and indicate the concentration of H+ (\(H_3O^+\)) ions in a solution via color change. A pH value is determined from the negative logarithm of this concentration and is used to indicate the acidic, basic, or neutral character of the substance you are testing.
Introduction
pH indicators exist as liquid dyes and dye-infused paper strips. They are added to various solutions to determine the pH values of those solutions. Whereas the liquid form of pH indicators is usually added directly to solutions, the paper form is dipped into solutions and then removed for comparison against a color/pH key.
(A universal indicator color chart, not reproduced here, spans roughly pH 3-10, running from very acidic through acidic, neutral, and basic to very basic.)
See Figures 1 and 2 for the color range of a universal indicator.
The Implications of the Indicated pH via the Equation
Recall that the value of pH is related to the concentration of H+ (\(H_3O^+\)) of a substance. pH itself is approximated as the cologarithm, or negative logarithm, of the \(H^+\) ion concentration (Equation 3).
\[pH \approx -log[H_3O^+] \tag{3} \]
A pH of 7 indicates a neutral solution like water. A pH less than 7 indicates an acidic solution and a pH greater than 7 indicates a basic solution. Ultimately, the pH value indicates how much H+ has dissociated from molecules within a solution. The lower the pH value, the higher concentration of H+ ions in the solution and the stronger the acid. Likewise, the higher the pH value, the lower the concentration of H+ ions in the solution and the weaker the acid.
How the Color Change of the Indicator Happens
The color change of a pH indicator is caused by the dissociation of the H+ ion from the indicator itself. Recall that pH indicators are not only natural dyes but also weak acids. The dissociation of the weak acid indicator causes the solution to change color. The equation for the dissociation of the H+ ion of the pH indicator is shown below (Equation 4).
\[HIn + H_2O \rightleftharpoons H_3O^+ + In^- \tag{4} \]
with
• \(HIn\) is the acidic pH indicator and
• \(In^-\) is the conjugate base of the pH indicator
It is important here to note that Equation 4 describes an equilibrium, meaning Le Chatelier's principle applies to it. Thus, as the concentration of \(H_3O^+\) (H+) increases or decreases, the equilibrium shifts to the left or right accordingly. An increase in the \(HIn\) acid concentration causes the equilibrium to shift to the right (towards products), whereas an increase of the \(In^-\) base concentration causes the equilibrium to shift to the left (towards reactants).
pH Ranges of pH Indicators
pH indicators are specific to the range of pH values one wishes to observe. For example, common indicators such as phenolphthalein, methyl red, and bromothymol blue are used to indicate pH ranges of about 8 to 10, 4.5 to 6, and 6 to 7.5 respectively. On these ranges, phenolphthalein goes from colorless to pink, methyl red goes from red to yellow, and bromothymol blue goes from yellow to blue. For universal indicators, however, the pH range is much broader and the number of color changes is much greater. See figures 1 and 2 in the introduction for visual representations. Usually, universal pH indicators are in the paper strip form.
Graphing pH vs. the H+ (\(H_3O^+\)) Concentration
It is important to note that the pH scale is a logarithmic scale: a change of 1 pH unit corresponds to a tenfold change in \(H_3O^+\) concentration. For example, a solution with a pH of 3 will have an H+ (\(H_3O^+\)) concentration ten times greater than that of a solution with a pH of 4. As pH is the negative logarithm of the H+ (\(H_3O^+\)) concentration of a substance, the lower the pH value, the higher the concentration of H+ (\(H_3O^+\)) ions and the stronger the acid. Additionally, the higher the pH value, the lower the H+ (\(H_3O^+\)) concentration and the stronger the base.
Indicators in Nature
pH indicators can be used in a variety of ways, including measuring the pH of farm soil, shampoos, fruit juices, and bodies of water. Additionally, pH indicators can be found in nature, so therefore their presence in plants and flowers can indicate the pH of the soil from which they grow.
Hydrangeas
Nature contains several natural pH indicators as well: for example, some flower petals (especially Roses and Hydrangeas), certain fruits (cherries, strawberries) and leaves can change color if the pH of the soil changes. See figure 7.
Lemon juice
In the lemon juice experiment, the pH paper turns from blue to vivid red, indicating the presence of \(H_3O^+\) ions: lemon juice is acidic. Refer to the table of Universal Indicator Color change (figure 1 in the introduction) for clarification.
Cleaning Detergent
The household detergent contained a concentrated solution of sodium bicarbonate, commonly known as baking soda. As shown, the pH paper turns a dark blue: baking soda (in solution) is basic. Refer to the table of Universal Indicator Color change (figure 1 in the introduction) for clarification.
Here is a closer look of the pH papers before and after dipping them in the lemon juice and cleaning detergent (Figure 10):
Figure 10: pH papers before dipping (neutral) and after dipping in lemon juice (acidic) and in cleaning detergent (basic).
Cabbage Juices
Here is a simple demonstration that you could try in the lab or at home to get a better sense of how indicator paper works. Make sure to always wear safety glasses and gloves when performing an experiment!
Materials
• 1 cabbage
• cooking pot
• white paper coffee filters
• strainer
• water
• a bowl
Procedure
1. Peel the cabbage leaves and place them into the pot.
2. Add water into the pot, making sure the water covers the cabbage entirely.
3. Place the pot on the stove and allow to cook at medium heat for about 30 to 35 minutes.
4. Allow it to cool, then pour contents into the bowl using the strainer.
5. Soak your coffee filters in the cabbage juice for about 25 to 30 minutes.
6. Allow the filters to fully dry, then cut them into strips.
7. Now start your pH testing (starts out blue, changes to green [basic], and red [acidic]).
Practice Problems
1. A hair stylist walks into a store and wants to buy a shampoo with slightly acidic/neutral pH for her hair. She finds 5 brands that she really likes, but since she never took any introductory chemistry classes, she is unsure about which one to purchase. The first has a pH of 3.6, the second of 13, the third of 8.2, the fourth of 6.8 and the fifth of 9.7. Which one should she buy?
Answer: The brand that has a pH of 6.8 since it's under 7 (neutral) but very close to it, making it slightly acidic.
2. You decide to test the pH of your brand new swimming pool on your own. The instruction manual advises to keep it between 7.2-7.6. Shockingly, you realize it's set at 8.3! Horrified, you panic and are unsure whether you should add some basic or acidic chemicals in your pool (being mindful of the dose, of course. Those specific chemicals are included in the set, so no need to worry about which one you have to use and (eek!) if they are legal for public use). Which one should you add?
Answer: Since the goal is to lower the pH to its ideal value, we must add acidic solution to the pool.
3. Let's say the concentration of Hydronium ions in an aqueous solution is 0.033 mol/L. What is the corresponding pH of this solution, and based on your answer identify whether the solution is acidic, basic or neutral.
Answer: Using the formula \(pH \approx -log[H_3O+]\)
pH= -log[0.033]= 1.48 : The solution is highly acidic!
4. Now let's do the inverse: say you have a solution with a pH of 9.4. What is the \(H_3O^+\) ion concentration?
Answer: [\(H_3O^+\)] = 10-9.4 = 3.98E-10 mol/L. Seems too low to be true? Think again: if the pH is >7, the solution is basic, so the hydronium ion concentration will be low compared to the hydroxide (OH- ion) concentration.
5. A trickier one: 0.00026 moles of acetic acid are added to 2.5 L of water. What is the pH of the solution?
Answer: M=n/L : Macetic acid= 0.00026/2.5 =1.04E-4 mol/L
Treating the acid as fully dissociated gives pH = -log[1.04E-4] = 3.98; because acetic acid is weak (Ka = 1.8E-5), only part of it actually dissociates, and the true pH is closer to 4.5 (see the sketch below).
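A few lines of Python (illustrative) confirm problems 3-5, including the weak-acid correction for problem 5:

```python
import math

print(round(-math.log10(0.033), 2))    # problem 3: pH = 1.48

print(f"{10**-9.4:.2e}")               # problem 4: [H3O+] = 3.98e-10 M

C = 0.00026 / 2.5                      # problem 5: 1.04e-4 M acetic acid
print(round(-math.log10(C), 2))        # 3.98 if (incorrectly) treated as fully dissociated
Ka = 1.8e-5                            # treating it properly as a weak acid:
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
print(round(-math.log10(x), 2))        # ~4.45
```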
Outside Links
• www.krampf.com/experiments/Science_Experiment16.html
• There are many common household products and garden plants that can be used or made into pH indicators. For more information on these common household indicators visit http://chemistry.about.com/cs/acidsandbases/a/aa060703a.htm
• commons.wikimedia.org/wiki/Ca...ndicator_paper
The use of polar solvents and the presence of weak bonds involving hydrogen introduces an interesting and complex situation for describing matter whereby protons can be removed or added to molecules or allowed to "flow freely" in the solvent (e.g. water). The study of these phenomena requires the introduction of the acid concept, and also, the equilibration concept. Rarely are the two not intertwined.
Thumbnail: The picture above illustrates the electron density of hydronium. The red area represents oxygen; this is the area where the electrostatic potential is the highest and the electrons are most dense.
Acids and Bases in Aqueous Solutions
Salts, when placed in water, will often react with the water to produce H3O+ or OH-. This is known as a hydrolysis reaction. Based on how strong the ion acts as an acid or base, it will produce varying pH levels. When water and salts react, there are many possibilities due to the varying structures of salts. A salt can be made of either a weak acid and strong base, strong acid and weak base, a strong acid and strong base, or a weak acid and weak base. The reactants are composed of the salt and the water and the products side is composed of the conjugate base (from the acid of the reaction side) or the conjugate acid (from the base of the reaction side). In this section of chemistry, we discuss the pH values of salts based on several conditions.
When is a salt solution basic or acidic?
There are several guiding principles that summarize the outcome:
1. Salts that are from strong bases and strong acids do not hydrolyze. The pH will remain neutral at 7. Halide and alkali metal ions dissociate without affecting the H+: the cation does not alter the H+ concentration and the anion does not attract the H+ from water. This is why NaCl is a neutral salt. In general: salts containing halides (except F-) and an alkali or alkaline earth metal (except Be2+) will dissociate into spectator ions.
2. Salts that are from strong bases and weak acids do hydrolyze, which gives a pH greater than 7. The anion in the salt is derived from a weak acid, most likely organic, and will accept a proton from the water in the reaction. The water thus acts as an acid, in this case leaving behind a hydroxide ion (OH-). The cation comes from a strong base, meaning from either the alkali or alkaline earth metals, and, like before, it will dissociate into a spectator ion and not affect the H+.
3. Salts of weak bases and strong acids do hydrolyze, which gives it a pH less than 7. This is due to the fact that the anion will become a spectator ion and fail to attract the H+, while the cation from the weak base will donate a proton to the water forming a hydronium ion.
4. Salts from a weak base and weak acid also hydrolyze as the others do, but the situation is a bit more complex and requires the Ka and Kb to be taken into account. Whichever ion hydrolyzes to the greater extent (larger Ka or Kb) is the dominant factor in determining whether the solution is acidic or basic. The cation acts as the acid and the anion acts as the base; a hydronium ion or a hydroxide ion is formed in excess depending on which ion reacts more readily with the water.
Salts of Polyprotic Acids
Do not be intimidated by the salts of polyprotic acids. Yes, they're bigger and "badder" than most other salts, but they can be handled the exact same way as other salts, just with a bit more math. First of all, we know a few things:
• It's still just a salt. All of the rules from above still apply. Luckily, since the anion of such a salt is the conjugate base of a weak acid, the pH of a salt of a fully deprotonated polyprotic acid will always be greater than 7.
• The same way that polyprotic acids lose H+ stepwise, salts of polyprotic acids gain H+ in the same manner, but in reverse order of the polyprotic acid.
Take for example dissociation of $\ce{H2CO3}$, carbonic acid.
$\ce{H2CO3(aq) + H2O(l) <=> H3O^{+}(aq) + HCO^{-}3(aq)} \nonumber$
with $K_{a1} = 2.5 \times 10^{-4}$
$\ce{HCO^{-}3(aq) + H2O(l) <=> H3O^{+}(aq) + CO^{2-}3(aq)} \nonumber$
with $K_{a2} = 5.61 \times 10^{-11}$.
This means that when calculating the values for Kb of CO32-, the Kb of the first hydrolysis reaction will be $K_{b1} = \dfrac{K_w}{K_{a2}}$ since it will go in the reverse order.
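For instance (a short sketch using the constants quoted above):

```python
Kw = 1.0e-14
Ka1, Ka2 = 2.5e-4, 5.61e-11   # carbonic acid, as given above

# CO3^2- gains protons in reverse order of the acid's dissociation:
Kb1 = Kw / Ka2   # CO3^2- + H2O <=> HCO3- + OH-
Kb2 = Kw / Ka1   # HCO3- + H2O <=> H2CO3 + OH-
print(f"Kb1 = {Kb1:.2e}")   # 1.78e-04
print(f"Kb2 = {Kb2:.2e}")   # 4.00e-11
```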
Summary of Acid Base Properties of Salts
Type of solution   Cations                                               Anions                                          pH
Acidic             From weak bases: NH4+, Al3+, Fe3+                     From strong acids: Cl-, Br-, I-, NO3-, ClO4-    < 7
Basic              From strong bases: Group 1 and Group 2, but not Be2+  From weak acids: F-, NO2-, CN-, CH3COO-         > 7
Neutral            From strong bases: Group 1 and Group 2, but not Be2+  From strong acids: Cl-, Br-, I-, NO3-, ClO4-    = 7
Questions
1. Predict whether the pH of each of the following salts placed into water is acidic, basic, or neutral.
a. NaOCl(s)
b. KCN(s)
c. NH4NO3(s)
2. Find the pH of a solution of 0.200 M NH4NO3 ($K_b$ of NH3 = $1.8 \times 10^{-5}$).
3. Find the pH of a solution of 0.200 M Na3PO4 (for H3PO4: $K_{a1} = 7.25 \times 10^{-3}$, $K_{a2} = 6.31 \times 10^{-8}$, $K_{a3} = 3.98 \times 10^{-13}$).
Answers
1a. The ions present are Na+ and OCl- as shown by the following reaction:
$NaOCl _{(s)} \rightarrow Na^+_{(aq)} + OCl^-_{(aq)}$
While Na+ will not hydrolyze, OCl- will (remember that it is the conjugate base of HOCl). It acts as a base, accepting a proton from water.
$OCl^-_{(aq)} + H_2O_{(l)} \rightleftharpoons HOCl_{(aq)} + OH^-_{(aq)}$
Na+ is excluded from this reaction since it is a spectator ion.
Therefore, with the production of OH-, it will cause a basic solution and raise the pH above 7.
$pH>7$
1b. The KCN(s) will dissociate into K+(aq) and CN-(aq) by the following reaction:
$KCN_{(s)}\rightarrow K^+_{(aq)} + CN^-_{(aq)}$
K+ will not hydrolyze, but the CN- anion will attract an H+ away from the water:
$CN^-_{(aq)} + H_2O_{(l)}\rightleftharpoons HCN_{(aq)} + OH^-_{(aq)}$
Because this reaction produces OH-, the resulting solution will be basic and cause a pH>7.
$pH>7$
1c. The NH4NO3(s) will dissociate into NH4+ and NO3- by the following reaction:
$NH_4NO_{3(s)} \rightarrow NH^+_{4(aq)} + NO^-_{3(aq)}$
Now, NO3- won't attract an H+ because it is derived from a strong acid; its Kb is very small. However, NH4+ will donate a proton and act as an acid (NH4+ is the conjugate acid of NH3) by the following reaction:
$NH^+_{4(aq)} + H_2O_{(l)} \rightleftharpoons NH_{3(aq)} + H_3O^+_{(aq)}$
This reaction produces a hydronium ion, making the solution acidic, lowering the pH below 7.
$pH<7$
2. $NH^+_{4(aq)} + H_2O_{(l)} \rightleftharpoons NH_{3(aq)} + H_3O^+_{(aq)}$
$\dfrac{x^2}{0.2-x} = \dfrac{1*10^{-14}}{1.8 \times 10^{-5}}$
$x = 1.05 \times 10^{-5}\; M = [H_3O^+]$
$pH = 4.98$
3. $PO^{3-}_{4(aq)} + H_2O_{(l)} \rightleftharpoons HPO^{2-}_{4(aq)} + OH^-_{(aq)}$
The majority of the hydroxide ion will come from this first step. So only the first step will be completed here. To complete the other steps, follow the same manner of this calculation.
$\dfrac{x^2}{0.2-x}=\dfrac{1 \times 10^{-14}}{3.98 \times 10^{-13}}$
$x = 0.0594 = [OH^-]$
$pH = 12.77$
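Both numeric answers can be checked by solving the hydrolysis quadratic exactly (an illustrative sketch, constants as given in the problems):

```python
import math

Kw = 1e-14

def hydrolysis_x(K, C):
    """Positive root of x^2 / (C - x) = K."""
    return (-K + math.sqrt(K**2 + 4 * K * C)) / 2

# 0.200 M NH4NO3: NH4+ acts as the acid, Ka = Kw / Kb(NH3)
H = hydrolysis_x(Kw / 1.8e-5, 0.200)
print("NH4NO3 pH:", round(-math.log10(H), 2))        # 4.98

# 0.200 M Na3PO4: PO4^3- acts as the base, Kb1 = Kw / Ka3
OH = hydrolysis_x(Kw / 3.98e-13, 0.200)
print("Na3PO4 pH:", round(14 + math.log10(OH), 2))   # 12.77
```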
Practice Questions
1. Why does a salt containing a cation from a strong base and an anion from a weak acid form a basic solution?
2. Why does a salt containing a cation from a weak base and an anion from a strong acid form an acidic solution?
3. How do the Ka or Kb values help determine whether a weak acid or weak base will be the dominant driving force of a reaction?
Contributors and Attributions
• Christopher Wu (UCD), Christian Dowell (UCD), Nicole Hooper (UCD)
Owing to the overwhelming excess of $\ce{H2O}$ molecules in aqueous solutions, a bare hydrogen ion has no chance of surviving in water.
Free Hydrogen Ions do not Exist in Water
The hydrogen ion in aqueous solution is no more than a proton, a bare nucleus. Although it carries only a single unit of positive charge, this charge is concentrated into a volume of space that is only about a hundred-millionth as large as the volume occupied by the smallest atom. (Think of a pebble sitting in the middle of a sports stadium!) The resulting extraordinarily high charge density of the proton strongly attracts it to any part of a nearby atom or molecule in which there is an excess of negative charge. In the case of water, this will be the lone pair (unshared) electrons of the oxygen atom; the tiny proton will be buried within the lone pair and will form a shared-electron (coordinate) bond with it, creating a hydronium ion, $\ce{H3O^{+}}$. In a sense, $\ce{H2O}$ is acting as a base here, and the product $\ce{H3O^{+}}$ is the conjugate acid of water:
Although other kinds of dissolved ions have water molecules bound to them more or less tightly, the interaction between H+ and $\ce{H2O}$ is so strong that writing “$\ce{H^{+}(aq)}$” hardly does it justice, although it is formally correct. The formula $\ce{H3O^{+}}$ more adequately conveys the sense that it is both a molecule in its own right, and is also the conjugate acid of water.
The equation
$\ce{HA → H^{+} + A^{–}} \nonumber$
is so much easier to write that chemists still use it to represent acid-base reactions in contexts in which the proton donor-acceptor mechanism does not need to be emphasized. Thus, it is permissible to talk about “hydrogen ions” and use the formula $\ce{H^{+}}$ in writing chemical equations as long as you remember that they are not to be taken literally in the context of aqueous solutions.
Interestingly, experiments indicate that the proton does not stick to a single $\ce{H2O}$ molecule, but changes partners many times per second. This molecular promiscuity, a consequence of the uniquely small size and mass of the proton, allows it to move through the solution by rapidly hopping from one $\ce{H2O}$ molecule to the next, creating a new $\ce{H3O^{+}}$ ion as it goes. The overall effect is the same as if the $\ce{H3O^{+}}$ ion itself were moving. Similarly, a hydroxide ion, which can be considered to be a "proton hole" in the water, serves as a landing point for a proton from another $\ce{H2O}$ molecule, so that the OH- ion hops about in the same way.
Because hydronium and hydroxide ions can “move without actually moving” and thus without having to plow their way through the solution by shoving aside water molecules, as do other ions, solutions which are acidic or alkaline have extraordinarily high electrical conductivities.
Reaction
The hydronium ion is an important factor when dealing with chemical reactions that occur in aqueous solutions. Its concentration relative to hydroxide is a direct measure of the pH of a solution. It can be formed when an acid is present in water or simply in pure water. Its chemical formula is $\ce{H3O^{+}}$. It can also be formed by the combination of a H+ ion with an $\ce{H2O}$ molecule. The hydronium ion has a trigonal pyramidal geometry and is composed of three hydrogen atoms and one oxygen atom. There is a lone pair of electrons on the oxygen giving it this shape. The bond angle between the atoms is 113 degrees.
$H_2O_{(l)} \rightleftharpoons OH^-_{(aq)} + H^+_{(aq)}$
As H+ ions are formed, they bond with $\ce{H2O}$ molecules in the solution to form $\ce{H3O^{+}}$ (the hydronium ion). This is because hydrogen ions do not exist in aqueous solutions, but take the form of the hydronium ion, $\ce{H3O^{+}}$. A reversible reaction is one in which the reaction goes both ways: the water molecules dissociate while the OH- ions combine with the H+ ions to form water. Water has the ability to attract H+ ions because it is a polar molecule. This means that it carries partial charges; the oxygen end of the molecule bears a partial negative charge. The partial charge is caused by the fact that oxygen is more electronegative than hydrogen: in the bond between hydrogen and oxygen, oxygen "pulls" harder on the shared electrons, giving that end of the molecule a partial negative charge and causing it to be attracted to the positive charge of H+ to form hydronium.

Another way to describe why the water molecule is considered polar is through the concept of dipole moment. The electron geometry of water is tetrahedral and the molecular geometry is bent. This bent geometry is asymmetrical, which causes the molecule to be polar and have a dipole moment, resulting in a partial charge.
An overall reaction for the dissociation of water to form hydronium can be seen here:
$2 H_2O_{(l)} \rightleftharpoons OH^-_{(aq)} + H_3O^+_{(aq)}$
With Acids
Hydronium not only forms as a result of the dissociation of water, but also forms when water is in the presence of an acid. As the acid dissociates, the H+ ions bond with water molecules to form hydronium, as seen here when hydrochloric acid is in the presence of water:
$HCl (aq) + H_2O \rightarrow H_3O^+ (aq) + Cl^-(aq)$
pH
The pH of a solution depends on its hydronium concentration. In a sample of pure water, the hydronium concentration is $1 \times 10^{-7}$ moles per liter (0.0000001 M) at room temperature. The equation to find the pH of a solution using its hydronium concentration is:
$pH = -\log [H_3O^+]$
Using this equation, we find the pH of pure water to be 7. This is considered to be neutral on the pH scale. The pH can either go up or down depending on the change in hydronium concentration. If the hydronium concentration increases, the pH decreases, causing the solution to become more acidic. This happens when an acid is introduced. As H+ ions dissociate from the acid and bond with water, they form hydronium ions, thus increasing the hydronium concentration of the solution. If the hydronium concentration decreases, the pH increases, resulting in a solution that is less acidic and more basic. This is caused by the $OH^-$ ions that dissociate from bases. These ions bond with H+ ions from the dissociation of water to form $\ce{H2O}$ rather than hydronium ions.
A variation of the equation can be used to calculate the hydronium concentration when a pH is given to us:
$[H_3O^+] = 10^{-pH}$
When the pH of 7 is plugged into this equation, we get a concentration of 0.0000001 M as we should.
Configuration of Hydronium in Water
It is believed that, on average, every hydronium ion is attracted to six water molecules that are not attracted to any other hydronium ions. This topic is still currently under debate and no real answer has been found.
Problems
1. Determine the pH of a solution that has a hydronium concentration of 2.6x10-4M.
2. Determine the hydronium concentration of a solution that has a pH of 1.7.
3. If a solution has a hydronium concentration of 3.6x10-8M would this solution be basic or acidic?
4. What is the pH of a solution that has 12.2 grams of hydrochloric acid in 500 ml of water?
5. Why do acids cause burns?
Answers
1. Remembering the equation: $pH = -\log[H_3O^+]$
Plug in what is given: $pH = -\log[2.6 \times 10^{-4}\;M]$
When entered into a calculator: pH = 3.6
2. Remembering the equation: $[H_3O^+] = 10^{-pH}$
Plug in what is given: $[H_3O^+] = 10^{-1.7}$
When entered into a calculator: $1.995 \times 10^{-2}\; M$
3. Determine pH the same way we did in question one: pH = -log[3.6x10-8]
pH = 7.4
Because this pH is above 7 it is considered to be basic.
4. First write out the balanced equation of the reaction: $HCl_{(aq)} + H_2O_{(l)} \rightleftharpoons H_3O^+_{(aq)} + Cl^-_{(aq)}$
Notice that the amount of HCl is equal to the amount of $\ce{H3O^{+}}$ produced due to the fact that all of the stoichiometric coefficients are one.
So if we can figure out concentration of HCl we can figure out concentration of hydronium.
Notice that the amount of HCl given to us is provided in grams. This needs to be changed to moles in order to find concentration:
$12.2\;g\; HCl \times \dfrac{1 \;mol\; HCl}{36.457\; g} = 0.335\; mol\; HCl$
Concentration is defined as moles per liter, so we convert the 500 mL of water to liters and get 0.5 L.
$\dfrac{0.335\; mol\; HCl}{0.5\; L} = 0.67\;M$
Using this concentration we can obtain the pH: $pH = -\log(0.67)$
$pH = 0.17$
5. Acids cause burns because they dehydrate the cells they are exposed to. This is caused by the dissociation that occurs in acids where H+ ions are formed. These H+ ions bond with water in the cell and thus dehydrate them to cause cell damage and burns.
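The full chain in problem 4, grams to moles to molarity to pH, also fits in a few lines (an illustrative sketch):

```python
import math

# 12.2 g of HCl in 500 mL; HCl is strong, so [H3O+] equals its concentration
grams, molar_mass = 12.2, 36.46      # g, g/mol
conc = (grams / molar_mass) / 0.500  # ~0.67 mol/L
print(round(-math.log10(conc), 2))   # pH = 0.17
```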
Learning Objectives
• To define the pH scale as a measure of acidity of a solution
• Tell the origin and the logic of using the pH scale.
• Apply the same strategy for representing other types of quantities such as pKa, pKb, pKw.
Auto-Ionization of Water
Because of its amphoteric nature (i.e., acts as both an acid or a base), water does not always remain as $H_2O$ molecules. In fact, two water molecules react to form hydronium and hydroxide ions:
$\ce{ 2 H_2O (l) \rightleftharpoons H_3O^+ (aq) + OH^{−} (aq)} \label{1}$
This is also called the self-ionization of water. The concentrations of $H_3O^+$ and $OH^-$ are equal in pure water because of the 1:1 stoichiometric ratio of Equation $\ref{1}$. The molarities of H3O+ and OH- in water are both $1.0 \times 10^{-7} \,M$ at 25° C. Therefore, the constant of water ($K_w$) is defined to express the equilibrium condition for the self-ionization of water. The product of the molarities of the hydronium and hydroxide ions is always $1.0 \times 10^{-14}$ (at room temperature).
$K_w= [H_3O^+][OH^-] = 1.0 \times 10^{-14} \label{2}$
Equation $\ref{2}$ also applies to all aqueous solutions. However, $K_w$ does change at different temperatures, which affects the pH range discussed below.
$H^+$ and $H_3O^+$ are often used interchangeably to represent the hydrated proton, commonly called the hydronium ion.
Equation \ref{1} can also be written as
$H_2O \rightleftharpoons H^+ + OH^- \label{3}$
As expected for any equilibrium, the reaction can be shifted to the reactants or products:
• If an acid ($H^+$) is added to the water, the equilibrium shifts to the left and the $OH^-$ ion concentration decreases
• If base ($OH^-$) is added to water, the equilibrium shifts to the left and the $H^+$ concentration decreases.
pH and pOH
Because the constant of water, $K_w$, is $1.0 \times 10^{-14}$ (at 25° C), the $pK_w$ is 14; the constant of water thus determines the range of the pH scale. To understand what $pK_w$ is, it is important to understand first what the "p" means in pOH and pH. The "p" reflects the negative of the logarithm, $-\log$. Therefore, the pH is the negative logarithm of the molarity of H+, the pOH is the negative logarithm of the molarity of $\ce{OH^-}$, and the $pK_w$ is the negative logarithm of the constant of water:
\begin{align} pH &= -\log [H^+] \label{4a} \\[4pt] pOH &= -\log [OH^-] \label{4b} \\[4pt] pK_w &= -\log [K_w] \label{4c} \end{align}
At room temperature,
$K_w =1.0 \times 10^{-14} \label{4d}$
So
\begin{align} pK_w &=-\log [1.0 \times 10^{-14}] \label{4e} \\[4pt] &=14 \end{align}
Using the properties of logarithms, Equation $\ref{4e}$ can be rewritten as
$10^{-pK_w}=10^{-14}. \label{4f}$
The equation also shows that each increase of one unit on the scale corresponds to a tenfold decrease in the concentration of $\ce{H^{+}}$. Combining Equations \ref{4a} - \ref{4c} and \ref{4e} results in this important relationship:
$pK_w= pH + pOH = 14 \label{5b}$
Equation \ref{5b} is correct only at room temperature since changing the temperature will change $K_w$.
The pH scale is logarithmic, meaning that an increase or decrease of an integer value changes the concentration by a tenfold. For example, a pH of 3 is ten times more acidic than a pH of 4. Likewise, a pH of 3 is one hundred times more acidic than a pH of 5. Similarly a pH of 11 is ten times more basic than a pH of 10.
Properties of the pH Scale
From the simple definition of pH in Equation \ref{4a}, the following properties can be identified:
• This scale is convenient to use, because it converts some odd expressions such as $1.23 \times 10^{-4}$ into a single number of 3.91.
• This scale covers a very large range of $\ce{[H+]}$, from about 1 M down to $10^{-14}$ M. When $\ce{[H+]}$ is high, we usually do not use the pH value, but simply the $\ce{[H+]}$. For example, when $\mathrm{[H^+] = 1.0}$, pH = 0. We seldom say the pH is 0, which is why pH = 0 looks like such an odd expression. A pH = -0.30 is equivalent to a $\ce{[H+]}$ of 2.0 M. Negative pH values are only for academic exercises; using the concentrations directly conveys a better sense than the pH scale does.
• The pH scale expands the division between zero and 1 in a linear scale or a compact scale into a large scale for comparison purposes. In mathematics, you learned that there are infinite values between 0 and 1, or between 0 and 0.1, or between 0 and 0.01 or between 0 and any small value. Using a log scale certainly converts infinite small quantities into infinite large quantities.
• The non-linearity of the pH scale in terms of $\ce{[H+]}$ is easily illustrated by looking at the corresponding values for pH between 0.1 and 0.9 as follows:
pH 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
[H+] 1 0.79 0.63 0.50 0.40 0.32 0.25 0.20 0.16 0.13
• Because the negative log of $\ce{[H+]}$ is used in the pH scale, the pH scale usually has positive values. Furthermore, the larger the pH, the smaller the $\ce{[H+]}$.
The Effective Range of the pH Scale
It is common that the pH scale is argued to range from 0-14 or perhaps 1-14, but neither is correct. The pH range does not have an upper or a lower bound, since as defined above, the pH is an indication of the concentration of H+. For example, at a pH of zero the hydronium ion concentration is one molar, while at pH 14 the hydroxide ion concentration is one molar. Typically the concentrations of H+ in most aqueous solutions fall between 1 M (pH = 0) and $10^{-14}$ M (pH = 14). Hence a range of 0 to 14 provides sensible (but not absolute) "bookends" for the scale. One can go somewhat below zero and somewhat above 14 in water, because the concentrations of hydronium ions or hydroxide ions can exceed one molar. Figure $1$ depicts the pH scale with common solutions and where they are on the scale.
Quick Interpretation
• If pH > 7, the solution is basic. The pOH should be interpreted in terms of $\ce{OH^-}$ rather than $\ce{H^+}$: whenever pOH < 7, the solution is basic, meaning there are more $\ce{OH^-}$ than $\ce{H^+}$ ions in the solution.
• At pH 7, the solution is neutral: the concentrations of $\ce{H^+}$ and $\ce{OH^-}$ ions are equal.
• If pH < 7, the solution is acidic: there are more $\ce{H^+}$ than $\ce{OH^-}$ ions in the solution.
• The pH scale has neither an upper nor a lower bound.
Example $1$
If the concentration of $NaOH$ in a solution is $2.5 \times 10^{-4}\; M$, what is the concentration of $H_3O^+$?
Solution
We can assume room temperature, so
$1.0 \times 10^{-14} = [H_3O^+][OH^-] \nonumber$
To find the concentration of $\ce{H3O^+}$, solve for $[\ce{H3O^+}]$:
$\dfrac{1.0 \times 10^{-14}}{[OH^-]} = [H_3O^+]$
$\dfrac{1.0 \times 10^{-14}}{2.5 \times 10^{-4}} = [H_3O^+] = 4.0 \times 10^{-11}\; M$
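A quick numerical check of this result in Python:

```python
# Check of Example 1: [OH-] = 2.5e-4 M, Kw = 1.0e-14 at 25 C
Kw = 1.0e-14
OH = 2.5e-4
print(Kw / OH)   # 4.0e-11 M, as found above
```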
Example $2$
1. Find the pH of a solution of 0.002 M of HCl.
2. Find the pH of a solution of 0.00005 M NaOH.
Solution
1. The equation for pH is -log [H+]
$[H^+]= 2.0 \times 10^{-3}\; M \nonumber$
$pH = -\log [2.0 \times 10^{-3}] = 2.70 \nonumber$
2. The equation for pOH is $-\log [OH^-]$
$[OH^-]= 5.0 \times 10^{-5}\; M \nonumber$
$pOH = -\log [5.0 \times 10^{-5}] = 4.30 \nonumber$
$pK_w = pH + pOH \nonumber$
and
$pH = pK_w - pOH \nonumber$
then
$pH = 14 - 4.30 = 9.70 \nonumber$
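Both parts of Example 2 can be verified in a few lines of Python:

```python
import math

# (a) pH of 0.002 M HCl
print(-math.log10(2.0e-3))    # 2.70

# (b) pOH of 0.00005 M NaOH, then pH via pKw = 14
pOH = -math.log10(5.0e-5)     # 4.30
print(14 - pOH)               # 9.70
```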
Example $3$: Soil
If moist soil has a pH of 7.84, what is the H+ concentration of the soil solution?
Solution
$pH = -\log [H^+] \nonumber$
$7.84 = -\log [H^+] \nonumber$
$[H^+] = 1.45 \times 10^{-8} M \nonumber$
Hint
Place -7.84 in your calculator and take the antilog (often "inverse log" or $10^x$): $10^{-7.84} = 1.45 \times 10^{-8}\; M$
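The same antilog step in Python:

```python
# Check of Example 3: [H+] = 10^(-pH)
print(10**-7.84)   # about 1.45e-8 M
```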
Proper Definition of pH
The pH scale was originally introduced by the Danish biochemist S.P.L. Sørensen in 1909 using the symbol pH. The letter p is derived from the German word Potenz, meaning power or exponent of, in this case, 10. In his 1909 paper in Biochemische Zeitschrift, Sørensen discussed the effect of $\ce{H^+}$ ions on the activity of enzymes; in it he invented the term pH (purported to mean pondus hydrogenii in Latin) to describe this effect and defined it as $-\log[H^+]$. In 1924, Sørensen realized that the pH of a solution is a function of the "activity" of the $\ce{H^+}$ ion, not its concentration, and published a second paper on the subject. A better definition would be
$pH = -\log\,a\{\ce{H^{+}}\}$
where $a\{H^+\}$ denotes the activity (an effective concentration) of the H+ ions. The activity of an ion is a function of many variables of which concentration is one.
• Concentration is abbreviated by using square brackets, e.g., $[H_3O^+]$ is the concentration of hydronium ion in solution.
• Activity is abbreviated by using "a" with curly brackets, e.g., $a\{H_3O^+\}$ is the activity of hydronium ions in solution
Because of the difficulty in accurately measuring the activity of the $\ce{H^{+}}$ ion for most solutions, the International Union of Pure and Applied Chemistry (IUPAC) and the National Bureau of Standards (NBS) have defined pH as the reading on a pH meter that has been standardized against standard buffers. The following equation is used to calculate the pH of solutions:
\begin{align} pH &= \dfrac{F(E-E_{standard})}{RT\ln 10} + pH_{standard} \label{6a} \\[4pt] &= \dfrac{5039.879 (E-E_{standard})}{T} + pH_{standard} \label{6b} \end{align}
with
• $R$ the ideal gas constant,
• $F$ Faraday's constant,
• $T$ the absolute temperature (in K), and
• $E$ and $E_{standard}$ the potentials (in volts) measured with the electrode in the test solution and in the standard buffer, respectively.
The activity of the $\ce{H^+}$ ion is determined as accurately as possible for the standard solutions used. The identity of these solutions varies from one authority to another, but all give the same values of pH to within ±0.005 pH unit. The historical definition of pH is correct only for solutions so dilute and so pure that the $\ce{H^+}$ ions are influenced by nothing but the solvent molecules (usually water).
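A minimal Python sketch of Equation \ref{6b}. The readings `E`, `E_standard`, and `pH_standard` below are hypothetical values chosen only to illustrate the roughly 59 mV-per-pH-unit slope at 25 °C; they are not from the text.

```python
def pH_from_meter(E, E_standard, pH_standard, T):
    """Equation 6b: pH = 5039.879 * (E - E_standard) / T + pH_standard (T in kelvin)."""
    return 5039.879 * (E - E_standard) / T + pH_standard

# Hypothetical reading 59.2 mV below a pH 7.00 buffer at 298.15 K:
print(pH_from_meter(E=-0.0592, E_standard=0.0, pH_standard=7.00, T=298.15))  # ~6.00
```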
When measuring pH, $[\ce{H^+}]$ is in units of moles of $\ce{H^+}$ per liter of solution. This is a reasonably accurate definition at low concentrations (the dilute limit) of $\ce{H^+}$. At very high concentrations (10 M hydrochloric acid or sodium hydroxide, for example), a significant fraction of the ions are associated into neutral pairs such as $\ce{H^+Cl^-}$, reducing the concentration of "available" ions to a smaller value that we will call the effective concentration. It is the effective concentration of $\ce{H^+}$ and $\ce{OH^-}$ that determines the pH and pOH. The pH scale as described above is sometimes called the "concentration pH scale," as opposed to the "thermodynamic pH scale." The main difference between the two is that the thermodynamic scale is concerned not with the $\ce{H^+}$ concentration but with the $\ce{H^+}$ activity; what one actually measures in a solution is the activity, not the concentration. Thus it is the thermodynamic pH scale that describes real solutions, not the concentration scale.
For solutions in which ion concentrations do not exceed 0.1 M, the formulas pH = $-\log [\ce{H^+}]$ and pOH = $-\log[\ce{OH^-}]$ are generally reliable, but do not expect a 10.0 M solution of a strong acid to have a pH of exactly -1.00! This definition is only an approximation (albeit a very good one under most conditions) of the proper definition of pH, which depends on the activity of the hydrogen ion:
$pH= -\log a\{H^+\} \approx -\log [H^+] \label{7}$
The activity is a measure of the "effective concentration" of a substance and is often related to the true concentration via an activity coefficient, $\gamma$:
$a\{H^+\}=\gamma [H^+] \label{8}$
Calculating the activity coefficient requires detailed theories of how charged species interact in solution at high concentrations (e.g., the Debye-Hückel theory). In most solutions the pH differs from $-\log[\ce{H^+}]$ only in the first decimal place. The following table gives experimentally determined pH values for a series of HCl solutions of increasing concentration at 25 °C.
Table $1$: HCl Solutions with corresponding pH values. Data taken from Christopher G. McCarty and Ed Vitz, Journal of Chemical Education, 83(5), 752 (2006) and G.N. Lewis, M. Randall, K. Pitzer, D.F. Brewer, Thermodynamics (McGraw-Hill: New York, 1961; pp. 233-34).
| Molar Concentration of $HCl$ | pH defined as Concentration | Experimentally Determined pH | Relative Deviation |
|---|---|---|---|
| 0.00050 | 3.30 | 3.31 | 0.3% |
| 0.0100 | 2.00 | 2.04 | 1.9% |
| 0.100 | 1.00 | 1.10 | 9% |
| 0.40 | 0.39 | 0.52 | 25% |
| 7.6 | -0.88 | -1.85 | 52% |
While the pH scale formally measures the activity of hydrogen ions in a substance or solution, it is typically approximated using the concentration of hydrogen ions; this approximation is reliable only at low concentrations.
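As a rough illustration of the activity correction, the sketch below applies the Debye-Hückel limiting law, $\log_{10}\gamma = -0.509\, z^2 \sqrt{I}$ at 25 °C (a standard result not derived in this text, trustworthy only at low ionic strength). For 0.0100 M HCl it reproduces the experimental pH in Table $1$ quite closely:

```python
import math

def pH_with_activity(c_HCl):
    # Ionic strength of a 1:1 strong electrolyte equals its concentration;
    # Debye-Huckel limiting law for z = 1 at 25 C (low-I approximation only).
    I = c_HCl
    gamma = 10**(-0.509 * math.sqrt(I))
    return -math.log10(gamma * c_HCl)

print(pH_with_activity(0.0100))   # ~2.05 (experimental value in Table 1: 2.04)
print(-math.log10(0.0100))        # 2.00 from concentration alone
```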
Living Systems
Molecules that make up or are produced by living organisms usually function within a narrow pH range (near neutral) and a narrow temperature range (body temperature). Many biological solutions, such as blood, have a pH near neutral. pH influences the structure and the function of many enzymes (protein catalysts) in living systems. Many of these enzymes have narrow ranges of pH activity. Cellular pH is so important that death may occur within hours if a person becomes acidotic (having increased acidity in the blood). As one can see pH is critical to life, biochemistry, and important chemical reactions. Common examples of how pH plays a very important role in our daily lives are given below:
• The water in a swimming pool is maintained by checking its pH; chemicals can be added if the water becomes too acidic or too basic.
• Whenever we get heartburn, excess acid builds up in the stomach and causes pain, and we take antacid tablets (a base) to neutralize it.
• The pH of blood is slightly basic, and a fluctuation in blood pH can cause serious harm to vital organs in the body.
• Certain diseases are diagnosed by checking the pH of blood and urine.
• Certain crops thrive better in a certain pH range.
• Enzymes in the body are active only within a certain pH range.
Table $2$: pH in Living Systems
| Compartment | pH |
|---|---|
| Gastric Acid | 1 |
| Lysosomes | 4.5 |
| Granules of Chromaffin Cells | 5.5 |
| Human Skin | 5.5 |
| Urine | 6 |
| Neutral $H_2O$ at 37 °C | 6.81 |
| Cytosol | 7.2 |
| Cerebrospinal Fluid | 7.3 |
| Blood | 7.43-7.45 |
| Mitochondrial Matrix | 7.5 |
| Pancreas Secretions | 8.1 |
Problems
1. In a solution of $2.4 \times 10^{-3} M$ of HI, find the concentration of $OH^-$.
2. Determine the pH of a solution that is 0.0035 M HCl.
3. Determine the [H3O+] of a solution with a pH = 5.65
4. If the pOH of an ammonia ($\ce{NH3}$) solution in water is 4.74, what is the pH?
5. Pepsin, a digestive enzyme in our stomach, functions at a pH of about 1.5. Find the concentration of $\ce{OH^-}$ at this pH.
Solutions
1. We use the dissociation of water equation to find $[\ce{OH^-}]$:
$K_w = [\ce{H3O^+}][\ce{OH^-}] = 1.0 \times 10^{-14}$
Solve for $[\ce{OH^-}]$ and plug in the molarity of the HI (a strong acid, so $[\ce{H3O^+}] = 2.4 \times 10^{-3}\; M$):
$[\ce{OH^-}] = \dfrac{1.0 \times 10^{-14}}{2.4 \times 10^{-3}} = 4.17 \times 10^{-12}\; M$
2. $pH = -\log[\ce{H3O^+}]$. Plug in the molarity of the HCl and solve:
$pH = -\log(0.0035) = 2.46$
3. $pH = -\log[\ce{H3O^+}]$. Plug in the pH and solve for $[\ce{H3O^+}]$:
$5.65 = -\log[\ce{H3O^+}]$
Move the negative sign to the pH, so $\log[\ce{H3O^+}] = -5.65$ and
$[\ce{H3O^+}] = 10^{-5.65} = 2.24 \times 10^{-6}\; M$
4. $pH + pOH = 14$. Solve for pH:
$pH = 14 - pOH = 14 - 4.74 = 9.26$
5. There are several ways to do this problem.
Answer 1.
$pH + pOH = 14$, so $pOH = 14 - 1.5 = 12.5$. With the pOH in hand, solve for the concentration:
$12.5 = -\log[\ce{OH^-}]$
$[\ce{OH^-}] = 10^{-12.5} = 3.16 \times 10^{-13}\; M$
Answer 2.
$pH = -\log[\ce{H^+}]$. Plug in the pH and solve for the molarity of $\ce{H^+}$:
$1.5 = -\log[\ce{H^+}]$
$[\ce{H^+}] = 10^{-1.5} = 0.032\; M$
Use the concentration of $\ce{H^+}$ to solve for the concentration of $\ce{OH^-}$ via $[\ce{H^+}][\ce{OH^-}] = 1.0 \times 10^{-14}$:
$[\ce{OH^-}] = \dfrac{1.0 \times 10^{-14}}{0.032} = 3.1 \times 10^{-13}\; M$
which agrees with Answer 1 to within the rounding of $[\ce{H^+}]$.
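All five answers can be checked with a short Python script:

```python
import math

Kw = 1.0e-14
print(Kw / 2.4e-3)           # 1. [OH-] in 2.4e-3 M HI: 4.17e-12 M
print(-math.log10(0.0035))   # 2. pH of 0.0035 M HCl: 2.46
print(10**-5.65)             # 3. [H3O+] at pH 5.65: 2.24e-6 M
print(14 - 4.74)             # 4. pH when pOH = 4.74: 9.26
print(Kw / 10**-1.5)         # 5. [OH-] at pH 1.5: 3.16e-13 M
```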
The pH Scale
The pH of an aqueous solution is a measure of how acidic or basic it is. It can be determined and calculated from the hydronium ion concentration of the solution.
Introduction
The pH of an aqueous solution is based on the pH scale, which typically ranges from 0 to 14 in water (although, as discussed above, this is not a formal rule). A pH of 7 is considered neutral, a pH of less than 7 acidic, and a pH of greater than 7 basic. Acidic solutions have high hydronium and low hydroxide concentrations; basic solutions have high hydroxide and low hydronium concentrations.
Self-Ionization of Water
In the self-ionization of water, the amphiprotic ability of water to act as a proton donor and acceptor allows the formation of hydronium ($H_3O^+$) and hydroxide ($OH^-$) ions. In pure water, the concentration of hydronium ions equals that of hydroxide ions. At 25 °C, the concentrations of both hydronium and hydroxide ions equal $1.0 \times 10^{-7}\; M$. The ion product of water, $K_w$, is the equilibrium constant for the self-ionization of water and is expressed as follows:
$K_w = [H_{3}O^+][OH^-] = 1.0 \times 10^{-14}$.
• pH: The term pH refers to the "potential of hydrogen ion." It was proposed by the Danish biochemist Søren Sørensen in 1909 as a more convenient way to describe hydronium and hydroxide ion concentrations in aqueous solutions, since both tend to be extremely small. Sørensen defined pH as the negative of the logarithm of the hydrogen ion concentration. In terms of hydronium ion concentration, the equation to determine the pH of an aqueous solution is: $pH = -\log[H_{3}O^+]$
• pOH: The pOH of an aqueous solution, which is related to the pH, can be determined by the following equation: $pOH = -\log[OH^-]$ This equation uses the hydroxide concentration of an aqueous solution instead of the hydronium concentration.
Relating pH and pOH
Another equation relates the hydronium and hydroxide concentrations. It is derived from the equilibrium condition for the self-ionization of water, $K_w$, and ties pH, pOH, and $pK_w$ together, so that any one of them can be found if the other two are known. It is obtained by taking the negative logarithm of the $K_w$ expression for the self-ionization of water at room temperature:
$K_w = [H_{3}O^+][OH^-] = 1.0 \times 10^{-14}$
$pK_w = pH + pOH = 14$.
Strong Acids and Strong Bases
The ionization of strong acids and strong bases in dilute aqueous solutions essentially goes to completion. In aqueous solutions of strong acids and strong bases, the self-ionization of water occurs only to a small extent and is therefore an insignificant source of hydronium and hydroxide ions. Knowing this, we can say that in calculating the hydronium concentration in an aqueous solution of a strong acid, the strong acid is the main source of hydronium ions, and likewise that in an aqueous solution of a strong base, the strong base is the main source of hydroxide ions. This holds unless the solutions are extremely dilute.
Weak Acids and Weak Bases
Weak acids only partially dissociate in aqueous solutions and reach a condition of equilibrium, therefore how much they dissociate is given by the equilibrium equation for that acid in solution:
$K_a = \dfrac{[H_{3}O^+][A^-]}{[HA]}$
with
• $[H_{3}O^+]$ is the hydronium ion concentration,
• $[A^-]$ is the conjugate base concentration, and
• $[HA]$ is the weak acid concentration.
Weak bases also only partially dissociate in aqueous solutions and reach a condition of equilibrium. The equation for the partial dissociation of a base is the equilibrium equation for that base in solution:
$K_b = \dfrac{[OH^-][B^+]}{[B]}$
with
• $[OH^-]$ is the hydroxide ion concentration,
• $[B^+]$ is the concentration of the ionized form of the base, and
• $[B]$ is the weak base concentration.
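For a concrete route from the $K_a$ or $K_b$ expression to a pH, consider the standard ICE-table setup (not spelled out in the text above, but consistent with it): for a weak acid $HA$ of initial concentration $C_0$, let $x = [H_{3}O^+] = [A^-]$ at equilibrium, so $[HA] = C_0 - x$ and

$$K_a = \frac{x^2}{C_0 - x} \quad\Longrightarrow\quad x = \frac{-K_a + \sqrt{K_a^2 + 4K_aC_0}}{2},$$

with $pH = -\log x$. The same algebra applies to $K_b$, giving $x = [OH^-]$ and hence the pOH.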
Problems
1. A solution is 0.055 M HBr. What is the pH of this solution?
2. A solution is 0.00025 M HCl. What is the pH AND pOH of this solution?
3. A solution is 0.0035 M LiOH. What is the pOH of this solution? pH?
4. A solution contains 0.0045 M hydrofluoric acid. What is the pH of this solution? For hydrofluoric acid, $K_a = 6.6 \times 10^{-4}$.
5. A solution contains 0.0085 M ammonia. What is the pH of this solution? For ammonia: $K_b = 1.8 \times 10^{-5}$.
Answers
1. Use the pH equation which is: $pH = -\log[H_{3}O^+]$.
0.055 M HBr; HBr is a strong acid, so
$[H_3O^+] = 5.5 \times 10^{-2}\; M$
$pH = -\log(5.5 \times 10^{-2}) = 1.26$
2. Use the pH equation $pH = -\log[H_{3}O^+]$ and pKw equation $pK_w = pH + pOH = 14$.
0.00025 M HCl; HCl is a strong acid, so
$[H_3O^+] = 2.5 \times 10^{-4}\; M$
$pH = -\log(2.5 \times 10^{-4}) = 3.60$
Then solve for the pOH:
$pOH = 14 - pH = 14 - 3.60 = 10.40$
3. Use the pOH equation $pOH = -\log[OH^-]$ and the pKw equation $pK_w = pH + pOH = 14$.
0.0035 M LiOH; LiOH is a strong base, so
$[OH^-] = 3.5 \times 10^{-3}\; M$
$pOH = -\log(3.5 \times 10^{-3}) = 2.46$
Now solve for pH:
$pH = 14 - pOH = 14 - 2.46 = 11.54$
4. 0.0045 M hydrofluoric acid; hydrofluoric acid is a weak acid.
Use the $K_a$ equation $K_a = \dfrac{[H_{3}O^+][A^-]}{[HA]}$ and an ICE table, as in the sketch after problem 5.
5. 0.0085 M ammonia; ammonia is a weak base.
Use the $K_b$ equation $K_b = \dfrac{[OH^-][B^+]}{[B]}$ and an ICE table, as in the sketch below.
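A minimal Python sketch of the ICE-table calculation for problems 4 and 5, using the exact quadratic solution of $K = x^2/(C_0 - x)$ derived earlier (the function name `ionization` is ours):

```python
import math

def ionization(K, C0):
    # Positive root of x^2 + K*x - K*C0 = 0
    return (-K + math.sqrt(K**2 + 4 * K * C0)) / 2

# 4. 0.0045 M HF, Ka = 6.6e-4  ->  x = [H3O+]
x = ionization(6.6e-4, 0.0045)
print(-math.log10(x))            # pH ~ 2.85

# 5. 0.0085 M NH3, Kb = 1.8e-5  ->  x = [OH-]
x = ionization(1.8e-5, 0.0085)
print(14 - (-math.log10(x)))     # pH ~ 10.58
```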
Contributors and Attributions
• Michael Thai (UCD)
Temperature Dependence of pH in Solutions
Typically, the pH of solutions will change as temperature changes. The reasons why depend on the context, but even a simple solution of a weak acid (HA) will exhibit a (weak) temperature dependence.
Contributors and Attributions
• www.newton.dep.anl.gov/askasc.../chem00920.htm
• Vince Calder
• Prof. Topper
Temperature Dependence of the pH of pure Water
The formation of hydrogen ions (hydroxonium ions) and hydroxide ions from water is an endothermic process. Using the simpler version of the equilibrium:
\[ H_2O_{(l)} \rightleftharpoons H^+_{(aq)} + OH^-_{(aq)}\]
That is, the forward reaction, as written, absorbs heat.
According to Le Chatelier's Principle, if you make a change to the conditions of a reaction in dynamic equilibrium, the position of equilibrium moves to counter the change you have made. Hence, if you increase the temperature of the water, the equilibrium will move to lower the temperature again. It will do that by absorbing the extra heat. That means that the forward reaction will be favored, and more hydrogen ions and hydroxide ions will be formed. The effect of that is to increase the value of \(K_w\) as temperature increases.
The table below shows the effect of temperature on \(K_w\). For each value of \(K_w\), a new pH has been calculated. It might be useful if you were to check these pH values yourself.
| T (°C) | $K_w$ (mol$^2$ dm$^{-6}$) | pH | pOH |
|---|---|---|---|
| 0 | $0.114 \times 10^{-14}$ | 7.47 | 7.47 |
| 10 | $0.293 \times 10^{-14}$ | 7.27 | 7.27 |
| 20 | $0.681 \times 10^{-14}$ | 7.08 | 7.08 |
| 25 | $1.008 \times 10^{-14}$ | 7.00 | 7.00 |
| 30 | $1.471 \times 10^{-14}$ | 6.92 | 6.92 |
| 40 | $2.916 \times 10^{-14}$ | 6.77 | 6.77 |
| 50 | $5.476 \times 10^{-14}$ | 6.63 | 6.63 |
| 100 | $51.3 \times 10^{-14}$ | 6.14 | 6.14 |
You can see that the pH of pure water decreases as the temperature increases. Similarly, the pOH also decreases.
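Checking these pH values yourself takes one line per row: in pure water $[\ce{H^+}] = [\ce{OH^-}] = \sqrt{K_w}$, so $pH = -\log_{10}\sqrt{K_w}$. A short Python sketch using the $K_w$ values from the table:

```python
import math

# pH of pure water at each temperature: pH = -log10(sqrt(Kw))
for T, Kw in [(0, 0.114e-14), (25, 1.008e-14), (50, 5.476e-14), (100, 51.3e-14)]:
    print(T, round(-math.log10(math.sqrt(Kw)), 2))
# 0 7.47   25 7.0   50 6.63   100 6.14 -- matching the table
```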
A word of warning!
If the pH falls as temperature increases, this does not mean that water becomes more acidic at higher temperatures. A solution is acidic if there is an excess of hydrogen ions over hydroxide ions (i.e., pH < pOH). In the case of pure water, the concentrations of hydrogen ions and hydroxide ions are always equal, and hence the water is still neutral (pH = pOH), even if its pH changes.
The problem is that we are all so familiar with 7 being the pH of pure water that anything else feels strange. Remember that the neutral value of pH is calculated from \(K_w\); if \(K_w\) changes, then the neutral value of pH changes as well. At 100 °C, the pH of pure water is 6.14, which is "neutral" on the pH scale at this higher temperature. A solution with a pH of 7 at this temperature is slightly alkaline, because its pH is a bit higher than the neutral value of 6.14.
Similarly, you can argue that a solution with a pH of 7 at 0°C is slightly acidic, because its pH is a bit lower than the neutral value of 7.47 at this temperature. Hence, there is an excess of \(H^+\) ions vs. \(OH^-\) ions.